diff --git a/.circleci/config.yml b/.circleci/config.yml
index 39e815138bbb..26256450302c 100644
--- a/.circleci/config.yml
+++ b/.circleci/config.yml
@@ -14,7 +14,7 @@ jobs:
shellcheck -x test/repo-sync.sh
lint-charts:
docker:
- - image: gcr.io/kubernetes-charts-ci/test-image:v3.2.0
+ - image: gcr.io/kubernetes-charts-ci/test-image:v3.3.2
steps:
- checkout
- run:
diff --git a/.github/ISSUE_TEMPLATE.md b/.github/ISSUE_TEMPLATE.md
deleted file mode 100644
index f5910ca76575..000000000000
--- a/.github/ISSUE_TEMPLATE.md
+++ /dev/null
@@ -1,38 +0,0 @@
-
-
-**Is this a request for help?**:
-
----
-
-**Is this a BUG REPORT or FEATURE REQUEST?** (choose one):
-
-
-
-**Version of Helm and Kubernetes**:
-
-
-**Which chart**:
-
-
-**What happened**:
-
-
-**What you expected to happen**:
-
-
-**How to reproduce it** (as minimally and precisely as possible):
-
-
-**Anything else we need to know**:
diff --git a/.github/ISSUE_TEMPLATE/bug_report.md b/.github/ISSUE_TEMPLATE/bug_report.md
new file mode 100644
index 000000000000..c784b1dafffa
--- /dev/null
+++ b/.github/ISSUE_TEMPLATE/bug_report.md
@@ -0,0 +1,36 @@
+---
+name: Bug report
+about: Create a report to help us improve
+title: '[name of the chart e.g. stable/chart] issue title'
+labels: ''
+assignees: ''
+
+---
+
+
+
+**Describe the bug**
+A clear and concise description of what the bug is.
+
+**Version of Helm and Kubernetes**:
+
+
+**Which chart**:
+
+
+**What happened**:
+
+
+**What you expected to happen**:
+
+
+**How to reproduce it** (as minimally and precisely as possible):
+
+
+**Anything else we need to know**:
+
diff --git a/.github/ISSUE_TEMPLATE/feature_request.md b/.github/ISSUE_TEMPLATE/feature_request.md
new file mode 100644
index 000000000000..4816cc513f72
--- /dev/null
+++ b/.github/ISSUE_TEMPLATE/feature_request.md
@@ -0,0 +1,27 @@
+---
+name: Feature request
+about: Suggest an idea for this project
+title: '[name of the chart e.g. stable/chart] issue title'
+labels: ''
+assignees: ''
+
+---
+
+
+
+**Is your feature request related to a problem? Please describe.**
+A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
+
+**Describe the solution you'd like**
+A clear and concise description of what you want to happen.
+
+**Describe alternatives you've considered**
+A clear and concise description of any alternative solutions or features you've considered.
+
+**Additional context**
+Add any other context or screenshots about the feature request here.
diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md
index a728453a0a61..44e3cfb96fca 100644
--- a/.github/PULL_REQUEST_TEMPLATE.md
+++ b/.github/PULL_REQUEST_TEMPLATE.md
@@ -35,6 +35,7 @@ even continue reviewing your changes.
#### Checklist
[Place an '[x]' (no spaces) in all applicable fields. Please remove unrelated fields.]
-- [ ] [DCO](https://www.helm.sh/blog/helm-dco/index.html) signed
+- [ ] [DCO](https://github.com/helm/charts/blob/master/CONTRIBUTING.md#sign-your-work) signed
- [ ] Chart Version bumped
- [ ] Variables are documented in the README.md
+- [ ] Title of the PR starts with chart name (e.g. `[stable/chart]`)
diff --git a/OWNERS b/OWNERS
index c50530e72d44..546506debf20 100644
--- a/OWNERS
+++ b/OWNERS
@@ -3,14 +3,17 @@ approvers:
- prydonius
- sameersbn
- viglesiasce
- - foxish
- unguiculus
- scottrigby
- mattfarina
- davidkarlsen
- paulczar
- cpanato
+ - jlegrone
+ - maorfr
emeritus:
+ - foxish
- linki
- mgoodness
- - seanknox
\ No newline at end of file
+ - seanknox
+
diff --git a/REVIEW_GUIDELINES.md b/REVIEW_GUIDELINES.md
index 880cbf546a93..979578a29da5 100644
--- a/REVIEW_GUIDELINES.md
+++ b/REVIEW_GUIDELINES.md
@@ -16,6 +16,17 @@ Note, if a reviewer who is not an approver in an OWNERS file leaves a comment of
Chart releases must be immutable. Any change to a chart warrants a chart version bump even if it is only changes to the documentation.
+## Versioning
+
+The chart `version` should follow [semver](https://semver.org/).
+
+Stable charts should start at `1.0.0` (for maintainability, don't create new PRs for stable charts solely to meet this criterion, but when reviewing PRs take the opportunity to ensure that it is met).
+
+Any breaking (backwards incompatible) changes to a chart should:
+
+1. Bump the MAJOR version
+2. In the README, under a section called "Upgrading", describe the manual steps necessary to upgrade to the new (specified) MAJOR version
+
## Chart Metadata
The `Chart.yaml` should be as complete as possible. The following fields are mandatory:
@@ -338,3 +349,13 @@ While reviewing Charts that contain workloads such as [Deployments](https://kube
10. As much as possible complex pre-app setups are configured using [init containers](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/).
More [configuration](https://kubernetes.io/docs/concepts/configuration/overview/) best practices.
+
+
+## Tests
+
+This repository follows a [test procedure](https://github.com/helm/charts/blob/master/test/README.md). This allows the charts in this repository to be tested against several rules (linting, semver checking, deployment testing, etc.) for every Pull Request.
+
+The `ci` directory of a given Chart allows testing different use cases: you can define multiple sets of values overriding `values.yaml`, one file per set. See the [documentation](https://github.com/helm/charts/blob/master/test/README.md#providing-custom-test-values) for more information.
+
+This directory MUST exist with at least one test file in it.
+
diff --git a/incubator/aws-alb-ingress-controller/Chart.yaml b/incubator/aws-alb-ingress-controller/Chart.yaml
index 01aec32ddd59..ac100a6bf461 100644
--- a/incubator/aws-alb-ingress-controller/Chart.yaml
+++ b/incubator/aws-alb-ingress-controller/Chart.yaml
@@ -1,8 +1,8 @@
apiVersion: v1
name: aws-alb-ingress-controller
description: A Helm chart for AWS ALB Ingress Controller
-version: 0.1.4
-appVersion: "v1.0.1"
+version: 0.1.8
+appVersion: "v1.1.2"
engine: gotpl
home: https://github.com/kubernetes-sigs/aws-alb-ingress-controller
sources:
diff --git a/incubator/aws-alb-ingress-controller/OWNERS b/incubator/aws-alb-ingress-controller/OWNERS
new file mode 100644
index 000000000000..5f792ea9d6f6
--- /dev/null
+++ b/incubator/aws-alb-ingress-controller/OWNERS
@@ -0,0 +1,6 @@
+approvers:
+- bigkraig
+- M00nF1sh
+reviewers:
+- bigkraig
+- M00nF1sh
diff --git a/incubator/aws-alb-ingress-controller/README.md b/incubator/aws-alb-ingress-controller/README.md
index 831753d6b0b2..bbcab95ce77f 100644
--- a/incubator/aws-alb-ingress-controller/README.md
+++ b/incubator/aws-alb-ingress-controller/README.md
@@ -56,9 +56,11 @@ The following tables lists the configurable parameters of the alb-ingress-contro
| `image.repository` | controller container image repository | `894847497797.dkr.ecr.us-west-2.amazonaws.com/aws-alb-ingress-controller` |
| `image.tag` | controller container image tag | `v1.0.1` |
| `image.pullPolicy` | controller container image pull policy | `IfNotPresent` |
-| `enableReadinessProbe` | enable readinessProbe on controller pod |`false` |
+| `enableReadinessProbe` | enable readinessProbe on controller pod | `false` |
| `enableLivenessProbe` | enable livenessProbe on controller pod | `false` |
| `extraEnv` | map of environment variables to be injected into the controller pod | `{}` |
+| `volumeMounts` | volumeMounts for the controller pod | `[]` |
+| `volumes` | volumes for the controller pod | `[]` |
| `nodeSelector` | node labels for controller pod assignment | `{}` |
| `tolerations` | controller pod toleration for taints | `{}` |
| `podAnnotations` | annotations to be added to controller pod | `{}` |
@@ -71,7 +73,7 @@ The following tables lists the configurable parameters of the alb-ingress-contro
| `scope.watchNamespace` | If scope.singleNamespace=true, the ALB ingress controller will only act on Ingress resources in this namespace | `""` (namespace of the ALB ingress controller) |
```bash
-helm install incubator/aws-alb-ingress-controller --set clusterName=MyClusterName --set autoDiscoverAwsRegion=true --set autoDiscoverAwsVpcID=true --name my-release --namespace kube-system
+helm install incubator/aws-alb-ingress-controller --set clusterName=MyClusterName --set autoDiscoverAwsRegion=true --set autoDiscoverAwsVpcID=true --name my-release --namespace kube-system
```
Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,
@@ -82,4 +84,4 @@ helm install incubator/aws-alb-ingress-controller --name my-release -f values.ya
> **Tip**: You can use the default [values.yaml](values.yaml)
-> **Tip**: If you use `aws-alb-ingress-controller` as releaseName, the generated pod name will be shorter.(e.g. `aws-alb-ingress-controller-66cc9fb67c-7mg4w` instead of `my-release-aws-alb-ingress-controller-66cc9fb67c-7mg4w`)
\ No newline at end of file
+> **Tip**: If you use `aws-alb-ingress-controller` as releaseName, the generated pod name will be shorter (e.g. `aws-alb-ingress-controller-66cc9fb67c-7mg4w` instead of `my-release-aws-alb-ingress-controller-66cc9fb67c-7mg4w`).
diff --git a/incubator/aws-alb-ingress-controller/templates/deployment.yaml b/incubator/aws-alb-ingress-controller/templates/deployment.yaml
index 212ac6d31f9b..7832c414bca2 100644
--- a/incubator/aws-alb-ingress-controller/templates/deployment.yaml
+++ b/incubator/aws-alb-ingress-controller/templates/deployment.yaml
@@ -77,6 +77,10 @@ spec:
{{- end }}
resources:
{{ toYaml .Values.resources | indent 12 }}
+ {{- with .Values.volumeMounts }}
+ volumeMounts:
+{{ toYaml . | indent 12 }}
+ {{- end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
@@ -87,6 +91,10 @@ spec:
{{- end }}
{{- with .Values.tolerations }}
tolerations:
+{{ toYaml . | indent 8 }}
+ {{- end }}
+ {{- with .Values.volumes }}
+ volumes:
{{ toYaml . | indent 8 }}
{{- end }}
serviceAccountName: {{ if .Values.rbac.create }}{{ include "aws-alb-ingress-controller.fullname" . }}{{ else }}"{{ .Values.rbac.serviceAccountName }}"{{ end }}
diff --git a/incubator/aws-alb-ingress-controller/values.yaml b/incubator/aws-alb-ingress-controller/values.yaml
index ac3c4e1aed73..46108af472a7 100644
--- a/incubator/aws-alb-ingress-controller/values.yaml
+++ b/incubator/aws-alb-ingress-controller/values.yaml
@@ -22,7 +22,7 @@ autoDiscoverAwsVpcID: false
scope:
## If provided, the ALB ingress controller will only act on Ingress resources annotated with this class
- ## Ref: https://github.com/kubernetes-sigs/aws-alb-ingress-controller/blob/master/docs/configuration.md#limiting-ingress-class
+ ## Ref: https://github.com/kubernetes-sigs/aws-alb-ingress-controller/blob/master/docs/guide/controller/config.md#limiting-ingress-class
ingressClass: alb
## If true, the ALB ingress controller will only act on Ingress resources in a single namespace
@@ -30,7 +30,7 @@ scope:
singleNamespace: false
## If scope.singleNamespace=true, the ALB ingress controller will only act on Ingress resources in this namespace
- ## Ref: https://github.com/kubernetes-sigs/aws-alb-ingress-controller/blob/master/docs/configuration.md#limiting-namespaces
+ ## Ref: https://github.com/kubernetes-sigs/aws-alb-ingress-controller/blob/master/docs/guide/controller/config.md#limiting-namespaces
## Default: namespace of the ALB ingress controller
watchNamespace: ""
@@ -71,7 +71,7 @@ rbac:
image:
repository: docker.io/amazon/aws-alb-ingress-controller
- tag: "v1.0.1"
+ tag: "v1.1.2"
pullPolicy: IfNotPresent
replicaCount: 1
@@ -99,3 +99,13 @@ tolerations: []
# effect: NoSchedule
affinity: {}
+
+volumeMounts: []
+ # - name: aws-iam-credentials
+ # mountPath: /meta/aws-iam
+ # readOnly: true
+
+volumes: []
+ # - name: aws-iam-credentials
+ # secret:
+ # secretName: alb-ingress-controller-role
diff --git a/incubator/azuremonitor-containers/Chart.yaml b/incubator/azuremonitor-containers/Chart.yaml
index 883914225e8b..7e88da4357d6 100644
--- a/incubator/azuremonitor-containers/Chart.yaml
+++ b/incubator/azuremonitor-containers/Chart.yaml
@@ -1,11 +1,14 @@
apiVersion: v1
-appVersion: 2.0.0-3
+appVersion: 4.0.0-0
description: Helm chart for deploying Azure Monitor container monitoring agent in Kubernetes
name: azuremonitor-containers
-version: 0.4.0
+version: 0.6.0
keywords:
- monitoring
- azuremonitor
+ - azure
+ - oms
+ - containerinsights
- metric
- event
- logs
diff --git a/incubator/azuremonitor-containers/README.md b/incubator/azuremonitor-containers/README.md
index 70d9d54b351f..c56b1b9b83b9 100644
--- a/incubator/azuremonitor-containers/README.md
+++ b/incubator/azuremonitor-containers/README.md
@@ -20,7 +20,7 @@ This article describes how to set up and use [Azure Monitor - Containers](https:
2. [Add the 'AzureMonitor-Containers' Solution to your Log Analytics workspace.](http://aka.ms/coinhelmdoc)
-3. [For ACS-engine K8S cluster, add Log Analytics workspace tag to cluster resources, to be able to use Azure Container monitoring User experience (aka.ms/azmon-containers)](http://aka.ms/coin-acs-tag-doc)
+3. [For AKS-Engine or ACS-Engine K8S cluster, add required tags on cluster resources, to be able to use Azure Container monitoring User experience (aka.ms/azmon-containers)](http://aka.ms/coin-acs-tag-doc)
---
@@ -58,7 +58,7 @@ The following table lists the configurable parameters of the MSOMS chart and the
| `omsagent.secret.wsid` | Azure Log analytics workspace id | Does not have a default value, needs to be provided |
| `omsagent.secret.key` | Azure Log analytics workspace key | Does not have a default value, needs to be provided |
| `omsagent.domain` | Azure Log analytics cloud domain (public / govt) | opinsights.azure.com (Public cloud as default), opinsights.azure.us (Govt Cloud) |
-| `omsagent.env.clusterName` | Name of your cluster | Does not have a default value, needs to be provided. If ACS-engine cluster, it is recommended to provide either one of the below as cluster name, to be able to use Azure Container monitoring User experience (aka.ms/azmon-containers)
- Azure Resource group resource ID of ACS-Engine cluster - Provide a friendly name here and ensure this name is used to 'tag' the cluster master node(s) - see step-3 in pre-requisites above |
+| `omsagent.env.clusterName` | Name of your cluster | Does not have a default value, needs to be provided. If AKS-Engine or ACS-Engine K8S cluster, it is recommended to provide either one of the below as cluster name, to be able to use Azure Container monitoring User experience (aka.ms/azmon-containers)
- Azure Resource group resource ID of ACS-Engine cluster - Provide a friendly name here and ensure this name is used to 'tag' the cluster master node(s) - see step-3 in pre-requisites above |
|`omsagent.env.doNotCollectKubeSystemLogs`| Disable collecting logs from containers in 'kube-system' namespace | true|
| `omsagent.rbac` | rbac enabled/disabled | true (i.e enabled) |
@@ -70,7 +70,7 @@ Specify each parameter using the `--set key=value[,key=value]` argument to `helm
```bash
$ helm install --name myrelease-1 \
---set omsagent.secret.wsid=,omsagent.secret.key=,omsagent.env.clusterName= incubator/azuremonitor-containers
+--set omsagent.secret.wsid=,omsagent.secret.key=,omsagent.env.clusterName= incubator/azuremonitor-containers
```
Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,
@@ -83,4 +83,4 @@ $ helm install --name myrelease-1 -f values.yaml incubator/azuremonitor-containe
After you successfully deploy the chart, you will be able to see your data in the [azure portal](aka.ms/azmon-containers)
-If you need help with this chart, please reach us out thru [this](mailto:askcoin@microsoft.com) email.
\ No newline at end of file
+If you need help with this chart, please reach out to us via [email](mailto:askcoin@microsoft.com).
\ No newline at end of file
diff --git a/incubator/azuremonitor-containers/templates/omsagent-daemonset.yaml b/incubator/azuremonitor-containers/templates/omsagent-daemonset.yaml
index afa9e12a1144..9c76643e6435 100644
--- a/incubator/azuremonitor-containers/templates/omsagent-daemonset.yaml
+++ b/incubator/azuremonitor-containers/templates/omsagent-daemonset.yaml
@@ -32,6 +32,8 @@ spec:
value: {{ .Values.omsagent.env.clusterName | quote }}
- name: DISABLE_KUBE_SYSTEM_LOG_COLLECTION
value: {{ .Values.omsagent.env.doNotCollectKubeSystemLogs | quote }}
+ - name: CONTROLLER_TYPE
+ value: "DaemonSet"
- name: NODE_IP
valueFrom:
fieldRef:
@@ -44,12 +46,17 @@ spec:
- containerPort: 25224
protocol: UDP
volumeMounts:
- - mountPath: /var/run/docker.sock
+ - mountPath: /hostfs
+ name: host-root
+ readOnly: true
+ - mountPath: /var/run/host
name: docker-sock
- mountPath: /var/log
name: host-log
- mountPath: /var/lib/docker/containers
name: containerlog-path
+ - mountPath: /etc/kubernetes/host
+ name: azure-json-path
- mountPath: /etc/omsagent-secret
name: omsagent-secret
readOnly: true
@@ -58,7 +65,7 @@ spec:
command:
- /bin/bash
- -c
- - ps -ef | grep omsagent | grep -v "grep"
+ - (ps -ef | grep omsagent | grep -v "grep") && (ps -ef | grep td-agent-bit | grep -v "grep")
initialDelaySeconds: 60
periodSeconds: 60
nodeSelector:
@@ -70,9 +77,12 @@ spec:
value: "true"
effect: "NoSchedule"
volumes:
+ - name: host-root
+ hostPath:
+ path: /
- name: docker-sock
hostPath:
- path: /var/run/docker.sock
+ path: /var/run
- name: container-hostname
hostPath:
path: /etc/hostname
@@ -82,6 +92,9 @@ spec:
- name: containerlog-path
hostPath:
path: /var/lib/docker/containers
+ - name: azure-json-path
+ hostPath:
+ path: /etc/kubernetes
- name: omsagent-secret
secret:
secretName: omsagent-secret
diff --git a/incubator/azuremonitor-containers/templates/omsagent-deployment.yaml b/incubator/azuremonitor-containers/templates/omsagent-deployment.yaml
index f5a0f4d4c9be..ca3699608b32 100644
--- a/incubator/azuremonitor-containers/templates/omsagent-deployment.yaml
+++ b/incubator/azuremonitor-containers/templates/omsagent-deployment.yaml
@@ -36,6 +36,8 @@ spec:
value: {{ .Values.omsagent.env.clusterName | quote }}
- name: DISABLE_KUBE_SYSTEM_LOG_COLLECTION
value: {{ .Values.omsagent.env.doNotCollectKubeSystemLogs | quote }}
+ - name: CONTROLLER_TYPE
+ value: "ReplicaSet"
- name: NODE_IP
valueFrom:
fieldRef:
@@ -48,12 +50,14 @@ spec:
- containerPort: 25224
protocol: UDP
volumeMounts:
- - mountPath: /var/run/docker.sock
+ - mountPath: /var/run/host
name: docker-sock
- mountPath: /var/log
name: host-log
- mountPath: /var/lib/docker/containers
name: containerlog-path
+ - mountPath: /etc/kubernetes/host
+ name: azure-json-path
- mountPath: /etc/omsagent-secret
name: omsagent-secret
readOnly: true
@@ -73,7 +77,7 @@ spec:
volumes:
- name: docker-sock
hostPath:
- path: /var/run/docker.sock
+ path: /var/run
- name: container-hostname
hostPath:
path: /etc/hostname
@@ -83,6 +87,9 @@ spec:
- name: containerlog-path
hostPath:
path: /var/lib/docker/containers
+ - name: azure-json-path
+ hostPath:
+ path: /etc/kubernetes
- name: omsagent-secret
secret:
secretName: omsagent-secret
diff --git a/incubator/azuremonitor-containers/templates/omsagent-rs-configmap.yaml b/incubator/azuremonitor-containers/templates/omsagent-rs-configmap.yaml
index 62295c319817..9dc7b5f5045f 100644
--- a/incubator/azuremonitor-containers/templates/omsagent-rs-configmap.yaml
+++ b/incubator/azuremonitor-containers/templates/omsagent-rs-configmap.yaml
@@ -5,54 +5,76 @@ data:
kube.conf: |
# Fluentd config file for OMS Docker - cluster components (kubeAPI)
- #Kubernetes pod inventory
-
+ #Kubernetes pod inventory
+
type kubepodinventory
tag oms.containerinsights.KubePodInventory
run_interval 60s
log_level debug
-
+
- #Kubernetes events
-
+ #Kubernetes events
+
type kubeevents
- tag oms.api.KubeEvents.CollectionTime
+ tag oms.containerinsights.KubeEvents
run_interval 60s
log_level debug
-
+
- #Kubernetes logs
-
+ #Kubernetes logs
+
type kubelogs
tag oms.api.KubeLogs
run_interval 60s
-
+
- #Kubernetes services
-
+ #Kubernetes services
+
type kubeservices
- tag oms.api.KubeServices.CollectionTime
+ tag oms.containerinsights.KubeServices
run_interval 60s
log_level debug
-
+
- #Kubernetes Nodes
-
+ #Kubernetes Nodes
+
type kubenodeinventory
tag oms.containerinsights.KubeNodeInventory
run_interval 60s
log_level debug
-
+
- #Kubernetes perf
-
+ #Kubernetes perf
+
type kubeperf
tag oms.api.KubePerf
run_interval 60s
log_level debug
-
+
-
+ #cadvisor perf- Windows nodes
+
+ type wincadvisorperf
+ tag oms.api.wincadvisorperf
+ run_interval 60s
+ log_level debug
+
+
+
+ type filter_inventory2mdm
+ custom_metrics_azure_regions eastus,southcentralus,westcentralus,westus2,southeastasia,northeurope,westEurope
+ log_level info
+
+
+ # custom_metrics_mdm filter plugin for perf data from windows nodes
+
+ type filter_cadvisor2mdm
+ custom_metrics_azure_regions eastus,southcentralus,westcentralus,westus2,southeastasia,northeurope,westEurope
+ metrics_to_collect cpuUsageNanoCores,memoryWorkingSetBytes
+ log_level info
+
+
+
type out_oms
log_level debug
num_threads 5
@@ -65,23 +87,24 @@ data:
retry_limit 10
retry_wait 30s
max_retry_wait 9m
-
+
-
- type out_oms_api
+
+ type out_oms
log_level debug
num_threads 5
buffer_chunk_limit 5m
buffer_type file
- buffer_path %STATE_DIR_WS%/out_oms_api_kubeevents*.buffer
+ buffer_path %STATE_DIR_WS%/out_oms_kubeevents*.buffer
buffer_queue_limit 10
buffer_queue_full_action drop_oldest_chunk
flush_interval 20s
retry_limit 10
retry_wait 30s
-
+ max_retry_wait 9m
+
-
+
type out_oms_api
log_level debug
buffer_chunk_limit 10m
@@ -91,10 +114,10 @@ data:
flush_interval 20s
retry_limit 10
retry_wait 30s
-
+
-
- type out_oms_api
+
+ type out_oms
log_level debug
num_threads 5
buffer_chunk_limit 20m
@@ -106,9 +129,9 @@ data:
retry_limit 10
retry_wait 30s
max_retry_wait 9m
-
+
-
+
type out_oms
log_level debug
num_threads 5
@@ -121,9 +144,22 @@ data:
retry_limit 10
retry_wait 30s
max_retry_wait 9m
-
+
-
+
+ type out_oms
+ log_level debug
+ buffer_chunk_limit 20m
+ buffer_type file
+ buffer_path %STATE_DIR_WS%/out_oms_containernodeinventory*.buffer
+ buffer_queue_limit 20
+ flush_interval 20s
+ retry_limit 10
+ retry_wait 15s
+ max_retry_wait 9m
+
+
+
type out_oms
log_level debug
num_threads 5
@@ -136,7 +172,54 @@ data:
retry_limit 10
retry_wait 30s
max_retry_wait 9m
-
+
+
+
+ type out_mdm
+ log_level debug
+ num_threads 5
+ buffer_chunk_limit 20m
+ buffer_type file
+ buffer_path %STATE_DIR_WS%/out_mdm_*.buffer
+ buffer_queue_limit 20
+ buffer_queue_full_action drop_oldest_chunk
+ flush_interval 20s
+ retry_limit 10
+ retry_wait 30s
+ max_retry_wait 9m
+ retry_mdm_post_wait_minutes 60
+
+
+
+ type out_oms
+ log_level debug
+ num_threads 5
+ buffer_chunk_limit 20m
+ buffer_type file
+ buffer_path %STATE_DIR_WS%/out_oms_api_wincadvisorperf*.buffer
+ buffer_queue_limit 20
+ buffer_queue_full_action drop_oldest_chunk
+ flush_interval 20s
+ retry_limit 10
+ retry_wait 30s
+ max_retry_wait 9m
+
+
+
+ type out_mdm
+ log_level debug
+ num_threads 5
+ buffer_chunk_limit 20m
+ buffer_type file
+ buffer_path %STATE_DIR_WS%/out_mdm_cdvisorperf*.buffer
+ buffer_queue_limit 20
+ buffer_queue_full_action drop_oldest_chunk
+ flush_interval 20s
+ retry_limit 10
+ retry_wait 30s
+ max_retry_wait 9m
+ retry_mdm_post_wait_minutes 60
+
metadata:
name: omsagent-rs-config
namespace: kube-system
diff --git a/incubator/azuremonitor-containers/values.yaml b/incubator/azuremonitor-containers/values.yaml
index 6cf3e4c253f0..62e7a35152c8 100644
--- a/incubator/azuremonitor-containers/values.yaml
+++ b/incubator/azuremonitor-containers/values.yaml
@@ -6,10 +6,10 @@
## ref: https://github.com/Microsoft/OMS-docker/tree/ci_feature_prod
omsagent:
image:
- tag: "ciprod11292018"
+ tag: "ciprod04232019"
pullPolicy: IfNotPresent
- dockerProviderVersion: "3.0.0-2"
- agentVersion: "1.6.0-163"
+ dockerProviderVersion: "4.0.0-0"
+ agentVersion: "1.10.0.1"
## To get your workspace id and key do the following
## You can create a Azure Loganalytics workspace from portal.azure.com and get its ID & PRIMARY KEY from 'Advanced Settings' tab in the Ux.
@@ -29,7 +29,7 @@ omsagent:
daemonset:
requests:
cpu: 50m
- memory: 150Mi
+ memory: 225Mi
limits:
cpu: 150m
memory: 300Mi
diff --git a/incubator/buzzfeed-sso/Chart.yaml b/incubator/buzzfeed-sso/Chart.yaml
new file mode 100644
index 000000000000..fe2f3f972531
--- /dev/null
+++ b/incubator/buzzfeed-sso/Chart.yaml
@@ -0,0 +1,16 @@
+apiVersion: v1
+description: Single sign-on for your Kubernetes services using Google OAuth
+name: buzzfeed-sso
+version: 0.0.1
+appVersion: 1.1.0
+home: https://github.com/buzzfeed/sso
+sources:
+ - https://hub.docker.com/r/buzzfeed/sso/
+keywords:
+ - sso
+ - octoboi
+ - ssoctopus
+icon: https://user-images.githubusercontent.com/10510566/44476420-a64e5980-a605-11e8-8ad9-2820109deb75.png
+maintainers:
+ - name: darioblanco
+ email: dblanco@minddoc.de
diff --git a/incubator/buzzfeed-sso/OWNERS b/incubator/buzzfeed-sso/OWNERS
new file mode 100644
index 000000000000..3ee6653c6388
--- /dev/null
+++ b/incubator/buzzfeed-sso/OWNERS
@@ -0,0 +1,4 @@
+approvers:
+- darioblanco
+reviewers:
+- darioblanco
diff --git a/incubator/buzzfeed-sso/README.md b/incubator/buzzfeed-sso/README.md
new file mode 100644
index 000000000000..420f0331d8b6
--- /dev/null
+++ b/incubator/buzzfeed-sso/README.md
@@ -0,0 +1,182 @@
+# Buzzfeed SSO
+
+Single sign-on for your Kubernetes services using Google OAuth (more providers are welcome)
+
+[Blogpost](https://tech.buzzfeed.com/unleashing-the-a6a1a5da39d6?gi=e6db395406ae)
+[Quickstart guide](https://github.com/buzzfeed/sso/blob/master/docs/quickstart.md)
+[SSO in Kubernetes with Google Auth](https://medium.com/@while1eq1/single-sign-on-for-internal-apps-in-kubernetes-using-google-oauth-sso-2386a34bc433)
+[Repo](https://github.com/buzzfeed/sso)
+
+This helm chart is heavily inspired by [Buzzfeed's example](https://github.com/buzzfeed/sso/tree/master/quickstart/kubernetes), and provides a way to protect Kubernetes services that have no authentication layer, globally, from a single OAuth proxy.
+
+Many Kubernetes OAuth solutions require running an extra container within the pod using [oauth2_proxy](https://github.com/bitly/oauth2_proxy), but that project no longer appears to be maintained. The approach presented in this chart provides a global OAuth2 proxy that can protect services even across different namespaces, thanks to [Kube DNS](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/).
+
+We use this chart in production at [MindDoc](https://minddoc.de) to protect endpoints that have no built-in authentication (or that would otherwise require running extra containers), such as `Kibana`, `Prometheus`, etc.
+
+## Introduction
+
+This chart creates a SSO deployment on a [Kubernetes](http://kubernetes.io)
+cluster using the [Helm](https://helm.sh) package manager.
+
+## Prerequisites
+
+- Kubernetes 1.8+ with Beta APIs enabled
+- Kube DNS
+
+## Installing the Chart
+
+To install the chart with the release name `my-release`:
+
+```bash
+$ helm install --name my-release stable/buzzfeed-sso
+```
+
+The command deploys SSO on the Kubernetes cluster using the default configuration. The [configuration](#configuration) section lists the parameters that can be configured during installation.
+
+This chart has required variables, see [Configuration](#configuration).
+
+## Uninstalling the Chart
+
+To uninstall/delete the `my-release` deployment:
+
+```bash
+$ helm delete --purge my-release
+```
+The command removes all the Kubernetes components associated with the chart and deletes the release.
+
+## Configuration
+
+The following table lists the configurable parameters of the SSO chart and their default/required values.
+
+Parameter | Description | Default
+--- | --- | ---
+`namespace` | namespace to use | `default`
+`emailDomain` | the sso email domain for authentication | REQUIRED
+`rootDomain` | the parent domain used for protecting your backends | REQUIRED
+`auth.annotations` | extra annotations for auth pods | `{}`
+`auth.domain` | the auth domain used for OAuth callbacks | REQUIRED
+`auth.replicaCount` | desired number of auth pods | `1`
+`auth.resources` | resource limits and requests for auth pods | `{ limits: { memory: "256Mi", cpu: "200m" }}`
+`auth.nodeSelector` | node selector logic for auth pods | `{}`
+`auth.tolerations` | resource tolerations for auth pods | `{}`
+`auth.affinity` | node affinity for auth pods | `{}`
+`auth.service.type` | type of auth service to create | `ClusterIP`
+`auth.service.port` | port for the http auth service | `80`
+`auth.secret` | secrets to be generated randomly with `openssl rand -base64 32 \| head -c 32` | REQUIRED if `auth.customSecret` is not set
+`auth.tls` | tls configuration for central sso auth ingress. | `{ secretName: "sso-auth-tls-secret" }`
+`auth.customSecret` | the secret key to reuse (avoids secret creation via helm) | REQUIRED if `auth.secret` is not set
+`proxy.annotations` | extra annotations for proxy pods | `{}`
+`proxy.providerUrlInternal` | url for split dns deployments |
+`proxy.cluster` | the cluster name for SSO | `dev`
+`proxy.replicaCount` | desired number of proxy pods | `1`
+`proxy.resources` | resource limits and requests for proxy pods | `{ limits: { memory: "256Mi", cpu: "200m" }}`
+`proxy.nodeSelector` | node selector logic for proxy pods | `{}`
+`proxy.tolerations` | resource tolerations for proxy pods | `{}`
+`proxy.affinity` | node affinity for proxy pods | `{}`
+`proxy.service.type` | type of proxy service to create | `ClusterIP`
+`proxy.service.port` | port for the http proxy service | `80`
+`proxy.secret` | secrets to be generated randomly with `openssl rand -base64 32 \| head -c 32 \| base64` | REQUIRED if `proxy.customSecret` is not set
+`proxy.customSecret` | the secret key to reuse (avoids secret creation via helm) | REQUIRED if `proxy.secret` is not set
+`provider.google` | the OAuth provider to use (only Google is supported for now) | REQUIRED
+`provider.google.adminEmail` | the Google admin email | `undefined`
+`provider.google.secret` | the Google OAuth secrets | REQUIRED if `provider.google.customSecret` is not set
+`provider.google.customSecret` | the secret key to reuse instead of creating it via helm | REQUIRED if `provider.google.secret` is not set
+`image.repository` | container image repository | `buzzfeed/sso`
+`image.tag` | container image tag | `v1.0.0`
+`image.pullPolicy` | container image pull policy | `IfNotPresent`
+`ingress.annotations` | ingress load balancer annotations | `{}`
+`ingress.hosts` | proxied hosts | `[]`
+`ingress.tls` | tls certificates for the proxied hosts | `[]`
+`upstreams` | configuration of services that use sso | `[]`
+
+Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
+
+```bash
+$ helm install --name my-release \
+ --set key_1=value_1,key_2=value_2 \
+ stable/buzzfeed-sso
+```
+
+Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,
+
+```bash
+$ helm install --name my-release -f values.yaml stable/buzzfeed-sso
+```
+
+> **Tip**: This will merge parameters with [values.yaml](values.yaml), which does not specify all the required values
+
+### Example
+
+**NEVER expose your `auth.secret`, `proxy.secret`, `provider.google.clientId`, `provider.google.clientSecret` and `provider.google.serviceAccount`.** Always keep them in a safe place and do not push them to any repository. As values are merged, you can always generate a different `.yaml` file. For instance:
+
+```yaml
+# values.yaml
+emailDomain: 'email.coolcompany.foo'
+
+rootDomain: 'coolcompany.foo'
+
+auth:
+ domain: sso-auth.coolcompany.foo
+
+proxy:
+ cluster: dev
+
+google:
+ adminEmail: iamtheadmin@email.coolcompany.foo
+```
+
+```yaml
+# secrets.yaml
+auth:
+ secret:
+ codeSecret: 'randomSecret1'
+ cookieSecret: 'randomSecret2'
+
+proxy:
+ secret:
+ clientId: 'randomSecret3'
+ clientSecret: 'randomSecret4'
+ cookieSecret: 'randomSecret6'
+
+provider:
+  google:
+    secret:
+      clientId: 'googleSecret!'
+      clientSecret: 'evenMoreSecret'
+      serviceAccount: '{ }'
+```
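
The placeholder secrets above are just illustrative strings; the chart's `values.yaml` comments suggest generating real ones with `openssl`, along these lines:

```shell
# Generate a random 32-character string and base64-encode it,
# as suggested in the chart's values.yaml comments.
openssl rand -base64 32 | head -c 32 | base64
```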
+
+This way, you can push your own `values.yaml` to a repository while keeping `secrets.yaml` safely stored locally, and then install or upgrade the chart:
+
+```bash
+$ helm install --name my-release -f values.yaml -f secrets.yaml stable/buzzfeed-sso
+```
+
+Alternatively, you can reference a secret you have already created in the cluster via the `customSecret` values. Such a secret must follow the data format defined in `secret.yaml` (auth and proxy) and `google-secret.yaml` (Google provider).
+
+```yaml
+# values.yaml
+emailDomain: 'email.coolcompany.foo'
+
+rootDomain: 'coolcompany.foo'
+
+auth:
+ domain: sso-auth.coolcompany.foo
+ customSecret: my-sso-auth-secret
+
+proxy:
+ cluster: dev
+ customSecret: my-sso-proxy-secret
+
+provider:
+ google:
+ adminEmail: iamtheadmin@email.coolcompany.foo
+ customSecret: my-sso-google-secret
+```
+
+## Updating the Chart
+
+You can update the chart values and trigger a pod reload: the deployments carry checksum annotations for the ConfigMap and Secret, so pods are automatically recreated with the new values when either changes.
+
+```bash
+$ helm upgrade -f values.yaml my-release stable/buzzfeed-sso
+```
diff --git a/incubator/buzzfeed-sso/templates/NOTES.txt b/incubator/buzzfeed-sso/templates/NOTES.txt
new file mode 100644
index 000000000000..1810b9cf7ff3
--- /dev/null
+++ b/incubator/buzzfeed-sso/templates/NOTES.txt
@@ -0,0 +1,134 @@
+Please be patient: buzzfeed-sso might take a few minutes to install.
+
+{{- if eq .Values.emailDomain "" }}
+
+###############################################################################
+#### ERROR: You did not provide an email domain. ####
+###############################################################################
+
+This deployment will be incomplete until you configure a valid email domain.
+The email domain is required for the auth and proxy deployments.
+
+{{- end }}
+
+{{- if eq .Values.rootDomain "" }}
+
+###############################################################################
+#### ERROR: You did not provide a root domain. ####
+###############################################################################
+
+This deployment will be incomplete until you configure a valid root domain.
+The root domain is required for the auth deployment.
+
+{{- end }}
+
+{{- if eq .Values.auth.domain "" }}
+
+###############################################################################
+#### ERROR: You did not provide a proper auth domain. ####
+###############################################################################
+
+This deployment will be incomplete until you configure a valid auth domain.
+For instance, "sso-auth.mydomain.foo".
+
+{{- end }}
+
+{{- if not (or .Values.auth.secret .Values.auth.customSecret) }}
+
+###############################################################################
+#### ERROR: You did not provide proper auth secrets. ####
+###############################################################################
+
+This deployment will be incomplete until you configure proper auth secrets.
+You can generate an auth secret by running
+ helm upgrade {{ .Release.Name }} \
+ --reuse-values \
+ --set auth.secret.codeSecret="$(openssl rand -base64 32 | head -c 32 | base64)" \
+ --set auth.secret.cookieSecret="$(openssl rand -base64 32 | head -c 32 | base64)" \
+ incubator/buzzfeed-sso
+
+Or you can provide a custom auth secret that is a reference to an already created
+Kubernetes secret resource.
+ kubectl create secret generic buzzfeed-sso-auth-secret \
+ --namespace={{ .Release.Namespace }} \
+ --from-literal=auth-code-secret="auth-code-secret" \
+ --from-literal=auth-cookie-secret="auth-cookie-secret"
+
+ helm upgrade {{ .Release.Name }} \
+ --reuse-values \
+ --set auth.customSecret="buzzfeed-sso-auth-secret" \
+ incubator/buzzfeed-sso
+
+{{- end }}
+
+{{- if not (or .Values.proxy.secret .Values.proxy.customSecret) }}
+
+###############################################################################
+#### ERROR: You did not provide proper proxy secrets. ####
+###############################################################################
+
+This deployment will be incomplete until you configure proper proxy secrets.
+You can generate a proxy secret by running
+ helm upgrade {{ .Release.Name }} \
+ --reuse-values \
+ --set proxy.secret.clientId="$(openssl rand -base64 32 | head -c 32 | base64)" \
+ --set proxy.secret.clientSecret="$(openssl rand -base64 32 | head -c 32 | base64)" \
+ --set proxy.secret.cookieSecret="$(openssl rand -base64 32 | head -c 32 | base64)" \
+ incubator/buzzfeed-sso
+
+Or you can provide a custom proxy secret that is a reference to an already created
+Kubernetes secret resource.
+ kubectl create secret generic buzzfeed-sso-proxy-secret \
+ --namespace={{ .Release.Namespace }} \
+ --from-literal=proxy-client-id="proxy-client-id" \
+ --from-literal=proxy-client-secret="proxy-client-secret" \
+ --from-literal=proxy-cookie-secret="proxy-cookie-secret"
+
+ helm upgrade {{ .Release.Name }} \
+ --reuse-values \
+ --set proxy.customSecret="buzzfeed-sso-proxy-secret" \
+ incubator/buzzfeed-sso
+
+{{- end }}
+
+{{- if not (or .Values.provider.google.secret .Values.provider.google.customSecret) }}
+
+###############################################################################
+#### ERROR: You did not provide a proper Google provider. ####
+###############################################################################
+
+This deployment will be incomplete until you configure a valid provider.
+
+Currently, the only accepted provider is Google. You need to specify it with
+a given secret or custom secret.
+
+You can define the secret with your Google's client id, client secret and
+service account in JSON format.
+ helm upgrade {{ .Release.Name }} \
+ --reuse-values \
+ --set provider.google.secret.clientId="foo123123-fake123123.apps.googleusercontent.com" \
+ --set provider.google.secret.clientSecret="googleOauthClientSecret" \
+ --set provider.google.secret.serviceAccount="$(cat myserviceaccount.json)" \
+ incubator/buzzfeed-sso
+
+Or you can provide a custom secret that is a reference to an already created
+Kubernetes secret resource.
+ kubectl create secret generic buzzfeed-sso-google-secret \
+ --namespace={{ .Release.Namespace }} \
+ --from-literal=google-client-id="foo123123-fake123123.apps.googleusercontent.com" \
+ --from-literal=google-client-secret="googleOauthClientSecret" \
+ --from-literal=service-account="$(cat myserviceaccount.json)"
+
+ helm upgrade {{ .Release.Name }} \
+ --reuse-values \
+ --set provider.google.customSecret="buzzfeed-sso-google-secret" \
+ incubator/buzzfeed-sso
+
+{{- end }}
+
+{{- if .Values.ingress.hosts }}
+Visit the external application URLs to use your application:
+{{- range .Values.ingress.hosts }}
+ https://{{ .domain }}{{ .path }}
+{{- end }}
+{{- end }}
diff --git a/incubator/buzzfeed-sso/templates/_helpers.tpl b/incubator/buzzfeed-sso/templates/_helpers.tpl
new file mode 100644
index 000000000000..7e2fb0a6633d
--- /dev/null
+++ b/incubator/buzzfeed-sso/templates/_helpers.tpl
@@ -0,0 +1,32 @@
+{{/* vim: set filetype=mustache: */}}
+{{/*
+Expand the name of the chart.
+*/}}
+{{- define "buzzfeed-sso.name" -}}
+{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+
+{{/*
+Create a default fully qualified app name.
+We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
+If release name contains chart name it will be used as a full name.
+*/}}
+{{- define "buzzfeed-sso.fullname" -}}
+{{- if .Values.fullnameOverride -}}
+{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
+{{- else -}}
+{{- $name := default .Chart.Name .Values.nameOverride -}}
+{{- if contains $name .Release.Name -}}
+{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
+{{- else -}}
+{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Create chart name and version as used by the chart label.
+*/}}
+{{- define "buzzfeed-sso.chart" -}}
+{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
diff --git a/incubator/buzzfeed-sso/templates/auth-deployment.yaml b/incubator/buzzfeed-sso/templates/auth-deployment.yaml
new file mode 100644
index 000000000000..748879288c36
--- /dev/null
+++ b/incubator/buzzfeed-sso/templates/auth-deployment.yaml
@@ -0,0 +1,150 @@
+{{- if and (or .Values.auth.customSecret .Values.auth.secret) (or .Values.provider.google.customSecret .Values.provider.google.secret) (ne .Values.auth.domain "") -}}
+{{- $fullName := include "buzzfeed-sso.fullname" . -}}
+{{- $googleSecret := .Values.provider.google.customSecret | default (printf "%s-google" ($fullName)) -}}
+{{- $authSecret := .Values.auth.customSecret | default ($fullName) -}}
+{{- $name := include "buzzfeed-sso.name" . -}}
+{{- $authDomain := .Values.auth.domain -}}
+apiVersion: apps/v1beta1
+kind: Deployment
+metadata:
+ name: {{ $fullName }}-auth
+ labels:
+ app: {{ $name }}
+ chart: {{ template "buzzfeed-sso.chart" . }}
+ component: {{ $name }}-auth
+ release: {{ .Release.Name }}
+ heritage: {{ .Release.Service }}
+spec:
+ replicas: {{ .Values.auth.replicaCount }}
+ selector:
+ matchLabels:
+ app: {{ $name }}
+ component: {{ $name }}-auth
+ release: {{ .Release.Name }}
+ template:
+ metadata:
+ annotations:
+ checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
+ checksum/secret: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }}
+ {{- with .Values.auth.annotations }}
+{{ toYaml . | indent 8 }}
+ {{- end }}
+ labels:
+ app: {{ $name }}
+ component: {{ $name }}-auth
+ release: {{ .Release.Name }}
+ spec:
+ {{- if .Values.provider.google }}
+ volumes:
+ - name: google-service-account
+ secret:
+ secretName: {{ $googleSecret }}
+ items:
+ - key: service-account
+ path: sso-serviceaccount.json
+ {{- end }}
+ containers:
+ - name: {{ .Chart.Name }}-auth
+ image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
+ imagePullPolicy: {{ .Values.image.pullPolicy }}
+ command: ["/bin/sso-auth"]
+ ports:
+ - name: http
+ containerPort: 4180
+ protocol: TCP
+ env:
+ - name: SSO_EMAIL_DOMAIN
+ value: {{ .Values.emailDomain | quote }}
+ - name: HOST
+ value: {{ $authDomain }}
+ - name: REDIRECT_URL
+ value: https://{{ $authDomain }}
+ - name: PROXY_ROOT_DOMAIN
+ value: {{ .Values.rootDomain | quote }}
+ - name: PROXY_CLIENT_ID
+ valueFrom:
+ secretKeyRef:
+ name: {{ $authSecret }}
+ key: proxy-client-id
+ - name: PROXY_CLIENT_SECRET
+ valueFrom:
+ secretKeyRef:
+ name: {{ $authSecret }}
+ key: proxy-client-secret
+ - name: AUTH_CODE_SECRET
+ valueFrom:
+ secretKeyRef:
+ name: {{ $authSecret }}
+ key: auth-code-secret
+ - name: COOKIE_SECRET
+ valueFrom:
+ secretKeyRef:
+ name: {{ $authSecret }}
+ key: auth-cookie-secret
+ # OLD_COOKIE_SECRET is the same as COOKIE_SECRET; it's unclear why it's still needed at this point
+ - name: OLD_COOKIE_SECRET
+ valueFrom:
+ secretKeyRef:
+ name: {{ $authSecret }}
+ key: auth-cookie-secret
+ # STATSD_HOST and STATSD_PORT must be defined or the app won't launch; they don't need to point to a real host/port
+ - name: STATSD_HOST
+ value: localhost
+ - name: STATSD_PORT
+ value: "11111"
+ - name: COOKIE_SECURE
+ value: "true"
+ - name: CLUSTER
+ value: dev
+ # Provider variables
+ {{- with .Values.provider.google }}
+ {{- if .adminEmail }}
+ - name: GOOGLE_ADMIN_EMAIL
+ value: {{ .adminEmail | quote }}
+ - name: GOOGLE_SERVICE_ACCOUNT_JSON
+ value: /creds/sso-serviceaccount.json
+ {{- end }}
+ - name: CLIENT_ID
+ valueFrom:
+ secretKeyRef:
+ name: {{ $googleSecret }}
+ key: google-client-id
+ - name: CLIENT_SECRET
+ valueFrom:
+ secretKeyRef:
+ name: {{ $googleSecret }}
+ key: google-client-secret
+ {{- end }}
+ readinessProbe:
+ httpGet:
+ path: /ping
+ port: 4180
+ scheme: HTTP
+ livenessProbe:
+ httpGet:
+ path: /ping
+ port: 4180
+ scheme: HTTP
+ initialDelaySeconds: 10
+ timeoutSeconds: 1
+ {{- if .Values.provider.google.adminEmail }}
+ volumeMounts:
+ - name: google-service-account
+ mountPath: /creds
+ readOnly: true
+ {{- end }}
+ resources:
+{{ toYaml .Values.auth.resources | indent 12 }}
+ {{- with .Values.auth.nodeSelector }}
+ nodeSelector:
+{{ toYaml . | indent 8 }}
+ {{- end }}
+ {{- with .Values.auth.affinity }}
+ affinity:
+{{ toYaml . | indent 8 }}
+ {{- end }}
+ {{- with .Values.auth.tolerations }}
+ tolerations:
+{{ toYaml . | indent 8 }}
+ {{- end }}
+{{- end }}
diff --git a/incubator/buzzfeed-sso/templates/auth-service.yaml b/incubator/buzzfeed-sso/templates/auth-service.yaml
new file mode 100644
index 000000000000..7b24477376e6
--- /dev/null
+++ b/incubator/buzzfeed-sso/templates/auth-service.yaml
@@ -0,0 +1,22 @@
+{{- $name := include "buzzfeed-sso.name" . -}}
+apiVersion: v1
+kind: Service
+metadata:
+ name: {{ template "buzzfeed-sso.fullname" . }}-auth
+ labels:
+ app: {{ $name }}
+ chart: {{ template "buzzfeed-sso.chart" . }}
+ component: {{ $name }}-auth
+ release: {{ .Release.Name }}
+ heritage: {{ .Release.Service }}
+spec:
+ type: {{ .Values.auth.service.type }}
+ ports:
+ - name: http
+ port: {{ .Values.auth.service.port }}
+ targetPort: 4180
+ protocol: TCP
+ selector:
+ app: {{ $name }}
+ component: {{ $name }}-auth
+ release: {{ .Release.Name }}
diff --git a/incubator/buzzfeed-sso/templates/configmap.yaml b/incubator/buzzfeed-sso/templates/configmap.yaml
new file mode 100644
index 000000000000..12f9a4f6e000
--- /dev/null
+++ b/incubator/buzzfeed-sso/templates/configmap.yaml
@@ -0,0 +1,14 @@
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: {{ template "buzzfeed-sso.fullname" . }}
+ labels:
+ app: {{ template "buzzfeed-sso.name" . }}
+ chart: {{ template "buzzfeed-sso.chart" . }}
+ release: {{ .Release.Name }}
+ heritage: {{ .Release.Service }}
+data:
+{{- with .Values.upstreams }}
+ upstream_configs.yml: |-
+{{ toYaml . | indent 4 }}
+{{- end }}
diff --git a/incubator/buzzfeed-sso/templates/google-secret.yaml b/incubator/buzzfeed-sso/templates/google-secret.yaml
new file mode 100644
index 000000000000..9a3f46ea8b6b
--- /dev/null
+++ b/incubator/buzzfeed-sso/templates/google-secret.yaml
@@ -0,0 +1,18 @@
+{{- if .Values.provider.google.secret }}
+apiVersion: v1
+kind: Secret
+metadata:
+ name: {{ template "buzzfeed-sso.fullname" . }}-google
+ labels:
+ app: {{ template "buzzfeed-sso.name" . }}
+ chart: {{ template "buzzfeed-sso.chart" . }}
+ release: {{ .Release.Name }}
+ heritage: {{ .Release.Service }}
+type: Opaque
+data:
+{{- with .Values.provider.google.secret }}
+ google-client-id: {{ .clientId | b64enc }}
+ google-client-secret: {{ .clientSecret | b64enc }}
+ service-account: {{ .serviceAccount | b64enc }}
+{{- end }}
+{{- end }}
diff --git a/incubator/buzzfeed-sso/templates/ingress.yaml b/incubator/buzzfeed-sso/templates/ingress.yaml
new file mode 100644
index 000000000000..5f7c3f3b7cf8
--- /dev/null
+++ b/incubator/buzzfeed-sso/templates/ingress.yaml
@@ -0,0 +1,48 @@
+{{- if ne .Values.auth.domain "" -}}
+{{- $fullName := include "buzzfeed-sso.fullname" . -}}
+{{- $authDomain := .Values.auth.domain -}}
+apiVersion: extensions/v1beta1
+kind: Ingress
+metadata:
+ name: {{ $fullName }}
+ labels:
+ app: {{ template "buzzfeed-sso.name" . }}
+ chart: {{ template "buzzfeed-sso.chart" . }}
+ release: {{ .Release.Name }}
+ heritage: {{ .Release.Service }}
+{{- with .Values.ingress.annotations }}
+ annotations:
+{{ toYaml . | indent 4 }}
+{{- end }}
+spec:
+ tls:
+ - hosts:
+ - {{ $authDomain }}
+ secretName: {{ .Values.auth.tls.secretName -}}
+ {{- range .Values.ingress.tls }}
+ - hosts:
+ {{- range .hosts }}
+ - {{ . }}
+ {{- end }}
+ secretName: {{ .secretName }}
+ {{- end }}
+ rules:
+ # Upstreams that need SSO authentication
+ {{- range .Values.ingress.hosts }}
+ - host: {{ .domain }}
+ http:
+ paths:
+ - path: {{ .path }}
+ backend:
+ serviceName: {{ $fullName }}-proxy
+ servicePort: http
+ {{- end }}
+ # Global SSO used in the callback for login
+ - host: {{ $authDomain }}
+ http:
+ paths:
+ - path: /
+ backend:
+ serviceName: {{ $fullName }}-auth
+ servicePort: http
+{{- end }}
diff --git a/incubator/buzzfeed-sso/templates/proxy-deployment.yaml b/incubator/buzzfeed-sso/templates/proxy-deployment.yaml
new file mode 100644
index 000000000000..e636a8cf5c81
--- /dev/null
+++ b/incubator/buzzfeed-sso/templates/proxy-deployment.yaml
@@ -0,0 +1,112 @@
+{{- if or .Values.proxy.customSecret .Values.proxy.secret -}}
+{{- $fullName := include "buzzfeed-sso.fullname" . -}}
+{{- $proxySecret := .Values.proxy.customSecret | default ($fullName) -}}
+{{- $name := include "buzzfeed-sso.name" . -}}
+apiVersion: apps/v1beta1
+kind: Deployment
+metadata:
+ name: {{ $fullName }}-proxy
+ labels:
+ app: {{ $name }}
+ chart: {{ template "buzzfeed-sso.chart" . }}
+ component: {{ $name }}-proxy
+ release: {{ .Release.Name }}
+ heritage: {{ .Release.Service }}
+spec:
+ replicas: {{ .Values.proxy.replicaCount }}
+ selector:
+ matchLabels:
+ app: {{ $name }}
+ component: {{ $name }}-proxy
+ release: {{ .Release.Name }}
+ template:
+ metadata:
+ annotations:
+ checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
+ checksum/secret: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }}
+ {{- with .Values.proxy.annotations }}
+{{ toYaml . | indent 8 }}
+ {{- end }}
+ labels:
+ app: {{ $name }}
+ component: {{ $name }}-proxy
+ release: {{ .Release.Name }}
+ spec:
+ volumes:
+ - name: {{ $fullName }}
+ configMap:
+ name: {{ $fullName }}
+ containers:
+ - name: {{ .Chart.Name }}-proxy
+ image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
+ imagePullPolicy: {{ .Values.image.pullPolicy }}
+ command: ["/bin/sso-proxy"]
+ ports:
+ - name: http
+ containerPort: 8080
+ protocol: TCP
+ env:
+ - name: CLIENT_ID
+ valueFrom:
+ secretKeyRef:
+ name: {{ $proxySecret }}
+ key: proxy-client-id
+ - name: CLIENT_SECRET
+ valueFrom:
+ secretKeyRef:
+ name: {{ $proxySecret }}
+ key: proxy-client-secret
+ - name: COOKIE_SECRET
+ valueFrom:
+ secretKeyRef:
+ name: {{ $proxySecret }}
+ key: proxy-cookie-secret
+ - name: EMAIL_DOMAIN
+ value: {{ .Values.emailDomain | quote }}
+ - name: UPSTREAM_CONFIGS
+ value: /sso/upstream_configs.yml
+ - name: PROVIDER_URL
+ value: https://{{ .Values.auth.domain }}
+ # STATSD_HOST and STATSD_PORT must be defined or the app won't launch; they don't need to point to a real host/port, but they do need to be set.
+ - name: STATSD_HOST
+ value: localhost
+ - name: STATSD_PORT
+ value: "11111"
+ - name: COOKIE_SECURE
+ value: "true"
+ - name: CLUSTER
+ value: {{ .Values.proxy.cluster | quote }}
+ {{- if .Values.proxy.providerUrlInternal }}
+ - name: PROVIDER_URL_INTERNAL
+ value: {{ .Values.proxy.providerUrlInternal | quote }}
+ {{- end }}
+ readinessProbe:
+ httpGet:
+ path: /ping
+ port: 4180
+ scheme: HTTP
+ livenessProbe:
+ httpGet:
+ path: /ping
+ port: 4180
+ scheme: HTTP
+ initialDelaySeconds: 10
+ timeoutSeconds: 1
+ volumeMounts:
+ - name: {{ $fullName }}
+ mountPath: /sso
+ resources:
+{{ toYaml .Values.proxy.resources | indent 12 }}
+ {{- with .Values.proxy.nodeSelector }}
+ nodeSelector:
+{{ toYaml . | indent 8 }}
+ {{- end }}
+ {{- with .Values.proxy.affinity }}
+ affinity:
+{{ toYaml . | indent 8 }}
+ {{- end }}
+ {{- with .Values.proxy.tolerations }}
+ tolerations:
+{{ toYaml . | indent 8 }}
+ {{- end }}
+{{- end }}
diff --git a/incubator/buzzfeed-sso/templates/proxy-service.yaml b/incubator/buzzfeed-sso/templates/proxy-service.yaml
new file mode 100644
index 000000000000..2bf38711f080
--- /dev/null
+++ b/incubator/buzzfeed-sso/templates/proxy-service.yaml
@@ -0,0 +1,22 @@
+{{- $name := include "buzzfeed-sso.name" . -}}
+apiVersion: v1
+kind: Service
+metadata:
+ name: {{ template "buzzfeed-sso.fullname" . }}-proxy
+ labels:
+ app: {{ $name }}
+ chart: {{ template "buzzfeed-sso.chart" . }}
+ component: {{ $name }}-proxy
+ release: {{ .Release.Name }}
+ heritage: {{ .Release.Service }}
+spec:
+ type: {{ .Values.proxy.service.type }}
+ ports:
+ - name: http
+ port: {{ .Values.proxy.service.port }}
+ targetPort: 4180
+ protocol: TCP
+ selector:
+ app: {{ $name }}
+ component: {{ $name }}-proxy
+ release: {{ .Release.Name }}
diff --git a/incubator/buzzfeed-sso/templates/secret.yaml b/incubator/buzzfeed-sso/templates/secret.yaml
new file mode 100644
index 000000000000..6c27d77240c0
--- /dev/null
+++ b/incubator/buzzfeed-sso/templates/secret.yaml
@@ -0,0 +1,22 @@
+{{- if or .Values.auth.secret .Values.proxy.secret }}
+apiVersion: v1
+kind: Secret
+metadata:
+ name: {{ template "buzzfeed-sso.fullname" . }}
+ labels:
+ app: {{ template "buzzfeed-sso.name" . }}
+ chart: {{ template "buzzfeed-sso.chart" . }}
+ release: {{ .Release.Name }}
+ heritage: {{ .Release.Service }}
+type: Opaque
+data:
+{{- with .Values.proxy.secret }}
+ proxy-client-id: {{ .clientId | b64enc }}
+ proxy-client-secret: {{ .clientSecret | b64enc }}
+ proxy-cookie-secret: {{ .cookieSecret | b64enc }}
+{{- end }}
+{{- with .Values.auth.secret }}
+ auth-code-secret: {{ .codeSecret | b64enc }}
+ auth-cookie-secret: {{ .cookieSecret | b64enc }}
+{{- end }}
+{{- end }}
diff --git a/incubator/buzzfeed-sso/values.yaml b/incubator/buzzfeed-sso/values.yaml
new file mode 100644
index 000000000000..dfaf3c1ca54d
--- /dev/null
+++ b/incubator/buzzfeed-sso/values.yaml
@@ -0,0 +1,98 @@
+# Default values for buzzfeed-sso.
+
+emailDomain: "" # Required. e.g "email.mydomain.foo"
+rootDomain: "" # Required. e.g "mydomain.foo"
+
+auth:
+ annotations: {}
+ domain: "" # Required. e.g "sso-auth.mydomain.foo"
+ replicaCount: 1
+ resources:
+ limits:
+ memory: "256Mi"
+ cpu: "200m"
+ nodeSelector: {}
+ tolerations: []
+ affinity: {}
+ service:
+ type: ClusterIP
+ port: 80
+ # Generate these secrets with the command:
+ # 'openssl rand -base64 32 | head -c 32 | base64'
+ secret: {} # Required (if customSecret is not set)
+ # codeSecret: ''
+ # cookieSecret: ''
+ # # Or if you do not want to create the secret via helm
+ # customSecret: my-sso-auth-secret
+ tls:
+ secretName: sso-auth-tls-secret
+
+proxy:
+ annotations: {}
+ # providerUrlInternal: https://sso-auth.mydomain.com
+ cluster: dev
+ replicaCount: 1
+ resources:
+ limits:
+ memory: "256Mi"
+ cpu: "200m"
+ nodeSelector: {}
+ tolerations: []
+ affinity: {}
+ service:
+ type: ClusterIP
+ port: 80
+ # Generate these secrets with the command:
+ # 'openssl rand -base64 32 | head -c 32 | base64'
+ secret: {} # Required (if customSecret is not set)
+ # clientId: ''
+ # clientSecret: ''
+ # cookieSecret: ''
+ # # Or if you do not want to create the secret via helm
+ # customSecret: my-sso-proxy-secret
+
+provider:
+ google: {} # Required.
+ # google:
+ # adminEmail: me@mydomain.foo
+ # secret:
+ # clientId: foo123123-fake123123.apps.googleusercontent.com
+ # clientSecret: googleOauthClientSecret
+ # serviceAccount: 'service account content in JSON format'
+ # # Or if you do not want to create the secret via helm
+ # google:
+ # adminEmail: me@mydomain.foo
+ # customSecret: my-sso-google-secret
+
+image:
+ repository: buzzfeed/sso
+ tag: v1.1.0
+ pullPolicy: IfNotPresent
+
+ingress:
+ annotations: {}
+ # kubernetes.io/ingress.class: nginx
+ # certmanager.k8s.io/cluster-issuer: my-letsencrypt-issuer
+ # ingress.kubernetes.io/ssl-redirect: "true"
+ hosts: []
+ # - domain: mybackend.mydomain.foo
+ # path: /
+ tls: []
+ # - secretName: mybackend-mydomain-tls
+ # hosts:
+ # - mybackend.mydomain.foo
+
+upstreams: []
+# See https://github.com/buzzfeed/sso/blob/f437f237ac977201f15868601c9bc0e9dff11f40/docs/sso_config.md#proxy-config
+# - service: mybackend
+# default:
+# from: mybackend.mydomain.foo
+# to: http://mybackend.mynamespace.svc.cluster.local:9091
+# options:
+# allowed_groups:
+# - sso-test-group-1@example.com
+# - sso-test-group-2@example.com
+# skip_auth_regex:
+# - ^\/github-webhook\/$
+# header_overrides:
+# X-Frame-Options: DENY
diff --git a/incubator/kube-spot-termination-notice-handler/.helmignore b/incubator/cassandra-reaper/.helmignore
similarity index 100%
rename from incubator/kube-spot-termination-notice-handler/.helmignore
rename to incubator/cassandra-reaper/.helmignore
diff --git a/incubator/cassandra-reaper/Chart.yaml b/incubator/cassandra-reaper/Chart.yaml
new file mode 100644
index 000000000000..01706cd2c49b
--- /dev/null
+++ b/incubator/cassandra-reaper/Chart.yaml
@@ -0,0 +1,12 @@
+apiVersion: v1
+appVersion: 1.3.0
+description: Reaper is a centralized, stateful, and highly configurable tool for running Apache Cassandra repairs against single or multi-site clusters.
+name: cassandra-reaper
+home: http://cassandra-reaper.io/
+keywords:
+ - cassandra
+version: 0.2.0
+maintainers:
+ - name: kamsz
+ email: kamil@szczygiel.io
+engine: gotpl
diff --git a/incubator/cassandra-reaper/OWNERS b/incubator/cassandra-reaper/OWNERS
new file mode 100644
index 000000000000..abbcc2e89966
--- /dev/null
+++ b/incubator/cassandra-reaper/OWNERS
@@ -0,0 +1,6 @@
+approvers:
+- kamsz
+- reillybrogan
+reviewers:
+- kamsz
+- reillybrogan
diff --git a/incubator/cassandra-reaper/README.md b/incubator/cassandra-reaper/README.md
new file mode 100644
index 000000000000..4b99e4429747
--- /dev/null
+++ b/incubator/cassandra-reaper/README.md
@@ -0,0 +1,53 @@
+# Cassandra Reaper
+A cassandra-reaper chart for Kubernetes
+
+## Install Chart
+To install the cassandra-reaper chart into your Kubernetes cluster:
+
+```bash
+helm install --namespace cassandra -n cassandra-reaper incubator/cassandra-reaper
+```
+
+To delete the release, use this command:
+```bash
+helm delete --purge cassandra-reaper
+```
+
+## Configuration
+
+The following table lists the configurable parameters of the cassandra-reaper chart and their default values.
+
+To properly configure `cassandra-reaper`, please refer to [the environment variables documentation](http://cassandra-reaper.io/docs/configuration/docker_vars/).
+
+Since `cassandra-reaper` currently lacks an authentication mechanism, basic auth support is provided (whether this works for you depends on your chosen ingress
+controller). Check your ingress controller's documentation for how to configure this, as each implementation is slightly different. Note that you need to
+provide a base64-encoded version of the auth file contents if you enable this feature.
+
+Example:
+```bash
+htpasswd -c ./auth myuser
+cat ./auth | base64
+```
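
If `htpasswd` is not installed, `openssl passwd -apr1` can produce a compatible entry; this is a sketch where `myuser` and `mypassword` are placeholders:

```shell
# Build an htpasswd-style "user:hash" line and base64-encode it
# for use as the ingress.basicAuth.secret value.
printf 'myuser:%s\n' "$(openssl passwd -apr1 mypassword)" | base64
```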
+
+
+| Parameter | Description | Default |
+| -------------------------- | ------------------------------------------------------ | ---------------------------------------------------------- |
+| `replicaCount` | The number of `cassandra-reaper` replicas | `1` |
+| `image.repository` | `cassandra-reaper` image repository | `thelastpickle/cassandra-reaper` |
+| `image.tag` | `cassandra-reaper` image tag | `1.3.0` |
+| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
+| `service.type` | Kubernetes service type exposing ports, e.g. `NodePort`| `ClusterIP` |
+| `ingress.enabled` | Enable Ingress resource | `false` |
+| `ingress.annotations` | Annotations for Ingress resource | `{}` |
+| `ingress.labels` | Additional labels for Ingress resource | `{}` |
+| `ingress.path` | Path for Ingress resource | `/` |
+| `ingress.hosts` | Ingress resource hosts | `[]` |
+| `ingress.tls` | Ingress resource TLS definition | `[]` |
+| `ingress.basicAuth.enabled`| Creates basic auth secret if true | `false` |
+| `ingress.basicAuth.name` | Name of the basic auth secret resource | `basic-auth` |
+| `ingress.basicAuth.secret` | Base64 encoded contents of the basic auth file | MUST be provided if basic auth is enabled |
+| `env` | Environment variables | `{}` |
+| `resources` | Resource requests/limits | `{}` |
+| `nodeSelector` | Kubernetes node selector | `{}` |
+| `tolerations` | Kubernetes node tolerations | `[]` |
+| `affinity` | Kubernetes node affinity | `{}` |
diff --git a/incubator/cassandra-reaper/templates/_helpers.tpl b/incubator/cassandra-reaper/templates/_helpers.tpl
new file mode 100644
index 000000000000..5deeb223aed5
--- /dev/null
+++ b/incubator/cassandra-reaper/templates/_helpers.tpl
@@ -0,0 +1,32 @@
+{{/* vim: set filetype=mustache: */}}
+{{/*
+Expand the name of the chart.
+*/}}
+{{- define "cassandra-reaper.name" -}}
+{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+
+{{/*
+Create a default fully qualified app name.
+We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
+If release name contains chart name it will be used as a full name.
+*/}}
+{{- define "cassandra-reaper.fullname" -}}
+{{- if .Values.fullnameOverride -}}
+{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
+{{- else -}}
+{{- $name := default .Chart.Name .Values.nameOverride -}}
+{{- if contains $name .Release.Name -}}
+{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
+{{- else -}}
+{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Create chart name and version as used by the chart label.
+*/}}
+{{- define "cassandra-reaper.chart" -}}
+{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
diff --git a/incubator/cassandra-reaper/templates/deployment.yaml b/incubator/cassandra-reaper/templates/deployment.yaml
new file mode 100644
index 000000000000..0db5ff57a10f
--- /dev/null
+++ b/incubator/cassandra-reaper/templates/deployment.yaml
@@ -0,0 +1,65 @@
+apiVersion: apps/v1beta2
+kind: Deployment
+metadata:
+ name: {{ include "cassandra-reaper.fullname" . }}
+ labels:
+ app.kubernetes.io/name: {{ include "cassandra-reaper.name" . }}
+ helm.sh/chart: {{ include "cassandra-reaper.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+spec:
+ replicas: {{ .Values.replicaCount }}
+ selector:
+ matchLabels:
+ app.kubernetes.io/name: {{ include "cassandra-reaper.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ template:
+ metadata:
+ labels:
+ app.kubernetes.io/name: {{ include "cassandra-reaper.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ spec:
+ containers:
+ - name: {{ .Chart.Name }}
+ image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
+ imagePullPolicy: {{ .Values.image.pullPolicy }}
+ env:
+ {{- range $key, $value := .Values.env }}
+ - name: {{ $key }}
+ value: {{ $value | quote }}
+ {{- end }}
+ ports:
+ - name: http
+ containerPort: 8080
+ protocol: TCP
+ - name: api
+ containerPort: 8081
+ protocol: TCP
+ livenessProbe:
+ httpGet:
+ path: /
+ port: api
+ initialDelaySeconds: 60
+ periodSeconds: 20
+ timeoutSeconds: 5
+ readinessProbe:
+ httpGet:
+ path: /
+ port: api
+ initialDelaySeconds: 10
+ periodSeconds: 10
+ timeoutSeconds: 5
+ resources:
+{{ toYaml .Values.resources | indent 12 }}
+ {{- with .Values.nodeSelector }}
+ nodeSelector:
+{{ toYaml . | indent 8 }}
+ {{- end }}
+ {{- with .Values.affinity }}
+ affinity:
+{{ toYaml . | indent 8 }}
+ {{- end }}
+ {{- with .Values.tolerations }}
+ tolerations:
+{{ toYaml . | indent 8 }}
+ {{- end }}
diff --git a/incubator/cassandra-reaper/templates/ingress.yaml b/incubator/cassandra-reaper/templates/ingress.yaml
new file mode 100644
index 000000000000..a96c3ac27378
--- /dev/null
+++ b/incubator/cassandra-reaper/templates/ingress.yaml
@@ -0,0 +1,39 @@
+{{- if .Values.ingress.enabled -}}
+{{- $fullName := include "cassandra-reaper.fullname" . -}}
+{{- $ingressPath := .Values.ingress.path -}}
+apiVersion: extensions/v1beta1
+kind: Ingress
+metadata:
+ name: {{ $fullName }}
+ labels:
+ app.kubernetes.io/name: {{ include "cassandra-reaper.name" . }}
+ helm.sh/chart: {{ include "cassandra-reaper.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+{{ toYaml .Values.ingress.labels | indent 4 }}
+{{- with .Values.ingress.annotations }}
+ annotations:
+{{ toYaml . | indent 4 }}
+{{- end }}
+spec:
+{{- if .Values.ingress.tls }}
+ tls:
+ {{- range .Values.ingress.tls }}
+ - hosts:
+ {{- range .hosts }}
+ - {{ . | quote }}
+ {{- end }}
+ secretName: {{ .secretName }}
+ {{- end }}
+{{- end }}
+ rules:
+ {{- range .Values.ingress.hosts }}
+ - host: {{ . | quote }}
+ http:
+ paths:
+ - path: {{ $ingressPath }}
+ backend:
+ serviceName: {{ $fullName }}
+ servicePort: http
+ {{- end }}
+{{- end }}
diff --git a/incubator/cassandra-reaper/templates/secret.yaml b/incubator/cassandra-reaper/templates/secret.yaml
new file mode 100644
index 000000000000..53b9593e2bb1
--- /dev/null
+++ b/incubator/cassandra-reaper/templates/secret.yaml
@@ -0,0 +1,14 @@
+{{- if and (.Values.ingress.enabled) (.Values.ingress.basicAuth.enabled) -}}
+apiVersion: v1
+kind: Secret
+metadata:
+ name: {{ .Values.ingress.basicAuth.name | default "basic-auth" }}
+ labels:
+ app.kubernetes.io/name: {{ include "cassandra-reaper.name" . }}
+ helm.sh/chart: {{ include "cassandra-reaper.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+type: Opaque
+data:
+ auth: {{ required ".Values.ingress.basicAuth.secret is required when basicAuth is enabled" .Values.ingress.basicAuth.secret }}
+{{- end }}
diff --git a/incubator/cassandra-reaper/templates/service.yaml b/incubator/cassandra-reaper/templates/service.yaml
new file mode 100644
index 000000000000..cf54babd06df
--- /dev/null
+++ b/incubator/cassandra-reaper/templates/service.yaml
@@ -0,0 +1,19 @@
+apiVersion: v1
+kind: Service
+metadata:
+ name: {{ include "cassandra-reaper.fullname" . }}
+ labels:
+ app.kubernetes.io/name: {{ include "cassandra-reaper.name" . }}
+ helm.sh/chart: {{ include "cassandra-reaper.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+spec:
+ type: {{ .Values.service.type }}
+ ports:
+ - port: 8080
+ targetPort: http
+ protocol: TCP
+ name: http
+ selector:
+ app.kubernetes.io/name: {{ include "cassandra-reaper.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
diff --git a/incubator/cassandra-reaper/values.yaml b/incubator/cassandra-reaper/values.yaml
new file mode 100644
index 000000000000..f147e72027b3
--- /dev/null
+++ b/incubator/cassandra-reaper/values.yaml
@@ -0,0 +1,40 @@
+# Default values for cassandra-reaper.
+# This is a YAML-formatted file.
+# Declare variables to be passed into your templates.
+
+nameOverride: ""
+fullnameOverride: ""
+
+replicaCount: 1
+
+image:
+ repository: thelastpickle/cassandra-reaper
+ tag: 1.3.0
+ pullPolicy: IfNotPresent
+
+service:
+ type: ClusterIP
+
+ingress:
+ enabled: false
+ annotations: {}
+ # kubernetes.io/ingress.class: nginx
+ # kubernetes.io/tls-acme: "true"
+ labels: {}
+ path: /
+ hosts: []
+ tls: []
+ # - secretName: chart-example-tls
+ # hosts:
+ # - chart-example.local
+ basicAuth:
+ enabled: false
+ name: ~
+ # base64 encoded version of the basic auth file
+ secret: ~
+
+env: {}
+resources: {}
+nodeSelector: {}
+tolerations: []
+affinity: {}
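The `ingress.basicAuth.secret` value must hold a base64-encoded htpasswd file, since the secret template writes it into `data.auth` verbatim. A minimal sketch of producing such a value, assuming `openssl` is available (the user name and password are placeholders; `htpasswd -nb admin s3cr3t` from apache2-utils produces an equivalent line):

```shell
# Build an htpasswd-style line with an apr1 (MD5) hash, then base64-encode
# it into the single-line form Kubernetes Secret data expects.
auth_line="admin:$(openssl passwd -apr1 's3cr3t')"
secret=$(printf '%s\n' "$auth_line" | base64 | tr -d '\n')
echo "$secret"
```

The result can then be passed via `--set ingress.basicAuth.enabled=true --set ingress.basicAuth.secret=$secret`.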
diff --git a/incubator/cassandra/Chart.yaml b/incubator/cassandra/Chart.yaml
index 16fd4b92ed8d..0b9a2e38f667 100644
--- a/incubator/cassandra/Chart.yaml
+++ b/incubator/cassandra/Chart.yaml
@@ -1,5 +1,6 @@
+apiVersion: v1
name: cassandra
-version: 0.10.3
+version: 0.12.2
appVersion: 3.11.3
description: Apache Cassandra is a free and open-source distributed database management
system designed to handle large amounts of data across many commodity servers, providing
@@ -13,4 +14,6 @@ home: http://cassandra.apache.org
maintainers:
- name: KongZ
email: goonohc@gmail.com
+- name: maorfr
+ email: maor.friedman@redhat.com
engine: gotpl
diff --git a/incubator/cassandra/README.md b/incubator/cassandra/README.md
index 6cdbb8aa0cd4..c33754b49033 100644
--- a/incubator/cassandra/README.md
+++ b/incubator/cassandra/README.md
@@ -19,6 +19,24 @@ If you want to delete your Chart, use this command
helm delete --purge "cassandra"
```
+## Upgrading
+
+To upgrade your Cassandra release, simply run
+
+```bash
+helm upgrade "cassandra" incubator/cassandra
+```
+
+### 0.12.0
+
+This version fixes https://github.com/helm/charts/issues/7803 by removing the mutable labels in `spec.volumeClaimTemplates.metadata.labels` so that the StatefulSet is upgradable.
+
+Prior to this version, in order to upgrade, you had to delete the Cassandra StatefulSet before upgrading:
+```bash
+$ kubectl delete statefulset --cascade=false my-cassandra-release
+```
+
## Persist data
You need to create a `StorageClass` before being able to persist data in a persistent volume.
To create a `StorageClass` on Google Cloud, run the following
@@ -88,6 +106,7 @@ The following table lists the configurable parameters of the Cassandra chart and
| `config.cluster_name` | The name of the cluster. | `cassandra` |
| `config.cluster_size` | The number of nodes in the cluster. | `3` |
| `config.seed_size` | The number of seed nodes used to bootstrap new clients joining the cluster. | `2` |
+| `config.seeds` | The comma-separated list of seed nodes. | Automatically generated according to `.Release.Name` and `config.seed_size` |
| `config.num_tokens` | Initdb Arguments | `256` |
| `config.dc_name` | Initdb Arguments | `DC1` |
| `config.rack_name` | Initdb Arguments | `RAC1` |
@@ -128,17 +147,19 @@ The following table lists the configurable parameters of the Cassandra chart and
| `backup.enabled` | Enable backup on chart installation | `false` |
| `backup.schedule` | Keyspaces to backup, each with cron time | |
| `backup.annotations` | Backup pod annotations | iam.amazonaws.com/role: `cain` |
-| `backup.image.repo` | Backup image repository | `nuvo/cain` |
-| `backup.image.tag` | Backup image tag | `0.4.1` |
+| `backup.image.repository` | Backup image repository | `maorfr/cain` |
+| `backup.image.tag` | Backup image tag | `0.6.0` |
| `backup.extraArgs` | Additional arguments for cain | `[]` |
| `backup.env` | Backup environment variables | AWS_REGION: `us-east-1` |
| `backup.resources` | Backup CPU/Memory resource requests/limits | Memory: `1Gi`, CPU: `1` |
| `backup.destination` | Destination to store backup artifacts | `s3://bucket/cassandra` |
+| `backup.google.serviceAccountSecret` | Secret containing credentials if GCS is used as destination | |
| `exporter.enabled` | Enable Cassandra exporter | `false` |
| `exporter.image.repo` | Exporter image repository | `criteord/cassandra_exporter` |
| `exporter.image.tag` | Exporter image tag | `2.0.2` |
| `exporter.port` | Exporter port | `5556` |
| `exporter.jvmOpts` | Exporter additional JVM options | |
+| `exporter.resources` | Exporter CPU/Memory resource requests/limits | `{}` |
| `affinity` | Kubernetes node affinity | `{}` |
| `tolerations` | Kubernetes node tolerations | `[]` |
diff --git a/incubator/cassandra/templates/backup/cronjob.yaml b/incubator/cassandra/templates/backup/cronjob.yaml
index 36461f6ab80e..28d7b1419136 100644
--- a/incubator/cassandra/templates/backup/cronjob.yaml
+++ b/incubator/cassandra/templates/backup/cronjob.yaml
@@ -7,7 +7,7 @@
apiVersion: batch/v1beta1
kind: CronJob
metadata:
- name: {{ template "cassandra.fullname" $ }}-backup-{{ $schedule.keyspace }}
+ name: {{ template "cassandra.fullname" $ }}-backup-{{ $schedule.keyspace | replace "_" "-" }}
labels:
app: {{ template "cassandra.name" $ }}-cain
chart: {{ template "cassandra.chart" $ }}
@@ -28,7 +28,7 @@ spec:
serviceAccountName: {{ template "cassandra.serviceAccountName" $ }}
containers:
- name: cassandra-backup
- image: "{{ $backup.image.repos }}:{{ $backup.image.tag }}"
+ image: "{{ $backup.image.repository }}:{{ $backup.image.tag }}"
command: ["cain"]
args:
- backup
@@ -42,15 +42,30 @@ spec:
- {{ $backup.destination }}
{{- with $backup.extraArgs }}
{{ toYaml . | indent 12 }}
- {{- end }}
- {{- with $backup.env }}
+ {{- end }}
env:
+{{- if $backup.google.serviceAccountSecret }}
+ - name: GOOGLE_APPLICATION_CREDENTIALS
+ value: "/etc/secrets/google/credentials.json"
+{{- end }}
+ {{- with $backup.env }}
{{ toYaml . | indent 12 }}
{{- end }}
{{- with $backup.resources }}
resources:
{{ toYaml . | indent 14 }}
{{- end }}
+{{- if $backup.google.serviceAccountSecret }}
+ volumeMounts:
+ - name: google-service-account
+ mountPath: /etc/secrets/google/
+{{- end }}
+{{- if $backup.google.serviceAccountSecret }}
+ volumes:
+ - name: google-service-account
+ secret:
+ secretName: {{ $backup.google.serviceAccountSecret | quote }}
+{{- end }}
affinity:
podAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
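The GCS support added above expects a pre-existing secret whose `credentials.json` key holds a service-account key; the cronjob then mounts it and points `GOOGLE_APPLICATION_CREDENTIALS` at it. A sketch of the matching values (secret and bucket names are placeholders):

```yaml
backup:
  enabled: true
  destination: gs://my-bucket/cassandra        # placeholder bucket
  google:
    # Secret must contain a credentials.json key with the service-account key
    serviceAccountSecret: cassandra-backup-gcs
```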
diff --git a/incubator/cassandra/templates/statefulset.yaml b/incubator/cassandra/templates/statefulset.yaml
index 412a4cbcf215..d2b47450b5ce 100644
--- a/incubator/cassandra/templates/statefulset.yaml
+++ b/incubator/cassandra/templates/statefulset.yaml
@@ -30,6 +30,7 @@ spec:
{{ toYaml .Values.podAnnotations | indent 8 }}
{{- end }}
spec:
+ hostNetwork: {{ .Values.hostNetwork }}
{{- if .Values.selector }}
{{ toYaml .Values.selector | indent 6 }}
{{- end }}
@@ -50,6 +51,8 @@ spec:
{{- if .Values.exporter.enabled }}
- name: cassandra-exporter
image: "{{ .Values.exporter.image.repo }}:{{ .Values.exporter.image.tag }}"
+ resources:
+{{ toYaml .Values.exporter.resources | indent 10 }}
env:
- name: CASSANDRA_EXPORTER_CONFIG_listenPort
value: {{ .Values.exporter.port | quote }}
@@ -86,7 +89,11 @@ spec:
{{- $seed_size := default 1 .Values.config.seed_size | int -}}
{{- $global := . }}
- name: CASSANDRA_SEEDS
+ {{- if .Values.hostNetwork }}
+ value: {{ required "You must fill \".Values.config.seeds\" with list of Cassandra seeds when hostNetwork is set to true" .Values.config.seeds | quote }}
+ {{- else }}
value: "{{- range $i, $e := until $seed_size }}{{ template "cassandra.fullname" $global }}-{{ $i }}.{{ template "cassandra.fullname" $global }}.{{ $global.Release.Namespace }}.svc.{{ $global.Values.config.cluster_domain }}{{- if (lt ( add1 $i ) $seed_size ) }},{{- end }}{{- end }}"
+ {{- end }}
- name: MAX_HEAP_SIZE
value: {{ default "8192M" .Values.config.max_heap_size | quote }}
- name: HEAP_NEWSIZE
@@ -176,9 +183,7 @@ spec:
name: data
labels:
app: {{ template "cassandra.name" . }}
- chart: {{ template "cassandra.chart" . }}
release: {{ .Release.Name }}
- heritage: {{ .Release.Service }}
spec:
accessModes:
- {{ .Values.persistence.accessMode | quote }}
diff --git a/incubator/cassandra/values.yaml b/incubator/cassandra/values.yaml
index 86cec364e33c..9607cab599b0 100644
--- a/incubator/cassandra/values.yaml
+++ b/incubator/cassandra/values.yaml
@@ -148,8 +148,12 @@ serviceAccount:
# If not set and create is true, a name is generated using the fullname template
# name:
+# Use host network for Cassandra pods
+# You must pass a list of seeds via the config.seeds property if set to true
+hostNetwork: false
+
## Backup cronjob configuration
-## Ref: https://github.com/nuvo/cain
+## Ref: https://github.com/maorfr/cain
backup:
enabled: false
@@ -167,11 +171,11 @@ backup:
iam.amazonaws.com/role: cain
image:
- repos: nuvo/cain
- tag: 0.4.1
+ repository: maorfr/cain
+ tag: 0.6.0
# Additional arguments for cain
- # Ref: https://github.com/nuvo/cain#usage
+ # Ref: https://github.com/maorfr/cain#usage
extraArgs: []
# Add additional environment variables
@@ -188,10 +192,14 @@ backup:
memory: 1Gi
cpu: 1
+ # Name of the secret containing the credentials of the service account used by GOOGLE_APPLICATION_CREDENTIALS, as a credentials.json file
+ # google:
+ # serviceAccountSecret:
+
# Destination to store the backup artifacts
- # Supported cloud storage services: AWS S3, Minio S3, Azure Blob Storage
+ # Supported cloud storage services: AWS S3, Minio S3, Azure Blob Storage, Google Cloud Storage
  # Additional support can be added. Visit this repository for details
- # Ref: https://github.com/nuvo/skbn
+ # Ref: https://github.com/maorfr/skbn
destination: s3://bucket/cassandra
## Cassandra exported configuration
@@ -203,3 +211,10 @@ exporter:
tag: 2.0.2
port: 5556
jvmOpts: ""
+ resources: {}
+ # limits:
+ # cpu: 1
+ # memory: 1Gi
+ # requests:
+ # cpu: 1
+ # memory: 1Gi
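When `hostNetwork` is enabled, the seed list generated from pod DNS names no longer applies, so the statefulset template requires an explicit `config.seeds`. A sketch (the addresses are placeholders):

```yaml
hostNetwork: true
config:
  # Required when hostNetwork is true; comma-separated node addresses
  seeds: "10.0.0.1,10.0.0.2,10.0.0.3"
```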
diff --git a/incubator/couchdb/Chart.yaml b/incubator/couchdb/Chart.yaml
index 657d1d905d2e..1c2879c44ed3 100644
--- a/incubator/couchdb/Chart.yaml
+++ b/incubator/couchdb/Chart.yaml
@@ -1,5 +1,6 @@
+apiVersion: v1
name: couchdb
-version: 1.1.1
+version: 1.1.3
appVersion: 2.3.0
description: A database featuring seamless multi-master sync, that scales from
big data to mobile, with an intuitive HTTP/JSON API and designed for
diff --git a/incubator/couchdb/README.md b/incubator/couchdb/README.md
index 7cd31a8b810d..9440d59295d7 100644
--- a/incubator/couchdb/README.md
+++ b/incubator/couchdb/README.md
@@ -125,6 +125,7 @@ A variety of other parameters are also configurable. See the comments in the
| `podManagementPolicy` | Parallel |
| `affinity` | |
| `resources` | |
+| `service.annotations` | |
| `service.enabled` | true |
| `service.type` | ClusterIP |
| `service.externalPort` | 5984 |
diff --git a/incubator/couchdb/templates/service.yaml b/incubator/couchdb/templates/service.yaml
index 3393d0447b1f..d4325b903b56 100644
--- a/incubator/couchdb/templates/service.yaml
+++ b/incubator/couchdb/templates/service.yaml
@@ -8,6 +8,10 @@ metadata:
chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
+{{- if .Values.service.annotations }}
+ annotations:
+{{ toYaml .Values.service.annotations | indent 4 }}
+{{- end }}
spec:
ports:
- port: {{ .Values.service.externalPort }}
diff --git a/incubator/couchdb/values.yaml b/incubator/couchdb/values.yaml
index d1a981696424..ee8c75457766 100644
--- a/incubator/couchdb/values.yaml
+++ b/incubator/couchdb/values.yaml
@@ -77,6 +77,7 @@ affinity:
## chart without any additional configuration. The Service block below refers
## to a second Service that governs how clients connect to the CouchDB cluster.
service:
+ # annotations:
enabled: true
type: ClusterIP
externalPort: 5984
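The new `service.annotations` block is rendered verbatim into the Service's metadata; a sketch of how it might be used (the annotation key is only an example, consult your cloud provider's documentation):

```yaml
service:
  enabled: true
  type: LoadBalancer
  annotations:
    # Example: request an internal load balancer on AWS
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
```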
diff --git a/incubator/etcd/Chart.yaml b/incubator/etcd/Chart.yaml
index 20967c30527f..022384e13c17 100755
--- a/incubator/etcd/Chart.yaml
+++ b/incubator/etcd/Chart.yaml
@@ -1,6 +1,7 @@
+apiVersion: v1
name: etcd
home: https://github.com/coreos/etcd
-version: 0.6.2
+version: 0.6.3
appVersion: 2.2.5
description: Distributed reliable key-value store for the most critical data of a
distributed system.
diff --git a/incubator/fluentd-cloudwatch/Chart.yaml b/incubator/fluentd-cloudwatch/Chart.yaml
index 36d329d3d9ad..89076e753650 100644
--- a/incubator/fluentd-cloudwatch/Chart.yaml
+++ b/incubator/fluentd-cloudwatch/Chart.yaml
@@ -1,6 +1,7 @@
+apiVersion: v1
name: fluentd-cloudwatch
-version: 0.6.4
-appVersion: v0.12.43-cloudwatch
+version: 0.10.0
+appVersion: v1.3.3-debian-cloudwatch-1.0
description: A Fluentd CloudWatch Helm chart for Kubernetes.
home: https://www.fluentd.org/
icon: https://raw.githubusercontent.com/fluent/fluentd-docs/master/public/logo/Fluentd_square.png
diff --git a/incubator/fluentd-cloudwatch/README.md b/incubator/fluentd-cloudwatch/README.md
index fec16b71722c..93d82cb3774f 100644
--- a/incubator/fluentd-cloudwatch/README.md
+++ b/incubator/fluentd-cloudwatch/README.md
@@ -46,30 +46,34 @@ The command removes all the Kubernetes components associated with the chart and
The following table lists the configurable parameters of the Fluentd Cloudwatch chart and their default values.
-| Parameter | Description | Default |
-| ------------------------------- | ------------------------------------------------------------------------- | --------------------------------------|
-| `image.repository` | Image repository | `fluent/fluentd-kubernetes-daemonset` |
-| `image.tag` | Image tag | `v0.12.43-cloudwatch` |
-| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
-| `resources.limits.cpu` | CPU limit | `100m` |
-| `resources.limits.memory` | Memory limit | `200Mi` |
-| `resources.requests.cpu` | CPU request | `100m` |
-| `resources.requests.memory` | Memory request | `200Mi` |
-| `hostNetwork` | Host network | `false` |
-| `annotations` (removed for now) | Annotations | `nil` |
-| `awsRegion` | AWS Cloudwatch region | `us-east-1` |
-| `awsRole` | AWS IAM Role To Use | `nil` |
-| `awsAccessKeyId` | AWS Access Key Id of a AWS user with a policy to access Cloudwatch | `nil` |
-| `awsSecretAccessKey` | AWS Secret Access Key of a AWS user with a policy to access Cloudwatch | `nil` |
-| `fluentdConfig` | Fluentd configuration | `example configuration` |
-| `logGroupName` | AWS Cloudwatch log group | `kubernetes` |
-| `rbac.create` | If true, create & use RBAC resources | `false` |
-| `rbac.serviceAccountName` | existing ServiceAccount to use (ignored if rbac.create=true) | `default` |
-| `tolerations` | Add tolerations | `[]` |
-| `extraVars` | Add pod environment variables (must be specified as a single line object) | `[]` |
-| `updateStrategy` | Define daemonset update strategy | `OnDelete` |
-
-Starting with fluentd-kubernetes-daemonset v0.12.43-cloudwatch, the container runs as user fluentd. To be able to write pos files to the host system, you'll need to run fluentd as root. Add the following extraVars value to run as root.
+| Parameter | Description | Default |
+| ---------------------------- | ------------------------------------------------------------------------- | --------------------------------------|
+| `image.repository` | Image repository | `fluent/fluentd-kubernetes-daemonset` |
+| `image.tag` | Image tag | `v1.3.3-debian-cloudwatch-1.0` |
+| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
+| `resources.limits.cpu` | CPU limit | `100m` |
+| `resources.limits.memory` | Memory limit | `200Mi` |
+| `resources.requests.cpu` | CPU request | `100m` |
+| `resources.requests.memory` | Memory request | `200Mi` |
+| `hostNetwork` | Host network | `false` |
+| `podAnnotations` | Annotations | `{}` |
+| `podSecurityContext` | Security Context | `{}` |
+| `awsRegion` | AWS Cloudwatch region | `us-east-1` |
+| `awsRole` | AWS IAM Role To Use | `nil` |
+| `awsAccessKeyId` | AWS Access Key Id of a AWS user with a policy to access Cloudwatch | `nil` |
+| `awsSecretAccessKey` | AWS Secret Access Key of a AWS user with a policy to access Cloudwatch | `nil` |
+| `fluentdConfig` | Fluentd configuration | `example configuration` |
+| `logGroupName` | AWS Cloudwatch log group | `kubernetes` |
+| `rbac.create` | If true, create & use RBAC resources | `false` |
+| `rbac.serviceAccountName` | existing ServiceAccount to use (ignored if rbac.create=true) | `default` |
+| `tolerations` | Add tolerations | `[]` |
+| `extraVars` | Add pod environment variables (must be specified as a single line object) | `[]` |
+| `updateStrategy` | Define daemonset update strategy | `OnDelete` |
+| `nodeSelector` | Node labels for pod assignment | `{}` |
+| `affinity` | Node affinity for pod assignment | `{}` |
+| `priorityClassName` | Set priority class for daemon set | `nil` |
+
+If using fluentd-kubernetes-daemonset v0.12.43-cloudwatch, the container runs as the fluentd user. To be able to write pos files to the host system, you'll need to run fluentd as root; add the following extraVars value to do so.
```code
"{ name: FLUENT_UID, value: '0' }"
diff --git a/incubator/fluentd-cloudwatch/templates/configmap.yaml b/incubator/fluentd-cloudwatch/templates/configmap.yaml
index b6f89a09ee8e..e6eac5077bc8 100644
--- a/incubator/fluentd-cloudwatch/templates/configmap.yaml
+++ b/incubator/fluentd-cloudwatch/templates/configmap.yaml
@@ -8,4 +8,4 @@ metadata:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
data:
- fluent.conf: {{ toYaml .Values.fluentdConfig | indent 2 }}
+{{ toYaml .Values.data | indent 2 }}
diff --git a/incubator/fluentd-cloudwatch/templates/daemonset.yaml b/incubator/fluentd-cloudwatch/templates/daemonset.yaml
index b72a6e1f472c..c517f40e8a1e 100644
--- a/incubator/fluentd-cloudwatch/templates/daemonset.yaml
+++ b/incubator/fluentd-cloudwatch/templates/daemonset.yaml
@@ -20,6 +20,8 @@ spec:
{{ toYaml .Values.podAnnotations | indent 8 }}
{{- end }}
spec:
+ securityContext:
+{{ toYaml .Values.podSecurityContext | indent 8 }}
serviceAccountName: {{ if .Values.rbac.create }}{{ template "fluentd-cloudwatch.fullname" . }}{{ else }}"{{ .Values.rbac.serviceAccountName }}"{{ end }}
initContainers:
- name: copy-fluentd-config
@@ -69,6 +71,9 @@ spec:
readOnly: true
- name: config
mountPath: /fluentd/etc
+{{- if .Values.priorityClassName }}
+ priorityClassName: "{{ .Values.priorityClassName }}"
+{{- end }}
terminationGracePeriodSeconds: 30
volumes:
- name: varlog
@@ -86,5 +91,13 @@ spec:
tolerations:
{{ toYaml .Values.tolerations | indent 6 }}
{{- end }}
+ {{- if .Values.affinity }}
+ affinity:
+{{ toYaml .Values.affinity | indent 8 }}
+ {{- end }}
+ {{- if .Values.nodeSelector }}
+ nodeSelector:
+{{ toYaml .Values.nodeSelector | indent 8 }}
+ {{- end }}
updateStrategy:
-{{ toYaml .Values.updateStrategy | indent 4 }}
\ No newline at end of file
+{{ toYaml .Values.updateStrategy | indent 4 }}
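The scheduling knobs added above (`priorityClassName`, `nodeSelector`, `affinity`) are all optional; a sketch of values exercising two of them (the class name and node label are placeholders):

```yaml
priorityClassName: logging-critical   # must reference an existing PriorityClass
nodeSelector:
  kubernetes.io/role: node
```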
diff --git a/incubator/fluentd-cloudwatch/values.yaml b/incubator/fluentd-cloudwatch/values.yaml
index ebc8dc25bec8..5e09b03b3a23 100644
--- a/incubator/fluentd-cloudwatch/values.yaml
+++ b/incubator/fluentd-cloudwatch/values.yaml
@@ -1,6 +1,6 @@
image:
repository: fluent/fluentd-kubernetes-daemonset
- tag: v0.12.43-cloudwatch
+ tag: v1.3.3-debian-cloudwatch-1.0
## Specify an imagePullPolicy (Required)
## It's recommended to change this to 'Always' if the image tag is 'latest'
## ref: http://kubernetes.io/docs/user-guide/images/#updating-images
@@ -9,24 +9,52 @@ image:
## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
-resources:
- limits:
- cpu: 100m
- memory: 200Mi
- requests:
- cpu: 100m
- memory: 200Mi
+resources: {}
+# We usually recommend not to specify default resources and to leave this as a conscious
+# choice for the user. This also increases chances charts run on environments with little
+# resources, such as Minikube. If you do want to specify resources, uncomment the following
+# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
+# limits:
+# cpu: 100m
+# memory: 200Mi
+# requests:
+# cpu: 100m
+# memory: 200Mi
# hostNetwork: false
+## Node labels for pod assignment
+## Ref: https://kubernetes.io/docs/user-guide/node-selection/
+##
+nodeSelector: {}
+ # kubernetes.io/role: node
+# Ref: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#affinity-v1-core
+# Expects input structure as per specification for example:
+# affinity:
+# nodeAffinity:
+# requiredDuringSchedulingIgnoredDuringExecution:
+# nodeSelectorTerms:
+# - matchExpressions:
+# - key: foo.bar.com/role
+# operator: In
+# values:
+# - master
+affinity: {}
## Add tolerations if specified
tolerations: []
# - key: node-role.kubernetes.io/master
# operator: Exists
# effect: NoSchedule
+podSecurityContext: {}
+
podAnnotations: {}
+# Pod priority
+# Sets PriorityClassName if defined.
+#
+# priorityClassName: "my-priority-class"
+
awsRegion: us-east-1
awsRole:
awsAccessKeyId:
@@ -46,146 +74,147 @@ extraVars: []
updateStrategy:
type: OnDelete
-fluentdConfig: |
-
- type null
-
-
-
- type tail
- enable_stat_watcher false
- path /var/log/containers/*.log
- pos_file /var/log/fluentd-containers.log.pos
- time_format %Y-%m-%dT%H:%M:%S.%NZ
- tag kubernetes.*
- format json
- read_from_head true
-
-
-
- type tail
- enable_stat_watcher false
- format /^(?
-
-
- type tail
- enable_stat_watcher false
- format syslog
- path /var/log/startupscript.log
- pos_file /var/log/fluentd-startupscript.log.pos
- tag startupscript
-
-
-
- type tail
- enable_stat_watcher false
- format /^time="(?
-
-
- type tail
- enable_stat_watcher false
- format none
- path /var/log/etcd.log
- pos_file /var/log/fluentd-etcd.log.pos
- tag etcd
-
-
-
- type tail
- enable_stat_watcher false
- format kubernetes
- multiline_flush_interval 5s
- path /var/log/kubelet.log
- pos_file /var/log/fluentd-kubelet.log.pos
- tag kubelet
-
-
-
- type tail
- enable_stat_watcher false
- format kubernetes
- multiline_flush_interval 5s
- path /var/log/kube-proxy.log
- pos_file /var/log/fluentd-kube-proxy.log.pos
- tag kube-proxy
-
-
-
- type tail
- enable_stat_watcher false
- format kubernetes
- multiline_flush_interval 5s
- path /var/log/kube-apiserver.log
- pos_file /var/log/fluentd-kube-apiserver.log.pos
- tag kube-apiserver
-
-
-
- type tail
- enable_stat_watcher false
- format kubernetes
- multiline_flush_interval 5s
- path /var/log/kube-controller-manager.log
- pos_file /var/log/fluentd-kube-controller-manager.log.pos
- tag kube-controller-manager
-
-
-
- type tail
- enable_stat_watcher false
- format kubernetes
- multiline_flush_interval 5s
- path /var/log/kube-scheduler.log
- pos_file /var/log/fluentd-kube-scheduler.log.pos
- tag kube-scheduler
-
-
-
- type tail
- enable_stat_watcher false
- format kubernetes
- multiline_flush_interval 5s
- path /var/log/rescheduler.log
- pos_file /var/log/fluentd-rescheduler.log.pos
- tag rescheduler
-
-
-
- type tail
- enable_stat_watcher false
- format kubernetes
- multiline_flush_interval 5s
- path /var/log/glbc.log
- pos_file /var/log/fluentd-glbc.log.pos
- tag glbc
-
-
-
- type tail
- enable_stat_watcher false
- format kubernetes
- multiline_flush_interval 5s
- path /var/log/cluster-autoscaler.log
- pos_file /var/log/fluentd-cluster-autoscaler.log.pos
- tag cluster-autoscaler
-
-
-
- type kubernetes_metadata
-
-
-
- type cloudwatch_logs
- log_group_name "#{ENV['LOG_GROUP_NAME']}"
- auto_create_stream true
- use_tag_as_stream true
-
+data:
+ fluent.conf: |
+
+ type null
+
+
+
+ type tail
+ enable_stat_watcher false
+ path /var/log/containers/*.log
+ pos_file /var/log/fluentd-containers.log.pos
+ time_format %Y-%m-%dT%H:%M:%S.%NZ
+ tag kubernetes.*
+ format json
+ read_from_head true
+
+
+
+ type tail
+ enable_stat_watcher false
+ format /^(?
+
+
+ type tail
+ enable_stat_watcher false
+ format syslog
+ path /var/log/startupscript.log
+ pos_file /var/log/fluentd-startupscript.log.pos
+ tag startupscript
+
+
+
+ type tail
+ enable_stat_watcher false
+ format /^time="(?
+
+
+ type tail
+ enable_stat_watcher false
+ format none
+ path /var/log/etcd.log
+ pos_file /var/log/fluentd-etcd.log.pos
+ tag etcd
+
+
+
+ type tail
+ enable_stat_watcher false
+ format kubernetes
+ multiline_flush_interval 5s
+ path /var/log/kubelet.log
+ pos_file /var/log/fluentd-kubelet.log.pos
+ tag kubelet
+
+
+
+ type tail
+ enable_stat_watcher false
+ format kubernetes
+ multiline_flush_interval 5s
+ path /var/log/kube-proxy.log
+ pos_file /var/log/fluentd-kube-proxy.log.pos
+ tag kube-proxy
+
+
+
+ type tail
+ enable_stat_watcher false
+ format kubernetes
+ multiline_flush_interval 5s
+ path /var/log/kube-apiserver.log
+ pos_file /var/log/fluentd-kube-apiserver.log.pos
+ tag kube-apiserver
+
+
+
+ type tail
+ enable_stat_watcher false
+ format kubernetes
+ multiline_flush_interval 5s
+ path /var/log/kube-controller-manager.log
+ pos_file /var/log/fluentd-kube-controller-manager.log.pos
+ tag kube-controller-manager
+
+
+
+ type tail
+ enable_stat_watcher false
+ format kubernetes
+ multiline_flush_interval 5s
+ path /var/log/kube-scheduler.log
+ pos_file /var/log/fluentd-kube-scheduler.log.pos
+ tag kube-scheduler
+
+
+
+ type tail
+ enable_stat_watcher false
+ format kubernetes
+ multiline_flush_interval 5s
+ path /var/log/rescheduler.log
+ pos_file /var/log/fluentd-rescheduler.log.pos
+ tag rescheduler
+
+
+
+ type tail
+ enable_stat_watcher false
+ format kubernetes
+ multiline_flush_interval 5s
+ path /var/log/glbc.log
+ pos_file /var/log/fluentd-glbc.log.pos
+ tag glbc
+
+
+
+ type tail
+ enable_stat_watcher false
+ format kubernetes
+ multiline_flush_interval 5s
+ path /var/log/cluster-autoscaler.log
+ pos_file /var/log/fluentd-cluster-autoscaler.log.pos
+ tag cluster-autoscaler
+
+
+
+ type kubernetes_metadata
+
+
+
+ type cloudwatch_logs
+ log_group_name "#{ENV['LOG_GROUP_NAME']}"
+ auto_create_stream true
+ use_tag_as_stream true
+
diff --git a/incubator/gogs/Chart.yaml b/incubator/gogs/Chart.yaml
index cde1ac6d6655..ccf84d2e08dd 100644
--- a/incubator/gogs/Chart.yaml
+++ b/incubator/gogs/Chart.yaml
@@ -1,11 +1,12 @@
apiVersion: v1
description: 'Gogs: Go Git Service'
name: gogs
-version: 0.7.6
-appVersion: 0.11.79
+version: 0.7.9
+appVersion: 0.11.86
home: https://gogs.io/
icon: https://gogs.io/img/favicon.ico
maintainers:
- name: obeyler
+ - name: poblin-orange
keywords:
- git
diff --git a/incubator/gogs/OWNERS b/incubator/gogs/OWNERS
index 683b9aa040a6..d0bf2d2b879b 100644
--- a/incubator/gogs/OWNERS
+++ b/incubator/gogs/OWNERS
@@ -1,4 +1,6 @@
approvers:
- obeyler
+- poblin-orange
reviewers:
- obeyler
+- poblin-orange
diff --git a/incubator/gogs/README.md b/incubator/gogs/README.md
index 2b88b6bde562..c4d4fb192795 100644
--- a/incubator/gogs/README.md
+++ b/incubator/gogs/README.md
@@ -44,13 +44,14 @@ chart and their default values.
| Parameter | Description | Default |
| ----------------------- | ---------------------------------- | ---------------------------------------------------------- |
| `image.repository` | Gogs image | `gogs/gogs` |
-| `image.tag` | Gogs image tag | `0.11.66` |
+| `image.tag` | Gogs image tag | `0.11.86` |
| `image.pullPolicy` | Gogs image pull policy | `Always` if `imageTag` is `latest`, else `IfNotPresent` |
| `postgresql.install` | Whether or not to install PostgreSQL dependency | `true` |
| `postgresql.postgresHost` | PostgreSQL host (if `postgresql.install == false`) | `nil` |
| `postgresql.postgresUser` | PostgreSQL User to create | `gogs` |
| `postgresql.postgresPassword` | PostgreSQL Password for the new user | `gogs` |
| `postgresql.postgresDatabase` | PostgreSQL Database to create | `gogs` |
+| `postgresql.postgresSSLMode` | PostgreSQL SSL Mode | `disable` |
| `postgresql.persistence.enabled` | Enable PostgreSQL persistence using Persistent Volume Claims | `true` |
| `service.httpNodePort` | Enable a static port where the Gogs http service is exposed on each Node’s IP | `nil` |
| `service.sshNodePort` | Enable a static port where the Gogs ssh service is exposed on each Node’s IP | `nil` |
diff --git a/incubator/gogs/templates/_helpers.tpl b/incubator/gogs/templates/_helpers.tpl
index a6c20193f1c3..f270ff055e2a 100644
--- a/incubator/gogs/templates/_helpers.tpl
+++ b/incubator/gogs/templates/_helpers.tpl
@@ -75,3 +75,14 @@ Determine database name based on use of postgresql dependency.
{{- .Values.service.gogs.databaseName | quote -}}
{{- end -}}
{{- end -}}
+
+{{/*
+Determine database SSL mode based on use of postgresql dependency.
+*/}}
+{{- define "gogs.database.ssl_mode" -}}
+{{- if .Values.postgresql.install -}}
+{{- .Values.postgresql.postgresSSLMode | quote -}}
+{{- else -}}
+{{- .Values.service.gogs.databaseSSLMode | quote -}}
+{{- end -}}
+{{- end -}}
diff --git a/incubator/gogs/templates/configmap.yaml b/incubator/gogs/templates/configmap.yaml
index 56835e8ce3c6..b709e7b5d11b 100644
--- a/incubator/gogs/templates/configmap.yaml
+++ b/incubator/gogs/templates/configmap.yaml
@@ -39,12 +39,25 @@ data:
ENABLE_REVERSE_PROXY_AUTHENTICATION = false
ENABLE_REVERSE_PROXY_AUTO_REGISTRATION = false
+ [mailer]
+ ENABLED = {{ .Values.service.gogs.mailerEnabled }}
+ HOST = {{ .Values.service.gogs.mailerHost }}
+ DISABLE_HELO = false
+ HELO_HOSTNAME =
+ SKIP_VERIFY = {{ .Values.service.gogs.mailerSkipVerify }}
+ SUBJECT_PREFIX = {{ .Values.service.gogs.mailerSubjectPrefix }}
+ FROM = {{ .Values.service.gogs.mailerFrom }}
+ USER = {{ .Values.service.gogs.mailerUser }}
+ PASSWD = {{ .Values.service.gogs.mailerPasswd }}
+    USE_PLAIN_TEXT = false
+
[database]
DB_TYPE = {{ .Values.service.gogs.databaseType | quote }}
HOST = {{ template "gogs.database.host" . }}
NAME = {{ template "gogs.database.name" . }}
USER = {{ template "gogs.database.user" . }}
PASSWD = {{ template "gogs.database.password" . }}
+ SSL_MODE = {{ template "gogs.database.ssl_mode" . }}
[security]
INSTALL_LOCK = {{ .Values.service.gogs.installLock }}
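The new `[mailer]` section is driven entirely by the `service.gogs.mailer*` values; a sketch of enabling it (host, addresses, and credentials are placeholders):

```yaml
service:
  gogs:
    mailerEnabled: true
    mailerHost: smtp.example.com:587
    mailerUser: gogs@example.com
    mailerPasswd: changeme
    mailerFrom: '"Gogs" <gogs@example.com>'
    mailerSubjectPrefix: '[gogs]'
    mailerSkipVerify: false
```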
diff --git a/incubator/gogs/templates/deployment.yaml b/incubator/gogs/templates/deployment.yaml
index d0cd87762791..e12601844306 100644
--- a/incubator/gogs/templates/deployment.yaml
+++ b/incubator/gogs/templates/deployment.yaml
@@ -19,6 +19,18 @@ spec:
{{- with .Values.securityContext }}
securityContext:
{{ toYaml . | indent 8 }}
+ {{- end }}
+ {{- if .Values.nodeSelector }}
+ nodeSelector:
+{{ toYaml .Values.nodeSelector | indent 8 }}
+ {{- end }}
+ {{- if .Values.affinity }}
+ affinity:
+{{ toYaml .Values.affinity | indent 8 }}
+ {{- end }}
+ {{- if .Values.tolerations }}
+ tolerations:
+{{ toYaml .Values.tolerations | indent 8 }}
{{- end }}
containers:
- name: {{ .Chart.Name }}
@@ -56,7 +68,11 @@ spec:
- name: data
{{- if .Values.persistence.enabled }}
persistentVolumeClaim:
+ {{- if .Values.persistence.existingClaim }}
+ claimName: {{ .Values.persistence.existingClaim }}
+ {{- else }}
claimName: {{ template "gogs.fullname" . }}
+ {{- end -}}
{{- else }}
emptyDir: {}
{{- end -}}
diff --git a/incubator/gogs/templates/pvc.yaml b/incubator/gogs/templates/pvc.yaml
index 458bc40f43f1..56ccae800549 100644
--- a/incubator/gogs/templates/pvc.yaml
+++ b/incubator/gogs/templates/pvc.yaml
@@ -1,4 +1,4 @@
-{{- if .Values.persistence.enabled }}
+{{- if and .Values.persistence.enabled (not .Values.persistence.existingClaim) }}
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
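Together with the deployment change above, this guard on pvc.yaml means a pre-provisioned claim both suppresses the chart-managed PVC and is mounted as the data volume. A minimal sketch (the claim name is an example):

```yaml
persistence:
  enabled: true
  existingClaim: gogs-data  # must already exist in the release namespace
```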
diff --git a/incubator/gogs/values.yaml b/incubator/gogs/values.yaml
index fb7ebe0dc56d..8aba2efddf5b 100644
--- a/incubator/gogs/values.yaml
+++ b/incubator/gogs/values.yaml
@@ -11,7 +11,7 @@ replicaCount: 1
image:
repository: gogs/gogs
- tag: 0.11.79
+ tag: 0.11.86
pullPolicy: IfNotPresent
service:
@@ -105,6 +105,34 @@ service:
##
serviceEnableNotifyMail: false
+ ## Enable this to send mail with SMTP server.
+ ##
+ mailerEnabled: false
+
+ ## SMTP server host.
+ ##
+ mailerHost:
+
+ ## SMTP server user.
+ ##
+ mailerUser:
+
+ ## SMTP server password.
+ ##
+ mailerPasswd:
+
+ ## Mail from address, RFC 5322 format: email@example.com, or "Name" <email@example.com>
+ ##
+ mailerFrom:
+
+ ## Prefix prepended to the mail subject.
+ ##
+ mailerSubjectPrefix:
+
+ ## Skip verification of self-signed TLS certificates on the SMTP server.
+ ##
+ mailerSkipVerify: false
+
## Either "memory", "redis", or "memcache", default is "memory"
##
cacheAdapter: memory
@@ -153,6 +181,10 @@ service:
##
databaseName:
+ ## Database SSL Mode for Postgres. Unused unless `postgresql.install` is false.
+ ##
+ databaseSSLMode: disable
+
## Hook task queue length, increase if webhook shooting starts hanging
##
webhookQueueLength: 1000
@@ -294,6 +326,10 @@ persistence:
##
enabled: true
+ ## If defined, the PVC must be created manually before the volume will be bound
+ ##
+ # existingClaim: "-"
+
## gogs data Persistent Volume Storage Class
## If defined, storageClassName:
## If set to "-", storageClassName: "", which disables dynamic provisioning
@@ -333,6 +369,10 @@ postgresql:
##
postgresDatabase: gogs
+ ## PostgreSQL SSL Mode
+ ##
+ postgresSSLMode: disable
+
## Persistent Volume Storage configuration.
## ref: https://kubernetes.io/docs/user-guide/persistent-volumes
##
@@ -340,5 +380,11 @@ postgresql:
## Enable PostgreSQL persistence using Persistent Volume Claims.
##
enabled: true
+
## Security context
securityContext: {}
+
+## Node selector, affinity and tolerations for pod assignment
+nodeSelector: {}
+affinity: {}
+tolerations: []
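These three new top-level values are passed through `toYaml` unmodified, so they accept standard Kubernetes pod-spec fields. For example (the labels and taint keys are illustrative):

```yaml
nodeSelector:
  disktype: ssd
tolerations:
  - key: dedicated
    operator: Equal
    value: gogs
    effect: NoSchedule
```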
diff --git a/incubator/haproxy-ingress/Chart.yaml b/incubator/haproxy-ingress/Chart.yaml
index 14e445c3568e..6b6bf945b1dd 100644
--- a/incubator/haproxy-ingress/Chart.yaml
+++ b/incubator/haproxy-ingress/Chart.yaml
@@ -1,6 +1,7 @@
+apiVersion: v1
name: haproxy-ingress
-version: 0.0.6
-appVersion: 0.7.0
+version: 0.0.14
+appVersion: 0.7.1
home: https://github.com/jcmoraisjr/haproxy-ingress
description: Ingress controller implementation for haproxy loadbalancer.
icon: http://www.haproxy.org/img/HAProxyCommunityEdition_60px.png
diff --git a/incubator/haproxy-ingress/README.md b/incubator/haproxy-ingress/README.md
index 5dfe77deea3a..a107870f3a52 100644
--- a/incubator/haproxy-ingress/README.md
+++ b/incubator/haproxy-ingress/README.md
@@ -36,17 +36,19 @@ The following table lists the configurable parameters of the haproxy-ingress cha
Parameter | Description | Default
--- | --- | ---
`rbac.create` | If true, create & use RBAC resources | `true`
+`rbac.security.enable` | If true, and rbac.create is true, create & use PSP resources | `false`
`serviceAccount.create` | If true, create serviceAccount | `true`
`serviceAccount.name` | ServiceAccount to be used | ``
`controller.name` | name of the controller component | `controller`
`controller.image.repository` | controller container image repository | `quay.io/jcmoraisjr/haproxy-ingress`
-`controller.image.tag` | controller container image tag | `v0.7-beta.5`
+`controller.image.tag` | controller container image tag | `v0.7.1`
`controller.image.pullPolicy` | controller container image pullPolicy | `IfNotPresent`
+`controller.initContainers` | extra init containers to run before the haproxy-ingress-controller starts | `[]`
`controller.extraArgs` | extra command line arguments for the haproxy-ingress-controller | `{}`
`controller.extraEnv` | extra environment variables for the haproxy-ingress-controller | `{}`
`controller.template` | custom template for haproxy-ingress-controller | `{}`
`controller.defaultBackendService` | backend service if defaultBackend.enabled==false | `""`
-`controller.ingressClass` | name of the ingress class to route through this controller | `happroxy`
+`controller.ingressClass` | name of the ingress class to route through this controller | `haproxy`
`controller.healthzPort` | The haproxy health check (monitoring) port | `10253`
`controller.livenessProbe.path` | The liveness probe path | `/healthz`
`controller.livenessProbe.port` | The liveness probe port | `10253`
@@ -62,14 +64,17 @@ Parameter | Description | Default
`controller.readinessProbe.periodSeconds` | The readiness probe period (in seconds) | `10`
`controller.readinessProbe.successThreshold` | The readiness probe success threshold | `1`
`controller.readinessProbe.timeoutSeconds` | The readiness probe timeout (in seconds) | `1`
-`controller.podAnnotations` | Annotations for the haproxy-ingress-conrtoller pod | `{}`
-`controller.podLabels` | Labels for the haproxy-ingress-conrtoller pod | `{}`
-`controller.securityContext` | Security context settings for the haproxy-ingress-conrtoller pod | `{}`
+`controller.podAnnotations` | Annotations for the haproxy-ingress-controller pod | `{}`
+`controller.podLabels` | Labels for the haproxy-ingress-controller pod | `{}`
+`controller.podAffinity` | Add affinity to the controller pods to control scheduling | `{}`
+`controller.priorityClassName` | Priority Class to be used | ``
+`controller.securityContext` | Security context settings for the haproxy-ingress-controller pod | `{}`
`controller.config` | additional haproxy-ingress [ConfigMap entries](https://github.com/jcmoraisjr/haproxy-ingress/blob/v0.6/README.md#configmap) | `{}`
`controller.hostNetwork` | Optionally set to true when using CNI based kubernetes installations | `false`
`controller.dnsPolicy` | Optionally change this to ClusterFirstWithHostNet in case you have 'hostNetwork: true' | `ClusterFirst`
`controller.kind` | Type of deployment, DaemonSet or Deployment | `Deployment`
`controller.tcp` | TCP [service ConfigMap](https://github.com/jcmoraisjr/haproxy-ingress/blob/v0.6/README.md#tcp-services-configmap): `<port>: <namespace>/<service>:<port>[:[<in-proxy>][:<out-proxy>]]` | `{}`
+`controller.enableStaticPorts` | Set to `false` to only rely on ports from `controller.tcp` | `true`
`controller.daemonset.useHostPort` | Set to true to use host ports 80 and 443 | `false`
`controller.daemonset.hostPorts.http` | If `controller.daemonset.useHostPort` is `true` and this is non-empty sets the hostPort for http | `"80"`
`controller.daemonset.hostPorts.https` | If `controller.daemonset.useHostPort` is `true` and this is non-empty sets the hostPort for https | `"443"`
@@ -80,17 +85,16 @@ Parameter | Description | Default
`controller.minReadySeconds` | seconds to avoid killing pods before we are ready | `0`
`controller.replicaCount` | the number of replicas to deploy (when `controller.kind` is `Deployment`) | `1`
`controller.minAvailable` | PodDisruptionBudget minimum available controller pods | `1`
-`controller.resources` | controller pod resource requests & limits | `{}`
+`controller.resources` | controller container resource requests & limits | `{}`
`controller.autoscaling.enabled` | enabling controller horizontal pod autoscaling (when `controller.kind` is `Deployment`) | `false`
-`controller.autoscaling.minReplicas` | minimum number of replicas |
-`controller.autoscaling.maxReplicas` | maximum number of replicas |
-`controller.autoscaling.targetCPUUtilizationPercentage` | target cpu utilization |
-`controller.autoscaling.targetMemoryUtilizationPercentage` | target memory utilization |
+`controller.autoscaling.minReplicas` | minimum number of replicas |
+`controller.autoscaling.maxReplicas` | maximum number of replicas |
+`controller.autoscaling.targetCPUUtilizationPercentage` | target cpu utilization |
+`controller.autoscaling.targetMemoryUtilizationPercentage` | target memory utilization |
`controller.autoscaling.customMetrics` | Extra custom metrics to add to the HPA | `[]`
`controller.tolerations` | to control scheduling to servers with taints | `[]`
`controller.affinity` | to control scheduling | `{}`
`controller.nodeSelector` | to control scheduling | `{}`
-`controller.accessLogsSidecar` | enable a sidecar container that collects access logs from haproxy and outputs to stdout | `true`
`controller.service.annotations` | annotations for controller service | `{}`
`controller.service.labels` | labels for controller service | `{}`
`controller.service.clusterIP` | internal controller cluster service IP | `""`
@@ -111,9 +115,11 @@ Parameter | Description | Default
`controller.stats.service.servicePort` | the port number exposed by the stats service | `1936`
`controller.stats.service.type` | type of controller service to create | `ClusterIP`
`controller.metrics.enabled` | If `controller.stats.enabled = true` and `controller.metrics.enabled = true`, Prometheus metrics will be exported | `false`
-`controller.metrics.image.repository` | prometheus exporter container image repository | `quay.io/prometheus/haproxy-exporter`
-`controller.metrics.image.tag` | prometheus exporter image tag | `v0.9.0`
-`controller.metrics.image.pullPolicy` | prometheus exporter image pullPolicy | `IfNotPresent`
+`controller.metrics.image.repository` | prometheus-exporter image repository | `quay.io/prometheus/haproxy-exporter`
+`controller.metrics.image.tag` | prometheus-exporter image tag | `v0.10.0`
+`controller.metrics.image.pullPolicy` | prometheus-exporter image pullPolicy | `IfNotPresent`
+`controller.metrics.extraArgs` | Extra arguments to the haproxy_exporter | `{}`
+`controller.metrics.resources` | prometheus-exporter container resource requests & limits | `{}`
`controller.metrics.service.annotations` | annotations for metrics service | `{}`
`controller.metrics.service.clusterIP` | internal metrics cluster service IP | `""`
`controller.metrics.service.externalIPs` | list of IP addresses at which the metrics service is available | `[]`
@@ -121,6 +127,11 @@ Parameter | Description | Default
`controller.metrics.service.loadBalancerSourceRanges` | source IP ranges permitted to access the metrics service | `[]`
`controller.metrics.service.servicePort` | the port number exposed by the metrics service | `1936`
`controller.metrics.service.type` | type of controller service to create | `ClusterIP`
+`controller.logs.enabled` | enable an access-logs sidecar container that collects access logs from haproxy and outputs to stdout | `false`
+`controller.logs.image.repository` | access-logs container image repository | `quay.io/prometheus/haproxy-exporter`
+`controller.logs.image.tag` | access-logs image tag | `v0.10.0`
+`controller.logs.image.pullPolicy` | access-logs image pullPolicy | `IfNotPresent`
+`controller.logs.resources` | access-logs container resource requests & limits | `{}`
`defaultBackend.enabled` | whether to use the default backend component | `true`
`defaultBackend.name` | name of the default backend component | `default-backend`
`defaultBackend.image.repository` | default backend container image repository | `gcr.io/google_containers/defaultbackend`
@@ -133,7 +144,7 @@ Parameter | Description | Default
`defaultBackend.podLabels` | Labels for the default backend pod | `{}`
`defaultBackend.replicaCount` | the number of replicas to deploy (when `controller.kind` is `Deployment`) | `1`
`defaultBackend.minAvailable` | PodDisruptionBudget minimum available default backend pods | `1`
-`defaultBackend.resources` | default backend pod resources | _see defaults below_
+`defaultBackend.resources` | default backend pod resources | _see defaults below_
`defaultBackend.resources.limits.cpu` | default backend cpu resources limit | `10m`
`defaultBackend.resources.limits.memory` | default backend memory resources limit | `20Mi`
`defaultBackend.service.name` | name of default backend service to create | `ingress-default-backend`
diff --git a/incubator/haproxy-ingress/templates/clusterrole.yaml b/incubator/haproxy-ingress/templates/clusterrole.yaml
index 9b05cf52617e..1cc5880c7369 100644
--- a/incubator/haproxy-ingress/templates/clusterrole.yaml
+++ b/incubator/haproxy-ingress/templates/clusterrole.yaml
@@ -1,4 +1,4 @@
-{{- if or .Values.rbac.create -}}
+{{- if .Values.rbac.create -}}
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
@@ -35,7 +35,7 @@ rules:
- list
- watch
- apiGroups:
- - "extensions"
+ - extensions
resources:
- ingresses
verbs:
@@ -50,7 +50,7 @@ rules:
- create
- patch
- apiGroups:
- - "extensions"
+ - extensions
resources:
- ingresses/status
verbs:
diff --git a/incubator/haproxy-ingress/templates/controller-configmap.yaml b/incubator/haproxy-ingress/templates/controller-configmap.yaml
index d46ba6626414..590d876096c9 100644
--- a/incubator/haproxy-ingress/templates/controller-configmap.yaml
+++ b/incubator/haproxy-ingress/templates/controller-configmap.yaml
@@ -11,7 +11,7 @@ metadata:
data:
healthz-port: {{ .Values.controller.healthzPort | quote }}
stats-port: {{ .Values.controller.stats.port | quote }}
-{{- if .Values.controller.accessLogsSidecar }}
+{{- if .Values.controller.logs.enabled }}
syslog-endpoint: "localhost:514"
{{- end }}
{{- if .Values.controller.config }}
diff --git a/incubator/haproxy-ingress/templates/controller-daemonset.yaml b/incubator/haproxy-ingress/templates/controller-daemonset.yaml
index f8f7f42eba53..4ece0908380e 100644
--- a/incubator/haproxy-ingress/templates/controller-daemonset.yaml
+++ b/incubator/haproxy-ingress/templates/controller-daemonset.yaml
@@ -1,4 +1,4 @@
-{{- if eq .Values.controller.kind "DaemonSet" }}
+{{ if eq .Values.controller.kind "DaemonSet" -}}
apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
@@ -19,11 +19,11 @@ spec:
release: {{ .Release.Name }}
template:
metadata:
- {{- if .Values.controller.podAnnotations }}
annotations:
{{- if .Values.controller.template }}
checksum/config: {{ include (print $.Template.BasePath "/controller-template.yaml") . | sha256sum }}
{{- end }}
+ {{- if .Values.controller.podAnnotations }}
{{ toYaml .Values.controller.podAnnotations | indent 8}}
{{- end }}
labels:
@@ -34,7 +34,15 @@ spec:
{{ toYaml .Values.controller.podLabels | indent 8 }}
{{- end }}
spec:
+ {{- if .Values.controller.podAffinity }}
+ affinity:
+{{ toYaml .Values.controller.podAffinity | indent 8 }}
+ {{- end }}
serviceAccountName: {{ template "haproxy-ingress.serviceAccountName" . }}
+ {{- if .Values.controller.initContainers }}
+ initContainers:
+{{ toYaml .Values.controller.initContainers | indent 8 }}
+ {{- end }}
containers:
- name: haproxy-ingress
image: "{{ .Values.controller.image.repository }}:{{ .Values.controller.image.tag }}"
@@ -123,20 +131,16 @@ spec:
{{- end }}
resources:
{{ toYaml .Values.controller.resources | indent 12 }}
- {{- if .Values.controller.accessLogsSidecar }}
+ {{- if .Values.controller.logs.enabled }}
- name: access-logs
- image: whereisaaron/kube-syslog-sidecar
+ image: "{{ .Values.controller.logs.image.repository }}:{{ .Values.controller.logs.image.tag }}"
+ imagePullPolicy: "{{ .Values.controller.logs.image.pullPolicy }}"
ports:
- name: udp
containerPort: 514
protocol: UDP
resources:
- limits:
- cpu: 200m
- memory: 512Mi
- requests:
- cpu: 100m
- memory: 256Mi
+{{ toYaml .Values.controller.logs.resources | indent 12 }}
{{- end }}
{{- if and .Values.controller.stats.enabled .Values.controller.metrics.enabled }}
- name: prometheus-exporter
@@ -144,21 +148,23 @@ spec:
imagePullPolicy: "{{ .Values.controller.metrics.image.pullPolicy }}"
args:
- '--haproxy.scrape-uri=http://localhost:{{ .Values.controller.stats.port }}/haproxy?stats;csv'
+ {{- range $key, $value := .Values.controller.metrics.extraArgs }}
+ {{- if $value }}
+ - --{{ $key }}={{ $value }}
+ {{- else }}
+ - --{{ $key }}
+ {{- end }}
+ {{- end }}
ports:
- name: metrics
containerPort: 9101
protocol: TCP
- readinessProbe:
+ livenessProbe:
httpGet:
path: /
- port: 9101
+ port: metrics
resources:
- limits:
- cpu: 200m
- memory: 512Mi
- requests:
- cpu: 100m
- memory: 256Mi
+{{ toYaml .Values.controller.metrics.resources | indent 12 }}
{{- end }}
{{- if .Values.controller.template }}
volumes:
@@ -181,4 +187,11 @@ spec:
affinity:
{{ toYaml .Values.controller.affinity | indent 8 }}
{{- end }}
+ {{- if .Values.controller.priorityClassName }}
+ priorityClassName: {{ .Values.controller.priorityClassName | quote }}
+ {{- end }}
+ {{- if .Values.controller.securityContext }}
+ securityContext:
+{{ toYaml .Values.controller.securityContext | indent 8 }}
+ {{- end }}
{{- end }}
diff --git a/incubator/haproxy-ingress/templates/controller-deployment.yaml b/incubator/haproxy-ingress/templates/controller-deployment.yaml
index 90ee680d4e23..ce04175b018f 100644
--- a/incubator/haproxy-ingress/templates/controller-deployment.yaml
+++ b/incubator/haproxy-ingress/templates/controller-deployment.yaml
@@ -1,4 +1,4 @@
-{{- if eq .Values.controller.kind "Deployment" }}
+{{ if eq .Values.controller.kind "Deployment" -}}
apiVersion: apps/v1beta2
kind: Deployment
metadata:
@@ -36,7 +36,15 @@ spec:
{{ toYaml .Values.controller.podLabels | indent 8 }}
{{- end }}
spec:
+ {{- if .Values.controller.podAffinity }}
+ affinity:
+{{ toYaml .Values.controller.podAffinity | indent 8 }}
+ {{- end }}
serviceAccountName: {{ template "haproxy-ingress.serviceAccountName" . }}
+ {{- if .Values.controller.initContainers }}
+ initContainers:
+{{ toYaml .Values.controller.initContainers | indent 8 }}
+ {{- end }}
containers:
- name: haproxy-ingress
image: "{{ .Values.controller.image.repository }}:{{ .Values.controller.image.tag }}"
@@ -57,10 +65,12 @@ spec:
{{- end }}
{{- end }}
ports:
+ {{- if .Values.controller.enableStaticPorts }}
- name: http
containerPort: 80
- name: https
containerPort: 443
+ {{- end }}
{{- if .Values.controller.stats.enabled }}
- name: stats
containerPort: {{ .Values.controller.stats.port }}
@@ -69,8 +79,8 @@ spec:
- name: healthz
containerPort: {{ .Values.controller.healthzPort }}
{{- range $key, $value := .Values.controller.tcp }}
- - name: "{{ $key }}-tcp"
- containerPort: {{ $key }}
+ - name: "{{ tpl $key $ }}-tcp"
+ containerPort: {{ tpl $key $ }}
protocol: TCP
{{- end }}
livenessProbe:
@@ -112,20 +122,16 @@ spec:
{{- end }}
resources:
{{ toYaml .Values.controller.resources | indent 12 }}
- {{- if .Values.controller.accessLogsSidecar }}
+ {{- if .Values.controller.logs.enabled }}
- name: access-logs
- image: whereisaaron/kube-syslog-sidecar
+ image: "{{ .Values.controller.logs.image.repository }}:{{ .Values.controller.logs.image.tag }}"
+ imagePullPolicy: "{{ .Values.controller.logs.image.pullPolicy }}"
ports:
- name: udp
containerPort: 514
protocol: UDP
resources:
- limits:
- cpu: 200m
- memory: 512Mi
- requests:
- cpu: 100m
- memory: 256Mi
+{{ toYaml .Values.controller.logs.resources | indent 12 }}
{{- end }}
{{- if and .Values.controller.stats.enabled .Values.controller.metrics.enabled }}
- name: prometheus-exporter
@@ -133,21 +139,23 @@ spec:
imagePullPolicy: "{{ .Values.controller.metrics.image.pullPolicy }}"
args:
- '--haproxy.scrape-uri=http://localhost:{{ .Values.controller.stats.port }}/haproxy?stats;csv'
+ {{- range $key, $value := .Values.controller.metrics.extraArgs }}
+ {{- if $value }}
+ - --{{ $key }}={{ $value }}
+ {{- else }}
+ - --{{ $key }}
+ {{- end }}
+ {{- end }}
ports:
- name: metrics
containerPort: 9101
protocol: TCP
- readinessProbe:
+ livenessProbe:
httpGet:
path: /
- port: 9101
+ port: metrics
resources:
- limits:
- cpu: 200m
- memory: 512Mi
- requests:
- cpu: 100m
- memory: 256Mi
+{{ toYaml .Values.controller.metrics.resources | indent 12 }}
{{- end }}
{{- if .Values.controller.template }}
volumes:
@@ -174,4 +182,7 @@ spec:
securityContext:
{{ toYaml .Values.controller.securityContext | indent 8 }}
{{- end }}
+ {{- if .Values.controller.priorityClassName }}
+ priorityClassName: {{ .Values.controller.priorityClassName | quote }}
+ {{- end }}
{{- end }}
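With the new `enableStaticPorts: false`, the container (and, via the matching Service template change, the Service) exposes only the ports derived from `controller.tcp`, which suits controllers that front pure TCP services. A sketch:

```yaml
controller:
  enableStaticPorts: false  # drop the default http/https ports
  tcp:
    "3306": "default/mysql:3306"
```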
diff --git a/incubator/haproxy-ingress/templates/controller-service.yaml b/incubator/haproxy-ingress/templates/controller-service.yaml
index 560480fd382b..ffcd678aac01 100644
--- a/incubator/haproxy-ingress/templates/controller-service.yaml
+++ b/incubator/haproxy-ingress/templates/controller-service.yaml
@@ -38,6 +38,7 @@ spec:
healthCheckNodePort: {{ .Values.controller.service.healthCheckNodePort }}
{{- end }}
ports:
+ {{- if .Values.controller.enableStaticPorts }}
{{- range .Values.controller.service.httpPorts }}
- name: "{{ .port }}-http"
port: {{ .port }}
@@ -56,11 +57,12 @@ spec:
nodePort: {{ .nodePort }}
{{- end }}
{{- end }}
- {{- range $key, $value := .Values.tcp }}
- - name: "{{ $key }}-tcp"
- port: {{ $key }}
+ {{- end }}
+ {{- range $key, $value := .Values.controller.tcp }}
+ - name: "{{ tpl $key $ }}-tcp"
+ port: {{ tpl $key $ }}
protocol: TCP
- targetPort: "{{ $key }}-tcp"
+ targetPort: "{{ tpl $key $ }}-tcp"
{{- end }}
selector:
app: {{ template "haproxy-ingress.name" . }}
diff --git a/incubator/haproxy-ingress/templates/psp.yaml b/incubator/haproxy-ingress/templates/psp.yaml
new file mode 100644
index 000000000000..fbe95f11a239
--- /dev/null
+++ b/incubator/haproxy-ingress/templates/psp.yaml
@@ -0,0 +1,42 @@
+{{ if .Values.rbac.security.enable -}}
+apiVersion: extensions/v1beta1
+kind: PodSecurityPolicy
+metadata:
+ name: {{ template "haproxy-ingress.fullname" . }}
+ labels:
+ app: {{ template "haproxy-ingress.name" . }}
+ chart: {{ template "haproxy-ingress.chart" . }}
+ release: {{ .Release.Name }}
+ heritage: {{ .Release.Service }}
+ annotations:
+ seccomp.security.alpha.kubernetes.io/allowedProfileNames: 'docker/default'
+ apparmor.security.beta.kubernetes.io/allowedProfileNames: 'runtime/default'
+ seccomp.security.alpha.kubernetes.io/defaultProfileName: 'docker/default'
+ apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default'
+spec:
+ privileged: true
+ allowPrivilegeEscalation: true
+ defaultAllowPrivilegeEscalation: false
+ allowedCapabilities:
+ - SYS_RESOURCE
+ defaultAddCapabilities:
+ - SYS_RESOURCE
+ volumes:
+ - configMap
+ - secret
+ hostNetwork: false
+ hostPorts:
+ - min: 0
+ max: 65535
+ runAsUser:
+ rule: 'RunAsAny'
+ seLinux:
+ rule: 'RunAsAny'
+ supplementalGroups:
+ rule: 'RunAsAny'
+ fsGroup:
+ rule: 'RunAsAny'
+ allowedHostPaths:
+ - pathPrefix: /etc/haproxy/template
+ readOnly: false
+{{ end -}}
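The PodSecurityPolicy above is only rendered, and only granted to the chart's service account via the Role's `use` verb, when both flags are set:

```yaml
rbac:
  create: true
  security:
    enable: true
```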
diff --git a/incubator/haproxy-ingress/templates/role.yaml b/incubator/haproxy-ingress/templates/role.yaml
index 888a044bce40..059b41e1e17e 100644
--- a/incubator/haproxy-ingress/templates/role.yaml
+++ b/incubator/haproxy-ingress/templates/role.yaml
@@ -1,4 +1,4 @@
-{{- if or .Values.rbac.create -}}
+{{- if .Values.rbac.create -}}
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
@@ -12,7 +12,6 @@ rules:
- apiGroups:
- ""
resources:
- - configmaps
- pods
- secrets
- namespaces
@@ -22,21 +21,19 @@ rules:
- ""
resources:
- configmaps
+ - endpoints
verbs:
- get
- - update
- - apiGroups:
- - ""
- resources:
- - configmaps
- verbs:
- create
+ - update
+{{- if .Values.rbac.security.enable }}
- apiGroups:
- - ""
+ - extensions
resources:
- - endpoints
+ - podsecuritypolicies
+ resourceNames:
+ - {{ template "haproxy-ingress.fullname" . }}
verbs:
- - get
- - create
- - update
+ - use
+{{- end -}}
{{- end -}}
diff --git a/incubator/haproxy-ingress/templates/rolebinding.yaml b/incubator/haproxy-ingress/templates/rolebinding.yaml
index 801e27634dc8..46a9ff8df063 100644
--- a/incubator/haproxy-ingress/templates/rolebinding.yaml
+++ b/incubator/haproxy-ingress/templates/rolebinding.yaml
@@ -1,4 +1,4 @@
-{{- if or .Values.rbac.create -}}
+{{- if .Values.rbac.create -}}
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
diff --git a/incubator/haproxy-ingress/templates/tcp-configmap.yaml b/incubator/haproxy-ingress/templates/tcp-configmap.yaml
index 2d385450f68b..d777e2057827 100644
--- a/incubator/haproxy-ingress/templates/tcp-configmap.yaml
+++ b/incubator/haproxy-ingress/templates/tcp-configmap.yaml
@@ -1,3 +1,4 @@
+---
{{- if .Values.controller.tcp }}
apiVersion: v1
kind: ConfigMap
@@ -10,5 +11,7 @@ metadata:
heritage: {{ .Release.Service }}
name: {{ template "haproxy-ingress.controller.fullname" . }}-tcp
data:
-{{ toYaml .Values.controller.tcp | indent 2 }}
+{{- range $key, $value := .Values.controller.tcp }}
+ {{ tpl $key $ | quote }}: {{ tpl $value $ | quote }}
+{{- end }}
{{- end }}
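Since both keys and values of `controller.tcp` now pass through `tpl`, TCP mappings can reference other chart values. A hypothetical example (`customSSHPort` is not a real chart value):

```yaml
controller:
  tcp:
    "8080": "default/example-tcp-svc:9000"
    "{{ .Values.customSSHPort }}": "default/git-ssh:22"
```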
diff --git a/incubator/haproxy-ingress/values.yaml b/incubator/haproxy-ingress/values.yaml
index d1b287749733..5f2d698e49a4 100644
--- a/incubator/haproxy-ingress/values.yaml
+++ b/incubator/haproxy-ingress/values.yaml
@@ -1,6 +1,8 @@
# Enable RBAC
rbac:
create: true
+ security:
+ enable: false
# Create ServiceAccount
serviceAccount:
@@ -14,7 +16,7 @@ controller:
name: controller
image:
repository: quay.io/jcmoraisjr/haproxy-ingress
- tag: "v0.7-beta.5"
+ tag: "v0.7.1"
pullPolicy: IfNotPresent
## Additional command line arguments to pass to haproxy-ingress-controller
@@ -33,6 +35,9 @@ controller:
# key: FOO
# name: secret-resource
+ ## Additional init containers to run before the controller container starts.
+ initContainers: []
+
# custom haproxy template
template: ""
@@ -75,6 +80,14 @@ controller:
##
podLabels: {}
+ ## Affinity to be added to controller pods
+ ##
+ podAffinity: {}
+
+ ## Priority Class to be used
+ ##
+ priorityClassName: ""
+
## Security context settings to be added to the controller pods
##
securityContext: {}
@@ -105,6 +118,9 @@ controller:
tcp: {}
# 8080: "default/example-tcp-svc:9000"
+ # optionally disable static ports, including the default 80 and 443
+ enableStaticPorts: true
+
## Use host ports 80 and 443
daemonset:
useHostPort: false
@@ -164,8 +180,6 @@ controller:
##
nodeSelector: {}
- accessLogsSidecar: true
-
service:
annotations: {}
labels: {}
@@ -223,9 +237,23 @@ controller:
# (scrapes the stats port and exports metrics to prometheus)
image:
repository: quay.io/prometheus/haproxy-exporter
- tag: "v0.9.0"
+ tag: "v0.10.0"
pullPolicy: IfNotPresent
+ ## Additional command line arguments to pass to haproxy_exporter
+ ## E.g. to specify the client timeout you can use
+ ## extraArgs:
+ ## haproxy.timeout: 15s
+ extraArgs: {}
+
+ resources: {}
+ # limits:
+ # cpu: 500m
+ # memory: 600Mi
+ # requests:
+ # cpu: 200m
+ # memory: 400Mi
+
service:
annotations: {}
# prometheus.io/scrape: "true"
@@ -243,6 +271,28 @@ controller:
servicePort: 9101
type: ClusterIP
+ ## access-logs sidecar container for collecting haproxy logs
+ ## Enabling this will configure haproxy to emit logs to syslog localhost:514 UDP port.
+ ## The access-logs container starts a syslog process that listens on UDP 514 and outputs to stdout.
+ logs:
+ enabled: false
+
+ # syslog for haproxy
+ # https://github.com/whereisaaron/kube-syslog-sidecar
+ # (listens on UDP port 514 and outputs to stdout)
+ image:
+ repository: whereisaaron/kube-syslog-sidecar
+ tag: latest
+ pullPolicy: IfNotPresent
+
+ resources: {}
+ # limits:
+ # cpu: 200m
+ # memory: 128Mi
+ # requests:
+ # cpu: 100m
+ # memory: 32Mi
+
# Default 404 backend
defaultBackend:
## If false, controller.defaultBackendService must be provided
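The removed `accessLogsSidecar` boolean is superseded by the `controller.logs` block above, which makes the sidecar opt-in and its image and resources configurable:

```yaml
controller:
  logs:
    enabled: true
    resources:
      requests:
        cpu: 100m
        memory: 32Mi
```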
diff --git a/incubator/hoverfly/.helmignore b/incubator/hoverfly/.helmignore
new file mode 100644
index 000000000000..46fd89965620
--- /dev/null
+++ b/incubator/hoverfly/.helmignore
@@ -0,0 +1,23 @@
+# Patterns to ignore when building packages.
+# This supports shell glob matching, relative path matching, and
+# negation (prefixed with !). Only one pattern per line.
+.DS_Store
+# Common VCS dirs
+.git/
+.gitignore
+.bzr/
+.bzrignore
+.hg/
+.hgignore
+.svn/
+# Common backup files
+*.swp
+*.bak
+*.tmp
+*~
+# Various IDEs
+.project
+.idea/
+*.tmproj
+# OWNERS file for Kubernetes
+OWNERS
diff --git a/incubator/hoverfly/Chart.yaml b/incubator/hoverfly/Chart.yaml
new file mode 100644
index 000000000000..33880e5605a4
--- /dev/null
+++ b/incubator/hoverfly/Chart.yaml
@@ -0,0 +1,17 @@
+apiVersion: v1
+appVersion: 1.0.0-rc.2
+description: Hoverfly is a lightweight, open source API simulation tool. Using Hoverfly, you can create realistic simulations of the APIs your application depends on.
+name: hoverfly
+version: 0.1.0
+keywords:
+ - hoverfly
+ - api-simulation
+ - mocking
+ - stubbing
+ - service-virtualization
+home: https://hoverfly.io
+sources:
+- https://github.com/SpectoLabs/hoverfly
+maintainers:
+- name: tommysitu
+ email: tommy.situ@specto.io
diff --git a/incubator/hoverfly/OWNERS b/incubator/hoverfly/OWNERS
new file mode 100644
index 000000000000..00ba52f978a7
--- /dev/null
+++ b/incubator/hoverfly/OWNERS
@@ -0,0 +1,4 @@
+approvers:
+- tommysitu
+reviewers:
+- tommysitu
diff --git a/incubator/hoverfly/README.md b/incubator/hoverfly/README.md
new file mode 100644
index 000000000000..8bd4f811a218
--- /dev/null
+++ b/incubator/hoverfly/README.md
@@ -0,0 +1,72 @@
+# Hoverfly
+
+[Hoverfly](https://hoverfly.io/) is a lightweight, open source API simulation tool. Using Hoverfly, you can create realistic simulations of the APIs your application depends on.
+
+
+## TL;DR;
+
+```console
+$ helm install incubator/hoverfly
+```
+
+## Introduction
+
+This chart bootstraps a [Hoverfly](https://hoverfly.io/) deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
+
+
+## Installing the Chart
+
+To install the chart with the release name `my-release`:
+
+```console
+$ helm install --name my-release incubator/hoverfly
+```
+
+The command deploys Hoverfly on the Kubernetes cluster in the default configuration. The [configuration](#configuration) section lists the parameters that can be configured during installation.
+
+> **Tip**: List all releases using `helm list`
+
+## Uninstalling the Chart
+
+To uninstall/delete the `my-release` deployment:
+
+```console
+$ helm delete my-release
+```
+
+The command removes all the Kubernetes components associated with the chart and deletes the release.
+
+## Configuration
+
+The following table lists the configurable parameters of the Hoverfly chart and their default values.
+
+| Parameter | Description | Default |
+| --------------------------------- | ------------------------------------------ | --------------------------------------------------------- |
+| `image.repository` | Hoverfly Image name | `docker.io/spectolabs/hoverfly` |
+| `image.tag` | Hoverfly Image tag | `v1.0.0-rc.2` |
+| `hoverflyFlags`                   | Flags to start Hoverfly with, e.g. `-auth`  | `""`                                                       |
+| `healthcheckEndpoint` | Admin API path for Kubernetes healthcheck | `/api/health` |
+| `service.type` | Kubernetes Service type | `ClusterIP` |
+| `service.adminPort` | Container Admin port | `8888` |
+| `service.proxyPort` | Container Proxy port | `8500` |
+| `service.externalAdminPort` | Service Admin port | `8888` |
+| `service.externalProxyPort` | Service Proxy port | `8500` |
+| `resources` | CPU/Memory resource requests/limits | Memory: `200Mi`, CPU: `0.2` |
+
+Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
+
+```console
+$ helm install --name my-release \
+ --set hoverflyFlags='-webserver -journal-size 0' \
+ incubator/hoverfly
+```
+
+The above command starts Hoverfly in webserver mode and disables the journal. You can find all the available flags [here](https://hoverfly.readthedocs.io/en/latest/pages/reference/hoverfly/hoverflycommands.html).
+
+Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,
+
+```console
+$ helm install --name my-release -f values.yaml incubator/hoverfly
+```
+
+> **Tip**: You can use the default [values.yaml](values.yaml)
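+
+As a sketch, a custom `values.yaml` overriding a few of the parameters above (hypothetical values) could look like:
+
+```yaml
+# Start Hoverfly in webserver mode and expose the service via NodePort
+hoverflyFlags: "-webserver"
+service:
+  type: NodePort
+```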
diff --git a/incubator/hoverfly/templates/NOTES.txt b/incubator/hoverfly/templates/NOTES.txt
new file mode 100644
index 000000000000..82d521d996e4
--- /dev/null
+++ b/incubator/hoverfly/templates/NOTES.txt
@@ -0,0 +1,17 @@
+1. Get the application URL by running these commands:
+{{- if contains "NodePort" .Values.service.type }}
+ export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "fullname" . }})
+ export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
+ echo http://$NODE_IP:$NODE_PORT
+{{- else if contains "LoadBalancer" .Values.service.type }}
+ NOTE: It may take a few minutes for the LoadBalancer IP to be available.
+        You can watch its status by running 'kubectl get svc -w {{ template "fullname" . }}'
+ export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
+ echo "Hoverfly Admin URL: http://$SERVICE_IP:{{ .Values.service.externalAdminPort }}"
+ echo "Hoverfly Proxy URL: http://$SERVICE_IP:{{ .Values.service.externalProxyPort }}"
+{{- else if contains "ClusterIP" .Values.service.type }}
+ export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app={{ template "fullname" . }}" -o jsonpath="{.items[0].metadata.name}")
+  kubectl port-forward $POD_NAME {{ .Values.service.externalAdminPort }}:{{ .Values.service.adminPort }} {{ .Values.service.externalProxyPort }}:{{ .Values.service.proxyPort }}
+ echo "Hoverfly Admin URL http://127.0.0.1:{{ .Values.service.externalAdminPort }}"
+ echo "Hoverfly Proxy URL http://127.0.0.1:{{ .Values.service.externalProxyPort }}"
+{{- end }}
diff --git a/incubator/kube-spot-termination-notice-handler/templates/_helpers.tpl b/incubator/hoverfly/templates/_helpers.tpl
similarity index 60%
rename from incubator/kube-spot-termination-notice-handler/templates/_helpers.tpl
rename to incubator/hoverfly/templates/_helpers.tpl
index 68c2062efce2..f0d83d2edba6 100644
--- a/incubator/kube-spot-termination-notice-handler/templates/_helpers.tpl
+++ b/incubator/hoverfly/templates/_helpers.tpl
@@ -14,14 +14,3 @@ We truncate at 63 chars because some Kubernetes name fields are limited to this
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
-
-{{/*
-Create the name of the service account to use
-*/}}
-{{- define "kube-spot-termination-notice-handler.serviceAccountName" -}}
-{{- if .Values.serviceAccount.create -}}
- {{ default (include "fullname" .) .Values.serviceAccount.name }}
-{{- else -}}
- {{ default "default" .Values.serviceAccount.name }}
-{{- end -}}
-{{- end -}}
diff --git a/incubator/hoverfly/templates/deployment.yaml b/incubator/hoverfly/templates/deployment.yaml
new file mode 100644
index 000000000000..2a184c5880a4
--- /dev/null
+++ b/incubator/hoverfly/templates/deployment.yaml
@@ -0,0 +1,40 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: {{ template "fullname" . }}
+ labels:
+ chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
+spec:
+ replicas: {{ .Values.replicaCount }}
+ selector:
+ matchLabels:
+ app: {{ template "fullname" . }}
+ template:
+ metadata:
+ labels:
+ app: {{ template "fullname" . }}
+ spec:
+ containers:
+ - name: {{ .Chart.Name }}
+ image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
+ env:
+ - name: FLAGS
+            value: {{ .Values.hoverflyFlags | quote }}
+ args: ["$(FLAGS)"]
+ ports:
+ - containerPort: {{ .Values.service.adminPort }}
+ - containerPort: {{ .Values.service.proxyPort }}
+ livenessProbe:
+ httpGet:
+ path: {{ .Values.healthcheckEndpoint }}
+ port: {{ .Values.service.adminPort }}
+ initialDelaySeconds: 5
+ timeoutSeconds: 1
+ readinessProbe:
+ httpGet:
+ path: {{ .Values.healthcheckEndpoint }}
+ port: {{ .Values.service.adminPort }}
+ initialDelaySeconds: 5
+ timeoutSeconds: 1
+ resources:
+{{ toYaml .Values.resources | indent 12 }}
diff --git a/incubator/hoverfly/templates/service.yaml b/incubator/hoverfly/templates/service.yaml
new file mode 100644
index 000000000000..66967be5115e
--- /dev/null
+++ b/incubator/hoverfly/templates/service.yaml
@@ -0,0 +1,19 @@
+apiVersion: v1
+kind: Service
+metadata:
+ name: {{ template "fullname" . }}
+ labels:
+ chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
+spec:
+ type: {{ .Values.service.type }}
+ ports:
+ - port: {{ .Values.service.externalAdminPort }}
+ targetPort: {{ .Values.service.adminPort }}
+ protocol: TCP
+ name: admin
+ - port: {{ .Values.service.externalProxyPort }}
+ targetPort: {{ .Values.service.proxyPort }}
+ protocol: TCP
+ name: proxy
+ selector:
+ app: {{ template "fullname" . }}
diff --git a/incubator/hoverfly/values.yaml b/incubator/hoverfly/values.yaml
new file mode 100644
index 000000000000..7d729a21443b
--- /dev/null
+++ b/incubator/hoverfly/values.yaml
@@ -0,0 +1,22 @@
+# Default values for hoverfly.
+# This is a YAML-formatted file.
+# Declare variables to be passed into your templates.
+image:
+ repository: docker.io/spectolabs/hoverfly
+ tag: v1.0.0-rc.2
+ pullPolicy: IfNotPresent
+service:
+ type: ClusterIP
+ externalAdminPort: 8888
+ adminPort: 8888
+ externalProxyPort: 8500
+ proxyPort: 8500
+healthcheckEndpoint: /api/health
+hoverflyFlags: ""
+# resources:
+# limits:
+# cpu: 0.2
+# memory: 200Mi
+# requests:
+# cpu: 0.1
+# memory: 100Mi
diff --git a/incubator/jaeger/Chart.yaml b/incubator/jaeger/Chart.yaml
index f8f21a5a0cd2..11c20f4a4209 100644
--- a/incubator/jaeger/Chart.yaml
+++ b/incubator/jaeger/Chart.yaml
@@ -1,8 +1,8 @@
apiVersion: v1
-appVersion: 1.8.2
+appVersion: 1.11.0
description: A Jaeger Helm chart for Kubernetes
name: jaeger
-version: 0.8.2
+version: 0.10.2
keywords:
- jaeger
- opentracing
diff --git a/incubator/jaeger/README.md b/incubator/jaeger/README.md
index 1413f3758de7..03095d9a31db 100644
--- a/incubator/jaeger/README.md
+++ b/incubator/jaeger/README.md
@@ -175,8 +175,10 @@ The following table lists the configurable parameters of the Jaeger chart and th
| `elasticsearch.data.persistence.enabled` | To enable storage persistence | false (Highly recommended to enable) |
| `elasticsearch.image.tag` | Elasticsearch image tag | "5.4" |
| `elasticsearch.rbac.create` | To enable RBAC | false |
+| `fullnameOverride` | Override the fully qualified release name | `""` |
| `hotrod.enabled` | Enables the Hotrod demo app | false |
| `hotrod.service.loadBalancerSourceRanges` | list of IP CIDRs allowed access to load balancer (if supported) | `[]` |
+| `nameOverride` | Override the chart name used in resource names and labels | `""` |
| `provisionDataStore.cassandra` | Provision Cassandra Data Store | true |
| `provisionDataStore.elasticsearch` | Provision Elasticsearch Data Store | false |
| `query.service.annotations` | Annotations for Query SVC | nil |
@@ -214,7 +216,7 @@ The following table lists the configurable parameters of the Jaeger chart and th
| `storage.elasticsearch.user` | Provisioned elasticsearch user | elastic |
| `storage.elasticsearch.nodesWanOnly` | Only access specified es host | false |
| `storage.type` | Storage type (ES or Cassandra) | cassandra |
-| `tag` | Image tag/version | 1.8.2 |
+| `tag` | Image tag/version | 1.11.0 |
For more information about some of the tunable parameters that Cassandra provides, please visit the helm chart for [cassandra](https://github.com/kubernetes/charts/tree/master/incubator/cassandra) and the official [website](http://cassandra.apache.org/) at apache.org.
@@ -251,3 +253,9 @@ Jaeger offers a multitude of [tags](https://hub.docker.com/u/jaegertracing/) for
- [x] Fix hard-coded replica count
- [x] Collector service works both as `NodePort` and `ClusterIP` service types
- [ ] Sidecar deployment support
+
+## Upgrading
+
+### From < 0.9.0 to >= 0.9.0
+
+Version `0.9.0` introduces the recommended Kubernetes labels. Because a Deployment's label selector is immutable, the approach to upgrading is to delete and reinstall the release.
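+
+For example (Helm 2 syntax; `my-release` is a placeholder release name):
+
+```console
+$ helm delete --purge my-release
+$ helm install --name my-release incubator/jaeger
+```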
diff --git a/incubator/jaeger/requirements.yaml b/incubator/jaeger/requirements.yaml
index 5e6aac86c5f8..bec78b621ec3 100644
--- a/incubator/jaeger/requirements.yaml
+++ b/incubator/jaeger/requirements.yaml
@@ -1,9 +1,9 @@
dependencies:
- name: cassandra
- version: ^0.9.4
+ version: ^0.10.4
repository: https://kubernetes-charts-incubator.storage.googleapis.com/
condition: provisionDataStore.cassandra
- name: elasticsearch
- version: ^0.4.1
- repository: https://kubernetes-charts-incubator.storage.googleapis.com/
+ version: ^1.19.1
+ repository: https://kubernetes-charts.storage.googleapis.com/
condition: provisionDataStore.elasticsearch
diff --git a/incubator/jaeger/templates/_helpers.tpl b/incubator/jaeger/templates/_helpers.tpl
index 681ad06286d2..29a310434f47 100644
--- a/incubator/jaeger/templates/_helpers.tpl
+++ b/incubator/jaeger/templates/_helpers.tpl
@@ -1,16 +1,4 @@
{{/* vim: set filetype=mustache: */}}
-
-{{/*
-Return the appropriate apiVersion for cronjob APIs.
-*/}}
-{{- define "cronjob.apiVersion" -}}
-{{- if .Capabilities.APIVersions.Has "batch/v1beta1" -}}
-"batch/v1beta1"
-{{- else -}}
-"batch/v2alpha1"
-{{- end -}}
-{{- end -}}
-
{{/*
Expand the name of the chart.
*/}}
@@ -20,7 +8,7 @@ Expand the name of the chart.
{{/*
Create a default fully qualified app name.
-We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec)
+We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "jaeger.fullname" -}}
@@ -36,6 +24,13 @@ If release name contains chart name it will be used as a full name.
{{- end -}}
{{- end -}}
+{{/*
+Create chart name and version as used by the chart label.
+*/}}
+{{- define "jaeger.chart" -}}
+{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+
{{/*
Create a fully qualified query name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
diff --git a/incubator/jaeger/templates/agent-ds.yaml b/incubator/jaeger/templates/agent-ds.yaml
index 086b109f59b9..410e534f4608 100644
--- a/incubator/jaeger/templates/agent-ds.yaml
+++ b/incubator/jaeger/templates/agent-ds.yaml
@@ -1,20 +1,24 @@
{{- if .Values.agent.enabled -}}
-apiVersion: extensions/v1beta1
+apiVersion: apps/v1
kind: DaemonSet
metadata:
name: {{ template "jaeger.agent.name" . }}
labels:
- app: {{ template "jaeger.name" . }}
- jaeger-infra: agent-daemonset
- chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
- component: agent
- heritage: {{ .Release.Service }}
- release: {{ .Release.Name }}
+ app.kubernetes.io/name: {{ include "jaeger.name" . }}
+ helm.sh/chart: {{ include "jaeger.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ app.kubernetes.io/component: agent
{{- if .Values.agent.annotations }}
annotations:
{{ toYaml .Values.agent.annotations | indent 4 }}
{{- end }}
spec:
+ selector:
+ matchLabels:
+ app.kubernetes.io/name: {{ include "jaeger.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/component: agent
template:
metadata:
{{- if .Values.agent.podAnnotations }}
@@ -22,10 +26,9 @@ spec:
{{ toYaml .Values.agent.podAnnotations | indent 8 }}
{{- end }}
labels:
- app: {{ template "jaeger.name" . }}
- component: agent
- release: {{ .Release.Name }}
- jaeger-infra: agent-instance
+ app.kubernetes.io/name: {{ include "jaeger.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/component: agent
{{- if .Values.agent.podLabels }}
{{ toYaml .Values.agent.podLabels | indent 8 }}
{{- end }}
diff --git a/incubator/jaeger/templates/agent-svc.yaml b/incubator/jaeger/templates/agent-svc.yaml
index 233bfb584e1b..5cb5943188d3 100644
--- a/incubator/jaeger/templates/agent-svc.yaml
+++ b/incubator/jaeger/templates/agent-svc.yaml
@@ -4,12 +4,11 @@ kind: Service
metadata:
name: {{ template "jaeger.agent.name" . }}
labels:
- app: {{ template "jaeger.name" . }}
- jaeger-infra: agent-service
- chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
- component: agent
- heritage: {{ .Release.Service }}
- release: {{ .Release.Name }}
+ app.kubernetes.io/name: {{ include "jaeger.name" . }}
+ helm.sh/chart: {{ include "jaeger.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ app.kubernetes.io/component: agent
{{- if .Values.agent.service.annotations }}
annotations:
{{ toYaml .Values.agent.service.annotations | indent 4 }}
@@ -34,9 +33,8 @@ spec:
targetPort: {{ .Values.agent.service.samplingPort }}
type: {{ .Values.agent.service.type }}
selector:
- app: {{ template "jaeger.name" . }}
- component: agent
- release: {{ .Release.Name }}
- jaeger-infra: agent-instance
+ app.kubernetes.io/name: {{ template "jaeger.name" . }}
+ app.kubernetes.io/component: agent
+ app.kubernetes.io/instance: {{ .Release.Name }}
{{- template "loadBalancerSourceRanges" .Values.agent }}
{{- end -}}
diff --git a/incubator/jaeger/templates/cassandra-schema-job.yaml b/incubator/jaeger/templates/cassandra-schema-job.yaml
index 8f2d8afb77a7..10fa68e33ad9 100644
--- a/incubator/jaeger/templates/cassandra-schema-job.yaml
+++ b/incubator/jaeger/templates/cassandra-schema-job.yaml
@@ -5,12 +5,11 @@ kind: Job
metadata:
name: {{ template "jaeger.fullname" . }}-cassandra-schema
labels:
- app: {{ template "jaeger.name" . }}
- jaeger-infra: cassandra-schema-job
- chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
- component: cassandra-schema
- heritage: {{ .Release.Service }}
- release: {{ .Release.Name }}
+ app.kubernetes.io/name: {{ template "jaeger.name" . }}
+ helm.sh/chart: {{ include "jaeger.chart" . }}
+ app.kubernetes.io/component: cassandra-schema
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
{{- if .Values.schema.annotations }}
annotations:
{{ toYaml .Values.schema.annotations | indent 4 }}
diff --git a/incubator/jaeger/templates/collector-deploy.yaml b/incubator/jaeger/templates/collector-deploy.yaml
index 0cf520de5687..203dc43d146b 100644
--- a/incubator/jaeger/templates/collector-deploy.yaml
+++ b/incubator/jaeger/templates/collector-deploy.yaml
@@ -1,21 +1,25 @@
{{- if .Values.collector.enabled -}}
-apiVersion: extensions/v1beta1
+apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "jaeger.collector.name" . }}
labels:
- app: {{ template "jaeger.name" . }}
- jaeger-infra: collector-deployment
- chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
- component: collector
- heritage: {{ .Release.Service }}
- release: {{ .Release.Name }}
+ app.kubernetes.io/name: {{ template "jaeger.name" . }}
+ helm.sh/chart: {{ include "jaeger.chart" . }}
+ app.kubernetes.io/component: collector
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
{{- if .Values.collector.annotations }}
annotations:
{{ toYaml .Values.collector.annotations | indent 4 }}
{{- end }}
spec:
replicas: {{ .Values.collector.replicaCount }}
+ selector:
+ matchLabels:
+ app.kubernetes.io/name: {{ template "jaeger.name" . }}
+ app.kubernetes.io/component: collector
+ app.kubernetes.io/instance: {{ .Release.Name }}
strategy:
type: Recreate
template:
@@ -25,10 +29,9 @@ spec:
{{ toYaml .Values.collector.podAnnotations | indent 8 }}
{{- end }}
labels:
- app: {{ template "jaeger.name" . }}
- component: collector
- release: {{ .Release.Name }}
- jaeger-infra: collector-pod
+ app.kubernetes.io/name: {{ template "jaeger.name" . }}
+ app.kubernetes.io/component: collector
+ app.kubernetes.io/instance: {{ .Release.Name }}
{{- if .Values.collector.podLabels }}
{{ toYaml .Values.collector.podLabels | indent 8 }}
{{- end }}
@@ -44,6 +47,10 @@ spec:
image: {{ .Values.collector.image }}:{{ .Values.tag }}
imagePullPolicy: {{ .Values.collector.pullPolicy }}
env:
+ {{- range $key, $value := .Values.collector.cmdlineParams }}
+ - name: {{ $key | replace "." "_" | replace "-" "_" | upper | quote }}
+            value: {{ $value | quote }}
+ {{- end }}
- name: SPAN_STORAGE_TYPE
valueFrom:
configMapKeyRef:
@@ -65,6 +72,16 @@ spec:
configMapKeyRef:
name: {{ template "jaeger.fullname" . }}
key: cassandra.keyspace
+ - name: CASSANDRA_USERNAME
+ valueFrom:
+ configMapKeyRef:
+ name: {{ template "jaeger.fullname" . }}
+ key: cassandra.username
+ - name: CASSANDRA_PASSWORD
+ valueFrom:
+ configMapKeyRef:
+ name: {{ template "jaeger.fullname" . }}
+ key: cassandra.password
{{- end }}
{{- if eq .Values.storage.type "elasticsearch" }}
- name: ES_PASSWORD
diff --git a/incubator/jaeger/templates/collector-svc.yaml b/incubator/jaeger/templates/collector-svc.yaml
index b2a0a07b069c..6024ea3685d7 100644
--- a/incubator/jaeger/templates/collector-svc.yaml
+++ b/incubator/jaeger/templates/collector-svc.yaml
@@ -4,12 +4,11 @@ kind: Service
metadata:
name: {{ template "jaeger.collector.name" . }}
labels:
- app: {{ template "jaeger.name" . }}
- jaeger-infra: collector-service
- chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
- component: collector
- heritage: {{ .Release.Service }}
- release: {{ .Release.Name }}
+ app.kubernetes.io/name: {{ template "jaeger.name" . }}
+ helm.sh/chart: {{ include "jaeger.chart" . }}
+ app.kubernetes.io/component: collector
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
{{- if .Values.collector.service.annotations }}
annotations:
{{ toYaml .Values.collector.service.annotations | indent 4 }}
@@ -41,10 +40,9 @@ spec:
targetPort: zipkin
{{- end }}
selector:
- app: {{ template "jaeger.name" . }}
- component: collector
- release: {{ .Release.Name }}
- jaeger-infra: collector-pod
+ app.kubernetes.io/name: {{ template "jaeger.name" . }}
+ app.kubernetes.io/component: collector
+ app.kubernetes.io/instance: {{ .Release.Name }}
type: {{ .Values.collector.service.type }}
{{- template "loadBalancerSourceRanges" .Values.collector }}
{{- end -}}
diff --git a/incubator/jaeger/templates/common-cm.yaml b/incubator/jaeger/templates/common-cm.yaml
index e2f023294073..2eecbb4cddf4 100644
--- a/incubator/jaeger/templates/common-cm.yaml
+++ b/incubator/jaeger/templates/common-cm.yaml
@@ -3,18 +3,19 @@ kind: ConfigMap
metadata:
name: {{ template "jaeger.fullname" . }}
labels:
- app: {{ template "jaeger.name" . }}
- jaeger-infra: common-configmap
- chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
- heritage: {{ .Release.Service }}
- release: {{ .Release.Name }}
+ app.kubernetes.io/name: {{ template "jaeger.name" . }}
+ helm.sh/chart: {{ include "jaeger.chart" . }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
data:
cassandra.contact-points: {{ template "cassandra.contact_points" . }}
cassandra.datacenter.name: {{ .Values.cassandra.config.dc_name | quote }}
cassandra.keyspace: {{ printf "%s_%s" "jaeger_v1" .Values.cassandra.config.dc_name | quote }}
+  cassandra.password: {{ .Values.storage.cassandra.password | quote }}
cassandra.port: {{ .Values.storage.cassandra.port | quote }}
cassandra.schema.mode: {{ .Values.schema.mode | quote }}
cassandra.servers: {{ template "cassandra.host" . }}
+  cassandra.username: {{ .Values.storage.cassandra.user | quote }}
collector.host-port: {{ template "jaeger.collector.host-port" . }}
collector.http-port: {{ .Values.collector.service.httpPort | quote }}
collector.port: {{ .Values.collector.service.tchannelPort | quote }}
@@ -42,11 +43,9 @@ data:
# cassandra.archive.username:
# cassandra.connections-per-host:
# cassandra.max-retry-attempts:
- # cassandra.password:
# cassandra.proto-version:
# cassandra.socket-keep-alive:
# cassandra.timeout:
- # cassandra.username:
# collector.health-check-http-port:
# collector.num-workers:
# collector.queue-size:
diff --git a/incubator/jaeger/templates/hotrod-deploy.yaml b/incubator/jaeger/templates/hotrod-deploy.yaml
index 977697d64b1a..b315e5d808e2 100644
--- a/incubator/jaeger/templates/hotrod-deploy.yaml
+++ b/incubator/jaeger/templates/hotrod-deploy.yaml
@@ -1,24 +1,28 @@
{{- if .Values.hotrod.enabled -}}
-apiVersion: extensions/v1beta1
+apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "jaeger.fullname" . }}-hotrod
labels:
- app: {{ template "jaeger.name" . }}
+ app.kubernetes.io/name: {{ template "jaeger.name" . }}
jaeger-infra: hotrod-deployment
- chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
- component: hotrod
- release: {{ .Release.Name }}
- heritage: {{ .Release.Service }}
+ helm.sh/chart: {{ include "jaeger.chart" . }}
+ app.kubernetes.io/component: hotrod
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
replicas: {{ .Values.hotrod.replicaCount }}
+ selector:
+ matchLabels:
+ app.kubernetes.io/name: {{ template "jaeger.name" . }}
+ app.kubernetes.io/component: hotrod
+ app.kubernetes.io/instance: {{ .Release.Name }}
template:
metadata:
labels:
- app: {{ template "jaeger.name" . }}
- component: hotrod
- release: {{ .Release.Name }}
- jaeger-infra: hotrod-instance
+ app.kubernetes.io/name: {{ template "jaeger.name" . }}
+ app.kubernetes.io/component: hotrod
+ app.kubernetes.io/instance: {{ .Release.Name }}
spec:
containers:
- name: {{ template "jaeger.fullname" . }}-hotrod
diff --git a/incubator/jaeger/templates/hotrod-ing.yaml b/incubator/jaeger/templates/hotrod-ing.yaml
index 9c96188623ce..37736b4d5d73 100644
--- a/incubator/jaeger/templates/hotrod-ing.yaml
+++ b/incubator/jaeger/templates/hotrod-ing.yaml
@@ -7,12 +7,11 @@ kind: Ingress
metadata:
name: {{ template "jaeger.fullname" . }}-hotrod
labels:
- app: {{ template "jaeger.name" . }}
- jaeger-infra: hotrod-ingress
- chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
- component: hotrod
- release: {{ .Release.Name }}
- heritage: {{ .Release.Service }}
+ app.kubernetes.io/name: {{ template "jaeger.name" . }}
+ helm.sh/chart: {{ include "jaeger.chart" . }}
+ app.kubernetes.io/component: hotrod
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- if .Values.hotrod.ingress.annotations }}
annotations:
{{ toYaml .Values.hotrod.ingress.annotations | indent 4 }}
diff --git a/incubator/jaeger/templates/hotrod-svc.yaml b/incubator/jaeger/templates/hotrod-svc.yaml
index 01e4ecff8d12..33995bec55c2 100644
--- a/incubator/jaeger/templates/hotrod-svc.yaml
+++ b/incubator/jaeger/templates/hotrod-svc.yaml
@@ -4,12 +4,11 @@ kind: Service
metadata:
name: {{ template "jaeger.fullname" . }}-hotrod
labels:
- app: {{ template "jaeger.name" . }}
- jaeger-infra: hotrod-service
- chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
- component: hotrod
- release: {{ .Release.Name }}
- heritage: {{ .Release.Service }}
+ app.kubernetes.io/name: {{ template "jaeger.name" . }}
+ helm.sh/chart: {{ include "jaeger.chart" . }}
+ app.kubernetes.io/component: hotrod
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- if .Values.hotrod.service.annotations }}
annotations:
{{ toYaml .Values.hotrod.service.annotations | indent 4 }}
@@ -22,9 +21,8 @@ spec:
protocol: TCP
targetPort: {{ .Values.hotrod.service.internalPort }}
selector:
- app: {{ template "jaeger.name" . }}
- component: hotrod
- release: {{ .Release.Name }}
- jaeger-infra: hotrod-instance
+ app.kubernetes.io/name: {{ template "jaeger.name" . }}
+ app.kubernetes.io/component: hotrod
+ app.kubernetes.io/instance: {{ .Release.Name }}
{{- template "loadBalancerSourceRanges" .Values.hotrod }}
{{- end -}}
diff --git a/incubator/jaeger/templates/query-deploy.yaml b/incubator/jaeger/templates/query-deploy.yaml
index 49ab0640a312..8211b12ba984 100644
--- a/incubator/jaeger/templates/query-deploy.yaml
+++ b/incubator/jaeger/templates/query-deploy.yaml
@@ -1,21 +1,25 @@
{{- if .Values.query.enabled -}}
-apiVersion: extensions/v1beta1
+apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "jaeger.query.name" . }}
labels:
- app: {{ template "jaeger.name" . }}
- jaeger-infra: query-deployment
- chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
- component: query
- heritage: {{ .Release.Service }}
- release: {{ .Release.Name }}
+ app.kubernetes.io/name: {{ template "jaeger.name" . }}
+ helm.sh/chart: {{ include "jaeger.chart" . }}
+ app.kubernetes.io/component: query
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
{{- if .Values.query.annotations }}
annotations:
{{ toYaml .Values.query.annotations | indent 4 }}
{{- end }}
spec:
replicas: {{ .Values.query.replicaCount }}
+ selector:
+ matchLabels:
+ app.kubernetes.io/name: {{ template "jaeger.name" . }}
+ app.kubernetes.io/component: query
+ app.kubernetes.io/instance: {{ .Release.Name }}
strategy:
type: Recreate
template:
@@ -25,10 +29,9 @@ spec:
{{ toYaml .Values.query.podAnnotations | indent 8 }}
{{- end }}
labels:
- app: {{ template "jaeger.name" . }}
- component: query
- release: {{ .Release.Name }}
- jaeger-infra: query-pod
+ app.kubernetes.io/name: {{ template "jaeger.name" . }}
+ app.kubernetes.io/component: query
+ app.kubernetes.io/instance: {{ .Release.Name }}
{{- if .Values.query.podLabels }}
{{ toYaml .Values.query.podLabels | indent 8 }}
{{- end }}
@@ -44,6 +47,10 @@ spec:
image: {{ .Values.query.image }}:{{ .Values.tag }}
imagePullPolicy: {{ .Values.query.pullPolicy }}
env:
+ {{- range $key, $value := .Values.query.cmdlineParams }}
+ - name: {{ $key | replace "." "_" | replace "-" "_" | upper | quote }}
+            value: {{ $value | quote }}
+ {{- end }}
- name: SPAN_STORAGE_TYPE
valueFrom:
configMapKeyRef:
@@ -65,6 +72,16 @@ spec:
configMapKeyRef:
name: {{ template "jaeger.fullname" . }}
key: cassandra.keyspace
+ - name: CASSANDRA_USERNAME
+ valueFrom:
+ configMapKeyRef:
+ name: {{ template "jaeger.fullname" . }}
+ key: cassandra.username
+ - name: CASSANDRA_PASSWORD
+ valueFrom:
+ configMapKeyRef:
+ name: {{ template "jaeger.fullname" . }}
+ key: cassandra.password
{{- end }}
{{- if eq .Values.storage.type "elasticsearch" }}
- name: ES_PASSWORD
diff --git a/incubator/jaeger/templates/query-ing.yaml b/incubator/jaeger/templates/query-ing.yaml
index e8370f859528..41db82f9ca9a 100644
--- a/incubator/jaeger/templates/query-ing.yaml
+++ b/incubator/jaeger/templates/query-ing.yaml
@@ -5,12 +5,11 @@ kind: Ingress
metadata:
name: {{ template "jaeger.query.name" . }}
labels:
- app: {{ template "jaeger.name" . }}
- jaeger-infra: query-ingress
- chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
- component: query
- heritage: {{ .Release.Service }}
- release: {{ .Release.Name }}
+ app.kubernetes.io/name: {{ template "jaeger.name" . }}
+ helm.sh/chart: {{ include "jaeger.chart" . }}
+ app.kubernetes.io/component: query
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
{{- if .Values.query.ingress.annotations }}
annotations:
{{ toYaml .Values.query.ingress.annotations | indent 4 }}
diff --git a/incubator/jaeger/templates/query-svc.yaml b/incubator/jaeger/templates/query-svc.yaml
index cc71530e0548..c290f4955cf6 100644
--- a/incubator/jaeger/templates/query-svc.yaml
+++ b/incubator/jaeger/templates/query-svc.yaml
@@ -4,12 +4,11 @@ kind: Service
metadata:
name: {{ template "jaeger.query.name" . }}
labels:
- app: {{ template "jaeger.name" . }}
- jaeger-infra: query-service
- chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
- component: query
- heritage: {{ .Release.Service }}
- release: {{ .Release.Name }}
+ app.kubernetes.io/name: {{ template "jaeger.name" . }}
+ helm.sh/chart: {{ include "jaeger.chart" . }}
+ app.kubernetes.io/component: query
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
{{- if .Values.query.service.annotations }}
annotations:
{{ toYaml .Values.query.service.annotations | indent 4 }}
@@ -21,10 +20,9 @@ spec:
protocol: TCP
targetPort: {{ .Values.query.service.targetPort }}
selector:
- app: {{ template "jaeger.name" . }}
- component: query
- release: {{ .Release.Name }}
- jaeger-infra: query-pod
+ app.kubernetes.io/name: {{ template "jaeger.name" . }}
+ app.kubernetes.io/component: query
+ app.kubernetes.io/instance: {{ .Release.Name }}
type: {{ .Values.query.service.type }}
{{- template "loadBalancerSourceRanges" .Values.query }}
{{- end -}}
diff --git a/incubator/jaeger/templates/spark-cronjob.yaml b/incubator/jaeger/templates/spark-cronjob.yaml
index 4c925bc6327f..23d6238072f9 100644
--- a/incubator/jaeger/templates/spark-cronjob.yaml
+++ b/incubator/jaeger/templates/spark-cronjob.yaml
@@ -1,16 +1,14 @@
{{- if .Values.spark.enabled -}}
-{{- if or (.Capabilities.APIVersions.Has "batch/v1beta1") (.Capabilities.APIVersions.Has "batch/v2alpha1") -}}
-apiVersion: {{ template "cronjob.apiVersion" $ }}
+apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: {{ template "jaeger.fullname" . }}-spark
labels:
- app: {{ template "jaeger.name" . }}
- jaeger-infra: spark-cronjob
- chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
- component: spark
- heritage: {{ .Release.Service }}
- release: {{ .Release.Name }}
+ app.kubernetes.io/name: {{ template "jaeger.name" . }}
+ helm.sh/chart: {{ include "jaeger.chart" . }}
+ app.kubernetes.io/component: spark
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
{{- if .Values.spark.annotations }}
annotations:
{{ toYaml .Values.spark.annotations | indent 4 }}
@@ -24,10 +22,9 @@ spec:
template:
metadata:
labels:
- app: {{ template "jaeger.name" . }}
- component: spark
- release: {{ .Release.Name }}
- jaeger-infra: spark-instance
+ app.kubernetes.io/name: {{ template "jaeger.name" . }}
+ app.kubernetes.io/component: spark
+ app.kubernetes.io/instance: {{ .Release.Name }}
{{- if .Values.spark.podLabels }}
{{ toYaml .Values.spark.podLabels | indent 12 }}
{{- end }}
@@ -86,4 +83,3 @@ spec:
{{ toYaml .Values.spark.resources | indent 14 }}
restartPolicy: OnFailure
{{- end -}}
-{{- end -}}
diff --git a/incubator/jaeger/values.yaml b/incubator/jaeger/values.yaml
index 860798778229..7ec7f65e5cdd 100644
--- a/incubator/jaeger/values.yaml
+++ b/incubator/jaeger/values.yaml
@@ -6,7 +6,10 @@ provisionDataStore:
cassandra: true
elasticsearch: false
-tag: 1.8.2
+tag: 1.11.0
+
+nameOverride: ""
+fullnameOverride: ""
storage:
# allowed values (cassandra, elasticsearch)
@@ -65,7 +68,7 @@ schema:
# Begin: Override values on the Elasticsearch subchart to customize for Jaeger
elasticsearch:
image:
- tag: "5.4"
+ tag: "6.6"
cluster:
name: "tracing"
data:
diff --git a/incubator/kafka/Chart.yaml b/incubator/kafka/Chart.yaml
index a22ec619fe3e..1efc5ed2526a 100755
--- a/incubator/kafka/Chart.yaml
+++ b/incubator/kafka/Chart.yaml
@@ -2,7 +2,7 @@ apiVersion: v1
description: Apache Kafka is publish-subscribe messaging rethought as a distributed
commit log.
name: kafka
-version: 0.13.7
+version: 0.15.4
appVersion: 5.0.1
keywords:
- kafka
diff --git a/incubator/kafka/README.md b/incubator/kafka/README.md
index a982cccef19f..d257012e4a79 100644
--- a/incubator/kafka/README.md
+++ b/incubator/kafka/README.md
@@ -62,30 +62,30 @@ following configurable parameters:
| `replicas` | Kafka Brokers | `3` |
| `component` | Kafka k8s selector key | `kafka` |
| `resources` | Kafka resource requests and limits | `{}` |
+| `securityContext` | Kafka containers security context | `{}` |
| `kafkaHeapOptions` | Kafka broker JVM heap options | `-Xmx1G-Xms1G` |
| `logSubPath` | Subpath under `persistence.mountPath` where kafka logs will be placed. | `logs` |
| `schedulerName` | Name of Kubernetes scheduler (other than the default) | `nil` |
| `affinity` | Defines affinities and anti-affinities for pods as defined in: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity preferences | `{}` |
| `tolerations` | List of node tolerations for the pods. https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ | `[]` |
-| `headless.annotations` | List of annotations for the headless service. https://kubernetes.io/docs/concepts/services-networking/service/#headless-services | `[]` |
-| `headless.targetPort` | Target port to be used for the headless service. This is not a required value. | `nil` |
-| `headless.port` | Port to be used for the headless service. https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ | `9092` |
+| `headless.annotations` | List of annotations for the headless service. https://kubernetes.io/docs/concepts/services-networking/service/#headless-services | `[]` |
+| `headless.targetPort` | Target port to be used for the headless service. This is not a required value. | `nil` |
+| `headless.port` | Port to be used for the headless service. https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ | `9092` |
| `external.enabled` | If True, exposes Kafka brokers via NodePort (PLAINTEXT by default) | `false` |
-| `external.dns.useInternal` | If True, add Annotation for internal DNS service | `false` |
-| `external.dns.useExternal` | If True, add Annotation for external DNS service | `true` |
+| `external.dns.useInternal` | If True, add Annotation for internal DNS service | `false` |
+| `external.dns.useExternal` | If True, add Annotation for external DNS service | `true` |
| `external.servicePort` | TCP port configured at external services (one per pod) to relay from NodePort to the external listener port. | '19092' |
| `external.firstListenerPort` | TCP port which is added pod index number to arrive at the port used for NodePort and external listener port. | '31090' |
| `external.domain` | Domain in which to advertise Kafka external listeners. | `cluster.local` |
-| `external.init` | External init container settings. | (see `values.yaml`) |
| `external.type` | Service Type. | `NodePort` |
| `external.distinct` | Distinct DNS entries for each created A record. | `false` |
| `external.annotations` | Additional annotations for the external service. | `{}` |
+| `external.loadBalancerIP` | Static IPs to assign to the LoadBalancer services (support depends on the provider) | `[]` |
| `podAnnotations` | Annotation to be added to Kafka pods | `{}` |
-| `loadBalancerIP` | Add Static IP to the type Load Balancer. Depends on the provider if enabled | `[]`
-| `rbac.enabled` | Enable a service account and role for the init container to use in an RBAC enabled cluster | `false` |
+| `podLabels` | Labels to be added to Kafka pods | `{}` |
| `envOverrides` | Add additional Environment Variables in the dictionary format | `{ zookeeper.sasl.enabled: "False" }` |
| `configurationOverrides` | `Kafka ` [configuration setting][brokerconfigs] overrides in the dictionary format | `{ offsets.topic.replication.factor: 3 }` |
-| `secrets` | `{}` | Pass any secrets to the kafka pods. Each secret will be passed as an environment variable by default. The secret can also be mounted to a specific path if required. Environment variable names are generated as: `_` (All upper case)|
+| `secrets` | Pass any secrets to the kafka pods. Each secret will be passed as an environment variable by default. The secret can also be mounted to a specific path if required. Environment variable names are generated as: `_` (All upper case) | `{}` |
| `additionalPorts` | Additional ports to expose on brokers. Useful when the image exposes metrics (like prometheus, etc.) through a javaagent instead of a sidecar | `{}` |
| `readinessProbe.initialDelaySeconds` | Number of seconds before probe is initiated. | `30` |
| `readinessProbe.periodSeconds` | How often (in seconds) to perform the probe. | `10` |
@@ -118,9 +118,12 @@ following configurable parameters:
| `prometheus.kafka.scrapeTimeout` | Timeout that Prometheus scrapes Kafka metrics when using Prometheus Operator | `10s` |
| `prometheus.kafka.port` | Kafka Exporter Port which exposes metrics in Prometheus format for scraping | `9308` |
| `prometheus.kafka.resources` | Allows setting resource limits for kafka-exporter pod | `{}` |
+| `prometheus.kafka.affinity` | Defines affinities and anti-affinities for pods as defined in: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity preferences | `{}` |
+| `prometheus.kafka.tolerations` | List of node tolerations for the pods. https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ | `[]` |
| `prometheus.operator.enabled` | True if using the Prometheus Operator, False if not | `false` |
| `prometheus.operator.serviceMonitor.namespace` | Namespace which Prometheus is running in. Default to kube-prometheus install. | `monitoring` |
| `prometheus.operator.serviceMonitor.selector` | Default to kube-prometheus install (CoreOS recommended), but should be set according to Prometheus install | `{ prometheus: kube-prometheus }` |
+| `configJob.backoffLimit` | Number of retries before the kafka-config job is considered failed | `6` |
| `topics` | List of topics to create & configure. Can specify name, partitions, replicationFactor, reassignPartitions, config. See values.yaml | `[]` (Empty list) |
| `zookeeper.enabled` | If True, installs Zookeeper Chart | `true` |
| `zookeeper.resources` | Zookeeper resource requests and limits | `{}` |
@@ -174,9 +177,9 @@ Kafka has a rich ecosystem, with lots of tools. This sections is intended to com
- [Schema-registry](https://github.com/kubernetes/charts/tree/master/incubator/schema-registry) - A confluent project that provides a serving layer for your metadata. It provides a RESTful interface for storing and retrieving Avro schemas.
-### Connecting to Kafka from outside Kubernetes
+## Connecting to Kafka from outside Kubernetes
-#### Node Port External Service Type
+### NodePort External Service Type
Review and optionally override to enable the example text concerned with external access in `values.yaml`.
@@ -207,10 +210,185 @@ the a `containerPort` with a number matching its respective `NodePort`. The rang
should not actually listen, on all Kafka pods in the StatefulSet. As any given pod will listen only one
such port at a time, setting the range at every Kafka pod is a reasonably safe configuration.
-#### Load Balancer External Service Type
+#### Example `values.yaml` for external service type NodePort
+The lines prefixed with `+` show the updated values.
+```
+ external:
+- enabled: false
++ enabled: true
+ # type can be either NodePort or LoadBalancer
+ type: NodePort
+ # annotations:
+@@ -170,14 +170,14 @@ configurationOverrides:
+ ##
+ ## Setting "advertised.listeners" here appends to "PLAINTEXT://${POD_IP}:9092,"; be sure to update the domain
+ ## If external service type is Nodeport:
+- # "advertised.listeners": |-
+- # EXTERNAL://kafka.cluster.local:$((31090 + ${KAFKA_BROKER_ID}))
++ "advertised.listeners": |-
++ EXTERNAL://kafka.cluster.local:$((31090 + ${KAFKA_BROKER_ID}))
+ ## If external service type is LoadBalancer and distinct is true:
+ # "advertised.listeners": |-
+ # EXTERNAL://kafka-$((${KAFKA_BROKER_ID})).cluster.local:19092
+ ## If external service type is LoadBalancer and distinct is false:
+ # "advertised.listeners": |-
+ # EXTERNAL://${LOAD_BALANCER_IP}:31090
+ ## Uncomment to define the EXTERNAL Listener protocol
+- # "listener.security.protocol.map": |-
+- # PLAINTEXT:PLAINTEXT,EXTERNAL:PLAINTEXT
++ "listener.security.protocol.map": |-
++ PLAINTEXT:PLAINTEXT,EXTERNAL:PLAINTEXT
+
+
+$ kafkacat -b kafka.cluster.local:31090 -L
+Metadata for all topics (from broker 0: kafka.cluster.local:31090/0):
+ 3 brokers:
+ broker 2 at kafka.cluster.local:31092
+ broker 1 at kafka.cluster.local:31091
+ broker 0 at kafka.cluster.local:31090
+ 0 topics:
+
+$ kafkacat -b kafka.cluster.local:31090 -P -t test1 -p 0
+msg01 from external producer to topic test1
+
+$ kafkacat -b kafka.cluster.local:31090 -C -t test1 -p 0
+msg01 from external producer to topic test1
+```
+### LoadBalancer External Service Type
The load balancer external service type differs from the node port type by routing to the `external.servicePort` specified in the service for each statefulset container (if `external.distinct` is set). If `external.distinct` is false, `external.servicePort` is unused and will be set to the sum of `external.firstListenerPort` and the replica number. It is important to note that `external.firstListenerPort` does not have to be within the configured node port range for the cluster, however a node port will be allocated.
+#### Example `values.yaml` and DNS setup for external service type LoadBalancer with `external.distinct: true`
+The lines prefixed with `+` show the updated values.
+```
+ external:
+- enabled: false
++ enabled: true
+ # type can be either NodePort or LoadBalancer
+- type: NodePort
++ type: LoadBalancer
+ # annotations:
+ # service.beta.kubernetes.io/openstack-internal-load-balancer: "true"
+ dns:
+@@ -138,10 +138,10 @@ external:
+ # If using external service type LoadBalancer and external dns, set distinct to true below.
+ # This creates an A record for each statefulset pod/broker. You should then map the
+ # A record of the broker to the EXTERNAL IP given by the LoadBalancer in your DNS server.
+- distinct: false
++ distinct: true
+ servicePort: 19092
+ firstListenerPort: 31090
+- domain: cluster.local
++ domain: example.com
+ loadBalancerIP: []
+ init:
+ image: "lwolf/kubectl_deployer"
+@@ -173,11 +173,11 @@ configurationOverrides:
+ # "advertised.listeners": |-
+ # EXTERNAL://kafka.cluster.local:$((31090 + ${KAFKA_BROKER_ID}))
+ ## If external service type is LoadBalancer and distinct is true:
+- # "advertised.listeners": |-
+- # EXTERNAL://kafka-$((${KAFKA_BROKER_ID})).cluster.local:19092
++ "advertised.listeners": |-
++ EXTERNAL://kafka-$((${KAFKA_BROKER_ID})).example.com:19092
+ ## Uncomment to define the EXTERNAL Listener protocol
+- # "listener.security.protocol.map": |-
+- # PLAINTEXT:PLAINTEXT,EXTERNAL:PLAINTEXT
++ "listener.security.protocol.map": |-
++ PLAINTEXT:PLAINTEXT,EXTERNAL:PLAINTEXT
+
+$ kubectl -n kafka get svc
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+kafka ClusterIP 10.39.241.217 9092/TCP 2m39s
+kafka-0-external LoadBalancer 10.39.242.45 35.200.238.174 19092:30108/TCP 2m39s
+kafka-1-external LoadBalancer 10.39.241.90 35.244.44.162 19092:30582/TCP 2m39s
+kafka-2-external LoadBalancer 10.39.243.160 35.200.149.80 19092:30539/TCP 2m39s
+kafka-headless ClusterIP None 9092/TCP 2m39s
+kafka-zookeeper ClusterIP 10.39.249.70 2181/TCP 2m39s
+kafka-zookeeper-headless ClusterIP None 2181/TCP,3888/TCP,2888/TCP 2m39s
+
+DNS A record entries:
+kafka-0.example.com A record 35.200.238.174 TTL 60sec
+kafka-1.example.com A record 35.244.44.162 TTL 60sec
+kafka-2.example.com A record 35.200.149.80 TTL 60sec
+
+$ ping kafka-0.example.com
+PING kafka-0.example.com (35.200.238.174): 56 data bytes
+
+$ kafkacat -b kafka-0.example.com:19092 -L
+Metadata for all topics (from broker 0: kafka-0.example.com:19092/0):
+ 3 brokers:
+ broker 2 at kafka-2.example.com:19092
+ broker 1 at kafka-1.example.com:19092
+ broker 0 at kafka-0.example.com:19092
+ 0 topics:
+
+$ kafkacat -b kafka-0.example.com:19092 -P -t gkeTest -p 0
+msg02 for topic gkeTest
+
+$ kafkacat -b kafka-0.example.com:19092 -C -t gkeTest -p 0
+msg02 for topic gkeTest
+```
+
+#### Example `values.yaml` and DNS setup for external service type LoadBalancer with `external.distinct: false`
+The lines prefixed with `+` show the updated values.
+```
+ external:
+- enabled: false
++ enabled: true
+ # type can be either NodePort or LoadBalancer
+- type: NodePort
++ type: LoadBalancer
+ # annotations:
+ # service.beta.kubernetes.io/openstack-internal-load-balancer: "true"
+ dns:
+@@ -138,10 +138,10 @@ external:
+ distinct: false
+ servicePort: 19092
+ firstListenerPort: 31090
+ domain: cluster.local
+- loadBalancerIP: []
++ loadBalancerIP: [35.200.238.174,35.244.44.162,35.200.149.80]
+ init:
+ image: "lwolf/kubectl_deployer"
+@@ -173,11 +173,11 @@ configurationOverrides:
+ # "advertised.listeners": |-
+ # EXTERNAL://kafka.cluster.local:$((31090 + ${KAFKA_BROKER_ID}))
+ ## If external service type is LoadBalancer and distinct is true:
+ # "advertised.listeners": |-
+ # EXTERNAL://kafka-$((${KAFKA_BROKER_ID})).cluster.local:19092
+ ## If external service type is LoadBalancer and distinct is false:
+- # "advertised.listeners": |-
+- # EXTERNAL://${LOAD_BALANCER_IP}:31090
++ "advertised.listeners": |-
++ EXTERNAL://${LOAD_BALANCER_IP}:31090
+ ## Uncomment to define the EXTERNAL Listener protocol
+- # "listener.security.protocol.map": |-
+- # PLAINTEXT:PLAINTEXT,EXTERNAL:PLAINTEXT
++ "listener.security.protocol.map": |-
++ PLAINTEXT:PLAINTEXT,EXTERNAL:PLAINTEXT
+
+$ kubectl -n kafka get svc
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+kafka ClusterIP 10.39.241.217 9092/TCP 2m39s
+kafka-0-external LoadBalancer 10.39.242.45 35.200.238.174 31090:30108/TCP 2m39s
+kafka-1-external LoadBalancer 10.39.241.90 35.244.44.162 31090:30582/TCP 2m39s
+kafka-2-external LoadBalancer 10.39.243.160 35.200.149.80 31090:30539/TCP 2m39s
+kafka-headless ClusterIP None 9092/TCP 2m39s
+kafka-zookeeper ClusterIP 10.39.249.70 2181/TCP 2m39s
+kafka-zookeeper-headless ClusterIP None 2181/TCP,3888/TCP,2888/TCP 2m39s
+
+$ kafkacat -b 35.200.238.174:31090 -L
+Metadata for all topics (from broker 0: 35.200.238.174:31090/0):
+ 3 brokers:
+ broker 2 at 35.200.149.80:31090
+ broker 1 at 35.244.44.162:31090
+ broker 0 at 35.200.238.174:31090
+ 0 topics:
+
+$ kafkacat -b 35.200.238.174:31090 -P -t gkeTest -p 0
+msg02 for topic gkeTest
+
+$ kafkacat -b 35.200.238.174:31090 -C -t gkeTest -p 0
+msg02 for topic gkeTest
+```
+
## Known Limitations
* Only supports storage options that have backends for persistent volume claims (tested mostly on AWS)
diff --git a/incubator/kafka/requirements.lock b/incubator/kafka/requirements.lock
index 59e9ca7d2479..bdee6836f1f9 100644
--- a/incubator/kafka/requirements.lock
+++ b/incubator/kafka/requirements.lock
@@ -1,6 +1,6 @@
dependencies:
- name: zookeeper
repository: https://kubernetes-charts-incubator.storage.googleapis.com/
- version: 1.2.0
-digest: sha256:48de211cbffc0b7df9995edc4fd5d693e8bbc94e684aa83c11e6f94803f0e8b9
-generated: 2018-11-26T17:47:36.893674-05:00
+ version: 1.3.1
+digest: sha256:c21214b1f44972d0c120e73c9ff4af5f420a1e6c5c387e0ef440a181c45f053e
+generated: 2019-05-23T15:54:57.740788654-04:00
diff --git a/incubator/kafka/requirements.yaml b/incubator/kafka/requirements.yaml
index 2bee53a96d6d..13ff52e7650e 100644
--- a/incubator/kafka/requirements.yaml
+++ b/incubator/kafka/requirements.yaml
@@ -1,6 +1,6 @@
dependencies:
- name: zookeeper
- version: 1.2.0
+ version: 1.3.1
repository: https://kubernetes-charts-incubator.storage.googleapis.com/
condition: zookeeper.enabled
diff --git a/incubator/kafka/templates/NOTES.txt b/incubator/kafka/templates/NOTES.txt
index d4450e109332..597947f5fa61 100644
--- a/incubator/kafka/templates/NOTES.txt
+++ b/incubator/kafka/templates/NOTES.txt
@@ -19,20 +19,20 @@ You can connect to Kafka by running a simple pod in the K8s cluster like this wi
Once you have the testclient pod above running, you can list all kafka
topics with:
- kubectl -n {{ .Release.Namespace }} exec testclient -- /usr/bin/kafka-topics --zookeeper {{ .Release.Name }}-zookeeper:2181 --list
+ kubectl -n {{ .Release.Namespace }} exec testclient -- /opt/kafka/bin/kafka-topics.sh --zookeeper {{ .Release.Name }}-zookeeper:2181 --list
To create a new topic:
- kubectl -n {{ .Release.Namespace }} exec testclient -- /usr/bin/kafka-topics --zookeeper {{ .Release.Name }}-zookeeper:2181 --topic test1 --create --partitions 1 --replication-factor 1
+ kubectl -n {{ .Release.Namespace }} exec testclient -- /opt/kafka/bin/kafka-topics.sh --zookeeper {{ .Release.Name }}-zookeeper:2181 --topic test1 --create --partitions 1 --replication-factor 1
To listen for messages on a topic:
- kubectl -n {{ .Release.Namespace }} exec -ti testclient -- /usr/bin/kafka-console-consumer --bootstrap-server {{ include "kafka.fullname" . }}:9092 --topic test1 --from-beginning
+ kubectl -n {{ .Release.Namespace }} exec -ti testclient -- /opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server {{ include "kafka.fullname" . }}:9092 --topic test1 --from-beginning
To stop the listener session above press: Ctrl+C
To start an interactive message producer session:
- kubectl -n {{ .Release.Namespace }} exec -ti testclient -- /usr/bin/kafka-console-producer --broker-list {{ include "kafka.fullname" . }}-headless:9092 --topic test1
+ kubectl -n {{ .Release.Namespace }} exec -ti testclient -- /opt/kafka/bin/kafka-console-producer.sh --broker-list {{ include "kafka.fullname" . }}-headless:9092 --topic test1
To create a message in the above session, simply type the message and press "enter"
To end the producer session try: Ctrl+C
diff --git a/incubator/kafka/templates/deployment-kafka-exporter.yaml b/incubator/kafka/templates/deployment-kafka-exporter.yaml
index d43aab1f773e..709ea0c743e6 100644
--- a/incubator/kafka/templates/deployment-kafka-exporter.yaml
+++ b/incubator/kafka/templates/deployment-kafka-exporter.yaml
@@ -35,4 +35,16 @@ spec:
- containerPort: {{ .Values.prometheus.kafka.port }}
resources:
{{ toYaml .Values.prometheus.kafka.resources | indent 10 }}
+{{- if .Values.prometheus.kafka.tolerations }}
+ tolerations:
+{{ toYaml .Values.prometheus.kafka.tolerations | indent 8 }}
+{{- end }}
+{{- if .Values.prometheus.kafka.affinity }}
+ affinity:
+{{ toYaml .Values.prometheus.kafka.affinity | indent 8 }}
+{{- end }}
+{{- if .Values.prometheus.kafka.nodeSelector }}
+ nodeSelector:
+{{ toYaml .Values.prometheus.kafka.nodeSelector | indent 8 }}
+{{- end }}
{{- end }}
diff --git a/incubator/kafka/templates/job-config.yaml b/incubator/kafka/templates/job-config.yaml
index 21cb7c89a8fd..54bf4f73be58 100644
--- a/incubator/kafka/templates/job-config.yaml
+++ b/incubator/kafka/templates/job-config.yaml
@@ -10,6 +10,7 @@ metadata:
heritage: "{{ .Release.Service }}"
release: "{{ .Release.Name }}"
spec:
+ backoffLimit: {{ .Values.configJob.backoffLimit }}
template:
metadata:
labels:
diff --git a/incubator/kafka/templates/rbac.yaml b/incubator/kafka/templates/rbac.yaml
deleted file mode 100644
index 0173ab66b492..000000000000
--- a/incubator/kafka/templates/rbac.yaml
+++ /dev/null
@@ -1,36 +0,0 @@
-{{- if .Values.rbac.enabled }}
----
-apiVersion: v1
-kind: ServiceAccount
-metadata:
- name: {{ .Release.Name }}
- namespace: {{ .Release.Namespace }}
----
-apiVersion: rbac.authorization.k8s.io/v1beta1
-kind: Role
-metadata:
- name: {{ .Release.Name }}
- namespace: {{ .Release.Namespace }}
-rules:
-- apiGroups:
- - ""
- resources:
- - pods
- verbs:
- - get
- - list
- - patch
----
-kind: RoleBinding
-apiVersion: rbac.authorization.k8s.io/v1beta1
-metadata:
- name: {{ .Release.Name }}
-roleRef:
- kind: Role
- name: {{ .Release.Name }}
- apiGroup: rbac.authorization.k8s.io
-subjects:
-- kind: ServiceAccount
- name: {{ .Release.Name }}
- namespace: {{ .Release.Namespace }}
-{{- end }}
diff --git a/incubator/kafka/templates/service-brokers-external.yaml b/incubator/kafka/templates/service-brokers-external.yaml
index a0813b211e74..8d06c7c1b1ea 100644
--- a/incubator/kafka/templates/service-brokers-external.yaml
+++ b/incubator/kafka/templates/service-brokers-external.yaml
@@ -2,6 +2,7 @@
{{- $fullName := include "kafka.fullname" . }}
{{- $replicas := .Values.replicas | int }}
{{- $servicePort := .Values.external.servicePort }}
+ {{- $firstListenerPort := .Values.external.firstListenerPort }}
{{- $dnsPrefix := printf "%s" .Release.Name }}
{{- $root := . }}
{{- range $i, $e := until $replicas }}
@@ -45,11 +46,17 @@ spec:
ports:
- name: external-broker
{{- if and (eq $root.Values.external.type "LoadBalancer") (not $root.Values.external.distinct) }}
- port: {{ $externalListenerPort }}
+ port: {{ $firstListenerPort }}
{{- else }}
port: {{ $servicePort }}
{{- end }}
+ {{- if and (eq $root.Values.external.type "LoadBalancer") ($root.Values.external.distinct) }}
+ targetPort: {{ $servicePort }}
+ {{- else if and (eq $root.Values.external.type "LoadBalancer") (not $root.Values.external.distinct) }}
+ targetPort: {{ $firstListenerPort }}
+ {{- else }}
targetPort: {{ $externalListenerPort }}
+ {{- end }}
{{- if eq $root.Values.external.type "NodePort" }}
nodePort: {{ $externalListenerPort }}
{{- end }}
@@ -60,6 +67,6 @@ spec:
selector:
app: {{ include "kafka.name" $root }}
release: {{ $root.Release.Name }}
- pod: {{ $responsiblePod | quote }}
+ statefulset.kubernetes.io/pod-name: {{ $responsiblePod | quote }}
{{- end }}
{{- end }}
diff --git a/incubator/kafka/templates/statefulset.yaml b/incubator/kafka/templates/statefulset.yaml
index 85cc3f5a9085..58febd9e0a6d 100644
--- a/incubator/kafka/templates/statefulset.yaml
+++ b/incubator/kafka/templates/statefulset.yaml
@@ -16,6 +16,7 @@ spec:
replicas: {{ default 3 .Values.replicas }}
template:
metadata:
+{{- if or .Values.podAnnotations (and .Values.prometheus.jmx.enabled (not .Values.prometheus.operator.enabled)) }}
annotations:
{{- if and .Values.prometheus.jmx.enabled (not .Values.prometheus.operator.enabled) }}
prometheus.io/scrape: "true"
@@ -23,37 +24,19 @@ spec:
{{- end }}
{{- if .Values.podAnnotations }}
{{ toYaml .Values.podAnnotations | indent 8 }}
+{{- end }}
{{- end }}
labels:
app: {{ include "kafka.name" . }}
release: {{ .Release.Name }}
+ {{- if .Values.podLabels }}
+ ## Custom pod labels
+{{ toYaml .Values.podLabels | indent 8 }}
+ {{- end }}
spec:
{{- if .Values.schedulerName }}
schedulerName: "{{ .Values.schedulerName }}"
{{- end }}
-{{- if .Values.rbac.enabled }}
- serviceAccountName: {{ .Release.Name }}
-{{- end }}
- {{- if .Values.external.enabled }}
- ## ref: https://github.com/Yolean/kubernetes-kafka/blob/master/kafka/50kafka.yml
- initContainers:
- - name: init-ext
- image: "{{ .Values.external.init.image }}:{{ .Values.external.init.imageTag }}"
- imagePullPolicy: "{{ .Values.external.init.imagePullPolicy }}"
- command:
- - sh
- - -euxc
- - "kubectl label pods ${POD_NAME} --namespace ${POD_NAMESPACE} pod=${POD_NAME} --overwrite"
- env:
- - name: POD_NAME
- valueFrom:
- fieldRef:
- fieldPath: metadata.name
- - name: POD_NAMESPACE
- valueFrom:
- fieldRef:
- fieldPath: metadata.namespace
- {{- end }}
{{- if .Values.tolerations }}
tolerations:
{{ toYaml .Values.tolerations | indent 8 }}
@@ -154,10 +137,10 @@ spec:
- name: JMX_PORT
value: "{{ .Values.jmx.port }}"
{{- end }}
- - name: POD_NAME
+ - name: POD_IP
valueFrom:
fieldRef:
- fieldPath: metadata.name
+ fieldPath: status.podIP
- name: POD_NAME
valueFrom:
fieldRef:
@@ -209,6 +192,9 @@ spec:
- |
unset KAFKA_PORT && \
export KAFKA_BROKER_ID=${POD_NAME##*-} && \
+ {{- if eq .Values.external.type "LoadBalancer" }}
+ export LOAD_BALANCER_IP=$(echo '{{ .Values.external.loadBalancerIP }}' | tr -d '[]' | cut -d ' ' -f "$(($KAFKA_BROKER_ID + 1))") && \
+ {{- end }}
export KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://${POD_NAME}.{{ include "kafka.fullname" . }}-headless.${POD_NAMESPACE}:9092{{ if kindIs "string" $advertisedListenersOverride }}{{ printf ",%s" $advertisedListenersOverride }}{{ end }} && \
exec /etc/confluent/docker/run
volumeMounts:
@@ -244,6 +230,10 @@ spec:
name: {{ include "kafka.fullname" . }}-metrics
{{- end }}
{{- end }}
+ {{- if .Values.securityContext }}
+ securityContext:
+{{ toYaml .Values.securityContext | indent 8 }}
+ {{- end }}
{{- range .Values.secrets }}
- name: {{ include "kafka.fullname" $ }}-{{ .name }}
secret:
diff --git a/incubator/kafka/values.yaml b/incubator/kafka/values.yaml
index 24c0bc098a0f..0ec8c9f435d3 100644
--- a/incubator/kafka/values.yaml
+++ b/incubator/kafka/values.yaml
@@ -26,6 +26,9 @@ resources: {}
# memory: 1024Mi
kafkaHeapOptions: "-Xmx1G -Xms1G"
+## Optional Container Security context
+securityContext: {}
+
## The StatefulSet Update Strategy which Kafka will use when changes are applied.
## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
updateStrategy:
@@ -35,11 +38,6 @@ updateStrategy:
## ref: https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#pod-management-policy
podManagementPolicy: OrderedReady
-## If RBAC is enabled on the cluster, the Kafka init container needs a service account
-## with permissisions sufficient to apply pod labels
-rbac:
- enabled: false
-
## Useful if using any custom authorizer
## Pass in some secrets to use (if required)
# secrets:
@@ -132,15 +130,18 @@ headless:
## External access.
##
external:
+ enabled: false
+ # type can be either NodePort or LoadBalancer
type: NodePort
# annotations:
# service.beta.kubernetes.io/openstack-internal-load-balancer: "true"
dns:
useInternal: false
useExternal: true
- # create an A record for each statefulset pod
+ # If using external service type LoadBalancer and external dns, set distinct to true below.
+ # This creates an A record for each statefulset pod/broker. You should then map the
+ # A record of the broker to the EXTERNAL IP given by the LoadBalancer in your DNS server.
distinct: false
- enabled: false
servicePort: 19092
firstListenerPort: 31090
domain: cluster.local
@@ -153,6 +154,12 @@ external:
# Annotation to be added to Kafka pods
podAnnotations: {}
+# Labels to be added to Kafka pods
+podLabels: {}
+ # service: broker
+ # team: developers
+
+
## Configuration Overrides. Specify any Kafka settings you would like set on the StatefulSet
## here in map format, as defined in the official docs.
## ref: https://kafka.apache.org/documentation/#brokerconfigs
@@ -170,9 +177,17 @@ configurationOverrides:
## - http://kafka.apache.org/documentation/#security_configbroker
## - https://cwiki.apache.org/confluence/display/KAFKA/KIP-103%3A+Separation+of+Internal+and+External+traffic
##
- ## Setting "advertised.listeners" here appends to "PLAINTEXT://${POD_IP}:9092,"
+ ## Setting "advertised.listeners" here appends to "PLAINTEXT://${POD_IP}:9092,"; be sure to update the domain
+ ## If external service type is Nodeport:
# "advertised.listeners": |-
# EXTERNAL://kafka.cluster.local:$((31090 + ${KAFKA_BROKER_ID}))
+ ## If external service type is LoadBalancer and distinct is true:
+ # "advertised.listeners": |-
+ # EXTERNAL://kafka-$((${KAFKA_BROKER_ID})).cluster.local:19092
+ ## If external service type is LoadBalancer and distinct is false:
+ # "advertised.listeners": |-
+ # EXTERNAL://${LOAD_BALANCER_IP}:31090
+ ## Uncomment to define the EXTERNAL Listener protocol
# "listener.security.protocol.map": |-
# PLAINTEXT:PLAINTEXT,EXTERNAL:PLAINTEXT
@@ -306,6 +321,39 @@ prometheus:
# cpu: 100m
# memory: 100Mi
+ # Tolerations for nodes that have taints on them.
+ # Useful if you want to dedicate nodes to just run kafka-exporter
+ # https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
+ tolerations: []
+ # tolerations:
+ # - key: "key"
+ # operator: "Equal"
+ # value: "value"
+ # effect: "NoSchedule"
+
+ ## Pod scheduling preferences for the kafka-exporter pod.
+ ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
+ ## By default we don't set affinity
+ affinity: {}
+ ## For example, this affinity block encourages collocating
+ ## Kafka Exporter pods with Kafka pods:
+ # affinity:
+ # podAffinity:
+ # preferredDuringSchedulingIgnoredDuringExecution:
+ # - weight: 50
+ # podAffinityTerm:
+ # labelSelector:
+ # matchExpressions:
+ # - key: app
+ # operator: In
+ # values:
+ # - kafka
+ # topologyKey: "kubernetes.io/hostname"
+
+ ## Node labels for pod assignment
+ ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
+ nodeSelector: {}
+
operator:
## Are you using Prometheus Operator?
enabled: false
@@ -320,6 +368,13 @@ prometheus:
selector:
prometheus: kube-prometheus
+## Kafka Config job configuration
+##
+configJob:
+ ## Number of retries before the kafka-config job is considered failed.
+ ## https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#pod-backoff-failure-policy
+ backoffLimit: 6
+
## Topic creation and configuration.
## The job will be run on a deployment only when the config has been changed.
## - If 'partitions' and 'replicationFactor' are specified we create the topic (with --if-not-exists.)
diff --git a/incubator/kube-registry-proxy/Chart.yaml b/incubator/kube-registry-proxy/Chart.yaml
index ab34612858ab..f00235c53ff2 100644
--- a/incubator/kube-registry-proxy/Chart.yaml
+++ b/incubator/kube-registry-proxy/Chart.yaml
@@ -1,5 +1,9 @@
+apiVersion: v1
name: kube-registry-proxy
home: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/registry
-version: 0.3.0
+version: 0.3.1
appVersion: 0.4
description: Installs the kubernetes-registry-proxy cluster addon.
+maintainers:
+- name: maorfr
+ email: maor.friedman@redhat.com
diff --git a/incubator/kube-spot-termination-notice-handler/Chart.yaml b/incubator/kube-spot-termination-notice-handler/Chart.yaml
deleted file mode 100644
index 492843929feb..000000000000
--- a/incubator/kube-spot-termination-notice-handler/Chart.yaml
+++ /dev/null
@@ -1,11 +0,0 @@
-apiVersion: v1
-description: Watch and action AWS spot termination events
-name: kube-spot-termination-notice-handler
-version: 0.4.0
-appVersion: 1.10.8-1
-home: https://github.com/kube-aws/kube-spot-termination-notice-handler
-source:
- - https://hub.docker.com/r/kubeaws/kube-spot-termination-notice-handler/
-maintainers:
- - name: egeland
- email: egeland@gmail.com
diff --git a/incubator/kube-spot-termination-notice-handler/README.md b/incubator/kube-spot-termination-notice-handler/README.md
deleted file mode 100644
index 31569192ddd8..000000000000
--- a/incubator/kube-spot-termination-notice-handler/README.md
+++ /dev/null
@@ -1,37 +0,0 @@
-# Kubernetes AWS EC2 Spot Termination Notice Handler
-
-This chart installs the [kube-spot-termination-notice-handler](https://github.com/kube-aws/kube-spot-termination-notice-handler) as a daemonset across the cluster nodes.
-
-## Purpose
-
-The handler watches for Spot termination events, and will do the following if detected:
-
-* Drain the affected node
-
-* [Optional] Send a message to a Slack channel informing that a termination notice has been received.
-
-## Installation
-
-You should install into the `kube-system` namespace, but this is not a requirement. The following example assumes this has been chosen.
-
-```
-helm install incubator/kube-spot-termination-notice-handler --name-space kube-system
-```
-
-## Configuration
-
-You may set these options in your values file:
-
-* `enableLogspout` - if you use Logspout to capture logs, this option will ensure your logs are captured. The logs are noisy, and as such are disabled from Logspout by default.
-
-* `slackUrl` - optional - put a slack webhook URL here to get messaged when a termination notice is received.
-
-* `clusterName` - optional - when slack is configured use this cluster name for reports
-
-* `pollInterval` - how often to query the EC2 metadata for termination notices. Defaults to every `5` seconds.
-
-* `rbac.create` - Specifies whether RBAC resources should be created. Defaults to `true`.
-
-* `serviceAccount.create` - Specifies whether a ServiceAccount should be created. Defaults to `true`.
-
-* `serviceAccount.name` - The name of the ServiceAccount to use. If not set and create is true, a name is generated using the fullname template.
diff --git a/incubator/kube-spot-termination-notice-handler/templates/NOTES.txt b/incubator/kube-spot-termination-notice-handler/templates/NOTES.txt
deleted file mode 100644
index 00d334555247..000000000000
--- a/incubator/kube-spot-termination-notice-handler/templates/NOTES.txt
+++ /dev/null
@@ -1 +0,0 @@
-# Notes TBC
diff --git a/incubator/kube-spot-termination-notice-handler/templates/daemonset.yaml b/incubator/kube-spot-termination-notice-handler/templates/daemonset.yaml
deleted file mode 100644
index 82399d81081e..000000000000
--- a/incubator/kube-spot-termination-notice-handler/templates/daemonset.yaml
+++ /dev/null
@@ -1,52 +0,0 @@
-apiVersion: extensions/v1beta1
-kind: DaemonSet
-metadata:
- name: {{ template "fullname" . }}
- labels:
- app: {{ template "name" . }}
- chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
- release: {{ .Release.Name }}
- heritage: {{ .Release.Service }}
-spec:
- template:
- metadata:
- labels:
- app: {{ template "name" . }}
- release: {{ .Release.Name }}
- spec:
- serviceAccountName: {{ template "kube-spot-termination-notice-handler.serviceAccountName" . }}
- containers:
- - name: {{ .Chart.Name }}
- image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
- imagePullPolicy: {{ .Values.image.pullPolicy }}
- env:
- {{- if not .Values.enableLogspout }}
- - name: LOGSPOUT
- value: "ignore"
- {{- end }}
- {{- with .Values.slackUrl }}
- - name: SLACK_URL
- value: {{ . | quote }}
- {{- end }}
- - name: POLL_INTERVAL
- value: {{ .Values.pollInterval | quote }}
- - name: CLUSTER
- value: {{ .Values.clusterName | quote }}
- - name: POD_NAME
- valueFrom:
- fieldRef:
- fieldPath: metadata.name
- - name: NAMESPACE
- valueFrom:
- fieldRef:
- fieldPath: metadata.namespace
- resources:
-{{ toYaml .Values.resources | indent 12 }}
- {{- if .Values.nodeSelector }}
- nodeSelector:
-{{ toYaml .Values.nodeSelector | indent 8 }}
- {{- end }}
-{{- if .Values.tolerations }}
- tolerations:
-{{ toYaml .Values.tolerations | indent 8 }}
- {{- end }}
diff --git a/incubator/kube-spot-termination-notice-handler/templates/rbac.yaml b/incubator/kube-spot-termination-notice-handler/templates/rbac.yaml
deleted file mode 100644
index 9ff2517243b2..000000000000
--- a/incubator/kube-spot-termination-notice-handler/templates/rbac.yaml
+++ /dev/null
@@ -1,66 +0,0 @@
-{{- if .Values.rbac.create -}}
-kind: ClusterRoleBinding
-apiVersion: rbac.authorization.k8s.io/v1beta1
-metadata:
- name: {{ template "fullname" . }}
- labels:
- app: {{ template "fullname" . }}
- chart: {{ .Chart.Name }}
- release: {{ .Release.Name }}
- heritage: {{ .Release.Service }}
-roleRef:
- kind: ClusterRole
- name: {{ template "fullname" . }}
- apiGroup: rbac.authorization.k8s.io
-subjects:
-- kind: ServiceAccount
- namespace: {{ .Release.Namespace | quote }}
- name: {{ template "kube-spot-termination-notice-handler.serviceAccountName" . }}
----
-apiVersion: rbac.authorization.k8s.io/v1beta1
-kind: ClusterRole
-metadata:
- name: {{ template "fullname" . }}
- labels:
- app: {{ template "fullname" . }}
- chart: {{ .Chart.Name }}
- release: {{ .Release.Name }}
- heritage: {{ .Release.Service }}
-rules:
-- apiGroups:
- - ""
- resources:
- - pods
- verbs:
- - get
- - list
-- apiGroups:
- - extensions
- resources:
- - replicasets
- - daemonsets
- verbs:
- - get
- - list
-- apiGroups:
- - apps
- resources:
- - statefulsets
- verbs:
- - get
- - list
-- apiGroups:
- - ""
- resources:
- - nodes
- verbs:
- - get
- - list
- - patch
-- apiGroups:
- - ""
- resources:
- - pods/eviction
- verbs:
- - create
-{{- end -}}
diff --git a/incubator/kube-spot-termination-notice-handler/templates/serviceaccount.yaml b/incubator/kube-spot-termination-notice-handler/templates/serviceaccount.yaml
deleted file mode 100644
index 67496367b889..000000000000
--- a/incubator/kube-spot-termination-notice-handler/templates/serviceaccount.yaml
+++ /dev/null
@@ -1,11 +0,0 @@
-{{- if .Values.serviceAccount.create -}}
-apiVersion: v1
-kind: ServiceAccount
-metadata:
- name: {{ template "kube-spot-termination-notice-handler.serviceAccountName" . }}
- labels:
- app: {{ template "fullname" . }}
- chart: {{ .Chart.Name }}
- release: {{ .Release.Name }}
- heritage: {{ .Release.Service }}
-{{- end -}}
diff --git a/incubator/kube-spot-termination-notice-handler/values.yaml b/incubator/kube-spot-termination-notice-handler/values.yaml
deleted file mode 100644
index a83ad5c9eb76..000000000000
--- a/incubator/kube-spot-termination-notice-handler/values.yaml
+++ /dev/null
@@ -1,48 +0,0 @@
-# Default values for kube-spot-termination-notice-handler.
-# This is a YAML-formatted file.
-# Declare variables to be passed into your templates.
-image:
- repository: kubeaws/kube-spot-termination-notice-handler
- tag: 1.10.8-1
- pullPolicy: IfNotPresent
-
-# Poll the metadata every pollInterval seconds for termination events:
-pollInterval: 5
-
-# Send notifications to a Slack webhook URL - replace with your own value and uncomment:
-# slackUrl: https://hooks.slack.com/services/EXAMPLE123/EXAMPLE123/example1234567
-
-# Set the cluster name to be reported in a Slack message
-# clusterName: test
-
-# Silence logspout by default - set to true to enable logs arriving in logspout
-enableLogspout: false
-
-resources: {}
-# We usually recommend not to specify default resources and to leave this as a conscious
-# choice for the user. This also increases chances charts run on environments with little
-# resources, such as Minikube. If you do want to specify resources, uncomment the following
-# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
-# limits:
-# cpu: 100m
-# memory: 128Mi
-# requests:
-# cpu: 100m
-# memory: 128Mi
-
-rbac:
- # Specifies whether RBAC resources should be created
- create: true
-
-serviceAccount:
- # Specifies whether a service account should be created
- create: true
- # The name of the service account to use.
- # If not set and create is true, a name is generated using the fullname template
- name:
-
-tolerations: []
- # key: "dedicated"
- # operator: "Equal"
- # value: "gpu"
- # effect: "NoSchedule"
diff --git a/incubator/kubeless/Chart.yaml b/incubator/kubeless/Chart.yaml
index d201c1e16956..f38bf9f84ce3 100644
--- a/incubator/kubeless/Chart.yaml
+++ b/incubator/kubeless/Chart.yaml
@@ -1,6 +1,6 @@
name: kubeless
-version: 2.0.1
-appVersion: v1.0.1
+version: 2.0.5
+appVersion: v1.0.3
 description: Kubeless is a Kubernetes-native serverless framework. It runs on top of your Kubernetes cluster and allows you to deploy small units of code without having to build container images.
icon: https://cloud.githubusercontent.com/assets/4056725/25480209/1d5bf83c-2b48-11e7-8db8-bcd650f31297.png
apiVersion: v1
diff --git a/incubator/kubeless/README.md b/incubator/kubeless/README.md
index 5c301bc8b9e9..e54e78a1f224 100644
--- a/incubator/kubeless/README.md
+++ b/incubator/kubeless/README.md
@@ -56,10 +56,11 @@ The following table lists the configurable parameters of the Kubeless chart and
| Parameter | Description | Default |
| ----------------------------------------------------------------- | ------------------------------------------ | ----------------------------------------- |
| `rbac.create` | Create RBAC backed ServiceAccount | `false` |
+| `config.functionsNamespace` | Functions namespace | "" |
| `config.builderImage` | Function builder image | `kubeless/function-image-builder` |
| `config.builderImagePullSecret` | Secret to pull builder image | "" |
-| `config.builderImage` | Provision image | `kubeless/unzip` |
-| `config.builderImagePullSecret` | Secret to pull provision image | "" |
+| `config.provisionImage` | Provision image | `kubeless/unzip` |
+| `config.provisionImagePullSecret` | Secret to pull provision image | "" |
| `config.deploymentTemplate` | Deployment template for functions | `{}` |
| `config.enableBuildStep` | Enable builder functionality | `false` |
| `config.functionRegistryTLSVerify` | Enable TLS verification for image registry | `{}` |
diff --git a/incubator/kubeless/templates/kubeless-config.yaml b/incubator/kubeless/templates/kubeless-config.yaml
index 051b3e30ede2..8afe5024e695 100644
--- a/incubator/kubeless/templates/kubeless-config.yaml
+++ b/incubator/kubeless/templates/kubeless-config.yaml
@@ -1,5 +1,6 @@
apiVersion: v1
data:
+ functions-namespace: {{ default "" .Values.config.functionsNamespace | quote }}
builder-image: "{{ .Values.config.builderImage }}:{{ .Values.controller.deployment.functionController.image.tag }}"
builder-image-secret: "{{ .Values.config.builderImagePullSecret }}"
deployment: "{{ .Values.config.deploymentTemplate }}"
diff --git a/incubator/kubeless/values.yaml b/incubator/kubeless/values.yaml
index db54ae81ac99..ca0423f4eabf 100644
--- a/incubator/kubeless/values.yaml
+++ b/incubator/kubeless/values.yaml
@@ -10,12 +10,12 @@ controller:
functionController:
image:
repository: kubeless/function-controller
- tag: v1.0.1
+ tag: v1.0.3
pullPolicy: IfNotPresent
httpTriggerController:
image:
repository: bitnami/http-trigger-controller
- tag: v1.0.0-alpha.9
+ tag: v1.0.0
pullPolicy: IfNotPresent
cronJobTriggerController:
image:
@@ -35,6 +35,7 @@ controller:
## Kubeless configuration
config:
+ functionsNamespace: ""
builderImage: kubeless/function-image-builder
builderImagePullSecret: ""
deploymentTemplate: '{}'
diff --git a/incubator/mysqlha/Chart.yaml b/incubator/mysqlha/Chart.yaml
index f4e71886922f..0486feeeb7d3 100644
--- a/incubator/mysqlha/Chart.yaml
+++ b/incubator/mysqlha/Chart.yaml
@@ -1,5 +1,6 @@
+apiVersion: v1
name: mysqlha
-version: 0.4.0
+version: 0.5.1
appVersion: 5.7.13
description: MySQL cluster with a single master and zero or more slave replicas
keywords:
diff --git a/incubator/mysqlha/README.md b/incubator/mysqlha/README.md
index 2736cd786034..a280e3d757b9 100644
--- a/incubator/mysqlha/README.md
+++ b/incubator/mysqlha/README.md
@@ -33,25 +33,34 @@ $ helm delete my-release
The following table lists the configurable parameters of the MySQL chart and their default values.
-| Parameter | Description | Default |
-| ----------------------- | ----------------------------------- | -------------------------------------- |
-| `mysqlImage` | `mysql` image and tag. | `mysql:5.7.13` |
-| `xtraBackupImage` | `xtrabackup` image and tag. | `gcr.io/google-samples/xtrabackup:1.0` |
-| `replicaCount` | Number of MySQL replicas | 3 |
-| `mysqlRootPassword` | Password for the `root` user. | Randomly generated |
-| `mysqlUser` | Username of new user to create. | `nil` |
-| `mysqlPassword` | Password for the new user. | Randomly generated |
-| `mysqlReplicationUser` | Username for replication user | `repl` |
-| `mysqlReplicationPassword` | Password for replication user. | Randomly generated |
-| `mysqlDatabase` | Name of the new Database to create | `nil` |
-| `configFiles.master.cnf` | Master configuration file | See `values.yaml` |
-| `configFiles.slave.cnf` | Slave configuration file | See `values.yaml` |
-| `persistence.enabled` | Create a volume to store data | true |
-| `persistence.size` | Size of persistent volume claim | 10Gi |
-| `persistence.storageClass` | Type of persistent volume claim | `nil` |
-| `persistence.accessModes` | Persistent volume access modes | `[ReadWriteOnce]` |
-| `persistence.annotations` | Persistent volume annotations | `{}` |
-| `resources` | CPU/Memory resource requests/limits | Memory: `128Mi`, CPU: `100m` |
+| Parameter | Description | Default |
+| ----------------------------------------- | ------------------------------------------------- | -------------------------------------- |
+| `mysqlImage` | `mysql` image and tag. | `mysql:5.7.13` |
+| `xtraBackupImage` | `xtrabackup` image and tag. | `gcr.io/google-samples/xtrabackup:1.0` |
+| `replicaCount` | Number of MySQL replicas | 3 |
+| `mysqlRootPassword` | Password for the `root` user. | Randomly generated |
+| `mysqlUser` | Username of new user to create. | `nil` |
+| `mysqlPassword` | Password for the new user. | Randomly generated |
+| `mysqlReplicationUser` | Username for replication user | `repl` |
+| `mysqlReplicationPassword` | Password for replication user. | Randomly generated |
+| `mysqlDatabase` | Name of the new Database to create | `nil` |
+| `configFiles.master.cnf` | Master configuration file | See `values.yaml` |
+| `configFiles.slave.cnf` | Slave configuration file | See `values.yaml` |
+| `persistence.enabled` | Create a volume to store data | true |
+| `persistence.size` | Size of persistent volume claim | 10Gi |
+| `persistence.storageClass` | Type of persistent volume claim | `nil` |
+| `persistence.accessModes` | Persistent volume access modes | `[ReadWriteOnce]` |
+| `persistence.annotations` | Persistent volume annotations | `{}` |
+| `resources` | CPU/Memory resource requests/limits | Memory: `128Mi`, CPU: `100m` |
+| `metrics.enabled` | Start a side-car prometheus exporter | false |
+| `metrics.image` | Exporter image | `prom/mysqld-exporter` |
+| `metrics.imageTag`                           | Exporter image tag                                | `v0.10.0`                              |
+| `metrics.imagePullPolicy` | Exporter image pull policy | `IfNotPresent` |
+| `metrics.resources` | Exporter resource requests/limit | See `values.yaml` |
+| `metrics.livenessProbe.initialDelaySeconds` | Delay before metrics liveness probe is initiated | 15 |
+| `metrics.livenessProbe.timeoutSeconds` | When the probe times out | 5 |
+| `metrics.readinessProbe.initialDelaySeconds` | Delay before metrics readiness probe is initiated | 5 |
+| `metrics.readinessProbe.timeoutSeconds` | When the probe times out | 1 |
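As a minimal example, the metrics sidecar introduced in this release could be enabled from a values file using the defaults documented above (a sketch, not the only way to set these):

```yaml
# metrics-values.yaml -- enables the Prometheus exporter sidecar
metrics:
  enabled: true
  image: prom/mysqld-exporter
  imageTag: v0.10.0
  imagePullPolicy: IfNotPresent
```

Passed at install time with `helm install incubator/mysqlha -f metrics-values.yaml`.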
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
diff --git a/incubator/mysqlha/templates/statefulset.yaml b/incubator/mysqlha/templates/statefulset.yaml
index 29db5db8f78b..e7b3745a1656 100644
--- a/incubator/mysqlha/templates/statefulset.yaml
+++ b/incubator/mysqlha/templates/statefulset.yaml
@@ -216,6 +216,39 @@ spec:
requests:
cpu: 100m
memory: 100Mi
+ {{- if .Values.metrics.enabled }}
+ - name: metrics
+ image: "{{ .Values.metrics.image }}:{{ .Values.metrics.imageTag }}"
+        imagePullPolicy: {{ .Values.metrics.imagePullPolicy | quote }}
+        {{- if .Values.mysqlAllowEmptyPassword }}
+ command: ['sh', '-c', 'DATA_SOURCE_NAME="root@(localhost:3306)/" /bin/mysqld_exporter' ]
+ {{- else }}
+ env:
+ - name: MYSQL_ROOT_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ name: {{ template "fullname" . }}
+ key: mysql-root-password
+ command: [ 'sh', '-c', 'DATA_SOURCE_NAME="root:$MYSQL_ROOT_PASSWORD@(localhost:3306)/" /bin/mysqld_exporter' ]
+ {{- end }}
+ ports:
+ - name: metrics
+ containerPort: 9104
+ livenessProbe:
+ httpGet:
+ path: /
+ port: metrics
+ initialDelaySeconds: {{ .Values.metrics.livenessProbe.initialDelaySeconds }}
+ timeoutSeconds: {{ .Values.metrics.livenessProbe.timeoutSeconds }}
+        readinessProbe:
+ httpGet:
+ path: /
+ port: metrics
+ initialDelaySeconds: {{ .Values.metrics.readinessProbe.initialDelaySeconds }}
+ timeoutSeconds: {{ .Values.metrics.readinessProbe.timeoutSeconds }}
+ resources:
+{{ toYaml .Values.metrics.resources | indent 10 }}
+ {{- end }}
volumes:
- name: conf
emptyDir: {}
diff --git a/incubator/mysqlha/templates/svc.yaml b/incubator/mysqlha/templates/svc.yaml
index 00532f257a25..f0e24c7877f5 100644
--- a/incubator/mysqlha/templates/svc.yaml
+++ b/incubator/mysqlha/templates/svc.yaml
@@ -27,9 +27,18 @@ metadata:
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
+ annotations:
+{{- if and (.Values.metrics.enabled) (.Values.metrics.annotations) }}
+{{ toYaml .Values.metrics.annotations | indent 4 }}
+{{- end }}
spec:
ports:
- name: {{ template "fullname" . }}
port: 3306
+ {{- if .Values.metrics.enabled }}
+ - name: metrics
+ port: 9104
+ targetPort: metrics
+ {{- end }}
selector:
app: {{ template "fullname" . }}
diff --git a/incubator/mysqlha/values.yaml b/incubator/mysqlha/values.yaml
index 83ba0f89ce95..cf22d6c0caeb 100644
--- a/incubator/mysqlha/values.yaml
+++ b/incubator/mysqlha/values.yaml
@@ -64,3 +64,21 @@ resources:
requests:
cpu: 100m
memory: 128Mi
+
+metrics:
+ enabled: false
+ image: prom/mysqld-exporter
+ imageTag: v0.10.0
+ imagePullPolicy: IfNotPresent
+ annotations: {}
+
+ livenessProbe:
+ initialDelaySeconds: 15
+ timeoutSeconds: 5
+ readinessProbe:
+ initialDelaySeconds: 5
+ timeoutSeconds: 1
+ resources:
+ requests:
+ cpu: 100m
+ memory: 100Mi
diff --git a/incubator/orientdb/.helmignore b/incubator/orientdb/.helmignore
new file mode 100644
index 000000000000..f0c131944441
--- /dev/null
+++ b/incubator/orientdb/.helmignore
@@ -0,0 +1,21 @@
+# Patterns to ignore when building packages.
+# This supports shell glob matching, relative path matching, and
+# negation (prefixed with !). Only one pattern per line.
+.DS_Store
+# Common VCS dirs
+.git/
+.gitignore
+.bzr/
+.bzrignore
+.hg/
+.hgignore
+.svn/
+# Common backup files
+*.swp
+*.bak
+*.tmp
+*~
+# Various IDEs
+.project
+.idea/
+*.tmproj
diff --git a/incubator/orientdb/Chart.yaml b/incubator/orientdb/Chart.yaml
new file mode 100644
index 000000000000..e900c97e4674
--- /dev/null
+++ b/incubator/orientdb/Chart.yaml
@@ -0,0 +1,13 @@
+apiVersion: v1
+name: orientdb
+home: https://orientdb.com
+description: A Helm chart for Distributed OrientDB
+version: 0.1.2
+icon: https://orientdb.com/wp-content/uploads/cropped-favicon-orientdb-192x192.png
+maintainers:
+ - name: b-yond-infinite-network
+ email: sommeliers@b-yond.com
+ url: https://www.b-yond.com
+ - name: soujiro32167
+ email: eli.kasik@b-yond.com
+appVersion: 3.0.13
diff --git a/incubator/orientdb/README.md b/incubator/orientdb/README.md
new file mode 100644
index 000000000000..3c310144b93f
--- /dev/null
+++ b/incubator/orientdb/README.md
@@ -0,0 +1,40 @@
+# Infinity OrientDB helm chart
+
+OrientDB Helm chart
+
+## Installation:
+
+`helm install . --name <release-name> --namespace <namespace> --set rootPassword=<password>`
+
+If rootPassword is not set, a random one will be used.
+
+## Scaling:
+
+Get the name of your statefulset:
+
+`kubectl get statefulsets -n <namespace>`
+
+Then scale it:
+
+`kubectl scale statefulset <statefulset-name> --replicas=<replica-count> -n <namespace>`
+
+This scaling is possible thanks to the Hazelcast plugin used for node discovery. For more information, see the config file under `config/hazelcast.xml` and http://docs.hazelcast.org/docs/3.0/manual/html/ch12s02.html
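For illustration, the member addresses that the chart's Hazelcast template renders for node discovery (one entry per replica, pointing at the headless service) follow this pattern. The release fullname and namespace below are hypothetical:

```python
# Sketch: reproduce the member list rendered by config/hazelcast.xml
# (fullname, namespace, and replica count are illustrative assumptions)
fullname = "myrelease-orientdb"
namespace = "default"
replica_count = 3

members = [
    f"{fullname}-{i}.{fullname}-headless.{namespace}.svc.cluster.local"
    for i in range(replica_count)
]

for member in members:
    print(member)
```

Each address resolves to one pod of the statefulset, which is why scaling the statefulset is enough for new nodes to join the cluster.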
+
+## Testing:
+
+`helm test <release-name> --cleanup --timeout 1000`
+
+## Accessing the UI
+
+`kubectl port-forward <pod-name> 2480:2480 -n <namespace>`
+
+Note: `<pod-name>` can be any pod from the statefulset.
+
+## Editing the hazelcast configuration
+
+The Hazelcast configuration can be edited at runtime by editing the `config.yaml` file in the chart's templates. At present, only the Hazelcast file can be edited dynamically.
+
+## Maintainers
+
+Product Engineering Team (AKA The Sommeliers) @ [B-yond](https://www.b-yond.com)
+E: sommeliers@b-yond.com
\ No newline at end of file
diff --git a/incubator/orientdb/config/default-distributed-db-config.json b/incubator/orientdb/config/default-distributed-db-config.json
new file mode 100644
index 000000000000..0481c3076a5d
--- /dev/null
+++ b/incubator/orientdb/config/default-distributed-db-config.json
@@ -0,0 +1,15 @@
+{
+ "autoDeploy": {{ .Values.distributed.autoDeploy }},
+ "executionMode": "{{ .Values.distributed.executionMode }}",
+ "readQuorum": {{ .Values.distributed.readQuorum }},
+ "writeQuorum": "{{ .Values.distributed.writeQuorum }}",
+ "readYourWrites": {{ .Values.distributed.readYourWrites }},
+ "newNodeStrategy": "{{ .Values.distributed.newNodeStrategy }}",
+ "servers": { "*": "master" },
+ "clusters": {
+ "internal": {},
+ "*": {
+ "servers": [""]
+ }
+ }
+}
\ No newline at end of file
diff --git a/incubator/orientdb/config/hazelcast.xml b/incubator/orientdb/config/hazelcast.xml
new file mode 100644
index 000000000000..323c7d9f5ecf
--- /dev/null
+++ b/incubator/orientdb/config/hazelcast.xml
@@ -0,0 +1,53 @@
+{{- $self := . -}}
+{{- $fullname := include "orientdb.fullname" . -}}
+
+
+
+
+
+ {{ .Values.hazelcast.groupName }}
+ {{ .Values.hazelcast.groupPassword }}
+
+
+ false
+ false
+ false
+ false
+ 5
+ 1
+ 1
+ 1
+ 1
+ 1
+ 5
+ 30
+ 15
+
+
+ 2434
+
+
+ 235.1.1.1
+ 2434
+
+
+
+ {{- range $i, $e := until ( .Values.replicaCount | int ) }}
+ {{ $fullname }}-{{$i}}.{{ $fullname }}-headless.{{ $self.Release.Namespace }}.svc.cluster.local
+ {{- end }}
+
+
+
+
+ 16
+
+
\ No newline at end of file
diff --git a/incubator/orientdb/templates/NOTES.txt b/incubator/orientdb/templates/NOTES.txt
new file mode 100644
index 000000000000..8068712b375f
--- /dev/null
+++ b/incubator/orientdb/templates/NOTES.txt
@@ -0,0 +1,14 @@
+1. Get your 'root' user password by running:
+ printf $(kubectl get secret --namespace {{ .Release.Namespace }} {{ .Release.Name }}-secret -o jsonpath="{.data.root-password}" | base64 --decode);echo
+
+2. Access the UI using the following command
+ kubectl port-forward --namespace {{ .Release.Namespace }} {{ template "orientdb.fullname" . }}-0 2480:2480
+
+{{- if .Values.persistence.enabled }}
+{{- else }}
+#################################################################################
+###### WARNING: Persistence is disabled!!! You will lose your data when #####
+###### the Orient Cluster is terminated. #####
+#################################################################################
+{{- end }}
+
diff --git a/incubator/orientdb/templates/_helpers.tpl b/incubator/orientdb/templates/_helpers.tpl
new file mode 100644
index 000000000000..0a4361ffc62d
--- /dev/null
+++ b/incubator/orientdb/templates/_helpers.tpl
@@ -0,0 +1,34 @@
+{{/* vim: set filetype=mustache: */}}
+{{/*
+Expand the name of the chart.
+*/}}
+{{- define "orientdb.name" -}}
+{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+
+{{/*
+Create a default fully qualified app name.
+We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
+If release name contains chart name it will be used as a full name.
+*/}}
+{{- define "orientdb.fullname" -}}
+{{- if .Values.fullnameOverride -}}
+{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
+{{- else -}}
+{{- $name := default .Chart.Name .Values.nameOverride -}}
+{{- if contains $name .Release.Name -}}
+{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
+{{- else -}}
+{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+{{- end -}}
+{{- end -}}
+
+
+
+{{/*
+Create chart name and version as used by the chart label.
+*/}}
+{{- define "orientdb.chart" -}}
+{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
diff --git a/incubator/orientdb/templates/config.yaml b/incubator/orientdb/templates/config.yaml
new file mode 100644
index 000000000000..8e60ef702ebd
--- /dev/null
+++ b/incubator/orientdb/templates/config.yaml
@@ -0,0 +1,11 @@
+{{- $self := . -}}
+
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: {{ template "orientdb.fullname" . }}-configmap
+data:
+ {{ range $path, $bytes := .Files.Glob "config/*" }}
+ {{ base $path }}: |-
+{{ tpl ($self.Files.Get $path) $self | indent 4 }}
+ {{ end }}
\ No newline at end of file
diff --git a/incubator/orientdb/templates/ingress.yaml b/incubator/orientdb/templates/ingress.yaml
new file mode 100644
index 000000000000..44a0f6addb4e
--- /dev/null
+++ b/incubator/orientdb/templates/ingress.yaml
@@ -0,0 +1,39 @@
+{{- if .Values.ingress.enabled -}}
+{{- $fullName := include "orientdb.fullname" . -}}
+{{- $servicePort := .Values.service.port -}}
+{{- $ingressPath := .Values.ingress.path -}}
+apiVersion: extensions/v1beta1
+kind: Ingress
+metadata:
+ name: {{ template "orientdb.fullname" . }}
+ labels:
+ app: {{ template "orientdb.name" . }}
+ chart: {{ template "orientdb.chart" . }}
+ release: {{ .Release.Name }}
+ heritage: {{ .Release.Service }}
+{{- with .Values.ingress.annotations }}
+ annotations:
+{{ toYaml . | indent 4 }}
+{{- end }}
+spec:
+{{- if .Values.ingress.tls }}
+ tls:
+ {{- range .Values.ingress.tls }}
+ - hosts:
+ {{- range .hosts }}
+ - {{ . }}
+ {{- end }}
+ secretName: {{ .secretName }}
+ {{- end }}
+{{- end }}
+ rules:
+ {{- range .Values.ingress.hosts }}
+ - host: {{ . }}
+ http:
+ paths:
+ - path: {{ $ingressPath }}
+ backend:
+ serviceName: {{ $fullName }}
+ servicePort: http
+ {{- end }}
+{{- end }}
diff --git a/incubator/orientdb/templates/secret.yaml b/incubator/orientdb/templates/secret.yaml
new file mode 100644
index 000000000000..a9d7100093f0
--- /dev/null
+++ b/incubator/orientdb/templates/secret.yaml
@@ -0,0 +1,15 @@
+apiVersion: v1
+kind: Secret
+metadata:
+ name: {{ template "orientdb.fullname" . }}-secret
+ labels:
+ app: {{ template "orientdb.name" . }}
+ chart: {{ template "orientdb.chart" . }}
+ release: {{ .Release.Name }}
+ heritage: {{ .Release.Service }}
+data:
+ {{ if .Values.rootPassword }}
+ root-password: {{ .Values.rootPassword | b64enc | quote }}
+ {{ else }}
+ root-password: {{ randAlphaNum 10 | b64enc | quote }}
+ {{ end }}
\ No newline at end of file
diff --git a/incubator/orientdb/templates/service.yaml b/incubator/orientdb/templates/service.yaml
new file mode 100644
index 000000000000..4b03db0f8389
--- /dev/null
+++ b/incubator/orientdb/templates/service.yaml
@@ -0,0 +1,58 @@
+apiVersion: v1
+kind: Service
+metadata:
+ name: {{ template "orientdb.fullname" . }}-headless
+ labels:
+ app: {{ template "orientdb.name" . }}
+ chart: {{ template "orientdb.chart" . }}
+ release: {{ .Release.Name }}
+ heritage: {{ .Release.Service }}
+spec:
+ type: ClusterIP
+ ports:
+ - port: {{ .Values.service.orientHttp }}
+ targetPort: http
+ name: http
+ - port: {{ .Values.service.hazelcast }}
+ targetPort: hazelcast
+ name: hazelcast
+ - port: {{ .Values.service.orientBinary }}
+ targetPort: binary
+ name: binary
+ - port: {{ .Values.service.gremlinWebsocket }}
+ targetPort: gremlin
+ name: gremlin
+
+ # headless service
+ clusterIP: None
+ selector:
+ app: {{ template "orientdb.name" . }}
+ release: {{ .Release.Name }}
+---
+apiVersion: v1
+kind: Service
+metadata:
+ name: {{ template "orientdb.fullname" . }}-svc
+ labels:
+ app: {{ template "orientdb.name" . }}
+ chart: {{ template "orientdb.chart" . }}
+ release: {{ .Release.Name }}
+ heritage: {{ .Release.Service }}
+spec:
+ type: {{ .Values.service.type }}
+ ports:
+ - port: {{ .Values.service.orientHttp }}
+ targetPort: http
+ name: http
+ - port: {{ .Values.service.hazelcast }}
+ targetPort: hazelcast
+ name: hazelcast
+ - port: {{ .Values.service.orientBinary }}
+ targetPort: binary
+ name: binary
+ - port: {{ .Values.service.gremlinWebsocket }}
+ targetPort: gremlin
+ name: gremlin
+ selector:
+ app: {{ template "orientdb.name" . }}
+ release: {{ .Release.Name }}
diff --git a/incubator/orientdb/templates/statefulset.yaml b/incubator/orientdb/templates/statefulset.yaml
new file mode 100644
index 000000000000..162093ef23f9
--- /dev/null
+++ b/incubator/orientdb/templates/statefulset.yaml
@@ -0,0 +1,145 @@
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+ name: {{ template "orientdb.fullname" . }}
+ labels:
+ app: {{ template "orientdb.name" . }}
+ chart: {{ template "orientdb.chart" . }}
+ release: {{ .Release.Name }}
+ heritage: {{ .Release.Service }}
+spec:
+ replicas: {{ .Values.replicaCount }}
+ selector:
+ matchLabels:
+ app: {{ template "orientdb.name" . }}
+ release: {{ .Release.Name }}
+ serviceName: {{ template "orientdb.fullname" . }}-headless
+ template:
+ metadata:
+ labels:
+ app: {{ template "orientdb.name" . }}
+ release: {{ .Release.Name }}
+ spec:
+ terminationGracePeriodSeconds: 10
+ initContainers:
+ # orientdb-server-config.xml is an executable file while kubernetes mounts
+      # configmaps as read-only (since 1.9.4). This is a workaround to mount it as
+ # configmap first and then copy it over to its final location
+ - name: "fix-orientdb-server-config"
+ image: "busybox"
+ imagePullPolicy: IfNotPresent
+ command: [ "sh", "-c", "cp /configmap/* /config" ]
+ volumeMounts:
+ - name: orientdb-configmap-vol
+ mountPath: /configmap
+ - name: orientdb-config-vol
+ mountPath: /config
+
+ containers:
+ - name: {{ template "orientdb.name" . }}
+ image: "{{ .Values.image.name }}:{{ .Values.image.tag }}"
+ imagePullPolicy: {{ .Values.image.pullPolicy }}
+ ports:
+ - containerPort: {{ .Values.service.orientHttp }}
+ name: http
+ - containerPort: {{ .Values.service.hazelcast }}
+ name: hazelcast
+ - containerPort: {{ .Values.service.orientBinary }}
+ name: binary
+ - containerPort: {{ .Values.service.gremlinWebsocket }}
+ name: gremlin
+ env:
+ - name: ORIENTDB_ROOT_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ name: {{ template "orientdb.fullname" . }}-secret
+ key: root-password
+ - name: ORIENTDB_NODE_NAME
+ valueFrom:
+ fieldRef:
+ fieldPath: metadata.name
+ {{- if .Values.jvm.memory }}
+ - name: ORIENTDB_OPTS_MEMORY
+ value: {{ .Values.jvm.memory | quote }}
+ {{- end }}
+ {{- if .Values.jvm.options }}
+ - name: JAVA_OPTS_SCRIPT
+ value: {{ .Values.jvm.options | quote }}
+ {{- end }}
+ {{- if .Values.jvm.settings }}
+ - name: ORIENTDB_SETTINGS
+ value: {{ .Values.jvm.settings | quote }}
+ {{- end }}
+ volumeMounts:
+ {{- if .Values.config.overrideHazelcastConfig }}
+ - name: orientdb-config-vol
+ mountPath: /orientdb/config/hazelcast.xml
+ subPath: hazelcast.xml
+ {{- end }}
+ {{- if .Values.config.overrideDistributedDbConfig}}
+ - name: orientdb-config-vol
+ mountPath: /orientdb/config/default-distributed-db-config.json
+ subPath: default-distributed-db-config.json
+ {{- end }}
+ {{- if .Values.persistence.enabled }}
+ - name: storage
+ mountPath: /orientdb/databases
+ - name: backup
+ mountPath: /orientdb/backup
+ {{- end }}
+ {{- if .Values.readinessProbe.enabled }}
+ readinessProbe:
+ tcpSocket:
+ port: http
+ {{- end }}
+ {{- if .Values.livenessProbe.enabled }}
+ livenessProbe:
+ tcpSocket:
+ port: http
+ {{- end }}
+{{- if .Values.distributed.enabled }}
+ command: ["dserver.sh"]
+{{- else }}
+ command: ["server.sh"]
+{{- end}}
+ resources:
+{{ toYaml .Values.resources | indent 12 }}
+ {{- with .Values.nodeSelector }}
+ nodeSelector:
+{{ toYaml . | indent 8 }}
+ {{- end }}
+ {{- with .Values.affinity }}
+ affinity:
+{{ toYaml . | indent 8 }}
+ {{- end }}
+ {{- with .Values.tolerations }}
+ tolerations:
+{{ toYaml . | indent 8 }}
+ {{- end }}
+ volumes:
+ - name: orientdb-configmap-vol
+ configMap:
+ name: {{ template "orientdb.fullname" . }}-configmap
+ - name: orientdb-config-vol
+ emptyDir: {}
+{{- if .Values.image.pullSecret }}
+ imagePullSecrets:
+ - name: {{ .Values.image.pullSecret }}
+{{- end }}
+{{- if .Values.persistence.enabled }}
+ volumeClaimTemplates:
+ - metadata:
+ name: storage
+ spec:
+ accessModes: {{ .Values.persistence.storage.accessMode }}
+ resources:
+ requests:
+ storage: {{ .Values.persistence.storage.size }}
+ - metadata:
+ name: backup
+ spec:
+ accessModes: {{ .Values.persistence.backup.accessMode }}
+ resources:
+ requests:
+ storage: {{ .Values.persistence.backup.size }}
+{{- end}}
\ No newline at end of file
diff --git a/incubator/orientdb/templates/tests/simple-crud-test.yaml b/incubator/orientdb/templates/tests/simple-crud-test.yaml
new file mode 100644
index 000000000000..20a89ae2b1ad
--- /dev/null
+++ b/incubator/orientdb/templates/tests/simple-crud-test.yaml
@@ -0,0 +1,24 @@
+{{- if .Values.testing.enabled }}
+apiVersion: v1
+kind: Pod
+metadata:
+ name: {{ template "orientdb.fullname" .}}-simple-crud-test
+ annotations:
+ "helm.sh/hook": test-success
+spec:
+ containers:
+ - name: {{ template "orientdb.fullname" .}}-simple-crud-test
+ image: "{{ .Values.image.name }}:{{ .Values.image.tag }}"
+ env:
+ - name: ORIENT_HOST
+ value: {{ template "orientdb.fullname" . }}-svc
+ - name: ORIENT_PORT
+ value: "2480"
+ - name: ORIENTDB_ROOT_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ name: {{ template "orientdb.fullname" .}}-secret
+ key: root-password
+ command: ["sh", "-c", "wget --spider http://${ORIENT_HOST}:${ORIENT_PORT}" ]
+ restartPolicy: Never
+{{- end }}
\ No newline at end of file
diff --git a/incubator/orientdb/values.yaml b/incubator/orientdb/values.yaml
new file mode 100644
index 000000000000..373630335513
--- /dev/null
+++ b/incubator/orientdb/values.yaml
@@ -0,0 +1,80 @@
+replicaCount: 1
+
+# random if not set
+# rootPassword: root123
+
+image:
+ name: orientdb
+ tag: 3.0.13
+ pullPolicy: IfNotPresent
+
+# distributed settings for default-distributed-db-config.json
+distributed:
+ enabled: false
+ autoDeploy: true
+ executionMode: undefined
+ readQuorum: 1
+ writeQuorum: majority
+ newNodeStrategy: static
+ readYourWrites: true
+
+service:
+ type: ClusterIP
+ orientHttp: 2480
+ hazelcast: 2434
+ orientBinary: 2424
+ gremlinWebsocket: 8182
+
+ingress:
+ enabled: false
+ annotations: {}
+
+resources:
+ requests:
+ cpu: "500m"
+ memory: "2Gi"
+ limits:
+ cpu: "2000m"
+ memory: "8Gi"
+
+jvm: {}
+# Optional jvm settings:
+# memory: "-Xms800m -Xmx800m"
+# options: "-Djna.nosys=true -XX:+HeapDumpOnOutOfMemoryError -Djava.awt.headless=true -Dfile.encoding=UTF8 -Drhino.opt.level=9"
+# settings: "-Dstorage.diskCache.bufferSize=7200"
+
+nodeSelector: {}
+
+tolerations: []
+
+affinity: {}
+
+readinessProbe:
+ enabled: true
+
+livenessProbe:
+ enabled: true
+
+hazelcast:
+ groupName: orientdb
+ groupPassword: orientdb
+
+persistence:
+ enabled: true
+ storage:
+ accessMode:
+ - ReadWriteOnce
+ size: 10Gi
+ backup:
+ accessMode:
+ - ReadWriteOnce
+ size: 2Gi
+
+config:
+ overrideHazelcastConfig: true
+ overrideOrientdbServerConfig: true
+ overrideGremlinServerConfig: true
+ overrideDistributedDbConfig: true
+
+testing:
+ enabled: false
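For example, based on the defaults above, a distributed three-node deployment with the chart's test hook enabled could be configured with an override file like this (a sketch, not a tested production profile):

```yaml
# override-values.yaml -- illustrative overrides for incubator/orientdb
replicaCount: 3
distributed:
  enabled: true          # applies the default-distributed-db-config.json settings above
  writeQuorum: majority
persistence:
  storage:
    size: 20Gi
testing:
  enabled: true          # renders templates/tests/simple-crud-test.yaml for `helm test`
```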
diff --git a/incubator/patroni/Chart.yaml b/incubator/patroni/Chart.yaml
index 586c9aab68f9..8b4c58533522 100644
--- a/incubator/patroni/Chart.yaml
+++ b/incubator/patroni/Chart.yaml
@@ -1,6 +1,7 @@
+apiVersion: v1
name: patroni
description: 'Highly available elephant herd: HA PostgreSQL cluster.'
-version: 0.11.0
+version: 0.12.1
appVersion: 1.4-p16
home: https://github.com/zalando/patroni
sources:
diff --git a/incubator/patroni/templates/ep-patroni.yaml b/incubator/patroni/templates/ep-patroni.yaml
index 9581596c845e..a218f53dadc3 100644
--- a/incubator/patroni/templates/ep-patroni.yaml
+++ b/incubator/patroni/templates/ep-patroni.yaml
@@ -3,7 +3,7 @@ kind: Endpoints
metadata:
name: {{ template "patroni.fullname" . }}
labels:
- app: {{ template "patroni.name" . }}
+ app: {{ template "patroni.fullname" . }}
chart: {{ template "patroni.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
diff --git a/incubator/patroni/templates/role-patroni.yaml b/incubator/patroni/templates/role-patroni.yaml
index 3341826b1b10..b18e1f29d5f4 100644
--- a/incubator/patroni/templates/role-patroni.yaml
+++ b/incubator/patroni/templates/role-patroni.yaml
@@ -4,7 +4,7 @@ kind: Role
metadata:
name: {{ template "patroni.fullname" . }}
labels:
- app: {{ template "patroni.name" . }}
+ app: {{ template "patroni.fullname" . }}
chart: {{ template "patroni.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
@@ -20,6 +20,10 @@ rules:
- watch
# delete is required only for 'patronictl remove'
- delete
+- apiGroups: [""]
+ resources: ["services"]
+ verbs:
+ - create
- apiGroups: [""]
resources: ["endpoints"]
verbs:
diff --git a/incubator/patroni/templates/rolebinding-patroni.yaml b/incubator/patroni/templates/rolebinding-patroni.yaml
index d0965702f942..163fa00d4cfa 100644
--- a/incubator/patroni/templates/rolebinding-patroni.yaml
+++ b/incubator/patroni/templates/rolebinding-patroni.yaml
@@ -4,7 +4,7 @@ kind: RoleBinding
metadata:
name: {{ template "patroni.fullname" . }}
labels:
- app: {{ template "patroni.name" . }}
+ app: {{ template "patroni.fullname" . }}
chart: {{ template "patroni.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
diff --git a/incubator/patroni/templates/sec-patroni.yaml b/incubator/patroni/templates/sec-patroni.yaml
index d62faae88f8f..63950ab173ca 100644
--- a/incubator/patroni/templates/sec-patroni.yaml
+++ b/incubator/patroni/templates/sec-patroni.yaml
@@ -3,7 +3,7 @@ kind: Secret
metadata:
name: {{ template "patroni.fullname" . }}
labels:
- app: {{ template "patroni.name" . }}
+ app: {{ template "patroni.fullname" . }}
chart: {{ template "patroni.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
diff --git a/incubator/patroni/templates/serviceaccount-patroni.yaml b/incubator/patroni/templates/serviceaccount-patroni.yaml
index eed6ab8a82d4..e88f6c612e51 100644
--- a/incubator/patroni/templates/serviceaccount-patroni.yaml
+++ b/incubator/patroni/templates/serviceaccount-patroni.yaml
@@ -4,7 +4,7 @@ kind: ServiceAccount
metadata:
name: {{ template "patroni.serviceAccountName" . }}
labels:
- app: {{ template "patroni.name" . }}
+ app: {{ template "patroni.fullname" . }}
chart: {{ template "patroni.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
diff --git a/incubator/patroni/templates/statefulset-patroni.yaml b/incubator/patroni/templates/statefulset-patroni.yaml
index 76c129102d07..3cd611db3b7e 100644
--- a/incubator/patroni/templates/statefulset-patroni.yaml
+++ b/incubator/patroni/templates/statefulset-patroni.yaml
@@ -3,7 +3,7 @@ kind: StatefulSet
metadata:
name: {{ template "patroni.fullname" . }}
labels:
- app: {{ template "patroni.name" . }}
+ app: {{ template "patroni.fullname" . }}
chart: {{ template "patroni.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
@@ -12,13 +12,13 @@ spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
- app: {{ template "patroni.name" . }}
+ app: {{ template "patroni.fullname" . }}
release: {{ .Release.Name }}
template:
metadata:
name: {{ template "patroni.fullname" . }}
labels:
- app: {{ template "patroni.name" . }}
+ app: {{ template "patroni.fullname" . }}
release: {{ .Release.Name }}
spec:
serviceAccountName: {{ template "patroni.serviceAccountName" . }}
@@ -46,7 +46,7 @@ spec:
- name: DCS_ENABLE_KUBERNETES_API
value: "true"
- name: KUBERNETES_LABELS
- value: {{ (printf "{ \"app\": \"%s\", \"release\": \"%s\" }" (include "patroni.name" .) .Release.Name) | quote }}
+ value: {{ (printf "{ \"app\": \"%s\", \"release\": \"%s\" }" (include "patroni.fullname" .) .Release.Name) | quote }}
- name: KUBERNETES_SCOPE_LABEL
value: "app"
{{- end }}
@@ -167,7 +167,7 @@ spec:
{{ toYaml .Values.persistentVolume.annotations | indent 8 }}
{{- end }}
labels:
- app: {{ template "patroni.name" . }}
+ app: {{ template "patroni.fullname" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
diff --git a/incubator/patroni/templates/svc-patroni.yaml b/incubator/patroni/templates/svc-patroni.yaml
index 8cf5a00fa159..27e01bb9858d 100644
--- a/incubator/patroni/templates/svc-patroni.yaml
+++ b/incubator/patroni/templates/svc-patroni.yaml
@@ -3,7 +3,7 @@ kind: Service
metadata:
name: {{ template "patroni.fullname" . }}
labels:
- app: {{ template "patroni.name" . }}
+ app: {{ template "patroni.fullname" . }}
chart: {{ template "patroni.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
diff --git a/incubator/raw/Chart.yaml b/incubator/raw/Chart.yaml
index c2560cf6e29b..d6808bd57bb3 100644
--- a/incubator/raw/Chart.yaml
+++ b/incubator/raw/Chart.yaml
@@ -1,8 +1,11 @@
+apiVersion: v1
name: raw
home: https://github.com/helm/charts/blob/master/incubator/raw
-version: 0.1.0
-appVersion: 0.1.0
+version: 0.2.3
+appVersion: 0.2.3
description: A place for all the Kubernetes resources which don't already have a home.
maintainers:
- name: josdotso
email: josdotso@cisco.com
+- name: mumoshu
+ email: ykuoka@gmail.com
diff --git a/incubator/raw/OWNERS b/incubator/raw/OWNERS
index 5f8fd9e5d1b1..db15b5118e0c 100644
--- a/incubator/raw/OWNERS
+++ b/incubator/raw/OWNERS
@@ -1,4 +1,6 @@
approvers:
- josdotso
+- mumoshu
reviewers:
- josdotso
+- mumoshu
diff --git a/incubator/raw/README.md b/incubator/raw/README.md
index b191f64d449d..b691aeafec20 100644
--- a/incubator/raw/README.md
+++ b/incubator/raw/README.md
@@ -1,17 +1,22 @@
# incubator/raw
-The `incubator/raw` chart takes a list of raw Kubernetes resources and
+The `incubator/raw` chart takes a list of Kubernetes resources and
merges each resource with a default `metadata.labels` map and installs
the result.
+The Kubernetes resources can be "raw" ones defined under the `resources` key, or "templated" ones defined under the `templates` key.
+
Some use cases for this chart include Helm-based installation and
maintenance of resources of kinds:
- LimitRange
- PriorityClass
+- Secret
## Usage
-### STEP 1: Create a yaml file containing your raw resources.
+### Raw resources
+
+#### STEP 1: Create a yaml file containing your raw resources.
```
# raw-priority-classes.yaml
@@ -83,8 +88,42 @@ resources:
description: "This priority class should only be used for low priority app pods."
```
-### STEP 2: Install your raw resources.
+#### STEP 2: Install your raw resources.
```
helm install --name raw-priority-classes incubator/raw -f raw-priority-classes.yaml
```
+
+### Templated resources
+
+#### STEP 1: Create a yaml file containing your templated resources.
+
+```
+# values.yaml
+
+templates:
+- |
+ apiVersion: v1
+ kind: Secret
+ metadata:
+ name: common-secret
+ stringData:
+ mykey: {{ .Values.mysecret }}
+```
+
+The YAML file containing `mysecret` should be encrypted with a tool such as [helm-secrets](https://github.com/futuresimple/helm-secrets):
+
+```
+# secrets.yaml
+mysecret: abc123
+```
+
+```
+$ helm secrets enc secrets.yaml
+```
+
+#### STEP 2: Install your templated resources.
+
+```
+helm secrets install --name mysecret incubator/raw -f values.yaml -f secrets.yaml
+```
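For reference, the templated Secret above is first rendered with `tpl` and then merged with the chart's default `metadata.labels`, so the installed resource looks roughly like the following (a sketch; the exact label set comes from the chart's `raw.resource` helper and is illustrative here):

```yaml
# Approximate rendered output -- label names and values are illustrative,
# not taken verbatim from the chart source.
apiVersion: v1
kind: Secret
metadata:
  name: common-secret
  labels:
    app: raw               # default labels merged in by the chart
    release: mysecret      # the Helm release name
stringData:
  mykey: abc123            # .Values.mysecret, decrypted from secrets.yaml
```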
diff --git a/incubator/raw/ci/resources-values.yaml b/incubator/raw/ci/resources-values.yaml
new file mode 100644
index 000000000000..1028ae519907
--- /dev/null
+++ b/incubator/raw/ci/resources-values.yaml
@@ -0,0 +1,8 @@
+resources:
+- apiVersion: scheduling.k8s.io/v1beta1
+ kind: PriorityClass
+ metadata:
+ name: common-critical
+ value: 100000000
+ globalDefault: false
+ description: "This priority class should only be used for critical priority common pods."
diff --git a/incubator/raw/ci/templates-values.yaml b/incubator/raw/ci/templates-values.yaml
new file mode 100644
index 000000000000..600f40ee883c
--- /dev/null
+++ b/incubator/raw/ci/templates-values.yaml
@@ -0,0 +1,6 @@
+templates:
+- |
+ apiVersion: v1
+ kind: ConfigMap
+ metadata:
+ name: raw
diff --git a/incubator/raw/ci/values.yaml b/incubator/raw/ci/values.yaml
new file mode 100644
index 000000000000..876494a1cb8c
--- /dev/null
+++ b/incubator/raw/ci/values.yaml
@@ -0,0 +1,18 @@
+resources:
+- apiVersion: v1
+ kind: Secret
+ metadata:
+ name: common
+ stringData:
+ foo: bar
+
+mysecret: abc134
+
+templates:
+- |
+ apiVersion: v1
+ kind: Secret
+ metadata:
+ name: common-secret
+ stringData:
+ mykey: "{{ .Values.mysecret }}"
diff --git a/incubator/raw/templates/resources.yaml b/incubator/raw/templates/resources.yaml
index 2799ad8e81f0..83c900de0b61 100644
--- a/incubator/raw/templates/resources.yaml
+++ b/incubator/raw/templates/resources.yaml
@@ -1,5 +1,9 @@
{{- $template := fromYaml (include "raw.resource" .) -}}
{{- range .Values.resources }}
---
-{{- toYaml (merge . $template) -}}
+{{ toYaml (merge . $template) -}}
+{{- end }}
+{{- range $i, $t := .Values.templates }}
+---
+{{ toYaml (merge (tpl $t $ | fromYaml) $template) -}}
{{- end }}
diff --git a/incubator/raw/values.yaml b/incubator/raw/values.yaml
index 71167aa65e3c..4305a27cfe01 100644
--- a/incubator/raw/values.yaml
+++ b/incubator/raw/values.yaml
@@ -63,3 +63,18 @@ resources: []
# value: 70000
# globalDefault: false
# description: "This priority class should only be used for low priority app pods."
+
+templates: []
+# - |
+# apiVersion: v1
+# kind: ConfigMap
+# metadata:
+# name: raw
+#
+# - |
+# apiVersion: v1
+# kind: Secret
+# metadata:
+# name: common-secret
+# stringData:
+# mykey: {{ .Values.mysecret }}
diff --git a/incubator/rundeck/.helmignore b/incubator/rundeck/.helmignore
new file mode 100644
index 000000000000..50af03172541
--- /dev/null
+++ b/incubator/rundeck/.helmignore
@@ -0,0 +1,22 @@
+# Patterns to ignore when building packages.
+# This supports shell glob matching, relative path matching, and
+# negation (prefixed with !). Only one pattern per line.
+.DS_Store
+# Common VCS dirs
+.git/
+.gitignore
+.bzr/
+.bzrignore
+.hg/
+.hgignore
+.svn/
+# Common backup files
+*.swp
+*.bak
+*.tmp
+*~
+# Various IDEs
+.project
+.idea/
+*.tmproj
+.vscode/
diff --git a/incubator/rundeck/Chart.yaml b/incubator/rundeck/Chart.yaml
new file mode 100644
index 000000000000..8deda0886dd6
--- /dev/null
+++ b/incubator/rundeck/Chart.yaml
@@ -0,0 +1,16 @@
+apiVersion: v1
+description: A Rundeck chart for Kubernetes
+name: rundeck
+home: https://github.com/rundeck/rundeck
+version: 0.1.0
+appVersion: 3.0.16
+keywords:
+- rundeck
+- jobs
+- automation
+- operations
+sources:
+- https://github.com/rundeck/rundeck
+maintainers:
+- name: dwardu89
+ email: hello@dwardu.com
diff --git a/incubator/rundeck/OWNERS b/incubator/rundeck/OWNERS
new file mode 100644
index 000000000000..685529426c4c
--- /dev/null
+++ b/incubator/rundeck/OWNERS
@@ -0,0 +1,4 @@
+approvers:
+- dwardu89
+reviewers:
+- dwardu89
diff --git a/incubator/rundeck/README.md b/incubator/rundeck/README.md
new file mode 100644
index 000000000000..7ad13036f8f5
--- /dev/null
+++ b/incubator/rundeck/README.md
@@ -0,0 +1,27 @@
+# Rundeck Community Helm Chart
+
+Rundeck lets you turn your operations procedures into self-service jobs. Safely give others the control and visibility they need. Read more about Rundeck at [https://www.rundeck.com/open-source](https://www.rundeck.com/open-source).
+
+
+## Install
+
+ helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
+ helm install incubator/rundeck
+
+## Configuration
+
+The following configuration values may be set. It is recommended to use a values.yaml file to override the Rundeck configuration.
+
+Parameter | Description | Default
+--------- | ----------- | -------
+replicaCount | How many replicas to run. Rundeck can really only work with one. | 1
+image.repository | Name of the image to run, without the tag. | [rundeck/rundeck](https://github.com/rundeck/rundeck)
+image.tag | The image tag to use. | 3.0.16
+image.pullPolicy | The kubernetes image pull policy. | IfNotPresent
+service.type | The kubernetes service type to use. | ClusterIP
+service.port | The tcp port the service should listen on. | 80
+ingress | Any ingress rules to apply. | None
+resources | Any resource constraints to apply. | None
+rundeck.env | The Rundeck environment variables that you want to set. | Default variables provided in the Docker image
+rundeck.sshSecrets | A reference to the Kubernetes Secret that contains the ssh keys. | ""
+rundeck.awsCredentialsSecret | A reference to the Kubernetes Secret that contains the aws credentials. | ""
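As an illustration, a minimal override file for the parameters above might look like the following (the Secret name `my-ssh-keys` is a placeholder for an existing Kubernetes Secret, not something the chart creates):

```yaml
# my-values.yaml -- example overrides for incubator/rundeck (illustrative)
replicaCount: 1
image:
  repository: rundeck/rundeck
  tag: 3.0.16
service:
  type: ClusterIP
  port: 80
rundeck:
  env:
    RUNDECK_LOGGING_STRATEGY: "CONSOLE"
  sshSecrets: "my-ssh-keys"   # name of an existing Secret holding SSH keys
```

Install with `helm install incubator/rundeck -f my-values.yaml`.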
diff --git a/incubator/rundeck/files/nginx/nginx.conf b/incubator/rundeck/files/nginx/nginx.conf
new file mode 100644
index 000000000000..d0537d392cdc
--- /dev/null
+++ b/incubator/rundeck/files/nginx/nginx.conf
@@ -0,0 +1,16 @@
+events {
+ worker_connections 1024;
+}
+
+http {
+ server {
+
+ location / {
+ recursive_error_pages on;
+ proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+ proxy_set_header X-Forwarded-Proto $scheme;
+ proxy_set_header User-Agent $http_user_agent;
+ proxy_pass http://localhost:4440;
+ }
+ }
+}
diff --git a/incubator/rundeck/templates/NOTES.txt b/incubator/rundeck/templates/NOTES.txt
new file mode 100644
index 000000000000..7897fb630b28
--- /dev/null
+++ b/incubator/rundeck/templates/NOTES.txt
@@ -0,0 +1,21 @@
+1. Get the application URL by running these commands:
+{{- if .Values.ingress.enabled }}
+{{- range $host := .Values.ingress.hosts }}
+ {{- range $.Values.ingress.paths }}
+ http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host }}{{ . }}
+ {{- end }}
+{{- end }}
+{{- else if contains "NodePort" .Values.service.type }}
+ export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "rundeck.fullname" . }})
+ export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
+ echo http://$NODE_IP:$NODE_PORT
+{{- else if contains "LoadBalancer" .Values.service.type }}
+ NOTE: It may take a few minutes for the LoadBalancer IP to be available.
+ You can watch the status of it by running 'kubectl get svc -w {{ include "rundeck.fullname" . }}'
+ export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "rundeck.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
+ echo http://$SERVICE_IP:{{ .Values.service.port }}
+{{- else if contains "ClusterIP" .Values.service.type }}
+ export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "rundeck.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
+ echo "Visit http://127.0.0.1:4440 to use your application"
+ kubectl port-forward $POD_NAME --namespace {{ .Release.Namespace }} 4440:4440
+{{- end }}
diff --git a/incubator/rundeck/templates/_helpers.tpl b/incubator/rundeck/templates/_helpers.tpl
new file mode 100644
index 000000000000..c9adb0328c54
--- /dev/null
+++ b/incubator/rundeck/templates/_helpers.tpl
@@ -0,0 +1,32 @@
+{{/* vim: set filetype=mustache: */}}
+{{/*
+Expand the name of the chart.
+*/}}
+{{- define "rundeck.name" -}}
+{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+
+{{/*
+Create a default fully qualified app name.
+We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
+If release name contains chart name it will be used as a full name.
+*/}}
+{{- define "rundeck.fullname" -}}
+{{- if .Values.fullnameOverride -}}
+{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
+{{- else -}}
+{{- $name := default .Chart.Name .Values.nameOverride -}}
+{{- if contains $name .Release.Name -}}
+{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
+{{- else -}}
+{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Create chart name and version as used by the chart label.
+*/}}
+{{- define "rundeck.chart" -}}
+{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
diff --git a/incubator/rundeck/templates/deployment.yaml b/incubator/rundeck/templates/deployment.yaml
new file mode 100644
index 000000000000..20b19db4f146
--- /dev/null
+++ b/incubator/rundeck/templates/deployment.yaml
@@ -0,0 +1,120 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: {{ include "rundeck.fullname" . }}
+ labels:
+ app.kubernetes.io/name: {{ include "rundeck.name" . }}
+ helm.sh/chart: {{ include "rundeck.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+spec:
+ replicas: {{ .Values.replicaCount }}
+ selector:
+ matchLabels:
+ app.kubernetes.io/name: {{ include "rundeck.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ template:
+ metadata:
+ labels:
+ app.kubernetes.io/name: {{ include "rundeck.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ spec:
+ containers:
+ - name: nginx
+ image: nginx:stable
+ ports:
+ - name: http
+ containerPort: 80
+ protocol: TCP
+ livenessProbe:
+ httpGet:
+ path: /
+ port: 80
+ scheme: HTTP
+ initialDelaySeconds: 60
+ periodSeconds: 120
+ readinessProbe:
+ httpGet:
+ path: /
+ port: 80
+ scheme: HTTP
+ initialDelaySeconds: 10
+ periodSeconds: 5
+ volumeMounts:
+ - name: nginx-config
+ mountPath: /etc/nginx
+ - name: {{ .Chart.Name }}
+ image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
+ imagePullPolicy: {{ .Values.image.pullPolicy }}
+ envFrom:
+ - configMapRef:
+ name: {{ .Release.Name }}-environment-configmap
+ env:
+ - name: RUNDECK_GRAILS_URL
+ value: "http://{{ include "rundeck.fullname" . }}.{{ .Release.Namespace }}.svc.cluster.local"
+ - name: RUNDECK_SERVER_FORWARDED
+ value: "true"
+ - name: RUNDECK_LOGGING_STRATEGY
+ value: "CONSOLE"
+ volumeMounts:
+ - name: data
+ mountPath: /home/rundeck/server/data
+ {{- if .Values.rundeck.sshSecrets }}
+ - name: sshkeys
+ mountPath: /home/rundeck/.ssh
+ readOnly: true
+ {{- end }}
+ {{- if .Values.rundeck.awsCredentialsSecret }}
+ - name: aws-credentials
+ mountPath: /home/rundeck/.aws/credentials
+ {{- end }}
+ ports:
+ - name: rundeck
+ containerPort: 4440
+ livenessProbe:
+ httpGet:
+ path: /
+ port: 4440
+ scheme: HTTP
+ initialDelaySeconds: 120
+ periodSeconds: 120
+ readinessProbe:
+ httpGet:
+ path: /
+ port: 4440
+ scheme: HTTP
+ initialDelaySeconds: 60
+ periodSeconds: 5
+ resources:
+ {{- toYaml .Values.resources | nindent 12 }}
+ volumes:
+ - name: nginx-config
+ configMap:
+ name: {{ .Release.Name }}-nginx-configmap
+ items:
+ - key: nginx.conf
+ path: nginx.conf
+ - name: data
+ emptyDir: {}
+ {{- if .Values.rundeck.sshSecrets }}
+ - name: sshkeys
+ secret:
+ secretName: {{ .Values.rundeck.sshSecrets }}
+ {{- end }}
+ {{- if .Values.rundeck.awsCredentialsSecret }}
+ - name: aws-credentials
+ secret:
+ secretName: {{ .Values.rundeck.awsCredentialsSecret }}
+ {{- end }}
+ {{- with .Values.nodeSelector }}
+ nodeSelector:
+ {{- toYaml . | nindent 8 }}
+ {{- end }}
+ {{- with .Values.affinity }}
+ affinity:
+ {{- toYaml . | nindent 8 }}
+ {{- end }}
+ {{- with .Values.tolerations }}
+ tolerations:
+ {{- toYaml . | nindent 8 }}
+ {{- end }}
diff --git a/incubator/rundeck/templates/ingress.yaml b/incubator/rundeck/templates/ingress.yaml
new file mode 100644
index 000000000000..f204add45ca4
--- /dev/null
+++ b/incubator/rundeck/templates/ingress.yaml
@@ -0,0 +1,40 @@
+{{- if .Values.ingress.enabled -}}
+{{- $fullName := include "rundeck.fullname" . -}}
+{{- $ingressPaths := .Values.ingress.paths -}}
+apiVersion: extensions/v1beta1
+kind: Ingress
+metadata:
+ name: {{ $fullName }}
+ labels:
+ app.kubernetes.io/name: {{ include "rundeck.name" . }}
+ helm.sh/chart: {{ include "rundeck.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ {{- with .Values.ingress.annotations }}
+ annotations:
+ {{- toYaml . | nindent 4 }}
+ {{- end }}
+spec:
+{{- if .Values.ingress.tls }}
+ tls:
+ {{- range .Values.ingress.tls }}
+ - hosts:
+ {{- range .hosts }}
+ - {{ . | quote }}
+ {{- end }}
+ secretName: {{ .secretName }}
+ {{- end }}
+{{- end }}
+ rules:
+ {{- range .Values.ingress.hosts }}
+ - host: {{ . | quote }}
+ http:
+ paths:
+ {{- range $ingressPaths }}
+ - path: {{ . }}
+ backend:
+ serviceName: {{ $fullName }}
+ servicePort: http
+ {{- end }}
+ {{- end }}
+{{- end }}
diff --git a/incubator/rundeck/templates/nginx-configmap.yaml b/incubator/rundeck/templates/nginx-configmap.yaml
new file mode 100644
index 000000000000..77100af4d145
--- /dev/null
+++ b/incubator/rundeck/templates/nginx-configmap.yaml
@@ -0,0 +1,8 @@
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: {{ .Release.Name }}-nginx-configmap
+type: Opaque
+data:
+ nginx.conf: |-
+{{ .Files.Get "files/nginx/nginx.conf" | indent 4 }}
diff --git a/incubator/rundeck/templates/rundeck-environment-configmap.yaml b/incubator/rundeck/templates/rundeck-environment-configmap.yaml
new file mode 100644
index 000000000000..b4b57f8200be
--- /dev/null
+++ b/incubator/rundeck/templates/rundeck-environment-configmap.yaml
@@ -0,0 +1,7 @@
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: {{ .Release.Name }}-environment-configmap
+type: Opaque
+data:
+{{ toYaml .Values.rundeck.env | indent 4 }}
\ No newline at end of file
diff --git a/incubator/rundeck/templates/service.yaml b/incubator/rundeck/templates/service.yaml
new file mode 100644
index 000000000000..e4730eeeebaf
--- /dev/null
+++ b/incubator/rundeck/templates/service.yaml
@@ -0,0 +1,19 @@
+apiVersion: v1
+kind: Service
+metadata:
+ name: {{ include "rundeck.fullname" . }}
+ labels:
+ app.kubernetes.io/name: {{ include "rundeck.name" . }}
+ helm.sh/chart: {{ include "rundeck.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+spec:
+ type: {{ .Values.service.type }}
+ ports:
+ - port: {{ .Values.service.port }}
+ targetPort: http
+ protocol: TCP
+ name: http
+ selector:
+ app.kubernetes.io/name: {{ include "rundeck.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
diff --git a/incubator/rundeck/values.yaml b/incubator/rundeck/values.yaml
new file mode 100644
index 000000000000..3a8b38068c7f
--- /dev/null
+++ b/incubator/rundeck/values.yaml
@@ -0,0 +1,65 @@
+# Default values for rundeck.
+# This is a YAML-formatted file.
+# Declare variables to be passed into your templates.
+
+replicaCount: 1
+
+image:
+ repository: rundeck/rundeck
+ tag: 3.0.16
+ pullPolicy: IfNotPresent
+
+rundeck:
+ env:
+ RUNDECK_GRAILS_URL: "http://{{ .Release.Name }}.{{ .Release.Namespace }}.svc.cluster.local"
+ RUNDECK_SERVER_FORWARDED: "true"
+ RUNDECK_LOGGING_STRATEGY: "CONSOLE"
+ # RUNDECK_DATABASE_DRIVER: com.mysql.jdbc.Driver
+ # RUNDECK_DATABASE_USERNAME: rundeck
+ # RUNDECK_DATABASE_PASSWORD: rundeck
+ # RUNDECK_DATABASE_URL: jdbc:mysql://mysql/rundeck?autoReconnect=true&useSSL=false
+ # RUNDECK_PLUGIN_EXECUTIONFILESTORAGE_NAME: com.rundeck.rundeckpro.amazon-s3
+ # RUNDECK_PLUGIN_EXECUTIONFILESTORAGE_S3_BUCKET: ${RUNDECK_PLUGIN_EXECUTIONFILESTORAGE_S3_BUCKET}
+ # RUNDECK_PLUGIN_EXECUTIONFILESTORAGE_S3_REGION: ${RUNDECK_PLUGIN_EXECUTIONFILESTORAGE_S3_REGION}
+ # RUNDECK_STORAGE_CONVERTER_1_CONFIG_PASSWORD: ${RUNDECK_STORAGE_PASSWORD}
+ # RUNDECK_CONFIG_STORAGE_CONVERTER_1_CONFIG_PASSWORD: ${RUNDECK_STORAGE_PASSWORD}
+ # sshSecrets: "ssh-secret"
+ awsCredentialsSecret: ""
+
+nameOverride: ""
+fullnameOverride: ""
+
+service:
+ type: ClusterIP
+ port: 80
+
+ingress:
+ enabled: false
+ annotations: {}
+ # kubernetes.io/ingress.class: nginx
+ # kubernetes.io/tls-acme: "true"
+ paths: []
+ hosts:
+ - chart-example.local
+ tls: []
+ # - secretName: chart-example-tls
+ # hosts:
+ # - chart-example.local
+
+resources: {}
+ # We usually recommend not to specify default resources and to leave this as a conscious
+ # choice for the user. This also increases chances charts run on environments with little
+ # resources, such as Minikube. If you do want to specify resources, uncomment the following
+ # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
+ # limits:
+ # cpu: 100m
+ # memory: 128Mi
+ # requests:
+ # cpu: 100m
+ # memory: 128Mi
+
+nodeSelector: {}
+
+tolerations: []
+
+affinity: {}
diff --git a/incubator/schema-registry/Chart.yaml b/incubator/schema-registry/Chart.yaml
index 3c75c4465a43..16bda3c826d6 100644
--- a/incubator/schema-registry/Chart.yaml
+++ b/incubator/schema-registry/Chart.yaml
@@ -1,6 +1,6 @@
name: schema-registry
home: https://docs.confluent.io/current/schema-registry/docs/index.html
-version: 1.1.2
+version: 1.1.4
appVersion: 5.0.1
keywords:
- confluent
diff --git a/incubator/schema-registry/README.md b/incubator/schema-registry/README.md
index f21b817a8459..53e63e9014af 100644
--- a/incubator/schema-registry/README.md
+++ b/incubator/schema-registry/README.md
@@ -78,6 +78,7 @@ The following table lists the configurable parameters of the SchemaRegistry char
| `sasl.scram.zookeeperClientPassword` | the sasl scram password to use to authenticate to zookeeper | `zookeeper-password` |
| `resources` | CPU/Memory resource requests/limits | `{}` |
| `servicePort` | The port on which the SchemaRegistry server will be exposed. | `8081` |
+| `service.labels` | Additional labels for the service | `{}` |
| `overrideGroupId` | Group ID defaults to using Release Name so each release is its own Schema Registry worker group, it can be overridden | `{- .Release.Name -}}` |
| `kafkaStore.overrideBootstrapServers` | Defaults to Kafka Servers in the same release, it can be overridden in case there was a separate release for Kafka Deploy | `{{- printf "PLAINTEXT://%s-kafka-headless:9092" .Release.Name }}`
| `kafka.enabled` | If `true`, install Kafka/Zookeeper alongside the `SchemaRegistry`. This is intended for testing and argument-less helm installs of this chart only and should not be used in Production. | `true` |
diff --git a/incubator/schema-registry/templates/deployment.yaml b/incubator/schema-registry/templates/deployment.yaml
index 8bb62d0eee2e..2521c14b9145 100644
--- a/incubator/schema-registry/templates/deployment.yaml
+++ b/incubator/schema-registry/templates/deployment.yaml
@@ -109,8 +109,6 @@ spec:
value: {{ template "schema-registry.kafkaStore.bootstrapServers" . }}
- name: SCHEMA_REGISTRY_KAFKASTORE_GROUP_ID
value: {{ template "schema-registry.kafkaStore.groupId" . }}
- - name: SCHEMA_REGISTRY_MASTER_ELIGIBILITY
- value: "true"
{{ range $configName, $configValue := .Values.configurationOverrides }}
- name: SCHEMA_REGISTRY_{{ $configName | replace "." "_" | upper }}
value: {{ $configValue | quote }}
diff --git a/incubator/schema-registry/templates/service.yaml b/incubator/schema-registry/templates/service.yaml
index 5e9204906bc0..94e10ecb94b9 100644
--- a/incubator/schema-registry/templates/service.yaml
+++ b/incubator/schema-registry/templates/service.yaml
@@ -7,6 +7,9 @@ metadata:
chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
+{{- if .Values.service.labels }}
+{{ toYaml .Values.service.labels | indent 4 }}
+{{- end }}
spec:
ports:
- name: schema-registry
diff --git a/incubator/schema-registry/values.yaml b/incubator/schema-registry/values.yaml
index 08d2fd3ab1bf..92554d7a6c7f 100644
--- a/incubator/schema-registry/values.yaml
+++ b/incubator/schema-registry/values.yaml
@@ -18,6 +18,8 @@ replicaCount: 1
## Schema Registry Settings Overrides
## Configuration Options can be found here: https://docs.confluent.io/current/schema-registry/docs/config.html
configurationOverrides: {}
+ ## The default master.eligibility is true
+ # master.eligibility: false
## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
@@ -35,6 +37,11 @@ resources: {}
## The port on which the SchemaRegistry will be available and serving requests
servicePort: 8081
+## Provides schema registry service settings
+service:
+ ## Any additional labels to add to the service
+ labels: {}
+
## If `Kafka.Enabled` is `false`, kafkaStore.overrideBootstrapServers must be provided for Master Election.
## You can list load balanced service endpoint, or list of all brokers (which is hard in K8s). e.g.:
## overrideBootstrapServers: "PLAINTEXT://dozing-prawn-kafka-headless:9092"
diff --git a/incubator/sentry-kubernetes/Chart.yaml b/incubator/sentry-kubernetes/Chart.yaml
index 736143fd3c7e..8888ac0986a4 100644
--- a/incubator/sentry-kubernetes/Chart.yaml
+++ b/incubator/sentry-kubernetes/Chart.yaml
@@ -1,11 +1,15 @@
apiVersion: v1
description: A Helm chart for sentry-kubernetes (https://github.com/getsentry/sentry-kubernetes)
name: sentry-kubernetes
-version: 0.1.6
+version: 0.2.0
appVersion: latest
+icon: https://sentry-brand.storage.googleapis.com/sentry-glyph-white.png
home: https://github.com/getsentry/sentry-kubernetes
sources:
- https://github.com/getsentry/sentry-kubernetes
+maintainers:
+ - name: cpanato
+ email: ctadeu@gmail.com
keywords:
- sentry
- report kubernetes events
diff --git a/incubator/sentry-kubernetes/OWNERS b/incubator/sentry-kubernetes/OWNERS
index ca300fa25d28..37ac5afbdf43 100644
--- a/incubator/sentry-kubernetes/OWNERS
+++ b/incubator/sentry-kubernetes/OWNERS
@@ -1,4 +1,6 @@
approvers:
+- cpanato
- gianrubio
reviewers:
+- cpanato
- gianrubio
diff --git a/incubator/sentry-kubernetes/README.md b/incubator/sentry-kubernetes/README.md
index 207c26817fb6..b72f285fb841 100644
--- a/incubator/sentry-kubernetes/README.md
+++ b/incubator/sentry-kubernetes/README.md
@@ -17,6 +17,7 @@ The following table lists the configurable parameters of the sentry-kubernetes c
| `sentry.dsn` | Sentry dsn | Empty |
| `sentry.environment` | Sentry environment | Empty |
| `sentry.release` | Sentry release | Empty |
+| `sentry.logLevel` | Sentry log level | Empty |
| `image.repository` | Container image name | `getsentry/sentry-kubernetes` |
| `image.tag` | Container image tag | `latest` |
| `rbac.create` | If `true`, create and use RBAC resources | `true` |
diff --git a/incubator/sentry-kubernetes/templates/deployment.yaml b/incubator/sentry-kubernetes/templates/deployment.yaml
index c3067f4b258b..772d1f6a68d1 100644
--- a/incubator/sentry-kubernetes/templates/deployment.yaml
+++ b/incubator/sentry-kubernetes/templates/deployment.yaml
@@ -34,6 +34,10 @@ spec:
- name: RELEASE
value: {{ .Values.sentry.release }}
{{ end }}
+ {{ if .Values.sentry.logLevel }}
+ - name: LOG_LEVEL
+ value: {{ .Values.sentry.logLevel }}
+ {{ end }}
resources:
{{ toYaml .Values.resources | indent 10 }}
{{- if .Values.nodeSelector }}
diff --git a/incubator/sentry-kubernetes/values.yaml b/incubator/sentry-kubernetes/values.yaml
index 46c9d17e30a6..3c88d2f16597 100644
--- a/incubator/sentry-kubernetes/values.yaml
+++ b/incubator/sentry-kubernetes/values.yaml
@@ -2,6 +2,7 @@
sentry:
dsn:
+ logLevel: ~
image:
repository: getsentry/sentry-kubernetes
tag: latest
diff --git a/incubator/solr/Chart.yaml b/incubator/solr/Chart.yaml
new file mode 100644
index 000000000000..205a4048544c
--- /dev/null
+++ b/incubator/solr/Chart.yaml
@@ -0,0 +1,15 @@
+---
+
+apiVersion: "v1"
+name: "solr"
+version: "1.0.0"
+appVersion: "7.6.0"
+description: "A helm chart to install Apache Solr: http://lucene.apache.org/solr/"
+keywords:
+ - "solr"
+home: "http://lucene.apache.org/solr/"
+sources:
+ - "https://gitbox.apache.org/repos/asf?p=lucene-solr.git"
+maintainers:
+ - name: "ian-thebridge-lucidworks"
+ email: "ian.thebridge@lucidworks.com"
diff --git a/incubator/solr/README.md b/incubator/solr/README.md
new file mode 100644
index 000000000000..cef0a0334200
--- /dev/null
+++ b/incubator/solr/README.md
@@ -0,0 +1,116 @@
+# Solr Helm Chart
+
+This Helm chart installs a Solr cluster and its required Zookeeper cluster into a running
+Kubernetes cluster.
+
+The chart installs the Solr docker image from: https://hub.docker.com/_/solr/
+
+## Dependencies
+
+- The zookeeper incubator helm chart
+- Tested on kubernetes 1.10+
+
+## Installation
+
+To install the Solr helm chart run:
+
+```
+$ helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
+$ helm install --name solr incubator/solr
+```
+
+## Configuration Options
+
+The following table shows the configuration options for the Solr helm chart:
+
+
+| Parameter | Description | Default Value |
+| --------------------------------------------- | ------------------------------------- | --------------------------------------------------------------------- |
+| `port` | The port that Solr will listen on | `8983` |
+| `replicaCount` | The number of replicas in the Solr statefulset | `3` |
+| `javaMem` | JVM memory settings to pass to Solr | `-Xms2g -Xmx3g` |
+| `resources` | Resource limits and requests to set on the solr pods | `{}` |
+| `terminationGracePeriodSeconds` | The termination grace period of the Solr pods | `180`|
+| `image.repository` | The repository to pull the docker image from| `solr` |
+| `image.tag` | The tag on the repository to pull | `7.6.0` |
+| `image.pullPolicy` | Solr pod pullPolicy | `IfNotPresent` |
+| `livenessProbe.initialDelaySeconds`           | Initial delay for Solr pod liveness probe | `20` |
+| `livenessProbe.periodSeconds` | Poll rate for liveness probe | `10` |
+| `readinessProbe.initialDelaySeconds`          | Initial delay for Solr pod readiness probe | `15` |
+| `readinessProbe.periodSeconds` | Poll rate for readiness probe | `5` |
+| `podAnnotations` | Annotations to be applied to the solr pods | `{}` |
+| `affinity` | Affinity policy to be applied to the Solr pods | `{}` |
+| `updateStrategy` | The update strategy of the solr pods | `{}` |
+| `logLevel` | The log level of the solr pods | `INFO` |
+| `podDisruptionBudget` | The pod disruption budget for the Solr statefulset | `{"maxUnavailable": 1}` |
+| `volumeClaimTemplates.storageClassName` | The name of the storage class for the Solr PVC | `` |
+| `volumeClaimTemplates.storageSize` | The size of the PVC | `20Gi` |
+| `volumeClaimTemplates.accessModes` | The access mode of the PVC| `[ "ReadWriteOnce" ]` |
+| `tls.enabled` | Whether to enable TLS, requires `tls.certSecret.name` to be set to a secret containing cert details, see README for details | `false` |
+| `tls.wantClientAuth` | Whether Solr wants client authentication | `false` |
+| `tls.needClientAuth` | Whether Solr requires client authentication | `false` |
+| `tls.keystorePassword` | Password for the tls java keystore | `changeit` |
+| `tls.importKubernetesCA` | Whether to import the kubernetes CA into the Solr truststore | `false` |
+| `tls.checkPeerName` | Whether Solr checks the name in the TLS certs | `false` |
+| `tls.caSecret.name`                           | The name of the Kubernetes secret containing the CA bundle to import into the truststore | `` |
+| `tls.caSecret.bundlePath` | The key in the Kubernetes secret that contains the CA bundle | `` |
+| `tls.certSecret.name` | The name of the Kubernetes secret that contains the TLS certificate and private key | `` |
+| `tls.certSecret.keyPath` | The key in the Kubernetes secret that contains the private key | `tls.key` |
+| `tls.certSecret.certPath` | The key in the Kubernetes secret that contains the TLS certificate | `tls.crt` |
+| `service.type` | The type of service for the solr client service | `ClusterIP` |
+| `service.annotations` | Annotations to apply to the solr client service | `{}` |
+| `exporter.enabled` | Whether to enable the Solr Prometheus exporter | `false` |
+| `exporter.configFile` | The path in the docker image that the exporter loads the config from | `/opt/solr/contrib/prometheus-exporter/conf/solr-exporter-config.xml` |
+| `exporter.updateStrategy` | Update strategy for the exporter deployment | `{}` |
+| `exporter.podAnnotations`                     | Annotations to set on the exporter pods | `{}` |
+| `exporter.resources` | Resource limits to set on the exporter pods | `{}` |
+| `exporter.port` | The port that the exporter runs on | `9983` |
+| `exporter.threads` | The number of query threads that the exporter runs | `7` |
+| `exporter.livenessProbe.initialDelaySeconds`  | Initial delay for the exporter pod liveness probe | `20` |
+| `exporter.livenessProbe.periodSeconds` | Poll rate for liveness probe | `10` |
+| `exporter.readinessProbe.initialDelaySeconds` | Initial delay for the exporter pod readiness probe | `15` |
+| `exporter.readinessProbe.periodSeconds` | Poll rate for readiness probe | `5` |
+| `exporter.service.type` | The type of the exporter service | `ClusterIP` |
+| `exporter.service.annotations` | Annotations to apply to the exporter service | `{}` |
+
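+As with any chart, these parameters can be overridden at install time with a values file. For example, a hypothetical `custom-values.yaml` (the keys come from the table above; the sizes shown are purely illustrative):
+
+```
+replicaCount: 5
+javaMem: "-Xms4g -Xmx6g"
+volumeClaimTemplates:
+  storageSize: "50Gi"
+exporter:
+  enabled: true
+```
+
+installed with `helm install --name solr incubator/solr -f custom-values.yaml`.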
+
+## TLS Configuration
+
+Solr can be configured to use TLS to encrypt the traffic between solr nodes. To set this up with a certificate signed by the Kubernetes CA:
+
+Generate SSL certificate for the installation:
+
+`cfssl genkey ssl_config.json | cfssljson -bare server`
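+The contents of `ssl_config.json` are not part of this chart; a minimal cfssl CSR config might look like the following (the CN and hosts are placeholders and must match your Solr service names):
+
+```
+{
+  "CN": "solr-svc",
+  "hosts": [
+    "solr-svc",
+    "solr-svc.default.svc.cluster.local"
+  ],
+  "key": {
+    "algo": "rsa",
+    "size": 2048
+  }
+}
+```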
+
+base64-encode the CSR and apply it to Kubernetes as a CertificateSigningRequest:
+
+```
+export MY_CSR_NAME="solr-certificate"
+cat <<EOF | kubectl apply -f -
+apiVersion: certificates.k8s.io/v1beta1
+kind: CertificateSigningRequest
+metadata:
+  name: ${MY_CSR_NAME}
+spec:
+  groups:
+  - system:authenticated
+  request: $(cat server.csr | base64 | tr -d '\n')
+  usages:
+  - digital signature
+  - key encipherment
+  - server auth
+EOF
+```
+
+Once the CSR is approved (`kubectl certificate approve ${MY_CSR_NAME}`), retrieve the signed certificate:
+
+`kubectl get csr ${MY_CSR_NAME} -o jsonpath='{.status.certificate}' | base64 --decode > server-cert.pem`
+
+We store the certificate and private key in a Kubernetes secret:
+
+`kubectl create secret tls solr-certificate --cert server-cert.pem --key server-key.pem`
+
+Now the secret can be used in the solr installation:
+
+`helm install . --set tls.enabled=true,tls.certSecret.name=solr-certificate,tls.importKubernetesCA=true`
diff --git a/incubator/solr/requirements.lock b/incubator/solr/requirements.lock
new file mode 100644
index 000000000000..15d4874562e7
--- /dev/null
+++ b/incubator/solr/requirements.lock
@@ -0,0 +1,6 @@
+dependencies:
+- name: zookeeper
+ repository: https://kubernetes-charts-incubator.storage.googleapis.com/
+ version: 1.2.2
+digest: sha256:535c0850e71490a52df2686fc1f0b3c737535388df646e1eefe1f0a76999283e
+generated: 2019-02-05T15:07:14.273428Z
diff --git a/incubator/solr/requirements.yaml b/incubator/solr/requirements.yaml
new file mode 100644
index 000000000000..772ceaee01fa
--- /dev/null
+++ b/incubator/solr/requirements.yaml
@@ -0,0 +1,6 @@
+---
+
+dependencies:
+ - name: zookeeper
+ version: 1.2.2
+ repository: "https://kubernetes-charts-incubator.storage.googleapis.com/"
diff --git a/incubator/solr/templates/NOTES.txt b/incubator/solr/templates/NOTES.txt
new file mode 100644
index 000000000000..c152a37c0d2c
--- /dev/null
+++ b/incubator/solr/templates/NOTES.txt
@@ -0,0 +1,11 @@
+Your Solr cluster has now been installed, and can be accessed in the following ways:
+
+ * Internally, within the kubernetes cluster on:
+
+{{ template "solr.service-name" . }}.{{ .Release.Namespace }}:{{ .Values.port }}
+
+ * External to the kubernetes cluster:
+
+export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app={{ template "solr.name" . }},component=server,release={{ .Release.Name }}" -o jsonpath="{ .items[0].metadata.name }")
+echo "Visit http://127.0.0.1:{{ .Values.port }} to access Solr"
+kubectl port-forward --namespace {{ .Release.Namespace }} $POD_NAME {{ .Values.port }}:{{ .Values.port }}
diff --git a/incubator/solr/templates/_helpers.tpl b/incubator/solr/templates/_helpers.tpl
new file mode 100644
index 000000000000..b8a98da72a09
--- /dev/null
+++ b/incubator/solr/templates/_helpers.tpl
@@ -0,0 +1,91 @@
+{{/* vim: set filetype=mustache: */}}
+{{/*
+Expand the name of the chart.
+*/}}
+{{- define "solr.name" -}}
+{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+
+{{/*
+Create a default fully qualified app name.
+We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
+If release name contains chart name it will be used as a full name.
+*/}}
+{{- define "solr.fullname" -}}
+{{- if .Values.fullnameOverride -}}
+{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
+{{- else -}}
+{{- $name := default .Chart.Name .Values.nameOverride -}}
+{{- if contains $name .Release.Name -}}
+{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
+{{- else -}}
+{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Define the name of the headless service for solr
+*/}}
+{{- define "solr.headless-service-name" -}}
+{{- printf "%s-%s" (include "solr.fullname" .) "headless" | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+
+{{/*
+Define the name of the client service for solr
+*/}}
+{{- define "solr.service-name" -}}
+{{- printf "%s-%s" (include "solr.fullname" .) "svc" | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+
+{{/*
+Define the name of the solr exporter
+*/}}
+{{- define "solr.exporter-name" -}}
+{{- printf "%s-%s" (include "solr.fullname" .) "exporter" | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+
+{{/*
+The name of the zookeeper service
+*/}}
+{{- define "solr.zookeeper-name" -}}
+{{- printf "%s-%s" .Release.Name "zookeeper" | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+
+{{/*
+The name of the zookeeper headless service
+*/}}
+{{- define "solr.zookeeper-service-name" -}}
+{{ printf "%s-%s" (include "solr.zookeeper-name" .) "headless" | trunc 63 | trimSuffix "-" }}
+{{- end -}}
+
+{{/*
+Create chart name and version as used by the chart label.
+*/}}
+{{- define "solr.chart" -}}
+{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+
+{{/*
+ Define the name of the solr PVC
+*/}}
+{{- define "solr.pvc-name" -}}
+{{ printf "%s-%s" (include "solr.fullname" .) "pvc" | trunc 63 | trimSuffix "-" }}
+{{- end -}}
+
+{{/*
+ Define the name of the solr.xml configmap
+*/}}
+{{- define "solr.configmap-name" -}}
+{{- printf "%s-%s" (include "solr.fullname" .) "config-map" | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+
+{{/*
+ Define the labels that should be applied to all resources in the chart
+*/}}
+{{- define "solr.common.labels" -}}
+app.kubernetes.io/name: {{ include "solr.name" . }}
+app.kubernetes.io/instance: {{ .Release.Name }}
+app.kubernetes.io/managed-by: {{ .Release.Service }}
+helm.sh/chart: {{ include "solr.chart" . }}
+{{- end -}}
diff --git a/incubator/solr/templates/exporter-deployment.yaml b/incubator/solr/templates/exporter-deployment.yaml
new file mode 100644
index 000000000000..ba416e19c0d5
--- /dev/null
+++ b/incubator/solr/templates/exporter-deployment.yaml
@@ -0,0 +1,103 @@
+{{- if .Values.exporter.enabled }}
+---
+
+apiVersion: "v1"
+kind: "Service"
+metadata:
+ name: "{{ include "solr.exporter-name" . }}"
+ labels:
+{{ include "solr.common.labels" . | indent 4 }}
+ app.kubernetes.io/component: "exporter"
+ annotations:
+{{ toYaml .Values.exporter.service.annotations | indent 4}}
+spec:
+ type: "{{ .Values.exporter.service.type }}"
+ ports:
+ - port: {{ .Values.exporter.port }}
+ name: "solr-client"
+ selector:
+ app.kubernetes.io/name: "{{ include "solr.name" . }}"
+ app.kubernetes.io/instance: "{{ .Release.Name }}"
+ app.kubernetes.io/component: "exporter"
+
+
+---
+
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: {{ include "solr.exporter-name" . }}
+ labels:
+{{ include "solr.common.labels" . | indent 4 }}
+ app.kubernetes.io/component: "exporter"
+spec:
+ selector:
+ matchLabels:
+ app.kubernetes.io/name: "{{ include "solr.name" . }}"
+ app.kubernetes.io/instance: "{{ .Release.Name }}"
+ app.kubernetes.io/component: "exporter"
+ replicas: 1
+ strategy:
+ {{ toYaml .Values.exporter.updateStrategy | indent 4}}
+ template:
+ metadata:
+ labels:
+{{ include "solr.common.labels" . | indent 8 }}
+ app.kubernetes.io/component: "exporter"
+ annotations:
+{{ toYaml .Values.exporter.podAnnotations | indent 8 }}
+ spec:
+ affinity:
+{{ tpl (toYaml .Values.affinity) . | indent 8 }}
+ containers:
+ - name: exporter
+ image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
+ imagePullPolicy: {{ .Values.image.pullPolicy }}
+ resources:
+{{ toYaml .Values.exporter.resources | indent 12 }}
+ ports:
+ - containerPort: {{ .Values.port }}
+ name: solr-client
+ command:
+ - "/opt/solr/contrib/prometheus-exporter/bin/solr-exporter"
+ - "-p"
+ - "{{ .Values.exporter.port }}"
+ - "-z"
+ - "{{ include "solr.zookeeper-name" . }}:2181"
+ - "-n"
+ - "{{ .Values.exporter.threads }}"
+ - "-f"
+ - "{{ .Values.exporter.configFile }}"
+ livenessProbe:
+ initialDelaySeconds: {{ .Values.exporter.livenessProbe.initialDelaySeconds }}
+ periodSeconds: {{ .Values.exporter.livenessProbe.periodSeconds }}
+ httpGet:
+ path: "/metrics"
+ port: {{ .Values.exporter.port }}
+ readinessProbe:
+ initialDelaySeconds: {{ .Values.exporter.readinessProbe.initialDelaySeconds }}
+ periodSeconds: {{ .Values.exporter.readinessProbe.periodSeconds }}
+ httpGet:
+ path: "/metrics"
+ port: {{ .Values.exporter.port }}
+ initContainers:
+ - name: solr-init
+ image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
+ imagePullPolicy: {{ .Values.image.pullPolicy }}
+ command:
+ - 'sh'
+ - '-c'
+ - |
+ {{- if .Values.tls.enabled }}
+ PROTOCOL="https://"
+ {{ else }}
+ PROTOCOL="http://"
+ {{- end }}
+ COUNTER=0;
+ while [ $COUNTER -lt 30 ]; do
+ curl -k -s --connect-timeout 10 "${PROTOCOL}{{ include "solr.service-name" . }}:{{ .Values.port }}/solr/admin/info/system" && exit 0
+ let COUNTER=COUNTER+1;
+ sleep 2
+ done;
+ echo "Did NOT see a Running Solr instance after 60 secs!";
+ exit 1;
+{{ end }}
diff --git a/incubator/solr/templates/poddisruptionbudget.yaml b/incubator/solr/templates/poddisruptionbudget.yaml
new file mode 100644
index 000000000000..677ded3e6e72
--- /dev/null
+++ b/incubator/solr/templates/poddisruptionbudget.yaml
@@ -0,0 +1,14 @@
+---
+apiVersion: "policy/v1beta1"
+kind: "PodDisruptionBudget"
+metadata:
+ name: "{{ include "solr.fullname" . }}"
+ labels:
+{{ include "solr.common.labels" . | indent 4 }}
+spec:
+ selector:
+ matchLabels:
+ app.kubernetes.io/name: "{{ include "solr.name" . }}"
+ app.kubernetes.io/instance: "{{ .Release.Name }}"
+ app.kubernetes.io/component: "server"
+{{ toYaml .Values.podDisruptionBudget | indent 2 }}
diff --git a/incubator/solr/templates/service-headless.yaml b/incubator/solr/templates/service-headless.yaml
new file mode 100644
index 000000000000..9ff2aa23da75
--- /dev/null
+++ b/incubator/solr/templates/service-headless.yaml
@@ -0,0 +1,17 @@
+---
+
+apiVersion: "v1"
+kind: "Service"
+metadata:
+ name: "{{ include "solr.headless-service-name" . }}"
+ labels:
+{{ include "solr.common.labels" . | indent 4 }}
+spec:
+ clusterIP: "None"
+ ports:
+ - port: {{ .Values.port }}
+ name: "solr-headless"
+ selector:
+ app.kubernetes.io/name: "{{ include "solr.name" . }}"
+ app.kubernetes.io/instance: "{{ .Release.Name }}"
+ app.kubernetes.io/component: "server"
diff --git a/incubator/solr/templates/service.yaml b/incubator/solr/templates/service.yaml
new file mode 100644
index 000000000000..e77273bd53a3
--- /dev/null
+++ b/incubator/solr/templates/service.yaml
@@ -0,0 +1,19 @@
+---
+
+apiVersion: "v1"
+kind: "Service"
+metadata:
+ name: "{{ include "solr.service-name" . }}"
+ labels:
+{{ include "solr.common.labels" . | indent 4 }}
+ annotations:
+{{ toYaml .Values.service.annotations | indent 4}}
+spec:
+ type: "{{ .Values.service.type }}"
+ ports:
+ - port: {{ .Values.port }}
+ name: "solr-client"
+ selector:
+ app.kubernetes.io/name: "{{ include "solr.name" . }}"
+ app.kubernetes.io/instance: "{{ .Release.Name }}"
+ app.kubernetes.io/component: "server"
diff --git a/incubator/solr/templates/solr-xml-configmap.yaml b/incubator/solr/templates/solr-xml-configmap.yaml
new file mode 100644
index 000000000000..43cdb2367b62
--- /dev/null
+++ b/incubator/solr/templates/solr-xml-configmap.yaml
@@ -0,0 +1,29 @@
+---
+
+apiVersion: "v1"
+kind: "ConfigMap"
+metadata:
+ name: "{{ include "solr.configmap-name" . }}"
+ labels:
+{{ include "solr.common.labels" . | indent 4}}
+data:
+ solr.xml: |
+    <?xml version="1.0" encoding="UTF-8" ?>
+    <solr>
+      <solrcloud>
+        <str name="host">${host:}</str>
+        <int name="hostPort">${jetty.port:8983}</int>
+        <str name="hostContext">${hostContext:solr}</str>
+        <bool name="genericCoreNodeNames">${genericCoreNodeNames:true}</bool>
+        <int name="zkClientTimeout">${zkClientTimeout:30000}</int>
+        <int name="distribUpdateSoTimeout">${distribUpdateSoTimeout:600000}</int>
+        <int name="distribUpdateConnTimeout">${distribUpdateConnTimeout:60000}</int>
+        <str name="zkCredentialsProvider">${zkCredentialsProvider:org.apache.solr.common.cloud.DefaultZkCredentialsProvider}</str>
+        <str name="zkACLProvider">${zkACLProvider:org.apache.solr.common.cloud.DefaultZkACLProvider}</str>
+      </solrcloud>
+      <shardHandlerFactory name="shardHandlerFactory" class="HttpShardHandlerFactory">
+        <int name="socketTimeout">${socketTimeout:600000}</int>
+        <int name="connTimeout">${connTimeout:60000}</int>
+      </shardHandlerFactory>
+    </solr>
diff --git a/incubator/solr/templates/statefulset.yaml b/incubator/solr/templates/statefulset.yaml
new file mode 100644
index 000000000000..5c259be91898
--- /dev/null
+++ b/incubator/solr/templates/statefulset.yaml
@@ -0,0 +1,204 @@
+---
+
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+ name: {{ include "solr.fullname" . }}
+ labels:
+{{ include "solr.common.labels" . | indent 4 }}
+ app.kubernetes.io/component: server
+spec:
+ selector:
+ matchLabels:
+ app.kubernetes.io/name: "{{ include "solr.name" . }}"
+ app.kubernetes.io/instance: "{{ .Release.Name }}"
+ app.kubernetes.io/component: "server"
+ serviceName: {{ include "solr.headless-service-name" . }}
+ replicas: {{ .Values.replicaCount }}
+ updateStrategy:
+ {{ toYaml .Values.updateStrategy | indent 4}}
+ template:
+ metadata:
+ labels:
+ app.kubernetes.io/name: "{{ include "solr.name" . }}"
+ app.kubernetes.io/instance: "{{ .Release.Name }}"
+ app.kubernetes.io/component: "server"
+ annotations:
+{{ toYaml .Values.podAnnotations | indent 8 }}
+ spec:
+ securityContext:
+ fsGroup: 8983
+ affinity:
+{{ tpl (toYaml .Values.affinity) . | indent 8 }}
+ terminationGracePeriodSeconds: {{ .Values.terminationGracePeriodSeconds }}
+ volumes:
+{{- if .Values.tls.enabled }}
+ - name: keystore-volume
+ emptyDir: {}
+ - name: "tls-secret"
+ secret:
+ secretName: {{ .Values.tls.certSecret.name }}
+{{- if not (eq .Values.tls.caSecret.name "") }}
+ - name: "tls-ca"
+ secret:
+ secretName: {{ .Values.tls.caSecret.name }}
+{{- end }}
+{{- end }}
+ - name: solr-xml
+ configMap:
+ name: {{ include "solr.configmap-name" . }}
+ items:
+ - key: solr.xml
+ path: solr.xml
+ initContainers:
+ - name: check-zk
+ image: busybox:latest
+ command:
+ - 'sh'
+ - '-c'
+ - |
+ COUNTER=0;
+ while [ $COUNTER -lt 120 ]; do
+ for i in {{ $vals := . -}}{{ range $i, $e := until ( int .Values.zookeeper.replicaCount ) -}}
+ "{{- include "solr.zookeeper-name" $vals }}-{{ $i }}.{{ include "solr.zookeeper-service-name" $vals }}" {{ end -}};
+ do mode=$(echo srvr | nc $i 2181 | grep "Mode");
+ if [ "$mode" == "Mode: leader" ] || [ "$mode" == "Mode: standalone" ]; then
+ exit 0;
+ fi;
+ done;
+ let COUNTER=COUNTER+1;
+ sleep 2;
+ done;
+ echo "Did NOT see a ZK leader after 240 secs!";
+ exit 1;
+ - name: "cp-solr-xml"
+ image: busybox:latest
+ command: ['sh', '-c', 'cp /tmp/solr.xml /tmp-config/solr.xml']
+ volumeMounts:
+ - name: "solr-xml"
+ mountPath: "/tmp"
+ - name: "{{ include "solr.pvc-name" . }}"
+ mountPath: "/tmp-config"
+
+{{- if .Values.tls.enabled }}
+ - name: "setup-keystore-and-properties"
+ image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
+ imagePullPolicy: "{{ .Values.image.pullPolicy }}"
+ command:
+ - "sh"
+ - "-c"
+ - |
+ set -e
+ PKCS12_OUTPUT=/tmp/keystore.pkcs12
+ DEST_KEYSTORE="/tmp/keystore/solr.jks"
+ TRUST_KEYSTORE="/tmp/keystore/solr-truststore.jks"
+ PASSWORD={{ .Values.tls.keystorePassword }}
+ openssl "pkcs12" -export -inkey "/tmp/tls_secret/{{.Values.tls.certSecret.keyPath }}" -in "/tmp/tls_secret/{{.Values.tls.certSecret.certPath }}" -out "${PKCS12_OUTPUT}" -password "pass:${PASSWORD}"
+ keytool -importkeystore -noprompt -srckeystore "${PKCS12_OUTPUT}" -srcstoretype "pkcs12" -destkeystore "${DEST_KEYSTORE}" -storepass "${PASSWORD}" -srcstorepass "${PASSWORD}"
+{{ if .Values.tls.importKubernetesCA }}
+ csplit -z -f crt- /var/run/secrets/kubernetes.io/serviceaccount/ca.crt '/-----BEGIN CERTIFICATE-----/' '{*}'
+ for file in crt-*; do
+ keytool -import -noprompt -keystore "${TRUST_KEYSTORE}" -file "${file}" -storepass "${PASSWORD}" -alias service-$file;
+ done
+ rm crt-*
+{{ end }}
+{{ if not (eq .Values.tls.caSecret.name "") }}
+ csplit -z -f crt- /tmp/tls_ca/{{ .Values.tls.caSecret.bundlePath }} '/-----BEGIN CERTIFICATE-----/' '{*}'
+ for file in crt-*; do
+ keytool -import -noprompt -keystore "${TRUST_KEYSTORE}" -file "${file}" -storepass "${PASSWORD}" -alias service-$file;
+ done
+ rm crt-*
+{{ end }}
+ /opt/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost "{{ include "solr.zookeeper-name" . }}:2181" -cmd clusterprop -name urlScheme -val https
+ volumeMounts:
+ - name: "keystore-volume"
+ mountPath: "/tmp/keystore"
+ - name: "tls-secret"
+ mountPath: "/tmp/tls_secret"
+ readOnly: true
+
+{{ if not (eq .Values.tls.caSecret.name "") }}
+ - name: "tls-ca"
+ mountPath: "/tmp/tls_ca"
+ readOnly: true
+{{ end }}
+{{ end }}
+ containers:
+ - name: solr
+ image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
+ imagePullPolicy: {{ .Values.image.pullPolicy }}
+ resources:
+{{ toYaml .Values.resources | indent 12 }}
+ ports:
+ - containerPort: {{ .Values.port }}
+ name: solr-client
+ env:
+ - name: "SOLR_JAVA_MEM"
+ value: "{{ .Values.javaMem }}"
+ - name: "SOLR_HOME"
+ value: "/opt/solr/server/home"
+ - name: "SOLR_PORT"
+ value: "{{ .Values.port }}"
+ - name: "POD_HOSTNAME"
+ valueFrom:
+ fieldRef:
+ fieldPath: metadata.name
+ - name: "SOLR_HOST"
+ value: "$(POD_HOSTNAME).{{ include "solr.headless-service-name" . }}"
+ - name: "ZK_HOST"
+ value: "{{ include "solr.zookeeper-name" . }}:2181"
+ - name: "SOLR_LOG_LEVEL"
+ value: "{{ .Values.logLevel }}"
+{{ if .Values.tls.enabled }}
+ - name: "SOLR_SSL_ENABLED"
+ value: "true"
+ - name: "SOLR_SSL_KEY_STORE"
+ value: "/etc/ssl/keystores/solr.jks"
+ - name: "SOLR_SSL_KEY_STORE_PASSWORD"
+ value: "{{ .Values.tls.keystorePassword }}"
+ - name: "SOLR_SSL_TRUST_STORE"
+ value: "/etc/ssl/keystores/solr-truststore.jks"
+ - name: "SOLR_SSL_TRUST_STORE_PASSWORD"
+ value: "{{ .Values.tls.keystorePassword }}"
+ - name: "SOLR_SSL_WANT_CLIENT_AUTH"
+ value: "{{ .Values.tls.wantClientAuth }}"
+ - name: "SOLR_SSL_NEED_CLIENT_AUTH"
+ value: "{{ .Values.tls.needClientAuth }}"
+ - name: "SOLR_SSL_CHECK_PEER_NAME"
+ value: "{{ .Values.tls.checkPeerName }}"
+{{ end }}
+ livenessProbe:
+ initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
+ periodSeconds: {{ .Values.livenessProbe.periodSeconds }}
+ httpGet:
+ scheme: "{{ .Values.tls.enabled | ternary "HTTPS" "HTTP" }}"
+ path: /solr/admin/info/system
+ port: {{ .Values.port }}
+ readinessProbe:
+ initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
+ periodSeconds: {{ .Values.readinessProbe.periodSeconds }}
+ httpGet:
+ scheme: "{{ .Values.tls.enabled | ternary "HTTPS" "HTTP" }}"
+ path: /solr/admin/info/system
+ port: {{ .Values.port }}
+ volumeMounts:
+ - name: {{ include "solr.pvc-name" . }}
+ mountPath: /opt/solr/server/home
+{{ if .Values.tls.enabled }}
+ - name: "keystore-volume"
+ mountPath: "/etc/ssl/keystores"
+{{ end }}
+ volumeClaimTemplates:
+ - metadata:
+ name: {{ include "solr.pvc-name" . }}
+ annotations:
+ pv.beta.kubernetes.io/gid: "8983"
+ spec:
+ accessModes:
+{{ toYaml .Values.volumeClaimTemplates.accessModes | indent 10 }}
+{{- if not ( eq .Values.volumeClaimTemplates.storageClassName "" ) }}
+ storageClassName: "{{ .Values.volumeClaimTemplates.storageClassName }}"
+{{- end }}
+ resources:
+ requests:
+ storage: {{ .Values.volumeClaimTemplates.storageSize }}
diff --git a/incubator/solr/values.yaml b/incubator/solr/values.yaml
new file mode 100644
index 000000000000..dacd5c9ab9e3
--- /dev/null
+++ b/incubator/solr/values.yaml
@@ -0,0 +1,97 @@
+---
+
+# Which port should solr listen on
+port: 8983
+
+# Number of solr instances to run
+replicaCount: 3
+
+# Settings for solr java memory
+javaMem: "-Xms2g -Xmx3g"
+
+# Set the limits and requests on solr pod resources
+resources: {}
+
+# Sets the termination Grace period for the solr pods
+# This can take a while for shards to elect new leaders
+terminationGracePeriodSeconds: 180
+
+# Solr image settings
+image:
+ repository: solr
+ tag: 7.6.0
+ pullPolicy: IfNotPresent
+
+# Solr pod liveness
+livenessProbe:
+ initialDelaySeconds: 20
+ periodSeconds: 10
+
+# Solr pod readiness
+readinessProbe:
+ initialDelaySeconds: 15
+ periodSeconds: 5
+
+# Annotations to apply to the solr pods
+podAnnotations: {}
+
+# Affinity group rules or the solr pods
+affinity: {}
+
+# Update Strategy for solr pods
+updateStrategy:
+ type: "RollingUpdate"
+
+# The log level of the Solr instances
+logLevel: "INFO"
+
+# Solr pod disruption budget
+podDisruptionBudget:
+ maxUnavailable: 1
+
+# Configuration for the solr PVC
+volumeClaimTemplates:
+ storageClassName: ""
+ storageSize: "20Gi"
+ accessModes:
+ - "ReadWriteOnce"
+
+# Configuration for solr TLS handling, see README.md for more instructions
+tls:
+ enabled: false
+ wantClientAuth: "false"
+ needClientAuth: "false"
+ keystorePassword: "changeit"
+ importKubernetesCA: "false"
+ checkPeerName: "false"
+ caSecret:
+ name: ""
+ bundlePath: ""
+ certSecret:
+ name: ""
+ keyPath: "tls.key"
+ certPath: "tls.crt"
+
+# Configuration for the solr service
+service:
+ type: ClusterIP
+ annotations: {}
+
+# Configuration for the solr prometheus exporter
+exporter:
+ enabled: false # Deploy the exporter
+ configFile: "/opt/solr/contrib/prometheus-exporter/conf/solr-exporter-config.xml" # The config file to point the exporter to
+ updateStrategy: {}
+ podAnnotations: {} # Annotations to apply to the exporter pod
+ resources: {} # Resource limits for the exporter
+ port: 9983 # The port to run the exporter on
+ threads: 7 # The number of threads the exporter uses to query solr
+ livenessProbe: # Liveness configuration for exporter pod
+ initialDelaySeconds: 20
+ periodSeconds: 10
+ readinessProbe: # Readiness configuration for exporter pod
+ initialDelaySeconds: 15
+ periodSeconds: 5
+ service:
+ type: "ClusterIP"
+ annotations: {}
diff --git a/incubator/sparkoperator/Chart.yaml b/incubator/sparkoperator/Chart.yaml
index e02fcf4152bf..278f8e434619 100644
--- a/incubator/sparkoperator/Chart.yaml
+++ b/incubator/sparkoperator/Chart.yaml
@@ -1,7 +1,8 @@
+apiVersion: v1
name: sparkoperator
description: A Helm chart for Spark on Kubernetes operator
-version: 0.1.7
-appVersion: v1beta1-0.7-2.4.0
+version: 0.2.3
+appVersion: v2.4.0-v1beta1-0.8.2
kubeVersion: ">=1.8.0-0"
keywords:
- spark
diff --git a/incubator/sparkoperator/README.md b/incubator/sparkoperator/README.md
index 6ff802918714..88ebb8375f04 100644
--- a/incubator/sparkoperator/README.md
+++ b/incubator/sparkoperator/README.md
@@ -1,6 +1,6 @@
### Helm Chart for Spark Operator
-This is the Helm chart for the [Spark-on-Kubernetes Operator](https://github.com/GoogleCloudPlatform/spark-on-k8s-operator).
+This is the Helm chart for the [Kubernetes Operator for Apache Spark](https://github.com/GoogleCloudPlatform/spark-on-k8s-operator).
#### Prerequisites
@@ -12,7 +12,7 @@ The chart can be installed by running:
```bash
$ helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
-$ helm install incubator/sparkoperator --namespace spark-operator
+$ helm install incubator/sparkoperator --namespace spark-operator --set sparkJobNamespace=default
```
Note that you need to use the `--namespace` flag during `helm install` to specify in which namespace you want to install the operator. The namespace can be existing or not. When it's not available, Helm would take care of creating the namespace. Note that this namespace has no relation to the namespace where you would like to deploy Spark jobs (i.e. the setting `sparkJobNamespace` shown in the table below). They can be the same namespace or different ones.
@@ -24,18 +24,20 @@ The following table lists the configurable parameters of the Spark operator char
| Parameter | Description | Default |
| ------------------------- | ------------------------------------------------------------ | -------------------------------------- |
| `operatorImageName` | The name of the operator image | `gcr.io/spark-operator/spark-operator` |
-| `operatorVersion` | The version of the operator to install | `v2.4.0-v1beta1-latest` |
+| `operatorVersion` | The version of the operator to install | `v2.4.0-v1beta1-0.8.1` |
| `imagePullPolicy` | Docker image pull policy | `IfNotPresent` |
-| `sparkJobNamespace` | K8s namespace where Spark jobs are to be deployed | `default` |
-| `enableWebhook` | Whether to enable mutating admission webhook | false |
+| `sparkJobNamespace` | K8s namespace where Spark jobs are to be deployed | `` |
+| `enableWebhook` | Whether to enable mutating admission webhook | false |
| `enableMetrics`           | Whether to expose metrics to be scraped by Prometheus        | true                                    |
-| `controllerThreads` | Number of worker threads used by the SparkApplication controller | 10 |
+| `controllerThreads` | Number of worker threads used by the SparkApplication controller | 10 |
+| `ingressUrlFormat` | Ingress URL format | "" |
+| `logLevel` | Logging verbosity level | 2 |
| `installCrds` | Whether to install CRDs | true |
| `metricsPort` | Port for the metrics endpoint | 10254 |
| `metricsEndpoint` | Metrics endpoint | "/metrics" |
| `metricsPrefix` | Prefix for the metrics | "" |
| `resyncInterval` | Informer resync interval in seconds | 30 |
-| `webhookPort` | Service port of the webhook server | 8080 | |
+| `webhookPort` | Service port of the webhook server | 8080 |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`.
diff --git a/incubator/sparkoperator/templates/crd-cleanup-job.yaml b/incubator/sparkoperator/templates/crd-cleanup-job.yaml
new file mode 100644
index 000000000000..6f3f3585edb5
--- /dev/null
+++ b/incubator/sparkoperator/templates/crd-cleanup-job.yaml
@@ -0,0 +1,44 @@
+{{ if .Values.installCrds }}
+apiVersion: batch/v1
+kind: Job
+metadata:
+ name: {{ include "sparkoperator.fullname" . }}-crd-cleanup
+ annotations:
+ "helm.sh/hook": pre-delete
+ "helm.sh/hook-delete-policy": hook-succeeded
+ labels:
+ app.kubernetes.io/name: {{ include "sparkoperator.name" . }}
+ helm.sh/chart: {{ include "sparkoperator.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+spec:
+ template:
+ spec:
+ serviceAccountName: {{ include "sparkoperator.serviceAccountName" . }}
+ restartPolicy: OnFailure
+ containers:
+ - name: delete-sparkapp-crd
+ image: {{ .Values.operatorImageName }}:{{ .Values.operatorVersion }}
+ imagePullPolicy: {{ .Values.imagePullPolicy }}
+ command:
+ - "/bin/sh"
+ - "-c"
+ - "curl -ik \
+ -X DELETE \
+ -H \"Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)\" \
+ -H \"Accept: application/json\" \
+ -H \"Content-Type: application/json\" \
+ https://kubernetes.default.svc/apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions/sparkapplications.sparkoperator.k8s.io"
+ - name: delete-scheduledsparkapp-crd
+ image: {{ .Values.operatorImageName }}:{{ .Values.operatorVersion }}
+ imagePullPolicy: {{ .Values.imagePullPolicy }}
+ command:
+ - "/bin/sh"
+ - "-c"
+ - "curl -ik \
+ -X DELETE \
+ -H \"Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)\" \
+ -H \"Accept: application/json\" \
+ -H \"Content-Type: application/json\" \
+ https://kubernetes.default.svc/apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions/scheduledsparkapplications.sparkoperator.k8s.io"
+{{ end }}
diff --git a/incubator/sparkoperator/templates/spark-operator-deployment.yaml b/incubator/sparkoperator/templates/spark-operator-deployment.yaml
index d2f1612aad7d..6530f45b057a 100644
--- a/incubator/sparkoperator/templates/spark-operator-deployment.yaml
+++ b/incubator/sparkoperator/templates/spark-operator-deployment.yaml
@@ -56,8 +56,9 @@ spec:
- containerPort: {{ .Values.metricsPort }}
{{ end }}
args:
- - -v=2
+ - -v={{ .Values.logLevel }}
- -namespace={{ .Values.sparkJobNamespace }}
+ - -ingress-url-format={{ .Values.ingressUrlFormat }}
- -install-crds={{ .Values.installCrds }}
- -controller-threads={{ .Values.controllerThreads }}
- -resync-interval={{ .Values.resyncInterval }}
@@ -76,3 +77,7 @@ spec:
- -webhook-svc-name={{ .Release.Name }}-webhook
- -webhook-config-name={{ include "sparkoperator.fullname" . }}-webhook-config
{{- end }}
+ {{- if .Values.nodeSelector }}
+ nodeSelector:
+{{ toYaml .Values.nodeSelector | indent 8 }}
+ {{- end }}
diff --git a/incubator/sparkoperator/templates/spark-operator-rbac.yaml b/incubator/sparkoperator/templates/spark-operator-rbac.yaml
index 3ff3f9ae1226..ed9d79a04764 100644
--- a/incubator/sparkoperator/templates/spark-operator-rbac.yaml
+++ b/incubator/sparkoperator/templates/spark-operator-rbac.yaml
@@ -14,6 +14,9 @@ rules:
verbs: ["*"]
- apiGroups: [""]
resources: ["services", "configmaps", "secrets"]
+ verbs: ["create", "get", "delete", "update"]
+- apiGroups: ["extensions"]
+ resources: ["ingresses"]
verbs: ["create", "get", "delete"]
- apiGroups: [""]
resources: ["nodes"]
@@ -35,7 +38,6 @@ apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ include "sparkoperator.fullname" . }}-crb
- namespace: {{ .Release.Namespace }}
labels:
app.kubernetes.io/name: {{ include "sparkoperator.name" . }}
helm.sh/chart: {{ include "sparkoperator.chart" . }}
diff --git a/incubator/sparkoperator/templates/webhook-cleanup-job.yaml b/incubator/sparkoperator/templates/webhook-cleanup-job.yaml
index d6d9df7cae2f..b6f1a971e11e 100644
--- a/incubator/sparkoperator/templates/webhook-cleanup-job.yaml
+++ b/incubator/sparkoperator/templates/webhook-cleanup-job.yaml
@@ -2,7 +2,7 @@
apiVersion: batch/v1
kind: Job
metadata:
- name: {{ include "sparkoperator.fullname" . }}-cleanup
+ name: {{ include "sparkoperator.fullname" . }}-webhook-cleanup
annotations:
"helm.sh/hook": pre-delete, pre-upgrade
"helm.sh/hook-delete-policy": hook-succeeded
diff --git a/incubator/sparkoperator/templates/webhook-service.yaml b/incubator/sparkoperator/templates/webhook-service.yaml
index 42c5bc62e112..2237ff6a7352 100644
--- a/incubator/sparkoperator/templates/webhook-service.yaml
+++ b/incubator/sparkoperator/templates/webhook-service.yaml
@@ -11,7 +11,7 @@ metadata:
spec:
ports:
- port: 443
- targetPort: 8080
+ targetPort: {{ .Values.webhookPort }}
name: webhook
selector:
app.kubernetes.io/name: {{ include "sparkoperator.name" . }}
diff --git a/incubator/sparkoperator/values.yaml b/incubator/sparkoperator/values.yaml
index 563c402f6b1a..71e88c0598c9 100644
--- a/incubator/sparkoperator/values.yaml
+++ b/incubator/sparkoperator/values.yaml
@@ -1,5 +1,5 @@
operatorImageName: gcr.io/spark-operator/spark-operator
-operatorVersion: v2.4.0-v1beta1-latest
+operatorVersion: v2.4.0-v1beta1-0.8.2
imagePullPolicy: IfNotPresent
rbac:
@@ -19,9 +19,17 @@ enableWebhook: false
enableMetrics: true
controllerThreads: 10
+ingressUrlFormat: ""
installCrds: true
metricsPort: 10254
metricsEndpoint: "/metrics"
metricsPrefix: ""
resyncInterval: 30
webhookPort: 8080
+
+## Node labels for pod assignment
+## Ref: https://kubernetes.io/docs/user-guide/node-selection/
+##
+nodeSelector: {}
+
+logLevel: 2
diff --git a/incubator/tensorflow-inception/Chart.yaml b/incubator/tensorflow-inception/Chart.yaml
index b07ead2ce3b9..bb0dd094b591 100755
--- a/incubator/tensorflow-inception/Chart.yaml
+++ b/incubator/tensorflow-inception/Chart.yaml
@@ -1,6 +1,7 @@
+apiVersion: v1
name: tensorflow-inception
home: https://github.com/kubernetes/charts
-version: 0.4.0
+version: 0.4.1
appVersion: 1.4.0
description: Open source software library for numerical computation using data flow graphs.
icon: https://camo.githubusercontent.com/ee91ac3c9f5ad840ebf70b54284498fe0e6ddb92/68747470733a2f2f7777772e74656e736f72666c6f772e6f72672f696d616765732f74665f6c6f676f5f7472616e73702e706e67
diff --git a/incubator/tensorflow-inception/README.md b/incubator/tensorflow-inception/README.md
index 4d425d3eaaad..44bb1df60392 100644
--- a/incubator/tensorflow-inception/README.md
+++ b/incubator/tensorflow-inception/README.md
@@ -28,7 +28,7 @@ The following table lists the configurable parameters of the TensorFlow inceptio
| Parameter | Description | Default |
| ----------------------- | ---------------------------------- | ---------------------------------------------------------- |
| `image.repository` | Container image name | `quay.io/thomasjungblut/tfs-inception` |
-| `image.tag` | Container image tag | `tfs-1.8.0-gpu` |
+| `image.tag` | Container image tag | `tfs-1.8.0-cpu` |
| `replicas` | k8s deployment replicas | `1` |
| `component` | k8s selector key | `tensorflow-inception` |
| `resources` | Set the resource to be allocated and allowed for the Pods | `{}` |
@@ -37,6 +37,8 @@ The following table lists the configurable parameters of the TensorFlow inceptio
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`.
+> **Note**: For the GPU version, use `image.tag=tfs-1.8.0-gpu`.
+
Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,
```bash
diff --git a/incubator/tensorflow-inception/values.yaml b/incubator/tensorflow-inception/values.yaml
index 620867039748..23823fa6f26a 100644
--- a/incubator/tensorflow-inception/values.yaml
+++ b/incubator/tensorflow-inception/values.yaml
@@ -10,7 +10,7 @@ component: "tensorflow-inception"
replicas: 1
image:
repository: "quay.io/thomasjungblut/tfs-inception"
- tag: "tfs-1.8.0-gpu"
+ tag: "tfs-1.8.0-cpu"
pullPolicy: "IfNotPresent"
env:
diff --git a/incubator/vault/Chart.yaml b/incubator/vault/Chart.yaml
index 75f4065aef86..661a594dac11 100644
--- a/incubator/vault/Chart.yaml
+++ b/incubator/vault/Chart.yaml
@@ -1,7 +1,7 @@
apiVersion: v1
description: A Helm chart for Vault, a tool for managing secrets
name: vault
-version: 0.14.6
+version: 0.18.7
appVersion: 1.0.1
home: https://www.vaultproject.io/
icon: https://www.vaultproject.io/assets/images/mega-nav/logo-vault-0f83e3d2.svg
diff --git a/incubator/vault/README.md b/incubator/vault/README.md
index e3bf7004710d..dbb4e3dee994 100644
--- a/incubator/vault/README.md
+++ b/incubator/vault/README.md
@@ -55,25 +55,41 @@ The following table lists the configurable parameters of the Vault chart and the
| `vault.dev` | Use Vault in dev mode | true (set to false in production) |
| `vault.extraEnv` | Extra env vars for Vault pods | `{}` |
| `vault.extraContainers` | Sidecar containers to add to the vault pod | `{}` |
+| `vault.extraInitContainers` | Init containers to be added to the vault pod | `{}` |
| `vault.extraVolumes` | Additional volumes to the controller pod | `{}` |
-| `vault.customSecrets` | Custom secrets available to Vault | `[]` |
+| `vault.extraVolumeMounts` | Additional volume mounts for the controller pod | `{}` |
+| `vault.existingConfigName` | Name of an existing ConfigMap with the Vault config | nil |
| `vault.config` | Vault configuration | No default backend |
| `replicaCount` | k8s replicas | `3` |
| `resources.limits.cpu` | Container requested CPU | `nil` |
| `resources.limits.memory` | Container requested memory | `nil` |
| `affinity` | Affinity settings | See values.yaml |
+| `service.loadBalancerIP` | Assign a static IP to the loadbalancer | `nil` |
| `service.loadBalancerSourceRanges`| IP whitelist for service type loadbalancer | `[]` |
| `service.annotations` | Annotations for service | `{}` |
+| `service.externalPort` | External port for the service | `8200` |
+| `service.port` | The API port Vault is using | `8200` |
+| `service.clusterExternalPort` | External cluster port for the service | `nil` |
+| `service.clusterPort` | The cluster port Vault is using | `8201` |
| `annotations` | Annotations for deployment | `{}` |
| `labels` | Extra labels for deployment | `{}` |
| `ingress.labels` | Labels for ingress | `{}` |
| `podAnnotations` | Annotations for pods | `{}` |
+| `podLabels` | Extra labels for pods | `{}` |
+| `serviceAccount.create` | Specifies whether a ServiceAccount should be created | `false` |
| `consulAgent.join` | If set, start a consul agent | `nil` |
| `consulAgent.repository` | Container image for consul agent | `consul` |
| `consulAgent.tag` | Container image tag for consul agent | `1.4.0` |
| `consulAgent.pullPolicy` | Container pull policy for consul agent | `IfNotPresent` |
| `consulAgent.gossipKeySecretName` | k8s secret containing gossip key | `nil` (see values.yaml for details) |
| `consulAgent.HttpPort` | HTTP port for consul agent API | `8500` |
+| `consulAgent.resources` | Container resources for consul agent | `nil` |
+| `vaultExporter.enabled` | Enable or disable vault exporter | `false` |
+| `vaultExporter.repository` | Container image for vault exporter | `grapeshot/vault_exporter` |
+| `vaultExporter.tag` | Container image tag for vault exporter | `v0.1.2` |
+| `vaultExporter.pullPolicy` | Image pull policy that should be used | `IfNotPresent` |
+| `vaultExporter.vaultAddress` | Vault address that the exporter should use | `127.0.0.1:8200` |
+| `vaultExporter.tlsCAFile` | Vault TLS CA certificate mount path | `/vault/tls/ca.crt` |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`.
@@ -101,6 +117,15 @@ are encrypted with a gossip key. You can configure a secret with the
same format as that chart and specify it in the
`consulAgent.gossipKeySecretName` parameter.
+## Optional Vault Exporter
+If you want to monitor Vault with Prometheus, you can enable the Vault exporter, which
+runs as a sidecar container in the same pod as Vault. To use the exporter, set
+`vaultExporter.enabled` to true and adjust the other variables as needed.
+
+If Vault is configured with TLS, make sure to set the CA certificate path correctly
+via the `vaultExporter.tlsCAFile` parameter.
+
## Using Vault
Once the Vault pod is ready, it can be accessed using a `kubectl
@@ -111,3 +136,29 @@ $ kubectl port-forward vault-pod 8200
$ export VAULT_ADDR=http://127.0.0.1:8200
$ vault status
```
+
+## Migrating Custom Secrets
+
+Previous versions of this chart had a configuration option `vault.customSecrets`.
+Custom secrets should now be expressed with `vault.extraVolumes` and `vault.extraVolumeMounts`. For example:
+
+```yaml
+vault:
+ customSecrets:
+ - secretName: vault-tls
+ mountPath: /vault/tls
+```
+
+Would be expressed as:
+
+```yaml
+vault:
+ extraVolumes:
+ - name: vault-tls
+ secret:
+ secretName: vault-tls
+ extraVolumeMounts:
+ - name: vault-tls
+ mountPath: /vault/tls
+ readOnly: true
+```
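The exporter described in the new "Optional Vault Exporter" section can be enabled with a values fragment along these lines (a sketch assuming the defaults from the parameter table above; the mount path must match whatever you configure via `vault.extraVolumeMounts`):

```yaml
vaultExporter:
  enabled: true
  vaultAddress: 127.0.0.1:8200
  # Only needed when the Vault listener uses TLS; the path must match
  # where the CA certificate is mounted via vault.extraVolumeMounts.
  tlsCAFile: /vault/tls/ca.crt
```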
diff --git a/incubator/vault/templates/_helpers.tpl b/incubator/vault/templates/_helpers.tpl
index fae4b0aa0e06..9c91fc03e500 100644
--- a/incubator/vault/templates/_helpers.tpl
+++ b/incubator/vault/templates/_helpers.tpl
@@ -19,3 +19,20 @@ We truncate at 63 chars because some Kubernetes name fields are limited to this
{{- end -}}
{{- end -}}
+{{/*
+Create the name of the service account to use
+*/}}
+{{- define "vault.serviceAccountName" -}}
+{{- if .Values.serviceAccount.create -}}
+ {{ default (include "vault.fullname" .) .Values.serviceAccount.name }}
+{{- else -}}
+ {{ default "default" .Values.serviceAccount.name }}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Create chart name and version as used by the chart label.
+*/}}
+{{- define "vault.chart" -}}
+{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
diff --git a/incubator/vault/templates/configmap.yaml b/incubator/vault/templates/configmap.yaml
index 8d4072ae3758..ec2c8bc4845b 100644
--- a/incubator/vault/templates/configmap.yaml
+++ b/incubator/vault/templates/configmap.yaml
@@ -1,3 +1,4 @@
+{{ if not .Values.vault.existingConfigName }}
apiVersion: v1
kind: ConfigMap
metadata:
@@ -6,7 +7,8 @@ metadata:
app: "{{ template "vault.name" . }}"
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
- chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
+ chart: "{{ template "vault.chart" . }}"
data:
config.json: |
{{ .Values.vault.config | toJson }}
+{{ end }}
diff --git a/incubator/vault/templates/deployment.yaml b/incubator/vault/templates/deployment.yaml
index 44f27516e72b..9c3aef86f23b 100644
--- a/incubator/vault/templates/deployment.yaml
+++ b/incubator/vault/templates/deployment.yaml
@@ -4,7 +4,7 @@ metadata:
name: {{ template "vault.fullname" . }}
labels:
app: {{ template "vault.name" . }}
- chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
+ chart: {{ template "vault.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
{{- if .Values.labels }}
@@ -23,9 +23,19 @@ spec:
labels:
app: {{ template "vault.name" . }}
release: {{ .Release.Name }}
+{{- if .Values.podLabels }}
+{{ toYaml .Values.podLabels | indent 8 }}
+{{- end }}
annotations:
-{{ toYaml .Values.podAnnotations | indent 8 }}
+ checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
+ {{- range $key, $value := .Values.podAnnotations }}
+ {{ $key }}: {{ $value | quote }}
+ {{- end }}
spec:
+ {{- if .Values.vault.extraInitContainers }}
+ initContainers:
+{{ tpl (toYaml .Values.vault.extraInitContainers) . | indent 6 }}
+ {{- end }}
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ default .Chart.AppVersion .Values.image.tag }}"
@@ -37,17 +47,23 @@ spec:
{{- end }}
{{- if .Values.lifecycle }}
lifecycle:
-{{ tpl .Values.lifecycle . | indent 10 }}
+{{ tpl (toYaml .Values.lifecycle) . | indent 10 }}
{{- end }}
ports:
- containerPort: {{ .Values.service.port }}
name: api
- - containerPort: 8201
+ - containerPort: {{ .Values.service.clusterPort }}
name: cluster-address
livenessProbe:
- # Alive if it is listening for clustering traffic
- tcpSocket:
+ # Alive if Vault is successfully responding to requests
+ httpGet:
+ path: /v1/sys/health?standbyok=true&
+ {{- if .Values.vault.liveness.aliveIfUninitialized -}}uninitcode=204&{{- end }}
+ {{- if .Values.vault.liveness.aliveIfSealed -}}sealedcode=204&{{- end }}
port: {{ .Values.service.port }}
+ scheme: {{ if .Values.vault.config.listener.tcp.tls_disable -}}HTTP{{- else -}}HTTPS{{- end }}
+ initialDelaySeconds: {{ .Values.vault.liveness.initialDelaySeconds }}
+ periodSeconds: {{ .Values.vault.liveness.periodSeconds }}
readinessProbe:
# Ready depends on preference
httpGet:
@@ -57,6 +73,8 @@ spec:
{{- if .Values.vault.readiness.readyIfUninitialized -}}uninitcode=204&{{- end }}
port: {{ .Values.service.port }}
scheme: {{ if .Values.vault.config.listener.tcp.tls_disable -}}HTTP{{- else -}}HTTPS{{- end }}
+ initialDelaySeconds: {{ .Values.vault.readiness.initialDelaySeconds }}
+ periodSeconds: {{ .Values.vault.readiness.periodSeconds }}
securityContext:
readOnlyRootFilesystem: true
capabilities:
@@ -79,16 +97,14 @@ spec:
mountPath: /vault/config/
- name: vault-root
mountPath: /root/
- {{- range .Values.vault.customSecrets }}
- - name: {{ .secretName | replace "." "-"}}
- mountPath: {{ .mountPath }}
+ {{- if .Values.vault.extraVolumeMounts }}
+{{ toYaml .Values.vault.extraVolumeMounts | indent 8 }}
{{- end }}
{{- if .Values.vault.extraContainers }}
-{{ toYaml .Values.vault.extraContainers | indent 6}}
+{{ tpl (toYaml .Values.vault.extraContainers) . | indent 6}}
{{- end }}
resources:
{{ toYaml .Values.resources | indent 10 }}
- {{- if .Values.affinity }}
{{- if .Values.consulAgent.join }}
- name: {{ .Chart.Name }}-consul-agent
image: "{{ .Values.consulAgent.repository }}:{{ .Values.consulAgent.tag }}"
@@ -113,26 +129,60 @@ spec:
GOSSIP_KEY="-config-file /etc/consul/encrypt.json"
fi
{{- end }}
+ {{- if .Values.vault.config.storage.consul.token }}
+ echo "{\"acl\":{\"tokens\":{\"agent\":\"{{ .Values.vault.config.storage.consul.token }}\"}}}" > /etc/consul/agent-token.json
+ AGENT_TOKEN="-config-file /etc/consul/agent-token.json"
+ {{- end }}
exec /bin/consul agent \
$GOSSIP_KEY \
+ $AGENT_TOKEN \
-join={{- .Values.consulAgent.join }} \
-data-dir=/etc/consul
+ resources:
+{{ toYaml .Values.consulAgent.resources | indent 10 }}
+ {{- end }}
+ {{- if .Values.vaultExporter.enabled }}
+ - name: {{ .Chart.Name }}-exporter
+ image: "{{ .Values.vaultExporter.repository }}:{{ .Values.vaultExporter.tag }}"
+ imagePullPolicy: {{ .Values.vaultExporter.pullPolicy }}
+ securityContext:
+ readOnlyRootFilesystem: true
+ env:
+ - name: VAULT_ADDR
+ {{- if .Values.vault.config.listener.tcp.tls_disable }}
+ value: "http://{{ .Values.vaultExporter.vaultAddress }}"
+ {{- else }}
+ value: "https://{{ .Values.vaultExporter.vaultAddress }}"
+ {{- end }}
+ {{- if .Values.vaultExporter.tlsCAFile }}
+ - name: VAULT_CACERT
+ value: {{ .Values.vaultExporter.tlsCAFile | quote }}
+ {{- end }}
+ {{- range .Values.vault.customSecrets }}
+ volumeMounts:
+ - name: {{ .secretName | replace "." "-"}}
+ mountPath: {{ .mountPath }}
+ {{- end }}
{{- end }}
+ {{- if .Values.affinity }}
affinity:
-{{ tpl .Values.affinity . | indent 8 }}
+{{ tpl (toYaml .Values.affinity) . | indent 8 }}
+ {{- end }}
+ {{- if .Values.nodeSelector }}
+ nodeSelector:
+{{ toYaml .Values.nodeSelector | indent 8 }}
+ {{- end }}
+ {{- if .Values.tolerations }}
+ tolerations:
+{{ toYaml .Values.tolerations | indent 8 }}
{{- end }}
volumes:
- name: vault-config
configMap:
- name: "{{ template "vault.fullname" . }}-config"
+ name: {{ if .Values.vault.existingConfigName }}{{ .Values.vault.existingConfigName }}{{- else }}"{{ template "vault.fullname" . }}-config"{{- end }}
- name: vault-root
emptyDir: {}
- {{- range .Values.vault.customSecrets }}
- - name: {{ .secretName | replace "." "-"}}
- secret:
- secretName: {{ .secretName }}
- {{- end }}
{{- if .Values.vault.extraVolumes }}
{{ toYaml .Values.vault.extraVolumes | indent 8}}
{{- end }}
diff --git a/incubator/vault/templates/ingress.yaml b/incubator/vault/templates/ingress.yaml
index d99bb1e1efd2..bf569f5f4ef0 100644
--- a/incubator/vault/templates/ingress.yaml
+++ b/incubator/vault/templates/ingress.yaml
@@ -1,13 +1,13 @@
{{- if .Values.ingress.enabled -}}
{{- $serviceName := include "vault.fullname" . -}}
-{{- $servicePort := .Values.service.port -}}
+{{- $servicePort := .Values.service.externalPort -}}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: {{ template "vault.fullname" . }}
labels:
app: {{ template "vault.name" . }}
- chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
+ chart: "{{ template "vault.chart" . }}"
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
{{- if .Values.ingress.labels }}
diff --git a/incubator/vault/templates/service.yaml b/incubator/vault/templates/service.yaml
index a29a6f5a2c4b..728756a84a96 100644
--- a/incubator/vault/templates/service.yaml
+++ b/incubator/vault/templates/service.yaml
@@ -4,7 +4,7 @@ metadata:
name: {{ template "vault.fullname" . }}
labels:
app: {{ template "vault.name" . }}
- chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
+ chart: {{ template "vault.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
{{- if .Values.service.annotations }}
@@ -17,6 +17,9 @@ spec:
clusterIP: {{ .Values.service.clusterIP }}
{{- end }}
{{- if eq .Values.service.type "LoadBalancer" }}
+ {{- if .Values.service.loadBalancerIP }}
+ loadBalancerIP: {{ .Values.service.loadBalancerIP }}
+ {{- end }}
loadBalancerSourceRanges:
{{- range .Values.service.loadBalancerSourceRanges }}
- {{ . }}
@@ -27,6 +30,12 @@ spec:
protocol: TCP
targetPort: {{ .Values.service.port }}
name: api
+ {{- if .Values.service.clusterExternalPort }}
+ - port: {{ .Values.service.clusterExternalPort }}
+ protocol: TCP
+ targetPort: {{ .Values.service.clusterPort }}
+ name: cluster
+ {{- end }}
selector:
app: {{ template "vault.name" . }}
release: {{ .Release.Name }}
diff --git a/incubator/vault/templates/serviceaccount.yaml b/incubator/vault/templates/serviceaccount.yaml
new file mode 100644
index 000000000000..e04418fda236
--- /dev/null
+++ b/incubator/vault/templates/serviceaccount.yaml
@@ -0,0 +1,11 @@
+{{- if .Values.serviceAccount.create -}}
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: {{ template "vault.serviceAccountName" . }}
+ labels:
+ app: {{ template "vault.name" . }}
+ chart: "{{ template "vault.chart" . }}"
+ release: "{{ .Release.Name }}"
+ heritage: "{{ .Release.Service }}"
+{{- end -}}
diff --git a/incubator/vault/values.yaml b/incubator/vault/values.yaml
index 9e537bea9aa8..d36db4ac652c 100644
--- a/incubator/vault/values.yaml
+++ b/incubator/vault/values.yaml
@@ -9,6 +9,13 @@ image:
tag:
pullPolicy: IfNotPresent
+vaultExporter:
+ enabled: false
+ repository: grapeshot/vault_exporter
+ tag: v0.1.2
+ pullPolicy: IfNotPresent
+ vaultAddress: 127.0.0.1:8200
+ # tlsCAFile: /vault/tls/ca.crt
consulAgent:
repository: consul
tag: 1.4.0
@@ -24,15 +31,30 @@ consulAgent:
#
# Optionally override the agent's http port
HttpPort: 8500
+ resources: {}
+ # We usually recommend not to specify default resources and to leave this as a conscious
+ # choice for the user. This also increases chances charts run on environments with little
+ # resources, such as Minikube. If you do want to specify resources, uncomment the following
+ # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
+ # limits:
+ # cpu: 100m
+ # memory: 128Mi
+ # requests:
+ # cpu: 100m
+ # memory: 128Mi
service:
name: vault
type: ClusterIP
# type: LoadBalancer
+ # Assign a static LB IP
+ # loadBalancerIP: 203.0.113.32
loadBalancerSourceRanges: []
# - 10.0.0.0/8
# - 130.211.204.2/32
externalPort: 8200
port: 8200
+ # clusterExternalPort: 8201
+ clusterPort: 8201
# clusterIP: None
annotations: {}
# cloud.google.com/load-balancer-type: "Internal"
@@ -67,7 +89,14 @@ resources: {}
# requests:
# cpu: 100m
# memory: 128Mi
-affinity: |
+
+## Node selector
+## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
+nodeSelector: {}
+
+## Affinity
+## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
+affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
@@ -75,8 +104,12 @@ affinity: |
topologyKey: kubernetes.io/hostname
labelSelector:
matchLabels:
- app: {{ template "vault.fullname" . }}
- release: {{ .Release.Name }}
+ app: '{{ template "vault.name" . }}'
+ release: '{{ .Release.Name }}'
+
+## Tolerations
+## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
+tolerations: []
## Deployment annotations
annotations: {}
@@ -84,7 +117,12 @@ annotations: {}
## Extra Deployment labels
labels: {}
+## Pod annotations
podAnnotations: {}
+
+## Pod labels
+podLabels: {}
+
## Read more about kube2iam to provide access to s3 https://github.com/jtblin/kube2iam
# iam.amazonaws.com/role: role-arn
@@ -92,38 +130,56 @@ podAnnotations: {}
## if automation saves your unseal keys to a k8s secret on deploy
## writing a script to do this would be trivial and solves the
## issues of scaling up if deployed in HA.
-# lifecycle: |
+# lifecycle:
# postStart:
# exec:
# command: ["./unseal -s my-unseal-keys"]
+serviceAccount:
+ ## Specifies whether a ServiceAccount should be created
+ ##
+ create: false
+ ## The name of the ServiceAccount to use.
+ ## If not set and create is true, a name is generated using the fullname template
+ name:
+
vault:
# Only used to enable dev mode. When in dev mode, the rest of this config
# section below is not used to configure Vault. See
# https://www.vaultproject.io/intro/getting-started/dev-server.html for more
# information.
dev: true
- # Allows the mounting of various custom secrets th enable production vault
- # configurations. The comments show an example usage for mounting a TLS
- # secret. The two fields required are a secretName indicating the name of
- # the Kubernetes secret (created outside of this chart), and the mountPath
- # at which it should be mounted in the Vault container.
- customSecrets: []
- # - secretName: vault-tls
- # mountPath: /vault/tls
- #
# Configure additional environment variables for the Vault containers
- extraEnv: {}
+ extraEnv: []
# - name: VAULT_API_ADDR
# value: "https://vault.internal.domain.name:8200"
- extraContainers: {}
+ extraContainers: []
## Additional containers to be added to the Vault pod
+ # extraContainers:
# - name: vault-sidecar
# image: vault-sidecar:latest
# volumeMounts:
# - name: some-mount
# mountPath: /some/path
- extraVolumes: {}
+ # Extra volumes to mount to the Vault pod. The comments show an example usage
+ # for mounting a TLS secret. In this example, the volume name must match
+ # the volumeMount name. The two other fields required are the name of the
+ # Kubernetes secret (created outside of this chart), and the mountPath
+ # at which it should be mounted in the Vault container.
+ extraVolumes: []
+ # - name: vault-tls
+ # secret:
+ # secretName: vault-tls-secret
+ extraVolumeMounts: []
+ # - name: vault-tls
+ # mountPath: /vault/tls
+ # readOnly: true
+ extraInitContainers: []
+ ## Init containers to be added
+ # extraInitContainers:
+ # - name: do-something
+ # image: busybox
+ # command: ['do', 'something']
# Log level
# https://www.vaultproject.io/docs/commands/server.html#log-level
logLevel: "info"
@@ -131,10 +187,19 @@ vault:
# - name: extra-volume
# secret:
# secretName: some-secret
+ liveness:
+ aliveIfUninitialized: true
+ aliveIfSealed: true
+ initialDelaySeconds: 30
+ periodSeconds: 10
readiness:
readyIfSealed: false
readyIfStandby: true
readyIfUninitialized: true
+ initialDelaySeconds: 10
+ periodSeconds: 10
+ ## Use an existing config in a named ConfigMap
+ # existingConfigName: vault-cm
config:
# A YAML representation of a final vault config.json file.
# See https://www.vaultproject.io/docs/configuration/ for more information.
diff --git a/incubator/zookeeper/Chart.yaml b/incubator/zookeeper/Chart.yaml
index fee4b2ae8acd..3a1ad23506a4 100644
--- a/incubator/zookeeper/Chart.yaml
+++ b/incubator/zookeeper/Chart.yaml
@@ -1,6 +1,7 @@
+apiVersion: v1
name: zookeeper
home: https://zookeeper.apache.org/
-version: 1.2.2
+version: 1.3.1
appVersion: 3.4.10
description: Centralized service for maintaining configuration information, naming,
providing distributed synchronization, and providing group services.
diff --git a/incubator/zookeeper/templates/statefulset.yaml b/incubator/zookeeper/templates/statefulset.yaml
index d5098d218c86..e70f7bcbf912 100644
--- a/incubator/zookeeper/templates/statefulset.yaml
+++ b/incubator/zookeeper/templates/statefulset.yaml
@@ -52,10 +52,11 @@ spec:
- name: zookeeper
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
- command:
- - /bin/bash
- - -xec
- - zkGenConfig.sh && exec zkServer.sh start-foreground
+ {{- with .Values.command }}
+ command: {{ range . }}
+ - {{ . | quote }}
+ {{- end }}
+ {{- end }}
ports:
{{- range $key, $port := .Values.ports }}
- name: {{ $key }}
diff --git a/incubator/zookeeper/values.yaml b/incubator/zookeeper/values.yaml
index 5ed16c0a4062..439236ba23f2 100644
--- a/incubator/zookeeper/values.yaml
+++ b/incubator/zookeeper/values.yaml
@@ -113,6 +113,12 @@ securityContext:
fsGroup: 1000
runAsUser: 1000
+## Override the container start command. Useful if you want to use an alternate image.
+command:
+ - /bin/bash
+ - -xec
+ - zkGenConfig.sh && exec zkServer.sh start-foreground
+
## Useful if using any custom authorizer.
## Pass any secrets to the kafka pods. Each secret will be passed as an
## environment variable by default. The secret can also be mounted to a
diff --git a/stable/aerospike/Chart.yaml b/stable/aerospike/Chart.yaml
index 96a8d88d055d..1668b9ec6b1a 100644
--- a/stable/aerospike/Chart.yaml
+++ b/stable/aerospike/Chart.yaml
@@ -1,14 +1,17 @@
-appVersion: v3.14.1.2
+apiVersion: v1
+appVersion: v4.5.0.5
description: A Helm chart for Aerospike in Kubernetes
name: aerospike
keywords:
- aerospike
- big-data
home: http://aerospike.com
-version: 0.2.1
+version: 0.2.8
icon: https://s3-us-west-1.amazonaws.com/aerospike-fd/wp-content/uploads/2016/06/Aerospike_square_logo.png
sources:
- https://github.com/aerospike/aerospike-server
maintainers:
- name: kavehmz
email: kavehmz@gmail.com
+- name: okgolove
+ email: okgolove@markeloff.net
diff --git a/stable/aerospike/OWNERS b/stable/aerospike/OWNERS
new file mode 100644
index 000000000000..bba777cb3f20
--- /dev/null
+++ b/stable/aerospike/OWNERS
@@ -0,0 +1,4 @@
+approvers:
+- okgolove
+reviewers:
+- okgolove
diff --git a/stable/aerospike/README.md b/stable/aerospike/README.md
index 112448ea552a..23c1885c36d5 100644
--- a/stable/aerospike/README.md
+++ b/stable/aerospike/README.md
@@ -2,9 +2,9 @@
This is an implementation of Aerospike StatefulSet found here:
- * https://github.com/aerospike/aerospike-kubernetes
+* <https://github.com/aerospike/aerospike-kubernetes>
-## Pre Requisites:
+## Pre Requisites
* Kubernetes 1.7+ with beta APIs enabled and support for statefulsets
@@ -14,11 +14,11 @@ This is an implementation of Aerospike StatefulSet found here:
## StatefulSet Details
-* https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
+* <https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/>
## StatefulSet Caveats
-* https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#limitations
+* <https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#limitations>
## Chart Details
@@ -30,9 +30,9 @@ This chart will do the following:
To install the chart with the release name `my-aerospike` using a dedicated namespace (recommended):
-```
-$ helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
-$ helm install --name my-aerospike --namespace aerospike stable/aerospike
+```sh
+helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
+helm install --name my-aerospike --namespace aerospike stable/aerospike
```
The chart can be customized using the following configurable parameters:
@@ -40,28 +40,29 @@ The chart can be customized using the following configurable parameters:
| Parameter | Description | Default |
| ------------------------------- | ----------------------------------------------------------------| -----------------------------|
| `image.repository` | Aerospike Container image name | `aerospike/aerospike-server` |
-| `image.tag` | Aerospike Container image tag | `3.14.1.2` |
+| `image.tag` | Aerospike Container image tag | `4.5.0.5` |
| `image.pullPolicy` | Aerospike Container pull policy | `Always` |
| `replicaCount` | Aerospike Brokers | `1` |
| `command` | Custom command (Docker Entrypoint) | `[]` |
| `args` | Custom args (Docker Cmd) | `[]` |
+| `tolerations` | List of node taints to tolerate | `[]` |
| `persistentVolume` | Config of persistent volumes for storage-engine | `{}` |
| `confFile` | Config filename. This file should be included in the chart path | `aerospike.conf` |
| `resources` | Resource requests and limits | `{}` |
| `nodeSelector` | Labels for pod assignment | `{}` |
-| `terminationGracePeriodSeconds` | Wait time before forcefully terminating container | `30` |
+| `terminationGracePeriodSeconds` | Wait time before forcefully terminating container | `30` |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`
Alternatively, a YAML file that specifies the values for the parameters can be provided like this:
-```bash
-$ helm install --name my-aerospike -f values.yaml stable/aerospike
+```sh
+helm install --name my-aerospike -f values.yaml stable/aerospike
```
### Conf files for Aerospike
-There is one conf file added to each Aerospike release. This conf file can be replaced with a custom file and updating the `confFile` value.
+There is one conf file added to each Aerospike release. This conf file can be replaced with a custom file by updating the `confFile` value.
If you modify the `aerospike.conf` (and use more than one replica), you should add the `#REPLACE_THIS_LINE_WITH_MESH_CONFIG` comment to the config file (see the default conf file). This will update your mesh to connect each replica.
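The substitution performed by the configmap template (`replace "#REPLACE_THIS_LINE_WITH_MESH_CONFIG" $mesh`) can be sketched in Python; the `conf_file` excerpt and `mesh` line below are hypothetical, for illustration only:

```python
# Sketch (not the chart's actual Go-template code) of what the configmap
# template's `replace` does: every occurrence of the marker in confFile
# is swapped for the generated mesh stanza.
MARKER = "#REPLACE_THIS_LINE_WITH_MESH_CONFIG"

# Hypothetical confFile excerpt and generated mesh line.
conf_file = "heartbeat {\n    " + MARKER + "\n    mode mesh\n}"
mesh = "mesh-seed-address-port my-release-aerospike-0 3002"

rendered = conf_file.replace(MARKER, mesh)
print(rendered)
```

If the comment is missing from a custom `confFile`, no substitution happens and the replicas never learn each other's mesh addresses.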
diff --git a/stable/aerospike/templates/_helpers.tpl b/stable/aerospike/templates/_helpers.tpl
index e2cfaaa6424f..2e336b438193 100644
--- a/stable/aerospike/templates/_helpers.tpl
+++ b/stable/aerospike/templates/_helpers.tpl
@@ -9,11 +9,20 @@ Expand the name of the chart.
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
+If release name contains chart name it will be used as a full name.
*/}}
{{- define "aerospike.fullname" -}}
+{{- if .Values.fullnameOverride -}}
+{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
+{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
+{{- if contains $name .Release.Name -}}
+{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
+{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
+{{- end -}}
+{{- end -}}
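The naming precedence the helper above implements can be sketched in Python (a sketch of the template logic, not chart code):

```python
def _trunc63(s: str) -> str:
    # Truncate to 63 chars (DNS name limit) and drop a trailing "-",
    # mirroring `trunc 63 | trimSuffix "-"` in the template.
    s = s[:63]
    return s[:-1] if s.endswith("-") else s

def fullname(release, chart, name_override=None, fullname_override=None):
    """Sketch of the aerospike.fullname precedence: fullnameOverride wins,
    then the bare release name when it already contains the chart name,
    else "<release>-<chart>"."""
    if fullname_override:
        return _trunc63(fullname_override)
    name = name_override or chart
    if name in release:  # release already contains the chart name
        return _trunc63(release)
    return _trunc63(f"{release}-{name}")

print(fullname("my-aerospike", "aerospike"))  # my-aerospike
print(fullname("prod", "aerospike"))          # prod-aerospike
```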
{{/*
Create aerospike mesh setup
diff --git a/stable/aerospike/templates/configmap.yaml b/stable/aerospike/templates/configmap.yaml
index 91b2cd8a09c8..371e9e3007cb 100644
--- a/stable/aerospike/templates/configmap.yaml
+++ b/stable/aerospike/templates/configmap.yaml
@@ -11,4 +11,4 @@ data:
aerospike.conf: |
# aerospike configuration
{{- $mesh := include "aerospike.mesh" . }}
- {{ .Values.confFile |replace "#REPLACE_THIS_LINE_WITH_MESH_CONFIG" $mesh | indent 4}}
\ No newline at end of file
+ {{ .Values.confFile |replace "#REPLACE_THIS_LINE_WITH_MESH_CONFIG" $mesh | indent 4}}
diff --git a/stable/aerospike/templates/mesh-service.yaml b/stable/aerospike/templates/mesh-service.yaml
index 4decc44063f9..20b043ee54ae 100644
--- a/stable/aerospike/templates/mesh-service.yaml
+++ b/stable/aerospike/templates/mesh-service.yaml
@@ -8,11 +8,14 @@ metadata:
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
annotations:
+ # deprecation in 1.10, supported until at least 1.13, breaks peer-finder/kube-dns if not used
service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
{{- range $key, $value := .Values.meshService.annotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
spec:
+ # deprecates service.alpha.kubernetes.io/tolerate-unready-endpoints as of 1.10? see: kubernetes/kubernetes#49239 Fixed in 1.11 as of #63742
+ publishNotReadyAddresses: true
clusterIP: None
type: ClusterIP
ports:
diff --git a/stable/aerospike/templates/service.yaml b/stable/aerospike/templates/service.yaml
index 4486cc10d4c7..5a056684a3ad 100644
--- a/stable/aerospike/templates/service.yaml
+++ b/stable/aerospike/templates/service.yaml
@@ -8,21 +8,24 @@ metadata:
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
annotations:
+ # deprecation in 1.10, supported until at least 1.13, breaks peer-finder/kube-dns if not used
service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
{{- range $key, $value := .Values.service.annotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
spec:
-{{ if .Values.service.clusterIP }}
+ # deprecates service.alpha.kubernetes.io/tolerate-unready-endpoints as of 1.10? see: kubernetes/kubernetes#49239 Fixed in 1.11 as of #63742
+ publishNotReadyAddresses: true
+ {{ if .Values.service.clusterIP }}
clusterIP: {{ .Values.service.clusterIP | quote }}
-{{ end }}
+ {{ end }}
type: {{ .Values.service.type }}
{{ if eq .Values.service.type "LoadBalancer" -}} {{ if .Values.service.loadBalancerIP -}}
loadBalancerIP: {{ .Values.service.loadBalancerIP }}
{{ end -}}
{{- if .Values.service.loadBalancerSourceRanges }}
loadBalancerSourceRanges:
- {{ toYaml .Values.service.loadBalancerSourceRanges | indent 2}}
+ {{ toYaml .Values.service.loadBalancerSourceRanges | indent 2}}
{{ end -}}
{{- end -}}
ports:
diff --git a/stable/aerospike/templates/statefulset.yaml b/stable/aerospike/templates/statefulset.yaml
index 78a92b78eca4..4ad94a308a82 100644
--- a/stable/aerospike/templates/statefulset.yaml
+++ b/stable/aerospike/templates/statefulset.yaml
@@ -55,6 +55,10 @@ spec:
{{- if .Values.affinity }}
affinity:
{{ toYaml .Values.affinity | indent 8 }}
+ {{- end }}
+ {{- with .Values.tolerations }}
+ tolerations:
+{{ toYaml . | indent 8 }}
{{- end }}
volumes:
- name: config-volume
diff --git a/stable/aerospike/values.yaml b/stable/aerospike/values.yaml
index c868f0addb6a..da56469bf7f3 100644
--- a/stable/aerospike/values.yaml
+++ b/stable/aerospike/values.yaml
@@ -4,7 +4,7 @@ replicaCount: 1
nodeSelector: {}
image:
repository: aerospike/aerospike-server
- tag: 3.14.1.2
+ tag: 4.5.0.5
pullPolicy: IfNotPresent
# pass custom command. This is equivalent of Entrypoint in docker
@@ -40,6 +40,8 @@ service:
meshService:
annotations: {}
+tolerations: []
+
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
@@ -57,6 +59,7 @@ confFile: |-
service {
user root
group root
+ paxos-protocol v5
paxos-single-replica-limit 1
pidfile /var/run/aerospike/asd.pid
service-threads 4
@@ -64,36 +67,39 @@ confFile: |-
transaction-threads-per-queue 4
proto-fd-max 15000
}
+
logging {
file /var/log/aerospike/aerospike.log {
- context any info
+ context any info
}
console {
- context any info
+ context any info
}
}
+
network {
service {
- address any
- port 3000
+ address any
+ port 3000
}
- heartbeat {
- address any
- interval 150
- #REPLACE_THIS_LINE_WITH_MESH_CONFIG
- mode mesh
- port 3002
- timeout 20
- protocol v3
+ heartbeat {
+ address any
+ interval 150
+ #REPLACE_THIS_LINE_WITH_MESH_CONFIG
+ mode mesh
+ port 3002
+ timeout 20
+ protocol v3
}
+
fabric {
- port 3001
+ port 3001
}
info {
- port 3003
+ port 3003
}
}
@@ -102,7 +108,7 @@ confFile: |-
memory-size 1G
default-ttl 5d
storage-engine device {
- file /opt/aerospike/data/test.dat
- filesize 4G
+ file /opt/aerospike/data/test.dat
+ filesize 4G
}
}
diff --git a/stable/airflow/Chart.yaml b/stable/airflow/Chart.yaml
index 64acd2fdec92..4279b2bdccc5 100644
--- a/stable/airflow/Chart.yaml
+++ b/stable/airflow/Chart.yaml
@@ -1,7 +1,8 @@
+apiVersion: v1
description: Airflow is a platform to programmatically author, schedule and monitor workflows
name: airflow
-version: 0.15.0
-appVersion: 1.10.0
+version: 2.8.2
+appVersion: 1.10.2
icon: https://airflow.apache.org/_images/pin_large.png
home: https://airflow.apache.org/
maintainers:
diff --git a/stable/airflow/README.md b/stable/airflow/README.md
index 3d8fbc71448f..0b09603c07d7 100644
--- a/stable/airflow/README.md
+++ b/stable/airflow/README.md
@@ -82,14 +82,14 @@ You can also add generic environment variables such as proxy or private pypi:
```yaml
airflow:
config:
- AIRFLOW__CORE__EXPOSE_CONFIG: True
+ AIRFLOW__WEBSERVER__EXPOSE_CONFIG: True
PIP_INDEX_URL: http://pypi.mycompany.com/
PIP_TRUSTED_HOST: pypi.mycompany.com
HTTP_PROXY: http://proxy.mycompany.com:1234
HTTPS_PROXY: http://proxy.mycompany.com:1234
```
-If you are using a private image for your dags (see [Embedded Dags](#embedded-dags))
+If you are using a private image for your dags (see [Embedded Dags](#embedded-dags))
or for use with the KubernetesPodOperator (available in version 1.10.0), then add
an image pull secret to the airflow config:
```yaml
@@ -115,6 +115,31 @@ airflow:
Note: As connections may include sensitive data, the resulting script is stored encrypted in a Kubernetes secret and mounted into the Airflow scheduler container. It is probably wise not to put connection data in the default `values.yaml`; instead create an encrypted `my-secret-values.yaml`. This way it can be decrypted before the installation and passed to helm with `-f`.
+#### Airflow variables
+
+Variables are a generic way to store and retrieve arbitrary content or settings as a simple key value store within Airflow.
+These variables will be automatically imported by the scheduler when it starts up.
+
+Example:
+```yaml
+airflow:
+ variables: '{ "environment": "dev" }'
+```
+
+#### Airflow pools
+
+Some systems can get overwhelmed when too many processes hit them at the same time.
+Airflow pools can be used to limit the execution parallelism on arbitrary sets of tasks. For more info see the [airflow
+documentation](https://airflow.apache.org/concepts.html#pools).
+The ability to import pools was only added in Airflow 1.10.2.
+These pools will be automatically imported by the scheduler when it starts up.
+
+Example:
+```yaml
+airflow:
+ pools: '{ "example": { "description": "This is an example of a pool", "slots": 2 } }'
+```
+
### Worker Statefulset
Celery workers use a StatefulSet.
@@ -159,18 +184,41 @@ $ kubectl create secret generic redshift-user --from-file=redshift-user=~/secret
```
Where `redshift-user.txt` contains the user secret as a single text string.
-### Use precreated secret for postgres and redis
+### Use precreated secret for airflow secrets or environment variables
-You can use a precreated secret for the connection credentials to both postgresql and redis. To do
+You can use a precreated secret for the connection credentials, or general environment variables. To do
so specify in values.yaml `existingAirflowSecret`, where the value is the name of the secret which has
-postgresUser, postgresPassword, and redisPassword defined. If not specified, it will fall back to using
+`postgresUser`, `postgresPassword`, `redisPassword`, etc. defined. If not specified, it will fall back to using
`secrets.yaml` to store the connection credentials by default.
+Map each secret to a specific environment variable in your values.yaml, where `envVar` is the Airflow environment
+variable to populate and `secretKey` is the key that holds the secret value in your Kubernetes secret:
+```yaml
+existingAirflowSecret: my-airflow-secrets
+airflow:
+ secretsMapping:
+ - envVar: AIRFLOW__LDAP__BIND_PASSWORD
+ secretKey: ldapBindPassword
+
+ - envVar: POSTGRES_USER
+ secretKey: airflowPostgresUser
+
+ - envVar: POSTGRES_PASSWORD
+ secretKey: airflowPostgresPassword
+
+ - envVar: REDIS_PASSWORD
+ secretKey: airflowRedisPassword
+```
+
### Local binaries
Please note a folder `~/.local/bin` will be automatically created and added to the PATH so that
Bash operators can use command line tools installed by `pip install --user` for instance.
+## Installing dependencies
+
+Add a `requirements.txt` file at the root of your DAG project (the `dags.path` entry in `values.yaml`) and the listed dependencies will be installed automatically. This works for both the shared persistent volume and init-container deployment strategies (see below).
+
## DAGs Deployment
Several options are provided for synchronizing your Airflow DAGs.
@@ -198,9 +246,6 @@ To share a PV with multiple Pods, the PV needs to have accessMode 'ReadOnlyMany'
If you set `dags.init_container.enabled=true`, the pods will try upon startup to fetch the
git repository defined by `dags.git_repo`, on branch `dags.git_branch` as DAG folder.
-You can also add a `requirements.txt` file at the root of your DAG project to have other
-Python dependencies installed.
-
This is the easiest way of deploying your DAGs to Airflow.
If you are using a private Git repo, you can set `dags.gitSecret` to the name of a secret you created containing private keys and a `known_hosts` file.
@@ -236,6 +281,13 @@ This is controlled by the `logsPersistence.enabled` setting.
Refer to the `Mount a Shared Persistent Volume` section above for details on using persistent volumes.
+## Service monitor
+
+The service monitor is a resource introduced by the [CoreOS prometheus operator](https://github.com/coreos/prometheus-operator).
+To expose metrics to Prometheus you need to install a plugin; this can be added to the Docker image. A good one is https://github.com/epoch8/airflow-exporter,
+which exposes DAG- and task-based metrics from Airflow.
+For service monitor configuration see the generic [Helm chart Configuration](#helm-chart-configuration).
+
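As a sketch, enabling the service monitor from a values file might look like the following, using the defaults listed in the configuration table below (the selector label is an assumption matching that table's default):

```yaml
serviceMonitor:
  enabled: true
  interval: 30s
  path: /admin/metrics
  selector:
    prometheus: kube-prometheus
```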
## Helm chart Configuration
The following table lists the configurable parameters of the Airflow chart and their default values.
@@ -247,21 +299,35 @@ The following table lists the configurable parameters of the Airflow chart and t
| `airflow.executor` | the executor to run | `Celery` |
| `airflow.initRetryLoop` | max number of retries during container init | |
| `airflow.image.repository` | Airflow docker image | `puckel/docker-airflow` |
-| `airflow.image.tag` | Airflow docker tag | `1.10.0-4` |
+| `airflow.image.tag` | Airflow docker tag | `1.10.2` |
| `airflow.image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `airflow.image.pullSecret` | Image pull secret | |
| `airflow.schedulerNumRuns`             | -1 to loop indefinitely, 1 to restart after each exec   |                           |
| `airflow.webReplicas` | how many replicas for web server | `1` |
| `airflow.config` | custom airflow configuration env variables | `{}` |
| `airflow.podDisruptionBudget` | control pod disruption budget | `{'maxUnavailable': 1}` |
+| `airflow.secretsMapping` | override any environment variable with a secret | |
+| `airflow.extraConfigmapMounts` | Additional configMap volume mounts on the airflow pods. | `[]` |
+| `airflow.podAnnotations` | annotations for scheduler, worker and web pods | `{}` |
+| `airflow.extraContainers` | additional containers to run in the scheduler, worker & web pods | `[]` |
+| `airflow.extraVolumeMounts` | additional volumeMounts to the main container in scheduler, worker & web pods | `[]`|
+| `airflow.extraVolumes` | additional volumes for the scheduler, worker & web pods | `[]` |
+| `flower.resources` | custom resource configuration for flower pod | `{}` |
+| `web.resources` | custom resource configuration for web pod | `{}` |
+| `web.initialStartupDelay`              | amount of time the webserver pod should sleep before initializing the webserver | `60` |
+| `web.initialDelaySeconds`              | initial delay on the liveness probe before checking if the webserver is available | `360` |
+| `scheduler.resources` | custom resource configuration for scheduler pod | `{}` |
| `workers.enabled` | enable workers | `true` |
| `workers.replicas` | number of workers pods to launch | `1` |
| `workers.resources` | custom resource configuration for worker pod | `{}` |
| `workers.celery.instances` | number of parallel celery tasks per worker | `1` |
-| `workers.pod.annotations` | annotations for the worker pods | `{}` |
+| `workers.podAnnotations` | annotations for the worker pods | `{}` |
| `workers.secretsDir` | directory in which to mount secrets on worker nodes | /var/airflow/secrets |
| `workers.secrets` | secrets to mount as volumes on worker nodes | [] |
| `existingAirflowSecret` | secret to use for postgres and redis connection | |
+| `nodeSelector` | Node labels for pod assignment | `{}` |
+| `affinity` | Affinity labels for pod assignment | `{}` |
+| `tolerations` | Toleration labels for pod assignment | `[]` |
| `ingress.enabled` | enable ingress | `false` |
| `ingress.web.host` | hostname for the webserver ui | "" |
| `ingress.web.path`                       | path of the webserver ui (read `values.yaml`)           | ``                        |
@@ -271,7 +337,7 @@ The following table lists the configurable parameters of the Airflow chart and t
| `ingress.flower.host` | hostname for the flower ui | "" |
| `ingress.flower.path` | path of the flower ui (read `values.yaml`) | `` |
| `ingress.flower.livenessPath` | path to the liveness probe (read `values.yaml`) | `/` |
-| `ingress.flower.annotations` | annotations for the web ui ingress | `{}` |
+| `ingress.flower.annotations` | annotations for the flower ui ingress | `{}` |
| `ingress.flower.tls.enabled` | enables TLS termination at the ingress | `false` |
| `ingress.flower.tls.secretName` | name of the secret containing the TLS certificate & key | `` |
| `persistence.enabled` | enable persistence storage for DAGs | `false` |
@@ -307,8 +373,22 @@ The following table lists the configurable parameters of the Airflow chart and t
| `postgresql.persistance.storageClass` | Persistent storage class | (undefined) |
| `postgresql.persistance.accessMode` | Access mode | `ReadWriteOnce` |
| `redis.enabled` | Create a Redis cluster | `true` |
+| `redis.redisHost` | Redis Hostname | (undefined) |
| `redis.password` | Redis password | `airflow` |
| `redis.master.persistence.enabled` | Enable Redis PVC | `false` |
| `redis.cluster.enabled` | enable master-slave cluster | `false` |
+| `serviceMonitor.enabled` | enable service monitor | `false` |
+| `serviceMonitor.interval` | Interval at which metrics should be scraped | `30s` |
+| `serviceMonitor.path` | The path at which the metrics should be scraped | `/admin/metrics` |
+| `serviceMonitor.selector` | label Selector for Prometheus to find ServiceMonitors | `prometheus: kube-prometheus` |
+| `prometheusRule.enabled` | enable prometheus rule | `false` |
+| `prometheusRule.groups` | define alerting rules | `{}` |
+| `prometheusRule.additionalLabels` | add additional labels to the prometheus rule | `{}` |
+
Full and up-to-date documentation can be found in the comments of the `values.yaml` file.
+
+## Upgrading
+### To 2.0.0
+The parameter `workers.pod.annotations` has been renamed to `workers.podAnnotations`. If using a
+custom values file, rename this parameter.
diff --git a/stable/airflow/examples/minikube-values.yaml b/stable/airflow/examples/minikube-values.yaml
index d06fd85b0eb6..8911bbec7839 100644
--- a/stable/airflow/examples/minikube-values.yaml
+++ b/stable/airflow/examples/minikube-values.yaml
@@ -1,7 +1,7 @@
airflow:
image:
repository: puckel/docker-airflow
- tag: 1.10.0-4
+ tag: 1.10.2
pullPolicy: IfNotPresent
service:
type: NodePort
@@ -10,26 +10,14 @@ airflow:
AIRFLOW__CORE__LOGGING_LEVEL: DEBUG
AIRFLOW__CORE__LOAD_EXAMPLES: True
+ variables: '{ "environment": "dev" }'
+ pools: '{ "example": { "description": "This is an example of a pool", "slots": 2 } }'
+
workers:
replicas: 1
celery:
instances: 1
-ingress:
- enabled: true
- web:
- path: "/airflow"
- host: "minikube"
- annotations:
- traefik.frontend.rule.type: PathPrefix
- kubernetes.io/ingress.class: traefik
- flower:
- path: "/airflow/flower"
- host: "minikube"
- annotations:
- traefik.frontend.rule.type: PathPrefixStrip
- kubernetes.io/ingress.class: traefik
-
persistence:
enabled: true
accessMode: ReadWriteOnce
diff --git a/stable/airflow/requirements.yaml b/stable/airflow/requirements.yaml
index ae00016bd4b8..f9683d145a24 100644
--- a/stable/airflow/requirements.yaml
+++ b/stable/airflow/requirements.yaml
@@ -4,6 +4,6 @@ dependencies:
repository: https://kubernetes-charts.storage.googleapis.com/
condition: postgresql.enabled
- name: redis
- version: 3.3.5
+ version: 7.0.0
repository: https://kubernetes-charts.storage.googleapis.com/
condition: redis.enabled
diff --git a/stable/airflow/templates/NOTES.txt b/stable/airflow/templates/NOTES.txt
index ce5e61b08d11..5cbf031e3e9a 100644
--- a/stable/airflow/templates/NOTES.txt
+++ b/stable/airflow/templates/NOTES.txt
@@ -10,7 +10,7 @@ URL to Airflow and Flower:
1. Get the Airflow URL by running these commands:
- export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "airflow.fullname" . }})
+ export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "airflow.fullname" . }}-web)
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT/
@@ -22,9 +22,9 @@ URL to Airflow and Flower:
echo http://$SERVICE_IP/
{{- else if contains "ClusterIP" .Values.airflow.service.type }}
- export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "component={{ .Values.airflow.name }}" -o jsonpath="{.items[0].metadata.name}")
- echo http://127.0.0.1:{{ .Values.airflow.externalPortHttp }}
- kubectl port-forward --namespace {{ .Release.Namespace }} $POD_NAME {{ .Values.airflow.externalPortHttp }}:{{ .Values.airflow.internalPortHttp }}
+ export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "component=web,app={{ template "airflow.name" . }}" -o jsonpath="{.items[0].metadata.name}")
+ echo http://127.0.0.1:8080
+ kubectl port-forward --namespace {{ .Release.Namespace }} $POD_NAME 8080:8080
2. Open Airflow in your web browser
{{- end }}
diff --git a/stable/airflow/templates/_helpers.tpl b/stable/airflow/templates/_helpers.tpl
index 1c8a9b29e037..48b6180bc4ba 100644
--- a/stable/airflow/templates/_helpers.tpl
+++ b/stable/airflow/templates/_helpers.tpl
@@ -38,7 +38,7 @@ We truncate at 63 chars because some Kubernetes name fields are limited to this
*/}}
{{- define "airflow.postgresql.fullname" -}}
{{- if .Values.postgresql.postgresHost }}
- {{- printf "%s" .Values.postgresql.postgresHost -}}
+ {{- .Values.postgresql.postgresHost -}}
{{- else }}
{{- $name := default "postgresql" .Values.postgresql.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
@@ -46,12 +46,16 @@ We truncate at 63 chars because some Kubernetes name fields are limited to this
{{- end -}}
{{/*
-Create a default fully qualified redis cluster name.
+Create a default fully qualified redis cluster name or use the `redisHost` value if defined
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "airflow.redis.fullname" -}}
-{{- $name := default "redis" .Values.redis.nameOverride -}}
-{{- printf "%s-%s-master" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
+{{- if .Values.redis.redisHost }}
+ {{- .Values.redis.redisHost -}}
+{{- else }}
+ {{- $name := default "redis" .Values.redis.nameOverride -}}
+ {{- printf "%s-%s-master" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
{{- end -}}
{{/*
@@ -75,3 +79,30 @@ Create the name for the airflow secret.
{{ template "airflow.fullname" . }}
{{- end -}}
{{- end -}}
+
+{{/*
+Map environment vars to secrets
+*/}}
+{{- define "airflow.mapenvsecrets" -}}
+ {{- /* := inside an if block creates a new scoped variable in Go templates, so reassign the outer variables with = */}}
+ {{- $secretName := printf "%s-env" (include "airflow.fullname" .) }}
+ {{- $mapping := .Values.airflow.defaultSecretsMapping }}
+ {{- if .Values.existingAirflowSecret }}
+ {{- $secretName = .Values.existingAirflowSecret }}
+ {{- if .Values.airflow.secretsMapping }}
+ {{- $mapping = .Values.airflow.secretsMapping }}
+ {{- end }}
+ {{- end }}
+ {{- range $val := $mapping }}
+ {{- if $val }}
+ - name: {{ $val.envVar }}
+ valueFrom:
+ secretKeyRef:
+ {{- if $val.secretName }}
+ name: {{ $val.secretName }}
+ {{- else }}
+ name: {{ $secretName }}
+ {{- end }}
+ key: {{ $val.secretKey }}
+ {{- end }}
+ {{- end }}
+{{- end }}
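The env-var rendering the helper above performs can be sketched in Python: each mapping entry becomes a `secretKeyRef` env entry, falling back to the chart-level secret name when the entry has no `secretName` of its own (a sketch, not chart code):

```python
def map_env_secrets(mapping, default_secret_name):
    """Sketch of airflow.mapenvsecrets: each entry becomes a container
    env var sourced from a Kubernetes secret, using the entry's own
    secretName when given, else the chart-level default."""
    env = []
    for item in mapping or []:
        if not item:
            continue
        env.append({
            "name": item["envVar"],
            "valueFrom": {"secretKeyRef": {
                "name": item.get("secretName", default_secret_name),
                "key": item["secretKey"],
            }},
        })
    return env

# Hypothetical mapping mirroring the README's secretsMapping example.
env = map_env_secrets(
    [{"envVar": "AIRFLOW__LDAP__BIND_PASSWORD", "secretKey": "ldapBindPassword"}],
    "my-airflow-secrets",
)
print(env)
```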
diff --git a/stable/airflow/templates/configmap-airflow.yaml b/stable/airflow/templates/configmap-env.yaml
similarity index 100%
rename from stable/airflow/templates/configmap-airflow.yaml
rename to stable/airflow/templates/configmap-env.yaml
diff --git a/stable/airflow/templates/configmap-variables-pools.yaml b/stable/airflow/templates/configmap-variables-pools.yaml
new file mode 100644
index 000000000000..a8eb7adfbd03
--- /dev/null
+++ b/stable/airflow/templates/configmap-variables-pools.yaml
@@ -0,0 +1,20 @@
+{{- if or .Values.airflow.variables .Values.airflow.pools }}
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: {{ template "airflow.fullname" . }}-variables-pools
+ labels:
+ app: {{ template "airflow.name" . }}
+ chart: {{ template "airflow.chart" . }}
+ release: {{ .Release.Name }}
+ heritage: {{ .Release.Service }}
+data:
+{{- if .Values.airflow.variables }}
+ variables.json: |
+ {{ .Values.airflow.variables }}
+{{- end }}
+{{- if .Values.airflow.pools }}
+ pools.json: |
+ {{ .Values.airflow.pools }}
+{{- end }}
+{{- end }}
\ No newline at end of file
diff --git a/stable/airflow/templates/deployments-flower.yaml b/stable/airflow/templates/deployments-flower.yaml
index c03fa1af6752..5f2752001827 100644
--- a/stable/airflow/templates/deployments-flower.yaml
+++ b/stable/airflow/templates/deployments-flower.yaml
@@ -25,8 +25,8 @@ spec:
template:
metadata:
annotations:
- checksum/config: {{ include (print $.Template.BasePath "/configmap-airflow.yaml") . | sha256sum }}
- configmap.fabric8.io/update-on-change: "{{ template "airflow.fullname" . }}-env"
+ checksum/config-env: {{ include (print $.Template.BasePath "/configmap-env.yaml") . | sha256sum }}
+ checksum/secret-env: {{ include (print $.Template.BasePath "/secret-env.yaml") . | sha256sum }}
labels:
app: {{ template "airflow.name" . }}
component: flower
@@ -37,6 +37,18 @@ spec:
- name: {{ .Values.airflow.image.pullSecret }}
{{- end }}
restartPolicy: Always
+ {{- if .Values.nodeSelector }}
+ nodeSelector:
+{{ toYaml .Values.nodeSelector | indent 8 }}
+ {{- end }}
+ {{- if .Values.affinity }}
+ affinity:
+{{ toYaml .Values.affinity | indent 8 }}
+ {{- end }}
+ {{- if .Values.tolerations }}
+ tolerations:
+{{ toYaml .Values.tolerations | indent 8 }}
+ {{- end }}
containers:
- name: {{ .Chart.Name }}-flower
image: {{ .Values.airflow.image.repository }}:{{ .Values.airflow.image.tag }}
@@ -45,21 +57,7 @@ spec:
- configMapRef:
name: "{{ template "airflow.fullname" . }}-env"
env:
- - name: POSTGRES_USER
- valueFrom:
- secretKeyRef:
- name: {{ template "airflow.secret" . }}
- key: postgresUser
- - name: POSTGRES_PASSWORD
- valueFrom:
- secretKeyRef:
- name: {{ template "airflow.secret" . }}
- key: postgresPassword
- - name: REDIS_PASSWORD
- valueFrom:
- secretKeyRef:
- name: {{ template "airflow.secret" . }}
- key: redisPassword
+ {{- include "airflow.mapenvsecrets" . | indent 10 }}
ports:
- name: flower
containerPort: 5555
@@ -74,4 +72,6 @@ spec:
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 5
+ resources:
+{{ toYaml .Values.flower.resources | indent 12 }}
{{- end }}
diff --git a/stable/airflow/templates/deployments-scheduler.yaml b/stable/airflow/templates/deployments-scheduler.yaml
old mode 100755
new mode 100644
index ff582eefa28a..12756caca90a
--- a/stable/airflow/templates/deployments-scheduler.yaml
+++ b/stable/airflow/templates/deployments-scheduler.yaml
@@ -25,8 +25,15 @@ spec:
template:
metadata:
annotations:
- checksum/config: {{ include (print $.Template.BasePath "/configmap-airflow.yaml") . | sha256sum }}
- configmap.fabric8.io/update-on-change: "{{ template "airflow.fullname" . }}-env"
+ checksum/config-env: {{ include (print $.Template.BasePath "/configmap-env.yaml") . | sha256sum }}
+ checksum/config-git-clone: {{ include (print $.Template.BasePath "/configmap-git-clone.yaml") . | sha256sum }}
+ checksum/config-scripts: {{ include (print $.Template.BasePath "/configmap-scripts.yaml") . | sha256sum }}
+ checksum/config-variables-pools: {{ include (print $.Template.BasePath "/configmap-variables-pools.yaml") . | sha256sum }}
+ checksum/secret-connections: {{ include (print $.Template.BasePath "/secret-connections.yaml") . | sha256sum }}
+ checksum/secret-env: {{ include (print $.Template.BasePath "/secret-env.yaml") . | sha256sum }}
+{{- if .Values.airflow.podAnnotations }}
+{{ toYaml .Values.airflow.podAnnotations | indent 8 }}
+{{- end }}
labels:
app: {{ template "airflow.name" . }}
component: scheduler
@@ -37,6 +44,18 @@ spec:
- name: {{ .Values.airflow.image.pullSecret }}
{{- end }}
restartPolicy: Always
+ {{- if .Values.nodeSelector }}
+ nodeSelector:
+{{ toYaml .Values.nodeSelector | indent 8 }}
+ {{- end }}
+ {{- if .Values.affinity }}
+ affinity:
+{{ toYaml .Values.affinity | indent 8 }}
+ {{- end }}
+ {{- if .Values.tolerations }}
+ tolerations:
+{{ toYaml .Values.tolerations | indent 8 }}
+ {{- end }}
serviceAccountName: {{ template "airflow.serviceAccountName" . }}
{{- if .Values.dags.initContainer.enabled }}
initContainers:
@@ -67,39 +86,42 @@ spec:
- configMapRef:
name: "{{ template "airflow.fullname" . }}-env"
env:
- - name: POSTGRES_USER
- valueFrom:
- secretKeyRef:
- name: {{ template "airflow.secret" . }}
- key: postgresUser
- - name: POSTGRES_PASSWORD
- valueFrom:
- secretKeyRef:
- name: {{ template "airflow.secret" . }}
- key: postgresPassword
- - name: REDIS_PASSWORD
- valueFrom:
- secretKeyRef:
- name: {{ template "airflow.secret" . }}
- key: redisPassword
+ {{- include "airflow.mapenvsecrets" . | indent 10 }}
+ resources:
+{{ toYaml .Values.scheduler.resources | indent 12 }}
volumeMounts:
+ - name: scripts
+ mountPath: /usr/local/scripts
{{- if .Values.persistence.enabled }}
- name: dags-data
mountPath: {{ .Values.dags.path }}
{{- else if .Values.dags.initContainer.enabled }}
- name: dags-data
mountPath: {{ .Values.dags.path }}
- - name: scripts
- mountPath: /usr/local/scripts
- {{- if .Values.airflow.connections }}
- - name: connections
- mountPath: /usr/local/connections
- {{- end}}
{{- end }}
{{- if .Values.logsPersistence.enabled }}
- name: logs-data
mountPath: {{ .Values.logs.path }}
{{- end }}
+ {{- if .Values.airflow.connections }}
+ - name: connections
+ mountPath: /usr/local/connections
+ {{- end}}
+ {{- if or .Values.airflow.variables .Values.airflow.pools }}
+ - name: variables-pools
+ mountPath: /usr/local/variables-pools/
+ {{- end}}
+ {{- range .Values.airflow.extraConfigmapMounts }}
+ - name: {{ .name }}
+ mountPath: {{ .mountPath }}
+ readOnly: {{ .readOnly }}
+ {{ if .subPath }}
+ subPath: {{ .subPath }}
+ {{ end }}
+ {{- end }}
+{{- if .Values.airflow.extraVolumeMounts }}
+{{ toYaml .Values.airflow.extraVolumeMounts | indent 12 }}
+{{- end }}
args:
- "bash"
- "-c"
@@ -116,6 +138,14 @@ spec:
{{- if .Values.airflow.connections }}
echo "adding connections" &&
/usr/local/connections/add-connections.sh &&
+ {{- end }}
+ {{- if .Values.airflow.variables }}
+ echo "adding variables" &&
+ airflow variables -i /usr/local/variables-pools/variables.json &&
+ {{- end }}
+ {{- if .Values.airflow.pools }}
+ echo "adding pools" &&
+ airflow pool -i /usr/local/variables-pools/pools.json &&
{{- end }}
echo "executing scheduler" &&
airflow scheduler -n {{ .Values.airflow.schedulerNumRuns }}
@@ -127,10 +157,29 @@ spec:
export PATH=/usr/local/airflow/.local/bin:$PATH &&
echo "executing initdb" &&
airflow initdb &&
+ {{- if .Values.airflow.connections }}
+ echo "adding connections" &&
+ /usr/local/connections/add-connections.sh &&
+ {{- end }}
+ {{- if .Values.airflow.variables }}
+ echo "adding variables" &&
+ airflow variables -i /usr/local/variables-pools/variables.json &&
+ {{- end }}
+ {{- if .Values.airflow.pools }}
+ echo "adding pools" &&
+ airflow pool -i /usr/local/variables-pools/pools.json &&
+ {{- end }}
echo "executing scheduler" &&
airflow scheduler -n {{ .Values.airflow.schedulerNumRuns }}
{{- end }}
+{{- if .Values.airflow.extraContainers }}
+{{ toYaml .Values.airflow.extraContainers | indent 8 }}
+{{- end }}
volumes:
+ - name: scripts
+ configMap:
+ name: {{ template "airflow.fullname" . }}-scripts
+ defaultMode: 0755
- name: dags-data
{{- if .Values.persistence.enabled }}
persistentVolumeClaim:
@@ -144,20 +193,10 @@ spec:
claimName: {{ .Values.logsPersistence.existingClaim | default (printf "%s-logs" (include "airflow.fullname" . | trunc 58 )) }}
{{- end }}
{{- if .Values.dags.initContainer.enabled }}
- - name: scripts
- configMap:
- name: {{ template "airflow.fullname" . }}-scripts
- defaultMode: 0755
- name: git-clone
configMap:
name: {{ template "airflow.fullname" . }}-git-clone
defaultMode: 0755
- {{- if .Values.airflow.connections }}
- - name: connections
- secret:
- secretName: {{ template "airflow.fullname" . }}-connections
- defaultMode: 0755
- {{- end }}
{{- if .Values.dags.git.secret }}
- name: git-clone-secret
secret:
@@ -165,3 +204,23 @@ spec:
defaultMode: 0700
{{- end }}
{{- end }}
+ {{- if .Values.airflow.connections }}
+ - name: connections
+ secret:
+ secretName: {{ template "airflow.fullname" . }}-connections
+ defaultMode: 0755
+ {{- end }}
+ {{- if or .Values.airflow.variables .Values.airflow.pools }}
+ - name: variables-pools
+ configMap:
+ name: {{ template "airflow.fullname" . }}-variables-pools
+ defaultMode: 0755
+ {{- end }}
+ {{- range .Values.airflow.extraConfigmapMounts }}
+ - name: {{ .name }}
+ configMap:
+ name: {{ .configMap }}
+ {{- end }}
+{{- if .Values.airflow.extraVolumes }}
+{{ toYaml .Values.airflow.extraVolumes | indent 8 }}
+{{- end }}
diff --git a/stable/airflow/templates/deployments-web.yaml b/stable/airflow/templates/deployments-web.yaml
index c71431041ec7..845395e0835e 100644
--- a/stable/airflow/templates/deployments-web.yaml
+++ b/stable/airflow/templates/deployments-web.yaml
@@ -25,8 +25,13 @@ spec:
template:
metadata:
annotations:
- checksum/config: {{ include (print $.Template.BasePath "/configmap-airflow.yaml") . | sha256sum }}
- configmap.fabric8.io/update-on-change: "{{ template "airflow.fullname" . }}-env"
+ checksum/config-env: {{ include (print $.Template.BasePath "/configmap-env.yaml") . | sha256sum }}
+ checksum/config-git-clone: {{ include (print $.Template.BasePath "/configmap-git-clone.yaml") . | sha256sum }}
+ checksum/config-scripts: {{ include (print $.Template.BasePath "/configmap-scripts.yaml") . | sha256sum }}
+ checksum/secret-env: {{ include (print $.Template.BasePath "/secret-env.yaml") . | sha256sum }}
+{{- if .Values.airflow.podAnnotations }}
+{{ toYaml .Values.airflow.podAnnotations | indent 8 }}
+{{- end }}
labels:
app: {{ template "airflow.name" . }}
component: web
@@ -37,6 +42,18 @@ spec:
- name: {{ .Values.airflow.image.pullSecret }}
{{- end }}
restartPolicy: Always
+ {{- if .Values.nodeSelector }}
+ nodeSelector:
+{{ toYaml .Values.nodeSelector | indent 8 }}
+ {{- end }}
+ {{- if .Values.affinity }}
+ affinity:
+{{ toYaml .Values.affinity | indent 8 }}
+ {{- end }}
+ {{- if .Values.tolerations }}
+ tolerations:
+{{ toYaml .Values.tolerations | indent 8 }}
+ {{- end }}
{{- if .Values.dags.initContainer.enabled }}
initContainers:
- name: git-clone
@@ -70,42 +87,41 @@ spec:
- configMapRef:
name: "{{ template "airflow.fullname" . }}-env"
env:
- - name: POSTGRES_USER
- valueFrom:
- secretKeyRef:
- name: {{ template "airflow.secret" . }}
- key: postgresUser
- - name: POSTGRES_PASSWORD
- valueFrom:
- secretKeyRef:
- name: {{ template "airflow.secret" . }}
- key: postgresPassword
- - name: REDIS_PASSWORD
- valueFrom:
- secretKeyRef:
- name: {{ template "airflow.secret" . }}
- key: redisPassword
+ {{- include "airflow.mapenvsecrets" . | indent 10 }}
+ resources:
+{{ toYaml .Values.web.resources | indent 12 }}
volumeMounts:
+ - name: scripts
+ mountPath: /usr/local/scripts
{{- if .Values.persistence.enabled }}
- name: dags-data
mountPath: {{ .Values.dags.path }}
{{- else if .Values.dags.initContainer.enabled }}
- name: dags-data
mountPath: {{ .Values.dags.path }}
- - name: scripts
- mountPath: /usr/local/scripts
{{- end }}
{{- if .Values.logsPersistence.enabled }}
- name: logs-data
mountPath: {{ .Values.logs.path }}
{{- end }}
+ {{- range .Values.airflow.extraConfigmapMounts }}
+ - name: {{ .name }}
+ mountPath: {{ .mountPath }}
+ readOnly: {{ .readOnly }}
+ {{ if .subPath }}
+ subPath: {{ .subPath }}
+ {{ end }}
+ {{- end }}
+{{- if .Values.airflow.extraVolumeMounts }}
+{{ toYaml .Values.airflow.extraVolumeMounts | indent 12 }}
+{{- end }}
args:
- "bash"
- "-c"
{{- if and ( .Values.dags.initContainer.enabled ) ( .Values.dags.initContainer.installRequirements ) }}
- >
- echo 'waiting 60s...' &&
- sleep 60 &&
+ echo 'waiting {{ .Values.web.initialStartupDelay }}s...' &&
+ sleep {{ .Values.web.initialStartupDelay }} &&
echo 'installing requirements...' &&
mkdir -p /usr/local/airflow/.local/bin &&
export PATH=/usr/local/airflow/.local/bin:$PATH &&
@@ -114,8 +130,8 @@ spec:
airflow webserver
{{- else }}
- >
- echo 'waiting 60s...' &&
- sleep 60 &&
+ echo 'waiting {{ .Values.web.initialStartupDelay }}s...' &&
+ sleep {{ .Values.web.initialStartupDelay }} &&
mkdir -p /usr/local/airflow/.local/bin &&
export PATH=/usr/local/airflow/.local/bin:$PATH &&
echo 'executing webserver...' &&
@@ -126,12 +142,19 @@ spec:
path: "{{ .Values.ingress.web.path }}/health"
port: web
             ## Keep the delay at 6 minutes to allow the postgres and redis containers to start cleanly
- initialDelaySeconds: 360
+ initialDelaySeconds: {{ .Values.web.initialDelaySeconds }}
periodSeconds: 60
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 5
+{{- if .Values.airflow.extraContainers }}
+{{ toYaml .Values.airflow.extraContainers | indent 8 }}
+{{- end }}
volumes:
+ - name: scripts
+ configMap:
+ name: {{ template "airflow.fullname" . }}-scripts
+ defaultMode: 0755
- name: dags-data
{{- if .Values.persistence.enabled }}
persistentVolumeClaim:
@@ -145,10 +168,6 @@ spec:
claimName: {{ .Values.logsPersistence.existingClaim | default (printf "%s-logs" (include "airflow.fullname" . | trunc 58 )) }}
{{- end }}
{{- if .Values.dags.initContainer.enabled }}
- - name: scripts
- configMap:
- name: {{ template "airflow.fullname" . }}-scripts
- defaultMode: 0755
- name: git-clone
configMap:
name: {{ template "airflow.fullname" . }}-git-clone
@@ -160,3 +179,11 @@ spec:
defaultMode: 0700
{{- end }}
{{- end }}
+ {{- range .Values.airflow.extraConfigmapMounts }}
+ - name: {{ .name }}
+ configMap:
+ name: {{ .configMap }}
+ {{- end }}
+{{- if .Values.airflow.extraVolumes }}
+{{ toYaml .Values.airflow.extraVolumes | indent 8 }}
+{{- end }}
diff --git a/stable/airflow/templates/prometheus-rule.yaml b/stable/airflow/templates/prometheus-rule.yaml
new file mode 100644
index 000000000000..7ba9c19b34c9
--- /dev/null
+++ b/stable/airflow/templates/prometheus-rule.yaml
@@ -0,0 +1,17 @@
+{{- if .Values.prometheusRule.enabled }}
+apiVersion: monitoring.coreos.com/v1
+kind: PrometheusRule
+metadata:
+ name: {{ template "airflow.fullname" . }}
+ labels:
+ app: {{ template "airflow.name" . }}
+ chart: {{ template "airflow.chart" . }}
+ heritage: {{ .Release.Service }}
+ release: {{ .Release.Name }}
+ {{- if .Values.prometheusRule.additionalLabels }}
+{{ toYaml .Values.prometheusRule.additionalLabels | indent 4 }}
+ {{- end }}
+spec:
+ groups:
+{{ toYaml .Values.prometheusRule.groups | indent 4 }}
+{{- end }}
diff --git a/stable/airflow/templates/role.yaml b/stable/airflow/templates/role.yaml
index d049c1f922b2..a1b671467799 100644
--- a/stable/airflow/templates/role.yaml
+++ b/stable/airflow/templates/role.yaml
@@ -17,4 +17,8 @@ rules:
resources:
- "pods/log"
verbs: ["get", "list"]
+- apiGroups: [""]
+ resources:
+ - "pods/exec"
+ verbs: ["create", "get"]
{{ end }}
\ No newline at end of file
diff --git a/stable/airflow/templates/secret-connections.yaml b/stable/airflow/templates/secret-connections.yaml
old mode 100755
new mode 100644
diff --git a/stable/airflow/templates/secrets.yaml b/stable/airflow/templates/secret-env.yaml
similarity index 91%
rename from stable/airflow/templates/secrets.yaml
rename to stable/airflow/templates/secret-env.yaml
index 47728198cc63..f03edc4b6ed8 100644
--- a/stable/airflow/templates/secrets.yaml
+++ b/stable/airflow/templates/secret-env.yaml
@@ -2,7 +2,7 @@
apiVersion: v1
kind: Secret
metadata:
- name: {{ template "airflow.fullname" . }}
+ name: {{ template "airflow.fullname" . }}-env
labels:
app: {{ template "airflow.fullname" . }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
diff --git a/stable/airflow/templates/servicemonitor.yaml b/stable/airflow/templates/servicemonitor.yaml
new file mode 100644
index 000000000000..37c43b3b7123
--- /dev/null
+++ b/stable/airflow/templates/servicemonitor.yaml
@@ -0,0 +1,25 @@
+{{- if .Values.serviceMonitor.enabled }}
+apiVersion: monitoring.coreos.com/v1
+kind: ServiceMonitor
+metadata:
+ name: {{ template "airflow.fullname" . }}
+ labels:
+ app: {{ template "airflow.name" . }}
+ component: worker
+ chart: {{ template "airflow.chart" . }}
+ release: {{ .Release.Name }}
+ heritage: {{ .Release.Service }}
+ {{- range $key, $value := .Values.serviceMonitor.selector }}
+ {{ $key }}: {{ $value | quote }}
+ {{- end }}
+spec:
+ selector:
+ matchLabels:
+ app: {{ template "airflow.name" . }}
+ component: web
+ release: {{ .Release.Name }}
+ endpoints:
+ - port: web
+ path: {{ .Values.serviceMonitor.path }}
+ interval: {{ .Values.serviceMonitor.interval }}
+{{- end }}
diff --git a/stable/airflow/templates/statefulsets-workers.yaml b/stable/airflow/templates/statefulsets-workers.yaml
index 3ea0b5c926e8..26167bea324a 100644
--- a/stable/airflow/templates/statefulsets-workers.yaml
+++ b/stable/airflow/templates/statefulsets-workers.yaml
@@ -29,11 +29,16 @@ spec:
template:
metadata:
annotations:
- checksum/config: {{ include (print $.Template.BasePath "/configmap-airflow.yaml") . | sha256sum }}
- configmap.fabric8.io/update-on-change: "{{ template "airflow.fullname" . }}-env"
- {{ range $key, $value := .Values.workers.pod.annotations }}
- {{ $key }}: {{ $value | quote }}
- {{- end }}
+ checksum/config-env: {{ include (print $.Template.BasePath "/configmap-env.yaml") . | sha256sum }}
+ checksum/config-git-clone: {{ include (print $.Template.BasePath "/configmap-git-clone.yaml") . | sha256sum }}
+ checksum/config-scripts: {{ include (print $.Template.BasePath "/configmap-scripts.yaml") . | sha256sum }}
+ checksum/secret-env: {{ include (print $.Template.BasePath "/secret-env.yaml") . | sha256sum }}
+{{- if .Values.airflow.podAnnotations }}
+{{ toYaml .Values.airflow.podAnnotations | indent 8 }}
+{{- end }}
+{{- if .Values.workers.podAnnotations }}
+{{ toYaml .Values.workers.podAnnotations | indent 8 }}
+{{- end }}
labels:
app: {{ template "airflow.name" . }}
component: worker
@@ -46,6 +51,19 @@ spec:
restartPolicy: Always
terminationGracePeriodSeconds: 30
serviceAccountName: {{ template "airflow.serviceAccountName" . }}
+ {{- if .Values.nodeSelector }}
+ nodeSelector:
+{{ toYaml .Values.nodeSelector | indent 8 }}
+ {{- end }}
+ {{- if .Values.affinity }}
+ affinity:
+{{ toYaml .Values.affinity | indent 8 }}
+ {{- end }}
+ {{- if .Values.tolerations }}
+ tolerations:
+{{ toYaml .Values.tolerations | indent 8 }}
+ {{- end }}
+
{{- if .Values.dags.initContainer.enabled }}
initContainers:
- name: git-clone
@@ -75,22 +93,10 @@ spec:
- configMapRef:
name: "{{ template "airflow.fullname" . }}-env"
env:
- - name: POSTGRES_USER
- valueFrom:
- secretKeyRef:
- name: {{ template "airflow.secret" . }}
- key: postgresUser
- - name: POSTGRES_PASSWORD
- valueFrom:
- secretKeyRef:
- name: {{ template "airflow.secret" . }}
- key: postgresPassword
- - name: REDIS_PASSWORD
- valueFrom:
- secretKeyRef:
- name: {{ template "airflow.secret" . }}
- key: redisPassword
+ {{- include "airflow.mapenvsecrets" . | indent 10 }}
volumeMounts:
+ - name: scripts
+ mountPath: /usr/local/scripts
{{- $secretsDir := .Values.workers.secretsDir -}}
{{- range .Values.workers.secrets }}
- name: {{ . }}-volume
@@ -104,9 +110,18 @@ spec:
{{- else if .Values.dags.initContainer.enabled }}
- name: dags-data
mountPath: {{ .Values.dags.path }}
- - name: scripts
- mountPath: /usr/local/scripts
{{- end }}
+ {{- range .Values.airflow.extraConfigmapMounts }}
+ - name: {{ .name }}
+ mountPath: {{ .mountPath }}
+ readOnly: {{ .readOnly }}
+ {{ if .subPath }}
+ subPath: {{ .subPath }}
+ {{ end }}
+ {{- end }}
+{{- if .Values.airflow.extraVolumeMounts }}
+{{ toYaml .Values.airflow.extraVolumeMounts | indent 12 }}
+{{- end }}
args:
- "bash"
- "-c"
@@ -135,7 +150,14 @@ spec:
protocol: TCP
resources:
{{ toYaml .Values.workers.resources | indent 12 }}
+{{- if .Values.airflow.extraContainers }}
+{{ toYaml .Values.airflow.extraContainers | indent 8 }}
+{{- end }}
volumes:
+ - name: scripts
+ configMap:
+ name: {{ template "airflow.fullname" . }}-scripts
+ defaultMode: 0755
{{- range .Values.workers.secrets }}
- name: {{ . }}-volume
secret:
@@ -149,10 +171,6 @@ spec:
emptyDir: {}
{{- end }}
{{- if .Values.dags.initContainer.enabled }}
- - name: scripts
- configMap:
- name: {{ template "airflow.fullname" . }}-scripts
- defaultMode: 0755
- name: git-clone
configMap:
name: {{ template "airflow.fullname" . }}-git-clone
@@ -164,4 +182,12 @@ spec:
defaultMode: 0700
{{- end }}
{{- end }}
+ {{- range .Values.airflow.extraConfigmapMounts }}
+ - name: {{ .name }}
+ configMap:
+ name: {{ .configMap }}
+ {{- end }}
+{{- if .Values.airflow.extraVolumes }}
+{{ toYaml .Values.airflow.extraVolumes | indent 8 }}
+{{- end }}
{{- end }}
diff --git a/stable/airflow/values.yaml b/stable/airflow/values.yaml
index 3fdf978a6ab8..d9951461cb29 100644
--- a/stable/airflow/values.yaml
+++ b/stable/airflow/values.yaml
@@ -1,9 +1,57 @@
# Duplicate this file and put your customization here
-
##
## common settings and setting for the webserver
airflow:
+ extraConfigmapMounts: []
+ # - name: extra-metadata
+ # mountPath: /opt/metadata
+ # configMap: airflow-metadata
+ # readOnly: true
+ #
+ # Example of configmap mount with subPath
+ # - name: extra-metadata
+ # mountPath: /opt/metadata/file.yaml
+ # configMap: airflow-metadata
+ # readOnly: true
+ # subPath: file.yaml
+
+
+  ## When existingAirflowSecret is defined, secretsMapping can be overridden
+  ## with a custom mapping. When an entry omits secretName, the value of
+  ## existingAirflowSecret is assumed.
+ ## secretsMapping:
+ ## - envVar: AIRFLOW__LDAP__BIND_PASSWORD
+ ## secretName: ldap
+ ## secretKey: ldapBindPassword
+ ## - envVar: AIRFLOW__ATLAS__PASSWORD
+ ## secretKey: atlasPassword
+ ## - envVar: AIRFLOW__SMTP__PASSWORD
+ ## secretKey: smtpPassword
+ ## - envVar: AIRFLOW__KUBERNETES__GIT_PASSWORD
+ ## secretKey: kubernetesGitPassword
+ ## - envVar: POSTGRES_USER
+ ## secretName: postgres
+ ## secretKey: postgresUser
+ ## - envVar: POSTGRES_PASSWORD
+ ## secretName: postgres
+ ## secretKey: postgresPassword
+ ## - envVar: REDIS_PASSWORD
+ ## secretName: redis
+ ## secretKey: redisPassword
+ secretsMapping:
+
+
+ ## Used only when existingAirflowSecret is null, in which case
+ ## a secret will be created with a default name and the following mapping.
+ defaultSecretsMapping:
+ - envVar: POSTGRES_USER
+ secretKey: postgresUser
+ - envVar: POSTGRES_PASSWORD
+ secretKey: postgresPassword
+ - envVar: REDIS_PASSWORD
+ secretKey: redisPassword
+
##
## You will need to define your fernet key:
## Generate fernetKey with:
@@ -33,7 +81,7 @@ airflow:
repository: puckel/docker-airflow
##
## image tag
- tag: 1.10.0-4
+ tag: 1.10.2
##
## Image pull policy
## values: Always or IfNotPresent
@@ -42,7 +90,7 @@ airflow:
## image pull secret for private images
pullSecret:
##
- ## Set schedulerNumRuns to control how the schduler behaves:
+ ## Set schedulerNumRuns to control how the scheduler behaves:
   ## -1 will let it loop indefinitely, but it will never update the DAG
## 1 will have the scheduler quit after each refresh, but kubernetes will restart it.
##
@@ -70,7 +118,7 @@ airflow:
## Custom airflow configuration environment variables
## Use this to override any airflow setting settings defining environment variables in the
   ## following form: AIRFLOW__<section>__<key>.
- ## See the Airflow documentation: http://airflow.readthedocs.io/en/latest/configuration.html?highlight=__CORE__#setting-configuration-options)
+ ## See the Airflow documentation: https://airflow.readthedocs.io/en/stable/howto/set-config.html?highlight=setting-configuration
## Example:
## config:
## AIRFLOW__CORE__EXPOSE_CONFIG: "True"
@@ -92,6 +140,71 @@ airflow:
## type: aws
## extra: '{"aws_access_key_id": "**********", "aws_secret_access_key": "***", "region_name":"eu-central-1"}'
connections: {}
+
+ ## Add airflow variables
+ ## This should be a json string with your variables in it
+ ## Examples:
+ ## variables: '{ "environment": "dev" }'
+ variables: {}
+
+  ## Add airflow pools
+ ## This should be a json string with your pools in it
+ ## Examples:
+ ## pools: '{ "example": { "description": "This is an example of a pool", "slots": 2 } }'
+ pools: {}
+
+ ##
+ ## Annotations for the Scheduler, Worker and Web pods
+ podAnnotations: {}
+ ## Example:
+ ## iam.amazonaws.com/role: airflow-Role
+ extraContainers: []
+ ## Additional containers to run alongside the Scheduler, Worker and Web pods
+ ## This could, for example, be used to run a sidecar that syncs DAGs from object storage.
+ # - name: s3-sync
+ # image: my-user/s3sync:latest
+ # volumeMounts:
+ # - name: synchronised-dags
+ # mountPath: /dags
+ extraVolumeMounts: []
+ ## Additional volumeMounts to the main containers in the Scheduler, Worker and Web pods.
+ # - name: synchronised-dags
+ # mountPath: /usr/local/airflow/dags
+ extraVolumes: []
+ ## Additional volumes for the Scheduler, Worker and Web pods.
+ # - name: synchronised-dags
+ # emptyDir: {}
+
+scheduler:
+  resources: {}
+ # limits:
+ # cpu: "1000m"
+ # memory: "1Gi"
+ # requests:
+ # cpu: "500m"
+ # memory: "512Mi"
+
+
+flower:
+ resources: {}
+ # limits:
+ # cpu: "100m"
+ # memory: "128Mi"
+ # requests:
+ # cpu: "100m"
+ # memory: "128Mi"
+
+web:
+ resources: {}
+ # limits:
+ # cpu: "300m"
+ # memory: "1Gi"
+ # requests:
+ # cpu: "100m"
+ # memory: "512Mi"
+ initialStartupDelay: "60"
+ initialDelaySeconds: "360"
##
## Workers configuration
workers:
@@ -110,10 +223,9 @@ workers:
# memory: "512Mi"
##
## Annotations for the Worker pods
- pod:
- annotations:
- ## Example:
- ## iam.amazonaws.com/role: airflow-worker-Role
+ podAnnotations: {}
+ ## Example:
+ ## iam.amazonaws.com/role: airflow-Role
##
## Celery worker configuration
celery:
@@ -127,7 +239,13 @@ workers:
## Secrets which will be mounted as a file at `secretsDir/`.
secrets: []
-
+## Node selector, affinity and tolerations for airflow and worker pod assignment
+## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
+## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
+## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature
+nodeSelector: {}
+affinity: {}
+tolerations: []
##
## Ingress configuration
ingress:
@@ -298,6 +416,8 @@ dags:
##
## branch name, tag or sha1 to reset to
ref: master
+ ## pre-created secret with key, key.pub and known_hosts file for private repos
+ secret: {}
initContainer:
## Fetch the source code when the pods starts
enabled: false
@@ -306,7 +426,7 @@ dags:
## docker-airflow image
repository: alpine/git
## image tag
- tag: 1.0.4
+ tag: 1.0.7
## Image pull policy
## values: Always or IfNotPresent
pullPolicy: IfNotPresent
@@ -349,11 +469,8 @@ postgresql:
## Set to false if bringing your own PostgreSQL.
enabled: true
##
- ## If bringing your own PostgreSQL, the full uri to use
- ## e.g. postgres://airflow:changeme@my-postgres.com:5432/airflow?sslmode=disable
- # uri:
- ##
- ## PostgreSQL hostname
+ ## If you are bringing your own PostgreSQL, you should set postgresHost and
+ ## also probably service.port, postgresUser, postgresPassword, and postgresDatabase
## postgresHost:
##
## PostgreSQL port
@@ -391,7 +508,11 @@ redis:
## Set to false if bringing your own redis.
enabled: true
##
+ ## If you are bringing your own redis, you can set the host in redisHost.
+ ## redisHost:
+ ##
## Redis password
+ ##
password: airflow
##
## Master configuration
@@ -418,3 +539,27 @@ redis:
## Disable cluster management by default.
cluster:
enabled: false
+
+# Enable this if you're using https://github.com/coreos/prometheus-operator
+# Note that you also need a metrics exporter such as https://github.com/epoch8/airflow-exporter in your airflow docker image
+serviceMonitor:
+ enabled: false
+ interval: "30s"
+ path: /admin/metrics
+ ## [Kube Prometheus Selector Label](https://github.com/helm/charts/tree/master/stable/prometheus-operator#exporters)
+ selector:
+ prometheus: kube-prometheus
+
+# Enable this if you're using https://github.com/coreos/prometheus-operator
+prometheusRule:
+ enabled: false
+ ## Namespace in which the prometheus rule is created
+ # namespace: monitoring
+ ## Define individual alerting rules as required
+ ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#rulegroup
+ ## https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/
+  groups: []
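+  ## Example of a rule group (illustrative; adapt the alert expression to the
+  ## metrics your exporter actually provides):
+  ## groups:
+  ##   - name: airflow
+  ##     rules:
+  ##       - alert: AirflowSchedulerDown
+  ##         expr: up{job="airflow"} == 0
+  ##         for: 5m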
+
+  ## Labels used by the Prometheus instance in your cluster to select which PrometheusRules to load
+ ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#prometheusspec
+ additionalLabels: {}
diff --git a/stable/ambassador/.helmignore b/stable/ambassador/.helmignore
new file mode 100644
index 000000000000..a0482efdf830
--- /dev/null
+++ b/stable/ambassador/.helmignore
@@ -0,0 +1,23 @@
+# Patterns to ignore when building packages.
+# This supports shell glob matching, relative path matching, and
+# negation (prefixed with !). Only one pattern per line.
+.DS_Store
+# Common VCS dirs
+.git/
+.gitignore
+.bzr/
+.bzrignore
+.hg/
+.hgignore
+.svn/
+# Common backup files
+*.swp
+*.bak
+*.tmp
+*~
+# Various IDEs
+.project
+.idea/
+*.tmproj
+.vscode/
+OWNERS
diff --git a/stable/ambassador/CHANGELOG.md b/stable/ambassador/CHANGELOG.md
new file mode 100644
index 000000000000..f8e57d314bf4
--- /dev/null
+++ b/stable/ambassador/CHANGELOG.md
@@ -0,0 +1,150 @@
+# Change Log
+
+This file documents all notable changes to Ambassador Helm Chart. The release
+numbering uses [semantic versioning](http://semver.org).
+
+## v2.6.0
+
+### Minor Changes
+
+- Add ambassador CRDs!
+- Update ambassador to 0.70.0
+
+## v2.5.1
+
+### Minor Changes
+
+- Update ambassador to 0.61.1
+
+## v2.5.0
+
+### Minor Changes
+
+- Add support for autoscaling using HPA, see `autoscaling` values.
+
+## v2.4.1
+
+### Minor Changes
+
+- Update ambassador to 0.61.0
+
+## v2.4.0
+
+### Minor Changes
+
+- Allow configuring `hostNetwork` and `dnsPolicy`
+
+## v2.3.1
+
+### Minor Changes
+
+- Adds HOST_IP environment variable
+
+## v2.3.0
+
+### Minor Changes
+
+- Adds support for init containers using `initContainers` and pod labels `podLabels`
+
+## v2.2.5
+
+### Minor Changes
+
+- Update ambassador to 0.60.3
+
+## v2.2.4
+
+### Minor Changes
+
+- Add support for Ambassador PRO [see readme](https://github.com/helm/charts/blob/master/stable/ambassador/README.md#ambassador-pro)
+
+## v2.2.3
+
+### Minor Changes
+
+- Update ambassador to 0.60.2
+
+## v2.2.2
+
+### Minor Changes
+
+- Update ambassador to 0.60.1
+
+## v2.2.1
+
+### Minor Changes
+
+- Fix RBAC for ambassador 0.60.0
+
+## v2.2.0
+
+### Minor Changes
+
+- Update ambassador to 0.60.0
+
+## v2.1.0
+
+### Minor Changes
+
+- Added `scope.singleNamespace` for configuring ambassador to run in single namespace
+
+## v2.0.2
+
+### Minor Changes
+
+- Update ambassador to 0.53.1
+
+## v2.0.1
+
+### Minor Changes
+
+- Update ambassador to 0.52.0
+
+## v2.0.0
+
+### Major Changes
+
+- Removed `ambassador.id` and `namespace.single` in favor of setting environment variables.
+
+## v1.1.5
+
+### Minor Changes
+
+- Update ambassador to 0.50.3
+
+## v1.1.4
+
+### Minor Changes
+
+- support targetPort specification
+
+## v1.1.3
+
+### Minor Changes
+
+- Update ambassador to 0.50.2
+
+## v1.1.2
+
+### Minor Changes
+
+- Add additional chart maintainer
+
+## v1.1.1
+
+### Minor Changes
+
+- Default replicas -> 3
+
+## v1.1.0
+
+### Minor Changes
+
+- Allow RBAC to be namespaced (`rbac.namespaced`)
+
+## v1.0.0
+
+### Major Changes
+
+- First release of Ambassador Helm Chart in helm/charts
+- For migration see [Migrating from datawire/ambassador chart](https://github.com/helm/charts/tree/master/stable/ambassador#migrating-from-datawireambassador-chart-chart-version-0400-or-0500)
diff --git a/stable/ambassador/Chart.yaml b/stable/ambassador/Chart.yaml
new file mode 100644
index 000000000000..ec68156dc3ff
--- /dev/null
+++ b/stable/ambassador/Chart.yaml
@@ -0,0 +1,23 @@
+apiVersion: v1
+appVersion: 0.70.1
+description: A Helm chart for Datawire Ambassador
+name: ambassador
+version: 2.6.1
+icon: https://www.getambassador.io/images/logo.png
+home: https://www.getambassador.io/
+sources:
+ - https://github.com/datawire/ambassador
+ - https://github.com/prometheus/statsd_exporter
+keywords:
+ - api gateway
+ - ambassador
+ - datawire
+ - envoy
+maintainers:
+ - name: flydiverny
+ email: markus@maga.se
+ - name: kflynn
+ email: flynn@datawire.io
+ - name: nbkrause
+ email: nkrause@datawire.io
+engine: gotpl
diff --git a/stable/ambassador/OWNERS b/stable/ambassador/OWNERS
new file mode 100644
index 000000000000..a151dd3f2bfd
--- /dev/null
+++ b/stable/ambassador/OWNERS
@@ -0,0 +1,8 @@
+approvers:
+- flydiverny
+- kflynn
+- nbkrause
+reviewers:
+- flydiverny
+- kflynn
+- nbkrause
diff --git a/stable/ambassador/README.md b/stable/ambassador/README.md
new file mode 100755
index 000000000000..092803d6d234
--- /dev/null
+++ b/stable/ambassador/README.md
@@ -0,0 +1,205 @@
+# Ambassador
+
+Ambassador is an open source, Kubernetes-native [microservices API gateway](https://www.getambassador.io/about/microservices-api-gateways) built on the [Envoy Proxy](https://www.envoyproxy.io/).
+
+## TL;DR;
+
+```console
+$ helm install stable/ambassador
+```
+
+## Introduction
+
+This chart bootstraps an [Ambassador](https://www.getambassador.io) deployment on
+a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
+
+## Prerequisites
+
+- Kubernetes 1.7+
+
+## Installing the Chart
+
+To install the chart with the release name `my-release`:
+
+```console
+$ helm install --name my-release stable/ambassador
+```
+
+The command deploys Ambassador API gateway on the Kubernetes cluster in the default configuration.
+The [configuration](#configuration) section lists the parameters that can be configured during installation.
+
+## Uninstalling the Chart
+
+To uninstall/delete the `my-release` deployment:
+
+```console
+$ helm delete --purge my-release
+```
+
+The command removes all the Kubernetes components associated with the chart and deletes the release.
+
+## Configuration
+
+The following table lists the configurable parameters of the Ambassador chart and their default values.
+
+| Parameter | Description | Default |
+| ---------------------------------- | ------------------------------------------------------------------------------- | --------------------------------- |
+| `adminService.create` | If `true`, create a service for Ambassador's admin UI | `true` |
+| `adminService.nodePort`            | If explicit NodePort for admin service is required                               | None                              |
+| `adminService.type` | Ambassador's admin service type to be used | `ClusterIP` |
+| `ambassadorConfig`                 | Config that is mounted to `/ambassador/ambassador-config`                        | `""`                              |
+| `crds.create` | If `true`, Creates CRD resources | `true` |
+| `crds.keep`                        | If `true`, the ambassador CRDs are kept when the chart is deleted                | `true`                            |
+| `daemonSet`                        | If `true`, create a DaemonSet instead of the default Deployment                  | `false`                           |
+| `hostNetwork`                      | If `true`, uses the host network, useful for on-premise setups                   | `false`                           |
+| `dnsPolicy`                        | DNS policy; set to `ClusterFirstWithHostNet` when `hostNetwork` is `true`        | `ClusterFirst`                    |
+| `env` | Any additional environment variables for ambassador pods | `{}` |
+| `image.pullPolicy` | Ambassador image pull policy | `IfNotPresent` |
+| `image.repository` | Ambassador image | `quay.io/datawire/ambassador` |
+| `image.tag` | Ambassador image tag | `0.70.1` |
+| `imagePullSecrets` | Image pull secrets | `[]` |
+| `namespace.name` | Set the `AMBASSADOR_NAMESPACE` environment variable | `metadata.namespace` |
+| `scope.singleNamespace` | Set the `AMBASSADOR_SINGLE_NAMESPACE` environment variable | `false` |
+| `podAnnotations` | Additional annotations for ambassador pods | `{}` |
+| `podLabels` | Additional labels for ambassador pods | |
+| `prometheusExporter.enabled` | Prometheus exporter side-car enabled | `false` |
+| `prometheusExporter.pullPolicy` | Image pull policy | `IfNotPresent` |
+| `prometheusExporter.repository` | Prometheus exporter image | `prom/statsd-exporter` |
+| `prometheusExporter.tag`           | Prometheus exporter image tag                                                    | `v0.8.1`                          |
+| `rbac.create` | If `true`, create and use RBAC resources | `true` |
+| `rbac.namespaced` | If `true`, permissions are namespace-scoped rather than cluster-scoped | `false` |
+| `replicaCount` | Number of Ambassador replicas | `3` |
+| `resources` | CPU/memory resource requests/limits | `{}` |
+| `securityContext` | Set security context for pod | `{ "runAsUser": "8888" }` |
+| `initContainers` | Containers used to initialize context for pods | `[]` |
+| `service.annotations` | Annotations to apply to Ambassador service | See "Annotations" below |
+| `service.externalTrafficPolicy` | Sets the external traffic policy for the service | `""` |
+| `service.http.enabled`             | If port 80 should be opened for the service                                      | `true`                            |
+| `service.http.nodePort`            | If explicit NodePort is required                                                 | None                              |
+| `service.http.port`                | Sets the service's cleartext port                                                | `80`                              |
+| `service.http.targetPort`          | Sets the targetPort that maps to the service's cleartext port                    | `8080`                            |
+| `service.https.enabled`            | If port 443 should be opened for the service                                     | `true`                            |
+| `service.https.nodePort`           | If explicit NodePort is required                                                 | None                              |
+| `service.https.port`               | Sets the service's TLS port                                                      | `443`                             |
+| `service.https.targetPort`         | Sets the targetPort that maps to the service's TLS port                          | `8443`                            |
+| `service.loadBalancerIP` | IP address to assign (if cloud provider supports it) | `""` |
+| `service.loadBalancerSourceRanges` | Passed to cloud provider load balancer if created (e.g: AWS ELB) | None |
+| `service.type` | Service type to be used | `LoadBalancer` |
+| `serviceAccount.create` | If `true`, create a new service account | `true` |
+| `serviceAccount.name` | Service account to be used | `ambassador` |
+| `volumeMounts` | Volume mounts for the ambassador service | `[]` |
+| `volumes` | Volumes for the ambassador service | `[]` |
+| `pro.enabled` | Installs the Ambassador Pro container as a sidecar to Ambassador | `false` |
+| `pro.image.repository` | Ambassador Pro image | `quay.io/datawire/ambassador_pro` |
+| `pro.image.tag` | Ambassador Pro image tag | `amb-sidecar-0.4.0` |
+| `pro.ports.auth` | Ambassador Pro authentication port | `8500` |
+| `pro.ports.ratelimit` | Ambassador Pro ratelimit port | `8501` |
+| `pro.ports.ratelimitDebug` | Debug port for Ambassador Pro ratelimit | `8502` |
+| `pro.licenseKey.value` | License key for Ambassador Pro | "" |
+| `pro.licenseKey.secret` | Stores the license key as a base64-encoded string in a Kubernetes secret | `false` |
+| `autoscaling.enabled` | If true, creates Horizontal Pod Autoscaler | `false` |
+| `autoscaling.minReplica` | If autoscaling enabled, this field sets minimum replica count | `2` |
+| `autoscaling.maxReplica` | If autoscaling enabled, this field sets maximum replica count | `5` |
+| `autoscaling.metrics` | If autoscaling enabled, configure hpa metrics | |
+
+**NOTE:** Make sure the configured `service.http.targetPort` and `service.https.targetPort` ports match your [Ambassador Module's](https://www.getambassador.io/reference/modules/#the-ambassador-module) `service_port` and `redirect_cleartext_from` configurations.
+
+### Annotations
+
+The default annotation applied to the Ambassador service is
+
+```
+getambassador.io/config: |
+ ---
+ apiVersion: ambassador/v1
+ kind: Module
+ name: ambassador
+ config:
+ service_port: 8080
+```
+
+If you intend to use `service.annotations`, remember to include the `getambassador.io/config` annotation key as above,
+and remember that you'll have to escape newlines. For example, the annotation above could be defined as
+
+```
+service.annotations: { "getambassador.io/config": "---\napiVersion: ambassador/v1\nkind: Module\nname: ambassador\nconfig:\n service_port: 8080" }
+```
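+
+If you manage values in a YAML file instead of `--set`, a block scalar avoids the newline escaping entirely. This mirrors the chart's default `service.annotations`:
+
+```yaml
+service:
+  annotations:
+    getambassador.io/config: |
+      ---
+      apiVersion: ambassador/v1
+      kind: Module
+      name: ambassador
+      config:
+        service_port: 8080
+```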
+
+### Ambassador Pro
+
+Setting `pro.enabled: true` installs Ambassador Pro as a sidecar to Ambassador, along with the required CRDs and a Redis instance.
+
+You must set the `pro.licenseKey.value` to the license key issued to you. Sign up for a [free trial](https://www.getambassador.io/pro/free-trial) of Ambassador Pro or [contact](https://www.getambassador.io/contact) our sales team to obtain a license key.
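+
+For example, both can be set at install time (the license key below is a placeholder, not a real key):
+
+```console
+$ helm upgrade --install --wait my-release \
+    --set pro.enabled=true \
+    --set pro.licenseKey.value=<YOUR_LICENSE_KEY> \
+    stable/ambassador
+```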
+
+For most use cases, `pro.image` and `pro.ports` can be left as default.
+
+### Specifying Values
+
+Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
+
+```console
+$ helm upgrade --install --wait my-release \
+ --set adminService.type=NodePort \
+ stable/ambassador
+```
+
+Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,
+
+```console
+$ helm upgrade --install --wait my-release -f values.yaml stable/ambassador
+```
+
+---
+
+# Upgrading
+
+## To 2.0.0
+
+### Ambassador ID
+
+`ambassador.id` has been removed in favor of setting the `AMBASSADOR_ID` environment variable via `env`. `AMBASSADOR_ID` defaults to `default` if not set in the environment. It is mainly used for [running multiple Ambassadors](https://www.getambassador.io/reference/running#ambassador_id) in the same cluster.
+
+| Parameter | Env variables |
+| --------------- | --------------- |
+| `ambassador.id` | `AMBASSADOR_ID` |
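+
+For example, a previous `ambassador.id` setting becomes an `env` entry (the ID shown is illustrative):
+
+```yaml
+env:
+  AMBASSADOR_ID: my-ambassador
+```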
+
+## Migrating from `datawire/ambassador` chart (chart version 0.40.0 or 0.50.0)
+
+The chart now runs Ambassador as non-root by default, so you may need to update your Ambassador module config to match.
+
+### Timings
+
+Timing values have been removed in favor of setting the corresponding environment variables via `env`.
+
+| Parameter | Env variables |
+| ----------------- | -------------------------- |
+| `timing.restart` | `AMBASSADOR_RESTART_TIME` |
+| `timing.drain` | `AMBASSADOR_DRAIN_TIME` |
+| `timing.shutdown` | `AMBASSADOR_SHUTDOWN_TIME` |
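+
+For example, timings previously set via `timing.*` now go into `env` (the values shown match the defaults documented in `values.yaml`):
+
+```yaml
+env:
+  AMBASSADOR_RESTART_TIME: 15
+  AMBASSADOR_DRAIN_TIME: 5
+  AMBASSADOR_SHUTDOWN_TIME: 10
+```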
+
+### Single namespace
+
+| Parameter | Env variables |
+| ------------------ | ----------------------------- |
+| `namespace.single` | `AMBASSADOR_SINGLE_NAMESPACE` |
+
+### Renamed values
+
+Service port values have been renamed, and target ports have new defaults.
+
+| Previous parameter | New parameter | New default value |
+| --------------------------- | -------------------------- | ----------------- |
+| `service.enableHttp` | `service.http.enabled` | |
+| `service.httpPort` | `service.http.port` | |
+| `service.httpNodePort` | `service.http.nodePort` | |
+| `service.targetPorts.http` | `service.http.targetPort` | `8080` |
+| `service.enableHttps` | `service.https.enabled` | |
+| `service.httpsPort` | `service.https.port` | |
+| `service.httpsNodePort` | `service.https.nodePort` | |
+| `service.targetPorts.https` | `service.https.targetPort` | `8443` |
+
+### Exporter sidecar
+
+Prior to version `0.50.0`, Ambassador used socat and required a sidecar to export statsd metrics. As of `0.50.0`, Ambassador no longer uses socat and does not need a sidecar to export its statsd metrics. Statsd metrics are disabled by default and can be enabled by setting the `STATSD_ENABLED` environment variable; in `0.50` this sends metrics to a service named `statsd-sink`. To send them to a different service or namespace, set `STATSD_HOST`.
+
+If you are using Prometheus, the chart lets you enable a sidecar that exports metrics to Prometheus; see the `prometheusExporter` values.
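+
+A minimal sketch of enabling the exporter sidecar in a values file (see the `prometheusExporter` values for image and mapping-configuration options):
+
+```yaml
+prometheusExporter:
+  enabled: true
+```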
diff --git a/stable/ambassador/ci/ci-values.yaml b/stable/ambassador/ci/ci-values.yaml
new file mode 100644
index 000000000000..c3dacb28717c
--- /dev/null
+++ b/stable/ambassador/ci/ci-values.yaml
@@ -0,0 +1,30 @@
+daemonSet: true
+rbac:
+ create: false
+
+prometheusExporter:
+ enabled: true
+
+env:
+ AMBASSADOR_SINGLE_NAMESPACE: true
+ AMBASSADOR_NO_KUBEWATCH: no_kubewatch
+
+volumes:
+ - name: nothing
+ emptyDir: {}
+
+volumeMounts:
+ - mountPath: /var/nothing
+ name: nothing
+ readOnly: true
+
+ambassadorConfig: |
+ apiVersion: ambassador/v1
+ kind: Module
+ name: ambassador
+ config:
+ service_port: 8080
+
+crds:
+ create: false
+ keep: false
diff --git a/stable/ambassador/ci/default-values.yaml b/stable/ambassador/ci/default-values.yaml
new file mode 100644
index 000000000000..538beacf9e21
--- /dev/null
+++ b/stable/ambassador/ci/default-values.yaml
@@ -0,0 +1,12 @@
+env:
+ AMBASSADOR_NO_KUBEWATCH: no_kubewatch
+
+ambassadorConfig: |
+ apiVersion: ambassador/v1
+ kind: Module
+ name: ambassador
+ config:
+ service_port: 8080
+
+crds:
+ keep: false
diff --git a/stable/ambassador/templates/NOTES.txt b/stable/ambassador/templates/NOTES.txt
new file mode 100644
index 000000000000..f84fd5f2862f
--- /dev/null
+++ b/stable/ambassador/templates/NOTES.txt
@@ -0,0 +1,26 @@
+Congratulations! You've successfully installed Ambassador.
+
+For help, visit our Slack at https://d6e.co/slack or view the documentation online at https://www.getambassador.io.
+
+To get the IP address of Ambassador, run the following commands:
+
+{{- if contains "NodePort" .Values.service.type }}
+ export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "ambassador.fullname" . }})
+ export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
+ echo http://$NODE_IP:$NODE_PORT
+{{- else if contains "LoadBalancer" .Values.service.type }}
+NOTE: It may take a few minutes for the LoadBalancer IP to be available.
+  You can watch its status by running 'kubectl get svc -w --namespace {{ .Release.Namespace }} {{ include "ambassador.fullname" . }}'
+
+ On GKE/Azure:
+ export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "ambassador.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
+
+ On AWS:
+ export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "ambassador.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
+
+ echo http://$SERVICE_IP:{{ .Values.service.port }}
+{{- else if contains "ClusterIP" .Values.service.type }}
+ export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app={{ include "ambassador.name" . }},release={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
+ echo "Visit http://127.0.0.1:8080 to use your application"
+  kubectl port-forward $POD_NAME 8080:{{ .Values.service.http.targetPort }}
+{{- end }}
diff --git a/stable/ambassador/templates/_helpers.tpl b/stable/ambassador/templates/_helpers.tpl
new file mode 100644
index 000000000000..6540f582fbfa
--- /dev/null
+++ b/stable/ambassador/templates/_helpers.tpl
@@ -0,0 +1,43 @@
+{{/* vim: set filetype=mustache: */}}
+{{/*
+Expand the name of the chart.
+*/}}
+{{- define "ambassador.name" -}}
+{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+
+{{/*
+Create a default fully qualified app name.
+We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
+If release name contains chart name it will be used as a full name.
+*/}}
+{{- define "ambassador.fullname" -}}
+{{- if .Values.fullnameOverride -}}
+{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
+{{- else -}}
+{{- $name := default .Chart.Name .Values.nameOverride -}}
+{{- if contains $name .Release.Name -}}
+{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
+{{- else -}}
+{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Create chart name and version as used by the chart label.
+*/}}
+{{- define "ambassador.chart" -}}
+{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+
+{{/*
+Create the name of the service account to use
+*/}}
+{{- define "ambassador.serviceAccountName" -}}
+{{- if .Values.serviceAccount.create -}}
+ {{ default (include "ambassador.fullname" .) .Values.serviceAccount.name }}
+{{- else -}}
+ {{ default "default" .Values.serviceAccount.name }}
+{{- end -}}
+{{- end -}}
diff --git a/stable/ambassador/templates/admin-service.yaml b/stable/ambassador/templates/admin-service.yaml
new file mode 100644
index 000000000000..1ad80e356868
--- /dev/null
+++ b/stable/ambassador/templates/admin-service.yaml
@@ -0,0 +1,24 @@
+{{- if .Values.adminService.create -}}
+apiVersion: v1
+kind: Service
+metadata:
+ name: {{ include "ambassador.fullname" . }}-admins
+ labels:
+ app.kubernetes.io/name: {{ include "ambassador.name" . }}
+ helm.sh/chart: {{ include "ambassador.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+spec:
+ type: {{ .Values.adminService.type }}
+ ports:
+ - port: {{ .Values.adminService.port }}
+ targetPort: admin
+ protocol: TCP
+ name: admin
+ {{- if (and (eq .Values.adminService.type "NodePort") (not (empty .Values.adminService.nodePort))) }}
+ nodePort: {{ .Values.adminService.nodePort }}
+ {{- end }}
+ selector:
+ app.kubernetes.io/name: {{ include "ambassador.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+{{- end -}}
diff --git a/stable/ambassador/templates/ambassador-pro-crd.yaml b/stable/ambassador/templates/ambassador-pro-crd.yaml
new file mode 100644
index 000000000000..1e5300c4d078
--- /dev/null
+++ b/stable/ambassador/templates/ambassador-pro-crd.yaml
@@ -0,0 +1,58 @@
+{{- if .Values.pro.enabled -}}
+apiVersion: apiextensions.k8s.io/v1beta1
+kind: CustomResourceDefinition
+metadata:
+ name: filterpolicies.getambassador.io
+spec:
+ group: getambassador.io
+ version: v1beta2
+ versions:
+ - name: v1beta2
+ served: true
+ storage: true
+ scope: Namespaced
+ names:
+ plural: filterpolicies
+ singular: filterpolicy
+ kind: FilterPolicy
+ shortNames:
+ - fp
+---
+apiVersion: apiextensions.k8s.io/v1beta1
+kind: CustomResourceDefinition
+metadata:
+ name: filters.getambassador.io
+spec:
+ group: getambassador.io
+ version: v1beta2
+ versions:
+ - name: v1beta2
+ served: true
+ storage: true
+ scope: Namespaced
+ names:
+ plural: filters
+ singular: filter
+ kind: Filter
+ shortNames:
+ - fil
+---
+apiVersion: apiextensions.k8s.io/v1beta1
+kind: CustomResourceDefinition
+metadata:
+ name: ratelimits.getambassador.io
+spec:
+ group: getambassador.io
+ version: v1beta1
+ versions:
+ - name: v1beta1
+ served: true
+ storage: true
+ scope: Namespaced
+ names:
+ plural: ratelimits
+ singular: ratelimit
+ kind: RateLimit
+ shortNames:
+ - rl
+{{- end -}}
diff --git a/stable/ambassador/templates/ambassador-pro-license-key-secret.yaml b/stable/ambassador/templates/ambassador-pro-license-key-secret.yaml
new file mode 100644
index 000000000000..05e59843c900
--- /dev/null
+++ b/stable/ambassador/templates/ambassador-pro-license-key-secret.yaml
@@ -0,0 +1,9 @@
+{{- if .Values.pro.licenseKey.secret -}}
+apiVersion: v1
+kind: Secret
+metadata:
+ name: ambassador-pro-license-key
+type: Opaque
+data:
+ key: {{ .Values.pro.licenseKey.value | b64enc }}
+{{- end -}}
diff --git a/stable/ambassador/templates/ambassador-pro-redis.yaml b/stable/ambassador/templates/ambassador-pro-redis.yaml
new file mode 100644
index 000000000000..3f02ba0db805
--- /dev/null
+++ b/stable/ambassador/templates/ambassador-pro-redis.yaml
@@ -0,0 +1,45 @@
+{{- if .Values.pro.enabled -}}
+apiVersion: v1
+kind: Service
+metadata:
+ name: {{ include "ambassador.fullname" . }}-pro-redis
+ labels:
+ app.kubernetes.io/name: {{ include "ambassador.name" . }}-pro-redis
+ helm.sh/chart: {{ include "ambassador.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+spec:
+ type: ClusterIP
+ ports:
+ - port: 6379
+ targetPort: 6379
+ selector:
+ app.kubernetes.io/name: {{ include "ambassador.fullname" . }}-pro-redis
+ app.kubernetes.io/instance: {{ .Release.Name }}
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: {{ include "ambassador.fullname" . }}-pro-redis
+ labels:
+ app.kubernetes.io/name: {{ include "ambassador.name" . }}-pro-redis
+ helm.sh/chart: {{ include "ambassador.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app.kubernetes.io/name: {{ include "ambassador.name" . }}-pro-redis
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ template:
+ metadata:
+ labels:
+ app.kubernetes.io/name: {{ include "ambassador.name" . }}-pro-redis
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ spec:
+ containers:
+ - name: redis
+ image: redis:5.0.1
+ restartPolicy: Always
+{{- end -}}
diff --git a/stable/ambassador/templates/ambassador-pro-service.yaml b/stable/ambassador/templates/ambassador-pro-service.yaml
new file mode 100644
index 000000000000..9fa98e618f4d
--- /dev/null
+++ b/stable/ambassador/templates/ambassador-pro-service.yaml
@@ -0,0 +1,37 @@
+{{- if .Values.pro.enabled -}}
+apiVersion: v1
+kind: Service
+metadata:
+ name: {{ include "ambassador.fullname" . }}-pro
+ labels:
+ service: ambassador-pro
+ app.kubernetes.io/name: {{ include "ambassador.name" . }}
+ helm.sh/chart: {{ include "ambassador.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ annotations:
+ getambassador.io/config: |
+ ---
+ apiVersion: ambassador/v1
+ kind: AuthService
+ name: ambassador-pro-auth
+ proto: grpc
+ auth_service: 127.0.0.1:{{ .Values.pro.ports.auth }}
+ allow_request_body: false # setting this to 'true' allows Plugin and External filters to access the body, but has performance overhead
+ ---
+ # This mapping needs to exist, but is never actually followed.
+ apiVersion: ambassador/v1
+ kind: Mapping
+ name: callback_mapping
+ prefix: /callback
+ service: NoTaReAlSeRvIcE
+ ---
+ apiVersion: ambassador/v1
+ kind: RateLimitService
+ name: ambassador-pro-ratelimit
+ service: 127.0.0.1:{{ .Values.pro.ports.ratelimit }}
+spec:
+ ports:
+ - name: ratelimit-grpc
+ port: 80
+{{- end -}}
diff --git a/stable/ambassador/templates/config.yaml b/stable/ambassador/templates/config.yaml
new file mode 100644
index 000000000000..7f8b836ed1cc
--- /dev/null
+++ b/stable/ambassador/templates/config.yaml
@@ -0,0 +1,14 @@
+{{- if .Values.ambassadorConfig }}
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: '{{ include "ambassador.fullname" . }}-file-config'
+ labels:
+ app.kubernetes.io/name: {{ include "ambassador.name" . }}
+ helm.sh/chart: {{ include "ambassador.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+data:
+ ambassadorConfig: |-
+ {{- .Values.ambassadorConfig | nindent 4 }}
+{{- end }}
diff --git a/stable/ambassador/templates/crds.yaml b/stable/ambassador/templates/crds.yaml
new file mode 100644
index 000000000000..49e79adb1192
--- /dev/null
+++ b/stable/ambassador/templates/crds.yaml
@@ -0,0 +1,180 @@
+{{- if .Values.crds.create }}
+---
+apiVersion: apiextensions.k8s.io/v1beta1
+kind: CustomResourceDefinition
+metadata:
+ name: authservices.getambassador.io
+ labels:
+ app.kubernetes.io/name: {{ include "ambassador.name" . }}
+ helm.sh/chart: {{ include "ambassador.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ {{ if .Values.crds.keep }}
+ annotations:
+ "helm.sh/resource-policy": keep
+ {{ end }}
+spec:
+ group: getambassador.io
+ version: v1
+ versions:
+ - name: v1
+ served: true
+ storage: true
+ scope: Namespaced
+ names:
+ plural: authservices
+ singular: authservice
+ kind: AuthService
+---
+apiVersion: apiextensions.k8s.io/v1beta1
+kind: CustomResourceDefinition
+metadata:
+ name: mappings.getambassador.io
+ labels:
+ app.kubernetes.io/name: {{ include "ambassador.name" . }}
+ helm.sh/chart: {{ include "ambassador.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ {{ if .Values.crds.keep }}
+ annotations:
+ "helm.sh/resource-policy": keep
+ {{ end }}
+spec:
+ group: getambassador.io
+ version: v1
+ versions:
+ - name: v1
+ served: true
+ storage: true
+ scope: Namespaced
+ names:
+ plural: mappings
+ singular: mapping
+ kind: Mapping
+---
+apiVersion: apiextensions.k8s.io/v1beta1
+kind: CustomResourceDefinition
+metadata:
+ name: modules.getambassador.io
+ labels:
+ app.kubernetes.io/name: {{ include "ambassador.name" . }}
+ helm.sh/chart: {{ include "ambassador.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+spec:
+ group: getambassador.io
+ version: v1
+ versions:
+ - name: v1
+ served: true
+ storage: true
+ scope: Namespaced
+ names:
+ plural: modules
+ singular: module
+ kind: Module
+---
+apiVersion: apiextensions.k8s.io/v1beta1
+kind: CustomResourceDefinition
+metadata:
+ name: ratelimitservices.getambassador.io
+ labels:
+ app.kubernetes.io/name: {{ include "ambassador.name" . }}
+ helm.sh/chart: {{ include "ambassador.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ {{ if .Values.crds.keep }}
+ annotations:
+ "helm.sh/resource-policy": keep
+ {{ end }}
+spec:
+ group: getambassador.io
+ version: v1
+ versions:
+ - name: v1
+ served: true
+ storage: true
+ scope: Namespaced
+ names:
+ plural: ratelimitservices
+ singular: ratelimitservice
+ kind: RateLimitService
+---
+apiVersion: apiextensions.k8s.io/v1beta1
+kind: CustomResourceDefinition
+metadata:
+ name: tcpmappings.getambassador.io
+ labels:
+ app.kubernetes.io/name: {{ include "ambassador.name" . }}
+ helm.sh/chart: {{ include "ambassador.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ {{ if .Values.crds.keep }}
+ annotations:
+ "helm.sh/resource-policy": keep
+ {{ end }}
+spec:
+ group: getambassador.io
+ version: v1
+ versions:
+ - name: v1
+ served: true
+ storage: true
+ scope: Namespaced
+ names:
+ plural: tcpmappings
+ singular: tcpmapping
+ kind: TCPMapping
+---
+apiVersion: apiextensions.k8s.io/v1beta1
+kind: CustomResourceDefinition
+metadata:
+ name: tlscontexts.getambassador.io
+ labels:
+ app.kubernetes.io/name: {{ include "ambassador.name" . }}
+ helm.sh/chart: {{ include "ambassador.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ {{ if .Values.crds.keep }}
+ annotations:
+ "helm.sh/resource-policy": keep
+ {{ end }}
+spec:
+ group: getambassador.io
+ version: v1
+ versions:
+ - name: v1
+ served: true
+ storage: true
+ scope: Namespaced
+ names:
+ plural: tlscontexts
+ singular: tlscontext
+ kind: TLSContext
+---
+apiVersion: apiextensions.k8s.io/v1beta1
+kind: CustomResourceDefinition
+metadata:
+ name: tracingservices.getambassador.io
+ labels:
+ app.kubernetes.io/name: {{ include "ambassador.name" . }}
+ helm.sh/chart: {{ include "ambassador.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ {{ if .Values.crds.keep }}
+ annotations:
+ "helm.sh/resource-policy": keep
+ {{ end }}
+spec:
+ group: getambassador.io
+ version: v1
+ versions:
+ - name: v1
+ served: true
+ storage: true
+ scope: Namespaced
+ names:
+ plural: tracingservices
+ singular: tracingservice
+ kind: TracingService
+{{- end }}
diff --git a/stable/ambassador/templates/deployment.yaml b/stable/ambassador/templates/deployment.yaml
new file mode 100644
index 000000000000..94770959defe
--- /dev/null
+++ b/stable/ambassador/templates/deployment.yaml
@@ -0,0 +1,199 @@
+apiVersion: apps/v1
+{{- if .Values.daemonSet }}
+kind: DaemonSet
+{{- else }}
+kind: Deployment
+{{- end }}
+metadata:
+ name: {{ include "ambassador.fullname" . }}
+ labels:
+ app.kubernetes.io/name: {{ include "ambassador.name" . }}
+ helm.sh/chart: {{ include "ambassador.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+spec:
+{{- if not .Values.daemonSet }}
+ replicas: {{ .Values.replicaCount }}
+{{- end }}
+ selector:
+ matchLabels:
+ app.kubernetes.io/name: {{ include "ambassador.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ template:
+ metadata:
+ labels:
+ app.kubernetes.io/name: {{ include "ambassador.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ {{- if .Values.podLabels }}
+ {{- toYaml .Values.podLabels | nindent 8 }}
+ {{- end }}
+ annotations:
+ checksum/config: {{ include (print $.Template.BasePath "/config.yaml") . | sha256sum }}
+ {{- if .Values.podAnnotations }}
+ {{- toYaml .Values.podAnnotations | nindent 8 }}
+ {{- end }}
+ spec:
+ {{- with .Values.securityContext }}
+ securityContext:
+ {{- toYaml . | nindent 8 }}
+ {{- end }}
+ serviceAccountName: {{ include "ambassador.serviceAccountName" . }}
+ volumes:
+ {{- if .Values.prometheusExporter.enabled }}
+ - name: stats-exporter-mapping-config
+ configMap:
+ name: {{ include "ambassador.fullname" . }}-exporter-config
+ items:
+ - key: exporterConfiguration
+ path: mapping-config.yaml
+ {{- end }}
+ {{- if .Values.ambassadorConfig }}
+ - name: ambassador-config
+ configMap:
+ name: {{ include "ambassador.fullname" . }}-file-config
+ items:
+ - key: ambassadorConfig
+ path: ambassador-config.yaml
+ {{- end }}
+ {{- with .Values.volumes }}
+ {{- toYaml . | nindent 8 }}
+ {{- end }}
+ {{- with .Values.initContainers }}
+ initContainers:
+ {{- toYaml . | nindent 8 }}
+ {{- end }}
+ containers:
+ {{- if .Values.prometheusExporter.enabled }}
+ - name: prometheus-exporter
+ image: "{{ .Values.prometheusExporter.repository }}:{{ .Values.prometheusExporter.tag }}"
+ imagePullPolicy: {{ .Values.prometheusExporter.pullPolicy }}
+ ports:
+ - name: metrics
+ containerPort: 9102
+ - name: listener
+ containerPort: 8125
+ args:
+ - --statsd.listen-udp=:8125
+ - --web.listen-address=:9102
+ - --statsd.mapping-config=/statsd-exporter/mapping-config.yaml
+ volumeMounts:
+ - name: stats-exporter-mapping-config
+ mountPath: /statsd-exporter/
+ readOnly: true
+ {{- end }}
+ - name: {{ .Chart.Name }}
+ image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
+ imagePullPolicy: {{ .Values.image.pullPolicy }}
+ ports:
+ {{- if .Values.service.http.enabled }}
+ - name: http
+ containerPort: {{ .Values.service.http.targetPort }}
+ {{- end }}
+ {{- if .Values.service.https.enabled }}
+ - name: https
+ containerPort: {{ .Values.service.https.targetPort }}
+ {{- end }}
+ - name: admin
+ containerPort: 8877
+ env:
+ - name: HOST_IP
+ valueFrom:
+ fieldRef:
+ fieldPath: status.hostIP
+ {{- if .Values.prometheusExporter.enabled }}
+ - name: STATSD_ENABLED
+ value: "true"
+ - name: STATSD_HOST
+ value: "localhost"
+ {{- end }}
+ {{- if .Values.scope.singleNamespace }}
+ - name: AMBASSADOR_SINGLE_NAMESPACE
+ value: "YES"
+ {{- end }}
+ - name: AMBASSADOR_NAMESPACE
+ {{- if .Values.namespace }}
+ value: {{ .Values.namespace.name | quote }}
+ {{ else }}
+ valueFrom:
+ fieldRef:
+ fieldPath: metadata.namespace
+ {{- end -}}
+ {{- if .Values.env }}
+ {{- range $key,$value := .Values.env }}
+ - name: {{ $key | upper | quote}}
+ value: {{ $value | quote}}
+ {{- end }}
+ {{- end }}
+ livenessProbe:
+ httpGet:
+ path: /ambassador/v0/check_alive
+ port: admin
+ initialDelaySeconds: 30
+ periodSeconds: 3
+ readinessProbe:
+ httpGet:
+ path: /ambassador/v0/check_ready
+ port: admin
+ initialDelaySeconds: 30
+ periodSeconds: 3
+ volumeMounts:
+ {{- if .Values.ambassadorConfig }}
+ - name: ambassador-config
+ mountPath: /ambassador/ambassador-config/ambassador-config.yaml
+ subPath: ambassador-config.yaml
+ {{- end }}
+ {{- with .Values.volumeMounts }}
+ {{- toYaml . | nindent 12 }}
+ {{- end }}
+ resources:
+ {{- toYaml .Values.resources | nindent 12 }}
+ {{- if .Values.pro.enabled }}
+ - name: ambassador-pro
+ image: "{{ .Values.pro.image.repository }}:{{ .Values.pro.image.tag }}"
+ ports:
+ - name: grpc-auth
+ containerPort: {{ .Values.pro.ports.auth }}
+ - name: grpc-ratelimit
+ containerPort: {{ .Values.pro.ports.ratelimit }}
+ - name: http-debug
+ containerPort: {{ .Values.pro.ports.ratelimitDebug }}
+ env:
+ - name: REDIS_SOCKET_TYPE
+ value: tcp
+ - name: REDIS_URL
+ value: ambassador-pro-redis:6379
+ - name: APRO_AUTH_PORT
+ value: "{{ .Values.pro.ports.auth }}"
+ - name: GRPC_PORT
+ value: "{{ .Values.pro.ports.ratelimit }}"
+ - name: DEBUG_PORT
+ value: "{{ .Values.pro.ports.ratelimitDebug }}"
+ - name: APP_LOG_LEVEL
+ value: "{{ .Values.pro.logLevel }}"
+ - name: AMBASSADOR_LICENSE_KEY
+ {{- if .Values.pro.licenseKey.secret }}
+ valueFrom:
+ secretKeyRef:
+ name: ambassador-pro-license-key
+ key: key
+ {{ else }}
+ value: {{ .Values.pro.licenseKey.value }}
+ {{- end }}
+ {{- end }}
+ {{- with .Values.nodeSelector }}
+ nodeSelector:
+ {{- toYaml . | nindent 8 }}
+ {{- end }}
+ {{- with .Values.affinity }}
+ affinity:
+ {{- toYaml . | nindent 8 }}
+ {{- end }}
+ {{- with .Values.tolerations }}
+ tolerations:
+ {{- toYaml . | nindent 8 }}
+ {{- end }}
+ imagePullSecrets:
+ {{- toYaml .Values.imagePullSecrets | nindent 8 }}
+ dnsPolicy: {{ .Values.dnsPolicy }}
+ hostNetwork: {{ .Values.hostNetwork }}
+
diff --git a/stable/ambassador/templates/exporter-config.yaml b/stable/ambassador/templates/exporter-config.yaml
new file mode 100644
index 000000000000..735da2e62f16
--- /dev/null
+++ b/stable/ambassador/templates/exporter-config.yaml
@@ -0,0 +1,17 @@
+{{- if .Values.prometheusExporter.enabled }}
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: '{{ include "ambassador.fullname" . }}-exporter-config'
+ labels:
+ app.kubernetes.io/name: {{ include "ambassador.name" . }}
+ helm.sh/chart: {{ include "ambassador.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+data:
+ exporterConfiguration:
+{{- if .Values.prometheusExporter.configuration }} |
+ {{- .Values.prometheusExporter.configuration | nindent 4 }}
+{{- else }} ''
+{{- end }}
+{{- end }}
diff --git a/stable/ambassador/templates/hpa.yaml b/stable/ambassador/templates/hpa.yaml
new file mode 100644
index 000000000000..1eb22faed666
--- /dev/null
+++ b/stable/ambassador/templates/hpa.yaml
@@ -0,0 +1,20 @@
+{{- if and .Values.autoscaling.enabled (not .Values.daemonSet) }}
+apiVersion: autoscaling/v2beta2
+kind: HorizontalPodAutoscaler
+metadata:
+ name: {{ include "ambassador.fullname" . }}
+ labels:
+ app.kubernetes.io/name: {{ include "ambassador.name" . }}
+ helm.sh/chart: {{ include "ambassador.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+spec:
+ scaleTargetRef:
+ apiVersion: apps/v1
+ kind: Deployment
+ name: {{ include "ambassador.fullname" . }}
+ minReplicas: {{ .Values.autoscaling.minReplicas }}
+ maxReplicas: {{ .Values.autoscaling.maxReplicas }}
+ metrics:
+ {{- toYaml .Values.autoscaling.metrics | nindent 4 }}
+{{- end }}
diff --git a/stable/ambassador/templates/rbac.yaml b/stable/ambassador/templates/rbac.yaml
new file mode 100644
index 000000000000..5c58cfcdbb1c
--- /dev/null
+++ b/stable/ambassador/templates/rbac.yaml
@@ -0,0 +1,52 @@
+{{- if .Values.rbac.create -}}
+apiVersion: rbac.authorization.k8s.io/v1beta1
+{{- if .Values.rbac.namespaced }}
+kind: Role
+{{- else }}
+kind: ClusterRole
+{{- end }}
+metadata:
+ name: {{ include "ambassador.fullname" . }}
+ labels:
+ app.kubernetes.io/name: {{ include "ambassador.name" . }}
+ helm.sh/chart: {{ include "ambassador.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+rules:
+ - apiGroups: [""]
+ resources:
+ - namespaces
+ - services
+ - secrets
+ - endpoints
+ verbs: ["get", "list", "watch"]
+ - apiGroups: [ "getambassador.io" ]
+ resources: [ "*" ]
+ verbs: ["get", "list", "watch"]
+---
+apiVersion: rbac.authorization.k8s.io/v1beta1
+{{- if .Values.rbac.namespaced }}
+kind: RoleBinding
+{{- else }}
+kind: ClusterRoleBinding
+{{- end }}
+metadata:
+ name: {{ include "ambassador.fullname" . }}
+ labels:
+ app.kubernetes.io/name: {{ include "ambassador.name" . }}
+ helm.sh/chart: {{ include "ambassador.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ {{- if .Values.rbac.namespaced }}
+ kind: Role
+ {{- else }}
+ kind: ClusterRole
+ {{- end }}
+ name: {{ include "ambassador.fullname" . }}
+subjects:
+ - name: {{ include "ambassador.serviceAccountName" . }}
+ namespace: {{ .Release.Namespace | quote }}
+ kind: ServiceAccount
+{{- end -}}
diff --git a/stable/ambassador/templates/service.yaml b/stable/ambassador/templates/service.yaml
new file mode 100644
index 000000000000..59b07688c5c9
--- /dev/null
+++ b/stable/ambassador/templates/service.yaml
@@ -0,0 +1,47 @@
+apiVersion: v1
+kind: Service
+metadata:
+ name: {{ include "ambassador.fullname" . }}
+ labels:
+ app.kubernetes.io/name: {{ include "ambassador.name" . }}
+ helm.sh/chart: {{ include "ambassador.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ {{- with .Values.service.annotations }}
+ annotations:
+ {{- toYaml . | nindent 4 }}
+ {{- end }}
+spec:
+ type: {{ .Values.service.type }}
+ {{- if .Values.service.loadBalancerIP }}
+ loadBalancerIP: "{{ .Values.service.loadBalancerIP }}"
+ {{- end }}
+ {{- if .Values.service.externalTrafficPolicy }}
+ externalTrafficPolicy: "{{ .Values.service.externalTrafficPolicy }}"
+ {{- end }}
+ ports:
+ {{- if .Values.service.http.enabled }}
+ - port: {{ .Values.service.http.port }}
+ targetPort: {{ .Values.service.http.targetPort }}
+ protocol: TCP
+ name: http
+ {{- with .Values.service.http.nodePort }}
+ nodePort: {{ toYaml . }}
+ {{- end }}
+ {{- end }}
+ {{- if .Values.service.https.enabled }}
+ - port: {{ .Values.service.https.port }}
+ targetPort: {{ .Values.service.https.targetPort }}
+ protocol: TCP
+ name: https
+ {{- with .Values.service.https.nodePort }}
+ nodePort: {{ toYaml . }}
+ {{- end }}
+ {{- end }}
+ selector:
+ app.kubernetes.io/name: {{ include "ambassador.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ {{- with .Values.service.loadBalancerSourceRanges }}
+ loadBalancerSourceRanges:
+ {{- toYaml . | nindent 4 }}
+ {{- end }}
diff --git a/stable/ambassador/templates/serviceaccount.yaml b/stable/ambassador/templates/serviceaccount.yaml
new file mode 100644
index 000000000000..4708472a1f72
--- /dev/null
+++ b/stable/ambassador/templates/serviceaccount.yaml
@@ -0,0 +1,11 @@
+{{- if .Values.serviceAccount.create -}}
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: {{ include "ambassador.serviceAccountName" . }}
+ labels:
+ app.kubernetes.io/name: {{ include "ambassador.name" . }}
+ helm.sh/chart: {{ include "ambassador.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+{{- end -}}
diff --git a/stable/ambassador/templates/tests/test-ready.yaml b/stable/ambassador/templates/tests/test-ready.yaml
new file mode 100644
index 000000000000..73f4bbb94e60
--- /dev/null
+++ b/stable/ambassador/templates/tests/test-ready.yaml
@@ -0,0 +1,20 @@
+{{- if not .Values.daemonSet }}
+apiVersion: v1
+kind: Pod
+metadata:
+ name: "{{ include "ambassador.fullname" . }}-test-ready"
+ labels:
+ app.kubernetes.io/name: {{ include "ambassador.name" . }}
+ helm.sh/chart: {{ include "ambassador.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ annotations:
+ "helm.sh/hook": test-success
+spec:
+ containers:
+ - name: wget
+ image: busybox
+ command: ['wget']
+ args: ['{{ include "ambassador.fullname" . }}:{{ .Values.service.http.port }}/ambassador/v0/check_ready']
+ restartPolicy: Never
+{{- end }}
diff --git a/stable/ambassador/values.yaml b/stable/ambassador/values.yaml
new file mode 100644
index 000000000000..241d69face83
--- /dev/null
+++ b/stable/ambassador/values.yaml
@@ -0,0 +1,179 @@
+# Default values for ambassador.
+# This is a YAML-formatted file.
+# Declare variables to be passed into your templates.
+
+replicaCount: 3
+daemonSet: false
+
+# Enable autoscaling using HorizontalPodAutoscaler
+# If daemonSet: true, autoscaling will be disabled
+autoscaling:
+ enabled: false
+ minReplicas: 2
+ maxReplicas: 5
+ metrics:
+ - type: Resource
+ resource:
+ name: cpu
+ targetAverageUtilization: 60
+ - type: Resource
+ resource:
+ name: memory
+ targetAverageUtilization: 60
+
+# namespace:
+ # name: default
+
+# Additional container environment variables
+env:
+ {}
+ # Exposing statistics via StatsD
+ # STATSD_ENABLED: true
+ # STATSD_HOST: statsd-sink
+ # sets the minimum number of seconds between Envoy restarts
+ # AMBASSADOR_RESTART_TIME: 15
+ # sets the number of seconds that the Envoy will wait for open connections to drain on a restart
+ # AMBASSADOR_DRAIN_TIME: 5
+ # sets the number of seconds that Ambassador will wait for the old Envoy to clean up and exit on a restart
+ # AMBASSADOR_SHUTDOWN_TIME: 10
+ # labels Ambassador with an ID to allow for configuring multiple Ambassadors in a cluster
+ # AMBASSADOR_ID: default
+
+imagePullSecrets: []
+
+securityContext:
+ runAsUser: 8888
+
+image:
+ repository: quay.io/datawire/ambassador
+ tag: 0.70.1
+ pullPolicy: IfNotPresent
+
+nameOverride: ""
+fullnameOverride: ""
+dnsPolicy: "ClusterFirst"
+hostNetwork: false
+
+service:
+ type: LoadBalancer
+
+ # Note that the target http ports need to match your Ambassador configuration's service_port
+ # https://www.getambassador.io/reference/modules/#the-ambassador-module
+ http:
+ enabled: true
+ port: 80
+ targetPort: 8080
+ # nodePort: 30080
+
+ https:
+ enabled: true
+ port: 443
+ targetPort: 8443
+ # nodePort: 30443
+
+ annotations:
+ getambassador.io/config: |
+ ---
+ apiVersion: ambassador/v1
+ kind: Module
+ name: ambassador
+ config:
+ service_port: 8080
+ # diagnostics:
+ # enabled: false
+
+ # externalTrafficPolicy:
+ # loadBalancerSourceRanges:
+ # - YOUR_IP_RANGE
+
+adminService:
+ create: true
+ type: ClusterIP
+ port: 8877
+ # NodePort used if type is NodePort
+ # nodePort: 38877
+
+rbac:
+ # Specifies whether RBAC resources should be created
+ create: true
+ namespaced: false
+
+scope:
+ # tells Ambassador to only use resources in its own namespace, or in the namespace set by namespace.name
+ singleNamespace: false
+
+serviceAccount:
+ # Specifies whether a service account should be created
+ create: true
+ # The name of the service account to use.
+ # If not set and create is true, a name is generated using the fullname template
+ name:
+
+initContainers: []
+
+volumes: []
+
+volumeMounts: []
+
+podLabels:
+ {}
+
+podAnnotations:
+ {}
+ # prometheus.io/scrape: "true"
+ # prometheus.io/port: "9102"
+
+resources:
+ {}
+ # If you do want to specify resources, uncomment the following
+ # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
+ # limits:
+ # cpu: 100m
+ # memory: 128Mi
+ # requests:
+ # cpu: 100m
+ # memory: 128Mi
+
+nodeSelector: {}
+
+tolerations: []
+
+affinity: {}
+
+# Enabling the prometheus exporter creates a sidecar and configures ambassador to use it
+prometheusExporter:
+ enabled: false
+ repository: prom/statsd-exporter
+ tag: v0.8.1
+ pullPolicy: IfNotPresent
+ # You can configure the statsd exporter to modify the behavior of mappings and other features.
+ # See documentation: https://github.com/prometheus/statsd_exporter/tree/v0.8.1#metric-mapping-and-configuration
+ # Uncomment the following line if you wish to specify a custom configuration:
+ # configuration: |
+ # ---
+ # mappings:
+ # - match: 'envoy.cluster.*.upstream_cx_connect_ms'
+ # name: "envoy_cluster_upstream_cx_connect_time"
+ # timer_type: 'histogram'
+ # labels:
+ # cluster_name: "$1"
+
+ambassadorConfig: ""
+
+pro:
+ enabled: false
+ image:
+ repository: quay.io/datawire/ambassador_pro
+ tag: amb-sidecar-0.4.0
+ ports:
+ auth: 8500
+ ratelimit: 8501
+ ratelimitDebug: 8502
+ logLevel: info
+ licenseKey:
+ value:
+ secret: false
+
+crds:
+ create: true
+ keep: true
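
As a sketch of how these defaults compose, a custom values file that enables autoscaling and the Prometheus exporter (the numbers are illustrative, not recommendations) could look like:

```yaml
## custom-values.yaml (illustrative)
autoscaling:
  enabled: true          # ignored when daemonSet: true
  minReplicas: 3
  maxReplicas: 10
prometheusExporter:
  enabled: true
podAnnotations:
  prometheus.io/scrape: "true"
  prometheus.io/port: "9102"
```

and be applied with e.g. `helm install --name <release_name> -f custom-values.yaml stable/ambassador`.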
diff --git a/stable/anchore-engine/Chart.yaml b/stable/anchore-engine/Chart.yaml
index a346f699811a..9db039acc044 100644
--- a/stable/anchore-engine/Chart.yaml
+++ b/stable/anchore-engine/Chart.yaml
@@ -1,6 +1,7 @@
+apiVersion: v1
name: anchore-engine
-version: 0.11.0
-appVersion: 0.3.2
+version: 1.0.3
+appVersion: 0.4.0
description: Anchore container analysis and policy evaluation engine service
keywords:
- analysis
@@ -9,6 +10,8 @@ keywords:
- "anchore-engine"
- image
- security
+ - vulnerability
+ - scanner
home: https://anchore.com
sources:
- https://github.com/anchore/anchore-engine
diff --git a/stable/anchore-engine/README.md b/stable/anchore-engine/README.md
index 1a1417cd918f..81eebf0a04fa 100644
--- a/stable/anchore-engine/README.md
+++ b/stable/anchore-engine/README.md
@@ -2,7 +2,7 @@
This chart deploys the Anchore Engine docker container image analysis system. Anchore Engine requires a PostgreSQL database (>=9.6) which may be handled by the chart or supplied externally, and executes in a service based architecture utilizing the following Anchore Engine services: External API, Simplequeue, Catalog, Policy Engine, and Analyzer.
-This chart can also be used to install the following Anchore Enterprise services: GUI, RBAC, On-prem Feeds. Enterprise services require a valid Anchore Enterprise License as well as credentials with access to the private dockerhub repository hosting the images. These are not enabled by default.
+This chart can also be used to install the following Anchore Enterprise services: GUI, RBAC, Reporting, & On-premises Feeds. Enterprise services require a valid Anchore Enterprise License as well as credentials with access to the private Docker Hub repository hosting the images. These are not enabled by default.
Each of these services can be scaled and configured independently.
@@ -16,21 +16,71 @@ The chart is split into global and service specific configurations for the OSS A
* The `anchoreEnterpriseGlobal` section is for configuration values required by all Anchore Engine Enterprise components.
* Service specific configuration values allow customization for each individual service.
-For a description of each component, view the official documentation at: [Anchore Enterprise Service Overview](https://anchore.freshdesk.com/support/solutions/articles/36000098518-enterprise-service-overview-and-architecture)
+For a description of each component, view the official documentation at: [Anchore Enterprise Service Overview](https://docs.anchore.com/current/docs/overview/architecture/)
-## Installing the Anchore Engine OSS Chart
+## Installing the Anchore Engine Helm Chart
TL;DR - `helm install stable/anchore-engine`
 Anchore Engine will take approximately 3 minutes to bootstrap. After the initial bootstrap period, Anchore Engine will begin a vulnerability feed sync. During this time, image analysis will show zero vulnerabilities until the sync is completed. This sync can take multiple hours depending on which feeds are enabled. The following anchore-cli command polls the system and reports back when the engine is bootstrapped and the vulnerability feeds are all synced: `anchore-cli system wait`
+The recommended way to install the Anchore Engine Helm Chart is with a customized values file and a custom release name. It is highly recommended to set non-default passwords when deploying; otherwise, all passwords are set to the defaults specified in the chart. It is also recommended to utilize an external database rather than the included PostgreSQL chart.
-The recommended way to install the Anchore Engine Chart is with a customized values file and a custom release name. Create a new file named `anchore_values.yaml` and add all desired custom values (examples below); then run the following command:
+Create a new file named `anchore_values.yaml` and add all desired custom values (examples below); then run the following command:
 `helm install --name <release_name> -f anchore_values.yaml stable/anchore-engine`
-*Note: It is highly recommended to set non-default passwords when deploying. All passwords are set to defaults specified in the chart.*
+##### Example anchore_values.yaml - using chart managed PostgreSQL service with custom passwords.
+*Note: Installs with a chart-managed PostgreSQL database. This is not a guaranteed production-ready config.*
+ ```
+ ## anchore_values.yaml
+
+ postgresql:
+ postgresPassword:
+ persistence:
+ size: 50Gi
+
+ anchoreGlobal:
+ defaultAdminPassword:
+ defaultAdminEmail:
+ ```
+
+## Adding Enterprise Components
+
+ The following features are available to Anchore Enterprise customers. For more information about obtaining a license for the enterprise features, contact the Anchore team or request an [Anchore Enterprise Demo](https://anchore.com/demo/)
+
+ * Role based access control
+ * LDAP integration
+ * Graphical user interface
+ * Customizable UI dashboards
+ * On-premises feeds service
+ * Proprietary vulnerability data feed
+ * Anchore reporting API
+
+### Enabling Enterprise Services
+Enterprise services require an Anchore Enterprise license, as well as credentials with
+permission to the private docker repositories that contain the enterprise images.
+
+To use this Helm chart with the enterprise services enabled, perform these steps.
+
+1. Create a kubernetes secret containing your license file.
+
+    `kubectl create secret generic anchore-enterprise-license --from-file=license.yaml=<path/to/license.yaml>`
+
+1. Create a kubernetes secret containing dockerhub credentials with access to the private anchore enterprise repositories.
+
+    `kubectl create secret docker-registry anchore-enterprise-pullcreds --docker-server=docker.io --docker-username=<username> --docker-password=<password> --docker-email=<email>`
+
+1. (demo) Install the Helm chart using default values
+
+ `helm fetch stable/anchore-engine --untar && helm install --name enterprise stable/anchore-engine -f anchore-engine/enterprise_values.yaml`
+
+1. (production) Install the Helm chart using a custom anchore_values.yaml file - *see examples below*
+
+    `helm install --name <release_name> -f /path/to/anchore_values.yaml stable/anchore-engine`
+
+##### Example anchore_values.yaml - installing Anchore Enterprise
+*Note: Installs with chart-managed PostgreSQL & Redis databases. This is not a guaranteed production-ready config.*
-##### Install using chart managed PostgreSQL service with custom passwords.
```
## anchore_values.yaml
@@ -42,12 +92,78 @@ The recommended way to install the Anchore Engine Chart is with a customized val
anchoreGlobal:
defaultAdminPassword:
defaultAdminEmail:
+ enableMetrics: True
+
+ anchoreEnterpriseGlobal:
+ enabled: True
+
+ anchore-feeds-db:
+ postgresPassword:
+
+ anchore-ui-redis:
+ password:
```
+
+## Upgrading to Chart version 1.0.0
+The following features were added with this chart version:
+ * Rootless UBI 7 base image
+ * Analyzer image layer caching
+ * Enterprise UI dashboards
+ * Enterprise LDAP integration
+ * Enterprise Reporting API
+
+Scratch volume configs for the analyzer component & the enterprise-feeds component have been moved to the anchoreGlobal section. Update your values.yaml file to reflect this change.
+
+#### Chart v0.13.0 scratch volume config
+```
+anchoreAnalyzer:
+ scratchVolume:
+ mountPath: /analysis_scratch
+ details:
+ # Specify volume configuration here
+ emptyDir: {}
+
+anchoreEnterpriseFeeds:
+ scratchVolume:
+ mountPath: /analysis_scratch
+ details:
+ # Specify volume configuration here
+ emptyDir: {}
+```
+
+#### Chart v1.0.0 scratch volume config
+```
+anchoreGlobal:
+ scratchVolume:
+ mountPath: /analysis_scratch
+ details:
+ # Specify volume configuration here
+ emptyDir: {}
+```
+
+## Upgrading to Chart version 0.12.0
+The Redis dependency chart's major version was updated to v6.1.3; see the Redis chart README for upgrade instructions.
+
+The ingress configuration has been consolidated into a single global section, which should make the ingress resource easier to manage. Before performing an upgrade, ensure you update your custom values file to reflect this change.
+
+#### Chart v0.12.0 ingress config
+```
+ingress:
+ enabled: true
+ annotations:
+ kubernetes.io/ingress.class: gce
+ apiPath: /v1/*
+ uiPath: /*
+ apiHosts:
+ - anchore-api.example.com
+ uiHosts:
+ - anchore-ui.example.com
+```
+
## Upgrading to Chart version 0.11.0
The image map has been removed in all configuration sections in favor of individual keys. This should make configuration for tools like skaffold simpler. If using a custom values file, update your `image.repository`, `image.tag`, & `image.pullPolicy` values with `image` & `imagePullPolicy`.
-##### v0.11.0 image config
-
+#### Chart v0.11.0 image config
```
anchoreGlobal:
image: docker.io/anchore/anchore-engine:v0.3.2
@@ -69,16 +185,7 @@ Ingress resources have been changed to work natively with NGINX ingress controll
Service configs have been moved from the anchoreGlobal section, to individual component sections in the values.yaml file. If you're upgrading from a previous install and are using custom ports or serviceTypes, be sure to update your values.yaml file accordingly.
-##### v0.9.0 service config
-
-```
-anchoreGlobal:
- service:
- type: ClusterIP
- apiPort: 8228
-```
-
-##### v0.10.0 service config
+#### Chart v0.10.0 service config
```
anchoreApi:
service:
@@ -107,44 +214,73 @@ All configurations should be appended to your custom `anchore_values.yaml` file
#### Using Ingress
-This configuration allows SSL termination at the LB.
-
-*Note: Ingress controllers can use custom hosts or paths for routing requests. Custom paths or hosts should be set in the corresponding component configuration - anchoreEnterpriseUI.ingress or anchoreApi.ingress*
+This configuration allows SSL termination using your chosen ingress controller.
##### NGINX Ingress Controller
```
-anchoreGlobal:
+ingress:
+ enabled: true
+```
+
+##### ALB Ingress Controller
+```
ingress:
enabled: true
+ annotations:
+ kubernetes.io/ingress.class: alb
+ alb.ingress.kubernetes.io/scheme: internet-facing
+ apiPath: /v1/*
+ uiPath: /*
+ apiHosts:
+ - anchore-api.example.com
+ uiHosts:
+ - anchore-ui.example.com
+
+ anchoreApi:
+ service:
+ type: NodePort
+
+ anchoreEnterpriseUi:
+   service:
+ type: NodePort
```
##### GCE Ingress Controller
```
- anchoreGlobal:
- ingress:
- enabled: true
- annotations: null
+ ingress:
+ enabled: true
+ annotations:
+ kubernetes.io/ingress.class: gce
+ apiPath: /v1/*
+ uiPath: /*
+ apiHosts:
+ - anchore-api.example.com
+ uiHosts:
+ - anchore-ui.example.com
anchoreApi:
- ingress:
- path: /v1/*
service:
type: NodePort
anchoreEnterpriseUi:
- ingress:
- path: /*
     service:
type: NodePort
```
-##### Using Service Type
+#### Using Service Type
```
anchoreApi:
service:
type: LoadBalancer
```
+### Utilize an Existing Secret
+This can be used to override the default secrets.yaml provided by the chart:
+```
+anchoreGlobal:
+ existingSecret: "foo-bar"
+```
+
### Install using an existing/external PostgreSQL instance
 *Note: it is recommended to use an external PostgreSQL instance for production installs*
@@ -161,6 +297,24 @@ anchoreGlobal:
ssl: true
```
+### Install using Google CloudSQL
+ ```
+ ## anchore_values.yaml
+ postgresql:
+ enabled: false
+ postgresPassword:
+ postgresUser:
+ postgresDatabase:
+
+ cloudsql:
+ enabled: true
+ instance: "project:zone:cloudsqlinstancename"
+ image:
+ repository: gcr.io/cloudsql-docker/gce-proxy
+ tag: 1.12
+ pullPolicy: IfNotPresent
+ ```
+
### Archive Driver
*Note: it is recommended to use an external archive driver for production installs.*
@@ -324,56 +478,3 @@ To set a specific number of service containers:
To update the number in a running configuration:
 `helm upgrade --set anchoreAnalyzer.replicaCount=2 <release_name> stable/anchore-engine -f anchore_values.yaml`
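
Equivalently, per-component replica counts can be pinned in the custom values file. The component keys below follow this chart's per-service configuration sections; the counts themselves are illustrative:

```yaml
## anchore_values.yaml
anchoreAnalyzer:
  replicaCount: 2
anchoreApi:
  replicaCount: 2
```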
-
-## Adding Enterprise Components
-
- The following features are available to Anchore Enterprise customers. Please contact the Anchore team for more information about getting a license for the enterprise features. [Anchore Enterprise Demo](https://anchore.com/demo/)
-
- * Role based access control
- * Graphical User Interface
- * On-prem feeds service
- * Snyk vulnerability data
-
-### Enabling Enterprise Services
-Enterprise services require an Anchore Enterprise license, as well as credentials with
-permission to the private docker repositories that contain the enterprise images.
-
-To use this Helm chart with the enterprise services enabled, perform these steps.
-
-1. Create a kubernetes secret containing your license file.
-
- `kubectl create secret generic anchore-enterprise-license --from-file=license.yaml=`
-
-1. Create a kubernetes secret containing dockerhub credentials with access to the private anchore enterprise repositories.
-
- `kubectl create secret docker-registry anchore-enterprise-pullcreds --docker-server=docker.io --docker-username= --docker-password= --docker-email=`
-
-1. Install the helm chart using a custom anchore_values.yaml file (see examples below)
-
- `helm install --name -f /path/to/anchore_values.yaml stable/anchore-engine`
-
-##### Example anchore_values.yaml file for installing Anchore Enterprise
-*Note: This installs with chart managed PostgreSQL & Redis databases. This is not a production ready config.*
-
- ```
- ## anchore_values.yaml
-
- postgresql:
- postgresPassword:
- persistence:
- size: 50Gi
-
- anchoreGlobal:
- defaultAdminPassword:
- defaultAdminEmail:
- enableMetrics: True
-
- anchoreEnterpriseGlobal:
- enabled: True
-
- anchore-feeds-db:
- postgresPassword:
-
- anchore-ui-redis:
- password:
- ```
diff --git a/stable/anchore-engine/enterprise_values.yaml b/stable/anchore-engine/enterprise_values.yaml
new file mode 100644
index 000000000000..2d29b42b1d39
--- /dev/null
+++ b/stable/anchore-engine/enterprise_values.yaml
@@ -0,0 +1,4 @@
+anchoreEnterpriseGlobal:
+ enabled: true
+
+
diff --git a/stable/anchore-engine/requirements.lock b/stable/anchore-engine/requirements.lock
index 82c75c6a50ba..75a774f0840e 100644
--- a/stable/anchore-engine/requirements.lock
+++ b/stable/anchore-engine/requirements.lock
@@ -7,6 +7,6 @@ dependencies:
version: 1.0.0
- name: redis
repository: https://kubernetes-charts.storage.googleapis.com
- version: 5.1.0
-digest: sha256:c72be0f60c6cb3d764e444e77a51eae11beb0b782bde8c528cb61783dab18e67
-generated: 2018-12-05T18:50:35.229545-08:00
+ version: 6.4.5
+digest: sha256:468485c2a122e03e69f112be6e0fe631f048124ce96fb0548138d64eef46b16b
+generated: 2019-05-06T16:49:00.766734-07:00
diff --git a/stable/anchore-engine/requirements.yaml b/stable/anchore-engine/requirements.yaml
index 2d06c9fdcbed..26f538c6cbb8 100644
--- a/stable/anchore-engine/requirements.yaml
+++ b/stable/anchore-engine/requirements.yaml
@@ -11,7 +11,7 @@ dependencies:
alias: anchore-feeds-db
- name: redis
- version: "*"
+ version: "6"
repository: "alias:stable"
condition: anchore-ui-redis.enabled,anchoreEnterpriseGlobal.enabled
alias: anchore-ui-redis
diff --git a/stable/anchore-engine/templates/NOTES.txt b/stable/anchore-engine/templates/NOTES.txt
index 9f893e65c565..254cfa0edd8f 100644
--- a/stable/anchore-engine/templates/NOTES.txt
+++ b/stable/anchore-engine/templates/NOTES.txt
@@ -9,8 +9,8 @@ To configure your anchore-cli run:
ANCHORE_CLI_USER=admin
ANCHORE_CLI_PASS=$(kubectl get secret --namespace {{ .Release.Namespace }} {{ template "anchore-engine.fullname" . }} -o jsonpath="{.data.ANCHORE_ADMIN_PASSWORD}" | base64 --decode; echo)
-{{ if .Values.anchoreApi.ingress.enabled }}
- ANCHORE_CLI_URL=http://$(kubectl get ingress --namespace {{ .Release.Namespace }} {{ template "anchore-engine.api.fullname" . }} -o jsonpath="{.status.loadBalancer.ingress[0].ip}")/v1/
+{{ if .Values.ingress.enabled }}
+ ANCHORE_CLI_URL=http://$(kubectl get ingress --namespace {{ .Release.Namespace }} {{ template "anchore-engine.fullname" . }} -o jsonpath="{.status.loadBalancer.ingress[0].ip}")/v1/
{{ else }}
Using the service endpoint from within the cluster you can use:
ANCHORE_CLI_URL=http://{{ template "anchore-engine.api.fullname" . }}.{{ .Release.Namespace }}.svc.cluster.local:{{ .Values.anchoreApi.service.port}}/v1/
@@ -25,7 +25,7 @@ from within the container you can use 'anchore-cli' commands.
* NOTE: On first startup of anchore-engine, it performs a CVE data sync which may take several minutes to complete. During this time the system status will report 'partially_down' and any images added for analysis will stay in the 'not_analyzed' state.
Once the sync is complete, any queued images will be analyzed and the system status will change to 'all_up'.
-Initial setup time can be >60sec for postgresql setup and readiness checks to pass for the services as indicated by pod state. You can check with:
+Initial setup time can be >120sec for postgresql setup and readiness checks to pass for the services as indicated by pod state. You can check with:
kubectl get pods -l app={{ template "anchore-engine.fullname" .}},component=api
diff --git a/stable/anchore-engine/templates/_helpers.tpl b/stable/anchore-engine/templates/_helpers.tpl
index 6a9244c83111..9afc6445ceb2 100755
--- a/stable/anchore-engine/templates/_helpers.tpl
+++ b/stable/anchore-engine/templates/_helpers.tpl
@@ -87,6 +87,15 @@ We truncate at 63 chars because some Kubernetes name fields are limited to this
{{- printf "%s-%s-%s" .Release.Name $name "enterprise-feeds"| trunc 63 | trimSuffix "-" -}}
{{- end -}}
+{{/*
+Create a default fully qualified app name.
+We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
+*/}}
+{{- define "anchore-engine.enterprise-reports.fullname" -}}
+{{- $name := default .Chart.Name .Values.nameOverride -}}
+{{- printf "%s-%s-%s" .Release.Name $name "enterprise-reports"| trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+
{{/*
Create a default fully qualified dependency name for the db.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
diff --git a/stable/anchore-engine/templates/analyzer_deployment.yaml b/stable/anchore-engine/templates/analyzer_deployment.yaml
index 646e776ef6ea..38296215727f 100644
--- a/stable/anchore-engine/templates/analyzer_deployment.yaml
+++ b/stable/anchore-engine/templates/analyzer_deployment.yaml
@@ -20,23 +20,36 @@ spec:
labels:
app: "{{ template "anchore-engine.fullname" . }}"
component: {{ $component }}
-{{- if .Values.anchoreAnalyzer.annotations }}
+ {{- with .Values.anchoreAnalyzer.annotations }}
annotations:
-{{ toYaml .Values.anchoreAnalyzer.annotations | indent 8 }}
-{{- end }}
+ {{ toYaml . | nindent 8 | trim }}
+ {{- end }}
spec:
containers:
+ {{- if .Values.cloudsql.enabled }}
+ - name: cloudsql-proxy
+ image: {{ .Values.cloudsql.image.repository }}:{{ .Values.cloudsql.image.tag }}
+ imagePullPolicy: {{ .Values.cloudsql.image.pullPolicy }}
+ command: ["/cloud_sql_proxy"]
+ args: ["-instances={{ .Values.cloudsql.instance }}=tcp:5432"]
+ {{- end }}
- name: {{ .Chart.Name }}-{{ $component }}
image: {{ .Values.anchoreGlobal.image }}
imagePullPolicy: {{ .Values.anchoreGlobal.imagePullPolicy }}
- command: ["/usr/local/bin/anchore-manager"]
+ command: ["anchore-manager"]
args: ["service", "start", "analyzer"]
envFrom:
- secretRef:
- name: {{ template "anchore-engine.fullname" . }}
+ name: {{ default (include "anchore-engine.fullname" .) .Values.anchoreGlobal.existingSecret }}
- configMapRef:
name: {{ template "anchore-engine.fullname" . }}
env:
+ {{- with .Values.anchoreGlobal.extraEnv }}
+ {{- toYaml . | nindent 8 | trim }}
+ {{- end }}
+ {{- with .Values.anchoreAnalyzer.extraEnv }}
+ {{- toYaml . | nindent 8 | trim }}
+ {{- end }}
- name: ANCHORE_POD_NAME
valueFrom:
fieldRef:
@@ -50,11 +63,11 @@ spec:
subPath: config.yaml
{{- if .Values.anchoreGlobal.internalServicesSslEnabled }}
- name: certs
- mountPath: {{ default "/certs" .Values.anchoreGlobal.internalServicesSsl.certDir }}
+ mountPath: {{ .Values.anchoreGlobal.internalServicesSsl.certDir }}
readOnly: true
{{- end }}
- - name: analysis-scratch
- mountPath: {{ .Values.anchoreAnalyzer.scratchVolume.mountPath }}
+ - name: {{ $component }}-scratch
+ mountPath: {{ .Values.anchoreGlobal.scratchVolume.mountPath }}
livenessProbe:
httpGet:
path: /health
@@ -73,7 +86,7 @@ spec:
failureThreshold: 3
successThreshold: 1
resources:
-{{ toYaml .Values.anchoreAnalyzer.resources | indent 10 }}
+ {{ toYaml .Values.anchoreAnalyzer.resources | nindent 10 | trim }}
volumes:
- name: config-volume
configMap:
@@ -83,17 +96,17 @@ spec:
secret:
secretName: {{ .Values.anchoreGlobal.internalServicesSsl.certSecret }}
{{- end }}
- - name: analysis-scratch
-{{ toYaml .Values.anchoreAnalyzer.scratchVolume.details | indent 10 }}
- {{- if .Values.anchoreAnalyzer.nodeSelector }}
+ - name: {{ $component }}-scratch
+ {{ toYaml .Values.anchoreGlobal.scratchVolume.details | indent 10 | trim }}
+ {{- with .Values.anchoreAnalyzer.nodeSelector }}
nodeSelector:
-{{ toYaml .Values.anchoreAnalyzer.nodeSelector | indent 8 }}
- {{- end }}
- {{- with .Values.anchoreAnalyzer.affinity }}
+ {{ toYaml . | nindent 8 | trim }}
+ {{- end }}
+ {{- with .Values.anchoreAnalyzer.affinity }}
affinity:
-{{ toYaml . | indent 8 }}
- {{- end }}
- {{- with .Values.anchoreAnalyzer.tolerations }}
+ {{ toYaml . | nindent 8 | trim }}
+ {{- end }}
+ {{- with .Values.anchoreAnalyzer.tolerations }}
tolerations:
-{{ toYaml . | indent 8 }}
- {{- end }}
+ {{ toYaml . | nindent 8 | trim }}
+ {{- end }}
diff --git a/stable/anchore-engine/templates/api_deployment.yaml b/stable/anchore-engine/templates/api_deployment.yaml
index c297707be432..f843fc7722e7 100644
--- a/stable/anchore-engine/templates/api_deployment.yaml
+++ b/stable/anchore-engine/templates/api_deployment.yaml
@@ -20,40 +20,40 @@ spec:
labels:
app: {{ template "anchore-engine.fullname" . }}
component: {{ $component }}
-{{- if .Values.anchoreApi.annotations }}
+ {{- with .Values.anchoreApi.annotations }}
annotations:
-{{ toYaml .Values.anchoreApi.annotations | indent 8 }}
-{{- end }}
+ {{ toYaml . | nindent 8 | trim }}
+ {{- end }}
spec:
- volumes:
- - name: config-volume
- configMap:
- name: {{ template "anchore-engine.fullname" . }}
- {{- if and .Values.anchoreEnterpriseGlobal.enabled .Values.anchoreEnterpriseRbac.enabled }}
- - name: anchore-license
- secret:
- secretName: {{ .Values.anchoreEnterpriseGlobal.licenseSecretName }}
- - name: rbac-config-volume
- configMap:
- name: {{ template "anchore-engine.enterprise.fullname" . }}
- {{- end}}
- {{- if .Values.anchoreGlobal.internalServicesSslEnabled }}
- - name: certs
- secret:
- secretName: {{ .Values.anchoreGlobal.internalServicesSsl.certSecret }}
- {{- end }}
+ {{ if and .Values.anchoreEnterpriseGlobal.enabled (or .Values.anchoreEnterpriseRbac.enabled .Values.anchoreEnterpriseReports.enabled) }}
+ imagePullSecrets:
+ - name: {{ .Values.anchoreEnterpriseGlobal.imagePullSecretName }}
+ {{- end }}
containers:
+ {{- if .Values.cloudsql.enabled }}
+ - name: cloudsql-proxy
+ image: {{ .Values.cloudsql.image.repository }}:{{ .Values.cloudsql.image.tag }}
+ imagePullPolicy: {{ .Values.cloudsql.image.pullPolicy }}
+ command: ["/cloud_sql_proxy"]
+ args: ["-instances={{ .Values.cloudsql.instance }}=tcp:5432"]
+ {{- end }}
- name: "{{ .Chart.Name }}-{{ $component }}"
image: {{ .Values.anchoreGlobal.image }}
imagePullPolicy: {{ .Values.anchoreGlobal.imagePullPolicy }}
- command: ["/usr/local/bin/anchore-manager"]
+ command: ["anchore-manager"]
args: ["service", "start", "apiext"]
envFrom:
- secretRef:
- name: {{ template "anchore-engine.fullname" . }}
+ name: {{ default (include "anchore-engine.fullname" .) .Values.anchoreGlobal.existingSecret }}
- configMapRef:
name: {{ template "anchore-engine.fullname" . }}
env:
+ {{- with .Values.anchoreGlobal.extraEnv }}
+ {{- toYaml . | nindent 8 | trim }}
+ {{- end }}
+ {{- with .Values.anchoreApi.extraEnv }}
+ {{- toYaml . | nindent 8 | trim }}
+ {{- end }}
- name: ANCHORE_POD_NAME
valueFrom:
fieldRef:
@@ -67,7 +67,7 @@ spec:
subPath: config.yaml
{{- if .Values.anchoreGlobal.internalServicesSslEnabled }}
- name: certs
- mountPath: {{ default "/certs" .Values.anchoreGlobal.internalServicesSsl.certDir }}
+ mountPath: {{ .Values.anchoreGlobal.internalServicesSsl.certDir }}
readOnly: true
{{- end }}
livenessProbe:
@@ -88,20 +88,25 @@ spec:
failureThreshold: 3
successThreshold: 1
resources:
-{{ toYaml .Values.anchoreApi.resources | indent 10 }}
-
+ {{ toYaml .Values.anchoreApi.resources | nindent 10 | trim }}
{{- if and .Values.anchoreEnterpriseGlobal.enabled .Values.anchoreEnterpriseRbac.enabled }}
- name: {{ .Chart.Name }}-rbac-manager
image: {{ .Values.anchoreEnterpriseGlobal.image }}
imagePullPolicy: {{ .Values.anchoreEnterpriseGlobal.imagePullPolicy }}
- command: ["/usr/local/bin/anchore-enterprise-manager"]
+ command: ["anchore-enterprise-manager"]
args: ["service", "start", "rbac_manager"]
envFrom:
- secretRef:
- name: {{ template "anchore-engine.fullname" . }}
+ name: {{ default (include "anchore-engine.fullname" .) .Values.anchoreGlobal.existingSecret }}
- configMapRef:
name: {{ template "anchore-engine.fullname" . }}
env:
+ {{- with .Values.anchoreGlobal.extraEnv }}
+ {{- toYaml . | nindent 10 | trim }}
+ {{- end }}
+ {{- with .Values.anchoreEnterpriseRbac.extraEnv }}
+ {{- toYaml . | nindent 10 | trim }}
+ {{- end }}
- name: ANCHORE_POD_NAME
valueFrom:
fieldRef:
@@ -111,14 +116,14 @@ spec:
name: rbac-manager
volumeMounts:
- name: anchore-license
- mountPath: /license.yaml
+ mountPath: /home/anchore/license.yaml
subPath: license.yaml
- - name: rbac-config-volume
+ - name: enterprise-config-volume
mountPath: /config/config.yaml
subPath: config.yaml
{{- if .Values.anchoreGlobal.internalServicesSslEnabled }}
- name: certs
- mountPath: {{ default "/certs" .Values.anchoreGlobal.internalServicesSsl.certDir }}
+ mountPath: {{ .Values.anchoreGlobal.internalServicesSsl.certDir }}
readOnly: true
{{- end }}
livenessProbe:
@@ -139,19 +144,24 @@ spec:
failureThreshold: 3
successThreshold: 1
resources:
-{{ toYaml .Values.anchoreEnterpriseRbac.managerResources | indent 10 }}
-
+ {{ toYaml .Values.anchoreEnterpriseRbac.managerResources | nindent 10 | trim }}
- name: {{ .Chart.Name }}-rbac-authorizer
image: {{ .Values.anchoreEnterpriseGlobal.image }}
imagePullPolicy: {{ .Values.anchoreEnterpriseGlobal.imagePullPolicy }}
- command: ["/usr/local/bin/anchore-enterprise-manager"]
+ command: ["anchore-enterprise-manager"]
args: ["service", "start", "rbac_authorizer"]
envFrom:
- secretRef:
- name: {{ template "anchore-engine.fullname" . }}
+ name: {{ default (include "anchore-engine.fullname" .) .Values.anchoreGlobal.existingSecret }}
- configMapRef:
name: {{ template "anchore-engine.fullname" . }}
env:
+ {{- with .Values.anchoreGlobal.extraEnv }}
+ {{- toYaml . | nindent 10 | trim }}
+ {{- end }}
+ {{- with .Values.anchoreEnterpriseRbac.extraEnv }}
+ {{- toYaml . | nindent 10 | trim }}
+ {{- end }}
- name: ANCHORE_POD_NAME
valueFrom:
fieldRef:
@@ -161,14 +171,14 @@ spec:
name: rbac-auth
volumeMounts:
- name: anchore-license
- mountPath: /license.yaml
+ mountPath: /home/anchore/license.yaml
subPath: license.yaml
- - name: rbac-config-volume
+ - name: enterprise-config-volume
mountPath: /config/config.yaml
subPath: config.yaml
{{- if .Values.anchoreGlobal.internalServicesSslEnabled }}
- name: certs
- mountPath: {{ default "/certs" .Values.anchoreGlobal.internalServicesSsl.certDir }}
+ mountPath: {{ .Values.anchoreGlobal.internalServicesSsl.certDir }}
readOnly: true
{{- end }}
livenessProbe:
@@ -193,23 +203,93 @@ spec:
failureThreshold: 3
successThreshold: 1
resources:
-{{ toYaml .Values.anchoreEnterpriseRbac.authResources | indent 10 }}
-
- imagePullSecrets:
- - name: {{ .Values.anchoreEnterpriseGlobal.imagePullSecretName }}
+ {{ toYaml .Values.anchoreEnterpriseRbac.authResources | nindent 10 | trim }}
{{- end }}
-
- {{- if .Values.anchoreApi.nodeSelector }}
+ {{- if and .Values.anchoreEnterpriseGlobal.enabled .Values.anchoreEnterpriseReports.enabled }}
+ - name: "{{ .Chart.Name }}-reports"
+ image: {{ .Values.anchoreEnterpriseGlobal.image }}
+ imagePullPolicy: {{ .Values.anchoreEnterpriseGlobal.imagePullPolicy }}
+ command: ["anchore-enterprise-manager"]
+ args: ["service", "start", "reports"]
+ ports:
+ - containerPort: {{ .Values.anchoreEnterpriseReports.service.port }}
+ name: reports-api
+ envFrom:
+ - secretRef:
+ name: {{ template "anchore-engine.fullname" . }}
+ - configMapRef:
+ name: {{ template "anchore-engine.fullname" . }}
+ env:
+ {{- with .Values.anchoreGlobal.extraEnv }}
+ {{- toYaml . | nindent 10 | trim }}
+ {{- end }}
+ {{- with .Values.anchoreEnterpriseReports.extraEnv }}
+ {{- toYaml . | nindent 10 | trim }}
+ {{- end }}
+ - name: ANCHORE_POD_NAME
+ valueFrom:
+ fieldRef:
+ fieldPath: metadata.name
+ volumeMounts:
+ - name: enterprise-config-volume
+ mountPath: /config/config.yaml
+ subPath: config.yaml
+ - name: anchore-license
+ mountPath: /home/anchore/license.yaml
+ subPath: license.yaml
+ {{- if .Values.anchoreGlobal.internalServicesSslEnabled }}
+ - name: certs
+ mountPath: {{ .Values.anchoreGlobal.internalServicesSsl.certDir }}
+ readOnly: true
+ {{- end }}
+ livenessProbe:
+ httpGet:
+ path: /health
+ port: reports-api
+ initialDelaySeconds: 120
+ timeoutSeconds: 10
+ periodSeconds: 10
+ failureThreshold: 6
+ successThreshold: 1
+ readinessProbe:
+ httpGet:
+ path: /health
+ port: reports-api
+ timeoutSeconds: 10
+ periodSeconds: 10
+ failureThreshold: 3
+ successThreshold: 1
+ resources:
+ {{ toYaml .Values.anchoreEnterpriseReports.resources | nindent 10 | trim }}
+ {{- end }}
+ volumes:
+ - name: config-volume
+ configMap:
+ name: {{ template "anchore-engine.fullname" . }}
+ {{ if and .Values.anchoreEnterpriseGlobal.enabled (or .Values.anchoreEnterpriseRbac.enabled .Values.anchoreEnterpriseReports.enabled) }}
+ - name: anchore-license
+ secret:
+ secretName: {{ .Values.anchoreEnterpriseGlobal.licenseSecretName }}
+ - name: enterprise-config-volume
+ configMap:
+ name: {{ template "anchore-engine.enterprise.fullname" . }}
+ {{- end }}
+ {{- if .Values.anchoreGlobal.internalServicesSslEnabled }}
+ - name: certs
+ secret:
+ secretName: {{ .Values.anchoreGlobal.internalServicesSsl.certSecret }}
+ {{- end }}
+ {{- with .Values.anchoreApi.nodeSelector }}
nodeSelector:
-{{ toYaml .Values.anchoreApi.nodeSelector | indent 8 }}
+ {{ toYaml . | nindent 8 | trim }}
{{- end }}
{{- with .Values.anchoreApi.affinity }}
affinity:
-{{ toYaml . | indent 8 }}
+ {{ toYaml . | nindent 8 | trim }}
{{- end }}
{{- with .Values.anchoreApi.tolerations }}
tolerations:
-{{ toYaml . | indent 8 }}
+ {{ toYaml . | nindent 8 | trim }}
{{- end }}
---
@@ -223,9 +303,9 @@ metadata:
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
component: {{ $component }}
- {{- if .Values.anchoreApi.service.annotations }}
+ {{- with .Values.anchoreApi.service.annotations }}
annotations:
-{{ toYaml .Values.anchoreApi.service.annotations | indent 4 }}
+ {{ toYaml . | nindent 4 | trim }}
{{- end }}
spec:
type: {{ .Values.anchoreApi.service.type }}
@@ -234,12 +314,18 @@ spec:
port: {{ .Values.anchoreApi.service.port }}
targetPort: {{ .Values.anchoreApi.service.port }}
protocol: TCP
- {{- if and .Values.anchoreEnterpriseGlobal.enabled .Values.anchoreEnterpriseRbac.enabled }}
+ {{- if and .Values.anchoreEnterpriseGlobal.enabled .Values.anchoreEnterpriseRbac.enabled }}
- name: anchore-rbac-manager
port: {{ .Values.anchoreEnterpriseRbac.service.apiPort }}
targetPort: {{ .Values.anchoreEnterpriseRbac.service.apiPort }}
protocol: TCP
- {{- end }}
+ {{- end }}
+ {{- if and .Values.anchoreEnterpriseGlobal.enabled .Values.anchoreEnterpriseReports.enabled }}
+ - name: reports-api
+ port: {{ .Values.anchoreEnterpriseReports.service.port }}
+ targetPort: {{ .Values.anchoreEnterpriseReports.service.port }}
+ protocol: TCP
+ {{- end }}
selector:
app: {{ template "anchore-engine.fullname" . }}
component: {{ $component }}
diff --git a/stable/anchore-engine/templates/catalog_deployment.yaml b/stable/anchore-engine/templates/catalog_deployment.yaml
index 9e4107d3694a..33d38539ef85 100644
--- a/stable/anchore-engine/templates/catalog_deployment.yaml
+++ b/stable/anchore-engine/templates/catalog_deployment.yaml
@@ -20,23 +20,36 @@ spec:
labels:
app: {{ template "anchore-engine.fullname" . }}
component: {{ $component }}
-{{- if .Values.anchoreCatalog.annotations }}
+ {{- with .Values.anchoreCatalog.annotations }}
annotations:
-{{ toYaml .Values.anchoreCatalog.annotations | indent 8 }}
-{{- end }}
+ {{ toYaml . | nindent 8 | trim }}
+ {{- end }}
spec:
containers:
+ {{- if .Values.cloudsql.enabled }}
+ - name: cloudsql-proxy
+ image: {{ .Values.cloudsql.image.repository }}:{{ .Values.cloudsql.image.tag }}
+ imagePullPolicy: {{ .Values.cloudsql.image.pullPolicy }}
+ command: ["/cloud_sql_proxy"]
+ args: ["-instances={{ .Values.cloudsql.instance }}=tcp:5432"]
+ {{- end }}
- name: {{ .Chart.Name }}-{{ $component }}
image: {{ .Values.anchoreGlobal.image }}
imagePullPolicy: {{ .Values.anchoreGlobal.imagePullPolicy }}
- command: ["/usr/local/bin/anchore-manager"]
+ command: ["anchore-manager"]
args: ["service", "start", "catalog"]
envFrom:
- secretRef:
- name: {{ template "anchore-engine.fullname" . }}
+ name: {{ default (include "anchore-engine.fullname" .) .Values.anchoreGlobal.existingSecret }}
- configMapRef:
name: {{ template "anchore-engine.fullname" . }}
env:
+ {{- with .Values.anchoreGlobal.extraEnv }}
+ {{- toYaml . | nindent 8 | trim }}
+ {{- end }}
+ {{- with .Values.anchoreCatalog.extraEnv }}
+ {{- toYaml . | nindent 8 | trim }}
+ {{- end }}
- name: ANCHORE_POD_NAME
valueFrom:
fieldRef:
@@ -50,7 +63,7 @@ spec:
subPath: config.yaml
{{- if .Values.anchoreGlobal.internalServicesSslEnabled }}
- name: certs
- mountPath: {{ default "/certs" .Values.anchoreGlobal.internalServicesSsl.certDir }}
+ mountPath: {{ .Values.anchoreGlobal.internalServicesSsl.certDir }}
readOnly: true
{{- end }}
livenessProbe:
@@ -71,7 +84,7 @@ spec:
failureThreshold: 3
successThreshold: 1
resources:
-{{ toYaml .Values.anchoreCatalog.resources | indent 10 }}
+ {{ toYaml .Values.anchoreCatalog.resources | nindent 10 | trim }}
volumes:
- name: config-volume
configMap:
@@ -81,18 +94,18 @@ spec:
secret:
secretName: {{ .Values.anchoreGlobal.internalServicesSsl.certSecret }}
{{- end }}
- {{- if .Values.anchoreCatalog.nodeSelector }}
+ {{- with .Values.anchoreCatalog.nodeSelector }}
nodeSelector:
-{{ toYaml .Values.anchoreCatalog.nodeSelector | indent 8 }}
- {{- end }}
- {{- with .Values.anchoreCatalog.affinity }}
+ {{ toYaml . | nindent 8 | trim }}
+ {{- end }}
+ {{- with .Values.anchoreCatalog.affinity }}
affinity:
-{{ toYaml . | indent 8 }}
- {{- end }}
- {{- with .Values.anchoreCatalog.tolerations }}
+ {{ toYaml . | nindent 8 | trim }}
+ {{- end }}
+ {{- with .Values.anchoreCatalog.tolerations }}
tolerations:
-{{ toYaml . | indent 8 }}
- {{- end }}
+ {{ toYaml . | nindent 8 | trim }}
+ {{- end }}
---
apiVersion: v1
@@ -105,9 +118,9 @@ metadata:
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
component: {{ $component }}
- {{- if .Values.anchoreCatalog.service.annotations }}
+ {{- with .Values.anchoreCatalog.service.annotations }}
annotations:
-{{ toYaml .Values.anchoreCatalog.service.annotations | indent 4 }}
+ {{ toYaml . | nindent 4 | trim }}
{{- end }}
spec:
type: {{ .Values.anchoreCatalog.service.type }}
diff --git a/stable/anchore-engine/templates/engine_configmap.yaml b/stable/anchore-engine/templates/engine_configmap.yaml
index 5190a656405f..ba90bfb31aac 100644
--- a/stable/anchore-engine/templates/engine_configmap.yaml
+++ b/stable/anchore-engine/templates/engine_configmap.yaml
@@ -12,6 +12,8 @@ data:
ANCHORE_DB_USER: {{ index .Values "postgresql" "postgresUser" | quote }}
{{- if and (index .Values "postgresql" "externalEndpoint") (not (index .Values "postgresql" "enabled")) }}
ANCHORE_DB_HOST: {{ index .Values "postgresql" "externalEndpoint" | quote }}
+ {{- else if and (index .Values "cloudsql" "enabled") (not (index .Values "postgresql" "enabled")) }}
+ ANCHORE_DB_HOST: "localhost:5432"
{{- else }}
ANCHORE_DB_HOST: "{{ template "postgres.fullname" . }}:5432"
{{- end }}
@@ -20,7 +22,7 @@ data:
# Anchore Service Configuration File from ConfigMap
service_dir: {{ .Values.anchoreGlobal.serviceDir }}
- tmp_dir: {{ default "/scratch" .Values.anchoreAnalyzer.scratchVolume.mountPath }}
+ tmp_dir: {{ .Values.anchoreGlobal.scratchVolume.mountPath }}
log_level: {{ .Values.anchoreGlobal.logLevel }}
cleanup_images: {{ .Values.anchoreGlobal.cleanupImages }}
@@ -37,11 +39,10 @@ data:
#
{{ if .Values.anchoreGlobal.webhooksEnabled }}
webhooks:
-{{ toYaml .Values.anchoreGlobal.webhooks | indent 6 }}
+ {{ toYaml .Values.anchoreGlobal.webhooks | nindent 6 | trim }}
{{ end }}
- # Configure what feeds to sync. The 'admin' anchoreIO credentials are used if present, but not required.
- # The 'anonymous' user is used for the sync otherwise.
+ # Configure what feeds to sync.
# The sync will hit http://ancho.re/feeds, if any outbound firewall config needs to be set in your environment.
feeds:
sync_enabled: true
@@ -85,7 +86,7 @@ data:
credentials:
database:
- db_connect: "postgresql+pg8000://${ANCHORE_DB_USER}:${ANCHORE_DB_PASSWORD}@${ANCHORE_DB_HOST}/${ANCHORE_DB_NAME}"
+ db_connect: "postgresql://${ANCHORE_DB_USER}:${ANCHORE_DB_PASSWORD}@${ANCHORE_DB_HOST}/${ANCHORE_DB_NAME}"
db_connect_args:
timeout: {{ .Values.anchoreGlobal.dbConfig.timeout }}
ssl: {{ .Values.anchoreGlobal.dbConfig.ssl }}
@@ -115,9 +116,15 @@ data:
port: {{ .Values.anchoreAnalyzer.containerPort }}
cycle_timer_seconds: 1
cycle_timers:
-{{ toYaml .Values.anchoreAnalyzer.cycleTimers | indent 10 }}
+ {{ toYaml .Values.anchoreAnalyzer.cycleTimers | nindent 10 | trim }}
max_threads: {{ .Values.anchoreAnalyzer.concurrentTasksPerWorker }}
analyzer_driver: 'nodocker'
+ {{- if gt .Values.anchoreAnalyzer.layerCacheMaxGigabytes 0.0 }}
+ layer_cache_enable: true
+ {{- else }}
+ layer_cache_enable: false
+ {{- end }}
+ layer_cache_max_gigabytes: {{ .Values.anchoreAnalyzer.layerCacheMaxGigabytes }}
ssl_cert: "{{ .Values.anchoreGlobal.internalServicesSsl.certDir -}}/{{- .Values.anchoreGlobal.internalServicesSsl.certSecretCertName }}"
ssl_key: "{{ .Values.anchoreGlobal.internalServicesSsl.certDir -}}/{{ .Values.anchoreGlobal.internalServicesSsl.certSecretKeyName }}"
ssl_enable: {{ .Values.anchoreGlobal.internalServicesSslEnabled }}
@@ -129,14 +136,14 @@ data:
port: {{ .Values.anchoreCatalog.service.port }}
cycle_timer_seconds: 1
cycle_timers:
-{{ toYaml .Values.anchoreCatalog.cycleTimers | indent 10 }}
+ {{ toYaml .Values.anchoreCatalog.cycleTimers | nindent 10 | trim }}
ssl_enable: {{ .Values.anchoreGlobal.internalServicesSslEnabled }}
ssl_cert: "{{ .Values.anchoreGlobal.internalServicesSsl.certDir -}}/{{- .Values.anchoreGlobal.internalServicesSsl.certSecretCertName }}"
ssl_key: "{{ .Values.anchoreGlobal.internalServicesSsl.certDir -}}/{{- .Values.anchoreGlobal.internalServicesSsl.certSecretKeyName }}"
event_log:
-{{ toYaml .Values.anchoreCatalog.events | indent 10 }}
+ {{ toYaml .Values.anchoreCatalog.events | nindent 10 | trim }}
archive:
-{{ toYaml .Values.anchoreCatalog.archive | indent 10 }}
+ {{ toYaml .Values.anchoreCatalog.archive | nindent 10 | trim }}
simplequeue:
enabled: true
require_auth: true
@@ -154,7 +161,7 @@ data:
port: {{ .Values.anchorePolicyEngine.service.port }}
cycle_timer_seconds: 1
cycle_timers:
-{{ toYaml .Values.anchorePolicyEngine.cycleTimers | indent 10 }}
+ {{ toYaml .Values.anchorePolicyEngine.cycleTimers | nindent 10 | trim }}
ssl_cert: "{{ .Values.anchoreGlobal.internalServicesSsl.certDir -}}/{{- .Values.anchoreGlobal.internalServicesSsl.certSecretCertName }}"
ssl_key: "{{ .Values.anchoreGlobal.internalServicesSsl.certDir -}}/{{- .Values.anchoreGlobal.internalServicesSsl.certSecretKeyName }}"
- ssl_enable: {{ .Values.anchoreGlobal.internalServicesSslEnabled }}
+ ssl_enable: {{ .Values.anchoreGlobal.internalServicesSslEnabled }}
\ No newline at end of file
diff --git a/stable/anchore-engine/templates/rbac_configmap.yaml b/stable/anchore-engine/templates/enterprise_configmap.yaml
similarity index 68%
rename from stable/anchore-engine/templates/rbac_configmap.yaml
rename to stable/anchore-engine/templates/enterprise_configmap.yaml
index 7f88a93f238e..9554abe6d753 100644
--- a/stable/anchore-engine/templates/rbac_configmap.yaml
+++ b/stable/anchore-engine/templates/enterprise_configmap.yaml
@@ -1,5 +1,5 @@
-{{- if and .Values.anchoreEnterpriseGlobal.enabled .Values.anchoreEnterpriseRbac.enabled -}}
-{{- $component := "enterprise-rbac" -}}
+{{- if and .Values.anchoreEnterpriseGlobal.enabled (or .Values.anchoreEnterpriseRbac.enabled .Values.anchoreEnterpriseReports.enabled) -}}
+{{- $component := "enterprise" -}}
apiVersion: v1
kind: ConfigMap
metadata:
@@ -18,7 +18,7 @@ data:
# be altered for basic operation
#
service_dir: {{ .Values.anchoreGlobal.serviceDir }}
- tmp_dir: {{ default "/scratch" .Values.anchoreAnalyzer.scratchVolume.mountPath }}
+ tmp_dir: {{ .Values.anchoreGlobal.scratchVolume.mountPath }}
log_level: {{ .Values.anchoreGlobal.logLevel }}
cleanup_images: {{ .Values.anchoreGlobal.cleanupImages }}
@@ -26,14 +26,14 @@ data:
host_id: "${ANCHORE_POD_NAME}"
internal_ssl_verify: {{ .Values.anchoreGlobal.internalServicesSsl.verifyCerts }}
auto_restart_services: False
- license_file: "/license.yaml"
+ license_file: /home/anchore/license.yaml
metrics:
enabled: {{ .Values.anchoreGlobal.enableMetrics }}
credentials:
database:
- db_connect: "postgresql+pg8000://${ANCHORE_DB_USER}:${ANCHORE_DB_PASSWORD}@${ANCHORE_DB_HOST}/${ANCHORE_DB_NAME}"
+ db_connect: "postgresql://${ANCHORE_DB_USER}:${ANCHORE_DB_PASSWORD}@${ANCHORE_DB_HOST}/${ANCHORE_DB_NAME}"
db_connect_args:
timeout: {{ .Values.anchoreGlobal.dbConfig.timeout }}
ssl: {{ .Values.anchoreGlobal.dbConfig.ssl }}
@@ -43,8 +43,8 @@ data:
services:
# This should never be exposed outside of linked containers/localhost. It is used only for internal service access
rbac_authorizer:
- enabled: True
- require_auth: True
+ enabled: true
+ require_auth: true
endpoint_hostname: localhost
listen: 127.0.0.1
port: {{ .Values.anchoreEnterpriseRbac.service.authPort }}
@@ -52,8 +52,8 @@ data:
ssl_key: "{{ .Values.anchoreGlobal.internalServicesSsl.certDir -}}/{{- .Values.anchoreGlobal.internalServicesSsl.certSecretKeyName }}"
ssl_enable: {{ .Values.anchoreGlobal.internalServicesSslEnabled }}
rbac_manager:
- enabled: True
- require_auth: True
+ enabled: true
+ require_auth: true
endpoint_hostname: {{ template "anchore-engine.api.fullname" . }}
listen: 0.0.0.0
port: {{ .Values.anchoreEnterpriseRbac.service.apiPort }}
@@ -63,4 +63,19 @@ data:
ssl_cert: "{{ .Values.anchoreGlobal.internalServicesSsl.certDir -}}/{{- .Values.anchoreGlobal.internalServicesSsl.certSecretCertName }}"
ssl_key: "{{ .Values.anchoreGlobal.internalServicesSsl.certDir -}}/{{- .Values.anchoreGlobal.internalServicesSsl.certSecretKeyName }}"
ssl_enable: {{ .Values.anchoreGlobal.internalServicesSslEnabled }}
+ reports:
+ enabled: true
+ require_auth: true
+ endpoint_hostname: {{ template "anchore-engine.enterprise-reports.fullname" . }}
+ listen: '0.0.0.0'
+ port: {{ .Values.anchoreEnterpriseReports.service.port }}
+ enable_graphiql: "{{ .Values.anchoreEnterpriseReports.enableGraphql }}"
+ enable_data_ingress: "{{ .Values.anchoreEnterpriseReports.enableDataIngress }}"
+ cycle_timers:
+ {{ toYaml .Values.anchoreEnterpriseReports.cycleTimers | nindent 10 | trim }}
+ {{- if .Values.anchoreEnterpriseRbac.enabled }}
+ authorization_handler: external
+ authorization_handler_config:
+ endpoint: "http://localhost:{{ .Values.anchoreEnterpriseRbac.service.authPort }}"
+ {{- end }}
{{- end -}}
diff --git a/stable/anchore-engine/templates/feeds_configmap.yaml b/stable/anchore-engine/templates/enterprise_feeds_configmap.yaml
similarity index 79%
rename from stable/anchore-engine/templates/feeds_configmap.yaml
rename to stable/anchore-engine/templates/enterprise_feeds_configmap.yaml
index 28cf47fde8a8..4657a74d764a 100644
--- a/stable/anchore-engine/templates/feeds_configmap.yaml
+++ b/stable/anchore-engine/templates/enterprise_feeds_configmap.yaml
@@ -11,6 +11,16 @@ metadata:
heritage: {{ .Release.Service }}
component: {{ $component }}
data:
+ ANCHORE_DB_NAME: {{ index .Values "anchore-feeds-db" "postgresDatabase" | quote }}
+ ANCHORE_DB_USER: {{ index .Values "anchore-feeds-db" "postgresUser" | quote }}
+ {{- if and (index .Values "anchore-feeds-db" "externalEndpoint") (not (index .Values "anchore-feeds-db" "enabled")) }}
+ ANCHORE_DB_HOST: {{ index .Values "anchore-feeds-db" "externalEndpoint" | quote }}
+ {{- else if and (index .Values "cloudsql" "enabled") (not (index .Values "anchore-feeds-db" "enabled")) }}
+ ANCHORE_DB_HOST: "localhost:5432"
+ {{- else }}
+ ANCHORE_DB_HOST: "{{ template "postgres.anchore-feeds-db.fullname" . }}:5432"
+ {{- end }}
+
config.yaml: |
# Anchore Enterprise Service Configuration File
@@ -18,7 +28,7 @@ data:
# be altered for basic operation
#
service_dir: {{ .Values.anchoreGlobal.serviceDir }}
- tmp_dir: {{ default "/scratch" .Values.anchoreAnalyzer.scratchVolume.mountPath }}
+ tmp_dir: {{ .Values.anchoreGlobal.scratchVolume.mountPath }}
log_level: {{ .Values.anchoreGlobal.logLevel }}
cleanup_images: {{ .Values.anchoreGlobal.cleanupImages }}
@@ -26,14 +36,14 @@ data:
host_id: "${ANCHORE_POD_NAME}"
internal_ssl_verify: {{ .Values.anchoreGlobal.internalServicesSsl.verifyCerts }}
auto_restart_services: false
- license_file: "/license.yaml"
+ license_file: "/home/anchore/license.yaml"
metrics:
enabled: {{ .Values.anchoreGlobal.enableMetrics }}
credentials:
database:
- db_connect: "postgresql+pg8000://${ANCHORE_DB_USER}:${ANCHORE_DB_PASSWORD}@${ANCHORE_DB_HOST}/${ANCHORE_DB_NAME}"
+ db_connect: "postgresql://${ANCHORE_DB_USER}:${ANCHORE_DB_PASSWORD}@${ANCHORE_DB_HOST}/${ANCHORE_DB_NAME}"
db_connect_args:
timeout: {{ .Values.anchoreEnterpriseFeeds.dbConfig.timeout }}
ssl: {{ .Values.anchoreEnterpriseFeeds.dbConfig.ssl }}
@@ -49,9 +59,9 @@ data:
port: {{ .Values.anchoreEnterpriseFeeds.service.port }}
# Time delay in seconds between consecutive driver runs for processing data
cycle_timers:
-{{ toYaml .Values.anchoreEnterpriseFeeds.cycleTimers | indent 10 }}
+ {{ toYaml .Values.anchoreEnterpriseFeeds.cycleTimers | nindent 10 | trim }}
# Staging space for holding normalized output from drivers.
- local_workspace: {{ .Values.anchoreEnterpriseFeeds.scratchVolume.mountPath }}
+ local_workspace: {{ .Values.anchoreGlobal.scratchVolume.mountPath }}
# Drivers process data from external sources and store normalized data in local_workspace. Processing large data sets
# is a time consuming process for some drivers. To speed it up the container is shipped with pre-loaded data which is used
# by default if local_workspace is empty.
@@ -72,7 +82,7 @@ data:
# rubygem data comes packaged as a PostgreSQL dump file. gem driver loads the pg dump and normalizes the data.
# To enable gem driver comment the enabled property and uncomment the db_connect property.
enabled: {{ default "false" .Values.anchoreEnterpriseFeeds.gemDriverEnabled }}
- db_connect: {{ default "'postgresql+pg8000://${ANCHORE_DB_USER}:${ANCHORE_DB_PASSWORD}@${ANCHORE_DB_HOST}/gems'" .Values.anchoreEnterpriseFeeds.gemDbEndpoint }}
+ db_connect: {{ default "'postgresql://${ANCHORE_DB_USER}:${ANCHORE_DB_PASSWORD}@${ANCHORE_DB_HOST}/gems'" .Values.anchoreEnterpriseFeeds.gemDbEndpoint }}
centos:
enabled: {{ default "true" .Values.anchoreEnterpriseFeeds.centosDriverEnabled }}
debian:
diff --git a/stable/anchore-engine/templates/enterprise_feeds_deployment.yaml b/stable/anchore-engine/templates/enterprise_feeds_deployment.yaml
index 6eda35f5796e..38a3af4cea75 100644
--- a/stable/anchore-engine/templates/enterprise_feeds_deployment.yaml
+++ b/stable/anchore-engine/templates/enterprise_feeds_deployment.yaml
@@ -21,46 +21,41 @@ spec:
labels:
app: {{ template "anchore-engine.fullname" . }}
component: {{ $component }}
-{{- if .Values.anchoreEnterpriseFeeds.annotations }}
+ {{- with .Values.anchoreEnterpriseFeeds.annotations }}
annotations:
-{{ toYaml .Values.anchoreEnterpriseFeeds.annotations | indent 8 }}
-{{- end }}
+ {{ toYaml . | nindent 8 | trim }}
+ {{- end }}
spec:
- volumes:
- - name: config-volume
- configMap:
- name: {{ template "anchore-engine.enterprise-feeds.fullname" . }}
- - name: scratch-volume
-{{ toYaml .Values.anchoreEnterpriseFeeds.scratchVolume.details | indent 10 }}
- - name: anchore-license
- secret:
- secretName: {{ .Values.anchoreEnterpriseGlobal.licenseSecretName }}
imagePullSecrets:
- name: {{ .Values.anchoreEnterpriseGlobal.imagePullSecretName }}
containers:
+ {{- if .Values.cloudsql.enabled }}
+ - name: cloudsql-proxy
+ image: {{ .Values.cloudsql.image.repository }}:{{ .Values.cloudsql.image.tag }}
+ imagePullPolicy: {{ .Values.cloudsql.image.pullPolicy }}
+ command: ["/cloud_sql_proxy"]
+ args: ["-instances={{ .Values.cloudsql.instance }}=tcp:5432"]
+ {{- end }}
- name: "{{ .Chart.Name }}-{{ $component }}"
image: {{ .Values.anchoreEnterpriseGlobal.image }}
imagePullPolicy: {{ .Values.anchoreEnterpriseGlobal.imagePullPolicy }}
- command: ["/usr/local/bin/anchore-enterprise-manager"]
+ command: ["anchore-enterprise-manager"]
args: ["service", "start", "feeds"]
ports:
- containerPort: {{ .Values.anchoreEnterpriseFeeds.service.port }}
name: feeds-api
envFrom:
- secretRef:
- name: {{ template "anchore-engine.fullname" . }}
+ name: {{ default (include "anchore-engine.fullname" .) .Values.anchoreGlobal.existingSecret }}
+ - configMapRef:
+ name: {{ template "anchore-engine.enterprise-feeds.fullname" . }}
env:
- {{- if and (index .Values "anchore-feeds-db" "externalEndpoint") (not (index .Values "anchore-feeds-db" "enabled")) }}
- - name: ANCHORE_DB_HOST
- value: {{ index .Values "anchore-feeds-db" "externalEndpoint" | quote }}
- {{- else}}
- - name: ANCHORE_DB_HOST
- value: "{{ template "postgres.anchore-feeds-db.fullname" . }}:5432"
+ {{- with .Values.anchoreGlobal.extraEnv }}
+ {{- toYaml . | nindent 8 | trim }}
+ {{- end }}
+ {{- with .Values.anchoreEnterpriseFeeds.extraEnv }}
+ {{- toYaml . | nindent 8 | trim }}
{{- end }}
- - name: ANCHORE_DB_NAME
- value: {{ index .Values "anchore-feeds-db" "postgresDatabase" | quote }}
- - name: ANCHORE_DB_USER
- value: {{ index .Values "anchore-feeds-db" "postgresUser" | quote }}
- name: ANCHORE_DB_PASSWORD
valueFrom:
secretKeyRef:
@@ -74,10 +69,10 @@ spec:
- name: config-volume
mountPath: /config/config.yaml
subPath: config.yaml
- - name: scratch-volume
- mountPath: {{ .Values.anchoreEnterpriseFeeds.scratchVolume.mountPath }}
+ - name: {{ $component }}-scratch
+ mountPath: {{ .Values.anchoreGlobal.scratchVolume.mountPath }}
- name: anchore-license
- mountPath: /license.yaml
+ mountPath: /home/anchore/license.yaml
subPath: license.yaml
livenessProbe:
httpGet:
@@ -97,19 +92,28 @@ spec:
failureThreshold: 3
successThreshold: 1
resources:
-{{ toYaml .Values.anchoreEnterpriseFeeds.resources | indent 10 }}
- {{- if .Values.anchoreEnterpriseFeeds.nodeSelector }}
+ {{ toYaml .Values.anchoreEnterpriseFeeds.resources | nindent 10 | trim }}
+ volumes:
+ - name: config-volume
+ configMap:
+ name: {{ template "anchore-engine.enterprise-feeds.fullname" . }}
+ - name: {{ $component }}-scratch
+ {{ toYaml .Values.anchoreGlobal.scratchVolume.details | nindent 10 | trim }}
+ - name: anchore-license
+ secret:
+ secretName: {{ .Values.anchoreEnterpriseGlobal.licenseSecretName }}
+ {{- with .Values.anchoreEnterpriseFeeds.nodeSelector }}
nodeSelector:
-{{ toYaml .Values.anchoreEnterpriseFeeds.nodeSelector | indent 8 }}
- {{- end }}
- {{- with .Values.anchoreEnterpriseFeeds.affinity }}
+ {{ toYaml . | nindent 8 | trim }}
+ {{- end }}
+ {{- with .Values.anchoreEnterpriseFeeds.affinity }}
affinity:
-{{ toYaml . | indent 8 }}
- {{- end }}
- {{- with .Values.anchoreEnterpriseFeeds.tolerations }}
+ {{ toYaml . | nindent 8 | trim }}
+ {{- end }}
+ {{- with .Values.anchoreEnterpriseFeeds.tolerations }}
tolerations:
-{{ toYaml . | indent 8 }}
- {{- end }}
+ {{ toYaml . | nindent 8 | trim }}
+ {{- end }}
---
apiVersion: v1
@@ -122,9 +126,9 @@ metadata:
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
component: {{ $component }}
- {{- if .Values.anchoreEnterpriseFeeds.service.annotations }}
+ {{- with .Values.anchoreEnterpriseFeeds.service.annotations }}
annotations:
-{{ toYaml .Values.anchoreEnterpriseFeeds.service.annotations | indent 4 }}
+ {{ toYaml . | nindent 4 | trim }}
{{- end }}
spec:
type: {{ .Values.anchoreEnterpriseFeeds.service.type }}
diff --git a/stable/anchore-engine/templates/enterprise_ui_config.yaml b/stable/anchore-engine/templates/enterprise_ui_config.yaml
new file mode 100644
index 000000000000..4583e73ca0a2
--- /dev/null
+++ b/stable/anchore-engine/templates/enterprise_ui_config.yaml
@@ -0,0 +1,42 @@
+{{- if and .Values.anchoreEnterpriseGlobal.enabled .Values.anchoreEnterpriseUi.enabled -}}
+{{- $component := "enterprise-ui" -}}
+
+# Using a secret until UI app supports ENV vars inside the config file. Redis password is included in config.
+kind: Secret
+apiVersion: v1
+metadata:
+ name: {{ include "anchore-engine.enterprise-ui.fullname" . | quote }}
+ labels:
+ app: {{ include "anchore-engine.fullname" . | quote }}
+ component: {{ $component }}
+ chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ release: "{{ .Release.Name }}"
+ heritage: "{{ .Release.Service }}"
+type: Opaque
+stringData:
+ config-ui.yaml: |
+ engine_uri: "http://{{ template "anchore-engine.api.fullname" . }}:{{ .Values.anchoreApi.service.port }}/v1"
+ {{- if and (index .Values "anchore-ui-redis" "externalEndpoint") (not (index .Values "anchore-ui-redis" "enabled")) }}
+ redis_uri: "{{ index .Values "anchore-ui-redis" "externalEndpoint" }}"
+ {{- else }}
+ redis_uri: "redis://:{{ index .Values "anchore-ui-redis" "password" }}@{{ template "redis.fullname" . }}-master:6379"
+ {{- end }}
+ {{- if .Values.anchoreEnterpriseRbac.enabled }}
+ rbac_uri: "http://{{ template "anchore-engine.api.fullname" . }}:{{ .Values.anchoreEnterpriseRbac.service.apiPort }}/v1"
+ {{- end }}
+ {{- if .Values.anchoreEnterpriseReports.enabled }}
+ reports_uri: "http://{{ template "anchore-engine.api.fullname" . }}:{{ .Values.anchoreEnterpriseReports.service.port }}/v1"
+ {{- end }}
+ {{- if and .Values.postgresql.externalEndpoint (not .Values.postgresql.enabled) }}
+ appdb_uri: "postgresql://{{ .Values.postgresql.postgresUser }}:{{ .Values.postgresql.postgresPassword }}@{{ .Values.postgresql.externalEndpoint }}/{{ .Values.postgresql.postgresDatabase }}"
+ {{- else if and (index .Values "cloudsql" "enabled") (not (index .Values "postgresql" "enabled")) }}
+ appdb_uri: "postgresql://{{ .Values.postgresql.postgresUser }}:{{ .Values.postgresql.postgresPassword }}@localhost:5432/{{ .Values.postgresql.postgresDatabase }}"
+ {{- else }}
+ appdb_uri: "postgresql://{{ .Values.postgresql.postgresUser }}:{{ .Values.postgresql.postgresPassword }}@{{ template "postgres.fullname" . }}:5432/{{ .Values.postgresql.postgresDatabase }}"
+ {{- end }}
+ license_path: "/home/anchore/"
+ enable_ssl: {{ .Values.anchoreEnterpriseUi.enableSsl }}
+ enable_proxy: {{ .Values.anchoreEnterpriseUi.enableProxy }}
+ allow_shared_login: {{ .Values.anchoreEnterpriseUi.enableSharedLogin }}
+ redis_flushdb: {{ .Values.anchoreEnterpriseUi.redisFlushdb }}
+{{- end -}}
diff --git a/stable/anchore-engine/templates/enterprise_ui_deployment.yaml b/stable/anchore-engine/templates/enterprise_ui_deployment.yaml
index 6f1be1b607f5..fd1e81b6b463 100644
--- a/stable/anchore-engine/templates/enterprise_ui_deployment.yaml
+++ b/stable/anchore-engine/templates/enterprise_ui_deployment.yaml
@@ -25,59 +25,44 @@ spec:
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
-{{- if .Values.anchoreEnterpriseUi.annotations }}
+ {{- with .Values.anchoreEnterpriseUi.annotations }}
annotations:
-{{ toYaml .Values.anchoreEnterpriseUi.annotations | indent 8 }}
-{{- end }}
+ {{ toYaml . | nindent 8 | trim }}
+ {{- end }}
spec:
- volumes:
- - name: anchore-license
- secret:
- secretName: {{ .Values.anchoreEnterpriseGlobal.licenseSecretName }}
imagePullSecrets:
- name: {{ .Values.anchoreEnterpriseGlobal.imagePullSecretName }}
containers:
+ {{- if .Values.cloudsql.enabled }}
+ - name: cloudsql-proxy
+ image: {{ .Values.cloudsql.image.repository }}:{{ .Values.cloudsql.image.tag }}
+ imagePullPolicy: {{ .Values.cloudsql.image.pullPolicy }}
+ command: ["/cloud_sql_proxy"]
+ args: ["-instances={{ .Values.cloudsql.instance }}=tcp:5432"]
+ {{- end }}
- name: "{{ .Chart.Name }}-{{ $component }}"
image: {{ .Values.anchoreEnterpriseUi.image }}
imagePullPolicy: {{ .Values.anchoreEnterpriseUi.imagePullPolicy }}
env:
- - name: REDIS_PASSWORD
- valueFrom:
- secretKeyRef:
- name: {{ template "redis.fullname" . }}
- key: redis-password
- {{- if and (index .Values "anchore-ui-redis" "externalEndpoint") (not (index .Values "anchore-ui-redis" "enabled")) }}
- - name: ANCHORE_REDIS_URI
- value: {{ index .Values "anchore-ui-redis" "externalEndpoint" | quote }}
- {{- else }}
- - name: ANCHORE_REDIS_URI
- value: "redis://:$(REDIS_PASSWORD)@{{ template "redis.fullname" . }}-master:6379"
+ {{- with .Values.anchoreGlobal.extraEnv }}
+ {{- toYaml . | nindent 8 | trim }}
{{- end }}
- - name: ANCHORE_ENGINE_URI
- value: "http://{{ template "anchore-engine.api.fullname" . }}:{{ .Values.anchoreApi.service.port }}/v1"
- - name: ANCHORE_LICENSE_PATH
- value: "/"
- {{- if .Values.anchoreEnterpriseRbac.enabled }}
- - name: ANCHORE_RBAC_URI
- value: "http://{{ template "anchore-engine.api.fullname" . }}:{{ .Values.anchoreEnterpriseRbac.service.apiPort }}/v1"
+ {{- with .Values.anchoreEnterpriseUi.extraEnv }}
+ {{- toYaml . | nindent 8 | trim }}
{{- end }}
- - name: ANCHORE_ENABLE_SSL
- value: '{{ .Values.anchoreEnterpriseUi.enableSsl }}'
- - name: ANCHORE_ENABLE_PROXY
- value: '{{ .Values.anchoreEnterpriseUi.enableProxy }}'
- - name: ANCHORE_ALLOW_SHARED_LOGIN
- value: '{{ .Values.anchoreEnterpriseUi.enableSharedLogin }}'
ports:
- containerPort: 3000
protocol: TCP
name: enterprise-ui
volumeMounts:
- name: anchore-license
- mountPath: /license.yaml
+ mountPath: /home/anchore/license.yaml
subPath: license.yaml
+ - name: anchore-ui-config
+ mountPath: /config/config-ui.yaml
+ subPath: config-ui.yaml
livenessProbe:
- httpGet:
- path: /
+ tcpSocket:
port: enterprise-ui
initialDelaySeconds: 120
periodSeconds: 10
@@ -91,19 +76,26 @@ spec:
failureThreshold: 3
successThreshold: 1
resources:
-{{ toYaml .Values.anchoreEnterpriseUi.resources | indent 10 }}
- {{- if .Values.anchoreEnterpriseUi.nodeSelector }}
+ {{ toYaml .Values.anchoreEnterpriseUi.resources | nindent 10 | trim }}
+ volumes:
+ - name: anchore-license
+ secret:
+ secretName: {{ .Values.anchoreEnterpriseGlobal.licenseSecretName }}
+ - name: anchore-ui-config
+ secret:
+ secretName: {{ template "anchore-engine.enterprise-ui.fullname" . }}
+ {{- with .Values.anchoreEnterpriseUi.nodeSelector }}
nodeSelector:
-{{ toYaml .Values.anchoreEnterpriseUi.nodeSelector | indent 8 }}
- {{- end }}
- {{- with .Values.anchoreEnterpriseUi.affinity }}
+ {{ toYaml . | nindent 8 | trim }}
+ {{- end }}
+ {{- with .Values.anchoreEnterpriseUi.affinity }}
affinity:
-{{ toYaml . | indent 8 }}
- {{- end }}
- {{- with .Values.anchoreEnterpriseUi.tolerations }}
+ {{ toYaml . | nindent 8 | trim }}
+ {{- end }}
+ {{- with .Values.anchoreEnterpriseUi.tolerations }}
tolerations:
-{{ toYaml . | indent 8 }}
- {{- end }}
+ {{ toYaml . | nindent 8 | trim }}
+ {{- end }}
---
apiVersion: v1
@@ -116,12 +108,12 @@ metadata:
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
- {{- if .Values.anchoreEnterpriseUi.service.annotations }}
+ {{- with .Values.anchoreEnterpriseUi.service.annotations }}
annotations:
-{{ toYaml .Values.anchoreEnterpriseUi.service.annotations | indent 4 }}
+ {{ toYaml . | nindent 4 | trim }}
{{- end }}
spec:
- sessionAffinity: ClientIP
+ sessionAffinity: {{ .Values.anchoreEnterpriseUi.service.sessionAffinity }}
type: {{ .Values.anchoreEnterpriseUi.service.type }}
ports:
- name: enterprise-ui
diff --git a/stable/anchore-engine/templates/ingress.yaml b/stable/anchore-engine/templates/ingress.yaml
index afd33f6a3937..5042ef3f9dec 100644
--- a/stable/anchore-engine/templates/ingress.yaml
+++ b/stable/anchore-engine/templates/ingress.yaml
@@ -1,4 +1,4 @@
-{{- if .Values.anchoreGlobal.ingress.enabled -}}
+{{- if .Values.ingress.enabled -}}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
@@ -8,14 +8,14 @@ metadata:
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
- {{- if .Values.anchoreGlobal.ingress.annotations }}
+ {{- with .Values.ingress.annotations }}
annotations:
-{{ toYaml .Values.anchoreGlobal.ingress.annotations | indent 4 }}
+ {{ toYaml . | nindent 4 | trim }}
{{- end }}
spec:
- {{- if .Values.anchoreGlobal.ingress.tls }}
+ {{- if .Values.ingress.tls }}
tls:
- {{- range .Values.anchoreGlobal.ingress.tls }}
+ {{- range .Values.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . | quote }}
@@ -24,36 +24,36 @@ spec:
{{- end }}
{{- end }}
rules:
- {{- if .Values.anchoreApi.ingress.hosts }}
- {{- range .Values.anchoreApi.ingress.hosts }}
+ {{- if .Values.ingress.apiHosts }}
+ {{- range .Values.ingress.apiHosts }}
- host: {{ . | quote }}
http:
paths:
- - path: {{ $.Values.anchoreApi.ingress.path }}
+ - path: {{ $.Values.ingress.apiPath }}
backend:
serviceName: {{ template "anchore-engine.api.fullname" $ }}
servicePort: {{ $.Values.anchoreApi.service.port }}
{{- end }}
- {{- if and (and .Values.anchoreEnterpriseGlobal.enabled .Values.anchoreEnterpriseUi.enabled) .Values.anchoreEnterpriseUi.ingress.hosts }}
- {{- range .Values.anchoreEnterpriseUi.ingress.hosts }}
+ {{- if and (and .Values.anchoreEnterpriseGlobal.enabled .Values.anchoreEnterpriseUi.enabled) .Values.ingress.uiHosts }}
+ {{- range .Values.ingress.uiHosts }}
- host: {{ . | quote }}
http:
paths:
- - path: {{ $.Values.anchoreEnterpriseUi.ingress.path }}
+ - path: {{ $.Values.ingress.uiPath }}
backend:
serviceName: {{ template "anchore-engine.enterprise-ui.fullname" $ }}
servicePort: {{ $.Values.anchoreEnterpriseUi.service.port }}
{{- end }}
- {{- end }}
- {{- else }}
+ {{- end }}
+ {{- else }}
- http:
paths:
- - path: {{ $.Values.anchoreApi.ingress.path }}
+ - path: {{ $.Values.ingress.apiPath }}
backend:
serviceName: {{ template "anchore-engine.api.fullname" $ }}
servicePort: {{ $.Values.anchoreApi.service.port }}
{{- if and .Values.anchoreEnterpriseGlobal.enabled .Values.anchoreEnterpriseUi.enabled }}
- - path: {{ $.Values.anchoreEnterpriseUi.ingress.path }}
+ - path: {{ $.Values.ingress.uiPath }}
backend:
serviceName: {{ template "anchore-engine.enterprise-ui.fullname" $ }}
servicePort: {{ $.Values.anchoreEnterpriseUi.service.port }}
diff --git a/stable/anchore-engine/templates/policy_engine_deployment.yaml b/stable/anchore-engine/templates/policy_engine_deployment.yaml
index 1bdacab63778..6a905211ff43 100644
--- a/stable/anchore-engine/templates/policy_engine_deployment.yaml
+++ b/stable/anchore-engine/templates/policy_engine_deployment.yaml
@@ -20,23 +20,36 @@ spec:
labels:
app: {{ template "anchore-engine.fullname" . }}
component: {{ $component }}
-{{- if .Values.anchorePolicyEngine.annotations }}
+ {{- with .Values.anchorePolicyEngine.annotations }}
annotations:
-{{ toYaml .Values.anchorePolicyEngine.annotations | indent 8 }}
-{{- end }}
+ {{ toYaml . | nindent 8 | trim }}
+ {{- end }}
spec:
containers:
+ {{- if .Values.cloudsql.enabled }}
+ - name: cloudsql-proxy
+ image: {{ .Values.cloudsql.image.repository }}:{{ .Values.cloudsql.image.tag }}
+ imagePullPolicy: {{ .Values.cloudsql.image.pullPolicy }}
+ command: ["/cloud_sql_proxy"]
+ args: ["-instances={{ .Values.cloudsql.instance }}=tcp:5432"]
+ {{- end }}
- name: {{ .Chart.Name }}-{{ $component }}
image: {{ .Values.anchoreGlobal.image }}
imagePullPolicy: {{ .Values.anchoreGlobal.imagePullPolicy }}
- command: ["/usr/local/bin/anchore-manager"]
+ command: ["anchore-manager"]
args: ["service", "start", "policy_engine"]
envFrom:
- secretRef:
- name: {{ template "anchore-engine.fullname" . }}
+ name: {{ default (include "anchore-engine.fullname" .) .Values.anchoreGlobal.existingSecret }}
- configMapRef:
name: {{ template "anchore-engine.fullname" . }}
env:
+ {{- with .Values.anchoreGlobal.extraEnv }}
+ {{- toYaml . | nindent 8 | trim }}
+ {{- end }}
+ {{- with .Values.anchorePolicyEngine.extraEnv }}
+ {{- toYaml . | nindent 8 | trim }}
+ {{- end }}
- name: ANCHORE_POD_NAME
valueFrom:
fieldRef:
@@ -50,7 +63,7 @@ spec:
subPath: config.yaml
{{- if .Values.anchoreGlobal.internalServicesSslEnabled }}
- name: certs
- mountPath: {{ default "/certs" .Values.anchoreGlobal.internalServicesSsl.certDir }}
+ mountPath: {{ .Values.anchoreGlobal.internalServicesSsl.certDir }}
readOnly: true
{{- end }}
livenessProbe:
@@ -71,7 +84,7 @@ spec:
failureThreshold: 3
successThreshold: 1
resources:
-{{ toYaml .Values.anchorePolicyEngine.resources | indent 10 }}
+ {{ toYaml .Values.anchorePolicyEngine.resources | nindent 10 | trim }}
volumes:
- name: config-volume
configMap:
@@ -81,18 +94,18 @@ spec:
secret:
secretName: {{ .Values.anchoreGlobal.internalServicesSsl.certSecret }}
{{- end }}
- {{- if .Values.anchorePolicyEngine.nodeSelector }}
+ {{- with .Values.anchorePolicyEngine.nodeSelector }}
nodeSelector:
-{{ toYaml .Values.anchorePolicyEngine.nodeSelector | indent 8 }}
- {{- end }}
- {{- with .Values.anchorePolicyEngine.affinity }}
+ {{ toYaml . | nindent 8 | trim }}
+ {{- end }}
+ {{- with .Values.anchorePolicyEngine.affinity }}
affinity:
-{{ toYaml . | indent 8 }}
- {{- end }}
- {{- with .Values.anchorePolicyEngine.tolerations }}
+ {{ toYaml . | nindent 8 | trim }}
+ {{- end }}
+ {{- with .Values.anchorePolicyEngine.tolerations }}
tolerations:
-{{ toYaml . | indent 8 }}
- {{- end }}
+ {{ toYaml . | nindent 8 | trim }}
+ {{- end }}
---
apiVersion: v1
@@ -105,9 +118,9 @@ metadata:
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
component: {{ $component }}
- {{- if .Values.anchorePolicyEngine.service.annotations }}
+ {{- with .Values.anchorePolicyEngine.service.annotations }}
annotations:
-{{ toYaml .Values.anchorePolicyEngine.service.annotations | indent 4 }}
+ {{ toYaml . | nindent 4 | trim }}
{{- end }}
spec:
type: {{ .Values.anchorePolicyEngine.service.type }}
diff --git a/stable/anchore-engine/templates/secrets.yaml b/stable/anchore-engine/templates/secrets.yaml
index 3d27aadb8265..fe9f49c37ad8 100644
--- a/stable/anchore-engine/templates/secrets.yaml
+++ b/stable/anchore-engine/templates/secrets.yaml
@@ -1,3 +1,4 @@
+{{- if not .Values.anchoreGlobal.existingSecret }}
apiVersion: v1
kind: Secret
metadata:
@@ -14,3 +15,4 @@ stringData:
{{- if and .Values.anchoreEnterpriseGlobal.enabled .Values.anchoreEnterpriseFeeds.enabled }}
.feedsDbPassword: {{ index .Values "anchore-feeds-db" "postgresPassword" | quote }}
{{- end }}
+{{- end }}
\ No newline at end of file
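The `existingSecret` guard added above lets operators supply their own Secret instead of the chart-generated one. A hypothetical pre-created Secret might look like the following sketch; the name and stringData keys are illustrative and must match whatever keys the chart's templates/secrets.yaml would otherwise generate for your chart version:

```yaml
# Hypothetical Secret consumed via --set anchoreGlobal.existingSecret=my-anchore-secret.
# Key names here are placeholders; check templates/secrets.yaml for the exact
# keys the deployments' envFrom secretRef expects.
apiVersion: v1
kind: Secret
metadata:
  name: my-anchore-secret
stringData:
  ANCHORE_ADMIN_PASSWORD: change-me
  ANCHORE_DB_PASSWORD: change-me
```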
diff --git a/stable/anchore-engine/templates/simplequeue_deployment.yaml b/stable/anchore-engine/templates/simplequeue_deployment.yaml
index 59dfa4fab796..ee472a89fb09 100644
--- a/stable/anchore-engine/templates/simplequeue_deployment.yaml
+++ b/stable/anchore-engine/templates/simplequeue_deployment.yaml
@@ -20,23 +20,36 @@ spec:
labels:
app: {{ template "anchore-engine.fullname" . }}
component: {{ $component }}
-{{- if .Values.anchoreSimpleQueue.annotations }}
+ {{- with .Values.anchoreSimpleQueue.annotations }}
annotations:
-{{ toYaml .Values.anchoreSimpleQueue.annotations | indent 8 }}
-{{- end }}
+ {{ toYaml . | nindent 8 | trim }}
+ {{- end }}
spec:
containers:
+ {{- if .Values.cloudsql.enabled }}
+ - name: cloudsql-proxy
+ image: {{ .Values.cloudsql.image.repository }}:{{ .Values.cloudsql.image.tag }}
+ imagePullPolicy: {{ .Values.cloudsql.image.pullPolicy }}
+ command: ["/cloud_sql_proxy"]
+ args: ["-instances={{ .Values.cloudsql.instance }}=tcp:5432"]
+ {{- end }}
- name: "{{ .Chart.Name }}-{{ $component }}"
image: {{ .Values.anchoreGlobal.image }}
imagePullPolicy: {{ .Values.anchoreGlobal.imagePullPolicy }}
- command: ["/usr/local/bin/anchore-manager"]
+ command: ["anchore-manager"]
args: ["service", "start", "simplequeue"]
envFrom:
- secretRef:
- name: {{ template "anchore-engine.fullname" . }}
+ name: {{ default (include "anchore-engine.fullname" .) .Values.anchoreGlobal.existingSecret }}
- configMapRef:
name: {{ template "anchore-engine.fullname" . }}
env:
+ {{- with .Values.anchoreGlobal.extraEnv }}
+ {{- toYaml . | nindent 8 | trim }}
+ {{- end }}
+ {{- with .Values.anchoreSimpleQueue.extraEnv }}
+ {{- toYaml . | nindent 8 | trim }}
+ {{- end }}
- name: ANCHORE_POD_NAME
valueFrom:
fieldRef:
@@ -50,7 +63,7 @@ spec:
subPath: config.yaml
{{- if .Values.anchoreGlobal.internalServicesSslEnabled }}
- name: certs
- mountPath: {{ default "/certs" .Values.anchoreGlobal.internalServicesSsl.certDir }}
+ mountPath: {{ .Values.anchoreGlobal.internalServicesSsl.certDir }}
readOnly: true
{{- end }}
livenessProbe:
@@ -71,7 +84,7 @@ spec:
failureThreshold: 3
successThreshold: 1
resources:
-{{ toYaml .Values.anchoreSimpleQueue.resources | indent 10 }}
+ {{ toYaml .Values.anchoreSimpleQueue.resources | nindent 10 | trim }}
volumes:
- name: config-volume
configMap:
@@ -81,18 +94,18 @@ spec:
secret:
secretName: {{ .Values.anchoreGlobal.internalServicesSsl.certSecret }}
{{- end }}
- {{- if .Values.anchoreSimpleQueue.nodeSelector }}
+ {{- with .Values.anchoreSimpleQueue.nodeSelector }}
nodeSelector:
-{{ toYaml .Values.anchoreSimpleQueue.nodeSelector | indent 8 }}
- {{- end }}
- {{- with .Values.anchoreSimpleQueue.affinity }}
+ {{ toYaml . | nindent 8 | trim }}
+ {{- end }}
+ {{- with .Values.anchoreSimpleQueue.affinity }}
affinity:
-{{ toYaml . | indent 8 }}
- {{- end }}
- {{- with .Values.anchoreSimpleQueue.tolerations }}
+ {{ toYaml . | nindent 8 | trim }}
+ {{- end }}
+ {{- with .Values.anchoreSimpleQueue.tolerations }}
tolerations:
-{{ toYaml . | indent 8 }}
- {{- end }}
+ {{ toYaml . | nindent 8 | trim }}
+ {{- end }}
---
apiVersion: v1
@@ -105,9 +118,9 @@ metadata:
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
component: {{ $component }}
- {{- if .Values.anchoreSimpleQueue.service.annotations }}
+ {{- with .Values.anchoreSimpleQueue.service.annotations }}
annotations:
-{{ toYaml .Values.anchoreSimpleQueue.service.annotations | indent 4 }}
+ {{ toYaml . | nindent 4 | trim }}
{{- end }}
spec:
type: {{ .Values.anchoreSimpleQueue.service.type }}
diff --git a/stable/anchore-engine/values.yaml b/stable/anchore-engine/values.yaml
index 76cb88c4a1ce..98aedb805e96 100644
--- a/stable/anchore-engine/values.yaml
+++ b/stable/anchore-engine/values.yaml
@@ -2,7 +2,7 @@
# Anchore engine has a dependency on Postgresql, configure here
postgresql:
- # To use an external DB, uncomment & set 'enabled: false'
+ # To use an external DB or Google CloudSQL in GKE, uncomment & set 'enabled: false'
# externalEndpoint, postgresUser, postgresPassword & postgresDatabase are required values for external postgres
# enabled: false
postgresUser: anchoreengine
@@ -14,41 +14,80 @@ postgresql:
externalEndpoint: Null
# Configure size of the persitant volume used with helm managed chart.
- # This is ignored if using an external endpoint.
+ # This should be commented out if using an external endpoint.
persistence:
- size: 8Gi
+ resourcePolicy: nil
+ size: 20Gi
+
+# Google CloudSQL support in GKE via gce-proxy
+cloudsql:
+ # To use CloudSQL in GKE set 'enabled: true'
+ enabled: false
+ # set CloudSQL instance: 'project:zone:instancename'
+ instance: ""
+ image:
+ # set repo and image tag of gce-proxy
+ repository: gcr.io/cloudsql-docker/gce-proxy
+ tag: 1.12
+ pullPolicy: IfNotPresent
+
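Tying the new `cloudsql` block to the gce-proxy sidecar added in the deployment templates, a values override might look like this sketch (the instance connection string is a placeholder, and pointing `externalEndpoint` at localhost assumes the proxy's in-pod `tcp:5432` listener):

```yaml
# Sketch: using the CloudSQL sidecar instead of the bundled postgresql chart.
# "my-project:us-central1:my-instance" is a placeholder connection string.
cloudsql:
  enabled: true
  instance: "my-project:us-central1:my-instance"
postgresql:
  enabled: false
  # The proxy sidecar listens on tcp:5432 inside the pod.
  externalEndpoint: "localhost:5432"
  postgresUser: anchoreengine
  postgresPassword: change-me
  postgresDatabase: anchore
```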
+# Create an ingress resource for all external anchore engine services (API & Enterprise UI).
+# By default this chart is setup to use the NGINX ingress controller which needs to be installed & configured on your cluster.
+# To utilize a GCE/ALB ingress controller comment out the nginx annotations below, change ingress.class, edit path configurations as per the comments, & set API/UI services to use NodePort.
+ingress:
+ enabled: false
+ # Use the following paths for GCE/ALB ingress controller
+ # apiPath: /v1/*
+ # uiPath: /*
+ apiPath: /v1/
+ uiPath: /
+ # Uncomment the following lines to bind on specific hostnames
+ # apiHosts:
+ # - anchore-api.example.com
+ # uiHosts:
+ # - anchore-ui.example.com
+ annotations:
+ # kubernetes.io/ingress.class: gce
+ kubernetes.io/ingress.class: nginx
+ # nginx.ingress.kubernetes.io/ssl-redirect: "false"
+ # kubernetes.io/ingress.allow-http: false
+ # kubernetes.io/tls-acme: true
+ tls: []
+ # Secrets must be manually created in the namespace.
+ # - secretName: chart-example-tls
+ # hosts:
+ # - chart-example.local
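Putting the consolidated ingress values together, enabling it with explicit hostnames might look like this sketch (hostnames are placeholders):

```yaml
# Sketch: enabling the consolidated ingress block for the default NGINX
# controller; the example.com hostnames are placeholders.
ingress:
  enabled: true
  apiHosts:
    - anchore-api.example.com
  uiHosts:
    - anchore-ui.example.com
  annotations:
    kubernetes.io/ingress.class: nginx
```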
# Global configuration shared by all anchore-engine services.
anchoreGlobal:
# Image used for all anchore engine deployments (excluding enterprise components).
- image: docker.io/anchore/anchore-engine:v0.3.2
+ image: docker.io/anchore/anchore-engine:v0.4.0
imagePullPolicy: IfNotPresent
- # Create an ingress resource for all external anchore engine services.
- # By default this chart is setup to use the NGINX ingress controller which needs to be installed & configured on your cluster.
- # To utilize a GCE ingress controller comment out the annotations below, also edit path configurion the UI & Api configs as per the comments.
- # Ingress paths/hosts can be setup for the anchoreApi & anchoreEnterpriseUi deployments in the corresponding values sections.
- ingress:
- enabled: false
- annotations:
- kubernetes.io/ingress.class: nginx
- # nginx.ingress.kubernetes.io/ssl-redirect: "false"
- # kubernetes.io/ingress.allow-http: false
- # kubernetes.io/tls-acme: true
- tls: []
- # Secrets must be manually created in the namespace.
- # - secretName: chart-example-tls
- # hosts:
- # - chart-example.local
+ # Set extra environment variables. These will be set on all containers.
+ extraEnv: []
+ # - name: foo
+ # value: bar
+
+ # Specifies an existing secret to be used for admin and db passwords
+ existingSecret: ""
+
+ # The scratchVolume controls the mounting of an external volume for scratch space for image analysis. Generally speaking
+ # you need to provision 3x the size of the largest image (uncompressed) that you want to analyze for this space.
+ scratchVolume:
+ mountPath: /analysis_scratch
+ details:
+ # Specify volume configuration here
+ emptyDir: {}
###
- # Start of General Anchore Engine Configurations (populates config.yaml)
+ # Start of General Anchore Engine Configurations (populates /config/config.yaml)
###
# Set where default configs are placed at startup. This must be a writable location for the pod.
- serviceDir: /anchore_service_config
+ serviceDir: /anchore_service
logLevel: INFO
- # If true, if a user adds an ECR registry with username = awsauto then the system will look for an instance profile to use for auth against the registry
+ # If true, when a user adds an ECR registry with username = awsauto then the system will look for an instance profile to use for auth against the registry
allowECRUseIAMRole: false
# Enable prometheus metrics
@@ -71,7 +110,7 @@ anchoreGlobal:
internalServicesSsl:
# specify whether cert is verfied against the local certifacte bundle (allow self-signed certs if set to false)
verifyCerts: false
- certDir: /certs
+ certDir: /home/anchore/certs
certSecret: Null
certSecretKeyName: tls.key
certSecretCertName: tls.crt
@@ -98,6 +137,11 @@ anchoreAnalyzer:
replicaCount: 1
containerPort: 8084
+ # Set extra environment variables. These will be set only on analyzer containers.
+ extraEnv: []
+ # - name: foo
+ # value: bar
+
# The cycle timer is the interval between checks to the work queue for new jobs
cycleTimers:
image_analyzer: 5
@@ -106,13 +150,12 @@ anchoreAnalyzer:
# necessarily be faster depending on hardware. Should test and balance this value vs. number of analyzers for your deployment cluster performance.
concurrentTasksPerWorker: 1
- # The analysisVolume controls the mounting of an external volume for scratch space for image analysis. Generally speaking
- # you need to provision 3x the size of the largest image (uncompressed) that you want to analyze for this space.
- scratchVolume:
- mountPath: /scratch
- details:
- # Specify volume configuration here
- emptyDir: {}
+ # Image layer caching can be enabled to speed up image downloads before analysis.
+ # This chart sets up a scratch directory for all analyzer pods using the values found at anchoreGlobal.scratchVolume.
+ # When setting anchoreAnalyzer.layerCacheMaxGigabytes, ensure the scratch volume has sufficient storage space.
+ # For more info see - https://docs.anchore.com/current/docs/engine/engine_installation/storage/layer_caching/
+ # Enable image layer caching by setting a cache size > 0GB.
+ layerCacheMaxGigabytes: 0
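The layer-cache values above can be combined with the relocated global scratch volume roughly as follows; the 4Gi cache size and 20Gi emptyDir sizeLimit are illustrative numbers, sized per the comment's guidance (cache plus ~3x the largest uncompressed image):

```yaml
# Sketch: enabling a 4GB layer cache; sizes are illustrative.
anchoreAnalyzer:
  layerCacheMaxGigabytes: 4
anchoreGlobal:
  scratchVolume:
    mountPath: /analysis_scratch
    details:
      emptyDir:
        sizeLimit: 20Gi
```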
# resources:
# limits:
@@ -132,23 +175,17 @@ anchoreAnalyzer:
anchoreApi:
replicaCount: 1
+ # Set extra environment variables. These will be set on all api containers.
+ extraEnv: []
+ # - name: foo
+ # value: bar
+
# kubernetes service configuration for anchore external API
service:
type: ClusterIP
port: 8228
annotations: {}
- # Used to create Ingress record for the anchore engine external API (api service)
- # (should used with service.type: ClusterIP or NodePort depending on platform)
- ingress:
- # For GCE ingress controllers use the following path
- # path: /v1/*
- # By default this is configured to use an NGINX ingress controller.
- path: /v1/
- # You can bound on specific hostnames
- # hosts:
- # - anchore-api.local
-
# resources:
# limits:
# cpu: 100m
@@ -165,6 +202,11 @@ anchoreApi:
anchoreCatalog:
replicaCount: 1
+ # Set extra environment variables. These will be set on all catalog containers.
+ extraEnv: []
+ # - name: foo
+ # value: bar
+
# Intervals to run specific events on (seconds)
cycleTimers:
# Interval to check for an update to a tag
@@ -185,7 +227,7 @@ anchoreCatalog:
# Event log configuration for webhooks
events:
notification:
- enabled: true
+ enabled: false
# Send notifications for events with severity level that matches items in this list
level:
- error
@@ -262,6 +304,11 @@ anchoreCatalog:
anchorePolicyEngine:
replicaCount: 1
+ # Set extra environment variables. These will be set on all policy engine containers.
+ extraEnv: []
+ # - name: foo
+ # value: bar
+
# Intervals to run specific events on (seconds)
cycleTimers:
# Interval to run a feed sync to get latest cve data
@@ -292,6 +339,11 @@ anchorePolicyEngine:
anchoreSimpleQueue:
replicaCount: 1
+ # Set extra environment variables. These will be set on all simplequeue containers.
+ extraEnv: []
+ # - name: foo
+ # value: bar
+
# kubernetes service configuration for anchore simplequeue api
service:
type: ClusterIP
@@ -318,16 +370,16 @@ anchoreEnterpriseGlobal:
# Create this secret with the following command - kubectl create secret generic anchore-license --from-file=license.yaml=
licenseSecretName: anchore-enterprise-license
- image: docker.io/anchore/enterprise:v0.3.3
+ image: docker.io/anchore/enterprise:v0.4.0
imagePullPolicy: IfNotPresent
# Name of the kubernetes secret containing your dockerhub creds with access to the anchore enterprise images.
# Create this secret with the following command - kubectl create secret docker-registry anchore-dockerhub-creds --docker-server=docker.io --docker-username= --docker-password= --docker-email=
imagePullSecretName: anchore-enterprise-pullcreds
# Configure the second postgres database instance for the enterprise feeds service.
-# Only utilized if anchoreEnterpriseFeeds.enabled: true
+# Only utilized if anchoreEnterpriseGlobal.enabled: true
anchore-feeds-db:
- # To use an external DB, uncomment & set 'enabled: false'
+ # To use an external DB or Google CloudSQL, uncomment & set 'enabled: false'
# externalEndpoint, postgresUser, postgresPassword & postgresDatabase are required values for external postgres
# enabled: false
postgresUser: anchoreengine
@@ -339,15 +391,21 @@ anchore-feeds-db:
externalEndpoint: Null
# Configure size of the persitant volume used with helm managed chart.
- # This is ignored if using an external endpoint.
+ # This should be commented out if using an external endpoint.
persistence:
- size: 8Gi
+ resourcePolicy: nil
+ size: 20Gi
# Configure & enable the Anchore Enterprise on-prem feeds service.
anchoreEnterpriseFeeds:
# If enabled is set to false, set anchore-feeds-db.enabled to false to ensure that helm doesn't stand up a unneccessary postgres instance.
enabled: true
+ # Set extra environment variables. These will be set on all feeds containers.
+ extraEnv: []
+ # - name: foo
+ # value: bar
+
# Time delay in seconds between consecutive driver runs for processing data
cycleTimers:
driver_sync: 7200
@@ -366,13 +424,6 @@ anchoreEnterpriseFeeds:
port: 8448
annotations: {}
- # Staging space for holding normalized output from drivers.
- scratchVolume:
- mountPath: /scratch
- details:
- # Specify volume configuration here
- emptyDir: {}
-
# resources:
# limits:
# cpu: 100m
@@ -391,6 +442,11 @@ anchoreEnterpriseFeeds:
anchoreEnterpriseRbac:
enabled: true
+ # Set extra environment variables. These will be set on all rbac containers.
+ extraEnv: []
+ # - name: foo
+ # value: bar
+
# Kubernetes service config - annotations & serviceType configs must be set in anchoreApi
# Due to RBAC sharing a service with the general API.
service:
@@ -413,10 +469,59 @@ anchoreEnterpriseRbac:
# cpu: 100m
# memory: 3Gi
+# Configure the Anchore Enterprise reporting component.
+anchoreEnterpriseReports:
+ enabled: true
+
+ # Set extra environment variables. These will be set on all reports containers.
+ extraEnv: []
+ # - name: foo
+ # value: bar
+
+ # GraphiQL is a GUI for editing and testing GraphQL queries and mutations.
+ # Set enable_graphiql to true and open http://:/v1/reports/graphql in a browser for reports API
+ enableGraphql: true
+ # Set enable_data_ingress to true for periodically syncing data from anchore engine into the reports service
+ enableDataIngress: true
+
+ cycleTimers:
+ # images and tags synced every 10 minutes
+ reports_data_load: 600
+ # policy evaluations and vulnerabilities refreshed every 2 hours
+ reports_data_refresh: 7200
+ # metrics generated every hour
+ reports_metrics: 3600
+
+ # Kubernetes service config - annotations & serviceType configs must be set in anchoreApi
+ # Due to Reports sharing a service with the general API.
+ service:
+ port: 8558
+
+ # resources:
+ # limits:
+ # cpu: 100m
+ # memory: 8Gi
+ # requests:
+ # cpu: 100m
+ # memory: 3Gi
+
+ annotations: {}
+ nodeSelector: {}
+ tolerations: []
+ affinity: {}
+
# Configure the Anchore Enterprise UI.
anchoreEnterpriseUi:
# If enabled is set to false, set anchore-ui-redis.enabled to false to ensure that helm doesn't stand up a unneccessary redis instance.
enabled: true
+ image: docker.io/anchore/enterprise-ui:v0.3.3
+ imagePullPolicy: IfNotPresent
+
+ # Set extra environment variables. These will be set on all UI containers.
+ extraEnv: []
+ # - name: foo
+ # value: bar
+
# Specifies whether to trust a reverse proxy when setting secure cookies (via the `X-Forwarded-Proto` header).
enableProxy: false
# Specifies if SSL is enabled in the web app container.
@@ -428,24 +533,17 @@ anchoreEnterpriseUi:
# sessions that are using the same set of credentials. Note that setting this property to `false` does not prevent a
# single session from being viewed within multiple *tabs* inside the same browser.
enableSharedLogin: true
-
- image: docker.io/anchore/enterprise-ui:v0.3.1
- imagePullPolicy: IfNotPresent
+ # The `redisFlushdb` key specifies if the Redis datastore containing
+ # user session keys and data is emptied on application startup. If the datastore
+ # is flushed, any users with active sessions will be required to re-authenticate.
+ redisFlushdb: true
# kubernetes service configuration for anchore UI
service:
type: ClusterIP
port: 80
annotations: {}
-
- ingress:
- # For GCE ingress controllers use the following path
- # path: /*
- # By default this is configured to use an NGINX ingress controller.
- path: /
- # You can bound on specific hostnames
- # hosts:
- # - anchore-ui.local
+ sessionAffinity: ClientIP
# resources:
# limits:
diff --git a/stable/apm-server/Chart.yaml b/stable/apm-server/Chart.yaml
index e537313d45c5..4ac0b49fe24f 100644
--- a/stable/apm-server/Chart.yaml
+++ b/stable/apm-server/Chart.yaml
@@ -2,11 +2,13 @@ apiVersion: v1
description: The server receives data from the Elastic APM agents and stores the data into a datastore like Elasticsearch
icon: https://www.elastic.co/assets/blt47799dcdcf08438d/logo-elastic-beats-lt.svg
name: apm-server
-version: 0.1.0
-appVersion: 6.2.4
+version: 2.1.2
+appVersion: 7.0.0
home: https://www.elastic.co/solutions/apm
sources:
- https://www.elastic.co/guide/en/apm/get-started/current/index.html
maintainers:
- name: mumoshu
email: ykuoka@gmail.com
+- name: at-k
+ email: atushi.k@gmail.com
diff --git a/stable/apm-server/OWNERS b/stable/apm-server/OWNERS
new file mode 100644
index 000000000000..a4dd6b0d13c8
--- /dev/null
+++ b/stable/apm-server/OWNERS
@@ -0,0 +1,6 @@
+approvers:
+- mumoshu
+- at-k
+reviewers:
+- mumoshu
+- at-k
diff --git a/stable/apm-server/README.md b/stable/apm-server/README.md
index 82d05212cd12..e350132a672d 100644
--- a/stable/apm-server/README.md
+++ b/stable/apm-server/README.md
@@ -1,6 +1,6 @@
# apm-server
-[apm-server](https://www.elastic.co/guide/en/beats/apm-server/current/index.html) is the server receives data from the Elastic APM agents and stores the data into a datastore like Elasticsearch.
+[apm-server](https://www.elastic.co/guide/en/apm/server/current/index.html) is the server that receives data from the Elastic APM agents and stores the data into a datastore like Elasticsearch.
## Introduction
@@ -43,16 +43,37 @@ The following table lists the configurable parameters of the apm-server chart an
| `image.repository` | The image repository to pull from | `docker.elastic.co/apm/apm-server` |
| `image.tag` | The image tag to pull | `6.2.4` |
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
+| `kind` | Install as Deployment or DaemonSet | `Deployment` |
+| `replicaCount` | Number of replicas when kind is Deployment | `1` |
+| `updateStrategy` | Allows setting of RollingUpdate strategy | `{}` |
+| `service.enabled` | If true, create service pointing to APM Server | `true` |
+| `service.type` | type of service | `ClusterIP` |
+| `service.port` | Service port | `8200` |
+| `service.portName` | Service port name | None |
+| `service.clusterIP` | Static clusterIP or None for headless services | None |
+| `service.externalIPs` | External IP addresses | None |
+| `service.loadBalancerIP` | Load Balancer IP address | None |
+| `service.loadBalancerSourceRanges` | Limit load balancer source IPs to list of CIDRs (where available) | `[]` |
+| `service.nodePort` | NodePort value if service.type is NodePort | None |
+| `service.annotations` | Kubernetes service annotations | None |
+| `service.labels` | Kubernetes service labels | None |
+| `ingress.enabled` | If true, create ingress pointing to service | `false` |
+| `ingress.annotations` | Kubernetes ingress annotations | None |
+| `ingress.labels` | Kubernetes service labels | None |
+| `ingress.hosts` | List of ingress accepted hostnames | apm-server-ingress.example.com |
| `rbac.create` | If true, create & use RBAC resources | `true` |
| `rbac.serviceAccount` | existing ServiceAccount to use (ignored if rbac.create=true) | `default` |
-| `config` | The content of the configuration file consumed by apm-server. See the [apm-server documentation](https://www.elastic.co/guide/en/beats/apm-server/current/apm-server-reference-yml.html) for full details |
-| `plugins` | List of beat plugins |
+| `config` | The content of the configuration file consumed by apm-server. See the [apm-server documentation](https://www.elastic.co/guide/en/beats/apm-server/current/apm-server-reference-yml.html) for full details | |
+| `plugins` | List of apm-server plugins | |
| `extraVars` | A map of additional environment variables | |
| `extraVolumes`, `extraVolumeMounts` | Additional volumes and mounts, for example to provide other configuration files | |
| `resources.requests.cpu` | CPU resource requests | |
| `resources.limits.cpu` | CPU resource limits | |
| `resources.requests.memory` | Memory resource requests | |
| `resources.limits.memory` | Memory resource limits | |
+| `nodeSelector` | Node labels for pod assignment | `{}` |
+| `tolerations` | List of node taints to tolerate | `[]` |
+| `affinity` | Node/Pod affinities | None |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
diff --git a/stable/apm-server/templates/NOTES.txt b/stable/apm-server/templates/NOTES.txt
index c69de39ffd78..7cb7c5307b32 100644
--- a/stable/apm-server/templates/NOTES.txt
+++ b/stable/apm-server/templates/NOTES.txt
@@ -1,3 +1,3 @@
To verify that apm-server has started, run:
- kubectl --namespace={{ .Release.Namespace }} get pods -l "app={{ template "apm-server.name" . }},release={{ .Release.Name }}"
+ kubectl --namespace={{ .Release.Namespace }} get pods -l "app.kubernetes.io/name={{ include "apm-server.name" . }},app.kubernetes.io/instance={{ .Release.Name }}"
diff --git a/stable/apm-server/templates/clusterrole.yaml b/stable/apm-server/templates/clusterrole.yaml
index c936aaa539a7..141a8add06a9 100644
--- a/stable/apm-server/templates/clusterrole.yaml
+++ b/stable/apm-server/templates/clusterrole.yaml
@@ -4,10 +4,10 @@ kind: ClusterRole
metadata:
name: {{ template "apm-server.fullname" . }}
labels:
- app: {{ template "apm-server.name" . }}
- chart: {{ template "apm-server.chart" . }}
- release: {{ .Release.Name }}
- heritage: {{ .Release.Service }}
+ app.kubernetes.io/name: {{ include "apm-server.name" . }}
+ helm.sh/chart: {{ include "apm-server.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
rules:
- apiGroups: [""]
resources:
diff --git a/stable/apm-server/templates/clusterrolebinding.yaml b/stable/apm-server/templates/clusterrolebinding.yaml
index 4ca97b92ed6b..54bf393aa2ee 100644
--- a/stable/apm-server/templates/clusterrolebinding.yaml
+++ b/stable/apm-server/templates/clusterrolebinding.yaml
@@ -4,10 +4,10 @@ kind: ClusterRoleBinding
metadata:
name: {{ template "apm-server.fullname" . }}
labels:
- app: {{ template "apm-server.name" . }}
- chart: {{ template "apm-server.chart" . }}
- release: {{ .Release.Name }}
- heritage: {{ .Release.Service }}
+ app.kubernetes.io/name: {{ include "apm-server.name" . }}
+ helm.sh/chart: {{ include "apm-server.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
diff --git a/stable/apm-server/templates/daemonset.yaml b/stable/apm-server/templates/daemonset.yaml
index 0d22ce4200f2..280ba3a51444 100644
--- a/stable/apm-server/templates/daemonset.yaml
+++ b/stable/apm-server/templates/daemonset.yaml
@@ -1,27 +1,26 @@
+{{- if eq .Values.kind "DaemonSet" }}
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: {{ template "apm-server.fullname" . }}
labels:
- app: {{ template "apm-server.name" . }}
- chart: {{ template "apm-server.chart" . }}
- release: {{ .Release.Name }}
- heritage: {{ .Release.Service }}
+ app.kubernetes.io/name: {{ include "apm-server.name" . }}
+ helm.sh/chart: {{ include "apm-server.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
selector:
matchLabels:
- app: {{ template "apm-server.name" . }}
- release: {{ .Release.Name }}
+ app.kubernetes.io/name: {{ include "apm-server.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
minReadySeconds: 10
updateStrategy:
- type: RollingUpdate
- rollingUpdate:
- maxUnavailable: 1
+{{ toYaml .Values.updateStrategy | indent 4 }}
template:
metadata:
labels:
- app: {{ template "apm-server.name" . }}
- release: {{ .Release.Name }}
+ app.kubernetes.io/name: {{ include "apm-server.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
{{- range $key, $value := .Values.podLabels }}
{{ $key }}: {{ $value }}
{{- end }}
@@ -50,7 +49,18 @@ spec:
value: {{ $value }}
{{- end }}
ports:
- - containerPort: 8200
+ - name: http
+ containerPort: 8200
+ livenessProbe:
+ httpGet:
+ path: /
+ port: http
+ initialDelaySeconds: 60
+ readinessProbe:
+ httpGet:
+ path: /
+ port: http
+ initialDelaySeconds: 60
securityContext:
runAsUser: 0
resources:
@@ -90,3 +100,4 @@ spec:
affinity:
{{ toYaml .Values.affinity | indent 8 }}
{{- end }}
+{{- end }}
diff --git a/stable/apm-server/templates/deployment.yaml b/stable/apm-server/templates/deployment.yaml
new file mode 100644
index 000000000000..bbd1c728b1d5
--- /dev/null
+++ b/stable/apm-server/templates/deployment.yaml
@@ -0,0 +1,105 @@
+{{- if eq .Values.kind "Deployment" }}
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: {{ template "apm-server.fullname" . }}
+ labels:
+ app.kubernetes.io/name: {{ include "apm-server.name" . }}
+ helm.sh/chart: {{ include "apm-server.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+spec:
+ replicas: {{ .Values.replicaCount }}
+ selector:
+ matchLabels:
+ app.kubernetes.io/name: {{ include "apm-server.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ minReadySeconds: 10
+ strategy:
+{{ toYaml .Values.updateStrategy | indent 4 }}
+ template:
+ metadata:
+ labels:
+ app.kubernetes.io/name: {{ include "apm-server.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ {{- range $key, $value := .Values.podLabels }}
+ {{ $key }}: {{ $value }}
+ {{- end }}
+ annotations:
+ checksum/secret: {{ toYaml .Values.config | sha256sum }}
+ {{- range $key, $value := .Values.podAnnotations }}
+ {{ $key }}: {{ $value }}
+ {{- end }}
+ spec:
+ containers:
+ - name: {{ .Chart.Name }}
+ image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
+ imagePullPolicy: {{ .Values.image.pullPolicy }}
+ args:
+ - "-e"
+{{- if .Values.plugins }}
+ - "--plugin"
+ - {{ .Values.plugins | join "," | quote }}
+{{- end }}
+{{- if .Values.extraArgs }}
+{{ toYaml .Values.extraArgs | indent 8 }}
+{{- end }}
+ env:
+{{- range $key, $value := .Values.extraVars }}
+ - name: {{ $key }}
+ value: {{ $value }}
+{{- end }}
+ ports:
+ - name: http
+ containerPort: 8200
+ protocol: TCP
+ livenessProbe:
+ httpGet:
+ path: /
+ port: http
+ initialDelaySeconds: 60
+ readinessProbe:
+ httpGet:
+ path: /
+ port: http
+ initialDelaySeconds: 60
+ securityContext:
+ runAsUser: 0
+ resources:
+{{ toYaml .Values.resources | indent 10 }}
+ volumeMounts:
+ - name: apm-server-config
+ mountPath: /usr/share/apm-server/apm-server.yml
+ readOnly: true
+ subPath: apm-server.yml
+ - name: data
+ mountPath: /usr/share/apm-server/data
+{{- if .Values.extraVolumeMounts }}
+{{ toYaml .Values.extraVolumeMounts | indent 8 }}
+{{- end }}
+ volumes:
+ - name: apm-server-config
+ secret:
+ secretName: {{ template "apm-server.fullname" . }}
+ - name: data
+ hostPath:
+ path: /var/lib/apm-server
+ type: DirectoryOrCreate
+{{- if .Values.extraVolumes }}
+{{ toYaml .Values.extraVolumes | indent 6 }}
+{{- end }}
+ terminationGracePeriodSeconds: 60
+ serviceAccountName: {{ template "apm-server.serviceAccountName" . }}
+{{- if .Values.nodeSelector }}
+ nodeSelector:
+{{ toYaml .Values.nodeSelector | indent 8 }}
+ {{- end }}
+ {{- if .Values.tolerations }}
+ tolerations:
+{{ toYaml .Values.tolerations | indent 8 }}
+ {{- end }}
+ {{- if .Values.affinity }}
+ affinity:
+{{ toYaml .Values.affinity | indent 8 }}
+ {{- end }}
+{{- end }}
diff --git a/stable/apm-server/templates/ingress.yaml b/stable/apm-server/templates/ingress.yaml
new file mode 100644
index 000000000000..3a20b49ea4e3
--- /dev/null
+++ b/stable/apm-server/templates/ingress.yaml
@@ -0,0 +1,40 @@
+{{- if .Values.ingress.enabled }}
+{{- if .Values.service.enabled }}
+{{- $serviceName := include "apm-server.fullname" . -}}
+{{- $servicePort := .Values.service.port -}}
+apiVersion: extensions/v1beta1
+kind: Ingress
+metadata:
+ labels:
+ app.kubernetes.io/name: {{ include "apm-server.name" . }}
+ helm.sh/chart: {{ include "apm-server.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ {{- range $key, $value := .Values.ingress.labels }}
+ {{ $key }}: {{ $value | quote }}
+ {{- end }}
+ name: {{ template "apm-server.fullname" . }}
+ {{- with .Values.ingress.annotations }}
+ annotations:
+ {{- range $key, $value := . }}
+ {{ $key }}: {{ $value | quote }}
+ {{- end }}
+ {{- end }}
+spec:
+ backend:
+ serviceName: {{ $serviceName }}
+ servicePort: {{ $servicePort }}
+{{- if .Values.ingress.hosts }}
+ rules:
+ {{- range $host := .Values.ingress.hosts }}
+ - host: {{ $host }}
+ http:
+ paths:
+ - path: /
+ backend:
+ serviceName: {{ $serviceName }}
+ servicePort: {{ $servicePort }}
+ {{- end -}}
+{{- end -}}
+{{- end }}
+{{- end }}
diff --git a/stable/apm-server/templates/secret.yaml b/stable/apm-server/templates/secret.yaml
index 7fa592177e46..057b62615106 100644
--- a/stable/apm-server/templates/secret.yaml
+++ b/stable/apm-server/templates/secret.yaml
@@ -3,10 +3,10 @@ kind: Secret
metadata:
name: {{ template "apm-server.fullname" . }}
labels:
- app: {{ template "apm-server.name" . }}
- chart: {{ template "apm-server.chart" . }}
- release: {{ .Release.Name }}
- heritage: {{ .Release.Service }}
+ app.kubernetes.io/name: {{ include "apm-server.name" . }}
+ helm.sh/chart: {{ include "apm-server.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
type: Opaque
data:
apm-server.yml: {{ toYaml .Values.config | indent 4 | b64enc }}
diff --git a/stable/apm-server/templates/service.yaml b/stable/apm-server/templates/service.yaml
new file mode 100644
index 000000000000..0742f3e2e091
--- /dev/null
+++ b/stable/apm-server/templates/service.yaml
@@ -0,0 +1,51 @@
+{{- if .Values.service.enabled }}
+apiVersion: v1
+kind: Service
+metadata:
+ labels:
+ app.kubernetes.io/name: {{ include "apm-server.name" . }}
+ helm.sh/chart: {{ include "apm-server.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ {{- range $key, $value := .Values.service.labels }}
+ {{ $key }}: {{ $value | quote }}
+ {{- end }}
+ name: {{ template "apm-server.fullname" . }}
+ {{- with .Values.service.annotations }}
+ annotations:
+ {{- range $key, $value := . }}
+ {{ $key }}: {{ $value | quote }}
+ {{- end }}
+ {{- end }}
+spec:
+ {{- if .Values.service.loadBalancerSourceRanges }}
+ loadBalancerSourceRanges:
+ {{- range $cidr := .Values.service.loadBalancerSourceRanges }}
+ - {{ $cidr }}
+ {{- end }}
+ {{- end }}
+ type: {{ .Values.service.type }}
+ {{- if and (eq .Values.service.type "ClusterIP") .Values.service.clusterIP }}
+ clusterIP: {{ .Values.service.clusterIP }}
+ {{- end }}
+ ports:
+ - port: {{ .Values.service.port }}
+ targetPort: http
+ protocol: TCP
+{{ if (and (eq .Values.service.type "NodePort") (not (empty .Values.service.nodePort))) }}
+ nodePort: {{ .Values.service.nodePort }}
+{{ end }}
+{{- if .Values.service.portName }}
+ name: {{ .Values.service.portName }}
+{{- end }}
+{{- if .Values.service.externalIPs }}
+ externalIPs:
+{{ toYaml .Values.service.externalIPs | indent 4 }}
+{{- end }}
+ selector:
+ app.kubernetes.io/name: {{ include "apm-server.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+{{- if .Values.service.loadBalancerIP }}
+ loadBalancerIP: {{ .Values.service.loadBalancerIP }}
+{{- end }}
+{{- end }}
diff --git a/stable/apm-server/templates/serviceaccount.yaml b/stable/apm-server/templates/serviceaccount.yaml
index d7decedcc7d2..447525f6fc34 100644
--- a/stable/apm-server/templates/serviceaccount.yaml
+++ b/stable/apm-server/templates/serviceaccount.yaml
@@ -4,8 +4,8 @@ kind: ServiceAccount
metadata:
name: {{ template "apm-server.serviceAccountName" . }}
labels:
- app: {{ template "apm-server.name" . }}
- chart: {{ template "apm-server.chart" . }}
- release: {{ .Release.Name }}
- heritage: {{ .Release.Service }}
+ app.kubernetes.io/name: {{ include "apm-server.name" . }}
+ helm.sh/chart: {{ include "apm-server.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end -}}
diff --git a/stable/apm-server/values.yaml b/stable/apm-server/values.yaml
index 33fd51078f24..b05002b8e865 100644
--- a/stable/apm-server/values.yaml
+++ b/stable/apm-server/values.yaml
@@ -1,12 +1,57 @@
image:
repository: docker.elastic.co/apm/apm-server
- tag: 6.2.4
+ tag: 7.0.0
pullPolicy: IfNotPresent
+# DaemonSet or Deployment
+kind: DaemonSet
+
+# Number of replicas when kind is Deployment
+replicaCount: 1
+
+# The update strategy to apply to the Deployment or DaemonSet
+updateStrategy: {}
+ # rollingUpdate:
+ # maxUnavailable: 1
+ # type: RollingUpdate
+
+service:
+ enabled: false
+ type: ClusterIP
+ port: 8200
+ # portName: apm-server-svc
+ # clusterIP: None
+ ## External IP addresses of service
+ ## Default: nil
+ # externalIPs:
+ # - 192.168.0.1
+ #
+ ## LoadBalancer IP if service.type is LoadBalancer
+ ## Default: nil
+ # loadBalancerIP: 10.2.2.2
+ ## Limit load balancer source IPs to a list of CIDRs (where available)
+ # loadBalancerSourceRanges: []
+
+ annotations: {}
+ # Annotation example: set up SSL with an AWS certificate when service.type is LoadBalancer
+ # service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:EXAMPLE_CERT
+ labels: {}
+ ## Label example: show service URL in `kubectl cluster-info`
+ # kubernetes.io/cluster-service: "true"
+
+ingress:
+ enabled: false
+ annotations: {}
+ # Annotation example: set nginx ingress type
+ # kubernetes.io/ingress.class: nginx-public
+ labels: {}
+ hosts:
+ - apm-server-ingress.example.com
+
config:
- apm-server: {}
+ apm-server:
### Defines the host and port the server is listening on
- # host: "localhost:8200"
+ host: "0.0.0.0:8200"
## Maximum permitted size in bytes of an unzipped request accepted by the server to be processed.
# max_unzipped_size: 52428800
@@ -54,11 +99,13 @@ config:
# When a key contains a period, use this format for setting values on the command line:
# --set config."output\.file".enabled=false
output.file:
+ # enabled: false
path: "/usr/share/apm-server/data"
filename: apm-server
rotate_every_kb: 10000
number_of_files: 5
+ ## Set output.file.enabled to false when enabling the elasticsearch output
# output.elasticsearch:
# hosts: ["elasticsearch:9200"]
# protocol: "https"
@@ -110,6 +157,10 @@ resources: {}
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
nodeSelector: {}
+# Tolerations for pod assignment
+# Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
+tolerations: []
+
## Affinity configuration for pod assignment
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
affinity: {}
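
Pulling the new values together, an override that switches apm-server from the default DaemonSet to a Deployment exposed through a Service and Ingress might look like the following sketch (replica count and hostname are illustrative, not defaults):

```yaml
# values-override.yaml -- illustrative example only
kind: Deployment
replicaCount: 2

service:
  enabled: true
  type: ClusterIP
  port: 8200

ingress:
  enabled: true
  hosts:
    - apm.example.com
```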
diff --git a/stable/ark/Chart.yaml b/stable/ark/Chart.yaml
index 16bc377319c4..42d1cdb2a103 100644
--- a/stable/ark/Chart.yaml
+++ b/stable/ark/Chart.yaml
@@ -1,15 +1,14 @@
apiVersion: v1
-appVersion: 0.9.1
-description: A Helm chart for ark
+appVersion: 0.10.2
+## This Ark chart is deprecated because Ark has been renamed to Velero as of Velero version 0.11.0.
+## This chart is no longer maintained. Use the stable/velero chart for future work.
+## Please see the Helm deprecation policy in the PROCESSES.md file.
+deprecated: true
+description: DEPRECATED A Helm chart for ark
name: ark
-version: 2.0.0
+version: 4.2.2
home: https://github.com/heptio/ark
icon: https://cdn-images-1.medium.com/max/1600/1*-9mb3AKnKdcL_QD3CMnthQ.png
sources:
- https://github.com/heptio/ark
-maintainers:
- - name: domcar
- email: d-caruso@hotmail.it
- - name: unguiculus
- email: unguiculus@gmail.com
tillerVersion: ">=2.10.0"
diff --git a/stable/ark/README.md b/stable/ark/README.md
index b63a832bea49..b15702bb2d62 100644
--- a/stable/ark/README.md
+++ b/stable/ark/README.md
@@ -1,14 +1,36 @@
# Ark-server
-This helm chart installs Ark version v0.9.0
-https://github.com/heptio/ark/tree/v0.9.0
+# THIS CHART HAS BEEN DEPRECATED. PLEASE MOVE TO THE STABLE/VELERO CHART.
+This Helm chart installs Ark v0.10.2
+https://github.com/heptio/ark/tree/v0.10.2
+
+## Upgrading to v0.10
+
+Ark v0.10 introduces breaking changes. The instructions below are based on the [official upgrade guide](https://github.com/heptio/ark/blob/master/docs/upgrading-to-v0.10.md).
+
+1. Pull the latest changes in this chart. If you're using Helm dependencies, update the chart version you're using in your `requirements.yaml` and run `helm dependency update`.
+
+2. Scale down the Ark deployment
+
+```sh
+kubectl scale -n heptio-ark deploy/ark --replicas 0
+```
+
+3. Migrate the file structure of your backup storage according to the [storage layout reorganization guide](https://github.com/heptio/ark/blob/master/docs/storage-layout-reorg-v0.10.md)
+4. Adjust your `values.yaml` to match the new structure and naming
+5. Upgrade your deployment
+
+```sh
+helm upgrade --force --namespace heptio-ark ark ./ark
+```
+
## Prerequisites
### Secret for cloud provider credentials
The Ark server needs an IAM service account in order to run; if you don't have one, you must create it.
-Please follow the official documentation: https://heptio.github.io/ark/v0.9.0/cloud-common
+Please follow the official documentation: https://heptio.github.io/ark/v0.10.0/install-overview
Don't forget the step to create the secret
```
@@ -17,7 +39,7 @@ kubectl create secret generic cloud-credentials --namespace --fr
### Configuration
Please change the values.yaml according to your setup
-See here for the official documentation https://heptio.github.io/ark/v0.9.0/config-definition
+See here for the official documentation https://heptio.github.io/ark/v0.10.0/install-overview
Parameter | Description | Default | Required
--- | --- | --- | ---
@@ -41,26 +63,32 @@ Parameter | Description | Default
`rbac.server.serviceAccount.create` | Whether a new service account name that the server will use should be created | `true`
`rbac.server.serviceAccount.name` | Service account to be used for the server. If not set and `rbac.server.serviceAccount.create` is `true` a name is generated using the fullname template | ``
`resources` | Resource requests and limits | `{}`
+`initContainers` | Init containers and their specs to run in the Ark deployment's pod | `[]`
`tolerations` | List of node taints to tolerate | `[]`
`nodeSelector` | Node labels for pod assignment | `{}`
-`configuration.persistentVolumeProvider.name` | The name of the cloud provider the cluster is using for persistent volumes, if any | `{}`
-`configuration.persistentVolumeProvider.config.region` | The cloud provider region (AWS only) | ``
-`configuration.persistentVolumeProvider.config.apiTimeout` | The API timeout (Azure only) |
-`configuration.backupStorageProvider.name` | The name of the cloud provider that will be used to actually store the backups (`aws`, `azure`, `gcp`) | ``
-`configuration.backupStorageProvider.bucket` | The storage bucket where backups are to be uploaded | ``
-`configuration.backupStorageProvider.config.region` | The cloud provider region (AWS only) | ``
-`configuration.backupStorageProvider.config.s3ForcePathStyle` | Set to `true` for a local storage service like Minio | ``
-`configuration.backupStorageProvider.config.s3Url` | S3 url (primarily used for local storage services like Minio) | ``
-`configuration.backupStorageProvider.config.kmsKeyId` | KMS key for encryption (AWS only) | ``
+`configuration.backupStorageLocation.name` | The name of the cloud provider that will be used to actually store the backups (`aws`, `azure`, `gcp`) | ``
+`configuration.backupStorageLocation.bucket` | The storage bucket where backups are to be uploaded | ``
+`configuration.backupStorageLocation.config.region` | The cloud provider region (AWS only) | ``
+`configuration.backupStorageLocation.config.s3ForcePathStyle` | Set to `true` for a local storage service like Minio | ``
+`configuration.backupStorageLocation.config.s3Url` | S3 url (primarily used for local storage services like Minio) | ``
+`configuration.backupStorageLocation.config.kmsKeyId` | KMS key for encryption (AWS only) | ``
+`configuration.backupStorageLocation.prefix` | The directory inside a storage bucket where backups are to be uploaded | ``
`configuration.backupSyncPeriod` | How frequently Ark queries the object storage to make sure that the appropriate Backup resources have been created for existing backup files | `60m`
`configuration.extraEnvVars` | Key/values for extra environment variables such as AWS_CLUSTER_NAME, etc | `{}`
-`configuration.gcSyncPeriod` | How frequently Ark queries the object storage to delete backup files that have passed their TTL | `60m`
-`configuration.scheduleSyncPeriod` | How frequently Ark checks its Schedule resource objects to see if a backup needs to be initiated | `1m`
-`configuration.resourcePriorities` | An ordered list that describes the order in which Kubernetes resource objects should be restored | `[]`
+`configuration.provider` | The name of the cloud provider you are deploying Ark to (`aws`, `azure`, `gcp`) | ``
+`configuration.restoreResourcePriorities` | An ordered list that describes the order in which Kubernetes resource objects should be restored | `namespaces,persistentvolumes,persistentvolumeclaims,secrets,configmaps,serviceaccounts,limitranges,pods`
`configuration.restoreOnlyMode` | When RestoreOnly mode is on, functionality for backups, schedules, and expired backup deletion is turned off. Restores are made from existing backup files in object storage | `false`
+`configuration.volumeSnapshotLocation.name` | The name of the cloud provider the cluster is using for persistent volumes, if any | `{}`
+`configuration.volumeSnapshotLocation.config.region` | The cloud provider region (AWS only) | ``
+`configuration.volumeSnapshotLocation.config.apiTimeout` | The API timeout (`azure` only) | ``
`credentials.existingSecret` | If specified and `useSecret` is `true`, uses an existing secret with this name instead of creating one | ``
`credentials.useSecret` | Whether a secret should be used. Set this to `false` when using `kube2iam` | `true`
`credentials.secretContents` | Contents for the credentials secret | `{}`
+`deployRestic` | If `true`, enable restic deployment | `false`
+`metrics.enabled` | Set this to `true` to enable exporting Prometheus monitoring metrics | `false`
+`metrics.scrapeInterval` | Scrape interval for the Prometheus ServiceMonitor | `30s`
+`metrics.serviceMonitor.enabled` | Set this to `true` to create ServiceMonitor for Prometheus operator | `false`
+`metrics.serviceMonitor.additionalLabels` | Additional labels so that the ServiceMonitor will be discovered by Prometheus | `{}`
`schedules` | A dict of schedules | `{}`
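
For reference, a `schedules` entry in `values.yaml` might look like the following (the schedule name, cron expression, TTL, and namespaces are illustrative; the `template` block is passed through as the backup spec):

```yaml
# Illustrative example -- adjust names and values to your setup
schedules:
  nightly-backup:
    schedule: "0 0 * * *"
    template:
      ttl: "240h"
      includedNamespaces:
        - default
```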
diff --git a/stable/ark/templates/NOTES.txt b/stable/ark/templates/NOTES.txt
index 35f02949c7d3..7035f63ea8c3 100644
--- a/stable/ark/templates/NOTES.txt
+++ b/stable/ark/templates/NOTES.txt
@@ -1,3 +1,5 @@
+THIS CHART HAS BEEN DEPRECATED. PLEASE MOVE TO THE STABLE/VELERO CHART.
+
Check that Ark is up and running:
Check that the secret has been created:
diff --git a/stable/ark/templates/backups.yaml b/stable/ark/templates/backups.yaml
index 3591c03b77a3..06aa25943db5 100644
--- a/stable/ark/templates/backups.yaml
+++ b/stable/ark/templates/backups.yaml
@@ -9,6 +9,7 @@ metadata:
app: {{ template "ark.name" . }}
annotations:
"helm.sh/hook": crd-install
+ "helm.sh/hook-delete-policy": "before-hook-creation"
spec:
group: ark.heptio.com
version: v1
diff --git a/stable/ark/templates/backupstoragelocation.yaml b/stable/ark/templates/backupstoragelocation.yaml
new file mode 100644
index 000000000000..921158d9e693
--- /dev/null
+++ b/stable/ark/templates/backupstoragelocation.yaml
@@ -0,0 +1,47 @@
+{{- $root := . }}
+{{- with .Values.configuration }}
+{{- with .backupStorageLocation }}
+apiVersion: ark.heptio.com/v1
+kind: BackupStorageLocation
+metadata:
+ name: default
+ labels:
+ chart: {{ template "ark.chart" $root }}
+ heritage: {{ $root.Release.Service }}
+ release: {{ $root.Release.Name }}
+ app: {{ template "ark.name" $root }}
+spec:
+ provider: {{ .name }}
+ objectStorage:
+ bucket: {{ .bucket }}
+ {{- with .prefix }}
+ prefix: {{ . }}
+ {{- end }}
+{{- with .config }}
+ config:
+ {{- with .region }}
+ region: {{ . }}
+ {{- end }}
+ {{- with .s3ForcePathStyle }}
+ s3ForcePathStyle: {{ . | quote }}
+ {{- end }}
+ {{- with .s3Url }}
+ s3Url: {{ . }}
+ {{- end }}
+ {{- with .kmsKeyId }}
+ kmsKeyId: {{ . }}
+ {{- end }}
+ {{- with .resourceGroup }}
+ resourceGroup: {{ . }}
+ {{- end }}
+ {{- with .storageAccount }}
+ storageAccount: {{ . }}
+ {{- end }}
+ {{- if .publicUrl }}
+ {{- with .publicUrl }}
+ publicUrl: {{ . }}
+ {{- end }}
+ {{- end }}
+{{- end }}
+{{- end }}
+{{- end }}
diff --git a/stable/ark/templates/configs.yaml b/stable/ark/templates/backupstoragelocations.yaml
similarity index 67%
rename from stable/ark/templates/configs.yaml
rename to stable/ark/templates/backupstoragelocations.yaml
index 957815c148c7..60c1ad7d08fb 100644
--- a/stable/ark/templates/configs.yaml
+++ b/stable/ark/templates/backupstoragelocations.yaml
@@ -1,7 +1,7 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
- name: configs.ark.heptio.com
+ name: backupstoragelocations.ark.heptio.com
labels:
chart: {{ template "ark.chart" . }}
heritage: {{ .Release.Service }}
@@ -9,10 +9,11 @@ metadata:
app: {{ template "ark.name" . }}
annotations:
"helm.sh/hook": crd-install
+ "helm.sh/hook-delete-policy": "before-hook-creation"
spec:
group: ark.heptio.com
version: v1
scope: Namespaced
names:
- plural: configs
- kind: Config
+ plural: backupstoragelocations
+ kind: BackupStorageLocation
diff --git a/stable/ark/templates/config.yaml b/stable/ark/templates/config.yaml
deleted file mode 100644
index 1ac5b60db8fb..000000000000
--- a/stable/ark/templates/config.yaml
+++ /dev/null
@@ -1,48 +0,0 @@
-apiVersion: ark.heptio.com/v1
-kind: Config
-metadata:
- name: default
- labels:
- chart: {{ template "ark.chart" . }}
- heritage: {{ .Release.Service }}
- release: {{ .Release.Name }}
- app: {{ template "ark.name" . }}
-{{ with .Values.configuration }}
-{{- with .persistentVolumeProvider }}
-persistentVolumeProvider:
- name: {{ .name }}
-{{ with .config }}
- config:
- {{- with .region }}
- region: {{ . }}
- {{- end }}
- {{- with .apitimeout }}
- apiTimeout: {{ . }}
- {{- end }}
-{{- end }}
-{{- end }}
-{{- with .backupStorageProvider }}
-backupStorageProvider:
- name: {{ .name }}
- bucket: {{ .bucket }}
-{{- with .config }}
- config:
- {{- with .region }}
- region: {{ . }}
- {{- end }}
- {{- with .s3ForcePathStyle }}
- s3ForcePathStyle: {{ . }}
- {{- end }}
- {{- with .s3Url }}
- s3Url: {{ . }}
- {{- end }}
- {{- with .kmsKeyId }}
- kmsKeyId: {{ . }}
- {{- end }}
-{{- end }}
-{{- end }}
-backupSyncPeriod: {{ .backupSyncPeriod }}
-gcSyncPeriod: {{ .gcSyncPeriod }}
-scheduleSyncPeriod: {{ .scheduleSyncPeriod }}
-restoreOnlyMode: {{ .restoreOnlyMode }}
-{{- end }}
diff --git a/stable/ark/templates/deletebackuprequests.yaml b/stable/ark/templates/deletebackuprequests.yaml
index 4dc9baaae3f9..87fe6b1491fa 100644
--- a/stable/ark/templates/deletebackuprequests.yaml
+++ b/stable/ark/templates/deletebackuprequests.yaml
@@ -9,6 +9,7 @@ metadata:
app: {{ template "ark.name" . }}
annotations:
"helm.sh/hook": crd-install
+ "helm.sh/hook-delete-policy": "before-hook-creation"
spec:
group: ark.heptio.com
version: v1
diff --git a/stable/ark/templates/deployment.yaml b/stable/ark/templates/deployment.yaml
index 02af98cd861f..398a1aabe4c0 100644
--- a/stable/ark/templates/deployment.yaml
+++ b/stable/ark/templates/deployment.yaml
@@ -1,5 +1,5 @@
-{{- if and .Values.configuration.backupStorageProvider.name .Values.configuration.backupStorageProvider.bucket -}}
-{{- $provider := .Values.configuration.backupStorageProvider.name -}}
+{{- if .Values.configuration.provider -}}
+{{- $provider := .Values.configuration.provider -}}
apiVersion: apps/v1beta2
kind: Deployment
metadata:
@@ -20,10 +20,15 @@ spec:
labels:
release: {{ .Release.Name }}
app: {{ template "ark.name" . }}
- {{- with .Values.podAnnotations }}
+ {{- if or .Values.podAnnotations .Values.metrics.enabled }}
annotations:
-{{ toYaml . | indent 8 }}
- {{- end }}
+{{- if .Values.podAnnotations }}
+{{ toYaml .Values.podAnnotations | indent 8 }}
+{{- end }}
+{{- if .Values.metrics.enabled }}
+{{ toYaml .Values.metrics.podAnnotations | indent 8 }}
+{{- end }}
+ {{- end }}
spec:
restartPolicy: Always
serviceAccountName: {{ template "ark.serverServiceAccount" . }}
@@ -31,10 +36,29 @@ spec:
- name: ark
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
+ {{- if .Values.metrics.enabled }}
+ ports:
+ - name: monitoring
+ containerPort: 8085
+ {{- end }}
command:
- /ark
args:
- server
+ {{- with .Values.configuration }}
+ {{- with .backupSyncPeriod }}
+ - --backup-sync-period={{ . }}
+ {{- end }}
+ {{- with .resticTimeout }}
+ - --restic-timeout={{ . }}
+ {{- end }}
+ {{- if .restoreOnlyMode }}
+ - --restore-only
+ {{- end }}
+ {{- with .restoreResourcePriorities }}
+ - --restore-resource-priorities={{ . }}
+ {{- end }}
+ {{- end }}
{{- if eq $provider "azure" }}
envFrom:
- secretRef:
@@ -50,6 +74,8 @@ spec:
{{- if and .Values.credentials.useSecret (or (eq $provider "aws") (eq $provider "gcp")) }}
- name: cloud-credentials
mountPath: /credentials
+ - name: scratch
+ mountPath: /scratch
env:
{{- if eq $provider "aws" }}
- name: AWS_SHARED_CREDENTIALS_FILE
@@ -57,6 +83,8 @@ spec:
- name: GOOGLE_APPLICATION_CREDENTIALS
{{- end }}
value: /credentials/cloud
+ - name: ARK_SCRATCH_DIR
+ value: /scratch
{{ if .Values.configuration.extraEnvVars }}
{{- range $key, $value := .Values.configuration.extraEnvVars }}
- name: {{ default "none" $key }}
@@ -64,6 +92,10 @@ spec:
{{- end }}
{{- end }}
{{- end }}
+{{- if .Values.initContainers }}
+ initContainers:
+{{ toYaml .Values.initContainers | indent 8 }}
+{{- end }}
volumes:
{{- if and .Values.credentials.useSecret (or (eq $provider "aws") (eq $provider "gcp")) }}
- name: cloud-credentials
@@ -72,6 +104,8 @@ spec:
{{- end }}
- name: plugins
emptyDir: {}
+ - name: scratch
+ emptyDir: {}
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
diff --git a/stable/ark/templates/downloadrequests.yaml b/stable/ark/templates/downloadrequests.yaml
index c083fe3e69bf..c0a155d9bc44 100644
--- a/stable/ark/templates/downloadrequests.yaml
+++ b/stable/ark/templates/downloadrequests.yaml
@@ -9,6 +9,7 @@ metadata:
app: {{ template "ark.name" . }}
annotations:
"helm.sh/hook": crd-install
+ "helm.sh/hook-delete-policy": "before-hook-creation"
spec:
group: ark.heptio.com
version: v1
diff --git a/stable/ark/templates/podvolumebackups.yaml b/stable/ark/templates/podvolumebackups.yaml
index b649ccf30f02..e0e625d0872e 100644
--- a/stable/ark/templates/podvolumebackups.yaml
+++ b/stable/ark/templates/podvolumebackups.yaml
@@ -9,6 +9,7 @@ metadata:
app: {{ template "ark.name" . }}
annotations:
"helm.sh/hook": crd-install
+ "helm.sh/hook-delete-policy": "before-hook-creation"
spec:
group: ark.heptio.com
version: v1
diff --git a/stable/ark/templates/podvolumerestores.yaml b/stable/ark/templates/podvolumerestores.yaml
index 72edce146cbb..52ac283ead21 100644
--- a/stable/ark/templates/podvolumerestores.yaml
+++ b/stable/ark/templates/podvolumerestores.yaml
@@ -9,6 +9,7 @@ metadata:
app: {{ template "ark.name" . }}
annotations:
"helm.sh/hook": crd-install
+ "helm.sh/hook-delete-policy": "before-hook-creation"
spec:
group: ark.heptio.com
version: v1
diff --git a/stable/ark/templates/restic-daemonset.yaml b/stable/ark/templates/restic-daemonset.yaml
new file mode 100644
index 000000000000..0fad357f4e45
--- /dev/null
+++ b/stable/ark/templates/restic-daemonset.yaml
@@ -0,0 +1,92 @@
+{{- if .Values.deployRestic }}
+{{- $provider := .Values.configuration.provider -}}
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+ name: restic
+ labels:
+ app: {{ template "ark.name" . }}
+ chart: {{ template "ark.chart" . }}
+ heritage: {{ .Release.Service }}
+ release: {{ .Release.Name }}
+spec:
+ selector:
+ matchLabels:
+ name: restic
+ template:
+ metadata:
+ labels:
+ name: restic
+ {{- with .Values.podAnnotations }}
+ annotations:
+{{ toYaml . | indent 8 }}
+ {{- end }}
+ spec:
+ {{- if .Values.serviceAccount.server.create }}
+ serviceAccountName: {{ template "ark.serverServiceAccount" . }}
+ {{- end }}
+ securityContext:
+ runAsUser: 0
+ volumes:
+ {{- if and .Values.credentials.useSecret (or (eq $provider "aws") (eq $provider "gcp")) }}
+ - name: cloud-credentials
+ secret:
+ secretName: {{ template "ark.secretName" . }}
+ {{- end }}
+ - name: host-pods
+ hostPath:
+ path: /var/lib/kubelet/pods
+ - name: scratch
+ emptyDir: {}
+ containers:
+ - name: ark
+ image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
+ imagePullPolicy: {{ .Values.image.pullPolicy }}
+ command:
+ - /ark
+ args:
+ - restic
+ - server
+ volumeMounts:
+ {{- if and .Values.credentials.useSecret (or (eq $provider "aws") (eq $provider "gcp")) }}
+ - name: cloud-credentials
+ mountPath: /credentials
+ {{- end }}
+ - name: host-pods
+ mountPath: /host_pods
+ mountPropagation: HostToContainer
+ - name: scratch
+ mountPath: /scratch
+ {{- if and .Values.credentials.useSecret (eq $provider "azure") }}
+ envFrom:
+ - secretRef:
+ name: {{ template "ark.secretName" . }}
+ {{- end }}
+ env:
+ - name: HEPTIO_ARK_NAMESPACE
+ valueFrom:
+ fieldRef:
+ fieldPath: metadata.namespace
+ - name: NODE_NAME
+ valueFrom:
+ fieldRef:
+ fieldPath: spec.nodeName
+ - name: ARK_SCRATCH_DIR
+ value: /scratch
+ {{- if eq $provider "aws" }}
+ - name: AWS_SHARED_CREDENTIALS_FILE
+ value: /credentials/cloud
+ {{- end }}
+ {{- if eq $provider "gcp" }}
+ - name: GOOGLE_APPLICATION_CREDENTIALS
+ value: /credentials/cloud
+ {{- end }}
+ {{- if eq $provider "minio" }}
+ - name: AWS_SHARED_CREDENTIALS_FILE
+ value: /credentials/cloud
+ {{- end }}
+ {{- with .Values.resources }}
+ resources:
+{{ toYaml . | indent 12 }}
+ {{- end }}
+{{- end }}
diff --git a/stable/ark/templates/resticrepositories.yaml b/stable/ark/templates/resticrepositories.yaml
index 8ba66943ba5e..cebb2bd8a902 100644
--- a/stable/ark/templates/resticrepositories.yaml
+++ b/stable/ark/templates/resticrepositories.yaml
@@ -9,6 +9,7 @@ metadata:
app: {{ template "ark.name" . }}
annotations:
"helm.sh/hook": crd-install
+ "helm.sh/hook-delete-policy": "before-hook-creation"
spec:
group: ark.heptio.com
version: v1
diff --git a/stable/ark/templates/restores.yaml b/stable/ark/templates/restores.yaml
index 21dbce9bd5a5..cd569d67f770 100644
--- a/stable/ark/templates/restores.yaml
+++ b/stable/ark/templates/restores.yaml
@@ -9,6 +9,7 @@ metadata:
app: {{ template "ark.name" . }}
annotations:
"helm.sh/hook": crd-install
+ "helm.sh/hook-delete-policy": "before-hook-creation"
spec:
group: ark.heptio.com
version: v1
diff --git a/stable/ark/templates/schedules.yaml b/stable/ark/templates/schedules.yaml
index f7f1850a5d85..847da2f79407 100644
--- a/stable/ark/templates/schedules.yaml
+++ b/stable/ark/templates/schedules.yaml
@@ -9,6 +9,7 @@ metadata:
app: {{ template "ark.name" . }}
annotations:
"helm.sh/hook": crd-install
+ "helm.sh/hook-delete-policy": "before-hook-creation"
spec:
group: ark.heptio.com
version: v1
diff --git a/stable/ark/templates/service.yaml b/stable/ark/templates/service.yaml
new file mode 100644
index 000000000000..234db5e8afe6
--- /dev/null
+++ b/stable/ark/templates/service.yaml
@@ -0,0 +1,20 @@
+{{- if .Values.metrics.enabled }}
+apiVersion: v1
+kind: Service
+metadata:
+ name: {{ template "ark.fullname" . }}
+ labels:
+ release: {{ .Release.Name }}
+ app: {{ template "ark.name" . }}
+ chart: {{ template "ark.chart" . }}
+ heritage: {{ .Release.Service }}
+spec:
+ type: ClusterIP
+ ports:
+ - name: monitoring
+ port: 8085
+ targetPort: monitoring
+ selector:
+ release: {{ .Release.Name }}
+ app: {{ template "ark.name" . }}
+{{- end }}
diff --git a/stable/ark/templates/servicemonitor.yaml b/stable/ark/templates/servicemonitor.yaml
new file mode 100644
index 000000000000..3bb964deb990
--- /dev/null
+++ b/stable/ark/templates/servicemonitor.yaml
@@ -0,0 +1,22 @@
+{{- if and .Values.metrics.enabled .Values.metrics.serviceMonitor.enabled }}
+apiVersion: monitoring.coreos.com/v1
+kind: ServiceMonitor
+metadata:
+ name: {{ template "ark.fullname" . }}
+ labels:
+ release: {{ .Release.Name }}
+ app: {{ template "ark.name" . }}
+ chart: {{ template "ark.chart" . }}
+ heritage: {{ .Release.Service }}
+ {{- if .Values.metrics.serviceMonitor.additionalLabels }}
+{{ toYaml .Values.metrics.serviceMonitor.additionalLabels | indent 4 }}
+ {{- end }}
+spec:
+ selector:
+ matchLabels:
+ release: {{ .Release.Name }}
+ app: {{ template "ark.name" . }}
+ endpoints:
+ - port: monitoring
+ interval: {{ .Values.metrics.scrapeInterval }}
+{{- end }}
diff --git a/stable/ark/templates/volumesnapshotlocation.yaml b/stable/ark/templates/volumesnapshotlocation.yaml
new file mode 100644
index 000000000000..d6cf91d6e356
--- /dev/null
+++ b/stable/ark/templates/volumesnapshotlocation.yaml
@@ -0,0 +1,27 @@
+{{- $root := . }}
+{{- with .Values.configuration }}
+{{- with .volumeSnapshotLocation }}
+apiVersion: ark.heptio.com/v1
+kind: VolumeSnapshotLocation
+metadata:
+ name: default
+ labels:
+ chart: {{ template "ark.chart" $root }}
+ heritage: {{ $root.Release.Service }}
+ release: {{ $root.Release.Name }}
+ app: {{ template "ark.name" $root }}
+spec:
+ provider: {{ .name }}
+ objectStorage:
+ bucket: {{ .bucket }}
+{{ with .config }}
+ config:
+ {{- with .region }}
+ region: {{ . }}
+ {{- end }}
+ {{- with .apitimeout }}
+ apiTimeout: {{ . }}
+ {{- end }}
+{{- end }}
+{{- end }}
+{{- end }}
diff --git a/stable/ark/templates/volumesnapshotlocations.yaml b/stable/ark/templates/volumesnapshotlocations.yaml
new file mode 100644
index 000000000000..5ca0e5beb9a1
--- /dev/null
+++ b/stable/ark/templates/volumesnapshotlocations.yaml
@@ -0,0 +1,19 @@
+apiVersion: apiextensions.k8s.io/v1beta1
+kind: CustomResourceDefinition
+metadata:
+ name: volumesnapshotlocations.ark.heptio.com
+ labels:
+ chart: {{ template "ark.chart" . }}
+ heritage: {{ .Release.Service }}
+ release: {{ .Release.Name }}
+ app: {{ template "ark.name" . }}
+ annotations:
+ "helm.sh/hook": crd-install
+ "helm.sh/hook-delete-policy": "before-hook-creation"
+spec:
+ group: ark.heptio.com
+ version: v1
+ scope: Namespaced
+ names:
+ plural: volumesnapshotlocations
+ kind: VolumeSnapshotLocation
diff --git a/stable/ark/values.yaml b/stable/ark/values.yaml
index 27915d041f05..411f2e21ad93 100644
--- a/stable/ark/values.yaml
+++ b/stable/ark/values.yaml
@@ -1,17 +1,29 @@
image:
repository: gcr.io/heptio-images/ark
- tag: v0.9.1
+ tag: v0.10.2
pullPolicy: IfNotPresent
-# Only kube2iam: change the AWS_ACCOUNT_ID and HEPTIO_ARK_ROLE_NAME
+# Only kube2iam/kiam: change the AWS_ACCOUNT_ID and HEPTIO_ARK_ROLE_NAME
podAnnotations: {}
# iam.amazonaws.com/role: arn:aws:iam:::role/
+# prometheus.io/scrape: "true"
+# prometheus.io/port: "8085"
+# prometheus.io/path: "/metrics"
rbac:
create: true
resources: {}
+# this is the k8s spec block for initContainers:
+initContainers: []
+ # - name:
+ # image:
+ # volumeMounts:
+ # - name: plugins
+ # mountPath: /target
+
+
serviceAccount:
server:
create: true
@@ -24,25 +36,30 @@ nodeSelector: {}
## Parameters for the ' default' Config resource
## See https://heptio.github.io/ark/v0.9.0/config-definition
configuration:
- persistentVolumeProvider: {}
+ provider:
+
+ volumeSnapshotLocation: {}
# name:
# config:
# region:
# apiTimeout:
- backupStorageProvider:
+ backupStorageLocation:
name:
bucket:
+ # prefix:
config: {}
# region:
# s3ForcePathStyle:
# s3Url:
# kmsKeyId:
+ # resourceGroup:
+ # storageAccount:
+ # publicUrl:
backupSyncPeriod: 60m
- gcSyncPeriod: 60m
- scheduleSyncPeriod: 1m
- resourcePriorities: []
+ resticTimeout: 1h
+ restoreResourcePriorities: namespaces,persistentvolumes,persistentvolumeclaims,secrets,configmaps,serviceaccounts,limitranges,pods
restoreOnlyMode: false
# additional key/value pairs to be used as environment variables such as "AWS_CLUSTER_NAME: 'yourcluster.domain.tld'"
extraEnvVars: {}
@@ -61,3 +78,18 @@ credentials:
existingSecret:
useSecret: true
secretContents: {}
+
+deployRestic: false
+
+metrics:
+ enabled: false
+ scrapeInterval: 30s
+
+ # Pod annotations for Prometheus
+ podAnnotations:
+ prometheus.io/scrape: "true"
+ prometheus.io/port: "8085"
+
+ serviceMonitor:
+ enabled: false
+ additionalLabels: {}
diff --git a/stable/atlantis/Chart.yaml b/stable/atlantis/Chart.yaml
index f577b088fb3c..9ed195982b1a 100644
--- a/stable/atlantis/Chart.yaml
+++ b/stable/atlantis/Chart.yaml
@@ -1,8 +1,8 @@
apiVersion: v1
-appVersion: "v0.4.11"
+appVersion: "v0.7.1"
description: A Helm chart for Atlantis https://www.runatlantis.io
name: atlantis
-version: 1.1.2
+version: 3.4.1
keywords:
- terraform
home: https://www.runatlantis.io
@@ -14,3 +14,4 @@ maintainers:
- name: callmeradical
- name: jeff-knurek
- name: lkysow
+- name: anubhavmishra
diff --git a/stable/atlantis/OWNERS b/stable/atlantis/OWNERS
index 8e1ed1f60d57..74183c347c0a 100644
--- a/stable/atlantis/OWNERS
+++ b/stable/atlantis/OWNERS
@@ -4,9 +4,11 @@ approvers:
- lkysow
- jeff-knurek
- sstarcher
+- anubhavmishra
reviewers:
- jkodroff
- callmeradical
- lkysow
- jeff-knurek
-- sstarcher
\ No newline at end of file
+- sstarcher
+- anubhavmishra
diff --git a/stable/atlantis/README.md b/stable/atlantis/README.md
index 5b1565066d81..ac2f0af7d62a 100644
--- a/stable/atlantis/README.md
+++ b/stable/atlantis/README.md
@@ -1,68 +1,117 @@
# Atlantis
-
[Atlantis](https://www.runatlantis.io/) is a tool for safe collaboration on [Terraform](https://www.terraform.io/) repositories.
## Introduction
-
This chart creates a single pod in a StatefulSet running Atlantis. Atlantis persists Terraform [plan files](https://www.terraform.io/docs/commands/plan.html) and [lock files](https://www.terraform.io/docs/state/locking.html) to disk for the duration of a Pull/Merge Request. These files are stored in a PersistentVolumeClaim to survive Pod failures.
## Prerequisites
-
- Kubernetes 1.9+
- PersistentVolume support
## Required Configuration
-
-In order for Atlantis to start and run successfully, all of the following must be true:
-
+In order for Atlantis to start and run successfully:
1. At least one of the following sets of credentials must be defined:
- `github`
- `gitlab`
- `bitbucket`
Refer to [values.yaml](values.yaml) for detailed examples.
+ They can also be provided directly through a Kubernetes `Secret`; use the variable `vcsSecretsName` to reference it.
-1. Supply a value for `orgWhitelist`, e.g. `github.org/my_company/*`.
+1. Supply a value for `orgWhitelist`, e.g. `github.org/myorg/*`.
## Customization
-
The following options are supported. See [values.yaml](values.yaml) for more detailed documentation and examples:
-| Parameter | Description | Default |
-| ------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------- |
-| `allow_repo_config` | Whether to allow the use of [atlantis.yaml files](https://www.runatlantis.io/docs/atlantis-yaml-reference.html). | `false` |
-| `atlantis_data_storage` | The amount of storage available for Atlantis' data directory (mostly used to check out git repositories). | `5Gi` |
-| `aws.config` | Contents of a file to be mounted to `~atlantis/.aws/config`. | n/a |
-| `aws.credentials` | Contents of a file to be mounted to `~atlantis/.aws/credentials`. | n/a |
-| `bitbucket.user` | The name of the Atlantis Bitbucket user.,This value should not be defined if Atlantis is not working against Bitbucket repositories. | n/a |
-| `bitbucket.token` | The personal access token for the Atlantis Bitbucket user.,This value should not be defined if Atlantis is not integrated with Bitbucket repositories. | n/a |
-| `bitbucket.secret` | Bitbucket Server only: The webhook secret for Bitbucket repositories. | n/a |
-| `bitbucket.base_url` | Bitbucket Server only: The hostname of your Bitbucket Server installation. | n/a |
-| `environment` | Additional environment variables for the container. | n/a |
-| `gitconfig` | Contents of a file to be mounted to `~atlantis/.gitconfig`. Use to allow redirection for Terraform modules in private git repositories. | n/a |
-| `github.user` | The name of the Atlantis GitHub user. This value should defined if Atlantis is not working against GitHub repositories. | n/a |
-| `github.token` | The personal access token for the Atlantis GitHub user.,This value should not be defined if Atlantis is not integrated with GitHub repositories. | n/a |
-| `github.secret` | The repository or organization-wide secret for the Atlantis GitHub integration.,All repositories in GitHub that are to be integrated with Atlantis must share the same value.,For this reason, the Atlantis maintainers recommend an organization-scoped webhook.,This value should not be defined if Atlantis is not integrated with GitHub repositories. | n/a |
-| `github.hostname` | GitHub Enterprise only: The hostname of your GitHub Enterprise installation. | n/a |
-| `gitlab.user` | The repository or organization-wide secret for the Atlantis GitLab,integration.,All repositories in GitHub that are to be integrated with,Atlantis must share the same value.,For this reason, the Atlantis,maintainers recommend an organization-scoped webhook.,This value should,not be defined if Atlantis is not integrated with GitLab repositories. | n/a |
-| `gitlab.token` | The personal access token for the Atlantis GitHub user.,This value should not be defined if Atlantis is not integrated with GitHub repositories. | n/a |
-| `gitlab.secret` | The repository secret for the Atlantis GitLab integration.,All repositories in GitLab that are to be integrated with,Atlantis must share the same value.,(Unlike GitHub, GitLab does not support organization-wide integrations.) | n/a |
-| `gitlab.hostname` | GitLab Enterprise only: The hostname of your GitLab Enterprise installation. | n/a |
-| `podTemplate.annotations` | Specifies additional annotations to use for the StatefulSet | n/a |
-| `logLevel` | The level to use for logging. | n/a |
-| `orgWhiteList` | A whitelist of repositories from which Atlantis will accept webhooks. **This value must be changed for Atlantis to function correctly.** Accepts wildcard characters (`*`). Multiple values may be comma-separated. | `github.com/yourorg/*` |
-| `serviceAccount.create` | Whether to create a Kubernetes ServiceAccount if no account matching `serviceAccount.name` exists. | `true` |
-| `serviceAccount.name` | The name of the Kubernetes ServiceAccount under which Atlantis should run. If no value is specified and `serviceAccount.create` is `true`, Atlantis will be run under a ServiceAccount whose name is the FullName of the Helm chart's instance. If no value is specified and `serviceAccount.create` is `false`, Atlantis will be run under the `default` ServiceAccount. | n/a |
-| `serviceAccountSecrets.credentials` | JSON object representing secrets for a Google Cloud Platform production service account. Only applicable if hosting Atlantis on GKE. | n/a |
-| `serviceAccountSecrets.credentials-staging` | JSON object representing secrets for a Google Cloud Platform staging,service account. Only applicable if hosting Atlantis on GKE. | n/a |
-| `service.port` | Specifies the port of the service. | `80` |
-| `service.loadBalancerSourceRanges` | An array of whitelisted IP addresses for the Atlantis Service in Kubernetes. If no value is specified, the Service will allow incoming traffic from all IP addresses (0.0.0.0/0). | n/a |
-| `tlsSecretName` | The name of a Kubernetes Secret for Atlantis' HTTPS certificate containing the following data items `tls.crt` with the public certificate and `tls.key` with the private key. | n/a |
-
-
+| Parameter | Description | Default |
+|---------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------|
+| `dataStorage` | Amount of storage available for Atlantis' data directory (mostly used to check out git repositories). | `5Gi` |
+| `aws.config` | Contents of a file to be mounted to `~/.aws/config`. | n/a |
+| `aws.credentials` | Contents of a file to be mounted to `~/.aws/credentials`. | n/a |
+| `bitbucket.user` | Name of the Atlantis Bitbucket user. | n/a |
+| `bitbucket.token` | Personal access token for the Atlantis Bitbucket user. | n/a |
+| `bitbucket.secret` | Webhook secret for Bitbucket repositories (Bitbucket Server only). | n/a |
+| `bitbucket.baseURL` | Base URL of Bitbucket Server installation. | n/a |
+| `environment` | Map of environment variables for the container. | `{}` |
+| `imagePullSecrets` | List of secrets for pulling images from private registries. | `[]` |
+| `gitconfig` | Contents of a file to be mounted to `~/.gitconfig`. Use to allow redirection for Terraform modules in private git repositories. | n/a |
+| `command` | Optionally override the [`command` field](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.14/#container-v1-core) of the Atlantis Docker container. If not set, the default Atlantis `ENTRYPOINT` is used. Must be an array. | n/a |
+| `github.user` | Name of the Atlantis GitHub user. | n/a |
+| `github.token` | Personal access token for the Atlantis GitHub user. | n/a |
+| `github.secret` | Repository or organization-wide webhook secret for the Atlantis GitHub integration. All repositories in GitHub that are to be integrated with Atlantis must share the same value. | n/a |
+| `github.hostname` | Hostname of your GitHub Enterprise installation. | n/a |
+| `gitlab.user` | Name of the Atlantis GitLab user. | n/a |
+| `gitlab.token` | Personal access token for the Atlantis GitLab user. | n/a |
+| `gitlab.secret` | Webhook secret for the Atlantis GitLab integration. All repositories in GitLab that are to be integrated with Atlantis must share the same value. | n/a |
+| `gitlab.hostname` | Hostname of your GitLab Enterprise installation. | n/a |
+| `vcsSecretsName` | Name of a pre-existing Kubernetes `Secret` containing `token` and `secret` keys set to your VCS provider's API token and webhook secret, respectively. Use this instead of `github.token`/`github.secret`, etc. (optional) | n/a |
+| `podTemplate.annotations` | Additional annotations to use for the StatefulSet. | n/a |
+| `logLevel` | Level to use for logging. Either debug, info, warn, or error. | n/a |
+| `orgWhitelist` | Whitelist of repositories from which Atlantis will accept webhooks. **This value must be set for Atlantis to function correctly.** Accepts wildcard characters (`*`). Multiple values may be comma-separated. | none |
+| `repoConfig` | [Server Side Repo Configuration](https://www.runatlantis.io/docs/server-side-repo-config.html) as a raw YAML string. Configuration is stored in ConfigMap. | n/a |
+| `defaultTFVersion` | Default Terraform version to be used by the Atlantis server. | n/a |
+| `allowForkPRs` | Allow Atlantis to run on forked pull requests. | `false` |
+| `serviceAccount.create` | Whether to create a Kubernetes ServiceAccount if no account matching `serviceAccount.name` exists. | `true` |
+| `serviceAccount.name` | Name of the Kubernetes ServiceAccount under which Atlantis should run. If no value is specified and `serviceAccount.create` is `true`, Atlantis will be run under a ServiceAccount whose name is the FullName of the Helm chart's instance, else Atlantis will be run under the `default` ServiceAccount. | n/a |
+| `serviceAccountSecrets.credentials` | JSON string representing secrets for a Google Cloud Platform production service account. Only applicable if hosting Atlantis on GKE. | n/a |
+| `serviceAccountSecrets.credentials-staging` | JSON string representing secrets for a Google Cloud Platform staging service account. Only applicable if hosting Atlantis on GKE. | n/a |
+| `service.port` | Port of the `Service`. | `80` |
+| `service.loadBalancerSourceRanges` | Array of whitelisted IP addresses for the Atlantis Service. If no value is specified, the Service will allow incoming traffic from all IP addresses (0.0.0.0/0). | n/a |
+| `storageClassName` | Storage class of the volume mounted for the Atlantis data directory. | n/a |
+| `tlsSecretName` | Name of a Secret for Atlantis' HTTPS certificate containing the following data items `tls.crt` with the public certificate and `tls.key` with the private key. | n/a |
+| `ingress.enabled` | Whether to create a Kubernetes Ingress. | `true` |
+| `ingress.annotations` | Additional annotations to use for the Ingress. | `{}` |
+| `ingress.path` | Path to use in the `Ingress`. Should be set to `/*` if using gce-ingress in Google Cloud. | `/` |
+| `ingress.host` | Domain name the Kubernetes Ingress rule looks for. Set it to the domain Atlantis will be hosted on. | `chart-example.local` |
+| `ingress.tls` | Kubernetes tls block. See [Kubernetes docs](https://kubernetes.io/docs/concepts/services-networking/ingress/#tls) for details. | `[]` |
+
+**NOTE**: All the [Server Configurations](https://www.runatlantis.io/docs/server-configuration.html) are passed as [Environment Variables](https://www.runatlantis.io/docs/server-configuration.html#environment-variables).
+
+
+## Upgrading
+### From 2.* to 3.*
+* The following value names have been removed. They are replaced by [Server Side Repo Configuration](https://www.runatlantis.io/docs/server-side-repo-config.html).
+ * `requireApproval`
+ * `requireMergeable`
+ * `allowRepoConfig`
+
+To replicate your previous configuration, run Atlantis locally with your previous flags and Atlantis will print out the equivalent repo-config, for example:
+
+```console
+$ atlantis server --allow-repo-config --require-approval --require-mergeable --gh-user=foo --gh-token=bar --repo-whitelist='*'
+WARNING: Flags --require-approval, --require-mergeable and --allow-repo-config have been deprecated.
+Create a --repo-config file with the following config instead:
+
+---
+repos:
+- id: /.*/
+ apply_requirements: [approved, mergeable]
+ allowed_overrides: [apply_requirements, workflow]
+ allow_custom_workflows: true
+
+or use --repo-config-json='{"repos":[{"id":"/.*/", "apply_requirements":["approved", "mergeable"], "allowed_overrides":["apply_requirements","workflow"], "allow_custom_workflows":true}]}'
+```
+
+Then use this YAML in the new repoConfig value:
+
+```yaml
+repoConfig: |
+ ---
+ repos:
+ - id: /.*/
+ apply_requirements: [approved, mergeable]
+ allowed_overrides: [apply_requirements, workflow]
+ allow_custom_workflows: true
+```
+
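The `--repo-config-json` string printed in the warning above encodes the same structure as the YAML `repoConfig` value. A quick standard-library check (assuming only Python's built-in `json` module) that the JSON form parses to the expected repos entry:

```python
import json

# The --repo-config-json form from the Atlantis deprecation warning above,
# parsed to show it carries the same fields as the YAML repoConfig value.
raw = ('{"repos":[{"id":"/.*/", "apply_requirements":["approved", "mergeable"],'
       ' "allowed_overrides":["apply_requirements","workflow"],'
       ' "allow_custom_workflows":true}]}')
cfg = json.loads(raw)

repo = cfg["repos"][0]
assert repo["id"] == "/.*/"
assert repo["apply_requirements"] == ["approved", "mergeable"]
assert repo["allow_custom_workflows"] is True
print("JSON form matches the YAML repo-config above")
```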
+### From 1.* to 2.*
+* The following value names have changed:
+ * `allow_repo_config` => `allowRepoConfig`
+ * `atlantis_data_storage` => `dataStorage` **NOTE: more than just a snake_case change**
+ * `atlantis_data_storageClass` => `storageClassName` **NOTE: more than just a snake_case change**
+ * `bitbucket.base_url` => `bitbucket.baseURL`
## Testing the Deployment
-
To perform a smoke test of the deployment (i.e. ensure that the Atlantis UI is up and running):
1. Install the chart. Supply your own values file or use `test-values.yaml`, which has a minimal set of values required in order for Atlantis to start.
diff --git a/stable/atlantis/templates/_helpers.tpl b/stable/atlantis/templates/_helpers.tpl
index 7bc672b727b1..78cc254ec21b 100644
--- a/stable/atlantis/templates/_helpers.tpl
+++ b/stable/atlantis/templates/_helpers.tpl
@@ -59,3 +59,14 @@ Defines the internal kubernetes address to Atlantis
{{- define "atlantis.url" -}}
{{ template "atlantis.url.scheme" . }}://{{ template "atlantis.fullname" . }}.{{ .Release.Namespace }}.svc.cluster.local:{{ .Values.service.port }}
{{- end -}}
+
+{{/*
+Generates secret-webhook name
+*/}}
+{{- define "atlantis.vcsSecretsName" -}}
+{{- if .Values.vcsSecretsName -}}
+ {{ .Values.vcsSecretsName }}
+{{- else -}}
+ {{ template "atlantis.fullname" . }}-webhook
+{{- end -}}
+{{- end -}}
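The `atlantis.vcsSecretsName` helper added above implements a simple fallback: use the operator-supplied secret name if set, otherwise default to `<fullname>-webhook`. A plain-Python sketch of that logic (the function name here is illustrative, not part of the chart):

```python
def vcs_secrets_name(values, fullname):
    # Mirrors the `atlantis.vcsSecretsName` helper: prefer the user-supplied
    # vcsSecretsName value, else fall back to "<fullname>-webhook".
    return values.get("vcsSecretsName") or f"{fullname}-webhook"

# External secret referenced via values:
assert vcs_secrets_name({"vcsSecretsName": "mysecret"}, "atlantis-prod") == "mysecret"
# Default chart-managed secret:
assert vcs_secrets_name({}, "atlantis-prod") == "atlantis-prod-webhook"
```

This is why the statefulset's `secretKeyRef` entries below all switch to `{{ template "atlantis.vcsSecretsName" . }}`: a single helper keeps the chart-managed and externally-managed secret paths consistent.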
diff --git a/stable/atlantis/templates/configmap-repo-config.yaml b/stable/atlantis/templates/configmap-repo-config.yaml
new file mode 100644
index 000000000000..91e2a817face
--- /dev/null
+++ b/stable/atlantis/templates/configmap-repo-config.yaml
@@ -0,0 +1,14 @@
+{{- if .Values.repoConfig -}}
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: {{ template "atlantis.fullname" . }}-repo-config
+ labels:
+ app: {{ template "atlantis.name" . }}
+ chart: {{ template "atlantis.chart" . }}
+ release: {{ .Release.Name }}
+ heritage: {{ .Release.Service }}
+data:
+ repos.yaml: |
+{{ .Values.repoConfig | indent 4 }}
+{{- end -}}
diff --git a/stable/atlantis/templates/secret-webhook.yaml b/stable/atlantis/templates/secret-webhook.yaml
index 0ca10ee845b1..279760267941 100644
--- a/stable/atlantis/templates/secret-webhook.yaml
+++ b/stable/atlantis/templates/secret-webhook.yaml
@@ -1,3 +1,4 @@
+{{- if not .Values.vcsSecretsName }}
apiVersion: v1
kind: Secret
metadata:
@@ -18,7 +19,8 @@ data:
{{- end}}
{{- if .Values.bitbucket }}
bitbucket_token: {{ required "bitbucket.token is required if bitbucket configuration is specified." .Values.bitbucket.token | b64enc }}
- {{- if .Values.bitbucket.base_url }}
+ {{- if .Values.bitbucket.baseURL }}
bitbucket_secret: {{ required "bitbucket.secret is required if bitbucket.baseurl is specified." .Values.bitbucket.secret | b64enc }}
{{- end}}
{{- end }}
+{{- end }}
diff --git a/stable/atlantis/templates/statefulset.yaml b/stable/atlantis/templates/statefulset.yaml
index 93b7a2dc3de7..b35f2f48f68a 100644
--- a/stable/atlantis/templates/statefulset.yaml
+++ b/stable/atlantis/templates/statefulset.yaml
@@ -51,6 +51,17 @@ spec:
secret:
secretName: {{ template "atlantis.fullname" . }}-aws
{{- end }}
+ {{- if .Values.repoConfig }}
+ - name: repo-config
+ configMap:
+ name: {{ template "atlantis.fullname" . }}-repo-config
+ {{- end }}
+ {{- if .Values.imagePullSecrets }}
+ imagePullSecrets:
+ {{- range .Values.imagePullSecrets }}
+ - name: {{ . }}
+ {{- end }}
+ {{- end }}
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
@@ -61,12 +72,14 @@ spec:
exec:
command: ["/bin/sh", "-c", "cp /etc/secret-gitconfig/gitconfig /home/atlantis/.gitconfig && chown atlantis /home/atlantis/.gitconfig"]
{{- end}}
- command: ["atlantis"]
+ {{- if .Values.command }}
+ command:
+ {{- range .Values.command }}
+ - {{ . }}
+ {{- end }}
+ {{- end }}
args:
- server
- {{- if .Values.allowRepoConfig }}
- - --allow-repo-config
- {{- end }}
ports:
- name: atlantis
containerPort: 4141
@@ -75,6 +88,14 @@ spec:
- name: {{ $key }}
value: {{ $value | quote }}
{{- end }}
+ {{- if .Values.allowForkPRs }}
+ - name: ATLANTIS_ALLOW_FORK_PRS
+ value: {{ .Values.allowForkPRs | quote }}
+ {{- end }}
+ {{- if .Values.defaultTFVersion }}
+ - name: ATLANTIS_DEFAULT_TF_VERSION
+ value: {{ .Values.defaultTFVersion }}
+ {{- end }}
{{- if .Values.logLevel }}
- name: ATLANTIS_LOG_LEVEL
value: {{ .Values.logLevel | quote}}
@@ -88,9 +109,13 @@ spec:
- name: ATLANTIS_DATA_DIR
value: /atlantis-data
- name: ATLANTIS_REPO_WHITELIST
- value: {{ .Values.orgWhitelist }}
+ value: {{ toYaml .Values.orgWhitelist }}
- name: ATLANTIS_PORT
value: "4141"
+ {{- if .Values.repoConfig }}
+ - name: ATLANTIS_REPO_CONFIG
+ value: /etc/atlantis/repos.yaml
+ {{- end }}
{{- if .Values.atlantisUrl }}
- name: ATLANTIS_ATLANTIS_URL
value: {{ .Values.atlantisUrl }}
@@ -104,12 +129,12 @@ spec:
- name: ATLANTIS_GH_TOKEN
valueFrom:
secretKeyRef:
- name: {{ template "atlantis.fullname" . }}-webhook
+ name: {{ template "atlantis.vcsSecretsName" . }}
key: github_token
- name: ATLANTIS_GH_WEBHOOK_SECRET
valueFrom:
secretKeyRef:
- name: {{ template "atlantis.fullname" . }}-webhook
+ name: {{ template "atlantis.vcsSecretsName" . }}
key: github_secret
{{- if .Values.github.hostname }}
- name: ATLANTIS_GH_HOSTNAME
@@ -122,12 +147,12 @@ spec:
- name: ATLANTIS_GITLAB_TOKEN
valueFrom:
secretKeyRef:
- name: {{ template "atlantis.name" . }}-webhook
+ name: {{ template "atlantis.vcsSecretsName" . }}
key: gitlab_token
- name: ATLANTIS_GITLAB_WEBHOOK_SECRET
valueFrom:
secretKeyRef:
- name: {{ template "atlantis.fullname" . }}-webhook
+ name: {{ template "atlantis.vcsSecretsName" . }}
key: gitlab_secret
{{- if .Values.gitlab.hostname }}
- name: ATLANTIS_GITLAB_HOSTNAME
@@ -140,15 +165,15 @@ spec:
- name: ATLANTIS_BITBUCKET_TOKEN
valueFrom:
secretKeyRef:
- name: {{ template "atlantis.fullname" . }}-webhook
+ name: {{ template "atlantis.vcsSecretsName" . }}
key: bitbucket_token
- {{- if .Values.bitbucket.base_url }}
+ {{- if .Values.bitbucket.baseURL }}
- name: ATLANTIS_BITBUCKET_BASE_URL
- value: {{ .Values.bitbucket.base_url }}
+ value: {{ .Values.bitbucket.baseURL }}
- name: ATLANTIS_BITBUCKET_WEBHOOK_SECRET
valueFrom:
secretKeyRef:
- name: {{ template "atlantis.fullname" . }}-webhook
+ name: {{ template "atlantis.vcsSecretsName" . }}
key: bitbucket_secret
{{- end }}
{{- end }}
@@ -186,18 +211,23 @@ spec:
{{- end }}
{{- if .Values.gitconfig}}
- name: gitconfig-volume
- readonly: true
+ readOnly: true
mountPath: /etc/secret-gitconfig
{{- end }}
{{- if .Values.aws}}
- name: aws-volume
- readonly: true
+ readOnly: true
mountPath: /home/atlantis/.aws
{{- end }}
{{- if .Values.tlsSecretName }}
- name: tls
mountPath: /etc/tls/
{{- end }}
+ {{- if .Values.repoConfig }}
+ - name: repo-config
+ mountPath: /etc/atlantis
+ readOnly: true
+ {{- end }}
resources:
{{ toYaml .Values.resources | indent 12 }}
{{- with .Values.nodeSelector }}
@@ -217,8 +247,11 @@ spec:
name: atlantis-data
spec:
accessModes: ["ReadWriteOnce"] # Volume should not be shared by multiple nodes.
+ {{- if .Values.storageClassName }}
+ storageClassName: {{ .Values.storageClassName }} # Storage class of the volume
+ {{- end }}
resources:
requests:
# The biggest thing Atlantis stores is the Git repo when it checks it out.
# It deletes the repo after the pull request is merged.
- storage: {{ .Values.atlantis_data_storage }}
+ storage: {{ .Values.dataStorage }}
diff --git a/stable/atlantis/values.yaml b/stable/atlantis/values.yaml
index a94c2592dc19..a55194f092b4 100644
--- a/stable/atlantis/values.yaml
+++ b/stable/atlantis/values.yaml
@@ -7,7 +7,7 @@
# atlantisUrl: http://10.0.0.0
# Replace this with your own repo whitelist:
-orgWhitelist: github.com/yourorg/*
+orgWhitelist:
# logLevel: "debug"
# If using GitHub, specify like the following:
@@ -37,10 +37,12 @@ orgWhitelist: github.com/yourorg/*
# base_url: https://bitbucket.yourorganization.com
# (The chart will perform the base64 encoding for you for values that are stored in secrets.)
+# If managing secrets outside the chart for the webhook, use this variable to reference the secret name
+# vcsSecretsName: 'mysecret'
# When referencing Terraform modules in private repositories, it may be helpful
# (necessary?) to use redirection in a .gitconfig like so:
-# gitconfig:
+# gitconfig: |
# [url "https://YOUR_GH_TOKEN@github.com"]
# insteadOf = https://github.com
# [url "https://YOUR_GH_TOKEN@github.com"]
@@ -75,11 +77,39 @@ serviceAccountSecrets:
image:
repository: runatlantis/atlantis
- tag: v0.4.13
+ tag: v0.7.1
pullPolicy: IfNotPresent
-## enable using atlantis.yaml file
-allowRepoConfig: false
+## Optionally specify an array of imagePullSecrets.
+## Secrets must be manually created in the namespace.
+## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
+##
+# imagePullSecrets:
+# - myRegistryKeySecretName
+
+## Use Server Side Repo Config,
+## ref: https://www.runatlantis.io/docs/server-side-repo-config.html
+## Example default configuration
+# repoConfig: |
+# ---
+# repos:
+# - id: /.*/
+# apply_requirements: []
+# workflow: default
+# allowed_overrides: []
+# allow_custom_workflows: false
+# workflows:
+# default:
+# plan:
+# steps: [init, plan]
+# apply:
+# steps: [apply]
+
+# allowForkPRs enables Atlantis to run on forked pull requests
+allowForkPRs: false
+
+## defaultTFVersion sets the default Terraform version to be used by the Atlantis server
+# defaultTFVersion: v0.12.0
# We only need to check every 60s since Atlantis is not a high-throughput service.
livenessProbe:
@@ -132,7 +162,7 @@ resources:
cpu: 100m
# Disk space for Atlantis to check out repositories
-atlantis_data_storage: 5Gi
+dataStorage: 5Gi
replicaCount: 1
diff --git a/stable/auditbeat/.helmignore b/stable/auditbeat/.helmignore
index f0c131944441..825c00779157 100644
--- a/stable/auditbeat/.helmignore
+++ b/stable/auditbeat/.helmignore
@@ -19,3 +19,5 @@
.project
.idea/
*.tmproj
+
+OWNERS
diff --git a/stable/auditbeat/Chart.yaml b/stable/auditbeat/Chart.yaml
index 2176faea4f69..367335bcfd44 100644
--- a/stable/auditbeat/Chart.yaml
+++ b/stable/auditbeat/Chart.yaml
@@ -2,11 +2,13 @@ apiVersion: v1
description: A lightweight shipper to audit the activities of users and processes on your systems
icon: https://www.elastic.co/assets/blt27d1fd26b0862613/icon-auditbeat-bb.svg
name: auditbeat
-version: 0.4.2
-appVersion: 6.5.4
+version: 1.1.0
+appVersion: 6.7.0
home: https://www.elastic.co/products/beats/auditbeat
sources:
- https://www.elastic.co/guide/en/beats/auditbeat/current/index.html
maintainers:
+- name: cpanato
+ email: ctadeu@gmail.com
- name: mumoshu
email: ykuoka@gmail.com
diff --git a/stable/auditbeat/OWNERS b/stable/auditbeat/OWNERS
new file mode 100644
index 000000000000..cadd300e0f3c
--- /dev/null
+++ b/stable/auditbeat/OWNERS
@@ -0,0 +1,4 @@
+approvers:
+- cpanato
+reviewers:
+- cpanato
diff --git a/stable/auditbeat/README.md b/stable/auditbeat/README.md
index 80c527c90962..5ca637465889 100644
--- a/stable/auditbeat/README.md
+++ b/stable/auditbeat/README.md
@@ -10,7 +10,7 @@ By default this chart only ships a single output to a file on the local system.
## Prerequisites
-- Kubernetes 1.9+
+- Kubernetes 1.9+
## Installing the Chart
@@ -38,21 +38,21 @@ The command removes all the Kubernetes components associated with the chart and
The following table lists the configurable parameters of the auditbeat chart and their default values.
-| Parameter | Description | Default |
-|-------------------------------------|------------------------------------|-------------------------------------------|
-| `image.repository` | The image repository to pull from | `docker.elastic.co/beats/auditbeat` |
-| `image.tag` | The image tag to pull | `6.5.4` |
-| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
-| `rbac.create` | If true, create & use RBAC resources | `true` |
-| `rbac.serviceAccount` | existing ServiceAccount to use (ignored if rbac.create=true) | `default` |
-| `config` | The content of the configuration file consumed by auditbeat. See the [auditbeat documentation](https://www.elastic.co/guide/en/beats/auditbeat/current/auditbeat-reference-yml.html) for full details |
-| `plugins` | List of beat plugins |
-| `extraVars` | A map of additional environment variables | |
-| `extraVolumes`, `extraVolumeMounts` | Additional volumes and mounts, for example to provide other configuration files | |
-| `resources.requests.cpu` | CPU resource requests | |
-| `resources.limits.cpu` | CPU resource limits | |
-| `resources.requests.memory` | Memory resource requests | |
-| `resources.limits.memory` | Memory resource limits | |
+| Parameter | Description | Default |
+| ----------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------- |
+| `image.repository` | The image repository to pull from | `docker.elastic.co/beats/auditbeat` |
+| `image.tag` | The image tag to pull | `6.7.0` |
+| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
+| `rbac.create` | If true, create & use RBAC resources | `true` |
+| `rbac.serviceAccount` | existing ServiceAccount to use (ignored if rbac.create=true) | `default` |
+| `config` | The content of the configuration file consumed by auditbeat. See the [auditbeat documentation](https://www.elastic.co/guide/en/beats/auditbeat/current/auditbeat-reference-yml.html) for full details | |
+| `plugins` | List of beat plugins | |
+| `extraVars` | A map of additional environment variables | |
+| `extraVolumes`, `extraVolumeMounts` | Additional volumes and mounts, for example to provide other configuration files | |
+| `resources.requests.cpu` | CPU resource requests | |
+| `resources.limits.cpu` | CPU resource limits | |
+| `resources.requests.memory` | Memory resource requests | |
+| `resources.limits.memory` | Memory resource limits | |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
diff --git a/stable/auditbeat/templates/daemonset.yaml b/stable/auditbeat/templates/daemonset.yaml
index 3aec114539c8..88a445e6bc05 100644
--- a/stable/auditbeat/templates/daemonset.yaml
+++ b/stable/auditbeat/templates/daemonset.yaml
@@ -46,6 +46,14 @@ spec:
{{ toYaml .Values.extraArgs | indent 8 }}
{{- end }}
env:
+ - name: POD_NAMESPACE
+ valueFrom:
+ fieldRef:
+ fieldPath: metadata.namespace
+ - name: NODE_NAME
+ valueFrom:
+ fieldRef:
+ fieldPath: spec.nodeName
{{- range $key, $value := .Values.extraVars }}
- name: {{ $key }}
value: {{ $value }}
diff --git a/stable/auditbeat/values.yaml b/stable/auditbeat/values.yaml
index 1d65aba05ed6..0bdda6d94561 100644
--- a/stable/auditbeat/values.yaml
+++ b/stable/auditbeat/values.yaml
@@ -1,6 +1,6 @@
image:
repository: docker.elastic.co/beats/auditbeat
- tag: 6.5.4
+ tag: 6.7.0
pullPolicy: IfNotPresent
config:
diff --git a/stable/aws-iam-authenticator/.helmignore b/stable/aws-iam-authenticator/.helmignore
new file mode 100644
index 000000000000..f0c131944441
--- /dev/null
+++ b/stable/aws-iam-authenticator/.helmignore
@@ -0,0 +1,21 @@
+# Patterns to ignore when building packages.
+# This supports shell glob matching, relative path matching, and
+# negation (prefixed with !). Only one pattern per line.
+.DS_Store
+# Common VCS dirs
+.git/
+.gitignore
+.bzr/
+.bzrignore
+.hg/
+.hgignore
+.svn/
+# Common backup files
+*.swp
+*.bak
+*.tmp
+*~
+# Various IDEs
+.project
+.idea/
+*.tmproj
diff --git a/stable/aws-iam-authenticator/Chart.yaml b/stable/aws-iam-authenticator/Chart.yaml
new file mode 100644
index 000000000000..92bf3ba79b93
--- /dev/null
+++ b/stable/aws-iam-authenticator/Chart.yaml
@@ -0,0 +1,11 @@
+apiVersion: v1
+appVersion: "v0.1.0"
+description: A Helm chart for aws-iam-authenticator
+name: aws-iam-authenticator
+version: 0.1.0
+home: https://github.com/kubernetes-sigs/aws-iam-authenticator
+maintainers:
+ - name: plumdog
+ email: plummer574@gmail.com
+sources:
+ - https://github.com/kubernetes-sigs/aws-iam-authenticator
diff --git a/stable/aws-iam-authenticator/README.md b/stable/aws-iam-authenticator/README.md
new file mode 100644
index 000000000000..65f3120c9977
--- /dev/null
+++ b/stable/aws-iam-authenticator/README.md
@@ -0,0 +1,36 @@
+# AWS IAM Authenticator
+
+See https://github.com/kubernetes-sigs/aws-iam-authenticator
+
+In particular, make sure that you have configured your API server as in
+https://github.com/kubernetes-sigs/aws-iam-authenticator#how-do-i-use-it. (This
+chart only installs the DaemonSet and a ConfigMap.)
+
+## Values
+
+| Config | Description | Default |
+| ------ | ----------- | ------- |
+| `image.repository` | Image repo | `gcr.io/heptio-images/authenticator` |
+| `image.tag` | Image tag | `v0.1.0` |
+| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
+| `config` | All the config, see below | `{}` |
+| `resources` | Pod resources | `{}` |
+| `hostPathConfig.output` | HostPath output | `/srv/kubernetes/aws-iam-authenticator/` |
+| `hostPathConfig.state` | HostPath state | `/srv/kubernetes/aws-iam-authenticator/` |
+
+### Config
+
+The value set for `config` is where all the action happens: it is how you
+map AWS IAM roles to groups in the cluster. See the aws-iam-authenticator
+docs for all of the possible options.
+
+A simple example values file might look like:
+```yaml
+config:
+ clusterID: mycluster.io
+ server:
+ mapRoles:
+ - groups:
+ - developers # the name of a group within Kubernetes
+ roleARN: arn:aws:iam::000000000000:role/developer # the ARN of a role in AWS
+ username: developer
+```
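+
+With the mapping above saved to a values file, a typical Helm 2 install looks
+like this (the release name and file name are only examples):
+
+```console
+helm install --name my-release -f my-values.yaml stable/aws-iam-authenticator
+```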
diff --git a/stable/aws-iam-authenticator/templates/_helpers.tpl b/stable/aws-iam-authenticator/templates/_helpers.tpl
new file mode 100644
index 000000000000..cc1a336c77db
--- /dev/null
+++ b/stable/aws-iam-authenticator/templates/_helpers.tpl
@@ -0,0 +1,32 @@
+{{/* vim: set filetype=mustache: */}}
+{{/*
+Expand the name of the chart.
+*/}}
+{{- define "aws-iam-authenticator.name" -}}
+{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+
+{{/*
+Create a default fully qualified app name.
+We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
+If release name contains chart name it will be used as a full name.
+*/}}
+{{- define "aws-iam-authenticator.fullname" -}}
+{{- if .Values.fullnameOverride -}}
+{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
+{{- else -}}
+{{- $name := default .Chart.Name .Values.nameOverride -}}
+{{- if contains $name .Release.Name -}}
+{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
+{{- else -}}
+{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Create chart name and version as used by the chart label.
+*/}}
+{{- define "aws-iam-authenticator.chart" -}}
+{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
diff --git a/stable/aws-iam-authenticator/templates/configmap.yaml b/stable/aws-iam-authenticator/templates/configmap.yaml
new file mode 100644
index 000000000000..b082b08e7d69
--- /dev/null
+++ b/stable/aws-iam-authenticator/templates/configmap.yaml
@@ -0,0 +1,12 @@
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: {{ template "aws-iam-authenticator.fullname" . }}
+ labels:
+ app: {{ template "aws-iam-authenticator.name" . }}
+ chart: {{ template "aws-iam-authenticator.chart" . }}
+ release: {{ .Release.Name }}
+ heritage: {{ .Release.Service }}
+data:
+ config.yaml: |
+{{ toYaml .Values.config | indent 4 }}
diff --git a/stable/aws-iam-authenticator/templates/daemonset.yaml b/stable/aws-iam-authenticator/templates/daemonset.yaml
new file mode 100644
index 000000000000..66aa35d88556
--- /dev/null
+++ b/stable/aws-iam-authenticator/templates/daemonset.yaml
@@ -0,0 +1,67 @@
+apiVersion: extensions/v1beta1
+kind: DaemonSet
+metadata:
+ name: {{ template "aws-iam-authenticator.fullname" . }}
+ labels:
+ app: {{ template "aws-iam-authenticator.name" . }}
+ chart: {{ template "aws-iam-authenticator.chart" . }}
+ release: {{ .Release.Name }}
+ heritage: {{ .Release.Service }}
+spec:
+ updateStrategy:
+ type: RollingUpdate
+ template:
+ metadata:
+ annotations:
+ scheduler.alpha.kubernetes.io/critical-pod: ""
+ checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
+ labels:
+ app: {{ template "aws-iam-authenticator.name" . }}
+ release: {{ .Release.Name }}
+ spec:
+ # run on the host network (don't depend on CNI)
+ hostNetwork: true
+
+ # run on each master node
+ nodeSelector:
+ node-role.kubernetes.io/master: ""
+ tolerations:
+ - effect: NoSchedule
+ key: node-role.kubernetes.io/master
+ - key: CriticalAddonsOnly
+ operator: Exists
+
+ # run `aws-iam-authenticator server` with three volumes
+ # - config (mounted from the ConfigMap at /etc/aws-iam-authenticator/config.yaml)
+ # - state (persisted TLS certificate and keys, mounted from the host)
+ # - output (output kubeconfig to plug into your apiserver configuration, mounted from the host)
+ containers:
+ - name: {{ template "aws-iam-authenticator.fullname" . }}
+          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
+          imagePullPolicy: {{ .Values.image.pullPolicy }}
+ args:
+ - server
+ - --config=/etc/aws-iam-authenticator/config.yaml
+ - --state-dir=/var/aws-iam-authenticator
+ - --generate-kubeconfig=/etc/kubernetes/aws-iam-authenticator/kubeconfig.yaml
+
+ resources:
+{{ toYaml .Values.resources | indent 10 }}
+
+ volumeMounts:
+ - name: config
+ mountPath: /etc/aws-iam-authenticator/
+ - name: state
+ mountPath: /var/aws-iam-authenticator/
+ - name: output
+ mountPath: /etc/kubernetes/aws-iam-authenticator/
+
+ volumes:
+ - name: config
+ configMap:
+ name: {{ template "aws-iam-authenticator.fullname" . }}
+ - name: output
+ hostPath:
+ path: {{ .Values.hostPathConfig.output }}
+ - name: state
+ hostPath:
+ path: {{ .Values.hostPathConfig.state }}
diff --git a/stable/aws-iam-authenticator/values.yaml b/stable/aws-iam-authenticator/values.yaml
new file mode 100644
index 000000000000..94eac6a81082
--- /dev/null
+++ b/stable/aws-iam-authenticator/values.yaml
@@ -0,0 +1,12 @@
+image:
+ repository: gcr.io/heptio-images/authenticator
+ tag: v0.1.0
+ pullPolicy: IfNotPresent
+
+config: {}
+
+resources: {}
+
+hostPathConfig:
+ output: /srv/kubernetes/aws-iam-authenticator/
+ state: /srv/kubernetes/aws-iam-authenticator/
diff --git a/stable/bitcoind/Chart.yaml b/stable/bitcoind/Chart.yaml
index a37a667e134a..3d284345d36c 100644
--- a/stable/bitcoind/Chart.yaml
+++ b/stable/bitcoind/Chart.yaml
@@ -1,7 +1,7 @@
apiVersion: v1
name: bitcoind
-version: 0.1.5
-appVersion: 0.15.1
+version: 0.2.1
+appVersion: 0.17.1
description: Bitcoin is an innovative payment network and a new kind of money.
keywords:
- bitcoind
diff --git a/stable/bitcoind/README.md b/stable/bitcoind/README.md
index 2236e9eefd95..e7a76c97d96e 100644
--- a/stable/bitcoind/README.md
+++ b/stable/bitcoind/README.md
@@ -40,21 +40,22 @@ The command removes all the Kubernetes components associated with the chart and
The following table lists the configurable parameters of the bitcoind chart and their default values.
-Parameter | Description | Default
------------------------ | ---------------------------------- | ----------------------------------------------------------
-`image.repository` | Image source repository name | `arilot/docker-bitcoind`
-`image.tag` | `bitcoind` release tag. | `0.15.1`
-`image.pullPolicy` | Image pull policy | `IfNotPresent`
-`service.rpcPort` | RPC port | `8332`
-`service.p2pPort` | P2P port | `8333`
-`service.testnetPort` | Testnet port | `18332`
-`service.testnetP2pPort` | Testnet p2p ports | `18333`
-`service.selector` | Node selector | `tx-broadcast-svc`
-`persistence.enabled` | Create a volume to store data | `true`
-`persistence.accessMode` | ReadWriteOnce or ReadOnly | `ReadWriteOnce`
-`persistence.size` | Size of persistent volume claim | `300Gi`
-`resources` | CPU/Memory resource requests/limits| `{}`
-`configurationFile` | Config file ConfigMap entry |
+Parameter | Description | Default
+------------------------------- | ------------------------------------------------- | ----------------------------------------------------------
+`image.repository` | Image source repository name | `arilot/docker-bitcoind`
+`image.tag` | `bitcoind` release tag. | `0.17.1`
+`image.pullPolicy` | Image pull policy | `IfNotPresent`
+`service.rpcPort` | RPC port | `8332`
+`service.p2pPort` | P2P port | `8333`
+`service.testnetPort` | Testnet port | `18332`
+`service.testnetP2pPort` | Testnet p2p ports | `18333`
+`service.selector` | Node selector | `tx-broadcast-svc`
+`persistence.enabled` | Create a volume to store data | `true`
+`persistence.accessMode` | ReadWriteOnce or ReadOnly | `ReadWriteOnce`
+`persistence.size` | Size of persistent volume claim | `300Gi`
+`resources` | CPU/Memory resource requests/limits | `{}`
+`configurationFile` | Config file ConfigMap entry |
+`terminationGracePeriodSeconds` | Wait time before forcefully terminating container | `30`
For more information about Bitcoin configuration please see [Bitcoin.conf_Configuration_File](https://en.bitcoin.it/wiki/Running_Bitcoin#Bitcoin.conf_Configuration_File).
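+
+For example, a custom configuration can be supplied via `configurationFile`,
+which the chart mounts through a ConfigMap. This is only a sketch, assuming a
+map of file name to file contents; the settings shown are illustrative and
+should be adjusted for your deployment:
+
+```yaml
+configurationFile:
+  bitcoin.conf: |-
+    # example settings only
+    server=1
+    txindex=1
+```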
diff --git a/stable/bitcoind/templates/deployment.yaml b/stable/bitcoind/templates/deployment.yaml
index b0b2decd318e..fa1dd2cdbc8b 100644
--- a/stable/bitcoind/templates/deployment.yaml
+++ b/stable/bitcoind/templates/deployment.yaml
@@ -20,6 +20,7 @@ spec:
app: {{ template "bitcoind.name" . }}
release: {{ .Release.Name }}
spec:
+ terminationGracePeriodSeconds: {{ .Values.terminationGracePeriodSeconds }}
{{- if .Values.configurationFile }}
initContainers:
- name: copy-bitcoind-config
@@ -68,4 +69,4 @@ spec:
claimName: {{ .Values.persistence.existingClaim | default (include "bitcoind.fullname" .) }}
{{- else }}
emptyDir: {}
- {{- end -}}
+ {{- end -}}
\ No newline at end of file
diff --git a/stable/bitcoind/values.yaml b/stable/bitcoind/values.yaml
index 87b7039b57d7..2edd20421681 100644
--- a/stable/bitcoind/values.yaml
+++ b/stable/bitcoind/values.yaml
@@ -1,10 +1,10 @@
# Default values for bitcoind.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
-
+terminationGracePeriodSeconds: 30
image:
repository: arilot/docker-bitcoind
- tag: 0.15.1
+ tag: 0.17.1
pullPolicy: IfNotPresent
service:
diff --git a/stable/bookstack/Chart.yaml b/stable/bookstack/Chart.yaml
index c315f4dc3861..805c2720f55d 100644
--- a/stable/bookstack/Chart.yaml
+++ b/stable/bookstack/Chart.yaml
@@ -1,8 +1,8 @@
apiVersion: v1
-appVersion: 0.24.3
+appVersion: 0.25.2
description: BookStack is a simple, self-hosted, easy-to-use platform for organising and storing information.
name: bookstack
-version: 1.0.1
+version: 1.1.0
home: https://www.bookstackapp.com/
icon: https://github.com/BookStackApp/website/blob/master/static/images/logo.png
sources:
diff --git a/stable/bookstack/README.md b/stable/bookstack/README.md
index 77f83263f896..3bcdbd454c4d 100644
--- a/stable/bookstack/README.md
+++ b/stable/bookstack/README.md
@@ -49,7 +49,7 @@ The following table lists the configurable parameters of the Redmine chart and t
| --------------------------------- | ---------------------------------------- | ------------------------------------------------------- |
| `replicaCount` | Number of replicas to start | `1` |
| `image.repository` | Bookstack image name | `solidnerd/bookstack` |
-| `image.tag` | Bookstack image tag | `0.24.3` |
+| `image.tag` | Bookstack image tag | `0.25.2` |
| `image.pullPolicy` | Bookstack image pull policy | `IfNotPresent` |
| `externalDatabase.host` | Host of the external database | `nil` |
| `externalDatabase.port` | Port of the external database | `3306` |
diff --git a/stable/bookstack/templates/deployment.yaml b/stable/bookstack/templates/deployment.yaml
index bc12ddd5028d..8a0d3c2e4fbe 100644
--- a/stable/bookstack/templates/deployment.yaml
+++ b/stable/bookstack/templates/deployment.yaml
@@ -89,14 +89,14 @@ spec:
- name: uploads
mountPath: /var/www/bookstack/public/uploads
- name: storage
- mountPath: /var/www/bookstack/public/storage
+ mountPath: /var/www/bookstack/storage/uploads
resources:
{{ toYaml .Values.resources | indent 12 }}
volumes:
- name: uploads
{{- if .Values.persistence.uploads.enabled }}
persistentVolumeClaim:
- claimName: {{ .Values.persistence.storage.existingClaim | default (printf "%s-%s" (include "bookstack.fullname" .) "uploads") }}
+ claimName: {{ .Values.persistence.uploads.existingClaim | default (printf "%s-%s" (include "bookstack.fullname" .) "uploads") }}
{{- else }}
emptyDir: {}
{{- end }}
diff --git a/stable/bookstack/values.yaml b/stable/bookstack/values.yaml
index 4b5e63632ef3..2b71fab885c1 100644
--- a/stable/bookstack/values.yaml
+++ b/stable/bookstack/values.yaml
@@ -6,7 +6,7 @@ replicaCount: 1
image:
repository: solidnerd/bookstack
- tag: 0.24.3
+ tag: 0.25.2
pullPolicy: IfNotPresent
app:
diff --git a/stable/burrow/Chart.yaml b/stable/burrow/Chart.yaml
index ee648cc0f26b..3e4398cd83c5 100644
--- a/stable/burrow/Chart.yaml
+++ b/stable/burrow/Chart.yaml
@@ -1,9 +1,9 @@
name: burrow
-version: 1.0.1
-appVersion: 0.23.3
+version: 1.3.0
+appVersion: 0.25.1
description: Burrow is a permissionable smart contract machine
home: https://github.com/hyperledger/burrow
-icon: https://wiki.hyperledger.org/_media/projects/hyperledger_burrow_logo_color.png
+icon: https://raw.githubusercontent.com/hyperledger/burrow/develop/docs/assets/images/burrow.png
keywords:
- blockchain
- smart_contracts
diff --git a/stable/burrow/README.md b/stable/burrow/README.md
index b3752f951f8c..e112065ff744 100644
--- a/stable/burrow/README.md
+++ b/stable/burrow/README.md
@@ -13,7 +13,7 @@ This chart bootstraps a burrow network on a [Kubernetes](http://kubernetes.io) c
 To deploy a new blockchain network, this chart requires that two objects be present in the same Kubernetes namespace: a configmap to house the genesis file and a secret to hold any validator keys. The provided script, `initialize.sh`, automatically provisions a number of files using the [burrow](https://github.com/hyperledger/burrow) toolkit, so please first ensure that `burrow --version` matches the `image.tag` in the [configuration](#configuration). This sequence also requires that the [jq](https://stedolan.github.io/jq/) binary is installed. Two files will be generated; the first of note is `chain-info.yaml`, which contains the two necessary Kubernetes specifications to be added to the cluster:
```bash
-curl -LO https://raw.githubusercontent.com/helm/charts/master/initialize.sh
+curl -LO https://raw.githubusercontent.com/helm/charts/master/stable/burrow/initialize.sh
CHAIN_NODES=4 CHAIN_NAME="my-release" ./initialize.sh
kubectl apply --filename chain-info.yaml
```
@@ -51,67 +51,65 @@ The following table lists the configurable parameters of the Burrow chart and it
| Parameter | Description | Default |
| --------- | ----------- | ------- |
-| `image.repository` | image repository | `"hyperledger/burrow"` |
-| `image.tag` | image tag | `"0.23.3"` |
-| `image.pullPolicy` | image pull policy | `"IfNotPresent"` |
+| `affinity` | node/pod affinities | `{}` |
| `chain.nodes` | number of nodes for the blockchain network | `1` |
| `chain.logLevel` | log level for the nodes (`debug`, `info`, `warn`) | `"info"` |
| `chain.extraSeeds` | network seeds to dial in addition to the cluster booted by the chart; each entry in the array should be in the form `ip:port` (note: because P2P connects over tcp, the port is absolutely required) | `[]` |
+| `chain.restore.enabled` | toggle chain restore mechanism | `false` |
+| `chain.restore.dumpURL` | accessible dump file from absolute url | `""` |
| `chain.testing` | toggle pre-generated keys & genesis for ci testing | `false` |
-| `validatorAddresses` | list of validators to deploy | `[]` |
+| `config` | the [burrow configuration file](https://github.com/hyperledger/burrow/blob/develop/tests/chain/burrow.toml) | `{}` |
+| `config.Tendermint.ListenPort` | peer port | `26656` |
+| `contracts.enabled` | toggle post-install contract deployment | `false` |
+| `contracts.image` | contract deployer image | `""` |
+| `contracts.tag` | contract deployer tag | `""` |
+| `contracts.deploy` | command to run in post-install hook | `""` |
| `env` | environment variables to configure burrow | `{}` |
| `extraArgs` | extra arguments to give to the build in `burrow start` command | `{}` |
+| `image.repository` | image repository | `"hyperledger/burrow"` |
+| `image.tag` | image tag | `"0.25.1"` |
+| `image.pullPolicy` | image pull policy | `"IfNotPresent"` |
+| `livenessProbe.enabled` | enable liveness checks | `true` |
+| `livenessProbe.path` | http endpoint | `"/status?block_seen_time_within=3m"` |
+| `livenessProbe.initialDelaySeconds` | start after | `240` |
+| `livenessProbe.timeoutSeconds` | retry after | `1` |
+| `livenessProbe.periodSeconds` | check every | `30` |
+| `nodeSelector` | node labels for pod assignment | `{}` |
| `organization` | name of the organization running these nodes (used in the peer's moniker) | `""` |
| `persistence.enabled` | enable pvc for the chain data | `true` |
| `persistence.size` | size of the chain data pvc | `"80Gi"` |
| `persistence.storageClass` | storage class for the chain data pvc | `"standard"` |
| `persistence.accessMode` | access mode for the chain data pvc | `"ReadWriteOnce"` |
| `persistence.persistentVolumeReclaimPolicy` | does not delete on node restart | `"Retain"` |
-| `peer.service.type` | service type | `"ClusterIP"` |
-| `peer.service.port` | peer port | `26656` |
-| `peer.ingress.enabled` | expose port | `false` |
-| `peer.ingress.hosts` | - | `[]` |
-| `rpcGRPC.enabled` | enable grpc service | `true` |
-| `rpcGRPC.service.port` | grpc port | `10997` |
-| `rpcGRPC.service.type` | service type | `"ClusterIP"` |
-| `rpcGRPC.service.loadBalance` | enable load balancing across nodes | `true` |
-| `rpcGRPC.ingress.enabled` | expose port | `false` |
-| `rpcGRPC.ingress.hosts` | - | `[]` |
-| `rpcGRPC.ingress.annotations` | extra annotations | `` |
-| `rpcGRPC.ingress.tls` | - | `` |
-| `rpcInfo.enabled` | enable Info service | `true` |
-| `rpcInfo.service.port` | Info port | `26658` |
-| `rpcInfo.service.type` | service type | `"ClusterIP"` |
-| `rpcInfo.service.loadBalance` | enable load balancing across nodes | `true` |
-| `rpcInfo.ingress.enabled` | expose port | `false` |
-| `rpcInfo.ingress.partial` | exposes the `/accounts` and `/blocks` paths externally | `false` |
-| `rpcInfo.ingress.pathLeader` | - | `"/"` |
-| `rpcInfo.ingress.annotations` | extra annotations | `` |
-| `rpcInfo.ingress.hosts` | - | `[]` |
-| `rpcInfo.ingress.tls` | - | `` |
-| `rpcMetrics.enabled` | enable Info service | `true` |
-| `rpcMetrics.port` | Info port | `9102` |
-| `rpcMetrics.path` | http endpoint | `"/metrics"` |
-| `rpcMetrics.blockSampleSize` | number of previous blocks to utilize in calculating the histograms and summaries which are sent to prometheus | `100` |
-| `rpcProfiler.enabled` | enable Info service | `false` |
-| `rpcProfiler.port` | Info port | `6060` |
+| `podAnnotations` | annotations to add to each pod | `{}` |
+| `podLabels` | labels to add to each pod | `{}` |
+| `readinessProbe.enabled` | enable readiness checks | `true` |
+| `readinessProbe.path` | http endpoint | `"/status"` |
+| `readinessProbe.initialDelaySeconds` | start after | `5` |
| `resources.limits.cpu` | - | `"500m"` |
| `resources.limits.memory` | - | `"1Gi"` |
| `resources.requests.cpu` | - | `"100m"` |
| `resources.requests.memory` | - | `"256Mi"` |
-| `livenessProbe.enabled` | enable liveness checks | `true` |
-| `livenessProbe.path` | http endpoint | `"/status?block_seen_time_within=3m"` |
-| `livenessProbe.initialDelaySeconds` | start after | `240` |
-| `livenessProbe.timeoutSeconds` | retry after | `1` |
-| `livenessProbe.periodSeconds` | check every | `30` |
-| `readinessProbe.enabled` | enable readiness checks | `true` |
-| `readinessProbe.path` | http endpoint | `"/status"` |
-| `readinessProbe.initialDelaySeconds` | start after | `5` |
-| `podAnnotations` | annotations to add to each pod | `{}` |
-| `podLabels` | labels to add to each pod | `{}` |
-| `affinity` | node/pod affinities | `{}` |
+| `grpc.service.type` | service type | `"ClusterIP"` |
+| `grpc.service.loadBalance` | enable load balancing across nodes | `true` |
+| `grpc.ingress.enabled` | expose port | `false` |
+| `grpc.ingress.hosts` | - | `[]` |
+| `grpc.ingress.annotations` | extra annotations | `` |
+| `grpc.ingress.tls` | - | `` |
+| `info.service.type` | service type | `"ClusterIP"` |
+| `info.service.loadBalance` | enable load balancing across nodes | `true` |
+| `info.ingress.enabled` | expose port | `false` |
+| `info.ingress.partial` | exposes the `/accounts` and `/blocks` paths externally | `false` |
+| `info.ingress.pathLeader` | - | `"/"` |
+| `info.ingress.annotations` | extra annotations | `` |
+| `info.ingress.hosts` | - | `[]` |
+| `info.ingress.tls` | - | `` |
+| `peer.service.type` | service type | `"ClusterIP"` |
+| `peer.ingress.enabled` | expose port | `false` |
+| `peer.ingress.hosts` | - | `[]` |
| `tolerations` | list of node taints to tolerate | `[]` |
-| `nodeSelector` | node labels for pod assignment | `{}` |
+| `validatorAddresses` | list of validators to deploy | `[]` |
+
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
diff --git a/stable/burrow/templates/_helpers.tpl b/stable/burrow/templates/_helpers.tpl
index 491250838fbe..9a5f740a87a5 100644
--- a/stable/burrow/templates/_helpers.tpl
+++ b/stable/burrow/templates/_helpers.tpl
@@ -40,11 +40,11 @@ Formulate the how the seeds feed is populated.
{{- range (until (sub $.Values.chain.nodes 1 | int)) -}}
{{- $addr := (index $.Values.validatorAddresses ( print "Validator_" . )).NodeAddress | lower -}}
{{- $node := printf "%03d" . -}}
-tcp://{{ $addr }}@{{ $node }}.{{ $host }}:{{ $.Values.peer.service.port }},
+tcp://{{ $addr }}@{{ $node }}.{{ $host }}:{{ $.Values.config.Tendermint.ListenPort }},
{{- end -}}
{{- $addr := (index $.Values.validatorAddresses ( print "Validator_" (sub .Values.chain.nodes 1))).NodeAddress | lower -}}
{{- $node := sub .Values.chain.nodes 1 | printf "%03d" -}}
-tcp://{{ $addr }}@{{ $node }}.{{ $host }}:{{ $.Values.peer.service.port }}
+tcp://{{ $addr }}@{{ $node }}.{{ $host }}:{{ $.Values.config.Tendermint.ListenPort }}
{{- if not (eq (len .Values.chain.extraSeeds) 0) -}}
{{- range .Values.chain.extraSeeds -}},{{ . }}{{- end -}}
{{- end -}}
@@ -52,11 +52,11 @@ tcp://{{ $addr }}@{{ $node }}.{{ $host }}:{{ $.Values.peer.service.port }}
{{- range (until (sub $.Values.chain.nodes 1 | int)) -}}
{{- $addr := (index $.Values.validatorAddresses ( print "Validator_" . )).NodeAddress | lower -}}
{{- $node := printf "%03d" . -}}
-tcp://{{ $addr }}@{{ template "burrow.fullname" $ }}-peer-{{ $node }}:{{ $.Values.peer.service.port }},
+tcp://{{ $addr }}@{{ template "burrow.fullname" $ }}-peer-{{ $node }}:{{ $.Values.config.Tendermint.ListenPort }},
{{- end -}}
{{- $addr := (index $.Values.validatorAddresses ( print "Validator_" (sub .Values.chain.nodes 1))).NodeAddress | lower -}}
{{- $node := sub .Values.chain.nodes 1 | printf "%03d" -}}
-tcp://{{ $addr }}@{{ template "burrow.fullname" $ }}-peer-{{ $node }}:{{ $.Values.peer.service.port }}
+tcp://{{ $addr }}@{{ template "burrow.fullname" $ }}-peer-{{ $node }}:{{ $.Values.config.Tendermint.ListenPort }}
{{- if not (eq (len .Values.chain.extraSeeds) 0) -}}
{{- range .Values.chain.extraSeeds -}},{{ . }}{{- end -}}
{{- end -}}
diff --git a/stable/burrow/templates/_settings.yaml b/stable/burrow/templates/_settings.yaml
new file mode 100644
index 000000000000..390989372656
--- /dev/null
+++ b/stable/burrow/templates/_settings.yaml
@@ -0,0 +1,13 @@
+{{- define "settings" -}}
+{{- range $.Values.environment.secrets }}
+- name: {{ .name }}
+ valueFrom:
+ secretKeyRef:
+ name: {{ .location }}
+ key: {{ .key }}
+{{- end }}
+{{- range $key, $val := $.Values.environment.inline }}
+- name: {{ $key }}
+ value: {{ $val | quote }}
+{{- end }}
+{{- end -}}
diff --git a/stable/burrow/templates/config-burrow.yaml b/stable/burrow/templates/config-burrow.yaml
deleted file mode 100644
index 9302d1cbf3b6..000000000000
--- a/stable/burrow/templates/config-burrow.yaml
+++ /dev/null
@@ -1,47 +0,0 @@
-kind: ConfigMap
-apiVersion: v1
-metadata:
- labels:
- app: {{ template "burrow.name" . }}
- chart: {{ template "burrow.chart" $ }}
- heritage: {{ $.Release.Service }}
- release: {{ $.Release.Name }}
- name: {{ template "burrow.fullname" . }}-config
-data:
- burrow.toml: |-
- [Tendermint]
- Seeds = ""
- SeedMode = false
- PersistentPeers = "{{ template "burrow.seeds" . }}"
- ListenAddress = "tcp://0.0.0.0:{{ .Values.peer.service.port }}"
- ExternalAddress = ""
- Moniker = ""
- TendermintRoot = ".burrow"
- [Execution]
- [Keys]
- GRPCServiceEnabled = true
- AllowBadFilePermissions = true
- RemoteAddress = ""
- KeysDirectory = "/keys"
- [RPC]
- [RPC.Info]
- Enabled = {{ .Values.rpcInfo.enabled }}
- ListenAddress = "tcp://0.0.0.0:{{ .Values.rpcInfo.service.port }}"
- [RPC.Profiler]
- Enabled = {{ .Values.rpcProfiler.enabled }}
- ListenAddress = "tcp://0.0.0.0:{{ .Values.rpcProfiler.port }}"
- [RPC.GRPC]
- Enabled = {{ .Values.rpcGRPC.enabled }}
- ListenAddress = "0.0.0.0:{{ .Values.rpcGRPC.service.port }}"
- [RPC.Metrics]
- Enabled = {{ .Values.rpcMetrics.enabled }}
- ListenAddress = "tcp://0.0.0.0:{{ .Values.rpcMetrics.port }}"
- MetricsPath = {{ .Values.rpcMetrics.path | quote }}
- BlockSampleSize = {{ .Values.rpcMetrics.blockSampleSize }}
- [Logging]
- ExcludeTrace = true
- NonBlocking = true
- [Logging.RootSink]
- [Logging.RootSink.Output]
- OutputType = "stderr"
- Format = "json"
diff --git a/stable/burrow/templates/configmap.yaml b/stable/burrow/templates/configmap.yaml
new file mode 100644
index 000000000000..1aebe960ebe1
--- /dev/null
+++ b/stable/burrow/templates/configmap.yaml
@@ -0,0 +1,14 @@
+{{- $config := .Values.config }}
+{{- $pp := dict "Tendermint" (dict "PersistentPeers" (include "burrow.seeds" .)) }}
+kind: ConfigMap
+apiVersion: v1
+metadata:
+ labels:
+ app: {{ template "burrow.name" . }}
+ chart: {{ template "burrow.chart" $ }}
+ heritage: {{ $.Release.Service }}
+ release: {{ $.Release.Name }}
+ name: {{ template "burrow.fullname" . }}-config
+data:
+ burrow.json: |-
+{{ toJson (mergeOverwrite $config $pp) | indent 4 }}
\ No newline at end of file
diff --git a/stable/burrow/templates/contracts.yaml b/stable/burrow/templates/contracts.yaml
new file mode 100644
index 000000000000..a6275ef108cb
--- /dev/null
+++ b/stable/burrow/templates/contracts.yaml
@@ -0,0 +1,53 @@
+{{- if .Values.contracts.enabled }}
+{{- $refDir := printf "/ref" }}
+apiVersion: batch/v1
+kind: Job
+metadata:
+ name: {{ template "burrow.fullname" $ }}-contracts
+ namespace: {{ $.Release.Namespace | quote }}
+ labels:
+ app: {{ template "burrow.name" $ }}
+ chart: {{ template "burrow.chart" $ }}
+ heritage: {{ $.Release.Service }}
+ release: {{ $.Release.Name }}
+ annotations:
+ "helm.sh/hook": "post-install"
+ "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
+spec:
+ template:
+ spec:
+ # we always want burrow & solc installed
+ initContainers:
+ - name: burrow
+ image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
+ command: ['sh', '-c', 'cp /usr/local/bin/* /tmp']
+ volumeMounts:
+ - name: bin
+ mountPath: /tmp
+ containers:
+ - name: contracts-deploy
+ image: "{{ .Values.contracts.image }}:{{ $.Values.contracts.tag }}"
+ imagePullPolicy: Always
+ volumeMounts:
+ - name: bin
+ mountPath: /usr/local/bin/
+ - mountPath: {{ $refDir }}
+ name: ref-dir
+ env:
+ - name: CHAIN_URL_GRPC
+ value: {{ template "burrow.fullname" $ }}-grpc:{{ .Values.config.RPC.GRPC.ListenPort }}
+{{- include "settings" . | indent 8 }}
+ command: ["/bin/sh", "-c", "{{ .Values.contracts.deploy }}"]
+ restartPolicy: Never
+ volumes:
+ - name: bin
+ emptyDir: {}
+ - name: ref-dir
+ projected:
+ sources:
+ - configMap:
+ name: {{ template "burrow.fullname" $ }}-config
+ - configMap:
+ name: {{ template "burrow.fullname" $ }}-genesis
+ backoffLimit: 0
+{{- end }}
\ No newline at end of file
diff --git a/stable/burrow/templates/deployments.yaml b/stable/burrow/templates/deployments.yaml
index 79c06eeddc3f..bd5cc4fbb2f6 100644
--- a/stable/burrow/templates/deployments.yaml
+++ b/stable/burrow/templates/deployments.yaml
@@ -24,12 +24,12 @@ spec:
nodeNumber: {{ $nodeNumber | quote }}
template:
metadata:
-{{- if (or $.Values.podAnnotations $.Values.rpcMetrics.enabled) }}
+{{- if (or $.Values.podAnnotations $.Values.config.RPC.Metrics.Enabled) }}
annotations:
-{{- if $.Values.rpcMetrics.enabled }}
+{{- if $.Values.config.RPC.Metrics.Enabled }}
prometheus.io/scrape: "true"
- prometheus.io/port: {{ $.Values.rpcMetrics.port | quote }}
- prometheus.io/path: {{ $.Values.rpcMetrics.path }}
+ prometheus.io/port: {{ $.Values.config.RPC.Metrics.ListenPort | quote }}
+ prometheus.io/path: {{ $.Values.config.RPC.Metrics.MetricsPath }}
{{- end }}
{{- if $.Values.podAnnotations }}
{{ toYaml $.Values.podAnnotations | indent 8 }}
@@ -60,35 +60,70 @@ spec:
mkdir -p {{ $workDir }}/.burrow/config && \
cp nodekey-Validator_{{ . }} {{ $workDir }}/.burrow/config/node_key.json && \
chmod 600 {{ $workDir }}/.burrow/config/node_key.json
+{{- if $.Values.chain.restore.enabled }}
+ - name: retrieve
+ image: appropriate/curl
+ imagePullPolicy: {{ $.Values.image.pullPolicy }}
+ workingDir: {{ $workDir }}
+ command:
+ - curl
+ args:
+ - -o
+ - dumpFile
+ - {{ $.Values.chain.restore.dumpURL }}
+ volumeMounts:
+ - mountPath: {{ $workDir }}
+ name: work-dir
+ - name: restore
+ image: "{{ $.Values.image.repository }}:{{ $.Values.image.tag }}"
+ imagePullPolicy: {{ $.Values.image.pullPolicy }}
+ workingDir: {{ $workDir }}
+ command:
+ - burrow
+ args:
+ - restore
+ - --config
+ - "{{ $refDir }}/burrow.json"
+ - --genesis
+ - "{{ $refDir }}/genesis.json"
+ - --silent
+ - dumpFile
+ - --validator-address
+ - {{ (index $.Values.validatorAddresses ( printf "Validator_%d" $nodeIndex )).Address | quote }}
+ - --validator-moniker
+ - {{ printf "%s-validator-%s" $.Values.organization $nodeNumber | quote }}
+ volumeMounts:
+ - mountPath: {{ $workDir }}
+ name: work-dir
+ - mountPath: {{ $refDir }}
+ name: ref-dir
+{{- end }}
containers:
- name: node
image: "{{ $.Values.image.repository }}:{{ $.Values.image.tag }}"
imagePullPolicy: {{ $.Values.image.pullPolicy }}
workingDir: {{ $workDir }}
command:
- - "burrow"
+ - burrow
args:
- - "start"
- - "--config"
- - "{{ $refDir }}/burrow.toml"
- - "--genesis"
+ - start
+ - --config
+ - "{{ $refDir }}/burrow.json"
+ - --genesis
- "{{ $refDir }}/genesis.json"
- - "--validator-address"
+ - --validator-address
- {{ (index $.Values.validatorAddresses ( printf "Validator_%d" $nodeIndex )).Address | quote }}
- - "--validator-moniker"
+ - --validator-moniker
- {{ printf "%s-validator-%s" $.Values.organization $nodeNumber | quote }}
{{- if (and $.Values.peer.ingress.enabled (not (eq (len $.Values.peer.ingress.hosts) 0))) }}
- - "--external-address"
- - "{{ $nodeNumber }}.{{ index $.Values.peer.ingress.hosts 0 }}:{{ $.Values.peer.service.port }}"
+ - --external-address
+ - "{{ $nodeNumber }}.{{ index $.Values.peer.ingress.hosts 0 }}:{{ $.Values.config.Tendermint.ListenPort }}"
{{- end }}
{{- range $key, $value := $.Values.extraArgs }}
- --{{ $key }}={{ $value }}
{{- end }}
env:
-{{- range $key, $value := $.Values.env }}
- - name: "{{ $key }}"
- value: "{{ $value }}"
-{{- end }}
+{{- include "settings" $ | indent 10 }}
volumeMounts:
- mountPath: {{ $refDir }}
name: ref-dir
@@ -101,21 +136,21 @@ spec:
ports:
- name: peer
protocol: TCP
- containerPort: {{ $.Values.peer.service.port }}
-{{- if $.Values.rpcGRPC.enabled }}
+ containerPort: {{ $.Values.config.Tendermint.ListenPort }}
+{{- if $.Values.config.RPC.GRPC.Enabled }}
- name: grpc
protocol: TCP
- containerPort: {{ $.Values.rpcGRPC.service.port }}
+ containerPort: {{ $.Values.config.RPC.GRPC.ListenPort }}
{{- end }}
-{{- if $.Values.rpcInfo.enabled }}
+{{- if $.Values.config.RPC.Info.Enabled }}
- name: info
protocol: TCP
- containerPort: {{ $.Values.rpcInfo.service.port }}
+ containerPort: {{ $.Values.config.RPC.Info.ListenPort }}
{{- end }}
-{{- if $.Values.rpcMetrics.enabled }}
+{{- if $.Values.config.RPC.Metrics.Enabled }}
- name: metrics
protocol: TCP
- containerPort: {{ $.Values.rpcMetrics.port }}
+ containerPort: {{ $.Values.config.RPC.Metrics.ListenPort }}
{{- end }}
{{- if not $.Values.chain.testing }}
{{- if $.Values.livenessProbe.enabled }}
@@ -175,6 +210,8 @@ spec:
nodeSelector:
{{ toYaml $.Values.nodeSelector | indent 8 }}
{{- end }}
+{{- if $.Values.tolerations }}
tolerations:
{{ toYaml $.Values.tolerations | indent 8 }}
{{- end }}
+{{- end }}
diff --git a/stable/burrow/templates/ingress-grpc.yaml b/stable/burrow/templates/ingress-grpc.yaml
index 2128f09c7bd3..ff79e8868fa6 100644
--- a/stable/burrow/templates/ingress-grpc.yaml
+++ b/stable/burrow/templates/ingress-grpc.yaml
@@ -1,6 +1,6 @@
-{{- if .Values.rpcGRPC.ingress.enabled -}}
+{{- if .Values.grpc.ingress.enabled -}}
{{- $serviceName := printf "%s-grpc" (include "burrow.fullname" .) -}}
-{{- $servicePort := .Values.rpcGRPC.service.port -}}
+{{- $servicePort := .Values.config.RPC.GRPC.ListenPort -}}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
@@ -11,12 +11,12 @@ metadata:
release: {{ .Release.Name }}
name: {{ template "burrow.fullname" . }}-grpc
annotations:
- {{- range $key, $value := .Values.rpcGRPC.ingress.annotations }}
+ {{- range $key, $value := .Values.grpc.ingress.annotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
spec:
rules:
- {{- range $host := .Values.rpcGRPC.ingress.hosts }}
+ {{- range $host := .Values.grpc.ingress.hosts }}
- host: {{ $host }}
http:
paths:
@@ -25,8 +25,8 @@ spec:
serviceName: {{ $serviceName }}
servicePort: {{ $servicePort }}
{{- end -}}
- {{- if .Values.rpcGRPC.ingress.tls }}
+ {{- if .Values.grpc.ingress.tls }}
tls:
-{{ toYaml .Values.rpcGRPC.ingress.tls | indent 4 }}
+{{ toYaml .Values.grpc.ingress.tls | indent 4 }}
{{- end -}}
{{- end -}}
diff --git a/stable/burrow/templates/ingress-info.yaml b/stable/burrow/templates/ingress-info.yaml
index 3dc4b22b57d6..a34d4b3d200e 100644
--- a/stable/burrow/templates/ingress-info.yaml
+++ b/stable/burrow/templates/ingress-info.yaml
@@ -1,8 +1,8 @@
-{{- if .Values.rpcInfo.ingress.enabled -}}
+{{- if .Values.info.ingress.enabled -}}
{{- $serviceName := printf "%s-info" (include "burrow.fullname" .) -}}
-{{- $servicePort := .Values.rpcInfo.service.port -}}
-{{- $pathLeader := .Values.rpcInfo.ingress.pathLeader -}}
-{{- $partialIngress := .Values.rpcInfo.ingress.partial -}}
+{{- $servicePort := .Values.config.RPC.Info.ListenPort -}}
+{{- $pathLeader := .Values.info.ingress.pathLeader -}}
+{{- $partialIngress := .Values.info.ingress.partial -}}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
@@ -13,10 +13,10 @@ metadata:
release: {{ .Release.Name }}
name: {{ template "burrow.fullname" . }}-info
annotations:
-{{ toYaml .Values.rpcInfo.ingress.annotations | indent 4 }}
+{{ toYaml .Values.info.ingress.annotations | indent 4 }}
spec:
rules:
-{{- range $host := .Values.rpcInfo.ingress.hosts }}
+{{- range $host := .Values.info.ingress.hosts }}
- host: {{ $host }}
http:
paths:
@@ -36,8 +36,8 @@ spec:
servicePort: {{ $servicePort }}
{{- end -}}
{{- end -}}
-{{- if .Values.rpcInfo.ingress.tls }}
+{{- if .Values.info.ingress.tls }}
tls:
-{{ toYaml .Values.rpcInfo.ingress.tls | indent 4 }}
+{{ toYaml .Values.info.ingress.tls | indent 4 }}
{{- end -}}
{{- end -}}
diff --git a/stable/burrow/templates/service-grpc.yaml b/stable/burrow/templates/service-grpc.yaml
index 4b25d80bcd1f..ad495c517685 100644
--- a/stable/burrow/templates/service-grpc.yaml
+++ b/stable/burrow/templates/service-grpc.yaml
@@ -8,8 +8,8 @@ metadata:
heritage: {{ .Release.Service }}
name: {{ template "burrow.fullname" . }}-grpc
spec:
- type: {{ .Values.rpcGRPC.service.type }}
-{{- if .Values.rpcGRPC.service.loadBalance }}
+ type: {{ .Values.grpc.service.type }}
+{{- if .Values.grpc.service.loadBalance }}
sessionAffinity: ClientIP
sessionAffinityConfig:
clientIP:
@@ -17,12 +17,12 @@ spec:
{{- end }}
ports:
- name: grpc
- port: {{ .Values.rpcGRPC.service.port }}
+ port: {{ $.Values.config.RPC.GRPC.ListenPort }}
targetPort: grpc
protocol: TCP
selector:
app: {{ template "burrow.name" . }}
release: {{ .Release.Name }}
-{{- if not .Values.rpcGRPC.service.loadBalance }}
- nodeNumber: {{ .Values.rpcGRPC.service.node | quote }}
+{{- if not .Values.grpc.service.loadBalance }}
+ nodeNumber: {{ .Values.grpc.service.node | quote }}
{{- end }}
diff --git a/stable/burrow/templates/service-info.yaml b/stable/burrow/templates/service-info.yaml
index 62627a19a8b4..5b805bbf797b 100644
--- a/stable/burrow/templates/service-info.yaml
+++ b/stable/burrow/templates/service-info.yaml
@@ -8,18 +8,18 @@ metadata:
heritage: {{ .Release.Service }}
name: {{ template "burrow.fullname" . }}-info
spec:
- type: {{ .Values.rpcInfo.service.type }}
-{{- if .Values.rpcInfo.service.loadBalance }}
+ type: {{ .Values.info.service.type }}
+{{- if .Values.info.service.loadBalance }}
sessionAffinity: ClientIP
{{- end }}
ports:
- name: info
- port: {{ .Values.rpcInfo.service.port }}
+ port: {{ $.Values.config.RPC.Info.ListenPort }}
targetPort: info
protocol: TCP
selector:
app: {{ template "burrow.name" . }}
release: {{ .Release.Name }}
-{{- if not .Values.rpcInfo.service.loadBalance }}
- nodeNumber: {{ .Values.rpcInfo.service.node | quote }}
+{{- if not .Values.info.service.loadBalance }}
+ nodeNumber: {{ .Values.info.service.node | quote }}
{{- end }}
diff --git a/stable/burrow/templates/service-peers.yaml b/stable/burrow/templates/service-peers.yaml
index 2729c2681c4b..57ca28cbf539 100644
--- a/stable/burrow/templates/service-peers.yaml
+++ b/stable/burrow/templates/service-peers.yaml
@@ -26,7 +26,7 @@ spec:
{{- end }}
ports:
- name: peer
- port: {{ $.Values.peer.service.port }}
+ port: {{ $.Values.config.Tendermint.ListenPort }}
targetPort: peer
protocol: TCP
selector:
diff --git a/stable/burrow/values.yaml b/stable/burrow/values.yaml
index c0a26002427f..25db8f8c8346 100644
--- a/stable/burrow/values.yaml
+++ b/stable/burrow/values.yaml
@@ -1,6 +1,6 @@
image:
repository: hyperledger/burrow
- tag: 0.23.3
+ tag: 0.25.1
pullPolicy: IfNotPresent
chain:
@@ -8,14 +8,67 @@ chain:
logLevel: info
extraSeeds: []
testing: false
+ restore:
+ enabled: false
+ dumpURL: ""
+
+config:
+ BurrowDir: ".burrow"
+ Tendermint:
+ Seeds: ""
+ SeedMode: false
+ ListenHost: "0.0.0.0"
+ ListenPort: "26656"
+ ExternalAddress: ""
+ Moniker: ""
+ Keys:
+ GRPCServiceEnabled: true
+ AllowBadFilePermissions: true
+ RemoteAddress: ""
+ KeysDirectory: "/keys"
+ RPC:
+ Info:
+ Enabled: true
+ ListenHost: "0.0.0.0"
+ ListenPort: "26658"
+ Profiler:
+ Enabled: false
+ ListenHost: "0.0.0.0"
+ ListenPort: "6060"
+ GRPC:
+ Enabled: true
+ ListenHost: "0.0.0.0"
+ ListenPort: "10997"
+ Metrics:
+ Enabled: true
+ ListenHost: "0.0.0.0"
+ ListenPort: "9102"
+ MetricsPath: "/metrics"
+ BlockSampleSize: 100
+ Logging:
+ ExcludeTrace: true
+ NonBlocking: true
+ RootSink:
+ Output:
+ OutputType: "stderr"
+ Format: "json"
validatorAddresses:
Validator_0:
Address: AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
NodeAddress: AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
-env: {}
+contracts:
+ # wait required to ensure chain readiness
+ enabled: false
+ image: ""
+ tag: ""
+ deploy: ""
+
extraArgs: {}
+environment:
+ inline: {}
+ secrets: []
organization: "user"
@@ -29,15 +82,12 @@ persistence:
peer:
service:
type: ClusterIP
- port: 26656
ingress:
enabled: false
hosts: []
-rpcGRPC:
- enabled: true
+grpc:
service:
- port: 10997
type: ClusterIP
loadBalance: true
ingress:
@@ -46,10 +96,8 @@ rpcGRPC:
annotations: {}
tls: {}
-rpcInfo:
- enabled: true
+info:
service:
- port: 26658
type: ClusterIP
loadBalance: true
ingress:
@@ -62,16 +110,6 @@ rpcInfo:
hosts: []
tls: {}
-rpcMetrics:
- enabled: true
- port: 9102
- path: /metrics
- blockSampleSize: 100
-
-rpcProfiler:
- enabled: false
- port: 6060
-
# resources:
# limits:
# cpu: 500m
diff --git a/stable/cerebro/Chart.yaml b/stable/cerebro/Chart.yaml
index ef5f9527709e..b091d265d35f 100644
--- a/stable/cerebro/Chart.yaml
+++ b/stable/cerebro/Chart.yaml
@@ -1,13 +1,13 @@
+name: cerebro
+version: 1.0.2
+appVersion: 0.8.3
apiVersion: v1
-appVersion: 0.8.1
description: A Helm chart for Cerebro - a web admin tool that replaces Kopf.
home: https://github.com/lmenezes/cerebro
icon: https://github.com/lmenezes/cerebro/blob/master/public/img/logo.png
sources:
- https://github.com/lmenezes/cerebro-docker
- https://github.com/lmenezes/cerebro
-name: cerebro
-version: 0.5.2
maintainers:
- name: davidkarlsen
email: david@davidkarlsen.com
diff --git a/stable/cerebro/README.md b/stable/cerebro/README.md
index 58f23d4688ec..a4001099514c 100644
--- a/stable/cerebro/README.md
+++ b/stable/cerebro/README.md
@@ -41,8 +41,9 @@ The following table lists the configurable parameters of the cerebro chart and t
|-------------------------------------|-------------------------------------|-------------------------------------------|
| `replicaCount` | Number of replicas | `1` |
| `image.repository` | The image to run | `lmenezes/cerebro` |
-| `image.tag` | The image tag to pull | `0.8.1` |
+| `image.tag` | The image tag to pull | `0.8.3` |
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
+| `image.pullSecrets` | Specify image pull secrets | `nil` (does not add image pull secrets to deployed pods) |
| `init.image.repository` | The image to run | `docker.io/busybox` |
| `init.image.tag` | The image tag to pull | `musl` |
| `init.image.pullPolicy` | Image pull policy | `IfNotPresent` |
@@ -56,6 +57,7 @@ The following table lists the configurable parameters of the cerebro chart and t
| `resources.requests.memory` | Memory resource requests | |
| `resources.limits.memory` | Memory resource limits | |
| `ingress` | Settings for ingress | `{}` |
+| `ingress.labels` | Labels to add to the ingress | `{}` |
| `nodeSelector` | Settings for nodeselector | `{}` |
| `tolerations` | Settings for toleration | `{}` |
| `affinity` | Settings for affinity | `{}` |
diff --git a/stable/cerebro/templates/deployment.yaml b/stable/cerebro/templates/deployment.yaml
index c7d08322b573..1c06fa7f27c3 100644
--- a/stable/cerebro/templates/deployment.yaml
+++ b/stable/cerebro/templates/deployment.yaml
@@ -34,6 +34,12 @@ spec:
volumeMounts:
- name: db
mountPath: /var/db/cerebro
+ {{- if .Values.image.pullSecrets }}
+ imagePullSecrets:
+ {{- range .Values.image.pullSecrets }}
+ - name: {{ . }}
+ {{- end }}
+ {{- end }}
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
diff --git a/stable/cerebro/templates/ingress.yaml b/stable/cerebro/templates/ingress.yaml
index 47b795b14d8a..79f37534e673 100644
--- a/stable/cerebro/templates/ingress.yaml
+++ b/stable/cerebro/templates/ingress.yaml
@@ -10,6 +10,9 @@ metadata:
chart: {{ template "cerebro.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
+{{- if .Values.ingress.labels }}
+{{ toYaml .Values.ingress.labels | indent 4 }}
+{{- end }}
{{- with .Values.ingress.annotations }}
annotations:
{{ toYaml . | indent 4 }}
diff --git a/stable/cerebro/templates/service.yaml b/stable/cerebro/templates/service.yaml
index 731e24ca1622..35655693741d 100644
--- a/stable/cerebro/templates/service.yaml
+++ b/stable/cerebro/templates/service.yaml
@@ -10,10 +10,10 @@ metadata:
{{- if .Values.service.labels }}
{{ toYaml .Values.service.labels | indent 4 }}
{{- end }}
+{{- with .Values.service.annotations }}
annotations:
- {{- range $key, $value := .Values.service.annotations }}
- {{ $key }}: {{ $value | quote }}
- {{- end }}
+{{ toYaml . | indent 4 }}
+{{- end }}
spec:
type: {{ .Values.service.type }}
ports:
diff --git a/stable/cerebro/values.yaml b/stable/cerebro/values.yaml
index 030b3bd1502a..e65c63bce742 100644
--- a/stable/cerebro/values.yaml
+++ b/stable/cerebro/values.yaml
@@ -5,7 +5,7 @@ image:
repository: lmenezes/cerebro
# Note: when updating the version, ensure `config` and the ConfigMap are kept
# in sync with the default configuration of the upstream image
- tag: 0.8.1
+ tag: 0.8.3
pullPolicy: IfNotPresent
init:
@@ -14,6 +14,13 @@ init:
tag: musl
pullPolicy: IfNotPresent
+ ## Optionally specify an array of imagePullSecrets.
+ ## Secrets must be manually created in the namespace.
+ ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
+ ##
+ # pullSecrets:
+  #  - myRegistryKeySecretName
+
deployment:
annotations: {}
@@ -28,6 +35,7 @@ ingress:
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
+ labels: {}
path: /
hosts:
- chart-example.local
diff --git a/stable/cert-manager/Chart.yaml b/stable/cert-manager/Chart.yaml
index edfe99c2b682..8f745138f135 100644
--- a/stable/cert-manager/Chart.yaml
+++ b/stable/cert-manager/Chart.yaml
@@ -1,6 +1,6 @@
name: cert-manager
-version: v0.6.0
-appVersion: v0.6.0
+version: v0.6.7
+appVersion: v0.6.2
description: A Helm chart for cert-manager
home: https://github.com/jetstack/cert-manager
keywords:
@@ -10,6 +10,11 @@ keywords:
- tls
sources:
- https://github.com/jetstack/cert-manager
-maintainers:
- - name: munnerz
- email: james@jetstack.io
+# Deprecated charts cannot have maintainers
+maintainers: []
+ # - name: munnerz
+ # email: james@jetstack.io
+# This version of the Helm chart is deprecated.
+# All future updates should be instead made to the official cert-manager
+# repository, found at https://github.com/jetstack/cert-manager/tree/master/deploy
+deprecated: true
diff --git a/stable/cert-manager/README.md b/stable/cert-manager/README.md
index 5914878c7d1a..8e7f8483a5b6 100644
--- a/stable/cert-manager/README.md
+++ b/stable/cert-manager/README.md
@@ -1,5 +1,10 @@
# cert-manager
+> **This Helm chart is deprecated**.
+> All future changes to the cert-manager Helm chart should be made in the
+> [official repository](https://github.com/jetstack/cert-manager/tree/master/deploy).
+> The latest version of the chart can be found on the [Helm Hub](https://hub.helm.sh/charts/jetstack/cert-manager).
+
cert-manager is a Kubernetes addon to automate the management and issuance of
TLS certificates from various issuing sources.
@@ -23,6 +28,12 @@ To install the chart with the release name `my-release`:
$ kubectl apply \
-f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.6/deploy/manifests/00-crds.yaml
+## IMPORTANT: if you are deploying into a namespace that **already exists**,
+## you MUST ensure the namespace has an additional label on it in order for
+## the deployment to succeed
+$ kubectl label namespace <namespace> certmanager.k8s.io/disable-validation="true"
+
+## Install the cert-manager helm chart
$ helm install --name my-release stable/cert-manager
```
@@ -66,7 +77,7 @@ The following table lists the configurable parameters of the cert-manager chart
| --------- | ----------- | ------- |
| `global.imagePullSecrets` | Reference to one or more secrets to be used when pulling images | `[]` |
| `image.repository` | Image repository | `quay.io/jetstack/cert-manager-controller` |
-| `image.tag` | Image tag | `v0.6.0` |
+| `image.tag` | Image tag | `v0.6.2` |
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `replicaCount` | Number of cert-manager replicas | `1` |
| `clusterResourceNamespace` | Override the namespace used to store DNS provider credentials etc. for ClusterIssuer resources | Same namespace as cert-manager pod
@@ -101,7 +112,7 @@ The following table lists the configurable parameters of the cert-manager chart
| `webhook.extraArgs` | Optional flags for cert-manager webhook component | `[]` |
| `webhook.resources` | CPU/memory resource requests/limits for the webhook pods | |
| `webhook.image.repository` | Webhook image repository | `quay.io/jetstack/cert-manager-webhook` |
-| `webhook.image.tag` | Webhook image tag | `v0.6.0` |
+| `webhook.image.tag` | Webhook image tag | `v0.6.2` |
| `webhook.image.pullPolicy` | Webhook image pull policy | `IfNotPresent` |
| `webhook.caSyncImage.repository` | CA sync image repository | `quay.io/munnerz/apiextensions-ca-helper` |
| `webhook.caSyncImage.tag` | CA sync image tag | `v0.1.0` |
diff --git a/stable/cert-manager/cert-manager-v0.6.0-dev.5.tgz b/stable/cert-manager/cert-manager-v0.6.0-dev.5.tgz
deleted file mode 100644
index e281684bfe30..000000000000
Binary files a/stable/cert-manager/cert-manager-v0.6.0-dev.5.tgz and /dev/null differ
diff --git a/stable/cert-manager/requirements.lock b/stable/cert-manager/requirements.lock
index a3c0070e6582..2a883ee63dcd 100644
--- a/stable/cert-manager/requirements.lock
+++ b/stable/cert-manager/requirements.lock
@@ -1,6 +1,6 @@
dependencies:
- name: webhook
repository: file://webhook
- version: v0.6.0
-digest: sha256:93a9a73b4f6aa718152642d6a4156fb6f9a4fb078d0136065c42bab2fe76c9b0
-generated: 2019-01-22T16:13:19.816854629Z
+ version: v0.6.4
+digest: sha256:a0af88ca014f7195e521457f22c31d8bf28c7c90b0c9a088bfc5cb8ab188b769
+generated: 2019-02-19T11:13:47.831977937Z
diff --git a/stable/cert-manager/requirements.yaml b/stable/cert-manager/requirements.yaml
index 16f21f133100..c6d8928e239f 100644
--- a/stable/cert-manager/requirements.yaml
+++ b/stable/cert-manager/requirements.yaml
@@ -1,6 +1,6 @@
# requirements.yaml
dependencies:
- name: webhook
- version: "v0.6.0"
+ version: "v0.6.4"
repository: "file://webhook"
condition: webhook.enabled
diff --git a/stable/cert-manager/templates/00-namespace.yaml b/stable/cert-manager/templates/00-namespace.yaml
deleted file mode 100644
index 1502a599772d..000000000000
--- a/stable/cert-manager/templates/00-namespace.yaml
+++ /dev/null
@@ -1,9 +0,0 @@
-{{ if .Values.createNamespaceResource }}
-apiVersion: v1
-kind: Namespace
-metadata:
- name: {{ .Release.Namespace | quote }}
- labels:
- name: {{ .Release.Namespace | quote }}
- certmanager.k8s.io/disable-validation: "true"
-{{- end }}
diff --git a/stable/cert-manager/templates/NOTES.txt b/stable/cert-manager/templates/NOTES.txt
index 3469b25ea220..5d57f7e66f76 100644
--- a/stable/cert-manager/templates/NOTES.txt
+++ b/stable/cert-manager/templates/NOTES.txt
@@ -13,3 +13,8 @@ Certificates for Ingress resources, take a look at the `ingress-shim`
documentation:
https://cert-manager.readthedocs.io/en/latest/reference/ingress-shim.html
+
+**This Helm chart is deprecated**.
+All future changes to the cert-manager Helm chart should be made in the
+official repository: https://github.com/jetstack/cert-manager/tree/master/deploy.
+The latest version of the chart can be found on the Helm Hub: https://hub.helm.sh/charts/jetstack/cert-manager.
diff --git a/stable/cert-manager/templates/certificate-crd.yaml b/stable/cert-manager/templates/certificate-crd.yaml
deleted file mode 100644
index 0657c4af516d..000000000000
--- a/stable/cert-manager/templates/certificate-crd.yaml
+++ /dev/null
@@ -1,26 +0,0 @@
-{{- if .Values.createCustomResource -}}
-apiVersion: apiextensions.k8s.io/v1beta1
-kind: CustomResourceDefinition
-metadata:
- name: certificates.certmanager.k8s.io
-{{- if semverCompare ">=2.10-0" .Capabilities.TillerVersion.SemVer }}
- annotations:
- "helm.sh/hook": crd-install
-{{- end }}
- labels:
- app: {{ template "cert-manager.name" . }}
- chart: {{ template "cert-manager.chart" . }}
- release: {{ .Release.Name }}
- heritage: {{ .Release.Service }}
-spec:
- group: certmanager.k8s.io
- version: v1alpha1
- scope: Namespaced
- names:
- kind: Certificate
- plural: certificates
- {{- if .Values.certificateResourceShortNames }}
- shortNames:
-{{ toYaml .Values.certificateResourceShortNames | indent 6 }}
- {{- end -}}
-{{- end -}}
diff --git a/stable/cert-manager/templates/clusterissuer-crd.yaml b/stable/cert-manager/templates/clusterissuer-crd.yaml
deleted file mode 100644
index cfa67b9ae76e..000000000000
--- a/stable/cert-manager/templates/clusterissuer-crd.yaml
+++ /dev/null
@@ -1,22 +0,0 @@
-{{- if .Values.createCustomResource -}}
-apiVersion: apiextensions.k8s.io/v1beta1
-kind: CustomResourceDefinition
-metadata:
- name: clusterissuers.certmanager.k8s.io
-{{- if semverCompare ">=2.10-0" .Capabilities.TillerVersion.SemVer }}
- annotations:
- "helm.sh/hook": crd-install
-{{- end }}
- labels:
- app: {{ template "cert-manager.name" . }}
- chart: {{ template "cert-manager.chart" . }}
- release: {{ .Release.Name }}
- heritage: {{ .Release.Service }}
-spec:
- group: certmanager.k8s.io
- version: v1alpha1
- names:
- kind: ClusterIssuer
- plural: clusterissuers
- scope: Cluster
-{{- end -}}
diff --git a/stable/cert-manager/templates/issuer-crd.yaml b/stable/cert-manager/templates/issuer-crd.yaml
deleted file mode 100644
index 5886676e5a13..000000000000
--- a/stable/cert-manager/templates/issuer-crd.yaml
+++ /dev/null
@@ -1,22 +0,0 @@
-{{- if .Values.createCustomResource -}}
-apiVersion: apiextensions.k8s.io/v1beta1
-kind: CustomResourceDefinition
-metadata:
- name: issuers.certmanager.k8s.io
-{{- if semverCompare ">=2.10-0" .Capabilities.TillerVersion.SemVer }}
- annotations:
- "helm.sh/hook": crd-install
-{{- end }}
- labels:
- app: {{ template "cert-manager.name" . }}
- chart: {{ template "cert-manager.chart" . }}
- release: {{ .Release.Name }}
- heritage: {{ .Release.Service }}
-spec:
- group: certmanager.k8s.io
- version: v1alpha1
- names:
- kind: Issuer
- plural: issuers
- scope: Namespaced
-{{- end -}}
diff --git a/stable/cert-manager/templates/rbac.yaml b/stable/cert-manager/templates/rbac.yaml
index 4d3532073eea..cf4cb0a5d569 100644
--- a/stable/cert-manager/templates/rbac.yaml
+++ b/stable/cert-manager/templates/rbac.yaml
@@ -10,7 +10,7 @@ metadata:
heritage: {{ .Release.Service }}
rules:
- apiGroups: ["certmanager.k8s.io"]
- resources: ["certificates", "issuers", "clusterissuers", "orders", "challenges"]
+ resources: ["certificates", "certificates/finalizers", "issuers", "clusterissuers", "orders", "orders/finalizers", "challenges"]
verbs: ["*"]
- apiGroups: [""]
resources: ["configmaps", "secrets", "events", "services", "pods"]
diff --git a/stable/cert-manager/values.yaml b/stable/cert-manager/values.yaml
index e14b49a3c381..f4b5e55e94f8 100644
--- a/stable/cert-manager/values.yaml
+++ b/stable/cert-manager/values.yaml
@@ -21,7 +21,7 @@ strategy: {}
image:
repository: quay.io/jetstack/cert-manager-controller
- tag: v0.6.0
+ tag: v0.6.2
pullPolicy: IfNotPresent
# Override the namespace used to store DNS provider credentials etc. for ClusterIssuer
diff --git a/stable/cert-manager/webhook/Chart.yaml b/stable/cert-manager/webhook/Chart.yaml
index 02829d1fe9e3..3bb4934ab040 100644
--- a/stable/cert-manager/webhook/Chart.yaml
+++ b/stable/cert-manager/webhook/Chart.yaml
@@ -1,7 +1,7 @@
name: webhook
apiVersion: v1
-version: "v0.6.0"
-appVersion: "v0.6.0"
+version: "v0.6.4"
+appVersion: "v0.6.2"
description: A Helm chart for deploying the cert-manager webhook component
home: https://github.com/jetstack/cert-manager
sources:
diff --git a/stable/cert-manager/webhook/templates/pki.yaml b/stable/cert-manager/webhook/templates/pki.yaml
index 1654b29b56d1..41285755fca5 100644
--- a/stable/cert-manager/webhook/templates/pki.yaml
+++ b/stable/cert-manager/webhook/templates/pki.yaml
@@ -12,7 +12,7 @@ metadata:
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
- selfsigned: {}
+ selfSigned: {}
---
@@ -29,6 +29,7 @@ metadata:
heritage: {{ .Release.Service }}
spec:
secretName: {{ include "webhook.rootCACertificate" . }}
+ duration: 43800h # 5y
issuerRef:
name: {{ include "webhook.selfSignedIssuer" . }}
commonName: "ca.webhook.cert-manager"
@@ -66,6 +67,7 @@ metadata:
heritage: {{ .Release.Service }}
spec:
secretName: {{ include "webhook.servingCertificate" . }}
+ duration: 8760h # 1y
issuerRef:
name: {{ include "webhook.rootCAIssuer" . }}
dnsNames:
diff --git a/stable/cert-manager/webhook/values.yaml b/stable/cert-manager/webhook/values.yaml
index 82499b5d4c1b..a094349d5e7c 100644
--- a/stable/cert-manager/webhook/values.yaml
+++ b/stable/cert-manager/webhook/values.yaml
@@ -28,7 +28,7 @@ resources: {}
image:
repository: quay.io/jetstack/cert-manager-webhook
- tag: v0.6.0
+ tag: v0.6.2
pullPolicy: IfNotPresent
caSyncImage:
diff --git a/stable/chaoskube/Chart.yaml b/stable/chaoskube/Chart.yaml
index 792a080be11f..cf84058945ac 100755
--- a/stable/chaoskube/Chart.yaml
+++ b/stable/chaoskube/Chart.yaml
@@ -1,7 +1,7 @@
apiVersion: v1
name: chaoskube
-version: 0.14.0
-appVersion: 0.12.1
+version: 3.1.1
+appVersion: 0.14.0
description: Chaoskube periodically kills random pods in your Kubernetes cluster.
icon: https://raw.githubusercontent.com/linki/chaoskube/master/chaoskube.png
home: https://github.com/linki/chaoskube
@@ -13,4 +13,6 @@ keywords:
maintainers:
- name: linki
email: linki+kubernetes.io@posteo.de
+- name: alexvicegrab
+ email: a.vicente.grab@gmail.com
engine: gotpl
diff --git a/stable/chaoskube/OWNERS b/stable/chaoskube/OWNERS
index c9e2122ab88f..c33245731011 100644
--- a/stable/chaoskube/OWNERS
+++ b/stable/chaoskube/OWNERS
@@ -1,4 +1,6 @@
approvers:
- linki
+- alexvicegrab
reviewers:
- linki
+- alexvicegrab
diff --git a/stable/chaoskube/README.md b/stable/chaoskube/README.md
index a2f24a4abb74..4ce7864e873a 100644
--- a/stable/chaoskube/README.md
+++ b/stable/chaoskube/README.md
@@ -37,32 +37,39 @@ If you're sure you want to use it run `helm` with:
$ helm install stable/chaoskube --set dryRun=false
```
-| Parameter | Description | Default |
-|---------------------------|-----------------------------------------------------|----------------------------------|
-| `name` | container name | chaoskube |
-| `image` | docker image | quay.io/linki/chaoskube |
-| `imageTag` | docker image tag | v0.11.0 |
-| `replicas` | number of replicas to run | 1 |
-| `interval` | interval between pod terminations | 10m |
-| `labels` | label selector to filter pods by | "" (matches everything) |
-| `annotations` | annotation selector to filter pods by | "" (matches everything) |
-| `namespaces` | namespace selector to filter pods by | "" (all namespaces) |
-| `dryRun` | don't kill pods, only log what would have been done | true |
-| `debug` | Enable debug logging mode, for detailed logs | false |
-| `timezone` | Set timezone for running actions (Optional) | "" (UTC) |
-| `excludedWeekdays` | Set Days of the Week to avoid actions (Optional) | "" (Don't skip any weekdays) |
-| `excludedTimesOfDay` | Set Time Range to avoid actions (Optional) | "" (Don't skip any times of day) |
-| `excludedDaysOfYear` | Set Days of the Year to avoid actions (Optional) | "" (Don't skip any days) |
-| `priorityClassName` | priorityClassName | `nil` |
-| `rbac.create` | create rbac service account and roles | false |
-| `rbac.serviceAccountName` | name of serviceAccount to use when create is false | default |
-| `resources` | CPU/Memory resource requests/limits | `{}` |
-| `nodeSelector` | Node labels for pod assignment | `{}` |
-| `tolerations` | Toleration labels for pod assignment | `[]` |
-| `affinity` | Affinity settings for pod assignment | `{}` |
-| `minimumAge` | Set minimum pod age to filter pod by | `0s` |
-| `podAnnotations` | Annotations for the chaoskube pod | `{}` |
-| `gracePeriod` | grace period to give pods when terminating them | `-1s` (pod decides) |
+| Parameter | Description | Default |
+|-------------------------------------------|---------------------------------------------------------------------------------------|----------------------------------|
+| `name` | container name | chaoskube |
+| `image` | docker image | quay.io/linki/chaoskube |
+| `imageTag` | docker image tag | v0.11.0 |
+| `replicas` | number of replicas to run | 1 |
+| `interval` | interval between pod terminations | 10m |
+| `labels` | label selector to filter pods by | "" (matches everything) |
+| `annotations` | annotation selector to filter pods by | "" (matches everything) |
+| `namespaces` | namespace selector to filter pods by | "" (all namespaces) |
+| `dryRun` | don't kill pods, only log what would have been done | true |
+| `logFormat`                               | Log format to use (either "text" or "json")                                           | "text"                           |
+| `debug` | Enable debug logging mode, for detailed logs | false |
+| `timezone` | Set timezone for running actions (Optional) | "" (UTC) |
+| `excludedWeekdays` | Set Days of the Week to avoid actions (Optional) | "" (Don't skip any weekdays) |
+| `excludedTimesOfDay` | Set Time Range to avoid actions (Optional) | "" (Don't skip any times of day) |
+| `excludedDaysOfYear` | Set Days of the Year to avoid actions (Optional) | "" (Don't skip any days) |
+| `priorityClassName` | priorityClassName | `nil` |
+| `rbac.create` | create rbac service account and roles | false |
+| `rbac.serviceAccountName` | name of serviceAccount to use when create is false | default |
+| `resources` | CPU/Memory resource requests/limits | `{}` |
+| `nodeSelector` | Node labels for pod assignment | `{}` |
+| `tolerations` | Toleration labels for pod assignment | `[]` |
+| `affinity` | Affinity settings for pod assignment | `{}` |
+| `minimumAge` | Set minimum pod age to filter pod by | `0s` |
+| `podAnnotations` | Annotations for the chaoskube pod | `{}` |
+| `gracePeriod` | grace period to give pods when terminating them | `-1s` (pod decides) |
+| `metrics.enabled` | Enable metrics handler | `false` |
+| `metrics.port` | Listening port for metrics handler | `8080` |
+| `metrics.service.type` | Metrics service type | `ClusterIP` |
+| `metrics.service.port` | Metrics service port | `8080` |
+| `metrics.serviceMonitor.enabled` | Set this to `true` to create ServiceMonitor for Prometheus operator | `false` |
+| `metrics.serviceMonitor.additionalLabels` | Additional labels that can be used so ServiceMonitor will be discovered by Prometheus | `{}` |
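+
+For example, the metrics options can be enabled together in a small values file; a minimal sketch (the `additionalLabels` value shown is an assumption about your Prometheus operator setup, not a chart default):
+
+```yaml
+# custom-values.yaml: enable the metrics endpoint and a ServiceMonitor
+metrics:
+  enabled: true
+  port: 8080
+  service:
+    type: ClusterIP
+    port: 8080
+  serviceMonitor:
+    enabled: true
+    # labels your Prometheus operator uses to discover ServiceMonitors
+    additionalLabels:
+      release: prometheus-operator
+```
+
+Install with `helm install -f custom-values.yaml stable/chaoskube`.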
Setting label and namespaces selectors from the shell can be tricky but is possible (example with zsh):
diff --git a/stable/chaoskube/templates/NOTES.txt b/stable/chaoskube/templates/NOTES.txt
index 1e9fc3bed95d..9ec1418f0a0f 100644
--- a/stable/chaoskube/templates/NOTES.txt
+++ b/stable/chaoskube/templates/NOTES.txt
@@ -2,7 +2,7 @@ chaoskube is running and will kill arbitrary pods every {{ .Values.interval }}.
You can follow the logs to see what chaoskube does:
- POD=$(kubectl -n {{ .Release.Namespace }} get pods -l='release={{ template "chaoskube.fullname" . }}' --output=jsonpath='{.items[0].metadata.name}')
+  POD=$(kubectl -n {{ .Release.Namespace }} get pods -l='app.kubernetes.io/instance={{ .Release.Name }}' --output=jsonpath='{.items[0].metadata.name}')
kubectl -n {{ .Release.Namespace }} logs -f $POD
{{ if .Values.dryRun }}
You are running in dry-run mode. No pod is actually terminated.
diff --git a/stable/chaoskube/templates/_helpers.tpl b/stable/chaoskube/templates/_helpers.tpl
index ced36d7cf717..15fc3d631c11 100644
--- a/stable/chaoskube/templates/_helpers.tpl
+++ b/stable/chaoskube/templates/_helpers.tpl
@@ -31,15 +31,9 @@ Create chart name and version as used by the chart label.
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
-{{- /*
-Credit: @technosophos
-https://github.com/technosophos/common-chart/
-labels.standard prints the standard Helm labels.
-The standard labels are frequently used in metadata.
-*/ -}}
{{- define "labels.standard" -}}
-app: {{ include "chaoskube.name" . }}
-heritage: {{ .Release.Service | quote }}
-release: {{ .Release.Name | quote }}
-chart: {{ include "chaoskube.chart" . }}
+app.kubernetes.io/name: {{ include "chaoskube.name" . }}
+app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
+app.kubernetes.io/instance: {{ .Release.Name | quote }}
+helm.sh/chart: {{ include "chaoskube.chart" . }}
{{- end -}}
diff --git a/stable/chaoskube/templates/deployment.yaml b/stable/chaoskube/templates/deployment.yaml
index 8e3e71f1fb6b..90228998200c 100644
--- a/stable/chaoskube/templates/deployment.yaml
+++ b/stable/chaoskube/templates/deployment.yaml
@@ -12,12 +12,16 @@ spec:
replicas: {{ .Values.replicas }}
selector:
matchLabels:
- app: {{ include "chaoskube.name" . }}
- release: {{ .Release.Name }}
+ app.kubernetes.io/name: {{ include "chaoskube.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
template:
metadata:
labels:
{{ include "labels.standard" . | indent 8 }}
+ {{- if .Values.podAnnotations }}
+ annotations:
+{{ toYaml .Values.podAnnotations | indent 8 }}
+ {{- end }}
spec:
containers:
- name: {{ .Values.name }}
@@ -30,15 +34,34 @@ spec:
{{- if not .Values.dryRun }}
- --no-dry-run
{{- end }}
+ {{- if .Values.includedPodNames }}
+ - --included-pod-names={{ .Values.includedPodNames }}
+ {{- end }}
+ {{- if .Values.excludedPodNames }}
+ - --excluded-pod-names={{ .Values.excludedPodNames }}
+ {{- end }}
- --excluded-weekdays={{ .Values.excludedWeekdays }}
- --excluded-times-of-day={{ .Values.excludedTimesOfDay }}
- --excluded-days-of-year={{ .Values.excludedDaysOfYear }}
- --timezone={{ .Values.timezone }}
+ {{- if .Values.logFormat }}
+ - --log-format={{ .Values.logFormat }}
+ {{- end }}
{{- if .Values.debug }}
- --debug
{{- end }}
- --minimum-age={{ .Values.minimumAge }}
- --grace-period={{ .Values.gracePeriod }}
+ {{- if .Values.metrics.enabled }}
+ - --metrics-address=:{{ .Values.metrics.port }}
+ {{- else }}
+ - --metrics-address=
+ {{- end }}
+ {{- if .Values.metrics.enabled }}
+ ports:
+ - name: metrics
+ containerPort: {{ .Values.metrics.port }}
+ {{- end }}
resources:
{{ toYaml .Values.resources | indent 12 }}
securityContext:
diff --git a/stable/chaoskube/templates/service.yaml b/stable/chaoskube/templates/service.yaml
new file mode 100644
index 000000000000..b58882c2903e
--- /dev/null
+++ b/stable/chaoskube/templates/service.yaml
@@ -0,0 +1,18 @@
+{{- if .Values.metrics.enabled }}
+apiVersion: v1
+kind: Service
+metadata:
+ name: {{ include "chaoskube.fullname" . }}
+ labels:
+{{ include "labels.standard" . | indent 4 }}
+spec:
+ type: {{ .Values.metrics.service.type }}
+ ports:
+ - port: {{ .Values.metrics.service.port }}
+ targetPort: metrics
+ protocol: TCP
+ name: metrics
+ selector:
+ app.kubernetes.io/name: {{ include "chaoskube.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+{{- end }}
diff --git a/stable/chaoskube/templates/servicemonitor.yaml b/stable/chaoskube/templates/servicemonitor.yaml
new file mode 100644
index 000000000000..e3495513789e
--- /dev/null
+++ b/stable/chaoskube/templates/servicemonitor.yaml
@@ -0,0 +1,22 @@
+{{- if and .Values.metrics.enabled .Values.metrics.serviceMonitor.enabled }}
+apiVersion: monitoring.coreos.com/v1
+kind: ServiceMonitor
+metadata:
+ name: {{ include "chaoskube.fullname" . }}
+ labels:
+{{ include "labels.standard" . | indent 4 }}
+ {{- if .Values.metrics.serviceMonitor.additionalLabels }}
+{{ toYaml .Values.metrics.serviceMonitor.additionalLabels | indent 4 }}
+ {{- end }}
+spec:
+ endpoints:
+ - port: metrics
+ interval: 30s
+ namespaceSelector:
+ matchNames:
+ - {{ .Release.Namespace }}
+ selector:
+ matchLabels:
+ app.kubernetes.io/name: {{ include "chaoskube.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+{{- end }}
diff --git a/stable/chaoskube/values.yaml b/stable/chaoskube/values.yaml
index fb0fe59e6b5f..862ca80c298b 100644
--- a/stable/chaoskube/values.yaml
+++ b/stable/chaoskube/values.yaml
@@ -5,7 +5,7 @@ name: chaoskube
image: quay.io/linki/chaoskube
# docker image tag
-imageTag: v0.12.1
+imageTag: v0.14.0
# number of replicas to run
replicas: 1
@@ -28,6 +28,12 @@ dryRun: true
# Enable debug logging
debug: false
+# regular expression pattern for pod names to include (all included by default)
+includedPodNames:
+
+# regular expression pattern for pod names to exclude (none excluded by default)
+excludedPodNames:
+
# Set values for exempting specific week days from Chaoskube Actions
excludedWeekdays:
@@ -40,12 +46,36 @@ excludedDaysOfYear:
# Set specific Timezone for Actions to take place
timezone: UTC
+# If nothing set, defaults to "text". Switch to "json" to enable structured logging
+logFormat:
+
# minimum lifetime of a pod before it's considered for termination (0: immediately)
minimumAge: 0s
# grace period to give pods when terminating them (negative: pod decides)
gracePeriod: -1s
+metrics:
+ # Enable metrics handler
+ enabled: false
+
+ # Listening port for metrics handler
+ port: 8080
+
+ service:
+ # Metrics service type
+ type: ClusterIP
+
+ # Metrics service port
+ port: 8080
+
+ serviceMonitor:
+ # Set this to `true` to create ServiceMonitor for Prometheus operator
+ enabled: false
+
+ # Additional labels that can be used so ServiceMonitor will be discovered by Prometheus
+ additionalLabels: {}
+
priorityClassName: ""
# create service account with permission to list and kill pods
@@ -78,7 +108,7 @@ affinity: {}
# - topologyKey: "kubernetes.io/hostname"
# labelSelector:
# matchLabels:
- # app: chaoskube
+ # app.kubernetes.io/name: chaoskube
podAnnotations: {}
## Annotations for the chaoskube pod.
diff --git a/stable/chartmuseum/Chart.yaml b/stable/chartmuseum/Chart.yaml
index 373e7fa684e6..0d6922d0d314 100644
--- a/stable/chartmuseum/Chart.yaml
+++ b/stable/chartmuseum/Chart.yaml
@@ -1,8 +1,8 @@
apiVersion: v1
description: Host your own Helm Chart Repository
name: chartmuseum
-version: 1.9.0
-appVersion: 0.8.1
+version: 2.3.1
+appVersion: 0.8.2
home: https://github.com/helm/chartmuseum
icon: https://raw.githubusercontent.com/helm/chartmuseum/master/logo2.png
keywords:
diff --git a/stable/chartmuseum/README.md b/stable/chartmuseum/README.md
index 07b0e048c71b..054e593575c1 100644
--- a/stable/chartmuseum/README.md
+++ b/stable/chartmuseum/README.md
@@ -22,9 +22,18 @@ Please also see https://github.com/kubernetes-helm/chartmuseum
- [Using with Microsoft Azure Blob Storage](#using-with-microsoft-azure-blob-storage)
- [Using with Alibaba Cloud OSS Storage](#using-with-alibaba-cloud-oss-storage)
- [Using with Openstack Object Storage](#using-with-openstack-object-storage)
+ - [Using with Oracle Object Storage](#using-with-oracle-object-storage)
- [Using an existing secret](#using-an-existing-secret)
- [Using with local filesystem storage](#using-with-local-filesystem-storage)
- [Example storage class](#example-storage-class)
+ - [Authentication](#authentication)
+ - [Basic Authentication](#basic-authentication)
+ - [Bearer/Token auth](#bearertoken-auth)
+ - [Ingress](#ingress)
+ - [Hosts](#hosts)
+ - [Annotations](#annotations)
+ - [Extra Paths](#extra-paths)
+ - [Example Ingress configuration](#example-ingress-configuration)
- [Uninstall](#uninstall)
@@ -57,84 +66,109 @@ kubectl create -f /path/to/storage_class.yaml
The following table lists common configurable parameters of the chart and
their default values. See values.yaml for all available options.
-| Parameter | Description | Default |
-|----------------------------------------|---------------------------------------------|-----------------------------------------------------|
-| `image.pullPolicy` | Container pull policy | `IfNotPresent` |
-| `image.repository` | Container image to use | `chartmuseum/chartmuseum` |
-| `image.tag` | Container image tag to deploy | `v0.8.0` |
-| `persistence.accessMode` | Access mode to use for PVC | `ReadWriteOnce` |
-| `persistence.enabled` | Whether to use a PVC for persistent storage | `false` |
-| `persistence.size` | Amount of space to claim for PVC | `8Gi` |
-| `persistence.labels` | Additional labels for PVC | `{}` |
-| `persistence.storageClass` | Storage Class to use for PVC | `-` |
-| `persistence.volumeName` | Volume to use for PVC | `` |
-| `persistence.pv.enabled` | Whether to use a PV for persistent storage | `false` |
-| `persistence.pv.capacity.storage` | Storage size to use for PV | `8Gi` |
-| `persistence.pv.accessMode` | Access mode to use for PV | `ReadWriteOnce` |
-| `persistence.pv.nfs.server` | NFS server for PV | `` |
-| `persistence.pv.nfs.path` | Storage Path | `` |
-| `persistence.pv.pvname` | Custom name for private volume | `` |
-| `replicaCount` | k8s replicas | `1` |
-| `resources.limits.cpu` | Container maximum CPU | `100m` |
-| `resources.limits.memory` | Container maximum memory | `128Mi` |
-| `resources.requests.cpu` | Container requested CPU | `80m` |
-| `resources.requests.memory` | Container requested memory | `64Mi` |
-| `serviceAccount.create` | If true, create the service account | `false` |
-| `serviceAccount.name` | Name of the serviceAccount to create or use | `{{ chartmuseum.fullname }}` |
-| `securityContext` | Map of securityContext for the pod | `{ fsGroup: 1000 }` |
-| `nodeSelector` | Map of node labels for pod assignment | `{}` |
-| `tolerations` | List of node taints to tolerate | `[]` |
-| `affinity` | Map of node/pod affinities | `{}` |
-| `env.open.STORAGE` | Storage Backend to use | `local` |
-| `env.open.STORAGE_ALIBABA_BUCKET` | Bucket to store charts in for Alibaba | `` |
-| `env.open.STORAGE_ALIBABA_PREFIX` | Prefix to store charts under for Alibaba | `` |
-| `env.open.STORAGE_ALIBABA_ENDPOINT` | Alternative Alibaba endpoint | `` |
-| `env.open.STORAGE_ALIBABA_SSE` | Server side encryption algorithm to use | `` |
-| `env.open.STORAGE_AMAZON_BUCKET` | Bucket to store charts in for AWS | `` |
-| `env.open.STORAGE_AMAZON_ENDPOINT` | Alternative AWS endpoint | `` |
-| `env.open.STORAGE_AMAZON_PREFIX` | Prefix to store charts under for AWS | `` |
-| `env.open.STORAGE_AMAZON_REGION` | Region to use for bucket access for AWS | `` |
-| `env.open.STORAGE_AMAZON_SSE` | Server side encryption algorithm to use | `` |
-| `env.open.STORAGE_GOOGLE_BUCKET` | Bucket to store charts in for GCP | `` |
-| `env.open.STORAGE_GOOGLE_PREFIX` | Prefix to store charts under for GCP | `` |
-| `env.open.STORAGE_MICROSOFT_CONTAINER` | Container to store charts under for MS | `` |
-| `env.open.STORAGE_MICROSOFT_PREFIX` | Prefix to store charts under for MS | `` |
-| `env.open.STORAGE_OPENSTACK_CONTAINER` | Container to store charts for openstack | `` |
-| `env.open.STORAGE_OPENSTACK_PREFIX` | Prefix to store charts for openstack | `` |
-| `env.open.STORAGE_OPENSTACK_REGION` | Region of openstack container | `` |
-| `env.open.STORAGE_OPENSTACK_CACERT` | Path to a CA cert bundle for openstack | `` |
-| `env.open.CHART_POST_FORM_FIELD_NAME` | Form field to query for chart file content | `` |
-| `env.open.PROV_POST_FORM_FIELD_NAME` | Form field to query for chart provenance | `` |
-| `env.open.DEPTH` | levels of nested repos for multitenancy. | `0` |
-| `env.open.DEBUG` | Show debug messages | `false` |
-| `env.open.LOG_JSON` | Output structured logs in JSON | `true` |
-| `env.open.DISABLE_STATEFILES` | Disable use of index-cache.yaml | `false` |
-| `env.open.DISABLE_METRICS` | Disable Prometheus metrics | `true` |
-| `env.open.DISABLE_API` | Disable all routes prefixed with /api | `true` |
-| `env.open.ALLOW_OVERWRITE` | Allow chart versions to be re-uploaded | `false` |
-| `env.open.CHART_URL` | Absolute url for .tgzs in index.yaml | `` |
-| `env.open.AUTH_ANONYMOUS_GET` | Allow anon GET operations when auth is used | `false` |
-| `env.open.CONTEXT_PATH` | Set the base context path | `` |
-| `env.open.INDEX_LIMIT` | Parallel scan limit for the repo indexer | `` |
-| `env.open.CACHE` | Cache store, can be one of: redis | `` |
-| `env.open.CACHE_REDIS_ADDR` | Address of Redis service (host:port) | `` |
-| `env.open.CACHE_REDIS_DB` | Redis database to be selected after connect | `0` |
-| `env.field` | Expose pod information to containers through environment variables | `` |
-| `env.existingSecret` | Name of the existing secret use values | `` |
-| `env.existingSecret.BASIC_AUTH_USER` | Key name in the secret for the Username | `` |
-| `env.existingSecret.BASIC_AUTH_PASS` | Key name in the secret for the Password | `` |
-| `env.secret.BASIC_AUTH_USER` | Username for basic HTTP authentication | `` |
-| `env.secret.BASIC_AUTH_PASS` | Password for basic HTTP authentication | `` |
-| `env.secret.CACHE_REDIS_PASSWORD` | Redis requirepass server configuration | `` |
-| `gcp.secret.enabled` | Flag for the GCP service account | `false` |
-| `gcp.secret.name` | Secret name for the GCP json file | `` |
-| `gcp.secret.key` | Secret key for te GCP json file | `credentials.json` |
-| `service.type` | Kubernetes Service type | `ClusterIP` |
-| `service.clusterIP` | Static clusterIP or None for headless services| `nil` |
-| `service.servicename` | Custom name for service | `` |
-| `service.labels` | Additional labels for service | `{}` |
-| `deployment.labels` | Additional labels for deployment | `{}` |
-| `deployment.matchlabes` | Match labels for deployment selector | `{}` |
+| Parameter | Description | Default |
+|-----------------------------------------|--------------------------------------------------------------------|--------------------------------------|
+| `image.pullPolicy` | Container pull policy | `IfNotPresent` |
+| `image.repository` | Container image to use | `chartmuseum/chartmuseum` |
+| `image.tag` | Container image tag to deploy | `v0.8.0` |
+| `persistence.accessMode` | Access mode to use for PVC | `ReadWriteOnce` |
+| `persistence.enabled` | Whether to use a PVC for persistent storage | `false` |
+| `persistence.size` | Amount of space to claim for PVC | `8Gi` |
+| `persistence.labels` | Additional labels for PVC | `{}` |
+| `persistence.storageClass` | Storage Class to use for PVC | `-` |
+| `persistence.volumeName` | Volume to use for PVC | `` |
+| `persistence.pv.enabled` | Whether to use a PV for persistent storage | `false` |
+| `persistence.pv.capacity.storage` | Storage size to use for PV | `8Gi` |
+| `persistence.pv.accessMode` | Access mode to use for PV | `ReadWriteOnce` |
+| `persistence.pv.nfs.server` | NFS server for PV | `` |
+| `persistence.pv.nfs.path` | Storage Path | `` |
+| `persistence.pv.pvname` | Custom name for private volume | `` |
+| `replicaCount` | k8s replicas | `1` |
+| `resources.limits.cpu` | Container maximum CPU | `100m` |
+| `resources.limits.memory` | Container maximum memory | `128Mi` |
+| `resources.requests.cpu` | Container requested CPU | `80m` |
+| `resources.requests.memory` | Container requested memory | `64Mi` |
+| `serviceAccount.create` | If true, create the service account | `false` |
+| `serviceAccount.name` | Name of the serviceAccount to create or use | `{{ chartmuseum.fullname }}` |
+| `securityContext` | Map of securityContext for the pod | `{ fsGroup: 1000 }` |
+| `nodeSelector` | Map of node labels for pod assignment | `{}` |
+| `tolerations` | List of node taints to tolerate | `[]` |
+| `affinity` | Map of node/pod affinities | `{}` |
+| `env.open.STORAGE` | Storage Backend to use | `local` |
+| `env.open.STORAGE_ALIBABA_BUCKET` | Bucket to store charts in for Alibaba | `` |
+| `env.open.STORAGE_ALIBABA_PREFIX` | Prefix to store charts under for Alibaba | `` |
+| `env.open.STORAGE_ALIBABA_ENDPOINT` | Alternative Alibaba endpoint | `` |
+| `env.open.STORAGE_ALIBABA_SSE` | Server side encryption algorithm to use | `` |
+| `env.open.STORAGE_AMAZON_BUCKET` | Bucket to store charts in for AWS | `` |
+| `env.open.STORAGE_AMAZON_ENDPOINT` | Alternative AWS endpoint | `` |
+| `env.open.STORAGE_AMAZON_PREFIX` | Prefix to store charts under for AWS | `` |
+| `env.open.STORAGE_AMAZON_REGION` | Region to use for bucket access for AWS | `` |
+| `env.open.STORAGE_AMAZON_SSE` | Server side encryption algorithm to use | `` |
+| `env.open.STORAGE_GOOGLE_BUCKET` | Bucket to store charts in for GCP | `` |
+| `env.open.STORAGE_GOOGLE_PREFIX` | Prefix to store charts under for GCP | `` |
+| `env.open.STORAGE_MICROSOFT_CONTAINER` | Container to store charts under for MS | `` |
+| `env.open.STORAGE_MICROSOFT_PREFIX` | Prefix to store charts under for MS | `` |
+| `env.open.STORAGE_OPENSTACK_CONTAINER` | Container to store charts for openstack | `` |
+| `env.open.STORAGE_OPENSTACK_PREFIX` | Prefix to store charts for openstack | `` |
+| `env.open.STORAGE_OPENSTACK_REGION` | Region of openstack container | `` |
+| `env.open.STORAGE_OPENSTACK_CACERT` | Path to a CA cert bundle for openstack | `` |
+| `env.open.STORAGE_ORACLE_COMPARTMENTID` | Compartment ID for Oracle Object Store | `` |
+| `env.open.STORAGE_ORACLE_BUCKET` | Bucket to store charts in Oracle Object Store | `` |
+| `env.open.STORAGE_ORACLE_PREFIX`        | Prefix to store charts for Oracle Object Store                     | ``                                   |
+| `env.open.CHART_POST_FORM_FIELD_NAME` | Form field to query for chart file content | `` |
+| `env.open.PROV_POST_FORM_FIELD_NAME` | Form field to query for chart provenance | `` |
+| `env.open.DEPTH` | levels of nested repos for multitenancy. | `0` |
+| `env.open.DEBUG` | Show debug messages | `false` |
+| `env.open.LOG_JSON` | Output structured logs in JSON | `true` |
+| `env.open.DISABLE_STATEFILES` | Disable use of index-cache.yaml | `false` |
+| `env.open.DISABLE_METRICS` | Disable Prometheus metrics | `true` |
+| `env.open.DISABLE_API` | Disable all routes prefixed with /api | `true` |
+| `env.open.ALLOW_OVERWRITE` | Allow chart versions to be re-uploaded | `false` |
+| `env.open.CHART_URL` | Absolute url for .tgzs in index.yaml | `` |
+| `env.open.AUTH_ANONYMOUS_GET` | Allow anon GET operations when auth is used | `false` |
+| `env.open.CONTEXT_PATH` | Set the base context path | `` |
+| `env.open.INDEX_LIMIT` | Parallel scan limit for the repo indexer | `` |
+| `env.open.CACHE` | Cache store, can be one of: redis | `` |
+| `env.open.CACHE_REDIS_ADDR` | Address of Redis service (host:port) | `` |
+| `env.open.CACHE_REDIS_DB` | Redis database to be selected after connect | `0` |
+| `env.open.BEARER_AUTH` | Enable bearer auth | `false` |
+| `env.open.AUTH_REALM` | Realm used for bearer authentication | `` |
+| `env.open.AUTH_SERVICE` | Service used for bearer authentication | `` |
+| `env.field` | Expose pod information to containers through environment variables | `` |
+| `env.existingSecret`                    | Name of an existing secret to get the values from                  | ``                                   |
+| `env.existingSecretMappings.BASIC_AUTH_USER` | Key name in the secret for the Username                       | ``                                   |
+| `env.existingSecretMappings.BASIC_AUTH_PASS` | Key name in the secret for the Password                       | ``                                   |
+| `env.secret.BASIC_AUTH_USER` | Username for basic HTTP authentication | `` |
+| `env.secret.BASIC_AUTH_PASS` | Password for basic HTTP authentication | `` |
+| `env.secret.CACHE_REDIS_PASSWORD` | Redis requirepass server configuration | `` |
+| `gcp.secret.enabled` | Flag for the GCP service account | `false` |
+| `gcp.secret.name` | Secret name for the GCP json file | `` |
+| `gcp.secret.key`                        | Secret key for the GCP json file                                   | `credentials.json`                   |
+| `oracle.secret.enabled` | Flag for Oracle OCI account | `false` |
+| `oracle.secret.name` | Secret name for OCI config and key | `` |
+| `oracle.secret.config` | Secret key that holds the OCI config | `config` |
+| `oracle.secret.key_file` | Secret key that holds the OCI private key | `key_file` |
+| `bearerAuth.secret.enabled`             | Flag for bearer auth public key secret                             | `false`                              |
+| `bearerAuth.secret.publicKeySecret`     | The name of the secret with the public key                         | ``                                   |
+| `service.type` | Kubernetes Service type | `ClusterIP` |
+| `service.clusterIP` | Static clusterIP or None for headless services | `nil` |
+| `service.externalTrafficPolicy` | Source IP preservation (only for Service type NodePort) | `Local` |
+| `service.servicename` | Custom name for service | `` |
+| `service.labels` | Additional labels for service | `{}` |
+| `deployment.labels` | Additional labels for deployment | `{}` |
+| `deployment.matchlabes` | Match labels for deployment selector | `{}` |
+| `ingress.enabled` | Enable ingress controller resource | `false` |
+| `ingress.annotations` | Ingress annotations | `[]` |
+| `ingress.labels` | Ingress labels | `[]` |
+| `ingress.hosts[0].name` | Hostname for the ingress | `` |
+| `ingress.hosts[0].path` | Path within the url structure | `` |
+| `ingress.hosts[0].tls`                  | Enable TLS on the ingress host                                     | `false`                              |
+| `ingress.hosts[0].tlsSecret`            | TLS secret to use (must be manually created)                       | ``                                   |
+| `ingress.hosts[0].serviceName`          | The name of the service to route traffic to                        | `{{ include "chartmuseum.fullname" . }}` |
+| `ingress.hosts[0].servicePort`          | The port of the service to route traffic to                        | `{{ .Values.service.externalPort }}` |
+| `ingress.extraPaths[0].path` | Path within the url structure. | `` |
+| `ingress.extraPaths[0].service` | The name of the service to route traffic to. | `` |
+| `ingress.extraPaths[0].port` | The port of the service to route traffic to. | `` |
Specify each parameter using the `--set key=value[,key=value]` argument to
`helm install`.
@@ -411,6 +445,44 @@ env:
Run command to install
+```shell
+helm install --name my-chartmuseum -f custom.yaml stable/chartmuseum
+```
+### Using with Oracle Object Storage
+
+The Oracle (OCI) configuration file and private key need to be added to a secret, which is mounted at `/home/chartmuseum/.oci`. Your OCI config needs to be under the `[DEFAULT]` profile, and its `key_file` needs to point to `/home/chartmuseum/.oci/oci.key`. See https://docs.cloud.oracle.com/iaas/Content/API/Concepts/sdkconfig.htm
+
+```shell
+kubectl create secret generic chartmuseum-secret --from-file=config=".oci/config" --from-file=key_file=".oci/oci.key"
+```
+
+Then you can either use a values YAML file or set the values on the command line:
+
+```shell
+helm install stable/chartmuseum --debug --set env.open.STORAGE=oracle,env.open.STORAGE_ORACLE_COMPARTMENTID=ocid1.compartment.oc1..abc123,env.open.STORAGE_ORACLE_BUCKET=myocibucket,env.open.STORAGE_ORACLE_PREFIX=chartmuseum,oracle.secret.enabled=true,oracle.secret.name=chartmuseum-secret
+```
+
+If you prefer to use a yaml file:
+
+```yaml
+env:
+ open:
+ STORAGE: oracle
+ STORAGE_ORACLE_COMPARTMENTID: ocid1.compartment.oc1..abc123
+ STORAGE_ORACLE_BUCKET: myocibucket
+ STORAGE_ORACLE_PREFIX: chartmuseum
+
+oracle:
+ secret:
+    enabled: true
+ name: chartmuseum-secret
+ config: config
+ key_file: key_file
+
+```
+
+Run command to install
+
```shell
helm install --name my-chartmuseum -f custom.yaml stable/chartmuseum
```
@@ -503,6 +575,102 @@ parameters:
userSecretName: thesecret
```
+### Authentication
+
+By default this chart does not have any authentication configured, so anyone can fetch or upload charts (assuming the API is enabled). Two methods of authentication are supported.
+
+#### Basic Authentication
+
+This protects all API routes with HTTP basic auth. The credentials are configured either as plain text in the values, which is stored as a secret in the Kubernetes cluster, by setting:
+
+```yaml
+env:
+ secret:
+ BASIC_AUTH_USER: curator
+ BASIC_AUTH_PASS: mypassword
+```
+
+Or by using values from an existing secret in the cluster that can be created using:
+
+```shell
+kubectl create secret generic chartmuseum-secret --from-literal="basic-auth-user=curator" --from-literal="basic-auth-pass=mypassword"
+```
+
+This secret can be used in the values file as follows:
+
+```yaml
+env:
+ existingSecret: chartmuseum-secret
+ existingSecretMappings:
+ BASIC_AUTH_USER: basic-auth-user
+ BASIC_AUTH_PASS: basic-auth-pass
+```
+
+#### Bearer/Token auth
+
+When this is enabled, ChartMuseum is configured with a public key and will accept RS256 JWT tokens signed by the associated private key, passed in the Authorization header. You can use the [chartmuseum/auth](https://github.com/chartmuseum/auth) Go library to generate valid JWT tokens. For more information about how this works, please see [chartmuseum/auth-server-example](https://github.com/chartmuseum/auth-server-example).
+
+To use this, the public key should be stored in a secret. This can be done with:
+
+```shell
+kubectl create secret generic chartmuseum-public-key --from-file=public-key.pem
+```
+
+Bearer/Token auth can then be configured using the following values:
+
+```yaml
+env:
+ open:
+ BEARER_AUTH: true
+ AUTH_REALM:
+ AUTH_SERVICE:
+
+bearerAuth:
+ secret:
+ enabled: true
+ publicKeySecret: chartmuseum-public-key
+```
+
+### Ingress
+
+This chart provides support for ingress resources. If you have an ingress controller installed on your cluster, such as [nginx-ingress](https://hub.kubeapps.com/charts/stable/nginx-ingress) or [traefik](https://hub.kubeapps.com/charts/stable/traefik), you can utilize the ingress controller to expose ChartMuseum.
+
+To enable ingress integration, please set `ingress.enabled` to `true`.
+
+#### Hosts
+
+Most likely you will only want to have one hostname that maps to this Chartmuseum installation, however, it is possible to have more than one host. To facilitate this, the `ingress.hosts` object is an array. TLS secrets referenced in the ingress host configuration must be manually created in the namespace.
+
+In most cases, you should not specify values for `ingress.hosts[0].serviceName` and `ingress.hosts[0].servicePort`. However, some ingress controllers support advanced scenarios requiring you to specify these values. For example, [setting up an SSL redirect using the AWS ALB Ingress Controller](https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/tasks/ssl_redirect/).
+
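+The host list can also be expressed in a values file; a sketch (the hostname and the `chartmuseum-tls` secret name are placeholders, and the TLS secret must already exist in the namespace):
+
+```yaml
+ingress:
+  enabled: true
+  hosts:
+    - name: chartmuseum.domain.com
+      path: /
+      tls: true
+      tlsSecret: chartmuseum-tls
+```
+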
+#### Extra Paths
+
+Specifying extra paths to prepend to every host configuration is especially useful when configuring [custom actions with AWS ALB Ingress Controller](https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/ingress/annotation/#actions).
+
+```shell
+helm install --name my-chartmuseum stable/chartmuseum \
+ --set ingress.enabled=true \
+ --set ingress.hosts[0].name=chartmuseum.domain.com \
+ --set ingress.extraPaths[0].service=ssl-redirect \
+  --set ingress.extraPaths[0].port=use-annotation
+```
+
+
+#### Annotations
+
+For annotations, please see [this document for nginx](https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md) and [this document for Traefik](https://docs.traefik.io/configuration/backends/kubernetes/#general-annotations). Not all annotations are supported by all ingress controllers, but these documents do a good job of indicating which annotations are supported by many popular ingress controllers. Annotations can be set using `ingress.annotations`.
+
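+As an illustration, annotations for the nginx ingress controller might look like the following (these specific annotations are examples for nginx-ingress, not chart defaults):
+
+```yaml
+ingress:
+  annotations:
+    kubernetes.io/ingress.class: nginx
+    # allow large chart uploads through the proxy
+    nginx.ingress.kubernetes.io/proxy-body-size: "0"
+```
+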
+#### Example Ingress configuration
+
+```shell
+helm install --name my-chartmuseum stable/chartmuseum \
+ --set ingress.enabled=true \
+ --set ingress.hosts[0].name=chartmuseum.domain.com \
+  --set ingress.hosts[0].path=/ \
+  --set ingress.hosts[0].tls=true \
+  --set ingress.hosts[0].tlsSecret=chartmuseum.tls-secret
+```
+
## Uninstall
By default, a deliberate uninstall will result in the persistent volume
diff --git a/stable/chartmuseum/templates/deployment.yaml b/stable/chartmuseum/templates/deployment.yaml
index 3187b2fbf293..683faee0d41a 100644
--- a/stable/chartmuseum/templates/deployment.yaml
+++ b/stable/chartmuseum/templates/deployment.yaml
@@ -77,6 +77,10 @@ spec:
{{- end }}
{{- end }}
{{- end }}
+{{- if .Values.bearerAuth.secret.enabled }}
+ - name: AUTH_CERT_PATH
+ value: /var/keys/public-key.pem
+{{ end }}
args:
- --port=8080
{{- if eq .Values.env.open.STORAGE "local" }}
@@ -95,15 +99,23 @@ spec:
path: {{ .Values.env.open.CONTEXT_PATH }}/health
port: http
{{ toYaml .Values.probes.readiness | indent 10 }}
-{{- if eq .Values.env.open.STORAGE "local" }}
volumeMounts:
+{{- if eq .Values.env.open.STORAGE "local" }}
- mountPath: /storage
name: storage-volume
{{- end }}
{{- if .Values.gcp.secret.enabled }}
- volumeMounts:
- mountPath: /etc/secrets/google
name: {{ include "chartmuseum.fullname" . }}-gcp
+{{- end }}
+{{- if .Values.oracle.secret.enabled }}
+ - mountPath: /home/chartmuseum/.oci
+ name: {{ include "chartmuseum.fullname" . }}-oracle
+{{- end }}
+{{- if .Values.bearerAuth.secret.enabled }}
+ - name: public-key
+ mountPath: /var/keys
+ readOnly: true
{{- end }}
{{- with .Values.resources }}
resources:
@@ -153,3 +165,18 @@ spec:
path: credentials.json
{{ end }}
{{ end }}
+ {{ if .Values.oracle.secret.enabled }}
+ - name: {{ include "chartmuseum.fullname" . }}-oracle
+ secret:
+ secretName: {{ .Values.oracle.secret.name }}
+ items:
+ - key: {{ .Values.oracle.secret.config }}
+ path: config
+ - key: {{ .Values.oracle.secret.key_file }}
+ path: oci.key
+ {{ end }}
+{{- if .Values.bearerAuth.secret.enabled }}
+ - name: public-key
+ secret:
+ secretName: {{ .Values.bearerAuth.secret.publicKeySecret }}
+{{- end }}
diff --git a/stable/chartmuseum/templates/ingress.yaml b/stable/chartmuseum/templates/ingress.yaml
index 0ae70e22cd2c..b3131c118a81 100644
--- a/stable/chartmuseum/templates/ingress.yaml
+++ b/stable/chartmuseum/templates/ingress.yaml
@@ -1,6 +1,7 @@
+{{- if .Values.ingress.enabled }}
{{- $servicePort := .Values.service.externalPort -}}
{{- $serviceName := include "chartmuseum.fullname" . -}}
-{{- if .Values.ingress.enabled }}
+{{- $ingressExtraPaths := .Values.ingress.extraPaths -}}
---
apiVersion: extensions/v1beta1
kind: Ingress
@@ -15,19 +16,27 @@ metadata:
{{ include "chartmuseum.labels.standard" . | indent 4 }}
spec:
rules:
- {{- range $host, $paths := .Values.ingress.hosts }}
- - host: {{ $host }}
+ {{- range .Values.ingress.hosts }}
+ - host: {{ .name }}
http:
paths:
- {{- range $paths }}
- - path: {{ . }}
+ {{- range $ingressExtraPaths }}
+ - path: {{ default "/" .path | quote }}
+ backend:
+ serviceName: {{ default $serviceName .service }}
+ servicePort: {{ default $servicePort .port }}
+ {{- end }}
+ - path: {{ default "/" .path | quote }}
backend:
- serviceName: {{ $serviceName }}
- servicePort: {{ $servicePort }}
- {{- end -}}
- {{- end -}}
- {{- if .Values.ingress.tls }}
+ serviceName: {{ default $serviceName .serviceName }}
+ servicePort: {{ default $servicePort .servicePort }}
+ {{- end }}
tls:
-{{ toYaml .Values.ingress.tls | indent 4 }}
- {{- end -}}
+ {{- range .Values.ingress.hosts }}
+ {{- if .tls }}
+ - hosts:
+ - {{ .name }}
+ secretName: {{ .tlsSecret }}
+ {{- end }}
+ {{- end }}
{{- end -}}
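For reference, a hypothetical values fragment exercising the reworked per-host schema and the new `extraPaths` list (the host name, secret name, and the ALB-style redirect entry below are illustrative, not part of the chart):

```yaml
ingress:
  enabled: true
  hosts:
    - name: chartmuseum.example.com   # rendered as the rule's host
      path: /
      tls: true
      tlsSecret: chartmuseum-example-tls
  # Each extraPaths entry is rendered before the per-host path; useful e.g.
  # for an aws-alb-ingress-controller ssl-redirect action (illustrative only)
  extraPaths:
    - path: /*
      service: ssl-redirect
      port: use-annotation
```

Omitted `path`, `service`, and `port` fields fall back to `/`, the chart's fullname, and `service.externalPort` via the template's `default` calls.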
diff --git a/stable/chartmuseum/templates/service.yaml b/stable/chartmuseum/templates/service.yaml
index 65ce7a288302..7d42601ccbc3 100644
--- a/stable/chartmuseum/templates/service.yaml
+++ b/stable/chartmuseum/templates/service.yaml
@@ -17,6 +17,9 @@ metadata:
{{- end }}
spec:
type: {{ .Values.service.type }}
+ {{- if (and (eq .Values.service.type "NodePort") (not (empty .Values.service.nodePort))) }}
+ externalTrafficPolicy: {{ .Values.service.externalTrafficPolicy }}
+ {{- end }}
{{- if eq .Values.service.type "ClusterIP" }}
{{- if .Values.service.clusterIP }}
clusterIP: {{ .Values.service.clusterIP }}
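A sketch of the values needed to exercise the new `externalTrafficPolicy` field; note that the template above only renders it when the service type is `NodePort` *and* `nodePort` is non-empty (port number below is illustrative):

```yaml
service:
  type: NodePort
  externalPort: 8080
  # externalTrafficPolicy is only emitted when type is NodePort
  # AND nodePort is set, per the template condition
  nodePort: 30080
  externalTrafficPolicy: Local
```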
diff --git a/stable/chartmuseum/values.yaml b/stable/chartmuseum/values.yaml
index 28d1a19d852d..96d29928bfc7 100644
--- a/stable/chartmuseum/values.yaml
+++ b/stable/chartmuseum/values.yaml
@@ -5,11 +5,11 @@ strategy:
maxUnavailable: 0
image:
repository: chartmuseum/chartmuseum
- tag: v0.8.1
+ tag: v0.8.2
pullPolicy: IfNotPresent
env:
open:
- # storage backend, can be one of: local, alibaba, amazon, google, microsoft
+ # storage backend, can be one of: local, alibaba, amazon, google, microsoft, oracle
STORAGE: local
# oss bucket to store charts for alibaba storage backend
STORAGE_ALIBABA_BUCKET:
@@ -46,6 +46,12 @@ env:
STORAGE_OPENSTACK_REGION:
# path to a CA cert bundle for your openstack endpoint
STORAGE_OPENSTACK_CACERT:
+  # compartment id for oracle storage backend
+ STORAGE_ORACLE_COMPARTMENTID:
+ # oci bucket to store charts for oracle storage backend
+ STORAGE_ORACLE_BUCKET:
+ # prefix to store charts for oracle storage backend
+ STORAGE_ORACLE_PREFIX:
# form field which will be queried for the chart file content
CHART_POST_FORM_FIELD_NAME: chart
# form field which will be queried for the provenance file content
@@ -78,6 +84,12 @@ env:
CACHE_REDIS_ADDR:
# Redis database to be selected after connect
CACHE_REDIS_DB: 0
+ # enable bearer auth
+ BEARER_AUTH: false
+ # auth realm used for bearer auth
+ AUTH_REALM:
+ # auth service used for bearer auth
+ AUTH_SERVICE:
field:
# POD_IP: status.podIP
secret:
@@ -118,6 +130,7 @@ replica:
service:
servicename:
type: ClusterIP
+ externalTrafficPolicy: Local
# clusterIP: None
externalPort: 8080
nodePort:
@@ -210,18 +223,19 @@ ingress:
## Chartmuseum Ingress hostnames
## Must be provided if Ingress is enabled
##
-# hosts:
-# chartmuseum.domain.com:
-# - /charts
-# - /index.yaml
-
-## Chartmuseum Ingress TLS configuration
-## Secrets must be manually created in the namespace
-##
-# tls:
-# - secretName: chartmuseum-server-tls
-# hosts:
-# - chartmuseum.domain.com
+# hosts:
+# - name: chartmuseum.domain1.com
+# path: /
+# tls: false
+# - name: chartmuseum.domain2.com
+# path: /
+#
+# ## Set this to true in order to enable TLS on the ingress record
+# tls: true
+#
+# ## If TLS is set to true, you must declare what secret will store the key/certificate for TLS
+# ## Secrets must be added manually to the namespace
+# tlsSecret: chartmuseum.domain2-tls
# Adding secrets to tiller is not a great option, so If you want to use an existing
# secret that contains the json file, you can use the following entries
@@ -232,3 +246,16 @@ gcp:
name:
# Secret key that holds the json value.
key: credentials.json
+oracle:
+ secret:
+ enabled: false
+ # Name of the secret that contains the encoded config and key
+ name:
+ # Secret key that holds the oci config
+ config: config
+ # Secret key that holds the oci private key
+ key_file: key_file
+bearerAuth:
+ secret:
+ enabled: false
+ publicKeySecret: chartmuseum-public-key
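Pulling the new options together, a hypothetical values override enabling the Oracle storage backend and bearer auth (the compartment OCID, bucket, realm URL, and secret names are placeholders, not defaults):

```yaml
env:
  open:
    STORAGE: oracle
    STORAGE_ORACLE_COMPARTMENTID: ocid1.compartment.oc1..example  # placeholder
    STORAGE_ORACLE_BUCKET: my-charts                              # placeholder
    STORAGE_ORACLE_PREFIX: stable
    BEARER_AUTH: true
    AUTH_REALM: https://auth.example.com/token                    # placeholder
    AUTH_SERVICE: chartmuseum
oracle:
  secret:
    enabled: true
    # must contain keys `config` and `key_file`, mounted to /home/chartmuseum/.oci
    name: oci-credentials
bearerAuth:
  secret:
    enabled: true
    # secret holding public-key.pem, mounted read-only at /var/keys
    publicKeySecret: chartmuseum-public-key
```

With `bearerAuth.secret.enabled`, the deployment template also sets `AUTH_CERT_PATH` to `/var/keys/public-key.pem` automatically.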
diff --git a/stable/chronograf/Chart.yaml b/stable/chronograf/Chart.yaml
index 145fc048e36f..02de1a1c0e8a 100755
--- a/stable/chronograf/Chart.yaml
+++ b/stable/chronograf/Chart.yaml
@@ -1,5 +1,6 @@
+apiVersion: v1
name: chronograf
-version: 1.0.1
+version: 1.0.2
appVersion: 1.7.7
description: Open-source web application written in Go and React.js that provides
the tools to visualize your monitoring data and easily create alerting and automation
diff --git a/stable/clamav/.helmignore b/stable/clamav/.helmignore
new file mode 100644
index 000000000000..f0c131944441
--- /dev/null
+++ b/stable/clamav/.helmignore
@@ -0,0 +1,21 @@
+# Patterns to ignore when building packages.
+# This supports shell glob matching, relative path matching, and
+# negation (prefixed with !). Only one pattern per line.
+.DS_Store
+# Common VCS dirs
+.git/
+.gitignore
+.bzr/
+.bzrignore
+.hg/
+.hgignore
+.svn/
+# Common backup files
+*.swp
+*.bak
+*.tmp
+*~
+# Various IDEs
+.project
+.idea/
+*.tmproj
diff --git a/stable/clamav/Chart.yaml b/stable/clamav/Chart.yaml
new file mode 100644
index 000000000000..26c2a68070ea
--- /dev/null
+++ b/stable/clamav/Chart.yaml
@@ -0,0 +1,9 @@
+apiVersion: v1
+appVersion: "1.0"
+description: An Open-Source antivirus engine for detecting trojans, viruses, malware & other malicious threats.
+name: clamav
+version: 1.0.0
+home: https://www.clamav.net
+maintainers:
+- name: zakkg3
+ email: zakkg3@gmail.com
diff --git a/stable/clamav/OWNERS b/stable/clamav/OWNERS
new file mode 100644
index 000000000000..c6c13219bf4e
--- /dev/null
+++ b/stable/clamav/OWNERS
@@ -0,0 +1,4 @@
+approvers:
+- zakkg3
+reviewers:
+- zakkg3
diff --git a/stable/clamav/README.md b/stable/clamav/README.md
new file mode 100644
index 000000000000..e4a07887ad01
--- /dev/null
+++ b/stable/clamav/README.md
@@ -0,0 +1,66 @@
+# ClamAV
+
+## An Open-Source antivirus engine for detecting trojans, viruses, malware & other malicious threats.
+
+[ClamAV](https://www.clamav.net/) is the open source standard for mail gateway scanning software, developed by [Cisco Talos](https://github.com/Cisco-Talos/clamav-devel). This Helm chart uses the [MailU](https://github.com/Mailu/Mailu) Docker image.
+
+## QuickStart
+
+```bash
+$ helm install stable/clamav --name foo --namespace bar
+```
+
+## Introduction
+
+This chart bootstraps a ClamAV deployment and service on a Kubernetes cluster using the Helm Package manager.
+
+## Prerequisites
+
+- Kubernetes 1.4+
+- PV provisioner support in the underlying infrastructure (optional)
+
+## Installing the Chart
+
+To install the chart with the release name `my-release`:
+
+```bash
+$ helm install --name my-release stable/clamav
+```
+
+The command deploys ClamAV on the Kubernetes cluster in the default configuration. The [configuration](#configuration) section lists the parameters that can be configured during installation.
+
+> **Tip**: List all releases using `helm list`
+
+## Uninstalling the Chart
+
+To uninstall/delete the `my-release` deployment:
+
+```bash
+$ helm delete my-release --purge
+```
+
+The command removes all the Kubernetes components associated with the chart and deletes the release.
+
+## Configuration
+
+The configurable parameters of the ClamAV chart and their descriptions can be seen in `values.yaml`. The [full documentation](https://www.clamav.net/documents/clam-antivirus-0-101-0-user-manual) contains more information about running ClamAV in Docker.
+
+> **Tip**: You can use the default [values.yaml](values.yaml)
+
+## Memory Usage
+
+ClamAV uses around 1 GB RAM.
+
+## Virus Definitions
+
+For ClamAV to work properly, both the ClamAV engine and the ClamAV Virus Database (CVD) must be kept up to date.
+
+The virus database is usually updated many times per week.
+
+Freshclam should perform these updates automatically. Instructions for setting up Freshclam can be found in the [documentation](https://www.clamav.net/documents/clam-antivirus-0-101-0-user-manual) section.
+If your network is segmented or the end hosts are unable to reach the Internet, you should investigate setting up a private local mirror. If this is not viable, you may use these direct [downloads](https://www.clamav.net/downloads).
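Since the chart notes a working set of around 1 GB, a hypothetical override file sizing the pod accordingly (file name and limits are illustrative, not chart defaults):

```yaml
# values-prod.yaml (hypothetical override file)
replicaCount: 2
image:
  repository: mailu/clamav
  tag: latest
resources:
  requests:
    memory: 1Gi
  limits:
    # leave headroom above the ~1 GB working set noted above
    memory: 2Gi
```

It would be applied with `helm install stable/clamav -f values-prod.yaml`.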
diff --git a/stable/clamav/templates/NOTES.txt b/stable/clamav/templates/NOTES.txt
new file mode 100644
index 000000000000..0d36bbcb931b
--- /dev/null
+++ b/stable/clamav/templates/NOTES.txt
@@ -0,0 +1,19 @@
+1. Get the application URL by running these commands:
+{{- if .Values.ingress.enabled }}
+{{- range .Values.ingress.hosts }}
+ http{{ if $.Values.ingress.tls }}s{{ end }}://{{ . }}{{ $.Values.ingress.path }}
+{{- end }}
+{{- else if contains "NodePort" .Values.service.type }}
+ export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "clamav.fullname" . }})
+ export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
+ echo http://$NODE_IP:$NODE_PORT
+{{- else if contains "LoadBalancer" .Values.service.type }}
+ NOTE: It may take a few minutes for the LoadBalancer IP to be available.
+  You can watch the status of it by running 'kubectl get svc -w {{ include "clamav.fullname" . }}'
+ export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "clamav.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
+ echo http://$SERVICE_IP:{{ .Values.service.port }}
+{{- else if contains "ClusterIP" .Values.service.type }}
+ export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "clamav.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
+ echo "Visit http://127.0.0.1:8080 to use your application"
+ kubectl port-forward $POD_NAME 8080:80
+{{- end }}
diff --git a/stable/clamav/templates/_helpers.tpl b/stable/clamav/templates/_helpers.tpl
new file mode 100644
index 000000000000..05851a8245ab
--- /dev/null
+++ b/stable/clamav/templates/_helpers.tpl
@@ -0,0 +1,32 @@
+{{/* vim: set filetype=mustache: */}}
+{{/*
+Expand the name of the chart.
+*/}}
+{{- define "clamav.name" -}}
+{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+
+{{/*
+Create a default fully qualified app name.
+We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
+If release name contains chart name it will be used as a full name.
+*/}}
+{{- define "clamav.fullname" -}}
+{{- if .Values.fullnameOverride -}}
+{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
+{{- else -}}
+{{- $name := default .Chart.Name .Values.nameOverride -}}
+{{- if contains $name .Release.Name -}}
+{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
+{{- else -}}
+{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Create chart name and version as used by the chart label.
+*/}}
+{{- define "clamav.chart" -}}
+{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
diff --git a/stable/clamav/templates/deployment.yaml b/stable/clamav/templates/deployment.yaml
new file mode 100644
index 000000000000..c1621312fa32
--- /dev/null
+++ b/stable/clamav/templates/deployment.yaml
@@ -0,0 +1,51 @@
+apiVersion: apps/v1beta2
+kind: Deployment
+metadata:
+ name: {{ include "clamav.fullname" . }}
+ labels:
+ app.kubernetes.io/name: {{ include "clamav.name" . }}
+ helm.sh/chart: {{ include "clamav.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+spec:
+ replicas: {{ .Values.replicaCount }}
+ selector:
+ matchLabels:
+ app.kubernetes.io/name: {{ include "clamav.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ template:
+ metadata:
+ labels:
+ app.kubernetes.io/name: {{ include "clamav.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ spec:
+ containers:
+ - name: {{ .Chart.Name }}
+ image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
+ imagePullPolicy: {{ .Values.image.pullPolicy }}
+ ports:
+ - name: clamavport
+ containerPort: 3310
+ protocol: TCP
+ livenessProbe:
+ tcpSocket:
+ port: clamavport
+ initialDelaySeconds: 300
+ readinessProbe:
+ tcpSocket:
+ port: clamavport
+ initialDelaySeconds: 300
+ resources:
+{{ toYaml .Values.resources | indent 12 }}
+ {{- with .Values.nodeSelector }}
+ nodeSelector:
+{{ toYaml . | indent 8 }}
+ {{- end }}
+ {{- with .Values.affinity }}
+ affinity:
+{{ toYaml . | indent 8 }}
+ {{- end }}
+ {{- with .Values.tolerations }}
+ tolerations:
+{{ toYaml . | indent 8 }}
+ {{- end }}
diff --git a/stable/clamav/templates/ingress.yaml b/stable/clamav/templates/ingress.yaml
new file mode 100644
index 000000000000..4956381f737c
--- /dev/null
+++ b/stable/clamav/templates/ingress.yaml
@@ -0,0 +1,38 @@
+{{- if .Values.ingress.enabled -}}
+{{- $fullName := include "clamav.fullname" . -}}
+{{- $ingressPath := .Values.ingress.path -}}
+apiVersion: extensions/v1beta1
+kind: Ingress
+metadata:
+ name: {{ $fullName }}
+ labels:
+ app.kubernetes.io/name: {{ include "clamav.name" . }}
+ helm.sh/chart: {{ include "clamav.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+{{- with .Values.ingress.annotations }}
+ annotations:
+{{ toYaml . | indent 4 }}
+{{- end }}
+spec:
+{{- if .Values.ingress.tls }}
+ tls:
+ {{- range .Values.ingress.tls }}
+ - hosts:
+ {{- range .hosts }}
+ - {{ . | quote }}
+ {{- end }}
+ secretName: {{ .secretName }}
+ {{- end }}
+{{- end }}
+ rules:
+ {{- range .Values.ingress.hosts }}
+ - host: {{ . | quote }}
+ http:
+ paths:
+ - path: {{ $ingressPath }}
+ backend:
+ serviceName: {{ $fullName }}
+ servicePort: http
+ {{- end }}
+{{- end }}
diff --git a/stable/clamav/templates/service.yaml b/stable/clamav/templates/service.yaml
new file mode 100644
index 000000000000..a23621bf7592
--- /dev/null
+++ b/stable/clamav/templates/service.yaml
@@ -0,0 +1,19 @@
+apiVersion: v1
+kind: Service
+metadata:
+ name: {{ include "clamav.fullname" . }}
+ labels:
+ app.kubernetes.io/name: {{ include "clamav.name" . }}
+ helm.sh/chart: {{ include "clamav.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+spec:
+ type: {{ .Values.service.type }}
+ ports:
+ - port: {{ .Values.service.port }}
+ targetPort: 3310
+ protocol: TCP
+ name: clamavport
+ selector:
+ app.kubernetes.io/name: {{ include "clamav.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
diff --git a/stable/clamav/values.yaml b/stable/clamav/values.yaml
new file mode 100644
index 000000000000..2334199bee64
--- /dev/null
+++ b/stable/clamav/values.yaml
@@ -0,0 +1,48 @@
+# Default values for ClamAV.
+# This is a YAML-formatted file.
+# Declare variables to be passed into your templates.
+
+replicaCount: 1
+
+image:
+ repository: mailu/clamav
+ tag: latest
+ pullPolicy: IfNotPresent
+
+nameOverride: ""
+fullnameOverride: ""
+
+service:
+ type: ClusterIP
+ port: 3310
+
+ingress:
+ enabled: false
+ annotations: {}
+ # kubernetes.io/ingress.class: nginx
+ # kubernetes.io/tls-acme: "true"
+ path: /
+ hosts:
+ - chart-example.local
+ tls: []
+ # - secretName: chart-example-tls
+ # hosts:
+ # - chart-example.local
+
+resources: {}
+ # We usually recommend not to specify default resources and to leave this as a conscious
+ # choice for the user. This also increases chances charts run on environments with little
+ # resources, such as Minikube. If you do want to specify resources, uncomment the following
+ # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
+ # limits:
+ # cpu: 100m
+ # memory: 128Mi
+ # requests:
+ # cpu: 100m
+ # memory: 128Mi
+
+nodeSelector: {}
+
+tolerations: []
+
+affinity: {}
diff --git a/stable/cloudserver/Chart.yaml b/stable/cloudserver/Chart.yaml
index f935c929c4be..7741a40481da 100644
--- a/stable/cloudserver/Chart.yaml
+++ b/stable/cloudserver/Chart.yaml
@@ -1,6 +1,7 @@
+apiVersion: v1
name: cloudserver
-version: 1.0.0
-appVersion: 8.1.1
+version: 1.0.2
+appVersion: 8.1.5
kubeVersion: "^1.8.0-0"
description: An open-source Node.js implementation of the Amazon S3 protocol on the front-end and backend storage capabilities to multiple clouds, including Azure and Google.
home: https://www.zenko.io/cloudserver/
diff --git a/stable/cloudserver/OWNERS b/stable/cloudserver/OWNERS
index fa510b0036ea..b1df4d8802eb 100644
--- a/stable/cloudserver/OWNERS
+++ b/stable/cloudserver/OWNERS
@@ -1,4 +1,6 @@
approvers:
- giacomoguiulfo
+- ssalaues
reviewers:
- giacomoguiulfo
+- ssalaues
diff --git a/stable/cloudserver/README.md b/stable/cloudserver/README.md
index be2caec8b43f..09a2c8271d20 100644
--- a/stable/cloudserver/README.md
+++ b/stable/cloudserver/README.md
@@ -45,7 +45,7 @@ Parameter | Description | Default
`serviceAccounts.localdata.create` | If true, create the cloudserver localdata service account | `true`
`serviceAccounts.localdata.name` | name of the cloudserver localdata service account to use or create | `{{ cloudserver.localdata.fullname }}`
`image.repository` | cloudserver image repository | `zenko/cloudserver`
-`image.tag` | cloudserver image tag | `8.1.1`
+`image.tag` | cloudserver image tag | `8.1.5`
`image.pullPolicy` | cloudserver image pullPolicy | `IfNotPresent`
`api.replicaCount` | number of api replicas | `1`
`api.locationConstraints` | cloudserver location constraint configuration | `{}`
diff --git a/stable/cloudserver/values.yaml b/stable/cloudserver/values.yaml
index 191a82acb307..2d4d4da28b6a 100644
--- a/stable/cloudserver/values.yaml
+++ b/stable/cloudserver/values.yaml
@@ -17,7 +17,7 @@ serviceAccounts:
##
image:
repository: zenko/cloudserver
- tag: 8.1.1
+ tag: 8.1.5
pullPolicy: IfNotPresent
## Configuration for the cloudserver api component
diff --git a/stable/cluster-autoscaler/Chart.yaml b/stable/cluster-autoscaler/Chart.yaml
index bbe57b23862b..4970c55c0018 100644
--- a/stable/cluster-autoscaler/Chart.yaml
+++ b/stable/cluster-autoscaler/Chart.yaml
@@ -2,7 +2,7 @@ apiVersion: v1
description: Scales worker nodes within autoscaling groups.
icon: https://github.com/kubernetes/kubernetes/blob/master/logo/logo.png
name: cluster-autoscaler
-version: 0.11.2
+version: 0.13.2
appVersion: 1.13.1
home: https://github.com/kubernetes/autoscaler
sources:
diff --git a/stable/cluster-autoscaler/README.md b/stable/cluster-autoscaler/README.md
index 9fa9a9518600..800fec934998 100644
--- a/stable/cluster-autoscaler/README.md
+++ b/stable/cluster-autoscaler/README.md
@@ -40,6 +40,8 @@ Auto-discovery finds ASGs tags as below and automatically manages them based on
1) tag the ASGs with _key_ `k8s.io/cluster-autoscaler/enabled` and _key_ `kubernetes.io/cluster/`
2) verify the [IAM Permissions](#iam)
3) set `autoDiscovery.clusterName=`
+4) set `awsRegion=`
+5) set `awsAccessKeyID=` and `awsSecretAccessKey=` if you want to [use AWS credentials directly instead of an instance role](https://github.com/kubernetes/autoscaler/blob/5ac706fdfa5601348f33d5b634e62de6655bb9bf/cluster-autoscaler/cloudprovider/aws/README.md#using-aws-credentials)
```console
$ helm install stable/cluster-autoscaler --name my-release --set autoDiscovery.clusterName=
@@ -76,7 +78,7 @@ In the event you want to explicitly specify MIGs instead of using auto-discovery
##### Required Parameters
- `cloudProvider=azure`
- `autoscalingGroups[0].name=your-agent-pool,autoscalingGroups[0].maxSize=10,autoscalingGroups[0].minSize=1`
-- `azureClientID: "your-service-principal-app-id"`
+- `azureClientID: "your-service-principal-app-id"`
- `azureClientSecret: "your-service-principal-client-secret"`
- `azureSubscriptionID: "your-azure-subscription-id"`
- `azureTenantID: "your-azure-tenant-id"`
@@ -121,10 +123,13 @@ Parameter | Description | Default
`autoscalingGroups[].maxSize` | maximum autoscaling group size | None. Required unless `autoDiscovery.enabled=true`
`autoscalingGroups[].minSize` | minimum autoscaling group size | None. Required unless `autoDiscovery.enabled=true`
`awsRegion` | AWS region (required if `cloudProvider=aws`) | `us-east-1`
+`awsAccessKeyID` | AWS access key ID ([if AWS user keys used](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md#using-aws-credentials)) | `""`
+`awsSecretAccessKey` | AWS access secret key ([if AWS user keys used](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md#using-aws-credentials)) | `""`
`autoscalingGroupsnamePrefix[].name` | GCE MIG name prefix (the full name is invalid) | None. Required for `cloudProvider=gce`
`autoscalingGroupsnamePrefix[].maxSize` | maximum MIG size | None. Required for `cloudProvider=gce`
`autoscalingGroupsnamePrefix[].minSize` | minimum MIG size | None. Required for `cloudProvider=gce`
-`sslCertPath` | Path on the host where ssl ca cert exists | `/etc/ssl/certs/ca-certificates.crt`
+`sslCertPath` | Path on the pod where ssl ca cert exists | `/etc/ssl/certs/ca-certificates.crt`
+`sslCertHostPath` | Path on the host where ssl ca cert exists | `/etc/ssl/certs/ca-certificates.crt`
`cloudProvider` | `aws` or `spotinst` are currently supported for AWS. `gce` for GCE. `azure` for Azure AKS | `aws`
`image.repository` | Image | `k8s.gcr.io/cluster-autoscaler`
`image.tag` | Image tag | `v1.13.1`
@@ -134,11 +139,13 @@ Parameter | Description | Default
`extraEnv` | additional container environment variables | `{}`
`nodeSelector` | node labels for pod assignment | `{}`
`podAnnotations` | annotations to add to each pod | `{}`
+`deployment.apiVersion` | apiVersion for the deployment | `extensions/v1beta1`
`rbac.create` | If true, create & use RBAC resources | `false`
`rbac.serviceAccountName` | existing ServiceAccount to use (ignored if rbac.create=true) | `default`
`rbac.pspEnabled` | Must be used with `rbac.create` true. If true, creates & uses RBAC resources required in the cluster with [Pod Security Policies](https://kubernetes.io/docs/concepts/policy/pod-security-policy/) enabled. | `false`
`replicaCount` | desired number of pods | `1`
`priorityClassName` | priorityClassName | `nil`
+`dnsPolicy` | dnsPolicy | `nil`
`resources` | pod resource requests & limits | `{}`
`service.annotations` | annotations to add to service | none
`service.clusterIP` | IP address to assign to service | `""`
@@ -166,6 +173,7 @@ Parameter | Description | Default
`azureResourceGroup` | Azure resource group that the cluster is located | none
`azureVMType: "AKS"` | Azure VM type | `AKS`
`azureNodeResourceGroup` | Azure resource group where the cluster's nodes are located, typically set as `MC___` | none
+`azureUseManagedIdentityExtension` | Whether to use Azure's managed identity extension for credentials | false
Specify each parameter you'd like to override using a YAML file as described above in the [installation](#installing-the-chart) section or by using the `--set key=value[,key=value]` argument to `helm install`. For example, to change the region and [expander](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-expanders):
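Combining the new AWS credential parameters with auto-discovery, an illustrative invocation (cluster name, region, and key values are placeholders; the keys end up base64-encoded in the chart's Secret):

```console
$ helm install stable/cluster-autoscaler --name my-release \
    --set autoDiscovery.clusterName=my-cluster \
    --set awsRegion=us-east-1 \
    --set awsAccessKeyID=AKIA-EXAMPLE \
    --set awsSecretAccessKey=secret-example
```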
diff --git a/stable/cluster-autoscaler/templates/clusterrole.yaml b/stable/cluster-autoscaler/templates/clusterrole.yaml
index f640bc3bbc0e..814747b42777 100644
--- a/stable/cluster-autoscaler/templates/clusterrole.yaml
+++ b/stable/cluster-autoscaler/templates/clusterrole.yaml
@@ -87,6 +87,7 @@ rules:
- apiGroups:
- apps
resources:
+ - daemonsets
- replicasets
- statefulsets
verbs:
diff --git a/stable/cluster-autoscaler/templates/deployment.yaml b/stable/cluster-autoscaler/templates/deployment.yaml
index bff41e3e4de5..5fe71014723f 100644
--- a/stable/cluster-autoscaler/templates/deployment.yaml
+++ b/stable/cluster-autoscaler/templates/deployment.yaml
@@ -1,6 +1,6 @@
{{- if or .Values.autoDiscovery.clusterName .Values.autoscalingGroups }}
{{/* one of the above is required */}}
-apiVersion: extensions/v1beta1
+apiVersion: {{ .Values.deployment.apiVersion }}
kind: Deployment
metadata:
labels:
@@ -11,6 +11,13 @@ metadata:
name: {{ template "cluster-autoscaler.fullname" . }}
spec:
replicas: {{ .Values.replicaCount }}
+ selector:
+ matchLabels:
+ app: {{ template "cluster-autoscaler.name" . }}
+ release: {{ .Release.Name }}
+ {{- if .Values.podLabels }}
+{{ toYaml .Values.podLabels | indent 8 }}
+ {{- end }}
template:
metadata:
{{- if .Values.podAnnotations }}
@@ -27,6 +34,9 @@ spec:
{{- if .Values.priorityClassName }}
priorityClassName: "{{ .Values.priorityClassName }}"
{{- end }}
+ {{- if .Values.dnsPolicy }}
+ dnsPolicy: "{{ .Values.dnsPolicy }}"
+ {{- end }}
containers:
- name: {{ template "cluster-autoscaler.name" . }}
{{- if eq .Values.cloudProvider "spotinst" }}
@@ -65,9 +75,23 @@ spec:
{{- end }}
env:
- {{- if eq .Values.cloudProvider "aws" }}
+ {{- if and (eq .Values.cloudProvider "aws") (ne .Values.awsRegion "") }}
- name: AWS_REGION
value: "{{ .Values.awsRegion }}"
+ {{- if .Values.awsAccessKeyID }}
+ - name: AWS_ACCESS_KEY_ID
+ valueFrom:
+ secretKeyRef:
+ key: AwsAccessKeyId
+ name: {{ template "cluster-autoscaler.fullname" . }}
+ {{- end }}
+ {{- if .Values.awsSecretAccessKey }}
+ - name: AWS_SECRET_ACCESS_KEY
+ valueFrom:
+ secretKeyRef:
+ key: AwsSecretAccessKey
+ name: {{ template "cluster-autoscaler.fullname" . }}
+ {{- end }}
{{- else if eq .Values.cloudProvider "spotinst" }}
- name: SPOTINST_TOKEN
value: "{{ .Values.spotinst.token }}"
@@ -84,6 +108,15 @@ spec:
secretKeyRef:
key: ResourceGroup
name: {{ template "cluster-autoscaler.fullname" . }}
+ - name: ARM_VM_TYPE
+ valueFrom:
+ secretKeyRef:
+ key: VMType
+ name: {{ template "cluster-autoscaler.fullname" . }}
+ {{- if .Values.azureUseManagedIdentityExtension }}
+ - name: ARM_USE_MANAGED_IDENTITY_EXTENSION
+ value: "true"
+ {{- else }}
- name: ARM_TENANT_ID
valueFrom:
secretKeyRef:
@@ -99,11 +132,6 @@ spec:
secretKeyRef:
key: ClientSecret
name: {{ template "cluster-autoscaler.fullname" . }}
- - name: ARM_VM_TYPE
- valueFrom:
- secretKeyRef:
- key: VMType
- name: {{ template "cluster-autoscaler.fullname" . }}
- name: AZURE_CLUSTER_NAME
valueFrom:
secretKeyRef:
@@ -114,6 +142,7 @@ spec:
secretKeyRef:
key: NodeResourceGroup
name: {{ template "cluster-autoscaler.fullname" . }}
+ {{- end }}
{{- end }}
{{- range $key, $value := .Values.extraEnv }}
- name: {{ $key }}
@@ -150,10 +179,10 @@ spec:
volumes:
- name: ssl-certs
hostPath:
- path: {{ .Values.sslCertPath }}
+ path: {{ .Values.sslCertHostPath }}
{{- if eq .Values.cloudProvider "gce" }}
- name: cloudconfig
hostPath:
path: {{ .Values.cloudConfigPath }}
{{- end }}
-{{- end}}
+{{- end }}
diff --git a/stable/cluster-autoscaler/templates/secret.yaml b/stable/cluster-autoscaler/templates/secret.yaml
index 6f32c9fc3b97..fd8343dd610b 100644
--- a/stable/cluster-autoscaler/templates/secret.yaml
+++ b/stable/cluster-autoscaler/templates/secret.yaml
@@ -1,9 +1,10 @@
-{{- if eq .Values.cloudProvider "azure" }}
+{{- if or (eq .Values.cloudProvider "azure") (eq .Values.cloudProvider "aws") }}
apiVersion: v1
kind: Secret
metadata:
name: {{ template "cluster-autoscaler.fullname" . }}
data:
+{{- if eq .Values.cloudProvider "azure" }}
ClientID: "{{ .Values.azureClientID | b64enc }}"
ClientSecret: "{{ .Values.azureClientSecret | b64enc }}"
ResourceGroup: "{{ .Values.azureResourceGroup | b64enc }}"
@@ -12,4 +13,8 @@ data:
VMType: "{{ .Values.azureVMType | b64enc }}"
ClusterName: "{{ .Values.azureClusterName | b64enc }}"
NodeResourceGroup: "{{ .Values.azureNodeResourceGroup | b64enc }}"
-{{- end }}
\ No newline at end of file
+{{- else if eq .Values.cloudProvider "aws" }}
+ AwsAccessKeyId: "{{ .Values.awsAccessKeyID | b64enc }}"
+ AwsSecretAccessKey: "{{ .Values.awsSecretAccessKey | b64enc }}"
+{{- end }}
+{{- end }}
diff --git a/stable/cluster-autoscaler/templates/service.yaml b/stable/cluster-autoscaler/templates/service.yaml
index 94ef0508ad7f..d64d1248e7e1 100644
--- a/stable/cluster-autoscaler/templates/service.yaml
+++ b/stable/cluster-autoscaler/templates/service.yaml
@@ -12,7 +12,9 @@ metadata:
release: {{ .Release.Name }}
name: {{ template "cluster-autoscaler.fullname" . }}
spec:
+{{- if .Values.service.clusterIP }}
clusterIP: "{{ .Values.service.clusterIP }}"
+{{- end }}
{{- if .Values.service.externalIPs }}
externalIPs:
{{ toYaml .Values.service.externalIPs | indent 4 }}
diff --git a/stable/cluster-autoscaler/values.yaml b/stable/cluster-autoscaler/values.yaml
index 77995348db99..15c5e84c1757 100644
--- a/stable/cluster-autoscaler/values.yaml
+++ b/stable/cluster-autoscaler/values.yaml
@@ -23,6 +23,8 @@ autoscalingGroupsnamePrefix: []
# Required if cloudProvider=aws
awsRegion: us-east-1
+awsAccessKeyID: ""
+awsSecretAccessKey: ""
# Required if cloudProvider=azure
# clientID/ClientSecret with contributor permission to Cluster and Node ResourceGroup
@@ -36,11 +38,14 @@ azureTenantID: ""
azureVMType: "AKS"
azureClusterName: ""
azureNodeResourceGroup: ""
+# if using MSI, ensure subscription ID and resource group are set
+azureUseManagedIdentityExtension: false
# Currently only `gce`, `aws`, `azure` & `spotinst` are supported
cloudProvider: aws
sslCertPath: /etc/ssl/certs/ca-certificates.crt
+sslCertHostPath: /etc/ssl/certs/ca-certificates.crt
# Configuration file for cloud provider
cloudConfigPath: /etc/gce.conf
@@ -88,6 +93,9 @@ podAnnotations: {}
podLabels: {}
replicaCount: 1
+deployment:
+ apiVersion: "extensions/v1beta1"
+
rbac:
## If true, create & use RBAC resources
##
@@ -109,6 +117,11 @@ resources: {}
priorityClassName: ""
+# Defaults to "ClusterFirst". Valid values are
+# 'ClusterFirstWithHostNet', 'ClusterFirst', 'Default' or 'None'
+# autoscaler does not depend on cluster DNS, recommended to set this to "Default"
+# dnsPolicy: "Default"
+
service:
annotations: {}
clusterIP: ""
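A hypothetical values fragment combining the options added above (region and key values are placeholders; `apps/v1` assumes a cluster where Deployment is no longer served from `extensions/v1beta1`):

```yaml
deployment:
  # default remains extensions/v1beta1; override on newer clusters
  apiVersion: "apps/v1"
# autoscaler does not depend on cluster DNS, so Default is recommended
dnsPolicy: "Default"
awsRegion: eu-west-1                  # placeholder
awsAccessKeyID: "AKIA-EXAMPLE"        # placeholder; stored in the chart's Secret
awsSecretAccessKey: "secret-example"  # placeholder; stored in the chart's Secret
```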
diff --git a/stable/cluster-overprovisioner/Chart.yaml b/stable/cluster-overprovisioner/Chart.yaml
index b2809168dece..db901dd7eec3 100644
--- a/stable/cluster-overprovisioner/Chart.yaml
+++ b/stable/cluster-overprovisioner/Chart.yaml
@@ -3,7 +3,7 @@ appVersion: "1.0"
description: Installs a deployment that overprovisions the cluster
name: cluster-overprovisioner
home: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler
-version: 0.1.0
+version: 0.2.0
maintainers:
- name: max-rocket-internet
email: max.williams@deliveryhero.com
diff --git a/stable/cluster-overprovisioner/README.md b/stable/cluster-overprovisioner/README.md
index 7d5d0feb6139..1ec3394bc4ae 100644
--- a/stable/cluster-overprovisioner/README.md
+++ b/stable/cluster-overprovisioner/README.md
@@ -37,6 +37,7 @@ The following table lists the configurable parameters for this chart and their d
| -----------------------------------|---------------------------------------------------|-------------------|
| `priorityClassOverprovision.name` | Name of the overprovision priorityClass | `overprovision` |
| `priorityClassOverprovision.value` | Priority value of the overprovision priorityClass | `-1` |
+| `priorityClassDefault.enabled` | If true, enable default priorityClass | `true` |
| `priorityClassDefault.name` | Name of the default priorityClass | `default` |
| `priorityClassDefault.value` | Priority value of the default priorityClass | `0` |
| `replicaCount` | Number of replicas | `1` |
diff --git a/stable/cluster-overprovisioner/templates/priorityclass-default.yaml b/stable/cluster-overprovisioner/templates/priorityclass-default.yaml
index 88314e62b054..9a5975230900 100644
--- a/stable/cluster-overprovisioner/templates/priorityclass-default.yaml
+++ b/stable/cluster-overprovisioner/templates/priorityclass-default.yaml
@@ -1,3 +1,4 @@
+{{- if .Values.priorityClassDefault.enabled }}
apiVersion: scheduling.k8s.io/{{ template "PriorityClass.apiVersion" . }}
kind: PriorityClass
metadata:
@@ -10,3 +11,4 @@ metadata:
value: {{ .Values.priorityClassDefault.value }}
globalDefault: true
description: "Default priority class for all pods"
+{{- end }}
diff --git a/stable/cluster-overprovisioner/values.yaml b/stable/cluster-overprovisioner/values.yaml
index 0113bfb2a301..43ae57d733b6 100644
--- a/stable/cluster-overprovisioner/values.yaml
+++ b/stable/cluster-overprovisioner/values.yaml
@@ -3,6 +3,7 @@ priorityClassOverprovision:
value: -1
priorityClassDefault:
+ enabled: true
name: default
value: 0
diff --git a/stable/cockroachdb/Chart.yaml b/stable/cockroachdb/Chart.yaml
index bf7db185c594..bfe8f9431375 100755
--- a/stable/cockroachdb/Chart.yaml
+++ b/stable/cockroachdb/Chart.yaml
@@ -1,11 +1,18 @@
+apiVersion: v1
name: cockroachdb
home: https://www.cockroachlabs.com
-version: 2.0.10
-appVersion: 2.1.4
+version: 2.1.8
+appVersion: 19.1.0
description: CockroachDB is a scalable, survivable, strongly-consistent SQL database.
icon: https://raw.githubusercontent.com/cockroachdb/cockroach/master/docs/media/cockroach_db.png
sources:
- https://github.com/cockroachdb/cockroach
maintainers:
- name: a-robinson
- email: alex@cockroachlabs.com
+ email: alexdwanerobinson@gmail.com
+- name: DuskEagle
+ email: Joel.A.Kenny@gmail.com
+- name: joshimhoff
+ email: joshimhoff13@gmail.com
+- name: keith-mcclellan
+ email: keith.mcclellan@gmail.com
diff --git a/stable/cockroachdb/OWNERS b/stable/cockroachdb/OWNERS
index c37fb25ebea6..8ed64f3c17c5 100644
--- a/stable/cockroachdb/OWNERS
+++ b/stable/cockroachdb/OWNERS
@@ -1,4 +1,10 @@
approvers:
- a-robinson
+- DuskEagle
+- joshimhoff
+- keith-mcclellan
reviewers:
- a-robinson
+- DuskEagle
+- joshimhoff
+- keith-mcclellan
diff --git a/stable/cockroachdb/README.md b/stable/cockroachdb/README.md
index a4e43eebdf1d..da11c09f71aa 100644
--- a/stable/cockroachdb/README.md
+++ b/stable/cockroachdb/README.md
@@ -1,8 +1,11 @@
# CockroachDB Helm Chart
+## Documentation
+Below is a brief overview of operating the CockroachDB Helm Chart and some specific implementation details. For additional information, please see https://www.cockroachlabs.com/docs/v19.1/orchestrate-cockroachdb-with-kubernetes-insecure.html
+
## Prerequisites Details
* Kubernetes 1.8
-* PV support on the underlying infrastructure
+* PV support on the underlying infrastructure. [Docker for windows hostpath provisioner is not supported](https://github.com/cockroachdb/docs/issues/3184).
* If you want to secure your cluster to use TLS certificates for all network
communication, [Helm must be installed with RBAC
privileges](https://github.com/kubernetes/helm/blob/master/docs/rbac.md)
@@ -43,7 +46,102 @@ certificate for each node (e.g. `default.node.eerie-horse-cockroachdb-0` and
one client certificate for the job that initializes the cluster (e.g.
`default.node.root`).
+Confirm that all three pods are `Running` and that the init job has completed:
+
+```shell
+kubectl get pods
+```
+```
+NAME READY STATUS RESTARTS AGE
+my-release-cockroachdb-0 1/1 Running 0 1m
+my-release-cockroachdb-1 1/1 Running 0 1m
+my-release-cockroachdb-2 1/1 Running 0 1m
+my-release-cockroachdb-init-k6jcr 0/1 Completed 0 1m
+```
+
+Confirm that persistent volumes are created and claimed for each pod:
+```shell
+kubectl get persistentvolumes
+```
+```
+NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
+pvc-64878ebf-f3f0-11e8-ab5b-42010a8e0035 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-0 standard 51s
+pvc-64945b4f-f3f0-11e8-ab5b-42010a8e0035 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-1 standard 51s
+pvc-649d920d-f3f0-11e8-ab5b-42010a8e0035 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-2 standard 51s
+```
## Upgrading
+### From 2.0.0 on
+Launch a temporary interactive pod and start the built-in SQL client:
+
+```shell
+kubectl run cockroachdb -it \
+--image=cockroachdb/cockroach \
+--rm \
+--restart=Never \
+-- sql \
+--insecure \
+--host=my-release-cockroachdb-public
+```
+
+Set the `cluster.preserve_downgrade_option` cluster setting, where `$current_version` is the version of CockroachDB currently running (e.g. `2.1`):
+```sql
+> SET CLUSTER SETTING cluster.preserve_downgrade_option = '$current_version';
+```
+
+Exit the SQL shell; the `--rm` flag deletes the temporary pod on exit:
+```sql
+> \q
+```
+
+Kick off the upgrade process by deleting the old init job and switching to the new Docker image, where `$new_version` is the version being upgraded to:
+
+```shell
+kubectl delete job my-release-cockroachdb-init
+```
+```shell
+helm upgrade \
+my-release \
+stable/cockroachdb \
+--set ImageTag=$new_version \
+--reuse-values
+```
+Monitor the cluster's pods until all have been successfully restarted:
+
+```shell
+kubectl get pods
+```
+```
+NAME READY STATUS RESTARTS AGE
+my-release-cockroachdb-0 1/1 Running 0 2m
+my-release-cockroachdb-1 1/1 Running 0 3m
+my-release-cockroachdb-2 1/1 Running 0 3m
+my-release-cockroachdb-3 0/1 ContainerCreating 0 25s
+my-release-cockroachdb-init-nwjkh 0/1 ContainerCreating 0 6s
+```
+```shell
+kubectl get pods \
+-o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'
+```
+```
+my-release-cockroachdb-0 cockroachdb/cockroach:v19.1.1
+my-release-cockroachdb-1 cockroachdb/cockroach:v19.1.1
+my-release-cockroachdb-2 cockroachdb/cockroach:v19.1.1
+my-release-cockroachdb-3 cockroachdb/cockroach:v19.1.1
+```
+
+Resume normal operations. Once you are comfortable that the stability and performance of the cluster are what you'd expect post-upgrade, finalize the upgrade by running the following:
+
+```shell
+kubectl run cockroachdb -it \
+--image=cockroachdb/cockroach \
+--rm \
+--restart=Never \
+-- sql \
+--insecure \
+--host=my-release-cockroachdb-public
+```
+```sql
+> RESET CLUSTER SETTING cluster.preserve_downgrade_option;
+> \q
+```
### To 2.0.0
Due to having no explicit selector set for the StatefulSet before version 2.0.0 of
this chart, upgrading from any version that uses a version of kubernetes that locks
@@ -69,7 +167,7 @@ The following table lists the configurable parameters of the CockroachDB chart a
| ------------------------------ | ------------------------------------------------ | ----------------------------------------- |
| `Name` | Chart name | `cockroachdb` |
| `Image` | Container image name | `cockroachdb/cockroach` |
-| `ImageTag` | Container image tag | `v2.1.4` |
+| `ImageTag` | Container image tag | `v19.1.0` |
| `ImagePullPolicy` | Container pull policy | `Always` |
| `Replicas` | k8s statefulset replicas | `3` |
| `MaxUnavailable` | k8s PodDisruptionBudget parameter | `1` |
@@ -102,6 +200,11 @@ The following table lists the configurable parameters of the CockroachDB chart a
| `Secure.ServiceAccount.Name` | Name of RBAC service account to use | `""` |
| `JoinExisting` | List of already-existing cockroach instances | `[]` |
| `Locality` | Locality attribute for this deployment | `""` |
+| `ExtraArgs`                    | Additional command-line arguments                | `[]`                                      |
+| `ExtraSecretMounts`            | Additional secrets to mount on cluster members   | `[]`                                      |
+| `ExtraEnvArgs`                 | Additional environment variables                 | `[]`                                      |
+| `ExtraAnnotations`             | Additional annotations for the StatefulSet pods  | `{}`                                      |
+| `ExtraInitAnnotations`         | Additional annotations for the init pod          | `{}`                                      |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`.
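
As a sketch, the new `Extra*` values can also be combined in an override file. All concrete names below (the extra flag and the secret name) are illustrative, not defaults:

```yaml
# my-values.yaml -- illustrative values only
ExtraArgs:
  - --vmodule=raft=1            # hypothetical extra flag for `cockroach start`
ExtraSecretMounts:
  - my-backup-creds             # mounted at /etc/cockroach/secrets/my-backup-creds
ExtraEnvArgs:
  - name: COCKROACH_ENGINE_MAX_SYNC_DURATION
    value: "24h"
ExtraAnnotations:
  prometheus.io/scrape: "true"
```

Install it with `helm install --name my-release -f my-values.yaml stable/cockroachdb`.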
diff --git a/stable/cockroachdb/templates/cluster-init.yaml b/stable/cockroachdb/templates/cluster-init.yaml
index 70dc4c6b14de..1f63bae9d55c 100644
--- a/stable/cockroachdb/templates/cluster-init.yaml
+++ b/stable/cockroachdb/templates/cluster-init.yaml
@@ -8,10 +8,14 @@ metadata:
chart: "{{.Chart.Name}}-{{.Chart.Version}}"
spec:
template:
-{{- if and (.Values.NetworkPolicy.Enabled) (not .Values.NetworkPolicy.AllowExternal) }}
metadata:
+{{- if and (.Values.NetworkPolicy.Enabled) (not .Values.NetworkPolicy.AllowExternal) }}
labels:
{{.Release.Name}}-{{.Values.Component}}-client: "true"
+{{- end }}
+{{- if .Values.ExtraInitAnnotations }}
+ annotations:
+{{ toYaml .Values.ExtraInitAnnotations | indent 8 }}
{{- end }}
spec:
{{- if .Values.Secure.Enabled }}
diff --git a/stable/cockroachdb/templates/cockroachdb-statefulset.yaml b/stable/cockroachdb/templates/cockroachdb-statefulset.yaml
index b7102db8959b..f1ab4432af66 100644
--- a/stable/cockroachdb/templates/cockroachdb-statefulset.yaml
+++ b/stable/cockroachdb/templates/cockroachdb-statefulset.yaml
@@ -192,6 +192,10 @@ spec:
component: "{{ .Release.Name }}-{{ .Values.Component }}"
template:
metadata:
+{{- if .Values.ExtraAnnotations }}
+ annotations:
+{{ toYaml .Values.ExtraAnnotations | indent 8 }}
+{{- end }}
labels:
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
@@ -283,19 +287,27 @@ spec:
value: "{{ printf "%s-%s" .Release.Name .Values.Name | trunc 56 }}.{{ .Release.Namespace }}.svc.{{ .Values.ClusterDomain }}"
- name: COCKROACH_CHANNEL
value: kubernetes-helm
+{{- if .Values.ExtraEnvArgs }}
+{{ toYaml .Values.ExtraEnvArgs | indent 8 }}
+{{- end }}
volumeMounts:
- name: datadir
mountPath: /cockroach/cockroach-data
{{- if .Values.Secure.Enabled }}
- name: certs
mountPath: /cockroach/cockroach-certs
+{{- end }}
+{{- range .Values.ExtraSecretMounts }}
+ - name: extra-secret-{{ . }}
+ mountPath: /etc/cockroach/secrets/{{ . }}
+ readOnly: true
{{- end }}
command:
- "/bin/bash"
- "-ecx"
# The use of qualified `hostname -f` is crucial:
# Other nodes aren't able to look up the unqualified hostname.
- - "exec /cockroach/cockroach start --logtostderr {{ if .Values.Secure.Enabled }}--certs-dir /cockroach/cockroach-certs{{ else }}--insecure{{ end }} --advertise-host $(hostname).${STATEFULSET_FQDN} --http-host 0.0.0.0 --http-port {{ .Values.InternalHttpPort }} --port {{ .Values.InternalGrpcPort }} --cache {{ .Values.CacheSize }} --max-sql-memory {{ .Values.MaxSQLMemory }} {{ if .Values.Locality }}--locality={{.Values.Locality }}{{ end }} --join {{ if .Values.JoinExisting }}{{ join "," .Values.JoinExisting }}{{ else }}${STATEFULSET_NAME}-0.${STATEFULSET_FQDN}:{{ .Values.InternalGrpcPort }},${STATEFULSET_NAME}-1.${STATEFULSET_FQDN}:{{ .Values.InternalGrpcPort }},${STATEFULSET_NAME}-2.${STATEFULSET_FQDN}:{{ .Values.InternalGrpcPort }}{{ end }}"
+ - "exec /cockroach/cockroach start --logtostderr {{ if .Values.Secure.Enabled }}--certs-dir /cockroach/cockroach-certs{{ else }}--insecure{{ end }} --advertise-host $(hostname).${STATEFULSET_FQDN} --http-host 0.0.0.0 --http-port {{ .Values.InternalHttpPort }} --port {{ .Values.InternalGrpcPort }} --cache {{ .Values.CacheSize }} --max-sql-memory {{ .Values.MaxSQLMemory }} {{ if .Values.Locality }}--locality={{.Values.Locality }}{{ end }} --join {{ if .Values.JoinExisting }}{{ join "," .Values.JoinExisting }}{{ else }}${STATEFULSET_NAME}-0.${STATEFULSET_FQDN}:{{ .Values.InternalGrpcPort }},${STATEFULSET_NAME}-1.${STATEFULSET_FQDN}:{{ .Values.InternalGrpcPort }},${STATEFULSET_NAME}-2.${STATEFULSET_FQDN}:{{ .Values.InternalGrpcPort }}{{ end }}{{ range .Values.ExtraArgs }} {{ . }}{{ end }}"
# No pre-stop hook is required, a SIGTERM plus some time is all that's
# needed for graceful shutdown of a node.
terminationGracePeriodSeconds: 60
@@ -306,6 +318,11 @@ spec:
{{- if .Values.Secure.Enabled }}
- name: certs
emptyDir: {}
+{{- end }}
+{{- range .Values.ExtraSecretMounts }}
+ - name: extra-secret-{{ . }}
+ secret:
+ secretName: {{ . }}
{{- end }}
podManagementPolicy: {{ .Values.PodManagementPolicy }}
updateStrategy:
diff --git a/stable/cockroachdb/values.yaml b/stable/cockroachdb/values.yaml
index 7ee05aabba7a..e1edb228b595 100644
--- a/stable/cockroachdb/values.yaml
+++ b/stable/cockroachdb/values.yaml
@@ -5,7 +5,7 @@
Name: "cockroachdb"
Image: "cockroachdb/cockroach"
-ImageTag: "v2.1.4"
+ImageTag: "v19.1.0"
ImagePullPolicy: "Always"
Replicas: 3
MaxUnavailable: 1
@@ -68,3 +68,23 @@ Secure:
JoinExisting: []
# Set a locality (e.g. "region=us-central1,datacenter=us-centra1-a") if you're doing multi-cluster so data is distributed properly
Locality: ""
+# Additional command-line arguments you want to pass to the `cockroach start` commands
+ExtraArgs: []
+# ExtraSecretMounts is a list of names of secrets in the same namespace as the
+# cockroachdb cluster; each is mounted into /etc/cockroach/secrets/<name> on
+# every cluster member.
+ExtraSecretMounts: []
+# ExtraEnvArgs is a list of name,value tuples providing extra ENV variables.
+# e.g.:
+# ExtraEnvArgs:
+# - name: COCKROACH_ENGINE_MAX_SYNC_DURATION
+# value: "24h"
+ExtraEnvArgs: []
+# ExtraAnnotations is an object to provide additional annotations to the Statefulset
+# e.g.:
+# ExtraAnnotations:
+# key: values
+ExtraAnnotations: {}
+# ExtraInitAnnotations is an object to provide additional annotations to the ClusterInit Pod
+# e.g.:
+# ExtraInitAnnotations:
+# key: values
+ExtraInitAnnotations: {}
diff --git a/stable/collabora-code/.helmignore b/stable/collabora-code/.helmignore
new file mode 100644
index 000000000000..50af03172541
--- /dev/null
+++ b/stable/collabora-code/.helmignore
@@ -0,0 +1,22 @@
+# Patterns to ignore when building packages.
+# This supports shell glob matching, relative path matching, and
+# negation (prefixed with !). Only one pattern per line.
+.DS_Store
+# Common VCS dirs
+.git/
+.gitignore
+.bzr/
+.bzrignore
+.hg/
+.hgignore
+.svn/
+# Common backup files
+*.swp
+*.bak
+*.tmp
+*~
+# Various IDEs
+.project
+.idea/
+*.tmproj
+.vscode/
diff --git a/stable/collabora-code/Chart.yaml b/stable/collabora-code/Chart.yaml
new file mode 100644
index 000000000000..c906dcdb2dad
--- /dev/null
+++ b/stable/collabora-code/Chart.yaml
@@ -0,0 +1,12 @@
+apiVersion: v1
+appVersion: "4.0.3.1"
+description: A Helm chart for Collabora Office - CODE-Edition
+name: collabora-code
+version: 1.0.2
+icon: https://avatars0.githubusercontent.com/u/22418908?s=200&v=4
+sources:
+- https://github.com/CollaboraOnline/Docker-CODE
+maintainers:
+- name: Christian
+ email: christian.ingenhaag@googlemail.com
+home: https://www.collaboraoffice.com/code/
diff --git a/stable/collabora-code/OWNERS b/stable/collabora-code/OWNERS
new file mode 100644
index 000000000000..0067073415d8
--- /dev/null
+++ b/stable/collabora-code/OWNERS
@@ -0,0 +1,4 @@
+approvers:
+- chrisingenhaag
+reviewers:
+- chrisingenhaag
\ No newline at end of file
diff --git a/stable/collabora-code/README.md b/stable/collabora-code/README.md
new file mode 100644
index 000000000000..645c92edda65
--- /dev/null
+++ b/stable/collabora-code/README.md
@@ -0,0 +1,99 @@
+# Collabora CODE
+
+[Collabora](https://www.collaboraoffice.com/code/) is an online office suite.
+
+## Introduction
+
+This chart creates a single Collabora CODE Pod to run the Collabora CODE suite, for example as an integration with Nextcloud. Installation is based on the Docker documentation: [CollaboraDocker](https://www.collaboraoffice.com/code/docker/).
+
+For the easiest integration, it's recommended to use cert-manager together with your favorite ingress controller to get a fully working, SSL-terminated Collabora CODE server.
+
+## Prerequisites
+
+- Kubernetes 1.9+ with Beta APIs enabled
+
+## Installing the Chart
+
+To install the chart with the release name `my-release`, run:
+
+```bash
+$ helm install --name my-release stable/collabora-code
+```
+
+This command deploys a Collabora Online Development Edition server.
+
+> **Tip**: List all releases using `helm list`
+
+## Uninstalling the Chart
+
+To uninstall/delete the `my-release` deployment:
+
+```bash
+$ helm delete my-release
+```
+
+The command removes all the Kubernetes components associated with the chart and deletes the release.
+
+## Configuration
+
+Refer to [values.yaml](values.yaml) for the full run-down on defaults. These are a mixture of Kubernetes and Collabora-related directives that map to environment variables in the [CollaboraCODEDocker](https://github.com/CollaboraOnline/Docker-CODE) Docker image.
+
+Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
+
+```bash
+$ helm install --name my-release \
+  --set varname=true stable/collabora-code
+```
+
+Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,
+
+```bash
+$ helm install --name my-release -f values.yaml stable/collabora-code
+```
+
+> **Tip**: You can use the default [values.yaml](values.yaml)
+
+The following table lists the configurable parameters of this chart and their default values.
+
+| Parameter | Description | Default |
+| ------------------------------------------------- | ------------------------------------------------------------- | ----------------------------------------------------------- |
+| `replicaCount`                                    | Number of Collabora CODE instances to deploy                  | `1`                                                         |
+| `strategy`                                        | Strategy used to replace old Pods with new ones               | `Recreate`                                                  |
+| `image.repository`                                | Collabora CODE image repository                               | `collabora/code`                                            |
+| `image.tag`                                       | Collabora CODE image tag                                      | `4.0.3.1`                                                   |
+| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
+| `collabora.DONT_GEN_SSL_CERT`                     | Skip generation of a self-signed SSL certificate              | `true`                                                      |
+| `collabora.domain` | Double escaped WOPI host | `wopihost\\.domain` |
+| `collabora.extra_params` | List of params to use as env var | `--o:ssl.termination=true --o:ssl.enable=false` |
+| `collabora.server_name` | Collabora server name (single escaped) | `collabora\.domain` |
+| `collabora.password` | Collabora admin panel pass | `examplepass` |
+| `collabora.username` | Collabora admin panel user | `admin` |
+| `collabora.dictionaries` | Collabora enabled dictionaries | `de_DE en_GB en_US es_ES fr_FR it nl pt_BR pt_PT ru` |
+| `ingress.enabled`                                 | Enable an ingress resource                                    | `false`                                                     |
+| `ingress.annotations`                             | Ingress annotations                                           | `{}`                                                        |
+| `ingress.paths`                                   | Ingress paths                                                 | `[]`                                                        |
+| `ingress.hosts`                                   | Ingress hostnames                                             | `[]`                                                        |
+| `ingress.tls`                                     | Ingress TLS configuration                                     | `[]`                                                        |
+| `livenessProbe.enabled` | Turn on and off liveness probe | `true` |
+| `livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | `30` |
+| `livenessProbe.periodSeconds` | How often to perform the probe | `10` |
+| `livenessProbe.timeoutSeconds` | When the probe times out | `2` |
+| `livenessProbe.successThreshold` | Minimum consecutive successes for the probe | `1` |
+| `livenessProbe.failureThreshold` | Minimum consecutive failures for the probe | `3` |
+| `readinessProbe.enabled` | Turn on and off readiness probe | `true` |
+| `readinessProbe.initialDelaySeconds` | Delay before readiness probe is initiated | `30` |
+| `readinessProbe.periodSeconds` | How often to perform the probe | `10` |
+| `readinessProbe.timeoutSeconds` | When the probe times out | `2` |
+| `readinessProbe.successThreshold` | Minimum consecutive successes for the probe | `1` |
+| `readinessProbe.failureThreshold` | Minimum consecutive failures for the probe | `3` |
+| `securitycontext.allowPrivilegeEscalation`        | Allow privilege escalation for the container                  | `true`                                                      |
+| `securitycontext.capabilities.add`                | Collabora needs to run with the MKNOD capability              | `[MKNOD]`                                                   |
+| `resources` | Resources required (e.g. CPU, memory) | `{}` |
+| `nodeSelector` | Node labels for pod assignment | `{}` |
+| `affinity` | Affinity settings | `{}` |
+| `tolerations` | List of node taints to tolerate | `[]` |
+
+
+## Persistence
+
+There is no need for persistent storage to run Collabora CODE. All parameters in `/etc/loolwsd/loolwsd.xml` can be adjusted using the `extra_params` environment variable.
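
As an example, a values override enabling the ingress might look like the sketch below. The hostnames and the issuer annotation are illustrative; the exact annotation names depend on your ingress controller and cert-manager version:

```yaml
# illustrative override -- replace hostnames and issuer with your own
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod   # annotation name varies by cert-manager version
  paths:
    - /
  hosts:
    - collabora.domain
  tls:
    - secretName: collabora-tls
      hosts:
        - collabora.domain
```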
diff --git a/stable/collabora-code/templates/NOTES.txt b/stable/collabora-code/templates/NOTES.txt
new file mode 100644
index 000000000000..df2acd7afd75
--- /dev/null
+++ b/stable/collabora-code/templates/NOTES.txt
@@ -0,0 +1,21 @@
+1. Get the application URL by running these commands:
+{{- if .Values.ingress.enabled }}
+{{- range $host := .Values.ingress.hosts }}
+ {{- range $.Values.ingress.paths }}
+ http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host }}{{ . }}
+ {{- end }}
+{{- end }}
+{{- else if contains "NodePort" .Values.service.type }}
+ export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "collabora-code.fullname" . }})
+ export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
+ echo http://$NODE_IP:$NODE_PORT
+{{- else if contains "LoadBalancer" .Values.service.type }}
+ NOTE: It may take a few minutes for the LoadBalancer IP to be available.
+           You can watch the status by running 'kubectl get svc -w {{ include "collabora-code.fullname" . }}'
+ export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "collabora-code.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
+ echo http://$SERVICE_IP:{{ .Values.service.port }}
+{{- else if contains "ClusterIP" .Values.service.type }}
+ export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "collabora-code.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
+ echo "Visit http://127.0.0.1:9980 to use your application"
+ kubectl port-forward $POD_NAME 9980:9980
+{{- end }}
diff --git a/stable/collabora-code/templates/_helpers.tpl b/stable/collabora-code/templates/_helpers.tpl
new file mode 100644
index 000000000000..88fcce47c073
--- /dev/null
+++ b/stable/collabora-code/templates/_helpers.tpl
@@ -0,0 +1,32 @@
+{{/* vim: set filetype=mustache: */}}
+{{/*
+Expand the name of the chart.
+*/}}
+{{- define "collabora-code.name" -}}
+{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+
+{{/*
+Create a default fully qualified app name.
+We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
+If release name contains chart name it will be used as a full name.
+*/}}
+{{- define "collabora-code.fullname" -}}
+{{- if .Values.fullnameOverride -}}
+{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
+{{- else -}}
+{{- $name := default .Chart.Name .Values.nameOverride -}}
+{{- if contains $name .Release.Name -}}
+{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
+{{- else -}}
+{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Create chart name and version as used by the chart label.
+*/}}
+{{- define "collabora-code.chart" -}}
+{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
diff --git a/stable/collabora-code/templates/configmap.yaml b/stable/collabora-code/templates/configmap.yaml
new file mode 100644
index 000000000000..187f53db6aa5
--- /dev/null
+++ b/stable/collabora-code/templates/configmap.yaml
@@ -0,0 +1,10 @@
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: {{ include "collabora-code.fullname" . }}
+data:
+ DONT_GEN_SSL_CERT: "{{ .Values.collabora.DONT_GEN_SSL_CERT }}"
+ dictionaries: {{ .Values.collabora.dictionaries }}
+ domain: {{ .Values.collabora.domain }}
+ extra_params: {{ .Values.collabora.extra_params }}
+ server_name: {{ .Values.collabora.server_name }}
diff --git a/stable/collabora-code/templates/deployment.yaml b/stable/collabora-code/templates/deployment.yaml
new file mode 100644
index 000000000000..f450a02965a6
--- /dev/null
+++ b/stable/collabora-code/templates/deployment.yaml
@@ -0,0 +1,109 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: {{ include "collabora-code.fullname" . }}
+ labels:
+ app.kubernetes.io/name: {{ include "collabora-code.name" . }}
+ helm.sh/chart: {{ include "collabora-code.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+spec:
+ replicas: {{ .Values.replicaCount }}
+ strategy:
+ type: {{ .Values.strategy }}
+ selector:
+ matchLabels:
+ app.kubernetes.io/name: {{ include "collabora-code.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ template:
+ metadata:
+ labels:
+ app.kubernetes.io/name: {{ include "collabora-code.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ spec:
+ containers:
+ - name: {{ .Chart.Name }}
+ image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
+ imagePullPolicy: {{ .Values.image.pullPolicy }}
+ env:
+ - name: DONT_GEN_SSL_CERT
+ valueFrom:
+ configMapKeyRef:
+ name: {{ include "collabora-code.fullname" . }}
+ key: DONT_GEN_SSL_CERT
+ - name: dictionaries
+ valueFrom:
+ configMapKeyRef:
+ name: {{ include "collabora-code.fullname" . }}
+ key: dictionaries
+ - name: domain
+ valueFrom:
+ configMapKeyRef:
+ name: {{ include "collabora-code.fullname" . }}
+ key: domain
+ - name: extra_params
+ valueFrom:
+ configMapKeyRef:
+ name: {{ include "collabora-code.fullname" . }}
+ key: extra_params
+ - name: server_name
+ valueFrom:
+ configMapKeyRef:
+ name: {{ include "collabora-code.fullname" . }}
+ key: server_name
+ - name: username
+ valueFrom:
+ secretKeyRef:
+ name: {{ include "collabora-code.fullname" . }}
+ key: username
+ - name: password
+ valueFrom:
+ secretKeyRef:
+ name: {{ include "collabora-code.fullname" . }}
+ key: password
+ {{- if .Values.livenessProbe.enabled }}
+ livenessProbe:
+ failureThreshold: 3
+ httpGet:
+ path: /
+ port: http
+ scheme: HTTP
+ initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
+ periodSeconds: {{ .Values.livenessProbe.periodSeconds }}
+ timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }}
+ successThreshold: {{ .Values.livenessProbe.successThreshold }}
+ failureThreshold: {{ .Values.livenessProbe.failureThreshold }}
+ {{- end }}
+ {{- if .Values.readinessProbe.enabled }}
+ readinessProbe:
+ failureThreshold: 3
+ httpGet:
+ path: /
+ port: http
+ scheme: HTTP
+ initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
+ periodSeconds: {{ .Values.readinessProbe.periodSeconds }}
+ timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }}
+ successThreshold: {{ .Values.readinessProbe.successThreshold }}
+ failureThreshold: {{ .Values.readinessProbe.failureThreshold }}
+ {{- end }}
+ ports:
+ - name: http
+ containerPort: {{ .Values.service.port }}
+ protocol: TCP
+ resources:
+ {{- toYaml .Values.resources | nindent 12 }}
+ securityContext:
+ {{- toYaml .Values.securitycontext | nindent 12 }}
+ {{- with .Values.nodeSelector }}
+ nodeSelector:
+ {{- toYaml . | nindent 8 }}
+ {{- end }}
+ {{- with .Values.affinity }}
+ affinity:
+ {{- toYaml . | nindent 8 }}
+ {{- end }}
+ {{- with .Values.tolerations }}
+ tolerations:
+ {{- toYaml . | nindent 8 }}
+ {{- end }}
diff --git a/stable/collabora-code/templates/ingress.yaml b/stable/collabora-code/templates/ingress.yaml
new file mode 100644
index 000000000000..f7b86cd1c2c7
--- /dev/null
+++ b/stable/collabora-code/templates/ingress.yaml
@@ -0,0 +1,40 @@
+{{- if .Values.ingress.enabled -}}
+{{- $fullName := include "collabora-code.fullname" . -}}
+{{- $ingressPaths := .Values.ingress.paths -}}
+apiVersion: extensions/v1beta1
+kind: Ingress
+metadata:
+ name: {{ $fullName }}
+ labels:
+ app.kubernetes.io/name: {{ include "collabora-code.name" . }}
+ helm.sh/chart: {{ include "collabora-code.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ {{- with .Values.ingress.annotations }}
+ annotations:
+ {{- toYaml . | nindent 4 }}
+ {{- end }}
+spec:
+{{- if .Values.ingress.tls }}
+ tls:
+ {{- range .Values.ingress.tls }}
+ - hosts:
+ {{- range .hosts }}
+ - {{ . | quote }}
+ {{- end }}
+ secretName: {{ .secretName }}
+ {{- end }}
+{{- end }}
+ rules:
+ {{- range .Values.ingress.hosts }}
+ - host: {{ . | quote }}
+ http:
+ paths:
+ {{- range $ingressPaths }}
+ - path: {{ . }}
+ backend:
+ serviceName: {{ $fullName }}
+ servicePort: http
+ {{- end }}
+ {{- end }}
+{{- end }}
diff --git a/stable/collabora-code/templates/secret.yaml b/stable/collabora-code/templates/secret.yaml
new file mode 100644
index 000000000000..9999e6245d2f
--- /dev/null
+++ b/stable/collabora-code/templates/secret.yaml
@@ -0,0 +1,7 @@
+apiVersion: v1
+kind: Secret
+metadata:
+ name: {{ include "collabora-code.fullname" . }}
+data:
+ username: {{ .Values.collabora.username | b64enc }}
+ password: {{ .Values.collabora.password | b64enc }}
\ No newline at end of file
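
For reference, the `b64enc` calls above emit standard base64, as required for the `data` field of a Secret. The default credentials from `values.yaml` encode as:

```shell
# mirrors what `b64enc` produces for the chart's default credentials
printf '%s' admin | base64        # username
printf '%s' examplepass | base64  # password
```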
diff --git a/stable/collabora-code/templates/service.yaml b/stable/collabora-code/templates/service.yaml
new file mode 100644
index 000000000000..c80a6ab8e0fd
--- /dev/null
+++ b/stable/collabora-code/templates/service.yaml
@@ -0,0 +1,19 @@
+apiVersion: v1
+kind: Service
+metadata:
+ name: {{ include "collabora-code.fullname" . }}
+ labels:
+ app.kubernetes.io/name: {{ include "collabora-code.name" . }}
+ helm.sh/chart: {{ include "collabora-code.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+spec:
+ type: {{ .Values.service.type }}
+ ports:
+ - port: {{ .Values.service.port }}
+ targetPort: http
+ protocol: TCP
+ name: http
+ selector:
+ app.kubernetes.io/name: {{ include "collabora-code.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
diff --git a/stable/collabora-code/templates/tests/test-connection.yaml b/stable/collabora-code/templates/tests/test-connection.yaml
new file mode 100644
index 000000000000..1d4689bf343a
--- /dev/null
+++ b/stable/collabora-code/templates/tests/test-connection.yaml
@@ -0,0 +1,18 @@
+apiVersion: v1
+kind: Pod
+metadata:
+ name: "{{ include "collabora-code.fullname" . }}-test-connection"
+ labels:
+ app.kubernetes.io/name: {{ include "collabora-code.name" . }}
+ helm.sh/chart: {{ include "collabora-code.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ annotations:
+ "helm.sh/hook": test-success
+spec:
+ containers:
+ - name: wget
+ image: busybox
+ command: ['wget']
+ args: ['{{ include "collabora-code.fullname" . }}:{{ .Values.service.port }}']
+ restartPolicy: Never
diff --git a/stable/collabora-code/values.yaml b/stable/collabora-code/values.yaml
new file mode 100644
index 000000000000..ab0703694ff9
--- /dev/null
+++ b/stable/collabora-code/values.yaml
@@ -0,0 +1,71 @@
+# Default values for collabora-code.
+# This is a YAML-formatted file.
+# Declare variables to be passed into your templates.
+
+replicaCount: 1
+
+image:
+ repository: collabora/code
+ tag: 4.0.3.1
+ pullPolicy: IfNotPresent
+
+strategy: Recreate
+
+nameOverride: ""
+fullnameOverride: ""
+
+service:
+ type: ClusterIP
+ port: 9980
+
+ingress:
+ enabled: false
+ annotations: {}
+ paths: []
+ hosts: []
+ tls: []
+
+collabora:
+ DONT_GEN_SSL_CERT: true
+ domain: nextcloud\\.domain
+ extra_params: --o:ssl.termination=true --o:ssl.enable=false
+ server_name: collabora\.domain
+ password: examplepass
+ username: admin
+ dictionaries: de_DE en_GB en_US es_ES fr_FR it nl pt_BR pt_PT ru
+
+securitycontext:
+ allowPrivilegeEscalation: true
+ capabilities:
+ add:
+ - MKNOD
+
+resources: {}
+ # limits:
+ # cpu: 100m
+ # memory: 128Mi
+ # requests:
+ # cpu: 100m
+ # memory: 128Mi
+
+nodeSelector: {}
+
+tolerations: []
+
+affinity: {}
+
+livenessProbe:
+ enabled: true
+ initialDelaySeconds: 30
+ timeoutSeconds: 2
+ periodSeconds: 10
+ successThreshold: 1
+ failureThreshold: 3
+
+readinessProbe:
+ enabled: true
+ initialDelaySeconds: 30
+ timeoutSeconds: 2
+ periodSeconds: 10
+ successThreshold: 1
+ failureThreshold: 3
diff --git a/stable/concourse/CHANGELOG.md b/stable/concourse/CHANGELOG.md
new file mode 100644
index 000000000000..d448962321c3
--- /dev/null
+++ b/stable/concourse/CHANGELOG.md
@@ -0,0 +1,4 @@
+## v6.0.0:
+
+- added the ability to create worker-only and web-only deployments using `web.enabled` and `worker.enabled`
+- **[breaking]** worker and web secrets are now separated into two different templates, `worker-secrets.yaml` and `web-secrets.yaml`. Users bringing their own secrets will have to split them into two different k8s objects.
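
A worker-only deployment could then be sketched with an override like the following, assuming the flags behave as described above (the replica count is illustrative):

```yaml
# worker-only release; the web component runs in a separate release
web:
  enabled: false
worker:
  enabled: true
  replicas: 3   # illustrative
```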
diff --git a/stable/concourse/Chart.yaml b/stable/concourse/Chart.yaml
index 4adda3d39fab..73833f7fc760 100644
--- a/stable/concourse/Chart.yaml
+++ b/stable/concourse/Chart.yaml
@@ -1,6 +1,7 @@
+apiVersion: v1
name: concourse
-version: 3.7.2
-appVersion: 4.2.2
+version: 6.2.1
+appVersion: 5.2.0
description: Concourse is a simple and scalable CI system.
icon: https://avatars1.githubusercontent.com/u/7809479
keywords:
@@ -16,4 +17,6 @@ maintainers:
email: cscosta@pivotal.io
- name: william-tran
email: will@autonomic.ai
+- name: YoussB
+ email: byoussef@pivotal.io
engine: gotpl
diff --git a/stable/concourse/OWNERS b/stable/concourse/OWNERS
index 89a9975cb1e9..675c6d82de1e 100644
--- a/stable/concourse/OWNERS
+++ b/stable/concourse/OWNERS
@@ -1,6 +1,8 @@
approvers:
- cirocosta
- william-tran
+- YoussB
reviewers:
- cirocosta
- william-tran
+- YoussB
diff --git a/stable/concourse/README.md b/stable/concourse/README.md
index e2ee6b808cc9..6525f3efdf64 100644
--- a/stable/concourse/README.md
+++ b/stable/concourse/README.md
@@ -2,20 +2,24 @@
[Concourse](https://concourse-ci.org/) is a simple and scalable CI system.
+
## TL;DR;
```console
$ helm install stable/concourse
```
+
## Introduction
This chart bootstraps a [Concourse](https://concourse-ci.org/) deployment on a [Kubernetes](https://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
+
## Prerequisites Details
-* Kubernetes 1.6 (for `pod affinity` support)
-* PV support on underlying infrastructure (if persistence is required)
+* Kubernetes 1.6 (for [`pod affinity`](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity) support)
+* [`PersistentVolume`](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) support on underlying infrastructure (if persistence is required)
+
## Installing the Chart
@@ -25,6 +29,7 @@ To install the chart with the release name `my-release`:
$ helm install --name my-release stable/concourse
```
+
## Uninstalling the Chart
To uninstall/delete the `my-release` deployment:
@@ -35,9 +40,13 @@ $ helm delete my-release
The command removes nearly all the Kubernetes components associated with the chart and deletes the release.
+> **Note**: By default, a [namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) named `${RELEASE}-main` is created for the `main` team and is left untouched after a `helm delete`.
+> See the [Configuration section](#configuration) for how to control the behavior.
+
+
### Cleanup orphaned Persistent Volumes
-This chart uses `StatefulSets` for Concourse Workers. Deleting a `StatefulSet` does not delete associated Persistent Volumes.
+This chart uses [`StatefulSets`](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/) for Concourse Workers. Deleting a `StatefulSet` does not delete associated `PersistentVolume`s.
Do the following after deleting the chart release to clean up orphaned Persistent Volumes.
@@ -45,21 +54,24 @@ Do the following after deleting the chart release to clean up orphaned Persisten
$ kubectl delete pvc -l app=${RELEASE-NAME}-worker
```
-## Scaling the Chart
-Scaling should typically be managed via the `helm upgrade` command, but `StatefulSets` don't yet work with `helm upgrade`. In the meantime, until `helm upgrade` works, if you want to change the number of replicas, you can use the `kubectl scale` command as shown below:
+### Restarting workers
+
+If a [Worker](https://concourse-ci.org/architecture.html#architecture-worker) isn't taking on work, you can recreate it with `kubectl delete pod`. This initiates a graceful shutdown by ["retiring"](https://concourse-ci.org/worker-internals.html#RETIRING-table) the worker, to ensure Concourse doesn't try looking for old volumes on the new worker.
-```console
-$ kubectl scale statefulset my-release-worker --replicas=3
-```
+The value `worker.terminationGracePeriodSeconds` can be used to provide an upper limit on graceful shutdown time before forcefully terminating the container.
-### Restarting workers
+Check the output of `fly workers`, and if a worker is [`stalled`](https://concourse-ci.org/worker-internals.html#STALLED-table), you'll also need to run [`fly prune-worker`](https://concourse-ci.org/administration.html#fly-prune-worker) to allow the new incarnation of the worker to start.
+
+> **TIP**: you can download `fly` either from https://concourse-ci.org/download.html or the home page of your Concourse installation.
-If a worker isn't taking on work, you can restart the worker with `kubectl delete pod`. This initiates a graceful shutdown by "retiring" the worker, to ensure Concourse doesn't try looking for old volumes on the new worker. The value`worker.terminationGracePeriodSeconds` can be used to provide an upper limit on graceful shutdown time before forcefully terminating the container. Check the output of `fly workers`, and if a worker is `stalled`, you'll also need to run `fly prune-worker` to allow the new incarnation of the worker to start.
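The restart flow above can be sketched end to end; the release name `my-release` and fly target `my-target` are placeholders:

```console
# Recreate a worker pod; deletion triggers a graceful "retire".
$ kubectl delete pod my-release-worker-0

# Inspect worker state; a worker left in `stalled` must be pruned
# so its replacement can register under the same name.
$ fly -t my-target workers
$ fly -t my-target prune-worker -w my-release-worker-0
```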
### Worker Liveness Probe
-The worker's Liveness Probe will trigger a restart of the worker if it detects unrecoverable errors, by looking at the worker's logs. The set of strings used to identify such errors could change in the future, but can be tuned with `worker.fatalErrors`. See [values.yaml](values.yaml) for the defaults.
+By default, the worker's [`LivenessProbe`](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes) triggers a restart of the worker container if it detects errors when reaching the worker's healthcheck endpoint, which verifies that the [worker's components](https://concourse-ci.org/architecture.html#architecture) can properly serve their purpose.
+
+See [Configuration](#configuration) and [`values.yaml`](./values.yaml) for the configuration of both the `livenessProbe` (`worker.livenessProbe`) and the default healthchecking timeout (`concourse.worker.healthcheckTimeout`).
+
## Configuration
@@ -67,76 +79,41 @@ The following table lists the configurable parameters of the Concourse chart and
| Parameter | Description | Default |
| ----------------------- | ---------------------------------- | ---------------------------------------------------------- |
-| `image` | Concourse image | `concourse/concourse` |
-| `imageTag` | Concourse image version | `4.2.2` |
+| `fullnameOverride` | Provide a name to substitute for the full names of resources | `nil` |
+| `imageDigest` | Specific image digest to use in place of a tag. | `nil` |
| `imagePullPolicy` | Concourse image pull policy | `IfNotPresent` |
| `imagePullSecrets` | Array of imagePullSecrets in the namespace for pulling images | `[]` |
-| `web.additionalAffinities` | Additional affinities to apply to web pods. E.g: node affinity | `{}` |
-| `web.additionalVolumeMounts` | VolumeMounts to be added to the web pods | `nil` |
-| `web.additionalVolumes` | Volumes to be added to the web pods | `nil` |
-| `web.annotations`| Concourse Web deployment annotations | `nil` |
-| `web.authSecretsPath` | Specify the mount directory of the web auth secrets | `/concourse-auth` |
-| `web.env` | Configure additional environment variables for the web containers | `[]` |
-| `web.ingress.annotations` | Concourse Web Ingress annotations | `{}` |
-| `web.ingress.enabled` | Enable Concourse Web Ingress | `false` |
-| `web.ingress.hosts` | Concourse Web Ingress Hostnames | `[]` |
-| `web.ingress.tls` | Concourse Web Ingress TLS configuration | `[]` |
-| `web.keysSecretsPath` | Specify the mount directory of the web keys secrets | `/concourse-keys` |
-| `web.livenessProbe` | Liveness Probe settings | `{"failureThreshold":5,"httpGet":{"path":"/api/v1/info","port":"atc"},"initialDelaySeconds":10,"periodSeconds":15,"timeoutSeconds":3}` |
-| `web.nameOverride` | Override the Concourse Web components name | `nil` |
-| `web.nodeSelector` | Node selector for web nodes | `{}` |
-| `web.postgresqlSecrtsPath` | Specify the mount directory of the web postgresql secrets | `/concourse-postgresql` |
-| `web.readinessProbe` | Readiness Probe settings | `{"httpGet":{"path":"/api/v1/info","port":"atc"}}` |
-| `web.replicas` | Number of Concourse Web replicas | `1` |
-| `web.resources` | Concourse Web resource requests and limits | `{requests: {cpu: "100m", memory: "128Mi"}}` |
-| `web.service.annotations` | Concourse Web Service annotations | `nil` |
-| `web.service.atcNodePort` | Sets the nodePort for atc when using `NodePort` | `nil` |
-| `web.service.atcTlsNodePort` | Sets the nodePort for atc tls when using `NodePort` | `nil` |
-| `web.service.labels` | Additional concourse web service labels | `nil` |
-| `web.service.loadBalancerIP` | The IP to use when web.service.type is LoadBalancer | `nil` |
-| `web.service.loadBalancerSourceRanges` | Concourse Web Service Load Balancer Source IP ranges | `nil` |
-| `web.service.tsaNodePort` | Sets the nodePort for tsa when using `NodePort` | `nil` |
-| `web.service.type` | Concourse Web service type | `ClusterIP` |
-| `web.syslogSecretsPath` | Specify the mount directory of the web syslog secrets | `/concourse-syslog` |
-| `web.tolerations` | Tolerations for the web nodes | `[]` |
-| `web.vaultSecretsPath` | Specify the mount directory of the web vault secrets | `/concourse-vault` |
-| `worker.nameOverride` | Override the Concourse Worker components name | `nil` |
-| `worker.replicas` | Number of Concourse Worker replicas | `2` |
-| `worker.minAvailable` | Minimum number of workers available after an eviction | `1` |
-| `worker.resources` | Concourse Worker resource requests and limits | `{requests: {cpu: "100m", memory: "512Mi"}}` |
-| `worker.env` | Configure additional environment variables for the worker container(s) | `[]` |
-| `worker.annotations` | Annotations to be added to the worker pods | `{}` |
-| `worker.keysSecretsPath` | Specify the mount directory of the worker keys secrets | `/concourse-keys` |
-| `worker.additionalVolumeMounts` | VolumeMounts to be added to the worker pods | `nil` |
-| `worker.additionalVolumes` | Volumes to be added to the worker pods | `nil` |
-| `worker.additionalAffinities` | Additional affinities to apply to worker pods. E.g: node affinity | `{}` |
-| `worker.tolerations` | Tolerations for the worker nodes | `[]` |
-| `worker.terminationGracePeriodSeconds` | Upper bound for graceful shutdown to allow the worker to drain its tasks | `60` |
-| `worker.fatalErrors` | Newline delimited strings which, when logged, should trigger a restart of the worker | *See [values.yaml](values.yaml)* |
-| `worker.updateStrategy` | `OnDelete` or `RollingUpdate` (requires Kubernetes >= 1.7) | `RollingUpdate` |
-| `worker.podManagementPolicy` | `OrderedReady` or `Parallel` (requires Kubernetes >= 1.7) | `Parallel` |
-| `worker.hardAntiAffinity` | Should the workers be forced (as opposed to preferred) to be on different nodes? | `false` |
-| `worker.emptyDirSize` | When persistance is disabled this value will be used to limit the emptyDir volume size | `nil` |
+| `imageTag` | Concourse image version | `5.0.0` |
+| `image` | Concourse image | `concourse/concourse` |
+| `nameOverride` | Provide a name in place of `concourse` for `app:` labels | `nil` |
| `persistence.enabled` | Enable Concourse persistence using Persistent Volume Claims | `true` |
-| `persistence.worker.storageClass` | Concourse Worker Persistent Volume Storage Class | `generic` |
| `persistence.worker.accessMode` | Concourse Worker Persistent Volume Access Mode | `ReadWriteOnce` |
| `persistence.worker.size` | Concourse Worker Persistent Volume Storage Size | `20Gi` |
+| `persistence.worker.storageClass` | Concourse Worker Persistent Volume Storage Class | `generic` |
| `postgresql.enabled` | Enable PostgreSQL as a chart dependency | `true` |
-| `postgresql.postgresUser` | PostgreSQL User to create | `concourse` |
-| `postgresql.postgresPassword` | PostgreSQL Password for the new user | `concourse` |
-| `postgresql.postgresDatabase` | PostgreSQL Database to create | `concourse` |
+| `postgresql.persistence.accessMode` | Persistent Volume Access Mode | `ReadWriteOnce` |
| `postgresql.persistence.enabled` | Enable PostgreSQL persistence using Persistent Volume Claims | `true` |
-| `rbac.create` | Enables creation of RBAC resources | `true` |
+| `postgresql.persistence.size` | Persistent Volume Storage Size | `8Gi` |
+| `postgresql.persistence.storageClass` | Concourse data Persistent Volume Storage Class | `nil` |
+| `postgresql.postgresDatabase` | PostgreSQL Database to create | `concourse` |
+| `postgresql.postgresPassword` | PostgreSQL Password for the new user | `concourse` |
+| `postgresql.postgresUser` | PostgreSQL User to create | `concourse` |
| `rbac.apiVersion` | RBAC version | `v1beta1` |
+| `rbac.create` | Enables creation of RBAC resources | `true` |
| `rbac.webServiceAccountName` | Name of the service account to use for web pods if `rbac.create` is `false` | `default` |
| `rbac.workerServiceAccountName` | Name of the service account to use for workers if `rbac.create` is `false` | `default` |
-| `secrets.create` | Create the secret resource from the following values. *See [Secrets](#secrets)* | `true` |
+| `secrets.awsSecretsmanagerAccessKey` | AWS Access Key ID for Secrets Manager access | `nil` |
+| `secrets.awsSecretsmanagerSecretKey` | AWS Secret Access Key ID for Secrets Manager access | `nil` |
+| `secrets.awsSecretsmanagerSessionToken` | AWS Session Token for Secrets Manager access | `nil` |
| `secrets.awsSsmAccessKey` | AWS Access Key ID for SSM access | `nil` |
| `secrets.awsSsmSecretKey` | AWS Secret Access Key ID for SSM access | `nil` |
| `secrets.awsSsmSessionToken` | AWS Session Token for SSM access | `nil` |
+| `secrets.bitbucketCloudClientId` | Client ID for the BitbucketCloud OAuth | `nil` |
+| `secrets.bitbucketCloudClientSecret` | Client Secret for the BitbucketCloud OAuth | `nil` |
| `secrets.cfCaCert` | CA certificate for cf auth provider | `nil` |
| `secrets.cfClientId` | Client ID for cf auth provider | `nil` |
| `secrets.cfClientSecret` | Client secret for cf auth provider | `nil` |
+| `secrets.create` | Create the secret resource from the following values. *See [Secrets](#secrets)* | `true` |
| `secrets.encryptionKey` | current encryption key | `nil` |
| `secrets.githubCaCert` | CA certificate for Enterprise Github OAuth | `nil` |
| `secrets.githubClientId` | Application client ID for GitHub OAuth | `nil` |
@@ -146,6 +123,7 @@ The following table lists the configurable parameters of the Concourse chart and
| `secrets.hostKeyPub` | Concourse Host Public Key | *See [values.yaml](values.yaml)* |
| `secrets.hostKey` | Concourse Host Private Key | *See [values.yaml](values.yaml)* |
| `secrets.influxdbPassword` | Password used to authenticate with influxdb | `nil` |
+| `secrets.ldapCaCert` | CA Certificate for LDAP | `nil` |
| `secrets.localUsers` | Create concourse local users. Default username and password are `test:test` *See [values.yaml](values.yaml)* |
| `secrets.oauthCaCert` | CA certificate for Generic OAuth | `nil` |
| `secrets.oauthClientId` | Application client ID for Generic OAuth | `nil` |
@@ -154,13 +132,14 @@ The following table lists the configurable parameters of the Concourse chart and
| `secrets.oidcClientId` | Application client ID for OIDI OAuth | `nil` |
| `secrets.oidcClientSecret` | Application client secret for OIDC OAuth | `nil` |
| `secrets.oldEncryptionKey` | old encryption key, used for key rotation | `nil` |
-| `secrets.postgresqlCaCert` | PostgreSQL CA certificate | `nil` |
-| `secrets.postgresqlClientCert` | PostgreSQL Client certificate | `nil` |
-| `secrets.postgresqlClientKey` | PostgreSQL Client key | `nil` |
-| `secrets.postgresqlPassword` | PostgreSQL User Password | `nil` |
-| `secrets.postgresqlUser` | PostgreSQL User Name | `nil` |
+| `secrets.postgresCaCert` | PostgreSQL CA certificate | `nil` |
+| `secrets.postgresClientCert` | PostgreSQL Client certificate | `nil` |
+| `secrets.postgresClientKey` | PostgreSQL Client key | `nil` |
+| `secrets.postgresPassword` | PostgreSQL User Password | `nil` |
+| `secrets.postgresUser` | PostgreSQL User Name | `nil` |
| `secrets.sessionSigningKey` | Concourse Session Signing Private Key | *See [values.yaml](values.yaml)* |
| `secrets.syslogCaCert` | SSL certificate to verify Syslog server | `nil` |
+| `secrets.teamAuthorizedKeys` | Array of team names and worker public keys for external workers | `nil` |
| `secrets.vaultAuthParam` | Paramter to pass when logging in via the backend | `nil` |
| `secrets.vaultCaCert` | CA certificate use to verify the vault server SSL cert | `nil` |
| `secrets.vaultClientCert` | Vault Client Certificate | `nil` |
@@ -170,8 +149,80 @@ The following table lists the configurable parameters of the Concourse chart and
| `secrets.webTlsKey` | An RSA private key, used to encrypt HTTPS traffic | `nil` |
| `secrets.workerKeyPub` | Concourse Worker Public Key | *See [values.yaml](values.yaml)* |
| `secrets.workerKey` | Concourse Worker Private Key | *See [values.yaml](values.yaml)* |
+| `web.additionalAffinities` | Additional affinities to apply to web pods. E.g: node affinity | `{}` |
+| `web.additionalVolumeMounts` | VolumeMounts to be added to the web pods | `nil` |
+| `web.additionalVolumes` | Volumes to be added to the web pods | `nil` |
+| `web.annotations`| Concourse Web deployment annotations | `nil` |
+| `web.authSecretsPath` | Specify the mount directory of the web auth secrets | `/concourse-auth` |
+| `web.enabled` | Enable or disable the web component | `true` |
+| `web.env` | Configure additional environment variables for the web containers | `[]` |
+| `web.ingress.annotations` | Concourse Web Ingress annotations | `{}` |
+| `web.ingress.enabled` | Enable Concourse Web Ingress | `false` |
+| `web.ingress.hosts` | Concourse Web Ingress Hostnames | `[]` |
+| `web.ingress.tls` | Concourse Web Ingress TLS configuration | `[]` |
+| `web.keySecretsPath` | Specify the mount directory of the web keys secrets | `/concourse-keys` |
+| `web.livenessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded | `5` |
+| `web.livenessProbe.httpGet.path` | Path to access on the HTTP server when performing the healthcheck | `/api/v1/info` |
+| `web.livenessProbe.httpGet.port` | Name or number of the port to access on the container | `atc` |
+| `web.livenessProbe.initialDelaySeconds` | Number of seconds after the container has started before liveness probes are initiated | `10` |
+| `web.livenessProbe.periodSeconds` | How often (in seconds) to perform the probe | `15` |
+| `web.livenessProbe.timeoutSeconds` | Number of seconds after which the probe times out | `3` |
+| `web.nameOverride` | Override the Concourse Web components name | `nil` |
+| `web.nodeSelector` | Node selector for web nodes | `{}` |
+| `web.postgresqlSecretsPath` | Specify the mount directory of the web postgresql secrets | `/concourse-postgresql` |
+| `web.readinessProbe.httpGet.path` | Path to access on the HTTP server when performing the healthcheck | `/api/v1/info` |
+| `web.readinessProbe.httpGet.port` | Name or number of the port to access on the container | `atc` |
+| `web.replicas` | Number of Concourse Web replicas | `1` |
+| `web.strategy` | Strategy for updates to deployment. | `{}` |
+| `web.resources.requests.cpu` | Minimum amount of cpu resources requested | `100m` |
+| `web.resources.requests.memory` | Minimum amount of memory resources requested | `128Mi` |
+| `web.service.annotations` | Concourse Web Service annotations | `nil` |
+| `web.service.atcNodePort` | Sets the nodePort for atc when using `NodePort` | `nil` |
+| `web.service.atcTlsNodePort` | Sets the nodePort for atc tls when using `NodePort` | `nil` |
+| `web.service.labels` | Additional concourse web service labels | `nil` |
+| `web.service.loadBalancerIP` | The IP to use when web.service.type is LoadBalancer | `nil` |
+| `web.service.loadBalancerSourceRanges` | Concourse Web Service Load Balancer Source IP ranges | `nil` |
+| `web.service.tsaNodePort` | Sets the nodePort for tsa when using `NodePort` | `nil` |
+| `web.service.type` | Concourse Web service type | `ClusterIP` |
+| `web.sidecarContainers` | Array of extra containers to run alongside the Concourse web container | `nil` |
+| `web.syslogSecretsPath` | Specify the mount directory of the web syslog secrets | `/concourse-syslog` |
+| `web.tlsSecretsPath` | Where in the container the web TLS secrets should be mounted | `/concourse-web-tls` |
+| `web.tolerations` | Tolerations for the web nodes | `[]` |
+| `web.vaultSecretsPath` | Specify the mount directory of the web vault secrets | `/concourse-vault` |
+| `worker.additionalAffinities` | Additional affinities to apply to worker pods. E.g: node affinity | `{}` |
+| `worker.additionalVolumeMounts` | VolumeMounts to be added to the worker pods | `nil` |
+| `worker.additionalVolumes` | Volumes to be added to the worker pods | `nil` |
+| `worker.annotations` | Annotations to be added to the worker pods | `{}` |
+| `worker.cleanUpWorkDirOnStart` | Removes any previous state created in `concourse.worker.workDir` | `true` |
+| `worker.emptyDirSize` | When persistence is disabled, this value will be used to limit the emptyDir volume size | `nil` |
+| `worker.enabled` | Enable or disable the worker component. If you deploy workers only, set `postgresql.enabled=false` to avoid deploying an unnecessary PostgreSQL chart | `true` |
+| `worker.env` | Configure additional environment variables for the worker container(s) | `[]` |
+| `worker.hardAntiAffinity` | Should the workers be forced (as opposed to preferred) to be on different nodes? | `false` |
+| `worker.keySecretsPath` | Specify the mount directory of the worker keys secrets | `/concourse-keys` |
+| `worker.livenessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded | `5` |
+| `worker.livenessProbe.httpGet.path` | Path to access on the HTTP server when performing the healthcheck | `/` |
+| `worker.livenessProbe.httpGet.port` | Name or number of the port to access on the container | `worker-hc` |
+| `worker.livenessProbe.initialDelaySeconds` | Number of seconds after the container has started before liveness probes are initiated | `10` |
+| `worker.livenessProbe.periodSeconds` | How often (in seconds) to perform the probe | `15` |
+| `worker.livenessProbe.timeoutSeconds` | Number of seconds after which the probe times out | `3` |
+| `worker.minAvailable` | Minimum number of workers available after an eviction | `1` |
+| `worker.nameOverride` | Override the Concourse Worker components name | `nil` |
+| `worker.nodeSelector` | Node selector for worker nodes | `{}` |
+| `worker.podManagementPolicy` | `OrderedReady` or `Parallel` (requires Kubernetes >= 1.7) | `Parallel` |
+| `worker.readinessProbe` | Periodic probe of container service readiness | `{}` |
+| `worker.replicas` | Number of Concourse Worker replicas | `2` |
+| `worker.resources.requests.cpu` | Minimum amount of cpu resources requested | `100m` |
+| `worker.resources.requests.memory` | Minimum amount of memory resources requested | `512Mi` |
+| `worker.sidecarContainers` | Array of extra containers to run alongside the Concourse worker container | `nil` |
+| `worker.terminationGracePeriodSeconds` | Upper bound for graceful shutdown to allow the worker to drain its tasks | `60` |
+| `worker.tolerations` | Tolerations for the worker nodes | `[]` |
+| `worker.updateStrategy` | `OnDelete` or `RollingUpdate` (requires Kubernetes >= 1.7) | `RollingUpdate` |
-For configurable concourse parameters, refer to [values.yaml](values.yaml) `concourse` section. All parameters under this section are strictly mapped from concourse binary commands. For example if one needs to configure the concourse external URL, the param `concourse` -> `web` -> `externalUrl` should be set, which is equivalent to running concourse binary as `concourse web --external-url`. For those sub-sections that have `enabled`, one needs to set `enabled` to be `true` to use the following params within the section.
+For configurable Concourse parameters, refer to the `concourse` section of [`values.yaml`](values.yaml). All parameters under this section are strictly mapped from the `concourse` binary's commands.
+
+For example, if one needs to configure the Concourse external URL, the param `concourse` -> `web` -> `externalUrl` should be set, which is equivalent to running the `concourse` binary as `concourse web --external-url`.
+
+For those sub-sections that have `enabled`, one needs to set `enabled` to be `true` to use the following params within the section.
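As an illustration of that mapping (the URL is a placeholder), the following values fragment corresponds to `concourse web --external-url`:

```yaml
concourse:
  web:
    # Maps to the --external-url flag of `concourse web`.
    externalUrl: https://ci.example.com
```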
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`.
@@ -183,13 +234,28 @@ $ helm install --name my-release -f values.yaml stable/concourse
> **Tip**: You can use the default [values.yaml](values.yaml)
+
### Secrets
-For your convenience, this chart provides some default values for secrets, but it is recommended that you generate and manage these secrets outside the Helm chart. To do this, set `secrets.create` to `false`, create files for each secret value, and turn it all into a k8s secret. Be careful with introducing trailing newline characters; following the steps below ensures none end up in your secrets. First, perform the following to create the mandatory secret values:
+For your convenience, this chart provides some default values for secrets, but it is recommended that you generate and manage these secrets outside the Helm chart.
-```console
+To do that, set `secrets.create` to `false`, create files for each secret value, and turn it all into a Kubernetes [Secret](https://kubernetes.io/docs/concepts/configuration/secret/).
+
+Be careful with introducing trailing newline characters; following the steps below ensures none end up in your secrets. First, perform the following to create the mandatory secret values:
+
+```sh
+# Create a directory to host the set of secrets that are
+# required for a working Concourse installation and get
+# into it.
+#
mkdir concourse-secrets
cd concourse-secrets
+
+# Generate the files for the secrets that are required:
+# - web key pair,
+# - worker key pair, and
+# - the session signing token.
+#
ssh-keygen -t rsa -f host-key -N ''
mv host-key.pub host-key-pub
ssh-keygen -t rsa -f worker-key -N ''
@@ -199,19 +265,33 @@ rm session-signing-key.pub
printf "%s:%s" "concourse" "$(openssl rand -base64 24)" > local-users
```
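The reason for using `printf "%s"` above rather than `echo` is that `echo` appends a trailing newline, which would end up inside the secret value. A quick check (file names here are throwaway):

```shell
# echo appends a newline; printf "%s" does not.
printf "%s" "concourse" > /tmp/no-newline
echo "concourse" > /tmp/with-newline

wc -c < /tmp/no-newline    # 9 bytes
wc -c < /tmp/with-newline  # 10 bytes
```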
-You'll also need to create/copy secret values for optional features. See [templates/secrets.yaml](templates/secrets.yaml) for possible values. In the example below, we are not using the [PostgreSQL](#postgresql) chart dependency, and so we must set `postgresql-user` and `postgresql-password` secrets.
+All the worker-specific secrets, namely `workerKey`, `workerKeyPub`, and `hostKeyPub`, are to be added to a separate Kubernetes Secret object named `[release name]-worker`.
-```console
-# copy a posgres user to clipboard and paste it to file
+All other secrets are to be added to a Secret object named `[release name]-web`.
+
+For the time being, the secret `workerKeyPub` needs to be added to both the worker and the web secret objects, until this is investigated in issue #13019.
+
+You'll also need to create/copy secret values for optional features. See [templates/web-secrets.yaml](templates/web-secrets.yaml) and [templates/worker-secrets.yaml](templates/worker-secrets.yaml) for possible values.
+
+In the example below, we are not using the [PostgreSQL](#postgresql) chart dependency, and so we must set `postgresql-user` and `postgresql-password` secrets.
+
+```sh
+# Still within the directory where our secrets exist,
+# copy a postgres user to clipboard and paste it to file.
+#
printf "%s" "$(pbpaste)" > postgresql-user
-# copy a posgres password to clipboard and paste it to file
+
+# Copy a postgres password to clipboard and paste it to file
+#
printf "%s" "$(pbpaste)" > postgresql-password
-# copy Github client id and secrets to clipboard and paste to files
+# Copy Github client id and secrets to clipboard and paste to files
+#
printf "%s" "$(pbpaste)" > github-client-id
printf "%s" "$(pbpaste)" > github-client-secret
-# set an encryption key for DB encryption at rest
+# Set an encryption key for DB encryption at rest
+#
printf "%s" "$(openssl rand -base64 24)" > encryption-key
```
@@ -223,9 +303,14 @@ kubectl create secret generic my-release-concourse --from-file=.
Make sure you clean up after yourself.
+
### Persistence
-This chart mounts a Persistent Volume for each Concourse Worker. The volume is created using dynamic volume provisioning. If you want to disable it or change the persistence properties, update the `persistence` section of your custom `values.yaml` file:
+This chart mounts a Persistent Volume for each Concourse Worker.
+
+The volume is created using dynamic volume provisioning.
+
+If you want to disable it or change the persistence properties, update the `persistence` section of your custom `values.yaml` file:
```yaml
## Persistent Volume Storage configuration.
@@ -252,7 +337,8 @@ persistence:
size: "20Gi"
```
-It is highly recommended to use Persistent Volumes for Concourse Workers; otherwise, the container images managed by the Worker are stored in an `emptyDir` volume on the node's disk. This will interfere with k8s ImageGC and the node's disk will fill up as a result. This will be fixed in a future release of k8s: https://github.com/kubernetes/kubernetes/pull/57020
+It is highly recommended to use Persistent Volumes for Concourse Workers; otherwise, the Concourse volumes managed by the Worker are stored in an [`emptyDir`](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir) volume on the Kubernetes node's disk. This will interfere with Kubernetes' [image garbage collection](https://kubernetes.io/docs/concepts/cluster-administration/kubelet-garbage-collection/#image-collection) and the node's disk will fill up as a result.
+
### Ingress TLS
@@ -308,7 +394,7 @@ Pipelines usually need credentials to do things. Concourse supports the use of a
#### Kubernetes Secrets
-By default, this chart uses Kubernetes Secrets as a credential manager.
+By default, this chart uses Kubernetes Secrets as a credential manager.
For a given Concourse *team*, a pipeline looks for secrets in a namespace named `[namespacePrefix][teamName]`. The namespace prefix is the release name followed by a hyphen by default, and can be overridden with the value `concourse.web.kubernetes.namespacePrefix`. Each team listed under `concourse.web.kubernetes.teams` will have a namespace created for it, and the namespace remains after deletion of the release unless you set `concourse.web.kubernetes.keepNamespace` to `false`. By default, a namespace will be created for the `main` team.
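Concretely, with the default prefix, a secret for the `main` team of a release named `my-release` lives in the `my-release-main` namespace. A sketch, assuming Concourse's default lookup of the `value` key on a secret named after the pipeline variable:

```console
$ kubectl create secret generic my-pipeline-secret \
    --namespace my-release-main \
    --from-literal=value=supersecret
```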
@@ -382,6 +468,8 @@ concourse:
To use SSM, set `concourse.web.kubernetes.enabled` to false, and set `concourse.web.awsSsm.enabled` to true.
+Authentication can be configured to use an access key and secret key as well as a session token, by setting `concourse.web.awsSsm.keyAuth.enabled` to `true`. Alternatively, if it is set to `false`, AWS IAM role-based authentication (instance or pod credentials) is assumed. To use a session token, `concourse.web.awsSsm.useSessionToken` should be set to `true`. The secret values can be managed using the values specified in this Helm chart or separately. For more details, see https://concourse-ci.org/creds.html#ssm.
+
For a given Concourse *team*, a pipeline looks for secrets in SSM using either `/concourse/{team}/{secret}` or `/concourse/{team}/{pipeline}/{secret}`; the patterns can be overridden using the `concourse.web.awsSsm.teamSecretTemplate` and `concourse.web.awsSsm.pipelineSecretTemplate` settings.
Concourse requires AWS credentials which are able to read from SSM for this feature to function. Credentials can be set in the `secrets.awsSsm*` settings; if your cluster is running in a different AWS region, you may also need to set `concourse.web.awsSsm.region`.
@@ -412,6 +500,8 @@ Where `` is the ARN of the KMS key used to encrypt the secrets in P
To use Secrets Manager, set `concourse.web.kubernetes.enabled` to false, and set `concourse.web.awsSecretsManager.enabled` to true.
+Authentication can be configured to use an access key and secret key, as well as a session token, by setting `concourse.web.awsSecretsManager.keyAuth.enabled` to `true`. If it is set to `false`, AWS IAM role-based authentication (instance or pod credentials) is assumed instead. To use a session token, set `concourse.web.awsSecretsManager.keyAuth.useSessionToken` to `true`. The secret values can be managed using the values specified in this Helm chart or separately. For more details, see https://concourse-ci.org/creds.html#asm.
+
For a given Concourse *team*, a pipeline looks for secrets in Secrets Manager using either `/concourse/{team}/{secret}` or `/concourse/{team}/{pipeline}/{secret}`; the patterns can be overridden using the `concourse.web.awsSecretsManager.teamSecretTemplate` and `concourse.web.awsSecretsManager.pipelineSecretTemplate` settings.
Concourse requires AWS credentials which are able to read from Secrets Manager for this feature to function. Credentials can be set in the `secrets.awsSecretsmanager*` settings; if your cluster is running in a different AWS region, you may also need to set `concourse.web.awsSecretsManager.region`.
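The Secrets Manager settings mirror the SSM ones; a minimal sketch (region is a placeholder, and the actual credentials would come from the `secrets.awsSecretsmanager*` values or be managed separately):

```yaml
concourse:
  web:
    kubernetes:
      enabled: false    # disable the default credential manager
    awsSecretsManager:
      enabled: true
      region: us-east-1
      keyAuth:
        enabled: true   # false would assume IAM role based auth instead
```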
diff --git a/stable/concourse/ci/default-values.yaml b/stable/concourse/ci/default-values.yaml
new file mode 100644
index 000000000000..182342ee46eb
--- /dev/null
+++ b/stable/concourse/ci/default-values.yaml
@@ -0,0 +1,4 @@
+concourse:
+ web:
+ kubernetes:
+ keepNamespaces: false
diff --git a/stable/concourse/more-config.yaml b/stable/concourse/more-config.yaml
deleted file mode 100644
index f08a291b220e..000000000000
--- a/stable/concourse/more-config.yaml
+++ /dev/null
@@ -1,8 +0,0 @@
-web:
- additionalVolumes:
- - name: team-authorized-keys
- configMap:
- name: hush-house-team-authorized-keys
- additionalVolumeMounts:
- - name: team-authorized-keys
- mountPath: /team-authorized-keys/
diff --git a/stable/concourse/templates/NOTES.txt b/stable/concourse/templates/NOTES.txt
index bec296c5decf..006ef18cf25e 100644
--- a/stable/concourse/templates/NOTES.txt
+++ b/stable/concourse/templates/NOTES.txt
@@ -35,19 +35,21 @@
{{- end }}
* If this is your first time using Concourse, follow the tutorials at https://concourse-ci.org/tutorials.html
+{{- if .Values.concourse.worker.baggageclaim.driver }}
{{- if contains "naive" .Values.concourse.worker.baggageclaim.driver }}
*******************
******WARNING******
*******************
-You are using the "naive" baggage claim driver, which is also the default value for this chart.
+You are using the "naive" baggage claim driver, which is also the default value for this chart.
-This is the default for compatibility reasons, but is very space inefficient, and should be changed to either "btrfs" (recommended) or "overlay" depending on that filesystem's support in the Linux kernel your cluster is using.
+This is the default for compatibility reasons, but is very space inefficient, and should be changed to either "btrfs" (recommended) or "overlay" depending on that filesystem's support in the Linux kernel your cluster is using.
Please see https://github.com/concourse/concourse/issues/1230 and https://github.com/concourse/concourse/issues/1966 for background.
{{- end }}
+{{- end }}
diff --git a/stable/concourse/templates/_helpers.tpl b/stable/concourse/templates/_helpers.tpl
index 5e4cdc5aceed..e4ec766a3639 100644
--- a/stable/concourse/templates/_helpers.tpl
+++ b/stable/concourse/templates/_helpers.tpl
@@ -7,19 +7,18 @@ Expand the name of the chart.
{{- end -}}
{{/*
-Create a default fully qualified concourse name.
+Create a default fully qualified web node(s) name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
-{{- define "concourse.concourse.fullname" -}}
-{{- $name := default .Chart.Name .Values.nameOverride -}}
-{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
-{{- end -}}
-
{{- define "concourse.web.fullname" -}}
{{- $name := default "web" .Values.web.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
+{{/*
+Create a default fully qualified worker node(s) name.
+We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
+*/}}
{{- define "concourse.worker.fullname" -}}
{{- $name := default "worker" .Values.worker.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
@@ -45,3 +44,11 @@ We truncate at 63 chars because some Kubernetes name fields are limited to this
{{- define "concourse.namespacePrefix" -}}
{{- default (printf "%s-" .Release.Name ) .Values.concourse.web.kubernetes.namespacePrefix -}}
{{- end -}}
+
+{{- define "concourse.are-there-additional-volumes.with-the-name.concourse-work-dir" }}
+ {{- range .Values.worker.additionalVolumes }}
+ {{- if .name | eq "concourse-work-dir" }}
+ {{- .name }}
+ {{- end }}
+ {{- end }}
+{{- end }}
diff --git a/stable/concourse/templates/namespace.yaml b/stable/concourse/templates/namespace.yaml
index 941c12de7cc4..85f818e1e7df 100644
--- a/stable/concourse/templates/namespace.yaml
+++ b/stable/concourse/templates/namespace.yaml
@@ -1,3 +1,4 @@
+{{- if .Values.web.enabled -}}
{{- if and .Values.concourse.web.kubernetes.enabled .Values.concourse.web.kubernetes.createTeamNamespaces -}}
{{- range .Values.concourse.web.kubernetes.teams }}
---
@@ -10,9 +11,10 @@ metadata:
{{- end }}
name: {{ template "concourse.namespacePrefix" $ }}{{ . }}
labels:
- app: {{ template "concourse.concourse.fullname" $ }}
+ app: {{ template "concourse.web.fullname" $ }}
chart: "{{ $.Chart.Name }}-{{ $.Chart.Version }}"
release: "{{ $.Release.Name }}"
heritage: "{{ $.Release.Service }}"
{{- end }}
+{{- end }}
{{- end -}}
diff --git a/stable/concourse/templates/required-check.yaml b/stable/concourse/templates/required-check.yaml
new file mode 100644
index 000000000000..543049b01e93
--- /dev/null
+++ b/stable/concourse/templates/required-check.yaml
@@ -0,0 +1,3 @@
+{{ if not (or .Values.web.enabled .Values.worker.enabled) }}
+{{- required "Must set either web.enabled or worker.enabled to create a concourse deployment" "" }}
+{{ end }}
diff --git a/stable/concourse/templates/web-deployment.yaml b/stable/concourse/templates/web-deployment.yaml
index 209cac97bea4..3c89ab603834 100644
--- a/stable/concourse/templates/web-deployment.yaml
+++ b/stable/concourse/templates/web-deployment.yaml
@@ -1,3 +1,4 @@
+{{- if .Values.web.enabled -}}
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
@@ -9,21 +10,28 @@ metadata:
heritage: "{{ .Release.Service }}"
spec:
replicas: {{ .Values.web.replicas }}
+{{- if .Values.web.strategy }}
+{{ toYaml .Values.web.strategy | indent 2 }}
+{{- end }}
template:
metadata:
labels:
app: {{ template "concourse.web.fullname" . }}
release: "{{ .Release.Name }}"
+ {{- if .Values.web.annotations }}
annotations:
{{ toYaml .Values.web.annotations | indent 8 }}
+ {{- end }}
spec:
- {{- with .Values.web.nodeSelector }}
+ {{- if .Values.web.nodeSelector }}
nodeSelector:
-{{ toYaml . | indent 8 }}
+{{ toYaml .Values.web.nodeSelector | indent 8 }}
{{- end }}
serviceAccountName: {{ if .Values.rbac.create }}{{ template "concourse.web.fullname" . }}{{ else }}{{ .Values.rbac.webServiceAccountName }}{{ end }}
+ {{- if .Values.web.tolerations }}
tolerations:
{{ toYaml .Values.web.tolerations | indent 8 }}
+ {{- end }}
{{- if .Values.imagePullSecrets }}
imagePullSecrets:
{{- range .Values.imagePullSecrets }}
@@ -31,6 +39,9 @@ spec:
{{- end }}
{{- end }}
containers:
+ {{- if .Values.web.sidecarContainers }}
+ {{- toYaml .Values.web.sidecarContainers | nindent 8 }}
+ {{- end }}
- name: {{ template "concourse.web.fullname" . }}
{{- if .Values.imageDigest }}
image: "{{ .Values.image }}@{{ .Values.imageDigest }}"
@@ -39,14 +50,112 @@ spec:
{{- end }}
imagePullPolicy: {{ .Values.imagePullPolicy | quote }}
args:
- - "web"
- {{- if and (.Values.concourse.web.awsSecretsManager.enabled) (.Values.concourse.web.awsSecretsManager.region) }}
- - '--aws-secretsmanager-region={{ .Values.concourse.web.awsSecretsManager.region | quote }}'
+ - web
+ env:
+ {{- if .Values.concourse.web.clusterName }}
+ - name: CONCOURSE_CLUSTER_NAME
+ value: {{ .Values.concourse.web.clusterName | quote }}
{{- end }}
- {{- if and (.Values.concourse.web.awsSsm.enabled) (.Values.concourse.web.awsSsm.region) }}
- - '--aws-ssm-region={{ .Values.concourse.web.awsSsm.region | quote }}'
+ {{- if .Values.concourse.web.enableGlobalResources }}
+ - name: CONCOURSE_ENABLE_GLOBAL_RESOURCES
+ value: {{ .Values.concourse.web.enableGlobalResources | quote }}
+ {{- end }}
+ {{- if .Values.concourse.web.enableBuildAuditing }}
+ - name: CONCOURSE_ENABLE_BUILD_AUDITING
+ value: {{ .Values.concourse.web.enableBuildAuditing | quote }}
+ {{- end }}
+ {{- if .Values.concourse.web.enableContainerAuditing }}
+ - name: CONCOURSE_ENABLE_CONTAINER_AUDITING
+ value: {{ .Values.concourse.web.enableContainerAuditing | quote }}
+ {{- end }}
+ {{- if .Values.concourse.web.enableJobAuditing }}
+ - name: CONCOURSE_ENABLE_JOB_AUDITING
+ value: {{ .Values.concourse.web.enableJobAuditing | quote }}
+ {{- end }}
+ {{- if .Values.concourse.web.enablePipelineAuditing }}
+ - name: CONCOURSE_ENABLE_PIPELINE_AUDITING
+ value: {{ .Values.concourse.web.enablePipelineAuditing | quote }}
+ {{- end }}
+ {{- if .Values.concourse.web.enableResourceAuditing }}
+ - name: CONCOURSE_ENABLE_RESOURCE_AUDITING
+ value: {{ .Values.concourse.web.enableResourceAuditing | quote }}
+ {{- end }}
+ {{- if .Values.concourse.web.enableSystemAuditing }}
+ - name: CONCOURSE_ENABLE_SYSTEM_AUDITING
+ value: {{ .Values.concourse.web.enableSystemAuditing | quote }}
+ {{- end }}
+ {{- if .Values.concourse.web.enableTeamAuditing }}
+ - name: CONCOURSE_ENABLE_TEAM_AUDITING
+ value: {{ .Values.concourse.web.enableTeamAuditing | quote }}
+ {{- end }}
+ {{- if .Values.concourse.web.enableWorkerAuditing }}
+ - name: CONCOURSE_ENABLE_WORKER_AUDITING
+ value: {{ .Values.concourse.web.enableWorkerAuditing | quote }}
+ {{- end }}
+ {{- if .Values.concourse.web.enableVolumeAuditing }}
+ - name: CONCOURSE_ENABLE_VOLUME_AUDITING
+ value: {{ .Values.concourse.web.enableVolumeAuditing | quote }}
+ {{- end }}
+ {{- if .Values.concourse.web.secretRetryAttempts }}
+ - name: CONCOURSE_SECRET_RETRY_ATTEMPTS
+ value: {{ .Values.concourse.web.secretRetryAttempts | quote }}
+ {{- end }}
+ {{- if .Values.concourse.web.secretRetryInterval }}
+ - name: CONCOURSE_SECRET_RETRY_INTERVAL
+ value: {{ .Values.concourse.web.secretRetryInterval | quote }}
+ {{- end }}
+ {{- if .Values.concourse.web.secretCacheDuration }}
+ - name: CONCOURSE_SECRET_CACHE_DURATION
+ value: {{ .Values.concourse.web.secretCacheDuration | quote }}
+ {{- end }}
+ {{- if .Values.concourse.web.secretCacheEnabled }}
+ - name: CONCOURSE_SECRET_CACHE_ENABLED
+ value: {{ .Values.concourse.web.secretCacheEnabled | quote }}
+ {{- end }}
+ {{- if .Values.concourse.web.secretCachePurgeInterval }}
+ - name: CONCOURSE_SECRET_CACHE_PURGE_INTERVAL
+ value: {{ .Values.concourse.web.secretCachePurgeInterval | quote }}
+ {{- end }}
+ {{- if .Values.concourse.web.awsSecretsManager.region }}
+ - name: CONCOURSE_AWS_SECRETSMANAGER_REGION
+ value: {{ .Values.concourse.web.awsSecretsManager.region | quote }}
+ {{- end }}
+ {{- if .Values.concourse.web.awsSsm.region }}
+ - name: CONCOURSE_AWS_SSM_REGION
+ value: {{ .Values.concourse.web.awsSsm.region | quote }}
+ {{- end }}
+ {{- if .Values.concourse.web.metrics.captureErrorMetrics }}
+ - name: CONCOURSE_CAPTURE_ERROR_METRICS
+ value: {{ .Values.concourse.web.metrics.captureErrorMetrics | quote }}
+ {{- end }}
+ {{- if .Values.concourse.web.gc.missingGracePeriod }}
+ - name: CONCOURSE_GC_MISSING_GRACE_PERIOD
+ value: {{ .Values.concourse.web.gc.missingGracePeriod | quote }}
+ {{- end }}
+ {{- if .Values.concourse.web.auth.mainTeam.config }}
+ - name: CONCOURSE_MAIN_TEAM_CONFIG
+ value: {{ .Values.concourse.web.auth.mainTeam.config | quote }}
+ {{- end }}
+ {{- if .Values.concourse.web.auth.mainTeam.bitbucketCloud.user }}
+ - name: CONCOURSE_MAIN_TEAM_BITBUCKET_CLOUD_USER
+ value: {{ .Values.concourse.web.auth.mainTeam.bitbucketCloud.user | quote }}
+ {{- end }}
+ {{- if .Values.concourse.web.auth.mainTeam.bitbucketCloud.team }}
+ - name: CONCOURSE_MAIN_TEAM_BITBUCKET_CLOUD_TEAM
+ value: {{ .Values.concourse.web.auth.mainTeam.bitbucketCloud.team | quote }}
+ {{- end }}
+ {{- if .Values.concourse.web.auth.bitbucketCloud.enabled }}
+ - name: CONCOURSE_BITBUCKET_CLOUD_CLIENT_ID
+ valueFrom:
+ secretKeyRef:
+ name: {{ template "concourse.web.fullname" . }}
+ key: bitbucket-cloud-client-id
+ - name: CONCOURSE_BITBUCKET_CLOUD_CLIENT_SECRET
+ valueFrom:
+ secretKeyRef:
+ name: {{ template "concourse.web.fullname" . }}
+ key: bitbucket-cloud-client-secret
{{- end }}
- env:
{{- if .Values.concourse.web.logLevel }}
- name: CONCOURSE_LOG_LEVEL
value: {{ .Values.concourse.web.logLevel | quote }}
@@ -63,12 +172,12 @@ spec:
- name: CONCOURSE_ADD_LOCAL_USER
valueFrom:
secretKeyRef:
- name: {{ template "concourse.concourse.fullname" . }}
+ name: {{ template "concourse.web.fullname" . }}
key: local-users
{{- end }}
{{- if .Values.concourse.web.tls.enabled }}
- name: CONCOURSE_TLS_BIND_PORT
- value: {{ .Values.concourse.web.tls.bindPort | default "443" | quote }}
+ value: {{ .Values.concourse.web.tls.bindPort | quote }}
- name: CONCOURSE_TLS_CERT
value: "{{ .Values.web.tlsSecretsPath }}/client.cert"
- name: CONCOURSE_TLS_KEY
@@ -80,31 +189,19 @@ spec:
{{- else }}
{{- if .Values.concourse.web.externalUrl }}
- name: CONCOURSE_EXTERNAL_URL
- value:
value: {{ .Values.concourse.web.externalUrl | quote }}
{{- end }}
{{- end }}
- {{- if .Values.concourse.web.peerUrl }}
- - name: CONCOURSE_PEER_URL
- value: {{ .Values.concourse.web.peerUrl | quote }}
- {{- else }}
- - name: POD_IP
- valueFrom:
- fieldRef:
- fieldPath: status.podIP
- - name: CONCOURSE_PEER_URL
- value: "http://$(POD_IP):$(CONCOURSE_BIND_PORT)"
- {{- end }}
{{- if .Values.concourse.web.encryption.enabled }}
- name: CONCOURSE_ENCRYPTION_KEY
valueFrom:
secretKeyRef:
- name: {{ template "concourse.concourse.fullname" . }}
+ name: {{ template "concourse.web.fullname" . }}
key: encryption-key
- name: CONCOURSE_OLD_ENCRYPTION_KEY
valueFrom:
secretKeyRef:
- name: {{ template "concourse.concourse.fullname" . }}
+ name: {{ template "concourse.web.fullname" . }}
key: old-encryption-key
{{- end }}
{{- if .Values.concourse.web.debugBindIp }}
@@ -159,6 +256,14 @@ spec:
- name: CONCOURSE_MAX_BUILD_LOGS_TO_RETAIN
value: {{ .Values.concourse.web.maxBuildLogsToRetain | quote }}
{{- end }}
+ {{- if .Values.concourse.web.defaultDaysToRetainBuildLogs }}
+ - name: CONCOURSE_DEFAULT_DAYS_TO_RETAIN_BUILD_LOGS
+ value: {{ .Values.concourse.web.defaultDaysToRetainBuildLogs | quote }}
+ {{- end }}
+ {{- if .Values.concourse.web.maxDaysToRetainBuildLogs }}
+ - name: CONCOURSE_MAX_DAYS_TO_RETAIN_BUILD_LOGS
+ value: {{ .Values.concourse.web.maxDaysToRetainBuildLogs | quote }}
+ {{- end }}
{{- if .Values.concourse.web.defaultTaskCpuLimit }}
- name: CONCOURSE_DEFAULT_TASK_CPU_LIMIT
value: {{ .Values.concourse.web.defaultTaskCpuLimit | quote }}
@@ -167,7 +272,6 @@ spec:
- name: CONCOURSE_DEFAULT_TASK_MEMORY_LIMIT
value: {{ .Values.concourse.web.defaultTaskMemoryLimit | quote }}
{{- end }}
-
{{- if .Values.postgresql.enabled }}
- name: CONCOURSE_POSTGRES_HOST
value: {{ template "concourse.postgresql.fullname" . }}
@@ -196,12 +300,12 @@ spec:
- name: CONCOURSE_POSTGRES_USER
valueFrom:
secretKeyRef:
- name: {{ template "concourse.concourse.fullname" . }}
+ name: {{ template "concourse.web.fullname" . }}
key: postgresql-user
- name: CONCOURSE_POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
- name: {{ template "concourse.concourse.fullname" . }}
+ name: {{ template "concourse.web.fullname" . }}
key: postgresql-password
{{- if .Values.concourse.web.postgres.sslmode }}
- name: CONCOURSE_POSTGRES_SSLMODE
@@ -228,7 +332,6 @@ spec:
value: {{ .Values.concourse.web.postgres.database | quote }}
{{- end }}
{{- end }}
-
{{- if .Values.concourse.web.kubernetes.enabled }}
- name: CONCOURSE_KUBERNETES_IN_CLUSTER
value: "true"
@@ -244,25 +347,26 @@ spec:
value: {{ .Values.concourse.web.kubernetes.namespacePrefix | quote }}
{{- end }}
{{- end }}
-
{{- if .Values.concourse.web.awsSecretsManager.enabled }}
+ {{- if .Values.concourse.web.awsSecretsManager.keyAuth.enabled }}
- name: CONCOURSE_AWS_SECRETSMANAGER_ACCESS_KEY
valueFrom:
secretKeyRef:
- name: {{ template "concourse.concourse.fullname" . }}
+ name: {{ template "concourse.web.fullname" . }}
key: aws-secretsmanager-access-key
- name: CONCOURSE_AWS_SECRETSMANAGER_SECRET_KEY
valueFrom:
secretKeyRef:
- name: {{ template "concourse.concourse.fullname" . }}
+ name: {{ template "concourse.web.fullname" . }}
key: aws-secretsmanager-secret-key
- {{- if .Values.secrets.awsSecretsManagerSessionToken }}
+ {{- if .Values.concourse.web.awsSecretsManager.keyAuth.useSessionToken }}
- name: CONCOURSE_AWS_SECRETSMANAGER_SESSION_TOKEN
valueFrom:
secretKeyRef:
- name: {{ template "concourse.concourse.fullname" . }}
+ name: {{ template "concourse.web.fullname" . }}
key: aws-secretsmanager-session-token
{{- end }}
+ {{- end }}
{{- if .Values.concourse.web.awsSecretsManager.pipelineSecretTemplate }}
- name: CONCOURSE_AWS_SECRETSMANAGER_PIPELINE_SECRET_TEMPLATE
value: {{ .Values.concourse.web.awsSecretsManager.pipelineSecretTemplate | quote }}
@@ -272,25 +376,26 @@ spec:
value: {{ .Values.concourse.web.awsSecretsManager.teamSecretTemplate | quote }}
{{- end }}
{{- end }}
-
{{- if .Values.concourse.web.awsSsm.enabled }}
+ {{- if .Values.concourse.web.awsSsm.keyAuth.enabled }}
- name: CONCOURSE_AWS_SSM_ACCESS_KEY
valueFrom:
secretKeyRef:
- name: {{ template "concourse.concourse.fullname" . }}
+ name: {{ template "concourse.web.fullname" . }}
key: aws-ssm-access-key
- name: CONCOURSE_AWS_SSM_SECRET_KEY
valueFrom:
secretKeyRef:
- name: {{ template "concourse.concourse.fullname" . }}
+ name: {{ template "concourse.web.fullname" . }}
key: aws-ssm-secret-key
- {{- if .Values.secrets.awsSsmSessionToken }}
+ {{- if .Values.concourse.web.awsSsm.keyAuth.useSessionToken }}
- name: CONCOURSE_AWS_SSM_SESSION_TOKEN
valueFrom:
secretKeyRef:
- name: {{ template "concourse.concourse.fullname" . }}
+ name: {{ template "concourse.web.fullname" . }}
key: aws-ssm-session-token
{{- end }}
+ {{- end }}
{{- if .Values.concourse.web.awsSsm.pipelineSecretTemplate }}
- name: CONCOURSE_AWS_SSM_PIPELINE_SECRET_TEMPLATE
value: {{ .Values.concourse.web.awsSsm.pipelineSecretTemplate | quote }}
@@ -300,46 +405,45 @@ spec:
value: {{ .Values.concourse.web.awsSsm.teamSecretTemplate | quote }}
{{- end }}
{{- end }}
-
{{- if .Values.concourse.web.vault.enabled }}
- name: CONCOURSE_VAULT_URL
value: {{ .Values.concourse.web.vault.url | quote }}
- name: CONCOURSE_VAULT_PATH_PREFIX
value: {{ .Values.concourse.web.vault.pathPrefix | quote }}
+ {{- if .Values.concourse.web.vault.sharedPath }}
+ - name: CONCOURSE_VAULT_SHARED_PATH
+ value: {{ .Values.concourse.web.vault.sharedPath | quote }}
+ {{- end }}
- name: CONCOURSE_VAULT_AUTH_BACKEND
value: {{ .Values.concourse.web.vault.authBackend | quote }}
{{- if .Values.concourse.web.vault.useCaCert }}
- name: CONCOURSE_VAULT_CA_CERT
value: "{{ .Values.web.vaultSecretsPath }}/ca.cert"
{{- end }}
- {{- if eq (default "" .Values.concourse.web.vault.authBackend) "token" }}
+ {{- if eq .Values.concourse.web.vault.authBackend "token" }}
- name: CONCOURSE_VAULT_CLIENT_TOKEN
valueFrom:
secretKeyRef:
- name: {{ template "concourse.concourse.fullname" . }}
+ name: {{ template "concourse.web.fullname" . }}
key: vault-client-token
{{- end }}
- {{- if eq (default "" .Values.concourse.web.vault.authBackend) "cert" }}
+ {{- if eq .Values.concourse.web.vault.authBackend "cert" }}
- name: CONCOURSE_VAULT_CLIENT_CERT
value: "{{ .Values.web.vaultSecretsPath }}/client.cert"
- name: CONCOURSE_VAULT_CLIENT_KEY
value: "{{ .Values.web.vaultSecretsPath }}/client.key"
{{- end }}
- {{- if eq (default "" .Values.concourse.web.vault.authBackend) "approle" }}
+ {{- if eq .Values.concourse.web.vault.authBackend "approle" }}
- name: CONCOURSE_VAULT_AUTH_PARAM
valueFrom:
secretKeyRef:
- name: {{ template "concourse.concourse.fullname" . }}
+ name: {{ template "concourse.web.fullname" . }}
key: vault-client-auth-param
{{- end }}
{{- if .Values.concourse.web.vault.authBackendMaxTtl }}
- name: CONCOURSE_VAULT_AUTH_BACKEND_MAX_TTL
value: {{ .Values.concourse.web.vault.authBackendMaxTtl | quote }}
{{- end }}
- {{- if .Values.concourse.web.vault.cache }}
- - name: CONCOURSE_VAULT_CACHE
- value: {{ .Values.concourse.web.vault.cache | quote }}
- {{- end }}
{{- if .Values.concourse.web.vault.caPath }}
- name: CONCOURSE_VAULT_CA_PATH
value: {{ .Values.concourse.web.vault.caPath | quote }}
@@ -348,10 +452,6 @@ spec:
- name: CONCOURSE_VAULT_INSECURE_SKIP_VERIFY
value: {{ .Values.concourse.web.vault.insecureSkipVerify | quote }}
{{- end }}
- {{- if .Values.concourse.web.vault.maxLease }}
- - name: CONCOURSE_VAULT_MAX_LEASE
- value: {{ .Values.concourse.web.vault.maxLease | quote }}
- {{- end }}
{{- if .Values.concourse.web.vault.retryInitial }}
- name: CONCOURSE_VAULT_RETRY_INITIAL
value: {{ .Values.concourse.web.vault.retryInitial | quote }}
@@ -365,12 +465,10 @@ spec:
value: {{ .Values.concourse.web.vault.serverName | quote }}
{{- end }}
{{- end }}
-
{{- if .Values.concourse.web.noop }}
- name: CONCOURSE_NOOP
value: {{ .Values.concourse.web.noop | quote }}
{{- end }}
-
{{- if .Values.concourse.web.staticWorker.enabled }}
{{- if .Values.concourse.web.staticWorker.gardenUrl }}
- name: CONCOURSE_WORKER_GARDEN_URL
@@ -385,7 +483,6 @@ spec:
value: {{ .Values.concourse.web.staticWorker.resource | quote }}
{{- end }}
{{- end }}
-
{{- if .Values.concourse.web.metrics.hostName }}
- name: CONCOURSE_METRICS_HOST_NAME
value: {{ .Values.concourse.web.metrics.hostName | quote }}
@@ -394,7 +491,6 @@ spec:
- name: CONCOURSE_METRICS_ATTRIBUTE
value: {{ .Values.concourse.web.metrics.attribute | quote }}
{{- end }}
-
{{- if .Values.concourse.web.datadog.enabled }}
- name: CONCOURSE_DATADOG_AGENT_HOST
{{- if .Values.concourse.web.datadog.agentHostUseHostIP }}
@@ -411,7 +507,6 @@ spec:
value: {{ .Values.concourse.web.datadog.prefix | quote }}
{{- end }}
{{- end }}
-
{{- if .Values.concourse.web.influxdb.enabled }}
- name: CONCOURSE_INFLUXDB_URL
value: {{ .Values.concourse.web.influxdb.url | quote }}
@@ -422,17 +517,15 @@ spec:
- name: CONCOURSE_INFLUXDB_PASSWORD
valueFrom:
secretKeyRef:
- name: {{ template "concourse.concourse.fullname" . }}
+ name: {{ template "concourse.web.fullname" . }}
key: influxdb-password
- name: CONCOURSE_INFLUXDB_INSECURE_SKIP_VERIFY
value: {{ .Values.concourse.web.influxdb.insecureSkipVerify | quote}}
{{- end }}
-
{{- if .Values.concourse.web.emitToLogs }}
- name: CONCOURSE_EMIT_TO_LOGS
value: {{ .Values.concourse.web.emitToLogs | quote }}
{{- end }}
-
{{- if .Values.concourse.web.newrelic.enabled }}
{{- if .Values.concourse.web.newrelic.accountId }}
- name: CONCOURSE_NEWRELIC_ACCOUNT_ID
@@ -447,14 +540,12 @@ spec:
value: {{ .Values.concourse.web.newrelic.servicePrefix | quote }}
{{- end }}
{{- end }}
-
{{- if .Values.concourse.web.prometheus.enabled }}
- name: CONCOURSE_PROMETHEUS_BIND_IP
value: {{ .Values.concourse.web.prometheus.bindIp | quote }}
- name: CONCOURSE_PROMETHEUS_BIND_PORT
value: {{ .Values.concourse.web.prometheus.bindPort | quote }}
{{- end }}
-
{{- if .Values.concourse.web.riemann.enabled }}
{{- if .Values.concourse.web.riemann.host }}
- name: CONCOURSE_RIEMANN_HOST
@@ -473,12 +564,10 @@ spec:
value: {{ .Values.concourse.web.riemann.tag | quote }}
{{- end }}
{{- end }}
-
{{- if .Values.concourse.web.xFrameOptions }}
- name: CONCOURSE_X_FRAME_OPTIONS
value: {{ .Values.concourse.web.xFrameOptions | quote }}
{{- end }}
-
{{- if .Values.concourse.web.gc.overrideDefaults }}
{{- if .Values.concourse.web.gc.interval }}
- name: CONCOURSE_GC_INTERVAL
@@ -489,7 +578,6 @@ spec:
value: {{ .Values.concourse.web.gc.oneOffGracePeriod | quote }}
{{- end }}
{{- end }}
-
{{- if .Values.concourse.web.syslog.enabled }}
{{- if .Values.concourse.web.syslog.hostname }}
- name: CONCOURSE_SYSLOG_HOSTNAME
@@ -512,7 +600,6 @@ spec:
value: "{{ .Values.web.syslogSecretsPath }}/ca.cert"
{{- end }}
{{- end }}
-
{{- if .Values.concourse.web.auth.cookieSecure }}
- name: CONCOURSE_COOKIE_SECURE
value: {{ .Values.concourse.web.auth.cookieSecure | quote }}
@@ -523,16 +610,10 @@ spec:
{{- end }}
- name: CONCOURSE_SESSION_SIGNING_KEY
value: "{{ .Values.web.keySecretsPath }}/session_signing_key"
-
{{- if .Values.concourse.web.auth.mainTeam.localUser }}
- name: CONCOURSE_MAIN_TEAM_LOCAL_USER
value: {{ .Values.concourse.web.auth.mainTeam.localUser | quote }}
{{- end }}
- {{- if .Values.concourse.web.auth.mainTeam.allowAllUsers }}
- - name: CONCOURSE_MAIN_TEAM_ALLOW_ALL_USERS
- value: {{ .Values.concourse.web.auth.mainTeam.allowAllUsers | quote }}
- {{- end }}
-
{{- if .Values.concourse.web.auth.mainTeam.cf.org }}
- name: CONCOURSE_MAIN_TEAM_CF_ORG
value: {{ .Values.concourse.web.auth.mainTeam.cf.org | quote }}
@@ -549,7 +630,6 @@ spec:
- name: CONCOURSE_MAIN_TEAM_CF_USER
value: {{ .Values.concourse.web.auth.mainTeam.cf.user | quote }}
{{- end }}
-
{{- if .Values.concourse.web.auth.mainTeam.github.user }}
- name: CONCOURSE_MAIN_TEAM_GITHUB_USER
value: {{ .Values.concourse.web.auth.mainTeam.github.user | quote }}
@@ -562,7 +642,6 @@ spec:
- name: CONCOURSE_MAIN_TEAM_GITHUB_TEAM
value: {{ .Values.concourse.web.auth.mainTeam.github.team | quote }}
{{- end }}
-
{{- if .Values.concourse.web.auth.mainTeam.gitlab.user }}
- name: CONCOURSE_MAIN_TEAM_GITLAB_USER
value: {{ .Values.concourse.web.auth.mainTeam.gitlab.user | quote }}
@@ -571,7 +650,6 @@ spec:
- name: CONCOURSE_MAIN_TEAM_GITLAB_GROUP
value: {{ .Values.concourse.web.auth.mainTeam.gitlab.group | quote }}
{{- end }}
-
{{- if .Values.concourse.web.auth.mainTeam.ldap.user }}
- name: CONCOURSE_MAIN_TEAM_LDAP_USER
value: {{ .Values.concourse.web.auth.mainTeam.ldap.user | quote }}
@@ -580,7 +658,6 @@ spec:
- name: CONCOURSE_MAIN_TEAM_LDAP_GROUP
value: {{ .Values.concourse.web.auth.mainTeam.ldap.group | quote }}
{{- end }}
-
{{- if .Values.concourse.web.auth.mainTeam.oauth.user }}
- name: CONCOURSE_MAIN_TEAM_OAUTH_USER
value: {{ .Values.concourse.web.auth.mainTeam.oauth.user | quote }}
@@ -589,7 +666,6 @@ spec:
- name: CONCOURSE_MAIN_TEAM_OAUTH_GROUP
value: {{ .Values.concourse.web.auth.mainTeam.oauth.group | quote }}
{{- end }}
-
{{- if .Values.concourse.web.auth.mainTeam.oidc.group }}
- name: CONCOURSE_MAIN_TEAM_OIDC_GROUP
value: {{ .Values.concourse.web.auth.mainTeam.oidc.group | quote }}
@@ -598,17 +674,16 @@ spec:
- name: CONCOURSE_MAIN_TEAM_OIDC_USER
value: {{ .Values.concourse.web.auth.mainTeam.oidc.user | quote }}
{{- end }}
-
{{- if .Values.concourse.web.auth.cf.enabled }}
- name: CONCOURSE_CF_CLIENT_ID
valueFrom:
secretKeyRef:
- name: {{ template "concourse.concourse.fullname" . }}
+ name: {{ template "concourse.web.fullname" . }}
key: cf-client-id
- name: CONCOURSE_CF_CLIENT_SECRET
valueFrom:
secretKeyRef:
- name: {{ template "concourse.concourse.fullname" . }}
+ name: {{ template "concourse.web.fullname" . }}
key: cf-client-secret
{{- if .Values.concourse.web.auth.cf.apiUrl }}
- name: CONCOURSE_CF_API_URL
@@ -623,17 +698,16 @@ spec:
value: {{ .Values.concourse.web.auth.cf.skipSslValidation | quote }}
{{- end }}
{{- end }}
-
{{- if .Values.concourse.web.auth.github.enabled }}
- name: CONCOURSE_GITHUB_CLIENT_ID
valueFrom:
secretKeyRef:
- name: {{ template "concourse.concourse.fullname" . }}
+ name: {{ template "concourse.web.fullname" . }}
key: github-client-id
- name: CONCOURSE_GITHUB_CLIENT_SECRET
valueFrom:
secretKeyRef:
- name: {{ template "concourse.concourse.fullname" . }}
+ name: {{ template "concourse.web.fullname" . }}
key: github-client-secret
{{- if .Values.concourse.web.auth.github.host }}
- name: CONCOURSE_GITHUB_HOST
@@ -644,24 +718,22 @@ spec:
value: "{{ .Values.web.authSecretsPath }}/github_ca.cert"
{{- end }}
{{- end }}
-
{{- if .Values.concourse.web.auth.gitlab.enabled }}
- name: CONCOURSE_GITLAB_CLIENT_ID
valueFrom:
secretKeyRef:
- name: {{ template "concourse.concourse.fullname" . }}
+ name: {{ template "concourse.web.fullname" . }}
key: gitlab-client-id
- name: CONCOURSE_GITLAB_CLIENT_SECRET
valueFrom:
secretKeyRef:
- name: {{ template "concourse.concourse.fullname" . }}
+ name: {{ template "concourse.web.fullname" . }}
key: gitlab-client-secret
{{- if .Values.concourse.web.auth.gitlab.host }}
- name: CONCOURSE_GITLAB_HOST
value: {{ .Values.concourse.web.auth.gitlab.host | quote }}
{{- end }}
{{- end }}
-
{{- if .Values.concourse.web.auth.ldap.enabled }}
{{- if .Values.concourse.web.auth.ldap.bindDn }}
- name: CONCOURSE_LDAP_BIND_DN
@@ -748,7 +820,6 @@ spec:
value: {{ .Values.concourse.web.auth.ldap.userSearchUsername | quote }}
{{- end }}
{{- end }}
-
{{- if .Values.concourse.web.auth.oauth.enabled }}
{{- if .Values.concourse.web.auth.oauth.displayName }}
- name: CONCOURSE_OAUTH_DISPLAY_NAME
@@ -757,12 +828,12 @@ spec:
- name: CONCOURSE_OAUTH_CLIENT_ID
valueFrom:
secretKeyRef:
- name: {{ template "concourse.concourse.fullname" . }}
+ name: {{ template "concourse.web.fullname" . }}
key: oauth-client-id
- name: CONCOURSE_OAUTH_CLIENT_SECRET
valueFrom:
secretKeyRef:
- name: {{ template "concourse.concourse.fullname" . }}
+ name: {{ template "concourse.web.fullname" . }}
key: oauth-client-secret
{{- if .Values.concourse.web.auth.oauth.authUrl }}
- name: CONCOURSE_OAUTH_AUTH_URL
@@ -792,8 +863,15 @@ spec:
- name: CONCOURSE_OAUTH_SKIP_SSL_VALIDATION
value: {{ .Values.concourse.web.auth.oauth.skipSslValidation | quote }}
{{- end }}
+ {{- if .Values.concourse.web.auth.oauth.userNameKey }}
+ - name: CONCOURSE_OAUTH_USER_NAME_KEY
+ value: {{ .Values.concourse.web.auth.oauth.userNameKey | quote }}
+ {{- end }}
+ {{- if .Values.concourse.web.auth.oauth.userIdKey }}
+ - name: CONCOURSE_OAUTH_USER_ID_KEY
+ value: {{ .Values.concourse.web.auth.oauth.userIdKey | quote }}
+ {{- end }}
{{- end }}
-
{{- if .Values.concourse.web.auth.oidc.enabled }}
{{- if .Values.concourse.web.auth.oidc.displayName }}
- name: CONCOURSE_OIDC_DISPLAY_NAME
@@ -806,12 +884,12 @@ spec:
- name: CONCOURSE_OIDC_CLIENT_ID
valueFrom:
secretKeyRef:
- name: {{ template "concourse.concourse.fullname" . }}
+ name: {{ template "concourse.web.fullname" . }}
key: oidc-client-id
- name: CONCOURSE_OIDC_CLIENT_SECRET
valueFrom:
secretKeyRef:
- name: {{ template "concourse.concourse.fullname" . }}
+ name: {{ template "concourse.web.fullname" . }}
key: oidc-client-secret
{{- if .Values.concourse.web.auth.oidc.scope }}
- name: CONCOURSE_OIDC_SCOPE
@@ -833,8 +911,22 @@ spec:
- name: CONCOURSE_OIDC_SKIP_SSL_VALIDATION
value: {{ .Values.concourse.web.auth.oidc.skipSslValidation | quote }}
{{- end }}
+ {{- if .Values.concourse.web.auth.oidc.userNameKey }}
+ - name: CONCOURSE_OIDC_USER_NAME_KEY
+ value: {{ .Values.concourse.web.auth.oidc.userNameKey | quote }}
+ {{- end }}
+ {{- end }}
+ {{- if .Values.concourse.web.peerAddress }}
+ - name: CONCOURSE_PEER_ADDRESS
+ value: {{ .Values.concourse.web.peerAddress | quote }}
+ {{- else }}
+ - name: POD_IP
+ valueFrom:
+ fieldRef:
+ fieldPath: status.podIP
+ - name: CONCOURSE_PEER_ADDRESS
+ value: "$(POD_IP)"
{{- end }}
-
{{- if .Values.concourse.web.tsa.logLevel }}
- name: CONCOURSE_TSA_LOG_LEVEL
value: {{ .Values.concourse.web.tsa.logLevel | quote }}
@@ -845,21 +937,21 @@ spec:
{{- end }}
- name: CONCOURSE_TSA_BIND_PORT
value: {{ .Values.concourse.web.tsa.bindPort | quote }}
- {{- if .Values.concourse.web.tsa.bindDebugPort }}
- - name: CONCOURSE_TSA_BIND_DEBUG_PORT
- value: {{ .Values.concourse.web.tsa.bindDebugPort | quote }}
+ {{- if .Values.concourse.web.tsa.debugBindIp }}
+ - name: CONCOURSE_TSA_DEBUG_BIND_IP
+ value: {{ .Values.concourse.web.tsa.debugBindIp | quote }}
{{- end }}
- {{- if .Values.concourse.web.tsa.peerIp }}
- - name: CONCOURSE_TSA_PEER_IP
- value: {{ .Values.concourse.web.tsa.peerIp | quote }}
+ {{- if .Values.concourse.web.tsa.debugBindPort }}
+ - name: CONCOURSE_TSA_DEBUG_BIND_PORT
+ value: {{ .Values.concourse.web.tsa.debugBindPort | quote }}
{{- end }}
- name: CONCOURSE_TSA_HOST_KEY
value: "{{ .Values.web.keySecretsPath }}/host_key"
- name: CONCOURSE_TSA_AUTHORIZED_KEYS
value: "{{ .Values.web.keySecretsPath }}/worker_key.pub"
- {{- if .Values.concourse.web.tsa.teamAuthorizedKeys }}
+ {{- if .Values.secrets.teamAuthorizedKeys }}
- name: CONCOURSE_TSA_TEAM_AUTHORIZED_KEYS
- value: {{ .Values.concourse.web.tsa.teamAuthorizedKeys | quote }}
+ value: "{{- $root := . -}}{{- range $i, $v := .Values.secrets.teamAuthorizedKeys }}{{- if $i}},{{- end}}{{ $v.team }}:{{ $root.Values.web.teamSecretsPath }}/{{ $v.team }}-authorized-key.pub{{- end }}"
{{- end }}
{{- if .Values.concourse.web.tsa.atcUrl }}
- name: CONCOURSE_TSA_ATC_URL
@@ -897,16 +989,30 @@ spec:
- name: prometheus
containerPort: {{ .Values.concourse.web.prometheus.bindPort }}
{{- end }}
+{{- if .Values.web.livenessProbe }}
livenessProbe:
{{ toYaml .Values.web.livenessProbe | indent 12 }}
+{{- end }}
+{{- if .Values.web.readinessProbe }}
readinessProbe:
{{ toYaml .Values.web.readinessProbe | indent 12 }}
+{{- end }}
+{{- if .Values.web.resources }}
resources:
{{ toYaml .Values.web.resources | indent 12 }}
+{{- end }}
volumeMounts:
+{{- if .Values.web.additionalVolumeMounts }}
+{{ toYaml .Values.web.additionalVolumeMounts | indent 12 }}
+{{- end }}
- name: concourse-keys
mountPath: {{ .Values.web.keySecretsPath | quote }}
readOnly: true
+ {{- if .Values.secrets.teamAuthorizedKeys }}
+ - name: team-authorized-keys
+ mountPath: {{ .Values.web.teamSecretsPath | quote }}
+ readOnly: true
+ {{- end }}
{{- if .Values.concourse.web.tls.enabled }}
- name: web-tls
mountPath: {{ .Values.web.tlsSecretsPath | quote }}
@@ -917,7 +1023,7 @@ spec:
mountPath: {{ .Values.web.vaultSecretsPath | quote }}
readOnly: true
{{- end }}
- {{- if not (eq (default "disable" .Values.concourse.web.postgres.sslmode) "disable") }}
+ {{- if not (eq .Values.concourse.web.postgres.sslmode "disable") }}
- name: postgresql-keys
mountPath: {{ .Values.web.postgresqlSecretsPath | quote }}
readOnly: true
@@ -930,11 +1036,8 @@ spec:
- name: auth-keys
mountPath: {{ .Values.web.authSecretsPath | quote }}
readOnly: true
-{{- if .Values.web.additionalVolumeMounts }}
-{{ toYaml .Values.web.additionalVolumeMounts | indent 12 }}
-{{- end }}
- affinity:
{{- if .Values.web.additionalAffinities }}
+ affinity:
{{ toYaml .Values.web.additionalAffinities | indent 8 }}
{{- end }}
volumes:
@@ -943,7 +1046,7 @@ spec:
{{- end }}
- name: concourse-keys
secret:
- secretName: {{ template "concourse.concourse.fullname" . }}
+ secretName: {{ template "concourse.web.fullname" . }}
defaultMode: 0400
items:
- key: host-key
@@ -952,10 +1055,21 @@ spec:
path: session_signing_key
- key: worker-key-pub
path: worker_key.pub
+ {{- if .Values.secrets.teamAuthorizedKeys }}
+ - name: team-authorized-keys
+ secret:
+ secretName: {{ template "concourse.web.fullname" . }}
+ defaultMode: 0400
+ items:
+ {{- range .Values.secrets.teamAuthorizedKeys }}
+ - key: {{ .team }}-team-authorized-key
+ path: {{ .team }}-authorized-key.pub
+ {{- end }}
+ {{- end }}
{{- if .Values.concourse.web.tls.enabled }}
- name: web-tls
secret:
- secretName: {{ template "concourse.concourse.fullname" . }}
+ secretName: {{ template "concourse.web.fullname" . }}
defaultMode: 0400
items:
- key: web-tls-cert
@@ -966,24 +1080,24 @@ spec:
{{- if .Values.concourse.web.vault.enabled }}
- name: vault-keys
secret:
- secretName: {{ template "concourse.concourse.fullname" . }}
+ secretName: {{ template "concourse.web.fullname" . }}
defaultMode: 0400
items:
{{- if .Values.concourse.web.vault.useCaCert }}
- key: vault-ca-cert
path: ca.cert
{{- end }}
- {{- if eq (default "" .Values.concourse.web.vault.authBackend) "cert" }}
+ {{- if (eq .Values.concourse.web.vault.authBackend "cert") }}
- key: vault-client-cert
path: client.cert
- key: vault-client-key
path: client.key
{{- end }}
{{- end }}
- {{- if not (eq (default "disable" .Values.concourse.web.postgres.sslmode) "disable") }}
+ {{- if not (eq .Values.concourse.web.postgres.sslmode "disable") }}
- name: postgresql-keys
secret:
- secretName: {{ template "concourse.concourse.fullname" . }}
+ secretName: {{ template "concourse.web.fullname" . }}
defaultMode: 0400
items:
- key: postgresql-ca-cert
@@ -996,7 +1110,7 @@ spec:
{{- if .Values.concourse.web.syslog.enabled }}
- name: syslog-keys
secret:
- secretName: {{ template "concourse.concourse.fullname" . }}
+ secretName: {{ template "concourse.web.fullname" . }}
defaultMode: 0400
items:
- key: syslog-ca-cert
@@ -1004,7 +1118,7 @@ spec:
{{- end }}
- name: auth-keys
secret:
- secretName: {{ template "concourse.concourse.fullname" . }}
+ secretName: {{ template "concourse.web.fullname" . }}
defaultMode: 0400
items:
{{- if .Values.concourse.web.auth.cf.useCaCert }}
@@ -1027,3 +1141,4 @@ spec:
- key: oidc-ca-cert
path: oidc_ca.cert
{{- end }}
+{{- end }}
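The new `CONCOURSE_TSA_TEAM_AUTHORIZED_KEYS` value in the web deployment above joins each `secrets.teamAuthorizedKeys` entry into `<team>:<path>/<team>-authorized-key.pub` pairs with a comma between entries and no trailing separator. A sketch of what that `range`/join renders, in Python rather than Go templates (the function name and dict shape are illustrative, not part of the chart):

```python
def team_authorized_keys_env(teams, team_secrets_path):
    """Mimic the Helm range that builds CONCOURSE_TSA_TEAM_AUTHORIZED_KEYS:
    one <team>:<path>/<team>-authorized-key.pub entry per team,
    comma-separated, with no leading or trailing comma."""
    return ",".join(
        f"{t['team']}:{team_secrets_path}/{t['team']}-authorized-key.pub"
        for t in teams
    )

# Two teams render to a single comma-joined env value:
print(team_authorized_keys_env(
    [{"team": "main"}, {"team": "ops"}], "/concourse-keys/team"))
```

Each mounted `<team>-authorized-key.pub` path here matches the `items` mapping added to the `team-authorized-keys` volume, so the env var and the volume stay in sync as long as both iterate the same values list.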
diff --git a/stable/concourse/templates/web-ingress.yaml b/stable/concourse/templates/web-ingress.yaml
index d77afaf51a87..4a08091c6d9c 100644
--- a/stable/concourse/templates/web-ingress.yaml
+++ b/stable/concourse/templates/web-ingress.yaml
@@ -1,3 +1,4 @@
+{{- if .Values.web.enabled -}}
{{- if .Values.web.ingress.enabled -}}
{{- $releaseName := .Release.Name -}}
{{- $serviceName := default "web" .Values.web.nameOverride -}}
@@ -30,3 +31,4 @@ spec:
{{ toYaml .Values.web.ingress.tls | indent 4 }}
{{- end -}}
{{- end -}}
+{{- end -}}
diff --git a/stable/concourse/templates/web-role.yaml b/stable/concourse/templates/web-role.yaml
index b518d1d9faf2..d00b8d6539b8 100644
--- a/stable/concourse/templates/web-role.yaml
+++ b/stable/concourse/templates/web-role.yaml
@@ -1,3 +1,4 @@
+{{- if .Values.web.enabled -}}
{{- if .Values.rbac.create -}}
{{- if .Values.concourse.web.kubernetes.enabled -}}
apiVersion: rbac.authorization.k8s.io/{{ .Values.rbac.apiVersion }}
@@ -15,3 +16,4 @@ rules:
verbs: ["get"]
{{- end -}}
{{- end -}}
+{{- end -}}
diff --git a/stable/concourse/templates/web-rolebinding.yaml b/stable/concourse/templates/web-rolebinding.yaml
index c178519e7cf4..c01524880ef8 100644
--- a/stable/concourse/templates/web-rolebinding.yaml
+++ b/stable/concourse/templates/web-rolebinding.yaml
@@ -1,3 +1,4 @@
+{{- if .Values.web.enabled -}}
{{- if .Values.rbac.create -}}
{{- if .Values.concourse.web.kubernetes.enabled -}}
{{- range .Values.concourse.web.kubernetes.teams }}
@@ -23,3 +24,4 @@ subjects:
{{- end }}
{{- end -}}
{{- end -}}
+{{- end -}}
diff --git a/stable/concourse/templates/secrets.yaml b/stable/concourse/templates/web-secrets.yaml
similarity index 89%
rename from stable/concourse/templates/secrets.yaml
rename to stable/concourse/templates/web-secrets.yaml
index 0aa5cf274bda..580ca05375f2 100644
--- a/stable/concourse/templates/secrets.yaml
+++ b/stable/concourse/templates/web-secrets.yaml
@@ -1,20 +1,20 @@
+{{- if .Values.web.enabled }}
{{- if .Values.secrets.create }}
apiVersion: v1
kind: Secret
metadata:
- name: {{ template "concourse.concourse.fullname" . }}
+ name: {{ template "concourse.web.fullname" . }}
labels:
- app: {{ template "concourse.concourse.fullname" . }}
+ app: {{ template "concourse.web.fullname" . }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
type: Opaque
data:
host-key: {{ .Values.secrets.hostKey | b64enc | quote }}
- host-key-pub: {{ .Values.secrets.hostKeyPub | b64enc | quote }}
session-signing-key: {{ .Values.secrets.sessionSigningKey | b64enc | quote }}
- worker-key: {{ .Values.secrets.workerKey | b64enc | quote }}
worker-key-pub: {{ .Values.secrets.workerKeyPub | b64enc | quote }}
+
{{- if not .Values.postgresql.enabled }}
postgresql-user: {{ template "concourse.secret.required" dict "key" "postgresUser" "isnt" "postgresql.enabled" "root" . }}
postgresql-password: {{ template "concourse.secret.required" dict "key" "postgresPassword" "isnt" "postgresql.enabled" "root" . }}
@@ -34,6 +34,10 @@ data:
cf-client-secret: {{ template "concourse.secret.required" dict "key" "cfClientSecret" "is" "concourse.web.auth.cf.enabled" "root" . }}
cf-ca-cert: {{ default "" .Values.secrets.cfCaCert | b64enc | quote }}
{{- end }}
+ {{- if .Values.concourse.web.auth.bitbucketCloud.enabled }}
+ bitbucket-cloud-client-id: {{ template "concourse.secret.required" dict "key" "bitbucketCloudClientId" "is" "concourse.web.auth.bitbucketCloud.enabled" "root" . }}
+ bitbucket-cloud-client-secret: {{ template "concourse.secret.required" dict "key" "bitbucketCloudClientSecret" "is" "concourse.web.auth.bitbucketCloud.enabled" "root" . }}
+ {{- end }}
{{- if .Values.concourse.web.auth.github.enabled }}
github-client-id: {{ template "concourse.secret.required" dict "key" "githubClientId" "is" "concourse.web.auth.github.enabled" "root" . }}
github-client-secret: {{ template "concourse.secret.required" dict "key" "githubClientSecret" "is" "concourse.web.auth.github.enabled" "root" . }}
@@ -87,4 +91,8 @@ data:
{{- if .Values.concourse.web.syslog.enabled }}
syslog-ca-cert: {{ default "" .Values.secrets.syslogCaCert | b64enc | quote }}
{{- end }}
+ {{- range .Values.secrets.teamAuthorizedKeys}}
+ {{ .team }}-team-authorized-key: {{ .key | b64enc | quote }}
+ {{- end}}
+{{- end }}
{{- end }}
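Both the chart's `concourse.secret.required` helper used in web-secrets.yaml and the builtin `required` used for the worker's `CONCOURSE_TSA_HOST` follow the same contract: pass a present value through, otherwise abort template rendering with a message telling the operator which value to set. A minimal sketch of that contract (this Python function is illustrative, not the chart helper):

```python
def required(message, value):
    """Contract of Helm's `required` function: return a non-empty value
    unchanged; fail rendering with `message` when the value is missing."""
    if value is None or value == "":
        raise ValueError(message)
    return value

# A missing value fails fast at render time instead of producing a broken Secret:
required("secrets.postgresUser is required when postgresql.enabled is false", "concourse")
```

Failing at render time is the point: an empty `postgresql-user` key would otherwise only surface as a runtime connection error inside the web pod.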
diff --git a/stable/concourse/templates/web-serviceaccount.yaml b/stable/concourse/templates/web-serviceaccount.yaml
index 5f1c2f88884d..570349f0239b 100644
--- a/stable/concourse/templates/web-serviceaccount.yaml
+++ b/stable/concourse/templates/web-serviceaccount.yaml
@@ -1,3 +1,4 @@
+{{- if .Values.web.enabled -}}
{{- if .Values.rbac.create -}}
apiVersion: v1
kind: ServiceAccount
@@ -9,3 +10,4 @@ metadata:
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
{{- end -}}
+{{- end }}
diff --git a/stable/concourse/templates/web-svc.yaml b/stable/concourse/templates/web-svc.yaml
index e39a4a691f16..02aed3165e08 100644
--- a/stable/concourse/templates/web-svc.yaml
+++ b/stable/concourse/templates/web-svc.yaml
@@ -1,3 +1,4 @@
+{{- if .Values.web.enabled -}}
apiVersion: v1
kind: Service
metadata:
@@ -10,6 +11,7 @@ metadata:
{{- range $key, $value := .Values.web.service.labels }}
{{ $key }}: {{ $value | quote }}
{{- end }}
+ {{- if or .Values.web.service.annotations .Values.concourse.web.prometheus.enabled }}
annotations:
{{- range $key, $value := .Values.web.service.annotations }}
{{ $key }}: {{ $value | quote }}
@@ -18,6 +20,7 @@ metadata:
prometheus.io/scrape: "true"
prometheus.io/port: {{ .Values.concourse.web.prometheus.bindPort | quote }}
{{- end }}
+ {{- end }}
spec:
type: {{ .Values.web.service.type }}
{{ if .Values.web.service.loadBalancerSourceRanges }}
@@ -57,3 +60,4 @@ spec:
{{- end }}
selector:
app: {{ template "concourse.web.fullname" . }}
+{{- end }}
diff --git a/stable/concourse/templates/worker-policy.yaml b/stable/concourse/templates/worker-policy.yaml
index fad115b955a4..e09db8b19a2c 100644
--- a/stable/concourse/templates/worker-policy.yaml
+++ b/stable/concourse/templates/worker-policy.yaml
@@ -1,3 +1,4 @@
+{{- if .Values.worker.enabled -}}
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
@@ -12,3 +13,4 @@ spec:
selector:
matchLabels:
app: {{ template "concourse.worker.fullname" . }}
+{{- end }}
diff --git a/stable/concourse/templates/worker-prestop-configmap.yaml b/stable/concourse/templates/worker-prestop-configmap.yaml
new file mode 100644
index 000000000000..9d5dd3134b4e
--- /dev/null
+++ b/stable/concourse/templates/worker-prestop-configmap.yaml
@@ -0,0 +1,15 @@
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: {{ template "concourse.worker.fullname" . }}
+ labels:
+ app: {{ template "concourse.worker.fullname" . }}
+ chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ release: "{{ .Release.Name }}"
+ heritage: "{{ .Release.Service }}"
+data:
+ pre-stop-hook.sh: |
+ #!/bin/bash
+ kill -s {{ .Values.concourse.worker.shutdownSignal }} 1
+ while [ -e /proc/1 ]; do sleep 1; done
+
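The new pre-stop hook replaces the inline `retire-worker` loop with a simpler signal-then-wait pattern: forward the configured shutdown signal to PID 1, then block until that process has actually exited, so Kubernetes does not tear the pod down mid-drain. A rough Python analogue of the hook's loop (`graceful_stop` and its parameters are illustrative, not part of the chart):

```python
import os
import signal
import time

def graceful_stop(pid, sig=signal.SIGTERM, poll=0.1):
    """Signal-then-wait pattern from pre-stop-hook.sh: deliver the shutdown
    signal, then probe the process (signal 0 plays the role of the script's
    `[ -e /proc/1 ]` check) until it is gone."""
    os.kill(pid, sig)
    while True:
        try:
            os.kill(pid, 0)  # raises ProcessLookupError once pid has exited
        except ProcessLookupError:
            return
        time.sleep(poll)
```

The wait loop matters because preStop only delays SIGTERM delivery to the container; returning before the worker has drained would let the kubelet proceed to kill it immediately.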
diff --git a/stable/concourse/templates/worker-role.yaml b/stable/concourse/templates/worker-role.yaml
index 01ecf9fd930c..11ba4bd5266d 100644
--- a/stable/concourse/templates/worker-role.yaml
+++ b/stable/concourse/templates/worker-role.yaml
@@ -1,3 +1,4 @@
+{{- if .Values.worker.enabled -}}
{{- if .Values.rbac.create -}}
apiVersion: rbac.authorization.k8s.io/{{ .Values.rbac.apiVersion }}
kind: Role
@@ -18,3 +19,4 @@ rules:
verbs:
- use
{{- end -}}
+{{- end }}
diff --git a/stable/concourse/templates/worker-rolebinding.yaml b/stable/concourse/templates/worker-rolebinding.yaml
index b412b68e2d12..6bdc2af69710 100644
--- a/stable/concourse/templates/worker-rolebinding.yaml
+++ b/stable/concourse/templates/worker-rolebinding.yaml
@@ -1,3 +1,4 @@
+{{- if .Values.worker.enabled -}}
{{- if .Values.rbac.create -}}
apiVersion: rbac.authorization.k8s.io/{{ .Values.rbac.apiVersion }}
kind: RoleBinding
@@ -16,3 +17,4 @@ subjects:
- kind: ServiceAccount
name: {{ template "concourse.worker.fullname" . }}
{{- end -}}
+{{- end }}
diff --git a/stable/concourse/templates/worker-secrets.yaml b/stable/concourse/templates/worker-secrets.yaml
new file mode 100644
index 000000000000..741bec247778
--- /dev/null
+++ b/stable/concourse/templates/worker-secrets.yaml
@@ -0,0 +1,18 @@
+{{- if .Values.worker.enabled }}
+{{- if .Values.secrets.create }}
+apiVersion: v1
+kind: Secret
+metadata:
+ name: {{ template "concourse.worker.fullname" . }}
+ labels:
+ app: {{ template "concourse.worker.fullname" . }}
+ chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ release: "{{ .Release.Name }}"
+ heritage: "{{ .Release.Service }}"
+type: Opaque
+data:
+ host-key-pub: {{ .Values.secrets.hostKeyPub | b64enc | quote }}
+ worker-key: {{ .Values.secrets.workerKey | b64enc | quote }}
+ worker-key-pub: {{ .Values.secrets.workerKeyPub | b64enc | quote }}
+{{- end }}
+{{- end }}
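The new worker-secrets.yaml stores each key with `b64enc | quote`, which matches the base64 encoding the Kubernetes API expects for every entry under a Secret's `data` field. As a sketch of what a single entry renders to (the function name is illustrative):

```python
import base64

def b64enc_quote(value):
    """Approximate Helm's `{{ ... | b64enc | quote }}` pipeline:
    base64-encode the string, then wrap the result in double quotes
    so it is a valid YAML scalar in the rendered manifest."""
    return '"' + base64.b64encode(value.encode()).decode() + '"'

# A value of "secret" renders as "c2VjcmV0" in the Secret's data block.
print(b64enc_quote("secret"))
```

Splitting `host-key-pub`, `worker-key`, and `worker-key-pub` into this separate Secret is what lets a worker-only release (`web.enabled: false`) render without the web Secret existing.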
diff --git a/stable/concourse/templates/worker-serviceaccount.yaml b/stable/concourse/templates/worker-serviceaccount.yaml
index 486c77966d60..795969ab362c 100644
--- a/stable/concourse/templates/worker-serviceaccount.yaml
+++ b/stable/concourse/templates/worker-serviceaccount.yaml
@@ -1,3 +1,4 @@
+{{- if .Values.worker.enabled -}}
{{- if .Values.rbac.create -}}
apiVersion: v1
kind: ServiceAccount
@@ -9,3 +10,4 @@ metadata:
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
{{- end -}}
+{{- end }}
diff --git a/stable/concourse/templates/worker-statefulset.yaml b/stable/concourse/templates/worker-statefulset.yaml
index 25e839f5cf64..7af3841861d1 100644
--- a/stable/concourse/templates/worker-statefulset.yaml
+++ b/stable/concourse/templates/worker-statefulset.yaml
@@ -1,3 +1,4 @@
+{{- if .Values.worker.enabled -}}
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
@@ -7,7 +8,6 @@ metadata:
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
-
spec:
serviceName: {{ template "concourse.worker.fullname" . }}
replicas: {{ .Values.worker.replicas }}
@@ -16,27 +16,32 @@ spec:
labels:
app: {{ template "concourse.worker.fullname" . }}
release: "{{ .Release.Name }}"
+ {{- if .Values.worker.annotations }}
annotations:
- {{- range $key, $value := .Values.worker.annotations }}
- {{ $key }}: {{ $value | quote }}
- {{- end }}
+{{ toYaml .Values.worker.annotations | indent 8 }}
+ {{- end }}
spec:
- {{- with .Values.worker.nodeSelector }}
+ {{- if .Values.worker.nodeSelector }}
nodeSelector:
-{{ toYaml . | indent 8 }}
+{{ toYaml .Values.worker.nodeSelector | indent 8 }}
{{- end }}
serviceAccountName: {{ if .Values.rbac.create }}{{ template "concourse.worker.fullname" . }}{{ else }}{{ .Values.rbac.workerServiceAccountName }}{{ end }}
+ {{- if .Values.worker.tolerations }}
tolerations:
{{ toYaml .Values.worker.tolerations | indent 8 }}
+ {{- end }}
{{- if .Values.imagePullSecrets }}
imagePullSecrets:
{{- range .Values.imagePullSecrets }}
- name: {{ . }}
{{- end }}
{{- end }}
+ {{- if .Values.worker.terminationGracePeriodSeconds }}
terminationGracePeriodSeconds: {{ .Values.worker.terminationGracePeriodSeconds }}
- containers:
- - name: {{ template "concourse.worker.fullname" . }}
+ {{- end }}
+ {{- if .Values.worker.cleanUpWorkDirOnStart }}
+ initContainers:
+ - name: {{ template "concourse.worker.fullname" . }}-init-rm
{{- if .Values.imageDigest }}
image: "{{ .Values.image }}@{{ .Values.imageDigest }}"
{{- else }}
@@ -46,47 +51,69 @@ spec:
command:
- /bin/sh
args:
- - -c
+ - -ce
- |-
- cp /dev/null /tmp/.liveness_probe
- rm -rf ${CONCOURSE_WORK_DIR:-/concourse-work-dir}/*
- while ! concourse retire-worker --name=${HOSTNAME} | grep -q worker-not-found; do
- touch /tmp/.pre_start_cleanup
- sleep 5
- done
- rm -f /tmp/.pre_start_cleanup
- concourse worker --name=${HOSTNAME} | tee -a /tmp/.liveness_probe
+ rm -rf {{ .Values.concourse.worker.workDir }}/*
+ volumeMounts:
+ - name: concourse-work-dir
+ mountPath: {{ .Values.concourse.worker.workDir | quote }}
+ {{- end }}
+ containers:
+ {{- if .Values.worker.sidecarContainers }}
+ {{- toYaml .Values.worker.sidecarContainers | nindent 8 }}
+ {{- end }}
+ - name: {{ template "concourse.worker.fullname" . }}
+ {{- if .Values.imageDigest }}
+ image: "{{ .Values.image }}@{{ .Values.imageDigest }}"
+ {{- else }}
+ image: "{{ .Values.image }}:{{ .Values.imageTag }}"
+ {{- end }}
+ imagePullPolicy: {{ .Values.imagePullPolicy | quote }}
+ args:
+ - worker
+{{- if .Values.worker.livenessProbe }}
livenessProbe:
- exec:
- command:
- - /bin/sh
- - -c
- - |-
- FATAL_ERRORS=$( echo "${LIVENESS_PROBE_FATAL_ERRORS}" | grep -q '\S' && \
- grep -F "${LIVENESS_PROBE_FATAL_ERRORS}" /tmp/.liveness_probe )
- cp /dev/null /tmp/.liveness_probe
- if [ ! -z "${FATAL_ERRORS}" ]; then
- >&2 echo "Fatal error detected: ${FATAL_ERRORS}"
- exit 1
- fi
- if [ -f /tmp/.pre_start_cleanup ]; then
- >&2 echo "Still trying to clean up before starting concourse. 'fly prune-worker -w ${HOSTNAME}' might need to be called to force cleanup."
- exit 1
- fi
- failureThreshold: 1
- initialDelaySeconds: 10
- periodSeconds: 10
+{{ toYaml .Values.worker.livenessProbe | indent 12 }}
+{{- end }}
+{{- if .Values.worker.readinessProbe }}
+ readinessProbe:
+{{ toYaml .Values.worker.readinessProbe | indent 12 }}
+{{- end }}
lifecycle:
preStop:
exec:
command:
- - /bin/sh
- - -c
- - |-
- while ! concourse retire-worker --name=${HOSTNAME} | grep -q worker-not-found; do
- sleep 5
- done
+ - "/bin/bash"
+ - "/pre-stop-hook.sh"
env:
+ {{- if .Values.concourse.worker.rebalanceInterval }}
+ - name: CONCOURSE_REBALANCE_INTERVAL
+ value: {{ .Values.concourse.worker.rebalanceInterval | quote }}
+ {{- end }}
+ {{- if .Values.concourse.worker.sweepInterval }}
+ - name: CONCOURSE_SWEEP_INTERVAL
+ value: {{ .Values.concourse.worker.sweepInterval | quote }}
+ {{- end }}
+ {{- if .Values.concourse.worker.resourceTypes }}
+ - name: CONCOURSE_RESOURCE_TYPES
+ value: {{ .Values.concourse.worker.resourceTypes | quote }}
+ {{- end }}
+ {{- if .Values.concourse.worker.connectionDrainTimeout }}
+ - name: CONCOURSE_CONNECTION_DRAIN_TIMEOUT
+ value: {{ .Values.concourse.worker.connectionDrainTimeout | quote }}
+ {{- end }}
+ {{- if .Values.concourse.worker.healthcheckBindIp }}
+ - name: CONCOURSE_HEALTHCHECK_BIND_IP
+ value: {{ .Values.concourse.worker.healthcheckBindIp | quote }}
+ {{- end }}
+ {{- if .Values.concourse.worker.healthcheckBindPort }}
+ - name: CONCOURSE_HEALTHCHECK_BIND_PORT
+ value: {{ .Values.concourse.worker.healthcheckBindPort | quote }}
+ {{- end }}
+ {{- if .Values.concourse.worker.healthcheckTimeout }}
+ - name: CONCOURSE_HEALTHCHECK_TIMEOUT
+ value: {{ .Values.concourse.worker.healthcheckTimeout | quote }}
+ {{- end }}
{{- if .Values.concourse.worker.name }}
- name: CONCOURSE_NAME
value: {{ .Values.concourse.worker.name | quote }}
@@ -115,9 +142,13 @@ spec:
- name: CONCOURSE_EPHEMERAL
value: {{ .Values.concourse.worker.ephemeral | quote }}
{{- end }}
- {{- if .Values.concourse.worker.bindDebugPort }}
- - name: CONCOURSE_BIND_DEBUG_PORT
- value: {{ .Values.concourse.worker.bindDebugPort | quote }}
+ {{- if .Values.concourse.worker.debugBindIp }}
+ - name: CONCOURSE_DEBUG_BIND_IP
+ value: {{ .Values.concourse.worker.debugBindIp | quote }}
+ {{- end }}
+ {{- if .Values.concourse.worker.debugBindPort }}
+ - name: CONCOURSE_DEBUG_BIND_PORT
+ value: {{ .Values.concourse.worker.debugBindPort | quote }}
{{- end }}
{{- if .Values.concourse.worker.certsDir }}
- name: CONCOURSE_CERTS_DIR
@@ -135,275 +166,41 @@ spec:
- name: CONCOURSE_BIND_PORT
value: {{ .Values.concourse.worker.bindPort | quote }}
{{- end }}
- {{- if .Values.concourse.worker.peerIp }}
- - name: CONCOURSE_PEER_IP
- value: {{ .Values.concourse.worker.peerIp | quote }}
- {{- end }}
{{- if .Values.concourse.worker.logLevel }}
- name: CONCOURSE_LOG_LEVEL
value: {{ .Values.concourse.worker.logLevel | quote }}
{{- end }}
-
+ {{ if and .Values.worker.enabled (not .Values.web.enabled) }}
- name: CONCOURSE_TSA_HOST
- value: "{{ template "concourse.web.fullname" . }}:{{ .Values.concourse.web.tsa.bindPort}}"
+ value: "{{ required "concourse.worker.tsa.host must be set in case of worker only deployment" .Values.concourse.worker.tsa.host }}:{{ .Values.concourse.worker.tsa.port}}"
+ {{ else }}
+ - name: CONCOURSE_TSA_HOST
+ value: "{{ template "concourse.web.fullname" . }}:{{ .Values.concourse.worker.tsa.port}}"
+ {{ end }}
- name: CONCOURSE_TSA_PUBLIC_KEY
value: "{{ .Values.worker.keySecretsPath }}/host_key.pub"
- name: CONCOURSE_TSA_WORKER_PRIVATE_KEY
value: "{{ .Values.worker.keySecretsPath }}/worker_key"
-
- {{- if .Values.concourse.worker.garden.logLevel }}
- - name: CONCOURSE_GARDEN_LOG_LEVEL
- value: {{ .Values.concourse.worker.garden.logLevel | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.timeFormat }}
- - name: CONCOURSE_GARDEN_TIME_FORMAT
- value: {{ .Values.concourse.worker.garden.timeFormat | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.bindIp }}
- - name: CONCOURSE_GARDEN_BIND_IP
- value: {{ .Values.concourse.worker.garden.bindIp | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.bindPort }}
- - name: CONCOURSE_GARDEN_BIND_PORT
- value: {{ .Values.concourse.worker.garden.bindPort | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.bindSocket }}
- - name: CONCOURSE_GARDEN_BIND_SOCKET
- value: {{ .Values.concourse.worker.garden.bindSocket | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.debugBindIp }}
- - name: CONCOURSE_GARDEN_DEBUG_BIND_IP
- value: {{ .Values.concourse.worker.garden.debugBindIp | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.debugBindPort }}
- - name: CONCOURSE_GARDEN_DEBUG_BIND_PORT
- value: {{ .Values.concourse.worker.garden.debugBindPort | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.skipSetup }}
- - name: CONCOURSE_GARDEN_SKIP_SETUP
- value: {{ .Values.concourse.worker.garden.skipSetup | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.depot }}
- - name: CONCOURSE_GARDEN_DEPOT
- value: {{ .Values.concourse.worker.garden.depot | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.propertiesPath }}
- - name: CONCOURSE_GARDEN_PROPERTIES_PATH
- value: {{ .Values.concourse.worker.garden.propertiesPath | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.consoleSocketsPath }}
- - name: CONCOURSE_GARDEN_CONSOLE_SOCKETS_PATH
- value: {{ .Values.concourse.worker.garden.consoleSocketsPath | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.cleanupProcessDirsOnWait }}
- - name: CONCOURSE_GARDEN_CLEANUP_PROCESS_DIRS_ON_WAIT
- value: {{ .Values.concourse.worker.garden.cleanupProcessDirsOnWait | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.disablePrivilegedContainers }}
- - name: CONCOURSE_GARDEN_DISABLE_PRIVILEGED_CONTAINERS
- value: {{ .Values.concourse.worker.garden.disablePrivilegedContainers | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.uidMapStart }}
- - name: CONCOURSE_GARDEN_UID_MAP_START
- value: {{ .Values.concourse.worker.garden.uidMapStart | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.uidMapLength }}
- - name: CONCOURSE_GARDEN_UID_MAP_LENGTH
- value: {{ .Values.concourse.worker.garden.uidMapLength | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.gidMapStart }}
- - name: CONCOURSE_GARDEN_GID_MAP_START
- value: {{ .Values.concourse.worker.garden.gidMapStart | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.gidMapLength }}
- - name: CONCOURSE_GARDEN_GID_MAP_LENGTH
- value: {{ .Values.concourse.worker.garden.gidMapLength | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.defaultRootfs }}
- - name: CONCOURSE_GARDEN_DEFAULT_ROOTFS
- value: {{ .Values.concourse.worker.garden.defaultRootfs | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.defaultGraceTime }}
- - name: CONCOURSE_GARDEN_DEFAULT_GRACE_TIME
- value: {{ .Values.concourse.worker.garden.defaultGraceTime | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.destroyContainersOnStartup }}
- - name: CONCOURSE_GARDEN_DESTROY_CONTAINERS_ON_STARTUP
- value: {{ .Values.concourse.worker.garden.destroyContainersOnStartup | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.apparmor }}
- - name: CONCOURSE_GARDEN_APPARMOR
- value: {{ .Values.concourse.worker.garden.apparmor | quote }}
+ {{- if .Values.concourse.worker.externalGardenUrl }}
+ - name: CONCOURSE_EXTERNAL_GARDEN_URL
+ value: {{ .Values.concourse.worker.externalGardenUrl | quote }}
{{- end }}
- {{- if .Values.concourse.worker.garden.assetsDir }}
- - name: CONCOURSE_GARDEN_ASSETS_DIR
- value: {{ .Values.concourse.worker.garden.assetsDir | quote }}
+ {{- if .Values.concourse.worker.garden.useHoudini }}
+ - name: CONCOURSE_GARDEN_USE_HOUDINI
+ value: {{ .Values.concourse.worker.garden.useHoudini | quote }}
{{- end }}
- {{- if .Values.concourse.worker.garden.dadooBin }}
- - name: CONCOURSE_GARDEN_DADOO_BIN
- value: {{ .Values.concourse.worker.garden.dadooBin | quote }}
+ {{- if .Values.concourse.worker.garden.bin }}
+ - name: CONCOURSE_GARDEN_BIN
+ value: {{ .Values.concourse.worker.garden.bin | quote }}
{{- end }}
- {{- if .Values.concourse.worker.garden.nstarBin }}
- - name: CONCOURSE_GARDEN_NSTAR_BIN
- value: {{ .Values.concourse.worker.garden.nstarBin | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.tarBin }}
- - name: CONCOURSE_GARDEN_TAR_BIN
- value: {{ .Values.concourse.worker.garden.tarBin | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.iptablesBin }}
- - name: CONCOURSE_GARDEN_IPTABLES_BIN
- value: {{ .Values.concourse.worker.garden.iptablesBin | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.iptablesRestoreBin }}
- - name: CONCOURSE_GARDEN_IPTABLES_RESTORE_BIN
- value: {{ .Values.concourse.worker.garden.iptablesRestoreBin | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.initBin }}
- - name: CONCOURSE_GARDEN_INIT_BIN
- value: {{ .Values.concourse.worker.garden.initBin | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.runtimePlugin }}
- - name: CONCOURSE_GARDEN_RUNTIME_PLUGIN
- value: {{ .Values.concourse.worker.garden.runtimePlugin | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.runtimePluginExtraArg }}
- - name: CONCOURSE_GARDEN_RUNTIME_PLUGIN_EXTRA_ARG
- value: {{ .Values.concourse.worker.garden.runtimePluginExtraArg | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.graph }}
- - name: CONCOURSE_GARDEN_GRAPH
- value: {{ .Values.concourse.worker.garden.graph | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.graphCleanupThresholdInMegabytes }}
- - name: CONCOURSE_GARDEN_GRAPH_CLEANUP_THRESHOLD_IN_MEGABYTES
- value: {{ .Values.concourse.worker.garden.graphCleanupThresholdInMegabytes | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.persistentImage }}
- - name: CONCOURSE_GARDEN_PERSISTENT_IMAGE
- value: {{ .Values.concourse.worker.garden.persistentImage | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.imagePlugin }}
- - name: CONCOURSE_GARDEN_IMAGE_PLUGIN
- value: {{ .Values.concourse.worker.garden.imagePlugin | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.imagePluginExtraArg }}
- - name: CONCOURSE_GARDEN_IMAGE_PLUGIN_EXTRA_ARG
- value: {{ .Values.concourse.worker.garden.imagePluginExtraArg | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.privilegedImagePlugin }}
- - name: CONCOURSE_GARDEN_PRIVILEGED_IMAGE_PLUGIN
- value: {{ .Values.concourse.worker.garden.privilegedImagePlugin | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.privilegedImagePluginExtraArg }}
- - name: CONCOURSE_GARDEN_PRIVILEGED_IMAGE_PLUGIN_EXTRA_ARG
- value: {{ .Values.concourse.worker.garden.privilegedImagePluginExtraArg | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.dockerRegistry }}
- - name: CONCOURSE_GARDEN_DOCKER_REGISTRY
- value: {{ .Values.concourse.worker.garden.dockerRegistry | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.insecureDockerRegistry }}
- - name: CONCOURSE_GARDEN_INSECURE_DOCKER_REGISTRY
- value: {{ .Values.concourse.worker.garden.insecureDockerRegistry | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.networkPool }}
- - name: CONCOURSE_GARDEN_NETWORK_POOL
- value: {{ .Values.concourse.worker.garden.networkPool | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.allowHostAccess }}
- - name: CONCOURSE_GARDEN_ALLOW_HOST_ACCESS
- value: {{ .Values.concourse.worker.garden.allowHostAccess | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.denyNetwork }}
- - name: CONCOURSE_GARDEN_DENY_NETWORK
- value: {{ .Values.concourse.worker.garden.denyNetwork | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.dnsServer }}
- - name: CONCOURSE_GARDEN_DNS_SERVER
- value: {{ .Values.concourse.worker.garden.dnsServer | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.additionalDnsServer }}
- - name: CONCOURSE_GARDEN_ADDITIONAL_DNS_SERVER
- value: {{ .Values.concourse.worker.garden.additionalDnsServer | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.additionalHostEntry }}
- - name: CONCOURSE_GARDEN_ADDITIONAL_HOST_ENTRY
- value: {{ .Values.concourse.worker.garden.additionalHostEntry | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.externalIp }}
- - name: CONCOURSE_GARDEN_EXTERNAL_IP
- value: {{ .Values.concourse.worker.garden.externalIp | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.portPoolStart }}
- - name: CONCOURSE_GARDEN_PORT_POOL_START
- value: {{ .Values.concourse.worker.garden.portPoolStart | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.portPoolSize }}
- - name: CONCOURSE_GARDEN_PORT_POOL_SIZE
- value: {{ .Values.concourse.worker.garden.portPoolSize | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.portPoolPropertiesPath }}
- - name: CONCOURSE_GARDEN_PORT_POOL_PROPERTIES_PATH
- value: {{ .Values.concourse.worker.garden.portPoolPropertiesPath | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.mtu }}
- - name: CONCOURSE_GARDEN_MTU
- value: {{ .Values.concourse.worker.garden.mtu | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.networkPlugin }}
- - name: CONCOURSE_GARDEN_NETWORK_PLUGIN
- value: {{ .Values.concourse.worker.garden.networkPlugin | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.networkPluginExtraArg }}
- - name: CONCOURSE_GARDEN_NETWORK_PLUGIN_EXTRA_ARG
- value: {{ .Values.concourse.worker.garden.networkPluginExtraArg | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.cpuQuotaPerShare }}
- - name: CONCOURSE_GARDEN_CPU_QUOTA_PER_SHARE
- value: {{ .Values.concourse.worker.garden.cpuQuotaPerShare | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.tcpMemoryLimit }}
- - name: CONCOURSE_GARDEN_TCP_MEMORY_LIMIT
- value: {{ .Values.concourse.worker.garden.tcpMemoryLimit | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.defaultContainerBlockioWeight }}
- - name: CONCOURSE_GARDEN_DEFAULT_CONTAINER_BLOCKIO_WEIGHT
- value: {{ .Values.concourse.worker.garden.defaultContainerBlockioWeight | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.maxContainers }}
- - name: CONCOURSE_GARDEN_MAX_CONTAINERS
- value: {{ .Values.concourse.worker.garden.maxContainers | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.disableSwapLimit }}
- - name: CONCOURSE_GARDEN_DISABLE_SWAP_LIMIT
- value: {{ .Values.concourse.worker.garden.disableSwapLimit | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.metricsEmissionInterval }}
- - name: CONCOURSE_GARDEN_METRICS_EMISSION_INTERVAL
- value: {{ .Values.concourse.worker.garden.metricsEmissionInterval | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.dropsondeOrigin }}
- - name: CONCOURSE_GARDEN_DROPSONDE_ORIGIN
- value: {{ .Values.concourse.worker.garden.dropsondeOrigin | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.dropsondeDestination }}
- - name: CONCOURSE_GARDEN_DROPSONDE_DESTINATION
- value: {{ .Values.concourse.worker.garden.dropsondeDestination | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.containerdSocket }}
- - name: CONCOURSE_GARDEN_CONTAINERD_SOCKET
- value: {{ .Values.concourse.worker.garden.containerdSocket | quote }}
- {{- end }}
- {{- if .Values.concourse.worker.garden.useContainerdForProcesses }}
- - name: CONCOURSE_GARDEN_USE_CONTAINERD_FOR_PROCESSES
- value: {{ .Values.concourse.worker.garden.useContainerdForProcesses | quote }}
+ {{- if .Values.concourse.worker.garden.config }}
+ - name: CONCOURSE_GARDEN_CONFIG
+ value: {{ .Values.concourse.worker.garden.config | quote }}
{{- end }}
{{- if .Values.concourse.worker.garden.dnsProxyEnable }}
- name: CONCOURSE_GARDEN_DNS_PROXY_ENABLE
value: {{ .Values.concourse.worker.garden.dnsProxyEnable | quote }}
{{- end }}
-
{{- if .Values.concourse.worker.baggageclaim.logLevel }}
- name: CONCOURSE_BAGGAGECLAIM_LOG_LEVEL
value: {{ .Values.concourse.worker.baggageclaim.logLevel | quote }}
@@ -416,9 +213,13 @@ spec:
- name: CONCOURSE_BAGGAGECLAIM_BIND_PORT
value: {{ .Values.concourse.worker.baggageclaim.bindPort | quote }}
{{- end }}
- {{- if .Values.concourse.worker.baggageclaim.bindDebugPort }}
- - name: CONCOURSE_BAGGAGECLAIM_BIND_DEBUG_PORT
- value: {{ .Values.concourse.worker.baggageclaim.bindDebugPort | quote }}
+ {{- if .Values.concourse.worker.baggageclaim.debugBindIp }}
+ - name: CONCOURSE_BAGGAGECLAIM_DEBUG_BIND_IP
+ value: {{ .Values.concourse.worker.baggageclaim.debugBindIp | quote }}
+ {{- end }}
+ {{- if .Values.concourse.worker.baggageclaim.debugBindPort }}
+ - name: CONCOURSE_BAGGAGECLAIM_DEBUG_BIND_PORT
+ value: {{ .Values.concourse.worker.baggageclaim.debugBindPort | quote }}
{{- end }}
{{- if .Values.concourse.worker.baggageclaim.volumes }}
- name: CONCOURSE_BAGGAGECLAIM_VOLUMES
@@ -440,18 +241,28 @@ spec:
- name: CONCOURSE_BAGGAGECLAIM_OVERLAYS_DIR
value: {{ .Values.concourse.worker.baggageclaim.overlaysDir | quote }}
{{- end }}
- {{- if .Values.concourse.worker.baggageclaim.reapInterval }}
- - name: CONCOURSE_BAGGAGECLAIM_REAP_INTERVAL
- value: {{ .Values.concourse.worker.baggageclaim.reapInterval | quote }}
+ {{- if .Values.concourse.worker.baggageclaim.disableUserNamespaces }}
+ - name: CONCOURSE_BAGGAGECLAIM_DISABLE_USER_NAMESPACES
+ value: {{ .Values.concourse.worker.baggageclaim.disableUserNamespaces | quote }}
+ {{- end }}
+ {{- if .Values.concourse.worker.volumeSweeperMaxInFlight }}
+ - name: CONCOURSE_VOLUME_SWEEPER_MAX_IN_FLIGHT
+ value: {{ .Values.concourse.worker.volumeSweeperMaxInFlight | quote }}
+ {{- end }}
+ {{- if .Values.concourse.worker.containerSweeperMaxInFlight }}
+ - name: CONCOURSE_CONTAINER_SWEEPER_MAX_IN_FLIGHT
+ value: {{ .Values.concourse.worker.containerSweeperMaxInFlight | quote }}
{{- end }}
- - name: LIVENESS_PROBE_FATAL_ERRORS
- value: {{ .Values.worker.fatalErrors | quote }}
-
{{- if .Values.worker.env }}
{{ toYaml .Values.worker.env | indent 12 }}
{{- end }}
+ ports:
+ - name: worker-hc
+ containerPort: {{ .Values.concourse.worker.healthcheckBindPort }}
+{{- if .Values.worker.resources }}
resources:
{{ toYaml .Values.worker.resources | indent 12 }}
+{{- end }}
securityContext:
privileged: true
volumeMounts:
@@ -459,7 +270,11 @@ spec:
mountPath: {{ .Values.worker.keySecretsPath | quote }}
readOnly: true
- name: concourse-work-dir
- mountPath: {{ .Values.concourse.workingDirectory | default "/concourse-work-dir" | quote }}
+ mountPath: {{ .Values.concourse.worker.workDir | quote }}
+ - name: pre-stop-hook
+ mountPath: /pre-stop-hook.sh
+ subPath: pre-stop-hook.sh
+
{{- if .Values.worker.additionalVolumeMounts }}
{{ toYaml .Values.worker.additionalVolumeMounts | indent 12 }}
{{- end }}
@@ -489,9 +304,12 @@ spec:
{{- if .Values.worker.additionalVolumes }}
{{ toYaml .Values.worker.additionalVolumes | indent 8 }}
{{- end }}
+ - name: pre-stop-hook
+ configMap:
+ name: {{ template "concourse.worker.fullname" . }}
- name: concourse-keys
secret:
- secretName: {{ template "concourse.concourse.fullname" . }}
+ secretName: {{ template "concourse.worker.fullname" . }}
defaultMode: 0400
items:
- key: host-key-pub
@@ -500,13 +318,6 @@ spec:
path: worker_key
- key: worker-key-pub
path: worker_key.pub
-{{- define "concourse.are-there-additional-volumes.with-the-name.concourse-work-dir" }}
- {{- range .Values.worker.additionalVolumes }}
- {{- if .name | eq "concourse-work-dir" }}
- {{- .name }}
- {{- end }}
- {{- end }}
-{{- end }}
{{- if .Values.persistence.enabled }}
volumeClaimTemplates:
- metadata:
@@ -533,8 +344,11 @@ spec:
{{- end }}
{{- end }}
{{- end }}
-{{- if semverCompare "^1.7-0" .Capabilities.KubeVersion.GitVersion }}
+ {{- if semverCompare "^1.7-0" .Capabilities.KubeVersion.GitVersion }}
updateStrategy:
type: {{ .Values.worker.updateStrategy }}
-{{- end }}
+ {{- end }}
+ {{- if .Values.worker.podManagementPolicy }}
podManagementPolicy: {{ .Values.worker.podManagementPolicy }}
+ {{- end }}
+{{- end }}
diff --git a/stable/concourse/templates/worker-svc.yaml b/stable/concourse/templates/worker-svc.yaml
index 6feccd5ca05b..677cd2f4c465 100644
--- a/stable/concourse/templates/worker-svc.yaml
+++ b/stable/concourse/templates/worker-svc.yaml
@@ -1,3 +1,4 @@
+{{- if .Values.worker.enabled -}}
## A Headless Service is required when using a StatefulSet
## ref: https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/
##
@@ -19,3 +20,4 @@ spec:
ports: []
selector:
app: {{ template "concourse.worker.fullname" . }}
+{{- end }}
diff --git a/stable/concourse/values.yaml b/stable/concourse/values.yaml
index 8e0f435c3394..adbf95871eb0 100644
--- a/stable/concourse/values.yaml
+++ b/stable/concourse/values.yaml
@@ -2,152 +2,341 @@
## This is a YAML-formatted file.
## Declare variables to be passed into your templates.
-## Override the name of the Chart.
+## Provide a name in place of `concourse` for `app:` labels
##
-# nameOverride:
+nameOverride:
-## Concourse image.
+## Provide a name to substitute for the full names of resources
+##
+fullnameOverride:
+
+## Concourse image to use in both Web and Worker containers.
##
image: concourse/concourse
-## Concourse image version.
-## ref: https://hub.docker.com/r/concourse/concourse/tags/
+## Concourse image tag.
+## ps.: release candidates are published under `concourse/concourse-rc` instead
+## of `concourse/concourse`.
+## Ref: https://hub.docker.com/r/concourse/concourse/tags/
##
-imageTag: "4.2.2"
+imageTag: "5.2.0"
## Specific image digest to use in place of a tag.
-## ref: https://kubernetes.io/docs/concepts/configuration/overview/#container-images
+## Ref: https://kubernetes.io/docs/concepts/configuration/overview/#container-images
##
-# imageDigest: sha256:54ea351808b55ecc14af6590732932e2a6a0ed8f6d10f45e8be3b51165d5526a
+imageDigest:
-## Specify a imagePullPolicy: 'Always' if imageTag is 'latest', else set to 'IfNotPresent'.
-## ref: https://kubernetes.io/docs/user-guide/images/#pre-pulling-images
+## Specify an imagePullPolicy regarding the fetching of container images.
+## Ref: https://kubernetes.io/docs/user-guide/images/#pre-pulling-images
##
imagePullPolicy: IfNotPresent
-## Optionally specify an array of imagePullSecrets.
-## Secrets must be manually created in the namespace.
-## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
+## Array of imagePullSecrets to allow pulling the Concourse image from private registries.
+## ps.: secrets must be manually created in the namespace.
+## Ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
+##
+## Example:
##
-# imagePullSecrets:
-# - myRegistrKeySecretName
+## imagePullSecrets:
+## - myRegistryKeySecretName
+##
+imagePullSecrets: []
+
-## Configuration values for Concourse.
-## ref: https://concourse-ci.org/setting-up.html
+## Configuration values for the Concourse application (worker and web components).
+## The values specified here are almost direct references to the flags under the
+## `concourse web` and `concourse worker` commands.
##
concourse:
+ ## Configurations for the `web` component based on the possible flags configurable
+ ## through the `concourse web` command.
+ ##
web:
- ## Minimum level of logs to see.
- # logLevel: info
- ## IP address on which to listen for web traffic.
- # bindIp: 0.0.0.0
- ## Port on which to listen for HTTP traffic.
+ ## A name for this Concourse cluster, to be displayed on the dashboard page.
+ ##
+ clusterName:
+
+ ## Enable equivalent resources across pipelines and teams to share a single version history.
+ ## Ref: https://concourse-ci.org/global-resources.html
+ ##
+ enableGlobalResources: true
+
+ ## Enable auditing for all api requests connected to builds.
+ ##
+ enableBuildAuditing: false
+
+ ## Enable auditing for all api requests connected to containers.
+ ##
+ enableContainerAuditing: false
+
+ ## Enable auditing for all api requests connected to jobs.
+ ##
+ enableJobAuditing: false
+
+ ## Enable auditing for all api requests connected to pipelines.
+ ##
+ enablePipelineAuditing: false
+
+ ## Enable auditing for all api requests connected to resources.
+ ##
+ enableResourceAuditing: false
+
+ ## Enable auditing for all api requests connected to system transactions.
+ ##
+ enableSystemAuditing: false
+
+ ## Enable auditing for all api requests connected to teams.
+ ##
+ enableTeamAuditing: false
+
+ ## Enable auditing for all api requests connected to workers.
+ ##
+ enableWorkerAuditing: false
+
+ ## Enable auditing for all api requests connected to volumes.
+ ##
+ enableVolumeAuditing: false
+
+ ## The number of times to retry fetching a secret,
+ ## in case a retryable error happens.
+ ##
+ secretRetryAttempts:
+
+ ## The interval between secret retrieval retry attempts.
+ ##
+ secretRetryInterval:
+
+ ## Enable in-memory cache for secrets.
+ ##
+ secretCacheEnabled: false
+
+ ## If the cache is enabled, secret values will be cached for no longer
+ ## than this duration (it can be less, if underlying secret lease time
+ ## is smaller).
+ ##
+ secretCacheDuration:
+
+ ## If the cache is enabled, expired items will be removed on this interval.
+ ##
+ secretCachePurgeInterval:
+
+ ## Minimum level of logs to see. Possible options: debug, info, error.
+ ##
+ logLevel:
+
+ ## IP address on which to listen for HTTP traffic (web UI and API).
+ ##
+ bindIp:
+
+ ## Port on which to listen for HTTP traffic (web UI and API).
+ ##
bindPort: 8080
- ## TLS configurations for the web component to be able to serve HTTPS traffic.
- ## Once enabled, consumes the certificates set via secrets.
- #
+
+ ## TLS configuration for the web component to be able to serve HTTPS traffic.
+ ## Once enabled, consumes the certificates set via secrets (`web-tls-cert` and
+ ## `web-tls-key`).
+ ##
tls:
+
+ ## Enable serving HTTPS traffic directly through the web component.
+ ##
enabled: false
+
## Port on which to listen for HTTPS traffic.
- # bindPort:
+ ##
+ bindPort: 443
+
## URL used to reach any ATC from the outside world.
- # externalUrl: http://127.0.0.1:8080
- ## URL used to reach this ATC from other ATCs in the cluster.
- # peerUrl: http://127.0.0.1:8080
- ## Enable encryption of pipeline configuration. Encryption keys can be set via secrets.
- ## See https://concourse-ci.org/encryption.html
+ ## This is *very* important for a proper authentication workflow as
+ ## browser redirects are based on the value set here.
+ ##
+ ## Example: http://ci.concourse-ci.org
##
+ externalUrl:
+
encryption:
+ ## Enable encryption of pipeline configuration. Encryption keys can be set via secrets
+ ## (`encryption-key` and `old-encryption-key` fields).
+ ## Ref: https://concourse-ci.org/encryption.html
+ ##
enabled: false
+
localAuth:
+ ## Enable the use of local authentication (basic auth).
+ ## Once enabled, users configured through `local-users` (secret)
+ ## are able to authenticate.
+ ##
+ ## Local users can be individually added to the `main` team by setting
+ ## `concourse.web.auth.mainTeam.localUser` with a comma-separated list
+ ## of ids.
+ ##
+ ## Ref: https://concourse-ci.org/install.html#local-auth-config
+ ##
enabled: true
+
## IP address on which to listen for the pprof debugger endpoints.
- # debugBindIp: 127.0.0.1
+ ##
+ debugBindIp:
+
## Port on which to listen for the pprof debugger endpoints.
- # debugBindPort: 8079
+ ##
+ debugBindPort: 8079
+
## Length of time for a intercepted session to be idle before terminating.
- # interceptIdleTimeout: 0m
+ ##
+ interceptIdleTimeout:
+
## Time limit on checking for new versions of resources.
- # globalResourceCheckTimeout: 1h
+ ##
+ globalResourceCheckTimeout:
+
## Interval on which to check for new versions of resources.
- # resourceCheckingInterval: 1m
+ ##
+ resourceCheckingInterval:
+
## Interval on which to check for new versions of resource types.
- # resourceTypeCheckingInterval: 1m
+ ##
+ resourceTypeCheckingInterval:
+
## Method by which a worker is selected during container placement.
- # containerPlacementStrategy: volume-locality
+ ## Possible values: volume-locality | random | fewest-build-containers
+ containerPlacementStrategy:
+
## How long to wait for Baggageclaim to send the response header.
- # baggageclaimResponseHeaderTimeout: 1m
+ ##
+ baggageclaimResponseHeaderTimeout:
+
## Directory containing downloadable CLI binaries.
- # cliArtifactsDir:
+ ## By default, Concourse will try to find the assets
+ ## path relative to the executable.
+ ##
+ cliArtifactsDir:
+
## Log database queries.
- # logDbQueries:
+ ##
+ logDbQueries: false
+
## Interval on which to run build tracking.
- # buildTrackerInterval: 10s
- ## Default build logs to retain, 0 means all
- # defaultBuildLogsToRetain:
- ## Maximum build logs to retain, 0 means not specified. Will override values configured in jobs
- # maxBuildLogsToRetain:
- ## Default max number of cpu shares per task, 0 means unlimited
- # defaultTaskCpuLimit:
- ## Default maximum memory per task, 0 means unlimited
- # defaultTaskMemoryLimit:
+ ##
+ buildTrackerInterval:
+
+ ## Default number of build logs to retain. 0 means all.
+ ##
+ defaultBuildLogsToRetain:
+
+ ## Maximum build logs to retain, 0 means not specified. Will override values configured in jobs.
+ ##
+ maxBuildLogsToRetain:
+
+ ## Default days to retain build logs. 0 means unlimited.
+ ##
+ defaultDaysToRetainBuildLogs:
+
+ ## Maximum days to retain build logs, 0 means not specified. Will override values configured in jobs.
+ ##
+ maxDaysToRetainBuildLogs:
+
+ ## Default max number of cpu shares per task, 0 means unlimited.
+ ##
+ defaultTaskCpuLimit:
+
+ ## Default maximum memory per task, 0 means unlimited.
+ ##
+ defaultTaskMemoryLimit:
+
+ ## Network address of this web node, reachable by other web nodes. Used for forwarded worker addresses. (default: $POD_IP)
+ ##
+ peerAddress:
+
+ ## Configurations regarding how the web component is able to connect to a postgres
+ ## instance.
+ ##
postgres:
## The host to connect to.
- host: 127.0.0.1
+ ##
+ host:
+
## The port to connect to.
- port: 5432
+ ##
+ port:
+
## Path to a UNIX domain socket to connect to.
- # socket:
+ ##
+ socket:
+
## Whether or not to use SSL.
+ ##
sslmode: disable
+
## Dialing timeout. (0 means wait indefinitely)
- connectTimeout: 5m
+ ##
+ connectTimeout:
+
## The name of the database to use.
- database: atc
+ ##
+ database:
- kubernetes:
- ## Enable the use of in-cluster Kubernetes Secrets.
+ kubernetes:
+ ## Enable the use of Kubernetes Secrets as the credential provider for
+ ## concourse pipelines.
##
enabled: true
- ## Prefix to use for Kubernetes namespaces under which secrets will be looked up. Defaults to
- ## the Release name hyphen, e.g. "my-release-" produces namespace "my-release-main" for the
- ## "main" Concourse team.
+ ## Prefix to use for Kubernetes namespaces under which secrets will be looked up.
+ ## Defaults to the Release name hyphen, e.g. "my-release-" produces namespace "my-release-main"
+ ## for the "main" Concourse team.
##
- ## namespacePrefix:
+ namespacePrefix:
## Teams to create namespaces for to hold secrets.
+ ## This property only takes effect if `createTeamNamespaces` is set to `true`.
+ ##
teams:
- main
- ## Create the Kubernetes namespace for each team listed above.
+ ## Create the Kubernetes namespace for each team listed under `concourse.web.kubernetes.teams`.
+ ##
createTeamNamespaces: true
## When true, namespaces are not deleted when the release is deleted.
## Irrelevant if the namespaces are not created by this chart.
+ ##
keepNamespaces: true
## Path to Kubernetes config when running ATC outside Kubernetes.
- # configPath:
+ ##
+ configPath:
+ ## Configuration for using AWS Secrets Manager as a credential manager.
+ ## Ref: https://concourse-ci.org/creds.html#asm
+ ##
awsSecretsManager:
- ## Enable the use of AWS Secrets Manager.
+ ## Enable the use of AWS Secrets Manager for credential management.
##
enabled: false
## AWS region to use when reading from Secrets Manager
##
- # region:
+ region:
+
+ ## Configure authentication using an access key and secret key. If disabled, IAM role auth is assumed.
+ ## Session Token can also be enabled, if required.
+ keyAuth:
+ enabled: true
+ useSessionToken: false
## pipeline-specific template for Secrets Manager parameters, defaults to: /concourse/{team}/{pipeline}/{secret}
##
- # pipelineSecretTemplate:
+ pipelineSecretTemplate:
## team-specific template for Secrets Manager parameters, defaults to: /concourse/{team}/{secret}
##
- # teamSecretTemplate: ''
+ teamSecretTemplate:
+ ## Configuration for using AWS SSM as a credential manager.
+ ## Ref: https://concourse-ci.org/creds.html#ssm
+ ##
awsSsm:
## Enable the use of AWS SSM.
##
@@ -155,534 +344,870 @@ concourse:
## AWS region to use when reading from SSM
##
- # region:
+ region:
+
+ ## Configure authentication using an access key and secret key. If disabled, IAM role auth is assumed.
+ ## Session Token can also be enabled, if required.
+ keyAuth:
+ enabled: true
+ useSessionToken: false
## pipeline-specific template for SSM parameters, defaults to: /concourse/{team}/{pipeline}/{secret}
##
- # pipelineSecretTemplate:
+ pipelineSecretTemplate:
## team-specific template for SSM parameters, defaults to: /concourse/{team}/{secret}
##
- # teamSecretTemplate: ''
+ teamSecretTemplate:
+ ## Configuration for using Vault as a credential manager.
+ ## Ref: https://concourse-ci.org/creds.html#vault
+ ##
vault:
+ ## Enable the use of Vault as a credential manager.
+ ##
enabled: false
## URL pointing to vault addr (i.e. http://vault:8200).
##
- # url:
+ url:
+
+ ## Vault path under which to namespace credential lookup.
+ ##
+ pathPrefix:
- ## vault path under which to namespace credential lookup, defaults to /concourse.
+ ## Path under which to lookup shared credentials.
##
- pathPrefix: /concourse
+ sharedPath:
## if the Vault server is using a self-signed certificate, set this to true,
- ## and provide a value for the cert in secrets.
+ ## and provide a value for the cert in secrets (field `vault-ca-cert`).
##
- # useCaCert:
+ useCaCert: false
- ## vault authentication backend, leave this blank if using an initial periodic token
- ## currently supported backends: token, approle, cert.
+ ## Vault authentication backend, leave this blank if using an initial periodic token.
+ ## Currently supported backends: token, approle, cert.
##
- # authBackend:
+ authBackend: ""
- ## Cache returned secrets for their lease duration in memory
- # cache:
- ## If the cache is enabled, and this is set, override secrets lease duration with a maximum value
- # maxLease:
## Path to a directory of PEMEncoded CA cert files to verify the vault server SSL cert.
- # caPath:
+ ##
+ caPath:
+
## If set, is used to set the SNI host when connecting via TLS.
- # serverName:
+ ##
+ serverName:
+
## Enable insecure SSL verification.
- # insecureSkipVerify:
- ## Client token for accessing secrets within the Vault server.
- # clientToken:
- ## Auth backend to use for logging in to Vault.
- # authBackend:
+ ##
+ insecureSkipVerify: false
+
+ ## Client token for accessing secrets within the Vault server.
+ ##
+ clientToken:
+
## Time after which to force a reLogin. If not set, the token will just be continuously renewed.
- # authBackendMaxTtl:
+ ##
+ authBackendMaxTtl:
+
## The maximum time between retries when logging in or reAuthing a secret.
- retryMax: 5m
+ ##
+ retryMax:
+
## The initial time between retries when logging in or reAuthing a secret.
- retryInitial: 1s
+ ##
+ retryInitial:
+
## Don't actually do any automatic scheduling or checking.
- # noop:
+ ##
+ noop: false
+
staticWorker:
+ ## Enables the direct registration of a worker that has its properties
+ ## hardcoded.
+ ##
enabled: false
+
## A Garden API endpoint to register as a worker.
+ ##
gardenUrl:
+
## A Baggageclaim API endpoint to register with the worker.
+ ##
baggageclaimUrl:
+
## A resource type to advertise for the worker. Can be specified multiple times.
+ ##
resource:
+
metrics:
## Host string to attach to emitted metrics.
+ ##
hostName:
- ## A keyValue attribute to attach to emitted metrics. Can be specified multiple times.
+
+ ## A key-value attribute to attach to emitted metrics.
+ ##
attribute:
+
+ ## Enable capturing of error log metrics.
+ ##
+ captureErrorMetrics: false
+
datadog:
enabled: false
+
## Use IP of node the pod is scheduled on, overrides `agentHost`
+ ##
agentHostUseHostIP: false
+
## Datadog agent host to expose dogstatsd metrics
+ ##
agentHost: 127.0.0.1
+
## Datadog agent port to expose dogstatsd metrics
+ ##
agentPort: 8125
+
## Prefix for all metrics to easily find them in Datadog
- # prefix: concoursedev
+ ##
+ prefix:
+
influxdb:
enabled: false
+
## InfluxDB server address to emit points to.
- url: http://127.0.0.1:8086
+ ## Example: http://127.0.0.1:8086
+ ##
+ url:
+
## InfluxDB database to write points to.
+ ##
database: concourse
+
## InfluxDB server username.
- # username:
+ ##
+ username:
+
## Skip SSL verification when emitting to InfluxDB.
+ ##
insecureSkipVerify: false
- ## Emit metrics to logs.
- # emitToLogs:
+
+ ## Emit metrics to logs instead of an actual metrics system.
+ ##
+ emitToLogs: false
+
newrelic:
enabled: false
+
## New Relic Account ID
- # accountId:
+ ##
+ accountId:
+
## New Relic Insights API Key
- # apiKey:
+ ##
+ apiKey:
+
## An optional prefix for emitted New Relic events
- # servicePrefix:
+ ##
+ servicePrefix:
+
prometheus:
enabled: false
+
## IP to listen on to expose Prometheus metrics.
+ ##
bindIp: "0.0.0.0"
+
## Port to listen on to expose Prometheus metrics.
+ ##
bindPort: 9391
+
riemann:
enabled: false
+
## Riemann server address to emit metrics to.
- # host:
+ ##
+ host:
+
## Port of the Riemann server to emit metrics to.
+ ##
port: 5555
+
## An optional prefix for emitted Riemann services
- # servicePrefix:
+ ##
+ servicePrefix:
+
## Tag to attach to emitted metrics. Can be specified multiple times.
- # tag:
- ## The value to set for XFrame-Options. If omitted, the header is not set.
- # xFrameOptions:
+ ##
+ tag:
+
+ ## The value to set for X-Frame-Options. If omitted, the header is not set.
+ ##
+ xFrameOptions:
+
gc:
+ ## Enables overriding the default values that Concourse sets
+ ## for the parameters related to garbage collection.
+ ##
+ ## **Do not change these values unless you're sure about what you're doing.**
+ ##
overrideDefaults: false
+
## Interval on which to perform garbage collection.
+ ##
interval: 30s
+
## Grace period before reaping oneOff task containers
+ ##
oneOffGracePeriod: 5m
+
+ ## Period after which to reap containers and volumes that were created but
+ ## went missing from the worker.
+ ##
+ missingGracePeriod:
+
syslog:
+ ## Enables the emission of build logs to external log ingesters
+ ## using the syslog protocol.
+ ##
enabled: false
+
## Client hostname with which the build logs will be sent to the syslog server.
- hostName: atc-syslog-drainer
+ ##
+ hostName:
+
## Remote syslog server address with port (Example: 0.0.0.0:514).
- # address:
+ ##
+ address:
+
## Transport protocol for syslog messages (Currently supporting tcp, udp & tls).
- # transport:
- ## Interval over which checking is done for new build logs to send to syslog server (duration measurement units are s/m/h; eg. 30s/30m/1h)
+ ##
+ transport:
+
+ ## Interval over which checking is done for new build logs to send to syslog server
+ ## (duration measurement units are s/m/h; eg. 30s/30m/1h)
drainInterval: 30s
- ## if the syslog server is using a self-signed certificate, set this to true,
- ## and provide a value for the cert in secrets.
+
+ ## If the syslog server is using a self-signed certificate, set this to true,
+ ## and provide a value for the cert in secrets (`syslog-ca-cert`).
+ ##
useCaCert: false
+
auth:
## Force sending secure flag on http cookies
- # cookieSecure:
+ ##
+ cookieSecure: false
+
## Length of time for which tokens are valid. Afterwards, users will have to log back in.
- # duration: 24h
+ ## The value must be specified as a Go duration value (e.g. 30m or 24h).
+ duration:
+
mainTeam:
- ## List of whitelisted local concourse users. These are the users you've added at atc startup with the addLocalUser setting.
+ ## Configuration file for specifying team params.
+ ## Ref: https://concourse-ci.org/managing-teams.html#setting-roles
+ ##
+ config:
+
+ ## List of local Concourse users to be included as members of the `main` team.
+ ## Make sure you have local users support enabled (`concourse.web.localAuth.enabled`) and
+ ## that the users were added (`local-users` secret).
+ ##
localUser: "test"
- ## Setting this flag will whitelist all logged in users in the system. ALL OF THEM. If, for example, you've configured GitHub, any user with a GitHub account will have access to your team.
- # allowAllUsers:
+
## Authentication (Main Team) (CloudFoundry)
+ ##
cf:
## List of whitelisted CloudFoundry users.
+ ##
user:
+
## List of whitelisted CloudFoundry orgs
+ ##
org:
+
## List of whitelisted CloudFoundry spaces
+ ##
space:
+
## (Deprecated) List of whitelisted CloudFoundry space guids
+ ##
spaceGuid:
+
+ ## Authentication (Main Team) (Bitbucket Cloud)
+ ##
+ bitbucketCloud:
+
+ ## List of whitelisted Bitbucket Cloud users
+ ##
+ user:
+
+ ## List of whitelisted Bitbucket Cloud teams
+ ##
+ team:
+
## Authentication (Main Team) (GitHub)
+ ##
github:
## List of whitelisted GitHub users
+ ##
user:
+
## List of whitelisted GitHub orgs
+ ##
org:
+
## List of whitelisted GitHub teams
+ ##
team:
+
## Authentication (Main Team) (GitLab)
+ ##
gitlab:
+
## List of whitelisted GitLab users
+ ##
user:
+
## List of whitelisted GitLab groups
+ ##
group:
+
## Authentication (Main Team) (LDAP)
+ ##
ldap:
## List of whitelisted LDAP users
+ ##
user:
+
## List of whitelisted LDAP groups
+ ##
group:
+
## Authentication (Main Team) (OAuth2)
+ ##
oauth:
## List of whitelisted OAuth2 users
+ ##
user:
+
## List of whitelisted OAuth2 groups
+ ##
group:
+
## Authentication (Main Team) (OIDC)
+ ##
oidc:
+
## List of whitelisted OIDC users
+ ##
user:
+
## List of whitelisted OIDC groups
+ ##
group:
+
## Authentication (CloudFoundry)
+ ##
cf:
enabled: false
- ## (Required) The base API URL of your CF deployment. It will use this information to discover information about the authentication provider.
- # apiUrl: https://api.run.pivotal.io
+
+ ## (Required) The base API URL of your CF deployment. It will use this information to discover information
+ ## about the authentication provider.
+ ##
+ ## Example: https://api.run.pivotal.io
+ ##
+ apiUrl:
+
## CA Certificate
- # useCaCert:
+ ##
+ useCaCert: false
+
## Skip SSL validation
- # skipSslValidation:
+ ##
+ skipSslValidation: false
+
## Authentication (GitHub)
+ ##
github:
enabled: false
+
## Hostname of GitHub Enterprise deployment (No scheme, No trailing slash)
- # host:
+ ##
+ host:
+
## CA certificate of GitHub Enterprise deployment
- # useCaCert:
+ ##
+ useCaCert: false
+
+ ## Authentication (BitbucketCloud)
+ ##
+ bitbucketCloud:
+ enabled: false
+
## Authentication (GitLab)
gitlab:
enabled: false
+
## Hostname of Gitlab Enterprise deployment (Include scheme, No trailing slash)
- # host:
+ ##
+ host:
+
## Authentication (LDAP)
ldap:
enabled: false
+
## The auth provider name displayed to users on the login page
- # displayName:
- ## (Required) The host and optional port of the LDAP server. If port isn't supplied, it will be guessed based on the TLS configuration. 389 or 636.
- # host:
+ ##
+ displayName:
+
+ ## (Required) The host and optional port of the LDAP server. If port isn't supplied, it will be guessed
+ ## based on the TLS configuration. 389 or 636.
+ ##
+ host:
+
## (Required) Bind DN for searching LDAP users and groups. Typically this is a readOnly user.
- # bindDn:
+ ##
+ bindDn:
+
## (Required) Bind Password for the user specified by 'bindDn'
- # bindPw:
+ ##
+ bindPw:
+
## Required if LDAP host does not use TLS.
- # insecureNoSsl:
+ ##
+ insecureNoSsl:
+
## Skip certificate verification
- # insecureSkipVerify:
+ ##
+ insecureSkipVerify:
+
## Start on insecure port, then negotiate TLS
- # startTls:
+ ##
+ startTls:
+
## CA certificate
- # useCaCert:
+ ##
+ useCaCert:
+
## BaseDN to start the search from. For example 'cn=users,dc=example,dc=com'
- # userSearchBaseDn:
+ ##
+ userSearchBaseDn:
+
## Optional filter to apply when searching the directory. For example '(objectClass=person)'
- # userSearchFilter:
- ## Attribute to match against the inputted username. This will be translated and combined with the other filter as '(=)'.
- # userSearchUsername:
+ ##
+ userSearchFilter:
+
+ ## Attribute to match against the inputted username. This will be translated and combined with the other
+ ## filter as '(=)'.
+ ##
+ userSearchUsername:
+
## Can either be: 'sub' search the whole sub tree or 'one' - only search one level. Defaults to 'sub'.
- # userSearchScope:
+ ##
+ userSearchScope:
+
## A mapping of attributes on the user entry to claims. Defaults to 'uid'.
- # userSearchIdAttr:
+ ##
+ userSearchIdAttr:
+
## A mapping of attributes on the user entry to claims. Defaults to 'mail'.
- # userSearchEmailAttr:
+ ##
+ userSearchEmailAttr:
+
## A mapping of attributes on the user entry to claims.
- # userSearchNameAttr:
+ ##
+ userSearchNameAttr:
+
## BaseDN to start the search from. For example 'cn=groups,dc=example,dc=com'
- # groupSearchBaseDn:
+ ##
+ groupSearchBaseDn:
+
## Optional filter to apply when searching the directory. For example '(objectClass=posixGroup)'
- # groupSearchFilter:
+ ##
+ groupSearchFilter:
+
## Can either be: 'sub' search the whole sub tree or 'one' - only search one level. Defaults to 'sub'.
- # groupSearchScope:
+ ##
+ groupSearchScope:
+
## Adds an additional requirement to the filter that an attribute in the group match the user's attribute value. The exact filter being added is: (=)
- # groupSearchUserAttr:
+ ##
+ groupSearchUserAttr:
+
## Adds an additional requirement to the filter that an attribute in the group matches the user's attribute value. The exact filter being added is: (<groupAttr>=<userAttr value>)
- # groupSearchGroupAttr:
+ ##
+ groupSearchGroupAttr:
+
## The attribute of the group that represents its name.
- # groupSearchNameAttr:
+ ##
+ groupSearchNameAttr:
+
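Taken together, the LDAP fields above can be wired up in a single override. A minimal sketch, assuming an LDAP server reachable at `ldap.example.com` (the host, base DNs, and filters below are illustrative placeholders, not chart defaults):

```yaml
# Hypothetical ldap block combining the user- and group-search settings above.
ldap:
  enabled: true
  host: ldap.example.com:636            # assumed LDAPS endpoint
  userSearchBaseDn: cn=users,dc=example,dc=com
  userSearchFilter: (objectClass=person)
  userSearchUsername: uid               # matched as '(uid=<username>)'
  groupSearchBaseDn: cn=groups,dc=example,dc=com
  groupSearchFilter: (objectClass=posixGroup)
```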
## Authentication (OAuth2)
+ ##
oauth:
enabled: false
+
## The auth provider name displayed to users on the login page
- # displayName:
+ ##
+ displayName:
+
## (Required) Authorization URL
- # authUrl:
+ ##
+ authUrl:
+
## (Required) Token URL
- # tokenUrl:
+ ##
+ tokenUrl:
+
## UserInfo URL
- # userinfoUrl:
+ ##
+ userinfoUrl:
+
## Any additional scopes that need to be requested during authorization
- # scope:
+ ##
+ scope:
+
## The groups key indicates which claim to use to map external groups to Concourse teams.
- # groupsKey:
+ ##
+ groupsKey:
+
## CA Certificate
- # useCaCert:
+ ##
+ useCaCert:
+
## Skip SSL validation
- # skipSslValidation:
+ ##
+ skipSslValidation:
+
+ ## The user id key indicates which claim to use to map an external user id to a
+ ## Concourse user id.
+ ##
+ userIdKey: user_id
+
+ ## The user name key indicates which claim to use to map an external user name to a
+ ## Concourse user name.
+ ##
+ userNameKey: user_name
+
## Authentication (OIDC)
oidc:
enabled: false
+
## The auth provider name displayed to users on the login page
- # displayName:
+ ##
+ displayName:
+
## (Required) An OIDC issuer URL that will be used to discover provider configuration using the .wellKnown/openid-configuration
- # issuer:
+ ##
+ issuer:
+
## Any additional scopes that need to be requested during authorization
- # scope:
+ ##
+ scope:
+
## The groups key indicates which claim to use to map external groups to Concourse teams.
- # groupsKey:
+ ##
+ groupsKey:
+
## CA Certificate
- # useCaCert:
+ ##
+ useCaCert:
+
## Skip SSL validation
- # skipSslValidation:
+ ##
+ skipSslValidation:
+
+ ## The user name key indicates which claim to use to map an external user name to a
+ ## Concourse user name.
+ ##
+ userNameKey: username
+
tsa:
- ## Minimum level of logs to see.
- # logLevel: info
+ ## Minimum level of logs to see. Possible values: debug, info, error.
+ ##
+ logLevel:
+
## IP address on which to listen for SSH.
- # bindIp: 0.0.0.0
+ ##
+ bindIp:
+
## Port on which to listen for SSH.
+ ##
bindPort: 2222
+
+ ## IP address on which to listen for the pprof debugger endpoints (default: 127.0.0.1)
+ ##
+ debugBindIp:
+
## Port on which to listen for TSA pprof server.
- # bindDebugPort: 8089
- ## IP address of this TSA, reachable by the ATCs. Used for forwarded worker addresses.
- # peerIp:
+ ##
+ debugBindPort: 2221
+
## Path to private key to use for the SSH server.
- # hostKey:
- ## Path to file containing keys to authorize, in SSH authorized_keys format (one public key per line).
- # authorizedKeys:
+ ##
+ hostKey:
+
## Path to file containing keys to authorize, in SSH authorized_keys format (one public key per line).
- # teamAuthorizedKeys:
+ ##
+ authorizedKeys:
+
## ATC API endpoints to which workers will be registered.
- # atcUrl:
+ ##
+ atcUrl:
+
## Path to private key to use when signing tokens in requests to the ATC during registration.
- # sessionSigningKey:
- ## interval on which to heartbeat workers to the ATC
- # heartbeatInterval: 30s
+ ##
+ sessionSigningKey:
+
+ ## Interval on which to heartbeat workers to the ATC.
+ ##
+ heartbeatInterval:
+
worker:
+ ## Signal to send to the worker container when shutting down.
+ ## Possible values:
+ ##
+ ## - SIGUSR1: land the worker, and
+ ## - SIGUSR2: retire the worker.
+ ##
+ ## Note: using SIGUSR2 with persistence enabled implies the use of an
+ ## initContainer that removes any data that existed previously under
+ ## `concourse.worker.workDir`, as retiring a worker implies that no state
+ ## comes back with it when it re-registers.
+ ##
+ ## Ref: https://concourse-ci.org/concourse-worker.html
+ ## Ref: https://concourse-ci.org/worker-internals.html
+ ##
+ shutdownSignal: SIGUSR2
+
+ ## Duration after which the registration should be swapped to another random SSH gateway.
+ ##
+ rebalanceInterval:
+
+ ## IP address on which to listen for health checking requests.
+ ##
+ healthcheckBindIp:
+
+ ## Port on which to listen for health checking requests.
+ ##
+ healthcheckBindPort: 8888
+
+ ## HTTP timeout for the full duration of health checking.
+ ##
+ healthcheckTimeout:
+
## The name to set for the worker during registration. If not specified, the hostname will be used.
- # name:
+ ##
+ name:
+
## A tag to set during registration. Can be specified multiple times.
- # tag:
+ ##
+ tag:
+
## The name of the team that this worker will be assigned to.
- # team:
+ ##
+ team:
+
## HTTP proxy endpoint to use for containers.
- # http_proxy:
+ ##
+ http_proxy:
+
## HTTPS proxy endpoint to use for containers.
- # https_proxy:
+ ##
+ https_proxy:
+
## Blacklist of addresses to skip the proxy when reaching.
- # no_proxy:
+ ##
+ no_proxy:
+
## If set, the worker will be immediately removed upon stalling.
- # ephemeral:
+ ##
+ ephemeral:
+
+ ## IP address on which to listen for the pprof debugger endpoints.
+ ##
+ debugBindIp:
+
## Port on which to listen for beacon pprof server.
- # bindDebugPort: 9099
+ ##
+ debugBindPort: 7776
+
## Version of the worker. This is normally baked in to the binary, so this flag is hidden.
- # version:
+ ##
+ version:
+
## Directory in which to place container data.
+ ##
workDir: /concourse-work-dir
+
## IP address on which to listen for the Garden server.
- # bindIp: 127.0.0.1
+ ##
+ bindIp:
+
## Port on which to listen for the Garden server.
- # bindPort: 7777
- ## IP used to reach this worker from the ATC nodes.
- # peerIp:
- ## Minimum level of logs to see.
- # logLevel: info
+ ##
+ bindPort: 7777
+
+ ## Minimum level of logs to see. Possible values: debug, info, error.
+ ##
+ logLevel:
+
+ ## Maximum number of containers which can be swept in parallel.
+ ##
+ containerSweeperMaxInFlight: 5
+
+ ## Maximum number of volumes which can be swept in parallel.
+ ##
+ volumeSweeperMaxInFlight: 5
+
tsa:
- ## TSA host to forward the worker through. Can be specified multiple times.
- host: 127.0.0.1:2222
+ ## TSA host to forward the worker through.
+ ##
+ host:
+
+ ## TSA port to forward the worker through.
+ ##
+ port: 2222
+
## File containing a public key to expect from the TSA.
- # publicKey:
+ ##
+ publicKey:
+
## File containing the private key to use when authenticating to the TSA.
- # workerPrivateKey:
+ ##
+ workerPrivateKey:
+
+ ## API endpoint of an externally managed Garden server to use instead of
+ ## running the embedded Garden server.
+ ##
+ externalGardenUrl:
+
garden:
- ## Minimum level of logs to see.
- # logLevel: info
- ## format of log timestamps
- # timeFormat: unix-epoch
- ## Bind with TCP on the given IP.
- # bindIp:
- ## Bind with TCP on the given port.
- bindPort: 7777
- ## Bind with Unix on the given socket path.
- # bindSocket: /tmp/garden.sock
- ## Bind the debug server on the given IP.
- # debugBindIp:
- ## Bind the debug server to the given port.
- # debugBindPort: 17013
- ## Skip the preparation part of the host that requires root privileges
- # skipSetup:
- ## Directory in which to store container data.
- # depot: /var/run/gdn/depot
- ## Path in which to store properties.
- # propertiesPath:
- ## Path in which to store temporary sockets
- # consoleSocketsPath:
- ## Clean up proccess dirs on first invocation of wait
- # cleanupProcessDirsOnWait:
- ## Disable creation of privileged containers
- # disablePrivilegedContainers:
- ## The lowest numerical subordinate user ID the user is allowed to map
- # uidMapStart: 1
- ## The number of numerical subordinate user IDs the user is allowed to map
- # uidMapLength:
- ## The lowest numerical subordinate group ID the user is allowed to map
- # gidMapStart: 1
- ## The number of numerical subordinate group IDs the user is allowed to map
- # gidMapLength:
- ## Default rootfs to use when not specified on container creation.
- # defaultRootfs:
- ## Default time after which idle containers should expire.
- # defaultGraceTime:
- ## Clean up all the existing containers on startup.
- # destroyContainersOnStartup:
- ## Apparmor profile to use for unprivileged container processes
- # apparmor:
- ## Directory in which to extract packaged assets
- # assetsDir: /var/gdn/assets
- ## Path to the 'dadoo' binary.
- # dadooBin:
- ## Path to the 'nstar' binary.
- # nstarBin:
- ## Path to the 'tar' binary.
- # tarBin:
- ## path to the iptables binary
- # iptablesBin: /sbin/iptables
- ## path to the iptables-restore binary
- # iptablesRestoreBin: /sbin/iptables-restore
- ## Path execute as pid 1 inside each container.
- # initBin:
- ## Path to the runtime plugin binary.
- # runtimePlugin: runc
- ## Extra argument to pass to the runtime plugin. Can be specified multiple times.
- # runtimePluginExtraArg:
- ## Directory on which to store imported rootfs graph data.
- # graph:
- ## Disk usage of the graph dir at which cleanup should trigger, or -1 to disable graph cleanup.
- # graphCleanupThresholdInMegabytes: -1
- ## Image that should never be garbage collected. Can be specified multiple times.
- # persistentImage:
- ## Path to image plugin binary.
- # imagePlugin:
- ## Extra argument to pass to the image plugin to create unprivileged images. Can be specified multiple times.
- # imagePluginExtraArg:
- ## Path to privileged image plugin binary.
- # privilegedImagePlugin:
- ## Extra argument to pass to the image plugin to create privileged images. Can be specified multiple times.
- # privilegedImagePluginExtraArg:
- ## Docker registry API endpoint.
- # dockerRegistry: registry-1.docker.io
- ## Docker registry to allow connecting to even if not secure. Can be specified multiple times.
- # insecureDockerRegistry:
- ## Network range to use for dynamically allocated container subnets.
- # networkPool: 10.254.0.0/22
- ## Allow network access to the host machine.
- # allowHostAccess:
- ## Network ranges to which traffic from containers will be denied. Can be specified multiple times.
- # denyNetwork:
- ## DNS server IP address to use instead of automatically determined servers. Can be specified multiple times.
- # dnsServer:
- ## DNS server IP address to append to the automatically determined servers. Can be specified multiple times.
- # additionalDnsServer:
- ## Per line hosts entries. Can be specified multiple times and will be appended verbatim in order to /etc/hosts
- # additionalHostEntry:
- ## IP address to use to reach container's mapped ports. Autodetected if not specified.
- # externalIp:
- ## Start of the ephemeral port range used for mapped container ports.
- # portPoolStart: 61001
- ## Size of the port pool used for mapped container ports.
- # portPoolSize: 4534
- ## Path in which to store port pool properties.
- # portPoolPropertiesPath:
- ## MTU size for container network interfaces. Defaults to the MTU of the interface used for outbound access by the host. Max allowed value is 1500.
- # mtu:
- ## Path to network plugin binary.
- # networkPlugin:
- ## Extra argument to pass to the network plugin. Can be specified multiple times.
- # networkPluginExtraArg:
- ## Maximum number of microseconds each cpu share assigned to a container allows per quota period
- # cpuQuotaPerShare: 0
- ## Set hard limit for the tcp buf memory, value in bytes
- # tcpMemoryLimit: 0
- ## Default block IO weight assigned to a container
- # defaultContainerBlockioWeight: 0
- ## Maximum number of containers that can be created.
- # maxContainers: 0
- ## Disable swap memory limit
- # disableSwapLimit:
- ## Interval on which to emit metrics.
- # metricsEmissionInterval: 1m
- ## Origin identifier for Dropsonde-emitted metrics.
- # dropsondeOrigin: garden-linux
- ## Destination for Dropsonde-emitted metrics.
- # dropsondeDestination: 127.0.0.1:3457
- ## Path to a containerd socket.
- # containerdSocket:
- ## Use containerd to run processes in containers.
- # useContainerdForProcesses:
- ## Enable proxy DNS server.
- # dnsProxyEnable:
+ ## Path to the 'gdn' executable (or leave as 'gdn' to find it in $PATH)
+ ##
+ bin:
+
+ ## Path to a config file to use for Garden in INI format.
+ ##
+ ## For example, in a ConfigMap:
+ ##
+ ## [server]
+ ## max-containers = 100
+ ##
+ ## For information about the possible values:
+ ## Ref: https://bosh.io/jobs/garden?source=github.com/cloudfoundry/garden-runc-release
+ ##
+ config:
+
+ ## Enable a proxy DNS server for Garden
+ ##
+ dnsProxyEnable:
+
+ ## Use the insecure Houdini Garden backend.
+ ##
+ useHoudini:
+
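The INI snippet in the comment above would typically live in a ConfigMap mounted into the worker pod. A sketch, assuming a ConfigMap name of `garden-config` and that `config:` is then pointed at the mounted file path (both assumptions, not chart defaults):

```yaml
# Hypothetical ConfigMap carrying the Garden INI referenced above.
apiVersion: v1
kind: ConfigMap
metadata:
  name: garden-config               # assumed name
data:
  garden-config.ini: |
    [server]
    max-containers = 100
```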
baggageclaim:
- ## Minimum level of logs to see.
- # logLevel: info
+ ## Minimum level of logs to see. Possible values: debug, info, error.
+ ##
+ logLevel:
+
## IP address on which to listen for API traffic.
- # bindIp: 127.0.0.1
+ ##
+ bindIp:
+
## Port on which to listen for API traffic.
- # bindPort: 7788
+ ##
+ bindPort: 7788
+
+ ## IP address on which to listen for the pprof debugger endpoints.
+ ##
+ debugBindIp:
+
+ ## Disable remapping of user/group IDs in unprivileged volumes.
+ ##
+ disableUserNamespaces:
+
## Port on which to listen for baggageclaim pprof server.
- # bindDebugPort: 8099
+ ##
+ debugBindPort: 7787
+
## Directory in which to place volume data.
- # volumes:
+ ##
+ volumes:
+
## Driver to use for managing volumes.
+ ## Possible values: detect, naive, btrfs, and overlay.
+ ##
driver: naive
+
## Path to btrfs binary
- # btrfsBin: btrfs
+ ##
+ btrfsBin:
+
## Path to mkfs.btrfs binary
- # mkfsBin: mkfs.btrfs
+ ##
+ mkfsBin:
+
## Path to directory in which to store overlay data
- # overlaysDir:
- ## Interval on which to reap expired volumes.
- # reapInterval: 10s
+ ##
+ overlaysDir:
## Configuration values for Concourse Web components.
+## For more information regarding the characteristics of
+## Concourse Web nodes, see https://concourse-ci.org/concourse-web.html.
##
web:
+
+ ## Enable or disable the web component.
+ ## This allows the creation of worker-only releases by setting this to false.
+ ##
+ enabled: true
+
## Override the components name (defaults to web).
##
- # nameOverride:
+ nameOverride:
## Number of replicas.
##
replicas: 1
- ## Configures the liveness probe used to determine
- ## if the Web component is up.
- ## Note.: if you're upgrading Concourse from one version
- ## to another, the probe will probably fail for some time
- ## before migrations are finished - in such situations,
- ## either consider bumping the values set here.
+ ## Array of extra containers to run alongside the Concourse Web
+ ## container.
+ ##
+ ## Example:
+ ## - name: myapp-container
+ ## image: busybox
+ ## command: ['sh', '-c', 'echo Hello && sleep 3600']
+ ##
+ sidecarContainers: []
+
+ ## Configures the liveness probe used to determine if the Web component is up.
+ ## Note: if you're upgrading Concourse from one version to another, the probe will
+ ## probably fail for some time before migrations are finished - in such situations,
+ ## consider bumping the values set here.
+ ## Ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
+ ##
livenessProbe:
failureThreshold: 5
- httpGet:
- path: /api/v1/info
- port: atc
initialDelaySeconds: 10
periodSeconds: 15
timeoutSeconds: 3
+ httpGet:
+ path: /api/v1/info
+ port: atc
## Configures the readiness probes.
+ ## Ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
+ ##
readinessProbe:
httpGet:
path: /api/v1/info
port: atc
## Configure resource requests and limits.
- ## ref: https://kubernetes.io/docs/user-guide/compute-resources/
+ ## Ref: https://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
requests:
@@ -691,14 +1216,19 @@ web:
## Configure additional environment variables for the
## web containers.
- # env:
- # - name: CONCOURSE_LOG_LEVEL
- # value: "debug"
- # - name: CONCOURSE_TSA_LOG_LEVEL
- # value: "debug"
+ ## Example:
+ ##
+ ## - name: CONCOURSE_LOG_LEVEL
+ ## value: "debug"
+ ## - name: CONCOURSE_TSA_LOG_LEVEL
+ ## value: "debug"
+ ##
+ env:
- ## For managing where secrets should be mounted for the web agents
+ ## Where secrets should be mounted for the web container.
+ ##
keySecretsPath: "/concourse-keys"
+ teamSecretsPath: "/team-authorized-keys"
authSecretsPath: "/concourse-auth"
vaultSecretsPath: "/concourse-vault"
postgresqlSecretsPath: "/concourse-postgresql"
@@ -706,94 +1236,139 @@ web:
tlsSecretsPath: "/concourse-web-tls"
## Configure additional volumes for the
- ## web container(s)
+ ## web container(s).
+ ##
+ ## Example:
##
- # additionalVolumes:
- # - name: my-team-authorized-keys
- # configMap:
- # name: my-team-authorized-keys-config
+ ## - name: my-team-authorized-keys
+ ## configMap:
+ ## name: my-team-authorized-keys-config
+ ##
+ ## Ref: https://kubernetes.io/docs/concepts/storage/volumes/
+ ##
+ additionalVolumes: []
## Configure additional volumeMounts for the
## web container(s)
##
- # additionalVolumeMounts:
- # - name: my-team-authorized-keys
- # mountPath: /my-team-authorized-keys
+ ## Example:
+ ##
+ ## - name: my-team-authorized-keys
+ ## mountPath: /my-team-authorized-keys
+ ##
+ ## Ref: https://kubernetes.io/docs/concepts/storage/volumes/
+ ##
+ additionalVolumeMounts:
## Additional affinities to add to the web pods.
##
- # additionalAffinities:
- # nodeAffinity:
- # preferredDuringSchedulingIgnoredDuringExecution:
- # - weight: 50
- # preference:
- # matchExpressions:
- # - key: spot
- # operator: NotIn
- # values:
- # - "true"
+ ## Example:
+ ## nodeAffinity:
+ ## preferredDuringSchedulingIgnoredDuringExecution:
+ ## - weight: 50
+ ## preference:
+ ## matchExpressions:
+ ## - key: spot
+ ## operator: NotIn
+ ## values:
+ ## - "true"
+ ##
+ ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
+ ##
+ additionalAffinities:
## Annotations for the web nodes.
+ ##
+ ## Example:
+ ## key1: "value1"
+ ## key2: "value2"
+ ##
## Ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
+ ##
annotations: {}
- # annotations:
- # key1: "value1"
- # key2: "value2"
## Node selector for web nodes.
+ ## Ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
+ ##
nodeSelector: {}
## Tolerations for the web nodes.
+ ##
+ ## Example:
+ ## - key: "toleration=key"
+ ## operator: "Equal"
+ ## value: "value"
+ ## effect: "NoSchedule"
+ ##
## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
+ ##
tolerations: []
- # tolerations:
- # - key: "toleration=key"
- # operator: "Equal"
- # value: "value"
- # effect: "NoSchedule"
+
+ ## Strategy for web deployment updates.
+ ##
+ ## Example:
+ ##
+ ## strategy:
+ ## type: RollingUpdate
+ ## rollingUpdate:
+ ## maxSurge: 1
+ ## maxUnavailable: 25%
+ strategy: {}
## Service configuration.
- ## ref: https://kubernetes.io/docs/user-guide/services/
+ ## Ref: https://kubernetes.io/docs/user-guide/services/
##
service:
## For minikube, set this to ClusterIP, elsewhere use LoadBalancer or NodePort
- ## ref: https://kubernetes.io/docs/user-guide/services/#publishing-services---service-types
+ ## Ref: https://kubernetes.io/docs/user-guide/services/#publishing-services---service-types
##
type: ClusterIP
- ## When using web.service.type: LoadBalancer, sets the user-specified load balancer IP
- # loadBalancerIP: 172.217.1.174
+ ## When using `web.service.type: LoadBalancer`, sets the user-specified load balancer IP.
+ ## Example: 172.217.1.174
+ ##
+ loadBalancerIP:
- # # Additional Labels to be added to the web service.
- # labels:
+ ## Additional Labels to be added to the web service.
+ ##
+ labels:
## Annotations to be added to the web service.
##
- # annotations:
- # prometheus.io/probe: "true"
- # prometheus.io/probe_path: "/"
- #
- # ## When using web.service.type: LoadBalancer, enable HTTPS with an ACM cert
- # service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:eu-west-1:123456789:certificate/abc123-abc123-abc123-abc123"
- # service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
- # service.beta.kubernetes.io/aws-load-balancer-backend-port: "atc"
- # service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
- #
- # ## When using web.service.type: LoadBalancer, whitelist the load balancer to particular IPs
- # loadBalancerSourceRanges:
- # - 192.168.1.10/32
-
- # When using web.service.type: NodePort, sets the nodePort for atc
- # atcNodePort: 30150
- #
- # When using web.service.type: NodePort, sets the nodePort for atc tls
- # atcTlsNodePort: 30151
- #
- # When using web.service.type: NodePort, sets the nodePort for tsa
- # tsaNodePort: 30152
+ ## Example:
+ ##
+ ## prometheus.io/probe: "true"
+ ## prometheus.io/probe_path: "/"
+ ##
+ ## When using `web.service.type: LoadBalancer` in AWS, enable HTTPS with an ACM cert:
+ ##
+ ## service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:eu-west-1:123456789:certificate/abc123-abc123-abc123-abc123"
+ ## service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
+ ## service.beta.kubernetes.io/aws-load-balancer-backend-port: "atc"
+ ## service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
+ ##
+ annotations:
+
+ ## When using `web.service.type: LoadBalancer`, whitelist the load balancer to particular IPs
+ ## Example:
+ ## - 192.168.1.10/32
+ ##
+ loadBalancerSourceRanges:
+
+ ## When using `web.service.type: NodePort`, sets the nodePort for atc
+ ##
+ atcNodePort:
+
+ ## When using `web.service.type: NodePort`, sets the nodePort for atc tls
+ ##
+ atcTlsNodePort:
+
+ ## When using `web.service.type: NodePort`, sets the nodePort for tsa
+ ##
+ tsaNodePort:
## Ingress configuration.
- ## ref: https://kubernetes.io/docs/user-guide/ingress/
+ ## Ref: https://kubernetes.io/docs/user-guide/ingress/
##
ingress:
## Enable Ingress.
@@ -801,45 +1376,86 @@ web:
enabled: false
## Annotations to be added to the web ingress.
+ ## Example:
+ ## kubernetes.io/ingress.class: nginx
+ ## kubernetes.io/tls-acme: 'true'
##
- # annotations:
- # kubernetes.io/ingress.class: nginx
- # kubernetes.io/tls-acme: 'true'
+ annotations:
## Hostnames.
## Must be provided if Ingress is enabled.
+ ## Example:
+ ## - concourse.domain.com
##
- # hosts:
- # - concourse.domain.com
+ hosts:
## TLS configuration.
## Secrets must be manually created in the namespace.
+ ## Example:
+ ## - secretName: concourse-web-tls
+ ## hosts:
+ ## - concourse.domain.com
##
- # tls:
- # - secretName: concourse-web-tls
- # hosts:
- # - concourse.domain.com
- #
- #
+ tls:
## Configuration values for Concourse Worker components.
+## For more information regarding the characteristics of
+## Concourse Workers, see https://concourse-ci.org/concourse-worker.html
##
worker:
+
+ ## Enable or disable the worker component.
+ ## This allows the creation of web-only releases by setting this to false.
+ ##
+ enabled: true
+
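Combined with `web.enabled` above, these toggles make split deployments possible. A worker-only override might look like the following sketch (the TSA host is a placeholder for an externally managed web endpoint, not a chart default):

```yaml
# values-worker-only.yaml (sketch): run only workers, pointing at an
# externally managed web/TSA endpoint.
web:
  enabled: false
postgresql:
  enabled: false
worker:
  enabled: true
concourse:
  worker:
    tsa:
      host: concourse-web.example.com   # assumed external TSA address
      port: 2222
```

Applied with something like `helm install -f values-worker-only.yaml stable/concourse`.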
## Override the components name (defaults to worker).
##
- # nameOverride:
+ nameOverride:
+
+ ## Removes any previous state created in `concourse.worker.workDir`.
+ ##
+ cleanUpWorkDirOnStart: true
## Number of replicas.
##
replicas: 2
+ ## Array of extra containers to run alongside the Concourse worker
+ ## container.
+ ##
+ ## Example:
+ ##
+ ## - name: myapp-container
+ ## image: busybox
+ ## command: ['sh', '-c', 'echo Hello && sleep 3600']
+ ##
+ sidecarContainers: []
+
## Minimum number of workers available after an eviction
- ## ref: https://kubernetes.io/docs/admin/disruptions/
+ ## Ref: https://kubernetes.io/docs/admin/disruptions/
##
minAvailable: 1
+ ## Configures the liveness probe used to determine if the Worker component is up.
+ ## Ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
+ ##
+ livenessProbe:
+ failureThreshold: 5
+ initialDelaySeconds: 10
+ periodSeconds: 15
+ timeoutSeconds: 3
+ httpGet:
+ path: /
+ port: worker-hc
+
+ ## Configures the readiness probes.
+ ## Ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
+ ##
+ readinessProbe: {}
+
## Configure resource requests and limits.
- ## ref: https://kubernetes.io/docs/user-guide/compute-resources/
+ ## Ref: https://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
requests:
@@ -848,97 +1464,101 @@ worker:
## Configure additional environment variables for the
## worker container(s)
- # env:
- # - name: http_proxy
- # value: "http://proxy.your-domain.com:3128"
- # - name: https_proxy
- # value: "http://proxy.your-domain.com:3128"
- # - name: no_proxy
- # value: "your-domain.com"
- # - name: CONCOURSE_GARDEN_DNS_SERVER
- # value: "8.8.8.8"
- # - name: CONCOURSE_GARDEN_DNS_PROXY_ENABLE
- # value: "true"
- # - name: CONCOURSE_GARDEN_ALLOW_HOST_ACCESS
- # value: "true"
-
+ ##
+ ## Example:
+ ##
+ ## - name: CONCOURSE_NAME
+ ## value: "anything"
+ ##
+ env: []
## For managing where secrets should be mounted for worker agents
+ ##
keySecretsPath: "/concourse-keys"
## Configure additional volumeMounts for the
## worker container(s)
- # additionalVolumeMounts:
- # - name: concourse-baggageclaim
- # mountPath: /baggageclaim
+ ##
+ ## Example:
+ ## - name: concourse-baggageclaim
+ ## mountPath: /baggageclaim
+ ##
+ additionalVolumeMounts: []
## Annotations to be added to the worker pods.
##
- # annotations:
- # iam.amazonaws.com/role: arn:aws:iam::123456789012:role/concourse
- #
+ ## Example:
+ ##
+ ## iam.amazonaws.com/role: arn:aws:iam::123456789012:role/concourse
+ ##
+ annotations: {}
## Node selector for the worker nodes.
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
+ ##
nodeSelector: {}
- # nodeSelector: {type: concourse}
## Additional affinities to add to the worker pods.
- ## Useful if you prefer to run workers on non-spot instances, for example
- ##
- # additionalAffinities:
- # nodeAffinity:
- # preferredDuringSchedulingIgnoredDuringExecution:
- # - weight: 50
- # preference:
- # matchExpressions:
- # - key: spot
- # operator: NotIn
- # values:
- # - "true"
+ ## Useful if you prefer to run workers on non-spot instances, for example.
+ ##
+ ## Example:
+ ##
+ ## nodeAffinity:
+ ## preferredDuringSchedulingIgnoredDuringExecution:
+ ## - weight: 50
+ ## preference:
+ ## matchExpressions:
+ ## - key: spot
+ ## operator: NotIn
+ ## values:
+ ## - "true"
+ ##
+ additionalAffinities: {}
## Configure additional volumes for the
- ## worker container(s)
- # additionalVolumes:
- # - name: concourse-baggageclaim
- # hostPath:
- # path: /dev/nvme0n1
- # type: BlockDevice
- #
- # As a special exception, this allows taking over the `concourse-work-dir`
- # volume (from the default emptyDir) if `persistence.enabled` is false:
- #
- # additionalVolumes:
- # - name: concourse-work-dir
- # hostPath:
- # path: /mnt/locally-mounted-fast-disk/concourse
- # type: DirectoryOrCreate
+ ## worker container(s).
+ ## Example:
+ ##
+ ## - name: concourse-baggageclaim
+ ## hostPath:
+ ## path: /dev/nvme0n1
+ ## type: BlockDevice
+ ##
+ ##
+ ## As a special exception, this allows taking over the `concourse-work-dir`
+ ## volume (from the default emptyDir) if `persistence.enabled` is false:
+ ##
+ ## additionalVolumes:
+ ## - name: concourse-work-dir
+ ## hostPath:
+ ## path: /mnt/locally-mounted-fast-disk/concourse
+ ## type: DirectoryOrCreate
+ ##
+ additionalVolumes: []
## Whether the workers should be forced to run on separate nodes.
## This is accomplished by setting their AntiAffinity with requiredDuringSchedulingIgnoredDuringExecution as opposed to preferred
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity-beta-feature
+ ##
hardAntiAffinity: false
## Tolerations for the worker nodes.
## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
+ ##
+ ## For example:
+ ##
+ ## - key: "toleration=key"
+ ## operator: "Equal"
+ ## value: "value"
+ ## effect: "NoSchedule"
+ ##
tolerations: []
- # tolerations:
- # - key: "toleration=key"
- # operator: "Equal"
- # value: "value"
- # effect: "NoSchedule"
## Time to allow the pod to terminate before being forcefully terminated. This should provide time for
## the worker to retire, i.e. drain its tasks. See https://concourse-ci.org/worker-internals.html for worker
## lifecycle semantics.
- terminationGracePeriodSeconds: 60
-
- ## If any of the strings are found in logs, the worker's livenessProbe will fail and trigger a pod restart.
- ## Specify one string per line, exact matching is used.
##
- fatalErrors: |-
- guardian.api.garden-server.create.failed
- baggageclaim.api.volume-server.create-volume-async.failed-to-create
+ terminationGracePeriodSeconds: 60
## Strategy for StatefulSet updates (requires Kubernetes 1.6+)
## Ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset
@@ -950,14 +1570,18 @@ worker:
##
## "OrderedReady" is default. "Parallel" means worker pods will launch or terminate
## in parallel.
+ ##
podManagementPolicy: Parallel
## When persistence is disabled this value will be used to limit the emptyDir volume size
## Ref: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir
- # emptyDirSize: 20Gi
+ ##
+ ## Example: 20Gi
+ ##
+ emptyDirSize:
## Persistent Volume Storage configuration.
-## ref: https://kubernetes.io/docs/user-guide/persistent-volumes
+## Ref: https://kubernetes.io/docs/user-guide/persistent-volumes
##
persistence:
## Enable persistence using Persistent Volume Claims.
@@ -974,7 +1598,7 @@ persistence:
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
- # storageClass: "-"
+ storageClass:
## Persistent Volume Access Mode.
##
@@ -985,12 +1609,13 @@ persistence:
size: 20Gi
## Configuration values for the postgresql dependency.
-## ref: https://github.com/kubernetes/charts/blob/master/stable/postgresql/README.md
+## Ref: https://github.com/helm/charts/blob/master/stable/postgresql/README.md
##
postgresql:
## Use the PostgreSQL chart dependency.
## Set to false if bringing your own PostgreSQL, and set secret value postgresql-uri.
+ ## Should be set to false if using the chart as a worker-only deployment.
##
enabled: true
@@ -1008,33 +1633,40 @@ postgresql:
postgresDatabase: concourse
## Persistent Volume Storage configuration.
- ## ref: https://kubernetes.io/docs/user-guide/persistent-volumes
+ ## Ref: https://kubernetes.io/docs/user-guide/persistent-volumes
##
persistence:
## Enable PostgreSQL persistence using Persistent Volume Claims.
##
enabled: true
- ## concourse data Persistent Volume Storage Class
+
+ ## Concourse data Persistent Volume Storage Class.
+ ##
## If defined, storageClassName:
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
- # storageClass: "-"
+ storageClass:
+
## Persistent Volume Access Mode.
##
accessMode: ReadWriteOnce
+
## Persistent Volume Storage Size.
##
size: 8Gi
-## For RBAC support:
+## For Kubernetes RBAC support:
+##
rbac:
- # true here enables creation of rbac resources
+ ## Enable the creation of RBAC resources.
+ ##
create: true
- # rbac version
+ ## RBAC API version.
+ ##
apiVersion: v1beta1
## The name of the service account to use for web pods if rbac.create is false
@@ -1048,21 +1680,36 @@ rbac:
## For managing secrets using Helm
##
secrets:
-
- ## List of username:password or username:bcrypted_password combinations for all your local concourse users.
- localUsers: "test:test"
## Create the secret resource from the following values. Set this to
## false to manage these secrets outside Helm.
##
create: true
+ ## Array of team names and public keys for each team's external workers.
+ ##
+ ## Example:
+ ## - team: main
+ ## key: |-
+ ## ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDYBQ9fG6IML+qsFaMh1Pl+81wyUwRilHdfhItAiAsLVQsOwI5+V4pn5aLhHPBuRQqIqYmbkZ7I1VUIN1+90PVJ3X7l9qqanb85AHMtLujw1j9u0zDyH2XHgpUloknUQzUSLIZjjU3Hn3Uo/XikF+vT8104isO7Ym8Xp7sIcRuvOQ3nuRsFVCRogxpLTVHD/k57rwYVqWWLaKLwvx01ZVXOq4GHk/BVaKa9ODC/dNgbZMfwvVVXuf7/NFGmSMyXb49Si4aoP4Gn7jAX6GngBbm/bgKqO0skQy/ggQm/YVF+s5q4EhleMBLVJKD1VpM5LeLDFpiu/y4bVd8wUcgK+QQ9 Concourse
+ ##
+ ## Make sure to check the security caveats here: https://concourse-ci.org/teams-caveats.html
+ ## Further reading: https://github.com/concourse/concourse/issues/1865#issuecomment-464166994
+ ## https://concourse-ci.org/global-resources.html#complications-with-reusing-containers
+ ##
+ teamAuthorizedKeys:
+
+ ## List of `username:password` or `username:bcrypted_password` combinations for all your local concourse users.
+ ##
+ localUsers: "test:test"
+
## The TLS certificate and private key for the web component to be able to terminate
## TLS connections.
- # webTlsCert:
- # webTlsKey:
+ ##
+ webTlsCert:
+ webTlsKey:
## Concourse Host Keys.
- ## ref: https://concourse-ci.org/install.html#generating-keys
+ ## Ref: https://concourse-ci.org/install.html#generating-keys
##
hostKey: |-
-----BEGIN RSA PRIVATE KEY-----
@@ -1097,7 +1744,7 @@ secrets:
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDYBQ9fG6IML+qsFaMh1Pl+81wyUwRilHdfhItAiAsLVQsOwI5+V4pn5aLhHPBuRQqIqYmbkZ7I1VUIN1+90PVJ3X7l9qqanb85AHMtLujw1j9u0zDyH2XHgpUloknUQzUSLIZjjU3Hn3Uo/XikF+vT8104isO7Ym8Xp7sIcRuvOQ3nuRsFVCRogxpLTVHD/k57rwYVqWWLaKLwvx01ZVXOq4GHk/BVaKa9ODC/dNgbZMfwvVVXuf7/NFGmSMyXb49Si4aoP4Gn7jAX6GngBbm/bgKqO0skQy/ggQm/YVF+s5q4EhleMBLVJKD1VpM5LeLDFpiu/y4bVd8wUcgK+QQ9 Concourse
## Concourse Session Signing Keys.
- ## ref: https://concourse-ci.org/install.html#generating-keys
+ ## Ref: https://concourse-ci.org/install.html#generating-keys
##
sessionSigningKey: |-
-----BEGIN RSA PRIVATE KEY-----
@@ -1129,7 +1776,7 @@ secrets:
-----END RSA PRIVATE KEY-----
## Concourse Worker Keys.
- ## ref: https://concourse-ci.org/install.html#generating-keys
+ ## Ref: https://concourse-ci.org/install.html#generating-keys
##
workerKey: |-
-----BEGIN RSA PRIVATE KEY-----
@@ -1164,95 +1811,105 @@ secrets:
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC496FSYFcBAKgDtMsBAJiF/6/NxlXKP5UZecyEsedYuTt1GOgJTwaA1qZ1LmHsbfLDE68oDdiM4uvxfI4wtLhz57w3u0jOUxZ2JeF7SVwEf1nVqLn4Gh/f8GUNQGSyIp1zUD5Bx9fq0PAyQ47mt7Ufi84rcf8LKl7nzAIHTcdg2BvTkQN9bUGPaq/Pb1W2bKPAQy4OzXTSIyrAJ89TH2jFeaZfyxQFGbD9jVHH/yl0oiMrDeaRYgccE5II+KY7WoLjsBry/9Qf2ERELKTK4UeIGIqWci9lab1ti+GxFPPiC3krNFjo4jShV4eUs4cNIrjwNrxVaKPXmU6o7Y3Hpayx Concourse
## Secrets for DB access
- # postgresUser:
- # postgresPassword:
- # postgresCaCert:
- # postgresClientCert:
- # postgresClientKey:
+ ##
+ postgresUser:
+ postgresPassword:
+ postgresCaCert:
+ postgresClientCert:
+ postgresClientKey:
## Secrets for DB encryption
##
- # encryptionKey:
- # oldEncryptionKey:
+ encryptionKey:
+ oldEncryptionKey:
## Secrets for SSM AWS access
- # awsSsmAccessKey:
- # awsSsmSecretKey:
- # awsSsmSessionToken:
+ ##
+ awsSsmAccessKey:
+ awsSsmSecretKey:
+ awsSsmSessionToken:
## Secrets for Secrets Manager AWS access
- # awsSecretsmanagerAccessKey:
- # awsSecretsmanagerSecretKey:
- # awsSecretsmanagerSessionToken:
+ ##
+ awsSecretsmanagerAccessKey:
+ awsSecretsmanagerSecretKey:
+ awsSecretsmanagerSessionToken:
## Secrets for CF OAuth
- # cfClientId:
- # cfClientSecret:
- # cfCaCert: |-
+ ##
+ cfClientId:
+ cfClientSecret:
+ cfCaCert:
+
+ ## Secrets for BitbucketCloud OAuth.
+ ##
+ bitbucketCloudClientId:
+ bitbucketCloudClientSecret:
## Secrets for GitHub OAuth.
##
- # githubClientId:
- # githubClientSecret:
- # githubCaCert: |-
+ githubClientId:
+ githubClientSecret:
+ githubCaCert:
## Secrets for GitLab OAuth.
##
- # gitlabClientId:
- # gitlabClientSecret:
+ gitlabClientId:
+ gitlabClientSecret:
## Secrets for LDAP Auth.
##
- # ldapCaCert: |-
+ ldapCaCert:
## Secrets for generic OAuth.
##
- # oauthClientId:
- # oauthClientSecret:
- # oauthCaCert: |-
+ oauthClientId:
+ oauthClientSecret:
+ oauthCaCert:
## Secrets for oidc OAuth.
##
- # oidcClientId:
- # oidcClientSecret:
- # oidcCaCert: |-
+ oidcClientId:
+ oidcClientSecret:
+ oidcCaCert:
 ## Secrets for using HashiCorp Vault as a credential manager.
##
## if the Vault server is using a self-signed certificate, provide the CA public key.
## the value will be written to /concourse-vault/ca.cert
##
- # vaultCaCert: |-
+ vaultCaCert:
## initial periodic token issued for concourse
- ## ref: https://www.vaultproject.io/docs/concepts/tokens.html#periodic-tokens
+ ## Ref: https://www.vaultproject.io/docs/concepts/tokens.html#periodic-tokens
##
- # vaultClientToken:
+ vaultClientToken:
## vault authentication parameters
- ## Paramter to pass when logging in via the backend
+ ## Parameter to pass when logging in via the backend
 ## Required for "approle" authentication method
## e.g. "role_id=x,secret_id=x"
- ## ref: https://concourse-ci.org/creds.html#vault-auth-param=NAME=VALUE
+ ## Ref: https://concourse-ci.org/creds.html#vault-auth-param=NAME=VALUE
##
- # vaultAuthParam:
+ vaultAuthParam:
## provide the client certificate for authenticating with the [TLS](https://www.vaultproject.io/docs/auth/cert.html) backend
## the value will be written to /concourse-vault/client.cert
## make sure to also set credentialManager.vault.authBackend to `cert`
##
- # vaultClientCert: |-
+ vaultClientCert:
## provide the client key for authenticating with the [TLS](https://www.vaultproject.io/docs/auth/cert.html) backend
## the value will be written to /concourse-vault/client.key
## make sure to also set credentialManager.vault.authBackend to `cert`
##
- # vaultClientKey: |-
+ vaultClientKey:
## If influxdb metrics are enabled and authentication is required,
## provide a password here to authenticate with the influxdb server configured.
##
- # influxdbPassword:
+ influxdbPassword:
## SSL certificate used to verify the Syslog server for draining build logs.
- # syslogCaCert: |-
+ ##
+ syslogCaCert:
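
A hedged override sketch for the PostgreSQL persistence settings shown earlier in this values file (assuming they sit under the bundled chart's `postgresql` block, as is conventional for subchart values; the size is illustrative):

```yaml
# values-override.yaml (sketch)
postgresql:
  persistence:
    enabled: true
    # "-" sets storageClassName: "" and disables dynamic provisioning
    storageClass: "-"
    accessMode: ReadWriteOnce
    size: 20Gi
```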
diff --git a/stable/consul/Chart.yaml b/stable/consul/Chart.yaml
index 48883341de27..e3e2899e9121 100755
--- a/stable/consul/Chart.yaml
+++ b/stable/consul/Chart.yaml
@@ -1,6 +1,7 @@
+apiVersion: v1
name: consul
home: https://github.com/hashicorp/consul
-version: 3.5.3
+version: 3.6.2
appVersion: 1.0.0
description: Highly available and distributed service discovery and key-value store
designed with support for the modern data center to make distributed systems and
diff --git a/stable/consul/README.md b/stable/consul/README.md
index 9573d3cebd43..5ed72bb93396 100644
--- a/stable/consul/README.md
+++ b/stable/consul/README.md
@@ -70,6 +70,7 @@ The following table lists the configurable parameters of the consul chart and th
| `test.imageTag` | Test container image tag (used for helm test) | `v1.4.8-bash` |
| `test.rbac.create` | Create rbac for test container | `false` |
| `test.rbac.serviceAccountName` | Name of existing service account for test container | `` |
+| `additionalLabels` | Add labels to Pod and StatefulSet | `{}` |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`.
diff --git a/stable/consul/templates/consul-statefulset.yaml b/stable/consul/templates/consul-statefulset.yaml
index 74a359e857e6..8a78c4c2a580 100644
--- a/stable/consul/templates/consul-statefulset.yaml
+++ b/stable/consul/templates/consul-statefulset.yaml
@@ -7,6 +7,9 @@ metadata:
release: {{ .Release.Name | quote }}
chart: {{ template "consul.chart" . }}
component: "{{ .Release.Name }}-{{ .Values.Component }}"
+{{- if .Values.additionalLabels }}
+{{ toYaml .Values.additionalLabels | indent 4 }}
+{{- end }}
spec:
serviceName: "{{ template "consul.fullname" . }}"
replicas: {{ default 3 .Values.Replicas }}
@@ -24,6 +27,9 @@ spec:
release: {{ .Release.Name | quote }}
chart: {{ template "consul.chart" . }}
component: "{{ .Release.Name }}-{{ .Values.Component }}"
+{{- if .Values.additionalLabels }}
+{{ toYaml .Values.additionalLabels | indent 8 }}
+{{- end }}
spec:
securityContext:
fsGroup: 1000
@@ -139,7 +145,7 @@ spec:
{{- range .Values.ConsulConfig }}
-config-dir /etc/consul/userconfig/{{ .name }} \
{{- end}}
- {{- if .Values.uiService.enabled }}
+ {{- if .Values.ui.enabled }}
-ui \
{{- end }}
{{- if .Values.DisableHostNodeId }}
diff --git a/stable/consul/values.yaml b/stable/consul/values.yaml
index b09c894a242e..6e15bb4f6c54 100644
--- a/stable/consul/values.yaml
+++ b/stable/consul/values.yaml
@@ -136,3 +136,4 @@ test:
nodeSelector: {}
tolerations: []
+additionalLabels: {}
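
The new `additionalLabels` value is rendered verbatim into both the StatefulSet labels and the Pod template labels. A sketch of an override, with hypothetical label keys:

```yaml
additionalLabels:
  team: platform
  cost-center: "1234"
```

The same labels can also be supplied on the command line, e.g. `--set additionalLabels.team=platform`.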
diff --git a/stable/coredns/Chart.yaml b/stable/coredns/Chart.yaml
index e20bc75ba4af..1e1c1f0c3283 100644
--- a/stable/coredns/Chart.yaml
+++ b/stable/coredns/Chart.yaml
@@ -1,6 +1,7 @@
+apiVersion: v1
name: coredns
-version: 1.2.4
-appVersion: 1.3.1
+version: 1.5.3
+appVersion: 1.5.0
description: CoreDNS is a DNS server that chains plugins and provides Kubernetes DNS Services
keywords:
- coredns
diff --git a/stable/coredns/templates/_helpers.tpl b/stable/coredns/templates/_helpers.tpl
index be5fef1ce66a..a2efcb43e857 100644
--- a/stable/coredns/templates/_helpers.tpl
+++ b/stable/coredns/templates/_helpers.tpl
@@ -45,6 +45,10 @@ Generate the list of ports automatically from the server definitions
*/}}
{{- range .zones -}}
{{- if has (default "" .scheme) (list "dns://") -}}
+ {{/* Optionally enable tcp for this service as well */}}
+ {{- if eq .use_tcp true }}
+ {{- $innerdict := set $innerdict "istcp" true -}}
+ {{- end }}
{{- $innerdict := set $innerdict "isudp" true -}}
{{- end -}}
@@ -53,9 +57,10 @@ Generate the list of ports automatically from the server definitions
{{- end -}}
{{- end -}}
- {{/* If none of the zones specify scheme, default to UDP (CoreDNS defaults to dns://) */}}
+ {{/* If none of the zones specify scheme, default to dns:// on both tcp & udp */}}
{{- if and (not (index $innerdict "istcp")) (not (index $innerdict "isudp")) -}}
{{- $innerdict := set $innerdict "isudp" true -}}
+ {{- $innerdict := set $innerdict "istcp" true -}}
{{- end -}}
{{/* Write the dict back into the outer dict */}}
@@ -65,10 +70,10 @@ Generate the list of ports automatically from the server definitions
{{/* Write out the ports according to the info collected above */}}
{{- range $port, $innerdict := $ports -}}
{{- if index $innerdict "isudp" -}}
- {{- printf "- {port: %v, protocol: UDP}\n" $port -}}
+ {{- printf "- {port: %v, protocol: UDP, name: udp-%s}\n" $port $port -}}
{{- end -}}
{{- if index $innerdict "istcp" -}}
- {{- printf "- {port: %v, protocol: TCP}\n" $port -}}
+ {{- printf "- {port: %v, protocol: TCP, name: tcp-%s}\n" $port $port -}}
{{- end -}}
{{- end -}}
{{- end -}}
@@ -99,6 +104,10 @@ Generate the list of ports automatically from the server definitions
*/}}
{{- range .zones -}}
{{- if has (default "" .scheme) (list "dns://") -}}
+ {{/* Optionally enable tcp for this service as well */}}
+ {{- if eq .use_tcp true }}
+ {{- $innerdict := set $innerdict "istcp" true -}}
+ {{- end }}
{{- $innerdict := set $innerdict "isudp" true -}}
{{- end -}}
@@ -107,9 +116,10 @@ Generate the list of ports automatically from the server definitions
{{- end -}}
{{- end -}}
- {{/* If none of the zones specify scheme, default to UDP (CoreDNS defaults to dns://) */}}
+ {{/* If none of the zones specify scheme, default to dns:// on both tcp & udp */}}
{{- if and (not (index $innerdict "istcp")) (not (index $innerdict "isudp")) -}}
{{- $innerdict := set $innerdict "isudp" true -}}
+ {{- $innerdict := set $innerdict "istcp" true -}}
{{- end -}}
{{/* Write the dict back into the outer dict */}}
@@ -119,10 +129,10 @@ Generate the list of ports automatically from the server definitions
{{/* Write out the ports according to the info collected above */}}
{{- range $port, $innerdict := $ports -}}
{{- if index $innerdict "isudp" -}}
- {{- printf "- {containerPort: %v, protocol: UDP}\n" $port -}}
+ {{- printf "- {containerPort: %v, protocol: UDP, name: udp-%s}\n" $port $port -}}
{{- end -}}
{{- if index $innerdict "istcp" -}}
- {{- printf "- {containerPort: %v, protocol: TCP}\n" $port -}}
+ {{- printf "- {containerPort: %v, protocol: TCP, name: tcp-%s}\n" $port $port -}}
{{- end -}}
{{- end -}}
{{- end -}}
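
Under the updated helper logic, a server block that specifies no scheme now renders ports for both protocols (previously UDP only), and each port gets a name. A sketch of the resulting service ports for an explicit port 53:

```yaml
# Input server definition (values.yaml):
servers:
  - zones:
      - zone: .
    port: 53

# servicePorts helper output (sketch):
# - {port: 53, protocol: UDP, name: udp-53}
# - {port: 53, protocol: TCP, name: tcp-53}
```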
diff --git a/stable/coredns/templates/clusterrole.yaml b/stable/coredns/templates/clusterrole.yaml
index a45c65b4a3e6..029d13e272ac 100644
--- a/stable/coredns/templates/clusterrole.yaml
+++ b/stable/coredns/templates/clusterrole.yaml
@@ -4,15 +4,15 @@ kind: ClusterRole
metadata:
name: {{ template "coredns.fullname" . }}
labels:
- heritage: {{ .Release.Service | quote }}
- release: {{ .Release.Name | quote }}
- chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
+ app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
+ app.kubernetes.io/instance: {{ .Release.Name | quote }}
+ helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
{{- if .Values.isClusterService }}
k8s-app: {{ .Chart.Name | quote }}
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "CoreDNS"
{{- end }}
- app: {{ template "coredns.name" . }}
+ app.kubernetes.io/name: {{ template "coredns.name" . }}
rules:
- apiGroups:
- ""
diff --git a/stable/coredns/templates/clusterrolebinding.yaml b/stable/coredns/templates/clusterrolebinding.yaml
index 80ade3d25e03..49da9b548455 100644
--- a/stable/coredns/templates/clusterrolebinding.yaml
+++ b/stable/coredns/templates/clusterrolebinding.yaml
@@ -4,21 +4,21 @@ kind: ClusterRoleBinding
metadata:
name: {{ template "coredns.fullname" . }}
labels:
- heritage: {{ .Release.Service | quote }}
- release: {{ .Release.Name | quote }}
- chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
+ app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
+ app.kubernetes.io/instance: {{ .Release.Name | quote }}
+ helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
{{- if .Values.isClusterService }}
k8s-app: {{ .Chart.Name | quote }}
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "CoreDNS"
{{- end }}
- app: {{ template "coredns.name" . }}
+ app.kubernetes.io/name: {{ template "coredns.name" . }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ template "coredns.fullname" . }}
subjects:
- kind: ServiceAccount
- name: {{ template "coredns.fullname" . }}
+ name: {{ template "coredns.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
{{- end }}
diff --git a/stable/coredns/templates/configmap.yaml b/stable/coredns/templates/configmap.yaml
index eccd995092f2..b7e1a667fabb 100644
--- a/stable/coredns/templates/configmap.yaml
+++ b/stable/coredns/templates/configmap.yaml
@@ -3,15 +3,15 @@ kind: ConfigMap
metadata:
name: {{ template "coredns.fullname" . }}
labels:
- heritage: {{ .Release.Service | quote }}
- release: {{ .Release.Name | quote }}
- chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
+ app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
+ app.kubernetes.io/instance: {{ .Release.Name | quote }}
+ helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
{{- if .Values.isClusterService }}
k8s-app: {{ .Chart.Name | quote }}
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "CoreDNS"
{{- end }}
- app: {{ template "coredns.name" . }}
+ app.kubernetes.io/name: {{ template "coredns.name" . }}
data:
Corefile: |-
{{ range .Values.servers }}
diff --git a/stable/coredns/templates/deployment.yaml b/stable/coredns/templates/deployment.yaml
index db0ba5e68ff9..84fc7ceb3de2 100644
--- a/stable/coredns/templates/deployment.yaml
+++ b/stable/coredns/templates/deployment.yaml
@@ -3,32 +3,32 @@ kind: Deployment
metadata:
name: {{ template "coredns.fullname" . }}
labels:
- heritage: {{ .Release.Service | quote }}
- release: {{ .Release.Name | quote }}
- chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
+ app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
+ app.kubernetes.io/instance: {{ .Release.Name | quote }}
+ helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
{{- if .Values.isClusterService }}
k8s-app: {{ .Chart.Name | quote }}
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "CoreDNS"
{{- end }}
- app: {{ template "coredns.name" . }}
+ app.kubernetes.io/name: {{ template "coredns.name" . }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
- release: {{ .Release.Name | quote }}
+ app.kubernetes.io/instance: {{ .Release.Name | quote }}
{{- if .Values.isClusterService }}
k8s-app: {{ .Chart.Name | quote }}
{{- end }}
- app: {{ template "coredns.name" . }}
+ app.kubernetes.io/name: {{ template "coredns.name" . }}
template:
metadata:
labels:
{{- if .Values.isClusterService }}
k8s-app: {{ .Chart.Name | quote }}
{{- end }}
- app: {{ template "coredns.name" . }}
- release: {{ .Release.Name | quote }}
+ app.kubernetes.io/name: {{ template "coredns.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name | quote }}
annotations:
checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
{{- if .Values.isClusterService }}
@@ -37,6 +37,9 @@ spec:
{{- end }}
spec:
serviceAccountName: {{ template "coredns.serviceAccountName" . }}
+ {{- if .Values.priorityClassName }}
+ priorityClassName: {{ .Values.priorityClassName | quote }}
+ {{- end }}
{{- if .Values.isClusterService }}
dnsPolicy: Default
{{- end }}
diff --git a/stable/coredns/templates/podsecuritypolicy.yaml b/stable/coredns/templates/podsecuritypolicy.yaml
index eacb7ee228e4..c3d8b41d919f 100644
--- a/stable/coredns/templates/podsecuritypolicy.yaml
+++ b/stable/coredns/templates/podsecuritypolicy.yaml
@@ -4,15 +4,15 @@ kind: PodSecurityPolicy
metadata:
name: {{ template "coredns.fullname" . }}
labels:
- heritage: {{ .Release.Service | quote }}
- release: {{ .Release.Name | quote }}
- chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
+ app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
+ app.kubernetes.io/instance: {{ .Release.Name | quote }}
+ helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
{{- if .Values.isClusterService }}
k8s-app: {{ .Chart.Name | quote }}
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "CoreDNS"
{{- else }}
- app: {{ template "coredns.name" . }}
+ app.kubernetes.io/name: {{ template "coredns.name" . }}
{{- end }}
spec:
privileged: false
diff --git a/stable/coredns/templates/service.yaml b/stable/coredns/templates/service.yaml
index 85d0a9c695b0..a2864f6354bd 100644
--- a/stable/coredns/templates/service.yaml
+++ b/stable/coredns/templates/service.yaml
@@ -3,27 +3,33 @@ kind: Service
metadata:
name: {{ template "coredns.fullname" . }}
labels:
- heritage: {{ .Release.Service | quote }}
- release: {{ .Release.Name | quote }}
- chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
+ app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
+ app.kubernetes.io/instance: {{ .Release.Name | quote }}
+ helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
{{- if .Values.isClusterService }}
k8s-app: {{ .Chart.Name | quote }}
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "CoreDNS"
{{- end }}
- app: {{ template "coredns.name" . }}
+ app.kubernetes.io/name: {{ template "coredns.name" . }}
annotations:
{{ toYaml .Values.service.annotations | indent 4 }}
spec:
selector:
- release: {{ .Release.Name | quote }}
+ app.kubernetes.io/instance: {{ .Release.Name | quote }}
{{- if .Values.isClusterService }}
k8s-app: {{ .Chart.Name | quote }}
{{- end }}
- app: {{ template "coredns.name" . }}
+ app.kubernetes.io/name: {{ template "coredns.name" . }}
{{- if .Values.service.clusterIP }}
clusterIP: {{ .Values.service.clusterIP }}
{{- end }}
+ {{- if .Values.service.externalTrafficPolicy }}
+ externalTrafficPolicy: {{ .Values.service.externalTrafficPolicy }}
+ {{- end }}
+ {{- if .Values.service.loadBalancerIP }}
+ loadBalancerIP: {{ .Values.service.loadBalancerIP }}
+ {{- end }}
ports:
{{ include "coredns.servicePorts" . | indent 2 -}}
type: {{ default "ClusterIP" .Values.serviceType }}
diff --git a/stable/coredns/templates/serviceaccount.yaml b/stable/coredns/templates/serviceaccount.yaml
index 6db7e291a453..bced7ca3df5f 100644
--- a/stable/coredns/templates/serviceaccount.yaml
+++ b/stable/coredns/templates/serviceaccount.yaml
@@ -4,13 +4,13 @@ kind: ServiceAccount
metadata:
name: {{ template "coredns.serviceAccountName" . }}
labels:
- heritage: {{ .Release.Service | quote }}
- release: {{ .Release.Name | quote }}
- chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
+ app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
+ app.kubernetes.io/instance: {{ .Release.Name | quote }}
+ helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
{{- if .Values.isClusterService }}
k8s-app: {{ .Chart.Name | quote }}
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "CoreDNS"
{{- end }}
- app: {{ template "coredns.name" . }}
+ app.kubernetes.io/name: {{ template "coredns.name" . }}
{{- end }}
diff --git a/stable/coredns/values.yaml b/stable/coredns/values.yaml
index f1cd61b79cdc..e8fc5475da0a 100644
--- a/stable/coredns/values.yaml
+++ b/stable/coredns/values.yaml
@@ -6,7 +6,7 @@ replicaCount: 1
image:
repository: coredns/coredns
- tag: "1.3.1"
+ tag: "1.5.0"
pullPolicy: IfNotPresent
resources:
@@ -21,6 +21,8 @@ serviceType: "ClusterIP"
service:
# clusterIP: ""
+# loadBalancerIP: ""
+# externalTrafficPolicy: ""
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "9153"
@@ -43,6 +45,9 @@ rbac:
# isClusterService specifies whether chart should be deployed as cluster-service or normal k8s app.
isClusterService: true
+# Optional priority class to be used for the coredns pods
+priorityClassName: ""
+
servers:
- zones:
- zone: .
@@ -58,13 +63,17 @@ servers:
parameters: round_robin
- name: prometheus
parameters: 0.0.0.0:9153
- - name: proxy
+ - name: forward
parameters: . /etc/resolv.conf
# Complete example with all the options:
# - zones: # the `zones` block can be left out entirely, defaults to "."
# - zone: hello.world. # optional, defaults to "."
# scheme: tls:// # optional, defaults to "" (which equals "dns://" in CoreDNS)
+# - zone: foo.bar.
+# scheme: dns://
+# use_tcp: true # set this parameter to optionally expose the port on tcp as well as udp for the DNS protocol
+# # Note that this will not work if you are also exposing tls or grpc on the same server
# port: 12345 # optional, defaults to "" (which equals 53 in CoreDNS)
# plugins: # the plugins to use for this server block
# - name: kubernetes # name of plugin, if used multiple times ensure that the plugin supports it!
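
The new commented-out service values pair naturally with a `LoadBalancer` service type; a hedged override sketch (the IP is a documentation-range placeholder, and the priority class assumes your cluster defines `system-cluster-critical`):

```yaml
serviceType: "LoadBalancer"
service:
  loadBalancerIP: "192.0.2.53"
  externalTrafficPolicy: "Local"
priorityClassName: "system-cluster-critical"
```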
diff --git a/stable/cosbench/Chart.yaml b/stable/cosbench/Chart.yaml
index a8ed90999eda..650e3b161134 100644
--- a/stable/cosbench/Chart.yaml
+++ b/stable/cosbench/Chart.yaml
@@ -1,5 +1,6 @@
+apiVersion: v1
name: cosbench
-version: 1.0.0
+version: 1.0.1
appVersion: 0.0.6
kubeVersion: "^1.8.0-0"
description: A benchmark tool for cloud object storage services
diff --git a/stable/coscale/Chart.yaml b/stable/coscale/Chart.yaml
index ae43ef1cc08c..00e5f7726fc9 100755
--- a/stable/coscale/Chart.yaml
+++ b/stable/coscale/Chart.yaml
@@ -1,5 +1,6 @@
+apiVersion: v1
name: coscale
-version: 0.3.0
+version: 1.0.0
appVersion: 3.16.0
description: CoScale Agent
keywords:
diff --git a/stable/dask/Chart.yaml b/stable/dask/Chart.yaml
index 9f3a2695f3d2..0f6abb0c3e21 100755
--- a/stable/dask/Chart.yaml
+++ b/stable/dask/Chart.yaml
@@ -1,5 +1,6 @@
+apiVersion: v1
name: dask
-version: 2.1.0
+version: 2.2.1
appVersion: 1.1.0
description: Distributed computation in Python with task scheduling
home: https://dask.pydata.org
diff --git a/stable/dask/README.md b/stable/dask/README.md
index 0567fafab5fb..d606d066c4eb 100644
--- a/stable/dask/README.md
+++ b/stable/dask/README.md
@@ -35,7 +35,9 @@ The following tables list the configurable parameters of the Dask chart and thei
| `scheduler.image` | Container image name | `daskdev/dask` |
| `scheduler.imageTag` | Container image tag | `1.1.0` |
| `scheduler.replicas` | k8s deployment replicas | `1` |
-| `scheduler.resources` | Container resources | `{}` |
+| `scheduler.tolerations` | Tolerations | `[]` |
+| `scheduler.nodeSelector` | nodeSelector | `{}` |
+| `scheduler.affinity` | Container affinity | `{}` |
### Dask webUI
@@ -50,9 +52,12 @@ The following tables list the configurable parameters of the Dask chart and thei
| ----------------------- | ---------------------------------| ---------------|
| `worker.name` | Dask worker name | `worker` |
| `worker.image` | Container image name | `daskdev/dask` |
-| `worker.imageTag` | Container image tag | `1.1.0` |
+| `worker.imageTag` | Container image tag | `1.1.0` |
| `worker.replicas` | k8s hpa and deployment replicas | `3` |
| `worker.resources` | Container resources | `{}` |
+| `worker.tolerations` | Tolerations | `[]` |
+| `worker.nodeSelector` | nodeSelector | `{}` |
+| `worker.affinity` | Container affinity | `{}` |
|
### jupyter
@@ -62,10 +67,13 @@ The following tables list the configurable parameters of the Dask chart and thei
| `jupyter.name` | Jupyter name | `jupyter` |
| `jupyter.enabled` | Include optional Jupyter server | `true` |
| `jupyter.image` | Container image name | `daskdev/dask-notebook` |
-| `jupyter.imageTag` | Container image tag | `1.1.0` |
+| `jupyter.imageTag` | Container image tag | `1.1.0` |
| `jupyter.replicas` | k8s deployment replicas | `1` |
| `jupyter.servicePort` | k8s service port | `80` |
| `jupyter.resources` | Container resources | `{}` |
+| `jupyter.tolerations` | Tolerations | `[]` |
+| `jupyter.nodeSelector` | nodeSelector | `{}` |
+| `jupyter.affinity` | Container affinity | `{}` |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`.
diff --git a/stable/dask/templates/dask-jupyter-deployment.yaml b/stable/dask/templates/dask-jupyter-deployment.yaml
index f704ec29eeeb..919282932d22 100644
--- a/stable/dask/templates/dask-jupyter-deployment.yaml
+++ b/stable/dask/templates/dask-jupyter-deployment.yaml
@@ -47,5 +47,16 @@ spec:
- name: config-volume
configMap:
name: {{ template "dask.fullname" . }}-jupyter-config
-
-{{ end }}
+ {{- with .Values.jupyter.nodeSelector }}
+ nodeSelector:
+{{ toYaml . | indent 8 }}
+ {{- end }}
+ {{- with .Values.jupyter.affinity }}
+ affinity:
+{{ toYaml . | indent 8 }}
+ {{- end }}
+ {{- with .Values.jupyter.tolerations }}
+ tolerations:
+{{ toYaml . | indent 8 }}
+ {{- end }}
+{{- end }}
diff --git a/stable/dask/templates/dask-scheduler-deployment.yaml b/stable/dask/templates/dask-scheduler-deployment.yaml
index 7c65b8063afa..791a6c1e7895 100644
--- a/stable/dask/templates/dask-scheduler-deployment.yaml
+++ b/stable/dask/templates/dask-scheduler-deployment.yaml
@@ -41,3 +41,15 @@ spec:
{{ toYaml .Values.scheduler.resources | indent 12 }}
env:
{{ toYaml .Values.scheduler.env | indent 12 }}
+ {{- with .Values.scheduler.nodeSelector }}
+ nodeSelector:
+{{ toYaml . | indent 8 }}
+ {{- end }}
+ {{- with .Values.scheduler.affinity }}
+ affinity:
+{{ toYaml . | indent 8 }}
+ {{- end }}
+ {{- with .Values.scheduler.tolerations }}
+ tolerations:
+{{ toYaml . | indent 8 }}
+ {{- end }}
diff --git a/stable/dask/templates/dask-worker-deployment.yaml b/stable/dask/templates/dask-worker-deployment.yaml
index cdef7b17ee60..350aeef6a29b 100644
--- a/stable/dask/templates/dask-worker-deployment.yaml
+++ b/stable/dask/templates/dask-worker-deployment.yaml
@@ -44,12 +44,15 @@ spec:
{{ toYaml .Values.worker.resources | indent 12 }}
env:
{{ toYaml .Values.worker.env | indent 12 }}
+ {{- with .Values.worker.nodeSelector }}
+ nodeSelector:
+{{ toYaml . | indent 8 }}
+ {{- end }}
+ {{- with .Values.worker.affinity }}
+ affinity:
+{{ toYaml . | indent 8 }}
+ {{- end }}
+ {{- with .Values.worker.tolerations }}
tolerations:
- - key: "k8s.dask.org_dedicated"
- operator: "Equal"
- value: "worker"
- effect: "NoSchedule"
- - key: "k8s.dask.org/dedicated"
- operator: "Equal"
- value: "worker"
- effect: "NoSchedule"
+{{ toYaml . | indent 8 }}
+ {{- end }}
diff --git a/stable/dask/values.yaml b/stable/dask/values.yaml
index 7edce6d339d1..874e3718b67f 100644
--- a/stable/dask/values.yaml
+++ b/stable/dask/values.yaml
@@ -17,6 +17,9 @@ scheduler:
# requests:
# cpu: 1.8
# memory: 6G
+ tolerations: []
+ nodeSelector: {}
+ affinity: {}
webUI:
name: webui
@@ -45,6 +48,9 @@ worker:
# requests:
# cpu: 1
# memory: 3G
+ tolerations: []
+ nodeSelector: {}
+ affinity: {}
jupyter:
name: jupyter
@@ -69,3 +75,6 @@ jupyter:
# requests:
# cpu: 2
# memory: 6G
+ tolerations: []
+ nodeSelector: {}
+ affinity: {}
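
The worker tolerations that were previously hard-coded in the deployment template can now be restored through the new `worker.tolerations` value; for example, to keep the prior scheduling behavior:

```yaml
worker:
  tolerations:
    - key: "k8s.dask.org/dedicated"
      operator: "Equal"
      value: "worker"
      effect: "NoSchedule"
```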
diff --git a/stable/datadog/Chart.yaml b/stable/datadog/Chart.yaml
index 8f48807b51c0..14030e913e3b 100755
--- a/stable/datadog/Chart.yaml
+++ b/stable/datadog/Chart.yaml
@@ -1,6 +1,7 @@
+apiVersion: v1
name: datadog
-version: 1.18.1
-appVersion: 6.9.0
+version: 1.30.7
+appVersion: 6.10.1
description: DataDog Agent
keywords:
- monitoring
@@ -16,7 +17,9 @@ maintainers:
email: haissam@datadoghq.com
- name: irabinovitch
email: ilan@datadoghq.com
-- name: xvello
- email: xavier.vello@datadoghq.com
- name: charlyf
email: charly@datadoghq.com
+- name: mfpierre
+ email: pierre.margueritte@datadoghq.com
+- name: clamoriniere
+ email: cedric.lamoriniere@datadoghq.com
diff --git a/stable/datadog/OWNERS b/stable/datadog/OWNERS
index 0e0d1bfb5577..41deeb02d11c 100644
--- a/stable/datadog/OWNERS
+++ b/stable/datadog/OWNERS
@@ -1,10 +1,12 @@
approvers:
- hkaj
- irabinovitch
-- xvello
- charlyf
+- mfpierre
+- clamoriniere
reviewers:
- hkaj
- irabinovitch
-- xvello
- charlyf
+- mfpierre
+- clamoriniere
diff --git a/stable/datadog/README.md b/stable/datadog/README.md
index d4553dd0c156..81501eaea527 100644
--- a/stable/datadog/README.md
+++ b/stable/datadog/README.md
@@ -1,175 +1,144 @@
# Datadog
-[Datadog](https://www.datadoghq.com/) is a hosted infrastructure monitoring platform.
+[Datadog](https://www.datadoghq.com/) is a hosted infrastructure monitoring platform. This chart adds the Datadog Agent to all nodes in your cluster via a DaemonSet. It also optionally depends on the [kube-state-metrics chart](https://github.com/kubernetes/charts/tree/master/stable/kube-state-metrics). For more information about monitoring Kubernetes with Datadog, please refer to the [Datadog documentation website](https://docs.datadoghq.com/agent/basic_agent_usage/kubernetes/).
-## Introduction
+Datadog [offers two variants](https://hub.docker.com/r/datadog/agent/tags/); switch to a `-jmx` tag if you need to run JMX/java integrations. The chart also supports running [the standalone dogstatsd image](https://hub.docker.com/r/datadog/dogstatsd/tags/).
-This chart adds the Datadog Agent to all nodes in your cluster via a DaemonSet. It also optionally depends on the [kube-state-metrics chart](https://github.com/kubernetes/charts/tree/master/stable/kube-state-metrics). For more information about monitoring Kubernetes with Datadog, please refer to the [Datadog documentation website](https://docs.datadoghq.com/agent/basic_agent_usage/kubernetes/).
+See the [Datadog JMX integration](https://docs.datadoghq.com/integrations/java/) to learn more.
## Prerequisites
-Kubernetes 1.4+ or OpenShift 3.4+ (1.3 support is currently partial, full support is planned for 6.4.0).
+Kubernetes 1.4+ or OpenShift 3.4+, note that:
+
+* the Datadog Agent supports Kubernetes 1.3+
+* The Datadog chart's defaults are tailored to Kubernetes 1.7.6+, see [Datadog Agent legacy Kubernetes versions documentation](https://github.com/DataDog/datadog-agent/tree/master/Dockerfiles/agent#legacy-kubernetes-versions) for adjustments you might need to make for older versions
+
+## Quick start
+
+By default, the Datadog Agent runs in a DaemonSet. It can alternatively run inside a Deployment for special use cases.
+
+**Note:** simultaneous DaemonSet + Deployment installation within a single release will be deprecated in a future version, requiring two releases to achieve this.
-## Installing the Chart
+### Installing the Datadog Chart
-To install the chart with the release name `my-release`, retrieve your Datadog API key from your [Agent Installation Instructions](https://app.datadoghq.com/account/settings#agent/kubernetes) and run:
+To install the chart with the release name `<RELEASE_NAME>`, retrieve your Datadog API key from your [Agent Installation Instructions](https://app.datadoghq.com/account/settings#agent/kubernetes) and run:
```bash
-helm install --name my-release \
- --set datadog.apiKey=YOUR-KEY-HERE stable/datadog
+helm install --name <RELEASE_NAME> \
+    --set datadog.apiKey=<DATADOG_API_KEY> stable/datadog
```
-After a few minutes, you should see hosts and metrics being reported in Datadog.
-
-**Tip**: List all releases using `helm list`
+By default, this chart creates a Secret and stores the API key in it.
+However, you can use a manually created secret by setting the `datadog.apiKeyExistingSecret` value. After a few minutes, you should see hosts and metrics being reported in Datadog.
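
For example, assuming a pre-existing secret named `datadog-secret` (an illustrative name, not a chart default) that stores the key under the `api-key` entry read by the chart's templates:

```shell
# Create the secret yourself instead of letting the chart do it
kubectl create secret generic datadog-secret \
    --from-literal api-key=<DATADOG_API_KEY>

# Point the chart at the existing secret instead of passing the key directly
helm install --name <RELEASE_NAME> \
    --set datadog.apiKeyExistingSecret=datadog-secret stable/datadog
```

This keeps the API key out of your Helm values and release history.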
### Enabling the Datadog Cluster Agent
Read about the Datadog Cluster Agent in the [official documentation](https://docs.datadoghq.com/agent/kubernetes/cluster/).
-Run the following if you want to deploy the chart with the Datadog Cluster Agent.
-Note that specifying `clusterAgent.metricsProvider.enabled=true` will enable the External Metrics Server.
-If you want to learn to use this feature, you can check out this [walkthrough](https://github.com/DataDog/datadog-agent/blob/master/docs/cluster-agent/CUSTOM_METRICS_SERVER.md).
-The Leader Election is enabled by default in the chart for the Cluster Agent. Only the Cluster Agent(s) participate in the election, in case you have several replicas configured (using `clusterAgent.replicas`.
-You can specify the token used to secure the communication between the Cluster Agent(s)q and the Agents with `clusterAgent.token`. If not specified, a random one will be generated and you will be prompted a warning when installing the chart.
+Run the following if you want to deploy the chart with the Datadog Cluster Agent:
```bash
helm install --name datadog-monitoring \
- --set datadog.apiKey=YOUR-API-KEY-HERE \
- --set datadog.appKey=YOUR-APP-KEY-HERE \
+    --set datadog.apiKey=<DATADOG_API_KEY> \
+    --set datadog.appKey=<DATADOG_APP_KEY> \
+    --set clusterAgent.enabled=true \
+    --set clusterAgent.metricsProvider.enabled=true \
+    stable/datadog
+```
+
+Note that specifying `clusterAgent.metricsProvider.enabled=true` enables the External Metrics Server. If you want to learn to use this feature, you can check out this [walkthrough](https://github.com/DataDog/datadog-agent/blob/master/docs/cluster-agent/CUSTOM_METRICS_SERVER.md).
+
+Leader Election is enabled by default in the chart for the Cluster Agent. Only the Cluster Agent(s) participate in the election, in case you have several replicas configured (using `clusterAgent.replicas`).
+
+You can specify the token used to secure the communication between the Cluster Agent(s) and the Agents with `clusterAgent.token`. If not specified, a random one is generated and a warning is displayed when installing the chart.
+
+### Uninstalling the Chart
+
+To uninstall/delete the `<RELEASE_NAME>` deployment:
```bash
-helm delete my-release
+helm delete <RELEASE_NAME> --purge
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
## Configuration
-The following table lists the configurable parameters of the Datadog chart and their default values.
-
-| Parameter | Description | Default |
-|-----------------------------|------------------------------------|-------------------------------------------|
-| `datadog.apiKey` | Your Datadog API key | `Nil` You must provide your own key |
-| `datadog.apiKeyExistingSecret` | If set, use the secret with a provided name instead of creating a new one |`nil` |
-| `datadog.appKey` | Datadog APP key required to use metricsProvider | `Nil` You must provide your own key |
-| `datadog.appKeyExistingSecret` | If set, use the secret with a provided name instead of creating a new one |`nil` |
-| `image.repository` | The image repository to pull from | `datadog/agent` |
-| `image.tag` | The image tag to pull | `6.9.0` |
-| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
-| `image.pullSecrets` | Image pull secrets | `nil` |
-| `rbac.create` | If true, create & use RBAC resources | `true` |
-| `rbac.serviceAccount` | existing ServiceAccount to use (ignored if rbac.create=true) | `default` |
-| `datadog.name` | Container name if Daemonset or Deployment | `datadog` |
-| `datadog.site` | Site ('datadoghq.com' or 'datadoghq.eu') | `nil` |
-| `datadog.dd_url` | Datadog intake server | `nil` |
-| `datadog.env` | Additional Datadog environment variables | `nil` |
-| `datadog.logsEnabled` | Enable log collection | `nil` |
-| `datadog.logsConfigContainerCollectAll` | Collect logs from all containers | `nil` |
-| `datadog.logsPointerHostPath` | Host path to store the log tailing state in | `/var/lib/datadog-agent/logs` |
-| `datadog.apmEnabled` | Enable tracing from the host | `nil` |
-| `datadog.processAgentEnabled` | Enable live process monitoring | `nil` |
-| `datadog.checksd` | Additional custom checks as python code | `nil` |
-| `datadog.confd` | Additional check configurations (static and Autodiscovery) | `nil` |
-| `datadog.criSocketPath` | Path to the container runtime socket (if different from Docker) | `nil` |
-| `datadog.tags` | Set host tags | `nil` |
-| `datadog.nonLocalTraffic` | Enable statsd reporting from any external ip | `False` |
-| `datadog.useCriSocketVolume` | Enable mounting the container runtime socket in Agent containers | `True` |
-| `datadog.volumes` | Additional volumes for the daemonset or deployment | `nil` |
-| `datadog.volumeMounts` | Additional volumeMounts for the daemonset or deployment | `nil` |
-| `datadog.podAnnotationsAsTags` | Kubernetes Annotations to Datadog Tags mapping | `nil` |
-| `datadog.podLabelsAsTags` | Kubernetes Labels to Datadog Tags mapping | `nil` |
-| `datadog.resources.requests.cpu` | CPU resource requests | `200m` |
-| `datadog.resources.limits.cpu` | CPU resource limits | `200m` |
-| `datadog.resources.requests.memory` | Memory resource requests | `256Mi` |
-| `datadog.resources.limits.memory` | Memory resource limits | `256Mi` |
-| `datadog.securityContext` | Allows you to overwrite the default securityContext applied to the container | `nil` |
-| `datadog.livenessProbe` | Overrides the default liveness probe | http port 5555 |
-| `datadog.hostname` | Set the hostname (write it in datadog.conf) | `nil` |
-| `datadog.acInclude` | Include containers based on image name | `nil` |
-| `datadog.acExclude` | Exclude containers based on image name | `nil` |
-| `daemonset.podAnnotations` | Annotations to add to the DaemonSet's Pods | `nil` |
-| `daemonset.tolerations` | List of node taints to tolerate (requires Kubernetes >= 1.6) | `nil` |
-| `daemonset.nodeSelector` | Node selectors | `nil` |
-| `daemonset.affinity` | Node affinities | `nil` |
-| `daemonset.useHostNetwork` | If true, use the host's network | `nil` |
-| `daemonset.useHostPID`. | If true, use the host's PID namespace | `nil` |
-| `daemonset.useHostPort` | If true, use the same ports for both host and container | `nil` |
-| `daemonset.priorityClassName` | Which Priority Class to associate with the daemonset| `nil` |
-| `datadog.leaderElection` | Enable the leader Election feature | `false` |
-| `datadog.leaderLeaseDuration`| The duration for which a leader stays elected.| `nil` |
-| `datadog.collectEvents` | Enable Kubernetes event collection. Requires leader election. | `false` |
-| `deployment.affinity` | Node / Pod affinities | `{}` |
-| `deployment.tolerations` | List of node taints to tolerate | `[]` |
-| `deployment.priorityClassName` | Which Priority Class to associate with the deployment | `nil` |
-| `kubeStateMetrics.enabled` | If true, create kube-state-metrics | `true` |
-| `kube-state-metrics.rbac.create`| If true, create & use RBAC resources for kube-state-metrics | `true` |
-| `kube-state-metrics.rbac.serviceAccount` | existing ServiceAccount to use (ignored if rbac.create=true) for kube-state-metrics | `default` |
-| `clusterAgent.enabled` | Use the cluster-agent for cluster metrics (Kubernetes 1.10+ only) | `false` |
-| `clusterAgent.token` | A cluster-internal secret for agent-to-agent communication. Must be 32+ characters a-zA-Z | Generates a random value |
-| `clusterAgent.containerName` | The container name for the Cluster Agent | `cluster-agent` |
-| `clusterAgent.image.repository` | The image repository for the cluster-agent | `datadog/cluster-agent` |
-| `clusterAgent.image.tag` | The image tag to pull | `1.0.0` |
-| `clusterAgent.image.pullPolicy` | Image pull policy | `IfNotPresent` |
-| `clusterAgent.image.pullSecrets` | Image pull secrets | `nil` |
-| `clusterAgent.metricsProvider.enabled` | Enable Datadog metrics as a source for HPA scaling | `false` |
-| `clusterAgent.resources.requests.cpu` | CPU resource requests | `200m` |
-| `clusterAgent.resources.limits.cpu` | CPU resource limits | `200m` |
-| `clusterAgent.resources.requests.memory` | Memory resource requests | `256Mi` |
-| `clusterAgent.resources.limits.memory` | Memory resource limits | `256Mi` |
-| `clusterAgent.tolerations` | List of node taints to tolerate | `[]` |
-| `clusterAgent.livenessProbe` | Overrides the default liveness probe | http port 443 if external metrics enabled |
-| `clusterAgent.readinessProbe` | Overrides the default readiness probe | http port 443 if external metrics enabled |
-
-Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
+As a best practice, configure the chart with a YAML file that specifies values for its parameters:
+
+1. Copy the default [values.yaml](values.yaml) file to a local `datadog-values.yaml` file.
+2. Set the `datadog.apiKey` parameter with your [Datadog API key](https://app.datadoghq.com/account/settings#api).
+3. Upgrade the Datadog Helm chart with the new `datadog-values.yaml` file:
```bash
-helm install --name my-release \
- --set datadog.apiKey=YOUR-KEY-HERE,datadog.logLevel=DEBUG \
- stable/datadog
+helm upgrade -f datadog-values.yaml <RELEASE_NAME> stable/datadog --recreate-pods
+```
+
+See the [All configuration options](#all-configuration-options) section to discover all possibilities offered by the Datadog chart.
+
+### Enabling Log Collection
+
+Update your [datadog-values.yaml](values.yaml) file with the following log collection configuration:
+
+```yaml
+datadog:
+ (...)
+ logsEnabled: true
+ logsConfigContainerCollectAll: true
```
-Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,
+then upgrade your Datadog Helm chart:
```bash
-helm install --name my-release -f my-values.yaml stable/datadog
+helm upgrade -f datadog-values.yaml <RELEASE_NAME> stable/datadog --recreate-pods
```
-**Tip**: You can copy and customize the default [values.yaml](values.yaml)
+### Enabling Process Collection
-### Image repository and tag
+Update your [datadog-values.yaml](values.yaml) file with the process collection configuration:
-Datadog [offers two variants](https://hub.docker.com/r/datadog/agent/tags/), switch to a `-jmx` tag if you need to run JMX/java integrations. The chart also supports running [the standalone dogstatsd image](https://hub.docker.com/r/datadog/dogstatsd/tags/).
+```yaml
+datadog:
+ (...)
+ processAgentEnabled: true
+```
-Starting with version 1.0.0, this chart does not support deploying Agent 5.x anymore. If you cannot upgrade to Agent 6.x, you can use a previous version of the chart by calling helm install with `--version 0.18.0`.
+then upgrade your Datadog Helm chart:
-### DaemonSet and Deployment
+```bash
+helm upgrade -f datadog-values.yaml <RELEASE_NAME> stable/datadog --recreate-pods
+```
-By default, the Datadog Agent runs in a DaemonSet. It can alternatively run inside a Deployment for special use cases.
+### Kubernetes event collection
-**Note:** simultaneous DaemonSet + Deployment installation within a single release will be deprecated in a future version, requiring two releases to achieve this.
+Use the [Datadog Cluster Agent](#enabling-the-datadog-cluster-agent) to collect Kubernetes events. Please read [the official documentation](https://docs.datadoghq.com/agent/kubernetes/event_collection/) for more context.
-### Secret
+Alternatively, set the `datadog.leaderElection`, `datadog.collectEvents`, and `rbac.create` options to `true` to enable Kubernetes event collection.
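
In your `datadog-values.yaml` file, that alternative is a sketch along these lines (the three options named above):

```yaml
datadog:
  # Elect a single Agent per cluster to collect events, avoiding duplicates
  leaderElection: true
  collectEvents: true
rbac:
  # Leader election requires RBAC resources
  create: true
```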
-By default, this Chart creates a Secret and puts an API key in that Secret.
-However, you can use manually created secret by setting the `datadog.apiKeyExistingSecret` value.
+### conf.d and checks.d
-### confd and checksd
+The Datadog [entrypoint](https://github.com/DataDog/datadog-agent/blob/master/Dockerfiles/agent/entrypoint/89-copy-customfiles.sh) copies files with a `.yaml` extension found in `/conf.d` and files with a `.py` extension found in `/checks.d` to `/etc/datadog-agent/conf.d` and `/etc/datadog-agent/checks.d` respectively.
-The Datadog [entrypoint
-](https://github.com/DataDog/datadog-agent/blob/master/Dockerfiles/agent/entrypoint/89-copy-customfiles.sh)
-will copy files with a `.yaml` extension found in `/conf.d` and files with `.py` extension in
-`/check.d` to `/etc/datadog-agent/conf.d` and `/etc/datadog-agent/checks.d` respectively. The keys for
-`datadog.confd` and `datadog.checksd` should mirror the content found in their
-respective ConfigMaps, ie
+The keys for `datadog.confd` and `datadog.checksd` should mirror the content found in their respective ConfigMaps. Update your [datadog-values.yaml](values.yaml) file with the check configurations:
```yaml
datadog:
@@ -196,18 +165,17 @@ datadog:
port: 6379
```
-For more details, please refer to [the documentation](https://docs.datadoghq.com/agent/kubernetes/integrations/).
-
-### Kubernetes event collection
+then upgrade your Datadog Helm chart:
-To enable event collection, you will need to set the `datadog.leaderElection`, `datadog.collectEvents` and `rbac.create` options to `true`.
+```bash
+helm upgrade -f datadog-values.yaml <RELEASE_NAME> stable/datadog --recreate-pods
+```
-It is now recommended to use the Datadog Cluster Agent to collect the events - Refer to the [Enabling the Datadog Cluster Agent](#enabling-the-datadog-cluster-agent) section.
-Please read [the official documentation](https://docs.datadoghq.com/agent/kubernetes/event_collection/) for more context.
+For more details, please refer to [the documentation](https://docs.datadoghq.com/agent/kubernetes/integrations/).
### Kubernetes Labels and Annotations
-To map Kubernetes pod labels and annotations to Datadog tags, provide a dictionary with kubernetes labels/annotations as keys and datadog tags as values:
+To map Kubernetes pod labels and annotations to Datadog tags, provide in your [datadog-values.yaml](values.yaml) file a dictionary with Kubernetes labels/annotations as keys and Datadog tag keys as values:
```yaml
podAnnotationsAsTags:
@@ -220,11 +188,117 @@ podLabelsAsTags:
release: helm_release
```
+then upgrade your Datadog Helm chart:
+
+```bash
+helm upgrade -f datadog-values.yaml <RELEASE_NAME> stable/datadog --recreate-pods
+```
+
### CRI integration
-As of the version 6.6.0, the Datadog Agent supports collecting metrics from any container runtime interface used in your cluster.
-Configure the location path of the socket with `datadog.criSocketPath` and make sure you allow the socket to be mounted into the pod running the agent by setting `datadog.useCriSocketVolume` to `True`.
+As of version 6.6.0, the Datadog Agent supports collecting metrics from any container runtime interface (CRI) used in your cluster. Configure the path of the socket with `datadog.criSocketPath`, and make sure you allow the socket to be mounted into the pod running the Agent by setting `datadog.useCriSocketVolume` to `True`.
Standard paths are:
- Containerd socket: `/var/run/containerd/containerd.sock`
- CRI-O socket: `/var/run/crio/crio.sock`
+
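For example, on a containerd-based cluster, the corresponding `datadog-values.yaml` fragment would be a sketch like the following (adjust the socket path to your runtime):

```yaml
datadog:
  # Path of the containerd socket on the nodes
  criSocketPath: /var/run/containerd/containerd.sock
  # Allow the socket to be mounted into the Agent pods
  useCriSocketVolume: true
```
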
+## All configuration options
+
+The following table lists the configurable parameters of the Datadog chart and their default values. Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
+
+```bash
+helm install --name <RELEASE_NAME> \
+    --set datadog.apiKey=<DATADOG_API_KEY>,datadog.logLevel=DEBUG \
+ stable/datadog
+```
+
+| Parameter | Description | Default |
+| ----------------------------- | ------------------------------------ | ------------------------------------------- |
+| `datadog.apiKey` | Your Datadog API key | `nil` (you must provide your own key) |
+| `datadog.apiKeyExistingSecret` | If set, use the secret with a provided name instead of creating a new one | `nil` |
+| `datadog.appKey` | Datadog APP key required to use metricsProvider | `nil` (you must provide your own key) |
+| `datadog.appKeyExistingSecret` | If set, use the secret with a provided name instead of creating a new one | `nil` |
+| `image.repository` | The image repository to pull from | `datadog/agent` |
+| `image.tag` | The image tag to pull | `6.10.1` |
+| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
+| `image.pullSecrets` | Image pull secrets | `nil` |
+| `nameOverride` | Override name of app | `nil` |
+| `fullnameOverride` | Override full name of app | `nil` |
+| `rbac.create` | If true, create & use RBAC resources | `true` |
+| `rbac.serviceAccount` | existing ServiceAccount to use (ignored if rbac.create=true) | `default` |
+| `daemonset.podLabels` | labels to add to each pod | `nil` |
+| `datadog.name` | Container name if Daemonset or Deployment | `datadog` |
+| `datadog.site` | Site ('datadoghq.com' or 'datadoghq.eu') | `nil` |
+| `datadog.dd_url` | Datadog intake server | `nil` |
+| `datadog.env` | Additional Datadog environment variables | `nil` |
+| `datadog.logsEnabled` | Enable log collection | `nil` |
+| `datadog.logsConfigContainerCollectAll` | Collect logs from all containers | `nil` |
+| `datadog.logsPointerHostPath` | Host path to store the log tailing state in | `/var/lib/datadog-agent/logs` |
+| `datadog.apmEnabled` | Enable tracing from the host | `nil` |
+| `datadog.processAgentEnabled` | Enable live process monitoring | `nil` |
+| `datadog.checksd` | Additional custom checks as python code | `nil` |
+| `datadog.confd` | Additional check configurations (static and Autodiscovery) | `nil` |
+| `datadog.criSocketPath` | Path to the container runtime socket (if different from Docker) | `nil` |
+| `datadog.tags` | Set host tags | `nil` |
+| `datadog.nonLocalTraffic` | Enable statsd reporting from any external ip | `False` |
+| `datadog.useCriSocketVolume` | Enable mounting the container runtime socket in Agent containers | `True` |
+| `datadog.dogstatsdOriginDetection` | Enable origin detection for container tagging | `False` |
+| `datadog.useDogStatsDSocketVolume` | Enable dogstatsd over Unix Domain Socket | `False` |
+| `datadog.volumes` | Additional volumes for the daemonset or deployment | `nil` |
+| `datadog.volumeMounts` | Additional volumeMounts for the daemonset or deployment | `nil` |
+| `datadog.podAnnotationsAsTags` | Kubernetes Annotations to Datadog Tags mapping | `nil` |
+| `datadog.podLabelsAsTags` | Kubernetes Labels to Datadog Tags mapping | `nil` |
+| `datadog.resources.requests.cpu` | CPU resource requests | `200m` |
+| `datadog.resources.limits.cpu` | CPU resource limits | `200m` |
+| `datadog.resources.requests.memory` | Memory resource requests | `256Mi` |
+| `datadog.resources.limits.memory` | Memory resource limits | `256Mi` |
+| `datadog.securityContext` | Allows you to overwrite the default securityContext applied to the container | `nil` |
+| `datadog.livenessProbe` | Overrides the default liveness probe | http port 5555 |
+| `datadog.hostname` | Set the hostname (write it in datadog.conf) | `nil` |
+| `datadog.acInclude` | Include containers based on image name | `nil` |
+| `datadog.acExclude` | Exclude containers based on image name | `nil` |
+| `daemonset.podAnnotations` | Annotations to add to the DaemonSet's Pods | `nil` |
+| `daemonset.tolerations` | List of node taints to tolerate (requires Kubernetes >= 1.6) | `nil` |
+| `daemonset.nodeSelector` | Node selectors | `nil` |
+| `daemonset.affinity` | Node affinities | `nil` |
+| `daemonset.useHostNetwork` | If true, use the host's network | `nil` |
+| `daemonset.useHostPID` | If true, use the host's PID namespace | `nil` |
+| `daemonset.useHostPort` | If true, use the same ports for both host and container | `nil` |
+| `daemonset.priorityClassName` | Which Priority Class to associate with the daemonset | `nil` |
+| `datadog.leaderElection` | Enable the leader Election feature | `false` |
+| `datadog.leaderLeaseDuration` | The duration for which a leader stays elected. | 60 sec, 15 if Cluster Checks enabled |
+| `datadog.collectEvents` | Enable Kubernetes event collection. Requires leader election. | `false` |
+| `deployment.affinity` | Node / Pod affinities | `{}` |
+| `deployment.tolerations` | List of node taints to tolerate | `[]` |
+| `deployment.priorityClassName` | Which Priority Class to associate with the deployment | `nil` |
+| `kubeStateMetrics.enabled` | If true, create kube-state-metrics | `true` |
+| `kube-state-metrics.rbac.create` | If true, create & use RBAC resources for kube-state-metrics | `true` |
+| `kube-state-metrics.rbac.serviceAccount` | existing ServiceAccount to use (ignored if rbac.create=true) for kube-state-metrics | `default` |
+| `clusterAgent.enabled` | Use the cluster-agent for cluster metrics (Kubernetes 1.10+ only) | `false` |
+| `clusterAgent.token` | A cluster-internal secret for agent-to-agent communication. Must be 32+ characters a-zA-Z | Generates a random value |
+| `clusterAgent.containerName` | The container name for the Cluster Agent | `cluster-agent` |
+| `clusterAgent.image.repository` | The image repository for the cluster-agent | `datadog/cluster-agent` |
+| `clusterAgent.image.tag` | The image tag to pull | `1.2.0` |
+| `clusterAgent.image.pullPolicy` | Image pull policy | `IfNotPresent` |
+| `clusterAgent.image.pullSecrets` | Image pull secrets | `nil` |
+| `clusterAgent.metricsProvider.enabled` | Enable Datadog metrics as a source for HPA scaling | `false` |
+| `clusterAgent.clusterChecks.enabled` | Enable Cluster Checks on both the Cluster Agent and the Agent daemonset | `false` |
+| `clusterAgent.confd` | Additional check configurations (static and Autodiscovery) | `nil` |
+| `clusterAgent.resources.requests.cpu` | CPU resource requests | `200m` |
+| `clusterAgent.resources.limits.cpu` | CPU resource limits | `200m` |
+| `clusterAgent.resources.requests.memory` | Memory resource requests | `256Mi` |
+| `clusterAgent.resources.limits.memory` | Memory resource limits | `256Mi` |
+| `clusterAgent.tolerations` | List of node taints to tolerate | `[]` |
+| `clusterAgent.livenessProbe` | Overrides the default liveness probe | http port 443 if external metrics enabled |
+| `clusterAgent.readinessProbe` | Overrides the default readiness probe | http port 443 if external metrics enabled |
+| `clusterchecksDeployment.enabled` | Enable Datadog agent deployment dedicated for running Cluster Checks. It allows having different resources (Request/Limit) for Cluster Checks agent pods. | `false` |
+| `clusterchecksDeployment.env` | Additional Datadog environment variables for Cluster Checks Deployment | `nil` |
+| `clusterchecksDeployment.resources.requests.cpu` | CPU resource requests | `200m` |
+| `clusterchecksDeployment.resources.limits.cpu` | CPU resource limits | `200m` |
+| `clusterchecksDeployment.resources.requests.memory` | Memory resource requests | `256Mi` |
+| `clusterchecksDeployment.resources.limits.memory` | Memory resource limits | `256Mi` |
+| `clusterchecksDeployment.nodeSelector` | Node selectors | `nil` |
+| `clusterchecksDeployment.affinity` | Node affinities | avoid running pods on the same node |
+| `clusterchecksDeployment.livenessProbe` | Overrides the default liveness probe | http port 5555 |
+| `clusterchecksDeployment.rbac.dedicated` | If true, use dedicated RBAC resources for clusterchecks agent's pods | `false` |
+| `clusterchecksDeployment.rbac.serviceAccount` | existing ServiceAccount to use (ignored if rbac.create=true) for clusterchecks | `default` |
\ No newline at end of file
diff --git a/stable/datadog/requirements.lock b/stable/datadog/requirements.lock
index 346615c7e67f..82c3189eac97 100644
--- a/stable/datadog/requirements.lock
+++ b/stable/datadog/requirements.lock
@@ -1,6 +1,6 @@
dependencies:
- name: kube-state-metrics
repository: https://kubernetes-charts.storage.googleapis.com/
- version: 0.11.0
-digest: sha256:78b2838433c7b6ca3fc7d62ca3a017c2c1116c080df2b245a29f3f6ecc37d544
-generated: 2018-11-21T19:00:49.191534591+01:00
+ version: 0.16.0
+digest: sha256:0f9c623d315b30d1b0680cb0146e8740d866d8360eee2fc17d991401e724dfff
+generated: 2019-04-01T16:43:29.032424+02:00
diff --git a/stable/datadog/requirements.yaml b/stable/datadog/requirements.yaml
index f68aa78668d2..9996bf51c4e8 100644
--- a/stable/datadog/requirements.yaml
+++ b/stable/datadog/requirements.yaml
@@ -1,5 +1,5 @@
dependencies:
- name: kube-state-metrics
- version: ~0.11.0
+ version: ~0.16.0
repository: https://kubernetes-charts.storage.googleapis.com/
condition: kubeStateMetrics.enabled
diff --git a/stable/datadog/templates/NOTES.txt b/stable/datadog/templates/NOTES.txt
index 9cb829aeb7b0..ef6d1e6f3b2f 100644
--- a/stable/datadog/templates/NOTES.txt
+++ b/stable/datadog/templates/NOTES.txt
@@ -89,7 +89,7 @@ chart if your use case still requires a separate Deployment.
The autoconf value is deprecated, Autodiscovery templates can now
be safely moved to the confd value. As a temporary measure, both
-values were merged into the {{ template "datadog.confd.fullname" . }} configmap,
+values were merged into the {{ template "datadog.fullname" . }}-confd configmap,
but this will be removed in a future chart release.
Please note that duplicate file names may have conflicted during
the merge. In that case, the confd entry will take precedence.
diff --git a/stable/datadog/templates/_helpers.tpl b/stable/datadog/templates/_helpers.tpl
index 559456fec64a..d7ef2c5d4865 100644
--- a/stable/datadog/templates/_helpers.tpl
+++ b/stable/datadog/templates/_helpers.tpl
@@ -9,11 +9,28 @@ Expand the name of the chart.
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
+And depending on the resources the name is completed with an extension.
+If release name contains chart name it will be used as a full name.
*/}}
{{- define "datadog.fullname" -}}
+{{- if .Values.fullnameOverride -}}
+{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
+{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
+{{- if contains $name .Release.Name -}}
+{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
+{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Create chart name and version as used by the chart label.
+*/}}
+{{- define "datadog.chart" -}}
+{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
{{/*
Return secret name to be used based on provided values.
@@ -31,30 +48,6 @@ Return secret name to be used based on provided values.
{{- default $fullName .Values.datadog.appKeyExistingSecret | quote -}}
{{- end -}}
-{{/*
-Create a default fully qualified confd name.
-We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
-*/}}
-{{- define "datadog.confd.fullname" -}}
-{{- printf "%s-datadog-confd" .Release.Name | trunc 63 | trimSuffix "-" -}}
-{{- end -}}
-
-{{/*
-Create a default fully qualified checksd name.
-We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
-*/}}
-{{- define "datadog.checksd.fullname" -}}
-{{- printf "%s-datadog-checksd" .Release.Name | trunc 63 | trimSuffix "-" -}}
-{{- end -}}
-
-{{/*
-Create a default fully qualified cluster-agent name.
-We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
-*/}}
-{{- define "datadog.clusterAgent.fullname" -}}
-{{- printf "%s-cluster-agent" .Release.Name | trunc 63 | trimSuffix "-" -}}
-{{- end -}}
-
{{/*
Return the appropriate apiVersion for RBAC APIs.
*/}}
diff --git a/stable/datadog/templates/agent-apiservice.yaml b/stable/datadog/templates/agent-apiservice.yaml
index 774e80be0bc8..9432ba935fd0 100644
--- a/stable/datadog/templates/agent-apiservice.yaml
+++ b/stable/datadog/templates/agent-apiservice.yaml
@@ -10,12 +10,11 @@ metadata:
release: {{ .Release.Name | quote }}
spec:
service:
- name: {{ template "datadog.clusterAgent.fullname" . }}-metrics-api
+ name: {{ template "datadog.fullname" . }}-cluster-agent-metrics-api
namespace: {{ .Release.Namespace }}
version: v1beta1
insecureSkipTLSVerify: true
group: external.metrics.k8s.io
groupPriorityMinimum: 100
versionPriority: 100
- priority: 100
{{- end -}}
diff --git a/stable/datadog/templates/agent-clusterchecks-clusterrolebinding.yaml b/stable/datadog/templates/agent-clusterchecks-clusterrolebinding.yaml
new file mode 100644
index 000000000000..c4e59c32d604
--- /dev/null
+++ b/stable/datadog/templates/agent-clusterchecks-clusterrolebinding.yaml
@@ -0,0 +1,19 @@
+{{- if and .Values.rbac.create .Values.clusterAgent.enabled .Values.clusterAgent.clusterChecks.enabled .Values.clusterchecksDeployment.enabled -}}
+apiVersion: {{ template "rbac.apiVersion" . }}
+kind: ClusterRoleBinding
+metadata:
+ labels:
+ app: "{{ template "datadog.fullname" . }}"
+ chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
+ release: {{ .Release.Name | quote }}
+ heritage: {{ .Release.Service | quote }}
+ name: {{ template "datadog.fullname" . }}-cluster-checks
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: {{ template "datadog.fullname" . }}
+subjects:
+ - kind: ServiceAccount
+ name: {{ template "datadog.fullname" . }}-cluster-checks
+ namespace: {{ .Release.Namespace }}
+{{- end -}}
diff --git a/stable/datadog/templates/agent-clusterchecks-deployment.yaml b/stable/datadog/templates/agent-clusterchecks-deployment.yaml
new file mode 100644
index 000000000000..42514bcfb318
--- /dev/null
+++ b/stable/datadog/templates/agent-clusterchecks-deployment.yaml
@@ -0,0 +1,99 @@
+{{- if and .Values.clusterAgent.enabled .Values.clusterAgent.clusterChecks.enabled .Values.clusterchecksDeployment.enabled -}}
+apiVersion: extensions/v1beta1
+kind: Deployment
+metadata:
+ name: {{ template "datadog.fullname" . }}-clusterchecks
+ labels:
+ app: "{{ template "datadog.fullname" . }}"
+ chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
+ release: {{ .Release.Name | quote }}
+ heritage: {{ .Release.Service | quote }}
+spec:
+ replicas: {{ .Values.clusterchecksDeployment.replicas }}
+ template:
+ metadata:
+ labels:
+ app: {{ template "datadog.fullname" . }}-clusterchecks
+ name: {{ template "datadog.fullname" . }}-clusterchecks
+ spec:
+ {{- if .Values.clusterchecksDeployment.rbac.dedicated }}
+ serviceAccountName: {{ if .Values.rbac.create }}{{ template "datadog.fullname" . }}-cluster-checks{{ else }}"{{ .Values.clusterchecksDeployment.rbac.serviceAccountName }}"{{ end }}
+ {{- else }}
+ serviceAccountName: {{ if .Values.rbac.create }}{{ template "datadog.fullname" . }}{{ else }}"{{ .Values.rbac.serviceAccountName }}"{{ end }}
+ {{- end }}
+ containers:
+ - name: {{ default .Chart.Name .Values.datadog.name }}
+ image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
+ imagePullPolicy: {{ .Values.image.pullPolicy }}
+ env:
+ - name: DD_API_KEY
+ valueFrom:
+ secretKeyRef:
+ name: {{ template "datadog.apiSecretName" . }}
+ key: api-key
+ - name: DD_EXTRA_CONFIG_PROVIDERS
+ value: "clusterchecks"
+ - {name: DD_HEALTH_PORT, value: "5555"}
+ # Cluster checks
+ - name: DD_CLUSTER_AGENT_KUBERNETES_SERVICE_NAME
+ value: {{ template "datadog.fullname" . }}-cluster-agent
+ - name: DD_CLUSTER_AGENT_AUTH_TOKEN
+ valueFrom:
+ secretKeyRef:
+ name: {{ template "datadog.fullname" . }}-cluster-agent
+ key: token
+ - name: DD_CLUSTER_AGENT_ENABLED
+ value: {{ .Values.clusterAgent.enabled | quote }}
+ # Remove unused features
+ - {name: DD_APM_ENABLED, value: "false"}
+ - {name: DD_PROCESS_AGENT_ENABLED, value: "false"}
+ - {name: DD_LOGS_ENABLED, value: "false"}
+ # Safely run alongside the daemonset
+ - {name: DD_ENABLE_METADATA_COLLECTION, value: "false"}
+ - name: DD_HOSTNAME
+ valueFrom:
+ fieldRef:
+ fieldPath: spec.nodeName
+{{- if .Values.clusterchecksDeployment.env }}
+{{ toYaml .Values.clusterchecksDeployment.env | indent 10 }}
+{{- end }}
+ resources:
+{{ toYaml .Values.clusterchecksDeployment.resources | indent 12 }}
+ volumeMounts:
+ - {name: s6-run, mountPath: /var/run/s6}
+ - {name: remove-corechecks, mountPath: /etc/datadog-agent/conf.d}
+{{- if .Values.clusterchecksDeployment.livenessProbe }}
+ livenessProbe:
+{{ toYaml .Values.clusterchecksDeployment.livenessProbe | indent 10 }}
+{{- else }}
+ livenessProbe:
+ httpGet:
+ path: /health
+ port: 5555
+ initialDelaySeconds: 15
+ periodSeconds: 15
+ timeoutSeconds: 5
+ successThreshold: 1
+ failureThreshold: 6
+{{- end }}
+ volumes:
+ - {name: s6-run, emptyDir: {}}
+ - {name: remove-corechecks, emptyDir: {}}
+ affinity:
+{{- if .Values.clusterchecksDeployment.affinity }}
+{{ toYaml .Values.clusterchecksDeployment.affinity | indent 8 }}
+{{- else }}
+ # Ensure we only run one worker per node, to avoid name collisions
+ podAntiAffinity:
+ requiredDuringSchedulingIgnoredDuringExecution:
+ - labelSelector:
+ matchLabels:
+ app: {{ template "datadog.fullname" . }}-clusterchecks
+ topologyKey: kubernetes.io/hostname
+{{- end }}
+ {{- if .Values.clusterchecksDeployment.nodeSelector }}
+ nodeSelector:
+{{ toYaml .Values.clusterchecksDeployment.nodeSelector | indent 8 }}
+ {{- end }}
+{{ end }}
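Reviewers can exercise the new clusterchecks deployment template with a values fragment along these lines (key names are taken from the conditionals in the template above; the resource figures are placeholders, not recommendations):

```yaml
# Hypothetical values fragment to render the dedicated cluster
# checks worker deployment; resource figures are illustrative only.
clusterAgent:
  enabled: true
  clusterChecks:
    enabled: true
clusterchecksDeployment:
  enabled: true
  rbac:
    dedicated: true
  resources:
    requests:
      cpu: 200m
      memory: 256Mi
    limits:
      cpu: 200m
      memory: 256Mi
```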
diff --git a/stable/datadog/templates/agent-clusterchecks-serviceaccount.yaml b/stable/datadog/templates/agent-clusterchecks-serviceaccount.yaml
new file mode 100644
index 000000000000..7f148ed758c5
--- /dev/null
+++ b/stable/datadog/templates/agent-clusterchecks-serviceaccount.yaml
@@ -0,0 +1,11 @@
+{{- if and .Values.rbac.create .Values.clusterAgent.enabled .Values.clusterAgent.clusterChecks.enabled .Values.clusterchecksDeployment.enabled -}}
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ labels:
+ app: "{{ template "datadog.fullname" . }}"
+ chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
+ heritage: {{ .Release.Service | quote }}
+ release: {{ .Release.Name | quote }}
+ name: {{ template "datadog.fullname" . }}-cluster-checks
+{{- end -}}
diff --git a/stable/datadog/templates/agent-clusterrole.yaml b/stable/datadog/templates/agent-clusterrole.yaml
index 5999b6490077..d6a8ba6f93a2 100644
--- a/stable/datadog/templates/agent-clusterrole.yaml
+++ b/stable/datadog/templates/agent-clusterrole.yaml
@@ -7,7 +7,7 @@ metadata:
chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
- name: {{ template "datadog.clusterAgent.fullname" . }}
+ name: {{ template "datadog.fullname" . }}-cluster-agent
rules:
- apiGroups:
- ""
@@ -22,6 +22,12 @@ rules:
- get
- list
- watch
+- apiGroups: ["quota.openshift.io"]
+ resources:
+ - clusterresourcequotas
+ verbs:
+ - get
+ - list
- apiGroups:
- "autoscaling"
resources:
diff --git a/stable/datadog/templates/agent-clusterrolebinding-auth-delegator.yaml b/stable/datadog/templates/agent-clusterrolebinding-auth-delegator.yaml
index f173e098c097..20e48daec87c 100644
--- a/stable/datadog/templates/agent-clusterrolebinding-auth-delegator.yaml
+++ b/stable/datadog/templates/agent-clusterrolebinding-auth-delegator.yaml
@@ -7,13 +7,13 @@ metadata:
chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
- name: {{ template "datadog.clusterAgent.fullname" . }}:system:auth-delegator
+ name: {{ template "datadog.fullname" . }}-cluster-agent:system:auth-delegator
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:auth-delegator
subjects:
- kind: ServiceAccount
- name: {{ template "datadog.clusterAgent.fullname" . }}
+ name: {{ template "datadog.fullname" . }}-cluster-agent
namespace: {{ .Release.Namespace }}
{{- end -}}
diff --git a/stable/datadog/templates/agent-clusterrolebinding.yaml b/stable/datadog/templates/agent-clusterrolebinding.yaml
index b231858fba7c..b525ceacde36 100644
--- a/stable/datadog/templates/agent-clusterrolebinding.yaml
+++ b/stable/datadog/templates/agent-clusterrolebinding.yaml
@@ -7,13 +7,13 @@ metadata:
chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
- name: {{ template "datadog.clusterAgent.fullname" . }}
+ name: {{ template "datadog.fullname" . }}-cluster-agent
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
- name: {{ template "datadog.clusterAgent.fullname" . }}
+ name: {{ template "datadog.fullname" . }}-cluster-agent
subjects:
- kind: ServiceAccount
- name: {{ template "datadog.clusterAgent.fullname" . }}
+ name: {{ template "datadog.fullname" . }}-cluster-agent
namespace: {{ .Release.Namespace }}
{{- end -}}
diff --git a/stable/datadog/templates/agent-secret.yaml b/stable/datadog/templates/agent-secret.yaml
index 793eb05a7313..7ee758c2d1c1 100644
--- a/stable/datadog/templates/agent-secret.yaml
+++ b/stable/datadog/templates/agent-secret.yaml
@@ -3,7 +3,7 @@
apiVersion: v1
kind: Secret
metadata:
- name: {{ template "datadog.clusterAgent.fullname" . }}
+ name: {{ template "datadog.fullname" . }}-cluster-agent
labels:
app: "{{ template "datadog.fullname" . }}"
chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
diff --git a/stable/datadog/templates/agent-service-cmd.yaml b/stable/datadog/templates/agent-service-cmd.yaml
index 28f53c743e1c..ba9a97299809 100644
--- a/stable/datadog/templates/agent-service-cmd.yaml
+++ b/stable/datadog/templates/agent-service-cmd.yaml
@@ -2,7 +2,7 @@
apiVersion: v1
kind: Service
metadata:
- name: {{ template "datadog.clusterAgent.fullname" . }}
+ name: {{ template "datadog.fullname" . }}-cluster-agent
labels:
app: "{{ template "datadog.fullname" . }}"
chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
@@ -11,7 +11,7 @@ metadata:
spec:
type: ClusterIP
selector:
- app: {{ template "datadog.clusterAgent.fullname" . }}
+ app: {{ template "datadog.fullname" . }}-cluster-agent
ports:
- port: 5005
name: agentport
diff --git a/stable/datadog/templates/agent-service-metrics.yaml b/stable/datadog/templates/agent-service-metrics.yaml
index f43e338c7461..b9869fa42533 100644
--- a/stable/datadog/templates/agent-service-metrics.yaml
+++ b/stable/datadog/templates/agent-service-metrics.yaml
@@ -2,7 +2,7 @@
apiVersion: v1
kind: Service
metadata:
- name: {{ template "datadog.clusterAgent.fullname" . }}-metrics-api
+ name: {{ template "datadog.fullname" . }}-cluster-agent-metrics-api
labels:
app: "{{ template "datadog.fullname" . }}"
chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
@@ -11,7 +11,7 @@ metadata:
spec:
type: ClusterIP
selector:
- app: {{ template "datadog.clusterAgent.fullname" . }}
+ app: {{ template "datadog.fullname" . }}-cluster-agent
ports:
- port: 443
name: metricsapi
diff --git a/stable/datadog/templates/agent-serviceaccount.yaml b/stable/datadog/templates/agent-serviceaccount.yaml
index b15e3a8c55ae..375d3c563afa 100644
--- a/stable/datadog/templates/agent-serviceaccount.yaml
+++ b/stable/datadog/templates/agent-serviceaccount.yaml
@@ -7,5 +7,5 @@ metadata:
chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
- name: {{ template "datadog.clusterAgent.fullname" . }}
+ name: {{ template "datadog.fullname" . }}-cluster-agent
{{- end -}}
diff --git a/stable/datadog/templates/agenthpa-rolebinding.yaml b/stable/datadog/templates/agenthpa-rolebinding.yaml
index eaf2758e7d56..2e3be3c1eb72 100644
--- a/stable/datadog/templates/agenthpa-rolebinding.yaml
+++ b/stable/datadog/templates/agenthpa-rolebinding.yaml
@@ -7,13 +7,13 @@ metadata:
chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
- name: "{{ template "datadog.clusterAgent.fullname" . }}"
+ name: "{{ template "datadog.fullname" . }}-cluster-agent"
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
- name: {{ template "datadog.clusterAgent.fullname" . }}
+ name: {{ template "datadog.fullname" . }}-cluster-agent
namespace: {{ .Release.Namespace }}
{{- end -}}
diff --git a/stable/datadog/templates/checksd-configmap.yaml b/stable/datadog/templates/checksd-configmap.yaml
index 0685876205f7..2d4f50fe39b5 100644
--- a/stable/datadog/templates/checksd-configmap.yaml
+++ b/stable/datadog/templates/checksd-configmap.yaml
@@ -2,7 +2,7 @@
apiVersion: v1
kind: ConfigMap
metadata:
- name: {{ template "datadog.checksd.fullname" . }}
+ name: {{ template "datadog.fullname" . }}-checksd
labels:
app: "{{ template "datadog.fullname" . }}"
chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
diff --git a/stable/datadog/templates/cluster-agent-confd-configmap.yaml b/stable/datadog/templates/cluster-agent-confd-configmap.yaml
new file mode 100644
index 000000000000..0911f847d68a
--- /dev/null
+++ b/stable/datadog/templates/cluster-agent-confd-configmap.yaml
@@ -0,0 +1,15 @@
+{{- if .Values.clusterAgent.confd }}
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: {{ template "datadog.fullname" . }}-cluster-agent-confd
+ labels:
+ app: "{{ template "datadog.fullname" . }}"
+ chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
+ release: {{ .Release.Name | quote }}
+ heritage: {{ .Release.Service | quote }}
+ annotations:
+ checksum/confd-config: {{ tpl (toYaml .Values.clusterAgent.confd) . | sha256sum }}
+data:
+{{ tpl (toYaml .Values.clusterAgent.confd) . | indent 2 }}
+{{- end -}}
diff --git a/stable/datadog/templates/cluster-agent-deployment.yaml b/stable/datadog/templates/cluster-agent-deployment.yaml
index 5de9de64c660..356dc9538005 100644
--- a/stable/datadog/templates/cluster-agent-deployment.yaml
+++ b/stable/datadog/templates/cluster-agent-deployment.yaml
@@ -2,7 +2,7 @@
apiVersion: apps/v1
kind: Deployment
metadata:
- name: {{ template "datadog.clusterAgent.fullname" . }}
+ name: {{ template "datadog.fullname" . }}-cluster-agent
labels:
app: "{{ template "datadog.fullname" . }}"
chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
@@ -12,12 +12,26 @@ spec:
replicas: {{ .Values.clusterAgent.replicas }}
selector:
matchLabels:
- app: {{ template "datadog.clusterAgent.fullname" . }}
+ app: {{ template "datadog.fullname" . }}-cluster-agent
template:
metadata:
labels:
- app: {{ template "datadog.clusterAgent.fullname" . }}
- name: {{ template "datadog.clusterAgent.fullname" . }}
+ app: {{ template "datadog.fullname" . }}-cluster-agent
+ name: {{ template "datadog.fullname" . }}-cluster-agent
+ annotations:
+ ad.datadoghq.com/{{ .Values.clusterAgent.containerName }}.check_names: '["prometheus"]'
+ ad.datadoghq.com/{{ .Values.clusterAgent.containerName }}.init_configs: '[{}]'
+ ad.datadoghq.com/{{ .Values.clusterAgent.containerName }}.instances: |
+ [{
+ "prometheus_url": "http://%%host%%:5000/metrics",
+ "namespace": "datadog.cluster_agent",
+ "metrics": [
+ "go_goroutines", "go_memstats_*", "process_*",
+ "api_requests",
+ "datadog_requests", "external_metrics",
+ "cluster_checks_*"
+ ]
+ }]
spec:
{{- if .Values.clusterAgent.image.pullSecrets }}
imagePullSecrets:
@@ -53,6 +67,18 @@ spec:
name: {{ template "datadog.appKeySecretName" . }}
key: app-key
{{- end }}
+ {{- if .Values.clusterAgent.clusterChecks.enabled }}
+ - name: DD_CLUSTER_CHECKS_ENABLED
+ value: {{ .Values.clusterAgent.clusterChecks.enabled | quote }}
+ - name: DD_EXTRA_CONFIG_PROVIDERS
+ value: "kube_services"
+ - name: DD_EXTRA_LISTENERS
+ value: "kube_services"
+ {{- end }}
+ {{- if .Values.datadog.clusterName }}
+ - name: DD_CLUSTER_NAME
+ value: {{ .Values.datadog.clusterName | quote }}
+ {{- end }}
{{- if .Values.datadog.site }}
- name: DD_SITE
value: {{ .Values.datadog.site | quote }}
@@ -70,17 +96,20 @@ spec:
{{- if .Values.datadog.leaderLeaseDuration }}
- name: DD_LEADER_LEASE_DURATION
value: {{ .Values.datadog.leaderLeaseDuration | quote }}
+ {{- else if .Values.clusterAgent.clusterChecks.enabled }}
+ - name: DD_LEADER_LEASE_DURATION
+ value: "15"
{{- end }}
{{- if .Values.datadog.collectEvents }}
- name: DD_COLLECT_KUBERNETES_EVENTS
value: {{ .Values.datadog.collectEvents | quote}}
- name: DD_CLUSTER_AGENT_KUBERNETES_SERVICE_NAME
- value: {{ template "datadog.clusterAgent.fullname" . }}
+ value: {{ template "datadog.fullname" . }}-cluster-agent
{{- end }}
- name: DD_CLUSTER_AGENT_AUTH_TOKEN
valueFrom:
secretKeyRef:
- name: {{ template "datadog.clusterAgent.fullname" . }}
+ name: {{ template "datadog.fullname" . }}-cluster-agent
key: token
- name: DD_KUBE_RESOURCES_NAMESPACE
value: {{ .Release.Namespace }}
@@ -106,6 +135,16 @@ spec:
port: 443
path: /healthz
scheme: HTTPS
+{{- end }}
+{{- if .Values.clusterAgent.confd }}
+ volumeMounts:
+ - name: confd
+ mountPath: /conf.d
+ readOnly: true
+ volumes:
+ - name: confd
+ configMap:
+ name: {{ template "datadog.fullname" . }}-cluster-agent-confd
{{- end }}
{{- if .Values.clusterAgent.tolerations }}
tolerations:
@@ -115,5 +154,5 @@ spec:
affinity:
{{ toYaml .Values.clusterAgent.affinity | indent 8 }}
{{- end }}
- serviceAccountName: {{ if .Values.rbac.create }}{{ template "datadog.clusterAgent.fullname" . }}{{ else }}"{{ .Values.rbac.serviceAccountName }}"{{ end }}
+ serviceAccountName: {{ if .Values.rbac.create }}{{ template "datadog.fullname" . }}-cluster-agent{{ else }}"{{ .Values.rbac.serviceAccountName }}"{{ end }}
{{ end }}
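The new `-cluster-agent-confd` ConfigMap is keyed directly from `.Values.clusterAgent.confd` and mounted read-only at `/conf.d`, so a cluster check can be supplied entirely from values. A sketch using the standard `http_check` integration (the instance name and URL are made up):

```yaml
# Hypothetical cluster check scheduled by the Cluster Agent;
# each top-level key becomes a file under /conf.d.
clusterAgent:
  confd:
    http_check.yaml: |-
      cluster_check: true
      instances:
        - name: example-backend
          url: http://example-backend.default.svc.cluster.local
```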
diff --git a/stable/datadog/templates/clusterrole.yaml b/stable/datadog/templates/clusterrole.yaml
index 6733ed64bdc0..87a3d8fc78e7 100644
--- a/stable/datadog/templates/clusterrole.yaml
+++ b/stable/datadog/templates/clusterrole.yaml
@@ -23,6 +23,12 @@ rules:
- get
- list
- watch
+- apiGroups: ["quota.openshift.io"]
+ resources:
+ - clusterresourcequotas
+ verbs:
+ - get
+ - list
{{- if .Values.datadog.collectEvents }}
- apiGroups:
- ""
@@ -57,6 +63,10 @@ rules:
verbs:
- get
{{- end }}
+- nonResourceURLs:
+ - "/metrics"
+ verbs:
+ - get
- apiGroups: # Kubelet connectivity
- ""
resources:
@@ -65,4 +75,10 @@ rules:
- nodes/proxy
verbs:
- get
+- apiGroups: # leader election check
+ - ""
+ resources:
+ - endpoints
+ verbs:
+ - get
{{- end -}}
diff --git a/stable/datadog/templates/clusterrolebinding.yaml b/stable/datadog/templates/clusterrolebinding.yaml
index 7db402447c6c..4e1b9392909f 100644
--- a/stable/datadog/templates/clusterrolebinding.yaml
+++ b/stable/datadog/templates/clusterrolebinding.yaml
@@ -1,4 +1,4 @@
-{{- if .Values.rbac.create -}}
+{{- if and .Values.rbac.create (not .Values.clusterchecksDeployment.rbac.dedicated) -}}
apiVersion: {{ template "rbac.apiVersion" . }}
kind: ClusterRoleBinding
metadata:
diff --git a/stable/datadog/templates/confd-configmap.yaml b/stable/datadog/templates/confd-configmap.yaml
index be9c9a513e07..98eebe0508fe 100644
--- a/stable/datadog/templates/confd-configmap.yaml
+++ b/stable/datadog/templates/confd-configmap.yaml
@@ -2,7 +2,7 @@
apiVersion: v1
kind: ConfigMap
metadata:
- name: {{ template "datadog.confd.fullname" . }}
+ name: {{ template "datadog.fullname" . }}-confd
labels:
app: "{{ template "datadog.fullname" . }}"
chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
diff --git a/stable/datadog/templates/daemonset.yaml b/stable/datadog/templates/daemonset.yaml
index 8681376f3c0a..04a8112f94c7 100644
--- a/stable/datadog/templates/daemonset.yaml
+++ b/stable/datadog/templates/daemonset.yaml
@@ -14,6 +14,9 @@ spec:
metadata:
labels:
app: {{ template "datadog.fullname" . }}
+ {{- if .Values.daemonset.podLabels }}
+{{ toYaml .Values.daemonset.podLabels | indent 8 }}
+ {{- end }}
name: {{ template "datadog.fullname" . }}
annotations:
checksum/autoconf-config: {{ tpl (toYaml .Values.datadog.autoconf) . | sha256sum }}
@@ -68,6 +71,10 @@ spec:
secretKeyRef:
name: {{ template "datadog.apiSecretName" . }}
key: api-key
+ {{- if .Values.datadog.clusterName }}
+ - name: DD_CLUSTER_NAME
+ value: {{ .Values.datadog.clusterName | quote }}
+ {{- end }}
{{- if .Values.datadog.site }}
- name: DD_SITE
value: {{ .Values.datadog.site | quote }}
@@ -84,6 +91,10 @@ spec:
- name: DD_DOGSTATSD_NON_LOCAL_TRAFFIC
value: {{ .Values.datadog.nonLocalTraffic | quote }}
{{- end }}
+ {{- if .Values.datadog.dogstatsdOriginDetection }}
+ - name: DD_DOGSTATSD_ORIGIN_DETECTION
+ value: {{ .Values.datadog.dogstatsdOriginDetection | quote }}
+ {{- end }}
{{- if .Values.datadog.tags }}
- name: DD_TAGS
value: {{ .Values.datadog.tags | quote }}
@@ -125,11 +136,11 @@ spec:
- name: DD_CLUSTER_AGENT_ENABLED
value: {{ .Values.clusterAgent.enabled | quote }}
- name: DD_CLUSTER_AGENT_KUBERNETES_SERVICE_NAME
- value: {{ template "datadog.clusterAgent.fullname" . }}
+ value: {{ template "datadog.fullname" . }}-cluster-agent
- name: DD_CLUSTER_AGENT_AUTH_TOKEN
valueFrom:
secretKeyRef:
- name: {{ template "datadog.clusterAgent.fullname" . }}
+ name: {{ template "datadog.fullname" . }}-cluster-agent
key: token
{{- end }}
- name: KUBERNETES
@@ -164,6 +175,14 @@ spec:
- name: DD_HEALTH_PORT
value: "5555"
{{- end }}
+ {{- if .Values.datadog.useDogStatsDSocketVolume }}
+ - name: DD_DOGSTATSD_SOCKET
+ value: "/var/run/datadog/dsd.socket"
+ {{- end }}
+ {{- if and .Values.clusterAgent.clusterChecks.enabled (not .Values.clusterchecksDeployment.enabled) }}
+ - name: DD_EXTRA_CONFIG_PROVIDERS
+ value: "clusterchecks"
+ {{- end }}
{{- if .Values.datadog.env }}
{{ toYaml .Values.datadog.env | indent 10 }}
{{- end }}
@@ -173,6 +192,10 @@ spec:
mountPath: {{ default "/var/run/docker.sock" .Values.datadog.criSocketPath | quote }}
readOnly: true
{{- end }}
+ {{- if .Values.datadog.useDogStatsDSocketVolume }}
+ - name: dsdsocket
+ mountPath: "/var/run/datadog"
+ {{- end }}
- name: procdir
mountPath: /host/proc
readOnly: true
@@ -223,6 +246,11 @@ spec:
path: {{ default "/var/run/docker.sock" .Values.datadog.criSocketPath | quote }}
name: runtimesocket
{{- end }}
+ {{- if .Values.datadog.useDogStatsDSocketVolume }}
+ - hostPath:
+ path: "/var/run/datadog/"
+ name: dsdsocket
+ {{- end }}
- hostPath:
path: /proc
name: procdir
@@ -234,12 +262,12 @@ spec:
{{- if (or (.Values.datadog.confd) (.Values.datadog.autoconf)) }}
- name: confd
configMap:
- name: {{ template "datadog.confd.fullname" . }}
+ name: {{ template "datadog.fullname" . }}-confd
{{- end }}
{{- if .Values.datadog.checksd }}
- name: checksd
configMap:
- name: {{ template "datadog.checksd.fullname" . }}
+ name: {{ template "datadog.fullname" . }}-checksd
{{- end }}
{{- if .Values.datadog.logsEnabled }}
- hostPath:
@@ -268,6 +296,6 @@ spec:
{{ toYaml .Values.daemonset.nodeSelector | indent 8 }}
{{- end }}
updateStrategy:
- type: {{ default "OnDelete" .Values.daemonset.updateStrategy | quote }}
+ type: {{ default "RollingUpdate" .Values.daemonset.updateStrategy | quote }}
{{ end }}
{{ end }}
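Note that the default DaemonSet `updateStrategy` flips here from `OnDelete` to `RollingUpdate`, so agent pods will now be replaced automatically on upgrade. Users who relied on the old behaviour can pin it back from values:

```yaml
# Restore the pre-change DaemonSet update behaviour.
daemonset:
  updateStrategy: OnDelete
```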
diff --git a/stable/datadog/templates/deployment.yaml b/stable/datadog/templates/deployment.yaml
index 836b72811af3..a2b3710e6e43 100644
--- a/stable/datadog/templates/deployment.yaml
+++ b/stable/datadog/templates/deployment.yaml
@@ -70,6 +70,10 @@ spec:
- name: DD_DOGSTATSD_NON_LOCAL_TRAFFIC
value: {{ .Values.datadog.nonLocalTraffic | quote }}
{{- end }}
+ {{- if .Values.datadog.dogstatsdOriginDetection }}
+ - name: DD_DOGSTATSD_ORIGIN_DETECTION
+ value: {{ .Values.datadog.dogstatsdOriginDetection | quote }}
+ {{- end }}
{{- if .Values.datadog.tags }}
- name: DD_TAGS
value: {{ .Values.datadog.tags | quote }}
@@ -88,6 +92,10 @@ spec:
- name: DD_CRI_SOCKET_PATH
value: {{ .Values.datadog.criSocketPath | quote }}
{{- end }}
+ {{- if .Values.datadog.useDogStatsDSocketVolume }}
+ - name: DD_DOGSTATSD_SOCKET
+ value: "/var/run/datadog/dsd.socket"
+ {{- end }}
{{- if .Values.datadog.env }}
{{ toYaml .Values.datadog.env | indent 10 }}
{{- end }}
@@ -97,6 +105,10 @@ spec:
mountPath: {{ default "/var/run/docker.sock" .Values.datadog.criSocketPath | quote }}
readOnly: true
{{- end }}
+ {{- if .Values.datadog.useDogStatsDSocketVolume }}
+ - name: dsdsocket
+ mountPath: "/var/run/datadog"
+ {{- end }}
- name: procdir
mountPath: /host/proc
readOnly: true
@@ -134,6 +146,11 @@ spec:
path: {{ default "/var/run/docker.sock" .Values.datadog.criSocketPath | quote }}
name: runtimesocket
{{- end }}
+ {{- if .Values.datadog.useDogStatsDSocketVolume }}
+ - hostPath:
+ path: "/var/run/datadog/"
+ name: dsdsocket
+ {{- end }}
- hostPath:
path: /proc
name: procdir
@@ -143,12 +160,12 @@ spec:
{{- if (or (.Values.datadog.confd) (.Values.datadog.autoconf)) }}
- name: confd
configMap:
- name: {{ template "datadog.confd.fullname" . }}
+ name: {{ template "datadog.fullname" . }}-confd
{{- end }}
{{- if .Values.datadog.checksd }}
- name: checksd
configMap:
- name: {{ template "datadog.checksd.fullname" . }}
+ name: {{ template "datadog.fullname" . }}-checksd
{{- end }}
{{- if .Values.datadog.volumes }}
{{ toYaml .Values.datadog.volumes | indent 8 }}
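The `useDogStatsDSocketVolume` wiring above exposes the socket at `/var/run/datadog/dsd.socket` inside the container; a client then only needs the plain-text DogStatsD datagram format over an AF_UNIX socket. A minimal sketch (the metric name and tag are made up for illustration):

```python
import socket

def dogstatsd_payload(name, value, mtype="c", tags=()):
    """Build a DogStatsD wire-format datagram: name:value|type|#tag1,tag2."""
    packet = f"{name}:{value}|{mtype}"
    if tags:
        packet += "|#" + ",".join(tags)
    return packet.encode("utf-8")

def send_metric(payload, socket_path="/var/run/datadog/dsd.socket"):
    """Send one datagram to the agent's Unix Domain Socket."""
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    try:
        sock.sendto(payload, socket_path)
    finally:
        sock.close()

# Example: a counter with one tag.
payload = dogstatsd_payload("checkout.completed", 1, "c", ["env:prod"])
# payload == b"checkout.completed:1|c|#env:prod"
```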
diff --git a/stable/datadog/templates/hpa-clusterrole.yaml b/stable/datadog/templates/hpa-clusterrole.yaml
index b868cff60257..dddfd45fc594 100644
--- a/stable/datadog/templates/hpa-clusterrole.yaml
+++ b/stable/datadog/templates/hpa-clusterrole.yaml
@@ -7,7 +7,7 @@ metadata:
chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
heritage: {{ .Release.Service | quote }}
release: {{ .Release.Name | quote }}
- name: {{ template "datadog.clusterAgent.fullname" . }}-external-metrics-reader
+ name: {{ template "datadog.fullname" . }}-cluster-agent-external-metrics-reader
rules:
- apiGroups:
- "external.metrics.k8s.io"
diff --git a/stable/datadog/templates/hpa-clusterrolebinding.yaml b/stable/datadog/templates/hpa-clusterrolebinding.yaml
index 4108746d288d..7b14c3a7abbc 100644
--- a/stable/datadog/templates/hpa-clusterrolebinding.yaml
+++ b/stable/datadog/templates/hpa-clusterrolebinding.yaml
@@ -7,11 +7,11 @@ metadata:
chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
- name: {{ template "datadog.clusterAgent.fullname" . }}-external-metrics-reader
+ name: {{ template "datadog.fullname" . }}-cluster-agent-external-metrics-reader
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
- name: {{ template "datadog.clusterAgent.fullname" . }}-external-metrics-reader
+ name: {{ template "datadog.fullname" . }}-cluster-agent-external-metrics-reader
subjects:
- kind: ServiceAccount
name: horizontal-pod-autoscaler
diff --git a/stable/datadog/templates/service.yaml b/stable/datadog/templates/service.yaml
index 3465d1acb65a..1975ce1580ee 100644
--- a/stable/datadog/templates/service.yaml
+++ b/stable/datadog/templates/service.yaml
@@ -10,7 +10,7 @@ metadata:
heritage: {{ .Release.Service | quote }}
{{- if .Values.deployment.service.annotations }}
annotations:
- {{ toYaml .Values.deployment.service.annotations | indent 4 }}
+{{ toYaml .Values.deployment.service.annotations | indent 4 }}
{{- end }}
spec:
type: {{ .Values.deployment.service.type }}
diff --git a/stable/datadog/values.yaml b/stable/datadog/values.yaml
index bf2ebfd480e7..55ec25e9ac5c 100644
--- a/stable/datadog/values.yaml
+++ b/stable/datadog/values.yaml
@@ -1,239 +1,222 @@
-# Default values for datadog.
+## Default values for Datadog Agent
+## See Datadog helm documentation to learn more:
+## https://docs.datadoghq.com/agent/kubernetes/helm/
+
+## @param image - object - required
+## Define the Datadog image to work with.
+#
image:
- # This chart is compatible with different images, please choose one
- repository: datadog/agent # Agent6
- # repository: datadog/dogstatsd # Standalone DogStatsD6
- tag: 6.9.0 # Use 6.9.0-jmx to enable jmx fetch collection
+
+ ## @param repository - string - required
+ ## Define the repository to use:
+ ## use "datadog/agent" for Datadog Agent 6
+ ## use "datadog/dogstatsd" for Standalone Datadog Agent DogStatsD6
+ #
+ repository: datadog/agent
+
+ ## @param tag - string - required
+ ## Define the Agent version to use.
+ ## Use 6.9.0-jmx to enable jmx fetch collection
+ #
+ tag: 6.10.1
+
+ ## @param pullPolicy - string - required
+ ## The Kubernetes pull policy.
+ #
pullPolicy: IfNotPresent
+
+ ## @param pullSecrets - list of key:value strings - optional
## It is possible to specify docker registry credentials
## See https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod
+ #
# pullSecrets:
- # - name: regsecret
-
-# NB! Normally you need to keep Datadog DaemonSet enabled!
-# The exceptional case could be a situation when you need to run
-# single DataDog pod per every namespace, but you do not need to
-# re-create a DaemonSet for every non-default namespace install.
-# Note, that StatsD and DogStatsD work over UDP, so you may not
-# get guaranteed delivery of the metrics in Datadog-per-namespace setup!
-daemonset:
- enabled: true
- ## Bind ports on the hostNetwork. Useful for CNI networking where hostPort might
- ## not be supported. The ports will need to be available on all hosts. Can be
- ## used for custom metrics instead of a service endpoint.
- ## WARNING: Make sure that hosts using this are properly firewalled otherwise
- ## metrics and traces will be accepted from any host able to connect to this host.
- # useHostNetwork: true
-
- ## Sets the hostPort to the same value of the container port. Needs to be used
- ## to receive traces in a standard APM set up. Can be used as for sending custom metrics.
- ## The ports will need to be available on all hosts.
- ## WARNING: Make sure that hosts using this are properly firewalled otherwise
- ## metrics and traces will be accepted from any host able to connect to this host.
- # useHostPort: true
-
- ## Run the agent in the host's PID namespace. This is required for Dogstatsd origin
- ## detection to work. See https://docs.datadoghq.com/developers/dogstatsd/unix_socket/
- # useHostPID: true
-
- ## Annotations to add to the DaemonSet's Pods
- # podAnnotations:
- # scheduler.alpha.kubernetes.io/tolerations: '[{"key": "example", "value": "foo"}]'
-
- ## Allow the DaemonSet to schedule on tainted nodes (requires Kubernetes >= 1.6)
- # tolerations: []
-
- ## Allow the DaemonSet to schedule on selected nodes
- # Ref: https://kubernetes.io/docs/user-guide/node-selection/
- # nodeSelector: {}
-
- ## Allow the DaemonSet to schedule ussing affinity rules
- # Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
- # affinity: {}
-
- ## Allow the DaemonSet to perform a rolling update on helm update
- ## ref: https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/
- # updateStrategy: RollingUpdate
-
- ## Sets PriorityClassName if defined
- # priorityClassName:
-
-# Apart from DaemonSet, deploy Datadog agent pods and related service for
-# applications that want to send custom metrics. Provides DogStasD service.
-#
-# HINT: If you want to use datadog.collectEvents, keep deployment.replicas set to 1.
-deployment:
- enabled: false
- replicas: 1
- # Affinity for pod assignment
- # Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
- affinity: {}
- # Tolerations for pod assignment
- # Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
- tolerations: []
- # If you're using a NodePort-type service and need a fixed port, set this parameter.
- # dogstatsdNodePort: 8125
- # traceNodePort: 8126
-
- service:
- type: ClusterIP
- annotations: {}
-
- ## Sets PriorityClassName if defined
- # priorityClassName:
-
-## deploy the kube-state-metrics deployment
-## ref: https://github.com/kubernetes/charts/tree/master/stable/kube-state-metrics
-
-kubeStateMetrics:
- enabled: true
-
-# This is the new cluster agent implementation that handles cluster-wide
-# metrics more cleanly, separates concerns for better rbac, and implements
-# the external metrics API so you can autoscale HPAs based on datadog
-# metrics
-clusterAgent:
- containerName: cluster-agent
- image:
- repository: datadog/cluster-agent
- tag: 1.1.0
- pullPolicy: IfNotPresent
- enabled: false
- ## This needs to be at least 32 characters a-zA-z
- ## It is a preshared key between the node agents and the cluster agent
- token: ""
- replicas: 1
- ## Enable the metricsProvider to be able to scale based on metrics in Datadog
- metricsProvider:
- enabled: false
- resources:
- requests:
- cpu: 200m
- memory: 256Mi
- limits:
- cpu: 200m
- memory: 256Mi
+ # - name: ""
- ## Override the agent's liveness probe logic from the default:
- ## In case of issues with the probe, you can disable it with the
- ## following values, to allow easier investigating:
- # livenessProbe:
- # exec:
- # command: ["/bin/true"]
- ## Override the cluster-agent's readiness probe logic from the default:
- # readinessProbe:
+nameOverride: ""
+fullnameOverride: ""
datadog:
- ## You'll need to set this to your Datadog API key before the agent will run.
+
+ ## @param apiKey - string - required
+ ## Set this to your Datadog API key before the Agent runs.
## ref: https://app.datadoghq.com/account/settings#agent/kubernetes
- ##
- # apiKey:
+ #
+ apiKey:
+
+ ## @param apiKeyExistingSecret - string - optional
+ ## Use an existing Secret which stores the API key instead of creating a new one.
+ ## If set, this parameter takes precedence over "apiKey".
+ #
+ # apiKeyExistingSecret:
+
+ ## @param appKey - string - optional
+ ## If you are using clusterAgent.metricsProvider.enabled = true, you must set
+ ## a Datadog application key for read access to your metrics.
+ #
+ # appKey:
+
+ ## @param appKeyExistingSecret - string - optional
+ ## Use an existing Secret which stores the APP key instead of creating a new one.
+ ## If set, this parameter takes precedence over "appKey".
+ #
+ # appKeyExistingSecret:
+ ## @param securityContext - object - optional
## You can modify the security context used to run the containers by
## modifying the label type below:
+ #
# securityContext:
# seLinuxOptions:
# seLinuxLabel: "spc_t"
- ## Use existing Secret which stores API key instead of creating a new one
- # apiKeyExistingSecret:
-
- ## If you are using clusterAgent.metricsProvider.enabled = true, you'll need
- ## a datadog app key for read access to the metrics
- # appKey:
-
- ## Use existing Secret which stores APP key instead of creating a new one
- # appKeyExistingSecret:
+ ## @param clusterName - string - optional
+ ## Set a unique cluster name to allow scoping hosts and Cluster Checks easily
+ #
+ # clusterName:
+ ## @param name - string - required
## Daemonset/Deployment container name
## See clusterAgent.containerName if clusterAgent.enabled = true
- ##
+ #
name: datadog
- # The site of the Datadog intake to send Agent data to.
- # Defaults to 'datadoghq.com', set to 'datadoghq.eu' to send data to the EU site.
+ ## @param site - string - optional - default: 'datadoghq.com'
+ ## The site of the Datadog intake to send Agent data to.
+ ## Set to 'datadoghq.eu' to send data to the EU site.
+ #
# site: datadoghq.com
- # The host of the Datadog intake server to send Agent data to, only set this option
- # if you need the Agent to send data to a custom URL.
- # Overrides the site setting defined in "site".
+ ## @param dd_url - string - optional - default: 'https://app.datadoghq.com'
+ ## The host of the Datadog intake server to send Agent data to, only set this option
+ ## if you need the Agent to send data to a custom URL.
+ ## Overrides the site setting defined in "site".
+ #
# dd_url: https://app.datadoghq.com
- ## Set logging verbosity.
- ## ref: https://github.com/DataDog/docker-dd-agent#environment-variables
- ## Note: For Agent6 (image `datadog/agent`) the valid log levels are
+ ## @param logLevel - string - required
+ ## Set logging verbosity, valid log levels are:
## trace, debug, info, warn, error, critical, and off
- ##
+ #
logLevel: INFO
- ## Un-comment this to make each node accept non-local statsd traffic.
- ## ref: https://github.com/DataDog/docker-dd-agent#environment-variables
+ ## @param podLabelsAsTags - list of key:value strings - optional
+ ## Provide a mapping of Kubernetes Labels to Datadog Tags.
+ #
+ # podLabelsAsTags:
+ # app: kube_app
+ # release: helm_release
+ # <KUBERNETES_LABEL>: <DATADOG_TAG_KEY>
+
+ ## @param podAnnotationsAsTags - list of key:value strings - optional
+ ## Provide a mapping of Kubernetes Annotations to Datadog Tags
+ #
+ # podAnnotationsAsTags:
+ # iam.amazonaws.com/role: kube_iamrole
+ # <KUBERNETES_ANNOTATION>: <DATADOG_TAG_KEY>
+
+ ## @param tags - list of key:value elements - optional
+ ## List of tags to attach to every metric, event and service check collected by this Agent.
##
- # nonLocalTraffic: true
+ ## Learn more about tagging: https://docs.datadoghq.com/tagging/
+ #
+ # tags:
+ # - <KEY_1>:<VALUE_1>
+ # - <KEY_2>:<VALUE_2>
+ ## @param useCriSocketVolume - boolean - required
## Enable container runtime socket volume mounting
+ #
useCriSocketVolume: true
- ## Set host tags.
- ## ref: https://github.com/DataDog/docker-dd-agent#environment-variables
- ##
- # tags:
+ ## @param dogstatsdOriginDetection - boolean - optional
+ ## Enable origin detection for container tagging
+ ## https://docs.datadoghq.com/developers/dogstatsd/unix_socket/#using-origin-detection-for-container-tagging
+ #
+ # dogstatsdOriginDetection: true
- ## Enables event collection from the kubernetes API
+ ## @param useDogStatsDSocketVolume - boolean - optional
+ ## Enable dogstatsd over Unix Domain Socket
+ ## ref: https://docs.datadoghq.com/developers/dogstatsd/unix_socket/
+ #
+ # useDogStatsDSocketVolume: true
+
+ ## @param nonLocalTraffic - boolean - optional - default: false
+ ## Enable this to make each node accept non-local statsd traffic.
## ref: https://github.com/DataDog/docker-dd-agent#environment-variables
- ##
- collectEvents: false
+ #
+ # nonLocalTraffic: false
+
+ ## @param collectEvents - boolean - optional - default: false
+ ## Enable this to start event collection from the Kubernetes API
+ ## ref: https://docs.datadoghq.com/agent/kubernetes/event_collection/
+ #
+ # collectEvents: false
+
+ ## @param leaderElection - boolean - optional - default: false
+ ## Enables leader election mechanism for event collection.
+ #
+ # leaderElection: false
+
+ ## @param leaderLeaseDuration - integer - optional - default: 60
+ ## Set the lease time for leader election, in seconds.
+ #
+ # leaderLeaseDuration: 60
- ## Enables log collection
+ ## @param logsEnabled - boolean - optional - default: false
+ ## Enable this to activate Datadog Agent log collection.
## ref: https://docs.datadoghq.com/agent/basic_agent_usage/kubernetes/#log-collection-setup
- ##
+ #
# logsEnabled: false
+
+ ## @param logsConfigContainerCollectAll - boolean - optional - default: false
+ ## Enable this to allow log collection for all containers.
+ ## ref: https://docs.datadoghq.com/agent/basic_agent_usage/kubernetes/#log-collection-setup
+ #
# logsConfigContainerCollectAll: false
- ## Un-comment this to enable APM and tracing, on port 8126
+ ## @param apmEnabled - boolean - optional - default: false
+ ## Enable this to activate APM and tracing, on port 8126
## ref: https://github.com/DataDog/docker-dd-agent#tracing-from-the-host
- ##
- # apmEnabled: true
+ #
+ # apmEnabled: false
- ## Un-comment this to enable live process monitoring
+ ## @param processAgentEnabled - boolean - optional - default: false
+ ## Enable this to activate live process monitoring.
+ ## Note: /etc/passwd is automatically mounted to allow username resolution.
## ref: https://docs.datadoghq.com/graphing/infrastructure/process/#kubernetes-daemonset
- ##
- # processAgentEnabled: true
+ #
+ # processAgentEnabled: false
+ ## @param env - list of objects - optional
## The dd-agent supports many environment variables
## ref: https://github.com/DataDog/datadog-agent/tree/master/Dockerfiles/agent#environment-variables
- ##
+ #
# env:
- # - name:
- # value:
+ # - name:
+ # value:
- ## The dd-agent supports detailed process and container monitoring and
- ## requires control over the volume and volumeMounts for the daemonset
- ## or deployment.
- ## ref: https://docs.datadoghq.com/guides/process/
- ##
+ ## @param volumes - list of objects - optional
+ ## Specify additional volumes to mount in the dd-agent container
+ #
# volumes:
- # - hostPath:
- # path: /etc/passwd
- # name: passwd
+ # - hostPath:
+ # path:
+ # name:
+
+ ## @param volumeMounts - list of objects - optional
+ ## Specify additional volume mounts in the dd-agent container
+ #
# volumeMounts:
- # - name: passwd
- # mountPath: /etc/passwd
+ # - name:
+ # mountPath:
# readOnly: true
- ## Enable leader election mechanism for event collection
- ##
- # leaderElection: false
-
- ## Set the lease time for leader election
- ##
- # leaderLeaseDuration: 600
-
+ ## @param confd - list of objects - optional
## Provide additional check configurations (static and Autodiscovery)
- ## Each key will become a file in /conf.d
+ ## Each key becomes a file in /conf.d
## ref: https://github.com/DataDog/datadog-agent/tree/master/Dockerfiles/agent#optional-volumes
## ref: https://docs.datadoghq.com/agent/autodiscovery/
- ##
+ #
# confd:
# redisdb.yaml: |-
# init_config:
@@ -247,57 +230,348 @@ datadog:
# instances:
# - kube_state_url: http://%%host%%:8080/metrics
+ ## @param checksd - list of key:value strings - optional
## Provide additional custom checks as python code
- ## Each key will become a file in /checks.d
+ ## Each key becomes a file in /checks.d
## ref: https://github.com/DataDog/datadog-agent/tree/master/Dockerfiles/agent#optional-volumes
- ##
+ #
# checksd:
# service.py: |-
+ ## @param criSocketPath - string - optional
## Path to the container runtime socket (if different from Docker)
## This is supported starting from agent 6.6.0
+ #
# criSocketPath: /var/run/containerd/containerd.sock
- ## Provide a mapping of Kubernetes Labels to Datadog Tags
- # podLabelsAsTags:
- # app: kube_app
- # release: helm_release
-
- ## Provide a mapping of Kubernetes Annotations to Datadog Tags
- # podAnnotationsAsTags:
- # iam.amazonaws.com/role: kube_iamrole
-
+ ## @param livenessProbe - object - optional
## Override the agent's liveness probe logic from the default:
## In case of issues with the probe, you can disable it with the
## following values, to allow easier investigating:
+ #
# livenessProbe:
# exec:
# command: ["/bin/true"]
+ ## @param resources - object - required
## datadog-agent resource requests and limits
## Make sure to keep requests and limits equal to keep the pods in the Guaranteed QoS class
## Ref: http://kubernetes.io/docs/user-guide/compute-resources/
- ##
- resources:
- requests:
- cpu: 200m
- memory: 256Mi
- limits:
- cpu: 200m
- memory: 256Mi
+ #
+ resources: {}
+# requests:
+# cpu: 200m
+# memory: 256Mi
+# limits:
+# cpu: 200m
+# memory: 256Mi
+
+## @param clusterAgent - object - required
+## This is the Datadog Cluster Agent implementation that handles cluster-wide
+## metrics more cleanly, separates concerns for better RBAC, and implements
+## the external metrics API so you can autoscale HPAs based on Datadog metrics
+## ref: https://docs.datadoghq.com/agent/kubernetes/cluster/
+#
+clusterAgent:
+
+ ## @param enabled - boolean - required
+ ## Set this to true to enable Datadog Cluster Agent
+ #
+ enabled: false
+
+ containerName: cluster-agent
+ image:
+ repository: datadog/cluster-agent
+ tag: 1.2.0
+ pullPolicy: IfNotPresent
+
+ ## @param token - string - required
+ ## This needs to be at least 32 characters, a-zA-Z
+ ## It is a preshared key between the node agents and the cluster agent
+ ## ref:
+ #
+ token: ""
+
+ replicas: 1
+
+ ## @param metricsProvider - object - required
+ ## Enable the metricsProvider to be able to scale based on metrics in Datadog
+ #
+ metricsProvider:
+ enabled: false
+
+ ## @param clusterChecks - object - required
+ ## Enable the Cluster Checks feature on both the cluster-agents and the daemonset
+ ## ref: https://docs.datadoghq.com/agent/autodiscovery/clusterchecks/
+ ## Autodiscovery via Kube Service annotations is automatically enabled
+ #
+ clusterChecks:
+ enabled: false
+
+ ## @param confd - list of objects - optional
+ ## Provide additional cluster check configurations
+ ## Each key becomes a file in /conf.d
+ ## ref: https://docs.datadoghq.com/agent/autodiscovery/
+ #
+ # confd:
+ # mysql.yaml: |-
+ # cluster_check: true
+ # instances:
+ # - server: ''
+ # port: 3306
+ # user: datadog
+ # pass: ''
+
+ ## @param resources - object - required
+ ## Datadog cluster-agent resource requests and limits.
+ #
+ resources: {}
+# requests:
+# cpu: 200m
+# memory: 256Mi
+# limits:
+# cpu: 200m
+# memory: 256Mi
+
+ ## @param livenessProbe - object - optional
+ ## Override the agent's liveness probe logic from the default:
+ ## In case of issues with the probe, you can disable it with the
+ ## following values, to allow easier investigation:
+ #
+ # livenessProbe:
+ # exec:
+ # command: ["/bin/true"]
+
+ ## @param readinessProbe - object - optional
+ ## Override the cluster-agent's readiness probe logic from the default:
+ #
+ # readinessProbe:
rbac:
+
+ ## @param create - boolean - required
## If true, create & use RBAC resources
+ #
create: true
+ ## @param serviceAccountName - string - required
## Ignored if rbac.create is true
+ #
serviceAccountName: default
tolerations: []
+kubeStateMetrics:
+
+ ## @param enabled - boolean - required
+ ## If true, deploys the kube-state-metrics deployment.
+ ## ref: https://github.com/kubernetes/charts/tree/master/stable/kube-state-metrics
+ #
+ enabled: true
+
kube-state-metrics:
rbac:
+
+ ## @param create - boolean - required
+ ## If true, create & use RBAC resources
+ #
create: true
+ ## @param serviceAccountName - string - required
## Ignored if rbac.create is true
+ #
serviceAccountName: default
+
+daemonset:
+
+ ## @param enabled - boolean - required
+ ## You should keep the Datadog DaemonSet enabled!
+ ## The exceptional case is when you need to run a single Datadog pod
+ ## per namespace but do not want to re-create a DaemonSet for every
+ ## non-default namespace install.
+ ## Note: StatsD and DogStatsD work over UDP, so you may not get
+ ## guaranteed delivery of the metrics in a Datadog-per-namespace setup!
+ #
+ enabled: true
+
+ ## @param useHostNetwork - boolean - optional
+ ## Bind ports on the hostNetwork. Useful for CNI networking where hostPort might
+ ## not be supported. The ports need to be available on all hosts. It can be
+ ## used for custom metrics instead of a service endpoint.
+ ##
+ ## WARNING: Make sure that hosts using this are properly firewalled otherwise
+ ## metrics and traces are accepted from any host able to connect to this host.
+ #
+ # useHostNetwork: true
+
+ ## @param useHostPort - boolean - optional
+ ## Sets the hostPort to the same value as the container port. Needed to
+ ## receive traces in a standard APM setup. Can also be used for sending custom metrics.
+ ## The ports need to be available on all hosts.
+ ##
+ ## WARNING: Make sure that hosts using this are properly firewalled otherwise
+ ## metrics and traces are accepted from any host able to connect to this host.
+ #
+ # useHostPort: true
+
+ ## @param useHostPID - boolean - optional
+ ## Run the agent in the host's PID namespace. This is required for Dogstatsd origin
+ ## detection to work. See https://docs.datadoghq.com/developers/dogstatsd/unix_socket/
+ #
+ # useHostPID: true
+
+ ## @param podAnnotations - list of key:value strings - optional
+ ## Annotations to add to the DaemonSet's Pods
+ #
+ # podAnnotations:
+ # : '[{"key": "", "value": ""}]'
+
+ ## @param tolerations - array - optional
+ ## Allow the DaemonSet to schedule on tainted nodes (requires Kubernetes >= 1.6)
+ #
+ # tolerations: []
+
+ ## @param nodeSelector - object - optional
+ ## Allow the DaemonSet to schedule on selected nodes
+ ## Ref: https://kubernetes.io/docs/user-guide/node-selection/
+ #
+ # nodeSelector: {}
+
+ ## @param affinity - object - optional
+ ## Allow the DaemonSet to schedule using affinity rules
+ ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
+ #
+ # affinity: {}
+
+ ## @param updateStrategy - string - optional
+ ## Allow the DaemonSet to perform a rolling update on helm upgrade
+ ## ref: https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/
+ #
+ # updateStrategy: RollingUpdate
+
+ ## @param priorityClassName - string - optional
+ ## Sets PriorityClassName if defined.
+ #
+ # priorityClassName:
+
+ ## @param podLabels - object - optional
+ ## Sets podLabels if defined.
+ #
+ # podLabels: {}
+
+deployment:
+ ## @param enabled - boolean - required
+ ## In addition to the DaemonSet, deploy Datadog Agent pods and a related service for
+ ## applications that want to send custom metrics. Provides a DogStatsD service.
+ #
+ enabled: false
+
+ ## @param replicas - integer - required
+ ## If you want to use datadog.collectEvents, keep deployment.replicas set to 1.
+ #
+ replicas: 1
+
+ ## @param affinity - object - required
+ ## Affinity for pod assignment
+ ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
+ #
+ affinity: {}
+
+ ## @param tolerations - array - required
+ ## Tolerations for pod assignment
+ ## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
+ #
+ tolerations: []
+
+ ## @param dogstatsdNodePort - integer - optional
+ ## If you're using a NodePort-type service and need a fixed port, set this parameter.
+ #
+ # dogstatsdNodePort: 8125
+
+ ## @param traceNodePort - integer - optional
+ ## If you're using a NodePort-type service and need a fixed port, set this parameter.
+ #
+ # traceNodePort: 8126
+
+ ## @param service - object - required
+ ## Datadog Agent service configuration (type and annotations).
+ #
+ service:
+ type: ClusterIP
+ annotations: {}
+
+ ## @param priorityClassName - string - optional
+ ## Sets PriorityClassName if defined.
+ #
+ # priorityClassName:
+
+clusterchecksDeployment:
+
+ ## @param enabled - boolean - required
+ ## If true, deploys agents dedicated to running the Cluster Checks, instead of running them in the DaemonSet's agents.
+ ## ref: https://docs.datadoghq.com/agent/autodiscovery/clusterchecks/
+ #
+ enabled: false
+
+ rbac:
+ ## @param dedicated - boolean - required
+ ## If true, use a dedicated RBAC resource for the cluster checks agent(s)
+ #
+ dedicated: false
+ ## @param serviceAccountName - string - required
+ ## Ignored if rbac.dedicated is true
+ #
+ serviceAccountName: default
+
+ ## @param replicas - integer - required
+ ## If you want to deploy the clusterchecks agent in HA, keep clusterchecksDeployment.replicas
+ ## set to at least 2, and increase it according to the number of Cluster Checks.
+ #
+ replicas: 2
+
+ ## @param resources - object - required
+ ## Datadog clusterchecks-agent resource requests and limits.
+ #
+ resources: {}
+# requests:
+# cpu: 200m
+# memory: 500Mi
+# limits:
+# cpu: 200m
+# memory: 500Mi
+
+ ## @param affinity - object - optional
+ ## Allow the ClusterChecks Deployment to schedule using affinity rules.
+ ## By default, ClusterChecks Deployment Pods are forced to run on different Nodes.
+ ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
+ #
+ # affinity:
+
+ ## @param nodeSelector - object - optional
+ ## Allow the ClusterChecks Deployment to schedule on selected nodes
+ ## Ref: https://kubernetes.io/docs/user-guide/node-selection/
+ #
+ # nodeSelector: {}
+
+ ## @param tolerations - array - required
+ ## Tolerations for pod assignment
+ ## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
+ #
+ tolerations: []
+
+ ## @param livenessProbe - object - optional
+ ## Override the agent's liveness probe logic from the default:
+ ## In case of issues with the probe, you can disable it with the
+ ## following values, to allow easier investigation:
+ #
+ # livenessProbe:
+ # exec:
+ # command: ["/bin/true"]
+
+ ## @param env - list of objects - optional
+ ## The dd-agent supports many environment variables
+ ## ref: https://github.com/DataDog/datadog-agent/tree/master/Dockerfiles/agent#environment-variables
+ #
+ # env:
+ # - name:
+ # value:
diff --git a/stable/dex/Chart.yaml b/stable/dex/Chart.yaml
index 1ca2f7b92fbd..aff839b76df2 100644
--- a/stable/dex/Chart.yaml
+++ b/stable/dex/Chart.yaml
@@ -1,6 +1,7 @@
+apiVersion: v1
name: dex
-version: 0.8.0
-appVersion: 2.14.0
+version: 1.3.0
+appVersion: 2.16.0
description: CoreOS Dex
keywords:
- dex
diff --git a/stable/dex/OWNERS b/stable/dex/OWNERS
new file mode 100644
index 000000000000..0ecc0f4833ae
--- /dev/null
+++ b/stable/dex/OWNERS
@@ -0,0 +1,4 @@
+approvers:
+- desaintmartin
+reviewers:
+- desaintmartin
diff --git a/stable/dex/templates/deployment.yaml b/stable/dex/templates/deployment.yaml
index a088188fe4b6..33b7e458b776 100644
--- a/stable/dex/templates/deployment.yaml
+++ b/stable/dex/templates/deployment.yaml
@@ -31,10 +31,16 @@ spec:
labels:
app: {{ template "dex.name" . }}
release: "{{ .Release.Name }}"
+ annotations:
+ checksum/config: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }}
spec:
serviceAccountName: {{ template "dex.serviceAccountName" . }}
nodeSelector:
{{ toYaml .Values.nodeSelector | indent 10 }}
+{{- if .Values.affinity }}
+ affinity:
+{{ toYaml .Values.affinity | indent 8 }}
+{{- end }}
containers:
- name: main
image: "{{ .Values.image }}:{{ .Values.imageTag }}"
diff --git a/stable/dex/templates/ingress.yaml b/stable/dex/templates/ingress.yaml
index 904079359f53..4ac724d126c2 100644
--- a/stable/dex/templates/ingress.yaml
+++ b/stable/dex/templates/ingress.yaml
@@ -34,6 +34,6 @@ spec:
- path: {{ $ingressPath }}
backend:
serviceName: {{ $fullName }}
- servicePort: 8080
+ servicePort: {{ $servicePort }}
{{- end }}
{{- end }}
diff --git a/stable/dex/templates/job-grpc-certs.yaml b/stable/dex/templates/job-grpc-certs.yaml
index 95e23a71f854..4ff9b04b4833 100644
--- a/stable/dex/templates/job-grpc-certs.yaml
+++ b/stable/dex/templates/job-grpc-certs.yaml
@@ -31,12 +31,21 @@ spec:
release: "{{ .Release.Name }}"
component: "job"
spec:
+ {{- if .Values.certs.securityContext.enabled }}
+ securityContext:
+ runAsUser: {{ .Values.certs.securityContext.runAsUser }}
+ fsGroup: {{ .Values.certs.securityContext.fsGroup }}
+ {{- end }}
serviceAccountName: {{ template "dex.serviceAccountName" . }}
restartPolicy: OnFailure
containers:
- name: main
image: "{{ .Values.certs.image }}:{{ .Values.certs.imageTag }}"
imagePullPolicy: {{ .Values.certs.imagePullPolicy }}
+ env:
+ - name: HOME
+ value: /tmp
+ workingDir: /tmp
command:
- /bin/bash
- -exc
diff --git a/stable/dex/templates/job-web-certs.yaml b/stable/dex/templates/job-web-certs.yaml
index c2e56afc366d..1e62cff18c54 100644
--- a/stable/dex/templates/job-web-certs.yaml
+++ b/stable/dex/templates/job-web-certs.yaml
@@ -28,12 +28,21 @@ spec:
release: "{{ .Release.Name }}"
component: "job"
spec:
+ {{- if .Values.certs.securityContext.enabled }}
+ securityContext:
+ runAsUser: {{ .Values.certs.securityContext.runAsUser }}
+ fsGroup: {{ .Values.certs.securityContext.fsGroup }}
+ {{- end }}
serviceAccountName: {{ template "dex.serviceAccountName" . }}
restartPolicy: OnFailure
containers:
- name: main
image: "{{ .Values.certs.image }}:{{ .Values.certs.imageTag }}"
imagePullPolicy: {{ .Values.certs.imagePullPolicy }}
+ env:
+ - name: HOME
+ value: /tmp
+ workingDir: /tmp
command:
- /bin/bash
- -exc
diff --git a/stable/dex/templates/poddisruptionbudget.yaml b/stable/dex/templates/poddisruptionbudget.yaml
new file mode 100644
index 000000000000..66e8e6fa67d5
--- /dev/null
+++ b/stable/dex/templates/poddisruptionbudget.yaml
@@ -0,0 +1,17 @@
+{{- if .Values.podDisruptionBudget -}}
+apiVersion: policy/v1beta1
+kind: PodDisruptionBudget
+metadata:
+ name: {{ template "dex.fullname" . }}
+ labels:
+ app: {{ template "dex.name" . }}
+ chart: {{ template "dex.chart" . }}
+ heritage: "{{ .Release.Service }}"
+ release: "{{ .Release.Name }}"
+spec:
+ selector:
+ matchLabels:
+ app: {{ template "dex.name" . }}
+ release: "{{ .Release.Name }}"
+{{ toYaml .Values.podDisruptionBudget | indent 2 }}
+{{- end -}}
diff --git a/stable/dex/values.yaml b/stable/dex/values.yaml
index 01b25b6f04b0..55f0add58bd5 100644
--- a/stable/dex/values.yaml
+++ b/stable/dex/values.yaml
@@ -4,7 +4,7 @@
# name: value
image: quay.io/dexidp/dex
-imageTag: "v2.14.0"
+imageTag: "v2.16.0"
imagePullPolicy: "IfNotPresent"
inMiniKube: false
@@ -32,6 +32,7 @@ ports:
service:
type: ClusterIP
+ port: 8080
annotations: {}
ingress:
@@ -51,6 +52,10 @@ extraVolumes: []
extraVolumeMounts: []
certs:
+ securityContext:
+ enabled: true
+ runAsUser: 65534
+ fsGroup: 65534
image: gcr.io/google_containers/kubernetes-dashboard-init-amd64
imageTag: "v1.0.0"
imagePullPolicy: "IfNotPresent"
@@ -89,6 +94,20 @@ serviceAccount:
# If not set and create is true, a name is generated using the fullname template
name:
+affinity: {}
+ # podAntiAffinity:
+ # preferredDuringSchedulingIgnoredDuringExecution:
+ # - weight: 5
+ # podAffinityTerm:
+ # topologyKey: "kubernetes.io/hostname"
+ # labelSelector:
+ # matchLabels:
+ # app: {{ template "dex.name" . }}
+ # release: "{{ .Release.Name }}"
+
+podDisruptionBudget: {}
+ # maxUnavailable: 1
+
config:
issuer: http://dex.io:8080
storage:
diff --git a/stable/distributed-jmeter/.helmignore b/stable/distributed-jmeter/.helmignore
new file mode 100644
index 000000000000..7c04072e1355
--- /dev/null
+++ b/stable/distributed-jmeter/.helmignore
@@ -0,0 +1,22 @@
+# Patterns to ignore when building packages.
+# This supports shell glob matching, relative path matching, and
+# negation (prefixed with !). Only one pattern per line.
+.DS_Store
+# Common VCS dirs
+.git/
+.gitignore
+.bzr/
+.bzrignore
+.hg/
+.hgignore
+.svn/
+# Common backup files
+*.swp
+*.bak
+*.tmp
+*~
+# Various IDEs
+.project
+.idea/
+*.tmproj
+OWNERS
diff --git a/stable/distributed-jmeter/Chart.yaml b/stable/distributed-jmeter/Chart.yaml
new file mode 100644
index 000000000000..b30b885a9608
--- /dev/null
+++ b/stable/distributed-jmeter/Chart.yaml
@@ -0,0 +1,12 @@
+apiVersion: v1
+appVersion: "3.3"
+description: A Distributed JMeter Helm chart
+name: distributed-jmeter
+version: 1.0.1
+home: http://jmeter.apache.org/
+icon: http://jmeter.apache.org/images/logo.svg
+sources:
+ - https://github.com/pedrocesar-ti/distributed-jmeter-docker
+maintainers:
+ - name: pedrocesar-ti
+ email: pedrocesar.ti@gmail.com
diff --git a/stable/distributed-jmeter/OWNERS b/stable/distributed-jmeter/OWNERS
new file mode 100644
index 000000000000..6acab1270a0d
--- /dev/null
+++ b/stable/distributed-jmeter/OWNERS
@@ -0,0 +1,4 @@
+approvers:
+- pedrocesar-ti
+reviewers:
+- pedrocesar-ti
diff --git a/stable/distributed-jmeter/README.md b/stable/distributed-jmeter/README.md
new file mode 100644
index 000000000000..7ed8c3578087
--- /dev/null
+++ b/stable/distributed-jmeter/README.md
@@ -0,0 +1,27 @@
+# Distributed JMeter
+
+Based on the work done [here](https://github.com/pedrocesar-ti/distributed-jmeter-docker).
+
+Apache JMeter™ is an open source tool for creating and running load test plans. This chart helps you run different versions of JMeter in a distributed fashion (master -> server architecture).
+
+## Chart Details:
+This chart will do the following:
+- Deploy a JMeter master (1 by default) that is responsible for storing the test plans and the test results after they run on the servers.
+- Deploy JMeter servers (3 replicas by default) that are responsible for running the actual tests and sending the results back to the master.
+
+
+## Installing the Chart:
+To install the chart with the release name distributed-jmeter:
+```
+$ helm install --name distributed-jmeter stable/distributed-jmeter
+```
+
+## Deploying different versions of JMeter
+The default [image](https://hub.docker.com/r/pedrocesarti/jmeter-docker/) lets you run any available JMeter version.
+
+To change the JMeter version, set the image tags:
+```
+$ helm install --name distributed-jmeter --set master.image.tag=4.0 --set server.image.tag=4.0 stable/distributed-jmeter
+```
+
+Enjoy! :)
diff --git a/stable/distributed-jmeter/templates/NOTES.txt b/stable/distributed-jmeter/templates/NOTES.txt
new file mode 100644
index 000000000000..7be1090416fe
--- /dev/null
+++ b/stable/distributed-jmeter/templates/NOTES.txt
@@ -0,0 +1,16 @@
+JMeter is now starting.
+
+
+To get a shell session on the master, run:
+
+$ export MASTER_NAME=$(kubectl get pods -l app.kubernetes.io/component=master -o jsonpath='{.items[*].metadata.name}')
+$ kubectl exec -it $MASTER_NAME -- /bin/bash
+
+
+To copy your test plans to the master pod:
+$ kubectl cp sample.jmx $MASTER_NAME:/jmeter
+
+
+To run your test on all servers, you first need a comma-separated list of all server IPs; then you can start the test:
+$ export SERVER_IPS=$(kubectl get pods -l app.kubernetes.io/component=server -o jsonpath='{.items[*].status.podIP}' | tr ' ' ',')
+$ kubectl exec -it $MASTER_NAME -- jmeter -n -t /jmeter/sample.jmx -R $SERVER_IPS
diff --git a/stable/distributed-jmeter/templates/_helpers.tpl b/stable/distributed-jmeter/templates/_helpers.tpl
new file mode 100644
index 000000000000..d21e372237ef
--- /dev/null
+++ b/stable/distributed-jmeter/templates/_helpers.tpl
@@ -0,0 +1,32 @@
+{{/* vim: set filetype=mustache: */}}
+{{/*
+Expand the name of the chart.
+*/}}
+{{- define "distributed-jmeter.name" -}}
+{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+
+{{/*
+Create a default fully qualified app name.
+We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
+If release name contains chart name it will be used as a full name.
+*/}}
+{{- define "distributed-jmeter.fullname" -}}
+{{- if .Values.fullnameOverride -}}
+{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
+{{- else -}}
+{{- $name := default .Chart.Name .Values.nameOverride -}}
+{{- if contains $name .Release.Name -}}
+{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
+{{- else -}}
+{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Create chart name and version as used by the chart label.
+*/}}
+{{- define "distributed-jmeter.chart" -}}
+{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
diff --git a/stable/distributed-jmeter/templates/jmeter-master-deployment.yaml b/stable/distributed-jmeter/templates/jmeter-master-deployment.yaml
new file mode 100644
index 000000000000..06e490de0bf3
--- /dev/null
+++ b/stable/distributed-jmeter/templates/jmeter-master-deployment.yaml
@@ -0,0 +1,34 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: {{ template "distributed-jmeter.fullname" . }}-master
+ labels:
+ app.kubernetes.io/name: {{ include "distributed-jmeter.name" . }}
+ helm.sh/chart: {{ include "distributed-jmeter.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ app.kubernetes.io/component: master
+spec:
+ replicas: {{ .Values.master.replicaCount }}
+ strategy:
+ type: RollingUpdate
+ selector:
+ matchLabels:
+ app.kubernetes.io/name: {{ include "distributed-jmeter.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/component: master
+ template:
+ metadata:
+ labels:
+ app.kubernetes.io/name: {{ include "distributed-jmeter.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/component: master
+ spec:
+ containers:
+ - name: {{ .Chart.Name }}
+ image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
+ imagePullPolicy: {{ .Values.image.pullPolicy }}
+ args:
+ - master
+ ports:
+ - containerPort: 60000
diff --git a/stable/distributed-jmeter/templates/jmeter-server-deployment.yaml b/stable/distributed-jmeter/templates/jmeter-server-deployment.yaml
new file mode 100644
index 000000000000..3c1f19b84f9a
--- /dev/null
+++ b/stable/distributed-jmeter/templates/jmeter-server-deployment.yaml
@@ -0,0 +1,34 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: {{ template "distributed-jmeter.fullname" . }}-server
+ labels:
+ app.kubernetes.io/name: {{ include "distributed-jmeter.name" . }}
+ helm.sh/chart: {{ include "distributed-jmeter.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ app.kubernetes.io/component: server
+spec:
+ replicas: {{ .Values.server.replicaCount }}
+ strategy:
+ type: RollingUpdate
+ selector:
+ matchLabels:
+ app.kubernetes.io/name: {{ include "distributed-jmeter.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/component: server
+ template:
+ metadata:
+ labels:
+ app.kubernetes.io/name: {{ include "distributed-jmeter.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/component: server
+ spec:
+ containers:
+ - name: {{ .Chart.Name }}
+ image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
+ imagePullPolicy: {{ .Values.image.pullPolicy }}
+ args: ["server"]
+ ports:
+ - containerPort: 50000
+ - containerPort: 1099
diff --git a/stable/distributed-jmeter/templates/jmeter-server-service.yaml b/stable/distributed-jmeter/templates/jmeter-server-service.yaml
new file mode 100644
index 000000000000..ace1d0a686da
--- /dev/null
+++ b/stable/distributed-jmeter/templates/jmeter-server-service.yaml
@@ -0,0 +1,23 @@
+apiVersion: v1
+kind: Service
+metadata:
+ name: {{ template "distributed-jmeter.fullname" . }}-server
+ labels:
+ app.kubernetes.io/name: {{ include "distributed-jmeter.name" . }}
+ helm.sh/chart: {{ include "distributed-jmeter.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ app.kubernetes.io/component: server
+spec:
+ clusterIP: None
+ ports:
+ - port: 50000
+ protocol: TCP
+ name: tcp-50000
+ - port: 1099
+ protocol: TCP
+ name: tcp-1099
+ selector:
+ app.kubernetes.io/name: {{ include "distributed-jmeter.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/component: server
diff --git a/stable/distributed-jmeter/values.yaml b/stable/distributed-jmeter/values.yaml
new file mode 100644
index 000000000000..76580b339751
--- /dev/null
+++ b/stable/distributed-jmeter/values.yaml
@@ -0,0 +1,24 @@
+# Default values for distributed-jmeter.
+# This is a YAML-formatted file.
+# Declare variables to be passed into your templates.
+
+master:
+ ## The number of pods in the master deployment
+ replicaCount: 1
+
+server:
+ ## The number of pods in the server deployment
+ replicaCount: 3
+
+image:
+ ## Specify an imagePullPolicy
+ ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
+ pullPolicy: IfNotPresent
+
+ ## The repository and image
+ ## ref: https://hub.docker.com/r/pedrocesarti/jmeter-docker/
+ repository: "pedrocesarti/jmeter-docker"
+
+ ## The tag for the image
+ ## ref: https://hub.docker.com/r/pedrocesarti/jmeter-docker/tags/
+ tag: 3.3
diff --git a/stable/dmarc2logstash/Chart.yaml b/stable/dmarc2logstash/Chart.yaml
index fbd1770472c8..f807fc5ae96d 100644
--- a/stable/dmarc2logstash/Chart.yaml
+++ b/stable/dmarc2logstash/Chart.yaml
@@ -2,7 +2,7 @@ apiVersion: v1
appVersion: "1.0.3"
description: Provides a POP3-polled DMARC XML report injector into Elasticsearch via Logstash and Filebeat
name: dmarc2logstash
-version: 1.1.0
+version: 1.2.0
home: https://github.com/jertel/dmarc2logstash
sources:
- https://github.com/jertel/dmarc2logstash
diff --git a/stable/dmarc2logstash/README.md b/stable/dmarc2logstash/README.md
index 6e4bea61e403..b320719988fc 100644
--- a/stable/dmarc2logstash/README.md
+++ b/stable/dmarc2logstash/README.md
@@ -31,7 +31,7 @@ dmarc2logstash.image.tag | dmarc2logstash image tag, typically the vers
dmarc2logstash.image.pullPolicy | dmarc2logstash Kubernetes image pull policy | IfNotPresent
delete_messages | Set to 1 to delete messages or 0 to preserve messages (useful for debugging) | 1
filebeat.image.repository | Elastic filebeat Docker image repository | docker.elastic.co/beats/filebeat
-filebeat.image.tag | Elastic filebeat tag, typically the version, of the Docker image | 6.2.4
+filebeat.image.tag | Elastic filebeat tag, typically the version, of the Docker image | 6.6.0
filebeat.image.pullPolicy | Elastic filebeat Kubernetes image pull policy | IfNotPresent
filebeat.logstash.host | Logstash service host; ex: logstash (this value must be provided) | ""
filebeat.logstash.port | Logstash service port | 5000
diff --git a/stable/dmarc2logstash/values.yaml b/stable/dmarc2logstash/values.yaml
index fcbc30faa064..c1ff7ec2e08f 100644
--- a/stable/dmarc2logstash/values.yaml
+++ b/stable/dmarc2logstash/values.yaml
@@ -12,7 +12,7 @@ dmarc2logstash:
filebeat:
image:
repository: docker.elastic.co/beats/filebeat
- tag: 6.2.4
+ tag: 6.6.0
pullPolicy: IfNotPresent
logstash:
host: ""
diff --git a/stable/docker-registry/Chart.yaml b/stable/docker-registry/Chart.yaml
index aca5ffa99428..3a1fd0bc4a7d 100644
--- a/stable/docker-registry/Chart.yaml
+++ b/stable/docker-registry/Chart.yaml
@@ -1,8 +1,8 @@
apiVersion: v1
description: A Helm chart for Docker Registry
name: docker-registry
-version: 1.7.0
-appVersion: 2.6.2
+version: 1.8.0
+appVersion: 2.7.1
home: https://hub.docker.com/_/registry/
icon: https://hub.docker.com/public/images/logos/mini-logo.svg
sources:
diff --git a/stable/docker-registry/README.md b/stable/docker-registry/README.md
index 2b4de91c42c7..7b89f412f712 100644
--- a/stable/docker-registry/README.md
+++ b/stable/docker-registry/README.md
@@ -29,7 +29,7 @@ their default values.
|:----------------------------|:-------------------------------------------------------------------------------------------|:----------------|
| `image.pullPolicy` | Container pull policy | `IfNotPresent` |
| `image.repository` | Container image to use | `registry` |
-| `image.tag` | Container image tag to deploy | `2.6.2` |
+| `image.tag` | Container image tag to deploy | `2.7.1` |
| `imagePullSecrets` | Specify image pull secrets | `nil` (does not add image pull secrets to deployed pods) |
| `persistence.accessMode` | Access mode to use for PVC | `ReadWriteOnce` |
| `persistence.enabled` | Whether to use a PVC for the Docker storage | `false` |
@@ -44,6 +44,7 @@ their default values.
| `replicaCount` | k8s replicas | `1` |
| `updateStrategy` | update strategy for deployment | `{}` |
| `podAnnotations` | Annotations for pod | `{}` |
+| `podLabels` | Labels for pod | `{}` |
| `resources.limits.cpu` | Container requested CPU | `nil` |
| `resources.limits.memory` | Container requested memory | `nil` |
| `priorityClassName ` | priorityClassName | `""` |
diff --git a/stable/docker-registry/templates/deployment.yaml b/stable/docker-registry/templates/deployment.yaml
index 3206c5e3dfa1..f64bbb4c3770 100644
--- a/stable/docker-registry/templates/deployment.yaml
+++ b/stable/docker-registry/templates/deployment.yaml
@@ -19,6 +19,9 @@ spec:
labels:
app: {{ template "docker-registry.name" . }}
release: {{ .Release.Name }}
+ {{- if .Values.podLabels }}
+{{ toYaml .Values.podLabels | indent 8 }}
+ {{- end }}
annotations:
checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
{{- if $.Values.podAnnotations }}
diff --git a/stable/docker-registry/values.yaml b/stable/docker-registry/values.yaml
index f6f04f8a7b1e..b8a24f108340 100644
--- a/stable/docker-registry/values.yaml
+++ b/stable/docker-registry/values.yaml
@@ -10,10 +10,11 @@ updateStrategy:
# maxUnavailable: 0
podAnnotations: {}
+podLabels: {}
image:
repository: registry
- tag: 2.6.2
+ tag: 2.7.1
pullPolicy: IfNotPresent
# imagePullSecrets:
# - name: docker
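
The new `podLabels` value introduced above can be exercised with a custom values file; a minimal sketch (the file name and the `team`/`env` labels are illustrative, not part of the chart):

```yaml
# my-values.yaml -- illustrative override for stable/docker-registry
podLabels:
  team: infra
  env: staging
```

Something like `helm upgrade --install registry stable/docker-registry -f my-values.yaml` would then merge these labels into the pod template alongside the chart-managed `app` and `release` labels.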
diff --git a/stable/dokuwiki/Chart.yaml b/stable/dokuwiki/Chart.yaml
index 2b6e67459042..5a80698da7a4 100644
--- a/stable/dokuwiki/Chart.yaml
+++ b/stable/dokuwiki/Chart.yaml
@@ -1,6 +1,7 @@
+apiVersion: v1
name: dokuwiki
-version: 4.0.1
-appVersion: 0.20180422.201805030840
+version: 4.2.1
+appVersion: 0.20180422.201901061035
description: DokuWiki is a standards-compliant, simple to use wiki optimized for creating
documentation. It is targeted at developer teams, workgroups, and small companies.
All data is stored in plain text files, so no database is required.
diff --git a/stable/dokuwiki/README.md b/stable/dokuwiki/README.md
index 5bbba7986282..e4c90eec44c6 100644
--- a/stable/dokuwiki/README.md
+++ b/stable/dokuwiki/README.md
@@ -12,7 +12,7 @@ $ helm install stable/dokuwiki
This chart bootstraps a [DokuWiki](https://github.com/bitnami/bitnami-docker-dokuwiki) deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
-Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters.
+Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters. This chart has been tested to work with NGINX Ingress, cert-manager, fluentd and Prometheus on top of the [BKPR](https://kubeprod.io/).
## Prerequisites
@@ -48,6 +48,7 @@ The following table lists the configurable parameters of the DokuWiki chart and
| Parameter | Description | Default |
|--------------------------------------|------------------------------------------------------------|-----------------------------------------------|
| `global.imageRegistry` | Global Docker image registry | `nil` |
+| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) |
| `image.registry` | DokuWiki image registry | `docker.io` |
| `image.repository` | DokuWiki image name | `bitnami/dokuwiki` |
| `image.tag` | DokuWiki image tag | `{VERSION}` |
diff --git a/stable/dokuwiki/templates/_helpers.tpl b/stable/dokuwiki/templates/_helpers.tpl
index 1ad62fbe8a5d..836064c9a9cc 100644
--- a/stable/dokuwiki/templates/_helpers.tpl
+++ b/stable/dokuwiki/templates/_helpers.tpl
@@ -48,9 +48,57 @@ Also, we can't use a single if because lazy evaluation is not an option
{{/*
Return the proper image name (for the metrics image)
*/}}
-{{- define "metrics.image" -}}
-{{- $registryName := .Values.metrics.image.registry -}}
+{{- define "dokuwiki.metrics.image" -}}
+{{- $registryName := .Values.metrics.image.registry -}}
{{- $repositoryName := .Values.metrics.image.repository -}}
{{- $tag := .Values.metrics.image.tag | toString -}}
-{{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+{{/*
+Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
+but Helm 2.9 and 2.10 don't support it, so we need to implement this if-else logic.
+Also, we can't use a single if because lazy evaluation is not an option
+*/}}
+{{- if .Values.global }}
+ {{- if .Values.global.imageRegistry }}
+ {{- printf "%s/%s:%s" .Values.global.imageRegistry $repositoryName $tag -}}
+ {{- else -}}
+ {{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+ {{- end -}}
+{{- else -}}
+ {{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Return the proper Docker Image Registry Secret Names
+*/}}
+{{- define "dokuwiki.imagePullSecrets" -}}
+{{/*
+Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
+but Helm 2.9 and 2.10 don't support it, so we need to implement this if-else logic.
+Also, we cannot use a single if because lazy evaluation is not an option
+*/}}
+{{- if .Values.global }}
+{{- if .Values.global.imagePullSecrets }}
+imagePullSecrets:
+{{- range .Values.global.imagePullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- else if or .Values.image.pullSecrets .Values.metrics.image.pullSecrets }}
+imagePullSecrets:
+{{- range .Values.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- range .Values.metrics.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- end -}}
+{{- else if or .Values.image.pullSecrets .Values.metrics.image.pullSecrets }}
+imagePullSecrets:
+{{- range .Values.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- range .Values.metrics.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- end -}}
{{- end -}}
diff --git a/stable/dokuwiki/templates/deployment.yaml b/stable/dokuwiki/templates/deployment.yaml
index 20fbe7f34052..e87356ba827c 100644
--- a/stable/dokuwiki/templates/deployment.yaml
+++ b/stable/dokuwiki/templates/deployment.yaml
@@ -40,12 +40,7 @@ spec:
tolerations:
{{ toYaml .Values.tolerations | indent 8 }}
{{- end }}
- {{- if .Values.image.pullSecrets }}
- imagePullSecrets:
- {{- range .Values.image.pullSecrets }}
- - name: {{ . }}
- {{- end}}
- {{- end }}
+{{- include "dokuwiki.imagePullSecrets" . | indent 6 }}
hostAliases:
- ip: "127.0.0.1"
hostnames:
@@ -104,7 +99,7 @@ spec:
mountPath: /bitnami/apache
{{- if .Values.metrics.enabled }}
- name: metrics
- image: {{ template "metrics.image" . }}
+ image: {{ template "dokuwiki.metrics.image" . }}
imagePullPolicy: {{ .Values.metrics.image.pullPolicy | quote }}
command: [ '/bin/apache_exporter', '-scrape_uri', 'http://status.localhost:80/server-status/?auto']
ports:
diff --git a/stable/dokuwiki/values.yaml b/stable/dokuwiki/values.yaml
index c1ce46d56ca1..f21ddb879c66 100644
--- a/stable/dokuwiki/values.yaml
+++ b/stable/dokuwiki/values.yaml
@@ -1,8 +1,11 @@
-## Global Docker image registry
-## Please, note that this will override the image registry for all the images, including dependencies, configured to use the global value
+## Global Docker image parameters
+## Please, note that this will override the image parameters, including dependencies, configured to use the global value
+## Current available global Docker image parameters: imageRegistry and imagePullSecrets
##
# global:
-# imageRegistry:
+# imageRegistry: myRegistryName
+# imagePullSecrets:
+# - myRegistryKeySecretName
## Bitnami DokuWiki image version
## ref: https://hub.docker.com/r/bitnami/dokuwiki/tags/
@@ -10,7 +13,7 @@
image:
registry: docker.io
repository: bitnami/dokuwiki
- tag: 0.20180422.201805030840
+ tag: 0.20180422.201901061035-debian-9-r105
## Specify a imagePullPolicy
## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
@@ -21,7 +24,7 @@ image:
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
- # - myRegistrKeySecretName
+ # - myRegistryKeySecretName
## User of the application
## ref: https://github.com/bitnami/bitnami-docker-dokuwiki#environment-variables
@@ -199,7 +202,7 @@ metrics:
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
- # - myRegistrKeySecretName
+ # - myRegistryKeySecretName
## Metrics exporter pod Annotation and Labels
podAnnotations:
prometheus.io/scrape: "true"
diff --git a/stable/drone/Chart.yaml b/stable/drone/Chart.yaml
index 609332c3752b..c3aeda7d738e 100644
--- a/stable/drone/Chart.yaml
+++ b/stable/drone/Chart.yaml
@@ -1,8 +1,9 @@
+apiVersion: v1
name: drone
home: https://drone.io/
icon: https://drone.io/apple-touch-icon.png
-version: 2.0.0-rc.4
-appVersion: 1.0.0-rc.4
+version: 2.0.0-rc.13
+appVersion: 1.0.0-rc.5
description: Drone is a Continuous Delivery system built on container technology
keywords:
- continuous-delivery
@@ -19,3 +20,5 @@ maintainers:
email: christian.roggia@gmail.com
- name: paulczar
email: username.taken@gmail.com
+- name: zakkg3
+ email: zakkg3@gmail.com
diff --git a/stable/drone/OWNERS b/stable/drone/OWNERS
index ce90a4a7c088..1973d9bbd9e0 100644
--- a/stable/drone/OWNERS
+++ b/stable/drone/OWNERS
@@ -1,4 +1,6 @@
approvers:
- christian-roggia
+- zakkg3
reviewers:
- christian-roggia
+- zakkg3
diff --git a/stable/drone/templates/NOTES.txt b/stable/drone/templates/NOTES.txt
index 97e77196d872..013ff3092240 100644
--- a/stable/drone/templates/NOTES.txt
+++ b/stable/drone/templates/NOTES.txt
@@ -29,7 +29,7 @@ Get the Drone URL by running:
Get the Drone URL by running:
export POD_NAME=$(kubectl get pods -n {{ .Release.Namespace }} -l "component=server,app={{ template "drone.name" . }},release={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
echo http://127.0.0.1:8000/
- kubectl -n {{ .Release.Namespace }} port-forward $POD_NAME 8000:8000
+ kubectl -n {{ .Release.Namespace }} port-forward $POD_NAME 8000:80
{{- end }}
{{- else -}}
##############################################################################
@@ -47,7 +47,7 @@ control provider:
--reuse-values \
--set 'sourceControl.provider=github' \
--set 'sourceControl.github.clientID=github-oauth2-client-id' \
- --set 'souceControl.secret=drone-server-secrets' \
+ --set 'sourceControl.secret=drone-server-secrets' \
stable/drone
Currently supported providers:
diff --git a/stable/drone/templates/deployment-agent.yaml b/stable/drone/templates/deployment-agent.yaml
index 135b694156c3..b7530abbc892 100644
--- a/stable/drone/templates/deployment-agent.yaml
+++ b/stable/drone/templates/deployment-agent.yaml
@@ -36,7 +36,7 @@ spec:
{{- end }}
serviceAccountName: {{ template "drone.serviceAccountName" . }}
containers:
- - name: {{ template "drone.fullname" . }}-agent
+ - name: agent
image: "{{ .Values.images.agent.repository }}:{{ .Values.images.agent.tag }}"
imagePullPolicy: {{ .Values.images.agent.pullPolicy }}
ports:
@@ -72,7 +72,7 @@ spec:
hostPath:
path: /var/run/docker.sock
{{- else }}
- - name: {{ template "drone.fullname" . }}-dind
+ - name: dind
image: "{{ .Values.images.dind.repository }}:{{ .Values.images.dind.tag }}"
imagePullPolicy: {{ .Values.images.dind.pullPolicy }}
{{- if .Values.dind.command }}
diff --git a/stable/drone/templates/deployment-server.yaml b/stable/drone/templates/deployment-server.yaml
index defec929c5e3..75baf9ca7769 100644
--- a/stable/drone/templates/deployment-server.yaml
+++ b/stable/drone/templates/deployment-server.yaml
@@ -40,7 +40,7 @@ spec:
{{- end }}
serviceAccountName: {{ template "drone.serviceAccountName" . }}
containers:
- - name: {{ template "drone.fullname" . }}-server
+ - name: server
image: "{{ .Values.images.server.repository }}:{{ .Values.images.server.tag }}"
imagePullPolicy: {{ .Values.images.server.pullPolicy }}
env:
@@ -52,7 +52,7 @@ spec:
- name: DRONE_KUBERNETES_SERVICE_ACCOUNT
value: {{ template "drone.pipelineServiceAccount" . }}
{{- end }}
- - name: DRONE_ALWAYS_AUTH
+ - name: DRONE_GIT_ALWAYS_AUTH
value: {{ .Values.server.alwaysAuth | quote }}
- name: DRONE_SERVER_HOST
{{- if hasKey .Values.server "host" }}
@@ -60,7 +60,7 @@ spec:
{{- else }}
value: "{{ template "drone.fullname" . }}"
{{- end }}
- - name: DRONE_SERVER_PROTOCOL
+ - name: DRONE_SERVER_PROTO
value: {{ .Values.server.protocol }}
{{- if .Values.server.adminUser }}
- name: DRONE_USER_CREATE
diff --git a/stable/drone/templates/role-pipeline.yaml b/stable/drone/templates/role-pipeline.yaml
index b61cdf94a1e5..615e8ff589f7 100644
--- a/stable/drone/templates/role-pipeline.yaml
+++ b/stable/drone/templates/role-pipeline.yaml
@@ -9,6 +9,16 @@ metadata:
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
rules:
+ - apiGroups:
+ - extensions
+ resources:
+ - deployments
+ verbs:
+ - get
+ - list
+ - watch
+ - patch
+ - update
- apiGroups:
- ""
resources:
@@ -16,16 +26,17 @@ rules:
- configmaps
- secrets
- pods
+ - services
verbs:
- - "create"
- - "delete"
- - "get"
- - "list"
- - "watch"
+ - create
+ - delete
+ - get
+ - list
+ - watch
- apiGroups:
- ""
resources:
- - "pods/log"
+ - pods/log
verbs:
- - "get"
+ - get
{{ end }}
diff --git a/stable/drone/templates/secrets.yaml b/stable/drone/templates/secrets.yaml
index a3a899c173b8..144cb3eb300f 100644
--- a/stable/drone/templates/secrets.yaml
+++ b/stable/drone/templates/secrets.yaml
@@ -14,3 +14,26 @@ data:
{{ else }}
secret: "{{ randAlphaNum 24 | b64enc }}"
{{ end }}
+---
+{{- if not .Values.sourceControl.secret -}}
+apiVersion: v1
+kind: Secret
+metadata:
+ name: {{ template "drone.sourceControlSecret" . }}
+ labels:
+ app: {{ template "drone.name" . }}
+ chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ release: "{{ .Release.Name }}"
+ heritage: "{{ .Release.Service }}"
+type: Opaque
+data:
+ {{ if .Values.sourceControl.provider }}
+ {{ if eq .Values.sourceControl.provider "github" }}
+ {{ .Values.sourceControl.github.clientSecretKey }}: {{ .Values.sourceControl.github.clientSecretValue | b64enc | quote }}
+ {{- else if eq .Values.sourceControl.provider "gitlab" -}}
+ {{ .Values.sourceControl.gitlab.clientSecretKey }}: {{ .Values.sourceControl.gitlab.clientSecretValue | b64enc | quote }}
+ {{- else if eq .Values.sourceControl.provider "bitbucketCloud" -}}
+ {{ .Values.sourceControl.bitbucketCloud.clientSecretKey }}: {{ .Values.sourceControl.bitbucketCloud.clientSecretValue | b64enc | quote }}
+ {{ end }}
+ {{ end }}
+{{- end -}}
diff --git a/stable/drone/values.yaml b/stable/drone/values.yaml
index aafb2e5aa48c..f11bacb5b22e 100644
--- a/stable/drone/values.yaml
+++ b/stable/drone/values.yaml
@@ -4,7 +4,7 @@ images:
##
server:
repository: "docker.io/drone/drone"
- tag: 1.0.0-rc.4
+ tag: 1.0.0-rc.5
pullPolicy: IfNotPresent
## The official drone (agent) image, change tag to use a different version.
@@ -12,7 +12,7 @@ images:
##
agent:
repository: "docker.io/drone/agent"
- tag: 1.0.0-rc.4
+ tag: 1.0.0-rc.5
pullPolicy: IfNotPresent
## The official docker (dind) image, change tag to use a different version.
@@ -80,15 +80,18 @@ sourceControl:
secret:
## Fill in the correct values for your chosen source control provider
## Any key in this list with the suffix `Key` will be fetched from the
- ## secret named above, if not provided the secret will default to
- ## `-source-control`
+ ## secret named above. If not provided, the secret will be created as
+ ## `-source-control`, using "clientSecretKey" for the key and
+ ## "clientSecretValue" for the value. Be aware not to leak this file with your password
github:
clientID:
clientSecretKey: clientSecret
+ clientSecretValue:
server: https://github.com
gitlab:
clientID:
clientSecretKey: clientSecret
+ clientSecretValue:
server:
gitea:
server:
@@ -96,7 +99,8 @@ sourceControl:
server:
bitbucketCloud:
clientID:
- clientSecret: clientSecret
+ clientSecretKey: clientSecret
+ clientSecretValue:
bitbucketServer:
server:
consumerKey: consumerKey
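
With the reworked `sourceControl` values above, a GitHub configuration that lets the chart generate the `-source-control` secret itself might look like the following (the client ID and secret values are placeholders):

```yaml
sourceControl:
  provider: github
  github:
    clientID: my-oauth2-client-id
    clientSecretKey: clientSecret
    clientSecretValue: my-oauth2-client-secret
    server: https://github.com
```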
diff --git a/stable/drupal/Chart.yaml b/stable/drupal/Chart.yaml
index f649e982a94e..5b8013acda3e 100644
--- a/stable/drupal/Chart.yaml
+++ b/stable/drupal/Chart.yaml
@@ -1,6 +1,7 @@
+apiVersion: v1
name: drupal
-version: 3.0.4
-appVersion: 8.6.7
+version: 3.2.6
+appVersion: 8.7.2
description: One of the most versatile open source content management systems.
keywords:
- drupal
diff --git a/stable/drupal/README.md b/stable/drupal/README.md
index 20bcc606af21..3af33135b03f 100644
--- a/stable/drupal/README.md
+++ b/stable/drupal/README.md
@@ -14,7 +14,7 @@ This chart bootstraps a [Drupal](https://github.com/bitnami/bitnami-docker-drupa
It also packages the [Bitnami MariaDB chart](https://github.com/kubernetes/charts/tree/master/stable/mariadb) which is required for bootstrapping a MariaDB deployment as a database for the Drupal application.
-Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters.
+Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters. This chart has been tested to work with NGINX Ingress, cert-manager, fluentd and Prometheus on top of the [BKPR](https://kubeprod.io/).
## Prerequisites
@@ -50,6 +50,7 @@ The following table lists the configurable parameters of the Drupal chart and th
| Parameter | Description | Default |
| --------------------------------- | ------------------------------------------ | --------------------------------------------------------- |
| `global.imageRegistry` | Global Docker image registry | `nil` |
+| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) |
| `image.registry` | Drupal image registry | `docker.io` |
| `image.repository` | Drupal Image name | `bitnami/drupal` |
| `image.tag` | Drupal Image tag | `{VERSION}` |
diff --git a/stable/drupal/templates/_helpers.tpl b/stable/drupal/templates/_helpers.tpl
index 52d4f95770fa..933ca769eb86 100644
--- a/stable/drupal/templates/_helpers.tpl
+++ b/stable/drupal/templates/_helpers.tpl
@@ -56,9 +56,57 @@ Also, we can't use a single if because lazy evaluation is not an option
{{/*
Return the proper image name (for the metrics image)
*/}}
-{{- define "metrics.image" -}}
-{{- $registryName := .Values.metrics.image.registry -}}
+{{- define "drupal.metrics.image" -}}
+{{- $registryName := .Values.metrics.image.registry -}}
{{- $repositoryName := .Values.metrics.image.repository -}}
{{- $tag := .Values.metrics.image.tag | toString -}}
-{{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+{{/*
+Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
+but Helm 2.9 and 2.10 don't support it, so we need to implement this if-else logic.
+Also, we can't use a single if because lazy evaluation is not an option
+*/}}
+{{- if .Values.global }}
+ {{- if .Values.global.imageRegistry }}
+ {{- printf "%s/%s:%s" .Values.global.imageRegistry $repositoryName $tag -}}
+ {{- else -}}
+ {{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+ {{- end -}}
+{{- else -}}
+ {{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Return the proper Docker Image Registry Secret Names
+*/}}
+{{- define "drupal.imagePullSecrets" -}}
+{{/*
+Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
+but Helm 2.9 and 2.10 don't support it, so we need to implement this if-else logic.
+Also, we cannot use a single if because lazy evaluation is not an option
+*/}}
+{{- if .Values.global }}
+{{- if .Values.global.imagePullSecrets }}
+imagePullSecrets:
+{{- range .Values.global.imagePullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- else if or .Values.image.pullSecrets .Values.metrics.image.pullSecrets }}
+imagePullSecrets:
+{{- range .Values.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- range .Values.metrics.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- end -}}
+{{- else if or .Values.image.pullSecrets .Values.metrics.image.pullSecrets }}
+imagePullSecrets:
+{{- range .Values.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- range .Values.metrics.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- end -}}
{{- end -}}
diff --git a/stable/drupal/templates/deployment.yaml b/stable/drupal/templates/deployment.yaml
index 05121a469849..c22aeec8b9ab 100644
--- a/stable/drupal/templates/deployment.yaml
+++ b/stable/drupal/templates/deployment.yaml
@@ -29,12 +29,7 @@ spec:
{{- end }}
{{- end }}
spec:
- {{- if .Values.image.pullSecrets }}
- imagePullSecrets:
- {{- range .Values.image.pullSecrets }}
- - name: {{ . }}
- {{- end}}
- {{- end }}
+{{- include "drupal.imagePullSecrets" . | indent 6 }}
hostAliases:
- ip: "127.0.0.1"
hostnames:
@@ -119,7 +114,7 @@ spec:
{{- end }}
{{- if .Values.metrics.enabled }}
- name: metrics
- image: {{ template "metrics.image" . }}
+ image: {{ template "drupal.metrics.image" . }}
imagePullPolicy: {{ .Values.metrics.image.pullPolicy | quote }}
command: [ '/bin/apache_exporter', '-scrape_uri', 'http://status.localhost:80/server-status/?auto']
ports:
diff --git a/stable/drupal/values.yaml b/stable/drupal/values.yaml
index ab33d63addf8..376f0e38547c 100644
--- a/stable/drupal/values.yaml
+++ b/stable/drupal/values.yaml
@@ -1,8 +1,11 @@
-## Global Docker image registry
-## Please, note that this will override the image registry for all the images, including dependencies, configured to use the global value
+## Global Docker image parameters
+## Please, note that this will override the image parameters, including dependencies, configured to use the global value
+## Current available global Docker image parameters: imageRegistry and imagePullSecrets
##
# global:
-# imageRegistry:
+# imageRegistry: myRegistryName
+# imagePullSecrets:
+# - myRegistryKeySecretName
## Bitnami Drupal image version
## ref: https://hub.docker.com/r/bitnami/drupal/tags/
@@ -10,7 +13,7 @@
image:
registry: docker.io
repository: bitnami/drupal
- tag: 8.6.7
+ tag: 8.7.2
## Specify a imagePullPolicy
## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
@@ -21,7 +24,7 @@ image:
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
- # - myRegistrKeySecretName
+ # - myRegistryKeySecretName
## Installation Profile
## ref: https://github.com/bitnami/bitnami-docker-drupal#configuration
@@ -265,7 +268,7 @@ metrics:
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
- # - myRegistrKeySecretName
+ # - myRegistryKeySecretName
## Metrics exporter pod Annotation and Labels
podAnnotations:
prometheus.io/scrape: "true"
diff --git a/stable/efs-provisioner/.helmignore b/stable/efs-provisioner/.helmignore
index f0c131944441..4015b0f0cb2f 100644
--- a/stable/efs-provisioner/.helmignore
+++ b/stable/efs-provisioner/.helmignore
@@ -19,3 +19,5 @@
.project
.idea/
*.tmproj
+# OWNERS file
+OWNERS
diff --git a/stable/efs-provisioner/Chart.yaml b/stable/efs-provisioner/Chart.yaml
index 98d944d334dd..a5263ac0ee84 100644
--- a/stable/efs-provisioner/Chart.yaml
+++ b/stable/efs-provisioner/Chart.yaml
@@ -1,7 +1,7 @@
apiVersion: v1
name: efs-provisioner
description: A Helm chart for the AWS EFS external storage provisioner
-version: 0.1.5
+version: 0.4.0
appVersion: v2.1.0-k8s1.11
home: https://github.com/kubernetes-incubator/external-storage/tree/master/aws/efs
sources:
diff --git a/stable/efs-provisioner/OWNERS b/stable/efs-provisioner/OWNERS
new file mode 100644
index 000000000000..5f59888d9e8a
--- /dev/null
+++ b/stable/efs-provisioner/OWNERS
@@ -0,0 +1,4 @@
+approvers:
+- hareku
+reviewers:
+- hareku
diff --git a/stable/efs-provisioner/README.md b/stable/efs-provisioner/README.md
index 4573e27c5a71..13c5bb5346e1 100644
--- a/stable/efs-provisioner/README.md
+++ b/stable/efs-provisioner/README.md
@@ -1,11 +1,11 @@
# Helm chart for 'efs-provisioner'
-The Kubernetes project provides an AWS [EFS provisioner](https://github.com/kubernetes-incubator/external-storage/tree/master/aws/efs)
+The Kubernetes project provides an AWS [EFS provisioner](https://github.com/kubernetes-incubator/external-storage/tree/master/aws/efs)
that is used to fulfill PersistentVolumeClaims with EFS PersistentVolumes.
-"The efs-provisioner allows you to mount EFS storage as PersistentVolumes in kubernetes.
-It consists of a container that has access to an AWS EFS resource. The container reads
-a configmap which contains the EFS filesystem ID, the AWS region and the name you want
+"The efs-provisioner allows you to mount EFS storage as PersistentVolumes in kubernetes.
+It consists of a container that has access to an AWS EFS resource. The container reads
+a configmap which contains the EFS filesystem ID, the AWS region and the name you want
to use for your efs-provisioner. This name will be used later when you create a storage class."
This chart deploys the EFS Provisioner and a StorageClass for EFS volumes (optionally as the default).
@@ -76,6 +76,8 @@ annotations: {}
## https://github.com/kubernetes-incubator/external-storage/tree/master/aws/efs#deployment
##
efsProvisioner:
+ # If specified, use this DNS or IP to connect to EFS
+ # dnsName: "my-custom-efs-dns.com"
efsFileSystemId: fs-12345678
awsRegion: us-east-2
path: /example-pv
@@ -88,6 +90,9 @@ efsProvisioner:
gidMin: 40000
gidMax: 50000
reclaimPolicy: Delete
+ mountOptions: []
+ # - acregmin=3
+ # - acregmax=60
## Enable RBAC
## Leave serviceAccountName blank for the default name
@@ -96,6 +101,22 @@ rbac:
create: true
serviceAccountName: ""
+## Annotations to be added to the pods
+##
+podAnnotations: {}
+ # iam.amazonaws.com/role: efs-provisioner-role
+
+## Node labels for pod assignment
+##
+nodeSelector: {}
+
+# Affinity for pod assignment
+# Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
+affinity: {}
+
+# Tolerations for node taints
+tolerations: {}
+
## Configure resources
##
resources: {}
diff --git a/stable/efs-provisioner/templates/deployment.yaml b/stable/efs-provisioner/templates/deployment.yaml
index acccd0e977e7..51e78a229a89 100644
--- a/stable/efs-provisioner/templates/deployment.yaml
+++ b/stable/efs-provisioner/templates/deployment.yaml
@@ -1,4 +1,4 @@
-{{- if ne .Values.efsProvisioner.efsFileSystemId "fs-12345678" }}
+{{- if or (ne .Values.efsProvisioner.efsFileSystemId "fs-12345678") (.Values.efsProvisioner.dnsName) }}
{{/*
The `efsFileSystemId` value must be set.
@@ -32,11 +32,13 @@ spec:
type: Recreate
template:
metadata:
+ {{- if .Values.podAnnotations }}
+ annotations:
+{{ toYaml .Values.podAnnotations | indent 8}}
+ {{- end }}
labels:
app: {{ template "efs-provisioner.name" . }}
release: "{{ .Release.Name }}"
- annotations:
-{{ toYaml .Values.annotations | indent 8 }}
spec:
serviceAccount: {{ template "efs-provisioner.serviceAccountName" . }}
{{- if .Values.priorityClassName }}
@@ -53,6 +55,12 @@ spec:
value: {{ .Values.efsProvisioner.awsRegion }}
- name: PROVISIONER_NAME
value: {{ .Values.efsProvisioner.provisionerName }}
+ {{- if .Values.efsProvisioner.dnsName }}
+ - name: DNS_NAME
+ value: {{ .Values.efsProvisioner.dnsName }}
+ {{- end }}
+ resources:
+ {{- toYaml .Values.resources | nindent 12 }}
volumeMounts:
- name: pv-volume
subPath: {{ (trimPrefix "/" .Values.efsProvisioner.path) }}
@@ -70,6 +78,22 @@ spec:
volumes:
- name: pv-volume
nfs:
+ {{- if .Values.efsProvisioner.dnsName }}
+ server: {{ .Values.efsProvisioner.dnsName }}
+ {{- else }}
server: {{ .Values.efsProvisioner.efsFileSystemId }}.efs.{{ .Values.efsProvisioner.awsRegion }}.amazonaws.com
+ {{- end }}
path: /
{{- end }}
+ {{- if .Values.nodeSelector }}
+ nodeSelector:
+{{ toYaml .Values.nodeSelector | indent 8 }}
+ {{- end }}
+ {{- if .Values.affinity }}
+ affinity:
+{{ toYaml .Values.affinity | indent 8 }}
+ {{- end }}
+ {{- if .Values.tolerations }}
+ tolerations:
+{{ toYaml .Values.tolerations | indent 8 }}
+ {{- end }}
diff --git a/stable/efs-provisioner/templates/storageclass.yaml b/stable/efs-provisioner/templates/storageclass.yaml
index f6822f03b431..262065c06cf5 100644
--- a/stable/efs-provisioner/templates/storageclass.yaml
+++ b/stable/efs-provisioner/templates/storageclass.yaml
@@ -8,9 +8,11 @@ metadata:
chart: {{ template "efs-provisioner.chartname" . }}
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
-{{- if .Values.efsProvisioner.storageClass.isDefault }}
annotations:
+{{- if .Values.efsProvisioner.storageClass.isDefault }}
storageclass.kubernetes.io/is-default-class: "true"
+{{- end }}
+{{- if .Values.annotations }}
{{ toYaml .Values.annotations | indent 4 }}
{{- end }}
provisioner: {{ .Values.efsProvisioner.provisionerName }}
@@ -25,3 +27,9 @@ parameters:
gidAllocate: "false"
{{- end }}
reclaimPolicy: {{ .Values.efsProvisioner.storageClass.reclaimPolicy }}
+{{- if .Values.efsProvisioner.storageClass.mountOptions }}
+mountOptions:
+ {{- range .Values.efsProvisioner.storageClass.mountOptions }}
+ - {{ . }}
+ {{- end }}
+{{- end }}
diff --git a/stable/efs-provisioner/values.yaml b/stable/efs-provisioner/values.yaml
index fb0891fc86a8..bcfe4faf467a 100644
--- a/stable/efs-provisioner/values.yaml
+++ b/stable/efs-provisioner/values.yaml
@@ -22,10 +22,16 @@ busyboxImage:
tag: 1.27
pullPolicy: IfNotPresent
+## Deployment annotations
+##
+annotations: {}
+
## Configure provisioner
## https://github.com/kubernetes-incubator/external-storage/tree/master/aws/efs#deployment
##
efsProvisioner:
+ # If specified, use this DNS or IP to connect to EFS
+ # dnsName: "my-custom-efs-dns.com"
efsFileSystemId: fs-12345678
awsRegion: us-east-2
path: /example-pv
@@ -38,6 +44,7 @@ efsProvisioner:
gidMin: 40000
gidMax: 50000
reclaimPolicy: Delete
+ mountOptions: []
## Enable RBAC
##
@@ -54,6 +61,23 @@ serviceAccount:
# If not set and create is true, a name is generated using the fullname template
name: ""
+## Annotations to be added to the pods
+##
+podAnnotations: {}
+ # iam.amazonaws.com/role: efs-provisioner-role
+
+## Node labels for pod assignment
+##
+nodeSelector: {}
+
+# Affinity for pod assignment
+# Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
+affinity: {}
+
+# Tolerations for pod assignment
+# Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
+tolerations: {}
+
## Configure resources
##
resources: {}
diff --git a/stable/elastalert/Chart.yaml b/stable/elastalert/Chart.yaml
index c7f1f6ba3895..44556074130c 100644
--- a/stable/elastalert/Chart.yaml
+++ b/stable/elastalert/Chart.yaml
@@ -1,7 +1,8 @@
+apiVersion: v1
description: ElastAlert is a simple framework for alerting on anomalies, spikes, or other patterns of interest from data in Elasticsearch.
name: elastalert
-version: 0.10.0
-appVersion: 0.1.38
+version: 1.0.0
+appVersion: 0.1.39
home: https://github.com/Yelp/elastalert
icon: https://static-www.elastic.co/assets/blteb1c97719574938d/logo-elastic-elasticsearch-lt.svg
sources:
diff --git a/stable/elastalert/README.md b/stable/elastalert/README.md
index cba71710a212..574413223b38 100644
--- a/stable/elastalert/README.md
+++ b/stable/elastalert/README.md
@@ -52,7 +52,7 @@ The command removes all the Kubernetes components associated with the chart and
| Parameter | Description | Default |
| ------------------------ | ------------------------------------------------- | ------------------------------- |
| `image.repository` | docker image | jertel/elastalert-docker |
-| `image.tag` | docker image tag | 0.1.38 |
+| `image.tag` | docker image tag | 0.1.39 |
| `image.pullPolicy` | image pull policy | IfNotPresent |
| `command` | command override for container | `NULL` |
| `args` | args override for container | `NULL` |
@@ -68,6 +68,7 @@ The command removes all the Kubernetes components associated with the chart and
| `elasticsearch.caCerts` | path to a CA cert bundle to use to verify SSL connections | /certs/ca.pem |
| `elasticsearch.certsVolumes` | certs volumes, required to mount ssl certificates when elasticsearch has tls enabled | `NULL` |
| `elasticsearch.certsVolumeMounts` | mount certs volumes, required to mount ssl certificates when elasticsearch has tls enabled | `NULL` |
+| `extraConfigOptions` | Additional options to propagate to all rules, cannot be `alert`, `type`, `name` or `index` | `{}` |
| `resources` | Container resource requests and limits | {} |
| `rules` | Rule and alert configuration for Elastalert | {} example shown in values.yaml |
| `runIntervalMins` | Default interval between alert checks, in minutes | 1 |
diff --git a/stable/elastalert/templates/config.yaml b/stable/elastalert/templates/config.yaml
index 66279f4f9b7a..8908b9f16c70 100644
--- a/stable/elastalert/templates/config.yaml
+++ b/stable/elastalert/templates/config.yaml
@@ -42,3 +42,6 @@ data:
{{- end }}
alert_time_limit:
minutes: {{ .Values.alertRetryLimitMins }}
+{{- if .Values.extraConfigOptions }}
+{{ toYaml .Values.extraConfigOptions | indent 4 }}
+{{- end }}
diff --git a/stable/elastalert/values.yaml b/stable/elastalert/values.yaml
index efb3be285104..c7e78bf6d3a3 100644
--- a/stable/elastalert/values.yaml
+++ b/stable/elastalert/values.yaml
@@ -28,7 +28,7 @@ image:
# docker image
repository: jertel/elastalert-docker
# docker image tag
- tag: 0.1.38
+ tag: 0.1.39
pullPolicy: IfNotPresent
resources: {}
@@ -64,6 +64,14 @@ elasticsearch:
# mountPath: /certs
# readOnly: true
+extraConfigOptions: {}
+ # # Options to propagate to all rules, e.g. a common slack_webhook_url or kibana_url
+ # # Please note that this will not work for required_locals,
+ # # which MUST be set at the rule level; these are: ['alert', 'type', 'name', 'index']
+ # generate_kibana_link: true
+ # kibana_url: https://kibana.yourdomain.com
+ # slack_webhook_url: dummy
+
# Command and args override for container e.g. (https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/)
# command: ["YOUR_CUSTOM_COMMAND"]
# args: ["YOUR", "CUSTOM", "ARGS"]
diff --git a/stable/elastic-stack/Chart.yaml b/stable/elastic-stack/Chart.yaml
index a69467bdd6b9..7d30e8f7c629 100644
--- a/stable/elastic-stack/Chart.yaml
+++ b/stable/elastic-stack/Chart.yaml
@@ -3,7 +3,7 @@ description: A Helm chart for ELK
home: https://www.elastic.co/products
icon: https://www.elastic.co/assets/bltb35193323e8f1770/logo-elastic-stack-lt.svg
name: elastic-stack
-version: 1.4.1
+version: 1.6.0
appVersion: 6.0
maintainers:
- name: rendhalver
diff --git a/stable/elastic-stack/requirements.lock b/stable/elastic-stack/requirements.lock
index 2ba8362dc267..952886ee3c30 100644
--- a/stable/elastic-stack/requirements.lock
+++ b/stable/elastic-stack/requirements.lock
@@ -1,33 +1,33 @@
dependencies:
- name: elasticsearch
repository: https://kubernetes-charts.storage.googleapis.com/
- version: 1.17.0
+ version: 1.22.0
- name: kibana
repository: https://kubernetes-charts.storage.googleapis.com/
- version: 1.1.2
+ version: 2.2.0
- name: filebeat
repository: https://kubernetes-charts.storage.googleapis.com/
- version: 1.1.2
+ version: 1.5.1
- name: logstash
repository: https://kubernetes-charts.storage.googleapis.com/
- version: 1.4.2
+ version: 1.6.0
- name: fluentd
repository: https://kubernetes-charts.storage.googleapis.com/
- version: 1.4.0
+ version: 1.5.1
- name: fluent-bit
repository: https://kubernetes-charts.storage.googleapis.com/
- version: 1.3.0
+ version: 1.9.1
- name: fluentd-elasticsearch
repository: https://kubernetes-charts.storage.googleapis.com/
- version: 1.5.0
+ version: 2.0.7
- name: nginx-ldapauth-proxy
repository: https://kubernetes-charts.storage.googleapis.com/
version: 0.1.2
- name: elasticsearch-curator
repository: https://kubernetes-charts.storage.googleapis.com/
- version: 1.0.1
+ version: 1.3.2
- name: elasticsearch-exporter
repository: https://kubernetes-charts.storage.googleapis.com/
- version: 0.4.1
-digest: sha256:1fd4a059ff9264193884b83644fd057216c384cc5a0debfe8347e9433bd7d1e2
-generated: 2019-01-14T14:58:05.877505741-05:00
+ version: 1.1.3
+digest: sha256:c64f0ce3be369001b814ba8fd0d04fa38c09657b327ba6fa21fcf73e1e57e2e7
+generated: 2019-04-02T18:40:18.875184031+04:00
diff --git a/stable/elastic-stack/requirements.yaml b/stable/elastic-stack/requirements.yaml
index d4c872fcbf48..12805e66fddc 100644
--- a/stable/elastic-stack/requirements.yaml
+++ b/stable/elastic-stack/requirements.yaml
@@ -1,29 +1,30 @@
dependencies:
- name: elasticsearch
- version: ^1.17.0
+ version: ^1.22.0
repository: https://kubernetes-charts.storage.googleapis.com/
condition: elasticsearch.enabled
- name: kibana
- version: ^1.1.0
+ version: ^2.2.0
repository: https://kubernetes-charts.storage.googleapis.com/
+ condition: kibana.enabled
- name: filebeat
- version: ^1.0.0
+ version: ^1.5.0
repository: https://kubernetes-charts.storage.googleapis.com/
condition: filebeat.enabled
- name: logstash
- version: ^1.2.1
+ version: ^1.6.0
repository: https://kubernetes-charts.storage.googleapis.com/
condition: logstash.enabled
- name: fluentd
- version: ^1.3.0
+ version: ^1.5.0
repository: https://kubernetes-charts.storage.googleapis.com/
condition: fluentd.enabled
- name: fluent-bit
- version: ^1.3.0
+ version: ^1.9.0
repository: https://kubernetes-charts.storage.googleapis.com/
condition: fluent-bit.enabled
- name: fluentd-elasticsearch
- version: ^1.0.0
+ version: ^2.0.0
repository: https://kubernetes-charts.storage.googleapis.com/
condition: fluentd-elasticsearch.enabled
- name: nginx-ldapauth-proxy
@@ -31,10 +32,10 @@ dependencies:
repository: https://kubernetes-charts.storage.googleapis.com/
condition: nginx-ldapauth-proxy.enabled
- name: elasticsearch-curator
- version: ^1.0.0
+ version: ^1.2.0
repository: https://kubernetes-charts.storage.googleapis.com/
condition: elasticsearch-curator.enabled
- name: elasticsearch-exporter
- version: ^0.4.0
+ version: ^1.1.0
repository: https://kubernetes-charts.storage.googleapis.com/
condition: elasticsearch-exporter.enabled
diff --git a/stable/elastic-stack/templates/NOTES.txt b/stable/elastic-stack/templates/NOTES.txt
index 96ae71d8eb06..740fa256983e 100644
--- a/stable/elastic-stack/templates/NOTES.txt
+++ b/stable/elastic-stack/templates/NOTES.txt
@@ -1,5 +1,6 @@
The elasticsearch cluster and associated extras have been installed.
+{{- if .Values.kibana.enabled }}
Kibana can be accessed:
* Within your cluster, at the following DNS name at port 9200:
@@ -29,3 +30,4 @@ Kibana can be accessed:
echo "Visit http://127.0.0.1:5601 to use Kibana"
kubectl port-forward --namespace {{ .Release.Namespace }} $POD_NAME 5601:5601
{{- end }}
+{{- end }}
diff --git a/stable/elastic-stack/values.yaml b/stable/elastic-stack/values.yaml
index 28480f37d19b..fad43f5b9e70 100644
--- a/stable/elastic-stack/values.yaml
+++ b/stable/elastic-stack/values.yaml
@@ -5,6 +5,7 @@ elasticsearch:
enabled: true
kibana:
+ enabled: true
env:
ELASTICSEARCH_URL: http://http.default.svc.cluster.local:9200
diff --git a/stable/elasticsearch-curator/Chart.yaml b/stable/elasticsearch-curator/Chart.yaml
index 9297f59df7df..6e3fdf84ba43 100644
--- a/stable/elasticsearch-curator/Chart.yaml
+++ b/stable/elasticsearch-curator/Chart.yaml
@@ -2,7 +2,7 @@ apiVersion: v1
appVersion: "5.5.4"
description: A Helm chart for Elasticsearch Curator
name: elasticsearch-curator
-version: 1.1.0
+version: 1.5.0
home: https://github.com/elastic/curator
keywords:
- curator
diff --git a/stable/elasticsearch-curator/README.md b/stable/elasticsearch-curator/README.md
index a5dd075b0a69..b020309e0cbe 100644
--- a/stable/elasticsearch-curator/README.md
+++ b/stable/elasticsearch-curator/README.md
@@ -28,24 +28,37 @@ $ helm install stable/elasticsearch-curator
The following table lists the configurable parameters of the docker-registry chart and
their default values.
-| Parameter | Description | Default |
-| :----------------------------------- | :---------------------------------------------------- | :------------------------------------------- |
-| `image.pullPolicy` | Container pull policy | `IfNotPresent` |
-| `image.repository` | Container image to use | `quay.io/pires/docker-elasticsearch-curator` |
-| `image.tag` | Container image tag to deploy | `5.5.4` |
-| `hooks` | Whether to run job on selected hooks | `{ "install": false, "upgrade": false }` |
-| `cronjob.schedule` | Schedule for the CronJob | `0 1 * * *` |
-| `cronjob.annotations` | Annotations to add to the cronjob | {} |
-| `cronjob.concurrencyPolicy` | `Allow|Forbid|Replace` concurrent jobs | `nil` |
-| `cronjob.failedJobsHistoryLimit` | Specify the number of failed Jobs to keep | `nil` |
-| `cronjob.successfulJobsHistoryLimit` | Specify the number of completed Jobs to keep | `nil` |
-| `pod.annotations` | Annotations to add to the pod | {} |
-| `configMaps.action_file_yml` | Contents of the Curator action_file.yml | See values.yaml |
-| `configMaps.config_yml` | Contents of the Curator config.yml (overrides config) | See values.yaml |
-| `resources` | Resource requests and limits | {} |
-| `priorityClassName` | priorityClassName | `nil` |
-| `extraVolumeMounts` | Mount extra volume(s), | |
-| `extraVolumes` | Extra volumes | |
+| Parameter | Description | Default |
+| :----------------------------------- | :---------------------------------------------------------- | :------------------------------------------- |
+| `image.pullPolicy` | Container pull policy | `IfNotPresent` |
+| `image.repository` | Container image to use | `quay.io/pires/docker-elasticsearch-curator` |
+| `image.tag` | Container image tag to deploy | `5.5.4` |
+| `hooks` | Whether to run job on selected hooks | `{ "install": false, "upgrade": false }` |
+| `cronjob.schedule` | Schedule for the CronJob | `0 1 * * *` |
+| `cronjob.annotations` | Annotations to add to the cronjob | {} |
+| `cronjob.concurrencyPolicy` | `Allow\|Forbid\|Replace` concurrent jobs | `nil` |
+| `cronjob.failedJobsHistoryLimit` | Specify the number of failed Jobs to keep | `nil` |
+| `cronjob.successfulJobsHistoryLimit` | Specify the number of completed Jobs to keep | `nil` |
+| `pod.annotations` | Annotations to add to the pod | {} |
+| `dryrun` | Run Curator in dry-run mode | `false` |
+| `env` | Environment variables to add to the cronjob container | {} |
+| `envFromSecrets` | Environment variables from secrets to the cronjob container | {} |
+| `envFromSecrets.*.from.secret` | - `secretKeyRef.name` used for environment variable | |
+| `envFromSecrets.*.from.key` | - `secretKeyRef.key` used for environment variable | |
+| `command` | Command to execute | ["curator"] |
+| `configMaps.action_file_yml` | Contents of the Curator action_file.yml | See values.yaml |
+| `configMaps.config_yml` | Contents of the Curator config.yml (overrides config) | See values.yaml |
+| `resources` | Resource requests and limits | {} |
+| `priorityClassName` | priorityClassName | `nil` |
+| `extraVolumeMounts` | Mount extra volume(s), | |
+| `extraVolumes` | Extra volumes | |
+| `extraInitContainers` | Init containers to add to the cronjob container | {} |
+| `securityContext` | Configure PodSecurityContext | `false` |
+| `rbac.enabled` | Enable RBAC resources | `false` |
+| `psp.create` | Create pod security policy resources | `false` |
+| `serviceAccount.create` | Create a default serviceaccount for elasticsearch curator | `true` |
+| `serviceAccount.name` | Name for elasticsearch curator serviceaccount | `""` |
+
Specify each parameter using the `--set key=value[,key=value]` argument to
`helm install`.
diff --git a/stable/elasticsearch-curator/ci/initcontainer-values.yaml b/stable/elasticsearch-curator/ci/initcontainer-values.yaml
new file mode 100644
index 000000000000..578becf3f8a3
--- /dev/null
+++ b/stable/elasticsearch-curator/ci/initcontainer-values.yaml
@@ -0,0 +1,9 @@
+extraInitContainers:
+ test:
+ image: alpine:latest
+ command:
+ - "/bin/sh"
+ - "-c"
+ args:
+ - |
+ true
diff --git a/stable/elasticsearch-curator/templates/_helpers.tpl b/stable/elasticsearch-curator/templates/_helpers.tpl
index c786fb5fa825..2ef3ceb99e0b 100644
--- a/stable/elasticsearch-curator/templates/_helpers.tpl
+++ b/stable/elasticsearch-curator/templates/_helpers.tpl
@@ -42,3 +42,14 @@ Create chart name and version as used by the chart label.
{{- define "elasticsearch-curator.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
+
+{{/*
+Create the name of the service account to use
+*/}}
+{{- define "elasticsearch-curator.serviceAccountName" -}}
+{{- if .Values.serviceAccount.create -}}
+ {{ default (include "elasticsearch-curator.fullname" .) .Values.serviceAccount.name }}
+{{- else -}}
+ {{ default "default" .Values.serviceAccount.name }}
+{{- end -}}
+{{- end -}}
diff --git a/stable/elasticsearch-curator/templates/cronjob.yaml b/stable/elasticsearch-curator/templates/cronjob.yaml
index 6d32aeeb3426..37274f6a80cb 100644
--- a/stable/elasticsearch-curator/templates/cronjob.yaml
+++ b/stable/elasticsearch-curator/templates/cronjob.yaml
@@ -49,6 +49,20 @@ spec:
{{- if .Values.priorityClassName }}
priorityClassName: "{{ .Values.priorityClassName }}"
{{- end }}
+{{- if .Values.image.pullSecret }}
+ imagePullSecrets:
+ - name: {{ .Values.image.pullSecret }}
+{{- end }}
+{{- if .Values.extraInitContainers }}
+ initContainers:
+{{- range $key, $value := .Values.extraInitContainers }}
+ - name: "{{ $key }}"
+{{ toYaml $value | indent 12 }}
+{{- end }}
+{{- end }}
+ {{- if .Values.rbac.enabled }}
+ serviceAccountName: {{ template "elasticsearch-curator.serviceAccountName" .}}
+ {{- end }}
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
@@ -58,9 +72,32 @@ spec:
mountPath: /etc/es-curator
{{- if .Values.extraVolumeMounts }}
{{ toYaml .Values.extraVolumeMounts | indent 16 }}
+{{ end }}
+{{ if .Values.command }}
+ command:
+{{ toYaml .Values.command | indent 16 }}
{{- end }}
- command: [ "curator" ]
+{{- if .Values.dryrun }}
+ args: [ "--dry-run", "--config", "/etc/es-curator/config.yml", "/etc/es-curator/action_file.yml" ]
+{{- else }}
args: [ "--config", "/etc/es-curator/config.yml", "/etc/es-curator/action_file.yml" ]
+{{- end }}
+ env:
+{{- if .Values.env }}
+{{- range $key,$value := .Values.env }}
+ - name: {{ $key | upper | quote}}
+ value: {{ $value | quote}}
+{{- end }}
+{{- end }}
+{{- if .Values.envFromSecrets }}
+{{- range $key,$value := .Values.envFromSecrets }}
+ - name: {{ $key | upper | quote}}
+ valueFrom:
+ secretKeyRef:
+ name: {{ $value.from.secret | quote}}
+ key: {{ $value.from.key | quote}}
+{{- end }}
+{{- end }}
resources:
{{ toYaml .Values.resources | indent 16 }}
{{- with .Values.nodeSelector }}
@@ -73,5 +110,9 @@ spec:
{{- end }}
{{- with .Values.tolerations }}
tolerations:
+{{ toYaml . | indent 12 }}
+ {{- end }}
+ {{- with .Values.securityContext }}
+ securityContext:
{{ toYaml . | indent 12 }}
{{- end }}
diff --git a/stable/elasticsearch-curator/templates/psp.yml b/stable/elasticsearch-curator/templates/psp.yml
new file mode 100644
index 000000000000..0f68d501fda7
--- /dev/null
+++ b/stable/elasticsearch-curator/templates/psp.yml
@@ -0,0 +1,35 @@
+{{- if .Values.psp.create }}
+apiVersion: extensions/v1beta1
+kind: PodSecurityPolicy
+metadata:
+ labels:
+ app: {{ template "elasticsearch-curator.name" . }}
+ chart: {{ template "elasticsearch-curator.chart" . }}
+ release: {{ .Release.Name }}
+ heritage: {{ .Release.Service }}
+ name: {{ template "elasticsearch-curator.fullname" . }}-psp
+spec:
+ privileged: true
+ #requiredDropCapabilities:
+ volumes:
+ - 'configMap'
+ - 'secret'
+ hostNetwork: false
+ hostIPC: false
+ hostPID: false
+ runAsUser:
+ rule: 'RunAsAny'
+ seLinux:
+ rule: 'RunAsAny'
+ supplementalGroups:
+ rule: 'MustRunAs'
+ ranges:
+ - min: 1
+ max: 65535
+ fsGroup:
+ rule: 'MustRunAs'
+ ranges:
+ - min: 1
+ max: 65535
+ readOnlyRootFilesystem: false
+{{- end }}
diff --git a/stable/elasticsearch-curator/templates/role.yaml b/stable/elasticsearch-curator/templates/role.yaml
new file mode 100644
index 000000000000..8867f679137b
--- /dev/null
+++ b/stable/elasticsearch-curator/templates/role.yaml
@@ -0,0 +1,23 @@
+{{- if .Values.rbac.enabled }}
+kind: Role
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ labels:
+ app: {{ template "elasticsearch-curator.name" . }}
+ chart: {{ template "elasticsearch-curator.chart" . }}
+ heritage: {{ .Release.Service }}
+ release: {{ .Release.Name }}
+ component: elasticsearch-curator-configmap
+ name: {{ template "elasticsearch-curator.name" . }}-role
+rules:
+- apiGroups: [""]
+ resources: ["configmaps"]
+ verbs: ["update", "patch"]
+{{- if .Values.psp.create }}
+- apiGroups: ["extensions"]
+ resources: ["podsecuritypolicies"]
+ verbs: ["use"]
+ resourceNames:
+ - {{ template "elasticsearch-curator.fullname" . }}-psp
+{{- end -}}
+{{- end -}}
diff --git a/stable/elasticsearch-curator/templates/rolebinding.yaml b/stable/elasticsearch-curator/templates/rolebinding.yaml
new file mode 100644
index 000000000000..d25d2e142c9c
--- /dev/null
+++ b/stable/elasticsearch-curator/templates/rolebinding.yaml
@@ -0,0 +1,21 @@
+{{- if .Values.rbac.enabled -}}
+kind: RoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ labels:
+ app: {{ template "elasticsearch-curator.name" . }}
+ chart: {{ template "elasticsearch-curator.chart" . }}
+ heritage: {{ .Release.Service }}
+ release: {{ .Release.Name }}
+ component: elasticsearch-curator-configmap
+ name: {{ template "elasticsearch-curator.name" . }}-rolebinding
+roleRef:
+ kind: Role
+ name: {{ template "elasticsearch-curator.name" . }}-role
+ apiGroup: rbac.authorization.k8s.io
+subjects:
+ - kind: ServiceAccount
+ name: {{ template "elasticsearch-curator.serviceAccountName" . }}
+ namespace: {{ .Release.Namespace }}
+{{- end -}}
+
diff --git a/stable/elasticsearch-curator/templates/serviceaccount.yaml b/stable/elasticsearch-curator/templates/serviceaccount.yaml
new file mode 100644
index 000000000000..ad9c5c9ac030
--- /dev/null
+++ b/stable/elasticsearch-curator/templates/serviceaccount.yaml
@@ -0,0 +1,12 @@
+{{- if and .Values.serviceAccount.create .Values.rbac.enabled }}
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: {{ template "elasticsearch-curator.serviceAccountName" .}}
+ labels:
+ app: {{ template "elasticsearch-curator.fullname" . }}
+ chart: {{ template "elasticsearch-curator.chart" . }}
+ release: "{{ .Release.Name }}"
+ heritage: "{{ .Release.Service }}"
+{{- end }}
+
diff --git a/stable/elasticsearch-curator/values.yaml b/stable/elasticsearch-curator/values.yaml
index a13c8ac7f2bd..cc28c65654a6 100644
--- a/stable/elasticsearch-curator/values.yaml
+++ b/stable/elasticsearch-curator/values.yaml
@@ -13,6 +13,22 @@ cronjob:
pod:
annotations: {}
+rbac:
+ # Specifies whether RBAC should be enabled
+ enabled: false
+
+serviceAccount:
+ # Specifies whether a ServiceAccount should be created
+ create: true
+ # The name of the ServiceAccount to use.
+ # If not set and create is true, a name is generated using the fullname template
+ name:
+
+
+psp:
+ # Specifies whether a podsecuritypolicy should be created
+ create: false
+
image:
repository: quay.io/pires/docker-elasticsearch-curator
tag: 5.5.4
@@ -22,6 +38,12 @@ hooks:
install: false
upgrade: false
+# run curator in dry-run mode
+dryrun: false
+
+command: ["curator"]
+env: {}
+
configMaps:
# Delete indices older than 7 days
action_file_yml: |-
@@ -94,3 +116,41 @@ priorityClassName: ""
# - name: es-certs
# mountPath: /certs
# readOnly: true
+
+# Add your own init container or uncomment and modify the given example.
+extraInitContainers: {}
+ ## Don't configure S3 repository till Elasticsearch is reachable.
+ ## Ensure that it is available at http://elasticsearch:9200
+ ##
+ # elasticsearch-s3-repository:
+ # image: jwilder/dockerize:latest
+ # imagePullPolicy: "IfNotPresent"
+ # command:
+ # - "/bin/sh"
+ # - "-c"
+ # args:
+ # - |
+ # ES_HOST=elasticsearch
+ # ES_PORT=9200
+ # ES_REPOSITORY=backup
+ # S3_REGION=us-east-1
+ # S3_BUCKET=bucket
+ # S3_BASE_PATH=backup
+ # S3_COMPRESS=true
+ # S3_STORAGE_CLASS=standard
+ # apk add curl --no-cache && \
+ # dockerize -wait http://${ES_HOST}:${ES_PORT} --timeout 120s && \
+ # cat < 1
+ for: 1m
+ labels:
+ severity: page
+ annotations:
+ summary: "4xx response rate above 1%"
description: "The 4xx error response rate for envoy cluster {{ $labels.envoy_cluster_name }} has been above 1% ({{ $value }}%) for more than 1 minute."
+ ## Added to the PrometheusRule object so that prometheus-operator is able to discover it
+ ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#prometheusspec
+ additionalLabels: {}
diff --git a/stable/ethereum/Chart.yaml b/stable/ethereum/Chart.yaml
index e1fa9e462e69..6ea657bcaa75 100644
--- a/stable/ethereum/Chart.yaml
+++ b/stable/ethereum/Chart.yaml
@@ -1,5 +1,7 @@
+apiVersion: v1
name: ethereum
-version: 0.1.4
+version: 1.0.0
+appVersion: v1.7.3
description: private Ethereum network Helm chart for Kubernetes
keywords:
- ethereum
@@ -11,4 +13,3 @@ maintainers:
- name: jpoon
email: kubernetes@jasonpoon.ca
icon: https://www.ethereum.org/images/logos/ETHEREUM-LOGO_PORTRAIT_Black_small.png
-appVersion: v1.7.3
diff --git a/stable/ethereum/ci/test-values.yaml b/stable/ethereum/ci/test-values.yaml
index 529f5151863f..bcd6852817a0 100644
--- a/stable/ethereum/ci/test-values.yaml
+++ b/stable/ethereum/ci/test-values.yaml
@@ -32,4 +32,4 @@ geth:
account:
address: 0xab70383d9207c6cc43ab85eeef9db4d33a8ad4e8
privateKey: 38000e15ca07309cc2d0b30faaaadb293c45ea222a117e9e9c6a2a9872bb3bcf
- secret: my-super-secret-passphrase
\ No newline at end of file
+ secret: my-super-secret-passphrase
diff --git a/stable/express-gateway/Chart.yaml b/stable/express-gateway/Chart.yaml
index f754e932a4d7..24b456875a10 100644
--- a/stable/express-gateway/Chart.yaml
+++ b/stable/express-gateway/Chart.yaml
@@ -9,5 +9,5 @@ maintainers:
name: express-gateway
sources:
- https://github.com/expressgateway/express-gateway
-version: 1.1.0
-appVersion: 1.13.0
+version: 1.5.0
+appVersion: 1.16.2
diff --git a/stable/express-gateway/README.md b/stable/express-gateway/README.md
index 070304060ee1..a62f3a82ed0d 100644
--- a/stable/express-gateway/README.md
+++ b/stable/express-gateway/README.md
@@ -49,7 +49,7 @@ and their default values.
| Parameter | Description | Default |
|----------------------|--------------------------------------------------------------------------------------------------------|----------------------------------|
| image.repository | Express Gateway image | `expressgateway/express-gateway` |
-| image.tag | Express Gateway image version | `v1.13.0` |
+| image.tag | Express Gateway image version | `v1.16.2` |
| image.pullPolicy | Image pull policy | `IfNotPresent` |
| replicaCount | Express Gateway instance count | `1` |
| admin.servicePort | TCP port on which the Express Gateway admin service is exposed | `9876` |
diff --git a/stable/express-gateway/values.yaml b/stable/express-gateway/values.yaml
index 7d18b633a8ea..02b567dac495 100644
--- a/stable/express-gateway/values.yaml
+++ b/stable/express-gateway/values.yaml
@@ -3,7 +3,7 @@
image:
repository: expressgateway/express-gateway
- tag: v1.13.0
+ tag: v1.16.2
pullPolicy: IfNotPresent
# Specify Express Gateway Admin API
diff --git a/stable/external-dns/Chart.yaml b/stable/external-dns/Chart.yaml
index d5bc32a635d1..0ddcaae8c4b5 100644
--- a/stable/external-dns/Chart.yaml
+++ b/stable/external-dns/Chart.yaml
@@ -1,10 +1,10 @@
apiVersion: v1
-description:
+description: |
Configure external DNS servers (AWS Route53, Google CloudDNS and others)
for Kubernetes Ingresses and Services
name: external-dns
-version: 1.6.0
-appVersion: 0.5.9
+version: 1.7.8
+appVersion: 0.5.14
home: https://github.com/kubernetes-incubator/external-dns
sources:
- https://github.com/kubernetes-incubator/external-dns
diff --git a/stable/external-dns/README.md b/stable/external-dns/README.md
index 5193512b01f6..f75a0259285e 100644
--- a/stable/external-dns/README.md
+++ b/stable/external-dns/README.md
@@ -27,6 +27,7 @@ The following table lists the configurable parameters of the external-dns chart
| ---------------------------------- | -------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------- |
| `annotationFilter` | Filter sources managed by external-dns via annotation using label selector semantics (default: all sources) (optional). | `""` |
| `aws.accessKey` | set in `~/.aws/credentials` mounted through secret (optional). | `""` |
+| `aws.assumeRoleArn` | Assume role by specifying --aws-assume-role to the external-dns daemon. | `""` |
| `aws.secretKey` | set in `~/.aws/credentials` mounted through secret (optional). | `""` |
| `aws.credentialsPath` | determine `mountPath` for `credentials` secret, defaults to `nobody` USER home path `/.aws` (optional). | `"/.aws"` |
| `aws.region` | `AWS_DEFAULT_REGION` to set in the environment (optional). | `us-east-1` |
@@ -41,6 +42,7 @@ The following table lists the configurable parameters of the external-dns chart
| `designate.customCA.directory` | Directory in which to mount the Designate provider's custom CA | "/config/designate" |
| `designate.customCA.filename` | Filename of Designate provider's custom CA | "designate-ca.pem" |
| `domainFilters` | Limit possible target zones by domain suffixes (optional). | `[]` |
+| `digitalocean.apiToken` | When using the DigitalOcean provider, sets `DO_TOKEN` in the environment (optional). | `""` |
| `dryRun` | When enabled, prints DNS record changes rather than actually performing them (optional). | `false` |
| `extraArgs` | Optional object of extra args, as `name`: `value` pairs. Where the name is the command line arg to external-dns. | `{}` |
| `extraEnv` | Optional array of extra environment variables. Supply a `name` property and either `value` of `valueFrom` for each. | `[]` |
@@ -49,7 +51,7 @@ The following table lists the configurable parameters of the external-dns chart
| `google.serviceAccountKey` | When using the Google provider, optionally specify the service account key JSON file. Must be provided when no existing secret is used, in this case a new secret will be created holding this service account | `""` |
| `image.name` | Container image name (Including repository name if not `hub.docker.com`). | `registry.opensource.zalan.do/teapot/external-dns` |
| `image.pullPolicy` | Container pull policy. | `IfNotPresent` |
-| `image.tag` | Container image tag. | `v0.5.9` |
+| `image.tag` | Container image tag. | `v0.5.14` |
| `image.pullSecrets` | Array of pull secret names | `[]` |
| `infoblox.gridHost` | When using the Infoblox provider, specify the Infoblox Grid host. | `""` |
| `infoblox.wapiUsername` | When using the Infoblox provider, specify the Infoblox WAPI username. | `""` |
@@ -60,15 +62,22 @@ The following table lists the configurable parameters of the external-dns chart
| `infoblox.wapiVersion` | When using the Infoblox provider, optionally specify the Infoblox WAPI version. | `""` |
| `infoblox.wapiConnectionPoolSize` | When using the Infoblox provider, optionally specify the Infoblox WAPI request connection pool size. | `""` |
| `infoblox.wapiHttpTimeout` | When using the Infoblox provider, optionally specify the Infoblox WAPI request timeout in seconds. | `""` |
+| `rfc2136.host` | When using the rfc2136 provider, specify the RFC2136 host. | `""` |
+| `rfc2136.port` | When using the rfc2136 provider, optionally specify the RFC2136 port. | `53` |
+| `rfc2136.zone` | When using the rfc2136 provider, specify the zone. | `""` |
+| `rfc2136.tsigSecret` | When using the rfc2136 provider, if you want to enable security, specify the tsig secret. | `""` |
+| `rfc2136.tsigKeyname` | When using the rfc2136 provider, if you want to enable security, specify the tsig keyname; leave this empty for an insecure connection. | `"externaldns-key"` |
+| `rfc2136.tsigSecretAlg` | When using the rfc2136 provider, if you want to enable security, specify the tsig secret alg. | `"hmac-sha256"` |
+| `rfc2136.tsigAxfr` | When using the rfc2136 provider, if you want to enable security, enable AFXR. | `true` |
| `logLevel` | Verbosity of the logs (options: panic, debug, info, warn, error, fatal) | `info` |
| `nodeSelector` | Node labels for pod assignment | `{}` |
| `podAnnotations` | Additional annotations to apply to the pod. | `{}` |
| `policy` | Modify how DNS records are sychronized between sources and providers (options: sync, upsert-only ). | `upsert-only` |
-| `provider` | The DNS provider where the DNS records will be created (options: aws, google, azure, cloudflare, digitalocean, inmemory ). | `aws` |
+| `provider` | The DNS provider where the DNS records will be created (options: aws, google, azure, cloudflare, digitalocean, inmemory, rfc2136 ). | `aws` |
| `publishInternalServices` | Allow external-dns to publish DNS records for ClusterIP services (optional). | `false` |
| `rbac.create` | If true, create & use RBAC resources | `false` |
| `rbac.serviceAccountName` | Existing ServiceAccount to use (ignored if rbac.create=true) | `default` |
-| `interval` | Interval update period to use (options: txt, noop) | `txt` |
+| `interval` | Interval update period to use | `1m` |
| `registry` | Registry method to use (options: txt, noop) | `txt` |
| `resources` | CPU/Memory resource requests/limits. | `{}` |
| `securityContext` | Security options the pod should run with. [More info](https://kubernetes.io/docs/concepts/policy/security-context/) | `{}` |
diff --git a/stable/external-dns/templates/_helpers.tpl b/stable/external-dns/templates/_helpers.tpl
index 7e9c43e01506..61a84210270e 100644
--- a/stable/external-dns/templates/_helpers.tpl
+++ b/stable/external-dns/templates/_helpers.tpl
@@ -9,13 +9,18 @@ Expand the name of the chart.
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
+If release name contains chart name it will be used as a full name.
*/}}
{{- define "external-dns.fullname" -}}
+{{- if .Values.fullnameOverride -}}
+{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
+{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
-{{- if ne $name .Release.Name -}}
-{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
+{{- if contains $name .Release.Name -}}
+{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
-{{- printf "%s" $name | trunc 63 | trimSuffix "-" -}}
+{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
{{- end -}}
{{- end -}}
@@ -43,4 +48,4 @@ role_arn = {{ .Values.aws.roleArn }}
{{- end }}
region = {{ .Values.aws.region }}
source_profile = default
-{{ end }}
\ No newline at end of file
+{{ end }}
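The revised `external-dns.fullname` helper follows the common Helm naming convention: an explicit override wins, a release name that already contains the chart name is reused, and otherwise the two are joined. A minimal Python sketch of that resolution order (a plain-language approximation, not chart code; the example names below are hypothetical):

```python
def fullname(release_name, chart_name, name_override=None, fullname_override=None):
    """Mirror the resolution order of the external-dns.fullname template:
    1. fullnameOverride, if set, wins outright.
    2. If the release name already contains the (possibly overridden) chart
       name, the release name is reused as-is.
    3. Otherwise the release and chart names are joined with a hyphen.
    Results are truncated to 63 characters (the DNS label limit some
    Kubernetes name fields enforce) with any trailing '-' stripped."""
    def clip(s):
        return s[:63].rstrip("-")

    if fullname_override:
        return clip(fullname_override)
    name = name_override or chart_name
    if name in release_name:
        return clip(release_name)
    return clip(f"{release_name}-{name}")
```

For example, a release named `my-external-dns` keeps its name instead of becoming `my-external-dns-external-dns`.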
diff --git a/stable/external-dns/templates/clusterrole.yaml b/stable/external-dns/templates/clusterrole.yaml
index b0b96511fed6..8ef1346cc12e 100644
--- a/stable/external-dns/templates/clusterrole.yaml
+++ b/stable/external-dns/templates/clusterrole.yaml
@@ -8,13 +8,25 @@ metadata:
rules:
- apiGroups:
- ""
- - extensions
- - networking.istio.io
resources:
- - ingresses
- services
- pods
- nodes
+ verbs:
+ - get
+ - list
+ - watch
+ - apiGroups:
+ - extensions
+ resources:
+ - ingresses
+ verbs:
+ - get
+ - list
+ - watch
+ - apiGroups:
+ - networking.istio.io
+ resources:
- gateways
verbs:
- get
diff --git a/stable/external-dns/templates/deployment.yaml b/stable/external-dns/templates/deployment.yaml
index 474788ffcce7..01d0df638339 100755
--- a/stable/external-dns/templates/deployment.yaml
+++ b/stable/external-dns/templates/deployment.yaml
@@ -8,8 +8,7 @@ spec:
template:
metadata:
{{- if .Values.podAnnotations }}
- annotations:
-{{ toYaml .Values.podAnnotations | indent 8}}
+ annotations: {{ toYaml .Values.podAnnotations | nindent 8}}
{{- end }}
labels: {{ include "external-dns.labels" . | indent 8 }}
spec:
@@ -58,6 +57,9 @@ spec:
{{ if .Values.dryRun }}
- --dry-run
{{- end }}
+ {{ if .Values.aws.assumeRoleArn }}
+ - --aws-assume-role={{ .Values.aws.assumeRoleArn }}
+ {{- end }}
{{- range $key, $value := .Values.extraArgs }}
{{- if $value }}
- --{{ $key }}={{ $value }}
@@ -93,6 +95,20 @@ spec:
- --infoblox-ssl-verify
{{- end }}
{{- end }}
+ {{- if eq .Values.provider "rfc2136" }}
+ - --rfc2136-host={{ required "rfc2136.host must be supplied for provider 'rfc2136'" .Values.rfc2136.host }}
+ - --rfc2136-port={{ .Values.rfc2136.port }}
+ - --rfc2136-zone={{ required "rfc2136.zone must be supplied for provider 'rfc2136'" .Values.rfc2136.zone }}
+ {{- if .Values.rfc2136.tsigKeyname }}
+ - --rfc2136-tsig-secret-alg={{ .Values.rfc2136.tsigSecretAlg }}
+ - --rfc2136-tsig-keyname={{ .Values.rfc2136.tsigKeyname }}
+ {{- if .Values.rfc2136.tsigAxfr }}
+ - --rfc2136-tsig-axfr
+ {{- end }}
+ {{- else }}
+ - --rfc2136-insecure
+ {{- end }}
+ {{- end }}
volumeMounts:
{{- if or .Values.google.serviceAccountSecret .Values.google.serviceAccountKey }}
- name: google-service-account
@@ -137,6 +153,13 @@ spec:
- name: CF_API_EMAIL
value: "{{ .Values.cloudflare.email }}"
{{- end }}
+ {{- if .Values.digitalocean.apiToken }}
+ - name: DO_TOKEN
+ valueFrom:
+ secretKeyRef:
+ name: {{ template "external-dns.fullname" . }}
+ key: digitalocean_api_token
+ {{- end }}
{{- if .Values.infoblox.wapiConnectionPoolSize }}
- name: EXTERNAL_DNS_INFOBLOX_HTTP_POOL_CONNECTIONS
value: "{{ .Values.infoblox.wapiConnectionPoolSize }}"
@@ -157,6 +180,13 @@ spec:
name: {{ template "external-dns.fullname" . }}
key: infoblox_wapi_password
{{- end }}
+          {{- if .Values.rfc2136.tsigSecret }}
+ - name: EXTERNAL_DNS_RFC2136_TSIG_SECRET
+ valueFrom:
+ secretKeyRef:
+ name: {{ template "external-dns.fullname" . }}
+ key: rfc2136_tsig_secret
+ {{- end }}
{{- if and (eq .Values.provider "designate") .Values.designate.customCA.enabled }}
- name: OPENSTACK_CA_FILE
value: {{ .Values.designate.customCA.directory }}/{{ .Values.designate.customCA.filename }}
@@ -180,12 +210,10 @@ spec:
ports:
- containerPort: 7979
{{- if .Values.securityContext }}
- securityContext:
-{{ toYaml .Values.securityContext | indent 12 }}
+ securityContext: {{ toYaml .Values.securityContext | nindent 12 }}
{{- end }}
{{- if .Values.resources }}
- resources:
-{{ toYaml .Values.resources | indent 12 }}
+ resources: {{ toYaml .Values.resources | nindent 12 }}
{{- end }}
volumes:
{{- if .Values.google.serviceAccountSecret }}
@@ -222,15 +250,12 @@ spec:
path: {{ .Values.designate.customCA.filename }}
{{- end }}
{{- if .Values.nodeSelector }}
- nodeSelector:
-{{ toYaml .Values.nodeSelector | indent 8 }}
+ nodeSelector: {{ toYaml .Values.nodeSelector | nindent 8 }}
{{- end }}
{{- if .Values.tolerations }}
- tolerations:
-{{ toYaml .Values.tolerations | indent 8 }}
+ tolerations: {{ toYaml .Values.tolerations | nindent 8 }}
{{- end }}
{{- if .Values.affinity }}
- affinity:
-{{ toYaml .Values.affinity | indent 8 }}
+ affinity: {{ toYaml .Values.affinity | nindent 8 }}
{{- end }}
serviceAccountName: {{ if .Values.rbac.create }}{{ template "external-dns.fullname" . }}{{ else }}"{{ .Values.rbac.serviceAccountName }}"{{ end }}
diff --git a/stable/external-dns/templates/secret.yaml b/stable/external-dns/templates/secret.yaml
index 190c5dda8e49..69275c40b91c 100644
--- a/stable/external-dns/templates/secret.yaml
+++ b/stable/external-dns/templates/secret.yaml
@@ -1,4 +1,4 @@
-{{- if or (and .Values.aws.secretKey .Values.aws.accessKey) .Values.cloudflare.apiKey (and .Values.infoblox.wapiUsername .Values.infoblox.wapiPassword) .Values.extraEnv .Values.google.serviceAccountKey -}}
+{{- if or (and .Values.aws.secretKey .Values.aws.accessKey) .Values.cloudflare.apiKey .Values.digitalocean.apiToken (and .Values.infoblox.wapiUsername .Values.infoblox.wapiPassword) .Values.rfc2136.tsigSecret .Values.extraEnv .Values.google.serviceAccountKey -}}
apiVersion: v1
kind: Secret
metadata:
@@ -17,10 +17,16 @@ data:
{{- if .Values.cloudflare.apiKey }}
cloudflare_api_key: {{ .Values.cloudflare.apiKey | b64enc | quote }}
{{- end }}
+{{- if .Values.digitalocean.apiToken }}
+ digitalocean_api_token: {{ .Values.digitalocean.apiToken | b64enc | quote }}
+{{- end }}
{{- if and .Values.infoblox.wapiUsername .Values.infoblox.wapiPassword }}
infoblox_wapi_username: {{ .Values.infoblox.wapiUsername | b64enc | quote }}
infoblox_wapi_password: {{ .Values.infoblox.wapiPassword | b64enc | quote }}
{{- end }}
+{{- if .Values.rfc2136.tsigSecret }}
+ rfc2136_tsig_secret: {{ .Values.rfc2136.tsigSecret | b64enc | quote }}
+{{- end }}
{{- range .Values.extraEnv }}
{{- if .value }}
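The new secret entries above follow the same pattern as the existing ones: credential values pass through `b64enc | quote`, since Kubernetes Secret `data` values must be base64-encoded strings. A small Python sketch of what the chart renders for the added keys (an illustration only, not chart code):

```python
import base64


def secret_data(values):
    """Approximate the new secret.yaml entries: the DigitalOcean API token
    and the RFC2136 TSIG secret are base64-encoded (Helm's b64enc) and
    stored under fixed keys, emitted only when the value is non-empty."""
    data = {}
    token = values.get("digitalocean", {}).get("apiToken")
    if token:
        data["digitalocean_api_token"] = base64.b64encode(token.encode()).decode()
    tsig = values.get("rfc2136", {}).get("tsigSecret")
    if tsig:
        data["rfc2136_tsig_secret"] = base64.b64encode(tsig.encode()).decode()
    return data
```

The deployment then references these keys via `secretKeyRef`, so the plaintext values never appear in the pod spec.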
diff --git a/stable/external-dns/values.yaml b/stable/external-dns/values.yaml
index 1dae610aaaac..e88bd6b75318 100644
--- a/stable/external-dns/values.yaml
+++ b/stable/external-dns/values.yaml
@@ -1,7 +1,7 @@
## Details about the image to be pulled.
image:
name: registry.opensource.zalan.do/teapot/external-dns
- tag: v0.5.9
+ tag: v0.5.14
pullSecrets: []
pullPolicy: IfNotPresent
@@ -14,7 +14,7 @@ sources:
# Allow external-dns to publish DNS records for ClusterIP services (optional)
publishInternalServices: false
-## The DNS provider where the DNS records will be created (options: aws, google, inmemory, azure )
+## The DNS provider where the DNS records will be created (options: aws, google, azure, cloudflare, digitalocean, inmemory, rfc2136)
provider: aws
# AWS Access keys to inject as environment variables
@@ -27,6 +27,7 @@ aws:
region: "us-east-1"
# Filter for zones of this type (optional, options: public, private)
zoneType: ""
+ assumeRoleArn: ""
azure:
# If you don't specify a secret to load azure.json from, you will get the host's /etc/kubernetes/azure.json
@@ -51,6 +52,10 @@ designate:
# Filename of the custom CA
filename: "designate-ca.pem"
+# DigitalOcean API token to inject as environment variable
+digitalocean:
+ apiToken: ""
+
# When using the Google provider, specify the Google project (required when provider=google)
google:
project: ""
@@ -71,6 +76,15 @@ infoblox:
wapiConnectionPoolSize: ""
wapiHttpTimeout: ""
+rfc2136:
+ host: ""
+ port: 53
+ zone: ""
+ tsigSecret: ""
+ tsigSecretAlg: hmac-sha256
+ tsigKeyname: externaldns-key
+ tsigAxfr: true
+
## Limit possible target zones by domain suffixes (optional)
domainFilters: []
## Limit possible target zones by zone id (optional)
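The deployment template's rfc2136 handling can be summarized as: `host` and `zone` are required, TSIG flags are emitted only when a key name is configured, and `--rfc2136-insecure` is used otherwise. A Python sketch of that flag selection (an approximation of the template logic; the host/zone values in the tests are placeholders):

```python
def rfc2136_args(values):
    """Mirror the rfc2136 argument block added to deployment.yaml:
    host and zone are required; when tsigKeyname is set, the TSIG
    algorithm/keyname flags (and optionally --rfc2136-tsig-axfr) are
    emitted, otherwise the provider runs with --rfc2136-insecure."""
    r = values["rfc2136"]
    if not r.get("host") or not r.get("zone"):
        raise ValueError("rfc2136.host and rfc2136.zone must be supplied")
    args = [
        f"--rfc2136-host={r['host']}",
        f"--rfc2136-port={r['port']}",
        f"--rfc2136-zone={r['zone']}",
    ]
    if r.get("tsigKeyname"):
        args += [
            f"--rfc2136-tsig-secret-alg={r['tsigSecretAlg']}",
            f"--rfc2136-tsig-keyname={r['tsigKeyname']}",
        ]
        if r.get("tsigAxfr"):
            args.append("--rfc2136-tsig-axfr")
    else:
        args.append("--rfc2136-insecure")
    return args
```

With the chart defaults (`tsigKeyname: externaldns-key`, `tsigAxfr: true`), the TSIG flags are emitted; clearing `tsigKeyname` switches to insecure mode.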
diff --git a/stable/factorio/Chart.yaml b/stable/factorio/Chart.yaml
index f7a07eb8e6f9..de508be85deb 100755
--- a/stable/factorio/Chart.yaml
+++ b/stable/factorio/Chart.yaml
@@ -1,5 +1,6 @@
+apiVersion: v1
name: factorio
-version: 0.4.0
+version: 1.0.0
appVersion: 0.15.39
description: Factorio dedicated server.
keywords:
diff --git a/stable/falco/CHANGELOG.md b/stable/falco/CHANGELOG.md
index 793f4a9ec3d8..e97294cf2c23 100644
--- a/stable/falco/CHANGELOG.md
+++ b/stable/falco/CHANGELOG.md
@@ -3,29 +3,102 @@
This file documents all notable changes to Sysdig Falco Helm Chart. The release
numbering uses [semantic versioning](http://semver.org).
+## v0.7.6
+
+### Major Changes
+
+* Allow enabling/disabling use of the Docker socket
+* Configurable Docker socket path
+* CRI support with a configurable CRI socket
+* Allow enabling/disabling use of the CRI socket
+
+## v0.7.5
+
+### Minor Changes
+
+* Upgrade to Falco 0.15.0
+* Upgrade rules to Falco 0.15.0
+
+## v0.7.4
+
+### Minor Changes
+
+* Use the KUBERNETES_SERVICE_HOST environment variable to connect to the
+  Kubernetes API instead of using a fixed name
+
+## v0.7.3
+
+### Minor Changes
+
+* Remove the toJson pipeline when storing Google credentials. It mangled
+  double quotes and prevented the use of base64-encoded credentials
+
+## v0.7.2
+
+### Minor Changes
+
+* Fix typos in README.md
+
+## v0.7.1
+
+### Minor Changes
+
+* Add Google Pub/Sub Output integration
+
+## v0.7.0
+
+### Major Changes
+
+* Disable eBPF by default on Falco. We had enabled eBPF by default to make
+  CI pass, but we have since found a better way to do so without affecting
+  our users.
+
+## v0.6.0
+
+### Major Changes
+
+* Upgrade to Falco 0.14.0
+* Upgrade rules to Falco 0.14.0
+* Enable eBPF by default on Falco
+* Allow downloading Falco images from registries other than `docker.io`
+* Use the rollingUpdate strategy by default
+* Provide sane defaults for Falco resource management
+
## v0.5.6
+### Minor Changes
+
* Allow extra container args
## v0.5.5
+### Minor Changes
+
* Update correct slack example
## v0.5.4
+### Minor Changes
+
* Using Falco version 0.13.0 instead of latest.
## v0.5.3
+### Minor Changes
+
* Update falco_rules.yaml file to use the same rules that Falco 0.13.0
## v0.5.2
+### Minor Changes
+
* Falco was accepted as a CNCF project. Fix references and download image from
falcosecurity organization.
## v0.5.1
+### Minor Changes
+
* Allow falco to resolve cluster hostnames when running with ebpf.hostNetwork: true
## v0.5.0
diff --git a/stable/falco/Chart.yaml b/stable/falco/Chart.yaml
index 6da3fc8df66a..33ea37c5b695 100644
--- a/stable/falco/Chart.yaml
+++ b/stable/falco/Chart.yaml
@@ -1,7 +1,7 @@
apiVersion: v1
name: falco
-version: 0.5.6
-appVersion: 0.13.0
+version: 0.7.6
+appVersion: 0.15.0
description: Sysdig Falco
keywords:
- monitoring
diff --git a/stable/falco/README.md b/stable/falco/README.md
index 16e4b88e299b..92a99fa804f5 100644
--- a/stable/falco/README.md
+++ b/stable/falco/README.md
@@ -43,54 +43,64 @@ The command removes all the Kubernetes components associated with the chart and
The following table lists the configurable parameters of the Falco chart and their default values.
-| Parameter | Description | Default |
-| --- | --- | --- |
-| `image.repository` | The image repository to pull from | `falcosecurity/falco` |
-| `image.tag` | The image tag to pull | `0.13.0` |
-| `image.pullPolicy` | The image pull policy | `IfNotPresent` |
-| `resources` | Specify container resources | `{}` |
-| `extraArgs` | Specify additional container args | `[]` |
-| `rbac.create` | If true, create & use RBAC resources | `true` |
-| `serviceAccount.create` | Create serviceAccount | `true` |
-| `serviceAccount.name` | Use this value as serviceAccountName | ` ` |
-| `fakeEventGenerator.enabled` | Run falco-event-generator for sample events | `false` |
-| `fakeEventGenerator.replicas` | How many replicas of falco-event-generator to run | `1` |
-| `proxy.httpProxy` | Set the Proxy server if is behind a firewall | `` |
-| `proxy.httpsProxy` | Set the Proxy server if is behind a firewall | `` |
-| `proxy.noProxy` | Set the Proxy server if is behind a firewall | `` |
-| `ebpf.enabled` | Enable eBPF support for Falco instead of `falco-probe` kernel module | `false` |
-| `ebpf.settings.hostNetwork` | Needed to enable eBPF JIT at runtime for performance reasons | `true` |
-| `ebpf.settings.mountEtcVolume` | Needed to detect which kernel version are running in Google COS | `true` |
-| `falco.rulesFile` | The location of the rules files | `[/etc/falco/falco_rules.yaml, /etc/falco/falco_rules.local.yaml, /etc/falco/rules.d]` |
-| `falco.jsonOutput` | Output events in json or text | `false` |
-| `falco.jsonIncludeOutputProperty` | Include output property in json output | `true` |
-| `falco.logStderr` | Send Falco debugging information logs to stderr | `true` |
-| `falco.logSyslog` | Send Falco debugging information logs to syslog | `true` |
-| `falco.logLevel` | The minimum level of Falco debugging information to include in logs | `info` |
-| `falco.priority` | The minimum rule priority level to load and run | `debug` |
-| `falco.bufferedOutputs` | Use buffered outputs to channels | `false` |
-| `falco.outputs.rate` | Number of tokens gained per second | `1` |
-| `falco.outputs.maxBurst` | Maximum number of tokens outstanding | `1000` |
-| `falco.syslogOutput.enabled` | Enable syslog output for security notifications | `true` |
-| `falco.fileOutput.enabled` | Enable file output for security notifications | `false` |
-| `falco.fileOutput.keepAlive` | Open file once or every time a new notification arrives | `false` |
-| `falco.fileOutput.filename` | The filename for logging notifications | `./events.txt` |
-| `falco.stdoutOutput.enabled` | Enable stdout output for security notifications | `true` |
-| `falco.programOutput.enabled` | Enable program output for security notifications | `false` |
-| `falco.programOutput.keepAlive` | Start the program once or re-spawn when a notification arrives | `false` |
-| `falco.programOutput.program` | Command to execute for program output | `mail -s "Falco Notification" someone@example.com` |
-| `customRules` | Third party rules enabled for Falco | `{}` |
-| `integrations.gcscc.enabled` | Enable Google Cloud Security Command Center integration | `false` |
-| `integrations.gcscc.webhookUrl` | The URL where sysdig-gcscc-connector webhook is listening | `http://sysdig-gcscc-connector.default.svc.cluster.local:8080/events` |
-| `integrations.gcscc.webhookAuthenticationToken` | Token used for authentication and webhook | `b27511f86e911f20b9e0f9c8104b4ec4` |
-| `integrations.natsOutput.enabled` | Enable NATS Output integration | `false` |
-| `integrations.natsOutput.natsUrl` | The NATS' URL where Falco is going to publish security alerts | `nats://nats.nats-io.svc.cluster.local:4222` |
-| `integrations.snsOutput.enabled` | Enable Amazon SNS Output integration | `false` |
-| `integrations.snsOutput.topic` | The SNS topic where Falco is going to publish security alerts | ` ` |
-| `integrations.snsOutput.aws_access_key_id` | The AWS Access Key Id credentials for access to SNS n | ` ` |
-| `integrations.snsOutput.aws_secret_access_key` | The AWS Secret Access Key credential to access to SNS | ` ` |
-| `integrations.snsOutput.aws_default_region` | The AWS region where SNS is deployed | ` ` |
-| `tolerations` | The tolerations for scheduling | `node-role.kubernetes.io/master:NoSchedule` |
+| Parameter | Description | Default |
+| --- | --- | --- |
+| `image.registry` | The image registry to pull from | `docker.io` |
+| `image.repository` | The image repository to pull from | `falcosecurity/falco` |
+| `image.tag` | The image tag to pull | `0.15.0` |
+| `image.pullPolicy` | The image pull policy | `IfNotPresent` |
+| `cri.socket` | The path of the CRI socket | `/run/containerd/containerd.sock` |
+| `docker.socket` | The path of the Docker daemon socket | `/var/run/docker.sock` |
+| `resources.requests.cpu` | CPU requested for being run in a node | `100m` |
+| `resources.requests.memory` | Memory requested for being run in a node | `512Mi` |
+| `resources.limits.cpu` | CPU limit | `200m` |
+| `resources.limits.memory` | Memory limit | `1024Mi` |
+| `extraArgs` | Specify additional container args | `[]` |
+| `rbac.create` | If true, create & use RBAC resources | `true` |
+| `serviceAccount.create` | Create serviceAccount | `true` |
+| `serviceAccount.name` | Use this value as serviceAccountName | ` ` |
+| `fakeEventGenerator.enabled` | Run falco-event-generator for sample events | `false` |
+| `fakeEventGenerator.replicas` | How many replicas of falco-event-generator to run | `1` |
+| `daemonset.updateStrategy.type` | The updateStrategy for updating the daemonset | `RollingUpdate` |
+| `proxy.httpProxy` | Set the proxy server if behind a firewall | `` |
+| `proxy.httpsProxy` | Set the proxy server if behind a firewall | `` |
+| `proxy.noProxy` | Set the proxy server if behind a firewall | `` |
+| `ebpf.enabled` | Enable eBPF support for Falco instead of `falco-probe` kernel module | `false` |
+| `ebpf.settings.hostNetwork` | Needed to enable eBPF JIT at runtime for performance reasons | `true` |
+| `ebpf.settings.mountEtcVolume` | Needed to detect which kernel version is running on Google COS | `true` |
+| `falco.rulesFile` | The location of the rules files | `[/etc/falco/falco_rules.yaml, /etc/falco/falco_rules.local.yaml, /etc/falco/rules.d]` |
+| `falco.jsonOutput` | Output events in json or text | `false` |
+| `falco.jsonIncludeOutputProperty` | Include output property in json output | `true` |
+| `falco.logStderr` | Send Falco debugging information logs to stderr | `true` |
+| `falco.logSyslog` | Send Falco debugging information logs to syslog | `true` |
+| `falco.logLevel` | The minimum level of Falco debugging information to include in logs | `info` |
+| `falco.priority` | The minimum rule priority level to load and run | `debug` |
+| `falco.bufferedOutputs` | Use buffered outputs to channels | `false` |
+| `falco.outputs.rate` | Number of tokens gained per second | `1` |
+| `falco.outputs.maxBurst` | Maximum number of tokens outstanding | `1000` |
+| `falco.syslogOutput.enabled` | Enable syslog output for security notifications | `true` |
+| `falco.fileOutput.enabled` | Enable file output for security notifications | `false` |
+| `falco.fileOutput.keepAlive` | Open file once or every time a new notification arrives | `false` |
+| `falco.fileOutput.filename` | The filename for logging notifications | `./events.txt` |
+| `falco.stdoutOutput.enabled` | Enable stdout output for security notifications | `true` |
+| `falco.programOutput.enabled` | Enable program output for security notifications | `false` |
+| `falco.programOutput.keepAlive` | Start the program once or re-spawn when a notification arrives | `false` |
+| `falco.programOutput.program` | Command to execute for program output | `mail -s "Falco Notification" someone@example.com` |
+| `customRules` | Third party rules enabled for Falco | `{}` |
+| `integrations.gcscc.enabled` | Enable Google Cloud Security Command Center integration | `false` |
+| `integrations.gcscc.webhookUrl` | The URL where sysdig-gcscc-connector webhook is listening | `http://sysdig-gcscc-connector.default.svc.cluster.local:8080/events` |
+| `integrations.gcscc.webhookAuthenticationToken` | Token used for authentication and webhook | `b27511f86e911f20b9e0f9c8104b4ec4` |
+| `integrations.natsOutput.enabled` | Enable NATS Output integration | `false` |
+| `integrations.natsOutput.natsUrl` | The NATS URL where Falco will publish security alerts | `nats://nats.nats-io.svc.cluster.local:4222` |
+| `integrations.pubsubOutput.credentialsData` | Contents retrieved from `cat $HOME/.config/gcloud/legacy_credentials//adc.json \| jq -c .` | ` ` |
+| `integrations.pubsubOutput.enabled` | Enable GCloud PubSub Output Integration | `false` |
+| `integrations.pubsubOutput.projectID` | GCloud Project ID where the Pub/Sub will be created | ` ` |
+| `integrations.snsOutput.enabled` | Enable Amazon SNS Output integration | `false` |
+| `integrations.snsOutput.topic` | The SNS topic where Falco is going to publish security alerts | ` ` |
+| `integrations.snsOutput.aws_access_key_id` | The AWS Access Key ID credential for access to SNS | ` ` |
+| `integrations.snsOutput.aws_secret_access_key` | The AWS Secret Access Key credential for access to SNS | ` ` |
+| `integrations.snsOutput.aws_default_region` | The AWS region where SNS is deployed | ` ` |
+| `tolerations` | The tolerations for scheduling | `node-role.kubernetes.io/master:NoSchedule` |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
diff --git a/stable/falco/ci/ebpf-values.yaml b/stable/falco/ci/ebpf-values.yaml
new file mode 100644
index 000000000000..a3a9eafe75cf
--- /dev/null
+++ b/stable/falco/ci/ebpf-values.yaml
@@ -0,0 +1,2 @@
+ebpf:
+ enabled: true
diff --git a/stable/falco/rules/falco_rules.local.yaml b/stable/falco/rules/falco_rules.local.yaml
index 3c8e3bb5aa85..d4b619ab64f4 100644
--- a/stable/falco/rules/falco_rules.local.yaml
+++ b/stable/falco/rules/falco_rules.local.yaml
@@ -1,3 +1,21 @@
+#
+# Copyright (C) 2016-2018 Draios Inc dba Sysdig.
+#
+# This file is part of falco.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
####################
# Your custom rules!
####################
diff --git a/stable/falco/rules/falco_rules.yaml b/stable/falco/rules/falco_rules.yaml
index 167a1ddf250e..f7544287e6f9 100644
--- a/stable/falco/rules/falco_rules.yaml
+++ b/stable/falco/rules/falco_rules.yaml
@@ -16,6 +16,16 @@
# limitations under the License.
#
+# See xxx for details on falco engine and rules versioning. Currently,
+# this specific rules file is compatible with engine version 0
+# (e.g. falco releases <= 0.13.1), so we'll keep the
+# required_engine_version lines commented out, to maintain
+# compatibility with older falco releases. With the first incompatible
+# change to this rules file, we'll uncomment this line and set it to
+# the falco engine version in use at the time.
+#
+#- required_engine_version: 2
+
# Currently disabled as read/write are ignored syscalls. The nearly
# similar open_write/open_read check for files being opened for
# reading/writing.
@@ -30,11 +40,14 @@
- macro: open_read
condition: (evt.type=open or evt.type=openat) and evt.is_open_read=true and fd.typechar='f' and fd.num>=0
+- macro: open_directory
+ condition: (evt.type=open or evt.type=openat) and evt.is_open_read=true and fd.typechar='d' and fd.num>=0
+
- macro: never_true
condition: (evt.num=0)
- macro: always_true
- condition: (evt.num=>0)
+ condition: (evt.num>=0)
# In some cases, such as dropped system call events, information about
# the process name may be missing. For some rules that really depend
@@ -82,7 +95,14 @@
condition: ((fd.directory=/ or fd.name startswith /root) and fd.name contains "/")
- list: shell_binaries
- items: [bash, csh, ksh, sh, tcsh, zsh, dash]
+ items: [ash, bash, csh, ksh, sh, tcsh, zsh, dash]
+
+- list: ssh_binaries
+ items: [
+ sshd, sftp-server, ssh-agent,
+ ssh, scp, sftp,
+ ssh-keygen, ssh-keysign, ssh-keyscan, ssh-add
+ ]
- list: shell_mgmt_binaries
items: [add-shell, remove-shell]
@@ -117,7 +137,7 @@
shadowconfig, grpck, pwunconv, grpconv, pwck,
groupmod, vipw, pwconv, useradd, newusers, cppw, chpasswd, usermod,
groupadd, groupdel, grpunconv, chgpasswd, userdel, chage, chsh,
- gpasswd, chfn, expiry, passwd, vigr, cpgr
+ gpasswd, chfn, expiry, passwd, vigr, cpgr, adduser, addgroup, deluser, delgroup
]
# repoquery -l shadow-utils | grep bin | xargs ls -ld | grep -v '^d' |
@@ -133,10 +153,10 @@
items: [setup-backend, dragent, sdchecks]
- list: docker_binaries
- items: [docker, dockerd, exe, docker-compose, docker-entrypoi, docker-runc-cur, docker-current]
+ items: [docker, dockerd, exe, docker-compose, docker-entrypoi, docker-runc-cur, docker-current, dockerd-current]
- list: k8s_binaries
- items: [hyperkube, skydns, kube2sky, exechealthz, weave-net, loopback, bridge]
+ items: [hyperkube, skydns, kube2sky, exechealthz, weave-net, loopback, bridge, openshift-sdn]
- list: lxd_binaries
items: [lxd, lxcfs]
@@ -162,6 +182,13 @@
- list: gitlab_binaries
items: [gitlab-shell, gitlab-mon, gitlab-runner-b, git]
+- list: interpreted_binaries
+ items: [lua, node, perl, perl5, perl6, php, python, python2, python3, ruby, tcl]
+
+- macro: interpreted_procs
+ condition: >
+ (proc.name in (interpreted_binaries))
+
- macro: server_procs
condition: proc.name in (http_server_binaries, db_server_binaries, docker_binaries, sshd)
@@ -172,23 +199,32 @@
repoquery, rpmkeys, rpmq, yum-cron, yum-config-mana, yum-debug-dump,
abrt-action-sav, rpmdb_stat, microdnf, rhn_check, yumdb]
+- list: openscap_rpm_binaries
+ items: [probe_rpminfo, probe_rpmverify, probe_rpmverifyfile, probe_rpmverifypackage]
+
- macro: rpm_procs
- condition: proc.name in (rpm_binaries) or proc.name in (salt-minion)
+ condition: (proc.name in (rpm_binaries, openscap_rpm_binaries) or proc.name in (salt-minion))
- list: deb_binaries
items: [dpkg, dpkg-preconfigu, dpkg-reconfigur, dpkg-divert, apt, apt-get, aptitude,
frontend, preinst, add-apt-reposit, apt-auto-remova, apt-key,
- apt-listchanges, unattended-upgr, apt-add-reposit
+ apt-listchanges, unattended-upgr, apt-add-reposit, apt-config, apt-cache
]
# The truncated dpkg-preconfigu is intentional, process names are
# truncated at the sysdig level.
- list: package_mgmt_binaries
- items: [rpm_binaries, deb_binaries, update-alternat, gem, pip, pip3, sane-utils.post, alternatives, chef-client]
+ items: [rpm_binaries, deb_binaries, update-alternat, gem, pip, pip3, sane-utils.post, alternatives, chef-client, apk]
- macro: package_mgmt_procs
condition: proc.name in (package_mgmt_binaries)
+- macro: package_mgmt_ancestor_procs
+ condition: proc.pname in (package_mgmt_binaries) or
+ proc.aname[2] in (package_mgmt_binaries) or
+ proc.aname[3] in (package_mgmt_binaries) or
+ proc.aname[4] in (package_mgmt_binaries)
+
- macro: coreos_write_ssh_dir
condition: (proc.name=update-ssh-keys and fd.name startswith /home/core/.ssh)
@@ -246,7 +282,7 @@
]
- list: sensitive_file_names
- items: [/etc/shadow, /etc/sudoers, /etc/pam.conf]
+ items: [/etc/shadow, /etc/sudoers, /etc/pam.conf, /etc/security/pwquality.conf]
- macro: sensitive_files
condition: >
@@ -262,14 +298,18 @@
# Network
- macro: inbound
condition: >
- (((evt.type in (accept,listen) and evt.dir=<)) or
+ (((evt.type in (accept,listen) and evt.dir=<) or
+ (evt.type in (recvfrom,recvmsg) and evt.dir=< and
+ fd.l4proto != tcp and fd.connected=false and fd.name_changed=true)) and
(fd.typechar = 4 or fd.typechar = 6) and
(fd.ip != "0.0.0.0" and fd.net != "127.0.0.0/8") and
(evt.rawres >= 0 or evt.res = EINPROGRESS))
- macro: outbound
condition: >
- (((evt.type = connect and evt.dir=<)) or
+ (((evt.type = connect and evt.dir=<) or
+ (evt.type in (sendto,sendmsg) and evt.dir=< and
+ fd.l4proto != tcp and fd.connected=false and fd.name_changed=true)) and
(fd.typechar = 4 or fd.typechar = 6) and
(fd.ip != "0.0.0.0" and fd.net != "127.0.0.0/8") and
(evt.rawres >= 0 or evt.res = EINPROGRESS))
@@ -304,8 +344,139 @@
condition: (inbound_outbound) and ssh_port and not allowed_ssh_hosts
output: Disallowed SSH Connection (command=%proc.cmdline connection=%fd.name user=%user.name)
priority: NOTICE
+ tags: [network, mitre_remote_service]
+
+# These rules and supporting macros are more of an example of how to
+# use the fd.*ip and fd.*ip.name fields to match connection
+# information against ips, netmasks, and complete domain names.
+#
+# To use this rule, you should modify consider_all_outbound_conns and
+# populate allowed_{source,destination}_{ipaddrs,networks,domains} with the
+# values that make sense for your environment.
+- macro: consider_all_outbound_conns
+ condition: (never_true)
+
+# Note that this can be either individual IPs or netmasks
+- list: allowed_outbound_destination_ipaddrs
+ items: ['"127.0.0.1"', '"8.8.8.8"']
+
+- list: allowed_outbound_destination_networks
+ items: ['"127.0.0.1/8"']
+
+- list: allowed_outbound_destination_domains
+ items: [google.com, www.yahoo.com]
+
+- rule: Unexpected outbound connection destination
+ desc: Detect any outbound connection to a destination outside of an allowed set of ips, networks, or domain names
+ condition: >
+ consider_all_outbound_conns and outbound and not
+ ((fd.sip in (allowed_outbound_destination_ipaddrs)) or
+ (fd.snet in (allowed_outbound_destination_networks)) or
+ (fd.sip.name in (allowed_outbound_destination_domains)))
+ output: Disallowed outbound connection destination (command=%proc.cmdline connection=%fd.name user=%user.name)
+ priority: NOTICE
tags: [network]
+- macro: consider_all_inbound_conns
+ condition: (never_true)
+
+- list: allowed_inbound_source_ipaddrs
+ items: ['"127.0.0.1"']
+
+- list: allowed_inbound_source_networks
+ items: ['"127.0.0.1/8"', '"10.0.0.0/8"']
+
+- list: allowed_inbound_source_domains
+ items: [google.com]
+
+- rule: Unexpected inbound connection source
+ desc: Detect any inbound connection from a source outside of an allowed set of ips, networks, or domain names
+ condition: >
+ consider_all_inbound_conns and inbound and not
+ ((fd.cip in (allowed_inbound_source_ipaddrs)) or
+ (fd.cnet in (allowed_inbound_source_networks)) or
+ (fd.cip.name in (allowed_inbound_source_domains)))
+ output: Disallowed inbound connection source (command=%proc.cmdline connection=%fd.name user=%user.name)
+ priority: NOTICE
+ tags: [network]
+
+- list: bash_config_filenames
+ items: [.bashrc, .bash_profile, .bash_history, .bash_login, .bash_logout, .inputrc, .profile]
+
+- list: bash_config_files
+ items: [/etc/profile, /etc/bashrc]
+
+# Covers both csh and tcsh
+- list: csh_config_filenames
+ items: [.cshrc, .login, .logout, .history, .tcshrc, .cshdirs]
+
+- list: csh_config_files
+ items: [/etc/csh.cshrc, /etc/csh.login]
+
+- list: zsh_config_filenames
+ items: [.zshenv, .zprofile, .zshrc, .zlogin, .zlogout]
+
+- list: shell_config_filenames
+ items: [bash_config_filenames, csh_config_filenames, zsh_config_filenames]
+
+- list: shell_config_files
+ items: [bash_config_files, csh_config_files]
+
+- list: shell_config_directories
+ items: [/etc/zsh]
+
+- rule: Modify Shell Configuration File
+ desc: Detect attempt to modify shell configuration files
+ condition: >
+ open_write and
+ (fd.filename in (shell_config_filenames) or
+ fd.name in (shell_config_files) or
+ fd.directory in (shell_config_directories)) and
+ not proc.name in (shell_binaries)
+ output: >
+ a shell configuration file has been modified (user=%user.name command=%proc.cmdline file=%fd.name)
+ priority:
+ WARNING
+  tags: [file, mitre_persistence]
+
+# This rule is not enabled by default, as there are many legitimate
+# readers of shell config files. If you want to enable it, modify the
+# following macro.
+
+- macro: consider_shell_config_reads
+ condition: (never_true)
+
+- rule: Read Shell Configuration File
+ desc: Detect attempts to read shell configuration files by non-shell programs
+ condition: >
+ open_read and
+ consider_shell_config_reads and
+ (fd.filename in (shell_config_filenames) or
+ fd.name in (shell_config_files) or
+ fd.directory in (shell_config_directories)) and
+ (not proc.name in (shell_binaries))
+ output: >
+ a shell configuration file was read by a non-shell program (user=%user.name command=%proc.cmdline file=%fd.name)
+ priority:
+ WARNING
+  tags: [file, mitre_discovery]
+
+- macro: consider_all_cron_jobs
+ condition: (never_true)
+
+- rule: Schedule Cron Jobs
+ desc: Detect cron jobs being scheduled
+ condition: >
+ consider_all_cron_jobs and
+ ((open_write and fd.name startswith /etc/cron) or
+ (spawned_process and proc.name = "crontab"))
+ output: >
+ Cron jobs were scheduled to run (user=%user.name command=%proc.cmdline
+ file=%fd.name container_id=%container.id container_name=%container.name image=%container.image.repository:%container.image.tag)
+ priority:
+ NOTICE
+ tags: [file, mitre_persistence]
+
# Use this to test whether the event occurred within a container.
# When displaying container information in the output field, use
@@ -315,7 +486,13 @@
# based on the context and whether or not -pk/-pm/-pc was specified on
# the command line.
- macro: container
- condition: container.id != host
+ condition: (container.id != host)
+
+- macro: container_started
+ condition: >
+ ((evt.type = container or
+ (evt.type=execve and evt.dir=< and proc.vpid=1)) and
+ container.image.repository != incomplete)
- macro: interactive
condition: >
@@ -580,7 +757,7 @@
condition: (proc.pname=run-openldap.sh and fd.name startswith /etc/openldap)
- macro: ucpagent_writing_conf
- condition: (proc.name=apiserver and container.image startswith docker/ucp-agent and fd.name=/etc/authorization_config.cfg)
+ condition: (proc.name=apiserver and container.image.repository=docker/ucp-agent and fd.name=/etc/authorization_config.cfg)
- macro: iscsi_writing_conf
condition: (proc.name=iscsiadm and fd.name startswith /etc/iscsi)
@@ -596,6 +773,9 @@
- macro: liveupdate_writing_conf
condition: (proc.cmdline startswith "java LiveUpdate" and fd.name in (/etc/liveupdate.conf, /etc/Product.Catalog.JavaLiveUpdate))
+- macro: rancher_agent
+ condition: (proc.name = agent and container.image.repository = rancher/agent)
+
- macro: sosreport_writing_files
condition: >
(proc.name=urlgrabber-ext- and proc.aname[3]=sosreport and
@@ -628,7 +808,7 @@
condition: (veritas_progs and (fd.name startswith /etc/vx or fd.name startswith /etc/opt/VRTS or fd.name startswith /etc/vom))
- macro: nginx_writing_conf
- condition: (proc.name=nginx and fd.name startswith /etc/nginx)
+ condition: (proc.name in (nginx,nginx-ingress-c) and fd.name startswith /etc/nginx)
- macro: nginx_writing_certs
condition: >
@@ -684,7 +864,32 @@
condition: (proc.name=chef-client and fd.name startswith /root/.chef)
- macro: kubectl_writing_state
- condition: (proc.name=kubectl and fd.name startswith /root/.kube)
+ condition: (proc.name in (kubectl,oc) and fd.name startswith /root/.kube)
+
+- macro: java_running_cassandra
+ condition: (proc.name=java and proc.cmdline contains "cassandra.jar")
+
+- macro: cassandra_writing_state
+ condition: (java_running_cassandra and fd.directory=/root/.cassandra)
+
+- list: repository_files
+ items: [sources.list]
+
+- list: repository_directories
+ items: [/etc/apt/sources.list.d, /etc/yum.repos.d]
+
+- macro: access_repositories
+ condition: (fd.filename in (repository_files) or fd.directory in (repository_directories))
+
+- rule: Update Package Repository
+ desc: Detect updates to package repositories
+ condition: >
+ open_write and access_repositories and not package_mgmt_procs
+ output: >
+ Repository files were updated (user=%user.name command=%proc.cmdline file=%fd.name)
+ priority:
+ NOTICE
+ tags: [filesystem, mitre_persistence]
- rule: Write below binary dir
desc: an attempt to write to any file below a set of binary directories
@@ -698,7 +903,7 @@
File below a known binary directory opened for writing (user=%user.name
command=%proc.cmdline file=%fd.name parent=%proc.pname pcmdline=%proc.pcmdline gparent=%proc.aname[2])
priority: ERROR
- tags: [filesystem]
+ tags: [filesystem, mitre_persistence]
# If you'd like to generally monitor a wider set of directories on top
# of the ones covered by the rule Write below binary dir, you can use
@@ -731,6 +936,15 @@
or user_ssh_directory)
and not mkinitramfs_writing_boot
+# Add conditions to this macro (probably in a separate file,
+# overwriting this macro) to allow for specific combinations of
+# programs writing below monitored directories.
+#
+# Its default value is an expression that is always false, so the
+# "not ..." in the rule always evaluates to true.
+- macro: user_known_write_monitored_dir_conditions
+ condition: (never_true)
+
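As the comment above notes, exceptions are added by overriding this macro in a separate file. A sketch of such an override, where the program name and path are purely illustrative:

```yaml
# Hypothetical local override: allow an in-house backup agent
# (placeholder name) to write below the monitored directories
# without triggering "Write below monitored dir".
- macro: user_known_write_monitored_dir_conditions
  condition: (proc.name=backup-agent and fd.name startswith /boot)
```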
- rule: Write below monitored dir
desc: an attempt to write to any file below a set of binary directories
condition: >
@@ -742,14 +956,34 @@
and not python_running_ms_oms
and not google_accounts_daemon_writing_ssh
and not cloud_init_writing_ssh
+ and not user_known_write_monitored_dir_conditions
output: >
File below a monitored directory opened for writing (user=%user.name
command=%proc.cmdline file=%fd.name parent=%proc.pname pcmdline=%proc.pcmdline gparent=%proc.aname[2])
priority: ERROR
- tags: [filesystem]
+ tags: [filesystem, mitre_persistence]
+
+# This rule is disabled by default, as many system management tools
+# (ansible, etc.) can legitimately read these files/paths. Enable it via this macro.
+
+- macro: consider_ssh_reads
+ condition: (never_true)
+
+- rule: Read ssh information
+ desc: Any attempt to read files below ssh directories by non-ssh programs
+ condition: >
+ (consider_ssh_reads and
+ (open_read or open_directory) and
+ (user_ssh_directory or fd.name startswith /root/.ssh) and
+ (not proc.name in (ssh_binaries)))
+ output: >
+ ssh-related file/directory read by non-ssh program (user=%user.name
+ command=%proc.cmdline file=%fd.name parent=%proc.pname pcmdline=%proc.pcmdline)
+ priority: ERROR
+ tags: [filesystem, mitre_discovery]
- list: safe_etc_dirs
- items: [/etc/cassandra, /etc/ssl/certs/java, /etc/logstash, /etc/nginx/conf.d, /etc/container_environment, /etc/hrmconfig]
+ items: [/etc/cassandra, /etc/ssl/certs/java, /etc/logstash, /etc/nginx/conf.d, /etc/container_environment, /etc/hrmconfig, /etc/fluent/configs.d]
- macro: fluentd_writing_conf_files
condition: (proc.name=start-fluentd and fd.name in (/etc/fluent/fluent.conf, /etc/td-agent/td-agent.conf))
@@ -788,6 +1022,20 @@
proc.cmdline startswith "agent.py /opt/datadog-agent")
and fd.name startswith "/etc/dd-agent")
+- macro: rancher_writing_conf
+ condition: (container.image.repository in (rancher_images)
+ and proc.name in (lib-controller,rancher-dns,healthcheck,rancher-metadat)
+ and (fd.name startswith "/etc/haproxy" or
+ fd.name startswith "/etc/rancher-dns")
+ )
+
+- macro: jboss_in_container_writing_passwd
+ condition: >
+ ((proc.cmdline="run-java.sh /opt/jboss/container/java/run/run-java.sh"
+ or proc.cmdline="run-java.sh /opt/run-java/run-java.sh")
+ and container
+ and fd.name=/etc/passwd)
+
- macro: curl_writing_pki_db
condition: (proc.name=curl and fd.directory=/etc/pki/nssdb)
@@ -802,7 +1050,7 @@
condition: (proc.name=rabbitmq-server and fd.directory=/etc/rabbitmq)
- macro: rook_writing_conf
- condition: (proc.name=toolbox.sh and container.image startswith rook/toolbox
+ condition: (proc.name=toolbox.sh and container.image.repository=rook/toolbox
and fd.directory=/etc/ceph)
- macro: httpd_writing_conf_logs
@@ -839,7 +1087,17 @@
condition: (proc.aname[2] in (dpkg-reconfigur, dpkg-preconfigu))
- macro: ufw_writing_conf
- condition: proc.name=ufw and fd.directory=/etc/ufw
+ condition: (proc.name=ufw and fd.directory=/etc/ufw)
+
+- macro: calico_writing_conf
+ condition: >
+ (proc.name = calico-node and fd.name startswith /etc/calico)
+
+- macro: prometheus_conf_writing_conf
+ condition: (proc.name=prometheus-conf and fd.name startswith /etc/prometheus/config_out)
+
+- macro: openshift_writing_conf
+ condition: (proc.name=oc and fd.name startswith /etc/origin/node)
# Add conditions to this macro (probably in a separate file,
# overwriting this macro) to allow for specific combinations of
@@ -870,7 +1128,7 @@
gen_resolvconf., update-ca-certi, certbot, runsv,
qualys-cloud-ag, locales.postins, nomachine_binaries,
adclient, certutil, crlutil, pam-auth-update, parallels_insta,
- openshift-launc, update-rc.d)
+ openshift-launc, update-rc.d, puppet)
and not proc.pname in (sysdigcloud_binaries, mail_config_binaries, hddtemp.postins, sshkit_script_binaries, locales.postins, deb_binaries, dhcp_binaries)
and not fd.name pmatch (safe_etc_dirs)
and not fd.name in (/etc/container_environment.sh, /etc/container_environment.json, /etc/motd, /etc/motd.svc)
@@ -943,13 +1201,18 @@
and not iscsi_writing_conf
and not istio_writing_conf
and not ufw_writing_conf
+ and not calico_writing_conf
+ and not prometheus_conf_writing_conf
+ and not openshift_writing_conf
+ and not rancher_writing_conf
+ and not jboss_in_container_writing_passwd
- rule: Write below etc
desc: an attempt to write to any file below /etc
condition: write_etc_common
output: "File below /etc opened for writing (user=%user.name command=%proc.cmdline parent=%proc.pname pcmdline=%proc.pcmdline file=%fd.name program=%proc.name gparent=%proc.aname[2] ggparent=%proc.aname[3] gggparent=%proc.aname[4])"
priority: ERROR
- tags: [filesystem]
+ tags: [filesystem, mitre_persistence]
- list: known_root_files
items: [/root/.monit.state, /root/.auth_tokens, /root/.bash_history, /root/.ash_history, /root/.aws/credentials,
@@ -996,6 +1259,16 @@
or fd.name startswith /root/jvm
or fd.name startswith /root/.node-gyp)
+# Add conditions to this macro (probably in a separate file,
+# overwriting this macro) to allow for specific combinations of
+# programs writing below specific directories below
+# / or /root.
+#
+# In this file, it just takes one of the conditions in the base macro
+# and repeats it.
+- macro: user_known_write_root_conditions
+ condition: fd.name=/root/.bash_history
+
- rule: Write below root
desc: an attempt to write to any file directly below / or /root
condition: >
@@ -1011,10 +1284,12 @@
and not maven_writing_groovy
and not chef_writing_conf
and not kubectl_writing_state
+ and not cassandra_writing_state
and not known_root_conditions
+ and not user_known_write_root_conditions
output: "File below / or /root opened for writing (user=%user.name command=%proc.cmdline parent=%proc.pname file=%fd.name program=%proc.name)"
priority: ERROR
- tags: [filesystem]
+ tags: [filesystem, mitre_persistence]
- macro: cmp_cp_by_passwd
condition: proc.name in (cmp, cp) and proc.pname in (passwd, run-parts)
@@ -1029,7 +1304,7 @@
Sensitive file opened for reading by trusted program after startup (user=%user.name
command=%proc.cmdline parent=%proc.pname file=%fd.name parent=%proc.pname gparent=%proc.aname[2])
priority: WARNING
- tags: [filesystem]
+ tags: [filesystem, mitre_credential_access]
- list: read_sensitive_file_binaries
items: [
@@ -1078,15 +1353,20 @@
Sensitive file opened for reading by non-trusted program (user=%user.name program=%proc.name
command=%proc.cmdline file=%fd.name parent=%proc.pname gparent=%proc.aname[2] ggparent=%proc.aname[3] gggparent=%proc.aname[4])
priority: WARNING
- tags: [filesystem]
+ tags: [filesystem, mitre_credential_access, mitre_discovery]
# Only let rpm-related programs write to the rpm database
- rule: Write below rpm database
desc: an attempt to write to the rpm database by any non-rpm related program
- condition: fd.name startswith /var/lib/rpm and open_write and not rpm_procs and not ansible_running_python and not python_running_chef
+ condition: >
+ fd.name startswith /var/lib/rpm and open_write
+ and not rpm_procs
+ and not ansible_running_python
+ and not python_running_chef
+ and not exe_running_docker_save
output: "Rpm database opened for writing by a non-rpm program (command=%proc.cmdline file=%fd.name parent=%proc.pname pcmdline=%proc.pcmdline)"
priority: ERROR
- tags: [filesystem, software_mgmt]
+ tags: [filesystem, software_mgmt, mitre_persistence]
- macro: postgres_running_wal_e
condition: (proc.pname=postgres and proc.cmdline startswith "sh -c envdir /etc/wal-e.d/env /usr/local/bin/wal-e")
@@ -1121,7 +1401,7 @@
Database-related program spawned process other than itself (user=%user.name
program=%proc.cmdline parent=%proc.pname)
priority: NOTICE
- tags: [process, database]
+ tags: [process, database, mitre_execution]
- rule: Modify binary dirs
desc: an attempt to modify any file below a set of binary directories.
@@ -1130,7 +1410,7 @@
File below known binary directory renamed/removed (user=%user.name command=%proc.cmdline
pcmdline=%proc.pcmdline operation=%evt.type file=%fd.name %evt.args)
priority: ERROR
- tags: [filesystem]
+ tags: [filesystem, mitre_persistence]
- rule: Mkdir binary dirs
desc: an attempt to create a directory below a set of binary directories.
@@ -1139,7 +1419,7 @@
Directory below known binary directory created (user=%user.name
command=%proc.cmdline directory=%evt.arg.path)
priority: ERROR
- tags: [filesystem]
+ tags: [filesystem, mitre_persistence]
# This list allows for easy additions to the set of commands allowed
# to change thread namespace without having to copy and override the
@@ -1153,13 +1433,15 @@
as a part of creating a container) by calling setns.
condition: >
evt.type = setns
- and not proc.name in (docker_binaries, k8s_binaries, lxd_binaries, sysdigcloud_binaries, sysdig, nsenter)
+ and not proc.name in (docker_binaries, k8s_binaries, lxd_binaries, sysdigcloud_binaries,
+ sysdig, nsenter, calico, oci-umount)
and not proc.name in (user_known_change_thread_namespace_binaries)
and not proc.name startswith "runc:"
and not proc.pname in (sysdigcloud_binaries)
and not python_running_sdchecks
and not java_running_sdjagent
and not kubelet_running_loopback
+ and not rancher_agent
output: >
Namespace change (setns) by unexpected program (user=%user.name command=%proc.cmdline
parent=%proc.pname %container.info)
@@ -1310,54 +1592,53 @@
cmdline=%proc.cmdline pcmdline=%proc.pcmdline gparent=%proc.aname[2] ggparent=%proc.aname[3]
aname[4]=%proc.aname[4] aname[5]=%proc.aname[5] aname[6]=%proc.aname[6] aname[7]=%proc.aname[7])
priority: DEBUG
- tags: [shell]
+ tags: [shell, mitre_execution]
- macro: allowed_openshift_registry_root
condition: >
- (container.image startswith openshift3/ or
- container.image startswith registry.access.redhat.com/openshift3/)
+ (container.image.repository startswith openshift3/ or
+ container.image.repository startswith registry.access.redhat.com/openshift3/)
# Source: https://docs.openshift.com/enterprise/3.2/install_config/install/disconnected_install.html
- macro: openshift_image
condition: >
(allowed_openshift_registry_root and
- (container.image contains logging-deployment or
- container.image contains logging-elasticsearch or
- container.image contains logging-kibana or
- container.image contains logging-fluentd or
- container.image contains logging-auth-proxy or
- container.image contains metrics-deployer or
- container.image contains metrics-hawkular-metrics or
- container.image contains metrics-cassandra or
- container.image contains metrics-heapster or
- container.image contains ose-haproxy-router or
- container.image contains ose-deployer or
- container.image contains ose-sti-builder or
- container.image contains ose-docker-builder or
- container.image contains ose-pod or
- container.image contains ose-docker-registry or
- container.image contains image-inspector))
+ (container.image.repository contains logging-deployment or
+ container.image.repository contains logging-elasticsearch or
+ container.image.repository contains logging-kibana or
+ container.image.repository contains logging-fluentd or
+ container.image.repository contains logging-auth-proxy or
+ container.image.repository contains metrics-deployer or
+ container.image.repository contains metrics-hawkular-metrics or
+ container.image.repository contains metrics-cassandra or
+ container.image.repository contains metrics-heapster or
+ container.image.repository contains ose-haproxy-router or
+ container.image.repository contains ose-deployer or
+ container.image.repository contains ose-sti-builder or
+ container.image.repository contains ose-docker-builder or
+ container.image.repository contains ose-pod or
+ container.image.repository contains ose-docker-registry or
+ container.image.repository contains image-inspector))
+
+- list: trusted_images
+ items: [
+ sysdig/agent, sysdig/falco, sysdig/sysdig, gcr.io/google_containers/hyperkube,
+ quay.io/coreos/flannel, gcr.io/google_containers/kube-proxy, calico/node,
+ rook/toolbox, cloudnativelabs/kube-router, consul, mesosphere/mesos-slave,
+ datadog/docker-dd-agent, datadog/agent, docker/ucp-agent, gliderlabs/logspout
+ ]
- macro: trusted_containers
- condition: (container.image startswith sysdig/agent or
- (container.image startswith sysdig/falco and
- not container.image startswith sysdig/falco-event-generator) or
- container.image startswith quay.io/sysdig or
- container.image startswith sysdig/sysdig or
- container.image startswith gcr.io/google_containers/hyperkube or
- container.image startswith quay.io/coreos/flannel or
- container.image startswith gcr.io/google_containers/kube-proxy or
- container.image startswith calico/node or
- container.image startswith rook/toolbox or
- openshift_image or
- container.image startswith cloudnativelabs/kube-router or
- container.image startswith "consul:" or
- container.image startswith mesosphere/mesos-slave or
- container.image startswith istio/proxy_ or
- container.image startswith datadog/docker-dd-agent or
- container.image startswith datadog/agent or
- container.image startswith docker/ucp-agent or
- container.image startswith gliderlabs/logspout)
+ condition: (openshift_image or
+ container.image.repository in (trusted_images) or
+ container.image.repository startswith istio/proxy_ or
+ container.image.repository startswith quay.io/sysdig)
+
+- list: rancher_images
+ items: [
+ rancher/network-manager, rancher/dns, rancher/agent,
+ rancher/lb-service-haproxy, rancher/metadata, rancher/healthcheck
+ ]
# Add conditions to this macro (probably in a separate file,
# overwriting this macro) to specify additional containers that are
@@ -1366,7 +1647,7 @@
# In this file, it just takes one of the images in trusted_containers
# and repeats it.
- macro: user_trusted_containers
- condition: (container.image startswith sysdig/agent)
+ condition: (container.image.repository=sysdig/agent)
# Add conditions to this macro (probably in a separate file,
# overwriting this macro) to specify additional containers that are
@@ -1375,18 +1656,18 @@
# In this file, it just takes one of the images in trusted_containers
# and repeats it.
- macro: user_sensitive_mount_containers
- condition: (container.image startswith sysdig/agent)
+ condition: (container.image.repository=sysdig/agent)
- rule: Launch Privileged Container
desc: Detect the initial process started in a privileged container. Exceptions are made for known trusted images.
condition: >
- evt.type=execve and proc.vpid=1 and container
+ container_started and container
and container.privileged=true
and not trusted_containers
and not user_trusted_containers
- output: Privileged container started (user=%user.name command=%proc.cmdline %container.info image=%container.image)
+ output: Privileged container started (user=%user.name command=%proc.cmdline %container.info image=%container.image.repository:%container.image.tag)
priority: INFO
- tags: [container, cis]
+ tags: [container, cis, mitre_privilege_escalation, mitre_lateral_movement]
# For now, only considering a full mount of /etc as
# sensitive. Ideally, this would also consider all subdirectories
@@ -1395,7 +1676,8 @@
- macro: sensitive_mount
condition: (container.mount.dest[/proc*] != "N/A" or
container.mount.dest[/var/run/docker.sock] != "N/A" or
- container.mount.dest[/var/lib/kubelet*] != "N/A" or
+ container.mount.dest[/var/lib/kubelet] != "N/A" or
+ container.mount.dest[/var/lib/kubelet/pki] != "N/A" or
container.mount.dest[/] != "N/A" or
container.mount.dest[/etc] != "N/A" or
container.mount.dest[/root*] != "N/A")
@@ -1419,34 +1701,33 @@
Detect the initial process started by a container that has a mount from a sensitive host directory
(i.e. /proc). Exceptions are made for known trusted images.
condition: >
- evt.type=execve and proc.vpid=1 and container
+ container_started and container
and sensitive_mount
and not trusted_containers
and not user_sensitive_mount_containers
- output: Container with sensitive mount started (user=%user.name command=%proc.cmdline %container.info image=%container.image mounts=%container.mounts)
+ output: Container with sensitive mount started (user=%user.name command=%proc.cmdline %container.info image=%container.image.repository:%container.image.tag mounts=%container.mounts)
priority: INFO
- tags: [container, cis]
+ tags: [container, cis, mitre_lateral_movement]
# In a local/user rules file, you could override this macro to
# explicitly enumerate the container images that you want to run in
# your environment. In this main falco rules file, there isn't any way
# to know all the containers that can run, so any container is
-# alllowed, by using a filter that is guaranteed to evaluate to true
-# (the same proc.vpid=1 that's in the Launch Disallowed Container
-# rule). In the overridden macro, the condition would look something
-# like (container.image startswith vendor/container-1 or
-# container.image startswith vendor/container-2 or ...)
+# allowed, by using a filter that is guaranteed to evaluate to true.
+# In the overridden macro, the condition would look something like
+# (container.image.repository = vendor/container-1 or
+# container.image.repository = vendor/container-2 or ...)
- macro: allowed_containers
- condition: (proc.vpid=1)
+ condition: (container.id exists)
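The comment block above sketches what an overridden `allowed_containers` would look like; spelled out as a full macro (the image names are placeholders):

```yaml
# Hypothetical local override: only these images may start containers.
# vendor/container-1 and vendor/container-2 are placeholders.
- macro: allowed_containers
  condition: (container.image.repository = vendor/container-1 or
              container.image.repository = vendor/container-2)
```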
- rule: Launch Disallowed Container
desc: >
Detect the initial process started by a container that is not in a list of allowed containers.
- condition: evt.type=execve and proc.vpid=1 and container and not allowed_containers
- output: Container started and not in allowed list (user=%user.name command=%proc.cmdline %container.info image=%container.image)
+ condition: container_started and container and not allowed_containers
+ output: Container started and not in allowed list (user=%user.name command=%proc.cmdline %container.info image=%container.image.repository:%container.image.tag)
priority: WARNING
- tags: [container]
+ tags: [container, mitre_lateral_movement]
# Anything run interactively by root
# - condition: evt.type != switch and user.name = root and proc.name != sshd and interactive
@@ -1458,7 +1739,7 @@
condition: spawned_process and system_users and interactive
output: "System user ran an interactive command (user=%user.name command=%proc.cmdline)"
priority: INFO
- tags: [users]
+ tags: [users, mitre_remote_access_tools]
- rule: Terminal shell in container
desc: A shell was used as the entrypoint/exec point into a container with an attached terminal.
@@ -1470,7 +1751,7 @@
A shell was spawned in a container with an attached terminal (user=%user.name %container.info
shell=%proc.name parent=%proc.pname cmdline=%proc.cmdline terminal=%proc.tty)
priority: NOTICE
- tags: [container, shell]
+ tags: [container, shell, mitre_execution]
# For some container types (mesos), there isn't a container image to
# work with, and the container name is autogenerated, so there isn't
@@ -1536,7 +1817,7 @@
- rule: System procs network activity
desc: any network activity performed by system binaries that are not expected to send or receive any network traffic
condition: >
- (fd.sockfamily = ip and system_procs)
+ (fd.sockfamily = ip and (system_procs or proc.name in (shell_binaries)))
and (inbound_outbound)
and not proc.name in (systemd, hostid, id)
and not login_doing_dns_lookup
@@ -1544,7 +1825,67 @@
Known system binary sent/received network traffic
(user=%user.name command=%proc.cmdline connection=%fd.name)
priority: NOTICE
- tags: [network]
+ tags: [network, mitre_exfiltration]
+
+# When filled in, this should look something like:
+# (proc.env contains "HTTP_PROXY=http://my.http.proxy.com ")
+# The trailing space is intentional so avoid matching on prefixes of
+# the actual proxy.
+- macro: allowed_ssh_proxy_env
+ condition: (always_true)
+
+- list: http_proxy_binaries
+ items: [curl, wget]
+
+- macro: http_proxy_procs
+ condition: (proc.name in (http_proxy_binaries))
+
+- rule: Program run with disallowed http proxy env
+ desc: An attempt to run a program with a disallowed HTTP_PROXY environment variable
+ condition: >
+ spawned_process and
+ http_proxy_procs and
+ not allowed_ssh_proxy_env and
+ proc.env icontains HTTP_PROXY
+ output: >
+ Program run with disallowed HTTP_PROXY environment variable
+ (user=%user.name command=%proc.cmdline env=%proc.env parent=%proc.pname)
+ priority: NOTICE
+ tags: [host, users]
+
+# In some environments, any attempt by an interpreted program (perl,
+# python, ruby, etc) to listen for incoming connections or perform
+# outgoing connections might be suspicious. These rules are not
+# enabled by default, but you can modify the following macros to
+# enable them.
+
+- macro: consider_interpreted_inbound
+ condition: (never_true)
+
+- macro: consider_interpreted_outbound
+ condition: (never_true)
+
+- rule: Interpreted procs inbound network activity
+ desc: Any inbound network activity performed by any interpreted program (perl, python, ruby, etc.)
+ condition: >
+ (inbound and consider_interpreted_inbound
+ and interpreted_procs)
+ output: >
+ Interpreted program received/listened for network traffic
+ (user=%user.name command=%proc.cmdline connection=%fd.name)
+ priority: NOTICE
+ tags: [network, mitre_exfiltration]
+
+- rule: Interpreted procs outbound network activity
+ desc: Any outbound network activity performed by any interpreted program (perl, python, ruby, etc.)
+ condition: >
+ (outbound and consider_interpreted_outbound
+ and interpreted_procs)
+ output: >
+ Interpreted program performed outgoing network connection
+ (user=%user.name command=%proc.cmdline connection=%fd.name)
+ priority: NOTICE
+ tags: [network, mitre_exfiltration]
- list: openvpn_udp_ports
items: [1194, 1197, 1198, 8080, 9201]
@@ -1585,7 +1926,7 @@
Unexpected UDP Traffic Seen
(user=%user.name command=%proc.cmdline connection=%fd.name proto=%fd.l4proto evt=%evt.type %evt.args)
priority: NOTICE
- tags: [network]
+ tags: [network, mitre_exfiltration]
# With the current restriction on system calls handled by falco
# (e.g. excluding read/write/sendto/recvfrom/etc, this rule won't
@@ -1616,6 +1957,15 @@
- macro: known_user_in_container
condition: (container and user.name != "N/A")
+# Add conditions to this macro (probably in a separate file,
+# overwriting this macro) to allow for specific combinations of
+# programs changing users by calling setuid.
+#
+# In this file, it just takes one of the conditions in the base macro
+# and repeats it.
+- macro: user_known_non_sudo_setuid_conditions
+ condition: user.name=root
+
# sshd, mail programs attempt to setuid to root even when running as non-root. Excluding here to avoid meaningless FPs
- rule: Non sudo setuid
desc: >
@@ -1624,16 +1974,18 @@
condition: >
evt.type=setuid and evt.dir=>
and (known_user_in_container or not container)
- and not user.name=root and not somebody_becoming_themself
+ and not user.name=root
+ and not somebody_becoming_themself
and not proc.name in (known_setuid_binaries, userexec_binaries, mail_binaries, docker_binaries,
nomachine_binaries)
and not java_running_sdjagent
and not nrpe_becoming_nagios
+ and not user_known_non_sudo_setuid_conditions
output: >
Unexpected setuid call by non-sudo, non-root program (user=%user.name cur_uid=%user.uid parent=%proc.pname
command=%proc.cmdline uid=%evt.arg.uid)
priority: NOTICE
- tags: [users]
+ tags: [users, mitre_privilege_escalation]
- rule: User mgmt binaries
desc: >
@@ -1657,7 +2009,7 @@
User management binary command run outside of container
(user=%user.name command=%proc.cmdline parent=%proc.pname gparent=%proc.aname[2] ggparent=%proc.aname[3] gggparent=%proc.aname[4])
priority: NOTICE
- tags: [host, users]
+ tags: [host, users, mitre_persistence]
- list: allowed_dev_files
items: [
@@ -1677,7 +2029,7 @@
and not fd.name startswith /dev/tty
output: "File created below /dev by untrusted program (user=%user.name command=%proc.cmdline file=%fd.name)"
priority: ERROR
- tags: [filesystem]
+ tags: [filesystem, mitre_persistence]
# In a local/user rules file, you could override this macro to
@@ -1686,8 +2038,8 @@
# any way to know all the containers that should have access, so any
# container is alllowed, by repeating the "container" macro. In the
# overridden macro, the condition would look something like
-# (container.image startswith vendor/container-1 or container.image
-# startswith vendor/container-2 or ...)
+# (container.image.repository = vendor/container-1 or
+# container.image.repository = vendor/container-2 or ...)
- macro: ec2_metadata_containers
condition: container
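Following the comment above, a local override that enumerates the containers allowed to reach the metadata service might look like this (the image name is a placeholder, not a real image):

```yaml
# Hypothetical local override: restrict EC2 metadata access
# to a single trusted image.
- macro: ec2_metadata_containers
  condition: (container.image.repository = vendor/aws-tooling)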
@@ -1697,9 +2049,9 @@
- rule: Contact EC2 Instance Metadata Service From Container
desc: Detect attempts to contact the EC2 Instance Metadata Service from a container
condition: outbound and fd.sip="169.254.169.254" and container and not ec2_metadata_containers
- output: Outbound connection to EC2 instance metadata service (command=%proc.cmdline connection=%fd.name %container.info image=%container.image)
+ output: Outbound connection to EC2 instance metadata service (command=%proc.cmdline connection=%fd.name %container.info image=%container.image.repository:%container.image.tag)
priority: NOTICE
- tags: [network, aws, container]
+ tags: [network, aws, container, mitre_discovery]
# In a local/user rules file, you should override this macro with the
# IP address of your k8s api server. The IP 1.2.3.4 is a placeholder
@@ -1713,18 +2065,16 @@
# within a container.
- macro: k8s_containers
condition: >
- (container.image startswith gcr.io/google_containers/hyperkube-amd64 or
- container.image startswith gcr.io/google_containers/kube2sky or
- container.image startswith sysdig/agent or
- container.image startswith sysdig/falco or
- container.image startswith sysdig/sysdig)
+ (container.image.repository in (gcr.io/google_containers/hyperkube-amd64,
+ gcr.io/google_containers/kube2sky, sysdig/agent, sysdig/falco,
+ sysdig/sysdig))
- rule: Contact K8S API Server From Container
desc: Detect attempts to contact the K8S API Server from a container
condition: outbound and k8s_api_server and container and not k8s_containers
- output: Unexpected connection to K8s API Server from container (command=%proc.cmdline %container.info image=%container.image connection=%fd.name)
+ output: Unexpected connection to K8s API Server from container (command=%proc.cmdline %container.info image=%container.image.repository:%container.image.tag connection=%fd.name)
priority: NOTICE
- tags: [network, k8s, container]
+ tags: [network, k8s, container, mitre_discovery]
# In a local/user rules file, list the container images that are
# allowed to contact NodePort services from within a container. This
@@ -1740,7 +2090,184 @@
condition: (inbound_outbound) and fd.sport >= 30000 and fd.sport <= 32767 and container and not nodeport_containers
output: Unexpected K8s NodePort Connection (command=%proc.cmdline connection=%fd.name)
priority: NOTICE
- tags: [network, k8s, container]
+ tags: [network, k8s, container, mitre_port_knocking]
+
+- list: network_tool_binaries
+ items: [nc, ncat, nmap, dig, tcpdump, tshark, ngrep]
+
+- macro: network_tool_procs
+ condition: proc.name in (network_tool_binaries)
+
+# Container is supposed to be immutable. Package management should be done in building the image.
+- rule: Launch Package Management Process in Container
+ desc: Package management process ran inside container
+ condition: >
+ spawned_process and container and user.name != "_apt" and package_mgmt_procs and not package_mgmt_ancestor_procs
+ output: >
+ Package management process launched in container (user=%user.name
+ command=%proc.cmdline container_id=%container.id container_name=%container.name image=%container.image.repository:%container.image.tag)
+ priority: ERROR
+ tags: [process, mitre_persistence]
+
+- rule: Netcat Remote Code Execution in Container
+ desc: Detect netcat running inside a container in a mode that allows remote code execution
+ condition: >
+ spawned_process and container and
+ ((proc.name = "nc" and (proc.args contains "-e" or proc.args contains "-c")) or
+ (proc.name = "ncat" and (proc.args contains "--sh-exec" or proc.args contains "--exec"))
+ )
+ output: >
+ Netcat runs inside container that allows remote code execution (user=%user.name
+ command=%proc.cmdline container_id=%container.id container_name=%container.name image=%container.image.repository:%container.image.tag)
+ priority: WARNING
+ tags: [network, process, mitre_execution]
+
+- rule: Launch Suspicious Network Tool in Container
+ desc: Detect network tools launched inside container
+ condition: >
+ spawned_process and container and network_tool_procs
+ output: >
+ Network tool launched in container (user=%user.name command=%proc.cmdline parent_process=%proc.pname
+ container_id=%container.id container_name=%container.name image=%container.image.repository:%container.image.tag)
+ priority: NOTICE
+ tags: [network, process, mitre_discovery, mitre_exfiltration]
+
+# This rule is not enabled by default, as there are legitimate use
+# cases for these tools on hosts. If you want to enable it, modify the
+# following macro.
+- macro: consider_network_tools_on_host
+ condition: (never_true)
+
+- rule: Launch Suspicious Network Tool on Host
+ desc: Detect network tools launched on the host
+ condition: >
+ spawned_process and
+ not container and
+ consider_network_tools_on_host and
+ network_tool_procs
+ output: >
+ Network tool launched on host (user=%user.name command=%proc.cmdline parent_process=%proc.pname)
+ priority: NOTICE
+ tags: [network, process, mitre_discovery, mitre_exfiltration]
+
+- list: grep_binaries
+ items: [grep, egrep, fgrep]
+
+- macro: grep_commands
+ condition: (proc.name in (grep_binaries))
+
+# a less restrictive search for things that might be passwords/ssh/user etc.
+- macro: grep_more
+ condition: (never_true)
+
+- macro: private_key_or_password
+ condition: >
+ (proc.args icontains "BEGIN PRIVATE" or
+ proc.args icontains "BEGIN RSA PRIVATE" or
+ proc.args icontains "BEGIN DSA PRIVATE" or
+ proc.args icontains "BEGIN EC PRIVATE" or
+ (grep_more and
+ (proc.args icontains " pass " or
+ proc.args icontains " ssh " or
+ proc.args icontains " user "))
+ )
+
+- rule: Search Private Keys or Passwords
+ desc: >
+ Detect grepping for private keys or passwords.
+ condition: >
+ (spawned_process and
+ ((grep_commands and private_key_or_password) or
+ (proc.name = "find" and (proc.args contains "id_rsa" or proc.args contains "id_dsa")))
+ )
+ output: >
+ Grep private keys or passwords activities found
+ (user=%user.name command=%proc.cmdline container_id=%container.id container_name=%container.name
+ image=%container.image.repository:%container.image.tag)
+ priority: WARNING
+ tags: [process, mitre_credential_access]
+
+- list: log_directories
+ items: [/var/log, /dev/log]
+
+- list: log_files
+ items: [syslog, auth.log, secure, kern.log, cron, user.log, dpkg.log, last.log, yum.log, access_log, mysql.log, mysqld.log]
+
+- macro: access_log_files
+ condition: (fd.directory in (log_directories) or fd.filename in (log_files))
+
+- rule: Clear Log Activities
+ desc: Detect clearing of critical log files
+ condition: >
+ open_write and access_log_files and evt.arg.flags contains "O_TRUNC"
+ output: >
+ Log files were tampered (user=%user.name command=%proc.cmdline file=%fd.name)
+ priority: WARNING
+ tags: [file, mitre_defense_evasion]
+
+- list: data_remove_commands
+ items: [shred, mkfs, mke2fs]
+
+- macro: clear_data_procs
+ condition: (proc.name in (data_remove_commands))
+
+- rule: Remove Bulk Data from Disk
+ desc: Detect process running to clear bulk data from disk
+ condition: spawned_process and clear_data_procs
+ output: >
+ Bulk data has been removed from disk (user=%user.name command=%proc.cmdline file=%fd.name)
+ priority: WARNING
+ tags: [process, mitre_persistence]
+
+- rule: Delete Bash History
+ desc: Detect bash history deletion
+ condition: >
+ ((spawned_process and proc.name in (shred, rm, mv) and proc.args contains "bash_history") or
+ (open_write and fd.name contains "bash_history" and evt.arg.flags contains "O_TRUNC"))
+ output: >
+ Bash history has been deleted (user=%user.name command=%proc.cmdline file=%fd.name %container.info)
+ priority: WARNING
+ tags: [process, mitre_defense_evasion]
+
+- macro: consider_all_chmods
+ condition: (never_true)
+
+- rule: Set Setuid or Setgid bit
+ desc: >
+ When the setuid or setgid bits are set for an application,
+ this means that the application will run with the privileges of the owning user or group respectively.
+ Detect setuid or setgid bits set via chmod
+ condition: consider_all_chmods and spawned_process and proc.name = "chmod" and (proc.args contains "+s" or proc.args contains "4777")
+ output: >
+ Setuid or setgid bit is set via chmod (user=%user.name command=%proc.cmdline
+ container_id=%container.id container_name=%container.name image=%container.image.repository:%container.image.tag)
+ priority: NOTICE
+ tags: [process, mitre_persistence]
+
+- list: exclude_hidden_directories
+ items: [/root/.cassandra]
+
+# To use this rule, you should modify consider_hidden_file_creation.
+- macro: consider_hidden_file_creation
+ condition: (never_true)
+
+- rule: Create Hidden Files or Directories
+ desc: Detect hidden files or directories created
+ condition: >
+ ((mkdir and consider_hidden_file_creation and evt.arg.path contains "/.") or
+ (open_write and consider_hidden_file_creation and evt.arg.flags contains "O_CREAT" and
+ fd.name contains "/." and not fd.name pmatch (exclude_hidden_directories)))
+ output: >
+ Hidden file or directory created (user=%user.name command=%proc.cmdline
+ file=%fd.name container_id=%container.id container_name=%container.name image=%container.image.repository:%container.image.tag)
+ priority: NOTICE
+ tags: [file, mitre_persistence]
# Application rules have moved to application_rules.yaml. Please look
# there if you want to enable them by adding to
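Several of the rules above are gated behind macros defined as `(never_true)`. As a sketch (assuming Falco's standard local-rules mechanism, where a later definition overrides an earlier one), a local rules file could enable the opt-in detections like this — the file name is illustrative, and the chart mounts such files under `/etc/falco/rules.d`:

```yaml
# Hypothetical falco_rules.local.yaml; macro names match the rules above.
# Redefining a macro in a later-loaded file overrides the default definition.
- macro: consider_network_tools_on_host
  condition: (always_true)

- macro: grep_more
  condition: (always_true)

- macro: consider_all_chmods
  condition: (always_true)

- macro: consider_hidden_file_creation
  condition: (always_true)
```

With this chart, the overrides can be supplied through the rules ConfigMap that is mounted into `/etc/falco/rules.d`.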
diff --git a/stable/falco/templates/_helpers.tpl b/stable/falco/templates/_helpers.tpl
index 71d84f9ff0e8..c6e4082536b4 100644
--- a/stable/falco/templates/_helpers.tpl
+++ b/stable/falco/templates/_helpers.tpl
@@ -41,3 +41,26 @@ Create the name of the service account to use
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
{{- end -}}
+
+{{/*
+Return the proper Falco image name
+*/}}
+{{- define "falco.image" -}}
+{{- $registryName := .Values.image.registry -}}
+{{- $repositoryName := .Values.image.repository -}}
+{{- $tag := .Values.image.tag | toString -}}
+{{/*
+Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
+but Helm 2.9 and 2.10 doesn't support it, so we need to implement this if-else logic.
+Also, we can't use a single if because lazy evaluation is not an option
+*/}}
+{{- if .Values.global }}
+ {{- if .Values.global.imageRegistry }}
+ {{- printf "%s/%s:%s" .Values.global.imageRegistry $repositoryName $tag -}}
+ {{- else -}}
+ {{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+ {{- end -}}
+{{- else -}}
+ {{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+{{- end -}}
+{{- end -}}
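The `falco.image` helper above gives `global.imageRegistry` precedence over the chart-local `image.registry`. A hedged sketch of values exercising both branches (the mirror hostname is a placeholder):

```yaml
# Without a global override, the helper renders:
#   docker.io/falcosecurity/falco:0.15.0
image:
  registry: docker.io
  repository: falcosecurity/falco
  tag: 0.15.0

# When set (typically by a parent chart), global.imageRegistry wins:
#   my-mirror.example.com/falcosecurity/falco:0.15.0
global:
  imageRegistry: my-mirror.example.com
```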
diff --git a/stable/falco/templates/configmap.yaml b/stable/falco/templates/configmap.yaml
index 60732f44eb08..9eb2119ee1a8 100644
--- a/stable/falco/templates/configmap.yaml
+++ b/stable/falco/templates/configmap.yaml
@@ -27,7 +27,7 @@ data:
{{- end }}
# Whether to output events in json or text
- {{- if (or .Values.integrations.gcscc.enabled .Values.integrations.natsOutput.enabled .Values.integrations.snsOutput.enabled) }}
+ {{- if (or .Values.integrations.gcscc.enabled .Values.integrations.natsOutput.enabled .Values.integrations.snsOutput.enabled .Values.integrations.pubsubOutput.enabled) }}
json_output: true
{{- else }}
json_output: {{ .Values.falco.jsonOutput }}
@@ -37,7 +37,7 @@ data:
# itself (e.g. "File below a known binary directory opened for writing
# (user=root ....") in the json output.
- {{- if (or .Values.integrations.natsOutput.enabled .Values.integrations.snsOutput.enabled) }}
+ {{- if (or .Values.integrations.natsOutput.enabled .Values.integrations.snsOutput.enabled .Values.integrations.pubsubOutput.enabled) }}
json_include_output_property: true
{{- else }}
json_include_output_property: {{ .Values.falco.jsonIncludeOutputProperty }}
@@ -93,7 +93,7 @@ data:
# Also, the file will be closed and reopened if falco is signaled with
# SIGUSR1.
- {{- if (or .Values.integrations.natsOutput.enabled .Values.integrations.snsOutput.enabled) }}
+ {{- if (or .Values.integrations.natsOutput.enabled .Values.integrations.snsOutput.enabled .Values.integrations.pubsubOutput.enabled) }}
file_output:
enabled: true
keep_alive: true
diff --git a/stable/falco/templates/daemonset.yaml b/stable/falco/templates/daemonset.yaml
index 07dcca8129d1..5f93024a5c1e 100644
--- a/stable/falco/templates/daemonset.yaml
+++ b/stable/falco/templates/daemonset.yaml
@@ -24,7 +24,7 @@ spec:
{{ toYaml .Values.tolerations | indent 8 }}
containers:
- name: {{ .Chart.Name }}
- image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
+ image: {{ template "falco.image" . }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
resources:
{{ toYaml .Values.resources | indent 12 }}
@@ -32,10 +32,14 @@ spec:
privileged: true
args:
- /usr/bin/falco
+ {{- if (and .Values.cri.enabled .Values.cri.socket) }}
+ - --cri
+ - /host/var/run/cri.sock
+ {{- end }}
- -K
- /var/run/secrets/kubernetes.io/serviceaccount/token
- -k
- - https://kubernetes.default
+ - "https://$(KUBERNETES_SERVICE_HOST)"
- -pk
{{- if .Values.extraArgs }}
{{ toYaml .Values.extraArgs | indent 12 }}
@@ -58,8 +62,14 @@ spec:
value: {{ .Values.proxy.noProxy }}
{{- end }}
volumeMounts:
+ {{- if (and .Values.docker.enabled .Values.docker.socket) }}
- mountPath: /host/var/run/docker.sock
name: docker-socket
+ {{- end}}
+ {{- if (and .Values.cri.enabled .Values.cri.socket) }}
+ - mountPath: /host/var/run/cri.sock
+ name: cri-socket
+ {{- end}}
- mountPath: /host/dev
name: dev-fs
- mountPath: /host/proc
@@ -87,7 +97,7 @@ spec:
- mountPath: /etc/falco/rules.d
name: rules-volume
{{- end }}
- {{- if (or .Values.integrations.natsOutput.enabled .Values.integrations.snsOutput.enabled) }}
+ {{- if (or .Values.integrations.natsOutput.enabled .Values.integrations.snsOutput.enabled .Values.integrations.pubsubOutput.enabled) }}
- mountPath: /var/run/falco/
name: shared-pipe
readOnly: false
@@ -126,7 +136,27 @@ spec:
name: {{ template "falco.fullname" . }}
key: aws_default_region
{{- end }}
- {{- if (or .Values.integrations.natsOutput.enabled .Values.integrations.snsOutput.enabled) }}
+ {{- if .Values.integrations.pubsubOutput.enabled }}
+ - name: {{ .Chart.Name }}-pubsub
+ image: sysdiglabs/falco-pubsub:latest
+ imagePullPolicy: Always
+ args: [ "/bin/falco-pubsub", "-t", {{ .Values.integrations.pubsubOutput.topic | quote }}]
+ volumeMounts:
+ - mountPath: /var/run/falco/
+ name: shared-pipe
+ env:
+ - name: GOOGLE_PROJECT_ID
+ valueFrom:
+ secretKeyRef:
+ name: {{ template "falco.fullname" . }}
+ key: gcp-project-id
+ - name: GOOGLE_CREDENTIALS_DATA
+ valueFrom:
+ secretKeyRef:
+ name: {{ template "falco.fullname" . }}
+ key: gcp-credentials-data
+ {{- end }}
+ {{- if (or .Values.integrations.natsOutput.enabled .Values.integrations.snsOutput.enabled .Values.integrations.pubsubOutput.enabled) }}
initContainers:
- name: init-pipe
image: busybox
@@ -140,9 +170,16 @@ spec:
- name: dshm
emptyDir:
medium: Memory
+ {{- if (and .Values.docker.enabled .Values.docker.socket) }}
- name: docker-socket
hostPath:
- path: /var/run/docker.sock
+ path: {{ .Values.docker.socket }}
+ {{- end}}
+ {{- if (and .Values.cri.enabled .Values.cri.socket) }}
+ - name: cri-socket
+ hostPath:
+ path: {{ .Values.cri.socket }}
+ {{- end}}
- name: dev-fs
hostPath:
path: /dev
@@ -178,10 +215,9 @@ spec:
configMap:
name: {{ template "falco.fullname" . }}-rules
{{- end }}
- {{- if (or .Values.integrations.natsOutput.enabled .Values.integrations.snsOutput.enabled) }}
+ {{- if (or .Values.integrations.natsOutput.enabled .Values.integrations.snsOutput.enabled .Values.integrations.pubsubOutput.enabled) }}
- name: shared-pipe
emptyDir: {}
{{- end }}
-
updateStrategy:
- type: {{ default "OnDelete" .Values.daemonset.updateStrategy | quote }}
+{{ toYaml .Values.daemonset.updateStrategy | indent 4 }}
diff --git a/stable/falco/templates/secret.yaml b/stable/falco/templates/secret.yaml
index 44d0fe4dfd8d..fb9265e1f83b 100644
--- a/stable/falco/templates/secret.yaml
+++ b/stable/falco/templates/secret.yaml
@@ -1,4 +1,5 @@
-{{- if .Values.integrations.snsOutput.enabled }}
+{{- if (or .Values.integrations.snsOutput.enabled .Values.integrations.pubsubOutput.enabled) }}
+---
apiVersion: v1
kind: Secret
metadata:
@@ -10,6 +11,14 @@ metadata:
heritage: "{{ .Release.Service }}"
type: Opaque
data:
+{{- if .Values.integrations.snsOutput.enabled }}
aws_access_key_id: {{ .Values.integrations.snsOutput.aws_access_key_id | b64enc | quote }}
aws_secret_access_key: {{ .Values.integrations.snsOutput.aws_secret_access_key | b64enc | quote }}
{{- end }}
+{{- if .Values.integrations.pubsubOutput.enabled }}
+ gcp-credentials-data: {{ .Values.integrations.pubsubOutput.credentialsData | b64enc | quote }}
+ gcp-project-id: {{ .Values.integrations.pubsubOutput.projectID | b64enc | quote }}
+{{- end }}
+{{- end }}
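The secret template pipes each value through `b64enc` because Kubernetes Secret `data` fields must be base64-encoded. A quick shell equivalent, using a hypothetical project ID:

```shell
# b64enc in the template is equivalent to base64-encoding the raw value;
# Kubernetes decodes it transparently when the secret is mounted or read.
gcp_project_id='my-project'    # hypothetical value
encoded=$(printf '%s' "$gcp_project_id" | base64)
echo "$encoded"                # prints bXktcHJvamVjdA==
```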
diff --git a/stable/falco/values.yaml b/stable/falco/values.yaml
index f259949f186f..e7f13af62a68 100644
--- a/stable/falco/values.yaml
+++ b/stable/falco/values.yaml
@@ -1,21 +1,29 @@
# Default values for falco.
image:
+ registry: docker.io
repository: falcosecurity/falco
- tag: 0.13.0
+ tag: 0.15.0
pullPolicy: IfNotPresent
-resources: {}
- # We usually recommend not to specify default resources and to leave this as a conscious
- # choice for the user. This also increases chances charts run on environments with little
- # resources, such as Minikube. If you do want to specify resources, uncomment the following
- # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
- # limits:
- # cpu: 30m
- # memory: 128Mi
- # requests:
- # cpu: 20m
- # memory: 128Mi
+docker:
+ enabled: true
+ socket: /var/run/docker.sock
+
+cri:
+ enabled: true
+ socket: /run/containerd/containerd.sock
+
+resources:
+ # Resource requirements depend on the actual workload; the values below
+ # are sane defaults. If you have questions or concerns, please contact
+ # Sysdig Support for more information.
+ requests:
+ cpu: 100m
+ memory: 512Mi
+ limits:
+ cpu: 200m
+ memory: 1024Mi
extraArgs: []
@@ -33,12 +41,13 @@ fakeEventGenerator:
enabled: false
replicas: 1
-daemonset: {}
- # Allow the DaemonSet to perform a rolling update on helm update
+daemonset:
+ # Perform rolling updates by default in the DaemonSet agent
# ref: https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/
- # If you do want to specify resources, uncomment the following lines, adjust
- # them as necessary, and remove the curly braces after 'resources:'.
- # updateStrategy: RollingUpdate
+ updateStrategy:
+ # You can also customize maxUnavailable, maxSurge or minReadySeconds if you
+ # need it
+ type: RollingUpdate
# If is behind a proxy you can set the proxy server
proxy:
@@ -199,6 +208,22 @@ integrations:
aws_secret_access_key: ""
aws_default_region: ""
+
+ # If GCloud Pub/Sub integration is enabled, falco will be configured to use this
+ # integration as file_output and sets the following values:
+ # * json_output: true
+ # * json_include_output_property: true
+ # * file_output:
+ # enabled: true
+ # keep_alive: true
+ # filename: /var/run/falco/nats
+ pubsubOutput:
+ enabled: false
+ topic: ""
+ credentialsData: ""
+ projectID: ""
+
# Allow falco to run on Kubernetes 1.6 masters.
tolerations:
- effect: NoSchedule
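Putting the new values together, enabling the Pub/Sub integration might look like the following override file (topic and project ID are placeholders):

```yaml
integrations:
  pubsubOutput:
    enabled: true
    topic: "falco-alerts"          # placeholder topic name
    projectID: "my-gcp-project"    # placeholder project ID
    # Raw service-account JSON; the secret template base64-encodes it.
    credentialsData: ""
```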
diff --git a/stable/filebeat/Chart.yaml b/stable/filebeat/Chart.yaml
index e4356b929059..3b03210ee4c0 100644
--- a/stable/filebeat/Chart.yaml
+++ b/stable/filebeat/Chart.yaml
@@ -2,8 +2,8 @@ apiVersion: v1
description: A Helm chart to collect Kubernetes logs with filebeat
icon: https://www.elastic.co/assets/blt47799dcdcf08438d/logo-elastic-beats-lt.svg
name: filebeat
-version: 1.2.0
-appVersion: 6.6.0
+version: 1.7.0
+appVersion: 6.7.0
home: https://www.elastic.co/products/beats/filebeat
sources:
- https://www.elastic.co/guide/en/beats/filebeat/current/index.html
diff --git a/stable/filebeat/README.md b/stable/filebeat/README.md
index caa61f37c243..bd8188c47fe8 100644
--- a/stable/filebeat/README.md
+++ b/stable/filebeat/README.md
@@ -25,7 +25,7 @@ The following table lists the configurable parameters of the filebeat chart and
| Parameter | Description | Default |
| -------------------------------------------------------- | -------------------------------------------------------------------------------------------------------- | -------------------------------------------------- |
| `image.repository` | Docker image repo | `docker.elastic.co/beats/filebeat-oss` |
-| `image.tag` | Docker image tag | `6.6.0` |
+| `image.tag` | Docker image tag | `6.7.0` |
| `image.pullPolicy` | Docker image pull policy | `IfNotPresent` |
| `image.pullSecrets` | Specify image pull secrets | `nil` |
| `config.filebeat.config.prospectors.path` | Mounted `filebeat-prospectors` configmap | `${path.config}/prospectors.d/*.yml` |
@@ -40,6 +40,8 @@ The following table lists the configurable parameters of the filebeat chart and
| `config.output.file.number_of_files` | | `5` |
| `config.http.enabled` | | `false` |
| `config.http.port` | | `5066` |
+| `overrideConfig` | If overrideConfig is not empty, filebeat chart's default config won't be used at all. | `{}` |
+| `data.hostPath` | Path on the host to mount to `/usr/share/filebeat/data` in the container. | `/var/lib/filebeat` |
| `indexTemplateLoad` | List of Elasticsearch hosts to load index template, when logstash output is used | `[]` |
| `command` | Custom command (Docker Entrypoint) | `[]` |
| `args` | Custom args (Docker Cmd) | `[]` |
@@ -59,6 +61,7 @@ The following table lists the configurable parameters of the filebeat chart and
| `serviceAccount.name` | the name of the ServiceAccount to use | `""` |
| `podSecurityPolicy.enabled` | Should the PodSecurityPolicy be created. Depends on `rbac.create` being set to `true`. | `false` |
| `podSecurityPolicy.annotations` | Annotations to be added to the created PodSecurityPolicy: | `""` |
+| `privileged` | Specifies whether to run as privileged | `false` |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`.
diff --git a/stable/filebeat/templates/daemonset.yaml b/stable/filebeat/templates/daemonset.yaml
index 1c3cebd380a3..8bf11057ff14 100644
--- a/stable/filebeat/templates/daemonset.yaml
+++ b/stable/filebeat/templates/daemonset.yaml
@@ -23,7 +23,7 @@ spec:
app: {{ template "filebeat.name" . }}
release: {{ .Release.Name }}
annotations:
- checksum/secret: {{ toYaml .Values.config | sha256sum }}
+ checksum/secret: {{ toYaml (default .Values.config .Values.overrideConfig) | sha256sum }}
{{- if .Values.annotations }}
{{ toYaml .Values.annotations | indent 8 }}
{{- end }}
@@ -89,6 +89,9 @@ spec:
{{- end }}
securityContext:
runAsUser: 0
+{{- if .Values.privileged }}
+ privileged: true
+{{- end }}
{{- if .Values.resources }}
resources:
{{ toYaml .Values.resources | indent 10 }}
@@ -108,6 +111,21 @@ spec:
readOnly: true
{{- if .Values.extraVolumeMounts }}
{{ toYaml .Values.extraVolumeMounts | indent 8 }}
+{{- end }}
+{{- if .Values.monitoring.enabled }}
+ - name: {{ template "filebeat.fullname" . }}-prometheus-exporter
+ image: "{{ .Values.monitoring.image.repository }}:{{ .Values.monitoring.image.tag }}"
+ imagePullPolicy: {{ .Values.monitoring.image.pullPolicy }}
+ args:
+{{- if .Values.monitoring.args }}
+{{ toYaml .Values.monitoring.args | indent 8 }}
+{{- end }}
+{{- if .Values.monitoring.resources }}
+ resources:
+{{ toYaml .Values.monitoring.resources | indent 10 }}
+{{- end }}
+ ports:
+ - containerPort: {{ .Values.monitoring.exporterPort}}
{{- end }}
volumes:
- name: varlog
@@ -121,7 +139,7 @@ spec:
secretName: {{ template "filebeat.fullname" . }}
- name: data
hostPath:
- path: /var/lib/filebeat
+ path: {{ .Values.data.hostPath }}
type: DirectoryOrCreate
{{- if .Values.extraVolumes }}
{{ toYaml .Values.extraVolumes | indent 6 }}
diff --git a/stable/filebeat/templates/secret.yaml b/stable/filebeat/templates/secret.yaml
index 42cd5916374d..670729d4e6d3 100644
--- a/stable/filebeat/templates/secret.yaml
+++ b/stable/filebeat/templates/secret.yaml
@@ -9,4 +9,4 @@ metadata:
heritage: {{ .Release.Service }}
type: Opaque
data:
- filebeat.yml: {{ toYaml .Values.config | indent 4 | b64enc }}
+ filebeat.yml: {{ toYaml (default .Values.config .Values.overrideConfig) | indent 4 | b64enc }}
diff --git a/stable/filebeat/templates/service.yaml b/stable/filebeat/templates/service.yaml
new file mode 100644
index 000000000000..b42fb258a3b3
--- /dev/null
+++ b/stable/filebeat/templates/service.yaml
@@ -0,0 +1,30 @@
+{{- if .Values.monitoring.enabled }}
+kind: Service
+apiVersion: v1
+metadata:
+{{- if not .Values.monitoring.serviceMonitor.enabled }}
+ annotations:
+{{- if .Values.monitoring.telemetryPath }}
+ prometheus.io/path: {{ .Values.monitoring.telemetryPath }}
+{{- else }}
+ prometheus.io/path: /metrics
+{{- end }}
+ prometheus.io/port: "{{ .Values.monitoring.exporterPort }}"
+ prometheus.io/scrape: "true"
+{{- end }}
+ name: {{ template "filebeat.fullname" . }}-metrics
+ namespace: {{ .Release.Namespace }}
+ labels:
+ app: {{ template "filebeat.name" . }}
+ chart: {{ template "filebeat.chart" . }}
+ release: {{ .Release.Name }}
+ heritage: {{ .Release.Service }}
+spec:
+ selector:
+ app: {{ template "filebeat.name" . }}
+ ports:
+ - name: metrics
+ port: {{ .Values.monitoring.exporterPort }}
+ targetPort: {{ .Values.monitoring.targetPort }}
+ protocol: TCP
+{{ end }}
diff --git a/stable/filebeat/templates/servicemonitor.yaml b/stable/filebeat/templates/servicemonitor.yaml
new file mode 100644
index 000000000000..6eb5ff19504f
--- /dev/null
+++ b/stable/filebeat/templates/servicemonitor.yaml
@@ -0,0 +1,30 @@
+{{- if and ( .Capabilities.APIVersions.Has "monitoring.coreos.com/v1" ) ( .Values.monitoring.serviceMonitor.enabled ) ( .Values.monitoring.enabled ) }}
+apiVersion: monitoring.coreos.com/v1
+kind: ServiceMonitor
+metadata:
+{{- if .Values.monitoring.serviceMonitor.labels }}
+ labels:
+{{ toYaml .Values.monitoring.serviceMonitor.labels | indent 4}}
+{{- end }}
+ name: {{ template "filebeat.fullname" . }}-prometheus-exporter
+{{- if .Values.monitoring.serviceMonitor.namespace }}
+ namespace: {{ .Values.monitoring.serviceMonitor.namespace }}
+{{- end }}
+spec:
+ endpoints:
+ - targetPort: {{ .Values.monitoring.exporterPort }}
+{{- if .Values.monitoring.serviceMonitor.interval }}
+ interval: {{ .Values.monitoring.serviceMonitor.interval }}
+{{- end }}
+{{- if .Values.monitoring.serviceMonitor.telemetryPath }}
+ path: {{ .Values.monitoring.serviceMonitor.telemetryPath }}
+{{- end }}
+ jobLabel: {{ template "filebeat.fullname" . }}-prometheus-exporter
+ namespaceSelector:
+ matchNames:
+ - {{ .Release.Namespace }}
+ selector:
+ matchLabels:
+ app: {{ template "filebeat.name" . }}
+ release: {{ .Release.Name }}
+{{- end }}
\ No newline at end of file
diff --git a/stable/filebeat/values.yaml b/stable/filebeat/values.yaml
index 6aa4e4c5fced..3b3000af9831 100644
--- a/stable/filebeat/values.yaml
+++ b/stable/filebeat/values.yaml
@@ -1,6 +1,6 @@
image:
repository: docker.elastic.co/beats/filebeat-oss
- tag: 6.6.0
+ tag: 6.7.0
pullPolicy: IfNotPresent
config:
@@ -44,9 +44,16 @@ config:
# When a key contains a period, use this format for setting values on the command line:
# --set config."http\.enabled"=true
- http.enabled: false
+ http.enabled: true
http.port: 5066
+# If overrideConfig is not empty, filebeat chart's default config won't be used at all.
+overrideConfig: {}
+
+# Path on the host to mount to /usr/share/filebeat/data in the container.
+data:
+ hostPath: /var/lib/filebeat
+
# Upload index template to Elasticsearch if Logstash output is enabled
# https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-template.html
# List of Elasticsearch hosts
@@ -139,3 +146,45 @@ podSecurityPolicy:
# seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
# seccomp.security.alpha.kubernetes.io/defaultProfileName: 'docker/default'
# apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default'
+
+privileged: false
+
+## Add Elastic beat-exporter for Prometheus
+## https://github.com/trustpilot/beat-exporter
+## Don't forget to enable HTTP via config.http.enabled (it exposes filebeat stats)
+monitoring:
+ enabled: true
+ serviceMonitor:
+ # When set true and if Prometheus Operator is installed then use a ServiceMonitor to configure scraping
+ enabled: true
+ # Set the namespace the ServiceMonitor should be deployed
+ # namespace: monitoring
+ # Set how frequently Prometheus should scrape
+ # interval: 30s
+ # Set the beat-exporter telemetry path
+ # telemetryPath: /metrics
+ # Set labels for the ServiceMonitor, use this to define your scrape label for Prometheus Operator
+ # labels:
+ image:
+ repository: trustpilot/beat-exporter
+ tag: 0.1.1
+ pullPolicy: IfNotPresent
+ resources: {}
+ # We usually recommend not to specify default resources and to leave this as a conscious
+ # choice for the user. This also increases chances charts run on environments with little
+ # resources, such as Minikube. If you do want to specify resources, uncomment the following
+ # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
+ # limits:
+ # cpu: 100m
+ # memory: 200Mi
+ # requests:
+ # cpu: 100m
+ # memory: 100Mi
+
+ # Pass custom args. This is the equivalent of Cmd in Docker
+ args: []
+
+ ## Default is ":9479". If changed, you must also pass the argument "-web.listen-address <...>"
+ exporterPort: 9479
+ ## Filebeat service port, which exposes Prometheus metrics
+ targetPort: 9479
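For reference, the new filebeat knobs compose like this (a sketch; all values are placeholders, and the exporter flag shown is the one the comments above mention):

```yaml
privileged: false

data:
  hostPath: /var/lib/filebeat

monitoring:
  enabled: true
  exporterPort: 9479
  targetPort: 9479
  # Only needed when changing the port from the default ":9479"
  args: ["-web.listen-address=:9479"]
  serviceMonitor:
    enabled: true
    # labels:
    #   release: prometheus-operator   # placeholder scrape label
```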
diff --git a/stable/fluent-bit/Chart.yaml b/stable/fluent-bit/Chart.yaml
index 15bef4cf945d..2c245e016b69 100755
--- a/stable/fluent-bit/Chart.yaml
+++ b/stable/fluent-bit/Chart.yaml
@@ -1,6 +1,7 @@
+apiVersion: v1
name: fluent-bit
-version: 1.5.1
-appVersion: 1.0.3
+version: 2.0.2
+appVersion: 1.1.0
description: Fast and Lightweight Log/Data Forwarder for Linux, BSD and OSX
keywords:
- logging
diff --git a/stable/fluent-bit/README.md b/stable/fluent-bit/README.md
index 33abdccf5e7f..e5126b27d25f 100644
--- a/stable/fluent-bit/README.md
+++ b/stable/fluent-bit/README.md
@@ -82,29 +82,35 @@ The following table lists the configurable parameters of the Fluent-Bit chart an
| `fullConfigMap` | User has provided entire config (parsers + system) | `false` |
| `existingConfigMap` | ConfigMap override | `` |
| `extraEntries.input` | Extra entries for existing [INPUT] section | `` |
-| `extraEntries.filter` | Extra entries for existing [FILTER] section | `` |
-| `extraEntries.output` | Extra entries for existing [OUPUT] section | `` |
+| `extraEntries.filter` | Extra entries for existing [FILTER] section | `` |
+| `extraEntries.output` | Extra entries for existing [OUTPUT] section | `` |
| `extraPorts` | List of extra ports | |
| `extraVolumeMounts` | Mount an extra volume, required to mount ssl certificates when elasticsearch has tls enabled | |
| `extraVolume` | Extra volume | |
-| `filter.enableExclude` | Enable the use of monitoring for a pod annotation of `fluentbit.io/exclude: true`. If present, discard logs from that pod. | `true` |
-| `filter.enableParser` | Enable the use of monitoring for a pod annotation of `fluentbit.io/parser: parser_name`. parser_name must be the name of a parser contained within parsers.conf | `true` |
+| `service.flush` | Interval to flush output (seconds) | `1` |
+| `service.logLevel` | Diagnostic level (error/warning/info/debug/trace) | `info` |
+| `filter.enableExclude` | Enable the use of monitoring for a pod annotation of `fluentbit.io/exclude: true`. If present, discard logs from that pod. | `true` |
+| `filter.enableParser` | Enable the use of monitoring for a pod annotation of `fluentbit.io/parser: parser_name`. parser_name must be the name of a parser contained within parsers.conf | `true` |
| `filter.kubeURL` | Optional custom configmaps | `https://kubernetes.default.svc:443` |
| `filter.kubeCAFile` | Optional custom configmaps | `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt` |
| `filter.kubeTokenFile` | Optional custom configmaps | `/var/run/secrets/kubernetes.io/serviceaccount/token` |
| `filter.kubeTag` | Optional top-level tag for matching in filter | `kube` |
-| `filter.mergeJSONLog` | If the log field content is a JSON string map, append the map fields as part of the log structure | `true` |
+| `filter.kubeTagPrefix` | Optional tag prefix used by Tail | `kube.var.log.containers.` |
+| `filter.mergeJSONLog` | If the log field content is a JSON string map, append the map fields as part of the log structure | `true` |
| `image.fluent_bit.repository` | Image | `fluent/fluent-bit` |
-| `image.fluent_bit.tag` | Image tag | `1.0.3` |
+| `image.fluent_bit.tag` | Image tag | `1.0.6` |
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
+| `nameOverride` | Override name of app | `nil` |
+| `fullnameOverride` | Override full name of app | `nil` |
| `image.pullSecrets` | Specify image pull secrets | `nil` |
| `input.tail.memBufLimit` | Specify Mem_Buf_Limit in tail input | `5MB` |
-| `input.tail.path` | Specify log file(s) through the use of common wildcards. | `/var/log/containers/*.log` |
-| `input.systemd.enabled` | [Enable systemd input](https://fluentbit.io/documentation/current/input/systemd.html) | `false` |
-| `input.systemd.filters.systemdUnit | Please see https://fluentbit.io/documentation/current/input/systemd.html | `[docker.service, kubelet.service`, `node-problem-detector.service]` |
-| `input.systemd.maxEntries` | Please see https://fluentbit.io/documentation/current/input/systemd.html | `1000` |
-| `input.systemd.readFromTail` | Please see https://fluentbit.io/documentation/current/input/systemd.html | `true`|
-| `input.systemd.tag` | Please see https://fluentbit.io/documentation/current/input/systemd.html | `host.*`|
+| `input.tail.parser` | Specify Parser in tail input. | `docker` |
+| `input.tail.path` | Specify log file(s) through the use of common wildcards. | `/var/log/containers/*.log` |
+| `input.systemd.enabled` | [Enable systemd input](https://docs.fluentbit.io/manual/input/systemd) | `false` |
+| `input.systemd.filters.systemdUnit` | Please see https://docs.fluentbit.io/manual/input/systemd | `[docker.service, kubelet.service`, `node-problem-detector.service]` |
+| `input.systemd.maxEntries` | Please see https://docs.fluentbit.io/manual/input/systemd | `1000` |
+| `input.systemd.readFromTail` | Please see https://docs.fluentbit.io/manual/input/systemd | `true` |
+| `input.systemd.tag` | Please see https://docs.fluentbit.io/manual/input/systemd | `host.*` |
| `rbac.create` | Specifies whether RBAC resources should be created. | `true` |
| `serviceAccount.create` | Specifies whether a ServiceAccount should be created. | `true` |
| `serviceAccount.name` | The name of the ServiceAccount to use. | `NULL` |
@@ -112,6 +118,7 @@ The following table lists the configurable parameters of the Fluent-Bit chart an
| `resources` | Pod resource requests & limits | `{}` |
| `hostNetwork` | Use host's network | `false` |
| `dnsPolicy` | Specifies the dnsPolicy to use | `ClusterFirst` |
+| `priorityClassName` | Specifies the priorityClassName to use | `NULL` |
| `tolerations` | Optional daemonset tolerations | `NULL` |
| `nodeSelector` | Node labels for fluent-bit pod assignment | `NULL` |
| `affinity` | Expressions for affinity | `NULL` |
@@ -120,7 +127,8 @@ The following table lists the configurable parameters of the Fluent-Bit chart an
| `metrics.service.port` | Port on where metrics should be exposed | `2020` |
| `metrics.service.type` | Service type for metrics | `ClusterIP` |
| `trackOffsets` | Specify whether to track the file offsets for tailing docker logs. This allows fluent-bit to pick up where it left after pod restarts but requires access to a `hostPath` | `false` |
-| | | |
+| `testFramework.image` | `test-framework` image repository. | `dduportal/bats` |
+| `testFramework.tag` | `test-framework` image tag. | `0.4.0` |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`.
@@ -135,8 +143,12 @@ $ helm install --name my-release -f values.yaml stable/fluent-bit
## Upgrading
-### From < 1.0.0 To 1.0.0
+### From < 1.0.0 To >= 1.0.0
-Values `extraInputs`, `extraFilters` and `extraOutputs` have been removed in version `1.0.0` of the fluent-bit chart.
-To add additional entries to the existing sections, please use the `extraEntries.input`, `extraEntries.filter` and `extraEntries.output` values.
+Values `extraInputs`, `extraFilters` and `extraOutputs` have been removed in version `1.0.0` of the fluent-bit chart.
+To add additional entries to the existing sections, please use the `extraEntries.input`, `extraEntries.filter` and `extraEntries.output` values.
For entire sections, please use the `rawConfig` value, inserting blocks of text as desired.
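For example, the split described above could look like the following values fragment (a sketch only: the `Annotations` entry and the `record_modifier` section are illustrative, not chart defaults):

```yaml
extraEntries:
  # Extra Key/Value lines appended to the generated [FILTER] section
  filter: |-
    Annotations Off
# Whole new sections go into rawConfig, between the includes
rawConfig: |-
  @INCLUDE fluent-bit-service.conf
  @INCLUDE fluent-bit-input.conf
  @INCLUDE fluent-bit-filter.conf
  [FILTER]
      Name    record_modifier
      Match   *
      Record  cluster my-cluster
  @INCLUDE fluent-bit-output.conf
```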
+
+### From < 1.8.0 to >= 1.8.0
+
+Version `1.8.0` introduces the use of the release name as the full name if it contains the chart name (fluent-bit in this case). E.g. with a release name of `fluent-bit`, this renames the DaemonSet from `fluent-bit-fluent-bit` to `fluent-bit`. The suggested approach is to delete the release and reinstall it.
diff --git a/stable/fluent-bit/templates/_helpers.tpl b/stable/fluent-bit/templates/_helpers.tpl
index dca15d75a88c..42453daea624 100644
--- a/stable/fluent-bit/templates/_helpers.tpl
+++ b/stable/fluent-bit/templates/_helpers.tpl
@@ -9,11 +9,27 @@ Expand the name of the chart.
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
+If release name contains chart name it will be used as a full name.
*/}}
{{- define "fluent-bit.fullname" -}}
+{{- if .Values.fullnameOverride -}}
+{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
+{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
+{{- if contains $name .Release.Name -}}
+{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
+{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Create chart name and version as used by the chart label.
+*/}}
+{{- define "fluent-bit.chart" -}}
+{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
{{/*
Return the appropriate apiVersion for RBAC APIs.
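The branching in the `fluent-bit.fullname` helper above can be sketched in shell (an illustrative stand-in for the Go template logic, not part of the chart):

```shell
#!/usr/bin/env bash
fullname() {
  # Mirrors the helper: an explicit override wins; otherwise the release
  # name is reused as-is when it already contains the chart name.
  # Truncation to 63 chars matches the DNS label limit, and a trailing
  # "-" left by truncation is trimmed, as trimSuffix "-" does.
  local release="$1" chart="$2" override="${3:-}" out
  if [ -n "$override" ]; then out="$override"
  elif [[ "$release" == *"$chart"* ]]; then out="$release"
  else out="${release}-${chart}"
  fi
  out="${out:0:63}"
  printf '%s\n' "${out%-}"
}

fullname fluent-bit fluent-bit    # prints "fluent-bit"
fullname my-release fluent-bit    # prints "my-release-fluent-bit"
```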
diff --git a/stable/fluent-bit/templates/config.yaml b/stable/fluent-bit/templates/config.yaml
index 64ca48b1774e..8b44192cab38 100644
--- a/stable/fluent-bit/templates/config.yaml
+++ b/stable/fluent-bit/templates/config.yaml
@@ -11,9 +11,9 @@ metadata:
data:
fluent-bit-service.conf: |-
[SERVICE]
- Flush 1
+ Flush {{ .Values.service.flush }}
Daemon Off
- Log_Level info
+ Log_Level {{ .Values.service.logLevel }}
Parsers_File parsers.conf
{{- if .Values.parsers.enabled }}
Parsers_File parsers_custom.conf
@@ -28,7 +28,7 @@ data:
[INPUT]
Name tail
Path {{ .Values.input.tail.path }}
- Parser docker
+ Parser {{ .Values.input.tail.parser }}
Tag {{ .Values.filter.kubeTag }}.*
Refresh_Interval 5
Mem_Buf_Limit {{ .Values.input.tail.memBufLimit }}
@@ -42,7 +42,7 @@ data:
Name systemd
Tag {{ .Values.input.systemd.tag }}
{{- range $value := .Values.input.systemd.filters.systemdUnit }}
- Systemd_Filter _SYSTEMD_UNIT="{{ $value }}"
+ Systemd_Filter _SYSTEMD_UNIT={{ $value }}
{{- end }}
Max_Entries {{ .Values.input.systemd.maxEntries }}
Read_From_Tail {{ .Values.input.systemd.readFromTail }}
@@ -53,6 +53,7 @@ data:
[FILTER]
Name kubernetes
Match {{ .Values.filter.kubeTag }}.*
+ Kube_Tag_Prefix {{ .Values.filter.kubeTagPrefix }}
Kube_URL {{ .Values.filter.kubeURL }}
Kube_CA_File {{ .Values.filter.kubeCAFile }}
Kube_Token_File {{ .Values.filter.kubeTokenFile }}
@@ -191,4 +192,3 @@ data:
{{- end }}
{{- end -}}
-
diff --git a/stable/fluent-bit/templates/daemonset.yaml b/stable/fluent-bit/templates/daemonset.yaml
index d1bf032e8b2a..7fc63bdddc02 100644
--- a/stable/fluent-bit/templates/daemonset.yaml
+++ b/stable/fluent-bit/templates/daemonset.yaml
@@ -24,6 +24,9 @@ spec:
{{ toYaml .Values.podAnnotations | indent 8 }}
{{- end }}
spec:
+{{- if .Values.priorityClassName }}
+ priorityClassName: "{{ .Values.priorityClassName }}"
+{{- end }}
{{- if .Values.image.pullSecrets }}
imagePullSecrets:
{{ toYaml .Values.image.pullSecrets | indent 8 }}
diff --git a/stable/fluent-bit/templates/tests/test-configmap.yaml b/stable/fluent-bit/templates/tests/test-configmap.yaml
new file mode 100644
index 000000000000..49f0a0091686
--- /dev/null
+++ b/stable/fluent-bit/templates/tests/test-configmap.yaml
@@ -0,0 +1,48 @@
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: {{ template "fluent-bit.fullname" . }}-test
+ labels:
+ app: {{ template "fluent-bit.fullname" . }}
+ chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ heritage: "{{ .Release.Service }}"
+ release: "{{ .Release.Name }}"
+data:
+ run.sh: |-
+ {{- if eq .Values.backend.type "forward"}}
+ {{- if eq .Values.backend.forward.tls "on"}}
+ fluent-gem install fluent-plugin-secure-forward
+ {{- end }}
+ @test "Test fluentd" {
+ fluentd -c /tests/fluentd.conf --dry-run
+ }
+ {{- else if eq .Values.backend.type "es"}}
+ @test "Test Elasticsearch Indices" {
+ url="http://{{ .Values.backend.es.host }}:{{ .Values.backend.es.port }}/_cat/indices?format=json"
+ body=$(curl $url)
+
+ result=$(echo $body | jq -cr '.[] | select(.index | contains("{{ .Values.backend.es.index }}"))')
+ [ "$result" != "" ]
+
+ result=$(echo $body | jq -cr '.[] | select((.index | contains("{{ .Values.backend.es.index }}")) and (.health != "green"))')
+ [ "$result" == "" ]
+ }
+ {{- end }}
+
+  fluentd.conf: |-
+    <source>
+    {{- if eq .Values.backend.forward.tls "off" }}
+      @type forward
+      bind 0.0.0.0
+      port {{ .Values.backend.forward.port }}
+    {{- else }}
+      @type secure_forward
+      self_hostname myserver.local
+      secure no
+    {{- end }}
+      shared_key {{ .Values.backend.forward.shared_key }}
+    </source>
+
+    <match **>
+      @type stdout
+    </match>
diff --git a/stable/fluent-bit/templates/tests/test.yaml b/stable/fluent-bit/templates/tests/test.yaml
new file mode 100644
index 000000000000..c030875e5d85
--- /dev/null
+++ b/stable/fluent-bit/templates/tests/test.yaml
@@ -0,0 +1,53 @@
+{{- if or (eq .Values.backend.type "forward") (and (eq .Values.backend.type "es") (eq .Values.backend.es.tls "off")) }}
+apiVersion: v1
+kind: Pod
+metadata:
+ name: {{ template "fluent-bit.fullname" . }}-test
+ labels:
+ app: {{ template "fluent-bit.fullname" . }}
+ chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ heritage: "{{ .Release.Service }}"
+ release: "{{ .Release.Name }}"
+ annotations:
+ "helm.sh/hook": test-success
+spec:
+ initContainers:
+ - name: test-framework
+ image: "{{ .Values.testFramework.image }}:{{ .Values.testFramework.tag }}"
+ command:
+ - "bash"
+ - "-c"
+ - |
+ set -ex
+ # copy bats to tools dir
+ cp -R /usr/local/libexec/ /tools/bats/
+ volumeMounts:
+ - mountPath: /tools
+ name: tools
+ containers:
+ - name: {{ .Release.Name }}-test
+ {{- if eq .Values.backend.type "forward"}}
+ image: "fluent/fluentd:v1.4-debian-1"
+ {{- else }}
+ image: "dwdraju/alpine-curl-jq"
+ {{- end }}
+ command: ["/tools/bats/bats", "-t", "/tests/run.sh"]
+ {{- if and (eq .Values.backend.forward.tls "on") (eq .Values.backend.type "forward") }}
+ securityContext:
+ # run as root to install fluent gems
+ runAsUser: 0
+ {{- end }}
+ volumeMounts:
+ - mountPath: /tests
+ name: tests
+ readOnly: true
+ - mountPath: /tools
+ name: tools
+ volumes:
+ - name: tests
+ configMap:
+ name: {{ template "fluent-bit.fullname" . }}-test
+ - name: tools
+ emptyDir: {}
+ restartPolicy: Never
+{{- end }}
diff --git a/stable/fluent-bit/values.yaml b/stable/fluent-bit/values.yaml
index 703c13cf9938..40164d15298a 100644
--- a/stable/fluent-bit/values.yaml
+++ b/stable/fluent-bit/values.yaml
@@ -5,8 +5,15 @@ on_minikube: false
image:
fluent_bit:
repository: fluent/fluent-bit
- tag: 1.0.3
- pullPolicy: IfNotPresent
+ tag: 1.1.0
+ pullPolicy: Always
+
+testFramework:
+ image: "dduportal/bats"
+ tag: "0.4.0"
+
+nameOverride: ""
+fullnameOverride: ""
# When enabled, exposes json and prometheus metrics on {{ .Release.Name }}-metrics service
metrics:
@@ -23,6 +30,10 @@ metrics:
# When enabled, fluent-bit will keep track of tailing offsets across pod restarts.
trackOffsets: false
+## PriorityClassName
+## Ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
+priorityClassName: ""
+
backend:
type: forward
forward:
@@ -120,12 +131,22 @@ fullConfigMap: false
##
existingConfigMap: ""
+
+# NOTE If you want to add extra sections, add them here, in between the includes,
+# wherever they need to go. Section order matters.
+
rawConfig: |-
@INCLUDE fluent-bit-service.conf
@INCLUDE fluent-bit-input.conf
@INCLUDE fluent-bit-filter.conf
@INCLUDE fluent-bit-output.conf
+
+# WARNING!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
+# This is to add extra entries to an existing section, NOT for adding new sections
+# Do not submit bugs against indent being wrong. Add your new sections to rawConfig
+# instead.
+#
extraEntries:
input: |-
# # >=1 additional Key/Value entries for existing Input section
@@ -133,6 +154,8 @@ extraEntries:
# # >=1 additional Key/Value entries for existing Filter section
output: |-
# # >=1 additional Key/Value entries for existing Output section
+# WARNING!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
+
## Extra ports to add to the daemonset ports section
extraPorts: []
@@ -178,9 +201,14 @@ tolerations: []
nodeSelector: {}
affinity: {}
+service:
+ flush: 1
+ logLevel: info
+
input:
tail:
memBufLimit: 5MB
+ parser: docker
path: /var/log/containers/*.log
systemd:
enabled: false
@@ -198,6 +226,8 @@ filter:
kubeCAFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
kubeTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
kubeTag: kube
+ kubeTagPrefix: kube.var.log.containers.
+
# If true, check to see if the log field content is a JSON string map, if so,
# it append the map fields as part of the log structure.
mergeJSONLog: true
diff --git a/stable/fluentd/Chart.yaml b/stable/fluentd/Chart.yaml
index a55206c12534..fb6702a2049a 100644
--- a/stable/fluentd/Chart.yaml
+++ b/stable/fluentd/Chart.yaml
@@ -2,13 +2,16 @@ apiVersion: v1
description: A Fluentd Elasticsearch Helm chart for Kubernetes.
icon: https://raw.githubusercontent.com/fluent/fluentd-docs/master/public/logo/Fluentd_square.png
name: fluentd
-version: 1.4.0
-appVersion: v2.3.1
+version: 1.9.0
+appVersion: v2.4.0
home: https://www.fluentd.org/
sources:
+- https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch/fluentd-es-image
- https://quay.io/repository/coreos/fluentd-kubernetes
- https://github.com/coreos/fluentd-kubernetes-daemonset
- https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html
maintainers:
- name: rendhalver
email: pete.brown@powerhrg.com
+- name: miouge1
+ email: maxime@root314.com
diff --git a/stable/fluentd/OWNERS b/stable/fluentd/OWNERS
index 5cb6ab0551d0..ac74ba548bd9 100644
--- a/stable/fluentd/OWNERS
+++ b/stable/fluentd/OWNERS
@@ -1,4 +1,6 @@
approvers:
- rendhalver
+- miouge1
reviewers:
- rendhalver
+- miouge1
diff --git a/stable/fluentd/README.md b/stable/fluentd/README.md
new file mode 100644
index 000000000000..693d73d6ace4
--- /dev/null
+++ b/stable/fluentd/README.md
@@ -0,0 +1,83 @@
+# fluentd
+
+[Fluentd](https://www.fluentd.org/) collects events from various data sources and writes them to files, RDBMS, NoSQL, IaaS, SaaS, Hadoop and so on. Fluentd helps you unify your logging infrastructure (Learn more about the Unified Logging Layer).
+
+## TL;DR;
+
+```console
+$ helm install stable/fluentd
+```
+
+## Introduction
+
+This chart bootstraps a fluentd deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
+
+## Installing the Chart
+
+To install the chart with the release name `my-release`:
+
+```console
+$ helm install stable/fluentd --name my-release
+```
+
+The command deploys fluentd on the Kubernetes cluster in the default configuration. The [configuration](#configuration) section lists the parameters that can be configured during installation.
+
+## Uninstalling the Chart
+
+To uninstall/delete the `my-release` deployment:
+
+```console
+$ helm delete my-release
+```
+
+The command removes all the Kubernetes components associated with the chart and deletes the release.
+
+## Configuration
+
+The following table lists the configurable parameters of the fluentd chart and their default values.
+
+Parameter | Description | Default
+--- | --- | ---
+`affinity` | node/pod affinities | `{}`
+`configMaps` | Fluentd configuration | See [values.yaml](values.yaml)
+`output.host` | output host | `elasticsearch-client.default.svc.cluster.local`
+`output.port` | output port | `9200`
+`output.scheme` | output scheme | `http`
+`output.sslVersion` | output ssl version | `TLSv1`
+`output.buffer_chunk_limit` | output buffer chunk limit | `2M`
+`output.buffer_queue_limit` | output buffer queue limit | `8`
+`image.pullPolicy` | Image pull policy | `IfNotPresent`
+`image.repository` | Image repository | `gcr.io/google-containers/fluentd-elasticsearch`
+`image.tag` | Image tag | `v2.4.0`
+`imagePullSecrets` | Specify image pull secrets | `nil` (does not add image pull secrets to deployed pods)
+`extraEnvVars` | Adds additional environment variables to the deployment (in yaml syntax) | `{}` See [values.yaml](values.yaml)
+`ingress.enabled` | enable ingress | `false`
+`ingress.labels` | list of labels for the ingress rule | See [values.yaml](values.yaml)
+`ingress.annotations` | list of annotations for the ingress rule | `kubernetes.io/ingress.class: nginx` See [values.yaml](values.yaml)
+`ingress.hosts` | host definition for ingress | See [values.yaml](values.yaml)
+`ingress.tls` | tls rules for ingress | See [values.yaml](values.yaml)
+`nodeSelector` | node labels for pod assignment | `{}`
+`replicaCount` | desired number of pods | `1`
+`resources` | pod resource requests & limits | `{}`
+`priorityClassName` | priorityClassName | `nil`
+`service.ports` | port definition for the service | See [values.yaml](values.yaml)
+`service.type` | type of service | `ClusterIP`
+`tolerations` | List of node taints to tolerate | `[]`
+`persistence.enabled` | Enable buffer persistence | `false`
+`persistence.accessMode` | Access mode for buffer persistence | `ReadWriteOnce`
+`persistence.size` | Volume size for buffer persistence | `10Gi`
+
+Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
+
+```console
+$ helm install stable/fluentd --name my-release \
+ --set=image.tag=v0.0.2,resources.limits.cpu=200m
+```
+
+Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,
+
+```console
+$ helm install stable/fluentd --name my-release -f values.yaml
+```
+
+> **Tip**: You can use the default [values.yaml](values.yaml)
diff --git a/stable/fluentd/templates/deployment.yaml b/stable/fluentd/templates/deployment.yaml
index 18b95e35fbe9..2b87820f7c23 100644
--- a/stable/fluentd/templates/deployment.yaml
+++ b/stable/fluentd/templates/deployment.yaml
@@ -23,7 +23,10 @@ spec:
app: {{ template "fluentd.name" . }}
release: {{ .Release.Name }}
annotations:
-{{ toYaml .Values.annotations | indent 8 }}
+ checksum/configmap: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
+ {{- if .Values.annotations }}
+ {{- toYaml .Values.annotations | nindent 8 }}
+ {{- end }}
spec:
{{- if .Values.image.pullSecrets }}
imagePullSecrets:
@@ -52,6 +55,9 @@ spec:
- name: {{ $key | quote }}
value: {{ $value | quote }}
{{- end }}
+ {{- if .Values.extraEnvVars }}
+{{ toYaml .Values.extraEnvVars | indent 10 }}
+ {{- end }}
resources:
{{ toYaml .Values.resources | indent 12 }}
ports:
diff --git a/stable/fluentd/templates/ingress.yaml b/stable/fluentd/templates/ingress.yaml
index fe1818966f34..4ed58913c230 100644
--- a/stable/fluentd/templates/ingress.yaml
+++ b/stable/fluentd/templates/ingress.yaml
@@ -1,32 +1,36 @@
{{- if .Values.ingress.enabled -}}
{{- $serviceName := include "fluentd.fullname" . -}}
-{{- $servicePort := .Values.service.externalPort -}}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: {{ template "fluentd.fullname" . }}
labels:
app: {{ template "fluentd.name" . }}
- chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
+ chart: {{ template "fluentd.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
+{{- if .Values.ingress.labels }}
+{{ toYaml .Values.ingress.labels | indent 4 }}
+{{- end }}
+{{- if .Values.ingress.annotations }}
annotations:
- {{- range $key, $value := .Values.ingress.annotations }}
- {{ $key }}: {{ $value | quote }}
- {{- end }}
+{{ tpl ( toYaml .Values.ingress.annotations | indent 4 ) . }}
+{{- end }}
spec:
rules:
{{- range $host := .Values.ingress.hosts }}
- - host: {{ $host.name }}
- http:
+ - http:
paths:
- - path: /
+ - path: {{ $host.path | default "/" }}
backend:
- serviceName: {{ $host.serviceName }}
+ serviceName: {{ $serviceName }}
servicePort: {{ $host.servicePort }}
+ {{- if (not (empty $host.name)) }}
+ host: {{ $host.name }}
+ {{- end -}}
{{- end -}}
{{- if .Values.ingress.tls }}
tls:
{{ toYaml .Values.ingress.tls | indent 4 }}
{{- end -}}
-{{- end -}}
+{{- end -}}
\ No newline at end of file
diff --git a/stable/fluentd/templates/service.yaml b/stable/fluentd/templates/service.yaml
index c57d7e0d223e..ad75e1518daf 100644
--- a/stable/fluentd/templates/service.yaml
+++ b/stable/fluentd/templates/service.yaml
@@ -4,7 +4,7 @@ metadata:
name: {{ template "fluentd.fullname" . }}
labels:
app: {{ template "fluentd.name" . }}
- chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
+ chart: {{ template "fluentd.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
diff --git a/stable/fluentd/values.yaml b/stable/fluentd/values.yaml
index c264ff3003fa..10d00c4efbfe 100644
--- a/stable/fluentd/values.yaml
+++ b/stable/fluentd/values.yaml
@@ -3,7 +3,7 @@
# Declare variables to be passed into your templates.
image:
repository: gcr.io/google-containers/fluentd-elasticsearch
- tag: v2.3.1
+ tag: v2.4.0
pullPolicy: IfNotPresent
# pullSecrets:
# - secret1
@@ -19,10 +19,19 @@ output:
env: {}
+# Extra Environment Values - allows yaml definitions
+extraEnvVars:
+# - name: VALUE_FROM_SECRET
+# valueFrom:
+# secretKeyRef:
+# name: secret_name
+# key: secret_key
+
service:
type: ClusterIP
- # type: nodePort:
- externalPort: 80
+ # type: NodePort
+ # nodePort:
+ # Used to create Service records
ports:
- name: "monitor-agent"
protocol: TCP
@@ -34,20 +43,23 @@ annotations: {}
ingress:
enabled: false
- # Used to create an Ingress and Service record.
- # hosts:
- # - name: "http-input.local"
- # protocol: TCP
- # serviceName: http-input
- # servicePort: 9880
annotations:
- # kubernetes.io/ingress.class: nginx
- # kubernetes.io/tls-acme: "true"
- tls:
- # Secrets must be manually created in the namespace.
- # - secretName: http-input-tls
- # hosts:
- # - http-input.local
+ kubernetes.io/ingress.class: nginx
+# kubernetes.io/tls-acme: "true"
+# # Depending on the version of your ingress controller, you may need to configure the rewrite target properly - https://kubernetes.github.io/ingress-nginx/examples/rewrite/#rewrite-target
+# nginx.ingress.kubernetes.io/rewrite-target: /
+ labels: []
+ # If defining a TCP or UDP ingress rule, don't forget to update your Ingress Controller to accept TCP connections - https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/
+ hosts:
+# - name: "http-input.local"
+# protocol: TCP
+# servicePort: 9880
+# path: /
+ tls: {}
+ # Secrets must be manually created in the namespace.
+# - secretName: http-input-tls
+# hosts:
+# - http-input.local
configMaps:
general.conf: |
diff --git a/stable/gangway/.helmignore b/stable/gangway/.helmignore
new file mode 100644
index 000000000000..f0c131944441
--- /dev/null
+++ b/stable/gangway/.helmignore
@@ -0,0 +1,21 @@
+# Patterns to ignore when building packages.
+# This supports shell glob matching, relative path matching, and
+# negation (prefixed with !). Only one pattern per line.
+.DS_Store
+# Common VCS dirs
+.git/
+.gitignore
+.bzr/
+.bzrignore
+.hg/
+.hgignore
+.svn/
+# Common backup files
+*.swp
+*.bak
+*.tmp
+*~
+# Various IDEs
+.project
+.idea/
+*.tmproj
diff --git a/stable/gangway/Chart.yaml b/stable/gangway/Chart.yaml
new file mode 100644
index 000000000000..848b5cbed1f7
--- /dev/null
+++ b/stable/gangway/Chart.yaml
@@ -0,0 +1,12 @@
+apiVersion: v1
+description: An application that can be used to easily enable authentication flows via OIDC for a kubernetes cluster.
+name: gangway
+version: 0.0.5
+appVersion: 3.0.0
+home: https://github.com/heptiolabs/gangway
+sources:
+ - https://github.com/heptiolabs/gangway
+engine: gotpl
+maintainers:
+- name: rk295
+ email: robin@kearney.co.uk
diff --git a/stable/gangway/OWNERS b/stable/gangway/OWNERS
new file mode 100644
index 000000000000..e07d38b804d1
--- /dev/null
+++ b/stable/gangway/OWNERS
@@ -0,0 +1,5 @@
+approvers:
+- rk295
+reviewers:
+- rk295
+
diff --git a/stable/gangway/README.md b/stable/gangway/README.md
new file mode 100644
index 000000000000..5a51147e3724
--- /dev/null
+++ b/stable/gangway/README.md
@@ -0,0 +1,113 @@
+# Gangway
+
+An application that can be used to easily enable authentication flows via OIDC for a kubernetes cluster.
+
+## TL;DR
+
+ helm install stable/gangway
+
+## Introduction
+
+The chart deploys an instance of Gangway into a Kubernetes cluster using the Helm package manager.
+
+This chart will do the following:
+
+* Create a deployment of [gangway] within your Kubernetes Cluster.
+
+## Installing the Chart
+
+To install the chart with the release name `my-release`:
+
+```bash
+helm install --name my-release stable/gangway
+```
+
+Due to the nature of OIDC configuration, deploying the chart without at least some of the values being set will not result in a functioning application. See the Configuration section below for more information.
+
+## Configuration
+
+The following table lists the configurable parameters of the gangway chart and their default values.
+
+All values under the `gangway` top level object are passed directly to the Gangway container via a `yaml` config file. The contents of that object in [`values.yaml`](values.yaml) are lifted directly from the Gangway [documentation](https://github.com/heptiolabs/gangway/tree/master/docs).
+
+At a minimum you *must* configure any of the values marked as **required** in the table below.
+
+| Parameter | Description | Default |
+| --------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------- |
+| `affinity` | List of affinities (requires Kubernetes >=1.6) | `{}` |
+| `gangway.allowEmptyClientSecret` | Some identity providers accept an empty client secret; this is not generally considered a good idea. If you have to use an empty secret and accept the risks that come with that, then you can set this to true. | `false` |
+| `gangway.apiServerURL` | The API server endpoint used to configure kubectl. **Required** | `""` |
+| `gangway.audience` | Endpoint that provides user profile information [optional]. Not all providers will require this. To be taken from the configuration of your OIDC provider. **Required** | `""` |
+| `gangway.authorizeURL` | OAuth2 URL to start authorization flow. To be taken from the configuration of your OIDC provider. **Required** | `""` |
+| `gangway.certData` | The Public cert data. This is normally safe to leave alone. | `""` |
+| `gangway.clientID` | API client ID as indicated by the identity provider. **Required** | `""` |
+| `gangway.clientSecret` | API client secret as indicated by the identity provider. **Required** | `""` |
+| `gangway.cluster_ca_path` | The path to find the CA bundle for the API server. Used to configure kubectl. This is typically mounted into the default location for workloads running on a Kubernetes cluster and doesn't need to be set. | `""` |
+| `gangway.clusterName` | The cluster name. Used in UI and kubectl config instructions. **Required** | `""` |
+| `gangway.host` | The address to listen on. Defaults to 0.0.0.0 to listen on all interfaces. | `80` |
+| `gangway.httpPath` | The path gangway uses to create urls (defaults to "") | `/` |
+| `gangway.keyData` | The Private key data | `""` |
+| `gangway.port` | The port to listen on. Defaults to 8080. | `80` |
+| `gangway.redirectURL` | Where to redirect back to. This should be a URL where gangway is reachable. Typically this also needs to be registered as part of the oauth application with the oAuth provider. **Required** | `""` |
+| `gangway.scopes` | Used to specify the scope of the requested Oauth authorization. | `["openid", "profile", "email", "offline_access"]` |
+| `gangway.serveTLS` | Should Gangway serve TLS vs. plain HTTP? | `false` |
+| `gangway.sessionKey` | Encryption key for cookie contents. Will autogenerate if not provided. Caution: Do not use auto generation in production environments. | `""` |
+| `gangway.tokenURL` | OAuth2 URL to obtain access tokens. To be taken from the configuration of your OIDC provider. **Required** | `""` |
+| `gangway.trustedCAPath` | The path to a root CA to trust for self signed certificates at the Oauth2 URLs | `""` |
+| `gangway.usernameClaim` | The JWT claim to use as the username. This is used in UI. Default is "nickname". This is combined with the clusterName for the "user" portion of the kubeconfig. | `name` |
+| `image.repository` | Container image name (Including repository name if not `hub.docker.com`). | `gcr.io/heptio-images/gangway` |
+| `image.pullPolicy` | Container pull policy. | `IfNotPresent` |
+| `image.tag` | Container image tag. | `v2.2.0` |
+| `image.pullSecrets` | Name of Secret resource containing private registry credentials | `""` |
+| `ingress.annotations` | Ingress annotations | `{}` |
+| `ingress.enabled` | Enables or Disables the ingress resource | `false` |
+| `ingress.hosts` | List of FQDN's for the ingress | `""` |
+| `ingress.tls.hosts` | List of FQDN's the above secret is associated with | `""` |
+| `ingress.tls.secretName` | Name of the secret to use | `""` |
+| `ingress.tls` | List of SSL certs to use | `""` |
+| `nodeSelector` | Node labels for pod assignment | `{}` |
+| `podAnnotations` | Additional annotations to apply to the pod. | `{}` |
+| `resources` | CPU/Memory resource requests/limits. | `{}` |
+| `service.port` | The port the service should listen on | `80` |
+| `service.type` | Type of service to create | `ClusterIP` |
+| `tolerations` | List of node taints to tolerate (requires Kubernetes >= 1.6) | `[]` |
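A minimal values fragment covering just the **required** parameters might look like the following sketch (every endpoint, ID, and secret below is a placeholder for your own OIDC provider's settings):

```yaml
gangway:
  clusterName: my-cluster
  apiServerURL: https://api.my-cluster.example.com
  clientID: my-client-id
  clientSecret: my-client-secret
  authorizeURL: https://oidc.example.com/authorize
  tokenURL: https://oidc.example.com/oauth/token
  audience: https://oidc.example.com/userinfo
  redirectURL: https://gangway.my-cluster.example.com/callback
```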
+
+You will likely want to expose Gangway to your users, possibly by way of an ingress. The values below show one way of doing this with the [Traefik] ingress controller; this assumes TLS offload is happening at the load balancer:
+
+```yaml
+ingress:
+  enabled: true
+  annotations:
+    kubernetes.io/ingress.class: traefik
+  path: /
+  hosts:
+    - gangway.your-domain.com
+  tls: []
+```
+
+## Note about `gangway.sessionKey`
+
+The chart will auto-generate a random value for `gangway.sessionKey` when you install the chart. Gangway uses this via the [Gorilla Secure Cookie] Go library to encrypt the contents of cookies sent to the user's browser. Relying on the autogeneration is acceptable in testing environments; in production you are strongly advised to provide your own random value for this variable. If you do not, and you subsequently update your Helm deployment, this key will be regenerated. Any cookies in your users' browsers from before the upgrade will have been encrypted with the old key, which Gangway no longer has, so when users browse to your Gangway URL they will get an error when they attempt to log in. The only solution to that issue is to have each user delete all the gangway cookies from their browser. You have been warned!
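One way to avoid the regeneration pitfall (a sketch; `SESSION_KEY` is just a local variable name, and the commented `helm` invocation assumes the Helm 2 CLI used elsewhere in this README):

```shell
# Generate a stable 32-byte key once and keep it somewhere safe;
# base64 of 32 random bytes is a 44-character string.
SESSION_KEY=$(openssl rand -base64 32)
echo "${#SESSION_KEY}"   # → 44

# Pin the same key on every install/upgrade so Helm never regenerates it:
# helm upgrade --install --wait my-release stable/gangway \
#   --set gangway.sessionKey="$SESSION_KEY"
```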
+
+### Specifying Values
+
+Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
+
+```console
+$ helm upgrade --install --wait my-release \
+ --set ingress.enabled=true \
+ stable/gangway
+```
+
+Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,
+
+```console
+$ helm upgrade --install --wait my-release stable/gangway -f values.yaml
+```
+
+> **Tip**: You can copy the default [values.yaml](values.yaml) and make any required edits in there
+
+[gangway]: https://github.com/heptiolabs/gangway
+[gangway docs]: https://github.com/heptiolabs/gangway/tree/master/docs
+[Traefik]: https://docs.traefik.io/user-guide/kubernetes/
+[Gorilla Secure Cookie]: https://github.com/gorilla/securecookie
\ No newline at end of file
diff --git a/stable/gangway/templates/NOTES.txt b/stable/gangway/templates/NOTES.txt
new file mode 100644
index 000000000000..def27d904e9c
--- /dev/null
+++ b/stable/gangway/templates/NOTES.txt
@@ -0,0 +1,19 @@
+1. Get the application URL by running these commands:
+{{- if .Values.ingress.enabled }}
+{{- range .Values.ingress.hosts }}
+ http{{ if $.Values.ingress.tls }}s{{ end }}://{{ . }}{{ $.Values.ingress.path }}
+{{- end }}
+{{- else if contains "NodePort" .Values.service.type }}
+ export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "gangway.fullname" . }})
+ export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
+ echo http://$NODE_IP:$NODE_PORT
+{{- else if contains "LoadBalancer" .Values.service.type }}
+ NOTE: It may take a few minutes for the LoadBalancer IP to be available.
+ You can watch its status by running 'kubectl get svc -w {{ include "gangway.fullname" . }}'
+ export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "gangway.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
+ echo http://$SERVICE_IP:{{ .Values.service.port }}
+{{- else if contains "ClusterIP" .Values.service.type }}
+ export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "gangway.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
+ echo "Visit http://127.0.0.1:8080 to use your application"
+ kubectl port-forward $POD_NAME 8080:80
+{{- end }}
\ No newline at end of file
diff --git a/stable/gangway/templates/_helpers.tpl b/stable/gangway/templates/_helpers.tpl
new file mode 100644
index 000000000000..b01e7a63443d
--- /dev/null
+++ b/stable/gangway/templates/_helpers.tpl
@@ -0,0 +1,32 @@
+{{/* vim: set filetype=mustache: */}}
+{{/*
+Expand the name of the chart.
+*/}}
+{{- define "gangway.name" -}}
+{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+
+{{/*
+Create a default fully qualified app name.
+We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
+If release name contains chart name it will be used as a full name.
+*/}}
+{{- define "gangway.fullname" -}}
+{{- if .Values.fullnameOverride -}}
+{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
+{{- else -}}
+{{- $name := default .Chart.Name .Values.nameOverride -}}
+{{- if contains $name .Release.Name -}}
+{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
+{{- else -}}
+{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Create chart name and version as used by the chart label.
+*/}}
+{{- define "gangway.chart" -}}
+{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
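The `gangway.fullname` helper above encodes three rules: an explicit override wins, a release name that already contains the chart name is reused, and everything is clipped to the 63-character DNS label limit with trailing dashes trimmed. A rough Python translation (illustrative only, not part of the chart; note Helm's `trimSuffix "-"` strips a single trailing dash, approximated here with `rstrip`):

```python
def gangway_fullname(release_name, chart_name, name_override=None, fullname_override=None):
    """Sketch of the gangway.fullname helper: 63-char DNS limit, no trailing dash."""
    def trunc(s):
        # trunc 63 | trimSuffix "-" (rstrip is a close approximation)
        return s[:63].rstrip("-")

    if fullname_override:
        return trunc(fullname_override)
    name = name_override or chart_name
    if name in release_name:  # release name already contains chart name
        return trunc(release_name)
    return trunc(f"{release_name}-{name}")

print(gangway_fullname("my-gangway", "gangway"))  # my-gangway
print(gangway_fullname("auth", "gangway"))        # auth-gangway
```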
diff --git a/stable/gangway/templates/configmap.yaml b/stable/gangway/templates/configmap.yaml
new file mode 100644
index 000000000000..0b771a6e0ba6
--- /dev/null
+++ b/stable/gangway/templates/configmap.yaml
@@ -0,0 +1,14 @@
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: {{ include "gangway.fullname" . }}
+ labels:
+ app.kubernetes.io/name: {{ include "gangway.name" . }}
+ helm.sh/chart: {{ include "gangway.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+data:
+ gangway.yaml: |
+ {{- .Values.gangway | toYaml | nindent 4 }}
+
+
diff --git a/stable/gangway/templates/deployment.yaml b/stable/gangway/templates/deployment.yaml
new file mode 100644
index 000000000000..d19ae0369285
--- /dev/null
+++ b/stable/gangway/templates/deployment.yaml
@@ -0,0 +1,96 @@
+apiVersion: apps/v1beta2
+kind: Deployment
+metadata:
+ name: {{ include "gangway.fullname" . }}
+ labels:
+ app.kubernetes.io/name: {{ include "gangway.name" . }}
+ helm.sh/chart: {{ include "gangway.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+spec:
+ replicas: {{ .Values.replicaCount }}
+ selector:
+ matchLabels:
+ app.kubernetes.io/name: {{ include "gangway.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ template:
+ metadata:
+ labels:
+ app.kubernetes.io/name: {{ include "gangway.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ annotations:
+ check/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
+ check/values: {{ .Files.Get "../values.yaml" | sha256sum }}
+ spec:
+ {{- if .Values.image.pullSecrets }}
+ imagePullSecrets:
+{{ toYaml .Values.image.pullSecrets | indent 8 }}
+ {{- end }}
+ containers:
+ - name: {{ .Chart.Name }}
+ image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
+ imagePullPolicy: {{ .Values.image.pullPolicy }}
+ command:
+ - gangway
+ - -config
+ - /gangway/gangway.yaml
+ env:
+ - name: GANGWAY_SESSION_SECURITY_KEY
+ valueFrom:
+ secretKeyRef:
+ key: sessionkey
+ name: {{ include "gangway.fullname" . }}-key
+ ports:
+ - name: http
+ containerPort: {{ .Values.gangway.port }}
+ protocol: TCP
+ volumeMounts:
+ - name: gangway
+ mountPath: /gangway/
+ {{- if and .Values.gangway.certData .Values.gangway.keyData }}
+ - name: gangway-tls
+ mountPath: /etc/gangway/tls
+ readOnly: true
+ {{ end }}
+ livenessProbe:
+ failureThreshold: 3
+ httpGet:
+ path: {{ .Values.gangway.httpPath }}
+ port: {{ .Values.gangway.port }}
+ scheme: HTTP
+ initialDelaySeconds: 20
+ periodSeconds: 60
+ successThreshold: 1
+ timeoutSeconds: 1
+ readinessProbe:
+ failureThreshold: 3
+ httpGet:
+ path: {{ .Values.gangway.httpPath }}
+ port: {{ .Values.gangway.port }}
+ scheme: HTTP
+ periodSeconds: 10
+ successThreshold: 1
+ timeoutSeconds: 1
+ resources:
+{{ toYaml .Values.resources | indent 12 }}
+ {{- with .Values.nodeSelector }}
+ nodeSelector:
+{{ toYaml . | indent 8 }}
+ {{- end }}
+ {{- with .Values.affinity }}
+ affinity:
+{{ toYaml . | indent 8 }}
+ {{- end }}
+ {{- with .Values.tolerations }}
+ tolerations:
+{{ toYaml . | indent 8 }}
+ {{- end }}
+ volumes:
+ - name: gangway
+ configMap:
+ name: {{ include "gangway.fullname" . }}
+ {{- if and .Values.gangway.certData .Values.gangway.keyData }}
+ - name: gangway-tls
+ secret:
+ secretName: {{ include "gangway.fullname" . }}-tls
+ {{ end -}}
\ No newline at end of file
diff --git a/stable/gangway/templates/ingress.yaml b/stable/gangway/templates/ingress.yaml
new file mode 100644
index 000000000000..db3840327045
--- /dev/null
+++ b/stable/gangway/templates/ingress.yaml
@@ -0,0 +1,38 @@
+{{- if .Values.ingress.enabled -}}
+{{- $fullName := include "gangway.fullname" . -}}
+{{- $ingressPath := .Values.ingress.path -}}
+apiVersion: extensions/v1beta1
+kind: Ingress
+metadata:
+ name: {{ $fullName }}
+ labels:
+ app.kubernetes.io/name: {{ include "gangway.name" . }}
+ helm.sh/chart: {{ include "gangway.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+{{- with .Values.ingress.annotations }}
+ annotations:
+{{ toYaml . | indent 4 }}
+{{- end }}
+spec:
+{{- if .Values.ingress.tls }}
+ tls:
+ {{- range .Values.ingress.tls }}
+ - hosts:
+ {{- range .hosts }}
+ - {{ . | quote }}
+ {{- end }}
+ secretName: {{ .secretName }}
+ {{- end }}
+{{- end }}
+ rules:
+ {{- range .Values.ingress.hosts }}
+ - host: {{ . | quote }}
+ http:
+ paths:
+ - path: {{ $ingressPath }}
+ backend:
+ serviceName: {{ $fullName }}svc
+ servicePort: http
+ {{- end }}
+{{- end }}
diff --git a/stable/gangway/templates/key.yaml b/stable/gangway/templates/key.yaml
new file mode 100644
index 000000000000..c034336476e6
--- /dev/null
+++ b/stable/gangway/templates/key.yaml
@@ -0,0 +1,12 @@
+apiVersion: v1
+kind: Secret
+metadata:
+ name: {{ include "gangway.fullname" . }}-key
+ labels:
+ app.kubernetes.io/name: {{ include "gangway.name" . }}
+ helm.sh/chart: {{ include "gangway.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+type: Opaque
+data:
+ sessionkey: {{ ( default ( randAlphaNum 32 ) .Values.gangway.sessionKey ) | b64enc | quote }}
\ No newline at end of file
diff --git a/stable/gangway/templates/service.yaml b/stable/gangway/templates/service.yaml
new file mode 100644
index 000000000000..8f854f05ed46
--- /dev/null
+++ b/stable/gangway/templates/service.yaml
@@ -0,0 +1,23 @@
+apiVersion: v1
+kind: Service
+metadata:
+ # Need to append "svc" here because otherwise Kube will make an env var
+ # called GANGWAY_PORT with something like "tcp://100.67.143.54:80" as a value.
+ # The gangway binary then interprets this as a config variable and expects it
+ # to hold the int for the port to listen on. Result = bang!
+ name: {{ include "gangway.fullname" . }}svc
+ labels:
+ app.kubernetes.io/name: {{ include "gangway.name" . }}
+ helm.sh/chart: {{ include "gangway.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+spec:
+ type: {{ .Values.service.type }}
+ ports:
+ - port: {{ .Values.service.port }}
+ targetPort: {{ .Values.gangway.port }}
+ protocol: TCP
+ name: http
+ selector:
+ app.kubernetes.io/name: {{ include "gangway.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
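The "svc" suffix in the Service name works around Kubernetes' docker-link-style environment variables: for every Service, pods get a `<SERVICE_NAME>_PORT` variable (name uppercased, dashes to underscores) whose value looks like `tcp://100.67.143.54:80`, which collides with gangway's own `GANGWAY_PORT` config variable. A small sketch of the name mapping:

```python
def kube_link_env_var(service_name):
    """Name of the docker-link-style env var Kubernetes injects for a Service."""
    return service_name.upper().replace("-", "_") + "_PORT"

# A Service named plain "gangway" would shadow gangway's GANGWAY_PORT config
# variable (gangway expects an int port there, not "tcp://IP:PORT").
print(kube_link_env_var("gangway"))     # GANGWAY_PORT
print(kube_link_env_var("gangwaysvc"))  # GANGWAYSVC_PORT — no collision
```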
diff --git a/stable/gangway/templates/ssl.yaml b/stable/gangway/templates/ssl.yaml
new file mode 100644
index 000000000000..06a376102b22
--- /dev/null
+++ b/stable/gangway/templates/ssl.yaml
@@ -0,0 +1,15 @@
+{{- if and .Values.gangway.certData .Values.gangway.keyData -}}
+apiVersion: v1
+type: kubernetes.io/tls
+kind: Secret
+metadata:
+ name: {{ include "gangway.fullname" . }}-tls
+ labels:
+ app.kubernetes.io/name: {{ include "gangway.name" . }}
+ helm.sh/chart: {{ include "gangway.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+data:
+ tls.crt: {{ .Values.gangway.certData | b64enc }}
+ tls.key: {{ .Values.gangway.keyData | b64enc }}
+{{- end -}}
\ No newline at end of file
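Sprig's `b64enc`, used above for `tls.crt` and `tls.key`, is plain base64 encoding; the same transformation in Python for reference:

```python
import base64

def b64enc(s: str) -> str:
    # Equivalent of Sprig's b64enc pipeline used for the tls Secret data.
    return base64.b64encode(s.encode()).decode()
```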
diff --git a/stable/gangway/values.yaml b/stable/gangway/values.yaml
new file mode 100644
index 000000000000..4f00c4dc740e
--- /dev/null
+++ b/stable/gangway/values.yaml
@@ -0,0 +1,143 @@
+replicaCount: 1
+
+image:
+ repository: gcr.io/heptio-images/gangway
+ tag: v3.0.0
+ pullPolicy: IfNotPresent
+ ## Optional array of imagePullSecrets containing private registry credentials
+ ## Ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
+ pullSecrets: []
+ # - name: secretName
+
+nameOverride: ""
+fullnameOverride: ""
+
+gangway:
+ # The address to listen on. Defaults to 0.0.0.0 to listen on all interfaces.
+ # Env var: GANGWAY_HOST
+ # host: 0.0.0.0
+
+ # The port to listen on. Defaults to 8080.
+ # Env var: GANGWAY_PORT
+ port: 8080
+
+ # Should Gangway serve TLS vs. plain HTTP? Default: false
+ # Env var: GANGWAY_SERVE_TLS
+ # serveTLS: false
+
+ # The public cert file (including root and intermediates) to use when serving TLS.
+ # Env var: GANGWAY_CERT_FILE
+ # certFile: /etc/gangway/tls/tls.crt
+
+ # The private key file when serving TLS.
+ # Env var: GANGWAY_KEY_FILE
+ # keyFile: /etc/gangway/tls/tls.key
+
+ # The cluster name. Used in UI and kubectl config instructions.
+ # Env var: GANGWAY_CLUSTER_NAME
+ clusterName: "${GANGWAY_CLUSTER_NAME}"
+
+ # OAuth2 URL to start authorization flow.
+ # Env var: GANGWAY_AUTHORIZE_URL
+ authorizeURL: "https://${DNS_NAME}/authorize"
+
+ # OAuth2 URL to obtain access tokens.
+ # Env var: GANGWAY_TOKEN_URL
+ tokenURL: "https://${DNS_NAME}/oauth/token"
+
+ # Endpoint that provides user profile information [optional]. Not all providers
+ # will require this.
+ # Env var: GANGWAY_AUDIENCE
+ audience: "https://${DNS_NAME}/userinfo"
+
+  # Used to specify the scope of the requested OAuth authorization.
+ scopes: ["openid", "profile", "email", "offline_access"]
+
+ # Where to redirect back to. This should be a URL where gangway is reachable.
+  # Typically this also needs to be registered as part of the OAuth application
+  # with the OAuth provider.
+ # Env var: GANGWAY_REDIRECT_URL
+ redirectURL: "https://${GANGWAY_REDIRECT_URL}/callback"
+
+ # API client ID as indicated by the identity provider
+ # Env var: GANGWAY_CLIENT_ID
+ clientID: "${GANGWAY_CLIENT_ID}"
+
+ # API client secret as indicated by the identity provider
+ # Env var: GANGWAY_CLIENT_SECRET
+ clientSecret: "${GANGWAY_CLIENT_SECRET}"
+
+  # Some identity providers accept an empty client secret, but this
+  # is not generally considered a good idea. If you have to use an
+  # empty secret and accept the risks that come with it, you can
+  # set this to true.
+ # allowEmptyClientSecret: false
+
+ # The JWT claim to use as the username. This is used in UI.
+ # Default is "nickname". This is combined with the clusterName
+ # for the "user" portion of the kubeconfig.
+ # Env var: GANGWAY_USERNAME_CLAIM
+ usernameClaim: "sub"
+
+ # The API server endpoint used to configure kubectl
+ # Env var: GANGWAY_APISERVER_URL
+ apiServerURL: "https://${GANGWAY_APISERVER_URL}"
+
+ # The path to find the CA bundle for the API server. Used to configure kubectl.
+ # This is typically mounted into the default location for workloads running on
+ # a Kubernetes cluster and doesn't need to be set.
+ # Env var: GANGWAY_CLUSTER_CA_PATH
+ # cluster_ca_path: "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
+
+  # The path to a root CA to trust for self-signed certificates at the OAuth2 URLs
+ # Env var: GANGWAY_TRUSTED_CA_PATH
+ # trustedCAPath: /cacerts/rootca.crt
+
+  # The path gangway uses to create URLs (defaults to "")
+ # Env var: GANGWAY_HTTP_PATH
+ # httpPath: "https://${GANGWAY_HTTP_PATH}"
+
+ # The key to use when encrypting the contents of cookies.
+  # You can leave this blank and the chart will generate a random key; however,
+  # use this with caution. Subsequent upgrades to the deployment will
+  # regenerate this key, which will cause Gangway to fail when attempting to
+  # decrypt cookies stored in users' browsers that were encrypted with the old
+  # key.
+  # TL;DR: auto-generation is safe in test environments; provide your
+  # own key in production.
+ # sessionKey:
+
+service:
+ type: ClusterIP
+ port: 80
+
+ingress:
+ enabled: false
+ annotations: {}
+ # kubernetes.io/ingress.class: nginx
+ # kubernetes.io/tls-acme: "true"
+ path: /
+ hosts:
+ - chart-example.local
+ tls: []
+ # - secretName: chart-example-tls
+ # hosts:
+ # - chart-example.local
+
+resources: {}
+ # We usually recommend not to specify default resources and to leave this as a conscious
+ # choice for the user. This also increases chances charts run on environments with little
+ # resources, such as Minikube. If you do want to specify resources, uncomment the following
+ # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
+ # limits:
+ # cpu: 100m
+ # memory: 128Mi
+ # requests:
+ # cpu: 100m
+ # memory: 128Mi
+
+nodeSelector: {}
+
+tolerations: []
+
+affinity: {}
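The `sessionKey` caveat in the values above can be made concrete: once a key is regenerated on upgrade, material protected with the old key no longer verifies. This sketch uses HMAC signing as a stand-in (gangway's actual cookie scheme is not specified in this chart):

```python
import hashlib
import hmac
import os

def sign_cookie(value: bytes, key: bytes) -> bytes:
    """Stand-in for cookie protection with the session key."""
    return hmac.new(key, value, hashlib.sha256).digest()

old_key = os.urandom(32)          # key generated at first install
cookie_sig = sign_cookie(b"session-data", old_key)

new_key = os.urandom(32)          # key regenerated on `helm upgrade`
# The cookie stored in the user's browser no longer verifies:
print(hmac.compare_digest(cookie_sig, sign_cookie(b"session-data", new_key)))  # False
```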
diff --git a/stable/gce-ingress/Chart.yaml b/stable/gce-ingress/Chart.yaml
index 4fc8302c866c..96d928e543dc 100644
--- a/stable/gce-ingress/Chart.yaml
+++ b/stable/gce-ingress/Chart.yaml
@@ -2,7 +2,7 @@ apiVersion: v1
appVersion: "1.4.0"
description: A GCE Ingress Controller
name: gce-ingress
-version: 1.1.1
+version: 1.1.2
keywords:
- ingress
- gce
diff --git a/stable/gce-ingress/README.md b/stable/gce-ingress/README.md
index 7e86ea5fabd2..cd5a9effa035 100644
--- a/stable/gce-ingress/README.md
+++ b/stable/gce-ingress/README.md
@@ -1,6 +1,6 @@
# gce-ingress
-[gce-ingress](https://github.com/kubernetes/gce-gce) is an Ingress controller that configures GCE loadbalancers
+[ingress-gce](https://github.com/kubernetes/ingress-gce) is an Ingress controller that configures GCE loadbalancers
To use, add the `kubernetes.io/ingress.class: "gce"` annotation to your Ingress resources.
diff --git a/stable/ghost/Chart.yaml b/stable/ghost/Chart.yaml
index cb629baa23cd..3e224605d4bd 100644
--- a/stable/ghost/Chart.yaml
+++ b/stable/ghost/Chart.yaml
@@ -1,6 +1,7 @@
+apiVersion: v1
name: ghost
-version: 6.3.6
-appVersion: 2.13.1
+version: 6.7.13
+appVersion: 2.22.3
description: A simple, powerful publishing platform that allows you to share your stories with the world
keywords:
- ghost
diff --git a/stable/ghost/README.md b/stable/ghost/README.md
index 491dd0b45cef..bfc597f7faa1 100644
--- a/stable/ghost/README.md
+++ b/stable/ghost/README.md
@@ -14,7 +14,7 @@ This chart bootstraps a [Ghost](https://github.com/bitnami/bitnami-docker-ghost)
It also packages the [Bitnami MariaDB chart](https://github.com/kubernetes/charts/tree/master/stable/mariadb) which is required for bootstrapping a MariaDB deployment for the database requirements of the Ghost application.
-Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters.
+Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters. This chart has been tested to work with NGINX Ingress, cert-manager, fluentd and Prometheus on top of the [BKPR](https://kubeprod.io/).
## Prerequisites
@@ -50,6 +50,7 @@ The following table lists the configurable parameters of the Ghost chart and the
| Parameter | Description | Default |
|-------------------------------------|---------------------------------------------------------------|----------------------------------------------------------|
| `global.imageRegistry` | Global Docker image registry | `nil` |
+| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) |
| `image.registry` | Ghost image registry | `docker.io` |
| `image.repository` | Ghost Image name | `bitnami/ghost` |
| `image.tag` | Ghost Image tag | `{VERSION}` |
@@ -60,6 +61,7 @@ The following table lists the configurable parameters of the Ghost chart and the
| `volumePermissions.image.tag` | Init container volume-permissions image tag | `latest` |
| `volumePermissions.image.pullPolicy`| Init container volume-permissions image pull policy | `Always` |
| `ghostHost` | Ghost host to create application URLs | `nil` |
+| `ghostPort` | Ghost port to use in application URLs (defaults to `service.port` if `nil`) | `nil` |
| `ghostProtocol` | Protocol (http or https) to use in the application URLs | `http` |
| `ghostPath` | Ghost path to create application URLs | `nil` |
| `ghostUsername` | User of the application | `user@example.com` |
@@ -88,6 +90,7 @@ The following table lists the configurable parameters of the Ghost chart and the
| `ingress.hosts[0].name` | Hostname to your Ghost installation | `ghost.local` |
| `ingress.hosts[0].path` | Path within the url structure | `/` |
| `ingress.hosts[0].tls` | Utilize TLS backend in ingress | `false` |
+| `ingress.hosts[0].tlsHosts` | Array of TLS hosts for ingress record (defaults to `ingress.hosts[0].name` if `nil`) | `nil` |
| `ingress.hosts[0].tlsSecret` | TLS Secret (certificates) | `ghost.local-tls-secret` |
| `ingress.secrets[0].name` | TLS Secret Name | `nil` |
| `ingress.secrets[0].certificate` | TLS Secret Certificate | `nil` |
diff --git a/stable/ghost/templates/NOTES.txt b/stable/ghost/templates/NOTES.txt
index e8bfc24f6216..ffcff11dce73 100644
--- a/stable/ghost/templates/NOTES.txt
+++ b/stable/ghost/templates/NOTES.txt
@@ -27,8 +27,8 @@ host. To configure Ghost with the URL of your service:
2. Complete your Ghost deployment by running:
- helm upgrade {{ .Release.Name }} stable/ghost\
- --set service.type={{ .Values.service.type }},ghostHost=$APP_HOST,ghostPassword=$APP_PASSWORD,{{ if .Values.mariadb.mariadbRootPassword }},mariadb.mariadbRootPassword=$DATABASE_ROOT_PASSWORD{{ end }}mariadb.db.password=$APP_DATABASE_PASSWORD
+ helm upgrade {{ .Release.Name }} stable/{{ .Chart.Name }} \
+ --set service.type={{ .Values.service.type }},ghostHost=$APP_HOST,ghostPassword=$APP_PASSWORD,{{ if .Values.mariadb.mariadbRootPassword }},mariadb.mariadbRootPassword=$DATABASE_ROOT_PASSWORD{{ end }}mariadb.db.password=$APP_DATABASE_PASSWORD{{- if .Values.global }}{{- if .Values.global.imagePullSecrets }},global.imagePullSecrets={{ .Values.global.imagePullSecrets }}{{- end }}{{- end }}
{{- else -}}
1. Get the Ghost URL by running:
diff --git a/stable/ghost/templates/_helpers.tpl b/stable/ghost/templates/_helpers.tpl
index 9dcdbfebe7f5..711a24f10d34 100644
--- a/stable/ghost/templates/_helpers.tpl
+++ b/stable/ghost/templates/_helpers.tpl
@@ -109,3 +109,38 @@ Also, we can't use a single if because lazy evaluation is not an option
{{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
{{- end -}}
{{- end -}}
+
+{{/*
+Return the proper Docker Image Registry Secret Names
+*/}}
+{{- define "ghost.imagePullSecrets" -}}
+{{/*
+Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
+but Helm 2.9 and 2.10 does not support it, so we need to implement this if-else logic.
+Also, we can not use a single if because lazy evaluation is not an option
+*/}}
+{{- if .Values.global }}
+{{- if .Values.global.imagePullSecrets }}
+imagePullSecrets:
+{{- range .Values.global.imagePullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- else if or .Values.image.pullSecrets .Values.volumePermissions.image.pullSecrets }}
+imagePullSecrets:
+{{- range .Values.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- range .Values.volumePermissions.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- end -}}
+{{- else if or .Values.image.pullSecrets .Values.volumePermissions.image.pullSecrets }}
+imagePullSecrets:
+{{- range .Values.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- range .Values.volumePermissions.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- end -}}
+{{- end -}}
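The `ghost.imagePullSecrets` helper added above implements a precedence rule: `global.imagePullSecrets` wins outright; otherwise the per-image lists from `image.pullSecrets` and `volumePermissions.image.pullSecrets` are concatenated. A compact sketch of the effective behavior (the template's nested if/else exists only to stay compatible with Helm 2.9/2.10 scoping):

```python
def image_pull_secrets(values):
    """Mirror ghost.imagePullSecrets precedence: global wins, else per-image lists."""
    global_secrets = (values.get("global") or {}).get("imagePullSecrets")
    if global_secrets:
        return list(global_secrets)
    merged = list(values.get("image", {}).get("pullSecrets") or [])
    merged += values.get("volumePermissions", {}).get("image", {}).get("pullSecrets") or []
    return merged

print(image_pull_secrets({"global": {"imagePullSecrets": ["reg-key"]},
                          "image": {"pullSecrets": ["ignored"]}}))  # ['reg-key']
```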
diff --git a/stable/ghost/templates/deployment.yaml b/stable/ghost/templates/deployment.yaml
index 40c480f91308..02ab4bca720c 100644
--- a/stable/ghost/templates/deployment.yaml
+++ b/stable/ghost/templates/deployment.yaml
@@ -35,12 +35,7 @@ spec:
- mountPath: {{ .Values.persistence.path }}
name: ghost-data
{{- end }}
- {{- if .Values.image.pullSecrets }}
- imagePullSecrets:
- {{- range .Values.image.pullSecrets }}
- - name: {{ . }}
- {{- end}}
- {{- end }}
+{{- include "ghost.imagePullSecrets" . | indent 6 }}
containers:
- name: {{ template "ghost.fullname" . }}
image: {{ template "ghost.image" . }}
@@ -90,7 +85,11 @@ spec:
- name: GHOST_PROTOCOL
value: {{ .Values.ghostProtocol | quote }}
- name: GHOST_PORT_NUMBER
+ {{- if .Values.ghostPort }}
+ value: {{ .Values.ghostPort | quote }}
+ {{- else }}
value: {{ .Values.service.port | quote }}
+ {{- end }}
- name: GHOST_USERNAME
value: {{ .Values.ghostUsername | quote }}
- name: GHOST_PASSWORD
@@ -139,6 +138,10 @@ spec:
httpHeaders:
- name: Host
value: {{ include "ghost.host" . | quote }}
+ {{- if eq .Values.ghostProtocol "https" }}
+ - name: X-Forwarded-Proto
+ value: https
+ {{- end }}
initialDelaySeconds: 120
timeoutSeconds: 5
failureThreshold: 6
@@ -149,6 +152,10 @@ spec:
httpHeaders:
- name: Host
value: {{ include "ghost.host" . | quote }}
+ {{- if eq .Values.ghostProtocol "https" }}
+ - name: X-Forwarded-Proto
+ value: https
+ {{- end }}
initialDelaySeconds: 30
timeoutSeconds: 3
periodSeconds: 5
diff --git a/stable/ghost/templates/ingress.yaml b/stable/ghost/templates/ingress.yaml
index 56eb1c779899..a08bfc8404ad 100644
--- a/stable/ghost/templates/ingress.yaml
+++ b/stable/ghost/templates/ingress.yaml
@@ -30,7 +30,13 @@ spec:
{{- range .Values.ingress.hosts }}
{{- if .tls }}
- hosts:
+ {{- if .tlsHosts }}
+ {{- range $host := .tlsHosts }}
+ - {{ $host }}
+ {{- end }}
+ {{- else }}
- {{ .name }}
+ {{- end }}
secretName: {{ .tlsSecret }}
{{- end }}
{{- end }}
diff --git a/stable/ghost/values.yaml b/stable/ghost/values.yaml
index da0eedfde3f9..72bc80feaed8 100644
--- a/stable/ghost/values.yaml
+++ b/stable/ghost/values.yaml
@@ -1,8 +1,11 @@
-## Global Docker image registry
-## Please, note that this will override the image registry for all the images, including dependencies, configured to use the global value
+## Global Docker image parameters
+## Please, note that this will override the image parameters, including dependencies, configured to use the global value
+## Current available global Docker image parameters: imageRegistry and imagePullSecrets
##
# global:
-# imageRegistry:
+# imageRegistry: myRegistryName
+# imagePullSecrets:
+# - myRegistryKeySecretName
## Bitnami Ghost image version
## ref: https://hub.docker.com/r/bitnami/ghost/tags/
@@ -10,7 +13,7 @@
image:
registry: docker.io
repository: bitnami/ghost
- tag: 2.13.1
+ tag: 2.22.3
## Specify a imagePullPolicy
## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
@@ -21,9 +24,8 @@ image:
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
- # - myRegistrKeySecretName
+ # - myRegistryKeySecretName
-##
## Init containers parameters:
## volumePermissions: Change the owner of the persist volume mountpoint to RunAsUser:fsGroup
##
@@ -33,12 +35,19 @@ volumePermissions:
repository: bitnami/minideb
tag: latest
pullPolicy: Always
+ ## Optionally specify an array of imagePullSecrets.
+ ## Secrets must be manually created in the namespace.
+ ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
+ ##
+ # pullSecrets:
+ # - myRegistryKeySecretName
-## Ghost protocol, host and path to create application URLs
+## Ghost protocol, host, port and path to create application URLs
## ref: https://github.com/bitnami/bitnami-docker-ghost#configuration
##
ghostProtocol: http
# ghostHost:
+# ghostPort:
ghostPath: /
@@ -223,6 +232,13 @@ ingress:
## Set this to true in order to enable TLS on the ingress record
tls: false
+ ## Optionally specify the TLS hosts for the ingress record
+ ## Useful when the Ingress controller supports www-redirection
+ ## If not specified, the above host name will be used
+ # tlsHosts:
+ # - www.ghost.local
+ # - ghost.local
+
## If TLS is set to true, you must declare what secret will store the key/certificate for TLS
tlsSecret: ghost.local-tls
diff --git a/stable/gocd/CHANGELOG.md b/stable/gocd/CHANGELOG.md
index ccd9f1c19533..883229d767e3 100644
--- a/stable/gocd/CHANGELOG.md
+++ b/stable/gocd/CHANGELOG.md
@@ -1,3 +1,49 @@
+### 1.10.0
+
+* [554019b](https://github.com/kubernetes/charts/commit/554019b):
+
+- Bump up GoCD Version to 19.4.0
+
+### 1.9.1
+
+- Add support for Deployment and Pod annotations.
+
+### 1.9.0
+
+- Bump up k8s elastic agent plugin to latest.
+- Bump up GoCD Version to 19.3.0
+
+### 1.8.1
+
+* [0f99647](https://github.com/helm/charts/commit/0f99647):
+
+- Update docker registry artifact plugin to latest stable release.
+
+### 1.8.0
+
+* [8ec8c89](https://github.com/helm/charts/commit/8ec8c89):
+
+- Update agent image to gocd-agent-alpine-3.9
+
+* [dcd3332](https://github.com/helm/charts/commit/dcd3332):
+
+- Introduce server and agent pre stop hooks for users to optionally provide pre stop scripts.
+
+### 1.7.1
+
+* [0b0e2bf](https://github.com/kubernetes/charts/commit/0b0e2bf):
+
+- Bump k8s elastic agent plugin to latest.
+
+### 1.7.0
+
+* [908b129](https://github.com/kubernetes/charts/commit/908b129):
+
+- Bump up GoCD Version to 19.2.0
+
+### 1.6.6
+
+* [84bd7fe](https://github.com/kubernetes/charts/commit/84bd7fe):
+
+- If there is no host in template ingress.yaml, use default backend.
+
### 1.6.5
* [f44d408](https://github.com/kubernetes/charts/commit/f44d408):
diff --git a/stable/gocd/Chart.yaml b/stable/gocd/Chart.yaml
index 4a5b053e6d12..96d6580b7d6f 100644
--- a/stable/gocd/Chart.yaml
+++ b/stable/gocd/Chart.yaml
@@ -1,7 +1,8 @@
+apiVersion: v1
name: gocd
home: https://www.gocd.org/
-version: 1.6.5
-appVersion: 19.1.0
+version: 1.10.0
+appVersion: 19.4.0
description: GoCD is an open-source continuous delivery server to model and visualize complex workflows with ease.
icon: https://gocd.github.io/assets/images/go-icon-black-192x192.png
keywords:
diff --git a/stable/gocd/README.md b/stable/gocd/README.md
index 9dbcd00222a4..2f5d7a33e999 100644
--- a/stable/gocd/README.md
+++ b/stable/gocd/README.md
@@ -70,8 +70,12 @@ The following tables list the configurable parameters of the GoCD chart and thei
| Parameter | Description | Default |
| ------------------------------------------ | ------------------------------------------------------------------------------------------------------------- | ------------------- |
| `server.enabled` | Enable GoCD Server. Supported values are `true`, `false`. When enabled, the GoCD server deployment is done on helm install. | `true` |
+| `server.annotations.deployment` | GoCD server Deployment annotations. | `{}` |
+| `server.annotations.pod` | GoCD server Pod annotations. | `{}` |
| `server.shouldPreconfigure` | Preconfigure GoCD Server to have a default elastic agent profile and Kubernetes elastic agent plugin settings. Supported values are `true`, `false`. | `true` |
-| `server.preconfigureCommand` | Preconfigure GOCD Server with a custom command (shell,python, etc ...). Supported value is a list. | `["/bin/bash", "/preconfigure_server.sh"]`|
+| `server.preconfigureCommand` | Preconfigure GoCD Server with a custom command (shell, python, etc.). Supported value is a list. | `["/bin/bash", "/preconfigure_server.sh"]`|
+| `server.preStop` | Perform cleanup and backup before stopping the gocd server. Supported value is a list. | `nil` |
+| `server.terminationGracePeriodSeconds` | Optional duration in seconds the gocd server pod needs to terminate gracefully. | `nil` |
| `server.image.repository` | GoCD server image | `gocd/gocd-server` |
| `server.image.tag` | GoCD server image tag | `.Chart.appVersion` |
| `server.image.pullPolicy` | Image pull policy | `IfNotPresent` |
@@ -156,10 +160,16 @@ $ kubectl create secret generic gocd-server-ssh \
### GoCD Agent
+ *Note: This is only for static gocd agents brought up in the cluster via the helm chart. The elastic agent pods need to be separately configured using [elastic agent profiles](https://docs.gocd.org/current/configuration/elastic_agents.html#elastic-agent-profile)*
+
| Parameter | Description | Default |
| ----------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------- |
+| `agent.annotations.deployment` | GoCD Agent Deployment annotations. | `{}` |
+| `agent.annotations.pod` | GoCD Agent Pod annotations. | `{}` |
| `agent.replicaCount` | GoCD Agent replicas Count. By default, no agents are provided. | `0` |
-| `agent.deployStrategy` | GoCD Agent [deployment strategy](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy). | `{}` |
+| `agent.preStop` | Perform cleanup and backup before stopping the gocd agent. Supported value is a list. | `nil` |
+| `agent.terminationGracePeriodSeconds` | Optional duration in seconds the gocd agent pods need to terminate gracefully. | `nil` |
+| `agent.deployStrategy` | GoCD Agent [deployment strategy](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy). | `{}` |
| `agent.image.repository` | GoCD agent image | `gocd/gocd-agent-alpine-3.6` |
| `agent.image.tag` | GoCD agent image tag | `.Chart.appVersion` |
| `agent.image.pullPolicy` | Image pull policy | `IfNotPresent` |
diff --git a/stable/gocd/templates/configmap.yaml b/stable/gocd/templates/configmap.yaml
index ebdcc1208d8c..9d5065d80e82 100644
--- a/stable/gocd/templates/configmap.yaml
+++ b/stable/gocd/templates/configmap.yaml
@@ -36,14 +36,42 @@ data:
echo "No configuration found in cruise-config.xml. Using default preconfigure_server scripts to configure server" >> /godata/logs/preconfigure.log
+ echo "Trying to configure cluster profile." >> /godata/logs/preconfigure.log
+
+ (curl --fail -i 'http://localhost:8153/go/api/admin/elastic/cluster_profiles' \
+ -H'Accept: application/vnd.go.cd.v1+json' \
+ -H 'Content-Type: application/json' \
+ -X POST -d '{
+ "id": "k8-cluster-profile",
+ "plugin_id": "cd.go.contrib.elasticagent.kubernetes",
+ "properties": [
+ {
+ "key": "go_server_url",
+ "value": "https://{{ template "gocd.fullname" . }}-server:{{ .Values.server.service.httpsPort }}/go"
+ },
+ {
+ "key": "kubernetes_cluster_url",
+ "value": "https://'$KUBERNETES_SERVICE_HOST':'$KUBERNETES_SERVICE_PORT_HTTPS'"
+ },
+ {
+ "key": "namespace",
+ "value": "{{ .Release.Namespace }}"
+ },
+ {
+ "key": "security_token",
+ "value": "'$KUBE_TOKEN'"
+ }
+ ]
+ }' >> /godata/logs/preconfigure.log)
+
echo "Trying to create an elastic profile now." >> /godata/logs/preconfigure.log
(curl --fail -i 'http://localhost:8153/go/api/elastic/profiles' \
- -H 'Accept: application/vnd.go.cd.v1+json' \
+ -H 'Accept: application/vnd.go.cd.v2+json' \
-H 'Content-Type: application/json' \
-X POST -d '{
"id": "demo-app",
- "plugin_id": "cd.go.contrib.elasticagent.kubernetes",
+ "cluster_profile_id": "k8-cluster-profile",
"properties": [
{
"key": "Image",
@@ -51,11 +79,11 @@ data:
},
{
"key": "PodConfiguration",
- "value": "apiVersion: v1\nkind: Pod\nmetadata:\n name: pod-name-prefix-{{ `{{ POD_POSTFIX }}` }}\n labels:\n app: web\nspec:\n serviceAccountName: {{ template "gocd.agentServiceAccountName" . }}\n containers:\n - name: gocd-agent-container-{{ `{{ CONTAINER_POSTFIX }}` }}\n image: gocd/gocd-agent-docker-dind:v{{ .Chart.AppVersion }}\n securityContext:\n privileged: true"
+ "value": "apiVersion: v1\nkind: Pod\nmetadata:\n name: gocd-agent-{{ `{{ POD_POSTFIX }}` }}\n labels:\n app: web\nspec:\n serviceAccountName: {{ template "gocd.agentServiceAccountName" . }}\n containers:\n - name: gocd-agent-container-{{ `{{ CONTAINER_POSTFIX }}` }}\n image: gocd/gocd-agent-docker-dind:v{{ .Chart.AppVersion }}\n securityContext:\n privileged: true"
},
{
- "key": "SpecifiedUsingPodConfiguration",
- "value": "true"
+ "key": "PodSpecType",
+ "value": "yaml"
},
{
"key": "Privileged",
@@ -64,33 +92,6 @@ data:
]
}' >> /godata/logs/preconfigure.log)
- echo "Trying to configure plugin settings." >> /godata/logs/preconfigure.log
-
- (curl --fail -i 'http://localhost:8153/go/api/admin/plugin_settings' \
- -H 'Accept: application/vnd.go.cd.v1+json' \
- -H 'Content-Type: application/json' \
- -X POST -d '{
- "plugin_id": "cd.go.contrib.elasticagent.kubernetes",
- "configuration": [
- {
- "key": "go_server_url",
- "value": "https://{{ template "gocd.fullname" . }}-server:{{ .Values.server.service.httpsPort }}/go"
- },
- {
- "key": "kubernetes_cluster_url",
- "value": "https://'$KUBERNETES_SERVICE_HOST':'$KUBERNETES_SERVICE_PORT_HTTPS'"
- },
- {
- "key": "namespace",
- "value": "{{ .Release.Namespace }}"
- },
- {
- "key": "security_token",
- "value": "'$KUBE_TOKEN'"
- }
- ]
- }' >> /godata/logs/preconfigure.log)
-
echo "Trying to create a hello world pipeline." >> /godata/logs/preconfigure.log
(curl --fail -i 'http://localhost:8153/go/api/admin/pipelines' \
diff --git a/stable/gocd/templates/gocd-agent-deployment.yaml b/stable/gocd/templates/gocd-agent-deployment.yaml
index 84b533c4361c..0f10bde92495 100644
--- a/stable/gocd/templates/gocd-agent-deployment.yaml
+++ b/stable/gocd/templates/gocd-agent-deployment.yaml
@@ -8,6 +8,10 @@ metadata:
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
component: agent
+ annotations:
+ {{- range $key, $value := .Values.agent.annotations.deployment }}
+ {{ $key }}: {{ $value | quote }}
+ {{- end }}
spec:
replicas: {{ .Values.agent.replicaCount }}
{{- if .Values.agent.deployStrategy }}
@@ -25,6 +29,10 @@ spec:
app: {{ template "gocd.name" . }}
release: {{ .Release.Name | quote }}
component: agent
+ annotations:
+ {{- range $key, $value := .Values.agent.annotations.pod }}
+ {{ $key }}: {{ $value | quote }}
+ {{- end }}
spec:
serviceAccountName: {{ template "gocd.agentServiceAccountName" . }}
{{- if or .Values.agent.persistence.enabled (or .Values.agent.security.ssh.enabled .Values.agent.persistence.extraVolumes) }}
@@ -132,8 +140,18 @@ spec:
readOnly: true
mountPath: /home/go/.ssh
{{- end }}
+ {{- if .Values.agent.preStop }}
+ lifecycle:
+ preStop:
+ exec:
+ command:
+{{ toYaml .Values.agent.preStop | indent 18 }}
+ {{- end }}
securityContext:
privileged: {{ .Values.agent.privileged }}
+ {{- if .Values.agent.terminationGracePeriodSeconds }}
+ terminationGracePeriodSeconds: {{ .Values.agent.terminationGracePeriodSeconds }}
+ {{- end }}
restartPolicy: {{ .Values.agent.restartPolicy }}
{{- if .Values.agent.nodeSelector }}
nodeSelector:
diff --git a/stable/gocd/templates/gocd-server-deployment.yaml b/stable/gocd/templates/gocd-server-deployment.yaml
index 1d53e80d6e85..192d57f68b66 100644
--- a/stable/gocd/templates/gocd-server-deployment.yaml
+++ b/stable/gocd/templates/gocd-server-deployment.yaml
@@ -9,6 +9,10 @@ metadata:
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
component: server
+ annotations:
+ {{- range $key, $value := .Values.server.annotations.deployment }}
+ {{ $key }}: {{ $value | quote }}
+ {{- end }}
spec:
replicas: 1
strategy:
@@ -24,6 +28,10 @@ spec:
app: {{ template "gocd.name" . }}
release: {{ .Release.Name | quote }}
component: server
+ annotations:
+ {{- range $key, $value := .Values.server.annotations.pod }}
+ {{ $key }}: {{ $value | quote }}
+ {{- end }}
spec:
serviceAccountName: {{ template "gocd.serviceAccountName" . }}
{{- if or .Values.server.shouldPreconfigure (or .Values.server.persistence.enabled (or .Values.server.security.ssh.enabled .Values.server.persistence.extraVolumes)) }}
@@ -111,15 +119,26 @@ spec:
readOnly: true
mountPath: /home/go/.ssh
{{- end }}
- {{- if .Values.server.shouldPreconfigure }}
+ {{- if or .Values.server.shouldPreconfigure .Values.server.preStop }}
lifecycle:
+ {{- if .Values.server.shouldPreconfigure}}
postStart:
exec:
command:
-{{ toYaml .Values.server.preconfigureCommand | indent 18 }}
+{{ toYaml .Values.server.preconfigureCommand | indent 18 }}
+ {{- end }}
+ {{- if .Values.server.preStop}}
+ preStop:
+ exec:
+ command:
+{{ toYaml .Values.server.preStop | indent 18 }}
+ {{- end }}
{{- end }}
resources:
{{ toYaml .Values.server.resources | indent 12 }}
+ {{- if .Values.server.terminationGracePeriodSeconds }}
+ terminationGracePeriodSeconds: {{ .Values.server.terminationGracePeriodSeconds }}
+ {{- end }}
restartPolicy: {{ .Values.server.restartPolicy }}
{{- if .Values.server.nodeSelector }}
nodeSelector:
diff --git a/stable/gocd/templates/ingress.yaml b/stable/gocd/templates/ingress.yaml
index 4dd546797e5f..52206b52bd3f 100644
--- a/stable/gocd/templates/ingress.yaml
+++ b/stable/gocd/templates/ingress.yaml
@@ -15,16 +15,25 @@ metadata:
{{ $key }}: {{ $value | quote }}
{{- end }}
spec:
- backend:
- serviceName: {{ template "gocd.fullname" . }}-server
- servicePort: {{ .Values.server.service.httpPort }}
+ {{- if .Values.server.ingress.hosts }}
+ {{ $dot := . }}
rules:
{{- range $host := .Values.server.ingress.hosts }}
- host: {{ $host }}
- {{- end -}}
+ http:
+ paths:
+ - backend:
+ serviceName: {{ template "gocd.fullname" $dot }}-server
+ servicePort: {{ $dot.Values.server.service.httpPort }}
+ {{- end }}
+ {{- else }}
+ backend:
+ serviceName: {{ template "gocd.fullname" . }}-server
+ servicePort: {{ .Values.server.service.httpPort }}
+ {{- end -}}
{{- if .Values.server.ingress.tls }}
tls:
{{ toYaml .Values.server.ingress.tls | indent 4 }}
{{- end -}}
{{- end -}}
-{{- end -}}
+{{- end -}}
\ No newline at end of file
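The rewritten ingress template above saves the root context in `$dot` before ranging over the hosts, because `.` is rebound to the current loop element inside `range`, so root-scoped fields like `.Values` and named templates are no longer reachable through `.`. A generic sketch of the pattern, with hypothetical chart and value names:

```yaml
# Hypothetical template fragment: inside `range`, `.` is the loop element,
# so root-scoped lookups must go through a saved reference such as $root.
{{- $root := . }}
rules:
{{- range $host := .Values.hosts }}
  - host: {{ $host }}
    http:
      paths:
        - backend:
            serviceName: {{ template "mychart.fullname" $root }}-server
            servicePort: {{ $root.Values.service.httpPort }}
{{- end }}
```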
diff --git a/stable/gocd/values.yaml b/stable/gocd/values.yaml
index 10bb689babde..af996626a92f 100644
--- a/stable/gocd/values.yaml
+++ b/stable/gocd/values.yaml
@@ -21,6 +21,15 @@ serviceAccount:
server:
# server.enabled is the toggle to run GoCD Server. Change to false for Agent Only Deployment.
enabled: true
+
+
+ # server.annotations sets the annotations for the GoCD Server Deployment and Pod spec.
+ annotations:
+ deployment:
+ # iam.amazonaws.com/role: arn:aws:iam::xxx:role/my-custom-role
+ pod:
+ # iam.amazonaws.com/role: arn:aws:iam::xxx:role/my-custom-role
+
# server.shouldPreconfigure is used to invoke a script to preconfigure the elastic agent profile and the plugin settings in the GoCD server.
# Note: If this value is set to true, then, the serviceAccount.name is configured for the GoCD server pod. The service account token is mounted as a secret and is used in the lifecycle hook.
# Note: An attempt to preconfigure the GoCD server is made. There are cases where the pre-configuration can fail and the GoCD server starts with an empty config.
@@ -28,6 +37,13 @@ server:
preconfigureCommand:
- "/bin/bash"
- "/preconfigure_server.sh"
+ # server.preStop - array of commands to use in the server pre-stop lifecycle hook
+ # preStop:
+ # - "/bin/bash"
+ # - "/backup_and_stop.sh"
+ # server.terminationGracePeriodSeconds is the optional duration in seconds the gocd server pod needs to terminate gracefully.
+ # Note: SIGTERM is issued immediately after the pod deletion request is sent. If the pod doesn't terminate, k8s waits for terminationGracePeriodSeconds before issuing SIGKILL.
+ # server.terminationGracePeriodSeconds: 60
image:
# server.image.repository is the GoCD Server image name
repository: "gocd/gocd-server"
@@ -86,9 +102,9 @@ server:
# server.env.extraEnvVars is the list of environment variables passed to GoCD Server
extraEnvVars:
- name: GOCD_PLUGIN_INSTALL_kubernetes-elastic-agents
- value: https://github.com/gocd/kubernetes-elastic-agents/releases/download/2.1.0-123/kubernetes-elastic-agent-2.1.0-123.jar
+ value: https://github.com/gocd/kubernetes-elastic-agents/releases/download/v3.0.0-156/kubernetes-elastic-agent-3.0.0-156.jar
- name: GOCD_PLUGIN_INSTALL_docker-registry-artifact-plugin
- value: https://github.com/gocd/docker-registry-artifact-plugin/releases/download/1.0.0-25/docker-registry-artifact-plugin-1.0.0-25.jar
+ value: https://github.com/gocd/docker-registry-artifact-plugin/releases/download/v1.0.1-92/docker-registry-artifact-plugin-1.0.1-92.jar
service:
# server.service.type is the GoCD Server service type
type: "NodePort"
@@ -115,8 +131,8 @@ server:
# server.ingress.enabled is the toggle to enable/disable GoCD Server Ingress
enabled: true
# server.ingress.hosts is used to create an Ingress record.
-# hosts:
-# - ci.example.com
+ # hosts:
+ # - ci.example.com
annotations:
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
@@ -198,13 +214,27 @@ agent:
# If field is empty, the service account "default" will be used.
name:
+ # agent.annotations sets the annotations for the GoCD Agent Deployment and Pod spec
+ annotations:
+ deployment:
+ # iam.amazonaws.com/role: arn:aws:iam::xxx:role/my-custom-role
+ pod:
+ # iam.amazonaws.com/role: arn:aws:iam::xxx:role/my-custom-role
+
# agent.replicaCount is the GoCD Agent replicas Count. Specify the number of GoCD agents to run
replicaCount: 0
+ # agent.preStop - array of commands to use in the agent pre-stop lifecycle hook
+ # preStop:
+ # - "/bin/bash"
+ # - "/disable_and_stop.sh"
# agent.deployStrategy is the strategy explained in detail at https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy
+ # agent.terminationGracePeriodSeconds is the optional duration in seconds the gocd agent pods need to terminate gracefully.
+ # Note: SIGTERM is issued immediately after the pod deletion request is sent. If the pod doesn't terminate, k8s waits for terminationGracePeriodSeconds before issuing SIGKILL.
+ # agent.terminationGracePeriodSeconds: 60
deployStrategy: {}
image:
# agent.image.repository is the GoCD Agent image name
- repository: "gocd/gocd-agent-alpine-3.6"
+ repository: "gocd/gocd-agent-alpine-3.9"
# agent.image.tag is the GoCD Agent image's tag
tag:
# agent.image.pullPolicy is the GoCD Agent image's pull policy
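The new agent values introduced above (annotations, `preStop`, `terminationGracePeriodSeconds`) are typically combined in a single override file. A sketch, with hypothetical IAM role and script names:

```yaml
# values-override.yaml (hypothetical): graceful agent shutdown plus
# kube2iam-style annotations on both the Deployment and the pod template.
agent:
  annotations:
    deployment:
      iam.amazonaws.com/role: arn:aws:iam::123456789012:role/ci-agents
    pod:
      iam.amazonaws.com/role: arn:aws:iam::123456789012:role/ci-agents
  preStop:
    - "/bin/bash"
    - "/disable_and_stop.sh"
  # Give the preStop hook time to drain running jobs before SIGKILL.
  terminationGracePeriodSeconds: 120
```

Assuming a release named `my-release`, this would be applied with `helm upgrade my-release stable/gocd -f values-override.yaml`.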
diff --git a/stable/goldpinger/.helmignore b/stable/goldpinger/.helmignore
new file mode 100644
index 000000000000..825c00779157
--- /dev/null
+++ b/stable/goldpinger/.helmignore
@@ -0,0 +1,23 @@
+# Patterns to ignore when building packages.
+# This supports shell glob matching, relative path matching, and
+# negation (prefixed with !). Only one pattern per line.
+.DS_Store
+# Common VCS dirs
+.git/
+.gitignore
+.bzr/
+.bzrignore
+.hg/
+.hgignore
+.svn/
+# Common backup files
+*.swp
+*.bak
+*.tmp
+*~
+# Various IDEs
+.project
+.idea/
+*.tmproj
+
+OWNERS
diff --git a/stable/goldpinger/Chart.yaml b/stable/goldpinger/Chart.yaml
new file mode 100644
index 000000000000..ed96310030e0
--- /dev/null
+++ b/stable/goldpinger/Chart.yaml
@@ -0,0 +1,13 @@
+apiVersion: v1
+name: goldpinger
+version: 1.1.4
+appVersion: 1.5.0
+description: Goldpinger makes calls between its instances for visibility and alerting.
+home: https://github.com/bloomberg/goldpinger
+sources:
+ - https://github.com/bloomberg/goldpinger
+maintainers:
+ - name: okgolove
+ email: okgolove@markeloff.net
+ - name: s-vkropotko
+ email: vjkropotko@gmail.com
diff --git a/stable/goldpinger/OWNERS b/stable/goldpinger/OWNERS
new file mode 100644
index 000000000000..bb1a77018b65
--- /dev/null
+++ b/stable/goldpinger/OWNERS
@@ -0,0 +1,6 @@
+approvers:
+- okgolove
+- s-vkropotko
+reviewers:
+- okgolove
+- s-vkropotko
diff --git a/stable/goldpinger/README.md b/stable/goldpinger/README.md
new file mode 100644
index 000000000000..bbbfef3daae0
--- /dev/null
+++ b/stable/goldpinger/README.md
@@ -0,0 +1,88 @@
+# Goldpinger
+
+[Goldpinger](https://github.com/bloomberg/goldpinger) makes calls between its instances for visibility and alerting.
+
+## TL;DR;
+
+```console
+$ helm install stable/goldpinger
+```
+
+## Introduction
+
+This chart bootstraps a [Goldpinger](https://github.com/bloomberg/goldpinger) deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
+
+## Prerequisites
+
+- Kubernetes 1.4+ with Beta APIs enabled
+
+## Installing the Chart
+
+To install the chart with the release name `my-release`:
+
+```console
+$ helm install --name my-release stable/goldpinger
+```
+
+The command deploys Goldpinger on the Kubernetes cluster in the default configuration. The [configuration](#configuration) section lists the parameters that can be configured during installation.
+
+> **Tip**: List all releases using `helm list`
+
+## Uninstalling the Chart
+
+To uninstall/delete the `my-release` deployment:
+
+```console
+$ helm delete my-release
+```
+
+The command removes all the Kubernetes components associated with the chart and deletes the release.
+
+## Configuration
+
+The following table lists the configurable parameters of the Goldpinger chart and their default values.
+
+| Parameter | Description | Default |
+| ------------------------------- | ------------------------------- | ---------------------------------------------------------- |
+| `image.repository` | Goldpinger image | `bloomberg/goldpinger` |
+| `image.tag` | Goldpinger image tag | `1.5.0` |
+| `pullPolicy` | Image pull policy | `IfNotPresent` |
+| `rbac.create`                   | Create required RBAC ClusterRole and ClusterRoleBinding | `true`                              |
+| `serviceAccount.create` | Enable ServiceAccount creation | `true` |
+| `serviceAccount.name`           | ServiceAccount for Goldpinger pods (a name is generated if unset and `serviceAccount.create` is `true`) | `default` |
+| `goldpinger.port`               | Port the Goldpinger app listens on | `80`                                                       |
+| `service.type` | Kubernetes service type | `LoadBalancer` |
+| `service.port` | Service HTTP port | `80` |
+| `service.annotations` | Service annotations | `{}` |
+| `ingress.enabled` | Enable ingress controller resource | `false` |
+| `ingress.annotations` | Ingress annotations | `{}` |
+| `ingress.path` | Ingress path | `/` |
+| `ingress.hosts` | URLs to address your Goldpinger installation| `goldpinger.local` |
+| `ingress.tls` | Ingress TLS configuration | `[]` |
+| `podAnnotations` | Pod annotations | `{}` |
+| `nodeSelector` | Node labels for pod assignment | `{}` |
+| `tolerations` | List of node taints to tolerate | `[]` |
+| `affinity` | Map of node/pod affinities | `{}` |
+| `resources` | CPU/Memory resource requests/limits | `{}` |
+| `podSecurityPolicy.enabled`     | Bind the ServiceAccount to an existing PodSecurityPolicy | `false`                            |
+| `podSecurityPolicy.policyName` | PodSecurityPolicy Name | `unrestricted-psp` |
+
+Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
+
+```console
+$ helm install --name my-release \
+ --set goldpinger.port=8080,serviceAccount.name=goldpinger \
+ stable/goldpinger
+```
+
+Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,
+
+```console
+$ helm install --name my-release -f values.yaml stable/goldpinger
+```
+
+> **Tip**: You can use the default [values.yaml](values.yaml)
+
+## Ingress
+
+This chart provides support for the Ingress resource. If you have an Ingress controller available, such as NGINX or Traefik, you may want to set `ingress.enabled` to true and add an entry to `ingress.hosts` for the URL. Then, you should be able to access the installation using that address.
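For example, a minimal override file for that setup (hostname hypothetical):

```yaml
# my-values.yaml (hypothetical): expose Goldpinger through an existing
# NGINX ingress controller instead of a LoadBalancer service.
service:
  type: ClusterIP
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
  path: /
  hosts:
    - goldpinger.example.com
```

Installed with `helm install --name my-release -f my-values.yaml stable/goldpinger`.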
diff --git a/stable/goldpinger/templates/NOTES.txt b/stable/goldpinger/templates/NOTES.txt
new file mode 100644
index 000000000000..cecd54feb98b
--- /dev/null
+++ b/stable/goldpinger/templates/NOTES.txt
@@ -0,0 +1,19 @@
+1. Get the application URL by running these commands:
+{{- if .Values.ingress.enabled }}
+{{- range .Values.ingress.hosts }}
+ http{{ if $.Values.ingress.tls }}s{{ end }}://{{ . }}{{ $.Values.ingress.path }}
+{{- end }}
+{{- else if contains "NodePort" .Values.service.type }}
+ export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "goldpinger.fullname" . }})
+ export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
+ echo http://$NODE_IP:$NODE_PORT
+{{- else if contains "LoadBalancer" .Values.service.type }}
+ NOTE: It may take a few minutes for the LoadBalancer IP to be available.
+ You can watch its status by running 'kubectl get svc -w {{ include "goldpinger.fullname" . }}'
+ export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "goldpinger.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
+ echo http://$SERVICE_IP:{{ .Values.service.port }}
+{{- else if contains "ClusterIP" .Values.service.type }}
+ export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "goldpinger.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
+ echo "Visit http://127.0.0.1:8080 to use your application"
+ kubectl port-forward $POD_NAME 8080:{{ .Values.goldpinger.port }}
+{{- end }}
\ No newline at end of file
diff --git a/stable/goldpinger/templates/_helpers.tpl b/stable/goldpinger/templates/_helpers.tpl
new file mode 100644
index 000000000000..987f0db1f308
--- /dev/null
+++ b/stable/goldpinger/templates/_helpers.tpl
@@ -0,0 +1,42 @@
+{{/* vim: set filetype=mustache: */}}
+{{/*
+Expand the name of the chart.
+*/}}
+{{- define "goldpinger.name" -}}
+{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+
+{{/*
+Create a default fully qualified app name.
+We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
+*/}}
+{{- define "goldpinger.fullname" -}}
+{{- if .Values.fullnameOverride -}}
+{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
+{{- else -}}
+{{- $name := default .Chart.Name .Values.nameOverride -}}
+{{- if contains $name .Release.Name -}}
+{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
+{{- else -}}
+{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Create chart name and version as used by the chart label.
+*/}}
+{{- define "goldpinger.chart" -}}
+{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+
+{{/*
+Create the name of the service account
+*/}}
+{{- define "goldpinger.serviceAccountName" -}}
+{{- if .Values.serviceAccount.create -}}
+ {{ default (include "goldpinger.fullname" .) .Values.serviceAccount.name }}
+{{- else -}}
+ {{ default "default" .Values.serviceAccount.name }}
+{{- end -}}
+{{- end -}}
\ No newline at end of file
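The `goldpinger.fullname` helper above follows the standard Helm naming convention: prefer an explicit override, reuse the release name when it already contains the chart name, truncate to 63 characters (the DNS label limit), and trim a trailing hyphen. A rough Python sketch of the same logic (an approximation for illustration, not the chart's code):

```python
def fullname(release: str, chart: str,
             name_override: str = "", fullname_override: str = "") -> str:
    """Approximate the goldpinger.fullname template helper."""
    def trunc(s: str) -> str:
        # trunc 63 | trimSuffix "-" : cap at 63 chars, drop one trailing '-'
        s = s[:63]
        return s[:-1] if s.endswith("-") else s

    if fullname_override:
        return trunc(fullname_override)
    name = name_override or chart
    if name in release:  # release name already contains the chart name
        return trunc(release)
    return trunc(f"{release}-{name}")
```

Note that Sprig's `trimSuffix` removes at most one trailing hyphen, which the sketch mirrors.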
diff --git a/stable/goldpinger/templates/clusterrole.yaml b/stable/goldpinger/templates/clusterrole.yaml
new file mode 100644
index 000000000000..f22f2e301a74
--- /dev/null
+++ b/stable/goldpinger/templates/clusterrole.yaml
@@ -0,0 +1,15 @@
+{{- if .Values.rbac.create }}
+kind: ClusterRole
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: {{ template "goldpinger.fullname" . }}-clusterrole
+ labels:
+ app.kubernetes.io/name: {{ include "goldpinger.name" . }}
+ helm.sh/chart: {{ include "goldpinger.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+rules:
+- apiGroups: [""]
+ resources: ["pods"]
+ verbs: ["list"]
+{{- end }}
\ No newline at end of file
diff --git a/stable/goldpinger/templates/clusterrolebinding.yaml b/stable/goldpinger/templates/clusterrolebinding.yaml
new file mode 100644
index 000000000000..8c6c37f76eb4
--- /dev/null
+++ b/stable/goldpinger/templates/clusterrolebinding.yaml
@@ -0,0 +1,19 @@
+{{- if .Values.rbac.create }}
+kind: ClusterRoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: {{ include "goldpinger.fullname" . }}-clusterrolebinding
+ labels:
+ app.kubernetes.io/name: {{ include "goldpinger.name" . }}
+ helm.sh/chart: {{ include "goldpinger.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+subjects:
+ - kind: ServiceAccount
+ name: {{ template "goldpinger.serviceAccountName" . }}
+ namespace: {{ .Release.Namespace }}
+roleRef:
+ kind: ClusterRole
+ name: {{ template "goldpinger.fullname" . }}-clusterrole
+ apiGroup: rbac.authorization.k8s.io
+{{- end }}
\ No newline at end of file
diff --git a/stable/goldpinger/templates/daemonset.yaml b/stable/goldpinger/templates/daemonset.yaml
new file mode 100644
index 000000000000..6fe139dc3621
--- /dev/null
+++ b/stable/goldpinger/templates/daemonset.yaml
@@ -0,0 +1,68 @@
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+ name: {{ include "goldpinger.fullname" . }}
+ labels:
+ app.kubernetes.io/name: {{ include "goldpinger.name" . }}
+ helm.sh/chart: {{ include "goldpinger.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+spec:
+ selector:
+ matchLabels:
+ app.kubernetes.io/name: {{ include "goldpinger.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ template:
+ metadata:
+ labels:
+ app.kubernetes.io/name: {{ include "goldpinger.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ {{- with .Values.podAnnotations }}
+ annotations:
+ {{ toYaml . | nindent 8 }}
+ {{- end }}
+ spec:
+ serviceAccountName: {{ template "goldpinger.serviceAccountName" . }}
+ {{- with .Values.image.pullSecrets }}
+ imagePullSecrets:
+ {{- range . }}
+ - name: {{ . }}
+ {{- end }}
+ {{- end }}
+ containers:
+ - name: goldpinger-daemon
+ image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
+ imagePullPolicy: {{ .Values.image.pullPolicy }}
+ env:
+ - name: HOSTNAME
+ valueFrom:
+ fieldRef:
+ fieldPath: spec.nodeName
+ - name: HOST
+ value: "0.0.0.0"
+ - name: PORT
+ value: "{{ .Values.goldpinger.port }}"
+ - name: LABEL_SELECTOR
+ value: "app.kubernetes.io/name={{ include "goldpinger.name" . }}"
+ ports:
+ - name: http
+ containerPort: {{ .Values.goldpinger.port }}
+ protocol: TCP
+ livenessProbe:
+ httpGet:
+ path: /
+ port: http
+ readinessProbe:
+ httpGet:
+ path: /
+ port: http
+ resources:
+{{ toYaml .Values.resources | indent 12 }}
+ {{- with .Values.nodeSelector }}
+ nodeSelector:
+{{ toYaml . | indent 8 }}
+ {{- end }}
+ {{- with .Values.affinity }}
+ affinity:
+{{ toYaml . | indent 8 }}
+ {{- end }}
+ {{- with .Values.tolerations }}
+ tolerations:
+{{ toYaml . | indent 8 }}
+ {{- end }}
diff --git a/stable/goldpinger/templates/ingress.yaml b/stable/goldpinger/templates/ingress.yaml
new file mode 100644
index 000000000000..1b05f0071706
--- /dev/null
+++ b/stable/goldpinger/templates/ingress.yaml
@@ -0,0 +1,39 @@
+{{- if .Values.ingress.enabled -}}
+{{- $fullName := include "goldpinger.fullname" . -}}
+{{- $servicePort := .Values.service.port -}}
+{{- $ingressPath := .Values.ingress.path -}}
+apiVersion: extensions/v1beta1
+kind: Ingress
+metadata:
+ name: {{ $fullName }}
+ labels:
+ app.kubernetes.io/name: {{ include "goldpinger.name" . }}
+ helm.sh/chart: {{ include "goldpinger.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+{{- with .Values.ingress.annotations }}
+ annotations:
+{{ toYaml . | indent 4 }}
+{{- end }}
+spec:
+{{- if .Values.ingress.tls }}
+ tls:
+ {{- range .Values.ingress.tls }}
+ - hosts:
+ {{- range .hosts }}
+ - {{ . }}
+ {{- end }}
+ secretName: {{ .secretName }}
+ {{- end }}
+{{- end }}
+ rules:
+ {{- range .Values.ingress.hosts }}
+ - host: {{ . }}
+ http:
+ paths:
+ - path: {{ $ingressPath }}
+ backend:
+ serviceName: {{ $fullName }}
+ servicePort: http
+ {{- end }}
+{{- end }}
\ No newline at end of file
diff --git a/stable/goldpinger/templates/podsecuritypolicy.yaml b/stable/goldpinger/templates/podsecuritypolicy.yaml
new file mode 100644
index 000000000000..01e5ed39c275
--- /dev/null
+++ b/stable/goldpinger/templates/podsecuritypolicy.yaml
@@ -0,0 +1,35 @@
+{{- if .Values.podSecurityPolicy.enabled }}
+---
+kind: Role
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: {{ template "goldpinger.fullname" . }}-pod-security-policy
+ labels:
+ app.kubernetes.io/name: {{ include "goldpinger.name" . }}
+ helm.sh/chart: {{ include "goldpinger.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+rules:
+- apiGroups: ["extensions"]
+ resources: ["podsecuritypolicies"]
+ resourceNames: [{{ .Values.podSecurityPolicy.policyName | quote }}]
+ verbs: ["use"]
+---
+kind: RoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: {{ template "goldpinger.fullname" . }}-pod-security-policy
+ labels:
+ app.kubernetes.io/name: {{ include "goldpinger.name" . }}
+ helm.sh/chart: {{ include "goldpinger.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+roleRef:
+ kind: Role
+ name: {{ template "goldpinger.fullname" . }}-pod-security-policy
+ apiGroup: rbac.authorization.k8s.io
+subjects:
+- kind: ServiceAccount
+ name: {{ template "goldpinger.serviceAccountName" . }}
+ namespace: {{ .Release.Namespace }}
+{{- end }}
diff --git a/stable/goldpinger/templates/service.yaml b/stable/goldpinger/templates/service.yaml
new file mode 100644
index 000000000000..5c851db987cd
--- /dev/null
+++ b/stable/goldpinger/templates/service.yaml
@@ -0,0 +1,23 @@
+apiVersion: v1
+kind: Service
+metadata:
+ name: {{ include "goldpinger.fullname" . }}
+ labels:
+ app.kubernetes.io/name: {{ include "goldpinger.name" . }}
+ helm.sh/chart: {{ include "goldpinger.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+{{- with .Values.service.annotations }}
+ annotations:
+{{ toYaml . | indent 4 }}
+{{- end }}
+spec:
+ type: {{ .Values.service.type }}
+ ports:
+ - port: {{ .Values.service.port }}
+ targetPort: {{ .Values.goldpinger.port }}
+ protocol: TCP
+ name: http
+ selector:
+ app.kubernetes.io/name: {{ include "goldpinger.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
\ No newline at end of file
diff --git a/stable/goldpinger/templates/serviceaccount.yaml b/stable/goldpinger/templates/serviceaccount.yaml
new file mode 100644
index 000000000000..beec3bee418b
--- /dev/null
+++ b/stable/goldpinger/templates/serviceaccount.yaml
@@ -0,0 +1,11 @@
+{{- if .Values.serviceAccount.create }}
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ labels:
+ app.kubernetes.io/name: {{ include "goldpinger.name" . }}
+ helm.sh/chart: {{ include "goldpinger.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ name: {{ template "goldpinger.serviceAccountName" . }}
+{{- end }}
\ No newline at end of file
diff --git a/stable/goldpinger/values.yaml b/stable/goldpinger/values.yaml
new file mode 100644
index 000000000000..050a0193a173
--- /dev/null
+++ b/stable/goldpinger/values.yaml
@@ -0,0 +1,76 @@
+# Default values for goldpinger.
+# This is a YAML-formatted file.
+# Declare variables to be passed into your templates.
+image:
+ repository: bloomberg/goldpinger
+ tag: 1.5.0
+ pullPolicy: IfNotPresent
+ ## Optionally specify an array of imagePullSecrets.
+ ## Secrets must be manually created in the namespace.
+ ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
+ ##
+ # pullSecrets:
+ # - myRegistryKeySecretName
+
+rbac:
+ create: true
+serviceAccount:
+ create: true
+ name:
+
+goldpinger:
+ port: 80
+
+service:
+ type: LoadBalancer
+ port: 80
+ annotations: {}
+
+ingress:
+ enabled: false
+ annotations: {}
+ # kubernetes.io/ingress.class: nginx
+ # kubernetes.io/tls-acme: "true"
+ path: /
+ hosts:
+ - goldpinger.local
+ tls: []
+ # - secretName: chart-example-tls
+ # hosts:
+ # - chart-example.test
+
+resources: {}
+ # We usually recommend not to specify default resources and to leave this as a conscious
+ # choice for the user. This also increases chances charts run on environments with little
+ # resources, such as Minikube. If you do want to specify resources, uncomment the following
+ # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
+ # limits:
+ # cpu: 100m
+ # memory: 128Mi
+ # requests:
+ # cpu: 100m
+ # memory: 128Mi
+
+podAnnotations: {}
+
+## Node labels for pod assignment
+## Ref: https://kubernetes.io/docs/user-guide/node-selection/
+##
+nodeSelector: {}
+
+## Tolerations for pod assignment
+## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
+##
+tolerations: []
+
+## Affinity for pod assignment
+## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
+##
+affinity: {}
+
+## Enable this if pod security policy enabled in your cluster
+## It will bind ServiceAccount with unrestricted podSecurityPolicy
+## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/
+podSecurityPolicy:
+ enabled: false
+ policyName: unrestricted-psp
diff --git a/stable/grafana/Chart.yaml b/stable/grafana/Chart.yaml
index 1cfef8b77262..a9220253847d 100755
--- a/stable/grafana/Chart.yaml
+++ b/stable/grafana/Chart.yaml
@@ -1,7 +1,7 @@
apiVersion: v1
name: grafana
-version: 1.25.4
-appVersion: 5.4.3
+version: 3.3.9
+appVersion: 6.2.0
kubeVersion: "^1.8.0-0"
description: The leading tool for querying and visualizing time series and metrics.
home: https://grafana.net
diff --git a/stable/grafana/README.md b/stable/grafana/README.md
index a172c84fa1f6..9a7c66e5a1aa 100644
--- a/stable/grafana/README.md
+++ b/stable/grafana/README.md
@@ -38,35 +38,47 @@ The command removes all the Kubernetes components associated with the chart and
| `securityContext` | Deployment securityContext | `{"runAsUser": 472, "fsGroup": 472}` |
| `priorityClassName` | Name of Priority Class to assign pods | `nil` |
| `image.repository` | Image repository | `grafana/grafana` |
-| `image.tag` | Image tag. (`Must be >= 5.0.0`) | `5.4.3` |
+| `image.tag` | Image tag. (`Must be >= 5.0.0`) | `6.2.0` |
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
+| `image.pullSecrets` | Image pull secrets | `{}` |
| `service.type` | Kubernetes service type | `ClusterIP` |
| `service.port` | Kubernetes port where service is exposed | `80` |
+| `service.targetPort` | Internal port the service targets (Grafana container port) | `3000` |
| `service.annotations` | Service annotations | `{}` |
| `service.labels` | Custom labels | `{}` |
| `ingress.enabled` | Enables Ingress | `false` |
| `ingress.annotations` | Ingress annotations | `{}` |
| `ingress.labels` | Custom labels | `{}` |
+| `ingress.path` | Ingress accepted path | `/` |
| `ingress.hosts` | Ingress accepted hostnames | `[]` |
| `ingress.tls` | Ingress TLS configuration | `[]` |
| `resources` | CPU/Memory resource requests/limits | `{}` |
| `nodeSelector` | Node labels for pod assignment | `{}` |
| `tolerations` | Toleration labels for pod assignment | `[]` |
| `affinity` | Affinity settings for pod assignment | `{}` |
+| `extraInitContainers` | Init containers to add to the grafana pod | `{}` |
+| `extraContainers` | Sidecar containers to add to the grafana pod | `{}` |
| `persistence.enabled` | Use persistent volume to store data | `false` |
| `persistence.size` | Size of persistent volume claim | `10Gi` |
| `persistence.existingClaim` | Use an existing PVC to persist data | `nil` |
| `persistence.storageClassName` | Type of persistent volume claim | `nil` |
| `persistence.accessModes` | Persistence access modes | `[ReadWriteOnce]` |
| `persistence.subPath` | Mount a sub dir of the persistent volume | `nil` |
+| `initChownData.enabled` | If false, don't reset data ownership at startup | `true` |
+| `initChownData.image.repository` | init-chown-data container image repository | `busybox` |
+| `initChownData.image.tag` | init-chown-data container image tag | `latest` |
+| `initChownData.image.pullPolicy` | init-chown-data container image pull policy | `IfNotPresent` |
+| `initChownData.resources` | init-chown-data pod resource requests & limits | `{}` |
| `schedulerName` | Alternate scheduler name | `nil` |
| `env` | Extra environment variables passed to pods | `{}` |
| `envFromSecret` | Name of a Kubernetes secret (must be manually created in the same namespace) containing values to be added to the environment | `""` |
| `extraSecretMounts` | Additional grafana server secret mounts | `[]` |
| `extraVolumeMounts` | Additional grafana server volume mounts | `[]` |
| `extraConfigmapMounts` | Additional grafana server configMap volume mounts | `[]` |
+| `extraEmptyDirMounts` | Additional grafana server emptyDir volume mounts | `[]` |
| `plugins` | Plugins to be loaded along with Grafana | `[]` |
| `datasources` | Configure grafana datasources (passed through tpl) | `{}` |
+| `notifiers` | Configure grafana notifiers | `{}` |
| `dashboardProviders` | Configure grafana dashboard providers | `{}` |
| `dashboards` | Dashboards to import | `{}` |
| `dashboardsConfigMaps` | ConfigMaps reference that contains dashboards | `{}` |
@@ -75,11 +87,17 @@ The command removes all the Kubernetes components associated with the chart and
| `ldap.config` | Grafana's LDAP configuration | `""` |
| `annotations` | Deployment annotations | `{}` |
| `podAnnotations` | Pod annotations | `{}` |
+| `sidecar.image` | Sidecar image | `kiwigrid/k8s-sidecar:0.0.16` |
+| `sidecar.imagePullPolicy` | Sidecar image pull policy | `IfNotPresent` |
+| `sidecar.resources` | Sidecar resources | `{}` |
| `sidecar.dashboards.enabled` | Enables the cluster-wide search for dashboards and adds/updates/deletes them in Grafana | `false` |
-| `sidecar.dashboards.label` | Label that config maps with dashboards should have to be added | `false` |
+| `sidecar.skipTlsVerify` | Set to true to skip tls verification for kube api calls | `nil` |
+| `sidecar.dashboards.label` | Label that config maps with dashboards should have to be added | `grafana_dashboard` |
+| `sidecar.dashboards.folder` | Folder in the pod that should hold the collected dashboards (unless `sidecar.dashboards.defaultFolderName` is set). This path will be mounted. | `/tmp/dashboards` |
+| `sidecar.dashboards.defaultFolderName` | The default folder name, it will create a subfolder under the `sidecar.dashboards.folder` and put dashboards in there instead | `nil` |
| `sidecar.dashboards.searchNamespace` | If specified, the sidecar will search for dashboard config-maps inside this namespace. Otherwise the namespace in which the sidecar is running will be used. It's also possible to specify ALL to search in all namespaces | `nil` |
| `sidecar.datasources.enabled` | Enables the cluster-wide search for datasources and adds/updates/deletes them in Grafana | `false` |
-| `sidecar.datasources.label` | Label that config maps with datasources should have to be added | `false` |
+| `sidecar.datasources.label` | Label that config maps with datasources should have to be added | `grafana_datasource` |
| `sidecar.datasources.searchNamespace` | If specified, the sidecar will search for datasources config-maps inside this namespace. Otherwise the namespace in which the sidecar is running will be used. It's also possible to specify ALL to search in all namespaces | `nil` |
| `smtp.existingSecret` | The name of an existing secret containing the SMTP credentials. | `""` |
| `smtp.userKey` | The key in the existing SMTP secret containing the username. | `"user"` |
@@ -91,16 +109,63 @@ The command removes all the Kubernetes components associated with the chart and
| `rbac.namespaced` | Creates Role and RoleBinding instead of the default ClusterRole and ClusterRoleBinding for the grafana instance | `false` |
| `rbac.pspEnabled` | Create PodSecurityPolicy (with `rbac.create`, grant roles permissions as well) | `true` |
| `rbac.pspUseAppArmor` | Enforce AppArmor in created PodSecurityPolicy (requires `rbac.pspEnabled`) | `true` |
+| `command` | Define command to be executed by grafana container at startup | `nil` |
+| `testFramework.image` | `test-framework` image repository. | `dduportal/bats` |
+| `testFramework.tag` | `test-framework` image tag. | `0.4.0` |
+
+
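+### Example of command
+
+The new `command` value overrides the entrypoint of the grafana container; a minimal sketch, mirroring the commented example in values.yaml (useful e.g. when using `vault-env` to manage secrets):
+
+```yaml
+command:
+  - "sh"
+  - "/run.sh"
+```
+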
+### Example of extraVolumeMounts
+
+```yaml
+- extraVolumeMounts:
+ - name: plugins
+ mountPath: /var/lib/grafana/plugins
+ subPath: configs/grafana/plugins
+ existingClaim: existing-grafana-claim
+ readOnly: false
+```
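+
+### Example of extraEmptyDirMounts
+
+The `extraEmptyDirMounts` value mounts emptyDir volumes into the grafana container in the same way; a minimal sketch based on the commented example in values.yaml:
+
+```yaml
+extraEmptyDirMounts:
+  - name: provisioning-notifiers
+    mountPath: /etc/grafana/provisioning/notifiers
+```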
+
+## Import dashboards
+
+There are a few methods to import dashboards into Grafana. Below are some examples and explanations of how to use each method:
+
+```yaml
+dashboards:
+ default:
+ some-dashboard:
+ json: |
+ {
+ "annotations":
+
+ ...
+ # Complete json file here
+ ...
+
+ "title": "Some Dashboard",
+ "uid": "abcd1234",
+ "version": 1
+ }
+ custom-dashboard:
+ # This is a path to a file inside the dashboards directory inside the chart directory
+ file: dashboards/custom-dashboard.json
+ prometheus-stats:
+ # Ref: https://grafana.com/dashboards/2
+ gnetId: 2
+ revision: 2
+ datasource: Prometheus
+ local-dashboard:
+ url: https://raw.githubusercontent.com/user/repository/master/dashboards/dashboard.json
+```
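+
+The new `notifiers` value provisions alert notification channels in a similar way; a minimal sketch, adapted from the commented example in values.yaml:
+
+```yaml
+notifiers:
+  notifiers.yaml:
+    notifiers:
+      - name: email-notifier
+        type: email
+        uid: email1
+        org_id: 1
+        is_default: true
+        settings:
+          addresses: an_email_address@example.com
+```
+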
## BASE64 dashboards
Dashboards can be stored on a server that does not return JSON directly and instead returns a Base64-encoded file (e.g. Gerrit).
-A new parameter has been added to the url use case so if you specify a b64content value equals to true after the url entry a Base64 decoding is applied before save the file to disk.
+A new parameter has been added to the url use case: if you set b64content to true after the url entry, the file is Base64-decoded before being saved to disk.
If this entry is not set or is false, no decoding is applied to the file before saving it to disk.
-### Gerrit use case:
+### Gerrit use case:
The Gerrit API for downloading files has the following schema: https://yourgerritserver/a/{project-name}/branches/{branch-id}/files/{file-id}/content where {project-name} and
-{file-id} usualy has '/' in their values and so they MUST be replaced by %2F so if project-name is user/repo, branch-id is master and file-id is equals to dir1/dir2/dashboard
+{file-id} usually contain '/' in their values, so those MUST be replaced by %2F. So if project-name is user/repo, branch-id is master, and file-id is dir1/dir2/dashboard,
the url value is https://yourgerritserver/a/user%2Frepo/branches/master/files/dir1%2Fdir2%2Fdashboard/content
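+
+Putting it together, such a Gerrit-served dashboard would be declared with `b64content` set to true; a sketch (the server name and file path are placeholders from the example above):
+
+```yaml
+dashboards:
+  default:
+    gerrit-dashboard:
+      url: https://yourgerritserver/a/user%2Frepo/branches/master/files/dir1%2Fdir2%2Fdashboard/content
+      b64content: true
+```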
## Sidecar for dashboards
@@ -121,7 +186,7 @@ data:
## Sidecar for datasources
-If the parameter `sidecar.datasources.enabled` is set, a sidecar container is deployed in the grafana pod. This container watches all config maps in the cluster and filters out the ones with a label as defined in `sidecar.datasources.label`. The files defined in those configmaps are written to a folder and accessed by grafana on startup. Using these yaml files, the data sources in grafana can be modified.
+If the parameter `sidecar.datasources.enabled` is set, an init container is deployed in the grafana pod. This container lists all config maps in the cluster and filters out the ones with a label as defined in `sidecar.datasources.label`. The files defined in those configmaps are written to a folder and accessed by grafana on startup. Using these yaml files, the data sources in grafana can be imported. The configmaps must be created before `helm install` so that the datasources init container can list the configmaps.
Example datasource config adapted from [Grafana](http://docs.grafana.org/administration/provisioning/#example-datasource-config-file):
```
diff --git a/stable/grafana/templates/configmap-dashboard-provider.yaml b/stable/grafana/templates/configmap-dashboard-provider.yaml
index 077173194d98..0cdedfadd0b8 100644
--- a/stable/grafana/templates/configmap-dashboard-provider.yaml
+++ b/stable/grafana/templates/configmap-dashboard-provider.yaml
@@ -22,5 +22,5 @@ data:
type: file
disableDeletion: false
options:
- path: {{ .Values.sidecar.dashboards.folder }}
+ path: {{ .Values.sidecar.dashboards.folder }}{{- with .Values.sidecar.dashboards.defaultFolderName }}/{{ . }}{{- end }}
{{- end}}
diff --git a/stable/grafana/templates/configmap.yaml b/stable/grafana/templates/configmap.yaml
index 022b4df1395a..a2d050751221 100644
--- a/stable/grafana/templates/configmap.yaml
+++ b/stable/grafana/templates/configmap.yaml
@@ -27,6 +27,13 @@ data:
{{- end -}}
{{- end -}}
+{{- if .Values.notifiers }}
+ {{- range $key, $value := .Values.notifiers }}
+ {{ $key }}: |
+{{ toYaml $value | indent 4 }}
+ {{- end -}}
+{{- end -}}
+
{{- if .Values.dashboardProviders }}
{{- range $key, $value := .Values.dashboardProviders }}
{{ $key }}: |
diff --git a/stable/grafana/templates/dashboards-json-configmap.yaml b/stable/grafana/templates/dashboards-json-configmap.yaml
index a72cde53d719..567da7f83f7f 100644
--- a/stable/grafana/templates/dashboards-json-configmap.yaml
+++ b/stable/grafana/templates/dashboards-json-configmap.yaml
@@ -14,9 +14,15 @@ metadata:
dashboard-provider: {{ $provider }}
data:
{{- range $key, $value := $dashboards }}
-{{- if hasKey $value "json" }}
+{{- if (or (hasKey $value "json") (hasKey $value "file")) }}
{{ print $key | indent 2 }}.json:
-{{ toYaml ( $files.Get $value.json ) | indent 4}}
+{{- if hasKey $value "json" }}
+ |-
+{{ $value.json | indent 6 }}
+{{- end }}
+{{- if hasKey $value "file" }}
+{{ toYaml ( $files.Get $value.file ) | indent 4}}
+{{- end }}
{{- end }}
{{- end }}
{{- end }}
diff --git a/stable/grafana/templates/deployment.yaml b/stable/grafana/templates/deployment.yaml
index ebaf40511424..d94203e815ab 100644
--- a/stable/grafana/templates/deployment.yaml
+++ b/stable/grafana/templates/deployment.yaml
@@ -27,8 +27,14 @@ spec:
labels:
app: {{ template "grafana.name" . }}
release: {{ .Release.Name }}
-{{- with .Values.podAnnotations }}
annotations:
+ checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
+ checksum/dashboards-json-config: {{ include (print $.Template.BasePath "/dashboards-json-configmap.yaml") . | sha256sum }}
+ checksum/sc-dashboard-provider-config: {{ include (print $.Template.BasePath "/configmap-dashboard-provider.yaml") . | sha256sum }}
+{{- if not .Values.admin.existingSecret }}
+ checksum/secret: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }}
+{{- end }}
+{{- with .Values.podAnnotations }}
{{ toYaml . | indent 8 }}
{{- end }}
spec:
@@ -43,16 +49,18 @@ spec:
{{- if .Values.priorityClassName }}
priorityClassName: {{ .Values.priorityClassName }}
{{- end }}
-{{- if ( or .Values.persistence.enabled .Values.dashboards ) }}
+{{- if ( or .Values.persistence.enabled .Values.dashboards .Values.sidecar.datasources.enabled .Values.extraInitContainers) }}
initContainers:
{{- end }}
-{{- if .Values.persistence.enabled }}
+{{- if ( and .Values.persistence.enabled .Values.initChownData.enabled ) }}
- name: init-chown-data
- image: "{{ .Values.chownDataImage.repository }}:{{ .Values.chownDataImage.tag }}"
- imagePullPolicy: {{ .Values.chownDataImage.pullPolicy }}
+ image: "{{ .Values.initChownData.image.repository }}:{{ .Values.initChownData.image.tag }}"
+ imagePullPolicy: {{ .Values.initChownData.image.pullPolicy }}
securityContext:
runAsUser: 0
command: ["chown", "-R", "{{ .Values.securityContext.runAsUser }}:{{ .Values.securityContext.runAsUser }}", "/var/lib/grafana"]
+ resources:
+{{ toYaml .Values.initChownData.resources | indent 12 }}
volumeMounts:
- name: storage
mountPath: "/var/lib/grafana"
@@ -79,6 +87,34 @@ spec:
mountPath: {{ .mountPath }}
readOnly: {{ .readOnly }}
{{- end }}
+{{- end }}
+{{- if .Values.sidecar.datasources.enabled }}
+ - name: {{ template "grafana.name" . }}-sc-datasources
+ image: "{{ .Values.sidecar.image }}"
+ imagePullPolicy: {{ .Values.sidecar.imagePullPolicy }}
+ env:
+ - name: METHOD
+ value: LIST
+ - name: LABEL
+ value: "{{ .Values.sidecar.datasources.label }}"
+ - name: FOLDER
+ value: "/etc/grafana/provisioning/datasources"
+ {{- if .Values.sidecar.datasources.searchNamespace }}
+ - name: NAMESPACE
+ value: "{{ .Values.sidecar.datasources.searchNamespace }}"
+ {{- end }}
+ {{- if .Values.sidecar.skipTlsVerify }}
+ - name: SKIP_TLS_VERIFY
+ value: "{{ .Values.sidecar.skipTlsVerify }}"
+ {{- end }}
+ resources:
+{{ toYaml .Values.sidecar.resources | indent 12 }}
+ volumeMounts:
+ - name: sc-datasources-volume
+ mountPath: "/etc/grafana/provisioning/datasources"
+{{- end}}
+{{- if .Values.extraInitContainers }}
+{{ toYaml .Values.extraInitContainers | indent 8 }}
{{- end }}
{{- if .Values.image.pullSecrets }}
imagePullSecrets:
@@ -95,39 +131,30 @@ spec:
- name: LABEL
value: "{{ .Values.sidecar.dashboards.label }}"
- name: FOLDER
- value: "{{ .Values.sidecar.dashboards.folder }}"
+ value: "{{ .Values.sidecar.dashboards.folder }}{{- with .Values.sidecar.dashboards.defaultFolderName }}/{{ . }}{{- end }}"
{{- if .Values.sidecar.dashboards.searchNamespace }}
- name: NAMESPACE
value: "{{ .Values.sidecar.dashboards.searchNamespace }}"
{{- end }}
+ {{- if .Values.sidecar.skipTlsVerify }}
+ - name: SKIP_TLS_VERIFY
+ value: "{{ .Values.sidecar.skipTlsVerify }}"
+ {{- end }}
resources:
{{ toYaml .Values.sidecar.resources | indent 12 }}
volumeMounts:
- name: sc-dashboard-volume
mountPath: {{ .Values.sidecar.dashboards.folder | quote }}
-{{- end}}
-{{- if .Values.sidecar.datasources.enabled }}
- - name: {{ template "grafana.name" . }}-sc-datasources
- image: "{{ .Values.sidecar.image }}"
- imagePullPolicy: {{ .Values.sidecar.imagePullPolicy }}
- env:
- - name: LABEL
- value: "{{ .Values.sidecar.datasources.label }}"
- - name: FOLDER
- value: "/etc/grafana/provisioning/datasources"
- {{- if .Values.sidecar.datasources.searchNamespace }}
- - name: NAMESPACE
- value: "{{ .Values.sidecar.datasources.searchNamespace }}"
- {{- end }}
- resources:
-{{ toYaml .Values.sidecar.resources | indent 12 }}
- volumeMounts:
- - name: sc-datasources-volume
- mountPath: "/etc/grafana/provisioning/datasources"
{{- end}}
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
+ {{- if .Values.command }}
+ command:
+ {{- range .Values.command }}
+ - {{ . }}
+ {{- end }}
+ {{- end}}
volumeMounts:
- name: config
mountPath: "/etc/grafana/grafana.ini"
@@ -150,7 +177,7 @@ spec:
{{- if .Values.dashboards }}
{{- range $provider, $dashboards := .Values.dashboards }}
{{- range $key, $value := $dashboards }}
- {{- if hasKey $value "json" }}
+ {{- if (or (hasKey $value "json") (hasKey $value "file")) }}
- name: dashboards-{{ $provider }}
mountPath: "/var/lib/grafana/dashboards/{{ $provider }}/{{ $key }}.json"
subPath: "{{ $key }}.json"
@@ -169,6 +196,11 @@ spec:
mountPath: "/etc/grafana/provisioning/datasources/datasources.yaml"
subPath: datasources.yaml
{{- end }}
+{{- if .Values.notifiers }}
+ - name: config
+ mountPath: "/etc/grafana/provisioning/notifiers/notifiers.yaml"
+ subPath: notifiers.yaml
+{{- end }}
{{- if .Values.dashboardProviders }}
- name: config
mountPath: "/etc/grafana/provisioning/dashboards/dashboardproviders.yaml"
@@ -177,10 +209,12 @@ spec:
{{- if .Values.sidecar.dashboards.enabled }}
- name: sc-dashboard-volume
mountPath: {{ .Values.sidecar.dashboards.folder | quote }}
+{{- if not .Values.dashboardProviders }}
- name: sc-dashboard-provider
mountPath: "/etc/grafana/provisioning/dashboards/sc-dashboardproviders.yaml"
subPath: provider.yaml
{{- end}}
+{{- end}}
{{- if .Values.sidecar.datasources.enabled }}
- name: sc-datasources-volume
mountPath: "/etc/grafana/provisioning/datasources"
@@ -193,8 +227,13 @@ spec:
{{- range .Values.extraVolumeMounts }}
- name: {{ .name }}
mountPath: {{ .mountPath }}
+ subPath: {{ .subPath | default "" }}
readOnly: {{ .readOnly }}
{{- end }}
+ {{- range .Values.extraEmptyDirMounts }}
+ - name: {{ .name }}
+ mountPath: {{ .mountPath }}
+ {{- end }}
ports:
- name: service
containerPort: {{ .Values.service.port }}
@@ -203,16 +242,20 @@ spec:
containerPort: 3000
protocol: TCP
env:
+ {{- if not .Values.env.GF_SECURITY_ADMIN_USER }}
- name: GF_SECURITY_ADMIN_USER
valueFrom:
secretKeyRef:
name: {{ .Values.admin.existingSecret | default (include "grafana.fullname" .) }}
key: {{ .Values.admin.userKey | default "admin-user" }}
+ {{- end }}
+ {{- if not .Values.env.GF_SECURITY_ADMIN_PASSWORD }}
- name: GF_SECURITY_ADMIN_PASSWORD
valueFrom:
secretKeyRef:
name: {{ .Values.admin.existingSecret | default (include "grafana.fullname" .) }}
key: {{ .Values.admin.passwordKey | default "admin-password" }}
+ {{- end }}
{{- if .Values.plugins }}
- name: GF_INSTALL_PLUGINS
valueFrom:
@@ -247,6 +290,9 @@ spec:
{{ toYaml .Values.readinessProbe | indent 12 }}
resources:
{{ toYaml .Values.resources | indent 12 }}
+{{- if .Values.extraContainers }}
+{{ toYaml .Values.extraContainers | indent 8}}
+{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
@@ -305,9 +351,11 @@ spec:
{{- if .Values.sidecar.dashboards.enabled }}
- name: sc-dashboard-volume
emptyDir: {}
+ {{- if not .Values.dashboardProviders }}
- name: sc-dashboard-provider
configMap:
name: {{ template "grafana.fullname" . }}-config-dashboards
+ {{- end -}}
{{- end }}
{{- if .Values.sidecar.datasources.enabled }}
- name: sc-datasources-volume
@@ -324,3 +372,7 @@ spec:
persistentVolumeClaim:
claimName: {{ .existingClaim }}
{{- end }}
+ {{- range .Values.extraEmptyDirMounts }}
+ - name: {{ .name }}
+ emptyDir: {}
+ {{- end }}
diff --git a/stable/grafana/templates/service.yaml b/stable/grafana/templates/service.yaml
index 6dcd63a4d2db..87fac70ca04c 100644
--- a/stable/grafana/templates/service.yaml
+++ b/stable/grafana/templates/service.yaml
@@ -40,7 +40,7 @@ spec:
- name: service
port: {{ .Values.service.port }}
protocol: TCP
- targetPort: 3000
+ targetPort: {{ .Values.service.targetPort }}
{{ if (and (eq .Values.service.type "NodePort") (not (empty .Values.service.nodePort))) }}
nodePort: {{.Values.service.nodePort}}
{{ end }}
diff --git a/stable/grafana/templates/tests/test-configmap.yaml b/stable/grafana/templates/tests/test-configmap.yaml
new file mode 100644
index 000000000000..da800b04b237
--- /dev/null
+++ b/stable/grafana/templates/tests/test-configmap.yaml
@@ -0,0 +1,17 @@
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: {{ template "grafana.fullname" . }}-test
+ labels:
+ app: {{ template "grafana.fullname" . }}
+ chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ heritage: "{{ .Release.Service }}"
+ release: "{{ .Release.Name }}"
+data:
+ run.sh: |-
+ @test "Test Health" {
+ url="http://{{ template "grafana.fullname" . }}/api/health"
+
+ code=$(curl -s -o /dev/null -I -w "%{http_code}" $url)
+ [ "$code" == "200" ]
+ }
diff --git a/stable/grafana/templates/tests/test-podsecuritypolicy.yaml b/stable/grafana/templates/tests/test-podsecuritypolicy.yaml
new file mode 100644
index 000000000000..1e8071581a64
--- /dev/null
+++ b/stable/grafana/templates/tests/test-podsecuritypolicy.yaml
@@ -0,0 +1,31 @@
+{{- if .Values.rbac.pspEnabled }}
+apiVersion: extensions/v1beta1
+kind: PodSecurityPolicy
+metadata:
+ name: {{ template "grafana.fullname" . }}-test
+ labels:
+ app: {{ template "grafana.name" . }}
+ chart: {{ .Chart.Name }}-{{ .Chart.Version }}
+ heritage: {{ .Release.Service }}
+ release: {{ .Release.Name }}
+spec:
+ allowPrivilegeEscalation: true
+ privileged: false
+ hostNetwork: false
+ hostIPC: false
+ hostPID: false
+ fsGroup:
+ rule: RunAsAny
+ seLinux:
+ rule: RunAsAny
+ supplementalGroups:
+ rule: RunAsAny
+ runAsUser:
+ rule: RunAsAny
+ volumes:
+ - configMap
+ - downwardAPI
+ - emptyDir
+ - projected
+ - secret
+{{- end }}
diff --git a/stable/grafana/templates/tests/test-role.yaml b/stable/grafana/templates/tests/test-role.yaml
new file mode 100644
index 000000000000..e950046d4fe4
--- /dev/null
+++ b/stable/grafana/templates/tests/test-role.yaml
@@ -0,0 +1,16 @@
+{{- if .Values.rbac.pspEnabled -}}
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+ name: {{ template "grafana.fullname" . }}-test
+ labels:
+ app: {{ template "grafana.name" . }}
+ chart: {{ .Chart.Name }}-{{ .Chart.Version }}
+ heritage: {{ .Release.Service }}
+ release: {{ .Release.Name }}
+rules:
+- apiGroups: ['policy']
+ resources: ['podsecuritypolicies']
+ verbs: ['use']
+ resourceNames: [{{ template "grafana.fullname" . }}-test]
+{{- end }}
diff --git a/stable/grafana/templates/tests/test-rolebinding.yaml b/stable/grafana/templates/tests/test-rolebinding.yaml
new file mode 100644
index 000000000000..88f4dbc78e93
--- /dev/null
+++ b/stable/grafana/templates/tests/test-rolebinding.yaml
@@ -0,0 +1,19 @@
+{{- if .Values.rbac.pspEnabled -}}
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+ name: {{ template "grafana.fullname" . }}-test
+ labels:
+ app: {{ template "grafana.name" . }}
+ chart: {{ .Chart.Name }}-{{ .Chart.Version }}
+ heritage: {{ .Release.Service }}
+ release: {{ .Release.Name }}
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: Role
+ name: {{ template "grafana.fullname" . }}-test
+subjects:
+- kind: ServiceAccount
+ name: {{ template "grafana.serviceAccountName" . }}-test
+ namespace: {{ .Release.Namespace }}
+{{- end }}
diff --git a/stable/grafana/templates/tests/test-serviceaccount.yaml b/stable/grafana/templates/tests/test-serviceaccount.yaml
new file mode 100644
index 000000000000..8f56d23a9cec
--- /dev/null
+++ b/stable/grafana/templates/tests/test-serviceaccount.yaml
@@ -0,0 +1,9 @@
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ labels:
+ app: {{ template "grafana.name" . }}
+ chart: {{ .Chart.Name }}-{{ .Chart.Version }}
+ heritage: {{ .Release.Service }}
+ release: {{ .Release.Name }}
+ name: {{ template "grafana.serviceAccountName" . }}-test
diff --git a/stable/grafana/templates/tests/test.yaml b/stable/grafana/templates/tests/test.yaml
new file mode 100644
index 000000000000..0d76a091fa44
--- /dev/null
+++ b/stable/grafana/templates/tests/test.yaml
@@ -0,0 +1,49 @@
+apiVersion: v1
+kind: Pod
+metadata:
+ name: {{ template "grafana.fullname" . }}-test
+ labels:
+ app: {{ template "grafana.fullname" . }}
+ chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ heritage: "{{ .Release.Service }}"
+ release: "{{ .Release.Name }}"
+ annotations:
+ "helm.sh/hook": test-success
+spec:
+ serviceAccountName: {{ template "grafana.serviceAccountName" . }}-test
+ initContainers:
+ - name: test-framework
+ image: "{{ .Values.testFramework.image}}:{{ .Values.testFramework.tag }}"
+ command:
+ - "bash"
+ - "-c"
+ - |
+ set -ex
+ # copy bats to tools dir
+ cp -R /usr/local/libexec/ /tools/bats/
+ volumeMounts:
+ - mountPath: /tools
+ name: tools
+ {{- if .Values.image.pullSecrets }}
+ imagePullSecrets:
+ {{- range .Values.image.pullSecrets }}
+ - name: {{ . }}
+ {{- end}}
+ {{- end }}
+ containers:
+ - name: {{ .Release.Name }}-test
+ image: "{{ .Values.testFramework.image}}:{{ .Values.testFramework.tag }}"
+ command: ["/tools/bats/bats", "-t", "/tests/run.sh"]
+ volumeMounts:
+ - mountPath: /tests
+ name: tests
+ readOnly: true
+ - mountPath: /tools
+ name: tools
+ volumes:
+ - name: tests
+ configMap:
+ name: {{ template "grafana.fullname" . }}-test
+ - name: tools
+ emptyDir: {}
+ restartPolicy: Never
diff --git a/stable/grafana/values.yaml b/stable/grafana/values.yaml
index c16a4f9cfd88..84bb7e24e191 100644
--- a/stable/grafana/values.yaml
+++ b/stable/grafana/values.yaml
@@ -26,7 +26,7 @@ livenessProbe:
image:
repository: grafana/grafana
- tag: 5.4.3
+ tag: 6.2.0
pullPolicy: IfNotPresent
## Optionally specify an array of imagePullSecrets.
@@ -36,6 +36,10 @@ image:
# pullSecrets:
# - myRegistrKeySecretName
+testFramework:
+ image: "dduportal/bats"
+ tag: "0.4.0"
+
securityContext:
runAsUser: 472
fsGroup: 472
@@ -48,6 +52,11 @@ extraConfigmapMounts: []
# readOnly: true
+extraEmptyDirMounts: []
+ # - name: provisioning-notifiers
+ # mountPath: /etc/grafana/provisioning/notifiers
+
+
## Assign a PriorityClassName to pods if set
# priorityClassName:
@@ -56,11 +65,6 @@ downloadDashboardsImage:
tag: latest
pullPolicy: IfNotPresent
-chownDataImage:
- repository: busybox
- tag: 1.30.0
- pullPolicy: IfNotPresent
-
## Pod Annotations
# podAnnotations: {}
@@ -74,6 +78,8 @@ chownDataImage:
service:
type: ClusterIP
port: 80
+ targetPort: 3000
+ # targetPort: 4181 To be used with a proxy extraContainer
annotations: {}
labels: {}
@@ -114,6 +120,25 @@ tolerations: []
##
affinity: {}
+extraInitContainers: []
+
+## Specify additional containers via extraContainers. This is meant to allow adding an authentication proxy to a grafana pod
+extraContainers: |
+# - name: proxy
+# image: quay.io/gambol99/keycloak-proxy:latest
+# args:
+# - -provider=github
+# - -client-id=
+# - -client-secret=
+# - -github-org=
+# - -email-domain=*
+# - -cookie-secret=
+# - -http-address=http://0.0.0.0:4181
+# - -upstream-url=http://127.0.0.1:3000
+# ports:
+# - name: proxy-web
+# containerPort: 4181
+
## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
@@ -127,6 +152,31 @@ persistence:
# subPath: ""
# existingClaim:
+initChownData:
+ ## If false, data ownership will not be reset at startup
+ ## This allows the grafana server to be run with an arbitrary user
+ ##
+ enabled: true
+
+ ## initChownData container image
+ ##
+ image:
+ repository: busybox
+ tag: "1.30"
+ pullPolicy: IfNotPresent
+
+ ## initChownData resource requests and limits
+ ## Ref: http://kubernetes.io/docs/user-guide/compute-resources/
+ ##
+ resources: {}
+ # limits:
+ # cpu: 100m
+ # memory: 128Mi
+ # requests:
+ # cpu: 100m
+ # memory: 128Mi
+
+
# Administrator credentials when not using an existing secret (see below)
adminUser: admin
# adminPassword: strongpassword
@@ -137,6 +187,13 @@ admin:
userKey: admin-user
passwordKey: admin-password
+## Define command to be executed at startup by grafana container
+## Needed if using `vault-env` to manage secrets (ref: https://banzaicloud.com/blog/inject-secrets-into-pods-vault/)
+## Default is "run.sh" as defined in grafana's Dockerfile
+# command:
+# - "sh"
+# - "/run.sh"
+
## Use an alternate scheduler, e.g. "stork".
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
@@ -184,6 +241,24 @@ datasources: {}
# access: proxy
# isDefault: true
+## Configure notifiers
+## ref: http://docs.grafana.org/administration/provisioning/#alert-notification-channels
+##
+notifiers: {}
+# notifiers.yaml:
+# notifiers:
+# - name: email-notifier
+# type: email
+# uid: email1
+# # either:
+# org_id: 1
+# # or
+# org_name: Main Org.
+# is_default: true
+# settings:
+# addresses: an_email_address@example.com
+# delete_notifiers:
+
## Configure grafana dashboard providers
## ref: http://docs.grafana.org/administration/provisioning/#dashboards
##
@@ -209,18 +284,21 @@ dashboardProviders: {}
## dashboards per provider, use provider name as key.
##
dashboards: {}
-# default:
-# some-dashboard:
-# json: dashboards/custom-dashboard.json
-# prometheus-stats:
-# gnetId: 2
-# revision: 2
-# datasource: Prometheus
-# local-dashboard:
-# url: https://example.com/repository/test.json
-# local-dashboard-base64:
-# url: https://example.com/repository/test-b64.json
-# b64content: true
+ # default:
+ # some-dashboard:
+ # json: |
+ # $RAW_JSON
+ # custom-dashboard:
+ # file: dashboards/custom-dashboard.json
+ # prometheus-stats:
+ # gnetId: 2
+ # revision: 2
+ # datasource: Prometheus
+ # local-dashboard:
+ # url: https://example.com/repository/test.json
+ # local-dashboard-base64:
+ # url: https://example.com/repository/test-b64.json
+ # b64content: true
## Reference to external ConfigMap per provider. Use provider name as key and ConfigMap name as value.
## A provider's dashboards must be defined either via external ConfigMaps or in values.yaml, not both.
@@ -291,21 +369,25 @@ smtp:
## Sidecars that collect the configmaps with the specified label and store the included files in the respective folders
## Requires at least Grafana 5 to work and can't be used together with parameters dashboardProviders, datasources and dashboards
sidecar:
- image: kiwigrid/k8s-sidecar:0.0.6
+ image: kiwigrid/k8s-sidecar:0.0.16
imagePullPolicy: IfNotPresent
- resources:
+ resources: {}
# limits:
# cpu: 100m
# memory: 100Mi
# requests:
# cpu: 50m
# memory: 50Mi
+ # Set skipTlsVerify to true to skip tls verification for kube api calls
+ # skipTlsVerify: true
dashboards:
enabled: false
# label that the configmaps with dashboards are marked with
label: grafana_dashboard
- # folder in the pod that should hold the collected dashboards
+ # folder in the pod that should hold the collected dashboards (unless `defaultFolderName` is set)
folder: /tmp/dashboards
+ # The default folder name, it will create a subfolder under the `folder` and put dashboards in there instead
+ defaultFolderName: null
# If specified, the sidecar will search for dashboard config-maps inside this namespace.
# Otherwise the namespace in which the sidecar is running will be used.
# It's also possible to specify ALL to search in all namespaces
diff --git a/stable/graphite/Chart.yaml b/stable/graphite/Chart.yaml
index f0e87eb1830b..6aefe2d0c35b 100644
--- a/stable/graphite/Chart.yaml
+++ b/stable/graphite/Chart.yaml
@@ -1,5 +1,5 @@
apiVersion: v1
-version: 0.2.1
+version: 0.2.2
appVersion: "1.1.5-3"
description: DEPRECATED! - Graphite metrics server
name: graphite
diff --git a/stable/graphite/OWNERS b/stable/graphite/OWNERS
index 802266a91198..9375c95fd5ce 100644
--- a/stable/graphite/OWNERS
+++ b/stable/graphite/OWNERS
@@ -1,6 +1,4 @@
approvers:
-- fabian-schlegel
- monotek
reviewers:
-- fabian-schlegel
- monotek
diff --git a/stable/graylog/Chart.yaml b/stable/graylog/Chart.yaml
new file mode 100755
index 000000000000..e498983d7d2d
--- /dev/null
+++ b/stable/graylog/Chart.yaml
@@ -0,0 +1,17 @@
+apiVersion: v1
+name: graylog
+home: https://www.graylog.org
+version: 1.1.1
+appVersion: 2.5.1-3
+description: Graylog is the centralized log management solution built to open standards for capturing, storing, and enabling real-time analysis of terabytes of machine data.
+keywords:
+- graylog
+- logs
+- syslog
+- gelf
+icon: https://global-uploads.webflow.com/5a218ef7897bf400019e2f16/5a218ef7897bf400019e2f60_logo-graylog.png
+sources:
+- https://www.graylog.org
+maintainers:
+- name: KongZ
+ email: goonohc@gmail.com
diff --git a/stable/graylog/OWNERS b/stable/graylog/OWNERS
new file mode 100755
index 000000000000..ede67635f334
--- /dev/null
+++ b/stable/graylog/OWNERS
@@ -0,0 +1,4 @@
+approvers:
+- KongZ
+reviewers:
+- KongZ
diff --git a/stable/graylog/README.md b/stable/graylog/README.md
new file mode 100644
index 000000000000..67d5a52c7e9a
--- /dev/null
+++ b/stable/graylog/README.md
@@ -0,0 +1,239 @@
+# Graylog
+
+This chart provides a [Graylog](https://www.graylog.org/) deployment.
+Note: it is strongly recommended to use the official Graylog image to run this chart.
+
+## Quick Installation
+This chart requires the following charts to be installed before Graylog:
+
+1. MongoDB
+2. Elasticsearch
+
+To install the Graylog Chart with all dependencies
+
+```bash
+kubectl create namespace graylog
+
+helm install --namespace "graylog" -n "graylog" stable/graylog
+```
+
+## Manually Install Dependencies
+This method is *recommended* when you want to expand the availability, scalability, and security of the services. You need to install a MongoDB replica set and Elasticsearch with proper settings before installing Graylog.
+
+To install MongoDB, run
+
+```bash
+helm install --namespace "graylog" -n "mongodb" stable/mongodb-replicaset
+```
+
+To install Elasticsearch, run
+
+```bash
+helm install --namespace "graylog" -n "elasticsearch" stable/elasticsearch
+```
+
+Note: There are many alternative Elasticsearch charts available on GitHub. If you find that `stable/elasticsearch` is not suitable, you can search for other charts in GitHub repositories.
+
+## Install Chart
+To install the Graylog chart into your Kubernetes cluster (this chart requires a persistent volume by default, so you may need to create a storage class before installing the chart):
+
+```bash
+helm install --namespace "graylog" -n "graylog" stable/graylog \
+ --set tags.install-mongodb=false\
+ --set tags.install-elasticsearch=false\
+ --set graylog.mongodb.uri=mongodb://mongodb-mongodb-replicaset-0.mongodb-mongodb-replicaset.graylog.svc.cluster.local:27017/graylog?replicaSet=rs0 \
+ --set graylog.elasticsearch.hosts=http://elasticsearch-client.graylog.svc.cluster.local:9200
+```
+
+After installation succeeds, you can get a status of Chart
+
+```bash
+helm status "graylog"
+```
+
+If you want to delete your Chart, use this command
+```bash
+helm delete --purge "graylog"
+```
+
+## Install Chart with specific Graylog cluster size
+By default, this chart creates a Graylog cluster with 2 nodes (1 master, 1 coordinating). To change the cluster size during installation, use the `--set graylog.replicas={value}` argument, or edit `values.yaml`.
+
+For example, to set the cluster size to 5:
+
+```bash
+helm install --namespace "graylog" -n "graylog" --set graylog.replicas=5 stable/graylog
+```
+
+The command above will install 1 master and 4 coordinating nodes.
+
+## Install Chart with specific node pool
+Sometimes you may need to deploy Graylog to a specific node pool to allocate resources.
+
+### Using node selector
+For example, if you have 6 VMs in your node pools and want to deploy Graylog only to nodes labeled `cloud.google.com/gke-nodepool: graylog-pool`, set the following values in `values.yaml`:
+
+```yaml
+graylog:
+ nodeSelector: { cloud.google.com/gke-nodepool: graylog-pool }
+```
+
+### Using tolerations
+For example, if you have 6 VMs in your node pools and 3 nodes are tainted with `graylog=true:NoSchedule`, set the following values in `values.yaml`:
+
+```yaml
+graylog:
+ tolerations:
+ - key: graylog
+ value: "true"
+ operator: "Equal"
+```
+
+## Configuration
+
+The following table lists the configurable parameters of the Graylog chart and their default values.
+
+| Parameter | Description | Default |
+|-----------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------|
+| `graylog.image` | `graylog` image repository | `graylog/graylog:2.4` |
+| `graylog.imagePullPolicy` | Image pull policy | `IfNotPresent` |
+| `graylog.replicas`                      | The number of Graylog instances in the cluster. The chart will automatically assign the master role to one of the replicas                              | `2`                                   |
+| `graylog.resources` | CPU/Memory resource requests/limits | Memory: `1024Mi`, CPU: `500m` |
+| `graylog.heapSize`                      | Override the Java heap size. If this value is empty, the chart will allocate the heap size using `-XX:+UseCGroupMemoryLimitForHeap`                     | ``                                    |
+| `graylog.nodeSelector` | Graylog server pod assignment | `{}` |
+| `graylog.affinity` | Graylog server affinity | `{}` |
+| `graylog.tolerations` | Graylog server tolerations | `[]` |
+| `graylog.env` | Graylog server env variables | `{}` |
+| `graylog.service.type` | Kubernetes Service type | `ClusterIP` |
+| `graylog.service.port` | Graylog Service port | `9000` |
+| `graylog.service.master.port` | Graylog Master Service port | `9000` |
+| `graylog.service.master.annotations` | Graylog Master Service annotations | `{}` |
+| `graylog.podAnnotations` | Kubernetes Pod annotations | `{}` |
+| `graylog.terminationGracePeriodSeconds` | Pod termination grace period | `120` |
+| `graylog.updateStrategy` | Update Strategy of the StatefulSet | `OnDelete` |
+| `graylog.persistence.enabled` | Use a PVC to persist data | `true` |
+| `graylog.persistence.storageClass` | Storage class of backing PVC | `nil` (uses storage class annotation) |
+| `graylog.persistence.accessMode` | Use volume as ReadOnly or ReadWrite | `ReadWriteOnce` |
+| `graylog.persistence.size` | Size of data volume | `10Gi` |
+| `graylog.ingress.enabled` | If true, Graylog Ingress will be created | `false` |
+| `graylog.ingress.port` | Graylog Ingress port | `false` |
+| `graylog.ingress.annotations` | Graylog Ingress annotations | `{}` |
+| `graylog.ingress.hosts` | Graylog Ingress host names | `[]` |
+| `graylog.ingress.tls` | Graylog Ingress TLS configuration (YAML) | `[]` |
+| `graylog.input`                         | Graylog Input configuration (YAML). See the Input section for details                                                                                   | `{}`                                  |
+| `graylog.metrics.enabled` | If true, add Prometheus annotations to pods | `false` |
+| `graylog.geoip.enabled` | If true, Maxmind Geoip Lite will be installed to ${GRAYLOG_HOME}/etc/GeoLite2-City.mmdb | `false` |
+| `graylog.plugins` | A list of Graylog installation plugins | `[]` |
+| `graylog.rootUsername` | Graylog root user name | `admin` |
+| `graylog.rootPassword`                  | Graylog root password. If not set, a random 16-character alphanumeric string will be generated                                                          | ``                                    |
+| `graylog.rootEmail` | Graylog root email. | `` |
+| `graylog.rootTimezone` | Graylog root timezone. | `UTC` |
+| `graylog.elasticsearch.hosts`           | Graylog Elasticsearch host name. You must specify where the data will be stored.                                                                        | ``                                    |
+| `graylog.mongodb.uri`                   | Graylog MongoDB connection string. You must specify where the data will be stored.                                                                      | ``                                    |
+| `graylog.transportEmail.enabled` | If true, enable transport email settings on Graylog | `false` |
+| `graylog.config` | Add additional server configuration to `graylog.conf` file. | `` |
+| `graylog.serverFiles`                   | Add additional server files in /etc/graylog/server. This is useful for enabling TLS on inputs                                                           | `{}`                                  |
+| `graylog.journal.deleteBeforeStart`     | Delete all journal files before starting Graylog                                                                                                        | `false`                               |
+| `rbac.create` | If true, create & use RBAC resources | `true` |
+| `rbac.serviceAccount.create` | If true, create the Graylog service account | `true` |
+| `rbac.serviceAccount.name` | Name of the server service account to use or create | `{{ graylog.fullname }}` |
+| `tags.install-mongodb` | If true, this chart will install MongoDB from requirement dependencies. If you want to install MongoDB by yourself, please set to `false` | `true` |
+| `tags.install-elasticsearch` | If true, this chart will install Elasticsearch from requirement dependencies. If you want to install Elasticsearch by yourself, please set to `false` | `true` |
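+
+The `graylog.plugins` entries are objects with a `name` (the jar file name to save) and a `url` (the download location), which the chart's entrypoint script fetches at startup. A sketch, using a placeholder plugin name and URL:
+
+```yaml
+graylog:
+  plugins:
+    - name: graylog-plugin-example.jar                      # placeholder jar name
+      url: https://example.com/graylog-plugin-example.jar   # placeholder URL
+```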
+
+## How it works
+This chart will create a Graylog StatefulSet with one master node. The chart will automatically label the master Pod with `graylog-role=master` if it does not exist. The other Pods will be labeled `graylog-role=coordinating`.
+
+This chart will automatically calculate the Java heap size from the given `resources.requests.memory` value. If you want to specify the heap size yourself, set `graylog.heapSize` to your desired value. The `graylog.heapSize` value must be in JVM `-Xmx` format.
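+
+For example, to pin the heap to 1 GB regardless of the container memory request (a sketch; any valid `-Xmx` size string should work):
+
+```yaml
+graylog:
+  heapSize: "1g"   # passed to the JVM as -Xmx1g
+```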
+
+## Input
+You can enable input ports by editing the `input` values. For example, to create GELF inputs on ports `12222` and `12223` behind a cloud load balancer, and a syslog input on UDP port `5410` without a load balancer:
+
+```yaml
+ input:
+ tcp:
+ service:
+ type: LoadBalancer
+ loadBalancerIP:
+ ports:
+ - name: gelf1
+ port: 12222
+ - name: gelf2
+ port: 12223
+ udp:
+ service:
+ type: ClusterIP
+ ports:
+ - name: syslog
+ port: 5410
+```
+
+Note: Each name must be an IANA_SVC_NAME: at most 15 characters, matching the regex `[a-z0-9]([a-z0-9-]*[a-z0-9])*`, containing at least one letter `[a-z]`, and with no adjacent hyphens.
+
+Note: The port list should be sorted by port number.
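+
+The naming rules above can be checked before editing `values.yaml`. The helper below is an illustrative sketch, not part of the chart:
+
+```bash
+# Return 0 if the given string is a valid IANA_SVC_NAME, 1 otherwise.
+valid_svc_name() {
+  name="$1"
+  # At most 15 characters
+  [ "${#name}" -le 15 ] || return 1
+  # Lowercase alphanumerics and hyphens; must start and end with an alphanumeric
+  printf '%s' "$name" | grep -Eq '^[a-z0-9]([a-z0-9-]*[a-z0-9])?$' || return 1
+  # Must contain at least one letter
+  printf '%s' "$name" | grep -q '[a-z]' || return 1
+  # Hyphens cannot be adjacent to other hyphens
+  printf '%s' "$name" | grep -q -- '--' && return 1
+  return 0
+}
+
+valid_svc_name "gelf1" && echo "gelf1 is valid"
+```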
+
+
+## Input TLS
+To enable TLS on a Graylog input, you need to specify the server private key and certificate. You can add them in the `graylog.serverFiles` value. For example:
+
+```yaml
+graylog:
+ serverFiles:
+ server.cert: |
+ -----BEGIN CERTIFICATE-----
+ MIIFYTCCA0mgAwIBAgICEAIwDQYJKoZIhvcNAQELBQAwcjELMAkGA1UEBhMCVEgx
+ EDAOBgNVBAgMB0Jhbmdrb2sxEDAOBgNVBAcMB0Jhbmdrb2sxGDAWBgNVBAoMD09t
+ aXNlIENvLiwgTHRkLjEPMA0GA1UECwwGRGV2b3BzMRQwEgYDVQQDDAtjYS5vbWlz
+ ZS5jbzAeFw0xNzA2MDEwOTQ0NTJaFw0xOTA2MjEwOTQ0NTJaMHkxCzAJBgNVBAYT
+ AlRIMRAwDgYDVQQIDAdCYW5na29rMRAwDgYDVQQHDAdCYW5na29rMRgwFgYDVQQK
+ DA9PbWlzZSBDby4sIEx0ZC4xDzANBgNVBAsMBkRldm9wczEbMBkGA1UEAwwSZ3Jh
+ 4YE6FOKJmiDV7KsmoSO2JTEaZAK6sdxI7zFJJH0TNFIuKewEBsVH/W5RccjwK/z/
+ BHwoTQc95zbfFjt1JwDiq8jGTVnQoXH99wAIW+HDYq6hqHyqW3YuQ8QvXfi/ebAs
+ rn0urmEC7JhsZIg92AqVYEgdp5H6uFqPIK1U6aYrz5zzZpRfEA==
+ -----END CERTIFICATE-----
+ server.key: |
+ -----BEGIN PRIVATE KEY-----
+ MIIEugIBADANBgkqhkiG9w0BAQEFAASCBKQwggSgAgEAAoIBAQC1zwgrnurQGlwe
+ ZcKe2RXLs9XzQo4PzNsbxRQXSZef/siUZ/X3phd7Tt7QbQv8sxoZFR1/R4neN3KV
+ tsWJ6YL3CY1IwqzxtR6SHzkg/CgUFgP4Jq9NDodOFRlmkZBK9iO9x/VITxLZPBQt
+ f+ygeNhfG/oZZxlLSWNC/adlFfUGI8TujCGGyydxAegyWRYmhkLM7F3vRqMXiUn2
+ UP/nPEMasHiHS7r99RzJILbU494aNYTxprfBAoGAdWwO/4I/r3Zo672AvCs2s/P6
+ G85cX2hKMFy3B4/Ww53jFA3bsWTOyXBv4srl3v9C3xkQmDwUxPDshEN45JX1AMIc
+ vxQkW5cm2IaPHB1BsuQpAuW6qIBT/NZqLmexb4jipAjTN4wQ2dkjI/zK2/SST5wb
+ vNufGafZ1IpvkUsDkA0=
+ -----END PRIVATE KEY-----
+```
+
+Then configure the Graylog input with:
+
+| Parameter | Value |
+|----------------|---------------------------------|
+| tls_cert_file: | /etc/graylog/server/server.cert |
+| tls_enable: | true |
+| tls_key_file: | /etc/graylog/server/server.key |
+
+## Get Graylog status
+You can get your Graylog status by running:
+
+```bash
+kubectl get po -L graylog-role
+```
+
+Output
+```
+NAME        READY     STATUS    RESTARTS   AGE       GRAYLOG-ROLE
+graylog-0 1/1 Running 0 1d master
+graylog-1 1/1 Running 0 1d coordinating
+graylog-2 1/1 Running 0 1m coordinating
+```
+## Troubleshooting
+
+If you encounter "Unprocessed Messages" or corrupted journal files, you may need to delete all journal files before starting Graylog.
+You can do this automatically by setting `graylog.journal.deleteBeforeStart` to `true`.
+
+The chart will then delete all journal files before starting Graylog.
+
+Note: All uncommitted logs will be permanently DELETED when this value is `true`.
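+
+As a sketch, the flag can be set in `values.yaml` (equivalent to passing `--set graylog.journal.deleteBeforeStart=true` on the command line):
+
+```yaml
+graylog:
+  journal:
+    deleteBeforeStart: true   # wipes the on-disk journal at every pod start
+```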
diff --git a/stable/graylog/requirements.yaml b/stable/graylog/requirements.yaml
new file mode 100644
index 000000000000..c4924c348bc3
--- /dev/null
+++ b/stable/graylog/requirements.yaml
@@ -0,0 +1,11 @@
+dependencies:
+- name: elasticsearch
+ version: 1.15.0
+ repository: https://kubernetes-charts.storage.googleapis.com/
+ tags:
+ - install-elasticsearch
+- name: mongodb-replicaset
+ version: 3.8.4
+ repository: https://kubernetes-charts.storage.googleapis.com/
+ tags:
+ - install-mongodb
diff --git a/stable/graylog/templates/NOTES.txt b/stable/graylog/templates/NOTES.txt
new file mode 100644
index 000000000000..f9b56f06aba5
--- /dev/null
+++ b/stable/graylog/templates/NOTES.txt
@@ -0,0 +1,85 @@
+To connect to your Graylog server:
+
+1. Get the application URL by running these commands:
+{{- if .Values.graylog.ingress.enabled }}
+{{- range .Values.graylog.ingress.hosts }}
+ http://{{ . }}
+{{- end }}
+{{- else if contains "NodePort" .Values.graylog.service.type }}
+
+ export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "graylog.fullname" . }})
+ export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
+ echo http://$NODE_IP:$NODE_PORT
+{{- else if contains "LoadBalancer" .Values.graylog.service.type }}
+
+ NOTE: It may take a few minutes for the LoadBalancer IP to be available.
+ You can watch its status by running 'kubectl get svc -w {{ template "graylog.fullname" . }}'
+ export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "graylog.fullname" . }}-web -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
+ echo http://$SERVICE_IP:{{ default 9000 .Values.graylog.service.port }}
+{{- else if contains "ClusterIP" .Values.graylog.service.type }}
+
+ Graylog Web Interface uses JavaScript to fetch details of each node. The client-side JavaScript cannot communicate with the nodes when the service type is `ClusterIP`.
+ If you want to access the Graylog Web Interface, you need to enable Ingress.
+ NOTE: Port forwarding does not work with the web interface.
+{{- end }}
+
+2. The Graylog root user credentials
+
+ echo "User: {{ .Values.graylog.rootUsername }}"
+ echo "Password: $(kubectl get secret --namespace {{ .Release.Namespace }} {{ template "graylog.fullname" . }} -o "jsonpath={.data['graylog-password-secret']}" | base64 --decode)"
+
+To send logs to graylog:
+
+ NOTE: If `graylog.input` is empty, you cannot send logs from other services. Please make sure the value is not empty.
+ See https://github.com/helm/charts/tree/master/stable/graylog#input for detail
+
+{{- if .Values.graylog.input.tcp }}
+1. TCP
+
+{{- if contains "NodePort" .Values.graylog.input.tcp.service.type }}
+ export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "graylog.fullname" . }})
+ export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
+ echo $NODE_IP:$NODE_PORT
+{{- else if contains "LoadBalancer" .Values.graylog.input.tcp.service.type }}
+ export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "graylog.fullname" . }}-tcp -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
+ {{- range .Values.graylog.input.tcp.ports }}
+ echo "{{ .name }} on $SERVICE_IP:{{ .port }}"
+ {{- end }}
+
+ NOTE: It may take a few minutes for the LoadBalancer IP to be available.
+ You can watch its status by running 'kubectl get svc -w {{ template "graylog.fullname" . }}-tcp'
+
+{{- else if contains "ClusterIP" .Values.graylog.input.tcp.service.type }}
+ export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ template "graylog.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
+ {{- range .Values.graylog.input.tcp.ports }}
+ Run the command
+ kubectl port-forward $POD_NAME {{ .port }}:{{ .port }}
+ Then send logs to 127.0.0.1:{{ .port }}
+ {{- end }}
+{{- end }}
+{{- end }}
+{{- if .Values.graylog.input.udp }}
+2. UDP
+
+{{- if contains "NodePort" .Values.graylog.input.udp.service.type }}
+ export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "graylog.fullname" . }})
+ export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
+ echo $NODE_IP:$NODE_PORT
+{{- else if contains "LoadBalancer" .Values.graylog.input.udp.service.type }}
+ export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "graylog.fullname" . }}-udp -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
+ {{- range .Values.graylog.input.udp.ports }}
+ echo "{{ .name }} on $SERVICE_IP:{{ .port }}"
+ {{- end }}
+
+ NOTE: It may take a few minutes for the LoadBalancer IP to be available.
+ You can watch its status by running 'kubectl get svc -w {{ template "graylog.fullname" . }}-udp'
+
+{{- else if contains "ClusterIP" .Values.graylog.input.udp.service.type }}
+ export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ template "graylog.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
+ {{- range .Values.graylog.input.udp.ports }}
+ Run the command
+ kubectl port-forward $POD_NAME {{ .port }}:{{ .port }}
+ Then send logs to 127.0.0.1:{{ .port }}
+ {{- end }}
+{{- end }}
+{{- end }}
diff --git a/stable/graylog/templates/_helpers.tpl b/stable/graylog/templates/_helpers.tpl
new file mode 100644
index 000000000000..c169085eb1cd
--- /dev/null
+++ b/stable/graylog/templates/_helpers.tpl
@@ -0,0 +1,83 @@
+{{/* vim: set filetype=mustache: */}}
+
+{{/*
+Expand the name of the chart.
+We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
+*/}}
+{{- define "graylog.name" -}}
+{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+
+{{/*
+Create a default fully qualified app name.
+We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
+If release name contains chart name it will be used as a full name.
+*/}}
+{{- define "graylog.fullname" -}}
+{{- if .Values.fullnameOverride -}}
+{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
+{{- else -}}
+{{- $name := default .Chart.Name .Values.nameOverride -}}
+{{- if contains $name .Release.Name -}}
+{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
+{{- else -}}
+{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Create the name of the service account to use
+*/}}
+{{- define "graylog.serviceAccountName" -}}
+{{- if .Values.serviceAccount.create -}}
+ {{ default (include "graylog.fullname" .) .Values.serviceAccount.name }}
+{{- else -}}
+ {{ default "default" .Values.serviceAccount.name }}
+{{- end -}}
+{{- end -}}
+
+
+{{/*
+Print Host URL
+*/}}
+{{- define "graylog.url" -}}
+{{- if .Values.graylog.ingress.enabled }}
+{{- if .Values.graylog.ingress.tls }}
+{{- range .Values.graylog.ingress.tls }}{{ range .hosts }}https://{{ . }}{{ end }}{{ end }}
+{{- else }}
+{{- range .Values.graylog.ingress.hosts }}http://{{ . }}{{ end }}
+{{- end }}
+{{- end }}
+{{- end -}}
+
+{{/*
+Use the `graylog.elasticsearch.hosts` value if defined.
+Otherwise build the Elasticsearch host URL from the chart dependency, using the release name and namespace.
+*/}}
+{{- define "graylog.elasticsearch.hosts" -}}
+{{- if .Values.graylog.elasticsearch.hosts }}
+ {{- .Values.graylog.elasticsearch.hosts -}}
+{{- else }}
+ {{- printf "http://%s-elasticsearch-client.%s.svc.cluster.local:9200" .Release.Name .Release.Namespace -}}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Use the `graylog.mongodb.uri` value if defined.
+Otherwise build the MongoDB connection string from the chart dependency, using the release name and namespace.
+*/}}
+{{- define "graylog.mongodb.uri" -}}
+{{- if .Values.graylog.mongodb.uri }}
+ {{- .Values.graylog.mongodb.uri -}}
+{{- else }}
+ {{- printf "mongodb://%s-mongodb-replicaset.%s.svc.cluster.local:27017/graylog?replicaSet=rs0" .Release.Name .Release.Namespace -}}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Create chart name and version as used by the chart label.
+*/}}
+{{- define "graylog.chart" -}}
+{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
diff --git a/stable/graylog/templates/configmap.yaml b/stable/graylog/templates/configmap.yaml
new file mode 100644
index 000000000000..2a3ba315051d
--- /dev/null
+++ b/stable/graylog/templates/configmap.yaml
@@ -0,0 +1,188 @@
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: {{ template "graylog.fullname" . }}
+ labels:
+ app.kubernetes.io/name: {{ template "graylog.name" . }}
+ helm.sh/chart: {{ template "graylog.chart" . }}
+ app.kubernetes.io/managed-by: "{{ .Release.Service }}"
+ app.kubernetes.io/instance: "{{ .Release.Name }}"
+ app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
+data:
+ log4j2.xml: |-
+
+
+
+
+
+
+
+
+ %d{yyyy-MM-dd'T'HH:mm:ss.SSSXXX} %-5p [%c{1}] %m%n
+
+
+
+
+
+
+
+
+
+
+
+ %d [%c{1}] - %m - %X%n
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ graylog.conf: |-
+ node_id_file = /usr/share/graylog/data/journal/node-id
+ root_username = {{ .Values.graylog.rootUsername }}
+ root_email = {{ .Values.graylog.rootEmail }}
+ root_timezone = {{ default "UTC" .Values.graylog.rootTimezone }}
+ plugin_dir = /usr/share/graylog/plugin
+ {{- if contains ":2" .Values.graylog.image.repository }}
+ rest_listen_uri = http://0.0.0.0:9000/api/
+ web_listen_uri = http://0.0.0.0:9000/
+ {{- if .Values.graylog.ingress.enabled }}
+ web_endpoint_uri = {{ template "graylog.url" .}}/api
+ {{- end }}
+ {{- else }}
+ http_bind_address = 0.0.0.0:9000
+ {{- if .Values.graylog.ingress.enabled }}
+ http_external_uri = {{ template "graylog.url" .}}/
+ {{- end }}
+ {{- end }}
+ elasticsearch_hosts = {{ template "graylog.elasticsearch.hosts" . }}
+ allow_leading_wildcard_searches = false
+ allow_highlighting = false
+ output_batch_size = 500
+ output_flush_interval = 1
+ output_fault_count_threshold = 5
+ output_fault_penalty_seconds = 30
+ processbuffer_processors = 5
+ outputbuffer_processors = 3
+ processor_wait_strategy = blocking
+ ring_size = 65536
+ inputbuffer_ring_size = 65536
+ inputbuffer_processors = 2
+ inputbuffer_wait_strategy = blocking
+ message_journal_enabled = true
+ # Do not change `message_journal_dir` location
+ message_journal_dir = /usr/share/graylog/data/journal
+ lb_recognition_period_seconds = 3
+ # Use a replica set instead of a single host
+ mongodb_uri = {{ template "graylog.mongodb.uri" . }}
+ mongodb_max_connections = {{ default 1000 .Values.graylog.mongodb.maxConnections }}
+ mongodb_threads_allowed_to_block_multiplier = 5
+ # Email transport
+ transport_email_enabled = {{ default false .Values.graylog.transportEmail.enabled }}
+ transport_email_hostname = {{ default .Values.graylog.transportEmail.hostname }}
+ transport_email_port = {{ default .Values.graylog.transportEmail.port }}
+ transport_email_use_auth = {{ default .Values.graylog.transportEmail.useAuth }}
+ transport_email_use_tls = {{ default .Values.graylog.transportEmail.useTls }}
+ transport_email_use_ssl = {{ default false .Values.graylog.transportEmail.useSsl }}
+ transport_email_auth_username = {{ default .Values.graylog.transportEmail.authUsername }}
+ transport_email_auth_password = {{ default .Values.graylog.transportEmail.authPassword }}
+ transport_email_subject_prefix = {{ default .Values.graylog.transportEmail.subjectPrefix }}
+ transport_email_from_email = {{ default .Values.graylog.transportEmail.fromEmail }}
+ {{- if .Values.graylog.ingress.enabled }}
+ transport_email_web_interface_url = {{ template "graylog.url" .}}
+ {{- end }}
+ content_packs_dir = /usr/share/graylog/data/contentpacks
+ content_packs_auto_load = grok-patterns.json
+ proxied_requests_thread_pool_size = 32
+ {{- if .Values.graylog.config }}
+{{ .Values.graylog.config | indent 4 }}
+ {{- end }}
+ entrypoint.sh: |-
+ #!/usr/bin/env bash
+
+ GRAYLOG_HOME=/usr/share/graylog
+ MASTER_NAME="{{ template "graylog.fullname" . }}-master.{{ .Release.Namespace }}.svc.cluster.local"
+ # Looking for Master IP
+ MASTER_IP=`/k8s/kubectl get pod -o jsonpath='{range .items[*]}{.metadata.name} {.status.podIP}{"\n"}{end}' -l graylog-role=master --field-selector=status.phase=Running|awk '{print $2}'`
+ echo "Current master is $MASTER_IP"
+ if [[ -z "$MASTER_IP" ]]; then
+ echo "Launching $HOSTNAME as master"
+ export GRAYLOG_IS_MASTER="true"
+ /k8s/kubectl label --overwrite pod $HOSTNAME graylog-role="master"
+ else
+ echo "Launching $HOSTNAME as coordinating"
+ export GRAYLOG_IS_MASTER="false"
+ /k8s/kubectl label --overwrite pod $HOSTNAME graylog-role="coordinating"
+ fi
+ # Download plugins
+ {{- if .Values.graylog.plugins }}
+ echo "Downloading Graylog Plugins..."
+ {{- range .Values.graylog.plugins }}
+ echo "Downloading {{ .url }} ..."
+ curl -s --location --retry 3 -o ${GRAYLOG_HOME}/plugin/{{ .name }} "{{ .url }}"
+ {{- end }}
+ {{- end }}
+ {{- if .Values.graylog.metrics.enabled }}
+ echo "Downloading https://github.com/graylog-labs/graylog-plugin-metrics-reporter/releases/download/2.4.0-beta.3/metrics-reporter-prometheus-2.4.0-beta.3.jar ..."
+ curl -s --location --retry 3 -o ${GRAYLOG_HOME}/plugin/metrics-reporter-prometheus-2.4.0-beta.3.jar "https://github.com/graylog-labs/graylog-plugin-metrics-reporter/releases/download/2.4.0-beta.3/metrics-reporter-prometheus-2.4.0-beta.3.jar"
+ {{- end }}
+ {{- if .Values.graylog.geoip.enabled }}
+ echo "Downloading Maxmind GeoLite2 ..."
+ curl -s --location --retry 3 -o /tmp/GeoLite2-City.tar.gz "https://geolite.maxmind.com/download/geoip/database/GeoLite2-City.tar.gz"
+ curlreturn=$?
+ if [[ $curlreturn -eq 0 ]]; then
+ mkdir -p ${GRAYLOG_HOME}/geoip && cd ${GRAYLOG_HOME}/geoip && tar xvzf /tmp/GeoLite2-City.tar.gz --wildcards "*.mmdb" --strip-components=1 -C ${GRAYLOG_HOME}/geoip && chown -R graylog:graylog ${GRAYLOG_HOME}/geoip
+ fi
+ {{- end }}
+ # Start Graylog
+ echo "Starting graylog"
+    # The original docker-entrypoint.sh in the Graylog Docker image fails at startup because the read-only files in `config` cannot be chowned
+ # exec /docker-entrypoint.sh graylog
+ echo "Graylog Home ${GRAYLOG_HOME}"
+ echo "JVM Options ${GRAYLOG_SERVER_JAVA_OPTS}"
+ "${JAVA_HOME}/bin/java" \
+ ${GRAYLOG_SERVER_JAVA_OPTS} \
+ -jar \
+ -Dlog4j.configurationFile=${GRAYLOG_HOME}/config/log4j2.xml \
+ -Djava.library.path=${GRAYLOG_HOME}/lib/sigar/ \
+ -Dgraylog2.installation_source=docker \
+ ${GRAYLOG_HOME}/graylog.jar \
+ server \
+ -f ${GRAYLOG_HOME}/config/graylog.conf
diff --git a/stable/graylog/templates/files-configmap.yaml b/stable/graylog/templates/files-configmap.yaml
new file mode 100644
index 000000000000..85d062ea2083
--- /dev/null
+++ b/stable/graylog/templates/files-configmap.yaml
@@ -0,0 +1,17 @@
+{{- if .Values.graylog.serverFiles -}}
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: {{ template "graylog.fullname" . }}-files
+ labels:
+ app.kubernetes.io/name: {{ template "graylog.name" . }}
+ helm.sh/chart: {{ template "graylog.chart" . }}
+ app.kubernetes.io/managed-by: "{{ .Release.Service }}"
+ app.kubernetes.io/instance: "{{ .Release.Name }}"
+ app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
+data:
+{{- range $key, $value := .Values.graylog.serverFiles }}
+ {{ $key }}: |
+{{ $value | default "{}" | indent 4 }}
+{{- end -}}
+{{- end -}}
diff --git a/stable/graylog/templates/ingress.yaml b/stable/graylog/templates/ingress.yaml
new file mode 100644
index 000000000000..7c90186b6e96
--- /dev/null
+++ b/stable/graylog/templates/ingress.yaml
@@ -0,0 +1,34 @@
+{{- if .Values.graylog.ingress.enabled -}}
+apiVersion: extensions/v1beta1
+kind: Ingress
+metadata:
+ annotations:
+ {{- range $key, $value := .Values.graylog.ingress.annotations }}
+ {{ $key }}: {{ $value | quote }}
+ {{- end }}
+ labels:
+ app.kubernetes.io/name: {{ template "graylog.name" . }}
+ helm.sh/chart: {{ template "graylog.chart" . }}
+ app.kubernetes.io/managed-by: "{{ .Release.Service }}"
+ app.kubernetes.io/instance: "{{ .Release.Name }}"
+ app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
+ app.kubernetes.io/component: "web"
+ name: {{ template "graylog.fullname" . }}-web
+spec:
+ rules:
+ {{- range .Values.graylog.ingress.hosts }}
+ - host: {{ . }}
+ http:
+ paths:
+ - backend:
+ serviceName: {{ template "graylog.fullname" $ }}-web
+ servicePort: graylog
+ {{- if $.Values.graylog.ingress.path }}
+ path: {{ $.Values.graylog.ingress.path }}
+ {{- end -}}
+ {{- end -}}
+ {{- if .Values.graylog.ingress.tls }}
+ tls:
+{{ toYaml .Values.graylog.ingress.tls | indent 4 }}
+ {{- end -}}
+{{- end -}}
diff --git a/stable/graylog/templates/master-service.yaml b/stable/graylog/templates/master-service.yaml
new file mode 100644
index 000000000000..f44b87584494
--- /dev/null
+++ b/stable/graylog/templates/master-service.yaml
@@ -0,0 +1,55 @@
+apiVersion: v1
+kind: Service
+metadata:
+{{- if .Values.graylog.service.master.annotations }}
+ annotations:
+{{ toYaml .Values.graylog.service.master.annotations | indent 4 }}
+{{- end }}
+ name: {{ template "graylog.fullname" . }}-master
+ labels:
+ app.kubernetes.io/name: {{ template "graylog.name" . }}
+ helm.sh/chart: {{ template "graylog.chart" . }}
+ app.kubernetes.io/managed-by: "{{ .Release.Service }}"
+ app.kubernetes.io/instance: "{{ .Release.Name }}"
+ app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
+ graylog-role: "master"
+spec:
+ ports:
+ - name: graylog
+{{- if .Values.graylog.service.master.port }}
+ port: {{ default 9000 .Values.graylog.service.master.port }}
+{{- else }}
+ port: {{ default 9000 .Values.graylog.service.port }}
+{{- end }}
+ protocol: TCP
+ targetPort: 9000
+{{- if contains "NodePort" .Values.graylog.service.type }}
+ {{- if .Values.graylog.service.nodePort }}
+ nodePort: {{ .Values.graylog.service.nodePort }}
+ {{- end }}
+{{- end }}
+{{- if .Values.graylog.service.externalIPs }}
+ externalIPs:
+{{ toYaml .Values.graylog.service.externalIPs | indent 4 }}
+{{- end }}
+{{- if eq "ClusterIP" .Values.graylog.service.type }}
+ {{- if .Values.graylog.service.clusterIP }}
+ clusterIP: {{ .Values.graylog.service.clusterIP }}
+ {{- end }}
+{{- end }}
+ selector:
+ app.kubernetes.io/name: {{ template "graylog.name" . }}
+ app.kubernetes.io/instance: "{{ .Release.Name }}"
+ graylog-role: "master"
+ type: "{{ .Values.graylog.service.type }}"
+{{- if eq "LoadBalancer" .Values.graylog.service.type }}
+ {{- if .Values.graylog.service.loadBalancerIP }}
+ loadBalancerIP: {{ .Values.graylog.service.loadBalancerIP }}
+ {{- end -}}
+ {{- if .Values.graylog.service.loadBalancerSourceRanges }}
+ loadBalancerSourceRanges:
+ {{- range .Values.graylog.service.loadBalancerSourceRanges }}
+ - {{ . }}
+ {{- end }}
+ {{- end -}}
+{{- end -}}
diff --git a/stable/graylog/templates/role.yaml b/stable/graylog/templates/role.yaml
new file mode 100644
index 000000000000..b0390d24757c
--- /dev/null
+++ b/stable/graylog/templates/role.yaml
@@ -0,0 +1,22 @@
+{{- if .Values.rbac.create -}}
+apiVersion: rbac.authorization.k8s.io/v1beta1
+kind: Role
+metadata:
+ name: {{ template "graylog.fullname" . }}
+ labels:
+ app.kubernetes.io/name: {{ template "graylog.name" . }}
+ helm.sh/chart: {{ template "graylog.chart" . }}
+ app.kubernetes.io/managed-by: "{{ .Release.Service }}"
+ app.kubernetes.io/instance: "{{ .Release.Name }}"
+ app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
+rules:
+- apiGroups:
+ - ""
+ resources:
+ - pods
+ - secrets
+ verbs:
+ - get
+ - list
+ - patch
+{{- end -}}
diff --git a/stable/graylog/templates/rolebinding.yaml b/stable/graylog/templates/rolebinding.yaml
new file mode 100644
index 000000000000..06e10de32c40
--- /dev/null
+++ b/stable/graylog/templates/rolebinding.yaml
@@ -0,0 +1,19 @@
+{{- if .Values.rbac.create -}}
+apiVersion: rbac.authorization.k8s.io/v1beta1
+kind: RoleBinding
+metadata:
+ name: {{ template "graylog.fullname" . }}
+ labels:
+ app.kubernetes.io/name: {{ template "graylog.name" . }}
+ helm.sh/chart: {{ template "graylog.chart" . }}
+ app.kubernetes.io/managed-by: "{{ .Release.Service }}"
+ app.kubernetes.io/instance: "{{ .Release.Name }}"
+ app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: Role
+ name: {{ template "graylog.fullname" . }}
+subjects:
+- kind: ServiceAccount
+ name: {{ template "graylog.serviceAccountName" . }}
+{{- end -}}
diff --git a/stable/graylog/templates/secret.yaml b/stable/graylog/templates/secret.yaml
new file mode 100644
index 000000000000..4a823c4b94c3
--- /dev/null
+++ b/stable/graylog/templates/secret.yaml
@@ -0,0 +1,20 @@
+apiVersion: v1
+kind: Secret
+metadata:
+ name: {{ template "graylog.fullname" . }}
+ labels:
+ app.kubernetes.io/name: {{ template "graylog.name" . }}
+ helm.sh/chart: {{ template "graylog.chart" . }}
+ app.kubernetes.io/managed-by: "{{ .Release.Service }}"
+ app.kubernetes.io/instance: "{{ .Release.Name }}"
+ app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
+type: Opaque
+data:
+ {{- if .Values.graylog.rootPassword }}
+ graylog-password-secret: {{ .Values.graylog.rootPassword | b64enc | quote }}
+ graylog-password-sha2: {{ .Values.graylog.rootPassword | sha256sum | b64enc | quote }}
+ {{- else }}
+ {{- $randpass := randAlphaNum 16 }}
+ graylog-password-secret: {{ $randpass | b64enc | quote }}
+ graylog-password-sha2: {{ $randpass | sha256sum | b64enc | quote }}
+ {{- end }}
diff --git a/stable/graylog/templates/serviceaccount.yaml b/stable/graylog/templates/serviceaccount.yaml
new file mode 100644
index 000000000000..9932790f4d10
--- /dev/null
+++ b/stable/graylog/templates/serviceaccount.yaml
@@ -0,0 +1,12 @@
+{{- if .Values.serviceAccount.create -}}
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: {{ template "graylog.serviceAccountName" . }}
+ labels:
+ app.kubernetes.io/name: {{ template "graylog.name" . }}
+ helm.sh/chart: {{ template "graylog.chart" . }}
+ app.kubernetes.io/managed-by: "{{ .Release.Service }}"
+ app.kubernetes.io/instance: "{{ .Release.Name }}"
+ app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
+{{- end -}}
diff --git a/stable/graylog/templates/statefulset.yaml b/stable/graylog/templates/statefulset.yaml
new file mode 100644
index 000000000000..208f771e217a
--- /dev/null
+++ b/stable/graylog/templates/statefulset.yaml
@@ -0,0 +1,217 @@
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+ name: {{ template "graylog.fullname" . }}
+ labels:
+ app.kubernetes.io/name: {{ template "graylog.name" . }}
+ helm.sh/chart: {{ template "graylog.chart" . }}
+ app.kubernetes.io/managed-by: "{{ .Release.Service }}"
+ app.kubernetes.io/instance: "{{ .Release.Name }}"
+ app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
+spec:
+ serviceName: {{ template "graylog.fullname" . }}
+ replicas: {{ .Values.graylog.replicas }}
+ selector:
+ matchLabels:
+ app.kubernetes.io/name: {{ template "graylog.name" . }}
+ app.kubernetes.io/instance: "{{ .Release.Name }}"
+ app.kubernetes.io/managed-by: "{{ .Release.Service }}"
+ updateStrategy:
+ type: {{ .Values.graylog.updateStrategy }}
+ template:
+ metadata:
+ labels:
+ app.kubernetes.io/name: {{ template "graylog.name" . }}
+ helm.sh/chart: {{ template "graylog.chart" . }}
+ app.kubernetes.io/managed-by: "{{ .Release.Service }}"
+ app.kubernetes.io/instance: "{{ .Release.Name }}"
+ app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
+ annotations:
+ {{- if .Values.graylog.podAnnotations }}
+ {{- range $key, $value := .Values.graylog.podAnnotations }}
+ {{ $key }}: {{ $value | quote }}
+ {{- end }}
+ {{- end }}
+ {{- if .Values.graylog.metrics.enabled }}
+ prometheus.io/scrape: "true"
+ prometheus.io/port: "9000"
+ {{- end }}
+ spec:
+ serviceAccountName: {{ template "graylog.serviceAccountName" . }}
+{{- if .Values.graylog.nodeSelector }}
+ nodeSelector:
+{{ toYaml .Values.graylog.nodeSelector | indent 8 }}
+{{- end }}
+{{- if .Values.graylog.affinity }}
+ affinity:
+{{ toYaml .Values.graylog.affinity | indent 8 }}
+
+{{- end }}
+{{- if .Values.graylog.tolerations }}
+ tolerations:
+{{ toYaml .Values.graylog.tolerations | indent 8 }}
+{{- end }}
+ initContainers:
+ - name: "setup"
+ image: "busybox"
+ imagePullPolicy: IfNotPresent
+ # Graylog recurses into every subdirectory of its journal; any directory with an invalid format will cause errors
+ command:
+ - /bin/sh
+ - -c
+ - |
+ rm -rf /usr/share/graylog/data/journal/lost+found
+ {{- if .Values.graylog.journal.deleteBeforeStart }}
+ rm -rf /usr/share/graylog/data/journal/graylog2-committed-read-offset
+ rm -rf /usr/share/graylog/data/journal/messagejournal-0
+ rm -rf /usr/share/graylog/data/journal/recovery-point-offset-checkpoint
+ {{- end }}
+ wget https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/linux/amd64/kubectl -O /k8s/kubectl
+ chmod +x /k8s/kubectl
+
+ GRAYLOG_HOME=/usr/share/graylog
+ chown -R 1100:1100 ${GRAYLOG_HOME}/data/
+ securityContext:
+ privileged: true
+ volumeMounts:
+ - name: journal
+ mountPath: /usr/share/graylog/data/journal
+ - mountPath: /k8s
+ name: kubectl
+ containers:
+ - name: graylog-server
+ image: "{{ .Values.graylog.image.repository }}"
+ imagePullPolicy: {{ .Values.graylog.image.pullPolicy | quote }}
+ command:
+ - /entrypoint.sh
+ env:
+ - name: GRAYLOG_SERVER_JAVA_OPTS
+ {{- $javaOpts := "-Djava.net.preferIPv4Stack=true -XX:NewRatio=1 -server -XX:+ResizeTLAB -XX:+UseConcMarkSweepGC -XX:+CMSConcurrentMTEnabled -XX:+CMSClassUnloadingEnabled -XX:+UseParNewGC -XX:-OmitStackTraceInFastThrow" }}
+ {{- if .Values.graylog.heapSize }}
+ value: "{{ $javaOpts }} {{ printf "-Xms%s -Xmx%s" .Values.graylog.heapSize .Values.graylog.heapSize}}"
+ {{- else }}
+ value: "{{ $javaOpts }} -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap"
+ {{- end }}
+ - name: GRAYLOG_PASSWORD_SECRET
+ valueFrom:
+ secretKeyRef:
+ name: {{ template "graylog.fullname" . }}
+ key: graylog-password-secret
+ - name: GRAYLOG_ROOT_PASSWORD_SHA2
+ valueFrom:
+ secretKeyRef:
+ name: {{ template "graylog.fullname" . }}
+ key: graylog-password-sha2
+ {{- range $key, $value := .Values.graylog.env }}
+ - name: {{ $key }}
+ value: {{ $value | quote }}
+ {{- end }}
+ ports:
+ - containerPort: 9000
+ name: graylog
+ {{- with .Values.graylog.input }}
+ {{- if .udp }}
+ {{- range .udp.ports }}
+ - containerPort: {{ .port }}
+ name: {{ .name }}
+ protocol: UDP
+ {{- end }}
+ {{- end }}
+ {{- if .tcp }}
+ {{- range .tcp.ports }}
+ - containerPort: {{ .port }}
+ name: {{ .name }}
+ protocol: TCP
+ {{- end }}
+ {{- end }}
+ {{- end }}
+ resources:
+{{ toYaml .Values.graylog.resources | indent 12 }}
+ livenessProbe:
+ httpGet:
+ path: /api/system/lbstatus
+ port: 9000
+ initialDelaySeconds: 120
+ periodSeconds: 30
+ failureThreshold: 3
+ successThreshold: 1
+ timeoutSeconds: 5
+ readinessProbe:
+ httpGet:
+ path: /api/system/lbstatus
+ port: 9000
+ initialDelaySeconds: 60
+ periodSeconds: 10
+ failureThreshold: 3
+ successThreshold: 1
+ timeoutSeconds: 5
+ volumeMounts:
+ - name: journal
+ mountPath: /usr/share/graylog/data/journal
+ - name: config
+ mountPath: /usr/share/graylog/config
+ - name: entrypoint
+ mountPath: /entrypoint.sh
+ subPath: entrypoint.sh
+ {{- if .Values.graylog.serverFiles }}
+ - name: files
+ mountPath: /etc/graylog/server
+ {{- end }}
+ - name: kubectl
+ mountPath: /k8s
+ lifecycle:
+ preStop:
+ exec:
+ command:
+ - bash
+ - -ec
+ - |
+ ROOT_PASSWORD=`/k8s/kubectl get secret {{ template "graylog.fullname" . }} -o "jsonpath={.data['graylog-password-secret']}" | base64 -d`
+ curl -XPOST -sS -u "{{ .Values.graylog.rootUsername }}:${ROOT_PASSWORD}" "localhost:9000/api/system/shutdown/shutdown"
+ terminationGracePeriodSeconds: {{ default 30 .Values.graylog.terminationGracePeriodSeconds }}
+ volumes:
+ - name: config
+ configMap:
+ name: {{ template "graylog.fullname" . }}
+ items:
+ - key: graylog.conf
+ path: graylog.conf
+ mode: 292 # 0444
+ - key: log4j2.xml
+ path: log4j2.xml
+ mode: 292 # 0444
+ - name: entrypoint
+ configMap:
+ name: {{ template "graylog.fullname" . }}
+ items:
+ - key: entrypoint.sh
+ path: entrypoint.sh
+ mode: 365 # 0555
+ {{- if .Values.graylog.serverFiles }}
+ - name: files
+ configMap:
+ name: {{ template "graylog.fullname" . }}-files
+ {{- end }}
+ - name: kubectl
+ emptyDir: {}
+{{- if not .Values.graylog.persistence.enabled }}
+ - name: journal
+ emptyDir: {}
+{{- else }}
+ volumeClaimTemplates:
+ - metadata:
+ name: journal
+ spec:
+ accessModes:
+ - {{ .Values.graylog.persistence.accessMode | quote }}
+ {{- if .Values.graylog.persistence.storageClass }}
+ {{- if (eq "-" .Values.graylog.persistence.storageClass) }}
+ storageClassName: ""
+ {{- else }}
+ storageClassName: "{{ .Values.graylog.persistence.storageClass }}"
+ {{- end }}
+ {{- end }}
+ resources:
+ requests:
+ storage: "{{ .Values.graylog.persistence.size }}"
+{{- end }}
diff --git a/stable/graylog/templates/tcp-service.yaml b/stable/graylog/templates/tcp-service.yaml
new file mode 100644
index 000000000000..c118a2e5cfa7
--- /dev/null
+++ b/stable/graylog/templates/tcp-service.yaml
@@ -0,0 +1,54 @@
+{{- if .Values.graylog.input.tcp }}
+apiVersion: v1
+kind: Service
+metadata:
+{{- if .Values.graylog.input.tcp.service.annotations }}
+ annotations:
+{{ toYaml .Values.graylog.input.tcp.service.annotations | indent 4 }}
+{{- end }}
+ name: {{ template "graylog.fullname" . }}-tcp
+ labels:
+ app.kubernetes.io/name: {{ template "graylog.name" . }}
+ helm.sh/chart: {{ template "graylog.chart" . }}
+ app.kubernetes.io/managed-by: "{{ .Release.Service }}"
+ app.kubernetes.io/instance: "{{ .Release.Name }}"
+ app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
+ app.kubernetes.io/component: "TCP"
+spec:
+ ports:
+ {{- range .Values.graylog.input.tcp.ports }}
+ - name: {{ .name }}
+ port: {{ .port }}
+ protocol: TCP
+ targetPort: {{ .port }}
+ {{- end }}
+{{- if contains "NodePort" .Values.graylog.input.tcp.service.type }}
+ {{- if .Values.graylog.input.tcp.service.nodePort }}
+ nodePort: {{ .Values.graylog.input.tcp.service.nodePort }}
+ {{- end }}
+{{- end }}
+{{- if .Values.graylog.input.tcp.service.externalIPs }}
+ externalIPs:
+{{ toYaml .Values.graylog.input.tcp.service.externalIPs | indent 4 }}
+{{- end }}
+{{- if eq "ClusterIP" .Values.graylog.input.tcp.service.type }}
+ {{- if .Values.graylog.input.tcp.service.clusterIP }}
+ clusterIP: {{ .Values.graylog.input.tcp.service.clusterIP }}
+ {{- end }}
+{{- end }}
+ selector:
+ app.kubernetes.io/name: {{ template "graylog.name" . }}
+ app.kubernetes.io/instance: "{{ .Release.Name }}"
+ type: "{{ .Values.graylog.input.tcp.service.type }}"
+{{- if eq "LoadBalancer" .Values.graylog.input.tcp.service.type }}
+ {{- if .Values.graylog.input.tcp.service.loadBalancerIP }}
+ loadBalancerIP: {{ .Values.graylog.input.tcp.service.loadBalancerIP }}
+ {{- end -}}
+ {{- if .Values.graylog.input.tcp.service.loadBalancerSourceRanges }}
+ loadBalancerSourceRanges:
+ {{- range .Values.graylog.input.tcp.service.loadBalancerSourceRanges }}
+ - {{ . }}
+ {{- end }}
+ {{- end -}}
+{{- end -}}
+{{- end }}
diff --git a/stable/graylog/templates/udp-service.yaml b/stable/graylog/templates/udp-service.yaml
new file mode 100644
index 000000000000..81f591e71dc2
--- /dev/null
+++ b/stable/graylog/templates/udp-service.yaml
@@ -0,0 +1,54 @@
+{{- if .Values.graylog.input.udp }}
+apiVersion: v1
+kind: Service
+metadata:
+{{- if .Values.graylog.input.udp.service.annotations }}
+ annotations:
+{{ toYaml .Values.graylog.input.udp.service.annotations | indent 4 }}
+{{- end }}
+ name: {{ template "graylog.fullname" . }}-udp
+ labels:
+ app.kubernetes.io/name: {{ template "graylog.name" . }}
+ helm.sh/chart: {{ template "graylog.chart" . }}
+ app.kubernetes.io/managed-by: "{{ .Release.Service }}"
+ app.kubernetes.io/instance: "{{ .Release.Name }}"
+ app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
+ app.kubernetes.io/component: "UDP"
+spec:
+ ports:
+ {{- range .Values.graylog.input.udp.ports }}
+ - name: {{ .name }}
+ port: {{ .port }}
+ protocol: UDP
+ targetPort: {{ .port }}
+ {{- end }}
+{{- if contains "NodePort" .Values.graylog.input.udp.service.type }}
+ {{- if .Values.graylog.input.udp.service.nodePort }}
+ nodePort: {{ .Values.graylog.input.udp.service.nodePort }}
+ {{- end }}
+{{- end }}
+{{- if .Values.graylog.input.udp.service.externalIPs }}
+ externalIPs:
+{{ toYaml .Values.graylog.input.udp.service.externalIPs | indent 4 }}
+{{- end }}
+{{- if eq "ClusterIP" .Values.graylog.input.udp.service.type }}
+ {{- if .Values.graylog.input.udp.service.clusterIP }}
+ clusterIP: {{ .Values.graylog.input.udp.service.clusterIP }}
+ {{- end }}
+{{- end }}
+ selector:
+ app.kubernetes.io/name: {{ template "graylog.name" . }}
+ app.kubernetes.io/instance: "{{ .Release.Name }}"
+ type: "{{ .Values.graylog.input.udp.service.type }}"
+{{- if eq "LoadBalancer" .Values.graylog.input.udp.service.type }}
+ {{- if .Values.graylog.input.udp.service.loadBalancerIP }}
+ loadBalancerIP: {{ .Values.graylog.input.udp.service.loadBalancerIP }}
+ {{- end -}}
+ {{- if .Values.graylog.input.udp.service.loadBalancerSourceRanges }}
+ loadBalancerSourceRanges:
+ {{- range .Values.graylog.input.udp.service.loadBalancerSourceRanges }}
+ - {{ . }}
+ {{- end }}
+ {{- end -}}
+{{- end -}}
+{{- end }}
diff --git a/stable/graylog/templates/web-service.yaml b/stable/graylog/templates/web-service.yaml
new file mode 100644
index 000000000000..a2482a4aa2e1
--- /dev/null
+++ b/stable/graylog/templates/web-service.yaml
@@ -0,0 +1,50 @@
+apiVersion: v1
+kind: Service
+metadata:
+{{- if .Values.graylog.service.annotations }}
+ annotations:
+{{ toYaml .Values.graylog.service.annotations | indent 4 }}
+{{- end }}
+ name: {{ template "graylog.fullname" . }}-web
+ labels:
+ app.kubernetes.io/name: {{ template "graylog.name" . }}
+ helm.sh/chart: {{ template "graylog.chart" . }}
+ app.kubernetes.io/managed-by: "{{ .Release.Service }}"
+ app.kubernetes.io/instance: "{{ .Release.Name }}"
+ app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
+ app.kubernetes.io/component: "web"
+spec:
+ ports:
+ - name: graylog
+ port: {{ default 9000 .Values.graylog.service.port }}
+ protocol: TCP
+ targetPort: 9000
+{{- if contains "NodePort" .Values.graylog.service.type }}
+ {{- if .Values.graylog.service.nodePort }}
+ nodePort: {{ .Values.graylog.service.nodePort }}
+ {{- end }}
+{{- end }}
+{{- if .Values.graylog.service.externalIPs }}
+ externalIPs:
+{{ toYaml .Values.graylog.service.externalIPs | indent 4 }}
+{{- end }}
+{{- if eq "ClusterIP" .Values.graylog.service.type }}
+ {{- if .Values.graylog.service.clusterIP }}
+ clusterIP: {{ .Values.graylog.service.clusterIP }}
+ {{- end }}
+{{- end }}
+ selector:
+ app.kubernetes.io/name: {{ template "graylog.name" . }}
+ app.kubernetes.io/instance: "{{ .Release.Name }}"
+ type: "{{ .Values.graylog.service.type }}"
+{{- if eq "LoadBalancer" .Values.graylog.service.type }}
+ {{- if .Values.graylog.service.loadBalancerIP }}
+ loadBalancerIP: {{ .Values.graylog.service.loadBalancerIP }}
+ {{- end -}}
+ {{- if .Values.graylog.service.loadBalancerSourceRanges }}
+ loadBalancerSourceRanges:
+ {{- range .Values.graylog.service.loadBalancerSourceRanges }}
+ - {{ . }}
+ {{- end }}
+ {{- end -}}
+{{- end -}}
diff --git a/stable/graylog/values.yaml b/stable/graylog/values.yaml
new file mode 100644
index 000000000000..ca45f7daa0e1
--- /dev/null
+++ b/stable/graylog/values.yaml
@@ -0,0 +1,272 @@
+# Default values for Graylog.
+# This is a YAML-formatted file.
+# Declare variables to be passed into your templates.
+
+rbac:
+ # Specifies whether RBAC resources should be created
+ ##
+ create: true
+
+serviceAccount:
+ # Specifies whether a ServiceAccount should be created
+ ##
+ create: true
+ # The name of the ServiceAccount to use.
+ # If not set and create is true, a name is generated using the fullname template
+ ##
+ name:
+
+tags:
+ # If true, this chart will install Elasticsearch from the requirement dependencies
+ install-elasticsearch: true
+ # If true, this chart will install a MongoDB replica set from the requirement dependencies
+ install-mongodb: true
+
+graylog:
+ ## Graylog image version
+ ## Ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
+ ##
+ ## Important note: the official Graylog Docker image may republish existing image tags, which can cause corruption when the pod starts.
+ ## Make sure you pin the exact `${version}-${x}` tag of Graylog, where `x` is the image revision.
+ ##
+ image:
+ # repository: "graylog/graylog:2.5.1-3"
+ repository: "graylog/graylog:3.0.1-1"
+ pullPolicy: "IfNotPresent"
+
+ ## Number of Graylog instances
+ ##
+ replicas: 2
+
+ ## Additional environment variables to be added to Graylog pods
+ ##
+ env: {}
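+ ## Example (hypothetical values; the Graylog Docker image maps GRAYLOG_* variables to server.conf settings):
+ # env:
+ #   GRAYLOG_HTTP_EXTERNAL_URI: "https://graylog.example.com/"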
+
+ ## Pod affinity
+ ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
+ ##
+ affinity: {}
+
+ ## Node tolerations for scheduling Graylog pods to nodes with taints
+ ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
+ ##
+ tolerations: []
+ # - key: "key"
+ # operator: "Equal|Exists"
+ # value: "value"
+ # effect: "NoSchedule|PreferNoSchedule|NoExecute(1.6 only)"
+
+ ## Node labels for Graylog pod assignment
+ ## Ref: https://kubernetes.io/docs/user-guide/node-selection/
+ ##
+ nodeSelector: {}
+
+ ## Annotations to be added to Graylog pods
+ ##
+ podAnnotations: {}
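+ ## Example (hypothetical annotation):
+ # podAnnotations:
+ #   cluster-autoscaler.kubernetes.io/safe-to-evict: "false"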
+
+ persistence:
+ ## If true, Graylog will create/use a Persistent Volume Claim
+ ## If false, use emptyDir
+ ##
+ enabled: true
+ ## Graylog data Persistent Volume access modes
+ ## Must match those of existing PV or dynamic provisioner
+ ## Ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
+ ##
+ accessMode: ReadWriteOnce
+ ## Graylog data Persistent Volume size
+ ##
+ size: "20Gi"
+ ## Graylog data Persistent Volume Storage Class
+ ## If defined, storageClassName:
+ ## If set to "-", storageClassName: "", which disables dynamic provisioning
+ ## If undefined (the default) or set to null, no storageClassName spec is
+ ## set, choosing the default provisioner. (gp2 on AWS, standard on
+ ## GKE, AWS & OpenStack)
+ ##
+ # storageClass: "ssd"
+
+ ## Additional plugins you need to install on Graylog.
+ ##
+ plugins: []
+ # - name: graylog-plugin-slack-2.7.1.jar
+ # url: https://github.com/omise/graylog-plugin-slack/releases/download/2.7.1/graylog-plugin-slack-2.7.1.jar
+ # - name: graylog-plugin-function-check-diff-1.0.0.jar
+ # url: https://github.com/omise/graylog-plugin-function-check-diff/releases/download/1.0.0/graylog-plugin-function-check-diff-1.0.0.jar
+ # - name: graylog-plugin-custom-alert-condition-1.0.0.jar
+ # url: https://github.com/omise/graylog-plugin-custom-alert-condition/releases/download/v1.0.0/graylog-plugin-custom-alert-condition-1.0.0.jar
+
+ ## A service for Graylog web interface
+ ##
+ service:
+ type: ClusterIP
+ port: 9000
+
+ master:
+ ## Graylog master service Ingress annotations
+ ##
+ annotations: {}
+ ## Graylog master service port.
+ ##
+ port: 9000
+
+ ## Additional input ports for receiving logs from servers
+ ## Note: Each name must be a valid IANA_SVC_NAME (at most 15 characters, matching the regex [a-z0-9]([a-z0-9-]*[a-z0-9])*, containing at least one letter [a-z], with no adjacent hyphens)
+ ## Note: The array must be sorted in port order
+ ##
+ input: {}
+ # tcp:
+ # service:
+ # type: LoadBalancer
+ # loadBalancerIP:
+ # ports:
+ # - name: gelf
+ # port: 12222
+ # udp:
+ # service:
+ # type: ClusterIP
+ # ports:
+ # - name: syslog
+ # port: 12222
+
+ ingress:
+ ## If true, Graylog server Ingress will be created
+ ##
+ enabled: false
+ ## Graylog server Ingress annotations
+ ##
+ annotations: {}
+ ## Graylog server Ingress hostnames with optional path
+ ## Must be provided if Ingress is enabled
+ ## Note: Graylog does not support multiple URLs. You can specify only a single URL
+ ##
+ hosts: []
+ # - graylog.yourdomain.com
+
+ ## Graylog server Ingress TLS configuration
+ ## Secrets must be manually created in the namespace
+ ##
+ tls: []
+ # - secretName: graylog-server-tls
+ # hosts:
+ # - graylog.yourdomain.com
+
+
+ ## Configure resource requests and limits
+ ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
+ ##
+ resources:
+ limits:
+ cpu: "1"
+ requests:
+ cpu: "500m"
+ memory: "1024Mi"
+
+ ## Set the Graylog Java heap size. If this value is empty, the chart will size the heap using `-XX:+UseCGroupMemoryLimitForHeap`
+ ## ref: https://blogs.oracle.com/java-platform-group/java-se-support-for-docker-cpu-and-memory-limits
+ ##
+ # heapSize: "1024m"
+
+ ## RollingUpdate update strategy
+ ## ref: https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#updating-statefulsets
+ ##
+ updateStrategy: OnDelete
+ ## Graylog server pod termination grace period
+ ##
+ terminationGracePeriodSeconds: 120
+
+ metrics:
+ ## If true, prometheus annotations will be attached
+ ##
+ enabled: false
+
+ geoip:
+ ## If true, the MaxMind GeoLite2 database will be installed at ${GRAYLOG_HOME}/geoip
+ ##
+ enabled: false
+
+ ## Graylog root user name
+ ##
+ rootUsername: "admin"
+
+ ## Graylog root password
+ ## Defaults to a random 16-character alphanumeric string if not set
+ ##
+ # rootPassword: ""
+
+ ## Graylog root email
+ ##
+ rootEmail: ""
+
+ ## Graylog root timezone
+ ##
+ rootTimezone: "UTC"
+
+ elasticsearch:
+ ## List of Elasticsearch hosts Graylog should connect to.
+ ## Needs to be specified as a comma-separated list of valid URIs for the HTTP ports of your Elasticsearch nodes.
+ ## If one or more of your Elasticsearch hosts require authentication, include the credentials in each node URI that
+ ## requires authentication.
+ ##
+ # hosts: http://elasticsearch-client.graylog.svc.cluster.local:9200
+ hosts: ""
+
+ mongodb:
+ ## MongoDB connection string
+ ## See https://docs.mongodb.com/manual/reference/connection-string/ for details
+ # uri: mongodb://user:pass@host1:27017,host2:27017,host3:27017/graylog?replicaSet=rs01
+ uri: ""
+
+ ## If you encounter MongoDB connection problems, increase this value up to the maximum number of connections
+ ## your MongoDB server can handle from a single client.
+ ##
+ maxConnections: 1000
+
+ transportEmail:
+ ## If true, enable Email transport.
+ ## See http://docs.graylog.org/en/3.0/pages/configuration/server.conf.html#email for details
+ ##
+ enabled: false
+ hostname: ""
+ port: 2587
+ useAuth: true
+ useTls: true
+ useSsl: false
+ authUsername: ""
+ authPassword: ""
+ subjectPrefix: "[graylog]"
+ fromEmail: ""
+
+ ## Additional Graylog config to be appended to `graylog.conf`.
+ ## You can find the complete list of Graylog config options at http://docs.graylog.org/en/3.0/pages/configuration/server.conf.html
+ ## Graylog config is written in Java properties format. Make sure you write it correctly.
+ ##
+ # config: |
+ # elasticsearch_connect_timeout = 10s
+ # elasticsearch_socket_timeout = 60s
+ # elasticsearch_idle_timeout = -1s
+
+ journal:
+ ## Sometimes the Graylog journal grows continually or becomes corrupt, leaving Graylog unable to start.
+ ## You then need to clean up all journal files in order to run Graylog.
+ ## Set `graylog.journal.deleteBeforeStart` to `true` to delete all journal files before start
+ ## Note: All uncommitted logs will be permanently DELETED when this value is true
+ ##
+ deleteBeforeStart: false
+
+ ## Additional server files will be deployed to /etc/graylog/server
+ ## For example, you can put server certificates or authorized clients certificates here
+ ##
+ serverFiles: {}
+ # graylog-server.key: |
+ # graylog-server.cert: |
+
+## Specify the Elasticsearch version from the requirement dependencies. Ignore this section if you install Elasticsearch manually.
+## Note: Graylog 2.4 requires Elasticsearch version <= 5.6
+elasticsearch:
+ image:
+ repository: "docker.elastic.co/elasticsearch/elasticsearch-oss"
+ tag: "6.5.4"
+ cluster:
+ xpackEnable: false
diff --git a/stable/hackmd/Chart.yaml b/stable/hackmd/Chart.yaml
index fe5049690a2d..d92a75748683 100644
--- a/stable/hackmd/Chart.yaml
+++ b/stable/hackmd/Chart.yaml
@@ -1,7 +1,7 @@
name: hackmd
apiVersion: v1
-version: "1.0.1"
-appVersion: "1.2.1-alpine"
+version: "1.2.1"
+appVersion: "1.3.0-alpine"
description: Realtime collaborative markdown notes on all platforms.
icon: https://hackmd.io/favicon.png
keywords:
diff --git a/stable/hackmd/README.md b/stable/hackmd/README.md
index 143b02a7e54b..21e15ca22b8b 100644
--- a/stable/hackmd/README.md
+++ b/stable/hackmd/README.md
@@ -26,9 +26,10 @@ The following configurations may be set. It is recommended to use values.yaml fo
Parameter | Description | Default
--------- | ----------- | -------
+`deploymentStrategy` | Deployment strategy. | `RollingUpdate`
`replicaCount` | How many replicas to run. | 1
`image.repository` | Name of the image to run, without the tag. | [hackmdio/hackmd](https://github.com/hackmdio/docker-hackmd)
-`image.tag` | The image tag to use. | 1.2.1-alpine
+`image.tag` | The image tag to use. | 1.3.0-alpine
`image.pullPolicy` | The kubernetes image pull policy. | IfNotPresent
`service.name` | The kubernetes service name to use. | hackmd
`service.type` | The kubernetes service type to use. | ClusterIP
@@ -45,6 +46,8 @@ Parameter | Description | Default
`persistence.size` | Persistent Volume size | `2Gi`
`persistence.storageClass` | Persistent Volume Storage Class | `unset`
`extraVars` | Hackmd's extra environment variables | `[]`
+`podAnnotations` | Pod annotations | `{}`
+`sessionSecret` | Hackmd's session secret | `""` (Randomly generated)
`postgresql.install` | Enable PostgreSQL as a chart dependency | `true`
`postgresql.imageTag` | The image tag for PostgreSQL | `9.6.2`
`postgresql.postgresUser` | PostgreSQL User to create | `hackmd`
diff --git a/stable/hackmd/templates/deployment.yaml b/stable/hackmd/templates/deployment.yaml
index ac3046e57da3..bf05a0fc9b1b 100644
--- a/stable/hackmd/templates/deployment.yaml
+++ b/stable/hackmd/templates/deployment.yaml
@@ -13,11 +13,20 @@ spec:
matchLabels:
app: {{ template "hackmd.name" . }}
release: {{ .Release.Name }}
+ strategy:
+ type: {{ .Values.deploymentStrategy }}
+ {{- if ne .Values.deploymentStrategy "RollingUpdate" }}
+ rollingUpdate: null
+ {{- end }}
template:
metadata:
labels:
app: {{ template "hackmd.name" . }}
release: {{ .Release.Name }}
+{{- with .Values.podAnnotations }}
+ annotations:
+{{ toYaml . | indent 8 }}
+{{- end }}
spec:
containers:
- name: {{ .Chart.Name }}
@@ -38,7 +47,7 @@ spec:
port: 3000
initialDelaySeconds: 30
env:
- - name: HMD_DB_PASSWORD
+ - name: CMD_DB_PASSWORD
{{- if .Values.postgresql.install }}
valueFrom:
secretKeyRef:
@@ -47,8 +56,15 @@ spec:
{{- else }}
value: {{ .Values.postgresql.postgresPassword }}
{{- end }}
+ - name: CMD_SESSION_SECRET
+ valueFrom:
+ secretKeyRef:
+ name: {{ template "hackmd.fullname" . }}
+ key: sessionSecret
+ - name: CMD_DB_URL
+ value: postgres://{{ .Values.postgresql.postgresUser }}:$(CMD_DB_PASSWORD)@{{ template "hackmd.database.host" . }}:5432/{{ .Values.postgresql.postgresDatabase }}
- name: HMD_DB_URL
- value: postgres://{{ .Values.postgresql.postgresUser }}:$(HMD_DB_PASSWORD)@{{ template "hackmd.database.host" . }}:5432/{{ .Values.postgresql.postgresDatabase }}
+ value: postgres://{{ .Values.postgresql.postgresUser }}:$(CMD_DB_PASSWORD)@{{ template "hackmd.database.host" . }}:5432/{{ .Values.postgresql.postgresDatabase }}
{{- if .Values.extraVars }}
{{ toYaml .Values.extraVars | indent 12 }}
{{- end }}
diff --git a/stable/hackmd/templates/secret.yaml b/stable/hackmd/templates/secret.yaml
new file mode 100644
index 000000000000..f04b4f4fdccc
--- /dev/null
+++ b/stable/hackmd/templates/secret.yaml
@@ -0,0 +1,16 @@
+apiVersion: v1
+kind: Secret
+metadata:
+ name: {{ template "hackmd.fullname" . }}
+ labels:
+ app: {{ template "hackmd.name" . }}
+ chart: {{ template "hackmd.chart" . }}
+ release: {{ .Release.Name }}
+ heritage: {{ .Release.Service }}
+type: Opaque
+data:
+ {{- if .Values.sessionSecret }}
+ sessionSecret: {{ .Values.sessionSecret | b64enc | quote }}
+ {{- else }}
+ sessionSecret: {{ randAlphaNum 10 | b64enc | quote }}
+ {{- end }}
diff --git a/stable/hackmd/values.yaml b/stable/hackmd/values.yaml
index e9ee2efb5433..3da2f613d96c 100644
--- a/stable/hackmd/values.yaml
+++ b/stable/hackmd/values.yaml
@@ -4,9 +4,11 @@
replicaCount: 1
+deploymentStrategy: RollingUpdate
+
image:
repository: hackmdio/hackmd
- tag: 1.2.1-alpine
+ tag: 1.3.0-alpine
pullPolicy: IfNotPresent
service:
@@ -64,6 +66,8 @@ persistence:
##
# storageClass: "-"
+podAnnotations: {}
+
extraVars: []
nodeSelector: {}
diff --git a/stable/hazelcast-jet/Chart.yaml b/stable/hazelcast-jet/Chart.yaml
index 521c39ac1f1c..7d82e300c8f8 100644
--- a/stable/hazelcast-jet/Chart.yaml
+++ b/stable/hazelcast-jet/Chart.yaml
@@ -1,8 +1,8 @@
apiVersion: v1
-appVersion: "0.7"
-description: Hazelcast Jet is an application embeddable, distributed computing engine built on top of Hazelcast In-Memory Data Grid (IMDG). With Hazelcast IMDG providing storage functionality, Hazelcast Jet performs parallel execution to enable data-intensive applications to operate in near real-time.
name: hazelcast-jet
-version: 1.0.1
+version: 1.1.0
+appVersion: "3.0"
+description: Hazelcast Jet is an application embeddable, distributed computing engine built on top of Hazelcast In-Memory Data Grid (IMDG). With Hazelcast IMDG providing storage functionality, Hazelcast Jet performs parallel execution to enable data-intensive applications to operate in near real-time.
keywords:
- hazelcast
- jet
diff --git a/stable/hazelcast-jet/values.yaml b/stable/hazelcast-jet/values.yaml
index e021b4ff9d43..329a7f4ad53d 100644
--- a/stable/hazelcast-jet/values.yaml
+++ b/stable/hazelcast-jet/values.yaml
@@ -5,7 +5,7 @@ image:
# repository is the Hazelcast Jet image name
repository: "hazelcast/hazelcast-jet"
# tag is the Hazelcast Jet image tag
- tag: "0.7"
+ tag: "3.0"
# pullPolicy is the Docker image pull policy
# It's recommended to change this to 'Always' if the image tag is 'latest'
# ref: http://kubernetes.io/docs/user-guide/images/#updating-images
diff --git a/stable/hazelcast/Chart.yaml b/stable/hazelcast/Chart.yaml
index 2120ea9e80fd..59be76061b8c 100644
--- a/stable/hazelcast/Chart.yaml
+++ b/stable/hazelcast/Chart.yaml
@@ -1,6 +1,7 @@
+apiVersion: v1
name: hazelcast
-version: 1.1.0
-appVersion: "3.11"
+version: 1.3.1
+appVersion: "3.11.2"
description: Hazelcast IMDG is the most widely used in-memory data grid with hundreds of thousands of installed clusters around the world. It offers caching solutions ensuring that data is in the right place when it’s needed for optimal performance.
keywords:
- hazelcast
diff --git a/stable/hazelcast/README.md b/stable/hazelcast/README.md
index 2458fa9b71eb..7097baada0c4 100644
--- a/stable/hazelcast/README.md
+++ b/stable/hazelcast/README.md
@@ -73,6 +73,17 @@ The following table lists the configurable parameters of the Hazelcast chart and
| `rbac.create` | Enable installing RBAC Role authorization | `true` |
| `serviceAccount.create` | Enable installing Service Account | `true` |
| `serviceAccount.name` | Name of Service Account, if not set, the name is generated using the fullname template | `nil` |
+| `securityContext.fsGroup` | Group ID associated with the Hazelcast container | `65534` |
+| `securityContext.runAsUser` | User ID associated with the Hazelcast container | `65534` |
+| `securityContext.runAsNonRoot` | Runs Hazelcast container as non-root user | `true` |
+| `securityContext.readOnlyRootFilesystem` | Read only root filesystem | `true` |
+| `securityContext.allowPrivilegeEscalation` | Allows privilege escalation | `false` |
+| `securityContext.defaultAllowPrivilegeEscalation` | Default allow privilege escalation | `false` |
+| `metrics.enabled` | Turn on and off JMX Prometheus metrics available at `/metrics` | `false` |
+| `metrics.service.type` | Type of the metrics service | `ClusterIP` |
+| `metrics.service.port` | Port of the `/metrics` endpoint and the metrics service | `8080` |
+| `metrics.service.annotations` | Annotations for the Prometheus discovery | |
+
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
diff --git a/stable/hazelcast/templates/metrics-service.yaml b/stable/hazelcast/templates/metrics-service.yaml
new file mode 100644
index 000000000000..f801e849c4f2
--- /dev/null
+++ b/stable/hazelcast/templates/metrics-service.yaml
@@ -0,0 +1,23 @@
+{{- if .Values.metrics.enabled }}
+apiVersion: v1
+kind: Service
+metadata:
+ name: {{ template "hazelcast.fullname" . }}-metrics
+ labels:
+ app: {{ template "hazelcast.name" . }}
+ chart: {{ template "hazelcast.chart" . }}
+ release: "{{ .Release.Name }}"
+ heritage: "{{ .Release.Service }}"
+ annotations:
+{{ toYaml .Values.metrics.service.annotations | indent 4 }}
+spec:
+ type: {{ .Values.metrics.service.type }}
+ selector:
+ app: {{ template "hazelcast.name" . }}
+ release: "{{ .Release.Name }}"
+ ports:
+ - protocol: TCP
+ port: {{ .Values.metrics.service.port }}
+ targetPort: metrics
+ name: metrics
+{{- end }}
diff --git a/stable/hazelcast/templates/statefulset.yaml b/stable/hazelcast/templates/statefulset.yaml
index aeb0a91f5b5a..cc59bb382a30 100644
--- a/stable/hazelcast/templates/statefulset.yaml
+++ b/stable/hazelcast/templates/statefulset.yaml
@@ -36,11 +36,19 @@ spec:
- name: {{ template "hazelcast.fullname" . }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy | quote }}
+ {{- if .Values.securityContext }}
+ securityContext:
+{{ toYaml .Values.securityContext | indent 10 }}
+ {{- end }}
resources:
{{ toYaml .Values.resources | indent 10 }}
ports:
- name: hazelcast
containerPort: 5701
+ {{- if .Values.metrics.enabled }}
+ - name: metrics
+ containerPort: {{ .Values.metrics.service.port }}
+ {{- end }}
{{- if .Values.livenessProbe.enabled }}
livenessProbe:
httpGet:
@@ -67,8 +75,12 @@ spec:
- name: hazelcast-storage
mountPath: /data/hazelcast
env:
+ {{- if .Values.metrics.enabled }}
+ - name: PROMETHEUS_PORT
+ value: "{{ .Values.metrics.service.port }}"
+ {{- end }}
- name: JAVA_OPTS
- value: "-Dhazelcast.rest.enabled={{ .Values.hazelcast.rest }} -Dhazelcast.config=/data/hazelcast/hazelcast.xml -DserviceName={{ template "hazelcast.fullname" . }} -Dnamespace={{ .Release.Namespace }} {{ if .Values.gracefulShutdown.enabled }}-Dhazelcast.shutdownhook.policy=GRACEFUL -Dhazelcast.shutdownhook.enabled=true -Dhazelcast.graceful.shutdown.max.wait={{ .Values.gracefulShutdown.maxWaitSeconds }} {{ end }}{{ .Values.hazelcast.javaOpts }}"
+ value: "-Dhazelcast.rest.enabled={{ .Values.hazelcast.rest }} -Dhazelcast.config=/data/hazelcast/hazelcast.xml -DserviceName={{ template "hazelcast.fullname" . }} -Dnamespace={{ .Release.Namespace }} {{ if .Values.gracefulShutdown.enabled }}-Dhazelcast.shutdownhook.policy=GRACEFUL -Dhazelcast.shutdownhook.enabled=true -Dhazelcast.graceful.shutdown.max.wait={{ .Values.gracefulShutdown.maxWaitSeconds }} {{ end }} {{ if .Values.metrics.enabled }}-Dhazelcast.jmx=true{{ end }} {{ .Values.hazelcast.javaOpts }}"
serviceAccountName: {{ template "hazelcast.serviceAccountName" . }}
volumes:
- name: hazelcast-storage
diff --git a/stable/hazelcast/values.yaml b/stable/hazelcast/values.yaml
index 983955cedf8f..c1b49331da11 100644
--- a/stable/hazelcast/values.yaml
+++ b/stable/hazelcast/values.yaml
@@ -5,7 +5,7 @@ image:
# repository is the Hazelcast image name
repository: "hazelcast/hazelcast"
# tag is the Hazelcast image tag
- tag: "3.11"
+ tag: "3.11.2"
# pullPolicy is the Docker image pull policy
# It's recommended to change this to 'Always' if the image tag is 'latest'
# ref: http://kubernetes.io/docs/user-guide/images/#updating-images
@@ -111,7 +111,6 @@ service:
# It is required if DNS Lookup is used (https://github.com/hazelcast/hazelcast-kubernetes#dns-lookup)
# clusterIP: "None"
-
# Role-based Access Control
rbac:
# Specifies whether RBAC resources should be created
@@ -124,3 +123,22 @@ serviceAccount:
# The name of the ServiceAccount to use.
# If not set and create is true, a name is generated using the fullname template
name:
+
+# Default security context for running unprivileged
+securityContext:
+ fsGroup: 65534
+ runAsUser: 65534
+ runAsNonRoot: true
+ readOnlyRootFilesystem: true
+ allowPrivilegeEscalation: false
+ defaultAllowPrivilegeEscalation: false
+
+# Enables Prometheus scraping of pods; implemented for Hazelcast version >= 3.12 (or 'latest')
+metrics:
+ enabled: false
+ service:
+ type: ClusterIP
+ port: 8080
+ annotations:
+ prometheus.io/scrape: "true"
+ prometheus.io/path: "/metrics"
diff --git a/stable/heapster/Chart.yaml b/stable/heapster/Chart.yaml
index 307fd20febec..f17de78f1e6b 100644
--- a/stable/heapster/Chart.yaml
+++ b/stable/heapster/Chart.yaml
@@ -1,7 +1,7 @@
apiVersion: v1
description: Heapster enables Container Cluster Monitoring and Performance Analysis.
name: heapster
-version: 0.3.2
+version: 0.3.3
appVersion: 1.5.2
home: https://github.com/kubernetes/heapster
sources:
diff --git a/stable/heapster/README.md b/stable/heapster/README.md
index ed77b63e59ad..2ae70e2fba16 100644
--- a/stable/heapster/README.md
+++ b/stable/heapster/README.md
@@ -44,7 +44,7 @@ The default configuration values for this chart are listed in `values.yaml`.
| `resources.limits` | Server resource limits | limits: {cpu: 100m, memory: 128Mi} |
| `resources.requests` | Server resource requests | requests: {cpu: 100m, memory: 128Mi} |
| `command` | Commands for heapster pod | "/heapster --source=kubernetes.summary_api:'' |
-| `rbac.create` | Bind system:heapster role | false |
+| `rbac.create` | Bind system:heapster role | true |
| `rbac.serviceAccountName` | existing ServiceAccount to use (ignored if rbac.create=true) | default |
| `resizer.enabled` | If enabled, scale resources | true |
| `eventer.enabled` | If enabled, start eventer | false |
diff --git a/stable/heartbeat/Chart.yaml b/stable/heartbeat/Chart.yaml
index 001174d3513d..d7e4921729fd 100644
--- a/stable/heartbeat/Chart.yaml
+++ b/stable/heartbeat/Chart.yaml
@@ -2,8 +2,8 @@ apiVersion: v1
description: A Helm chart to periodically check the status of your services with heartbeat
icon: https://www.elastic.co/jp/assets/blt75e7956191e1bd55/icon-heartbeat-bb.svg
name: heartbeat
-version: 1.0.0
-appVersion: 6.6.0
+version: 1.2.0
+appVersion: 6.7.0
home: https://www.elastic.co/products/beats/heartbeat
sources:
- https://www.elastic.co/guide/en/beats/heartbeat/current/index.html
diff --git a/stable/heartbeat/README.md b/stable/heartbeat/README.md
index e995e8bb394f..5dda51c2f8c5 100644
--- a/stable/heartbeat/README.md
+++ b/stable/heartbeat/README.md
@@ -38,21 +38,25 @@ The command removes all the Kubernetes components associated with the chart and
The following table lists the configurable parameters of the heartbeat chart and their default values.
-| Parameter | Description | Default |
-|-------------------------------------|------------------------------------|-------------------------------------------|
-| `image.repository` | The image repository to pull from | `docker.elastic.co/beats/heartbeat` |
-| `image.tag` | The image tag to pull | `6.6.0` |
-| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
-| `rbac.create` | If true, create & use RBAC resources | `true` |
-| `rbac.serviceAccount` | existing ServiceAccount to use (ignored if rbac.create=true) | `default` |
-| `config` | The content of the configuration file consumed by heartbeat. See the [heartbeat documentation](https://www.elastic.co/guide/en/beats/heartbeat/current/heartbeat-reference-yml.html) for full details |
-| `plugins` | List of beat plugins |
-| `extraVars` | A map of additional environment variables | |
-| `extraVolumes`, `extraVolumeMounts` | Additional volumes and mounts, for example to provide other configuration files | |
-| `resources.requests.cpu` | CPU resource requests | |
-| `resources.limits.cpu` | CPU resource limits | |
-| `resources.requests.memory` | Memory resource requests | |
-| `resources.limits.memory` | Memory resource limits | |
+| Parameter | Description | Default |
+| ----------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------- |
+| `image.repository` | The image repository to pull from | `docker.elastic.co/beats/heartbeat` |
+| `image.tag` | The image tag to pull | `6.7.0` |
+| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
+| `rbac.create` | If true, create & use RBAC resources | `true` |
+| `rbac.serviceAccount` | existing ServiceAccount to use (ignored if rbac.create=true) | `default` |
+| `config` | The content of the configuration file consumed by heartbeat. See the [heartbeat documentation](https://www.elastic.co/guide/en/beats/heartbeat/current/heartbeat-reference-yml.html) for full details | |
+| `plugins` | List of beat plugins | |
+| `hostNetwork` | If true, use hostNetwork | `false` |
+| `extraVars` | A map of additional environment variables | |
+| `extraVolumes`, `extraVolumeMounts` | Additional volumes and mounts, for example to provide other configuration files | |
+| `resources.requests.cpu` | CPU resource requests | |
+| `resources.limits.cpu` | CPU resource limits | |
+| `resources.requests.memory` | Memory resource requests | |
+| `resources.limits.memory` | Memory resource limits | |
+| `priorityClassName` | Priority class name | |
+| `nodeSelector` | Node Selector | |
+| `tolerations` | Pod's tolerations | |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
diff --git a/stable/heartbeat/templates/daemonset.yaml b/stable/heartbeat/templates/daemonset.yaml
index ec2c4239d8b5..998f1f4b61b3 100644
--- a/stable/heartbeat/templates/daemonset.yaml
+++ b/stable/heartbeat/templates/daemonset.yaml
@@ -36,6 +36,18 @@ spec:
- {{ .Values.plugins | join "," | quote }}
{{- end }}
env:
+ - name: POD_NAMESPACE
+ valueFrom:
+ fieldRef:
+ fieldPath: metadata.namespace
+ - name: NODE_NAME
+ valueFrom:
+ fieldRef:
+ fieldPath: spec.nodeName
+ - name: NODE_IP
+ valueFrom:
+ fieldRef:
+ fieldPath: status.hostIP
{{- range $key, $value := .Values.extraVars }}
- name: {{ $key }}
value: {{ $value }}
@@ -64,13 +76,20 @@ spec:
type: DirectoryOrCreate
{{- if .Values.extraVolumes }}
{{ toYaml .Values.extraVolumes | indent 6 }}
+{{- end }}
+{{- if .Values.priorityClassName }}
+ priorityClassName: {{ .Values.priorityClassName }}
{{- end }}
terminationGracePeriodSeconds: 60
+{{- if .Values.hostNetwork }}
+ hostNetwork: true
+ dnsPolicy: ClusterFirstWithHostNet
+{{- end }}
serviceAccountName: {{ template "heartbeat.serviceAccountName" . }}
+{{- if .Values.tolerations }}
tolerations:
- - key: node-role.kubernetes.io/master
- operator: Exists
- effect: NoSchedule
+{{ toYaml .Values.tolerations | indent 8 }}
+{{- end }}
{{- if .Values.nodeSelector }}
nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
diff --git a/stable/heartbeat/values.yaml b/stable/heartbeat/values.yaml
index 9c7083a3963c..30fe6fe277a2 100644
--- a/stable/heartbeat/values.yaml
+++ b/stable/heartbeat/values.yaml
@@ -1,6 +1,6 @@
image:
repository: docker.elastic.co/beats/heartbeat
- tag: 6.6.0
+ tag: 6.7.0
pullPolicy: IfNotPresent
config:
@@ -32,6 +32,8 @@ config:
plugins: []
# - kinesis.so
+hostNetwork: false
+
# A map of additional environment variables
extraVars: {}
# test1: "test2"
@@ -58,6 +60,12 @@ resources: {}
# cpu: 100m
# memory: 100Mi
+priorityClassName: ""
+
+nodeSelector: {}
+
+tolerations: []
+
rbac:
# Specifies whether RBAC resources should be created
create: true
diff --git a/stable/helm-exporter/Chart.yaml b/stable/helm-exporter/Chart.yaml
index 289994d1801d..317756c7d08a 100644
--- a/stable/helm-exporter/Chart.yaml
+++ b/stable/helm-exporter/Chart.yaml
@@ -1,8 +1,8 @@
apiVersion: v1
-appVersion: "0.1.0"
+appVersion: "0.4.0"
description: Exports helm release stats to prometheus
name: helm-exporter
-version: 0.1.0
+version: 0.2.3
home: https://github.com/sstarcher/helm-exporter
sources:
- https://github.com/sstarcher/helm-exporter
diff --git a/stable/helm-exporter/README.md b/stable/helm-exporter/README.md
index 5dd94b2d6a52..71bf1e531662 100644
--- a/stable/helm-exporter/README.md
+++ b/stable/helm-exporter/README.md
@@ -12,6 +12,8 @@ $ helm install stable/helm-exporter
This chart bootstraps a [helm-exporter](https://github.com/sstarcher/helm-exporter) deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
+The chart comes with a ServiceMonitor for use with the [Prometheus Operator](https://github.com/helm/charts/tree/master/stable/prometheus-operator).
+
## Installing the Chart
To install the chart with the release name `my-release`:
@@ -40,11 +42,16 @@ Parameter | Description | Default
--- | --- | ---
`affinity` | affinity configuration for pod assignment | `{}`
`image.repository` | Image | `sstarcher/helm-exporter`
-`image.tag` | Image tag | `0.1.0`
+`image.tag` | Image tag | `0.4.0`
`image.pullPolicy` | Image pull policy | `IfNotPresent`
+`tillerNamespaces` | Overrides the default Tiller namespace, or provides multiple Tiller namespaces, e.g. "kube-system,dev" | ""
`nodeSelector` | node labels for pod assignment | `{}`
`resources` | pod resource requests & limits | `{}`
`tolerations` | List of node taints to tolerate (requires Kubernetes 1.6+) | `[]`
+`serviceMonitor.create` | Set to true if using the Prometheus Operator | `false`
+`serviceMonitor.interval` | Interval at which metrics should be scraped | ``
+`serviceMonitor.namespace` | The namespace where the Prometheus Operator is deployed | ``
+`serviceMonitor.additionalLabels` | Additional labels to add to the ServiceMonitor | `{}`
```console
$ helm install stable/helm-exporter --name my-release
diff --git a/stable/helm-exporter/templates/deployment.yaml b/stable/helm-exporter/templates/deployment.yaml
index d72fe4ae3385..63f477954482 100644
--- a/stable/helm-exporter/templates/deployment.yaml
+++ b/stable/helm-exporter/templates/deployment.yaml
@@ -23,17 +23,20 @@ spec:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
+ {{- if .Values.tillerNamespaces }}
+ args: ["-tiller-namespaces", {{ .Values.tillerNamespaces | quote }}]
+ {{- end }}
ports:
- name: http
- containerPort: 9100
+ containerPort: 9571
protocol: TCP
livenessProbe:
httpGet:
- path: /metrics
+ path: /healthz
port: http
readinessProbe:
httpGet:
- path: /metrics
+ path: /healthz
port: http
resources:
{{ toYaml .Values.resources | indent 12 }}
diff --git a/stable/helm-exporter/templates/service.yaml b/stable/helm-exporter/templates/service.yaml
index 287c5d65bfb6..0c6a73b93059 100644
--- a/stable/helm-exporter/templates/service.yaml
+++ b/stable/helm-exporter/templates/service.yaml
@@ -7,13 +7,15 @@ metadata:
helm.sh/chart: {{ include "helm-exporter.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
+ {{- if not .Values.serviceMonitor.create }}
annotations:
prometheus.io/scrape: "true"
+ {{- end }}
spec:
type: ClusterIP
clusterIP: None
ports:
- - port: 9100
+ - port: 9571
targetPort: http
protocol: TCP
name: metrics
diff --git a/stable/helm-exporter/templates/servicemonitor.yaml b/stable/helm-exporter/templates/servicemonitor.yaml
new file mode 100644
index 000000000000..e8e20f00327e
--- /dev/null
+++ b/stable/helm-exporter/templates/servicemonitor.yaml
@@ -0,0 +1,31 @@
+{{ if .Values.serviceMonitor.create }}
+apiVersion: monitoring.coreos.com/v1
+kind: ServiceMonitor
+metadata:
+ name: {{ include "helm-exporter.fullname" . }}
+ labels:
+ app.kubernetes.io/name: {{ include "helm-exporter.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ helm.sh/chart: {{ include "helm-exporter.chart" . }}
+ {{- range $key, $value := .Values.serviceMonitor.additionalLabels }}
+ {{ $key }}: {{ $value | quote }}
+ {{- end }}
+ {{- with .Values.serviceMonitor.namespace }}
+ namespace: {{ . }}
+ {{- end }}
+spec:
+ endpoints:
+ - port: metrics
+ honorLabels: true
+ {{- with .Values.serviceMonitor.interval }}
+ interval: {{ . }}
+ {{- end }}
+ namespaceSelector:
+ matchNames:
+ - {{ .Release.Namespace }}
+ selector:
+ matchLabels:
+ app.kubernetes.io/name: {{ include "helm-exporter.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+{{- end }}
diff --git a/stable/helm-exporter/values.yaml b/stable/helm-exporter/values.yaml
index 3ff600d6fd4d..3a5ce4920a91 100644
--- a/stable/helm-exporter/values.yaml
+++ b/stable/helm-exporter/values.yaml
@@ -2,9 +2,12 @@ replicaCount: 1
image:
repository: sstarcher/helm-exporter
- tag: 0.1.0
+ tag: 0.4.0
pullPolicy: Always
+# Override the default Tiller namespace, or provide multiple Tiller namespaces, e.g. "kube-system,dev"
+tillerNamespaces: ""
+
nameOverride: ""
fullnameOverride: ""
@@ -26,3 +29,10 @@ serviceAccount:
# The name of the ServiceAccount to use.
# If not set and create is true, a name is generated using the fullname template
name:
+
+serviceMonitor:
+ # Specifies whether a ServiceMonitor should be created
+ create: false
+ interval:
+ namespace:
+ additionalLabels: {}
diff --git a/stable/hlf-ca/Chart.yaml b/stable/hlf-ca/Chart.yaml
index 4e280e7f5b4b..78ab43d3018b 100644
--- a/stable/hlf-ca/Chart.yaml
+++ b/stable/hlf-ca/Chart.yaml
@@ -1,7 +1,7 @@
apiVersion: v1
description: Hyperledger Fabric Certificate Authority chart (these charts are created by AID:Tech and are currently not directly associated with the Hyperledger project)
name: hlf-ca
-version: 1.1.4
+version: 1.1.6
appVersion: 1.3.0
keywords:
- blockchain
diff --git a/stable/hlf-ca/templates/configmap--ca.yaml b/stable/hlf-ca/templates/configmap--ca.yaml
index 340e6aae68fa..94793fa9a5f0 100644
--- a/stable/hlf-ca/templates/configmap--ca.yaml
+++ b/stable/hlf-ca/templates/configmap--ca.yaml
@@ -8,4 +8,4 @@ data:
GODEBUG: "netdns=go"
FABRIC_CA_HOME: /var/hyperledger/fabric-ca
FABRIC_CA_SERVER_CA_NAME: {{ .Values.caName | quote }}
- SERVICE_DNS: {{ include "hlf-ca.fullname" . }}.{{ .Release.Namespace }}.svc.cluster.local
+ SERVICE_DNS: 0.0.0.0 # Point to itself
diff --git a/stable/hlf-ca/templates/deployment.yaml b/stable/hlf-ca/templates/deployment.yaml
index 6d2998276d91..ea9a81847e75 100644
--- a/stable/hlf-ca/templates/deployment.yaml
+++ b/stable/hlf-ca/templates/deployment.yaml
@@ -10,6 +10,10 @@ spec:
matchLabels:
app: {{ include "hlf-ca.name" . }}
release: {{ .Release.Name }}
+ # Ensure we allow our pod to be unavailable, so we can upgrade
+ strategy:
+ rollingUpdate:
+ maxUnavailable: 1
template:
metadata:
labels:
diff --git a/stable/hlf-ca/tests/README.md b/stable/hlf-ca/tests/README.md
index 4617812d1a44..84753c85d7a5 100644
--- a/stable/hlf-ca/tests/README.md
+++ b/stable/hlf-ca/tests/README.md
@@ -10,9 +10,7 @@ Commands should be run from the root folder of the repository.
Due to presence of dependencies, please run inside the chart dir:
-```
-helm dependency update
-```
+ helm dependency update
### Install
diff --git a/stable/hlf-couchdb/Chart.yaml b/stable/hlf-couchdb/Chart.yaml
index 2fdabe95a574..bbff40a85502 100644
--- a/stable/hlf-couchdb/Chart.yaml
+++ b/stable/hlf-couchdb/Chart.yaml
@@ -1,8 +1,8 @@
apiVersion: v1
description: CouchDB instance for Hyperledger Fabric (these charts are created by AID:Tech and are currently not directly associated with the Hyperledger project)
name: hlf-couchdb
-version: 1.0.5
-appVersion: 0.4.9
+version: 1.0.6
+appVersion: 0.4.10
keywords:
- blockchain
- hyperledger
diff --git a/stable/hlf-couchdb/templates/deployment.yaml b/stable/hlf-couchdb/templates/deployment.yaml
index e2c01961b24e..3fd0019ff1ef 100644
--- a/stable/hlf-couchdb/templates/deployment.yaml
+++ b/stable/hlf-couchdb/templates/deployment.yaml
@@ -10,6 +10,10 @@ spec:
matchLabels:
app: {{ include "hlf-couchdb.name" . }}
release: {{ .Release.Name }}
+ # Ensure we allow our pod to be unavailable, so we can upgrade
+ strategy:
+ rollingUpdate:
+ maxUnavailable: 1
template:
metadata:
labels:
diff --git a/stable/hlf-couchdb/values.yaml b/stable/hlf-couchdb/values.yaml
index 992db7de18d9..0df6410ca0b5 100644
--- a/stable/hlf-couchdb/values.yaml
+++ b/stable/hlf-couchdb/values.yaml
@@ -6,7 +6,7 @@ replicaCount: 1
image:
repository: hyperledger/fabric-couchdb
- tag: 0.4.9
+ tag: 0.4.10
pullPolicy: IfNotPresent
service:
diff --git a/stable/hlf-peer/Chart.yaml b/stable/hlf-peer/Chart.yaml
index 984582a6a724..79ffa8b50c95 100644
--- a/stable/hlf-peer/Chart.yaml
+++ b/stable/hlf-peer/Chart.yaml
@@ -1,7 +1,7 @@
apiVersion: v1
description: Hyperledger Fabric Peer chart (these charts are created by AID:Tech and are currently not directly associated with the Hyperledger project)
name: hlf-peer
-version: 1.2.4
+version: 1.2.6
appVersion: 1.3.0
keywords:
- blockchain
diff --git a/stable/hlf-peer/templates/configmap--peer.yaml b/stable/hlf-peer/templates/configmap--peer.yaml
index 652ba5d5969d..172d7d859941 100644
--- a/stable/hlf-peer/templates/configmap--peer.yaml
+++ b/stable/hlf-peer/templates/configmap--peer.yaml
@@ -6,12 +6,13 @@ metadata:
{{ include "labels.standard" . | indent 4 }}
data:
CORE_PEER_ADDRESSAUTODETECT: "true"
+ CORE_PEER_ID: {{ .Release.Name }}
CORE_PEER_NETWORKID: nid1
# If we have an ingress, we set hostname to it
{{- if .Values.ingress.enabled }}
CORE_PEER_ADDRESS: {{ index .Values.ingress.hosts 0 }}:443
{{- else }}
- CORE_PEER_ADDRESS: {{ include "hlf-peer.fullname" . }}:7051
+ # Otherwise we use CORE_PEER_ADDRESSAUTODETECT to auto-detect its address
{{- end }}
CORE_PEER_LISTENADDRESS: 0.0.0.0:7051
CORE_PEER_EVENTS_ADDRESS: 0.0.0.0:7053
diff --git a/stable/hoard/Chart.yaml b/stable/hoard/Chart.yaml
index 4da72545a4ab..6ab2eabcefe2 100644
--- a/stable/hoard/Chart.yaml
+++ b/stable/hoard/Chart.yaml
@@ -1,6 +1,6 @@
name: hoard
-version: 0.6.1
-appVersion: 1.1.5
+version: 0.7.0
+appVersion: 3.0.1
description: Hoard is a stateless, deterministically encrypted, content-addressed object store
home: https://github.com/monax/hoard
icon: https://pbs.twimg.com/profile_images/781959787856687105/76s1CJER_400x400.jpg
@@ -8,6 +8,8 @@ keywords:
- s3
- aws
- gcp
+- azure
+- ipfs
- envelope encryption
- content addressable
- distributed file storage
diff --git a/stable/hoard/README.md b/stable/hoard/README.md
index c5d9cb4a411b..74d750e31210 100644
--- a/stable/hoard/README.md
+++ b/stable/hoard/README.md
@@ -1,6 +1,6 @@
# Hoard
-[Hoard](https://github.com/monax/hoard) is a stateless, deterministically encrypted, content-addressed object store. It currently supports local persistent storage, [S3](https://aws.amazon.com/s3/) and [GCS](https://cloud.google.com/storage/) backends, though [IPFS](https://ipfs.io) integration is currently under development. Files that are sent to Hoard are symmetrically encrypted, where the secret is the hash of the plaintext file, and then stored in the configured backend - this enables any party with knowledge of the hash or original file to retrieve it from the store.
+[Hoard](https://github.com/monax/hoard) is a stateless, deterministically encrypted, content-addressed object store. It currently supports local persistent storage, [S3](https://aws.amazon.com/s3/), [GCS](https://cloud.google.com/storage/), [Azure](https://azure.microsoft.com/en-gb/services/storage/) and [IPFS](https://ipfs.io) backends. Files that are sent to Hoard are symmetrically encrypted, where the secret is the hash of the plaintext file, and then stored in the configured backend - this enables any party with knowledge of the hash or original file to retrieve it from the store.
## Introduction
@@ -11,26 +11,17 @@ This chart bootstraps a hoard daemon on a [Kubernetes](http://kubernetes.io) clu
To install the chart with the release name `my-release`, run:
```bash
-$ helm install --name my-release stable/hoard
+helm install --name my-release stable/hoard
```
-The [configuration](#configuration) section below lists all possible parameters that can be configured during installation.
-
-
-### S3 Example
-
-To deploy with an S3 backend, use the following command. Please first create appropriate [AWS Credentials](https://docs.aws.amazon.com/general/latest/gr/aws-security-credentials.html) and apply them to the Kubernetes secret `s3-credentials`.
-
-```bash
-$ helm install --name my-release stable/hoard --set storage.type=s3,storage.prefix="folder",storage.region="eu-central-1",storage.bucket="my-bucket",storage.credentialsSecret="s3-credentials"
-```
+This installation defaults to persistent volume storage. The [configuration](#configuration) section below lists all possible parameters that can be configured.
## Uninstall
To uninstall/delete the `my-release` deployment:
```bash
-$ helm delete my-release
+helm delete my-release
```
## Configuration
@@ -41,13 +32,25 @@ The following table lists the configurable parameters of the Hoard chart and its
| --------- | ----------- | ------- |
| `replicaCount` | number of daemons | `1` |
| `image.repository` | docker image | `"quay.io/monax/hoard"` |
-| `image.tag` | version | `"1.1.5"` |
+| `image.tag` | version | `"3.0.1"` |
| `image.pullPolicy` | pull policy | `"IfNotPresent"` |
-| `storage.type` | backend object store (local, s3 or gcp)| `"local"` |
-| `storage.region` | object store location (non-local) | `""` |
-| `storage.bucket` | object storage container (non-local) | `""` |
-| `storage.prefix` | bucket folder (non-local) | `"hoard"` |
-| `storage.credentialsSecret` | required secret for gcs or s3 | `""` |
+| `config.listenaddress` | address to listen on | `tcp://:53431` |
+| `config.storage.storagetype` | backend object store (aws, azure, filesystem, gcp, ipfs) | `filesystem` |
+| `config.storage.addressencoding` | object address encoding | `base64` |
+| `config.storage.filesystemconfig.rootdirectory` | root directory for filesystem storage | `"/data"` |
+| `config.storage.cloudconfig.bucket` | object storage container (cloud only) | `""` |
+| `config.storage.cloudconfig.prefix` | bucket folder (cloud only) | `""` |
+| `config.storage.cloudconfig.region` | object store location (cloud only) | `""` |
+| `config.storage.ipfsconfig.remoteapi` | remote api location (ipfs only) | `""` |
+| `config.logging.loggingtype` | format for logging output | `"json"` |
+| `config.logging.channels` | logging types | `[]` |
+| `config.secrets.symmetric` | symmetric secrets (publicid, passphrase) | `[]` |
+| `config.secrets.openpgp.privateid` | id of private key to sign with | `""` |
+| `config.secrets.openpgp.file` | name of the file mounted from secret | `"/secrets/keyring"` |
+| `controller.enabled` | enable the [shared-secrets](https://github.com/monax/shared-secrets) controller | `false` |
+| `controller.keep` | keep the shared-secrets crd after chart deletion | `true` |
+| `secrets.creds` | required secret for cloud providers | `"cloud-credentials"` |
+| `secrets.keyring` | required secret for openpgp grants | `"private-keyring"` |
| `persistence.size` | size of local store | `"10Gi"` |
| `persistence.storageClass` | pvc type | `"standard"` |
| `persistence.accessMode` | pvc access | `"ReadWriteOnce"` |
@@ -73,3 +76,45 @@ Alternatively, a YAML file that specifies the values for the parameters can be p
```bash
$ helm install --name my-release -f values.yaml stable/hoard
```
+
+## Cloud Examples
+
+For each of the supported cloud back-ends, please ensure you have the appropriate credentials as identified by the corresponding environment variables.
+
+### [AWS](https://docs.aws.amazon.com/general/latest/gr/aws-security-credentials.html)
+
+```bash
+kubectl create secret generic cloud-credentials --from-literal access-key-id=${AWS_ACCESS_KEY_ID} --from-literal secret-access-key=${AWS_SECRET_ACCESS_KEY}
+helm install --name my-release stable/hoard --set config.storage.storagetype=aws,config.storage.cloudconfig.region="eu-central-1",config.storage.cloudconfig.bucket="my-bucket",config.storage.cloudconfig.prefix="folder",secrets.creds="cloud-credentials"
+```
+
+### [Azure](https://docs.microsoft.com/en-us/azure/storage/common/storage-account-manage)
+
+```bash
+kubectl create secret generic cloud-credentials --from-literal storage-account-name=${AZURE_STORAGE_ACCOUNT_NAME} --from-literal storage-account-key=${AZURE_STORAGE_ACCOUNT_KEY}
+helm install --name my-release stable/hoard --set config.storage.storagetype=azure,config.storage.cloudconfig.bucket="my-bucket",config.storage.cloudconfig.prefix="folder",secrets.creds="cloud-credentials"
+```
+
+### [GCP](https://cloud.google.com/iam/docs/creating-managing-service-account-keys)
+
+```bash
+kubectl create secret generic cloud-credentials --from-literal service-key=${GCLOUD_SERVICE_KEY}
+helm install --name my-release stable/hoard --set config.storage.storagetype=gcp,config.storage.cloudconfig.bucket="my-bucket",config.storage.cloudconfig.prefix="folder",secrets.creds="cloud-credentials"
+```
+
+## OpenPGP Grants
+
+Once configured, hoard can share access to a secret file by encrypting it with the public key of the recipient:
+
+```bash
+kubectl create secret generic private-keyring --from-file ${GOPATH}/src/github.com/monax/hoard/grant/private.key.asc
+helm install --name my-release stable/hoard --set config.secrets.openpgp.privateid="10449759736975846181",secrets.keyring=private-keyring
+```
+
+## [Shared Secrets](https://github.com/monax/shared-secrets)
+
+To enable Hoard to act as a 'secrets broker', deploy our CustomResourceDefinition and controller:
+
+```bash
+helm install --name my-release stable/hoard --set controller.enabled=true
+```
diff --git a/stable/hoard/templates/NOTES.txt b/stable/hoard/templates/NOTES.txt
index 0cc4d89373f5..79ce44909991 100644
--- a/stable/hoard/templates/NOTES.txt
+++ b/stable/hoard/templates/NOTES.txt
@@ -1 +1,6 @@
You have successfully installed Hoard!
+
+{{- if .Values.controller.enabled }}
+The shared-secrets controller is enabled, please follow the instructions here:
+ https://github.com/monax/shared-secrets
+{{- end }}
\ No newline at end of file
diff --git a/stable/hoard/templates/configmap.yaml b/stable/hoard/templates/configmap.yaml
index 7f64d9904cd8..f1680c3883ca 100644
--- a/stable/hoard/templates/configmap.yaml
+++ b/stable/hoard/templates/configmap.yaml
@@ -8,11 +8,6 @@ metadata:
app.kubernetes.io/managed-by: {{ .Release.Service }}
helm.sh/chart: {{ include "hoard.chart" . }}
data:
- config.json: |
-{{- if eq .Values.storage.type "s3" }}
- {"ListenAddress":"tcp://0.0.0.0:{{ .Values.service.port }}","Storage":{"StorageType":"s3","AddressEncoding":"base64","Region":"{{ .Values.storage.region }}","S3Bucket":"{{ .Values.storage.bucket }}","S3Prefix":"{{ .Values.storage.prefix }}","CredentialsProviderChain":[{"Provider":"remote"},{"Provider":"env"}]},"Logging":{"LoggingType":"json","Channels":["info","trace"]}}
-{{- else if eq .Values.storage.type "gcs" }}
- {"ListenAddress":"tcp://0.0.0.0:{{ .Values.service.port }}","Storage":{"StorageType":"gcs","AddressEncoding":"base64","GCSBucket":"{{ .Values.storage.bucket }}","GCSPrefix":"{{ .Values.storage.prefix }}"},"Logging":{"LoggingType":"json","Channels":["info","trace"]}}
-{{- else }}
- {"ListenAddress":"tcp://0.0.0.0:{{ .Values.service.port }}","Storage":{"StorageType":"filesystem","AddressEncoding":"base64","RootDirectory":"/data"},"Logging":{"LoggingType":"json","Channels":["info","trace"]}}
-{{- end }}
+ hoard.conf: |
+{{ toYaml .Values.config | indent 4 }}
+
diff --git a/stable/hoard/templates/controller/crd.yaml b/stable/hoard/templates/controller/crd.yaml
new file mode 100644
index 000000000000..db9c253602f9
--- /dev/null
+++ b/stable/hoard/templates/controller/crd.yaml
@@ -0,0 +1,31 @@
+{{- if .Values.controller.enabled }}
+apiVersion: apiextensions.k8s.io/v1beta1
+kind: CustomResourceDefinition
+metadata:
+ name: sharedsecrets.monax.io
+{{ if .Values.controller.keep }}
+ annotations:
+ "helm.sh/resource-policy": keep
+{{ end }}
+ labels:
+ app.kubernetes.io/name: {{ include "hoard.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ helm.sh/chart: {{ include "hoard.chart" . }}
+spec:
+ group: monax.io
+ names:
+ kind: SharedSecret
+ listKind: SharedSecretList
+ plural: sharedsecrets
+ shortNames:
+ - ss
+ singular: sharedsecret
+ scope: Namespaced
+ version: v1
+ versions:
+ - name: v1
+ served: true
+ storage: true
+{{- end }}
+
diff --git a/stable/hoard/templates/controller/deployment.yaml b/stable/hoard/templates/controller/deployment.yaml
new file mode 100644
index 000000000000..9f7fee233f92
--- /dev/null
+++ b/stable/hoard/templates/controller/deployment.yaml
@@ -0,0 +1,49 @@
+{{- if .Values.controller.enabled }}
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: {{ include "hoard.fullname" . }}-controller
+ labels:
+ app.kubernetes.io/name: {{ include "hoard.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ helm.sh/chart: {{ include "hoard.chart" . }}
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: {{ template "hoard.name" . }}-controller
+ release: {{ .Release.Name }}
+ template:
+ metadata:
+ labels:
+ app: {{ template "hoard.name" . }}-controller
+ release: {{ .Release.Name }}
+ spec:
+ serviceAccount: {{ include "hoard.fullname" . }}-controller
+ initContainers:
+ - name: hoard-wait
+ image: busybox:1.28
+ command: ['sh', '-c', 'until nslookup {{ include "hoard.fullname" . }}; do sleep 2; done;']
+ containers:
+ - name: controller
+ image: "{{ .Values.image.repository }}:controller"
+ imagePullPolicy: {{ .Values.image.pullPolicy }}
+ env:
+ - name: HOARD_URL
+ value: "{{ include "hoard.fullname" . }}:{{ .Values.service.port }}"
+ resources:
+{{ toYaml .Values.resources | indent 12 }}
+{{- with .Values.nodeSelector }}
+ nodeSelector:
+{{ toYaml . | indent 8 }}
+{{- end }}
+{{- with .Values.affinity }}
+ affinity:
+{{ toYaml . | indent 8 }}
+{{- end }}
+{{- with .Values.tolerations }}
+ tolerations:
+{{ toYaml . | indent 8 }}
+{{- end }}
+{{- end }}
\ No newline at end of file
diff --git a/stable/hoard/templates/controller/rbac.yaml b/stable/hoard/templates/controller/rbac.yaml
new file mode 100644
index 000000000000..9f8d7bb70d2d
--- /dev/null
+++ b/stable/hoard/templates/controller/rbac.yaml
@@ -0,0 +1,57 @@
+{{- if .Values.controller.enabled }}
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: {{ include "hoard.fullname" . }}-controller
+ labels:
+ app.kubernetes.io/name: {{ include "hoard.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ helm.sh/chart: {{ include "hoard.chart" . }}
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ name: {{ include "hoard.fullname" . }}-controller
+ labels:
+ app.kubernetes.io/name: {{ include "hoard.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ helm.sh/chart: {{ include "hoard.chart" . }}
+rules:
+- apiGroups:
+ - monax.io
+ resources:
+ - sharedsecrets
+ verbs:
+ - get
+ - list
+ - watch
+- apiGroups:
+ - ""
+ resources:
+ - secrets
+ verbs:
+ - create
+ - update
+ - delete
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+ name: {{ include "hoard.fullname" . }}-controller
+ labels:
+ app.kubernetes.io/name: {{ include "hoard.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ helm.sh/chart: {{ include "hoard.chart" . }}
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: {{ include "hoard.fullname" . }}-controller
+subjects:
+- apiGroup: ""
+ kind: ServiceAccount
+ name: {{ include "hoard.fullname" . }}-controller
+ namespace: {{ .Release.Namespace }}
+{{- end }}
\ No newline at end of file
diff --git a/stable/hoard/templates/deployment.yaml b/stable/hoard/templates/deployment.yaml
index 8c7f5f8c3e43..5cb5c78b169b 100644
--- a/stable/hoard/templates/deployment.yaml
+++ b/stable/hoard/templates/deployment.yaml
@@ -23,34 +23,54 @@ spec:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
+ command: ["hoard", "--config", "/conf/hoard.conf"]
+ volumeMounts:
+ - name: config-file
+ mountPath: /conf
+{{- if eq .Values.config.storage.storagetype "filesystem" }}
+ - mountPath: {{ required "A valid directory is required." .Values.config.storage.filesystemconfig.rootdirectory }}
+ name: data-dir
+{{- end }}
+{{- if ne .Values.config.secrets.openpgp.privateid "" }}
+ - mountPath: /secrets
+ subPath: {{ required "A valid file name is required." .Values.config.secrets.openpgp.file }}
+ name: key-ring
+{{- end }}
ports:
- containerPort: {{ .Values.service.port }}
name: http
env:
-{{- if eq .Values.storage.type "s3" }}
+{{- if eq .Values.config.storage.storagetype "aws" }}
- name: AWS_ACCESS_KEY_ID
valueFrom:
secretKeyRef:
- name: {{ required "A valid storage credential is required." .Values.storage.credentialsSecret }}
+ name: {{ required "A valid storage credential is required." .Values.secrets.creds }}
key: access-key-id
- name: AWS_SECRET_ACCESS_KEY
valueFrom:
secretKeyRef:
- name: {{ required "A valid storage credential is required." .Values.storage.credentialsSecret }}
+ name: {{ required "A valid storage credential is required." .Values.secrets.creds }}
key: secret-access-key
{{- end }}
-{{- if eq .Values.storage.type "gcs" }}
- - name: GCLOUD_SERVICE_KEY
+{{- if eq .Values.config.storage.storagetype "azure" }}
+ - name: AZURE_STORAGE_ACCOUNT_NAME
valueFrom:
secretKeyRef:
- name: {{ required "A valid storage credential is required." .Values.storage.credentialsSecret }}
- key: secret-access-key
+ name: {{ required "A valid storage credential is required." .Values.secrets.creds }}
+ key: storage-account-name
+ - name: AZURE_STORAGE_ACCOUNT_KEY
+ valueFrom:
+ secretKeyRef:
+ name: {{ required "A valid storage credential is required." .Values.secrets.creds }}
+ key: storage-account-key
{{- end }}
- - name: HOARD_JSON_CONFIG
+{{- if eq .Values.config.storage.storagetype "gcp" }}
+ - name: GCLOUD_SERVICE_KEY
valueFrom:
- configMapKeyRef:
- name: {{ template "hoard.fullname" . }}
- key: config.json
+ secretKeyRef:
+ name: {{ required "A valid storage credential is required." .Values.secrets.creds }}
+ key: service-key
+{{- end }}
livenessProbe:
exec:
command:
@@ -59,19 +79,22 @@ spec:
- '[ $(echo "marmottes" | hoarctl put | hoarctl get) = "marmottes" ]'
initialDelaySeconds: 5
periodSeconds: 45
-{{- if eq .Values.storage.type "local" }}
- volumeMounts:
- - mountPath: /data
- name: data-dir
-{{- end }}
resources:
{{ toYaml .Values.resources | indent 12 }}
-{{- if eq .Values.storage.type "local" }}
volumes:
+ - name: config-file
+ configMap:
+ name: {{ template "hoard.fullname" . }}
+{{- if eq .Values.config.storage.storagetype "filesystem" }}
- name: data-dir
persistentVolumeClaim:
claimName: {{ template "hoard.fullname" $ }}
{{- end }}
+{{- if ne .Values.config.secrets.openpgp.privateid "" }}
+ - name: key-ring
+ secret:
+ secretName: {{ required "A valid keyring is required." .Values.secrets.keyring }}
+{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
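The deployment template above leans on Helm's `required` function to fail the render when a credential secret name is missing. A minimal Python sketch of that behavior (the function body is an assumption about how `required` behaves, written from its documented contract):

```python
def required(message, value):
    """Sketch of Helm's `required` template function: return the value
    unchanged, but abort rendering with `message` when it is empty."""
    if value is None or value == "":
        raise ValueError(message)
    return value

# Mirrors `{{ required "A valid storage credential is required." .Values.secrets.creds }}`
secret_name = required("A valid storage credential is required.", "cloud-credentials")
```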
diff --git a/stable/hoard/templates/pvc.yaml b/stable/hoard/templates/pvc.yaml
index 5129117044c1..b6165203fd35 100644
--- a/stable/hoard/templates/pvc.yaml
+++ b/stable/hoard/templates/pvc.yaml
@@ -1,4 +1,4 @@
-{{- if eq .Values.storage.type "local" }}
+{{- if eq .Values.config.storage.storagetype "filesystem" }}
---
kind: PersistentVolumeClaim
apiVersion: v1
diff --git a/stable/hoard/values.yaml b/stable/hoard/values.yaml
index 8146e48af825..5d90dced60ac 100644
--- a/stable/hoard/values.yaml
+++ b/stable/hoard/values.yaml
@@ -2,17 +2,43 @@ replicaCount: 1
image:
repository: quay.io/monax/hoard
- tag: 1.1.5
+ tag: 3.0.1
pullPolicy: IfNotPresent
-storage:
- type: local # s3 | gcs | local
- region: ""
- bucket: ""
- prefix: hoard
- credentialsSecret: ""
+config:
+ listenaddress: tcp://:53431
+ storage:
+ # aws | azure | filesystem | gcp | ipfs
+ storagetype: filesystem
+ addressencoding: base64
+ filesystemconfig:
+ rootdirectory: "/data"
+ cloudconfig:
+ bucket: ""
+ prefix: ""
+ region: ""
+ ipfsconfig:
+ remoteapi: ""
+ logging:
+ loggingtype: json
+ channels:
+ - info
+ - trace
+ secrets:
+ symmetric: []
+ openpgp:
+ privateid: ""
+ file: "keyring"
-# only local
+controller:
+ enabled: true
+ keep: true
+
+secrets:
+ creds: "cloud-credentials"
+ keyring: "private-keyring"
+
+# only filesystem
persistence:
size: 10Gi
storageClass: standard
@@ -28,22 +54,21 @@ service:
ingress:
enabled: false
annotations: {}
- # kubernetes.io/ingress.class: nginx
- # kubernetes.io/tls-acme: "true"
path: /
hosts:
- - chart-example.local
+ - hoard.local
tls: []
- # - secretName: chart-example-tls
+ # - secretName: hoard-tls
# hosts:
- # - chart-example.local
+ # - hoard.local
resources: {}
- # cpu: 100m
- # memory: 128Mi
- # requests:
- # cpu: 100m
- # memory: 128Mi
+# limits:
+# cpu: 500m
+# memory: 1Gi
+# requests:
+# cpu: 100m
+# memory: 256Mi
nodeSelector: {}
diff --git a/stable/home-assistant/Chart.yaml b/stable/home-assistant/Chart.yaml
index 6c4305ea9331..d436e5d23ede 100644
--- a/stable/home-assistant/Chart.yaml
+++ b/stable/home-assistant/Chart.yaml
@@ -1,8 +1,8 @@
apiVersion: v1
-appVersion: 0.84.6
+appVersion: 0.92.2
description: Home Assistant
name: home-assistant
-version: 0.5.2
+version: 0.9.1
keywords:
- home-assistant
- hass
@@ -12,6 +12,9 @@ icon: https://upload.wikimedia.org/wikipedia/commons/thumb/6/6e/Home_Assistant_L
sources:
- https://github.com/home-assistant/home-assistant
- https://github.com/danielperna84/hass-configurator
+- https://github.com/cdr/code-server
maintainers:
- name: billimek
email: jeff@billimek.com
+- name: runningman84
+ email: philipp.hellmich@bertelsmann.de
diff --git a/stable/home-assistant/OWNERS b/stable/home-assistant/OWNERS
index b90909f487d8..b7fc7f203f3c 100644
--- a/stable/home-assistant/OWNERS
+++ b/stable/home-assistant/OWNERS
@@ -1,4 +1,7 @@
approvers:
- billimek
+- runningman84
reviewers:
- billimek
+- runningman84
+
diff --git a/stable/home-assistant/README.md b/stable/home-assistant/README.md
index 46b9f5f966e9..0ed2190049e1 100644
--- a/stable/home-assistant/README.md
+++ b/stable/home-assistant/README.md
@@ -36,9 +36,10 @@ The following tables lists the configurable parameters of the Home Assistant cha
| Parameter | Description | Default |
|----------------------------|-------------------------------------|---------------------------------------------------------|
| `image.repository` | Image repository | `homeassistant/home-assistant` |
-| `image.tag` | Image tag. Possible values listed [here](https://hub.docker.com/r/homeassistant/home-assistant/tags/).| `0.84.6`|
+| `image.tag` | Image tag. Possible values listed [here](https://hub.docker.com/r/homeassistant/home-assistant/tags/).| `0.92.1`|
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `image.pullSecrets` | Secrets to use when pulling the image | `[]` |
+| `strategyType` | Specifies the strategy used to replace old Pods with new ones | `Recreate` |
| `service.type` | Kubernetes service type for the home-assistant GUI | `ClusterIP` |
| `service.port` | Kubernetes port where the home-assistant GUI is exposed| `8123` |
| `service.annotations` | Service annotations for the home-assistant GUI | `{}` |
@@ -58,6 +59,10 @@ The following tables lists the configurable parameters of the Home Assistant cha
| `persistence.existingClaim`| Use an existing PVC to persist data | `nil` |
| `persistence.storageClass` | Type of persistent volume claim | `-` |
| `persistence.accessMode` | Persistence access modes | `ReadWriteMany` |
+| `git.enabled` | Use git-sync in init container | `false` |
+| `git.secret` | Git secret to use for git-sync | `git-creds` |
+| `git.syncPath` | Git sync path | `/config` |
+| `git.keyPath` | Git ssh key path | `/root/.ssh` |
| `extraEnv` | Extra ENV vars to pass to the home-assistant container | `{}` |
| `extraEnvSecrets` | Extra env vars to pass to the home-assistant container from k8s secrets - see `values.yaml` for an example | `{}` |
| `configurator.enabled` | Enable the optional [configuration UI](https://github.com/danielperna84/hass-configurator) | `false` |
@@ -80,7 +85,6 @@ The following tables lists the configurable parameters of the Home Assistant cha
| `configurator.nodeSelector` | Node labels for pod assignment for the configurator UI | `{}` |
| `configurator.schedulerName` | Use an alternate scheduler, e.g. "stork" for the configurator UI | `` |
| `configurator.podAnnotations` | Affinity settings for pod assignment for the configurator UI | `{}` |
-| `configurator.replicaCount` | Number of replicas for the configurator UI | `1` |
| `configurator.resources` | CPU/Memory resource requests/limits for the configurator UI | `{}` |
| `configurator.securityContext` | Security context to be added to hass-configurator pods for the configurator UI | `{}` |
| `configurator.service.type` | Kubernetes service type for the configurator UI | `ClusterIP` |
@@ -92,6 +96,29 @@ The following tables lists the configurable parameters of the Home Assistant cha
| `configurator.service.externalIPs` | External IPs for the configurator UI | `[]` |
| `configurator.service.loadBalancerIP` | Loadbalancer IP for the configurator UI | `` |
| `configurator.service.loadBalancerSourceRanges` | Loadbalancer client IP restriction range for the configurator UI | `[]` |
+| `vscode.enabled` | Enable the optional [VS Code Server Sidecar](https://github.com/cdr/code-server) | `false` |
+| `vscode.image.repository` | Image repository | `codercom/code-server` |
+| `vscode.image.tag` | Image tag | `1.939`|
+| `vscode.image.pullPolicy` | Image pull policy | `IfNotPresent` |
+| `vscode.hassConfig` | Base path of the home assistant configuration files | `/config` |
+| `vscode.vscodePath` | Base path of the VS Code configuration files | `/config/.vscode` |
+| `vscode.password` | If this is set, will require a password to access the VS Code Server UI | `` |
+| `vscode.extraEnv` | Extra ENV vars to pass to the configuration UI | `{}` |
+| `vscode.ingress.enabled` | Enables Ingress for the VS Code UI | `false` |
+| `vscode.ingress.annotations` | Ingress annotations for the VS Code UI | `{}` |
+| `vscode.ingress.hosts` | Ingress accepted hostnames for the VS Code UI | `home-assistant.local` |
+| `vscode.ingress.tls` | Ingress TLS configuration for the VS Code UI | `[]` |
+| `vscode.resources` | CPU/Memory resource requests/limits for the VS Code UI | `{}` |
+| `vscode.securityContext` | Security context to be added to hass-vscode pods for the VS Code UI | `{}` |
+| `vscode.service.type` | Kubernetes service type for the VS Code UI | `ClusterIP` |
+| `vscode.service.port` | Kubernetes port where the vscode UI is exposed| `80` |
+| `vscode.service.nodePort` | nodePort to listen on for the VS Code UI | `` |
+| `vscode.service.annotations` | Service annotations for the VS Code UI | `{}` |
+| `vscode.service.labels` | Service labels to use for the VS Code UI | `{}` |
+| `vscode.service.clusterIP` | Cluster IP for the VS Code UI | `` |
+| `vscode.service.externalIPs` | External IPs for the VS Code UI | `[]` |
+| `vscode.service.loadBalancerIP` | Loadbalancer IP for the VS Code UI | `` |
+| `vscode.service.loadBalancerSourceRanges` | Loadbalancer client IP restriction range for the VS Code UI | `[]` |
| `resources` | CPU/Memory resource requests/limits or the home-assistant GUI | `{}` |
| `nodeSelector` | Node labels for pod assignment or the home-assistant GUI | `{}` |
| `tolerations` | Toleration labels for pod assignment or the home-assistant GUI | `[]` |
@@ -113,8 +140,21 @@ helm install --name my-release -f values.yaml stable/home-assistant
Read through the [values.yaml](values.yaml) file. It has several commented out suggested values.
-## Regarding configuring home assistant
+## Configuring home assistant
-Much of the home assistant configuration occurs inside the various files persisted to the `/config` directory. This will require external access to the persistent storage location where the home assistant configuration data is stored.
+Much of the home assistant configuration occurs inside the various files persisted to the `/config` directory. This will require external access to the persistent storage location where the home assistant configuration data is stored. Because this may be a limitation, there are two options built into this chart:
-Because this may be a limitation, the [Home Assistant Configurator UI](https://github.com/danielperna84/hass-configurator) is added to the chart as an option to provide a webUI for editing the various configuration files.
+### Configurator UI
+
+[Home Assistant Configurator UI](https://github.com/danielperna84/hass-configurator) is added as an optional sidecar container with access to the Home Assistant configuration for easy in-browser editing.
+
+### VS Code Server
+
+[VS Code Server](https://github.com/cdr/code-server) is added as an optional sidecar container with access to the Home Assistant configuration for easy in-browser editing. If you use it, you can manually install the [Home Assistant Config Helper Extension](https://github.com/keesschollaart81/vscode-home-assistant) for deeper Home Assistant integration within VS Code while editing the configuration files.
+
+## Git sync secret
+
+In order to sync the Home Assistant configuration from a git repo, you have to store an SSH key as a Kubernetes secret
+```console
+kubectl create secret generic git-creds --from-file=id_rsa=git/k8s_id_rsa --from-file=known_hosts=git/known_hosts --from-file=id_rsa.pub=git/k8s_id_rsa.pub
+```
diff --git a/stable/home-assistant/templates/configurator-ingress.yaml b/stable/home-assistant/templates/configurator-ingress.yaml
index 6f67bce15c07..eda9e8614d9c 100644
--- a/stable/home-assistant/templates/configurator-ingress.yaml
+++ b/stable/home-assistant/templates/configurator-ingress.yaml
@@ -7,10 +7,10 @@ kind: Ingress
metadata:
name: {{ $fullName }}-configurator
labels:
- app: {{ template "home-assistant.name" . }}
- chart: {{ template "home-assistant.chart" . }}
- release: {{ .Release.Name }}
- heritage: {{ .Release.Service }}
+ app.kubernetes.io/name: {{ include "home-assistant.name" . }}
+ helm.sh/chart: {{ include "home-assistant.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- with .Values.configurator.ingress.annotations }}
annotations:
{{ toYaml . | indent 4 }}
diff --git a/stable/home-assistant/templates/deployment.yaml b/stable/home-assistant/templates/deployment.yaml
index def1c0626ece..4be0458ad79e 100644
--- a/stable/home-assistant/templates/deployment.yaml
+++ b/stable/home-assistant/templates/deployment.yaml
@@ -1,23 +1,25 @@
-apiVersion: apps/v1beta2
+apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "home-assistant.fullname" . }}
labels:
- app: {{ template "home-assistant.name" . }}
- chart: {{ template "home-assistant.chart" . }}
- release: {{ .Release.Name }}
- heritage: {{ .Release.Service }}
+ app.kubernetes.io/name: {{ include "home-assistant.name" . }}
+ helm.sh/chart: {{ include "home-assistant.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
- replicas: {{ .Values.replicaCount }}
+ replicas: 1
+ strategy:
+ type: {{ .Values.strategyType }}
selector:
matchLabels:
- app: {{ template "home-assistant.name" . }}
- release: {{ .Release.Name }}
+ app.kubernetes.io/name: {{ include "home-assistant.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
template:
metadata:
labels:
- app: {{ template "home-assistant.name" . }}
- release: {{ .Release.Name }}
+ app.kubernetes.io/name: {{ include "home-assistant.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
spec:
{{- with .Values.image.pullSecrets }}
imagePullSecrets:
@@ -26,7 +28,29 @@ spec:
{{- end }}
{{- end }}
{{- if .Values.hostNetwork }}
- hostNetwork: {{ .Values.hostNetwork }}
+ hostNetwork: {{ .Values.hostNetwork }}
+ dnsPolicy: ClusterFirstWithHostNet
+ {{- end }}
+ initContainers:
+ {{- if .Values.git.enabled }}
+ - name: git-sync
+ image: "{{ .Values.git.image.repository }}:{{ .Values.git.image.tag }}"
+ imagePullPolicy: {{ .Values.git.image.pullPolicy }}
+ command: ['sh', '-c', '[ "$(ls {{ .Values.git.syncPath }})" ] || git clone {{ .Values.git.repo }} {{ .Values.git.syncPath }}']
+ volumeMounts:
+ - mountPath: /config
+ name: config
+ - mountPath: {{ .Values.git.keyPath }}
+ name: git-secret
+ {{- if .Values.usePodSecurityContext }}
+ securityContext:
+ runAsUser: {{ default 0 .Values.runAsUser }}
+ {{- if and (.Values.runAsUser) (.Values.fsGroup) }}
+ {{- if not (eq .Values.runAsUser 0.0) }}
+ fsGroup: {{ .Values.fsGroup }}
+ {{- end }}
+ {{- end }}
+ {{- end }}
{{- end }}
containers:
- name: {{ .Chart.Name }}
@@ -65,6 +89,19 @@ spec:
volumeMounts:
- mountPath: /config
name: config
+ {{- if .Values.git.enabled }}
+ - mountPath: {{ .Values.git.keyPath }}
+ name: git-secret
+ {{- end }}
+ {{- if .Values.usePodSecurityContext }}
+ securityContext:
+ runAsUser: {{ default 0 .Values.runAsUser }}
+ {{- if and (.Values.runAsUser) (.Values.fsGroup) }}
+ {{- if not (eq .Values.runAsUser 0.0) }}
+ fsGroup: {{ .Values.fsGroup }}
+ {{- end }}
+ {{- end }}
+ {{- end }}
resources:
{{ toYaml .Values.resources | indent 12 }}
{{- if .Values.configurator.enabled }}
@@ -72,17 +109,17 @@ spec:
image: "{{ .Values.configurator.image.repository }}:{{ .Values.configurator.image.tag }}"
imagePullPolicy: {{ .Values.configurator.image.pullPolicy }}
ports:
- - name: http
+ - name: configurator
containerPort: {{ .Values.configurator.service.port }}
protocol: TCP
livenessProbe:
tcpSocket:
- port: http
- initialDelaySeconds: 30
+ port: configurator
+ initialDelaySeconds: 30
readinessProbe:
tcpSocket:
- port: http
- initialDelaySeconds: 15
+ port: configurator
+ initialDelaySeconds: 15
env:
{{- if .Values.configurator.hassApiPassword }}
- name: HC_HASS_API_PASSWORD
@@ -108,7 +145,7 @@ spec:
value: "{{ .Values.configurator.hassApiUrl }}"
{{- else }}
- name: HC_HASS_API
- value: "http://{{ template "home-assistant.fullname" . }}:{{ .Values.service.port }}/api/"
+ value: "http://127.0.0.1:8123/api/"
{{- end }}
{{- if .Values.configurator.basepath }}
- name: HC_BASEPATH
@@ -125,9 +162,76 @@ spec:
volumeMounts:
- mountPath: /config
name: config
+ {{- if .Values.git.enabled }}
+ - mountPath: {{ .Values.git.keyPath }}
+ name: git-secret
+ {{- end }}
+ {{- if .Values.usePodSecurityContext }}
+ securityContext:
+ runAsUser: {{ default 0 .Values.runAsUser }}
+ {{- if and (.Values.runAsUser) (.Values.fsGroup) }}
+ {{- if not (eq .Values.runAsUser 0.0) }}
+ fsGroup: {{ .Values.fsGroup }}
+ {{- end }}
+ {{- end }}
+ {{- end }}
resources:
{{ toYaml .Values.configurator.resources | indent 12 }}
{{- end }}
+ {{- if .Values.vscode.enabled }}
+ - name: vscode
+ image: "{{ .Values.vscode.image.repository }}:{{ .Values.vscode.image.tag }}"
+ imagePullPolicy: {{ .Values.vscode.image.pullPolicy }}
+ workingDir: {{ .Values.vscode.hassConfig }}
+ args:
+ - --allow-http
+ - --port={{ .Values.vscode.service.port }}
+ {{- if not (.Values.vscode.password) }}
+ - --no-auth
+ {{- end }}
+ {{- if .Values.vscode.vscodePath }}
+ - --extensions-dir={{ .Values.vscode.vscodePath }}
+ - --user-data-dir={{ .Values.vscode.vscodePath }}
+ {{- end }}
+ ports:
+ - name: vscode
+ containerPort: {{ .Values.vscode.service.port }}
+ protocol: TCP
+ livenessProbe:
+ tcpSocket:
+ port: vscode
+ initialDelaySeconds: 30
+ readinessProbe:
+ tcpSocket:
+ port: vscode
+ initialDelaySeconds: 15
+ env:
+ {{- if .Values.vscode.password }}
+ - name: PASSWORD
+ valueFrom:
+ secretKeyRef:
+ name: {{ template "home-assistant.fullname" . }}-vscode
+ key: password
+ {{- end }}
+ {{- range $key, $value := .Values.vscode.extraEnv }}
+ - name: {{ $key }}
+ value: {{ $value }}
+ {{- end }}
+ volumeMounts:
+ - mountPath: /config
+ name: config
+ {{- if .Values.usePodSecurityContext }}
+ securityContext:
+ runAsUser: {{ default 0 .Values.runAsUser }}
+ {{- if and (.Values.runAsUser) (.Values.fsGroup) }}
+ {{- if not (eq .Values.runAsUser 0.0) }}
+ fsGroup: {{ .Values.fsGroup }}
+ {{- end }}
+ {{- end }}
+ {{- end }}
+ resources:
+{{ toYaml .Values.vscode.resources | indent 12 }}
+ {{- end }}
volumes:
- name: config
{{- if .Values.persistence.enabled }}
@@ -136,6 +240,12 @@ spec:
{{- else }}
emptyDir: {}
{{ end }}
+ {{- if .Values.git.enabled }}
+ - name: git-secret
+ secret:
+ defaultMode: 256
+ secretName: {{ .Values.git.secret }}
+ {{ end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
diff --git a/stable/home-assistant/templates/ingress.yaml b/stable/home-assistant/templates/ingress.yaml
index 936d8a6386ff..d2125f0b7847 100644
--- a/stable/home-assistant/templates/ingress.yaml
+++ b/stable/home-assistant/templates/ingress.yaml
@@ -7,10 +7,10 @@ kind: Ingress
metadata:
name: {{ $fullName }}
labels:
- app: {{ template "home-assistant.name" . }}
- chart: {{ template "home-assistant.chart" . }}
- release: {{ .Release.Name }}
- heritage: {{ .Release.Service }}
+ app.kubernetes.io/name: {{ include "home-assistant.name" . }}
+ helm.sh/chart: {{ include "home-assistant.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- with .Values.ingress.annotations }}
annotations:
{{ toYaml . | indent 4 }}
diff --git a/stable/home-assistant/templates/pvc.yaml b/stable/home-assistant/templates/pvc.yaml
index b5d89da13545..13616b7cf583 100644
--- a/stable/home-assistant/templates/pvc.yaml
+++ b/stable/home-assistant/templates/pvc.yaml
@@ -5,10 +5,10 @@ apiVersion: v1
metadata:
name: {{ template "home-assistant.fullname" . }}
labels:
- app: {{ template "home-assistant.fullname" . }}
- chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
- release: "{{ .Release.Name }}"
- heritage: "{{ .Release.Service }}"
+ app.kubernetes.io/name: {{ include "home-assistant.name" . }}
+ helm.sh/chart: {{ include "home-assistant.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
accessModes:
- {{ .Values.persistence.accessMode | quote }}
diff --git a/stable/home-assistant/templates/secret.yaml b/stable/home-assistant/templates/secret.yaml
index a48f055e805b..296a37d8306d 100644
--- a/stable/home-assistant/templates/secret.yaml
+++ b/stable/home-assistant/templates/secret.yaml
@@ -4,10 +4,10 @@ kind: Secret
metadata:
name: {{ template "home-assistant.fullname" . }}-configurator
labels:
- app: {{ template "home-assistant.name" . }}
- chart: {{ template "home-assistant.chart" . }}
- release: {{ .Release.Name }}
- heritage: {{ .Release.Service }}
+ app.kubernetes.io/name: {{ include "home-assistant.name" . }}
+ helm.sh/chart: {{ include "home-assistant.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
type: Opaque
data:
{{- if .Values.configurator.hassApiPassword }}
@@ -19,4 +19,21 @@ data:
{{- if .Values.configurator.password }}
password: {{ .Values.configurator.password | b64enc | quote }}
{{- end }}
+{{- end }}
+---
+{{- if .Values.vscode.enabled }}
+apiVersion: v1
+kind: Secret
+metadata:
+ name: {{ template "home-assistant.fullname" . }}-vscode
+ labels:
+ app.kubernetes.io/name: {{ include "home-assistant.name" . }}
+ helm.sh/chart: {{ include "home-assistant.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+type: Opaque
+data:
+ {{- if .Values.vscode.password }}
+ password: {{ .Values.vscode.password | b64enc | quote }}
+ {{- end }}
{{- end }}
\ No newline at end of file
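The Secret template above stores the optional VS Code password with Sprig's `b64enc`, since Kubernetes Secret `data` fields must be base64-encoded. A sketch of the equivalent encoding in Python (the example password is made up, not from the chart):

```python
import base64

def b64enc(value: str) -> str:
    """Equivalent of Sprig's `b64enc`: UTF-8 encode, then base64."""
    return base64.b64encode(value.encode("utf-8")).decode("ascii")

# Roughly what `password: {{ .Values.vscode.password | b64enc | quote }}` renders
encoded = b64enc("hunter2")  # -> aHVudGVyMg==
```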
diff --git a/stable/home-assistant/templates/service.yaml b/stable/home-assistant/templates/service.yaml
index 9d372a57a907..8e351b83287e 100644
--- a/stable/home-assistant/templates/service.yaml
+++ b/stable/home-assistant/templates/service.yaml
@@ -3,10 +3,10 @@ kind: Service
metadata:
name: {{ template "home-assistant.fullname" . }}
labels:
- app: {{ template "home-assistant.name" . }}
- chart: {{ template "home-assistant.chart" . }}
- release: {{ .Release.Name }}
- heritage: {{ .Release.Service }}
+ app.kubernetes.io/name: {{ include "home-assistant.name" . }}
+ helm.sh/chart: {{ include "home-assistant.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- if .Values.service.labels }}
{{ toYaml .Values.service.labels | indent 4 }}
{{- end }}
@@ -45,14 +45,23 @@ spec:
nodePort: {{.Values.service.nodePort}}
{{ end }}
{{- if .Values.configurator.enabled }}
- - name: http
+ - name: configurator
port: {{ .Values.configurator.service.port }}
protocol: TCP
targetPort: 3218
{{ if (and (eq .Values.configurator.service.type "NodePort") (not (empty .Values.configurator.service.nodePort))) }}
nodePort: {{.Values.configurator.service.nodePort}}
{{ end }}
+{{- end }}
+{{- if .Values.vscode.enabled }}
+ - name: vscode
+ port: {{ .Values.vscode.service.port }}
+ protocol: TCP
+ targetPort: vscode
+{{ if (and (eq .Values.vscode.service.type "NodePort") (not (empty .Values.vscode.service.nodePort))) }}
+ nodePort: {{.Values.vscode.service.nodePort}}
+{{ end }}
{{- end }}
selector:
- app: {{ template "home-assistant.name" . }}
- release: {{ .Release.Name }}
+ app.kubernetes.io/name: {{ include "home-assistant.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
diff --git a/stable/home-assistant/templates/servicemonitor.yaml b/stable/home-assistant/templates/servicemonitor.yaml
new file mode 100644
index 000000000000..054a6aaa501d
--- /dev/null
+++ b/stable/home-assistant/templates/servicemonitor.yaml
@@ -0,0 +1,28 @@
+{{- if and .Values.monitoring.enabled .Values.monitoring.serviceMonitor.enabled }}
+apiVersion: monitoring.coreos.com/v1
+kind: ServiceMonitor
+metadata:
+{{- if .Values.monitoring.serviceMonitor.labels }}
+ labels:
+{{ toYaml .Values.monitoring.serviceMonitor.labels | indent 4}}
+{{- end }}
+ name: {{ template "home-assistant.fullname" . }}-prometheus-exporter
+{{- if .Values.monitoring.serviceMonitor.namespace }}
+ namespace: {{ .Values.monitoring.serviceMonitor.namespace }}
+{{- end }}
+spec:
+ endpoints:
+ - targetPort: api
+ path: /api/prometheus
+{{- if .Values.monitoring.serviceMonitor.interval }}
+ interval: {{ .Values.monitoring.serviceMonitor.interval }}
+{{- end }}
+ jobLabel: {{ template "home-assistant.fullname" . }}-prometheus-exporter
+ namespaceSelector:
+ matchNames:
+ - {{ .Release.Namespace }}
+ selector:
+ matchLabels:
+ app.kubernetes.io/name: {{ include "home-assistant.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+{{- end }}
\ No newline at end of file
diff --git a/stable/home-assistant/templates/vscode-ingress.yaml b/stable/home-assistant/templates/vscode-ingress.yaml
new file mode 100644
index 000000000000..8543cda193c0
--- /dev/null
+++ b/stable/home-assistant/templates/vscode-ingress.yaml
@@ -0,0 +1,39 @@
+{{- if and .Values.vscode.enabled .Values.vscode.ingress.enabled }}
+{{- $fullName := include "home-assistant.fullname" . -}}
+{{- $servicePort := .Values.vscode.service.port -}}
+{{- $ingressPath := .Values.vscode.ingress.path -}}
+apiVersion: extensions/v1beta1
+kind: Ingress
+metadata:
+ name: {{ $fullName }}-vscode
+ labels:
+ app.kubernetes.io/name: {{ include "home-assistant.name" . }}
+ helm.sh/chart: {{ include "home-assistant.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+{{- with .Values.vscode.ingress.annotations }}
+ annotations:
+{{ toYaml . | indent 4 }}
+{{- end }}
+spec:
+{{- if .Values.vscode.ingress.tls }}
+ tls:
+ {{- range .Values.vscode.ingress.tls }}
+ - hosts:
+ {{- range .hosts }}
+ - {{ . }}
+ {{- end }}
+ secretName: {{ .secretName }}
+ {{- end }}
+{{- end }}
+ rules:
+ {{- range .Values.vscode.ingress.hosts }}
+ - host: {{ . }}
+ http:
+ paths:
+ - path: {{ $ingressPath }}
+ backend:
+ serviceName: {{ $fullName }}
+ servicePort: {{ $servicePort }}
+ {{- end }}
+{{- end }}
diff --git a/stable/home-assistant/values.yaml b/stable/home-assistant/values.yaml
index 316f1a723561..2df7ca48e9e8 100644
--- a/stable/home-assistant/values.yaml
+++ b/stable/home-assistant/values.yaml
@@ -2,14 +2,15 @@
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
-replicaCount: 1
-
image:
repository: homeassistant/home-assistant
- tag: 0.84.6
+ tag: 0.92.1
pullPolicy: IfNotPresent
pullSecrets: []
+# upgrade strategy type (e.g. Recreate or RollingUpdate)
+strategyType: Recreate
+
service:
type: ClusterIP
port: 8123
@@ -71,6 +72,29 @@ extraEnvSecrets:
# secret: mqtt
# key: password
+# Enable pod security context (must be `true` if runAsUser or fsGroup are set)
+usePodSecurityContext: true
+# Set runAsUser to 1000 to let home-assistant run as the non-root user 'hass', which exists in the 'runningman84/alpine-homeassistant' docker image.
+# When setting runAsUser to a different value than 0 also set fsGroup to the same value:
+# runAsUser:
+# fsGroup:
+
+git:
+ enabled: false
+
+ ## we just use the hass-configurator container image
+ ## you can use any image which has git and openssh installed
+ ##
+ image:
+ repository: causticlab/hass-configurator-docker
+ tag: x86_64-0.3.1
+ pullPolicy: IfNotPresent
+
+ # repo:
+ secret: git-creds
+ syncPath: /config
+ keyPath: /root/.ssh
+
configurator:
enabled: false
@@ -127,6 +151,74 @@ configurator:
loadBalancerSourceRanges: []
# nodePort: 30000
+## Add support for Prometheus
+# the prometheus integration has to be enabled in configuration.yaml
+# https://www.home-assistant.io/components/prometheus/
+monitoring:
+ enabled: false
+ serviceMonitor:
+ # When set true and if Prometheus Operator is installed then use a ServiceMonitor to configure scraping
+ enabled: true
+ # Set the namespace the ServiceMonitor should be deployed
+ # namespace: monitoring
+ # Set how frequently Prometheus should scrape
+ # interval: 30s
+ # Set path to the exporter telemetry path
+ # telemetryPath: /metrics
+ # Set labels for the ServiceMonitor, use this to define your scrape label for Prometheus Operator
+ # labels:
+
+vscode:
+ enabled: false
+
+ ## code-server container image
+ ##
+ image:
+ repository: codercom/code-server
+ tag: 1.939
+ pullPolicy: IfNotPresent
+
+ ## VSCode password
+ # password:
+
+ ## path where the home assistant configuration is stored
+ hassConfig: /config
+
+ ## path where the VS Code data should reside
+ vscodePath: /config/.vscode
+
+ ## Additional hass-vscode container environment variable
+ ## For instance to add a http_proxy
+ ##
+ extraEnv: {}
+
+ ingress:
+ enabled: false
+ annotations: {}
+ # kubernetes.io/ingress.class: nginx
+ # kubernetes.io/tls-acme: "true"
+ path: /
+ hosts:
+ - home-assistant.local
+ tls: []
+ # - secretName: home-assistant-tls
+ # hosts:
+ # - home-assistant.local
+
+ service:
+ type: ClusterIP
+ port: 80
+ annotations: {}
+ labels: {}
+ clusterIP: ""
+ ## List of IP addresses at which the hass-vscode service is available
+ ## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips
+ ##
+ externalIPs: []
+ loadBalancerIP: ""
+ loadBalancerSourceRanges: []
+ # nodePort: 30000
+
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
diff --git a/stable/influxdb/Chart.yaml b/stable/influxdb/Chart.yaml
index 3d36fe671883..1e21d135d94d 100755
--- a/stable/influxdb/Chart.yaml
+++ b/stable/influxdb/Chart.yaml
@@ -1,6 +1,7 @@
+apiVersion: v1
name: influxdb
-version: 1.1.1
-appVersion: 1.7.2
+version: 1.1.6
+appVersion: 1.7.3
description: Scalable datastore for metrics, events, and real-time analytics.
keywords:
- influxdb
diff --git a/stable/influxdb/templates/_helpers.tpl b/stable/influxdb/templates/_helpers.tpl
index 98504c060f7d..1536fd0e0299 100644
--- a/stable/influxdb/templates/_helpers.tpl
+++ b/stable/influxdb/templates/_helpers.tpl
@@ -9,12 +9,24 @@ Expand the name of the chart.
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
+If release name contains chart name it will be used as a full name.
*/}}
{{- define "influxdb.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
+{{- if contains $name .Release.Name -}}
+{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
+{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
+{{- end -}}
+
+{{/*
+Create chart name and version as used by the chart label.
+*/}}
+{{- define "influxdb.chart" -}}
+{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
diff --git a/stable/influxdb/templates/config.yaml b/stable/influxdb/templates/config.yaml
index 78ddef8df9dc..e997065e8545 100644
--- a/stable/influxdb/templates/config.yaml
+++ b/stable/influxdb/templates/config.yaml
@@ -154,4 +154,7 @@ data:
enabled = {{ .Values.config.continuous_queries.enabled }}
run-interval = "{{ .Values.config.continuous_queries.run_interval }}"
-
+ [logging]
+ format = "{{ .Values.config.logging.format }}"
+ level = "{{ .Values.config.logging.level }}"
+ suppress-logo = {{ .Values.config.logging.supress_logo }}
diff --git a/stable/influxdb/templates/deployment.yaml b/stable/influxdb/templates/deployment.yaml
index 50f62306e7c8..2d4ea7e256f4 100644
--- a/stable/influxdb/templates/deployment.yaml
+++ b/stable/influxdb/templates/deployment.yaml
@@ -65,6 +65,10 @@ spec:
mountPath: {{ .Values.config.storage_directory }}
- name: config
mountPath: /etc/influxdb
+ {{- if .Values.initScripts.enabled }}
+ - name: init
+ mountPath: /docker-entrypoint-initdb.d
+ {{- end }}
volumes:
- name: data
{{- if .Values.persistence.enabled }}
@@ -81,6 +85,11 @@ spec:
- name: config
configMap:
name: {{ template "influxdb.fullname" . }}
+ {{- if .Values.initScripts.enabled }}
+ - name: init
+ configMap:
+ name: {{ template "influxdb.fullname" . }}-init
+ {{- end }}
{{- if .Values.schedulerName }}
schedulerName: "{{ .Values.schedulerName }}"
{{- end }}
diff --git a/stable/influxdb/templates/init-config.yaml b/stable/influxdb/templates/init-config.yaml
new file mode 100644
index 000000000000..8f611fb11d1c
--- /dev/null
+++ b/stable/influxdb/templates/init-config.yaml
@@ -0,0 +1,13 @@
+{{- if .Values.initScripts.enabled -}}
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: {{ template "influxdb.fullname" . }}-init
+ labels:
+ app: {{ template "influxdb.fullname" . }}
+ chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ release: "{{ .Release.Name }}"
+ heritage: "{{ .Release.Service }}"
+data:
+{{ toYaml .Values.initScripts.scripts | indent 2 }}
+{{- end -}}
diff --git a/stable/influxdb/templates/service.yaml b/stable/influxdb/templates/service.yaml
index ce17733cc842..b6dfb66161e4 100644
--- a/stable/influxdb/templates/service.yaml
+++ b/stable/influxdb/templates/service.yaml
@@ -23,9 +23,9 @@ spec:
{{- if .Values.service.loadBalancerIP }}
loadBalancerIP: {{ .Values.service.loadBalancerIP }}
{{- end }}
- {{- if .Values.service.loadBalancerSourceRanges }}
+ {{- with .Values.service.loadBalancerSourceRanges }}
loadBalancerSourceRanges:
- {{ toYaml .Values.service.loadBalancerSourceRanges | indent 4 }}
+ {{- toYaml . | nindent 4 }}
{{- end }}
ports:
{{- if .Values.config.http.enabled }}
diff --git a/stable/influxdb/values.yaml b/stable/influxdb/values.yaml
index 8f87f16e1b9b..245191fb12d8 100644
--- a/stable/influxdb/values.yaml
+++ b/stable/influxdb/values.yaml
@@ -2,7 +2,7 @@
## ref: https://hub.docker.com/r/library/influxdb/tags/
image:
repo: "influxdb"
- tag: "1.7.2-alpine"
+ tag: "1.7.3-alpine"
pullPolicy: IfNotPresent
## Specify a service type
@@ -247,3 +247,18 @@ config:
log_enabled: true
enabled: true
run_interval: 1s
+ logging:
+ format: auto
+ level: info
+ supress_logo: false
+
+# Allow executing custom init scripts
+#
+# If the container finds any files with the extensions .sh or .iql inside of the
+# /docker-entrypoint-initdb.d folder, it will execute them. The order they are
+# executed in is determined by the shell. This is usually alphabetical order.
+initScripts:
+ enabled: false
+ scripts:
+ init.iql: |+
+ CREATE DATABASE "telegraf" WITH DURATION 30d REPLICATION 1 NAME "rp_30d"
diff --git a/stable/instana-agent/.helmignore b/stable/instana-agent/.helmignore
new file mode 100644
index 000000000000..b4fa6cb0de2c
--- /dev/null
+++ b/stable/instana-agent/.helmignore
@@ -0,0 +1,23 @@
+# Patterns to ignore when building packages.
+# This supports shell glob matching, relative path matching, and
+# negation (prefixed with !). Only one pattern per line.
+.DS_Store
+# Common VCS dirs
+.git/
+.gitignore
+.bzr/
+.bzrignore
+.hg/
+.hgignore
+.svn/
+# Common backup files
+*.swp
+*.bak
+*.tmp
+*~
+# Various IDEs
+.project
+.idea/
+*.tmproj
+# OWNERS file for helm
+OWNERS
diff --git a/stable/instana-agent/Chart.yaml b/stable/instana-agent/Chart.yaml
new file mode 100644
index 000000000000..6ee1a7b4cefc
--- /dev/null
+++ b/stable/instana-agent/Chart.yaml
@@ -0,0 +1,22 @@
+apiVersion: v1
+name: instana-agent
+version: 1.0.8
+appVersion: 1.0
+description: Instana Agent for Kubernetes
+home: https://www.instana.com/
+icon: http://instana-management-assets.s3-website-eu-west-1.amazonaws.com/stan_icon_front_black_big.png
+sources:
+ - https://github.com/instana/instana-agent-docker
+maintainers:
+ - name: jbrisbin
+ email: jon.brisbin@instana.com
+ - name: wiggzz
+ email: william.james@instana.com
+ - name: JeroenSoeters
+ email: jeroen.soeters@instana.com
+ - name: fstab
+ email: fabian.staeber@instana.com
+ - name: mdonkers
+ email: miel.donkers@instana.com
+ - name: dlbock
+ email: dahlia.bock@instana.com
diff --git a/stable/instana-agent/OWNERS b/stable/instana-agent/OWNERS
new file mode 100644
index 000000000000..f9301a1d1c43
--- /dev/null
+++ b/stable/instana-agent/OWNERS
@@ -0,0 +1,14 @@
+approvers:
+- jbrisbin
+- wiggzz
+- JeroenSoeters
+- fstab
+- mdonkers
+- dlbock
+reviewers:
+- jbrisbin
+- wiggzz
+- JeroenSoeters
+- fstab
+- mdonkers
+- dlbock
diff --git a/stable/instana-agent/README.md b/stable/instana-agent/README.md
new file mode 100644
index 000000000000..2c3d886014cd
--- /dev/null
+++ b/stable/instana-agent/README.md
@@ -0,0 +1,122 @@
+# Instana
+
+[Instana](https://www.instana.com/) is a Dynamic APM for Microservice Applications
+
+## Introduction
+
+This chart adds the Instana Agent to all schedulable nodes (by default, not masters) in your cluster via a `DaemonSet`.
+
+## Prerequisites
+
+Kubernetes 1.8.x - 1.13.x
+
+A working `helm` and `tiller` setup.
+
+_Note:_ Tiller may need a service account and role binding if RBAC is enabled in your cluster.
+
+## Installing the Chart
+
+To configure the installation, you can either specify options on the command line using the **--set** switch or edit **values.yaml**. Either way, ensure that you set values for:
+
+* agent.key
+* zone.name
+
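+The chart stores `agent.key` base64-encoded in a Kubernetes `Secret` rendered from [templates/agentsecret.yaml](templates/agentsecret.yaml). As a sketch (not verbatim chart output, labels omitted), the Secret rendered for a hypothetical key `my-agent-key` and release name `instana-agent` would look roughly like:
+
+```yaml
+# Sketch only: the key value is base64("my-agent-key"), encoded by Helm's b64enc
+apiVersion: v1
+kind: Secret
+metadata:
+  name: instana-agent-agent-secret
+type: Opaque
+data:
+  key: bXktYWdlbnQta2V5
+```
+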
+If you're in the EU, you'll probably also want to set the regional equivalent values for:
+
+* agent.endpointHost
+* agent.endpointPort
+
+Optionally, if your infrastructure uses a proxy, you should ensure that you set values for:
+
+* agent.pod.proxyHost
+* agent.pod.proxyPort
+* agent.pod.proxyProtocol
+* agent.pod.proxyUser
+* agent.pod.proxyPassword
+* agent.pod.proxyUseDNS
+
+Optionally, if your infrastructure has multiple networks defined, you might need to allow the agent to listen on all addresses (typically by setting the value to '*'):
+
+* agent.listenAddress
+
+If your agent requires a download key, set a value for:
+
+* agent.downloadKey
+
+The agent can run in APM, INFRASTRUCTURE or AWS mode. The default is APM; to override it, set:
+
+* agent.mode
+
+_Note:_ Check the values for the endpoint entries in the [agent backend configuration](https://docs.instana.io/quick_start/agent_configuration/#backend).
+
+To install the chart with the release name `instana-agent`, setting the values on the command line, run:
+
+```bash
+$ helm install --name instana-agent --namespace instana-agent \
+--set agent.key=INSTANA_AGENT_KEY \
+--set agent.endpointHost=HOST \
+--set agent.endpointPort=PORT \
+--set zone.name=CLUSTER_NAME \
+--set agent.downloadKey=INSTANA_DOWNLOAD_KEY \
+--set agent.proxyHost=INSTANA_AGENT_PROXY_HOST \
+--set agent.proxyPort=INSTANA_AGENT_PROXY_PORT \
+--set agent.proxyProtocol=INSTANA_AGENT_PROXY_PROTOCOL \
+--set agent.proxyUser=INSTANA_AGENT_PROXY_USER \
+--set agent.proxyPassword=INSTANA_AGENT_PROXY_PASSWORD \
+--set agent.proxyUseDNS=INSTANA_AGENT_PROXY_USE_DNS \
+--set agent.listenAddress=INSTANA_AGENT_HTTP_LISTEN \
+stable/instana-agent
+```
+
+To install the chart with the release name `instana-agent` after editing the **values.yaml** file, run:
+
+```bash
+$ helm install --name instana-agent --namespace instana-agent stable/instana-agent
+```
+
+## Uninstalling the Chart
+
+To uninstall/delete the `instana-agent` daemon set:
+
+```bash
+$ helm del --purge instana-agent
+```
+
+## Configuration
+
+### Helm Chart
+
+The following table lists the configurable parameters of the Instana chart and their default values.
+
+| Parameter | Description | Default |
+|------------------------------------|-------------------------------------------------------------------------|----------------------------------------------|
+| `agent.key` | Your Instana Agent key | `nil` You must provide your own key |
+| `zone.name` | Instana zone/cluster name | `nil` You must provide your own zone name |
+| `agent.image.name` | The image name to pull | `instana/agent` |
+| `agent.image.tag` | The image tag to pull | `1.0.17` |
+| `agent.image.pullPolicy` | Image pull policy | `IfNotPresent` |
+| `agent.leaderElectorPort` | Instana leader elector sidecar port | `42655` |
+| `agent.endpointHost` | Instana agent backend endpoint host | `saas-us-west-2.instana.io` |
+| `agent.endpointPort` | Instana agent backend endpoint port | `443` |
+| `agent.downloadKey` | Your Instana Download key | `nil` You must provide your own download key |
+| `agent.mode` | Agent mode (Supported values are APM, INFRASTRUCTURE, AWS) | `APM` |
+| `agent.pod.annotations` | Additional annotations to apply to the pod | `{}` |
+| `agent.pod.tolerations` | Tolerations for pod assignment | `[]` |
+| `agent.pod.proxyHost` | Hostname/address of a proxy | `nil` |
+| `agent.pod.proxyPort` | Port of a proxy | `nil` |
+| `agent.pod.proxyProtocol` | Proxy protocol (Supported proxy types are "http", "socks4", "socks5") | `nil` |
+| `agent.pod.proxyUser` | Username of the proxy auth | `nil` |
+| `agent.pod.proxyPassword` | Password of the proxy auth | `nil` |
+| `agent.pod.proxyUseDNS` | Boolean if proxy also does DNS | `nil` |
+| `agent.listenAddress` | List of addresses to listen on, or "*" for all interfaces | `nil` |
+| `agent.pod.requests.memory` | Container memory requests in MiB | `512` |
+| `agent.pod.requests.cpu` | Container cpu requests in cpu cores | `0.5` |
+| `agent.pod.limits.memory` | Container memory limits in MiB | `512` |
+| `agent.pod.limits.cpu` | Container cpu limits in cpu cores | `1.5` |
+| `rbac.create` | Whether RBAC resources should be created | `true` |
+| `serviceAccount.create` | Whether a ServiceAccount should be created | `true` |
+| `serviceAccount.name` | Name of the ServiceAccount to use | `instana-agent` |
+
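+As an example, a minimal custom values file covering the two required parameters (hypothetical values) could look like:
+
+```yaml
+# my-values.yaml: hypothetical minimal overrides
+agent:
+  key: "YOUR_SECRET_AGENT_KEY"   # from the Management Portal of your Instana installation
+zone:
+  name: "my-cluster"             # a name that uniquely identifies this cluster
+```
+
+Install it with `helm install --name instana-agent -f my-values.yaml stable/instana-agent`.
+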
+### Agent
+
+There is a [config map](templates/configmap.yaml) which you can edit to configure the agent. This configuration will be used for all Instana agents on all nodes.
diff --git a/stable/instana-agent/templates/NOTES.txt b/stable/instana-agent/templates/NOTES.txt
new file mode 100644
index 000000000000..3f80a63c46ea
--- /dev/null
+++ b/stable/instana-agent/templates/NOTES.txt
@@ -0,0 +1,49 @@
+{{- if (and (not .Values.agent.key) (not .Values.zone.name)) }}
+##############################################################################
+#### ERROR: You did not specify your secret agent key. ####
+#### ERROR: You also did not specify a zone name for this cluster. ####
+##############################################################################
+
+This agent deployment will be incomplete until you set your agent key and zone name for this cluster:
+
+ helm upgrade {{ .Release.Name }} --reuse-values \
+ --set agent.key=$(YOUR_SECRET_AGENT_KEY) \
+ --set zone.name=$(YOUR_CLUSTER_NAME) stable/instana-agent
+
+- YOUR_SECRET_AGENT_KEY can be obtained from the Management Portal section of your Instana installation.
+- YOUR_CLUSTER_NAME should be a name that uniquely identifies this cluster.
+
+{{- else if not .Values.zone.name }}
+##############################################################################
+#### ERROR: You did not specify a zone name for this cluster. ####
+##############################################################################
+
+This agent deployment will be incomplete until you set a zone name for this cluster:
+
+ helm upgrade {{ .Release.Name }} --reuse-values \
+ --set zone.name=$(YOUR_CLUSTER_NAME) stable/instana-agent
+
+- YOUR_CLUSTER_NAME should be a name that uniquely identifies this cluster.
+
+{{- else if not .Values.agent.key }}
+##############################################################################
+#### ERROR: You did not specify your secret agent key. ####
+##############################################################################
+
+This agent deployment will be incomplete until you set your agent key:
+
+ helm upgrade {{ .Release.Name }} --reuse-values \
+ --set agent.key=$(YOUR_SECRET_AGENT_KEY) stable/instana-agent
+
+- YOUR_SECRET_AGENT_KEY can be obtained from the Management Portal section of your Instana installation.
+
+{{- else -}}
+It may take a few moments for the agents to fully deploy. You can see what agents are running by listing resources in the {{ .Release.Namespace }} namespace:
+
+ kubectl get all -n {{ .Release.Namespace }}
+
+You can get the logs for all of the agents with `kubectl logs`:
+
+ kubectl logs -f $(kubectl get pods -n {{ .Release.Namespace }} -o name) -n {{ .Release.Namespace }} -c instana-agent
+
+{{- end }}
diff --git a/stable/instana-agent/templates/_helpers.tpl b/stable/instana-agent/templates/_helpers.tpl
new file mode 100644
index 000000000000..f900a60e22cd
--- /dev/null
+++ b/stable/instana-agent/templates/_helpers.tpl
@@ -0,0 +1,61 @@
+{{/* vim: set filetype=mustache: */}}
+{{/*
+Expand the name of the chart.
+*/}}
+{{- define "instana-agent.name" -}}
+{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+
+{{/*
+Create a default fully qualified app name.
+We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
+If release name contains chart name it will be used as a full name.
+*/}}
+{{- define "instana-agent.fullname" -}}
+{{- if .Values.fullnameOverride -}}
+{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
+{{- else -}}
+{{- $name := default .Chart.Name .Values.nameOverride -}}
+{{- if contains $name .Release.Name -}}
+{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
+{{- else -}}
+{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Create chart name and version as used by the chart label.
+*/}}
+{{- define "instana-agent.chart" -}}
+{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+
+{{/*
+The name of the ServiceAccount used.
+*/}}
+{{- define "instana-agent.serviceAccountName" -}}
+{{- if .Values.serviceAccount.create -}}
+ {{ default (include "instana-agent.fullname" .) .Values.serviceAccount.name }}
+{{- else -}}
+ {{ default "default" .Values.serviceAccount.name }}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Add Helm metadata to resource labels.
+*/}}
+{{- define "instana-agent.commonLabels" -}}
+app.kubernetes.io/name: {{ include "instana-agent.name" . }}
+app.kubernetes.io/instance: {{ .Release.Name }}
+app.kubernetes.io/managed-by: {{ .Release.Service }}
+helm.sh/chart: {{ include "instana-agent.chart" . }}
+{{- end -}}
+
+{{/*
+Add Helm metadata to selector labels specifically for deployments/daemonsets/statefulsets.
+*/}}
+{{- define "instana-agent.selectorLabels" -}}
+app.kubernetes.io/name: {{ include "instana-agent.name" . }}
+app.kubernetes.io/instance: {{ .Release.Name }}
+{{- end -}}
diff --git a/stable/instana-agent/templates/agentsecret.yaml b/stable/instana-agent/templates/agentsecret.yaml
new file mode 100644
index 000000000000..a66f1d4992dd
--- /dev/null
+++ b/stable/instana-agent/templates/agentsecret.yaml
@@ -0,0 +1,11 @@
+{{- if .Values.agent.key }}
+apiVersion: v1
+kind: Secret
+metadata:
+ name: {{ template "instana-agent.fullname" . }}-agent-secret
+ labels:
+ {{- include "instana-agent.commonLabels" . | nindent 4 }}
+type: Opaque
+data:
+ key: {{ .Values.agent.key | b64enc | quote }}
+{{- end }}
diff --git a/stable/instana-agent/templates/clusterrole.yaml b/stable/instana-agent/templates/clusterrole.yaml
new file mode 100644
index 000000000000..72b59d0c8803
--- /dev/null
+++ b/stable/instana-agent/templates/clusterrole.yaml
@@ -0,0 +1,44 @@
+{{- if .Values.rbac.create -}}
+kind: ClusterRole
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: {{ template "instana-agent.fullname" . }}
+ labels:
+ {{- include "instana-agent.commonLabels" . | nindent 4 }}
+rules:
+- nonResourceURLs:
+ - "/version"
+ - "/healthz"
+ verbs: ["get"]
+- apiGroups: ["batch"]
+ resources:
+ - "jobs"
+ verbs: ["get", "list", "watch"]
+- apiGroups: ["extensions"]
+ resources:
+ - "deployments"
+ - "replicasets"
+ - "ingresses"
+ verbs: ["get", "list", "watch"]
+- apiGroups: ["apps"]
+ resources:
+ - "deployments"
+ - "replicasets"
+ verbs: ["get", "list", "watch"]
+- apiGroups: [""]
+ resources:
+ - "namespaces"
+ - "events"
+ - "services"
+ - "endpoints"
+ - "nodes"
+ - "pods"
+ - "replicationcontrollers"
+ - "componentstatuses"
+ - "resourcequotas"
+ verbs: ["get", "list", "watch"]
+- apiGroups: [""]
+ resources:
+ - "endpoints"
+ verbs: ["create", "update", "patch"]
+{{- end -}}
diff --git a/stable/instana-agent/templates/clusterrolebinding.yaml b/stable/instana-agent/templates/clusterrolebinding.yaml
new file mode 100644
index 000000000000..7a6c6990caae
--- /dev/null
+++ b/stable/instana-agent/templates/clusterrolebinding.yaml
@@ -0,0 +1,16 @@
+{{- if .Values.rbac.create -}}
+kind: ClusterRoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: {{ template "instana-agent.fullname" . }}
+ labels:
+ {{- include "instana-agent.commonLabels" . | nindent 4 }}
+subjects:
+- kind: ServiceAccount
+ name: {{ template "instana-agent.serviceAccountName" . }}
+ namespace: {{ .Release.Namespace }}
+roleRef:
+ kind: ClusterRole
+ name: {{ template "instana-agent.fullname" . }}
+ apiGroup: rbac.authorization.k8s.io
+{{- end -}}
diff --git a/stable/instana-agent/templates/configmap.yaml b/stable/instana-agent/templates/configmap.yaml
new file mode 100644
index 000000000000..c4e7be076a42
--- /dev/null
+++ b/stable/instana-agent/templates/configmap.yaml
@@ -0,0 +1,40 @@
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: {{ template "instana-agent.fullname" . }}
+ labels:
+ {{- include "instana-agent.commonLabels" . | nindent 4 }}
+data:
+ configuration.yaml: |
+ # Manual a-priori configuration. Configuration will be only used when the sensor
+ # is actually installed by the agent.
+ # The commented out example values represent example configuration and are not
+ # necessarily defaults. Defaults are usually 'absent' or mentioned separately.
+ # Changes are hot reloaded unless otherwise mentioned.
+
+ # It is possible to create files called 'configuration-abc.yaml' which are
+ # merged with this file in file system order. So 'configuration-cde.yaml' comes
+ # after 'configuration-abc.yaml'. Only nested structures are merged, values are
+ # overwritten by subsequent configurations.
+
+ # Secrets
+ # To filter sensitive data from collection by the agent, all sensors respect
+ # the following secrets configuration. If a key collected by a sensor matches
+ # an entry from the list, the value is redacted.
+ #com.instana.secrets:
+ # matcher: 'contains-ignore-case' # 'contains-ignore-case', 'contains', 'regex'
+ # list:
+ # - 'key'
+ # - 'password'
+ # - 'secret'
+
+ # Host
+ #com.instana.plugin.host:
+ # tags:
+ # - 'dev'
+ # - 'app1'
+
+ # Hardware & Zone
+ #com.instana.plugin.generic.hardware:
+ # enabled: true # disabled by default
+ # availability-zone: 'zone'
diff --git a/stable/instana-agent/templates/daemonset.yaml b/stable/instana-agent/templates/daemonset.yaml
new file mode 100644
index 000000000000..42067e07f66c
--- /dev/null
+++ b/stable/instana-agent/templates/daemonset.yaml
@@ -0,0 +1,169 @@
+{{- if .Values.agent.key -}}
+{{- if .Values.zone.name -}}
+apiVersion: apps/v1beta2
+kind: DaemonSet
+metadata:
+ name: {{ template "instana-agent.fullname" . }}
+ labels:
+ {{- include "instana-agent.commonLabels" . | nindent 4 }}
+spec:
+ selector:
+ matchLabels:
+ {{- include "instana-agent.selectorLabels" . | nindent 6 }}
+ template:
+ metadata:
+ labels:
+ {{- include "instana-agent.commonLabels" . | nindent 8 }}
+ {{- if .Values.agent.pod.annotations }}
+ annotations:
+ {{- toYaml .Values.agent.pod.annotations | nindent 8 }}
+ {{- end }}
+ spec:
+ serviceAccount: {{ template "instana-agent.serviceAccountName" . }}
+ hostIPC: true
+ hostNetwork: true
+ hostPID: true
+ containers:
+ - name: {{ template "instana-agent.name" . }}
+ image: "{{ .Values.agent.image.name }}:{{ .Values.agent.image.tag }}"
+ imagePullPolicy: {{ .Values.agent.image.pullPolicy }}
+ env:
+ - name: INSTANA_AGENT_LEADER_ELECTOR_PORT
+ value: {{ .Values.agent.leaderElectorPort | quote }}
+ - name: INSTANA_ZONE
+ value: {{ .Values.zone.name | quote }}
+ - name: INSTANA_AGENT_ENDPOINT
+ value: {{ .Values.agent.endpointHost | quote }}
+ - name: INSTANA_AGENT_ENDPOINT_PORT
+ value: {{ .Values.agent.endpointPort | quote }}
+ - name: INSTANA_AGENT_KEY
+ valueFrom:
+ secretKeyRef:
+ name: {{ template "instana-agent.fullname" . }}-agent-secret
+ key: key
+ {{- if .Values.agent.mode }}
+ - name: INSTANA_AGENT_MODE
+ value: {{ .Values.agent.mode | quote }}
+ {{- end }}
+ {{- if .Values.agent.downloadKey }}
+ - name: INSTANA_DOWNLOAD_KEY
+ valueFrom:
+ secretKeyRef:
+ name: {{ template "instana-agent.fullname" . }}-download-secret
+ key: key
+ {{- end }}
+ {{- if .Values.agent.proxyHost }}
+ - name: INSTANA_AGENT_PROXY_HOST
+ value: {{ .Values.agent.proxyHost | quote }}
+ {{- end }}
+ {{- if .Values.agent.proxyPort }}
+ - name: INSTANA_AGENT_PROXY_PORT
+ value: {{ .Values.agent.proxyPort | quote }}
+ {{- end }}
+ {{- if .Values.agent.proxyProtocol }}
+ - name: INSTANA_AGENT_PROXY_PROTOCOL
+ value: {{ .Values.agent.proxyProtocol | quote }}
+ {{- end }}
+ {{- if .Values.agent.proxyUser }}
+ - name: INSTANA_AGENT_PROXY_USER
+ value: {{ .Values.agent.proxyUser | quote }}
+ {{- end }}
+ {{- if .Values.agent.proxyPassword }}
+ - name: INSTANA_AGENT_PROXY_PASSWORD
+ value: {{ .Values.agent.proxyPassword | quote }}
+ {{- end }}
+ {{- if .Values.agent.proxyUseDNS }}
+ - name: INSTANA_AGENT_PROXY_USE_DNS
+ value: {{ .Values.agent.proxyUseDNS | quote }}
+ {{- end }}
+ {{- if .Values.agent.listenAddress }}
+ - name: INSTANA_AGENT_HTTP_LISTEN
+ value: {{ .Values.agent.listenAddress | quote }}
+ {{- end }}
+ - name: JAVA_OPTS
+ value: "-Xmx{{ div (default 512 .Values.agent.pod.requests.memory) 3 }}M -XX:+ExitOnOutOfMemoryError"
+ - name: INSTANA_AGENT_POD_NAME
+ valueFrom:
+ fieldRef:
+ fieldPath: metadata.name
+ securityContext:
+ privileged: true
+ volumeMounts:
+ - name: dev
+ mountPath: /dev
+ - name: run
+ mountPath: /run
+ - name: var-run
+ mountPath: /var/run
+ - name: sys
+ mountPath: /sys
+ - name: var-log
+ mountPath: /var/log
+ - name: machine-id
+ mountPath: /etc/machine-id
+ - name: configuration
+ subPath: configuration.yaml
+ mountPath: /root/configuration.yaml
+ livenessProbe:
+ httpGet:
+ path: /status
+ port: 42699
+ initialDelaySeconds: 75
+ periodSeconds: 5
+ resources:
+ requests:
+ memory: "{{ default 512 .Values.agent.pod.requests.memory }}Mi"
+ cpu: {{ default 0.5 .Values.agent.pod.requests.cpu }}
+ limits:
+ memory: "{{ default 512 .Values.agent.pod.limits.memory }}Mi"
+ cpu: {{ default 1.5 .Values.agent.pod.limits.cpu }}
+ ports:
+ - containerPort: 42699
+ - name: {{ template "instana-agent.name" . }}-leader-elector
+ image: instana/leader-elector:0.5.1
+ env:
+ - name: INSTANA_AGENT_POD_NAME
+ valueFrom:
+ fieldRef:
+ fieldPath: metadata.name
+ args: ["--election=instana", "--http=localhost:{{ default 42655 .Values.agent.leaderElectorPort }}", "--id=$(INSTANA_AGENT_POD_NAME)"]
+ resources:
+ requests:
+ cpu: 0.1
+ memory: 64Mi
+ livenessProbe:
+ httpGet:
+ path: /status
+ port: 42699
+ initialDelaySeconds: 75
+ periodSeconds: 5
+ ports:
+ - containerPort: {{ .Values.agent.leaderElectorPort }}
+ {{- if .Values.agent.pod.tolerations }}
+ tolerations:
+ {{- toYaml .Values.agent.pod.tolerations | nindent 8 }}
+ {{- end }}
+ volumes:
+ - name: dev
+ hostPath:
+ path: /dev
+ - name: run
+ hostPath:
+ path: /run
+ - name: var-run
+ hostPath:
+ path: /var/run
+ - name: sys
+ hostPath:
+ path: /sys
+ - name: var-log
+ hostPath:
+ path: /var/log
+ - name: machine-id
+ hostPath:
+ path: /etc/machine-id
+ - name: configuration
+ configMap:
+ name: {{ template "instana-agent.fullname" . }}
+{{- end -}}
+{{- end -}}
diff --git a/stable/instana-agent/templates/downloadsecret.yaml b/stable/instana-agent/templates/downloadsecret.yaml
new file mode 100644
index 000000000000..c57833282bcb
--- /dev/null
+++ b/stable/instana-agent/templates/downloadsecret.yaml
@@ -0,0 +1,11 @@
+{{- if .Values.agent.downloadKey }}
+apiVersion: v1
+kind: Secret
+metadata:
+ name: {{ template "instana-agent.fullname" . }}-download-secret
+ labels:
+ {{- include "instana-agent.commonLabels" . | nindent 4 }}
+type: Opaque
+data:
+ key: {{ .Values.agent.downloadKey | b64enc | quote }}
+{{- end }}
diff --git a/stable/instana-agent/templates/serviceaccount.yaml b/stable/instana-agent/templates/serviceaccount.yaml
new file mode 100644
index 000000000000..0a1c756060a5
--- /dev/null
+++ b/stable/instana-agent/templates/serviceaccount.yaml
@@ -0,0 +1,8 @@
+{{- if .Values.serviceAccount.create -}}
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: {{ template "instana-agent.serviceAccountName" . }}
+ labels:
+ {{- include "instana-agent.commonLabels" . | nindent 4 }}
+{{- end -}}
diff --git a/stable/instana-agent/values.yaml b/stable/instana-agent/values.yaml
new file mode 100644
index 000000000000..172df96e433f
--- /dev/null
+++ b/stable/instana-agent/values.yaml
@@ -0,0 +1,74 @@
+# name is the value which will be used as the base resource name for various resources associated with the agent.
+# name: instana-agent
+
+zone:
+ # zone.name is the unique name by which your cluster will be known inside Instana.
+ name: null
+
+agent:
+ # agent.key is the secret token which your agent uses to authenticate to Instana's servers.
+ key: null
+ # agent.mode sets the agent mode; it can be APM, INFRASTRUCTURE or AWS
+ # mode: APM
+ # agent.downloadKey is optional; if used, it doesn't have to match agent.key
+ # downloadKey: null
+ # agent.listenAddress is the IP address the agent HTTP server will listen to.
+ # listenAddress: *
+ # agent.leaderElectorPort is the port on which the leader elector sidecar is exposed.
+ leaderElectorPort: 42655
+
+ # agent.endpointHost is the hostname of the Instana server your agents will connect to.
+ endpointHost: saas-us-west-2.instana.io
+ # agent.endpointPort is the port number (as a String) of the Instana server your agents will connect to.
+ endpointPort: 443
+
+ image:
+ # agent.image.name is the name of the container image of the Instana agent.
+ name: instana/agent
+ # agent.image.tag is the tag name of the agent container image.
+ tag: 1.0.17
+ # agent.image.pullPolicy specifies when to pull the image container.
+ pullPolicy: IfNotPresent
+
+ pod:
+ # agent.pod.annotations are additional annotations to be added to the agent pods.
+ annotations: {}
+
+ # agent.pod.tolerations are tolerations to influence agent pod assignment.
+ # https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
+ tolerations: []
+
+ requests:
+ # agent.pod.requests.memory is the requested memory allocation in MiB for the agent pods.
+ memory: 512
+ # agent.pod.requests.cpu is the requested CPU allocation in cores for the agent pods.
+ cpu: 0.5
+ limits:
+ # agent.pod.limits.memory sets the memory allocation limits in MiB for the agent pods.
+ memory: 512
+ # agent.pod.limits.cpu sets the CPU units allocation limits for the agent pods.
+ cpu: 1.5
+
+ # agent.proxyHost sets the INSTANA_AGENT_PROXY_HOST environment variable.
+ # proxyHost: null
+ # agent.proxyPort sets the INSTANA_AGENT_PROXY_PORT environment variable.
+ # proxyPort: null
+ # agent.proxyProtocol sets the INSTANA_AGENT_PROXY_PROTOCOL environment variable.
+ # proxyProtocol: null
+ # agent.proxyUser sets the INSTANA_AGENT_PROXY_USER environment variable.
+ # proxyUser: null
+ # agent.proxyPassword sets the INSTANA_AGENT_PROXY_PASSWORD environment variable.
+ # proxyPassword: null
+ # agent.proxyUseDNS sets the INSTANA_AGENT_PROXY_USE_DNS environment variable.
+ # proxyUseDNS: null
+
+rbac:
+ # Specifies whether RBAC resources should be created
+ create: true
+
+serviceAccount:
+ # Specifies whether a ServiceAccount should be created
+ create: true
+ # The name of the ServiceAccount to use.
+ # If not set and create is true, a name is generated using the fullname template
+ # name: instana-agent
diff --git a/stable/ipfs/Chart.yaml b/stable/ipfs/Chart.yaml
index e7116cd12019..b29dcc7943c3 100644
--- a/stable/ipfs/Chart.yaml
+++ b/stable/ipfs/Chart.yaml
@@ -1,7 +1,7 @@
apiVersion: v1
description: A Helm chart for the Interplanetary File System
name: ipfs
-version: 0.2.2
+version: 0.2.3
icon: https://raw.githubusercontent.com/ipfs/logo/master/raster-generated/ipfs-logo-128-ice-text.png
home: https://ipfs.io/
appVersion: v0.4.9
diff --git a/stable/ipfs/README.md b/stable/ipfs/README.md
index 002e4f99594d..6103e5c5f151 100644
--- a/stable/ipfs/README.md
+++ b/stable/ipfs/README.md
@@ -53,7 +53,7 @@ Specify each parameter using the `--set key=value[,key=value]` argument to `helm
```bash
$ helm install --name my-release \
- --set storage.size="20Gi" \
+ --set persistence.size="20Gi" \
stable/ipfs
```
diff --git a/stable/jaeger-operator/Chart.yaml b/stable/jaeger-operator/Chart.yaml
index ccfbe56ed407..bdbd9633e65c 100644
--- a/stable/jaeger-operator/Chart.yaml
+++ b/stable/jaeger-operator/Chart.yaml
@@ -1,11 +1,13 @@
apiVersion: v1
description: jaeger-operator Helm chart for Kubernetes
name: jaeger-operator
-version: 2.2.0
-appVersion: 1.9.0
+version: 2.4.2
+appVersion: 1.11.0
home: https://www.jaegertracing.io/
icon: https://www.jaegertracing.io/img/jaeger-icon-reverse-color.svg
source: https://github.com/jaegertracing/jaeger-operator
maintainers:
- email: ctadeu@gmail.com
name: cpanato
+ - email: batazor111@gmail.com
+ name: batazor
diff --git a/stable/jaeger-operator/OWNERS b/stable/jaeger-operator/OWNERS
index cadd300e0f3c..54a200a3a845 100644
--- a/stable/jaeger-operator/OWNERS
+++ b/stable/jaeger-operator/OWNERS
@@ -1,4 +1,6 @@
approvers:
- cpanato
+- batazor
reviewers:
- cpanato
+- batazor
diff --git a/stable/jaeger-operator/README.md b/stable/jaeger-operator/README.md
index 49f8a370ca2e..7703963e8a96 100644
--- a/stable/jaeger-operator/README.md
+++ b/stable/jaeger-operator/README.md
@@ -43,17 +43,18 @@ The following table lists the configurable parameters of the jaeger-operator cha
Parameter | Description | Default
--- | --- | ---
-`image.repository` | controller container image repository | `jaegertracing/jaeger-operator`
-`image.tag` | controller container image tag | `1.9.0`
-`image.pullPolicy` | controller container image pull policy | `IfNotPresent`
-`rbac.create` | all required roles and SA will be created | `true`
-`resources` | k8s pod resorces | `None`
+`image.repository` | Controller container image repository | `jaegertracing/jaeger-operator`
+`image.tag` | Controller container image tag | `1.11.0`
+`image.pullPolicy` | Controller container image pull policy | `IfNotPresent`
+`rbac.create` | All required roles and rolebindings will be created | `true`
+`serviceAccount.create` | Specifies whether a ServiceAccount should be created | `true`
+`serviceAccount.name` | Service account name to use. If not set and create is true, a name is generated using the fullname template | ``
+`resources` | K8s pod resources | `None`
`nodeSelector` | Node labels for pod assignment | `{}`
`tolerations` | Toleration labels for pod assignment | `[]`
`affinity` | Affinity settings for pod assignment | `{}`
-
-Specify each parameter you'd like to override using a YAML file as described above in the [installation](#Installing the Chart) section.
+Specify each parameter you'd like to override using a YAML file as described above in the [installation](#installing-the-chart) section.
You can also specify any non-array parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
@@ -68,7 +69,7 @@ $ helm install stable/jaeger-operator --name my-release \
The simplest possible way to install is by creating a YAML file like the following:
```YAML
-apiVersion: io.jaegertracing/v1alpha1
+apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
name: simplest
@@ -88,7 +89,7 @@ After that just deploy the following manifest:
```YAML
# setup an elasticsearch with `make es`
-apiVersion: io.jaegertracing/v1alpha1
+apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
name: simple-prod
diff --git a/stable/jaeger-operator/templates/crd.yaml b/stable/jaeger-operator/templates/crd.yaml
index f7d6bb2cb3bf..cd57a5f4a1b2 100644
--- a/stable/jaeger-operator/templates/crd.yaml
+++ b/stable/jaeger-operator/templates/crd.yaml
@@ -1,7 +1,7 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
- name: jaegers.io.jaegertracing
+ name: jaegers.jaegertracing.io
{{- if semverCompare ">=2.10-0" .Capabilities.TillerVersion.SemVer }}
annotations:
"helm.sh/hook": crd-install
@@ -13,11 +13,11 @@ metadata:
app.kubernetes.io/managed-by: {{ .Release.Service }}
helm.sh/chart: {{ include "jaeger-operator.chart" . }}
spec:
- group: io.jaegertracing
+ group: jaegertracing.io
names:
kind: Jaeger
listKind: JaegerList
plural: jaegers
singular: jaeger
scope: Namespaced
- version: v1alpha1
+ version: v1
diff --git a/stable/jaeger-operator/templates/deployment.yaml b/stable/jaeger-operator/templates/deployment.yaml
index bfba3c984e17..2133ce7442a2 100644
--- a/stable/jaeger-operator/templates/deployment.yaml
+++ b/stable/jaeger-operator/templates/deployment.yaml
@@ -30,7 +30,7 @@ spec:
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- - containerPort: 60000
+ - containerPort: 8383
name: metrics
args: ["start"]
env:
@@ -38,6 +38,10 @@ spec:
valueFrom:
fieldRef:
fieldPath: metadata.namespace
+ - name: POD_NAME
+ valueFrom:
+ fieldRef:
+ fieldPath: metadata.name
- name: OPERATOR_NAME
value: {{ include "jaeger-operator.fullname" . | quote }}
resources:
diff --git a/stable/jaeger-operator/templates/role.yaml b/stable/jaeger-operator/templates/role.yaml
index 67e7b665a6c3..3718d5bd903e 100644
--- a/stable/jaeger-operator/templates/role.yaml
+++ b/stable/jaeger-operator/templates/role.yaml
@@ -10,12 +10,6 @@ metadata:
app.kubernetes.io/managed-by: {{ .Release.Service }}
helm.sh/chart: {{ include "jaeger-operator.chart" . }}
rules:
-- apiGroups:
- - io.jaegertracing
- resources:
- - "*"
- verbs:
- - "*"
- apiGroups:
- ""
resources:
@@ -26,8 +20,9 @@ rules:
- events
- configmaps
- secrets
+ - serviceaccounts
verbs:
- - "*"
+ - '*'
- apiGroups:
- apps
resources:
@@ -36,11 +31,49 @@ rules:
- replicasets
- statefulsets
verbs:
- - "*"
+ - '*'
+- apiGroups:
+ - monitoring.coreos.com
+ resources:
+ - servicemonitors
+ verbs:
+ - get
+ - create
+- apiGroups:
+ - io.jaegertracing
+ resources:
+ - '*'
+ verbs:
+ - '*'
- apiGroups:
- extensions
resources:
- ingresses
verbs:
- "*"
+- apiGroups:
+ - batch
+ resources:
+ - jobs
+ - cronjobs
+ verbs:
+ - "*"
+- apiGroups:
+ - route.openshift.io
+ resources:
+ - routes
+ verbs:
+ - "*"
+- apiGroups:
+ - logging.openshift.io
+ resources:
+ - elasticsearches
+ verbs:
+ - '*'
+- apiGroups:
+ - jaegertracing.io
+ resources:
+ - '*'
+ verbs:
+ - '*'
{{- end }}
diff --git a/stable/jaeger-operator/values.yaml b/stable/jaeger-operator/values.yaml
index fcfb83b63b8f..35e9c7c037d2 100644
--- a/stable/jaeger-operator/values.yaml
+++ b/stable/jaeger-operator/values.yaml
@@ -3,7 +3,7 @@
# Declare variables to be passed into your templates.
image:
repository: jaegertracing/jaeger-operator
- tag: 1.9.0
+ tag: 1.11.0
pullPolicy: IfNotPresent
rbac:
diff --git a/stable/jasperreports/Chart.yaml b/stable/jasperreports/Chart.yaml
index c3eec72cc1f7..1bc48dd16da1 100644
--- a/stable/jasperreports/Chart.yaml
+++ b/stable/jasperreports/Chart.yaml
@@ -1,6 +1,7 @@
+apiVersion: v1
name: jasperreports
-version: 4.0.2
-appVersion: 7.1.0
+version: 4.2.1
+appVersion: 7.2.0
description: The JasperReports server can be used as a stand-alone or embedded reporting
and BI server that offers web-based reporting, analytic tools and visualization,
and a dashboard feature for compiling multiple custom views
diff --git a/stable/jasperreports/README.md b/stable/jasperreports/README.md
index 6914453fe6ef..090e161431c7 100644
--- a/stable/jasperreports/README.md
+++ b/stable/jasperreports/README.md
@@ -14,7 +14,7 @@ This chart bootstraps a [JasperReports](https://github.com/bitnami/bitnami-docke
It also packages the [Bitnami MariaDB chart](https://github.com/kubernetes/charts/tree/master/stable/mariadb) which bootstraps a MariaDB deployment required by the JasperReports application.
-Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters.
+Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters. This chart has been tested to work with NGINX Ingress, cert-manager, fluentd and Prometheus on top of the [BKPR](https://kubeprod.io/).
## Prerequisites
@@ -50,6 +50,7 @@ The following table lists the configurable parameters of the JasperReports chart
| Parameter | Description | Default |
|-------------------------------|----------------------------------------------|----------------------------------------------------------|
| `global.imageRegistry` | Global Docker image registry | `nil` |
+| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) |
| `image.registry` | JasperReports image registry | `docker.io` |
| `image.repository` | JasperReports Image name | `bitnami/jasperreports` |
| `image.tag` | JasperReports Image tag | `{VERSION}` |
@@ -65,6 +66,17 @@ The following table lists the configurable parameters of the JasperReports chart
| `smtpPassword` | SMTP password | `nil` |
| `smtpProtocol` | SMTP protocol [`ssl`, `none`] | `nil` |
| `allowEmptyPassword` | Allow DB blank passwords | `yes` |
+| `ingress.enabled` | Enable ingress controller resource | `false` |
+| `ingress.annotations` | Ingress annotations | `[]` |
+| `ingress.certManager` | Add annotations for cert-manager | `false` |
+| `ingress.hosts[0].name` | Hostname to your JasperReports installation | `jasperreports.local` |
+| `ingress.hosts[0].path` | Path within the url structure | `/` |
+| `ingress.hosts[0].tls` | Utilize TLS backend in ingress | `false` |
+| `ingress.hosts[0].tlsHosts` | Array of TLS hosts for ingress record (defaults to `ingress.hosts[0].name` if `nil`) | `nil` |
+| `ingress.hosts[0].tlsSecret` | TLS Secret (certificates) | `jasperreports.local-tls-secret` |
+| `ingress.secrets[0].name` | TLS Secret Name | `nil` |
+| `ingress.secrets[0].certificate` | TLS Secret Certificate | `nil` |
+| `ingress.secrets[0].key` | TLS Secret Key | `nil` |
| `externalDatabase.host` | Host of the external database | `nil` |
| `externalDatabase.port` | Port of the external database | `3306` |
| `externalDatabase.user` | Existing username in the external db | `bn_jasperreports` |
diff --git a/stable/jasperreports/templates/_helpers.tpl b/stable/jasperreports/templates/_helpers.tpl
index 8655db205bf2..ca05bc7fac58 100644
--- a/stable/jasperreports/templates/_helpers.tpl
+++ b/stable/jasperreports/templates/_helpers.tpl
@@ -52,3 +52,32 @@ Also, we can't use a single if because lazy evaluation is not an option
{{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
{{- end -}}
{{- end -}}
+
+{{/*
+Return the proper Docker Image Registry Secret Names
+*/}}
+{{- define "jasperreports.imagePullSecrets" -}}
+{{/*
+Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
+but Helm 2.9 and 2.10 do not support it, so we need to implement this if-else logic.
+Also, we cannot use a single if because lazy evaluation is not an option
+*/}}
+{{- if .Values.global }}
+{{- if .Values.global.imagePullSecrets }}
+imagePullSecrets:
+{{- range .Values.global.imagePullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- else if .Values.image.pullSecrets }}
+imagePullSecrets:
+{{- range .Values.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- end -}}
+{{- else if .Values.image.pullSecrets }}
+imagePullSecrets:
+{{- range .Values.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- end -}}
+{{- end -}}
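For context, the precedence the helper implements can be illustrated with a values sketch (secret names are placeholders):

```yaml
# If global.imagePullSecrets is set, it wins; otherwise image.pullSecrets is used.
global:
  imagePullSecrets:
    - myGlobalRegistrySecret       # takes precedence when present
image:
  pullSecrets:
    - myChartLevelRegistrySecret   # used only when no global secrets are set
```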
diff --git a/stable/jasperreports/templates/deployment.yaml b/stable/jasperreports/templates/deployment.yaml
index ebac3190f66f..938e42d27d28 100644
--- a/stable/jasperreports/templates/deployment.yaml
+++ b/stable/jasperreports/templates/deployment.yaml
@@ -19,12 +19,7 @@ spec:
chart: "{{ template "jasperreports.chart" . }}"
release: {{ .Release.Name | quote }}
spec:
- {{- if .Values.image.pullSecrets }}
- imagePullSecrets:
- {{- range .Values.image.pullSecrets }}
- - name: {{ . }}
- {{- end}}
- {{- end }}
+{{- include "jasperreports.imagePullSecrets" . | indent 6 }}
containers:
- name: {{ template "jasperreports.fullname" . }}
image: {{ template "jasperreports.image" . }}
diff --git a/stable/jasperreports/templates/ingress.yaml b/stable/jasperreports/templates/ingress.yaml
new file mode 100644
index 000000000000..b26b6a5b85d4
--- /dev/null
+++ b/stable/jasperreports/templates/ingress.yaml
@@ -0,0 +1,43 @@
+{{- if .Values.ingress.enabled }}
+apiVersion: extensions/v1beta1
+kind: Ingress
+metadata:
+ name: {{ template "jasperreports.fullname" . }}
+ labels:
+ app: "{{ template "jasperreports.fullname" . }}"
+ chart: "{{ template "jasperreports.chart" . }}"
+ release: {{ .Release.Name | quote }}
+ heritage: {{ .Release.Service | quote }}
+ annotations:
+ {{- if .Values.ingress.certManager }}
+ kubernetes.io/tls-acme: "true"
+ {{- end }}
+ {{- range $key, $value := .Values.ingress.annotations }}
+ {{ $key }}: {{ $value | quote }}
+ {{- end }}
+spec:
+ rules:
+ {{- range .Values.ingress.hosts }}
+ - host: {{ .name }}
+ http:
+ paths:
+ - path: {{ default "/" .path }}
+ backend:
+ serviceName: {{ template "jasperreports.fullname" $ }}
+ servicePort: http
+ {{- end }}
+ tls:
+ {{- range .Values.ingress.hosts }}
+ {{- if .tls }}
+ - hosts:
+ {{- if .tlsHosts }}
+ {{- range $host := .tlsHosts }}
+ - {{ $host }}
+ {{- end }}
+ {{- else }}
+ - {{ .name }}
+ {{- end }}
+ secretName: {{ .tlsSecret }}
+ {{- end }}
+ {{- end }}
+{{- end }}
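A minimal values override exercising the new template might look like this (hostname and secret name are illustrative):

```yaml
ingress:
  enabled: true
  certManager: false
  annotations:
    kubernetes.io/ingress.class: nginx
  hosts:
    - name: jasperreports.example.com
      path: /
      tls: true
      tlsSecret: jasperreports-example-tls
```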
diff --git a/stable/jasperreports/values.yaml b/stable/jasperreports/values.yaml
index 7745c3632b23..b3e842af688e 100644
--- a/stable/jasperreports/values.yaml
+++ b/stable/jasperreports/values.yaml
@@ -1,8 +1,11 @@
-## Global Docker image registry
-## Please, note that this will override the image registry for all the images, including dependencies, configured to use the global value
+## Global Docker image parameters
+## Please note that this will override the image parameters for all images, including dependencies, configured to use the global value
+## Current available global Docker image parameters: imageRegistry and imagePullSecrets
##
# global:
-# imageRegistry:
+# imageRegistry: myRegistryName
+# imagePullSecrets:
+# - myRegistryKeySecretName
## Bitnami JasperReports image version
## ref: https://hub.docker.com/r/bitnami/dokuwiki/tags/
@@ -10,7 +13,7 @@
image:
registry: docker.io
repository: bitnami/jasperreports
- tag: 7.1.0
+ tag: 7.2.0
## Specify a imagePullPolicy
## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
@@ -21,7 +24,7 @@ image:
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
- # - myRegistrKeySecretName
+ # - myRegistryKeySecretName
## User of the application
## ref: https://github.com/bitnami/bitnami-docker-jasperreports#configuration
@@ -159,3 +162,56 @@ resources:
requests:
memory: 512Mi
cpu: 300m
+
+## Configure the ingress resource that allows you to access the
+## JasperReports installation. Set up the URL
+## ref: http://kubernetes.io/docs/user-guide/ingress/
+##
+ingress:
+ ## Set to true to enable ingress record generation
+ enabled: false
+
+ ## Set this to true in order to add the corresponding annotations for cert-manager
+ certManager: false
+
+ ## Ingress annotations done as key:value pairs
+ ## For a full list of possible ingress annotations, please see
+ ## ref: https://github.com/kubernetes/ingress-nginx/blob/master/docs/annotations.md
+ ##
+ ## If tls is set to true, annotation ingress.kubernetes.io/secure-backends: "true" will automatically be set
+ ## If certManager is set to true, annotation kubernetes.io/tls-acme: "true" will automatically be set
+ annotations:
+ # kubernetes.io/ingress.class: nginx
+
+ ## The list of hostnames to be covered with this ingress record.
+ ## Most likely this will be just one host, but in the event more hosts are needed, this is an array
+ hosts:
+  - name: jasperreports.local
+ path: /
+
+ ## Set this to true in order to enable TLS on the ingress record
+ tls: false
+
+ ## Optionally specify the TLS hosts for the ingress record
+ ## Useful when the Ingress controller supports www-redirection
+ ## If not specified, the above host name will be used
+ # tlsHosts:
+ # - www.jasperreports.local
+ # - jasperreports.local
+
+ ## If TLS is set to true, you must declare what secret will store the key/certificate for TLS
+ tlsSecret: jasperreports.local-tls
+
+ secrets:
+ ## If you're providing your own certificates, please use this to add the certificates as secrets
+ ## key and certificate should start with -----BEGIN CERTIFICATE----- or
+ ## -----BEGIN RSA PRIVATE KEY-----
+ ##
+ ## name should line up with a tlsSecret set further up
+ ## If you're using cert-manager, this is unneeded, as it will create the secret for you if it is not set
+ ##
+ ## It is also possible to create and manage the certificates outside of this helm chart
+ ## Please see README.md for more information
+ # - name: jasperreports.local-tls
+ # key:
+ # certificate:
diff --git a/stable/jenkins/.helmignore b/stable/jenkins/.helmignore
index f0c131944441..b4af6c204d53 100644
--- a/stable/jenkins/.helmignore
+++ b/stable/jenkins/.helmignore
@@ -19,3 +19,4 @@
.project
.idea/
*.tmproj
+ci/
diff --git a/stable/jenkins/Chart.yaml b/stable/jenkins/Chart.yaml
index 107dd017b0c3..9aa0f1f763fe 100755
--- a/stable/jenkins/Chart.yaml
+++ b/stable/jenkins/Chart.yaml
@@ -1,6 +1,7 @@
+apiVersion: v1
name: jenkins
home: https://jenkins.io/
-version: 0.28.11
+version: 1.1.22
appVersion: lts
description: Open source continuous integration server. It supports multiple SCM tools
including CVS, Subversion and Git. It can execute Apache Ant and Apache Maven-based
@@ -9,11 +10,14 @@ sources:
- https://github.com/jenkinsci/jenkins
- https://github.com/jenkinsci/docker-jnlp-slave
- https://github.com/nuvo/kube-tasks
+- https://github.com/jenkinsci/configuration-as-code-plugin
maintainers:
- name: lachie83
email: lachlan.evenson@microsoft.com
- name: viglesiasce
email: viglesias@google.com
- name: maorfr
- email: maorfr@gmail.com
+ email: maor.friedman@redhat.com
+- name: torstenwalter
+ email: mail@torstenwalter.de
icon: https://wiki.jenkins-ci.org/download/attachments/2916393/logo.png
diff --git a/stable/jenkins/OWNERS b/stable/jenkins/OWNERS
index 7ae600270d56..054a3bd1fe6b 100644
--- a/stable/jenkins/OWNERS
+++ b/stable/jenkins/OWNERS
@@ -2,7 +2,9 @@ approvers:
- lachie83
- viglesiasce
- maorfr
+- torstenwalter
reviewers:
- lachie83
- viglesiasce
- maorfr
+- torstenwalter
diff --git a/stable/jenkins/README.md b/stable/jenkins/README.md
index f321afdc2aad..173dba4b754f 100644
--- a/stable/jenkins/README.md
+++ b/stable/jenkins/README.md
@@ -21,93 +21,173 @@ To install the chart with the release name `my-release`:
$ helm install --name my-release stable/jenkins
```
+## Upgrading an existing Release to a new major version
+
+A major chart version change (like v0.40.0 -> v1.0.0) indicates that there is an incompatible breaking change needing manual actions.
+
+
+### 1.0.0
+
+Breaking changes:
+
+- values have been renamed to follow the Helm chart best practices for naming conventions, so
+  that all variables start with a lowercase letter and words are separated in camel case
+  https://helm.sh/docs/chart_best_practices/#naming-conventions
+- all resources are now using recommended standard labels
+ https://helm.sh/docs/chart_best_practices/#standard-labels
+
+As a result of the label changes the selectors of the deployment have also been updated.
+Those are immutable, so attempting an upgrade will cause an error like:
+
+```
+Error: Deployment.apps "jenkins" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/component":"jenkins-master", "app.kubernetes.io/instance":"jenkins"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
+```
+
+In order to upgrade, delete the Jenkins Deployment before upgrading:
+
+```
+kubectl delete deploy jenkins
+```
+
+
## Configuration
The following tables list the configurable parameters of the Jenkins chart and their default values.
### Jenkins Master
-| Parameter | Description | Default |
-| --------------------------------- | ------------------------------------ | ---------------------------------------------------------------------------- |
-| `nameOverride` | Override the resource name prefix | `jenkins` |
-| `fullnameOverride` | Override the full resource names | `jenkins-{release-name}` (or `jenkins` if release-name is `jenkins`) |
-| `Master.Name` | Jenkins master name | `jenkins-master` |
-| `Master.Image` | Master image name | `jenkins/jenkins` |
-| `Master.ImageTag` | Master image tag | `lts` |
-| `Master.ImagePullPolicy` | Master image pull policy | `Always` |
-| `Master.ImagePullSecret` | Master image pull secret | Not set |
-| `Master.Component` | k8s selector key | `jenkins-master` |
-| `Master.NumExecutors` | Set Number of executors | 0 |
-| `Master.UseSecurity` | Use basic security | `true` |
-| `Master.SecurityRealm` | Custom Security Realm | Not set |
-| `Master.AuthorizationStrategy` | Jenkins XML job config for AuthorizationStrategy | Not set |
-| `Master.DeploymentLabels` | Custom Deployment labels | Not set |
-| `Master.ServiceLabels` | Custom Service labels | Not set |
-| `Master.AdminUser` | Admin username (and password) created as a secret if useSecurity is true | `admin` |
-| `Master.AdminPassword` | Admin password (and user) created as a secret if useSecurity is true | Random value |
-| `Master.JenkinsAdminEmail` | Email address for the administrator of the Jenkins instance | Not set |
-| `Master.resources` | Resources allocation (Requests and Limits) | `{requests: {cpu: 50m, memory: 256Mi}, limits: {cpu: 2000m, memory: 2048Mi}}`|
-| `Master.InitContainerEnv` | Environment variables for Init Container | Not set |
-| `Master.ContainerEnv` | Environment variables for Jenkins Container | Not set |
-| `Master.UsePodSecurityContext` | Enable pod security context (must be `true` if `RunAsUser` or `FsGroup` are set) | `true` |
-| `Master.RunAsUser` | uid that jenkins runs with | `0` |
-| `Master.FsGroup` | uid that will be used for persistent volume | `0` |
-| `Master.ServiceAnnotations` | Service annotations | `{}` |
-| `Master.ServiceType` | k8s service type | `LoadBalancer` |
-| `Master.ServicePort` | k8s service port | `8080` |
-| `Master.NodePort` | k8s node port | Not set |
-| `Master.HealthProbes` | Enable k8s liveness and readiness probes | `true` |
-| `Master.HealthProbesLivenessTimeout` | Set the timeout for the liveness probe | `120` |
-| `Master.HealthProbesReadinessTimeout` | Set the timeout for the readiness probe | `60` |
-| `Master.HealthProbeReadinessPeriodSeconds` | Set how often (in seconds) to perform the liveness probe | `10` |
-| `Master.HealthProbeLivenessFailureThreshold` | Set the failure threshold for the liveness probe | `12` |
-| `Master.SlaveListenerPort` | Listening port for agents | `50000` |
-| `Master.DisabledAgentProtocols` | Disabled agent protocols | `JNLP-connect JNLP2-connect` |
-| `Master.CSRF.DefaultCrumbIssuer.Enabled` | Enable the default CSRF Crumb issuer | `true` |
-| `Master.CSRF.DefaultCrumbIssuer.ProxyCompatability` | Enable proxy compatibility | `true` |
-| `Master.CLI` | Enable CLI over remoting | `false` |
-| `Master.LoadBalancerSourceRanges` | Allowed inbound IP addresses | `0.0.0.0/0` |
-| `Master.LoadBalancerIP` | Optional fixed external IP | Not set |
-| `Master.JMXPort` | Open a port, for JMX stats | Not set |
-| `Master.ExtraPorts` | Open extra ports, for other uses | Not set |
-| `Master.CustomConfigMap` | Use a custom ConfigMap | `false` |
-| `Master.AdditionalConfig` | Add additional config files | `{}` |
-| `Master.OverwriteConfig` | Replace config w/ ConfigMap on boot | `false` |
-| `Master.Ingress.Annotations` | Ingress annotations | `{}` |
-| `Master.Ingress.Path` | Ingress path | Not set |
-| `Master.Ingress.TLS` | Ingress TLS configuration | `[]` |
-| `Master.InitScripts` | List of Jenkins init scripts | Not set |
-| `Master.CredentialsXmlSecret` | Kubernetes secret that contains a 'credentials.xml' file | Not set |
-| `Master.SecretsFilesSecret` | Kubernetes secret that contains 'secrets' files | Not set |
-| `Master.Jobs` | Jenkins XML job configs | Not set |
-| `Master.InstallPlugins` | List of Jenkins plugins to install | `kubernetes:1.14.0 workflow-aggregator:2.6 credentials-binding:1.17 git:3.9.1 workflow-job:2.31` |
-| `Master.EnableRawHtmlMarkupFormatter` | Enable HTML parsing using (see below) | Not set |
-| `Master.ScriptApproval` | List of groovy functions to approve | Not set |
-| `Master.NodeSelector` | Node labels for pod assignment | `{}` |
-| `Master.Affinity` | Affinity settings | `{}` |
-| `Master.Tolerations` | Toleration labels for pod assignment | `{}` |
-| `Master.PodAnnotations` | Annotations for master pod | `{}` |
-| `NetworkPolicy.Enabled` | Enable creation of NetworkPolicy resources. | `false` |
-| `NetworkPolicy.ApiVersion` | NetworkPolicy ApiVersion | `networking.k8s.io/v1` |
-| `rbac.install` | Create service account and ClusterRoleBinding for Kubernetes plugin | `false` |
-| `rbac.roleRef` | Cluster role name to bind to | `cluster-admin` |
-| `rbac.roleKind` | Role kind (`Role` or `ClusterRole`)| `ClusterRole`
-| `rbac.roleBindingKind` | Role binding kind (`RoleBinding` or `ClusterRoleBinding`)| `ClusterRoleBinding` |
-
-Some third-party systems, e.g. GitHub, use HTML-formatted data in their payload sent to a Jenkins webhooks, e.g. URL of a pull-request being built. To display such data as processed HTML instead of raw text set `Master.EnableRawHtmlMarkupFormatter` to true. This option requires installation of OWASP Markup Formatter Plugin (antisamy-markup-formatter). The plugin is **not** installed by default, please update `Master.InstallPlugins`.
+
+| Parameter | Description | Default |
+| --------------------------------- | ------------------------------------ | ----------------------------------------- |
+| `checkDeprecation` | Checks for deprecated values used | `true` |
+| `nameOverride` | Override the resource name prefix | `jenkins` |
+| `fullnameOverride` | Override the full resource names | `jenkins-{release-name}` (or `jenkins` if release-name is `jenkins`) |
+| `master.componentName` | Jenkins master name | `jenkins-master` |
+| `master.image` | Master image name | `jenkins/jenkins` |
+| `master.imageTag` | Master image tag | `lts` |
+| `master.imagePullPolicy` | Master image pull policy | `Always` |
+| `master.imagePullSecret` | Master image pull secret | Not set |
+| `master.numExecutors` | Set Number of executors | 0 |
+| `master.useSecurity` | Use basic security | `true` |
+| `master.securityRealm` | Custom Security Realm | Not set |
+| `master.authorizationStrategy` | Jenkins XML job config for AuthorizationStrategy | Not set |
+| `master.deploymentLabels` | Custom Deployment labels | Not set |
+| `master.serviceLabels` | Custom Service labels | Not set |
+| `master.podLabels` | Custom Pod labels | Not set |
+| `master.adminUser` | Admin username (and password) created as a secret if useSecurity is true | `admin` |
+| `master.adminPassword` | Admin password (and user) created as a secret if useSecurity is true | Random value |
+| `master.jenkinsAdminEmail` | Email address for the administrator of the Jenkins instance | Not set |
+| `master.resources` | Resources allocation (Requests and Limits) | `{requests: {cpu: 50m, memory: 256Mi}, limits: {cpu: 2000m, memory: 4096Mi}}`|
+| `master.initContainerEnv` | Environment variables for Init Container | Not set |
+| `master.containerEnv` | Environment variables for Jenkins Container | Not set |
+| `master.usePodSecurityContext` | Enable pod security context (must be `true` if `runAsUser` or `fsGroup` are set) | `true` |
+| `master.runAsUser` | uid that jenkins runs with | `0` |
+| `master.fsGroup` | uid that will be used for persistent volume | `0` |
+| `master.hostAliases` | Aliases for IPs in `/etc/hosts` | `[]` |
+| `master.serviceAnnotations` | Service annotations | `{}` |
+| `master.serviceType` | k8s service type | `LoadBalancer` |
+| `master.servicePort` | k8s service port | `8080` |
+| `master.targetPort` | k8s target port | `8080` |
+| `master.nodePort` | k8s node port | Not set |
+| `master.healthProbes` | Enable k8s liveness and readiness probes | `true` |
+| `master.healthProbesLivenessTimeout` | Set the timeout for the liveness probe | `5` |
+| `master.healthProbesReadinessTimeout` | Set the timeout for the readiness probe | `5` |
+| `master.healthProbeLivenessPeriodSeconds` | Set how often (in seconds) to perform the liveness probe | `10` |
+| `master.healthProbeReadinessPeriodSeconds` | Set how often (in seconds) to perform the readiness probe | `10` |
+| `master.healthProbeLivenessFailureThreshold` | Set the failure threshold for the liveness probe | `5` |
+| `master.healthProbeReadinessFailureThreshold` | Set the failure threshold for the readiness probe | `3` |
+| `master.healthProbeLivenessInitialDelay` | Set the initial delay for the liveness probe | `90` |
+| `master.healthProbeReadinessInitialDelay` | Set the initial delay for the readiness probe | `60` |
+| `master.slaveListenerPort` | Listening port for agents | `50000` |
+| `master.slaveHostPort` | Host port to listen for agents | Not set |
+| `master.slaveKubernetesNamespace` | Namespace in which the Kubernetes agents should be launched | Not set |
+| `master.disabledAgentProtocols` | Disabled agent protocols | `JNLP-connect JNLP2-connect` |
+| `master.csrf.defaultCrumbIssuer.enabled` | Enable the default CSRF Crumb issuer | `true` |
+| `master.csrf.defaultCrumbIssuer.proxyCompatability` | Enable proxy compatibility | `true` |
+| `master.cli` | Enable CLI over remoting | `false` |
+| `master.loadBalancerSourceRanges` | Allowed inbound IP addresses | `0.0.0.0/0` |
+| `master.loadBalancerIP` | Optional fixed external IP | Not set |
+| `master.jmxPort` | Open a port, for JMX stats | Not set |
+| `master.extraPorts` | Open extra ports, for other uses | Not set |
+| `master.overwriteConfig` | Replace init scripts and config w/ ConfigMap on boot | `false` |
+| `master.ingress.enabled` | Enables ingress | `false` |
+| `master.ingress.apiVersion` | Ingress API version | `extensions/v1beta1` |
+| `master.ingress.hostName` | Ingress host name | Not set |
+| `master.ingress.annotations` | Ingress annotations | `{}` |
+| `master.ingress.labels` | Ingress labels | `{}` |
+| `master.ingress.path` | Ingress path | Not set |
+| `master.ingress.tls` | Ingress TLS configuration | `[]` |
+| `master.route.enabled` | Enables openshift route | `false` |
+| `master.route.annotations` | Route annotations | `{}` |
+| `master.route.labels` | Route labels | `{}` |
+| `master.route.path` | Route path | Not set |
+| `master.jenkinsUrlProtocol`      | Set protocol for JenkinsLocationConfiguration.xml | Set to `https` if `master.ingress.tls`, `http` otherwise |
+| `master.JCasC.enabled`           | Whether Jenkins Configuration as Code is enabled or not | `false` |
+| `master.JCasC.configScripts` | List of Jenkins Config as Code scripts | False |
+| `master.sidecars.configAutoReload` | Jenkins Config as Code auto-reload settings | |
+| `master.sidecars.configAutoReload.enabled` | Jenkins Config as Code auto-reload settings (Attention: rbac needs to be enabled otherwise the sidecar can't read the config map) | `false` |
+| `master.sidecars.configAutoReload.image` | Image which triggers the reload | `shadwell/k8s-sidecar:0.0.2` |
+| `master.sidecars.others` | Configures additional sidecar container(s) for Jenkins master | `{}` |
+| `master.initScripts` | List of Jenkins init scripts | Not set |
+| `master.credentialsXmlSecret` | Kubernetes secret that contains a 'credentials.xml' file | Not set |
+| `master.secretsFilesSecret` | Kubernetes secret that contains 'secrets' files | Not set |
+| `master.jobs` | Jenkins XML job configs | Not set |
+| `master.overwriteJobs` | Replace jobs w/ ConfigMap on boot | `false` |
+| `master.installPlugins` | List of Jenkins plugins to install | `kubernetes:1.14.0 workflow-aggregator:2.6 credentials-binding:1.17 git:3.9.1 workflow-job:2.31` |
+| `master.overwritePlugins` | Overwrite installed plugins on start.| `false` |
+| `master.enableRawHtmlMarkupFormatter` | Enable HTML parsing using OWASP Markup Formatter Plugin (see below) | false |
+| `master.scriptApproval` | List of groovy functions to approve | Not set |
+| `master.nodeSelector` | Node labels for pod assignment | `{}` |
+| `master.affinity` | Affinity settings | `{}` |
+| `master.tolerations` | Toleration labels for pod assignment | `[]` |
+| `master.podAnnotations` | Annotations for master pod | `{}` |
+| `master.customConfigMap` | Deprecated: Use a custom ConfigMap | `false` |
+| `master.additionalConfig` | Deprecated: Add additional config files | `{}` |
+| `master.jenkinsUriPrefix` | Root URI Jenkins will be served on | Not set |
+| `master.customInitContainers` | Custom init-container specification in raw-yaml format | Not set |
+| `master.lifecycle` | Lifecycle specification for master-container | Not set |
+| `master.prometheus.enabled` | Enables Prometheus ServiceMonitor | `false` |
+| `master.prometheus.serviceMonitorAdditionalLabels` | Additional labels to add to the ServiceMonitor object | `{}` |
+| `master.prometheus.scrapeInterval` | How often Prometheus should scrape metrics | `60s` |
+| `master.prometheus.scrapeEndpoint` | The endpoint Prometheus should get metrics from | `/prometheus` |
+| `master.prometheus.alertingrules` | Array of Prometheus alerting rules | `[]` |
+| `master.prometheus.alertingRulesAdditionalLabels` | Additional labels to add to the PrometheusRule object | `{}` |
+| `master.priorityClassName` | The name of a `priorityClass` to apply to the master pod | Not set |
+| `networkPolicy.enabled` | Enable creation of NetworkPolicy resources | `false` |
+| `networkPolicy.apiVersion` | NetworkPolicy ApiVersion | `networking.k8s.io/v1` |
+| `rbac.create` | Whether RBAC resources are created | `true` |
+| `serviceAccount.name` | Name of the ServiceAccount to be used by access-controlled resources | autogenerated |
+| `serviceAccount.create` | Configures if a ServiceAccount with this name should be created | `true` |
+| `serviceAccount.annotations` | Configures annotation for the ServiceAccount | `{}` |
+| `serviceAccountAgent.name` | Name of the agent ServiceAccount to be used by access-controlled resources | autogenerated |
+| `serviceAccountAgent.create` | Configures if an agent ServiceAccount with this name should be created | `false` |
+| `serviceAccountAgent.annotations` | Configures annotation for the agent ServiceAccount | `{}` |
+
+
+Some third-party systems, e.g. GitHub, use HTML-formatted data in the payloads they send to Jenkins webhooks, e.g. the URL of a pull request being built. To display such data as rendered HTML instead of raw text, set `master.enableRawHtmlMarkupFormatter` to `true`. This option requires the OWASP Markup Formatter Plugin (antisamy-markup-formatter), which is **not** installed by default; please update `master.installPlugins`.
+
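A minimal sketch of enabling this in `values.yaml`, assuming the chart's default plugin list; the `antisamy-markup-formatter` version shown is illustrative:

```yaml
master:
  enableRawHtmlMarkupFormatter: true
  installPlugins:
    - kubernetes:1.14.0
    - workflow-aggregator:2.6
    - credentials-binding:1.17
    - git:3.9.1
    - workflow-job:2.31
    - antisamy-markup-formatter:1.5   # required for HTML rendering, not installed by default
```
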
### Jenkins Agent
| Parameter | Description | Default |
| -------------------------- | ----------------------------------------------- | ---------------------- |
-| `Agent.AlwaysPullImage` | Always pull agent container image before build | `false` |
-| `Agent.CustomJenkinsLabels`| Append Jenkins labels to the agent | `{}` |
-| `Agent.Enabled` | Enable Kubernetes plugin jnlp-agent podTemplate | `true` |
-| `Agent.Image` | Agent image name | `jenkinsci/jnlp-slave` |
-| `Agent.ImagePullSecret` | Agent image pull secret | Not set |
-| `Agent.ImageTag` | Agent image tag | `3.27-1` |
-| `Agent.Privileged` | Agent privileged container | `false` |
-| `Agent.resources` | Resources allocation (Requests and Limits) | `{requests: {cpu: 200m, memory: 256Mi}, limits: {cpu: 200m, memory: 256Mi}}`|
-| `Agent.volumes` | Additional volumes | `nil` |
+| `agent.alwaysPullImage` | Always pull agent container image before build | `false` |
+| `agent.customJenkinsLabels`| Append Jenkins labels to the agent | `{}` |
+| `agent.enabled` | Enable Kubernetes plugin jnlp-agent podTemplate | `true` |
+| `agent.image` | Agent image name | `jenkins/jnlp-slave` |
+| `agent.imagePullSecret` | Agent image pull secret | Not set |
+| `agent.imageTag` | Agent image tag | `3.27-1` |
+| `agent.privileged` | Agent privileged container | `false` |
+| `agent.resources` | Resources allocation (Requests and Limits) | `{requests: {cpu: 200m, memory: 256Mi}, limits: {cpu: 200m, memory: 256Mi}}`|
+| `agent.volumes` | Additional volumes | `nil` |
+| `agent.envVars` | Environment variables for the slave Pod | Not set |
+| `agent.command` | Executed command when side container starts | Not set |
+| `agent.args` | Arguments passed to executed command | Not set |
+| `agent.sideContainerName` | Side container name in agent | `jnlp` |
+| `agent.TTYEnabled` | Allocate pseudo-TTY to the side container | `false` |
+| `agent.containerCap` | Maximum number of agents | `10` |
+| `agent.podName` | Slave Pod base name | Not set |
+| `agent.idleMinutes` | Allows the Pod to remain active for reuse | `0` |
+| `agent.yamlTemplate` | The raw yaml of a Pod API Object to merge into the agent spec | Not set |
+
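For instance, several of the agent parameters above can be combined in `values.yaml`; the values shown are illustrative:

```yaml
agent:
  enabled: true
  idleMinutes: 10     # keep the agent pod around for reuse after a build
  containerCap: 20    # cap concurrent agent pods at 20
  envVars:            # extra environment variables for the agent pod
    - name: MY_BUILD_FLAG
      value: "true"
```
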
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`.
@@ -124,7 +204,7 @@ $ helm install --name my-release -f values.yaml stable/jenkins
Your Jenkins Agents will run as pods, and it's possible to inject volumes where needed:
```yaml
-Agent:
+agent:
volumes:
- type: Secret
secretName: jenkins-mysecrets
@@ -146,14 +226,14 @@ the DefaultDeny namespace annotation. Note: this will enforce policy for _all_ p
Install helm chart with network policy enabled:
- $ helm install stable/jenkins --set NetworkPolicy.Enabled=true
+ $ helm install stable/jenkins --set networkPolicy.enabled=true
## Adding customized securityRealm
-`Master.SecurityRealm` in values can be used to support custom security realm instead of default `LegacySecurityRealm`. For example, you can add a security realm to authenticate via keycloak.
+`master.securityRealm` in values can be used to support custom security realm instead of default `LegacySecurityRealm`. For example, you can add a security realm to authenticate via keycloak.
```yaml
-SecurityRealm: |-
+securityRealm: |-
testIdtestsecret
@@ -166,10 +246,10 @@ SecurityRealm: |-
## Adding additional configs
-`Master.AdditionalConfig` can be used to add additional config files in `config.yaml`. For example, it can be used to add additional config files for keycloak authentication.
+`master.additionalConfig` can be used to add additional config files in `config.yaml`. For example, it can be used to add additional config files for keycloak authentication.
```yaml
-AdditionalConfig:
+additionalConfig:
testConfig.txt: |-
- name: testName
clientKey: testKey
@@ -178,7 +258,7 @@ AdditionalConfig:
## Adding customized labels
-`Master.ServiceLabels` can be used to add custom labels in `jenkins-master-svc.yaml`. For example,
+`master.serviceLabels` can be used to add custom labels in `jenkins-master-svc.yaml`. For example,
```yaml
serviceLabels:
@@ -191,19 +271,20 @@ The Jenkins image stores persistence under `/var/jenkins_home` path of the conta
Claim is used to keep the data across deployments, by default. This is known to work in GCE, AWS, and minikube. Alternatively,
a previously configured Persistent Volume Claim can be used.
-It is possible to mount several volumes using `Persistence.volumes` and `Persistence.mounts` parameters.
+It is possible to mount several volumes using `persistence.volumes` and `persistence.mounts` parameters.
### Persistence Values
| Parameter | Description | Default |
| --------------------------- | ------------------------------- | --------------- |
-| `Persistence.Enabled` | Enable the use of a Jenkins PVC | `true` |
-| `Persistence.ExistingClaim` | Provide the name of a PVC | `nil` |
-| `Persistence.AccessMode` | The PVC access mode | `ReadWriteOnce` |
-| `Persistence.Size` | The size of the PVC | `8Gi` |
-| `Persistence.SubPath` | SubPath for jenkins-home mount | `nil` |
-| `Persistence.volumes` | Additional volumes | `nil` |
-| `Persistence.mounts` | Additional mounts | `nil` |
+| `persistence.enabled` | Enable the use of a Jenkins PVC | `true` |
+| `persistence.existingClaim` | Provide the name of a PVC | `nil` |
+| `persistence.annotations` | Annotations for the PVC | `{}` |
+| `persistence.accessMode` | The PVC access mode | `ReadWriteOnce` |
+| `persistence.size` | The size of the PVC | `8Gi` |
+| `persistence.subPath` | SubPath for jenkins-home mount | `nil` |
+| `persistence.volumes` | Additional volumes | `nil` |
+| `persistence.mounts` | Additional mounts | `nil` |
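
For example, additional volumes and their mounts can be declared together; the names and paths below are illustrative:

```yaml
persistence:
  volumes:
    - name: extra-config
      emptyDir: {}        # any Kubernetes volume source works here
  mounts:
    - name: extra-config
      mountPath: /var/extra-config
      readOnly: true
```
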
#### Existing PersistentVolumeClaim
@@ -212,36 +293,62 @@ It is possible to mount several volumes using `Persistence.volumes` and `Persist
3. Install the chart
```bash
-$ helm install --name my-release --set Persistence.ExistingClaim=PVC_NAME stable/jenkins
+$ helm install --name my-release --set persistence.existingClaim=PVC_NAME stable/jenkins
```
-## Custom ConfigMap
-
-When creating a new parent chart with this chart as a dependency, the `CustomConfigMap` parameter can be used to override the default config.xml provided.
-It also allows for providing additional xml configuration files that will be copied into `/var/jenkins_home`. In the parent chart's values.yaml,
-set the `jenkins.Master.CustomConfigMap` value to true like so
+## Configuration as Code
+Jenkins Configuration as Code is now a standard component in the Jenkins project. Add a key under `configScripts` for each configuration area, where each area corresponds to a plugin or section of the UI. The keys (prior to the `|` character) are just labels and can be any value; they are only used to give the section a meaningful name. The only restriction is that they must conform to the RFC 1123 definition of a DNS label, so they may only contain lowercase letters, numbers, and hyphens. Each key becomes the name of a configuration yaml file on the master in `/var/jenkins_home/casc_configs` (by default) and is processed by the Configuration as Code Plugin during Jenkins startup. The lines after each `|` become the content of that configuration yaml file. The first line after this is a JCasC root element, e.g. `jenkins`, `credentials`, etc. The best reference is the Documentation link at `<jenkins_url>/configuration-as-code`. The example below creates ldap settings:
```yaml
-jenkins:
- Master:
- CustomConfigMap: true
+configScripts:
+ ldap-settings: |
+ jenkins:
+ securityRealm:
+ ldap:
+ configurations:
+ configurations:
+ - server: ldap.acme.com
+ rootDN: dc=acme,dc=uk
+ managerPasswordSecret: ${LDAP_PASSWORD}
+ - groupMembershipStrategy:
+ fromUserRecord:
+ attributeName: "memberOf"
```
-and provide the file `templates/config.tpl` in your parent chart for your use case. You can start by copying the contents of `config.yaml` from this chart into your parent charts `templates/config.tpl` as a basis for customization. Finally, you'll need to wrap the contents of `templates/config.tpl` like so:
+Further JCasC examples can be found [here.](https://github.com/jenkinsci/configuration-as-code-plugin/tree/master/demos)
+### Config as Code with and without auto-reload
+Config as Code changes (to `master.JCasC.configScripts`) can either force a new pod to be created and only be applied at the next startup, or can be auto-reloaded on-the-fly. If you choose `master.sidecars.configAutoReload.enabled: true`, a second, auxiliary container known as a "sidecar" will be installed into the Jenkins master pod. It watches for changes to `configScripts`, copies the content onto the Jenkins file-system, and issues a CLI command via SSH to reload the configuration. The admin user (or the account you specify in `master.adminUser`) will have a random SSH private key (RSA 4096) assigned unless you specify a key in `master.adminSshKey`; the key will be saved to a Kubernetes secret. You can monitor this sidecar's logs with `kubectl logs <pod_name> -c jenkins-sc-config -f`.
+
+If you want to enable auto-reload, you also need to configure RBAC, as the container that triggers the reload needs to watch the ConfigMaps:
```yaml
-{{- define "override_config_map" }}
-
-{{ end }}
+master:
+ JCasC:
+ enabled: true
+ sidecars:
+ configAutoReload:
+ enabled: true
+rbac:
+  create: true
+```
+
+### Auto-reload with non-Jenkins identities
+When enabling LDAP or another non-Jenkins identity source, the built-in admin account will no longer exist. Since the admin account is used by the sidecar to reload config, in order to use auto-reload you must change `master.adminUser` to a valid username on your LDAP (or other) server. If you use the matrix-auth plugin, this user must also be granted Overall/Administer rights in Jenkins. Failing to do so will cause the sidecar container to fail to authenticate via SSH and enter a restart loop. You can enable LDAP using the example above and add a Config as Code block for matrix security that includes:
+```yaml
+configScripts:
+ matrix-auth: |
+ jenkins:
+ authorizationStrategy:
+ projectMatrix:
+ grantedPermissions:
+ - "Overall/Administer:"
```
+You can instead grant this permission via the UI. When this is done, you can set `master.sidecars.configAutoReload.enabled: true` and upon the next Helm upgrade, auto-reload will be successfully enabled.
## RBAC
-If running upon a cluster with RBAC enabled you will need to do the following:
+RBAC is enabled by default. If you want to disable it, you will need to do the following:
-* `helm install stable/jenkins --set rbac.install=true`
-* Create a Jenkins credential of type Kubernetes service account with service account name provided in the `helm status` output.
-* Under configure Jenkins -- Update the credentials config in the cloud section to use the service account credential you created in the step above.
+* `helm install stable/jenkins --set rbac.create=false`
## Backup
@@ -277,10 +384,9 @@ Fortunately the default jenkins docker image `jenkins/jenkins` contains a user `
Simply use the following settings to run Jenkins as `jenkins` user with uid `1000`.
```yaml
-jenkins:
- Master:
- RunAsUser: 1000
- FsGroup: 1000
+master:
+ runAsUser: 1000
+ fsGroup: 1000
```
## Providing jobs xml
@@ -293,8 +399,8 @@ Below is an example of a `values.yaml` file and the directory structure created:
#### values.yaml
```yaml
-Master:
- Jobs:
+master:
+ jobs:
test-job: |-
@@ -342,27 +448,49 @@ _Jenkins is run with user `jenkins`, uid = 1000. If you bind mount a volume from
## Running behind a forward proxy
-The master pod uses an Init Container to install plugins etc. If you are behind a corporate proxy it may be useful to set `Master.InitContainerEnv` to add environment variables such as `http_proxy`, so that these can be downloaded.
+The master pod uses an Init Container to install plugins etc. If you are behind a corporate proxy it may be useful to set `master.initContainerEnv` to add environment variables such as `http_proxy`, so that these can be downloaded.
-Additionally, you may want to add env vars for the Jenkins container, and the JVM (`Master.JavaOpts`).
+Additionally, you may want to add env vars for the Jenkins container, and the JVM (`master.javaOpts`).
```yaml
-Master:
- InitContainerEnv:
+master:
+ initContainerEnv:
- name: http_proxy
value: "http://192.168.64.1:3128"
- name: https_proxy
value: "http://192.168.64.1:3128"
- name: no_proxy
value: ""
- ContainerEnv:
+ containerEnv:
- name: http_proxy
value: "http://192.168.64.1:3128"
- name: https_proxy
value: "http://192.168.64.1:3128"
- JavaOpts: >-
+ javaOpts: >-
-Dhttp.proxyHost=192.168.64.1
-Dhttp.proxyPort=3128
-Dhttps.proxyHost=192.168.64.1
-Dhttps.proxyPort=3128
```
+
+## Custom ConfigMap
+
+The following configuration method is deprecated and will be removed in an upcoming version of this chart.
+We recommend you use Jenkins Configuration as Code instead.
+When creating a new parent chart with this chart as a dependency, the `customConfigMap` parameter can be used to override the default config.xml provided.
+It also allows for providing additional xml configuration files that will be copied into `/var/jenkins_home`. In the parent chart's values.yaml,
+set the `jenkins.master.customConfigMap` value to `true` like so:
+
+```yaml
+jenkins:
+ master:
+ customConfigMap: true
+```
+
+and provide the file `templates/config.tpl` in your parent chart for your use case. You can start by copying the contents of `config.yaml` from this chart into your parent charts `templates/config.tpl` as a basis for customization. Finally, you'll need to wrap the contents of `templates/config.tpl` like so:
+
+```yaml
+{{- define "override_config_map" }}
+
+{{ end }}
+```
diff --git a/stable/jenkins/ci/casc-values.yaml b/stable/jenkins/ci/casc-values.yaml
new file mode 100644
index 000000000000..833e443ad282
--- /dev/null
+++ b/stable/jenkins/ci/casc-values.yaml
@@ -0,0 +1,6 @@
+master:
+ JCasC:
+ enabled: true
+ sidecars:
+ configAutoReload:
+ enabled: true
diff --git a/stable/jenkins/ci/default-values.yaml b/stable/jenkins/ci/default-values.yaml
new file mode 100644
index 000000000000..e12ad5455182
--- /dev/null
+++ b/stable/jenkins/ci/default-values.yaml
@@ -0,0 +1 @@
+# this file is empty to check if defaults within values.yaml work as expected
diff --git a/stable/jenkins/templates/NOTES.txt b/stable/jenkins/templates/NOTES.txt
index 2a304b4ef17d..428fe48be5d5 100644
--- a/stable/jenkins/templates/NOTES.txt
+++ b/stable/jenkins/templates/NOTES.txt
@@ -1,45 +1,46 @@
-1. Get your '{{ .Values.Master.AdminUser }}' user password by running:
+1. Get your '{{ .Values.master.adminUser }}' user password by running:
printf $(kubectl get secret --namespace {{ .Release.Namespace }} {{ template "jenkins.fullname" . }} -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo
-{{- if .Values.Master.HostName }}
+{{- if .Values.master.ingress.hostName }}
-2. Visit http://{{ .Values.Master.HostName }}
+2. Visit http://{{ .Values.master.ingress.hostName }}
{{- else }}
2. Get the Jenkins URL to visit by running these commands in the same shell:
-{{- if contains "NodePort" .Values.Master.ServiceType }}
+{{- if contains "NodePort" .Values.master.serviceType }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "jenkins.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT/login
-{{- else if contains "LoadBalancer" .Values.Master.ServiceType }}
+{{- else if contains "LoadBalancer" .Values.master.serviceType }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status of by running 'kubectl get svc --namespace {{ .Release.Namespace }} -w {{ template "jenkins.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "jenkins.fullname" . }} --template "{{ "{{ range (index .status.loadBalancer.ingress 0) }}{{ . }}{{ end }}" }}")
- echo http://$SERVICE_IP:{{ .Values.Master.ServicePort }}/login
+ echo http://$SERVICE_IP:{{ .Values.master.servicePort }}/login
-{{- else if contains "ClusterIP" .Values.Master.ServiceType }}
- export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "component={{ .Release.Name }}-{{ .Values.Master.Component }}" -o jsonpath="{.items[0].metadata.name}")
- echo http://127.0.0.1:{{ .Values.Master.ServicePort }}
- kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME {{ .Values.Master.ServicePort }}:{{ .Values.Master.ServicePort }}
+{{- else if contains "ClusterIP" .Values.master.serviceType }}
+ export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/component={{ .Values.master.componentName }}" -l "app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
+ echo http://127.0.0.1:{{ .Values.master.servicePort }}
+ kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME {{ .Values.master.servicePort }}:{{ .Values.master.servicePort }}
{{- end }}
{{- end }}
-3. Login with the password from step 1 and the username: {{ .Values.Master.AdminUser }}
+3. Login with the password from step 1 and the username: {{ .Values.master.adminUser }}
+{{ if .Values.master.JCasC.enabled }}
+4. Use Jenkins Configuration as Code by specifying configScripts in your values.yaml file, see documentation: http://{{ .Values.master.ingress.hostName }}/configuration-as-code and examples: https://github.com/jenkinsci/configuration-as-code-plugin/tree/master/demos
+{{- end }}
For more information on running Jenkins on Kubernetes, visit:
https://cloud.google.com/solutions/jenkins-on-container-engine
+{{- if .Values.master.JCasC.enabled }}
+For more information about Jenkins Configuration as Code, visit:
+https://jenkins.io/projects/jcasc/
+{{- end }}
-{{- if .Values.Persistence.Enabled }}
+{{- if .Values.persistence.enabled }}
{{- else }}
#################################################################################
###### WARNING: Persistence is disabled!!! You will lose your data when #####
###### the Jenkins pod is terminated. #####
#################################################################################
{{- end }}
-
-{{- if .Values.rbac.install }}
-Configure the Kubernetes plugin in Jenkins to use the following Service Account name {{ template "jenkins.fullname" . }} using the following steps:
- Create a Jenkins credential of type Kubernetes service account with service account name {{ template "jenkins.fullname" . }}
- Under configure Jenkins -- Update the credentials config in the cloud section to use the service account credential you created in the step above.
-{{- end }}
diff --git a/stable/jenkins/templates/_helpers.tpl b/stable/jenkins/templates/_helpers.tpl
index eac695f6b7ad..f5f82eec4bf1 100644
--- a/stable/jenkins/templates/_helpers.tpl
+++ b/stable/jenkins/templates/_helpers.tpl
@@ -25,10 +25,42 @@ If release name contains chart name it will be used as a full name.
{{- end -}}
{{- define "jenkins.kubernetes-version" -}}
- {{- range .Values.Master.InstallPlugins -}}
+ {{- range .Values.master.installPlugins -}}
{{ if hasPrefix "kubernetes:" . }}
{{- $split := splitList ":" . }}
{{- printf "%s" (index $split 1 ) -}}
{{- end -}}
{{- end -}}
{{- end -}}
+
+{{/*
+Generate private key for jenkins CLI
+*/}}
+{{- define "jenkins.gen-key" -}}
+{{- if not .Values.master.adminSshKey -}}
+{{- $key := genPrivateKey "rsa" -}}
+jenkins-admin-private-key: {{ $key | b64enc | quote }}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Create the name of the service account to use
+*/}}
+{{- define "jenkins.serviceAccountName" -}}
+{{- if .Values.serviceAccount.create -}}
+ {{ default (include "jenkins.fullname" .) .Values.serviceAccount.name }}
+{{- else -}}
+ {{ default "default" .Values.serviceAccount.name }}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Create the name of the service account for Jenkins agents to use
+*/}}
+{{- define "jenkins.serviceAccountAgentName" -}}
+{{- if .Values.serviceAccountAgent.create -}}
+ {{ default (printf "%s-%s" (include "jenkins.fullname" .) "agent") .Values.serviceAccountAgent.name }}
+{{- else -}}
+ {{ default "default" .Values.serviceAccountAgent.name }}
+{{- end -}}
+{{- end -}}
diff --git a/stable/jenkins/templates/config.yaml b/stable/jenkins/templates/config.yaml
index e67276f1ae1a..a85a813c9fa6 100644
--- a/stable/jenkins/templates/config.yaml
+++ b/stable/jenkins/templates/config.yaml
@@ -1,38 +1,34 @@
-{{- if not .Values.Master.CustomConfigMap }}
+{{- if not .Values.master.customConfigMap }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "jenkins.fullname" . }}
+ labels:
+ "app.kubernetes.io/name": '{{ template "jenkins.name" .}}'
+ "helm.sh/chart": "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ "app.kubernetes.io/managed-by": "{{ .Release.Service }}"
+ "app.kubernetes.io/instance": "{{ .Release.Name }}"
+ "app.kubernetes.io/component": "{{ .Values.master.componentName }}"
data:
config.xml: |-
- {{ .Values.Master.ImageTag }}
- {{ .Values.Master.NumExecutors }}
+ {{ .Values.master.imageTag }}
+ {{ .Values.master.numExecutors }}NORMAL
- {{ .Values.Master.UseSecurity }}
-{{- if not (empty .Values.Master.AuthorizationStrategy ) }}
-{{ .Values.Master.AuthorizationStrategy | indent 6 }}
-{{- else }}
-
- true
-
-{{- end }}
-{{- if .Values.Master.SecurityRealm }}
-{{ .Values.Master.SecurityRealm | indent 6 }}
-{{- else }}
-
-{{- end }}
+ {{ .Values.master.useSecurity }}
+{{ .Values.master.authorizationStrategy | indent 6 }}
+{{ .Values.master.securityRealm | indent 6 }}
false${JENKINS_HOME}/workspace/${ITEM_FULLNAME}${ITEM_ROOTDIR}/builds
-{{- if .Values.Master.EnableRawHtmlMarkupFormatter }}
-
- true
-
+{{- if .Values.master.enableRawHtmlMarkupFormatter }}
+
+ true
+
{{- else }}
{{- end }}
@@ -43,23 +39,24 @@ data:
kubernetes
-{{- if .Values.Agent.Enabled }}
+{{- if .Values.agent.enabled }}
- default
+ {{ .Values.agent.podName }}2147483647
- 0
-
+ {{ .Values.agent.idleMinutes }}
+
+ {{ include "jenkins.serviceAccountAgentName" . }}
{{- $local := dict "first" true }}
- {{- range $key, $value := .Values.Agent.NodeSelector }}
+ {{- range $key, $value := .Values.agent.nodeSelector }}
{{- if not $local.first }},{{- end }}
{{- $key }}={{ $value }}
{{- $_ := set $local "first" false }}
{{- end }}NORMAL
-{{- range $index, $volume := .Values.Agent.volumes }}
+{{- range $index, $volume := .Values.agent.volumes }}
{{- range $key, $value := $volume }}{{- if not (eq $key "type") }}
<{{ $key }}>{{ $value }}{{ $key }}>
@@ -69,68 +66,78 @@ data:
- jnlp
- {{ .Values.Agent.Image }}:{{ .Values.Agent.ImageTag }}
-{{- if .Values.Agent.Privileged }}
+ {{ .Values.agent.sideContainerName }}
+ {{ .Values.agent.image }}:{{ .Values.agent.imageTag }}
+{{- if .Values.agent.privileged }}
true
{{- else }}
false
{{- end }}
- {{ .Values.Agent.AlwaysPullImage }}
+ {{ .Values.agent.alwaysPullImage }}/home/jenkins
-
+ {{ .Values.agent.command }}
+{{- if .Values.agent.args }}
+ {{ .Values.agent.args }}
+{{- else }}
${computer.jnlpmac} ${computer.name}
- false
+{{- end }}
+ {{ .Values.agent.TTYEnabled }}
# Resources configuration is a little hacky. This was to prevent breaking
# changes, and should be cleaned up in the future once everybody has had
# enough time to migrate.
- {{.Values.Agent.Cpu | default .Values.Agent.resources.requests.cpu}}
- {{.Values.Agent.Memory | default .Values.Agent.resources.requests.memory}}
- {{.Values.Agent.Cpu | default .Values.Agent.resources.limits.cpu}}
- {{.Values.Agent.Memory | default .Values.Agent.resources.limits.memory}}
+ {{.Values.agent.resources.requests.cpu}}
+ {{.Values.agent.resources.requests.memory}}
+ {{.Values.agent.resources.limits.cpu}}
+ {{.Values.agent.resources.limits.memory}}JENKINS_URL
-{{- if .Values.Master.SlaveKubernetesNamespace }}
- http://{{ template "jenkins.fullname" . }}.{{.Release.Namespace}}:{{.Values.Master.ServicePort}}{{ default "" .Values.Master.JenkinsUriPrefix }}
-{{- else }}
- http://{{ template "jenkins.fullname" . }}:{{.Values.Master.ServicePort}}{{ default "" .Values.Master.JenkinsUriPrefix }}
-{{- end }}
+ http://{{ template "jenkins.fullname" . }}.{{.Release.Namespace}}.svc.cluster.local:{{.Values.master.servicePort}}{{ default "" .Values.master.jenkinsUriPrefix }}
-
+
+{{- range $index, $var := .Values.agent.envVars }}
+
+ {{ $var.name }}
+ {{ $var.value }}
+
+{{- end }}
+
-{{- if .Values.Agent.ImagePullSecret }}
+{{- if .Values.agent.imagePullSecretName }}
- {{ .Values.Agent.ImagePullSecret }}
+ {{ .Values.agent.imagePullSecretName }}
{{- else }}
{{- end }}
+{{- if .Values.agent.yamlTemplate }}
+ {{ tpl .Values.agent.yamlTemplate . | html | indent 4 | trim }}
+{{- end }}
{{- end -}}
https://kubernetes.defaultfalse
- {{ default .Release.Namespace .Values.Master.SlaveKubernetesNamespace }}
-{{- if .Values.Master.SlaveKubernetesNamespace }}
- http://{{ template "jenkins.fullname" . }}.{{.Release.Namespace}}:{{.Values.Master.ServicePort}}{{ default "" .Values.Master.JenkinsUriPrefix }}
- {{ template "jenkins.fullname" . }}-agent.{{.Release.Namespace}}:{{ .Values.Master.SlaveListenerPort }}
+ {{ default .Release.Namespace .Values.master.slaveKubernetesNamespace }}
+{{- if .Values.master.slaveKubernetesNamespace }}
+ http://{{ template "jenkins.fullname" . }}.{{.Release.Namespace}}:{{.Values.master.servicePort}}{{ default "" .Values.master.jenkinsUriPrefix }}
+ {{ template "jenkins.fullname" . }}-agent.{{.Release.Namespace}}:{{ .Values.master.slaveListenerPort }}
{{- else }}
- http://{{ template "jenkins.fullname" . }}:{{.Values.Master.ServicePort}}{{ default "" .Values.Master.JenkinsUriPrefix }}
- {{ template "jenkins.fullname" . }}-agent:{{ .Values.Master.SlaveListenerPort }}
+ http://{{ template "jenkins.fullname" . }}:{{.Values.master.servicePort}}{{ default "" .Values.master.jenkinsUriPrefix }}
+ {{ template "jenkins.fullname" . }}-agent:{{ .Values.master.slaveListenerPort }}
{{- end }}
- 10
+ {{ .Values.agent.containerCap }}500
-
+ 5
@@ -145,16 +152,16 @@ data:
All
- {{ .Values.Master.SlaveListenerPort }}
+ {{ .Values.master.slaveListenerPort }}
-{{- range .Values.Master.DisabledAgentProtocols }}
+{{- range .Values.master.disabledAgentProtocols }}
{{ . }}
{{- end }}
-{{- if .Values.Master.CSRF.DefaultCrumbIssuer.Enabled }}
+{{- if .Values.master.csrf.defaultCrumbIssuer.enabled }}
-{{- if .Values.Master.CSRF.DefaultCrumbIssuer.ProxyCompatability }}
+{{- if .Values.master.csrf.defaultCrumbIssuer.proxyCompatability }}
true
{{- end }}
@@ -163,13 +170,13 @@ data:
true
-{{- if .Values.Master.ScriptApproval }}
+{{- if .Values.master.scriptApproval }}
scriptapproval.xml: |-
-{{- range $key, $val := .Values.Master.ScriptApproval }}
+{{- range $key, $val := .Values.master.scriptApproval }}
{{ $val }}
{{- end }}
@@ -183,25 +190,25 @@ data:
jenkins.model.JenkinsLocationConfiguration.xml: |-
- {{ default "" .Values.Master.JenkinsAdminEmail }}
-{{- if .Values.Master.JenkinsUrl }}
- {{ .Values.Master.JenkinsUrl }}
-{{- else }}
-{{- if .Values.Master.HostName }}
-{{- if .Values.Master.Ingress.TLS }}
- https://{{ .Values.Master.HostName }}{{ default "" .Values.Master.JenkinsUriPrefix }}
-{{- else }}
- http://{{ .Values.Master.HostName }}{{ default "" .Values.Master.JenkinsUriPrefix }}
-{{- end }}
+ {{ default "" .Values.master.jenkinsAdminEmail }}
+{{- if .Values.master.jenkinsUrl }}
+ {{ .Values.master.jenkinsUrl }}
{{- else }}
- http://{{ template "jenkins.fullname" . }}:{{.Values.Master.ServicePort}}{{ default "" .Values.Master.JenkinsUriPrefix }}
-{{- end}}
+ {{- if .Values.master.ingress.hostName }}
+ {{- if .Values.master.ingress.tls }}
+ {{ default "https" .Values.master.jenkinsUrlProtocol }}://{{ .Values.master.ingress.hostName }}{{ default "" .Values.master.jenkinsUriPrefix }}
+ {{- else }}
+ {{ default "http" .Values.master.jenkinsUrlProtocol }}://{{ .Values.master.ingress.hostName }}{{ default "" .Values.master.jenkinsUriPrefix }}
+ {{- end }}
+ {{- else }}
+ {{ default "http" .Values.master.jenkinsUrlProtocol }}://{{ template "jenkins.fullname" . }}:{{.Values.master.servicePort}}{{ default "" .Values.master.jenkinsUriPrefix }}
+ {{- end}}
{{- end}}
jenkins.CLI.xml: |-
-{{- if .Values.Master.CLI }}
+{{- if .Values.master.cli }}
true
{{- else }}
false
@@ -210,21 +217,30 @@ data:
apply_config.sh: |-
mkdir -p /usr/share/jenkins/ref/secrets/;
echo "false" > /usr/share/jenkins/ref/secrets/slave-to-master-security-kill-switch;
-{{- if .Values.Master.OverwriteConfig }}
+{{- if .Values.master.overwriteConfig }}
cp /var/jenkins_config/config.xml /var/jenkins_home;
cp /var/jenkins_config/jenkins.CLI.xml /var/jenkins_home;
cp /var/jenkins_config/jenkins.model.JenkinsLocationConfiguration.xml /var/jenkins_home;
+ {{- if .Values.master.additionalConfig }}
+ {{- range $key, $val := .Values.master.additionalConfig }}
+ cp /var/jenkins_config/{{- $key }} /var/jenkins_home;
+ {{- end }}
+ {{- end }}
{{- else }}
yes n | cp -i /var/jenkins_config/config.xml /var/jenkins_home;
yes n | cp -i /var/jenkins_config/jenkins.CLI.xml /var/jenkins_home;
yes n | cp -i /var/jenkins_config/jenkins.model.JenkinsLocationConfiguration.xml /var/jenkins_home;
-{{- if .Values.Master.AdditionalConfig }}
-{{- range $key, $val := .Values.Master.AdditionalConfig }}
- cp /var/jenkins_config/{{- $key }} /var/jenkins_home;
+ {{- if .Values.master.additionalConfig }}
+ {{- range $key, $val := .Values.master.additionalConfig }}
+ yes n | cp -i /var/jenkins_config/{{- $key }} /var/jenkins_home;
+ {{- end }}
+ {{- end }}
{{- end }}
+{{- if .Values.master.overwritePlugins }}
+ # remove all plugins from shared volume
+ rm -rf /var/jenkins_home/plugins/*
{{- end }}
-{{- end }}
-{{- if .Values.Master.InstallPlugins }}
+{{- if .Values.master.installPlugins }}
# Install missing plugins
cp /var/jenkins_config/plugins.txt /var/jenkins_home;
rm -rf /usr/share/jenkins/ref/plugins/*.lock
@@ -232,38 +248,89 @@ data:
# Copy plugins to shared volume
yes n | cp -i /usr/share/jenkins/ref/plugins/* /var/jenkins_plugins/;
{{- end }}
-{{- if .Values.Master.ScriptApproval }}
+{{- if .Values.master.scriptApproval }}
yes n | cp -i /var/jenkins_config/scriptapproval.xml /var/jenkins_home/scriptApproval.xml;
{{- end }}
-{{- if .Values.Master.InitScripts }}
+{{- if and (.Values.master.JCasC.enabled) (.Values.master.sidecars.configAutoReload.enabled) }}
+ {{- if not .Values.master.initScripts }}
+ mkdir -p /var/jenkins_home/init.groovy.d/;
+ yes n | cp -i /var/jenkins_config/*.groovy /var/jenkins_home/init.groovy.d/;
+ {{- end }}
+{{- end }}
+{{- if .Values.master.initScripts }}
mkdir -p /var/jenkins_home/init.groovy.d/;
+ {{- if .Values.master.overwriteConfig }}
+ rm -f /var/jenkins_home/init.groovy.d/*.groovy
+ {{- end }}
yes n | cp -i /var/jenkins_config/*.groovy /var/jenkins_home/init.groovy.d/;
{{- end }}
-{{- if .Values.Master.CredentialsXmlSecret }}
+{{- if .Values.master.JCasC.enabled }}
+ {{- if .Values.master.sidecars.configAutoReload.enabled }}
+ bash -c 'ssh-keygen -y -f <(echo "${ADMIN_PRIVATE_KEY}") > /var/jenkins_home/key.pub'
+ {{- else }}
+ mkdir -p /var/jenkins_home/casc_configs;
+ rm -rf /var/jenkins_home/casc_configs/*
+ cp -v /var/jenkins_config/*.yaml /var/jenkins_home/casc_configs
+ {{- end }}
+{{- end }}
+{{- if .Values.master.credentialsXmlSecret }}
yes n | cp -i /var/jenkins_credentials/credentials.xml /var/jenkins_home;
{{- end }}
-{{- if .Values.Master.SecretsFilesSecret }}
+{{- if .Values.master.secretsFilesSecret }}
yes n | cp -i /var/jenkins_secrets/* /usr/share/jenkins/ref/secrets/;
{{- end }}
-{{- if .Values.Master.Jobs }}
+{{- if .Values.master.jobs }}
for job in $(ls /var/jenkins_jobs); do
mkdir -p /var/jenkins_home/jobs/$job
- yes n | cp -i /var/jenkins_jobs/$job /var/jenkins_home/jobs/$job/config.xml
+ yes {{ if not .Values.master.overwriteJobs }}n{{ end }} | cp -i /var/jenkins_jobs/$job /var/jenkins_home/jobs/$job/config.xml
done
{{- end }}
-{{- range $key, $val := .Values.Master.InitScripts }}
+{{- range $key, $val := .Values.master.initScripts }}
init{{ $key }}.groovy: |-
{{ $val | indent 4 }}
+{{- end }}
+{{- if .Values.master.JCasC.enabled }}
+ {{- if .Values.master.sidecars.configAutoReload.enabled }}
+ init-add-ssh-key-to-admin.groovy: |-
+ import jenkins.security.*
+ import hudson.model.User
+ import jenkins.security.ApiTokenProperty
+ import jenkins.model.Jenkins
+ User u = User.get("{{ .Values.master.adminUser | default "admin" }}")
+ ApiTokenProperty t = u.getProperty(ApiTokenProperty.class)
+ String sshKeyString = new File('/var/jenkins_home/key.pub').text
+ keys_param = new org.jenkinsci.main.modules.cli.auth.ssh.UserPropertyImpl(sshKeyString)
+ u.addProperty(keys_param)
+ def inst = Jenkins.getInstance()
+ def sshDesc = inst.getDescriptor("org.jenkinsci.main.modules.sshd.SSHD")
+ sshDesc.setPort({{ .Values.master.sidecars.configAutoReload.sshTcpPort | default 1044 }})
+ sshDesc.getActualPort()
+ sshDesc.save()
+ {{- else }}
+# Only add config to this script if we aren't auto-reloading; otherwise the pod will restart upon each config change:
+{{- range $key, $val := .Values.master.JCasC.configScripts }}
+ {{ $key }}.yaml: |-
+{{ tpl $val $ | indent 4 }}
+{{- end }}
+{{- end }}
{{- end }}
plugins.txt: |-
-{{- if .Values.Master.InstallPlugins }}
-{{- range $index, $val := .Values.Master.InstallPlugins }}
+{{- if .Values.master.installPlugins }}
+{{- range $index, $val := .Values.master.installPlugins }}
{{ $val | indent 4 }}
{{- end }}
+{{- if .Values.master.JCasC.enabled }}
+ {{- if not (contains "configuration-as-code" (quote .Values.master.installPlugins)) }}
+ configuration-as-code:{{ .Values.master.JCasC.pluginVersion }}
+ {{- end }}
+ {{- if not (contains "configuration-as-code-support" (quote .Values.master.installPlugins)) }}
+ configuration-as-code-support:{{ .Values.master.JCasC.supportPluginVersion }}
+ {{- end }}
+{{- end }}
{{- end }}
{{ else }}
{{ include "override_config_map" . }}
{{- end -}}
-{{- if .Values.Master.AdditionalConfig }}
-{{- toYaml .Values.Master.AdditionalConfig | indent 2 }}
+{{- if .Values.master.additionalConfig }}
+{{- toYaml .Values.master.additionalConfig | indent 2 }}
{{- end }}
diff --git a/stable/jenkins/templates/deprecation.yaml b/stable/jenkins/templates/deprecation.yaml
new file mode 100644
index 000000000000..0dd5b10d3a4a
--- /dev/null
+++ b/stable/jenkins/templates/deprecation.yaml
@@ -0,0 +1,356 @@
+{{- if .Values.checkDeprecation }}
+ {{- if .Values.Master }}
+
+  {{- if .Values.Master.Name }}
+    {{ fail "`Master.Name` no longer exists. It has been renamed to `master.componentName`" }}
+  {{- end }}
+
+  {{- if .Values.Master.Image }}
+    {{ fail "`Master.Image` no longer exists. It has been renamed to `master.image`" }}
+  {{- end }}
+
+  {{- if .Values.Master.ImageTag }}
+    {{ fail "`Master.ImageTag` no longer exists. It has been renamed to `master.imageTag`" }}
+  {{- end }}
+
+  {{- if .Values.Master.ImagePullPolicy }}
+    {{ fail "`Master.ImagePullPolicy` no longer exists. It has been renamed to `master.imagePullPolicy`" }}
+  {{- end }}
+
+  {{- if .Values.Master.ImagePullSecret }}
+    {{ fail "`Master.ImagePullSecret` no longer exists. It has been renamed to `master.imagePullSecretName`" }}
+  {{- end }}
+
+  {{- if .Values.Master.Component }}
+    {{ fail "`Master.Component` no longer exists. It has been renamed to `master.componentName`" }}
+  {{- end }}
+
+  {{- if .Values.Master.NumExecutors }}
+    {{ fail "`Master.NumExecutors` no longer exists. It has been renamed to `master.numExecutors`" }}
+  {{- end }}
+
+  {{- if .Values.Master.UseSecurity }}
+    {{ fail "`Master.UseSecurity` no longer exists. It has been renamed to `master.useSecurity`" }}
+  {{- end }}
+
+  {{- if .Values.Master.SecurityRealm }}
+    {{ fail "`Master.SecurityRealm` no longer exists. It has been renamed to `master.securityRealm`" }}
+  {{- end }}
+
+  {{- if .Values.Master.AuthorizationStrategy }}
+    {{ fail "`Master.AuthorizationStrategy` no longer exists. It has been renamed to `master.authorizationStrategy`" }}
+  {{- end }}
+
+  {{- if .Values.Master.DeploymentLabels }}
+    {{ fail "`Master.DeploymentLabels` no longer exists. It has been renamed to `master.deploymentLabels`" }}
+  {{- end }}
+
+  {{- if .Values.Master.ServiceLabels }}
+    {{ fail "`Master.ServiceLabels` no longer exists. It has been renamed to `master.serviceLabels`" }}
+  {{- end }}
+
+  {{- if .Values.Master.PodLabels }}
+    {{ fail "`Master.PodLabels` no longer exists. It has been renamed to `master.podLabels`" }}
+  {{- end }}
+
+  {{- if .Values.Master.AdminUser }}
+    {{ fail "`Master.AdminUser` no longer exists. It has been renamed to `master.adminUser`" }}
+  {{- end }}
+
+  {{- if .Values.Master.AdminPassword }}
+    {{ fail "`Master.AdminPassword` no longer exists. It has been renamed to `master.adminPassword`" }}
+  {{- end }}
+
+  {{- if .Values.Master.AdminSshKey }}
+    {{ fail "`Master.AdminSshKey` no longer exists. It has been renamed to `master.adminSshKey`" }}
+  {{- end }}
+
+  {{- if .Values.Master.JenkinsAdminEmail }}
+    {{ fail "`Master.JenkinsAdminEmail` no longer exists. It has been renamed to `master.jenkinsAdminEmail`" }}
+  {{- end }}
+
+  {{- if .Values.Master.InitContainerEnv }}
+    {{ fail "`Master.InitContainerEnv` no longer exists. It has been renamed to `master.initContainerEnv`" }}
+  {{- end }}
+
+  {{- if .Values.Master.ContainerEnv }}
+    {{ fail "`Master.ContainerEnv` no longer exists. It has been renamed to `master.containerEnv`" }}
+  {{- end }}
+
+  {{- if .Values.Master.UsePodSecurityContext }}
+    {{ fail "`Master.UsePodSecurityContext` no longer exists. It has been renamed to `master.usePodSecurityContext`" }}
+  {{- end }}
+
+  {{- if .Values.Master.RunAsUser }}
+    {{ fail "`Master.RunAsUser` no longer exists. It has been renamed to `master.runAsUser`" }}
+  {{- end }}
+
+  {{- if .Values.Master.FsGroup }}
+    {{ fail "`Master.FsGroup` no longer exists. It has been renamed to `master.fsGroup`" }}
+  {{- end }}
+
+  {{- if .Values.Master.HostAliases }}
+    {{ fail "`Master.HostAliases` no longer exists. It has been renamed to `master.hostAliases`" }}
+  {{- end }}
+
+  {{- if .Values.Master.ServiceAnnotations }}
+    {{ fail "`Master.ServiceAnnotations` no longer exists. It has been renamed to `master.serviceAnnotations`" }}
+  {{- end }}
+
+  {{- if .Values.Master.ServiceType }}
+    {{ fail "`Master.ServiceType` no longer exists. It has been renamed to `master.serviceType`" }}
+  {{- end }}
+
+  {{- if .Values.Master.ServicePort }}
+    {{ fail "`Master.ServicePort` no longer exists. It has been renamed to `master.servicePort`" }}
+  {{- end }}
+
+  {{- if .Values.Master.NodePort }}
+    {{ fail "`Master.NodePort` no longer exists. It has been renamed to `master.nodePort`" }}
+  {{- end }}
+
+  {{- if .Values.Master.HealthProbes }}
+    {{ fail "`Master.HealthProbes` no longer exists. It has been renamed to `master.healthProbes`" }}
+  {{- end }}
+
+  {{- if .Values.Master.HealthProbesLivenessTimeout }}
+    {{ fail "`Master.HealthProbesLivenessTimeout` no longer exists. It has been renamed to `master.healthProbesLivenessTimeout`" }}
+  {{- end }}
+
+  {{- if .Values.Master.HealthProbesReadinessTimeout }}
+    {{ fail "`Master.HealthProbesReadinessTimeout` no longer exists. It has been renamed to `master.healthProbesReadinessTimeout`" }}
+  {{- end }}
+
+  {{- if .Values.Master.HealthProbeReadinessPeriodSeconds }}
+    {{ fail "`Master.HealthProbeReadinessPeriodSeconds` no longer exists. It has been renamed to `master.healthProbeReadinessPeriodSeconds`" }}
+  {{- end }}
+
+  {{- if .Values.Master.HealthProbeLivenessFailureThreshold }}
+    {{ fail "`Master.HealthProbeLivenessFailureThreshold` no longer exists. It has been renamed to `master.healthProbeLivenessFailureThreshold`" }}
+  {{- end }}
+
+  {{- if .Values.Master.SlaveListenerPort }}
+    {{ fail "`Master.SlaveListenerPort` no longer exists. It has been renamed to `master.slaveListenerPort`" }}
+  {{- end }}
+
+  {{- if .Values.Master.SlaveHostPort }}
+    {{ fail "`Master.SlaveHostPort` no longer exists. It has been renamed to `master.slaveHostPort`" }}
+  {{- end }}
+
+  {{- if .Values.Master.DisabledAgentProtocols }}
+    {{ fail "`Master.DisabledAgentProtocols` no longer exists. It has been renamed to `master.disabledAgentProtocols`" }}
+  {{- end }}
+
+  {{- if .Values.Master.CSRF }}
+    {{- if .Values.Master.CSRF.DefaultCrumbIssuer.Enabled }}
+      {{ fail "`Master.CSRF.DefaultCrumbIssuer.Enabled` no longer exists. It has been renamed to `master.csrf.defaultCrumbIssuer.enabled`" }}
+    {{- end }}
+
+    {{- if .Values.Master.CSRF.DefaultCrumbIssuer.ProxyCompatability }}
+      {{ fail "`Master.CSRF.DefaultCrumbIssuer.ProxyCompatability` no longer exists. It has been renamed to `master.csrf.defaultCrumbIssuer.proxyCompatability`" }}
+    {{- end }}
+  {{- end }}
+
+  {{- if .Values.Master.CLI }}
+    {{ fail "`Master.CLI` no longer exists. It has been renamed to `master.cli`" }}
+  {{- end }}
+
+  {{- if .Values.Master.LoadBalancerSourceRanges }}
+    {{ fail "`Master.LoadBalancerSourceRanges` no longer exists. It has been renamed to `master.loadBalancerSourceRanges`" }}
+  {{- end }}
+
+  {{- if .Values.Master.LoadBalancerIP }}
+    {{ fail "`Master.LoadBalancerIP` no longer exists. It has been renamed to `master.loadBalancerIP`" }}
+  {{- end }}
+
+  {{- if .Values.Master.JMXPort }}
+    {{ fail "`Master.JMXPort` no longer exists. It has been renamed to `master.jmxPort`" }}
+  {{- end }}
+
+  {{- if .Values.Master.ExtraPorts }}
+    {{ fail "`Master.ExtraPorts` no longer exists. It has been renamed to `master.extraPorts`" }}
+  {{- end }}
+
+  {{- if .Values.Master.OverwriteConfig }}
+    {{ fail "`Master.OverwriteConfig` no longer exists. It has been renamed to `master.overwriteConfig`" }}
+  {{- end }}
+
+  {{- if .Values.Master.JCasC }}
+    {{- if .Values.Master.JCasC.ConfigScripts }}
+      {{ fail "`Master.JCasC.ConfigScripts` no longer exists. It has been renamed to `master.JCasC.configScripts`" }}
+    {{- end }}
+  {{- end }}
+
+  {{- if .Values.Master.Sidecars }}
+    {{- if .Values.Master.Sidecars.configAutoReload }}
+      {{ fail "`Master.Sidecars.configAutoReload` no longer exists. It has been renamed to `master.sidecars.configAutoReload`" }}
+    {{- end }}
+  {{- end }}
+
+  {{- if .Values.Master.InitScripts }}
+    {{ fail "`Master.InitScripts` no longer exists. It has been renamed to `master.initScripts`" }}
+  {{- end }}
+
+  {{- if .Values.Master.CredentialsXmlSecret }}
+    {{ fail "`Master.CredentialsXmlSecret` no longer exists. It has been renamed to `master.credentialsXmlSecret`" }}
+  {{- end }}
+
+  {{- if .Values.Master.SecretsFilesSecret }}
+    {{ fail "`Master.SecretsFilesSecret` no longer exists. It has been renamed to `master.secretsFilesSecret`" }}
+  {{- end }}
+
+  {{- if .Values.Master.Jobs }}
+    {{ fail "`Master.Jobs` no longer exists. It has been renamed to `master.jobs`" }}
+  {{- end }}
+
+  {{- if .Values.Master.InstallPlugins }}
+    {{ fail "`Master.InstallPlugins` no longer exists. It has been renamed to `master.installPlugins`" }}
+  {{- end }}
+
+  {{- if .Values.Master.OverwritePlugins }}
+    {{ fail "`Master.OverwritePlugins` no longer exists. It has been renamed to `master.overwritePlugins`" }}
+  {{- end }}
+
+  {{- if .Values.Master.EnableRawHtmlMarkupFormatter }}
+    {{ fail "`Master.EnableRawHtmlMarkupFormatter` no longer exists. It has been renamed to `master.enableRawHtmlMarkupFormatter`" }}
+  {{- end }}
+
+  {{- if .Values.Master.ScriptApproval }}
+    {{ fail "`Master.ScriptApproval` no longer exists. It has been renamed to `master.scriptApproval`" }}
+  {{- end }}
+
+  {{- if .Values.Master.NodeSelector }}
+    {{ fail "`Master.NodeSelector` no longer exists. It has been renamed to `master.nodeSelector`" }}
+  {{- end }}
+
+  {{- if .Values.Master.Affinity }}
+    {{ fail "`Master.Affinity` no longer exists. It has been renamed to `master.affinity`" }}
+  {{- end }}
+
+  {{- if .Values.Master.PodAnnotations }}
+    {{ fail "`Master.PodAnnotations` no longer exists. It has been renamed to `master.podAnnotations`" }}
+  {{- end }}
+
+  {{- if .Values.Master.CustomConfigMap }}
+    {{ fail "`Master.CustomConfigMap` no longer exists. It has been renamed to `master.customConfigMap`" }}
+  {{- end }}
+
+  {{- if .Values.Master.JenkinsUriPrefix }}
+    {{ fail "`Master.JenkinsUriPrefix` no longer exists. It has been renamed to `master.jenkinsUriPrefix`" }}
+  {{- end }}
+
+  {{- if .Values.Master.PriorityClassName }}
+    {{ fail "`Master.PriorityClassName` no longer exists. It has been renamed to `master.priorityClassName`" }}
+  {{- end }}
+
+    {{ fail "Master.* values have been renamed; please check the documentation" }}
+ {{- end }}
+
+
+  {{- if .Values.NetworkPolicy }}
+
+  {{- if .Values.NetworkPolicy.Enabled }}
+    {{ fail "`NetworkPolicy.Enabled` no longer exists. It has been renamed to `networkPolicy.enabled`" }}
+  {{- end }}
+
+  {{- if .Values.NetworkPolicy.ApiVersion }}
+    {{ fail "`NetworkPolicy.ApiVersion` no longer exists. It has been renamed to `networkPolicy.apiVersion`" }}
+  {{- end }}
+
+    {{ fail "NetworkPolicy.* values have been renamed; please check the documentation" }}
+ {{- end }}
+
+
+  {{- if .Values.rbac.install }}
+    {{ fail "`rbac.install` no longer exists. It has been renamed to `rbac.create` and is enabled by default!" }}
+  {{- end }}
+
+  {{- if .Values.rbac.serviceAccountName }}
+    {{ fail "`rbac.serviceAccountName` no longer exists. It has been renamed to `serviceAccount.name`" }}
+  {{- end }}
+
+  {{- if .Values.rbac.serviceAccountAnnotations }}
+    {{ fail "`rbac.serviceAccountAnnotations` no longer exists. It has been renamed to `serviceAccount.annotations`" }}
+  {{- end }}
+
+  {{- if .Values.rbac.roleRef }}
+    {{ fail "`rbac.roleRef` no longer exists. RBAC roles are now generated; please check the documentation" }}
+  {{- end }}
+
+  {{- if .Values.rbac.roleKind }}
+    {{ fail "`rbac.roleKind` no longer exists. RBAC roles are now generated; please check the documentation" }}
+  {{- end }}
+
+  {{- if .Values.rbac.roleBindingKind }}
+    {{ fail "`rbac.roleBindingKind` no longer exists. RBAC roles are now generated; please check the documentation" }}
+  {{- end }}
+
+
+  {{- if .Values.Agent }}
+  {{- if .Values.Agent.AlwaysPullImage }}
+    {{ fail "`Agent.AlwaysPullImage` no longer exists. It has been renamed to `agent.alwaysPullImage`" }}
+  {{- end }}
+
+  {{- if .Values.Agent.CustomJenkinsLabels }}
+    {{ fail "`Agent.CustomJenkinsLabels` no longer exists. It has been renamed to `agent.customJenkinsLabels`" }}
+  {{- end }}
+
+  {{- if .Values.Agent.Enabled }}
+    {{ fail "`Agent.Enabled` no longer exists. It has been renamed to `agent.enabled`" }}
+  {{- end }}
+
+  {{- if .Values.Agent.Image }}
+    {{ fail "`Agent.Image` no longer exists. It has been renamed to `agent.image`" }}
+  {{- end }}
+
+  {{- if .Values.Agent.ImagePullSecret }}
+    {{ fail "`Agent.ImagePullSecret` no longer exists. It has been renamed to `agent.imagePullSecret`" }}
+  {{- end }}
+
+  {{- if .Values.Agent.ImageTag }}
+    {{ fail "`Agent.ImageTag` no longer exists. It has been renamed to `agent.imageTag`" }}
+  {{- end }}
+
+  {{- if .Values.Agent.Privileged }}
+    {{ fail "`Agent.Privileged` no longer exists. It has been renamed to `agent.privileged`" }}
+  {{- end }}
+
+  {{- if .Values.Agent.Command }}
+    {{ fail "`Agent.Command` no longer exists. It has been renamed to `agent.command`" }}
+  {{- end }}
+
+  {{- if .Values.Agent.Args }}
+    {{ fail "`Agent.Args` no longer exists. It has been renamed to `agent.args`" }}
+  {{- end }}
+
+  {{- if .Values.Agent.SideContainerName }}
+    {{ fail "`Agent.SideContainerName` no longer exists. It has been renamed to `agent.sideContainerName`" }}
+  {{- end }}
+
+  {{- if .Values.Agent.ContainerCap }}
+    {{ fail "`Agent.ContainerCap` no longer exists. It has been renamed to `agent.containerCap`" }}
+  {{- end }}
+
+  {{- if .Values.Agent.PodName }}
+    {{ fail "`Agent.PodName` no longer exists. It has been renamed to `agent.podName`" }}
+  {{- end }}
+
+    {{ fail "Agent.* values have been renamed; please check the documentation" }}
+  {{- end }}
+
+  {{- if .Values.Persistence }}
+    {{ fail "Persistence.* values have been renamed; please check the documentation" }}
+  {{- end }}
+{{- end }}
diff --git a/stable/jenkins/templates/home-pvc.yaml b/stable/jenkins/templates/home-pvc.yaml
index d6e44f2f29f8..00a02bcd8d73 100644
--- a/stable/jenkins/templates/home-pvc.yaml
+++ b/stable/jenkins/templates/home-pvc.yaml
@@ -1,28 +1,29 @@
-{{- if and .Values.Persistence.Enabled (not .Values.Persistence.ExistingClaim) -}}
+{{- if and .Values.persistence.enabled (not .Values.persistence.existingClaim) -}}
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
-{{- if .Values.Persistence.Annotations }}
+{{- if .Values.persistence.annotations }}
annotations:
-{{ toYaml .Values.Persistence.Annotations | indent 4 }}
+{{ toYaml .Values.persistence.annotations | indent 4 }}
{{- end }}
name: {{ template "jenkins.fullname" . }}
labels:
- app: {{ template "jenkins.fullname" . }}
- chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
- release: "{{ .Release.Name }}"
- heritage: "{{ .Release.Service }}"
+ "app.kubernetes.io/name": '{{ template "jenkins.name" .}}'
+ "helm.sh/chart": "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ "app.kubernetes.io/managed-by": "{{ .Release.Service }}"
+ "app.kubernetes.io/instance": "{{ .Release.Name }}"
+ "app.kubernetes.io/component": "{{ .Values.master.componentName }}"
spec:
accessModes:
- - {{ .Values.Persistence.AccessMode | quote }}
+ - {{ .Values.persistence.accessMode | quote }}
resources:
requests:
- storage: {{ .Values.Persistence.Size | quote }}
-{{- if .Values.Persistence.StorageClass }}
-{{- if (eq "-" .Values.Persistence.StorageClass) }}
+ storage: {{ .Values.persistence.size | quote }}
+{{- if .Values.persistence.storageClass }}
+{{- if (eq "-" .Values.persistence.storageClass) }}
storageClassName: ""
{{- else }}
- storageClassName: "{{ .Values.Persistence.StorageClass }}"
+ storageClassName: "{{ .Values.persistence.storageClass }}"
{{- end }}
{{- end }}
{{- end }}
diff --git a/stable/jenkins/templates/jcasc-config.yaml b/stable/jenkins/templates/jcasc-config.yaml
new file mode 100644
index 000000000000..06d06ce331a7
--- /dev/null
+++ b/stable/jenkins/templates/jcasc-config.yaml
@@ -0,0 +1,20 @@
+{{- $root := . }}
+{{- if and (.Values.master.JCasC.enabled) (.Values.master.sidecars.configAutoReload.enabled) }}
+{{- range $key, $val := .Values.master.JCasC.configScripts }}
+---
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: {{ template "jenkins.fullname" $root }}-jenkins-config-{{ $key }}
+ labels:
+ "app.kubernetes.io/name": {{ template "jenkins.name" $root}}
+ "helm.sh/chart": {{ $.Chart.Name }}-{{ $.Chart.Version }}
+ "app.kubernetes.io/managed-by": "{{ $.Release.Service }}"
+ "app.kubernetes.io/instance": "{{ $.Release.Name }}"
+ "app.kubernetes.io/component": "{{ $.Values.master.componentName }}"
+ {{ template "jenkins.fullname" $root }}-jenkins-config: "true"
+data:
+ {{ $key }}.yaml: |-
+{{ tpl $val $ | indent 4 }}
+{{- end }}
+{{- end }}
diff --git a/stable/jenkins/templates/jenkins-agent-svc.yaml b/stable/jenkins/templates/jenkins-agent-svc.yaml
index ca1ed6431683..9c4ec9d5529e 100644
--- a/stable/jenkins/templates/jenkins-agent-svc.yaml
+++ b/stable/jenkins/templates/jenkins-agent-svc.yaml
@@ -3,21 +3,24 @@ kind: Service
metadata:
name: {{ template "jenkins.fullname" . }}-agent
labels:
- app: {{ template "jenkins.fullname" . }}
- chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
- component: "{{ .Release.Name }}-{{ .Values.Master.Component }}"
-{{- if .Values.Master.SlaveListenerServiceAnnotations }}
+ "app.kubernetes.io/name": '{{ template "jenkins.name" .}}'
+ "helm.sh/chart": "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ "app.kubernetes.io/managed-by": "{{ .Release.Service }}"
+ "app.kubernetes.io/instance": "{{ .Release.Name }}"
+ "app.kubernetes.io/component": "{{ .Values.master.componentName }}"
+{{- if .Values.master.slaveListenerServiceAnnotations }}
annotations:
-{{ toYaml .Values.Master.SlaveListenerServiceAnnotations | indent 4 }}
+{{ toYaml .Values.master.slaveListenerServiceAnnotations | indent 4 }}
{{- end }}
spec:
ports:
- - port: {{ .Values.Master.SlaveListenerPort }}
- targetPort: {{ .Values.Master.SlaveListenerPort }}
- {{ if (and (eq .Values.Master.SlaveListenerServiceType "NodePort") (not (empty .Values.Master.SlaveListenerPort))) }}
- nodePort: {{.Values.Master.SlaveListenerPort}}
+ - port: {{ .Values.master.slaveListenerPort }}
+ targetPort: {{ .Values.master.slaveListenerPort }}
+ {{ if (and (eq .Values.master.slaveListenerServiceType "NodePort") (not (empty .Values.master.slaveListenerPort))) }}
+ nodePort: {{.Values.master.slaveListenerPort}}
{{end}}
name: slavelistener
selector:
- component: "{{ .Release.Name }}-{{ .Values.Master.Component }}"
- type: {{ .Values.Master.SlaveListenerServiceType }}
+ "app.kubernetes.io/component": "{{ .Values.master.componentName }}"
+ "app.kubernetes.io/instance": "{{ .Release.Name }}"
+ type: {{ .Values.master.slaveListenerServiceType }}
diff --git a/stable/jenkins/templates/jenkins-backup-cronjob.yaml b/stable/jenkins/templates/jenkins-backup-cronjob.yaml
index 0d5a186e365a..812c38f9b477 100644
--- a/stable/jenkins/templates/jenkins-backup-cronjob.yaml
+++ b/stable/jenkins/templates/jenkins-backup-cronjob.yaml
@@ -4,10 +4,11 @@ kind: CronJob
metadata:
name: {{ template "jenkins.fullname" . }}-backup
labels:
- app: {{ template "jenkins.fullname" . }}
- chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
- release: "{{ .Release.Name }}"
- heritage: "{{ .Release.Service }}"
+ "app.kubernetes.io/name": '{{ template "jenkins.name" .}}'
+ "helm.sh/chart": "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ "app.kubernetes.io/managed-by": "{{ .Release.Service }}"
+ "app.kubernetes.io/instance": "{{ .Release.Name }}"
+ "app.kubernetes.io/component": "{{ .Values.backup.componentName }}"
spec:
schedule: {{ .Values.backup.schedule | quote }}
concurrencyPolicy: Forbid
@@ -30,7 +31,7 @@ spec:
- -n
- {{ .Release.Namespace }}
- -l
- - release={{ .Release.Name }}
+ - app.kubernetes.io/instance={{ .Release.Name }}
- --container
- {{ template "jenkins.fullname" . }}
- --path
@@ -53,14 +54,14 @@ spec:
preferredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- - key: app
+ - key: "app.kubernetes.io/name"
operator: In
values:
- - {{ template "jenkins.fullname" . }}
- - key: release
+ - "{{ template "jenkins.name" . }}"
+ - key: "app.kubernetes.io/instance"
operator: In
values:
- - {{ .Release.Name }}
+ - "{{ .Release.Name }}"
topologyKey: "kubernetes.io/hostname"
{{- with .Values.tolerations }}
tolerations:
diff --git a/stable/jenkins/templates/jenkins-backup-rbac.yaml b/stable/jenkins/templates/jenkins-backup-rbac.yaml
index 0ac8bdcb4d59..77d964eff526 100644
--- a/stable/jenkins/templates/jenkins-backup-rbac.yaml
+++ b/stable/jenkins/templates/jenkins-backup-rbac.yaml
@@ -4,20 +4,22 @@ kind: ServiceAccount
metadata:
name: {{ template "jenkins.fullname" . }}-backup
labels:
- app: {{ template "jenkins.fullname" . }}
- chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
- release: "{{ .Release.Name }}"
- heritage: "{{ .Release.Service }}"
+ "app.kubernetes.io/name": '{{ template "jenkins.name" .}}'
+ "helm.sh/chart": "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ "app.kubernetes.io/managed-by": "{{ .Release.Service }}"
+ "app.kubernetes.io/instance": "{{ .Release.Name }}"
+ "app.kubernetes.io/component": "{{ .Values.master.componentName }}"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ template "jenkins.fullname" . }}-backup
labels:
- app: {{ template "jenkins.fullname" . }}
- chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
- release: "{{ .Release.Name }}"
- heritage: "{{ .Release.Service }}"
+ "app.kubernetes.io/name": '{{ template "jenkins.name" .}}'
+ "helm.sh/chart": "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ "app.kubernetes.io/managed-by": "{{ .Release.Service }}"
+ "app.kubernetes.io/instance": "{{ .Release.Name }}"
+ "app.kubernetes.io/component": "{{ .Values.master.componentName }}"
rules:
- apiGroups: [""]
resources: ["pods", "pods/log"]
@@ -31,10 +33,11 @@ kind: RoleBinding
metadata:
name: {{ template "jenkins.fullname" . }}-backup
labels:
- app: {{ template "jenkins.fullname" . }}
- chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
- release: "{{ .Release.Name }}"
- heritage: "{{ .Release.Service }}"
+ "app.kubernetes.io/name": '{{ template "jenkins.name" .}}'
+ "helm.sh/chart": "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ "app.kubernetes.io/managed-by": "{{ .Release.Service }}"
+ "app.kubernetes.io/instance": "{{ .Release.Name }}"
+ "app.kubernetes.io/component": "{{ .Values.master.componentName }}"
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
diff --git a/stable/jenkins/templates/jenkins-master-alerting-rules.yaml b/stable/jenkins/templates/jenkins-master-alerting-rules.yaml
new file mode 100644
index 000000000000..36cfc3213994
--- /dev/null
+++ b/stable/jenkins/templates/jenkins-master-alerting-rules.yaml
@@ -0,0 +1,19 @@
+{{- if and .Values.master.prometheus.enabled .Values.master.prometheus.alertingrules }}
+---
+apiVersion: monitoring.coreos.com/v1
+kind: PrometheusRule
+metadata:
+ name: {{ template "jenkins.fullname" . }}
+ labels:
+ "app.kubernetes.io/name": '{{ template "jenkins.name" .}}'
+ "helm.sh/chart": "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ "app.kubernetes.io/managed-by": "{{ .Release.Service }}"
+ "app.kubernetes.io/instance": "{{ .Release.Name }}"
+ "app.kubernetes.io/component": "{{ .Values.master.componentName }}"
+ {{- range $key, $val := .Values.master.prometheus.alertingRulesAdditionalLabels }}
+ {{ $key }}: {{ $val | quote }}
+ {{- end}}
+spec:
+ groups:
+{{ toYaml .Values.master.prometheus.alertingrules | indent 2 }}
+{{- end }}
diff --git a/stable/jenkins/templates/jenkins-master-deployment.yaml b/stable/jenkins/templates/jenkins-master-deployment.yaml
index 3f85390c9fdf..ba1dd10984b2 100644
--- a/stable/jenkins/templates/jenkins-master-deployment.yaml
+++ b/stable/jenkins/templates/jenkins-master-deployment.yaml
@@ -7,120 +7,162 @@ kind: Deployment
metadata:
name: {{ template "jenkins.fullname" . }}
labels:
- heritage: {{ .Release.Service | quote }}
- release: {{ .Release.Name | quote }}
- chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
- component: "{{ .Release.Name }}-{{ .Values.Master.Name }}"
- {{- range $key, $val := .Values.Master.DeploymentLabels }}
+ "app.kubernetes.io/name": '{{ template "jenkins.name" .}}'
+ "helm.sh/chart": "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ "app.kubernetes.io/managed-by": "{{ .Release.Service }}"
+ "app.kubernetes.io/instance": "{{ .Release.Name }}"
+ "app.kubernetes.io/component": "{{ .Values.master.componentName }}"
+ {{- range $key, $val := .Values.master.deploymentLabels }}
{{ $key }}: {{ $val | quote }}
{{- end}}
spec:
replicas: 1
strategy:
- type: {{ if .Values.Persistence.Enabled }}Recreate{{ else }}RollingUpdate{{ end }}
+ type: {{ if .Values.persistence.enabled }}Recreate{{ else }}RollingUpdate
+ rollingUpdate:
+{{ toYaml .Values.master.rollingUpdate | indent 6 }}
+ {{- end }}
selector:
matchLabels:
- component: "{{ .Release.Name }}-{{ .Values.Master.Component }}"
+ "app.kubernetes.io/component": "{{ .Values.master.componentName }}"
+ "app.kubernetes.io/instance": "{{ .Release.Name }}"
template:
metadata:
labels:
- app: {{ template "jenkins.fullname" . }}
- heritage: {{ .Release.Service | quote }}
- release: {{ .Release.Name | quote }}
- chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
- component: "{{ .Release.Name }}-{{ .Values.Master.Component }}"
+ "app.kubernetes.io/name": '{{ template "jenkins.name" .}}'
+ "helm.sh/chart": "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ "app.kubernetes.io/managed-by": "{{ .Release.Service }}"
+ "app.kubernetes.io/instance": "{{ .Release.Name }}"
+ "app.kubernetes.io/component": "{{ .Values.master.componentName }}"
+ {{- range $key, $val := .Values.master.podLabels }}
+ {{ $key }}: {{ $val | quote }}
+ {{- end}}
annotations:
checksum/config: {{ include (print $.Template.BasePath "/config.yaml") . | sha256sum }}
- {{- if .Values.Master.PodAnnotations }}
-{{ toYaml .Values.Master.PodAnnotations | indent 8 }}
+ {{- if .Values.master.podAnnotations }}
+{{ toYaml .Values.master.podAnnotations | indent 8 }}
{{- end }}
spec:
- {{- if .Values.Master.NodeSelector }}
+ {{- if .Values.master.nodeSelector }}
nodeSelector:
-{{ toYaml .Values.Master.NodeSelector | indent 8 }}
+{{ toYaml .Values.master.nodeSelector | indent 8 }}
{{- end }}
- {{- if .Values.Master.Tolerations }}
+ {{- if .Values.master.tolerations }}
tolerations:
-{{ toYaml .Values.Master.Tolerations | indent 8 }}
+{{ toYaml .Values.master.tolerations | indent 8 }}
{{- end }}
- {{- if .Values.Master.Affinity }}
+ {{- if .Values.master.affinity }}
affinity:
-{{ toYaml .Values.Master.Affinity | indent 8 }}
+{{ toYaml .Values.master.affinity | indent 8 }}
{{- end }}
-{{- if .Values.Master.UsePodSecurityContext }}
+ {{- if and (.Capabilities.APIVersions.Has "scheduling.k8s.io/v1beta1") (.Values.master.priorityClassName) }}
+ priorityClassName: {{ .Values.master.priorityClassName }}
+ {{- end }}
+{{- if .Values.master.usePodSecurityContext }}
securityContext:
- runAsUser: {{ default 0 .Values.Master.RunAsUser }}
-{{- if and (.Values.Master.RunAsUser) (.Values.Master.FsGroup) }}
-{{- if not (eq .Values.Master.RunAsUser 0.0) }}
- fsGroup: {{ .Values.Master.FsGroup }}
+ runAsUser: {{ default 0 .Values.master.runAsUser }}
+{{- if and (.Values.master.runAsUser) (.Values.master.fsGroup) }}
+{{- if not (eq .Values.master.runAsUser 0.0) }}
+ fsGroup: {{ .Values.master.fsGroup }}
{{- end }}
{{- end }}
{{- end }}
- serviceAccountName: {{ if .Values.rbac.install }}{{ template "jenkins.fullname" . }}{{ else }}"{{ .Values.rbac.serviceAccountName }}"{{ end }}
-{{- if .Values.Master.HostNetworking }}
+ serviceAccountName: "{{ template "jenkins.serviceAccountName" . }}"
+{{- if .Values.master.hostNetworking }}
hostNetwork: true
- dnsPolicy: ClusterFirstWithHostNet
+ dnsPolicy: ClusterFirstWithHostNet
{{- end }}
+ {{- if .Values.master.hostAliases }}
+ hostAliases:
+ {{- toYaml .Values.master.hostAliases | nindent 8 }}
+ {{- end }}
initContainers:
+{{- if .Values.master.customInitContainers }}
+{{ tpl (toYaml .Values.master.customInitContainers) . | indent 8 }}
+{{- end }}
- name: "copy-default-config"
- image: "{{ .Values.Master.Image }}:{{ .Values.Master.ImageTag }}"
- imagePullPolicy: "{{ .Values.Master.ImagePullPolicy }}"
+ image: "{{ .Values.master.image }}:{{ .Values.master.imageTag }}"
+ imagePullPolicy: "{{ .Values.master.imagePullPolicy }}"
command: [ "sh", "/var/jenkins_config/apply_config.sh" ]
- {{- if .Values.Master.InitContainerEnv }}
env:
-{{ toYaml .Values.Master.InitContainerEnv | indent 12 }}
- {{- end }}
+ {{- if .Values.master.useSecurity }}
+ - name: ADMIN_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ name: {{ template "jenkins.fullname" . }}
+ key: jenkins-admin-password
+ - name: ADMIN_USER
+ valueFrom:
+ secretKeyRef:
+ name: {{ template "jenkins.fullname" . }}
+ key: jenkins-admin-user
+ {{- if or (.Values.master.adminSshKey) (.Values.master.sidecars.configAutoReload.enabled) }}
+ {{- if .Values.master.JCasC.enabled }}
+ - name: ADMIN_PRIVATE_KEY
+ valueFrom:
+ secretKeyRef:
+ name: {{ template "jenkins.fullname" . }}
+ key: {{ "jenkins-admin-private-key" | quote }}
+ {{- end }}
+ {{- end }}
+ {{- end }}
+ {{- if .Values.master.initContainerEnv }}
+{{ toYaml .Values.master.initContainerEnv | indent 12 }}
+ {{- end }}
resources:
-{{ toYaml .Values.Master.resources | indent 12 }}
+{{ toYaml .Values.master.resources | indent 12 }}
volumeMounts:
- -
- mountPath: /var/jenkins_home
+ - mountPath: /tmp
+ name: tmp
+ - mountPath: /var/jenkins_home
name: jenkins-home
- {{- if .Values.Persistence.SubPath }}
- subPath: {{ .Values.Persistence.SubPath }}
+ {{- if .Values.persistence.subPath }}
+ subPath: {{ .Values.persistence.subPath }}
{{- end }}
- -
- mountPath: /var/jenkins_config
+ - mountPath: /var/jenkins_config
name: jenkins-config
- {{- if .Values.Master.CredentialsXmlSecret }}
- -
- mountPath: /var/jenkins_credentials
+ {{- if .Values.master.credentialsXmlSecret }}
+ - mountPath: /var/jenkins_credentials
name: jenkins-credentials
readOnly: true
{{- end }}
- {{- if .Values.Master.SecretsFilesSecret }}
- -
- mountPath: /var/jenkins_secrets
+ {{- if .Values.master.secretsFilesSecret }}
+ - mountPath: /var/jenkins_secrets
name: jenkins-secrets
readOnly: true
{{- end }}
- {{- if .Values.Master.Jobs }}
- -
- mountPath: /var/jenkins_jobs
+ {{- if .Values.master.jobs }}
+ - mountPath: /var/jenkins_jobs
name: jenkins-jobs
readOnly: true
{{- end }}
- {{- if .Values.Master.InstallPlugins }}
- -
- mountPath: /var/jenkins_plugins
+ {{- if .Values.master.installPlugins }}
+ - mountPath: /usr/share/jenkins/ref/plugins
+ name: plugins
+ - mountPath: /var/jenkins_plugins
name: plugin-dir
{{- end }}
- -
- mountPath: /usr/share/jenkins/ref/secrets/
+ - mountPath: /usr/share/jenkins/ref/secrets/
name: secrets-dir
containers:
- name: {{ template "jenkins.fullname" . }}
- image: "{{ .Values.Master.Image }}:{{ .Values.Master.ImageTag }}"
- imagePullPolicy: "{{ .Values.Master.ImagePullPolicy }}"
- {{- if .Values.Master.UseSecurity }}
+ image: "{{ .Values.master.image }}:{{ .Values.master.imageTag }}"
+ imagePullPolicy: "{{ .Values.master.imagePullPolicy }}"
+ {{- if .Values.master.useSecurity }}
args: [ "--argumentsRealm.passwd.$(ADMIN_USER)=$(ADMIN_PASSWORD)", "--argumentsRealm.roles.$(ADMIN_USER)=admin"]
+ {{- end }}
+ {{- if .Values.master.lifecycle }}
+ lifecycle:
+{{ toYaml .Values.master.lifecycle | indent 12 }}
{{- end }}
env:
- name: JAVA_OPTS
- value: {{ default "" .Values.Master.JavaOpts | quote }}
+ value: {{ default "" .Values.master.javaOpts | quote }}
- name: JENKINS_OPTS
- value: "{{ if .Values.Master.JenkinsUriPrefix }}--prefix={{ .Values.Master.JenkinsUriPrefix }} {{ end }}{{ default "" .Values.Master.JenkinsOpts}}"
- {{- if .Values.Master.UseSecurity }}
+ value: "{{ if .Values.master.jenkinsUriPrefix }}--prefix={{ .Values.master.jenkinsUriPrefix }} {{ end }}{{ default "" .Values.master.jenkinsOpts}}"
+ - name: JENKINS_SLAVE_AGENT_PORT
+ value: "{{ .Values.master.slaveListenerPort }}"
+ {{- if .Values.master.useSecurity }}
- name: ADMIN_PASSWORD
valueFrom:
secretKeyRef:
@@ -131,128 +173,197 @@ spec:
secretKeyRef:
name: {{ template "jenkins.fullname" . }}
key: jenkins-admin-user
+ {{- if or (.Values.master.adminSshKey) (.Values.master.sidecars.configAutoReload.enabled) }}
+ {{- if .Values.master.JCasC.enabled }}
+ - name: ADMIN_PRIVATE_KEY
+ valueFrom:
+ secretKeyRef:
+ name: {{ template "jenkins.fullname" . }}
+ key: {{ "jenkins-admin-private-key" | quote }}
+ {{- end }}
{{- end }}
- {{- if .Values.Master.ContainerEnv }}
-{{ toYaml .Values.Master.ContainerEnv | indent 12 }}
+ {{- end }}
+ {{- if .Values.master.containerEnv }}
+{{ toYaml .Values.master.containerEnv | indent 12 }}
+ {{- end }}
+ {{- if .Values.master.JCasC.enabled }}
+ - name: CASC_JENKINS_CONFIG
+ value: {{ .Values.master.sidecars.configAutoReload.folder | default "/var/jenkins_home/casc_configs" | quote }}
{{- end }}
ports:
- containerPort: 8080
name: http
- - containerPort: {{ .Values.Master.SlaveListenerPort }}
+ - containerPort: {{ .Values.master.slaveListenerPort }}
name: slavelistener
- {{- if .Values.Master.JMXPort }}
- - containerPort: {{ .Values.Master.JMXPort }}
+ {{- if .Values.master.slaveHostPort }}
+ hostPort: {{ .Values.master.slaveHostPort }}
+ {{- end }}
+ {{- if .Values.master.jmxPort }}
+ - containerPort: {{ .Values.master.jmxPort }}
name: jmx
{{- end }}
-{{- range $index, $port := .Values.Master.ExtraPorts }}
+{{- range $index, $port := .Values.master.extraPorts }}
- containerPort: {{ $port.port }}
name: {{ $port.name }}
{{- end }}
-{{- if .Values.Master.HealthProbes }}
+{{- if .Values.master.healthProbes }}
livenessProbe:
httpGet:
- path: "{{ default "" .Values.Master.JenkinsUriPrefix }}/login"
+ path: "{{ default "" .Values.master.jenkinsUriPrefix }}/login"
port: http
- initialDelaySeconds: {{ .Values.Master.HealthProbesLivenessTimeout }}
- timeoutSeconds: 5
- failureThreshold: {{ .Values.Master.HealthProbeLivenessFailureThreshold }}
+ initialDelaySeconds: {{ .Values.master.healthProbeLivenessInitialDelay }}
+ periodSeconds: {{ .Values.master.healthProbeLivenessPeriodSeconds }}
+ timeoutSeconds: {{ .Values.master.healthProbesLivenessTimeout }}
+ failureThreshold: {{ .Values.master.healthProbeLivenessFailureThreshold }}
readinessProbe:
httpGet:
- path: "{{ default "" .Values.Master.JenkinsUriPrefix }}/login"
+ path: "{{ default "" .Values.master.jenkinsUriPrefix }}/login"
port: http
- initialDelaySeconds: {{ .Values.Master.HealthProbesReadinessTimeout }}
- periodSeconds: {{ .Values.Master.HealthProbeReadinessPeriodSeconds }}
+ initialDelaySeconds: {{ .Values.master.healthProbeReadinessInitialDelay }}
+ periodSeconds: {{ .Values.master.healthProbeReadinessPeriodSeconds }}
+ timeoutSeconds: {{ .Values.master.healthProbesReadinessTimeout }}
+ failureThreshold: {{ .Values.master.healthProbeReadinessFailureThreshold }}
{{- end }}
- # Resources configuration is a little hacky. This was to prevent breaking
- # changes, and should be cleanned up in the future once everybody had
- # enough time to migrate.
+
resources:
-{{ if or .Values.Master.Cpu .Values.Master.Memory }}
- requests:
- cpu: "{{ .Values.Master.Cpu }}"
- memory: "{{ .Values.Master.Memory }}"
-{{ else }}
-{{ toYaml .Values.Master.resources | indent 12 }}
-{{ end }}
+{{ toYaml .Values.master.resources | indent 12 }}
volumeMounts:
-{{- if .Values.Persistence.mounts }}
-{{ toYaml .Values.Persistence.mounts | indent 12 }}
+{{- if .Values.persistence.mounts }}
+{{ toYaml .Values.persistence.mounts | indent 12 }}
{{- end }}
- -
- mountPath: /var/jenkins_home
+ - mountPath: /tmp
+ name: tmp
+ - mountPath: /var/jenkins_home
name: jenkins-home
readOnly: false
- {{- if .Values.Persistence.SubPath }}
- subPath: {{ .Values.Persistence.SubPath }}
+ {{- if .Values.persistence.subPath }}
+ subPath: {{ .Values.persistence.subPath }}
{{- end }}
- -
- mountPath: /var/jenkins_config
+ - mountPath: /var/jenkins_config
name: jenkins-config
readOnly: true
- {{- if .Values.Master.CredentialsXmlSecret }}
- -
- mountPath: /var/jenkins_credentials
+ {{- if .Values.master.credentialsXmlSecret }}
+ - mountPath: /var/jenkins_credentials
name: jenkins-credentials
readOnly: true
{{- end }}
- {{- if .Values.Master.SecretsFilesSecret }}
- -
- mountPath: /var/jenkins_secrets
+ {{- if .Values.master.secretsFilesSecret }}
+ - mountPath: /var/jenkins_secrets
name: jenkins-secrets
readOnly: true
{{- end }}
- {{- if .Values.Master.Jobs }}
- -
- mountPath: /var/jenkins_jobs
+ {{- if .Values.master.jobs }}
+ - mountPath: /var/jenkins_jobs
name: jenkins-jobs
readOnly: true
{{- end }}
- {{- if .Values.Master.InstallPlugins }}
- -
- mountPath: /usr/share/jenkins/ref/plugins/
+ {{- if .Values.master.installPlugins }}
+ - mountPath: /usr/share/jenkins/ref/plugins/
name: plugin-dir
readOnly: false
{{- end }}
- -
- mountPath: /usr/share/jenkins/ref/secrets/
+ - mountPath: /usr/share/jenkins/ref/secrets/
name: secrets-dir
readOnly: false
+ {{- if and (.Values.master.JCasC.enabled) (.Values.master.sidecars.configAutoReload.enabled) }}
+ - name: sc-config-volume
+ mountPath: {{ .Values.master.sidecars.configAutoReload.folder | default "/var/jenkins_home/casc_configs" | quote }}
+ {{- end }}
+
+{{- if and (.Values.master.JCasC.enabled) (.Values.master.sidecars.configAutoReload.enabled) }}
+ - name: {{ template "jenkins.name" . }}-sc-config
+ image: "{{ .Values.master.sidecars.configAutoReload.image }}"
+ imagePullPolicy: {{ .Values.master.sidecars.configAutoReload.imagePullPolicy }}
+ env:
+ - name: JENKINSRELOADCONFIG
+ value: "true"
+ - name: LABEL
+ value: "{{ template "jenkins.fullname" . }}-jenkins-config"
+ - name: FOLDER
+ value: "{{ .Values.master.sidecars.configAutoReload.folder }}"
+ - name: NAMESPACE
+ value: "{{ .Values.master.sidecars.configAutoReload.searchNamespace }}"
+ - name: SSH_PORT
+ value: "{{ .Values.master.sidecars.configAutoReload.sshTcpPort }}"
+ - name: JENKINS_PORT
+ value: "{{ .Values.master.servicePort }}"
+ {{- if .Values.master.useSecurity }}
+ - name: ADMIN_USER
+ valueFrom:
+ secretKeyRef:
+ name: {{ template "jenkins.fullname" . }}
+ key: jenkins-admin-user
+ {{- if or (.Values.master.adminSshKey) (.Values.master.sidecars.configAutoReload.enabled) }}
+ {{- if .Values.master.JCasC.enabled }}
+ - name: ADMIN_PRIVATE_KEY
+ valueFrom:
+ secretKeyRef:
+ name: {{ template "jenkins.fullname" . }}
+ key: {{ "jenkins-admin-private-key" | quote }}
+ {{- end }}
+ {{- end }}
+ {{- end }}
+ resources:
+{{ toYaml .Values.master.sidecars.configAutoReload.resources | indent 12 }}
+ volumeMounts:
+ - name: sc-config-volume
+ mountPath: {{ .Values.master.sidecars.configAutoReload.folder | quote }}
+ - name: jenkins-home
+ mountPath: /var/jenkins_home
+ {{- if .Values.persistence.subPath }}
+ subPath: {{ .Values.persistence.subPath }}
+ {{- end }}
+{{- end }}
+
+
+{{- if .Values.master.sidecars.other }}
+{{ tpl (toYaml .Values.master.sidecars.other | indent 8) . }}
+{{- end }}
+
volumes:
-{{- if .Values.Persistence.volumes }}
-{{ toYaml .Values.Persistence.volumes | indent 6 }}
+{{- if .Values.persistence.volumes }}
+{{ tpl (toYaml .Values.persistence.volumes | indent 6) . }}
{{- end }}
+ - name: plugins
+ emptyDir: {}
+ - name: tmp
+ emptyDir: {}
- name: jenkins-config
configMap:
name: {{ template "jenkins.fullname" . }}
- {{- if .Values.Master.CredentialsXmlSecret }}
+ {{- if .Values.master.credentialsXmlSecret }}
- name: jenkins-credentials
secret:
- secretName: {{ .Values.Master.CredentialsXmlSecret }}
+ secretName: {{ .Values.master.credentialsXmlSecret }}
{{- end }}
- {{- if .Values.Master.SecretsFilesSecret }}
+ {{- if .Values.master.secretsFilesSecret }}
- name: jenkins-secrets
secret:
- secretName: {{ .Values.Master.SecretsFilesSecret }}
+ secretName: {{ .Values.master.secretsFilesSecret }}
{{- end }}
- {{- if .Values.Master.Jobs }}
+ {{- if .Values.master.jobs }}
- name: jenkins-jobs
configMap:
name: {{ template "jenkins.fullname" . }}-jobs
{{- end }}
- {{- if .Values.Master.InstallPlugins }}
+ {{- if .Values.master.installPlugins }}
- name: plugin-dir
emptyDir: {}
{{- end }}
- name: secrets-dir
emptyDir: {}
- name: jenkins-home
- {{- if .Values.Persistence.Enabled }}
+ {{- if .Values.persistence.enabled }}
persistentVolumeClaim:
- claimName: {{ .Values.Persistence.ExistingClaim | default (include "jenkins.fullname" .) }}
+ claimName: {{ .Values.persistence.existingClaim | default (include "jenkins.fullname" .) }}
{{- else }}
emptyDir: {}
{{- end -}}
-{{- if .Values.Master.ImagePullSecret }}
+ {{- if .Values.master.JCasC.enabled }}
+ - name: sc-config-volume
+ emptyDir: {}
+ {{- end }}
+{{- if .Values.master.imagePullSecretName }}
imagePullSecrets:
- - name: {{ .Values.Master.ImagePullSecret }}
+ - name: {{ .Values.master.imagePullSecretName }}
{{- end -}}
diff --git a/stable/jenkins/templates/jenkins-master-ingress.yaml b/stable/jenkins/templates/jenkins-master-ingress.yaml
index 9c75f4467c61..f786f6b869a7 100644
--- a/stable/jenkins/templates/jenkins-master-ingress.yaml
+++ b/stable/jenkins/templates/jenkins-master-ingress.yaml
@@ -1,25 +1,36 @@
-{{- if .Values.Master.HostName }}
-apiVersion: {{ .Values.Master.Ingress.ApiVersion }}
+{{- if .Values.master.ingress.enabled }}
+apiVersion: {{ .Values.master.ingress.apiVersion }}
kind: Ingress
metadata:
-{{- if .Values.Master.Ingress.Annotations }}
+ labels:
+ "app.kubernetes.io/name": '{{ template "jenkins.name" .}}'
+ "helm.sh/chart": "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ "app.kubernetes.io/managed-by": "{{ .Release.Service }}"
+ "app.kubernetes.io/instance": "{{ .Release.Name }}"
+ "app.kubernetes.io/component": "{{ .Values.master.componentName }}"
+{{- if .Values.master.ingress.labels }}
+{{ toYaml .Values.master.ingress.labels | indent 4 }}
+{{- end }}
+{{- if .Values.master.ingress.annotations }}
annotations:
-{{ toYaml .Values.Master.Ingress.Annotations | indent 4 }}
+{{ toYaml .Values.master.ingress.annotations | indent 4 }}
{{- end }}
name: {{ template "jenkins.fullname" . }}
spec:
rules:
- - host: {{ .Values.Master.HostName | quote }}
- http:
+ - http:
paths:
- backend:
serviceName: {{ template "jenkins.fullname" . }}
- servicePort: {{ .Values.Master.ServicePort }}
-{{- if .Values.Master.Ingress.Path }}
- path: {{ .Values.Master.Ingress.Path }}
+ servicePort: {{ .Values.master.servicePort }}
+{{- if .Values.master.ingress.path }}
+ path: {{ .Values.master.ingress.path }}
{{- end -}}
-{{- if .Values.Master.Ingress.TLS }}
+{{- if .Values.master.ingress.hostName }}
+ host: {{ .Values.master.ingress.hostName | quote }}
+{{- end }}
+{{- if .Values.master.ingress.tls }}
tls:
-{{ toYaml .Values.Master.Ingress.TLS | indent 4 }}
+{{ toYaml .Values.master.ingress.tls | indent 4 }}
{{- end -}}
{{- end }}
diff --git a/stable/jenkins/templates/jenkins-master-networkpolicy.yaml b/stable/jenkins/templates/jenkins-master-networkpolicy.yaml
index 7efd994dac65..7ae1e594e3a8 100644
--- a/stable/jenkins/templates/jenkins-master-networkpolicy.yaml
+++ b/stable/jenkins/templates/jenkins-master-networkpolicy.yaml
@@ -1,33 +1,46 @@
-{{- if .Values.NetworkPolicy.Enabled }}
+{{- if .Values.networkPolicy.enabled }}
kind: NetworkPolicy
-apiVersion: {{ .Values.NetworkPolicy.ApiVersion }}
+apiVersion: {{ .Values.networkPolicy.apiVersion }}
metadata:
- name: "{{ .Release.Name }}-{{ .Values.Master.Component }}"
+ name: "{{ .Release.Name }}-{{ .Values.master.componentName }}"
+ labels:
+ "app.kubernetes.io/name": '{{ template "jenkins.name" .}}'
+ "helm.sh/chart": "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ "app.kubernetes.io/managed-by": "{{ .Release.Service }}"
+ "app.kubernetes.io/instance": "{{ .Release.Name }}"
+ "app.kubernetes.io/component": "{{ .Values.master.componentName }}"
spec:
podSelector:
matchLabels:
- component: "{{ .Release.Name }}-{{ .Values.Master.Component }}"
+ "app.kubernetes.io/component": "{{ .Values.master.componentName }}"
+ "app.kubernetes.io/instance": "{{ .Release.Name }}"
ingress:
# Allow web access to the UI
- ports:
- - port: 8080
+ - port: {{ .Values.master.targetPort }}
# Allow inbound connections from slave
- from:
- podSelector:
matchLabels:
- "jenkins/{{ .Release.Name }}-{{ .Values.Agent.Component }}": "true"
+ "jenkins/{{ .Release.Name }}-{{ .Values.agent.componentName }}": "true"
ports:
- - port: {{ .Values.Master.SlaveListenerPort }}
-{{- if .Values.Agent.Enabled }}
+ - port: {{ .Values.master.slaveListenerPort }}
+{{- if .Values.agent.enabled }}
---
kind: NetworkPolicy
-apiVersion: {{ .Values.NetworkPolicy.ApiVersion }}
+apiVersion: {{ .Values.networkPolicy.apiVersion }}
metadata:
- name: "{{ .Release.Name }}-{{ .Values.Agent.Component }}"
+ name: "{{ .Release.Name }}-{{ .Values.agent.componentName }}"
+ labels:
+ "app.kubernetes.io/name": '{{ template "jenkins.name" .}}'
+ "helm.sh/chart": "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ "app.kubernetes.io/managed-by": "{{ .Release.Service }}"
+ "app.kubernetes.io/instance": "{{ .Release.Name }}"
+ "app.kubernetes.io/component": "{{ .Values.master.componentName }}"
spec:
podSelector:
matchLabels:
# DefaultDeny
- "jenkins/{{ .Release.Name }}-{{ .Values.Agent.Component }}": "true"
+ "jenkins/{{ .Release.Name }}-{{ .Values.agent.componentName }}": "true"
{{- end }}
{{- end }}
diff --git a/stable/jenkins/templates/jenkins-master-route.yaml b/stable/jenkins/templates/jenkins-master-route.yaml
new file mode 100644
index 000000000000..70fea48e8359
--- /dev/null
+++ b/stable/jenkins/templates/jenkins-master-route.yaml
@@ -0,0 +1,31 @@
+{{- if .Values.master.route.enabled }}
+apiVersion: route.openshift.io/v1
+kind: Route
+metadata:
+ labels:
+ app: {{ template "jenkins.fullname" . }}
+ chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ release: "{{ .Release.Name }}"
+ heritage: "{{ .Release.Service }}"
+ component: "{{ .Release.Name }}-{{ .Values.master.componentName }}"
+{{- if .Values.master.route.labels }}
+{{ toYaml .Values.master.route.labels | indent 4 }}
+{{- end }}
+{{- if .Values.master.route.annotations }}
+ annotations:
+{{ toYaml .Values.master.route.annotations | indent 4 }}
+{{- end }}
+ name: {{ template "jenkins.fullname" . }}
+spec:
+ host: {{ .Values.master.route.path }}
+ port:
+ targetPort: http
+ tls:
+ insecureEdgeTerminationPolicy: Redirect
+ termination: edge
+ to:
+ kind: Service
+ name: {{ template "jenkins.fullname" . }}
+ weight: 100
+ wildcardPolicy: None
+{{- end }}
diff --git a/stable/jenkins/templates/jenkins-master-servicemonitor.yaml b/stable/jenkins/templates/jenkins-master-servicemonitor.yaml
new file mode 100644
index 000000000000..8b092e19df64
--- /dev/null
+++ b/stable/jenkins/templates/jenkins-master-servicemonitor.yaml
@@ -0,0 +1,32 @@
+{{- if .Values.master.prometheus.enabled }}
+apiVersion: monitoring.coreos.com/v1
+kind: ServiceMonitor
+
+metadata:
+ name: {{ template "jenkins.fullname" . }}
+ labels:
+ "app.kubernetes.io/name": '{{ template "jenkins.name" .}}'
+ "helm.sh/chart": "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ "app.kubernetes.io/managed-by": "{{ .Release.Service }}"
+ "app.kubernetes.io/instance": "{{ .Release.Name }}"
+ "app.kubernetes.io/component": "{{ .Values.master.componentName }}"
+ {{- range $key, $val := .Values.master.prometheus.serviceMonitorAdditionalLabels }}
+ {{ $key }}: {{ $val | quote }}
+ {{- end}}
+
+spec:
+ endpoints:
+ - interval: {{ .Values.master.prometheus.scrapeInterval }}
+ port: http
+ path: {{ .Values.master.prometheus.scrapeEndpoint }}
+ jobLabel: {{ template "jenkins.fullname" . }}
+ namespaceSelector:
+ matchNames:
+ - "{{ $.Release.Namespace }}"
+ selector:
+ matchLabels:
+ "app.kubernetes.io/name": '{{ template "jenkins.name" .}}'
+ "helm.sh/chart": "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ "app.kubernetes.io/instance": "{{ .Release.Name }}"
+ "app.kubernetes.io/component": "{{ .Values.master.componentName }}"
+{{- end }}
diff --git a/stable/jenkins/templates/jenkins-master-svc.yaml b/stable/jenkins/templates/jenkins-master-svc.yaml
index 6a7974baa9c3..81a7ce29ea5a 100644
--- a/stable/jenkins/templates/jenkins-master-svc.yaml
+++ b/stable/jenkins/templates/jenkins-master-svc.yaml
@@ -3,35 +3,36 @@ kind: Service
metadata:
name: {{template "jenkins.fullname" . }}
labels:
- app: {{ template "jenkins.fullname" . }}
- heritage: {{.Release.Service | quote }}
- release: {{.Release.Name | quote }}
- chart: "{{.Chart.Name}}-{{.Chart.Version}}"
- component: "{{.Release.Name}}-{{.Values.Master.Component}}"
- {{- if .Values.Master.ServiceLabels }}
-{{ toYaml .Values.Master.ServiceLabels | indent 4 }}
+ "app.kubernetes.io/name": '{{ template "jenkins.name" .}}'
+ "helm.sh/chart": "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ "app.kubernetes.io/managed-by": "{{ .Release.Service }}"
+ "app.kubernetes.io/instance": "{{ .Release.Name }}"
+ "app.kubernetes.io/component": "{{ .Values.master.componentName }}"
+ {{- if .Values.master.serviceLabels }}
+{{ toYaml .Values.master.serviceLabels | indent 4 }}
{{- end }}
-{{- if .Values.Master.ServiceAnnotations }}
+{{- if .Values.master.serviceAnnotations }}
annotations:
-{{ toYaml .Values.Master.ServiceAnnotations | indent 4 }}
+{{ toYaml .Values.master.serviceAnnotations | indent 4 }}
{{- end }}
spec:
ports:
- - port: {{.Values.Master.ServicePort}}
+ - port: {{.Values.master.servicePort}}
name: http
- targetPort: 8080
- {{if (and (eq .Values.Master.ServiceType "NodePort") (not (empty .Values.Master.NodePort)))}}
- nodePort: {{.Values.Master.NodePort}}
+ targetPort: {{ .Values.master.targetPort }}
+ {{if (and (eq .Values.master.serviceType "NodePort") (not (empty .Values.master.nodePort)))}}
+ nodePort: {{.Values.master.nodePort}}
{{end}}
selector:
- component: "{{.Release.Name}}-{{.Values.Master.Component}}"
- type: {{.Values.Master.ServiceType}}
- {{if eq .Values.Master.ServiceType "LoadBalancer"}}
-{{- if .Values.Master.LoadBalancerSourceRanges }}
+ "app.kubernetes.io/component": "{{ .Values.master.componentName }}"
+ "app.kubernetes.io/instance": "{{ .Release.Name }}"
+ type: {{.Values.master.serviceType}}
+ {{if eq .Values.master.serviceType "LoadBalancer"}}
+{{- if .Values.master.loadBalancerSourceRanges }}
loadBalancerSourceRanges:
-{{ toYaml .Values.Master.LoadBalancerSourceRanges | indent 4 }}
+{{ toYaml .Values.master.loadBalancerSourceRanges | indent 4 }}
{{- end }}
- {{if .Values.Master.LoadBalancerIP}}
- loadBalancerIP: {{.Values.Master.LoadBalancerIP}}
+ {{if .Values.master.loadBalancerIP}}
+ loadBalancerIP: {{.Values.master.loadBalancerIP}}
{{end}}
{{end}}
diff --git a/stable/jenkins/templates/jobs.yaml b/stable/jenkins/templates/jobs.yaml
index 03e16b8d7996..dd3e9dbbff8c 100644
--- a/stable/jenkins/templates/jobs.yaml
+++ b/stable/jenkins/templates/jobs.yaml
@@ -1,8 +1,14 @@
-{{- if .Values.Master.Jobs }}
+{{- if .Values.master.jobs }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "jenkins.fullname" . }}-jobs
+ labels:
+ "app.kubernetes.io/name": '{{ template "jenkins.name" .}}'
+ "helm.sh/chart": "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ "app.kubernetes.io/managed-by": "{{ .Release.Service }}"
+ "app.kubernetes.io/instance": "{{ .Release.Name }}"
+ "app.kubernetes.io/component": "{{ .Values.master.componentName }}"
data:
-{{ toYaml .Values.Master.Jobs | indent 2 }}
+{{ toYaml .Values.master.jobs | indent 2 }}
{{- end -}}
diff --git a/stable/jenkins/templates/rbac.yaml b/stable/jenkins/templates/rbac.yaml
index f7f0127f2f0b..a8c164703404 100644
--- a/stable/jenkins/templates/rbac.yaml
+++ b/stable/jenkins/templates/rbac.yaml
@@ -1,26 +1,90 @@
-{{ if .Values.rbac.install }}
+{{ if .Values.rbac.create }}
{{- $serviceName := include "jenkins.fullname" . -}}
-{{- if .Capabilities.APIVersions.Has "rbac.authorization.k8s.io/v1beta1" }}
-apiVersion: rbac.authorization.k8s.io/v1beta1
-{{- else if .Capabilities.APIVersions.Has "rbac.authorization.k8s.io/v1alpha1" }}
-apiVersion: rbac.authorization.k8s.io/v1alpha1
-{{- else }}
+
+# This role is used to allow Jenkins scheduling of agents via Kubernetes plugin.
apiVersion: rbac.authorization.k8s.io/v1
-{{- end }}
-kind: {{ .Values.rbac.roleBindingKind }}
+kind: Role
metadata:
- name: {{ $serviceName }}-role-binding
+ name: {{ $serviceName }}-schedule-agents
+ namespace: {{ .Values.master.slaveKubernetesNamespace | default .Release.Namespace}}
labels:
- app: {{ $serviceName }}
- chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
- release: "{{ .Release.Name }}"
- heritage: "{{ .Release.Service }}"
+ "app.kubernetes.io/name": '{{ template "jenkins.name" .}}'
+ "helm.sh/chart": "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ "app.kubernetes.io/managed-by": "{{ .Release.Service }}"
+ "app.kubernetes.io/instance": "{{ .Release.Name }}"
+ "app.kubernetes.io/component": "{{ .Values.master.componentName }}"
+rules:
+- apiGroups: [""]
+ resources: ["pods", "pods/exec", "pods/log"]
+ verbs: ["*"]
+
+---
+
+# We bind the role to the Jenkins service account. The role binding is created in the namespace
+# where the agents are supposed to run.
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+ name: {{ $serviceName }}-schedule-agents
+ namespace: {{ .Values.master.slaveKubernetesNamespace | default .Release.Namespace}}
+ labels:
+ "app.kubernetes.io/name": '{{ template "jenkins.name" .}}'
+ "helm.sh/chart": "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ "app.kubernetes.io/managed-by": "{{ .Release.Service }}"
+ "app.kubernetes.io/instance": "{{ .Release.Name }}"
+ "app.kubernetes.io/component": "{{ .Values.master.componentName }}"
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: Role
+ name: {{ $serviceName }}-schedule-agents
+subjects:
+- kind: ServiceAccount
+ name: {{ template "jenkins.serviceAccountName" .}}
+ namespace: {{ .Release.Namespace }}
+
+---
+
+# The sidecar container which is responsible for reloading configuration changes
+# needs permissions to watch ConfigMaps
+{{- if .Values.master.sidecars.configAutoReload.enabled }}
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+ name: {{ template "jenkins.fullname" . }}-casc-reload
+ namespace: {{ .Release.Namespace}}
+ labels:
+ "app.kubernetes.io/name": '{{ template "jenkins.name" .}}'
+ "helm.sh/chart": "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ "app.kubernetes.io/managed-by": "{{ .Release.Service }}"
+ "app.kubernetes.io/instance": "{{ .Release.Name }}"
+ "app.kubernetes.io/component": "{{ .Values.master.componentName }}"
+rules:
+- apiGroups: [""]
+ resources: ["configmaps"]
+ verbs: ["get", "watch", "list"]
+
+---
+
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+ name: {{ $serviceName }}-watch-configmaps
+ namespace: {{ .Release.Namespace}}
+ labels:
+ "app.kubernetes.io/name": '{{ template "jenkins.name" .}}'
+ "helm.sh/chart": "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ "app.kubernetes.io/managed-by": "{{ .Release.Service }}"
+ "app.kubernetes.io/instance": "{{ .Release.Name }}"
+ "app.kubernetes.io/component": "{{ .Values.master.componentName }}"
roleRef:
apiGroup: rbac.authorization.k8s.io
- kind: {{ .Values.rbac.roleKind }}
- name: {{ .Values.rbac.roleRef }}
+ kind: Role
+ name: {{ template "jenkins.fullname" . }}-casc-reload
subjects:
- kind: ServiceAccount
- name: {{ $serviceName }}
+ name: {{ template "jenkins.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
+
+{{- end }}
+
{{ end }}
diff --git a/stable/jenkins/templates/secret.yaml b/stable/jenkins/templates/secret.yaml
index 47cc2e056ef8..3755b6f59c8b 100644
--- a/stable/jenkins/templates/secret.yaml
+++ b/stable/jenkins/templates/secret.yaml
@@ -1,19 +1,27 @@
-{{- if .Values.Master.UseSecurity }}
+{{- if .Values.master.useSecurity -}}
apiVersion: v1
kind: Secret
metadata:
name: {{ template "jenkins.fullname" . }}
labels:
- app: {{ template "jenkins.fullname" . }}
- chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
- release: "{{ .Release.Name }}"
- heritage: "{{ .Release.Service }}"
+ "app.kubernetes.io/name": '{{ template "jenkins.name" .}}'
+ "helm.sh/chart": "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ "app.kubernetes.io/managed-by": "{{ .Release.Service }}"
+ "app.kubernetes.io/instance": "{{ .Release.Name }}"
+ "app.kubernetes.io/component": "{{ .Values.master.componentName }}"
type: Opaque
data:
- {{ if .Values.Master.AdminPassword }}
- jenkins-admin-password: {{ .Values.Master.AdminPassword | b64enc | quote }}
- {{ else }}
+ {{ if .Values.master.adminPassword -}}
+ jenkins-admin-password: {{ .Values.master.adminPassword | b64enc | quote }}
+ {{ else -}}
jenkins-admin-password: {{ randAlphaNum 10 | b64enc | quote }}
- {{ end }}
- jenkins-admin-user: {{ .Values.Master.AdminUser | b64enc | quote }}
-{{- end }}
\ No newline at end of file
+ {{ end -}}
+ {{ if and (.Values.master.JCasC.enabled) (.Values.master.sidecars.configAutoReload.enabled) -}}
+ {{ if not .Values.master.adminSshKey -}}
+ {{ ( include "jenkins.gen-key" . ) }}
+ {{ else -}}
+ jenkins-admin-private-key: {{ .Values.master.adminSshKey | b64enc | quote }}
+ {{ end -}}
+ {{ end -}}
+ jenkins-admin-user: {{ .Values.master.adminUser | b64enc | quote }}
+{{- end }}
diff --git a/stable/jenkins/templates/service-account-agent.yaml b/stable/jenkins/templates/service-account-agent.yaml
new file mode 100644
index 000000000000..78a2b56e512f
--- /dev/null
+++ b/stable/jenkins/templates/service-account-agent.yaml
@@ -0,0 +1,19 @@
+{{ if .Values.serviceAccountAgent.create }}
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: {{ include "jenkins.serviceAccountAgentName" . }}
+{{- if .Values.master.slaveKubernetesNamespace }}
+ namespace: {{ .Values.master.slaveKubernetesNamespace }}
+{{ end }}
+{{- if .Values.serviceAccountAgent.annotations }}
+ annotations:
+{{ toYaml .Values.serviceAccountAgent.annotations | indent 4 }}
+{{ end }}
+ labels:
+ "app.kubernetes.io/name": '{{ template "jenkins.name" .}}'
+ "helm.sh/chart": "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ "app.kubernetes.io/managed-by": "{{ .Release.Service }}"
+ "app.kubernetes.io/instance": "{{ .Release.Name }}"
+ "app.kubernetes.io/component": "{{ .Values.master.componentName }}"
+{{ end }}
diff --git a/stable/jenkins/templates/service-account.yaml b/stable/jenkins/templates/service-account.yaml
index cb0911cb5b46..c00c69dcafb5 100644
--- a/stable/jenkins/templates/service-account.yaml
+++ b/stable/jenkins/templates/service-account.yaml
@@ -1,12 +1,16 @@
-{{ if .Values.rbac.install }}
-{{- $serviceName := include "jenkins.fullname" . -}}
+{{ if .Values.serviceAccount.create }}
apiVersion: v1
kind: ServiceAccount
metadata:
- name: {{ $serviceName }}
+ name: {{ include "jenkins.serviceAccountName" . }}
+{{- if .Values.serviceAccount.annotations }}
+ annotations:
+{{ toYaml .Values.serviceAccount.annotations | indent 4 }}
+{{ end }}
labels:
- app: {{ $serviceName }}
- chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
- release: "{{ .Release.Name }}"
- heritage: "{{ .Release.Service }}"
-{{ end }}
\ No newline at end of file
+ "app.kubernetes.io/name": '{{ template "jenkins.name" .}}'
+ "helm.sh/chart": "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ "app.kubernetes.io/managed-by": "{{ .Release.Service }}"
+ "app.kubernetes.io/instance": "{{ .Release.Name }}"
+ "app.kubernetes.io/component": "{{ .Values.master.componentName }}"
+{{ end }}
diff --git a/stable/jenkins/templates/tests/jenkins-test.yaml b/stable/jenkins/templates/tests/jenkins-test.yaml
index 73d061a4cef2..2e96f7675c3b 100644
--- a/stable/jenkins/templates/tests/jenkins-test.yaml
+++ b/stable/jenkins/templates/tests/jenkins-test.yaml
@@ -5,13 +5,13 @@ metadata:
annotations:
"helm.sh/hook": test-success
spec:
- {{- if .Values.Master.NodeSelector }}
+ {{- if .Values.master.nodeSelector }}
nodeSelector:
-{{ toYaml .Values.Master.NodeSelector | indent 4 }}
+{{ toYaml .Values.master.nodeSelector | indent 4 }}
{{- end }}
- {{- if .Values.Master.Tolerations }}
+ {{- if .Values.master.tolerations }}
tolerations:
-{{ toYaml .Values.Master.Tolerations | indent 4 }}
+{{ toYaml .Values.master.tolerations | indent 4 }}
{{- end }}
initContainers:
- name: "test-framework"
@@ -28,7 +28,7 @@ spec:
name: tools
containers:
- name: {{ .Release.Name }}-ui-test
- image: {{ .Values.Master.Image }}:{{ .Values.Master.ImageTag }}
+ image: {{ .Values.master.image }}:{{ .Values.master.imageTag }}
command: ["/tools/bats/bats", "-t", "/tests/run.sh"]
volumeMounts:
- mountPath: /tests
diff --git a/stable/jenkins/templates/tests/test-config.yaml b/stable/jenkins/templates/tests/test-config.yaml
index 00b8a66a134f..a6826b8b5c3d 100644
--- a/stable/jenkins/templates/tests/test-config.yaml
+++ b/stable/jenkins/templates/tests/test-config.yaml
@@ -5,5 +5,5 @@ metadata:
data:
run.sh: |-
@test "Testing Jenkins UI is accessible" {
- curl --retry 48 --retry-delay 10 {{ template "jenkins.fullname" . }}:{{ .Values.Master.ServicePort }}{{ default "" .Values.Master.JenkinsUriPrefix }}/login
+ curl --retry 48 --retry-delay 10 {{ template "jenkins.fullname" . }}:{{ .Values.master.servicePort }}{{ default "" .Values.master.jenkinsUriPrefix }}/login
}
diff --git a/stable/jenkins/values.yaml b/stable/jenkins/values.yaml
index 4a4b87de3aa1..19e24199d3cc 100644
--- a/stable/jenkins/values.yaml
+++ b/stable/jenkins/values.yaml
@@ -8,177 +8,334 @@
# nameOverride:
# fullnameOverride:
-Master:
- Name: jenkins-master
- Image: "jenkins/jenkins"
- ImageTag: "lts"
- ImagePullPolicy: "Always"
-# ImagePullSecret: jenkins
- Component: "jenkins-master"
- NumExecutors: 0
- UseSecurity: true
- # SecurityRealm:
- # Optionally configure a different AuthorizationStrategy using Jenkins XML
- # AuthorizationStrategy: |-
- #   <authorizationStrategy class="hudson.security.FullControlOnceLoggedInAuthorizationStrategy">
- #     <denyAnonymousRead>true</denyAnonymousRead>
- #   </authorizationStrategy>
- HostNetworking: false
- AdminUser: admin
- # AdminPassword:
+master:
+ # Used for label app.kubernetes.io/component
+ componentName: "jenkins-master"
+ image: "jenkins/jenkins"
+ imageTag: "lts"
+ imagePullPolicy: "Always"
+ imagePullSecretName:
+ # Optionally configure lifetime for master-container
+ lifecycle:
+ # postStart:
+ # exec:
+ # command:
+ # - "uname"
+ # - "-a"
+ numExecutors: 0
+ # configAutoReload requires useSecurity to be set to true:
+ useSecurity: true
+ # Allows configuring a different SecurityRealm using Jenkins XML
+ securityRealm: |-
+   <securityRealm class="hudson.security.LegacySecurityRealm"/>
+ # Allows configuring a different AuthorizationStrategy using Jenkins XML
+ authorizationStrategy: |-
+   <authorizationStrategy class="hudson.security.FullControlOnceLoggedInAuthorizationStrategy">
+     <denyAnonymousRead>true</denyAnonymousRead>
+   </authorizationStrategy>
+ hostNetworking: false
+ # When enabling LDAP or another non-Jenkins identity source, the built-in admin account will no longer exist.
+ # Since the AdminUser is used by configAutoReload, in order to use configAutoReload you must change the
+ # .master.adminUser to a valid username on your LDAP (or other) server. This user does not need
+ # to have administrator rights in Jenkins (the default Overall:Read is sufficient) nor will it be granted any
+ # additional rights. Failure to do this will cause the sidecar container to fail to authenticate via SSH and enter
+ # a restart loop. Likewise if you disable the non-Jenkins identity store and instead use the Jenkins internal one,
+ # you should revert master.adminUser to your preferred admin user:
+ adminUser: "admin"
+ # adminPassword:
+ # adminSshKey:
+ # If CasC auto-reload is enabled, an SSH (RSA) keypair is needed. You can either provide your own or leave this unconfigured to let a random key be auto-generated.
+ # If you supply your own, it is recommended that the values file containing your key not be committed to source control in an unencrypted format.
+ rollingUpdate: {}
+ # Ignored if persistence is enabled
+ # maxSurge: 1
+ # maxUnavailable: 25%
resources:
requests:
cpu: "50m"
memory: "256Mi"
limits:
cpu: "2000m"
- memory: "2048Mi"
+ memory: "4096Mi"
# Environment variables that get added to the init container (useful for e.g. http_proxy)
- # InitContainerEnv:
+ # initContainerEnv:
# - name: http_proxy
# value: "http://192.168.64.1:3128"
- # ContainerEnv:
+ # containerEnv:
# - name: http_proxy
# value: "http://192.168.64.1:3128"
# Set min/max heap here if needed with:
- # JavaOpts: "-Xms512m -Xmx512m"
- # JenkinsOpts: ""
- # JenkinsUrl: ""
+ # javaOpts: "-Xms512m -Xmx512m"
+ # jenkinsOpts: ""
+ # jenkinsUrl: ""
# If you set this prefix and use ingress controller then you might want to set the ingress path below
- # JenkinsUriPrefix: "/jenkins"
- # Enable pod security context (must be `true` if RunAsUser or FsGroup are set)
- UsePodSecurityContext: true
- # Set RunAsUser to 1000 to let Jenkins run as non-root user 'jenkins' which exists in 'jenkins/jenkins' docker image.
- # When setting RunAsUser to a different value than 0 also set FsGroup to the same value:
- # RunAsUser:
- # FsGroup:
- ServicePort: 8080
+ # jenkinsUriPrefix: "/jenkins"
+ # Enable pod security context (must be `true` if runAsUser or fsGroup are set)
+ usePodSecurityContext: true
+ # Set runAsUser to 1000 to let Jenkins run as non-root user 'jenkins' which exists in 'jenkins/jenkins' docker image.
+ # When setting runAsUser to a different value than 0 also set fsGroup to the same value:
+ # runAsUser:
+ # fsGroup:
+ servicePort: 8080
+ targetPort: 8080
# For minikube, set this to NodePort, elsewhere use LoadBalancer
# Use ClusterIP if your setup includes ingress controller
- ServiceType: LoadBalancer
- # Master Service annotations
- ServiceAnnotations: {}
- # Master Custom Labels
- DeploymentLabels:
+ serviceType: LoadBalancer
+ # Jenkins master service annotations
+ serviceAnnotations: {}
+ # Jenkins master custom labels
+ deploymentLabels: {}
# foo: bar
# bar: foo
- # Master Service Labels
- ServiceLabels: {}
+ # Jenkins master service labels
+ serviceLabels: {}
# service.beta.kubernetes.io/aws-load-balancer-backend-protocol: https
+ # Put labels on Jenkins master pod
+ podLabels: {}
# Used to create Ingress record (should used with ServiceType: ClusterIP)
- # HostName: jenkins.cluster.local
- # NodePort:
+ # requires additional javaOpts, i.e.
+ # javaOpts: >
# -Dcom.sun.management.jmxremote.port=4000
# -Dcom.sun.management.jmxremote.authenticate=false
# -Dcom.sun.management.jmxremote.ssl=false
- # JMXPort: 4000
- # Optionally configure other ports to expose in the Master container
- ExtraPorts:
+ # jmxPort: 4000
+ # Optionally configure other ports to expose in the master container
+ extraPorts:
# - name: BuildInfoProxy
# port: 9000
+
# List of plugins to be install during Jenkins master start
- InstallPlugins:
+ installPlugins:
- kubernetes:1.14.0
- workflow-job:2.31
- workflow-aggregator:2.6
- credentials-binding:1.17
- git:3.9.1
+
+ # Enable to always override the installed plugins with the values of 'master.installPlugins' on upgrade or redeployment.
+ # overwritePlugins: true
# Enable HTML parsing using OWASP Markup Formatter Plugin (antisamy-markup-formatter), useful with ghprb plugin.
- # The plugin is not installed by default, please update Master.InstallPlugins.
- # EnableRawHtmlMarkupFormatter: true
+ # The plugin is not installed by default, please update master.installPlugins.
+ enableRawHtmlMarkupFormatter: false
# Used to approve a list of groovy functions in pipelines used the script-security plugin. Can be viewed under /scriptApproval
- # ScriptApproval:
- # - "method groovy.json.JsonSlurperClassic parseText java.lang.String"
- # - "new groovy.json.JsonSlurperClassic"
+ scriptApproval:
+ # - "method groovy.json.JsonSlurperClassic parseText java.lang.String"
+ # - "new groovy.json.JsonSlurperClassic"
# List of groovy init scripts to be executed during Jenkins master start
- InitScripts:
+ initScripts:
# - |
# print 'adding global pipeline libraries, register properties, bootstrap jobs...'
# Kubernetes secret that contains a 'credentials.xml' for Jenkins
- # CredentialsXmlSecret: jenkins-credentials
+ # credentialsXmlSecret: jenkins-credentials
# Kubernetes secret that contains files to be put in the Jenkins 'secrets' directory,
# useful to manage encryption keys used for credentials.xml for instance (such as
# master.key and hudson.util.Secret)
- # SecretsFilesSecret: jenkins-secrets
+ # secretsFilesSecret: jenkins-secrets
# Jenkins XML job configs to provision
- # Jobs:
- # test: |-
- #     <<xml here>>
- CustomConfigMap: false
- # By default, the configMap is only used to set the initial config the first time
- # that the chart is installed. Setting `OverwriteConfig` to `true` will overwrite
- # the jenkins config with the contents of the configMap every time the pod starts.
- OverwriteConfig: false
+ jobs:
+ # test: |-
+ #     <<xml here>>
+
+ # Below is the implementation of Jenkins Configuration as Code. Add a key under configScripts for each configuration area,
+ # where each corresponds to a plugin or section of the UI. Each key (prior to | character) is just a label, and can be any value.
+ # Keys are only used to give the section a meaningful name. The only restriction is they may only contain RFC 1123 / DNS label
+ # characters: lowercase letters, numbers, and hyphens. The keys become the name of a configuration yaml file on the master in
+ # /var/jenkins_home/casc_configs (by default) and will be processed by the Configuration as Code Plugin. The lines after each |
+ # become the content of the configuration yaml file. The first line after this is a JCasC root element, eg jenkins, credentials,
+ # etc. Best reference is https://<jenkins_url>/configuration-as-code/reference. The example below creates a welcome message:
+ JCasC:
+ enabled: false
+ pluginVersion: 1.5
+ supportPluginVersion: 1.5
+ configScripts:
+ welcome-message: |
+ jenkins:
+ systemMessage: Welcome to our CI\CD server. This Jenkins is configured and managed 'as code'.
+
+ # Optionally specify additional init-containers
+ customInitContainers: []
+ # - name: custom-init
+ # image: "alpine:3.7"
+ # imagePullPolicy: Always
+ # command: [ "uname", "-a" ]
+
+ sidecars:
+ configAutoReload:
+ # If enabled: true, Jenkins Configuration as Code will be reloaded on-the-fly without a reboot. If false or not-specified,
+ # jcasc changes will cause a reboot and will only be applied at the subsequent start-up. Auto-reload uses the Jenkins CLI
+ # over SSH to reapply config when changes to the configScripts are detected. The admin user (or account you specify in
+ # master.adminUser) will have a random SSH private key (RSA 4096) assigned unless you specify adminSshKey. This will be saved to a k8s secret.
+ enabled: false
+ image: shadwell/k8s-sidecar:0.0.2
+ imagePullPolicy: IfNotPresent
+ resources:
+ # limits:
+ # cpu: 100m
+ # memory: 100Mi
+ # requests:
+ # cpu: 50m
+ # memory: 50Mi
+ # SSH port value can be set to any unused TCP port. The default, 1044, is a non-standard SSH port that has been chosen at random.
+ # Is only used to reload jcasc config from the sidecar container running in the Jenkins master pod.
+ # This TCP port will not be open in the pod (unless you specifically configure this), so Jenkins will not be
+ # accessible via SSH from outside of the pod. Note if you use non-root pod privileges (runAsUser & fsGroup),
+ # this must be > 1024:
+ sshTcpPort: 1044
+ # folder in the pod that should hold the collected JCasC config files:
+ folder: "/var/jenkins_home/casc_configs"
+ # If specified, the sidecar will search for JCasC config-maps inside this namespace.
+ # Otherwise the namespace in which the sidecar is running will be used.
+ # It's also possible to specify ALL to search in all namespaces:
+ # searchNamespace:
+
+ # Allows you to inject additional/other sidecars
+ other:
+ ## The example below runs the https://smee.io client as a sidecar container next to Jenkins,
+ ## which allows triggering builds from behind a secure firewall.
+ ## https://jenkins.io/blog/2019/01/07/webhook-firewalls/#triggering-builds-with-webhooks-behind-a-secure-firewall
+ ##
+ ## Note: To use it you should go to https://smee.io/new and update the url to the generated one.
+ # - name: smee
+ # image: docker.io/twalter/smee-client:1.0.2
+ # args: ["--port", "{{ .Values.master.servicePort }}", "--path", "/github-webhook/", "--url", "https://smee.io/new"]
+ # resources:
+ # limits:
+ # cpu: 50m
+ # memory: 128Mi
+ # requests:
+ # cpu: 10m
+ # memory: 32Mi
# Node labels and tolerations for pod assignment
# ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
# ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature
- NodeSelector: {}
- Tolerations: {}
- PodAnnotations: {}
+ nodeSelector: {}
+ tolerations: []
+ # Leverage a priorityClass to ensure your pods survive resource shortages
+ # ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
+ # priorityClass: system-cluster-critical
+ podAnnotations: {}
- Ingress:
- ApiVersion: extensions/v1beta1
- Annotations: {}
- # kubernetes.io/ingress.class: nginx
- # kubernetes.io/tls-acme: "true"
+ # The below two configuration-related values are deprecated and replaced by Jenkins Configuration as Code (see above
+ # JCasC key). They will be deleted in an upcoming version.
+ customConfigMap: false
+ # By default, the configMap is only used to set the initial config the first time
+ # that the chart is installed. Setting `overwriteConfig` to `true` will overwrite
+ # the jenkins config with the contents of the configMap every time the pod starts.
+ # This will also overwrite all init scripts
+ overwriteConfig: false
- # Set this path to JenkinsUriPrefix above or use annotations to rewrite path
- # Path: "/jenkins"
+ # By default, the Jobs Map is only used to set the initial jobs the first time
+ # that the chart is installed. Setting `overwriteJobs` to `true` will overwrite
+ # the jenkins jobs configuration with the contents of Jobs every time the pod starts.
+ overwriteJobs: false
- TLS:
+ ingress:
+ enabled: false
+ # For Kubernetes v1.14+, use 'networking.k8s.io/v1beta1'
+ apiVersion: "extensions/v1beta1"
+ labels: {}
+ annotations: {}
+ # kubernetes.io/ingress.class: nginx
+ # kubernetes.io/tls-acme: "true"
+ # Set this path to jenkinsUriPrefix above or use annotations to rewrite path
+ # path: "/jenkins"
+ # configures the hostname e.g. jenkins.example.com
+ hostName:
+ tls:
# - secretName: jenkins.cluster.local
# hosts:
# - jenkins.cluster.local
- AdditionalConfig: {}
-Agent:
- Enabled: true
- Image: jenkins/jnlp-slave
- ImageTag: 3.27-1
- CustomJenkinsLabels: []
-# ImagePullSecret: jenkins
- Component: "jenkins-slave"
- Privileged: false
+ # Openshift route
+ route:
+ enabled: false
+ labels: {}
+ annotations: {}
+ # path: "/jenkins"
+
+ additionalConfig: {}
+
+ # master.hostAliases allows for adding entries to Pod /etc/hosts:
+ # https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/
+ hostAliases: []
+ # - ip: 192.168.50.50
+ # hostnames:
+ # - something.local
+ # - ip: 10.0.50.50
+ # hostnames:
+ # - other.local
+
+ # Expose Prometheus metrics
+ prometheus:
+ # If enabled, add the prometheus plugin to the list of plugins to install
+ # https://plugins.jenkins.io/prometheus
+ enabled: false
+ # Additional labels to add to the ServiceMonitor object
+ serviceMonitorAdditionalLabels: {}
+ scrapeInterval: 60s
+ # This is the default endpoint used by the prometheus plugin
+ scrapeEndpoint: /prometheus
+ # Additional labels to add to the PrometheusRule object
+ alertingRulesAdditionalLabels: {}
+ # An array of prometheus alerting rules
+ # See here: https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/
+ # The `groups` root object is added by default; simply add the rule entries
+ alertingrules: []
+
+agent:
+ enabled: true
+ image: "jenkins/jnlp-slave"
+ imageTag: "3.27-1"
+ customJenkinsLabels: []
+ # name of the secret to be used for image pulling
+ imagePullSecretName:
+ componentName: "jenkins-slave"
+ privileged: false
resources:
requests:
cpu: "200m"
@@ -187,29 +344,59 @@ Agent:
cpu: "200m"
memory: "256Mi"
# You may want to change this to true while testing a new image
- AlwaysPullImage: false
+ alwaysPullImage: false
# Controls how slave pods are retained after the Jenkins build completes
# Possible values: Always, Never, OnFailure
- PodRetention: Never
+ podRetention: "Never"
# You can define the volumes that you want to mount for this container
# Allowed types are: ConfigMap, EmptyDir, HostPath, Nfs, Pod, Secret
# Configure the attributes as they appear in the corresponding Java class for that type
# https://github.com/jenkinsci/kubernetes-plugin/tree/master/src/main/java/org/csanchez/jenkins/plugins/kubernetes/volumes
+ # Pod-wide environment; these vars are visible to any container in the slave pod
+ envVars:
+ # - name: PATH
+ # value: /usr/local/bin
volumes:
# - type: Secret
# secretName: mysecret
# mountPath: /var/myapp/mysecret
- NodeSelector: {}
+ nodeSelector: {}
# Key Value selectors. Ex:
# jenkins-agent: v1
-Persistence:
- Enabled: true
+ # Command executed when the side container starts
+ command:
+ args:
+ # Side container name
+ sideContainerName: "jnlp"
+ # Doesn't allocate pseudo TTY by default
+ TTYEnabled: false
+ # Max number of spawned agents
+ containerCap: 10
+ # Pod name
+ podName: "default"
+ # Allows the Pod to remain active for reuse until the configured number of
+ # minutes has passed since the last step was executed on it.
+ idleMinutes: 0
+ # Raw yaml template for the Pod. For example, this allows the use of tolerations for agent pods.
+ # https://github.com/jenkinsci/kubernetes-plugin#using-yaml-to-define-pod-templates
+ # https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
+ yamlTemplate:
+ # yamlTemplate: |-
+ # apiVersion: v1
+ # kind: Pod
+ # spec:
+ # tolerations:
+ # - key: "key"
+ # operator: "Equal"
+ # value: "value"
+
+persistence:
+ enabled: true
## A manually managed Persistent Volume and Claim
- ## Requires Persistence.Enabled: true
+ ## Requires persistence.enabled: true
## If defined, PVC must be created manually before volume will be bound
- # ExistingClaim:
-
+ existingClaim:
## jenkins data Persistent Volume Storage Class
## If defined, storageClassName:
## If set to "-", storageClassName: "", which disables dynamic provisioning
@@ -217,11 +404,10 @@ Persistence:
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
- # StorageClass: "-"
-
- Annotations: {}
- AccessMode: ReadWriteOnce
- Size: 8Gi
+ storageClass:
+ annotations: {}
+ accessMode: "ReadWriteOnce"
+ size: "8Gi"
volumes:
# - name: nothing
# emptyDir: {}
@@ -230,23 +416,30 @@ Persistence:
# name: nothing
# readOnly: true
-NetworkPolicy:
+networkPolicy:
# Enable creation of NetworkPolicy resources.
- Enabled: false
+ enabled: false
# For Kubernetes v1.4, v1.5 and v1.6, use 'extensions/v1beta1'
# For Kubernetes v1.7, use 'networking.k8s.io/v1'
- ApiVersion: networking.k8s.io/v1
+ apiVersion: networking.k8s.io/v1
## Install Default RBAC roles and bindings
rbac:
- install: false
- serviceAccountName: default
- # Role reference
- roleRef: cluster-admin
- # Role kind (Role or ClusterRole)
- roleKind: ClusterRole
- # Role binding kind (RoleBinding or ClusterRoleBinding)
- roleBindingKind: ClusterRoleBinding
+ create: true
+
+serviceAccount:
+ create: true
+ # The name of the service account is autogenerated by default
+ name:
+ annotations: {}
+
+serviceAccountAgent:
+ # Specifies whether a ServiceAccount should be created
+ create: false
+ # The name of the ServiceAccount to use.
+ # If not set and create is true, a name is generated using the fullname template
+ name:
+ annotations: {}
## Backup cronjob configuration
## Ref: https://github.com/nuvo/kube-tasks
@@ -254,30 +447,26 @@ backup:
# Backup must use RBAC
# So by enabling backup you are enabling RBAC specific for backup
enabled: false
-
+ # Used for label app.kubernetes.io/component
+ componentName: "backup"
# Schedule to run jobs. Must be in cron time format
# Ref: https://crontab.guru/
schedule: "0 2 * * *"
-
annotations:
# Example for authorization to AWS S3 using kube2iam
# Can also be done using environment variables
- iam.amazonaws.com/role: jenkins
-
+ iam.amazonaws.com/role: "jenkins"
image:
- repository: nuvo/kube-tasks
- tag: 0.1.2
-
+ repository: "nuvo/kube-tasks"
+ tag: "0.1.2"
# Additional arguments for kube-tasks
# Ref: https://github.com/nuvo/kube-tasks#simple-backup
extraArgs: []
-
# Add additional environment variables
env:
# Example environment variable required for AWS credentials chain
- - name: AWS_REGION
- value: us-east-1
-
+ - name: "AWS_REGION"
+ value: "us-east-1"
resources:
requests:
memory: 1Gi
@@ -285,9 +474,9 @@ backup:
limits:
memory: 1Gi
cpu: 1
-
# Destination to store the backup artifacts
# Supported cloud storage services: AWS S3, Minio S3, Azure Blob Storage
# Additional support can added. Visit this repository for details
# Ref: https://github.com/nuvo/skbn
- destination: s3://nuvo-jenkins-data/backup
+ destination: "s3://nuvo-jenkins-data/backup"
+checkDeprecation: true
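To illustrate the new `master.prometheus` block in the values file above, a minimal override might look like the following; the alert name, metric, and threshold are illustrative assumptions, not part of the chart:

```yaml
# Hypothetical values override: enable the prometheus plugin integration and
# add one alerting rule. The chart wraps these entries in a `groups:` root.
master:
  prometheus:
    enabled: true
    scrapeInterval: 60s
    alertingrules:
      - alert: JenkinsQueueTooLong          # illustrative alert name
        expr: jenkins_queue_size_value > 10 # placeholder metric name
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Jenkins build queue has exceeded 10 for 10 minutes"
```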
diff --git a/stable/joomla/Chart.yaml b/stable/joomla/Chart.yaml
index 9a5977c7ee76..6f3b1d522030 100644
--- a/stable/joomla/Chart.yaml
+++ b/stable/joomla/Chart.yaml
@@ -1,6 +1,7 @@
+apiVersion: v1
name: joomla
-version: 4.0.3
-appVersion: 3.9.2
+version: 4.2.2
+appVersion: 3.9.6
description: PHP content management system (CMS) for publishing web content
keywords:
- joomla
diff --git a/stable/joomla/README.md b/stable/joomla/README.md
index 3b6abd2a696e..397a81148c91 100644
--- a/stable/joomla/README.md
+++ b/stable/joomla/README.md
@@ -14,7 +14,7 @@ This chart bootstraps a [Joomla!](https://github.com/bitnami/bitnami-docker-joom
It also packages the [Bitnami MariaDB chart](https://github.com/kubernetes/charts/tree/master/stable/mariadb) which bootstraps a MariaDB deployment required by the Joomla! application.
-Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters.
+Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters. This chart has been tested to work with NGINX Ingress, cert-manager, fluentd and Prometheus on top of the [BKPR](https://kubeprod.io/).
## Prerequisites
@@ -50,6 +50,7 @@ The following table lists the configurable parameters of the Joomla! chart and t
| Parameter | Description | Default |
| ------------------------------------ | ----------------------------------------------------------- | ---------------------------------------------- |
| `global.imageRegistry` | Global Docker image registry | `nil` |
+| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) |
| `image.registry` | Joomla! image registry | `docker.io` |
| `image.repository` | Joomla! Image name | `bitnami/joomla` |
| `image.tag` | Joomla! Image tag | `{VERSION}` |
diff --git a/stable/joomla/templates/_helpers.tpl b/stable/joomla/templates/_helpers.tpl
index 90fb598eb4aa..9862ea70291f 100644
--- a/stable/joomla/templates/_helpers.tpl
+++ b/stable/joomla/templates/_helpers.tpl
@@ -56,9 +56,57 @@ We truncate at 63 chars because some Kubernetes name fields are limited to this
{{/*
Return the proper image name (for the metrics image)
*/}}
-{{- define "metrics.image" -}}
-{{- $registryName := .Values.metrics.image.registry -}}
+{{- define "joomla.metrics.image" -}}
+{{- $registryName := .Values.metrics.image.registry -}}
{{- $repositoryName := .Values.metrics.image.repository -}}
{{- $tag := .Values.metrics.image.tag | toString -}}
-{{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+{{/*
+Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
+but Helm 2.9 and 2.10 don't support it, so we need to implement this if-else logic.
+Also, we can't use a single if because lazy evaluation is not an option
+*/}}
+{{- if .Values.global }}
+ {{- if .Values.global.imageRegistry }}
+ {{- printf "%s/%s:%s" .Values.global.imageRegistry $repositoryName $tag -}}
+ {{- else -}}
+ {{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+ {{- end -}}
+{{- else -}}
+ {{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Return the proper Docker Image Registry Secret Names
+*/}}
+{{- define "joomla.imagePullSecrets" -}}
+{{/*
+Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
+but Helm 2.9 and 2.10 do not support it, so we need to implement this if-else logic.
+Also, we can not use a single if because lazy evaluation is not an option
+*/}}
+{{- if .Values.global }}
+{{- if .Values.global.imagePullSecrets }}
+imagePullSecrets:
+{{- range .Values.global.imagePullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- else if or .Values.image.pullSecrets .Values.metrics.image.pullSecrets }}
+imagePullSecrets:
+{{- range .Values.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- range .Values.metrics.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- end -}}
+{{- else if or .Values.image.pullSecrets .Values.metrics.image.pullSecrets }}
+imagePullSecrets:
+{{- range .Values.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- range .Values.metrics.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- end -}}
{{- end -}}
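Given the precedence encoded in the `joomla.imagePullSecrets` helper above (a global list wins over the per-image lists), a values sketch with placeholder secret names:

```yaml
# Hypothetical values: because global.imagePullSecrets is set, the helper
# renders only the global secret and ignores the per-image lists below.
global:
  imagePullSecrets:
    - my-global-registry-secret     # placeholder name
image:
  pullSecrets:
    - my-joomla-registry-secret     # ignored while the global list is set
metrics:
  image:
    pullSecrets: []
```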
diff --git a/stable/joomla/templates/deployment.yaml b/stable/joomla/templates/deployment.yaml
index 78b7ad1de524..b75116083a2f 100644
--- a/stable/joomla/templates/deployment.yaml
+++ b/stable/joomla/templates/deployment.yaml
@@ -30,12 +30,7 @@ spec:
{{- end }}
{{- end }}
spec:
- {{- if .Values.image.pullSecrets }}
- imagePullSecrets:
- {{- range .Values.image.pullSecrets }}
- - name: {{ . }}
- {{- end}}
- {{- end }}
+{{- include "joomla.imagePullSecrets" . | indent 6 }}
hostAliases:
- ip: "127.0.0.1"
hostnames:
@@ -144,7 +139,7 @@ spec:
mountPath: /bitnami/apache
{{- if .Values.metrics.enabled }}
- name: metrics
- image: {{ template "metrics.image" . }}
+ image: {{ template "joomla.metrics.image" . }}
imagePullPolicy: {{ .Values.metrics.image.pullPolicy | quote }}
command: [ '/bin/apache_exporter', '-scrape_uri', 'http://status.localhost:80/server-status/?auto']
ports:
diff --git a/stable/joomla/values.yaml b/stable/joomla/values.yaml
index ad4e9d8073a3..df8a226f9729 100644
--- a/stable/joomla/values.yaml
+++ b/stable/joomla/values.yaml
@@ -1,8 +1,11 @@
-## Global Docker image registry
-## Please, note that this will override the image registry for all the images, including dependencies, configured to use the global value
+## Global Docker image parameters
+## Please, note that this will override the image parameters, including dependencies, configured to use the global value
+## Current available global Docker image parameters: imageRegistry and imagePullSecrets
##
# global:
-# imageRegistry:
+# imageRegistry: myRegistryName
+# imagePullSecrets:
+# - myRegistryKeySecretName
## Bitnami Joomla! image version
## ref: https://hub.docker.com/r/bitnami/dokuwiki/tags/
@@ -10,7 +13,7 @@
image:
registry: docker.io
repository: bitnami/joomla
- tag: 3.9.2
+ tag: 3.9.6
## Specify a imagePullPolicy
## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
@@ -21,7 +24,7 @@ image:
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
- # - myRegistrKeySecretName
+ # - myRegistryKeySecretName
## User of the application
## ref: https://github.com/bitnami/bitnami-docker-joomla#environment-variables
@@ -278,7 +281,7 @@ metrics:
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
- # - myRegistrKeySecretName
+ # - myRegistryKeySecretName
## Metrics exporter pod Annotation and Labels
podAnnotations:
diff --git a/stable/k8s-spot-termination-handler/Chart.yaml b/stable/k8s-spot-termination-handler/Chart.yaml
index 34124bfdc20f..e9f77466d190 100644
--- a/stable/k8s-spot-termination-handler/Chart.yaml
+++ b/stable/k8s-spot-termination-handler/Chart.yaml
@@ -1,14 +1,14 @@
apiVersion: v1
-appVersion: "0.1.0"
+appVersion: "1.13.0-1"
description: The K8s Spot Termination handler handles draining AWS Spot Instances in response to termination requests.
name: k8s-spot-termination-handler
-version: 0.1.0
+version: 1.2.1
keywords:
- spot
- termination
-home: https://github.com/pusher/k8s-spot-termination-handler
+home: https://github.com/kube-aws/kube-spot-termination-notice-handler
sources:
- - https://github.com/pusher/k8s-spot-termination-handler
+ - https://github.com/kube-aws/kube-spot-termination-notice-handler
maintainers:
- name: kierranm
email: kierranm@gmail.com
diff --git a/stable/k8s-spot-termination-handler/README.md b/stable/k8s-spot-termination-handler/README.md
new file mode 100644
index 000000000000..61c5c4ece927
--- /dev/null
+++ b/stable/k8s-spot-termination-handler/README.md
@@ -0,0 +1,45 @@
+# Kubernetes AWS EC2 Spot Termination Notice Handler
+
+This chart installs the [k8s-spot-termination-handler](https://github.com/kube-aws/kube-spot-termination-notice-handler)
+as a daemonset across the cluster nodes.
+
+## Purpose
+
+Spot instances on EC2 come with significant cost savings, but with the burden of instance being terminated if
+the market price goes higher than the maximum price you have configured.
+
+The termination handler watches the AWS Metadata API for termination requests and starts draining the node
+so that it can be terminated safely. Optionally, it can also send a message to a Slack channel informing you that
+a termination notice has been received.
+
+## Installation
+
+You should install into the `kube-system` namespace, but this is not a requirement. The following example assumes this has been chosen.
+
+```
+helm install stable/k8s-spot-termination-handler --namespace kube-system
+```
+
+## Configuration
+
+The following table lists the configurable parameters of the k8s-spot-termination-handler chart and their default values.
+
+Parameter | Description | Default
+--- | --- | ---
+`image.repository` | container image repository | `kubeaws/kube-spot-termination-notice-handler`
+`image.tag` | container image tag | `1.13.0-1`
+`image.pullPolicy` | container image pull policy | `IfNotPresent`
+`pollInterval` | the interval in seconds between attempts to poll EC2 metadata API for termination events | `"5"`
+`slackUrl` | Slack webhook URL to send messages when a termination notice is received | _not defined_
+`clusterName` | if `slackUrl` is set - use this cluster name in Slack messages | _not defined_
+`enableLogspout` | if `true`, enable the Logspout log capturing. Logspout should be deployed separately | `false`
+`rbac.create` | if `true`, create & use RBAC resources | `true`
+`serviceAccount.create` | if `true`, create a service account | `true`
+`serviceAccount.name` | the name of the service account to use. If not set and `create` is `true`, a name is generated using the fullname template. | ``
+`detachAsg` | if `true`, the spot termination handler will detect the (standard) AutoScaling Group and initiate a detach when a termination notice is detected. | `false`
+`gracePeriod` | Grace period for node draining | `120`
+`resources` | pod resource requests & limits | `{}`
+`nodeSelector` | node labels for pod assignment | `{}`
+`tolerations` | node taints to tolerate (requires Kubernetes >=1.6) | `[]`
+`affinity` | node/pod affinities (requires Kubernetes >=1.6) | `{}`
+`priorityClassName` | `priorityClassName` for the pods | ``
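+
+Alternatively, collect overrides in a values file and pass it with `-f`. A minimal sketch, saved as a
+hypothetical `my-values.yaml` (all values are placeholders to adapt):
+
+```yaml
+slackUrl: https://hooks.slack.com/services/YOUR/WEBHOOK/URL
+clusterName: production
+detachAsg: true
+gracePeriod: 180
+```
+
+```bash
+helm install stable/k8s-spot-termination-handler --namespace kube-system -f my-values.yaml
+```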
diff --git a/stable/k8s-spot-termination-handler/templates/NOTES.txt b/stable/k8s-spot-termination-handler/templates/NOTES.txt
index 3b258f154d1d..eab1b9d7d5ee 100644
--- a/stable/k8s-spot-termination-handler/templates/NOTES.txt
+++ b/stable/k8s-spot-termination-handler/templates/NOTES.txt
@@ -1,3 +1,3 @@
To verify that k8s-spot-termination-handler has started, run:
- kubectl --namespace={{ .Release.Namespace }} get pods -l "app={{ template "k8s-spot-termination-handler.name" . }},release={{ .Release.Name }}"
\ No newline at end of file
+ kubectl --namespace={{ .Release.Namespace }} get pods -l "app.kubernetes.io/name={{ include "k8s-spot-termination-handler.name" . }},app.kubernetes.io/instance={{ .Release.Name }}"
diff --git a/stable/k8s-spot-termination-handler/templates/_helpers.tpl b/stable/k8s-spot-termination-handler/templates/_helpers.tpl
index f3d23d855021..3190a4158f70 100644
--- a/stable/k8s-spot-termination-handler/templates/_helpers.tpl
+++ b/stable/k8s-spot-termination-handler/templates/_helpers.tpl
@@ -40,4 +40,4 @@ Create the name of the service account to use
{{- else -}}
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
-{{- end -}}
\ No newline at end of file
+{{- end -}}
diff --git a/stable/k8s-spot-termination-handler/templates/clusterrole.yaml b/stable/k8s-spot-termination-handler/templates/clusterrole.yaml
index d730d886c607..6a17d0e7aa39 100644
--- a/stable/k8s-spot-termination-handler/templates/clusterrole.yaml
+++ b/stable/k8s-spot-termination-handler/templates/clusterrole.yaml
@@ -4,10 +4,10 @@ kind: ClusterRole
metadata:
name: {{ template "k8s-spot-termination-handler.fullname" . }}
labels:
- app: {{ template "k8s-spot-termination-handler.name" . }}
- chart: {{ template "k8s-spot-termination-handler.chart" . }}
- release: {{ .Release.Name }}
- heritage: {{ .Release.Service }}
+ app.kubernetes.io/name: {{ template "k8s-spot-termination-handler.name" . }}
+ helm.sh/chart: {{ template "k8s-spot-termination-handler.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
rules:
# For draining nodes
- apiGroups:
@@ -16,12 +16,14 @@ rules:
- nodes
verbs:
- get
- - update
+ - list
+ - patch
- apiGroups:
- ""
resources:
- pods
verbs:
+ - get
- list
- apiGroups:
- extensions
@@ -30,10 +32,18 @@ rules:
- daemonsets
verbs:
- get
+ - list
+ - apiGroups:
+ - apps
+ resources:
+ - statefulsets
+ verbs:
+ - get
+ - list
- apiGroups:
- ""
resources:
- pods/eviction
verbs:
- create
-{{- end}}
\ No newline at end of file
+{{- end}}
diff --git a/stable/k8s-spot-termination-handler/templates/clusterrolebinding.yaml b/stable/k8s-spot-termination-handler/templates/clusterrolebinding.yaml
index 5cd0942b0b9f..492558c826a6 100644
--- a/stable/k8s-spot-termination-handler/templates/clusterrolebinding.yaml
+++ b/stable/k8s-spot-termination-handler/templates/clusterrolebinding.yaml
@@ -4,10 +4,10 @@ kind: ClusterRoleBinding
metadata:
name: {{ template "k8s-spot-termination-handler.fullname" . }}
labels:
- app: {{ template "k8s-spot-termination-handler.name" . }}
- chart: {{ template "k8s-spot-termination-handler.chart" . }}
- release: {{ .Release.Name }}
- heritage: {{ .Release.Service }}
+ app.kubernetes.io/name: {{ template "k8s-spot-termination-handler.name" . }}
+ helm.sh/chart: {{ template "k8s-spot-termination-handler.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
@@ -15,5 +15,5 @@ roleRef:
subjects:
- kind: ServiceAccount
name: {{ template "k8s-spot-termination-handler.serviceAccountName" . }}
- namespace: {{ .Release.Namespace }}
-{{- end}}
\ No newline at end of file
+ namespace: {{ .Release.Namespace | quote }}
+{{- end}}
diff --git a/stable/k8s-spot-termination-handler/templates/daemonset.yaml b/stable/k8s-spot-termination-handler/templates/daemonset.yaml
index 7abc119c3439..64fbdc2331f0 100644
--- a/stable/k8s-spot-termination-handler/templates/daemonset.yaml
+++ b/stable/k8s-spot-termination-handler/templates/daemonset.yaml
@@ -1,28 +1,53 @@
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
- name: {{ include "k8s-spot-termination-handler.fullname" . }}
+ name: {{ template "k8s-spot-termination-handler.fullname" . }}
labels:
- app.kubernetes.io/name: {{ include "k8s-spot-termination-handler.name" . }}
- helm.sh/chart: {{ include "k8s-spot-termination-handler.chart" . }}
+ app.kubernetes.io/name: {{ template "k8s-spot-termination-handler.name" . }}
+ helm.sh/chart: {{ template "k8s-spot-termination-handler.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
template:
metadata:
labels:
- app.kubernetes.io/name: {{ include "k8s-spot-termination-handler.name" . }}
+ app.kubernetes.io/name: {{ template "k8s-spot-termination-handler.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
spec:
+ serviceAccountName: {{ template "k8s-spot-termination-handler.serviceAccountName" . }}
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
- - name: NODE_NAME
+ {{- if not .Values.enableLogspout }}
+ - name: LOGSPOUT
+ value: "ignore"
+ {{- end }}
+ {{- with .Values.slackUrl }}
+ - name: SLACK_URL
+ value: {{ . | quote }}
+ {{- end }}
+ {{- with .Values.detachAsg }}
+ - name: DETACH_ASG
+ value: {{ . | quote }}
+ {{- end }}
+ {{- with .Values.gracePeriod }}
+ - name: GRACE_PERIOD
+ value: {{ . | quote }}
+ {{- end }}
+ - name: POLL_INTERVAL
+ value: {{ .Values.pollInterval | quote }}
+ - name: CLUSTER
+ value: {{ .Values.clusterName | quote }}
+ - name: POD_NAME
valueFrom:
fieldRef:
- fieldPath: spec.nodeName
+ fieldPath: metadata.name
+ - name: NAMESPACE
+ valueFrom:
+ fieldRef:
+ fieldPath: metadata.namespace
resources:
{{ toYaml .Values.resources | indent 12 }}
{{- with .Values.nodeSelector }}
@@ -37,3 +62,6 @@ spec:
tolerations:
{{ toYaml . | indent 8 }}
{{- end }}
+ {{- if .Values.priorityClassName }}
+ priorityClassName: {{ .Values.priorityClassName }}
+ {{- end }}
diff --git a/stable/k8s-spot-termination-handler/templates/serviceaccount.yaml b/stable/k8s-spot-termination-handler/templates/serviceaccount.yaml
index 3ebc059bc597..492e73e1a75e 100644
--- a/stable/k8s-spot-termination-handler/templates/serviceaccount.yaml
+++ b/stable/k8s-spot-termination-handler/templates/serviceaccount.yaml
@@ -4,8 +4,8 @@ kind: ServiceAccount
metadata:
name: {{ template "k8s-spot-termination-handler.serviceAccountName" . }}
labels:
- app: {{ template "k8s-spot-termination-handler.name" . }}
- chart: {{ template "k8s-spot-termination-handler.chart" . }}
- release: {{ .Release.Name }}
- heritage: {{ .Release.Service }}
-{{- end }}
\ No newline at end of file
+ app.kubernetes.io/name: {{ template "k8s-spot-termination-handler.name" . }}
+ helm.sh/chart: {{ template "k8s-spot-termination-handler.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+{{- end }}
diff --git a/stable/k8s-spot-termination-handler/values.yaml b/stable/k8s-spot-termination-handler/values.yaml
index ca6a71f989eb..249aaff65396 100644
--- a/stable/k8s-spot-termination-handler/values.yaml
+++ b/stable/k8s-spot-termination-handler/values.yaml
@@ -13,26 +13,50 @@ serviceAccount:
name:
image:
- repository: quay.io/pusher/k8s-spot-termination-handler
- tag: v0.1.0
+ repository: kubeaws/kube-spot-termination-notice-handler
+ tag: 1.13.0-1
pullPolicy: IfNotPresent
+# Poll the metadata API every pollInterval seconds for termination events:
+pollInterval: 5
+
+# Send notifications to a Slack webhook URL - replace with your own value and uncomment:
+# slackUrl: https://hooks.slack.com/services/EXAMPLE123/EXAMPLE123/example1234567
+
+# Set the cluster name to be reported in a Slack message
+# clusterName: test
+
+# Silence logspout by default - set to true to enable logs arriving in logspout
+enableLogspout: false
+
+# Trigger instance removal from AutoScaling Group on termination notice
+detachAsg: false
+
+# Grace period for node draining
+gracePeriod: 120
+
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
- # requests:
- # cpu: 5m
- # memory: 20Mi
# limits:
# cpu: 100m
- # memory: 100Mi
+ # memory: 128Mi
+ # requests:
+ # cpu: 10m
+ # memory: 32Mi
+
+# Add a priority class to the daemonset
+priorityClassName: ""
-# By default, schedule only on spot workers
-nodeSelector:
- "node-role.kubernetes.io/spot-worker": "true"
+nodeSelector: {}
+ # "node-role.kubernetes.io/spot-worker": "true"
tolerations: []
+ # - key: "dedicated"
+ # operator: "Equal"
+ # value: "gpu"
+ # effect: "NoSchedule"
affinity: {}
diff --git a/stable/kafka-manager/Chart.yaml b/stable/kafka-manager/Chart.yaml
index 5648a81d0b74..e44c50ee0299 100644
--- a/stable/kafka-manager/Chart.yaml
+++ b/stable/kafka-manager/Chart.yaml
@@ -1,6 +1,7 @@
+apiVersion: v1
name: kafka-manager
-version: 1.1.0
-appVersion: 1.3.3.18
+version: 2.1.4
+appVersion: 1.3.3.22
kubeVersion: "^1.8.0-0"
description: A tool for managing Apache Kafka.
home: https://github.com/yahoo/kafka-manager
@@ -14,4 +15,6 @@ keywords:
maintainers:
- name: giacomoguiulfo
email: giacomoguiulfo@gmail.com
+- name: ssalaues
+ email: salim.salaues@scality.com
engine: gotpl
diff --git a/stable/kafka-manager/OWNERS b/stable/kafka-manager/OWNERS
index fa510b0036ea..b1df4d8802eb 100644
--- a/stable/kafka-manager/OWNERS
+++ b/stable/kafka-manager/OWNERS
@@ -1,4 +1,6 @@
approvers:
- giacomoguiulfo
+- ssalaues
reviewers:
- giacomoguiulfo
+- ssalaues
diff --git a/stable/kafka-manager/README.md b/stable/kafka-manager/README.md
index d15e46277d49..a989860ed73f 100644
--- a/stable/kafka-manager/README.md
+++ b/stable/kafka-manager/README.md
@@ -43,7 +43,7 @@ Parameter | Description | Default
`serviceAccount.create` | If true, create a service account for kafka-manager | `true`
`serviceAccount.name` | Name of the service account to create or use | `{{ kafka-manager.fullname }}`
`image.repository` | Container image repository | `zenko/kafka-manager`
-`image.tag` | Container image tag | `1.0.0`
+`image.tag` | Container image tag | `1.3.3.22`
`image.pullPolicy` | Container image pull policy | `IfNotPresent`
`zkHosts` | Zookeeper hosts required by the kafka-manager | `localhost:2181`
`clusters` | Configuration of the clusters to manage | `{}`
@@ -54,6 +54,7 @@ Parameter | Description | Default
`javaOptions` | Java runtime options | `""`
`service.type` | Kafka-manager service type | `ClusterIP`
`service.port` | Kafka-manager service port | `9000`
+`service.annotations` | Optional service annotations | `{}`
`ingress.enabled` | If true, create an ingress resource | `false`
`ingress.annotations` | Optional ingress annotations | `{}`
`ingress.path` | Ingress path | `/`
diff --git a/stable/kafka-manager/templates/_helpers.tpl b/stable/kafka-manager/templates/_helpers.tpl
index 78750c153a7c..a84fbb96285a 100644
--- a/stable/kafka-manager/templates/_helpers.tpl
+++ b/stable/kafka-manager/templates/_helpers.tpl
@@ -9,11 +9,20 @@ Expand the name of the chart.
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
+If release name contains chart name it will be used as a full name.
*/}}
{{- define "kafka-manager.fullname" -}}
+{{- if .Values.fullnameOverride -}}
+{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
+{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
+{{- if contains $name .Release.Name -}}
+{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
+{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
+{{- end -}}
+{{- end -}}
{{/*
Create a default fully qualified bootstrap job name.
diff --git a/stable/kafka-manager/templates/configmap.yaml b/stable/kafka-manager/templates/configmap.yaml
index 5fae15bb6f28..2aef27fb0dd5 100644
--- a/stable/kafka-manager/templates/configmap.yaml
+++ b/stable/kafka-manager/templates/configmap.yaml
@@ -15,37 +15,37 @@ data:
set -e
{{- range $cluster := .Values.clusters }}
{{ printf "curl http://%s:9000/clusters -X POST -f " (include "kafka-manager.fullname" $) -}}
- {{- printf "-d name=%s " (default "default" $cluster.name) -}}
- {{- printf "-d zkHosts=%s " (default (include "kafka-manager.zkHosts" $) $cluster.zkHosts ) -}}
- {{- printf "-d kafkaVersion=%s " (default "1.0.0" $cluster.kafkaVersion) -}}
- {{- printf "-d jmxEnabled=%s " (default "false" $cluster.jmxEnabled) -}}
- {{- printf "-d jmxUser=%s " (default "" $cluster.jmxUser) -}}
- {{- printf "-d jmxPass=%s " (default "" $cluster.jmxPass) -}}
- {{- printf "-d jmxSsl=%s " (default "false" $cluster.jmxSsl) -}}
- {{- printf "-d logkafkaEnabled=%s " (default "false" $cluster.logkafkaEnabled) -}}
- {{- printf "-d pollConsumers=%s " (default "false" $cluster.pollConsumers) -}}
- {{- printf "-d filterConsumers=%s " (default "false" $cluster.filterConsumers) -}}
- {{- printf "-d activeOffsetCacheEnabled=%s " (default "false" $cluster.activeOffsetCacheEnabled) -}}
- {{- printf "-d displaySizeEnabled=%s " (default "false" $cluster.displaySizeEnabled) -}}
- {{- printf "-d tuning.brokerViewUpdatePeriodSeconds=%s " (default "30" $cluster.tuning.brokerViewUpdatePeriodSeconds) -}}
- {{- printf "-d tuning.clusterManagerThreadPoolSize=%s " (default "2" $cluster.tuning.clusterManagerThreadPoolSize) -}}
- {{- printf "-d tuning.clusterManagerThreadPoolQueueSize=%s " (default "100" $cluster.tuning.clusterManagerThreadPoolQueueSize) -}}
- {{- printf "-d tuning.kafkaCommandThreadPoolSize=%s " (default "2" $cluster.tuning.kafkaCommandThreadPoolSize) -}}
- {{- printf "-d tuning.kafkaCommandThreadPoolQueueSize=%s " (default "100" $cluster.tuning.kafkaCommandThreadPoolQueueSize) -}}
- {{- printf "-d tuning.logkafkaCommandThreadPoolSize=%s " (default "2" $cluster.tuning.logkafkaCommandThreadPoolSize) -}}
- {{- printf "-d tuning.logkafkaCommandThreadPoolQueueSize=%s " (default "100" $cluster.tuning.logkafkaCommandThreadPoolQueueSize) -}}
- {{- printf "-d tuning.logkafkaUpdatePeriodSeconds=%s " (default "30" $cluster.tuning.logkafkaUpdatePeriodSeconds) -}}
- {{- printf "-d tuning.partitionOffsetCacheTimeoutSecs=%s " (default "5" $cluster.tuning.partitionOffsetCacheTimeoutSecs) -}}
- {{- printf "-d tuning.brokerViewThreadPoolSize=%s " (default "4" $cluster.tuning.brokerViewThreadPoolSize) -}}
- {{- printf "-d tuning.brokerViewThreadPoolQueueSize=%s " (default "1000" $cluster.tuning.brokerViewThreadPoolQueueSize) -}}
- {{- printf "-d tuning.offsetCacheThreadPoolSize=%s " (default "4" $cluster.tuning.offsetCacheThreadPoolSize) -}}
- {{- printf "-d tuning.offsetCacheThreadPoolQueueSize=%s " (default "1000" $cluster.tuning.offsetCacheThreadPoolQueueSize) -}}
- {{- printf "-d tuning.kafkaAdminClientThreadPoolSize=%s " (default "4" $cluster.tuning.kafkaAdminClientThreadPoolSize) -}}
- {{- printf "-d tuning.kafkaAdminClientThreadPoolQueueSize=%s " (default "1000" $cluster.tuning.kafkaAdminClientThreadPoolQueueSize) -}}
- {{- printf "-d tuning.kafkaManagedOffsetMetadataCheckMillis=%s " (default "30000" $cluster.tuning.kafkaManagedOffsetMetadataCheckMillis) -}}
- {{- printf "-d tuning.kafkaManagedOffsetGroupCacheSize=%s " (default "1000000" $cluster.tuning.kafkaManagedOffsetGroupCacheSize) -}}
- {{- printf "-d tuning.kafkaManagedOffsetGroupExpireDays=%s " (default "7" $cluster.tuning.kafkaManagedOffsetGroupExpireDays) -}}
- {{- printf "-d securityProtocol=%s " (default "PLAINTEXT" $cluster.securityProtocol) -}}
+ {{- printf "-d name=%v " (default "default" $cluster.name) -}}
+ {{- printf "-d zkHosts=%v " (default (include "kafka-manager.zkHosts" $) $cluster.zkHosts) -}}
+ {{- printf "-d kafkaVersion=%v " (default "1.0.0" $cluster.kafkaVersion) -}}
+ {{- printf "-d jmxEnabled=%v " (default "false" $cluster.jmxEnabled) -}}
+ {{- printf "-d jmxUser=%v " (default "" $cluster.jmxUser) -}}
+ {{- printf "-d jmxPass=%v " (default "" $cluster.jmxPass) -}}
+ {{- printf "-d jmxSsl=%v " (default "false" $cluster.jmxSsl) -}}
+ {{- printf "-d logkafkaEnabled=%v " (default "false" $cluster.logkafkaEnabled) -}}
+ {{- printf "-d pollConsumers=%v " (default "false" $cluster.pollConsumers) -}}
+ {{- printf "-d filterConsumers=%v " (default "false" $cluster.filterConsumers) -}}
+ {{- printf "-d activeOffsetCacheEnabled=%v " (default "false" $cluster.activeOffsetCacheEnabled) -}}
+ {{- printf "-d displaySizeEnabled=%v " (default "false" $cluster.displaySizeEnabled) -}}
+ {{- printf "-d tuning.brokerViewUpdatePeriodSeconds=%v " (default "30" $cluster.tuning.brokerViewUpdatePeriodSeconds) -}}
+ {{- printf "-d tuning.clusterManagerThreadPoolSize=%v " (default "2" $cluster.tuning.clusterManagerThreadPoolSize) -}}
+ {{- printf "-d tuning.clusterManagerThreadPoolQueueSize=%v " (default "100" $cluster.tuning.clusterManagerThreadPoolQueueSize) -}}
+ {{- printf "-d tuning.kafkaCommandThreadPoolSize=%v " (default "2" $cluster.tuning.kafkaCommandThreadPoolSize) -}}
+ {{- printf "-d tuning.kafkaCommandThreadPoolQueueSize=%v " (default "100" $cluster.tuning.kafkaCommandThreadPoolQueueSize) -}}
+ {{- printf "-d tuning.logkafkaCommandThreadPoolSize=%v " (default "2" $cluster.tuning.logkafkaCommandThreadPoolSize) -}}
+ {{- printf "-d tuning.logkafkaCommandThreadPoolQueueSize=%v " (default "100" $cluster.tuning.logkafkaCommandThreadPoolQueueSize) -}}
+ {{- printf "-d tuning.logkafkaUpdatePeriodSeconds=%v " (default "30" $cluster.tuning.logkafkaUpdatePeriodSeconds) -}}
+ {{- printf "-d tuning.partitionOffsetCacheTimeoutSecs=%v " (default "5" $cluster.tuning.partitionOffsetCacheTimeoutSecs) -}}
+ {{- printf "-d tuning.brokerViewThreadPoolSize=%v " (default "4" $cluster.tuning.brokerViewThreadPoolSize) -}}
+ {{- printf "-d tuning.brokerViewThreadPoolQueueSize=%v " (default "1000" $cluster.tuning.brokerViewThreadPoolQueueSize) -}}
+ {{- printf "-d tuning.offsetCacheThreadPoolSize=%v " (default "4" $cluster.tuning.offsetCacheThreadPoolSize) -}}
+ {{- printf "-d tuning.offsetCacheThreadPoolQueueSize=%v " (default "1000" $cluster.tuning.offsetCacheThreadPoolQueueSize) -}}
+ {{- printf "-d tuning.kafkaAdminClientThreadPoolSize=%v " (default "4" $cluster.tuning.kafkaAdminClientThreadPoolSize) -}}
+ {{- printf "-d tuning.kafkaAdminClientThreadPoolQueueSize=%v " (default "1000" $cluster.tuning.kafkaAdminClientThreadPoolQueueSize) -}}
+ {{- printf "-d tuning.kafkaManagedOffsetMetadataCheckMillis=%v " (default "30000" $cluster.tuning.kafkaManagedOffsetMetadataCheckMillis) -}}
+ {{- printf "-d tuning.kafkaManagedOffsetGroupCacheSize=%v " (default "1000000" $cluster.tuning.kafkaManagedOffsetGroupCacheSize) -}}
+ {{- printf "-d tuning.kafkaManagedOffsetGroupExpireDays=%v " (default "7" $cluster.tuning.kafkaManagedOffsetGroupExpireDays) -}}
+ {{- printf "-d securityProtocol=%v " (default "PLAINTEXT" $cluster.securityProtocol) -}}
{{- printf "$( if $KAFKA_MANAGER_AUTH_ENABLED; then echo -u $KAFKA_MANAGER_USERNAME:$KAFKA_MANAGER_PASSWORD ; fi ) " -}}
{{- end -}}
{{- end -}}
diff --git a/stable/kafka-manager/templates/service.yaml b/stable/kafka-manager/templates/service.yaml
index 0b3cc46213f9..cca776c7216b 100644
--- a/stable/kafka-manager/templates/service.yaml
+++ b/stable/kafka-manager/templates/service.yaml
@@ -8,6 +8,10 @@ metadata:
app.kubernetes.io/name: {{ include "kafka-manager.name" . }}
app.kubernetes.io/version: {{ .Chart.AppVersion | replace "+" "_" | trunc 63 }}
helm.sh/chart: {{ include "kafka-manager.chart" . }}
+{{- with .Values.service.annotations }}
+ annotations:
+{{ toYaml . | indent 4 }}
+{{- end }}
spec:
type: {{ .Values.service.type }}
ports:
diff --git a/stable/kafka-manager/values.yaml b/stable/kafka-manager/values.yaml
index 8d1dcc2e13f7..af57c5cec0ec 100644
--- a/stable/kafka-manager/values.yaml
+++ b/stable/kafka-manager/values.yaml
@@ -16,7 +16,7 @@ serviceAccount:
##
image:
repository: zenko/kafka-manager
- tag: 1.3.3.18
+ tag: 1.3.3.22
pullPolicy: IfNotPresent
## Kafka-manager zookeeper hosts. Default to localhost:2181 or
@@ -97,6 +97,7 @@ javaOptions: ""
service:
type: ClusterIP
port: 9000
+ annotations: {}
## Ingress configuration
## Ref: https://kubernetes.io/docs/concepts/services-networking/ingress/
diff --git a/stable/kapacitor/Chart.yaml b/stable/kapacitor/Chart.yaml
index 362303b4ddc8..ecc22879f040 100755
--- a/stable/kapacitor/Chart.yaml
+++ b/stable/kapacitor/Chart.yaml
@@ -1,5 +1,6 @@
+apiVersion: v1
name: kapacitor
-version: 1.1.1
+version: 1.1.2
appVersion: 1.5.2
description: InfluxDB's native data processing engine. It can process both stream
and batch data from InfluxDB.
diff --git a/stable/karma/Chart.yaml b/stable/karma/Chart.yaml
index 10197b3906f1..b0690529166b 100644
--- a/stable/karma/Chart.yaml
+++ b/stable/karma/Chart.yaml
@@ -1,12 +1,12 @@
apiVersion: v1
-appVersion: "v0.21"
+appVersion: "v0.37"
description: A Helm chart for Karma - an UI for Prometheus Alertmanager
name: karma
home: https://github.com/prymitive/karma
sources:
- https://hub.docker.com/r/lmierzwa/karma/
- https://github.com/prymitive/karma
-version: 1.1.9
+version: 1.1.14
maintainers:
- name: davidkarlsen
email: david@davidkarlsen.com
diff --git a/stable/karma/README.md b/stable/karma/README.md
index ca57a2c86b2e..9a3e0a031dbf 100644
--- a/stable/karma/README.md
+++ b/stable/karma/README.md
@@ -41,7 +41,7 @@ The following table lists the configurable parameters of the karma chart and the
|-------------------------------------|----------------------------------------|-------------------------------------------|
| `replicaCount` | Number of replicas | `1` |
| `image.repository` | The image to run | `lmierzwa/karma` |
-| `image.tag` | The image tag to pull | `v0.21` |
+| `image.tag` | The image tag to pull | `v0.37` |
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `nameOverride` | Override name of app | `` |
| `fullnameOverride` | Override full name of app | `` |
diff --git a/stable/karma/templates/deployment.yaml b/stable/karma/templates/deployment.yaml
index e5289ef15503..01442778c75b 100644
--- a/stable/karma/templates/deployment.yaml
+++ b/stable/karma/templates/deployment.yaml
@@ -18,6 +18,10 @@ spec:
labels:
app.kubernetes.io/name: {{ include "karma.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
+ {{- if .Values.configMap.enabled }}
+ annotations:
+ checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
+ {{- end }}
spec:
serviceAccountName: {{ template "karma.serviceAccountName" . }}
containers:
@@ -73,4 +77,3 @@ spec:
configMap:
name: {{ .Release.Name }}-config
{{- end }}
-
diff --git a/stable/karma/values.yaml b/stable/karma/values.yaml
index 8e9f80b1bf24..176eae506d03 100644
--- a/stable/karma/values.yaml
+++ b/stable/karma/values.yaml
@@ -6,7 +6,7 @@ replicaCount: 1
image:
repository: lmierzwa/karma
- tag: v0.21
+ tag: v0.37
pullPolicy: IfNotPresent
nameOverride: ""
diff --git a/stable/keycloak/Chart.yaml b/stable/keycloak/Chart.yaml
index 29b74c26e296..4abdb453f65d 100644
--- a/stable/keycloak/Chart.yaml
+++ b/stable/keycloak/Chart.yaml
@@ -1,7 +1,7 @@
name: keycloak
-version: 4.3.0
-appVersion: 4.8.3.Final
-description: Open Source Identity and Access Management For Modern Applications and Services
+version: 4.10.1
+appVersion: 5.0.0
+description: DEPRECATED - Open Source Identity and Access Management For Modern Applications and Services
keywords:
- sso
- idm
@@ -13,8 +13,5 @@ home: https://www.keycloak.org/
icon: https://www.keycloak.org/resources/images/keycloak_logo_480x108.png
sources:
- https://github.com/jboss-dockerfiles/keycloak
-maintainers:
- - name: unguiculus
- email: unguiculus@gmail.com
- - name: thomasdarimont
- email: thomas.darimont+github@gmail.com
+# Chart moved to https://github.com/codecentric/helm-charts
+deprecated: true
diff --git a/stable/keycloak/README.md b/stable/keycloak/README.md
index 6f8e4083a7c5..05a03c92839d 100644
--- a/stable/keycloak/README.md
+++ b/stable/keycloak/README.md
@@ -1,4 +1,15 @@
-# Keycloak
+# DEPRECATED - Keycloak
+
+**This chart has been deprecated and moved to its new home:**
+
+- **GitHub repo:** https://github.com/codecentric/helm-charts
+- **Charts repo:** https://codecentric.github.io/helm-charts
+
+```bash
+helm repo add codecentric https://codecentric.github.io/helm-charts
+```
+
+---
[Keycloak](http://www.keycloak.org/) is an open source identity and access management for modern applications and services.
@@ -46,26 +57,32 @@ Parameter | Description | Default
`clusterDomain` | The internal Kubernetes cluster domain | `cluster.local`
`keycloak.replicas` | The number of Keycloak replicas | `1`
`keycloak.image.repository` | The Keycloak image repository | `jboss/keycloak`
-`keycloak.image.tag` | The Keycloak image tag | `4.8.3.Final`
+`keycloak.image.tag` | The Keycloak image tag | `5.0.0`
`keycloak.image.pullPolicy` | The Keycloak image pull policy | `IfNotPresent`
`keycloak.image.pullSecrets` | Image pull secrets | `[]`
`keycloak.basepath` | Path keycloak is hosted at | `auth`
`keycloak.username` | Username for the initial Keycloak admin user | `keycloak`
-`keycloak.password` | Password for the initial Keycloak admin user. If not set, a random 10 characters password is created | `""`
+`keycloak.password` | Password for the initial Keycloak admin user (if `keycloak.existingSecret=""`). If not set, a random 10-character password is created | `""`
+`keycloak.existingSecret` | Specifies an existing secret to be used for the admin password | `""`
+`keycloak.existingSecretKey` | The key in `keycloak.existingSecret` that stores the admin password | `password`
`keycloak.extraInitContainers` | Additional init containers, e. g. for providing themes, etc. Passed through the `tpl` function and thus to be configured a string | `""`
`keycloak.extraContainers` | Additional sidecar containers, e. g. for a database proxy, such as Google's cloudsql-proxy. Passed through the `tpl` function and thus to be configured a string | `""`
`keycloak.extraEnv` | Allows the specification of additional environment variables for Keycloak. Passed through the `tpl` function and thus to be configured a string | `""`
`keycloak.extraVolumeMounts` | Add additional volumes mounts, e. g. for custom themes. Passed through the `tpl` function and thus to be configured a string | `""`
`keycloak.extraVolumes` | Add additional volumes, e. g. for custom themes. Passed through the `tpl` function and thus to be configured a string | `""`
+`keycloak.extraPorts` | Add additional ports, e. g. for custom admin console port. Passed through the `tpl` function and thus to be configured as a string | `""`
`keycloak.podDisruptionBudget` | Pod disruption budget | `{}`
+`keycloak.priorityClassName` | Pod priority class name | `{}`
`keycloak.resources` | Pod resource requests and limits | `{}`
`keycloak.affinity` | Pod affinity. Passed through the `tpl` function and thus to be configured a string | `Hard node and soft zone anti-affinity`
`keycloak.nodeSelector` | Node labels for pod assignment | `{}`
`keycloak.tolerations` | Node taints to tolerate | `[]`
+`keycloak.podLabels` | Extra labels to add to pod | `{}`
`keycloak.podAnnotations` | Extra annotations to add to pod | `{}`
`keycloak.hostAliases` | Mapping between IP and hostnames that will be injected as entries in the pod's hosts files | `[]`
`keycloak.securityContext` | Security context for the pod | `{runAsUser: 1000, fsGroup: 1000, runAsNonRoot: true}`
`keycloak.preStartScript` | Custom script to run before Keycloak starts up | ``
+`keycloak.lifecycleHooks` | Container lifecycle hooks. Passed through the `tpl` function and thus to be configured as a string | ``
`keycloak.extraArgs` | Additional arguments to the start command | ``
`keycloak.livenessProbe.initialDelaySeconds` | Liveness Probe `initialDelaySeconds` | `120`
`keycloak.livenessProbe.timeoutSeconds` | Liveness Probe `timeoutSeconds` | `5`
@@ -98,9 +115,11 @@ Parameter | Description | Default
`postgresql.postgresUser` | The PostgreSQL user (if `keycloak.persistence.deployPostgres=true`) | `keycloak`
`postgresql.postgresPassword` | The PostgreSQL password (if `keycloak.persistence.deployPostgres=true`) | `""`
`postgresql.postgresDatabase` | The PostgreSQL database (if `keycloak.persistence.deployPostgres=true`) | `keycloak`
+`test.enabled` | If `true`, test pods get scheduled | `true`
`test.image.repository` | Test image repository | `unguiculus/docker-python3-phantomjs-selenium`
`test.image.tag` | Test image tag | `v1`
`test.image.pullPolicy` | Test image pull policy | `IfNotPresent`
+`test.securityContext` | Security context for the test pod | `{runAsUser: 1000, fsGroup: 1000, runAsNonRoot: true}`
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`.
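+
+For example, to use a pre-created secret for the admin password (secret name, key, and password below are
+placeholders):
+
+```bash
+kubectl create secret generic keycloak-admin --from-literal=password=changeme
+helm install stable/keycloak \
+  --set keycloak.existingSecret=keycloak-admin \
+  --set keycloak.existingSecretKey=password
+```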
diff --git a/stable/keycloak/ci/h2-values.yaml b/stable/keycloak/ci/h2-values.yaml
index a8bad27cdadf..60140bd096a8 100644
--- a/stable/keycloak/ci/h2-values.yaml
+++ b/stable/keycloak/ci/h2-values.yaml
@@ -1 +1,2 @@
-# No config change. Just use defaults.
+keycloak:
+ password: keycloak
diff --git a/stable/keycloak/ci/postgres-ha-values.yaml b/stable/keycloak/ci/postgres-ha-values.yaml
index 55a2fa08f576..2b7355814624 100644
--- a/stable/keycloak/ci/postgres-ha-values.yaml
+++ b/stable/keycloak/ci/postgres-ha-values.yaml
@@ -1,5 +1,6 @@
keycloak:
replicas: 3
+ password: keycloak
persistence:
deployPostgres: true
dbVendor: postgres
diff --git a/stable/keycloak/templates/NOTES.txt b/stable/keycloak/templates/NOTES.txt
index eb04e032b5bb..3d6cc6150681 100644
--- a/stable/keycloak/templates/NOTES.txt
+++ b/stable/keycloak/templates/NOTES.txt
@@ -1,3 +1,10 @@
+**********************************************************************
+This chart has been DEPRECATED and moved to its new home:
+
+* GitHub repo: https://github.com/codecentric/helm-charts
+* Charts repo: https://codecentric.github.io/helm-charts
+
+**********************************************************************
Keycloak can be accessed:
diff --git a/stable/keycloak/templates/keycloak-secret.yaml b/stable/keycloak/templates/keycloak-secret.yaml
index 7346d749a30e..4598643d1fd7 100644
--- a/stable/keycloak/templates/keycloak-secret.yaml
+++ b/stable/keycloak/templates/keycloak-secret.yaml
@@ -1,3 +1,4 @@
+{{- if not .Values.keycloak.existingSecret -}}
apiVersion: v1
kind: Secret
metadata:
@@ -10,7 +11,8 @@ metadata:
type: Opaque
data:
{{- if .Values.keycloak.password }}
- password: {{ .Values.keycloak.password | b64enc | quote }}
+ {{ .Values.keycloak.existingSecretKey }}: {{ .Values.keycloak.password | b64enc | quote }}
{{- else }}
- password: {{ randAlphaNum 10 | b64enc | quote }}
+ {{ .Values.keycloak.existingSecretKey }}: {{ randAlphaNum 10 | b64enc | quote }}
{{- end }}
+{{- end}}
diff --git a/stable/keycloak/templates/statefulset.yaml b/stable/keycloak/templates/statefulset.yaml
index accf478ea2ed..a0abdd9923d5 100644
--- a/stable/keycloak/templates/statefulset.yaml
+++ b/stable/keycloak/templates/statefulset.yaml
@@ -25,6 +25,9 @@ spec:
labels:
app: {{ template "keycloak.name" . }}
release: "{{ .Release.Name }}"
+ {{- if .Values.keycloak.podLabels }}
+{{ toYaml .Values.keycloak.podLabels | indent 8 }}
+ {{- end }}
{{- if .Values.keycloak.podAnnotations }}
annotations:
{{ toYaml .Values.keycloak.podAnnotations | indent 8 }}
@@ -68,6 +71,10 @@ spec:
imagePullPolicy: {{ .Values.keycloak.image.pullPolicy }}
command:
- /scripts/keycloak.sh
+ {{- if .Values.keycloak.lifecycleHooks }}
+ lifecycle:
+{{ tpl .Values.keycloak.lifecycleHooks . | indent 12 }}
+ {{- end }}
env:
{{- if .Release.IsInstall }}
- name: KEYCLOAK_USER
@@ -75,8 +82,12 @@ spec:
- name: KEYCLOAK_PASSWORD
valueFrom:
secretKeyRef:
+ {{- if .Values.keycloak.existingSecret }}
+ name: {{ .Values.keycloak.existingSecret }}
+ {{- else }}
name: {{ template "keycloak.fullname" . }}-http
- key: password
+ {{- end }}
+ key: {{ .Values.keycloak.existingSecretKey }}
{{- end }}
{{- if $highAvailability }}
- name: JGROUPS_DISCOVERY_PROTOCOL
@@ -103,6 +114,9 @@ spec:
containerPort: 7600
protocol: TCP
{{- end }}
+{{- with .Values.keycloak.extraPorts }}
+{{ tpl . $ | indent 12 }}
+{{- end }}
livenessProbe:
httpGet:
path: {{ if ne .Values.keycloak.basepath "" }}/{{ .Values.keycloak.basepath }}{{ end }}/
@@ -132,6 +146,9 @@ spec:
tolerations:
{{ toYaml . | indent 8 }}
{{- end }}
+{{- if .Values.keycloak.priorityClassName }}
+ priorityClassName: {{ .Values.keycloak.priorityClassName }}
+{{- end }}
terminationGracePeriodSeconds: 60
volumes:
- name: scripts
diff --git a/stable/keycloak/templates/test/test-configmap.yaml b/stable/keycloak/templates/test/test-configmap.yaml
index 9a60ab1b06e2..ec9e2d301cb2 100644
--- a/stable/keycloak/templates/test/test-configmap.yaml
+++ b/stable/keycloak/templates/test/test-configmap.yaml
@@ -1,3 +1,4 @@
+{{- if .Values.test.enabled }}
apiVersion: v1
kind: ConfigMap
metadata:
@@ -17,7 +18,7 @@ data:
from urllib.parse import urlparse
print('Creating PhantomJS driver...')
- driver = webdriver.PhantomJS()
+ driver = webdriver.PhantomJS(service_log_path='/tmp/ghostdriver.log')
base_url = 'http://{{ template "keycloak.fullname" . }}-http{{ if ne 80 (int .Values.keycloak.service.port) }}{{ .Values.keycloak.service.port }}{{ end }}'
@@ -53,3 +54,4 @@ data:
print('URLs match. Login successful.')
driver.quit()
+{{- end }}
diff --git a/stable/keycloak/templates/test/test-pod.yaml b/stable/keycloak/templates/test/test-pod.yaml
index d99db4cf7583..c240f711d2d1 100644
--- a/stable/keycloak/templates/test/test-pod.yaml
+++ b/stable/keycloak/templates/test/test-pod.yaml
@@ -1,3 +1,4 @@
+{{- if .Values.test.enabled }}
apiVersion: v1
kind: Pod
metadata:
@@ -11,6 +12,8 @@ metadata:
annotations:
"helm.sh/hook": test-success
spec:
+ securityContext:
+{{ toYaml .Values.test.securityContext | indent 8 }}
containers:
- name: {{ .Chart.Name }}-test
image: "{{ .Values.test.image.repository }}:{{ .Values.test.image.tag }}"
@@ -34,3 +37,4 @@ spec:
configMap:
name: {{ template "keycloak.fullname" . }}-test
restartPolicy: Never
+{{- end }}
diff --git a/stable/keycloak/values.yaml b/stable/keycloak/values.yaml
index 45495be50217..b466431f4365 100644
--- a/stable/keycloak/values.yaml
+++ b/stable/keycloak/values.yaml
@@ -11,7 +11,7 @@ keycloak:
image:
repository: jboss/keycloak
- tag: 4.8.3.Final
+ tag: 5.0.0
pullPolicy: IfNotPresent
## Optionally specify an array of imagePullSecrets.
@@ -43,16 +43,28 @@ keycloak:
## Custom script that is run before Keycloak is started.
preStartScript:
+ ## lifecycleHooks defines the container lifecycle hooks
+ lifecycleHooks: |
+ # postStart:
+ # exec:
+ # command: ["/bin/sh", "-c", "ls"]
+
## Additional arguments to start command e.g. -Dkeycloak.import= to load a realm
extraArgs: ""
## Username for the initial Keycloak admin user
username: keycloak
- ## Password for the initial Keycloak admin user
+ ## Password for the initial Keycloak admin user. Applicable only if existingSecret is not set.
## If not set, a random 10 characters password will be used
password: ""
+ ## Specifies an existing secret to be used for the admin password
+ existingSecret: ""
+
+ ## The key in the existing secret that stores the password
+ existingSecretKey: password
+
## Allows the specification of additional environment variables for Keycloak
extraEnv: |
# - name: KEYCLOAK_LOGLEVEL
@@ -96,8 +108,13 @@ keycloak:
topologyKey: failure-domain.beta.kubernetes.io/zone
nodeSelector: {}
+ priorityClassName: ""
tolerations: []
+ ## Additional pod labels
+ ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
+ podLabels: {}
+
## Extra Annotations to be added to pod
podAnnotations: {}
@@ -141,6 +158,9 @@ keycloak:
extraVolumes: |
extraVolumeMounts: |
+ ## Add additional ports, e.g. for a custom admin console
+ extraPorts: |
+
podDisruptionBudget: {}
# maxUnavailable: 1
# minAvailable: 1
@@ -232,7 +252,12 @@ postgresql:
enabled: false
test:
+ enabled: true
image:
repository: unguiculus/docker-python3-phantomjs-selenium
tag: v1
pullPolicy: IfNotPresent
+ securityContext:
+ runAsUser: 1000
+ fsGroup: 1000
+ runAsNonRoot: true
diff --git a/stable/kiam/Chart.yaml b/stable/kiam/Chart.yaml
index 89fa96f22d9e..66c89402a559 100644
--- a/stable/kiam/Chart.yaml
+++ b/stable/kiam/Chart.yaml
@@ -1,6 +1,7 @@
+apiVersion: v1
name: kiam
-version: 2.0.1-rc6
-appVersion: 3.0-rc1
+version: 2.3.0
+appVersion: 3.2
description: Integrate AWS IAM with Kubernetes
keywords:
- kiam
diff --git a/stable/kiam/README.md b/stable/kiam/README.md
index 8932091b6705..f7d3fedbaf68 100644
--- a/stable/kiam/README.md
+++ b/stable/kiam/README.md
@@ -17,7 +17,8 @@ This chart bootstraps a [kiam](https://github.com/uswitch/kiam) deployment on a
## Installing the Chart
-In order for the chart to configure kiam correctly during the installation process you should have created and installed TLS certificates and private keys as described [here](https://github.com/uswitch/kiam/blob/master/docs/TLS.md).
+The chart generates a self-signed TLS certificate by default.
+If you want to provide your own, create TLS certificates and private keys as described [here](https://github.com/uswitch/kiam/blob/master/docs/TLS.md).
> **Tip**: The `hosts` field in the kiam server certificate should include the value _release-name_-server:_server-service-port_, e.g. `my-release-server:443`
@@ -26,7 +27,7 @@ In order for the chart to configure kiam correctly during the installation proce
{"level":"warning","msg":"error finding role for pod: rpc error: code = Unavailable desc = there is no connection available","pod.ip":"100.120.0.2","time":"2018-05-24T04:11:25Z"}
```
-Define values `agent.tlsFiles.ca`, `agent.tlsFiles.cert`, `agent.tlsFiles.key`, `server.tlsFiles.ca`, `server.tlsFiles.cert` and `agent.tlsFiles.key` to be the base64-encoded contents (.e.g. using the `base64` command) of the generated PEM files.
+Define values `agent.tlsFiles.ca`, `agent.tlsFiles.cert`, `agent.tlsFiles.key`, `server.tlsFiles.ca`, `server.tlsFiles.cert` and `server.tlsFiles.key` to be the base64-encoded contents (e.g. using the `base64` command) of the generated PEM files.
For example
```yaml
@@ -94,7 +95,7 @@ Parameter | Description | Default
`agent.enabled` | If true, create agent | `true`
`agent.name` | Agent container name | `agent`
`agent.image.repository` | Agent image | `quay.io/uswitch/kiam`
-`agent.image.tag` | Agent image tag | `v2.8`
+`agent.image.tag` | Agent image tag | `v3.2`
`agent.image.pullPolicy` | Agent image pull policy | `IfNotPresent`
`agent.dnsPolicy` | Agent pod DNS policy | `ClusterFirstWithHostNet`
`agent.extraArgs` | Additional agent container arguments | `{}`
@@ -113,7 +114,10 @@ Parameter | Description | Default
`agent.prometheus.syncInterval` | Agent Prometheus synchronization interval | `5s`
`agent.podAnnotations` | Annotations to be added to agent pods | `{}`
`agent.podLabels` | Labels to be added to agent pods | `{}`
+`agent.priorityClassName` | Priority class name for agent pods | `""`
`agent.resources` | Agent container resources | `{}`
+`agent.serviceAnnotations` | Annotations to be added to agent service | `{}`
+`agent.serviceLabels` | Labels to be added to agent service | `{}`
`agent.tlsSecret` | Secret name for the agent's TLS certificates | `null`
`agent.tlsFiles.ca` | Base64 encoded string for the agent's CA certificate(s) | `null`
`agent.tlsFiles.cert` | Base64 encoded strings for the agent's certificate | `null`
@@ -125,7 +129,7 @@ Parameter | Description | Default
`server.name` | Server container name | `server`
`server.gatewayTimeoutCreation` | Server's timeout when creating the kiam gateway | `50ms`
`server.image.repository` | Server image | `quay.io/uswitch/kiam`
-`server.image.tag` | Server image tag | `v2.8`
+`server.image.tag` | Server image tag | `v3.2`
`server.image.pullPolicy` | Server image pull policy | `Always`
`server.assumeRoleArn` | IAM role for the server to assume before processing requests | `null`
`server.cache.syncInterval` | Pod cache synchronization interval | `1m`
@@ -140,10 +144,13 @@ Parameter | Description | Default
`server.prometheus.syncInterval` | Server Prometheus synchronization interval | `5s`
`server.podAnnotations` | Annotations to be added to server pods | `{}`
`server.podLabels` | Labels to be added to server pods | `{}`
-`server.probes.serverAddress` | Address that readyness and liveness probes will hit | `localhost`
+`server.probes.serverAddress` | Address that readiness and liveness probes will hit | `127.0.0.1`
+`server.priorityClassName` | Priority class name for server pods | `""`
`server.resources` | Server container resources | `{}`
`server.roleBaseArn` | Base ARN for IAM roles. If not specified use EC2 metadata service to detect ARN prefix | `null`
`server.sessionDuration` | Session duration for STS tokens generated by the server | `15m`
+`server.serviceAnnotations` | Annotations to be added to server service | `{}`
+`server.serviceLabels` | Labels to be added to server service | `{}`
`server.service.port` | Server service port | `443`
`server.service.targetPort` | Server service target port | `443`
`server.tlsSecret` | Secret name for the server's TLS certificates | `null`
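The `tlsFiles.*` values above expect single-line base64 strings. A minimal sketch of producing one (file path and contents are placeholders; `-w0` is the GNU coreutils flag for disabling line wrapping — on BSD/macOS `base64` does not wrap by default):

```shell
# Create a throwaway PEM-like file, then encode it as a single
# unwrapped base64 line, the format the tlsFiles.* values expect.
printf 'dummy-pem-content' > /tmp/server-cert.pem
base64 -w0 /tmp/server-cert.pem
```

The resulting string can be pasted directly as e.g. `server.tlsFiles.cert` in a values file.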
diff --git a/stable/kiam/ci/test-values.yaml b/stable/kiam/ci/test-values.yaml
index 5a35cf559dee..4df72842d5b3 100644
--- a/stable/kiam/ci/test-values.yaml
+++ b/stable/kiam/ci/test-values.yaml
@@ -3,15 +3,15 @@ agent:
gatewayTimeoutCreation: 60s
tlsFiles:
# Base64-encoded PEMs.
- key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBNWE5eW9ZMnZqbFE3bFFaN0I1SWVEbFZoa0h6WHpUSmZ1RVhwQ0NDWU1iQ1pMRnJkCk1VTVBpNUYrdVdSVkVKOFV0MklDQVlmNnZjbDBrckkxeU9jakJhRnJtMEJQZ2dEL3R0ZVdnYmFhcmdxWEV2VHUKQjNFZEl6TDk4VEFXSWhtNnVJQjYvRUwxbWxieW5oZ0lpaUlaUnhYUXpXUCthenIyclpVSFBvT2MwZ0o1UWljdAovMWNjZzVLdm8xQ001R29wVUgzc1NnUUtaQmpRQzN0UW9FbGJuQW56elFUUjI5VTdCWWRadk9KN0p2RG1EWjA5CjNHcEtlZ1hmVVE3WE9jcVdRYjhmOStKYzVUSUlNazNhME51U1dtQmdSa1hWSVRRbVhnaERTeVo0QkdTR1NhcUgKclhPZFJxMy9sckpMN1F3bFZjTVpVSndmMk8wOGdFdFlLTjlTa1FJREFRQUJBb0lCQUQ5MlpybjBxQmt2ZFBjTQpQMW9zS1ZuVWhZeWlzZzNrYVVaRktzb3dGMTFEYWs4ekhBTE1nTE1UbEd3dEtNUGE4S0pxMWhzT00xM1ZGL3lnCmVQUDF5VnQ0Nm42UEdtalZWZEp6WndhWUtjMEU2QkU0MDd3Q3FRWmN4SVdydjdIVVloOHdnTXJLeFluTGxHWFMKUmluRW1pOWwrN2VFZFh1ell3MDdMREU5dEVyaUZmQkZXb29YbGNuVGg4Y3FtSmhlNWJXTzB5bHpFaEVVNktRUQpZVUNvQ3lVVC9xdHVSVGFJZ3V6NTAvQ0ZYOHdQaUhlbFBodHgzRE5EVFR5OEViSXg0b2ZRdmFEWWxPSXY5eitvCitIcXBtNDQvWmFVNm5udERjZE44dkt5bENNNitibTIvSGYvbnl6T29mT0d1eTJKOCt3WGtIYzVrWXlzWWxHcEUKWVBXTTNqRUNnWUVBL2tPcm5HR3FVRFVIMFAvQXI5eGw0NWN1Szdhc1R6aDZWcHd2aEUzVGJHMGxZek1TMGNNWApQRFN6Z3R4dUIrRHRDZWcwcDZrUFJSbzhLU1FIWnIzclp1T2lCckdkSTRmVGYxZURlVU9yQ3EyOGxaZnBKTmx1CkJRSWg1eXhWeHZ6TmxpR1VJZjEyeUYwNEZDRktFNWlPQWcvTmRHQUVkR0ZrQ3hCaE5Mazh6Yk1DZ1lFQTUwRFQKUzBhcjg1YTgxanRBUHJVQ1VmMGJZSXpyeU9qbnJvVmtmZCtJbVRaQnBpZWxNbW9YdUEwWlJQSVViTEdHZTJiQgpkVGowVlBycEZ1NkY5b2ZzUE52ajZCeEhoS1pPWmN5WDc5WGlBZjVCZEJoR2xQQzNobTJleEFuN0ZVdVNyaWtqCjNBOC9rcVRCMjdVSHJMOWR4U0pZY05KNklQdjZob0VKdER0NFpLc0NnWUE1SGZaMUFMT0RwUVlHZXcxTDlCU24KVlpTM21TZUgvRVh2SXRMQnc4SFV2NGdBaXI2VmhGKzUxSlRtdHFHNC8xd0FON3RzVmx2cHlBVHZzUHBBcURVegpQYnR1Q1lRbE1TUGZuVWNaZkl2MXNDV0c3VU1nVmYrUy9IR2xQcDVlUHZmbjI4OHMrNFV0YVZOcG9qakR3aWRVCmF6eGFBaCsrRFFxdU9aVzhoRWdXWlFLQmdRQ0EzVGJoTTdpT1BPbHQyQWFzNnVFb0h3c3FlbHpKMEQrS21QcXUKeWVtc3R2ZE9SN2xlcHBBaEYrdUU2QUZKc0lOb01KS05aL2QvZzNKd1BPcVp2cFIrTldxQzVYOVZBL2ViOHE2WQpEMitwL0swc3JIcG9kTnRRSmJYYk9GU2FRVXF6a21sUkw0NFZnWW9sakhPQ2FBRXc0VHEzWkJKNlh1LzBFK1A4CmMwZGJrUUtCZ1FEczZYbDhzWDB1cVRWcjJDdGFwdDJTc
ytCUGpaYXJ2WFBhMHI4RDN3RG50bkRzY0ZkNitLT2sKYlZCZ0Z1R1E4TVZpOUE5c3NLRWU4Y1c2VUk0dXgweERHRkpCNGRDWkdoSURPUzlLRnNuT0lpNVhudnA2YmxVZQpEU3RISVlIdnExSThlblIrV2dFYzA1aGNySHhZbXpZbE9GT3JBUXRRc0tsNFJ0dktqWjBkMnc9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
- cert: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQ2RENDQXRDZ0F3SUJBZ0lVVHBPaWJzYnZWcHJkbkhMTFlCTWJ2NGVOQlZBd0RRWUpLb1pJaHZjTkFRRUwKQlFBd2JERU1NQW9HQTFVRUJoTURWVk5CTVFzd0NRWURWUVFJRXdKRFFURVVNQklHQTFVRUJ4TUxURzl6SUVGdQpaMlZzWlhNeEdUQVhCZ05WQkFvVEVFTnNiM1ZrSUZCdmMzTmxMQ0JNVEVNeEREQUtCZ05WQkFzVEEwOXdjekVRCk1BNEdBMVVFQXhNSFMybGhiU0JEUVRBZUZ3MHhPVEF4TURJd09UVTRNREJhRncweU1EQXhNREl3T1RVNE1EQmEKTUc4eEREQUtCZ05WQkFZVEExVlRRVEVMTUFrR0ExVUVDQk1DUTBFeEZEQVNCZ05WQkFjVEMweHZjeUJCYm1kbApiR1Z6TVJrd0Z3WURWUVFLRXhCRGJHOTFaQ0JRYjNOelpTd2dURXhETVF3d0NnWURWUVFMRXdOUGNITXhFekFSCkJnTlZCQU1UQ2t0cFlXMGdRV2RsYm5Rd2dnRWlNQTBHQ1NxR1NJYjNEUUVCQVFVQUE0SUJEd0F3Z2dFS0FvSUIKQVFEbHIzS2hqYStPVkR1VkJuc0hraDRPVldHUWZOZk5NbCs0UmVrSUlKZ3hzSmtzV3QweFF3K0xrWDY1WkZVUQpueFMzWWdJQmgvcTl5WFNTc2pYSTV5TUZvV3ViUUUrQ0FQKzIxNWFCdHBxdUNwY1M5TzRIY1Iwak12M3hNQllpCkdicTRnSHI4UXZXYVZ2S2VHQWlLSWhsSEZkRE5ZLzVyT3ZhdGxRYytnNXpTQW5sQ0p5My9WeHlEa3EralVJemsKYWlsUWZleEtCQXBrR05BTGUxQ2dTVnVjQ2ZQTkJOSGIxVHNGaDFtODRuc204T1lOblQzY2FrcDZCZDlSRHRjNQp5cFpCdngvMzRsemxNZ2d5VGRyUTI1SmFZR0JHUmRVaE5DWmVDRU5MSm5nRVpJWkpxb2V0YzUxR3JmK1dza3Z0CkRDVlZ3eGxRbkIvWTdUeUFTMWdvMzFLUkFnTUJBQUdqZnpCOU1BNEdBMVVkRHdFQi93UUVBd0lGb0RBZEJnTlYKSFNVRUZqQVVCZ2dyQmdFRkJRY0RBUVlJS3dZQkJRVUhBd0l3REFZRFZSMFRBUUgvQkFJd0FEQWRCZ05WSFE0RQpGZ1FVNitkNTlmTDE5MWlpbS9aOTVrblZXc3poTm5Nd0h3WURWUjBqQkJnd0ZvQVVDdlh5MGl1dE80cVk0Yi9qCk1wZDdmelRZWEt3d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFIaHVvZVFGK2IxbUJ0b3k2ZnBybkdQTW02V2cKVFV3SDBzWlE3MlV6N0crZnAvSXBkY2NiejBoSWJlZCtpMy9xcGMwQVZiSVR2NzhoZEpkbHM5d1k5Rlg2V05hMQowVGtoTUt6K3BEMER6Y3V1VEo5OFVXWUU1TnZJVC8zQnYrcy9NQjV4VDRqTVVkMy9hbHlZMmVMRHM5RUhEUzFJCk9EMmRweVRnT1E3VXE0NTFJS1kvVUpGS1ZaeTAwYmhSeFBLYjBSRFRnUmtiNUZwSGRlT09xcWZwWHdEZjVJaG0KU0JvVUhydWVtN1Juc3o2LzdpVkN2ZWRZQmc3UXFDOVltanZ4WDRlZmNlZG13U0NBS1VLSzM0MEI0SXk0Q1NYZApkQ01WWGdpN1BhdVZJMVVxaC9vWW1jNE13dlBiaGtYTkZVbVpqbCtLODFYQmpMNDJvQ3UvdlNDZklIVT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
- ca: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURxRENDQXBDZ0F3SUJBZ0lVRkwwZHRPSE55d0pmaWFoTG54M0psZzd1MjBZd0RRWUpLb1pJaHZjTkFRRUwKQlFBd2JERU1NQW9HQTFVRUJoTURWVk5CTVFzd0NRWURWUVFJRXdKRFFURVVNQklHQTFVRUJ4TUxURzl6SUVGdQpaMlZzWlhNeEdUQVhCZ05WQkFvVEVFTnNiM1ZrSUZCdmMzTmxMQ0JNVEVNeEREQUtCZ05WQkFzVEEwOXdjekVRCk1BNEdBMVVFQXhNSFMybGhiU0JEUVRBZUZ3MHhPVEF4TURJd09UVTRNREJhRncweU5EQXhNREV3T1RVNE1EQmEKTUd3eEREQUtCZ05WQkFZVEExVlRRVEVMTUFrR0ExVUVDQk1DUTBFeEZEQVNCZ05WQkFjVEMweHZjeUJCYm1kbApiR1Z6TVJrd0Z3WURWUVFLRXhCRGJHOTFaQ0JRYjNOelpTd2dURXhETVF3d0NnWURWUVFMRXdOUGNITXhFREFPCkJnTlZCQU1UQjB0cFlXMGdRMEV3Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLQW9JQkFRQ3QKODhMV1VOT3lHcDUxMVJYWlpXaVJzcVJkUmdUOStZUXQzN21lcCtOWndlcFFQUmNkblh1anYwYnV6RUdpK0k4SApSL1B6TzRUdytGeXN1KzJaRElRTG9aWE10YkdGZ0FFSVc1YUtVOHZRMTF6Y2dhMmYyQnJXRzNVai8zK3pWOWUvCmlaT3JlS1hmZFAzRnV4bDc3TllPY20xOVU5Nnk4TGNDOEduK29DZmJVcThOcHZVMDNmZGRjTC9qTy9ZTlJETGgKVy9zR1JmVnlNd0tVVjRjaUtNQ3hjQ0JxZWNZbmkyVTYweEF2QyswSFZwR3VnSmliWWpBZXZuaUM3NjVOa1pmMwpFWFRVSkJGT284WFBZdWgweThuYWRMT0RWMzB0bWZVRHlxaWMramJzU1FaREZQNVNJNGVLLzNEM21ldHZzd1loCkcvRUtJTUh0V0ZpZHBSRWMzYis3QWdNQkFBR2pRakJBTUE0R0ExVWREd0VCL3dRRUF3SUJCakFQQmdOVkhSTUIKQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJRSzlmTFNLNjA3aXBqaHYrTXlsM3QvTk5oY3JEQU5CZ2txaGtpRwo5dzBCQVFzRkFBT0NBUUVBRCsyNERKSEhZWlRibTlFRW93UFA4Sk5LdWMwYkQ0cC8rNms0RExDajRJTWtLRFpqClgwWHZlQWkvTm9sQWZVTkFBQi9WTnRLMTB0ZFZVSGhhZW8zRFQ4OFRKaEQra3VJOGdhbHlDZlhnSnVrdytRMHQKWEp2WS8vZ1NQT0xuQ3NMSTIrQUlYNlJ6SUwyODNzaDRIY2FBaE81YnRCc1Q0U3lqN2t6UGFuTGdQOVpVYTFoZQpxVUpjNms0Q1FzUFA2YmtZanQ4QktFaE5XVmJ6aXpOcyszSFFaMzVHeDRka1VLd1ZnVklWTGU2bmtJV1RJeHVJCmloRkNmWjJTRWphS0Uwdi9tM0p5RjhJTzVJQTBQTFFiMUw4Y1NyM2R4cUszank3cGVxY1ZRbmhGZzNIMkdHWEsKeFVlUlZjYm44WEFyV0NCWUJiM0JyZjZ2V082RFM3L1VjWnV3UkE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
+ key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBenp6ZlVrVjE5SDR2eGFabUpEWUVoYUZHckdwY0ovbURqclp5dE1pOER6bGJKSEJQCkRGdjFRVHVEZmNSTDIyZm9HNmcvTWZRbDZRUVArc1hrMTlOL2UxTFpCb0FvWUI4R05XcXorUjBqSklTN1FBWGQKQW1wM0FyU2JkQzZZa1owdXFrTkhCOHY0aW82ZEZvSlQvNHhLYXFjSUxraEU3RmRTY3BucURjQ2ZvVVRxMXBWegpwWEcxR1hja0NrSnA1L2NiVHl5dWV6WUlXSjBaUHdheHYzTUhOMzVjM2NQSHk2dDJoR2FNM29aZldJQ1krVUVaCkRBcHAvWjI5bVlWZkFwSyttQ0dXcXBwVHBoRUtJaXdOdS9wNkt0eTVMRlFFTDBzRi9WZUdHSHRjVG93WjR2RW8KcWVxUUxlVy9Wc1RBRWIzWk96eXhoU0NxRWdYRmxvUVp3amt2ZlFJREFRQUJBb0lCQUFkTktMMFlUMmlXelk0VQpKOE1jMkJueExiRkRhZzNLZjdVV2ZvSWFGRzRnNGpJdGRzdURyZWRuZG1HRytmazM5dmlLZS9lQmw1aFhHVTBICmplR0F4UndPTmpGQmNLcTZUUml5c3JhVExUckxKbUhDRXlCVHFlL0JkenlucTU1dHdFZ2xhS3BBcUhnUlFEMmIKeCtQWUNJTXJjV0ZZRUgyWE1nTnhvc3ByUC9TSmpiOGR4NkdjRzY2MW9sczJodkdaN2xzTjErQUx5cC9mTldidgpaTWZ1QXB2MmJJaXkrTjRoMUY0bWZTN21LNS9RNXNjZ0VTQWRodUZTNVZOUnJJa0h0TGEzMXA0VUpWOVM3Tko1CkU5OG52TEdZTFQxNmx3Wmd1eUtUSXZLVzhMNFQ4YVhnMnJjWGxNNGZDV3VoR2R4dTUzRmcvL3kvYmhvemxYOFkKdzBic29nMENnWUVBejVWWVhVWDJoTjVDalREcW5MMEhzaUhwcHBWTXJnWlBPSTNuWlgvMksyQm0yNWQ2M0lvawo1ZHl0UEE0eWhvcWtKZ3VmUXpUTUVCSnRxMWVDNkZWMjVqamMvSHRLeXZXMEJMSjdCbFFXZkRpOTdZZVBzNDIwCnFRL2NKWWM2WnJUbEErKzJ1OVNxRDAxd055UVB0QVplMXR0NzFteDFmYnVUSkwrdFhqayticWNDZ1lFQS81TGsKVHQrdEtKbjI4QjNmdGhlb1dQM0poazJBWlJ6K2RzdHIxSGpVdUNTV0xNY1BiTm9kOVhCTjlMK01BQTNHYmtUTApLWWNmYXpzbWR1ZUtFd2xUaDlWSXF0TVh0MUEwcGU3Q25uOGJISUpnajE1RnBSNDduVExiVlVuNTl5WitOSFdPClppL000SHJyVThzWXFrN3ExS244eTlMKzVhU2xjTUcrb2RrSnVUc0NnWUJocHRIdzN1Ni9Sb2RzUUN5K0d6YTUKbDdhQXhROVRkbWhpSkc5TWtrdk4wQVhURzRtU29mSUZxREJlWmhkaXIyblU4L2F4K081ZVNTMEtRNXF6alREbgowS3cwb2hObk12ckNrdXZJNkZuRGlqWGV2YnplTExWbUtxM1hnYXY1a1BPRFRJdGNCUWtUTmN5cVErNlhNNy85CnR6YWtnbFVyRnNoN3F5ZjFnVnhiVlFLQmdRRGd5eWNkYVFnNWFoTVZhSEZaREwzNmFGOVZUZDNkRWYrUUphU1cKb2lFWVJyWUFkS1pRckJrbHhMNE14RjR6dmVvSEcyTkhCNTdQQnB2eWdmMmtlTk9MNmtHY1gwZkE2VDhscERoeQppSUlrTlZrUlFXNG9xY3J0bmNubDNzZUtaOFVpQnpSVkZUNHpSR3F3clRib3Riay9qTFRaNHFCcEJNU3Z4UG9VCkNYN1ArUUtCZ0YwZmNPaEpPcFVUN1RUalAzMkVPRzZPb
nJGZkgwYm9zNXFvQjVDbU5nUS9BbHJPV2ZnRVRSN3oKUG40YXFETzhiUWRpdkhod0FZVzcvWGZ5SXp3NkpydWgxYUxWZWlRRzBacmZ0SytwaXRrbjhmb3M5ZVpFMjZUUQpKd09XNnNCUDQza1JrQ3lxbVpOWGxCTWpsNW1FV0pYd2ZUMVJzZ2twMUZKMDhVOXFDaEtVCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
+ cert: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQwakNDQXJxZ0F3SUJBZ0lVUkJOTGtXYkNPK25YYkNESFNqM2x1dDIyQjYwd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNWVXN4RHpBTkJnTlZCQWdUQmt4dmJtUnZiakVQTUEwR0ExVUVCeE1HVEc5dQpaRzl1TVJBd0RnWURWUVFLRXdkMVUzZHBkR05vTVF3d0NnWURWUVFMRXdOWFYxY3hFREFPQmdOVkJBTVRCMHRwCllXMGdRMEV3SGhjTk1Ua3dNVE13TVRRME9UQXdXaGNOTWpBd01UTXdNVFEwT1RBd1dqQmtNUXN3Q1FZRFZRUUcKRXdKVlN6RVBNQTBHQTFVRUNCTUdURzl1Wkc5dU1ROHdEUVlEVlFRSEV3Wk1iMjVrYjI0eEVEQU9CZ05WQkFvVApCM1ZUZDJsMFkyZ3hEREFLQmdOVkJBc1RBMWRYVnpFVE1CRUdBMVVFQXhNS1MybGhiU0JCWjJWdWREQ0NBU0l3CkRRWUpLb1pJaHZjTkFRRUJCUUFEZ2dFUEFEQ0NBUW9DZ2dFQkFNODgzMUpGZGZSK0w4V21aaVEyQklXaFJxeHEKWENmNWc0NjJjclRJdkE4NVd5UndUd3hiOVVFN2czM0VTOXRuNkJ1b1B6SDBKZWtFRC9yRjVOZlRmM3RTMlFhQQpLR0FmQmpWcXMva2RJeVNFdTBBRjNRSnFkd0swbTNRdW1KR2RMcXBEUndmTCtJcU9uUmFDVS8rTVNtcW5DQzVJClJPeFhVbktaNmczQW42RkU2dGFWYzZWeHRSbDNKQXBDYWVmM0cwOHNybnMyQ0ZpZEdUOEdzYjl6QnpkK1hOM0QKeDh1cmRvUm1qTjZHWDFpQW1QbEJHUXdLYWYyZHZabUZYd0tTdnBnaGxxcWFVNllSQ2lJc0RidjZlaXJjdVN4VQpCQzlMQmYxWGhoaDdYRTZNR2VMeEtLbnFrQzNsdjFiRXdCRzkyVHM4c1lVZ3FoSUZ4WmFFR2NJNUwzMENBd0VBCkFhTi9NSDB3RGdZRFZSMFBBUUgvQkFRREFnV2dNQjBHQTFVZEpRUVdNQlFHQ0NzR0FRVUZCd01CQmdnckJnRUYKQlFjREFqQU1CZ05WSFJNQkFmOEVBakFBTUIwR0ExVWREZ1FXQkJUVExFck5KaXpDemdEUVVnUUt5bE1uVUNwNgovekFmQmdOVkhTTUVHREFXZ0JSR1JJazdFOHRuYW4zcURvMHlTMEJtL0RNT1R6QU5CZ2txaGtpRzl3MEJBUXNGCkFBT0NBUUVBZFAwT0xaK1Q5ZEVLRHVDMUdLRmxlbkFqTFRHUU5ySmVGSjZlSERFY3FEOTMvQ1ZzOFNnTlo2NEkKY1ZzNzlIYjFXeUZpZC9Ld3Iyd0J1Wk1UUmxIRjJPcHE1bDJrUTFXaE1ibmNMRjB2ZXl4M2VpWVE3clRzL0xDNwptNTRpTFRIYWRuYTkrNE9rM0h4NFREdTRtV3BGNDVJaUR4TFZIN3JRdFJoQk1nVmlGWm9UTGEyam9pdjlIekhECktZY1BVaDFsUSt1UndGWFdFTlJVQmtsb1lOdmRZMlF5eUdVUUZON2xOMEVXTm5zSFFUWmVXQUlFUWhhZXV5cUMKV2NuWExsQS91bzdWOUNTM0tvMzgrMFczOXNmL1R6NVliejlZalVxY3BYQTZORTdGeFdzd0V1bi9TVWp6Q1pNRwpad0g4R1FnVHo1aVpmRnQ1VWVLZTRMUVFZQUdZU0E9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
+ ca: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURrakNDQW5xZ0F3SUJBZ0lVUjgwVU9rbzlPZk5jQWRRaytLbzd5ejNKZGZrd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNWVXN4RHpBTkJnTlZCQWdUQmt4dmJtUnZiakVQTUEwR0ExVUVCeE1HVEc5dQpaRzl1TVJBd0RnWURWUVFLRXdkMVUzZHBkR05vTVF3d0NnWURWUVFMRXdOWFYxY3hFREFPQmdOVkJBTVRCMHRwCllXMGdRMEV3SGhjTk1Ua3dNVE13TVRRME9UQXdXaGNOTWpRd01USTVNVFEwT1RBd1dqQmhNUXN3Q1FZRFZRUUcKRXdKVlN6RVBNQTBHQTFVRUNCTUdURzl1Wkc5dU1ROHdEUVlEVlFRSEV3Wk1iMjVrYjI0eEVEQU9CZ05WQkFvVApCM1ZUZDJsMFkyZ3hEREFLQmdOVkJBc1RBMWRYVnpFUU1BNEdBMVVFQXhNSFMybGhiU0JEUVRDQ0FTSXdEUVlKCktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQU1udnZ1R2xud3luRzlteE9YT2dlRWUza1p2a1loQU8KcCtxZmpLWkZpajBFSU5pYVNEK3J1cVRrZ3JueTVUWTkzMzA5aFYrS2IzTnR2NFEwYmhOUzRjZnJpd0k3cWhuMApHVTRJUkxKd1crMXc4enZhRFMrT1lHcnJnMVdlc2l6VlpSRERTQzNFZE5GZUdXMjlMZWNiZ3NXN0N6MlJOUWQrCmIrNG83K05uemhJSzhRclozcnFWSGVjOUZOVnlydDc4ei8ySVVlZXhkYnZvNFRyUXRZOEQydTV4QUg2Y2lHUDUKVi94cFI1RmJ2bjNUNy80aldJbmxSWHBHeWkxemE2Mi9EaFV6czRZcGcxd01yZXdQT25Ea0VIT0RoTExpVmQ2dgpxQWQxVGdqUFdmTnR0MXZnUEtIQWJ3VXU3LzVWYnRwbVJGRTgxMEpSbDBYa2NwUHQ0cjhtQ21jQ0F3RUFBYU5DCk1FQXdEZ1lEVlIwUEFRSC9CQVFEQWdFR01BOEdBMVVkRXdFQi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZFWkUKaVRzVHkyZHFmZW9PalRKTFFHYjhNdzVQTUEwR0NTcUdTSWIzRFFFQkN3VUFBNElCQVFESUxzQzhVYkM5eE5jcwpWQWZJcUt4R0QrSE85STdtOHNITU5TQzhZbGRJQmRCSE5YRlBhK0o4Mk1hRklUZS91L0NEQ25RTXBuRWVpVi9MCmJxQ0RZbk01MEVQckNXalJGSDJFZGpocGhJTWJNSUpsMlNndGE1ZzlIT3ErTit4eC9tS21ua0xWYWQ1WkVFa0wKZnJ6NTFQRkhabEZyUDZUQ2c1VW9FUERteWw2WHQwNk5tRk40ODBxYjhLSEgxR3o3UEFlNTdIaGJPWXlMQWh3UwoxYjl6bnQxOW9KOXA2ZDlKakIyeEh3R016aWVMWVpQME1rcnJaYWRQUWtkY0NsczJhMkRRVmpCWEQvVzlhYXRBCmZtMEQ4dmN1b0dBdCt0VXkyVHRWR2xUdk5tMnlUeWNMWlhuMGdORDVGNmFCNjVvMmFtNUcvTWRSWDBrd1RCQXgKYU0vZEVQREoKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
server:
gatewayTimeoutCreation: 60s
roleBaseArn: "arn:aws:iam::0123456789:role/"
tlsFiles:
# Base64-encoded PEMs.
- key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBNld5ZWt6VGwyTHp0UGFXbnhRUy9iVlRsTEFPcWVTN20yNWV2a2t3Z3hFdHJxemJvClJBenp5ai84MUZkV0xGb0h0RGNoZG9xbm9KOGcvbUNzWUtrNXRsQzRBektHNHVPemZESFBkV3d5ZnJSMEFvK24KQXNXeEF4bjkya09SU0ZqeGkwRy9CcjIzb2loUk1Lc3VsSndyRWpVRmFERzd3Q3FVaHQwQnVmSzFJcklRbGU3awpjYU8yaXFIeThFN2lYNlk5R1lVTHQ0QjR3SmJBRzNjSVlqVTd6R3NsNUJPUlo1RGdHUmt2TzFSbmtRUW4rUWVvCnh1U29BYStBc1hZekNOZEEyby84WFNUSmhONTd0a0xXOGF1NHRxbFk1NlgwOE85WmNsVFVXSWlqZUNFK3ROS2cKYktTSjg2cHpRcUh2SnJUYjU4SGdWTnVhbHkyTUhPdXZ2SGdCd1FJREFRQUJBb0lCQVFDd0x4WDUwa0Z5T0JkeApJbW5oSVZaRGRZS01tQy9CekE3ZnpEdnUxcHNjempoMFFMdExNZU9JMG9kSTFxcnFTd0hwbW5zZGVFWlJ6QW9oCk5tS2xpdFZPc05wVFAzM2tIeTNJSGVpU25wbjJYTW43Yk9ZSUI2TTF6aFoyK2V6Y2lKVzRJR1hJOXNWMkZheEMKYWRKOHhPc1ZrUU9GdzVRTTFaYkp2R0tqTVhoYXVFN2R0SWwvclUrU3JsS1d3MWFlaHN0VkxYT1JtK1JpY2xLZQpoTCtvdVhaV3hTei9HblRXUzJ6QnBiMGxiVkhJKzd1QW4wbXE0Z0d2c09tVWlXS2MxamhxUTVrU0h2YzNyY0wzCkltY2h5YnlzdkN4cXUwaFltRitaZmlMSHJ6VXFYS05IOXRjTi8yNk82ZldLT0ZHQk9CcnhTaWc5MDNGbDk1aFQKbGIzM0NLdmhBb0dCQVAxeUEwbS9GdHNNbG94cWxyNU4rS1R1OUpNbVNpMm5jTUFzTGxwMzBkVzl5M1hJOGRPawpWdnNHZThKRmxiQVllQzFlTlRqZWVHL2dzY3BZMkY0bkhFTEU4VmkwMkRCWFJ4SWg5VlJHcm04T0xBMENEdjJoCktTS0ZOWVU3VGgzSlFOQ0J2YkpvbFJXak9FWEcxSDhqRGgyK3VETk12WHVOaXVXN1FQS08zQjRkQW9HQkFPdkcKOGNtbGU5QmRsZlRtRi9XRit0ZGJRblIrZGE0Y0d1TmFQVVdxNmJaUVpNWVpkVFZXcXhVNmJsRHlBWXM3S1FQTgorR2JFZWh0VEt0clRCd1Nvb2Uxb1AzMThidlNKL0ZaVlFDTGFHYi9UNzhjaTFkNy9zdUQwa05KeFc1NXNVdDR3CmRtS2NRem1TdDhXNmxFc3JNVU5uN3E4aUNYeDhReklZN3U1QnBQRDFBb0dBYllieUNOSzk2OWdhejMvWXVWRTAKM1FJdlM5QkdTa2lNSDJCNGY3dzhRR1NQSXMyK1JEcEhKS0IrcDB3dkRqVGs2cVpGMWRlK3NJcW9Dc3d1WlRIOQpzcFV0djZvWHEzeHNTRmZJajYwa0FQWmM3eG91cEVrYlg4RzFpV2hCci9tak92aDJwRDB5QUhIVEJjU1JYSWduCnQ0OE9SNDBvYmRhVGFnaHNYdWFDRmJrQ2dZQkpBTHgwdHl4ekE4Y2VvTy9pTWEzTmFKQlhDYURlWEExblA5V2cKOEo2VXVLZTdQcjZ2MlRuM3hMUExsR010L1E5aUFqQmJnWkpkUzQ4RldqbmVFMml2M1l0ckMxQS9uMG5tWVZjTwpjNEZ0aCszQ051TUp2UnBoMU5mU2tRN1JLckV0NHN1RkZPVXJ1bVgwYnlUamNXZzdlcjdJc3oxRXNpVU1LZlF4CkNWcE0wUUtCZ1FDNmExNHZMZkEwSVdTNnc3RlMwU2F5T
ThIaFcrRVJkblIxU3RnRGNOdllUUEpCOVk1S1J5MkQKNGw0SW1OMGtRNHFwNm1pVlNjUEJRcDlFMzJ6VDAzbnVJNi9Tc0tpeE4xS0RKRnlDWjdKYXllakRjQTdnTGFmRQp2M3BFZktjOVJIMmRmZHJxcVZqODFCQWlEWkVSbmxJSHJXWnlwaUZ4K3oxdjZUaVdLYkh5NFE9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
- cert: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVQakNDQXlhZ0F3SUJBZ0lVV3praFJoak0yc0pjT2FtRnNQYWljbFFuUlRVd0RRWUpLb1pJaHZjTkFRRUwKQlFBd2JERU1NQW9HQTFVRUJoTURWVk5CTVFzd0NRWURWUVFJRXdKRFFURVVNQklHQTFVRUJ4TUxURzl6SUVGdQpaMlZzWlhNeEdUQVhCZ05WQkFvVEVFTnNiM1ZrSUZCdmMzTmxMQ0JNVEVNeEREQUtCZ05WQkFzVEEwOXdjekVRCk1BNEdBMVVFQXhNSFMybGhiU0JEUVRBZUZ3MHhPVEF4TURJd09UVTRNREJhRncweU1EQXhNREl3T1RVNE1EQmEKTUhBeEREQUtCZ05WQkFZVEExVlRRVEVMTUFrR0ExVUVDQk1DUTBFeEZEQVNCZ05WQkFjVEMweHZjeUJCYm1kbApiR1Z6TVJrd0Z3WURWUVFLRXhCRGJHOTFaQ0JRYjNOelpTd2dURXhETVF3d0NnWURWUVFMRXdOUGNITXhGREFTCkJnTlZCQU1UQzB0cFlXMGdVMlZ5ZG1WeU1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0MKQVFFQTZXeWVrelRsMkx6dFBhV254UVMvYlZUbExBT3FlUzdtMjVldmtrd2d4RXRycXpib1JBenp5ai84MUZkVwpMRm9IdERjaGRvcW5vSjhnL21Dc1lLazV0bEM0QXpLRzR1T3pmREhQZFd3eWZyUjBBbytuQXNXeEF4bjkya09SClNGanhpMEcvQnIyM29paFJNS3N1bEp3ckVqVUZhREc3d0NxVWh0MEJ1ZksxSXJJUWxlN2tjYU8yaXFIeThFN2kKWDZZOUdZVUx0NEI0d0piQUczY0lZalU3ekdzbDVCT1JaNURnR1Jrdk8xUm5rUVFuK1Flb3h1U29BYStBc1hZegpDTmRBMm8vOFhTVEpoTjU3dGtMVzhhdTR0cWxZNTZYMDhPOVpjbFRVV0lpamVDRSt0TktnYktTSjg2cHpRcUh2CkpyVGI1OEhnVk51YWx5Mk1IT3V2dkhnQndRSURBUUFCbzRIVE1JSFFNQTRHQTFVZER3RUIvd1FFQXdJRm9EQWQKQmdOVkhTVUVGakFVQmdnckJnRUZCUWNEQVFZSUt3WUJCUVVIQXdJd0RBWURWUjBUQVFIL0JBSXdBREFkQmdOVgpIUTRFRmdRVXVsZTNNT2JPVWFTMmgxOHI0M1Y1OWcrNzd2SXdId1lEVlIwakJCZ3dGb0FVQ3ZYeTBpdXRPNHFZCjRiL2pNcGQ3ZnpUWVhLd3dVUVlEVlIwUkJFb3dTSUlMYTJsaGJTMXpaWEoyWlhLQ0QydHBZVzB0YzJWeWRtVnkKT2pRME00SUpiRzlqWVd4b2IzTjBnZzFzYjJOaGJHaHZjM1E2TkRRemdnNXNiMk5oYkdodmMzUTZPVFl4TURBTgpCZ2txaGtpRzl3MEJBUXNGQUFPQ0FRRUFjMktPSDBHRldRU0t2bU5rcnFwZEVvZGxaalI5enRkeUUyYlk1citXCkw1THIwNVE3MVBSalVpNXFHQ3prOUFkR3VYZE83bTgvejY0bS9zTnNXTS9adHFIaklGd3hWT3JrOU5NMjVNSGUKWDVYNWdVT3UvOENnV2wyWVZoejhnMC8wTEx4dlYvQjA0dTBmZEhudkg1Um83ZVYvNFk2MEUrTlRBYmtvN2dYeApHYXpSRXZHMXRaTHN1WUlIb1NEQ2Vqb2RWRTNrcXNWMEhKUlhuUE11K2Z1T3EvR0J5U05MZjJaR2FIV2pFcTIwCjYvbTN3WW9mbUl4Q0FSbjcyanpjdGRzQVRVa2RBRUJoN3dkWWxmTWpuODdoMnJRYUNKejVUNkVSNWs1cVMyU3IKZ3piVVU1QnpvYnI5VzY1L25CODNLQ0ExK2tXTlRldFUzQ3lW
QW5qVk9QRUROdz09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
- ca: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURxRENDQXBDZ0F3SUJBZ0lVRkwwZHRPSE55d0pmaWFoTG54M0psZzd1MjBZd0RRWUpLb1pJaHZjTkFRRUwKQlFBd2JERU1NQW9HQTFVRUJoTURWVk5CTVFzd0NRWURWUVFJRXdKRFFURVVNQklHQTFVRUJ4TUxURzl6SUVGdQpaMlZzWlhNeEdUQVhCZ05WQkFvVEVFTnNiM1ZrSUZCdmMzTmxMQ0JNVEVNeEREQUtCZ05WQkFzVEEwOXdjekVRCk1BNEdBMVVFQXhNSFMybGhiU0JEUVRBZUZ3MHhPVEF4TURJd09UVTRNREJhRncweU5EQXhNREV3T1RVNE1EQmEKTUd3eEREQUtCZ05WQkFZVEExVlRRVEVMTUFrR0ExVUVDQk1DUTBFeEZEQVNCZ05WQkFjVEMweHZjeUJCYm1kbApiR1Z6TVJrd0Z3WURWUVFLRXhCRGJHOTFaQ0JRYjNOelpTd2dURXhETVF3d0NnWURWUVFMRXdOUGNITXhFREFPCkJnTlZCQU1UQjB0cFlXMGdRMEV3Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLQW9JQkFRQ3QKODhMV1VOT3lHcDUxMVJYWlpXaVJzcVJkUmdUOStZUXQzN21lcCtOWndlcFFQUmNkblh1anYwYnV6RUdpK0k4SApSL1B6TzRUdytGeXN1KzJaRElRTG9aWE10YkdGZ0FFSVc1YUtVOHZRMTF6Y2dhMmYyQnJXRzNVai8zK3pWOWUvCmlaT3JlS1hmZFAzRnV4bDc3TllPY20xOVU5Nnk4TGNDOEduK29DZmJVcThOcHZVMDNmZGRjTC9qTy9ZTlJETGgKVy9zR1JmVnlNd0tVVjRjaUtNQ3hjQ0JxZWNZbmkyVTYweEF2QyswSFZwR3VnSmliWWpBZXZuaUM3NjVOa1pmMwpFWFRVSkJGT284WFBZdWgweThuYWRMT0RWMzB0bWZVRHlxaWMramJzU1FaREZQNVNJNGVLLzNEM21ldHZzd1loCkcvRUtJTUh0V0ZpZHBSRWMzYis3QWdNQkFBR2pRakJBTUE0R0ExVWREd0VCL3dRRUF3SUJCakFQQmdOVkhSTUIKQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJRSzlmTFNLNjA3aXBqaHYrTXlsM3QvTk5oY3JEQU5CZ2txaGtpRwo5dzBCQVFzRkFBT0NBUUVBRCsyNERKSEhZWlRibTlFRW93UFA4Sk5LdWMwYkQ0cC8rNms0RExDajRJTWtLRFpqClgwWHZlQWkvTm9sQWZVTkFBQi9WTnRLMTB0ZFZVSGhhZW8zRFQ4OFRKaEQra3VJOGdhbHlDZlhnSnVrdytRMHQKWEp2WS8vZ1NQT0xuQ3NMSTIrQUlYNlJ6SUwyODNzaDRIY2FBaE81YnRCc1Q0U3lqN2t6UGFuTGdQOVpVYTFoZQpxVUpjNms0Q1FzUFA2YmtZanQ4QktFaE5XVmJ6aXpOcyszSFFaMzVHeDRka1VLd1ZnVklWTGU2bmtJV1RJeHVJCmloRkNmWjJTRWphS0Uwdi9tM0p5RjhJTzVJQTBQTFFiMUw4Y1NyM2R4cUszank3cGVxY1ZRbmhGZzNIMkdHWEsKeFVlUlZjYm44WEFyV0NCWUJiM0JyZjZ2V082RFM3L1VjWnV3UkE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
+ key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcGdJQkFBS0NBUUVBeGc2Q1RlcWlYUEpKaVJTZDVEVXBjcENMKzFBRlZhdTBqWE1HN0MvVWRGZkU5WmtmCnlmdmprcHg1N0JMZ1ltVFVtOHVWenFHaFJpSUo2ZTNQUi9qQ00rZWoxdi9QSkRRT0MwNThCS3VlTkY1MThzY2MKajVwUVBOR3pobUdsc3cwclc1M21vb3Zmend0Ty9yZ3ZiQ2lxYytlQVkwajRRRHJmbWt1UVV3RXV6SkVQQUlVMQpSdEhmMkhRcHhjdCtnUnZKR3d2TFo5bEF4dmxvUjJJS0dLcjgwQS9zZ3BHZXpOd2FaM2ptSkE2am5TQTJ4YnpmCml4Y2lUbDR3dHN3ZlhHVFpiWmxjNVZkUTNZVVAvVmJmTWk5bE5XSVNzTzJUbnFhQ2JmbFV6OFJPZ1hGdCtaQ2wKQWZuMVBDbk1YNzZ4WDdRbSt4enJERGN1ekh3c296TmNmMGxwL3dJREFRQUJBb0lCQVFDcitTazRFc2FNd20wTAp0SFV0Rk9RNmNEeThLVTJZaUJHc3lQWjMyMGcxQllrbVlLRnp0MTV4amFGb1ZUTzAvQ3lJWXd4ZmNZVWg2cWlGCkVWTnRBUmxRREpEOVBQNVdSMFR5bUdHamhJbEltOFQ2MjkxMjY5MUVFaW82UTB1bjM0V0lkZUV2dnhqRkpPS2cKMXJtR3h3REt4M2Q1dm9DZzlQMzNjaW1OaVhkamRBQ0FOQWh0ZkY5QnovQThodHR2b1NpOGliN2JiVFMwelZTVwp5U0tYVVJMNXpMRVU5OFM1emFRU1VIOVhyTUN2SjZWbzRJZmhDak5hSUZjNDBRVHZIYkdFOFBFb2d0b1ZpYkphClluQ25YWnoycFJ1am13aDI1b0ViUmZrMS9KMkZNMERQTDNhUTgvNXBicmdJUFR2eEdKY2FObGZPam1BZjZobjIKZmhEYlhWYmhBb0dCQU12T0d4SEk2Y0lybGlDS3VVTFBxYzI4TzVDTkZXQi85S01mUUZydlBOd2c0SVdXV3gxaApFTXNadW44UUNrV0t1VDJuRU1NZ2pFWUhVM1ltS1NpWG9XYmcwM0MrSDVHbTJOUFMyMlBoaS9NaTBaU3hmaHY1CjFKNncyaXhzc0s1NE1rN1JtQndhLzd0NDN0RitUSFZFbjVNSy95SzdvWHR0YjFqVkhmdk01TC9KQW9HQkFQakgKaGp2K1hkQ3ZRVmJHdllQTGxyVU1nMUxoakg5RXdFdGpDY0VWR1FuNG9hNkVCVHFQY0pjemhla0I0TXBodUl5TwpYVzl0V1dhTDQvMUdVNndDYlplcEx6dy80MmRUaE9QdDJ1SmpqTENoS1lhZmZEbW5IQjRENWdnaHhCNUhLWmRVCnZYcEhuN1RybjJkdHZveU1rZUhlY0d3cllWQzdqeXYxTkIvWXo0K0hBb0dCQUoyYzBFeHB2M1hkZFdYSFFzekwKZ250TUZoaU5Nem9FMnJHSVNxSElvSjF3ZzVKc0hCelZZMEplckY3MWphd0lRNGZOZXVZY2RyNzFqWE15d2VQVgptQW5TMTFJNmhubUN1ZTdmQTdIenpPS0VTK2FkZVhTek9kNWIwTzVJUkQ4NVQxYXJPdUtKY3JxT0dHdVZMQllJCnN3dnBsalJMUFBBU1N1azlMOG42dy9FWkFvR0JBUElZT0dqcGdDSTBha0VuNWdUN2VnMTF2OVpINTVGeU5pOG0Ka2JkejhJbmppbk5weGl6V3FacDZhVFgydmVvMGJvTlpoMU9IOWhmMHlra094eDM4dnVsM21wL25ERVRnNGRGdApCalNJNjhCM0ZSSU00YmE1Q0lPdEI0MmlUbGVvcUxDN3BpZjR5MUlrZVZzTlVRRTFTa0dqVllQdU15VjlZRFpHCnlCSzF5a2JCQW9HQkFJc1QvUkpqY203aFlqUlBmTnFLY
korYnE0bVVHZnFNd0JFaFN1VFhjQTBLcktQR2YvKzAKMzVhSGZzald5T204cGx2Vmd6dWlTTVRhdWtONktsVjlDVjB3MDdERTk5TStBcTB1L3RoK3hhbXRxNjhOQWU5VwprZE9rdUR6VWNSYm9QSzREdXJ4Qm1nSVNaT05hRUhMUUJzcFNwOFpIOW9ZdHdUQ1R2SEhjdEpTZwotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
+ cert: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVEVENDQXZXZ0F3SUJBZ0lVT1pLd2tJd1J2OXdHSEExcG9sTDNSMTQ1UXQ0d0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNWVXN4RHpBTkJnTlZCQWdUQmt4dmJtUnZiakVQTUEwR0ExVUVCeE1HVEc5dQpaRzl1TVJBd0RnWURWUVFLRXdkMVUzZHBkR05vTVF3d0NnWURWUVFMRXdOWFYxY3hFREFPQmdOVkJBTVRCMHRwCllXMGdRMEV3SGhjTk1Ua3dNVE13TVRRME9UQXdXaGNOTWpBd01UTXdNVFEwT1RBd1dqQlBNUXN3Q1FZRFZRUUcKRXdKVlN6RVBNQTBHQTFVRUNCTUdURzl1Wkc5dU1ROHdEUVlEVlFRSEV3Wk1iMjVrYjI0eEVEQU9CZ05WQkFvVApCM1ZUZDJsMFkyZ3hEREFLQmdOVkJBc1RBMWRYVnpDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDCkFRb0NnZ0VCQU1ZT2drM3FvbHp5U1lrVW5lUTFLWEtRaS90UUJWV3J0STF6QnV3djFIUlh4UFdaSDhuNzQ1S2MKZWV3UzRHSmsxSnZMbGM2aG9VWWlDZW50ejBmNHdqUG5vOWIvenlRMERndE9mQVNybmpSZWRmTEhISSthVUR6UgpzNFpocGJNTksxdWQ1cUtMMzg4TFR2NjRMMndvcW5QbmdHTkkrRUE2MzVwTGtGTUJMc3lSRHdDRk5VYlIzOWgwCktjWExmb0VieVJzTHkyZlpRTWI1YUVkaUNoaXEvTkFQN0lLUm5zemNHbWQ0NWlRT281MGdOc1c4MzRzWElrNWUKTUxiTUgxeGsyVzJaWE9WWFVOMkZELzFXM3pJdlpUVmlFckR0azU2bWdtMzVWTS9FVG9GeGJmbVFwUUg1OVR3cAp6Risrc1YrMEp2c2M2d3czTHN4OExLTXpYSDlKYWY4Q0F3RUFBYU9CempDQnl6QU9CZ05WSFE4QkFmOEVCQU1DCkJhQXdIUVlEVlIwbEJCWXdGQVlJS3dZQkJRVUhBd0VHQ0NzR0FRVUZCd01DTUF3R0ExVWRFd0VCL3dRQ01BQXcKSFFZRFZSME9CQllFRkdMYjZYcmlGVTRXSnNqYXkrTXVybFkrOEh3ck1COEdBMVVkSXdRWU1CYUFGRVpFaVRzVAp5MmRxZmVvT2pUSkxRR2I4TXc1UE1Fd0dBMVVkRVFSRk1FT0NDMnRwWVcwdGMyVnlkbVZ5Z2cweE1qY3VNQzR3CkxqRTZORFF6Z2c0eE1qY3VNQzR3TGpFNk9UWXhNSWNFZndBQUFZWVBhMmxoYlMxelpYSjJaWEk2TkRRek1BMEcKQ1NxR1NJYjNEUUVCQ3dVQUE0SUJBUUJBNmZIMWdCQUJFWks3OTJLazY2VytSQUNvMytqeHVwSVlVVmpjTk9LWApEWVBRcEZHc0g4bVdxc00wN1lxQ050THpFaS9rUHJkaSs5QTJOMXUvU3h5by9NbFJoYURsTXdZSExaTDZLNTRMCk1Sb3pnVjFzK1JxS1A4Y2t0aGhEcFNFRE5Tc0ZxQU9QRnNQUkkxR01XWHpleXdNZ29TTU1TTjA2UzN2cXhTcWQKQW5UTXBrRkxDYXFCS2xFSlM0VEROQmluNEpOMkdPTkgyQThmK2FNcmc1d3RMbW1GUFQvOGI2WEFqWVYzOWNQMQo2ak9JN3VydEFTWVI1NkdRRFQvbWNsajlUWDBqczluSE5NaENlU3JRa1JURVZ3MGtvSGUvTklkNisxVVptU3hMCmRXZXhpV3N0QTY0a0paMW5INDJtV2JPVDAxQStwQ1VNeno5S0h2dGNZd21RCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
+ ca: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURrakNDQW5xZ0F3SUJBZ0lVUjgwVU9rbzlPZk5jQWRRaytLbzd5ejNKZGZrd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNWVXN4RHpBTkJnTlZCQWdUQmt4dmJtUnZiakVQTUEwR0ExVUVCeE1HVEc5dQpaRzl1TVJBd0RnWURWUVFLRXdkMVUzZHBkR05vTVF3d0NnWURWUVFMRXdOWFYxY3hFREFPQmdOVkJBTVRCMHRwCllXMGdRMEV3SGhjTk1Ua3dNVE13TVRRME9UQXdXaGNOTWpRd01USTVNVFEwT1RBd1dqQmhNUXN3Q1FZRFZRUUcKRXdKVlN6RVBNQTBHQTFVRUNCTUdURzl1Wkc5dU1ROHdEUVlEVlFRSEV3Wk1iMjVrYjI0eEVEQU9CZ05WQkFvVApCM1ZUZDJsMFkyZ3hEREFLQmdOVkJBc1RBMWRYVnpFUU1BNEdBMVVFQXhNSFMybGhiU0JEUVRDQ0FTSXdEUVlKCktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQU1udnZ1R2xud3luRzlteE9YT2dlRWUza1p2a1loQU8KcCtxZmpLWkZpajBFSU5pYVNEK3J1cVRrZ3JueTVUWTkzMzA5aFYrS2IzTnR2NFEwYmhOUzRjZnJpd0k3cWhuMApHVTRJUkxKd1crMXc4enZhRFMrT1lHcnJnMVdlc2l6VlpSRERTQzNFZE5GZUdXMjlMZWNiZ3NXN0N6MlJOUWQrCmIrNG83K05uemhJSzhRclozcnFWSGVjOUZOVnlydDc4ei8ySVVlZXhkYnZvNFRyUXRZOEQydTV4QUg2Y2lHUDUKVi94cFI1RmJ2bjNUNy80aldJbmxSWHBHeWkxemE2Mi9EaFV6czRZcGcxd01yZXdQT25Ea0VIT0RoTExpVmQ2dgpxQWQxVGdqUFdmTnR0MXZnUEtIQWJ3VXU3LzVWYnRwbVJGRTgxMEpSbDBYa2NwUHQ0cjhtQ21jQ0F3RUFBYU5DCk1FQXdEZ1lEVlIwUEFRSC9CQVFEQWdFR01BOEdBMVVkRXdFQi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZFWkUKaVRzVHkyZHFmZW9PalRKTFFHYjhNdzVQTUEwR0NTcUdTSWIzRFFFQkN3VUFBNElCQVFESUxzQzhVYkM5eE5jcwpWQWZJcUt4R0QrSE85STdtOHNITU5TQzhZbGRJQmRCSE5YRlBhK0o4Mk1hRklUZS91L0NEQ25RTXBuRWVpVi9MCmJxQ0RZbk01MEVQckNXalJGSDJFZGpocGhJTWJNSUpsMlNndGE1ZzlIT3ErTit4eC9tS21ua0xWYWQ1WkVFa0wKZnJ6NTFQRkhabEZyUDZUQ2c1VW9FUERteWw2WHQwNk5tRk40ODBxYjhLSEgxR3o3UEFlNTdIaGJPWXlMQWh3UwoxYjl6bnQxOW9KOXA2ZDlKakIyeEh3R016aWVMWVpQME1rcnJaYWRQUWtkY0NsczJhMkRRVmpCWEQvVzlhYXRBCmZtMEQ4dmN1b0dBdCt0VXkyVHRWR2xUdk5tMnlUeWNMWlhuMGdORDVGNmFCNjVvMmFtNUcvTWRSWDBrd1RCQXgKYU0vZEVQREoKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
diff --git a/stable/kiam/templates/_helpers.tpl b/stable/kiam/templates/_helpers.tpl
index e26a75dba78b..bc28d9ff14ef 100644
--- a/stable/kiam/templates/_helpers.tpl
+++ b/stable/kiam/templates/_helpers.tpl
@@ -88,3 +88,25 @@ Create the name of the server service account to use.
{{ default "default" .Values.serviceAccounts.server.name }}
{{- end -}}
{{- end -}}
+
+{{/*
+Generate certificates for kiam server and agent
+*/}}
+{{- define "kiam.agent.gen-certs" -}}
+{{- $ca := .ca | default (genCA "kiam-ca" 365) -}}
+{{- $_ := set . "ca" $ca -}}
+{{- $cert := genSignedCert "Kiam Agent" nil nil 365 $ca -}}
+{{ .Values.agent.tlsCerts.caFileName }}: {{ $ca.Cert | b64enc }}
+{{ .Values.agent.tlsCerts.certFileName }}: {{ $cert.Cert | b64enc }}
+{{ .Values.agent.tlsCerts.keyFileName }}: {{ $cert.Key | b64enc }}
+{{- end -}}
+{{- define "kiam.server.gen-certs" -}}
+{{- $serverName := printf "%s-%s" (include "kiam.name" .) .Values.server.name -}}
+{{- $altNames := list $serverName (printf "%s:%s" $serverName .Values.server.service.port) (printf "127.0.0.1:%s" .Values.server.service.targetPort) -}}
+{{- $ca := .ca | default (genCA "kiam-ca" 365) -}}
+{{- $_ := set . "ca" $ca -}}
+{{- $cert := genSignedCert "Kiam Server" (list "127.0.0.1") $altNames 365 $ca -}}
+{{ .Values.server.tlsCerts.caFileName }}: {{ $ca.Cert | b64enc }}
+{{ .Values.server.tlsCerts.certFileName }}: {{ $cert.Cert | b64enc }}
+{{ .Values.server.tlsCerts.keyFileName }}: {{ $cert.Key | b64enc }}
+{{- end -}}
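Both helpers base64-encode the PEM material with `b64enc` before it lands in a Secret, which is why the fixture values earlier in this diff look like opaque blobs. A minimal shell sketch of the round trip (the sample value below is just the encoded PEM header line, an illustrative stand-in for the much longer real blobs):

```shell
# Decode a Secret value produced by b64enc back into PEM text.
# "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk==" is base64 for
# "-----BEGIN CERTIFICATE-----\n" -- a placeholder, not a real cert.
pem_b64="LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk=="
printf '%s' "$pem_b64" | base64 -d
# prints: -----BEGIN CERTIFICATE-----
```

The same decode step works on the full `ca`/`cert`/`key` values stored by the chart, e.g. to feed them into `openssl x509 -text` for inspection.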
diff --git a/stable/kiam/templates/agent-daemonset.yaml b/stable/kiam/templates/agent-daemonset.yaml
index 090e87af5829..5054f253f923 100644
--- a/stable/kiam/templates/agent-daemonset.yaml
+++ b/stable/kiam/templates/agent-daemonset.yaml
@@ -61,11 +61,15 @@ spec:
hostPath:
path: {{ .hostPath }}
{{- end }}
+ {{- if .Values.agent.priorityClassName }}
+ priorityClassName: {{ .Values.agent.priorityClassName | quote }}
+ {{- end }}
containers:
- name: {{ template "kiam.name" . }}-{{ .Values.agent.name }}
{{- if .Values.agent.host.iptables }}
securityContext:
- privileged: true
+ capabilities:
+ add: ["NET_ADMIN"]
{{- end }}
image: "{{ .Values.agent.image.repository }}:{{ .Values.agent.image.tag }}"
imagePullPolicy: {{ .Values.agent.image.pullPolicy }}
diff --git a/stable/kiam/templates/agent-secret.yaml b/stable/kiam/templates/agent-secret.yaml
index 8407514ad307..70f499460b5a 100644
--- a/stable/kiam/templates/agent-secret.yaml
+++ b/stable/kiam/templates/agent-secret.yaml
@@ -5,5 +5,9 @@ metadata:
name: {{ template "kiam.fullname" . }}-agent
type: Opaque
data:
+{{- if .Values.agent.tlsFiles.ca }}
{{ toYaml .Values.agent.tlsFiles | indent 2 }}
+{{- else }}
+{{ include "kiam.agent.gen-certs" . | indent 2 }}
+{{- end -}}
{{- end }}
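The branch above keeps the pre-existing workflow intact: when `agent.tlsFiles.ca` is set the supplied material is used verbatim, and only an empty value falls through to `kiam.agent.gen-certs`. A hedged values sketch of the manual path (the base64 blobs are truncated placeholders, not working certificates):

```yaml
# Manually pinned TLS material; omitting `ca` switches the secret
# over to the generated certs instead.
agent:
  tlsFiles:
    ca: LS0tLS1CRUdJTi...    # base64-encoded CA PEM (placeholder)
    cert: LS0tLS1CRUdJTi...  # base64-encoded cert PEM (placeholder)
    key: LS0tLS1CRUdJTi...   # base64-encoded key PEM (placeholder)
```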
diff --git a/stable/kiam/templates/agent-service.yaml b/stable/kiam/templates/agent-service.yaml
index 60154737c578..b1f8ecca3495 100644
--- a/stable/kiam/templates/agent-service.yaml
+++ b/stable/kiam/templates/agent-service.yaml
@@ -3,13 +3,26 @@
apiVersion: v1
kind: Service
metadata:
+ name: {{ template "kiam.fullname" . }}-agent
labels:
app: {{ template "kiam.name" . }}
chart: {{ template "kiam.chart" . }}
component: "{{ .Values.agent.name }}"
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
- name: {{ template "kiam.fullname" . }}-agent
+ {{- range $key, $value := .Values.agent.serviceLabels }}
+ {{ $key }}: {{ $value | quote }}
+ {{- end }}
+ {{- if or .Values.agent.serviceAnnotations .Values.agent.prometheus.scrape }}
+ annotations:
+ {{- range $key, $value := .Values.agent.serviceAnnotations }}
+ {{ $key }}: {{ $value | quote }}
+ {{- end }}
+ {{- if .Values.agent.prometheus.scrape }}
+ prometheus.io/scrape: "true"
+ prometheus.io/port: {{ .Values.agent.prometheus.port | quote }}
+ {{- end }}
+ {{- end }}
spec:
clusterIP: None
selector:
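The annotations block added above renders only when `serviceAnnotations` or `prometheus.scrape` is set. A hedged values override exercising both paths (the port shown is illustrative, not necessarily the chart default):

```yaml
# values override: renders prometheus.io/scrape + custom annotations
agent:
  prometheus:
    scrape: true               # emits prometheus.io/scrape: "true"
    port: 9620                 # illustrative; see values.yaml for the default
  serviceAnnotations:
    example.com/team: platform # merged alongside the scrape annotations
```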
diff --git a/stable/kiam/templates/server-daemonset.yaml b/stable/kiam/templates/server-daemonset.yaml
index 59762f5e61d5..3f5532507758 100644
--- a/stable/kiam/templates/server-daemonset.yaml
+++ b/stable/kiam/templates/server-daemonset.yaml
@@ -54,6 +54,9 @@ spec:
hostPath:
path: {{ .hostPath }}
{{- end }}
+ {{- if .Values.server.priorityClassName }}
+ priorityClassName: {{ .Values.server.priorityClassName | quote }}
+ {{- end }}
containers:
- name: {{ template "kiam.name" . }}-{{ .Values.server.name }}
image: "{{ .Values.server.image.repository }}:{{ .Values.server.image.tag }}"
diff --git a/stable/kiam/templates/server-secret.yaml b/stable/kiam/templates/server-secret.yaml
index 6884e21ae23f..2932e761a1fa 100644
--- a/stable/kiam/templates/server-secret.yaml
+++ b/stable/kiam/templates/server-secret.yaml
@@ -5,5 +5,9 @@ metadata:
name: {{ template "kiam.fullname" . }}-server
type: Opaque
data:
+{{- if .Values.server.tlsFiles.ca }}
{{ toYaml .Values.server.tlsFiles | indent 2 }}
+{{- else }}
+{{ include "kiam.server.gen-certs" . | indent 2 }}
+{{- end -}}
{{- end }}
diff --git a/stable/kiam/templates/server-service.yaml b/stable/kiam/templates/server-service.yaml
index 1eb8de76a12a..4d40f5c06213 100644
--- a/stable/kiam/templates/server-service.yaml
+++ b/stable/kiam/templates/server-service.yaml
@@ -2,13 +2,26 @@
apiVersion: v1
kind: Service
metadata:
+ name: {{ template "kiam.fullname" . }}-server
labels:
app: {{ template "kiam.name" . }}
chart: {{ template "kiam.chart" . }}
component: "{{ .Values.server.name }}"
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
- name: {{ template "kiam.fullname" . }}-server
+ {{- range $key, $value := .Values.server.serviceLabels }}
+ {{ $key }}: {{ $value | quote }}
+ {{- end }}
+ {{- if or .Values.server.serviceAnnotations .Values.server.prometheus.scrape }}
+ annotations:
+ {{- range $key, $value := .Values.server.serviceAnnotations }}
+ {{ $key }}: {{ $value | quote }}
+ {{- end }}
+ {{- if .Values.server.prometheus.scrape }}
+ prometheus.io/scrape: "true"
+ prometheus.io/port: {{ .Values.server.prometheus.port | quote }}
+ {{- end }}
+ {{- end }}
spec:
clusterIP: None
selector:
diff --git a/stable/kiam/templates/server-write-clusterrole.yaml b/stable/kiam/templates/server-write-clusterrole.yaml
index 05ad474a7a94..8ae2bc62936d 100644
--- a/stable/kiam/templates/server-write-clusterrole.yaml
+++ b/stable/kiam/templates/server-write-clusterrole.yaml
@@ -17,5 +17,6 @@ rules:
- events
verbs:
- create
+ - patch
{{- end -}}
{{- end -}}
diff --git a/stable/kiam/values.yaml b/stable/kiam/values.yaml
index 02a00dec4b7c..6c40c947cdf7 100644
--- a/stable/kiam/values.yaml
+++ b/stable/kiam/values.yaml
@@ -11,7 +11,7 @@ agent:
image:
repository: quay.io/uswitch/kiam
- tag: v3.0-rc1
+ tag: v3.2
pullPolicy: IfNotPresent
## Logging settings
@@ -37,6 +37,16 @@ agent:
## Labels to be added to pods
##
podLabels: {}
+ ## Annotations to be added to service
+ ##
+ serviceAnnotations: {}
+ ## Labels to be added to service
+ ##
+ serviceLabels: {}
+ ## Used to assign priority to agent pods
+ ## Ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
+ ##
+ priorityClassName: ""
## Strategy for DaemonSet updates (requires Kubernetes 1.6+)
## Ref: https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/
##
@@ -122,7 +132,7 @@ server:
image:
repository: quay.io/uswitch/kiam
- tag: v3.0-rc1
+ tag: v3.2
pullPolicy: IfNotPresent
## Logging settings
@@ -139,6 +149,19 @@ server:
## Annotations to be added to pods
##
podAnnotations: {}
+ ## Labels to be added to pods
+ ##
+ podLabels: {}
+ ## Annotations to be added to service
+ ##
+ serviceAnnotations: {}
+ ## Labels to be added to service
+ ##
+ serviceLabels: {}
+ ## Used to assign priority to server pods
+ ## Ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
+ ##
+ priorityClassName: ""
## Strategy for DaemonSet updates (requires Kubernetes 1.6+)
## Ref: https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/
##
@@ -198,7 +221,7 @@ server:
## Server probe configuration
probes:
- serverAddress: localhost
+ serverAddress: 127.0.0.1
## Base64-encoded PEM values for server's CA certificate(s), certificate and private key
##
diff --git a/stable/kibana/Chart.yaml b/stable/kibana/Chart.yaml
index c32b97dd7b5d..f726766810c1 100644
--- a/stable/kibana/Chart.yaml
+++ b/stable/kibana/Chart.yaml
@@ -1,6 +1,7 @@
+apiVersion: v1
name: kibana
-version: 1.3.0
-appVersion: 6.6.0
+version: 3.0.0
+appVersion: 6.7.0
description: Kibana is an open source data visualization plugin for Elasticsearch
icon: https://raw.githubusercontent.com/elastic/kibana/master/src/ui/public/icons/kibana-color.svg
keywords:
@@ -9,6 +10,8 @@ keywords:
maintainers:
- name: compleatang
email: casey@monax.io
+- name: monotek
+ email: monotek23@gmail.com
sources:
- https://github.com/elastic/kibana
engine: gotpl
diff --git a/stable/kibana/README.md b/stable/kibana/README.md
index 8b0c28606fe7..dbe510d20f49 100644
--- a/stable/kibana/README.md
+++ b/stable/kibana/README.md
@@ -38,80 +38,89 @@ The command removes all the Kubernetes components associated with the chart and
The following table lists the configurable parameters of the kibana chart and their default values.
-| Parameter | Description | Default |
-|-----------------------------------------------|--------------------------------------------|----------------------------------------|
-| `affinity` | node/pod affinities | None |
-| `env` | Environment variables to configure Kibana | `{}` |
-| `files` | Kibana configuration files | None |
-| `livenessProbe.enabled` | livenessProbe to be enabled? | `false` |
-| `livenessProbe.initialDelaySeconds` | number of seconds | 30 |
-| `livenessProbe.timeoutSeconds` | number of seconds | 10 |
-| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
-| `image.repository` | Image repository | `docker.elastic.co/kibana/kibana-oss` |
-| `image.tag` | Image tag | `6.6.0` |
-| `image.pullSecrets` | Specify image pull secrets | `nil` |
-| `commandline.args` | add additional commandline args | `nil` |
-| `ingress.enabled` | Enables Ingress | `false` |
-| `ingress.annotations` | Ingress annotations | None: |
-| `ingress.hosts` | Ingress accepted hostnames | None: |
-| `ingress.tls` | Ingress TLS configuration | None: |
-| `nodeSelector` | node labels for pod assignment | `{}` |
-| `podAnnotations` | annotations to add to each pod | `{}` |
-| `replicaCount` | desired number of pods | `1` |
-| `revisionHistoryLimit` | revisionHistoryLimit | `3` |
-| `serviceAccountName` | DEPRECATED: use serviceAccount.name | `nil` |
-| `serviceAccount.create` | create a serviceAccount to run the pod | `false` |
-| `serviceAccount.name` | name of the serviceAccount to create | `kibana.fullname` |
-| `authProxyEnabled` | enables authproxy. Create container in extracontainers | `false` |
-| `extraContainers` | Sidecar containers to add to the kibana pod| `{}` |
-| `extraVolumeMounts` | additional volumemounts for the kibana pod | `[]` |
-| `extraVolumes` | additional volumes to add to the kibana pod| `[]` |
-| `resources` | pod resource requests & limits | `{}` |
-| `priorityClassName` | priorityClassName | `nil` |
-| `service.externalPort` | external port for the service | `443` |
-| `service.disableInternalPort` | disable internal port when using sidecar | `false` |
-| `service.internalPort` | internal port for the service | `4180` |
-| `service.authProxyPort` | port to use when using sidecar authProxy | None: |
-| `service.externalIPs` | external IP addresses | None: |
-| `service.loadBalancerIP` | Load Balancer IP address | None: |
-| `service.loadBalancerSourceRanges` | Limit load balancer source IPs to list of CIDRs (where available)) | `[]` |
-| `service.nodePort` | NodePort value if service.type is NodePort | None: |
-| `service.type` | type of service | `ClusterIP` |
-| `service.annotations` | Kubernetes service annotations | None: |
-| `service.labels` | Kubernetes service labels | None: |
-| `tolerations` | List of node taints to tolerate | `[]` |
-| `dashboardImport.timeout` | Time in seconds waiting for Kibana to be in green overall state | `60` |
-| `dashboardImport.xpackauth.enabled` | Enable Xpack auth | `false` |
-| `dashboardImport.xpackauth.username` | Optional Xpack username | `myuser` |
-| `dashboardImport.xpackauth.password` | Optional Xpack password | `mypass` |
-| `dashboardImport.dashboards` | Dashboards | `{}` |
-| `plugins.enabled` | Enable installation of plugins. | `false` |
-| `plugins.reset` | Optional : Remove all installed plugins before installing all new ones | `false` |
-| `plugins.values` | List of plugins to install. Format with URLs pointing to zip files of Kibana plugins to install | None: |
-| `persistentVolumeClaim.enabled` | Enable PVC for plugins | `false` |
-| `persistentVolumeClaim.existingClaim` | Use your own PVC for plugins | `false` |
-| `persistentVolumeClaim.annotations` | Add your annotations for the PVC | `{}` |
-| `persistentVolumeClaim.accessModes` | Acces mode to the PVC | `ReadWriteOnce` |
-| `persistentVolumeClaim.size` | Size of the PVC | `5Gi` |
-| `persistentVolumeClaim.storageClass` | Storage class of the PVC | None: |
-| `readinessProbe.enabled` | readinessProbe to be enabled? | `false` |
-| `readinessProbe.initialDelaySeconds` | number of seconds | 30 |
-| `readinessProbe.timeoutSeconds` | number of seconds | 10 |
-| `readinessProbe.periodSeconds` | number of seconds | 10 |
-| `readinessProbe.successThreshold` | number of successes | 5 |
-| `securityContext.enabled` | Enable security context (should be true for PVC) | `false` |
-| `securityContext.allowPrivilegeEscalation` | Allow privilege escalation | `false` |
-| `securityContext.runAsUser` | User id to run in pods | `1000` |
-| `securityContext.fsGroup` | fsGroup id to run in pods | `2000` |
-| `extraConfigMapMounts` | Additional configmaps to be mounted | `[]` |
-| `deployment.annotations` | Annotations for deployment | `{}` |
+| Parameter | Description | Default |
+| ------------------------------------------ | ---------------------------------------------------------------------- | ------------------------------------- |
+| `affinity` | node/pod affinities | None |
+| `env` | Environment variables to configure Kibana | `{}` |
+| `files` | Kibana configuration files | None |
+| `livenessProbe.enabled` | livenessProbe to be enabled? | `false` |
+| `livenessProbe.path` | path for livenessProbe | `/status` |
+| `livenessProbe.initialDelaySeconds` | number of seconds | 30 |
+| `livenessProbe.timeoutSeconds` | number of seconds | 10 |
+| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
+| `image.repository` | Image repository | `docker.elastic.co/kibana/kibana-oss` |
+| `image.tag` | Image tag | `6.7.0` |
+| `image.pullSecrets` | Specify image pull secrets | `nil` |
+| `commandline.args` | add additional commandline args | `nil` |
+| `ingress.enabled` | Enables Ingress | `false` |
+| `ingress.annotations` | Ingress annotations | None: |
+| `ingress.hosts` | Ingress accepted hostnames | None: |
+| `ingress.tls` | Ingress TLS configuration | None: |
+| `nodeSelector` | node labels for pod assignment | `{}` |
+| `podAnnotations` | annotations to add to each pod | `{}` |
+| `podLabels` | labels to add to each pod | `{}` |
+| `replicaCount` | desired number of pods | `1` |
+| `revisionHistoryLimit` | revisionHistoryLimit | `3` |
+| `serviceAccountName` | DEPRECATED: use serviceAccount.name | `nil` |
+| `serviceAccount.create` | create a serviceAccount to run the pod | `false` |
+| `serviceAccount.name` | name of the serviceAccount to create | `kibana.fullname` |
+| `authProxyEnabled`                         | enables authproxy; define the proxy container in `extraContainers`      | `false`                               |
+| `extraContainers` | Sidecar containers to add to the kibana pod | `{}` |
+| `extraVolumeMounts` | additional volumemounts for the kibana pod | `[]` |
+| `extraVolumes` | additional volumes to add to the kibana pod | `[]` |
+| `resources` | pod resource requests & limits | `{}` |
+| `priorityClassName` | priorityClassName | `nil` |
+| `service.externalPort` | external port for the service | `443` |
+| `service.internalPort` | internal port for the service | `4180` |
+| `service.portName` | service port name | None: |
+| `service.authProxyPort` | port to use when using sidecar authProxy | None: |
+| `service.externalIPs` | external IP addresses | None: |
+| `service.loadBalancerIP` | Load Balancer IP address | None: |
+| `service.loadBalancerSourceRanges`         | Limit load balancer source IPs to list of CIDRs (where available)       | `[]`                                  |
+| `service.nodePort` | NodePort value if service.type is NodePort | None: |
+| `service.type` | type of service | `ClusterIP` |
+| `service.clusterIP` | static clusterIP or None for headless services | None: |
+| `service.annotations` | Kubernetes service annotations | None: |
+| `service.labels` | Kubernetes service labels | None: |
+| `service.selector` | Kubernetes service selector | `{}` |
+| `tolerations` | List of node taints to tolerate | `[]` |
+| `dashboardImport.enabled` | Enable dashboard import | `false` |
+| `dashboardImport.timeout` | Time in seconds waiting for Kibana to be in green overall state | `60` |
+| `dashboardImport.xpackauth.enabled` | Enable Xpack auth | `false` |
+| `dashboardImport.xpackauth.username` | Optional Xpack username | `myuser` |
+| `dashboardImport.xpackauth.password` | Optional Xpack password | `mypass` |
+| `dashboardImport.dashboards` | Dashboards | `{}` |
+| `plugins.enabled` | Enable installation of plugins. | `false` |
+| `plugins.reset`                            | Optional: Remove all installed plugins before installing all new ones   | `false`                               |
+| `plugins.values`                           | List of plugins to install. Format: URLs pointing to zip files of Kibana plugins | None:                        |
+| `persistentVolumeClaim.enabled` | Enable PVC for plugins | `false` |
+| `persistentVolumeClaim.existingClaim` | Use your own PVC for plugins | `false` |
+| `persistentVolumeClaim.annotations` | Add your annotations for the PVC | `{}` |
+| `persistentVolumeClaim.accessModes`        | Access mode to the PVC                                                  | `ReadWriteOnce`                       |
+| `persistentVolumeClaim.size` | Size of the PVC | `5Gi` |
+| `persistentVolumeClaim.storageClass` | Storage class of the PVC | None: |
+| `readinessProbe.enabled` | readinessProbe to be enabled? | `false` |
+| `readinessProbe.path` | path for readinessProbe | `/status` |
+| `readinessProbe.initialDelaySeconds` | number of seconds | 30 |
+| `readinessProbe.timeoutSeconds` | number of seconds | 10 |
+| `readinessProbe.periodSeconds` | number of seconds | 10 |
+| `readinessProbe.successThreshold` | number of successes | 5 |
+| `securityContext.enabled` | Enable security context (should be true for PVC) | `false` |
+| `securityContext.allowPrivilegeEscalation` | Allow privilege escalation | `false` |
+| `securityContext.runAsUser` | User id to run in pods | `1000` |
+| `securityContext.fsGroup` | fsGroup id to run in pods | `2000` |
+| `extraConfigMapMounts` | Additional configmaps to be mounted | `[]` |
+| `deployment.annotations` | Annotations for deployment | `{}` |
+| `initContainers` | Init containers to add to the kibana deployment | `{}` |
+| `testFramework.image` | `test-framework` image repository. | `dduportal/bats` |
+| `testFramework.tag` | `test-framework` image tag. | `0.4.0` |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
-* The Kibana configuration files config properties can be set through the `env` parameter too.
-* All the files listed under this variable will overwrite any existing files by the same name in kibana config directory.
-* Files not mentioned under this variable will remain unaffected.
+- The Kibana configuration files config properties can be set through the `env` parameter too.
+- All the files listed under this variable will overwrite any existing files by the same name in kibana config directory.
+- Files not mentioned under this variable will remain unaffected.
```console
$ helm install stable/kibana --name my-release \
@@ -128,4 +137,10 @@ $ helm install stable/kibana --name my-release -f values.yaml
## Dasboard import
-* A dashboard for dashboardImport.dashboards can be a JSON or a download url to a JSON file.
+- A dashboard for dashboardImport.dashboards can be a JSON or a download url to a JSON file.
+
+## Upgrading
+
+### To 2.3.0
+
+The default value of `elasticsearch.url` (for kibana < 6.6) has been removed in favor of `elasticsearch.hosts` (for kibana >= 6.6).
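Concretely, the renamed setting surfaces through the chart's `env` passthrough rather than a dedicated value. A minimal values sketch for the new scheme (the hostname is illustrative):

```yaml
# values.yaml after upgrading to chart >= 2.3.0 (kibana >= 6.6)
env:
  ELASTICSEARCH_HOSTS: http://elasticsearch-client:9200
# kibana < 6.6 would instead set:
#   ELASTICSEARCH_URL: http://elasticsearch-client:9200
```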
diff --git a/stable/kibana/ci/authproxy-enabled.yaml b/stable/kibana/ci/authproxy-enabled.yaml
new file mode 100644
index 000000000000..186724a58f6b
--- /dev/null
+++ b/stable/kibana/ci/authproxy-enabled.yaml
@@ -0,0 +1,3 @@
+---
+# disable internal port by setting authProxyEnabled
+authProxyEnabled: true
diff --git a/stable/kibana/ci/dashboard-values.yaml b/stable/kibana/ci/dashboard-values.yaml
index 864139155c5a..e3516ceedd43 100644
--- a/stable/kibana/ci/dashboard-values.yaml
+++ b/stable/kibana/ci/dashboard-values.yaml
@@ -2,10 +2,11 @@
# enable the dashboard init container with dashboard embedded in configmap
dashboardImport:
+ enabled: true
dashboards:
1_create_index: |-
{
- "version": "6.4.2",
+ "version": "6.7.0",
"objects": [
{
"id": "a88738e0-d3c1-11e8-b38e-a37c21cf8c95",
diff --git a/stable/kibana/ci/disabled-internal-port.yaml b/stable/kibana/ci/disabled-internal-port.yaml
deleted file mode 100644
index 51e49a736f29..000000000000
--- a/stable/kibana/ci/disabled-internal-port.yaml
+++ /dev/null
@@ -1,4 +0,0 @@
----
-# disable internal service
-service:
- disableInternalPort: true
diff --git a/stable/kibana/ci/initcontainers-all-values.yaml b/stable/kibana/ci/initcontainers-all-values.yaml
new file mode 100644
index 000000000000..297220bdbddd
--- /dev/null
+++ b/stable/kibana/ci/initcontainers-all-values.yaml
@@ -0,0 +1,23 @@
+---
+# enable all init container types
+
+# A dashboard is defined by a name and a string with the json payload or the download url
+dashboardImport:
+ enabled: true
+ dashboards:
+ k8s: https://raw.githubusercontent.com/monotek/kibana-dashboards/master/k8s-fluentd-elasticsearch.json
+
+# Enable the plugin init container with plugins retrieved from a URL
+plugins:
+ enabled: true
+ reset: false
+ # Use to add/upgrade plugin
+ values:
+ - analyze-api-ui-plugin,6.7.0,https://github.com/johtani/analyze-api-ui-plugin/releases/download/6.7.0/analyze-api-ui-plugin-6.7.0.zip
+ # - other_plugin
+
+# Add your own init container
+initContainers:
+ echo-container:
+ image: "busybox"
+ command: ['sh', '-c', 'echo Hello from init container! && sleep 3']
diff --git a/stable/kibana/ci/initcontainers-values.yaml b/stable/kibana/ci/initcontainers-values.yaml
new file mode 100644
index 000000000000..70d939c6952f
--- /dev/null
+++ b/stable/kibana/ci/initcontainers-values.yaml
@@ -0,0 +1,18 @@
+---
+# enable user-defined init containers
+
+initContainers:
+ numbers-container:
+ image: "busybox"
+ imagePullPolicy: "IfNotPresent"
+ command:
+ - "/bin/sh"
+ - "-c"
+ - |
+ for i in $(seq 1 10); do
+ echo $i
+ done
+
+ echo-container:
+ image: "busybox"
+ command: ['sh', '-c', 'echo Hello from init container! && sleep 3']
diff --git a/stable/kibana/ci/plugin-install.yaml b/stable/kibana/ci/plugin-install.yaml
index 57912bd6197f..6c9da5cd8c44 100644
--- a/stable/kibana/ci/plugin-install.yaml
+++ b/stable/kibana/ci/plugin-install.yaml
@@ -5,5 +5,5 @@ plugins:
reset: false
# Use to add/upgrade plugin
values:
- - logtrail,0.1.30,https://github.com/sivasamyk/logtrail/releases/download/v0.1.30/logtrail-6.4.3-0.1.30.zip
+ - analyze-api-ui-plugin,6.7.0,https://github.com/johtani/analyze-api-ui-plugin/releases/download/6.7.0/analyze-api-ui-plugin-6.7.0.zip
# - other_plugin
diff --git a/stable/kibana/ci/service-values.yaml b/stable/kibana/ci/service-values.yaml
new file mode 100644
index 000000000000..cebf52add1bc
--- /dev/null
+++ b/stable/kibana/ci/service-values.yaml
@@ -0,0 +1,4 @@
+---
+service:
+ selector:
+ foo: bar
diff --git a/stable/kibana/ci/url_dashboard-values.yaml b/stable/kibana/ci/url_dashboard-values.yaml
index eb536cf5376f..7149314db4c9 100644
--- a/stable/kibana/ci/url_dashboard-values.yaml
+++ b/stable/kibana/ci/url_dashboard-values.yaml
@@ -2,5 +2,6 @@
# enable the dashboard init container with dashboard retrieved from an URL
dashboardImport:
+ enabled: true
dashboards:
k8s: https://raw.githubusercontent.com/monotek/kibana-dashboards/master/k8s-fluentd-elasticsearch.json
diff --git a/stable/kibana/templates/configmap-dashboardimport.yaml b/stable/kibana/templates/configmap-dashboardimport.yaml
index 3a76c7efbb24..2155a4066a96 100644
--- a/stable/kibana/templates/configmap-dashboardimport.yaml
+++ b/stable/kibana/templates/configmap-dashboardimport.yaml
@@ -1,4 +1,4 @@
-{{- if .Values.dashboardImport.dashboards }}
+{{- if .Values.dashboardImport.enabled }}
apiVersion: v1
kind: ConfigMap
metadata:
diff --git a/stable/kibana/templates/deployment.yaml b/stable/kibana/templates/deployment.yaml
index 6669bb00072c..33208eaa3ca7 100644
--- a/stable/kibana/templates/deployment.yaml
+++ b/stable/kibana/templates/deployment.yaml
@@ -24,14 +24,23 @@ spec:
labels:
app: {{ template "kibana.name" . }}
release: "{{ .Release.Name }}"
+{{- if .Values.podLabels }}
+{{ toYaml .Values.podLabels | indent 8 }}
+{{- end }}
spec:
serviceAccountName: {{ template "kibana.serviceAccountName" . }}
{{- if .Values.priorityClassName }}
priorityClassName: "{{ .Values.priorityClassName }}"
{{- end }}
-{{- if or (.Values.dashboardImport.dashboards) (.Values.plugins.enabled) }}
+{{- if or (.Values.initContainers) (.Values.dashboardImport.enabled) (.Values.plugins.enabled) }}
initContainers:
-{{- if .Values.dashboardImport.dashboards }}
+{{- if .Values.initContainers }}
+{{- range $key, $value := .Values.initContainers }}
+ - name: "{{ $key }}"
+{{ toYaml $value | indent 8 }}
+{{- end }}
+{{- end }}
+{{- if .Values.dashboardImport.enabled }}
- name: {{ .Chart.Name }}-dashboardimport
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
@@ -47,10 +56,6 @@ spec:
- name: "{{ $key }}"
value: "{{ $value }}"
{{- end }}
- ports:
- - containerPort: {{ .Values.service.internalPort }}
- name: {{ template "kibana.name" . }}
- protocol: TCP
volumeMounts:
- name: {{ template "kibana.fullname" . }}-dashboards
mountPath: "/kibanadashboards"
@@ -106,12 +111,6 @@ spec:
- name: "{{ $key }}"
value: "{{ $value }}"
{{- end }}
-{{- if not .Values.service.disableInternalPort }}
- ports:
- - containerPort: {{ .Values.service.internalPort }}
- name: {{ template "kibana.name" . }}
- protocol: TCP
-{{- end }}
volumeMounts:
- name: plugins
mountPath: /usr/share/kibana/plugins
@@ -150,7 +149,7 @@ spec:
{{- if .Values.livenessProbe.enabled }}
livenessProbe:
httpGet:
- path: /status
+ path: {{ .Values.livenessProbe.path }}
port: {{ .Values.service.internalPort }}
initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }}
@@ -158,7 +157,7 @@ spec:
{{- if .Values.readinessProbe.enabled }}
readinessProbe:
httpGet:
- path: /status
+ path: {{ .Values.readinessProbe.path }}
port: {{ .Values.service.internalPort }}
initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }}
@@ -220,7 +219,7 @@ spec:
emptyDir: {}
{{- end }}
{{- end }}
-{{- if .Values.dashboardImport.dashboards }}
+{{- if .Values.dashboardImport.enabled }}
- name: {{ template "kibana.fullname" . }}-dashboards
configMap:
name: {{ template "kibana.fullname" . }}-dashboards
diff --git a/stable/kibana/templates/service.yaml b/stable/kibana/templates/service.yaml
index cb22fe0882e2..4416c45b1af1 100644
--- a/stable/kibana/templates/service.yaml
+++ b/stable/kibana/templates/service.yaml
@@ -24,6 +24,9 @@ spec:
{{- end }}
{{- end }}
type: {{ .Values.service.type }}
+ {{- if and (eq .Values.service.type "ClusterIP") .Values.service.clusterIP }}
+ clusterIP: {{ .Values.service.clusterIP }}
+ {{- end }}
ports:
- port: {{ .Values.service.externalPort }}
{{- if not .Values.authProxyEnabled }}
@@ -35,6 +38,9 @@ spec:
{{ if (and (eq .Values.service.type "NodePort") (not (empty .Values.service.nodePort))) }}
nodePort: {{ .Values.service.nodePort }}
{{ end }}
+{{- if .Values.service.portName }}
+ name: {{ .Values.service.portName }}
+{{- end }}
{{- if .Values.service.externalIPs }}
externalIPs:
{{ toYaml .Values.service.externalIPs | indent 4 }}
@@ -42,6 +48,9 @@ spec:
selector:
app: {{ template "kibana.name" . }}
release: {{ .Release.Name }}
+{{- range $key, $value := .Values.service.selector }}
+ {{ $key }}: {{ $value | quote }}
+{{- end }}
{{- if .Values.service.loadBalancerIP }}
loadBalancerIP: {{ .Values.service.loadBalancerIP }}
{{- end }}
diff --git a/stable/kibana/templates/tests/test-configmap.yaml b/stable/kibana/templates/tests/test-configmap.yaml
new file mode 100644
index 000000000000..912755e2974f
--- /dev/null
+++ b/stable/kibana/templates/tests/test-configmap.yaml
@@ -0,0 +1,35 @@
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: {{ template "kibana.fullname" . }}-test
+ labels:
+ app: {{ template "kibana.fullname" . }}
+ chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ heritage: "{{ .Release.Service }}"
+ release: "{{ .Release.Name }}"
+data:
+ run.sh: |-
+ @test "Test Status" {
+ {{- if .Values.service.selector }}
+ skip "Can't guarantee pod names with selector"
+ {{- else }}
+ {{- $port := .Values.service.externalPort }}
+ url="http://{{ template "kibana.fullname" . }}{{ if $port }}:{{ $port }}{{ end }}/api{{ .Values.livenessProbe.path }}"
+
+ # retry for 1 minute
+ run curl -s -o /dev/null -I -w "%{http_code}" --retry 30 --retry-delay 2 $url
+
+ code=$(curl -s -o /dev/null -I -w "%{http_code}" $url)
+ body=$(curl $url)
+ if [ "$code" == "503" ]
+ then
+ skip "Kibana Unavailable (503), can't get status - see pod logs: $body"
+ fi
+
+ result=$(echo $body | jq -cr '.status.statuses[]')
+ [ "$result" != "" ]
+
+ result=$(echo $body | jq -cr '.status.statuses[] | select(.state != "green")')
+ [ "$result" == "" ]
+ {{- end }}
+ }
diff --git a/stable/kibana/templates/tests/test.yaml b/stable/kibana/templates/tests/test.yaml
new file mode 100644
index 000000000000..8a518fde8d6e
--- /dev/null
+++ b/stable/kibana/templates/tests/test.yaml
@@ -0,0 +1,42 @@
+apiVersion: v1
+kind: Pod
+metadata:
+ name: {{ template "kibana.fullname" . }}-test
+ labels:
+ app: {{ template "kibana.fullname" . }}
+ chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ heritage: "{{ .Release.Service }}"
+ release: "{{ .Release.Name }}"
+ annotations:
+ "helm.sh/hook": test-success
+spec:
+ initContainers:
+ - name: test-framework
+ image: "{{ .Values.testFramework.image }}:{{ .Values.testFramework.tag }}"
+ command:
+ - "bash"
+ - "-c"
+ - |
+ set -ex
+ # copy bats to tools dir
+ cp -R /usr/local/libexec/ /tools/bats/
+ volumeMounts:
+ - mountPath: /tools
+ name: tools
+ containers:
+ - name: {{ .Release.Name }}-test
+ image: "dwdraju/alpine-curl-jq"
+ command: ["/tools/bats/bats", "-t", "/tests/run.sh"]
+ volumeMounts:
+ - mountPath: /tests
+ name: tests
+ readOnly: true
+ - mountPath: /tools
+ name: tools
+ volumes:
+ - name: tests
+ configMap:
+ name: {{ template "kibana.fullname" . }}-test
+ - name: tools
+ emptyDir: {}
+ restartPolicy: Never
diff --git a/stable/kibana/values.yaml b/stable/kibana/values.yaml
index c950aecb89b3..635d9473f59e 100644
--- a/stable/kibana/values.yaml
+++ b/stable/kibana/values.yaml
@@ -1,17 +1,21 @@
image:
repository: "docker.elastic.co/kibana/kibana-oss"
- tag: "6.6.0"
+ tag: "6.7.0"
pullPolicy: "IfNotPresent"
+testFramework:
+ image: "dduportal/bats"
+ tag: "0.4.0"
+
commandline:
args: []
env: {}
- # All Kibana configuration options are adjustable via env vars.
- # To adjust a config option to an env var uppercase + replace `.` with `_`
- # Ref: https://www.elastic.co/guide/en/kibana/current/settings.html
- #
- # ELASTICSEARCH_URL: http://elasticsearch-client:9200
+ ## All Kibana configuration options are adjustable via env vars.
+ ## To map a config option to an env var, uppercase it and replace `.` with `_`
+ ## Ref: https://www.elastic.co/guide/en/kibana/current/settings.html
+ ## For kibana < 6.6, use ELASTICSEARCH_URL instead
+ # ELASTICSEARCH_HOSTS: http://elasticsearch-client:9200
# SERVER_PORT: 5601
# LOGGING_VERBOSE: "true"
# SERVER_DEFAULTROUTE: "/app/kibana"
@@ -21,7 +25,8 @@ files:
## Default Kibana configuration from kibana-docker.
server.name: kibana
server.host: "0"
- elasticsearch.url: http://elasticsearch:9200
+ ## For kibana < 6.6, use elasticsearch.url instead
+ elasticsearch.hosts: http://elasticsearch:9200
## Custom config properties below
## Ref: https://www.elastic.co/guide/en/kibana/current/settings.html
@@ -34,9 +39,9 @@ deployment:
service:
type: ClusterIP
+ # clusterIP: None
+ # portName: kibana-svc
externalPort: 443
- # disables the internal port if set to true; to be used with a sidecar
- disableInternalPort: false
internalPort: 5601
# authProxyPort: 5602 To be used with authProxyEnabled and a proxy extraContainer
## External IP addresses of service
@@ -57,6 +62,7 @@ service:
# kubernetes.io/cluster-service: "true"
## Limit load balancer source ips to list of CIDRs (where available)
# loadBalancerSourceRanges: []
+ selector: {}
ingress:
enabled: false
@@ -81,11 +87,13 @@ serviceAccount:
livenessProbe:
enabled: false
+ path: /status
initialDelaySeconds: 30
timeoutSeconds: 10
readinessProbe:
enabled: false
+ path: /status
initialDelaySeconds: 30
timeoutSeconds: 10
periodSeconds: 10
@@ -138,10 +146,14 @@ podAnnotations: {}
replicaCount: 1
revisionHistoryLimit: 3
+# Custom labels to apply to pods
+podLabels: {}
+
# To export a dashboard from a running Kibana 6.3.x use:
# curl --user : -XGET https://kibana.yourdomain.com:5601/api/kibana/dashboards/export?dashboard= > my-dashboard.json
# A dashboard is defined by a name and a string with the json payload or the download url
dashboardImport:
+ enabled: false
timeout: 60
xpackauth:
enabled: false
@@ -160,7 +172,7 @@ plugins:
# Use to add/upgrade plugin
values:
# - elastalert-kibana-plugin,1.0.1,https://github.com/bitsensor/elastalert-kibana-plugin/releases/download/1.0.1/elastalert-kibana-plugin-1.0.1-6.4.2.zip
- # - logtrail,0.1.30,https://github.com/sivasamyk/logtrail/releases/download/v0.1.30/logtrail-6.4.2-0.1.30.zip
+ # - logtrail,0.1.31,https://github.com/sivasamyk/logtrail/releases/download/v0.1.31/logtrail-6.6.0-0.1.31.zip
# - other_plugin
persistentVolumeClaim:
@@ -193,3 +205,24 @@ extraConfigMapMounts: []
# configMap: kibana-logtrail
# mountPath: /usr/share/kibana/plugins/logtrail/logtrail.json
# subPath: logtrail.json
+
+# Add your own init container or uncomment and modify the given example.
+initContainers: {}
+ ## Don't start Kibana until Elasticsearch is reachable.
+ ## Ensure that it is available at http://elasticsearch:9200
+ ##
+ # es-check: # <- will be used as container name
+ # image: "appropriate/curl:latest"
+ # imagePullPolicy: "IfNotPresent"
+ # command:
+ # - "/bin/sh"
+ # - "-c"
+ # - |
+ # is_down=true
+ # while "$is_down"; do
+ # if curl -sSf --fail-early --connect-timeout 5 http://elasticsearch:9200; then
+ # is_down=false
+ # else
+ # sleep 5
+ # fi
+ # done
diff --git a/stable/kong/Chart.yaml b/stable/kong/Chart.yaml
index 51d5f8c78e5f..60d12a1a5cf9 100644
--- a/stable/kong/Chart.yaml
+++ b/stable/kong/Chart.yaml
@@ -10,5 +10,5 @@ maintainers:
name: kong
sources:
- https://github.com/Kong/kong
-version: 0.9.2
-appVersion: 1.0.2
+version: 0.11.2
+appVersion: 1.1
diff --git a/stable/kong/README.md b/stable/kong/README.md
index f8ca06d21d18..062ca10646d1 100644
--- a/stable/kong/README.md
+++ b/stable/kong/README.md
@@ -51,7 +51,7 @@ and their default values.
| Parameter | Description | Default |
| ------------------------------ | -------------------------------------------------------------------------------- | ------------------- |
| image.repository | Kong image | `kong` |
-| image.tag | Kong image version | `1.0.2` |
+| image.tag | Kong image version | `1.1` |
| image.pullPolicy | Image pull policy | `IfNotPresent` |
| image.pullSecrets | Image pull secrets | `null` |
| replicaCount | Kong instance count | `1` |
@@ -59,6 +59,7 @@ and their default values.
| admin.servicePort | TCP port on which the Kong admin service is exposed | `8444` |
| admin.containerPort | TCP port on which Kong app listens for admin traffic | `8444` |
| admin.nodePort | Node port when service type is `NodePort` | |
+| admin.hostPort | Host port to use for admin traffic | |
| admin.type | k8s service type, Options: NodePort, ClusterIP, LoadBalancer | `NodePort` |
| admin.loadBalancerIP | Will reuse an existing ingress static IP for the admin service | `null` |
| admin.loadBalancerSourceRanges | Limit admin access to CIDRs if set and service type is `LoadBalancer` | `[]` |
@@ -67,17 +68,21 @@ and their default values.
| admin.ingress.hosts | List of ingress hosts. | `[]` |
| admin.ingress.path | Ingress path. | `/` |
| admin.ingress.annotations | Ingress annotations. See documentation for your ingress controller for details | `{}` |
-| proxy.http.enabled | Enables http on the proxy | true |
+| proxy.http.enabled | Enables http on the proxy | true |
| proxy.http.servicePort | Service port to use for http | 80 |
| proxy.http.containerPort | Container port to use for http | 8000 |
| proxy.http.nodePort | Node port to use for http | 32080 |
+| proxy.http.hostPort | Host port to use for http | |
| proxy.tls.enabled | Enables TLS on the proxy | true |
| proxy.tls.containerPort | Container port to use for TLS | 8443 |
| proxy.tls.servicePort | Service port to use for TLS | 8443 |
| proxy.tls.nodePort | Node port to use for TLS | 32443 |
+| proxy.tls.hostPort | Host port to use for TLS | |
| proxy.type | k8s service type. Options: NodePort, ClusterIP, LoadBalancer | `NodePort` |
| proxy.loadBalancerSourceRanges | Limit proxy access to CIDRs if set and service type is `LoadBalancer` | `[]` |
| proxy.loadBalancerIP | To reuse an existing ingress static IP for the admin service | |
+| proxy.externalIPs | IPs for which nodes in the cluster will also accept traffic for the proxy | `[]` |
+| proxy.externalTrafficPolicy | k8s service's externalTrafficPolicy. Options: Cluster, Local | |
| proxy.ingress.enabled | Enable ingress resource creation (works with proxy.type=ClusterIP) | `false` |
| proxy.ingress.tls | Name of secret resource, containing TLS secret | |
| proxy.ingress.hosts | List of ingress hosts. | `[]` |
@@ -93,6 +98,21 @@ and their default values.
| resources | Pod resource requests & limits | `{}` |
| tolerations | List of node taints to tolerate | `[]` |
+### Admin/Proxy listener override
+
+If you specify `env.admin_listen` or `env.proxy_listen`, this chart will use
+the value provided by you as opposed to constructing a listen variable
+from fields like `proxy.http.containerPort` and `proxy.http.enabled`. This allows
+you to be more prescriptive when defining listen directives.
+
+**Note:** Overriding `env.proxy_listen` and `env.admin_listen` can cause
+`admin.containerPort`, `proxy.http.containerPort` and `proxy.tls.containerPort` to fall out of sync,
+so those values must be updated accordingly.
+
+E.g. updating to `env.proxy_listen: 0.0.0.0:4444, 0.0.0.0:4443 ssl` requires setting
+`proxy.http.containerPort: 4444` and `proxy.tls.containerPort: 4443` for
+the service definition to work properly.
+
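
To make the override concrete, here is a sketch of a values override that keeps the listen directives and container ports in sync (the port numbers are hypothetical):

```yaml
# Hypothetical override: custom listen directives with matching container ports
env:
  proxy_listen: "0.0.0.0:4444, 0.0.0.0:4443 ssl"
proxy:
  http:
    containerPort: 4444   # matches the plain proxy_listen address
  tls:
    containerPort: 4443   # matches the ssl proxy_listen address
```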
### Kong-specific parameters
Kong has a choice of either Postgres or Cassandra as a backend datastore.
@@ -122,13 +142,30 @@ Postgres is enabled by default.
| env.cassandra_keyspace | Cassandra keyspace | `kong` |
| env.cassandra_repl_factor | Replication factor for the Kong keyspace | `2` |
-For complete list of Kong configurations please check https://getkong.org/docs/1.0.x/configuration/.
+
+All `kong.env` parameters can also accept a mapping instead of a literal value, so that parameters can be sourced from ConfigMaps and Secrets.
+
+An example:
+
+```yaml
+kong:
+ env:
+ pg_user: kong
+ pg_password:
+ valueFrom:
+ secretKeyRef:
+ key: kong
+ name: postgres
+```
+
+
+For a complete list of Kong configuration options, see https://getkong.org/docs/latest/configuration/.
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
```console
$ helm install stable/kong --name my-release \
- --set=image.tag=1.0.0,env.database=cassandra,cassandra.enabled=true
+ --set=image.tag=1.1,env.database=cassandra,cassandra.enabled=true
```
Alternatively, a YAML file that specifies the values for the above parameters
@@ -150,9 +187,19 @@ To deploy the ingress controller together with
kong run the following command:
```bash
+# without a database
+helm install stable/kong --set ingressController.enabled=true \
+ --set postgresql.enabled=false --set env.database=off
+# with a database
helm install stable/kong --set ingressController.enabled=true
```
+If you would like to use a static IP:
+
+```shell
+helm install stable/kong --set ingressController.enabled=true --set proxy.loadBalancerIP=[Your IP goes there] --set proxy.type=LoadBalancer --name kong --namespace kong
+```
+
**Note**: Kong Ingress controller doesn't support custom SSL certificates
on Admin port. We will be removing this limitation in the future.
diff --git a/stable/kong/ci/dbless.yaml b/stable/kong/ci/dbless.yaml
new file mode 100644
index 000000000000..6b96a33a9199
--- /dev/null
+++ b/stable/kong/ci/dbless.yaml
@@ -0,0 +1,7 @@
+# CI test for testing dbless deployment
+ingressController:
+ enabled: true
+env:
+ database: "off"
+postgresql:
+ enabled: false
diff --git a/stable/kong/requirements.lock b/stable/kong/requirements.lock
index c3070e187f2e..f2a7ad80af1b 100644
--- a/stable/kong/requirements.lock
+++ b/stable/kong/requirements.lock
@@ -1,9 +1,9 @@
dependencies:
- name: postgresql
repository: https://kubernetes-charts.storage.googleapis.com/
- version: 3.9.1
+ version: 3.9.5
- name: cassandra
repository: https://kubernetes-charts-incubator.storage.googleapis.com/
- version: 0.10.2
-digest: sha256:ed6abe23258b9eb42a55b21ff68d5df7367e04d0c1de588442f6a91a067616a9
-generated: 2019-01-09T12:04:14.047647677-08:00
+ version: 0.10.5
+digest: sha256:797c1cf8eb6504c93015f1b8f16b263dd94dca9a73cb647bdf3c0af153fba62c
+generated: 2019-05-09T13:44:59.216637988-07:00
diff --git a/stable/kong/requirements.yaml b/stable/kong/requirements.yaml
index ab2c7755b272..9d2cccb67909 100644
--- a/stable/kong/requirements.yaml
+++ b/stable/kong/requirements.yaml
@@ -4,6 +4,6 @@ dependencies:
repository: https://kubernetes-charts.storage.googleapis.com/
condition: postgresql.enabled
- name: cassandra
- version: ~0.10.2
+ version: ~0.10.5
repository: https://kubernetes-charts-incubator.storage.googleapis.com/
condition: cassandra.enabled
diff --git a/stable/kong/templates/_helpers.tpl b/stable/kong/templates/_helpers.tpl
index 9b1350917b39..c6ff9faa585d 100644
--- a/stable/kong/templates/_helpers.tpl
+++ b/stable/kong/templates/_helpers.tpl
@@ -62,3 +62,92 @@ Create the ingress servicePort value string
{{ .Values.proxy.http.servicePort }}
{{- end -}}
{{- end -}}
+
+
+{{- define "kong.env" -}}
+{{- range $key, $val := .Values.env }}
+- name: KONG_{{ $key | upper }}
+{{- $valueType := printf "%T" $val -}}
+{{ if eq $valueType "map[string]interface {}" }}
+{{ toYaml $val | indent 2 -}}
+{{- else }}
+ value: {{ $val | quote -}}
+{{- end -}}
+{{- end -}}
+{{- end -}}
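
For illustration, given the README's secret-backed `pg_password` mapping, the `kong.env` helper above would render env entries roughly like this (a sketch, not verified template output):

```yaml
- name: KONG_PG_USER
  value: "kong"
- name: KONG_PG_PASSWORD
  valueFrom:
    secretKeyRef:
      key: kong
      name: postgres
```

Scalar values take the `value:` branch; mappings are emitted verbatim via `toYaml`, which is what enables `valueFrom` references.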
+
+{{- define "kong.wait-for-db" -}}
+- name: wait-for-db
+ image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
+ imagePullPolicy: {{ .Values.image.pullPolicy }}
+ env:
+ {{- if .Values.postgresql.enabled }}
+ - name: KONG_PG_HOST
+ value: {{ template "kong.postgresql.fullname" . }}
+ - name: KONG_PG_PORT
+ value: "{{ .Values.postgresql.service.port }}"
+ - name: KONG_PG_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ name: {{ template "kong.postgresql.fullname" . }}
+ key: postgresql-password
+ {{- end }}
+ {{- if .Values.cassandra.enabled }}
+ - name: KONG_CASSANDRA_CONTACT_POINTS
+ value: {{ template "kong.cassandra.fullname" . }}
+ {{- end }}
+ {{- include "kong.env" . | nindent 2 }}
+ command: [ "/bin/sh", "-c", "until kong start; do echo 'waiting for db'; sleep 1; done; kong stop" ]
+{{- end -}}
+
+{{- define "kong.controller-container" -}}
+- name: ingress-controller
+ args:
+ - /kong-ingress-controller
+ # Service from which we extract the IP address(es) to use in Ingress status
+ - --publish-service={{ .Release.Namespace }}/{{ template "kong.fullname" . }}-proxy
+ # Set the ingress class
+ - --ingress-class={{ .Values.ingressController.ingressClass }}
+ - --election-id=kong-ingress-controller-leader-{{ .Values.ingressController.ingressClass }}
+ # the kong URL points to the kong admin api server
+ {{- if .Values.admin.useTLS }}
+ - --kong-url=https://localhost:{{ .Values.admin.containerPort }}
+ - --admin-tls-skip-verify # TODO make this configurable
+ {{- else }}
+ - --kong-url=http://localhost:{{ .Values.admin.containerPort }}
+ {{- end }}
+ env:
+ - name: POD_NAME
+ valueFrom:
+ fieldRef:
+ apiVersion: v1
+ fieldPath: metadata.name
+ - name: POD_NAMESPACE
+ valueFrom:
+ fieldRef:
+ apiVersion: v1
+ fieldPath: metadata.namespace
+ image: "{{ .Values.ingressController.image.repository }}:{{ .Values.ingressController.image.tag }}"
+ imagePullPolicy: {{ .Values.image.pullPolicy }}
+ livenessProbe:
+ failureThreshold: 3
+ httpGet:
+ path: /healthz
+ port: 10254
+ scheme: HTTP
+ initialDelaySeconds: 30
+ periodSeconds: 10
+ successThreshold: 1
+ timeoutSeconds: 1
+ readinessProbe:
+ failureThreshold: 3
+ httpGet:
+ path: /healthz
+ port: 10254
+ scheme: HTTP
+ periodSeconds: 10
+ successThreshold: 1
+ timeoutSeconds: 1
+ resources:
+{{ toYaml .Values.ingressController.resources | indent 10 }}
+{{- end -}}
diff --git a/stable/kong/templates/controller-deployment.yaml b/stable/kong/templates/controller-deployment.yaml
index 9329b8f807f1..fcf107afe5c6 100644
--- a/stable/kong/templates/controller-deployment.yaml
+++ b/stable/kong/templates/controller-deployment.yaml
@@ -1,4 +1,4 @@
-{{- if .Values.ingressController.enabled -}}
+{{- if (and (.Values.ingressController.enabled) (not (eq .Values.env.database "off"))) }}
apiVersion: apps/v1beta2
kind: Deployment
metadata:
@@ -35,37 +35,7 @@ spec:
{{- end }}
{{- end }}
initContainers:
- - name: wait-for-db
- image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
- env:
- - name: KONG_PROXY_ACCESS_LOG
- value: "/dev/stdout"
- - name: KONG_ADMIN_ACCESS_LOG
- value: "/dev/stdout"
- - name: KONG_PROXY_ERROR_LOG
- value: "/dev/stderr"
- - name: KONG_ADMIN_ERROR_LOG
- value: "/dev/stderr"
- {{- range $key, $val := .Values.env }}
- - name: KONG_{{ $key | upper}}
- value: {{ $val | quote }}
- {{- end}}
- {{- if .Values.postgresql.enabled }}
- - name: KONG_PG_HOST
- value: {{ template "kong.postgresql.fullname" . }}
- - name: KONG_PG_PORT
- value: "{{ .Values.postgresql.service.port }}"
- - name: KONG_PG_PASSWORD
- valueFrom:
- secretKeyRef:
- name: {{ template "kong.postgresql.fullname" . }}
- key: postgresql-password
- {{- end }}
- {{- if .Values.cassandra.enabled }}
- - name: KONG_CASSANDRA_CONTACT_POINTS
- value: {{ template "kong.cassandra.fullname" . }}
- {{- end }}
- command: [ "/bin/sh", "-c", "until kong start; do echo 'waiting for db'; sleep 1; done; kong stop" ]
+ {{- include "kong.wait-for-db" . | nindent 6 }}
containers:
- name: admin-api
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
@@ -73,14 +43,7 @@ spec:
env:
- name: KONG_PROXY_LISTEN
value: 'off'
- - name: KONG_ADMIN_ACCESS_LOG
- value: "/dev/stdout"
- - name: KONG_ADMIN_ERROR_LOG
- value: "/dev/stderr"
- {{- range $key, $val := .Values.env }}
- - name: KONG_{{ $key | upper}}
- value: {{ $val | quote }}
- {{- end}}
+ {{- include "kong.env" . | indent 8 }}
{{- if .Values.admin.useTLS }}
- name: KONG_ADMIN_LISTEN
value: "0.0.0.0:{{ .Values.admin.containerPort }} ssl"
@@ -111,55 +74,5 @@ spec:
{{ toYaml .Values.livenessProbe | indent 10 }}
resources:
{{ toYaml .Values.resources | indent 10 }}
- - name: ingress-controller
- args:
- - /kong-ingress-controller
- # the default service is the kong proxy service
- - --default-backend-service={{ .Release.Namespace }}/{{ template "kong.fullname" . }}-proxy
- # Service from were we extract the IP address/es to use in Ingress status
- - --publish-service={{ .Release.Namespace }}/{{ template "kong.fullname" . }}-proxy
- # Set the ingress class
- - --ingress-class={{ .Values.ingressController.ingressClass }}
- - --election-id=kong-ingress-controller-leader-{{ .Values.ingressController.ingressClass }}
- # the kong URL points to the kong admin api server
- {{- if .Values.admin.useTLS }}
- - --kong-url=https://localhost:{{ .Values.admin.containerPort }}
- - --admin-tls-skip-verify # TODO make this configurable
- {{- else }}
- - --kong-url=http://localhost:{{ .Values.admin.containerPort }}
- {{- end }}
- env:
- - name: POD_NAME
- valueFrom:
- fieldRef:
- apiVersion: v1
- fieldPath: metadata.name
- - name: POD_NAMESPACE
- valueFrom:
- fieldRef:
- apiVersion: v1
- fieldPath: metadata.namespace
- image: "{{ .Values.ingressController.image.repository }}:{{ .Values.ingressController.image.tag }}"
- imagePullPolicy: {{ .Values.image.pullPolicy }}
- livenessProbe:
- failureThreshold: 3
- httpGet:
- path: /healthz
- port: 10254
- scheme: HTTP
- initialDelaySeconds: 30
- periodSeconds: 10
- successThreshold: 1
- timeoutSeconds: 1
- readinessProbe:
- failureThreshold: 3
- httpGet:
- path: /healthz
- port: 10254
- scheme: HTTP
- periodSeconds: 10
- successThreshold: 1
- timeoutSeconds: 1
- resources:
-{{ toYaml .Values.ingressController.resources | indent 10 }}
+ {{- include "kong.controller-container" . | nindent 6 }}
{{- end -}}
diff --git a/stable/kong/templates/deployment.yaml b/stable/kong/templates/deployment.yaml
index d7191ac1b044..ebed696cab07 100644
--- a/stable/kong/templates/deployment.yaml
+++ b/stable/kong/templates/deployment.yaml
@@ -26,49 +26,28 @@ spec:
release: {{ .Release.Name }}
component: app
spec:
+ {{- if (and .Values.ingressController.enabled (eq .Values.env.database "off")) }}
+ serviceAccountName: {{ template "kong.serviceAccountName" . }}
+ {{ end }}
{{- if .Values.image.pullSecrets }}
imagePullSecrets:
{{- range .Values.image.pullSecrets }}
- name: {{ . }}
{{- end }}
{{- end }}
+ {{- if not (eq .Values.env.database "off") }}
initContainers:
- - name: wait-for-db
- image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
- env:
- - name: KONG_PROXY_ACCESS_LOG
- value: "/dev/stdout"
- - name: KONG_ADMIN_ACCESS_LOG
- value: "/dev/stdout"
- - name: KONG_PROXY_ERROR_LOG
- value: "/dev/stderr"
- - name: KONG_ADMIN_ERROR_LOG
- value: "/dev/stderr"
- {{- if .Values.postgresql.enabled }}
- - name: KONG_PG_HOST
- value: {{ template "kong.postgresql.fullname" . }}
- - name: KONG_PG_PORT
- value: "{{ .Values.postgresql.service.port }}"
- - name: KONG_PG_PASSWORD
- valueFrom:
- secretKeyRef:
- name: {{ template "kong.postgresql.fullname" . }}
- key: postgresql-password
- {{- end }}
- {{- if .Values.cassandra.enabled }}
- - name: KONG_CASSANDRA_CONTACT_POINTS
- value: {{ template "kong.cassandra.fullname" . }}
- {{- end }}
- {{- range $key, $val := .Values.env }}
- - name: KONG_{{ $key | upper}}
- value: {{ $val | quote }}
- {{- end}}
- command: [ "/bin/sh", "-c", "until kong start; do echo 'waiting for db'; sleep 1; done; kong stop" ]
+ {{- include "kong.wait-for-db" . | nindent 6 }}
+ {{ end }}
containers:
+ {{- if (and .Values.ingressController.enabled (eq .Values.env.database "off")) }}
+ {{- include "kong.controller-container" . | nindent 6 }}
+ {{ end }}
- name: {{ template "kong.name" . }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
+ {{- if not .Values.env.admin_listen }}
{{- if .Values.admin.useTLS }}
- name: KONG_ADMIN_LISTEN
value: "0.0.0.0:{{ .Values.admin.containerPort }} ssl"
@@ -76,22 +55,14 @@ spec:
- name: KONG_ADMIN_LISTEN
value: 0.0.0.0:{{ .Values.admin.containerPort }}
{{- end }}
+ {{- end }}
+ {{- if not .Values.env.proxy_listen }}
- name: KONG_PROXY_LISTEN
value: {{ template "kong.kongProxyListenValue" . }}
+ {{- end }}
- name: KONG_NGINX_DAEMON
value: "off"
- - name: KONG_PROXY_ACCESS_LOG
- value: "/dev/stdout"
- - name: KONG_ADMIN_ACCESS_LOG
- value: "/dev/stdout"
- - name: KONG_PROXY_ERROR_LOG
- value: "/dev/stderr"
- - name: KONG_ADMIN_ERROR_LOG
- value: "/dev/stderr"
- {{- range $key, $val := .Values.env }}
- - name: KONG_{{ $key | upper}}
- value: {{ $val | quote }}
- {{- end}}
+ {{- include "kong.env" . | indent 8 }}
{{- if .Values.postgresql.enabled }}
- name: KONG_PG_HOST
value: {{ template "kong.postgresql.fullname" . }}
@@ -110,15 +81,24 @@ spec:
ports:
- name: admin
containerPort: {{ .Values.admin.containerPort }}
+ {{- if .Values.admin.hostPort }}
+ hostPort: {{ .Values.admin.hostPort }}
+ {{- end}}
protocol: TCP
{{- if .Values.proxy.http.enabled }}
- name: proxy
containerPort: {{ .Values.proxy.http.containerPort }}
+ {{- if .Values.proxy.http.hostPort }}
+ hostPort: {{ .Values.proxy.http.hostPort }}
+ {{- end}}
protocol: TCP
{{- end }}
{{- if .Values.proxy.tls.enabled }}
- name: proxy-tls
containerPort: {{ .Values.proxy.tls.containerPort }}
+ {{- if .Values.proxy.tls.hostPort }}
+ hostPort: {{ .Values.proxy.tls.hostPort }}
+ {{- end}}
protocol: TCP
{{- end }}
readinessProbe:
diff --git a/stable/kong/templates/migrations-post-upgrade.yaml b/stable/kong/templates/migrations-post-upgrade.yaml
index 0f642708dada..f1937f46c63b 100644
--- a/stable/kong/templates/migrations-post-upgrade.yaml
+++ b/stable/kong/templates/migrations-post-upgrade.yaml
@@ -1,4 +1,4 @@
-{{- if .Values.runMigrations }}
+{{- if (and (.Values.runMigrations) (not (eq .Values.env.database "off"))) }}
# Why is this Job duplicated and not using only helm hooks?
# See: https://github.com/helm/charts/pull/7362
apiVersion: batch/v1
@@ -32,7 +32,7 @@ spec:
{{- if .Values.postgresql.enabled }}
initContainers:
- name: wait-for-postgres
- image: busybox
+ image: "{{ .Values.waitImage.repository }}:{{ .Values.waitImage.tag }}"
env:
- name: KONG_PG_HOST
value: {{ template "kong.postgresql.fullname" . }}
@@ -52,10 +52,7 @@ spec:
env:
- name: KONG_NGINX_DAEMON
value: "off"
- {{- range $key, $val := .Values.env }}
- - name: KONG_{{ $key | upper}}
- value: {{ $val | quote }}
- {{- end}}
+ {{- include "kong.env" . | indent 8 }}
{{- if .Values.postgresql.enabled }}
- name: KONG_PG_HOST
value: {{ template "kong.postgresql.fullname" . }}
diff --git a/stable/kong/templates/migrations-pre-upgrade.yaml b/stable/kong/templates/migrations-pre-upgrade.yaml
index 1628789781c8..d92853ba3721 100644
--- a/stable/kong/templates/migrations-pre-upgrade.yaml
+++ b/stable/kong/templates/migrations-pre-upgrade.yaml
@@ -1,4 +1,4 @@
-{{- if .Values.runMigrations }}
+{{- if (and (.Values.runMigrations) (not (eq .Values.env.database "off"))) }}
# Why is this Job duplicated and not using only helm hooks?
# See: https://github.com/helm/charts/pull/7362
apiVersion: batch/v1
@@ -32,7 +32,7 @@ spec:
{{- if .Values.postgresql.enabled }}
initContainers:
- name: wait-for-postgres
- image: busybox
+ image: "{{ .Values.waitImage.repository }}:{{ .Values.waitImage.tag }}"
env:
- name: KONG_PG_HOST
value: {{ template "kong.postgresql.fullname" . }}
@@ -52,10 +52,7 @@ spec:
env:
- name: KONG_NGINX_DAEMON
value: "off"
- {{- range $key, $val := .Values.env }}
- - name: KONG_{{ $key | upper}}
- value: {{ $val | quote }}
- {{- end}}
+ {{- include "kong.env" . | indent 8 }}
{{- if .Values.postgresql.enabled }}
- name: KONG_PG_HOST
value: {{ template "kong.postgresql.fullname" . }}
diff --git a/stable/kong/templates/migrations.yaml b/stable/kong/templates/migrations.yaml
index b9b173a2f75f..19b4570f61a7 100644
--- a/stable/kong/templates/migrations.yaml
+++ b/stable/kong/templates/migrations.yaml
@@ -1,4 +1,4 @@
-{{- if .Values.runMigrations }}
+{{- if (and (.Values.runMigrations) (not (eq .Values.env.database "off"))) }}
apiVersion: batch/v1
kind: Job
metadata:
@@ -47,10 +47,7 @@ spec:
env:
- name: KONG_NGINX_DAEMON
value: "off"
- {{- range $key, $val := .Values.env }}
- - name: KONG_{{ $key | upper}}
- value: {{ $val | quote }}
- {{- end}}
+ {{- include "kong.env" . | indent 8 }}
{{- if .Values.postgresql.enabled }}
- name: KONG_PG_HOST
value: {{ template "kong.postgresql.fullname" . }}
diff --git a/stable/kong/templates/service-kong-proxy.yaml b/stable/kong/templates/service-kong-proxy.yaml
index f48faff3bec4..ff3454a717ae 100644
--- a/stable/kong/templates/service-kong-proxy.yaml
+++ b/stable/kong/templates/service-kong-proxy.yaml
@@ -19,11 +19,15 @@ spec:
{{- end }}
{{- if .Values.proxy.loadBalancerSourceRanges }}
loadBalancerSourceRanges:
- {{- range $cidr := .Values.admin.loadBalancerSourceRanges }}
+ {{- range $cidr := .Values.proxy.loadBalancerSourceRanges }}
- {{ $cidr }}
{{- end }}
{{- end }}
{{- end }}
+ {{- if .Values.proxy.externalIPs }}
+ externalIPs:
+ {{- range $ip := .Values.proxy.externalIPs }}
+ - {{ $ip }}
+ {{- end }}
+ {{- end }}
ports:
{{- if .Values.proxy.http.enabled }}
- name: kong-proxy
@@ -43,7 +47,9 @@ spec:
{{- end }}
protocol: TCP
{{- end }}
-
+ {{- if .Values.proxy.externalTrafficPolicy }}
+ externalTrafficPolicy: {{ .Values.proxy.externalTrafficPolicy }}
+ {{- end }}
selector:
app: {{ template "kong.name" . }}
diff --git a/stable/kong/values.yaml b/stable/kong/values.yaml
index 5011dd069abb..f7d5dbbec6b3 100644
--- a/stable/kong/values.yaml
+++ b/stable/kong/values.yaml
@@ -3,7 +3,7 @@
image:
repository: kong
- tag: 1.0.2
+ tag: 1.1
pullPolicy: IfNotPresent
## Optionally specify an array of imagePullSecrets.
## Secrets must be manually created in the namespace.
@@ -81,6 +81,8 @@ proxy:
# Ingress path.
path: /
+ externalIPs: []
+
# Set runMigrations to run Kong migrations
runMigrations: true
@@ -88,6 +90,10 @@ runMigrations: true
# Kong configurations guide https://getkong.org/docs/latest/configuration/
env:
database: postgres
+ proxy_access_log: /dev/stdout
+ admin_access_log: /dev/stdout
+ proxy_error_log: /dev/stderr
+ admin_error_log: /dev/stderr
# If you want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
@@ -170,7 +176,7 @@ ingressController:
enabled: false
image:
repository: kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller
- tag: 0.3.0
+ tag: 0.4.0
replicaCount: 1
livenessProbe:
failureThreshold: 3
diff --git a/stable/kube-hunter/Chart.yaml b/stable/kube-hunter/Chart.yaml
index 5e2498a052dd..eeb70b7eb214 100644
--- a/stable/kube-hunter/Chart.yaml
+++ b/stable/kube-hunter/Chart.yaml
@@ -1,8 +1,8 @@
apiVersion: v1
-appVersion: "34"
+appVersion: "195"
description: A Helm chart for Kube-hunter
name: kube-hunter
-version: 1.0.1
+version: 1.0.2
home: https://github.com/aquasecurity/kube-hunter
icon: https://raw.githubusercontent.com/aquasecurity/kube-hunter/master/kube-hunter.png
keywords:
diff --git a/stable/kube-hunter/README.md b/stable/kube-hunter/README.md
index 3d4a3573ff75..8d005ccc4744 100644
--- a/stable/kube-hunter/README.md
+++ b/stable/kube-hunter/README.md
@@ -26,7 +26,7 @@ their default values.
| `customArguments` | Additional custom arguments to give to kube-hunter | `[]` |
| `image.pullPolicy` | Container pull policy | `IfNotPresent` |
| `image.repository` | Container image to use | `aquasec/kube-hunter` |
-| `image.tag` | Container image tag to deploy | `34` |
+| `image.tag` | Container image tag to deploy | `195` |
| `cronjob.schedule` | Schedule for the CronJob | `0 1 * * *` |
| `cronjob.annotations` | Annotations to add to the cronjob | {} |
| `cronjob.concurrencyPolicy` | `Allow|Forbid|Replace` concurrent jobs | `Forbid` |
diff --git a/stable/kube-hunter/values.yaml b/stable/kube-hunter/values.yaml
index cd2261216bbc..1168a8cf2b18 100644
--- a/stable/kube-hunter/values.yaml
+++ b/stable/kube-hunter/values.yaml
@@ -17,7 +17,7 @@ pod:
image:
repository: aquasec/kube-hunter
- tag: 34
+ tag: 195
pullPolicy: IfNotPresent
resources: {}
diff --git a/stable/kube-ops-view/Chart.yaml b/stable/kube-ops-view/Chart.yaml
index d8fcc21ff44f..cb0e049e5959 100644
--- a/stable/kube-ops-view/Chart.yaml
+++ b/stable/kube-ops-view/Chart.yaml
@@ -1,6 +1,7 @@
+apiVersion: v1
name: kube-ops-view
-version: 0.7.0
-appVersion: 0.9
+version: 1.0.0
+appVersion: 0.11
description: Kubernetes Operational View - read-only system dashboard for multiple
K8s clusters
keywords:
diff --git a/stable/kube-ops-view/values.yaml b/stable/kube-ops-view/values.yaml
index 403c63630094..e8103c9340b3 100644
--- a/stable/kube-ops-view/values.yaml
+++ b/stable/kube-ops-view/values.yaml
@@ -2,7 +2,7 @@
replicaCount: 1
image:
repository: hjacobs/kube-ops-view
- tag: 0.9
+ tag: 0.11
pullPolicy: IfNotPresent
service:
# annotations:
diff --git a/stable/kube-slack/Chart.yaml b/stable/kube-slack/Chart.yaml
index febedfd28027..4967858c0e2a 100644
--- a/stable/kube-slack/Chart.yaml
+++ b/stable/kube-slack/Chart.yaml
@@ -1,6 +1,6 @@
name: kube-slack
-version: 0.4.0
-appVersion: v3.6.0
+version: 1.0.0
+appVersion: v4.1.1
apiVersion: v1
description: Chart for kube-slack, a monitoring service for Kubernetes
keywords:
diff --git a/stable/kube-slack/README.md b/stable/kube-slack/README.md
index cd0ed9d24c95..185a5e9925eb 100644
--- a/stable/kube-slack/README.md
+++ b/stable/kube-slack/README.md
@@ -24,7 +24,9 @@ $ helm delete my-release
## Configuration
-All configuration parameters are listed in [`values.yaml`](values.yaml). It is required to set the `slackUrl` value.
+All configuration parameters are listed in [`values.yaml`](values.yaml).
+
+The environment variables passed to kube-slack are set via the `envVars` parameter. At a minimum, the `envVars.SLACK_URL` value must be set.
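
As a sketch, a minimal override might look like this (the webhook URL is a hypothetical placeholder):

```yaml
# values.yaml (sketch) - minimal kube-slack configuration
envVars:
  SLACK_URL: "https://hooks.slack.com/services/T0000000/B0000000/XXXXXXXXXXXXXXXXXXXXXXXX"
  NOT_READY_MIN_TIME: "60000"  # ms to wait before alerting on a not-ready pod
```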
## RBAC
By default the chart will install the recommended RBAC roles and rolebindings.
diff --git a/stable/kube-slack/templates/clusterrole.yaml b/stable/kube-slack/templates/clusterrole.yaml
index 1c1f28ff4a43..e5184427497b 100644
--- a/stable/kube-slack/templates/clusterrole.yaml
+++ b/stable/kube-slack/templates/clusterrole.yaml
@@ -16,4 +16,5 @@ rules:
verbs:
- get
- list
+ - watch
{{- end -}}
diff --git a/stable/kube-slack/templates/configmap.yaml b/stable/kube-slack/templates/configmap.yaml
new file mode 100644
index 000000000000..10b4af107664
--- /dev/null
+++ b/stable/kube-slack/templates/configmap.yaml
@@ -0,0 +1,11 @@
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: {{ template "kube-slack.fullname" . }}
+ labels:
+ app: {{ template "kube-slack.name" . }}
+ chart: {{ template "kube-slack.chart" . }}
+ release: {{ .Release.Name }}
+ heritage: {{ .Release.Service }}
+data:
+{{ toYaml .Values.envVars | indent 2 }}
diff --git a/stable/kube-slack/templates/deployment.yaml b/stable/kube-slack/templates/deployment.yaml
index 08a6c5db54a2..0a82e4c95ae7 100644
--- a/stable/kube-slack/templates/deployment.yaml
+++ b/stable/kube-slack/templates/deployment.yaml
@@ -15,19 +15,22 @@ spec:
       release: {{ .Release.Name }}
   template:
     metadata:
+      annotations:
+        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
+      {{- if .Values.annotations }}
+{{ toYaml .Values.annotations | indent 8 }}
+      {{- end }}
       labels:
         app: {{ template "kube-slack.name" . }}
         release: {{ .Release.Name }}
spec:
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
- env:
- - name: SLACK_URL
- value: {{ .Values.slackUrl }}
- - name: NOT_READY_MIN_TIME
- value: {{ .Values.notReadyMinTime | quote }}
+ envFrom:
+ - configMapRef:
+ name: {{ template "kube-slack.fullname" . }}
resources:
{{ toYaml .Values.resources | indent 12 }}
{{- if .Values.rbac.create }}
diff --git a/stable/kube-slack/values.yaml b/stable/kube-slack/values.yaml
index e1ca78dcb7d1..e1d1fc5493f4 100644
--- a/stable/kube-slack/values.yaml
+++ b/stable/kube-slack/values.yaml
@@ -1,12 +1,33 @@
-## URL of your Incoming Webhook in Slack:
-slackUrl: ""
-## Time to wait after pod becomes not ready before notifying
-notReadyMinTime: 60000 # in ms
+## Environment values for the deployment
+envVars:
+ ## URL of your Incoming Webhook in Slack
+ ## e.g. https://hooks.slack.com/services/Txxxxxxxx/Bxxxxxxxx/XXXXXXXXXXXXXXXXXXXXXXXX
+ SLACK_URL: ""
+  ## Override the channel notifications are sent to
+ SLACK_CHANNEL: ""
+ ## URL of HTTP proxy used to connect to Slack
+ SLACK_PROXY: ""
+ ## Time to wait after pod becomes not ready before notifying
+ NOT_READY_MIN_TIME: "60000" # in ms
+  ## Set to false to disable alerts on pod recovery
+ RECOVERY_ALERT: "true"
+  ## How often to update, in milliseconds
+ TICK_RATE: "15000"
+  ## Repeat the notification after this many milliseconds have passed since the status returned to normal
+ FLOOD_EXPIRE: "60000"
+  ## Enable/disable metric alerting on CPU
+ METRICS_CPU: "true"
+ ## Enable/disable metric alerting on memory
+ METRICS_MEMORY: "true"
+ ## Set percentage threshold on metric alerts
+ METRICS_PERCENT: "80"
+  ## If no metrics limit is defined, alert when pod utilization exceeds the resource request amount
+ METRICS_REQUESTS: "false"
## Configuration for the deployment:
image:
repository: willwill/kube-slack
- tag: v3.6.0
+ tag: v4.1.1
pullPolicy: IfNotPresent
resources: {}
@@ -33,3 +54,7 @@ nodeSelector: {}
tolerations: []
affinity: {}
+
+annotations: {}
+  # Specify annotations to add to kube-slack pods
+ # fluentbit.io/exclude: true
diff --git a/stable/kube-state-metrics/Chart.yaml b/stable/kube-state-metrics/Chart.yaml
index e64bcad98e41..79e8c315f0e1 100644
--- a/stable/kube-state-metrics/Chart.yaml
+++ b/stable/kube-state-metrics/Chart.yaml
@@ -5,11 +5,13 @@ keywords:
- metric
- monitoring
- prometheus
-version: 0.13.0
-appVersion: 1.4.0
+version: 1.6.3
+appVersion: 1.6.0
home: https://github.com/kubernetes/kube-state-metrics/
sources:
- https://github.com/kubernetes/kube-state-metrics/
maintainers:
- name: fiunchinho
email: jose@armesto.net
+- name: tariq1890
+ email: tariq.ibrahim@mulesoft.com
diff --git a/stable/kube-state-metrics/OWNERS b/stable/kube-state-metrics/OWNERS
new file mode 100644
index 000000000000..2a78bcc6c316
--- /dev/null
+++ b/stable/kube-state-metrics/OWNERS
@@ -0,0 +1,6 @@
+approvers:
+- fiunchinho
+- tariq1890
+reviewers:
+- fiunchinho
+- tariq1890
diff --git a/stable/kube-state-metrics/README.md b/stable/kube-state-metrics/README.md
index d751dc46c1ea..6816ef4a70cc 100644
--- a/stable/kube-state-metrics/README.md
+++ b/stable/kube-state-metrics/README.md
@@ -12,41 +12,52 @@ $ helm install stable/kube-state-metrics
## Configuration
-| Parameter | Description | Default |
-|---------------------------------------|---------------------------------------------------------|---------------------------------------------|
-| `image.repository` | The image repository to pull from | quay.io/coreos/kube-state-metrics |
-| `image.tag` | The image tag to pull from | `v1.4.0` |
-| `image.pullPolicy` | Image pull policy | IfNotPresent |
-| `service.port` | The port of the container | 8080 |
-| `prometheusScrape` | Whether or not enable prom scrape | true |
-| `rbac.create` | If true, create & use RBAC resources | true |
-| `podSecurityPolicy.enabled` | If true, create & use PodSecurityPolicy resources | false |
-| `podSecurityPolicy.annotations` | Specify pod annotations in the pod security policy | {} |
-| `rbac.serviceAccountName` | ServiceAccount to be used (ignored if rbac.create=true) | default |
-| `securityContext.enabled` | Enable security context | `true` |
-| `securityContext.fsGroup` | Group ID for the container | `65534` |
-| `securityContext.runAsUser` | User ID for the container | `65534` |
-| `priorityClassName` | Name of Priority Class to assign pods | `nil` |
-| `nodeSelector` | Node labels for pod assignment | {} |
-| `tolerations` | Tolerations for pod assignment | [] |
-| `podAnnotations` | Annotations to be added to the pod | {} |
-| `resources` | kube-state-metrics resource requests and limits | {} |
-| `collectors.configmaps` | Enable the configmaps collector. | true |
-| `collectors.cronjobs` | Enable the cronjobs collector. | true |
-| `collectors.daemonsets` | Enable the daemonsets collector. | true |
-| `collectors.deployments` | Enable the deployments collector. | true |
-| `collectors.endpoints` | Enable the endpoints collector. | true |
-| `collectors.horizontalpodautoscalers` | Enable the horizontalpodautoscalers collector. | true |
-| `collectors.jobs` | Enable the jobs collector. | true |
-| `collectors.limitranges` | Enable the limitranges collector. | true |
-| `collectors.namespaces` | Enable the namespaces collector. | true |
-| `collectors.nodes` | Enable the nodes collector. | true |
-| `collectors.persistentvolumeclaims` | Enable the persistentvolumeclaims collector. | true |
-| `collectors.persistentvolumes` | Enable the persistentvolumes collector. | true |
-| `collectors.pods` | Enable the pods collector. | true |
-| `collectors.replicasets` | Enable the replicasets collector. | true |
-| `collectors.replicationcontrollers` | Enable the replicationcontrollers collector. | true |
-| `collectors.resourcequotas` | Enable the resourcequotas collector. | true |
-| `collectors.secrets` | Enable the secrets collector. | true |
-| `collectors.services` | Enable the services collector. | true |
-| `collectors.statefulsets` | Enable the statefulsets collector. | true |
+| Parameter | Description | Default |
+|:----------------------------------------|:--------------------------------------------------------------------------------------|:-------------------------------------------|
+| `image.repository` | The image repository to pull from | quay.io/coreos/kube-state-metrics |
+| `image.tag` | The image tag to pull from | `v1.6.0` |
+| `image.pullPolicy` | Image pull policy | IfNotPresent |
+| `replicas` | Number of replicas | 1 |
+| `service.port` | The port of the container | 8080 |
+| `hostNetwork` | Whether or not to use the host network | false |
+| `prometheusScrape`                      | Whether or not to enable Prometheus scraping                                           | true                                       |
+| `rbac.create` | If true, create & use RBAC resources | true |
+| `serviceAccount.create` | If true, create & use serviceAccount | true |
+| `serviceAccount.name`                   | If not set and create is true, a name is generated using the fullname template         |                                            |
+| `serviceAccount.imagePullSecrets` | Specify image pull secrets field | `[]` |
+| `podSecurityPolicy.enabled` | If true, create & use PodSecurityPolicy resources | false |
+| `podSecurityPolicy.annotations` | Specify pod annotations in the pod security policy | {} |
+| `securityContext.enabled` | Enable security context | `true` |
+| `securityContext.fsGroup` | Group ID for the container | `65534` |
+| `securityContext.runAsUser` | User ID for the container | `65534` |
+| `priorityClassName` | Name of Priority Class to assign pods | `nil` |
+| `nodeSelector` | Node labels for pod assignment | {} |
+| `affinity` | Affinity settings for pod assignment | {} |
+| `tolerations` | Tolerations for pod assignment | [] |
+| `podAnnotations` | Annotations to be added to the pod | {} |
+| `resources` | kube-state-metrics resource requests and limits | {} |
+| `collectors.certificatesigningrequests` | Enable the certificatesigningrequests collector. | true |
+| `collectors.configmaps` | Enable the configmaps collector. | true |
+| `collectors.cronjobs` | Enable the cronjobs collector. | true |
+| `collectors.daemonsets` | Enable the daemonsets collector. | true |
+| `collectors.deployments` | Enable the deployments collector. | true |
+| `collectors.endpoints` | Enable the endpoints collector. | true |
+| `collectors.horizontalpodautoscalers` | Enable the horizontalpodautoscalers collector. | true |
+| `collectors.ingresses` | Enable the ingresses collector. | true |
+| `collectors.jobs` | Enable the jobs collector. | true |
+| `collectors.limitranges` | Enable the limitranges collector. | true |
+| `collectors.namespaces` | Enable the namespaces collector. | true |
+| `collectors.nodes` | Enable the nodes collector. | true |
+| `collectors.persistentvolumeclaims` | Enable the persistentvolumeclaims collector. | true |
+| `collectors.persistentvolumes` | Enable the persistentvolumes collector. | true |
+| `collectors.poddisruptionbudgets` | Enable the poddisruptionbudgets collector. | true |
+| `collectors.pods` | Enable the pods collector. | true |
+| `collectors.replicasets` | Enable the replicasets collector. | true |
+| `collectors.replicationcontrollers` | Enable the replicationcontrollers collector. | true |
+| `collectors.resourcequotas` | Enable the resourcequotas collector. | true |
+| `collectors.secrets` | Enable the secrets collector. | true |
+| `collectors.services` | Enable the services collector. | true |
+| `collectors.statefulsets` | Enable the statefulsets collector. | true |
+| `prometheus.monitor.enabled`            | Set this to `true` to create a ServiceMonitor for the Prometheus Operator              | `false`                                    |
+| `prometheus.monitor.additionalLabels`   | Additional labels so that the ServiceMonitor will be discovered by Prometheus          | `{}`                                       |
+| `prometheus.monitor.namespace`          | Namespace where the ServiceMonitor resource should be created                          | `the same namespace as kube-state-metrics` |
diff --git a/stable/kube-state-metrics/templates/NOTES.txt b/stable/kube-state-metrics/templates/NOTES.txt
index 8e8d9fe7e443..d804011e7933 100644
--- a/stable/kube-state-metrics/templates/NOTES.txt
+++ b/stable/kube-state-metrics/templates/NOTES.txt
@@ -1,6 +1,6 @@
kube-state-metrics is a simple service that listens to the Kubernetes API server and generates metrics about the state of the objects.
The exposed metrics can be found here:
-https://github.com/kubernetes/kube-state-metrics/tree/master/Documentation#documentation.
+https://github.com/kubernetes/kube-state-metrics/blob/master/docs/README.md#exposed-metrics
The metrics are exported on the HTTP endpoint /metrics on the listening port.
In your case, {{ template "kube-state-metrics.fullname" . }}.{{ .Release.Namespace }}.svc.cluster.local:{{ .Values.service.port }}/metrics
diff --git a/stable/kube-state-metrics/templates/_helpers.tpl b/stable/kube-state-metrics/templates/_helpers.tpl
index 787d0c283ba0..e59ada57caef 100644
--- a/stable/kube-state-metrics/templates/_helpers.tpl
+++ b/stable/kube-state-metrics/templates/_helpers.tpl
@@ -23,3 +23,14 @@ If release name contains chart name it will be used as a full name.
{{- end -}}
{{- end -}}
{{- end -}}
+
+{{/*
+Create the name of the service account to use
+*/}}
+{{- define "kube-state-metrics.serviceAccountName" -}}
+{{- if .Values.serviceAccount.create -}}
+ {{ default (include "kube-state-metrics.fullname" .) .Values.serviceAccount.name }}
+{{- else -}}
+ {{ default "default" .Values.serviceAccount.name }}
+{{- end -}}
+{{- end -}}
diff --git a/stable/kube-state-metrics/templates/clusterrole.yaml b/stable/kube-state-metrics/templates/clusterrole.yaml
index 756a06ad30cb..2c59adf5f625 100644
--- a/stable/kube-state-metrics/templates/clusterrole.yaml
+++ b/stable/kube-state-metrics/templates/clusterrole.yaml
@@ -9,6 +9,12 @@ metadata:
release: {{ .Release.Name }}
name: {{ template "kube-state-metrics.fullname" . }}
rules:
+{{ if .Values.collectors.certificatesigningrequests }}
+- apiGroups: ["certificates.k8s.io"]
+ resources:
+ - certificatesigningrequests
+ verbs: ["list", "watch"]
+{{ end -}}
{{ if .Values.collectors.configmaps }}
- apiGroups: [""]
resources:
@@ -22,13 +28,13 @@ rules:
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.daemonsets }}
-- apiGroups: ["extensions"]
+- apiGroups: ["extensions", "apps"]
resources:
- daemonsets
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.deployments }}
-- apiGroups: ["extensions"]
+- apiGroups: ["extensions", "apps"]
resources:
- deployments
verbs: ["list", "watch"]
@@ -45,6 +51,12 @@ rules:
- horizontalpodautoscalers
verbs: ["list", "watch"]
{{ end -}}
+{{ if .Values.collectors.ingresses }}
+- apiGroups: ["extensions"]
+ resources:
+ - ingresses
+ verbs: ["list", "watch"]
+{{ end -}}
{{ if .Values.collectors.jobs }}
- apiGroups: ["batch"]
resources:
@@ -74,13 +86,19 @@ rules:
resources:
- persistentvolumeclaims
verbs: ["list", "watch"]
-{{ end }}
+{{ end -}}
{{ if .Values.collectors.persistentvolumes }}
- apiGroups: [""]
resources:
- persistentvolumes
verbs: ["list", "watch"]
-{{ end }}
+{{ end -}}
+{{ if .Values.collectors.poddisruptionbudgets }}
+- apiGroups: ["policy"]
+ resources:
+ - poddisruptionbudgets
+ verbs: ["list", "watch"]
+{{ end -}}
{{ if .Values.collectors.pods }}
- apiGroups: [""]
resources:
@@ -88,7 +106,7 @@ rules:
verbs: ["list", "watch"]
{{ end -}}
{{ if .Values.collectors.replicasets }}
-- apiGroups: ["extensions"]
+- apiGroups: ["extensions", "apps"]
resources:
- replicasets
verbs: ["list", "watch"]
diff --git a/stable/kube-state-metrics/templates/deployment.yaml b/stable/kube-state-metrics/templates/deployment.yaml
index 9fd2878e7b53..62f0b73059a4 100644
--- a/stable/kube-state-metrics/templates/deployment.yaml
+++ b/stable/kube-state-metrics/templates/deployment.yaml
@@ -1,4 +1,4 @@
-apiVersion: extensions/v1beta1
+apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "kube-state-metrics.fullname" . }}
@@ -8,7 +8,10 @@ metadata:
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
spec:
- replicas: 1
+ selector:
+ matchLabels:
+ app: {{ template "kube-state-metrics.name" . }}
+ replicas: {{ .Values.replicas }}
template:
metadata:
labels:
@@ -19,7 +22,8 @@ spec:
{{ toYaml .Values.podAnnotations | indent 8 }}
{{- end }}
spec:
- serviceAccountName: {{ if .Values.rbac.create }}{{ template "kube-state-metrics.fullname" . }}{{ else }}"{{ .Values.rbac.serviceAccountName }}"{{ end }}
+ hostNetwork: {{ .Values.hostNetwork }}
+ serviceAccountName: {{ template "kube-state-metrics.serviceAccountName" . }}
{{- if .Values.securityContext.enabled }}
securityContext:
fsGroup: {{ .Values.securityContext.fsGroup }}
@@ -31,6 +35,9 @@ spec:
containers:
- name: {{ .Chart.Name }}
args:
+{{ if .Values.collectors.certificatesigningrequests }}
+ - --collectors=certificatesigningrequests
+{{ end }}
{{ if .Values.collectors.configmaps }}
- --collectors=configmaps
{{ end }}
@@ -49,6 +56,9 @@ spec:
{{ if .Values.collectors.horizontalpodautoscalers }}
- --collectors=horizontalpodautoscalers
{{ end }}
+{{ if .Values.collectors.ingresses }}
+ - --collectors=ingresses
+{{ end }}
{{ if .Values.collectors.jobs }}
- --collectors=jobs
{{ end }}
@@ -67,6 +77,9 @@ spec:
{{ if .Values.collectors.persistentvolumes }}
- --collectors=persistentvolumes
{{ end }}
+{{ if .Values.collectors.poddisruptionbudgets }}
+ - --collectors=poddisruptionbudgets
+{{ end }}
{{ if .Values.collectors.pods }}
- --collectors=pods
{{ end }}
@@ -103,6 +116,10 @@ spec:
timeoutSeconds: 5
resources:
{{ toYaml .Values.resources | indent 12 }}
+{{- if .Values.affinity }}
+ affinity:
+{{ toYaml .Values.affinity | indent 8 }}
+{{- end }}
{{- if .Values.nodeSelector }}
nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
@@ -111,4 +128,3 @@ spec:
tolerations:
{{ toYaml .Values.tolerations | indent 8 }}
{{- end }}
-
diff --git a/stable/kube-state-metrics/templates/monitor.yaml b/stable/kube-state-metrics/templates/monitor.yaml
new file mode 100644
index 000000000000..af151f10249a
--- /dev/null
+++ b/stable/kube-state-metrics/templates/monitor.yaml
@@ -0,0 +1,21 @@
+{{- if .Values.prometheus.monitor.enabled }}
+apiVersion: monitoring.coreos.com/v1
+kind: ServiceMonitor
+metadata:
+ name: {{ template "kube-state-metrics.fullname" . }}
+ labels:
+ app: {{ template "kube-state-metrics.name" . }}
+ chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ release: "{{ .Release.Name }}"
+ heritage: "{{ .Release.Service }}"
+ {{- if .Values.prometheus.monitor.additionalLabels }}
+{{ toYaml .Values.prometheus.monitor.additionalLabels | indent 4 }}
+ {{- end }}
+spec:
+ selector:
+ matchLabels:
+ app: {{ template "kube-state-metrics.name" . }}
+ release: {{ .Release.Name }}
+ endpoints:
+ - port: http
+{{- end }}
diff --git a/stable/kube-state-metrics/templates/podsecuritypolicy.yaml b/stable/kube-state-metrics/templates/podsecuritypolicy.yaml
index d195a5fdaf31..b44ca36f6f15 100644
--- a/stable/kube-state-metrics/templates/podsecuritypolicy.yaml
+++ b/stable/kube-state-metrics/templates/podsecuritypolicy.yaml
@@ -1,5 +1,5 @@
{{- if .Values.podSecurityPolicy.enabled }}
-apiVersion: extensions/v1beta1
+apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: {{ template "kube-state-metrics.fullname" . }}
@@ -8,8 +8,8 @@ metadata:
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
- annotations:
{{- if .Values.podSecurityPolicy.annotations }}
+ annotations:
{{ toYaml .Values.podSecurityPolicy.annotations | indent 4 }}
{{- end }}
spec:
diff --git a/stable/kube-state-metrics/templates/psp-clusterrole.yaml b/stable/kube-state-metrics/templates/psp-clusterrole.yaml
index bdc774af7de6..c43f90da2c63 100644
--- a/stable/kube-state-metrics/templates/psp-clusterrole.yaml
+++ b/stable/kube-state-metrics/templates/psp-clusterrole.yaml
@@ -1,10 +1,6 @@
{{- if and .Values.podSecurityPolicy.enabled -}}
+apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
-{{- if .Capabilities.APIVersions.Has "rbac.authorization.k8s.io/v1beta1" }}
-apiVersion: rbac.authorization.k8s.io/v1beta1
-{{- else if .Capabilities.APIVersions.Has "rbac.authorization.k8s.io/v1alpha1" }}
-apiVersion: rbac.authorization.k8s.io/v1alpha1
-{{- end }}
metadata:
labels:
app: {{ template "kube-state-metrics.name" . }}
diff --git a/stable/kube-state-metrics/templates/psp-clusterrolebinding.yaml b/stable/kube-state-metrics/templates/psp-clusterrolebinding.yaml
index 611a9a246428..bfca12cab4c3 100644
--- a/stable/kube-state-metrics/templates/psp-clusterrolebinding.yaml
+++ b/stable/kube-state-metrics/templates/psp-clusterrolebinding.yaml
@@ -1,9 +1,5 @@
{{- if and .Values.podSecurityPolicy.enabled -}}
-{{- if .Capabilities.APIVersions.Has "rbac.authorization.k8s.io/v1beta1" }}
-apiVersion: rbac.authorization.k8s.io/v1beta1
-{{- else if .Capabilities.APIVersions.Has "rbac.authorization.k8s.io/v1alpha1" }}
-apiVersion: rbac.authorization.k8s.io/v1alpha1
-{{- end }}
+apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
diff --git a/stable/kube-state-metrics/templates/serviceaccount.yaml b/stable/kube-state-metrics/templates/serviceaccount.yaml
index f1e125971e8b..5459abb10a0e 100644
--- a/stable/kube-state-metrics/templates/serviceaccount.yaml
+++ b/stable/kube-state-metrics/templates/serviceaccount.yaml
@@ -1,4 +1,4 @@
-{{- if .Values.rbac.create -}}
+{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
@@ -8,4 +8,6 @@ metadata:
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
name: {{ template "kube-state-metrics.fullname" . }}
+imagePullSecrets:
+{{ toYaml .Values.serviceAccount.imagePullSecrets | indent 2 }}
{{- end -}}
diff --git a/stable/kube-state-metrics/values.yaml b/stable/kube-state-metrics/values.yaml
index 7d6448501ca0..396866539267 100644
--- a/stable/kube-state-metrics/values.yaml
+++ b/stable/kube-state-metrics/values.yaml
@@ -2,19 +2,39 @@
prometheusScrape: true
image:
repository: quay.io/coreos/kube-state-metrics
- tag: v1.4.0
+ tag: v1.6.0
pullPolicy: IfNotPresent
+
+replicas: 1
+
service:
port: 8080
# Default to clusterIP for backward compatibility
type: ClusterIP
nodePort: 0
loadBalancerIP: ""
+
+hostNetwork: false
+
rbac:
# If true, create & use RBAC resources
create: true
- # Ignored if rbac.create is true
- serviceAccountName: default
+
+serviceAccount:
+  # Specifies whether a ServiceAccount should be created (requires rbac.create to be true)
+ create: true
+ # The name of the ServiceAccount to use.
+ # If not set and create is true, a name is generated using the fullname template
+ name:
+ # Reference to one or more secrets to be used when pulling images
+ # ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
+ imagePullSecrets: []
+
+prometheus:
+ monitor:
+ enabled: false
+ additionalLabels: {}
+ namespace: ""
## Specify if a Pod Security Policy for kube-state-metrics must be created
## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/
@@ -41,6 +61,10 @@ securityContext:
## Ref: https://kubernetes.io/docs/user-guide/node-selection/
nodeSelector: {}
+## Affinity settings for pod assignment
+## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
+affinity: {}
+
## Tolerations for pod assignment
## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
tolerations: []
@@ -54,18 +78,21 @@ podAnnotations: {}
# Available collectors for kube-state-metrics. By default all available
# collectors are enabled.
collectors:
+ certificatesigningrequests: true
configmaps: true
cronjobs: true
daemonsets: true
deployments: true
endpoints: true
horizontalpodautoscalers: true
+ ingresses: true
jobs: true
limitranges: true
namespaces: true
nodes: true
persistentvolumeclaims: true
persistentvolumes: true
+ poddisruptionbudgets: true
pods: true
replicasets: true
replicationcontrollers: true
diff --git a/stable/kube2iam/Chart.yaml b/stable/kube2iam/Chart.yaml
index 4e5ce56feb53..f8ad643fe0ae 100644
--- a/stable/kube2iam/Chart.yaml
+++ b/stable/kube2iam/Chart.yaml
@@ -1,5 +1,6 @@
+apiVersion: v1
name: kube2iam
-version: 0.10.0
+version: 1.0.0
appVersion: 0.10.4
description: Provide IAM credentials to pods based on annotations.
keywords:
diff --git a/stable/kube2iam/README.md b/stable/kube2iam/README.md
index 3f57093d5cf4..07ff4e9f6ea2 100644
--- a/stable/kube2iam/README.md
+++ b/stable/kube2iam/README.md
@@ -47,12 +47,17 @@ Parameter | Description | Default
`host.ip` | IP address of host | `$(HOST_IP)`
`host.iptables` | Add iptables rule | `false`
`host.interface` | Host interface for proxying AWS metadata | `docker0`
+`host.port` | Port to listen on | `8181`
`image.repository` | Image | `jtblin/kube2iam`
`image.tag` | Image tag | `0.10.4`
`image.pullPolicy` | Image pull policy | `IfNotPresent`
`nodeSelector` | node labels for pod assignment | `{}`
`podAnnotations` | annotations to be added to pods | `{}`
`priorityClassName` | priorityClassName to be added to pods | `{}`
+`prometheus.metricsPort` | Port to expose prometheus metrics on (if unspecified, `host.port` is used) | `host.port`
+`prometheus.serviceMonitor.enabled` | If true, create a Prometheus Operator ServiceMonitor resource | `false`
+`prometheus.serviceMonitor.interval` | Interval at which the metrics endpoint is scraped | `10s`
+`prometheus.serviceMonitor.namespace` | An alternative namespace in which to install the ServiceMonitor | `""`
`rbac.create` | If true, create & use RBAC resources | `false`
`rbac.serviceAccountName` | existing ServiceAccount to use (ignored if rbac.create=true) | `default`
`resources` | pod resource requests & limits | `{}`
diff --git a/stable/kube2iam/templates/daemonset.yaml b/stable/kube2iam/templates/daemonset.yaml
index 815eb276848b..d230c6a4879e 100644
--- a/stable/kube2iam/templates/daemonset.yaml
+++ b/stable/kube2iam/templates/daemonset.yaml
@@ -43,6 +43,9 @@ spec:
- --verbose
{{- end }}
- --app-port={{ .Values.host.port }}
+ {{- if .Values.prometheus.metricsPort }}
+ - --metrics-port={{ .Values.prometheus.metricsPort }}
+ {{- end }}
env:
- name: HOST_IP
valueFrom:
@@ -73,7 +76,12 @@ spec:
value: {{ quote $value }}
{{- end }}
ports:
- - containerPort: {{ .Values.host.port }}
+ - name: http
+ containerPort: {{ .Values.host.port }}
+ {{- if .Values.prometheus.metricsPort }}
+ - name: metrics
+ containerPort: {{ .Values.prometheus.metricsPort }}
+ {{- end}}
{{- if .Values.probe }}
livenessProbe:
httpGet:
diff --git a/stable/kube2iam/templates/service.yaml b/stable/kube2iam/templates/service.yaml
new file mode 100644
index 000000000000..80727e134552
--- /dev/null
+++ b/stable/kube2iam/templates/service.yaml
@@ -0,0 +1,27 @@
+{{- if .Values.prometheus.serviceMonitor.enabled }}
+apiVersion: v1
+kind: Service
+metadata:
+ annotations:
+ name: {{ template "kube2iam.fullname" . }}-metrics
+ labels:
+ app: {{ template "kube2iam.name" . }}
+ chart: {{ .Chart.Name }}-{{ .Chart.Version }}
+ release: {{ .Release.Name }}
+ heritage: {{ .Release.Service }}
+spec:
+ selector:
+ app: {{ template "kube2iam.name" . }}
+ release: {{ .Release.Name }}
+ ports:
+ {{- if .Values.prometheus.metricsPort }}
+ - name: metrics
+ port: {{ .Values.prometheus.metricsPort }}
+ targetPort: {{ .Values.prometheus.metricsPort }}
+ {{- else }}
+ - name: http
+ port: {{ .Values.host.port }}
+ targetPort: {{ .Values.host.port }}
+ {{- end }}
+ protocol: TCP
+{{ end }}
diff --git a/stable/kube2iam/templates/servicemonitor.yaml b/stable/kube2iam/templates/servicemonitor.yaml
new file mode 100644
index 000000000000..d54c84c4dc48
--- /dev/null
+++ b/stable/kube2iam/templates/servicemonitor.yaml
@@ -0,0 +1,30 @@
+{{- if .Values.prometheus.serviceMonitor.enabled }}
+apiVersion: monitoring.coreos.com/v1
+kind: ServiceMonitor
+metadata:
+ name: {{ template "kube2iam.fullname" . }}
+ {{- if .Values.prometheus.serviceMonitor.namespace }}
+ namespace: {{ .Values.prometheus.serviceMonitor.namespace }}
+ {{- end }}
+ labels:
+ app: {{ template "kube2iam.name" . }}
+ chart: {{ .Chart.Name }}-{{ .Chart.Version }}
+ heritage: {{ .Release.Service }}
+ release: {{ .Release.Name }}
+spec:
+ selector:
+ matchLabels:
+ app: {{ template "kube2iam.name" . }}
+ release: {{ .Release.Name }}
+ namespaceSelector:
+ matchNames:
+ - {{ .Release.Namespace }}
+ endpoints:
+ {{- if .Values.prometheus.metricsPort }}
+ - port: metrics
+ {{- else }}
+ - port: http
+ {{- end }}
+ interval: {{ .Values.prometheus.serviceMonitor.interval }}
+ path: /metrics
+{{- end -}}
diff --git a/stable/kube2iam/values.yaml b/stable/kube2iam/values.yaml
index f8c24e579874..68cf059b94a7 100644
--- a/stable/kube2iam/values.yaml
+++ b/stable/kube2iam/values.yaml
@@ -14,6 +14,18 @@ host:
interface: docker0
port: 8181
+prometheus:
+ # Port to expose the /metrics endpoint on. If unset, defaults to `host.port`
+ # metricsPort: 8181
+
+ serviceMonitor:
+ # Create prometheus-operator ServiceMonitor
+ enabled: false
+ # Interval at which the metrics endpoint is scraped
+ interval: 10s
+ # Alternative namespace to install the ServiceMonitor in
+ namespace: ""
+
image:
repository: jtblin/kube2iam
tag: 0.10.4
diff --git a/stable/kuberhealthy/Chart.yaml b/stable/kuberhealthy/Chart.yaml
index 2fb9795c73f4..3d9bf39bb16a 100644
--- a/stable/kuberhealthy/Chart.yaml
+++ b/stable/kuberhealthy/Chart.yaml
@@ -1,14 +1,16 @@
apiVersion: v1
-appVersion: "0.1.1"
+appVersion: "v1.0.2"
home: https://comcast.github.io/kuberhealthy/
description: The official Helm chart for Kuberhealthy.
name: kuberhealthy
-version: 0.1.2
+version: 1.2.5
maintainers:
- name: integrii
email: eric.greer@comcast.com
- name: lolimjake
email: jacob.martin@comcast.com
+ - name: ihoegen
+ email: ianhoegen@gmail.com
keywords:
- kuberhealthy
- kubernetes
@@ -17,6 +19,7 @@ keywords:
- comcast
- health
- cluster
+- prometheus
sources:
- https://github.com/Comcast/kuberhealthy
- https://github.com/helm/charts/tree/master/stable/kuberhealthy
diff --git a/stable/kuberhealthy/OWNERS b/stable/kuberhealthy/OWNERS
index 8b1d7ca48e05..072f42ea54c5 100644
--- a/stable/kuberhealthy/OWNERS
+++ b/stable/kuberhealthy/OWNERS
@@ -1,6 +1,8 @@
approvers:
- integrii
- lolimjake
+- ihoegen
reviewers:
- integrii
- lolimjake
+- ihoegen
diff --git a/stable/kuberhealthy/README.md b/stable/kuberhealthy/README.md
index f4a48988e7c2..61d935794867 100644
--- a/stable/kuberhealthy/README.md
+++ b/stable/kuberhealthy/README.md
@@ -31,13 +31,13 @@ prometheus:
enabled: true # do we deploy a ServiceMonitor spec?
name: "prometheus" # the name of the Prometheus deployment in your environment.
enableScraping: true # add the Prometheus scrape annotation to Kuberhealthy pods
- serviceMonitor: true # use a ServiceMonitor configuration
+  serviceMonitor: false # use a ServiceMonitor configuration, for use with the Prometheus Operator
enableAlerting: true # enable default Kuberhealthy alerts configuration
app:
name: "kuberhealthy" # what to name the kuberhealthy deployment
image:
repository: quay.io/comcast/kuberhealthy
- tag: 0.1.1
+ tag: v1.0.2
resources:
requests:
cpu: 100m
@@ -54,6 +54,18 @@ deployment:
maxUnavailable: 1
imagePullPolicy: IfNotPresent
namespace: kuberhealthy
+ podAnnotations: {} # Annotations to be added to pods created by the deployment
+ command:
+ - /app/kuberhealthy
+  # use this to override the location of the test image, see: https://github.com/Comcast/kuberhealthy/blob/master/docs/FLAGS.md
+ # args:
+ # - -dsPauseContainerImageOverride
+ # - your-repo/google_containers/pause:0.8.0
+securityContext: # default container security context
+ runAsNonRoot: true
+ runAsUser: 999
+ fsGroup: 999
+ allowPrivilegeEscalation: false
```
diff --git a/stable/kuberhealthy/templates/clusterrole.yaml b/stable/kuberhealthy/templates/clusterrole.yaml
index ea2e4214d20d..25a404d4aa2c 100644
--- a/stable/kuberhealthy/templates/clusterrole.yaml
+++ b/stable/kuberhealthy/templates/clusterrole.yaml
@@ -9,6 +9,7 @@ rules:
- pods
- namespaces
- componentstatuses
+ - nodes
verbs:
- get
- list
diff --git a/stable/kuberhealthy/templates/deployment.yaml b/stable/kuberhealthy/templates/deployment.yaml
index 6ad4e85c8be9..30fade7aa331 100644
--- a/stable/kuberhealthy/templates/deployment.yaml
+++ b/stable/kuberhealthy/templates/deployment.yaml
@@ -20,6 +20,22 @@ spec:
type: RollingUpdate
template:
metadata:
+ {{- if .Values.deployment.podAnnotations }}
+ annotations:
+ {{- range $key, $value := .Values.deployment.podAnnotations }}
+ {{ $key }}: {{ $value | quote }}
+ {{- end }}
+ {{- end }}
+ {{- if .Values.prometheus.enabled -}}
+ {{- if .Values.prometheus.enableScraping -}}
+ {{- if not .Values.deployment.podAnnotations }}
+ annotations:
+ {{- end}}
+ prometheus.io/scrape: "true"
+ prometheus.io/path: "/metrics"
+ prometheus.io/port: "8080"
+ {{- end }}
+ {{- end }}
labels:
app: {{ template "kuberhealthy.name" . }}
chart: {{ .Chart.Name }}
@@ -30,6 +46,16 @@ spec:
automountServiceAccountToken: true
containers:
- image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
+ command: {{ .Values.deployment.command }}
+ {{- if .Values.deployment.args }}
+ args:
+{{ toYaml .Values.deployment.args | nindent 8 }}
+ {{- end }}
+ ports:
+ - containerPort: 8080
+ name: http
+ securityContext:
+          {{- toYaml .Values.securityContext | nindent 10 }}
imagePullPolicy: {{ .Values.deployment.imagePullPolicy }}
livenessProbe:
failureThreshold: 3
diff --git a/stable/kuberhealthy/templates/service.yaml b/stable/kuberhealthy/templates/service.yaml
index 85c80ba273d5..2468e5c50c59 100644
--- a/stable/kuberhealthy/templates/service.yaml
+++ b/stable/kuberhealthy/templates/service.yaml
@@ -4,21 +4,19 @@ metadata:
labels:
app: {{ template "kuberhealthy.name" . }}
release: {{ .Release.Name }}
- {{ if .Values.prometheus.enabled -}}
- {{ if .Values.prometheus.enableScraping -}}
- annotations:
- prometheus.io/scrape: "true"
- prometheus.io/port: {{ .Values.service.externalPort | quote }}
- prometheus.io/path: "/metrics"
- {{ end -}}
- {{ end -}}
name: {{ template "kuberhealthy.name" . }}
+ {{- if .Values.service.annotations }}
+ annotations:
+ {{- range $key, $value := .Values.service.annotations }}
+ {{ $key }}: {{ $value | quote }}
+ {{- end }}
+ {{- end }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.externalPort }}
name: http
- targetPort: 8080
+ targetPort: http
selector:
app: {{ template "kuberhealthy.name" . }}
release: {{ .Release.Name }}
diff --git a/stable/kuberhealthy/templates/servicemonitor.yaml b/stable/kuberhealthy/templates/servicemonitor.yaml
index 219169314b47..a7a7d3f62d66 100644
--- a/stable/kuberhealthy/templates/servicemonitor.yaml
+++ b/stable/kuberhealthy/templates/servicemonitor.yaml
@@ -15,7 +15,7 @@ spec:
selector:
matchLabels:
app: {{ .Chart.Name }}
- chart: {{ .Chart.Name }}
+ release: {{ .Release.Name }}
namespaceSelector:
matchNames:
- {{ .Release.Namespace }}
diff --git a/stable/kuberhealthy/values.yaml b/stable/kuberhealthy/values.yaml
index e717fd7e4eeb..3a8861152f9d 100644
--- a/stable/kuberhealthy/values.yaml
+++ b/stable/kuberhealthy/values.yaml
@@ -6,12 +6,12 @@ prometheus:
enabled: false
name: "prometheus"
enableScraping: true
- serviceMonitor: true
+ serviceMonitor: false
enableAlerting: true
image:
repository: quay.io/comcast/kuberhealthy
- tag: 0.1.1
+ tag: v1.0.2
resources:
requests:
@@ -30,6 +30,18 @@ deployment:
maxSurge: 0
maxUnavailable: 1
imagePullPolicy: IfNotPresent
+ podAnnotations: {}
+ command:
+ - /app/kuberhealthy
+ # use this to override location of the test-image, see: https://github.com/Comcast/kuberhealthy/blob/master/docs/FLAGS.md
+ # args:
+ # - -dsPauseContainerImageOverride
+ # - your-repo/google_containers/pause:0.8.0
+securityContext:
+ runAsNonRoot: true
+ runAsUser: 999
+ fsGroup: 999
+ allowPrivilegeEscalation: false
# Please remember that changing the service type to LoadBalancer
# will expose Kuberhealthy to the internet, which could cause
@@ -41,3 +53,4 @@ deployment:
service:
externalPort: 80
type: ClusterIP
+ annotations: {}
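
As a sketch, the new kuberhealthy options above could be exercised together with a user-supplied override file along these lines (the annotation key is a placeholder, and the pause-image path is the commented example from `values.yaml`):

```yaml
# my-values.yaml -- example overrides for the options added above
prometheus:
  enabled: true
  enableScraping: true       # adds the prometheus.io/* pod annotations
deployment:
  podAnnotations:
    example.com/team: "sre"  # hypothetical extra pod annotation
  command:
    - /app/kuberhealthy
  args:
    - -dsPauseContainerImageOverride
    - your-repo/google_containers/pause:0.8.0
service:
  annotations: {}
```

Installed with `helm install stable/kuberhealthy -f my-values.yaml`, this renders the scrape annotations onto the pod (rather than the service, as before) and overrides the pause container image.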
diff --git a/stable/kubernetes-dashboard/Chart.yaml b/stable/kubernetes-dashboard/Chart.yaml
index 13cc5ceadefb..2893d6241302 100644
--- a/stable/kubernetes-dashboard/Chart.yaml
+++ b/stable/kubernetes-dashboard/Chart.yaml
@@ -1,5 +1,6 @@
+apiVersion: v1
name: kubernetes-dashboard
-version: 1.2.0
+version: 1.5.2
appVersion: 1.10.1
description: General-purpose web UI for Kubernetes clusters
keywords:
diff --git a/stable/kubernetes-dashboard/README.md b/stable/kubernetes-dashboard/README.md
index d136bcbec106..8bb217feca75 100644
--- a/stable/kubernetes-dashboard/README.md
+++ b/stable/kubernetes-dashboard/README.md
@@ -47,6 +47,7 @@ The following table lists the configurable parameters of the kubernetes-dashboar
| `image.repository` | Repository for container image | `k8s.gcr.io/kubernetes-dashboard-amd64` |
| `image.tag` | Image tag | `v1.10.1` |
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
+| `image.pullSecrets` | Image pull secrets | `[]` |
| `annotations` | Annotations for deployment | `{}` |
| `replicaCount` | Number of replicas | `1` |
| `extraArgs` | Additional container arguments | `[]` |
@@ -58,6 +59,7 @@ The following table lists the configurable parameters of the kubernetes-dashboar
| `enableInsecureLogin` | Serve application over HTTP without TLS | `false` |
| `service.externalPort` | Dashboard external port | 443 |
| `service.internalPort` | Dashboard internal port | 443 |
+| `service.loadBalancerSourceRanges` | List of IP CIDRs allowed access to the load balancer (if supported) | nil |
| `ingress.annotations` | Specify ingress class | `kubernetes.io/ingress.class: nginx` |
| `ingress.enabled` | Enable ingress controller resource | `false` |
| `ingress.paths` | Paths to match against incoming requests. Both `/` and `/*` are required to work on gce ingress. | `[/]` |
@@ -66,10 +68,14 @@ The following table lists the configurable parameters of the kubernetes-dashboar
| `resources` | Pod resource requests & limits | `limits: {cpu: 100m, memory: 100Mi}, requests: {cpu: 100m, memory: 100Mi}` |
| `rbac.create` | Create & use RBAC resources | `true` |
| `rbac.clusterAdminRole` | "cluster-admin" ClusterRole will be used for dashboard ServiceAccount ([NOT RECOMMENDED](#access-control)) | `false` |
+| `rbac.clusterReadOnlyRole` | If `clusterAdminRole` is disabled, an additional ClusterRole with read-only permissions to all the resources listed inside will be created | `false` |
| `serviceAccount.create` | Whether a new service account name that the agent will use should be created. | `true` |
| `serviceAccount.name` | Service account to be used. If not set and serviceAccount.create is `true` a name is generated using the fullname template. | |
| `livenessProbe.initialDelaySeconds` | Number of seconds to wait before sending first probe | 30 |
| `livenessProbe.timeoutSeconds` | Number of seconds to wait for probe response | 30 |
+| `podDisruptionBudget.enabled` | Create a PodDisruptionBudget | `false` |
+| `podDisruptionBudget.minAvailable` | Minimum available instances; ignored if there is no PodDisruptionBudget | |
+| `podDisruptionBudget.maxUnavailable`| Maximum unavailable instances; ignored if there is no PodDisruptionBudget | |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
@@ -99,10 +105,10 @@ For this to reach the dashboard, the name of the service must be 'kubernetes-das
fullnameOverride: 'kubernetes-dashboard'
```
-### Ugrade from 0.x.x to 1.x.x
+### Upgrade from 0.x.x to 1.x.x
Upgrade from 0.x.x version to 1.x.x version is seamless if you use default `ingress.path` value. If you have non-default `ingress.path` values with version 0.x.x, you need to add your custom path in `ingress.paths` list value as shown as examples in `values.yaml`.
Notes:
-- The proxy url changed please refer to the [usage section](#using-the-dashboard-with-kubectl-proxy')
+- The proxy url changed please refer to the [usage section](#using-the-dashboard-with-kubectl-proxy)
diff --git a/stable/kubernetes-dashboard/templates/NOTES.txt b/stable/kubernetes-dashboard/templates/NOTES.txt
index b6e521595b2a..ec148207dda8 100644
--- a/stable/kubernetes-dashboard/templates/NOTES.txt
+++ b/stable/kubernetes-dashboard/templates/NOTES.txt
@@ -15,7 +15,7 @@ From outside the cluster, the server URL(s) are:
{{- else if contains "NodePort" .Values.service.type }}
Get the Kubernetes Dashboard URL by running:
- export NODE_PORT=$(kubectl get -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "kubernetes-dashboard.fullname" . }})
+ export NODE_PORT=$(kubectl get -n {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "kubernetes-dashboard.fullname" . }})
export NODE_IP=$(kubectl get nodes -o jsonpath="{.items[0].status.addresses[0].address}")
{{- if .Values.enableInsecureLogin }}
echo http://$NODE_IP:$NODE_PORT/
@@ -26,10 +26,10 @@ Get the Kubernetes Dashboard URL by running:
{{- else if contains "LoadBalancer" .Values.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
- Watch the status with: 'kubectl get svc -w {{ template "kubernetes-dashboard.fullname" . }}'
+ Watch the status with: 'kubectl get svc -n {{ .Release.Namespace }} -w {{ template "kubernetes-dashboard.fullname" . }}'
Get the Kubernetes Dashboard URL by running:
- export SERVICE_IP=$(kubectl get svc {{ template "kubernetes-dashboard.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
+ export SERVICE_IP=$(kubectl get svc -n {{ .Release.Namespace }} {{ template "kubernetes-dashboard.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
{{- if .Values.enableInsecureLogin }}
echo http://$SERVICE_IP/
{{- else }}
diff --git a/stable/kubernetes-dashboard/templates/clusterrole-readonly.yaml b/stable/kubernetes-dashboard/templates/clusterrole-readonly.yaml
new file mode 100755
index 000000000000..17905f1dbe65
--- /dev/null
+++ b/stable/kubernetes-dashboard/templates/clusterrole-readonly.yaml
@@ -0,0 +1,160 @@
+{{- if and .Values.rbac.create .Values.rbac.clusterReadOnlyRole (not .Values.rbac.clusterAdminRole) }}
+apiVersion: rbac.authorization.k8s.io/v1beta1
+kind: ClusterRole
+metadata:
+ labels:
+ app: {{ template "kubernetes-dashboard.name" . }}
+ chart: {{ template "kubernetes-dashboard.chart" . }}
+ heritage: {{ .Release.Service }}
+ release: {{ .Release.Name }}
+ name: "{{ template "kubernetes-dashboard.fullname" . }}-readonly"
+ namespace: {{ .Release.Namespace }}
+rules:
+ # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
+ - apiGroups:
+ - ""
+ resources:
+ - secrets
+ resourceNames:
+ - kubernetes-dashboard-key-holder
+ - {{ template "kubernetes-dashboard.fullname" . }}
+ verbs:
+ - get
+ - update
+ - delete
+
+ # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
+ - apiGroups:
+ - ""
+ resources:
+ - configmaps
+ resourceNames:
+ - kubernetes-dashboard-settings
+ verbs:
+ - get
+ - update
+
+ - apiGroups:
+ - ""
+ resources:
+ - configmaps
+ - endpoints
+ - persistentvolumeclaims
+ - pods
+ - replicationcontrollers
+ - replicationcontrollers/scale
+ - serviceaccounts
+ - services
+ - nodes
+ - persistentvolumeclaims
+ - persistentvolumes
+ verbs:
+ - get
+ - list
+ - watch
+ - apiGroups:
+ - ""
+ resources:
+ - bindings
+ - events
+ - limitranges
+ - namespaces/status
+ - pods/log
+ - pods/status
+ - replicationcontrollers/status
+ - resourcequotas
+ - resourcequotas/status
+ verbs:
+ - get
+ - list
+ - watch
+ - apiGroups:
+ - ""
+ resources:
+ - namespaces
+ verbs:
+ - get
+ - list
+ - watch
+ - apiGroups:
+ - apps
+ resources:
+ - daemonsets
+ - deployments
+ - deployments/scale
+ - replicasets
+ - replicasets/scale
+ - statefulsets
+ verbs:
+ - get
+ - list
+ - watch
+ - apiGroups:
+ - autoscaling
+ resources:
+ - horizontalpodautoscalers
+ verbs:
+ - get
+ - list
+ - watch
+ - apiGroups:
+ - batch
+ resources:
+ - cronjobs
+ - jobs
+ verbs:
+ - get
+ - list
+ - watch
+ - apiGroups:
+ - extensions
+ resources:
+ - daemonsets
+ - deployments
+ - deployments/scale
+ - ingresses
+ - networkpolicies
+ - replicasets
+ - replicasets/scale
+ - replicationcontrollers/scale
+ verbs:
+ - get
+ - list
+ - watch
+ - apiGroups:
+ - policy
+ resources:
+ - poddisruptionbudgets
+ verbs:
+ - get
+ - list
+ - watch
+ - apiGroups:
+ - networking.k8s.io
+ resources:
+ - networkpolicies
+ verbs:
+ - get
+ - list
+ - watch
+ - apiGroups:
+ - storage.k8s.io
+ resources:
+ - storageclasses
+ - volumeattachments
+ verbs:
+ - get
+ - list
+ - watch
+ - apiGroups:
+ - rbac.authorization.k8s.io
+ resources:
+ - clusterrolebindings
+ - clusterroles
+ - roles
+ - rolebindings
+ verbs:
+ - get
+ - list
+ - watch
+{{- end -}}
diff --git a/stable/kubernetes-dashboard/templates/deployment.yaml b/stable/kubernetes-dashboard/templates/deployment.yaml
index 4f4b8d40dc08..8601624d57e2 100644
--- a/stable/kubernetes-dashboard/templates/deployment.yaml
+++ b/stable/kubernetes-dashboard/templates/deployment.yaml
@@ -11,7 +11,6 @@ metadata:
chart: {{ template "kubernetes-dashboard.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
- kubernetes.io/cluster-service: "true"
{{- if .Values.labels }}
{{ toYaml .Values.labels | indent 4 }}
{{- end }}
@@ -31,7 +30,6 @@ spec:
labels:
app: {{ template "kubernetes-dashboard.name" . }}
release: {{ .Release.Name }}
- kubernetes.io/cluster-service: "true"
spec:
serviceAccountName: {{ template "kubernetes-dashboard.serviceAccountName" . }}
containers:
@@ -81,6 +79,12 @@ spec:
timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }}
resources:
{{ toYaml .Values.resources | indent 10 }}
+ {{- if .Values.image.pullSecrets }}
+ imagePullSecrets:
+ {{- range .Values.image.pullSecrets }}
+ - name: {{ . }}
+ {{- end }}
+ {{- end }}
{{- if .Values.nodeSelector }}
nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
@@ -96,6 +100,6 @@ spec:
{{ toYaml .Values.tolerations | indent 8 }}
{{- end }}
{{- if .Values.affinity }}
- affinity:
+ affinity:
{{ toYaml .Values.affinity | indent 8 }}
{{- end }}
diff --git a/stable/kubernetes-dashboard/templates/pdb.yaml b/stable/kubernetes-dashboard/templates/pdb.yaml
new file mode 100644
index 000000000000..454578a2193b
--- /dev/null
+++ b/stable/kubernetes-dashboard/templates/pdb.yaml
@@ -0,0 +1,23 @@
+{{- if .Values.podDisruptionBudget.enabled -}}
+apiVersion: policy/v1beta1
+kind: PodDisruptionBudget
+metadata:
+ labels:
+ app: {{ template "kubernetes-dashboard.name" . }}
+ chart: {{ template "kubernetes-dashboard.chart" . }}
+ heritage: {{ .Release.Service }}
+ release: {{ .Release.Name }}
+ name: {{ template "kubernetes-dashboard.fullname" . }}
+ namespace: {{ .Release.Namespace }}
+
+spec:
+ {{- if .Values.podDisruptionBudget.minAvailable }}
+ minAvailable: {{ .Values.podDisruptionBudget.minAvailable }}
+ {{- end }}
+ {{- if .Values.podDisruptionBudget.maxUnavailable }}
+ maxUnavailable: {{ .Values.podDisruptionBudget.maxUnavailable }}
+ {{- end }}
+ selector:
+ matchLabels:
+ app: {{ template "kubernetes-dashboard.name" . }}
+{{- end -}}
\ No newline at end of file
diff --git a/stable/kubernetes-dashboard/templates/rolebinding.yaml b/stable/kubernetes-dashboard/templates/rolebinding.yaml
old mode 100644
new mode 100755
index 52deb8c41cc6..94a427f9a455
--- a/stable/kubernetes-dashboard/templates/rolebinding.yaml
+++ b/stable/kubernetes-dashboard/templates/rolebinding.yaml
@@ -1,7 +1,7 @@
{{- if .Values.rbac.create }}
-{{- if .Values.rbac.clusterAdminRole }}
-# Cluster role binding for clusterAdminRole == true
+{{- if or .Values.rbac.clusterAdminRole .Values.rbac.clusterReadOnlyRole }}
+# Cluster role binding for clusterAdminRole == true or clusterReadOnlyRole == true
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
@@ -14,13 +14,17 @@ metadata:
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
- name: cluster-admin
+ name: {{ if .Values.rbac.clusterAdminRole -}}
+cluster-admin
+{{- else if .Values.rbac.clusterReadOnlyRole -}}
+{{ template "kubernetes-dashboard.fullname" . }}-readonly
+{{- end }}
subjects:
- kind: ServiceAccount
name: {{ template "kubernetes-dashboard.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
{{- else -}}
-# Role binding for clusterAdminRole == false
+# Role binding for clusterAdminRole == false and clusterReadOnlyRole == false
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
diff --git a/stable/kubernetes-dashboard/templates/svc.yaml b/stable/kubernetes-dashboard/templates/svc.yaml
index 726b20bb2bb9..fde9876af2b9 100644
--- a/stable/kubernetes-dashboard/templates/svc.yaml
+++ b/stable/kubernetes-dashboard/templates/svc.yaml
@@ -28,6 +28,10 @@ spec:
{{- end }}
{{- if hasKey .Values.service "nodePort" }}
nodePort: {{ .Values.service.nodePort }}
+{{- end }}
+{{- if .Values.service.loadBalancerSourceRanges }}
+ loadBalancerSourceRanges:
+{{ toYaml .Values.service.loadBalancerSourceRanges | indent 4 }}
{{- end }}
selector:
app: {{ template "kubernetes-dashboard.name" . }}
diff --git a/stable/kubernetes-dashboard/values.yaml b/stable/kubernetes-dashboard/values.yaml
index b35f2b460893..56a6c2297c6a 100644
--- a/stable/kubernetes-dashboard/values.yaml
+++ b/stable/kubernetes-dashboard/values.yaml
@@ -7,6 +7,7 @@ image:
repository: k8s.gcr.io/kubernetes-dashboard-amd64
tag: v1.10.1
pullPolicy: IfNotPresent
+ pullSecrets: []
replicaCount: 1
@@ -15,7 +16,6 @@ annotations: {}
## Here labels can be added to the kubernetes dashboard deployment
##
labels: {}
-# kubernetes.io/cluster-service: "true"
# kubernetes.io/name: "Kubernetes Dashboard"
@@ -61,6 +61,10 @@ service:
##
# nameOverride:
+  # loadBalancerSourceRanges is a list of allowed CIDR values, which are combined with ServicePort to
+ # set allowed inbound rules on the security group assigned to the master load balancer
+ # loadBalancerSourceRanges: []
+
## Kubernetes Dashboard Service annotations
##
## For GCE ingress, the following annotation is required:
@@ -128,6 +132,20 @@ rbac:
# ServiceAccount (NOT RECOMMENDED).
clusterAdminRole: false
+  # Start in read-only mode.
+  # Only dashboard-related Secrets and ConfigMaps will still be writable.
+  #
+  # clusterAdminRole must be turned OFF for clusterReadOnlyRole to take effect.
+  #
+  # Compared to clusterAdminRole, the idea of clusterReadOnlyRole is not to
+  # hide secrets and sensitive data, but to prevent accidental changes to the
+  # cluster outside the standard CI/CD pipeline.
+  #
+  # As with clusterAdminRole, using this role in production is NOT RECOMMENDED.
+  # Instead, review the role and remove all potentially sensitive parts, such
+  # as access to persistentvolumes, pods/log, etc.
+ clusterReadOnlyRole: false
+
serviceAccount:
# Specifies whether a service account should be created
create: true
@@ -140,3 +158,9 @@ livenessProbe:
initialDelaySeconds: 30
# Number of seconds to wait for probe response
timeoutSeconds: 30
+
+podDisruptionBudget:
+ # https://kubernetes.io/docs/tasks/run-application/configure-pdb/
+ enabled: false
+ minAvailable:
+ maxUnavailable:
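
For reference, enabling the new kubernetes-dashboard options introduced above might look like this in an override file (the secret name and CIDR range are illustrative):

```yaml
image:
  pullSecrets:
    - my-registry-secret      # hypothetical imagePullSecret name
rbac:
  create: true
  clusterAdminRole: false     # must stay off for the read-only role to apply
  clusterReadOnlyRole: true   # binds the generated "-readonly" ClusterRole
service:
  loadBalancerSourceRanges:
    - 10.0.0.0/8
podDisruptionBudget:
  enabled: true
  minAvailable: 1             # set either minAvailable or maxUnavailable, not both
```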
diff --git a/stable/kubewatch/Chart.yaml b/stable/kubewatch/Chart.yaml
index 754063038189..e8d87da281c9 100644
--- a/stable/kubewatch/Chart.yaml
+++ b/stable/kubewatch/Chart.yaml
@@ -1,5 +1,5 @@
name: kubewatch
-version: 0.6.1
+version: 0.8.0
apiVersion: v1
appVersion: 0.0.4
home: https://github.com/bitnami-labs/kubewatch
diff --git a/stable/kubewatch/README.md b/stable/kubewatch/README.md
index 1eb07b62f2e3..88bf2486fd18 100644
--- a/stable/kubewatch/README.md
+++ b/stable/kubewatch/README.md
@@ -40,6 +40,7 @@ The following table lists the configurable parameters of the kubewatch chart and
| Parameter | Description | Default |
| ---------------------------------------- | ------------------------------------ | --------------------------------- |
| `global.imageRegistry` | Global Docker image registry | `nil` |
+| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) |
| `affinity` | node/pod affinities | None |
| `image.registry` | Image registry | `docker.io` |
| `image.repository` | Image repository | `bitnami/kubewatch` |
@@ -69,6 +70,7 @@ The following table lists the configurable parameters of the kubewatch chart and
| `webhook.enabled` | Enable Webhook notifications | `false` |
| `webhook.url` | Webhook URL | `""` |
| `tolerations` | List of node taints to tolerate (requires Kubernetes >= 1.6) | `[]` |
+| `namespaceToWatch` | Namespace to watch; leave it empty to watch all namespaces | `""` |
| `resourcesToWatch` | list of resources which kubewatch should watch and notify slack | `{pod: true, deployment: true}` |
| `resourcesToWatch.pod` | watch changes to [Pods](https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/) | `true` |
| `resourcesToWatch.deployment` | watch changes to [Deployments](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) | `true` |
diff --git a/stable/kubewatch/templates/_helpers.tpl b/stable/kubewatch/templates/_helpers.tpl
index baf5c90e4d13..357b50b84140 100644
--- a/stable/kubewatch/templates/_helpers.tpl
+++ b/stable/kubewatch/templates/_helpers.tpl
@@ -64,3 +64,32 @@ Also, we can't use a single if because lazy evaluation is not an option
{{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
{{- end -}}
{{- end -}}
+
+{{/*
+Return the proper Docker Image Registry Secret Names
+*/}}
+{{- define "kubewatch.imagePullSecrets" -}}
+{{/*
+Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
+but Helm 2.9 and 2.10 does not support it, so we need to implement this if-else logic.
+Also, we can not use a single if because lazy evaluation is not an option
+*/}}
+{{- if .Values.global }}
+{{- if .Values.global.imagePullSecrets }}
+imagePullSecrets:
+{{- range .Values.global.imagePullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- else if .Values.image.pullSecrets }}
+imagePullSecrets:
+{{- range .Values.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- end -}}
+{{- else if .Values.image.pullSecrets }}
+imagePullSecrets:
+{{- range .Values.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- end -}}
+{{- end -}}
diff --git a/stable/kubewatch/templates/configmap.yaml b/stable/kubewatch/templates/configmap.yaml
index 1c97d6f0bd63..bc18e31a9742 100644
--- a/stable/kubewatch/templates/configmap.yaml
+++ b/stable/kubewatch/templates/configmap.yaml
@@ -32,3 +32,4 @@ data:
{{- end }}
resource:
{{ toYaml .Values.resourcesToWatch | indent 6 }}
+ namespace: {{ .Values.namespaceToWatch | quote }}
diff --git a/stable/kubewatch/templates/deployment.yaml b/stable/kubewatch/templates/deployment.yaml
index d95be862202f..56cfe68a1d6e 100644
--- a/stable/kubewatch/templates/deployment.yaml
+++ b/stable/kubewatch/templates/deployment.yaml
@@ -28,6 +28,7 @@ spec:
{{ toYaml .Values.podLabels | indent 8 }}
{{- end }}
spec:
+{{- include "kubewatch.imagePullSecrets" . | indent 6 }}
containers:
- name: {{ template "kubewatch.name" . }}
image: {{ template "kubewatch.image" . }}
diff --git a/stable/kubewatch/values.yaml b/stable/kubewatch/values.yaml
index 0eff9f7d873b..a9a099b2ff1e 100644
--- a/stable/kubewatch/values.yaml
+++ b/stable/kubewatch/values.yaml
@@ -1,8 +1,11 @@
-## Global Docker image registry
-## Please, note that this will override the image registry for all the images, including dependencies, configured to use the global value
+## Global Docker image parameters
+## Please, note that this will override the image parameters, including dependencies, configured to use the global value
+## Current available global Docker image parameters: imageRegistry and imagePullSecrets
##
# global:
-# imageRegistry:
+# imageRegistry: myRegistryName
+# imagePullSecrets:
+# - myRegistryKeySecretName
slack:
enabled: true
@@ -29,6 +32,9 @@ webhook:
enabled: false
# url: ""
+# Namespace to watch; leave it empty to watch all namespaces.
+namespaceToWatch: ""
+
# Resources to watch
resourcesToWatch:
deployment: true
@@ -45,6 +51,12 @@ image:
repository: "bitnami/kubewatch"
tag: "0.0.4"
pullPolicy: "Always"
+ ## Optionally specify an array of imagePullSecrets.
+ ## Secrets must be manually created in the namespace.
+ ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
+ ##
+ # pullSecrets:
+ # - myRegistryKeySecretName
rbac:
# If true, create & use RBAC resources
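
A values file using the new kubewatch settings could look like the following sketch (secret names are placeholders). Note that in the `kubewatch.imagePullSecrets` helper above, `global.imagePullSecrets` takes precedence over `image.pullSecrets`:

```yaml
global:
  imagePullSecrets:
    - my-registry-key     # hypothetical secret; wins over image.pullSecrets
image:
  pullSecrets:
    - my-other-key        # used only when no global secrets are set
namespaceToWatch: "staging"   # empty string watches all namespaces
```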
diff --git a/stable/kured/Chart.yaml b/stable/kured/Chart.yaml
index 1ee0cbc7fa81..6672e18c0840 100644
--- a/stable/kured/Chart.yaml
+++ b/stable/kured/Chart.yaml
@@ -1,8 +1,8 @@
apiVersion: v1
-appVersion: "1.1.0"
+appVersion: "1.2.0"
description: A Helm chart for kured
name: kured
-version: 1.1.0
+version: 1.2.0
home: https://github.com/weaveworks/kured
maintainers:
- name: plumdog
diff --git a/stable/kured/README.md b/stable/kured/README.md
index dda81ef91316..111b85f79376 100644
--- a/stable/kured/README.md
+++ b/stable/kured/README.md
@@ -4,8 +4,8 @@ See https://github.com/weaveworks/kured
| Config | Description | Default |
| ------ | ----------- | ------- |
-| `image.repository` | Image repository | `quay.io/weaveworks/kured` |
-| `image.tag` | Image tag | `master-c42fff3` |
+| `image.repository` | Image repository | `weaveworks/kured` |
+| `image.tag` | Image tag | `1.2.0` |
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `image.pullSecrets` | Image pull secrets | `[]` |
| `extraArgs` | Extra arguments to pass to `/usr/bin/kured`. See below. | `{}` |
diff --git a/stable/kured/values.yaml b/stable/kured/values.yaml
index 50a9957713fa..e434862963ea 100644
--- a/stable/kured/values.yaml
+++ b/stable/kured/values.yaml
@@ -1,7 +1,6 @@
image:
- repository: quay.io/weaveworks/kured
- # Appears to be without numbered numbered tags, so using this instead
- tag: 1.1.0
+ repository: weaveworks/kured
+ tag: 1.2.0
pullPolicy: IfNotPresent
pullSecrets: []
diff --git a/stable/lamp/Chart.yaml b/stable/lamp/Chart.yaml
index a8d1f34c3003..8b31f0bd0de1 100644
--- a/stable/lamp/Chart.yaml
+++ b/stable/lamp/Chart.yaml
@@ -1,7 +1,7 @@
apiVersion: v1
description: Modular and transparent LAMP stack chart supporting PHP-FPM, Release Cloning, LoadBalancer, Ingress, SSL and lots more!
name: lamp
-version: 1.0.0
+version: 1.1.0
appVersion: 7
home: https://github.com/lead4good/helm-lamp-stack
maintainers:
diff --git a/stable/lamp/README.md b/stable/lamp/README.md
index a318af2b5857..b5b31dec6740 100644
--- a/stable/lamp/README.md
+++ b/stable/lamp/README.md
@@ -162,6 +162,7 @@ FPM is enabled by default, this creates an additional HTTPD container which rout
| `php.sockets` | If FPM is enabled, enables communication between HTTPD and PHP via sockets instead of TCP | true |
| `php.oldHTTPRoot` | Additionally mounts the webroot at `php.oldHTTPRoot` to compensate for absolute path file links | _empty_ |
| `php.ini` | additional PHP config values, see examples on how to use | _empty_ |
+| `php.fpm` | additional PHP FPM config values | _empty_ |
| `php.copyRoot` | if true, copies the containers web root `/var/www/html` into persistent storage. This must be enabled, if the container already comes with files installed to `/var/www/html` | false |
| `php.persistentSubpaths` | instead of enabling persistence for the whole webroot, only subpaths of webroot can be enabled for persistence. Have a look at the [nextcloud example](examples/nextcloud.yaml) to see how it works | _empty_ |
| `php.resources` | PHP container resource requests/limits | `resources` |
diff --git a/stable/lamp/templates/configmap-php.yaml b/stable/lamp/templates/configmap-php.yaml
index abe3e9f42d32..d83249afc4c0 100644
--- a/stable/lamp/templates/configmap-php.yaml
+++ b/stable/lamp/templates/configmap-php.yaml
@@ -24,5 +24,8 @@ data:
[www]
listen = /var/run/php/php-fpm.sock
listen.mode = 0666
+ {{- if .Values.php.fpm }}
+{{ .Values.php.fpm | indent 4 }}
+ {{- end }}
{{- end }}
{{- end }}
diff --git a/stable/lamp/values.yaml b/stable/lamp/values.yaml
index 06076d01769d..3b9c2d17ae5b 100644
--- a/stable/lamp/values.yaml
+++ b/stable/lamp/values.yaml
@@ -47,6 +47,10 @@ php:
# ini: |
# short_open_tag=On
+ ## php-fpm.conf: additional PHP FPM config values
+ # fpm: |
+ # pm.max_children = 120
+
## php.copyRoot if true, copies the containers web root `/var/www/html` into
copyRoot: false
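
As an illustration, the commented `php.fpm` block above could be enabled like this (the pool settings are example values, not tuning recommendations):

```yaml
php:
  fpm: |
    pm.max_children = 120
    pm.start_servers = 12
```

The block is appended verbatim to the `[www]` pool section of the generated php-fpm config, so any valid FPM pool directive can go here.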
diff --git a/stable/locust/Chart.yaml b/stable/locust/Chart.yaml
index 1dbcfa662aab..f9c040a7dd5b 100644
--- a/stable/locust/Chart.yaml
+++ b/stable/locust/Chart.yaml
@@ -1,11 +1,14 @@
+apiVersion: v1
name: locust
description: A modern load testing framework
-version: 0.3.0
-appVersion: 0.7.5
+version: 1.0.0
+appVersion: 0.9.0
maintainers:
- name: so0k
email: vincent.drl@gmail.com
+ - name: haugene
+ email: kristian.haugene@greenbird.com
home: http://locust.io
icon: https://pbs.twimg.com/profile_images/1867636195/locust-logo-orignal.png
sources:
- - https://github.com/honestbee/distributed-load-testing
+ - https://github.com/greenbird/locust
diff --git a/stable/locust/README.md b/stable/locust/README.md
index 45d3eb95a715..7c741d63e876 100644
--- a/stable/locust/README.md
+++ b/stable/locust/README.md
@@ -12,6 +12,7 @@ testing using Kubernetes.
This chart will do the following:
* Convert all files in `tasks/` folder into a configmap
+* If an existing configmap is specified, it will be used instead of building one from the chart
* Create a Locust master and Locust worker deployment with the Target host
and Tasks file specified.
@@ -27,8 +28,8 @@ helm install -n locust-nymph --set master.config.target-host=http://site.example
| Parameter | Description | Default |
| ---------------------------- | ---------------------------------- | ----------------------------------------------------- |
| `Name` | Locust master name | `locust` |
-| `image.repository` | Locust container image name | `quay.io/honestbee/locust` |
-| `image.tag` | Locust Container image tag | `0.7.5` |
+| `image.repository` | Locust container image name | `greenbirdit/locust` |
+| `image.tag` | Locust Container image tag | `0.9.0` |
| `image.pullSecrets` | Locust Container image registry secret | `None` |
| `service.type` | k8s service type exposing master | `NodePort` |
| `service.nodePort` | Port on cluster to expose master | `0` |
@@ -36,6 +37,7 @@ helm install -n locust-nymph --set master.config.target-host=http://site.example
| `service.extraLabels` | KV containing extra labels | `{}` |
| `master.config.target-host` | locust target host | `http://site.example.com` |
| `worker.config.locust-script`| locust script to run | `/locust-tasks/tasks.py` |
+| `worker.config.configmapName` | configmap to mount locust scripts from | `empty; created from the chart's tasks folder` |
| `worker.replicaCount` | Number of workers to run | `2` |
Specify parameters using `--set key=value[,key=value]` argument to `helm install`
@@ -46,9 +48,26 @@ Alternatively a YAML file that specifies the values for the parameters can be pr
$ helm install --name my-release -f values.yaml stable/locust
```
-You can start the swarm from the command line using Port forwarding as follows:
+#### Creating configmap with your Locust task files
-Get the Locust URL following the Post Installation notes.
+You're probably developing your own Locust scripts that you want to run in this distributed setup.
+To get those scripts into the deployment, you can fork the chart and put them into the `tasks` folder,
+from where they will be converted to a configmap and mounted for use in Locust.
+
+Alternatively, if you don't want to fork the chart, put your Locust scripts in a configmap and provide its name
+as a config parameter in `values.yaml`. You can read more on using configmaps as volumes in pods [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/).
+
+If your Locust task files are in a folder named `scripts`, you would use a command like the following:
+
+`kubectl create configmap locust-worker-configs --from-file path/to/scripts`
+
+
+### Interacting with Locust
+
+Get the Locust URL by following the post-installation notes. Using port forwarding, you should be able to connect
+to the web UI on the Locust master node.
+
+You can start the swarm from the command line using port forwarding as follows:
for example:
```bash
diff --git a/stable/locust/templates/_helpers.tpl b/stable/locust/templates/_helpers.tpl
index 97d5e0a7e8c1..7353eae4a82e 100644
--- a/stable/locust/templates/_helpers.tpl
+++ b/stable/locust/templates/_helpers.tpl
@@ -23,5 +23,9 @@ We truncate at 63 chars because some Kubernetes name fields are limited to this
Create fully qualified configmap name.
*/}}
{{- define "locust.worker-configmap" -}}
+{{- if .Values.worker.config.configmapName -}}
+{{- printf .Values.worker.config.configmapName -}}
+{{- else -}}
{{- printf "%s-%s" .Release.Name "worker" -}}
+{{- end -}}
{{- end -}}
diff --git a/stable/locust/templates/worker-cm.yaml b/stable/locust/templates/worker-cm.yaml
index e4d20117aa17..d48a9458ddad 100644
--- a/stable/locust/templates/worker-cm.yaml
+++ b/stable/locust/templates/worker-cm.yaml
@@ -1,3 +1,4 @@
+{{ if not .Values.worker.config.configmapName }}
apiVersion: v1
kind: ConfigMap
metadata:
@@ -9,3 +10,4 @@ metadata:
app: {{ template "locust.fullname" . }}
data:
{{ (.Files.Glob "tasks/*").AsConfig | indent 2 }}
+{{ end }}
\ No newline at end of file
diff --git a/stable/locust/values.yaml b/stable/locust/values.yaml
index 12a5d2d557fb..04ceda0b58dc 100644
--- a/stable/locust/values.yaml
+++ b/stable/locust/values.yaml
@@ -1,8 +1,8 @@
Name: locust
image:
- repository: quay.io/honestbee/locust
- tag: 0.7.5
+ repository: greenbirdit/locust
+ tag: 0.9.0
pullPolicy: IfNotPresent
pullSecrets: []
@@ -27,7 +27,10 @@ master:
worker:
config:
- # all files from tasks folder are mounted under `/locust-tasks`
+ # Optional parameter to use an existing configmap instead of deploying one with the Chart
+ # configmapName: locust-worker-configs
+
+ # all files from specified configmap (or tasks folder) are mounted under `/locust-tasks`
locust-script: "/locust-tasks/tasks.py"
replicaCount: 2
resources:
diff --git a/stable/logdna-agent/Chart.yaml b/stable/logdna-agent/Chart.yaml
new file mode 100644
index 000000000000..d0fe912c6a3a
--- /dev/null
+++ b/stable/logdna-agent/Chart.yaml
@@ -0,0 +1,22 @@
+apiVersion: v1
+name: logdna-agent
+description: Run this, get logs. All cluster containers. LogDNA collector agent daemonset for Kubernetes.
+version: 1.0.0
+appVersion: 1.5.6
+keywords:
+- logs
+- logging
+- log-management
+- agent
+- alerts
+- metrics
+- events
+- daemonset
+- logger
+home: https://logdna.com
+icon: https://logdna.com/assets/images/logdna_logo_240w.png
+sources:
+- https://github.com/logdna/logdna-agent
+maintainers:
+- name: leeliu
+ email: lee@logdna.com
diff --git a/stable/logdna-agent/README.md b/stable/logdna-agent/README.md
new file mode 100644
index 000000000000..c7c908f3e800
--- /dev/null
+++ b/stable/logdna-agent/README.md
@@ -0,0 +1,73 @@
+# LogDNA Kubernetes Agent
+
+[LogDNA](https://logdna.com) - Easy, beautiful logging in the cloud.
+
+## TL;DR;
+
+```bash
+$ helm install --set logdna.key=LOGDNA_INGESTION_KEY stable/logdna-agent
+```
+
+## Introduction
+
+This chart deploys LogDNA collector agents to all nodes in your cluster. Logs will ship from all containers. We extract pertinent Kubernetes metadata: pod name, container name, container id, namespace, and labels. View your logs at https://app.logdna.com or live tail using our [CLI](https://github.com/logdna/logdna-cli).
+
+## Prerequisites
+
+- Kubernetes 1.2+
+
+## Installing the Chart
+
+To install the chart with the release name `my-release`, please follow directions from https://app.logdna.com/pages/add-source to obtain your LogDNA Ingestion Key:
+
+```bash
+$ helm install --name my-release \
+ --set logdna.key=LOGDNA_INGESTION_KEY,logdna.autoupdate=1 stable/logdna-agent
+```
+
+You should see logs in https://app.logdna.com in a few seconds.
+
+### Tags support
+```bash
+$ helm install --name my-release \
+ --set logdna.key=LOGDNA_INGESTION_KEY,logdna.tags=production,logdna.autoupdate=1 stable/logdna-agent
+```
+
+## Uninstalling the Chart
+
+To uninstall/delete the `my-release` deployment:
+
+```bash
+$ helm delete my-release
+```
+
+The command removes all the Kubernetes components associated with the chart and deletes the release.
+
+## Configuration
+
+The following table lists the configurable parameters of the LogDNA Agent chart and their default values.
+
+Parameter | Description | Default
+--- | --- | ---
+`logdna.key` | LogDNA Ingestion Key (Required) | None
+`logdna.tags` | Optional tags such as `production` | None
+`logdna.autoupdate` | Optionally turn on autoupdate by setting to 1 (auto-sets image.pullPolicy to `Always`) | `0`
+`image.pullPolicy` | Image pull policy | `IfNotPresent`
+`image.tag` | Image tag | `latest`
+`resources.limits.memory` | Memory resource limits | `500Mi`
+`tolerations` | List of node taints to tolerate | `[]`
+
+Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
+
+```bash
+$ helm install --name my-release \
+ --set logdna.key=LOGDNA_INGESTION_KEY,logdna.tags=production,logdna.autoupdate=1 stable/logdna-agent
+```
+
+Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,
+
+```bash
+$ helm install --name my-release -f values.yaml stable/logdna-agent
+```
+
+> **Tip**: You can use the default [values.yaml](values.yaml)
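+
+As a sketch, such a values file might look like the following (the key shown is a placeholder; use your real ingestion key):
+
+```yaml
+logdna:
+  key: LOGDNA_INGESTION_KEY  # placeholder
+  tags: production
+  autoupdate: 1
+```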
diff --git a/stable/logdna-agent/templates/NOTES.txt b/stable/logdna-agent/templates/NOTES.txt
new file mode 100644
index 000000000000..f8669b06799b
--- /dev/null
+++ b/stable/logdna-agent/templates/NOTES.txt
@@ -0,0 +1,27 @@
+{{- if not .Values.logdna.key -}}
+#############################################################
+### ERROR: Please specify an ingestion key `logdna.key` ###
+#############################################################
+
+
+Please follow directions from https://app.logdna.com/pages/add-source
+
+Try:
+
+helm upgrade {{ .Release.Name }} --set logdna.key=INGESTION_KEY \
+ stable/logdna-agent
+
+
+Tags support:
+
+helm upgrade {{ .Release.Name }} --set logdna.key=INGESTION_KEY,logdna.tags=production \
+ stable/logdna-agent
+
+{{- else -}}
+
+LogDNA's collector agent(s) are being deployed to each node in your cluster.
+You should see logs in https://app.logdna.com in a few seconds.
+
+For help, please click on Help (top right corner of https://app.logdna.com)
+
+{{- end }}
diff --git a/stable/logdna-agent/templates/_helpers.tpl b/stable/logdna-agent/templates/_helpers.tpl
new file mode 100644
index 000000000000..5b13699faba6
--- /dev/null
+++ b/stable/logdna-agent/templates/_helpers.tpl
@@ -0,0 +1,32 @@
+{{/* vim: set filetype=mustache: */}}
+{{/*
+Expand the name of the chart.
+*/}}
+{{- define "logdna.name" -}}
+ {{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+
+{{/*
+Create a default fully qualified app name.
+We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
+If release name contains chart name it will be used as a full name.
+*/}}
+{{- define "logdna.fullname" -}}
+ {{- if .Values.fullnameOverride -}}
+ {{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
+ {{- else -}}
+ {{- $name := default .Chart.Name .Values.nameOverride -}}
+ {{- if contains $name .Release.Name -}}
+ {{- .Release.Name | trunc 63 | trimSuffix "-" -}}
+ {{- else -}}
+ {{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
+ {{- end -}}
+ {{- end -}}
+{{- end -}}
+
+{{/*
+Create chart name and version as used by the chart label.
+*/}}
+{{- define "logdna.chart" -}}
+ {{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
diff --git a/stable/logdna-agent/templates/daemonset.yaml b/stable/logdna-agent/templates/daemonset.yaml
new file mode 100644
index 000000000000..83be99374688
--- /dev/null
+++ b/stable/logdna-agent/templates/daemonset.yaml
@@ -0,0 +1,80 @@
+{{- if .Values.logdna.key -}}
+apiVersion: extensions/v1beta1
+kind: DaemonSet
+metadata:
+ name: {{ template "logdna.name" . }}
+ labels:
+ app: {{ template "logdna.name" . }}
+ chart: {{ template "logdna.chart" . }}
+ release: {{ .Release.Name }}
+spec:
+ template:
+ metadata:
+ labels:
+ app: {{ template "logdna.name" . }}
+ chart: {{ template "logdna.chart" . }}
+ release: {{ .Release.Name }}
+ spec:
+ containers:
+ - name: {{ template "logdna.name" . }}
+ image: "{{.Values.image.name}}:{{.Values.image.tag}}"
+{{- if eq (toString .Values.logdna.autoupdate) "1" }}
+        imagePullPolicy: "Always"
+{{- else }}
+ imagePullPolicy: "{{.Values.image.pullPolicy}}"
+{{- end }}
+ env:
+ - name: LOGDNA_AGENT_KEY
+ valueFrom:
+ secretKeyRef:
+ name: {{ template "logdna.name" . }}
+ key: logdna-agent-key
+ - name: LOGDNA_PLATFORM
+ value: k8s
+{{- if .Values.logdna.tags }}
+ - name: LOGDNA_TAGS
+          value: {{ .Values.logdna.tags | quote }}
+{{- end }}
+{{- if .Values.logdna.autoupdate }}
+ - name: LOGDNA_AUTOUPDATE
+          value: {{ .Values.logdna.autoupdate | quote }}
+{{- end }}
+ resources:
+{{ toYaml .Values.resources | indent 12 }}
+ volumeMounts:
+ - name: varlog
+ mountPath: /var/log
+ - name: varlibdockercontainers
+ mountPath: /var/lib/docker/containers
+ readOnly: true
+ - name: mnt
+ mountPath: /mnt
+ readOnly: true
+ - name: docker
+ mountPath: /var/run/docker.sock
+ - name: osrelease
+ mountPath: /etc/os-release
+ - name: logdnahostname
+ mountPath: /etc/logdna-hostname
+ volumes:
+ - name: varlog
+ hostPath:
+ path: /var/log
+ - name: varlibdockercontainers
+ hostPath:
+ path: /var/lib/docker/containers
+ - name: mnt
+ hostPath:
+ path: /mnt
+ - name: docker
+ hostPath:
+ path: /var/run/docker.sock
+ - name: osrelease
+ hostPath:
+ path: /etc/os-release
+ - name: logdnahostname
+ hostPath:
+ path: /etc/hostname
+ tolerations:
+{{ toYaml .Values.daemonset.tolerations | indent 8 }}
+{{ end }}
diff --git a/stable/logdna-agent/templates/secrets.yaml b/stable/logdna-agent/templates/secrets.yaml
new file mode 100644
index 000000000000..0f6a1d882492
--- /dev/null
+++ b/stable/logdna-agent/templates/secrets.yaml
@@ -0,0 +1,11 @@
+apiVersion: v1
+kind: Secret
+metadata:
+ name: {{ template "logdna.name" . }}
+ labels:
+ app: {{ template "logdna.name" . }}
+ chart: {{ template "logdna.chart" . }}
+ release: {{ .Release.Name }}
+type: Opaque
+data:
+ logdna-agent-key: {{ default "" .Values.logdna.key | b64enc | quote }}
diff --git a/stable/logdna-agent/values.yaml b/stable/logdna-agent/values.yaml
new file mode 100644
index 000000000000..82a12d04a5b6
--- /dev/null
+++ b/stable/logdna-agent/values.yaml
@@ -0,0 +1,25 @@
+image:
+ name: logdna/logdna-agent
+ tag: 1.5.6
+ pullPolicy: IfNotPresent
+
+logdna:
+ ## Please follow directions from https://app.logdna.com/pages/add-source
+
+ ## An ingestion key is required for this chart to run
+ # key:
+
+ ## Optional settings
+ autoupdate: 0
+ # tags:
+
+ name: logdna-agent
+
+resources:
+ requests:
+ cpu: 20m
+ limits:
+ memory: 500Mi
+
+daemonset:
+ tolerations: []
diff --git a/stable/logstash/Chart.yaml b/stable/logstash/Chart.yaml
index 017461ebc51e..ff598b0ccb9f 100644
--- a/stable/logstash/Chart.yaml
+++ b/stable/logstash/Chart.yaml
@@ -3,8 +3,8 @@ description: Logstash is an open source, server-side data processing pipeline
icon: https://www.elastic.co/assets/blt86e4472872eed314/logo-elastic-logstash-lt.svg
home: https://www.elastic.co/products/logstash
name: logstash
-version: 1.5.0
-appVersion: 6.6.0
+version: 1.11.0
+appVersion: 6.7.0
sources:
- https://www.docker.elastic.co
- https://www.elastic.co/guide/en/logstash/current/index.html
diff --git a/stable/logstash/README.md b/stable/logstash/README.md
index e1a00cf7c5d0..9458d0e075d3 100644
--- a/stable/logstash/README.md
+++ b/stable/logstash/README.md
@@ -75,7 +75,7 @@ The following table lists the configurable parameters of the chart and its defau
| `podDisruptionBudget` | Pod disruption budget | `maxUnavailable: 1` |
| `updateStrategy` | Update strategy | `type: RollingUpdate` |
| `image.repository` | Container image name | `docker.elastic.co/logstash/logstash-oss` |
-| `image.tag` | Container image tag | `6.6.0` |
+| `image.tag` | Container image tag | `6.7.0` |
| `image.pullPolicy` | Container image pull policy | `IfNotPresent` |
| `service.type` | Service type (ClusterIP, NodePort or LoadBalancer) | `ClusterIP` |
| `service.annotations` | Service annotations | `{}` |
@@ -90,6 +90,7 @@ The following table lists the configurable parameters of the chart and its defau
| `ingress.path` | Ingress path | `/` |
| `ingress.hosts` | Ingress accepted hostnames | `["logstash.cluster.local"]` |
| `ingress.tls` | Ingress TLS configuration | `[]` |
+| `logstashJavaOpts` | Java options for Logstash (e.g. heap size) | `"-Xmx1g -Xms1g"` |
| `resources` | Pod resource requests & limits | `{}` |
| `priorityClassName` | priorityClassName | `nil` |
| `nodeSelector` | Node selector | `{}` |
@@ -97,6 +98,8 @@ The following table lists the configurable parameters of the chart and its defau
| `affinity` | Affinity or Anti-Affinity | `{}` |
| `podAnnotations` | Pod annotations | `{}` |
| `podLabels` | Pod labels | `{}` |
+| `extraEnv` | Extra pod environment variables | `[]` |
+| `extraInitContainers` | Add additional initContainers | `[]` |
| `livenessProbe` | Liveness probe settings for logstash container | (see `values.yaml`) |
| `readinessProbe` | Readiness probe settings for logstash container | (see `values.yaml`) |
| `persistence.enabled` | Enable persistence | `true` |
@@ -112,6 +115,10 @@ The following table lists the configurable parameters of the chart and its defau
| `elasticsearch.port` | ElasticSearch port | `9200` |
| `config` | Logstash configuration key-values | (see `values.yaml`) |
| `patterns` | Logstash patterns configuration | `nil` |
+| `files` | Logstash custom files configuration | `nil` |
+| `binaryFiles` | Logstash custom binary files | `nil` |
| `inputs` | Logstash inputs configuration | beats |
| `filters` | Logstash filters configuration | `nil` |
| `outputs` | Logstash outputs configuration | elasticsearch |
+| `securityContext.fsGroup` | Group ID for the container | `1000` |
+| `securityContext.runAsUser` | User ID for the container | `1000` |
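+
+As a sketch, `extraEnv` accepts a standard Kubernetes env var list (the name and value below are placeholders), and `extraInitContainers` accepts a list of container specs:
+
+```yaml
+extraEnv:
+  - name: EXAMPLE_VAR   # placeholder
+    value: "example"
+```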
diff --git a/stable/logstash/templates/files-config.yaml b/stable/logstash/templates/files-config.yaml
new file mode 100644
index 000000000000..11423cd9add6
--- /dev/null
+++ b/stable/logstash/templates/files-config.yaml
@@ -0,0 +1,20 @@
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: {{ template "logstash.fullname" . }}-files
+ labels:
+ app: {{ template "logstash.name" . }}
+ chart: {{ template "logstash.chart" . }}
+ release: {{ .Release.Name }}
+ heritage: {{ .Release.Service }}
+data:
+{{- range $key, $value := .Values.files }}
+ {{ $key }}: |-
+{{ $value | indent 4 }}
+{{- end }}
+binaryData:
+ {{- range $key, $value := .Values.binaryFiles }}
+ {{ $key }}: |-
+{{ $value | indent 4 }}
+ {{- end }}
+
\ No newline at end of file
diff --git a/stable/logstash/templates/statefulset.yaml b/stable/logstash/templates/statefulset.yaml
index 7008286ae539..b6fefb66f372 100644
--- a/stable/logstash/templates/statefulset.yaml
+++ b/stable/logstash/templates/statefulset.yaml
@@ -27,6 +27,7 @@ spec:
{{- end }}
annotations:
checksum/patterns: {{ include (print $.Template.BasePath "/patterns-config.yaml") . | sha256sum }}
+ checksum/templates: {{ include (print $.Template.BasePath "/files-config.yaml") . | sha256sum }}
checksum/pipeline: {{ include (print $.Template.BasePath "/pipeline-config.yaml") . | sha256sum }}
{{- if .Values.podAnnotations }}
## Custom pod annotations
@@ -39,12 +40,16 @@ spec:
priorityClassName: "{{ .Values.priorityClassName }}"
{{- end }}
securityContext:
- runAsUser: 1000
- fsGroup: 1000
+ runAsUser: {{ .Values.securityContext.runAsUser }}
+ fsGroup: {{ .Values.securityContext.fsGroup }}
{{- if .Values.image.pullSecrets }}
imagePullSecrets:
{{ toYaml .Values.image.pullSecrets | indent 8 }}
{{- end }}
+ initContainers:
+{{- if .Values.extraInitContainers }}
+{{ toYaml .Values.extraInitContainers | indent 8 }}
+{{- end }}
containers:
## logstash
@@ -71,11 +76,17 @@ spec:
value: {{ .Values.elasticsearch.host | quote }}
- name: ELASTICSEARCH_PORT
value: {{ .Values.elasticsearch.port | quote }}
+ # Logstash Java Options
+ - name: LS_JAVA_OPTS
+            value: {{ .Values.logstashJavaOpts | quote }}
## Additional env vars
{{- range $key, $value := .Values.config }}
- name: {{ $key | upper | replace "." "_" }}
value: {{ $value | quote }}
{{- end }}
+ {{- if .Values.extraEnv }}
+{{ .Values.extraEnv | toYaml | indent 12 }}
+ {{- end }}
resources:
{{ toYaml .Values.resources | indent 12 }}
volumeMounts:
@@ -130,6 +141,9 @@ spec:
- name: patterns
configMap:
name: {{ template "logstash.fullname" . }}-patterns
+ - name: files
+ configMap:
+ name: {{ template "logstash.fullname" . }}-files
- name: pipeline
configMap:
name: {{ template "logstash.fullname" . }}-pipeline
diff --git a/stable/logstash/values.yaml b/stable/logstash/values.yaml
index 380041ab0d72..b148406e0548 100644
--- a/stable/logstash/values.yaml
+++ b/stable/logstash/values.yaml
@@ -10,7 +10,7 @@ terminationGracePeriodSeconds: 30
image:
repository: docker.elastic.co/logstash/logstash-oss
- tag: 6.6.0
+ tag: 6.7.0
pullPolicy: IfNotPresent
## Add secrets manually via kubectl on kubernetes cluster and reference here
# pullSecrets:
@@ -72,6 +72,9 @@ ingress:
# hosts:
# - logstash.cluster.local
+# set java options like heap size
+logstashJavaOpts: "-Xmx1g -Xms1g"
+
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
@@ -90,6 +93,10 @@ nodeSelector: {}
tolerations: []
+securityContext:
+ fsGroup: 1000
+ runAsUser: 1000
+
affinity: {}
# podAntiAffinity:
# requiredDuringSchedulingIgnoredDuringExecution:
@@ -108,6 +115,16 @@ podLabels: {}
# team: "developers"
# service: "logstash"
+extraEnv: []
+
+extraInitContainers: []
+ # - name: echo
+ # image: busybox
+ # imagePullPolicy: Always
+ # args:
+ # - echo
+ # - hello
+
livenessProbe:
httpGet:
path: /
@@ -146,6 +163,8 @@ volumeMounts:
mountPath: /usr/share/logstash/data
- name: patterns
mountPath: /usr/share/logstash/patterns
+ - name: files
+ mountPath: /usr/share/logstash/files
- name: pipeline
mountPath: /usr/share/logstash/pipeline
@@ -213,6 +232,38 @@ patterns:
# main: |-
# TESTING {"foo":.*}$
+## Custom files that can be referenced by plugins.
+## Each YAML heredoc will be mounted as a file in the Logstash home directory,
+## under the files subdirectory.
+files:
+ # logstash-template.json: |-
+ # {
+ # "order": 0,
+ # "version": 1,
+ # "index_patterns": [
+ # "logstash-*"
+ # ],
+ # "settings": {
+ # "index": {
+ # "refresh_interval": "5s"
+ # }
+ # },
+ # "mappings": {
+ # "doc": {
+ # "_meta": {
+ # "version": "1.0.0"
+ # },
+ # "enabled": false
+ # }
+ # },
+ # "aliases": {}
+ # }
+
+## Custom binary files encoded as base64 string that can be referenced by plugins
+## Each base64-encoded string is decoded and mounted as a file in the Logstash home
+## directory, under the files subdirectory.
+binaryFiles: {}
+
## NOTE: To achieve multiple pipelines with this chart, current best practice
## is to maintain one pipeline per chart release. In this way configuration is
## simplified and pipelines are more isolated from one another.
diff --git a/stable/luigi/Chart.yaml b/stable/luigi/Chart.yaml
index 3bf95bf37965..bef938af51ff 100644
--- a/stable/luigi/Chart.yaml
+++ b/stable/luigi/Chart.yaml
@@ -18,5 +18,5 @@ keywords:
- spark
- kubernetes job manager
name: luigi
-version: 2.7.4
+version: 2.7.5
appVersion: 2.7.2
diff --git a/stable/luigi/README.md b/stable/luigi/README.md
index 45c6feed4ea5..f563e8eb8b92 100644
--- a/stable/luigi/README.md
+++ b/stable/luigi/README.md
@@ -35,7 +35,7 @@ The command removes all the Kubernetes components associated with the chart and
## Configuration
-The following table lists the configurable parameters of the Sentry chart and their default values.
+The following table lists the configurable parameters of the Luigi chart and their default values.
| Parameter | Description | Default |
| ------------------------------- | ------------------------------- | ---------------------------------------------------------- |
diff --git a/stable/magento/Chart.yaml b/stable/magento/Chart.yaml
index 177297924be0..2c02dbc56d20 100644
--- a/stable/magento/Chart.yaml
+++ b/stable/magento/Chart.yaml
@@ -1,6 +1,7 @@
+apiVersion: v1
name: magento
-version: 4.1.4
-appVersion: 2.3.0
+version: 5.0.2
+appVersion: 2.3.1
description: A feature-rich flexible e-commerce solution. It includes transaction options, multi-store functionality, loyalty programs, product categorization and shopper filtering, promotion rules, and more.
keywords:
- magento
diff --git a/stable/magento/README.md b/stable/magento/README.md
index 1053f3a0261b..23e7aeacf462 100644
--- a/stable/magento/README.md
+++ b/stable/magento/README.md
@@ -14,7 +14,7 @@ This chart bootstraps a [Magento](https://github.com/bitnami/bitnami-docker-mage
It also packages the [Bitnami MariaDB chart](https://github.com/kubernetes/charts/tree/master/stable/mariadb) which is required for bootstrapping a MariaDB deployment as a database for the Magento application.
-Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters.
+Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters. This chart has been tested to work with NGINX Ingress, cert-manager, fluentd and Prometheus on top of the [BKPR](https://kubeprod.io/).
## Prerequisites
@@ -47,74 +47,94 @@ The command removes all the Kubernetes components associated with the chart and
The following table lists the configurable parameters of the Magento chart and their default values.
-| Parameter | Description | Default |
-|--------------------------------------|--------------------------------------------|----------------------------------------------------------|
-| `global.imageRegistry` | Global Docker image registry | `nil` |
-| `image.registry` | Magento image registry | `docker.io` |
-| `image.repository` | Magento Image name | `bitnami/magento` |
-| `image.tag` | Magento Image tag | `{VERSION}` |
-| `image.pullPolicy` | Image pull policy | `Always` if `imageTag` is `latest`, else `IfNotPresent` |
-| `image.pullSecrets` | Specify docker-registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) |
-| `magentoHost` | Magento host to create application URLs | `nil` |
-| `magentoLoadBalancerIP` | `loadBalancerIP` for the magento Service | `nil` |
-| `magentoUsername` | User of the application | `user` |
-| `magentoPassword` | Application password | _random 10 character long alphanumeric string_ |
-| `magentoEmail` | Admin email | `user@example.com` |
-| `magentoFirstName` | Magento Admin First Name | `FirstName` |
-| `magentoLastName` | Magento Admin Last Name | `LastName` |
-| `magentoMode` | Magento mode | `developer` |
-| `magentoAdminUri` | Magento prefix to access Magento Admin | `admin` |
-| `allowEmptyPassword` | Allow DB blank passwords | `yes` |
-| `externalDatabase.host` | Host of the external database | `nil` |
-| `externalDatabase.port` | Port of the external database | `3306` |
-| `externalDatabase.user` | Existing username in the external db | `bn_magento` |
-| `externalDatabase.password` | Password for the above username | `nil` |
-| `externalDatabase.database` | Name of the existing database | `bitnami_magento` |
-| `mariadb.enabled` | Whether to use the MariaDB chart | `true` |
-| `mariadb.rootUser.password` | MariaDB admin password | `nil` |
-| `mariadb.db.name` | Database name to create | `bitnami_magento` |
-| `mariadb.db.user` | Database user to create | `bn_magento` |
-| `mariadb.db.password` | Password for the database | _random 10 character long alphanumeric string_ |
-| `service.type` | Kubernetes Service type | `LoadBalancer` |
-| `service.port` | Service HTTP port | `80` |
-| `service.httpsPort` | Service HTTPS port | `443` |
-| `nodePorts.https` | Kubernetes https node port | `""` |
-| `service.externalTrafficPolicy` | Enable client source IP preservation | `Cluster` |
-| `service.nodePorts.http` | Kubernetes http node port | `""` |
-| `service.nodePorts.https` | Kubernetes https node port | `""` |
-| `service.loadBalancerIP` | `loadBalancerIP` for the Magento Service | `nil` |
-| `livenessProbe.enabled` | Turn on and off liveness probe | `true` |
-| `livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | `1000` |
-| `livenessProbe.periodSeconds` | How often to perform the probe | `10` |
-| `livenessProbe.timeoutSeconds` | When the probe times out | `5` |
-| `livenessProbe.successThreshold` | Minimum consecutive successes for the probe| `1` |
-| `livenessProbe.failureThreshold` | Minimum consecutive failures for the probe | `6` |
-| `readinessProbe.enabled` | Turn on and off readiness probe | `true` |
-| `readinessProbe.initialDelaySeconds` | Delay before readiness probe is initiated | `30` |
-| `readinessProbe.periodSeconds` | How often to perform the probe | `5` |
-| `readinessProbe.timeoutSeconds` | When the probe times out | `3` |
-| `readinessProbe.successThreshold` | Minimum consecutive successes for the probe| `1` |
-| `readinessProbe.failureThreshold` | Minimum consecutive failures for the probe | `3` |
-| `persistence.enabled` | Enable persistence using PVC | `true` |
-| `persistence.apache.storageClass` | PVC Storage Class for Apache volume | `nil` (uses alpha storage annotation) |
-| `persistence.apache.accessMode` | PVC Access Mode for Apache volume | `ReadWriteOnce` |
-| `persistence.apache.size` | PVC Storage Request for Apache volume | `1Gi` |
-| `persistence.magento.storageClass` | PVC Storage Class for Magento volume | `nil` (uses alpha storage annotation) |
-| `persistence.magento.accessMode` | PVC Access Mode for Magento volume | `ReadWriteOnce` |
-| `persistence.magento.size` | PVC Storage Request for Magento volume | `8Gi` |
-| `resources` | CPU/Memory resource requests/limits | Memory: `512Mi`, CPU: `300m` |
-| `podAnnotations` | Pod annotations | `{}` |
-| `metrics.enabled` | Start a side-car prometheus exporter | `false` |
-| `metrics.image.registry` | Apache exporter image registry | `docker.io` |
-| `metrics.image.repository` | Apache exporter image name | `lusotycoon/apache-exporter` |
-| `metrics.image.tag` | Apache exporter image tag | `v0.5.0` |
-| `metrics.image.pullPolicy` | Image pull policy | `IfNotPresent` |
-| `metrics.image.pullSecrets` | Specify docker-registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) |
-| `metrics.podAnnotations` | Additional annotations for Metrics exporter pod | `{prometheus.io/scrape: "true", prometheus.io/port: "9117"}` |
-| `metrics.resources` | Exporter resource requests/limit | {} |
+| Parameter | Description | Default |
+| ------------------------------------ | ------------------------------------------------------------------------------------ | ------------------------------------------------------------ |
+| `global.imageRegistry` | Global Docker image registry | `nil` |
+| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) |
+| `image.registry` | Magento image registry | `docker.io` |
+| `image.repository` | Magento Image name | `bitnami/magento` |
+| `image.tag` | Magento Image tag | `{VERSION}` |
+| `image.debug` | Specify if debug values should be set | `false` |
+| `image.pullPolicy` | Image pull policy | `Always` if `imageTag` is `latest`, else `IfNotPresent` |
+| `image.pullSecrets` | Specify docker-registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) |
+| `magentoHost` | Magento host to create application URLs | `nil` |
+| `magentoLoadBalancerIP` | `loadBalancerIP` for the magento Service | `nil` |
+| `magentoUsername` | User of the application | `user` |
+| `magentoPassword` | Application password | _random 10 character long alphanumeric string_ |
+| `magentoEmail` | Admin email | `user@example.com` |
+| `magentoFirstName` | Magento Admin First Name | `FirstName` |
+| `magentoLastName` | Magento Admin Last Name | `LastName` |
+| `magentoMode` | Magento mode | `developer` |
+| `magentoAdminUri` | Magento prefix to access Magento Admin | `admin` |
+| `allowEmptyPassword` | Allow DB blank passwords | `yes` |
+| `ingress.enabled` | Enable ingress controller resource | `false` |
+| `ingress.annotations` | Ingress annotations | `[]` |
+| `ingress.certManager` | Add annotations for cert-manager | `false` |
+| `ingress.hosts[0].name` | Hostname to your Magento installation | `magento.local` |
+| `ingress.hosts[0].path` | Path within the url structure | `/` |
+| `ingress.hosts[0].tls` | Utilize TLS backend in ingress | `false` |
+| `ingress.hosts[0].tlsHosts` | Array of TLS hosts for ingress record (defaults to `ingress.hosts[0].name` if `nil`) | `nil` |
+| `ingress.hosts[0].tlsSecret` | TLS Secret (certificates) | `magento.local-tls-secret` |
+| `ingress.secrets[0].name` | TLS Secret Name | `nil` |
+| `ingress.secrets[0].certificate` | TLS Secret Certificate | `nil` |
+| `ingress.secrets[0].key` | TLS Secret Key | `nil` |
+| `externalDatabase.host` | Host of the external database | `nil` |
+| `externalDatabase.port` | Port of the external database | `3306` |
+| `externalDatabase.user` | Existing username in the external db | `bn_magento` |
+| `externalDatabase.password` | Password for the above username | `nil` |
+| `externalDatabase.database` | Name of the existing database | `bitnami_magento` |
+| `externalElasticsearch.host` | Host of the external elasticsearch server | `nil` |
+| `externalElasticsearch.port` | Port of the external elasticsearch server | `nil` |
+| `mariadb.enabled` | Whether to use the MariaDB chart | `true` |
+| `mariadb.rootUser.password` | MariaDB admin password | `nil` |
+| `mariadb.db.name` | Database name to create | `bitnami_magento` |
+| `mariadb.db.user` | Database user to create | `bn_magento` |
+| `mariadb.db.password` | Password for the database | _random 10 character long alphanumeric string_ |
+| `elasticsearch.enabled` | Use the Elasticsearch chart as search engine | `false` |
+| `service.type` | Kubernetes Service type | `LoadBalancer` |
+| `service.port` | Service HTTP port | `80` |
+| `service.httpsPort` | Service HTTPS port | `443` |
+| `nodePorts.https` | Kubernetes https node port | `""` |
+| `service.externalTrafficPolicy` | Enable client source IP preservation | `Cluster` |
+| `service.nodePorts.http` | Kubernetes http node port | `""` |
+| `service.nodePorts.https` | Kubernetes https node port | `""` |
+| `service.loadBalancerIP` | `loadBalancerIP` for the Magento Service | `nil` |
+| `livenessProbe.enabled` | Turn on and off liveness probe | `true` |
+| `livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | `1000` |
+| `livenessProbe.periodSeconds` | How often to perform the probe | `10` |
+| `livenessProbe.timeoutSeconds` | When the probe times out | `5` |
+| `livenessProbe.successThreshold` | Minimum consecutive successes for the probe | `1` |
+| `livenessProbe.failureThreshold` | Minimum consecutive failures for the probe | `6` |
+| `readinessProbe.enabled` | Turn on and off readiness probe | `true` |
+| `readinessProbe.initialDelaySeconds` | Delay before readiness probe is initiated | `30` |
+| `readinessProbe.periodSeconds` | How often to perform the probe | `5` |
+| `readinessProbe.timeoutSeconds` | When the probe times out | `3` |
+| `readinessProbe.successThreshold` | Minimum consecutive successes for the probe | `1` |
+| `readinessProbe.failureThreshold` | Minimum consecutive failures for the probe | `3` |
+| `persistence.enabled` | Enable persistence using PVC | `true` |
+| `persistence.apache.storageClass` | PVC Storage Class for Apache volume | `nil` (uses alpha storage annotation) |
+| `persistence.apache.accessMode` | PVC Access Mode for Apache volume | `ReadWriteOnce` |
+| `persistence.apache.size` | PVC Storage Request for Apache volume | `1Gi` |
+| `persistence.magento.storageClass` | PVC Storage Class for Magento volume | `nil` (uses alpha storage annotation) |
+| `persistence.magento.accessMode` | PVC Access Mode for Magento volume | `ReadWriteOnce` |
+| `persistence.magento.size` | PVC Storage Request for Magento volume | `8Gi` |
+| `resources` | CPU/Memory resource requests/limits | Memory: `512Mi`, CPU: `300m` |
+| `podAnnotations` | Pod annotations | `{}` |
+| `metrics.enabled` | Start a side-car prometheus exporter | `false` |
+| `metrics.image.registry` | Apache exporter image registry | `docker.io` |
+| `metrics.image.repository` | Apache exporter image name | `lusotycoon/apache-exporter` |
+| `metrics.image.tag` | Apache exporter image tag | `v0.5.0` |
+| `metrics.image.pullPolicy` | Image pull policy | `IfNotPresent` |
+| `metrics.image.pullSecrets` | Specify docker-registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) |
+| `metrics.podAnnotations` | Additional annotations for Metrics exporter pod | `{prometheus.io/scrape: "true", prometheus.io/port: "9117"}` |
+| `metrics.resources` | Exporter resource requests/limits | `{}` |
The above parameters map to the env variables defined in [bitnami/magento](http://github.com/bitnami/bitnami-docker-magento). For more information please refer to the [bitnami/magento](http://github.com/bitnami/bitnami-docker-magento) image documentation.
+> **Note**:
+>
+> Setting `elasticsearch.enabled` to true will launch seven more pods by default. Use it with caution.
+
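+For example, the bundled Elasticsearch can be enabled at install time (using a hypothetical release name `my-release`):
+
+```console
+$ helm install --name my-release stable/magento --set elasticsearch.enabled=true
+```
+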
> **Note**:
>
> For Magento to function correctly, you should specify the `magentoHost` parameter to specify the FQDN (recommended) or the public IP address of the Magento service.
@@ -155,6 +175,19 @@ The [Bitnami Magento](https://github.com/bitnami/bitnami-docker-magento) image s
## Upgrading
+### To 5.0.0
+
+Manual intervention is needed if you want to configure Elasticsearch 6 as Magento's search engine.
+
+[Follow the Magento documentation](https://devdocs.magento.com/guides/v2.3/config-guide/elasticsearch/configure-magento.html) in order to configure Elasticsearch, setting **Search Engine** to **Elasticsearch 6.0+**. If using the Elasticsearch server included in this chart, `hostname` and `port` can be obtained with the following commands:
+
+```console
+$ kubectl get svc -l app=elasticsearch,component=client,release=RELEASE_NAME -o jsonpath="{.items[0].metadata.name}"
+$ kubectl get svc -l app=elasticsearch,component=client,release=RELEASE_NAME -o jsonpath="{.items[0].spec.ports[0].port}"
+```
+
+Where `RELEASE_NAME` is the name of the release. Use `helm list` to find it.
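+
+For example, assuming a release named `my-release` (replace it with your own), the values can be captured into shell variables for later use:
+
+```console
+$ ES_HOST=$(kubectl get svc -l app=elasticsearch,component=client,release=my-release -o jsonpath="{.items[0].metadata.name}")
+$ ES_PORT=$(kubectl get svc -l app=elasticsearch,component=client,release=my-release -o jsonpath="{.items[0].spec.ports[0].port}")
+```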
+
### To 3.0.0
Backwards compatibility is not guaranteed unless you modify the labels used on the chart's deployments.
@@ -163,3 +196,4 @@ Use the workaround below to upgrade from versions previous to 3.0.0. The followi
```console
$ kubectl patch deployment magento-magento --type=json -p='[{"op": "remove", "path": "/spec/selector/matchLabels/chart"}]'
$ kubectl delete statefulset magento-mariadb --cascade=false
+```
diff --git a/stable/magento/requirements.lock b/stable/magento/requirements.lock
index d32553d7ba2a..46320626aba6 100644
--- a/stable/magento/requirements.lock
+++ b/stable/magento/requirements.lock
@@ -1,6 +1,9 @@
dependencies:
- name: mariadb
repository: https://kubernetes-charts.storage.googleapis.com/
- version: 5.2.3
-digest: sha256:0593b73b2163fbbbae061de1aa2b8280d43f8a423a91e1c7375c0b6c86784b1c
-generated: 2018-12-11T12:41:27.526645039Z
+ version: 5.11.3
+- name: elasticsearch
+ repository: https://kubernetes-charts.storage.googleapis.com/
+ version: 1.26.2
+digest: sha256:b8276d2d458462fb87b222b5a45fa01461f11140acce57ccd1878b80c9eb7991
+generated: 2019-05-20T16:42:21.321514+02:00
diff --git a/stable/magento/requirements.yaml b/stable/magento/requirements.yaml
index a828b3769413..e5ffa232dc50 100644
--- a/stable/magento/requirements.yaml
+++ b/stable/magento/requirements.yaml
@@ -3,3 +3,7 @@ dependencies:
version: 5.x.x
repository: https://kubernetes-charts.storage.googleapis.com/
condition: mariadb.enabled
+- name: elasticsearch
+ version: 1.x.x
+ repository: https://kubernetes-charts.storage.googleapis.com/
+ condition: elasticsearch.enabled
diff --git a/stable/magento/templates/NOTES.txt b/stable/magento/templates/NOTES.txt
index 65486ca54780..143f2d8e71c1 100644
--- a/stable/magento/templates/NOTES.txt
+++ b/stable/magento/templates/NOTES.txt
@@ -32,14 +32,14 @@ host. To configure Magento with the URL of your service:
{{- if .Values.mariadb.enabled }}
- helm upgrade {{ .Release.Name }} stable/magento \
- --set magentoHost=$APP_HOST,magentoPassword=$APP_PASSWORD{{ if .Values.mariadb.mariadbRootPassword }},mariadb.mariadbRootPassword=$DATABASE_ROOT_PASSWORD{{ end }},mariadb.db.password=$APP_DATABASE_PASSWORD
+ helm upgrade {{ .Release.Name }} stable/{{ .Chart.Name }} \
+ --set magentoHost=$APP_HOST,magentoPassword=$APP_PASSWORD{{ if .Values.mariadb.mariadbRootPassword }},mariadb.mariadbRootPassword=$DATABASE_ROOT_PASSWORD{{ end }},mariadb.db.password=$APP_DATABASE_PASSWORD{{- if .Values.global }}{{- if .Values.global.imagePullSecrets }},global.imagePullSecrets={{ .Values.global.imagePullSecrets }}{{- end }}{{- end }}
{{- else }}
## PLEASE UPDATE THE EXTERNAL DATABASE CONNECTION PARAMETERS IN THE FOLLOWING COMMAND AS NEEDED ##
- helm upgrade {{ .Release.Name }} stable/magento \
- --set magentoPassword=$APP_PASSWORD,magentoHost=$APP_HOST,service.type={{ .Values.service.type }},mariadb.enabled=false{{- if not (empty .Values.externalDatabase.host) }},externalDatabase.host={{ .Values.externalDatabase.host }}{{- end }}{{- if not (empty .Values.externalDatabase.user) }},externalDatabase.user={{ .Values.externalDatabase.user }}{{- end }}{{- if not (empty .Values.externalDatabase.password) }},externalDatabase.password={{ .Values.externalDatabase.password }}{{- end }}{{- if not (empty .Values.externalDatabase.database) }},externalDatabase.database={{ .Values.externalDatabase.database }}{{- end }}
+ helm upgrade {{ .Release.Name }} stable/{{ .Chart.Name }} \
+ --set magentoPassword=$APP_PASSWORD,magentoHost=$APP_HOST,service.type={{ .Values.service.type }},mariadb.enabled=false{{- if not (empty .Values.externalDatabase.host) }},externalDatabase.host={{ .Values.externalDatabase.host }}{{- end }}{{- if not (empty .Values.externalDatabase.user) }},externalDatabase.user={{ .Values.externalDatabase.user }}{{- end }}{{- if not (empty .Values.externalDatabase.password) }},externalDatabase.password={{ .Values.externalDatabase.password }}{{- end }}{{- if not (empty .Values.externalDatabase.database) }},externalDatabase.database={{ .Values.externalDatabase.database }}{{- end }}{{- if .Values.global }}{{- if .Values.global.imagePullSecrets }},global.imagePullSecrets={{ .Values.global.imagePullSecrets }}{{- end }}{{- end }}
{{- end }}
{{- else -}}
@@ -93,6 +93,6 @@ host. To configure Magento to use and external database host:
## PLEASE UPDATE THE EXTERNAL DATABASE CONNECTION PARAMETERS IN THE FOLLOWING COMMAND AS NEEDED ##
- helm upgrade {{ .Release.Name }} stable/magento \
- --set magentoPassword=$APP_PASSWORD,magentoHost=$APP_HOST,service.type={{ .Values.service.type }},mariadb.enabled=false{{- if not (empty .Values.externalDatabase.user) }},externalDatabase.user={{ .Values.externalDatabase.user }}{{- end }}{{- if not (empty .Values.externalDatabase.password) }},externalDatabase.password={{ .Values.externalDatabase.password }}{{- end }}{{- if not (empty .Values.externalDatabase.database) }},externalDatabase.database={{ .Values.externalDatabase.database }}{{- end }},externalDatabase.host=YOUR_EXTERNAL_DATABASE_HOST
+ helm upgrade {{ .Release.Name }} stable/{{ .Chart.Name }} \
+ --set magentoPassword=$APP_PASSWORD,magentoHost=$APP_HOST,service.type={{ .Values.service.type }},mariadb.enabled=false{{- if not (empty .Values.externalDatabase.user) }},externalDatabase.user={{ .Values.externalDatabase.user }}{{- end }}{{- if not (empty .Values.externalDatabase.password) }},externalDatabase.password={{ .Values.externalDatabase.password }}{{- end }}{{- if not (empty .Values.externalDatabase.database) }},externalDatabase.database={{ .Values.externalDatabase.database }}{{- end }},externalDatabase.host=YOUR_EXTERNAL_DATABASE_HOST{{- if .Values.global }}{{- if .Values.global.imagePullSecrets }},global.imagePullSecrets={{ .Values.global.imagePullSecrets }}{{- end }}{{- end }}
{{- end }}
diff --git a/stable/magento/templates/_helpers.tpl b/stable/magento/templates/_helpers.tpl
index 2272ad0f4972..68f09ed032c9 100644
--- a/stable/magento/templates/_helpers.tpl
+++ b/stable/magento/templates/_helpers.tpl
@@ -39,6 +39,14 @@ We truncate at 63 chars because some Kubernetes name fields are limited to this
{{- printf "%s-%s" .Release.Name "mariadb" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
+{{/*
+Create a default fully qualified app name.
+We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
+*/}}
+{{- define "magento.elasticsearch.fullname" -}}
+{{- printf "%s-%s-client" .Release.Name "elasticsearch" | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+
{{/*
Get the user defined LoadBalancerIP for this release.
Note, returns 127.0.0.1 if using ClusterIP.
@@ -86,9 +94,57 @@ Also, we can't use a single if because lazy evaluation is not an option
{{/*
Return the proper image name (for the metrics image)
*/}}
-{{- define "metrics.image" -}}
-{{- $registryName := .Values.metrics.image.registry -}}
+{{- define "magento.metrics.image" -}}
+{{- $registryName := .Values.metrics.image.registry -}}
{{- $repositoryName := .Values.metrics.image.repository -}}
{{- $tag := .Values.metrics.image.tag | toString -}}
-{{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+{{/*
+Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
+but Helm 2.9 and 2.10 doesn't support it, so we need to implement this if-else logic.
+Also, we can't use a single if because lazy evaluation is not an option
+*/}}
+{{- if .Values.global }}
+ {{- if .Values.global.imageRegistry }}
+ {{- printf "%s/%s:%s" .Values.global.imageRegistry $repositoryName $tag -}}
+ {{- else -}}
+ {{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+ {{- end -}}
+{{- else -}}
+ {{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Return the proper Docker Image Registry Secret Names
+*/}}
+{{- define "magento.imagePullSecrets" -}}
+{{/*
+Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
+but Helm 2.9 and 2.10 does not support it, so we need to implement this if-else logic.
+Also, we can not use a single if because lazy evaluation is not an option
+*/}}
+{{- if .Values.global }}
+{{- if .Values.global.imagePullSecrets }}
+imagePullSecrets:
+{{- range .Values.global.imagePullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- else if or .Values.image.pullSecrets .Values.metrics.image.pullSecrets }}
+imagePullSecrets:
+{{- range .Values.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- range .Values.metrics.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- end -}}
+{{- else if or .Values.image.pullSecrets .Values.metrics.image.pullSecrets }}
+imagePullSecrets:
+{{- range .Values.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- range .Values.metrics.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- end -}}
{{- end -}}
diff --git a/stable/magento/templates/deployment.yaml b/stable/magento/templates/deployment.yaml
index d3748050e41e..5f657f07c243 100644
--- a/stable/magento/templates/deployment.yaml
+++ b/stable/magento/templates/deployment.yaml
@@ -29,12 +29,7 @@ spec:
{{- end }}
{{- end }}
spec:
- {{- if .Values.image.pullSecrets }}
- imagePullSecrets:
- {{- range .Values.image.pullSecrets }}
- - name: {{ . }}
- {{- end}}
- {{- end }}
+{{- include "magento.imagePullSecrets" . | indent 6 }}
hostAliases:
- ip: "127.0.0.1"
hostnames:
@@ -44,6 +39,12 @@ spec:
image: {{ template "magento.image" . }}
imagePullPolicy: {{ .Values.image.pullPolicy | quote }}
env:
+      {{- if .Values.image.debug }}
+ - name: BASH_DEBUG
+ value: "1"
+ - name: NAMI_DEBUG
+ value: "1"
+ {{- end }}
- name: MARIADB_HOST
{{- if .Values.mariadb.enabled }}
value: {{ template "magento.mariadb.fullname" . }}
@@ -56,6 +57,22 @@ spec:
{{- else }}
value: {{ .Values.externalDatabase.port | quote }}
{{- end }}
+ - name: ELASTICSEARCH_HOST
+ {{- if .Values.elasticsearch.enabled }}
+ value: {{ template "magento.elasticsearch.fullname" . }}
+ {{- else if .Values.externalElasticsearch.host }}
+ value: {{ .Values.externalElasticsearch.host | quote }}
+ {{- else }}
+ value: ""
+ {{- end }}
+ - name: ELASTICSEARCH_PORT_NUMBER
+ {{- if .Values.elasticsearch.enabled }}
+ value: "9200"
+ {{- else if .Values.externalElasticsearch.port }}
+ value: {{ .Values.externalElasticsearch.port | quote }}
+ {{- else }}
+ value: ""
+ {{- end }}
- name: MAGENTO_DATABASE_NAME
{{- if .Values.mariadb.enabled }}
value: {{ .Values.mariadb.db.name | quote }}
@@ -140,7 +157,7 @@ spec:
mountPath: /bitnami/apache
{{- if .Values.metrics.enabled }}
- name: metrics
- image: {{ template "metrics.image" . }}
+ image: {{ template "magento.metrics.image" . }}
imagePullPolicy: {{ .Values.metrics.image.pullPolicy | quote }}
command: [ '/bin/apache_exporter', '-scrape_uri', 'http://status.localhost:80/server-status/?auto']
ports:
diff --git a/stable/magento/templates/ingress.yaml b/stable/magento/templates/ingress.yaml
new file mode 100644
index 000000000000..798b82a450cc
--- /dev/null
+++ b/stable/magento/templates/ingress.yaml
@@ -0,0 +1,43 @@
+{{- if .Values.ingress.enabled }}
+apiVersion: extensions/v1beta1
+kind: Ingress
+metadata:
+ name: {{ template "magento.fullname" . }}
+ labels:
+ app: "{{ template "magento.fullname" . }}"
+ chart: "{{ template "magento.chart" . }}"
+ release: {{ .Release.Name | quote }}
+ heritage: {{ .Release.Service | quote }}
+ annotations:
+ {{- if .Values.ingress.certManager }}
+ kubernetes.io/tls-acme: "true"
+ {{- end }}
+ {{- range $key, $value := .Values.ingress.annotations }}
+ {{ $key }}: {{ $value | quote }}
+ {{- end }}
+spec:
+ rules:
+ {{- range .Values.ingress.hosts }}
+ - host: {{ .name }}
+ http:
+ paths:
+ - path: {{ default "/" .path }}
+ backend:
+ serviceName: {{ template "magento.fullname" $ }}
+ servicePort: http
+ {{- end }}
+ tls:
+ {{- range .Values.ingress.hosts }}
+ {{- if .tls }}
+ - hosts:
+ {{- if .tlsHosts }}
+ {{- range $host := .tlsHosts }}
+ - {{ $host }}
+ {{- end }}
+ {{- else }}
+ - {{ .name }}
+ {{- end }}
+ secretName: {{ .tlsSecret }}
+ {{- end }}
+ {{- end }}
+{{- end }}
diff --git a/stable/magento/values-production.yaml b/stable/magento/values-production.yaml
new file mode 100644
index 000000000000..8c06d990a30b
--- /dev/null
+++ b/stable/magento/values-production.yaml
@@ -0,0 +1,325 @@
+## Global Docker image parameters
+## Please note that this will override the image parameters, including dependencies, configured to use the global value
+## Current available global Docker image parameters: imageRegistry and imagePullSecrets
+##
+# global:
+# imageRegistry: myRegistryName
+# imagePullSecrets:
+# - myRegistryKeySecretName
+
+## Bitnami Magento image version
+## ref: https://hub.docker.com/r/bitnami/magento/tags/
+##
+image:
+ registry: docker.io
+ repository: bitnami/magento
+ tag: 2.3.1
+  ## Set to true if you would like to see extra information in the logs
+  ## It turns on BASH and NAMI debugging in minideb
+ ## ref: https://github.com/bitnami/minideb-extras/#turn-on-bash-debugging
+ ##
+ debug: false
+  ## Specify an imagePullPolicy
+ ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
+ ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
+ ##
+ pullPolicy: IfNotPresent
+ ## Optionally specify an array of imagePullSecrets.
+ ## Secrets must be manually created in the namespace.
+ ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
+ ##
+ # pullSecrets:
+ # - myRegistryKeySecretName
+
+## Magento host to create application URLs
+## ref: https://github.com/bitnami/bitnami-docker-magento#configuration
+##
+# magentoHost:
+
+## loadBalancerIP for the Magento Service (optional, cloud specific)
+## ref: http://kubernetes.io/docs/user-guide/services/#type-loadbalancer
+##
+# magentoLoadBalancerIP:
+
+## User of the application
+## ref: https://github.com/bitnami/bitnami-docker-magento#configuration
+##
+magentoUsername: user
+
+## Application password
+## Defaults to a random 10-character alphanumeric string if not set
+## ref: https://github.com/bitnami/bitnami-docker-magento#configuration
+##
+# magentoPassword:
+
+## Admin email
+## ref: https://github.com/bitnami/bitnami-docker-magento#configuration
+##
+magentoEmail: user@example.com
+
+## Prefix for Magento Admin
+## ref: https://github.com/bitnami/bitnami-docker-magento#configuration
+##
+magentoAdminUri: admin
+
+## First Name
+## ref: https://github.com/bitnami/bitnami-docker-magento#configuration
+##
+magentoFirstName: FirstName
+
+## Last Name
+## ref: https://github.com/bitnami/bitnami-docker-magento#configuration
+##
+magentoLastName: LastName
+
+## Mode
+## ref: https://github.com/bitnami/bitnami-docker-magento#configuration
+##
+magentoMode: developer
+
+## Set to `yes` to allow the container to be started with blank passwords
+## ref: https://github.com/bitnami/bitnami-docker-magento#environment-variables
+allowEmptyPassword: "yes"
+
+##
+## External database configuration
+##
+externalDatabase:
+ ## Database host
+ host:
+
+ ## Database port
+ port: 3306
+
+ ## Database user
+ user: bn_magento
+
+ ## Database password
+ password:
+
+ ## Database name
+ database: bitnami_magento
+
+##
+## External elasticsearch configuration
+##
+externalElasticsearch:
+ ## Elasticsearch host
+ host:
+
+ ## Elasticsearch port
+ port:
+
+##
+## MariaDB chart configuration
+##
+## https://github.com/helm/charts/blob/master/stable/mariadb/values.yaml
+##
+mariadb:
+  ## Whether to deploy a MariaDB server to satisfy the application's database requirements. To use an external database, set this to false and configure the externalDatabase parameters
+ enabled: true
+ ## Disable MariaDB replication
+ replication:
+ enabled: false
+
+ ## Create a database and a database user
+ ## ref: https://github.com/bitnami/bitnami-docker-mariadb/blob/master/README.md#creating-a-database-user-on-first-run
+ ##
+ db:
+ name: bitnami_magento
+ user: bn_magento
+    ## If the password is not specified, mariadb will generate a random password
+ ##
+ # password:
+
+ ## MariaDB admin password
+ ## ref: https://github.com/bitnami/bitnami-docker-mariadb/blob/master/README.md#setting-the-root-password-on-first-run
+ ##
+ # rootUser:
+ # password:
+
+ ## Enable persistence using Persistent Volume Claims
+ ## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
+ ##
+ master:
+ persistence:
+ enabled: true
+ ## mariadb data Persistent Volume Storage Class
+ ## If defined, storageClassName:
+ ## If set to "-", storageClassName: "", which disables dynamic provisioning
+ ## If undefined (the default) or set to null, no storageClassName spec is
+ ## set, choosing the default provisioner. (gp2 on AWS, standard on
+ ## GKE, AWS & OpenStack)
+ ##
+ # storageClass: "-"
+ accessMode: ReadWriteOnce
+ size: 8Gi
+
+##
+## Elasticsearch chart configuration
+##
+## https://github.com/helm/charts/blob/master/stable/elasticsearch/values.yaml
+##
+elasticsearch:
+  ## Whether to deploy an Elasticsearch server to use as Magento's search engine
+  ## To use an external server, set this to false and configure the externalElasticsearch parameters
+ enabled: true
+
+## Kubernetes configuration
+## For minikube, set this to NodePort; elsewhere, use LoadBalancer
+##
+service:
+ type: LoadBalancer
+ # HTTP Port
+ port: 80
+ # HTTPS Port
+ httpsPort: 443
+ ##
+ ## loadBalancerIP:
+ ## nodePorts:
+ ## http:
+ ## https:
+ nodePorts:
+ http: ""
+ https: ""
+ ## Enable client source IP preservation
+ ## ref http://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
+ ##
+ externalTrafficPolicy: Cluster
+
+## Configure liveness and readiness probes
+## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes)
+##
+livenessProbe:
+ enabled: true
+ initialDelaySeconds: 1000
+ periodSeconds: 10
+ timeoutSeconds: 5
+ successThreshold: 1
+ failureThreshold: 6
+readinessProbe:
+ enabled: true
+ initialDelaySeconds: 30
+ periodSeconds: 5
+ timeoutSeconds: 3
+ successThreshold: 1
+ failureThreshold: 3
+
+## Enable persistence using Persistent Volume Claims
+## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
+##
+persistence:
+ enabled: true
+ apache:
+ ## apache data Persistent Volume Storage Class
+ ## If defined, storageClassName:
+ ## If set to "-", storageClassName: "", which disables dynamic provisioning
+ ## If undefined (the default) or set to null, no storageClassName spec is
+ ## set, choosing the default provisioner. (gp2 on AWS, standard on
+ ## GKE, AWS & OpenStack)
+ ##
+ # storageClass: "-"
+ accessMode: ReadWriteOnce
+ size: 1Gi
+ magento:
+ ## magento data Persistent Volume Storage Class
+ ## If defined, storageClassName:
+ ## If set to "-", storageClassName: "", which disables dynamic provisioning
+ ## If undefined (the default) or set to null, no storageClassName spec is
+ ## set, choosing the default provisioner. (gp2 on AWS, standard on
+ ## GKE, AWS & OpenStack)
+ ##
+ # storageClass: "-"
+ accessMode: ReadWriteOnce
+ size: 8Gi
+
+## Configure resource requests and limits
+## ref: http://kubernetes.io/docs/user-guide/compute-resources/
+##
+resources:
+ requests:
+ memory: 512Mi
+ cpu: 300m
+
+## Pod annotations
+## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
+##
+podAnnotations: {}
+
+## Configure the ingress resource that allows you to access the
+## Magento installation. Set up the URL
+## ref: http://kubernetes.io/docs/user-guide/ingress/
+##
+ingress:
+ ## Set to true to enable ingress record generation
+ enabled: false
+
+ ## Set this to true in order to add the corresponding annotations for cert-manager
+ certManager: false
+
+ ## Ingress annotations done as key:value pairs
+ ## For a full list of possible ingress annotations, please see
+ ## ref: https://github.com/kubernetes/ingress-nginx/blob/master/docs/annotations.md
+ ##
+ ## If tls is set to true, annotation ingress.kubernetes.io/secure-backends: "true" will automatically be set
+ ## If certManager is set to true, annotation kubernetes.io/tls-acme: "true" will automatically be set
+ annotations:
+ # kubernetes.io/ingress.class: nginx
+
+ ## The list of hostnames to be covered with this ingress record.
+ ## Most likely this will be just one host, but in the event more hosts are needed, this is an array
+ hosts:
+ - name: magento.local
+ path: /
+
+ ## Set this to true in order to enable TLS on the ingress record
+ tls: false
+
+ ## Optionally specify the TLS hosts for the ingress record
+ ## Useful when the Ingress controller supports www-redirection
+ ## If not specified, the above host name will be used
+ # tlsHosts:
+ # - www.magento.local
+ # - magento.local
+
+ ## If TLS is set to true, you must declare what secret will store the key/certificate for TLS
+ tlsSecret: magento.local-tls
+
+ secrets:
+ ## If you're providing your own certificates, please use this to add the certificates as secrets
+ ## key and certificate should start with -----BEGIN CERTIFICATE----- or
+ ## -----BEGIN RSA PRIVATE KEY-----
+ ##
+ ## name should line up with a tlsSecret set further up
+ ## If you're using cert-manager, this is unneeded, as it will create the secret for you if it is not set
+ ##
+ ## It is also possible to create and manage the certificates outside of this helm chart
+ ## Please see README.md for more information
+ # - name: magento.local-tls
+ # key:
+ # certificate:
+
+
+## Prometheus Exporter / Metrics
+##
+metrics:
+ enabled: true
+ image:
+ registry: docker.io
+ repository: lusotycoon/apache-exporter
+ tag: v0.5.0
+ pullPolicy: IfNotPresent
+ ## Optionally specify an array of imagePullSecrets.
+ ## Secrets must be manually created in the namespace.
+ ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
+ ##
+ # pullSecrets:
+ # - myRegistryKeySecretName
+ ## Metrics exporter pod Annotation and Labels
+ podAnnotations:
+ prometheus.io/scrape: "true"
+ prometheus.io/port: "9117"
+ ## Metrics exporter resource requests and limits
+ ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
+ ##
+ # resources: {}
diff --git a/stable/magento/values.yaml b/stable/magento/values.yaml
index fab1edb21265..17a73c4cbc7c 100644
--- a/stable/magento/values.yaml
+++ b/stable/magento/values.yaml
@@ -1,8 +1,11 @@
-## Global Docker image registry
-## Please, note that this will override the image registry for all the images, including dependencies, configured to use the global value
+## Global Docker image parameters
+## Please note that this will override the image parameters, including dependencies, configured to use the global value
+## Current available global Docker image parameters: imageRegistry and imagePullSecrets
##
# global:
-# imageRegistry:
+# imageRegistry: myRegistryName
+# imagePullSecrets:
+# - myRegistryKeySecretName
## Bitnami Magento image version
## ref: https://hub.docker.com/r/bitnami/magento/tags/
@@ -10,7 +13,12 @@
image:
registry: docker.io
repository: bitnami/magento
- tag: 2.3.0
+ tag: 2.3.1
+  ## Set to true if you would like to see extra information in the logs
+  ## It turns on BASH and NAMI debugging in minideb
+ ## ref: https://github.com/bitnami/minideb-extras/#turn-on-bash-debugging
+ ##
+ debug: false
## Specify a imagePullPolicy
## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
@@ -21,7 +29,7 @@ image:
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
- # - myRegistrKeySecretName
+ # - myRegistryKeySecretName
## Magento host to create application URLs
## ref: https://github.com/bitnami/bitnami-docker-magento#configuration
@@ -80,7 +88,7 @@ externalDatabase:
## Database host
host:
- ## Database host
+ ## Database port
port: 3306
## Database user
@@ -92,6 +100,16 @@ externalDatabase:
## Database name
database: bitnami_magento
+##
+## External elasticsearch configuration
+##
+externalElasticsearch:
+ ## Elasticsearch host
+ host:
+
+ ## Elasticsearch port
+ port:
+
##
## MariaDB chart configuration
##
@@ -137,6 +155,16 @@ mariadb:
accessMode: ReadWriteOnce
size: 8Gi
+##
+## Elasticsearch chart configuration
+##
+## https://github.com/helm/charts/blob/master/stable/elasticsearch/values.yaml
+##
+elasticsearch:
+  ## Whether to deploy an Elasticsearch server to use as Magento's search engine
+  ## To use an external server, set this to false and configure the externalElasticsearch parameters
+ enabled: false
+
## Kubernetes configuration
## For minikube, set this to NodePort, elsewhere use LoadBalancer
##
@@ -218,6 +246,60 @@ resources:
##
podAnnotations: {}
+## Configure the ingress resource that allows you to access the
+## Magento installation. Set up the URL
+## ref: http://kubernetes.io/docs/user-guide/ingress/
+##
+ingress:
+ ## Set to true to enable ingress record generation
+ enabled: false
+
+ ## Set this to true in order to add the corresponding annotations for cert-manager
+ certManager: false
+
+ ## Ingress annotations done as key:value pairs
+ ## For a full list of possible ingress annotations, please see
+ ## ref: https://github.com/kubernetes/ingress-nginx/blob/master/docs/annotations.md
+ ##
+ ## If tls is set to true, annotation ingress.kubernetes.io/secure-backends: "true" will automatically be set
+ ## If certManager is set to true, annotation kubernetes.io/tls-acme: "true" will automatically be set
+ annotations:
+ # kubernetes.io/ingress.class: nginx
+
+ ## The list of hostnames to be covered with this ingress record.
+ ## Most likely this will be just one host, but in the event more hosts are needed, this is an array
+ hosts:
+ - name: magento.local
+ path: /
+
+ ## Set this to true in order to enable TLS on the ingress record
+ tls: false
+
+ ## Optionally specify the TLS hosts for the ingress record
+ ## Useful when the Ingress controller supports www-redirection
+ ## If not specified, the above host name will be used
+ # tlsHosts:
+ # - www.magento.local
+ # - magento.local
+
+ ## If TLS is set to true, you must declare what secret will store the key/certificate for TLS
+ tlsSecret: magento.local-tls
+
+ secrets:
+ ## If you're providing your own certificates, please use this to add the certificates as secrets
+ ## key and certificate should start with -----BEGIN CERTIFICATE----- or
+ ## -----BEGIN RSA PRIVATE KEY-----
+ ##
+ ## name should line up with a tlsSecret set further up
+ ## If you're using cert-manager, this is unneeded, as it will create the secret for you if it is not set
+ ##
+ ## It is also possible to create and manage the certificates outside of this helm chart
+ ## Please see README.md for more information
+ # - name: magento.local-tls
+ # key:
+ # certificate:
+
+
## Prometheus Exporter / Metrics
##
metrics:
@@ -232,7 +314,7 @@ metrics:
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
- # - myRegistrKeySecretName
+ # - myRegistryKeySecretName
## Metrics exporter pod Annotation and Labels
podAnnotations:
prometheus.io/scrape: "true"
diff --git a/stable/magic-namespace/Chart.yaml b/stable/magic-namespace/Chart.yaml
index 4505215fc6a4..f3d53a7a12fe 100644
--- a/stable/magic-namespace/Chart.yaml
+++ b/stable/magic-namespace/Chart.yaml
@@ -1,6 +1,6 @@
apiVersion: v1
name: magic-namespace
-version: 0.3.0
+version: 0.5.2
appVersion: 2.8.1
home: https://github.com/kubernetes/charts/tree/master/stable/magic-namespace
description: Elegantly enables a Tiller per namespace in RBAC-enabled clusters
diff --git a/stable/magic-namespace/README.md b/stable/magic-namespace/README.md
index d01827eeeb35..e3c10b92fab9 100644
--- a/stable/magic-namespace/README.md
+++ b/stable/magic-namespace/README.md
@@ -140,6 +140,13 @@ reference the default `values.yaml` to understand further options.
| `tiller.role.type` | Identify the name of the `Role` or `ClusterRole` that will be referenced in the role binding for Tiller's service account. There is seldom any reason to override this. | `admin` |
| `tiller.includeService` | This deploys a service resource for Tiller. This is not generally needed. Please understand the security implications of this before overriding the default. | `false` |
| `tiller.onlyListenOnLocalhost` | This prevents Tiller from binding to `0.0.0.0`. This is generally advisable to close known Tiller-based attack vectors. Please understand the security implications of this before overriding the default. | `true` |
+| `tiller.storage` | The storage driver for Tiller to use. One of `configmap`, `memory`, or `secret` | `configmap` |
+| `tiller.tls.enabled` | Whether to enable TLS encryption between Helm and Tiller. Specify either `tiller.tls.secretName` to mount an existing secret, or `tiller.tls.ca`, `tiller.tls.cert` and `tiller.tls.key` to create a secret from the provided Base64-encoded values | `false` |
+| `tiller.tls.verify` | Whether to verify a remote Tiller certificate. | `true` |
+| `tiller.tls.secretName` | Mount an existing TLS secret into the Tiller container. The secret must include data keys: `ca.crt`, `tls.crt` and `tls.key` | `nil` |
+| `tiller.tls.ca` | Base64 encoded string to mount ca.crt into the Tiller container. This value requires `tiller.tls.cert` and `tiller.tls.key` to also be set. | `nil` |
+| `tiller.tls.cert` | Base64 encoded string to mount tls.crt into the Tiller container. This value requires `tiller.tls.ca` and `tiller.tls.key` to also be set. | `nil` |
+| `tiller.tls.key` | Base64 encoded string to mount tls.key into the Tiller container. This value requires `tiller.tls.ca` and `tiller.tls.cert` to also be set. | `nil` |
| `serviceAccounts` | An optional array of names of additional service account to create | `nil` |
| `roleBindings` | An optional array of objects that define role bindings | `nil` |
| `roleBindings[n].role.kind` | Identify the kind of role (`Role` or `ClusterRole`) to be used in the role binding | |
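+
+As an illustrative sketch (the certificate values below are placeholders, not real credentials), the TLS parameters above could be combined in a values fragment like:
+
+```yaml
+tiller:
+  tls:
+    enabled: true
+    verify: true
+    ca: <base64-encoded ca.crt>
+    cert: <base64-encoded tls.crt>
+    key: <base64-encoded tls.key>
+```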
diff --git a/stable/magic-namespace/templates/_helpers.tpl b/stable/magic-namespace/templates/_helpers.tpl
index 481470badec7..db7c4f3c6941 100644
--- a/stable/magic-namespace/templates/_helpers.tpl
+++ b/stable/magic-namespace/templates/_helpers.tpl
@@ -30,3 +30,14 @@ Create chart name and version as used by the chart label.
{{- define "magic-namespace.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
+
+{{/*
+Allow a custom secretName to be defined
+*/}}
+{{- define "magic-namespace.tillerTlsSecret" -}}
+{{- if .Values.tiller.tls.secretName -}}
+{{- .Values.tiller.tls.secretName }}
+{{- else -}}
+{{- template "magic-namespace.fullname" . }}-tiller-secret
+{{- end -}}
+{{- end -}}
diff --git a/stable/magic-namespace/templates/secret.yaml b/stable/magic-namespace/templates/secret.yaml
new file mode 100644
index 000000000000..808f366e906f
--- /dev/null
+++ b/stable/magic-namespace/templates/secret.yaml
@@ -0,0 +1,19 @@
+{{- if (and (.Values.tiller.tls.enabled) (not .Values.tiller.tls.secretName)) }}
+apiVersion: v1
+kind: Secret
+metadata:
+ name: {{ template "magic-namespace.tillerTlsSecret" . }}
+ {{- if hasKey .Values "namespace" }}
+ namespace: {{ .Values.namespace }}
+ {{- end }}
+ labels:
+ app: {{ template "magic-namespace.chart" . }}
+ chart: {{ .Chart.Name }}-{{ .Chart.Version }}
+ heritage: {{ .Release.Service }}
+ release: {{ .Release.Name }}
+type: Opaque
+data:
+ ca.crt: {{ required "You need to populate .Values.tiller.tls.ca with a Base64 encoded CA" .Values.tiller.tls.ca }}
+ tls.crt: {{ required "You need to populate .Values.tiller.tls.cert with a Base64 encoded cert" .Values.tiller.tls.cert }}
+  tls.key: {{ required "You need to populate .Values.tiller.tls.key with a Base64 encoded key" .Values.tiller.tls.key }}
+{{- end }}
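When `tiller.tls.secretName` is set, the template above is skipped and an existing secret is mounted instead. A hedged sketch of pre-creating one (secret name, namespace and file paths are placeholders; the data keys must be `ca.crt`, `tls.crt` and `tls.key`):

```bash
kubectl create secret generic my-tiller-tls \
  --namespace my-namespace \
  --from-file=ca.crt=./ca.crt \
  --from-file=tls.crt=./tls.crt \
  --from-file=tls.key=./tls.key
```

and then install with `--set tiller.tls.enabled=true --set tiller.tls.secretName=my-tiller-tls`.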
diff --git a/stable/magic-namespace/templates/tiller-deployment.yaml b/stable/magic-namespace/templates/tiller-deployment.yaml
index 47b46dbbc7a7..7002195731f5 100644
--- a/stable/magic-namespace/templates/tiller-deployment.yaml
+++ b/stable/magic-namespace/templates/tiller-deployment.yaml
@@ -30,7 +30,7 @@ spec:
spec:
serviceAccountName: tiller
containers:
- - name: {{ .Chart.Name }}
+ - name: tiller
image: "{{ .Values.tiller.image.repository }}:{{ .Values.tiller.image.tag }}"
imagePullPolicy: {{ .Values.tiller.image.pullPolicy }}
env:
@@ -42,8 +42,22 @@ spec:
{{- end }}
- name: TILLER_HISTORY_MAX
value: {{ quote .Values.tiller.maxHistory }}
+ {{- if .Values.tiller.tls.enabled }}
+ - name: TILLER_TLS_ENABLE
+ value: "1"
+ {{- if .Values.tiller.tls.verify }}
+ - name: TILLER_TLS_VERIFY
+ value: "1"
+ {{- end }}
+ - name: TILLER_TLS_CERTS
+ value: /etc/certs
+ {{- end }}
{{- if .Values.tiller.onlyListenOnLocalhost }}
- command: ["/tiller"]
+ command:
+ - "/tiller"
+ {{- if .Values.tiller.storage }}
+ - --storage={{ .Values.tiller.storage }}
+ {{- end }}
args: ["--listen=127.0.0.1:44134"]
{{- else }}
ports:
@@ -74,8 +88,21 @@ spec:
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
+ volumeMounts:
+ {{- if .Values.tiller.tls.enabled }}
+ - mountPath: /etc/certs
+ name: tiller-certs
+ readOnly: true
+ {{- end }}
resources:
{{ toYaml .Values.tiller.resources | indent 12 }}
+ volumes:
+ {{- if .Values.tiller.tls.enabled }}
+ - name: tiller-certs
+ secret:
+ defaultMode: 0644
+ secretName: {{ template "magic-namespace.tillerTlsSecret" . }}
+ {{- end }}
{{- with .Values.tiller.nodeSelector }}
nodeSelector:
{{ toYaml . | indent 8 }}
diff --git a/stable/magic-namespace/values.yaml b/stable/magic-namespace/values.yaml
index 71d53f73d0e4..04690aee9ea2 100644
--- a/stable/magic-namespace/values.yaml
+++ b/stable/magic-namespace/values.yaml
@@ -23,6 +23,28 @@ tiller:
maxHistory: 0
+ ## Storage driver to use. One of 'configmap', 'memory', or 'secret'
+ storage: configmap
+
+ tls:
+ ## Enable TLS encryption between Helm and Tiller
+ enabled: false
+
+ ## Verify remote certificate
+ verify: true
+
+  ## A custom secret to mount instead of specifying the Base64-encoded values below
+ secretName: ""
+
+ ## Specify a Base64 encoded CA
+ # ca: "Zm9vCg=="
+
+ ## Specify a Base64 encoded cert
+ # cert: "Zm9vCg=="
+
+ ## Specify a Base64 encoded private key
+ # key: "Zm9vCg=="
+
## The following options specify the Role or ClusterRole to assign to the
## tiller service account. The ClusterRole "admin" is usually pre-defined in
## RBAC-enabled clusters and will allow administration of a namespace by
@@ -66,9 +88,9 @@ tiller:
affinity: {}
## Optional additional ServiceAccounts
-serviceAccounts:
-- some-service-account
-- another-service-account
+serviceAccounts: []
+# - some-service-account
+# - another-service-account
## Optional additional RoleBindings. It is a good idea to specify at least one
## to grant administrative permissions to a user or group.
diff --git a/stable/mailhog/Chart.yaml b/stable/mailhog/Chart.yaml
index f92eb845c1ae..27d8fe2a93e4 100644
--- a/stable/mailhog/Chart.yaml
+++ b/stable/mailhog/Chart.yaml
@@ -1,8 +1,8 @@
apiVersion: v1
-description: An e-mail testing tool for developers
+description: DEPRECATED - An e-mail testing tool for developers
name: mailhog
appVersion: 1.0.0
-version: 2.3.0
+version: 2.3.1
keywords:
- mailhog
- mail
@@ -14,6 +14,5 @@ home: http://iankent.uk/project/mailhog/
icon: https://raw.githubusercontent.com/mailhog/MailHog-UI/master/assets/images/hog.png
sources:
- https://github.com/mailhog/MailHog
-maintainers:
- - name: unguiculus
- email: unguiculus@gmail.com
+# Chart moved to https://github.com/codecentric/helm-charts
+deprecated: true
diff --git a/stable/mailhog/README.md b/stable/mailhog/README.md
index 916e52aad267..6e2132b4c527 100644
--- a/stable/mailhog/README.md
+++ b/stable/mailhog/README.md
@@ -1,5 +1,15 @@
-# Mailhog
+# DEPRECATED - Mailhog
+**This chart has been deprecated and moved to its new home:**
+
+- **GitHub repo:** https://github.com/codecentric/helm-charts
+- **Charts repo:** https://codecentric.github.io/helm-charts
+
+```bash
+helm repo add codecentric https://codecentric.github.io/helm-charts
+```
+
+---
[Mailhog](http://iankent.uk/project/mailhog/) is an e-mail testing tool for developers.
## TL;DR;
diff --git a/stable/mailhog/templates/NOTES.txt b/stable/mailhog/templates/NOTES.txt
index 74f8f8ffe26b..3ec045136fbb 100644
--- a/stable/mailhog/templates/NOTES.txt
+++ b/stable/mailhog/templates/NOTES.txt
@@ -1,3 +1,11 @@
+**********************************************************************
+This chart has been DEPRECATED and moved to its new home:
+
+* GitHub repo: https://github.com/codecentric/helm-charts
+* Charts repo: https://codecentric.github.io/helm-charts
+
+**********************************************************************
+
Mailhog can be accessed via ports {{ .Values.service.port.http }} (HTTP) and {{ .Values.service.port.smtp }} (SMTP) on the following DNS name from within your cluster:
{{ template "mailhog.fullname" . }}.{{ .Release.Namespace }}.svc.cluster.local
diff --git a/stable/mariadb/Chart.yaml b/stable/mariadb/Chart.yaml
index f105c373760f..41155a1d8a4d 100644
--- a/stable/mariadb/Chart.yaml
+++ b/stable/mariadb/Chart.yaml
@@ -1,6 +1,7 @@
+apiVersion: v1
name: mariadb
-version: 5.5.0
-appVersion: 10.1.37
+version: 6.2.0
+appVersion: 10.3.15
description: Fast, reliable, scalable, and easy to use open-source relational database system. MariaDB Server is intended for mission-critical, heavy-load production systems as well as for embedding into mass-deployed software. Highly available MariaDB cluster.
keywords:
- mariadb
diff --git a/stable/mariadb/README.md b/stable/mariadb/README.md
index 16bf9a0d2816..c6d3bd41d704 100644
--- a/stable/mariadb/README.md
+++ b/stable/mariadb/README.md
@@ -14,7 +14,7 @@ $ helm install stable/mariadb
This chart bootstraps a [MariaDB](https://github.com/bitnami/bitnami-docker-mariadb) replication cluster deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
-Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters.
+Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters. This chart has been tested to work with NGINX Ingress, cert-manager, fluentd and Prometheus on top of the [BKPR](https://kubeprod.io/).
## Prerequisites
@@ -50,6 +50,7 @@ The following table lists the configurable parameters of the MariaDB chart and t
| Parameter | Description | Default |
|-------------------------------------------|-----------------------------------------------------|-------------------------------------------------------------------|
| `global.imageRegistry` | Global Docker image registry | `nil` |
+| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) |
| `image.registry` | MariaDB image registry | `docker.io` |
| `image.repository` | MariaDB Image name | `bitnami/mariadb` |
| `image.tag` | MariaDB Image tag | `{VERSION}` |
@@ -61,6 +62,7 @@ The following table lists the configurable parameters of the MariaDB chart and t
| `service.port` | MySQL service port | `3306` |
| `serviceAccount.create` | Specifies whether a ServiceAccount should be created | `false` |
| `serviceAccount.name` | The name of the ServiceAccount to create | Generated using the mariadb.fullname template |
+| `rbac.create` | Create and use RBAC resources | `false` |
| `securityContext.enabled` | Enable security context | `true` |
| `securityContext.fsGroup` | Group ID for the container | `1001` |
| `securityContext.runAsUser` | User ID for the container | `1001` |
@@ -73,15 +75,18 @@ The following table lists the configurable parameters of the MariaDB chart and t
| `replication.enabled` | MariaDB replication enabled | `true` |
| `replication.user` |MariaDB replication user | `replicator` |
| `replication.password` | MariaDB replication user password. Ignored if existing secret is provided. | _random 10 character alphanumeric string_ |
-| `initdbScripts` | List of initdb scripts | `nil` |
+| `initdbScripts` | Dictionary of initdb scripts | `nil` |
| `initdbScriptsConfigMap` | ConfigMap with the initdb scripts (Note: Overrides `initdbScripts`) | `nil` |
| `master.annotations[].key` | key for the the annotation list item | `nil` |
| `master.annotations[].value` | value for the the annotation list item | `nil` |
| `master.affinity` | Master affinity (in addition to master.antiAffinity when set) | `{}` |
| `master.antiAffinity` | Master pod anti-affinity policy | `soft` |
+| `master.nodeSelector` | Master node labels for pod assignment | `{}` |
| `master.tolerations` | List of node taints to tolerate (master) | `[]` |
+| `master.updateStrategy` | Master statefulset update strategy policy | `RollingUpdate` |
| `master.persistence.enabled` | Enable persistence using PVC | `true` |
| `master.persistence.existingClaim` | Provide an existing `PersistentVolumeClaim` | `nil` |
+| `master.persistence.subPath` | Subdirectory of the volume to mount | `nil` |
| `master.persistence.mountPath` | Path to mount the volume at | `/bitnami/mariadb` |
| `master.persistence.annotations` | Persistent Volume Claim annotations | `{}` |
| `master.persistence.storageClass` | Persistent Volume Storage Class | `` |
@@ -102,12 +107,17 @@ The following table lists the configurable parameters of the MariaDB chart and t
| `master.readinessProbe.timeoutSeconds` | When the probe times out (master) | `1` |
| `master.readinessProbe.successThreshold` | Minimum consecutive successes for the probe (master)| `1` |
| `master.readinessProbe.failureThreshold` | Minimum consecutive failures for the probe (master) | `3` |
+| `master.podDisruptionBudget.enabled` | If true, create a pod disruption budget for master pods. | `false` |
+| `master.podDisruptionBudget.minAvailable` | Minimum number / percentage of pods that should remain scheduled | `1` |
+| `master.podDisruptionBudget.maxUnavailable`| Maximum number / percentage of pods that may be made unavailable | `nil` |
| `slave.replicas` | Desired number of slave replicas | `1` |
| `slave.annotations[].key` | key for the the annotation list item | `nil` |
| `slave.annotations[].value` | value for the the annotation list item | `nil` |
| `slave.affinity` | Slave affinity (in addition to slave.antiAffinity when set) | `{}` |
| `slave.antiAffinity` | Slave pod anti-affinity policy | `soft` |
+| `slave.nodeSelector` | Slave node labels for pod assignment | `{}` |
| `slave.tolerations` | List of node taints to tolerate for (slave) | `[]` |
+| `slave.updateStrategy` | Slave statefulset update strategy policy | `RollingUpdate` |
| `slave.persistence.enabled` | Enable persistence using a `PersistentVolumeClaim` | `true` |
| `slave.persistence.annotations` | Persistent Volume Claim annotations | `{}` |
| `slave.persistence.storageClass` | Persistent Volume Storage Class | `` |
@@ -128,6 +138,9 @@ The following table lists the configurable parameters of the MariaDB chart and t
| `slave.readinessProbe.timeoutSeconds` | When the probe times out (slave) | `1` |
| `slave.readinessProbe.successThreshold` | Minimum consecutive successes for the probe (slave) | `1` |
| `slave.readinessProbe.failureThreshold` | Minimum consecutive failures for the probe (slave) | `3` |
+| `slave.podDisruptionBudget.enabled` | If true, create a pod disruption budget for slave pods. | `false` |
+| `slave.podDisruptionBudget.minAvailable` | Minimum number / percentage of pods that should remain scheduled | `1` |
+| `slave.podDisruptionBudget.maxUnavailable`| Maximum number / percentage of pods that may be made unavailable | `nil` |
| `metrics.enabled` | Start a side-car prometheus exporter | `false` |
| `metrics.image.registry` | Exporter image registry | `docker.io` |
| `metrics.image.repository` | Exporter image name | `prom/mysqld-exporter` |
@@ -141,7 +154,7 @@ Specify each parameter using the `--set key=value[,key=value]` argument to `helm
```bash
$ helm install --name my-release \
- --set root.password=secretpassword,user.database=app_database \
+  --set rootUser.password=secretpassword,db.name=app_database \
stable/mariadb
```
@@ -195,6 +208,13 @@ $ helm upgrade my-release stable/mariadb --set rootUser.password=[ROOT_PASSWORD]
| Note: you need to substitute the placeholder _[ROOT_PASSWORD]_ with the value obtained in the installation notes.
+### To 6.0.0
+
+MariaDB was updated from version 10.1 to 10.3; there are no changes in the chart itself. According to the official documentation, upgrading from 10.1 should be painless. However, some things have changed which could affect an upgrade:
+
+- [Incompatible changes upgrading from MariaDB 10.1 to MariaDB 10.2](https://mariadb.com/kb/en/library/upgrading-from-mariadb-101-to-mariadb-102/#incompatible-changes-between-101-and-102)
+- [Incompatible changes upgrading from MariaDB 10.2 to MariaDB 10.3](https://mariadb.com/kb/en/library/upgrading-from-mariadb-102-to-mariadb-103/#incompatible-changes-between-102-and-103)
+
### To 5.0.0
Backwards compatibility is not guaranteed unless you modify the labels used on the chart's deployments.
diff --git a/stable/mariadb/templates/_helpers.tpl b/stable/mariadb/templates/_helpers.tpl
index 99ff7806d80c..98fed6406b13 100644
--- a/stable/mariadb/templates/_helpers.tpl
+++ b/stable/mariadb/templates/_helpers.tpl
@@ -26,14 +26,14 @@ If release name contains chart name it will be used as a full name.
{{- define "master.fullname" -}}
{{- if .Values.replication.enabled -}}
-{{- printf "%s-%s" .Release.Name "mariadb-master" | trunc 63 | trimSuffix "-" -}}
+{{- printf "%s-%s" (include "mariadb.fullname" .) "master" | trunc 63 | trimSuffix "-" -}}
{{- else -}}
-{{- printf "%s-%s" .Release.Name "mariadb" | trunc 63 | trimSuffix "-" -}}
+{{- include "mariadb.fullname" . -}}
{{- end -}}
{{- end -}}
{{- define "slave.fullname" -}}
-{{- printf "%s-%s" .Release.Name "mariadb-slave" | trunc 63 | trimSuffix "-" -}}
+{{- printf "%s-%s" (include "mariadb.fullname" .) "slave" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- define "mariadb.chart" -}}
@@ -66,11 +66,24 @@ Also, we can't use a single if because lazy evaluation is not an option
{{/*
Return the proper metrics image name
*/}}
-{{- define "metrics.image" -}}
-{{- $registryName := .Values.metrics.image.registry -}}
+{{- define "mariadb.metrics.image" -}}
+{{- $registryName := .Values.metrics.image.registry -}}
{{- $repositoryName := .Values.metrics.image.repository -}}
{{- $tag := .Values.metrics.image.tag | toString -}}
-{{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+{{/*
+Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
+but Helm 2.9 and 2.10 don't support it, so we need to implement this if-else logic.
+Also, we can't use a single if because lazy evaluation is not an option
+*/}}
+{{- if .Values.global }}
+ {{- if .Values.global.imageRegistry }}
+ {{- printf "%s/%s:%s" .Values.global.imageRegistry $repositoryName $tag -}}
+ {{- else -}}
+ {{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+ {{- end -}}
+{{- else -}}
+ {{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+{{- end -}}
{{- end -}}
{{ template "mariadb.initdbScriptsCM" . }}
@@ -81,7 +94,7 @@ Get the initialization scripts ConfigMap name.
{{- if .Values.initdbScriptsConfigMap -}}
{{- printf "%s" .Values.initdbScriptsConfigMap -}}
{{- else -}}
-{{- printf "%s-init-scripts" (include "mariadb.fullname" .) -}}
+{{- printf "%s-init-scripts" (include "master.fullname" .) -}}
{{- end -}}
{{- end -}}
@@ -95,3 +108,38 @@ Create the name of the service account to use
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
{{- end -}}
+
+{{/*
+Return the proper Docker Image Registry Secret Names
+*/}}
+{{- define "mariadb.imagePullSecrets" -}}
+{{/*
+Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
+but Helm 2.9 and 2.10 do not support it, so we need to implement this if-else logic.
+Also, we cannot use a single if because lazy evaluation is not an option
+*/}}
+{{- if .Values.global }}
+{{- if .Values.global.imagePullSecrets }}
+imagePullSecrets:
+{{- range .Values.global.imagePullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- else if or .Values.image.pullSecrets .Values.metrics.image.pullSecrets }}
+imagePullSecrets:
+{{- range .Values.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- range .Values.metrics.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- end -}}
+{{- else if or .Values.image.pullSecrets .Values.metrics.image.pullSecrets }}
+imagePullSecrets:
+{{- range .Values.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- range .Values.metrics.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- end -}}
+{{- end -}}
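The helper gives `global.imagePullSecrets` precedence over the per-image `pullSecrets` lists. A values fragment illustrating the precedence (secret names are placeholders):

```yaml
global:
  imagePullSecrets:
    - my-global-registry-secret   # wins: rendered for all pods
image:
  pullSecrets:
    - my-mariadb-secret           # ignored while the global list is set
metrics:
  image:
    pullSecrets:
      - my-exporter-secret        # ignored while the global list is set
```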
diff --git a/stable/mariadb/templates/initialization-configmap.yaml b/stable/mariadb/templates/initialization-configmap.yaml
index f7380aff77ed..172e6ae07e5b 100644
--- a/stable/mariadb/templates/initialization-configmap.yaml
+++ b/stable/mariadb/templates/initialization-configmap.yaml
@@ -4,8 +4,8 @@ kind: ConfigMap
metadata:
name: {{ template "master.fullname" . }}-init-scripts
labels:
- app: {{ template "mariadb.name" . }}
- chart: {{ template "mariadb.chart" . }}
+ app: "{{ template "mariadb.name" . }}"
+ chart: "{{ template "mariadb.chart" . }}"
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
component: "master"
diff --git a/stable/mariadb/templates/master-configmap.yaml b/stable/mariadb/templates/master-configmap.yaml
index 880a10198da9..08bc10c28921 100644
--- a/stable/mariadb/templates/master-configmap.yaml
+++ b/stable/mariadb/templates/master-configmap.yaml
@@ -4,9 +4,9 @@ kind: ConfigMap
metadata:
name: {{ template "master.fullname" . }}
labels:
- app: {{ template "mariadb.name" . }}
+ app: "{{ template "mariadb.name" . }}"
component: "master"
- chart: {{ template "mariadb.chart" . }}
+ chart: "{{ template "mariadb.chart" . }}"
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
data:
diff --git a/stable/mariadb/templates/master-pdb.yaml b/stable/mariadb/templates/master-pdb.yaml
new file mode 100644
index 000000000000..b162ac0a7d81
--- /dev/null
+++ b/stable/mariadb/templates/master-pdb.yaml
@@ -0,0 +1,24 @@
+{{- if .Values.master.podDisruptionBudget.enabled }}
+apiVersion: policy/v1beta1
+kind: PodDisruptionBudget
+metadata:
+  name: {{ template "master.fullname" . }}
+  labels:
+    app: "{{ template "mariadb.name" . }}"
+    component: "master"
+    chart: "{{ template "mariadb.chart" . }}"
+ release: {{ .Release.Name | quote }}
+ heritage: {{ .Release.Service | quote }}
+spec:
+{{- if .Values.master.podDisruptionBudget.minAvailable }}
+ minAvailable: {{ .Values.master.podDisruptionBudget.minAvailable }}
+{{- end }}
+{{- if .Values.master.podDisruptionBudget.maxUnavailable }}
+ maxUnavailable: {{ .Values.master.podDisruptionBudget.maxUnavailable }}
+{{- end }}
+ selector:
+ matchLabels:
+ app: "{{ template "mariadb.name" . }}"
+ component: "master"
+ release: {{ .Release.Name | quote }}
+{{- end }}
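The budget above only renders when enabled. A values fragment turning it on (set either `minAvailable` or `maxUnavailable`, not both):

```yaml
master:
  podDisruptionBudget:
    enabled: true
    minAvailable: 1   # alternatively: maxUnavailable: 1
```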
diff --git a/stable/mariadb/templates/master-statefulset.yaml b/stable/mariadb/templates/master-statefulset.yaml
index c077a3b6089e..5895ec8a29c9 100644
--- a/stable/mariadb/templates/master-statefulset.yaml
+++ b/stable/mariadb/templates/master-statefulset.yaml
@@ -4,7 +4,7 @@ metadata:
name: {{ template "master.fullname" . }}
labels:
app: "{{ template "mariadb.name" . }}"
- chart: {{ template "mariadb.chart" . }}
+ chart: "{{ template "mariadb.chart" . }}"
component: "master"
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
@@ -13,11 +13,14 @@ spec:
matchLabels:
release: "{{ .Release.Name }}"
component: "master"
- app: {{ template "mariadb.name" . }}
+ app: "{{ template "mariadb.name" . }}"
serviceName: "{{ template "master.fullname" . }}"
replicas: 1
updateStrategy:
- type: RollingUpdate
+ type: {{ .Values.master.updateStrategy.type }}
+ {{- if (eq "Recreate" .Values.master.updateStrategy.type) }}
+ rollingUpdate: null
+ {{- end }}
template:
metadata:
{{- if .Values.master.annotations }}
@@ -30,7 +33,7 @@ spec:
app: "{{ template "mariadb.name" . }}"
component: "master"
release: "{{ .Release.Name }}"
- chart: {{ template "mariadb.chart" . }}
+ chart: "{{ template "mariadb.chart" . }}"
spec:
serviceAccountName: "{{ template "mariadb.serviceAccountName" . }}"
{{- if .Values.securityContext.enabled }}
@@ -70,16 +73,15 @@ spec:
{{ toYaml . | indent 8 }}
{{- end }}
{{- end }}
+ {{- if .Values.master.nodeSelector }}
+ nodeSelector:
+ {{ toYaml .Values.master.nodeSelector | nindent 8 }}
+ {{- end -}}
{{- with .Values.master.tolerations }}
tolerations:
{{ toYaml . | indent 8 }}
{{- end }}
- {{- if .Values.image.pullSecrets }}
- imagePullSecrets:
- {{- range .Values.image.pullSecrets }}
- - name: {{ . }}
- {{- end}}
- {{- end }}
+{{- include "mariadb.imagePullSecrets" . | indent 6 }}
{{- if .Values.master.extraInitContainers }}
initContainers:
{{ tpl .Values.master.extraInitContainers . | indent 6}}
@@ -160,6 +162,9 @@ spec:
volumeMounts:
- name: data
mountPath: {{ .Values.master.persistence.mountPath }}
+ {{- if .Values.master.persistence.subPath }}
+ subPath: {{ .Values.master.persistence.subPath }}
+ {{- end }}
{{- if or (.Files.Glob "files/docker-entrypoint-initdb.d/*.{sh,sql,sql.gz}") .Values.initdbScriptsConfigMap .Values.initdbScripts }}
- name: custom-init-scripts
mountPath: /docker-entrypoint-initdb.d
@@ -171,7 +176,7 @@ spec:
{{- end }}
{{- if .Values.metrics.enabled }}
- name: metrics
- image: {{ template "metrics.image" . }}
+ image: {{ template "mariadb.metrics.image" . }}
imagePullPolicy: {{ .Values.metrics.image.pullPolicy | quote }}
env:
- name: MARIADB_ROOT_PASSWORD
diff --git a/stable/mariadb/templates/master-svc.yaml b/stable/mariadb/templates/master-svc.yaml
index 56810b44b735..4e138ad1414a 100644
--- a/stable/mariadb/templates/master-svc.yaml
+++ b/stable/mariadb/templates/master-svc.yaml
@@ -5,7 +5,7 @@ metadata:
labels:
app: "{{ template "mariadb.name" . }}"
component: "master"
- chart: {{ template "mariadb.chart" . }}
+ chart: "{{ template "mariadb.chart" . }}"
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
{{- if .Values.metrics.enabled }}
diff --git a/stable/mariadb/templates/role.yaml b/stable/mariadb/templates/role.yaml
new file mode 100644
index 000000000000..b54eeac91b6b
--- /dev/null
+++ b/stable/mariadb/templates/role.yaml
@@ -0,0 +1,18 @@
+{{- if and .Values.serviceAccount.create .Values.rbac.create }}
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+ name: {{ template "master.fullname" . }}
+ labels:
+ app: "{{ template "mariadb.name" . }}"
+ chart: "{{ template "mariadb.chart" . }}"
+ release: {{ .Release.Name | quote }}
+ heritage: {{ .Release.Service | quote }}
+rules:
+- apiGroups:
+ - ""
+ resources:
+ - endpoints
+ verbs:
+ - get
+{{- end }}
diff --git a/stable/mariadb/templates/rolebinding.yaml b/stable/mariadb/templates/rolebinding.yaml
new file mode 100644
index 000000000000..c6669cf02429
--- /dev/null
+++ b/stable/mariadb/templates/rolebinding.yaml
@@ -0,0 +1,18 @@
+{{- if and .Values.serviceAccount.create .Values.rbac.create }}
+kind: RoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+ name: {{ template "master.fullname" . }}
+ labels:
+ app: "{{ template "mariadb.name" . }}"
+ chart: "{{ template "mariadb.chart" . }}"
+ release: {{ .Release.Name | quote }}
+ heritage: {{ .Release.Service | quote }}
+subjects:
+- kind: ServiceAccount
+ name: {{ template "mariadb.serviceAccountName" . }}
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: Role
+ name: {{ template "master.fullname" . }}
+{{- end }}
diff --git a/stable/mariadb/templates/secrets.yaml b/stable/mariadb/templates/secrets.yaml
index 401691c1035d..0f8d545e02fc 100644
--- a/stable/mariadb/templates/secrets.yaml
+++ b/stable/mariadb/templates/secrets.yaml
@@ -5,7 +5,7 @@ metadata:
name: {{ template "mariadb.fullname" . }}
labels:
app: "{{ template "mariadb.name" . }}"
- chart: {{ template "mariadb.chart" . }}
+ chart: "{{ template "mariadb.chart" . }}"
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
type: Opaque
@@ -35,4 +35,4 @@ data:
mariadb-replication-password: {{ required "A MariaDB Replication Password is required!" .Values.replication.password }}
{{- end }}
{{- end }}
-{{- end }}
\ No newline at end of file
+{{- end }}
diff --git a/stable/mariadb/templates/serviceaccount.yaml b/stable/mariadb/templates/serviceaccount.yaml
new file mode 100644
index 000000000000..2d27739efdc2
--- /dev/null
+++ b/stable/mariadb/templates/serviceaccount.yaml
@@ -0,0 +1,11 @@
+{{- if .Values.serviceAccount.create }}
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: {{ template "mariadb.serviceAccountName" . }}
+ labels:
+ app: "{{ template "mariadb.name" . }}"
+ chart: "{{ template "mariadb.chart" . }}"
+ release: {{ .Release.Name | quote }}
+ heritage: {{ .Release.Service | quote }}
+{{- end }}
diff --git a/stable/mariadb/templates/slave-configmap.yaml b/stable/mariadb/templates/slave-configmap.yaml
index 056cf5c0700d..074568c66f51 100644
--- a/stable/mariadb/templates/slave-configmap.yaml
+++ b/stable/mariadb/templates/slave-configmap.yaml
@@ -4,9 +4,9 @@ kind: ConfigMap
metadata:
name: {{ template "slave.fullname" . }}
labels:
- app: {{ template "mariadb.name" . }}
+ app: "{{ template "mariadb.name" . }}"
component: "slave"
- chart: {{ template "mariadb.chart" . }}
+ chart: "{{ template "mariadb.chart" . }}"
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
data:
diff --git a/stable/mariadb/templates/slave-pdb.yaml b/stable/mariadb/templates/slave-pdb.yaml
new file mode 100644
index 000000000000..de36a08c5d58
--- /dev/null
+++ b/stable/mariadb/templates/slave-pdb.yaml
@@ -0,0 +1,26 @@
+{{- if .Values.replication.enabled }}
+{{- if .Values.slave.podDisruptionBudget.enabled }}
+apiVersion: policy/v1beta1
+kind: PodDisruptionBudget
+metadata:
+  name: {{ template "slave.fullname" . }}
+  labels:
+    app: "{{ template "mariadb.name" . }}"
+    component: "slave"
+    chart: "{{ template "mariadb.chart" . }}"
+ release: {{ .Release.Name | quote }}
+ heritage: {{ .Release.Service | quote }}
+spec:
+{{- if .Values.slave.podDisruptionBudget.minAvailable }}
+ minAvailable: {{ .Values.slave.podDisruptionBudget.minAvailable }}
+{{- end }}
+{{- if .Values.slave.podDisruptionBudget.maxUnavailable }}
+ maxUnavailable: {{ .Values.slave.podDisruptionBudget.maxUnavailable }}
+{{- end }}
+ selector:
+ matchLabels:
+ app: "{{ template "mariadb.name" . }}"
+ component: "slave"
+ release: {{ .Release.Name | quote }}
+{{- end }}
+{{- end }}
diff --git a/stable/mariadb/templates/slave-statefulset.yaml b/stable/mariadb/templates/slave-statefulset.yaml
index f2cab6162eef..dfc8c7f95daf 100644
--- a/stable/mariadb/templates/slave-statefulset.yaml
+++ b/stable/mariadb/templates/slave-statefulset.yaml
@@ -5,7 +5,7 @@ metadata:
name: {{ template "slave.fullname" . }}
labels:
app: "{{ template "mariadb.name" . }}"
- chart: {{ template "mariadb.chart" . }}
+ chart: "{{ template "mariadb.chart" . }}"
component: "slave"
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
@@ -14,11 +14,14 @@ spec:
matchLabels:
release: "{{ .Release.Name }}"
component: "slave"
- app: {{ template "mariadb.name" . }}
+ app: "{{ template "mariadb.name" . }}"
serviceName: "{{ template "slave.fullname" . }}"
replicas: {{ .Values.slave.replicas }}
updateStrategy:
- type: RollingUpdate
+ type: {{ .Values.slave.updateStrategy.type }}
+ {{- if (eq "Recreate" .Values.slave.updateStrategy.type) }}
+ rollingUpdate: null
+ {{- end }}
template:
metadata:
{{- if .Values.slave.annotations }}
@@ -31,7 +34,7 @@ spec:
app: "{{ template "mariadb.name" . }}"
component: "slave"
release: "{{ .Release.Name }}"
- chart: {{ template "mariadb.chart" . }}
+ chart: "{{ template "mariadb.chart" . }}"
spec:
serviceAccountName: "{{ template "mariadb.serviceAccountName" . }}"
{{- if .Values.securityContext.enabled }}
@@ -71,16 +74,15 @@ spec:
{{ toYaml . | indent 8 }}
{{- end }}
{{- end }}
+ {{- if .Values.slave.nodeSelector }}
+ nodeSelector:
+ {{ toYaml .Values.slave.nodeSelector | nindent 8 }}
+ {{- end -}}
{{- with .Values.slave.tolerations }}
tolerations:
{{ toYaml . | indent 8 }}
{{- end }}
- {{- if .Values.image.pullSecrets }}
- imagePullSecrets:
- {{- range .Values.image.pullSecrets }}
- - name: {{ . }}
- {{- end}}
- {{- end }}
+{{- include "mariadb.imagePullSecrets" . | indent 6 }}
{{- if .Values.master.extraInitContainers }}
initContainers:
{{ tpl .Values.master.extraInitContainers . | indent 6}}
@@ -99,7 +101,7 @@ spec:
- name: MARIADB_MASTER_HOST
value: {{ template "mariadb.fullname" . }}
- name: MARIADB_MASTER_PORT_NUMBER
- value: "3306"
+ value: "{{ .Values.service.port }}"
- name: MARIADB_MASTER_ROOT_USER
value: "root"
- name: MARIADB_MASTER_ROOT_PASSWORD
@@ -157,7 +159,7 @@ spec:
{{- end }}
{{- if .Values.metrics.enabled }}
- name: metrics
- image: {{ template "metrics.image" . }}
+ image: {{ template "mariadb.metrics.image" . }}
imagePullPolicy: {{ .Values.metrics.image.pullPolicy | quote }}
env:
- name: MARIADB_MASTER_ROOT_PASSWORD
diff --git a/stable/mariadb/templates/slave-svc.yaml b/stable/mariadb/templates/slave-svc.yaml
index c41ecb7524a4..a4773bd0d22e 100644
--- a/stable/mariadb/templates/slave-svc.yaml
+++ b/stable/mariadb/templates/slave-svc.yaml
@@ -5,7 +5,7 @@ metadata:
name: {{ template "slave.fullname" . }}
labels:
app: "{{ template "mariadb.name" . }}"
- chart: {{ template "mariadb.chart" . }}
+ chart: "{{ template "mariadb.chart" . }}"
component: "slave"
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
diff --git a/stable/mariadb/values-production.yaml b/stable/mariadb/values-production.yaml
index f8e3b0dae36b..07d5ac708908 100644
--- a/stable/mariadb/values-production.yaml
+++ b/stable/mariadb/values-production.yaml
@@ -1,8 +1,11 @@
-## Global Docker image registry
-## Please, note that this will override the image registry for all the images, including dependencies, configured to use the global value
+## Global Docker image parameters
+## Please, note that this will override the image parameters, including dependencies, configured to use the global value
+## Current available global Docker image parameters: imageRegistry and imagePullSecrets
##
# global:
-# imageRegistry:
+# imageRegistry: myRegistryName
+# imagePullSecrets:
+# - myRegistryKeySecretName
## Bitnami MariaDB image
## ref: https://hub.docker.com/r/bitnami/mariadb/tags/
@@ -10,7 +13,7 @@
image:
registry: docker.io
repository: bitnami/mariadb
- tag: 10.1.37
+ tag: 10.3.15
## Specify a imagePullPolicy
## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
@@ -21,7 +24,7 @@ image:
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
- # - myRegistrKeySecretName
+ # - myRegistryKeySecretName
## Set to true if you would like to see extra information on logs
## It turns BASH and NAMI debugging in minideb
@@ -50,6 +53,13 @@ serviceAccount:
## If not set and create is true, a name is generated using the mariadb.fullname template
# name:
+## Role Based Access
+## Ref: https://kubernetes.io/docs/admin/authorization/rbac/
+##
+
+rbac:
+ create: false
+
## Pod Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
##
@@ -106,7 +116,7 @@ replication:
forcePassword: true
## initdb scripts
-## Specify dictionnary of scripts to be run at first boot
+## Specify dictionary of scripts to be run at first boot
## Alternatively, you can put your scripts under the files/docker-entrypoint-initdb.d directory
##
# initdbScripts:
@@ -135,11 +145,21 @@ master:
##
antiAffinity: soft
+ ## Node labels for pod assignment
+ ## Ref: https://kubernetes.io/docs/user-guide/node-selection/
+ ##
+ nodeSelector: {}
+
## Tolerations for pod assignment
## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
+ ## updateStrategy for MariaDB Master StatefulSet
+ ## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
+ updateStrategy:
+ type: RollingUpdate
+
## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
@@ -149,6 +169,8 @@ master:
enabled: true
# Enable persistence using an existing PVC
# existingClaim:
+ # Subdirectory of the volume to mount
+ # subPath:
mountPath: /bitnami/mariadb
## Persistent Volume Storage Class
## If defined, storageClassName:
@@ -227,6 +249,11 @@ master:
successThreshold: 1
failureThreshold: 3
+ podDisruptionBudget:
+ enabled: false
+ minAvailable: 1
+ # maxUnavailable: 1
+
slave:
replicas: 2
@@ -247,11 +274,21 @@ slave:
##
antiAffinity: soft
+ ## Node labels for pod assignment
+ ## Ref: https://kubernetes.io/docs/user-guide/node-selection/
+ ##
+ nodeSelector: {}
+
## Tolerations for pod assignment
## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
+ ## updateStrategy for MariaDB Slave StatefulSet
+ ## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
+ updateStrategy:
+ type: RollingUpdate
+
persistence:
## If true, use a Persistent Volume Claim, If false, use emptyDir
##
@@ -323,6 +360,11 @@ slave:
successThreshold: 1
failureThreshold: 3
+ podDisruptionBudget:
+ enabled: false
+ minAvailable: 1
+ # maxUnavailable: 1
+
metrics:
enabled: true
image:
@@ -330,6 +372,12 @@ metrics:
repository: prom/mysqld-exporter
tag: v0.10.0
pullPolicy: IfNotPresent
+ ## Optionally specify an array of imagePullSecrets.
+ ## Secrets must be manually created in the namespace.
+ ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
+ ##
+ # pullSecrets:
+ # - myRegistryKeySecretName
resources: {}
annotations:
prometheus.io/scrape: "true"
diff --git a/stable/mariadb/values.yaml b/stable/mariadb/values.yaml
index 9a2dfd52ab63..0312d586ec78 100644
--- a/stable/mariadb/values.yaml
+++ b/stable/mariadb/values.yaml
@@ -1,8 +1,11 @@
-## Global Docker image registry
-## Please, note that this will override the image registry for all the images, including dependencies, configured to use the global value
+## Global Docker image parameters
+## Please note that this will override the image parameters for all images, including dependencies, configured to use the global value
+## Current available global Docker image parameters: imageRegistry and imagePullSecrets
##
# global:
-# imageRegistry:
+# imageRegistry: myRegistryName
+# imagePullSecrets:
+# - myRegistryKeySecretName
## Bitnami MariaDB image
## ref: https://hub.docker.com/r/bitnami/mariadb/tags/
@@ -10,7 +13,7 @@
image:
registry: docker.io
repository: bitnami/mariadb
- tag: 10.1.37
+ tag: 10.3.15
## Specify a imagePullPolicy
## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
@@ -21,7 +24,7 @@ image:
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
- # - myRegistrKeySecretName
+ # - myRegistryKeySecretName
## Set to true if you would like to see extra information on logs
## It turns BASH and NAMI debugging in minideb
@@ -50,6 +53,13 @@ serviceAccount:
## If not set and create is true, a name is generated using the mariadb.fullname template
# name:
+## Role Based Access
+## Ref: https://kubernetes.io/docs/admin/authorization/rbac/
+##
+
+rbac:
+ create: false
+
## Pod Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
##
@@ -106,7 +116,7 @@ replication:
forcePassword: false
## initdb scripts
-## Specify dictionnary of scripts to be run at first boot
+## Specify dictionary of scripts to be run at first boot
## Alternatively, you can put your scripts under the files/docker-entrypoint-initdb.d directory
##
# initdbScripts:
@@ -135,11 +145,21 @@ master:
##
antiAffinity: soft
+ ## Node labels for pod assignment
+ ## Ref: https://kubernetes.io/docs/user-guide/node-selection/
+ ##
+ nodeSelector: {}
+
## Tolerations for pod assignment
## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
+ ## updateStrategy for MariaDB Master StatefulSet
+ ## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
+ updateStrategy:
+ type: RollingUpdate
+
## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
@@ -149,6 +169,8 @@ master:
enabled: true
# Enable persistence using an existing PVC
# existingClaim:
+ # Subdirectory of the volume to mount
+ # subPath:
mountPath: /bitnami/mariadb
## Persistent Volume Storage Class
## If defined, storageClassName:
@@ -227,6 +249,11 @@ master:
successThreshold: 1
failureThreshold: 3
+ podDisruptionBudget:
+ enabled: false
+ minAvailable: 1
+ # maxUnavailable: 1
+
slave:
replicas: 1
@@ -246,11 +273,21 @@ slave:
##
antiAffinity: soft
+ ## Node labels for pod assignment
+ ## Ref: https://kubernetes.io/docs/user-guide/node-selection/
+ ##
+ nodeSelector: {}
+
## Tolerations for pod assignment
## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
+ ## updateStrategy for MariaDB Slave StatefulSet
+ ## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
+ updateStrategy:
+ type: RollingUpdate
+
persistence:
## If true, use a Persistent Volume Claim, If false, use emptyDir
##
@@ -322,6 +359,11 @@ slave:
successThreshold: 1
failureThreshold: 3
+ podDisruptionBudget:
+ enabled: false
+ minAvailable: 1
+ # maxUnavailable: 1
+
metrics:
enabled: false
image:
@@ -329,6 +371,12 @@ metrics:
repository: prom/mysqld-exporter
tag: v0.10.0
pullPolicy: IfNotPresent
+ ## Optionally specify an array of imagePullSecrets.
+ ## Secrets must be manually created in the namespace.
+ ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
+ ##
+ # pullSecrets:
+ # - myRegistryKeySecretName
resources: {}
annotations:
prometheus.io/scrape: "true"
diff --git a/stable/mattermost-team-edition/Chart.yaml b/stable/mattermost-team-edition/Chart.yaml
index 0d1fc64774ee..f2a5cddb8353 100644
--- a/stable/mattermost-team-edition/Chart.yaml
+++ b/stable/mattermost-team-edition/Chart.yaml
@@ -1,8 +1,9 @@
apiVersion: v1
description: Mattermost Team Edition server.
name: mattermost-team-edition
-version: 2.2.0
-appVersion: 5.7.0
+version: 3.1.2
+appVersion: 5.9.0
+deprecated: true
keywords:
- mattermost
- communication
@@ -11,9 +12,4 @@ home: https://mattermost.com
icon: http://www.mattermost.org/wp-content/uploads/2016/04/icon.png
source:
- https://github.com/mattermost/mattermost-server
-- https://github.com/mattermost/mattermost-kubernetes
-maintainers:
- - name: cpanato
- email: carlos@mattermost.com
- - name: jwilander
- email: joram@mattermost.com
+- https://github.com/mattermost/mattermost-helm
diff --git a/stable/mattermost-team-edition/README.md b/stable/mattermost-team-edition/README.md
index 8e3e8d477df3..b49e124bd453 100644
--- a/stable/mattermost-team-edition/README.md
+++ b/stable/mattermost-team-edition/README.md
@@ -1,4 +1,13 @@
-# Mattermost Team Edition
+# DEPRECATED - Mattermost Team Edition
+
+**This chart has been deprecated and moved to its new home:**
+
+- **GitHub repo:** https://github.com/mattermost/mattermost-helm
+- **Charts repo:** https://helm.mattermost.com
+
+```bash
+$ helm repo add mattermost https://helm.mattermost.com
+```
[Mattermost](https://mattermost.com/) is a hybrid cloud enterprise messaging workspace that brings your messaging and tools together to get more done, faster.
@@ -30,6 +39,14 @@ $ helm install --name my-release stable/mattermost-team-edition
The command deploys Mattermost on the Kubernetes cluster in the default configuration. The [configuration](#configuration)
section lists the parameters that can be configured during installation.
+## Upgrading the Chart to 3.0.0+
+
+Breaking Helm chart changes were introduced with version 3.0.0. The easiest
+way to resolve them is to simply upgrade the chart and let it fail, providing
+you with a custom message describing what you need to change in your
+configuration. Note that this failure will occur before any changes have been
+made to the k8s cluster.
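+
+For example, assuming an existing release named `my-release`:
+
+```bash
+$ helm upgrade my-release stable/mattermost-team-edition
+```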
+
## Uninstalling the Chart
To uninstall/delete the `my-release` deployment:
@@ -43,36 +60,29 @@ The command removes all the Kubernetes components associated with the chart and
The following table lists the configurable parameters of the Mattermost Team Edition chart and their default values.
-Parameter | Description | Default
---- | --- | ---
-`image.repository` | container image repository | `mattermost/mattermost-team-edition`
-`image.tag` | container image tag | `5.7.0`
-`image.imagePullPolicy` | container image pull policy | `IfNotPresent`
-`initContainerImage.repository` | init container image repository | `appropriate/curl`
-`initContainerImage.tag` | init container image tag | `latest`
-`initContainerImage.imagePullPolicy` | container image pull policy | `IfNotPresent`
-`revisionHistoryLimit` | How many old ReplicaSets for Mattermost Deployment you want to retain | `1`
-`config.SiteUrl` | The URL that users will use to access Mattermost. ie `https://mattermost.mycompany.com` | ``
-`config.SiteName` | Name of service shown in login screens and UI | `Mattermost`
-`config.FilesAccessKey` | The AWS Access Key, if you want store the files on S3 | ``
-`config.FilesSecretKey` | The AWS Secret Key | ``
-`config.FileBucketName` | The S3 bucket name | ``
-`config.SMTPHost` | Location of SMTP email server | ``
-`config.SMTPPort` | Port of SMTP email server | ``
-`config.SMTPUsername` | The username for authenticating to the SMTP server | ``
-`config.SMTPPassword` | The password associated with the SMTP username | ``
-`config.FeedbackEmail` | Address displayed on email account used when sending notification emails from Mattermost system | ``
-`config.FeedbackName` | Name displayed on email account used when sending notification emails from Mattermost system | ``
-`config.enableSignUpWithEmail` | Allow team creation and account signup using email and password. | `true`
-`ingress.enabled` | if `true`, an ingress is created | `false`
-`ingress.hosts` | a list of ingress hosts | `[mattermost.example.com]`
-`ingress.tls` | a list of [IngressTLS](https://v1-8.docs.kubernetes.io/docs/api-reference/v1.8/#ingresstls-v1beta1-extensions) items | `[]`
-`mysql.mysqlRootPassword` | Root Password for Mysql (Opcional) | ""
-`mysql.mysqlUser` | Username for Mysql (Required) | ""
-`mysql.mysqlPassword` | User Password for Mysql (Required) | ""
-`mysql.mysqlDatabase` | Database name (Required) | "mattermost"
-`extraEnvVars` | Extra environments variables to be used in the deployments |
-`extraInitContainers` | Additional init containers. Passed through the `tpl` function | ``
+Parameter | Description | Default
+--- | --- | ---
+`configJSON` | The `config.json` configuration to be used by the Mattermost server. Thanks to Helm's merging behavior, the values you provide override only the corresponding default values. See the [example configuration](#example-configuration) and the [Mattermost documentation](https://docs.mattermost.com/administration/config-settings.html) for details. | See `configJSON` in [values.yaml](https://github.com/helm/charts/blob/master/stable/mattermost-team-edition/values.yaml)
+`image.repository` | Container image repository | `mattermost/mattermost-team-edition`
+`image.tag` | Container image tag | `5.9.0`
+`image.imagePullPolicy` | Container image pull policy | `IfNotPresent`
+`initContainerImage.repository` | Init container image repository | `appropriate/curl`
+`initContainerImage.tag` | Init container image tag | `latest`
+`initContainerImage.imagePullPolicy` | Container image pull policy | `IfNotPresent`
+`revisionHistoryLimit` | How many old ReplicaSets for Mattermost Deployment you want to retain | `1`
+`ingress.enabled` | If `true`, an ingress is created | `false`
+`ingress.hosts` | A list of ingress hosts | `[mattermost.example.com]`
+`ingress.tls` | A list of [ingress tls](https://kubernetes.io/docs/concepts/services-networking/ingress/#tls) items | `[]`
+`mysql.enabled` | Enables deployment of a MySQL server | `true`
+`mysql.mysqlRootPassword` | Root Password for MySQL (Optional) | ""
+`mysql.mysqlUser` | Username for MySQL (Required) | ""
+`mysql.mysqlPassword` | User Password for MySQL (Required) | ""
+`mysql.mysqlDatabase` | Database name (Required) | "mattermost"
+`externalDB.enabled` | Enables use of a preconfigured external database server | `false`
+`externalDB.externalDriverType` | `"postgres"` or `"mysql"` | ""
+`externalDB.externalConnectionString` | See the section about [external databases](#External-Databases). | ""
+`extraEnvVars` | Extra environment variables to be used in the deployments | `[]`
+`extraInitContainers` | Additional init containers | `[]`
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
@@ -90,6 +100,24 @@ Alternatively, a YAML file that specifies the values for the parameters can be p
$ helm install --name my-release -f values.yaml stable/mattermost-team-edition
```
+### Example configuration
+
+Here is a basic example of a `.yaml` file with values that can be passed to the
+`helm` command with the `-f` or `--values` flag to get started.
+
+```yaml
+ingress:
+ enabled: true
+ hosts:
+ - mattermost.example.com
+
+configJSON:
+ ServiceSettings:
+ SiteURL: "https://mattermost.example.com"
+ TeamSettings:
+ SiteName: "Mattermost on Example.com"
+```
+
### External Databases
There is an option to use external database services (PostgreSQL or MySQL) for your Mattermost installation.
If you use an external Database you will need to disable the MySQL chart in the `values.yaml`
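For example, a sketch of values that disable the bundled MySQL chart and point Mattermost at an external PostgreSQL server (the hostname and credentials below are placeholders):

```yaml
mysql:
  enabled: false

externalDB:
  enabled: true
  externalDriverType: "postgres"
  externalConnectionString: "postgres://mmuser:password@postgres.example.com:5432/mattermost?sslmode=disable"
```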
diff --git a/stable/mattermost-team-edition/ci/extra-values.yaml b/stable/mattermost-team-edition/ci/extra-values.yaml
index 7f9859e530d9..6584a2aa335e 100644
--- a/stable/mattermost-team-edition/ci/extra-values.yaml
+++ b/stable/mattermost-team-edition/ci/extra-values.yaml
@@ -9,7 +9,7 @@ extraEnvVars:
- name: TEST_SAMPLE
value: blablabal
-extraInitContainers: |
+extraInitContainers:
- name: test-init
image: busybox
imagePullPolicy: IfNotPresent
diff --git a/stable/mattermost-team-edition/templates/NOTES.txt b/stable/mattermost-team-edition/templates/NOTES.txt
index 35fba34c6bf8..4d662d0993eb 100644
--- a/stable/mattermost-team-edition/templates/NOTES.txt
+++ b/stable/mattermost-team-edition/templates/NOTES.txt
@@ -1,6 +1,6 @@
You can easily connect to the remote instance from your browser. Forward the webserver port to localhost:8065
-- kubectl port-forward --namespace {{ .Release.Namespace }} $(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "mattermost-team-edition.name" . }},release={{ .Release.Name }}" -o jsonpath='{ .items[0].metadata.name }') 8080:8065
+- kubectl port-forward --namespace {{ .Release.Namespace }} $(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "mattermost-team-edition.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath='{ .items[0].metadata.name }') 8080:8065
{{ if .Values.ingress.enabled }}
@@ -25,3 +25,7 @@ To expose Mattermost via an Ingress you need to set host and enable ingress.
helm install --set host=mattermost.yourdomain.com --set ingress.enabled=true stable/mattermost-team-edition
{{ end }}
+
+{{ include "mattermost.warnings" . }}
+
+{{ include "mattermost.deprecations" . }}
diff --git a/stable/mattermost-team-edition/templates/_deprecations.tpl b/stable/mattermost-team-edition/templates/_deprecations.tpl
new file mode 100644
index 000000000000..46a6f27955c2
--- /dev/null
+++ b/stable/mattermost-team-edition/templates/_deprecations.tpl
@@ -0,0 +1,151 @@
+{{- /*
+A template for handling deprecation messages. The messages templated here will
+be combined into a single `fail` call. This creates a means for the user to
+receive all messages at one time, in place of a frustrating iterative approach.
+
+To add a deprecation:
+
+1. Define a new template prefixed `mattermost.deprecate.`
+2. Check for deprecated values / patterns, and directly output messages (see
+ message format below)
+3. Add a line to `mattermost.deprecations` to include the new template.
+
+Message format:
+
+```
+deprecatedHelmConfig.option is deprecated, please use the following configuration instead...
+
+newHelmConfig:
+ option:
+ {{- .Values.deprecatedHelmConfig.option | toYaml | nindent 4 }}
+```
+*/}}
+
+{{- /*
+Compile all deprecations into a single message, and call fail.
+*/}}
+
+{{- define "mattermost.deprecations" }}
+{{- $depHeader := print "\n\nFAILURE DUE TO DEPRECATIONS:\n----------------------------" }}
+{{- $depMessage := "" }}
+
+{{- /*
+deprecations in order to transition to a passthrough configuration in configJSON
+*/}}
+{{- $passthroughs := list }}
+{{- $passthroughs := append $passthroughs (include "mattermost.deprecate.auth.gitlab" .) }}
+{{- $passthroughs := append $passthroughs (include "mattermost.deprecate.config.siteUrl" .) }}
+{{- $passthroughs := append $passthroughs (include "mattermost.deprecate.config.siteName" .) }}
+{{- $passthroughs := append $passthroughs (include "mattermost.deprecate.config.fileSettings" .) }}
+{{- $passthroughs := append $passthroughs (include "mattermost.deprecate.config.emailSettings" .) }}
+{{- $passthroughs := without $passthroughs "" }}
+{{- if $passthroughs }}
+{{- $passthroughsHeader := print "\n\nconfigJSON:" }}
+{{- $passthroughsMessage := print $passthroughsHeader (join "\n" $passthroughs) }}
+{{- $depMessage := print $depMessage $passthroughsMessage }}
+{{- end }}
+
+{{- if typeIs "string" .Values.extraInitContainers }}
+{{- $stringToListMessage := print "\n\nPlease make extraInitContainers a list instead of a string.\nGot a '|' symbol after extraInitContainers? Remove it." }}
+{{- $depMessage := print $depMessage $stringToListMessage }}
+{{- end }}
+
+{{- /* print output */}}
+{{- if $depMessage }}
+{{- printf $depMessage | fail }}
+{{- end }}
+{{- end }}
+
+
+
+{{- /* Deprecate auth.gitlab */}}
+{{- define "mattermost.deprecate.auth.gitlab" }}
+{{- if typeIs "map[string]interface {}" .Values.auth }}
+{{- if typeIs "map[string]interface {}" .Values.auth.gitlab }}
+ # auth.gitlab is deprecated, instead use:
+ GitLabSettings:
+ {{- .Values.auth.gitlab | toYaml | nindent 4 }}
+{{- end }}
+{{- end }}
+{{- end }}
+
+{{- /* Deprecate config.siteUrl */}}
+{{- define "mattermost.deprecate.config.siteUrl" }}
+{{- if typeIs "map[string]interface {}" .Values.config }}
+{{- if .Values.config.siteUrl }}
+ # config.siteUrl is deprecated, instead use:
+ ServiceSettings:
+ SiteURL: {{ .Values.config.siteUrl | quote }}
+{{- end }}
+{{- end }}
+{{- end }}
+
+{{- /* Deprecate config.siteName */}}
+{{- define "mattermost.deprecate.config.siteName" }}
+{{- if typeIs "map[string]interface {}" .Values.config }}
+{{- if .Values.config.siteName }}
+ # config.siteName is deprecated, instead use:
+ TeamSettings:
+ SiteName: {{ .Values.config.siteName | quote }}
+{{- end }}
+{{- end }}
+{{- end }}
+
+{{- /* Deprecate config.fileSettings */}}
+{{- define "mattermost.deprecate.config.fileSettings" }}
+{{- if typeIs "map[string]interface {}" .Values.config }}
+{{- $FileSettings := dict }}
+{{- if or .Values.config.filesAccessKey (or .Values.config.filesSecretKey .Values.config.fileBucketName) }}
+{{- $_ := set $FileSettings "DriverName" "amazons3" }}
+{{- $_ := set $FileSettings "AmazonS3AccessKeyId" (.Values.config.filesAccessKey | default "") }}
+{{- $_ := set $FileSettings "AmazonS3SecretAccessKey" (.Values.config.filesSecretKey | default "") }}
+{{- $_ := set $FileSettings "AmazonS3Bucket" (.Values.config.fileBucketName | default "") }}
+ # config.fileSecretKey,
+ # config.fileAccessKey,
+ # config.fileBucketName,
+ # are all deprecated, instead use:
+ FileSettings:
+ {{- $FileSettings | toYaml | nindent 4 }}
+{{- end }}
+{{- end }}
+{{- end }}
+
+{{- /* Deprecate config.emailSettings */}}
+{{- define "mattermost.deprecate.config.emailSettings" }}
+{{- if typeIs "map[string]interface {}" .Values.config }}
+{{- if or .Values.config.smtpServer (or (hasKey .Values.config "enableSignUpWithEmail") (or .Values.config.feedbackName .Values.config.feedbackEmail)) }}
+{{- $EmailSettings := dict }}
+{{- if .Values.config.smtpServer }}
+{{- $_ := set $EmailSettings "SendEmailNotifications" true }}
+{{- else }}
+{{- $_ := set $EmailSettings "SendEmailNotifications" false }}
+{{- end }}
+{{- $_ := set $EmailSettings "EnableSignUpWithEmail" (.Values.config.enableSignUpWithEmail | default true) }}
+{{- $_ := set $EmailSettings "FeedbackName" (.Values.config.feedbackName | default "") }}
+{{- $_ := set $EmailSettings "FeedbackEmail" (.Values.config.feedbackEmail | default "") }}
+{{- $_ := set $EmailSettings "SMTPUsername" (.Values.config.smtpUsername | default "") }}
+{{- $_ := set $EmailSettings "SMTPPassword" (.Values.config.smtpPassword | default "") }}
+{{- if and .Values.config.smtpUsername .Values.config.smtpPassword }}
+{{- $_ := set $EmailSettings "EnableSMTPAuth" true }}
+{{- else }}
+{{- $_ := set $EmailSettings "EnableSMTPAuth" false }}
+{{- end }}
+{{- $_ := set $EmailSettings "SMTPServer" (.Values.config.smtpServer | default "") }}
+{{- $_ := set $EmailSettings "SMTPPort" (.Values.config.smtpPort | default "") }}
+{{- $_ := set $EmailSettings "ConnectionSecurity" (.Values.config.smtpConnection | default "") }}
+ # config.enableSignUpWithEmail,
+ # config.feedbackName,
+ # config.feedbackEmail,
+ # config.smtpUsername,
+ # config.smtpPassword,
+ # config.smtpServer,
+ # config.smtpPort,
+ # config.smtpConnection,
+ # are all deprecated, instead use:
+ EmailSettings:
+ {{- $EmailSettings | toYaml | nindent 4 }}
+{{- end }}
+{{- end }}
+{{- end }}
diff --git a/stable/mattermost-team-edition/templates/_warnings.tpl b/stable/mattermost-team-edition/templates/_warnings.tpl
new file mode 100644
index 000000000000..687f628f77b1
--- /dev/null
+++ b/stable/mattermost-team-edition/templates/_warnings.tpl
@@ -0,0 +1,29 @@
+{{- /*
+A template for handling warning messages.
+*/}}
+
+{{- /* Warn about not setting salt and keys explicitly */}}
+{{- define "mattermost.warnings" }}
+{{- with .Values.configJSON }}
+{{- if not (and (.EmailSettings.InviteSalt) (and .FileSettings.PublicLinkSalt .SqlSettings.AtRestEncryptKey)) }}
+WARNING:
+--------
+
+Every `helm upgrade` will generate a new set of keys unless they are set manually like this:
+
+configJSON:
+ {{- if not .EmailSettings.InviteSalt }}
+ EmailSettings:
+ InviteSalt: {{ randAlphaNum 32 }}
+ {{- end }}
+ {{- if not .FileSettings.PublicLinkSalt }}
+ FileSettings:
+ PublicLinkSalt: {{ randAlphaNum 32 }}
+ {{- end }}
+ {{- if not .SqlSettings.AtRestEncryptKey }}
+ SqlSettings:
+ AtRestEncryptKey: {{ randAlphaNum 32 }}
+ {{- end }}
+{{- end }}
+{{- end }}
+{{- end }}
diff --git a/stable/mattermost-team-edition/templates/config.tpl b/stable/mattermost-team-edition/templates/config.tpl
deleted file mode 100644
index 1996afc11e35..000000000000
--- a/stable/mattermost-team-edition/templates/config.tpl
+++ /dev/null
@@ -1,234 +0,0 @@
-{{ define "config.tpl" }}
-{
- "ServiceSettings": {
- "SiteURL": {{ .Values.config.siteUrl | default "" | quote }},
- "LicenseFileLocation": "",
- "ListenAddress": ":8065",
- "ConnectionSecurity": "",
- "TLSCertFile": "",
- "TLSKeyFile": "",
- "UseLetsEncrypt": false,
- "LetsEncryptCertificateCacheFile": "./config/letsencrypt.cache",
- "Forward80To443": false,
- "ReadTimeout": 300,
- "WriteTimeout": 300,
- "MaximumLoginAttempts": 10,
- "GoroutineHealthThreshold": -1,
- "GoogleDeveloperKey": "",
- "EnableOAuthServiceProvider": false,
- "EnableIncomingWebhooks": true,
- "EnableOutgoingWebhooks": true,
- "EnableCommands": true,
- "EnableOnlyAdminIntegrations": false,
- "EnablePostUsernameOverride": false,
- "EnablePostIconOverride": false,
- "EnableLinkPreviews": false,
- "EnableTesting": false,
- "EnableDeveloper": false,
- "EnableSecurityFixAlert": true,
- "EnableInsecureOutgoingConnections": false,
- "EnableMultifactorAuthentication": false,
- "EnforceMultifactorAuthentication": false,
- "AllowCorsFrom": "",
- "SessionLengthWebInDays": 30,
- "SessionLengthMobileInDays": 30,
- "SessionLengthSSOInDays": 30,
- "SessionCacheInMinutes": 10,
- "WebsocketSecurePort": 443,
- "WebsocketPort": 80,
- "WebserverMode": "gzip",
- "EnableCustomEmoji": false,
- "RestrictCustomEmojiCreation": "all",
- "RestrictPostDelete": "all",
- "AllowEditPost": "always",
- "PostEditTimeLimit": 300,
- "TimeBetweenUserTypingUpdatesMilliseconds": 5000,
- "EnablePostSearch": true,
- "EnableUserTypingMessages": true,
- "EnableUserStatuses": true,
- "ClusterLogTimeoutMilliseconds": 2000
- },
- "TeamSettings": {
- "SiteName": {{ .Values.config.siteName | default "Mattermost" | quote }},
- "MaxUsersPerTeam": 50000,
- "EnableTeamCreation": true,
- "EnableUserCreation": true,
- "EnableOpenServer": true,
- "RestrictCreationToDomains": "",
- "EnableCustomBrand": false,
- "CustomBrandText": "",
- "CustomDescriptionText": "",
- "RestrictDirectMessage": "any",
- "RestrictTeamInvite": "all",
- "RestrictPublicChannelManagement": "all",
- "RestrictPrivateChannelManagement": "all",
- "RestrictPublicChannelCreation": "all",
- "RestrictPrivateChannelCreation": "all",
- "RestrictPublicChannelDeletion": "all",
- "RestrictPrivateChannelDeletion": "all",
- "RestrictPrivateChannelManageMembers": "all",
- "UserStatusAwayTimeout": 300,
- "MaxChannelsPerTeam": 50000,
- "MaxNotificationsPerChannel": 1000
- },
- "SqlSettings": {
- {{ if .Values.externalDB.enabled }}
- "DriverName": "{{ .Values.externalDB.externalDriverType }}",
- "DataSource": "{{ .Values.externalDB.externalConnectionString }}",
- {{ else }}
- "DriverName": "mysql",
- "DataSource": "{{ .Values.mysql.mysqlUser }}:{{ .Values.mysql.mysqlPassword }}@tcp({{ .Release.Name }}-mysql:3306)/{{ .Values.mysql.mysqlDatabase }}?charset=utf8mb4,utf8&readTimeout=30s&writeTimeout=30s",
- {{ end }}
- "DataSourceReplicas": [],
- "DataSourceSearchReplicas": [],
- "MaxIdleConns": 20,
- "MaxOpenConns": 35,
- "Trace": false,
- "AtRestEncryptKey": "{{ randAlphaNum 32 }}",
- "QueryTimeout": 30
- },
- "LogSettings": {
- "EnableConsole": true,
- "ConsoleLevel": "INFO",
- "EnableFile": true,
- "FileLevel": "INFO",
- "FileFormat": "",
- "FileLocation": "",
- "EnableWebhookDebugging": true,
- "EnableDiagnostics": true
- },
- "PasswordSettings": {
- "MinimumLength": 5,
- "Lowercase": false,
- "Number": false,
- "Uppercase": false,
- "Symbol": false
- },
- "FileSettings": {
- "EnableFileAttachments": true,
- "MaxFileSize": 52428800,
- {{ if .Values.config.filesAccessKey }}
- "DriverName": "amazons3",
- {{ else }}
- "DriverName": "local",
- {{ end }}
- "Directory": "./data/",
- "EnablePublicLink": false,
- "PublicLinkSalt": "{{ randAlphaNum 32 }}",
- "ThumbnailWidth": 120,
- "ThumbnailHeight": 100,
- "PreviewWidth": 1024,
- "PreviewHeight": 0,
- "ProfileWidth": 128,
- "ProfileHeight": 128,
- "InitialFont": "luximbi.ttf",
- "AmazonS3AccessKeyId": {{ .Values.config.filesAccessKey | default "" | quote }},
- "AmazonS3SecretAccessKey": {{ .Values.config.filesSecretKey | default "" | quote }},
- "AmazonS3Bucket": {{ .Values.config.fileBucketName | default "" | quote }},
- "AmazonS3Region": "",
- "AmazonS3Endpoint": "s3.amazonaws.com",
- "AmazonS3SSL": false,
- "AmazonS3SignV2": false
- },
- "EmailSettings": {
- "EnableSignUpWithEmail": {{ .Values.config.enableSignUpWithEmail }},
- "EnableSignInWithEmail": true,
- "EnableSignInWithUsername": true,
- {{ if .Values.config.smtpServer }}
- "SendEmailNotifications": true,
- {{ else }}
- "SendEmailNotifications": false,
- {{ end }}
- "RequireEmailVerification": false,
- "FeedbackName": {{ .Values.config.feedbackName | default "" | quote }},
- "FeedbackEmail": {{ .Values.config.feedbackEmail | default "" | quote }},
- "FeedbackOrganization": "",
- "SMTPUsername": {{ .Values.config.smtpUsername | default "" | quote }},
- "SMTPPassword": {{ .Values.config.smtpPassword | default "" | quote }},
- {{ if and .Values.config.smtpUsername .Values.config.smtpPassword }}
- "EnableSMTPAuth": true,
- {{ else }}
- "EnableSMTPAuth": false,
- {{ end }}
- "SMTPServer": {{ .Values.config.smtpServer | default "" | quote }},
- "SMTPPort": {{ .Values.config.smtpPort | default "" | quote }},
- "ConnectionSecurity": {{ .Values.config.smtpConnection | default "" | quote }},
- "InviteSalt": "{{ randAlphaNum 32 }}",
- "SendPushNotifications": true,
- "PushNotificationServer": "https://push.mattermost.com",
- "PushNotificationContents": "generic",
- "EnableEmailBatching": false,
- "EmailBatchingBufferSize": 256,
- "EmailBatchingInterval": 30,
- "SkipServerCertificateVerification": false
- },
- "RateLimitSettings": {
- "Enable": false,
- "PerSec": 10,
- "MaxBurst": 100,
- "MemoryStoreSize": 10000,
- "VaryByRemoteAddr": true,
- "VaryByHeader": ""
- },
- "PrivacySettings": {
- "ShowEmailAddress": true,
- "ShowFullName": true
- },
- "SupportSettings": {
- "TermsOfServiceLink": "https://about.mattermost.com/default-terms/",
- "PrivacyPolicyLink": "https://about.mattermost.com/default-privacy-policy/",
- "AboutLink": "https://about.mattermost.com/default-about/",
- "HelpLink": "https://about.mattermost.com/default-help/",
- "ReportAProblemLink": "https://about.mattermost.com/default-report-a-problem/",
- "SupportEmail": "feedback@mattermost.com"
- },
- "AnnouncementSettings": {
- "EnableBanner": false,
- "BannerText": "",
- "BannerColor": "#f2a93b",
- "BannerTextColor": "#333333",
- "AllowBannerDismissal": true
- },
-{{ if .Values.auth.gitlab }}
- "GitLabSettings": {{ .Values.auth.gitlab | toJson }},
-{{ end }}
- "LocalizationSettings": {
- "DefaultServerLocale": "en",
- "DefaultClientLocale": "en",
- "AvailableLocales": ""
- },
- "NativeAppSettings": {
- "AppDownloadLink": "https://about.mattermost.com/downloads/",
- "AndroidAppDownloadLink": "https://about.mattermost.com/mattermost-android-app/",
- "IosAppDownloadLink": "https://about.mattermost.com/mattermost-ios-app/"
- },
- "AnalyticsSettings": {
- "MaxUsersForStatistics": 2500
- },
- "WebrtcSettings": {
- "Enable": false,
- "GatewayWebsocketUrl": "",
- "GatewayAdminUrl": "",
- "GatewayAdminSecret": "",
- "StunURI": "",
- "TurnURI": "",
- "TurnUsername": "",
- "TurnSharedKey": ""
- },
- "DisplaySettings": {
- "CustomUrlSchemes": [],
- "ExperimentalTimezone": true
- },
- "TimezoneSettings": {
- "SupportedTimezonesPath": "timezones.json"
- },
- "PluginSettings": {
- "Enable": true,
- "EnableUploads": true,
- "Directory": "./plugins",
- "ClientDirectory": "./client/plugins",
- "Plugins": {},
- "PluginStates": {}
- }
-}
-{{ end }}
diff --git a/stable/mattermost-team-edition/templates/configmap-config.yaml b/stable/mattermost-team-edition/templates/configmap-config.yaml
deleted file mode 100644
index 5c29dfc17877..000000000000
--- a/stable/mattermost-team-edition/templates/configmap-config.yaml
+++ /dev/null
@@ -1,12 +0,0 @@
-apiVersion: v1
-kind: ConfigMap
-metadata:
- name: {{ include "mattermost-team-edition.fullname" . }}-config-json
- labels:
- app.kubernetes.io/name: {{ include "mattermost-team-edition.name" . }}
- app.kubernetes.io/instance: {{ .Release.Name }}
- app.kubernetes.io/managed-by: {{ .Release.Service }}
- helm.sh/chart: {{ include "mattermost-team-edition.chart" . }}
-data:
- config.json: |
-{{ include "config.tpl" . | printf "%s" | indent 4 }}
diff --git a/stable/mattermost-team-edition/templates/deployment.yaml b/stable/mattermost-team-edition/templates/deployment.yaml
index a94c2d05a851..8f11fa7405c9 100644
--- a/stable/mattermost-team-edition/templates/deployment.yaml
+++ b/stable/mattermost-team-edition/templates/deployment.yaml
@@ -21,7 +21,7 @@ spec:
template:
metadata:
annotations:
- checksum/config: {{ include (print $.Template.BasePath "/configmap-config.yaml") . | sha256sum }}
+ checksum/config: {{ include (print $.Template.BasePath "/secret-config.yaml") . | sha256sum }}
labels:
app.kubernetes.io/name: {{ include "mattermost-team-edition.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
@@ -36,7 +36,7 @@ spec:
command: ["sh", "-c", "until curl --max-time 5 http://{{ .Release.Name }}-mysql:3306; do echo waiting for {{ .Release.Name }}-mysql; sleep 5; done;"]
{{- end }}
{{- if .Values.extraInitContainers }}
-{{ tpl .Values.extraInitContainers . | indent 6 }}
+ {{- .Values.extraInitContainers | toYaml | nindent 6 }}
{{- end }}
containers:
- name: {{ .Chart.Name }}
@@ -44,11 +44,11 @@ spec:
imagePullPolicy: {{ .Values.image.imagePullPolicy }}
env:
{{- if .Values.extraEnvVars }}
-{{ toYaml .Values.extraEnvVars | indent 10 }}
+ {{- .Values.extraEnvVars | toYaml | nindent 10 }}
{{- end }}
ports:
- name: http
- containerPort: 80
+ containerPort: {{ .Values.service.internalPort }}
protocol: TCP
livenessProbe:
initialDelaySeconds: 90
@@ -56,14 +56,14 @@ spec:
periodSeconds: 15
httpGet:
path: /api/v4/system/ping
- port: {{ .Values.service.internalPort }}
+ port: http
readinessProbe:
initialDelaySeconds: 15
timeoutSeconds: 5
periodSeconds: 15
httpGet:
path: /api/v4/system/ping
- port: {{ .Values.service.internalPort }}
+ port: http
volumeMounts:
- mountPath: /mattermost/config/config.json
name: config-json
@@ -71,18 +71,15 @@ spec:
- mountPath: /mattermost/data
name: mattermost-data
resources:
-{{ toYaml .Values.resources | indent 12 }}
+ {{- .Values.resources | toYaml | nindent 12 }}
volumes:
- name: config-json
- configMap:
- name: {{ include "mattermost-team-edition.fullname" . }}-config-json
- items:
- - key: config.json
- path: config.json
+ secret:
+ secretName: {{ include "mattermost-team-edition.fullname" . }}-config-json
- name: mattermost-data
{{ if .Values.persistence.data.enabled }}
persistentVolumeClaim:
- claimName: {{ .Values.persistence.existingClaim | default (include "mattermost-team-edition.fullname" .) }}
+ claimName: {{ .Values.persistence.data.existingClaim | default (include "mattermost-team-edition.fullname" .) }}
{{ else }}
emptyDir: {}
{{ end }}
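
Review note on the hunk above: the probes and the Service now reference the container port by name instead of repeating `{{ .Values.service.internalPort }}`. A minimal sketch of the resulting pattern (the port number shown is the chart's default and assumed here):

```yaml
# Named-port pattern: the numeric port is declared exactly once on the
# container, and everything else refers to it by name.
ports:
  - name: http
    containerPort: 8065   # rendered from .Values.service.internalPort
livenessProbe:
  httpGet:
    path: /api/v4/system/ping
    port: http            # resolves to the containerPort named "http"
```

This keeps the probe and Service definitions in sync automatically when `service.internalPort` changes.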
diff --git a/stable/mattermost-team-edition/templates/ingress.yaml b/stable/mattermost-team-edition/templates/ingress.yaml
index 1a01afe985f9..54357bdec557 100644
--- a/stable/mattermost-team-edition/templates/ingress.yaml
+++ b/stable/mattermost-team-edition/templates/ingress.yaml
@@ -12,15 +12,15 @@ metadata:
app.kubernetes.io/managed-by: {{ .Release.Service }}
helm.sh/chart: {{ include "mattermost-team-edition.chart" . }}
annotations:
-{{ if .Values.ingress.tls }}
+ {{- if .Values.ingress.tls }}
nginx.ingress.kubernetes.io/ssl-redirect: "true"
kubernetes.io/tls-acme: "true"
-{{ else }}
+ {{- else }}
nginx.ingress.kubernetes.io/ssl-redirect: "false"
-{{ end }}
-{{ with $ingress.annotations }}
-{{ toYaml . | indent 4 }}
-{{ end }}
+ {{- end }}
+ {{- with $ingress.annotations }}
+ {{- . | toYaml | nindent 4 }}
+ {{- end }}
spec:
rules:
{{ range $host := $ingress.hosts }}
@@ -34,6 +34,6 @@ spec:
{{ end }}
{{ if $ingress.tls }}
tls:
-{{ toYaml $ingress.tls | indent 4 }}
+ {{- $ingress.tls | toYaml | nindent 4 }}
{{ end }}
{{ end }}
diff --git a/stable/mattermost-team-edition/templates/pvc.yaml b/stable/mattermost-team-edition/templates/pvc.yaml
index 50bb5e7e87cf..8dd705aacb28 100644
--- a/stable/mattermost-team-edition/templates/pvc.yaml
+++ b/stable/mattermost-team-edition/templates/pvc.yaml
@@ -1,4 +1,4 @@
-{{ if and .Values.persistence.data.enabled (not .Values.persistence.data.existingClaim) }}
+{{- if and .Values.persistence.data.enabled (not .Values.persistence.data.existingClaim) -}}
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
@@ -9,20 +9,20 @@ metadata:
app.kubernetes.io/managed-by: {{ .Release.Service }}
helm.sh/chart: {{ include "mattermost-team-edition.chart" . }}
annotations:
- {{ range $key, $value := .Values.persistence.data.annotations }}
+ {{- range $key, $value := .Values.persistence.data.annotations }}
{{ $key }}: {{ $value | quote }}
- {{ end }}
+ {{- end }}
spec:
accessModes:
- {{ .Values.persistence.data.accessMode | quote }}
resources:
requests:
storage: {{ .Values.persistence.data.size | quote }}
-{{ if .Values.persistence.data.storageClass }}
-{{ if (eq "-" .Values.persistence.data.storageClass) }}
+{{- if .Values.persistence.data.storageClass }}
+{{- if (eq "-" .Values.persistence.data.storageClass) }}
storageClassName: ""
-{{ else }}
+{{- else }}
storageClassName: "{{ .Values.persistence.data.storageClass }}"
-{{ end }}
-{{ end }}
-{{ end }}
+{{- end }}
+{{- end }}
+{{- end }}
diff --git a/stable/mattermost-team-edition/templates/secret-config.yaml b/stable/mattermost-team-edition/templates/secret-config.yaml
new file mode 100644
index 000000000000..c22d867eecd9
--- /dev/null
+++ b/stable/mattermost-team-edition/templates/secret-config.yaml
@@ -0,0 +1,40 @@
+apiVersion: v1
+kind: Secret
+metadata:
+ name: {{ include "mattermost-team-edition.fullname" . }}-config-json
+ labels:
+ app.kubernetes.io/name: {{ include "mattermost-team-edition.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ helm.sh/chart: {{ include "mattermost-team-edition.chart" . }}
+type: Opaque
+data:
+ {{- /* Make a deep enough copy of the default config */}}
+ {{- $c := dict }}
+ {{- range $key, $dictVal := .Values.configJSON }}
+ {{- $dictCopy := merge (dict) $dictVal }}
+ {{- $_ := set $c $key $dictCopy }}
+ {{- end }}
+
+ {{- /* Update the copied default config based on .Values */}}
+
+ {{- if or .Values.configJSON.SqlSettings.DriverName .Values.configJSON.SqlSettings.DataSource }}
+  {{- $message := "Use 'mysql' or 'externalDB' instead of a configuration with:\n\nconfigJSON:\n  SqlSettings:\n    DriverName: ...\n    DataSource: ..." }}
+ {{- print "\n\nDIRECT CONFIGURATION NOT SUPPORTED:\n-----------------------------------\n\n" $message | fail }}
+ {{- end }}
+
+ {{- if .Values.externalDB.enabled }}
+ {{- $_ := set $c.SqlSettings "DriverName" (.Values.externalDB.externalDriverType) }}
+ {{- $_ := set $c.SqlSettings "DataSource" (.Values.externalDB.externalConnectionString) }}
+ {{- else }}
+ {{- $_ := set $c.SqlSettings "DriverName" "mysql" }}
+ {{- $_ := set $c.SqlSettings "DataSource" (print .Values.mysql.mysqlUser ":" .Values.mysql.mysqlPassword "@tcp(" .Release.Name "-mysql:3306)/" .Values.mysql.mysqlDatabase "?charset=utf8mb4,utf8&readTimeout=30s&writeTimeout=30s") }}
+ {{- end }}
+
+ {{- $_ := set $c.SqlSettings "AtRestEncryptKey" (.Values.configJSON.SqlSettings.AtRestEncryptKey | default (randAlphaNum 32)) }}
+ {{- $_ := set $c.FileSettings "PublicLinkSalt" (.Values.configJSON.FileSettings.PublicLinkSalt | default (randAlphaNum 32)) }}
+ {{- $_ := set $c.EmailSettings "InviteSalt" (.Values.configJSON.EmailSettings.InviteSalt | default (randAlphaNum 32)) }}
+
+ {{- /* Render the processed config as a JSON string */}}
+ {{- /* NOTE: Mounted at /mattermost/config/config.json on the mattermost pod */}}
+ config.json: {{ $c | toJson | b64enc }}
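
For reference, the logic the new `secret-config.yaml` template performs can be sketched in Python. This is a hypothetical helper, not part of the chart: it shallow-copies each top-level section, rejects direct `SqlSettings` configuration, fills in the datasource, defaults the encryption key (the two salts are defaulted the same way), and base64-encodes the JSON as Kubernetes Secret `data` requires.

```python
import base64
import json
import secrets
import string


def rand_alpha_num(n: int) -> str:
    """Rough stand-in for Sprig's randAlphaNum."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(n))


def render_config_secret(config_json, external_db, mysql, release):
    """Sketch of templates/secret-config.yaml's data pipeline."""
    # Copy each top-level section so defaults are not mutated.
    c = {key: dict(section) for key, section in config_json.items()}

    # Direct SqlSettings configuration is rejected, as in the template.
    if config_json["SqlSettings"].get("DriverName") or config_json["SqlSettings"].get("DataSource"):
        raise ValueError("DIRECT CONFIGURATION NOT SUPPORTED: use 'mysql' or 'externalDB'")

    if external_db.get("enabled"):
        c["SqlSettings"]["DriverName"] = external_db["externalDriverType"]
        c["SqlSettings"]["DataSource"] = external_db["externalConnectionString"]
    else:
        c["SqlSettings"]["DriverName"] = "mysql"
        c["SqlSettings"]["DataSource"] = (
            f"{mysql['mysqlUser']}:{mysql['mysqlPassword']}"
            f"@tcp({release}-mysql:3306)/{mysql['mysqlDatabase']}"
            "?charset=utf8mb4,utf8&readTimeout=30s&writeTimeout=30s"
        )

    # Generate the at-rest key only when the user did not supply one.
    c["SqlSettings"]["AtRestEncryptKey"] = (
        config_json["SqlSettings"].get("AtRestEncryptKey") or rand_alpha_num(32)
    )
    return base64.b64encode(json.dumps(c).encode()).decode()
```

One consequence of the `randAlphaNum` defaulting, mirrored above: every render produces a new key unless the value is pinned in `configJSON`.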
diff --git a/stable/mattermost-team-edition/templates/service.yaml b/stable/mattermost-team-edition/templates/service.yaml
index a4e8e8434ce9..08f061b3bd12 100644
--- a/stable/mattermost-team-edition/templates/service.yaml
+++ b/stable/mattermost-team-edition/templates/service.yaml
@@ -15,6 +15,6 @@ spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.externalPort }}
- targetPort: {{ .Values.service.internalPort }}
+ targetPort: http
protocol: TCP
name: {{ include "mattermost-team-edition.name" . }}
diff --git a/stable/mattermost-team-edition/values.yaml b/stable/mattermost-team-edition/values.yaml
index fb74e949bd17..cc4c10900bd3 100644
--- a/stable/mattermost-team-edition/values.yaml
+++ b/stable/mattermost-team-edition/values.yaml
@@ -3,7 +3,7 @@
# Declare variables to be passed into your templates.
image:
repository: mattermost/mattermost-team-edition
- tag: 5.7.0
+ tag: 5.9.0
imagePullPolicy: IfNotPresent
initContainerImage:
@@ -31,23 +31,6 @@ persistence:
accessMode: ReadWriteOnce
# existingClaim: ""
-# Mattermost configuration:
-config:
- siteUrl: ""
- siteName: "Mattermost"
- filesAccessKey:
- filesSecretKey:
- fileBucketName:
- smtpServer:
- smtpPort:
- # empty, TLS, or STARTTLS
- smtpConnection:
- smtpUsername:
- smtpPassword:
- feedbackEmail:
- feedbackName:
- enableSignUpWithEmail: true
-
service:
type: ClusterIP
externalPort: 8065
@@ -56,7 +39,7 @@ service:
ingress:
enabled: false
path: /
- annotations:
+ annotations: {}
# kubernetes.io/ingress.class: nginx
# certmanager.k8s.io/issuer: your-issuer
# nginx.ingress.kubernetes.io/proxy-body-size: 50m
@@ -79,28 +62,17 @@ ingress:
# hosts:
# - mattermost.example.com
-auth:
- gitlab:
- # Enable: "false"
- # Secret: ""
- # Id: ""
- # Scope: ""
- # AuthEndpoint:
- # TokenEndpoint:
- # UserApiEndpoint:
-## If use this please disable the mysql chart, setting the config mysql.enable to false
+## If using this, please disable the mysql chart by setting mysql.enabled to false
externalDB:
enabled: false
- # externalDriverType: "postgres" #or mysql
- # externalConnectionString: "postgres://:@:5432/?sslmode=disable&connect_timeout=10"
- # for mysql: ":@tcp(:3306)/?charset=utf8mb4,utf8&readTimeout=30s&writeTimeout=30s"
- # When using existingUser and ExistingSecret (for example when configuring to use MM with Gitlab helm charts) you will need to
- # define a initContainer to read those configs and create the database in the existing gitlab and set the config.json
- # See the initContainer example below
- # existingUser: gitlab
- # existingSecret: "gitlab-postgresql-password"
+ ## postgres or mysql
+ externalDriverType: ""
+
+ ## postgres: "postgres://:@:5432/?sslmode=disable&connect_timeout=10"
+ ## mysql: ":@tcp(:3306)/?charset=utf8mb4,utf8&readTimeout=30s&writeTimeout=30s"
+ externalConnectionString: ""
mysql:
enabled: true
@@ -127,11 +99,13 @@ mysql:
# existingClaim: ""
## Additional env vars
-extraEnvVars:
+extraEnvVars: []
# This is an example of extra env vars when using with the deployment with GitLab Helm Charts
# - name: POSTGRES_PASSWORD_GITLAB
# valueFrom:
# secretKeyRef:
+ # # NOTE: Needs to be manually created
+ # # kubectl create secret generic gitlab-postgresql-password --namespace --from-literal postgres-password=
# name: gitlab-postgresql-password
# key: postgres-password
# - name: POSTGRES_USER_GITLAB
@@ -148,8 +122,8 @@ extraEnvVars:
# value: postgres://$(POSTGRES_USER_GITLAB):$(POSTGRES_PASSWORD_GITLAB)@$(POSTGRES_HOST_GITLAB):$(POSTGRES_PORT_GITLAB)/$(POSTGRES_DB_NAME_MATTERMOST)?sslmode=disable&connect_timeout=10
## Additional init containers
-extraInitContainers: |
-# This is an example of extra Init Container when using with the deployment with GitLab Helm Charts
+extraInitContainers: []
+ # This is an example of extra Init Container when using with the deployment with GitLab Helm Charts
# - name: bootstrap-database
# image: "postgres:9.6-alpine"
# imagePullPolicy: IfNotPresent
@@ -179,3 +153,231 @@ extraInitContainers: |
# PGPASSWORD=$POSTGRES_PASSWORD_GITLAB createdb -h $POSTGRES_HOST_GITLAB -p $POSTGRES_PORT_GITLAB -U $POSTGRES_USER_GITLAB $POSTGRES_DB_NAME_MATTERMOST
# echo "Done"
# fi
+
+# NOTE: These act as the default values for the config.json file read by the
+# mattermost server itself. You can override the configJSON object just like
+# any other Helm template value. Since it is an object, the values you provide
+# are merged with these defaults. Also note that this is YAML, so you can use
+# either JSON or YAML syntax, as JSON is a subset of YAML. Either way, the
+# generated config.json file will be valid JSON.
+configJSON: {
+ "ServiceSettings": {
+ "SiteURL": "",
+ "LicenseFileLocation": "",
+ "ListenAddress": ":8065",
+ "ConnectionSecurity": "",
+ "TLSCertFile": "",
+ "TLSKeyFile": "",
+ "UseLetsEncrypt": false,
+ "LetsEncryptCertificateCacheFile": "./config/letsencrypt.cache",
+ "Forward80To443": false,
+ "ReadTimeout": 300,
+ "WriteTimeout": 300,
+ "MaximumLoginAttempts": 10,
+ "GoroutineHealthThreshold": -1,
+ "GoogleDeveloperKey": "",
+ "EnableOAuthServiceProvider": false,
+ "EnableIncomingWebhooks": true,
+ "EnableOutgoingWebhooks": true,
+ "EnableCommands": true,
+ "EnableOnlyAdminIntegrations": false,
+ "EnablePostUsernameOverride": false,
+ "EnablePostIconOverride": false,
+ "EnableLinkPreviews": false,
+ "EnableTesting": false,
+ "EnableDeveloper": false,
+ "EnableSecurityFixAlert": true,
+ "EnableInsecureOutgoingConnections": false,
+ "EnableMultifactorAuthentication": false,
+ "EnforceMultifactorAuthentication": false,
+ "AllowCorsFrom": "",
+ "SessionLengthWebInDays": 30,
+ "SessionLengthMobileInDays": 30,
+ "SessionLengthSSOInDays": 30,
+ "SessionCacheInMinutes": 10,
+ "WebsocketSecurePort": 443,
+ "WebsocketPort": 80,
+ "WebserverMode": "gzip",
+ "EnableCustomEmoji": false,
+ "RestrictCustomEmojiCreation": "all",
+ "RestrictPostDelete": "all",
+ "AllowEditPost": "always",
+ "PostEditTimeLimit": 300,
+ "TimeBetweenUserTypingUpdatesMilliseconds": 5000,
+ "EnablePostSearch": true,
+ "EnableUserTypingMessages": true,
+ "EnableUserStatuses": true,
+ "ClusterLogTimeoutMilliseconds": 2000
+ },
+ "TeamSettings": {
+ "SiteName": "Mattermost",
+ "MaxUsersPerTeam": 50000,
+ "EnableTeamCreation": true,
+ "EnableUserCreation": true,
+ "EnableOpenServer": true,
+ "RestrictCreationToDomains": "",
+ "EnableCustomBrand": false,
+ "CustomBrandText": "",
+ "CustomDescriptionText": "",
+ "RestrictDirectMessage": "any",
+ "RestrictTeamInvite": "all",
+ "RestrictPublicChannelManagement": "all",
+ "RestrictPrivateChannelManagement": "all",
+ "RestrictPublicChannelCreation": "all",
+ "RestrictPrivateChannelCreation": "all",
+ "RestrictPublicChannelDeletion": "all",
+ "RestrictPrivateChannelDeletion": "all",
+ "RestrictPrivateChannelManageMembers": "all",
+ "UserStatusAwayTimeout": 300,
+ "MaxChannelsPerTeam": 50000,
+ "MaxNotificationsPerChannel": 1000
+ },
+ "SqlSettings": {
+ "DriverName": "",
+ "DataSource": "",
+ "DataSourceReplicas": [],
+ "DataSourceSearchReplicas": [],
+ "MaxIdleConns": 20,
+ "MaxOpenConns": 35,
+ "Trace": false,
+ "AtRestEncryptKey": "",
+ "QueryTimeout": 30
+ },
+ "LogSettings": {
+ "EnableConsole": true,
+ "ConsoleLevel": "INFO",
+ "EnableFile": true,
+ "FileLevel": "INFO",
+ "FileFormat": "",
+ "FileLocation": "",
+ "EnableWebhookDebugging": true,
+ "EnableDiagnostics": true
+ },
+ "PasswordSettings": {
+ "MinimumLength": 5,
+ "Lowercase": false,
+ "Number": false,
+ "Uppercase": false,
+ "Symbol": false
+ },
+ "FileSettings": {
+ "EnableFileAttachments": true,
+ "MaxFileSize": 52428800,
+ "DriverName": "local",
+ "Directory": "./data/",
+ "EnablePublicLink": false,
+ "PublicLinkSalt": "",
+ "ThumbnailWidth": 120,
+ "ThumbnailHeight": 100,
+ "PreviewWidth": 1024,
+ "PreviewHeight": 0,
+ "ProfileWidth": 128,
+ "ProfileHeight": 128,
+ "InitialFont": "luximbi.ttf",
+ "AmazonS3AccessKeyId": "",
+ "AmazonS3SecretAccessKey": "",
+ "AmazonS3Bucket": "",
+ "AmazonS3Region": "",
+ "AmazonS3Endpoint": "s3.amazonaws.com",
+ "AmazonS3SSL": false,
+ "AmazonS3SignV2": false
+ },
+ "EmailSettings": {
+ "EnableSignUpWithEmail": true,
+ "EnableSignInWithEmail": true,
+ "EnableSignInWithUsername": true,
+ "SendEmailNotifications": false,
+ "RequireEmailVerification": false,
+ "FeedbackName": "",
+ "FeedbackEmail": "",
+ "FeedbackOrganization": "",
+ "SMTPUsername": "",
+ "SMTPPassword": "",
+ "EnableSMTPAuth": "",
+ "SMTPServer": "",
+ "SMTPPort": "",
+ "ConnectionSecurity": "",
+ "InviteSalt": "",
+ "SendPushNotifications": true,
+ "PushNotificationServer": "https://push-test.mattermost.com",
+ "PushNotificationContents": "generic",
+ "EnableEmailBatching": false,
+ "EmailBatchingBufferSize": 256,
+ "EmailBatchingInterval": 30,
+ "SkipServerCertificateVerification": false
+ },
+ "RateLimitSettings": {
+ "Enable": false,
+ "PerSec": 10,
+ "MaxBurst": 100,
+ "MemoryStoreSize": 10000,
+ "VaryByRemoteAddr": true,
+ "VaryByHeader": ""
+ },
+ "PrivacySettings": {
+ "ShowEmailAddress": true,
+ "ShowFullName": true
+ },
+ "SupportSettings": {
+ "TermsOfServiceLink": "https://about.mattermost.com/default-terms/",
+ "PrivacyPolicyLink": "https://about.mattermost.com/default-privacy-policy/",
+ "AboutLink": "https://about.mattermost.com/default-about/",
+ "HelpLink": "https://about.mattermost.com/default-help/",
+ "ReportAProblemLink": "https://about.mattermost.com/default-report-a-problem/",
+ "SupportEmail": "feedback@mattermost.com"
+ },
+ "AnnouncementSettings": {
+ "EnableBanner": false,
+ "BannerText": "",
+ "BannerColor": "#f2a93b",
+ "BannerTextColor": "#333333",
+ "AllowBannerDismissal": true
+ },
+ "GitLabSettings": {
+ "Enable": false,
+ "Secret": "",
+ "Id": "",
+ "Scope": "",
+ "AuthEndpoint": "",
+ "TokenEndpoint": "",
+ "UserApiEndpoint": ""
+ },
+ "LocalizationSettings": {
+ "DefaultServerLocale": "en",
+ "DefaultClientLocale": "en",
+ "AvailableLocales": ""
+ },
+ "NativeAppSettings": {
+ "AppDownloadLink": "https://about.mattermost.com/downloads/",
+ "AndroidAppDownloadLink": "https://about.mattermost.com/mattermost-android-app/",
+ "IosAppDownloadLink": "https://about.mattermost.com/mattermost-ios-app/"
+ },
+ "AnalyticsSettings": {
+ "MaxUsersForStatistics": 2500
+ },
+ "WebrtcSettings": {
+ "Enable": false,
+ "GatewayWebsocketUrl": "",
+ "GatewayAdminUrl": "",
+ "GatewayAdminSecret": "",
+ "StunURI": "",
+ "TurnURI": "",
+ "TurnUsername": "",
+ "TurnSharedKey": ""
+ },
+ "DisplaySettings": {
+ "CustomUrlSchemes": [],
+ "ExperimentalTimezone": true
+ },
+ "TimezoneSettings": {
+ "SupportedTimezonesPath": "timezones.json"
+ },
+ "PluginSettings": {
+ "Enable": true,
+ "EnableUploads": true,
+ "Directory": "./plugins",
+ "ClientDirectory": "./client/plugins",
+ "Plugins": {},
+ "PluginStates": {}
+ }
+}
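
Since user-supplied `configJSON` values merge with the defaults above, a minimal override needs only the keys being changed. A hypothetical values file (names are illustrative):

```yaml
# my-values.yaml -- only the keys you set are overridden;
# everything else falls back to the chart defaults.
configJSON:
  TeamSettings:
    SiteName: "Acme Chat"
  ServiceSettings:
    EnableCustomEmoji: true
```

Applied the usual way, e.g. `helm upgrade --install mattermost stable/mattermost-team-edition -f my-values.yaml`.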
diff --git a/stable/mcrouter/Chart.yaml b/stable/mcrouter/Chart.yaml
index dc3eddcd547b..bc9760db8afc 100644
--- a/stable/mcrouter/Chart.yaml
+++ b/stable/mcrouter/Chart.yaml
@@ -1,6 +1,7 @@
+apiVersion: v1
name: mcrouter
home: https://github.com/facebook/mcrouter
-version: 0.1.1
+version: 1.0.0
appVersion: 0.36.0
description: Mcrouter is a memcached protocol router for scaling memcached deployments.
sources:
diff --git a/stable/mediawiki/Chart.yaml b/stable/mediawiki/Chart.yaml
index 22d271f85e1f..327b67fbdb59 100644
--- a/stable/mediawiki/Chart.yaml
+++ b/stable/mediawiki/Chart.yaml
@@ -1,6 +1,7 @@
+apiVersion: v1
name: mediawiki
-version: 6.0.2
-appVersion: 1.32.0
+version: 6.2.1
+appVersion: 1.32.1
description: Extremely powerful, scalable software and a feature-rich wiki implementation that uses PHP to process and display data stored in a database.
home: http://www.mediawiki.org/
icon: https://bitnami.com/assets/stacks/mediawiki/img/mediawiki-stack-220x234.png
diff --git a/stable/mediawiki/README.md b/stable/mediawiki/README.md
index a4cfc004e60f..1bcc2cc407d2 100644
--- a/stable/mediawiki/README.md
+++ b/stable/mediawiki/README.md
@@ -14,7 +14,7 @@ This chart bootstraps a [MediaWiki](https://github.com/bitnami/bitnami-docker-me
It also packages the [Bitnami MariaDB chart](https://github.com/kubernetes/charts/tree/master/stable/mariadb) which is required for bootstrapping a MariaDB deployment for the database requirements of the MediaWiki application.
-Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters.
+Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters. This chart has been tested to work with NGINX Ingress, cert-manager, fluentd and Prometheus on top of the [BKPR](https://kubeprod.io/).
## Prerequisites
@@ -50,6 +50,7 @@ The following table lists the configurable parameters of the MediaWiki chart and
| Parameter | Description | Default |
|--------------------------------------|-------------------------------------------------------------|---------------------------------------------------------|
| `global.imageRegistry` | Global Docker image registry | `nil` |
+| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) |
| `image.registry` | MediaWiki image registry | `docker.io` |
| `image.repository` | MediaWiki Image name | `bitnami/mediawiki` |
| `image.tag` | MediaWiki Image tag | `{VERSION}` |
diff --git a/stable/mediawiki/templates/_helpers.tpl b/stable/mediawiki/templates/_helpers.tpl
index ddeeb9dea072..57adf0e7ec53 100644
--- a/stable/mediawiki/templates/_helpers.tpl
+++ b/stable/mediawiki/templates/_helpers.tpl
@@ -56,9 +56,57 @@ We truncate at 63 chars because some Kubernetes name fields are limited to this
{{/*
Return the proper image name (for the metrics image)
*/}}
-{{- define "metrics.image" -}}
-{{- $registryName := .Values.metrics.image.registry -}}
+{{- define "mediawiki.metrics.image" -}}
+{{- $registryName := .Values.metrics.image.registry -}}
{{- $repositoryName := .Values.metrics.image.repository -}}
{{- $tag := .Values.metrics.image.tag | toString -}}
-{{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+{{/*
+Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
+but Helm 2.9 and 2.10 don't support it, so we need to implement this if-else logic.
+Also, we can't use a single if because lazy evaluation is not an option.
+*/}}
+{{- if .Values.global }}
+ {{- if .Values.global.imageRegistry }}
+ {{- printf "%s/%s:%s" .Values.global.imageRegistry $repositoryName $tag -}}
+ {{- else -}}
+ {{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+ {{- end -}}
+{{- else -}}
+ {{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Return the proper Docker Image Registry Secret Names
+*/}}
+{{- define "mediawiki.imagePullSecrets" -}}
+{{/*
+Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
+but Helm 2.9 and 2.10 do not support it, so we need to implement this if-else logic.
+Also, we cannot use a single if because lazy evaluation is not an option.
+*/}}
+{{- if .Values.global }}
+{{- if .Values.global.imagePullSecrets }}
+imagePullSecrets:
+{{- range .Values.global.imagePullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- else if or .Values.image.pullSecrets .Values.metrics.image.pullSecrets }}
+imagePullSecrets:
+{{- range .Values.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- range .Values.metrics.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- end -}}
+{{- else if or .Values.image.pullSecrets .Values.metrics.image.pullSecrets }}
+imagePullSecrets:
+{{- range .Values.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- range .Values.metrics.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- end -}}
{{- end -}}
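
A values fragment exercising the new `mediawiki.imagePullSecrets` helper (secret names are hypothetical). Per the helper's branching, global secrets take precedence over the per-image lists when both are set:

```yaml
global:
  imagePullSecrets:
    - my-global-registry-secret   # used when set; per-image lists are ignored
image:
  pullSecrets:
    - my-app-registry-secret      # used only when no global secrets are set
metrics:
  image:
    pullSecrets:
      - my-exporter-registry-secret
```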
diff --git a/stable/mediawiki/templates/deployment.yaml b/stable/mediawiki/templates/deployment.yaml
index 851041e03140..25cd107d36c0 100644
--- a/stable/mediawiki/templates/deployment.yaml
+++ b/stable/mediawiki/templates/deployment.yaml
@@ -30,12 +30,7 @@ spec:
{{- end }}
{{- end }}
spec:
- {{- if .Values.image.pullSecrets }}
- imagePullSecrets:
- {{- range .Values.image.pullSecrets }}
- - name: {{ . }}
- {{- end}}
- {{- end }}
+{{- include "mediawiki.imagePullSecrets" . | indent 6 }}
hostAliases:
- ip: "127.0.0.1"
hostnames:
@@ -153,7 +148,7 @@ spec:
subPath: mediawiki
{{- if .Values.metrics.enabled }}
- name: metrics
- image: {{ template "metrics.image" . }}
+ image: {{ template "mediawiki.metrics.image" . }}
imagePullPolicy: {{ .Values.metrics.image.pullPolicy | quote }}
command: [ '/bin/apache_exporter', '-scrape_uri', 'http://status.localhost:80/server-status/?auto']
ports:
diff --git a/stable/mediawiki/values.yaml b/stable/mediawiki/values.yaml
index 87a94662e4c8..07635cafa897 100644
--- a/stable/mediawiki/values.yaml
+++ b/stable/mediawiki/values.yaml
@@ -1,8 +1,11 @@
-## Global Docker image registry
-## Please, note that this will override the image registry for all the images, including dependencies, configured to use the global value
+## Global Docker image parameters
+## Please note that this will override the image parameters for all images, including dependencies, configured to use the global value
+## Currently available global Docker image parameters: imageRegistry and imagePullSecrets
##
# global:
-# imageRegistry:
+# imageRegistry: myRegistryName
+# imagePullSecrets:
+# - myRegistryKeySecretName
## Bitnami DokuWiki image version
## ref: https://hub.docker.com/r/bitnami/mediawiki/tags/
@@ -10,7 +13,7 @@
image:
registry: docker.io
repository: bitnami/mediawiki
- tag: 1.32.0
+ tag: 1.32.1
## Specify a imagePullPolicy
## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
@@ -21,7 +24,7 @@ image:
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
- # - myRegistrKeySecretName
+ # - myRegistryKeySecretName
## User of the application
## ref: https://github.com/bitnami/bitnami-docker-mediawiki#environment-variables
@@ -264,7 +267,7 @@ metrics:
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
- # - myRegistrKeySecretName
+ # - myRegistryKeySecretName
## Metrics exporter pod Annotation and Labels
podAnnotations:
prometheus.io/scrape: "true"
diff --git a/stable/memcached/Chart.yaml b/stable/memcached/Chart.yaml
index 90e5ffc2e947..ccb3a31fa81b 100644
--- a/stable/memcached/Chart.yaml
+++ b/stable/memcached/Chart.yaml
@@ -1,6 +1,7 @@
+apiVersion: v1
name: memcached
-version: 2.5.0
-appVersion: 1.5.6
+version: 2.8.2
+appVersion: 1.5.12
description: Free & open source, high-performance, distributed memory object caching
system.
keywords:
diff --git a/stable/memcached/README.md b/stable/memcached/README.md
index 7b9ec3468843..2d7a30283f4f 100644
--- a/stable/memcached/README.md
+++ b/stable/memcached/README.md
@@ -46,6 +46,7 @@ The following table lists the configurable parameters of the Memcached chart and
| `imagePullPolicy` | Image pull policy | `Always` if `imageTag` is `latest`, else `IfNotPresent` |
| `memcached.verbosity` | Verbosity level (v, vv, or vvv) | Un-set. |
| `memcached.maxItemMemory` | Max memory for items (in MB) | `64` |
+| `memcached.extraArgs` | Additional memcached arguments | `[]` |
| `metrics.enabled` | Expose metrics in prometheus format | false |
| `metrics.image` | The image to pull and run for the metrics exporter | A recent official memcached tag |
| `metrics.imagePullPolicy` | Image pull policy | `Always` if `imageTag` is `latest`, else `IfNotPresent` |
@@ -54,6 +55,10 @@ The following table lists the configurable parameters of the Memcached chart and
| `extraVolumes` | Volume definitions to add as string | Un-set |
| `kind` | Install as StatefulSet or Deployment | StatefulSet |
| `podAnnotations` | Map of annotations to add to the pod(s) | `{}` |
+| `podLabels` | Custom labels to be applied to the statefulset | Un-set |
+| `nodeSelector` | Simple pod scheduling control | `{}` |
+| `tolerations` | Pod taint tolerations for scheduling | `{}` |
+| `affinity` | Advanced pod scheduling control | `{}` |
The above parameters map to `memcached` params. For more information please refer to the [Memcached documentation](https://github.com/memcached/memcached/wiki/ConfiguringServer).
diff --git a/stable/memcached/templates/pdb.yaml b/stable/memcached/templates/pdb.yaml
index cbd64de10942..ee83e0515e65 100644
--- a/stable/memcached/templates/pdb.yaml
+++ b/stable/memcached/templates/pdb.yaml
@@ -2,6 +2,11 @@ apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: {{ template "memcached.fullname" . }}
+ labels:
+ app: {{ template "memcached.fullname" . }}
+ chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ release: "{{ .Release.Name }}"
+ heritage: "{{ .Release.Service }}"
spec:
selector:
matchLabels:
@@ -10,4 +15,3 @@ spec:
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
minAvailable: {{ .Values.pdbMinAvailable }}
-
\ No newline at end of file
diff --git a/stable/memcached/templates/statefulset.yaml b/stable/memcached/templates/statefulset.yaml
index 007c71fb2673..3cb6b57cd2ac 100644
--- a/stable/memcached/templates/statefulset.yaml
+++ b/stable/memcached/templates/statefulset.yaml
@@ -17,6 +17,9 @@ spec:
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
+{{- with .Values.podLabels }}
+{{ toYaml . | indent 8 }}
+{{- end}}
{{- with .Values.podAnnotations }}
annotations:
{{ toYaml . | indent 8 }}
@@ -47,7 +50,7 @@ spec:
imagePullPolicy: {{ default "" .Values.imagePullPolicy | quote }}
command:
- memcached
- - -m {{ .Values.memcached.maxItemMemory }}
+ - -m {{ .Values.memcached.maxItemMemory }}
{{- if .Values.memcached.extendedOptions }}
- -o
- {{ .Values.memcached.extendedOptions }}
@@ -55,6 +58,9 @@ spec:
{{- if .Values.memcached.verbosity }}
- -{{ .Values.memcached.verbosity }}
{{- end }}
+{{- with .Values.memcached.extraArgs }}
+{{ toYaml . | indent 8 }}
+{{- end }}
ports:
- name: memcache
containerPort: 11211
@@ -91,3 +97,11 @@ spec:
nodeSelector:
{{ toYaml . | trim | indent 8}}
{{- end }}
+{{- with .Values.tolerations }}
+ tolerations:
+{{ toYaml . | trim | indent 8}}
+{{- end }}
+{{- with .Values.affinity }}
+ affinity:
+{{ toYaml . | trim | indent 8}}
+{{- end }}
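
The new `extraArgs`, `tolerations`, and `affinity` values are rendered verbatim via `toYaml`, so they take standard Kubernetes shapes. A hypothetical values fragment (the taint key is illustrative, not from the chart):

```yaml
memcached:
  extraArgs:
    - -I 2m          # raise the max item size (example from values.yaml)
tolerations:         # rendered as-is into the pod spec's tolerations field
  - key: cache
    operator: Equal
    value: memcached
    effect: NoSchedule
```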
diff --git a/stable/memcached/values.yaml b/stable/memcached/values.yaml
index a2c332679d70..e0f2cd202991 100644
--- a/stable/memcached/values.yaml
+++ b/stable/memcached/values.yaml
@@ -1,7 +1,7 @@
## Memcached image and tag
## ref: https://hub.docker.com/r/library/memcached/tags/
##
-image: memcached:1.5.6-alpine
+image: memcached:1.5.12-alpine
## Specify a imagePullPolicy
## 'Always' if imageTag is 'latest', else set to 'IfNotPresent'
@@ -29,6 +29,12 @@ memcached:
verbosity: v
extendedOptions: modern
+ ## Additional command line arguments to pass to memcached
+  ## E.g. to specify a maximum item size
+ ## extraArgs:
+ ## - -I 2m
+ extraArgs: []
+
## Define various attributes of the service
serviceAnnotations: {}
# prometheus.io/scrape: "true"
@@ -44,8 +50,17 @@ resources:
memory: 64Mi
cpu: 50m
+## Key:value pair for assigning pod to specific sets of nodes
+## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
nodeSelector: {}
+## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
+tolerations: []
+
+## Advanced scheduling controls
+## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
+affinity: {}
+
metrics:
## Expose memcached metrics in Prometheus format
enabled: false
@@ -68,5 +83,10 @@ extraContainers: |
extraVolumes: |
+## Custom metadata labels to be applied to statefulset and pods
+# podLabels:
+# foo: "bar"
+# bar: "foo"
+
# To be added to the server pod(s)
podAnnotations: {}
diff --git a/stable/mercure/.helmignore b/stable/mercure/.helmignore
new file mode 100644
index 000000000000..a0482efdf830
--- /dev/null
+++ b/stable/mercure/.helmignore
@@ -0,0 +1,23 @@
+# Patterns to ignore when building packages.
+# This supports shell glob matching, relative path matching, and
+# negation (prefixed with !). Only one pattern per line.
+.DS_Store
+# Common VCS dirs
+.git/
+.gitignore
+.bzr/
+.bzrignore
+.hg/
+.hgignore
+.svn/
+# Common backup files
+*.swp
+*.bak
+*.tmp
+*~
+# Various IDEs
+.project
+.idea/
+*.tmproj
+.vscode/
+OWNERS
diff --git a/stable/mercure/Chart.yaml b/stable/mercure/Chart.yaml
new file mode 100644
index 000000000000..eb3a3c61d098
--- /dev/null
+++ b/stable/mercure/Chart.yaml
@@ -0,0 +1,16 @@
+apiVersion: v1
+appVersion: "0.5.0"
+description: The Mercure hub allows pushing data updates to web browsers and other HTTP clients in a convenient, fast, reliable and battery-efficient way, using the Mercure protocol
+name: mercure
+version: 1.0.2
+keywords:
+- mercure
+- hub
+- push
+home: https://mercure.rocks
+icon: https://cdn.jsdelivr.net/gh/dunglas/mercure/public/mercure.svg
+sources:
+- https://github.com/dunglas/mercure
+maintainers:
+- name: dunglas
+ email: dunglas+mercure@gmail.com
diff --git a/stable/mercure/OWNERS b/stable/mercure/OWNERS
new file mode 100644
index 000000000000..f047cf1b1d92
--- /dev/null
+++ b/stable/mercure/OWNERS
@@ -0,0 +1,6 @@
+approvers:
+- dunglas
+- pborreli
+reviewers:
+- dunglas
+- pborreli
diff --git a/stable/mercure/README.md b/stable/mercure/README.md
new file mode 100644
index 000000000000..6effa9a5c42a
--- /dev/null
+++ b/stable/mercure/README.md
@@ -0,0 +1,93 @@
+# Mercure
+
+[Mercure](https://mercure.rocks) is a protocol for pushing data updates to web browsers and other HTTP clients in a convenient, fast, reliable and battery-efficient way.
+
+## TL;DR;
+
+```console
+$ helm install stable/mercure
+```
+
+## Introduction
+
+This chart bootstraps a [Mercure Hub](https://mercure.rocks) deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
+
+## Prerequisites
+
+- Kubernetes 1.9+ (the chart uses `apps/v1` Deployments)
+
+## Installing the Chart
+
+To install the chart with the release name `my-release`:
+
+```console
+$ helm install --name my-release stable/mercure
+```
+
+The command deploys the Mercure Hub on the Kubernetes cluster in the default configuration. The [configuration](#configuration) section lists the parameters that can be configured during installation.
+
+> **Tip**: List all releases using `helm list`
+
+## Uninstalling the Chart
+
+To uninstall/delete the `my-release` deployment:
+
+```console
+$ helm delete my-release
+```
+
+The command removes all the Kubernetes components associated with the chart and deletes the release.
+
+## Configuration
+
+The following table lists the configurable parameters of the Mercure chart and their default values.
+
+| Parameter                 | Description                                                                                         | Default             |
+|---------------------------|-----------------------------------------------------------------------------------------------------|---------------------|
+| `allowAnonymous`          | set to `1` to allow subscribers with no valid JWT to connect                                        | `0`                 |
+| `corsAllowedOrigins`      | a comma-separated list of allowed CORS origins, can be `*` for all                                  | empty               |
+| `debug`                   | set to `1` to enable the debug mode (prints recovery stack traces)                                  | `0`                 |
+| `demo`                    | set to `1` to enable the demo mode (automatically enabled when `debug` is `1`)                      | `0`                 |
+| `jwtKey`                  | the JWT key to use for both publishers and subscribers                                              | random string       |
+| `logFormat`               | the log format                                                                                      | `FLUENTD`           |
+| `publishAllowedOrigins`   | a comma-separated list of origins allowed to publish (only applicable when using cookie-based auth) | empty               |
+| `publisherJwtKey`         | the secret key used to validate publishers' JWTs, can be omitted in favor of `jwtKey`               | empty               |
+| `subscriberJwtKey`        | the secret key used to validate subscribers' JWTs, can be omitted in favor of `jwtKey`              | empty               |
+| `heartbeatInterval`       | interval between heartbeats (useful with some proxies and old browsers)                             | `0s`                |
+| `historyCleanupFrequency` | chance to trigger history cleanup when an update occurs (number between `0` and `1`)                | `0.3`               |
+| `historySize`             | size of the history (`0` for no limit)                                                              | `0`                 |
+| `readTimeout`             | maximum duration for reading the entire request, including the body                                 | `0s`                |
+| `writeTimeout`            | maximum duration before timing out writes of the response                                           | `0s`                |
+| `image.repository`        | hub container image repository                                                                      | `dunglas/mercure`   |
+| `image.tag`               | hub container image tag                                                                             | `v0.3.2`            |
+| `image.pullPolicy`        | hub container image pull policy                                                                     | `IfNotPresent`      |
+| `nameOverride`            | name override                                                                                       | empty               |
+| `fullnameOverride`        | fullname override                                                                                   | empty               |
+| `service.type`            | service type                                                                                        | `NodePort`          |
+| `service.port`            | service port                                                                                        | `80`                |
+| `ingress.enabled`         | enables Ingress                                                                                     | `false`             |
+| `ingress.annotations`     | Ingress annotations                                                                                 | `{}`                |
+| `ingress.paths`           | Ingress paths                                                                                       | `[]`                |
+| `ingress.hosts`           | Ingress accepted hostnames                                                                          | `["mercure.local"]` |
+| `ingress.tls`             | Ingress TLS configuration                                                                           | `[]`                |
+| `resources`               | hub pod resource requests & limits                                                                  | `{}`                |
+| `nodeSelector`            | node labels for hub pod assignment                                                                  | `{}`                |
+| `tolerations`             | hub pod tolerations for taints                                                                      | `[]`                |
+
+Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
+
+```console
+$ helm install --name my-release --set jwtKey=FooBar,corsAllowedOrigins=example.com stable/mercure
+```
+
+The above command sets the JWT key to `FooBar`.
+Additionally, it allows pages served from `example.com` to connect to the hub.
+
+Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,
+
+```console
+$ helm install --name my-release -f values.yaml stable/mercure
+```
+
+> **Tip**: You can use the default [values.yaml](values.yaml)
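
A custom values file passed with `-f` might look like the following (all values are hypothetical overrides of the parameters documented above):

```yaml
# example-values.yaml (hypothetical)
jwtKey: "FooBar"
corsAllowedOrigins: "https://example.com"
service:
  type: ClusterIP
  port: 8080
```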
diff --git a/stable/mercure/templates/NOTES.txt b/stable/mercure/templates/NOTES.txt
new file mode 100644
index 000000000000..5bc5a215725c
--- /dev/null
+++ b/stable/mercure/templates/NOTES.txt
@@ -0,0 +1,23 @@
+The Mercure Hub is now running in the cluster.
+
+1. Get the application URL by running these commands:
+{{- if .Values.ingress.enabled }}
+{{- range $host := .Values.ingress.hosts }}
+ {{- range $.Values.ingress.paths }}
+ http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host }}{{ . }}
+ {{- end }}
+{{- end }}
+{{- else if contains "NodePort" .Values.service.type }}
+ export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "mercure.fullname" . }})
+ export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
+ echo http://$NODE_IP:$NODE_PORT
+{{- else if contains "LoadBalancer" .Values.service.type }}
+ NOTE: It may take a few minutes for the LoadBalancer IP to be available.
+           You can watch its status by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "mercure.fullname" . }}'
+ export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "mercure.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
+ echo http://$SERVICE_IP:{{ .Values.service.port }}
+{{- else if contains "ClusterIP" .Values.service.type }}
+ export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "mercure.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
+ echo "Visit http://127.0.0.1:8080 to use your application"
+ kubectl port-forward $POD_NAME 8080:80
+{{- end }}
diff --git a/stable/mercure/templates/_helpers.tpl b/stable/mercure/templates/_helpers.tpl
new file mode 100644
index 000000000000..66f975f22c3f
--- /dev/null
+++ b/stable/mercure/templates/_helpers.tpl
@@ -0,0 +1,32 @@
+{{/* vim: set filetype=mustache: */}}
+{{/*
+Expand the name of the chart.
+*/}}
+{{- define "mercure.name" -}}
+{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+
+{{/*
+Create a default fully qualified app name.
+We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
+If release name contains chart name it will be used as a full name.
+*/}}
+{{- define "mercure.fullname" -}}
+{{- if .Values.fullnameOverride -}}
+{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
+{{- else -}}
+{{- $name := default .Chart.Name .Values.nameOverride -}}
+{{- if contains $name .Release.Name -}}
+{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
+{{- else -}}
+{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Create chart name and version as used by the chart label.
+*/}}
+{{- define "mercure.chart" -}}
+{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
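
For illustration, the naming logic in the `mercure.fullname` helper above can be sketched outside of Go templates. This Python mirror is an approximation of the template logic, not the template engine itself:

```python
def mercure_fullname(release_name, chart_name,
                     name_override="", fullname_override=""):
    """Approximate the mercure.fullname helper in plain Python."""
    def trunc63_trim(s):
        # trunc 63 | trimSuffix "-": cap at 63 chars, drop one trailing dash
        s = s[:63]
        return s[:-1] if s.endswith("-") else s

    if fullname_override:
        return trunc63_trim(fullname_override)
    name = name_override or chart_name
    if name in release_name:
        # the release name already contains the chart name: use it as-is
        return trunc63_trim(release_name)
    return trunc63_trim(f"{release_name}-{name}")
```

For example, `mercure_fullname("my-release", "mercure")` yields `my-release-mercure`, while `mercure_fullname("mercure-prod", "mercure")` yields `mercure-prod`, matching the behavior described in the helper's comment.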
diff --git a/stable/mercure/templates/configmap.yaml b/stable/mercure/templates/configmap.yaml
new file mode 100644
index 000000000000..7577bb8d23ea
--- /dev/null
+++ b/stable/mercure/templates/configmap.yaml
@@ -0,0 +1,23 @@
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: {{ include "mercure.fullname" . }}
+ annotations:
+ helm.sh/hook: "pre-install"
+ labels:
+ app.kubernetes.io/name: {{ include "mercure.name" . }}
+ helm.sh/chart: {{ include "mercure.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+data:
+ allowAnonymous: {{ .Values.allowAnonymous | quote }}
+ corsAllowedOrigins: {{ .Values.corsAllowedOrigins | quote }}
+ debug: {{ .Values.debug | quote }}
+ demo: {{ .Values.demo | quote }}
+ logFormat: {{ .Values.logFormat | quote }}
+ publishAllowedOrigins: {{ .Values.publishAllowedOrigins | quote }}
+ heartbeatInterval: {{ .Values.heartbeatInterval | quote }}
+ historyCleanupFrequency: {{ .Values.historyCleanupFrequency | quote }}
+ historySize: {{ .Values.historySize | quote }}
+ readTimeout: {{ .Values.readTimeout | quote }}
+ writeTimeout: {{ .Values.writeTimeout | quote }}
diff --git a/stable/mercure/templates/deployment.yaml b/stable/mercure/templates/deployment.yaml
new file mode 100644
index 000000000000..75aa6ee73977
--- /dev/null
+++ b/stable/mercure/templates/deployment.yaml
@@ -0,0 +1,124 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: {{ include "mercure.fullname" . }}
+ labels:
+ app.kubernetes.io/name: {{ include "mercure.name" . }}
+ helm.sh/chart: {{ include "mercure.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app.kubernetes.io/name: {{ include "mercure.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ template:
+ metadata:
+ labels:
+ app.kubernetes.io/name: {{ include "mercure.name" . }}
+ helm.sh/chart: {{ include "mercure.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ spec:
+ containers:
+ - name: {{ .Chart.Name }}
+ image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
+ imagePullPolicy: {{ .Values.image.pullPolicy }}
+ ports:
+ - name: http
+ containerPort: 80
+ protocol: TCP
+ env:
+ - name: ALLOW_ANONYMOUS
+ valueFrom:
+ configMapKeyRef:
+ name: {{ include "mercure.fullname" . }}
+ key: allowAnonymous
+ - name: CORS_ALLOWED_ORIGINS
+ valueFrom:
+ configMapKeyRef:
+ name: {{ include "mercure.fullname" . }}
+ key: corsAllowedOrigins
+ - name: DEBUG
+ valueFrom:
+ configMapKeyRef:
+ name: {{ include "mercure.fullname" . }}
+ key: debug
+ - name: DEMO
+ valueFrom:
+ configMapKeyRef:
+ name: {{ include "mercure.fullname" . }}
+ key: demo
+ - name: JWT_KEY
+ valueFrom:
+ secretKeyRef:
+ name: {{ include "mercure.fullname" . }}
+ key: jwtKey
+ - name: LOG_FORMAT
+ valueFrom:
+ configMapKeyRef:
+ name: {{ include "mercure.fullname" . }}
+ key: logFormat
+ - name: PUBLISH_ALLOWED_ORIGINS
+ valueFrom:
+ configMapKeyRef:
+ name: {{ include "mercure.fullname" . }}
+ key: publishAllowedOrigins
+ - name: PUBLISHER_JWT_KEY
+ valueFrom:
+ secretKeyRef:
+ name: {{ include "mercure.fullname" . }}
+ key: publisherJwtKey
+ - name: SUBSCRIBER_JWT_KEY
+ valueFrom:
+ secretKeyRef:
+ name: {{ include "mercure.fullname" . }}
+ key: subscriberJwtKey
+ - name: HEARTBEAT_INTERVAL
+ valueFrom:
+ configMapKeyRef:
+ name: {{ include "mercure.fullname" . }}
+ key: heartbeatInterval
+ - name: HISTORY_CLEANUP_FREQUENCY
+ valueFrom:
+ configMapKeyRef:
+ name: {{ include "mercure.fullname" . }}
+ key: historyCleanupFrequency
+ - name: HISTORY_SIZE
+ valueFrom:
+ configMapKeyRef:
+ name: {{ include "mercure.fullname" . }}
+ key: historySize
+ - name: READ_TIMEOUT
+ valueFrom:
+ configMapKeyRef:
+ name: {{ include "mercure.fullname" . }}
+ key: readTimeout
+ - name: WRITE_TIMEOUT
+ valueFrom:
+ configMapKeyRef:
+ name: {{ include "mercure.fullname" . }}
+ key: writeTimeout
+ livenessProbe:
+ httpGet:
+ path: /
+ port: http
+ readinessProbe:
+ httpGet:
+ path: /
+ port: http
+ resources:
+ {{- toYaml .Values.resources | nindent 12 }}
+ {{- with .Values.nodeSelector }}
+ nodeSelector:
+ {{- toYaml . | nindent 8 }}
+ {{- end }}
+ {{- with .Values.affinity }}
+ affinity:
+ {{- toYaml . | nindent 8 }}
+ {{- end }}
+ {{- with .Values.tolerations }}
+ tolerations:
+ {{- toYaml . | nindent 8 }}
+ {{- end }}
diff --git a/stable/mercure/templates/ingress.yaml b/stable/mercure/templates/ingress.yaml
new file mode 100644
index 000000000000..0e7a41c74b38
--- /dev/null
+++ b/stable/mercure/templates/ingress.yaml
@@ -0,0 +1,40 @@
+{{- if .Values.ingress.enabled -}}
+{{- $fullName := include "mercure.fullname" . -}}
+{{- $ingressPaths := .Values.ingress.paths -}}
+apiVersion: extensions/v1beta1
+kind: Ingress
+metadata:
+ name: {{ $fullName }}
+ labels:
+ app.kubernetes.io/name: {{ include "mercure.name" . }}
+ helm.sh/chart: {{ include "mercure.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ {{- with .Values.ingress.annotations }}
+ annotations:
+ {{- toYaml . | nindent 4 }}
+ {{- end }}
+spec:
+{{- if .Values.ingress.tls }}
+ tls:
+ {{- range .Values.ingress.tls }}
+ - hosts:
+ {{- range .hosts }}
+ - {{ . | quote }}
+ {{- end }}
+ secretName: {{ .secretName }}
+ {{- end }}
+{{- end }}
+ rules:
+ {{- range .Values.ingress.hosts }}
+ - host: {{ . | quote }}
+ http:
+ paths:
+ {{- range $ingressPaths }}
+ - path: {{ . }}
+ backend:
+ serviceName: {{ $fullName }}
+ servicePort: http
+ {{- end }}
+ {{- end }}
+{{- end }}
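
Note that the Ingress template above iterates over `.Values.ingress.paths`, which defaults to `[]`, so enabling the Ingress without setting at least one path yields no rules. A hypothetical values fragment that exercises it:

```yaml
# Hypothetical Ingress overrides (hostnames and secret name are examples)
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
  paths:
    - /
  hosts:
    - mercure.example.com
  tls:
    - secretName: mercure-example-tls
      hosts:
        - mercure.example.com
```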
diff --git a/stable/mercure/templates/secret.yaml b/stable/mercure/templates/secret.yaml
new file mode 100644
index 000000000000..ae1541b8f853
--- /dev/null
+++ b/stable/mercure/templates/secret.yaml
@@ -0,0 +1,16 @@
+apiVersion: v1
+kind: Secret
+metadata:
+ name: {{ include "mercure.fullname" . }}
+ annotations:
+ helm.sh/hook: "pre-install"
+ labels:
+ app.kubernetes.io/name: {{ include "mercure.name" . }}
+ helm.sh/chart: {{ include "mercure.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+type: Opaque
+data:
+ jwtKey: {{ .Values.jwtKey | default (randAlphaNum 12) | b64enc | quote }}
+  publisherJwtKey: {{ .Values.publisherJwtKey | default .Values.jwtKey | b64enc | quote }}
+  subscriberJwtKey: {{ .Values.subscriberJwtKey | default .Values.jwtKey | b64enc | quote }}
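
The `b64enc` calls in the Secret above produce standard base64, so a key can be pre-encoded or verified from the shell:

```shell
# Encode a JWT key exactly as Helm's b64enc does (no trailing newline)
printf '%s' FooBar | base64
# prints Rm9vQmFy
```

Decoding with `base64 -d` recovers the original value, which is what the containers receive via `secretKeyRef`.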
diff --git a/stable/mercure/templates/service.yaml b/stable/mercure/templates/service.yaml
new file mode 100644
index 000000000000..f38a90f3c4c9
--- /dev/null
+++ b/stable/mercure/templates/service.yaml
@@ -0,0 +1,19 @@
+apiVersion: v1
+kind: Service
+metadata:
+ name: {{ include "mercure.fullname" . }}
+ labels:
+ app.kubernetes.io/name: {{ include "mercure.name" . }}
+ helm.sh/chart: {{ include "mercure.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+spec:
+ type: {{ .Values.service.type }}
+ ports:
+ - port: {{ .Values.service.port }}
+ targetPort: http
+ protocol: TCP
+ name: http
+ selector:
+ app.kubernetes.io/name: {{ include "mercure.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
diff --git a/stable/mercure/values.yaml b/stable/mercure/values.yaml
new file mode 100644
index 000000000000..e86c4ecde0c4
--- /dev/null
+++ b/stable/mercure/values.yaml
@@ -0,0 +1,61 @@
+# Default values for chart.
+# This is a YAML-formatted file.
+# Declare variables to be passed into your templates.
+
+allowAnonymous: "0"
+corsAllowedOrigins: ""
+debug: "0"
+demo: "0"
+jwtKey: ""
+logFormat: "FLUENTD"
+publishAllowedOrigins: ""
+publisherJwtKey: ""
+subscriberJwtKey: ""
+heartbeatInterval: "0s"
+historyCleanupFrequency: 0.3
+historySize: 0
+readTimeout: "0s"
+writeTimeout: "0s"
+
+image:
+ repository: dunglas/mercure
+ tag: v0
+ pullPolicy: IfNotPresent
+
+nameOverride: ""
+fullnameOverride: ""
+
+service:
+ type: NodePort
+ port: 80
+
+ingress:
+ enabled: false
+ annotations: {}
+ # kubernetes.io/ingress.class: nginx
+ # kubernetes.io/tls-acme: "true"
+ paths: []
+ hosts:
+ - mercure.local
+ tls: []
+ # - secretName: chart-example-tls
+ # hosts:
+ # - mercure.local
+
+resources: {}
+ # We usually recommend not to specify default resources and to leave this as a conscious
+ # choice for the user. This also increases chances charts run on environments with little
+ # resources, such as Minikube. If you do want to specify resources, uncomment the following
+ # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
+ # limits:
+ # cpu: 100m
+ # memory: 128Mi
+ # requests:
+ # cpu: 100m
+ # memory: 128Mi
+
+nodeSelector: {}
+
+tolerations: []
+
+affinity: {}
diff --git a/stable/metabase/Chart.yaml b/stable/metabase/Chart.yaml
index 6ee24d86c755..b513b03da58b 100644
--- a/stable/metabase/Chart.yaml
+++ b/stable/metabase/Chart.yaml
@@ -2,8 +2,8 @@ apiVersion: v1
description: The easy, open source way for everyone in your company to ask questions
and learn from data.
name: metabase
-version: 0.4.4
-appVersion: v0.30.1
+version: 0.5.0
+appVersion: v0.31.2
maintainers:
- name: pmint93
email: phamminhthanh69@gmail.com
diff --git a/stable/metabase/README.md b/stable/metabase/README.md
index efabfec95553..8cdfd2ec97c3 100644
--- a/stable/metabase/README.md
+++ b/stable/metabase/README.md
@@ -46,7 +46,7 @@ The following table lists the configurable parameters of the Metabase chart and
|------------------------|------------------------------------------------------------|-------------------|
| replicaCount | desired number of controller pods | 1 |
| image.repository | controller container image repository | metabase/metabase |
-| image.tag | controller container image tag | v0.30.1 |
+| image.tag | controller container image tag | v0.31.2 |
| image.pullPolicy | controller container image pull policy | IfNotPresent |
| listen.host | Listening on a specific network host | 0.0.0.0 |
| listen.port | Listening on a specific network port | 3000 |
@@ -66,7 +66,7 @@ The following table lists the configurable parameters of the Metabase chart and
| password.length | Minimum length required for Metabase account's password | 6 |
| timeZone | Service time zone | UTC |
| emojiLogging | Get a funny emoji in service log | true |
-| javaToolOptions | JVM options | null |
+| javaOpts | JVM options | null |
| pluginsDirectory | A directory with Metabase plugins | null |
| service.type | ClusterIP, NodePort, or LoadBalancer | ClusterIP |
| service.externalPort | Service external port | 80 |
@@ -78,6 +78,7 @@ The following table lists the configurable parameters of the Metabase chart and
| ingress.labels | Ingress labels configuration | null |
| ingress.annotations | Ingress annotations configuration | null |
| ingress.tls | Ingress TLS configuration | null |
+| log4jProperties | Custom `log4j.properties` file | null |
| resources | Server resource requests and limits | {} |
The above parameters map to the env variables defined in [metabase](http://github.com/metabase/metabase). For more information please refer to the [metabase documentations](http://www.metabase.com/docs/v0.24.2/).
diff --git a/stable/metabase/templates/config.yaml b/stable/metabase/templates/config.yaml
new file mode 100644
index 000000000000..8de994da124f
--- /dev/null
+++ b/stable/metabase/templates/config.yaml
@@ -0,0 +1,14 @@
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: {{ template "metabase.fullname" . }}-config
+ labels:
+ app: {{ template "metabase.name" . }}
+ chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
+ release: {{ .Release.Name }}
+ heritage: {{ .Release.Service }}
+data:
+ {{- if .Values.log4jProperties }}
+ log4j.properties:
+{{ toYaml .Values.log4jProperties | indent 4}}
+ {{- end}}
diff --git a/stable/metabase/templates/deployment.yaml b/stable/metabase/templates/deployment.yaml
index 317073014460..eed48bc0af26 100644
--- a/stable/metabase/templates/deployment.yaml
+++ b/stable/metabase/templates/deployment.yaml
@@ -11,6 +11,8 @@ spec:
replicas: {{ .Values.replicaCount }}
template:
metadata:
+ annotations:
+ checksum/config: {{ include (print $.Template.BasePath "/config.yaml") . | sha256sum }}
labels:
app: {{ template "metabase.name" . }}
release: {{ .Release.Name }}
@@ -81,9 +83,14 @@ spec:
value: {{ .Values.password.length | quote }}
- name: JAVA_TIMEZONE
value: {{ .Values.timeZone }}
- {{- if .Values.javaToolOptions }}
- - name: JAVA_TOOL_OPTIONS
- value: {{ .Values.javaToolOptions | quote }}
+ {{- if .Values.javaOpts }}
+ - name: JAVA_OPTS
+ value: {{ .Values.javaOpts | quote }}
+ {{- else }}
+ {{- if .Values.log4jProperties }}
+ - name: JAVA_OPTS
+ value: "-Dlog4j.configuration=file:/tmp/conf/log4j.properties"
+ {{- end }}
{{- end }}
{{- if .Values.pluginsDirectory }}
- name: MB_PLUGINS_DIR
@@ -107,9 +114,23 @@ spec:
initialDelaySeconds: 30
timeoutSeconds: 3
periodSeconds: 5
+ {{- if .Values.log4jProperties }}
+ volumeMounts:
+ - name: config
+ mountPath: /tmp/conf/
+ {{- end}}
resources:
{{ toYaml .Values.resources | indent 12 }}
{{- if .Values.nodeSelector }}
nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
{{- end }}
+ volumes:
+ {{- if .Values.log4jProperties}}
+ - name: config
+ configMap:
+ name: {{ template "metabase.fullname" . }}-config
+ items:
+ - key: log4j.properties
+ path: log4j.properties
+ {{- end }}
diff --git a/stable/metabase/values.yaml b/stable/metabase/values.yaml
index 9fec98a11f57..9dde5769243e 100644
--- a/stable/metabase/values.yaml
+++ b/stable/metabase/values.yaml
@@ -5,7 +5,7 @@
replicaCount: 1
image:
repository: metabase/metabase
- tag: v0.30.1
+ tag: v0.31.2
pullPolicy: IfNotPresent
# Config Jetty web server
@@ -44,7 +44,7 @@ password:
timeZone: UTC
emojiLogging: true
-# javaToolOptions:
+# javaOpts:
# pluginsDirectory:
service:
@@ -74,6 +74,12 @@ ingress:
# - secretName: metabase-tls
# hosts:
# - metabase.domain.com
+
+# A custom log4j.properties file can be provided using a multiline YAML string.
+# See https://github.com/metabase/metabase/blob/master/resources/log4j.properties
+#
+# log4jProperties:
+
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
diff --git a/stable/metallb/Chart.yaml b/stable/metallb/Chart.yaml
index 74cd6ff6b2fa..e4852002d6dd 100644
--- a/stable/metallb/Chart.yaml
+++ b/stable/metallb/Chart.yaml
@@ -1,5 +1,5 @@
apiVersion: v1
-version: 0.8.4
+version: 0.9.5
name: metallb
appVersion: 0.7.3
diff --git a/stable/metallb/templates/psp.yaml b/stable/metallb/templates/psp.yaml
new file mode 100644
index 000000000000..bb7d465f3077
--- /dev/null
+++ b/stable/metallb/templates/psp.yaml
@@ -0,0 +1,33 @@
+{{- if .Values.psp.create -}}
+
+apiVersion: extensions/v1beta1
+kind: PodSecurityPolicy
+metadata:
+ name: {{ template "metallb.fullname" . }}-speaker
+ labels:
+ heritage: {{ .Release.Service | quote }}
+ release: {{ .Release.Name | quote }}
+ chart: {{ template "metallb.chart" . }}
+ app: {{ template "metallb.name" . }}
+spec:
+ hostNetwork: true
+ hostPorts:
+ - min: 7472
+ max: 7472
+ privileged: true
+ allowPrivilegeEscalation: false
+ allowedCapabilities:
+ - 'NET_ADMIN'
+ - 'NET_RAW'
+ - 'SYS_ADMIN'
+ volumes:
+ - '*'
+ fsGroup:
+ rule: RunAsAny
+ runAsUser:
+ rule: RunAsAny
+ seLinux:
+ rule: RunAsAny
+ supplementalGroups:
+ rule: RunAsAny
+{{- end -}}
diff --git a/stable/metallb/templates/rbac.yaml b/stable/metallb/templates/rbac.yaml
index f5450047e92d..2c432cda8cf7 100644
--- a/stable/metallb/templates/rbac.yaml
+++ b/stable/metallb/templates/rbac.yaml
@@ -34,6 +34,12 @@ rules:
- apiGroups: [""]
resources: ["services", "endpoints", "nodes"]
verbs: ["get", "list", "watch"]
+{{- if .Values.psp.create }}
+- apiGroups: ["extensions"]
+ resources: ["podsecuritypolicies"]
+ resourceNames: [{{ printf "%s-speaker" (include "metallb.fullname" .) | quote}}]
+ verbs: ["use"]
+{{- end }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
diff --git a/stable/metallb/values.yaml b/stable/metallb/values.yaml
index 2710054a8e81..f618a5900995 100644
--- a/stable/metallb/values.yaml
+++ b/stable/metallb/values.yaml
@@ -40,6 +40,10 @@ rbac:
# create specifies whether to install and use RBAC rules.
create: true
+psp:
+ # create specifies whether to install and use Pod Security Policies.
+ create: true
+
prometheus:
# scrape annotations specifies whether to add Prometheus metric
# auto-collection annotations to pods. See
diff --git a/stable/metricbeat/Chart.yaml b/stable/metricbeat/Chart.yaml
index ab91b9b1ca51..3f3ed1ed2ab4 100644
--- a/stable/metricbeat/Chart.yaml
+++ b/stable/metricbeat/Chart.yaml
@@ -2,8 +2,8 @@ apiVersion: v1
description: A Helm chart to collect Kubernetes logs with metricbeat
icon: https://www.elastic.co/assets/blt47799dcdcf08438d/logo-elastic-beats-lt.svg
name: metricbeat
-version: 1.0.0
-appVersion: 6.6.0
+version: 1.6.1
+appVersion: 6.7.0
home: https://www.elastic.co/products/beats/metricbeat
sources:
- https://www.elastic.co/guide/en/beats/metricbeat/current/index.html
diff --git a/stable/metricbeat/README.md b/stable/metricbeat/README.md
index fb42f6e1705f..917cd3775fbd 100644
--- a/stable/metricbeat/README.md
+++ b/stable/metricbeat/README.md
@@ -4,7 +4,7 @@
## Prerequisites
-- Kubernetes 1.9+
+- Kubernetes 1.9+
## Installing the Chart
@@ -30,32 +30,47 @@ The command removes all the Kubernetes components associated with the chart and
The following table lists the configurable parameters of the metricbeat chart and their default values.
-| Parameter | Description | Default |
-|-------------------------------------|------------------------------------|-------------------------------------------|
-| `image.repository` | The image repository to pull from | `docker.elastic.co/beats/metricbeat` |
-| `image.tag` | The image tag to pull | `6.6.0` |
-| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
-| `rbac.create` | If true, create & use RBAC resources | `true` |
-| `serviceAccount.create` | If true, create & use ServiceAccount | `true` |
-| `serviceAccount.name` | The name of the ServiceAccount to use. If not set and create is true, a name is generated using the fullname template | |
-| `config` | The content of the configuration file consumed by metricbeat. See the [metricbeat.reference.yml](https://www.elastic.co/guide/en/beats/metricbeat/current/metricbeat-reference-yml.html) for full details | |
-| `plugins` | List of beat plugins | |
-| `extraEnv` | Additional environment | |
-| `extraVolumes`, `extraVolumeMounts` | Additional volumes and mounts, for example to provide other configuration files | |
-| `resources.requests.cpu` | CPU resource requests | |
-| `resources.limits.cpu` | CPU resource limits | |
-| `resources.requests.memory` | Memory resource requests | |
-| `resources.limits.memory` | Memory resource limits | |
-| `daemonset.modules..config` | The content of the modules configuration file consumed by metricbeat deployed as daemonset, which is assumed to collect metrics in each nodes. See the [metricbeat.reference.yml](https://www.elastic.co/guide/en/beats/metricbeat/current/metricbeat-reference-yml.html) for full details |
-| `daemonset.modules..enabled` | If true, enable configuration | |
-| `daemonset.podAnnotations` | Pod annotations for daemonset | |
-| `daemonset.nodeSelector` | Pod node selector for daemonset | `{}` |
-| `daemonset.tolerations` | Pod taint tolerations for daemonset | `[{"key": "node-role.kubernetes.io/master", "operator": "Exists", "effect": "NoSchedule"}]` |
-| `deployment.modules..config` | The content of the modules configuration file consumed by metricbeat deployed as deployment, which is assumed to collect cluster-level metrics. See the [metricbeat.reference.yml](https://www.elastic.co/guide/en/beats/metricbeat/current/metricbeat-reference-yml.html) for full details ||
-| `deployment.modules..enabled` | If true, enable configuration ||
-| `deployment.podAnnotations` | Pod annotations for deployment | |
-| `deployment.nodeSelector` | Pod node selector for deployment | `{}` |
-| `deployment.tolerations` | Pod taint tolerations for deployment | `[]` |
+| Parameter | Description | Default |
+| ----------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------- |
+| `image.repository` | The image repository to pull from | `docker.elastic.co/beats/metricbeat` |
+| `image.tag` | The image tag to pull | `6.7.0` |
+| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
+| `rbac.create` | If true, create & use RBAC resources | `true` |
+| `rbac.pspEnabled` | If true, create & use PSP resources | `false` |
+| `serviceAccount.create` | If true, create & use ServiceAccount | `true` |
+| `serviceAccount.name` | The name of the ServiceAccount to use. If not set and create is true, a name is generated using the fullname template | |
+| `config` | The content of the configuration file consumed by metricbeat. See the [metricbeat.reference.yml](https://www.elastic.co/guide/en/beats/metricbeat/current/metricbeat-reference-yml.html) for full details | |
+| `plugins` | List of beat plugins | |
+| `extraEnv` | Additional environment | |
+| `extraVolumes`, `extraVolumeMounts` | Additional volumes and mounts, for example to provide other configuration files | |
+| `resources.requests.cpu` | CPU resource requests | |
+| `resources.limits.cpu` | CPU resource limits | |
+| `resources.requests.memory` | Memory resource requests | |
+| `resources.limits.memory` | Memory resource limits | |
+| `daemonset.enabled` | If true, enable daemonset | |
+| `daemonset.args` | Custom arguments for the docker command | |
+| `daemonset.overrideConfig` | If overrideConfig is not empty, metricbeat chart's default config won't be used at all. | `{}` |
+| `daemonset.modules.<module>.config` | The content of the modules configuration file consumed by metricbeat deployed as a daemonset, which is assumed to collect metrics on each node. See the [metricbeat.reference.yml](https://www.elastic.co/guide/en/beats/metricbeat/current/metricbeat-reference-yml.html) for full details | |
+| `daemonset.modules.<module>.enabled` | If true, enable the module configuration | |
+| `daemonset.overrideModules` | If overrideModules is not empty, metricbeat chart's default modules won't be used at all. | `{}` |
+| `daemonset.podAnnotations` | Pod annotations for daemonset | |
+| `daemonset.priorityClassName`       | Pod priority class name for daemonset (e.g. system-node-critical)                                                                                                                                                                                                                          | `""`                                                                                        |
+| `daemonset.nodeSelector` | Pod node selector for daemonset | `{}` |
+| `daemonset.tolerations` | Pod taint tolerations for daemonset | `[{"key": "node-role.kubernetes.io/master", "operator": "Exists", "effect": "NoSchedule"}]` |
+| `daemonset.resources.requests.cpu` | CPU resource requests for daemonset | |
+| `daemonset.resources.limits.cpu` | CPU resource limits for daemonset | |
+| `deployment.enabled` | If true, enable deployment | |
+| `deployment.args` | Custom arguments for the docker command | |
+| `deployment.overrideConfig` | If overrideConfig is not empty, metricbeat chart's default config won't be used at all. | `{}` |
+| `deployment.modules.<module>.config` | The content of the modules configuration file consumed by metricbeat deployed as a deployment, which is assumed to collect cluster-level metrics. See the [metricbeat.reference.yml](https://www.elastic.co/guide/en/beats/metricbeat/current/metricbeat-reference-yml.html) for full details | |
+| `deployment.modules.<module>.enabled` | If true, enable the module configuration | |
+| `deployment.overrideModules` | If overrideModules is not empty, metricbeat chart's default modules won't be used at all. | `{}` |
+| `deployment.podAnnotations` | Pod annotations for deployment | |
+| `deployment.priorityClassName`      | Pod priority class name for deployment (e.g. system-cluster-critical)                                                                                                                                                                                                                      | `""`                                                                                        |
+| `deployment.nodeSelector` | Pod node selector for deployment | `{}` |
+| `deployment.tolerations` | Pod taint tolerations for deployment | `[]` |
+| `deployment.resources.requests.cpu` | CPU resource requests for deployment                                                                                                                                                                                                                                                       |                                                                                             |
+| `deployment.resources.limits.cpu`   | CPU resource limits for deployment                                                                                                                                                                                                                                                         |                                                                                             |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
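The example following "For example," is cut off in this chunk; a hedged sketch (release name and values illustrative, not from this diff) would be `helm install --name my-release --set rbac.pspEnabled=true stable/metricbeat`, or equivalently a values file passed with `-f`:

```yaml
# values-override.yaml (illustrative)
rbac:
  pspEnabled: true
daemonset:
  priorityClassName: system-node-critical
```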
diff --git a/stable/metricbeat/templates/clusterrole.yaml b/stable/metricbeat/templates/clusterrole.yaml
index ab985f798e9f..5ca886c3497a 100644
--- a/stable/metricbeat/templates/clusterrole.yaml
+++ b/stable/metricbeat/templates/clusterrole.yaml
@@ -9,6 +9,12 @@ metadata:
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
rules:
+{{- if .Values.rbac.pspEnabled }}
+- apiGroups: ['extensions']
+ resources: ['podsecuritypolicies']
+ verbs: ['use']
+ resourceNames: [{{ template "metricbeat.fullname" . }}]
+{{- end }}
- apiGroups: [""]
resources:
- nodes
@@ -28,6 +34,7 @@ rules:
- apiGroups: [""]
resources:
- nodes/stats
+ - nodes/metrics
verbs: ["get"]
- nonResourceURLs: ["/metrics"]
verbs: ["get"]
diff --git a/stable/metricbeat/templates/daemonset.yaml b/stable/metricbeat/templates/daemonset.yaml
index 4abcdcb69f7f..f95bd25a9b21 100644
--- a/stable/metricbeat/templates/daemonset.yaml
+++ b/stable/metricbeat/templates/daemonset.yaml
@@ -1,3 +1,4 @@
+{{- if .Values.daemonset.enabled }}
apiVersion: apps/v1
kind: DaemonSet
metadata:
@@ -23,8 +24,8 @@ spec:
app: {{ template "metricbeat.name" . }}
release: {{ .Release.Name }}
annotations:
- checksum/daemonset: {{ toYaml .Values.daemonset.modules | sha256sum }}
- checksum/config: {{ toYaml .Values.daemonset.config | sha256sum }}
+ checksum/config: {{ toYaml (default .Values.daemonset.config .Values.daemonset.overrideConfig) | sha256sum }}
+ checksum/modules: {{ toYaml (default .Values.daemonset.modules .Values.daemonset.overrideModules) | sha256sum }}
{{- if .Values.daemonset.podAnnotations }}
{{- range $key, $value := .Values.daemonset.podAnnotations }}
{{ $key }}: {{ $value }}
@@ -36,6 +37,9 @@ spec:
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
args:
+{{- if .Values.daemonset.args }}
+{{ toYaml .Values.daemonset.args | indent 8 }}
+{{- else }}
- "-e"
{{- if .Values.plugins }}
- "--plugin"
@@ -45,6 +49,7 @@ spec:
{{- if .Values.daemonset.debug }}
- "-d"
- "*"
+{{- end }}
{{- end }}
env:
- name: POD_NAMESPACE
@@ -61,7 +66,11 @@ spec:
securityContext:
runAsUser: 0
resources:
+{{- if .Values.daemonset.resources }}
+{{ toYaml .Values.daemonset.resources | indent 10 }}
+{{- else if .Values.resources }}
{{ toYaml .Values.resources | indent 10 }}
+{{- end }}
volumeMounts:
- name: config
mountPath: /usr/share/metricbeat/metricbeat.yml
@@ -107,6 +116,9 @@ spec:
{{ toYaml .Values.extraVolumes | indent 6 }}
{{- end }}
terminationGracePeriodSeconds: 60
+{{- if .Values.daemonset.priorityClassName }}
+ priorityClassName: {{ .Values.daemonset.priorityClassName }}
+{{- end }}
serviceAccountName: {{ template "metricbeat.serviceAccountName" . }}
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
@@ -122,3 +134,4 @@ spec:
nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
{{- end }}
+{{- end }}
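The checksum annotations above rely on Sprig's `default` function, whose argument order is easy to misread: `default .Values.daemonset.config .Values.daemonset.overrideConfig` returns `overrideConfig` when it is non-empty and falls back to `config` otherwise. A rough Python analogue of that semantics (hedged — Go template "emptiness" covers nil, empty string, zero, and empty collections):

```python
def helm_default(fallback, value):
    """Mimic Sprig's `default fallback value`: return value unless it is empty."""
    return value if value not in (None, "", 0, {}, []) else fallback

# overrideConfig left at its default {} -> the chart's built-in config wins
assert helm_default({"metricbeat.config": "built-in"}, {}) == {"metricbeat.config": "built-in"}

# overrideConfig set by the user -> it replaces the built-in config entirely
assert helm_default({"metricbeat.config": "built-in"}, {"custom": True}) == {"custom": True}
```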
diff --git a/stable/metricbeat/templates/deployment.yaml b/stable/metricbeat/templates/deployment.yaml
index 07fecc24adac..16314287274f 100644
--- a/stable/metricbeat/templates/deployment.yaml
+++ b/stable/metricbeat/templates/deployment.yaml
@@ -1,4 +1,5 @@
# Deploy singleton instance in the whole cluster for some unique data sources, like kube-state-metrics
+{{- if .Values.deployment.enabled }}
apiVersion: apps/v1
kind: Deployment
metadata:
@@ -19,20 +20,26 @@ spec:
app: {{ template "metricbeat.name" . }}
release: {{ .Release.Name }}
annotations:
- checksum/modules: {{ toYaml .Values.deployment.modules | sha256sum }}
- checksum/config: {{ toYaml .Values.deployment.config | sha256sum }}
+ checksum/config: {{ toYaml (default .Values.deployment.config .Values.deployment.overrideConfig) | sha256sum }}
+ checksum/modules: {{ toYaml (default .Values.deployment.modules .Values.deployment.overrideModules) | sha256sum }}
{{- if .Values.deployment.podAnnotations }}
{{- range $key, $value := .Values.deployment.podAnnotations }}
{{ $key }}: {{ $value }}
{{- end }}
{{- end }}
spec:
+{{- if .Values.deployment.priorityClassName }}
+ priorityClassName: {{ .Values.deployment.priorityClassName }}
+{{- end }}
serviceAccountName: {{ template "metricbeat.serviceAccountName" . }}
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
args:
+{{- if .Values.deployment.args }}
+{{ toYaml .Values.deployment.args | indent 8 }}
+{{- else }}
- "-e"
{{- if .Values.plugins }}
- "--plugin"
@@ -41,6 +48,7 @@ spec:
{{- if .Values.deployment.debug }}
- "-d"
- "*"
+{{- end }}
{{- end }}
env:
- name: POD_NAMESPACE
@@ -57,7 +65,11 @@ spec:
securityContext:
runAsUser: 0
resources:
+{{- if .Values.deployment.resources }}
+{{ toYaml .Values.deployment.resources | indent 10 }}
+{{- else if .Values.resources }}
{{ toYaml .Values.resources | indent 10 }}
+{{- end }}
volumeMounts:
- name: metricbeat-config
mountPath: /usr/share/metricbeat/metricbeat.yml
@@ -87,3 +99,4 @@ spec:
{{- if .Values.extraVolumes }}
{{ toYaml .Values.extraVolumes | indent 6 }}
{{- end }}
+{{- end }}
diff --git a/stable/metricbeat/templates/podsecuritypolicy.yaml b/stable/metricbeat/templates/podsecuritypolicy.yaml
new file mode 100644
index 000000000000..29fb2a9fb2dd
--- /dev/null
+++ b/stable/metricbeat/templates/podsecuritypolicy.yaml
@@ -0,0 +1,31 @@
+{{- if .Values.rbac.pspEnabled }}
+apiVersion: extensions/v1beta1
+kind: PodSecurityPolicy
+metadata:
+ name: {{ template "metricbeat.fullname" . }}
+ labels:
+ app: {{ template "metricbeat.name" . }}
+ chart: {{ .Chart.Name }}-{{ .Chart.Version }}
+ heritage: {{ .Release.Service }}
+ release: {{ .Release.Name }}
+spec:
+ privileged: false
+ defaultAddCapabilities:
+ - CHOWN
+ volumes:
+ - 'configMap'
+ - 'emptyDir'
+ - 'secret'
+ allowPrivilegeEscalation: false
+ hostNetwork: false
+ hostIPC: false
+ hostPID: false
+ runAsUser:
+ rule: 'RunAsAny'
+ seLinux:
+ rule: 'RunAsAny'
+ supplementalGroups:
+ rule: 'RunAsAny'
+ fsGroup:
+ rule: 'RunAsAny'
+{{- end }}
diff --git a/stable/metricbeat/templates/secret.yaml b/stable/metricbeat/templates/secret.yaml
index a7c22ec2f618..9d2c4399e439 100644
--- a/stable/metricbeat/templates/secret.yaml
+++ b/stable/metricbeat/templates/secret.yaml
@@ -1,3 +1,4 @@
+{{- if .Values.daemonset.enabled }}
apiVersion: v1
kind: Secret
metadata:
@@ -9,8 +10,10 @@ metadata:
heritage: {{ .Release.Service }}
type: Opaque
data:
- metricbeat.yml: {{ toYaml .Values.daemonset.config | indent 4 | b64enc }}
+ metricbeat.yml: {{ toYaml (default .Values.daemonset.config .Values.daemonset.overrideConfig) | indent 4 | b64enc }}
+{{- end }}
---
+{{- if .Values.deployment.enabled }}
apiVersion: v1
kind: Secret
metadata:
@@ -22,8 +25,10 @@ metadata:
heritage: {{ .Release.Service }}
type: Opaque
data:
- metricbeat.yml: {{ toYaml .Values.deployment.config | indent 4 | b64enc }}
+ metricbeat.yml: {{ toYaml (default .Values.deployment.config .Values.deployment.overrideConfig) | indent 4 | b64enc }}
+{{- end }}
---
+{{- if .Values.daemonset.enabled }}
apiVersion: v1
kind: Secret
metadata:
@@ -35,12 +40,14 @@ metadata:
heritage: {{ .Release.Service }}
type: Opaque
data:
- {{- range $key, $value := .Values.daemonset.modules }}
+ {{- range $key, $value := (default .Values.daemonset.modules .Values.daemonset.overrideModules) }}
{{- if eq $value.enabled true }}
- {{ $key }}.yml: {{ toYaml $value.config | indent 4 | b64enc }}
+ {{ $key }}.yml: {{ toYaml $value.config | b64enc }}
{{- end }}
{{- end }}
+{{- end }}
---
+{{- if .Values.deployment.enabled }}
apiVersion: v1
kind: Secret
metadata:
@@ -52,8 +59,9 @@ metadata:
heritage: {{ .Release.Service }}
type: Opaque
data:
- {{- range $key, $value := .Values.deployment.modules }}
+ {{- range $key, $value := (default .Values.deployment.modules .Values.deployment.overrideModules) }}
{{- if eq $value.enabled true }}
- {{ $key }}.yml: {{ toYaml $value.config | indent 4 | b64enc }}
+ {{ $key }}.yml: {{ toYaml $value.config | b64enc }}
{{- end }}
{{- end }}
+{{- end }}
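Note the secret template also drops `indent 4` before `b64enc`. That matters: `b64enc` encodes whatever bytes it is given, so indenting first bakes the template's layout indentation into the decoded module file. A rough Python analogue (hedged approximation of Sprig's `indent`):

```python
import base64
import textwrap

module_config = "- module: system\n  period: 10s\n"

# New behaviour: encode the config as-is; it round-trips cleanly
good = base64.b64encode(module_config.encode()).decode()
assert base64.b64decode(good).decode() == module_config

# Old `| indent 4 | b64enc`: four stray leading spaces end up inside the
# decoded file, which is no longer the YAML the user wrote
indented = textwrap.indent(module_config, "    ")
bad = base64.b64encode(indented.encode()).decode()
assert base64.b64decode(bad).decode().startswith("    ")
```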
diff --git a/stable/metricbeat/values.yaml b/stable/metricbeat/values.yaml
index 8e73d3191c9f..5094639e2c9e 100644
--- a/stable/metricbeat/values.yaml
+++ b/stable/metricbeat/values.yaml
@@ -1,16 +1,19 @@
image:
repository: docker.elastic.co/beats/metricbeat
- tag: 6.6.0
+ tag: 6.7.0
pullPolicy: IfNotPresent
# The instances created by daemonset retrieve most metrics from the host
daemonset:
+ enabled: true
podAnnotations: []
+ priorityClassName: ""
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
nodeSelector: {}
+ resources: {}
config:
metricbeat.config:
modules:
@@ -23,6 +26,8 @@ daemonset:
filename: metricbeat
rotate_every_kb: 10000
number_of_files: 5
+ # If overrideConfig is not empty, metricbeat chart's default config won't be used at all.
+ overrideConfig: {}
modules:
system:
enabled: true
@@ -70,11 +75,16 @@ daemonset:
# bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
# ssl.certificate_authorities:
# - /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
+ # If overrideModules is not empty, metricbeat chart's default modules won't be used at all.
+ overrideModules: {}
# The instance created by deployment retrieves metrics that are unique for the whole cluster, like Kubernetes events or kube-state-metrics
deployment:
+ enabled: true
podAnnotations: []
+ priorityClassName: ""
tolerations: []
nodeSelector: {}
+ resources: {}
config:
metricbeat.config:
modules:
@@ -87,6 +97,8 @@ deployment:
filename: metricbeat
rotate_every_kb: 10000
number_of_files: 5
+ # If overrideConfig is not empty, metricbeat chart's default config won't be used at all.
+ overrideConfig: {}
modules:
kubernetes:
enabled: true
@@ -102,6 +114,8 @@ deployment:
# - event
period: 10s
hosts: ["kube-state-metrics:8080"]
+ # If overrideModules is not empty, metricbeat chart's default modules won't be used at all.
+ overrideModules: {}
# List of beat plugins
plugins: []
@@ -139,6 +153,7 @@ resources: {}
rbac:
# Specifies whether RBAC resources should be created
create: true
+ pspEnabled: false
serviceAccount:
# Specifies whether a ServiceAccount should be created
diff --git a/stable/metrics-server/Chart.yaml b/stable/metrics-server/Chart.yaml
index 8c917bdbc916..b4a221fbe8f7 100755
--- a/stable/metrics-server/Chart.yaml
+++ b/stable/metrics-server/Chart.yaml
@@ -1,8 +1,8 @@
apiVersion: v1
-appVersion: 0.3.1
+appVersion: 0.3.2
description: Metrics Server is a cluster-wide aggregator of resource usage data.
name: metrics-server
-version: 2.2.0
+version: 2.8.0
keywords:
- metrics-server
home: https://github.com/kubernetes-incubator/metrics-server
diff --git a/stable/metrics-server/README.md b/stable/metrics-server/README.md
index b8c664830c0d..06be74a64d77 100644
--- a/stable/metrics-server/README.md
+++ b/stable/metrics-server/README.md
@@ -1,22 +1,37 @@
# metrics-server
-Metrics Server is a cluster-wide aggregator of resource usage data.
+[Metrics Server](https://github.com/kubernetes-incubator/metrics-server) is a cluster-wide aggregator of resource usage data. Resource metrics are used by components like `kubectl top` and the [Horizontal Pod Autoscaler](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale) to scale workloads. To autoscale based upon a custom metric, see the [Prometheus Adapter chart](https://github.com/helm/charts/blob/master/stable/prometheus-adapter).
## Configuration
Parameter | Description | Default
--- | --- | ---
`rbac.create` | Enable Role-based authentication | `true`
+`rbac.pspEnabled` | Enable pod security policy support | `false`
`serviceAccount.create` | If `true`, create a new service account | `true`
`serviceAccount.name` | Service account to be used. If not set and `serviceAccount.create` is `true`, a name is generated using the fullname template | ``
`apiService.create` | Create the v1beta1.metrics.k8s.io API service | `true`
-`hostNetwork.enable` | Enable hostNetwork mode | `false`
+`hostNetwork.enabled` | Enable hostNetwork mode | `false`
`image.repository` | Image repository | `gcr.io/google_containers/metrics-server-amd64`
-`image.tag` | Image tag | `v0.3.1`
+`image.tag` | Image tag | `v0.3.2`
`image.pullPolicy` | Image pull policy | `IfNotPresent`
-`args` | Command line arguments | `--logtostderr`
+`imagePullSecrets` | Image pull secrets | `[]`
+`args` | Command line arguments | `[]`
`resources` | CPU/Memory resource requests/limits. | `{}`
`tolerations` | List of node taints to tolerate (requires Kubernetes >=1.6) | `[]`
`nodeSelector` | Node labels for pod assignment | `{}`
`affinity` | Node affinity | `{}`
`replicas` | Number of replicas | `1`
+`extraVolumeMounts` | Ability to provide volume mounts to the pod | `[]`
+`extraVolumes` | Ability to provide volumes to the pod | `[]`
+`livenessProbe` | Container liveness probe | See values.yaml
+`podAnnotations` | Annotations to be added to pods | `{}`
+`priorityClassName` | Pod priority class | `""`
+`readinessProbe` | Container readiness probe | See values.yaml
+`service.annotations` | Annotations to add to the service | `{}`
+`service.labels` | Labels to be added to the metrics-server service | `{}`
+`service.port` | Service port to expose | `443`
+`service.type` | Type of service to create | `ClusterIP`
+`podDisruptionBudget.enabled` | Create a PodDisruptionBudget | `false`
+`podDisruptionBudget.minAvailable` | Minimum available instances; ignored if there is no PodDisruptionBudget |
+`podDisruptionBudget.maxUnavailable` | Maximum unavailable instances; ignored if there is no PodDisruptionBudget |
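The new options in the table above can be combined in a values file; a minimal sketch, assuming chart defaults otherwise (all values illustrative):

```yaml
# metrics-server-values.yaml (illustrative)
rbac:
  pspEnabled: true
podDisruptionBudget:
  enabled: true
  minAvailable: 1
service:
  port: 443
  type: ClusterIP
```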
diff --git a/stable/metrics-server/templates/cluster-role.yaml b/stable/metrics-server/templates/cluster-role.yaml
index 4c94f175bb28..7880f992c1ff 100644
--- a/stable/metrics-server/templates/cluster-role.yaml
+++ b/stable/metrics-server/templates/cluster-role.yaml
@@ -14,16 +14,19 @@ rules:
resources:
- pods
- nodes
- - namespaces
+ - nodes/stats
verbs:
- get
- list
- watch
+ {{- if .Values.rbac.pspEnabled }}
- apiGroups:
- - ""
+ - extensions
resources:
- - nodes/stats
+ - podsecuritypolicies
+ resourceNames:
+ - privileged-{{ template "metrics-server.fullname" . }}
verbs:
- - get
- - create
+ - use
+ {{- end -}}
{{- end -}}
diff --git a/stable/metrics-server/templates/metric-server-service.yaml b/stable/metrics-server/templates/metric-server-service.yaml
index ddf03e1f2a6c..037a076ad7ff 100644
--- a/stable/metrics-server/templates/metric-server-service.yaml
+++ b/stable/metrics-server/templates/metric-server-service.yaml
@@ -7,11 +7,18 @@ metadata:
chart: {{ template "metrics-server.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
+ {{- with .Values.service.labels }}
+ {{ toYaml . | indent 4}}
+ {{- end }}
+ annotations:
+ {{- toYaml .Values.service.annotations | nindent 4 }}
spec:
ports:
- - port: 443
+ - port: {{ .Values.service.port }}
protocol: TCP
- targetPort: 443
+ targetPort: https
selector:
app: {{ template "metrics-server.name" . }}
release: {{ .Release.Name }}
+ type: {{ .Values.service.type }}
+
diff --git a/stable/metrics-server/templates/metrics-server-deployment.yaml b/stable/metrics-server/templates/metrics-server-deployment.yaml
index 65a6955ff2d9..725578c9013c 100644
--- a/stable/metrics-server/templates/metrics-server-deployment.yaml
+++ b/stable/metrics-server/templates/metrics-server-deployment.yaml
@@ -1,4 +1,4 @@
-apiVersion: apps/v1beta2
+apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "metrics-server.fullname" . }}
@@ -18,33 +18,64 @@ spec:
labels:
app: {{ template "metrics-server.name" . }}
release: {{ .Release.Name }}
+ {{- with .Values.podAnnotations }}
+ annotations:
+ {{- range $key, $value := . }}
+ {{ $key }}: {{ $value | quote }}
+ {{- end }}
+ {{- end }}
spec:
+ {{- if .Values.priorityClassName }}
+ priorityClassName: "{{ .Values.priorityClassName }}"
+ {{- end }}
+ {{- if .Values.imagePullSecrets }}
+ imagePullSecrets:
+ {{- range .Values.imagePullSecrets }}
+ - name: {{ . }}
+ {{- end }}
+ {{- end }}
serviceAccountName: {{ template "metrics-server.serviceAccountName" . }}
-{{ if .Values.hostNetwork.enabled }}
+{{- if .Values.hostNetwork.enabled }}
hostNetwork: true
-{{ end }}
+{{- end }}
containers:
- name: metrics-server
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
command:
- /metrics-server
+ - --cert-dir=/tmp
+ - --logtostderr
+ - --secure-port=8443
{{- range .Values.args }}
- {{ . | quote }}
{{- end }}
- {{- with .Values.resources }}
+ ports:
+ - containerPort: 8443
+ name: https
+ livenessProbe:
+ {{- toYaml .Values.livenessProbe | nindent 12 }}
+ readinessProbe:
+ {{- toYaml .Values.readinessProbe | nindent 12 }}
resources:
-{{ toYaml . | indent 12 }}
- {{- end }}
- {{- with .Values.nodeSelector }}
+ {{- toYaml .Values.resources | nindent 12 }}
+ securityContext:
+ {{- toYaml .Values.securityContext | nindent 12 }}
+ volumeMounts:
+ - name: tmp
+ mountPath: /tmp
+ {{- with .Values.extraVolumeMounts }}
+ {{- toYaml . | nindent 10 }}
+ {{- end }}
nodeSelector:
-{{ toYaml . | indent 8 }}
- {{- end }}
- {{- with .Values.affinity }}
+ {{- toYaml .Values.nodeSelector | nindent 8 }}
affinity:
-{{ toYaml . | indent 8 }}
- {{- end }}
- {{- with .Values.tolerations }}
+ {{- toYaml .Values.affinity | nindent 8 }}
tolerations:
-{{ toYaml . | indent 8 }}
- {{- end }}
+ {{- toYaml .Values.tolerations | nindent 8 }}
+ volumes:
+ - name: tmp
+ emptyDir: {}
+ {{- with .Values.extraVolumes }}
+ {{- toYaml . | nindent 6}}
+ {{- end }}
diff --git a/stable/metrics-server/templates/pdb.yaml b/stable/metrics-server/templates/pdb.yaml
new file mode 100644
index 000000000000..3831097d9d3f
--- /dev/null
+++ b/stable/metrics-server/templates/pdb.yaml
@@ -0,0 +1,23 @@
+{{- if .Values.podDisruptionBudget.enabled -}}
+apiVersion: policy/v1beta1
+kind: PodDisruptionBudget
+metadata:
+ labels:
+ app: {{ template "metrics-server.name" . }}
+ chart: {{ template "metrics-server.chart" . }}
+ release: {{ .Release.Name }}
+ heritage: {{ .Release.Service }}
+ name: {{ template "metrics-server.fullname" . }}
+ namespace: {{ .Release.Namespace }}
+
+spec:
+ {{- if .Values.podDisruptionBudget.minAvailable }}
+ minAvailable: {{ .Values.podDisruptionBudget.minAvailable }}
+ {{- end }}
+ {{- if .Values.podDisruptionBudget.maxUnavailable }}
+ maxUnavailable: {{ .Values.podDisruptionBudget.maxUnavailable }}
+ {{- end }}
+ selector:
+ matchLabels:
+ app: {{ template "metrics-server.name" . }}
+{{- end -}}
\ No newline at end of file
diff --git a/stable/metrics-server/templates/psp.yaml b/stable/metrics-server/templates/psp.yaml
new file mode 100644
index 000000000000..021ef97219b0
--- /dev/null
+++ b/stable/metrics-server/templates/psp.yaml
@@ -0,0 +1,26 @@
+{{- if .Values.rbac.pspEnabled }}
+apiVersion: extensions/v1beta1
+kind: PodSecurityPolicy
+metadata:
+ name: privileged-{{ template "metrics-server.fullname" . }}
+spec:
+ allowedCapabilities:
+ - '*'
+ fsGroup:
+ rule: RunAsAny
+ privileged: true
+ runAsUser:
+ rule: RunAsAny
+ seLinux:
+ rule: RunAsAny
+ supplementalGroups:
+ rule: RunAsAny
+ volumes:
+ - '*'
+ hostPID: true
+ hostIPC: true
+ hostNetwork: true
+ hostPorts:
+ - min: 1
+ max: 65536
+{{- end }}
diff --git a/stable/metrics-server/templates/tests/test-version.yaml b/stable/metrics-server/templates/tests/test-version.yaml
new file mode 100644
index 000000000000..3648e6d74eb9
--- /dev/null
+++ b/stable/metrics-server/templates/tests/test-version.yaml
@@ -0,0 +1,21 @@
+apiVersion: v1
+kind: Pod
+metadata:
+ name: {{ template "metrics-server.fullname" . }}-test
+ labels:
+ app: {{ template "metrics-server.name" . }}
+ chart: {{ template "metrics-server.chart" . }}
+ release: {{ .Release.Name }}
+ heritage: {{ .Release.Service }}
+ annotations:
+ "helm.sh/hook": test-success
+spec:
+ containers:
+ - name: wget
+ image: busybox
+ command: ['/bin/sh']
+ args:
+ - -c
+ - 'wget -qO- https://{{ include "metrics-server.fullname" . }}:{{ .Values.service.port }}/version | grep -F {{ .Values.image.tag }}'
+ restartPolicy: Never
+
diff --git a/stable/metrics-server/values.yaml b/stable/metrics-server/values.yaml
index c05332e98dd0..9fae75d3635e 100644
--- a/stable/metrics-server/values.yaml
+++ b/stable/metrics-server/values.yaml
@@ -1,6 +1,7 @@
rbac:
# Specifies whether RBAC resources should be created
create: true
+ pspEnabled: false
serviceAccount:
# Specifies whether a ServiceAccount should be created
@@ -27,11 +28,13 @@ hostNetwork:
image:
repository: gcr.io/google_containers/metrics-server-amd64
- tag: v0.3.1
+ tag: v0.3.2
pullPolicy: IfNotPresent
-args:
- - --logtostderr
+imagePullSecrets: []
+# - registrySecretName
+
+args: []
# enable this if you have self-signed certificates, see: https://github.com/kubernetes-incubator/metrics-server
# - --kubelet-insecure-tls
@@ -44,3 +47,57 @@ tolerations: []
affinity: {}
replicas: 1
+
+podAnnotations: {}
+# The following annotations guarantee scheduling for critical add-on pods.
+# See more at: https://kubernetes.io/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/
+# scheduler.alpha.kubernetes.io/critical-pod: ''
+# priorityClassName: system-node-critical
+
+extraVolumeMounts: []
+# - name: secrets
+# mountPath: /etc/kubernetes/secrets
+# readOnly: true
+
+extraVolumes: []
+# - name: secrets
+# secret:
+# secretName: kube-apiserver
+
+livenessProbe:
+ httpGet:
+ path: /healthz
+ port: https
+ scheme: HTTPS
+ initialDelaySeconds: 20
+
+readinessProbe:
+ httpGet:
+ path: /healthz
+ port: https
+ scheme: HTTPS
+ initialDelaySeconds: 20
+
+securityContext:
+ allowPrivilegeEscalation: false
+ capabilities:
+ drop: ["all"]
+ readOnlyRootFilesystem: true
+ runAsGroup: 10001
+ runAsNonRoot: true
+ runAsUser: 10001
+
+service:
+ annotations: {}
+ labels: {}
+ # Add these labels to have metrics-server show up in `kubectl cluster-info`
+ # kubernetes.io/cluster-service: "true"
+ # kubernetes.io/name: metrics-server
+ port: 443
+ type: ClusterIP
+
+podDisruptionBudget:
+ # https://kubernetes.io/docs/tasks/run-application/configure-pdb/
+ enabled: false
+ minAvailable:
+ maxUnavailable:
diff --git a/stable/minecraft/Chart.yaml b/stable/minecraft/Chart.yaml
index bd08eae8eff9..c85cb77745dd 100755
--- a/stable/minecraft/Chart.yaml
+++ b/stable/minecraft/Chart.yaml
@@ -1,5 +1,6 @@
+apiVersion: v1
name: minecraft
-version: 0.3.2
+version: 1.0.0
appVersion: 1.13.1
home: https://minecraft.net/
description: Minecraft server
diff --git a/stable/minecraft/ci/test-values.yaml b/stable/minecraft/ci/test-values.yaml
index 440b8e070bde..12fd267807fc 100644
--- a/stable/minecraft/ci/test-values.yaml
+++ b/stable/minecraft/ci/test-values.yaml
@@ -1,3 +1,3 @@
# This must be overridden, since we can't accept this for the user.
- minecraftServer.eula: "TRUE"
-#fix in place and rebased
+minecraftServer.eula: "TRUE"
+# fix in place and rebased
diff --git a/stable/minio/Chart.yaml b/stable/minio/Chart.yaml
index d82698a63532..286404c5647c 100755
--- a/stable/minio/Chart.yaml
+++ b/stable/minio/Chart.yaml
@@ -1,14 +1,14 @@
apiVersion: v1
-description: Minio is a high performance distributed object storage server, designed for large-scale private cloud infrastructure.
+description: MinIO is a high performance distributed object storage server, designed for large-scale private cloud infrastructure.
name: minio
-version: 2.4.1
-appVersion: RELEASE.2019-01-16T21-44-08Z
+version: 2.4.16
+appVersion: RELEASE.2019-05-14T23-57-45Z
keywords:
- storage
- object-storage
- S3
-home: https://minio.io
-icon: https://www.minio.io/img/logo_160x160.png
+home: https://min.io
+icon: https://min.io/resources/img/logo/MINIO_wordmark.png
sources:
- https://github.com/minio/minio
maintainers:
diff --git a/stable/minio/README.md b/stable/minio/README.md
index 3b44b37e7653..9b904eaa6398 100755
--- a/stable/minio/README.md
+++ b/stable/minio/README.md
@@ -1,20 +1,20 @@
-Minio
+MinIO
=====
-[Minio](https://minio.io) is a distributed object storage service for high performance, high scale data infrastructures. It is a drop in replacement for AWS S3 in your own environment. It uses erasure coding to provide highly resilient storage that can tolerate failures of upto n/2 nodes. It runs on cloud, container, kubernetes and bare-metal environments. It is simple enough to be deployed in seconds, and can scale to 100s of peta bytes. Minio is suitable for storing objects such as photos, videos, log files, backups, VM and container images.
+[MinIO](https://min.io) is a distributed object storage service for high performance, high scale data infrastructures. It is a drop-in replacement for AWS S3 in your own environment. It uses erasure coding to provide highly resilient storage that can tolerate failures of up to n/2 nodes. It runs on cloud, container, Kubernetes and bare-metal environments. It is simple enough to be deployed in seconds, and can scale to hundreds of petabytes. MinIO is suitable for storing objects such as photos, videos, log files, backups, VM and container images.
-Minio supports [distributed mode](https://docs.minio.io/docs/distributed-minio-quickstart-guide). In distributed mode, you can pool multiple drives (even on different machines) into a single object storage server.
+MinIO supports [distributed mode](https://docs.minio.io/docs/distributed-minio-quickstart-guide). In distributed mode, you can pool multiple drives (even on different machines) into a single object storage server.
Introduction
------------
-This chart bootstraps Minio deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
+This chart bootstraps MinIO deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
Prerequisites
-------------
- Kubernetes 1.4+ with Beta APIs enabled for default standalone mode.
-- Kubernetes 1.5+ with Beta APIs enabled to run Minio in [distributed mode](#distributed-minio).
+- Kubernetes 1.5+ with Beta APIs enabled to run MinIO in [distributed mode](#distributed-minio).
- PV provisioner support in the underlying infrastructure.
Installing the Chart
@@ -26,7 +26,7 @@ Install this chart using:
$ helm install stable/minio
```
-The command deploys Minio on the Kubernetes cluster in the default configuration. The [configuration](#configuration) section lists the parameters that can be configured during installation.
+The command deploys MinIO on the Kubernetes cluster in the default configuration. The [configuration](#configuration) section lists the parameters that can be configured during installation.
### Release name
@@ -45,15 +45,15 @@ $ helm install --set accessKey=myaccesskey,secretKey=mysecretkey \
stable/minio
```
-### Updating Minio configuration via Helm
+### Updating MinIO configuration via Helm
[ConfigMap](https://kubernetes.io/docs/user-guide/configmap/) allows injecting containers with configuration data even while a Helm release is deployed.
-To update your Minio server configuration while it is deployed in a release, you need to
+To update your MinIO server configuration while it is deployed in a release, you need to
-1. Check all the configurable values in the Minio chart using `helm inspect values stable/minio`.
+1. Check all the configurable values in the MinIO chart using `helm inspect values stable/minio`.
2. Override the `minio_server_config` settings in a YAML formatted file, and then pass that file like this `helm upgrade -f config.yaml stable/minio`.
-3. Restart the Minio server(s) for the changes to take effect.
+3. Restart the MinIO server(s) for the changes to take effect.
You can also check the history of upgrades to a release using `helm history my-release`. Replace `my-release` with the actual release name.
@@ -71,13 +71,13 @@ The command removes all the Kubernetes components associated with the chart and
Upgrading the Chart
-------------------
-You can use Helm to update Minio version in a live release. Assuming your release is named as `my-release`, get the values using the command:
+You can use Helm to update MinIO version in a live release. Assuming your release is named as `my-release`, get the values using the command:
```bash
$ helm get values my-release > old_values.yaml
```
-Then change the field `image.tag` in `old_values.yaml` file with Minio image tag you want to use. Now update the chart using
+Then change the field `image.tag` in `old_values.yaml` file with MinIO image tag you want to use. Now update the chart using
```bash
$ helm upgrade -f old_values.yaml my-release stable/minio
@@ -88,27 +88,27 @@ Default upgrade strategies are specified in the `values.yaml` file. Update these
Configuration
-------------
-The following table lists the configurable parameters of the Minio chart and their default values.
+The following table lists the configurable parameters of the MinIO chart and their default values.
| Parameter | Description | Default |
|----------------------------|-------------------------------------|---------------------------------------------------------|
| `image.repository` | Image repository | `minio/minio` |
-| `image.tag` | Minio image tag. Possible values listed [here](https://hub.docker.com/r/minio/minio/tags/).| `RELEASE.2019-01-16T21-44-08Z`|
+| `image.tag` | MinIO image tag. Possible values listed [here](https://hub.docker.com/r/minio/minio/tags/).| `RELEASE.2019-05-14T23-57-45Z`|
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `mcImage.repository` | Client image repository | `minio/mc` |
-| `mcImage.tag` | mc image tag. Possible values listed [here](https://hub.docker.com/r/minio/mc/tags/).| `RELEASE.2019-01-10T00-38-22Z`|
+| `mcImage.tag` | mc image tag. Possible values listed [here](https://hub.docker.com/r/minio/mc/tags/).| `RELEASE.2019-05-01T23-27-44Z`|
| `mcImage.pullPolicy` | mc Image pull policy | `IfNotPresent` |
| `ingress.enabled` | Enables Ingress | `false` |
| `ingress.annotations` | Ingress annotations | `{}` |
| `ingress.hosts` | Ingress accepted hostnames | `[]` |
| `ingress.tls` | Ingress TLS configuration | `[]` |
-| `mode` | Minio server mode (`standalone` or `distributed`)| `standalone` |
-| `replicas` | Number of nodes (applicable only for Minio distributed mode). Should be 4 <= x <= 32 | `4` |
+| `mode` | MinIO server mode (`standalone` or `distributed`)| `standalone` |
+| `replicas` | Number of nodes (applicable only for MinIO distributed mode). Should be 4 <= x <= 32 | `4` |
| `existingSecret` | Name of existing secret with access and secret key.| `""` |
| `accessKey` | Default access key (5 to 20 characters) | `AKIAIOSFODNN7EXAMPLE` |
| `secretKey` | Default secret key (8 to 40 characters) | `wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY` |
| `configPath` | Default config file location | `~/.minio` |
-| `configPathmc` | Default config file location for minio client - mc | `~/.mc` |
+| `configPathmc` | Default config file location for MinIO client - mc | `~/.mc` |
| `mountPath` | Default mount location for persistent drive| `/export` |
| `clusterDomain` | domain name of kubernetes cluster where pod is running.| `cluster.local` |
| `service.type` | Kubernetes service type | `ClusterIP` |
@@ -126,7 +126,8 @@ The following table lists the configurable parameters of the Minio chart and the
| `nodeSelector` | Node labels for pod assignment | `{}` |
| `affinity` | Affinity settings for pod assignment | `{}` |
| `tolerations` | Toleration labels for pod assignment | `[]` |
-| `tls.enabled` | Enable TLS for Minio server | `false` |
+| `podAnnotations` | Pod annotations | `{}` |
+| `tls.enabled` | Enable TLS for MinIO server | `false` |
| `tls.certSecret` | Kubernetes Secret with `public.crt` and `private.key` files. | `""` |
| `livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | `5` |
| `livenessProbe.periodSeconds` | How often to perform the probe | `30` |
@@ -138,25 +139,27 @@ The following table lists the configurable parameters of the Minio chart and the
| `readinessProbe.timeoutSeconds` | When the probe times out | `1` |
| `readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed. | `1` |
| `readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | `3` |
-| `defaultBucket.enabled` | If set to true, a bucket will be created after minio install | `false` |
+| `defaultBucket.enabled` | If set to true, a bucket will be created after MinIO install | `false` |
| `defaultBucket.name` | Bucket name | `bucket` |
| `defaultBucket.policy` | Bucket policy | `none` |
| `defaultBucket.purge` | Purge the bucket if already exists | `false` |
-| `buckets` | List of buckets to create after minio install | `[]` |
-| `s3gateway.enabled` | Use minio as a [s3 gateway](https://github.com/minio/minio/blob/master/docs/gateway/s3.md)| `false` |
+| `buckets` | List of buckets to create after MinIO install | `[]` |
+| `s3gateway.enabled` | Use MinIO as an [S3 gateway](https://github.com/minio/minio/blob/master/docs/gateway/s3.md)| `false` |
| `s3gateway.replicas` | Number of s3 gateway instances to run in parallel | `4` |
-| `azuregateway.enabled` | Use minio as an [azure gateway](https://docs.minio.io/docs/minio-gateway-for-azure)| `false` |
-| `gcsgateway.enabled` | Use minio as a [Google Cloud Storage gateway](https://docs.minio.io/docs/minio-gateway-for-gcs)| `false` |
+| `s3gateway.serviceEndpoint`| Endpoint to the S3-compatible service | `""` |
+| `azuregateway.enabled` | Use MinIO as an [Azure gateway](https://docs.minio.io/docs/minio-gateway-for-azure)| `false` |
+| `azuregateway.replicas` | Number of azure gateway instances to run in parallel | `4` |
+| `gcsgateway.enabled` | Use MinIO as a [Google Cloud Storage gateway](https://docs.minio.io/docs/minio-gateway-for-gcs)| `false` |
| `gcsgateway.gcsKeyJson` | credential json file of service account key | `""` |
| `gcsgateway.projectId` | Google cloud project id | `""` |
-| `ossgateway.enabled` | Use minio as an [Alibaba Cloud Object Storage Service gateway](https://github.com/minio/minio/blob/master/docs/gateway/oss.md)| `false` |
+| `ossgateway.enabled` | Use MinIO as an [Alibaba Cloud Object Storage Service gateway](https://github.com/minio/minio/blob/master/docs/gateway/oss.md)| `false` |
| `ossgateway.replicas` | Number of oss gateway instances to run in parallel | `4` |
| `ossgateway.endpointURL` | OSS server endpoint. | `""` |
-| `nasgateway.enabled` | Use minio as a [NAS gateway](https://docs.minio.io/docs/minio-gateway-for-nas) | `false` |
+| `nasgateway.enabled` | Use MinIO as a [NAS gateway](https://docs.minio.io/docs/minio-gateway-for-nas) | `false` |
| `nasgateway.replicas` | Number of NAS gateway instances to be run in parallel on a PV | `4` |
-| `environment` | Set Minio server relevant environment variables in `values.yaml` file. Minio containers will be passed these variables when they start. | `MINIO_BROWSER: "on"` |
+| `environment` | Set MinIO server relevant environment variables in `values.yaml` file. MinIO containers will be passed these variables when they start. | `MINIO_BROWSER: "on"` |
-Some of the parameters above map to the env variables defined in the [Minio DockerHub image](https://hub.docker.com/r/minio/minio/).
+Some of the parameters above map to the env variables defined in the [MinIO DockerHub image](https://hub.docker.com/r/minio/minio/).
You can specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
@@ -166,7 +169,7 @@ $ helm install --name my-release \
stable/minio
```
-The above command deploys Minio server with a 100Gi backing persistent volume.
+The above command deploys MinIO server with a 100Gi backing persistent volume.
Alternately, you can provide a YAML file that specifies parameter values while installing the chart. For example,
@@ -176,51 +179,51 @@ $ helm install --name my-release -f values.yaml stable/minio
> **Tip**: You can use the default [values.yaml](values.yaml)
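
For instance, a small custom values file for `helm install -f` could look like the following sketch. The keys come from the parameter table above; the concrete values are illustrative only:

```yaml
# Illustrative overrides for stable/minio; keys are from the parameter table above.
mode: distributed          # standalone or distributed
replicas: 4                # only used in distributed mode, 4 <= x <= 32
persistence:
  enabled: true
  size: 100Gi
environment:
  MINIO_BROWSER: "on"
```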
-Distributed Minio
+Distributed MinIO
-----------
-This chart provisions a Minio server in standalone mode, by default. To provision Minio server in [distributed mode](https://docs.minio.io/docs/distributed-minio-quickstart-guide), set the `mode` field to `distributed`,
+By default, this chart provisions a MinIO server in standalone mode. To provision a MinIO server in [distributed mode](https://docs.minio.io/docs/distributed-minio-quickstart-guide), set the `mode` field to `distributed`,
```bash
$ helm install --set mode=distributed stable/minio
```
-This provisions Minio server in distributed mode with 4 nodes. To change the number of nodes in your distributed Minio server, set the `replicas` field,
+This provisions a MinIO server in distributed mode with 4 nodes. To change the number of nodes in your distributed MinIO server, set the `replicas` field,
```bash
$ helm install --set mode=distributed,replicas=8 stable/minio
```
-This provisions Minio server in distributed mode with 8 nodes. Note that the `replicas` value should be an integer between 4 and 16 (inclusive).
+This provisions a MinIO server in distributed mode with 8 nodes. Note that the `replicas` value should be an integer between 4 and 32 (inclusive).
-### StatefulSet [limitations](http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/#limitations) applicable to distributed Minio
+### StatefulSet [limitations](http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/#limitations) applicable to distributed MinIO
1. StatefulSets need persistent storage, so the `persistence.enabled` flag is ignored when `mode` is set to `distributed`.
-2. When uninstalling a distributed Minio release, you'll need to manually delete volumes associated with the StatefulSet.
+2. When uninstalling a distributed MinIO release, you'll need to manually delete volumes associated with the StatefulSet.
NAS Gateway
-----------
### Prerequisites
-Minio in [NAS gateway mode](https://docs.minio.io/docs/minio-gateway-for-nas) can be used to create multiple Minio instances backed by single PV in `ReadWriteMany` mode. Currently few [Kubernetes volume plugins](https://kubernetes.io/docs/user-guide/persistent-volumes/#access-modes) support `ReadWriteMany` mode. To deploy Minio NAS gateway with Helm chart you'll need to have a Persistent Volume running with one of the supported volume plugins. [This document](https://kubernetes.io/docs/user-guide/volumes/#nfs)
+MinIO in [NAS gateway mode](https://docs.minio.io/docs/minio-gateway-for-nas) can be used to create multiple MinIO instances backed by a single PV in `ReadWriteMany` mode. Currently, only a few [Kubernetes volume plugins](https://kubernetes.io/docs/user-guide/persistent-volumes/#access-modes) support `ReadWriteMany` mode. To deploy the MinIO NAS gateway with this Helm chart, you'll need a Persistent Volume backed by one of the supported volume plugins. [This document](https://kubernetes.io/docs/user-guide/volumes/#nfs)
outlines steps to create a NFS PV in Kubernetes cluster.
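
As a rough sketch, an NFS-backed `ReadWriteMany` Persistent Volume could look like this (the server address, export path, and capacity are placeholders, not values from this chart):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: minio-nfs-pv
spec:
  capacity:
    storage: 10Gi            # placeholder size
  accessModes:
    - ReadWriteMany          # required for NAS gateway mode
  nfs:
    server: 10.0.0.10        # placeholder NFS server address
    path: /exports/minio     # placeholder export path
```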
-### Provision NAS Gateway Minio instances
+### Provision NAS Gateway MinIO instances
-To provision Minio servers in [NAS gateway mode](https://docs.minio.io/docs/minio-gateway-for-nas), set the `nasgateway.enabled` field to `true`,
+To provision MinIO servers in [NAS gateway mode](https://docs.minio.io/docs/minio-gateway-for-nas), set the `nasgateway.enabled` field to `true`,
```bash
$ helm install --set nasgateway.enabled=true stable/minio
```
-This provisions 4 Minio NAS gateway instances backed by single storage. To change the number of instances in your Minio deployment, set the `replicas` field,
+This provisions 4 MinIO NAS gateway instances backed by a single storage volume. To change the number of instances in your MinIO deployment, set the `nasgateway.replicas` field,
```bash
$ helm install --set nasgateway.enabled=true,nasgateway.replicas=8 stable/minio
```
-This provisions Minio NAS gateway with 8 instances.
+This provisions MinIO NAS gateway with 8 instances.
Persistence
-----------
@@ -249,7 +252,7 @@ $ helm install --set persistence.existingClaim=PVC_NAME stable/minio
NetworkPolicy
-------------
-To enable network policy for Minio,
+To enable network policy for MinIO,
install [a networking plugin that implements the Kubernetes
NetworkPolicy spec](https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy#before-you-begin),
and set `networkPolicy.enabled` to `true`.
@@ -262,7 +265,7 @@ the DefaultDeny namespace annotation. Note: this will enforce policy for _all_ p
With NetworkPolicy enabled, traffic will be limited to just port 9000.
For more precise policy, set `networkPolicy.allowExternal=true`. This will
-only allow pods with the generated client label to connect to Minio.
+only allow pods with the generated client label to connect to MinIO.
This label will be displayed in the output of a successful install.
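
In values terms, a locked-down configuration would be (keys taken from the chart's `networkPolicy` block):

```yaml
networkPolicy:
  enabled: true        # requires a CNI plugin that implements NetworkPolicy
  allowExternal: false # only pods carrying the generated client label may connect
```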
Existing secret
@@ -289,7 +292,7 @@ The following fields are expected in the secret
Configure TLS
-------------
-To enable TLS for Minio containers, acquire TLS certificates from a CA or create self-signed certificates. While creating / acquiring certificates ensure the corresponding domain names are set as per the standard [DNS naming conventions](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-identity) in a Kubernetes StatefulSet (for a distributed Minio setup). Then create a secret using
+To enable TLS for MinIO containers, acquire TLS certificates from a CA or create self-signed certificates. While creating / acquiring certificates ensure the corresponding domain names are set as per the standard [DNS naming conventions](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-identity) in a Kubernetes StatefulSet (for a distributed MinIO setup). Then create a secret using
```bash
$ kubectl create secret generic tls-ssl-minio --from-file=path/to/private.key --from-file=path/to/public.crt
@@ -301,10 +304,10 @@ Then install the chart, specifying that you want to use the TLS secret:
$ helm install --set tls.enabled=true,tls.certSecret=tls-ssl-minio stable/minio
```
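
Equivalently, the same two settings can live in a values file (key names from the parameter table above):

```yaml
tls:
  enabled: true
  certSecret: tls-ssl-minio  # secret containing public.crt and private.key
```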
-Pass environment variables to Minio containers
+Pass environment variables to MinIO containers
----------------------------------------------
-To pass environment variables to Minio containers when deploying via Helm chart, use the below command line format
+To pass environment variables to MinIO containers when deploying via the Helm chart, use the following command-line format:
```bash
$ helm install --set environment.MINIO_BROWSER=on,environment.MINIO_DOMAIN=domain-name stable/minio
diff --git a/stable/minio/templates/NOTES.txt b/stable/minio/templates/NOTES.txt
index a54431c28242..b690f5028cf4 100644
--- a/stable/minio/templates/NOTES.txt
+++ b/stable/minio/templates/NOTES.txt
@@ -4,7 +4,7 @@ Minio can be accessed via port {{ .Values.service.port }} on the following DNS n
To access Minio from localhost, run the below commands:
- 1. export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "release={{ template "minio.fullname" . }}" -o jsonpath="{.items[0].metadata.name}")
+ 1. export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "release={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
2. kubectl port-forward $POD_NAME 9000 --namespace {{ .Release.Namespace }}
diff --git a/stable/minio/templates/_helpers.tpl b/stable/minio/templates/_helpers.tpl
index e928bf63f02a..c8fe9ba7aa05 100644
--- a/stable/minio/templates/_helpers.tpl
+++ b/stable/minio/templates/_helpers.tpl
@@ -19,7 +19,7 @@ If release name contains chart name it will be used as a full name.
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
-{{- printf "%s" .Release.Name | trunc 63 | trimSuffix "-" -}}
+{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
diff --git a/stable/minio/templates/deployment.yaml b/stable/minio/templates/deployment.yaml
index f48f2c962deb..3773dd57caa9 100644
--- a/stable/minio/templates/deployment.yaml
+++ b/stable/minio/templates/deployment.yaml
@@ -11,9 +11,13 @@ metadata:
spec:
strategy:
type: {{ .Values.DeploymentUpdate.type }}
+ {{- if eq .Values.DeploymentUpdate.type "RollingUpdate" }}
rollingUpdate:
maxSurge: {{ .Values.DeploymentUpdate.maxSurge }}
maxUnavailable: {{ .Values.DeploymentUpdate.maxUnavailable }}
+ {{- else }}
+ rollingUpdate: null
+ {{- end}}
{{- if .Values.nasgateway.enabled }}
replicas: {{ .Values.nasgateway.replicas }}
{{- end }}
@@ -39,6 +43,10 @@ spec:
labels:
app: {{ template "minio.name" . }}
release: {{ .Release.Name }}
+ {{- if .Values.podAnnotations }}
+ annotations:
+{{ toYaml .Values.podAnnotations | indent 8 }}
+ {{- end }}
spec:
{{- if .Values.priorityClassName }}
priorityClassName: "{{ .Values.priorityClassName }}"
@@ -50,7 +58,7 @@ spec:
{{- if .Values.s3gateway.enabled }}
command: [ "/bin/sh",
"-ce",
- "/usr/bin/docker-entrypoint.sh minio -C {{ .Values.configPath }} gateway s3" ]
+ "/usr/bin/docker-entrypoint.sh minio -C {{ .Values.configPath }} gateway s3 {{ .Values.s3gateway.serviceEndpoint }}" ]
{{- else }}
{{- if .Values.azuregateway.enabled }}
command: [ "/bin/sh",
@@ -82,7 +90,7 @@ spec:
{{- end }}
{{- end }}
volumeMounts:
- {{- if and .Values.persistence.enabled ((not .Values.gcsgateway.enabled) (not .Values.azuregateway.enabled) (not .Values.s3gateway.enabled)) }}
+ {{- if and .Values.persistence.enabled (not .Values.gcsgateway.enabled) (not .Values.azuregateway.enabled) (not .Values.s3gateway.enabled) }}
- name: export
mountPath: {{ .Values.mountPath }}
{{- if .Values.persistence.subPath }}
@@ -116,10 +124,7 @@ spec:
key: secretkey
{{- if .Values.gcsgateway.enabled }}
- name: GOOGLE_APPLICATION_CREDENTIALS
- valueFrom:
- secretKeyRef:
- name: {{ template "minio.fullname" . }}
- key: gcs_key.json
+ value: "/etc/credentials/gcs_key.json"
{{- end }}
{{- range $key, $val := .Values.environment }}
- name: {{ $key }}
@@ -146,6 +151,7 @@ spec:
{{- end }}
path: /minio/health/ready
port: service
+ initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.readinessProbe.periodSeconds }}
timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }}
successThreshold: {{ .Values.readinessProbe.successThreshold }}
@@ -165,7 +171,7 @@ spec:
{{ toYaml . | indent 8 }}
{{- end }}
volumes:
- {{- if and ((not .Values.gcsgateway.enabled) (not .Values.azuregateway.enabled) (not .Values.s3gateway.enabled)) }}
+ {{- if and (not .Values.gcsgateway.enabled) (not .Values.azuregateway.enabled) (not .Values.s3gateway.enabled) }}
- name: export
{{- if .Values.persistence.enabled }}
persistentVolumeClaim:
diff --git a/stable/minio/templates/post-install-create-bucket-job.yaml b/stable/minio/templates/post-install-create-bucket-job.yaml
index 4801a12ef694..c581338a2653 100755
--- a/stable/minio/templates/post-install-create-bucket-job.yaml
+++ b/stable/minio/templates/post-install-create-bucket-job.yaml
@@ -30,7 +30,7 @@ spec:
- configMap:
name: {{ template "minio.fullname" . }}
- secret:
- name: {{ template "minio.fullname" . }}
+ name: {{ if .Values.existingSecret }}{{ .Values.existingSecret }}{{ else }}{{ template "minio.fullname" . }}{{ end }}
{{- if .Values.tls.enabled }}
- name: cert-secret-volume-mc
secret:
diff --git a/stable/minio/templates/pvc.yaml b/stable/minio/templates/pvc.yaml
index 3f4cbb03593f..014f90f3e62f 100644
--- a/stable/minio/templates/pvc.yaml
+++ b/stable/minio/templates/pvc.yaml
@@ -20,8 +20,16 @@ spec:
resources:
requests:
storage: {{ .Values.persistence.size | quote }}
+
{{- if .Values.persistence.storageClass }}
- storageClassName: {{ .Values.persistence.storageClass | quote }}
+{{- if (eq "-" .Values.persistence.storageClass) }}
+ storageClassName: ""
+{{- else }}
+ storageClassName: "{{ .Values.persistence.storageClass }}"
+{{- end }}
+{{- end }}
+{{- if .Values.persistence.VolumeName }}
+ volumeName: "{{ .Values.persistence.VolumeName }}"
{{- end }}
{{- end }}
{{- end }}
diff --git a/stable/minio/templates/service.yaml b/stable/minio/templates/service.yaml
index 0799b287e9c1..3df046847ce3 100644
--- a/stable/minio/templates/service.yaml
+++ b/stable/minio/templates/service.yaml
@@ -13,11 +13,7 @@ metadata:
{{- end }}
spec:
{{- if (or (eq .Values.service.type "ClusterIP" "") (empty .Values.service.type)) }}
- {{- if eq .Values.mode "distributed" }}
- clusterIP: None
- {{- else }}
type: ClusterIP
- {{- end }}
{{- if not (empty .Values.service.clusterIP) }}
clusterIP: {{ .Values.service.clusterIP }}
{{end}}
@@ -29,11 +25,12 @@ spec:
{{- end }}
ports:
- name: service
- port: 9000
- targetPort: {{ .Values.service.port }}
+ port: {{ .Values.service.port }}
protocol: TCP
{{- if (and (eq .Values.service.type "NodePort") ( .Values.service.nodePort)) }}
nodePort: {{ .Values.service.nodePort }}
+{{- else }}
+ targetPort: 9000
{{- end}}
{{- if .Values.service.externalIPs }}
externalIPs:
diff --git a/stable/minio/templates/statefulset.yaml b/stable/minio/templates/statefulset.yaml
index 2c0d01790532..1eb66639211a 100644
--- a/stable/minio/templates/statefulset.yaml
+++ b/stable/minio/templates/statefulset.yaml
@@ -1,5 +1,23 @@
{{- if eq .Values.mode "distributed" }}
{{ $nodeCount := .Values.replicas | int }}
+apiVersion: v1
+kind: Service
+metadata:
+ name: {{ template "minio.fullname" . }}-svc
+ labels:
+ app: {{ template "minio.name" . }}
+ chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ release: "{{ .Release.Name }}"
+ heritage: "{{ .Release.Service }}"
+spec:
+ clusterIP: None
+ ports:
+ - name: service
+ port: {{ .Values.service.port }}
+ protocol: TCP
+ selector:
+ app: {{ template "minio.name" . }}
+---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
@@ -12,7 +30,7 @@ metadata:
spec:
updateStrategy:
type: {{ .Values.StatefulSetUpdate.updateStrategy }}
- serviceName: {{ template "minio.fullname" . }}
+ serviceName: {{ template "minio.fullname" . }}-svc
replicas: {{ .Values.replicas }}
selector:
matchLabels:
@@ -24,6 +42,10 @@ spec:
labels:
app: {{ template "minio.name" . }}
release: {{ .Release.Name }}
+ {{- if .Values.podAnnotations }}
+ annotations:
+{{ toYaml .Values.podAnnotations | indent 8 }}
+ {{- end }}
spec:
{{- if .Values.priorityClassName }}
priorityClassName: "{{ .Values.priorityClassName }}"
@@ -37,22 +59,24 @@ spec:
"-ce",
"/usr/bin/docker-entrypoint.sh minio -C {{ .Values.configPath }} server
{{- range $i := until $nodeCount }}
- https://{{ template `minio.fullname` $ }}-{{ $i }}.{{ template `minio.fullname` $ }}.{{ $.Release.Namespace }}.svc.{{ $.Values.clusterDomain }}{{ $.Values.mountPath }}
+ https://{{ template `minio.fullname` $ }}-{{ $i }}.{{ template `minio.fullname` $ }}-svc.{{ $.Release.Namespace }}.svc.{{ $.Values.clusterDomain }}{{ $.Values.mountPath }}
{{- end }}" ]
{{ else }}
command: [ "/bin/sh",
"-ce",
"/usr/bin/docker-entrypoint.sh minio -C {{ .Values.configPath }} server
{{- range $i := until $nodeCount }}
- http://{{ template `minio.fullname` $ }}-{{ $i }}.{{ template `minio.fullname` $ }}.{{ $.Release.Namespace }}.svc.{{ $.Values.clusterDomain }}{{ $.Values.mountPath }}
+ http://{{ template `minio.fullname` $ }}-{{ $i }}.{{ template `minio.fullname` $ }}-svc.{{ $.Release.Namespace }}.svc.{{ $.Values.clusterDomain }}{{ $.Values.mountPath }}
{{- end }}" ]
{{ end }}
volumeMounts:
+ {{- if .Values.persistence.enabled }}
- name: export
mountPath: {{ .Values.mountPath }}
{{- if and .Values.persistence.enabled .Values.persistence.subPath }}
subPath: "{{ .Values.persistence.subPath }}"
{{- end }}
+ {{ end }}
- name: minio-config-dir
mountPath: {{ .Values.configPath }}
{{- if .Values.tls.enabled }}
@@ -123,6 +147,7 @@ spec:
- key: {{ .Values.tls.publicCrt }}
path: CAs/public.crt
{{ end }}
+{{- if .Values.persistence.enabled }}
volumeClaimTemplates:
- metadata:
name: export
@@ -134,4 +159,5 @@ spec:
resources:
requests:
storage: {{ .Values.persistence.size }}
+{{ end }}
{{- end }}
diff --git a/stable/minio/values.yaml b/stable/minio/values.yaml
index 377689557321..e23df7e64c6d 100755
--- a/stable/minio/values.yaml
+++ b/stable/minio/values.yaml
@@ -6,7 +6,7 @@ clusterDomain: cluster.local
##
image:
repository: minio/minio
- tag: RELEASE.2019-01-16T21-44-08Z
+ tag: RELEASE.2019-05-14T23-57-45Z
pullPolicy: IfNotPresent
## Set default image, imageTag, and imagePullPolicy for the `mc` (the minio
@@ -14,7 +14,7 @@ image:
##
mcImage:
repository: minio/mc
- tag: RELEASE.2019-01-10T00-38-22Z
+ tag: RELEASE.2019-05-01T23-27-44Z
pullPolicy: IfNotPresent
## minio server mode, i.e. standalone or distributed.
@@ -78,6 +78,7 @@ persistence:
## Storage class of PV to bind. By default it looks for standard storage class.
## If the PV uses a different storage class, specify that here.
# storageClass: standard
+ # VolumeName: ""
accessMode: ReadWriteOnce
size: 10Gi
@@ -95,7 +96,7 @@ service:
type: ClusterIP
clusterIP: ~
port: 9000
- # nodePort: 31311
+ nodePort: 31311
# externalIPs:
# - externalIp1
annotations: {}
@@ -103,11 +104,19 @@ service:
# prometheus.io/path: '/minio/prometheus/metrics'
# prometheus.io/port: '9000'
+## Configure Ingress based on the documentation here: https://kubernetes.io/docs/concepts/services-networking/ingress/
+##
+
ingress:
enabled: false
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
+ # kubernetes.io/ingress.allow-http: "false"
+ # kubernetes.io/ingress.global-static-ip-name: ""
+ # nginx.ingress.kubernetes.io/secure-backends: "true"
+ # nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
+ # nginx.ingress.kubernetes.io/whitelist-source-range: 0.0.0.0/0
path: /
hosts:
- chart-example.local
@@ -123,6 +132,9 @@ nodeSelector: {}
tolerations: []
affinity: {}
+# Additional pod annotations
+podAnnotations: {}
+
## Liveness and Readiness probe values.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
livenessProbe:
@@ -171,6 +183,7 @@ buckets: []
s3gateway:
enabled: false
replicas: 4
+ serviceEndpoint: ""
## Use minio as an azure blob gateway, you should disable data persistence so no volume claim are created.
## https://docs.minio.io/docs/minio-gateway-for-azure
@@ -233,96 +246,6 @@ environment:
# MINIO_DOMAIN: ""
## Add other environment variables relevant to Minio server here. These values will be added to the container(s) as this Chart is deployed
-## https://docs.minio.io/docs/minio-bucket-notification-guide
-## https://github.com/minio/minio/blob/master/docs/config
-minioConfig:
- region: "us-east-1"
- browser: "on"
- domain: ""
- worm: "off"
- storageClass:
- standardStorageClass: ""
- reducedRedundancyStorageClass: ""
- cache:
- drives: []
- expiry: 90
- maxuse: 80
- exclude: []
- aqmp:
- enable: false
- url: ""
- exchange: ""
- routingKey: ""
- exchangeType: ""
- deliveryMode: 0
- mandatory: false
- immediate: false
- durable: false
- internal: false
- noWait: false
- autoDeleted: false
- nats:
- enable: false
- address: ""
- subject: ""
- username: ""
- password: ""
- token: ""
- secure: false
- pingInterval: 0
- enableStreaming: false
- clusterID: ""
- clientID: ""
- async: false
- maxPubAcksInflight: 0
- elasticsearch:
- enable: false
- format: "namespace"
- url: ""
- index: ""
- redis:
- enable: false
- format: "namespace"
- address: ""
- password: ""
- key: ""
- postgresql:
- enable: false
- format: "namespace"
- connectionString: ""
- table: ""
- host: ""
- port: ""
- user: ""
- password: ""
- database: ""
- kafka:
- enable: false
- brokers: "null"
- topic: ""
- webhook:
- enable: false
- endpoint: ""
- mysql:
- enable: false
- format: "namespace"
- dsnString: ""
- table: ""
- host: ""
- port: ""
- user: ""
- password: ""
- database: ""
- mqtt:
- enable: false
- broker: ""
- topic: ""
- qos: 0
- clientId: ""
- username: ""
- password: ""
- reconnectInterval: 0
- keepAliveInterval: 0
networkPolicy:
enabled: false
allowExternal: true
diff --git a/stable/mongodb-replicaset/Chart.yaml b/stable/mongodb-replicaset/Chart.yaml
index 20670fd967fa..19f770298b05 100644
--- a/stable/mongodb-replicaset/Chart.yaml
+++ b/stable/mongodb-replicaset/Chart.yaml
@@ -1,6 +1,7 @@
+apiVersion: v1
name: mongodb-replicaset
home: https://github.com/mongodb/mongo
-version: 3.9.0
+version: 3.9.4
appVersion: 3.6
description: NoSQL document-oriented database that stores JSON-like documents with
dynamic schemas, simplifying the integration of data in content-driven applications.
diff --git a/stable/mongodb-replicaset/README.md b/stable/mongodb-replicaset/README.md
index 45492ff5e971..23ee2d2b24a5 100644
--- a/stable/mongodb-replicaset/README.md
+++ b/stable/mongodb-replicaset/README.md
@@ -49,7 +49,10 @@ The following table lists the configurable parameters of the mongodb chart and t
| `image.tag` | MongoDB image tag | `3.6` |
| `image.pullPolicy` | MongoDB image pull policy | `IfNotPresent` |
| `podAnnotations` | Annotations to be added to MongoDB pods | `{}` |
-| `securityContext` | Security context for the pod | `{runAsUser: 999, fsGroup: 999, runAsNonRoot: true}`|
+| `securityContext.enabled` | Enable security context | `true` |
+| `securityContext.fsGroup` | Group ID for the container | `999` |
+| `securityContext.runAsUser` | User ID for the container | `999` |
+| `securityContext.runAsNonRoot` | Require the container to run as a non-root user | `true` |
| `resources` | Pod resource requests and limits | `{}` |
| `persistentVolume.enabled` | If `true`, persistent volume claims are created | `true` |
| `persistentVolume.storageClass` | Persistent volume storage class | `` |
@@ -79,6 +82,7 @@ The following table lists the configurable parameters of the mongodb chart and t
| `auth.adminPassword` | MongoDB admin password | `` |
| `auth.metricsUser` | MongoDB clusterMonitor user | `` |
| `auth.metricsPassword` | MongoDB clusterMonitor password | `` |
+| `auth.existingMetricsSecret` | If set, an existing secret with this name is used for the metrics user | `` |
| `auth.existingAdminSecret` | If set, and existing secret with this name is used for the admin user | `` |
| `serviceAnnotations` | Annotations to be added to the service | `{}` |
| `configmap` | Content of the MongoDB config file | `` |
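
The reworked `securityContext` parameters and the `auth.existingMetricsSecret` key documented above would be set together roughly like this (the secret name is hypothetical; the numeric values are the chart defaults from this diff):

```yaml
securityContext:
  enabled: true
  runAsUser: 999
  fsGroup: 999
  runAsNonRoot: true

auth:
  enabled: true
  existingMetricsSecret: my-metrics-secret  # hypothetical pre-created secret
```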
diff --git a/stable/mongodb-replicaset/templates/mongodb-metrics-secret.yaml b/stable/mongodb-replicaset/templates/mongodb-metrics-secret.yaml
index 3a3f3f099074..c1484481ef32 100644
--- a/stable/mongodb-replicaset/templates/mongodb-metrics-secret.yaml
+++ b/stable/mongodb-replicaset/templates/mongodb-metrics-secret.yaml
@@ -1,4 +1,4 @@
-{{- if and (.Values.auth.enabled) (not .Values.auth.exisitingMetricsSecret) (.Values.metrics.enabled) -}}
+{{- if and (.Values.auth.enabled) (not .Values.auth.existingMetricsSecret) (.Values.metrics.enabled) -}}
apiVersion: v1
kind: Secret
metadata:
diff --git a/stable/mongodb-replicaset/templates/mongodb-statefulset.yaml b/stable/mongodb-replicaset/templates/mongodb-statefulset.yaml
index d05c3a1cbd7d..9be59c9c01dc 100644
--- a/stable/mongodb-replicaset/templates/mongodb-statefulset.yaml
+++ b/stable/mongodb-replicaset/templates/mongodb-statefulset.yaml
@@ -45,8 +45,12 @@ spec:
- name: {{ . }}
{{- end}}
{{- end }}
+ {{- if .Values.securityContext.enabled }}
securityContext:
-{{ toYaml .Values.securityContext | indent 8 }}
+ runAsUser: {{ .Values.securityContext.runAsUser }}
+ fsGroup: {{ .Values.securityContext.fsGroup }}
+ runAsNonRoot: {{ .Values.securityContext.runAsNonRoot }}
+ {{- end }}
terminationGracePeriodSeconds: {{ .Values.terminationGracePeriodSeconds }}
initContainers:
- name: copy-config
@@ -292,8 +296,8 @@ spec:
-mongodb.tls-cert=/work-dir/mongo.pem
{{- end }}
-test
- initialDelaySeconds: 30
- periodSeconds: 10
+ initialDelaySeconds: 30
+ periodSeconds: 10
{{ end }}
{{- with .Values.nodeSelector }}
nodeSelector:
diff --git a/stable/mongodb-replicaset/values.yaml b/stable/mongodb-replicaset/values.yaml
index 975e74cbb675..a5831d1e1e55 100644
--- a/stable/mongodb-replicaset/values.yaml
+++ b/stable/mongodb-replicaset/values.yaml
@@ -16,7 +16,7 @@ auth:
# key: keycontent
# existingKeySecret:
# existingAdminSecret:
- # exisitingMetricsSecret:
+ # existingMetricsSecret:
## Optionally specify an array of imagePullSecrets.
## Secrets must be manually created in the namespace.
@@ -66,6 +66,7 @@ metrics:
podAnnotations: {}
securityContext:
+ enabled: true
runAsUser: 999
fsGroup: 999
runAsNonRoot: true
diff --git a/stable/mongodb/Chart.yaml b/stable/mongodb/Chart.yaml
index e3ad40f8626f..70f8433232c9 100644
--- a/stable/mongodb/Chart.yaml
+++ b/stable/mongodb/Chart.yaml
@@ -1,6 +1,7 @@
+apiVersion: v1
name: mongodb
-version: 5.3.0
-appVersion: 4.0.3
+version: 5.17.1
+appVersion: 4.0.9
description: NoSQL document-oriented database that stores JSON-like documents with dynamic schemas, simplifying the integration of data in content-driven applications.
keywords:
- mongodb
diff --git a/stable/mongodb/README.md b/stable/mongodb/README.md
index 0fefc6cc7381..3ff1d3a8840f 100644
--- a/stable/mongodb/README.md
+++ b/stable/mongodb/README.md
@@ -12,7 +12,7 @@ $ helm install stable/mongodb
This chart bootstraps a [MongoDB](https://github.com/bitnami/bitnami-docker-mongodb) deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
-Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters.
+Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters. This chart has been tested to work with NGINX Ingress, cert-manager, fluentd and Prometheus on top of the [BKPR](https://kubeprod.io/).
## Prerequisites
@@ -48,12 +48,14 @@ The following table lists the configurable parameters of the MongoDB chart and t
| Parameter | Description | Default |
| -------------------------------------------------- | -------------------------------------------------------------------------------------------- | ------------------------------------------------------- |
| `global.imageRegistry` | Global Docker image registry | `nil` |
+| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) |
| `image.registry` | MongoDB image registry | `docker.io` |
| `image.repository` | MongoDB Image name | `bitnami/mongodb` |
| `image.tag` | MongoDB Image tag | `{VERSION}` |
| `image.pullPolicy` | Image pull policy | `Always` |
| `image.pullSecrets` | Specify docker-registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) |
| `image.debug` | Specify if debug logs should be enabled | `false` |
+| `clusterDomain` | Default Kubernetes cluster domain | `cluster.local` |
| `usePassword` | Enable password authentication | `true` |
| `existingSecret` | Existing secret with MongoDB credentials | `nil` |
| `mongodbRootPassword` | MongoDB admin password | `random alphanumeric string (10)` |
@@ -61,63 +63,93 @@ The following table lists the configurable parameters of the MongoDB chart and t
| `mongodbPassword` | MongoDB custom user password | `random alphanumeric string (10)` |
| `mongodbDatabase` | Database to create | `nil` |
| `mongodbEnableIPv6` | Switch to enable/disable IPv6 on MongoDB | `true` |
+| `mongodbDirectoryPerDB` | Switch to enable/disable DirectoryPerDB on MongoDB | `false` |
| `mongodbSystemLogVerbosity`                        | MongoDB system log verbosity level                                                           | `0`                                                     |
| `mongodbDisableSystemLog` | Whether to disable MongoDB system log or not | `false` |
-| `mongodbExtraFlags` | MongoDB additional command line flags | [] |
+| `mongodbExtraFlags` | MongoDB additional command line flags | `[]` |
| `service.annotations` | Kubernetes service annotations | `{}` |
| `service.type` | Kubernetes Service type | `ClusterIP` |
| `service.clusterIP` | Static clusterIP or None for headless services | `nil` |
| `service.nodePort` | Port to bind to for NodePort service type | `nil` |
+| `service.loadBalancerIP` | Static IP Address to use for LoadBalancer service type | `nil` |
+| `service.externalIPs` | External IP list to use with ClusterIP service type | `[]` |
| `port` | MongoDB service port | `27017` |
| `replicaSet.enabled` | Switch to enable/disable replica set configuration | `false` |
| `replicaSet.name` | Name of the replica set | `rs0` |
| `replicaSet.useHostnames` | Enable DNS hostnames in the replica set config | `true` |
-| `replicaSet.key` | Key used for authentication in the replica set | `nil` |
+| `replicaSet.key` | Key used for authentication in the replica set | `random alphanumeric string (10)` |
| `replicaSet.replicas.secondary` | Number of secondary nodes in the replica set | `1` |
| `replicaSet.replicas.arbiter` | Number of arbiter nodes in the replica set | `1` |
| `replicaSet.pdb.minAvailable.primary` | PDB for the MongoDB Primary nodes | `1` |
| `replicaSet.pdb.minAvailable.secondary` | PDB for the MongoDB Secondary nodes | `1` |
| `replicaSet.pdb.minAvailable.arbiter` | PDB for the MongoDB Arbiter nodes | `1` |
-| `podAnnotations` | Annotations to be added to pods | {} |
-| `podLabels` | Additional labels for the pod(s). | {} |
-| `resources` | Pod resources | {} |
+| `podAnnotations` | Annotations to be added to pods | `{}` |
+| `podLabels` | Additional labels for the pod(s). | `{}` |
+| `resources` | Pod resources | `{}` |
| `priorityClassName` | Pod priority class name | `` |
-| `nodeSelector` | Node labels for pod assignment | {} |
-| `affinity` | Affinity for pod assignment | {} |
-| `tolerations` | Toleration labels for pod assignment | {} |
+| `nodeSelector` | Node labels for pod assignment | `{}` |
+| `affinity` | Affinity for pod assignment | `{}` |
+| `tolerations` | Toleration labels for pod assignment | `{}` |
+| `updateStrategy` | Statefulsets update strategy policy | `RollingUpdate` |
| `securityContext.enabled` | Enable security context | `true` |
| `securityContext.fsGroup` | Group ID for the container | `1001` |
| `securityContext.runAsUser` | User ID for the container | `1001` |
| `persistence.enabled` | Use a PVC to persist data | `true` |
+| `persistence.mountPath` | Path to mount the volume at | `/bitnami/mongodb` |
+| `persistence.subPath` | Subdirectory of the volume to mount at | `""` |
| `persistence.storageClass` | Storage class of backing PVC | `nil` (uses alpha storage class annotation) |
-| `persistence.accessMode` | Use volume as ReadOnly or ReadWrite | `ReadWriteOnce` |
+| `persistence.accessModes` | Use volume as ReadOnly or ReadWrite | `[ReadWriteOnce]` |
| `persistence.size` | Size of data volume | `8Gi` |
| `persistence.annotations` | Persistent Volume annotations | `{}` |
| `persistence.existingClaim` | Name of an existing PVC to use (avoids creating one if this is given) | `nil` |
+| `extraInitContainers`                              | Additional init containers as a string to be passed to the `tpl` function                    | `{}`                                                    |
+| `livenessProbe.enabled` | Enable/disable the Liveness probe | `true` |
| `livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | `30` |
| `livenessProbe.periodSeconds` | How often to perform the probe | `10` |
| `livenessProbe.timeoutSeconds` | When the probe times out | `5` |
| `livenessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed. | `1` |
| `livenessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | `6` |
+| `readinessProbe.enabled` | Enable/disable the Readiness probe | `true` |
| `readinessProbe.initialDelaySeconds` | Delay before readiness probe is initiated | `5` |
| `readinessProbe.periodSeconds` | How often to perform the probe | `10` |
| `readinessProbe.timeoutSeconds` | When the probe times out | `5` |
| `readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | `6` |
| `readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed. | `1` |
+| `initConfigMap.name` | Custom config map with init scripts | `nil` |
| `configmap` | MongoDB configuration file to be used | `nil` |
+| `ingress.enabled` | Enables Ingress. Tested with nginx-ingress version `1.3.1` | `false` |
+| `ingress.annotations` | Ingress annotations | `{}` |
+| `ingress.labels` | Custom labels | `{}` |
+| `ingress.paths` | Ingress paths | `[/]` |
+| `ingress.hosts` | Ingress accepted hostnames | `[]` |
+| `ingress.tls` | Ingress TLS configuration | `[ { secretName: secret-tls, hosts: [] } ]` |
| `metrics.enabled` | Start a side-car prometheus exporter | `false` |
| `metrics.image.registry` | MongoDB exporter image registry | `docker.io` |
| `metrics.image.repository` | MongoDB exporter image name | `forekshub/percona-mongodb-exporter` |
| `metrics.image.tag` | MongoDB exporter image tag | `latest` |
| `metrics.image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `metrics.image.pullSecrets` | Specify docker-registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) |
-| `metrics.podAnnotations` | Additional annotations for Metrics exporter pod | {} |
-| `metrics.resources` | Exporter resource requests/limit | Memory: `256Mi`, CPU: `100m` |
+| `metrics.podAnnotations.prometheus.io/scrape`      | Annotation that enables Prometheus to scrape the metrics exporter pod                        | `true`                                                  |
+| `metrics.podAnnotations.prometheus.io/port`        | Annotation that sets the port Prometheus scrapes on the metrics exporter pod                 | `"9216"`                                                |
+| `metrics.extraArgs` | String with extra arguments for the MongoDB Exporter | `` |
+| `metrics.resources` | Exporter resource requests/limit | `{}` |
| `metrics.serviceMonitor.enabled` | Create ServiceMonitor Resource for scraping metrics using PrometheusOperator | `false` |
-| `metrics.serviceMonitor.additionalLabels` | Used to pass Labels that are required by the Installed Prometheus Operator | {} |
+| `metrics.serviceMonitor.additionalLabels` | Used to pass Labels that are required by the Installed Prometheus Operator | `{}` |
| `metrics.serviceMonitor.relabellings` | Specify Metric Relabellings to add to the scrape endpoint | `nil` |
-| `metrics.serviceMonitor.alerting.rules` | Define individual alerting rules as required | {} |
-| `metrics.serviceMonitor.alerting.additionalLabels` | Used to pass Labels that are required by the Installed Prometheus Operator | {} |
+| `metrics.serviceMonitor.alerting.rules` | Define individual alerting rules as required | `{}` |
+| `metrics.serviceMonitor.alerting.additionalLabels` | Used to pass Labels that are required by the Installed Prometheus Operator | `{}` |
+| `metrics.livenessProbe.enabled` | Enable/disable the Liveness Check of Prometheus metrics exporter | `false` |
+| `metrics.livenessProbe.initialDelaySeconds` | Initial Delay for Liveness Check of Prometheus metrics exporter | `15` |
+| `metrics.livenessProbe.periodSeconds` | How often to perform Liveness Check of Prometheus metrics exporter | `5` |
+| `metrics.livenessProbe.timeoutSeconds` | Timeout for Liveness Check of Prometheus metrics exporter | `5` |
+| `metrics.livenessProbe.failureThreshold` | Failure Threshold for Liveness Check of Prometheus metrics exporter | `3` |
+| `metrics.livenessProbe.successThreshold` | Success Threshold for Liveness Check of Prometheus metrics exporter | `1` |
+| `metrics.readinessProbe.enabled` | Enable/disable the Readiness Check of Prometheus metrics exporter | `false` |
+| `metrics.readinessProbe.initialDelaySeconds` | Initial Delay for Readiness Check of Prometheus metrics exporter | `5` |
+| `metrics.readinessProbe.periodSeconds` | How often to perform Readiness Check of Prometheus metrics exporter | `5` |
+| `metrics.readinessProbe.timeoutSeconds` | Timeout for Readiness Check of Prometheus metrics exporter | `1` |
+| `metrics.readinessProbe.failureThreshold` | Failure Threshold for Readiness Check of Prometheus metrics exporter | `3` |
+| `metrics.readinessProbe.successThreshold` | Success Threshold for Readiness Check of Prometheus metrics exporter | `1` |
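
Rather than passing many `--set` flags, the parameters in the table above can be collected into a YAML file and supplied with `-f` at install time. A minimal sketch touching several of the newly documented options (the values below are illustrative, not chart defaults):

```yaml
# values-override.yaml -- hypothetical override file for stable/mongodb
clusterDomain: cluster.local        # newly documented; default shown
mongodbDirectoryPerDB: true         # store each database in its own directory
updateStrategy:
  type: RollingUpdate
persistence:
  mountPath: /bitnami/mongodb
  subPath: ""
metrics:
  enabled: true
  livenessProbe:
    enabled: true
  readinessProbe:
    enabled: true
```

Install with `helm install -f values-override.yaml stable/mongodb`.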
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
@@ -170,6 +202,7 @@ Some characteristics of this chart are:
## Initialize a fresh instance
The [Bitnami MongoDB](https://github.com/bitnami/bitnami-docker-mongodb) image allows you to use your custom scripts to initialize a fresh instance. In order to execute the scripts, they must be located inside the chart folder `files/docker-entrypoint-initdb.d` so they can be consumed as a ConfigMap.
+Alternatively, you can create a custom ConfigMap containing your init scripts and reference it via the `initConfigMap` option (see the configuration table above for details).
The allowed extensions are `.sh`, `.js`, and `.json`.
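
As a sketch of the `initConfigMap` route, a pre-existing ConfigMap can hold the init scripts and be referenced from the chart values; the ConfigMap name and script contents below are illustrative:

```yaml
# Hypothetical ConfigMap with an init script; when referenced via
# initConfigMap.name, the chart mounts it at /docker-entrypoint-initdb.d
# instead of the chart-local files.
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-mongodb-init
data:
  create-collection.js: |
    db = db.getSiblingDB('my-database');
    db.createCollection('example');
```

Then install the chart with `--set initConfigMap.name=my-mongodb-init`.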
@@ -189,3 +222,18 @@ Use the workaround below to upgrade from versions previous to 5.0.0. The followi
```console
$ kubectl delete statefulset my-release-mongodb-arbiter my-release-mongodb-primary my-release-mongodb-secondary --cascade=false
```
+
+## Configure Ingress
+MongoDB can be exposed externally using the [NGINX Ingress Controller](https://github.com/kubernetes/ingress-nginx). To do so, it's necessary to:
+
+- Install the MongoDB chart setting the parameter `ingress.enabled=true`.
+- Create a ConfigMap mapping the external port to the internal service/port that should receive the requests (see https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/exposing-tcp-udp-services.md for more information).
+
+For instance, if you installed the MongoDB chart in the `default` namespace, you can install the [stable/nginx-ingress chart](https://github.com/helm/charts/tree/master/stable/nginx-ingress) with the `tcp` parameter set in its **values.yaml** as shown below:
+
+```yaml
+...
+
+tcp:
+ 27017: "default/mongodb:27017"
+```
diff --git a/stable/mongodb/templates/NOTES.txt b/stable/mongodb/templates/NOTES.txt
index 1f77ea9510a6..d2c3dc656ebe 100644
--- a/stable/mongodb/templates/NOTES.txt
+++ b/stable/mongodb/templates/NOTES.txt
@@ -19,7 +19,7 @@
MongoDB can be accessed via port 27017 on the following DNS name from within your cluster:
- {{ template "mongodb.fullname" . }}.{{ .Release.Namespace }}.svc.cluster.local
+ {{ template "mongodb.fullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.clusterDomain }}
{{ if .Values.usePassword -}}
diff --git a/stable/mongodb/templates/_helpers.tpl b/stable/mongodb/templates/_helpers.tpl
index 00057a1cc79f..26f739d604af 100644
--- a/stable/mongodb/templates/_helpers.tpl
+++ b/stable/mongodb/templates/_helpers.tpl
@@ -79,10 +79,58 @@ Also, we can't use a single if because lazy evaluation is not an option
{{/*
Return the proper image name (for the metrics image)
*/}}
-{{- define "metrics.image" -}}
-{{- $registryName := .Values.metrics.image.registry -}}
+{{- define "mongodb.metrics.image" -}}
+{{- $registryName := .Values.metrics.image.registry -}}
{{- $repositoryName := .Values.metrics.image.repository -}}
{{- $tag := .Values.metrics.image.tag | toString -}}
-{{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+{{/*
+Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
+but Helm 2.9 and 2.10 doesn't support it, so we need to implement this if-else logic.
+Also, we can't use a single if because lazy evaluation is not an option
+*/}}
+{{- if .Values.global }}
+ {{- if .Values.global.imageRegistry }}
+ {{- printf "%s/%s:%s" .Values.global.imageRegistry $repositoryName $tag -}}
+ {{- else -}}
+ {{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+ {{- end -}}
+{{- else -}}
+ {{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+{{- end -}}
{{- end -}}
+
+{{/*
+Return the proper Docker Image Registry Secret Names
+*/}}
+{{- define "mongodb.imagePullSecrets" -}}
+{{/*
+Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
+but Helm 2.9 and 2.10 does not support it, so we need to implement this if-else logic.
+Also, we can not use a single if because lazy evaluation is not an option
+*/}}
+{{- if .Values.global }}
+{{- if .Values.global.imagePullSecrets }}
+imagePullSecrets:
+{{- range .Values.global.imagePullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- else if or .Values.image.pullSecrets .Values.metrics.image.pullSecrets }}
+imagePullSecrets:
+{{- range .Values.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- range .Values.metrics.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- end -}}
+{{- else if or .Values.image.pullSecrets .Values.metrics.image.pullSecrets }}
+imagePullSecrets:
+{{- range .Values.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- range .Values.metrics.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- end -}}
+{{- end -}}
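
The `mongodb.imagePullSecrets` helper above gives `global.imagePullSecrets` precedence over the per-image lists; a values sketch exercising both branches (secret names are illustrative):

```yaml
global:
  imagePullSecrets:
    - shared-registry-secret    # when set, rendered for all pods; per-image lists are ignored
image:
  pullSecrets:
    - mongodb-pull-secret       # used only when no global secrets are defined
metrics:
  image:
    pullSecrets:
      - exporter-pull-secret    # likewise combined with image.pullSecrets only without global secrets
```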
diff --git a/stable/mongodb/templates/deployment-standalone.yaml b/stable/mongodb/templates/deployment-standalone.yaml
index df4e41206721..b505caef8644 100644
--- a/stable/mongodb/templates/deployment-standalone.yaml
+++ b/stable/mongodb/templates/deployment-standalone.yaml
@@ -38,7 +38,6 @@ spec:
{{- if .Values.securityContext.enabled }}
securityContext:
fsGroup: {{ .Values.securityContext.fsGroup }}
- runAsUser: {{ .Values.securityContext.runAsUser }}
{{- end }}
{{- if .Values.affinity }}
affinity:
@@ -52,22 +51,33 @@ spec:
tolerations:
{{ toYaml .Values.tolerations | indent 8 }}
{{- end }}
- {{- if .Values.image.pullSecrets }}
- imagePullSecrets:
- {{- range .Values.image.pullSecrets }}
- - name: {{ . }}
- {{- end}}
+{{- include "mongodb.imagePullSecrets" . | indent 6 }}
+ {{- if .Values.extraInitContainers }}
+ initContainers:
+{{ tpl .Values.extraInitContainers . | indent 6}}
{{- end }}
containers:
- name: {{ template "mongodb.fullname" . }}
image: {{ template "mongodb.image" . }}
imagePullPolicy: {{ .Values.image.pullPolicy | quote }}
+ {{- if .Values.securityContext.enabled }}
+ securityContext:
+ runAsNonRoot: true
+ runAsUser: {{ .Values.securityContext.runAsUser }}
+ {{- end }}
env:
{{- if .Values.image.debug}}
- name: NAMI_DEBUG
value: "1"
{{- end }}
{{- if .Values.usePassword }}
+ {{- if and .Values.mongodbUsername .Values.mongodbDatabase }}
+ - name: MONGODB_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ name: {{ if .Values.existingSecret }}{{ .Values.existingSecret }}{{- else }}{{ template "mongodb.fullname" . }}{{- end }}
+ key: mongodb-password
+ {{- end }}
- name: MONGODB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
@@ -78,13 +88,6 @@ spec:
- name: MONGODB_USERNAME
value: {{ .Values.mongodbUsername | quote }}
{{- end }}
- {{- if and .Values.mongodbUsername .Values.mongodbDatabase }}
- - name: MONGODB_PASSWORD
- valueFrom:
- secretKeyRef:
- name: {{ if .Values.existingSecret }}{{ .Values.existingSecret }}{{- else }}{{ template "mongodb.fullname" . }}{{- end }}
- key: mongodb-password
- {{- end }}
- name: MONGODB_SYSTEM_LOG_VERBOSITY
value: {{ .Values.mongodbSystemLogVerbosity | quote }}
- name: MONGODB_DISABLE_SYSTEM_LOG
@@ -103,6 +106,12 @@ spec:
{{- else }}
value: "no"
{{- end }}
+ - name: MONGODB_ENABLE_DIRECTORY_PER_DB
+ {{- if .Values.mongodbDirectoryPerDB }}
+ value: "yes"
+ {{- else }}
+ value: "no"
+ {{- end }}
{{- if .Values.mongodbExtraFlags }}
- name: MONGODB_EXTRA_FLAGS
value: {{ .Values.mongodbExtraFlags | join " " }}
@@ -138,8 +147,9 @@ spec:
{{- end }}
volumeMounts:
- name: data
- mountPath: /bitnami/mongodb
- {{- if (.Files.Glob "files/docker-entrypoint-initdb.d/*[sh|js]") }}
+ mountPath: {{ .Values.persistence.mountPath }}
+ subPath: {{ .Values.persistence.subPath }}
+ {{- if or (.Files.Glob "files/docker-entrypoint-initdb.d/*[sh|js|json]") (.Values.initConfigMap) }}
- name: custom-init-scripts
mountPath: /docker-entrypoint-initdb.d
{{- end }}
@@ -152,7 +162,7 @@ spec:
{{ toYaml .Values.resources | indent 10 }}
{{- if .Values.metrics.enabled }}
- name: metrics
- image: {{ template "metrics.image" . }}
+ image: {{ template "mongodb.metrics.image" . }}
imagePullPolicy: {{ .Values.metrics.image.pullPolicy | quote }}
env:
{{- if .Values.usePassword }}
@@ -161,34 +171,49 @@ spec:
secretKeyRef:
name: {{ if .Values.existingSecret }}{{ .Values.existingSecret }}{{- else }}{{ template "mongodb.fullname" . }}{{- end }}
key: mongodb-root-password
- command: [ 'sh', '-c', '/bin/mongodb_exporter --mongodb.uri mongodb://root:${MONGODB_ROOT_PASSWORD}@localhost:{{ .Values.service.port }}/admin' ]
+ command: [ 'sh', '-c', '/bin/mongodb_exporter --mongodb.uri mongodb://root:${MONGODB_ROOT_PASSWORD}@localhost:{{ .Values.service.port }}/admin {{ .Values.metrics.extraArgs }}' ]
{{- else }}
- command: [ 'sh', '-c', '/bin/mongodb_exporter --mongodb.uri mongodb://localhost:{{ .Values.service.port }}' ]
+ command: [ 'sh', '-c', '/bin/mongodb_exporter --mongodb.uri mongodb://localhost:{{ .Values.service.port }} {{ .Values.metrics.extraArgs }}' ]
{{- end }}
ports:
- name: metrics
containerPort: 9216
+ {{- if .Values.metrics.livenessProbe.enabled }}
livenessProbe:
httpGet:
path: /metrics
port: metrics
- initialDelaySeconds: 15
- timeoutSeconds: 5
+ initialDelaySeconds: {{ .Values.metrics.livenessProbe.initialDelaySeconds }}
+          periodSeconds: {{ .Values.metrics.livenessProbe.periodSeconds }}
+ timeoutSeconds: {{ .Values.metrics.livenessProbe.timeoutSeconds }}
+ failureThreshold: {{ .Values.metrics.livenessProbe.failureThreshold }}
+ successThreshold: {{ .Values.metrics.livenessProbe.successThreshold }}
+ {{- end }}
+ {{- if .Values.metrics.readinessProbe.enabled }}
readinessProbe:
httpGet:
path: /metrics
port: metrics
- initialDelaySeconds: 5
- timeoutSeconds: 1
+ initialDelaySeconds: {{ .Values.metrics.readinessProbe.initialDelaySeconds }}
+ periodSeconds: {{ .Values.metrics.readinessProbe.periodSeconds }}
+ timeoutSeconds: {{ .Values.metrics.readinessProbe.timeoutSeconds }}
+ failureThreshold: {{ .Values.metrics.readinessProbe.failureThreshold }}
+ successThreshold: {{ .Values.metrics.readinessProbe.successThreshold }}
+ {{- end }}
resources:
{{ toYaml .Values.metrics.resources | indent 10 }}
{{- end }}
volumes:
- {{- if (.Files.Glob "files/docker-entrypoint-initdb.d/*[sh|js]") }}
+ {{- if (.Files.Glob "files/docker-entrypoint-initdb.d/*[sh|js|json]") }}
- name: custom-init-scripts
configMap:
name: {{ template "mongodb.fullname" . }}-init-scripts
{{- end }}
+ {{- if (.Values.initConfigMap) }}
+ - name: custom-init-scripts
+ configMap:
+ name: {{ .Values.initConfigMap.name }}
+ {{- end }}
- name: data
{{- if .Values.persistence.enabled }}
persistentVolumeClaim:
diff --git a/stable/mongodb/templates/ingress.yaml b/stable/mongodb/templates/ingress.yaml
new file mode 100644
index 000000000000..97d42bb01e9a
--- /dev/null
+++ b/stable/mongodb/templates/ingress.yaml
@@ -0,0 +1,40 @@
+{{- if .Values.ingress.enabled -}}
+{{- $fullName := include "mongodb.fullname" . -}}
+{{- $ingressPaths := .Values.ingress.paths -}}
+apiVersion: extensions/v1beta1
+kind: Ingress
+metadata:
+ name: {{ $fullName }}
+ labels:
+ app.kubernetes.io/name: {{ include "mongodb.name" . }}
+ helm.sh/chart: {{ include "mongodb.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ {{- with .Values.ingress.annotations }}
+ annotations:
+{{- toYaml . | nindent 4 }}
+ {{- end }}
+spec:
+{{- if .Values.ingress.tls }}
+ tls:
+ {{- range .Values.ingress.tls }}
+ - hosts:
+ {{- range .hosts }}
+ - {{ . | quote }}
+ {{- end }}
+ secretName: {{ .secretName }}
+ {{- end }}
+{{- end }}
+ rules:
+ {{- range .Values.ingress.hosts }}
+ - host: {{ . | quote }}
+ http:
+ paths:
+ {{- range $ingressPaths }}
+ - path: {{ . }}
+ backend:
+ serviceName: {{ $fullName }}
+ servicePort: mongodb
+ {{- end }}
+ {{- end }}
+{{- end }}
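
A hedged values sketch that the new `ingress.yaml` template above would render (the hostname, annotation, and TLS secret name are illustrative):

```yaml
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
  labels: {}
  paths:
    - /
  hosts:
    - mongodb.example.com
  tls:
    - secretName: mongodb-tls
      hosts:
        - mongodb.example.com
```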
diff --git a/stable/mongodb/templates/initialization-configmap.yaml b/stable/mongodb/templates/initialization-configmap.yaml
index 840e77ccfadd..02da7dfbed93 100644
--- a/stable/mongodb/templates/initialization-configmap.yaml
+++ b/stable/mongodb/templates/initialization-configmap.yaml
@@ -1,4 +1,4 @@
-{{ if (.Files.Glob "files/docker-entrypoint-initdb.d/*[sh|js]") }}
+{{ if (.Files.Glob "files/docker-entrypoint-initdb.d/*[sh|js|json]") }}
apiVersion: v1
kind: ConfigMap
metadata:
@@ -9,5 +9,5 @@ metadata:
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
data:
-{{ (.Files.Glob "files/docker-entrypoint-initdb.d/*[sh|js]").AsConfig | indent 2 }}
-{{ end }}
\ No newline at end of file
+{{ tpl (.Files.Glob "files/docker-entrypoint-initdb.d/*[sh|js|json]").AsConfig . | indent 2 }}
+{{ end }}
diff --git a/stable/mongodb/templates/secrets.yaml b/stable/mongodb/templates/secrets.yaml
index ecbf1eb0940d..bf644cba9ef4 100644
--- a/stable/mongodb/templates/secrets.yaml
+++ b/stable/mongodb/templates/secrets.yaml
@@ -10,13 +10,11 @@ metadata:
heritage: "{{ .Release.Service }}"
type: Opaque
data:
- {{- if .Values.usePassword }}
{{- if .Values.mongodbRootPassword }}
mongodb-root-password: {{ .Values.mongodbRootPassword | b64enc | quote }}
{{- else }}
mongodb-root-password: {{ randAlphaNum 10 | b64enc | quote }}
{{- end }}
- {{- end }}
{{- if and .Values.mongodbUsername .Values.mongodbDatabase }}
{{- if .Values.mongodbPassword }}
mongodb-password: {{ .Values.mongodbPassword | b64enc | quote }}
diff --git a/stable/mongodb/templates/statefulset-arbiter-rs.yaml b/stable/mongodb/templates/statefulset-arbiter-rs.yaml
index ac4cc5f47af9..22c1950d45dd 100644
--- a/stable/mongodb/templates/statefulset-arbiter-rs.yaml
+++ b/stable/mongodb/templates/statefulset-arbiter-rs.yaml
@@ -16,6 +16,11 @@ spec:
component: arbiter
serviceName: {{ template "mongodb.fullname" . }}-headless
replicas: {{ .Values.replicaSet.replicas.arbiter }}
+ updateStrategy:
+ type: {{ .Values.updateStrategy.type }}
+ {{- if (eq "Recreate" .Values.updateStrategy.type) }}
+ rollingUpdate: null
+ {{- end }}
template:
metadata:
labels:
@@ -37,7 +42,6 @@ spec:
{{- if .Values.securityContext.enabled }}
securityContext:
fsGroup: {{ .Values.securityContext.fsGroup }}
- runAsUser: {{ .Values.securityContext.runAsUser }}
{{- end }}
{{- if .Values.affinity }}
affinity:
@@ -51,16 +55,20 @@ spec:
tolerations:
{{ toYaml .Values.tolerations | indent 8 }}
{{- end }}
- {{- if .Values.image.pullSecrets }}
- imagePullSecrets:
- {{- range .Values.image.pullSecrets }}
- - name: {{ . }}
- {{- end}}
+{{- include "mongodb.imagePullSecrets" . | indent 6 }}
+ {{- if .Values.extraInitContainers }}
+ initContainers:
+{{ tpl .Values.extraInitContainers . | indent 6}}
{{- end }}
containers:
- name: {{ template "mongodb.name" . }}-arbiter
image: {{ template "mongodb.image" . }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
+ {{- if .Values.securityContext.enabled }}
+ securityContext:
+ runAsNonRoot: true
+ runAsUser: {{ .Values.securityContext.runAsUser }}
+ {{- end }}
ports:
- containerPort: {{ .Values.service.port }}
name: mongodb
@@ -109,6 +117,12 @@ spec:
{{- else }}
value: "no"
{{- end }}
+ - name: MONGODB_ENABLE_DIRECTORY_PER_DB
+ {{- if .Values.mongodbDirectoryPerDB }}
+ value: "yes"
+ {{- else }}
+ value: "no"
+ {{- end }}
{{- if .Values.mongodbExtraFlags }}
- name: MONGODB_EXTRA_FLAGS
value: {{ .Values.mongodbExtraFlags | join " " }}
diff --git a/stable/mongodb/templates/statefulset-primary-rs.yaml b/stable/mongodb/templates/statefulset-primary-rs.yaml
index c24774c18f06..57fb792930aa 100644
--- a/stable/mongodb/templates/statefulset-primary-rs.yaml
+++ b/stable/mongodb/templates/statefulset-primary-rs.yaml
@@ -11,6 +11,11 @@ metadata:
spec:
serviceName: {{ template "mongodb.fullname" . }}-headless
replicas: 1
+ updateStrategy:
+ type: {{ .Values.updateStrategy.type }}
+ {{- if (eq "Recreate" .Values.updateStrategy.type) }}
+ rollingUpdate: null
+ {{- end }}
selector:
matchLabels:
app: {{ template "mongodb.name" . }}
@@ -42,7 +47,6 @@ spec:
{{- if .Values.securityContext.enabled }}
securityContext:
fsGroup: {{ .Values.securityContext.fsGroup }}
- runAsUser: {{ .Values.securityContext.runAsUser }}
{{- end }}
{{- if .Values.affinity }}
affinity:
@@ -56,16 +60,20 @@ spec:
tolerations:
{{ toYaml .Values.tolerations | indent 8 }}
{{- end }}
- {{- if .Values.image.pullSecrets }}
- imagePullSecrets:
- {{- range .Values.image.pullSecrets }}
- - name: {{ . }}
- {{- end}}
+{{- include "mongodb.imagePullSecrets" . | indent 6 }}
+ {{- if .Values.extraInitContainers }}
+ initContainers:
+{{ tpl .Values.extraInitContainers . | indent 6}}
{{- end }}
containers:
- name: {{ template "mongodb.name" . }}-primary
image: {{ template "mongodb.image" . }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
+ {{- if .Values.securityContext.enabled }}
+ securityContext:
+ runAsNonRoot: true
+ runAsUser: {{ .Values.securityContext.runAsUser }}
+ {{- end }}
ports:
- containerPort: {{ .Values.service.port }}
name: mongodb
@@ -103,7 +111,7 @@ spec:
value: {{ .Values.mongodbDatabase | quote }}
{{- end }}
{{- if .Values.usePassword }}
- {{- if or .Values.mongodbPassword .Values.existingSecret }}
+ {{- if and .Values.mongodbUsername .Values.mongodbDatabase }}
- name: MONGODB_PASSWORD
valueFrom:
secretKeyRef:
@@ -127,6 +135,12 @@ spec:
{{- else }}
value: "no"
{{- end }}
+ - name: MONGODB_ENABLE_DIRECTORY_PER_DB
+ {{- if .Values.mongodbDirectoryPerDB }}
+ value: "yes"
+ {{- else }}
+ value: "no"
+ {{- end }}
{{- if .Values.mongodbExtraFlags }}
- name: MONGODB_EXTRA_FLAGS
value: {{ .Values.mongodbExtraFlags | join " " }}
@@ -159,8 +173,9 @@ spec:
{{- end }}
volumeMounts:
- name: datadir
- mountPath: /bitnami/mongodb
- {{- if (.Files.Glob "files/docker-entrypoint-initdb.d/*[sh|js]") }}
+ mountPath: {{ .Values.persistence.mountPath }}
+ subPath: {{ .Values.persistence.subPath }}
+ {{- if or (.Files.Glob "files/docker-entrypoint-initdb.d/*[sh|js|json]") (.Values.initConfigMap) }}
- name: custom-init-scripts
mountPath: /docker-entrypoint-initdb.d
{{- end }}
@@ -173,7 +188,7 @@ spec:
{{ toYaml .Values.resources | indent 12 }}
{{- if .Values.metrics.enabled }}
- name: metrics
- image: {{ template "metrics.image" . }}
+ image: {{ template "mongodb.metrics.image" . }}
imagePullPolicy: {{ .Values.metrics.image.pullPolicy | quote }}
env:
{{- if .Values.usePassword }}
@@ -182,34 +197,49 @@ spec:
secretKeyRef:
name: {{ if .Values.existingSecret }}{{ .Values.existingSecret }}{{- else }}{{ template "mongodb.fullname" . }}{{- end }}
key: mongodb-root-password
- command: [ 'sh', '-c', '/bin/mongodb_exporter --mongodb.uri mongodb://root:${MONGODB_ROOT_PASSWORD}@localhost:{{ .Values.service.port }}/admin' ]
+ command: [ 'sh', '-c', '/bin/mongodb_exporter --mongodb.uri mongodb://root:${MONGODB_ROOT_PASSWORD}@localhost:{{ .Values.service.port }}/admin {{ .Values.metrics.extraArgs }}' ]
{{- else }}
- command: [ 'sh', '-c', '/bin/mongodb_exporter --mongodb.uri mongodb://localhost:{{ .Values.service.port }}' ]
+ command: [ 'sh', '-c', '/bin/mongodb_exporter --mongodb.uri mongodb://localhost:{{ .Values.service.port }} {{ .Values.metrics.extraArgs }}' ]
{{- end }}
ports:
- name: metrics
containerPort: 9216
+ {{- if .Values.metrics.livenessProbe.enabled }}
livenessProbe:
httpGet:
path: /metrics
port: metrics
- initialDelaySeconds: 15
- timeoutSeconds: 5
+ initialDelaySeconds: {{ .Values.metrics.livenessProbe.initialDelaySeconds }}
+ periodSeconds: {{ .Values.metrics.livenessProbe.periodSeconds }}
+ timeoutSeconds: {{ .Values.metrics.livenessProbe.timeoutSeconds }}
+ failureThreshold: {{ .Values.metrics.livenessProbe.failureThreshold }}
+ successThreshold: {{ .Values.metrics.livenessProbe.successThreshold }}
+ {{- end }}
+ {{- if .Values.metrics.readinessProbe.enabled }}
readinessProbe:
httpGet:
path: /metrics
port: metrics
- initialDelaySeconds: 5
- timeoutSeconds: 1
+ initialDelaySeconds: {{ .Values.metrics.readinessProbe.initialDelaySeconds }}
+ periodSeconds: {{ .Values.metrics.readinessProbe.periodSeconds }}
+ timeoutSeconds: {{ .Values.metrics.readinessProbe.timeoutSeconds }}
+ failureThreshold: {{ .Values.metrics.readinessProbe.failureThreshold }}
+ successThreshold: {{ .Values.metrics.readinessProbe.successThreshold }}
+ {{- end }}
resources:
{{ toYaml .Values.metrics.resources | indent 12 }}
{{- end }}
volumes:
- {{- if (.Files.Glob "files/docker-entrypoint-initdb.d/*[sh|js]") }}
+ {{- if (.Files.Glob "files/docker-entrypoint-initdb.d/*[sh|js|json]") }}
- name: custom-init-scripts
configMap:
name: {{ template "mongodb.fullname" . }}-init-scripts
{{- end }}
+ {{- if (.Values.initConfigMap) }}
+ - name: custom-init-scripts
+ configMap:
+ name: {{ .Values.initConfigMap.name }}
+ {{- end }}
{{- if .Values.configmap }}
- name: config
configMap:
diff --git a/stable/mongodb/templates/statefulset-secondary-rs.yaml b/stable/mongodb/templates/statefulset-secondary-rs.yaml
index 1220c4c5402b..ce5ef57d878a 100644
--- a/stable/mongodb/templates/statefulset-secondary-rs.yaml
+++ b/stable/mongodb/templates/statefulset-secondary-rs.yaml
@@ -17,6 +17,11 @@ spec:
podManagementPolicy: "Parallel"
serviceName: {{ template "mongodb.fullname" . }}-headless
replicas: {{ .Values.replicaSet.replicas.secondary }}
+ updateStrategy:
+ type: {{ .Values.updateStrategy.type }}
+ {{- if (eq "Recreate" .Values.updateStrategy.type) }}
+ rollingUpdate: null
+ {{- end }}
template:
metadata:
labels:
@@ -43,7 +48,6 @@ spec:
{{- if .Values.securityContext.enabled }}
securityContext:
fsGroup: {{ .Values.securityContext.fsGroup }}
- runAsUser: {{ .Values.securityContext.runAsUser }}
{{- end }}
{{- if .Values.affinity }}
affinity:
@@ -57,16 +61,20 @@ spec:
tolerations:
{{ toYaml .Values.tolerations | indent 8 }}
{{- end }}
- {{- if .Values.image.pullSecrets }}
- imagePullSecrets:
- {{- range .Values.image.pullSecrets }}
- - name: {{ . }}
- {{- end}}
+{{- include "mongodb.imagePullSecrets" . | indent 6 }}
+ {{- if .Values.extraInitContainers }}
+ initContainers:
+{{ tpl .Values.extraInitContainers . | indent 6}}
{{- end }}
containers:
- name: {{ template "mongodb.name" . }}-secondary
image: {{ template "mongodb.image" . }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
+ {{- if .Values.securityContext.enabled }}
+ securityContext:
+ runAsNonRoot: true
+ runAsUser: {{ .Values.securityContext.runAsUser }}
+ {{- end }}
ports:
- containerPort: {{ .Values.service.port }}
name: mongodb
@@ -115,6 +123,12 @@ spec:
{{- else }}
value: "no"
{{- end }}
+ - name: MONGODB_ENABLE_DIRECTORY_PER_DB
+ {{- if .Values.mongodbDirectoryPerDB }}
+ value: "yes"
+ {{- else }}
+ value: "no"
+ {{- end }}
{{- if .Values.mongodbExtraFlags }}
- name: MONGODB_EXTRA_FLAGS
value: {{ .Values.mongodbExtraFlags | join " " }}
@@ -147,7 +161,8 @@ spec:
{{- end }}
volumeMounts:
- name: datadir
- mountPath: /bitnami/mongodb
+ mountPath: {{ .Values.persistence.mountPath }}
+ subPath: {{ .Values.persistence.subPath }}
{{- if .Values.configmap }}
- name: config
mountPath: /opt/bitnami/mongodb/conf/mongodb.conf
@@ -157,7 +172,7 @@ spec:
{{ toYaml .Values.resources | indent 12 }}
{{- if .Values.metrics.enabled }}
- name: metrics
- image: {{ template "metrics.image" . }}
+ image: {{ template "mongodb.metrics.image" . }}
imagePullPolicy: {{ .Values.metrics.image.pullPolicy | quote }}
env:
{{- if .Values.usePassword }}
@@ -166,25 +181,35 @@ spec:
secretKeyRef:
name: {{ if .Values.existingSecret }}{{ .Values.existingSecret }}{{- else }}{{ template "mongodb.fullname" . }}{{- end }}
key: mongodb-root-password
- command: [ 'sh', '-c', '/bin/mongodb_exporter --mongodb.uri mongodb://root:${MONGODB_ROOT_PASSWORD}@localhost:{{ .Values.service.port }}/admin' ]
+ command: [ 'sh', '-c', '/bin/mongodb_exporter --mongodb.uri mongodb://root:${MONGODB_ROOT_PASSWORD}@localhost:{{ .Values.service.port }}/admin {{ .Values.metrics.extraArgs }}' ]
{{- else }}
- command: [ 'sh', '-c', '/bin/mongodb_exporter --mongodb.uri mongodb://localhost:{{ .Values.service.port }}' ]
+ command: [ 'sh', '-c', '/bin/mongodb_exporter --mongodb.uri mongodb://localhost:{{ .Values.service.port }} {{ .Values.metrics.extraArgs }}' ]
{{- end }}
ports:
- name: metrics
containerPort: 9216
+ {{- if .Values.metrics.livenessProbe.enabled }}
livenessProbe:
httpGet:
path: /metrics
port: metrics
- initialDelaySeconds: 15
- timeoutSeconds: 5
+ initialDelaySeconds: {{ .Values.metrics.livenessProbe.initialDelaySeconds }}
+ periodSeconds: {{ .Values.metrics.livenessProbe.periodSeconds }}
+ timeoutSeconds: {{ .Values.metrics.livenessProbe.timeoutSeconds }}
+ failureThreshold: {{ .Values.metrics.livenessProbe.failureThreshold }}
+ successThreshold: {{ .Values.metrics.livenessProbe.successThreshold }}
+ {{- end }}
+ {{- if .Values.metrics.readinessProbe.enabled }}
readinessProbe:
httpGet:
path: /metrics
port: metrics
- initialDelaySeconds: 5
- timeoutSeconds: 1
+ initialDelaySeconds: {{ .Values.metrics.readinessProbe.initialDelaySeconds }}
+ periodSeconds: {{ .Values.metrics.readinessProbe.periodSeconds }}
+ timeoutSeconds: {{ .Values.metrics.readinessProbe.timeoutSeconds }}
+ failureThreshold: {{ .Values.metrics.readinessProbe.failureThreshold }}
+ successThreshold: {{ .Values.metrics.readinessProbe.successThreshold }}
+ {{- end }}
resources:
{{ toYaml .Values.metrics.resources | indent 12 }}
{{- end }}
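For reviewers, a minimal values override exercising the new secondary-statefulset branches above (the init-container name and command are illustrative, not part of the chart):

```yaml
updateStrategy:
  type: RollingUpdate          # "Recreate" nulls out rollingUpdate in the template
mongodbDirectoryPerDB: true    # renders MONGODB_ENABLE_DIRECTORY_PER_DB="yes"
securityContext:
  enabled: true
  fsGroup: 1001
  runAsUser: 1001              # now applied per-container, no longer pod-wide
extraInitContainers: |
  - name: wait-a-bit           # illustrative
    image: busybox
    command: ['sh', '-c', 'sleep 5']
```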
diff --git a/stable/mongodb/templates/svc-primary-rs.yaml b/stable/mongodb/templates/svc-primary-rs.yaml
index ccc73ecb353f..1c300702bacd 100644
--- a/stable/mongodb/templates/svc-primary-rs.yaml
+++ b/stable/mongodb/templates/svc-primary-rs.yaml
@@ -14,9 +14,17 @@ metadata:
{{- end }}
spec:
type: {{ .Values.service.type }}
- {{- if and (eq .Values.service.type "ClusterIP") .Values.service.clusterIP }}
- clusterIP: {{ .Values.service.clusterIP }}
- {{- end }}
+ {{- if and (eq .Values.service.type "ClusterIP") .Values.service.clusterIP }}
+ clusterIP: {{ .Values.service.clusterIP }}
+ {{- end }}
+ {{- if and (eq .Values.service.type "LoadBalancer") .Values.service.loadBalancerIP }}
+ loadBalancerIP: {{ .Values.service.loadBalancerIP }}
+ {{- end }}
+ {{- if .Values.service.externalIPs }}
+ externalIPs:
+ {{ toYaml .Values.service.externalIPs | indent 4 }}
+ {{- end }}
+
ports:
- name: mongodb
port: 27017
diff --git a/stable/mongodb/templates/svc-standalone.yaml b/stable/mongodb/templates/svc-standalone.yaml
index 55e8a351a702..d31c1f965996 100644
--- a/stable/mongodb/templates/svc-standalone.yaml
+++ b/stable/mongodb/templates/svc-standalone.yaml
@@ -17,6 +17,14 @@ spec:
{{- if and (eq .Values.service.type "ClusterIP") .Values.service.clusterIP }}
clusterIP: {{ .Values.service.clusterIP }}
{{- end }}
+ {{- if and (eq .Values.service.type "LoadBalancer") .Values.service.loadBalancerIP }}
+ loadBalancerIP: {{ .Values.service.loadBalancerIP }}
+ {{- end }}
+ {{- if .Values.service.externalIPs }}
+ externalIPs:
+ {{ toYaml .Values.service.externalIPs | indent 4 }}
+ {{- end }}
+
ports:
- name: mongodb
port: 27017
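The new Service fields added to both the primary and standalone Services can be set like this (addresses are documentation-range examples):

```yaml
service:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.10   # only rendered when type is LoadBalancer
  # externalIPs is rendered for any service type:
  # externalIPs:
  #   - 198.51.100.7
```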
diff --git a/stable/mongodb/values-production.yaml b/stable/mongodb/values-production.yaml
index 35cd03dc884f..d6db78956da8 100644
--- a/stable/mongodb/values-production.yaml
+++ b/stable/mongodb/values-production.yaml
@@ -1,8 +1,11 @@
-## Global Docker image registry
-## Please, note that this will override the image registry for all the images, including dependencies, configured to use the global value
+## Global Docker image parameters
+## Please note that this will override the image parameters for all images, including dependencies, configured to use the global value
+## Current available global Docker image parameters: imageRegistry and imagePullSecrets
##
# global:
-# imageRegistry:
+# imageRegistry: myRegistryName
+# imagePullSecrets:
+# - myRegistryKeySecretName
image:
## Bitnami MongoDB registry
@@ -14,7 +17,7 @@ image:
## Bitnami MongoDB image tag
## ref: https://hub.docker.com/r/bitnami/mongodb/tags/
##
- tag: 4.0.3
+ tag: 4.0.9
## Specify a imagePullPolicy
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
@@ -25,7 +28,7 @@ image:
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
- # - myRegistrKeySecretName
+ # - myRegistryKeySecretName
## Set to true if you would like to see extra information on logs
## It turns NAMI debugging in minideb
@@ -55,6 +58,11 @@ usePassword: true
##
mongodbEnableIPv6: true
+## Whether to enable or disable DirectoryPerDB on MongoDB
+## ref: https://github.com/bitnami/bitnami-docker-mongodb/blob/master/README.md#enabling/disabling-directoryperdb
+##
+mongodbDirectoryPerDB: false
+
## MongoDB System Log configuration
## ref: https://github.com/bitnami/bitnami-docker-mongodb#configuring-system-log-verbosity-level
##
@@ -92,6 +100,15 @@ service:
##
# nodePort:
+ ## Specify the externalIPs value for the ClusterIP service type.
+ ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#external-ips
+ # externalIPs: []
+
+ ## Specify the loadBalancerIP value for LoadBalancer service types.
+ ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
+ ##
+ # loadBalancerIP:
+
## Setting up replication
## ref: https://github.com/bitnami/bitnami-docker-mongodb#setting-up-a-replication
@@ -155,6 +172,11 @@ affinity: {}
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
tolerations: []
+## updateStrategy for MongoDB Primary, Secondary and Arbiter statefulsets
+## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
+updateStrategy:
+ type: RollingUpdate
+
## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
@@ -163,8 +185,19 @@ persistence:
## A manually managed Persistent Volume and Claim
## Requires persistence.enabled: true
## If defined, PVC must be created manually before volume will be bound
+ ##
# existingClaim:
+ ## The path the volume will be mounted at, useful when using different
+ ## MongoDB images.
+ ##
+ mountPath: /bitnami/mongodb
+
+ ## The subdirectory of the volume to mount to, useful in dev environments
+ ## and when sharing one PV across multiple services.
+ ##
+ subPath: ""
+
## mongodb data Persistent Volume Storage Class
## If defined, storageClassName:
## If set to "-", storageClassName: "", which disables dynamic provisioning
@@ -178,6 +211,28 @@ persistence:
size: 8Gi
annotations: {}
+# Expose mongodb via ingress. This is possible if using nginx-ingress
+# https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/
+ingress:
+ enabled: false
+ annotations: {}
+ labels: {}
+ paths:
+ - /
+ hosts: []
+ tls:
+ - secretName: secret-tls
+ hosts: []
+
+## Configure the options for init containers to be run before the main app containers
+## are started. All init containers are run sequentially and must exit without errors
+## for the next one to be started.
+## ref: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
+# extraInitContainers: |
+# - name: do-something
+# image: busybox
+# command: ['do', 'something']
+
## Configure extra options for liveness and readiness probes
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes)
livenessProbe:
@@ -195,6 +250,10 @@ readinessProbe:
failureThreshold: 6
successThreshold: 1
+# Define a custom ConfigMap with init scripts
+initConfigMap: {}
+# name: "init-config-map"
+
# Entries for the MongoDB config file
configmap:
# # Where and how to store data.
@@ -246,13 +305,34 @@ metrics:
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
- # - myRegistrKeySecretName
+ # - myRegistryKeySecretName
+
+ ## String with extra arguments to the metrics exporter
+ ## ref: https://github.com/dcu/mongodb_exporter/blob/master/mongodb_exporter.go
+ extraArgs: ""
## Metrics exporter resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
# resources: {}
+ ## Metrics exporter liveness and readiness probes
+ ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes)
+ livenessProbe:
+ enabled: true
+ initialDelaySeconds: 15
+ periodSeconds: 5
+ timeoutSeconds: 5
+ failureThreshold: 3
+ successThreshold: 1
+ readinessProbe:
+ enabled: true
+ initialDelaySeconds: 5
+ periodSeconds: 5
+ timeoutSeconds: 1
+ failureThreshold: 3
+ successThreshold: 1
+
## Metrics exporter pod Annotation
podAnnotations:
prometheus.io/scrape: "true"
diff --git a/stable/mongodb/values.yaml b/stable/mongodb/values.yaml
index 6e6eb0a24120..a2dfac1e3a16 100644
--- a/stable/mongodb/values.yaml
+++ b/stable/mongodb/values.yaml
@@ -1,8 +1,11 @@
-## Global Docker image registry
-## Please, note that this will override the image registry for all the images, including dependencies, configured to use the global value
+## Global Docker image parameters
+## Please note that this will override the image parameters for all images, including dependencies, configured to use the global value
+## Current available global Docker image parameters: imageRegistry and imagePullSecrets
##
# global:
-# imageRegistry:
+# imageRegistry: myRegistryName
+# imagePullSecrets:
+# - myRegistryKeySecretName
image:
## Bitnami MongoDB registry
@@ -14,7 +17,7 @@ image:
## Bitnami MongoDB image tag
## ref: https://hub.docker.com/r/bitnami/mongodb/tags/
##
- tag: 4.0.3
+ tag: 4.0.9
## Specify a imagePullPolicy
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
@@ -25,7 +28,7 @@ image:
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
- # - myRegistrKeySecretName
+ # - myRegistryKeySecretName
## Set to true if you would like to see extra information on logs
## It turns NAMI debugging in minideb
@@ -50,12 +53,16 @@ usePassword: true
# mongodbPassword: password
# mongodbDatabase: database
-
## Whether enable/disable IPv6 on MongoDB
## ref: https://github.com/bitnami/bitnami-docker-mongodb/blob/master/README.md#enabling/disabling-ipv6
##
mongodbEnableIPv6: true
+## Whether to enable or disable DirectoryPerDB on MongoDB
+## ref: https://github.com/bitnami/bitnami-docker-mongodb/blob/master/README.md#enabling/disabling-directoryperdb
+##
+mongodbDirectoryPerDB: false
+
## MongoDB System Log configuration
## ref: https://github.com/bitnami/bitnami-docker-mongodb#configuring-system-log-verbosity-level
##
@@ -93,6 +100,15 @@ service:
##
# nodePort:
+ ## Specify the externalIPs value for the ClusterIP service type.
+ ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#external-ips
+ # externalIPs: []
+
+ ## Specify the loadBalancerIP value for LoadBalancer service types.
+ ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
+ ##
+ # loadBalancerIP:
+
## Setting up replication
## ref: https://github.com/bitnami/bitnami-docker-mongodb#setting-up-a-replication
#
@@ -155,6 +171,11 @@ affinity: {}
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
tolerations: []
+## updateStrategy for MongoDB Primary, Secondary and Arbiter statefulsets
+## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
+updateStrategy:
+ type: RollingUpdate
+
## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
@@ -163,8 +184,19 @@ persistence:
## A manually managed Persistent Volume and Claim
## Requires persistence.enabled: true
## If defined, PVC must be created manually before volume will be bound
+ ##
# existingClaim:
+ ## The path the volume will be mounted at, useful when using different
+ ## MongoDB images.
+ ##
+ mountPath: /bitnami/mongodb
+
+ ## The subdirectory of the volume to mount to, useful in dev environments
+ ## and when sharing one PV across multiple services.
+ ##
+ subPath: ""
+
## mongodb data Persistent Volume Storage Class
## If defined, storageClassName:
## If set to "-", storageClassName: "", which disables dynamic provisioning
@@ -178,6 +210,28 @@ persistence:
size: 8Gi
annotations: {}
+# Expose mongodb via ingress. This is possible if using nginx-ingress
+# https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/
+ingress:
+ enabled: false
+ annotations: {}
+ labels: {}
+ paths:
+ - /
+ hosts: []
+ tls:
+ - secretName: secret-tls
+ hosts: []
+
+## Configure the options for init containers to be run before the main app containers
+## are started. All init containers are run sequentially and must exit without errors
+## for the next one to be started.
+## ref: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
+# extraInitContainers: |
+# - name: do-something
+# image: busybox
+# command: ['do', 'something']
+
## Configure extra options for liveness and readiness probes
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes)
livenessProbe:
@@ -195,6 +249,10 @@ readinessProbe:
failureThreshold: 6
successThreshold: 1
+# Define a custom ConfigMap with init scripts
+initConfigMap: {}
+# name: "init-config-map"
+
# Entries for the MongoDB config file
configmap:
# # Where and how to store data.
@@ -246,13 +304,34 @@ metrics:
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
- # - myRegistrKeySecretName
+ # - myRegistryKeySecretName
+
+ ## String with extra arguments to the metrics exporter
+ ## ref: https://github.com/dcu/mongodb_exporter/blob/master/mongodb_exporter.go
+ extraArgs: ""
## Metrics exporter resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
# resources: {}
+ ## Metrics exporter liveness and readiness probes
+ ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes)
+ livenessProbe:
+ enabled: false
+ initialDelaySeconds: 15
+ periodSeconds: 5
+ timeoutSeconds: 5
+ failureThreshold: 3
+ successThreshold: 1
+ readinessProbe:
+ enabled: false
+ initialDelaySeconds: 5
+ periodSeconds: 5
+ timeoutSeconds: 1
+ failureThreshold: 3
+ successThreshold: 1
+
## Metrics exporter pod Annotation
podAnnotations:
prometheus.io/scrape: "true"
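A sketch of how the new `initConfigMap` value pairs with a user-managed ConfigMap (names and script content are illustrative). The chart only references the ConfigMap, so it must exist before install:

```yaml
# values override
initConfigMap:
  name: mongodb-init-scripts
```

```yaml
# user-managed ConfigMap, created separately from the chart
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb-init-scripts
data:
  seed.js: |
    db = db.getSiblingDB('mydb');
    db.seeded.insertOne({ ok: true });
```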
diff --git a/stable/moodle/Chart.yaml b/stable/moodle/Chart.yaml
index 93fddb9f6af0..b84529177c1b 100644
--- a/stable/moodle/Chart.yaml
+++ b/stable/moodle/Chart.yaml
@@ -1,6 +1,7 @@
+apiVersion: v1
name: moodle
-version: 4.0.4
-appVersion: 3.6.2
+version: 4.2.2
+appVersion: 3.7.0
description: Moodle is a learning platform designed to provide educators, administrators and learners with a single robust, secure and integrated system to create personalised learning environments
keywords:
- moodle
diff --git a/stable/moodle/README.md b/stable/moodle/README.md
index 1185169fbf5f..91629bce56ed 100644
--- a/stable/moodle/README.md
+++ b/stable/moodle/README.md
@@ -14,7 +14,7 @@ This chart bootstraps a [Moodle](https://github.com/bitnami/bitnami-docker-moodl
It also packages the [Bitnami MariaDB chart](https://github.com/kubernetes/charts/tree/master/stable/mariadb) which is required for bootstrapping a MariaDB deployment for the database requirements of the Moodle application.
-Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters.
+Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters. This chart has been tested to work with NGINX Ingress, cert-manager, fluentd and Prometheus on top of the [BKPR](https://kubeprod.io/).
## Prerequisites
@@ -50,6 +50,7 @@ The following table lists the configurable parameters of the Moodle chart and th
| Parameter | Description | Default |
|---------------------------------------|----------------------------------------------------------------------------------------------|-------------------------------------------------------- |
| `global.imageRegistry` | Global Docker image registry | `nil` |
+| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) |
| `image.registry` | Moodle image registry | `docker.io` |
| `image.repository` | Moodle Image name | `bitnami/moodle` |
| `image.tag` | Moodle Image tag | `{VERSION}` |
diff --git a/stable/moodle/templates/_helpers.tpl b/stable/moodle/templates/_helpers.tpl
index 79d9811d975c..ae6ee9a05d3c 100644
--- a/stable/moodle/templates/_helpers.tpl
+++ b/stable/moodle/templates/_helpers.tpl
@@ -64,9 +64,57 @@ Also, we can't use a single if because lazy evaluation is not an option
{{/*
Return the proper image name (for the metrics image)
*/}}
-{{- define "metrics.image" -}}
-{{- $registryName := .Values.metrics.image.registry -}}
+{{- define "moodle.metrics.image" -}}
+{{- $registryName := .Values.metrics.image.registry -}}
{{- $repositoryName := .Values.metrics.image.repository -}}
{{- $tag := .Values.metrics.image.tag | toString -}}
-{{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+{{/*
+Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
+but Helm 2.9 and 2.10 don't support it, so we need to implement this if-else logic.
+Also, we can't use a single if because lazy evaluation is not an option
+*/}}
+{{- if .Values.global }}
+ {{- if .Values.global.imageRegistry }}
+ {{- printf "%s/%s:%s" .Values.global.imageRegistry $repositoryName $tag -}}
+ {{- else -}}
+ {{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+ {{- end -}}
+{{- else -}}
+ {{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Return the proper Docker Image Registry Secret Names
+*/}}
+{{- define "moodle.imagePullSecrets" -}}
+{{/*
+Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
+but Helm 2.9 and 2.10 do not support it, so we need to implement this if-else logic.
+Also, we can not use a single if because lazy evaluation is not an option
+*/}}
+{{- if .Values.global }}
+{{- if .Values.global.imagePullSecrets }}
+imagePullSecrets:
+{{- range .Values.global.imagePullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- else if or .Values.image.pullSecrets .Values.metrics.image.pullSecrets }}
+imagePullSecrets:
+{{- range .Values.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- range .Values.metrics.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- end -}}
+{{- else if or .Values.image.pullSecrets .Values.metrics.image.pullSecrets }}
+imagePullSecrets:
+{{- range .Values.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- range .Values.metrics.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- end -}}
{{- end -}}
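The precedence encoded by the new `moodle.imagePullSecrets` helper — global secrets replace chart-level ones outright; otherwise the app and metrics secret lists are concatenated — can be modeled outside Go templates as follows. This is a simplified sketch (the real helper emits nothing at all, rather than an empty list, when no secrets are set), and the function name is illustrative:

```python
def image_pull_secrets(global_secrets, image_secrets, metrics_secrets):
    """Model the helper's precedence: a non-empty global list wins
    outright; otherwise chart-level lists are concatenated in order."""
    if global_secrets:
        return [{"name": s} for s in global_secrets]
    # Fall back to app-image secrets followed by metrics-image secrets.
    return [{"name": s} for s in (image_secrets or []) + (metrics_secrets or [])]
```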
diff --git a/stable/moodle/templates/deployment.yaml b/stable/moodle/templates/deployment.yaml
index 110607803e48..b8b3a1559b16 100644
--- a/stable/moodle/templates/deployment.yaml
+++ b/stable/moodle/templates/deployment.yaml
@@ -32,12 +32,7 @@ spec:
affinity:
{{ toYaml .Values.affinity | indent 8 }}
{{- end }}
- {{- if .Values.image.pullSecrets }}
- imagePullSecrets:
- {{- range .Values.image.pullSecrets }}
- - name: {{ . }}
- {{- end}}
- {{- end }}
+{{- include "moodle.imagePullSecrets" . | indent 6 }}
hostAliases:
- ip: "127.0.0.1"
hostnames:
@@ -134,7 +129,7 @@ spec:
mountPath: /bitnami
{{- if .Values.metrics.enabled }}
- name: metrics
- image: {{ template "metrics.image" . }}
+ image: {{ template "moodle.metrics.image" . }}
imagePullPolicy: {{ .Values.metrics.image.pullPolicy | quote }}
command: [ '/bin/apache_exporter', '-scrape_uri', 'http://status.localhost:80/server-status/?auto']
ports:
diff --git a/stable/moodle/values.yaml b/stable/moodle/values.yaml
index 755ca23587eb..d74a9f3af00e 100644
--- a/stable/moodle/values.yaml
+++ b/stable/moodle/values.yaml
@@ -1,8 +1,11 @@
-## Global Docker image registry
-## Please, note that this will override the image registry for all the images, including dependencies, configured to use the global value
+## Global Docker image parameters
+## Please note that this will override the image parameters for all images, including dependencies, configured to use the global value
+## Current available global Docker image parameters: imageRegistry and imagePullSecrets
##
# global:
-# imageRegistry:
+# imageRegistry: myRegistryName
+# imagePullSecrets:
+# - myRegistryKeySecretName
## Bitnami Moodle image version
## ref: https://hub.docker.com/r/bitnami/moodle/tags/
@@ -10,7 +13,7 @@
image:
registry: docker.io
repository: bitnami/moodle
- tag: 3.6.2
+ tag: 3.7.0
## Specify a imagePullPolicy
## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
@@ -21,7 +24,7 @@ image:
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
- # - myRegistrKeySecretName
+ # - myRegistryKeySecretName
## User of the application
## ref: https://github.com/bitnami/bitnami-docker-moodle#configuration
@@ -288,7 +291,7 @@ metrics:
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
- # - myRegistrKeySecretName
+ # - myRegistryKeySecretName
## Metrics exporter pod Annotation and Labels
podAnnotations:
prometheus.io/scrape: "true"
diff --git a/stable/mssql-linux/Chart.yaml b/stable/mssql-linux/Chart.yaml
index c9d522d24c4b..b20477481aaf 100644
--- a/stable/mssql-linux/Chart.yaml
+++ b/stable/mssql-linux/Chart.yaml
@@ -1,7 +1,7 @@
apiVersion: v1
description: SQL Server 2017 Linux Helm Chart
name: mssql-linux
-version: 0.6.5
+version: 0.7.0
appVersion: 14.0.3023.8
home: https://hub.docker.com/r/microsoft/mssql-server-linux/
icon: https://img-prod-cms-rt-microsoft-com.akamaized.net/cms/api/am/imageFileData/RE1I4Dx
diff --git a/stable/mssql-linux/README.md b/stable/mssql-linux/README.md
index 07900ea142e5..6cb7c741b19d 100644
--- a/stable/mssql-linux/README.md
+++ b/stable/mssql-linux/README.md
@@ -99,7 +99,11 @@ The configuration parameters in this section control the resources requested and
| service.type | Service Type | `ClusterIP` |
| service.port | Service Port | `1433` |
| service.annotations | Kubernetes service annotations | `{}` |
+| service.labels | Kubernetes service labels | `{}` |
| deployment.annotations | Kubernetes deployment annotations | `{}` |
+| deployment.labels | Kubernetes deployment labels | `{}` |
+| pod.annotations | Kubernetes pod annotations | `{}` |
+| pod.labels | Kubernetes pod labels | `{}` |
| collation | Default collation for SQL Server | `SQL_Latin1_General_CP1_CI_AS` |
| lcid | Default languages for SQL Server | `1033` |
| hadr | Enable Availability Group | `0` |
diff --git a/stable/mssql-linux/templates/deployment.yaml b/stable/mssql-linux/templates/deployment.yaml
index c72ca334fa8d..c07344cf6188 100644
--- a/stable/mssql-linux/templates/deployment.yaml
+++ b/stable/mssql-linux/templates/deployment.yaml
@@ -7,6 +7,9 @@ metadata:
chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
+{{- if .Values.deployment.labels }}
+{{ toYaml .Values.deployment.labels | indent 4 }}
+{{- end }}
{{- if .Values.deployment.annotations }}
annotations:
{{ toYaml .Values.deployment.annotations | indent 4 }}
@@ -22,6 +25,13 @@ spec:
labels:
app: {{ template "mssql.name" . }}
release: {{ .Release.Name }}
+{{- if .Values.pod.labels }}
+{{ toYaml .Values.pod.labels | indent 8 }}
+{{- end }}
+{{- if .Values.pod.annotations }}
+ annotations:
+{{ toYaml .Values.pod.annotations | indent 8 }}
+{{- end }}
spec:
containers:
- name: {{ .Chart.Name }}
diff --git a/stable/mssql-linux/templates/service.yaml b/stable/mssql-linux/templates/service.yaml
index 017f1936aa1b..74f34a525d1f 100644
--- a/stable/mssql-linux/templates/service.yaml
+++ b/stable/mssql-linux/templates/service.yaml
@@ -7,6 +7,9 @@ metadata:
chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
+{{- if .Values.service.labels }}
+{{ toYaml .Values.service.labels | indent 4 }}
+{{- end }}
{{- if .Values.service.annotations }}
annotations:
{{ toYaml .Values.service.annotations | indent 4 }}
diff --git a/stable/mssql-linux/values.yaml b/stable/mssql-linux/values.yaml
index 22882a57cc6e..fd42de1db3bf 100644
--- a/stable/mssql-linux/values.yaml
+++ b/stable/mssql-linux/values.yaml
@@ -22,8 +22,13 @@ service:
type: ClusterIP
port: 1433
annotations: {}
+ labels: {}
deployment:
annotations: {}
+ labels: {}
+pod:
+ annotations: {}
+ labels: {}
persistence:
enabled: true
# existingDataClaim:
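The new label/annotation hooks in the mssql-linux chart can be populated like so (values are illustrative):

```yaml
service:
  labels:
    team: data
deployment:
  labels:
    team: data
pod:
  annotations:
    prometheus.io/scrape: "false"
  labels:
    tier: database
```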
diff --git a/stable/mysql/Chart.yaml b/stable/mysql/Chart.yaml
index 705743e15149..2762714f5fde 100755
--- a/stable/mysql/Chart.yaml
+++ b/stable/mysql/Chart.yaml
@@ -1,5 +1,6 @@
+apiVersion: v1
name: mysql
-version: 0.13.3
+version: 1.1.1
appVersion: 5.7.14
description: Fast, reliable, scalable, and easy to use open-source relational database
system.
diff --git a/stable/mysql/README.md b/stable/mysql/README.md
index 059e7f1d4d51..066e0526fede 100755
--- a/stable/mysql/README.md
+++ b/stable/mysql/README.md
@@ -46,12 +46,13 @@ The following table lists the configurable parameters of the MySQL chart and the
| Parameter | Description | Default |
| -------------------------------------------- | -------------------------------------------------------------------------------------------- | ---------------------------------------------------- |
+| `initContainer.resources` | initContainer resource requests/limits | Memory: `10Mi`, CPU: `10m` |
| `image` | `mysql` image repository. | `mysql` |
| `imageTag` | `mysql` image tag. | `5.7.14` |
-| `busybox.image` | `busybox` image repository. | `busybox` |
-| `busybox.tag` | `busybox` image tag. | `1.29.3` |
-| `testFramework.image` | `test-framework` image repository. | `dduportal/bats` |
-| `testFramework.tag` | `test-framework` image tag. | `0.4.0` |
+| `busybox.image` | `busybox` image repository. | `busybox` |
+| `busybox.tag` | `busybox` image tag. | `1.29.3` |
+| `testFramework.image` | `test-framework` image repository. | `dduportal/bats` |
+| `testFramework.tag` | `test-framework` image tag. | `0.4.0` |
| `imagePullPolicy` | Image pull policy | `IfNotPresent` |
| `existingSecret` | Use Existing secret for Password details | `nil` |
| `extraVolumes` | Additional volumes as a string to be passed to the `tpl` function | |
@@ -73,12 +74,13 @@ The following table lists the configurable parameters of the MySQL chart and the
| `readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | 3 |
| `persistence.enabled` | Create a volume to store data | true |
| `persistence.size` | Size of persistent volume claim | 8Gi RW |
-| `persistence.storageClass` | Type of persistent volume claim | nil |
+| `persistence.storageClass` | Type of persistent volume claim | nil |
| `persistence.accessMode` | ReadWriteOnce or ReadOnly | ReadWriteOnce |
| `persistence.existingClaim` | Name of existing persistent volume | `nil` |
| `persistence.subPath` | Subdirectory of the volume to mount | `nil` |
-| `persistence.annotations` | Persistent Volume annotations | {} |
+| `persistence.annotations` | Persistent Volume annotations | {} |
| `nodeSelector` | Node labels for pod assignment | {} |
| `tolerations` | Pod taint tolerations for deployment | [] |
| `metrics.enabled` | Start a side-car prometheus exporter | `false` |
| `metrics.image` | Exporter image | `prom/mysqld-exporter` |
| `metrics.imageTag` | Exporter image | `v0.10.0` |
@@ -88,9 +90,17 @@ The following table lists the configurable parameters of the MySQL chart and the
| `metrics.livenessProbe.timeoutSeconds` | When the probe times out | 5 |
| `metrics.readinessProbe.initialDelaySeconds` | Delay before metrics readiness probe is initiated | 5 |
| `metrics.readinessProbe.timeoutSeconds` | When the probe times out | 1 |
+| `metrics.flags` | Additional flags for the mysql exporter to use | `[]` |
+| `metrics.serviceMonitor.enabled` | Set this to `true` to create ServiceMonitor for Prometheus operator | `false` |
+| `metrics.serviceMonitor.additionalLabels` | Additional labels that can be used so ServiceMonitor will be discovered by Prometheus | `{}` |
| `resources` | CPU/Memory resource requests/limits | Memory: `256Mi`, CPU: `100m` |
| `configurationFiles` | List of mysql configuration files | `nil` |
+| `configurationFilesPath` | Path of mysql configuration files | `/etc/mysql/conf.d/` |
+| `securityContext.enabled` | Enable security context (mysql pod) | `false` |
+| `securityContext.fsGroup` | Group ID for the container (mysql pod) | 999 |
+| `securityContext.runAsUser` | User ID for the container (mysql pod) | 999 |
| `service.annotations` | Kubernetes annotations for mysql | {} |
+| `service.loadBalancerIP` | LoadBalancer service IP | `""` |
| `ssl.enabled` | Setup and use SSL for MySQL connections | `false` |
| `ssl.secret` | Name of the secret containing the SSL certificates | mysql-ssl-certs |
| `ssl.certificates[0].name` | Name of the secret containing the SSL certificates | `nil` |
@@ -101,6 +111,7 @@ The following table lists the configurable parameters of the MySQL chart and the
| `initializationFiles` | List of SQL files which are run after the container started | `nil` |
| `timezone` | Container and mysqld timezone (TZ env) | `nil` (UTC depending on image) |
| `podAnnotations` | Map of annotations to add to the pods | `{}` |
+| `podLabels` | Map of labels to add to the pods | `{}` |
| `priorityClassName` | Set pod priorityClassName | `{}` |
Some of the parameters above map to the env variables defined in the [MySQL DockerHub image](https://hub.docker.com/_/mysql/).
diff --git a/stable/mysql/templates/configurationFiles-configmap.yaml b/stable/mysql/templates/configurationFiles-configmap.yaml
index 9fc5cb55b294..ebed8cc7b4e8 100644
--- a/stable/mysql/templates/configurationFiles-configmap.yaml
+++ b/stable/mysql/templates/configurationFiles-configmap.yaml
@@ -3,6 +3,7 @@ apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "mysql.fullname" . }}-configuration
+ namespace: {{ .Release.Namespace }}
data:
{{- range $key, $val := .Values.configurationFiles }}
{{ $key }}: |-
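A sketch of the `configurationFiles`/`configurationFilesPath` pairing these changes enable (file name and contents are illustrative). Each key is now mounted individually via `subPath` in deployment.yaml, so other files the image ships in the target directory survive:

```yaml
configurationFilesPath: /etc/mysql/conf.d/
configurationFiles:
  mysql_custom.cnf: |-
    [mysqld]
    skip-name-resolve
```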
diff --git a/stable/mysql/templates/deployment.yaml b/stable/mysql/templates/deployment.yaml
index 60e02b3f8c51..9ba3cf80e860 100644
--- a/stable/mysql/templates/deployment.yaml
+++ b/stable/mysql/templates/deployment.yaml
@@ -2,6 +2,7 @@ apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: {{ template "mysql.fullname" . }}
+ namespace: {{ .Release.Namespace }}
labels:
app: {{ template "mysql.fullname" . }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
@@ -12,6 +13,9 @@ spec:
metadata:
labels:
app: {{ template "mysql.fullname" . }}
+{{- with .Values.podLabels }}
+{{ toYaml . | indent 8 }}
+{{- end }}
{{- with .Values.podAnnotations }}
annotations:
{{ toYaml . | indent 8 }}
@@ -24,10 +28,17 @@ spec:
{{- if .Values.priorityClassName }}
priorityClassName: "{{ .Values.priorityClassName }}"
{{- end }}
+ {{- if .Values.securityContext.enabled }}
+ securityContext:
+ fsGroup: {{ .Values.securityContext.fsGroup }}
+ runAsUser: {{ .Values.securityContext.runAsUser }}
+ {{- end }}
initContainers:
- name: "remove-lost-found"
image: "{{ .Values.busybox.image}}:{{ .Values.busybox.tag }}"
imagePullPolicy: {{ .Values.imagePullPolicy | quote }}
+ resources:
+{{ toYaml .Values.initContainer.resources | indent 10 }}
command: ["rm", "-fr", "/var/lib/mysql/lost+found"]
volumeMounts:
- name: data
@@ -41,6 +52,10 @@ spec:
{{- if .Values.nodeSelector }}
nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
+ {{- end }}
+ {{- if .Values.tolerations }}
+ tolerations:
+{{ toYaml .Values.tolerations | indent 8 }}
{{- end }}
containers:
- name: {{ template "mysql.fullname" . }}
@@ -119,8 +134,11 @@ spec:
subPath: {{ .Values.persistence.subPath }}
{{- end }}
{{- if .Values.configurationFiles }}
+ {{- range $key, $val := .Values.configurationFiles }}
- name: configurations
- mountPath: /etc/mysql/conf.d
+ mountPath: {{ $.Values.configurationFilesPath }}{{ $key }}
+ subPath: {{ $key }}
+ {{- end -}}
{{- end }}
{{- if .Values.initializationFiles }}
- name: migrations
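With the hunk above, each key in `configurationFiles` is now mounted as its own file via `subPath`, under the path given by the new `configurationFilesPath` value (default `/etc/mysql/conf.d/`), instead of mounting the whole directory. A minimal values sketch (the file name and settings are illustrative):

```yaml
configurationFilesPath: /etc/mysql/conf.d/
configurationFiles:
  mysql.cnf: |-
    [mysqld]
    skip-name-resolve
```

Because only the named keys are mounted, other files shipped in the image's `conf.d` directory are no longer shadowed by an empty volume mount.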
@@ -138,7 +156,10 @@ spec:
image: "{{ .Values.metrics.image }}:{{ .Values.metrics.imageTag }}"
imagePullPolicy: {{ .Values.metrics.imagePullPolicy | quote }}
{{- if .Values.mysqlAllowEmptyPassword }}
- command: [ 'sh', '-c', 'DATA_SOURCE_NAME="root@(localhost:3306)/" /bin/mysqld_exporter' ]
+ command:
+ - 'sh'
+ - '-c'
+ - 'DATA_SOURCE_NAME="root@(localhost:3306)/" /bin/mysqld_exporter'
{{- else }}
env:
- name: MYSQL_ROOT_PASSWORD
@@ -146,7 +167,13 @@ spec:
secretKeyRef:
name: {{ template "mysql.secretName" . }}
key: mysql-root-password
- command: [ 'sh', '-c', 'DATA_SOURCE_NAME="root:$MYSQL_ROOT_PASSWORD@(localhost:3306)/" /bin/mysqld_exporter' ]
+ command:
+ - 'sh'
+ - '-c'
+ - 'DATA_SOURCE_NAME="root:$MYSQL_ROOT_PASSWORD@(localhost:3306)/" /bin/mysqld_exporter'
+ {{- end }}
+ {{- range $f := .Values.metrics.flags }}
+ - {{ $f | quote }}
{{- end }}
ports:
- name: metrics
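The exporter hunk above also appends each entry of `metrics.flags` as an extra argument to `mysqld_exporter`. For example (the flag shown is one of the exporter's collector flags; consult the mysqld_exporter documentation for the full list):

```yaml
metrics:
  enabled: true
  flags:
    - --collect.binlog_size
```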
diff --git a/stable/mysql/templates/initializationFiles-configmap.yaml b/stable/mysql/templates/initializationFiles-configmap.yaml
index eff3a7a63cf7..38c3795c765a 100644
--- a/stable/mysql/templates/initializationFiles-configmap.yaml
+++ b/stable/mysql/templates/initializationFiles-configmap.yaml
@@ -3,6 +3,7 @@ apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "mysql.fullname" . }}-initialization
+ namespace: {{ .Release.Namespace }}
data:
{{- range $key, $val := .Values.initializationFiles }}
{{ $key }}: |-
diff --git a/stable/mysql/templates/pvc.yaml b/stable/mysql/templates/pvc.yaml
index a0f34edb234b..39e9bf8e26f7 100644
--- a/stable/mysql/templates/pvc.yaml
+++ b/stable/mysql/templates/pvc.yaml
@@ -3,6 +3,7 @@ kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: {{ template "mysql.fullname" . }}
+ namespace: {{ .Release.Namespace }}
{{- with .Values.persistence.annotations }}
annotations:
{{ toYaml . | indent 4 }}
diff --git a/stable/mysql/templates/secrets.yaml b/stable/mysql/templates/secrets.yaml
index 558a705fdf5f..6bcd0ae8f413 100755
--- a/stable/mysql/templates/secrets.yaml
+++ b/stable/mysql/templates/secrets.yaml
@@ -3,6 +3,7 @@ apiVersion: v1
kind: Secret
metadata:
name: {{ template "mysql.fullname" . }}
+ namespace: {{ .Release.Namespace }}
labels:
app: {{ template "mysql.fullname" . }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
diff --git a/stable/mysql/templates/servicemonitor.yaml b/stable/mysql/templates/servicemonitor.yaml
new file mode 100644
index 000000000000..bd830be654dd
--- /dev/null
+++ b/stable/mysql/templates/servicemonitor.yaml
@@ -0,0 +1,26 @@
+{{- if and .Values.metrics.enabled .Values.metrics.serviceMonitor.enabled }}
+apiVersion: monitoring.coreos.com/v1
+kind: ServiceMonitor
+metadata:
+ name: {{ include "mysql.fullname" . }}
+ namespace: {{ .Release.Namespace }}
+ labels:
+ app: {{ template "mysql.fullname" . }}
+ chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ release: "{{ .Release.Name }}"
+ heritage: "{{ .Release.Service }}"
+ {{- if .Values.metrics.serviceMonitor.additionalLabels }}
+{{ toYaml .Values.metrics.serviceMonitor.additionalLabels | indent 4 }}
+ {{- end }}
+spec:
+ endpoints:
+ - port: metrics
+ interval: 30s
+ namespaceSelector:
+ matchNames:
+ - {{ .Release.Namespace }}
+ selector:
+ matchLabels:
+ app: {{ include "mysql.fullname" . }}
+ release: {{ .Release.Name }}
+{{- end }}
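The new ServiceMonitor is rendered only when both `metrics.enabled` and `metrics.serviceMonitor.enabled` are true, and it requires the Prometheus Operator CRDs to be present in the cluster. A sketch of the values involved (the `release: prometheus` label is an assumption; it must match your Prometheus Operator's `serviceMonitorSelector`):

```yaml
metrics:
  enabled: true
  serviceMonitor:
    enabled: true
    additionalLabels:
      release: prometheus
```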
diff --git a/stable/mysql/templates/svc.yaml b/stable/mysql/templates/svc.yaml
index 8cc9afc3b34d..b9687f2a9953 100644
--- a/stable/mysql/templates/svc.yaml
+++ b/stable/mysql/templates/svc.yaml
@@ -2,6 +2,7 @@ apiVersion: v1
kind: Service
metadata:
name: {{ template "mysql.fullname" . }}
+ namespace: {{ .Release.Namespace }}
labels:
app: {{ template "mysql.fullname" . }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
@@ -16,6 +17,9 @@ metadata:
{{- end }}
spec:
type: {{ .Values.service.type }}
+ {{- if (and (eq .Values.service.type "LoadBalancer") (not (empty .Values.service.loadBalancerIP))) }}
+ loadBalancerIP: {{ .Values.service.loadBalancerIP }}
+ {{- end }}
ports:
- name: mysql
port: {{ .Values.service.port }}
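Note that `loadBalancerIP` is only rendered when the service type is `LoadBalancer` and the value is non-empty, so existing `ClusterIP` installs are unaffected. For example (the IP is illustrative and must be one your cloud provider can actually allocate):

```yaml
service:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.10
```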
diff --git a/stable/mysql/values.yaml b/stable/mysql/values.yaml
index cc6c0209eaa0..eee552559be1 100644
--- a/stable/mysql/values.yaml
+++ b/stable/mysql/values.yaml
@@ -61,6 +61,11 @@ extraInitContainers: |
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
nodeSelector: {}
+## Tolerations for pod assignment
+## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
+##
+tolerations: []
+
livenessProbe:
initialDelaySeconds: 30
periodSeconds: 10
@@ -90,6 +95,12 @@ persistence:
size: 8Gi
annotations: {}
+## Security context
+securityContext:
+ enabled: false
+ runAsUser: 999
+ fsGroup: 999
+
## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
@@ -98,6 +109,9 @@ resources:
memory: 256Mi
cpu: 100m
+# Custom mysql configuration files path
+configurationFilesPath: /etc/mysql/conf.d/
+
# Custom mysql configuration files used to override default mysql settings
configurationFiles: {}
# mysql.cnf: |-
@@ -129,6 +143,10 @@ metrics:
readinessProbe:
initialDelaySeconds: 5
timeoutSeconds: 1
+ flags: []
+ serviceMonitor:
+ enabled: false
+ additionalLabels: {}
## Configure the service
## ref: http://kubernetes.io/docs/user-guide/services/
@@ -139,6 +157,7 @@ service:
type: ClusterIP
port: 3306
# nodePort: 32000
+ # loadBalancerIP:
ssl:
enabled: false
@@ -168,5 +187,15 @@ ssl:
# To be added to the database server pod(s)
podAnnotations: {}
+podLabels: {}
+
## Set pod priorityClassName
# priorityClassName: {}
+
+
+## Init container resources defaults
+initContainer:
+ resources:
+ requests:
+ memory: 10Mi
+ cpu: 10m
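The values added above gate the pod-level security context behind `securityContext.enabled` and give the `remove-lost-found` init container its own small resource requests. An override at install time might look like this (the numbers shown simply repeat the chart defaults):

```yaml
securityContext:
  enabled: true
  runAsUser: 999
  fsGroup: 999

initContainer:
  resources:
    requests:
      memory: 10Mi
      cpu: 10m
```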
diff --git a/stable/mysqldump/Chart.yaml b/stable/mysqldump/Chart.yaml
index af00f82949a4..01e13aaf952f 100644
--- a/stable/mysqldump/Chart.yaml
+++ b/stable/mysqldump/Chart.yaml
@@ -1,8 +1,8 @@
apiVersion: v1
-appVersion: 2.3.0
+appVersion: 2.4.0
description: A Helm chart to help backup MySQL databases using mysqldump
name: mysqldump
-version: 2.3.0
+version: 2.4.0
keywords:
- mysql
- mysqldump
diff --git a/stable/mysqldump/README.md b/stable/mysqldump/README.md
index 93e34f7115a4..14d8958d25b7 100644
--- a/stable/mysqldump/README.md
+++ b/stable/mysqldump/README.md
@@ -57,6 +57,7 @@ The following tables lists the configurable parameters of the mysqldump chart an
| schedule | crontab schedule to run on. set as `now` to run as a one time job | "0/5 \* \* \* \*" |
| options | options to pass onto MySQL | "--opt --single-transaction" |
| debug | print some extra debug logs during backup | false |
+| additionalSteps | extra shell steps to run after all backup jobs have completed | [] |
| successfulJobsHistoryLimit | number of successful jobs to remember | 5 |
| failedJobsHistoryLimit | number of failed jobs to remember | 5 |
| persistentVolumeClaim | existing Persistent Volume Claim to backup to, leave blank to create a new one | |
diff --git a/stable/mysqldump/templates/NOTES.txt b/stable/mysqldump/templates/NOTES.txt
index cff75de1de2a..22ad7e8937e8 100644
--- a/stable/mysqldump/templates/NOTES.txt
+++ b/stable/mysqldump/templates/NOTES.txt
@@ -10,7 +10,7 @@ $ kubectl get pods --selector=job-name={{ template "mysqldump.fullname" . }} --s
To see the logs from the backup job run:
-$ kubectl logs `kc get pods --selector=job-name=test-mysqldump --output=jsonpath={.items..metadata.name}`
+$ kubectl logs `kubectl get pods --selector=job-name=test-mysqldump --output=jsonpath={.items..metadata.name}`
mysqldump contents can be found in:
{{- if .Values.persistentVolumeClaim }}
@@ -19,7 +19,7 @@ $ kubectl get persistentvolumeclaim {{ .Values.persistentVolumeClaim }}
{{- if .Values.persistence.enabled }}
$ kubectl get persistentvolumeclaim {{ template "mysqldump.fullname" . }}
{{- else }}
-$ kubectl logs `kc get pods --selector=job-name=test-mysqldump --output=jsonpath={.items..metadata.name}`
+$ kubectl logs `kubectl get pods --selector=job-name=test-mysqldump --output=jsonpath={.items..metadata.name}`
{{- end -}}
{{- end }}
@@ -35,8 +35,8 @@ $ kubectl get jobs --selector=cronjob-name={{ template "mysqldump.fullname" . }}
To see the logs from the most recent backup job run:
-$ kubectl logs $(kc get pods --selector \
- job-name=$(kc get jobs --selector=cronjob-name={{ template "mysqldump.fullname" . }} \
+$ kubectl logs $(kubectl get pods --selector \
+ job-name=$(kubectl get jobs --selector=cronjob-name={{ template "mysqldump.fullname" . }} \
--output=jsonpath='{.items[-1:].metadata.name}') \
--output=jsonpath={.items..metadata.name})
@@ -47,7 +47,7 @@ $ kubectl get persistentvolumeclaim {{ .Values.persistentVolumeClaim }}
{{- if .Values.persistence.enabled }}
$ kubectl get persistentvolumeclaim {{ template "mysqldump.fullname" . }}
{{- else }}
-$ kubectl logs `kc get pods --selector=job-name=test-mysqldump --output=jsonpath={.items..metadata.name}`
+$ kubectl logs `kubectl get pods --selector=job-name=test-mysqldump --output=jsonpath={.items..metadata.name}`
{{- end -}}
{{- end }}
diff --git a/stable/mysqldump/templates/configmap.yaml b/stable/mysqldump/templates/configmap.yaml
index 0e73da5f3bb2..c36d43870ad3 100644
--- a/stable/mysqldump/templates/configmap.yaml
+++ b/stable/mysqldump/templates/configmap.yaml
@@ -55,7 +55,7 @@ data:
echo "Backing up single db ${MYSQL_DB}"
{{ if .Values.saveToDirectory }}mkdir -p "${BACKUP_DIR}"/"${MYSQL_DB}"{{ end }}
mysqldump ${MYSQL_OPTS} -h ${MYSQL_HOST} -P ${MYSQL_PORT} -u ${MYSQL_USERNAME}{{ if .Values.mysql.password }} -p${MYSQL_PASSWORD}{{ end }} --databases ${MYSQL_DB} | gzip > ${BACKUP_DIR}/{{ if .Values.saveToDirectory }}${MYSQL_DB}/{{ end }}${TIMESTAMP}_${MYSQL_DB}.sql.gz
-
+ rc=$?
{{ else if and (.Values.allDatabases.enabled) (eq .Values.allDatabases.singleBackupFile false)}}
for MYSQL_DB in $(mysql -h "${MYSQL_HOST}" -u ${MYSQL_USERNAME}{{ if .Values.mysql.password }} -p${MYSQL_PASSWORD}{{ end }} -B -N -e "SHOW DATABASES;"|egrep -v '^(information|performance)_schema$'); do
echo "Backing up db ${MYSQL_DB}"
@@ -103,6 +103,12 @@ data:
exit 1
fi
+ {{ if .Values.additionalSteps }}
+ {{- range .Values.additionalSteps }}
+ {{ . }}
+ {{- end }}
+ {{- end }}
+
{{ if .Values.debug }}
ls -alh ${BACKUP_DIR}
{{ end }}
diff --git a/stable/mysqldump/values.yaml b/stable/mysqldump/values.yaml
index e5b1e157798b..c0783461d32e 100644
--- a/stable/mysqldump/values.yaml
+++ b/stable/mysqldump/values.yaml
@@ -40,6 +40,14 @@ debug: false
successfulJobsHistoryLimit: 5
failedJobsHistoryLimit: 5
+# Additional steps for the mysqldump shell script.
+# These are inserted after all backup and upload jobs have completed successfully.
+# Use "${BACKUP_DIR}/${TIMESTAMP}_${MYSQL_DB}.sql.gz" as the dump file name.
+# See the examples below.
+additionalSteps: []
+# - gsutil cp "${BACKUP_DIR}/${TIMESTAMP}_${MYSQL_DB}.sql.gz" gs://mybucket/latest.sql.gz
+# - echo "latest sql dump updated"
+
## set persistentVolumeClaim to use a PVC that already exists.
## if set will override any settings under `persistence` otherwise
## if not set and `persistence.enabled` set to true, will create a PVC.
diff --git a/stable/nats/Chart.yaml b/stable/nats/Chart.yaml
index d5f9bf5eddc8..ed008053f7be 100644
--- a/stable/nats/Chart.yaml
+++ b/stable/nats/Chart.yaml
@@ -1,6 +1,7 @@
+apiVersion: v1
name: nats
-version: 2.0.4
-appVersion: 1.4.0
+version: 2.5.1
+appVersion: 1.4.1
description: An open-source, cloud-native messaging system
keywords:
- nats
diff --git a/stable/nats/README.md b/stable/nats/README.md
index 6a056ce68a59..f7c713782632 100644
--- a/stable/nats/README.md
+++ b/stable/nats/README.md
@@ -12,7 +12,7 @@ $ helm install stable/nats
This chart bootstraps a [NATS](https://github.com/bitnami/bitnami-docker-nats) deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
-Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters.
+Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters. This chart has been tested to work with NGINX Ingress, cert-manager, fluentd and Prometheus on top of the [BKPR](https://kubeprod.io/).
## Prerequisites
@@ -48,6 +48,7 @@ The following table lists the configurable parameters of the NATS chart and thei
| Parameter | Description | Default |
| ------------------------------------ | -------------------------------------------------------------------------------------------- | ------------------------------------------------------------- |
| `global.imageRegistry` | Global Docker image registry | `nil` |
+| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) |
| `image.registry` | NATS image registry | `docker.io` |
| `image.repository` | NATS Image name | `bitnami/nats` |
| `image.tag` | NATS Image tag | `{VERSION}` |
@@ -69,12 +70,14 @@ The following table lists the configurable parameters of the NATS chart and thei
| `maxPayload` | Max. payload | `nil` |
| `writeDeadline` | Duration the server can block on a socket write to a client | `nil` |
| `replicaCount` | Number of NATS nodes | `1` |
+| `resourceType` | NATS cluster resource type under Kubernetes (supported: `statefulset` or `deployment`) | `statefulset` |
| `securityContext.enabled` | Enable security context | `true` |
| `securityContext.fsGroup` | Group ID for the container | `1001` |
| `securityContext.runAsUser` | User ID for the container | `1001` |
| `statefulset.updateStrategy` | Statefulsets Update strategy | `OnDelete` |
| `statefulset.rollingUpdatePartition` | Partition for Rolling Update strategy | `nil` |
| `podLabels` | Additional labels to be added to pods | {} |
+| `priorityClassName` | Name of pod priority class | `nil` |
| `podAnnotations` | Annotations to be added to pods | {} |
| `nodeSelector` | Node labels for pod assignment | `nil` |
| `schedulerName` | Name of an alternate scheduler | `nil` |
diff --git a/stable/nats/templates/NOTES.txt b/stable/nats/templates/NOTES.txt
index 224df8fc756f..396e01de7dde 100644
--- a/stable/nats/templates/NOTES.txt
+++ b/stable/nats/templates/NOTES.txt
@@ -85,4 +85,7 @@ To access the Monitoring svc from outside the cluster, follow the steps below:
kubectl port-forward --namespace {{ .Release.Namespace }} {{ template "nats.fullname" . }}-0 {{ .Values.metrics.port }}:{{ .Values.metrics.port }}
4. Access NATS Prometheus metrics by opening the URL obtained in a browser.
+
{{- end }}
+
+{{- include "nats.validateValues" . -}}
diff --git a/stable/nats/templates/_helpers.tpl b/stable/nats/templates/_helpers.tpl
index 260985674686..6159ca095912 100644
--- a/stable/nats/templates/_helpers.tpl
+++ b/stable/nats/templates/_helpers.tpl
@@ -65,10 +65,15 @@ Return the appropriate apiVersion for networkpolicy.
{{/*
Return the proper image name (for the metrics image)
*/}}
-{{- define "metrics.image" -}}
-{{- $registryName := .Values.metrics.image.registry -}}
+{{- define "nats.metrics.image" -}}
+{{- $registryName := .Values.metrics.image.registry -}}
{{- $repositoryName := .Values.metrics.image.repository -}}
{{- $tag := .Values.metrics.image.tag | toString -}}
+{{/*
+Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
+but Helm 2.9 and 2.10 do not support it, so we need to implement this if-else logic.
+Also, we cannot use a single if because lazy evaluation is not an option
+*/}}
{{- if .Values.global }}
{{- if .Values.global.imageRegistry }}
{{- printf "%s/%s:%s" .Values.global.imageRegistry $repositoryName $tag -}}
@@ -79,3 +84,61 @@ Return the proper image name (for the metrics image)
{{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
{{- end -}}
{{- end -}}
+
+{{/*
+Return the proper Docker Image Registry Secret Names
+*/}}
+{{- define "nats.imagePullSecrets" -}}
+{{/*
+Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
+but Helm 2.9 and 2.10 do not support it, so we need to implement this if-else logic.
+Also, we cannot use a single if because lazy evaluation is not an option
+*/}}
+{{- if .Values.global }}
+{{- if .Values.global.imagePullSecrets }}
+imagePullSecrets:
+{{- range .Values.global.imagePullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- else if or .Values.image.pullSecrets .Values.metrics.image.pullSecrets }}
+imagePullSecrets:
+{{- range .Values.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- range .Values.metrics.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- end -}}
+{{- else if or .Values.image.pullSecrets .Values.metrics.image.pullSecrets }}
+imagePullSecrets:
+{{- range .Values.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- range .Values.metrics.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Compile all warnings into a single message, and call fail.
+*/}}
+{{- define "nats.validateValues" -}}
+{{- $messages := list -}}
+{{- $messages := append $messages (include "nats.validateValues.resourceType" .) -}}
+{{- $messages := without $messages "" -}}
+{{- $message := join "\n" $messages -}}
+
+{{- if $message -}}
+{{- printf "\nVALUES VALIDATION:\n%s" $message | fail -}}
+{{- end -}}
+{{- end -}}
+
+{{/* Validate values of NATS - must provide a valid resourceType ("deployment" or "statefulset") */}}
+{{- define "nats.validateValues.resourceType" -}}
+{{- if and (ne .Values.resourceType "deployment") (ne .Values.resourceType "statefulset") -}}
+nats: resourceType
+ Invalid resourceType selected. Valid values are "deployment" and
+ "statefulset". Please set a valid value (--set resourceType="xxxx")
+{{- end -}}
+{{- end -}}
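With `nats.validateValues` in place, an unrecognized `resourceType` fails the render with the message above. To select the new Deployment workload instead of the default StatefulSet, the values might look like this (note that the new deployment template references `deployment.maxSurge` and `deployment.maxUnavailable`, which are only commented out in the default values, so setting them explicitly is advisable when `updateType` is `RollingUpdate`):

```yaml
resourceType: "deployment"

deployment:
  updateType: RollingUpdate
  maxSurge: 25%
  maxUnavailable: 25%
```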
diff --git a/stable/nats/templates/deployment.yaml b/stable/nats/templates/deployment.yaml
new file mode 100644
index 000000000000..2964cd72817c
--- /dev/null
+++ b/stable/nats/templates/deployment.yaml
@@ -0,0 +1,166 @@
+{{- if eq .Values.resourceType "deployment" }}
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: {{ template "nats.fullname" . }}
+ labels:
+ app: "{{ template "nats.name" . }}"
+ chart: "{{ template "nats.chart" . }}"
+ release: {{ .Release.Name | quote }}
+ heritage: {{ .Release.Service | quote }}
+spec:
+ replicas: {{ .Values.replicaCount }}
+ strategy:
+ rollingUpdate:
+ maxSurge: {{ .Values.deployment.maxSurge }}
+ maxUnavailable: {{ .Values.deployment.maxUnavailable }}
+ type: {{ .Values.deployment.updateType }}
+ selector:
+ matchLabels:
+ app: "{{ template "nats.name" . }}"
+ release: {{ .Release.Name | quote }}
+ template:
+ metadata:
+ labels:
+ app: "{{ template "nats.name" . }}"
+ chart: "{{ template "nats.chart" . }}"
+ release: {{ .Release.Name | quote }}
+ {{- if .Values.podLabels }}
+{{ toYaml .Values.podLabels | indent 8 }}
+ {{- end }}
+{{- if or .Values.podAnnotations .Values.metrics.enabled }}
+ annotations:
+{{- if .Values.podAnnotations }}
+{{ toYaml .Values.podAnnotations | indent 8 }}
+{{- end }}
+{{- if .Values.metrics.podAnnotations }}
+{{ toYaml .Values.metrics.podAnnotations | indent 8 }}
+{{- end }}
+{{- end }}
+ spec:
+{{- include "nats.imagePullSecrets" . | indent 6 }}
+ {{- if .Values.securityContext.enabled }}
+ securityContext:
+ fsGroup: {{ .Values.securityContext.fsGroup }}
+ runAsUser: {{ .Values.securityContext.runAsUser }}
+ {{- end }}
+ {{- if .Values.priorityClassName }}
+ priorityClassName: {{ .Values.priorityClassName | quote }}
+ {{- end }}
+ {{- if .Values.nodeSelector }}
+ nodeSelector:
+{{ toYaml .Values.nodeSelector | indent 8 }}
+ {{- end }}
+ {{- if .Values.tolerations }}
+ tolerations:
+{{ toYaml .Values.tolerations | indent 8 }}
+ {{- end }}
+ {{- if .Values.schedulerName }}
+ schedulerName: {{ .Values.schedulerName | quote }}
+ {{- end }}
+ {{- if eq .Values.antiAffinity "hard" }}
+ affinity:
+ podAntiAffinity:
+ requiredDuringSchedulingIgnoredDuringExecution:
+ - topologyKey: "kubernetes.io/hostname"
+ labelSelector:
+ matchLabels:
+ app: "{{ template "nats.name" . }}"
+ release: {{ .Release.Name | quote }}
+ {{- else if eq .Values.antiAffinity "soft" }}
+ affinity:
+ podAntiAffinity:
+ preferredDuringSchedulingIgnoredDuringExecution:
+ - weight: 1
+ podAffinityTerm:
+ topologyKey: kubernetes.io/hostname
+ labelSelector:
+ matchLabels:
+ app: "{{ template "nats.name" . }}"
+ release: {{ .Release.Name | quote }}
+ {{- end }}
+ containers:
+ - name: {{ template "nats.name" . }}
+ image: {{ template "nats.image" . }}
+ imagePullPolicy: {{ .Values.image.pullPolicy }}
+ command:
+ - gnatsd
+ args:
+ - -c
+ - /opt/bitnami/nats/gnatsd.conf
+ # to ensure nats can run as a non-root user, we put the configuration
+ # file under `/opt/bitnami/nats/gnatsd.conf`; please check the link below
+ # for the implementation inside the Dockerfile:
+ # - https://github.com/bitnami/bitnami-docker-nats/blob/master/1/debian-9/Dockerfile#L12
+ {{- if .Values.extraArgs }}
+{{ toYaml .Values.extraArgs | indent 8 }}
+ {{- end }}
+ ports:
+ - name: client
+ containerPort: {{ .Values.client.service.port }}
+ - name: cluster
+ containerPort: {{ .Values.cluster.service.port }}
+ - name: monitoring
+ containerPort: {{ .Values.monitoring.service.port }}
+ {{- if .Values.livenessProbe.enabled }}
+ livenessProbe:
+ httpGet:
+ path: /
+ port: monitoring
+ initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
+ periodSeconds: {{ .Values.livenessProbe.periodSeconds }}
+ timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }}
+ successThreshold: {{ .Values.livenessProbe.successThreshold }}
+ failureThreshold: {{ .Values.livenessProbe.failureThreshold }}
+ {{- end }}
+ {{- if .Values.readinessProbe.enabled }}
+ readinessProbe:
+ httpGet:
+ path: /
+ port: monitoring
+ initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
+ periodSeconds: {{ .Values.readinessProbe.periodSeconds }}
+ timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }}
+ successThreshold: {{ .Values.readinessProbe.successThreshold }}
+ failureThreshold: {{ .Values.readinessProbe.failureThreshold }}
+ {{- end }}
+ resources:
+{{ toYaml .Values.resources | indent 10 }}
+ volumeMounts:
+ - name: config
+ mountPath: /opt/bitnami/nats/gnatsd.conf
+ subPath: gnatsd.conf
+ {{- if .Values.sidecars }}
+{{ toYaml .Values.sidecars | indent 6 }}
+ {{- end }}
+{{- if .Values.metrics.enabled }}
+ - name: metrics
+ image: {{ template "nats.metrics.image" . }}
+ imagePullPolicy: {{ .Values.metrics.image.pullPolicy | quote }}
+ args:
+{{ toYaml .Values.metrics.args | indent 10 -}}
+ - "http://localhost:{{ .Values.monitoring.service.port }}"
+ ports:
+ - name: metrics
+ containerPort: {{ .Values.metrics.port }}
+ livenessProbe:
+ httpGet:
+ path: /metrics
+ port: metrics
+ initialDelaySeconds: 15
+ timeoutSeconds: 5
+ readinessProbe:
+ httpGet:
+ path: /metrics
+ port: metrics
+ initialDelaySeconds: 5
+ timeoutSeconds: 1
+ resources:
+{{ toYaml .Values.metrics.resources | indent 10 }}
+{{- end }}
+ volumes:
+ - name: config
+ configMap:
+ name: {{ template "nats.fullname" . }}
+{{- end }}
diff --git a/stable/nats/templates/headless-svc.yaml b/stable/nats/templates/headless-svc.yaml
index d340551b9de0..5d8dd84dd2fe 100644
--- a/stable/nats/templates/headless-svc.yaml
+++ b/stable/nats/templates/headless-svc.yaml
@@ -12,11 +12,11 @@ spec:
clusterIP: None
ports:
- name: client
- port: 4222
+ port: {{ .Values.client.service.port }}
targetPort: client
- name: cluster
- port: 6222
+ port: {{ .Values.cluster.service.port }}
targetPort: cluster
selector:
app: {{ template "nats.name" . }}
- release: {{ .Release.Name | quote }}
\ No newline at end of file
+ release: {{ .Release.Name | quote }}
diff --git a/stable/nats/templates/statefulset.yaml b/stable/nats/templates/statefulset.yaml
index 7c8a20f39944..662db563d24c 100644
--- a/stable/nats/templates/statefulset.yaml
+++ b/stable/nats/templates/statefulset.yaml
@@ -1,3 +1,4 @@
+{{- if or (eq .Values.resourceType "statefulset") (not (contains .Values.resourceType "deployment")) }}
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
@@ -12,10 +13,14 @@ spec:
replicas: {{ .Values.replicaCount }}
updateStrategy:
type: {{ .Values.statefulset.updateStrategy }}
+ {{- if (eq "Recreate" .Values.statefulset.updateStrategy) }}
+ rollingUpdate: null
+ {{- else }}
{{- if .Values.statefulset.rollingUpdatePartition }}
rollingUpdate:
partition: {{ .Values.statefulset.rollingUpdatePartition }}
{{- end }}
+ {{- end }}
selector:
matchLabels:
app: "{{ template "nats.name" . }}"
@@ -39,15 +44,15 @@ spec:
{{- end }}
{{- end }}
spec:
- {{- if .Values.image.pullSecrets }}
- imagePullSecrets:
-{{ toYaml .Values.image.pullSecrets | indent 8 }}
- {{- end }}
+{{- include "nats.imagePullSecrets" . | indent 6 }}
{{- if .Values.securityContext.enabled }}
securityContext:
fsGroup: {{ .Values.securityContext.fsGroup }}
runAsUser: {{ .Values.securityContext.runAsUser }}
{{- end }}
+ {{- if .Values.priorityClassName }}
+ priorityClassName: {{ .Values.priorityClassName | quote }}
+ {{- end }}
{{- if .Values.nodeSelector }}
nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
@@ -89,6 +94,10 @@ spec:
args:
- -c
- /opt/bitnami/nats/gnatsd.conf
+ # to ensure nats can run as a non-root user, we put the configuration
+ # file under `/opt/bitnami/nats/gnatsd.conf`; please check the link below
+ # for the implementation inside the Dockerfile:
+ # - https://github.com/bitnami/bitnami-docker-nats/blob/master/1/debian-9/Dockerfile#L12
{{- if .Values.extraArgs }}
{{ toYaml .Values.extraArgs | indent 8 }}
{{- end }}
@@ -132,7 +141,7 @@ spec:
{{- end }}
{{- if .Values.metrics.enabled }}
- name: metrics
- image: {{ template "metrics.image" . }}
+ image: {{ template "nats.metrics.image" . }}
imagePullPolicy: {{ .Values.metrics.image.pullPolicy | quote }}
args:
{{ toYaml .Values.metrics.args | indent 10 -}}
@@ -159,3 +168,4 @@ spec:
- name: config
configMap:
name: {{ template "nats.fullname" . }}
+{{- end }}
diff --git a/stable/nats/values-production.yaml b/stable/nats/values-production.yaml
index d3c1c18ef93c..4c410a6d2f98 100644
--- a/stable/nats/values-production.yaml
+++ b/stable/nats/values-production.yaml
@@ -1,8 +1,11 @@
-## Global Docker image registry
-## Please, note that this will override the image registry for all the images, including dependencies, configured to use the global value
+## Global Docker image parameters
+## Please, note that this will override the image parameters, including dependencies, configured to use the global value
+## Current available global Docker image parameters: imageRegistry and imagePullSecrets
##
# global:
-# imageRegistry:
+# imageRegistry: myRegistryName
+# imagePullSecrets:
+# - myRegistryKeySecretName
## Bitnami NATS image version
## ref: https://hub.docker.com/r/bitnami/nats/tags/
@@ -10,14 +13,14 @@
image:
registry: docker.io
repository: bitnami/nats
- tag: 1.4.0
+ tag: 1.4.1
pullPolicy: Always
## Optionally specify an array of imagePullSecrets.
## Secrets must be manually created in the namespace.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
- # - name: myRegistrKeySecretName
+ # - name: myRegistryKeySecretName
## NATS replicas
replicaCount: 3
@@ -57,7 +60,18 @@ podAnnotations: {}
##
podLabels: {}
-## Update strategy, can be set to RollingUpdate or OnDelete by default.
+## Pod Priority Class
+## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
+##
+# priorityClassName: ""
+
+## NATS cluster resource type under Kubernetes. Allowed values: statefulset (default) or deployment
+## ref:
+## - https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
+## - https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
+resourceType: "statefulset"
+
+## Update strategy for the statefulset; can be set to RollingUpdate or OnDelete (the default).
## https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#updating-statefulsets
statefulset:
updateStrategy: OnDelete
@@ -65,6 +79,12 @@ statefulset:
## https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#partitions
# rollingUpdatePartition:
+## Update strategy for the deployment; can be set to RollingUpdate (the default) or Recreate.
+## https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy
+deployment:
+ updateType: RollingUpdate
+ # maxSurge: 25%
+ # maxUnavailable: 25%
## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
@@ -267,7 +287,7 @@ metrics:
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
- # - myRegistrKeySecretName
+ # - myRegistryKeySecretName
## Metrics exporter resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
diff --git a/stable/nats/values.yaml b/stable/nats/values.yaml
index 954a0e780045..188dd53ebc05 100644
--- a/stable/nats/values.yaml
+++ b/stable/nats/values.yaml
@@ -1,8 +1,11 @@
-## Global Docker image registry
-## Please, note that this will override the image registry for all the images, including dependencies, configured to use the global value
+## Global Docker image parameters
+## Please, note that this will override the image parameters, including dependencies, configured to use the global value
+## Current available global Docker image parameters: imageRegistry and imagePullSecrets
##
# global:
-# imageRegistry:
+# imageRegistry: myRegistryName
+# imagePullSecrets:
+# - myRegistryKeySecretName
## Bitnami NATS image version
## ref: https://hub.docker.com/r/bitnami/nats/tags/
@@ -10,7 +13,7 @@
image:
registry: docker.io
repository: bitnami/nats
- tag: 1.4.0
+ tag: 1.4.1
## Specify a imagePullPolicy
## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
@@ -21,7 +24,7 @@ image:
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
- # - name: myRegistrKeySecretName
+ # - name: myRegistryKeySecretName
## NATS replicas
replicaCount: 1
@@ -61,7 +64,18 @@ podAnnotations: {}
##
podLabels: {}
-## Update strategy, can be set to RollingUpdate or OnDelete by default.
+## Pod Priority Class
+## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
+##
+# priorityClassName: ""
+
+## NATS cluster resource type under Kubernetes. Allowed values: statefulset (default) or deployment
+## ref:
+## - https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
+## - https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
+resourceType: "statefulset"
+
+## Update strategy for the statefulset, can be set to RollingUpdate or OnDelete (the default).
## https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#updating-statefulsets
statefulset:
updateStrategy: OnDelete
@@ -69,6 +83,12 @@ statefulset:
## https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#partitions
# rollingUpdatePartition:
+## Update strategy for the deployment; Kubernetes Deployments support RollingUpdate (the default) or Recreate.
+## https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy
+deployment:
+ updateType: RollingUpdate
+ # maxSurge: 25%
+ # maxUnavailable: 25%
## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
@@ -273,7 +293,7 @@ metrics:
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
- # - myRegistrKeySecretName
+ # - myRegistryKeySecretName
## Metrics exporter resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
diff --git a/stable/neo4j/Chart.yaml b/stable/neo4j/Chart.yaml
index 337c2fb1a23f..a4b4a789961f 100644
--- a/stable/neo4j/Chart.yaml
+++ b/stable/neo4j/Chart.yaml
@@ -1,6 +1,7 @@
+apiVersion: v1
name: neo4j
home: https://www.neo4j.com
-version: 0.10.0
+version: 1.0.0
appVersion: 3.4.5
description: Neo4j is the world's leading graph database
icon: http://info.neo4j.com/rs/773-GON-065/images/neo4j_logo.png
diff --git a/stable/newrelic-infrastructure/Chart.yaml b/stable/newrelic-infrastructure/Chart.yaml
index b67073a46fd1..f0d7659eb0c5 100644
--- a/stable/newrelic-infrastructure/Chart.yaml
+++ b/stable/newrelic-infrastructure/Chart.yaml
@@ -1,9 +1,9 @@
apiVersion: v1
description: A Helm chart to deploy the New Relic Infrastructure Agent as a DaemonSet
name: newrelic-infrastructure
-version: 0.7.0
-appVersion: 1.3.1
-home: https://hub.docker.com/r/newrelic/infrastructure/
+version: 0.11.0
+appVersion: 1.8.0
+home: https://hub.docker.com/r/newrelic/infrastructure-k8s/
source:
- https://github.com/kubernetes/kubernetes/tree/master/examples/newrelic-infrastructure
engine: gotpl
@@ -11,6 +11,12 @@ icon: https://newrelic.com/assets/newrelic/source/NewRelic-logo-square.svg
maintainers:
- name: rk295
email: robin@kearney.co.uk
+ - name: jfjoly
+ email: jjoly@newrelic.com
+ - name: smoya
+ - name: areina
+ - name: douglascamata
+ - name: rk295
keywords:
- infrastructure
- newrelic
diff --git a/stable/newrelic-infrastructure/OWNERS b/stable/newrelic-infrastructure/OWNERS
new file mode 100644
index 000000000000..5cc48745b4de
--- /dev/null
+++ b/stable/newrelic-infrastructure/OWNERS
@@ -0,0 +1,12 @@
+approvers:
+- jfjoly
+- smoya
+- areina
+- douglascamata
+- rk295
+reviewers:
+- jfjoly
+- smoya
+- areina
+- douglascamata
+- rk295
diff --git a/stable/newrelic-infrastructure/README.md b/stable/newrelic-infrastructure/README.md
index 37effa06a978..4c267ecc59d8 100644
--- a/stable/newrelic-infrastructure/README.md
+++ b/stable/newrelic-infrastructure/README.md
@@ -13,9 +13,11 @@ This chart will deploy the New Relic Infrastructure agent as a Daemonset.
| `config` | A `newrelic.yml` file if you wish to provide. | |
| `kubeStateMetricsUrl` | If provided, the discovery process for kube-state-metrics endpoint won't be triggered. Example: http://172.17.0.3:8080 |
| `kubeStateMetricsTimeout` | Timeout for accessing kube-state-metrics in milliseconds. If not set the newrelic default is 5000 | |
+| `rbac.create` | Enable Role-based authentication | `true` |
+| `rbac.pspEnabled` | Enable pod security policy support | `false` |
| `image.name` | The container to pull. | `newrelic/infrastructure` |
| `image.pullPolicy` | The pull policy. | `IfNotPresent` |
-| `image.tag` | The version of the container to pull. | `1.3.1` |
+| `image.tag` | The version of the container to pull. | `1.8.0` |
| `resources` | Any resources you wish to assign to the pod. | See Resources below |
| `verboseLog` | Should the agent log verbosely. (Boolean) | `false` |
| `nodeSelector` | Node label to use for scheduling | `nil` |
diff --git a/stable/newrelic-infrastructure/templates/clusterrole.yaml b/stable/newrelic-infrastructure/templates/clusterrole.yaml
new file mode 100644
index 000000000000..340a31ca2b12
--- /dev/null
+++ b/stable/newrelic-infrastructure/templates/clusterrole.yaml
@@ -0,0 +1,27 @@
+{{- if .Values.rbac.create }}
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ labels: {{ include "newrelic.labels" . | indent 4 }}
+ name: {{ template "newrelic.fullname" . }}
+rules:
+ - apiGroups: [""]
+ resources:
+ - "nodes"
+ - "nodes/metrics"
+ - "nodes/stats"
+ - "nodes/proxy"
+ - "pods"
+ - "services"
+ verbs: ["get", "list"]
+{{- if .Values.rbac.pspEnabled }}
+ - apiGroups:
+ - extensions
+ resources:
+ - podsecuritypolicies
+ resourceNames:
+ - privileged-{{ template "newrelic.fullname" . }}
+ verbs:
+ - use
+{{- end -}}
+{{- end -}}
diff --git a/stable/newrelic-infrastructure/templates/rbac.yaml b/stable/newrelic-infrastructure/templates/clusterrolebinding.yaml
similarity index 51%
rename from stable/newrelic-infrastructure/templates/rbac.yaml
rename to stable/newrelic-infrastructure/templates/clusterrolebinding.yaml
index ebe85132e1fc..d1940ef4dd2f 100644
--- a/stable/newrelic-infrastructure/templates/rbac.yaml
+++ b/stable/newrelic-infrastructure/templates/clusterrolebinding.yaml
@@ -1,21 +1,5 @@
{{- if .Values.rbac.create }}
-apiVersion: rbac.authorization.k8s.io/v1beta1
-kind: ClusterRole
-metadata:
- labels: {{ include "newrelic.labels" . | indent 4 }}
- name: {{ template "newrelic.fullname" . }}
-rules:
-- apiGroups: [""]
- resources:
- - "nodes"
- - "nodes/metrics"
- - "nodes/stats"
- - "nodes/proxy"
- - "pods"
- - "services"
- verbs: ["get", "list"]
----
-apiVersion: rbac.authorization.k8s.io/v1beta1
+apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels: {{ include "newrelic.labels" . | indent 4 }}
diff --git a/stable/newrelic-infrastructure/templates/daemonset.yaml b/stable/newrelic-infrastructure/templates/daemonset.yaml
index f170c2756b10..082bbc491c16 100644
--- a/stable/newrelic-infrastructure/templates/daemonset.yaml
+++ b/stable/newrelic-infrastructure/templates/daemonset.yaml
@@ -57,6 +57,8 @@ spec:
fieldRef:
apiVersion: "v1"
fieldPath: "spec.nodeName"
+ - name: "NRIA_CUSTOM_ATTRIBUTES"
+ value: {{ .Values.customAttribues }}
- name: "NRIA_PASSTHROUGH_ENVIRONMENT"
value: "KUBERNETES_SERVICE_HOST,KUBERNETES_SERVICE_PORT,CLUSTER_NAME,CADVISOR_PORT,NRK8S_NODE_NAME,KUBE_STATE_METRICS_URL,TIMEOUT"
{{- if .Values.verboseLog }}
diff --git a/stable/newrelic-infrastructure/templates/podsecuritypolicy.yaml b/stable/newrelic-infrastructure/templates/podsecuritypolicy.yaml
new file mode 100644
index 000000000000..98d03a6bbbd9
--- /dev/null
+++ b/stable/newrelic-infrastructure/templates/podsecuritypolicy.yaml
@@ -0,0 +1,26 @@
+{{- if .Values.rbac.pspEnabled }}
+apiVersion: extensions/v1beta1
+kind: PodSecurityPolicy
+metadata:
+ name: privileged-{{ template "newrelic.fullname" . }}
+spec:
+ allowedCapabilities:
+ - '*'
+ fsGroup:
+ rule: RunAsAny
+ privileged: true
+ runAsUser:
+ rule: RunAsAny
+ seLinux:
+ rule: RunAsAny
+ supplementalGroups:
+ rule: RunAsAny
+ volumes:
+ - '*'
+ hostPID: true
+ hostIPC: true
+ hostNetwork: true
+ hostPorts:
+ - min: 1
+    max: 65535
+{{- end }}
diff --git a/stable/newrelic-infrastructure/values.yaml b/stable/newrelic-infrastructure/values.yaml
index 9a5db40d0174..c49d6f08b0bd 100644
--- a/stable/newrelic-infrastructure/values.yaml
+++ b/stable/newrelic-infrastructure/values.yaml
@@ -16,7 +16,7 @@ verboseLog: false
image:
repository: newrelic/infrastructure-k8s
- tag: 1.3.1
+ tag: 1.8.0
pullPolicy: IfNotPresent
resources:
@@ -30,6 +30,7 @@ resources:
rbac:
# Specifies whether RBAC resources should be created
create: true
+ pspEnabled: false
serviceAccount:
# Specifies whether a ServiceAccount should be created
@@ -89,3 +90,6 @@ nodeSelector: {}
tolerations: []
updateStrategy: RollingUpdate
+
+# Custom attributes to be passed to the New Relic agent
+customAttribues: "'{\"clusterName\":\"$(CLUSTER_NAME)\"}'"
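The quoting in the custom-attributes default above matters: Kubernetes expands `$(CLUSTER_NAME)` in the container env before the agent reads `NRIA_CUSTOM_ATTRIBUTES`. A minimal shell sketch of that expansion — the `sed` call is only a stand-in for the kubelet's env interpolation, and `my-cluster` is a made-up cluster name:

```shell
# Emulate Kubernetes' $(VAR) substitution for the custom-attributes value.
CLUSTER_NAME=my-cluster
raw="'{\"clusterName\":\"\$(CLUSTER_NAME)\"}'"
expanded=$(printf '%s' "$raw" | sed "s/\$(CLUSTER_NAME)/$CLUSTER_NAME/")
echo "$expanded"   # the JSON the agent actually receives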
diff --git a/stable/nextcloud/.helmignore b/stable/nextcloud/.helmignore
new file mode 100644
index 000000000000..f0c131944441
--- /dev/null
+++ b/stable/nextcloud/.helmignore
@@ -0,0 +1,21 @@
+# Patterns to ignore when building packages.
+# This supports shell glob matching, relative path matching, and
+# negation (prefixed with !). Only one pattern per line.
+.DS_Store
+# Common VCS dirs
+.git/
+.gitignore
+.bzr/
+.bzrignore
+.hg/
+.hgignore
+.svn/
+# Common backup files
+*.swp
+*.bak
+*.tmp
+*~
+# Various IDEs
+.project
+.idea/
+*.tmproj
diff --git a/stable/nextcloud/Chart.yaml b/stable/nextcloud/Chart.yaml
new file mode 100644
index 000000000000..84a460ce5c11
--- /dev/null
+++ b/stable/nextcloud/Chart.yaml
@@ -0,0 +1,20 @@
+apiVersion: v1
+name: nextcloud
+version: 1.0.5
+appVersion: 15.0.2
+description: A file sharing server that puts the control and security of your own data back into your hands.
+keywords:
+- nextcloud
+- storage
+- http
+- web
+- php
+home: https://nextcloud.com/
+icon: https://cdn.rawgit.com/docker-library/docs/defa5ffc7123177acd60ddef6e16bddf694cc35f/nextcloud/logo.svg
+sources:
+- https://github.com/nextcloud/docker
+maintainers:
+- name: chrisingenhaag
+ email: christian.ingenhaag@googlemail.com
+- name: billimek
+ email: jeff@billimek.com
diff --git a/stable/nextcloud/OWNERS b/stable/nextcloud/OWNERS
new file mode 100644
index 000000000000..7936f915ea41
--- /dev/null
+++ b/stable/nextcloud/OWNERS
@@ -0,0 +1,6 @@
+approvers:
+- chrisingenhaag
+- billimek
+reviewers:
+- chrisingenhaag
+- billimek
diff --git a/stable/nextcloud/README.md b/stable/nextcloud/README.md
new file mode 100644
index 000000000000..0f35d06e474c
--- /dev/null
+++ b/stable/nextcloud/README.md
@@ -0,0 +1,119 @@
+# nextcloud
+
+[nextcloud](https://nextcloud.com/) is a file sharing server that puts the control and security of your own data back into your hands.
+
+## TL;DR;
+
+```console
+$ helm install stable/nextcloud
+```
+
+## Introduction
+
+This chart bootstraps a [nextcloud](https://hub.docker.com/_/nextcloud/) deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
+
+It also packages the [Bitnami MariaDB chart](https://github.com/kubernetes/charts/tree/master/stable/mariadb) which is required for bootstrapping a MariaDB deployment for the database requirements of the nextcloud application.
+
+## Prerequisites
+
+- Kubernetes 1.9+ with Beta APIs enabled
+- PV provisioner support in the underlying infrastructure
+
+## Installing the Chart
+
+To install the chart with the release name `my-release`:
+
+```console
+$ helm install --name my-release stable/nextcloud
+```
+
+The command deploys nextcloud on the Kubernetes cluster in the default configuration. The [configuration](#configuration) section lists the parameters that can be configured during installation.
+
+> **Tip**: List all releases using `helm list`
+
+## Uninstalling the Chart
+
+To uninstall/delete the `my-release` deployment:
+
+```console
+$ helm delete my-release
+```
+
+The command removes all the Kubernetes components associated with the chart and deletes the release.
+
+## Configuration
+
+The following table lists the configurable parameters of the nextcloud chart and their default values.
+
+| Parameter | Description | Default |
+|-------------------------------------|-------------------------------------------|-------------------------------------------------------- |
+| `image.repository` | nextcloud Image name | `nextcloud` |
+| `image.tag` | nextcloud Image tag | `{VERSION}` |
+| `image.pullPolicy` | Image pull policy | `Always` if `imageTag` is `latest`, else `IfNotPresent` |
+| `image.pullSecrets` | Specify image pull secrets | `nil` |
+| `ingress.enabled` | Enable use of ingress controllers | `false` |
+| `ingress.servicePort` | Ingress' backend servicePort | `http` |
+| `ingress.annotations` | An array of service annotations | `nil` |
+| `ingress.tls` | Ingress TLS configuration | `[]` |
+| `nextcloud.host` | nextcloud host to create application URLs | `nextcloud.kube.home` |
+| `nextcloud.username` | User of the application | `admin` |
+| `nextcloud.password` | Application password | `changeme` |
+| `internalDatabase.enabled` | Whether to use internal sqlite database | `true` |
+| `internalDatabase.database` | Name of the existing database | `nextcloud` |
+| `externalDatabase.enabled` | Whether to use external database | `false` |
+| `externalDatabase.host` | Host of the external database | `nil` |
+| `externalDatabase.database` | Name of the existing database | `nextcloud` |
+| `externalDatabase.user` | Existing username in the external db | `nextcloud` |
+| `externalDatabase.password` | Password for the above username | `nil` |
+| `mariadb.enabled` | Whether to use the MariaDB chart | `false` |
+| `mariadb.db.name` | Database name to create | `nextcloud` |
+| `mariadb.db.password` | Password for the database | `changeme` |
+| `mariadb.db.user` | Database user to create | `nextcloud` |
+| `mariadb.rootUser.password` | MariaDB admin password | `nil` |
+| `service.type` | Kubernetes Service type | `ClusterIP` |
+| `service.loadBalancerIP` | LoadBalancerIP for service type LoadBalancer | `nil` |
+| `persistence.enabled` | Enable persistence using PVC | `false` |
+| `persistence.storageClass` | PVC Storage Class for nextcloud volume | `nil` (uses alpha storage class annotation) |
+| `persistence.existingClaim` | An existing PVC name for nextcloud volume | `nil` (uses alpha storage class annotation) |
+| `persistence.accessMode` | PVC Access Mode for nextcloud volume | `ReadWriteOnce` |
+| `persistence.size` | PVC Storage Request for nextcloud volume | `8Gi` |
+| `resources` | CPU/Memory resource requests/limits | `{}` |
+
+> **Note**:
+>
+> For nextcloud to function correctly, you should set the `nextcloud.host` parameter to the FQDN (recommended) or the public IP address of the nextcloud service.
+>
+> Optionally, you can specify the `service.loadBalancerIP` parameter to assign a reserved IP address to the nextcloud service of the chart. However, please note that this feature is only available on a few cloud providers (e.g. GKE).
+>
+> To reserve a public IP address on GKE:
+>
+> ```bash
+> $ gcloud compute addresses create nextcloud-public-ip
+> ```
+>
+> The reserved IP address can be associated to the nextcloud service by specifying it as the value of the `service.loadBalancerIP` parameter while installing the chart.
+
+Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
+
+```console
+$ helm install --name my-release \
+ --set nextcloud.username=admin,nextcloud.password=password,mariadb.rootUser.password=secretpassword \
+ stable/nextcloud
+```
+
+The above command sets the nextcloud administrator account username and password to `admin` and `password` respectively. Additionally, it sets the MariaDB `root` user password to `secretpassword`.
+
+Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,
+
+```console
+$ helm install --name my-release -f values.yaml stable/nextcloud
+```
+
+> **Tip**: You can use the default [values.yaml](values.yaml)
+
+## Persistence
+
+The [Nextcloud](https://hub.docker.com/_/nextcloud/) image stores the nextcloud data and configuration under the `/var/www/html` path of the container.
+
+Persistent Volume Claims are used to keep the data across deployments. This is known to work in GCE, AWS, and minikube.
+See the [Configuration](#configuration) section to enable persistence and configuration of the PVC.
diff --git a/stable/nextcloud/requirements.lock b/stable/nextcloud/requirements.lock
new file mode 100644
index 000000000000..222b19649568
--- /dev/null
+++ b/stable/nextcloud/requirements.lock
@@ -0,0 +1,6 @@
+dependencies:
+- name: mariadb
+ repository: https://kubernetes-charts.storage.googleapis.com/
+ version: 5.5.0
+digest: sha256:66e8bec50806f6576f4954c145d45b44a55975cad4f10b3bdd6cc4e208055bca
+generated: 2019-01-26T18:57:18.847326+01:00
diff --git a/stable/nextcloud/requirements.yaml b/stable/nextcloud/requirements.yaml
new file mode 100644
index 000000000000..6582ce49e328
--- /dev/null
+++ b/stable/nextcloud/requirements.yaml
@@ -0,0 +1,5 @@
+dependencies:
+- name: mariadb
+ version: ~5.5.0
+ repository: https://kubernetes-charts.storage.googleapis.com/
+ condition: mariadb.enabled
diff --git a/stable/nextcloud/templates/NOTES.txt b/stable/nextcloud/templates/NOTES.txt
new file mode 100644
index 000000000000..c755176247de
--- /dev/null
+++ b/stable/nextcloud/templates/NOTES.txt
@@ -0,0 +1,94 @@
+{{- if or .Values.mariadb.enabled .Values.externalDatabase.host -}}
+
+{{- if empty (include "nextcloud.host" .) -}}
+#################################################################################
+### WARNING: You did not provide an external host in your 'helm install' call ###
+#################################################################################
+
+This deployment will be incomplete until you configure nextcloud with a resolvable
+host. To configure nextcloud with the URL of your service:
+
+1. Get the nextcloud URL by running:
+
+ {{- if contains "NodePort" .Values.service.type }}
+
+ export APP_PORT=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "nextcloud.fullname" . }} -o jsonpath="{.spec.ports[0].nodePort}")
+ export APP_HOST=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
+
+ {{- else if contains "LoadBalancer" .Values.service.type }}
+
+ NOTE: It may take a few minutes for the LoadBalancer IP to be available.
+ Watch the status with: 'kubectl get svc --namespace {{ .Release.Namespace }} -w {{ template "nextcloud.fullname" . }}'
+
+ export APP_HOST=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "nextcloud.fullname" . }} --template "{{ "{{ range (index .status.loadBalancer.ingress 0) }}{{ . }}{{ end }}" }}")
+ export APP_PASSWORD=$(kubectl get secret --namespace {{ .Release.Namespace }} {{ template "nextcloud.fullname" . }} -o jsonpath="{.data.nextcloud-password}" | base64 --decode)
+ {{- if .Values.mariadb.db.password }}
+ export APP_DATABASE_PASSWORD=$(kubectl get secret --namespace {{ .Release.Namespace }} {{ template "nextcloud.mariadb.fullname" . }} -o jsonpath="{.data.mariadb-password}" | base64 --decode)
+ {{- end }}
+ {{- end }}
+
+2. Complete your nextcloud deployment by running:
+
+{{- if .Values.mariadb.enabled }}
+
+ helm upgrade {{ .Release.Name }} stable/nextcloud \
+ --set nextcloud.host=$APP_HOST,nextcloud.password=$APP_PASSWORD{{ if .Values.mariadb.db.password }},mariadb.db.password=$APP_DATABASE_PASSWORD{{ end }}
+{{- else }}
+
+ ## PLEASE UPDATE THE EXTERNAL DATABASE CONNECTION PARAMETERS IN THE FOLLOWING COMMAND AS NEEDED ##
+
+ helm upgrade {{ .Release.Name }} stable/nextcloud \
+ --set nextcloud.password=$APP_PASSWORD,nextcloud.host=$APP_HOST,service.type={{ .Values.service.type }},mariadb.enabled=false{{- if not (empty .Values.externalDatabase.host) }},externalDatabase.host={{ .Values.externalDatabase.host }}{{- end }}{{- if not (empty .Values.externalDatabase.user) }},externalDatabase.user={{ .Values.externalDatabase.user }}{{- end }}{{- if not (empty .Values.externalDatabase.password) }},externalDatabase.password={{ .Values.externalDatabase.password }}{{- end }}{{- if not (empty .Values.externalDatabase.database) }},externalDatabase.database={{ .Values.externalDatabase.database }}{{- end }}
+{{- end }}
+
+{{- else -}}
+1. Get the nextcloud URL by running:
+
+{{- if eq .Values.service.type "ClusterIP" }}
+
+    export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "nextcloud.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
+    echo http://127.0.0.1:8080/
+    kubectl port-forward $POD_NAME 8080:80
+{{- else }}
+
+ echo http://{{ include "nextcloud.host" . }}{{ if .Values.nextcloudPort }}:{{ .Values.nextcloudPort }}{{ end }}/
+{{- end }}
+
+2. Get your nextcloud login credentials by running:
+
+ echo User: {{ .Values.nextcloud.username }}
+ echo Password: $(kubectl get secret --namespace {{ .Release.Namespace }} {{ template "nextcloud.fullname" . }} -o jsonpath="{.data.nextcloud-password}" | base64 --decode)
+{{- end }}
+
+{{- else -}}
+
+#######################################################################################################
+## WARNING: You did not provide an external database host in your 'helm install' call ##
+## Running Nextcloud with the integrated sqlite database is not recommended for production instances ##
+#######################################################################################################
+
+For better performance, you should configure nextcloud with a resolvable database
+host. To configure nextcloud to use an external database host:
+
+
+1. Complete your nextcloud deployment by running:
+
+{{- if contains "NodePort" .Values.service.type }}
+ export APP_HOST=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
+{{- else if contains "LoadBalancer" .Values.service.type }}
+
+ NOTE: It may take a few minutes for the LoadBalancer IP to be available.
+ Watch the status with: 'kubectl get svc --namespace {{ .Release.Namespace }} -w {{ template "nextcloud.fullname" . }}'
+
+ export APP_HOST=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "nextcloud.fullname" . }} --template "{{ "{{ range (index .status.loadBalancer.ingress 0) }}{{ . }}{{ end }}" }}")
+{{- else }}
+
+ export APP_HOST=127.0.0.1
+{{- end }}
+ export APP_PASSWORD=$(kubectl get secret --namespace {{ .Release.Namespace }} {{ template "nextcloud.fullname" . }} -o jsonpath="{.data.nextcloud-password}" | base64 --decode)
+
+ ## PLEASE UPDATE THE EXTERNAL DATABASE CONNECTION PARAMETERS IN THE FOLLOWING COMMAND AS NEEDED ##
+
+ helm upgrade {{ .Release.Name }} stable/nextcloud \
+ --set nextcloud.password=$APP_PASSWORD,nextcloud.host=$APP_HOST,service.type={{ .Values.service.type }},mariadb.enabled=false{{- if not (empty .Values.externalDatabase.user) }},externalDatabase.user={{ .Values.externalDatabase.user }}{{- end }}{{- if not (empty .Values.externalDatabase.password) }},externalDatabase.password={{ .Values.externalDatabase.password }}{{- end }}{{- if not (empty .Values.externalDatabase.database) }},externalDatabase.database={{ .Values.externalDatabase.database }}{{- end }},externalDatabase.host=YOUR_EXTERNAL_DATABASE_HOST
+{{- end }}
\ No newline at end of file
diff --git a/stable/nextcloud/templates/_helpers.tpl b/stable/nextcloud/templates/_helpers.tpl
new file mode 100644
index 000000000000..edf3149eebbc
--- /dev/null
+++ b/stable/nextcloud/templates/_helpers.tpl
@@ -0,0 +1,40 @@
+{{/* vim: set filetype=mustache: */}}
+{{/*
+Expand the name of the chart.
+*/}}
+{{- define "nextcloud.name" -}}
+{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+
+{{/*
+Create a default fully qualified app name.
+We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
+If release name contains chart name it will be used as a full name.
+*/}}
+{{- define "nextcloud.fullname" -}}
+{{- if .Values.fullnameOverride -}}
+{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
+{{- else -}}
+{{- $name := default .Chart.Name .Values.nameOverride -}}
+{{- if contains $name .Release.Name -}}
+{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
+{{- else -}}
+{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Create a default fully qualified app name.
+We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
+*/}}
+{{- define "nextcloud.mariadb.fullname" -}}
+{{- printf "%s-%s" .Release.Name "mariadb" | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+
+{{/*
+Create chart name and version as used by the chart label.
+*/}}
+{{- define "nextcloud.chart" -}}
+{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
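The `trunc 63 | trimSuffix "-"` pipeline in these helpers exists because Kubernetes name fields are capped at 63 characters by the DNS label spec and must not end in a dash. A rough shell equivalent of that pipeline — the example name is synthetic, chosen so the truncation lands on a dash:

```shell
# Build a 68-character name: 62 'a' characters followed by "-extra".
name="$(printf 'a%.0s' $(seq 1 62))-extra"
# trunc 63: keep the first 63 characters, which here end in a dash.
truncated=$(printf '%s' "$name" | cut -c1-63)
# trimSuffix "-": drop the trailing dash so the name stays DNS-safe.
truncated=${truncated%-}
echo "${#truncated}"   # 62
```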
diff --git a/stable/nextcloud/templates/db-secret.yaml b/stable/nextcloud/templates/db-secret.yaml
new file mode 100644
index 000000000000..2bcdc0e7a7f3
--- /dev/null
+++ b/stable/nextcloud/templates/db-secret.yaml
@@ -0,0 +1,15 @@
+{{- if .Values.mariadb.enabled }}
+apiVersion: v1
+kind: Secret
+metadata:
+ name: {{ printf "%s-%s" .Release.Name "db" }}
+ labels:
+ app.kubernetes.io/name: {{ include "nextcloud.name" . }}
+ helm.sh/chart: {{ include "nextcloud.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+type: Opaque
+data:
+ db-password: {{ default "" .Values.mariadb.db.password | b64enc | quote }}
+ db-username: {{ default "" .Values.mariadb.db.user | b64enc | quote }}
+{{- end }}
diff --git a/stable/nextcloud/templates/deployment.yaml b/stable/nextcloud/templates/deployment.yaml
new file mode 100644
index 000000000000..ec0a426f2c20
--- /dev/null
+++ b/stable/nextcloud/templates/deployment.yaml
@@ -0,0 +1,142 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: {{ template "nextcloud.fullname" . }}
+ labels:
+ app.kubernetes.io/name: {{ include "nextcloud.name" . }}
+ helm.sh/chart: {{ include "nextcloud.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+spec:
+ replicas: {{ .Values.replicaCount }}
+ strategy:
+ type: Recreate
+ selector:
+ matchLabels:
+ app.kubernetes.io/name: {{ include "nextcloud.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ template:
+ metadata:
+ labels:
+ app.kubernetes.io/name: {{ include "nextcloud.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ spec:
+ {{- if .Values.image.pullSecrets }}
+ imagePullSecrets:
+ {{- range .Values.image.pullSecrets }}
+ - name: {{ . }}
+ {{- end}}
+ {{- end }}
+ containers:
+ - name: {{ .Chart.Name }}
+ image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
+ imagePullPolicy: {{ .Values.image.pullPolicy }}
+ env:
+ {{- if .Values.internalDatabase.enabled }}
+ - name: SQLITE_DATABASE
+ value: {{ .Values.internalDatabase.name | quote }}
+ {{- else if .Values.mariadb.enabled }}
+ - name: MYSQL_HOST
+ value: {{ template "nextcloud.mariadb.fullname" . }}
+ - name: MYSQL_DATABASE
+ value: {{ .Values.mariadb.db.name | quote }}
+ - name: MYSQL_USER
+ valueFrom:
+ secretKeyRef:
+ name: {{ printf "%s-%s" .Release.Name "db" }}
+ key: db-username
+ - name: MYSQL_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ name: {{ printf "%s-%s" .Release.Name "db" }}
+ key: db-password
+ {{- else }}
+ - name: MYSQL_HOST
+ value: {{ .Values.externalDatabase.host | quote }}
+ - name: MYSQL_DATABASE
+ value: {{ .Values.externalDatabase.database | quote }}
+ - name: MYSQL_USER
+ valueFrom:
+ secretKeyRef:
+ name: {{ printf "%s-%s" .Release.Name "db" }}
+ key: db-username
+ - name: MYSQL_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ name: {{ printf "%s-%s" .Release.Name "db" }}
+ key: db-password
+ {{- end }}
+ - name: NEXTCLOUD_ADMIN_USER
+ valueFrom:
+ secretKeyRef:
+ name: {{ template "nextcloud.fullname" . }}
+ key: nextcloud-username
+ - name: NEXTCLOUD_ADMIN_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ name: {{ template "nextcloud.fullname" . }}
+ key: nextcloud-password
+ - name: NEXTCLOUD_TRUSTED_DOMAINS
+ value: {{ .Values.nextcloud.host }}
+ ports:
+ - name: http
+ containerPort: 80
+ protocol: TCP
+ livenessProbe:
+ httpGet:
+ path: /status.php
+ port: http
+ httpHeaders:
+ - name: Host
+ value: {{ .Values.nextcloud.host | quote }}
+ initialDelaySeconds: 30
+ timeoutSeconds: 5
+ failureThreshold: 6
+ readinessProbe:
+ httpGet:
+ path: /status.php
+ port: http
+ httpHeaders:
+ - name: Host
+ value: {{ .Values.nextcloud.host | quote }}
+ initialDelaySeconds: 30
+ timeoutSeconds: 3
+ periodSeconds: 5
+ resources:
+{{ toYaml .Values.resources | indent 10 }}
+ volumeMounts:
+ - name: nextcloud-data
+ mountPath: /var/www/html/
+ subPath: root
+ - name: nextcloud-data
+ mountPath: /var/www/html/data
+ subPath: data
+ - name: nextcloud-data
+ mountPath: /var/www/html/config
+ subPath: config
+ - name: nextcloud-data
+ mountPath: /var/www/html/custom_apps
+ subPath: custom_apps
+ - name: nextcloud-data
+ mountPath: /var/www/html/themes
+ subPath: themes
+ {{- with .Values.nodeSelector }}
+ nodeSelector:
+ {{- toYaml . | nindent 8 }}
+ {{- end }}
+ {{- with .Values.affinity }}
+ affinity:
+ {{- toYaml . | nindent 8 }}
+ {{- end }}
+ {{- with .Values.tolerations }}
+ tolerations:
+ {{- toYaml . | nindent 8 }}
+ {{- end }}
+ volumes:
+ - name: nextcloud-data
+ {{- if .Values.persistence.enabled }}
+ persistentVolumeClaim:
+ claimName: {{ if .Values.persistence.existingClaim }}{{ .Values.persistence.existingClaim }}{{- else }}{{ template "nextcloud.fullname" . }}-nextcloud{{- end }}
+ {{- else }}
+ emptyDir: {}
+ {{- end }}
diff --git a/stable/nextcloud/templates/ingress.yaml b/stable/nextcloud/templates/ingress.yaml
new file mode 100644
index 000000000000..b02ece4201f2
--- /dev/null
+++ b/stable/nextcloud/templates/ingress.yaml
@@ -0,0 +1,27 @@
+{{- if .Values.ingress.enabled }}
+apiVersion: extensions/v1beta1
+kind: Ingress
+metadata:
+ name: {{ template "nextcloud.fullname" . }}
+ labels:
+ app.kubernetes.io/name: {{ include "nextcloud.name" . }}
+ helm.sh/chart: {{ include "nextcloud.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+{{- if .Values.ingress.annotations }}
+ annotations:
+{{ toYaml .Values.ingress.annotations | indent 4 }}
+{{- end }}
+spec:
+ rules:
+ - host: {{ .Values.nextcloud.host }}
+ http:
+ paths:
+ - backend:
+ serviceName: {{ template "nextcloud.fullname" . }}
+ servicePort: {{ .Values.service.port }}
+{{- if .Values.ingress.tls }}
+ tls:
+{{ toYaml .Values.ingress.tls | indent 4 }}
+{{- end -}}
+{{- end }}
diff --git a/stable/nextcloud/templates/nextcloud-pvc.yaml b/stable/nextcloud/templates/nextcloud-pvc.yaml
new file mode 100644
index 000000000000..f1a00da58d74
--- /dev/null
+++ b/stable/nextcloud/templates/nextcloud-pvc.yaml
@@ -0,0 +1,21 @@
+{{- if .Values.persistence.enabled -}}
+{{- if not .Values.persistence.existingClaim -}}
+kind: PersistentVolumeClaim
+apiVersion: v1
+metadata:
+ name: {{ template "nextcloud.fullname" . }}-nextcloud
+spec:
+ accessModes:
+ - {{ .Values.persistence.accessMode | quote }}
+ resources:
+ requests:
+ storage: {{ .Values.persistence.size | quote }}
+{{- if .Values.persistence.storageClass }}
+{{- if (eq "-" .Values.persistence.storageClass) }}
+ storageClassName: ""
+{{- else }}
+ storageClassName: "{{ .Values.persistence.storageClass }}"
+{{- end }}
+{{- end }}
+{{- end -}}
+{{- end -}}
diff --git a/stable/nextcloud/templates/secrets.yaml b/stable/nextcloud/templates/secrets.yaml
new file mode 100644
index 000000000000..b24aa69966a2
--- /dev/null
+++ b/stable/nextcloud/templates/secrets.yaml
@@ -0,0 +1,18 @@
+apiVersion: v1
+kind: Secret
+metadata:
+ name: {{ template "nextcloud.fullname" . }}
+ labels:
+ app.kubernetes.io/name: {{ include "nextcloud.name" . }}
+ helm.sh/chart: {{ include "nextcloud.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+type: Opaque
+data:
+ nextcloud-username: {{ .Values.nextcloud.username | b64enc | quote }}
+ {{- if .Values.nextcloud.password }}
+ nextcloud-password: {{ .Values.nextcloud.password | b64enc | quote }}
+ {{- else }}
+ nextcloud-password: {{ randAlphaNum 10 | b64enc | quote }}
+ {{- end }}
+
\ No newline at end of file
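The secret's two branches can be summarized outside of Go templates: `b64enc` is plain base64 of the UTF-8 string, and when no password is configured a 10-character alphanumeric one is generated (worth noting that this fallback is re-evaluated on every render, so an unset password can change across upgrades). A Python sketch of the same logic, for illustration only:

```python
import base64
import secrets
import string

def secret_data(username, password=None):
    """Emulate the template's data block: base64-encode the username and
    either the configured password or a random 10-char alphanumeric one."""
    if password is None:  # mirrors: {{ if .Values.nextcloud.password }} ... {{ else }}
        alphabet = string.ascii_letters + string.digits
        password = "".join(secrets.choice(alphabet) for _ in range(10))
    def b64(s):
        return base64.b64encode(s.encode()).decode()
    return {"nextcloud-username": b64(username), "nextcloud-password": b64(password)}

print(secret_data("admin", "changeme"))
# {'nextcloud-username': 'YWRtaW4=', 'nextcloud-password': 'Y2hhbmdlbWU='}
```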
diff --git a/stable/nextcloud/templates/service.yaml b/stable/nextcloud/templates/service.yaml
new file mode 100644
index 000000000000..290098051a4c
--- /dev/null
+++ b/stable/nextcloud/templates/service.yaml
@@ -0,0 +1,21 @@
+apiVersion: v1
+kind: Service
+metadata:
+ name: {{ template "nextcloud.fullname" . }}
+ labels:
+ app.kubernetes.io/name: {{ include "nextcloud.name" . }}
+ helm.sh/chart: {{ include "nextcloud.chart" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+spec:
+ type: {{ .Values.service.type }}
+ {{- if eq .Values.service.type "LoadBalancer" }}
+ loadBalancerIP: {{ default "" .Values.service.loadBalancerIP }}
+ {{- end }}
+ ports:
+ - port: {{ .Values.service.port }}
+ targetPort: http
+ protocol: TCP
+ name: http
+ selector:
+ app.kubernetes.io/name: {{ include "nextcloud.name" . }}
diff --git a/stable/nextcloud/values-mariadb.yaml b/stable/nextcloud/values-mariadb.yaml
new file mode 100644
index 000000000000..cc769937eb84
--- /dev/null
+++ b/stable/nextcloud/values-mariadb.yaml
@@ -0,0 +1,5 @@
+internalDatabase:
+ enabled: false
+
+mariadb:
+ enabled: true
\ No newline at end of file
diff --git a/stable/nextcloud/values.yaml b/stable/nextcloud/values.yaml
new file mode 100644
index 000000000000..11862bb3471f
--- /dev/null
+++ b/stable/nextcloud/values.yaml
@@ -0,0 +1,160 @@
+## Official nextcloud image version
+## ref: https://hub.docker.com/r/library/nextcloud/tags/
+##
+image:
+ repository: nextcloud
+ tag: 15.0.2-apache
+ pullPolicy: IfNotPresent
+ # pullSecrets:
+ # - myRegistryKeySecretName
+
+nameOverride: ""
+fullnameOverride: ""
+
+# Number of replicas to be deployed
+replicaCount: 1
+
+## Allowing use of ingress controllers
+## ref: https://kubernetes.io/docs/concepts/services-networking/ingress/
+##
+ingress:
+ enabled: false
+ annotations: {}
+ # nginx.ingress.kubernetes.io/proxy-body-size: 4G
+ # kubernetes.io/tls-acme: "true"
+ # certmanager.k8s.io/cluster-issuer: letsencrypt-prod
+ # nginx.ingress.kubernetes.io/server-snippet: |-
+ # add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;";
+ # add_header X-Robots-Tag none;
+ # add_header X-Download-Options noopen;
+ # add_header X-Permitted-Cross-Domain-Policies none;
+ # add_header X-Content-Type-Options nosniff;
+ # add_header X-XSS-Protection "1; mode=block";
+ # add_header Referrer-Policy no-referrer;
+ # rewrite ^/.well-known/webfinger /public.php?service=webfinger last;
+ # rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
+ # rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json;
+ # location = /.well-known/carddav {
+ # return 301 $scheme://$host/remote.php/dav;
+ # }
+ # location = /.well-known/caldav {
+ # return 301 $scheme://$host/remote.php/dav;
+ # }
+ # location = /robots.txt {
+ # allow all;
+ # log_not_found off;
+ # access_log off;
+ # }
+ # location ~ \.(?:png|html|ttf|ico|jpg|jpeg)$ {
+ # try_files $uri /index.php$request_uri;
+ # # Optional: Don't log access to other assets
+ # access_log off;
+ # }
+ # location / {
+ # rewrite ^ /index.php$request_uri;
+ # }
+ # location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)/ {
+ # deny all;
+ # }
+ # location ~ ^/(?:\.|autotest|occ|issue|indie|db_|console) {
+ # deny all;
+ # }
+ # tls:
+ # - secretName: nextcloud-tls
+ # hosts:
+ # - nextcloud.kube.home
+
+nextcloud:
+ host: nextcloud.kube.home
+ username: admin
+ password: changeme
+
+
+internalDatabase:
+ enabled: true
+ name: nextcloud
+
+
+##
+## External database configuration
+##
+externalDatabase:
+ enabled: false
+
+ ## Database host
+ host:
+
+ ## Database user
+ user: nextcloud
+
+ ## Database password
+ password:
+
+ ## Database name
+ database: nextcloud
+
+##
+## MariaDB chart configuration
+##
+mariadb:
+ ## Whether to deploy a MariaDB server to satisfy the application's database requirements. To use an external database, set this to false and configure the externalDatabase parameters.
+ enabled: false
+
+ db:
+ name: nextcloud
+ user: nextcloud
+ password: changeme
+
+ ## Enable persistence using Persistent Volume Claims
+ ## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
+ ##
+ persistence:
+ enabled: false
+ accessMode: ReadWriteOnce
+ size: 8Gi
+
+service:
+ type: ClusterIP
+ port: 8080
+ loadBalancerIP: ""
+
+
+## Enable persistence using Persistent Volume Claims
+## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
+##
+persistence:
+ enabled: false
+ ## nextcloud data Persistent Volume Storage Class
+ ## If defined, storageClassName:
+ ## If set to "-", storageClassName: "", which disables dynamic provisioning
+ ## If undefined (the default) or set to null, no storageClassName spec is
+ ## set, choosing the default provisioner. (gp2 on AWS, standard on
+ ## GKE, AWS & OpenStack)
+ ##
+ # storageClass: "-"
+
+ ## A manually managed Persistent Volume and Claim
+ ## Requires persistence.enabled: true
+ ## If defined, PVC must be created manually before volume will be bound
+ # existingClaim:
+
+ accessMode: ReadWriteOnce
+ size: 8Gi
+
+resources: {}
+ # We usually recommend not to specify default resources and to leave this as a conscious
+ # choice for the user. This also increases chances charts run on environments with little
+ # resources, such as Minikube. If you do want to specify resources, uncomment the following
+ # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
+ # limits:
+ # cpu: 100m
+ # memory: 128Mi
+ # requests:
+ # cpu: 100m
+ # memory: 128Mi
+
+nodeSelector: {}
+
+tolerations: []
+
+affinity: {}
diff --git a/stable/nfs-client-provisioner/Chart.yaml b/stable/nfs-client-provisioner/Chart.yaml
index 6259d88e613b..5214471e0392 100644
--- a/stable/nfs-client-provisioner/Chart.yaml
+++ b/stable/nfs-client-provisioner/Chart.yaml
@@ -3,7 +3,7 @@ appVersion: 3.1.0
description: nfs-client is an automatic provisioner that uses your *already configured* NFS server, automatically creating Persistent Volumes.
name: nfs-client-provisioner
home: https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client
-version: 1.2.3
+version: 1.2.5
sources:
- https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client
maintainers:
diff --git a/stable/nfs-server-provisioner/Chart.yaml b/stable/nfs-server-provisioner/Chart.yaml
index 5feb6dc03a30..276dc108521c 100644
--- a/stable/nfs-server-provisioner/Chart.yaml
+++ b/stable/nfs-server-provisioner/Chart.yaml
@@ -1,11 +1,13 @@
apiVersion: v1
-appVersion: 1.0.8
+appVersion: 2.2.1-k8s1.12
description: nfs-server-provisioner is an out-of-tree dynamic provisioner for Kubernetes. You can use it to quickly & easily deploy shared storage that works almost anywhere.
name: nfs-server-provisioner
-version: 0.2.1
+version: 0.3.0
maintainers:
- name: kiall
email: kiall@macinnes.ie
+- name: joaocc
+ email: joaocc-dev@live.com
home: https://github.com/kubernetes/charts/tree/master/stable/nfs-server-provisioner
sources:
- https://github.com/kubernetes-incubator/external-storage/tree/master/nfs
diff --git a/stable/nfs-server-provisioner/README.md b/stable/nfs-server-provisioner/README.md
index 664fefe2d070..d292fab475f3 100644
--- a/stable/nfs-server-provisioner/README.md
+++ b/stable/nfs-server-provisioner/README.md
@@ -73,7 +73,9 @@ their default values.
| `storageClass.provisionerName` | The provisioner name for the storageclass | `cluster.local/{release-name}-{chart-name}` |
| `storageClass.defaultClass` | Whether to set the created StorageClass as the clusters default StorageClass | `false` |
| `storageClass.name` | The name to assign the created StorageClass | `nfs` |
-| `storageClass.parameters` | Parameters for StorageClass | `mountOptions: vers=4.1` |
+| `storageClass.allowVolumeExpansion` | Allow the base storage PVC to be dynamically resizable (set to null to disable) | `true` |
+| `storageClass.parameters` | Parameters for StorageClass | `{}` |
+| `storageClass.mountOptions` | Mount options for StorageClass | `[ "vers=4.1", "noatime" ]` |
| `storageClass.reclaimPolicy` | ReclaimPolicy field of the class, which can be either Delete or Retain | `Delete` |
| `resources` | Resource limits for nfs-server-provisioner pod | `{}` |
| `nodeSelector` | Map of node labels for pod assignment | `{}` |
diff --git a/stable/nfs-server-provisioner/templates/clusterrole.yaml b/stable/nfs-server-provisioner/templates/clusterrole.yaml
index fadcf23eca96..5affc4bab07a 100644
--- a/stable/nfs-server-provisioner/templates/clusterrole.yaml
+++ b/stable/nfs-server-provisioner/templates/clusterrole.yaml
@@ -28,4 +28,7 @@ rules:
resources: ["podsecuritypolicies"]
resourceNames: ["nfs-provisioner"]
verbs: ["use"]
+ - apiGroups: [""]
+ resources: ["endpoints"]
+ verbs: ["get", "list", "watch", "create", "delete", "update", "patch"]
{{- end -}}
diff --git a/stable/nfs-server-provisioner/templates/storageclass.yaml b/stable/nfs-server-provisioner/templates/storageclass.yaml
index 9fb51fa77306..093e650e5a2a 100644
--- a/stable/nfs-server-provisioner/templates/storageclass.yaml
+++ b/stable/nfs-server-provisioner/templates/storageclass.yaml
@@ -14,8 +14,15 @@ metadata:
{{- end }}
provisioner: {{ template "nfs-provisioner.provisionerName" . }}
reclaimPolicy: {{ .Values.storageClass.reclaimPolicy }}
+{{ if .Values.storageClass.allowVolumeExpansion }}
+allowVolumeExpansion: {{ .Values.storageClass.allowVolumeExpansion }}
+{{ end }}
{{ end -}}
{{- with .Values.storageClass.parameters }}
parameters:
{{ toYaml . | indent 2 }}
{{- end }}
+{{- with .Values.storageClass.mountOptions }}
+mountOptions:
+{{ toYaml . | indent 2 }}
+{{- end }}
diff --git a/stable/nfs-server-provisioner/values.yaml b/stable/nfs-server-provisioner/values.yaml
index 86d5ed12d52e..87808d323fb4 100644
--- a/stable/nfs-server-provisioner/values.yaml
+++ b/stable/nfs-server-provisioner/values.yaml
@@ -8,7 +8,7 @@ replicaCount: 1
image:
repository: quay.io/kubernetes_incubator/nfs-provisioner
- tag: v1.0.9
+ tag: v2.2.1-k8s1.12
pullPolicy: IfNotPresent
service:
@@ -53,9 +53,14 @@ storageClass:
## Ignored if storageClass.create is false
name: nfs
+ # set to null to prevent expansion
+ allowVolumeExpansion: true
## StorageClass parameters
- parameters:
- mountOptions: vers=4.1
+ parameters: {}
+
+ mountOptions:
+ - vers=4.1
+ - noatime
## ReclaimPolicy field of the class, which can be either Delete or Retain
reclaimPolicy: Delete
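This version moves mount options out of `storageClass.parameters` into a top-level `mountOptions` list and adds an optional `allowVolumeExpansion` flag that is only rendered when truthy. A sketch of the rendered extra fields (illustration, not chart code):

```python
def storage_class_extras(allow_volume_expansion=True,
                         mount_options=("vers=4.1", "noatime")):
    """Emulate the updated storageclass.yaml tail: allowVolumeExpansion is
    emitted only when truthy (set it to null to drop it), and mountOptions
    is now a proper StorageClass field rather than a parameters entry."""
    lines = []
    if allow_volume_expansion:
        lines.append("allowVolumeExpansion: true")
    if mount_options:
        lines.append("mountOptions:")
        lines.extend(f"  - {opt}" for opt in mount_options)
    return lines

print("\n".join(storage_class_extras()))
```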
diff --git a/stable/nginx-ingress/Chart.yaml b/stable/nginx-ingress/Chart.yaml
index 187f6889e92d..15d4abf99218 100644
--- a/stable/nginx-ingress/Chart.yaml
+++ b/stable/nginx-ingress/Chart.yaml
@@ -1,6 +1,7 @@
+apiVersion: v1
name: nginx-ingress
-version: 1.2.2
-appVersion: 0.22.0
+version: 1.6.13
+appVersion: 0.24.1
home: https://github.com/kubernetes/ingress-nginx
description: An nginx Ingress controller that uses ConfigMap to store the nginx configuration.
icon: https://upload.wikimedia.org/wikipedia/commons/thumb/c/c5/Nginx_logo.svg/500px-Nginx_logo.svg.png
@@ -14,4 +15,5 @@ maintainers:
email: jack.zampolin@gmail.com
- name: mgoodness
email: mgoodness@gmail.com
+ - name: ChiefAlexander
engine: gotpl
diff --git a/stable/nginx-ingress/OWNERS b/stable/nginx-ingress/OWNERS
index cbf470d11dc5..5a4106992367 100644
--- a/stable/nginx-ingress/OWNERS
+++ b/stable/nginx-ingress/OWNERS
@@ -1,6 +1,8 @@
approvers:
- jackzampolin
- mgoodness
+- ChiefAlexander
reviewers:
- jackzampolin
- mgoodness
+- ChiefAlexander
diff --git a/stable/nginx-ingress/README.md b/stable/nginx-ingress/README.md
index f7c763b57270..50959daa1da2 100644
--- a/stable/nginx-ingress/README.md
+++ b/stable/nginx-ingress/README.md
@@ -47,10 +47,12 @@ Parameter | Description | Default
--- | --- | ---
`controller.name` | name of the controller component | `controller`
`controller.image.repository` | controller container image repository | `quay.io/kubernetes-ingress-controller/nginx-ingress-controller`
-`controller.image.tag` | controller container image tag | `0.22.0`
+`controller.image.tag` | controller container image tag | `0.24.1`
`controller.image.pullPolicy` | controller container image pull policy | `IfNotPresent`
`controller.image.runAsUser` | User ID of the controller process. The value depends on the Linux distribution used inside the container image; by default it is the Debian one. | `33`
-`controller.config` | nginx ConfigMap entries | none
+`controller.containerPort.http` | The port that the controller container listens on for http connections. | `80`
+`controller.containerPort.https` | The port that the controller container listens on for https connections. | `443`
+`controller.config` | nginx [ConfigMap](https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/configmap.md) entries | none
`controller.hostNetwork` | If the nginx deployment / daemonset should run on the host's network namespace. Do not set this when `controller.service.externalIPs` is set and `kube-proxy` is used as there will be a port-conflict for port `80` | false
`controller.defaultBackendService` | default 404 backend service; required only if `defaultBackend.enabled = false` | `""`
`controller.dnsPolicy` | If using `hostNetwork=true`, change to `ClusterFirstWithHostNet`. See [pod's dns policy](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy) for details | `ClusterFirst`
@@ -68,12 +70,14 @@ Parameter | Description | Default
`controller.daemonset.useHostPort` | If `controller.kind` is `DaemonSet`, this will enable `hostPort` for TCP/80 and TCP/443 | false
`controller.daemonset.hostPorts.http` | If `controller.daemonset.useHostPort` is `true` and this is non-empty, it sets the hostPort | `"80"`
`controller.daemonset.hostPorts.https` | If `controller.daemonset.useHostPort` is `true` and this is non-empty, it sets the hostPort | `"443"`
+`controller.daemonset.hostPorts.stats` | If `controller.daemonset.useHostPort` is `true` and this is non-empty, it sets the hostPort | `"18080"`
`controller.tolerations` | node taints to tolerate (requires Kubernetes >=1.6) | `[]`
`controller.affinity` | node/pod affinities (requires Kubernetes >=1.6) | `{}`
`controller.minReadySeconds` | how many seconds a pod needs to be ready before killing the next, during update | `0`
`controller.nodeSelector` | node labels for pod assignment | `{}`
`controller.podAnnotations` | annotations to be added to pods | `{}`
`controller.podLabels` | labels to add to the pod container metadata | `{}`
+`controller.podSecurityContext` | Security context policies to add to the controller pod | `{}`
`controller.replicaCount` | desired number of controller pods | `1`
`controller.minAvailable` | minimum number of available controller pods for PodDisruptionBudget | `1`
`controller.resources` | controller pod resource requests & limits | `{}`
@@ -84,6 +88,7 @@ Parameter | Description | Default
`controller.publishService.enabled` | if true, the controller will set the endpoint records on the ingress objects to reflect those on the service | `false`
`controller.publishService.pathOverride` | override of the default publish-service name | `""`
`controller.service.clusterIP` | internal controller cluster service IP | `""`
+`controller.service.omitClusterIP` | To omit the `clusterIP` from the controller service | `false`
`controller.service.externalIPs` | controller service external IP addresses. Do not set this when `controller.hostNetwork` is set to `true` and `kube-proxy` is used as there will be a port-conflict for port `80` | `[]`
`controller.service.externalTrafficPolicy` | If `controller.service.type` is `NodePort` or `LoadBalancer`, set this to `Local` to enable [source IP preservation](https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-typenodeport) | `"Cluster"`
`controller.service.healthCheckNodePort` | If `controller.service.type` is `NodePort` or `LoadBalancer` and `controller.service.externalTrafficPolicy` is set to `Local`, set this to [the managed health-check port the kube-proxy will expose](https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-typenodeport). If blank, a random port in the `NodePort` range will be assigned | `""`
@@ -94,8 +99,8 @@ Parameter | Description | Default
`controller.service.targetPorts.http` | Sets the targetPort that maps to the Ingress' port 80 | `80`
`controller.service.targetPorts.https` | Sets the targetPort that maps to the Ingress' port 443 | `443`
`controller.service.type` | type of controller service to create | `LoadBalancer`
-`controller.service.nodePorts.http` | If `controller.service.type` is `NodePort` and this is non-empty, it sets the nodePort that maps to the Ingress' port 80 | `""`
-`controller.service.nodePorts.https` | If `controller.service.type` is `NodePort` and this is non-empty, it sets the nodePort that maps to the Ingress' port 443 | `""`
+`controller.service.nodePorts.http` | If `controller.service.type` is either `NodePort` or `LoadBalancer` and this is non-empty, it sets the nodePort that maps to the Ingress' port 80 | `""`
+`controller.service.nodePorts.https` | If `controller.service.type` is either `NodePort` or `LoadBalancer` and this is non-empty, it sets the nodePort that maps to the Ingress' port 443 | `""`
`controller.livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | 10
`controller.livenessProbe.periodSeconds` | How often to perform the probe | 10
`controller.livenessProbe.timeoutSeconds` | When the probe times out | 5
@@ -111,6 +116,7 @@ Parameter | Description | Default
`controller.stats.enabled` | if `true`, enable "vts-status" page | `false`
`controller.stats.service.annotations` | annotations for controller stats service | `{}`
`controller.stats.service.clusterIP` | internal controller stats cluster service IP | `""`
+`controller.stats.service.omitClusterIP` | To omit the `clusterIP` from the stats service | `false`
`controller.stats.service.externalIPs` | controller service stats external IP addresses | `[]`
`controller.stats.service.loadBalancerIP` | IP address to assign to load balancer (if supported) | `""`
`controller.stats.service.loadBalancerSourceRanges` | list of IP CIDRs allowed access to load balancer (if supported) | `[]`
@@ -118,6 +124,7 @@ Parameter | Description | Default
`controller.metrics.enabled` | if `true`, enable Prometheus metrics (`controller.stats.enabled` must be `true` as well) | `false`
`controller.metrics.service.annotations` | annotations for Prometheus metrics service | `{}`
`controller.metrics.service.clusterIP` | cluster IP address to assign to service | `""`
+`controller.metrics.service.omitClusterIP` | To omit the `clusterIP` from the metrics service | `false`
`controller.metrics.service.externalIPs` | Prometheus metrics service external IP addresses | `[]`
`controller.metrics.service.labels` | labels for metrics service | `{}`
`controller.metrics.service.loadBalancerIP` | IP address to assign to load balancer (if supported) | `""`
@@ -126,14 +133,16 @@ Parameter | Description | Default
`controller.metrics.service.type` | type of Prometheus metrics service to create | `ClusterIP`
`controller.metrics.serviceMonitor.enabled` | Set this to `true` to create ServiceMonitor for Prometheus operator | `false`
`controller.metrics.serviceMonitor.additionalLabels` | Additional labels that can be used so ServiceMonitor will be discovered by Prometheus | `{}`
+`controller.metrics.serviceMonitor.namespace` | namespace where servicemonitor resource should be created | `the same namespace as nginx ingress`
+`controller.metrics.serviceMonitor.honorLabels` | honorLabels chooses the metric's labels on collisions with target labels. | `false`
`controller.customTemplate.configMapName` | configMap containing a custom nginx template | `""`
`controller.customTemplate.configMapKey` | configMap key containing the nginx template | `""`
`controller.headers` | configMap key:value pairs containing the [custom headers](https://github.com/kubernetes/ingress-nginx/tree/master/docs/examples/customization/custom-headers) for Nginx | `{}`
`controller.updateStrategy` | allows setting of RollingUpdate strategy | `{}`
`defaultBackend.enabled` | If false, controller.defaultBackendService must be provided | `true`
`defaultBackend.name` | name of the default backend component | `default-backend`
-`defaultBackend.image.repository` | default backend container image repository | `k8s.gcr.io/defaultbackend`
-`defaultBackend.image.tag` | default backend container image tag | `1.4`
+`defaultBackend.image.repository` | default backend container image repository | `k8s.gcr.io/defaultbackend-amd64`
+`defaultBackend.image.tag` | default backend container image tag | `1.5`
`defaultBackend.image.pullPolicy` | default backend container image pull policy | `IfNotPresent`
`defaultBackend.extraArgs` | Additional default backend container arguments | `{}`
`defaultBackend.port` | Http port number | `8080`
@@ -148,6 +157,7 @@ Parameter | Description | Default
`defaultBackend.priorityClassName` | default backend priorityClassName | `nil`
`defaultBackend.service.annotations` | annotations for default backend service | `{}`
`defaultBackend.service.clusterIP` | internal default backend cluster service IP | `""`
+`defaultBackend.service.omitClusterIP` | To omit the `clusterIP` from the default backend service | `false`
`defaultBackend.service.externalIPs` | default backend service external IP addresses | `[]`
`defaultBackend.service.loadBalancerIP` | IP address to assign to load balancer (if supported) | `""`
`defaultBackend.service.loadBalancerSourceRanges` | list of IP CIDRs allowed access to load balancer (if supported) | `[]`
@@ -158,8 +168,8 @@ Parameter | Description | Default
`serviceAccount.create` | if `true`, create a service account | ``
`serviceAccount.name` | The name of the service account to use. If not set and `create` is `true`, a name is generated using the fullname template. | ``
`revisionHistoryLimit` | The number of old revisions to retain to allow rollback. | `10`
-`tcp` | TCP service key:value pairs | `{}`
-`udp` | UDP service key:value pairs | `{}`
+`tcp` | TCP service key:value pairs. The value is evaluated as a template. | `{}`
+`udp` | UDP service key:value pairs. The value is evaluated as a template. | `{}`
```console
$ helm install stable/nginx-ingress --name my-release \
@@ -201,8 +211,10 @@ You can add Prometheus annotations to the metrics service using `controller.metr
Add an [ExternalDNS](https://github.com/kubernetes-incubator/external-dns) annotation to the LoadBalancer service:
```yaml
-annotations:
- external-dns.alpha.kubernetes.io/hostname: kubernetes-example.com.
+controller:
+ service:
+ annotations:
+ external-dns.alpha.kubernetes.io/hostname: kubernetes-example.com.
```
## AWS L7 ELB with SSL Termination
@@ -234,3 +246,13 @@ controller:
annotations:
domainName: "kubernetes-example.com"
```
+
+## Helm error when upgrading: spec.clusterIP: Invalid value: ""
+
+If you are upgrading this chart from a version between 0.31.0 and 1.2.2 then you may get an error like this:
+
+```
+Error: UPGRADE FAILED: Service "?????-controller" is invalid: spec.clusterIP: Invalid value: "": field is immutable
+```
+
+Details of how and why are in [this issue](https://github.com/helm/charts/pull/13646), but to resolve it you can set `xxxx.service.omitClusterIP` to `true`, where `xxxx` is the service referenced in the error.
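The root cause is that a Service's `clusterIP` is immutable once allocated, so re-submitting `clusterIP: ""` against a Service that already has an IP is rejected. The new `omitClusterIP` flags drop the field from the manifest entirely. The template branch, sketched in Python for illustration only:

```python
def cluster_ip_field(cluster_ip="", omit_cluster_ip=False):
    """Emulate: {{- if not ...omitClusterIP }} clusterIP: "..." {{- end }}.
    With omitClusterIP the field is left out, letting the API server keep
    the Service's already-allocated IP; otherwise the configured value
    (possibly the empty string) is submitted verbatim."""
    if omit_cluster_ip:
        return None  # field omitted from the rendered manifest
    return f'clusterIP: "{cluster_ip}"'

print(cluster_ip_field())                      # clusterIP: ""
print(cluster_ip_field(omit_cluster_ip=True))  # None
```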
diff --git a/stable/nginx-ingress/templates/controller-daemonset.yaml b/stable/nginx-ingress/templates/controller-daemonset.yaml
index f4c0223f0a51..6db675c37dd9 100644
--- a/stable/nginx-ingress/templates/controller-daemonset.yaml
+++ b/stable/nginx-ingress/templates/controller-daemonset.yaml
@@ -16,10 +16,12 @@ spec:
minReadySeconds: {{ .Values.controller.minReadySeconds }}
template:
metadata:
+ {{- if .Values.controller.podAnnotations }}
annotations:
{{- range $key, $value := .Values.controller.podAnnotations }}
{{ $key }}: {{ $value | quote }}
{{- end }}
+ {{- end }}
labels:
app: {{ template "nginx-ingress.name" . }}
component: "{{ .Values.controller.name }}"
@@ -36,6 +38,10 @@ spec:
{{- if .Values.controller.priorityClassName }}
priorityClassName: "{{ .Values.controller.priorityClassName }}"
{{- end }}
+ {{- if .Values.controller.podSecurityContext }}
+ securityContext:
+{{ toYaml .Values.controller.podSecurityContext | indent 8 }}
+ {{- end }}
containers:
- name: {{ template "nginx-ingress.name" . }}-{{ .Values.controller.name }}
image: "{{ .Values.controller.image.repository }}:{{ .Values.controller.image.tag }}"
@@ -110,13 +116,13 @@ spec:
failureThreshold: {{ .Values.controller.livenessProbe.failureThreshold }}
ports:
- name: http
- containerPort: 80
+ containerPort: {{ .Values.controller.containerPort.http }}
protocol: TCP
{{- if .Values.controller.daemonset.useHostPort }}
hostPort: {{ .Values.controller.daemonset.hostPorts.http }}
{{- end }}
- name: https
- containerPort: 443
+ containerPort: {{ .Values.controller.containerPort.https }}
protocol: TCP
{{- if .Values.controller.daemonset.useHostPort }}
hostPort: {{ .Values.controller.daemonset.hostPorts.https }}
@@ -125,6 +131,9 @@ spec:
- name: stats
containerPort: 18080
protocol: TCP
+ {{- if .Values.controller.daemonset.useHostPort }}
+ hostPort: {{ .Values.controller.daemonset.hostPorts.stats }}
+ {{- end }}
{{- if .Values.controller.metrics.enabled }}
- name: metrics
containerPort: 10254
diff --git a/stable/nginx-ingress/templates/controller-deployment.yaml b/stable/nginx-ingress/templates/controller-deployment.yaml
index 7d78507d33a2..784824a84e55 100644
--- a/stable/nginx-ingress/templates/controller-deployment.yaml
+++ b/stable/nginx-ingress/templates/controller-deployment.yaml
@@ -39,6 +39,10 @@ spec:
{{- if .Values.controller.priorityClassName }}
priorityClassName: "{{ .Values.controller.priorityClassName }}"
{{- end }}
+ {{- if .Values.controller.podSecurityContext }}
+ securityContext:
+{{ toYaml .Values.controller.podSecurityContext | indent 8 }}
+ {{- end }}
containers:
- name: {{ template "nginx-ingress.name" . }}-{{ .Values.controller.name }}
image: "{{ .Values.controller.image.repository }}:{{ .Values.controller.image.tag }}"
@@ -113,10 +117,10 @@ spec:
failureThreshold: {{ .Values.controller.livenessProbe.failureThreshold }}
ports:
- name: http
- containerPort: 80
+ containerPort: {{ .Values.controller.containerPort.http }}
protocol: TCP
- name: https
- containerPort: 443
+ containerPort: {{ .Values.controller.containerPort.https }}
protocol: TCP
{{- if .Values.controller.stats.enabled }}
- name: stats
diff --git a/stable/nginx-ingress/templates/controller-metrics-service.yaml b/stable/nginx-ingress/templates/controller-metrics-service.yaml
index bfff958f6e99..c502dce20961 100644
--- a/stable/nginx-ingress/templates/controller-metrics-service.yaml
+++ b/stable/nginx-ingress/templates/controller-metrics-service.yaml
@@ -19,7 +19,7 @@ metadata:
release: {{ .Release.Name }}
name: {{ template "nginx-ingress.controller.fullname" . }}-metrics
spec:
-{{- if .Values.controller.metrics.service.clusterIP }}
+{{- if not .Values.controller.metrics.service.omitClusterIP }}
clusterIP: "{{ .Values.controller.metrics.service.clusterIP }}"
{{- end }}
{{- if .Values.controller.metrics.service.externalIPs }}
diff --git a/stable/nginx-ingress/templates/controller-service.yaml b/stable/nginx-ingress/templates/controller-service.yaml
index 6a0979c4f832..27a28344758a 100644
--- a/stable/nginx-ingress/templates/controller-service.yaml
+++ b/stable/nginx-ingress/templates/controller-service.yaml
@@ -18,7 +18,7 @@ metadata:
release: {{ .Release.Name }}
name: {{ template "nginx-ingress.controller.fullname" . }}
spec:
-{{- if .Values.controller.service.clusterIP }}
+{{- if not .Values.controller.service.omitClusterIP }}
clusterIP: "{{ .Values.controller.service.clusterIP }}"
{{- end }}
{{- if .Values.controller.service.externalIPs }}
@@ -39,12 +39,13 @@ spec:
healthCheckNodePort: {{ .Values.controller.service.healthCheckNodePort }}
{{- end }}
ports:
+ {{- $setNodePorts := (or (eq .Values.controller.service.type "NodePort") (eq .Values.controller.service.type "LoadBalancer")) }}
{{- if .Values.controller.service.enableHttp }}
- name: http
port: 80
protocol: TCP
targetPort: {{ .Values.controller.service.targetPorts.http }}
- {{- if (and (eq .Values.controller.service.type "NodePort") (not (empty .Values.controller.service.nodePorts.http))) }}
+ {{- if (and $setNodePorts (not (empty .Values.controller.service.nodePorts.http))) }}
nodePort: {{ .Values.controller.service.nodePorts.http }}
{{- end }}
{{- end }}
@@ -53,7 +54,7 @@ spec:
port: 443
protocol: TCP
targetPort: {{ .Values.controller.service.targetPorts.https }}
- {{- if (and (eq .Values.controller.service.type "NodePort") (not (empty .Values.controller.service.nodePorts.https))) }}
+ {{- if (and $setNodePorts (not (empty .Values.controller.service.nodePorts.https))) }}
nodePort: {{ .Values.controller.service.nodePorts.https }}
{{- end }}
{{- end }}
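The `$setNodePorts` helper above widens the old NodePort-only check: explicit `nodePorts.http`/`nodePorts.https` values are now honored for `LoadBalancer` services too. The condition, restated in Python for illustration:

```python
def node_port_field(service_type, node_port=""):
    """Emulate: and $setNodePorts (not (empty .nodePorts.xxx)) -- a nodePort
    is pinned only for NodePort or LoadBalancer services with a non-empty value."""
    if service_type in ("NodePort", "LoadBalancer") and node_port:
        return f"nodePort: {node_port}"
    return None

print(node_port_field("LoadBalancer", 30080))  # nodePort: 30080
print(node_port_field("ClusterIP", 30080))     # None
```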
diff --git a/stable/nginx-ingress/templates/controller-servicemonitor.yaml b/stable/nginx-ingress/templates/controller-servicemonitor.yaml
index e68c933b2308..51611e79d669 100644
--- a/stable/nginx-ingress/templates/controller-servicemonitor.yaml
+++ b/stable/nginx-ingress/templates/controller-servicemonitor.yaml
@@ -3,6 +3,9 @@ apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: {{ template "nginx-ingress.controller.fullname" . }}
+ {{- if .Values.controller.metrics.serviceMonitor.namespace }}
+ namespace: {{ .Values.controller.metrics.serviceMonitor.namespace }}
+ {{- end }}
labels:
app: {{ template "nginx-ingress.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
@@ -16,6 +19,9 @@ spec:
endpoints:
- port: metrics
interval: 30s
+ {{- if .Values.controller.metrics.serviceMonitor.honorLabels }}
+ honorLabels: true
+ {{- end }}
namespaceSelector:
matchNames:
- {{ .Release.Namespace }}
diff --git a/stable/nginx-ingress/templates/controller-stats-service.yaml b/stable/nginx-ingress/templates/controller-stats-service.yaml
index b162859939c3..fb99f5c59c5c 100644
--- a/stable/nginx-ingress/templates/controller-stats-service.yaml
+++ b/stable/nginx-ingress/templates/controller-stats-service.yaml
@@ -16,7 +16,7 @@ metadata:
release: {{ .Release.Name }}
name: {{ template "nginx-ingress.controller.fullname" . }}-stats
spec:
-{{- if .Values.controller.stats.service.clusterIP }}
+{{- if not .Values.controller.stats.service.omitClusterIP }}
clusterIP: "{{ .Values.controller.stats.service.clusterIP }}"
{{- end }}
{{- if .Values.controller.stats.service.externalIPs }}
diff --git a/stable/nginx-ingress/templates/default-backend-deployment.yaml b/stable/nginx-ingress/templates/default-backend-deployment.yaml
index 93ea613c0800..ba7787b322d9 100644
--- a/stable/nginx-ingress/templates/default-backend-deployment.yaml
+++ b/stable/nginx-ingress/templates/default-backend-deployment.yaml
@@ -60,6 +60,7 @@ spec:
protocol: TCP
resources:
{{ toYaml .Values.defaultBackend.resources | indent 12 }}
+ serviceAccountName: {{ template "nginx-ingress.serviceAccountName" . }}
{{- if .Values.defaultBackend.nodeSelector }}
nodeSelector:
{{ toYaml .Values.defaultBackend.nodeSelector | indent 8 }}
diff --git a/stable/nginx-ingress/templates/default-backend-service.yaml b/stable/nginx-ingress/templates/default-backend-service.yaml
index 36a355e35c3f..150a0d8cb5fc 100644
--- a/stable/nginx-ingress/templates/default-backend-service.yaml
+++ b/stable/nginx-ingress/templates/default-backend-service.yaml
@@ -16,7 +16,7 @@ metadata:
release: {{ .Release.Name }}
name: {{ template "nginx-ingress.defaultBackend.fullname" . }}
spec:
-{{- if .Values.defaultBackend.service.clusterIP }}
+{{- if not .Values.defaultBackend.service.omitClusterIP }}
clusterIP: "{{ .Values.defaultBackend.service.clusterIP }}"
{{- end }}
{{- if .Values.defaultBackend.service.externalIPs }}
diff --git a/stable/nginx-ingress/templates/scoped-clusterrole.yaml b/stable/nginx-ingress/templates/scoped-clusterrole.yaml
deleted file mode 100644
index e3766338834c..000000000000
--- a/stable/nginx-ingress/templates/scoped-clusterrole.yaml
+++ /dev/null
@@ -1,20 +0,0 @@
-{{- if and .Values.rbac.create .Values.controller.scope.enabled -}}
-apiVersion: rbac.authorization.k8s.io/v1beta1
-kind: ClusterRole
-metadata:
- labels:
- app: {{ template "nginx-ingress.name" . }}
- chart: {{ .Chart.Name }}-{{ .Chart.Version }}
- heritage: {{ .Release.Service }}
- release: {{ .Release.Name }}
- name: {{ template "nginx-ingress.fullname" . }}-scoped
-rules:
- - apiGroups:
- - ""
- resources:
- - nodes
- verbs:
- - get
- - list
- - watch
-{{- end -}}
diff --git a/stable/nginx-ingress/templates/scoped-clusterrolebinding.yaml b/stable/nginx-ingress/templates/scoped-clusterrolebinding.yaml
deleted file mode 100644
index 00227b04acba..000000000000
--- a/stable/nginx-ingress/templates/scoped-clusterrolebinding.yaml
+++ /dev/null
@@ -1,19 +0,0 @@
-{{- if and .Values.rbac.create .Values.controller.scope.enabled -}}
-apiVersion: rbac.authorization.k8s.io/v1beta1
-kind: ClusterRoleBinding
-metadata:
- labels:
- app: {{ template "nginx-ingress.name" . }}
- chart: {{ .Chart.Name }}-{{ .Chart.Version }}
- heritage: {{ .Release.Service }}
- release: {{ .Release.Name }}
- name: {{ template "nginx-ingress.fullname" . }}-scoped
-roleRef:
- apiGroup: rbac.authorization.k8s.io
- kind: ClusterRole
- name: {{ template "nginx-ingress.fullname" . }}-scoped
-subjects:
- - kind: ServiceAccount
- name: {{ template "nginx-ingress.serviceAccountName" . }}
- namespace: {{ .Release.Namespace }}
-{{- end -}}
diff --git a/stable/nginx-ingress/templates/tcp-configmap.yaml b/stable/nginx-ingress/templates/tcp-configmap.yaml
index fdbf282682d1..ed4359535b4d 100644
--- a/stable/nginx-ingress/templates/tcp-configmap.yaml
+++ b/stable/nginx-ingress/templates/tcp-configmap.yaml
@@ -10,5 +10,5 @@ metadata:
release: {{ .Release.Name }}
name: {{ template "nginx-ingress.fullname" . }}-tcp
data:
-{{ toYaml .Values.tcp | indent 2 }}
+{{ tpl (toYaml .Values.tcp) . | indent 2 }}
{{- end }}
diff --git a/stable/nginx-ingress/templates/udp-configmap.yaml b/stable/nginx-ingress/templates/udp-configmap.yaml
index 75ce16325f5f..da1233592c09 100644
--- a/stable/nginx-ingress/templates/udp-configmap.yaml
+++ b/stable/nginx-ingress/templates/udp-configmap.yaml
@@ -10,5 +10,5 @@ metadata:
release: {{ .Release.Name }}
name: {{ template "nginx-ingress.fullname" . }}-udp
data:
-{{ toYaml .Values.udp | indent 2 }}
+{{ tpl (toYaml .Values.udp) . | indent 2 }}
{{- end }}
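Wrapping `toYaml` in `tpl` in the two ConfigMaps above means template expressions embedded in the `tcp`/`udp` values are expanded at render time, e.g. a backend reference such as `{{ .Release.Namespace }}/svc:9000`. A rough Python stand-in for what that buys (Helm's real `tpl` runs the full Go template engine; this toy only substitutes one variable):

```python
def tpl(text: str, context: dict) -> str:
    # Tiny stand-in for Helm's tpl; handles only {{ .Release.Namespace }}.
    return text.replace("{{ .Release.Namespace }}",
                        context["Release"]["Namespace"])

# Hypothetical tcp: values entry referencing the release namespace.
tcp_values = {9000: "{{ .Release.Namespace }}/example-svc:9000"}
rendered = {port: tpl(target, {"Release": {"Namespace": "ingress"}})
            for port, target in tcp_values.items()}
print(rendered)  # {9000: 'ingress/example-svc:9000'}
```

Before this change the values were emitted verbatim, so such references would land in the ConfigMap unexpanded.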
diff --git a/stable/nginx-ingress/values.yaml b/stable/nginx-ingress/values.yaml
index 0886b0688163..4bd8a0e23a12 100644
--- a/stable/nginx-ingress/values.yaml
+++ b/stable/nginx-ingress/values.yaml
@@ -5,11 +5,16 @@ controller:
name: controller
image:
repository: quay.io/kubernetes-ingress-controller/nginx-ingress-controller
- tag: "0.22.0"
+ tag: "0.24.1"
pullPolicy: IfNotPresent
# www-data -> uid 33
runAsUser: 33
+ # Configures the ports the nginx-controller listens on
+ containerPort:
+ http: 80
+ https: 443
+
config: {}
# Will add custom header to Nginx https://github.com/kubernetes/ingress-nginx/tree/master/docs/examples/customization/custom-headers
headers: {}
@@ -31,6 +36,8 @@ controller:
hostPorts:
http: 80
https: 443
+ ## healthz endpoint
+ stats: 18080
## Required only if defaultBackend.enabled = false
## Must be /
@@ -49,6 +56,12 @@ controller:
podLabels: {}
# key: value
+ ## Security Context policies for controller pods
+ ## See https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/ for
+ ## notes on enabling and using sysctls
+ ##
+ podSecurityContext: {}
+
## Allows customization of the external service
## the ingress will be bound to via DNS
publishService:
@@ -160,6 +173,7 @@ controller:
service:
annotations: {}
labels: {}
+ omitClusterIP: false
clusterIP: ""
## List of IP addresses at which the controller services are available
@@ -240,6 +254,7 @@ controller:
service:
annotations: {}
+ omitClusterIP: false
clusterIP: ""
## List of IP addresses at which the stats service is available
@@ -262,6 +277,7 @@ controller:
# prometheus.io/scrape: "true"
# prometheus.io/port: "10254"
+ omitClusterIP: false
clusterIP: ""
## List of IP addresses at which the stats-exporter service is available
@@ -277,6 +293,8 @@ controller:
serviceMonitor:
enabled: false
additionalLabels: {}
+ namespace: ""
+ # honorLabels: true
lifecycle: {}
@@ -296,8 +314,8 @@ defaultBackend:
name: default-backend
image:
- repository: k8s.gcr.io/defaultbackend
- tag: "1.4"
+ repository: k8s.gcr.io/defaultbackend-amd64
+ tag: "1.5"
pullPolicy: IfNotPresent
extraArgs: {}
@@ -342,6 +360,7 @@ defaultBackend:
service:
annotations: {}
+ omitClusterIP: false
clusterIP: ""
## List of IP addresses at which the default backend service is available
diff --git a/stable/node-problem-detector/Chart.yaml b/stable/node-problem-detector/Chart.yaml
index 1a11ea61d4a4..a6e806cf7127 100644
--- a/stable/node-problem-detector/Chart.yaml
+++ b/stable/node-problem-detector/Chart.yaml
@@ -1,6 +1,7 @@
+apiVersion: v1
name: node-problem-detector
-version: "1.3.0"
-appVersion: v0.6.1
+version: "1.4.3"
+appVersion: v0.6.3
home: https://github.com/kubernetes/node-problem-detector
description: Installs the node-problem-detector daemonset for monitoring extra attributes on nodes
icon: https://github.com/kubernetes/kubernetes/raw/master/logo/logo.png
@@ -16,4 +17,10 @@ sources:
maintainers:
- name: max-rocket-internet
email: max.williams@deliveryhero.com
+- name: Random-Liu
+ email: lantaol@google.com
+- name: andyxning
+ email: andy.xning@gmail.com
+- name: wangzhen127
+ email: zhenw@google.com
engine: gotpl
diff --git a/stable/node-problem-detector/OWNERS b/stable/node-problem-detector/OWNERS
index 9f7acfb63b3a..0ad303c0e740 100644
--- a/stable/node-problem-detector/OWNERS
+++ b/stable/node-problem-detector/OWNERS
@@ -1,4 +1,10 @@
approvers:
- max-rocket-internet
+- Random-Liu
+- andyxning
+- wangzhen127
reviewers:
- max-rocket-internet
+- Random-Liu
+- andyxning
+- wangzhen127
diff --git a/stable/node-problem-detector/README.md b/stable/node-problem-detector/README.md
index 62202389c73c..c4ada1ea179a 100644
--- a/stable/node-problem-detector/README.md
+++ b/stable/node-problem-detector/README.md
@@ -41,10 +41,11 @@ The following table lists the configurable parameters for this chart and their d
| `fullnameOverride` | Override the fullname of the chart | `nil` |
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `image.repository` | Image | `k8s.gcr.io/node-problem-detector` |
-| `image.tag` | Image tag | `v0.6.1` |
+| `image.tag` | Image tag | `v0.6.3` |
| `nameOverride` | Override the name of the chart | `nil` |
| `rbac.create` | RBAC | `true` |
| `hostNetwork` | Run pod on host network | `false` |
+| `priorityClassName` | Priority class name | `""` |
| `resources` | Pod resource requests and limits | `{}` |
| `settings.custom_monitor_definitions` | User-specified custom monitor definitions | `{}` |
| `settings.log_monitors` | System log monitor config files | `/config/kernel-monitor.json`, `/config/docker-monitor.json` |
diff --git a/stable/node-problem-detector/templates/daemonset.yaml b/stable/node-problem-detector/templates/daemonset.yaml
index d795a9ddbb23..93690a992803 100644
--- a/stable/node-problem-detector/templates/daemonset.yaml
+++ b/stable/node-problem-detector/templates/daemonset.yaml
@@ -26,6 +26,10 @@ spec:
spec:
serviceAccountName: {{ template "node-problem-detector.serviceAccountName" . }}
hostNetwork: {{ .Values.hostNetwork }}
+ terminationGracePeriodSeconds: 30
+ {{- if .Values.priorityClassName }}
+ priorityClassName: {{ .Values.priorityClassName | quote }}
+ {{- end }}
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
@@ -50,7 +54,6 @@ spec:
- name: custom-config
mountPath: /custom-config
readOnly: true
- terminationGracePeriodSeconds: 30
resources:
{{ toYaml .Values.resources | indent 12 }}
{{- with .Values.affinity }}
diff --git a/stable/node-problem-detector/values.yaml b/stable/node-problem-detector/values.yaml
index fb2af3d8d81d..50378bed1886 100644
--- a/stable/node-problem-detector/values.yaml
+++ b/stable/node-problem-detector/values.yaml
@@ -38,7 +38,7 @@ hostpath:
image:
repository: k8s.gcr.io/node-problem-detector
- tag: v0.6.1
+ tag: v0.6.3
pullPolicy: IfNotPresent
nameOverride: ""
@@ -51,6 +51,8 @@ rbac:
# not recommended, but may be useful for certain use cases.
hostNetwork: false
+priorityClassName: ""
+
resources: {}
annotations: {}
diff --git a/stable/node-red/Chart.yaml b/stable/node-red/Chart.yaml
index 542095fbc04f..afbbdac35bf6 100644
--- a/stable/node-red/Chart.yaml
+++ b/stable/node-red/Chart.yaml
@@ -1,8 +1,8 @@
apiVersion: v1
-appVersion: 0.19.4
+appVersion: 0.19.6
description: Node-RED is flow-based programming for the Internet of Things
name: node-red
-version: 1.0.2
+version: 1.2.3
keywords:
- nodered
- node-red
@@ -14,3 +14,5 @@ sources:
maintainers:
- name: billimek
email: jeff@billimek.com
+ - name: batazor
+ email: batazor111@gmail.com
diff --git a/stable/node-red/OWNERS b/stable/node-red/OWNERS
index b90909f487d8..edce73409791 100644
--- a/stable/node-red/OWNERS
+++ b/stable/node-red/OWNERS
@@ -1,4 +1,6 @@
approvers:
- billimek
+- batazor
reviewers:
- billimek
+- batazor
diff --git a/stable/node-red/README.md b/stable/node-red/README.md
index d9ad3cfec55c..eee03ad7c13c 100644
--- a/stable/node-red/README.md
+++ b/stable/node-red/README.md
@@ -34,13 +34,14 @@ The command removes all the Kubernetes components associated with the chart and
## Configuration
-The following tables lists the configurable parameters of the Sentry chart and their default values.
+The following table lists the configurable parameters of the Node-RED chart and their default values.
| Parameter | Description | Default |
| ------------------------------- | ------------------------------- | ---------------------------------------------------------- |
| `image.repository` | node-red image | `nodered/node-red-docker` |
| `image.tag` | node-red image tag | `0.19.4-v8` |
| `image.pullPolicy` | node-red image pull policy | `IfNotPresent` |
+| `strategyType` | Specifies the strategy used to replace old Pods by new ones | `Recreate` |
| `flows` | Default flows configuration | `flows.json` |
| `nodeOptions` | Node.js runtime arguments | `` |
| `timezone` | Default timezone | `UTC` |
@@ -62,6 +63,7 @@ The following tables lists the configurable parameters of the Sentry chart and t
| `persistence.existingClaim` | Use an existing PVC to persist data | `nil` |
| `persistence.storageClass` | Type of persistent volume claim | `-` |
| `persistence.accessModes` | Persistence access modes | `ReadWriteOnce` |
+| `persistence.subPath`           | Mount a subdirectory of the persistent volume | `nil` |
| `resources` | CPU/Memory resource requests/limits | `{}` |
| `nodeSelector` | Node labels for pod assignment | `{}` |
| `tolerations` | Toleration labels for pod assignment | `[]` |
diff --git a/stable/node-red/templates/deployment.yaml b/stable/node-red/templates/deployment.yaml
index 8e2b1e220058..576694f7b450 100644
--- a/stable/node-red/templates/deployment.yaml
+++ b/stable/node-red/templates/deployment.yaml
@@ -8,7 +8,9 @@ metadata:
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
- replicas: {{ .Values.replicaCount }}
+ replicas: 1
+ strategy:
+ type: {{ .Values.strategyType }}
selector:
matchLabels:
app.kubernetes.io/name: {{ include "node-red.name" . }}
@@ -45,6 +47,9 @@ spec:
volumeMounts:
- name: data
mountPath: /data
+{{- if .Values.persistence.subPath }}
+ subPath: {{ .Values.persistence.subPath }}
+{{- end }}
resources:
{{ toYaml .Values.resources | indent 12 }}
volumes:
diff --git a/stable/node-red/templates/service.yaml b/stable/node-red/templates/service.yaml
index 3cc6ae82b680..ea49c966eb24 100644
--- a/stable/node-red/templates/service.yaml
+++ b/stable/node-red/templates/service.yaml
@@ -2,6 +2,10 @@ apiVersion: v1
kind: Service
metadata:
name: {{ include "node-red.fullname" . }}
+ {{- if .Values.service.annotations }}
+ annotations:
+ {{- toYaml .Values.service.annotations | nindent 4 }}
+ {{- end }}
labels:
app.kubernetes.io/name: {{ include "node-red.name" . }}
helm.sh/chart: {{ include "node-red.chart" . }}
@@ -35,4 +39,4 @@ spec:
{{ end }}
selector:
app.kubernetes.io/name: {{ include "node-red.name" . }}
- app.kubernetes.io/instance: {{ .Release.Name }}
\ No newline at end of file
+ app.kubernetes.io/instance: {{ .Release.Name }}
diff --git a/stable/node-red/values.yaml b/stable/node-red/values.yaml
index 44d0212b2c27..3133000753a7 100644
--- a/stable/node-red/values.yaml
+++ b/stable/node-red/values.yaml
@@ -2,11 +2,12 @@
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
-replicaCount: 1
+# upgrade strategy type (e.g. Recreate or RollingUpdate)
+strategyType: Recreate
image:
repository: nodered/node-red-docker
- tag: 0.19.4-v8
+ tag: 0.19.6-v8
pullPolicy: IfNotPresent
nameOverride: ""
@@ -66,6 +67,8 @@ persistence:
# existingClaim: your-claim
accessMode: ReadWriteOnce
size: 5Gi
+ ## When mounting the data volume you may specify a subPath
+ # subPath: /configs/node-red
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
diff --git a/stable/oauth2-proxy/.helmignore b/stable/oauth2-proxy/.helmignore
index f0c131944441..825c00779157 100644
--- a/stable/oauth2-proxy/.helmignore
+++ b/stable/oauth2-proxy/.helmignore
@@ -19,3 +19,5 @@
.project
.idea/
*.tmproj
+
+OWNERS
diff --git a/stable/oauth2-proxy/Chart.yaml b/stable/oauth2-proxy/Chart.yaml
index 504432e39c5d..ac4f4157d787 100644
--- a/stable/oauth2-proxy/Chart.yaml
+++ b/stable/oauth2-proxy/Chart.yaml
@@ -1,12 +1,9 @@
name: oauth2-proxy
-version: 0.6.0
-# This chart is deprecated and no longer maintained as it's upstream has been abandoned.
-# For details deprecation, including how to un-deprecate a chart see the PROCESSES.md file.
-deprecated: true
+version: 0.12.2
apiVersion: v1
-appVersion: 2.2
+appVersion: 3.1.0
home: http://www.videntity.com/
-description: DEPRECATED A reverse proxy that provides authentication with Google, Github or other providers
+description: A reverse proxy that provides authentication with Google, Github or other providers
keywords:
- kubernetes
- oauth
@@ -14,6 +11,9 @@ keywords:
- authentication
- google
- github
+maintainers:
+ - name: miouge1
+ email: maxime@root314.com
sources:
-- https://github.com/bitly/oauth2_proxy
+- https://github.com/pusher/oauth2_proxy
engine: gotpl
diff --git a/stable/oauth2-proxy/OWNERS b/stable/oauth2-proxy/OWNERS
new file mode 100644
index 000000000000..48b09f34c54a
--- /dev/null
+++ b/stable/oauth2-proxy/OWNERS
@@ -0,0 +1,4 @@
+approvers:
+- miouge1
+reviewers:
+- miouge1
diff --git a/stable/oauth2-proxy/README.md b/stable/oauth2-proxy/README.md
index 092a0c846e0a..d9fdf2e9dce0 100644
--- a/stable/oauth2-proxy/README.md
+++ b/stable/oauth2-proxy/README.md
@@ -1,10 +1,6 @@
# oauth2-proxy
-**N.B., this chart is deprecated and is no longer maintained as it's upstream [has been abandoned](https://github.com/bitly/oauth2_proxy/issues/628#issuecomment-417121636).**
-
-[oauth2-proxy](https://github.com/bitly/oauth2_proxy) is a reverse proxy and static file server that provides authentication using Providers (Google, GitHub, and others) to validate accounts by email, domain or group.
-
-**Note - at this time, there is a known incompatibility between `oauth2-proxy` version 2.2 (which is its latest release) and `nginx-ingress` versions >= 0.9beta12. To utilize this chart at this time please use nginx-ingress version 0.9beta11**
+[oauth2-proxy](https://github.com/pusher/oauth2_proxy) is a reverse proxy and static file server that provides authentication using Providers (Google, GitHub, and others) to validate accounts by email, domain or group.
## TL;DR;
@@ -44,26 +40,42 @@ Parameter | Description | Default
--- | --- | ---
`affinity` | node/pod affinities | None
`authenticatedEmailsFile.enabled` | Enables authorizing individual email addresses | `false`
-`authenticatedEmailsFile.template` | Name of the configmap what is handled outside of that chart | `""`
-`authenticatedEmailsFile.restricted_access | (email addresses)[https://github.com/bitly/oauth2_proxy#email-authentication] list config | `""`
+`authenticatedEmailsFile.template` | Name of the configmap that is managed outside of this chart | `""`
+`authenticatedEmailsFile.restricted_access` | [email addresses](https://github.com/pusher/oauth2_proxy#email-authentication) list config | `""`
`config.clientID` | oauth client ID | `""`
`config.clientSecret` | oauth client secret | `""`
-`config.cookieSecret` | server specific cookie for the secret; create a new one with `python -c 'import os,base64; print base64.b64encode(os.urandom(16))'` | `""`
-`config.configFile` | custom [oauth2_proxy.cfg](https://github.com/bitly/oauth2_proxy/blob/master/contrib/oauth2_proxy.cfg.example) contents for settings not overridable via environment nor command line | `""`
+`config.cookieSecret` | server-specific cookie secret; create a new one with `openssl rand -base64 32 \| head -c 32 \| base64` | `""`
+`config.existingSecret` | existing Kubernetes secret to use for OAuth2 credentials. See [secret template](https://github.com/helm/charts/blob/master/stable/oauth2-proxy/templates/secret.yaml) for the required values | `nil`
+`config.configFile` | custom [oauth2_proxy.cfg](https://github.com/pusher/oauth2_proxy/blob/master/contrib/oauth2_proxy.cfg.example) contents for settings not overridable via environment nor command line | `""`
+`config.existingConfig` | existing Kubernetes configmap to use for the configuration file. See [config template](https://github.com/helm/charts/blob/master/stable/oauth2-proxy/templates/configmap.yaml) for the required values | `nil`
+`config.google.adminEmail` | user impersonated by the google service account | `""`
+`config.google.serviceAccountJson` | google service account json contents | `""`
+`config.google.existingSecret` | existing Kubernetes secret to use for the service account file. See [google secret template](https://github.com/helm/charts/blob/master/stable/oauth2-proxy/templates/google-secret.yaml) for the required values | `nil`
`extraArgs` | key:value list of extra arguments to give the binary | `{}`
`image.pullPolicy` | Image pull policy | `IfNotPresent`
-`image.repository` | Image repository | `a5huynh/oauth2_proxy`
-`image.tag` | Image tag | `2.2`
+`image.repository` | Image repository | `quay.io/pusher/oauth2_proxy`
+`image.tag` | Image tag | `v3.1.0`
`imagePullSecrets` | Specify image pull secrets | `nil` (does not add image pull secrets to deployed pods)
`ingress.enabled` | enable ingress | `false`
+`livenessProbe.enabled` | enable Kubernetes livenessProbe. Disable to use oauth2-proxy with Istio mTLS. See [Istio FAQ](https://istio.io/help/faq/security/#k8s-health-checks) | `true`
+`livenessProbe.initialDelaySeconds` | number of seconds | 0
+`livenessProbe.timeoutSeconds` | number of seconds | 1
`nodeSelector` | node labels for pod assignment | `{}`
`podAnnotations` | annotations to add to each pod | `{}`
`podLabels` | additional labels to add to each pod | `{}`
+`priorityClassName` | priorityClassName | `nil`
+`readinessProbe.enabled` | enable Kubernetes readinessProbe. Disable to use oauth2-proxy with Istio mTLS. See [Istio FAQ](https://istio.io/help/faq/security/#k8s-health-checks) | `true`
+`readinessProbe.initialDelaySeconds` | number of seconds | 0
+`readinessProbe.timeoutSeconds` | number of seconds | 1
+`readinessProbe.periodSeconds` | number of seconds | 10
+`readinessProbe.successThreshold` | number of successes | 1
`replicaCount` | desired number of pods | `1`
`resources` | pod resource requests & limits | `{}`
-`priorityClassName` | priorityClassName | `nil`
`service.port` | port for the service | `80`
`service.type` | type of service | `ClusterIP`
+`service.clusterIP` | cluster ip address | `nil`
+`service.loadBalancerIP` | ip of load balancer | `nil`
+`service.loadBalancerSourceRanges` | allowed source ranges in load balancer | `nil`
`tolerations` | List of node taints to tolerate | `[]`
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
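The parameter table recommends generating the cookie secret with an `openssl` pipeline. A Python equivalent of that pipeline, assuming the goal is a base64 string that decodes to 32 bytes (a valid oauth2-proxy cookie-secret length):

```python
import base64
import os

def gen_cookie_secret() -> str:
    # Mirrors `openssl rand -base64 32 | head -c 32 | base64`:
    # 32 random bytes -> base64, keep the first 32 ASCII chars,
    # then base64-encode those 32 bytes again.
    raw32 = base64.b64encode(os.urandom(32))[:32]
    return base64.b64encode(raw32).decode()

secret = gen_cookie_secret()
print(len(secret))                    # 44
print(len(base64.b64decode(secret)))  # 32
```

The decoded length of 32 bytes is what matters here; oauth2-proxy uses the decoded secret as the cookie encryption key.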
diff --git a/stable/oauth2-proxy/templates/NOTES.txt b/stable/oauth2-proxy/templates/NOTES.txt
index abca7aa5d997..10d2de847d4a 100644
--- a/stable/oauth2-proxy/templates/NOTES.txt
+++ b/stable/oauth2-proxy/templates/NOTES.txt
@@ -1 +1,3 @@
-Note this chart is deprecated and is no longer maintained because upstream was abandoned.
+To verify that oauth2-proxy has started, run:
+
+ kubectl --namespace={{ .Release.Namespace }} get pods -l "app={{ template "oauth2-proxy.fullname" . }}"
diff --git a/stable/oauth2-proxy/templates/_helpers.tpl b/stable/oauth2-proxy/templates/_helpers.tpl
index 36cbfe1b2710..c263df0a803b 100644
--- a/stable/oauth2-proxy/templates/_helpers.tpl
+++ b/stable/oauth2-proxy/templates/_helpers.tpl
@@ -30,3 +30,14 @@ Create chart name and version as used by the chart label.
{{- define "oauth2-proxy.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
+
+{{/*
+Get the secret name.
+*/}}
+{{- define "oauth2-proxy.secretName" -}}
+{{- if .Values.config.existingSecret -}}
+{{- printf "%s" .Values.config.existingSecret -}}
+{{- else -}}
+{{- printf "%s" (include "oauth2-proxy.fullname" .) -}}
+{{- end -}}
+{{- end -}}
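The new `oauth2-proxy.secretName` helper above prefers a user-supplied `config.existingSecret` and falls back to the release fullname. The precedence can be sketched in Python (hypothetical function, not part of the chart):

```python
def secret_name(values: dict, fullname: str) -> str:
    # Prefer config.existingSecret; otherwise use the chart fullname.
    return values.get("config", {}).get("existingSecret") or fullname

print(secret_name({"config": {}}, "my-release-oauth2-proxy"))
print(secret_name({"config": {"existingSecret": "my-oauth-creds"}}, "unused"))
```

This is what lets the deployment's `secretKeyRef` entries point at either the chart-managed Secret or a pre-existing one without further conditionals.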
diff --git a/stable/oauth2-proxy/templates/configmap-authenticated-emails-file.yaml b/stable/oauth2-proxy/templates/configmap-authenticated-emails-file.yaml
old mode 100755
new mode 100644
diff --git a/stable/oauth2-proxy/templates/configmap.yaml b/stable/oauth2-proxy/templates/configmap.yaml
index 76ea61f4409e..bf5f517c9acd 100644
--- a/stable/oauth2-proxy/templates/configmap.yaml
+++ b/stable/oauth2-proxy/templates/configmap.yaml
@@ -1,3 +1,4 @@
+{{- if not .Values.config.existingConfig }}
{{- if .Values.config.configFile }}
apiVersion: v1
kind: ConfigMap
@@ -11,3 +12,4 @@ metadata:
data:
oauth2_proxy.cfg: {{ .Values.config.configFile | quote }}
{{- end }}
+{{- end }}
diff --git a/stable/oauth2-proxy/templates/deployment.yaml b/stable/oauth2-proxy/templates/deployment.yaml
index fa3eeac5e4c1..91eaa29a0f12 100644
--- a/stable/oauth2-proxy/templates/deployment.yaml
+++ b/stable/oauth2-proxy/templates/deployment.yaml
@@ -15,8 +15,12 @@ spec:
release: {{ .Release.Name }}
template:
metadata:
- {{- if .Values.podAnnotations }}
annotations:
+ checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
+ checksum/config-emails: {{ include (print $.Template.BasePath "/configmap-authenticated-emails-file.yaml") . | sha256sum }}
+ checksum/secret: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }}
+ checksum/google-secret: {{ include (print $.Template.BasePath "/google-secret.yaml") . | sha256sum }}
+ {{- if .Values.podAnnotations }}
{{ toYaml .Values.podAnnotations | indent 8 }}
{{- end }}
labels:
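The `checksum/*` annotations added above embed a SHA-256 of each rendered config manifest into the pod template, so any ConfigMap or Secret change alters the pod template and triggers a rolling restart on upgrade. An illustrative sketch of the mechanism (Helm's `sha256sum` hashes the rendered template text the same way):

```python
import hashlib

def config_checksum(rendered_manifest: str) -> str:
    # Same idea as `include ... | sha256sum` in the template.
    return hashlib.sha256(rendered_manifest.encode()).hexdigest()

a = config_checksum('oauth2_proxy.cfg: "email_domains = [ \'*\' ]"')
b = config_checksum('oauth2_proxy.cfg: "email_domains = [ \'example.com\' ]"')
print(a != b)  # True: the annotation differs, so pods roll on upgrade
```

Without these annotations, editing only a ConfigMap would leave the running pods on the stale configuration until they were restarted by hand.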
@@ -34,6 +38,7 @@ spec:
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
args:
+ - --http-address=0.0.0.0:4180
{{- range $key, $value := .Values.extraArgs }}
{{- if $value }}
- --{{ $key }}={{ $value }}
@@ -41,7 +46,7 @@ spec:
- --{{ $key }}
{{- end }}
{{- end }}
- {{- if .Values.config.configFile }}
+ {{- if or .Values.config.existingConfig .Values.config.configFile }}
- --config=/etc/oauth2_proxy/oauth2_proxy.cfg
{{- end }}
{{- if .Values.authenticatedEmailsFile.enabled }}
@@ -51,52 +56,82 @@ spec:
- --authenticated-emails-file=/etc/oauth2-proxy/authenticated-emails-list
{{- end }}
{{- end }}
+ {{- with .Values.config.google }}
+ {{- if and .adminEmail (or .serviceAccountJson .existingSecret) }}
+ - --google-admin-email={{ .adminEmail }}
+ - --google-service-account-json=/google/service-account.json
+ {{- end }}
+ {{- end }}
env:
- name: OAUTH2_PROXY_CLIENT_ID
valueFrom:
secretKeyRef:
- name: {{ template "oauth2-proxy.fullname" . }}
+ name: {{ template "oauth2-proxy.secretName" . }}
key: client-id
- name: OAUTH2_PROXY_CLIENT_SECRET
valueFrom:
secretKeyRef:
- name: {{ template "oauth2-proxy.fullname" . }}
+ name: {{ template "oauth2-proxy.secretName" . }}
key: client-secret
- name: OAUTH2_PROXY_COOKIE_SECRET
valueFrom:
secretKeyRef:
- name: {{ template "oauth2-proxy.fullname" . }}
+ name: {{ template "oauth2-proxy.secretName" . }}
key: cookie-secret
ports:
- containerPort: 4180
name: http
protocol: TCP
+{{- if .Values.livenessProbe.enabled }}
livenessProbe:
httpGet:
path: /ping
port: http
+ initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
+ timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }}
+{{- end }}
+{{- if .Values.readinessProbe.enabled }}
readinessProbe:
httpGet:
path: /ping
port: http
+ initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
+ timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }}
+ successThreshold: {{ .Values.readinessProbe.successThreshold }}
+ periodSeconds: {{ .Values.readinessProbe.periodSeconds }}
+{{- end }}
resources:
{{ toYaml .Values.resources | indent 10 }}
volumeMounts:
-{{- if .Values.config.configFile }}
+{{- with .Values.config.google }}
+{{- if and .adminEmail (or .serviceAccountJson .existingSecret) }}
+ - name: google-secret
+ mountPath: /google
+ readOnly: true
+{{- end }}
+{{- end }}
+{{- if or .Values.config.existingConfig .Values.config.configFile }}
- mountPath: /etc/oauth2_proxy
- name: {{ template "oauth2-proxy.fullname" . }}-configmap
+ name: configmain
{{- end }}
{{- if .Values.authenticatedEmailsFile.enabled }}
- mountPath: /etc/oauth2-proxy
- name: {{ template "oauth2-proxy.fullname" . }}-configaccesslist
+ name: configaccesslist
readOnly: true
{{- end }}
volumes:
-{{- if .Values.config.configFile }}
+{{- with .Values.config.google }}
+{{- if and .adminEmail (or .serviceAccountJson .existingSecret) }}
+ - name: google-secret
+ secret:
+ secretName: {{ if .existingSecret }}{{ .existingSecret }}{{ else }}{{ template "oauth2-proxy.fullname" $ }}-google{{ end }}
+{{- end }}
+{{- end }}
+{{- if or .Values.config.existingConfig .Values.config.configFile }}
- configMap:
defaultMode: 420
- name: {{ template "oauth2-proxy.fullname" . }}
- name: {{ template "oauth2-proxy.fullname" . }}-configmap
+ name: {{ if .Values.config.existingConfig }}{{ .Values.config.existingConfig }}{{ else }}{{ template "oauth2-proxy.fullname" . }}{{ end }}
+ name: configmain
{{- end }}
{{- if .Values.authenticatedEmailsFile.enabled }}
- configMap:
@@ -112,7 +147,7 @@ spec:
{{- else }}
path: authenticated-emails-list
{{- end }}
- name: {{ template "oauth2-proxy.fullname" . }}-configaccesslist
+ name: configaccesslist
{{- end }}
{{- if .Values.imagePullSecrets }}
imagePullSecrets:
diff --git a/stable/oauth2-proxy/templates/google-secret.yaml b/stable/oauth2-proxy/templates/google-secret.yaml
new file mode 100644
index 000000000000..0e785b1860ac
--- /dev/null
+++ b/stable/oauth2-proxy/templates/google-secret.yaml
@@ -0,0 +1,14 @@
+{{- if and .Values.config.google (not .Values.config.google.existingSecret) }}
+apiVersion: v1
+kind: Secret
+metadata:
+ labels:
+ app: {{ template "oauth2-proxy.name" . }}
+ chart: {{ template "oauth2-proxy.chart" . }}
+ heritage: {{ .Release.Service }}
+ release: {{ .Release.Name }}
+ name: {{ template "oauth2-proxy.fullname" . }}-google
+type: Opaque
+data:
+ service-account.json: {{ .Values.config.google.serviceAccountJson | b64enc }}
+{{- end -}}
diff --git a/stable/oauth2-proxy/templates/secret.yaml b/stable/oauth2-proxy/templates/secret.yaml
index 8347d5c239bb..858fe9f41793 100644
--- a/stable/oauth2-proxy/templates/secret.yaml
+++ b/stable/oauth2-proxy/templates/secret.yaml
@@ -1,3 +1,4 @@
+{{- if not .Values.config.existingSecret }}
apiVersion: v1
kind: Secret
metadata:
@@ -12,3 +13,4 @@ data:
cookie-secret: {{ .Values.config.cookieSecret | b64enc | quote }}
client-secret: {{ .Values.config.clientSecret | b64enc | quote }}
client-id: {{ .Values.config.clientID | b64enc | quote }}
+{{- end -}}
diff --git a/stable/oauth2-proxy/templates/service.yaml b/stable/oauth2-proxy/templates/service.yaml
index b89ca6e18770..e2d3157779eb 100644
--- a/stable/oauth2-proxy/templates/service.yaml
+++ b/stable/oauth2-proxy/templates/service.yaml
@@ -12,7 +12,23 @@ metadata:
{{ toYaml .Values.service.annotations | indent 4 }}
{{- end }}
spec:
+{{- if (or (eq .Values.service.type "ClusterIP") (empty .Values.service.type)) }}
+ type: ClusterIP
+ {{- if .Values.service.clusterIP }}
+ clusterIP: {{ .Values.service.clusterIP }}
+ {{- end }}
+{{- else if eq .Values.service.type "LoadBalancer" }}
type: {{ .Values.service.type }}
+ {{- if .Values.service.loadBalancerIP }}
+ loadBalancerIP: {{ .Values.service.loadBalancerIP }}
+ {{- end }}
+ {{- if .Values.service.loadBalancerSourceRanges }}
+ loadBalancerSourceRanges:
+{{ toYaml .Values.service.loadBalancerSourceRanges | indent 4 }}
+ {{- end -}}
+{{- else }}
+ type: {{ .Values.service.type }}
+{{- end }}
ports:
- port: {{ .Values.service.port }}
targetPort: http
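The reworked service spec above branches on `service.type`: `ClusterIP` (the default, also taken when the type is empty) optionally pins `clusterIP`, while `LoadBalancer` additionally supports `loadBalancerIP` and `loadBalancerSourceRanges`; any other type is passed through unchanged. A Python sketch of that branching (keys mirror `.Values.service`; illustrative only):

```python
def service_spec(svc: dict) -> dict:
    """Build the type-dependent part of the Service spec."""
    svc_type = svc.get("type") or "ClusterIP"
    spec = {"type": svc_type}
    if svc_type == "ClusterIP":
        if svc.get("clusterIP"):
            spec["clusterIP"] = svc["clusterIP"]
    elif svc_type == "LoadBalancer":
        if svc.get("loadBalancerIP"):
            spec["loadBalancerIP"] = svc["loadBalancerIP"]
        if svc.get("loadBalancerSourceRanges"):
            spec["loadBalancerSourceRanges"] = svc["loadBalancerSourceRanges"]
    return spec

print(service_spec({"type": "LoadBalancer", "loadBalancerIP": "198.51.100.40"}))
```

The corresponding example values appear in `values.yaml` (`clusterIP`, `loadBalancerIP`, `loadBalancerSourceRanges` comments under `service:`).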
diff --git a/stable/oauth2-proxy/values.yaml b/stable/oauth2-proxy/values.yaml
index 57c2af995e07..a10323c45b15 100644
--- a/stable/oauth2-proxy/values.yaml
+++ b/stable/oauth2-proxy/values.yaml
@@ -5,17 +5,32 @@ config:
# OAuth client secret
clientSecret: "XXXXXXXX"
# Create a new secret with the following command
- # python -c 'import os,base64; print base64.b64encode(os.urandom(16))'
+ # openssl rand -base64 32 | head -c 32 | base64
+ # Use an existing secret for OAuth2 credentials (see secret.yaml for required fields)
+ # Example:
+ # existingSecret: secret
cookieSecret: "XXXXXXXXXX"
+ google: {}
+ # adminEmail: xxxx
+ # serviceAccountJson: xxxx
+ # Alternatively, use an existing secret (see google-secret.yaml for required fields)
+ # Example:
+ # existingSecret: google-secret
+ # Default configuration, to be overridden
+ configFile: |-
+ email_domains = [ "*" ]
+ upstreams = [ "file:///dev/null" ]
# Custom configuration file: oauth2_proxy.cfg
# configFile: |-
# pass_basic_auth = false
# pass_access_token = true
- configFile: ""
+ # Use an existing config map (see configmap.yaml for required fields)
+ # Example:
+ # existingConfig: config
image:
- repository: "a5huynh/oauth2_proxy"
- tag: "2.2"
+ repository: "quay.io/pusher/oauth2_proxy"
+ tag: "v3.1.0"
pullPolicy: "IfNotPresent"
# Optionally specify an array of imagePullSecrets.
@@ -24,10 +39,7 @@ image:
# imagePullSecrets:
# - name: myRegistryKeySecretName
-extraArgs:
- email-domain: "*"
- upstream: "file:///dev/null"
- http-address: "0.0.0.0:4180"
+extraArgs: {}
# To authorize individual email addresses
# That is part of extraArgs but since this needs special treatment we need to do a separate section
@@ -48,6 +60,11 @@ authenticatedEmailsFile:
service:
type: ClusterIP
+ # when service.type is ClusterIP ...
+ # clusterIP: 192.0.2.20
+ # when service.type is LoadBalancer ...
+ # loadBalancerIP: 198.51.100.40
+ # loadBalancerSourceRanges: 203.0.113.0/24
port: 80
annotations: {}
# foo.io/bar: "true"
@@ -89,6 +106,21 @@ tolerations: []
# Ref: https://kubernetes.io/docs/user-guide/node-selection/
nodeSelector: {}
+# Configure Kubernetes liveness and readiness probes.
+# Ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
+# Disable both when deploying with Istio 1.0 mTLS. https://istio.io/help/faq/security/#k8s-health-checks
+livenessProbe:
+ enabled: true
+ initialDelaySeconds: 0
+ timeoutSeconds: 1
+
+readinessProbe:
+ enabled: true
+ initialDelaySeconds: 0
+ timeoutSeconds: 1
+ periodSeconds: 10
+ successThreshold: 1
+
podAnnotations: {}
podLabels: {}
replicaCount: 1
diff --git a/stable/odoo/Chart.yaml b/stable/odoo/Chart.yaml
index bff7b702b4d3..f38892d10edf 100644
--- a/stable/odoo/Chart.yaml
+++ b/stable/odoo/Chart.yaml
@@ -1,6 +1,7 @@
+apiVersion: v1
name: odoo
-version: 5.0.4
-appVersion: 11.0.20190115
+version: 8.0.1
+appVersion: 12.0.20190515
description: A suite of web based open source business apps.
home: https://www.odoo.com/
icon: https://bitnami.com/assets/stacks/odoo/img/odoo-stack-110x117.png
diff --git a/stable/odoo/README.md b/stable/odoo/README.md
index 26439a61136f..33da0126a76d 100644
--- a/stable/odoo/README.md
+++ b/stable/odoo/README.md
@@ -14,7 +14,7 @@ $ helm install stable/odoo
This chart bootstraps a [Odoo](https://github.com/bitnami/bitnami-docker-odoo) deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
-Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters.
+Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters. This chart has been tested to work with NGINX Ingress, cert-manager, fluentd and Prometheus on top of the [BKPR](https://kubeprod.io/).
## Prerequisites
@@ -50,6 +50,7 @@ The following table lists the configurable parameters of the Odoo chart and thei
| Parameter | Description | Default |
|---------------------------------------|-------------------------------------------------------------|------------------------------------------------|
| `global.imageRegistry` | Global Docker image registry | `nil` |
+| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) |
| `image.registry` | Odoo image registry | `docker.io` |
| `image.repository` | Odoo Image name | `bitnami/odoo` |
| `image.tag` | Odoo Image tag | `{VERSION}` |
@@ -79,6 +80,8 @@ The following table lists the configurable parameters of the Odoo chart and thei
| `ingress.secrets[0].certificate` | TLS Secret Certificate | `nil` |
| `ingress.secrets[0].key` | TLS Secret Key | `nil` |
| `resources` | CPU/Memory resource requests/limits | Memory: `512Mi`, CPU: `300m` |
+| `persistence.enabled` | Enable persistence using PVC | `true` |
+| `persistence.existingClaim` | Enable persistence using an existing PVC | `nil` |
| `persistence.storageClass` | PVC Storage Class | `nil` (uses alpha storage class annotation) |
| `persistence.accessMode` | PVC Access Mode | `ReadWriteOnce` |
| `persistence.size` | PVC Storage Request | `8Gi` |
diff --git a/stable/odoo/requirements.lock b/stable/odoo/requirements.lock
index 78864c2403c8..146b6b4e0c8f 100644
--- a/stable/odoo/requirements.lock
+++ b/stable/odoo/requirements.lock
@@ -1,6 +1,6 @@
dependencies:
- name: postgresql
repository: https://kubernetes-charts.storage.googleapis.com/
- version: 2.7.10
-digest: sha256:972c7085960fbe4a3f530f726f5a1cc6fe038f0ab84df632f6427c3a49f3f366
-generated: 2018-12-17T05:16:55.480186185Z
+ version: 4.0.1
+digest: sha256:5fb584d119966f79129f7e6f010654540fe747203b91cb4e3ea548f90332c9db
+generated: 2019-05-09T20:07:01.558447453+02:00
diff --git a/stable/odoo/requirements.yaml b/stable/odoo/requirements.yaml
index 8b19b44566f9..060c94d6deac 100644
--- a/stable/odoo/requirements.yaml
+++ b/stable/odoo/requirements.yaml
@@ -1,4 +1,4 @@
dependencies:
- name: postgresql
- version: 2.x.x
+ version: 4.x.x
repository: https://kubernetes-charts.storage.googleapis.com/
diff --git a/stable/odoo/templates/_helpers.tpl b/stable/odoo/templates/_helpers.tpl
index c91d10128b03..fd57f873917f 100644
--- a/stable/odoo/templates/_helpers.tpl
+++ b/stable/odoo/templates/_helpers.tpl
@@ -52,3 +52,32 @@ Also, we can't use a single if because lazy evaluation is not an option
{{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
{{- end -}}
{{- end -}}
+
+{{/*
+Return the proper Docker Image Registry Secret Names
+*/}}
+{{- define "odoo.imagePullSecrets" -}}
+{{/*
+Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
+but Helm 2.9 and 2.10 do not support it, so we need to implement this if-else logic.
+Also, we cannot use a single if because lazy evaluation is not an option
+*/}}
+{{- if .Values.global }}
+{{- if .Values.global.imagePullSecrets }}
+imagePullSecrets:
+{{- range .Values.global.imagePullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- else if .Values.image.pullSecrets }}
+imagePullSecrets:
+{{- range .Values.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- end -}}
+{{- else if .Values.image.pullSecrets }}
+imagePullSecrets:
+{{- range .Values.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- end -}}
+{{- end -}}
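The helper above gives `global.imagePullSecrets` precedence over the chart-local `image.pullSecrets`; a sketch of the two inputs (secret names hypothetical):

```yaml
global:
  imagePullSecrets:
    - myGlobalRegistrySecret   # rendered into imagePullSecrets when set
image:
  pullSecrets:
    - myChartRegistrySecret    # only used when no global list is provided
```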
diff --git a/stable/odoo/templates/deployment.yaml b/stable/odoo/templates/deployment.yaml
index 6ab0986ff585..987b36303569 100644
--- a/stable/odoo/templates/deployment.yaml
+++ b/stable/odoo/templates/deployment.yaml
@@ -19,12 +19,7 @@ spec:
chart: {{ template "odoo.chart" . }}
release: {{ .Release.Name | quote }}
spec:
- {{- if .Values.image.pullSecrets }}
- imagePullSecrets:
- {{- range .Values.image.pullSecrets }}
- - name: {{ . }}
- {{- end}}
- {{- end }}
+{{- include "odoo.imagePullSecrets" . | indent 6 }}
containers:
- name: {{ template "odoo.fullname" . }}
image: {{ template "odoo.image" . }}
@@ -103,7 +98,7 @@ spec:
- name: odoo-data
{{- if .Values.persistence.enabled }}
persistentVolumeClaim:
- claimName: {{ template "odoo.fullname" . }}
+ claimName: {{ .Values.persistence.existingClaim | default (include "odoo.fullname" .) }}
{{- else }}
emptyDir: {}
{{- end }}
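The `existingClaim` fallback above can be exercised from a values override; a sketch assuming a hypothetical pre-created PVC named `odoo-data-pvc`:

```yaml
persistence:
  enabled: true
  # hypothetical pre-created PVC; when set, the deployment mounts it
  # and the chart skips rendering its own PersistentVolumeClaim
  existingClaim: odoo-data-pvc
```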
diff --git a/stable/odoo/templates/pvc.yaml b/stable/odoo/templates/pvc.yaml
index be6345156967..58e3b479edcb 100644
--- a/stable/odoo/templates/pvc.yaml
+++ b/stable/odoo/templates/pvc.yaml
@@ -1,4 +1,4 @@
-{{- if .Values.persistence.enabled -}}
+{{- if and .Values.persistence.enabled (not .Values.persistence.existingClaim) }}
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
diff --git a/stable/odoo/templates/svc.yaml b/stable/odoo/templates/svc.yaml
index 0e45478417a5..d0c8a4663e6e 100644
--- a/stable/odoo/templates/svc.yaml
+++ b/stable/odoo/templates/svc.yaml
@@ -24,3 +24,4 @@ spec:
{{- end }}
selector:
app: {{ template "odoo.name" . }}
+ release: {{ .Release.Name | quote }}
diff --git a/stable/odoo/values.yaml b/stable/odoo/values.yaml
index f1b3fb39d4c9..9c792ab3e7da 100644
--- a/stable/odoo/values.yaml
+++ b/stable/odoo/values.yaml
@@ -1,8 +1,11 @@
-## Global Docker image registry
-## Please, note that this will override the image registry for all the images, including dependencies, configured to use the global value
+## Global Docker image parameters
+## Please note that this will override the image parameters for all the images, including dependencies, configured to use the global value
+## Current available global Docker image parameters: imageRegistry and imagePullSecrets
##
# global:
-# imageRegistry:
+# imageRegistry: myRegistryName
+# imagePullSecrets:
+# - myRegistryKeySecretName
## Bitnami Odoo image version
## ref: https://hub.docker.com/r/bitnami/odoo/tags/
@@ -10,7 +13,7 @@
image:
registry: docker.io
repository: bitnami/odoo
- tag: 11.0.20190115
+ tag: 12.0.20190515
## Specify a imagePullPolicy
## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
@@ -21,7 +24,7 @@ image:
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
- # - myRegistrKeySecretName
+ # - myRegistryKeySecretName
## User of the application
## ref: https://github.com/bitnami/bitnami-docker-odoo#configuration
@@ -63,6 +66,11 @@ postgresql:
##
persistence:
enabled: true
+ ## If you want to reuse an existing claim, you can pass the name of the PVC using
+ ## the existingClaim variable
+ ##
+ # existingClaim: your-claim
+
## postgresql data Persistent Volume Storage Class
## If defined, storageClassName:
## If set to "-", storageClassName: "", which disables dynamic provisioning
diff --git a/stable/opa/Chart.yaml b/stable/opa/Chart.yaml
index 0d451dc83230..ac9c133c62e5 100644
--- a/stable/opa/Chart.yaml
+++ b/stable/opa/Chart.yaml
@@ -1,12 +1,12 @@
apiVersion: v1
-appVersion: 0.10.1
+appVersion: 0.10.2
description: Open source, general-purpose policy engine. Enforce fine-grained invariants over arbitrary Kubernetes resources.
name: opa
keywords:
- opa
- admission control
- policy
-version: 0.2.0
+version: 1.4.2
home: https://www.openpolicyagent.org
icon: https://raw.githubusercontent.com/open-policy-agent/opa/master/logo/logo.png
sources:
diff --git a/stable/opa/README.md b/stable/opa/README.md
index 17d69535cd8d..81bb837d9543 100644
--- a/stable/opa/README.md
+++ b/stable/opa/README.md
@@ -7,6 +7,7 @@ engine designed for cloud-native environments.
- Kubernetes 1.9 (or newer) for validating and mutating webhook admission
controller support.
+- Optional: cert-manager (https://docs.cert-manager.io/en/latest/)
## Overview
@@ -56,6 +57,7 @@ Reference](https://www.openpolicyagent.org/docs/configuration.html).
| Parameter | Description | Default |
| --- | --- | --- |
+| `certManager.enabled` | Setup the Webhook using cert-manager | `false` |
| `admissionControllerKind` | Type of admission controller to install. | `ValidatingWebhookConfiguration` |
| `admissionControllerFailurePolicy` | Fail-open (`Ignore`) or fail-closed (`Fail`)? | `Ignore` |
| `admissionControllerRules` | Types of operations resources to check. | `*` |
@@ -63,12 +65,20 @@ Reference](https://www.openpolicyagent.org/docs/configuration.html).
| `admissionControllerCA` | Manually set admission controller certificate CA. | Unset |
| `admissionControllerCert` | Manually set admission controller certificate. | Unset |
| `admissionControllerKey` | Manually set admission controller key. | Unset |
+| `podDisruptionBudget.enabled` | Enables creation of a PodDisruptionBudget for OPA. | `false` |
+| `podDisruptionBudget.minAvailable` | Sets the minimum number of pods to be available. Cannot be set at the same time as maxUnavailable. | `1` |
+| `podDisruptionBudget.maxUnavailable` | Sets the maximum number of pods to be unavailable. Cannot be set at the same time as minAvailable. | Unset |
| `image` | OPA image to deploy. | `openpolicyagent/opa` |
| `imageTag` | OPA image tag to deploy. | See [values.yaml](values.yaml) |
+| `port` | Port in the pod to which OPA will bind itself. | `443` |
+| `logLevel` | Log level that OPA outputs at (`debug`, `info` or `error`) | `info` |
+| `logFormat` | Log format that OPA produces (`text` or `json`) | `text` |
| `replicas` | Number of admission controller replicas to deploy. | `1` |
| `tolerations` | List of node taint tolerations. | `[]` |
| `nodeSelector` | Node labels for pod assignment. | `{}` |
-| `resources` | CPU and memory limits for OPA Pod. | `{}` |
+| `resources` | CPU and memory limits for OPA container. | `{}` |
| `readinessProbe` | HTTP readiness probe for OPA container. | See [values.yaml](values.yaml) |
| `livenessProbe` | HTTP liveness probe for OPA container. | See [values.yaml](values.yaml) |
| `opa` | OPA configuration. | See [values.yaml](values.yaml) |
+| `mgmt.resources` | CPU and memory limits for the kube-mgmt container. | `{}` |
+| `sar.resources` | CPU and memory limits for the sar container. | `{}` |
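The new `podDisruptionBudget` parameters documented above map to a values fragment like the following (a sketch; set only one of the two thresholds):

```yaml
podDisruptionBudget:
  enabled: true
  minAvailable: 1
  # maxUnavailable: 1   # mutually exclusive with minAvailable
```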
diff --git a/stable/opa/templates/NOTES.txt b/stable/opa/templates/NOTES.txt
index 1264bb6da2c8..c3a596db5355 100644
--- a/stable/opa/templates/NOTES.txt
+++ b/stable/opa/templates/NOTES.txt
@@ -2,14 +2,6 @@ Please wait while the OPA is deployed on your cluster.
For example policies that you can enforce with OPA see https://www.openpolicyagent.org.
-You can query OPA to see the policies it has loaded:
-
-export OPA_POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app={{ template "opa.fullname" . }}" -o jsonpath="{.items[0].metadata.name}")
-
-kubectl port-forward $OPA_POD_NAME 8080:443
-
-curl -k -s https://localhost:8080/v1/policies | jq -r '.result[].raw'
-
If you installed this chart with the default values, you can exercise the sample policy.
# 1. Create a namespace called "opa-example"
@@ -53,3 +45,15 @@ spec:
EOF
kubectl -n opa-example create -f ingress-bad.yaml
+
+If you want to turn off authz for debugging purposes, you can do so by upgrading the chart like so:
+helm upgrade {{ .Release.Name }} stable/opa --reuse-values --set authz.enabled=false
+
+You can query OPA to see the policies it has loaded (you will need to turn off authz as described above):
+
+export OPA_POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app={{ template "opa.fullname" . }}" -o jsonpath="{.items[0].metadata.name}")
+
+kubectl port-forward $OPA_POD_NAME 8080:443
+
+curl -k -s https://localhost:8080/v1/policies | jq -r '.result[].raw'
+
diff --git a/stable/opa/templates/_helpers.tpl b/stable/opa/templates/_helpers.tpl
index 99cdea89e482..1e98a8d00caf 100644
--- a/stable/opa/templates/_helpers.tpl
+++ b/stable/opa/templates/_helpers.tpl
@@ -29,6 +29,11 @@ If release name contains chart name it will be used as a full name.
{{- printf "%s-sar" $name -}}
{{- end -}}
+{{- define "opa.mgmtfullname" -}}
+{{- $name := (include "opa.fullname" . | trunc 58 | trimSuffix "-") -}}
+{{- printf "%s-mgmt" $name -}}
+{{- end -}}
+
{{/*
Create chart name and version as used by the chart label.
*/}}
@@ -56,3 +61,19 @@ Create the name of the service account to use
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
{{- end -}}
+
+{{- define "opa.selfSignedIssuer" -}}
+{{ printf "%s-selfsign" (include "opa.fullname" .) }}
+{{- end -}}
+
+{{- define "opa.rootCAIssuer" -}}
+{{ printf "%s-ca" (include "opa.fullname" .) }}
+{{- end -}}
+
+{{- define "opa.rootCACertificate" -}}
+{{ printf "%s-ca" (include "opa.fullname" .) }}
+{{- end -}}
+
+{{- define "opa.servingCertificate" -}}
+{{ printf "%s-webhook-tls" (include "opa.fullname" .) }}
+{{- end -}}
diff --git a/stable/opa/templates/deployment.yaml b/stable/opa/templates/deployment.yaml
index 7bea7c6c39ab..270d64955f80 100644
--- a/stable/opa/templates/deployment.yaml
+++ b/stable/opa/templates/deployment.yaml
@@ -15,26 +15,106 @@ spec:
app: {{ template "opa.fullname" . }}
name: {{ template "opa.fullname" . }}
spec:
+{{- if .Values.authz.enabled }}
+ initContainers:
+ - name: initpolicy
+ image: {{ .Values.mgmt.image }}:{{ .Values.mgmt.imageTag }}
+ imagePullPolicy: {{ .Values.mgmt.imagePullPolicy }}
+ resources:
+{{ toYaml .Values.mgmt.resources | indent 12 }}
+ command:
+ - /bin/sh
+ - -c
+ - |
+ tr -dc 'A-F0-9' < /dev/urandom | dd bs=1 count=32 2>/dev/null > /authz/mgmt-token
+ TOKEN=`cat /authz/mgmt-token`
+ cat > /authz/authz.rego <&1
+ volumeMounts:
+ - name: webhook-certs
+ mountPath: /etc/webhook/certs
+ readOnly: true
+ volumes:
+ - name: webhook-certs
+ secret:
+ secretName: admission-server-certs
diff --git a/stable/openebs/templates/deployment-local-provisioner.yaml b/stable/openebs/templates/deployment-local-provisioner.yaml
new file mode 100644
index 000000000000..82d95c78e5a4
--- /dev/null
+++ b/stable/openebs/templates/deployment-local-provisioner.yaml
@@ -0,0 +1,72 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: {{ template "openebs.fullname" . }}-localpv-provisioner
+ labels:
+ app: {{ template "openebs.name" . }}
+ chart: {{ template "openebs.chart" . }}
+ release: {{ .Release.Name }}
+ heritage: {{ .Release.Service }}
+ component: localpv-provisioner
+ openebs.io/component-name: openebs-localpv-provisioner
+spec:
+ replicas: {{ .Values.provisioner.replicas }}
+ selector:
+ matchLabels:
+ app: {{ template "openebs.name" . }}
+ release: {{ .Release.Name }}
+ template:
+ metadata:
+ labels:
+ app: {{ template "openebs.name" . }}
+ release: {{ .Release.Name }}
+ component: localpv-provisioner
+ openebs.io/version: {{ .Values.release.version }}
+ spec:
+ serviceAccountName: {{ template "openebs.serviceAccountName" . }}
+ containers:
+ - name: {{ template "openebs.name" . }}-localpv-provisioner
+ image: "{{ .Values.localprovisioner.image }}:{{ .Values.localprovisioner.imageTag }}"
+ imagePullPolicy: {{ .Values.image.pullPolicy }}
+ env:
+ # OPENEBS_IO_K8S_MASTER enables openebs provisioner to connect to K8s
+ # based on this address. This is ignored if empty.
+ # This is supported for openebs provisioner version 0.5.2 onwards
+ #- name: OPENEBS_IO_K8S_MASTER
+ # value: "http://10.128.0.12:8080"
+ # OPENEBS_IO_KUBE_CONFIG enables openebs provisioner to connect to K8s
+ # based on this config. This is ignored if empty.
+ # This is supported for openebs provisioner version 0.5.2 onwards
+ #- name: OPENEBS_IO_KUBE_CONFIG
+ # value: "/home/ubuntu/.kube/config"
+ # OPENEBS_NAMESPACE is the namespace that this provisioner will
+ # lookup to find maya api service
+ - name: OPENEBS_NAMESPACE
+ value: "{{ .Release.Namespace }}"
+ - name: NODE_NAME
+ valueFrom:
+ fieldRef:
+ fieldPath: spec.nodeName
+ # OPENEBS_IO_BASE_PATH is the environment variable that provides the
+ # default base path on the node where host-path PVs will be provisioned.
+ - name: OPENEBS_IO_BASE_PATH
+ value: "{{ .Values.localprovisioner.basePath }}"
+ livenessProbe:
+ exec:
+ command:
+ - pgrep
+ - ".*localpv"
+ initialDelaySeconds: {{ .Values.localprovisioner.healthCheck.initialDelaySeconds }}
+ periodSeconds: {{ .Values.localprovisioner.healthCheck.periodSeconds }}
+{{- if .Values.localprovisioner.nodeSelector }}
+ nodeSelector:
+{{ toYaml .Values.localprovisioner.nodeSelector | indent 8 }}
+{{- end }}
+{{- if .Values.localprovisioner.tolerations }}
+ tolerations:
+{{ toYaml .Values.localprovisioner.tolerations | indent 8 }}
+{{- end }}
+{{- if .Values.localprovisioner.affinity }}
+ affinity:
+{{ toYaml .Values.localprovisioner.affinity | indent 8 }}
+{{- end }}
diff --git a/stable/openebs/templates/deployment-maya-apiserver.yaml b/stable/openebs/templates/deployment-maya-apiserver.yaml
index c7c6992e9394..5414ee30870d 100644
--- a/stable/openebs/templates/deployment-maya-apiserver.yaml
+++ b/stable/openebs/templates/deployment-maya-apiserver.yaml
@@ -9,6 +9,7 @@ metadata:
heritage: {{ .Release.Service }}
component: apiserver
name: maya-apiserver
+ openebs.io/component-name: maya-apiserver
spec:
replicas: {{ .Values.apiserver.replicas }}
selector:
@@ -21,6 +22,8 @@ spec:
app: {{ template "openebs.name" . }}
release: {{ .Release.Name }}
component: apiserver
+ name: maya-apiserver
+ openebs.io/version: {{ .Values.release.version }}
spec:
serviceAccountName: {{ template "openebs.serviceAccountName" . }}
containers:
@@ -40,15 +43,11 @@ spec:
# This is supported for maya api server version 0.5.2 onwards
#- name: OPENEBS_IO_K8S_MASTER
# value: "http://172.28.128.3:8080"
-{{- if .Values.ndm.sparse }}
-{{- if .Values.ndm.sparse.enabled }}
# OPENEBS_IO_INSTALL_DEFAULT_CSTOR_SPARSE_POOL decides whether default cstor sparse pool should be
# configured as a part of openebs installation.
# If "true" a default cstor sparse pool will be configured, if "false" it will not be configured.
- name: OPENEBS_IO_INSTALL_DEFAULT_CSTOR_SPARSE_POOL
- value: "{{ .Values.ndm.sparse.enabled }}"
-{{- end }}
-{{- end }}
+ value: "{{ .Values.apiserver.sparse.enabled }}"
# OPENEBS_NAMESPACE provides the namespace of this deployment as an
# environment variable
- name: OPENEBS_NAMESPACE
@@ -83,6 +82,8 @@ spec:
value: "{{ .Values.cstor.volumeMgmt.image }}:{{ .Values.cstor.volumeMgmt.imageTag }}"
- name: OPENEBS_IO_VOLUME_MONITOR_IMAGE
value: "{{ .Values.policies.monitoring.image }}:{{ .Values.policies.monitoring.imageTag }}"
+ - name: OPENEBS_IO_CSTOR_POOL_EXPORTER_IMAGE
+ value: "{{ .Values.policies.monitoring.image }}:{{ .Values.policies.monitoring.imageTag }}"
# OPENEBS_IO_ENABLE_ANALYTICS if set to true sends anonymous usage
# events to Google Analytics
- name: OPENEBS_IO_ENABLE_ANALYTICS
@@ -91,6 +92,13 @@ spec:
# for periodic ping events sent to Google Analytics. Default is 24 hours.
- name: OPENEBS_IO_ANALYTICS_PING_INTERVAL
value: "{{ .Values.analytics.pingInterval }}"
+ livenessProbe:
+ exec:
+ command:
+ - /usr/local/bin/mayactl
+ - version
+ initialDelaySeconds: {{ .Values.apiserver.healthCheck.initialDelaySeconds }}
+ periodSeconds: {{ .Values.apiserver.healthCheck.periodSeconds }}
{{- if .Values.apiserver.nodeSelector }}
nodeSelector:
{{ toYaml .Values.apiserver.nodeSelector | indent 8 }}
diff --git a/stable/openebs/templates/deployment-maya-provisioner.yaml b/stable/openebs/templates/deployment-maya-provisioner.yaml
index 7ac74202c6e4..7e3ba6de4f3a 100644
--- a/stable/openebs/templates/deployment-maya-provisioner.yaml
+++ b/stable/openebs/templates/deployment-maya-provisioner.yaml
@@ -8,6 +8,7 @@ metadata:
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
component: provisioner
+ openebs.io/component-name: openebs-provisioner
spec:
replicas: {{ .Values.provisioner.replicas }}
selector:
@@ -20,6 +21,7 @@ spec:
app: {{ template "openebs.name" . }}
release: {{ .Release.Name }}
component: provisioner
+ openebs.io/version: {{ .Values.release.version }}
spec:
serviceAccountName: {{ template "openebs.serviceAccountName" . }}
containers:
@@ -46,7 +48,7 @@ spec:
fieldRef:
fieldPath: spec.nodeName
# OPENEBS_MAYA_SERVICE_NAME provides the maya-apiserver K8s service name,
- # that provisioner should forward the volume creaet/delete requests.
+ # that provisioner should forward the volume create/delete requests.
# If not present, "maya-apiserver-service" will be used for lookup.
# This is supported for openebs provisioner version 0.5.3-RC1 onwards
- name: OPENEBS_MAYA_SERVICE_NAME
@@ -59,6 +61,13 @@ spec:
# value: "{{ .Values.provisioner.monitorVolumeKey }}"
#- name: MAYA_PORTAL_URL
# value: "{{ .Values.provisioner.mayaPortalUrl }}"
+ livenessProbe:
+ exec:
+ command:
+ - pgrep
+ - ".*openebs"
+ initialDelaySeconds: {{ .Values.provisioner.healthCheck.initialDelaySeconds }}
+ periodSeconds: {{ .Values.provisioner.healthCheck.periodSeconds }}
{{- if .Values.provisioner.nodeSelector }}
nodeSelector:
{{ toYaml .Values.provisioner.nodeSelector | indent 8 }}
diff --git a/stable/openebs/templates/deployment-maya-snapshot-operator.yaml b/stable/openebs/templates/deployment-maya-snapshot-operator.yaml
index ead94e232bae..270554eda445 100644
--- a/stable/openebs/templates/deployment-maya-snapshot-operator.yaml
+++ b/stable/openebs/templates/deployment-maya-snapshot-operator.yaml
@@ -8,6 +8,7 @@ metadata:
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
component: snapshot-operator
+ openebs.io/component-name: openebs-snapshot-operator
spec:
replicas: {{ .Values.snapshotOperator.replicas }}
selector:
@@ -22,6 +23,7 @@ spec:
app: {{ template "openebs.name" . }}
release: {{ .Release.Name }}
component: snapshot-operator
+ openebs.io/version: {{ .Values.release.version }}
spec:
serviceAccountName: {{ template "openebs.serviceAccountName" . }}
containers:
@@ -53,6 +55,13 @@ spec:
# This is supported for openebs snapshot controller version 0.6-RC1 onwards
- name: OPENEBS_MAYA_SERVICE_NAME
value: "{{ template "openebs.fullname" . }}-apiservice"
+ livenessProbe:
+ exec:
+ command:
+ - pgrep
+ - ".*controller"
+ initialDelaySeconds: {{ .Values.snapshotOperator.healthCheck.initialDelaySeconds }}
+ periodSeconds: {{ .Values.snapshotOperator.healthCheck.periodSeconds }}
- name: {{ template "openebs.name" . }}-snapshot-provisioner
image: "{{ .Values.snapshotOperator.provisioner.image }}:{{ .Values.snapshotOperator.provisioner.imageTag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
@@ -81,6 +90,13 @@ spec:
# This is supported for openebs snapshot provisioner version 0.6-RC1 onwards
- name: OPENEBS_MAYA_SERVICE_NAME
value: "{{ template "openebs.fullname" . }}-apiservice"
+ livenessProbe:
+ exec:
+ command:
+ - pgrep
+ - ".*provisioner"
+ initialDelaySeconds: {{ .Values.snapshotOperator.healthCheck.initialDelaySeconds }}
+ periodSeconds: {{ .Values.snapshotOperator.healthCheck.periodSeconds }}
{{- if .Values.snapshotOperator.nodeSelector }}
nodeSelector:
{{ toYaml .Values.snapshotOperator.nodeSelector | indent 8 }}
diff --git a/stable/openebs/templates/service-admission-server.yaml b/stable/openebs/templates/service-admission-server.yaml
new file mode 100644
index 000000000000..4bb77cf96ca3
--- /dev/null
+++ b/stable/openebs/templates/service-admission-server.yaml
@@ -0,0 +1,15 @@
+apiVersion: v1
+kind: Service
+metadata:
+ name: admission-server-svc
+ labels:
+ app: admission-webhook
+ chart: {{ template "openebs.chart" . }}
+ release: {{ .Release.Name }}
+ heritage: {{ .Release.Service }}
+spec:
+ ports:
+ - port: 443
+ targetPort: 443
+ selector:
+ app: admission-webhook
diff --git a/stable/openebs/templates/validationwebhook.yaml b/stable/openebs/templates/validationwebhook.yaml
new file mode 100644
index 000000000000..5089aee11ec8
--- /dev/null
+++ b/stable/openebs/templates/validationwebhook.yaml
@@ -0,0 +1,60 @@
+{{- $ca := genCA "admission-server-ca" 3650 }}
+{{- $cn := printf "admission-server-svc" }}
+{{- $altName1 := printf "admission-server-svc.%s" .Release.Namespace }}
+{{- $altName2 := printf "admission-server-svc.%s.svc" .Release.Namespace }}
+{{- $cert := genSignedCert $cn nil (list $altName1 $altName2) 3650 $ca }}
+---
+apiVersion: admissionregistration.k8s.io/v1beta1
+kind: ValidatingWebhookConfiguration
+metadata:
+ name: openebs-validation-webhook-cfg
+ labels:
+ app: {{ template "openebs.name" . }}
+ chart: {{ template "openebs.chart" . }}
+ release: {{ .Release.Name }}
+ heritage: {{ .Release.Service }}
+ component: admission-webhook
+webhooks:
+ - name: admission-webhook.openebs.io
+ clientConfig:
+ service:
+ name: admission-server-svc
+ namespace: {{ .Release.Namespace }}
+ path: "/validate"
+{{- if .Values.webhook.generateTLS }}
+ caBundle: {{ b64enc $ca.Cert }}
+{{- else }}
+ caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURpekNDQW5PZ0F3SUJBZ0lKQUk5NG9wdWdKb1drTUEwR0NTcUdTSWIzRFFFQkN3VUFNRnd4Q3pBSkJnTlYKQkFZVEFuaDRNUW93Q0FZRFZRUUlEQUY0TVFvd0NBWURWUVFIREFGNE1Rb3dDQVlEVlFRS0RBRjRNUW93Q0FZRApWUVFMREFGNE1Rc3dDUVlEVlFRRERBSmpZVEVRTUE0R0NTcUdTSWIzRFFFSkFSWUJlREFlRncweE9UQXpNREl3Ck56TXlOREZhRncweU1EQXpNREV3TnpNeU5ERmFNRnd4Q3pBSkJnTlZCQVlUQW5oNE1Rb3dDQVlEVlFRSURBRjQKTVFvd0NBWURWUVFIREFGNE1Rb3dDQVlEVlFRS0RBRjRNUW93Q0FZRFZRUUxEQUY0TVFzd0NRWURWUVFEREFKagpZVEVRTUE0R0NTcUdTSWIzRFFFSkFSWUJlRENDQVNJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dFUEFEQ0NBUW9DCmdnRUJBT0pxNmI2dnI0cDMzM3FRaHJQbmNCVFVIUE1ESnJtaEYvOU44NjZodzFvOGZLclFwNkJmRkcvZEQ0N2gKVGcvWnJ0U2VHT0NoRjFxSEk1dGp3SlVEeGphSUM3U0FkZGpxb1pJUGFoT1pjVlpxZE1POVVFTlFUbktIRXczVQpCUjJUaHdydi9QTTRxZitUazdRa1J6Y2VJQXg1VS9lbUlEV2t4NEk3RlRYQk1XT1hGUTNoRlFtWFppZHpHN21mCnZJTlhYN0krOHR3QVM0alNSdGhxYjVUTzMwYmpxQTFzY0RRdXlZU2R6OVg5TGw1WU1QSUtSZHpnYUR1d1Q5QkQKZjNxT1VqazN6M1FZd0IvWmowaXJtQlpKejJla0V3a1QxbWlyUHF2NTA5QVJ5V1U2QUlSSTN6dnB6S2tWeFJUaApmcUROa1M5SmRRV1Q3RW9vN2lITmRtZlhOYmtDQXdFQUFhTlFNRTR3SFFZRFZSME9CQllFRk1ORzZGeGlMYWFmCjFld2w1RDd1SXJiK0UrSE9NQjhHQTFVZEl3UVlNQmFBRk1ORzZGeGlMYWFmMWV3bDVEN3VJcmIrRStIT01Bd0cKQTFVZEV3UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFHQnYxeC92OWRnWU1ZY1h5TU9MUUNENgpVZWNsS3YzSFRTVGUybXZQcTZoTW56K0ExOGF6RWhPU0xONHZuQUNSd2pzRmVobWIrWk9wMVlYWDkzMi9OckRxCk1XUmh1bENiblFndjlPNVdHWXBDQUR1dnBBMkwyT200aU50S0FucUpGNm5ubHI1UFdQZnVJelB1eVlvQUpKRDkKSFpZRjVwa2hac0EwdDlUTDFuUmdPbFY4elZ0eUg2TTVDWm5nSEpjWG9CWlVvSlBvcGJsc3BpUnh6dzBkMUU0SgpUVmVHaXZFa0RJNFpFYTVuTzZyTUZzcXJ1L21ydVQwN1FCaWd5ZzlEY3h0QU5TUTczQUhOemNRUWpZMWg3L2RiCmJ6QXQ2aWxNZXZKc2lpVFlGYjRPb0dIVW53S2tTQUJuazFNQW5oUUhvYUNuS2dXZE1vU3orQWVuYkhzYXJSMD0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
+{{- end }}
+ rules:
+ - operations: [ "CREATE", "DELETE" ]
+ apiGroups: ["*"]
+ apiVersions: ["*"]
+ resources: ["persistentvolumeclaims"]
+---
+apiVersion: v1
+kind: Secret
+metadata:
+ name: admission-server-certs
+ namespace: {{ .Release.Namespace }}
+ labels:
+ app: {{ template "openebs.name" . }}
+ chart: {{ template "openebs.chart" . }}
+ release: {{ .Release.Name }}
+ heritage: {{ .Release.Service }}
+ # Helm hook annotations ensure that the certs are
+ # generated only on chart install. This prevents the
+ # certs from being overwritten whenever the chart's
+ # released instance is upgraded.
+ annotations:
+ "helm.sh/hook": "pre-install"
+ "helm.sh/hook-delete-policy": "before-hook-creation"
+type: Opaque
+data:
+{{- if .Values.webhook.generateTLS }}
+ cert.pem: {{ b64enc $cert.Cert }}
+ key.pem: {{ b64enc $cert.Key }}
+{{- else }}
+ cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQ3VENDQXRXZ0F3SUJBZ0lVYk84NS9JR0ZXYTA2Vm11WVdTWjdxaTUybmRRd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1hERUxNQWtHQTFVRUJoTUNlSGd4Q2pBSUJnTlZCQWdNQVhneENqQUlCZ05WQkFjTUFYZ3hDakFJQmdOVgpCQW9NQVhneENqQUlCZ05WQkFzTUFYZ3hDekFKQmdOVkJBTU1BbU5oTVJBd0RnWUpLb1pJaHZjTkFRa0JGZ0Y0Ck1CNFhEVEU1TURNd01qQTNNek13TUZvWERUSXdNRE13TVRBM01qYzFNbG93S3pFcE1DY0dBMVVFQXhNZ1lXUnQKYVhOemFXOXVMWE5sY25abGNpMXpkbU11YjNCbGJtVmljeTV6ZG1Nd2dnRWlNQTBHQ1NxR1NJYjNEUUVCQVFVQQpBNElCRHdBd2dnRUtBb0lCQVFERk5MRE1xKzd6eFZidDNPcnFhaVUyOFB6K25ZeFRCblA0NVhFWGFjSUpPWG1aClM1c2ZjMjM3WVNWS0I5Tlp4cXNYT08wcXpWb0xtNlZ0UDJjREpWZGZIVUQ0QXBZSC94UVBVTktrcFg3K0NVTFEKZ3VBNWowOXozdkFaeDJidXBTaXFFdE1mVldqNkh5V0Jyd2FuZW9IaVVXVVdpbmtnUXpCQzR1SWtiRkE2djYrZwp4ZzAwS09TY2NFRWY3eU5McjBvejBKVHRpRm1aS1pVVVBwK3N3WTRpRTZ3RER5bVVnTmY4SW8wUEExVkQ1TE9vCkFwQ0l2WDJyb1RNd3VkR1VrZUc1VTA2OWIrMWtQMEJsUWdDZk9TQTBmZEN3Snp0aWE1aHpaUlVIWGxFOVArN0kKekgyR0xXeHh1aHJPTlFmT25HcVRiUE13UmowekZIdmcycUo1azJ2VkFnTUJBQUdqZ2Rjd2dkUXdEZ1lEVlIwUApBUUgvQkFRREFnV2dNQk1HQTFVZEpRUU1NQW9HQ0NzR0FRVUZCd01CTUF3R0ExVWRFd0VCL3dRQ01BQXdIUVlEClZSME9CQllFRklnOVFSOSsyVW12THQwQXY4MlYwZml0bU81WE1COEdBMVVkSXdRWU1CYUFGTU5HNkZ4aUxhYWYKMWV3bDVEN3VJcmIrRStIT01GOEdBMVVkRVFSWU1GYUNGR0ZrYldsemMybHZiaTF6WlhKMlpYSXRjM1pqZ2h4aApaRzFwYzNOcGIyNHRjMlZ5ZG1WeUxYTjJZeTV2Y0dWdVpXSnpnaUJoWkcxcGMzTnBiMjR0YzJWeWRtVnlMWE4yCll5NXZjR1Z1WldKekxuTjJZekFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBSlpJRzd2d0RYaWxhWUFCS1Brc0oKZVJtdml4ZnYybTRVTVdzdlBKVVVJTXhHbzhtc1J6aWhBRjVuTExzaURKRDl4MjhraXZXaGUwbWE4aWVHYjY5Sgp1U1N4bys0OStaV3NVaTB3UlRDMi9ZWGlkWS9xNDU2c1g4ck9qQURDZlFUcFpYc2ZyekVWa2Q4NE0zdU5GTmhnCnMyWmxJMnNDTWljYXExNWxIWEh3akFkY2FqZit1VklwOXNHUElsMUhmZFcxWVFLc0NoU3dhdi80NUZJcFlMSVYKM3hiS2ZIbmh2czhJck5ZbTVIenAvVVdvcFN1Tm5tS1IwWGo3cXpGcllUYzV3eHZ3VVZrKzVpZFFreWMwZ0RDcApGbkFVdEdmaUVUQnBhU3pISjQ4STZqUFpneVE0NzlZMmRxRUtXcWtyc0RkZ2tVcXlnNGlQQ0YwWC9YVU9YU3VGClNnPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
+ key.pem: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBeFRTd3pLdnU4OFZXN2R6cTZtb2xOdkQ4L3AyTVV3WnorT1Z4RjJuQ0NUbDVtVXViCkgzTnQrMkVsU2dmVFdjYXJGemp0S3MxYUM1dWxiVDluQXlWWFh4MUErQUtXQi84VUQxRFNwS1YrL2dsQzBJTGcKT1k5UGM5N3dHY2RtN3FVb3FoTFRIMVZvK2g4bGdhOEdwM3FCNGxGbEZvcDVJRU13UXVMaUpHeFFPcit2b01ZTgpOQ2prbkhCQkgrOGpTNjlLTTlDVTdZaFptU21WRkQ2ZnJNR09JaE9zQXc4cGxJRFgvQ0tORHdOVlErU3pxQUtRCmlMMTlxNkV6TUxuUmxKSGh1Vk5PdlcvdFpEOUFaVUlBbnprZ05IM1FzQ2M3WW11WWMyVVZCMTVSUFQvdXlNeDkKaGkxc2Nib2F6alVIenB4cWsyenpNRVk5TXhSNzROcWllWk5yMVFJREFRQUJBb0lCQVFDcXRIT2VsKzRlUWVKLwp3RTN4WUxTYUhIMURnZWxvTFJ2U2hmb2hSRURjYjA0ZExsODNHRnBKMGN2UGkzcWVLZVVNRXhEcGpoeTJFNk5kCk1CYmhtRDlMYkMxREFpb1EvZkxGVnpjZm9zcU02RU5YN3hKZGdQcEwyTjJKMHh2ODFDYWhJZTV6SHlIaDhYZ3MKQysvOHBZVXMvVHcrQ052VTI1UTVNZUNEbXViUUVuemJqQ3lIQm5SVmw1dVF6bk8zWEt2NEVyejdBT1BBWmFJTQozYmNFNC83c1JGczM4SE1aMVZTZ2JxUi9rM1N5SEFzNXhNWHVtY0hMMTBkK0FVK21BQ0svUThpdWJHMm9kNnJiCko3S0RONmFuUzRPZk4zZ3RtaEppN3ZsTjJVL3JycHdnblI0d3Y0bmV4U1ZlamYzQU9iaU9jNnYzZ0xJbXJ2Q3oKNzFETDFPaTVBb0dCQU9HeFp2RWFUSFFnNFdaQVJZbXlGZEtZeXY2MURDc1JycElmUlh3Q1YrcnBZTFM2NlV4SQprWHJISlNreWFqTjNTOXVsZUtUTXRWaU5wY2JCcjVNZ0lOaFFvdThRc2dpZlZHWFJGQ3d0OXJ3MGNDbEc1Y2pCClZ3bUQzYWFBTGR5WVQvbHc4dnk1Zndqc1hFZHd1OEQ2cC9rd0ZzMmlwZWQ4QVFPUVZlQ1dPeXF6QW9HQkFOK3YKL2VxKzZ5NHhPZ2ZtQ01KcHJ0THBBN1J0M3FsU0JKbEw3RkNsQXRCeUUxazBPTVIrZTdhSDBVTDdYWVR4YlBLOApBYnRZR3lzWDkydGM3RHlaU0k0cDFjUHhvcHdzNkt3N0RYZUt0YTNnVkRmSXVuZ3haR25XWjk2WmNjcEhyVzgyCnl5OTk5dTQ2WE1tQWZwSzEvbGxjdGdLem5FUVp5ZkhEUmlWdVVQTlhBb0dCQUxkMGxORDNKNTVkKzlvNTlFeHgKVGZ2WjUyZ1Rrc2lQbnU5NEsrc1puSTEvRnZUUjJrSC8yd0dLVDFLbGdGNUZZb3d3ZlZpNGJkQ0ZrM04walZ0eQppa0JMaTZYNFZEOWVCQ1NmUjE2Q0hrWHQraDRUVzBWTW80dEFmVE9TamJUNnVrZHc0Sk05MVYxVGc4OHVlKy9wCjBCQm1YcUxZeXpMWFFadTcvNUtIaTZDeEFvR0FaTWV2R0E5eWVEcFhrZTF6THR4Y2xzdkREb3lkMEIyUzB0cGgKR3lodEx5cm1Tcjk3Z0JRWWV2R1FONlIyeXduVzh6bi9jYi9OWmNvRGdFeTZac2NNNkhneXhuaGNzZzZOdWVOVgpPdkcwenlVTjdLQTBXeWl0dS8yTWlMOExoSDVzeG5taWE4Qk4rNkV4NHR0UXE1cnhnS09Eb1kzNHJyb0x3VEFnCnI0YVhWRHNDZ1lBYnRwZXhvNTJ4VmJkTzZCL3B5R1UU2cEJCS1FkK3hiVkJNMDZwUzArSlFudSt5SVBmeXFhekwKbGdYTEhBSm01bU9Sb2RFRHk0WlVJRkM5RmhraGcrV0ZzSHJCOXpGU1IrZFc2Uzg1eFA4ZGxHVE42S2cydXJNQQowNTRCQUh4RWhPNU9QblNqT0VHSmQwYTdGQmc1UlkxN0RRQlFxV25SZENURHlDWmU0OStLcWc9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
+{{- end }}
diff --git a/stable/openebs/values.yaml b/stable/openebs/values.yaml
index 3811eb40fd31..07cbceb88349 100644
--- a/stable/openebs/values.yaml
+++ b/stable/openebs/values.yaml
@@ -10,46 +10,71 @@ serviceAccount:
create: true
name:
+release:
+ # "openebs.io/version" label for control plane components
+ version: "0.9.0"
+
image:
pullPolicy: IfNotPresent
apiserver:
image: "quay.io/openebs/m-apiserver"
- imageTag: "0.8.0"
+ imageTag: "0.9.0"
replicas: 1
ports:
externalPort: 5656
internalPort: 5656
+ sparse:
+ enabled: "false"
nodeSelector: {}
tolerations: []
affinity: {}
+ healthCheck:
+ initialDelaySeconds: 30
+ periodSeconds: 60
provisioner:
image: "quay.io/openebs/openebs-k8s-provisioner"
- imageTag: "0.8.0"
+ imageTag: "0.9.0"
+ replicas: 1
+ nodeSelector: {}
+ tolerations: []
+ affinity: {}
+ healthCheck:
+ initialDelaySeconds: 30
+ periodSeconds: 60
+
+localprovisioner:
+ image: "quay.io/openebs/provisioner-localpv"
+ imageTag: "0.9.0"
replicas: 1
nodeSelector: {}
tolerations: []
affinity: {}
+ healthCheck:
+ initialDelaySeconds: 30
+ periodSeconds: 60
snapshotOperator:
controller:
image: "quay.io/openebs/snapshot-controller"
- imageTag: "0.8.0"
+ imageTag: "0.9.0"
provisioner:
image: "quay.io/openebs/snapshot-provisioner"
- imageTag: "0.8.0"
+ imageTag: "0.9.0"
replicas: 1
upgradeStrategy: "Recreate"
nodeSelector: {}
tolerations: []
affinity: {}
+ healthCheck:
+ initialDelaySeconds: 30
+ periodSeconds: 60
ndm:
image: "quay.io/openebs/node-disk-manager-amd64"
- imageTag: "v0.2.0"
+ imageTag: "v0.3.5"
sparse:
- enabled: "true"
path: "/var/openebs/sparse"
size: "10737418240"
count: "1"
@@ -57,31 +82,43 @@ ndm:
excludeVendors: "CLOUDBYT,OpenEBS"
excludePaths: "loop,fd0,sr0,/dev/ram,/dev/dm-,/dev/md"
nodeSelector: {}
+ healthCheck:
+ initialDelaySeconds: 30
+ periodSeconds: 60
+
+webhook:
+ image: "quay.io/openebs/admission-server"
+ imageTag: "0.9.0"
+ generateTLS: true
+ replicas: 1
+ nodeSelector: {}
+ tolerations: []
+ affinity: {}
jiva:
image: "quay.io/openebs/jiva"
- imageTag: "0.8.0"
+ imageTag: "0.9.0"
replicas: 3
cstor:
pool:
image: "quay.io/openebs/cstor-pool"
- imageTag: "0.8.0"
+ imageTag: "0.9.0"
poolMgmt:
image: "quay.io/openebs/cstor-pool-mgmt"
- imageTag: "0.8.0"
+ imageTag: "0.9.0"
target:
image: "quay.io/openebs/cstor-istgt"
- imageTag: "0.8.0"
+ imageTag: "0.9.0"
volumeMgmt:
image: "quay.io/openebs/cstor-volume-mgmt"
- imageTag: "0.8.0"
+ imageTag: "0.9.0"
policies:
monitoring:
enabled: true
image: "quay.io/openebs/m-exporter"
- imageTag: "0.8.0"
+ imageTag: "0.9.0"
analytics:
enabled: true
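The openebs values changes above introduce a `release.version` label, flip the apiserver `sparse.enabled` default to `"false"`, and bump every image tag to 0.9.0. A minimal override sketch for users who want the old sparse-pool behaviour back (the file name is illustrative; all keys are taken from the new values.yaml):

```yaml
# my-openebs-values.yaml -- hypothetical override file passed via `helm install -f`
release:
  version: "0.9.0"      # applied as the "openebs.io/version" label on control-plane components
apiserver:
  sparse:
    enabled: "true"     # opt back in; the chart default is now "false"
ndm:
  sparse:
    size: "21474836480" # 20 GiB sparse file instead of the 10 GiB default
```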
diff --git a/stable/openldap/Chart.yaml b/stable/openldap/Chart.yaml
index 7534db70287d..0deccc661afe 100644
--- a/stable/openldap/Chart.yaml
+++ b/stable/openldap/Chart.yaml
@@ -1,6 +1,7 @@
+apiVersion: v1
name: openldap
home: https://www.openldap.org
-version: 0.3.0
+version: 1.0.0
appVersion: 2.4.44
description: Community developed LDAP software
icon: http://www.openldap.org/images/headers/LDAPworm.gif
diff --git a/stable/openldap/README.md b/stable/openldap/README.md
index f1359049ff21..4c7c3cb722c0 100644
--- a/stable/openldap/README.md
+++ b/stable/openldap/README.md
@@ -23,35 +23,39 @@ We use the docker images provided by https://github.com/osixia/docker-openldap.
The following table lists the configurable parameters of the openldap chart and their default values.
-| Parameter | Description | Default |
-| ---------------------------------- | ------------------------------------------------------------------------- | ------------------|
-| `replicaCount` | Number of replicas | `1` |
-| `strategy` | Deployment strategy | `{}` |
-| `image.repository` | Container image repository | `osixia/openldap` |
-| `image.tag` | Container image tag | `1.1.10` |
-| `image.pullPolicy` | Container pull policy | `IfNotPresent` |
-| `extraLabels` | Labels to add to the Resources | `{}` |
-| `existingSecret` Â Â | Use an existing secret for admin and config user passwords | `""` |
-| `service.annotations` | Annotations to add to the service | `{}` |
-| `service.clusterIP` | IP address to assign to the service | `""` |
-| `service.externalIPs` | Service external IP addresses | `[]` |
-| `service.ldapPort` | External service port for LDAP | `389` |
-| `service.loadBalancerIP` | IP address to assign to load balancer (if supported) | `""` |
-| `service.loadBalancerSourceRanges` | List of IP CIDRs allowed access to load balancer (if supported) | `[]` |
-| `service.sslLdapPort` | External service port for SSL+LDAP | `636` |
-| `service.type` | Service type | `ClusterIP` |
-| `env` | List of key value pairs as env variables to be sent to the docker image. See https://github.com/osixia/docker-openldap for available ones | `[see values.yaml]` |
-| `adminPassword` | Password for admin user. Unset to auto-generate the password | None |
-| `configPassword` | Password for config user. Unset to auto-generate the password | None |
-| `customLdifFiles` | Custom ldif files to seed the LDAP server. List of filename -> data pairs | None |
-| `persistence.enabled` | Whether to use PersistentVolumes or not | `false` |
-| `persistence.storageClass` | Storage class for PersistentVolumes. | `` |
-| `persistence.accessMode` | Access mode for PersistentVolumes | `ReadWriteOnce` |
-| `persistence.size` | PersistentVolumeClaim storage size | `8Gi` |
-| `resources` | Container resource requests and limits in yaml | `{}` |
-| `test.enabled` | Conditionally provision test resources | `false` |
-| `test.image.repository` | Test container image requires bats framework | `dduportal/bats` |
-| `test.image.tag` | Test container tag | `0.4.0` |
+| Parameter | Description | Default |
+| ---------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------- | ------------------- |
+| `replicaCount` | Number of replicas | `1` |
+| `strategy` | Deployment strategy | `{}` |
+| `image.repository` | Container image repository | `osixia/openldap` |
+| `image.tag` | Container image tag | `1.1.10` |
+| `image.pullPolicy` | Container pull policy | `IfNotPresent` |
+| `extraLabels` | Labels to add to the Resources | `{}` |
+| `existingSecret` | Use an existing secret for admin and config user passwords | `""` |
+| `service.annotations` | Annotations to add to the service | `{}` |
+| `service.clusterIP` | IP address to assign to the service | `""` |
+| `service.externalIPs` | Service external IP addresses | `[]` |
+| `service.ldapPort` | External service port for LDAP | `389` |
+| `service.loadBalancerIP` | IP address to assign to load balancer (if supported) | `""` |
+| `service.loadBalancerSourceRanges` | List of IP CIDRs allowed access to load balancer (if supported) | `[]` |
+| `service.sslLdapPort` | External service port for SSL+LDAP | `636` |
+| `service.type` | Service type | `ClusterIP` |
+| `env` | List of key value pairs as env variables to be sent to the docker image. See https://github.com/osixia/docker-openldap for available ones | `[see values.yaml]` |
+| `tls.enabled` | Set to enable TLS/LDAPS - should also set `tls.secret` | `false` |
+| `tls.secret` | Secret containing TLS cert and key (eg, generated via cert-manager) | `""` |
+| `tls.CA.enabled` | Set to enable custom CA crt file - should also set `tls.CA.secret` | `false` |
+| `tls.CA.secret` | Secret containing CA certificate (ca.crt) | `""` |
+| `adminPassword` | Password for admin user. Unset to auto-generate the password | None |
+| `configPassword` | Password for config user. Unset to auto-generate the password | None |
+| `customLdifFiles` | Custom ldif files to seed the LDAP server. List of filename -> data pairs | None |
+| `persistence.enabled` | Whether to use PersistentVolumes or not | `false` |
+| `persistence.storageClass` | Storage class for PersistentVolumes. | `` |
+| `persistence.accessMode` | Access mode for PersistentVolumes | `ReadWriteOnce` |
+| `persistence.size` | PersistentVolumeClaim storage size | `8Gi` |
+| `resources` | Container resource requests and limits in yaml | `{}` |
+| `test.enabled` | Conditionally provision test resources | `false` |
+| `test.image.repository` | Test container image requires bats framework | `dduportal/bats` |
+| `test.image.tag` | Test container tag | `0.4.0` |

Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`.
diff --git a/stable/openldap/templates/deployment.yaml b/stable/openldap/templates/deployment.yaml
index 59139f32e218..46af3c113a06 100644
--- a/stable/openldap/templates/deployment.yaml
+++ b/stable/openldap/templates/deployment.yaml
@@ -31,9 +31,11 @@ spec:
app: {{ template "openldap.name" . }}
release: {{ .Release.Name }}
spec:
- {{- if .Values.customLdifFiles }}
+ {{- if or .Values.customLdifFiles .Values.tls.enabled }}
initContainers:
- - name: {{ .Chart.Name }}-init
+ {{- end }}
+ {{- if .Values.customLdifFiles }}
+ - name: {{ .Chart.Name }}-init-ldif
image: busybox
command: ['sh', '-c', 'cp /customldif/* /ldifworkingdir']
volumeMounts:
@@ -42,6 +44,26 @@ spec:
- name: ldifworkingdir
mountPath: /ldifworkingdir
{{- end }}
+ {{- if .Values.tls.enabled }}
+ - name: {{ .Chart.Name }}-init-tls
+ image: busybox
+ command: ['sh', '-c', 'cp /tls/* /certs']
+ volumeMounts:
+ - name: tls
+ mountPath: /tls
+ - name: certs
+ mountPath: /certs
+ {{- if .Values.tls.CA.enabled }}
+ - name: {{ .Chart.Name }}-init-catls
+ image: busybox
+ command: ['sh', '-c', 'cp /catls/ca.crt /certs']
+ volumeMounts:
+ - name: catls
+ mountPath: /catls
+ - name: certs
+ mountPath: /certs
+ {{- end }}
+ {{- end }}
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
@@ -70,6 +92,21 @@ spec:
- name: ldifworkingdir
mountPath: /container/service/slapd/assets/config/bootstrap/ldif/custom
{{- end }}
+ {{- if .Values.tls.enabled }}
+ - name: certs
+ mountPath: /container/service/slapd/assets/certs
+ {{- end }}
+ env:
+ {{- if .Values.tls.enabled }}
+ - name: LDAP_TLS_CRT_FILENAME
+ value: tls.crt
+ - name: LDAP_TLS_KEY_FILENAME
+ value: tls.key
+ {{- if .Values.tls.CA.enabled }}
+ - name: LDAP_TLS_CA_CRT_FILENAME
+ value: ca.crt
+ {{- end }}
+ {{- end }}
livenessProbe:
tcpSocket:
port: ldap-port
@@ -104,6 +141,19 @@ spec:
- name: ldifworkingdir
emptyDir: {}
{{- end }}
+ {{- if .Values.tls.enabled }}
+ - name: tls
+ secret:
+ secretName: {{ .Values.tls.secret }}
+ {{- if .Values.tls.CA.enabled }}
+ - name: catls
+ secret:
+ secretName: {{ .Values.tls.CA.secret }}
+ {{- end }}
+ {{- end }}
+ - name: certs
+ emptyDir:
+ medium: Memory
- name: data
{{- if .Values.persistence.enabled }}
persistentVolumeClaim:
diff --git a/stable/openldap/values.yaml b/stable/openldap/values.yaml
index f30f72056212..6de299658c1d 100644
--- a/stable/openldap/values.yaml
+++ b/stable/openldap/values.yaml
@@ -25,6 +25,13 @@ image:
# Specifies an existing secret to be used for admin and config user passwords
existingSecret: ""
+# settings for enabling TLS
+tls:
+ enabled: false
+ secret: "" # The name of a kubernetes.io/tls type secret to use for TLS
+ CA:
+ enabled: false
+ secret: "" # The name of a generic secret to use for custom CA certificate (ca.crt)
## Add additional labels to all resources
extraLabels: {}
@@ -33,7 +40,7 @@ service:
clusterIP: ""
ldapPort: 389
- sslLdapPort: 636
+ sslLdapPort: 636 # Only used if tls.enabled is true
## List of IP addresses at which the service is available
## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips
##
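The new `tls` block wires a pre-existing Kubernetes secret into the container via init containers and the `LDAP_TLS_*` environment variables. A sketch of enabling it, assuming a `kubernetes.io/tls` secret (e.g. issued by cert-manager) and an optional generic secret holding `ca.crt` — both secret names are illustrative:

```yaml
tls:
  enabled: true
  secret: openldap-tls      # hypothetical kubernetes.io/tls secret with tls.crt/tls.key
  CA:
    enabled: true
    secret: openldap-ca     # hypothetical generic secret containing ca.crt
```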
diff --git a/stable/openvpn/Chart.yaml b/stable/openvpn/Chart.yaml
index 87013c7f4e7f..482f0e37d253 100755
--- a/stable/openvpn/Chart.yaml
+++ b/stable/openvpn/Chart.yaml
@@ -3,7 +3,7 @@ description: A Helm chart to install an openvpn server inside a kubernetes clust
generation is also part of the deployment, and this chart will generate client keys
as needed.
name: openvpn
-version: 3.11.0
+version: 3.12.4
appVersion: 1.1.0
maintainers:
- name: jfelten
diff --git a/stable/openvpn/OWNERS b/stable/openvpn/OWNERS
index c5061575c3e8..cdb58fd55040 100644
--- a/stable/openvpn/OWNERS
+++ b/stable/openvpn/OWNERS
@@ -1,4 +1,6 @@
approvers:
- jfelten
+- jasongwartz
reviewers:
- jfelten
+- jasongwartz
diff --git a/stable/openvpn/README.md b/stable/openvpn/README.md
index 797a0bad1a55..5355e0aad838 100644
--- a/stable/openvpn/README.md
+++ b/stable/openvpn/README.md
@@ -18,7 +18,7 @@ Check pod status, replacing `$HELM_RELEASE` with the name of your release, via:
```bash
POD_NAME=$(kubectl get pods -l "app=openvpn,release=$HELM_RELEASE" -o jsonpath='{.items[0].metadata.name}') \
-&& kubectl log "$POD_NAME" --follow
+&& kubectl logs "$POD_NAME" --follow
```
When all components of the openvpn chart have started use the following script to generate a client key:
@@ -84,6 +84,7 @@ Parameter | Description | Default
`openvpn.dhcpOptionDomain` | Push a `dhcp-option DOMAIN` config | `true`
`openvpn.conf` | Arbitrary lines appended to the end of the server configuration file | `nil`
`openvpn.redirectGateway` | Redirect all client traffic through VPN | `true`
+`nodeSelector` | Node labels for pod assignment | `{}`
This chart has been engineered to use kube-dns and route all network traffic to kubernetes pods and services,
to disable this behaviour set `openvpn.OVPN_K8S_POD_NETWORK` and `openvpn.OVPN_K8S_POD_SUBNET` to `null`.
@@ -112,4 +113,3 @@ Certificates can be found in openvpn pod in the following files:
`/etc/openvpn/certs/pki/ca.crt`
`/etc/openvpn/certs/pki/issued/server.crt`
`/etc/openvpn/certs/pki/dh.pem`
-
diff --git a/stable/openvpn/templates/NOTES.txt b/stable/openvpn/templates/NOTES.txt
index 4d7010d38252..61a24f63b9dd 100644
--- a/stable/openvpn/templates/NOTES.txt
+++ b/stable/openvpn/templates/NOTES.txt
@@ -4,11 +4,11 @@ Please be aware that certificate generation is variable and may take some time (
Check pod status with the command:
- POD_NAME=$(kubectl get pods --namespace "{{ .Release.Namespace }}" -l app=openvpn -o jsonpath='{ .items[0].metadata.name }') && kubectl log $POD_NAME --follow
+ POD_NAME=$(kubectl get pods --namespace "{{ .Release.Namespace }}" -l app=openvpn -o jsonpath='{ .items[0].metadata.name }') && kubectl --namespace "{{ .Release.Namespace }}" logs $POD_NAME --follow
LoadBalancer ingress creation can take some time as well. Check service status with the command:
- kubectl get svc
+ kubectl --namespace "{{ .Release.Namespace }}" get svc
{{ if and (eq "NodePort" .Values.service.type) (hasKey .Values.service "nodePort") }}
You set the service type to NodePort, port {{ .Values.service.nodePort }} will be used on each node.
{{ end }}
diff --git a/stable/openvpn/templates/config-openvpn.yaml b/stable/openvpn/templates/config-openvpn.yaml
index 4276077865ab..9891ef944276 100644
--- a/stable/openvpn/templates/config-openvpn.yaml
+++ b/stable/openvpn/templates/config-openvpn.yaml
@@ -53,6 +53,34 @@ data:
configure.sh: |-
#!/bin/sh
+
+ cidr2mask() {
+ # Number of args to shift, 255..255, first non-255 byte, zeroes
+ set -- $(( 5 - ($1 / 8) )) 255 255 255 255 $(( (255 << (8 - ($1 % 8))) & 255 )) 0 0 0
+ [ $1 -gt 1 ] && shift "$1" || shift
+ echo ${1-0}.${2-0}.${3-0}.${4-0}
+ }
+
+ cidr2net() {
+ local i ip mask netOctets octets
+ ip="${1%/*}"
+ mask="${1#*/}"
+ octets=$(echo "$ip" | tr '.' '\n')
+
+ for octet in $octets; do
+ i=$((i+1))
+ if [ $i -le $(( mask / 8)) ]; then
+ netOctets="$netOctets.$octet"
+ elif [ $i -eq $(( mask / 8 +1 )) ]; then
+ netOctets="$netOctets.$((((octet / ((256 / ((2**((mask % 8)))))))) * ((256 / ((2**((mask % 8))))))))"
+ else
+ netOctets="$netOctets.0"
+ fi
+ done
+
+ echo ${netOctets#.}
+ }
+
/etc/openvpn/setup/setup-certs.sh
iptables -t nat -A POSTROUTING -s {{ .Values.openvpn.OVPN_NETWORK }}/{{ .Values.openvpn.OVPN_SUBNET }} -o eth0 -j MASQUERADE
mkdir -p /dev/net
@@ -65,9 +93,14 @@ data:
cat "${OVPN_CONFIG}"
echo ====================================
fi
- IP=$(ip route get 8.8.8.8 | awk '/8.8.8.8/ {print $NF}')
- BASEIP=`echo $IP | cut -d"." -f1-3`
- NETWORK=`echo $BASEIP".0"`
+
+ intAndIP="$(ip route get 8.8.8.8 | awk '/8.8.8.8/ {print $5 "-" $7}')"
+ int="${intAndIP%-*}"
+ ip="${intAndIP#*-}"
+ cidr="$(ip addr show dev "$int" | awk -vip="$ip" '($2 ~ ip) {print $2}')"
+
+ NETWORK="$(cidr2net $cidr)"
+ NETMASK="$(cidr2mask ${cidr#*/})"
DNS=$(cat /etc/resolv.conf | grep -v '^#' | grep nameserver | awk '{print $2}')
SEARCH=$(cat /etc/resolv.conf | grep -v '^#' | grep search | awk '{$1=""; print $0}')
FORMATTED_SEARCH=""
@@ -78,6 +111,7 @@ data:
sed 's|OVPN_K8S_SEARCH|'"${FORMATTED_SEARCH}"'|' -i /etc/openvpn/openvpn.conf
sed 's|OVPN_K8S_DNS|'"${DNS}"'|' -i /etc/openvpn/openvpn.conf
sed 's|NETWORK|'"${NETWORK}"'|' -i /etc/openvpn/openvpn.conf
+ sed 's|NETMASK|'"${NETMASK}"'|' -i /etc/openvpn/openvpn.conf
openvpn --config /etc/openvpn/openvpn.conf
openvpn.conf: |-
@@ -101,7 +135,7 @@ data:
user nobody
group nogroup
- push "route NETWORK 255.255.240.0"
+ push "route NETWORK NETMASK"
{{ if (.Values.openvpn.OVPN_K8S_POD_NETWORK) (.Values.openvpn.OVPN_K8S_POD_SUBNET) }}
push "route {{ .Values.openvpn.OVPN_K8S_POD_NETWORK }} {{ .Values.openvpn.OVPN_K8S_POD_SUBNET }}"
{{ end }}
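The `cidr2mask`/`cidr2net` helpers added to configure.sh above replace the old assumption that the pod network is a `/20` (the hard-coded `255.255.240.0`). `cidr2mask` builds a dotted-quad netmask from a prefix length by assembling nine positional parameters and shifting to the right window; `cidr2net` zeroes the host bits of an address. They can be exercised outside the container as plain shell — this is a standalone sketch reproducing the patch's functions:

```shell
#!/bin/sh
# cidr2mask: prefix length -> dotted-quad netmask.
cidr2mask() {
  # Number of args to shift, 255..255, first non-255 byte, zeroes
  set -- $(( 5 - ($1 / 8) )) 255 255 255 255 $(( (255 << (8 - ($1 % 8))) & 255 )) 0 0 0
  [ $1 -gt 1 ] && shift "$1" || shift
  echo ${1-0}.${2-0}.${3-0}.${4-0}
}

# cidr2net: "a.b.c.d/len" -> network address with host bits zeroed.
cidr2net() {
  local i ip mask netOctets octets
  ip="${1%/*}"
  mask="${1#*/}"
  octets=$(echo "$ip" | tr '.' '\n')

  for octet in $octets; do
    i=$((i+1))
    if [ $i -le $(( mask / 8 )) ]; then
      # Octet fully inside the prefix: keep it.
      netOctets="$netOctets.$octet"
    elif [ $i -eq $(( mask / 8 + 1 )) ]; then
      # Boundary octet: round down to the block size implied by the leftover bits.
      netOctets="$netOctets.$(( (octet / (256 / (2 ** (mask % 8)))) * (256 / (2 ** (mask % 8))) ))"
    else
      # Octet fully outside the prefix: zero it.
      netOctets="$netOctets.0"
    fi
  done

  echo "${netOctets#.}"
}

cidr2mask 20            # 255.255.240.0 -- the value the chart previously hard-coded
cidr2net 10.240.5.7/20  # 10.240.0.0
```

Note that `2 ** n` inside `$(( ))` is a bash/ksh extension rather than strict POSIX arithmetic, which is fine for the chart's container shell but worth knowing if porting the snippet.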
diff --git a/stable/openvpn/templates/openvpn-deployment.yaml b/stable/openvpn/templates/openvpn-deployment.yaml
index 51b656196c99..bea260715675 100644
--- a/stable/openvpn/templates/openvpn-deployment.yaml
+++ b/stable/openvpn/templates/openvpn-deployment.yaml
@@ -82,3 +82,7 @@ spec:
{{- else }}
emptyDir: {}
{{- end -}}
+ {{- if .Values.nodeSelector }}
+ nodeSelector:
+ {{ toYaml .Values.nodeSelector }}
+ {{- end }}
diff --git a/stable/openvpn/values.yaml b/stable/openvpn/values.yaml
index 9f58b8bf9d0c..cd05a36fc6af 100644
--- a/stable/openvpn/values.yaml
+++ b/stable/openvpn/values.yaml
@@ -85,3 +85,5 @@ openvpn:
# conf: |
# max-clients 100
# client-to-client
+
+nodeSelector: {}
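The new `nodeSelector` value is a plain label map passed through `toYaml` into the pod spec. A sketch pinning the openvpn server to a labeled node pool (the custom label key/value are illustrative):

```yaml
nodeSelector:
  kubernetes.io/os: linux
  node-role.example.com/vpn: "true"   # hypothetical label applied to the VPN node pool
```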
diff --git a/stable/orangehrm/Chart.yaml b/stable/orangehrm/Chart.yaml
index 7adbb70e382f..a8c11291d11e 100644
--- a/stable/orangehrm/Chart.yaml
+++ b/stable/orangehrm/Chart.yaml
@@ -1,6 +1,7 @@
+apiVersion: v1
name: orangehrm
-version: 4.0.1
-appVersion: 4.2.0-1
+version: 4.3.2
+appVersion: 4.3.1-0
description: OrangeHRM is a free HR management system that offers a wealth of modules
to suit the needs of your business.
keywords:
diff --git a/stable/orangehrm/README.md b/stable/orangehrm/README.md
index 3d08634bfb51..9c04a2773237 100644
--- a/stable/orangehrm/README.md
+++ b/stable/orangehrm/README.md
@@ -14,7 +14,7 @@ This chart bootstraps an [OrangeHRM](https://github.com/bitnami/bitnami-docker-o
It also packages the [Bitnami MariaDB chart](https://github.com/kubernetes/charts/tree/master/stable/mariadb) which is required for bootstrapping a MariaDB deployment for the database requirements of the OrangeHRM application.
-Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters.
+Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters. This chart has been tested to work with NGINX Ingress, cert-manager, fluentd and Prometheus on top of the [BKPR](https://kubeprod.io/).
## Prerequisites
@@ -50,6 +50,7 @@ The following table lists the configurable parameters of the OrangeHRM chart and
| Parameter | Description | Default |
|--------------------------------------|------------------------------------------|-------------------------------------------------------- |
| `global.imageRegistry` | Global Docker image registry | `nil` |
+| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) |
| `image.registry` | OrangeHRM image registry | `docker.io` |
| `image.repository` | OrangeHRM Image name | `bitnami/orangehrm` |
| `image.tag` | OrangeHRM Image tag | `{VERSION}` |
@@ -68,6 +69,17 @@ The following table lists the configurable parameters of the OrangeHRM chart and
| `service.externalTrafficPolicy` | Enable client source IP preservation | `Cluster` |
| `service.nodePorts.http` | Kubernetes http node port | `""` |
| `service.nodePorts.https` | Kubernetes https node port | `""` |
+| `ingress.enabled` | Enable ingress controller resource | `false` |
+| `ingress.annotations` | Ingress annotations | `[]` |
+| `ingress.certManager` | Add annotations for cert-manager | `false` |
+| `ingress.hosts[0].name` | Hostname to your OrangeHRM installation | `orangehrm.local` |
+| `ingress.hosts[0].path` | Path within the url structure | `/` |
+| `ingress.hosts[0].tls` | Utilize TLS backend in ingress | `false` |
+| `ingress.hosts[0].tlsHosts` | Array of TLS hosts for ingress record (defaults to `ingress.hosts[0].name` if `nil`) | `nil` |
+| `ingress.hosts[0].tlsSecret` | TLS Secret (certificates) | `orangehrm.local-tls-secret` |
+| `ingress.secrets[0].name` | TLS Secret Name | `nil` |
+| `ingress.secrets[0].certificate` | TLS Secret Certificate | `nil` |
+| `ingress.secrets[0].key` | TLS Secret Key | `nil` |
| `resources` | CPU/Memory resource requests/limits | Memory: `512Mi`, CPU: `300m` |
| `persistence.enabled` | Enable persistence using PVC | `true` |
| `persistence.apache.storageClass` | PVC Storage Class for Apache volume | `nil` (uses alpha storage class annotation) |
diff --git a/stable/orangehrm/templates/_helpers.tpl b/stable/orangehrm/templates/_helpers.tpl
index c9266a39db82..53374ec7f3ca 100644
--- a/stable/orangehrm/templates/_helpers.tpl
+++ b/stable/orangehrm/templates/_helpers.tpl
@@ -15,6 +15,13 @@ We truncate at 63 chars because some Kubernetes name fields are limited to this
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
+{{/*
+Create chart name and version as used by the chart label.
+*/}}
+{{- define "orangehrm.chart" -}}
+{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
@@ -49,9 +56,57 @@ Also, we can't use a single if because lazy evaluation is not an option
{{/*
Return the proper image name (for the metrics image)
*/}}
-{{- define "metrics.image" -}}
-{{- $registryName := .Values.metrics.image.registry -}}
+{{- define "orangehrm.metrics.image" -}}
+{{- $registryName := .Values.metrics.image.registry -}}
{{- $repositoryName := .Values.metrics.image.repository -}}
{{- $tag := .Values.metrics.image.tag | toString -}}
-{{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+{{/*
+Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
+but Helm 2.9 and 2.10 don't support it, so we need to implement this if-else logic.
+Also, we can't use a single if because lazy evaluation is not an option
+*/}}
+{{- if .Values.global }}
+ {{- if .Values.global.imageRegistry }}
+ {{- printf "%s/%s:%s" .Values.global.imageRegistry $repositoryName $tag -}}
+ {{- else -}}
+ {{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+ {{- end -}}
+{{- else -}}
+ {{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Return the proper Docker Image Registry Secret Names
+*/}}
+{{- define "orangehrm.imagePullSecrets" -}}
+{{/*
+Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
+but Helm 2.9 and 2.10 do not support it, so we need to implement this if-else logic.
+Also, we cannot use a single if because lazy evaluation is not an option
+*/}}
+{{- if .Values.global }}
+{{- if .Values.global.imagePullSecrets }}
+imagePullSecrets:
+{{- range .Values.global.imagePullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- else if or .Values.image.pullSecrets .Values.metrics.image.pullSecrets }}
+imagePullSecrets:
+{{- range .Values.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- range .Values.metrics.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- end -}}
+{{- else if or .Values.image.pullSecrets .Values.metrics.image.pullSecrets }}
+imagePullSecrets:
+{{- range .Values.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- range .Values.metrics.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- end -}}
{{- end -}}
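The `orangehrm.imagePullSecrets` helper above gives `global.imagePullSecrets` precedence; only when no global list is set does it fall back to merging `image.pullSecrets` and `metrics.image.pullSecrets`. A values sketch illustrating the precedence (secret names are illustrative):

```yaml
# With the global list set, per-image lists are ignored by the helper.
global:
  imagePullSecrets:
    - my-global-registry-secret
image:
  pullSecrets:
    - my-orangehrm-secret   # used only if global.imagePullSecrets is unset
metrics:
  image:
    pullSecrets:
      - my-exporter-secret  # likewise merged in only as a fallback
```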
diff --git a/stable/orangehrm/templates/deployment.yaml b/stable/orangehrm/templates/deployment.yaml
index cb873eb84e95..7bbfe09e05fb 100644
--- a/stable/orangehrm/templates/deployment.yaml
+++ b/stable/orangehrm/templates/deployment.yaml
@@ -28,12 +28,7 @@ spec:
{{- end }}
{{- end }}
spec:
- {{- if .Values.image.pullSecrets }}
- imagePullSecrets:
- {{- range .Values.image.pullSecrets }}
- - name: {{ . }}
- {{- end}}
- {{- end }}
+{{- include "orangehrm.imagePullSecrets" . | indent 6 }}
hostAliases:
- ip: "127.0.0.1"
hostnames:
@@ -118,7 +113,7 @@ spec:
mountPath: /bitnami/apache
{{- if .Values.metrics.enabled }}
- name: metrics
- image: {{ template "metrics.image" . }}
+ image: {{ template "orangehrm.metrics.image" . }}
imagePullPolicy: {{ .Values.metrics.image.pullPolicy | quote }}
command: [ '/bin/apache_exporter', '-scrape_uri', 'http://status.localhost:80/server-status/?auto']
ports:
diff --git a/stable/orangehrm/templates/ingress.yaml b/stable/orangehrm/templates/ingress.yaml
new file mode 100644
index 000000000000..79c1203cdc4a
--- /dev/null
+++ b/stable/orangehrm/templates/ingress.yaml
@@ -0,0 +1,43 @@
+{{- if .Values.ingress.enabled }}
+apiVersion: extensions/v1beta1
+kind: Ingress
+metadata:
+ name: {{ template "orangehrm.fullname" . }}
+ labels:
+ app: "{{ template "orangehrm.fullname" . }}"
+ chart: "{{ template "orangehrm.chart" . }}"
+ release: {{ .Release.Name | quote }}
+ heritage: {{ .Release.Service | quote }}
+ annotations:
+ {{- if .Values.ingress.certManager }}
+ kubernetes.io/tls-acme: "true"
+ {{- end }}
+ {{- range $key, $value := .Values.ingress.annotations }}
+ {{ $key }}: {{ $value | quote }}
+ {{- end }}
+spec:
+ rules:
+ {{- range .Values.ingress.hosts }}
+ - host: {{ .name }}
+ http:
+ paths:
+ - path: {{ default "/" .path }}
+ backend:
+ serviceName: {{ template "orangehrm.fullname" $ }}
+ servicePort: http
+ {{- end }}
+ tls:
+ {{- range .Values.ingress.hosts }}
+ {{- if .tls }}
+ - hosts:
+ {{- if .tlsHosts }}
+ {{- range $host := .tlsHosts }}
+ - {{ $host }}
+ {{- end }}
+ {{- else }}
+ - {{ .name }}
+ {{- end }}
+ secretName: {{ .tlsSecret }}
+ {{- end }}
+ {{- end }}
+{{- end }}
diff --git a/stable/orangehrm/values.yaml b/stable/orangehrm/values.yaml
index eb47163abae6..075b3bed7019 100644
--- a/stable/orangehrm/values.yaml
+++ b/stable/orangehrm/values.yaml
@@ -1,8 +1,11 @@
-## Global Docker image registry
-## Please, note that this will override the image registry for all the images, including dependencies, configured to use the global value
+## Global Docker image parameters
+## Please, note that this will override the image parameters, including dependencies, configured to use the global value
+## Current available global Docker image parameters: imageRegistry and imagePullSecrets
##
# global:
-# imageRegistry:
+# imageRegistry: myRegistryName
+# imagePullSecrets:
+# - myRegistryKeySecretName
## Bitnami OrangeHRM image version
## ref: https://hub.docker.com/r/bitnami/orangehrm/tags/
@@ -10,7 +13,7 @@
image:
registry: docker.io
repository: bitnami/orangehrm
- tag: 4.2.0-1
+ tag: 4.3.1-0
## Specify an imagePullPolicy
## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
@@ -21,7 +24,7 @@ image:
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
- # - myRegistrKeySecretName
+ # - myRegistryKeySecretName
## User of the application
## ref: https://github.com/bitnami/bitnami-docker-orangehrm#configuration
@@ -173,6 +176,60 @@ resources:
##
podAnnotations: {}
+## Configure the ingress resource that allows you to access the
+## OrangeHRM installation. Set up the URL
+## ref: http://kubernetes.io/docs/user-guide/ingress/
+##
+ingress:
+ ## Set to true to enable ingress record generation
+ enabled: false
+
+ ## Set this to true in order to add the corresponding annotations for cert-manager
+ certManager: false
+
+ ## Ingress annotations done as key:value pairs
+ ## For a full list of possible ingress annotations, please see
+ ## ref: https://github.com/kubernetes/ingress-nginx/blob/master/docs/annotations.md
+ ##
+ ## If tls is set to true, annotation ingress.kubernetes.io/secure-backends: "true" will automatically be set
+ ## If certManager is set to true, annotation kubernetes.io/tls-acme: "true" will automatically be set
+ annotations:
+ # kubernetes.io/ingress.class: nginx
+
+ ## The list of hostnames to be covered with this ingress record.
+ ## Most likely this will be just one host, but in the event more hosts are needed, this is an array
+ hosts:
+ - name: orangehrm.local
+ path: /
+
+ ## Set this to true in order to enable TLS on the ingress record
+ tls: false
+
+ ## Optionally specify the TLS hosts for the ingress record
+ ## Useful when the Ingress controller supports www-redirection
+ ## If not specified, the above host name will be used
+ # tlsHosts:
+ # - www.orangehrm.local
+ # - orangehrm.local
+
+ ## If TLS is set to true, you must declare what secret will store the key/certificate for TLS
+ tlsSecret: orangehrm.local-tls
+
+ secrets:
+ ## If you're providing your own certificates, please use this to add the certificates as secrets
+ ## key and certificate should start with -----BEGIN CERTIFICATE----- or
+ ## -----BEGIN RSA PRIVATE KEY-----
+ ##
+ ## name should line up with a tlsSecret set further up
+ ## If you're using cert-manager, this is unneeded, as it will create the secret for you if it is not set
+ ##
+ ## It is also possible to create and manage the certificates outside of this helm chart
+ ## Please see README.md for more information
+ # - name: orangehrm.local-tls
+ # key:
+ # certificate:
+
+
## Prometheus Exporter / Metrics
##
metrics:
@@ -187,7 +244,7 @@ metrics:
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
- # - myRegistrKeySecretName
+ # - myRegistryKeySecretName
## Metrics exporter pod Annotation and Labels
podAnnotations:
prometheus.io/scrape: "true"
diff --git a/stable/osclass/Chart.yaml b/stable/osclass/Chart.yaml
index c716778738b7..fd5653279a1f 100644
--- a/stable/osclass/Chart.yaml
+++ b/stable/osclass/Chart.yaml
@@ -1,5 +1,6 @@
+apiVersion: v1
name: osclass
-version: 4.0.3
+version: 4.3.0
appVersion: 3.7.4
description: Osclass is a php script that allows you to quickly create and manage
your own free classifieds site.
diff --git a/stable/osclass/README.md b/stable/osclass/README.md
index c40138f9870f..7da101a49205 100644
--- a/stable/osclass/README.md
+++ b/stable/osclass/README.md
@@ -14,7 +14,7 @@ This chart bootstraps an [Osclass](https://github.com/bitnami/bitnami-docker-osc
It also packages the [Bitnami MariaDB chart](https://github.com/kubernetes/charts/tree/master/stable/mariadb) which is required for bootstrapping a MariaDB deployment for the database requirements of the Osclass application.
-Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters.
+Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters. This chart has been tested to work with NGINX Ingress, cert-manager, fluentd and Prometheus on top of the [BKPR](https://kubeprod.io/).
## Prerequisites
@@ -50,6 +50,7 @@ The following table lists the configurable parameters of the Osclass chart and t
| Parameter | Description | Default |
|------------------------------------|------------------------------------------|-------------------------------------------------------- |
| `global.imageRegistry` | Global Docker image registry | `nil` |
+| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) |
| `image.registry` | Osclass image registry | `docker.io` |
| `image.repository` | Osclass Image name | `bitnami/osclass` |
| `image.tag` | Osclass Image tag | `{VERSION}` |
@@ -83,6 +84,17 @@ The following table lists the configurable parameters of the Osclass chart and t
| `externalDatabase.user` | Existing username in the external db | `bn_osclass` |
| `externalDatabase.password` | Password for the above username | `nil` |
| `externalDatabase.database` | Name of the existing database | `bitnami_osclass` |
+| `ingress.enabled` | Enable ingress controller resource | `false` |
+| `ingress.annotations` | Ingress annotations | `[]` |
+| `ingress.certManager` | Add annotations for cert-manager | `false` |
+| `ingress.hosts[0].name` | Hostname to your osclass installation | `osclass.local` |
+| `ingress.hosts[0].path` | Path within the url structure | `/` |
+| `ingress.hosts[0].tls` | Utilize TLS backend in ingress | `false` |
+| `ingress.hosts[0].tlsHosts` | Array of TLS hosts for ingress record (defaults to `ingress.hosts[0].name` if `nil`) | `nil` |
+| `ingress.hosts[0].tlsSecret` | TLS Secret (certificates) | `osclass.local-tls-secret` |
+| `ingress.secrets[0].name` | TLS Secret Name | `nil` |
+| `ingress.secrets[0].certificate` | TLS Secret Certificate | `nil` |
+| `ingress.secrets[0].key` | TLS Secret Key | `nil` |
| `mariadb.enabled` | Whether to use the MariaDB chart | `true` |
| `mariadb.db.name` | Database name to create | `bitnami_osclass` |
| `mariadb.db.user` | Database user to create | `bn_osclass` |
diff --git a/stable/osclass/templates/NOTES.txt b/stable/osclass/templates/NOTES.txt
index 389d059064f6..c239fdda9397 100644
--- a/stable/osclass/templates/NOTES.txt
+++ b/stable/osclass/templates/NOTES.txt
@@ -32,14 +32,14 @@ host. To configure Osclass with the URL of your service:
{{- if .Values.mariadb.enabled }}
- helm upgrade {{ .Release.Name }} stable/osclass \
- --set osclassHost=$APP_HOST,osclassPassword=$APP_PASSWORD{{ if .Values.mariadb.mariadbRootPassword }},mariadb.mariadbRootPassword=$DATABASE_ROOT_PASSWORD{{ end }},mariadb.db.password=$APP_DATABASE_PASSWORD
+ helm upgrade {{ .Release.Name }} stable/{{ .Chart.Name }} \
+ --set osclassHost=$APP_HOST,osclassPassword=$APP_PASSWORD{{ if .Values.mariadb.mariadbRootPassword }},mariadb.mariadbRootPassword=$DATABASE_ROOT_PASSWORD{{ end }},mariadb.db.password=$APP_DATABASE_PASSWORD{{- if .Values.global }}{{- if .Values.global.imagePullSecrets }},global.imagePullSecrets={{ .Values.global.imagePullSecrets }}{{- end }}{{- end }}
{{- else }}
## PLEASE UPDATE THE EXTERNAL DATABASE CONNECTION PARAMETERS IN THE FOLLOWING COMMAND AS NEEDED ##
- helm upgrade {{ .Release.Name }} stable/osclass \
- --set osclassPassword=$APP_PASSWORD,osclassHost=$APP_HOST,service.type={{ .Values.service.type }},mariadb.enabled=false{{- if not (empty .Values.externalDatabase.host) }},externalDatabase.host={{ .Values.externalDatabase.host }}{{- end }}{{- if not (empty .Values.externalDatabase.user) }},externalDatabase.user={{ .Values.externalDatabase.user }}{{- end }}{{- if not (empty .Values.externalDatabase.password) }},externalDatabase.password={{ .Values.externalDatabase.password }}{{- end }}{{- if not (empty .Values.externalDatabase.database) }},externalDatabase.database={{ .Values.externalDatabase.database }}{{- end }}
+ helm upgrade {{ .Release.Name }} stable/{{ .Chart.Name }} \
+ --set osclassPassword=$APP_PASSWORD,osclassHost=$APP_HOST,service.type={{ .Values.service.type }},mariadb.enabled=false{{- if not (empty .Values.externalDatabase.host) }},externalDatabase.host={{ .Values.externalDatabase.host }}{{- end }}{{- if not (empty .Values.externalDatabase.user) }},externalDatabase.user={{ .Values.externalDatabase.user }}{{- end }}{{- if not (empty .Values.externalDatabase.password) }},externalDatabase.password={{ .Values.externalDatabase.password }}{{- end }}{{- if not (empty .Values.externalDatabase.database) }},externalDatabase.database={{ .Values.externalDatabase.database }}{{- end }}{{- if .Values.global }}{{- if .Values.global.imagePullSecrets }},global.imagePullSecrets={{ .Values.global.imagePullSecrets }}{{- end }}{{- end }}
{{- end }}
{{- else -}}
@@ -94,6 +94,6 @@ host. To configure Osclass to use and external database host:
## PLEASE UPDATE THE EXTERNAL DATABASE CONNECTION PARAMETERS IN THE FOLLOWING COMMAND AS NEEDED ##
- helm upgrade {{ .Release.Name }} stable/osclass \
- --set osclassPassword=$APP_PASSWORD,osclassHost=$APP_HOST,service.type={{ .Values.service.type }},mariadb.enabled=false{{- if not (empty .Values.externalDatabase.user) }},externalDatabase.user={{ .Values.externalDatabase.user }}{{- end }}{{- if not (empty .Values.externalDatabase.password) }},externalDatabase.password={{ .Values.externalDatabase.password }}{{- end }}{{- if not (empty .Values.externalDatabase.database) }},externalDatabase.database={{ .Values.externalDatabase.database }}{{- end }},externalDatabase.host=YOUR_EXTERNAL_DATABASE_HOST
+ helm upgrade {{ .Release.Name }} stable/{{ .Chart.Name }} \
+ --set osclassPassword=$APP_PASSWORD,osclassHost=$APP_HOST,service.type={{ .Values.service.type }},mariadb.enabled=false{{- if not (empty .Values.externalDatabase.user) }},externalDatabase.user={{ .Values.externalDatabase.user }}{{- end }}{{- if not (empty .Values.externalDatabase.password) }},externalDatabase.password={{ .Values.externalDatabase.password }}{{- end }}{{- if not (empty .Values.externalDatabase.database) }},externalDatabase.database={{ .Values.externalDatabase.database }}{{- end }},externalDatabase.host=YOUR_EXTERNAL_DATABASE_HOST{{- if .Values.global }}{{- if .Values.global.imagePullSecrets }},global.imagePullSecrets={{ .Values.global.imagePullSecrets }}{{- end }}{{- end }}
{{- end }}
diff --git a/stable/osclass/templates/_helpers.tpl b/stable/osclass/templates/_helpers.tpl
index cf77a9278a8d..a60c101e06cf 100644
--- a/stable/osclass/templates/_helpers.tpl
+++ b/stable/osclass/templates/_helpers.tpl
@@ -70,9 +70,57 @@ Also, we can't use a single if because lazy evaluation is not an option
{{/*
Return the proper image name (for the metrics image)
*/}}
-{{- define "metrics.image" -}}
-{{- $registryName := .Values.metrics.image.registry -}}
+{{- define "osclass.metrics.image" -}}
+{{- $registryName := .Values.metrics.image.registry -}}
{{- $repositoryName := .Values.metrics.image.repository -}}
{{- $tag := .Values.metrics.image.tag | toString -}}
-{{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+{{/*
+Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
+but Helm 2.9 and 2.10 don't support it, so we need to implement this if-else logic.
+Also, we can't use a single if because lazy evaluation is not an option
+*/}}
+{{- if .Values.global }}
+ {{- if .Values.global.imageRegistry }}
+ {{- printf "%s/%s:%s" .Values.global.imageRegistry $repositoryName $tag -}}
+ {{- else -}}
+ {{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+ {{- end -}}
+{{- else -}}
+ {{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Return the proper Docker Image Registry Secret Names
+*/}}
+{{- define "osclass.imagePullSecrets" -}}
+{{/*
+Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
+but Helm 2.9 and 2.10 do not support it, so we need to implement this if-else logic.
+Also, we can not use a single if because lazy evaluation is not an option
+*/}}
+{{- if .Values.global }}
+{{- if .Values.global.imagePullSecrets }}
+imagePullSecrets:
+{{- range .Values.global.imagePullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- else if or .Values.image.pullSecrets .Values.metrics.image.pullSecrets }}
+imagePullSecrets:
+{{- range .Values.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- range .Values.metrics.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- end -}}
+{{- else if or .Values.image.pullSecrets .Values.metrics.image.pullSecrets }}
+imagePullSecrets:
+{{- range .Values.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- range .Values.metrics.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- end -}}
{{- end -}}
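
The `osclass.imagePullSecrets` helper above prefers `global.imagePullSecrets` and falls back to the per-image lists only when no global value is set. A minimal values fragment sketching both paths (the secret names are placeholders, not names the chart defines):

```yaml
# Hypothetical override file. With `global` left commented out, the helper
# renders an imagePullSecrets block from the two per-image lists below.
image:
  pullSecrets:
    - app-registry-secret        # placeholder secret name
metrics:
  image:
    pullSecrets:
      - metrics-registry-secret  # placeholder secret name

# Uncommenting this block would take precedence over both lists above:
# global:
#   imagePullSecrets:
#     - shared-registry-secret
```

Either way, the rendered block is indented six spaces into the pod spec by the `include ... | indent 6` call in the deployment template.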
diff --git a/stable/osclass/templates/deployment.yaml b/stable/osclass/templates/deployment.yaml
index 3285f6038861..4b1a328e7d17 100644
--- a/stable/osclass/templates/deployment.yaml
+++ b/stable/osclass/templates/deployment.yaml
@@ -29,12 +29,7 @@ spec:
{{- end }}
{{- end }}
spec:
- {{- if .Values.image.pullSecrets }}
- imagePullSecrets:
- {{- range .Values.image.pullSecrets }}
- - name: {{ . }}
- {{- end}}
- {{- end }}
+{{- include "osclass.imagePullSecrets" . | indent 6 }}
hostAliases:
- ip: "127.0.0.1"
hostnames:
@@ -140,7 +135,7 @@ spec:
mountPath: /bitnami/apache
{{- if .Values.metrics.enabled }}
- name: metrics
- image: {{ template "metrics.image" . }}
+ image: {{ template "osclass.metrics.image" . }}
imagePullPolicy: {{ .Values.metrics.image.pullPolicy | quote }}
command: [ '/bin/apache_exporter', '-scrape_uri', 'http://status.localhost:80/server-status/?auto']
ports:
diff --git a/stable/osclass/templates/ingress.yaml b/stable/osclass/templates/ingress.yaml
new file mode 100644
index 000000000000..8bb2dd2a329d
--- /dev/null
+++ b/stable/osclass/templates/ingress.yaml
@@ -0,0 +1,43 @@
+{{- if .Values.ingress.enabled }}
+apiVersion: extensions/v1beta1
+kind: Ingress
+metadata:
+ name: {{ template "osclass.fullname" . }}
+ labels:
+ app: "{{ template "osclass.fullname" . }}"
+ chart: "{{ template "osclass.chart" . }}"
+ release: {{ .Release.Name | quote }}
+ heritage: {{ .Release.Service | quote }}
+ annotations:
+ {{- if .Values.ingress.certManager }}
+ kubernetes.io/tls-acme: "true"
+ {{- end }}
+ {{- range $key, $value := .Values.ingress.annotations }}
+ {{ $key }}: {{ $value | quote }}
+ {{- end }}
+spec:
+ rules:
+ {{- range .Values.ingress.hosts }}
+ - host: {{ .name }}
+ http:
+ paths:
+ - path: {{ default "/" .path }}
+ backend:
+ serviceName: {{ template "osclass.fullname" $ }}
+ servicePort: http
+ {{- end }}
+ tls:
+ {{- range .Values.ingress.hosts }}
+ {{- if .tls }}
+ - hosts:
+ {{- if .tlsHosts }}
+ {{- range $host := .tlsHosts }}
+ - {{ $host }}
+ {{- end }}
+ {{- else }}
+ - {{ .name }}
+ {{- end }}
+ secretName: {{ .tlsSecret }}
+ {{- end }}
+ {{- end }}
+{{- end }}
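
A hedged example of values that exercise the new template — the hostname, annotation, and secret name below are placeholders; the keys mirror the `ingress` section added to `values.yaml` in this same change:

```yaml
# Hypothetical values override enabling the new ingress resource
ingress:
  enabled: true
  certManager: true               # adds kubernetes.io/tls-acme: "true"
  annotations:
    kubernetes.io/ingress.class: nginx
  hosts:
    - name: osclass.example.com   # placeholder hostname
      path: /
      tls: true
      tlsSecret: osclass-example-tls  # placeholder secret name
```

Because `tls`, `tlsHosts`, and `tlsSecret` are read inside the `range` over `.Values.ingress.hosts`, each host entry carries its own TLS settings.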
diff --git a/stable/osclass/values.yaml b/stable/osclass/values.yaml
index 9b3e61e21555..b83a373122c6 100644
--- a/stable/osclass/values.yaml
+++ b/stable/osclass/values.yaml
@@ -1,8 +1,11 @@
-## Global Docker image registry
-## Please, note that this will override the image registry for all the images, including dependencies, configured to use the global value
+## Global Docker image parameters
+## Please note that this will override the image parameters, including dependencies, configured to use the global value
+## Current available global Docker image parameters: imageRegistry and imagePullSecrets
##
# global:
-# imageRegistry:
+# imageRegistry: myRegistryName
+# imagePullSecrets:
+# - myRegistryKeySecretName
## Bitnami Osclass image version
## ref: https://hub.docker.com/r/bitnami/osclass/tags/
@@ -21,7 +24,7 @@ image:
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
- # - myRegistrKeySecretName
+ # - myRegistryKeySecretName
## Osclass host to create application URLs
## ref: https://github.com/bitnami/bitnami-docker-osclass#configuration
@@ -157,6 +160,60 @@ service:
##
externalTrafficPolicy: Cluster
+## Configure the ingress resource that allows you to access the
+## Osclass installation. Set up the URL
+## ref: http://kubernetes.io/docs/user-guide/ingress/
+##
+ingress:
+ ## Set to true to enable ingress record generation
+ enabled: false
+
+ ## Set this to true in order to add the corresponding annotations for cert-manager
+ certManager: false
+
+ ## Ingress annotations done as key:value pairs
+ ## For a full list of possible ingress annotations, please see
+ ## ref: https://github.com/kubernetes/ingress-nginx/blob/master/docs/annotations.md
+ ##
+ ## If tls is set to true, annotation ingress.kubernetes.io/secure-backends: "true" will automatically be set
+ ## If certManager is set to true, annotation kubernetes.io/tls-acme: "true" will automatically be set
+ annotations:
+ # kubernetes.io/ingress.class: nginx
+
+ ## The list of hostnames to be covered with this ingress record.
+ ## Most likely this will be just one host, but in the event more hosts are needed, this is an array
+ hosts:
+ - name: osclass.local
+ path: /
+
+ ## Set this to true in order to enable TLS on the ingress record
+ tls: false
+
+ ## Optionally specify the TLS hosts for the ingress record
+ ## Useful when the Ingress controller supports www-redirection
+ ## If not specified, the above host name will be used
+ # tlsHosts:
+ # - www.osclass.local
+ # - osclass.local
+
+ ## If TLS is set to true, you must declare what secret will store the key/certificate for TLS
+ tlsSecret: osclass.local-tls
+
+ secrets:
+ ## If you're providing your own certificates, please use this to add the certificates as secrets
+ ## key and certificate should start with -----BEGIN CERTIFICATE----- or
+ ## -----BEGIN RSA PRIVATE KEY-----
+ ##
+ ## name should line up with a tlsSecret set further up
+ ## If you're using cert-manager, this is not needed, as cert-manager will create the secret for you when it does not already exist
+ ##
+ ## It is also possible to create and manage the certificates outside of this helm chart
+ ## Please see README.md for more information
+ # - name: osclass.local-tls
+ # key:
+ # certificate:
+
+
## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
@@ -211,7 +268,7 @@ metrics:
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
- # - myRegistrKeySecretName
+ # - myRegistryKeySecretName
## Metrics exporter pod Annotation and Labels
podAnnotations:
prometheus.io/scrape: "true"
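
The commented `ingress.secrets` stanza above expects inline PEM content; a sketch with placeholder material (the `name` must match the `tlsSecret` referenced by the host entry):

```yaml
# Hypothetical fragment supplying a pre-existing certificate via ingress.secrets.
# The PEM bodies are placeholders, not real key material.
ingress:
  secrets:
    - name: osclass.local-tls
      certificate: |
        -----BEGIN CERTIFICATE-----
        ...placeholder...
        -----END CERTIFICATE-----
      key: |
        -----BEGIN RSA PRIVATE KEY-----
        ...placeholder...
        -----END RSA PRIVATE KEY-----
```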
diff --git a/stable/owncloud/Chart.yaml b/stable/owncloud/Chart.yaml
index d9ef65b765ff..e30da2d8576d 100644
--- a/stable/owncloud/Chart.yaml
+++ b/stable/owncloud/Chart.yaml
@@ -1,6 +1,7 @@
+apiVersion: v1
name: owncloud
-version: 4.0.2
-appVersion: 10.0.10
+version: 4.2.3
+appVersion: 10.2.0
description: A file sharing server that puts the control and security of your own data back into your hands.
keywords:
- owncloud
diff --git a/stable/owncloud/README.md b/stable/owncloud/README.md
index 75c002c10ef4..ea52e00b356a 100644
--- a/stable/owncloud/README.md
+++ b/stable/owncloud/README.md
@@ -14,7 +14,7 @@ This chart bootstraps an [ownCloud](https://github.com/bitnami/bitnami-docker-ow
It also packages the [Bitnami MariaDB chart](https://github.com/kubernetes/charts/tree/master/stable/mariadb) which is required for bootstrapping a MariaDB deployment for the database requirements of the ownCloud application.
-Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters.
+Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters. This chart has been tested to work with NGINX Ingress, cert-manager, fluentd and Prometheus on top of the [BKPR](https://kubeprod.io/).
## Prerequisites
@@ -50,6 +50,7 @@ The following table lists the configurable parameters of the ownCloud chart and
| Parameter | Description | Default |
|-------------------------------------|--------------------------------------------|-------------------------------------------------------- |
| `global.imageRegistry` | Global Docker image registry | `nil` |
+| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) |
| `image.registry` | ownCloud image registry | `docker.io` |
| `image.repository` | ownCloud Image name | `bitnami/owncloud` |
| `image.tag` | ownCloud Image tag | `{VERSION}` |
diff --git a/stable/owncloud/templates/NOTES.txt b/stable/owncloud/templates/NOTES.txt
index d87222fe8c07..89250d4b2fc2 100644
--- a/stable/owncloud/templates/NOTES.txt
+++ b/stable/owncloud/templates/NOTES.txt
@@ -32,14 +32,14 @@ host. To configure ownCloud with the URL of your service:
{{- if .Values.mariadb.enabled }}
- helm upgrade {{ .Release.Name }} stable/owncloud \
- --set owncloudHost=$APP_HOST,owncloudPassword=$APP_PASSWORD{{ if .Values.mariadb.mariadbRootPassword }},mariadb.mariadbRootPassword=$DATABASE_ROOT_PASSWORD{{ end }},mariadb.db.password=$APP_DATABASE_PASSWORD
+ helm upgrade {{ .Release.Name }} stable/{{ .Chart.Name }} \
+ --set owncloudHost=$APP_HOST,owncloudPassword=$APP_PASSWORD{{ if .Values.mariadb.mariadbRootPassword }},mariadb.mariadbRootPassword=$DATABASE_ROOT_PASSWORD{{ end }},mariadb.db.password=$APP_DATABASE_PASSWORD{{- if .Values.global }}{{- if .Values.global.imagePullSecrets }},global.imagePullSecrets={{ .Values.global.imagePullSecrets }}{{- end }}{{- end }}
{{- else }}
## PLEASE UPDATE THE EXTERNAL DATABASE CONNECTION PARAMETERS IN THE FOLLOWING COMMAND AS NEEDED ##
- helm upgrade {{ .Release.Name }} stable/owncloud \
- --set owncloudPassword=$APP_PASSWORD,owncloudHost=$APP_HOST,service.type={{ .Values.service.type }},mariadb.enabled=false{{- if not (empty .Values.externalDatabase.host) }},externalDatabase.host={{ .Values.externalDatabase.host }}{{- end }}{{- if not (empty .Values.externalDatabase.user) }},externalDatabase.user={{ .Values.externalDatabase.user }}{{- end }}{{- if not (empty .Values.externalDatabase.password) }},externalDatabase.password={{ .Values.externalDatabase.password }}{{- end }}{{- if not (empty .Values.externalDatabase.database) }},externalDatabase.database={{ .Values.externalDatabase.database }}{{- end }}
+ helm upgrade {{ .Release.Name }} stable/{{ .Chart.Name }} \
+ --set owncloudPassword=$APP_PASSWORD,owncloudHost=$APP_HOST,service.type={{ .Values.service.type }},mariadb.enabled=false{{- if not (empty .Values.externalDatabase.host) }},externalDatabase.host={{ .Values.externalDatabase.host }}{{- end }}{{- if not (empty .Values.externalDatabase.user) }},externalDatabase.user={{ .Values.externalDatabase.user }}{{- end }}{{- if not (empty .Values.externalDatabase.password) }},externalDatabase.password={{ .Values.externalDatabase.password }}{{- end }}{{- if not (empty .Values.externalDatabase.database) }},externalDatabase.database={{ .Values.externalDatabase.database }}{{- end }}{{- if .Values.global }}{{- if .Values.global.imagePullSecrets }},global.imagePullSecrets={{ .Values.global.imagePullSecrets }}{{- end }}{{- end }}
{{- end }}
{{- else -}}
@@ -91,6 +91,6 @@ host. To configure ownCloud to use and external database host:
## PLEASE UPDATE THE EXTERNAL DATABASE CONNECTION PARAMETERS IN THE FOLLOWING COMMAND AS NEEDED ##
- helm upgrade {{ .Release.Name }} stable/owncloud \
- --set owncloudPassword=$APP_PASSWORD,owncloudHost=$APP_HOST,service.type={{ .Values.service.type }},mariadb.enabled=false{{- if not (empty .Values.externalDatabase.user) }},externalDatabase.user={{ .Values.externalDatabase.user }}{{- end }}{{- if not (empty .Values.externalDatabase.password) }},externalDatabase.password={{ .Values.externalDatabase.password }}{{- end }}{{- if not (empty .Values.externalDatabase.database) }},externalDatabase.database={{ .Values.externalDatabase.database }}{{- end }},externalDatabase.host=YOUR_EXTERNAL_DATABASE_HOST
+ helm upgrade {{ .Release.Name }} stable/{{ .Chart.Name }} \
+ --set owncloudPassword=$APP_PASSWORD,owncloudHost=$APP_HOST,service.type={{ .Values.service.type }},mariadb.enabled=false{{- if not (empty .Values.externalDatabase.user) }},externalDatabase.user={{ .Values.externalDatabase.user }}{{- end }}{{- if not (empty .Values.externalDatabase.password) }},externalDatabase.password={{ .Values.externalDatabase.password }}{{- end }}{{- if not (empty .Values.externalDatabase.database) }},externalDatabase.database={{ .Values.externalDatabase.database }}{{- end }},externalDatabase.host=YOUR_EXTERNAL_DATABASE_HOST{{- if .Values.global }}{{- if .Values.global.imagePullSecrets }},global.imagePullSecrets={{ .Values.global.imagePullSecrets }}{{- end }}{{- end }}
{{- end }}
diff --git a/stable/owncloud/templates/_helpers.tpl b/stable/owncloud/templates/_helpers.tpl
index 0d4896699b0e..1e3ad787d965 100644
--- a/stable/owncloud/templates/_helpers.tpl
+++ b/stable/owncloud/templates/_helpers.tpl
@@ -77,9 +77,57 @@ Also, we can't use a single if because lazy evaluation is not an option
{{/*
Return the proper image name (for the metrics image)
*/}}
-{{- define "metrics.image" -}}
-{{- $registryName := .Values.metrics.image.registry -}}
+{{- define "owncloud.metrics.image" -}}
+{{- $registryName := .Values.metrics.image.registry -}}
{{- $repositoryName := .Values.metrics.image.repository -}}
{{- $tag := .Values.metrics.image.tag | toString -}}
-{{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+{{/*
+Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
+but Helm 2.9 and 2.10 don't support it, so we need to implement this if-else logic.
+Also, we can't use a single if because lazy evaluation is not an option
+*/}}
+{{- if .Values.global }}
+ {{- if .Values.global.imageRegistry }}
+ {{- printf "%s/%s:%s" .Values.global.imageRegistry $repositoryName $tag -}}
+ {{- else -}}
+ {{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+ {{- end -}}
+{{- else -}}
+ {{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Return the proper Docker Image Registry Secret Names
+*/}}
+{{- define "owncloud.imagePullSecrets" -}}
+{{/*
+Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
+but Helm 2.9 and 2.10 do not support it, so we need to implement this if-else logic.
+Also, we can not use a single if because lazy evaluation is not an option
+*/}}
+{{- if .Values.global }}
+{{- if .Values.global.imagePullSecrets }}
+imagePullSecrets:
+{{- range .Values.global.imagePullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- else if or .Values.image.pullSecrets .Values.metrics.image.pullSecrets }}
+imagePullSecrets:
+{{- range .Values.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- range .Values.metrics.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- end -}}
+{{- else if or .Values.image.pullSecrets .Values.metrics.image.pullSecrets }}
+imagePullSecrets:
+{{- range .Values.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- range .Values.metrics.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- end -}}
{{- end -}}
diff --git a/stable/owncloud/templates/deployment.yaml b/stable/owncloud/templates/deployment.yaml
index f6a08d015b11..adf11b3c16fb 100644
--- a/stable/owncloud/templates/deployment.yaml
+++ b/stable/owncloud/templates/deployment.yaml
@@ -30,12 +30,7 @@ spec:
{{- end }}
{{- end }}
spec:
- {{- if .Values.image.pullSecrets }}
- imagePullSecrets:
- {{- range .Values.image.pullSecrets }}
- - name: {{ . }}
- {{- end}}
- {{- end }}
+{{- include "owncloud.imagePullSecrets" . | indent 6 }}
hostAliases:
- ip: "127.0.0.1"
hostnames:
@@ -120,7 +115,7 @@ spec:
mountPath: /bitnami/apache
{{- if .Values.metrics.enabled }}
- name: metrics
- image: {{ template "metrics.image" . }}
+ image: {{ template "owncloud.metrics.image" . }}
imagePullPolicy: {{ .Values.metrics.image.pullPolicy | quote }}
command: [ '/bin/apache_exporter', '-scrape_uri', 'http://status.localhost:80/server-status/?auto']
ports:
diff --git a/stable/owncloud/values.yaml b/stable/owncloud/values.yaml
index f34a6619f42b..4c3c0b25ab8f 100644
--- a/stable/owncloud/values.yaml
+++ b/stable/owncloud/values.yaml
@@ -1,8 +1,11 @@
-## Global Docker image registry
-## Please, note that this will override the image registry for all the images, including dependencies, configured to use the global value
+## Global Docker image parameters
+## Please note that this will override the image parameters, including dependencies, configured to use the global value
+## Current available global Docker image parameters: imageRegistry and imagePullSecrets
##
# global:
-# imageRegistry:
+# imageRegistry: myRegistryName
+# imagePullSecrets:
+# - myRegistryKeySecretName
## Bitnami ownCloud image version
## ref: https://hub.docker.com/r/bitnami/owncloud/tags/
@@ -10,7 +13,7 @@
image:
registry: docker.io
repository: bitnami/owncloud
- tag: 10.0.10
+ tag: 10.2.0
## Specify a imagePullPolicy
## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
@@ -21,7 +24,7 @@ image:
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
- # - myRegistrKeySecretName
+ # - myRegistryKeySecretName
## For Kubernetes v1.4, v1.5 and v1.6, use 'extensions/v1beta1'
## For Kubernetes v1.7, use 'networking.k8s.io/v1'
@@ -249,7 +252,7 @@ metrics:
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
- # - myRegistrKeySecretName
+ # - myRegistryKeySecretName
## Metrics exporter pod Annotation and Labels
podAnnotations:
prometheus.io/scrape: "true"
diff --git a/stable/pachyderm/Chart.yaml b/stable/pachyderm/Chart.yaml
index f12778ae347d..e48f70c5cd57 100755
--- a/stable/pachyderm/Chart.yaml
+++ b/stable/pachyderm/Chart.yaml
@@ -10,8 +10,8 @@ keywords:
- reproducibility
- distributed
- processing
-version: 0.1.8
-appVersion: 1.7.3
+version: 0.2.1
+appVersion: 1.8.6
home: "https://pachyderm.io"
sources:
- "https://github.com/pachyderm/pachyderm"
diff --git a/stable/pachyderm/README.md b/stable/pachyderm/README.md
index beecc6a09516..28618edec55a 100644
--- a/stable/pachyderm/README.md
+++ b/stable/pachyderm/README.md
@@ -6,29 +6,32 @@ Pachyderm is a language-agnostic and cloud infrastructure-agnostic large-scale d
* https://pachyderm.io
* https://github.com/pachyderm/pachyderm
-
Prerequisites Details
---------------------
-- Dynamic provisioning of PVs (for non-local deployments)
+* Dynamic provisioning of PVs (for non-local deployments)
General chart settings
----------------------
The following table lists the configurable parameters of `pachd` and their default values:
-| Parameter | Description | Default |
-|--------------------------|-----------------------|-------------------|
-| `rbac.create` | Enable RBAC | `true` |
-| `pachd.image.repository` | Container image name | `pachyderm/pachd` |
-| `pachd.pfsCache` | File System cache size| `0G` |
-| `*.image.tag` | Container image tag | ``|
-| `*.image.pullPolicy` | Image pull policy | `Always` |
-| `*.worker.repository` | Worker image name | `pachyderm/worker`|
-| `*.worker.tag` | Worker image tag | ``|
-| `*.replicaCount` | Number of pachds | `1` |
-| `*.resources.requests` | Memory and cpu request| `{512M,250m}` |
-
+| Parameter | Description | Default |
+|-------------------------------|-------------------------------------|-------------------|
+| `rbac.create` | Enable RBAC | `true` |
+| `pachd.exposeObjApi` | Expose S3 API | `false` |
+| `pachd.image.repository` | Container image name | `pachyderm/pachd` |
+| `pachd.pfsCache` | File System cache size | `0G` |
+| `*.image.tag` | Container image tag | ``|
+| `*.image.pullPolicy` | Image pull policy | `Always` |
+| `*.worker.repository` | Worker image name | `pachyderm/worker`|
+| `*.worker.tag` | Worker image tag | ``|
+| `*.replicaCount` | Number of pachds | `1` |
+| `*.resources.requests` | Memory and cpu request | `{512M,250m}` |
+| `*.resources.limits` | Memory and cpu limit | `nil` |
+| `*.service.grpc.annotations` | GRPC service additional annotations | `{}` |
+| `*.service.grpc.prod` | GRPC service port | `30650` |
+| `*.service.grpc.type` | GRPC service type | `NodePort` |
Next table lists the configurable parameters of `etcd` and their default values:
@@ -38,26 +41,24 @@ Next table lists the configurable parameters of `etcd` and their default values:
| `*.image.tag` | Container image tag | `` |
| `*.image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `*.resources.requests` | Memory and cpu request| `{250M,250m}` |
+| `*.resources.limits` | Memory and cpu limit | `nil` |
| `*.persistence.enabled` | Enable persistence | `false` |
| `*.persistence.size` | Storage request | `20G` |
| `*.persistence.accessMode` | Access mode for PV | `ReadWriteOnce` |
| `*.persistence.storageClass`| PVC storage class | `nil` |
-
In order to set which object store credentials you want to use, please set the flag `credentials` with one of the following values: `local | s3 | google | amazon | microsoft`.
| Parameter | Description | Default |
|--------------------------|-----------------------|-------------------|
| `credentials` | Backend credentials | "" |
-
-Based on the storage credentials used, fill in the corresponding parameters for your object store. Note that The `local` installation will deploy Pachyderm on your local Kubernetes cluster (i.e: minikube) backed by your local storage unit.
-
+Based on the storage credentials used, fill in the corresponding parameters for your object store. Note that the `local` installation will deploy Pachyderm on your local Kubernetes cluster (e.g. minikube) backed by your local storage unit.
On-premises deployment
------------------------
-- On an on-premise environment like Openstack, a `S3 endpoint` can be used as storage backend. The following credentials (such as Minio credentials) are configurable:
+* In an on-premises environment like OpenStack, an `S3 endpoint` can be used as the storage backend. The following credentials (such as Minio credentials) are configurable:
| Parameter | Description | Default |
|--------------------------|-----------------------|-------------------|
@@ -68,22 +69,20 @@ On-premises deployment
| `s3.secure` | S3 secure | `"0"` |
| `s3.signature` | S3 signature | `"1"` |
-
-Google Cloud
+Google Cloud
-------------
-- With `Google Cloud` credentials, you must define your `GCS bucket name`:
+* With `Google Cloud` credentials, you must define your `GCS bucket name`:
| Parameter | Description | Default |
|--------------------------|-----------------------|-------------------|
| `google.bucketName` | GCS bucket name | `""` |
| `google.credentials` | GCP credentials | `""` |
-
Amazon Web Services
---------------------
-- On `Amazon Web Services`, please set the next values:
+* On `Amazon Web Services`, please set the following values:
| Parameter | Description | Default |
|--------------------------|-----------------------|-------------------|
@@ -91,14 +90,14 @@ Amazon Web Services
| `amazon.distribution` | Amazon distribution | `""` |
| `amazon.id` | Amazon id | `""` |
| `amazon.region` | Amazon region | `""` |
+| `amazon.roleArn`          | Amazon role ARN       | `""`              |
| `amazon.secret` | Amazon secret | `""` |
| `amazon.token` | Amazon token | `""` |
-
Microsoft Azure
---------------------
-- As for `Microsoft Azure`, you must specify the following parameters:
+* As for `Microsoft Azure`, you must specify the following parameters:
| Parameter | Description | Default |
|--------------------------|-----------------------|-------------------|
@@ -106,7 +105,6 @@ Microsoft Azure
| `microsoft.id` | Account name | `""` |
| `microsoft.secret` | Account key | `""` |
-
How to install the chart
------------------------
@@ -118,7 +116,6 @@ $ helm install --namespace pachyderm --name my-release stable/pachyderm
You should install the chart specifying each parameter using the `--set key=value[,key=value]` argument to helm install. Please consult the `values.yaml` file for more information regarding the parameters. For example:
-
```console
$ helm install --namespace pachyderm --name my-release \
--set credentials=s3,s3.accessKey=myaccesskey,s3.secretKey=mysecretkey,s3.bucketName=default_bucket,s3.endpoint=domain.subdomain:8080,etcd.persistence.enabled=true,etcd.persistence.accessMode=ReadWriteMany \
@@ -131,16 +128,27 @@ Alternatively, a YAML file that specifies the values for the parameters can be p
$ helm install --namespace pachyderm --name my-release -f values.yaml stable/pachyderm
```
+Specifying a pachyderm version
+------------------------
+
+To specify a Pachyderm version, run the following command:
+
+```console
+$ helm install --namespace pachyderm --name my-release \
+--set pachd.image.tag=1.8.6,pachd.worker.tag=1.8.6 \
+stable/pachyderm
+```
+
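The same overrides can also be kept in a values file (a sketch; the keys mirror the chart's `values.yaml`, and `my-values.yaml` is a hypothetical file name):

```yaml
# my-values.yaml -- pin pachd and the worker to the same release (example version)
pachd:
  image:
    tag: 1.8.6
  worker:
    tag: 1.8.6
```

Then install with `helm install --namespace pachyderm --name my-release -f my-values.yaml stable/pachyderm`.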
Accessing the pachd service
---------------------------
In order to use Pachyderm, please log in to the master node via SSH and install the Pachyderm client:
```console
-$ curl -o /tmp/pachctl.deb -L https://github.com/pachyderm/pachyderm/releases/download/v1.7.3/pachctl_1.7.3_amd64.deb && sudo dpkg -i /tmp/pachctl.deb
+$ curl -o /tmp/pachctl.deb -L https://github.com/pachyderm/pachyderm/releases/download/v1.8.6/pachctl_1.8.6_amd64.deb && sudo dpkg -i /tmp/pachctl.deb
```
-Please note that the client version should correspond with the pachd service version. For more information please consult: http://pachyderm.readthedocs.io/en/latest/index.html. Also, if you have your kubernetes client properly configured to talk with your remote cluster, you can simply install `pachctl` on your local machine and execute: `pachctl --namespace port-forward &`.
+Please note that the client version should match the pachd service version. For more information, please consult the official [documentation][documentation]. Also, if your Kubernetes client is properly configured to talk to your remote cluster, you can simply install `pachctl` on your local machine and execute: `pachctl --namespace port-forward &`.
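As a sanity check, the download URL for the client can be derived from the desired version, keeping it in sync with the `pachd.image.tag` set at install time (a sketch; the version value is only an example):

```shell
# Derive the pachctl .deb URL from a version variable so the client
# version matches the pachd image tag (example version shown).
PACHD_VERSION="1.8.6"
PACHCTL_URL="https://github.com/pachyderm/pachyderm/releases/download/v${PACHD_VERSION}/pachctl_${PACHD_VERSION}_amd64.deb"
echo "$PACHCTL_URL"
```

After installing, running `pachctl version` should report matching client and pachd versions.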
Clean-up
-------
@@ -151,3 +159,5 @@ In order to remove the Pachyderm release, you can execute the following commands
$ helm list
$ helm delete --purge
```
+
+[documentation]: http://pachyderm.readthedocs.io/en/latest/index.html "Pachyderm documentation"
diff --git a/stable/pachyderm/templates/etcd_deployment.yaml b/stable/pachyderm/templates/etcd_deployment.yaml
index b78e738c6cd5..39cf43af3d35 100644
--- a/stable/pachyderm/templates/etcd_deployment.yaml
+++ b/stable/pachyderm/templates/etcd_deployment.yaml
@@ -8,19 +8,19 @@ metadata:
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
- suite: pachyderm
+ suite: {{ template "fullname" . }}
spec:
replicas: 1
selector:
matchLabels:
app: {{ template "etcd.fullname" . }}
- suite: pachyderm
+ suite: {{ template "fullname" . }}
template:
metadata:
name: etcd
labels:
app: {{ template "etcd.fullname" . }}
- suite: pachyderm
+ suite: {{ template "fullname" . }}
spec:
volumes:
- name: etcdvol
@@ -50,6 +50,11 @@ spec:
requests:
cpu: '{{ .Values.etcd.resources.requests.cpu }}'
memory: {{ .Values.etcd.resources.requests.memory }}
+ {{- if .Values.etcd.resources.limits }}
+ limits:
+ cpu: '{{ .Values.etcd.resources.limits.cpu }}'
+ memory: {{ .Values.etcd.resources.limits.memory }}
+ {{- end }}
volumeMounts:
- name: etcdvol
mountPath: "/var/data/etcd"
diff --git a/stable/pachyderm/templates/etcd_svc.yaml b/stable/pachyderm/templates/etcd_svc.yaml
index 05e590b6d695..5b80fd74f24b 100644
--- a/stable/pachyderm/templates/etcd_svc.yaml
+++ b/stable/pachyderm/templates/etcd_svc.yaml
@@ -8,7 +8,7 @@ metadata:
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
- suite: pachyderm
+ suite: {{ template "fullname" . }}
spec:
ports:
- name: client-port
diff --git a/stable/pachyderm/templates/pachd_cluster_role.yaml b/stable/pachyderm/templates/pachd_cluster_role.yaml
index c786d608f814..92d31dc339c0 100644
--- a/stable/pachyderm/templates/pachd_cluster_role.yaml
+++ b/stable/pachyderm/templates/pachd_cluster_role.yaml
@@ -3,11 +3,11 @@
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
- name: pachyderm
- creationTimestamp:
+ name: {{ template "fullname" . }}
+ creationTimestamp:
labels:
app: ''
- suite: pachyderm
+ suite: {{ template "fullname" . }}
rules:
- verbs:
- get
diff --git a/stable/pachyderm/templates/pachd_deployment.yaml b/stable/pachyderm/templates/pachd_deployment.yaml
index 9b2a961a89cd..cae92e550aff 100644
--- a/stable/pachyderm/templates/pachd_deployment.yaml
+++ b/stable/pachyderm/templates/pachd_deployment.yaml
@@ -8,19 +8,23 @@ metadata:
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
- suite: pachyderm
+ suite: {{ template "fullname" . }}
spec:
replicas: {{ .Values.pachd.replicaCount }}
selector:
matchLabels:
app: pachd
- suite: pachyderm
+ suite: {{ template "fullname" . }}
template:
metadata:
name: pachd
+ annotations:
+ {{- if .Values.amazon.roleArn }}
+ iam.amazonaws.com/role: {{ .Values.amazon.roleArn }}
+ {{- end }}
labels:
app: pachd
- suite: pachyderm
+ suite: {{ template "fullname" . }}
spec:
volumes:
- name: pachdvol
@@ -43,6 +47,8 @@ spec:
- name: api-http-port
containerPort: 652
env:
+ - name: EXPOSE_OBJECT_API
+ value: {{ .Values.pachd.exposeObjApi | quote }}
- name: PACH_ROOT
value: "/pach"
- name: NUM_SHARDS
@@ -63,6 +69,108 @@ spec:
{{- if eq .Values.credentials "local" }}
value: "/var/pachyderm/pachd"
{{- end }}
+ - name: GOOGLE_BUCKET
+ valueFrom:
+ secretKeyRef:
+ name: pachyderm-storage-secret
+ key: google-bucket
+ optional: true
+ - name: GOOGLE_CRED
+ valueFrom:
+ secretKeyRef:
+ name: pachyderm-storage-secret
+ key: google-cred
+ optional: true
+ - name: AMAZON_BUCKET
+ valueFrom:
+ secretKeyRef:
+ name: pachyderm-storage-secret
+ key: amazon-bucket
+ optional: true
+ - name: AMAZON_DISTRIBUTION
+ valueFrom:
+ secretKeyRef:
+ name: pachyderm-storage-secret
+ key: amazon-distribution
+ optional: true
+ - name: AMAZON_ID
+ valueFrom:
+ secretKeyRef:
+ name: pachyderm-storage-secret
+ key: amazon-id
+ optional: true
+ - name: AMAZON_SECRET
+ valueFrom:
+ secretKeyRef:
+ name: pachyderm-storage-secret
+ key: amazon-secret
+ optional: true
+ - name: AMAZON_REGION
+ valueFrom:
+ secretKeyRef:
+ name: pachyderm-storage-secret
+ key: amazon-region
+ optional: true
+ - name: AMAZON_TOKEN
+ valueFrom:
+ secretKeyRef:
+ name: pachyderm-storage-secret
+ key: amazon-token
+ optional: true
+ - name: MICROSOFT_CONTAINER
+ valueFrom:
+ secretKeyRef:
+ name: pachyderm-storage-secret
+ key: microsoft-container
+ optional: true
+ - name: MICROSOFT_ID
+ valueFrom:
+ secretKeyRef:
+ name: pachyderm-storage-secret
+ key: microsoft-id
+ optional: true
+ - name: MICROSOFT_SECRET
+ valueFrom:
+ secretKeyRef:
+ name: pachyderm-storage-secret
+ key: microsoft-secret
+ optional: true
+ - name: MINIO_ENDPOINT
+ valueFrom:
+ secretKeyRef:
+ name: pachyderm-storage-secret
+ key: minio-endpoint
+ optional: true
+ - name: MINIO_BUCKET
+ valueFrom:
+ secretKeyRef:
+ name: pachyderm-storage-secret
+ key: minio-bucket
+ optional: true
+ - name: MINIO_SECURE
+ valueFrom:
+ secretKeyRef:
+ name: pachyderm-storage-secret
+ key: minio-secure
+ optional: true
+ - name: MINIO_ID
+ valueFrom:
+ secretKeyRef:
+ name: pachyderm-storage-secret
+ key: minio-id
+ optional: true
+ - name: MINIO_SECRET
+ valueFrom:
+ secretKeyRef:
+ name: pachyderm-storage-secret
+ key: minio-secret
+ optional: true
+ - name: MINIO_SIGNATURE
+ valueFrom:
+ secretKeyRef:
+ name: pachyderm-storage-secret
+ key: minio-signature
+ optional: true
- name: PACHD_POD_NAMESPACE
valueFrom:
fieldRef:
@@ -82,13 +190,21 @@ spec:
value: info
- name: BLOCK_CACHE_BYTES
value: '{{ .Values.pachd.pfsCache }}'
+ {{- if eq .Values.credentials "amazon" }}
- name: IAM_ROLE
+ value: {{ .Values.amazon.roleArn }}
+ {{- end }}
- name: PACHYDERM_AUTHENTICATION_DISABLED_FOR_TESTING
value: 'false'
resources:
requests:
cpu: '{{ .Values.pachd.resources.requests.cpu }}'
memory: {{ .Values.pachd.resources.requests.memory }}
+ {{- if .Values.pachd.resources.limits }}
+ limits:
+ cpu: '{{ .Values.pachd.resources.limits.cpu }}'
+ memory: {{ .Values.pachd.resources.limits.memory }}
+ {{- end }}
volumeMounts:
- name: pachdvol
mountPath: "/pach"
@@ -97,6 +213,6 @@ spec:
imagePullPolicy: {{ .Values.pachd.image.pullPolicy }}
securityContext:
privileged: true
- serviceAccountName: pachyderm
+ serviceAccountName: {{ template "fullname" . }}
strategy: {}
status: {}
diff --git a/stable/pachyderm/templates/pachd_rolebinding.yaml b/stable/pachyderm/templates/pachd_rolebinding.yaml
index 72c5f0deb6d0..8637f9387e20 100644
--- a/stable/pachyderm/templates/pachd_rolebinding.yaml
+++ b/stable/pachyderm/templates/pachd_rolebinding.yaml
@@ -1,15 +1,15 @@
---
{{- if .Values.rbac.create }}
apiVersion: rbac.authorization.k8s.io/v1beta1
-kind: RoleBinding
+kind: ClusterRoleBinding
metadata:
- name: pachyderm
+ name: {{ template "fullname" . }}
roleRef:
apiGroup: ''
kind: ClusterRole
- name: pachyderm
+ name: {{ template "fullname" . }}
subjects:
- kind: ServiceAccount
- name: pachyderm
+ name: {{ template "fullname" . }}
namespace: {{ .Release.Namespace }}
{{- end }}
diff --git a/stable/pachyderm/templates/pachd_sa.yaml b/stable/pachyderm/templates/pachd_sa.yaml
index 4e14c95f9eb0..281f5b1dd4cb 100644
--- a/stable/pachyderm/templates/pachd_sa.yaml
+++ b/stable/pachyderm/templates/pachd_sa.yaml
@@ -2,10 +2,10 @@
kind: ServiceAccount
apiVersion: v1
metadata:
- name: pachyderm
+ name: {{ template "fullname" . }}
labels:
app: {{ template "fullname" . }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
- suite: pachyderm
+ suite: {{ template "fullname" . }}
diff --git a/stable/pachyderm/templates/pachd_secret.yaml b/stable/pachyderm/templates/pachd_secret.yaml
index 5ff93a275311..61d090fd8ceb 100644
--- a/stable/pachyderm/templates/pachd_secret.yaml
+++ b/stable/pachyderm/templates/pachd_secret.yaml
@@ -8,7 +8,7 @@ metadata:
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
- suite: pachyderm
+ suite: {{ template "fullname" . }}
data:
{{- if eq .Values.credentials "s3" }}
minio-id: {{ .Values.s3.accessKey | b64enc | quote }}
@@ -22,11 +22,17 @@ data:
google-cred: {{ .Values.google.credentials | b64enc | quote }}
{{- else if eq .Values.credentials "amazon" }}
amazon-bucket: {{ .Values.amazon.bucketName | b64enc | quote }}
+ {{- if .Values.amazon.distribution }}
amazon-distribution: {{ .Values.amazon.distribution | b64enc | quote }}
+ {{- end }}
+ {{- if .Values.amazon.id }}
amazon-id: {{ .Values.amazon.id | b64enc | quote }}
+ {{- end }}
amazon-region: {{ .Values.amazon.region | b64enc | quote }}
+ {{- if not .Values.amazon.roleArn }}
amazon-secret: {{ .Values.amazon.secret | b64enc | quote }}
amazon-token: {{ .Values.amazon.token | b64enc | quote }}
+ {{- end }}
{{- else if eq .Values.credentials "microsoft" }}
microsoft-container: {{ .Values.microsoft.container | b64enc | quote }}
microsoft-id: {{ .Values.microsoft.id | b64enc | quote }}
diff --git a/stable/pachyderm/templates/pachd_svc.yaml b/stable/pachyderm/templates/pachd_svc.yaml
index 18c76a6aaa67..016bbf5c4c66 100644
--- a/stable/pachyderm/templates/pachd_svc.yaml
+++ b/stable/pachyderm/templates/pachd_svc.yaml
@@ -2,27 +2,67 @@
kind: Service
apiVersion: v1
metadata:
- name: pachd
+ name: pachd-api-grpc
+ {{- if .Values.pachd.service.grpc.annotations }}
+ annotations:
+ {{ .Values.pachd.service.grpc.annotations | toYaml | indent 4 | trim }}
+ {{- end }}
labels:
app: pachd
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
- suite: pachyderm
+ suite: {{ template "fullname" . }}
spec:
ports:
- name: api-grpc-port
- port: 650
+ port: {{ .Values.pachd.service.grpc.port }}
targetPort: 650
- nodePort: 30650
+ {{- if eq .Values.pachd.service.grpc.type "NodePort" }}
+ nodePort: {{ .Values.pachd.service.grpc.port }}
+ {{- end }}
+ selector:
+ app: pachd
+ type: {{ .Values.pachd.service.grpc.type | quote }}
+
+---
+kind: Service
+apiVersion: v1
+metadata:
+ name: pachd-trace
+ labels:
+ app: pachd
+ chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ release: "{{ .Release.Name }}"
+ heritage: "{{ .Release.Service }}"
+ suite: {{ template "fullname" . }}
+spec:
+ ports:
- name: trace-port
port: 651
targetPort: 651
nodePort: 30651
+ selector:
+ app: pachd
+ type: "NodePort"
+
+---
+kind: Service
+apiVersion: v1
+metadata:
+ name: pachd-api-http
+ labels:
+ app: pachd
+ chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ release: "{{ .Release.Name }}"
+ heritage: "{{ .Release.Service }}"
+ suite: {{ template "fullname" . }}
+spec:
+ ports:
- name: api-http-port
port: 652
targetPort: 652
nodePort: 30652
selector:
app: pachd
- type: NodePort
+ type: "NodePort"
diff --git a/stable/pachyderm/values.yaml b/stable/pachyderm/values.yaml
index 4902511e55c2..30c82aa3ed84 100644
--- a/stable/pachyderm/values.yaml
+++ b/stable/pachyderm/values.yaml
@@ -30,6 +30,7 @@ amazon:
distribution: ""
id: ""
region: ""
+ roleArn: ""
secret: ""
token: ""
@@ -41,22 +42,31 @@ microsoft:
## Set default image settings, resource requests and number of replicas of pachd
pachd:
+ exposeObjApi: false
replicaCount: 1
## Size of pachd's in-memory cache for PFS files. Size is specified in bytes, with allowed SI suffixes (M, K, G, Mi, Ki, Gi, etc).
pfsCache: 0G
## For available images please check: https://hub.docker.com/r/pachyderm/pachd/tags/
image:
repository: pachyderm/pachd
- tag: 1.7.3
+ tag: 1.8.6
pullPolicy: Always
worker:
repository: pachyderm/worker
- tag: 1.7.3
+ tag: 1.8.6
resources:
## For non-local deployments, 1 cpu and 2G of memory requests are recommended
requests:
cpu: 250m
memory: 512M
+ # limits:
+ # cpu: 250m
+ # memory: 512M
+ service:
+ grpc:
+ annotations: {}
+ port: 30650
+ type: NodePort
## Set default image settings and persistence settings of etcd
etcd:
@@ -85,3 +95,6 @@ etcd:
requests:
cpu: 250m
memory: 256M
+ # limits:
+ # cpu: 250m
+ # memory: 512M
diff --git a/stable/parse/Chart.yaml b/stable/parse/Chart.yaml
index 23822855f8c1..feb7501e3059 100644
--- a/stable/parse/Chart.yaml
+++ b/stable/parse/Chart.yaml
@@ -1,6 +1,7 @@
+apiVersion: v1
name: parse
-version: 6.0.2
-appVersion: 3.1.3
+version: 6.2.5
+appVersion: 3.4.0
description: Parse is a platform that enables users to add a scalable and powerful backend to launch a full-featured app for iOS, Android, JavaScript, Windows, Unity, and more.
keywords:
- parse
diff --git a/stable/parse/README.md b/stable/parse/README.md
index 5694f150551b..1a4b3377ffde 100644
--- a/stable/parse/README.md
+++ b/stable/parse/README.md
@@ -12,7 +12,7 @@ $ helm install stable/parse
This chart bootstraps a [Parse](https://github.com/bitnami/bitnami-docker-parse) deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
-Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters.
+Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters. This chart has been tested to work with NGINX Ingress, cert-manager, fluentd and Prometheus on top of the [BKPR](https://kubeprod.io/).
## Prerequisites
@@ -48,6 +48,7 @@ The following table lists the configurable parameters of the Parse chart and the
| Parameter | Description | Default |
|---------------------------------------|------------------------------------------|-------------------------------------------------------- |
| `global.imageRegistry` | Global Docker image registry | `nil` |
+| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) |
| `service.type` | Kubernetes Service type | `LoadBalancer` |
| `service.port` | Service HTTP port (Dashboard) | `80` |
| `service.loadBalancerIP` | `loadBalancerIP` for the Parse Service | `nil` |
@@ -83,6 +84,22 @@ The following table lists the configurable parameters of the Parse chart and the
| `persistence.storageClass` | PVC Storage Class for Parse volume | `nil` (uses alpha storage class annotation) |
| `persistence.accessMode` | PVC Access Mode for Parse volume | `ReadWriteOnce` |
| `persistence.size` | PVC Storage Request for Parse volume | `8Gi` |
+| `ingress.enabled` | Enable ingress controller resource | `false` |
+| `ingress.annotations` | Ingress annotations | `[]` |
+| `ingress.certManager` | Add annotations for cert-manager | `false` |
+| `ingress.dashboard.hosts[0].name` | Hostname to your Parse Dashboard installation | `parse.local` |
+| `ingress.dashboard.hosts[0].path` | Path within the URL structure | `/` |
+| `ingress.dashboard.hosts[0].tls` | Utilize TLS backend in ingress | `false` |
+| `ingress.dashboard.hosts[0].tlsHosts` | Array of TLS hosts for ingress record (defaults to `ingress.dashboard.hosts[0].name` if `nil`) | `nil` |
+| `ingress.dashboard.hosts[0].tlsSecret` | TLS Secret (certificates) | `parse.local-tls` |
+| `ingress.server.hosts[0].name` | Hostname to your Parse Server installation | `parse-server.local` |
+| `ingress.server.hosts[0].path` | Path within the URL structure | `/` |
+| `ingress.server.hosts[0].tls` | Utilize TLS backend in ingress | `false` |
+| `ingress.server.hosts[0].tlsHosts` | Array of TLS hosts for ingress record (defaults to `ingress.server.hosts[0].name` if `nil`) | `nil` |
+| `ingress.server.hosts[0].tlsSecret` | TLS Secret (certificates) | `parse.local-tls` |
+| `ingress.secrets[0].name` | TLS Secret Name | `nil` |
+| `ingress.secrets[0].certificate` | TLS Secret Certificate | `nil` |
+| `ingress.secrets[0].key` | TLS Secret Key | `nil` |
| `mongodb.usePassword` | Enable MongoDB password authentication | `true` |
| `mongodb.password` | MongoDB admin password | `nil` |
| `mongodb.persistence.enabled` | Enable MongoDB persistence using PVC | `true` |
diff --git a/stable/parse/templates/NOTES.txt b/stable/parse/templates/NOTES.txt
index 79e9b7fa15ce..d448136ecdb6 100644
--- a/stable/parse/templates/NOTES.txt
+++ b/stable/parse/templates/NOTES.txt
@@ -59,8 +59,8 @@ service:
2. Complete your Parse Dashboard deployment by running:
- helm upgrade {{ .Release.Name }} \
- --set server.host=$APP_HOST,server.port={{ .Values.server.port }},server.masterKey=$(kubectl get secret --namespace {{ .Release.Namespace }} {{ template "parse.fullname" . }} -o jsonpath="{.data.master-key}" | base64 --decode),dashboard.username={{ .Values.dashboard.username }},dashboard.password=$(kubectl get secret --namespace {{ .Release.Namespace }} {{ template "parse.fullname" . }} -o jsonpath="{.data.parse-dashboard-password}" | base64 --decode) stable/parse
+ helm upgrade {{ .Release.Name }} stable/{{ .Chart.Name }} \
+ --set server.host=$APP_HOST,server.port={{ .Values.server.port }},server.masterKey=$(kubectl get secret --namespace {{ .Release.Namespace }} {{ template "parse.fullname" . }} -o jsonpath="{.data.master-key}" | base64 --decode),dashboard.username={{ .Values.dashboard.username }},dashboard.password=$(kubectl get secret --namespace {{ .Release.Namespace }} {{ template "parse.fullname" . }} -o jsonpath="{.data.parse-dashboard-password}" | base64 --decode){{- if .Values.global }}{{- if .Values.global.imagePullSecrets }},global.imagePullSecrets={{ .Values.global.imagePullSecrets }}{{- end }}{{- end }}
{{ else }}
1. Get the Parse Dashboard URL by running:
diff --git a/stable/parse/templates/_helpers.tpl b/stable/parse/templates/_helpers.tpl
index cb32e1271ee3..ef3f175a9175 100644
--- a/stable/parse/templates/_helpers.tpl
+++ b/stable/parse/templates/_helpers.tpl
@@ -6,6 +6,13 @@ Expand the name of the chart.
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
+{{/*
+Create chart name and version as used by the chart label.
+*/}}
+{{- define "parse.chart" -}}
+{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
@@ -44,29 +51,6 @@ If not using ClusterIP, or if a host or LoadBalancerIP is not defined, the value
{{- default (include "parse.serviceIP" .) $host -}}
{{- end -}}
-{{/*
-Return the proper Parse image name
-*/}}
-{{- define "parse.image" -}}
-{{- $registryName := .Values.image.registry -}}
-{{- $repositoryName := .Values.image.repository -}}
-{{- $tag := .Values.image.tag | toString -}}
-{{/*
-Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
-but Helm 2.9 and 2.10 doesn't support it, so we need to implement this if-else logic.
-Also, we can't use a single if because lazy evaluation is not an option
-*/}}
-{{- if .Values.global }}
- {{- if .Values.global.imageRegistry }}
- {{- printf "%s/%s:%s" .Values.global.imageRegistry $repositoryName $tag -}}
- {{- else -}}
- {{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
- {{- end -}}
-{{- else -}}
- {{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
-{{- end -}}
-{{- end -}}
-
{{/*
Return the proper Parse dashboard image name
*/}}
@@ -112,3 +96,38 @@ Also, we can't use a single if because lazy evaluation is not an option
{{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
{{- end -}}
{{- end -}}
+
+{{/*
+Return the proper Docker Image Registry Secret Names
+*/}}
+{{- define "parse.imagePullSecrets" -}}
+{{/*
+Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
+but Helm 2.9 and 2.10 do not support it, so we need to implement this if-else logic.
+Also, we cannot use a single if because lazy evaluation is not an option
+*/}}
+{{- if .Values.global }}
+{{- if .Values.global.imagePullSecrets }}
+imagePullSecrets:
+{{- range .Values.global.imagePullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- else if or .Values.server.image.pullSecrets .Values.dashboard.image.pullSecrets }}
+imagePullSecrets:
+{{- range .Values.server.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- range .Values.dashboard.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- end -}}
+{{- else if or .Values.server.image.pullSecrets .Values.dashboard.image.pullSecrets }}
+imagePullSecrets:
+{{- range .Values.server.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- range .Values.dashboard.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- end -}}
+{{- end -}}
diff --git a/stable/parse/templates/dashboard-deployment.yaml b/stable/parse/templates/dashboard-deployment.yaml
index 37243cfd53d6..36fb07a51437 100644
--- a/stable/parse/templates/dashboard-deployment.yaml
+++ b/stable/parse/templates/dashboard-deployment.yaml
@@ -5,7 +5,7 @@ metadata:
name: {{ template "parse.fullname" . }}-dashboard
labels:
app: {{ template "parse.name" . }}
- chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ chart: {{ template "parse.chart" . }}
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
component: "dashboard"
@@ -29,12 +29,7 @@ spec:
fsGroup: {{ .Values.dashboard.securityContext.fsGroup }}
runAsUser: {{ .Values.dashboard.securityContext.runAsUser }}
{{- end }}
- {{- if .Values.dashboard.image.pullSecrets }}
- imagePullSecrets:
- {{- range .Values.dashboard.image.pullSecrets }}
- - name: {{ . }}
- {{- end}}
- {{- end }}
+{{- include "parse.imagePullSecrets" . | indent 6 }}
containers:
- name: {{ template "parse.fullname" . }}
image: {{ template "parse.dashboard.image" . }}
diff --git a/stable/parse/templates/ingress.yaml b/stable/parse/templates/ingress.yaml
new file mode 100644
index 000000000000..54bb77cca481
--- /dev/null
+++ b/stable/parse/templates/ingress.yaml
@@ -0,0 +1,69 @@
+{{- if .Values.ingress.enabled }}
+apiVersion: extensions/v1beta1
+kind: Ingress
+metadata:
+ name: {{ template "parse.fullname" . }}
+ labels:
+ app: {{ template "parse.fullname" . }}
+ chart: {{ template "parse.chart" . }}
+ release: {{ .Release.Name | quote }}
+ heritage: {{ .Release.Service | quote }}
+ annotations:
+ {{- if .Values.ingress.certManager }}
+ kubernetes.io/tls-acme: "true"
+ {{- end }}
+ {{- range $key, $value := .Values.ingress.annotations }}
+ {{ $key }}: {{ $value | quote }}
+ {{- end }}
+spec:
+ rules:
+ {{- if .Values.dashboard.enabled }}
+ {{- range .Values.ingress.dashboard.hosts }}
+ - host: {{ .name }}
+ http:
+ paths:
+ - path: {{ default "/" .path }}
+ backend:
+ serviceName: {{ template "parse.fullname" $ }}
+ servicePort: dashboard-http
+ {{- end }}
+ {{- end }}
+ {{- range .Values.ingress.server.hosts }}
+ - host: {{ .name }}
+ http:
+ paths:
+ - path: {{ default "/" .path }}
+ backend:
+ serviceName: {{ template "parse.fullname" $ }}
+ servicePort: server-http
+ {{- end }}
+ tls:
+ {{- if .Values.dashboard.enabled }}
+ {{- range .Values.ingress.dashboard.hosts }}
+ {{- if .tls }}
+ - hosts:
+ {{- if .tlsHosts }}
+ {{- range $host := .tlsHosts }}
+ - {{ $host }}
+ {{- end }}
+ {{- else }}
+ - {{ .name }}
+ {{- end }}
+ secretName: {{ .tlsSecret }}
+ {{- end }}
+ {{- end }}
+ {{- end }}
+ {{- range .Values.ingress.server.hosts }}
+ {{- if .tls }}
+ - hosts:
+ {{- if .tlsHosts }}
+ {{- range $host := .tlsHosts }}
+ - {{ $host }}
+ {{- end }}
+ {{- else }}
+ - {{ .name }}
+ {{- end }}
+ secretName: {{ .tlsSecret }}
+ {{- end }}
+ {{- end }}
+{{- end }}
diff --git a/stable/parse/templates/pvc.yaml b/stable/parse/templates/pvc.yaml
index 97ea4720b6a1..e840f003ac2f 100644
--- a/stable/parse/templates/pvc.yaml
+++ b/stable/parse/templates/pvc.yaml
@@ -5,7 +5,7 @@ metadata:
name: {{ template "parse.fullname" . }}
labels:
app: {{ template "parse.name" . }}
- chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ chart: {{ template "parse.chart" . }}
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
spec:
diff --git a/stable/parse/templates/secrets.yaml b/stable/parse/templates/secrets.yaml
index 2acf9a449722..40b37625ba7d 100644
--- a/stable/parse/templates/secrets.yaml
+++ b/stable/parse/templates/secrets.yaml
@@ -4,7 +4,7 @@ metadata:
name: {{ template "parse.fullname" . }}
labels:
app: {{ template "parse.name" . }}
- chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ chart: {{ template "parse.chart" . }}
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
type: Opaque
diff --git a/stable/parse/templates/server-deployment.yaml b/stable/parse/templates/server-deployment.yaml
index 6bf5c210222e..bba9a358de53 100644
--- a/stable/parse/templates/server-deployment.yaml
+++ b/stable/parse/templates/server-deployment.yaml
@@ -4,7 +4,7 @@ metadata:
name: {{ template "parse.fullname" . }}-server
labels:
app: {{ template "parse.name" . }}
- chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ chart: {{ template "parse.chart" . }}
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
component: "server"
@@ -28,12 +28,7 @@ spec:
fsGroup: {{ .Values.server.securityContext.fsGroup }}
runAsUser: {{ .Values.server.securityContext.runAsUser }}
{{- end }}
- {{- if .Values.server.image.pullSecrets }}
- imagePullSecrets:
- {{- range .Values.server.image.pullSecrets }}
- - name: {{ . }}
- {{- end}}
- {{- end }}
+{{- include "parse.imagePullSecrets" . | indent 6 }}
containers:
- name: {{ template "parse.fullname" . }}
image: {{ template "parse.server.image" . }}
diff --git a/stable/parse/templates/svc.yaml b/stable/parse/templates/svc.yaml
index 24601b38d3b9..a7afc1011df6 100644
--- a/stable/parse/templates/svc.yaml
+++ b/stable/parse/templates/svc.yaml
@@ -4,7 +4,7 @@ metadata:
name: {{ template "parse.fullname" . }}
labels:
app: {{ template "parse.name" . }}
- chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ chart: {{ template "parse.chart" . }}
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
spec:
diff --git a/stable/parse/values.yaml b/stable/parse/values.yaml
index b95b04e0c05a..d6315ed91e7f 100644
--- a/stable/parse/values.yaml
+++ b/stable/parse/values.yaml
@@ -1,8 +1,11 @@
-## Global Docker image registry
-## Please, note that this will override the image registry for all the images, including dependencies, configured to use the global value
+## Global Docker image parameters
+## Please, note that this will override the image parameters, including dependencies, configured to use the global value
+## Current available global Docker image parameters: imageRegistry and imagePullSecrets
##
# global:
-# imageRegistry:
+# imageRegistry: myRegistryName
+# imagePullSecrets:
+# - myRegistryKeySecretName
## Kubernetes serviceType for Parse Deployment
## ref: http://kubernetes.io/docs/user-guide/services/#publishing-services---service-types
@@ -26,7 +29,6 @@ service:
## ref: http://kubernetes.io/docs/user-guide/services/#type-loadbalancer
##
#
-
server:
## Bitnami Parse image version
## ref: https://hub.docker.com/r/bitnami/parse/tags/
@@ -34,7 +36,7 @@ server:
image:
registry: docker.io
repository: bitnami/parse
- tag: 3.1.3
+ tag: 3.4.0-debian-9-r0
## Specify a imagePullPolicy
## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
@@ -45,7 +47,7 @@ server:
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
- # - myRegistrKeySecretName
+ # - myRegistryKeySecretName
## Parse Server Pod Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
@@ -105,7 +107,7 @@ dashboard:
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
- # - myRegistrKeySecretName
+ # - myRegistryKeySecretName
## Parse Dashboard Pod Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
@@ -139,6 +141,81 @@ dashboard:
# memory: 512Mi
# cpu: 300m
+## Configure the ingress resource that allows you to access the
+## Parse installation. Set up the URL
+## ref: http://kubernetes.io/docs/user-guide/ingress/
+##
+ingress:
+ ## Set to true to enable ingress record generation
+ enabled: false
+
+ ## Set this to true in order to add the corresponding annotations for cert-manager
+ certManager: false
+
+ ## Ingress annotations done as key:value pairs
+ ## For a full list of possible ingress annotations, please see
+ ## ref: https://github.com/kubernetes/ingress-nginx/blob/master/docs/annotations.md
+ ##
+ ## If tls is set to true, annotation ingress.kubernetes.io/secure-backends: "true" will automatically be set
+ ## If certManager is set to true, annotation kubernetes.io/tls-acme: "true" will automatically be set
+ annotations:
+ # kubernetes.io/ingress.class: nginx
+
+ dashboard:
+ ## The list of hostnames to be covered with this ingress record.
+ ## Most likely this will be just one host, but in the event more hosts are needed, this is an array
+ hosts:
+ - name: parse.local
+ path: /
+
+ ## Set this to true in order to enable TLS on the ingress record
+ tls: false
+
+ ## Optionally specify the TLS hosts for the ingress record
+ ## Useful when the Ingress controller supports www-redirection
+ ## If not specified, the above host name will be used
+ # tlsHosts:
+ # - www.parse.local
+ # - parse.local
+
+ ## If TLS is set to true, you must declare what secret will store the key/certificate for TLS
+ tlsSecret: parse.local-tls
+
+ server:
+ ## The list of hostnames to be covered with this ingress record.
+ ## Most likely this will be just one host, but in the event more hosts are needed, this is an array
+ hosts:
+ - name: parse-server.local
+ path: /
+
+ ## Set this to true in order to enable TLS on the ingress record
+ tls: false
+
+ ## Optionally specify the TLS hosts for the ingress record
+ ## Useful when the Ingress controller supports www-redirection
+ ## If not specified, the above host name will be used
+ # tlsHosts:
+ # - www.parse.local
+ # - parse.local
+
+ ## If TLS is set to true, you must declare what secret will store the key/certificate for TLS
+ tlsSecret: parse.local-tls
+
+ secrets:
+ ## If you're providing your own certificates, please use this to add the certificates as secrets
+ ## key and certificate should start with -----BEGIN CERTIFICATE----- or
+ ## -----BEGIN RSA PRIVATE KEY-----
+ ##
+ ## name should line up with a tlsSecret set further up
+ ## If you're using cert-manager, this is unneeded, as it will create the secret for you if it is not set
+ ##
+ ## It is also possible to create and manage the certificates outside of this helm chart
+ ## Please see README.md for more information
+ # - name: parse.local-tls
+ # key:
+ # certificate:
+
+
## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
diff --git a/stable/percona-xtradb-cluster/Chart.yaml b/stable/percona-xtradb-cluster/Chart.yaml
index 17e1c18c86e0..47216c621417 100644
--- a/stable/percona-xtradb-cluster/Chart.yaml
+++ b/stable/percona-xtradb-cluster/Chart.yaml
@@ -1,5 +1,6 @@
+apiVersion: v1
name: percona-xtradb-cluster
-version: 0.6.2
+version: 1.0.0
appVersion: 5.7.19
description: free, fully compatible, enhanced, open source drop-in replacement for
MySQL with Galera Replication (xtradb)
diff --git a/stable/percona-xtradb-cluster/README.md b/stable/percona-xtradb-cluster/README.md
index 68b54bd00867..d69ee3e32a04 100644
--- a/stable/percona-xtradb-cluster/README.md
+++ b/stable/percona-xtradb-cluster/README.md
@@ -59,6 +59,7 @@ The following table lists the configurable parameters of the Percona chart and t
| `allowRootFrom` | Remote hosts to allow root access, set to `127.0.0.1` to disable remote root | `%` |
| `mysqlRootPassword` | Password for the `root` user. | `not-a-secure-password` |
| `xtraBackupPassword` | Password for the `xtrabackup` user. | `replicate-my-data` |
+| `pxc_strict_mode` | Setting for `pxc_strict_mode`. | `ENFORCING` |
| `mysqlUser` | Username of new user to create. | `nil` |
| `mysqlPassword` | Password for the new user. | `nil` |
| `mysqlDatabase` | Name for new database to create. | `nil` |
@@ -81,7 +82,16 @@ The following table lists the configurable parameters of the Percona chart and t
| `metricsExporter.enabled` | if set to true runs a [mysql metrics exporter](https://github.com/prometheus/mysqld_exporter) container in the pod | false |
| `metricsExporter.commandOverrides` | Overrides default docker command for metrics exporter | `[]` |
| `metricsExporter.argsOverrides` | Overrides default docker args for metrics exporter | `[]` |
+| `prometheus.operator.enabled` | Setting to true will create Prometheus-Operator specific resources | `false` |
+| `prometheus.operator.prometheusRule.enabled` | Create default alerting rules | `true` |
+| `prometheus.operator.prometheusRule.labels` | Labels to add to alerts | `{}` |
+| `prometheus.operator.prometheusRule.namespace` | Namespace which Prometheus is installed in | `nil` |
+| `prometheus.operator.prometheusRule.selector` | Label Selector for Prometheus to find alert rules | `nil` |
+| `prometheus.operator.serviceMonitor.interval` | Interval at which Prometheus will scrape the metrics exporter | `10s` |
+| `prometheus.operator.serviceMonitor.namespace` | Namespace which Prometheus is installed in | `nil` |
+| `prometheus.operator.serviceMonitor.selector` | Label Selector for Prometheus to find ServiceMonitors | `nil` |
| `podDisruptionBudget` | Pod disruption budget | `{enabled: false, maxUnavailable: 1}` |
+| `service.percona.headless` | if set to true makes the percona service [headless](https://kubernetes.io/docs/concepts/services-networking/service/#headless-services) | false |
Some of the parameters above map to the env variables defined in the [Percona XtraDB Cluster DockerHub image](https://hub.docker.com/r/percona/percona-xtradb-cluster/).
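+
+For example, to enable the Prometheus Operator integration alongside the metrics exporter, you could use values like the following (a minimal sketch; it assumes the Prometheus Operator and its CRDs are already installed in the cluster):
+
+```yaml
+metricsExporter:
+  enabled: true
+
+prometheus:
+  operator:
+    enabled: true
+```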
@@ -170,3 +180,18 @@ If you are using a certificate your configurationFiles must include the three ss
ssl-cert=/ssl/server-cert.pem
ssl-key=/ssl/server-key.pem
```
+
+## PXC Strict Mode
+
+PXC Strict Mode is designed to avoid the use of experimental and unsupported features in Percona XtraDB Cluster. It performs a number of validations at startup and during runtime.
+
+Depending on the actual mode you select, upon encountering a failed validation, the server will either throw an error (halting startup or denying the operation), or log a warning and continue running as normal. The following modes are available:
+
+* DISABLED: Do not perform strict mode validations and run as normal.
+* PERMISSIVE: If a validation fails, log a warning and continue running as normal.
+* ENFORCING: If a validation fails during startup, halt the server and throw an error. If a validation fails during runtime, deny the operation and throw an error.
+* MASTER: The same as ENFORCING except that the validation of explicit table locking is not performed. This mode can be used with clusters in which write operations are isolated to a single node.
+
+By default, PXC Strict Mode is set to ENFORCING. However, if the node is acting as a standalone server or is bootstrapping, PXC Strict Mode defaults to DISABLED.
+
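+For example, to relax validations to warnings, you could override the default when installing the chart (a sketch; `my-release` is a placeholder release name):
+
+```console
+$ helm install --name my-release --set pxc_strict_mode=PERMISSIVE stable/percona-xtradb-cluster
+```
+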
+Source: https://www.percona.com/doc/percona-xtradb-cluster/LATEST/features/pxc-strict-mode.html
diff --git a/stable/percona-xtradb-cluster/files/entrypoint.sh b/stable/percona-xtradb-cluster/files/entrypoint.sh
index d1a89ce0ba33..91b0fca50952 100644
--- a/stable/percona-xtradb-cluster/files/entrypoint.sh
+++ b/stable/percona-xtradb-cluster/files/entrypoint.sh
@@ -24,7 +24,7 @@ if [[ -z "${cluster_join}" ]]; then
exec mysqld --user=mysql --wsrep_cluster_name=$SHORT_CLUSTER_NAME --wsrep_node_name=$hostname \
--wsrep_cluster_address=gcomm:// --wsrep_sst_method=xtrabackup-v2 \
--wsrep_sst_auth="xtrabackup:$XTRABACKUP_PASSWORD" \
- --wsrep_node_address="$ipaddr" $CMDARG
+ --wsrep_node_address="$ipaddr" --pxc_strict_mode="$PXC_STRICT_MODE" $CMDARG
else
echo "I am not the Primary Node"
chown -R mysql:mysql /var/lib/mysql || true # default is root:root 777
@@ -34,5 +34,5 @@ else
exec mysqld --user=mysql --wsrep_cluster_name=$SHORT_CLUSTER_NAME --wsrep_node_name=$hostname \
--wsrep_cluster_address="gcomm://$cluster_join" --wsrep_sst_method=xtrabackup-v2 \
--wsrep_sst_auth="xtrabackup:$XTRABACKUP_PASSWORD" \
- --wsrep_node_address="$ipaddr" $CMDARG
+ --wsrep_node_address="$ipaddr" --pxc_strict_mode="$PXC_STRICT_MODE" $CMDARG
fi
diff --git a/stable/percona-xtradb-cluster/templates/prometheusrule.yaml b/stable/percona-xtradb-cluster/templates/prometheusrule.yaml
new file mode 100644
index 000000000000..fe07a7e34391
--- /dev/null
+++ b/stable/percona-xtradb-cluster/templates/prometheusrule.yaml
@@ -0,0 +1,61 @@
+{{ if and .Values.metricsExporter.enabled .Values.prometheus.operator.enabled .Values.prometheus.operator.prometheusRule.enabled }}
+apiVersion: monitoring.coreos.com/v1
+kind: PrometheusRule
+metadata:
+ name: {{ template "percona-xtradb-cluster.fullname" . }}
+{{- if .Values.prometheus.operator.prometheusRule.namespace }}
+ namespace: {{ .Values.prometheus.operator.prometheusRule.namespace }}
+{{- end }}
+ labels:
+ app: {{ template "percona-xtradb-cluster.fullname" . }}
+ chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ heritage: "{{ .Release.Service }}"
+ release: "{{ .Release.Name }}"
+{{- if .Values.prometheus.operator.prometheusRule.selector }}
+{{ toYaml .Values.prometheus.operator.prometheusRule.selector | indent 4 }}
+{{- end }}
+spec:
+ groups:
+ - name: {{ template "percona-xtradb-cluster.fullname" . }}-alerts
+ rules:
+ - alert: MySQLGaleraNotReady
+ annotations:
+ message: Galera cluster node not ready
+ expr: mysql_global_status_wsrep_ready{service="{{ template "percona-xtradb-cluster.fullname" . }}-metrics"} != 1
+ for: 5m
+ labels:
+ severity: critical
+{{- if .Values.prometheus.operator.prometheusRule.labels }}
+{{ toYaml .Values.prometheus.operator.prometheusRule.labels | indent 8 }}
+{{- end }}
+ - alert: MySQLGaleraOutOfSync
+ annotations:
+ message: Galera cluster node out of sync
+ expr: (mysql_global_status_wsrep_local_state{service="{{ template "percona-xtradb-cluster.fullname" . }}-metrics"} != 4 and mysql_global_variables_wsrep_desync{service="{{ template "percona-xtradb-cluster.fullname" . }}-metrics"} == 0)
+ for: 5m
+ labels:
+ severity: critical
+{{- if .Values.prometheus.operator.prometheusRule.labels }}
+{{ toYaml .Values.prometheus.operator.prometheusRule.labels | indent 8 }}
+{{- end }}
+ - alert: MySQLGaleraDonorFallingBehind
+ annotations:
+ message: XtraDB cluster donor node falling behind
+ expr: (mysql_global_status_wsrep_local_state{service="{{ template "percona-xtradb-cluster.fullname" . }}-metrics"} == 2 and mysql_global_status_wsrep_local_recv_queue{service="{{ template "percona-xtradb-cluster.fullname" . }}-metrics"} > 100)
+ for: 5m
+ labels:
+ severity: warning
+{{- if .Values.prometheus.operator.prometheusRule.labels }}
+{{ toYaml .Values.prometheus.operator.prometheusRule.labels | indent 8 }}
+{{- end }}
+ - alert: MySQLInnoDBLogWaits
+ annotations:
+ message: MySQL innodb log writes stalling
+ expr: rate(mysql_global_status_innodb_log_waits{service="{{ template "percona-xtradb-cluster.fullname" . }}-metrics"}[15m]) > 10
+ for: 5m
+ labels:
+ severity: warning
+{{- if .Values.prometheus.operator.prometheusRule.labels }}
+{{ toYaml .Values.prometheus.operator.prometheusRule.labels | indent 8 }}
+{{- end }}
+{{ end }}
diff --git a/stable/percona-xtradb-cluster/templates/service-metrics.yaml b/stable/percona-xtradb-cluster/templates/service-metrics.yaml
index f4fe934df3e0..5bcc5541aa1c 100644
--- a/stable/percona-xtradb-cluster/templates/service-metrics.yaml
+++ b/stable/percona-xtradb-cluster/templates/service-metrics.yaml
@@ -1,4 +1,4 @@
-{{ if .Values.metricsExporter }}
+{{ if .Values.metricsExporter.enabled }}
---
apiVersion: v1
kind: Service
@@ -12,7 +12,8 @@ metadata:
spec:
clusterIP: None
ports:
- - port: 9104
+ - name: metrics
+ port: 9104
selector:
app: {{ template "percona-xtradb-cluster.fullname" . }}
release: "{{ .Release.Name }}"
diff --git a/stable/percona-xtradb-cluster/templates/service-percona.yaml b/stable/percona-xtradb-cluster/templates/service-percona.yaml
index 5120a5768bb7..659fc5739650 100644
--- a/stable/percona-xtradb-cluster/templates/service-percona.yaml
+++ b/stable/percona-xtradb-cluster/templates/service-percona.yaml
@@ -8,6 +8,9 @@ metadata:
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
spec:
+ {{- if .Values.service.percona.headless }}
+ clusterIP: None
+ {{- end }}
ports:
- name: mysql
port: 3306
diff --git a/stable/percona-xtradb-cluster/templates/servicemonitor.yaml b/stable/percona-xtradb-cluster/templates/servicemonitor.yaml
new file mode 100644
index 000000000000..7b7216826b41
--- /dev/null
+++ b/stable/percona-xtradb-cluster/templates/servicemonitor.yaml
@@ -0,0 +1,27 @@
+{{ if and .Values.metricsExporter.enabled .Values.prometheus.operator.enabled }}
+apiVersion: monitoring.coreos.com/v1
+kind: ServiceMonitor
+metadata:
+ name: {{ template "percona-xtradb-cluster.fullname" . }}
+{{- if .Values.prometheus.operator.serviceMonitor.namespace }}
+ namespace: {{ .Values.prometheus.operator.serviceMonitor.namespace }}
+{{- end }}
+ labels:
+ app: {{ template "percona-xtradb-cluster.fullname" . }}
+ chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ heritage: "{{ .Release.Service }}"
+ release: "{{ .Release.Name }}"
+{{- if .Values.prometheus.operator.serviceMonitor.selector }}
+{{ toYaml .Values.prometheus.operator.serviceMonitor.selector | indent 4 }}
+{{- end }}
+spec:
+ selector:
+ matchLabels:
+ app: {{ template "percona-xtradb-cluster.fullname" . }}
+ release: {{ .Release.Name }}
+ endpoints:
+ - port: metrics
+ interval: {{ .Values.prometheus.operator.serviceMonitor.interval }}
+ namespaceSelector:
+ any: true
+{{ end }}
diff --git a/stable/percona-xtradb-cluster/templates/statefulset.yaml b/stable/percona-xtradb-cluster/templates/statefulset.yaml
index 5071475c242c..5541f226fa31 100644
--- a/stable/percona-xtradb-cluster/templates/statefulset.yaml
+++ b/stable/percona-xtradb-cluster/templates/statefulset.yaml
@@ -79,6 +79,8 @@ spec:
value: {{ template "percona-xtradb-cluster.shortname" . }}
- name: K8S_SERVICE_NAME
value: {{ template "percona-xtradb-cluster.fullname" . }}-repl
+ - name: PXC_STRICT_MODE
+ value: {{ default "ENFORCING" .Values.pxc_strict_mode | quote }}
- name: DEBUG
value: "true"
ports:
@@ -92,7 +94,10 @@ spec:
containerPort: 4444
livenessProbe:
exec:
- command: ["mysqladmin","ping"]
+ command:
+ - "/bin/bash"
+ - "-c"
+ - "mysqladmin ping || test -e /var/lib/mysql/sst_in_progress"
initialDelaySeconds: 30
timeoutSeconds: 2
readinessProbe:
diff --git a/stable/percona-xtradb-cluster/values.yaml b/stable/percona-xtradb-cluster/values.yaml
index e15a0d427518..4d10525c054a 100644
--- a/stable/percona-xtradb-cluster/values.yaml
+++ b/stable/percona-xtradb-cluster/values.yaml
@@ -31,6 +31,10 @@ replicas: 3
##
# mysqlDatabase: test
+## Configure pxc_strict_mode
+## ref: https://www.percona.com/doc/percona-xtradb-cluster/LATEST/features/pxc-strict-mode.html
+# pxc_strict_mode: ENFORCING
+
## hosts to allow root user access from
# set to "127.0.0.1" to deny remote root.
allowRootFrom: "%"
@@ -97,6 +101,38 @@ metricsExporter:
commandOverrides: []
argsOverrides: []
+prometheus:
+ ## Are you using [Prometheus Operator](https://coreos.com/operators/prometheus/docs/latest/user-guides/getting-started.html)?
+ operator:
+ ## Setting to true will create Prometheus-Operator specific resources like ServiceMonitors
+ enabled: false
+
+ ## Configures alerts for Prometheus to pick up
+ prometheusRule:
+ enabled: true
+
+ ## Labels to add to alerts
+ labels: {}
+
+ ## Namespace which Prometheus is installed in
+ # namespace: monitoring
+
+ ## Label Selector for Prometheus to find alert rules
+ # selector:
+ # prometheus: kube-prometheus
+
+ ## Configures targets for Prometheus to pick up
+ serviceMonitor:
+ ## Interval at which Prometheus will scrape the metrics exporter
+ interval: 10s
+
+ ## Namespace which Prometheus is installed in
+ # namespace: monitoring
+
+ ## Label Selector for Prometheus to find ServiceMonitors
+ # selector:
+ # prometheus: kube-prometheus
+
## When set to true will create sidecar to tail mysql log
logTail: true
@@ -125,3 +161,9 @@ podDisruptionBudget:
enabled: false
# minAvailable: 1
maxUnavailable: 1
+
+## Make the percona Kubernetes service headless (no load-balancing)
+## ref: https://kubernetes.io/docs/concepts/services-networking/service/#headless-services
+service:
+ percona:
+ headless: false
diff --git a/stable/percona/Chart.yaml b/stable/percona/Chart.yaml
index f8966087ee58..61e3d76a7c72 100644
--- a/stable/percona/Chart.yaml
+++ b/stable/percona/Chart.yaml
@@ -1,5 +1,6 @@
+apiVersion: v1
name: percona
-version: 0.3.4
+version: 1.0.0
appVersion: 5.7.17
description: free, fully compatible, enhanced, open source drop-in replacement for
MySQL
diff --git a/stable/percona/README.md b/stable/percona/README.md
index 68bae8a83c20..13c71321d09b 100644
--- a/stable/percona/README.md
+++ b/stable/percona/README.md
@@ -62,6 +62,7 @@ The following table lists the configurable parameters of the Percona chart and t
| `resources` | CPU/Memory resource requests/limits | Memory: `256Mi`, CPU: `100m` |
| `nodeSelector` | Node labels for pod assignment | `{}` |
| `tolerations` | Node labels for pod assignment | `[]` |
+| `affinity` | Affinity settings for pod assignment | `{}` |
Some of the parameters above map to the env variables defined in the [Percona Server DockerHub image](https://hub.docker.com/_/percona/).
diff --git a/stable/percona/templates/deployment.yaml b/stable/percona/templates/deployment.yaml
index 939747a650ea..616d93383649 100644
--- a/stable/percona/templates/deployment.yaml
+++ b/stable/percona/templates/deployment.yaml
@@ -82,6 +82,10 @@ spec:
nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
{{- end -}}
+ {{- if .Values.affinity }}
+ affinity:
+{{ toYaml .Values.affinity | indent 8 }}
+ {{- end }}
{{- if .Values.tolerations }}
tolerations:
{{ toYaml .Values.tolerations | indent 8 }}
diff --git a/stable/percona/values.yaml b/stable/percona/values.yaml
index f5aa9c1d1c82..bbb45428777c 100644
--- a/stable/percona/values.yaml
+++ b/stable/percona/values.yaml
@@ -1,6 +1,5 @@
## percona image
## ref: https://hub.docker.com/_/percona/
-##
image: "percona"
## percona image version
@@ -64,3 +63,8 @@ nodeSelector: {}
## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
+
+## Affinity labels for pod assignment
+## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
+##
+affinity: {}
diff --git a/stable/phabricator/Chart.yaml b/stable/phabricator/Chart.yaml
index 31c4e18201b7..b87d5159b44e 100644
--- a/stable/phabricator/Chart.yaml
+++ b/stable/phabricator/Chart.yaml
@@ -1,6 +1,7 @@
+apiVersion: v1
name: phabricator
-version: 4.0.10
-appVersion: 2019.4.0
+version: 4.2.8
+appVersion: 2019.20.0
description: Collection of open source web applications that help software companies build better software.
keywords:
- phabricator
diff --git a/stable/phabricator/README.md b/stable/phabricator/README.md
index 9d46956fd25d..3e536849b29f 100644
--- a/stable/phabricator/README.md
+++ b/stable/phabricator/README.md
@@ -14,7 +14,7 @@ This chart bootstraps a [Phabricator](https://github.com/bitnami/bitnami-docker-
It also packages the [Bitnami MariaDB chart](https://github.com/kubernetes/charts/tree/master/stable/mariadb) which is required for bootstrapping a MariaDB deployment for the database requirements of the Phabricator application.
-Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters.
+Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters. This chart has been tested to work with NGINX Ingress, cert-manager, fluentd and Prometheus on top of the [BKPR](https://kubeprod.io/).
## Prerequisites
@@ -50,6 +50,7 @@ The following table lists the configurable parameters of the Phabricator chart a
| Parameter | Description | Default |
|----------------------------------------|----------------------------------------------|----------------------------------------------------------|
| `global.imageRegistry` | Global Docker image registry | `nil` |
+| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) |
| `image.registry` | Phabricator image registry | `docker.io` |
| `image.repository` | Phabricator image name | `bitnami/phabricator` |
| `image.tag` | Phabricator image tag | `{VERSION}` |
@@ -102,6 +103,9 @@ The following table lists the configurable parameters of the Phabricator chart a
| `metrics.image.pullSecrets` | Specify docker-registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) |
| `metrics.podAnnotations` | Additional annotations for Metrics exporter pod | `{prometheus.io/scrape: "true", prometheus.io/port: "9117"}` |
| `metrics.resources` | Exporter resource requests/limit | {} |
+| `nodeSelector` | Node labels for pod assignment | `nil` |
+| `affinity` | Node/pod affinities | `nil` |
+| `tolerations` | List of node taints to tolerate | `nil` |
The above parameters map to the env variables defined in [bitnami/phabricator](http://github.com/bitnami/bitnami-docker-phabricator). For more information please refer to the [bitnami/phabricator](http://github.com/bitnami/bitnami-docker-phabricator) image documentation.
diff --git a/stable/phabricator/templates/_helpers.tpl b/stable/phabricator/templates/_helpers.tpl
index e22d26c902ae..2c9d214c9f54 100644
--- a/stable/phabricator/templates/_helpers.tpl
+++ b/stable/phabricator/templates/_helpers.tpl
@@ -70,9 +70,57 @@ Also, we can't use a single if because lazy evaluation is not an option
{{/*
Return the proper image name (for the metrics image)
*/}}
-{{- define "metrics.image" -}}
-{{- $registryName := .Values.metrics.image.registry -}}
+{{- define "phabricator.metrics.image" -}}
+{{- $registryName := .Values.metrics.image.registry -}}
{{- $repositoryName := .Values.metrics.image.repository -}}
{{- $tag := .Values.metrics.image.tag | toString -}}
-{{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+{{/*
+Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
+but Helm 2.9 and 2.10 don't support it, so we need to implement this if-else logic.
+Also, we can't use a single if because lazy evaluation is not an option
+*/}}
+{{- if .Values.global }}
+ {{- if .Values.global.imageRegistry }}
+ {{- printf "%s/%s:%s" .Values.global.imageRegistry $repositoryName $tag -}}
+ {{- else -}}
+ {{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+ {{- end -}}
+{{- else -}}
+ {{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Return the proper Docker Image Registry Secret Names
+*/}}
+{{- define "phabricator.imagePullSecrets" -}}
+{{/*
+Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
+but Helm 2.9 and 2.10 do not support it, so we need to implement this if-else logic.
+Also, we cannot use a single if because lazy evaluation is not an option
+*/}}
+{{- if .Values.global }}
+{{- if .Values.global.imagePullSecrets }}
+imagePullSecrets:
+{{- range .Values.global.imagePullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- else if or .Values.image.pullSecrets .Values.metrics.image.pullSecrets }}
+imagePullSecrets:
+{{- range .Values.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- range .Values.metrics.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- end -}}
+{{- else if or .Values.image.pullSecrets .Values.metrics.image.pullSecrets }}
+imagePullSecrets:
+{{- range .Values.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- range .Values.metrics.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- end -}}
{{- end -}}
diff --git a/stable/phabricator/templates/deployment.yaml b/stable/phabricator/templates/deployment.yaml
index cca4e33936a0..5fd5f5742ca2 100644
--- a/stable/phabricator/templates/deployment.yaml
+++ b/stable/phabricator/templates/deployment.yaml
@@ -29,12 +29,7 @@ spec:
{{- end }}
{{- end }}
spec:
- {{- if .Values.image.pullSecrets }}
- imagePullSecrets:
- {{- range .Values.image.pullSecrets }}
- - name: {{ . }}
- {{- end}}
- {{- end }}
+{{- include "phabricator.imagePullSecrets" . | indent 6 }}
hostAliases:
- ip: "127.0.0.1"
hostnames:
@@ -136,7 +131,7 @@ spec:
mountPath: /bitnami/apache
{{- if .Values.metrics.enabled }}
- name: metrics
- image: {{ template "metrics.image" . }}
+ image: {{ template "phabricator.metrics.image" . }}
imagePullPolicy: {{ .Values.metrics.image.pullPolicy | quote }}
command: [ '/bin/apache_exporter', '-scrape_uri', 'http://status.localhost:80/server-status/?auto']
ports:
@@ -172,4 +167,16 @@ spec:
{{- else }}
emptyDir: {}
{{- end }}
+ {{- if .Values.nodeSelector }}
+ nodeSelector:
+{{ toYaml .Values.nodeSelector | indent 8 }}
+ {{- end }}
+ {{- if .Values.affinity }}
+ affinity:
+{{ toYaml .Values.affinity | indent 8 }}
+ {{- end }}
+ {{- if .Values.tolerations }}
+ tolerations:
+{{ toYaml .Values.tolerations | indent 8 }}
+ {{- end }}
{{- end -}}
diff --git a/stable/phabricator/values.yaml b/stable/phabricator/values.yaml
index 408942bd4bc9..108e2ba12cd1 100644
--- a/stable/phabricator/values.yaml
+++ b/stable/phabricator/values.yaml
@@ -1,8 +1,11 @@
-## Global Docker image registry
-## Please, note that this will override the image registry for all the images, including dependencies, configured to use the global value
+## Global Docker image parameters
+## Please note that this will override the image parameters, including dependencies, configured to use the global value
+## Current available global Docker image parameters: imageRegistry and imagePullSecrets
##
# global:
-# imageRegistry:
+# imageRegistry: myRegistryName
+# imagePullSecrets:
+# - myRegistryKeySecretName
## Bitnami Phabricator image version
## ref: https://hub.docker.com/r/bitnami/phabricator/tags/
@@ -10,7 +13,7 @@
image:
registry: docker.io
repository: bitnami/phabricator
- tag: 2019.4.0
+ tag: 2019.20.0
## Specify a imagePullPolicy
## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
@@ -21,7 +24,7 @@ image:
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
- # - myRegistrKeySecretName
+ # - myRegistryKeySecretName
## Phabricator host to create application URLs
## ref: https://github.com/bitnami/bitnami-docker-phabricator#configuration
@@ -225,7 +228,7 @@ metrics:
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
- # - myRegistrKeySecretName
+ # - myRegistryKeySecretName
## Metrics exporter pod Annotation and Labels
podAnnotations:
prometheus.io/scrape: "true"
@@ -234,3 +237,18 @@ metrics:
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
# resources: {}
+
+## Node labels for pod assignment
+## ref: https://kubernetes.io/docs/user-guide/node-selection/
+##
+# nodeSelector: {}
+
+## Affinity for pod assignment
+## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
+##
+# affinity: {}
+
+## Tolerations for pod assignment
+## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
+##
+# tolerations: []
diff --git a/stable/phpbb/Chart.yaml b/stable/phpbb/Chart.yaml
index 4ae3d6befbc4..d18658cec5bc 100644
--- a/stable/phpbb/Chart.yaml
+++ b/stable/phpbb/Chart.yaml
@@ -1,6 +1,7 @@
+apiVersion: v1
name: phpbb
-version: 4.0.3
-appVersion: 3.2.5
+version: 4.3.2
+appVersion: 3.2.7
description: Community forum that supports the notion of users and groups, file attachments, full-text search, notifications and more.
keywords:
- phpbb
diff --git a/stable/phpbb/README.md b/stable/phpbb/README.md
index 041addd216d3..ff68625f8d41 100644
--- a/stable/phpbb/README.md
+++ b/stable/phpbb/README.md
@@ -14,7 +14,7 @@ This chart bootstraps a [phpBB](https://github.com/bitnami/bitnami-docker-phpbb)
It also packages the [Bitnami MariaDB chart](https://github.com/kubernetes/charts/tree/master/stable/mariadb) which is required for bootstrapping a MariaDB deployment for the database requirements of the phpBB application.
-Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters.
+Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters. This chart has been tested to work with NGINX Ingress, cert-manager, fluentd and Prometheus on top of the [BKPR](https://kubeprod.io/).
## Prerequisites
@@ -50,6 +50,7 @@ The following table lists the configurable parameters of the phpBB chart and the
| Parameter | Description | Default |
|-----------------------------------|---------------------------------------|---------------------------------------------------------|
| `global.imageRegistry` | Global Docker image registry | `nil` |
+| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) |
| `image.registry` | phpBB image registry | `docker.io` |
| `image.repository` | phpBB image name | `bitnami/phpbb` |
| `image.tag` | phpBB image tag | `{VERSION}` |
@@ -67,6 +68,17 @@ The following table lists the configurable parameters of the phpBB chart and the
| `externalDatabase.user` | Existing username in the external db | `bn_phpbb` |
| `externalDatabase.password` | Password for the above username | `nil` |
| `externalDatabase.database` | Name of the existing database | `bitnami_phpbb` |
+| `ingress.enabled` | Enable ingress controller resource | `false` |
+| `ingress.annotations` | Ingress annotations | `[]` |
+| `ingress.certManager` | Add annotations for cert-manager | `false` |
+| `ingress.hosts[0].name` | Hostname to your phpbb installation | `phpbb.local` |
+| `ingress.hosts[0].path` | Path within the url structure | `/` |
+| `ingress.hosts[0].tls` | Utilize TLS backend in ingress | `false` |
+| `ingress.hosts[0].tlsHosts` | Array of TLS hosts for ingress record (defaults to `ingress.hosts[0].name` if `nil`) | `nil` |
+| `ingress.hosts[0].tlsSecret` | TLS Secret (certificates) | `phpbb.local-tls-secret` |
+| `ingress.secrets[0].name` | TLS Secret Name | `nil` |
+| `ingress.secrets[0].certificate` | TLS Secret Certificate | `nil` |
+| `ingress.secrets[0].key` | TLS Secret Key | `nil` |
| `mariadb.enabled` | Use or not the MariaDB chart | `true` |
| `mariadb.rootUser.password` | MariaDB admin password | `nil` |
| `mariadb.db.name` | Database name to create | `bitnami_phpbb` |
diff --git a/stable/phpbb/templates/_helpers.tpl b/stable/phpbb/templates/_helpers.tpl
index 36950493a6d1..fa1476b58d38 100644
--- a/stable/phpbb/templates/_helpers.tpl
+++ b/stable/phpbb/templates/_helpers.tpl
@@ -56,9 +56,57 @@ Also, we can't use a single if because lazy evaluation is not an option
{{/*
Return the proper image name (for the metrics image)
*/}}
-{{- define "metrics.image" -}}
-{{- $registryName := .Values.metrics.image.registry -}}
+{{- define "phpbb.metrics.image" -}}
+{{- $registryName := .Values.metrics.image.registry -}}
{{- $repositoryName := .Values.metrics.image.repository -}}
{{- $tag := .Values.metrics.image.tag | toString -}}
-{{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+{{/*
+Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
+but Helm 2.9 and 2.10 don't support it, so we need to implement this if-else logic.
+Also, we can't use a single if because lazy evaluation is not an option
+*/}}
+{{- if .Values.global }}
+ {{- if .Values.global.imageRegistry }}
+ {{- printf "%s/%s:%s" .Values.global.imageRegistry $repositoryName $tag -}}
+ {{- else -}}
+ {{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+ {{- end -}}
+{{- else -}}
+ {{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Return the proper Docker Image Registry Secret Names
+*/}}
+{{- define "phpbb.imagePullSecrets" -}}
+{{/*
+Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
+but Helm 2.9 and 2.10 do not support it, so we need to implement this if-else logic.
+Also, we cannot use a single if because lazy evaluation is not an option
+*/}}
+{{- if .Values.global }}
+{{- if .Values.global.imagePullSecrets }}
+imagePullSecrets:
+{{- range .Values.global.imagePullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- else if or .Values.image.pullSecrets .Values.metrics.image.pullSecrets }}
+imagePullSecrets:
+{{- range .Values.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- range .Values.metrics.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- end -}}
+{{- else if or .Values.image.pullSecrets .Values.metrics.image.pullSecrets }}
+imagePullSecrets:
+{{- range .Values.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- range .Values.metrics.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- end -}}
{{- end -}}
diff --git a/stable/phpbb/templates/deployment.yaml b/stable/phpbb/templates/deployment.yaml
index 19221cf55b9b..8b2500e0bce7 100644
--- a/stable/phpbb/templates/deployment.yaml
+++ b/stable/phpbb/templates/deployment.yaml
@@ -30,12 +30,7 @@ spec:
{{- end }}
{{- end }}
spec:
- {{- if .Values.image.pullSecrets }}
- imagePullSecrets:
- {{- range .Values.image.pullSecrets }}
- - name: {{ . }}
- {{- end}}
- {{- end }}
+{{- include "phpbb.imagePullSecrets" . | indent 6 }}
hostAliases:
- ip: "127.0.0.1"
hostnames:
@@ -137,7 +132,7 @@ spec:
mountPath: /bitnami/apache
{{- if .Values.metrics.enabled }}
- name: metrics
- image: {{ template "metrics.image" . }}
+ image: {{ template "phpbb.metrics.image" . }}
imagePullPolicy: {{ .Values.metrics.image.pullPolicy | quote }}
command: [ '/bin/apache_exporter', '-scrape_uri', 'http://status.localhost:80/server-status/?auto']
ports:
diff --git a/stable/phpbb/templates/ingress.yaml b/stable/phpbb/templates/ingress.yaml
new file mode 100644
index 000000000000..5fc12b42b11a
--- /dev/null
+++ b/stable/phpbb/templates/ingress.yaml
@@ -0,0 +1,43 @@
+{{- if .Values.ingress.enabled }}
+apiVersion: extensions/v1beta1
+kind: Ingress
+metadata:
+ name: {{ template "phpbb.fullname" . }}
+ labels:
+ app: "{{ template "phpbb.fullname" . }}"
+ chart: "{{ template "phpbb.chart" . }}"
+ release: {{ .Release.Name | quote }}
+ heritage: {{ .Release.Service | quote }}
+ annotations:
+ {{- if .Values.ingress.certManager }}
+ kubernetes.io/tls-acme: "true"
+ {{- end }}
+ {{- range $key, $value := .Values.ingress.annotations }}
+ {{ $key }}: {{ $value | quote }}
+ {{- end }}
+spec:
+ rules:
+ {{- range .Values.ingress.hosts }}
+ - host: {{ .name }}
+ http:
+ paths:
+ - path: {{ default "/" .path }}
+ backend:
+ serviceName: {{ template "phpbb.fullname" $ }}
+ servicePort: http
+ {{- end }}
+ tls:
+ {{- range .Values.ingress.hosts }}
+ {{- if .tls }}
+ - hosts:
+ {{- if .tlsHosts }}
+ {{- range $host := .tlsHosts }}
+ - {{ $host }}
+ {{- end }}
+ {{- else }}
+ - {{ .name }}
+ {{- end }}
+ secretName: {{ .tlsSecret }}
+ {{- end }}
+ {{- end }}
+{{- end }}
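
As a sketch of how the template above behaves, a single host with TLS enabled (hostnames and secret names are illustrative) renders roughly the following:

```yaml
# Illustrative values fragment:
ingress:
  enabled: true
  hosts:
    - name: phpbb.local
      path: /
      tls: true
      tlsSecret: phpbb.local-tls

# Approximate rendered spec (serviceName comes from the "phpbb.fullname" template):
#
# spec:
#   rules:
#     - host: phpbb.local
#       http:
#         paths:
#           - path: /
#             backend:
#               serviceName: <release-name>-phpbb
#               servicePort: http
#   tls:
#     - hosts:
#         - phpbb.local
#       secretName: phpbb.local-tls
```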
diff --git a/stable/phpbb/values.yaml b/stable/phpbb/values.yaml
index 93f37f118592..1910b90a0fc6 100644
--- a/stable/phpbb/values.yaml
+++ b/stable/phpbb/values.yaml
@@ -1,8 +1,11 @@
-## Global Docker image registry
-## Please, note that this will override the image registry for all the images, including dependencies, configured to use the global value
+## Global Docker image parameters
+## Please, note that this will override the image parameters, including dependencies, configured to use the global value
+## Current available global Docker image parameters: imageRegistry and imagePullSecrets
##
# global:
-# imageRegistry:
+# imageRegistry: myRegistryName
+# imagePullSecrets:
+# - myRegistryKeySecretName
## Bitnami phpBB image version
## ref: https://hub.docker.com/r/bitnami/phpbb/tags/
@@ -10,7 +13,7 @@
image:
registry: docker.io
repository: bitnami/phpbb
- tag: 3.2.5
+ tag: 3.2.7
## Specify a imagePullPolicy
## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
@@ -21,7 +24,7 @@ image:
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
- # - myRegistrKeySecretName
+ # - myRegistryKeySecretName
## User of the application
## ref: https://github.com/bitnami/bitnami-docker-phpbb#environment-variables
@@ -115,6 +118,60 @@ mariadb:
accessMode: ReadWriteOnce
size: 8Gi
+## Configure the ingress resource that allows you to access the
+## phpbb installation. Set up the URL
+## ref: http://kubernetes.io/docs/user-guide/ingress/
+##
+ingress:
+ ## Set to true to enable ingress record generation
+ enabled: false
+
+ ## Set this to true in order to add the corresponding annotations for cert-manager
+ certManager: false
+
+ ## Ingress annotations done as key:value pairs
+ ## For a full list of possible ingress annotations, please see
+ ## ref: https://github.com/kubernetes/ingress-nginx/blob/master/docs/annotations.md
+ ##
+ ## If tls is set to true, annotation ingress.kubernetes.io/secure-backends: "true" will automatically be set
+ ## If certManager is set to true, annotation kubernetes.io/tls-acme: "true" will automatically be set
+ annotations:
+ # kubernetes.io/ingress.class: nginx
+
+ ## The list of hostnames to be covered with this ingress record.
+ ## Most likely this will be just one host, but in the event more hosts are needed, this is an array
+ hosts:
+ - name: phpbb.local
+ path: /
+
+ ## Set this to true in order to enable TLS on the ingress record
+ tls: false
+
+ ## Optionally specify the TLS hosts for the ingress record
+ ## Useful when the Ingress controller supports www-redirection
+ ## If not specified, the above host name will be used
+ # tlsHosts:
+ # - www.phpbb.local
+ # - phpbb.local
+
+ ## If TLS is set to true, you must declare what secret will store the key/certificate for TLS
+ tlsSecret: phpbb.local-tls
+
+ secrets:
+ ## If you're providing your own certificates, please use this to add the certificates as secrets
+ ## key and certificate should start with -----BEGIN CERTIFICATE----- or
+ ## -----BEGIN RSA PRIVATE KEY-----
+ ##
+ ## name should line up with a tlsSecret set further up
+ ## If you're using cert-manager, this is unneeded, as it will create the secret for you if it is not set
+ ##
+ ## It is also possible to create and manage the certificates outside of this helm chart
+ ## Please see README.md for more information
+ # - name: phpbb.local-tls
+ # key:
+ # certificate:
+
+
## Kubernetes configuration
## For minikube, set this to NodePort, elsewhere use LoadBalancer
##
@@ -192,7 +249,7 @@ metrics:
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
- # - myRegistrKeySecretName
+ # - myRegistryKeySecretName
## Metrics exporter pod Annotation and Labels
podAnnotations:
prometheus.io/scrape: "true"
diff --git a/stable/phpmyadmin/Chart.yaml b/stable/phpmyadmin/Chart.yaml
index 6ede7fbef54b..765b48ed1d15 100644
--- a/stable/phpmyadmin/Chart.yaml
+++ b/stable/phpmyadmin/Chart.yaml
@@ -1,5 +1,6 @@
+apiVersion: v1
name: phpmyadmin
-version: 2.0.4
+version: 2.2.0
appVersion: 4.8.5
description: phpMyAdmin is a MySQL administration frontend
keywords:
diff --git a/stable/phpmyadmin/README.md b/stable/phpmyadmin/README.md
index 24f6bf329c76..91f8bd655f0b 100644
--- a/stable/phpmyadmin/README.md
+++ b/stable/phpmyadmin/README.md
@@ -12,7 +12,7 @@ $ helm install stable/phpmyadmin
This chart bootstraps a [phpMyAdmin](https://github.com/bitnami/bitnami-docker-phpmyadmin) deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
-Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters.
+Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters. This chart has been tested to work with NGINX Ingress, cert-manager, fluentd and Prometheus on top of the [BKPR](https://kubeprod.io/).
## Prerequisites
@@ -47,6 +47,7 @@ The following table lists the configurable parameters of the phpMyAdmin chart an
| Parameter | Description | Default |
|----------------------------|------------------------------------------|---------------------------------------------------------|
| `global.imageRegistry` | Global Docker image registry | `nil` |
+| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) |
| `image.registry` | phpMyAdmin image registry | `docker.io` |
| `image.repository` | phpMyAdmin image name | `bitnami/phpmyadmin` |
| `image.tag` | phpMyAdmin image tag | `{VERSION}` |
diff --git a/stable/phpmyadmin/templates/_helpers.tpl b/stable/phpmyadmin/templates/_helpers.tpl
index 0d776554e150..14537cfb12be 100644
--- a/stable/phpmyadmin/templates/_helpers.tpl
+++ b/stable/phpmyadmin/templates/_helpers.tpl
@@ -73,9 +73,57 @@ Also, we can't use a single if because lazy evaluation is not an option
{{/*
Return the proper image name (for the metrics image)
*/}}
-{{- define "metrics.image" -}}
-{{- $registryName := .Values.metrics.image.registry -}}
+{{- define "phpmyadmin.metrics.image" -}}
+{{- $registryName := .Values.metrics.image.registry -}}
{{- $repositoryName := .Values.metrics.image.repository -}}
{{- $tag := .Values.metrics.image.tag | toString -}}
-{{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+{{/*
+Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
+but Helm 2.9 and 2.10 don't support it, so we need to implement this if-else logic.
+Also, we can't use a single if because lazy evaluation is not an option.
+*/}}
+{{- if .Values.global }}
+ {{- if .Values.global.imageRegistry }}
+ {{- printf "%s/%s:%s" .Values.global.imageRegistry $repositoryName $tag -}}
+ {{- else -}}
+ {{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+ {{- end -}}
+{{- else -}}
+ {{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+{{- end -}}
+{{- end -}}
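
The registry precedence implemented by this helper can be hard to follow in template syntax; here is a minimal Python sketch of the same logic (the function and sample values are illustrative, not part of the chart):

```python
def metrics_image(values):
    """Mimic the "phpmyadmin.metrics.image" helper: a non-empty
    global.imageRegistry overrides the per-image registry."""
    image = values["metrics"]["image"]
    registry = image["registry"]
    repository = image["repository"]
    tag = str(image["tag"])  # the template applies toString to the tag
    global_values = values.get("global") or {}
    if global_values.get("imageRegistry"):
        registry = global_values["imageRegistry"]
    return f"{registry}/{repository}:{tag}"


values = {"metrics": {"image": {"registry": "docker.io",
                                "repository": "bitnami/apache-exporter",
                                "tag": "0.7.0"}}}
print(metrics_image(values))   # docker.io/bitnami/apache-exporter:0.7.0

values["global"] = {"imageRegistry": "my-registry.example.com"}
print(metrics_image(values))   # my-registry.example.com/bitnami/apache-exporter:0.7.0
```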
+
+{{/*
+Return the proper Docker Image Registry Secret Names
+*/}}
+{{- define "phpmyadmin.imagePullSecrets" -}}
+{{/*
+Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
+but Helm 2.9 and 2.10 do not support it, so we need to implement this if-else logic.
+Also, we cannot use a single if because lazy evaluation is not an option.
+*/}}
+{{- if .Values.global }}
+{{- if .Values.global.imagePullSecrets }}
+imagePullSecrets:
+{{- range .Values.global.imagePullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- else if or .Values.image.pullSecrets .Values.metrics.image.pullSecrets }}
+imagePullSecrets:
+{{- range .Values.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- range .Values.metrics.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- end -}}
+{{- else if or .Values.image.pullSecrets .Values.metrics.image.pullSecrets }}
+imagePullSecrets:
+{{- range .Values.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- range .Values.metrics.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- end -}}
{{- end -}}
diff --git a/stable/phpmyadmin/templates/deployment.yaml b/stable/phpmyadmin/templates/deployment.yaml
index b1999f9c0f88..87e2f4cb6b7b 100644
--- a/stable/phpmyadmin/templates/deployment.yaml
+++ b/stable/phpmyadmin/templates/deployment.yaml
@@ -29,12 +29,7 @@ spec:
{{- end }}
{{- end }}
spec:
- {{- if .Values.image.pullSecrets }}
- imagePullSecrets:
- {{- range .Values.image.pullSecrets }}
- - name: {{ . }}
- {{- end}}
- {{- end }}
+{{- include "phpmyadmin.imagePullSecrets" . | indent 6 }}
hostAliases:
- ip: "127.0.0.1"
hostnames:
@@ -49,10 +44,10 @@ spec:
{{- if .Values.db.chartName }}
- name: DATABASE_HOST
value: "{{ template "phpmyadmin.dbfullname" . }}"
- {{- else if .Values.db.bundleTestDB }}
+ {{- else if .Values.db.bundleTestDB }}
- name: DATABASE_HOST
value: "{{ template "mariadb.fullname" . }}"
- {{- else }}
+ {{- else }}
- name: DATABASE_HOST
value: "{{ .Values.db.host }}"
{{- end }}
@@ -86,7 +81,7 @@ spec:
{{ toYaml .Values.resources | indent 12 }}
{{- if .Values.metrics.enabled }}
- name: metrics
- image: {{ template "metrics.image" . }}
+ image: {{ template "phpmyadmin.metrics.image" . }}
imagePullPolicy: {{ .Values.metrics.image.pullPolicy | quote }}
command: [ '/bin/apache_exporter', '-scrape_uri', 'http://status.localhost:80/server-status/?auto']
ports:
diff --git a/stable/phpmyadmin/values.yaml b/stable/phpmyadmin/values.yaml
index 9fdda410fc6d..f0fd422ffe86 100644
--- a/stable/phpmyadmin/values.yaml
+++ b/stable/phpmyadmin/values.yaml
@@ -1,8 +1,11 @@
-## Global Docker image registry
-## Please, note that this will override the image registry for all the images, including dependencies, configured to use the global value
+## Global Docker image parameters
+## Please, note that this will override the image parameters, including dependencies, configured to use the global value
+## Current available global Docker image parameters: imageRegistry and imagePullSecrets
##
# global:
-# imageRegistry:
+# imageRegistry: myRegistryName
+# imagePullSecrets:
+# - myRegistryKeySecretName
## Bitnami phpMyAdmin image version
## ref: https://hub.docker.com/r/bitnami/phpmyadmin/tags/
@@ -18,7 +21,7 @@ image:
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
- # - myRegistrKeySecretName
+ # - myRegistryKeySecretName
## User of the application
## ref: https://github.com/bitnami/bitnami-docker-phpmyadmin#environment-variables
@@ -102,7 +105,7 @@ metrics:
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
- # - myRegistrKeySecretName
+ # - myRegistryKeySecretName
## Metrics exporter pod Annotation and Labels
podAnnotations:
prometheus.io/scrape: "true"
diff --git a/stable/postgresql/Chart.yaml b/stable/postgresql/Chart.yaml
index 6630e77df439..2368e4dc0dae 100644
--- a/stable/postgresql/Chart.yaml
+++ b/stable/postgresql/Chart.yaml
@@ -1,6 +1,7 @@
+apiVersion: v1
name: postgresql
-version: 3.9.5
-appVersion: 10.6.0
+version: 5.1.0
+appVersion: 11.3.0
description: Chart for PostgreSQL, an object-relational database management system (ORDBMS) with an emphasis on extensibility and on standards-compliance.
keywords:
- postgresql
diff --git a/stable/postgresql/README.md b/stable/postgresql/README.md
index 5bb0f07f3f7e..43119ce6e775 100644
--- a/stable/postgresql/README.md
+++ b/stable/postgresql/README.md
@@ -12,7 +12,7 @@ $ helm install stable/postgresql
This chart bootstraps a [PostgreSQL](https://github.com/bitnami/bitnami-docker-postgresql) deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
-Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters.
+Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters. This chart has been tested to work with NGINX Ingress, cert-manager, fluentd and Prometheus on top of the [BKPR](https://kubeprod.io/).
## Prerequisites
@@ -20,7 +20,6 @@ Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment
- PV provisioner support in the underlying infrastructure
## Installing the Chart
-
To install the chart with the release name `my-release`:
```console
@@ -45,88 +44,120 @@ The command removes all the Kubernetes components associated with the chart and
The following tables lists the configurable parameters of the PostgreSQL chart and their default values.
-| Parameter | Description | Default |
-|-----------------------------------------------|------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------|
-| `global.imageRegistry` | Global Docker Image registry | `nil` |
-| `image.registry` | PostgreSQL Image registry | `docker.io` |
-| `image.repository` | PostgreSQL Image name | `bitnami/postgresql` |
-| `image.tag` | PostgreSQL Image tag | `{VERSION}` |
-| `image.pullPolicy` | PostgreSQL Image pull policy | `Always` |
-| `image.pullSecrets` | Specify Image pull secrets | `nil` (does not add image pull secrets to deployed pods) |
-| `image.debug` | Specify if debug values should be set | `false` |
-| `volumePermissions.image.registry` | Init container volume-permissions image registry | `docker.io` |
-| `volumePermissions.image.repository` | Init container volume-permissions image name | `bitnami/minideb` |
-| `volumePermissions.image.tag` | Init container volume-permissions image tag | `latest` |
-| `volumePermissions.image.pullPolicy` | Init container volume-permissions image pull policy | `Always` |
-| `volumePermissions.securityContext.runAsUser` | User ID for the init container | `0` |
-| `usePasswordFile` | Have the secrets mounted as a file instead of env vars | `false` |
-| `replication.enabled` | Would you like to enable replication | `false` |
-| `replication.user` | Replication user | `repl_user` |
-| `replication.password` | Replication user password | `repl_password` |
-| `replication.slaveReplicas` | Number of slaves replicas | `1` |
-| `replication.synchronousCommit` | Set synchronous commit mode. Allowed values: `on`, `remote_apply`, `remote_write`, `local` and `off` | `off` |
-| `replication.numSynchronousReplicas` | Number of replicas that will have synchronous replication. Note: Cannot be greater than `replication.slaveReplicas`. | `0` |
-| `replication.applicationName` | Cluster application name. Useful for advanced replication settings | `my_application` |
-| `existingSecret` | Name of existing secret to use for PostgreSQL passwords | `nil` |
-| `postgresqlUsername` | PostgreSQL admin user | `postgres` |
-| `postgresqlPassword` | PostgreSQL admin password | _random 10 character alphanumeric string_ |
-| `postgresqlDatabase` | PostgreSQL database | `nil` |
-| `postgresqlConfiguration` | Runtime Config Parameters | `nil` |
-| `postgresqlExtendedConf` | Extended Runtime Config Parameters (appended to main or default configuration) | `nil` |
-| `pgHbaConfiguration` | Content of pg\_hba.conf | `nil (do not create pg_hba.conf)` |
-| `configurationConfigMap` | ConfigMap with the PostgreSQL configuration files (Note: Overrides `postgresqlConfiguration` and `pgHbaConfiguration`) | `nil` |
-| `extendedConfConfigMap` | ConfigMap with the extended PostgreSQL configuration files | `nil` |
-| `initdbScripts` | List of initdb scripts | `nil` |
-| `initdbScriptsConfigMap` | ConfigMap with the initdb scripts (Note: Overrides `initdbScripts`) | `nil` |
-| `service.type` | Kubernetes Service type | `ClusterIP` |
-| `service.port` | PostgreSQL port | `5432` |
-| `service.nodePort` | Kubernetes Service nodePort | `nil` |
-| `service.annotations` | Annotations for PostgreSQL service | {} |
-| `service.loadBalancerIP` | loadBalancerIP if service type is `LoadBalancer` | `nil` |
-| `persistence.enabled` | Enable persistence using PVC | `true` |
-| `persistence.existingClaim` | Provide an existing `PersistentVolumeClaim` | `nil` |
-| `persistence.mountPath` | Path to mount the volume at | `/bitnami/postgresql` |
-| `persistence.storageClass` | PVC Storage Class for PostgreSQL volume | `nil` |
-| `persistence.accessMode` | PVC Access Mode for PostgreSQL volume | `ReadWriteOnce` |
-| `persistence.size` | PVC Storage Request for PostgreSQL volume | `8Gi` |
-| `persistence.annotations` | Annotations for the PVC | `{}` |
-| `master.nodeSelector` | Node labels for pod assignment (postgresql master) | `{}` |
-| `master.affinity` | Affinity labels for pod assignment (postgresql master) | `{}` |
-| `master.tolerations` | Toleration labels for pod assignment (postgresql master) | `[]` |
-| `slave.nodeSelector` | Node labels for pod assignment (postgresql slave) | `{}` |
-| `slave.affinity` | Affinity labels for pod assignment (postgresql slave) | `{}` |
-| `slave.tolerations` | Toleration labels for pod assignment (postgresql slave) | `[]` |
-| `terminationGracePeriodSeconds` | Seconds the pod needs to terminate gracefully | `nil` |
-| `resources` | CPU/Memory resource requests/limits | Memory: `256Mi`, CPU: `250m` |
-| `securityContext.enabled` | Enable security context | `true` |
-| `securityContext.fsGroup` | Group ID for the container | `1001` |
-| `securityContext.runAsUser` | User ID for the container | `1001` |
-| `livenessProbe.enabled` | Would you like a livessProbed to be enabled | `true` |
-| `networkPolicy.enabled` | Enable NetworkPolicy | `false` |
-| `networkPolicy.allowExternal` | Don't require client label for connections | `true` |
-| `livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | 30 |
-| `livenessProbe.periodSeconds` | How often to perform the probe | 10 |
-| `livenessProbe.timeoutSeconds` | When the probe times out | 5 |
-| `livenessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | 6 |
-| `livenessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | 1 |
-| `readinessProbe.enabled` | would you like a readinessProbe to be enabled | `true` |
-| `readinessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | 5 |
-| `readinessProbe.periodSeconds` | How often to perform the probe | 10 |
-| `readinessProbe.timeoutSeconds` | When the probe times out | 5 |
-| `readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | 6 |
-| `readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | 1 |
-| `metrics.enabled` | Start a prometheus exporter | `false` |
-| `metrics.service.type` | Kubernetes Service type | `ClusterIP` |
-| `service.clusterIP` | Static clusterIP or None for headless services | `nil` |
-| `metrics.service.annotations` | Additional annotations for metrics exporter pod | `{}` |
-| `metrics.service.loadBalancerIP` | loadBalancerIP if redis metrics service type is `LoadBalancer` | `nil` |
-| `metrics.image.registry` | PostgreSQL Image registry | `docker.io` |
-| `metrics.image.repository` | PostgreSQL Image name | `wrouesnel/postgres_exporter` |
-| `metrics.image.tag` | PostgreSQL Image tag | `{VERSION}` |
-| `metrics.image.pullPolicy` | PostgreSQL Image pull policy | `IfNotPresent` |
-| `metrics.image.pullSecrets` | Specify Image pull secrets | `nil` (does not add image pull secrets to deployed pods) |
-| `extraEnv` | Any extra environment variables you would like to pass on to the pod | `{}` |
-| `updateStrategy` | Update strategy policy | `{type: "onDelete"}` |
+| Parameter | Description | Default |
+| --------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------- |
+| `global.imageRegistry` | Global Docker Image registry | `nil` |
+| `global.postgresql.postgresqlDatabase` | PostgreSQL database (overrides `postgresqlDatabase`) | `nil` |
+| `global.postgresql.postgresqlUsername` | PostgreSQL username (overrides `postgresqlUsername`) | `nil` |
+| `global.postgresql.existingSecret` | Name of existing secret to use for PostgreSQL passwords (overrides `existingSecret`) | `nil` |
+| `global.postgresql.postgresqlPassword` | Name of existing secret to use for PostgreSQL passwords (overrides `postgresqlPassword`) | `nil` |
+| `global.postgresql.servicePort` | PostgreSQL port (overrides `service.port`) | `nil` |
+| `global.postgresql.replicationPassword` | Replication user password (overrides `replication.password`) | `nil` |
+| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) |
+| `image.registry` | PostgreSQL Image registry | `docker.io` |
+| `image.repository` | PostgreSQL Image name | `bitnami/postgresql` |
+| `image.tag` | PostgreSQL Image tag | `{VERSION}` |
+| `image.pullPolicy` | PostgreSQL Image pull policy | `Always` |
+| `image.pullSecrets` | Specify Image pull secrets | `nil` (does not add image pull secrets to deployed pods) |
+| `image.debug` | Specify if debug values should be set | `false` |
+| `volumePermissions.image.registry` | Init container volume-permissions image registry | `docker.io` |
+| `volumePermissions.image.repository` | Init container volume-permissions image name | `bitnami/minideb` |
+| `volumePermissions.image.tag` | Init container volume-permissions image tag | `latest` |
+| `volumePermissions.image.pullPolicy` | Init container volume-permissions image pull policy | `Always` |
+| `volumePermissions.securityContext.runAsUser` | User ID for the init container | `0` |
+| `usePasswordFile` | Have the secrets mounted as a file instead of env vars | `false` |
+| `replication.enabled` | Would you like to enable replication | `false` |
+| `replication.user` | Replication user | `repl_user` |
+| `replication.password` | Replication user password | `repl_password` |
+| `replication.slaveReplicas`                   | Number of slave replicas                                                                                                 | `1`                                                          |
+| `replication.synchronousCommit` | Set synchronous commit mode. Allowed values: `on`, `remote_apply`, `remote_write`, `local` and `off` | `off` |
+| `replication.numSynchronousReplicas` | Number of replicas that will have synchronous replication. Note: Cannot be greater than `replication.slaveReplicas`. | `0` |
+| `replication.applicationName` | Cluster application name. Useful for advanced replication settings | `my_application` |
+| `existingSecret` | Name of existing secret to use for PostgreSQL passwords | `nil` |
+| `postgresqlUsername` | PostgreSQL admin user | `postgres` |
+| `postgresqlPassword` | PostgreSQL admin password | _random 10 character alphanumeric string_ |
+| `postgresqlDatabase` | PostgreSQL database | `nil` |
+| `postgresqlDataDir` | PostgreSQL data dir folder | `/bitnami/postgresql` (same value as persistence.mountPath) |
+| `postgresqlInitdbArgs` | PostgreSQL initdb extra arguments | `nil` |
+| `postgresqlInitdbWalDir` | PostgreSQL location for transaction log | `nil` |
+| `postgresqlConfiguration` | Runtime Config Parameters | `nil` |
+| `postgresqlExtendedConf` | Extended Runtime Config Parameters (appended to main or default configuration) | `nil` |
+| `pgHbaConfiguration` | Content of pg\_hba.conf | `nil (do not create pg_hba.conf)` |
+| `configurationConfigMap` | ConfigMap with the PostgreSQL configuration files (Note: Overrides `postgresqlConfiguration` and `pgHbaConfiguration`). The value is evaluated as a template. | `nil` |
+| `extendedConfConfigMap` | ConfigMap with the extended PostgreSQL configuration files. The value is evaluated as a template. | `nil` |
+| `initdbScripts` | Dictionary of initdb scripts | `nil` |
+| `initdbScriptsConfigMap` | ConfigMap with the initdb scripts (Note: Overrides `initdbScripts`). The value is evaluated as a template. | `nil` |
+| `initdbScriptsSecret` | Secret with initdb scripts that contain sensitive information (Note: can be used with `initdbScriptsConfigMap` or `initdbScripts`). The value is evaluated as a template. | `nil` |
+| `service.type` | Kubernetes Service type | `ClusterIP` |
+| `service.port` | PostgreSQL port | `5432` |
+| `service.nodePort` | Kubernetes Service nodePort | `nil` |
+| `service.annotations` | Annotations for PostgreSQL service | {} |
+| `service.loadBalancerIP` | loadBalancerIP if service type is `LoadBalancer` | `nil` |
+| `service.loadBalancerSourceRanges`            | Addresses that are allowed when the service type is `LoadBalancer`                                                       | []                                                           |
+| `persistence.enabled` | Enable persistence using PVC | `true` |
+| `persistence.existingClaim` | Provide an existing `PersistentVolumeClaim`, the value is evaluated as a template. | `nil` |
+| `persistence.mountPath` | Path to mount the volume at | `/bitnami/postgresql` |
+| `persistence.subPath` | Subdirectory of the volume to mount at | `""` |
+| `persistence.storageClass` | PVC Storage Class for PostgreSQL volume | `nil` |
+| `persistence.accessModes` | PVC Access Mode for PostgreSQL volume | `[ReadWriteOnce]` |
+| `persistence.size` | PVC Storage Request for PostgreSQL volume | `8Gi` |
+| `persistence.annotations` | Annotations for the PVC | `{}` |
+| `master.nodeSelector` | Node labels for pod assignment (postgresql master) | `{}` |
+| `master.affinity` | Affinity labels for pod assignment (postgresql master) | `{}` |
+| `master.tolerations` | Toleration labels for pod assignment (postgresql master) | `[]` |
+| `master.podAnnotations` | Map of annotations to add to the pods (postgresql master) | `{}` |
+| `master.podLabels` | Map of labels to add to the pods (postgresql master) | `{}` |
+| `slave.nodeSelector` | Node labels for pod assignment (postgresql slave) | `{}` |
+| `slave.affinity` | Affinity labels for pod assignment (postgresql slave) | `{}` |
+| `slave.tolerations` | Toleration labels for pod assignment (postgresql slave) | `[]` |
+| `slave.podAnnotations` | Map of annotations to add to the pods (postgresql slave) | `{}` |
+| `slave.podLabels` | Map of labels to add to the pods (postgresql slave) | `{}` |
+| `terminationGracePeriodSeconds` | Seconds the pod needs to terminate gracefully | `nil` |
+| `resources` | CPU/Memory resource requests/limits | Memory: `256Mi`, CPU: `250m` |
+| `securityContext.enabled` | Enable security context | `true` |
+| `securityContext.fsGroup` | Group ID for the container | `1001` |
+| `securityContext.runAsUser` | User ID for the container | `1001` |
+| `serviceAccount.enabled` | Enable service account (Note: Service Account will only be automatically created if `serviceAccount.name` is not set) | `false` |
+| `serviceAccount.name`                         | Name of existing service account                                                                                         | `nil`                                                        |
+| `livenessProbe.enabled`                       | Would you like a livenessProbe to be enabled                                                                             | `true`                                                       |
+| `networkPolicy.enabled` | Enable NetworkPolicy | `false` |
+| `networkPolicy.allowExternal` | Don't require client label for connections | `true` |
+| `livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | 30 |
+| `livenessProbe.periodSeconds` | How often to perform the probe | 10 |
+| `livenessProbe.timeoutSeconds` | When the probe times out | 5 |
+| `livenessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | 6 |
+| `livenessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | 1 |
+| `readinessProbe.enabled`                      | Would you like a readinessProbe to be enabled                                                                            | `true`                                                       |
+| `readinessProbe.initialDelaySeconds`          | Delay before readiness probe is initiated                                                                                | 5                                                            |
+| `readinessProbe.periodSeconds` | How often to perform the probe | 10 |
+| `readinessProbe.timeoutSeconds` | When the probe times out | 5 |
+| `readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | 6 |
+| `readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | 1 |
+| `metrics.enabled` | Start a prometheus exporter | `false` |
+| `metrics.service.type` | Kubernetes Service type | `ClusterIP` |
+| `service.clusterIP` | Static clusterIP or None for headless services | `nil` |
+| `metrics.service.annotations` | Additional annotations for metrics exporter pod | `{ prometheus.io/scrape: "true", prometheus.io/port: "9187"}` |
+| `metrics.service.loadBalancerIP`              | loadBalancerIP if metrics service type is `LoadBalancer`                                                                 | `nil`                                                        |
+| `metrics.image.registry`                      | PostgreSQL Exporter Image registry                                                                                       | `docker.io`                                                  |
+| `metrics.image.repository`                    | PostgreSQL Exporter Image name                                                                                           | `wrouesnel/postgres_exporter`                                |
+| `metrics.image.tag`                           | PostgreSQL Exporter Image tag                                                                                            | `v0.4.7`                                                     |
+| `metrics.image.pullPolicy`                    | PostgreSQL Exporter Image pull policy                                                                                    | `IfNotPresent`                                               |
+| `metrics.image.pullSecrets` | Specify Image pull secrets | `nil` (does not add image pull secrets to deployed pods) |
+| `metrics.securityContext.enabled` | Enable security context for metrics | `false` |
+| `metrics.securityContext.runAsUser` | User ID for the container for metrics | `1001` |
+| `metrics.livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | 30 |
+| `metrics.livenessProbe.periodSeconds` | How often to perform the probe | 10 |
+| `metrics.livenessProbe.timeoutSeconds` | When the probe times out | 5 |
+| `metrics.livenessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | 6 |
+| `metrics.livenessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | 1 |
+| `metrics.readinessProbe.enabled`              | Would you like a readinessProbe to be enabled                                                                            | `true`                                                       |
+| `metrics.readinessProbe.initialDelaySeconds`  | Delay before readiness probe is initiated                                                                                | 5                                                            |
+| `metrics.readinessProbe.periodSeconds` | How often to perform the probe | 10 |
+| `metrics.readinessProbe.timeoutSeconds` | When the probe times out | 5 |
+| `metrics.readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | 6 |
+| `metrics.readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | 1 |
+| `extraEnv` | Any extra environment variables you would like to pass on to the pod | `{}` |
+| `updateStrategy` | Update strategy policy | `{type: "RollingUpdate"}` |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
@@ -169,7 +200,7 @@ The [Bitnami PostgreSQL](https://github.com/bitnami/bitnami-docker-postgresql) i
Alternatively, you can specify custom scripts using the `initdbScripts` parameter as dict.
-In addition to these options, you can also set an external ConfigMap with all the initialization scripts. This is done by setting the `initdbScriptsConfigMap` parameter. Note that this will override the two previous options.
+In addition to these options, you can also set an external ConfigMap with all the initialization scripts. This is done by setting the `initdbScriptsConfigMap` parameter. Note that this will override the two previous options. If your initialization scripts contain sensitive information such as credentials or passwords, you can use the `initdbScriptsSecret` parameter.
The allowed extensions are `.sh`, `.sql` and `.sql.gz`.
@@ -212,8 +243,103 @@ With NetworkPolicy enabled, traffic will be limited to just port 5432.
For more precise policy, set `networkPolicy.allowExternal=false`. This will only allow pods with the generated client label to connect to PostgreSQL.
This label will be displayed in the output of a successful install.
+## Deploy chart using Docker Official PostgreSQL Image
+
+From chart version 4.0.0, it is possible to use this chart with the Docker Official PostgreSQL image.
+Besides specifying the new Docker repository and tag, it is important to modify the PostgreSQL data directory and volume mount point: the PostgreSQL data directory cannot be the mount point itself, it must be a subdirectory of it.
+
+```
+helm install --name postgres \
+ --set image.repository=postgres \
+ --set image.tag=10.6 \
+ --set postgresqlDataDir=/data/pgdata \
+ --set persistence.mountPath=/data/ \
+ stable/postgresql
+```
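+
+The same overrides can be kept in a values file instead of `--set` flags (a sketch; the file name is hypothetical, the parameters are the ones used above):
+
+```yaml
+# values-official-image.yaml
+image:
+  repository: postgres
+  tag: 10.6
+# the data dir must be a subdirectory of the mount point
+postgresqlDataDir: /data/pgdata
+persistence:
+  mountPath: /data/
+```
+
+and applied with `helm install --name postgres -f values-official-image.yaml stable/postgresql`.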
+
+## Differences between Bitnami PostgreSQL image and [Docker Official](https://hub.docker.com/_/postgres) image
+
+- The Docker Official PostgreSQL image does not support replication; any replication environment variables you pass will be ignored. The only environment variables supported by the Docker Official image are `POSTGRES_USER`, `POSTGRES_DB`, `POSTGRES_PASSWORD`, `POSTGRES_INITDB_ARGS`, `POSTGRES_INITDB_WALDIR` and `PGDATA`. All the remaining environment variables are specific to the Bitnami PostgreSQL image.
+- The Bitnami PostgreSQL image is non-root by default. This requires running the pod with a `securityContext` and updating the volume permissions with an `initContainer`. A key benefit of this configuration is that the pod follows security best practices and is prepared to run on Kubernetes distributions with hard security constraints like OpenShift.
+
+## Use of global variables
+
+In more complex scenarios, we may have the following tree of dependencies
+
+```
+ +--------------+
+ | |
+ +------------+ Chart 1 +-----------+
+ | | | |
+ | --------+------+ |
+ | | |
+ | | |
+ | | |
+ | | |
+ v v v
++-------+------+ +--------+------+ +--------+------+
+| | | | | |
+| PostgreSQL | | Sub-chart 1 | | Sub-chart 2 |
+| | | | | |
++--------------+ +---------------+ +---------------+
+```
+
+The parent chart, Chart 1, depends on the other three charts shown in the diagram. However, subcharts 1 and 2 may need to connect to PostgreSQL as well. To do so, they need to know the PostgreSQL credentials, so one option for deploying could be:
+
+```
+helm install chart1 --set postgresql.postgresqlPassword=testtest --set subchart1.postgresql.postgresqlPassword=testtest --set subchart2.postgresql.postgresqlPassword=testtest --set postgresql.postgresqlDatabase=db1 --set subchart1.postgresql.postgresqlDatabase=db1 --set subchart2.postgresql.postgresqlDatabase=db1
+```
+
+If the number of dependent sub-charts increases, the `helm install` command can become unwieldy. An alternative is to set the credentials using global variables:
+
+```
+helm install chart1 --set global.postgresql.postgresqlPassword=testtest --set global.postgresql.postgresqlDatabase=db1
+```
+
+This way, the credentials will be available in all of the subcharts.
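+
+The same global values can live in the parent chart's `values.yaml` instead of `--set` flags (a sketch; `global.postgresql.*` are the keys read by this chart's helpers):
+
+```yaml
+global:
+  postgresql:
+    postgresqlPassword: testtest
+    postgresqlDatabase: db1
+```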
+
## Upgrade
+### 5.0.0
+
+In this version, the **chart uses PostgreSQL 11 instead of PostgreSQL 10**. You can find the main differences and notable changes at the following links: [https://www.postgresql.org/about/news/1894/](https://www.postgresql.org/about/news/1894/) and [https://www.postgresql.org/about/featurematrix/](https://www.postgresql.org/about/featurematrix/).
+
+For major releases of PostgreSQL, the internal data storage format is subject to change, which complicates upgrades. You may see errors like the following in the logs:
+
+```bash
+Welcome to the Bitnami postgresql container
+Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-postgresql
+Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-postgresql/issues
+Send us your feedback at containers@bitnami.com
+
+INFO ==> ** Starting PostgreSQL setup **
+NFO ==> Validating settings in POSTGRESQL_* env vars..
+INFO ==> Initializing PostgreSQL database...
+INFO ==> postgresql.conf file not detected. Generating it...
+INFO ==> pg_hba.conf file not detected. Generating it...
+INFO ==> Deploying PostgreSQL with persisted data...
+INFO ==> Configuring replication parameters
+INFO ==> Loading custom scripts...
+INFO ==> Enabling remote connections
+INFO ==> Stopping PostgreSQL...
+INFO ==> ** PostgreSQL setup finished! **
+
+INFO ==> ** Starting PostgreSQL **
+ [1] FATAL: database files are incompatible with server
+ [1] DETAIL: The data directory was initialized by PostgreSQL version 10, which is not compatible with this version 11.3.
+```
+
+In this case, you should migrate the data from the old chart to the new one following an approach similar to that described in [this section](https://www.postgresql.org/docs/current/upgrading.html#UPGRADING-VIA-PGDUMPALL) of the official documentation: create a database dump with the old chart, then restore it in the new one.
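+
+As a sketch of that approach (the release and pod names `old-postgres-postgresql-0` and `new-postgres-postgresql-0` are hypothetical; adjust them to your releases):
+
+```
+# dump all databases from the old (PostgreSQL 10) release
+kubectl exec old-postgres-postgresql-0 -- env PGPASSWORD="$POSTGRES_PASSWORD" pg_dumpall -U postgres > backup.sql
+
+# restore the dump into the new (PostgreSQL 11) release
+kubectl exec -i new-postgres-postgresql-0 -- env PGPASSWORD="$POSTGRES_PASSWORD" psql -U postgres < backup.sql
+```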
+
+### 4.0.0
+
+This chart uses the Bitnami PostgreSQL container version `10.7.0-r68` by default. That container version moves the initialization logic from node.js to bash. This new version of the chart requires setting `POSTGRES_PASSWORD` in the slaves as well, in order to properly configure the `pg_hba.conf` file. Users of previous versions of the chart are advised to upgrade immediately.
+
+IMPORTANT: If you do not want to upgrade the chart version, make sure you use the `10.7.0-r68` version of the container. Otherwise, you will get this error:
+
+```
+The POSTGRESQL_PASSWORD environment variable is empty or not set. Set the environment variable ALLOW_EMPTY_PASSWORD=yes to allow the container to be started with blank passwords. This is recommended only for development
+```
+
### 3.0.0
This release makes it possible to specify different nodeSelector, affinity and tolerations for master and slave pods.
diff --git a/stable/postgresql/templates/NOTES.txt b/stable/postgresql/templates/NOTES.txt
index 41c22104910e..721c9dcc45f0 100644
--- a/stable/postgresql/templates/NOTES.txt
+++ b/stable/postgresql/templates/NOTES.txt
@@ -1,36 +1,19 @@
-{{- if contains .Values.service.type "LoadBalancer" }}
-{{- if not .Values.postgresqlPassword }}
--------------------------------------------------------------------------------
- WARNING
-
- By specifying "serviceType=LoadBalancer" and not specifying "postgresqlPassword"
- you have most likely exposed the PostgreSQL service externally without any
- authentication mechanism.
-
- For security reasons, we strongly suggest that you switch to "ClusterIP" or
- "NodePort". As an alternative, you can also specify a valid password on the
- "postgresqlPassword" parameter.
-
--------------------------------------------------------------------------------
-{{- end }}
-{{- end }}
-
** Please be patient while the chart is being deployed **
-PostgreSQL can be accessed via port 5432 on the following DNS name from within your cluster:
+PostgreSQL can be accessed via port {{ template "postgresql.port" . }} on the following DNS name from within your cluster:
{{ template "postgresql.fullname" . }}.{{ .Release.Namespace }}.svc.cluster.local - Read/Write connection
{{- if .Values.replication.enabled }}
{{ template "postgresql.fullname" . }}-read.{{ .Release.Namespace }}.svc.cluster.local - Read only connection
{{- end }}
-To get the password for "{{ .Values.postgresqlUsername }}" run:
+To get the password for "{{ template "postgresql.username" . }}" run:
- export POSTGRESQL_PASSWORD=$(kubectl get secret --namespace {{ .Release.Namespace }} {{ if .Values.existingSecret }}{{ .Values.existingSecret }}{{ else }}{{ template "postgresql.fullname" . }}{{ end }} -o jsonpath="{.data.postgresql-password}" | base64 --decode)
+ export POSTGRES_PASSWORD=$(kubectl get secret --namespace {{ .Release.Namespace }} {{ if .Values.existingSecret }}{{ .Values.existingSecret }}{{ else }}{{ template "postgresql.fullname" . }}{{ end }} -o jsonpath="{.data.postgresql-password}" | base64 --decode)
To connect to your database run the following command:
- kubectl run {{ template "postgresql.fullname" . }}-client --rm --tty -i --restart='Never' --namespace {{ .Release.Namespace }} --image bitnami/postgresql --env="PGPASSWORD=$POSTGRESQL_PASSWORD" {{- if and (.Values.networkPolicy.enabled) (not .Values.networkPolicy.allowExternal) }}
- --labels="{{ template "postgresql.fullname" . }}-client=true" {{- end }} --command -- psql --host {{ template "postgresql.fullname" . }} -U {{ .Values.postgresqlUsername }}
+ kubectl run {{ template "postgresql.fullname" . }}-client --rm --tty -i --restart='Never' --namespace {{ .Release.Namespace }} --image {{ template "postgresql.image" . }} --env="PGPASSWORD=$POSTGRES_PASSWORD" {{- if and (.Values.networkPolicy.enabled) (not .Values.networkPolicy.allowExternal) }}
+ --labels="{{ template "postgresql.fullname" . }}-client=true" {{- end }} --command -- psql --host {{ template "postgresql.fullname" . }} -U {{ .Values.postgresqlUsername }}{{- if .Values.postgresqlDatabase }} -d {{ .Values.postgresqlDatabase }}{{- end }} -p {{ template "postgresql.port" . }}
{{ if and (.Values.networkPolicy.enabled) (not .Values.networkPolicy.allowExternal) }}
Note: Since NetworkPolicy is enabled, only pods with label "{{ template "postgresql.fullname" . }}-client=true" will be able to connect to this PostgreSQL cluster.
@@ -42,7 +25,7 @@ To connect to your database from outside the cluster execute the following comma
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "postgresql.fullname" . }})
- {{ if .Values.postgresqlPassword }}PGPASSWORD="{{ .Values.postgresqlPassword}}" {{ end }}psql --host $NODE_IP --port $NODE_PORT -U {{ .Values.postgresqlUsername }}
+ {{ if (include "postgresql.password" . ) }}PGPASSWORD="$POSTGRES_PASSWORD" {{ end }}psql --host $NODE_IP --port $NODE_PORT -U {{ .Values.postgresqlUsername }}{{- if .Values.postgresqlDatabase }} -d {{ .Values.postgresqlDatabase }}{{- end }}
{{- else if contains "LoadBalancer" .Values.service.type }}
@@ -50,11 +33,11 @@ To connect to your database from outside the cluster execute the following comma
Watch the status with: 'kubectl get svc --namespace {{ .Release.Namespace }} -w {{ template "postgresql.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "postgresql.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
- {{ if .Values.postgresqlPassword }}PGPASSWORD="{{ .Values.postgresqlPassword}}" {{ end }}psql --host $SERVICE_IP --port {{ .Values.service.port }} -U {{ .Values.postgresqlUsername }}
+ {{ if (include "postgresql.password" . ) }}PGPASSWORD="$POSTGRES_PASSWORD" {{ end }}psql --host $SERVICE_IP --port {{ template "postgresql.port" . }} -U {{ .Values.postgresqlUsername }}{{- if .Values.postgresqlDatabase }} -d {{ .Values.postgresqlDatabase }}{{- end }}
{{- else if contains "ClusterIP" .Values.service.type }}
- kubectl port-forward --namespace {{ .Release.Namespace }} svc/{{ template "postgresql.fullname" . }} 5432:5432 &
- {{ if .Values.postgresqlPassword }}PGPASSWORD="{{ .Values.postgresqlPassword}}" {{ end }}psql --host 127.0.0.1 -U {{ .Values.postgresqlUsername }}
+ kubectl port-forward --namespace {{ .Release.Namespace }} svc/{{ template "postgresql.fullname" . }} {{ template "postgresql.port" . }}:{{ template "postgresql.port" . }} &
+ {{ if (include "postgresql.password" . ) }}PGPASSWORD="$POSTGRES_PASSWORD" {{ end }}psql --host 127.0.0.1 -U {{ .Values.postgresqlUsername }}{{- if .Values.postgresqlDatabase }} -d {{ .Values.postgresqlDatabase }}{{- end }} -p {{ template "postgresql.port" . }}
{{- end }}
diff --git a/stable/postgresql/templates/_helpers.tpl b/stable/postgresql/templates/_helpers.tpl
index d17977960881..40621db836de 100644
--- a/stable/postgresql/templates/_helpers.tpl
+++ b/stable/postgresql/templates/_helpers.tpl
@@ -74,6 +74,77 @@ Also, we can't use a single if because lazy evaluation is not an option
{{- end -}}
{{- end -}}
+{{/*
+Return PostgreSQL password
+*/}}
+{{- define "postgresql.password" -}}
+{{- if .Values.global.postgresql.postgresqlPassword }}
+ {{- .Values.global.postgresql.postgresqlPassword -}}
+{{- else if .Values.postgresqlPassword -}}
+ {{- .Values.postgresqlPassword -}}
+{{- else -}}
+ {{- randAlphaNum 10 -}}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Return PostgreSQL replication password
+*/}}
+{{- define "postgresql.replication.password" -}}
+{{- if .Values.global.postgresql.replicationPassword }}
+ {{- .Values.global.postgresql.replicationPassword -}}
+{{- else if .Values.replication.password -}}
+ {{- .Values.replication.password -}}
+{{- else -}}
+ {{- randAlphaNum 10 -}}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Return PostgreSQL username
+*/}}
+{{- define "postgresql.username" -}}
+{{- if .Values.global.postgresql.postgresqlUsername }}
+ {{- .Values.global.postgresql.postgresqlUsername -}}
+{{- else -}}
+ {{- .Values.postgresqlUsername -}}
+{{- end -}}
+{{- end -}}
+
+
+{{/*
+Return PostgreSQL replication username
+*/}}
+{{- define "postgresql.replication.username" -}}
+{{- if .Values.global.postgresql.replicationUser }}
+ {{- .Values.global.postgresql.replicationUser -}}
+{{- else -}}
+ {{- .Values.replication.user -}}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Return PostgreSQL port
+*/}}
+{{- define "postgresql.port" -}}
+{{- if .Values.global.postgresql.servicePort }}
+ {{- .Values.global.postgresql.servicePort -}}
+{{- else -}}
+ {{- .Values.service.port -}}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Return PostgreSQL created database
+*/}}
+{{- define "postgresql.database" -}}
+{{- if .Values.global.postgresql.postgresqlDatabase }}
+ {{- .Values.global.postgresql.postgresqlDatabase -}}
+{{- else if .Values.postgresqlDatabase -}}
+ {{- .Values.postgresqlDatabase -}}
+{{- end -}}
+{{- end -}}
+
{{/*
Return the proper image name to change the volume permissions
*/}}
@@ -97,24 +168,50 @@ Also, we can't use a single if because lazy evaluation is not an option
{{- end -}}
{{- end -}}
-
{{/*
Return the proper PostgreSQL metrics image name
*/}}
-{{- define "metrics.image" -}}
+{{- define "postgresql.metrics.image" -}}
{{- $registryName := default "docker.io" .Values.metrics.image.registry -}}
+{{- $repositoryName := .Values.metrics.image.repository -}}
{{- $tag := default "latest" .Values.metrics.image.tag | toString -}}
-{{- printf "%s/%s:%s" $registryName .Values.metrics.image.repository $tag -}}
+{{/*
+Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
+but Helm 2.9 and 2.10 doesn't support it, so we need to implement this if-else logic.
+Also, we can't use a single if because lazy evaluation is not an option
+*/}}
+{{- if .Values.global }}
+ {{- if .Values.global.imageRegistry }}
+ {{- printf "%s/%s:%s" .Values.global.imageRegistry $repositoryName $tag -}}
+ {{- else -}}
+ {{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+ {{- end -}}
+{{- else -}}
+ {{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+{{- end -}}
{{- end -}}
{{/*
Get the password secret.
*/}}
{{- define "postgresql.secretName" -}}
-{{- if .Values.existingSecret -}}
-{{- printf "%s" .Values.existingSecret -}}
+{{- if .Values.global.postgresql.existingSecret }}
+ {{- printf "%s" .Values.global.postgresql.existingSecret -}}
+{{- else if .Values.existingSecret -}}
+ {{- printf "%s" .Values.existingSecret -}}
{{- else -}}
-{{- printf "%s" (include "postgresql.fullname" .) -}}
+ {{- printf "%s" (include "postgresql.fullname" .) -}}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Return true if a secret object should be created
+*/}}
+{{- define "postgresql.createSecret" -}}
+{{- if .Values.global.postgresql.existingSecret }}
+{{- else if .Values.existingSecret -}}
+{{- else -}}
+ {{- true -}}
{{- end -}}
{{- end -}}
@@ -123,7 +220,7 @@ Get the configuration ConfigMap name.
*/}}
{{- define "postgresql.configurationCM" -}}
{{- if .Values.configurationConfigMap -}}
-{{- printf "%s" .Values.configurationConfigMap -}}
+{{- printf "%s" (tpl .Values.configurationConfigMap $) -}}
{{- else -}}
{{- printf "%s-configuration" (include "postgresql.fullname" .) -}}
{{- end -}}
@@ -134,7 +231,7 @@ Get the extended configuration ConfigMap name.
*/}}
{{- define "postgresql.extendedConfigurationCM" -}}
{{- if .Values.extendedConfConfigMap -}}
-{{- printf "%s" .Values.extendedConfConfigMap -}}
+{{- printf "%s" (tpl .Values.extendedConfConfigMap $) -}}
{{- else -}}
{{- printf "%s-extended-configuration" (include "postgresql.fullname" .) -}}
{{- end -}}
@@ -145,8 +242,56 @@ Get the initialization scripts ConfigMap name.
*/}}
{{- define "postgresql.initdbScriptsCM" -}}
{{- if .Values.initdbScriptsConfigMap -}}
-{{- printf "%s" .Values.initdbScriptsConfigMap -}}
+{{- printf "%s" (tpl .Values.initdbScriptsConfigMap $) -}}
{{- else -}}
{{- printf "%s-init-scripts" (include "postgresql.fullname" .) -}}
{{- end -}}
{{- end -}}
+
+{{/*
+Get the initialization scripts Secret name.
+*/}}
+{{- define "postgresql.initdbScriptsSecret" -}}
+{{- printf "%s" (tpl .Values.initdbScriptsSecret $) -}}
+{{- end -}}
+
+{{/*
+Return the proper Docker Image Registry Secret Names
+*/}}
+{{- define "postgresql.imagePullSecrets" -}}
+{{/*
+Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
+but Helm 2.9 and 2.10 does not support it, so we need to implement this if-else logic.
+Also, we can not use a single if because lazy evaluation is not an option
+*/}}
+{{- if .Values.global }}
+{{- if .Values.global.imagePullSecrets }}
+imagePullSecrets:
+{{- range .Values.global.imagePullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- else if or .Values.image.pullSecrets .Values.metrics.image.pullSecrets .Values.volumePermissions.image.pullSecrets }}
+imagePullSecrets:
+{{- range .Values.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- range .Values.metrics.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- range .Values.volumePermissions.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- end -}}
+{{- else if or .Values.image.pullSecrets .Values.metrics.image.pullSecrets .Values.volumePermissions.image.pullSecrets }}
+imagePullSecrets:
+{{- range .Values.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- range .Values.metrics.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- range .Values.volumePermissions.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- end -}}
+{{- end -}}
diff --git a/stable/postgresql/templates/networkpolicy.yaml b/stable/postgresql/templates/networkpolicy.yaml
index 40496a763f8d..67cf5a3e6b29 100644
--- a/stable/postgresql/templates/networkpolicy.yaml
+++ b/stable/postgresql/templates/networkpolicy.yaml
@@ -16,12 +16,17 @@ spec:
ingress:
# Allow inbound connections
- ports:
- - port: 5432
+ - port: {{ template "postgresql.port" . }}
{{- if not .Values.networkPolicy.allowExternal }}
from:
- podSelector:
matchLabels:
{{ template "postgresql.fullname" . }}-client: "true"
+ - podSelector:
+ matchLabels:
+ app: {{ template "postgresql.name" . }}
+ release: {{ .Release.Name | quote }}
+ role: slave
{{- end }}
# Allow prometheus scrapes
- ports:
diff --git a/stable/postgresql/templates/secrets.yaml b/stable/postgresql/templates/secrets.yaml
index acc1681415e4..e0bc3b2399c7 100644
--- a/stable/postgresql/templates/secrets.yaml
+++ b/stable/postgresql/templates/secrets.yaml
@@ -1,4 +1,4 @@
-{{- if not .Values.existingSecret }}
+{{- if (include "postgresql.createSecret" .) }}
apiVersion: v1
kind: Secret
metadata:
@@ -10,16 +10,8 @@ metadata:
heritage: {{ .Release.Service | quote }}
type: Opaque
data:
- {{- if .Values.postgresqlPassword }}
- postgresql-password: {{ .Values.postgresqlPassword | b64enc | quote }}
- {{- else }}
- postgresql-password: {{ randAlphaNum 10 | b64enc | quote }}
- {{- end }}
+ postgresql-password: {{ include "postgresql.password" . | b64enc | quote }}
{{- if .Values.replication.enabled }}
- {{- if .Values.replication.password }}
- postgresql-replication-password: {{ .Values.replication.password | b64enc | quote }}
- {{- else }}
- postgresql-replication-password: {{ randAlphaNum 10 | b64enc | quote }}
- {{- end }}
+ postgresql-replication-password: {{ include "postgresql.replication.password" . | b64enc | quote }}
{{- end }}
{{- end -}}
diff --git a/stable/postgresql/templates/serviceaccount.yaml b/stable/postgresql/templates/serviceaccount.yaml
new file mode 100644
index 000000000000..27e5b516efa9
--- /dev/null
+++ b/stable/postgresql/templates/serviceaccount.yaml
@@ -0,0 +1,11 @@
+{{- if and (.Values.serviceAccount.enabled) (not .Values.serviceAccount.name) }}
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ labels:
+ app: {{ template "postgresql.name" . }}
+ chart: {{ template "postgresql.chart" . }}
+ release: {{ .Release.Name | quote }}
+ heritage: {{ .Release.Service | quote }}
+ name: {{ template "postgresql.fullname" . }}
+{{- end }}
\ No newline at end of file
diff --git a/stable/postgresql/templates/statefulset-slaves.yaml b/stable/postgresql/templates/statefulset-slaves.yaml
index 057ed664cfa8..2b035d176f58 100644
--- a/stable/postgresql/templates/statefulset-slaves.yaml
+++ b/stable/postgresql/templates/statefulset-slaves.yaml
@@ -25,18 +25,15 @@ spec:
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
role: slave
+{{- with .Values.slave.podLabels }}
+{{ toYaml . | indent 8 }}
+{{- end }}
+{{- with .Values.slave.podAnnotations }}
+ annotations:
+{{ toYaml . | indent 8 }}
+{{- end }}
spec:
- {{- if .Values.securityContext.enabled }}
- securityContext:
- fsGroup: {{ .Values.securityContext.fsGroup }}
- runAsUser: {{ .Values.securityContext.runAsUser }}
- {{- end }}
- {{- if .Values.image.pullSecrets }}
- imagePullSecrets:
- {{- range .Values.image.pullSecrets }}
- - name: {{ . }}
- {{- end}}
- {{- end }}
+{{- include "postgresql.imagePullSecrets" . | indent 6 }}
{{- if .Values.slave.nodeSelector }}
nodeSelector:
{{ toYaml .Values.slave.nodeSelector | indent 8 }}
@@ -52,6 +49,13 @@ spec:
{{- if .Values.terminationGracePeriodSeconds }}
terminationGracePeriodSeconds: {{ .Values.terminationGracePeriodSeconds }}
{{- end }}
+ {{- if .Values.securityContext.enabled }}
+ securityContext:
+ fsGroup: {{ .Values.securityContext.fsGroup }}
+ {{- end }}
+ {{- if .Values.serviceAccount.enabled }}
+ serviceAccountName: {{ default (include "postgresql.fullname" . ) .Values.serviceAccount.name}}
+ {{- end }}
{{- if and .Values.volumePermissions.enabled .Values.persistence.enabled }}
initContainers:
- name: init-chmod-data
@@ -63,15 +67,17 @@ spec:
- sh
- -c
- |
- chown -R {{ .Values.securityContext.runAsUser }}:{{ .Values.securityContext.fsGroup }} /bitnami
- if [ -d /bitnami/postgresql/data ]; then
- chmod 0700 /bitnami/postgresql/data;
+ find {{ .Values.persistence.mountPath }} -mindepth 1 -maxdepth 1 -not -name ".snapshot" | \
+ xargs chown -R {{ .Values.securityContext.runAsUser }}:{{ .Values.securityContext.fsGroup }}
+ if [ -d {{ .Values.persistence.mountPath }}/data ]; then
+ chmod 0700 {{ .Values.persistence.mountPath }}/data;
fi
securityContext:
runAsUser: {{ .Values.volumePermissions.securityContext.runAsUser }}
volumeMounts:
- name: data
- mountPath: /bitnami/postgresql
+ mountPath: {{ .Values.persistence.mountPath }}
+ subPath: {{ .Values.persistence.subPath }}
{{- end }}
containers:
- name: {{ template "postgresql.fullname" . }}
@@ -79,46 +85,62 @@ spec:
imagePullPolicy: "{{ .Values.image.pullPolicy }}"
resources:
{{ toYaml .Values.resources | indent 10 }}
+ {{- if .Values.securityContext.enabled }}
+ securityContext:
+ runAsUser: {{ .Values.securityContext.runAsUser }}
+ {{- end }}
env:
- {{- if .Values.image.debug}}
- - name: BASH_DEBUG
- value: "1"
- - name: NAMI_DEBUG
- value: "1"
+ - name: BITNAMI_DEBUG
+ value: {{ ternary "true" "false" .Values.image.debug | quote }}
+ - name: POSTGRESQL_PORT_NUMBER
+ value: "{{ template "postgresql.port" . }}"
+ {{- if .Values.persistence.mountPath }}
+ - name: PGDATA
+ value: {{ .Values.postgresqlDataDir | quote }}
{{- end }}
- - name: POSTGRESQL_REPLICATION_MODE
+ - name: POSTGRES_REPLICATION_MODE
value: "slave"
- - name: POSTGRESQL_REPLICATION_USER
- value: {{ .Values.replication.user | quote }}
+ - name: POSTGRES_REPLICATION_USER
+ value: {{ include "postgresql.replication.username" . | quote }}
{{- if .Values.usePasswordFile }}
- - name: POSTGRESQL_REPLICATION_PASSWORD_FILE
+ - name: POSTGRES_REPLICATION_PASSWORD_FILE
value: "/opt/bitnami/postgresql/secrets/postgresql-replication-password"
{{- else }}
- - name: POSTGRESQL_REPLICATION_PASSWORD
+ - name: POSTGRES_REPLICATION_PASSWORD
valueFrom:
secretKeyRef:
name: {{ template "postgresql.secretName" . }}
key: postgresql-replication-password
{{- end }}
- - name: POSTGRESQL_CLUSTER_APP_NAME
+ - name: POSTGRES_CLUSTER_APP_NAME
value: {{ .Values.replication.applicationName }}
- - name: POSTGRESQL_MASTER_HOST
+ - name: POSTGRES_MASTER_HOST
value: {{ template "postgresql.fullname" . }}
- - name: POSTGRESQL_MASTER_PORT_NUMBER
- value: {{ .Values.service.port | quote }}
+ - name: POSTGRES_MASTER_PORT_NUMBER
+ value: {{ include "postgresql.port" . | quote }}
+ {{- if .Values.usePasswordFile }}
+ - name: POSTGRES_PASSWORD_FILE
+ value: "/opt/bitnami/postgresql/secrets/postgresql-password"
+ {{- else }}
+ - name: POSTGRES_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ name: {{ template "postgresql.secretName" . }}
+ key: postgresql-password
+ {{- end }}
ports:
- name: postgresql
- containerPort: {{ .Values.service.port }}
+ containerPort: {{ template "postgresql.port" . }}
{{- if .Values.livenessProbe.enabled }}
livenessProbe:
exec:
command:
- sh
- -c
- {{- if .Values.postgresqlDatabase }}
- - exec pg_isready -U {{ .Values.postgresqlUsername | quote }} -d {{ .Values.postgresqlDatabase | quote }} -h localhost
+ {{- if (include "postgresql.database" .) }}
+ - exec pg_isready -U {{ include "postgresql.username" . | quote }} -d {{ (include "postgresql.database" .) | quote }} -h 127.0.0.1 -p {{ template "postgresql.port" . }}
{{- else }}
- - exec pg_isready -U {{ .Values.postgresqlUsername | quote }} -h localhost
+ - exec pg_isready -U {{ include "postgresql.username" . | quote }} -h 127.0.0.1 -p {{ template "postgresql.port" . }}
{{- end }}
initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.livenessProbe.periodSeconds }}
@@ -132,10 +154,10 @@ spec:
command:
- sh
- -c
- {{- if .Values.postgresqlDatabase }}
- - exec pg_isready -U {{ .Values.postgresqlUsername | quote }} -d {{ .Values.postgresqlDatabase | quote }} -h localhost
+ {{- if (include "postgresql.database" .) }}
+ - exec pg_isready -U {{ include "postgresql.username" . | quote }} -d {{ (include "postgresql.database" .) | quote }} -h 127.0.0.1 -p {{ template "postgresql.port" . }}
{{- else }}
- - exec pg_isready -U {{ .Values.postgresqlUsername | quote }} -h localhost
+ - exec pg_isready -U {{ include "postgresql.username" . | quote }} -h 127.0.0.1 -p {{ template "postgresql.port" . }}
{{- end }}
initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.readinessProbe.periodSeconds }}
@@ -146,13 +168,14 @@ spec:
volumeMounts:
{{- if .Values.usePasswordFile }}
- name: postgresql-password
- mountPath: /opt/bitnami/postgresql/secrets
- {{ end }}
+ mountPath: /opt/bitnami/postgresql/secrets/
+ {{- end }}
{{- if .Values.persistence.enabled }}
- name: data
mountPath: {{ .Values.persistence.mountPath }}
+ subPath: {{ .Values.persistence.subPath }}
{{ end }}
- {{- if or (.Files.Glob "files/conf.d/*.conf") .Values.extendedConfConfigMap }}
+ {{- if or (.Files.Glob "files/conf.d/*.conf") .Values.postgresqlExtendedConf .Values.extendedConfConfigMap }}
- name: postgresql-extended-config
mountPath: /bitnami/postgresql/conf/conf.d/
{{- end }}
@@ -165,13 +188,13 @@ spec:
- name: postgresql-password
secret:
secretName: {{ template "postgresql.secretName" . }}
- {{ end }}
+ {{- end }}
{{- if or (.Files.Glob "files/postgresql.conf") (.Files.Glob "files/pg_hba.conf") .Values.postgresqlConfiguration .Values.pgHbaConfiguration .Values.configurationConfigMap}}
- name: postgresql-config
configMap:
name: {{ template "postgresql.configurationCM" . }}
{{- end }}
- {{- if or (.Files.Glob "files/conf.d/*.conf") .Values.extendedConfConfigMap }}
+ {{- if or (.Files.Glob "files/conf.d/*.conf") .Values.postgresqlExtendedConf .Values.extendedConfConfigMap }}
- name: postgresql-extended-config
configMap:
name: {{ template "postgresql.extendedConfigurationCM" . }}
@@ -182,6 +205,9 @@ spec:
{{- end }}
updateStrategy:
type: {{ .Values.updateStrategy.type }}
+ {{- if (eq "Recreate" .Values.updateStrategy.type) }}
+ rollingUpdate: null
+ {{- end }}
{{- if .Values.persistence.enabled }}
volumeClaimTemplates:
- metadata:
diff --git a/stable/postgresql/templates/statefulset.yaml b/stable/postgresql/templates/statefulset.yaml
index d85826fc9940..bdc0ba974f70 100644
--- a/stable/postgresql/templates/statefulset.yaml
+++ b/stable/postgresql/templates/statefulset.yaml
@@ -12,6 +12,9 @@ spec:
replicas: 1
updateStrategy:
type: {{ .Values.updateStrategy.type }}
+ {{- if (eq "Recreate" .Values.updateStrategy.type) }}
+ rollingUpdate: null
+ {{- end }}
selector:
matchLabels:
app: {{ template "postgresql.name" . }}
@@ -26,21 +29,15 @@ spec:
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
role: master
+{{- with .Values.master.podLabels }}
+{{ toYaml . | indent 8 }}
+{{- end }}
+{{- with .Values.master.podAnnotations }}
+ annotations:
+{{ toYaml . | indent 8 }}
+{{- end }}
spec:
- {{- if .Values.securityContext.enabled }}
- securityContext:
- fsGroup: {{ .Values.securityContext.fsGroup }}
- runAsUser: {{ .Values.securityContext.runAsUser }}
- {{- end }}
- {{- if or .Values.image.pullSecrets .Values.metrics.image.pullSecrets }}
- imagePullSecrets:
- {{- range .Values.image.pullSecrets }}
- - name: {{ . }}
- {{- end}}
- {{- range .Values.metrics.image.pullSecrets }}
- - name: {{ . }}
- {{- end}}
- {{- end }}
+{{- include "postgresql.imagePullSecrets" . | indent 6 }}
{{- if .Values.master.nodeSelector }}
nodeSelector:
{{ toYaml .Values.master.nodeSelector | indent 8 }}
@@ -56,6 +53,13 @@ spec:
{{- if .Values.terminationGracePeriodSeconds }}
terminationGracePeriodSeconds: {{ .Values.terminationGracePeriodSeconds }}
{{- end }}
+ {{- if .Values.securityContext.enabled }}
+ securityContext:
+ fsGroup: {{ .Values.securityContext.fsGroup }}
+ {{- end }}
+ {{- if .Values.serviceAccount.enabled }}
+ serviceAccountName: {{ default (include "postgresql.fullname" . ) .Values.serviceAccount.name }}
+ {{- end }}
{{- if and .Values.volumePermissions.enabled .Values.persistence.enabled }}
initContainers:
- name: init-chmod-data
@@ -67,15 +71,17 @@ spec:
- sh
- -c
- |
- chown -R {{ .Values.securityContext.runAsUser }}:{{ .Values.securityContext.fsGroup }} /bitnami
- if [ -d /bitnami/postgresql/data ]; then
- chmod 0700 /bitnami/postgresql/data;
+ find {{ .Values.persistence.mountPath }} -mindepth 1 -maxdepth 1 -not -name ".snapshot" | \
+ xargs chown -R {{ .Values.securityContext.runAsUser }}:{{ .Values.securityContext.fsGroup }}
+ if [ -d {{ .Values.persistence.mountPath }}/data ]; then
+ chmod 0700 {{ .Values.persistence.mountPath }}/data;
fi
securityContext:
runAsUser: {{ .Values.volumePermissions.securityContext.runAsUser }}
volumeMounts:
- name: data
- mountPath: /bitnami/postgresql
+ mountPath: {{ .Values.persistence.mountPath }}
+ subPath: {{ .Values.persistence.subPath }}
{{- end }}
containers:
- name: {{ template "postgresql.fullname" . }}
@@ -83,69 +89,83 @@ spec:
imagePullPolicy: "{{ .Values.image.pullPolicy }}"
resources:
{{ toYaml .Values.resources | indent 10 }}
+ {{- if .Values.securityContext.enabled }}
+ securityContext:
+ runAsUser: {{ .Values.securityContext.runAsUser }}
+ {{- end }}
env:
- {{- if .Values.image.debug}}
- - name: BASH_DEBUG
- value: "1"
- - name: NAMI_DEBUG
- value: "1"
+ - name: BITNAMI_DEBUG
+ value: {{ ternary "true" "false" .Values.image.debug | quote }}
+ - name: POSTGRESQL_PORT_NUMBER
+ value: "{{ template "postgresql.port" . }}"
+ {{- if .Values.postgresqlInitdbArgs }}
+ - name: POSTGRES_INITDB_ARGS
+ value: {{ .Values.postgresqlInitdbArgs | quote }}
+ {{- end }}
+ {{- if .Values.postgresqlInitdbWalDir }}
+ - name: POSTGRES_INITDB_WALDIR
+ value: {{ .Values.postgresqlInitdbWalDir | quote }}
+ {{- end }}
+ {{- if .Values.persistence.mountPath }}
+ - name: PGDATA
+ value: {{ .Values.postgresqlDataDir | quote }}
{{- end }}
{{- if .Values.replication.enabled }}
- - name: POSTGRESQL_REPLICATION_MODE
+ - name: POSTGRES_REPLICATION_MODE
value: "master"
- - name: POSTGRESQL_REPLICATION_USER
- value: {{ .Values.replication.user | quote }}
+ - name: POSTGRES_REPLICATION_USER
+ value: {{ include "postgresql.replication.username" . | quote }}
{{- if .Values.usePasswordFile }}
- - name: POSTGRESQL_REPLICATION_PASSWORD_FILE
+ - name: POSTGRES_REPLICATION_PASSWORD_FILE
value: "/opt/bitnami/postgresql/secrets/postgresql-replication-password"
{{- else }}
- - name: POSTGRESQL_REPLICATION_PASSWORD
+ - name: POSTGRES_REPLICATION_PASSWORD
valueFrom:
secretKeyRef:
name: {{ template "postgresql.secretName" . }}
key: postgresql-replication-password
{{- end }}
{{- if not (eq .Values.replication.synchronousCommit "off")}}
- - name: POSTGRESQL_SYNCHRONOUS_COMMIT_MODE
+ - name: POSTGRES_SYNCHRONOUS_COMMIT_MODE
value: {{ .Values.replication.synchronousCommit | quote }}
- - name: POSTGRESQL_NUM_SYNCHRONOUS_REPLICAS
+ - name: POSTGRES_NUM_SYNCHRONOUS_REPLICAS
value: {{ .Values.replication.numSynchronousReplicas | quote }}
{{- end }}
- - name: POSTGRESQL_CLUSTER_APP_NAME
+ - name: POSTGRES_CLUSTER_APP_NAME
value: {{ .Values.replication.applicationName }}
{{- end }}
- - name: POSTGRESQL_USERNAME
- value: {{ .Values.postgresqlUsername | quote }}
+ - name: POSTGRES_USER
+ value: {{ include "postgresql.username" . | quote }}
{{- if .Values.usePasswordFile }}
- - name: POSTGRESQL_PASSWORD_FILE
+ - name: POSTGRES_PASSWORD_FILE
value: "/opt/bitnami/postgresql/secrets/postgresql-password"
{{- else }}
- - name: POSTGRESQL_PASSWORD
+ - name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: {{ template "postgresql.secretName" . }}
key: postgresql-password
{{- end }}
- {{- if .Values.postgresqlDatabase }}
- - name: POSTGRESQL_DATABASE
- value: {{ .Values.postgresqlDatabase | quote }}
+ {{- if (include "postgresql.database" .) }}
+ - name: POSTGRES_DB
+ value: {{ (include "postgresql.database" .) | quote }}
{{- end }}
{{- if .Values.extraEnv }}
{{ toYaml .Values.extraEnv | indent 8 }}
{{- end }}
ports:
- name: postgresql
- containerPort: {{ .Values.service.port }}
+ containerPort: {{ template "postgresql.port" . }}
{{- if .Values.livenessProbe.enabled }}
livenessProbe:
exec:
command:
- sh
- -c
- {{- if .Values.postgresqlDatabase }}
- - exec pg_isready -U {{ .Values.postgresqlUsername | quote }} -d {{ .Values.postgresqlDatabase | quote }} -h localhost
+ {{- if (include "postgresql.database" .) }}
+ - exec pg_isready -U {{ include "postgresql.username" . | quote }} -d {{ (include "postgresql.database" .) | quote }} -h 127.0.0.1 -p {{ template "postgresql.port" . }}
{{- else }}
- - exec pg_isready -U {{ .Values.postgresqlUsername | quote }} -h localhost
+ - exec pg_isready -U {{ include "postgresql.username" . | quote }} -h 127.0.0.1 -p {{ template "postgresql.port" . }}
{{- end }}
initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.livenessProbe.periodSeconds }}
@@ -159,10 +179,10 @@ spec:
command:
- sh
- -c
- {{- if .Values.postgresqlDatabase }}
- - exec pg_isready -U {{ .Values.postgresqlUsername | quote }} -d {{ .Values.postgresqlDatabase | quote }} -h localhost
+ {{- if (include "postgresql.database" .) }}
+ - exec pg_isready -U {{ include "postgresql.username" . | quote }} -d {{ (include "postgresql.database" .) | quote }} -h 127.0.0.1 -p {{ template "postgresql.port" . }}
{{- else }}
- - exec pg_isready -U {{ .Values.postgresqlUsername | quote }} -h localhost
+ - exec pg_isready -U {{ include "postgresql.username" . | quote }} -h 127.0.0.1 -p {{ template "postgresql.port" . }}
{{- end }}
initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.readinessProbe.periodSeconds }}
@@ -173,7 +193,11 @@ spec:
volumeMounts:
{{- if or (.Files.Glob "files/docker-entrypoint-initdb.d/*.{sh,sql,sql.gz}") .Values.initdbScriptsConfigMap .Values.initdbScripts }}
- name: custom-init-scripts
- mountPath: /docker-entrypoint-initdb.d
+ mountPath: /docker-entrypoint-initdb.d/
+ {{- end }}
+ {{- if .Values.initdbScriptsSecret }}
+ - name: custom-init-scripts-secret
+ mountPath: /docker-entrypoint-initdb.d/secret
{{- end }}
{{- if or (.Files.Glob "files/conf.d/*.conf") .Values.postgresqlExtendedConf .Values.extendedConfConfigMap }}
- name: postgresql-extended-config
@@ -186,6 +210,7 @@ spec:
{{- if .Values.persistence.enabled }}
- name: data
mountPath: {{ .Values.persistence.mountPath }}
+ subPath: {{ .Values.persistence.subPath }}
{{- end }}
{{- if or (.Files.Glob "files/postgresql.conf") (.Files.Glob "files/pg_hba.conf") .Values.postgresqlConfiguration .Values.pgHbaConfiguration .Values.configurationConfigMap }}
- name: postgresql-config
@@ -193,12 +218,16 @@ spec:
{{- end }}
{{- if .Values.metrics.enabled }}
- name: metrics
- image: {{ template "metrics.image" . }}
+ image: {{ template "postgresql.metrics.image" . }}
imagePullPolicy: {{ .Values.metrics.image.pullPolicy | quote }}
+ {{- if .Values.metrics.securityContext.enabled }}
+ securityContext:
+ runAsUser: {{ .Values.metrics.securityContext.runAsUser }}
+ {{- end }}
env:
- {{- $database := required "In order to enable metrics you need to specify a database (.Values.postgresqlDatabase)" .Values.postgresqlDatabase }}
+ {{- $database := required "In order to enable metrics you need to specify a database (.Values.postgresqlDatabase or .Values.global.postgresql.postgresqlDatabase)" (include "postgresql.database" .) }}
- name: DATA_SOURCE_URI
- value: {{ printf "localhost:%d/%s?sslmode=disable" (int .Values.service.port) $database | quote }}
+ value: {{ printf "127.0.0.1:%d/%s?sslmode=disable" (int (include "postgresql.port" .)) $database | quote }}
{{- if .Values.usePasswordFile }}
- name: DATA_SOURCE_PASS_FILE
value: "/opt/bitnami/postgresql/secrets/postgresql-password"
@@ -210,7 +239,7 @@ spec:
key: postgresql-password
{{- end }}
- name: DATA_SOURCE_USER
- value: {{ .Values.postgresqlUsername }}
+ value: {{ template "postgresql.username" . }}
{{- if .Values.livenessProbe.enabled }}
livenessProbe:
httpGet:
@@ -265,10 +294,17 @@ spec:
configMap:
name: {{ template "postgresql.initdbScriptsCM" . }}
{{- end }}
+ {{- if .Values.initdbScriptsSecret }}
+ - name: custom-init-scripts-secret
+ secret:
+ secretName: {{ template "postgresql.initdbScriptsSecret" . }}
+ {{- end }}
{{- if and .Values.persistence.enabled .Values.persistence.existingClaim }}
- name: data
persistentVolumeClaim:
- claimName: {{ .Values.persistence.existingClaim }}
+{{- with .Values.persistence.existingClaim }}
+ claimName: {{ tpl . $ }}
+{{- end }}
{{- else if not .Values.persistence.enabled }}
- name: data
emptyDir: {}
diff --git a/stable/postgresql/templates/svc-headless.yaml b/stable/postgresql/templates/svc-headless.yaml
index 9414d609a3ed..8198a187b6bb 100644
--- a/stable/postgresql/templates/svc-headless.yaml
+++ b/stable/postgresql/templates/svc-headless.yaml
@@ -12,7 +12,7 @@ spec:
clusterIP: None
ports:
- name: postgresql
- port: 5432
+ port: {{ template "postgresql.port" . }}
targetPort: postgresql
selector:
app: {{ template "postgresql.name" . }}
diff --git a/stable/postgresql/templates/svc-read.yaml b/stable/postgresql/templates/svc-read.yaml
index 6b2de778ab0b..d5f468f3db4e 100644
--- a/stable/postgresql/templates/svc-read.yaml
+++ b/stable/postgresql/templates/svc-read.yaml
@@ -19,7 +19,7 @@ spec:
{{- end }}
ports:
- name: postgresql
- port: {{ .Values.service.port }}
+ port: {{ template "postgresql.port" . }}
targetPort: postgresql
{{- if .Values.service.nodePort }}
nodePort: {{ .Values.service.nodePort }}
diff --git a/stable/postgresql/templates/svc.yaml b/stable/postgresql/templates/svc.yaml
index 31b9b08d50a9..c8a7887f5489 100644
--- a/stable/postgresql/templates/svc.yaml
+++ b/stable/postgresql/templates/svc.yaml
@@ -15,13 +15,19 @@ spec:
type: {{ .Values.service.type }}
{{- if and .Values.service.loadBalancerIP (eq .Values.service.type "LoadBalancer") }}
loadBalancerIP: {{ .Values.service.loadBalancerIP }}
+ {{- end }}
+ {{- if and (eq .Values.service.type "LoadBalancer") .Values.service.loadBalancerSourceRanges }}
+ loadBalancerSourceRanges:
+  {{- with .Values.service.loadBalancerSourceRanges }}
+{{ toYaml . | indent 4 }}
+{{- end }}
{{- end }}
{{- if and (eq .Values.service.type "ClusterIP") .Values.service.clusterIP }}
clusterIP: {{ .Values.service.clusterIP }}
{{- end }}
ports:
- name: postgresql
- port: {{ .Values.service.port }}
+ port: {{ template "postgresql.port" . }}
targetPort: postgresql
{{- if .Values.service.nodePort }}
nodePort: {{ .Values.service.nodePort }}
diff --git a/stable/postgresql/values-production.yaml b/stable/postgresql/values-production.yaml
index f53542fb3e89..d75d6a8142b4 100644
--- a/stable/postgresql/values-production.yaml
+++ b/stable/postgresql/values-production.yaml
@@ -1,8 +1,12 @@
-## Global Docker image registry
-### Please, note that this will override the image registry for all the images, including dependencies, configured to use the global value
-###
-## global:
-## imageRegistry:
+## Global Docker image parameters
+## Please note that this will override the image parameters, including dependencies, configured to use the global value
+## Current available global Docker image parameters: imageRegistry and imagePullSecrets
+##
+global:
+ postgresql: {}
+# imageRegistry: myRegistryName
+# imagePullSecrets:
+# - myRegistryKeySecretName
## Bitnami PostgreSQL image version
## ref: https://hub.docker.com/r/bitnami/postgresql/tags/
@@ -10,19 +14,18 @@
image:
registry: docker.io
repository: bitnami/postgresql
- tag: 10.6.0
+ tag: 11.3.0
## Specify a imagePullPolicy
## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
##
pullPolicy: Always
-
## Optionally specify an array of imagePullSecrets.
## Secrets must be manually created in the namespace.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
- # - myRegistrKeySecretName
+ # - myRegistryKeySecretName
## Set to true if you would like to see extra information on logs
## It turns BASH and NAMI debugging in minideb
@@ -44,6 +47,12 @@ volumePermissions:
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
##
pullPolicy: Always
+ ## Optionally specify an array of imagePullSecrets.
+ ## Secrets must be manually created in the namespace.
+ ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
+ ##
+ # pullSecrets:
+ # - myRegistryKeySecretName
## Init container Security Context
securityContext:
runAsUser: 0
@@ -56,6 +65,13 @@ securityContext:
fsGroup: 1001
runAsUser: 1001
+## Pod Service Account
+## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
+serviceAccount:
+ enabled: false
+ ## Name of an already existing service account. Setting this value disables the automatic service account creation.
+ # name:
+
replication:
enabled: true
user: repl_user
@@ -84,6 +100,21 @@ postgresqlUsername: postgres
##
# postgresqlDatabase:
+## PostgreSQL data dir
+## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md
+##
+postgresqlDataDir: /bitnami/postgresql/data
+
+## Specify extra initdb args
+## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md
+##
+# postgresqlInitdbArgs:
+
+## Specify a custom location for the PostgreSQL transaction log
+## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md
+##
+# postgresqlInitdbWalDir:
+
## PostgreSQL password using existing secret
## existingSecret: secret
@@ -122,11 +153,11 @@ postgresqlUsername: postgres
# extendedConfConfigMap:
## initdb scripts
-## Specify dictionnary of scripts to be run at first boot
+## Specify dictionary of scripts to be run at first boot
## Alternatively, you can put your scripts under the files/docker-entrypoint-initdb.d directory
##
# initdbScripts:
-# my_init_script.sh:|
+# my_init_script.sh: |
# #!/bin/sh
# echo "Do something."
@@ -134,6 +165,10 @@ postgresqlUsername: postgres
## NOTE: This will override initdbScripts
# initdbScriptsConfigMap:
+## Secret with scripts to be run at first boot (in case it contains sensitive information)
+## NOTE: This can work alongside initdbScripts or initdbScriptsConfigMap
+# initdbScriptsSecret:
+
## PostgreSQL service configuration
service:
## PosgresSQL service type
@@ -152,6 +187,12 @@ service:
##
# loadBalancerIP:
+ ## Load Balancer sources
+ ## https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
+ ##
+ # loadBalancerSourceRanges:
+ # - 10.10.10.0/24
+
## PostgreSQL data Persistent Volume Storage Class
## If defined, storageClassName:
## If set to "-", storageClassName: "", which disables dynamic provisioning
@@ -163,8 +204,20 @@ persistence:
enabled: true
## A manually managed Persistent Volume and Claim
## If defined, PVC must be created manually before volume will be bound
+ ## The value is evaluated as a template, so, for example, the name can depend on .Release or .Chart
+ ##
# existingClaim:
+
+ ## The path the volume will be mounted at, useful when using different
+ ## PostgreSQL images.
+ ##
mountPath: /bitnami/postgresql
+
+ ## The subdirectory of the volume to mount to, useful in dev environments
+ ## and when sharing one PV across multiple services.
+ ##
+ subPath: ""
+
# storageClass: "-"
accessModes:
- ReadWriteOnce
@@ -187,6 +240,8 @@ master:
nodeSelector: {}
affinity: {}
tolerations: []
+ podLabels: {}
+ podAnnotations: {}
##
## PostgreSQL Slave parameters
@@ -199,6 +254,8 @@ slave:
nodeSelector: {}
affinity: {}
tolerations: []
+ podLabels: {}
+ podAnnotations: {}
## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
@@ -252,14 +309,20 @@ metrics:
image:
registry: docker.io
repository: wrouesnel/postgres_exporter
- tag: v0.4.6
+ tag: v0.4.7
pullPolicy: IfNotPresent
## Optionally specify an array of imagePullSecrets.
## Secrets must be manually created in the namespace.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
- # - myRegistrKeySecretName
+ # - myRegistryKeySecretName
+ ## Pod Security Context
+ ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
+ ##
+ securityContext:
+ enabled: false
+ runAsUser: 1001
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes)
## Configure extra options for liveness and readiness probes
diff --git a/stable/postgresql/values.yaml b/stable/postgresql/values.yaml
index e25704a56a36..5b0fdae03f2a 100644
--- a/stable/postgresql/values.yaml
+++ b/stable/postgresql/values.yaml
@@ -1,8 +1,12 @@
-## Global Docker image registry
-### Please, note that this will override the image registry for all the images, including dependencies, configured to use the global value
-###
-## global:
-## imageRegistry:
+## Global Docker image parameters
+## Please note that this will override the image parameters, including dependencies, configured to use the global value
+## Current available global Docker image parameters: imageRegistry and imagePullSecrets
+##
+global:
+ postgresql: {}
+# imageRegistry: myRegistryName
+# imagePullSecrets:
+# - myRegistryKeySecretName
## Bitnami PostgreSQL image version
## ref: https://hub.docker.com/r/bitnami/postgresql/tags/
@@ -10,19 +14,18 @@
image:
registry: docker.io
repository: bitnami/postgresql
- tag: 10.6.0
+ tag: 11.3.0
## Specify a imagePullPolicy
## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
##
pullPolicy: Always
-
## Optionally specify an array of imagePullSecrets.
## Secrets must be manually created in the namespace.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
- # - myRegistrKeySecretName
+ # - myRegistryKeySecretName
## Set to true if you would like to see extra information on logs
## It turns BASH and NAMI debugging in minideb
@@ -44,6 +47,12 @@ volumePermissions:
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
##
pullPolicy: Always
+ ## Optionally specify an array of imagePullSecrets.
+ ## Secrets must be manually created in the namespace.
+ ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
+ ##
+ # pullSecrets:
+ # - myRegistryKeySecretName
## Init container Security Context
securityContext:
runAsUser: 0
@@ -56,6 +65,13 @@ securityContext:
fsGroup: 1001
runAsUser: 1001
+## Pod Service Account
+## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
+serviceAccount:
+ enabled: false
+ ## Name of an already existing service account. Setting this value disables the automatic service account creation.
+ # name:
+
replication:
enabled: false
user: repl_user
@@ -90,6 +106,22 @@ postgresqlUsername: postgres
##
# postgresqlDatabase:
+## PostgreSQL data dir
+## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md
+##
+postgresqlDataDir: /bitnami/postgresql/data
+
+## Specify extra initdb args
+## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md
+##
+# postgresqlInitdbArgs:
+
+## Specify a custom location for the PostgreSQL transaction log
+## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md
+##
+# postgresqlInitdbWalDir:
+
+
## PostgreSQL configuration
## Specify runtime configuration parameters as a dict, using camelCase, e.g.
## {"sharedBuffers": "500MB"}
@@ -122,11 +154,11 @@ postgresqlUsername: postgres
# extendedConfConfigMap:
## initdb scripts
-## Specify dictionnary of scripts to be run at first boot
+## Specify dictionary of scripts to be run at first boot
## Alternatively, you can put your scripts under the files/docker-entrypoint-initdb.d directory
##
# initdbScripts:
-# my_init_script.sh:|
+# my_init_script.sh: |
# #!/bin/sh
# echo "Do something."
#
@@ -134,6 +166,10 @@ postgresqlUsername: postgres
## NOTE: This will override initdbScripts
# initdbScriptsConfigMap:
+## Secret with scripts to be run at first boot (in case it contains sensitive information)
+## NOTE: This can work alongside initdbScripts or initdbScriptsConfigMap
+# initdbScriptsSecret:
+
## Optional duration in seconds the pod needs to terminate gracefully.
## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods
##
@@ -158,6 +194,12 @@ service:
##
# loadBalancerIP:
+ ## Load Balancer sources
+ ## https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
+ ##
+ # loadBalancerSourceRanges:
+ # - 10.10.10.0/24
+
## PostgreSQL data Persistent Volume Storage Class
## If defined, storageClassName:
## If set to "-", storageClassName: "", which disables dynamic provisioning
@@ -169,8 +211,20 @@ persistence:
enabled: true
## A manually managed Persistent Volume and Claim
## If defined, PVC must be created manually before volume will be bound
+ ## The value is evaluated as a template, so, for example, the name can depend on .Release or .Chart
+ ##
# existingClaim:
+
+ ## The path the volume will be mounted at, useful when using different
+ ## PostgreSQL images.
+ ##
mountPath: /bitnami/postgresql
+
+ ## The subdirectory of the volume to mount to, useful in dev environments
+ ## and when sharing one PV across multiple services.
+ ##
+ subPath: ""
+
# storageClass: "-"
accessModes:
- ReadWriteOnce
@@ -193,6 +247,8 @@ master:
nodeSelector: {}
affinity: {}
tolerations: []
+ podLabels: {}
+ podAnnotations: {}
##
## PostgreSQL Slave parameters
@@ -205,6 +261,8 @@ slave:
nodeSelector: {}
affinity: {}
tolerations: []
+ podLabels: {}
+ podAnnotations: {}
## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
@@ -258,15 +316,20 @@ metrics:
image:
registry: docker.io
repository: wrouesnel/postgres_exporter
- tag: v0.4.6
+ tag: v0.4.7
pullPolicy: IfNotPresent
## Optionally specify an array of imagePullSecrets.
## Secrets must be manually created in the namespace.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
- # - myRegistrKeySecretName
-
+ # - myRegistryKeySecretName
+ ## Pod Security Context
+ ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
+ ##
+ securityContext:
+ enabled: false
+ runAsUser: 1001
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes)
## Configure extra options for liveness and readiness probes
livenessProbe:
diff --git a/stable/prestashop/Chart.yaml b/stable/prestashop/Chart.yaml
index 213d1d41f22b..b0c34c569c10 100644
--- a/stable/prestashop/Chart.yaml
+++ b/stable/prestashop/Chart.yaml
@@ -1,6 +1,7 @@
+apiVersion: v1
name: prestashop
-version: 6.1.2
-appVersion: 1.7.5-0
+version: 6.5.1
+appVersion: 1.7.5-2
description: A popular open source ecommerce solution. Professional tools are easily accessible to increase online sales including instant guest checkout, abandoned cart reminders and automated Email marketing.
keywords:
- prestashop
diff --git a/stable/prestashop/README.md b/stable/prestashop/README.md
index f867893eb77f..c10ea8090386 100644
--- a/stable/prestashop/README.md
+++ b/stable/prestashop/README.md
@@ -14,7 +14,7 @@ This chart bootstraps a [PrestaShop](https://github.com/bitnami/bitnami-docker-p
It also packages the [Bitnami MariaDB chart](https://github.com/kubernetes/charts/tree/master/stable/mariadb) which is required for bootstrapping a MariaDB deployment for the database requirements of the PrestaShop application.
-Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters.
+Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters. This chart has been tested to work with NGINX Ingress, cert-manager, fluentd and Prometheus on top of the [BKPR](https://kubeprod.io/).
## Prerequisites
@@ -50,6 +50,7 @@ The following table lists the configurable parameters of the PrestaShop chart an
| Parameter | Description | Default |
|---------------------------------------|----------------------------------------------------------------------------------------------|--------------------------------------------------------------|
| `global.imageRegistry` | Global Docker image registry | `nil` |
+| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) |
| `image.registry` | PrestaShop image registry | `docker.io` |
| `image.repository` | PrestaShop image name | `bitnami/prestashop` |
| `image.tag` | PrestaShop image tag | `{VERSION}` |
diff --git a/stable/prestashop/templates/NOTES.txt b/stable/prestashop/templates/NOTES.txt
index d8e2205c5b17..fd08a3f2d2a8 100644
--- a/stable/prestashop/templates/NOTES.txt
+++ b/stable/prestashop/templates/NOTES.txt
@@ -32,14 +32,14 @@ host. To configure PrestaShop with the URL of your service:
{{- if .Values.mariadb.enabled }}
- helm upgrade {{ .Release.Name }} stable/prestashop \
- --set prestashopHost=$APP_HOST,prestashopPassword=$APP_PASSWORD{{ if .Values.mariadb.mariadbRootPassword }},mariadb.mariadbRootPassword=$DATABASE_ROOT_PASSWORD{{ end }},mariadb.db.password=$APP_DATABASE_PASSWORD
+ helm upgrade {{ .Release.Name }} stable/{{ .Chart.Name }} \
+ --set prestashopHost=$APP_HOST,prestashopPassword=$APP_PASSWORD{{ if .Values.mariadb.mariadbRootPassword }},mariadb.mariadbRootPassword=$DATABASE_ROOT_PASSWORD{{ end }},mariadb.db.password=$APP_DATABASE_PASSWORD{{- if .Values.global }}{{- if .Values.global.imagePullSecrets }},global.imagePullSecrets={{ .Values.global.imagePullSecrets }}{{- end }}{{- end }}
{{- else }}
## PLEASE UPDATE THE EXTERNAL DATABASE CONNECTION PARAMETERS IN THE FOLLOWING COMMAND AS NEEDED ##
- helm upgrade {{ .Release.Name }} stable/prestashop \
- --set prestashopPassword=$APP_PASSWORD,prestashopHost=$APP_HOST,service.type={{ .Values.service.type }},mariadb.enabled=false{{- if not (empty .Values.externalDatabase.host) }},externalDatabase.host={{ .Values.externalDatabase.host }}{{- end }}{{- if not (empty .Values.externalDatabase.user) }},externalDatabase.user={{ .Values.externalDatabase.user }}{{- end }}{{- if not (empty .Values.externalDatabase.password) }},externalDatabase.password={{ .Values.externalDatabase.password }}{{- end }}{{- if not (empty .Values.externalDatabase.database) }},externalDatabase.database={{ .Values.externalDatabase.database }}{{- end }}
+ helm upgrade {{ .Release.Name }} stable/{{ .Chart.Name }} \
+ --set prestashopPassword=$APP_PASSWORD,prestashopHost=$APP_HOST,service.type={{ .Values.service.type }},mariadb.enabled=false{{- if not (empty .Values.externalDatabase.host) }},externalDatabase.host={{ .Values.externalDatabase.host }}{{- end }}{{- if not (empty .Values.externalDatabase.user) }},externalDatabase.user={{ .Values.externalDatabase.user }}{{- end }}{{- if not (empty .Values.externalDatabase.password) }},externalDatabase.password={{ .Values.externalDatabase.password }}{{- end }}{{- if not (empty .Values.externalDatabase.database) }},externalDatabase.database={{ .Values.externalDatabase.database }}{{- end }}{{- if .Values.global }}{{- if .Values.global.imagePullSecrets }},global.imagePullSecrets={{ .Values.global.imagePullSecrets }}{{- end }}{{- end }}
{{- end }}
{{- else -}}
@@ -95,6 +95,6 @@ host. To configure PrestaShop to use and external database host:
## PLEASE UPDATE THE EXTERNAL DATABASE CONNECTION PARAMETERS IN THE FOLLOWING COMMAND AS NEEDED ##
- helm upgrade {{ .Release.Name }} stable/prestashop \
- --set prestashopPassword=$APP_PASSWORD,prestashopHost=$APP_HOST,service.type={{ .Values.service.type }},mariadb.enabled=false{{- if not (empty .Values.externalDatabase.user) }},externalDatabase.user={{ .Values.externalDatabase.user }}{{- end }}{{- if not (empty .Values.externalDatabase.password) }},externalDatabase.password={{ .Values.externalDatabase.password }}{{- end }}{{- if not (empty .Values.externalDatabase.database) }},externalDatabase.database={{ .Values.externalDatabase.database }}{{- end }},externalDatabase.host=YOUR_EXTERNAL_DATABASE_HOST
+ helm upgrade {{ .Release.Name }} stable/{{ .Chart.Name }} \
+ --set prestashopPassword=$APP_PASSWORD,prestashopHost=$APP_HOST,service.type={{ .Values.service.type }},mariadb.enabled=false{{- if not (empty .Values.externalDatabase.user) }},externalDatabase.user={{ .Values.externalDatabase.user }}{{- end }}{{- if not (empty .Values.externalDatabase.password) }},externalDatabase.password={{ .Values.externalDatabase.password }}{{- end }}{{- if not (empty .Values.externalDatabase.database) }},externalDatabase.database={{ .Values.externalDatabase.database }}{{- end }},externalDatabase.host=YOUR_EXTERNAL_DATABASE_HOST{{- if .Values.global }}{{- if .Values.global.imagePullSecrets }},global.imagePullSecrets={{ .Values.global.imagePullSecrets }}{{- end }}{{- end }}
{{- end }}
diff --git a/stable/prestashop/templates/_helpers.tpl b/stable/prestashop/templates/_helpers.tpl
index eb7e5d52619a..86a67622f4bd 100644
--- a/stable/prestashop/templates/_helpers.tpl
+++ b/stable/prestashop/templates/_helpers.tpl
@@ -83,9 +83,57 @@ Also, we can't use a single if because lazy evaluation is not an option
{{/*
Return the proper image name (for the metrics image)
*/}}
-{{- define "metrics.image" -}}
-{{- $registryName := .Values.metrics.image.registry -}}
+{{- define "prestashop.metrics.image" -}}
+{{- $registryName := .Values.metrics.image.registry -}}
{{- $repositoryName := .Values.metrics.image.repository -}}
{{- $tag := .Values.metrics.image.tag | toString -}}
-{{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+{{/*
+Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
+but Helm 2.9 and 2.10 don't support it, so we need to implement this if-else logic.
+Also, we can't use a single if because lazy evaluation is not an option
+*/}}
+{{- if .Values.global }}
+ {{- if .Values.global.imageRegistry }}
+ {{- printf "%s/%s:%s" .Values.global.imageRegistry $repositoryName $tag -}}
+ {{- else -}}
+ {{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+ {{- end -}}
+{{- else -}}
+ {{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Return the proper Docker Image Registry Secret Names
+*/}}
+{{- define "prestashop.imagePullSecrets" -}}
+{{/*
+Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
+but Helm 2.9 and 2.10 do not support it, so we need to implement this if-else logic.
+Also, we cannot use a single if because lazy evaluation is not an option
+*/}}
+{{- if .Values.global }}
+{{- if .Values.global.imagePullSecrets }}
+imagePullSecrets:
+{{- range .Values.global.imagePullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- else if or .Values.image.pullSecrets .Values.metrics.image.pullSecrets }}
+imagePullSecrets:
+{{- range .Values.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- range .Values.metrics.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- end -}}
+{{- else if or .Values.image.pullSecrets .Values.metrics.image.pullSecrets }}
+imagePullSecrets:
+{{- range .Values.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- range .Values.metrics.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- end -}}
{{- end -}}
diff --git a/stable/prestashop/templates/deployment.yaml b/stable/prestashop/templates/deployment.yaml
index 35fe83716b1c..6c60ebd45f6e 100644
--- a/stable/prestashop/templates/deployment.yaml
+++ b/stable/prestashop/templates/deployment.yaml
@@ -29,12 +29,7 @@ spec:
{{- end }}
{{- end }}
spec:
- {{- if .Values.image.pullSecrets }}
- imagePullSecrets:
- {{- range .Values.image.pullSecrets }}
- - name: {{ . }}
- {{- end}}
- {{- end }}
+{{- include "prestashop.imagePullSecrets" . | indent 6 }}
hostAliases:
- ip: "127.0.0.1"
hostnames:
@@ -170,7 +165,7 @@ spec:
subPath: prestashop
{{- if .Values.metrics.enabled }}
- name: metrics
- image: {{ template "metrics.image" . }}
+ image: {{ template "prestashop.metrics.image" . }}
imagePullPolicy: {{ .Values.metrics.image.pullPolicy | quote }}
command: [ '/bin/apache_exporter', '-scrape_uri', 'http://status.localhost:80/server-status/?auto']
ports:
diff --git a/stable/prestashop/templates/secrets.yaml b/stable/prestashop/templates/secrets.yaml
index ed231f3c7ff1..6cbf304522ff 100644
--- a/stable/prestashop/templates/secrets.yaml
+++ b/stable/prestashop/templates/secrets.yaml
@@ -7,6 +7,8 @@ metadata:
chart: "{{ template "prestashop.chart" . }}"
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
+ annotations:
+ "helm.sh/hook": pre-install
type: Opaque
data:
{{- if .Values.prestashopPassword }}
diff --git a/stable/prestashop/values.yaml b/stable/prestashop/values.yaml
index e4d04a8e17fc..7a1edeff002d 100644
--- a/stable/prestashop/values.yaml
+++ b/stable/prestashop/values.yaml
@@ -1,8 +1,11 @@
-## Global Docker image registry
-## Please, note that this will override the image registry for all the images, including dependencies, configured to use the global value
+## Global Docker image parameters
+## Please note that this will override the image parameters for all images, including dependencies, configured to use the global value
+## Current available global Docker image parameters: imageRegistry and imagePullSecrets
##
# global:
-# imageRegistry:
+# imageRegistry: myRegistryName
+# imagePullSecrets:
+# - myRegistryKeySecretName
## Bitnami PrestaShop image version
## ref: https://hub.docker.com/r/bitnami/prestashop/tags/
@@ -10,7 +13,7 @@
image:
registry: docker.io
repository: bitnami/prestashop
- tag: 1.7.5-0
+ tag: 1.7.5-2
## Specify a imagePullPolicy
## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
@@ -21,7 +24,7 @@ image:
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
- # - myRegistrKeySecretName
+ # - myRegistryKeySecretName
## PrestaShop host to create application URLs
## ref: https://github.com/bitnami/bitnami-docker-prestashop#configuration
@@ -285,7 +288,7 @@ metrics:
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
- # - myRegistrKeySecretName
+ # - myRegistryKeySecretName
## Metrics exporter pod Annotation and Labels
podAnnotations:
prometheus.io/scrape: "true"
diff --git a/stable/presto/Chart.yaml b/stable/presto/Chart.yaml
index 998a18b496d7..cd7eacce8dc4 100644
--- a/stable/presto/Chart.yaml
+++ b/stable/presto/Chart.yaml
@@ -2,7 +2,7 @@ apiVersion: v1
appVersion: 0.196
description: Distributed SQL query engine for running interactive analytic queries
name: presto
-version: 0.1
+version: 0.1.1
home: https://prestodb.io
icon: https://prestodb.io/static/presto.png
sources:
diff --git a/stable/prisma/Chart.yaml b/stable/prisma/Chart.yaml
index 08457a43b6b2..f79b7f86bb85 100644
--- a/stable/prisma/Chart.yaml
+++ b/stable/prisma/Chart.yaml
@@ -1,6 +1,7 @@
+apiVersion: v1
name: prisma
-version: 1.1.0
-appVersion: 1.20
+version: 1.2.1
+appVersion: 1.29.1
kubeVersion: "^1.8.0-0"
description: Prisma turns your database into a realtime GraphQL API
home: https://www.prisma.io/
diff --git a/stable/prisma/README.md b/stable/prisma/README.md
index 7ef1fb73f452..404b66e62b2a 100644
--- a/stable/prisma/README.md
+++ b/stable/prisma/README.md
@@ -45,7 +45,7 @@ Parameter | Description
`serviceAccount.create` | If true, create a service account for Prisma | `true`
`serviceAccount.name` | Name of the service account to create or use | `{{ prisma.fullname }}`
`image.repository` | Prisma image repository | `prismagraphql/prisma`
-`image.tag` | Prisma image tag | `1.20-heroku`
+`image.tag` | Prisma image tag | `1.29.1-heroku`
`image.pullPolicy` | Image pull policy | `IfNotPresent`
`database.connector` | Database connector | `postgres`
`database.host` | Host for the database endpoint | `""`
diff --git a/stable/prisma/values.yaml b/stable/prisma/values.yaml
index 897bc55dc44d..ef157717e227 100644
--- a/stable/prisma/values.yaml
+++ b/stable/prisma/values.yaml
@@ -19,7 +19,7 @@ image:
## Prisma image version
##
- tag: 1.20-heroku
+ tag: 1.29.1-heroku
## Specify an imagePullPolicy
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
diff --git a/stable/prometheus-adapter/Chart.yaml b/stable/prometheus-adapter/Chart.yaml
index d693f4120102..5c9a44701053 100644
--- a/stable/prometheus-adapter/Chart.yaml
+++ b/stable/prometheus-adapter/Chart.yaml
@@ -1,10 +1,12 @@
+apiVersion: v1
name: prometheus-adapter
-version: v0.4.1
-appVersion: v0.4.1
+version: 1.0.2
+appVersion: v0.5.0
description: A Helm chart for k8s prometheus adapter
home: https://github.com/DirectXMan12/k8s-prometheus-adapter
keywords:
- hpa
+ - metrics
- prometheus
- adapter
sources:
@@ -13,3 +15,4 @@ sources:
maintainers:
- name: mattiasgees
email: mattias.gees@jetstack.io
+ - name: steven-sheehy
diff --git a/stable/prometheus-adapter/OWNERS b/stable/prometheus-adapter/OWNERS
new file mode 100644
index 000000000000..02ecb622e59b
--- /dev/null
+++ b/stable/prometheus-adapter/OWNERS
@@ -0,0 +1,6 @@
+approvers:
+ - mattiasgees
+ - steven-sheehy
+reviewers:
+ - mattiasgees
+ - steven-sheehy
diff --git a/stable/prometheus-adapter/README.md b/stable/prometheus-adapter/README.md
index 4a79102c0b3e..fa0fa0eb3712 100644
--- a/stable/prometheus-adapter/README.md
+++ b/stable/prometheus-adapter/README.md
@@ -4,7 +4,7 @@ Installs the [Prometheus Adapter](https://github.com/DirectXMan12/k8s-prometheus
## Prerequisites
-Kubernetes 1.9+
+Kubernetes 1.11+
## Installing the Chart
@@ -38,19 +38,20 @@ The following table lists the configurable parameters of the Prometheus Adapter
| ------------------------------- | ------------------------------------------------------------------------------- | --------------------------------------------|
| `affinity` | Node affinity | `{}` |
| `image.repository` | Image repository | `directxman12/k8s-prometheus-adapter-amd64` |
-| `image.tag` | Image tag | `v0.4.1` |
+| `image.tag` | Image tag | `v0.5.0` |
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `image.pullSecrets` | Image pull secrets | `{}` |
| `logLevel` | Log level | `4` |
| `metricsRelistInterval` | Interval at which to re-list the set of all available metrics from Prometheus | `1m` |
| `nodeSelector` | Node labels for pod assignment | `{}` |
| `prometheus.url` | Url of where we can find the Prometheus service | `http://prometheus.default.svc` |
-| `prometheus.port` | Port of where we can find the Prometheus service | `9090` |
+| `prometheus.port`               | Port of the Prometheus service; set to zero to omit the port                  | `9090`                                      |
| `rbac.create` | If true, create & use RBAC resources | `true` |
| `resources` | CPU/Memory resource requests/limits | `{}` |
| `rules.default` | If `true`, enable a set of default rules in the configmap | `true` |
| `rules.custom` | A list of custom configmap rules | `[]` |
-| `rules.existing` | The name of an existing configMap with rules. Overrides default and custom. | `` |
+| `rules.existing` | The name of an existing configMap with rules. Overrides default, custom and external. | `` |
+| `rules.external` | A list of custom rules for external metrics API | `[]` |
| `service.annotations` | Annotations to add to the service | `{}` |
| `service.port` | Service port to expose | `443` |
| `service.type` | Type of service to create | `ClusterIP` |
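
The zero-port behavior added above is useful when the port is already baked into the URL. A hedged values override (the service name and namespace are illustrative):

```yaml
prometheus:
  # The URL already carries the port, so port: 0 drops the ":<port>" suffix
  # from the generated --prometheus-url flag
  url: http://prometheus.monitoring.svc:9090
  port: 0
```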
diff --git a/stable/prometheus-adapter/templates/custom-metrics-apiserver-deployment.yaml b/stable/prometheus-adapter/templates/custom-metrics-apiserver-deployment.yaml
index c9e46c290140..bc55c4b711e9 100644
--- a/stable/prometheus-adapter/templates/custom-metrics-apiserver-deployment.yaml
+++ b/stable/prometheus-adapter/templates/custom-metrics-apiserver-deployment.yaml
@@ -24,12 +24,6 @@ spec:
annotations:
checksum/config: {{ include (print $.Template.BasePath "/custom-metrics-configmap.yaml") . | sha256sum }}
spec:
- securityContext:
- allowPrivilegeEscalation: false
- capabilities:
- drop: ["all"]
- runAsNonRoot: true
- runAsUser: 10001
serviceAccountName: {{ template "k8s-prometheus-adapter.serviceAccountName" . }}
containers:
- name: {{ .Chart.Name }}
@@ -44,7 +38,7 @@ spec:
{{- end }}
- --cert-dir=/tmp/cert
- --logtostderr=true
- - --prometheus-url={{ .Values.prometheus.url }}:{{ .Values.prometheus.port }}
+ - --prometheus-url={{ .Values.prometheus.url }}{{ if .Values.prometheus.port }}:{{ .Values.prometheus.port }}{{end}}
- --metrics-relist-interval={{ .Values.metricsRelistInterval }}
- --v={{ .Values.logLevel }}
- --config=/etc/adapter/config.yaml
@@ -68,7 +62,12 @@ spec:
{{ toYaml .Values.resources | indent 10 }}
{{- end }}
securityContext:
+ allowPrivilegeEscalation: false
+ capabilities:
+ drop: ["all"]
readOnlyRootFilesystem: true
+ runAsNonRoot: true
+ runAsUser: 10001
volumeMounts:
- mountPath: /etc/adapter/
name: config
diff --git a/stable/prometheus-adapter/templates/custom-metrics-configmap.yaml b/stable/prometheus-adapter/templates/custom-metrics-configmap.yaml
index e30aefd8f5c8..2d7f019f3da7 100644
--- a/stable/prometheus-adapter/templates/custom-metrics-configmap.yaml
+++ b/stable/prometheus-adapter/templates/custom-metrics-configmap.yaml
@@ -82,4 +82,8 @@ data:
{{- if .Values.rules.custom }}
{{ toYaml .Values.rules.custom | indent 4 }}
{{- end -}}
-{{- end -}}
\ No newline at end of file
+{{- if .Values.rules.external }}
+ externalRules:
+{{ toYaml .Values.rules.external | indent 4 }}
+{{- end -}}
+{{- end -}}
diff --git a/stable/prometheus-adapter/templates/external-metrics-apiservice.yaml b/stable/prometheus-adapter/templates/external-metrics-apiservice.yaml
new file mode 100644
index 000000000000..13de1c4bc60c
--- /dev/null
+++ b/stable/prometheus-adapter/templates/external-metrics-apiservice.yaml
@@ -0,0 +1,21 @@
+apiVersion: apiregistration.k8s.io/v1beta1
+kind: APIService
+metadata:
+ labels:
+ app: {{ template "k8s-prometheus-adapter.name" . }}
+ chart: {{ template "k8s-prometheus-adapter.chart" . }}
+ release: {{ .Release.Name }}
+ heritage: {{ .Release.Service }}
+ name: v1beta1.external.metrics.k8s.io
+spec:
+ service:
+ name: {{ template "k8s-prometheus-adapter.fullname" . }}
+ namespace: {{ .Release.Namespace | quote }}
+ {{ if .Values.tls.enable -}}
+ caBundle: {{ b64enc .Values.tls.ca }}
+ {{- end }}
+ group: external.metrics.k8s.io
+ version: v1beta1
+ insecureSkipTLSVerify: {{ if .Values.tls.enable }}false{{ else }}true{{ end }}
+ groupPriorityMinimum: 100
+ versionPriority: 100
diff --git a/stable/prometheus-adapter/templates/external-metrics-cluster-role.yaml b/stable/prometheus-adapter/templates/external-metrics-cluster-role.yaml
new file mode 100644
index 000000000000..daa1d03a05a3
--- /dev/null
+++ b/stable/prometheus-adapter/templates/external-metrics-cluster-role.yaml
@@ -0,0 +1,20 @@
+{{- if .Values.rbac.create -}}
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ labels:
+ app: {{ template "k8s-prometheus-adapter.name" . }}
+ chart: {{ template "k8s-prometheus-adapter.chart" . }}
+ release: {{ .Release.Name }}
+ heritage: {{ .Release.Service }}
+ name: {{ template "k8s-prometheus-adapter.name" . }}-external-metrics
+rules:
+- apiGroups:
+ - "external.metrics.k8s.io"
+ resources:
+ - "*"
+ verbs:
+ - list
+ - get
+ - watch
+{{- end -}}
diff --git a/stable/prometheus-adapter/templates/hpa-external-metrics-cluster-role-binding.yaml b/stable/prometheus-adapter/templates/hpa-external-metrics-cluster-role-binding.yaml
new file mode 100644
index 000000000000..f1d54f8b84f8
--- /dev/null
+++ b/stable/prometheus-adapter/templates/hpa-external-metrics-cluster-role-binding.yaml
@@ -0,0 +1,20 @@
+
+{{- if .Values.rbac.create -}}
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+ labels:
+ app: {{ template "k8s-prometheus-adapter.name" . }}
+ chart: {{ template "k8s-prometheus-adapter.chart" . }}
+ release: {{ .Release.Name }}
+ heritage: {{ .Release.Service }}
+ name: {{ template "k8s-prometheus-adapter.name" . }}-hpa-controller-external-metrics
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: {{ template "k8s-prometheus-adapter.name" . }}-external-metrics
+subjects:
+- kind: ServiceAccount
+ name: horizontal-pod-autoscaler
+ namespace: kube-system
+{{- end -}}
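
With the new `v1beta1.external.metrics.k8s.io` APIService and the HPA controller role binding in place, a HorizontalPodAutoscaler can consume external metrics served by the adapter. A minimal sketch (the metric and deployment names are hypothetical):

```yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 1
  maxReplicas: 10
  metrics:
    # External metrics come from rules.external entries in the adapter config
    - type: External
      external:
        metricName: my_custom_metric
        targetValue: 100
```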
diff --git a/stable/prometheus-adapter/values.yaml b/stable/prometheus-adapter/values.yaml
index 070bc462d28c..28fc9f1469ed 100644
--- a/stable/prometheus-adapter/values.yaml
+++ b/stable/prometheus-adapter/values.yaml
@@ -3,7 +3,7 @@ affinity: {}
image:
repository: directxman12/k8s-prometheus-adapter-amd64
- tag: v0.4.1
+ tag: v0.5.0
pullPolicy: IfNotPresent
logLevel: 4
@@ -49,8 +49,16 @@ rules:
# as: "my_custom_metric"
# metricsQuery: sum(<<.Series>>{<<.LabelMatchers>>}) by (<<.GroupBy>>)
# Mounts a configMap with pre-generated rules for use. Overrides the
- # default and custom entries
+ # default, custom and external entries
existing:
+ external: []
+# - seriesQuery: '{__name__=~"^some_metric_count$"}'
+# resources:
+# template: <<.Resource>>
+# name:
+# matches: ""
+# as: "my_custom_metric"
+# metricsQuery: sum(<<.Series>>{<<.LabelMatchers>>}) by (<<.GroupBy>>)
service:
annotations: {}
diff --git a/stable/prometheus-blackbox-exporter/Chart.yaml b/stable/prometheus-blackbox-exporter/Chart.yaml
index 279ef1acdb9d..0669aff5f5a9 100644
--- a/stable/prometheus-blackbox-exporter/Chart.yaml
+++ b/stable/prometheus-blackbox-exporter/Chart.yaml
@@ -1,8 +1,8 @@
apiVersion: v1
description: Prometheus Blackbox Exporter
name: prometheus-blackbox-exporter
-version: 0.2.0
-appVersion: 0.12.0
+version: 0.3.0
+appVersion: 0.14.0
home: https://github.com/prometheus/blackbox_exporter
sources:
- https://github.com/prometheus/blackbox_exporter
diff --git a/stable/prometheus-blackbox-exporter/README.md b/stable/prometheus-blackbox-exporter/README.md
index b30e4e6fd397..7948c3096976 100644
--- a/stable/prometheus-blackbox-exporter/README.md
+++ b/stable/prometheus-blackbox-exporter/README.md
@@ -41,35 +41,36 @@ The command removes all the Kubernetes components associated with the chart and
The following table lists the configurable parameters of the Blackbox-Exporter chart and their default values.
-| Parameter | Description | Default |
-| -------------------------------------- | ----------------------------------------------- | ----------------------------- |
-| `config` | Prometheus blackbox configuration | {} |
-| `configmapReload.name` | configmap-reload container name | `configmap-reload` |
-| `configmapReload.image.repository` | configmap-reload container image repository | `jimmidyson/configmap-reload` |
-| `configmapReload.image.tag` | configmap-reload container image tag | `v0.2.2` |
-| `configmapReload.image.pullPolicy` | configmap-reload container image pull policy | `IfNotPresent` |
-| `configmapReload.extraArgs` | Additional configmap-reload container arguments | `{}` |
-| `configmapReload.extraConfigmapMounts` | Additional configmap-reload configMap mounts | `[]` |
-| `configmapReload.resources` | configmap-reload pod resource requests & limits | `{}` |
-| `extraArgs` | Optional flags for blackbox | `[]` |
-| `image.repository` | container image repository | `prom/blackbox-exporter` |
-| `image.tag` | container image tag | `v0.12.0` |
-| `image.pullPolicy` | container image pull policy | `IfNotPresent` |
-| `ingress.annotations` | Ingress annotations | None |
-| `ingress.enabled` | Enables Ingress | `false` |
-| `ingress.hosts` | Ingress accepted hostnames | None |
-| `ingress.tls` | Ingress TLS configuration | None |
-| `nodeSelector` | node labels for pod assignment | `{}` |
-| `tolerations` | node tolerations for pod assignment | `[]` |
-| `affinity` | node affinity for pod assignment | `{}` |
-| `podAnnotations` | annotations to add to each pod | `{}` |
-| `resources` | pod resource requests & limits | `{}` |
-| `restartPolicy` | container restart policy | `Always` |
-| `service.annotations` | annotations for the service | `{}` |
-| `service.labels` | additional labels for the service | None |
-| `service.type` | type of service to create | `ClusterIP` |
-| `service.port` | port for the blackbox http service | `9115` |
-| `service.externalIPs` | list of external ips | [] |
+| Parameter | Description | Default |
+| -------------------------------------- | ------------------------------------------------- | ----------------------------- |
+| `config` | Prometheus blackbox configuration | {} |
+| `secretConfig` | Whether to treat blackbox configuration as secret | `false` |
+| `configmapReload.name` | configmap-reload container name | `configmap-reload` |
+| `configmapReload.image.repository` | configmap-reload container image repository | `jimmidyson/configmap-reload` |
+| `configmapReload.image.tag` | configmap-reload container image tag | `v0.2.2` |
+| `configmapReload.image.pullPolicy` | configmap-reload container image pull policy | `IfNotPresent` |
+| `configmapReload.extraArgs` | Additional configmap-reload container arguments | `{}` |
+| `configmapReload.extraConfigmapMounts` | Additional configmap-reload configMap mounts | `[]` |
+| `configmapReload.resources` | configmap-reload pod resource requests & limits | `{}` |
+| `extraArgs` | Optional flags for blackbox | `[]` |
+| `image.repository` | container image repository | `prom/blackbox-exporter` |
+| `image.tag` | container image tag | `v0.14.0` |
+| `image.pullPolicy` | container image pull policy | `IfNotPresent` |
+| `ingress.annotations` | Ingress annotations | None |
+| `ingress.enabled` | Enables Ingress | `false` |
+| `ingress.hosts` | Ingress accepted hostnames | None |
+| `ingress.tls` | Ingress TLS configuration | None |
+| `nodeSelector` | node labels for pod assignment | `{}` |
+| `tolerations` | node tolerations for pod assignment | `[]` |
+| `affinity` | node affinity for pod assignment | `{}` |
+| `podAnnotations` | annotations to add to each pod | `{}` |
+| `resources` | pod resource requests & limits | `{}` |
+| `restartPolicy` | container restart policy | `Always` |
+| `service.annotations` | annotations for the service | `{}` |
+| `service.labels` | additional labels for the service | None |
+| `service.type` | type of service to create | `ClusterIP` |
+| `service.port` | port for the blackbox http service | `9115` |
+| `service.externalIPs` | list of external ips | [] |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
diff --git a/stable/prometheus-blackbox-exporter/ci/default-values.yaml b/stable/prometheus-blackbox-exporter/ci/default-values.yaml
new file mode 100644
index 000000000000..e69de29bb2d1
diff --git a/stable/prometheus-blackbox-exporter/ci/secret-values.yaml b/stable/prometheus-blackbox-exporter/ci/secret-values.yaml
new file mode 100644
index 000000000000..92664ab04197
--- /dev/null
+++ b/stable/prometheus-blackbox-exporter/ci/secret-values.yaml
@@ -0,0 +1 @@
+secretConfig: true
diff --git a/stable/prometheus-blackbox-exporter/templates/configmap.yaml b/stable/prometheus-blackbox-exporter/templates/configmap.yaml
index 2d57bd6b9ae9..d40a444b3007 100644
--- a/stable/prometheus-blackbox-exporter/templates/configmap.yaml
+++ b/stable/prometheus-blackbox-exporter/templates/configmap.yaml
@@ -1,6 +1,6 @@
{{- if .Values.config }}
apiVersion: v1
-kind: ConfigMap
+kind: {{ if .Values.secretConfig -}} Secret {{- else -}} ConfigMap {{- end }}
metadata:
name: {{ template "prometheus-blackbox-exporter.fullname" . }}
labels:
@@ -8,7 +8,7 @@ metadata:
app: {{ template "prometheus-blackbox-exporter.name" . }}
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
-data:
+{{ if .Values.secretConfig -}} stringData: {{- else -}} data: {{- end }}
blackbox.yaml: |
{{ toYaml .Values.config | indent 4 }}
{{- end }}
diff --git a/stable/prometheus-blackbox-exporter/templates/deployment.yaml b/stable/prometheus-blackbox-exporter/templates/deployment.yaml
index 5115f9e4f599..88d78725286a 100644
--- a/stable/prometheus-blackbox-exporter/templates/deployment.yaml
+++ b/stable/prometheus-blackbox-exporter/templates/deployment.yaml
@@ -88,6 +88,10 @@ spec:
readOnly: true
volumes:
- name: config
+{{- if .Values.secretConfig }}
+ secret:
+ secretName: {{ template "prometheus-blackbox-exporter.fullname" . }}
+{{- else }}
configMap:
name: {{ template "prometheus-blackbox-exporter.fullname" . }}
-
+{{- end }}
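
With `secretConfig: true`, the same `config` block is rendered into a Secret (via `stringData`) instead of a ConfigMap, which matters when probe modules carry credentials. A hedged example values override (the module name and credentials are placeholders):

```yaml
secretConfig: true
config:
  modules:
    http_2xx_basic_auth:
      prober: http
      http:
        basic_auth:
          username: admin
          password: s3cr3t
```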
diff --git a/stable/prometheus-blackbox-exporter/values.yaml b/stable/prometheus-blackbox-exporter/values.yaml
index 6b56a83563fa..46041d03d5c4 100644
--- a/stable/prometheus-blackbox-exporter/values.yaml
+++ b/stable/prometheus-blackbox-exporter/values.yaml
@@ -2,13 +2,14 @@ restartPolicy: Always
image:
repository: prom/blackbox-exporter
- tag: v0.12.0
+ tag: v0.14.0
pullPolicy: IfNotPresent
nodeSelector: {}
tolerations: []
affinity: {}
+secretConfig: false
config:
modules:
http_2xx:
diff --git a/stable/prometheus-cloudwatch-exporter/Chart.yaml b/stable/prometheus-cloudwatch-exporter/Chart.yaml
index 30ac344717c6..f1394ede6d78 100644
--- a/stable/prometheus-cloudwatch-exporter/Chart.yaml
+++ b/stable/prometheus-cloudwatch-exporter/Chart.yaml
@@ -2,7 +2,7 @@ apiVersion: v1
appVersion: "0.5.0"
description: A Helm chart for prometheus cloudwatch-exporter
name: prometheus-cloudwatch-exporter
-version: 0.4.0
+version: 0.4.5
home: https://github.com/prometheus/cloudwatch_exporter
sources:
- https://github.com/prometheus/cloudwatch_exporter
diff --git a/stable/prometheus-cloudwatch-exporter/README.md b/stable/prometheus-cloudwatch-exporter/README.md
index 61234a17c49b..7b42ce745910 100644
--- a/stable/prometheus-cloudwatch-exporter/README.md
+++ b/stable/prometheus-cloudwatch-exporter/README.md
@@ -73,6 +73,17 @@ The following table lists the configurable parameters of the Cloudwatch Exporter
| `affinity` | node/pod affinities | `{}` |
| `livenessProbe` | Liveness probe settings | |
| `readinessProbe` | Readiness probe settings | |
+| `serviceMonitor.enabled` | Use a ServiceMonitor from the Prometheus Operator | `false` |
+| `serviceMonitor.namespace` | Namespace the ServiceMonitor is installed in | |
+| `serviceMonitor.interval` | How frequently Prometheus should scrape | |
+| `serviceMonitor.telemetryPath` | Path to the cloudwatch-exporter telemetry endpoint | |
+| `serviceMonitor.labels` | Labels for the ServiceMonitor passed to Prometheus Operator | `{}` |
+| `serviceMonitor.timeout` | Timeout after which the scrape is ended | |
+| `ingress.enabled` | Enables Ingress | `false` |
+| `ingress.annotations` | Ingress annotations | `{}` |
+| `ingress.labels` | Custom labels | `{}` |
+| `ingress.hosts` | Ingress accepted hostnames | `[]` |
+| `ingress.tls` | Ingress TLS configuration | `[]` |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
diff --git a/stable/prometheus-cloudwatch-exporter/templates/deployment.yaml b/stable/prometheus-cloudwatch-exporter/templates/deployment.yaml
index 9d5eb5e82a41..9e28ee9ba693 100644
--- a/stable/prometheus-cloudwatch-exporter/templates/deployment.yaml
+++ b/stable/prometheus-cloudwatch-exporter/templates/deployment.yaml
@@ -61,13 +61,13 @@ spec:
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- - name: {{ .Values.service.portName }}
+ - name: container-port
containerPort: 9106
protocol: TCP
livenessProbe:
httpGet:
path: /-/healthy
- port: {{ .Values.service.portName }}
+ port: container-port
initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds}}
periodSeconds: {{ .Values.livenessProbe.periodSeconds }}
timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }}
@@ -76,7 +76,7 @@ spec:
readinessProbe:
httpGet:
path: /-/ready
- port: {{ .Values.service.portName }}
+ port: container-port
initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds}}
periodSeconds: {{ .Values.readinessProbe.periodSeconds }}
timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }}
diff --git a/stable/prometheus-cloudwatch-exporter/templates/ingress.yaml b/stable/prometheus-cloudwatch-exporter/templates/ingress.yaml
new file mode 100644
index 000000000000..dd1a312f3ed3
--- /dev/null
+++ b/stable/prometheus-cloudwatch-exporter/templates/ingress.yaml
@@ -0,0 +1,42 @@
+{{- if .Values.ingress.enabled -}}
+{{- $fullName := include "prometheus-cloudwatch-exporter.fullname" . -}}
+{{- $servicePort := .Values.service.port -}}
+{{- $ingressPath := .Values.ingress.path -}}
+apiVersion: extensions/v1beta1
+kind: Ingress
+metadata:
+ name: {{ $fullName }}
+ labels:
+ app: {{ template "prometheus-cloudwatch-exporter.name" . }}
+ chart: {{ template "prometheus-cloudwatch-exporter.chart" . }}
+ release: {{ .Release.Name }}
+ heritage: {{ .Release.Service }}
+{{- if .Values.ingress.labels }}
+{{ toYaml .Values.ingress.labels | indent 4 }}
+{{- end }}
+{{- with .Values.ingress.annotations }}
+ annotations:
+{{ toYaml . | indent 4 }}
+{{- end }}
+spec:
+{{- if .Values.ingress.tls }}
+ tls:
+ {{- range .Values.ingress.tls }}
+ - hosts:
+ {{- range .hosts }}
+ - {{ . | quote }}
+ {{- end }}
+ secretName: {{ .secretName }}
+ {{- end }}
+{{- end }}
+ rules:
+ {{- range .Values.ingress.hosts }}
+ - host: {{ . }}
+ http:
+ paths:
+ - path: {{ $ingressPath }}
+ backend:
+ serviceName: {{ $fullName }}
+ servicePort: {{ $servicePort }}
+ {{- end }}
+{{- end }}
diff --git a/stable/prometheus-cloudwatch-exporter/templates/service.yaml b/stable/prometheus-cloudwatch-exporter/templates/service.yaml
index def2c9e7e45e..3723dd9ed075 100644
--- a/stable/prometheus-cloudwatch-exporter/templates/service.yaml
+++ b/stable/prometheus-cloudwatch-exporter/templates/service.yaml
@@ -16,7 +16,7 @@ spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.port }}
- targetPort: {{ .Values.service.targetPort }}
+ targetPort: container-port
protocol: TCP
name: {{ .Values.service.portName }}
selector:
diff --git a/stable/prometheus-cloudwatch-exporter/templates/servicemonitor.yaml b/stable/prometheus-cloudwatch-exporter/templates/servicemonitor.yaml
new file mode 100644
index 000000000000..bbc449e72cfe
--- /dev/null
+++ b/stable/prometheus-cloudwatch-exporter/templates/servicemonitor.yaml
@@ -0,0 +1,33 @@
+{{- if and ( .Capabilities.APIVersions.Has "monitoring.coreos.com/v1" ) ( .Values.serviceMonitor.enabled ) }}
+apiVersion: monitoring.coreos.com/v1
+kind: ServiceMonitor
+metadata:
+{{- if .Values.serviceMonitor.labels }}
+ labels:
+{{ toYaml .Values.serviceMonitor.labels | indent 4}}
+{{- end }}
+ name: {{ template "prometheus-cloudwatch-exporter.fullname" . }}
+{{- if .Values.serviceMonitor.namespace }}
+ namespace: {{ .Values.serviceMonitor.namespace }}
+{{- end }}
+spec:
+ endpoints:
+ - targetPort: {{ .Values.service.port }}
+{{- if .Values.serviceMonitor.interval }}
+ interval: {{ .Values.serviceMonitor.interval }}
+{{- end }}
+{{- if .Values.serviceMonitor.telemetryPath }}
+ path: {{ .Values.serviceMonitor.telemetryPath }}
+{{- end }}
+{{- if .Values.serviceMonitor.timeout }}
+ scrapeTimeout: {{ .Values.serviceMonitor.timeout }}
+{{- end }}
+ jobLabel: {{ template "prometheus-cloudwatch-exporter.fullname" . }}
+ namespaceSelector:
+ matchNames:
+ - {{ .Release.Namespace }}
+ selector:
+ matchLabels:
+ app: {{ template "prometheus-cloudwatch-exporter.name" . }}
+ release: {{ .Release.Name }}
+{{- end }}
diff --git a/stable/prometheus-cloudwatch-exporter/values.yaml b/stable/prometheus-cloudwatch-exporter/values.yaml
index 6117ea981101..70faf3574e15 100644
--- a/stable/prometheus-cloudwatch-exporter/values.yaml
+++ b/stable/prometheus-cloudwatch-exporter/values.yaml
@@ -11,7 +11,7 @@ image:
service:
type: ClusterIP
- port: 80
+ port: 9106
portName: http
annotations: {}
labels: {}
@@ -106,3 +106,31 @@ readinessProbe:
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 3
+
+serviceMonitor:
+  # When set to true, use a ServiceMonitor to configure scraping
+  enabled: false
+  # Set the namespace the ServiceMonitor should be deployed in
+  # namespace: monitoring
+  # Set how frequently Prometheus should scrape
+  # interval: 30s
+  # Set the path to the cloudwatch-exporter telemetry endpoint
+  # telemetryPath: /metrics
+  # Set labels for the ServiceMonitor; use this to define your scrape label for Prometheus Operator
+  # labels:
+  # Set the timeout for scrapes
+  # timeout: 10s
+
+ingress:
+ enabled: false
+ annotations: {}
+ # kubernetes.io/ingress.class: nginx
+ # kubernetes.io/tls-acme: "true"
+ labels: {}
+ path: /
+ hosts:
+ - chart-example.local
+ tls: []
+ # - secretName: chart-example-tls
+ # hosts:
+ # - chart-example.local
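
For a Prometheus Operator instance to pick up the new ServiceMonitor (note the template requires the `monitoring.coreos.com/v1` API to be present), the ServiceMonitor labels usually have to match the operator's `serviceMonitorSelector`. A hedged override (the `release` label value is illustrative):

```yaml
serviceMonitor:
  enabled: true
  interval: 30s
  labels:
    release: prometheus-operator
```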
diff --git a/stable/prometheus-consul-exporter/Chart.yaml b/stable/prometheus-consul-exporter/Chart.yaml
index 85470043d635..6b2229b45e8f 100644
--- a/stable/prometheus-consul-exporter/Chart.yaml
+++ b/stable/prometheus-consul-exporter/Chart.yaml
@@ -2,7 +2,7 @@ apiVersion: v1
appVersion: "0.4.0"
description: A Helm chart for the Prometheus Consul Exporter
name: prometheus-consul-exporter
-version: 0.1.2
+version: 0.1.3
keywords:
- metrics
- consul
diff --git a/stable/prometheus-consul-exporter/templates/podsecuritypolicy.yaml b/stable/prometheus-consul-exporter/templates/podsecuritypolicy.yaml
index b6b2eb2f8a04..9844714a7ca7 100644
--- a/stable/prometheus-consul-exporter/templates/podsecuritypolicy.yaml
+++ b/stable/prometheus-consul-exporter/templates/podsecuritypolicy.yaml
@@ -10,9 +10,11 @@ metadata:
release: {{ .Release.Name }}
annotations:
seccomp.security.alpha.kubernetes.io/allowedProfileNames: 'docker/default'
- apparmor.security.beta.kubernetes.io/allowedProfileNames: 'runtime/default'
seccomp.security.alpha.kubernetes.io/defaultProfileName: 'docker/default'
+ {{- if .Values.rbac.pspUseAppArmor }}
+ apparmor.security.beta.kubernetes.io/allowedProfileNames: 'runtime/default'
apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default'
+ {{- end }}
spec:
privileged: false
allowPrivilegeEscalation: false
diff --git a/stable/prometheus-consul-exporter/values.yaml b/stable/prometheus-consul-exporter/values.yaml
index 72049c611c2f..e4e87ac85c42 100644
--- a/stable/prometheus-consul-exporter/values.yaml
+++ b/stable/prometheus-consul-exporter/values.yaml
@@ -8,6 +8,7 @@ rbac:
# Specifies whether RBAC resources should be created
create: true
pspEnabled: true
+ pspUseAppArmor: true
serviceAccount:
# Specifies whether a ServiceAccount should be created
create: true
diff --git a/stable/prometheus-mongodb-exporter/.helmignore b/stable/prometheus-mongodb-exporter/.helmignore
new file mode 100644
index 000000000000..50af03172541
--- /dev/null
+++ b/stable/prometheus-mongodb-exporter/.helmignore
@@ -0,0 +1,22 @@
+# Patterns to ignore when building packages.
+# This supports shell glob matching, relative path matching, and
+# negation (prefixed with !). Only one pattern per line.
+.DS_Store
+# Common VCS dirs
+.git/
+.gitignore
+.bzr/
+.bzrignore
+.hg/
+.hgignore
+.svn/
+# Common backup files
+*.swp
+*.bak
+*.tmp
+*~
+# Various IDEs
+.project
+.idea/
+*.tmproj
+.vscode/
diff --git a/stable/prometheus-mongodb-exporter/Chart.yaml b/stable/prometheus-mongodb-exporter/Chart.yaml
new file mode 100644
index 000000000000..14539bafcb4e
--- /dev/null
+++ b/stable/prometheus-mongodb-exporter/Chart.yaml
@@ -0,0 +1,16 @@
+apiVersion: v1
+appVersion: "v0.7.0"
+description: A Prometheus exporter for MongoDB metrics
+home: https://github.com/percona/mongodb_exporter
+keywords:
+- exporter
+- metrics
+- mongodb
+- prometheus
+maintainers:
+- email: ssheehy@firescope.com
+ name: steven-sheehy
+name: prometheus-mongodb-exporter
+sources:
+- https://github.com/percona/mongodb_exporter
+version: 2.1.0
diff --git a/stable/prometheus-mongodb-exporter/OWNERS b/stable/prometheus-mongodb-exporter/OWNERS
new file mode 100644
index 000000000000..deb841ab740d
--- /dev/null
+++ b/stable/prometheus-mongodb-exporter/OWNERS
@@ -0,0 +1,4 @@
+approvers:
+ - steven-sheehy
+reviewers:
+ - steven-sheehy
diff --git a/stable/prometheus-mongodb-exporter/README.md b/stable/prometheus-mongodb-exporter/README.md
new file mode 100644
index 000000000000..59c5d1e5deb2
--- /dev/null
+++ b/stable/prometheus-mongodb-exporter/README.md
@@ -0,0 +1,64 @@
+# Prometheus MongoDB Exporter
+
+Installs the [MongoDB Exporter](https://github.com/percona/mongodb_exporter) for [Prometheus](https://prometheus.io/). The
+MongoDB Exporter collects and exports oplog, replica set, server status, sharding and storage engine metrics.
+
+## Installing the Chart
+
+To install the chart with the release name `my-release`:
+
+```console
+$ helm upgrade --install my-release stable/prometheus-mongodb-exporter
+```
+
+This command deploys the MongoDB Exporter with the default configuration. The [configuration](#configuration) section lists the parameters that can be configured during installation.
+
+## Using the Chart
+
+To use the chart, ensure the `mongodb.uri` is populated with a valid [MongoDB URI](https://docs.mongodb.com/manual/reference/connection-string).
+If the MongoDB server requires authentication, credentials should be included in the connection string as well. The MongoDB Exporter supports
+connecting to a MongoDB replica set member, shard, or standalone instance.
+
+The chart comes with a ServiceMonitor for use with the [Prometheus Operator](https://github.com/helm/charts/tree/master/stable/prometheus-operator).
+If you're not using the Prometheus Operator, you can disable the ServiceMonitor by setting `serviceMonitor.enabled` to `false` and instead
+populate the `podAnnotations` as below:
+
+```yaml
+podAnnotations:
+ prometheus.io/scrape: "true"
+ prometheus.io/port: "metrics"
+```
+
+## Configuration
+
+| Parameter | Description | Default |
+|-----------|-------------|---------|
+| `affinity` | Node/pod affinities | `{}` |
+| `annotations` | Annotations to be added to the deployment | `{}` |
+| `env` | Extra environment variables passed to the pod | `{}` |
+| `extraArgs` | The extra command line arguments to pass to the MongoDB Exporter | See values.yaml |
+| `fullnameOverride` | Override the full chart name | `""` |
+| `image.pullPolicy` | MongoDB Exporter image pull policy | `IfNotPresent` |
+| `image.repository` | MongoDB Exporter image name | `ssheehy/mongodb-exporter` |
+| `image.tag` | MongoDB Exporter image tag | `0.7.0` |
+| `imagePullSecrets` | List of container registry secrets | `[]` |
+| `livenessProbe` | Liveness probe for the exporter container | See values.yaml |
+| `mongodb.uri` | The required [URI](https://docs.mongodb.com/manual/reference/connection-string) to connect to MongoDB | `""` |
+| `nameOverride` | Override the application name | `""` |
+| `nodeSelector` | Node labels for pod assignment | `{}` |
+| `podAnnotations` | Annotations to be added to all pods | `{}` |
+| `port` | The container port to listen on | `9216` |
+| `priorityClassName` | Pod priority class name | `""` |
+| `readinessProbe` | Readiness probe for the exporter container | See values.yaml |
+| `replicas` | Number of replicas in the replica set | `1` |
+| `resources` | Pod resource requests and limits | `{}` |
+| `securityContext` | Security context for the pod | See values.yaml |
+| `serviceMonitor.additionalLabels` | Additional labels to add to the ServiceMonitor | `{}` |
+| `serviceMonitor.enabled` | Set to `true` if using the Prometheus Operator | `true` |
+| `serviceMonitor.interval` | Interval at which metrics should be scraped | `30s` |
+| `serviceMonitor.namespace` | The namespace where the Prometheus Operator is deployed | `""` |
+| `serviceMonitor.scrapeTimeout` | Timeout after which a metrics scrape is ended | `10s` |
+| `tolerations` | List of node taints to tolerate | `[]` |
+
+## Limitations
+
+Connecting to MongoDB via TLS is currently not supported.
+
diff --git a/stable/prometheus-mongodb-exporter/ci/env-values.yaml b/stable/prometheus-mongodb-exporter/ci/env-values.yaml
new file mode 100644
index 000000000000..944e4bb02b31
--- /dev/null
+++ b/stable/prometheus-mongodb-exporter/ci/env-values.yaml
@@ -0,0 +1,10 @@
+---
+# Test adding environment variables
+
+mongodb:
+ uri: mongodb://localhost:9216
+serviceMonitor:
+ enabled: false
+env:
+ TEST_ENV: test
+ MONGODB_URI: mongodb://hostfromenv:9216
diff --git a/stable/prometheus-mongodb-exporter/ci/servicemonitor-disabled-values.yaml b/stable/prometheus-mongodb-exporter/ci/servicemonitor-disabled-values.yaml
new file mode 100644
index 000000000000..4b7812dc9ebc
--- /dev/null
+++ b/stable/prometheus-mongodb-exporter/ci/servicemonitor-disabled-values.yaml
@@ -0,0 +1,4 @@
+mongodb:
+ uri: mongodb://localhost:9216
+serviceMonitor:
+ enabled: false
diff --git a/stable/prometheus-mongodb-exporter/templates/NOTES.txt b/stable/prometheus-mongodb-exporter/templates/NOTES.txt
new file mode 100644
index 000000000000..3fb9a92ac788
--- /dev/null
+++ b/stable/prometheus-mongodb-exporter/templates/NOTES.txt
@@ -0,0 +1,3 @@
+Verify the application is working by running these commands:
+ kubectl port-forward deployment/{{ include "prometheus-mongodb-exporter.fullname" . }} {{ .Values.port }}
+ curl http://127.0.0.1:{{ .Values.port }}/metrics
diff --git a/stable/prometheus-mongodb-exporter/templates/_helpers.tpl b/stable/prometheus-mongodb-exporter/templates/_helpers.tpl
new file mode 100644
index 000000000000..e051b2387ed4
--- /dev/null
+++ b/stable/prometheus-mongodb-exporter/templates/_helpers.tpl
@@ -0,0 +1,32 @@
+{{/* vim: set filetype=mustache: */}}
+{{/*
+Expand the name of the chart.
+*/}}
+{{- define "prometheus-mongodb-exporter.name" -}}
+{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+
+{{/*
+Create a default fully qualified app name.
+We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
+If release name contains chart name it will be used as a full name.
+*/}}
+{{- define "prometheus-mongodb-exporter.fullname" -}}
+{{- if .Values.fullnameOverride -}}
+{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
+{{- else -}}
+{{- $name := default .Chart.Name .Values.nameOverride -}}
+{{- if contains $name .Release.Name -}}
+{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
+{{- else -}}
+{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Create chart name and version as used by the chart label.
+*/}}
+{{- define "prometheus-mongodb-exporter.chart" -}}
+{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
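
For intuition, the `fullname` helper above (explicit override first, then release/chart composition, capped at 63 characters with a trailing dash trimmed) behaves roughly like the following Python sketch. The function name and arguments are illustrative only, not part of the chart:

```python
def fullname(release, chart, name_override=None, fullname_override=None):
    """Rough Python equivalent of the prometheus-mongodb-exporter.fullname helper."""
    def trunc63(s):
        # trunc 63 | trimSuffix "-": cap at 63 chars, then drop a single trailing dash
        s = s[:63]
        return s[:-1] if s.endswith("-") else s

    if fullname_override:
        return trunc63(fullname_override)
    name = name_override or chart
    if name in release:  # mirrors `contains $name .Release.Name`
        return trunc63(release)
    return trunc63(f"{release}-{name}")
```

The 63-character cap matters because the result is used in Kubernetes name fields that are limited by the DNS label length rules.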
diff --git a/stable/prometheus-mongodb-exporter/templates/deployment.yaml b/stable/prometheus-mongodb-exporter/templates/deployment.yaml
new file mode 100644
index 000000000000..a150d005df2c
--- /dev/null
+++ b/stable/prometheus-mongodb-exporter/templates/deployment.yaml
@@ -0,0 +1,69 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: {{ include "prometheus-mongodb-exporter.fullname" . }}
+ labels:
+ app.kubernetes.io/name: {{ include "prometheus-mongodb-exporter.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ helm.sh/chart: {{ include "prometheus-mongodb-exporter.chart" . }}
+ annotations:
+ {{- toYaml .Values.annotations | nindent 4 }}
+spec:
+ replicas: {{ .Values.replicas }}
+ selector:
+ matchLabels:
+ app.kubernetes.io/name: {{ include "prometheus-mongodb-exporter.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ template:
+ metadata:
+ annotations:
+ {{- toYaml .Values.podAnnotations | nindent 8 }}
+ labels:
+ app.kubernetes.io/name: {{ include "prometheus-mongodb-exporter.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ spec:
+ containers:
+ - name: mongodb-exporter
+ env:
+ - name: MONGODB_URI
+ valueFrom:
+ secretKeyRef:
+ name: {{ include "prometheus-mongodb-exporter.fullname" . }}
+ key: mongodb-uri
+ {{- if .Values.env }}
+ {{- range $key, $value := .Values.env }}
+ - name: "{{ $key }}"
+ value: "{{ $value }}"
+ {{- end }}
+ {{- end }}
+ image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
+ imagePullPolicy: {{ .Values.image.pullPolicy }}
+ args:
+ - --web.listen-address={{ printf ":%s" .Values.port }}
+ {{- toYaml .Values.extraArgs | nindent 8 }}
+ ports:
+ - name: metrics
+ containerPort: {{ .Values.port }}
+ protocol: TCP
+ livenessProbe:
+ {{- toYaml .Values.livenessProbe | nindent 10 }}
+ readinessProbe:
+ {{- toYaml .Values.readinessProbe | nindent 10 }}
+ resources:
+ {{- toYaml .Values.resources | nindent 10 }}
+ securityContext:
+ {{- toYaml .Values.securityContext | nindent 10 }}
+ affinity:
+ {{- toYaml .Values.affinity | nindent 8 }}
+ imagePullSecrets:
+ {{- toYaml .Values.imagePullSecrets | nindent 8 }}
+ nodeSelector:
+ {{- toYaml .Values.nodeSelector | nindent 8 }}
+ {{- if .Values.priorityClassName }}
+ priorityClassName: {{ .Values.priorityClassName }}
+ {{- end }}
+ terminationGracePeriodSeconds: 30
+ tolerations:
+ {{- toYaml .Values.tolerations | nindent 8 }}
+
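The `range $key, $value := .Values.env` block above simply expands a map from values into Kubernetes `name`/`value` env entries. A hypothetical Python sketch of that expansion (not chart code):

```python
def render_env(env):
    """Expand a values-style env map into Kubernetes container env entries."""
    # Helm's range iterates map keys in sorted order, so sort here too.
    return [{"name": key, "value": str(value)} for key, value in sorted(env.items())]
```

With the map from `ci/env-values.yaml`, `render_env({"TEST_ENV": "test", "MONGODB_URI": "mongodb://hostfromenv:9216"})` yields the `MONGODB_URI` entry first, since keys are emitted in sorted order.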
diff --git a/stable/prometheus-mongodb-exporter/templates/secret.yaml b/stable/prometheus-mongodb-exporter/templates/secret.yaml
new file mode 100644
index 000000000000..9a7ca8a299f7
--- /dev/null
+++ b/stable/prometheus-mongodb-exporter/templates/secret.yaml
@@ -0,0 +1,12 @@
+apiVersion: v1
+kind: Secret
+metadata:
+ name: {{ include "prometheus-mongodb-exporter.fullname" . }}
+ labels:
+ app.kubernetes.io/name: {{ include "prometheus-mongodb-exporter.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ helm.sh/chart: {{ include "prometheus-mongodb-exporter.chart" . }}
+type: Opaque
+data:
+ mongodb-uri: {{ required "A MongoDB URI is required" .Values.mongodb.uri | b64enc }}
diff --git a/stable/prometheus-mongodb-exporter/templates/servicemonitor.yaml b/stable/prometheus-mongodb-exporter/templates/servicemonitor.yaml
new file mode 100644
index 000000000000..be08fb2b748a
--- /dev/null
+++ b/stable/prometheus-mongodb-exporter/templates/servicemonitor.yaml
@@ -0,0 +1,30 @@
+{{ if .Values.serviceMonitor.enabled }}
+apiVersion: monitoring.coreos.com/v1
+kind: ServiceMonitor
+metadata:
+ name: {{ include "prometheus-mongodb-exporter.fullname" . }}
+ labels:
+ app.kubernetes.io/name: {{ include "prometheus-mongodb-exporter.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ helm.sh/chart: {{ include "prometheus-mongodb-exporter.chart" . }}
+ {{- range $key, $value := .Values.serviceMonitor.additionalLabels }}
+ {{ $key }}: {{ $value | quote }}
+ {{- end }}
+ {{- if .Values.serviceMonitor.namespace }}
+ namespace: {{ .Values.serviceMonitor.namespace }}
+ {{- end }}
+spec:
+ endpoints:
+ - port: metrics
+ interval: {{ .Values.serviceMonitor.interval }}
+ scrapeTimeout: {{ .Values.serviceMonitor.scrapeTimeout }}
+ namespaceSelector:
+ matchNames:
+ - {{ .Release.Namespace }}
+ selector:
+ matchLabels:
+ app.kubernetes.io/name: {{ include "prometheus-mongodb-exporter.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+{{- end }}
+
diff --git a/stable/prometheus-mongodb-exporter/values.yaml b/stable/prometheus-mongodb-exporter/values.yaml
new file mode 100644
index 000000000000..5b58202af2de
--- /dev/null
+++ b/stable/prometheus-mongodb-exporter/values.yaml
@@ -0,0 +1,77 @@
+affinity: {}
+
+annotations: {}
+
+extraArgs:
+- --collect.collection
+- --collect.database
+- --collect.indexusage
+- --collect.topmetrics
+
+fullnameOverride: ""
+
+image:
+ pullPolicy: IfNotPresent
+ repository: ssheehy/mongodb-exporter
+ tag: 0.7.0
+
+imagePullSecrets: []
+
+livenessProbe:
+ httpGet:
+ path: /
+ port: metrics
+ initialDelaySeconds: 10
+
+# [mongodb://][user:pass@]host1[:port1][,host2[:port2],...][/database][?options]
+mongodb:
+ uri:
+
+nameOverride: ""
+
+nodeSelector: {}
+
+podAnnotations: {}
+# prometheus.io/scrape: "true"
+# prometheus.io/port: "metrics"
+
+port: "9216"
+
+priorityClassName: ""
+
+readinessProbe:
+ httpGet:
+ path: /
+ port: metrics
+ initialDelaySeconds: 10
+
+replicas: 1
+
+resources: {}
+# limits:
+# cpu: 250m
+# memory: 192Mi
+# requests:
+# cpu: 100m
+# memory: 128Mi
+
+# Extra environment variables that will be passed into the exporter pod
+env: {}
+
+securityContext:
+ allowPrivilegeEscalation: false
+ capabilities:
+    drop: ["ALL"]
+ readOnlyRootFilesystem: true
+ runAsGroup: 10000
+ runAsNonRoot: true
+ runAsUser: 10000
+
+serviceMonitor:
+ enabled: true
+ interval: 30s
+ scrapeTimeout: 10s
+ namespace:
+ additionalLabels: {}
+
+tolerations: []
diff --git a/stable/prometheus-mysql-exporter/Chart.yaml b/stable/prometheus-mysql-exporter/Chart.yaml
index ccb176626b80..b1d40a0d87c0 100644
--- a/stable/prometheus-mysql-exporter/Chart.yaml
+++ b/stable/prometheus-mysql-exporter/Chart.yaml
@@ -1,11 +1,11 @@
apiVersion: v1
description: A Helm chart for prometheus mysql exporter with cloudsqlproxy
name: prometheus-mysql-exporter
-version: 0.2.1
+version: 0.3.2
home: https://github.com/prometheus/mysqld_exporter
appVersion: v0.11.0
sources:
- https://github.com/prometheus/mysqld_exporter
-mantainers:
+maintainers:
- name: juanchimienti
email: juan.chimienti@gmail.com
diff --git a/stable/prometheus-mysql-exporter/README.md b/stable/prometheus-mysql-exporter/README.md
index 1b0e06f18921..ee8c53a9a48b 100644
--- a/stable/prometheus-mysql-exporter/README.md
+++ b/stable/prometheus-mysql-exporter/README.md
@@ -1,6 +1,6 @@
# Prometheus Mysql Exporter
-* Installs prometheus [mysql exporter](https://github.com/prometheus/mysqld_exporter)
+- Installs prometheus [mysql exporter](https://github.com/prometheus/mysqld_exporter)
## TL;DR;
@@ -17,7 +17,7 @@ This chart bootstraps a prometheus [mysql exporter](http://github.com/prometheus
To install the chart with the release name `my-release`:
```console
-$ helm install --name my-release stable/prometheus-mysql-exporter --set datasource="username:password@(db:3306)/"
+$ helm install --name my-release stable/prometheus-mysql-exporter --set mysql.user="username",mysql.pass="password",mysql.host="example.com",mysql.port="3306"
```
The command deploys a mysql exporter on the Kubernetes cluster in the default configuration. The [configuration](#configuration) section lists the parameters that can be configured during installation.
@@ -36,32 +36,34 @@ The command removes all the Kubernetes components associated with the chart and
The following table lists the configurable parameters of the mysql exporter chart and their default values.
-| Parameter | Description | Default |
-| ---------------------------------------- | -------------------------------------------------------------------------------------------------------------------- | --------------------------------------- |
-| `replicaCount` | Amount of pods for the deployment | `1` |
-| `image.repository` | Image repository | `prom/mysqld-exporter` |
-| `image.tag` | Image tag | `v0.11.0` |
-| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
-| `service.name` | Service name | `mysql-exporter` |
-| `service.type` | Service type | `ClusterIP` |
-| `service.externalport` | The service port | `9104` |
-| `service.internalPort` | The target port of the container | `9104` |
-| `resources` | CPU/Memory resource requests/limits | `{}` |
-| `annotations` | pod annotations for easier discovery | `see values.yaml` |
-| `mysql.db` | MySQL connection db (optional) | `""` |
-| `mysql.host` | MySQL connection host | `localhost` |
-| `mysql.param` | MySQL connection parameters (optional) | `"tcp"` |
-| `mysql.pass` | MySQL connection password | `password` |
-| `mysql.port` | MySQL connection port | `3306` |
-| `mysql.protocol` | MySQL connection protocol (optional) | `""` |
-| `mysql.user` | MySQL connection username | `exporter` |
-| `cloudsqlproxy.enabled` | Flag to enable the connection using Cloud SQL Proxy | `false` |
-| `cloudsqlproxy.image.repo` | Cloud SQL Proxy image repository | `gcr.io/cloudsql-docker/gce-proxy` |
-| `cloudsqlproxy.image.tag` | Cloud SQL Proxy image tag | `1.11` |
-| `cloudsqlproxy.image.pullPolicy` | Cloud SQL Proxy image pull policy | `IfNotPresent` |
-| `cloudsqlproxy.instanceConnectionName` | Google Cloud instance connection name | `project:us-central1:dbname` |
-| `cloudsqlproxy.port` | Cloud SQL Proxy listening port | `3306` |
-| `cloudsqlproxy.credentials` | Cloud SQL Proxy service account credentials | `bogus credential file` |
+| Parameter | Description | Default |
+| -------------------------------------- | --------------------------------------------------- | ---------------------------------- |
+| `replicaCount` | Amount of pods for the deployment | `1` |
+| `image.repository` | Image repository | `prom/mysqld-exporter` |
+| `image.tag` | Image tag | `v0.11.0` |
+| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
+| `service.name` | Service name | `mysql-exporter` |
+| `service.type` | Service type | `ClusterIP` |
+| `service.externalport` | The service port | `9104` |
+| `service.internalPort` | The target port of the container | `9104` |
+| `resources` | CPU/Memory resource requests/limits | `{}` |
+| `annotations` | pod annotations for easier discovery | `see values.yaml` |
+| `collectors` | Collector configuration | `see values.yaml` |
+| `podLabels` | Additional labels to add to each pod | `{}` |
+| `mysql.db` | MySQL connection db (optional) | `""` |
+| `mysql.host` | MySQL connection host | `localhost` |
+| `mysql.param` | MySQL connection parameters (optional) | `"tcp"` |
+| `mysql.pass` | MySQL connection password | `password` |
+| `mysql.port` | MySQL connection port | `3306` |
+| `mysql.protocol` | MySQL connection protocol (optional) | `""` |
+| `mysql.user` | MySQL connection username | `exporter` |
+| `cloudsqlproxy.enabled` | Flag to enable the connection using Cloud SQL Proxy | `false` |
+| `cloudsqlproxy.image.repo` | Cloud SQL Proxy image repository | `gcr.io/cloudsql-docker/gce-proxy` |
+| `cloudsqlproxy.image.tag` | Cloud SQL Proxy image tag | `1.11` |
+| `cloudsqlproxy.image.pullPolicy` | Cloud SQL Proxy image pull policy | `IfNotPresent` |
+| `cloudsqlproxy.instanceConnectionName` | Google Cloud instance connection name | `project:us-central1:dbname` |
+| `cloudsqlproxy.port` | Cloud SQL Proxy listening port | `3306` |
+| `cloudsqlproxy.credentials` | Cloud SQL Proxy service account credentials | `bogus credential file` |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
@@ -77,5 +79,9 @@ Alternatively, a YAML file that specifies the values for the above parameters ca
$ helm install --name my-release -f values.yaml stable/prometheus-mysql-exporter
```
-Documentation for the MySQL Exporter can be found here: (https://github.com/prometheus/mysqld_exporter)
-A mysql params overview can be found here: (https://github.com/go-sql-driver/mysql#dsn-data-source-name)
+Documentation for the MySQL Exporter can be found here: (https://github.com/prometheus/mysqld_exporter)
+A MySQL params overview can be found here: (https://github.com/go-sql-driver/mysql#dsn-data-source-name)
+
+## Collector Flags
+
+Available collector flags can be found in the [values.yaml](https://github.com/helm/charts/blob/master/stable/prometheus-mysql-exporter/values.yaml) and a description of each flag can be found in the [mysqld_exporter](https://github.com/prometheus/mysqld_exporter#collector-flags) repository.
diff --git a/stable/prometheus-mysql-exporter/templates/deployment.yaml b/stable/prometheus-mysql-exporter/templates/deployment.yaml
index 9cd1bc74c2b1..4461bd99d330 100644
--- a/stable/prometheus-mysql-exporter/templates/deployment.yaml
+++ b/stable/prometheus-mysql-exporter/templates/deployment.yaml
@@ -18,6 +18,9 @@ spec:
labels:
app: {{ template "prometheus-mysql-exporter.name" . }}
release: {{ .Release.Name }}
+{{- if .Values.podLabels }}
+{{ toYaml .Values.podLabels | trim | indent 8 }}
+{{- end }}
annotations:
{{- if .Values.cloudsqlproxy.enabled }}
checksum/config: {{ include (print .Template.BasePath "/secret.yaml") . | sha256sum }}
@@ -32,6 +35,19 @@ spec:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
+{{- with .Values.collectors }}
+ args: [
+{{- range $index, $element := . }}
+{{- if and (typeIs "bool" $element) $element }}
+{{ printf "--collect.%s" $index | quote | indent 12 }},
+{{- else if and (typeIs "bool" $element) (not $element) }}
+{{ printf "--no-collect.%s" $index | quote | indent 12 }},
+{{- else }}
+{{ printf "--collect.%s" $index | quote | indent 12 }}, {{ $element | quote }}
+{{- end }}
+{{- end }}
+ ]
+{{- end }}
env:
- name: DATA_SOURCE_NAME
value: "{{ .Values.mysql.user }}:{{ .Values.mysql.pass }}@{{ if .Values.mysql.protocol }}{{ .Values.mysql.protocol }}{{ end }}({{ .Values.mysql.host }}:{{ .Values.mysql.port }})/{{ if .Values.mysql.db }}{{ .Values.mysql.db }}{{ end }}{{ if .Values.mysql.param }}?{{ .Values.mysql.param }}{{ end }}"
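The new `collectors` loop above maps each entry in the `collectors` map to a mysqld_exporter flag: `true` becomes `--collect.X`, `false` becomes `--no-collect.X`, and any other value becomes `--collect.X` followed by the value as its argument. A rough Python sketch of that mapping (function name illustrative, not chart code):

```python
def collector_args(collectors):
    """Mirror the chart's collectors loop: bools toggle flags, other values add an argument."""
    args = []
    for key in sorted(collectors):  # Helm ranges over map keys in sorted order
        value = collectors[key]
        if value is True:
            args.append(f"--collect.{key}")
        elif value is False:
            args.append(f"--no-collect.{key}")
        else:
            args += [f"--collect.{key}", str(value)]
    return args
```

For example, the commented defaults in values.yaml such as `global_status: true` and `binlog_size: false` would render as `--collect.global_status` and `--no-collect.binlog_size`.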
diff --git a/stable/prometheus-mysql-exporter/values.yaml b/stable/prometheus-mysql-exporter/values.yaml
index 77fac30aa7e8..7f4828f597c1 100644
--- a/stable/prometheus-mysql-exporter/values.yaml
+++ b/stable/prometheus-mysql-exporter/values.yaml
@@ -33,11 +33,50 @@ tolerations: []
affinity: {}
+podLabels: {}
+
annotations:
prometheus.io/scrape: "true"
prometheus.io/path: "/metrics"
prometheus.io/port: "9104"
+collectors: {}
+ # auto_increment.columns: false
+ # binlog_size: false
+ # engine_innodb_status: false
+ # engine_tokudb_status: false
+ # global_status: true
+ # global_variables: true
+ # info_schema.clientstats: false
+ # info_schema.innodb_metrics: false
+ # info_schema.innodb_tablespaces: false
+ # info_schema.innodb_cmp: false
+ # info_schema.innodb_cmpmem: false
+ # info_schema.processlist: false
+ # info_schema.processlist.min_time: 0
+ # info_schema.query_response_time: false
+ # info_schema.tables: true
+ # info_schema.tables.databases: '*'
+ # info_schema.tablestats: false
+ # info_schema.schemastats: false
+ # info_schema.userstats: false
+ # perf_schema.eventsstatements: false
+ # perf_schema.eventsstatements.digest_text_limit: 120
+ # perf_schema.eventsstatements.limit: false
+ # perf_schema.eventsstatements.timelimit: 86400
+ # perf_schema.eventswaits: false
+ # perf_schema.file_events: false
+ # perf_schema.file_instances: false
+ # perf_schema.indexiowaits: false
+ # perf_schema.tableiowaits: false
+ # perf_schema.tablelocks: false
+ # perf_schema.replication_group_member_stats: false
+ # slave_status: true
+ # slave_hosts: false
+ # heartbeat: false
+ # heartbeat.database: heartbeat
+ # heartbeat.table: heartbeat
+
# mysql connection params which build the DATA_SOURCE_NAME env var of the docker container
mysql:
db: ""
diff --git a/stable/prometheus-nats-exporter/.helmignore b/stable/prometheus-nats-exporter/.helmignore
new file mode 100644
index 000000000000..825c00779157
--- /dev/null
+++ b/stable/prometheus-nats-exporter/.helmignore
@@ -0,0 +1,23 @@
+# Patterns to ignore when building packages.
+# This supports shell glob matching, relative path matching, and
+# negation (prefixed with !). Only one pattern per line.
+.DS_Store
+# Common VCS dirs
+.git/
+.gitignore
+.bzr/
+.bzrignore
+.hg/
+.hgignore
+.svn/
+# Common backup files
+*.swp
+*.bak
+*.tmp
+*~
+# Various IDEs
+.project
+.idea/
+*.tmproj
+
+OWNERS
diff --git a/stable/prometheus-nats-exporter/Chart.yaml b/stable/prometheus-nats-exporter/Chart.yaml
new file mode 100644
index 000000000000..9557b579503c
--- /dev/null
+++ b/stable/prometheus-nats-exporter/Chart.yaml
@@ -0,0 +1,15 @@
+apiVersion: v1
+appVersion: "0.17.0"
+description: A Helm chart for prometheus-nats-exporter
+name: prometheus-nats-exporter
+version: 1.0.0
+home: https://github.com/nats-io/prometheus-nats-exporter
+sources:
+ - https://github.com/nats-io/prometheus-nats-exporter
+keywords:
+ - nats
+ - prometheus
+ - exporter
+maintainers:
+ - email: okgolove@markeloff.net
+ name: okgolove
diff --git a/stable/prometheus-nats-exporter/OWNERS b/stable/prometheus-nats-exporter/OWNERS
new file mode 100644
index 000000000000..165767d85fe3
--- /dev/null
+++ b/stable/prometheus-nats-exporter/OWNERS
@@ -0,0 +1,4 @@
+approvers:
+- okgolove
+reviewers:
+- okgolove
diff --git a/stable/prometheus-nats-exporter/README.md b/stable/prometheus-nats-exporter/README.md
new file mode 100644
index 000000000000..3a17f2828376
--- /dev/null
+++ b/stable/prometheus-nats-exporter/README.md
@@ -0,0 +1,70 @@
+# Prometheus NATS Exporter
+
+* Installs prometheus [NATS exporter](https://github.com/nats-io/prometheus-nats-exporter)
+
+## TL;DR;
+
+```console
+$ helm install stable/prometheus-nats-exporter
+```
+
+## Introduction
+
+This chart bootstraps a prometheus [NATS exporter](https://github.com/nats-io/prometheus-nats-exporter) deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
+
+## Installing the Chart
+
+To install the chart with the release name `my-release`:
+
+```console
+$ helm install --name my-release stable/prometheus-nats-exporter
+```
+
+The command deploys NATS exporter on the Kubernetes cluster in the default configuration. The [configuration](#configuration) section lists the parameters that can be configured during installation.
+
+## Uninstalling the Chart
+
+To uninstall/delete the `my-release` deployment:
+
+```console
+$ helm delete my-release
+```
+
+The command removes all the Kubernetes components associated with the chart and deletes the release.
+
+## Configuration
+
+The following table lists the configurable parameters of the NATS exporter chart and their default values.
+
+| Parameter | Description | Default |
+| ------------------------------- | ------------------------------------------ | ---------------------------------------------------------- |
+| `image.repository` | Image repository | `appcelerator/prometheus-nats-exporter` |
+| `image.tag` | Image tag | `0.17.0` |
+| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
+| `service.type` | Service type | `ClusterIP` |
+| `service.port` | The service port | `80` |
+| `service.targetPort` | The target port of the container | `8222` |
+| `resources` | CPU/Memory resource requests/limits | `{}` |
+| `config.nats.service` | NATS monitoring [service name](https://github.com/helm/charts/blob/master/stable/nats/templates/monitoring-svc.yaml)| `nats-nats-monitoring`|
+| `config.nats.namespace` | Namespace in which NATS deployed | `default` |
+| `config.nats.port` | NATS monitoring service port | `8222` |
+| `tolerations` | Add tolerations | `[]` |
+| `nodeSelector` | node labels for pod assignment | `{}` |
+| `affinity` | node/pod affinities | `{}` |
+| `annotations` | Deployment annotations | `{}` |
+| `extraContainers` | Additional sidecar containers | `""` |
+| `extraVolumes` | Additional volumes for use in extraContainers | `""` |
+
+Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
+
+```console
+$ helm install --name my-release \
+ --set config.nats.service=nats-production-nats-monitoring \
+  stable/prometheus-nats-exporter
+```
+
+Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,
+
+```console
+$ helm install --name my-release -f values.yaml stable/prometheus-nats-exporter
+```
\ No newline at end of file
diff --git a/stable/prometheus-nats-exporter/templates/NOTES.txt b/stable/prometheus-nats-exporter/templates/NOTES.txt
new file mode 100644
index 000000000000..4277eac2d265
--- /dev/null
+++ b/stable/prometheus-nats-exporter/templates/NOTES.txt
@@ -0,0 +1,15 @@
+1. Get the application URL by running these commands:
+{{- if contains "NodePort" .Values.service.type }}
+ export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "prometheus-nats-exporter.fullname" . }})
+ export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
+ echo http://$NODE_IP:$NODE_PORT
+{{- else if contains "LoadBalancer" .Values.service.type }}
+ NOTE: It may take a few minutes for the LoadBalancer IP to be available.
+  You can watch the status of it by running 'kubectl get svc -w {{ include "prometheus-nats-exporter.fullname" . }}'
+ export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "prometheus-nats-exporter.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
+ echo http://$SERVICE_IP:{{ .Values.service.port }}
+{{- else if contains "ClusterIP" .Values.service.type }}
+ export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app={{ include "prometheus-nats-exporter.name" . }},release={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
+ echo "Visit http://127.0.0.1:8080 to use your application"
+ kubectl port-forward $POD_NAME 8080:80
+{{- end }}
diff --git a/stable/prometheus-nats-exporter/templates/_helpers.tpl b/stable/prometheus-nats-exporter/templates/_helpers.tpl
new file mode 100644
index 000000000000..8ab6bf00aa29
--- /dev/null
+++ b/stable/prometheus-nats-exporter/templates/_helpers.tpl
@@ -0,0 +1,32 @@
+{{/* vim: set filetype=mustache: */}}
+{{/*
+Expand the name of the chart.
+*/}}
+{{- define "prometheus-nats-exporter.name" -}}
+{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+
+{{/*
+Create a default fully qualified app name.
+We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
+If release name contains chart name it will be used as a full name.
+*/}}
+{{- define "prometheus-nats-exporter.fullname" -}}
+{{- if .Values.fullnameOverride -}}
+{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
+{{- else -}}
+{{- $name := default .Chart.Name .Values.nameOverride -}}
+{{- if contains $name .Release.Name -}}
+{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
+{{- else -}}
+{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Create chart name and version as used by the chart label.
+*/}}
+{{- define "prometheus-nats-exporter.chart" -}}
+{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
diff --git a/stable/prometheus-nats-exporter/templates/deployment.yaml b/stable/prometheus-nats-exporter/templates/deployment.yaml
new file mode 100644
index 000000000000..fc612688d852
--- /dev/null
+++ b/stable/prometheus-nats-exporter/templates/deployment.yaml
@@ -0,0 +1,63 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: {{ include "prometheus-nats-exporter.fullname" . }}
+ labels:
+ app.kubernetes.io/name: {{ include "prometheus-nats-exporter.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ helm.sh/chart: {{ include "prometheus-nats-exporter.chart" . }}
+spec:
+ replicas: {{ .Values.replicaCount }}
+ selector:
+ matchLabels:
+ app.kubernetes.io/name: {{ include "prometheus-nats-exporter.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ template:
+ metadata:
+ labels:
+ app.kubernetes.io/name: {{ include "prometheus-nats-exporter.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ annotations:
+{{- if .Values.annotations }}
+{{ toYaml .Values.annotations | indent 8 }}
+{{- end}}
+ spec:
+ containers:
+ - name: {{ .Chart.Name }}
+ args:
+ - "-port"
+ - "{{ .Values.service.targetPort }}"
+ - "-varz"
+ - "http://{{ .Values.config.nats.service }}.{{ .Values.config.nats.namespace }}.svc.cluster.local:{{ .Values.config.nats.port }}"
+ image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
+ imagePullPolicy: {{ .Values.image.pullPolicy }}
+ ports:
+ - name: http
+ containerPort: {{ .Values.service.targetPort }}
+ protocol: TCP
+ livenessProbe:
+ httpGet:
+ path: /metrics
+ port: http
+ readinessProbe:
+ httpGet:
+ path: /metrics
+ port: http
+ resources:
+{{ toYaml .Values.resources | indent 12 }}
+ {{- with .Values.nodeSelector }}
+ nodeSelector:
+{{ toYaml . | indent 8 }}
+ {{- end }}
+ {{- with .Values.affinity }}
+ affinity:
+{{ toYaml . | indent 8 }}
+ {{- end }}
+ {{- with .Values.tolerations }}
+ tolerations:
+{{ toYaml . | indent 8 }}
+ {{- end }}
+{{- with .Values.extraVolumes }}
+{{ tpl . $ | indent 6 }}
+{{- end }}
diff --git a/stable/prometheus-nats-exporter/templates/service.yaml b/stable/prometheus-nats-exporter/templates/service.yaml
new file mode 100644
index 000000000000..793f18970136
--- /dev/null
+++ b/stable/prometheus-nats-exporter/templates/service.yaml
@@ -0,0 +1,19 @@
+apiVersion: v1
+kind: Service
+metadata:
+ name: {{ include "prometheus-nats-exporter.fullname" . }}
+ labels:
+ app.kubernetes.io/name: {{ include "prometheus-nats-exporter.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+ app.kubernetes.io/managed-by: {{ .Release.Service }}
+ helm.sh/chart: {{ include "prometheus-nats-exporter.chart" . }}
+spec:
+ type: {{ .Values.service.type }}
+ ports:
+ - port: {{ .Values.service.port }}
+ targetPort: {{ .Values.service.targetPort }}
+ protocol: TCP
+ name: http
+ selector:
+ app.kubernetes.io/name: {{ include "prometheus-nats-exporter.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
diff --git a/stable/prometheus-nats-exporter/values.yaml b/stable/prometheus-nats-exporter/values.yaml
new file mode 100644
index 000000000000..90e4a8f15e09
--- /dev/null
+++ b/stable/prometheus-nats-exporter/values.yaml
@@ -0,0 +1,45 @@
+# Default values for prometheus-nats-exporter.
+# This is a YAML-formatted file.
+# Declare variables to be passed into your templates.
+
+replicaCount: 1
+
+image:
+ repository: appcelerator/prometheus-nats-exporter
+ tag: 0.17.0
+ pullPolicy: IfNotPresent
+
+service:
+ type: ClusterIP
+ port: 80
+ targetPort: 8222
+
+resources: {}
+ # We usually recommend not to specify default resources and to leave this as a conscious
+ # choice for the user. This also increases chances charts run on environments with little
+ # resources, such as Minikube. If you do want to specify resources, uncomment the following
+ # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
+ # limits:
+ # cpu: 100m
+ # memory: 128Mi
+ # requests:
+ # cpu: 100m
+ # memory: 128Mi
+
+config:
+ nats:
+ service: nats-nats-monitoring
+ namespace: default
+ port: 8222
+
+nodeSelector: {}
+
+tolerations: []
+
+affinity: {}
+
+annotations: {}
+
+extraContainers: |
+
+extraVolumes: |
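For reference, `extraVolumes` (like `extraContainers`) is consumed as a templated string: the deployment template pipes the value through `tpl` and re-indents it into the pod spec. A minimal sketch of a populated value could look like the following; the volume and ConfigMap names are hypothetical:

```yaml
# Hypothetical example only. The template renders this string with `tpl`,
# so Helm template expressions inside it are evaluated, and the result is
# indented into the pod spec (hence the leading `volumes:` key).
extraVolumes: |
  volumes:
    - name: extra-config                    # illustrative name
      configMap:
        name: "{{ .Release.Name }}-extra"   # illustrative ConfigMap
```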
diff --git a/stable/prometheus-node-exporter/Chart.yaml b/stable/prometheus-node-exporter/Chart.yaml
index 89f76634ab6e..0653576fb706 100644
--- a/stable/prometheus-node-exporter/Chart.yaml
+++ b/stable/prometheus-node-exporter/Chart.yaml
@@ -1,8 +1,8 @@
apiVersion: v1
-appVersion: "0.17.0"
+appVersion: "0.18.0"
description: A Helm chart for prometheus node-exporter
name: prometheus-node-exporter
-version: 1.1.0
+version: 1.5.0
home: https://github.com/prometheus/node_exporter/
sources:
- https://github.com/prometheus/node_exporter/
diff --git a/stable/prometheus-node-exporter/README.md b/stable/prometheus-node-exporter/README.md
index e5027062a78e..0d31ec68a607 100644
--- a/stable/prometheus-node-exporter/README.md
+++ b/stable/prometheus-node-exporter/README.md
@@ -39,7 +39,7 @@ The following table lists the configurable parameters of the Node Exporter chart
| Parameter | Description | Default | |
| --------------------------------- | ----------------------------------------------------------------------------------------------------------------------------- | --------------------------------------- | --- |
| `image.repository` | Image repository | `quay.io/prometheus/node-exporter` | |
-| `image.tag` | Image tag | `v0.16.0` | |
+| `image.tag` | Image tag | `v0.18.0` | |
| `image.pullPolicy` | Image pull policy | `IfNotPresent` | |
| `extraArgs` | Additional container arguments | `[]` | |
| `extraHostVolumeMounts` | Additional host volume mounts | {} | |
@@ -56,10 +56,15 @@ The following table lists the configurable parameters of the Node Exporter chart
| `serviceAccount.name` | Service account to be used. If not set and `serviceAccount.create` is `true`, a name is generated using the fullname template | | |
| `serviceAccount.imagePullSecrets` | Specify image pull secrets | `[]` | |
| `securityContext` | SecurityContext | `{"runAsNonRoot": true, "runAsUser": 65534}` | |
+| `affinity` | A group of affinity scheduling rules for pod assignment | `{}` | |
+| `nodeSelector` | Node labels for pod assignment | `{}` | |
| `tolerations` | List of node taints to tolerate | `- effect: NoSchedule operator: Exists` | |
| `priorityClassName` | Name of Priority Class to assign pods | `nil` | |
| `endpoints` | list of addresses that have node exporter deployed outside of the cluster | `[]` | |
-
+| `hostNetwork` | Whether to expose the service to the host network | `true` | |
+| `prometheus.monitor.enabled` | Set this to `true` to create ServiceMonitor for Prometheus operator | `false` | |
+| `prometheus.monitor.additionalLabels` | Additional labels that can be used so ServiceMonitor will be discovered by Prometheus | `{}` | |
+| `prometheus.monitor.namespace` | Namespace where the ServiceMonitor resource should be created | `the same namespace as prometheus node exporter` | |
+
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
diff --git a/stable/prometheus-node-exporter/templates/daemonset.yaml b/stable/prometheus-node-exporter/templates/daemonset.yaml
index a6687d5bc05f..f4c982bb67db 100644
--- a/stable/prometheus-node-exporter/templates/daemonset.yaml
+++ b/stable/prometheus-node-exporter/templates/daemonset.yaml
@@ -7,18 +7,18 @@ spec:
selector:
matchLabels:
app: {{ template "prometheus-node-exporter.name" . }}
- release: {{ .Release.Name }}
+ release: {{ .Release.Name }}
updateStrategy:
type: RollingUpdate
rollingUpdate:
- maxUnavailable: 1
+ maxUnavailable: 1
template:
metadata:
labels: {{ include "prometheus-node-exporter.labels" . | indent 8 }}
spec:
{{- if and .Values.rbac.create .Values.serviceAccount.create }}
serviceAccountName: {{ template "prometheus-node-exporter.serviceAccountName" . }}
-{{- end }}
+{{- end }}
{{- if .Values.securityContext }}
securityContext:
{{ toYaml .Values.securityContext | indent 8 }}
@@ -63,10 +63,21 @@ spec:
- name: {{ $mount.name }}
mountPath: {{ $mount.mountPath }}
readOnly: {{ $mount.readOnly }}
+ {{- if $mount.mountPropagation }}
+ mountPropagation: {{ $mount.mountPropagation }}
+ {{- end }}
{{- end }}
{{- end }}
- hostNetwork: true
+ hostNetwork: {{ .Values.hostNetwork }}
hostPID: true
+{{- if .Values.affinity }}
+ affinity:
+{{ toYaml .Values.affinity | indent 8 }}
+{{- end }}
+{{- if .Values.nodeSelector }}
+ nodeSelector:
+{{ toYaml .Values.nodeSelector | indent 8 }}
+{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{ toYaml . | indent 8 }}
diff --git a/stable/prometheus-node-exporter/templates/monitor.yaml b/stable/prometheus-node-exporter/templates/monitor.yaml
new file mode 100644
index 000000000000..9c723e690f82
--- /dev/null
+++ b/stable/prometheus-node-exporter/templates/monitor.yaml
@@ -0,0 +1,17 @@
+{{- if .Values.prometheus.monitor.enabled }}
+apiVersion: monitoring.coreos.com/v1
+kind: ServiceMonitor
+metadata:
+ name: {{ template "prometheus-node-exporter.fullname" . }}
+ labels: {{ include "prometheus-node-exporter.labels" . | indent 4 }}
+ {{- if .Values.prometheus.monitor.additionalLabels }}
+{{ toYaml .Values.prometheus.monitor.additionalLabels | indent 4 }}
+ {{- end }}
+spec:
+ selector:
+ matchLabels:
+ app: {{ template "prometheus-node-exporter.name" . }}
+ release: {{ .Release.Name }}
+ endpoints:
+ - port: metrics
+{{- end }}
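Tying this template back to values.yaml, a minimal configuration that renders the ServiceMonitor above might look like the sketch below. The label key and value under `additionalLabels` are assumptions; they must match whatever `serviceMonitorSelector` your Prometheus instance uses:

```yaml
prometheus:
  monitor:
    enabled: true
    namespace: monitoring      # optional; empty string uses the release namespace
    additionalLabels:
      release: my-prometheus   # hypothetical; match your Prometheus selector
```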
diff --git a/stable/prometheus-node-exporter/templates/serviceaccount.yaml b/stable/prometheus-node-exporter/templates/serviceaccount.yaml
index 8414781d8145..b70745aa6f82 100644
--- a/stable/prometheus-node-exporter/templates/serviceaccount.yaml
+++ b/stable/prometheus-node-exporter/templates/serviceaccount.yaml
@@ -10,6 +10,6 @@ metadata:
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
imagePullSecrets:
-{{ toYaml .Values.imagePullSecrets | indent 2 }}
+{{ toYaml .Values.serviceAccount.imagePullSecrets | indent 2 }}
{{- end -}}
{{- end -}}
\ No newline at end of file
diff --git a/stable/prometheus-node-exporter/values.yaml b/stable/prometheus-node-exporter/values.yaml
index 6df243685d67..7ca723ff9198 100644
--- a/stable/prometheus-node-exporter/values.yaml
+++ b/stable/prometheus-node-exporter/values.yaml
@@ -3,7 +3,7 @@
# Declare variables to be passed into your templates.
image:
repository: quay.io/prometheus/node-exporter
- tag: v0.17.0
+ tag: v0.18.0
pullPolicy: IfNotPresent
service:
@@ -14,6 +14,12 @@ service:
annotations:
prometheus.io/scrape: "true"
+prometheus:
+ monitor:
+ enabled: false
+ additionalLabels: {}
+ namespace: ""
+
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
@@ -50,6 +56,27 @@ rbac:
# their addresses here
endpoints: []
+# Expose the service to the host network
+hostNetwork: true
+
+## Assign a group of affinity scheduling rules
+##
+affinity: {}
+# nodeAffinity:
+# requiredDuringSchedulingIgnoredDuringExecution:
+# nodeSelectorTerms:
+# - matchFields:
+# - key: metadata.name
+# operator: In
+# values:
+# - target-host-name
+
+## Assign a nodeSelector if operating a hybrid cluster
+##
+nodeSelector: {}
+# beta.kubernetes.io/arch: amd64
+# beta.kubernetes.io/os: linux
+
tolerations:
- effect: NoSchedule
operator: Exists
@@ -69,3 +96,4 @@ extraHostVolumeMounts: {}
# hostPath:
# mountPath:
# readOnly: true|false
+# mountPropagation: None|HostToContainer|Bidirectional
diff --git a/stable/prometheus-operator/.helmignore b/stable/prometheus-operator/.helmignore
index f0c131944441..aba2fa8ce401 100644
--- a/stable/prometheus-operator/.helmignore
+++ b/stable/prometheus-operator/.helmignore
@@ -19,3 +19,8 @@
.project
.idea/
*.tmproj
+# helm/charts
+OWNERS
+hack/
+ci/
+prometheus-operator-*.tgz
diff --git a/stable/prometheus-operator/CONTRIBUTING.md b/stable/prometheus-operator/CONTRIBUTING.md
index 074b3cd90430..2fba4f200f33 100644
--- a/stable/prometheus-operator/CONTRIBUTING.md
+++ b/stable/prometheus-operator/CONTRIBUTING.md
@@ -1,8 +1,11 @@
# Contributing Guidelines
- ## How to contribute to this chart
- 1. Fork this repository, develop and test your Chart.
+## How to contribute to this chart
+1. Fork this repository, develop and test your Chart.
1. Bump the chart version for every change.
1. Ensure PR title has the prefix `[stable/prometheus-operator]`
+1. When making changes to values.yaml, update the files in `ci/` by running `hack/update-ci.sh`
+1. When making changes to rules or dashboards, see the README.md section on how to sync data from upstream repositories
1. Note that the `hack/minikube` folder contains scripts to set up minikube and the components of this chart so that all components can be scraped. You can use this configuration when validating your changes.
1. Check for changes of RBAC rules.
1. Check for changes in CRD specs.
1. PR must pass the linter (`helm lint`)
\ No newline at end of file
diff --git a/stable/prometheus-operator/Chart.yaml b/stable/prometheus-operator/Chart.yaml
index 49076c731f16..cd2a095ef18f 100644
--- a/stable/prometheus-operator/Chart.yaml
+++ b/stable/prometheus-operator/Chart.yaml
@@ -5,12 +5,13 @@ engine: gotpl
maintainers:
- name: gianrubio
email: gianrubio@gmail.com
+ - name: anothertobi
name: prometheus-operator
sources:
- https://github.com/coreos/prometheus-operator
- https://coreos.com/operators/prometheus
-version: 2.1.1
-appVersion: 0.26.0
+version: 5.10.4
+appVersion: 0.29.0
home: https://github.com/coreos/prometheus-operator
keywords:
- operator
diff --git a/stable/prometheus-operator/OWNERS b/stable/prometheus-operator/OWNERS
index ca300fa25d28..697dee9ccbd4 100644
--- a/stable/prometheus-operator/OWNERS
+++ b/stable/prometheus-operator/OWNERS
@@ -1,4 +1,8 @@
approvers:
- gianrubio
+- vsliouniaev
+- anothertobi
reviewers:
- gianrubio
+- vsliouniaev
+- anothertobi
diff --git a/stable/prometheus-operator/README.md b/stable/prometheus-operator/README.md
index abe5b7563461..1fca43170430 100644
--- a/stable/prometheus-operator/README.md
+++ b/stable/prometheus-operator/README.md
@@ -1,6 +1,23 @@
# prometheus-operator
-Installs [prometheus-operator](https://github.com/coreos/prometheus-operator) to create/configure/manage Prometheus clusters atop Kubernetes.
+Installs [prometheus-operator](https://github.com/coreos/prometheus-operator) to create/configure/manage Prometheus clusters atop Kubernetes. This chart includes multiple components and is suitable for a variety of use-cases.
+
+The default installation is intended to monitor the Kubernetes cluster the chart is deployed onto. It closely matches the kube-prometheus project.
+- [prometheus-operator](https://github.com/coreos/prometheus-operator)
+- [prometheus](https://prometheus.io/)
+- [alertmanager](https://github.com/prometheus/alertmanager)
+- [node-exporter](https://github.com/helm/charts/tree/master/stable/prometheus-node-exporter)
+- [kube-state-metrics](https://github.com/helm/charts/tree/master/stable/kube-state-metrics)
+- [grafana](https://github.com/helm/charts/tree/master/stable/grafana)
+- service monitors to scrape internal kubernetes components
+ - kube-apiserver
+ - kube-scheduler
+ - kube-controller-manager
+ - etcd
+ - kube-dns/coredns
+
+In addition to these components, the chart installs default dashboards and alerting rules.
+
+The same chart can be used to run multiple Prometheus instances in the same cluster if required. To achieve this, disable the other components: only a single instance of prometheus-operator needs to run in the cluster, together with one pair of alertmanager pods for an HA configuration.
## TL;DR;
@@ -40,16 +57,41 @@ The command removes all the Kubernetes components associated with the chart and
CRDs created by this chart are not removed by default and should be manually cleaned up:
-```
+```console
kubectl delete crd prometheuses.monitoring.coreos.com
kubectl delete crd prometheusrules.monitoring.coreos.com
kubectl delete crd servicemonitors.monitoring.coreos.com
kubectl delete crd alertmanagers.monitoring.coreos.com
```
+## Work-Arounds for Known Issues
+
+### Helm fails to create CRDs
+Due to a bug in Helm, it is possible for the 4 CRDs created by this chart to fail to be fully deployed before Helm attempts to create resources that require them. This affects all versions of Helm with a [potential fix pending](https://github.com/helm/helm/pull/5112). To work around this issue when installing the chart, first make sure all 4 CRDs exist in the cluster, then disable their provisioning by the chart:
+
+1. Create CRDs
+```console
+kubectl apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/master/example/prometheus-operator-crd/alertmanager.crd.yaml
+kubectl apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/master/example/prometheus-operator-crd/prometheus.crd.yaml
+kubectl apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/master/example/prometheus-operator-crd/prometheusrule.crd.yaml
+kubectl apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/master/example/prometheus-operator-crd/servicemonitor.crd.yaml
+```
+
+2. Wait for CRDs to be created, which should only take a few seconds
+
+3. Install the chart, but disable the CRD provisioning by setting `prometheusOperator.createCustomResource=false`
+```console
+$ helm install --name my-release stable/prometheus-operator --set prometheusOperator.createCustomResource=false
+```
+
+### Helm <2.10 workaround
+The `crd-install` hook is required to deploy the prometheus operator CRDs before they are used. If you are forced to use an earlier version of Helm you can work around this requirement as follows:
+1. Install prometheus-operator by itself, disabling everything but the prometheus-operator component, and also setting `prometheusOperator.serviceMonitor.selfMonitor=false`
+2. Install all the other components, and configure `prometheus.additionalServiceMonitors` to scrape the prometheus-operator service.
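As a sketch, the two steps above could be performed with a pair of releases along these lines. The component toggles shown are illustrative and may differ between chart versions; check values.yaml for the authoritative flag names:

```console
# Step 1 (sketch): operator only, with self-monitoring disabled
$ helm install --name po-operator stable/prometheus-operator \
    --set prometheus.enabled=false \
    --set alertmanager.enabled=false \
    --set grafana.enabled=false \
    --set prometheusOperator.serviceMonitor.selfMonitor=false

# Step 2 (sketch): the remaining components, in a second release
$ helm install --name po-monitoring stable/prometheus-operator \
    --set prometheusOperator.enabled=false
```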
+
## Configuration
-The following tables lists the configurable parameters of the prometheus-operator chart and their default values.
+The following tables list the configurable parameters of the prometheus-operator chart and their default values.
### General
| Parameter | Description | Default |
@@ -76,6 +118,7 @@ The following tables lists the configurable parameters of the prometheus-operato
| `defaultRules.rules.prometheus` | Create Prometheus default rules| `true` |
| `defaultRules.labels` | Labels for default rules for monitoring the cluster | `{}` |
| `defaultRules.annotations` | Annotations for default rules for monitoring the cluster | `{}` |
+| `additionalPrometheusRules` | List of `prometheusRule` objects to create. See https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#prometheusrulespec. | `[]` |
| `global.rbac.create` | Create RBAC resources | `true` |
| `global.rbac.pspEnabled` | Create pod security policy resources | `true` |
| `global.imagePullSecrets` | Reference to one or more secrets to be used when pulling images | `[]` |
@@ -84,19 +127,25 @@ The following tables lists the configurable parameters of the prometheus-operato
| Parameter | Description | Default |
| ----- | ----------- | ------ |
| `prometheusOperator.enabled` | Deploy Prometheus Operator. Only one of these should be deployed into the cluster | `true` |
-| `prometheusOperator.serviceAccount` | Create a serviceaccount for the operator | `true` |
-| `prometheusOperator.name` | Operator serviceAccount name | `""` |
+| `prometheusOperator.serviceAccount.create` | Create a serviceaccount for the operator | `true` |
+| `prometheusOperator.serviceAccount.name` | Operator serviceAccount name | `""` |
+| `prometheusOperator.logFormat` | Operator log output formatting | `"logfmt"` |
+| `prometheusOperator.logLevel` | Operator log level. Possible values: "all", "debug", "info", "warn", "error", "none" | `"info"` |
| `prometheusOperator.createCustomResource` | Create CRDs. Required if deploying anything besides the operator itself as part of the release. The operator will create / update these on startup. If your Helm version < 2.10 you will have to either create the CRDs first or deploy the operator first, then the rest of the resources | `true` |
| `prometheusOperator.crdApiGroup` | Specify the API Group for the CustomResourceDefinitions | `monitoring.coreos.com` |
| `prometheusOperator.cleanupCustomResource` | Attempt to delete CRDs when the release is removed. This option may be useful while testing but is not recommended, as deleting the CRD definition will delete resources and prevent the operator from being able to clean up resources that it manages | `false` |
| `prometheusOperator.podLabels` | Labels to add to the operator pod | `{}` |
+| `prometheusOperator.podAnnotations` | Annotations to add to the operator pod | `{}` |
| `prometheusOperator.priorityClassName` | Name of Priority Class to assign pods | `nil` |
| `prometheusOperator.kubeletService.enabled` | If true, the operator will create and maintain a service for scraping kubelets | `true` |
| `prometheusOperator.kubeletService.namespace` | Namespace to deploy kubelet service | `kube-system` |
| `prometheusOperator.serviceMonitor.selfMonitor` | Enable monitoring of prometheus operator | `true` |
+| `prometheusOperator.serviceMonitor.interval` | Scrape interval. If not set, the Prometheus default scrape interval is used | `nil` |
+| `prometheusOperator.serviceMonitor.metricRelabelings` | The `metric_relabel_configs` for scraping the operator instance. | `` |
+| `prometheusOperator.serviceMonitor.relabelings` | The `relabel_configs` for scraping the operator instance. | `` |
| `prometheusOperator.service.type` | Prometheus operator service type | `ClusterIP` |
| `prometheusOperator.service.clusterIP` | Prometheus operator service clusterIP IP | `""` |
-| `prometheusOperator.service.nodePort` | Port to expose prometheus operator service on each node | `38080` |
+| `prometheusOperator.service.nodePort` | Port to expose prometheus operator service on each node | `30080` |
| `prometheusOperator.service.annotations` | Annotations to be added to the prometheus operator service | `{}` |
| `prometheusOperator.service.labels` | Prometheus Operator Service Labels | `{}` |
| `prometheusOperator.service.externalIPs` | List of IP addresses at which the Prometheus Operator server service is available | `[]` |
@@ -106,14 +155,16 @@ The following tables lists the configurable parameters of the prometheus-operato
| `prometheusOperator.securityContext` | SecurityContext for prometheus operator | `{"runAsNonRoot": true, "runAsUser": 65534}` |
| `prometheusOperator.nodeSelector` | Prometheus operator node selector https://kubernetes.io/docs/user-guide/node-selection/ | `{}` |
| `prometheusOperator.tolerations` | Tolerations for use with node taints https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ | `[]` |
-| `prometheusOperator.affinity` | Assign the prometheus operator to run on specific nodes https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ | `{}` |
+| `prometheusOperator.affinity` | Assign custom affinity rules to the prometheus operator https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ | `{}` |
| `prometheusOperator.image.repository` | Repository for prometheus operator image | `quay.io/coreos/prometheus-operator` |
-| `prometheusOperator.image.tag` | Tag for prometheus operator image | `v0.26.0` |
+| `prometheusOperator.image.tag` | Tag for prometheus operator image | `v0.29.0` |
| `prometheusOperator.image.pullPolicy` | Pull policy for prometheus operator image | `IfNotPresent` |
| `prometheusOperator.configmapReloadImage.repository` | Repository for configmapReload image | `quay.io/coreos/configmap-reload` |
| `prometheusOperator.configmapReloadImage.tag` | Tag for configmapReload image | `v0.0.1` |
| `prometheusOperator.prometheusConfigReloaderImage.repository` | Repository for config-reloader image | `quay.io/coreos/prometheus-config-reloader` |
-| `prometheusOperator.prometheusConfigReloaderImage.tag` | Tag for config-reloader image | `v0.26.0` |
+| `prometheusOperator.prometheusConfigReloaderImage.tag` | Tag for config-reloader image | `v0.29.0` |
+| `prometheusOperator.configReloaderCpu` | Set the prometheus config reloader side-car CPU limit. If unset, uses the prometheus-operator project default | `nil` |
+| `prometheusOperator.configReloaderMemory` | Set the prometheus config reloader side-car memory limit. If unset, uses the prometheus-operator project default | `nil` |
| `prometheusOperator.hyperkubeImage.repository` | Repository for hyperkube image used to perform maintenance tasks | `k8s.gcr.io/hyperkube` |
| `prometheusOperator.hyperkubeImage.tag` | Tag for hyperkube image used to perform maintenance tasks | `v1.12.1` |
| `prometheusOperator.hyperkubeImage.repository` | Image pull policy for hyperkube image used to perform maintenance tasks | `IfNotPresent` |
@@ -123,6 +174,9 @@ The following tables lists the configurable parameters of the prometheus-operato
| ----- | ----------- | ------ |
| `prometheus.enabled` | Deploy prometheus | `true` |
| `prometheus.serviceMonitor.selfMonitor` | Create a `serviceMonitor` to automatically monitor the prometheus instance | `true` |
+| `prometheus.serviceMonitor.interval` | Scrape interval. If not set, the Prometheus default scrape interval is used | `nil` |
+| `prometheus.serviceMonitor.metricRelabelings` | The `metric_relabel_configs` for scraping the prometheus instance. | `` |
+| `prometheus.serviceMonitor.relabelings` | The `relabel_configs` for scraping the prometheus instance. | `` |
| `prometheus.serviceAccount.create` | Create a default serviceaccount for prometheus to use | `true` |
| `prometheus.serviceAccount.name` | Name for prometheus serviceaccount | `""` |
| `prometheus.rbac.roleNamespaces` | Create role bindings in the specified namespaces, to allow Prometheus monitoring a role binding in the release namespace will always be created. | `["kube-system"]` |
@@ -133,26 +187,31 @@ The following tables lists the configurable parameters of the prometheus-operato
| `prometheus.ingress.annotations` | Prometheus Ingress annotations | `{}` |
| `prometheus.ingress.labels` | Prometheus Ingress additional labels | `{}` |
| `prometheus.ingress.hosts` | Prometheus Ingress hostnames | `[]` |
+| `prometheus.ingress.paths` | Prometheus Ingress paths | `[]` |
| `prometheus.ingress.tls` | Prometheus Ingress TLS configuration (YAML) | `[]` |
| `prometheus.service.type` | Prometheus Service type | `ClusterIP` |
| `prometheus.service.clusterIP` | Prometheus service clusterIP IP | `""` |
-| `prometheus.service.nodePort` | Prometheus Service port for NodePort service type | `39090` |
+| `prometheus.service.targetPort` | Prometheus Service internal port | `9090` |
+| `prometheus.service.nodePort` | Prometheus Service port for NodePort service type | `30090` |
+| `prometheus.service.additionalPorts` | Additional Prometheus Service ports to add for NodePort service type | `[]` |
| `prometheus.service.annotations` | Prometheus Service Annotations | `{}` |
| `prometheus.service.labels` | Prometheus Service Labels | `{}` |
| `prometheus.service.externalIPs` | List of IP addresses at which the Prometheus server service is available | `[]` |
| `prometheus.service.loadBalancerIP` | Prometheus Loadbalancer IP | `""` |
| `prometheus.service.loadBalancerSourceRanges` | Prometheus Load Balancer Source Ranges | `[]` |
+| `prometheus.service.sessionAffinity` | Prometheus Service Session Affinity | `""` |
| `prometheus.additionalServiceMonitors` | List of `serviceMonitor` objects to create. See https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#servicemonitorspec | `[]` |
| `prometheus.prometheusSpec.podMetadata` | Standard object’s metadata. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md#metadata Metadata Labels and Annotations gets propagated to the prometheus pods. | `{}` |
| `prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues` | If true, a nil or {} value for prometheus.prometheusSpec.serviceMonitorSelector will cause the prometheus resource to be created with selectors based on values in the helm deployment, which will also match the servicemonitors created | `true` |
-| `prometheus.prometheusSpec.serviceMonitorSelector` | ServiceMonitors to be selected for target discovery. | `{}` |
-| `prometheus.prometheusSpec.serviceMonitorNamespaceSelector` | Namespaces to be selected for ServiceMonitor discovery. If nil, only check own namespace. | `{}` |
+| `prometheus.prometheusSpec.serviceMonitorSelector` | ServiceMonitors to be selected for target discovery. If {}, select all ServiceMonitors | `{}` |
+| `prometheus.prometheusSpec.serviceMonitorNamespaceSelector` | Namespaces to be selected for ServiceMonitor discovery. See [metav1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#labelselector-v1-meta) for usage | `{}` |
| `prometheus.prometheusSpec.image.repository` | Base image to use for a Prometheus deployment. | `quay.io/prometheus/prometheus` |
-| `prometheus.prometheusSpec.image.tag` | Tag of Prometheus container image to be deployed. | `v2.5.0` |
+| `prometheus.prometheusSpec.image.tag` | Tag of Prometheus container image to be deployed. | `v2.9.1` |
| `prometheus.prometheusSpec.paused` | When a Prometheus deployment is paused, no actions except for deletion will be performed on the underlying objects. | `false` |
| `prometheus.prometheusSpec.replicas` | Number of instances to deploy for a Prometheus deployment. | `1` |
-| `prometheus.prometheusSpec.retention` | Time duration Prometheus shall retain data for. Must match the regular expression `[0-9]+(ms\|s\|m\|h\|d\|w\|y)` (milliseconds seconds minutes hours days weeks years). | `120h` |
+| `prometheus.prometheusSpec.retention` | Time duration Prometheus shall retain data for. Must match the regular expression `[0-9]+(ms\|s\|m\|h\|d\|w\|y)` (milliseconds seconds minutes hours days weeks years). | `10d` |
| `prometheus.prometheusSpec.logLevel` | Log level for Prometheus to be configured with. | `info` |
+| `prometheus.prometheusSpec.logFormat` | Log format for Prometheus to be configured with. | `logfmt` |
| `prometheus.prometheusSpec.scrapeInterval` | Interval between consecutive scrapes. | `""` |
| `prometheus.prometheusSpec.evaluationInterval` | Interval between consecutive evaluations. | `""` |
| `prometheus.prometheusSpec.externalLabels` | The labels to add to any time series or alerts when communicating with external systems (federation, remote storage, Alertmanager). | `[]` |
@@ -160,20 +219,23 @@ The following tables lists the configurable parameters of the prometheus-operato
| `prometheus.prometheusSpec.routePrefix` | The route prefix Prometheus registers HTTP handlers for. This is useful, if using ExternalURL and a proxy is rewriting HTTP routes of a request, and the actual ExternalURL is still true, but the server serves requests under a different route prefix. For example for use with `kubectl proxy`. | `/` |
| `prometheus.prometheusSpec.storageSpec` | Storage spec to specify how storage shall be used. | `{}` |
| `prometheus.prometheusSpec.ruleSelectorNilUsesHelmValues` | If true, a nil or {} value for prometheus.prometheusSpec.ruleSelector will cause the prometheus resource to be created with selectors based on values in the helm deployment, which will also match the PrometheusRule resources created. | `true` |
-| `prometheus.prometheusSpec.ruleSelector` | A selector to select which PrometheusRules to mount for loading alerting rules from. Until (excluding) Prometheus Operator v0.24.0 Prometheus Operator will migrate any legacy rule ConfigMaps to PrometheusRule custom resources selected by RuleSelector. Make sure it does not match any config maps that you do not want to be migrated. | `{}` |
-| `prometheus.prometheusSpec.ruleNamespaceSelector` | Namespaces to be selected for PrometheusRules discovery. If unspecified, only the same namespace as the Prometheus object is in is used. | `{}` |
+| `prometheus.prometheusSpec.ruleSelector` | A selector to select which PrometheusRules to mount for loading alerting rules from. Until (excluding) Prometheus Operator v0.24.0 Prometheus Operator will migrate any legacy rule ConfigMaps to PrometheusRule custom resources selected by RuleSelector. Make sure it does not match any config maps that you do not want to be migrated. If {}, select all PrometheusRules | `{}` |
+| `prometheus.prometheusSpec.ruleNamespaceSelector` | Namespaces to be selected for PrometheusRules discovery. If nil, select own namespace. See [namespaceSelector](https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#namespaceselector) for usage | `{}` |
| `prometheus.prometheusSpec.alertingEndpoints` | Alertmanagers to which alerts will be sent https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#alertmanagerendpoints Default configuration will connect to the alertmanager deployed as part of this release | `[]` |
| `prometheus.prometheusSpec.resources` | Define resources requests and limits for single Pods. | `{}` |
| `prometheus.prometheusSpec.nodeSelector` | Define which Nodes the Pods are scheduled on. | `{}` |
| `prometheus.prometheusSpec.secrets` | Secrets is a list of Secrets in the same namespace as the Prometheus object, which shall be mounted into the Prometheus Pods. The Secrets are mounted into /etc/prometheus/secrets/. Secrets changes after initial creation of a Prometheus object are not reflected in the running Pods. To change the secrets mounted into the Prometheus Pods, the object must be deleted and recreated with the new list of secrets. | `[]` |
| `prometheus.prometheusSpec.configMaps` | ConfigMaps is a list of ConfigMaps in the same namespace as the Prometheus object, which shall be mounted into the Prometheus Pods. The ConfigMaps are mounted into /etc/prometheus/configmaps/ | `[]` |
-|`prometheus.prometheusSpec.podAntiAffinity` | Pod anti-affinity can prevent the scheduler from placing Prometheus replicas on the same node. The default value "soft" means that the scheduler should *prefer* to not schedule two replica pods onto the same node but no guarantee is provided. The value "hard" means that the scheduler is *required* to not schedule two replica pods onto the same node. The value "" will disable pod anti-affinity so that no anti-affinity rules will be configured. | `""` |
-|`prometheus.prometheusSpec.podAntiAffinityTopologyKey` | If anti-affinity is enabled sets the topologyKey to use for anti-affinity. This can be changed to, for example `failure-domain.beta.kubernetes.io/zone`| `kubernetes.io/hostname` |
+| `prometheus.prometheusSpec.query` | QuerySpec defines the query command line flags when starting Prometheus. Not all parameters are supported by the operator - [see coreos documentation](https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#queryspec) | `{}` |
+| `prometheus.prometheusSpec.podAntiAffinity` | Pod anti-affinity can prevent the scheduler from placing Prometheus replicas on the same node. The default value "soft" means that the scheduler should *prefer* to not schedule two replica pods onto the same node but no guarantee is provided. The value "hard" means that the scheduler is *required* to not schedule two replica pods onto the same node. The value "" will disable pod anti-affinity so that no anti-affinity rules will be configured. | `""` |
+| `prometheus.prometheusSpec.podAntiAffinityTopologyKey` | If anti-affinity is enabled sets the topologyKey to use for anti-affinity. This can be changed to, for example `failure-domain.beta.kubernetes.io/zone`| `kubernetes.io/hostname` |
+| `prometheus.prometheusSpec.affinity` | Assign custom affinity rules to the prometheus instance https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ | `{}` |
| `prometheus.prometheusSpec.tolerations` | If specified, the pod's tolerations. | `[]` |
| `prometheus.prometheusSpec.remoteWrite` | If specified, the remote_write spec. This is an experimental feature, it may change in any upcoming release in a breaking way. | `[]` |
| `prometheus.prometheusSpec.remoteRead` | If specified, the remote_read spec. This is an experimental feature, it may change in any upcoming release in a breaking way. | `[]` |
| `prometheus.prometheusSpec.securityContext` | SecurityContext holds pod-level security attributes and common container settings. This defaults to non root user with uid 1000 and gid 2000 in order to support migration from operator version <0.26. | `{"runAsNonRoot": true, "runAsUser": 1000, "fsGroup": 2000}` |
| `prometheus.prometheusSpec.listenLocal` | ListenLocal makes the Prometheus server listen on loopback, so that it does not bind against the Pod IP. | `false` |
+| `prometheus.prometheusSpec.enableAdminAPI` | EnableAdminAPI enables the Prometheus administrative HTTP API, which includes functionality such as deleting time series. | `false` |
| `prometheus.prometheusSpec.containers` | Containers allows injecting additional containers. This is meant to allow adding an authentication proxy to a Prometheus pod. |`[]`|
| `prometheus.prometheusSpec.additionalScrapeConfigs` | AdditionalScrapeConfigs allows specifying additional Prometheus scrape configurations. Scrape configurations are appended to the configurations generated by the Prometheus Operator. Job configurations must have the form as specified in the official Prometheus documentation: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#. As scrape configs are appended, the user is responsible for ensuring they are valid. Note that using this feature may break upgrades of Prometheus; it is advised to review the Prometheus release notes to ensure that no incompatible scrape configs will break Prometheus after the upgrade. | `{}` |
| `prometheus.prometheusSpec.additionalScrapeConfigsExternal` | Enable additional scrape configs that are managed externally to this chart. Note that the prometheus will fail to provision if the correct secret does not exist. | `false` |
@@ -186,6 +248,10 @@ The following tables lists the configurable parameters of the prometheus-operato
| Parameter | Description | Default |
| ----- | ----------- | ------ |
| `alertmanager.enabled` | Deploy alertmanager | `true` |
+| `alertmanager.serviceMonitor.selfMonitor` | Create a `serviceMonitor` to automatically monitor the alertmanager instance | `true` |
+| `alertmanager.serviceMonitor.interval` | Scrape interval. If not set, the Prometheus default scrape interval is used | `nil` |
+| `alertmanager.serviceMonitor.metricRelabelings` | The `metric_relabel_configs` for scraping the alertmanager instance. | `` |
+| `alertmanager.serviceMonitor.relabelings` | The `relabel_configs` for scraping the alertmanager instance. | `` |
| `alertmanager.serviceAccount.create` | Create a `serviceAccount` for alertmanager | `true` |
| `alertmanager.serviceAccount.name` | Name for Alertmanager service account | `""` |
| `alertmanager.podDisruptionBudget.enabled` | If true, create a pod disruption budget for Alertmanager pods. The created resource cannot be modified once created - it must be deleted to perform a change | `true` |
@@ -195,6 +261,7 @@ The following tables lists the configurable parameters of the prometheus-operato
| `alertmanager.ingress.annotations` | Alertmanager Ingress annotations | `{}` |
| `alertmanager.ingress.labels` | Alertmanager Ingress additional labels | `{}` |
| `alertmanager.ingress.hosts` | Alertmanager Ingress hostnames | `[]` |
+| `alertmanager.ingress.paths` | Alertmanager Ingress paths | `[]` |
| `alertmanager.ingress.tls` | Alertmanager Ingress TLS configuration (YAML) | `[]` |
| `alertmanager.service.type` | Alertmanager Service type | `ClusterIP` |
| `alertmanager.service.clusterIP` | Alertmanager service clusterIP IP | `""` |
@@ -204,10 +271,11 @@ The following tables lists the configurable parameters of the prometheus-operato
| `alertmanager.service.externalIPs` | List of IP addresses at which the Alertmanager server service is available | `[]` |
| `alertmanager.service.loadBalancerIP` | Alertmanager Loadbalancer IP | `""` |
| `alertmanager.service.loadBalancerSourceRanges` | Alertmanager Load Balancer Source Ranges | `[]` |
-| `alertmanager.config` | Provide YAML to configure Alertmanager. See https://prometheus.io/docs/alerting/configuration/#configuration-file. The default provided works to suppress the DeadMansSwitch alert from `defaultRules.create` | `{"global":{"resolve_timeout":"5m"},"route":{"group_by":["job"],"group_wait":"30s","group_interval":"5m","repeat_interval":"12h","receiver":"null","routes":[{"match":{"alertname":"DeadMansSwitch"},"receiver":"null"}]},"receivers":[{"name":"null"}]}` |
+| `alertmanager.config` | Provide YAML to configure Alertmanager. See https://prometheus.io/docs/alerting/configuration/#configuration-file. The default provided works to suppress the Watchdog alert from `defaultRules.create` | `{"global":{"resolve_timeout":"5m"},"route":{"group_by":["job"],"group_wait":"30s","group_interval":"5m","repeat_interval":"12h","receiver":"null","routes":[{"match":{"alertname":"Watchdog"},"receiver":"null"}]},"receivers":[{"name":"null"}]}` |
| `alertmanager.alertmanagerSpec.podMetadata` | Standard object’s metadata. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md#metadata Metadata Labels and Annotations gets propagated to the prometheus pods. | `{}` |
-| `alertmanager.alertmanagerSpec.image.tag` | Tag of Alertmanager container image to be deployed. | `v0.15.3` |
+| `alertmanager.alertmanagerSpec.image.tag` | Tag of Alertmanager container image to be deployed. | `v0.17.0` |
| `alertmanager.alertmanagerSpec.image.repository` | Base image that is used to deploy pods, without tag. | `quay.io/prometheus/alertmanager` |
+| `alertmanager.alertmanagerSpec.useExistingSecret` | Use an existing secret for configuration (all defined config from values.yaml will be ignored) | `false` |
| `alertmanager.alertmanagerSpec.secrets` | Secrets is a list of Secrets in the same namespace as the Alertmanager object, which shall be mounted into the Alertmanager Pods. The Secrets are mounted into /etc/alertmanager/secrets/. | `[]` |
| `alertmanager.alertmanagerSpec.configMaps` | ConfigMaps is a list of ConfigMaps in the same namespace as the Alertmanager object, which shall be mounted into the Alertmanager Pods. The ConfigMaps are mounted into /etc/alertmanager/configmaps/ | `[]` |
| `alertmanager.alertmanagerSpec.logLevel` | Log level for Alertmanager to be configured with. | `info` |
@@ -220,7 +288,8 @@ The following tables lists the configurable parameters of the prometheus-operato
| `alertmanager.alertmanagerSpec.nodeSelector` | Define which Nodes the Pods are scheduled on. | `{}` |
| `alertmanager.alertmanagerSpec.resources` | Define resources requests and limits for single Pods. | `{}` |
| `alertmanager.alertmanagerSpec.podAntiAffinity` | Pod anti-affinity can prevent the scheduler from placing Prometheus replicas on the same node. The default value "soft" means that the scheduler should *prefer* to not schedule two replica pods onto the same node but no guarantee is provided. The value "hard" means that the scheduler is *required* to not schedule two replica pods onto the same node. The value "" will disable pod anti-affinity so that no anti-affinity rules will be configured. | `""` |
-|`prometheus.prometheusSpec.podAntiAffinityTopologyKey` | If anti-affinity is enabled sets the topologyKey to use for anti-affinity. This can be changed to, for example `failure-domain.beta.kubernetes.io/zone`| `kubernetes.io/hostname` |
+| `alertmanager.alertmanagerSpec.podAntiAffinityTopologyKey` | If anti-affinity is enabled sets the topologyKey to use for anti-affinity. This can be changed to, for example `failure-domain.beta.kubernetes.io/zone`| `kubernetes.io/hostname` |
+| `alertmanager.alertmanagerSpec.affinity` | Assign custom affinity rules to the alertmanager instance https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ | `{}` |
| `alertmanager.alertmanagerSpec.tolerations` | If specified, the pod's tolerations. | `[]` |
| `alertmanager.alertmanagerSpec.securityContext` | SecurityContext holds pod-level security attributes and common container settings. This defaults to non root user with uid 1000 and gid 2000 in order to support migration from operator version < 0.26 | `{"runAsNonRoot": true, "runAsUser": 1000, "fsGroup": 2000}` |
| `alertmanager.alertmanagerSpec.listenLocal` | ListenLocal makes the Alertmanager server listen on loopback, so that it does not bind against the Pod IP. Note this is only for the Alertmanager UI, not the gossip communication. | `false` |
@@ -232,6 +301,10 @@ The following tables lists the configurable parameters of the prometheus-operato
| Parameter | Description | Default |
| ----- | ----------- | ------ |
| `grafana.enabled` | If true, deploy the grafana sub-chart | `true` |
+| `grafana.serviceMonitor.selfMonitor` | Create a `serviceMonitor` to automatically monitor the grafana instance | `true` |
+| `grafana.serviceMonitor.metricRelabelings` | The `metric_relabel_configs` for scraping the grafana instance. | `` |
+| `grafana.serviceMonitor.relabelings` | The `relabel_configs` for scraping the grafana instance. | `` |
+| `grafana.additionalDataSources` | Configure additional grafana datasources | `[]` |
| `grafana.adminPassword` | Admin password to log into the grafana UI | "prom-operator" |
| `grafana.defaultDashboardsEnabled` | Deploy default dashboards. These are loaded using the sidecar | `true` |
| `grafana.ingress.enabled` | Enables Ingress for Grafana | `false` |
@@ -242,6 +315,7 @@ The following tables lists the configurable parameters of the prometheus-operato
| `grafana.sidecar.dashboards.enabled` | Enable the Grafana sidecar to automatically load dashboards with a label `{{ grafana.sidecar.dashboards.label }}=1` | `true` |
| `grafana.sidecar.dashboards.label` | If the sidecar is enabled, configmaps with this label will be loaded into Grafana as dashboards | `grafana_dashboard` |
| `grafana.sidecar.datasources.enabled` | Enable the Grafana sidecar to automatically load dashboards with a label `{{ grafana.sidecar.datasources.label }}=1` | `true` |
+| `grafana.sidecar.datasources.defaultDatasourceEnabled` | Enable the default Grafana `Prometheus` datasource | `true` |
| `grafana.sidecar.datasources.label` | If the sidecar is enabled, configmaps with this label will be loaded into Grafana as datasources configurations | `grafana_datasource` |
| `grafana.rbac.pspUseAppArmor` | Enforce AppArmor in created PodSecurityPolicy (requires rbac.pspEnabled) | `true` |
| `grafana.extraConfigmapMounts` | Additional grafana server configMap volume mounts | `[]` |
@@ -250,45 +324,75 @@ The following tables lists the configurable parameters of the prometheus-operato
| Parameter | Description | Default |
| ----- | ----------- | ------ |
| `kubeApiServer.enabled` | Deploy `serviceMonitor` to scrape the Kubernetes API server | `true` |
+| `kubeApiServer.relabelings` | Relabelings for the API server ServiceMonitor | `[]` |
| `kubeApiServer.tlsConfig.serverName` | Name of the server to use when validating TLS certificate | `kubernetes` |
| `kubeApiServer.tlsConfig.insecureSkipVerify` | Skip TLS certificate validation when scraping | `false` |
| `kubeApiServer.serviceMonitor.jobLabel` | The name of the label on the target service to use as the job name in prometheus | `component` |
-| `kubeApiServer.serviceMonitor.selector` | The service selector | `{"matchLabels":{"component":"apiserver","provider":"kubernetes"}}`
+| `kubeApiServer.serviceMonitor.selector` | The service selector | `{"matchLabels":{"component":"apiserver","provider":"kubernetes"}}` |
+| `kubeApiServer.serviceMonitor.interval` | Scrape interval. If not set, the Prometheus default scrape interval is used | `nil` |
+| `kubeApiServer.serviceMonitor.relabelings` | The `relabel_configs` for scraping the Kubernetes API server. | `` |
+| `kubeApiServer.serviceMonitor.metricRelabelings` | The `metric_relabel_configs` for scraping the Kubernetes API server. | `` |
| `kubelet.enabled` | Deploy servicemonitor to scrape the kubelet service. See also `prometheusOperator.kubeletService` | `true` |
| `kubelet.namespace` | Namespace where the kubelet is deployed. See also `prometheusOperator.kubeletService.namespace` | `kube-system` |
-| `kubelet.serviceMonitor.https` | Enable scraping of the kubelet over HTTPS. For more information, see https://github.com/coreos/prometheus-operator/issues/926 | `false` |
+| `kubelet.serviceMonitor.https` | Enable scraping of the kubelet over HTTPS. For more information, see https://github.com/coreos/prometheus-operator/issues/926 | `true` |
+| `kubelet.serviceMonitor.cAdvisorMetricRelabelings` | The `metric_relabel_configs` for scraping cAdvisor. | `` |
+| `kubelet.serviceMonitor.cAdvisorRelabelings` | The `relabel_configs` for scraping cAdvisor. | `` |
+| `kubelet.serviceMonitor.interval` | Scrape interval. If not set, the Prometheus default scrape interval is used | `nil` |
| `kubeControllerManager.enabled` | Deploy a `service` and `serviceMonitor` to scrape the Kubernetes controller-manager | `true` |
| `kubeControllerManager.endpoints` | Endpoints where Controller-manager runs. Provide this if running Controller-manager outside the cluster | `[]` |
| `kubeControllermanager.service.port` | Port the Controller-manager service runs on | `10252` |
| `kubeControllermanager.service.targetPort` | TargetPort the Controller-manager service runs on | `10252` |
-| `kubeControllermanager.service.targetPort.selector` | Controller-manager service selector | `{"k8s-app" : "kube-controller-manager" }`
+| `kubeControllermanager.service.selector` | Controller-manager service selector | `{"component" : "kube-controller-manager" }` |
+| `kubeControllermanager.serviceMonitor.https` | Controller-manager service scrape over https | `false` |
+| `kubeControllermanager.serviceMonitor.interval` | Scrape interval. If not set, the Prometheus default scrape interval is used | `nil` |
+| `kubeControllermanager.serviceMonitor.metricRelabelings` | The `metric_relabel_configs` for scraping the Kubernetes controller-manager. | `` |
+| `kubeControllermanager.serviceMonitor.relabelings` | The `relabel_configs` for scraping the Kubernetes controller-manager. | `` |
| `coreDns.enabled` | Deploy coreDns scraping components. Use either this or kubeDns | `true` |
| `coreDns.service.port` | CoreDns port | `9153` |
| `coreDns.service.targetPort` | CoreDns targetPort | `9153` |
-| `coreDns.service.selector` | CoreDns service selector | `{"k8s-app" : "coredns" }`
+| `coreDns.service.selector` | CoreDns service selector | `{"k8s-app" : "kube-dns" }` |
+| `coreDns.serviceMonitor.interval` | Scrape interval. If not set, the Prometheus default scrape interval is used | `nil` |
+| `coreDns.serviceMonitor.metricRelabelings` | The `metric_relabel_configs` for scraping CoreDns. | `` |
+| `coreDns.serviceMonitor.relabelings` | The `relabel_configs` for scraping CoreDNS. | `` |
| `kubeDns.enabled` | Deploy kubeDns scraping components. Use either this or coreDns| `false` |
-| `kubeDns.service.selector` | CoreDns service selector | `{"k8s-app" : "kube-dns" }` |
+| `kubeDns.service.selector` | kubeDns service selector | `{"k8s-app" : "kube-dns" }` |
+| `kubeDns.serviceMonitor.interval` | Scrape interval. If not set, the Prometheus default scrape interval is used | `nil` |
+| `kubeDns.serviceMonitor.metricRelabelings` | The `metric_relabel_configs` for scraping kubeDns. | `` |
+| `kubeDns.serviceMonitor.relabelings` | The `relabel_configs` for scraping kubeDns. | `` |
| `kubeEtcd.enabled` | Deploy components to scrape etcd | `true` |
| `kubeEtcd.endpoints` | Endpoints where etcd runs. Provide this if running etcd outside the cluster | `[]` |
| `kubeEtcd.service.port` | Etcd port | `4001` |
| `kubeEtcd.service.targetPort` | Etcd targetPort | `4001` |
-| `kubeEtcd.service.selector` | Selector for etcd if running inside the cluster | `{"k8s-app":"etcd-server"}` |
+| `kubeEtcd.service.selector` | Selector for etcd if running inside the cluster | `{"component":"etcd"}` |
| `kubeEtcd.serviceMonitor.scheme` | Etcd servicemonitor scheme | `http` |
| `kubeEtcd.serviceMonitor.insecureSkipVerify` | Skip validating etcd TLS certificate when scraping | `false` |
| `kubeEtcd.serviceMonitor.serverName` | Etcd server name to validate certificate against when scraping | `""` |
| `kubeEtcd.serviceMonitor.caFile` | Certificate authority file to use when connecting to etcd. See `prometheus.prometheusSpec.secrets` | `""` |
+| `kubeEtcd.serviceMonitor.metricRelabelings` | The `metric_relabel_configs` for scraping Etcd. | `` |
+| `kubeEtcd.serviceMonitor.relabelings` | The `relabel_configs` for scraping Etcd. | `` |
| `kubeEtcd.serviceMonitor.certFile` | Client certificate file to use when connecting to etcd. See `prometheus.prometheusSpec.secrets` | `""` |
| `kubeEtcd.serviceMonitor.keyFile` | Client key file to use when connecting to etcd. See `prometheus.prometheusSpec.secrets` | `""` |
+| `kubeEtcd.serviceMonitor.interval` | Scrape interval. If not set, the Prometheus default scrape interval is used | `nil` |
| `kubeScheduler.enabled` | Deploy a `service` and `serviceMonitor` to scrape the Kubernetes scheduler | `true` |
| `kubeScheduler.endpoints` | Endpoints where scheduler runs. Provide this if running scheduler outside the cluster | `[]` |
| `kubeScheduler.service.port` | Port the Scheduler service runs on | `10251` |
| `kubeScheduler.service.targetPort` | TargetPort the Scheduler service runs on | `10251` |
-| `kubeScheduler.service.selector` | Scheduler service selector | `{"k8s-app" : "kube-scheduler" }`
+| `kubeScheduler.service.selector` | Scheduler service selector | `{"component" : "kube-scheduler" }` |
+| `kubeScheduler.serviceMonitor.https` | Scheduler service scrape over https | `false` |
+| `kubeScheduler.serviceMonitor.interval` | Scrape interval. If not set, the Prometheus default scrape interval is used | `nil` |
+| `kubeScheduler.serviceMonitor.metricRelabelings` | The `metric_relabel_configs` for scraping the Kubernetes scheduler. | `` |
+| `kubeScheduler.serviceMonitor.relabelings` | The `relabel_configs` for scraping the Kubernetes scheduler. | `` |
| `kubeStateMetrics.enabled` | Deploy the `kube-state-metrics` chart and configure a servicemonitor to scrape | `true` |
+| `kubeStateMetrics.serviceMonitor.interval` | Scrape interval. If not set, the Prometheus default scrape interval is used | `nil` |
+| `kubeStateMetrics.serviceMonitor.metricRelabelings` | Metric relabelings for the `kube-state-metrics` ServiceMonitor | `[]` |
+| `kubeStateMetrics.serviceMonitor.relabelings` | The `relabel_configs` for scraping `kube-state-metrics`. | `` |
| `kube-state-metrics.rbac.create` | Create RBAC components in kube-state-metrics. See `global.rbac.create` | `true` |
| `kube-state-metrics.podSecurityPolicy.enabled` | Create pod security policy resource for kube-state-metrics. | `true` |
| `nodeExporter.enabled` | Deploy the `prometheus-node-exporter` and scrape it | `true` |
| `nodeExporter.jobLabel` | The name of the label on the target service to use as the job name in prometheus. See `prometheus-node-exporter.podLabels.jobLabel=node-exporter` default | `jobLabel` |
+| `nodeExporter.serviceMonitor.metricRelabelings` | Metric relabelings for the `prometheus-node-exporter` ServiceMonitor | `[]` |
+| `nodeExporter.serviceMonitor.interval` | Scrape interval. If not set, the Prometheus default scrape interval is used | `nil` |
+| `nodeExporter.serviceMonitor.relabelings` | The `relabel_configs` for scraping the `prometheus-node-exporter`. | `` |
| `prometheus-node-exporter.podLabels` | Additional labels for pods in the DaemonSet | `{"jobLabel":"node-exporter"}` |
| `prometheus-node-exporter.extraArgs` | Additional arguments for the node exporter container | `["--collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+)($|/)", "--collector.filesystem.ignored-fs-types=^(autofs|binfmt_misc|cgroup|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|mqueue|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|sysfs|tracefs)$"]` |
@@ -310,7 +414,7 @@ $ helm install --name my-release stable/prometheus-operator -f values1.yaml,valu
## Developing Prometheus Rules and Grafana Dashboards
-This chart Grafana Dashboards and Prometheus Rules are just a copy from coreos/prometheus-operator and other sources, synced (with alterations) by scripts in [hack](hack) folder. In order to introduce any changes you need to first [add them to original repo](https://github.com/coreos/prometheus-operator/blob/master/contrib/kube-prometheus/docs/developing-prometheus-rules-and-grafana-dashboards.md) and then sync there by scripts.
+The Grafana Dashboards and Prometheus Rules in this chart are just a copy from coreos/prometheus-operator and other sources, synced (with alterations) by scripts in the [hack](hack) folder. In order to introduce any changes you need to first [add them to the original repo](https://github.com/coreos/kube-prometheus/blob/master/docs/developing-prometheus-rules-and-grafana-dashboards.md) and then sync them here using those scripts.
## Further Information
@@ -319,7 +423,79 @@ For more in-depth documentation of configuration options meanings, please see
- [Prometheus](https://prometheus.io/docs/introduction/overview/)
- [Grafana](https://github.com/helm/charts/tree/master/stable/grafana#grafana-helm-chart)
-## Helm <2.10 workaround
-The `crd-install` hook is required to deploy the prometheus operator CRDs before they are used. If you are forced to use an earlier version of Helm you can work around this requirement as follows:
-1. Install prometheus-operator by itself, disabling everything but the prometheus-operator component, and also setting `prometheusOperator.serviceMonitor.selfMonitor=false`
-2. Install all the other components, and configure `prometheus.additionalServiceMonitors` to scrape the prometheus-operator service.
+# Migrating from coreos/prometheus-operator chart
+
+The multiple charts have been combined into a single chart that installs the Prometheus Operator, Prometheus, Alertmanager, and Grafana, as well as the multitude of exporters necessary to monitor a cluster.
+
+There is no simple and direct migration path between the charts as the changes are extensive and intended to make the chart easier to support.
+
+The capabilities of the old chart are all available in the new chart, including the ability to run multiple prometheus instances on a single cluster - you will need to disable the parts of the chart you do not wish to deploy.
+
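+As a minimal sketch, a values file that switches off individual components could look like the following (the keys come from the configuration tables above; adjust to the components you actually want to deploy):
+
+```
+# Keep only the operator and Prometheus itself
+grafana:
+  enabled: false
+alertmanager:
+  enabled: false
+nodeExporter:
+  enabled: false
+kubeStateMetrics:
+  enabled: false
+```
+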
+You can check out the tickets for this change [here](https://github.com/coreos/prometheus-operator/issues/592) and [here](https://github.com/helm/charts/pull/6765).
+
+## High-level overview of Changes
+The chart has three dependencies, which can be seen in the chart's requirements file:
+https://github.com/helm/charts/blob/master/stable/prometheus-operator/requirements.yaml
+
+### Node-Exporter, Kube-State-Metrics
+These components are loaded as dependencies into the chart. The source for both charts is found in the same repository. They are relatively simple components.
+
+### Grafana
+The Grafana chart is more feature-rich than this chart - it contains a sidecar that is able to load data sources and dashboards from configmaps deployed into the same cluster. For more information check out the [documentation for the chart](https://github.com/helm/charts/tree/master/stable/grafana)
+
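+As a sketch, a dashboard can be loaded by creating a ConfigMap carrying the sidecar's label (`grafana_dashboard` by default; the ConfigMap name and dashboard JSON below are placeholders):
+
+```
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: my-dashboard  # placeholder name
+  labels:
+    grafana_dashboard: "1"  # label the dashboards sidecar watches for
+data:
+  my-dashboard.json: |
+    {"title": "My Dashboard", "panels": []}
+```
+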
+### Coreos CRDs
+The CRDs are provisioned using crd-install hooks, rather than relying on a separate chart installation. If you already have these CRDs provisioned and don't want to remove them, you can disable the CRD creation by these hooks by passing `prometheusOperator.createCustomResource=false`
+
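+For example, installing into a cluster that already has the CRDs might look like this sketch (the release name is a placeholder):
+
+```
+helm install --name my-release stable/prometheus-operator \
+  --set prometheusOperator.createCustomResource=false
+```
+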
+### Kubelet Service
+Because the kubelet service has a new name in the chart, make sure to clean up the old kubelet service in the `kube-system` namespace to prevent counting container metrics twice.
+
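+A sketch of that cleanup (the exact service name depends on your previous release, so verify it first):
+
+```
+kubectl --namespace kube-system get services
+# delete the old kubelet service created by the previous chart, e.g.:
+kubectl --namespace kube-system delete service <old-kubelet-service-name>
+```
+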
+### Persistent Volumes
+If you would like to keep the data of the current persistent volumes, it should be possible to attach existing volumes to new PVCs and PVs that are created using the conventions in the new chart. For example, in order to use an existing Azure disk for a helm release called `prometheus-migration` the following resources can be created:
+```
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+ name: pvc-prometheus-migration-prometheus-0
+spec:
+ accessModes:
+ - ReadWriteOnce
+ azureDisk:
+ cachingMode: None
+ diskName: pvc-prometheus-migration-prometheus-0
+ diskURI: /subscriptions/f5125d82-2622-4c50-8d25-3f7ba3e9ac4b/resourceGroups/sample-migration-resource-group/providers/Microsoft.Compute/disks/pvc-prometheus-migration-prometheus-0
+ fsType: ""
+ kind: Managed
+ readOnly: false
+ capacity:
+ storage: 1Gi
+ persistentVolumeReclaimPolicy: Delete
+ storageClassName: prometheus
+ volumeMode: Filesystem
+```
+```
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ labels:
+ app: prometheus
+ prometheus: prometheus-migration-prometheus
+ name: prometheus-prometheus-migration-prometheus-db-prometheus-prometheus-migration-prometheus-0
+ namespace: monitoring
+spec:
+ accessModes:
+ - ReadWriteOnce
+ dataSource: null
+ resources:
+ requests:
+ storage: 1Gi
+ storageClassName: prometheus
+ volumeMode: Filesystem
+ volumeName: pvc-prometheus-migration-prometheus-0
+status:
+ accessModes:
+ - ReadWriteOnce
+ capacity:
+ storage: 1Gi
+```
+
+The PVC will take ownership of the PV, and when you create a release using a persistent volume claim template, it will use the existing PVCs, as they match the naming convention used by the chart. Similar approaches can be used for other cloud providers.
diff --git a/stable/prometheus-operator/ci/test-values.yaml b/stable/prometheus-operator/ci/test-values.yaml
index d0f9409a84b6..42af2451d267 100644
--- a/stable/prometheus-operator/ci/test-values.yaml
+++ b/stable/prometheus-operator/ci/test-values.yaml
@@ -42,6 +42,16 @@ defaultRules:
## Annotations for default rules
annotations: {}
+## Provide custom recording or alerting rules to be deployed into the cluster.
+##
+additionalPrometheusRules: []
+# - name: my-rule-file
+# groups:
+# - name: my_group
+# rules:
+# - record: my_record
+# expr: 100 * my_record
+
##
global:
rbac:
@@ -95,7 +105,7 @@ alertmanager:
receiver: 'null'
routes:
- match:
- alertname: DeadMansSwitch
+ alertname: Watchdog
receiver: 'null'
receivers:
- name: 'null'
@@ -134,6 +144,11 @@ alertmanager:
hosts: []
# - alertmanager.domain.com
+ ## Paths to use for ingress rules - one path should match the alertmanagerSpec.routePrefix
+ ##
+ paths: []
+ # - /
+
## TLS configuration for Alertmanager Ingress
## Secret must be manually created in the namespace
##
@@ -166,8 +181,28 @@ alertmanager:
## If true, create a serviceMonitor for alertmanager
##
serviceMonitor:
+ ## Scrape interval. If not set, the Prometheus default scrape interval is used.
+ ##
+ interval: ""
selfMonitor: true
+ ## metric relabel configs to apply to samples before ingestion.
+ ##
+ metricRelabelings: []
+ # - action: keep
+ # regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'
+ # sourceLabels: [__name__]
+
+ ## relabel configs to apply to samples before ingestion.
+ ##
+ relabelings: []
+ # - sourceLabels: [__meta_kubernetes_pod_node_name]
+ # separator: ;
+ # regex: ^(.*)$
+ # target_label: nodename
+ # replacement: $1
+ # action: replace
+
## Settings affecting alertmanagerSpec
## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#alertmanagerspec
##
@@ -181,7 +216,12 @@ alertmanager:
##
image:
repository: quay.io/prometheus/alertmanager
- tag: v0.15.3
+ tag: v0.17.0
+
+ ## If true, the user is responsible for providing a secret with the Alertmanager configuration.
+ ## When true, the config section (including templateFiles) is ignored and the configuration from the secret is used instead.
+ ##
+ useExistingSecret: false
## Secrets is a list of Secrets in the same namespace as the Alertmanager object, which shall be mounted into the
## Alertmanager Pods. The Secrets are mounted into /etc/alertmanager/secrets/.
@@ -257,6 +297,20 @@ alertmanager:
##
podAntiAffinityTopologyKey: kubernetes.io/hostname
+ ## Assign custom affinity rules to the alertmanager instance
+ ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
+ ##
+ affinity: {}
+ # nodeAffinity:
+ # requiredDuringSchedulingIgnoredDuringExecution:
+ # nodeSelectorTerms:
+ # - matchExpressions:
+ # - key: kubernetes.io/e2e-az-name
+ # operator: In
+ # values:
+ # - e2e-az1
+ # - e2e-az2
+
## If specified, the pod's tolerations.
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
@@ -304,11 +358,11 @@ grafana:
adminPassword: prom-operator
ingress:
- ## If true, Prometheus Ingress will be created
+ ## If true, Grafana Ingress will be created
##
enabled: false
- ## Annotations for Prometheus Ingress
+ ## Annotations for Grafana Ingress
##
annotations: {}
# kubernetes.io/ingress.class: nginx
@@ -322,16 +376,19 @@ grafana:
## Must be provided if Ingress is enable.
##
# hosts:
- # - prometheus.domain.com
+ # - grafana.domain.com
hosts: []
- ## TLS configuration for prometheus Ingress
+ ## Path for grafana ingress
+ path: /
+
+ ## TLS configuration for grafana Ingress
## Secret must be manually created in the namespace
##
tls: []
- # - secretName: prometheus-general-tls
+ # - secretName: grafana-general-tls
# hosts:
- # - prometheus.example.com
+ # - grafana.example.com
sidecar:
dashboards:
@@ -339,6 +396,7 @@ grafana:
label: grafana_dashboard
datasources:
enabled: true
+ defaultDatasourceEnabled: true
label: grafana_datasource
extraConfigmapMounts: []
@@ -347,6 +405,46 @@ grafana:
# configMap: certs-configmap
# readOnly: true
+ ## Configure additional grafana datasources
+ ## ref: http://docs.grafana.org/administration/provisioning/#datasources
+ additionalDataSources: []
+ # - name: prometheus-sample
+ # access: proxy
+ # basicAuth: true
+ # basicAuthPassword: pass
+ # basicAuthUser: daco
+ # editable: false
+ # jsonData:
+ # tlsSkipVerify: true
+ # orgId: 1
+ # type: prometheus
+ # url: https://prometheus.svc:9090
+ # version: 1
+
+ ## If true, create a serviceMonitor for grafana
+ ##
+ serviceMonitor:
+ ## Scrape interval. If not set, the Prometheus default scrape interval is used.
+ ##
+ interval: ""
+ selfMonitor: true
+
+ ## metric relabel configs to apply to samples before ingestion.
+ ##
+ metricRelabelings: []
+ # - action: keep
+ # regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'
+ # sourceLabels: [__name__]
+
+ ## relabel configs to apply to samples before ingestion.
+ ##
+ relabelings: []
+ # - sourceLabels: [__meta_kubernetes_pod_node_name]
+ # separator: ;
+ # regex: ^(.*)$
+ # target_label: nodename
+ # replacement: $1
+ # action: replace
## Component scraping the kube api server
##
@@ -356,13 +454,35 @@ kubeApiServer:
serverName: kubernetes
insecureSkipVerify: false
+ ## If your API endpoint address is not reachable (as in AKS), you can replace it with the kubernetes service
+ ##
+ relabelings: []
+ # - sourceLabels:
+ # - __meta_kubernetes_namespace
+ # - __meta_kubernetes_service_name
+ # - __meta_kubernetes_endpoint_port_name
+ # action: keep
+ # regex: default;kubernetes;https
+ # - targetLabel: __address__
+ # replacement: kubernetes.default.svc:443
+
serviceMonitor:
+ ## Scrape interval. If not set, the Prometheus default scrape interval is used.
+ ##
+ interval: ""
jobLabel: component
selector:
matchLabels:
component: apiserver
provider: kubernetes
+ ## metric relabel configs to apply to samples before ingestion.
+ ##
+ metricRelabelings: []
+ # - action: keep
+ # regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'
+ # sourceLabels: [__name__]
+
## Component scraping the kubelet and kubelet-hosted cAdvisor
##
kubelet:
@@ -370,10 +490,38 @@ kubelet:
namespace: kube-system
serviceMonitor:
+ ## Scrape interval. If not set, the Prometheus default scrape interval is used.
+ ##
+ interval: ""
+
## Enable scraping the kubelet over https. For requirements to enable this see
## https://github.com/coreos/prometheus-operator/issues/926
##
- https: false
+ https: true
+
+ ## Metric relabelings to apply to samples before ingestion
+ ##
+ cAdvisorMetricRelabelings: []
+ # - sourceLabels: [__name__, image]
+ # separator: ;
+ # regex: container_([a-z_]+);
+ # replacement: $1
+ # action: drop
+ # - sourceLabels: [__name__]
+ # separator: ;
+ # regex: container_(network_tcp_usage_total|network_udp_usage_total|tasks_state|cpu_load_average_10s)
+ # replacement: $1
+ # action: drop
+
+ ## relabel configs to apply to samples before ingestion.
+ ##
+ cAdvisorRelabelings: []
+ # - sourceLabels: [__meta_kubernetes_pod_node_name]
+ # separator: ;
+ # regex: ^(.*)$
+ # target_label: nodename
+ # replacement: $1
+ # action: replace
## Component scraping the kube controller manager
##
@@ -393,7 +541,35 @@ kubeControllerManager:
port: 10252
targetPort: 10252
selector:
- k8s-app: kube-controller-manager
+ component: kube-controller-manager
+
+ serviceMonitor:
+ ## Scrape interval. If not set, the Prometheus default scrape interval is used.
+ ##
+ interval: ""
+
+ ## Enable scraping kube-controller-manager over https.
+ ## Requires proper certs (not self-signed) and delegated authentication/authorization checks
+ ##
+ https: false
+
+ ## metric relabel configs to apply to samples before ingestion.
+ ##
+ metricRelabelings: []
+ # - action: keep
+ # regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'
+ # sourceLabels: [__name__]
+
+ ## relabel configs to apply to samples before ingestion.
+ ##
+ relabelings: []
+ # - sourceLabels: [__meta_kubernetes_pod_node_name]
+ # separator: ;
+ # regex: ^(.*)$
+ # target_label: nodename
+ # replacement: $1
+ # action: replace
+
## Component scraping coreDns. Use either this or kubeDns
##
coreDns:
@@ -402,7 +578,28 @@ coreDns:
port: 9153
targetPort: 9153
selector:
- k8s-app: coredns
+ k8s-app: kube-dns
+ serviceMonitor:
+ ## Scrape interval. If not set, the Prometheus default scrape interval is used.
+ ##
+ interval: ""
+
+ ## metric relabel configs to apply to samples before ingestion.
+ ##
+ metricRelabelings: []
+ # - action: keep
+ # regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'
+ # sourceLabels: [__name__]
+
+ ## relabel configs to apply to samples before ingestion.
+ ##
+ relabelings: []
+ # - sourceLabels: [__meta_kubernetes_pod_node_name]
+ # separator: ;
+ # regex: ^(.*)$
+ # target_label: nodename
+ # replacement: $1
+ # action: replace
## Component scraping kubeDns. Use either this or coreDns
##
@@ -411,6 +608,28 @@ kubeDns:
service:
selector:
k8s-app: kube-dns
+ serviceMonitor:
+ ## Scrape interval. If not set, the Prometheus default scrape interval is used.
+ ##
+ interval: ""
+
+ ## metric relabel configs to apply to samples before ingestion.
+ ##
+ metricRelabelings: []
+ # - action: keep
+ # regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'
+ # sourceLabels: [__name__]
+
+ ## relabel configs to apply to samples before ingestion.
+ ##
+ relabelings: []
+ # - sourceLabels: [__meta_kubernetes_pod_node_name]
+ # separator: ;
+ # regex: ^(.*)$
+ # target_label: nodename
+ # replacement: $1
+ # action: replace
+
## Component scraping etcd
##
kubeEtcd:
@@ -426,10 +645,10 @@ kubeEtcd:
## Etcd service. If using kubeEtcd.endpoints only the port and targetPort are used
##
service:
- port: 4001
- targetPort: 4001
+ port: 2379
+ targetPort: 2379
selector:
- k8s-app: etcd-server
+ component: etcd
## Configure secure access to the etcd cluster by loading a secret into prometheus and
## specifying security configuration below. For example, with a secret named etcd-client-cert
@@ -443,6 +662,9 @@ kubeEtcd:
## keyFile: /etc/prometheus/secrets/etcd-client-cert/etcd-client-key
##
serviceMonitor:
+ ## Scrape interval. If not set, the Prometheus default scrape interval is used.
+ ##
+ interval: ""
scheme: http
insecureSkipVerify: false
serverName: ""
@@ -450,6 +672,23 @@ kubeEtcd:
certFile: ""
keyFile: ""
+ ## metric relabel configs to apply to samples before ingestion.
+ ##
+ metricRelabelings: []
+ # - action: keep
+ # regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'
+ # sourceLabels: [__name__]
+
+ ## relabel configs to apply to samples before ingestion.
+ ##
+ relabelings: []
+ # - sourceLabels: [__meta_kubernetes_pod_node_name]
+ # separator: ;
+ # regex: ^(.*)$
+ # target_label: nodename
+ # replacement: $1
+ # action: replace
+
## Component scraping kube scheduler
##
@@ -469,12 +708,59 @@ kubeScheduler:
port: 10251
targetPort: 10251
selector:
- k8s-app: kube-scheduler
+ component: kube-scheduler
+
+ serviceMonitor:
+ ## Scrape interval. If not set, the Prometheus default scrape interval is used.
+ ##
+ interval: ""
+ ## Enable scraping kube-scheduler over https.
+ ## Requires proper certs (not self-signed) and delegated authentication/authorization checks
+ ##
+ https: false
+
+ ## metric relabel configs to apply to samples before ingestion.
+ ##
+ metricRelabelings: []
+ # - action: keep
+ # regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'
+ # sourceLabels: [__name__]
+
+ ## relabel configs to apply to samples before ingestion.
+ ##
+ relabelings: []
+ # - sourceLabels: [__meta_kubernetes_pod_node_name]
+ # separator: ;
+ # regex: ^(.*)$
+ # target_label: nodename
+ # replacement: $1
+ # action: replace
## Component scraping kube state metrics
##
kubeStateMetrics:
enabled: true
+ serviceMonitor:
+ ## Scrape interval. If not set, the Prometheus default scrape interval is used.
+ ##
+ interval: ""
+
+ ## metric relabel configs to apply to samples before ingestion.
+ ##
+ metricRelabelings: []
+ # - action: keep
+ # regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'
+ # sourceLabels: [__name__]
+
+ ## relabel configs to apply to samples before ingestion.
+ ##
+ relabelings: []
+ # - sourceLabels: [__meta_kubernetes_pod_node_name]
+ # separator: ;
+ # regex: ^(.*)$
+ # target_label: nodename
+ # replacement: $1
+ # action: replace
## Configuration for kube-state-metrics subchart
##
@@ -493,6 +779,30 @@ nodeExporter:
##
jobLabel: jobLabel
+ serviceMonitor:
+ ## Scrape interval. If not set, the Prometheus default scrape interval is used.
+ ##
+ interval: ""
+
+ ## metric relabel configs to apply to samples before ingestion.
+ ##
+ metricRelabelings: []
+ # - sourceLabels: [__name__]
+ # separator: ;
+ # regex: ^node_mountstats_nfs_(event|operations|transport)_.+
+ # replacement: $1
+ # action: drop
+
+ ## relabel configs to apply to samples before ingestion.
+ ##
+ relabelings: []
+ # - sourceLabels: [__meta_kubernetes_pod_node_name]
+ # separator: ;
+ # regex: ^(.*)$
+ # target_label: nodename
+ # replacement: $1
+ # action: replace
+
## Configuration for prometheus-node-exporter subchart
##
prometheus-node-exporter:
@@ -526,8 +836,15 @@ prometheusOperator:
## Port to expose on each node
## Only used if service.type is 'NodePort'
##
- nodePort: 38080
+ nodePort: 30080
+ ## Additional ports to open for Prometheus service
+ ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#multi-port-services
+ ##
+ additionalPorts: []
+ # - name: thanos-cluster
+ # port: 10900
+ # nodePort: 30111
## Loadbalancer IP
## Only use if service.type is "loadbalancer"
@@ -560,9 +877,20 @@ prometheusOperator:
##
podLabels: {}
+ ## Annotations to add to the operator pod
+ ##
+ podAnnotations: {}
+
## Assign a PriorityClassName to pods if set
# priorityClassName: ""
+ ## Define Log Format
+ # Use logfmt (default) or json-formatted logging
+ # logFormat: logfmt
+
+ ## Decrease log verbosity to errors only
+ # logLevel: error
+
## If true, the operator will create and maintain a service for scraping kubelets
## ref: https://github.com/coreos/prometheus-operator/blob/master/helm/prometheus-operator/README.md
##
@@ -573,8 +901,28 @@ prometheusOperator:
## Create a servicemonitor for the operator
##
serviceMonitor:
+ ## Scrape interval. If not set, the Prometheus default scrape interval is used.
+ ##
+ interval: ""
selfMonitor: true
+ ## metric relabel configs to apply to samples before ingestion.
+ ##
+ metricRelabelings: []
+ # - action: keep
+ # regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'
+ # sourceLabels: [__name__]
+
+ ## relabel configs to apply to samples before ingestion.
+ ##
+ relabelings: []
+ # - sourceLabels: [__meta_kubernetes_pod_node_name]
+ # separator: ;
+ # regex: ^(.*)$
+ # target_label: nodename
+ # replacement: $1
+ # action: replace
+
## Resource limits & requests
##
resources: {}
@@ -599,18 +947,19 @@ prometheusOperator:
# value: "value"
# effect: "NoSchedule"
- ## Assign the prometheus operator to run on specific nodes
+ ## Assign custom affinity rules to the prometheus operator
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
##
affinity: {}
- # requiredDuringSchedulingIgnoredDuringExecution:
- # nodeSelectorTerms:
- # - matchExpressions:
- # - key: kubernetes.io/e2e-az-name
- # operator: In
- # values:
- # - e2e-az1
- # - e2e-az2
+ # nodeAffinity:
+ # requiredDuringSchedulingIgnoredDuringExecution:
+ # nodeSelectorTerms:
+ # - matchExpressions:
+ # - key: kubernetes.io/e2e-az-name
+ # operator: In
+ # values:
+ # - e2e-az1
+ # - e2e-az2
securityContext:
runAsNonRoot: true
@@ -620,7 +969,7 @@ prometheusOperator:
##
image:
repository: quay.io/coreos/prometheus-operator
- tag: v0.27.0
+ tag: v0.29.0
pullPolicy: IfNotPresent
## Configmap-reload image to use for reloading configmaps
@@ -633,7 +982,15 @@ prometheusOperator:
##
prometheusConfigReloaderImage:
repository: quay.io/coreos/prometheus-config-reloader
- tag: v0.27.0
+ tag: v0.29.0
+
+ ## Set the prometheus config reloader side-car CPU limit. If unset, uses the prometheus-operator project default
+ ##
+ # configReloaderCpu: 100m
+
+ ## Set the prometheus config reloader side-car memory limit. If unset, uses the prometheus-operator project default
+ ##
+ # configReloaderMemory: 25Mi
## Hyperkube image to use when cleaning up
##
@@ -662,6 +1019,10 @@ prometheus:
labels: {}
clusterIP: ""
+
+ ## To be used with a proxy extraContainer port
+ targetPort: 9090
+
## List of IP addresses at which the Prometheus server service is available
## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips
##
@@ -670,7 +1031,7 @@ prometheus:
## Port to expose on each node
## Only used if service.type is 'NodePort'
##
- nodePort: 39090
+ nodePort: 30090
## Loadbalancer IP
## Only use if service.type is "loadbalancer"
@@ -680,6 +1041,8 @@ prometheus:
##
type: ClusterIP
+ sessionAffinity: ""
+
rbac:
## Create role bindings in the specified namespaces, to allow Prometheus monitoring
## a role binding in the release namespace will always be created.
@@ -709,6 +1072,11 @@ prometheus:
# - prometheus.domain.com
hosts: []
+ ## Paths to use for ingress rules - one path should match the prometheusSpec.routePrefix
+ ##
+ paths: []
+ # - /
+
## TLS configuration for Prometheus Ingress
## Secret must be manually created in the namespace
##
@@ -718,8 +1086,28 @@ prometheus:
# - prometheus.example.com
serviceMonitor:
+ ## Scrape interval. If not set, the Prometheus default scrape interval is used.
+ ##
+ interval: ""
selfMonitor: true
+ ## metric relabel configs to apply to samples before ingestion.
+ ##
+ metricRelabelings: []
+ # - action: keep
+ # regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'
+ # sourceLabels: [__name__]
+
+ ## relabel configs to apply to samples before ingestion.
+ ##
+ relabelings: []
+ # - sourceLabels: [__meta_kubernetes_pod_node_name]
+ # separator: ;
+ # regex: ^(.*)$
+ # target_label: nodename
+ # replacement: $1
+ # action: replace
+
## Settings affecting prometheusSpec
## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#prometheusspec
##
@@ -737,11 +1125,17 @@ prometheus:
##
listenLocal: false
+ ## EnableAdminAPI enables the Prometheus administrative HTTP API, which includes functionality such as deleting time series.
+ ## This is disabled by default.
+ ## ref: https://prometheus.io/docs/prometheus/latest/querying/api/#tsdb-admin-apis
+ ##
+ enableAdminAPI: false
+
## Image of Prometheus.
##
image:
repository: quay.io/prometheus/prometheus
- tag: v2.6.1
+ tag: v2.9.1
## Tolerations for use with node taints
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
@@ -788,8 +1182,14 @@ prometheus:
##
configMaps: []
+ ## QuerySpec defines the query command line flags when starting Prometheus.
+ ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#queryspec
+ ##
+ query: {}
+
## Namespaces to be selected for PrometheusRules discovery.
- ## If unspecified, only the same namespace as the Prometheus object is in is used.
+ ## If nil, select own namespace. Namespaces to be selected for PrometheusRules discovery.
+ ## See https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#namespaceselector for usage
##
ruleNamespaceSelector: {}
@@ -799,10 +1199,8 @@ prometheus:
##
ruleSelectorNilUsesHelmValues: true
- ## Rules CRD selector
- ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/design.md
- ## If unspecified the release `app` and `release` will be used as the label selector
- ## to load rules
+ ## PrometheusRules to be selected for target discovery.
+ ## If {}, select all PrometheusRules
##
ruleSelector: {}
## Example which select all prometheusrules resources
@@ -826,17 +1224,17 @@ prometheus:
##
serviceMonitorSelectorNilUsesHelmValues: true
- ## serviceMonitorSelector will limit which servicemonitors are used to create scrape
- ## configs in Prometheus. See serviceMonitorSelectorUseHelmLabels
+ ## ServiceMonitors to be selected for target discovery.
+ ## If {}, select all ServiceMonitors
##
serviceMonitorSelector: {}
-
- # serviceMonitorSelector: {}
+ ## Example which selects ServiceMonitors with label "prometheus" set to "somelabel"
+ # serviceMonitorSelector:
# matchLabels:
# prometheus: somelabel
- ## serviceMonitorNamespaceSelector will limit namespaces from which serviceMonitors are used to create scrape
- ## configs in Prometheus. By default all namespaces will be used
+ ## Namespaces to be selected for ServiceMonitor discovery.
+ ## See https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#namespaceselector for usage
##
serviceMonitorNamespaceSelector: {}
@@ -856,6 +1254,10 @@ prometheus:
##
logLevel: info
+ ## Log format for Prometheus to be configured with
+ ##
+ logFormat: logfmt
+
## Prefix used to register routes, overriding externalUrl route.
## Useful for proxies that rewrite URLs.
##
@@ -880,16 +1282,29 @@ prometheus:
##
podAntiAffinityTopologyKey: kubernetes.io/hostname
+ ## Assign custom affinity rules to the prometheus instance
+ ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
+ ##
+ affinity: {}
+ # nodeAffinity:
+ # requiredDuringSchedulingIgnoredDuringExecution:
+ # nodeSelectorTerms:
+ # - matchExpressions:
+ # - key: kubernetes.io/e2e-az-name
+ # operator: In
+ # values:
+ # - e2e-az1
+ # - e2e-az2
+
## The remote_read spec configuration for Prometheus.
## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#remotereadspec
- remoteRead: {}
+ remoteRead: []
# - url: http://remote1/read
## The remote_write spec configuration for Prometheus.
## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#remotewritespec
- remoteWrite: {}
- # remoteWrite:
- # - url: http://remote1/push
+ remoteWrite: []
+ # - url: http://remote1/push
## Resource limits & requests
##
@@ -1001,7 +1416,7 @@ prometheus:
thanos: {}
## Containers allows injecting additional containers. This is meant to allow adding an authentication proxy to a Prometheus pod.
- ##
+ ## If using a proxy extraContainer, update targetPort to match the proxy container port
containers: []
## Enable additional scrape configs that are managed externally to this chart. Note that the prometheus
@@ -1024,6 +1439,10 @@ prometheus:
##
# jobLabel: ""
+ ## labels to transfer from the kubernetes service to the target
+ ##
+ # targetLabels: ""
+
## Label selector for services to which this ServiceMonitor applies
##
# selector: {}
diff --git a/stable/prometheus-operator/hack/README.md b/stable/prometheus-operator/hack/README.md
index 4fc7e6a90fb7..a33f810edc62 100644
--- a/stable/prometheus-operator/hack/README.md
+++ b/stable/prometheus-operator/hack/README.md
@@ -5,14 +5,49 @@
This script generates prometheus rules set for alertmanager from any properly formatted kubernetes yaml based on defined input, splitting rules to separate files based on group name.
Currently the following are imported:
- - [coreos/prometheus-operator rules set](https://github.com/coreos/prometheus-operator/blob/master/contrib/kube-prometheus/manifests/prometheus-rules.yaml)
- - [etcd-io/etc rules set](https://github.com/etcd-io/etcd/blob/master/Documentation/op-guide/etcd3_alert.rules.yml) (temporary disabled)
+
+- [coreos/kube-prometheus rules set](https://github.com/coreos/kube-prometheus/blob/master/manifests/prometheus-rules.yaml)
+ - In order to modify these rules:
+ - prepare and merge PR into [kubernetes-mixin](https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/rules)
+ - run import inside your fork of [coreos/kube-prometheus](https://github.com/coreos/kube-prometheus/tree/master)
+
+ ```bash
+ jb update
+ make generate-in-docker
+ ```
+
+ - prepare and merge PR with imported changes into coreos/prometheus-operator
+ - run sync_prometheus_rules.py inside your fork of this repo
+ - send PR with changes to this repo
+- [etcd-io/etcd rules set](https://github.com/etcd-io/etcd/blob/master/Documentation/op-guide/etcd3_alert.rules.yml)
+ - In order to modify these rules:
+    - prepare and merge PR into [etcd-io/etcd](https://github.com/etcd-io/etcd/blob/master/Documentation/op-guide/etcd3_alert.rules.yml) repo
+ - run sync_prometheus_rules.py inside your fork of this repo
+ - send PR with changes to this repo
## [sync_grafana_dashboards.py](sync_grafana_dashboards.py)
This script generates grafana dashboards from json files, splitting them to separate files based on group name.
Currently the following are imported:
- - [coreos/prometheus-operator dashboards](https://github.com/coreos/prometheus-operator/blob/master/contrib/kube-prometheus/manifests/grafana-deployment.yaml)
- - [etcd-io/etc dashboard](https://github.com/etcd-io/etcd/blob/master/Documentation/op-guide/grafana.json)
- - [coreos/prometheus-operator CoreDNS dashboard](https://github.com/helm/charts/blob/master/stable/prometheus-operator/dashboards/grafana-coredns-k8s.json) (not maintained in this location)
\ No newline at end of file
+
+- [coreos/kube-prometheus dashboards](https://github.com/coreos/kube-prometheus/blob/master/manifests/grafana-deployment.yaml)
+ - In order to modify these dashboards:
+ - prepare and merge PR into [kubernetes-mixin](https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/dashboards)
+ - run import inside your fork of [coreos/kube-prometheus](https://github.com/coreos/kube-prometheus/tree/master)
+
+ ```bash
+ jb update
+ make generate-in-docker
+ ```
+
+ - prepare and merge PR with imported changes into coreos/prometheus-operator
+ - run sync_grafana_dashboards.py inside your fork of this repo
+ - send PR with changes to this repo
+- [etcd-io/etcd dashboard](https://github.com/etcd-io/etcd/blob/master/Documentation/op-guide/grafana.json)
+ - In order to modify this dashboard:
+ - prepare and merge PR into [etcd-io/etcd](https://github.com/etcd-io/etcd/blob/master/Documentation/op-guide/grafana.json) repo
+ - run sync_grafana_dashboards.py inside your fork of this repo
+ - send PR with changes to this repo
+
+[CoreDNS dashboard](https://github.com/helm/charts/blob/master/stable/prometheus-operator/templates/grafana/dashboards/k8s-coredns.yaml) is the only dashboard which is maintained in this repo and can be changed without import.
diff --git a/stable/prometheus-operator/hack/minikube/README.md b/stable/prometheus-operator/hack/minikube/README.md
new file mode 100644
index 000000000000..a5db53be6417
--- /dev/null
+++ b/stable/prometheus-operator/hack/minikube/README.md
@@ -0,0 +1,3 @@
+The configuration in this folder lets you test the setup locally on minikube. Use cmd.sh to set up the components and hack together a working etcd scrape configuration. Run the commands in the sequence listed in the script to get a working local minikube cluster.
+
+If you're using Windows, there's a commented-out section in the script that you should add to the minikube command.
\ No newline at end of file
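
The command sequence embedded in cmd.sh (taken from its own usage text) can be run roughly like this, assuming you start from the chart root:

```bash
./hack/minikube/cmd.sh reset-minikube       # wipe and restart minikube with scrape-friendly flags
./hack/minikube/cmd.sh init-helm            # one-time helm init after a fresh minikube
./hack/minikube/cmd.sh init-etcd-secret     # copy etcd client certs into the monitoring namespace
./hack/minikube/cmd.sh prometheus-operator  # install or upgrade the chart
./hack/minikube/cmd.sh port-forward         # prometheus :9090, alertmanager :9093, grafana :3000
```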
diff --git a/stable/prometheus-operator/hack/minikube/cmd.sh b/stable/prometheus-operator/hack/minikube/cmd.sh
new file mode 100755
index 000000000000..e0dfe2cc0d74
--- /dev/null
+++ b/stable/prometheus-operator/hack/minikube/cmd.sh
@@ -0,0 +1,84 @@
+#!/usr/bin/env bash
+
+HELM_RELEASE_NAME=prom-op
+CHART=./
+NAMESPACE=monitoring
+VALUES_FILES=./hack/minikube/values.yaml
+
+if [ "$1" = "reset-minikube" ]; then
+ minikube delete
+  # on Windows, add to minikube start: --vm-driver hyperv --hyperv-virtual-switch "Default Switch"
+  # (keeping a commented-out line inside the backslash continuation would comment out the remaining flags)
+  minikube start \
+    --kubernetes-version=v1.13.3 \
+    --memory=4096 --bootstrapper=kubeadm \
+    --extra-config=kubelet.authentication-token-webhook=true \
+    --extra-config=kubelet.authorization-mode=Webhook \
+    --extra-config=scheduler.address=0.0.0.0 \
+    --extra-config=controller-manager.address=0.0.0.0
+ exit 0
+fi
+
+if [ "$1" = "init-helm" ]; then
+ helm init
+ helm repo update
+ exit 0
+fi
+
+if [ "$1" = "init-etcd-secret" ]; then
+ kubectl create namespace monitoring
+ kubectl delete secret etcd-certs -nmonitoring
+ kubectl create secret generic etcd-certs -nmonitoring \
+ --from-literal=ca.crt="$(kubectl exec kube-apiserver-minikube -nkube-system -- cat /var/lib/minikube/certs/etcd/ca.crt)" \
+ --from-literal=client.crt="$(kubectl exec kube-apiserver-minikube -nkube-system -- cat /var/lib/minikube/certs/apiserver-etcd-client.crt)" \
+ --from-literal=client.key="$(kubectl exec kube-apiserver-minikube -nkube-system -- cat /var/lib/minikube/certs/apiserver-etcd-client.key)"
+
+ exit 0
+fi
+
+
+if [ "$1" = "prometheus-operator" ]; then
+ helm upgrade $HELM_RELEASE_NAME $CHART \
+ --namespace $NAMESPACE \
+ --values $VALUES_FILES \
+ --set grafana.podAnnotations.redeploy-hack="$(cat /proc/sys/kernel/random/uuid)" \
+ --install --debug
+ exit 0
+fi
+
+if [ "$1" = "port-forward" ]; then
+ killall kubectl &>/dev/null
+ kubectl port-forward service/prom-op-prometheus-operato-prometheus 9090 &>/dev/null &
+ kubectl port-forward service/prom-op-prometheus-operato-alertmanager 9093 &>/dev/null &
+ kubectl port-forward service/prom-op-grafana 3000:80 &>/dev/null &
+ echo "Started port-forward commands"
+ echo "localhost:9090 - prometheus"
+ echo "localhost:9093 - alertmanager"
+ echo "localhost:3000 - grafana"
+ exit 0
+fi
+
+cat << EOF
+Usage:
+  cmd.sh
+
+Commands:
+ reset-minikube - resets minikube with values suitable for running prometheus operator
+ the normal installation will not allow scraping of the kubelet,
+ scheduler or controller-manager components
+ init-helm - initialize helm and update repository so that we can install
+ the prometheus-operator chart. This has to be run only once after
+ a minikube installation is done
+ init-etcd-secret - pulls the certs used to access etcd from the api server and creates
+ a secret in the monitoring namespace with them. The values files
+ in the install command assume that this secret exists and is valid.
+ If not, then prometheus will not start
+ prometheus-operator - install or upgrade the prometheus operator chart in the cluster
+ port-forward - starts port-forwarding for prometheus, alertmanager, grafana
+ localhost:9090 - prometheus
+ localhost:9093 - alertmanager
+ localhost:3000 - grafana
+EOF
+
+exit 0
+
diff --git a/stable/prometheus-operator/hack/minikube/values.yaml b/stable/prometheus-operator/hack/minikube/values.yaml
new file mode 100644
index 000000000000..8bdce3131f5a
--- /dev/null
+++ b/stable/prometheus-operator/hack/minikube/values.yaml
@@ -0,0 +1,9 @@
+prometheus:
+ prometheusSpec:
+ secrets: [etcd-certs]
+kubeEtcd:
+ serviceMonitor:
+ scheme: https
+ caFile: /etc/prometheus/secrets/etcd-certs/ca.crt
+ certFile: /etc/prometheus/secrets/etcd-certs/client.crt
+ keyFile: /etc/prometheus/secrets/etcd-certs/client.key
\ No newline at end of file
diff --git a/stable/prometheus-operator/hack/sync_grafana_dashboards.py b/stable/prometheus-operator/hack/sync_grafana_dashboards.py
index 2a43c15df2bf..8c0625f3148a 100755
--- a/stable/prometheus-operator/hack/sync_grafana_dashboards.py
+++ b/stable/prometheus-operator/hack/sync_grafana_dashboards.py
@@ -26,7 +26,7 @@ def new_representer(dumper, data):
# Source files list
charts = [
{
- 'source': 'https://raw.githubusercontent.com/coreos/prometheus-operator/master/contrib/kube-prometheus/manifests/grafana-dashboardDefinitions.yaml',
+ 'source': 'https://raw.githubusercontent.com/coreos/kube-prometheus/master/manifests/grafana-dashboardDefinitions.yaml',
'destination': '../templates/grafana/dashboards',
'type': 'yaml',
},
@@ -45,6 +45,8 @@ def new_representer(dumper, data):
# standard header
header = '''# Generated from '%(name)s' from %(url)s
+# Do not change in-place! In order to change this file first read following link:
+# https://github.com/helm/charts/tree/master/stable/prometheus-operator/hack
{{- if and .Values.grafana.enabled .Values.grafana.defaultDashboardsEnabled%(condition)s }}
apiVersion: v1
kind: ConfigMap
diff --git a/stable/prometheus-operator/hack/sync_prometheus_rules.py b/stable/prometheus-operator/hack/sync_prometheus_rules.py
index 0a5ac851c674..cec1251fdae9 100755
--- a/stable/prometheus-operator/hack/sync_prometheus_rules.py
+++ b/stable/prometheus-operator/hack/sync_prometheus_rules.py
@@ -25,12 +25,12 @@ def new_representer(dumper, data):
# Source files list
charts = [
{
- 'source': 'https://raw.githubusercontent.com/coreos/prometheus-operator/master/contrib/kube-prometheus/manifests/prometheus-rules.yaml',
- 'destination': '../templates/alertmanager/rules'
+ 'source': 'https://raw.githubusercontent.com/coreos/kube-prometheus/master/manifests/prometheus-rules.yaml',
+ 'destination': '../templates/prometheus/rules'
},
{
'source': 'https://raw.githubusercontent.com/etcd-io/etcd/master/Documentation/op-guide/etcd3_alert.rules.yml',
- 'destination': '../templates/alertmanager/rules'
+ 'destination': '../templates/prometheus/rules'
},
]
@@ -63,6 +63,7 @@ def new_representer(dumper, data):
'PrometheusOperatorDown': '.Values.prometheusOperator.enabled',
'NodeExporterDown': '.Values.nodeExporter.enabled',
'CoreDNSDown': '.Values.kubeDns.enabled',
+ 'AlertmanagerDown': '.Values.alertmanager.enabled',
}
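
With the new `AlertmanagerDown` entry, `add_rules_conditions` wraps that alert in a chart-value guard; the emitted block has roughly this shape (illustrative only, expr abbreviated, not copied from generated output):

```yaml
{{- if .Values.alertmanager.enabled }}
- alert: AlertmanagerDown
  expr: absent(up{job="{{ $alertmanagerJob }}"} == 1)  # expr illustrative
{{- end }}
```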
replacement_map = {
@@ -75,10 +76,18 @@ def new_representer(dumper, data):
'job="alertmanager-main"': {
'replacement': 'job="{{ $alertmanagerJob }}"',
'init': '{{- $alertmanagerJob := printf "%s-%s" (include "prometheus-operator.fullname" .) "alertmanager" }}'},
+ 'namespace="monitoring"': {
+ 'replacement': 'namespace="{{ $namespace }}"',
+ 'init': '{{- $namespace := .Release.Namespace }}'},
+ 'alertmanager-$1': {
+ 'replacement': '$1',
+ 'init': ''},
}
# standard header
header = '''# Generated from '%(name)s' group from %(url)s
+# Do not change in-place! In order to change this file first read following link:
+# https://github.com/helm/charts/tree/master/stable/prometheus-operator/hack
{{- if and .Values.defaultRules.create%(condition)s }}%(init_line)s
apiVersion: {{ printf "%%s/v1" (.Values.prometheusOperator.crdApiGroup | default "monitoring.coreos.com") }}
kind: PrometheusRule
@@ -146,6 +155,20 @@ def add_rules_conditions(rules, indent=4):
except ValueError:
# we found the last alert in file if there are no alerts after it
next_index = len(rules)
+
+ # depending on the rule ordering in alert_condition_map it's possible that an if statement from another rule is present at the end of this block.
+ found_block_end = False
+ last_line_index = next_index
+ while not found_block_end:
+ last_line_index = rules.rindex('\n', index, last_line_index - 1) # find the starting position of the last line
+ last_line = rules[last_line_index + 1:next_index]
+
+ if last_line.startswith('{{- if'):
+ next_index = last_line_index + 1 # move next_index back if the current block ends in an if statement
+ continue
+
+ found_block_end = True
+
rules = rules[:next_index] + '{{- end }}\n' + rules[next_index:]
return rules
@@ -160,7 +183,8 @@ def write_group_to_file(group, url, destination):
for line in replacement_map:
if line in rules:
rules = rules.replace(line, replacement_map[line]['replacement'])
- init_line += '\n' + replacement_map[line]['init']
+ if replacement_map[line]['init']:
+ init_line += '\n' + replacement_map[line]['init']
# append per-alert rules
rules = add_rules_conditions(rules)
# initialize header
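
The empty-init guard added above matters because the new `'alertmanager-$1'` entry carries an init of `''`; a minimal standalone sketch of the substitution pass (init text abbreviated, function name mine):

```python
replacement_map = {
    'job="alertmanager-main"': {
        'replacement': 'job="{{ $alertmanagerJob }}"',
        'init': '{{- $alertmanagerJob := "..." }}'},  # init abbreviated for the sketch
    'alertmanager-$1': {
        'replacement': '$1',
        'init': ''},  # pure text substitution, nothing to declare
}

def apply_replacements(rules):
    # Substitute each mapped string and collect template init lines,
    # skipping empty inits so blank lines don't pile up in the header.
    init_line = ''
    for line, spec in replacement_map.items():
        if line in rules:
            rules = rules.replace(line, spec['replacement'])
            if spec['init']:
                init_line += '\n' + spec['init']
    return rules, init_line
```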
diff --git a/stable/prometheus-operator/hack/update-ci.sh b/stable/prometheus-operator/hack/update-ci.sh
new file mode 100755
index 000000000000..e879bd60b489
--- /dev/null
+++ b/stable/prometheus-operator/hack/update-ci.sh
@@ -0,0 +1,2 @@
+#!/usr/bin/env bash
+sed 's/cleanupCustomResource: false/cleanupCustomResource: true/' values.yaml > ci/test-values.yaml
\ No newline at end of file
diff --git a/stable/prometheus-operator/requirements.lock b/stable/prometheus-operator/requirements.lock
index 7335f4f21aae..6151a2cdd165 100644
--- a/stable/prometheus-operator/requirements.lock
+++ b/stable/prometheus-operator/requirements.lock
@@ -1,12 +1,12 @@
dependencies:
- name: kube-state-metrics
repository: https://kubernetes-charts.storage.googleapis.com/
- version: 0.13.0
+ version: 1.1.0
- name: prometheus-node-exporter
repository: https://kubernetes-charts.storage.googleapis.com/
- version: 1.1.0
+ version: 1.4.2
- name: grafana
repository: https://kubernetes-charts.storage.googleapis.com/
- version: 1.25.0
-digest: sha256:db064dc47d3363e31d6e317385b29bffc30929ed42a0d2951109184e907721e2
-generated: 2019-01-15T16:00:38.946498-08:00
+ version: 3.3.6
+digest: sha256:81f530518f48adc9279b5a874b405a0861ed40bc5cc2ce5e22b97852fe084a4e
+generated: 2019-05-08T13:07:21.139618-07:00
diff --git a/stable/prometheus-operator/requirements.yaml b/stable/prometheus-operator/requirements.yaml
index 8d9a6aee0ac1..d3e32df3f7de 100644
--- a/stable/prometheus-operator/requirements.yaml
+++ b/stable/prometheus-operator/requirements.yaml
@@ -1,16 +1,16 @@
dependencies:
- name: kube-state-metrics
- version: 0.13.*
+ version: 1.1.*
repository: https://kubernetes-charts.storage.googleapis.com/
condition: kubeStateMetrics.enabled
- name: prometheus-node-exporter
- version: 1.1.*
+ version: 1.4.*
repository: https://kubernetes-charts.storage.googleapis.com/
condition: nodeExporter.enabled
- name: grafana
- version: 1.25.*
+ version: 3.3.*
repository: https://kubernetes-charts.storage.googleapis.com/
condition: grafana.enabled
diff --git a/stable/prometheus-operator/templates/_helpers.tpl b/stable/prometheus-operator/templates/_helpers.tpl
index 6ec1fa2b1805..77992fcf3c63 100644
--- a/stable/prometheus-operator/templates/_helpers.tpl
+++ b/stable/prometheus-operator/templates/_helpers.tpl
@@ -56,7 +56,7 @@ heritage: {{ .Release.Service | quote }}
{{/* Create the name of prometheus-operator service account to use */}}
{{- define "prometheus-operator.operator.serviceAccountName" -}}
-{{- if and .Values.global.rbac.create .Values.prometheusOperator.serviceAccount.create -}}
+{{- if .Values.prometheusOperator.serviceAccount.create -}}
{{ default (include "prometheus-operator.operator.fullname" .) .Values.prometheusOperator.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.prometheusOperator.serviceAccount.name }}
@@ -65,7 +65,7 @@ heritage: {{ .Release.Service | quote }}
{{/* Create the name of prometheus service account to use */}}
{{- define "prometheus-operator.prometheus.serviceAccountName" -}}
-{{- if and .Values.global.rbac.create .Values.prometheus.serviceAccount.create -}}
+{{- if .Values.prometheus.serviceAccount.create -}}
{{ default (include "prometheus-operator.prometheus.fullname" .) .Values.prometheus.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.prometheus.serviceAccount.name }}
@@ -74,7 +74,7 @@ heritage: {{ .Release.Service | quote }}
{{/* Create the name of alertmanager service account to use */}}
{{- define "prometheus-operator.alertmanager.serviceAccountName" -}}
-{{- if and .Values.global.rbac.create .Values.alertmanager.serviceAccount.create -}}
+{{- if .Values.alertmanager.serviceAccount.create -}}
{{ default (include "prometheus-operator.alertmanager.fullname" .) .Values.alertmanager.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.alertmanager.serviceAccount.name }}
diff --git a/stable/prometheus-operator/templates/alertmanager/alertmanager.yaml b/stable/prometheus-operator/templates/alertmanager/alertmanager.yaml
index 24f93847e18b..9e1b6c506bbd 100644
--- a/stable/prometheus-operator/templates/alertmanager/alertmanager.yaml
+++ b/stable/prometheus-operator/templates/alertmanager/alertmanager.yaml
@@ -14,9 +14,6 @@ spec:
replicas: {{ .Values.alertmanager.alertmanagerSpec.replicas }}
listenLocal: {{ .Values.alertmanager.alertmanagerSpec.listenLocal }}
serviceAccountName: {{ template "prometheus-operator.alertmanager.serviceAccountName" . }}
-{{- if .Values.alertmanager.alertmanagerSpec.externalUrl }}
- externalUrl: "{{ .Values.alertmanager.alertmanagerSpec.externalUrl }}"
-{{- end }}
{{- if .Values.alertmanager.alertmanagerSpec.externalUrl }}
externalUrl: "{{ .Values.alertmanager.alertmanagerSpec.externalUrl }}"
{{- else if .Values.alertmanager.ingress.enabled }}
@@ -58,8 +55,12 @@ spec:
podMetadata:
{{ toYaml .Values.alertmanager.alertmanagerSpec.podMetadata | indent 4 }}
{{- end }}
-{{- if eq .Values.alertmanager.alertmanagerSpec.podAntiAffinity "hard" }}
+{{- if or .Values.alertmanager.alertmanagerSpec.podAntiAffinity .Values.alertmanager.alertmanagerSpec.affinity }}
affinity:
+{{- if .Values.alertmanager.alertmanagerSpec.affinity }}
+{{ toYaml .Values.alertmanager.alertmanagerSpec.affinity | indent 4 }}
+{{- end }}
+{{- if eq .Values.alertmanager.alertmanagerSpec.podAntiAffinity "hard" }}
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- topologyKey: {{ .Values.alertmanager.alertmanagerSpec.podAntiAffinityTopologyKey }}
@@ -68,7 +69,6 @@ spec:
app: alertmanager
alertmanager: {{ template "prometheus-operator.fullname" . }}-alertmanager
{{- else if eq .Values.alertmanager.alertmanagerSpec.podAntiAffinity "soft" }}
- affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
@@ -79,6 +79,7 @@ spec:
app: alertmanager
alertmanager: {{ template "prometheus-operator.fullname" . }}-alertmanager
{{- end }}
+{{- end }}
{{- if .Values.alertmanager.alertmanagerSpec.tolerations }}
tolerations:
{{ toYaml .Values.alertmanager.alertmanagerSpec.tolerations | indent 4 }}
diff --git a/stable/prometheus-operator/templates/alertmanager/ingress.yaml b/stable/prometheus-operator/templates/alertmanager/ingress.yaml
index fd657f71beae..ac6306bf07f8 100644
--- a/stable/prometheus-operator/templates/alertmanager/ingress.yaml
+++ b/stable/prometheus-operator/templates/alertmanager/ingress.yaml
@@ -1,6 +1,8 @@
{{- if and .Values.alertmanager.enabled .Values.alertmanager.ingress.enabled }}
-{{- $routePrefix := .Values.alertmanager.alertmanagerSpec.routePrefix }}
{{- $serviceName := printf "%s-%s" (include "prometheus-operator.fullname" .) "alertmanager" }}
+{{- $servicePort := 9093 -}}
+{{- $routePrefix := list .Values.alertmanager.alertmanagerSpec.routePrefix }}
+{{- $paths := .Values.alertmanager.ingress.paths | default $routePrefix -}}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
@@ -17,17 +19,30 @@ metadata:
{{ include "prometheus-operator.labels" . | indent 4 }}
spec:
rules:
- {{- range $host := .Values.alertmanager.ingress.hosts }}
- - host: {{ . }}
+ {{- if .Values.alertmanager.ingress.hosts }}
+ {{- range $host := .Values.alertmanager.ingress.hosts }}
+ - host: {{ tpl $host $ }}
http:
paths:
- - path: "{{ $routePrefix }}"
+ {{- range $p := $paths }}
+ - path: {{ tpl $p $ }}
backend:
serviceName: {{ $serviceName }}
- servicePort: 9093
- {{- end }}
-{{- if .Values.alertmanager.ingress.tls }}
+ servicePort: {{ $servicePort }}
+ {{- end -}}
+ {{- end -}}
+ {{- else }}
+ - http:
+ paths:
+ {{- range $p := $paths }}
+ - path: {{ tpl $p $ }}
+ backend:
+ serviceName: {{ $serviceName }}
+ servicePort: {{ $servicePort }}
+ {{- end -}}
+ {{- end -}}
+ {{- if .Values.alertmanager.ingress.tls }}
tls:
{{ toYaml .Values.alertmanager.ingress.tls | indent 4 }}
-{{- end }}
-{{- end }}
\ No newline at end of file
+ {{- end -}}
+{{- end -}}
diff --git a/stable/prometheus-operator/templates/alertmanager/secret.yaml b/stable/prometheus-operator/templates/alertmanager/secret.yaml
index e73c465f9ab7..001262f15698 100644
--- a/stable/prometheus-operator/templates/alertmanager/secret.yaml
+++ b/stable/prometheus-operator/templates/alertmanager/secret.yaml
@@ -1,4 +1,4 @@
-{{- if and .Values.alertmanager.enabled }}
+{{- if and (.Values.alertmanager.enabled) (not .Values.alertmanager.alertmanagerSpec.useExistingSecret) }}
apiVersion: v1
kind: Secret
metadata:
diff --git a/stable/prometheus-operator/templates/alertmanager/serviceaccount.yaml b/stable/prometheus-operator/templates/alertmanager/serviceaccount.yaml
index bbed02879bb9..ccdd15df417c 100644
--- a/stable/prometheus-operator/templates/alertmanager/serviceaccount.yaml
+++ b/stable/prometheus-operator/templates/alertmanager/serviceaccount.yaml
@@ -1,4 +1,4 @@
-{{- if and .Values.alertmanager.enabled .Values.global.rbac.create .Values.alertmanager.serviceAccount.create }}
+{{- if and .Values.alertmanager.enabled .Values.alertmanager.serviceAccount.create }}
apiVersion: v1
kind: ServiceAccount
metadata:
diff --git a/stable/prometheus-operator/templates/alertmanager/servicemonitor.yaml b/stable/prometheus-operator/templates/alertmanager/servicemonitor.yaml
index 5c8cab9038a8..0aef9835df5c 100644
--- a/stable/prometheus-operator/templates/alertmanager/servicemonitor.yaml
+++ b/stable/prometheus-operator/templates/alertmanager/servicemonitor.yaml
@@ -16,6 +16,16 @@ spec:
- {{ .Release.Namespace | quote }}
endpoints:
- port: web
- interval: 30s
+ {{- if .Values.alertmanager.serviceMonitor.interval }}
+ interval: {{ .Values.alertmanager.serviceMonitor.interval }}
+ {{- end }}
path: "{{ trimSuffix "/" .Values.alertmanager.alertmanagerSpec.routePrefix }}/metrics"
+{{- if .Values.alertmanager.serviceMonitor.metricRelabelings }}
+ metricRelabelings:
+{{ toYaml .Values.alertmanager.serviceMonitor.metricRelabelings | indent 6 }}
+{{- end }}
+{{- if .Values.alertmanager.serviceMonitor.relabelings }}
+ relabelings:
+{{ toYaml .Values.alertmanager.serviceMonitor.relabelings | indent 6 }}
+{{- end }}
{{- end }}
diff --git a/stable/prometheus-operator/templates/exporters/core-dns/servicemonitor.yaml b/stable/prometheus-operator/templates/exporters/core-dns/servicemonitor.yaml
index 2c29b95060ab..ebdbe02f8d11 100644
--- a/stable/prometheus-operator/templates/exporters/core-dns/servicemonitor.yaml
+++ b/stable/prometheus-operator/templates/exporters/core-dns/servicemonitor.yaml
@@ -17,6 +17,16 @@ spec:
- "kube-system"
endpoints:
- port: http-metrics
- interval: 15s
+ {{- if .Values.coreDns.serviceMonitor.interval }}
+ interval: {{ .Values.coreDns.serviceMonitor.interval }}
+ {{- end }}
bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
+{{- if .Values.coreDns.serviceMonitor.metricRelabelings }}
+ metricRelabelings:
+{{ toYaml .Values.coreDns.serviceMonitor.metricRelabelings | indent 4 }}
+{{- end }}
+{{- if .Values.coreDns.serviceMonitor.relabelings }}
+ relabelings:
+{{ toYaml .Values.coreDns.serviceMonitor.relabelings | indent 4 }}
+{{- end }}
{{- end }}
diff --git a/stable/prometheus-operator/templates/exporters/kube-api-server/servicemonitor.yaml b/stable/prometheus-operator/templates/exporters/kube-api-server/servicemonitor.yaml
index a0bf69657af1..242419c5bc39 100644
--- a/stable/prometheus-operator/templates/exporters/kube-api-server/servicemonitor.yaml
+++ b/stable/prometheus-operator/templates/exporters/kube-api-server/servicemonitor.yaml
@@ -9,9 +9,19 @@ metadata:
spec:
endpoints:
- bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
- interval: 30s
+ {{- if .Values.kubeApiServer.serviceMonitor.interval }}
+ interval: {{ .Values.kubeApiServer.serviceMonitor.interval }}
+ {{- end }}
port: https
scheme: https
+{{- if .Values.kubeApiServer.serviceMonitor.metricRelabelings }}
+ metricRelabelings:
+{{ toYaml .Values.kubeApiServer.serviceMonitor.metricRelabelings | indent 6 }}
+{{- end }}
+{{- if .Values.kubeApiServer.relabelings }}
+ relabelings:
+{{ toYaml .Values.kubeApiServer.relabelings | indent 6 }}
+{{- end }}
tlsConfig:
caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
serverName: {{ .Values.kubeApiServer.tlsConfig.serverName }}
diff --git a/stable/prometheus-operator/templates/exporters/kube-controller-manager/servicemonitor.yaml b/stable/prometheus-operator/templates/exporters/kube-controller-manager/servicemonitor.yaml
index db83e80de2e3..ebd8c8166c59 100644
--- a/stable/prometheus-operator/templates/exporters/kube-controller-manager/servicemonitor.yaml
+++ b/stable/prometheus-operator/templates/exporters/kube-controller-manager/servicemonitor.yaml
@@ -17,9 +17,21 @@ spec:
- "kube-system"
endpoints:
- port: http-metrics
- interval: 15s
+ {{- if .Values.kubeControllerManager.serviceMonitor.interval }}
+ interval: {{ .Values.kubeControllerManager.serviceMonitor.interval }}
+ {{- end }}
+ bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
+ {{- if .Values.kubeControllerManager.serviceMonitor.https }}
+ scheme: https
tlsConfig:
caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
- insecureSkipVerify: true
- bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
+ {{- end }}
+{{- if .Values.kubeControllerManager.serviceMonitor.metricRelabelings }}
+ metricRelabelings:
+{{ toYaml .Values.kubeControllerManager.serviceMonitor.metricRelabelings | indent 4 }}
+{{- end }}
+{{- if .Values.kubeControllerManager.serviceMonitor.relabelings }}
+ relabelings:
+{{ toYaml .Values.kubeControllerManager.serviceMonitor.relabelings | indent 4 }}
+{{- end }}
{{- end }}
\ No newline at end of file
diff --git a/stable/prometheus-operator/templates/exporters/kube-dns/servicemonitor.yaml b/stable/prometheus-operator/templates/exporters/kube-dns/servicemonitor.yaml
index 0ba0984aded1..a4255be6eb1e 100644
--- a/stable/prometheus-operator/templates/exporters/kube-dns/servicemonitor.yaml
+++ b/stable/prometheus-operator/templates/exporters/kube-dns/servicemonitor.yaml
@@ -17,9 +17,21 @@ spec:
- "kube-system"
endpoints:
- port: http-metrics-dnsmasq
- interval: 15s
+ {{- if .Values.kubeDns.serviceMonitor.interval }}
+ interval: {{ .Values.kubeDns.serviceMonitor.interval }}
+ {{- end }}
bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
- port: http-metrics-skydns
- interval: 15s
+ {{- if .Values.kubeDns.serviceMonitor.interval }}
+ interval: {{ .Values.kubeDns.serviceMonitor.interval }}
+ {{- end }}
bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
+{{- if .Values.kubeDns.serviceMonitor.metricRelabelings }}
+ metricRelabelings:
+{{ toYaml .Values.kubeDns.serviceMonitor.metricRelabelings | indent 4 }}
+{{- end }}
+{{- if .Values.kubeDns.serviceMonitor.relabelings }}
+ relabelings:
+{{ toYaml .Values.kubeDns.serviceMonitor.relabelings | indent 4 }}
+{{- end }}
{{- end }}
diff --git a/stable/prometheus-operator/templates/exporters/kube-etcd/servicemonitor.yaml b/stable/prometheus-operator/templates/exporters/kube-etcd/servicemonitor.yaml
index 6d4a4447a64e..d0a789296197 100644
--- a/stable/prometheus-operator/templates/exporters/kube-etcd/servicemonitor.yaml
+++ b/stable/prometheus-operator/templates/exporters/kube-etcd/servicemonitor.yaml
@@ -17,7 +17,9 @@ spec:
- "kube-system"
endpoints:
- port: http-metrics
- interval: 15s
+ {{- if .Values.kubeEtcd.serviceMonitor.interval }}
+ interval: {{ .Values.kubeEtcd.serviceMonitor.interval }}
+ {{- end }}
bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
{{- if eq .Values.kubeEtcd.serviceMonitor.scheme "https" }}
scheme: https
@@ -36,4 +38,12 @@ spec:
{{- end}}
insecureSkipVerify: {{ .Values.kubeEtcd.serviceMonitor.insecureSkipVerify }}
{{- end }}
+{{- if .Values.kubeEtcd.serviceMonitor.metricRelabelings }}
+ metricRelabelings:
+{{ toYaml .Values.kubeEtcd.serviceMonitor.metricRelabelings | indent 4 }}
+{{- end }}
+{{- if .Values.kubeEtcd.serviceMonitor.relabelings }}
+ relabelings:
+{{ toYaml .Values.kubeEtcd.serviceMonitor.relabelings | indent 4 }}
+{{- end }}
{{- end }}
diff --git a/stable/prometheus-operator/templates/exporters/kube-scheduler/servicemonitor.yaml b/stable/prometheus-operator/templates/exporters/kube-scheduler/servicemonitor.yaml
index b4195b620541..aa66c461c4c5 100644
--- a/stable/prometheus-operator/templates/exporters/kube-scheduler/servicemonitor.yaml
+++ b/stable/prometheus-operator/templates/exporters/kube-scheduler/servicemonitor.yaml
@@ -17,5 +17,21 @@ spec:
- "kube-system"
endpoints:
- port: http-metrics
- interval: 15s
+ {{- if .Values.kubeScheduler.serviceMonitor.interval }}
+ interval: {{ .Values.kubeScheduler.serviceMonitor.interval }}
+ {{- end }}
+ bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
+ {{- if .Values.kubeScheduler.serviceMonitor.https }}
+ scheme: https
+ tlsConfig:
+ caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
+ {{- end }}
+{{- if .Values.kubeScheduler.serviceMonitor.metricRelabelings }}
+ metricRelabelings:
+{{ toYaml .Values.kubeScheduler.serviceMonitor.metricRelabelings | indent 4 }}
+{{- end }}
+{{- if .Values.kubeScheduler.serviceMonitor.relabelings }}
+ relabelings:
+{{ toYaml .Values.kubeScheduler.serviceMonitor.relabelings | indent 4 }}
+{{- end }}
{{- end }}
diff --git a/stable/prometheus-operator/templates/exporters/kube-state-metrics/serviceMonitor.yaml b/stable/prometheus-operator/templates/exporters/kube-state-metrics/serviceMonitor.yaml
index cfbe2d70877a..d8ac0d0a14d7 100644
--- a/stable/prometheus-operator/templates/exporters/kube-state-metrics/serviceMonitor.yaml
+++ b/stable/prometheus-operator/templates/exporters/kube-state-metrics/serviceMonitor.yaml
@@ -9,9 +9,19 @@ metadata:
spec:
jobLabel: app
endpoints:
- - interval: 30s
- port: http
+ - port: http
+ {{- if .Values.kubeStateMetrics.serviceMonitor.interval }}
+ interval: {{ .Values.kubeStateMetrics.serviceMonitor.interval }}
+ {{- end }}
honorLabels: true
+{{- if .Values.kubeStateMetrics.serviceMonitor.metricRelabelings }}
+ metricRelabelings:
+{{ toYaml .Values.kubeStateMetrics.serviceMonitor.metricRelabelings | indent 4 }}
+{{- end }}
+{{- if .Values.kubeStateMetrics.serviceMonitor.relabelings }}
+ relabelings:
+{{ toYaml .Values.kubeStateMetrics.serviceMonitor.relabelings | indent 4 }}
+{{- end }}
selector:
matchLabels:
app: kube-state-metrics
diff --git a/stable/prometheus-operator/templates/exporters/kubelet/servicemonitor.yaml b/stable/prometheus-operator/templates/exporters/kubelet/servicemonitor.yaml
index fb3b9a2284ec..1d9a094db569 100644
--- a/stable/prometheus-operator/templates/exporters/kubelet/servicemonitor.yaml
+++ b/stable/prometheus-operator/templates/exporters/kubelet/servicemonitor.yaml
@@ -11,7 +11,9 @@ spec:
{{- if .Values.kubelet.serviceMonitor.https }}
- port: https-metrics
scheme: https
- interval: 15s
+ {{- if .Values.kubelet.serviceMonitor.interval }}
+ interval: {{ .Values.kubelet.serviceMonitor.interval }}
+ {{- end }}
tlsConfig:
caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
insecureSkipVerify: true
@@ -20,20 +22,42 @@ spec:
- port: https-metrics
scheme: https
path: /metrics/cadvisor
- interval: 30s
+ {{- if .Values.kubelet.serviceMonitor.interval }}
+ interval: {{ .Values.kubelet.serviceMonitor.interval }}
+ {{- end }}
honorLabels: true
tlsConfig:
caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
insecureSkipVerify: true
bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
+{{- if .Values.kubelet.serviceMonitor.cAdvisorMetricRelabelings }}
+ metricRelabelings:
+{{ toYaml .Values.kubelet.serviceMonitor.cAdvisorMetricRelabelings | indent 4 }}
+{{- end }}
+{{- if .Values.kubelet.serviceMonitor.cAdvisorRelabelings }}
+ relabelings:
+{{ toYaml .Values.kubelet.serviceMonitor.cAdvisorRelabelings | indent 4 }}
+{{- end }}
{{- else }}
- port: http-metrics
- interval: 30s
+ {{- if .Values.kubelet.serviceMonitor.interval }}
+ interval: {{ .Values.kubelet.serviceMonitor.interval }}
+ {{- end }}
honorLabels: true
- port: http-metrics
path: /metrics/cadvisor
- interval: 30s
+ {{- if .Values.kubelet.serviceMonitor.interval }}
+ interval: {{ .Values.kubelet.serviceMonitor.interval }}
+ {{- end }}
honorLabels: true
+{{- if .Values.kubelet.serviceMonitor.cAdvisorMetricRelabelings }}
+ metricRelabelings:
+{{ toYaml .Values.kubelet.serviceMonitor.cAdvisorMetricRelabelings | indent 4 }}
+{{- end }}
+{{- if .Values.kubelet.serviceMonitor.cAdvisorRelabelings }}
+ relabelings:
+{{ toYaml .Values.kubelet.serviceMonitor.cAdvisorRelabelings | indent 4 }}
+{{- end }}
{{- end }}
jobLabel: k8s-app
namespaceSelector:
diff --git a/stable/prometheus-operator/templates/exporters/node-exporter/servicemonitor.yaml b/stable/prometheus-operator/templates/exporters/node-exporter/servicemonitor.yaml
index 392b7c936bd6..4095143d1073 100644
--- a/stable/prometheus-operator/templates/exporters/node-exporter/servicemonitor.yaml
+++ b/stable/prometheus-operator/templates/exporters/node-exporter/servicemonitor.yaml
@@ -14,5 +14,15 @@ spec:
release: {{ .Release.Name }}
endpoints:
- port: metrics
- interval: 30s
+ {{- if .Values.nodeExporter.serviceMonitor.interval }}
+ interval: {{ .Values.nodeExporter.serviceMonitor.interval }}
+ {{- end }}
+{{- if .Values.nodeExporter.serviceMonitor.metricRelabelings }}
+ metricRelabelings:
+{{ toYaml .Values.nodeExporter.serviceMonitor.metricRelabelings | indent 4 }}
+{{- end }}
+{{- if .Values.nodeExporter.serviceMonitor.relabelings }}
+ relabelings:
+{{ toYaml .Values.nodeExporter.serviceMonitor.relabelings | indent 4 }}
+{{- end }}
{{- end }}
diff --git a/stable/prometheus-operator/templates/grafana/configmap-dashboards.yaml b/stable/prometheus-operator/templates/grafana/configmap-dashboards.yaml
index 2eab290256aa..0289154b9f61 100644
--- a/stable/prometheus-operator/templates/grafana/configmap-dashboards.yaml
+++ b/stable/prometheus-operator/templates/grafana/configmap-dashboards.yaml
@@ -1,8 +1,10 @@
{{- if and .Values.grafana.enabled .Values.grafana.defaultDashboardsEnabled }}
+{{- $files := .Files.Glob "dashboards/*.json" }}
+{{- if $files }}
apiVersion: v1
kind: ConfigMapList
items:
-{{- range $path, $fileContents := .Files.Glob "dashboards/*.json" }}
+{{- range $path, $fileContents := $files }}
{{- $dashboardName := regexReplaceAll "(^.*/)(.*)\\.json$" $path "${2}" }}
- apiVersion: v1
kind: ConfigMap
@@ -17,4 +19,5 @@ items:
data:
{{ $dashboardName }}.json: {{ $.Files.Get $path | toJson }}
{{- end }}
-{{- end }}
\ No newline at end of file
+{{- end }}
+{{- end }}
diff --git a/stable/prometheus-operator/templates/grafana/configmaps-datasources.yaml b/stable/prometheus-operator/templates/grafana/configmaps-datasources.yaml
index 5b8b54c4ee13..ca19d0e3c38a 100644
--- a/stable/prometheus-operator/templates/grafana/configmaps-datasources.yaml
+++ b/stable/prometheus-operator/templates/grafana/configmaps-datasources.yaml
@@ -11,9 +11,14 @@ data:
datasource.yaml: |-
apiVersion: 1
datasources:
+{{- if .Values.grafana.sidecar.datasources.defaultDatasourceEnabled }}
- name: Prometheus
type: prometheus
url: http://{{ template "prometheus-operator.fullname" . }}-prometheus:9090/{{ trimPrefix "/" .Values.prometheus.prometheusSpec.routePrefix }}
access: proxy
isDefault: true
-{{- end }}
\ No newline at end of file
+{{- end }}
+{{- if .Values.grafana.additionalDataSources }}
+{{ toYaml .Values.grafana.additionalDataSources | indent 4 }}
+{{- end }}
+{{- end }}
diff --git a/stable/prometheus-operator/templates/grafana/dashboards/etcd.yaml b/stable/prometheus-operator/templates/grafana/dashboards/etcd.yaml
index f10fbf106115..afee65a60267 100644
--- a/stable/prometheus-operator/templates/grafana/dashboards/etcd.yaml
+++ b/stable/prometheus-operator/templates/grafana/dashboards/etcd.yaml
@@ -1,4 +1,6 @@
# Generated from 'etcd' from https://raw.githubusercontent.com/etcd-io/etcd/master/Documentation/op-guide/grafana.json
+# Do not change in-place! Before changing this file, read the following link:
+# https://github.com/helm/charts/tree/master/stable/prometheus-operator/hack
{{- if and .Values.grafana.enabled .Values.grafana.defaultDashboardsEnabled .Values.kubeEtcd.enabled }}
apiVersion: v1
kind: ConfigMap
@@ -22,7 +24,7 @@ data:
"hideControls": false,
"id": 6,
"links": [],
- "refresh": false,
+ "refresh": "10s",
"rows": [
{
"collapse": false,
diff --git a/stable/prometheus-operator/templates/grafana/dashboards/k8s-cluster-rsrc-use.yaml b/stable/prometheus-operator/templates/grafana/dashboards/k8s-cluster-rsrc-use.yaml
index f62f74a6fa15..09e27680bb76 100644
--- a/stable/prometheus-operator/templates/grafana/dashboards/k8s-cluster-rsrc-use.yaml
+++ b/stable/prometheus-operator/templates/grafana/dashboards/k8s-cluster-rsrc-use.yaml
@@ -1,4 +1,6 @@
-# Generated from 'k8s-cluster-rsrc-use' from https://raw.githubusercontent.com/coreos/prometheus-operator/master/contrib/kube-prometheus/manifests/grafana-dashboardDefinitions.yaml
+# Generated from 'k8s-cluster-rsrc-use' from https://raw.githubusercontent.com/coreos/kube-prometheus/master/manifests/grafana-dashboardDefinitions.yaml
+# Do not change in-place! Before changing this file, read the following link:
+# https://github.com/helm/charts/tree/master/stable/prometheus-operator/hack
{{- if and .Values.grafana.enabled .Values.grafana.defaultDashboardsEnabled }}
apiVersion: v1
kind: ConfigMap
@@ -40,7 +42,7 @@ data:
"dashes": false,
"datasource": "$datasource",
"fill": 10,
- "id": 0,
+ "id": 1,
"legend": {
"avg": false,
"current": false,
@@ -69,7 +71,7 @@ data:
"steppedLine": false,
"targets": [
{
- "expr": "node:node_cpu_utilisation:avg1m * node:node_num_cpu:sum / scalar(sum(node:node_num_cpu:sum))",
+ "expr": "node:cluster_cpu_utilisation:ratio{cluster=\"$cluster\"}",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "{{`{{node}}`}}",
@@ -84,7 +86,7 @@ data:
"timeShift": null,
"title": "CPU Utilisation",
"tooltip": {
- "shared": true,
+ "shared": false,
"sort": 0,
"value_type": "individual"
},
@@ -126,7 +128,7 @@ data:
"dashes": false,
"datasource": "$datasource",
"fill": 10,
- "id": 1,
+ "id": 2,
"legend": {
"avg": false,
"current": false,
@@ -155,7 +157,7 @@ data:
"steppedLine": false,
"targets": [
{
- "expr": "node:node_cpu_saturation_load1: / scalar(sum(min(kube_pod_info) by (node)))",
+ "expr": "node:node_cpu_saturation_load1:{cluster=\"$cluster\"} / scalar(sum(min(kube_pod_info{cluster=\"$cluster\"}) by (node)))",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "{{`{{node}}`}}",
@@ -170,7 +172,7 @@ data:
"timeShift": null,
"title": "CPU Saturation (Load1)",
"tooltip": {
- "shared": true,
+ "shared": false,
"sort": 0,
"value_type": "individual"
},
@@ -224,7 +226,7 @@ data:
"dashes": false,
"datasource": "$datasource",
"fill": 10,
- "id": 2,
+ "id": 3,
"legend": {
"avg": false,
"current": false,
@@ -253,7 +255,7 @@ data:
"steppedLine": false,
"targets": [
{
- "expr": "node:node_memory_utilisation:ratio",
+ "expr": "node:cluster_memory_utilisation:ratio{cluster=\"$cluster\"}",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "{{`{{node}}`}}",
@@ -268,7 +270,7 @@ data:
"timeShift": null,
"title": "Memory Utilisation",
"tooltip": {
- "shared": true,
+ "shared": false,
"sort": 0,
"value_type": "individual"
},
@@ -310,7 +312,7 @@ data:
"dashes": false,
"datasource": "$datasource",
"fill": 10,
- "id": 3,
+ "id": 4,
"legend": {
"avg": false,
"current": false,
@@ -339,7 +341,7 @@ data:
"steppedLine": false,
"targets": [
{
- "expr": "node:node_memory_swap_io_bytes:sum_rate",
+ "expr": "node:node_memory_swap_io_bytes:sum_rate{cluster=\"$cluster\"}",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "{{`{{node}}`}}",
@@ -354,7 +356,7 @@ data:
"timeShift": null,
"title": "Memory Saturation (Swap I/O)",
"tooltip": {
- "shared": true,
+ "shared": false,
"sort": 0,
"value_type": "individual"
},
@@ -408,7 +410,7 @@ data:
"dashes": false,
"datasource": "$datasource",
"fill": 10,
- "id": 4,
+ "id": 5,
"legend": {
"avg": false,
"current": false,
@@ -437,7 +439,7 @@ data:
"steppedLine": false,
"targets": [
{
- "expr": "node:node_disk_utilisation:avg_irate / scalar(:kube_pod_info_node_count:)",
+ "expr": "node:node_disk_utilisation:avg_irate{cluster=\"$cluster\"} / scalar(:kube_pod_info_node_count:{cluster=\"$cluster\"})",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "{{`{{node}}`}}",
@@ -452,7 +454,7 @@ data:
"timeShift": null,
"title": "Disk IO Utilisation",
"tooltip": {
- "shared": true,
+ "shared": false,
"sort": 0,
"value_type": "individual"
},
@@ -494,7 +496,7 @@ data:
"dashes": false,
"datasource": "$datasource",
"fill": 10,
- "id": 5,
+ "id": 6,
"legend": {
"avg": false,
"current": false,
@@ -523,7 +525,7 @@ data:
"steppedLine": false,
"targets": [
{
- "expr": "node:node_disk_saturation:avg_irate / scalar(:kube_pod_info_node_count:)",
+ "expr": "node:node_disk_saturation:avg_irate{cluster=\"$cluster\"} / scalar(:kube_pod_info_node_count:{cluster=\"$cluster\"})",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "{{`{{node}}`}}",
@@ -538,7 +540,7 @@ data:
"timeShift": null,
"title": "Disk IO Saturation",
"tooltip": {
- "shared": true,
+ "shared": false,
"sort": 0,
"value_type": "individual"
},
@@ -592,7 +594,7 @@ data:
"dashes": false,
"datasource": "$datasource",
"fill": 10,
- "id": 6,
+ "id": 7,
"legend": {
"avg": false,
"current": false,
@@ -621,7 +623,7 @@ data:
"steppedLine": false,
"targets": [
{
- "expr": "node:node_net_utilisation:sum_irate",
+ "expr": "node:node_net_utilisation:sum_irate{cluster=\"$cluster\"}",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "{{`{{node}}`}}",
@@ -636,7 +638,7 @@ data:
"timeShift": null,
"title": "Net Utilisation (Transmitted)",
"tooltip": {
- "shared": true,
+ "shared": false,
"sort": 0,
"value_type": "individual"
},
@@ -678,7 +680,7 @@ data:
"dashes": false,
"datasource": "$datasource",
"fill": 10,
- "id": 7,
+ "id": 8,
"legend": {
"avg": false,
"current": false,
@@ -707,7 +709,7 @@ data:
"steppedLine": false,
"targets": [
{
- "expr": "node:node_net_saturation:sum_irate",
+ "expr": "node:node_net_saturation:sum_irate{cluster=\"$cluster\"}",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "{{`{{node}}`}}",
@@ -722,7 +724,7 @@ data:
"timeShift": null,
"title": "Net Saturation (Dropped)",
"tooltip": {
- "shared": true,
+ "shared": false,
"sort": 0,
"value_type": "individual"
},
@@ -776,7 +778,7 @@ data:
"dashes": false,
"datasource": "$datasource",
"fill": 10,
- "id": 8,
+ "id": 9,
"legend": {
"avg": false,
"current": false,
@@ -805,7 +807,7 @@ data:
"steppedLine": false,
"targets": [
{
- "expr": "sum(max(node_filesystem_size_bytes{fstype=\u007e\"ext[234]|btrfs|xfs|zfs\"} - node_filesystem_avail_bytes{fstype=\u007e\"ext[234]|btrfs|xfs|zfs\"}) by (device,pod,namespace)) by (pod,namespace)\n/ scalar(sum(max(node_filesystem_size_bytes{fstype=\u007e\"ext[234]|btrfs|xfs|zfs\"}) by (device,pod,namespace)))\n* on (namespace, pod) group_left (node) node_namespace_pod:kube_pod_info:\n",
+ "expr": "sum(max(node_filesystem_size_bytes{fstype=~\"ext[234]|btrfs|xfs|zfs\", cluster=\"$cluster\"} - node_filesystem_avail_bytes{fstype=~\"ext[234]|btrfs|xfs|zfs\", cluster=\"$cluster\"}) by (device,pod,namespace)) by (pod,namespace)\n/ scalar(sum(max(node_filesystem_size_bytes{fstype=~\"ext[234]|btrfs|xfs|zfs\", cluster=\"$cluster\"}) by (device,pod,namespace)))\n* on (namespace, pod) group_left (node) node_namespace_pod:kube_pod_info:{cluster=\"$cluster\"}\n",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "{{`{{node}}`}}",
@@ -820,7 +822,7 @@ data:
"timeShift": null,
"title": "Disk Capacity",
"tooltip": {
- "shared": true,
+ "shared": false,
"sort": 0,
"value_type": "individual"
},
@@ -865,7 +867,7 @@ data:
"schemaVersion": 14,
"style": "dark",
"tags": [
-
+ "kubernetes-mixin"
],
"templating": {
"list": [
@@ -884,6 +886,33 @@ data:
"refresh": 1,
"regex": "",
"type": "datasource"
+ },
+ {
+ "allValue": null,
+ "current": {
+ "text": "prod",
+ "value": "prod"
+ },
+ "datasource": "$datasource",
+ "hide": 2,
+ "includeAll": false,
+ "label": "cluster",
+ "multi": false,
+ "name": "cluster",
+ "options": [
+
+ ],
+ "query": "label_values(kube_node_info, cluster)",
+ "refresh": 1,
+ "regex": "",
+ "sort": 2,
+ "tagValuesQuery": "",
+ "tags": [
+
+ ],
+ "tagsQuery": "",
+ "type": "query",
+ "useTags": false
}
]
},
@@ -917,7 +946,7 @@ data:
]
},
"timezone": "",
- "title": "K8s / USE Method / Cluster",
+ "title": "Kubernetes / USE Method / Cluster",
"uid": "a6e7d1362e1ddbb79db21d5bb40d7137",
"version": 0
}
diff --git a/stable/prometheus-operator/templates/grafana/dashboards/grafana-coredns-k8s.yaml b/stable/prometheus-operator/templates/grafana/dashboards/k8s-coredns.yaml
similarity index 99%
rename from stable/prometheus-operator/templates/grafana/dashboards/grafana-coredns-k8s.yaml
rename to stable/prometheus-operator/templates/grafana/dashboards/k8s-coredns.yaml
index bd6306d94a55..b638913e8561 100644
--- a/stable/prometheus-operator/templates/grafana/dashboards/grafana-coredns-k8s.yaml
+++ b/stable/prometheus-operator/templates/grafana/dashboards/k8s-coredns.yaml
@@ -1,9 +1,9 @@
-# Added manually, should be changed in-place.
+# Added manually, can be changed in-place.
{{- if and .Values.grafana.enabled .Values.grafana.defaultDashboardsEnabled .Values.coreDns.enabled }}
apiVersion: v1
kind: ConfigMap
metadata:
- name: {{ printf "%s-%s" (include "prometheus-operator.fullname" $) "grafana-coredns-k8s" | trunc 63 | trimSuffix "-" }}
+ name: {{ printf "%s-%s" (include "prometheus-operator.fullname" $) "k8s-coredns" | trunc 63 | trimSuffix "-" }}
labels:
{{- if $.Values.grafana.sidecar.dashboards.label }}
{{ $.Values.grafana.sidecar.dashboards.label }}: "1"
@@ -11,7 +11,7 @@ metadata:
app: {{ template "prometheus-operator.name" $ }}-grafana
{{ include "prometheus-operator.labels" $ | indent 4 }}
data:
- grafana-coredns-k8s.json: |-
+ k8s-coredns.json: |-
{
"annotations": {
"list": [
diff --git a/stable/prometheus-operator/templates/grafana/dashboards/k8s-node-rsrc-use.yaml b/stable/prometheus-operator/templates/grafana/dashboards/k8s-node-rsrc-use.yaml
index cd68fe26da0e..d48caad403e7 100644
--- a/stable/prometheus-operator/templates/grafana/dashboards/k8s-node-rsrc-use.yaml
+++ b/stable/prometheus-operator/templates/grafana/dashboards/k8s-node-rsrc-use.yaml
@@ -1,4 +1,6 @@
-# Generated from 'k8s-node-rsrc-use' from https://raw.githubusercontent.com/coreos/prometheus-operator/master/contrib/kube-prometheus/manifests/grafana-dashboardDefinitions.yaml
+# Generated from 'k8s-node-rsrc-use' from https://raw.githubusercontent.com/coreos/kube-prometheus/master/manifests/grafana-dashboardDefinitions.yaml
+# Do not change in-place! To change this file, first read the following link:
+# https://github.com/helm/charts/tree/master/stable/prometheus-operator/hack
{{- if and .Values.grafana.enabled .Values.grafana.defaultDashboardsEnabled }}
apiVersion: v1
kind: ConfigMap
@@ -40,7 +42,7 @@ data:
"dashes": false,
"datasource": "$datasource",
"fill": 1,
- "id": 0,
+ "id": 1,
"legend": {
"avg": false,
"current": false,
@@ -69,7 +71,7 @@ data:
"steppedLine": false,
"targets": [
{
- "expr": "node:node_cpu_utilisation:avg1m{node=\"$node\"}",
+ "expr": "node:node_cpu_utilisation:avg1m{cluster=\"$cluster\", node=\"$node\"}",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "Utilisation",
@@ -84,7 +86,7 @@ data:
"timeShift": null,
"title": "CPU Utilisation",
"tooltip": {
- "shared": true,
+ "shared": false,
"sort": 0,
"value_type": "individual"
},
@@ -126,7 +128,7 @@ data:
"dashes": false,
"datasource": "$datasource",
"fill": 1,
- "id": 1,
+ "id": 2,
"legend": {
"avg": false,
"current": false,
@@ -155,7 +157,7 @@ data:
"steppedLine": false,
"targets": [
{
- "expr": "node:node_cpu_saturation_load1:{node=\"$node\"}",
+ "expr": "node:node_cpu_saturation_load1:{cluster=\"$cluster\", node=\"$node\"}",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "Saturation",
@@ -170,7 +172,7 @@ data:
"timeShift": null,
"title": "CPU Saturation (Load1)",
"tooltip": {
- "shared": true,
+ "shared": false,
"sort": 0,
"value_type": "individual"
},
@@ -224,7 +226,7 @@ data:
"dashes": false,
"datasource": "$datasource",
"fill": 1,
- "id": 2,
+ "id": 3,
"legend": {
"avg": false,
"current": false,
@@ -253,7 +255,7 @@ data:
"steppedLine": false,
"targets": [
{
- "expr": "node:node_memory_utilisation:{node=\"$node\"}",
+ "expr": "node:node_memory_utilisation:{cluster=\"$cluster\", node=\"$node\"}",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "Memory",
@@ -268,7 +270,7 @@ data:
"timeShift": null,
"title": "Memory Utilisation",
"tooltip": {
- "shared": true,
+ "shared": false,
"sort": 0,
"value_type": "individual"
},
@@ -310,7 +312,7 @@ data:
"dashes": false,
"datasource": "$datasource",
"fill": 1,
- "id": 3,
+ "id": 4,
"legend": {
"avg": false,
"current": false,
@@ -339,7 +341,7 @@ data:
"steppedLine": false,
"targets": [
{
- "expr": "node:node_memory_swap_io_bytes:sum_rate{node=\"$node\"}",
+ "expr": "node:node_memory_swap_io_bytes:sum_rate{cluster=\"$cluster\", node=\"$node\"}",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "Swap IO",
@@ -354,7 +356,7 @@ data:
"timeShift": null,
"title": "Memory Saturation (Swap I/O)",
"tooltip": {
- "shared": true,
+ "shared": false,
"sort": 0,
"value_type": "individual"
},
@@ -408,7 +410,7 @@ data:
"dashes": false,
"datasource": "$datasource",
"fill": 1,
- "id": 4,
+ "id": 5,
"legend": {
"avg": false,
"current": false,
@@ -437,7 +439,7 @@ data:
"steppedLine": false,
"targets": [
{
- "expr": "node:node_disk_utilisation:avg_irate{node=\"$node\"}",
+ "expr": "node:node_disk_utilisation:avg_irate{cluster=\"$cluster\", node=\"$node\"}",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "Utilisation",
@@ -452,7 +454,7 @@ data:
"timeShift": null,
"title": "Disk IO Utilisation",
"tooltip": {
- "shared": true,
+ "shared": false,
"sort": 0,
"value_type": "individual"
},
@@ -494,7 +496,7 @@ data:
"dashes": false,
"datasource": "$datasource",
"fill": 1,
- "id": 5,
+ "id": 6,
"legend": {
"avg": false,
"current": false,
@@ -523,7 +525,7 @@ data:
"steppedLine": false,
"targets": [
{
- "expr": "node:node_disk_saturation:avg_irate{node=\"$node\"}",
+ "expr": "node:node_disk_saturation:avg_irate{cluster=\"$cluster\", node=\"$node\"}",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "Saturation",
@@ -538,7 +540,7 @@ data:
"timeShift": null,
"title": "Disk IO Saturation",
"tooltip": {
- "shared": true,
+ "shared": false,
"sort": 0,
"value_type": "individual"
},
@@ -592,7 +594,7 @@ data:
"dashes": false,
"datasource": "$datasource",
"fill": 1,
- "id": 6,
+ "id": 7,
"legend": {
"avg": false,
"current": false,
@@ -621,7 +623,7 @@ data:
"steppedLine": false,
"targets": [
{
- "expr": "node:node_net_utilisation:sum_irate{node=\"$node\"}",
+ "expr": "node:node_net_utilisation:sum_irate{cluster=\"$cluster\", node=\"$node\"}",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "Utilisation",
@@ -636,7 +638,7 @@ data:
"timeShift": null,
"title": "Net Utilisation (Transmitted)",
"tooltip": {
- "shared": true,
+ "shared": false,
"sort": 0,
"value_type": "individual"
},
@@ -678,7 +680,7 @@ data:
"dashes": false,
"datasource": "$datasource",
"fill": 1,
- "id": 7,
+ "id": 8,
"legend": {
"avg": false,
"current": false,
@@ -707,7 +709,7 @@ data:
"steppedLine": false,
"targets": [
{
- "expr": "node:node_net_saturation:sum_irate{node=\"$node\"}",
+ "expr": "node:node_net_saturation:sum_irate{cluster=\"$cluster\", node=\"$node\"}",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "Saturation",
@@ -722,7 +724,7 @@ data:
"timeShift": null,
"title": "Net Saturation (Dropped)",
"tooltip": {
- "shared": true,
+ "shared": false,
"sort": 0,
"value_type": "individual"
},
@@ -776,7 +778,7 @@ data:
"dashes": false,
"datasource": "$datasource",
"fill": 1,
- "id": 8,
+ "id": 9,
"legend": {
"avg": false,
"current": false,
@@ -805,7 +807,7 @@ data:
"steppedLine": false,
"targets": [
{
- "expr": "node:node_filesystem_usage:\n* on (namespace, pod) group_left (node) node_namespace_pod:kube_pod_info:{node=\"$node\"}\n",
+ "expr": "node:node_filesystem_usage:{cluster=\"$cluster\"}\n* on (namespace, pod) group_left (node) node_namespace_pod:kube_pod_info:{cluster=\"$cluster\", node=\"$node\"}\n",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "{{`{{device}}`}}",
@@ -820,7 +822,7 @@ data:
"timeShift": null,
"title": "Disk Utilisation",
"tooltip": {
- "shared": true,
+ "shared": false,
"sort": 0,
"value_type": "individual"
},
@@ -865,7 +867,7 @@ data:
"schemaVersion": 14,
"style": "dark",
"tags": [
-
+ "kubernetes-mixin"
],
"templating": {
"list": [
@@ -885,6 +887,33 @@ data:
"regex": "",
"type": "datasource"
},
+ {
+ "allValue": null,
+ "current": {
+ "text": "prod",
+ "value": "prod"
+ },
+ "datasource": "$datasource",
+ "hide": 2,
+ "includeAll": false,
+ "label": "cluster",
+ "multi": false,
+ "name": "cluster",
+ "options": [
+
+ ],
+ "query": "label_values(kube_node_info, cluster)",
+ "refresh": 1,
+ "regex": "",
+ "sort": 2,
+ "tagValuesQuery": "",
+ "tags": [
+
+ ],
+ "tagsQuery": "",
+ "type": "query",
+ "useTags": false
+ },
{
"allValue": null,
"current": {
@@ -900,7 +929,7 @@ data:
"options": [
],
- "query": "label_values(kube_node_info, node)",
+ "query": "label_values(kube_node_info{cluster=\"$cluster\"}, node)",
"refresh": 1,
"regex": "",
"sort": 2,
@@ -944,7 +973,7 @@ data:
]
},
"timezone": "",
- "title": "K8s / USE Method / Node",
+ "title": "Kubernetes / USE Method / Node",
"uid": "4ac4f123aae0ff6dbaf4f4f66120033b",
"version": 0
}
diff --git a/stable/prometheus-operator/templates/grafana/dashboards/k8s-resources-cluster.yaml b/stable/prometheus-operator/templates/grafana/dashboards/k8s-resources-cluster.yaml
index c5bb0f753899..9f5388bc218d 100644
--- a/stable/prometheus-operator/templates/grafana/dashboards/k8s-resources-cluster.yaml
+++ b/stable/prometheus-operator/templates/grafana/dashboards/k8s-resources-cluster.yaml
@@ -1,4 +1,6 @@
-# Generated from 'k8s-resources-cluster' from https://raw.githubusercontent.com/coreos/prometheus-operator/master/contrib/kube-prometheus/manifests/grafana-dashboardDefinitions.yaml
+# Generated from 'k8s-resources-cluster' from https://raw.githubusercontent.com/coreos/kube-prometheus/master/manifests/grafana-dashboardDefinitions.yaml
+# Do not change in-place! To change this file, first read the following link:
+# https://github.com/helm/charts/tree/master/stable/prometheus-operator/hack
{{- if and .Values.grafana.enabled .Values.grafana.defaultDashboardsEnabled }}
apiVersion: v1
kind: ConfigMap
@@ -41,7 +43,7 @@ data:
"datasource": "$datasource",
"fill": 1,
"format": "percentunit",
- "id": 0,
+ "id": 1,
"legend": {
"avg": false,
"current": false,
@@ -70,7 +72,7 @@ data:
"steppedLine": false,
"targets": [
{
- "expr": "1 - avg(rate(node_cpu_seconds_total{mode=\"idle\"}[1m]))",
+ "expr": "1 - avg(rate(node_cpu_seconds_total{mode=\"idle\", cluster=\"$cluster\"}[1m]))",
"format": "time_series",
"instant": true,
"intervalFactor": 2,
@@ -82,7 +84,7 @@ data:
"timeShift": null,
"title": "CPU Utilisation",
"tooltip": {
- "shared": true,
+ "shared": false,
"sort": 0,
"value_type": "individual"
},
@@ -125,7 +127,7 @@ data:
"datasource": "$datasource",
"fill": 1,
"format": "percentunit",
- "id": 1,
+ "id": 2,
"legend": {
"avg": false,
"current": false,
@@ -154,7 +156,7 @@ data:
"steppedLine": false,
"targets": [
{
- "expr": "sum(kube_pod_container_resource_requests_cpu_cores) / sum(node:node_num_cpu:sum)",
+ "expr": "sum(kube_pod_container_resource_requests_cpu_cores{cluster=\"$cluster\"}) / sum(node:node_num_cpu:sum{cluster=\"$cluster\"})",
"format": "time_series",
"instant": true,
"intervalFactor": 2,
@@ -166,7 +168,7 @@ data:
"timeShift": null,
"title": "CPU Requests Commitment",
"tooltip": {
- "shared": true,
+ "shared": false,
"sort": 0,
"value_type": "individual"
},
@@ -209,7 +211,7 @@ data:
"datasource": "$datasource",
"fill": 1,
"format": "percentunit",
- "id": 2,
+ "id": 3,
"legend": {
"avg": false,
"current": false,
@@ -238,7 +240,7 @@ data:
"steppedLine": false,
"targets": [
{
- "expr": "sum(kube_pod_container_resource_limits_cpu_cores) / sum(node:node_num_cpu:sum)",
+ "expr": "sum(kube_pod_container_resource_limits_cpu_cores{cluster=\"$cluster\"}) / sum(node:node_num_cpu:sum{cluster=\"$cluster\"})",
"format": "time_series",
"instant": true,
"intervalFactor": 2,
@@ -250,7 +252,7 @@ data:
"timeShift": null,
"title": "CPU Limits Commitment",
"tooltip": {
- "shared": true,
+ "shared": false,
"sort": 0,
"value_type": "individual"
},
@@ -293,7 +295,7 @@ data:
"datasource": "$datasource",
"fill": 1,
"format": "percentunit",
- "id": 3,
+ "id": 4,
"legend": {
"avg": false,
"current": false,
@@ -322,7 +324,7 @@ data:
"steppedLine": false,
"targets": [
{
- "expr": "1 - sum(:node_memory_MemFreeCachedBuffers_bytes:sum) / sum(:node_memory_MemTotal_bytes:sum)",
+ "expr": "1 - sum(:node_memory_MemFreeCachedBuffers_bytes:sum{cluster=\"$cluster\"}) / sum(:node_memory_MemTotal_bytes:sum{cluster=\"$cluster\"})",
"format": "time_series",
"instant": true,
"intervalFactor": 2,
@@ -334,7 +336,7 @@ data:
"timeShift": null,
"title": "Memory Utilisation",
"tooltip": {
- "shared": true,
+ "shared": false,
"sort": 0,
"value_type": "individual"
},
@@ -377,7 +379,7 @@ data:
"datasource": "$datasource",
"fill": 1,
"format": "percentunit",
- "id": 4,
+ "id": 5,
"legend": {
"avg": false,
"current": false,
@@ -406,7 +408,7 @@ data:
"steppedLine": false,
"targets": [
{
- "expr": "sum(kube_pod_container_resource_requests_memory_bytes) / sum(:node_memory_MemTotal_bytes:sum)",
+ "expr": "sum(kube_pod_container_resource_requests_memory_bytes{cluster=\"$cluster\"}) / sum(:node_memory_MemTotal_bytes:sum{cluster=\"$cluster\"})",
"format": "time_series",
"instant": true,
"intervalFactor": 2,
@@ -418,7 +420,7 @@ data:
"timeShift": null,
"title": "Memory Requests Commitment",
"tooltip": {
- "shared": true,
+ "shared": false,
"sort": 0,
"value_type": "individual"
},
@@ -461,7 +463,7 @@ data:
"datasource": "$datasource",
"fill": 1,
"format": "percentunit",
- "id": 5,
+ "id": 6,
"legend": {
"avg": false,
"current": false,
@@ -490,7 +492,7 @@ data:
"steppedLine": false,
"targets": [
{
- "expr": "sum(kube_pod_container_resource_limits_memory_bytes) / sum(:node_memory_MemTotal_bytes:sum)",
+ "expr": "sum(kube_pod_container_resource_limits_memory_bytes{cluster=\"$cluster\"}) / sum(:node_memory_MemTotal_bytes:sum{cluster=\"$cluster\"})",
"format": "time_series",
"instant": true,
"intervalFactor": 2,
@@ -502,7 +504,7 @@ data:
"timeShift": null,
"title": "Memory Limits Commitment",
"tooltip": {
- "shared": true,
+ "shared": false,
"sort": 0,
"value_type": "individual"
},
@@ -556,7 +558,7 @@ data:
"dashes": false,
"datasource": "$datasource",
"fill": 10,
- "id": 6,
+ "id": 7,
"legend": {
"avg": false,
"current": false,
@@ -585,7 +587,7 @@ data:
"steppedLine": false,
"targets": [
{
- "expr": "sum(namespace_pod_name_container_name:container_cpu_usage_seconds_total:sum_rate) by (namespace)",
+ "expr": "sum(namespace_pod_name_container_name:container_cpu_usage_seconds_total:sum_rate{cluster=\"$cluster\"}) by (namespace)",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "{{`{{namespace}}`}}",
@@ -600,7 +602,7 @@ data:
"timeShift": null,
"title": "CPU Usage",
"tooltip": {
- "shared": true,
+ "shared": false,
"sort": 0,
"value_type": "individual"
},
@@ -654,7 +656,7 @@ data:
"dashes": false,
"datasource": "$datasource",
"fill": 1,
- "id": 7,
+ "id": 8,
"legend": {
"avg": false,
"current": false,
@@ -688,6 +690,42 @@ data:
"pattern": "Time",
"type": "hidden"
},
+ {
+ "alias": "Pods",
+ "colorMode": null,
+ "colors": [
+
+ ],
+ "dateFormat": "YYYY-MM-DD HH:mm:ss",
+ "decimals": 0,
+ "link": true,
+ "linkTooltip": "Drill down to pods",
+ "linkUrl": "/d/85a562078cdf77779eaa1add43ccec1e/k8s-resources-namespace?var-datasource=$datasource&var-cluster=$cluster&var-namespace=$__cell_1",
+ "pattern": "Value #A",
+ "thresholds": [
+
+ ],
+ "type": "number",
+ "unit": "short"
+ },
+ {
+ "alias": "Workloads",
+ "colorMode": null,
+ "colors": [
+
+ ],
+ "dateFormat": "YYYY-MM-DD HH:mm:ss",
+ "decimals": 0,
+ "link": true,
+ "linkTooltip": "Drill down to workloads",
+ "linkUrl": "/d/a87fb0d919ec0ea5f6543124e16c42a5/k8s-resources-workloads-namespace?var-datasource=$datasource&var-cluster=$cluster&var-namespace=$__cell_1",
+ "pattern": "Value #B",
+ "thresholds": [
+
+ ],
+ "type": "number",
+ "unit": "short"
+ },
{
"alias": "CPU Usage",
"colorMode": null,
@@ -699,7 +737,7 @@ data:
"link": false,
"linkTooltip": "Drill down",
"linkUrl": "",
- "pattern": "Value #A",
+ "pattern": "Value #C",
"thresholds": [
],
@@ -717,7 +755,7 @@ data:
"link": false,
"linkTooltip": "Drill down",
"linkUrl": "",
- "pattern": "Value #B",
+ "pattern": "Value #D",
"thresholds": [
],
@@ -735,7 +773,7 @@ data:
"link": false,
"linkTooltip": "Drill down",
"linkUrl": "",
- "pattern": "Value #C",
+ "pattern": "Value #E",
"thresholds": [
],
@@ -753,7 +791,7 @@ data:
"link": false,
"linkTooltip": "Drill down",
"linkUrl": "",
- "pattern": "Value #D",
+ "pattern": "Value #F",
"thresholds": [
],
@@ -771,7 +809,7 @@ data:
"link": false,
"linkTooltip": "Drill down",
"linkUrl": "",
- "pattern": "Value #E",
+ "pattern": "Value #G",
"thresholds": [
],
@@ -787,8 +825,8 @@ data:
"dateFormat": "YYYY-MM-DD HH:mm:ss",
"decimals": 2,
"link": true,
- "linkTooltip": "Drill down",
- "linkUrl": "/d/85a562078cdf77779eaa1add43ccec1e/k8s-resources-namespace?var-datasource=$datasource&var-namespace=$__cell",
+ "linkTooltip": "Drill down to pods",
+ "linkUrl": "/d/85a562078cdf77779eaa1add43ccec1e/k8s-resources-namespace?var-datasource=$datasource&var-cluster=$cluster&var-namespace=$__cell",
"pattern": "namespace",
"thresholds": [
@@ -814,7 +852,7 @@ data:
],
"targets": [
{
- "expr": "sum(namespace_pod_name_container_name:container_cpu_usage_seconds_total:sum_rate) by (namespace)",
+ "expr": "count(mixin_pod_workload{cluster=\"$cluster\"}) by (namespace)",
"format": "table",
"instant": true,
"intervalFactor": 2,
@@ -823,7 +861,7 @@ data:
"step": 10
},
{
- "expr": "sum(kube_pod_container_resource_requests_cpu_cores) by (namespace)",
+ "expr": "count(avg(mixin_pod_workload{cluster=\"$cluster\"}) by (workload, namespace)) by (namespace)",
"format": "table",
"instant": true,
"intervalFactor": 2,
@@ -832,7 +870,7 @@ data:
"step": 10
},
{
- "expr": "sum(namespace_pod_name_container_name:container_cpu_usage_seconds_total:sum_rate) by (namespace) / sum(kube_pod_container_resource_requests_cpu_cores) by (namespace)",
+ "expr": "sum(namespace_pod_name_container_name:container_cpu_usage_seconds_total:sum_rate{cluster=\"$cluster\"}) by (namespace)",
"format": "table",
"instant": true,
"intervalFactor": 2,
@@ -841,7 +879,7 @@ data:
"step": 10
},
{
- "expr": "sum(kube_pod_container_resource_limits_cpu_cores) by (namespace)",
+ "expr": "sum(kube_pod_container_resource_requests_cpu_cores{cluster=\"$cluster\"}) by (namespace)",
"format": "table",
"instant": true,
"intervalFactor": 2,
@@ -850,13 +888,31 @@ data:
"step": 10
},
{
- "expr": "sum(namespace_pod_name_container_name:container_cpu_usage_seconds_total:sum_rate) by (namespace) / sum(kube_pod_container_resource_limits_cpu_cores) by (namespace)",
+ "expr": "sum(namespace_pod_name_container_name:container_cpu_usage_seconds_total:sum_rate{cluster=\"$cluster\"}) by (namespace) / sum(kube_pod_container_resource_requests_cpu_cores{cluster=\"$cluster\"}) by (namespace)",
"format": "table",
"instant": true,
"intervalFactor": 2,
"legendFormat": "",
"refId": "E",
"step": 10
+ },
+ {
+ "expr": "sum(kube_pod_container_resource_limits_cpu_cores{cluster=\"$cluster\"}) by (namespace)",
+ "format": "table",
+ "instant": true,
+ "intervalFactor": 2,
+ "legendFormat": "",
+ "refId": "F",
+ "step": 10
+ },
+ {
+ "expr": "sum(namespace_pod_name_container_name:container_cpu_usage_seconds_total:sum_rate{cluster=\"$cluster\"}) by (namespace) / sum(kube_pod_container_resource_limits_cpu_cores{cluster=\"$cluster\"}) by (namespace)",
+ "format": "table",
+ "instant": true,
+ "intervalFactor": 2,
+ "legendFormat": "",
+ "refId": "G",
+ "step": 10
}
],
"thresholds": [
@@ -866,7 +922,7 @@ data:
"timeShift": null,
"title": "CPU Quota",
"tooltip": {
- "shared": true,
+ "shared": false,
"sort": 0,
"value_type": "individual"
},
@@ -921,7 +977,7 @@ data:
"dashes": false,
"datasource": "$datasource",
"fill": 10,
- "id": 8,
+ "id": 9,
"legend": {
"avg": false,
"current": false,
@@ -950,7 +1006,7 @@ data:
"steppedLine": false,
"targets": [
{
- "expr": "sum(container_memory_rss{container_name!=\"\"}) by (namespace)",
+ "expr": "sum(container_memory_rss{cluster=\"$cluster\", container_name!=\"\"}) by (namespace)",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "{{`{{namespace}}`}}",
@@ -965,7 +1021,7 @@ data:
"timeShift": null,
"title": "Memory Usage (w/o cache)",
"tooltip": {
- "shared": true,
+ "shared": false,
"sort": 0,
"value_type": "individual"
},
@@ -981,7 +1037,7 @@ data:
},
"yaxes": [
{
- "format": "decbytes",
+ "format": "bytes",
"label": null,
"logBase": 1,
"max": null,
@@ -1019,7 +1075,7 @@ data:
"dashes": false,
"datasource": "$datasource",
"fill": 1,
- "id": 9,
+ "id": 10,
"legend": {
"avg": false,
"current": false,
@@ -1053,6 +1109,42 @@ data:
"pattern": "Time",
"type": "hidden"
},
+ {
+ "alias": "Pods",
+ "colorMode": null,
+ "colors": [
+
+ ],
+ "dateFormat": "YYYY-MM-DD HH:mm:ss",
+ "decimals": 0,
+ "link": true,
+ "linkTooltip": "Drill down to pods",
+ "linkUrl": "/d/85a562078cdf77779eaa1add43ccec1e/k8s-resources-namespace?var-datasource=$datasource&var-cluster=$cluster&var-namespace=$__cell_1",
+ "pattern": "Value #A",
+ "thresholds": [
+
+ ],
+ "type": "number",
+ "unit": "short"
+ },
+ {
+ "alias": "Workloads",
+ "colorMode": null,
+ "colors": [
+
+ ],
+ "dateFormat": "YYYY-MM-DD HH:mm:ss",
+ "decimals": 0,
+ "link": true,
+ "linkTooltip": "Drill down to workloads",
+ "linkUrl": "/d/a87fb0d919ec0ea5f6543124e16c42a5/k8s-resources-workloads-namespace?var-datasource=$datasource&var-cluster=$cluster&var-namespace=$__cell_1",
+ "pattern": "Value #B",
+ "thresholds": [
+
+ ],
+ "type": "number",
+ "unit": "short"
+ },
{
"alias": "Memory Usage",
"colorMode": null,
@@ -1064,12 +1156,12 @@ data:
"link": false,
"linkTooltip": "Drill down",
"linkUrl": "",
- "pattern": "Value #A",
+ "pattern": "Value #C",
"thresholds": [
],
"type": "number",
- "unit": "decbytes"
+ "unit": "bytes"
},
{
"alias": "Memory Requests",
@@ -1082,12 +1174,12 @@ data:
"link": false,
"linkTooltip": "Drill down",
"linkUrl": "",
- "pattern": "Value #B",
+ "pattern": "Value #D",
"thresholds": [
],
"type": "number",
- "unit": "decbytes"
+ "unit": "bytes"
},
{
"alias": "Memory Requests %",
@@ -1100,7 +1192,7 @@ data:
"link": false,
"linkTooltip": "Drill down",
"linkUrl": "",
- "pattern": "Value #C",
+ "pattern": "Value #E",
"thresholds": [
],
@@ -1118,12 +1210,12 @@ data:
"link": false,
"linkTooltip": "Drill down",
"linkUrl": "",
- "pattern": "Value #D",
+ "pattern": "Value #F",
"thresholds": [
],
"type": "number",
- "unit": "decbytes"
+ "unit": "bytes"
},
{
"alias": "Memory Limits %",
@@ -1136,7 +1228,7 @@ data:
"link": false,
"linkTooltip": "Drill down",
"linkUrl": "",
- "pattern": "Value #E",
+ "pattern": "Value #G",
"thresholds": [
],
@@ -1152,8 +1244,8 @@ data:
"dateFormat": "YYYY-MM-DD HH:mm:ss",
"decimals": 2,
"link": true,
- "linkTooltip": "Drill down",
- "linkUrl": "/d/85a562078cdf77779eaa1add43ccec1e/k8s-resources-namespace?var-datasource=$datasource&var-namespace=$__cell",
+ "linkTooltip": "Drill down to pods",
+ "linkUrl": "/d/85a562078cdf77779eaa1add43ccec1e/k8s-resources-namespace?var-datasource=$datasource&var-cluster=$cluster&var-namespace=$__cell",
"pattern": "namespace",
"thresholds": [
@@ -1179,7 +1271,7 @@ data:
],
"targets": [
{
- "expr": "sum(container_memory_rss{container_name!=\"\"}) by (namespace)",
+ "expr": "count(mixin_pod_workload{cluster=\"$cluster\"}) by (namespace)",
"format": "table",
"instant": true,
"intervalFactor": 2,
@@ -1188,7 +1280,7 @@ data:
"step": 10
},
{
- "expr": "sum(kube_pod_container_resource_requests_memory_bytes) by (namespace)",
+ "expr": "count(avg(mixin_pod_workload{cluster=\"$cluster\"}) by (workload, namespace)) by (namespace)",
"format": "table",
"instant": true,
"intervalFactor": 2,
@@ -1197,7 +1289,7 @@ data:
"step": 10
},
{
- "expr": "sum(container_memory_rss{container_name!=\"\"}) by (namespace) / sum(kube_pod_container_resource_requests_memory_bytes) by (namespace)",
+ "expr": "sum(container_memory_rss{cluster=\"$cluster\", container_name!=\"\"}) by (namespace)",
"format": "table",
"instant": true,
"intervalFactor": 2,
@@ -1206,7 +1298,7 @@ data:
"step": 10
},
{
- "expr": "sum(kube_pod_container_resource_limits_memory_bytes) by (namespace)",
+ "expr": "sum(kube_pod_container_resource_requests_memory_bytes{cluster=\"$cluster\"}) by (namespace)",
"format": "table",
"instant": true,
"intervalFactor": 2,
@@ -1215,13 +1307,31 @@ data:
"step": 10
},
{
- "expr": "sum(container_memory_rss{container_name!=\"\"}) by (namespace) / sum(kube_pod_container_resource_limits_memory_bytes) by (namespace)",
+ "expr": "sum(container_memory_rss{cluster=\"$cluster\", container_name!=\"\"}) by (namespace) / sum(kube_pod_container_resource_requests_memory_bytes{cluster=\"$cluster\"}) by (namespace)",
"format": "table",
"instant": true,
"intervalFactor": 2,
"legendFormat": "",
"refId": "E",
"step": 10
+ },
+ {
+ "expr": "sum(kube_pod_container_resource_limits_memory_bytes{cluster=\"$cluster\"}) by (namespace)",
+ "format": "table",
+ "instant": true,
+ "intervalFactor": 2,
+ "legendFormat": "",
+ "refId": "F",
+ "step": 10
+ },
+ {
+ "expr": "sum(container_memory_rss{cluster=\"$cluster\", container_name!=\"\"}) by (namespace) / sum(kube_pod_container_resource_limits_memory_bytes{cluster=\"$cluster\"}) by (namespace)",
+ "format": "table",
+ "instant": true,
+ "intervalFactor": 2,
+ "legendFormat": "",
+ "refId": "G",
+ "step": 10
}
],
"thresholds": [
@@ -1231,7 +1341,7 @@ data:
"timeShift": null,
"title": "Requests by Namespace",
"tooltip": {
- "shared": true,
+ "shared": false,
"sort": 0,
"value_type": "individual"
},
@@ -1277,7 +1387,7 @@ data:
"schemaVersion": 14,
"style": "dark",
"tags": [
-
+ "kubernetes-mixin"
],
"templating": {
"list": [
@@ -1296,6 +1406,33 @@ data:
"refresh": 1,
"regex": "",
"type": "datasource"
+ },
+ {
+ "allValue": null,
+ "current": {
+ "text": "prod",
+ "value": "prod"
+ },
+ "datasource": "$datasource",
+ "hide": 2,
+ "includeAll": false,
+ "label": "cluster",
+ "multi": false,
+ "name": "cluster",
+ "options": [
+
+ ],
+ "query": "label_values(node_cpu_seconds_total, cluster)",
+ "refresh": 1,
+ "regex": "",
+ "sort": 2,
+ "tagValuesQuery": "",
+ "tags": [
+
+ ],
+ "tagsQuery": "",
+ "type": "query",
+ "useTags": false
}
]
},
@@ -1329,7 +1466,7 @@ data:
]
},
"timezone": "",
- "title": "K8s / Compute Resources / Cluster",
+ "title": "Kubernetes / Compute Resources / Cluster",
"uid": "efa86fd1d0c121a26444b636a3f509a8",
"version": 0
}
diff --git a/stable/prometheus-operator/templates/grafana/dashboards/k8s-resources-namespace.yaml b/stable/prometheus-operator/templates/grafana/dashboards/k8s-resources-namespace.yaml
index fc3c2e3feb68..966263e6fa6d 100644
--- a/stable/prometheus-operator/templates/grafana/dashboards/k8s-resources-namespace.yaml
+++ b/stable/prometheus-operator/templates/grafana/dashboards/k8s-resources-namespace.yaml
@@ -1,4 +1,6 @@
-# Generated from 'k8s-resources-namespace' from https://raw.githubusercontent.com/coreos/prometheus-operator/master/contrib/kube-prometheus/manifests/grafana-dashboardDefinitions.yaml
+# Generated from 'k8s-resources-namespace' from https://raw.githubusercontent.com/coreos/kube-prometheus/master/manifests/grafana-dashboardDefinitions.yaml
+# Do not change in-place! To change this file, first read the following link:
+# https://github.com/helm/charts/tree/master/stable/prometheus-operator/hack
{{- if and .Values.grafana.enabled .Values.grafana.defaultDashboardsEnabled }}
apiVersion: v1
kind: ConfigMap
@@ -40,7 +42,7 @@ data:
"dashes": false,
"datasource": "$datasource",
"fill": 10,
- "id": 0,
+ "id": 1,
"legend": {
"avg": false,
"current": false,
@@ -69,7 +71,7 @@ data:
"steppedLine": false,
"targets": [
{
- "expr": "sum(namespace_pod_name_container_name:container_cpu_usage_seconds_total:sum_rate{namespace=\"$namespace\"}) by (pod_name)",
+ "expr": "sum(namespace_pod_name_container_name:container_cpu_usage_seconds_total:sum_rate{cluster=\"$cluster\", namespace=\"$namespace\"}) by (pod_name)",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "{{`{{pod_name}}`}}",
@@ -84,7 +86,7 @@ data:
"timeShift": null,
"title": "CPU Usage",
"tooltip": {
- "shared": true,
+ "shared": false,
"sort": 0,
"value_type": "individual"
},
@@ -138,7 +140,7 @@ data:
"dashes": false,
"datasource": "$datasource",
"fill": 1,
- "id": 1,
+ "id": 2,
"legend": {
"avg": false,
"current": false,
@@ -272,7 +274,7 @@ data:
"decimals": 2,
"link": true,
"linkTooltip": "Drill down",
- "linkUrl": "/d/6581e46e4e5c7ba40a07646395ef7b23/k8s-resources-pod?var-datasource=$datasource&var-namespace=$namespace&var-pod=$__cell",
+ "linkUrl": "/d/6581e46e4e5c7ba40a07646395ef7b23/k8s-resources-pod?var-datasource=$datasource&var-cluster=$cluster&var-namespace=$namespace&var-pod=$__cell",
"pattern": "pod",
"thresholds": [
@@ -298,7 +300,7 @@ data:
],
"targets": [
{
- "expr": "sum(label_replace(namespace_pod_name_container_name:container_cpu_usage_seconds_total:sum_rate{namespace=\"$namespace\"}, \"pod\", \"$1\", \"pod_name\", \"(.*)\")) by (pod)",
+ "expr": "sum(label_replace(namespace_pod_name_container_name:container_cpu_usage_seconds_total:sum_rate{cluster=\"$cluster\", namespace=\"$namespace\"}, \"pod\", \"$1\", \"pod_name\", \"(.*)\")) by (pod)",
"format": "table",
"instant": true,
"intervalFactor": 2,
@@ -307,7 +309,7 @@ data:
"step": 10
},
{
- "expr": "sum(kube_pod_container_resource_requests_cpu_cores{namespace=\"$namespace\"}) by (pod)",
+ "expr": "sum(kube_pod_container_resource_requests_cpu_cores{cluster=\"$cluster\", namespace=\"$namespace\"}) by (pod)",
"format": "table",
"instant": true,
"intervalFactor": 2,
@@ -316,7 +318,7 @@ data:
"step": 10
},
{
- "expr": "sum(label_replace(namespace_pod_name_container_name:container_cpu_usage_seconds_total:sum_rate{namespace=\"$namespace\"}, \"pod\", \"$1\", \"pod_name\", \"(.*)\")) by (pod) / sum(kube_pod_container_resource_requests_cpu_cores{namespace=\"$namespace\"}) by (pod)",
+ "expr": "sum(label_replace(namespace_pod_name_container_name:container_cpu_usage_seconds_total:sum_rate{cluster=\"$cluster\", namespace=\"$namespace\"}, \"pod\", \"$1\", \"pod_name\", \"(.*)\")) by (pod) / sum(kube_pod_container_resource_requests_cpu_cores{cluster=\"$cluster\", namespace=\"$namespace\"}) by (pod)",
"format": "table",
"instant": true,
"intervalFactor": 2,
@@ -325,7 +327,7 @@ data:
"step": 10
},
{
- "expr": "sum(kube_pod_container_resource_limits_cpu_cores{namespace=\"$namespace\"}) by (pod)",
+ "expr": "sum(kube_pod_container_resource_limits_cpu_cores{cluster=\"$cluster\", namespace=\"$namespace\"}) by (pod)",
"format": "table",
"instant": true,
"intervalFactor": 2,
@@ -334,7 +336,7 @@ data:
"step": 10
},
{
- "expr": "sum(label_replace(namespace_pod_name_container_name:container_cpu_usage_seconds_total:sum_rate{namespace=\"$namespace\"}, \"pod\", \"$1\", \"pod_name\", \"(.*)\")) by (pod) / sum(kube_pod_container_resource_limits_cpu_cores{namespace=\"$namespace\"}) by (pod)",
+ "expr": "sum(label_replace(namespace_pod_name_container_name:container_cpu_usage_seconds_total:sum_rate{cluster=\"$cluster\", namespace=\"$namespace\"}, \"pod\", \"$1\", \"pod_name\", \"(.*)\")) by (pod) / sum(kube_pod_container_resource_limits_cpu_cores{cluster=\"$cluster\", namespace=\"$namespace\"}) by (pod)",
"format": "table",
"instant": true,
"intervalFactor": 2,
@@ -350,7 +352,7 @@ data:
"timeShift": null,
"title": "CPU Quota",
"tooltip": {
- "shared": true,
+ "shared": false,
"sort": 0,
"value_type": "individual"
},
@@ -405,7 +407,7 @@ data:
"dashes": false,
"datasource": "$datasource",
"fill": 10,
- "id": 2,
+ "id": 3,
"legend": {
"avg": false,
"current": false,
@@ -434,7 +436,7 @@ data:
"steppedLine": false,
"targets": [
{
- "expr": "sum(container_memory_usage_bytes{namespace=\"$namespace\", container_name!=\"\"}) by (pod_name)",
+ "expr": "sum(container_memory_usage_bytes{cluster=\"$cluster\", namespace=\"$namespace\", container_name!=\"\"}) by (pod_name)",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "{{`{{pod_name}}`}}",
@@ -447,9 +449,9 @@ data:
],
"timeFrom": null,
"timeShift": null,
- "title": "Memory Usage",
+ "title": "Memory Usage (w/o cache)",
"tooltip": {
- "shared": true,
+ "shared": false,
"sort": 0,
"value_type": "individual"
},
@@ -465,7 +467,7 @@ data:
},
"yaxes": [
{
- "format": "decbytes",
+ "format": "bytes",
"label": null,
"logBase": 1,
"max": null,
@@ -503,7 +505,7 @@ data:
"dashes": false,
"datasource": "$datasource",
"fill": 1,
- "id": 3,
+ "id": 4,
"legend": {
"avg": false,
"current": false,
@@ -553,7 +555,7 @@ data:
],
"type": "number",
- "unit": "decbytes"
+ "unit": "bytes"
},
{
"alias": "Memory Requests",
@@ -571,7 +573,7 @@ data:
],
"type": "number",
- "unit": "decbytes"
+ "unit": "bytes"
},
{
"alias": "Memory Requests %",
@@ -607,7 +609,7 @@ data:
],
"type": "number",
- "unit": "decbytes"
+ "unit": "bytes"
},
{
"alias": "Memory Limits %",
@@ -627,6 +629,60 @@ data:
"type": "number",
"unit": "percentunit"
},
+ {
+ "alias": "Memory Usage (RSS)",
+ "colorMode": null,
+ "colors": [
+
+ ],
+ "dateFormat": "YYYY-MM-DD HH:mm:ss",
+ "decimals": 2,
+ "link": false,
+ "linkTooltip": "Drill down",
+ "linkUrl": "",
+ "pattern": "Value #F",
+ "thresholds": [
+
+ ],
+ "type": "number",
+ "unit": "bytes"
+ },
+ {
+ "alias": "Memory Usage (Cache)",
+ "colorMode": null,
+ "colors": [
+
+ ],
+ "dateFormat": "YYYY-MM-DD HH:mm:ss",
+ "decimals": 2,
+ "link": false,
+ "linkTooltip": "Drill down",
+ "linkUrl": "",
+ "pattern": "Value #G",
+ "thresholds": [
+
+ ],
+ "type": "number",
+ "unit": "bytes"
+ },
+ {
+        "alias": "Memory Usage (Swap)",
+ "colorMode": null,
+ "colors": [
+
+ ],
+ "dateFormat": "YYYY-MM-DD HH:mm:ss",
+ "decimals": 2,
+ "link": false,
+ "linkTooltip": "Drill down",
+ "linkUrl": "",
+ "pattern": "Value #H",
+ "thresholds": [
+
+ ],
+ "type": "number",
+ "unit": "bytes"
+ },
{
"alias": "Pod",
"colorMode": null,
@@ -637,7 +693,7 @@ data:
"decimals": 2,
"link": true,
"linkTooltip": "Drill down",
- "linkUrl": "/d/6581e46e4e5c7ba40a07646395ef7b23/k8s-resources-pod?var-datasource=$datasource&var-namespace=$namespace&var-pod=$__cell",
+ "linkUrl": "/d/6581e46e4e5c7ba40a07646395ef7b23/k8s-resources-pod?var-datasource=$datasource&var-cluster=$cluster&var-namespace=$namespace&var-pod=$__cell",
"pattern": "pod",
"thresholds": [
@@ -663,7 +719,7 @@ data:
],
"targets": [
{
- "expr": "sum(label_replace(container_memory_usage_bytes{namespace=\"$namespace\",container_name!=\"\"}, \"pod\", \"$1\", \"pod_name\", \"(.*)\")) by (pod)",
+ "expr": "sum(label_replace(container_memory_usage_bytes{cluster=\"$cluster\", namespace=\"$namespace\",container_name!=\"\"}, \"pod\", \"$1\", \"pod_name\", \"(.*)\")) by (pod)",
"format": "table",
"instant": true,
"intervalFactor": 2,
@@ -672,7 +728,7 @@ data:
"step": 10
},
{
- "expr": "sum(kube_pod_container_resource_requests_memory_bytes{namespace=\"$namespace\"}) by (pod)",
+ "expr": "sum(kube_pod_container_resource_requests_memory_bytes{cluster=\"$cluster\", namespace=\"$namespace\"}) by (pod)",
"format": "table",
"instant": true,
"intervalFactor": 2,
@@ -681,7 +737,7 @@ data:
"step": 10
},
{
- "expr": "sum(label_replace(container_memory_usage_bytes{namespace=\"$namespace\",container_name!=\"\"}, \"pod\", \"$1\", \"pod_name\", \"(.*)\")) by (pod) / sum(kube_pod_container_resource_requests_memory_bytes{namespace=\"$namespace\"}) by (pod)",
+      "expr": "sum(label_replace(container_memory_usage_bytes{cluster=\"$cluster\", namespace=\"$namespace\",container_name!=\"\"}, \"pod\", \"$1\", \"pod_name\", \"(.*)\")) by (pod) / sum(kube_pod_container_resource_requests_memory_bytes{cluster=\"$cluster\", namespace=\"$namespace\"}) by (pod)",
"format": "table",
"instant": true,
"intervalFactor": 2,
@@ -690,7 +746,7 @@ data:
"step": 10
},
{
- "expr": "sum(kube_pod_container_resource_limits_memory_bytes{namespace=\"$namespace\"}) by (pod)",
+ "expr": "sum(kube_pod_container_resource_limits_memory_bytes{cluster=\"$cluster\", namespace=\"$namespace\"}) by (pod)",
"format": "table",
"instant": true,
"intervalFactor": 2,
@@ -699,13 +755,40 @@ data:
"step": 10
},
{
- "expr": "sum(label_replace(container_memory_usage_bytes{namespace=\"$namespace\",container_name!=\"\"}, \"pod\", \"$1\", \"pod_name\", \"(.*)\")) by (pod) / sum(kube_pod_container_resource_limits_memory_bytes{namespace=\"$namespace\"}) by (pod)",
+      "expr": "sum(label_replace(container_memory_usage_bytes{cluster=\"$cluster\", namespace=\"$namespace\",container_name!=\"\"}, \"pod\", \"$1\", \"pod_name\", \"(.*)\")) by (pod) / sum(kube_pod_container_resource_limits_memory_bytes{cluster=\"$cluster\", namespace=\"$namespace\"}) by (pod)",
"format": "table",
"instant": true,
"intervalFactor": 2,
"legendFormat": "",
"refId": "E",
"step": 10
+ },
+ {
+ "expr": "sum(label_replace(container_memory_rss{cluster=\"$cluster\", namespace=\"$namespace\",container_name!=\"\"}, \"pod\", \"$1\", \"pod_name\", \"(.*)\")) by (pod)",
+ "format": "table",
+ "instant": true,
+ "intervalFactor": 2,
+ "legendFormat": "",
+ "refId": "F",
+ "step": 10
+ },
+ {
+ "expr": "sum(label_replace(container_memory_cache{cluster=\"$cluster\", namespace=\"$namespace\",container_name!=\"\"}, \"pod\", \"$1\", \"pod_name\", \"(.*)\")) by (pod)",
+ "format": "table",
+ "instant": true,
+ "intervalFactor": 2,
+ "legendFormat": "",
+ "refId": "G",
+ "step": 10
+ },
+ {
+ "expr": "sum(label_replace(container_memory_swap{cluster=\"$cluster\", namespace=\"$namespace\",container_name!=\"\"}, \"pod\", \"$1\", \"pod_name\", \"(.*)\")) by (pod)",
+ "format": "table",
+ "instant": true,
+ "intervalFactor": 2,
+ "legendFormat": "",
+ "refId": "H",
+ "step": 10
}
],
"thresholds": [
@@ -715,7 +798,7 @@ data:
"timeShift": null,
"title": "Memory Quota",
"tooltip": {
- "shared": true,
+ "shared": false,
"sort": 0,
"value_type": "individual"
},
@@ -761,7 +844,7 @@ data:
"schemaVersion": 14,
"style": "dark",
"tags": [
-
+ "kubernetes-mixin"
],
"templating": {
"list": [
@@ -781,6 +864,33 @@ data:
"regex": "",
"type": "datasource"
},
+ {
+ "allValue": null,
+ "current": {
+ "text": "prod",
+ "value": "prod"
+ },
+ "datasource": "$datasource",
+ "hide": 2,
+ "includeAll": false,
+ "label": "cluster",
+ "multi": false,
+ "name": "cluster",
+ "options": [
+
+ ],
+ "query": "label_values(kube_pod_info, cluster)",
+ "refresh": 1,
+ "regex": "",
+ "sort": 2,
+ "tagValuesQuery": "",
+ "tags": [
+
+ ],
+ "tagsQuery": "",
+ "type": "query",
+ "useTags": false
+ },
{
"allValue": null,
"current": {
@@ -796,7 +906,7 @@ data:
"options": [
],
- "query": "label_values(kube_pod_info, namespace)",
+ "query": "label_values(kube_pod_info{cluster=\"$cluster\"}, namespace)",
"refresh": 1,
"regex": "",
"sort": 2,
@@ -840,7 +950,7 @@ data:
]
},
"timezone": "",
- "title": "K8s / Compute Resources / Namespace",
+ "title": "Kubernetes / Compute Resources / Namespace (Pods)",
"uid": "85a562078cdf77779eaa1add43ccec1e",
"version": 0
}
diff --git a/stable/prometheus-operator/templates/grafana/dashboards/k8s-resources-pod.yaml b/stable/prometheus-operator/templates/grafana/dashboards/k8s-resources-pod.yaml
index 246c6115f13c..e862938f7e42 100644
--- a/stable/prometheus-operator/templates/grafana/dashboards/k8s-resources-pod.yaml
+++ b/stable/prometheus-operator/templates/grafana/dashboards/k8s-resources-pod.yaml
@@ -1,4 +1,6 @@
-# Generated from 'k8s-resources-pod' from https://raw.githubusercontent.com/coreos/prometheus-operator/master/contrib/kube-prometheus/manifests/grafana-dashboardDefinitions.yaml
+# Generated from 'k8s-resources-pod' from https://raw.githubusercontent.com/coreos/kube-prometheus/master/manifests/grafana-dashboardDefinitions.yaml
+# Do not change in-place! In order to change this file, first read the following link:
+# https://github.com/helm/charts/tree/master/stable/prometheus-operator/hack
{{- if and .Values.grafana.enabled .Values.grafana.defaultDashboardsEnabled }}
apiVersion: v1
kind: ConfigMap
@@ -40,7 +42,7 @@ data:
"dashes": false,
"datasource": "$datasource",
"fill": 10,
- "id": 0,
+ "id": 1,
"legend": {
"avg": false,
"current": false,
@@ -69,7 +71,7 @@ data:
"steppedLine": false,
"targets": [
{
- "expr": "sum(namespace_pod_name_container_name:container_cpu_usage_seconds_total:sum_rate{namespace=\"$namespace\", pod_name=\"$pod\", container_name!=\"POD\"}) by (container_name)",
+ "expr": "sum(namespace_pod_name_container_name:container_cpu_usage_seconds_total:sum_rate{namespace=\"$namespace\", pod_name=\"$pod\", container_name!=\"POD\", cluster=\"$cluster\"}) by (container_name)",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "{{`{{container_name}}`}}",
@@ -84,7 +86,7 @@ data:
"timeShift": null,
"title": "CPU Usage",
"tooltip": {
- "shared": true,
+ "shared": false,
"sort": 0,
"value_type": "individual"
},
@@ -138,7 +140,7 @@ data:
"dashes": false,
"datasource": "$datasource",
"fill": 1,
- "id": 1,
+ "id": 2,
"legend": {
"avg": false,
"current": false,
@@ -298,7 +300,7 @@ data:
],
"targets": [
{
- "expr": "sum(label_replace(namespace_pod_name_container_name:container_cpu_usage_seconds_total:sum_rate{namespace=\"$namespace\", pod_name=\"$pod\", container_name!=\"POD\"}, \"container\", \"$1\", \"container_name\", \"(.*)\")) by (container)",
+ "expr": "sum(label_replace(namespace_pod_name_container_name:container_cpu_usage_seconds_total:sum_rate{cluster=\"$cluster\", namespace=\"$namespace\", pod_name=\"$pod\", container_name!=\"POD\"}, \"container\", \"$1\", \"container_name\", \"(.*)\")) by (container)",
"format": "table",
"instant": true,
"intervalFactor": 2,
@@ -307,7 +309,7 @@ data:
"step": 10
},
{
- "expr": "sum(kube_pod_container_resource_requests_cpu_cores{namespace=\"$namespace\", pod=\"$pod\"}) by (container)",
+ "expr": "sum(kube_pod_container_resource_requests_cpu_cores{cluster=\"$cluster\", namespace=\"$namespace\", pod=\"$pod\"}) by (container)",
"format": "table",
"instant": true,
"intervalFactor": 2,
@@ -316,7 +318,7 @@ data:
"step": 10
},
{
- "expr": "sum(label_replace(namespace_pod_name_container_name:container_cpu_usage_seconds_total:sum_rate{namespace=\"$namespace\", pod_name=\"$pod\"}, \"container\", \"$1\", \"container_name\", \"(.*)\")) by (container) / sum(kube_pod_container_resource_requests_cpu_cores{namespace=\"$namespace\", pod=\"$pod\"}) by (container)",
+ "expr": "sum(label_replace(namespace_pod_name_container_name:container_cpu_usage_seconds_total:sum_rate{cluster=\"$cluster\", namespace=\"$namespace\", pod_name=\"$pod\"}, \"container\", \"$1\", \"container_name\", \"(.*)\")) by (container) / sum(kube_pod_container_resource_requests_cpu_cores{cluster=\"$cluster\", namespace=\"$namespace\", pod=\"$pod\"}) by (container)",
"format": "table",
"instant": true,
"intervalFactor": 2,
@@ -325,7 +327,7 @@ data:
"step": 10
},
{
- "expr": "sum(kube_pod_container_resource_limits_cpu_cores{namespace=\"$namespace\", pod=\"$pod\"}) by (container)",
+ "expr": "sum(kube_pod_container_resource_limits_cpu_cores{cluster=\"$cluster\", namespace=\"$namespace\", pod=\"$pod\"}) by (container)",
"format": "table",
"instant": true,
"intervalFactor": 2,
@@ -334,7 +336,7 @@ data:
"step": 10
},
{
- "expr": "sum(label_replace(namespace_pod_name_container_name:container_cpu_usage_seconds_total:sum_rate{namespace=\"$namespace\", pod_name=\"$pod\"}, \"container\", \"$1\", \"container_name\", \"(.*)\")) by (container) / sum(kube_pod_container_resource_limits_cpu_cores{namespace=\"$namespace\", pod=\"$pod\"}) by (container)",
+ "expr": "sum(label_replace(namespace_pod_name_container_name:container_cpu_usage_seconds_total:sum_rate{cluster=\"$cluster\", namespace=\"$namespace\", pod_name=\"$pod\"}, \"container\", \"$1\", \"container_name\", \"(.*)\")) by (container) / sum(kube_pod_container_resource_limits_cpu_cores{cluster=\"$cluster\", namespace=\"$namespace\", pod=\"$pod\"}) by (container)",
"format": "table",
"instant": true,
"intervalFactor": 2,
@@ -350,7 +352,7 @@ data:
"timeShift": null,
"title": "CPU Quota",
"tooltip": {
- "shared": true,
+ "shared": false,
"sort": 0,
"value_type": "individual"
},
@@ -405,7 +407,7 @@ data:
"dashes": false,
"datasource": "$datasource",
"fill": 10,
- "id": 2,
+ "id": 3,
"legend": {
"avg": false,
"current": false,
@@ -434,10 +436,26 @@ data:
"steppedLine": false,
"targets": [
{
- "expr": "sum(container_memory_usage_bytes{namespace=\"$namespace\", pod_name=\"$pod\", container_name!=\"POD\", container_name!=\"\"}) by (container_name)",
+ "expr": "sum(container_memory_rss{cluster=\"$cluster\", namespace=\"$namespace\", pod_name=\"$pod\", container_name!=\"POD\", container_name!=\"\"}) by (container_name)",
"format": "time_series",
"intervalFactor": 2,
- "legendFormat": "{{`{{container_name}}`}}",
+ "legendFormat": "{{`{{container_name}}`}} (RSS)",
+ "legendLink": null,
+ "step": 10
+ },
+ {
+ "expr": "sum(container_memory_cache{cluster=\"$cluster\", namespace=\"$namespace\", pod_name=\"$pod\", container_name!=\"POD\", container_name!=\"\"}) by (container_name)",
+ "format": "time_series",
+ "intervalFactor": 2,
+ "legendFormat": "{{`{{container_name}}`}} (Cache)",
+ "legendLink": null,
+ "step": 10
+ },
+ {
+ "expr": "sum(container_memory_swap{cluster=\"$cluster\", namespace=\"$namespace\", pod_name=\"$pod\", container_name!=\"POD\", container_name!=\"\"}) by (container_name)",
+ "format": "time_series",
+ "intervalFactor": 2,
+ "legendFormat": "{{`{{container_name}}`}} (Swap)",
"legendLink": null,
"step": 10
}
@@ -449,7 +467,7 @@ data:
"timeShift": null,
"title": "Memory Usage",
"tooltip": {
- "shared": true,
+ "shared": false,
"sort": 0,
"value_type": "individual"
},
@@ -465,7 +483,7 @@ data:
},
"yaxes": [
{
- "format": "short",
+ "format": "bytes",
"label": null,
"logBase": 1,
"max": null,
@@ -503,7 +521,7 @@ data:
"dashes": false,
"datasource": "$datasource",
"fill": 1,
- "id": 3,
+ "id": 4,
"legend": {
"avg": false,
"current": false,
@@ -553,7 +571,7 @@ data:
],
"type": "number",
- "unit": "decbytes"
+ "unit": "bytes"
},
{
"alias": "Memory Requests",
@@ -571,7 +589,7 @@ data:
],
"type": "number",
- "unit": "decbytes"
+ "unit": "bytes"
},
{
"alias": "Memory Requests %",
@@ -607,7 +625,7 @@ data:
],
"type": "number",
- "unit": "decbytes"
+ "unit": "bytes"
},
{
"alias": "Memory Limits %",
@@ -627,6 +645,60 @@ data:
"type": "number",
"unit": "percentunit"
},
+ {
+ "alias": "Memory Usage (RSS)",
+ "colorMode": null,
+ "colors": [
+
+ ],
+ "dateFormat": "YYYY-MM-DD HH:mm:ss",
+ "decimals": 2,
+ "link": false,
+ "linkTooltip": "Drill down",
+ "linkUrl": "",
+ "pattern": "Value #F",
+ "thresholds": [
+
+ ],
+ "type": "number",
+ "unit": "bytes"
+ },
+ {
+ "alias": "Memory Usage (Cache)",
+ "colorMode": null,
+ "colors": [
+
+ ],
+ "dateFormat": "YYYY-MM-DD HH:mm:ss",
+ "decimals": 2,
+ "link": false,
+ "linkTooltip": "Drill down",
+ "linkUrl": "",
+ "pattern": "Value #G",
+ "thresholds": [
+
+ ],
+ "type": "number",
+ "unit": "bytes"
+ },
+ {
+        "alias": "Memory Usage (Swap)",
+ "colorMode": null,
+ "colors": [
+
+ ],
+ "dateFormat": "YYYY-MM-DD HH:mm:ss",
+ "decimals": 2,
+ "link": false,
+ "linkTooltip": "Drill down",
+ "linkUrl": "",
+ "pattern": "Value #H",
+ "thresholds": [
+
+ ],
+ "type": "number",
+ "unit": "bytes"
+ },
{
"alias": "Container",
"colorMode": null,
@@ -663,7 +735,7 @@ data:
],
"targets": [
{
- "expr": "sum(label_replace(container_memory_usage_bytes{namespace=\"$namespace\", pod_name=\"$pod\", container_name!=\"POD\", container_name!=\"\"}, \"container\", \"$1\", \"container_name\", \"(.*)\")) by (container)",
+ "expr": "sum(label_replace(container_memory_usage_bytes{cluster=\"$cluster\", namespace=\"$namespace\", pod_name=\"$pod\", container_name!=\"POD\", container_name!=\"\"}, \"container\", \"$1\", \"container_name\", \"(.*)\")) by (container)",
"format": "table",
"instant": true,
"intervalFactor": 2,
@@ -672,7 +744,7 @@ data:
"step": 10
},
{
- "expr": "sum(kube_pod_container_resource_requests_memory_bytes{namespace=\"$namespace\", pod=\"$pod\"}) by (container)",
+ "expr": "sum(kube_pod_container_resource_requests_memory_bytes{cluster=\"$cluster\", namespace=\"$namespace\", pod=\"$pod\"}) by (container)",
"format": "table",
"instant": true,
"intervalFactor": 2,
@@ -681,7 +753,7 @@ data:
"step": 10
},
{
- "expr": "sum(label_replace(container_memory_usage_bytes{namespace=\"$namespace\", pod_name=\"$pod\"}, \"container\", \"$1\", \"container_name\", \"(.*)\")) by (container) / sum(kube_pod_container_resource_requests_memory_bytes{namespace=\"$namespace\", pod=\"$pod\"}) by (container)",
+      "expr": "sum(label_replace(container_memory_usage_bytes{cluster=\"$cluster\", namespace=\"$namespace\", pod_name=\"$pod\"}, \"container\", \"$1\", \"container_name\", \"(.*)\")) by (container) / sum(kube_pod_container_resource_requests_memory_bytes{cluster=\"$cluster\", namespace=\"$namespace\", pod=\"$pod\"}) by (container)",
"format": "table",
"instant": true,
"intervalFactor": 2,
@@ -690,7 +762,7 @@ data:
"step": 10
},
{
- "expr": "sum(kube_pod_container_resource_limits_memory_bytes{namespace=\"$namespace\", pod=\"$pod\", container!=\"\"}) by (container)",
+ "expr": "sum(kube_pod_container_resource_limits_memory_bytes{cluster=\"$cluster\", namespace=\"$namespace\", pod=\"$pod\", container!=\"\"}) by (container)",
"format": "table",
"instant": true,
"intervalFactor": 2,
@@ -699,13 +771,40 @@ data:
"step": 10
},
{
- "expr": "sum(label_replace(container_memory_usage_bytes{namespace=\"$namespace\", pod_name=\"$pod\", container_name!=\"\"}, \"container\", \"$1\", \"container_name\", \"(.*)\")) by (container) / sum(kube_pod_container_resource_limits_memory_bytes{namespace=\"$namespace\", pod=\"$pod\"}) by (container)",
+      "expr": "sum(label_replace(container_memory_usage_bytes{cluster=\"$cluster\", namespace=\"$namespace\", pod_name=\"$pod\", container_name!=\"\"}, \"container\", \"$1\", \"container_name\", \"(.*)\")) by (container) / sum(kube_pod_container_resource_limits_memory_bytes{cluster=\"$cluster\", namespace=\"$namespace\", pod=\"$pod\"}) by (container)",
"format": "table",
"instant": true,
"intervalFactor": 2,
"legendFormat": "",
"refId": "E",
"step": 10
+ },
+ {
+ "expr": "sum(label_replace(container_memory_rss{cluster=\"$cluster\", namespace=\"$namespace\", pod_name=\"$pod\", container_name != \"\", container_name != \"POD\"}, \"container\", \"$1\", \"container_name\", \"(.*)\")) by (container)",
+ "format": "table",
+ "instant": true,
+ "intervalFactor": 2,
+ "legendFormat": "",
+ "refId": "F",
+ "step": 10
+ },
+ {
+ "expr": "sum(label_replace(container_memory_cache{cluster=\"$cluster\", namespace=\"$namespace\", pod_name=\"$pod\", container_name != \"\", container_name != \"POD\"}, \"container\", \"$1\", \"container_name\", \"(.*)\")) by (container)",
+ "format": "table",
+ "instant": true,
+ "intervalFactor": 2,
+ "legendFormat": "",
+ "refId": "G",
+ "step": 10
+ },
+ {
+ "expr": "sum(label_replace(container_memory_swap{cluster=\"$cluster\", namespace=\"$namespace\", pod_name=\"$pod\", container_name != \"\", container_name != \"POD\"}, \"container\", \"$1\", \"container_name\", \"(.*)\")) by (container)",
+ "format": "table",
+ "instant": true,
+ "intervalFactor": 2,
+ "legendFormat": "",
+ "refId": "H",
+ "step": 10
}
],
"thresholds": [
@@ -715,7 +814,7 @@ data:
"timeShift": null,
"title": "Memory Quota",
"tooltip": {
- "shared": true,
+ "shared": false,
"sort": 0,
"value_type": "individual"
},
@@ -761,7 +860,7 @@ data:
"schemaVersion": 14,
"style": "dark",
"tags": [
-
+ "kubernetes-mixin"
],
"templating": {
"list": [
@@ -781,6 +880,33 @@ data:
"regex": "",
"type": "datasource"
},
+ {
+ "allValue": null,
+ "current": {
+ "text": "prod",
+ "value": "prod"
+ },
+ "datasource": "$datasource",
+ "hide": 2,
+ "includeAll": false,
+ "label": "cluster",
+ "multi": false,
+ "name": "cluster",
+ "options": [
+
+ ],
+ "query": "label_values(kube_pod_info, cluster)",
+ "refresh": 1,
+ "regex": "",
+ "sort": 2,
+ "tagValuesQuery": "",
+ "tags": [
+
+ ],
+ "tagsQuery": "",
+ "type": "query",
+ "useTags": false
+ },
{
"allValue": null,
"current": {
@@ -796,7 +922,7 @@ data:
"options": [
],
- "query": "label_values(kube_pod_info, namespace)",
+ "query": "label_values(kube_pod_info{cluster=\"$cluster\"}, namespace)",
"refresh": 1,
"regex": "",
"sort": 2,
@@ -823,7 +949,7 @@ data:
"options": [
],
- "query": "label_values(kube_pod_info{namespace=\"$namespace\"}, pod)",
+ "query": "label_values(kube_pod_info{cluster=\"$cluster\", namespace=\"$namespace\"}, pod)",
"refresh": 1,
"regex": "",
"sort": 2,
@@ -867,7 +993,7 @@ data:
]
},
"timezone": "",
- "title": "K8s / Compute Resources / Pod",
+ "title": "Kubernetes / Compute Resources / Pod",
"uid": "6581e46e4e5c7ba40a07646395ef7b23",
"version": 0
}
diff --git a/stable/prometheus-operator/templates/grafana/dashboards/k8s-resources-workload.yaml b/stable/prometheus-operator/templates/grafana/dashboards/k8s-resources-workload.yaml
new file mode 100644
index 000000000000..36c9c676a0b0
--- /dev/null
+++ b/stable/prometheus-operator/templates/grafana/dashboards/k8s-resources-workload.yaml
@@ -0,0 +1,930 @@
+# Generated from 'k8s-resources-workload' from https://raw.githubusercontent.com/coreos/kube-prometheus/master/manifests/grafana-dashboardDefinitions.yaml
+# Do not change in-place! In order to change this file, first read the following link:
+# https://github.com/helm/charts/tree/master/stable/prometheus-operator/hack
+{{- if and .Values.grafana.enabled .Values.grafana.defaultDashboardsEnabled }}
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: {{ printf "%s-%s" (include "prometheus-operator.fullname" $) "k8s-resources-workload" | trunc 63 | trimSuffix "-" }}
+ labels:
+ {{- if $.Values.grafana.sidecar.dashboards.label }}
+ {{ $.Values.grafana.sidecar.dashboards.label }}: "1"
+ {{- end }}
+ app: {{ template "prometheus-operator.name" $ }}-grafana
+{{ include "prometheus-operator.labels" $ | indent 4 }}
+data:
+ k8s-resources-workload.json: |-
+ {
+ "annotations": {
+ "list": [
+
+ ]
+ },
+ "editable": true,
+ "gnetId": null,
+ "graphTooltip": 0,
+ "hideControls": false,
+ "links": [
+
+ ],
+ "refresh": "10s",
+ "rows": [
+ {
+ "collapse": false,
+ "height": "250px",
+ "panels": [
+ {
+ "aliasColors": {
+
+ },
+ "bars": false,
+ "dashLength": 10,
+ "dashes": false,
+ "datasource": "$datasource",
+ "fill": 10,
+ "id": 1,
+ "legend": {
+ "avg": false,
+ "current": false,
+ "max": false,
+ "min": false,
+ "show": true,
+ "total": false,
+ "values": false
+ },
+ "lines": true,
+ "linewidth": 0,
+ "links": [
+
+ ],
+ "nullPointMode": "null as zero",
+ "percentage": false,
+ "pointradius": 5,
+ "points": false,
+ "renderer": "flot",
+ "seriesOverrides": [
+
+ ],
+ "spaceLength": 10,
+ "span": 12,
+ "stack": true,
+ "steppedLine": false,
+ "targets": [
+ {
+ "expr": "sum(\n label_replace(\n namespace_pod_name_container_name:container_cpu_usage_seconds_total:sum_rate{cluster=\"$cluster\", namespace=\"$namespace\"},\n \"pod\", \"$1\", \"pod_name\", \"(.*)\"\n ) * on(namespace,pod) group_left(workload, workload_type) mixin_pod_workload{cluster=\"$cluster\", namespace=\"$namespace\", workload=\"$workload\", workload_type=\"$type\"}\n) by (pod)\n",
+ "format": "time_series",
+ "intervalFactor": 2,
+ "legendFormat": "{{`{{pod}}`}}",
+ "legendLink": null,
+ "step": 10
+ }
+ ],
+ "thresholds": [
+
+ ],
+ "timeFrom": null,
+ "timeShift": null,
+ "title": "CPU Usage",
+ "tooltip": {
+ "shared": false,
+ "sort": 0,
+ "value_type": "individual"
+ },
+ "type": "graph",
+ "xaxis": {
+ "buckets": null,
+ "mode": "time",
+ "name": null,
+ "show": true,
+ "values": [
+
+ ]
+ },
+ "yaxes": [
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": 0,
+ "show": true
+ },
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": false
+ }
+ ]
+ }
+ ],
+ "repeat": null,
+ "repeatIteration": null,
+ "repeatRowId": null,
+ "showTitle": true,
+ "title": "CPU Usage",
+ "titleSize": "h6"
+ },
+ {
+ "collapse": false,
+ "height": "250px",
+ "panels": [
+ {
+ "aliasColors": {
+
+ },
+ "bars": false,
+ "dashLength": 10,
+ "dashes": false,
+ "datasource": "$datasource",
+ "fill": 1,
+ "id": 2,
+ "legend": {
+ "avg": false,
+ "current": false,
+ "max": false,
+ "min": false,
+ "show": true,
+ "total": false,
+ "values": false
+ },
+ "lines": true,
+ "linewidth": 1,
+ "links": [
+
+ ],
+ "nullPointMode": "null as zero",
+ "percentage": false,
+ "pointradius": 5,
+ "points": false,
+ "renderer": "flot",
+ "seriesOverrides": [
+
+ ],
+ "spaceLength": 10,
+ "span": 12,
+ "stack": false,
+ "steppedLine": false,
+ "styles": [
+ {
+ "alias": "Time",
+ "dateFormat": "YYYY-MM-DD HH:mm:ss",
+ "pattern": "Time",
+ "type": "hidden"
+ },
+ {
+ "alias": "CPU Usage",
+ "colorMode": null,
+ "colors": [
+
+ ],
+ "dateFormat": "YYYY-MM-DD HH:mm:ss",
+ "decimals": 2,
+ "link": false,
+ "linkTooltip": "Drill down",
+ "linkUrl": "",
+ "pattern": "Value #A",
+ "thresholds": [
+
+ ],
+ "type": "number",
+ "unit": "short"
+ },
+ {
+ "alias": "CPU Requests",
+ "colorMode": null,
+ "colors": [
+
+ ],
+ "dateFormat": "YYYY-MM-DD HH:mm:ss",
+ "decimals": 2,
+ "link": false,
+ "linkTooltip": "Drill down",
+ "linkUrl": "",
+ "pattern": "Value #B",
+ "thresholds": [
+
+ ],
+ "type": "number",
+ "unit": "short"
+ },
+ {
+ "alias": "CPU Requests %",
+ "colorMode": null,
+ "colors": [
+
+ ],
+ "dateFormat": "YYYY-MM-DD HH:mm:ss",
+ "decimals": 2,
+ "link": false,
+ "linkTooltip": "Drill down",
+ "linkUrl": "",
+ "pattern": "Value #C",
+ "thresholds": [
+
+ ],
+ "type": "number",
+ "unit": "percentunit"
+ },
+ {
+ "alias": "CPU Limits",
+ "colorMode": null,
+ "colors": [
+
+ ],
+ "dateFormat": "YYYY-MM-DD HH:mm:ss",
+ "decimals": 2,
+ "link": false,
+ "linkTooltip": "Drill down",
+ "linkUrl": "",
+ "pattern": "Value #D",
+ "thresholds": [
+
+ ],
+ "type": "number",
+ "unit": "short"
+ },
+ {
+ "alias": "CPU Limits %",
+ "colorMode": null,
+ "colors": [
+
+ ],
+ "dateFormat": "YYYY-MM-DD HH:mm:ss",
+ "decimals": 2,
+ "link": false,
+ "linkTooltip": "Drill down",
+ "linkUrl": "",
+ "pattern": "Value #E",
+ "thresholds": [
+
+ ],
+ "type": "number",
+ "unit": "percentunit"
+ },
+ {
+ "alias": "Pod",
+ "colorMode": null,
+ "colors": [
+
+ ],
+ "dateFormat": "YYYY-MM-DD HH:mm:ss",
+ "decimals": 2,
+ "link": true,
+ "linkTooltip": "Drill down",
+ "linkUrl": "/d/6581e46e4e5c7ba40a07646395ef7b23/k8s-resources-pod?var-datasource=$datasource&var-cluster=$cluster&var-namespace=$namespace&var-pod=$__cell",
+ "pattern": "pod",
+ "thresholds": [
+
+ ],
+ "type": "number",
+ "unit": "short"
+ },
+ {
+ "alias": "",
+ "colorMode": null,
+ "colors": [
+
+ ],
+ "dateFormat": "YYYY-MM-DD HH:mm:ss",
+ "decimals": 2,
+ "pattern": "/.*/",
+ "thresholds": [
+
+ ],
+ "type": "string",
+ "unit": "short"
+ }
+ ],
+ "targets": [
+ {
+ "expr": "sum(\n label_replace(\n namespace_pod_name_container_name:container_cpu_usage_seconds_total:sum_rate{cluster=\"$cluster\", namespace=\"$namespace\"},\n \"pod\", \"$1\", \"pod_name\", \"(.*)\"\n ) * on(namespace,pod) group_left(workload, workload_type) mixin_pod_workload{cluster=\"$cluster\", namespace=\"$namespace\", workload=\"$workload\", workload_type=\"$type\"}\n) by (pod)\n",
+ "format": "table",
+ "instant": true,
+ "intervalFactor": 2,
+ "legendFormat": "",
+ "refId": "A",
+ "step": 10
+ },
+ {
+ "expr": "sum(\n kube_pod_container_resource_requests_cpu_cores{cluster=\"$cluster\", namespace=\"$namespace\"}\n * on(namespace,pod) group_left(workload, workload_type) mixin_pod_workload{cluster=\"$cluster\", namespace=\"$namespace\", workload=\"$workload\", workload_type=\"$type\"}\n) by (pod)\n",
+ "format": "table",
+ "instant": true,
+ "intervalFactor": 2,
+ "legendFormat": "",
+ "refId": "B",
+ "step": 10
+ },
+ {
+ "expr": "sum(\n label_replace(\n namespace_pod_name_container_name:container_cpu_usage_seconds_total:sum_rate{cluster=\"$cluster\", namespace=\"$namespace\"},\n \"pod\", \"$1\", \"pod_name\", \"(.*)\"\n ) * on(namespace,pod) group_left(workload, workload_type) mixin_pod_workload{cluster=\"$cluster\", namespace=\"$namespace\", workload=\"$workload\", workload_type=\"$type\"}\n) by (pod)\n/sum(\n kube_pod_container_resource_requests_cpu_cores{cluster=\"$cluster\", namespace=\"$namespace\"}\n * on(namespace,pod) group_left(workload, workload_type) mixin_pod_workload{cluster=\"$cluster\", namespace=\"$namespace\", workload=\"$workload\", workload_type=\"$type\"}\n) by (pod)\n",
+ "format": "table",
+ "instant": true,
+ "intervalFactor": 2,
+ "legendFormat": "",
+ "refId": "C",
+ "step": 10
+ },
+ {
+ "expr": "sum(\n kube_pod_container_resource_limits_cpu_cores{cluster=\"$cluster\", namespace=\"$namespace\"}\n * on(namespace,pod) group_left(workload, workload_type) mixin_pod_workload{cluster=\"$cluster\", namespace=\"$namespace\", workload=\"$workload\", workload_type=\"$type\"}\n) by (pod)\n",
+ "format": "table",
+ "instant": true,
+ "intervalFactor": 2,
+ "legendFormat": "",
+ "refId": "D",
+ "step": 10
+ },
+ {
+ "expr": "sum(\n label_replace(\n namespace_pod_name_container_name:container_cpu_usage_seconds_total:sum_rate{cluster=\"$cluster\", namespace=\"$namespace\"},\n \"pod\", \"$1\", \"pod_name\", \"(.*)\"\n ) * on(namespace,pod) group_left(workload, workload_type) mixin_pod_workload{cluster=\"$cluster\", namespace=\"$namespace\", workload=\"$workload\", workload_type=\"$type\"}\n) by (pod)\n/sum(\n kube_pod_container_resource_limits_cpu_cores{cluster=\"$cluster\", namespace=\"$namespace\"}\n * on(namespace,pod) group_left(workload, workload_type) mixin_pod_workload{cluster=\"$cluster\", namespace=\"$namespace\", workload=\"$workload\", workload_type=\"$type\"}\n) by (pod)\n",
+ "format": "table",
+ "instant": true,
+ "intervalFactor": 2,
+ "legendFormat": "",
+ "refId": "E",
+ "step": 10
+ }
+ ],
+ "thresholds": [
+
+ ],
+ "timeFrom": null,
+ "timeShift": null,
+ "title": "CPU Quota",
+ "tooltip": {
+ "shared": false,
+ "sort": 0,
+ "value_type": "individual"
+ },
+ "transform": "table",
+ "type": "table",
+ "xaxis": {
+ "buckets": null,
+ "mode": "time",
+ "name": null,
+ "show": true,
+ "values": [
+
+ ]
+ },
+ "yaxes": [
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": 0,
+ "show": true
+ },
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": false
+ }
+ ]
+ }
+ ],
+ "repeat": null,
+ "repeatIteration": null,
+ "repeatRowId": null,
+ "showTitle": true,
+ "title": "CPU Quota",
+ "titleSize": "h6"
+ },
+ {
+ "collapse": false,
+ "height": "250px",
+ "panels": [
+ {
+ "aliasColors": {
+
+ },
+ "bars": false,
+ "dashLength": 10,
+ "dashes": false,
+ "datasource": "$datasource",
+ "fill": 10,
+ "id": 3,
+ "legend": {
+ "avg": false,
+ "current": false,
+ "max": false,
+ "min": false,
+ "show": true,
+ "total": false,
+ "values": false
+ },
+ "lines": true,
+ "linewidth": 0,
+ "links": [
+
+ ],
+ "nullPointMode": "null as zero",
+ "percentage": false,
+ "pointradius": 5,
+ "points": false,
+ "renderer": "flot",
+ "seriesOverrides": [
+
+ ],
+ "spaceLength": 10,
+ "span": 12,
+ "stack": true,
+ "steppedLine": false,
+ "targets": [
+ {
+ "expr": "sum(\n label_replace(\n container_memory_usage_bytes{cluster=\"$cluster\", namespace=\"$namespace\", container_name!=\"\"},\n \"pod\", \"$1\", \"pod_name\", \"(.*)\"\n ) * on(namespace,pod) group_left(workload, workload_type) mixin_pod_workload{cluster=\"$cluster\", namespace=\"$namespace\", workload=\"$workload\", workload_type=\"$type\"}\n ) by (pod)\n",
+ "format": "time_series",
+ "intervalFactor": 2,
+ "legendFormat": "{{`{{pod}}`}}",
+ "legendLink": null,
+ "step": 10
+ }
+ ],
+ "thresholds": [
+
+ ],
+ "timeFrom": null,
+ "timeShift": null,
+ "title": "Memory Usage",
+ "tooltip": {
+ "shared": false,
+ "sort": 0,
+ "value_type": "individual"
+ },
+ "type": "graph",
+ "xaxis": {
+ "buckets": null,
+ "mode": "time",
+ "name": null,
+ "show": true,
+ "values": [
+
+ ]
+ },
+ "yaxes": [
+ {
+ "format": "bytes",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": 0,
+ "show": true
+ },
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": false
+ }
+ ]
+ }
+ ],
+ "repeat": null,
+ "repeatIteration": null,
+ "repeatRowId": null,
+ "showTitle": true,
+ "title": "Memory Usage",
+ "titleSize": "h6"
+ },
+ {
+ "collapse": false,
+ "height": "250px",
+ "panels": [
+ {
+ "aliasColors": {
+
+ },
+ "bars": false,
+ "dashLength": 10,
+ "dashes": false,
+ "datasource": "$datasource",
+ "fill": 1,
+ "id": 4,
+ "legend": {
+ "avg": false,
+ "current": false,
+ "max": false,
+ "min": false,
+ "show": true,
+ "total": false,
+ "values": false
+ },
+ "lines": true,
+ "linewidth": 1,
+ "links": [
+
+ ],
+ "nullPointMode": "null as zero",
+ "percentage": false,
+ "pointradius": 5,
+ "points": false,
+ "renderer": "flot",
+ "seriesOverrides": [
+
+ ],
+ "spaceLength": 10,
+ "span": 12,
+ "stack": false,
+ "steppedLine": false,
+ "styles": [
+ {
+ "alias": "Time",
+ "dateFormat": "YYYY-MM-DD HH:mm:ss",
+ "pattern": "Time",
+ "type": "hidden"
+ },
+ {
+ "alias": "Memory Usage",
+ "colorMode": null,
+ "colors": [
+
+ ],
+ "dateFormat": "YYYY-MM-DD HH:mm:ss",
+ "decimals": 2,
+ "link": false,
+ "linkTooltip": "Drill down",
+ "linkUrl": "",
+ "pattern": "Value #A",
+ "thresholds": [
+
+ ],
+ "type": "number",
+ "unit": "bytes"
+ },
+ {
+ "alias": "Memory Requests",
+ "colorMode": null,
+ "colors": [
+
+ ],
+ "dateFormat": "YYYY-MM-DD HH:mm:ss",
+ "decimals": 2,
+ "link": false,
+ "linkTooltip": "Drill down",
+ "linkUrl": "",
+ "pattern": "Value #B",
+ "thresholds": [
+
+ ],
+ "type": "number",
+ "unit": "bytes"
+ },
+ {
+ "alias": "Memory Requests %",
+ "colorMode": null,
+ "colors": [
+
+ ],
+ "dateFormat": "YYYY-MM-DD HH:mm:ss",
+ "decimals": 2,
+ "link": false,
+ "linkTooltip": "Drill down",
+ "linkUrl": "",
+ "pattern": "Value #C",
+ "thresholds": [
+
+ ],
+ "type": "number",
+ "unit": "percentunit"
+ },
+ {
+ "alias": "Memory Limits",
+ "colorMode": null,
+ "colors": [
+
+ ],
+ "dateFormat": "YYYY-MM-DD HH:mm:ss",
+ "decimals": 2,
+ "link": false,
+ "linkTooltip": "Drill down",
+ "linkUrl": "",
+ "pattern": "Value #D",
+ "thresholds": [
+
+ ],
+ "type": "number",
+ "unit": "bytes"
+ },
+ {
+ "alias": "Memory Limits %",
+ "colorMode": null,
+ "colors": [
+
+ ],
+ "dateFormat": "YYYY-MM-DD HH:mm:ss",
+ "decimals": 2,
+ "link": false,
+ "linkTooltip": "Drill down",
+ "linkUrl": "",
+ "pattern": "Value #E",
+ "thresholds": [
+
+ ],
+ "type": "number",
+ "unit": "percentunit"
+ },
+ {
+ "alias": "Pod",
+ "colorMode": null,
+ "colors": [
+
+ ],
+ "dateFormat": "YYYY-MM-DD HH:mm:ss",
+ "decimals": 2,
+ "link": true,
+ "linkTooltip": "Drill down",
+ "linkUrl": "/d/6581e46e4e5c7ba40a07646395ef7b23/k8s-resources-pod?var-datasource=$datasource&var-cluster=$cluster&var-namespace=$namespace&var-pod=$__cell",
+ "pattern": "pod",
+ "thresholds": [
+
+ ],
+ "type": "number",
+ "unit": "short"
+ },
+ {
+ "alias": "",
+ "colorMode": null,
+ "colors": [
+
+ ],
+ "dateFormat": "YYYY-MM-DD HH:mm:ss",
+ "decimals": 2,
+ "pattern": "/.*/",
+ "thresholds": [
+
+ ],
+ "type": "string",
+ "unit": "short"
+ }
+ ],
+ "targets": [
+ {
+ "expr": "sum(\n label_replace(\n container_memory_usage_bytes{cluster=\"$cluster\", namespace=\"$namespace\", container_name!=\"\"},\n \"pod\", \"$1\", \"pod_name\", \"(.*)\"\n ) * on(namespace,pod) group_left(workload, workload_type) mixin_pod_workload{cluster=\"$cluster\", namespace=\"$namespace\", workload=\"$workload\", workload_type=\"$type\"}\n ) by (pod)\n",
+ "format": "table",
+ "instant": true,
+ "intervalFactor": 2,
+ "legendFormat": "",
+ "refId": "A",
+ "step": 10
+ },
+ {
+ "expr": "sum(\n kube_pod_container_resource_requests_memory_bytes{cluster=\"$cluster\", namespace=\"$namespace\"}\n * on(namespace,pod) group_left(workload, workload_type) mixin_pod_workload{cluster=\"$cluster\", namespace=\"$namespace\", workload=\"$workload\", workload_type=\"$type\"}\n) by (pod)\n",
+ "format": "table",
+ "instant": true,
+ "intervalFactor": 2,
+ "legendFormat": "",
+ "refId": "B",
+ "step": 10
+ },
+ {
+ "expr": "sum(\n label_replace(\n container_memory_usage_bytes{cluster=\"$cluster\", namespace=\"$namespace\", container_name!=\"\"},\n \"pod\", \"$1\", \"pod_name\", \"(.*)\"\n ) * on(namespace,pod) group_left(workload, workload_type) mixin_pod_workload{cluster=\"$cluster\", namespace=\"$namespace\", workload=\"$workload\", workload_type=\"$type\"}\n ) by (pod)\n/sum(\n kube_pod_container_resource_requests_memory_bytes{cluster=\"$cluster\", namespace=\"$namespace\"}\n * on(namespace,pod) group_left(workload, workload_type) mixin_pod_workload{cluster=\"$cluster\", namespace=\"$namespace\", workload=\"$workload\", workload_type=\"$type\"}\n) by (pod)\n",
+ "format": "table",
+ "instant": true,
+ "intervalFactor": 2,
+ "legendFormat": "",
+ "refId": "C",
+ "step": 10
+ },
+ {
+ "expr": "sum(\n kube_pod_container_resource_limits_memory_bytes{cluster=\"$cluster\", namespace=\"$namespace\"}\n * on(namespace,pod) group_left(workload, workload_type) mixin_pod_workload{cluster=\"$cluster\", namespace=\"$namespace\", workload=\"$workload\", workload_type=\"$type\"}\n) by (pod)\n",
+ "format": "table",
+ "instant": true,
+ "intervalFactor": 2,
+ "legendFormat": "",
+ "refId": "D",
+ "step": 10
+ },
+ {
+ "expr": "sum(\n label_replace(\n container_memory_usage_bytes{cluster=\"$cluster\", namespace=\"$namespace\", container_name!=\"\"},\n \"pod\", \"$1\", \"pod_name\", \"(.*)\"\n ) * on(namespace,pod) group_left(workload, workload_type) mixin_pod_workload{cluster=\"$cluster\", namespace=\"$namespace\", workload=\"$workload\", workload_type=\"$type\"}\n ) by (pod)\n/sum(\n kube_pod_container_resource_limits_memory_bytes{cluster=\"$cluster\", namespace=\"$namespace\"}\n * on(namespace,pod) group_left(workload, workload_type) mixin_pod_workload{cluster=\"$cluster\", namespace=\"$namespace\", workload=\"$workload\", workload_type=\"$type\"}\n) by (pod)\n",
+ "format": "table",
+ "instant": true,
+ "intervalFactor": 2,
+ "legendFormat": "",
+ "refId": "E",
+ "step": 10
+ }
+ ],
+ "thresholds": [
+
+ ],
+ "timeFrom": null,
+ "timeShift": null,
+ "title": "Memory Quota",
+ "tooltip": {
+ "shared": false,
+ "sort": 0,
+ "value_type": "individual"
+ },
+ "transform": "table",
+ "type": "table",
+ "xaxis": {
+ "buckets": null,
+ "mode": "time",
+ "name": null,
+ "show": true,
+ "values": [
+
+ ]
+ },
+ "yaxes": [
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": 0,
+ "show": true
+ },
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": false
+ }
+ ]
+ }
+ ],
+ "repeat": null,
+ "repeatIteration": null,
+ "repeatRowId": null,
+ "showTitle": true,
+ "title": "Memory Quota",
+ "titleSize": "h6"
+ }
+ ],
+ "schemaVersion": 14,
+ "style": "dark",
+ "tags": [
+ "kubernetes-mixin"
+ ],
+ "templating": {
+ "list": [
+ {
+ "current": {
+ "text": "Prometheus",
+ "value": "Prometheus"
+ },
+ "hide": 0,
+ "label": null,
+ "name": "datasource",
+ "options": [
+
+ ],
+ "query": "prometheus",
+ "refresh": 1,
+ "regex": "",
+ "type": "datasource"
+ },
+ {
+ "allValue": null,
+ "current": {
+ "text": "prod",
+ "value": "prod"
+ },
+ "datasource": "$datasource",
+ "hide": 2,
+ "includeAll": false,
+ "label": "cluster",
+ "multi": false,
+ "name": "cluster",
+ "options": [
+
+ ],
+ "query": "label_values(kube_pod_info, cluster)",
+ "refresh": 1,
+ "regex": "",
+ "sort": 2,
+ "tagValuesQuery": "",
+ "tags": [
+
+ ],
+ "tagsQuery": "",
+ "type": "query",
+ "useTags": false
+ },
+ {
+ "allValue": null,
+ "current": {
+ "text": "prod",
+ "value": "prod"
+ },
+ "datasource": "$datasource",
+ "hide": 0,
+ "includeAll": false,
+ "label": "namespace",
+ "multi": false,
+ "name": "namespace",
+ "options": [
+
+ ],
+ "query": "label_values(kube_pod_info{cluster=\"$cluster\"}, namespace)",
+ "refresh": 1,
+ "regex": "",
+ "sort": 2,
+ "tagValuesQuery": "",
+ "tags": [
+
+ ],
+ "tagsQuery": "",
+ "type": "query",
+ "useTags": false
+ },
+ {
+ "allValue": null,
+ "current": {
+ "text": "prod",
+ "value": "prod"
+ },
+ "datasource": "$datasource",
+ "hide": 0,
+ "includeAll": false,
+ "label": "workload",
+ "multi": false,
+ "name": "workload",
+ "options": [
+
+ ],
+ "query": "label_values(mixin_pod_workload{cluster=\"$cluster\", namespace=\"$namespace\"}, workload)",
+ "refresh": 1,
+ "regex": "",
+ "sort": 2,
+ "tagValuesQuery": "",
+ "tags": [
+
+ ],
+ "tagsQuery": "",
+ "type": "query",
+ "useTags": false
+ },
+ {
+ "allValue": null,
+ "current": {
+ "text": "prod",
+ "value": "prod"
+ },
+ "datasource": "$datasource",
+ "hide": 0,
+ "includeAll": false,
+ "label": "type",
+ "multi": false,
+ "name": "type",
+ "options": [
+
+ ],
+ "query": "label_values(mixin_pod_workload{cluster=\"$cluster\", namespace=\"$namespace\", workload=\"$workload\"}, workload_type)",
+ "refresh": 1,
+ "regex": "",
+ "sort": 2,
+ "tagValuesQuery": "",
+ "tags": [
+
+ ],
+ "tagsQuery": "",
+ "type": "query",
+ "useTags": false
+ }
+ ]
+ },
+ "time": {
+ "from": "now-1h",
+ "to": "now"
+ },
+ "timepicker": {
+ "refresh_intervals": [
+ "5s",
+ "10s",
+ "30s",
+ "1m",
+ "5m",
+ "15m",
+ "30m",
+ "1h",
+ "2h",
+ "1d"
+ ],
+ "time_options": [
+ "5m",
+ "15m",
+ "1h",
+ "6h",
+ "12h",
+ "24h",
+ "2d",
+ "7d",
+ "30d"
+ ]
+ },
+ "timezone": "",
+ "title": "Kubernetes / Compute Resources / Workload",
+ "uid": "a164a7f0339f99e89cea5cb47e9be617",
+ "version": 0
+ }
+{{- end }}
\ No newline at end of file
diff --git a/stable/prometheus-operator/templates/grafana/dashboards/k8s-resources-workloads-namespace.yaml b/stable/prometheus-operator/templates/grafana/dashboards/k8s-resources-workloads-namespace.yaml
new file mode 100644
index 000000000000..90ad17378acb
--- /dev/null
+++ b/stable/prometheus-operator/templates/grafana/dashboards/k8s-resources-workloads-namespace.yaml
@@ -0,0 +1,966 @@
+# Generated from 'k8s-resources-workloads-namespace' from https://raw.githubusercontent.com/coreos/kube-prometheus/master/manifests/grafana-dashboardDefinitions.yaml
+# Do not change in-place! In order to change this file, first read the following link:
+# https://github.com/helm/charts/tree/master/stable/prometheus-operator/hack
+{{- if and .Values.grafana.enabled .Values.grafana.defaultDashboardsEnabled }}
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: {{ printf "%s-%s" (include "prometheus-operator.fullname" $) "k8s-resources-workloads-namespace" | trunc 63 | trimSuffix "-" }}
+ labels:
+ {{- if $.Values.grafana.sidecar.dashboards.label }}
+ {{ $.Values.grafana.sidecar.dashboards.label }}: "1"
+ {{- end }}
+ app: {{ template "prometheus-operator.name" $ }}-grafana
+{{ include "prometheus-operator.labels" $ | indent 4 }}
+data:
+ k8s-resources-workloads-namespace.json: |-
+ {
+ "annotations": {
+ "list": [
+
+ ]
+ },
+ "editable": true,
+ "gnetId": null,
+ "graphTooltip": 0,
+ "hideControls": false,
+ "links": [
+
+ ],
+ "refresh": "10s",
+ "rows": [
+ {
+ "collapse": false,
+ "height": "250px",
+ "panels": [
+ {
+ "aliasColors": {
+
+ },
+ "bars": false,
+ "dashLength": 10,
+ "dashes": false,
+ "datasource": "$datasource",
+ "fill": 10,
+ "id": 1,
+ "legend": {
+ "avg": false,
+ "current": false,
+ "max": false,
+ "min": false,
+ "show": true,
+ "total": false,
+ "values": false
+ },
+ "lines": true,
+ "linewidth": 0,
+ "links": [
+
+ ],
+ "nullPointMode": "null as zero",
+ "percentage": false,
+ "pointradius": 5,
+ "points": false,
+ "renderer": "flot",
+ "seriesOverrides": [
+
+ ],
+ "spaceLength": 10,
+ "span": 12,
+ "stack": true,
+ "steppedLine": false,
+ "targets": [
+ {
+ "expr": "sum(\n label_replace(\n namespace_pod_name_container_name:container_cpu_usage_seconds_total:sum_rate{cluster=\"$cluster\", namespace=\"$namespace\"},\n \"pod\", \"$1\", \"pod_name\", \"(.*)\"\n ) * on(namespace,pod) group_left(workload, workload_type) mixin_pod_workload{cluster=\"$cluster\", namespace=\"$namespace\"}\n) by (workload, workload_type)\n",
+ "format": "time_series",
+ "intervalFactor": 2,
+ "legendFormat": "{{`{{workload}}`}} - {{`{{workload_type}}`}}",
+ "legendLink": null,
+ "step": 10
+ }
+ ],
+ "thresholds": [
+
+ ],
+ "timeFrom": null,
+ "timeShift": null,
+ "title": "CPU Usage",
+ "tooltip": {
+ "shared": false,
+ "sort": 0,
+ "value_type": "individual"
+ },
+ "type": "graph",
+ "xaxis": {
+ "buckets": null,
+ "mode": "time",
+ "name": null,
+ "show": true,
+ "values": [
+
+ ]
+ },
+ "yaxes": [
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": 0,
+ "show": true
+ },
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": false
+ }
+ ]
+ }
+ ],
+ "repeat": null,
+ "repeatIteration": null,
+ "repeatRowId": null,
+ "showTitle": true,
+ "title": "CPU Usage",
+ "titleSize": "h6"
+ },
+ {
+ "collapse": false,
+ "height": "250px",
+ "panels": [
+ {
+ "aliasColors": {
+
+ },
+ "bars": false,
+ "dashLength": 10,
+ "dashes": false,
+ "datasource": "$datasource",
+ "fill": 1,
+ "id": 2,
+ "legend": {
+ "avg": false,
+ "current": false,
+ "max": false,
+ "min": false,
+ "show": true,
+ "total": false,
+ "values": false
+ },
+ "lines": true,
+ "linewidth": 1,
+ "links": [
+
+ ],
+ "nullPointMode": "null as zero",
+ "percentage": false,
+ "pointradius": 5,
+ "points": false,
+ "renderer": "flot",
+ "seriesOverrides": [
+
+ ],
+ "spaceLength": 10,
+ "span": 12,
+ "stack": false,
+ "steppedLine": false,
+ "styles": [
+ {
+ "alias": "Time",
+ "dateFormat": "YYYY-MM-DD HH:mm:ss",
+ "pattern": "Time",
+ "type": "hidden"
+ },
+ {
+ "alias": "Running Pods",
+ "colorMode": null,
+ "colors": [
+
+ ],
+ "dateFormat": "YYYY-MM-DD HH:mm:ss",
+ "decimals": 0,
+ "link": false,
+ "linkTooltip": "Drill down",
+ "linkUrl": "",
+ "pattern": "Value #A",
+ "thresholds": [
+
+ ],
+ "type": "number",
+ "unit": "short"
+ },
+ {
+ "alias": "CPU Usage",
+ "colorMode": null,
+ "colors": [
+
+ ],
+ "dateFormat": "YYYY-MM-DD HH:mm:ss",
+ "decimals": 2,
+ "link": false,
+ "linkTooltip": "Drill down",
+ "linkUrl": "",
+ "pattern": "Value #B",
+ "thresholds": [
+
+ ],
+ "type": "number",
+ "unit": "short"
+ },
+ {
+ "alias": "CPU Requests",
+ "colorMode": null,
+ "colors": [
+
+ ],
+ "dateFormat": "YYYY-MM-DD HH:mm:ss",
+ "decimals": 2,
+ "link": false,
+ "linkTooltip": "Drill down",
+ "linkUrl": "",
+ "pattern": "Value #C",
+ "thresholds": [
+
+ ],
+ "type": "number",
+ "unit": "short"
+ },
+ {
+ "alias": "CPU Requests %",
+ "colorMode": null,
+ "colors": [
+
+ ],
+ "dateFormat": "YYYY-MM-DD HH:mm:ss",
+ "decimals": 2,
+ "link": false,
+ "linkTooltip": "Drill down",
+ "linkUrl": "",
+ "pattern": "Value #D",
+ "thresholds": [
+
+ ],
+ "type": "number",
+ "unit": "percentunit"
+ },
+ {
+ "alias": "CPU Limits",
+ "colorMode": null,
+ "colors": [
+
+ ],
+ "dateFormat": "YYYY-MM-DD HH:mm:ss",
+ "decimals": 2,
+ "link": false,
+ "linkTooltip": "Drill down",
+ "linkUrl": "",
+ "pattern": "Value #E",
+ "thresholds": [
+
+ ],
+ "type": "number",
+ "unit": "short"
+ },
+ {
+ "alias": "CPU Limits %",
+ "colorMode": null,
+ "colors": [
+
+ ],
+ "dateFormat": "YYYY-MM-DD HH:mm:ss",
+ "decimals": 2,
+ "link": false,
+ "linkTooltip": "Drill down",
+ "linkUrl": "",
+ "pattern": "Value #F",
+ "thresholds": [
+
+ ],
+ "type": "number",
+ "unit": "percentunit"
+ },
+ {
+ "alias": "Workload",
+ "colorMode": null,
+ "colors": [
+
+ ],
+ "dateFormat": "YYYY-MM-DD HH:mm:ss",
+ "decimals": 2,
+ "link": true,
+ "linkTooltip": "Drill down",
+ "linkUrl": "/d/a164a7f0339f99e89cea5cb47e9be617/k8s-resources-workload?var-datasource=$datasource&var-cluster=$cluster&var-namespace=$namespace&var-workload=$__cell&var-type=$__cell_2",
+ "pattern": "workload",
+ "thresholds": [
+
+ ],
+ "type": "number",
+ "unit": "short"
+ },
+ {
+ "alias": "Workload Type",
+ "colorMode": null,
+ "colors": [
+
+ ],
+ "dateFormat": "YYYY-MM-DD HH:mm:ss",
+ "decimals": 2,
+ "link": false,
+ "linkTooltip": "Drill down",
+ "linkUrl": "",
+ "pattern": "workload_type",
+ "thresholds": [
+
+ ],
+ "type": "number",
+ "unit": "short"
+ },
+ {
+ "alias": "",
+ "colorMode": null,
+ "colors": [
+
+ ],
+ "dateFormat": "YYYY-MM-DD HH:mm:ss",
+ "decimals": 2,
+ "pattern": "/.*/",
+ "thresholds": [
+
+ ],
+ "type": "string",
+ "unit": "short"
+ }
+ ],
+ "targets": [
+ {
+ "expr": "count(mixin_pod_workload{cluster=\"$cluster\", namespace=\"$namespace\"}) by (workload, workload_type)",
+ "format": "table",
+ "instant": true,
+ "intervalFactor": 2,
+ "legendFormat": "",
+ "refId": "A",
+ "step": 10
+ },
+ {
+ "expr": "sum(\n label_replace(\n namespace_pod_name_container_name:container_cpu_usage_seconds_total:sum_rate{cluster=\"$cluster\", namespace=\"$namespace\"},\n \"pod\", \"$1\", \"pod_name\", \"(.*)\"\n ) * on(namespace,pod) group_left(workload, workload_type) mixin_pod_workload{cluster=\"$cluster\", namespace=\"$namespace\"}\n) by (workload, workload_type)\n",
+ "format": "table",
+ "instant": true,
+ "intervalFactor": 2,
+ "legendFormat": "",
+ "refId": "B",
+ "step": 10
+ },
+ {
+ "expr": "sum(\n kube_pod_container_resource_requests_cpu_cores{cluster=\"$cluster\", namespace=\"$namespace\"}\n * on(namespace,pod) group_left(workload, workload_type) mixin_pod_workload{cluster=\"$cluster\", namespace=\"$namespace\"}\n) by (workload, workload_type)\n",
+ "format": "table",
+ "instant": true,
+ "intervalFactor": 2,
+ "legendFormat": "",
+ "refId": "C",
+ "step": 10
+ },
+ {
+ "expr": "sum(\n label_replace(\n namespace_pod_name_container_name:container_cpu_usage_seconds_total:sum_rate{cluster=\"$cluster\", namespace=\"$namespace\"},\n \"pod\", \"$1\", \"pod_name\", \"(.*)\"\n ) * on(namespace,pod) group_left(workload, workload_type) mixin_pod_workload{cluster=\"$cluster\", namespace=\"$namespace\"}\n) by (workload, workload_type)\n/sum(\n kube_pod_container_resource_requests_cpu_cores{cluster=\"$cluster\", namespace=\"$namespace\"}\n * on(namespace,pod) group_left(workload, workload_type) mixin_pod_workload{cluster=\"$cluster\", namespace=\"$namespace\"}\n) by (workload, workload_type)\n",
+ "format": "table",
+ "instant": true,
+ "intervalFactor": 2,
+ "legendFormat": "",
+ "refId": "D",
+ "step": 10
+ },
+ {
+ "expr": "sum(\n kube_pod_container_resource_limits_cpu_cores{cluster=\"$cluster\", namespace=\"$namespace\"}\n * on(namespace,pod) group_left(workload, workload_type) mixin_pod_workload{cluster=\"$cluster\", namespace=\"$namespace\"}\n) by (workload, workload_type)\n",
+ "format": "table",
+ "instant": true,
+ "intervalFactor": 2,
+ "legendFormat": "",
+ "refId": "E",
+ "step": 10
+ },
+ {
+ "expr": "sum(\n label_replace(\n namespace_pod_name_container_name:container_cpu_usage_seconds_total:sum_rate{cluster=\"$cluster\", namespace=\"$namespace\"},\n \"pod\", \"$1\", \"pod_name\", \"(.*)\"\n ) * on(namespace,pod) group_left(workload, workload_type) mixin_pod_workload{cluster=\"$cluster\", namespace=\"$namespace\"}\n) by (workload, workload_type)\n/sum(\n kube_pod_container_resource_limits_cpu_cores{cluster=\"$cluster\", namespace=\"$namespace\"}\n * on(namespace,pod) group_left(workload, workload_type) mixin_pod_workload{cluster=\"$cluster\", namespace=\"$namespace\"}\n) by (workload, workload_type)\n",
+ "format": "table",
+ "instant": true,
+ "intervalFactor": 2,
+ "legendFormat": "",
+ "refId": "F",
+ "step": 10
+ }
+ ],
+ "thresholds": [
+
+ ],
+ "timeFrom": null,
+ "timeShift": null,
+ "title": "CPU Quota",
+ "tooltip": {
+ "shared": false,
+ "sort": 0,
+ "value_type": "individual"
+ },
+ "transform": "table",
+ "type": "table",
+ "xaxis": {
+ "buckets": null,
+ "mode": "time",
+ "name": null,
+ "show": true,
+ "values": [
+
+ ]
+ },
+ "yaxes": [
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": 0,
+ "show": true
+ },
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": false
+ }
+ ]
+ }
+ ],
+ "repeat": null,
+ "repeatIteration": null,
+ "repeatRowId": null,
+ "showTitle": true,
+ "title": "CPU Quota",
+ "titleSize": "h6"
+ },
+ {
+ "collapse": false,
+ "height": "250px",
+ "panels": [
+ {
+ "aliasColors": {
+
+ },
+ "bars": false,
+ "dashLength": 10,
+ "dashes": false,
+ "datasource": "$datasource",
+ "fill": 10,
+ "id": 3,
+ "legend": {
+ "avg": false,
+ "current": false,
+ "max": false,
+ "min": false,
+ "show": true,
+ "total": false,
+ "values": false
+ },
+ "lines": true,
+ "linewidth": 0,
+ "links": [
+
+ ],
+ "nullPointMode": "null as zero",
+ "percentage": false,
+ "pointradius": 5,
+ "points": false,
+ "renderer": "flot",
+ "seriesOverrides": [
+
+ ],
+ "spaceLength": 10,
+ "span": 12,
+ "stack": true,
+ "steppedLine": false,
+ "targets": [
+ {
+ "expr": "sum(\n label_replace(\n container_memory_usage_bytes{cluster=\"$cluster\", namespace=\"$namespace\", container_name!=\"\"},\n \"pod\", \"$1\", \"pod_name\", \"(.*)\"\n ) * on(namespace,pod) group_left(workload, workload_type) mixin_pod_workload{cluster=\"$cluster\", namespace=\"$namespace\"}\n ) by (workload, workload_type)\n",
+ "format": "time_series",
+ "intervalFactor": 2,
+ "legendFormat": "{{`{{workload}}`}} - {{`{{workload_type}}`}}",
+ "legendLink": null,
+ "step": 10
+ }
+ ],
+ "thresholds": [
+
+ ],
+ "timeFrom": null,
+ "timeShift": null,
+ "title": "Memory Usage",
+ "tooltip": {
+ "shared": false,
+ "sort": 0,
+ "value_type": "individual"
+ },
+ "type": "graph",
+ "xaxis": {
+ "buckets": null,
+ "mode": "time",
+ "name": null,
+ "show": true,
+ "values": [
+
+ ]
+ },
+ "yaxes": [
+ {
+ "format": "bytes",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": 0,
+ "show": true
+ },
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": false
+ }
+ ]
+ }
+ ],
+ "repeat": null,
+ "repeatIteration": null,
+ "repeatRowId": null,
+ "showTitle": true,
+ "title": "Memory Usage",
+ "titleSize": "h6"
+ },
+ {
+ "collapse": false,
+ "height": "250px",
+ "panels": [
+ {
+ "aliasColors": {
+
+ },
+ "bars": false,
+ "dashLength": 10,
+ "dashes": false,
+ "datasource": "$datasource",
+ "fill": 1,
+ "id": 4,
+ "legend": {
+ "avg": false,
+ "current": false,
+ "max": false,
+ "min": false,
+ "show": true,
+ "total": false,
+ "values": false
+ },
+ "lines": true,
+ "linewidth": 1,
+ "links": [
+
+ ],
+ "nullPointMode": "null as zero",
+ "percentage": false,
+ "pointradius": 5,
+ "points": false,
+ "renderer": "flot",
+ "seriesOverrides": [
+
+ ],
+ "spaceLength": 10,
+ "span": 12,
+ "stack": false,
+ "steppedLine": false,
+ "styles": [
+ {
+ "alias": "Time",
+ "dateFormat": "YYYY-MM-DD HH:mm:ss",
+ "pattern": "Time",
+ "type": "hidden"
+ },
+ {
+ "alias": "Running Pods",
+ "colorMode": null,
+ "colors": [
+
+ ],
+ "dateFormat": "YYYY-MM-DD HH:mm:ss",
+ "decimals": 0,
+ "link": false,
+ "linkTooltip": "Drill down",
+ "linkUrl": "",
+ "pattern": "Value #A",
+ "thresholds": [
+
+ ],
+ "type": "number",
+ "unit": "short"
+ },
+ {
+ "alias": "Memory Usage",
+ "colorMode": null,
+ "colors": [
+
+ ],
+ "dateFormat": "YYYY-MM-DD HH:mm:ss",
+ "decimals": 2,
+ "link": false,
+ "linkTooltip": "Drill down",
+ "linkUrl": "",
+ "pattern": "Value #B",
+ "thresholds": [
+
+ ],
+ "type": "number",
+ "unit": "bytes"
+ },
+ {
+ "alias": "Memory Requests",
+ "colorMode": null,
+ "colors": [
+
+ ],
+ "dateFormat": "YYYY-MM-DD HH:mm:ss",
+ "decimals": 2,
+ "link": false,
+ "linkTooltip": "Drill down",
+ "linkUrl": "",
+ "pattern": "Value #C",
+ "thresholds": [
+
+ ],
+ "type": "number",
+ "unit": "bytes"
+ },
+ {
+ "alias": "Memory Requests %",
+ "colorMode": null,
+ "colors": [
+
+ ],
+ "dateFormat": "YYYY-MM-DD HH:mm:ss",
+ "decimals": 2,
+ "link": false,
+ "linkTooltip": "Drill down",
+ "linkUrl": "",
+ "pattern": "Value #D",
+ "thresholds": [
+
+ ],
+ "type": "number",
+ "unit": "percentunit"
+ },
+ {
+ "alias": "Memory Limits",
+ "colorMode": null,
+ "colors": [
+
+ ],
+ "dateFormat": "YYYY-MM-DD HH:mm:ss",
+ "decimals": 2,
+ "link": false,
+ "linkTooltip": "Drill down",
+ "linkUrl": "",
+ "pattern": "Value #E",
+ "thresholds": [
+
+ ],
+ "type": "number",
+ "unit": "bytes"
+ },
+ {
+ "alias": "Memory Limits %",
+ "colorMode": null,
+ "colors": [
+
+ ],
+ "dateFormat": "YYYY-MM-DD HH:mm:ss",
+ "decimals": 2,
+ "link": false,
+ "linkTooltip": "Drill down",
+ "linkUrl": "",
+ "pattern": "Value #F",
+ "thresholds": [
+
+ ],
+ "type": "number",
+ "unit": "percentunit"
+ },
+ {
+ "alias": "Workload",
+ "colorMode": null,
+ "colors": [
+
+ ],
+ "dateFormat": "YYYY-MM-DD HH:mm:ss",
+ "decimals": 2,
+ "link": true,
+ "linkTooltip": "Drill down",
+ "linkUrl": "/d/a164a7f0339f99e89cea5cb47e9be617/k8s-resources-workload?var-datasource=$datasource&var-cluster=$cluster&var-namespace=$namespace&var-workload=$__cell&var-type=$__cell_2",
+ "pattern": "workload",
+ "thresholds": [
+
+ ],
+ "type": "number",
+ "unit": "short"
+ },
+ {
+ "alias": "Workload Type",
+ "colorMode": null,
+ "colors": [
+
+ ],
+ "dateFormat": "YYYY-MM-DD HH:mm:ss",
+ "decimals": 2,
+ "link": false,
+ "linkTooltip": "Drill down",
+ "linkUrl": "",
+ "pattern": "workload_type",
+ "thresholds": [
+
+ ],
+ "type": "number",
+ "unit": "short"
+ },
+ {
+ "alias": "",
+ "colorMode": null,
+ "colors": [
+
+ ],
+ "dateFormat": "YYYY-MM-DD HH:mm:ss",
+ "decimals": 2,
+ "pattern": "/.*/",
+ "thresholds": [
+
+ ],
+ "type": "string",
+ "unit": "short"
+ }
+ ],
+ "targets": [
+ {
+ "expr": "count(mixin_pod_workload{cluster=\"$cluster\", namespace=\"$namespace\"}) by (workload, workload_type)",
+ "format": "table",
+ "instant": true,
+ "intervalFactor": 2,
+ "legendFormat": "",
+ "refId": "A",
+ "step": 10
+ },
+ {
+ "expr": "sum(\n label_replace(\n container_memory_usage_bytes{cluster=\"$cluster\", namespace=\"$namespace\", container_name!=\"\"},\n \"pod\", \"$1\", \"pod_name\", \"(.*)\"\n ) * on(namespace,pod) group_left(workload, workload_type) mixin_pod_workload{cluster=\"$cluster\", namespace=\"$namespace\"}\n ) by (workload, workload_type)\n",
+ "format": "table",
+ "instant": true,
+ "intervalFactor": 2,
+ "legendFormat": "",
+ "refId": "B",
+ "step": 10
+ },
+ {
+ "expr": "sum(\n kube_pod_container_resource_requests_memory_bytes{cluster=\"$cluster\", namespace=\"$namespace\"}\n * on(namespace,pod) group_left(workload, workload_type) mixin_pod_workload{cluster=\"$cluster\", namespace=\"$namespace\"}\n) by (workload, workload_type)\n",
+ "format": "table",
+ "instant": true,
+ "intervalFactor": 2,
+ "legendFormat": "",
+ "refId": "C",
+ "step": 10
+ },
+ {
+ "expr": "sum(\n label_replace(\n container_memory_usage_bytes{cluster=\"$cluster\", namespace=\"$namespace\", container_name!=\"\"},\n \"pod\", \"$1\", \"pod_name\", \"(.*)\"\n ) * on(namespace,pod) group_left(workload, workload_type) mixin_pod_workload{cluster=\"$cluster\", namespace=\"$namespace\"}\n ) by (workload, workload_type)\n/sum(\n kube_pod_container_resource_requests_memory_bytes{cluster=\"$cluster\", namespace=\"$namespace\"}\n * on(namespace,pod) group_left(workload, workload_type) mixin_pod_workload{cluster=\"$cluster\", namespace=\"$namespace\"}\n) by (workload, workload_type)\n",
+ "format": "table",
+ "instant": true,
+ "intervalFactor": 2,
+ "legendFormat": "",
+ "refId": "D",
+ "step": 10
+ },
+ {
+ "expr": "sum(\n kube_pod_container_resource_limits_memory_bytes{cluster=\"$cluster\", namespace=\"$namespace\"}\n * on(namespace,pod) group_left(workload, workload_type) mixin_pod_workload{cluster=\"$cluster\", namespace=\"$namespace\"}\n) by (workload, workload_type)\n",
+ "format": "table",
+ "instant": true,
+ "intervalFactor": 2,
+ "legendFormat": "",
+ "refId": "E",
+ "step": 10
+ },
+ {
+ "expr": "sum(\n label_replace(\n container_memory_usage_bytes{cluster=\"$cluster\", namespace=\"$namespace\", container_name!=\"\"},\n \"pod\", \"$1\", \"pod_name\", \"(.*)\"\n ) * on(namespace,pod) group_left(workload, workload_type) mixin_pod_workload{cluster=\"$cluster\", namespace=\"$namespace\"}\n ) by (workload, workload_type)\n/sum(\n kube_pod_container_resource_limits_memory_bytes{cluster=\"$cluster\", namespace=\"$namespace\"}\n * on(namespace,pod) group_left(workload, workload_type) mixin_pod_workload{cluster=\"$cluster\", namespace=\"$namespace\"}\n) by (workload, workload_type)\n",
+ "format": "table",
+ "instant": true,
+ "intervalFactor": 2,
+ "legendFormat": "",
+ "refId": "F",
+ "step": 10
+ }
+ ],
+ "thresholds": [
+
+ ],
+ "timeFrom": null,
+ "timeShift": null,
+ "title": "Memory Quota",
+ "tooltip": {
+ "shared": false,
+ "sort": 0,
+ "value_type": "individual"
+ },
+ "transform": "table",
+ "type": "table",
+ "xaxis": {
+ "buckets": null,
+ "mode": "time",
+ "name": null,
+ "show": true,
+ "values": [
+
+ ]
+ },
+ "yaxes": [
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": 0,
+ "show": true
+ },
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": false
+ }
+ ]
+ }
+ ],
+ "repeat": null,
+ "repeatIteration": null,
+ "repeatRowId": null,
+ "showTitle": true,
+ "title": "Memory Quota",
+ "titleSize": "h6"
+ }
+ ],
+ "schemaVersion": 14,
+ "style": "dark",
+ "tags": [
+ "kubernetes-mixin"
+ ],
+ "templating": {
+ "list": [
+ {
+ "current": {
+ "text": "Prometheus",
+ "value": "Prometheus"
+ },
+ "hide": 0,
+ "label": null,
+ "name": "datasource",
+ "options": [
+
+ ],
+ "query": "prometheus",
+ "refresh": 1,
+ "regex": "",
+ "type": "datasource"
+ },
+ {
+ "allValue": null,
+ "current": {
+ "text": "prod",
+ "value": "prod"
+ },
+ "datasource": "$datasource",
+ "hide": 2,
+ "includeAll": false,
+ "label": "cluster",
+ "multi": false,
+ "name": "cluster",
+ "options": [
+
+ ],
+ "query": "label_values(kube_pod_info, cluster)",
+ "refresh": 1,
+ "regex": "",
+ "sort": 2,
+ "tagValuesQuery": "",
+ "tags": [
+
+ ],
+ "tagsQuery": "",
+ "type": "query",
+ "useTags": false
+ },
+ {
+ "allValue": null,
+ "current": {
+ "text": "prod",
+ "value": "prod"
+ },
+ "datasource": "$datasource",
+ "hide": 0,
+ "includeAll": false,
+ "label": "namespace",
+ "multi": false,
+ "name": "namespace",
+ "options": [
+
+ ],
+ "query": "label_values(kube_pod_info{cluster=\"$cluster\"}, namespace)",
+ "refresh": 1,
+ "regex": "",
+ "sort": 2,
+ "tagValuesQuery": "",
+ "tags": [
+
+ ],
+ "tagsQuery": "",
+ "type": "query",
+ "useTags": false
+ }
+ ]
+ },
+ "time": {
+ "from": "now-1h",
+ "to": "now"
+ },
+ "timepicker": {
+ "refresh_intervals": [
+ "5s",
+ "10s",
+ "30s",
+ "1m",
+ "5m",
+ "15m",
+ "30m",
+ "1h",
+ "2h",
+ "1d"
+ ],
+ "time_options": [
+ "5m",
+ "15m",
+ "1h",
+ "6h",
+ "12h",
+ "24h",
+ "2d",
+ "7d",
+ "30d"
+ ]
+ },
+ "timezone": "",
+ "title": "Kubernetes / Compute Resources / Namespace (Workloads)",
+ "uid": "a87fb0d919ec0ea5f6543124e16c42a5",
+ "version": 0
+ }
+{{- end }}
\ No newline at end of file
diff --git a/stable/prometheus-operator/templates/grafana/dashboards/nodes.yaml b/stable/prometheus-operator/templates/grafana/dashboards/nodes.yaml
index 62f0e170b2a0..6cd9656af82f 100644
--- a/stable/prometheus-operator/templates/grafana/dashboards/nodes.yaml
+++ b/stable/prometheus-operator/templates/grafana/dashboards/nodes.yaml
@@ -1,4 +1,6 @@
-# Generated from 'nodes' from https://raw.githubusercontent.com/coreos/prometheus-operator/master/contrib/kube-prometheus/manifests/grafana-dashboardDefinitions.yaml
+# Generated from 'nodes' from https://raw.githubusercontent.com/coreos/kube-prometheus/master/manifests/grafana-dashboardDefinitions.yaml
+# Do not change in-place! In order to change this file, first read the following link:
+# https://github.com/helm/charts/tree/master/stable/prometheus-operator/hack
{{- if and .Values.grafana.enabled .Values.grafana.defaultDashboardsEnabled }}
apiVersion: v1
kind: ConfigMap
@@ -82,25 +84,32 @@ data:
"steppedLine": false,
"targets": [
{
- "expr": "max(node_load1{job=\"node-exporter\", instance=\"$instance\"})",
+ "expr": "max(node_load1{cluster=\"$cluster\", job=\"node-exporter\", instance=\"$instance\"})",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "load 1m",
"refId": "A"
},
{
- "expr": "max(node_load5{job=\"node-exporter\", instance=\"$instance\"})",
+ "expr": "max(node_load5{cluster=\"$cluster\", job=\"node-exporter\", instance=\"$instance\"})",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "load 5m",
"refId": "B"
},
{
- "expr": "max(node_load15{job=\"node-exporter\", instance=\"$instance\"})",
+ "expr": "max(node_load15{cluster=\"$cluster\", job=\"node-exporter\", instance=\"$instance\"})",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "load 15m",
"refId": "C"
+ },
+ {
+ "expr": "count(node_cpu_seconds_total{cluster=\"$cluster\", job=\"node-exporter\", instance=\"$instance\", mode=\"user\"})",
+ "format": "time_series",
+ "intervalFactor": 2,
+ "legendFormat": "logical cores",
+ "refId": "D"
}
],
"thresholds": [
@@ -110,7 +119,7 @@ data:
"timeShift": null,
"title": "System load",
"tooltip": {
- "shared": true,
+ "shared": false,
"sort": 0,
"value_type": "individual"
},
@@ -187,7 +196,7 @@ data:
"steppedLine": false,
"targets": [
{
- "expr": "sum by (cpu) (irate(node_cpu_seconds_total{job=\"node-exporter\", mode!=\"idle\", instance=\"$instance\"}[5m]))",
+ "expr": "sum by (cpu) (irate(node_cpu_seconds_total{cluster=\"$cluster\", job=\"node-exporter\", mode!=\"idle\", instance=\"$instance\"}[5m]))",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "{{`{{cpu}}`}}",
@@ -201,7 +210,7 @@ data:
"timeShift": null,
"title": "Usage Per Core",
"tooltip": {
- "shared": true,
+ "shared": false,
"sort": 0,
"value_type": "individual"
},
@@ -291,7 +300,7 @@ data:
"steppedLine": false,
"targets": [
{
- "expr": "max (sum by (cpu) (irate(node_cpu_seconds_total{job=\"node-exporter\", mode!=\"idle\", instance=\"$instance\"}[2m])) ) * 100\n",
+ "expr": "max (sum by (cpu) (irate(node_cpu_seconds_total{cluster=\"$cluster\", job=\"node-exporter\", mode!=\"idle\", instance=\"$instance\"}[2m])) ) * 100\n",
"format": "time_series",
"intervalFactor": 10,
"legendFormat": "{{`{{ cpu }}`}}",
@@ -303,9 +312,9 @@ data:
],
"timeFrom": null,
"timeShift": null,
- "title": "CPU Utilizaion",
+ "title": "CPU Utilization",
"tooltip": {
- "shared": true,
+ "shared": false,
"sort": 0,
"value_type": "individual"
},
@@ -399,7 +408,7 @@ data:
"tableColumn": "",
"targets": [
{
- "expr": "avg(sum by (cpu) (irate(node_cpu_seconds_total{job=\"node-exporter\", mode!=\"idle\", instance=\"$instance\"}[2m]))) * 100\n",
+ "expr": "avg(sum by (cpu) (irate(node_cpu_seconds_total{cluster=\"$cluster\", job=\"node-exporter\", mode!=\"idle\", instance=\"$instance\"}[2m]))) * 100\n",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "",
@@ -408,6 +417,9 @@ data:
],
"thresholds": "80, 90",
"title": "CPU Usage",
+ "tooltip": {
+ "shared": false
+ },
"type": "singlestat",
"valueFontSize": "80%",
"valueMaps": [
@@ -476,28 +488,28 @@ data:
"steppedLine": false,
"targets": [
{
- "expr": "max(\n node_memory_MemTotal_bytes{job=\"node-exporter\", instance=\"$instance\"}\n - node_memory_MemFree_bytes{job=\"node-exporter\", instance=\"$instance\"}\n - node_memory_Buffers_bytes{job=\"node-exporter\", instance=\"$instance\"}\n - node_memory_Cached_bytes{job=\"node-exporter\", instance=\"$instance\"}\n)\n",
+ "expr": "max(\n node_memory_MemTotal_bytes{cluster=\"$cluster\", job=\"node-exporter\", instance=\"$instance\"}\n - node_memory_MemFree_bytes{cluster=\"$cluster\", job=\"node-exporter\", instance=\"$instance\"}\n - node_memory_Buffers_bytes{cluster=\"$cluster\", job=\"node-exporter\", instance=\"$instance\"}\n - node_memory_Cached_bytes{cluster=\"$cluster\", job=\"node-exporter\", instance=\"$instance\"}\n)\n",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "memory used",
"refId": "A"
},
{
- "expr": "max(node_memory_Buffers_bytes{job=\"node-exporter\", instance=\"$instance\"})",
+ "expr": "max(node_memory_Buffers_bytes{cluster=\"$cluster\", job=\"node-exporter\", instance=\"$instance\"})",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "memory buffers",
"refId": "B"
},
{
- "expr": "max(node_memory_Cached_bytes{job=\"node-exporter\", instance=\"$instance\"})",
+ "expr": "max(node_memory_Cached_bytes{cluster=\"$cluster\", job=\"node-exporter\", instance=\"$instance\"})",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "memory cached",
"refId": "C"
},
{
- "expr": "max(node_memory_MemFree_bytes{job=\"node-exporter\", instance=\"$instance\"})",
+ "expr": "max(node_memory_MemFree_bytes{cluster=\"$cluster\", job=\"node-exporter\", instance=\"$instance\"})",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "memory free",
@@ -511,7 +523,7 @@ data:
"timeShift": null,
"title": "Memory Usage",
"tooltip": {
- "shared": true,
+ "shared": false,
"sort": 0,
"value_type": "individual"
},
@@ -605,7 +617,7 @@ data:
"tableColumn": "",
"targets": [
{
- "expr": "max(\n (\n (\n node_memory_MemTotal_bytes{job=\"node-exporter\", instance=\"$instance\"}\n - node_memory_MemFree_bytes{job=\"node-exporter\", instance=\"$instance\"}\n - node_memory_Buffers_bytes{job=\"node-exporter\", instance=\"$instance\"}\n - node_memory_Cached_bytes{job=\"node-exporter\", instance=\"$instance\"}\n )\n / node_memory_MemTotal_bytes{job=\"node-exporter\", instance=\"$instance\"}\n ) * 100)\n",
+ "expr": "max(\n (\n (\n node_memory_MemTotal_bytes{cluster=\"$cluster\", job=\"node-exporter\", instance=\"$instance\"}\n - node_memory_MemFree_bytes{cluster=\"$cluster\", job=\"node-exporter\", instance=\"$instance\"}\n - node_memory_Buffers_bytes{cluster=\"$cluster\", job=\"node-exporter\", instance=\"$instance\"}\n - node_memory_Cached_bytes{cluster=\"$cluster\", job=\"node-exporter\", instance=\"$instance\"}\n )\n / node_memory_MemTotal_bytes{cluster=\"$cluster\", job=\"node-exporter\", instance=\"$instance\"}\n ) * 100)\n",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "",
@@ -614,6 +626,9 @@ data:
],
"thresholds": "80, 90",
"title": "Memory Usage",
+ "tooltip": {
+ "shared": false
+ },
"type": "singlestat",
"valueFontSize": "80%",
"valueMaps": [
@@ -689,21 +704,21 @@ data:
"steppedLine": false,
"targets": [
{
- "expr": "max(rate(node_disk_read_bytes_total{job=\"node-exporter\", instance=\"$instance\"}[2m]))",
+ "expr": "max(rate(node_disk_read_bytes_total{cluster=\"$cluster\", job=\"node-exporter\", instance=\"$instance\"}[2m]))",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "read",
"refId": "A"
},
{
- "expr": "max(rate(node_disk_written_bytes_total{job=\"node-exporter\", instance=\"$instance\"}[2m]))",
+ "expr": "max(rate(node_disk_written_bytes_total{cluster=\"$cluster\", job=\"node-exporter\", instance=\"$instance\"}[2m]))",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "written",
"refId": "B"
},
{
- "expr": "max(rate(node_disk_io_time_seconds_total{job=\"node-exporter\", instance=\"$instance\"}[2m]))",
+ "expr": "max(rate(node_disk_io_time_seconds_total{cluster=\"$cluster\", job=\"node-exporter\", instance=\"$instance\"}[2m]))",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "io time",
@@ -717,7 +732,7 @@ data:
"timeShift": null,
"title": "Disk I/O",
"tooltip": {
- "shared": true,
+ "shared": false,
"sort": 0,
"value_type": "individual"
},
@@ -794,11 +809,18 @@ data:
"steppedLine": false,
"targets": [
{
- "expr": "node:node_filesystem_usage:\n",
+ "expr": "max by (namespace, pod, device) ((node_filesystem_size_bytes{cluster=\"$cluster\", fstype=~\"ext[234]|btrfs|xfs|zfs\", instance=\"$instance\", job=\"node-exporter\"} - node_filesystem_avail_bytes{cluster=\"$cluster\", fstype=~\"ext[234]|btrfs|xfs|zfs\", instance=\"$instance\", job=\"node-exporter\"}) / node_filesystem_size_bytes{cluster=\"$cluster\", fstype=~\"ext[234]|btrfs|xfs|zfs\", instance=\"$instance\", job=\"node-exporter\"})",
"format": "time_series",
"intervalFactor": 2,
- "legendFormat": "{{`{{device}}`}}",
+ "legendFormat": "disk used",
"refId": "A"
+ },
+ {
+ "expr": "max by (namespace, pod, device) (node_filesystem_avail_bytes{cluster=\"$cluster\", fstype=~\"ext[234]|btrfs|xfs|zfs\", instance=\"$instance\", job=\"node-exporter\"} / node_filesystem_size_bytes{cluster=\"$cluster\", fstype=~\"ext[234]|btrfs|xfs|zfs\", instance=\"$instance\", job=\"node-exporter\"})",
+ "format": "time_series",
+ "intervalFactor": 2,
+ "legendFormat": "disk free",
+ "refId": "B"
}
],
"thresholds": [
@@ -808,7 +830,7 @@ data:
"timeShift": null,
"title": "Disk Space Usage",
"tooltip": {
- "shared": true,
+ "shared": false,
"sort": 0,
"value_type": "individual"
},
@@ -898,7 +920,7 @@ data:
"steppedLine": false,
"targets": [
{
- "expr": "max(rate(node_network_receive_bytes_total{job=\"node-exporter\", instance=\"$instance\", device!\u007e\"lo\"}[5m]))",
+ "expr": "max(rate(node_network_receive_bytes_total{cluster=\"$cluster\", job=\"node-exporter\", instance=\"$instance\", device!~\"lo\"}[5m]))",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "{{`{{device}}`}}",
@@ -912,7 +934,7 @@ data:
"timeShift": null,
"title": "Network Received",
"tooltip": {
- "shared": true,
+ "shared": false,
"sort": 0,
"value_type": "individual"
},
@@ -989,7 +1011,7 @@ data:
"steppedLine": false,
"targets": [
{
- "expr": "max(rate(node_network_transmit_bytes_total{job=\"node-exporter\", instance=\"$instance\", device!\u007e\"lo\"}[5m]))",
+ "expr": "max(rate(node_network_transmit_bytes_total{cluster=\"$cluster\", job=\"node-exporter\", instance=\"$instance\", device!~\"lo\"}[5m]))",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "{{`{{device}}`}}",
@@ -1003,7 +1025,7 @@ data:
"timeShift": null,
"title": "Network Transmitted",
"tooltip": {
- "shared": true,
+ "shared": false,
"sort": 0,
"value_type": "individual"
},
@@ -1093,14 +1115,14 @@ data:
"steppedLine": false,
"targets": [
{
- "expr": "max(\n node_filesystem_files{job=\"node-exporter\", instance=\"$instance\"}\n - node_filesystem_files_free{job=\"node-exporter\", instance=\"$instance\"}\n)\n",
+ "expr": "max(\n node_filesystem_files{cluster=\"$cluster\", job=\"node-exporter\", instance=\"$instance\"}\n - node_filesystem_files_free{cluster=\"$cluster\", job=\"node-exporter\", instance=\"$instance\"}\n)\n",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "inodes used",
"refId": "A"
},
{
- "expr": "max(node_filesystem_files_free{job=\"node-exporter\", instance=\"$instance\"})",
+ "expr": "max(node_filesystem_files_free{cluster=\"$cluster\", job=\"node-exporter\", instance=\"$instance\"})",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "inodes free",
@@ -1114,7 +1136,7 @@ data:
"timeShift": null,
"title": "Inodes Usage",
"tooltip": {
- "shared": true,
+ "shared": false,
"sort": 0,
"value_type": "individual"
},
@@ -1208,7 +1230,7 @@ data:
"tableColumn": "",
"targets": [
{
- "expr": "max(\n (\n (\n node_filesystem_files{job=\"node-exporter\", instance=\"$instance\"}\n - node_filesystem_files_free{job=\"node-exporter\", instance=\"$instance\"}\n )\n / node_filesystem_files{job=\"node-exporter\", instance=\"$instance\"}\n ) * 100)\n",
+ "expr": "max(\n (\n (\n node_filesystem_files{cluster=\"$cluster\", job=\"node-exporter\", instance=\"$instance\"}\n - node_filesystem_files_free{cluster=\"$cluster\", job=\"node-exporter\", instance=\"$instance\"}\n )\n / node_filesystem_files{cluster=\"$cluster\", job=\"node-exporter\", instance=\"$instance\"}\n ) * 100)\n",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "",
@@ -1217,6 +1239,9 @@ data:
],
"thresholds": "80, 90",
"title": "Inodes Usage",
+ "tooltip": {
+ "shared": false
+ },
"type": "singlestat",
"valueFontSize": "80%",
"valueMaps": [
@@ -1241,7 +1266,7 @@ data:
"schemaVersion": 14,
"style": "dark",
"tags": [
-
+ "kubernetes-mixin"
],
"templating": {
"list": [
@@ -1265,6 +1290,32 @@ data:
"allValue": null,
"current": {
+ },
+ "datasource": "$datasource",
+ "hide": 2,
+ "includeAll": false,
+ "label": "cluster",
+ "multi": false,
+ "name": "cluster",
+ "options": [
+
+ ],
+ "query": "label_values(kube_pod_info, cluster)",
+ "refresh": 2,
+ "regex": "",
+ "sort": 0,
+ "tagValuesQuery": "",
+ "tags": [
+
+ ],
+ "tagsQuery": "",
+ "type": "query",
+ "useTags": false
+ },
+ {
+ "allValue": null,
+ "current": {
+
},
"datasource": "$datasource",
"hide": 0,
@@ -1275,7 +1326,7 @@ data:
"options": [
],
- "query": "label_values(node_boot_time_seconds{job=\"node-exporter\"}, instance)",
+ "query": "label_values(node_boot_time_seconds{cluster=\"$cluster\", job=\"node-exporter\"}, instance)",
"refresh": 2,
"regex": "",
"sort": 0,
@@ -1319,7 +1370,7 @@ data:
]
},
"timezone": "",
- "title": "Nodes",
+ "title": "Kubernetes / Nodes",
"uid": "fa49a4706d07a042595b664c87fb33ea",
"version": 0
}
diff --git a/stable/prometheus-operator/templates/grafana/dashboards/persistentvolumesusage.yaml b/stable/prometheus-operator/templates/grafana/dashboards/persistentvolumesusage.yaml
index e3225d3ce708..8d5021535c3a 100644
--- a/stable/prometheus-operator/templates/grafana/dashboards/persistentvolumesusage.yaml
+++ b/stable/prometheus-operator/templates/grafana/dashboards/persistentvolumesusage.yaml
@@ -1,4 +1,6 @@
-# Generated from 'persistentvolumesusage' from https://raw.githubusercontent.com/coreos/prometheus-operator/master/contrib/kube-prometheus/manifests/grafana-dashboardDefinitions.yaml
+# Generated from 'persistentvolumesusage' from https://raw.githubusercontent.com/coreos/kube-prometheus/master/manifests/grafana-dashboardDefinitions.yaml
+# Do not change in-place! To change this file, first read the following link:
+# https://github.com/helm/charts/tree/master/stable/prometheus-operator/hack
{{- if and .Values.grafana.enabled .Values.grafana.defaultDashboardsEnabled }}
apiVersion: v1
kind: ConfigMap
@@ -52,7 +54,7 @@ data:
},
"id": 2,
"legend": {
- "alignAsTable": false,
+ "alignAsTable": true,
"avg": true,
"current": true,
"max": true,
@@ -77,16 +79,23 @@ data:
],
"spaceLength": 10,
- "span": 12,
- "stack": false,
+ "span": 9,
+ "stack": true,
"steppedLine": false,
"targets": [
{
- "expr": "(kubelet_volume_stats_capacity_bytes{job=\"kubelet\", persistentvolumeclaim=\"$volume\"} - kubelet_volume_stats_available_bytes{job=\"kubelet\", persistentvolumeclaim=\"$volume\"}) / kubelet_volume_stats_capacity_bytes{job=\"kubelet\", persistentvolumeclaim=\"$volume\"} * 100\n",
+ "expr": "(\n sum without(instance, node) (kubelet_volume_stats_capacity_bytes{cluster=\"$cluster\", job=\"kubelet\", namespace=\"$namespace\", persistentvolumeclaim=\"$volume\"})\n -\n sum without(instance, node) (kubelet_volume_stats_available_bytes{cluster=\"$cluster\", job=\"kubelet\", namespace=\"$namespace\", persistentvolumeclaim=\"$volume\"})\n)\n",
"format": "time_series",
"intervalFactor": 1,
- "legendFormat": "{{`{{ Usage }}`}}",
+ "legendFormat": "Used Space",
"refId": "A"
+ },
+ {
+ "expr": "sum without(instance, node) (kubelet_volume_stats_available_bytes{cluster=\"$cluster\", job=\"kubelet\", namespace=\"$namespace\", persistentvolumeclaim=\"$volume\"})\n",
+ "format": "time_series",
+ "intervalFactor": 1,
+ "legendFormat": "Free Space",
+ "refId": "B"
}
],
"thresholds": [
@@ -96,7 +105,7 @@ data:
"timeShift": null,
"title": "Volume Space Usage",
"tooltip": {
- "shared": true,
+ "shared": false,
"sort": 0,
"value_type": "individual"
},
@@ -112,22 +121,106 @@ data:
},
"yaxes": [
{
- "format": "percent",
+ "format": "bytes",
"label": null,
"logBase": 1,
- "max": 100,
+ "max": null,
"min": 0,
"show": true
},
{
- "format": "percent",
+ "format": "bytes",
"label": null,
"logBase": 1,
- "max": 100,
+ "max": null,
"min": 0,
"show": true
}
]
+ },
+ {
+ "cacheTimeout": null,
+ "colorBackground": false,
+ "colorValue": false,
+ "colors": [
+ "rgba(50, 172, 45, 0.97)",
+ "rgba(237, 129, 40, 0.89)",
+ "rgba(245, 54, 54, 0.9)"
+ ],
+ "datasource": "$datasource",
+ "format": "percent",
+ "gauge": {
+ "maxValue": 100,
+ "minValue": 0,
+ "show": true,
+ "thresholdLabels": false,
+ "thresholdMarkers": true
+ },
+ "gridPos": {
+
+ },
+ "id": 3,
+ "interval": null,
+ "links": [
+
+ ],
+ "mappingType": 1,
+ "mappingTypes": [
+ {
+ "name": "value to text",
+ "value": 1
+ },
+ {
+ "name": "range to text",
+ "value": 2
+ }
+ ],
+ "maxDataPoints": 100,
+ "nullPointMode": "connected",
+ "nullText": null,
+ "postfix": "",
+ "postfixFontSize": "50%",
+ "prefix": "",
+ "prefixFontSize": "50%",
+ "rangeMaps": [
+ {
+ "from": "null",
+ "text": "N/A",
+ "to": "null"
+ }
+ ],
+ "span": 3,
+ "sparkline": {
+ "fillColor": "rgba(31, 118, 189, 0.18)",
+ "full": false,
+ "lineColor": "rgb(31, 120, 193)",
+ "show": false
+ },
+ "tableColumn": "",
+ "targets": [
+ {
+ "expr": "(\n kubelet_volume_stats_capacity_bytes{cluster=\"$cluster\", job=\"kubelet\", namespace=\"$namespace\", persistentvolumeclaim=\"$volume\"}\n -\n kubelet_volume_stats_available_bytes{cluster=\"$cluster\", job=\"kubelet\", namespace=\"$namespace\", persistentvolumeclaim=\"$volume\"}\n)\n/\nkubelet_volume_stats_capacity_bytes{cluster=\"$cluster\", job=\"kubelet\", namespace=\"$namespace\", persistentvolumeclaim=\"$volume\"}\n* 100\n",
+ "format": "time_series",
+ "intervalFactor": 2,
+ "legendFormat": "",
+ "refId": "A"
+ }
+ ],
+ "thresholds": "80, 90",
+ "title": "Volume Space Usage",
+ "tooltip": {
+ "shared": false
+ },
+ "type": "singlestat",
+ "valueFontSize": "80%",
+ "valueMaps": [
+ {
+ "op": "=",
+ "text": "N/A",
+ "value": "null"
+ }
+ ],
+ "valueName": "current"
}
],
"repeat": null,
@@ -154,9 +247,9 @@ data:
"gridPos": {
},
- "id": 3,
+ "id": 4,
"legend": {
- "alignAsTable": false,
+ "alignAsTable": true,
"avg": true,
"current": true,
"max": true,
@@ -181,16 +274,23 @@ data:
],
"spaceLength": 10,
- "span": 12,
- "stack": false,
+ "span": 9,
+ "stack": true,
"steppedLine": false,
"targets": [
{
- "expr": "kubelet_volume_stats_inodes_used{job=\"kubelet\", persistentvolumeclaim=\"$volume\"} / kubelet_volume_stats_inodes{job=\"kubelet\", persistentvolumeclaim=\"$volume\"} * 100\n",
+ "expr": "sum without(instance, node) (kubelet_volume_stats_inodes_used{cluster=\"$cluster\", job=\"kubelet\", namespace=\"$namespace\", persistentvolumeclaim=\"$volume\"})\n",
"format": "time_series",
"intervalFactor": 1,
- "legendFormat": "{{`{{ Usage }}`}}",
+ "legendFormat": "Used inodes",
"refId": "A"
+ },
+ {
+ "expr": "(\n sum without(instance, node) (kubelet_volume_stats_inodes{cluster=\"$cluster\", job=\"kubelet\", namespace=\"$namespace\", persistentvolumeclaim=\"$volume\"})\n -\n sum without(instance, node) (kubelet_volume_stats_inodes_used{cluster=\"$cluster\", job=\"kubelet\", namespace=\"$namespace\", persistentvolumeclaim=\"$volume\"})\n)\n",
+ "format": "time_series",
+ "intervalFactor": 1,
+ "legendFormat": "Free inodes",
+ "refId": "B"
}
],
"thresholds": [
@@ -200,7 +300,7 @@ data:
"timeShift": null,
"title": "Volume inodes Usage",
"tooltip": {
- "shared": true,
+ "shared": false,
"sort": 0,
"value_type": "individual"
},
@@ -216,22 +316,106 @@ data:
},
"yaxes": [
{
- "format": "percent",
+ "format": "none",
"label": null,
"logBase": 1,
- "max": 100,
+ "max": null,
"min": 0,
"show": true
},
{
- "format": "percent",
+ "format": "none",
"label": null,
"logBase": 1,
- "max": 100,
+ "max": null,
"min": 0,
"show": true
}
]
+ },
+ {
+ "cacheTimeout": null,
+ "colorBackground": false,
+ "colorValue": false,
+ "colors": [
+ "rgba(50, 172, 45, 0.97)",
+ "rgba(237, 129, 40, 0.89)",
+ "rgba(245, 54, 54, 0.9)"
+ ],
+ "datasource": "$datasource",
+ "format": "percent",
+ "gauge": {
+ "maxValue": 100,
+ "minValue": 0,
+ "show": true,
+ "thresholdLabels": false,
+ "thresholdMarkers": true
+ },
+ "gridPos": {
+
+ },
+ "id": 5,
+ "interval": null,
+ "links": [
+
+ ],
+ "mappingType": 1,
+ "mappingTypes": [
+ {
+ "name": "value to text",
+ "value": 1
+ },
+ {
+ "name": "range to text",
+ "value": 2
+ }
+ ],
+ "maxDataPoints": 100,
+ "nullPointMode": "connected",
+ "nullText": null,
+ "postfix": "",
+ "postfixFontSize": "50%",
+ "prefix": "",
+ "prefixFontSize": "50%",
+ "rangeMaps": [
+ {
+ "from": "null",
+ "text": "N/A",
+ "to": "null"
+ }
+ ],
+ "span": 3,
+ "sparkline": {
+ "fillColor": "rgba(31, 118, 189, 0.18)",
+ "full": false,
+ "lineColor": "rgb(31, 120, 193)",
+ "show": false
+ },
+ "tableColumn": "",
+ "targets": [
+ {
+ "expr": "kubelet_volume_stats_inodes_used{cluster=\"$cluster\", job=\"kubelet\", namespace=\"$namespace\", persistentvolumeclaim=\"$volume\"}\n/\nkubelet_volume_stats_inodes{cluster=\"$cluster\", job=\"kubelet\", namespace=\"$namespace\", persistentvolumeclaim=\"$volume\"}\n* 100\n",
+ "format": "time_series",
+ "intervalFactor": 2,
+ "legendFormat": "",
+ "refId": "A"
+ }
+ ],
+ "thresholds": "80, 90",
+ "title": "Volume inodes Usage",
+ "tooltip": {
+ "shared": false
+ },
+ "type": "singlestat",
+ "valueFontSize": "80%",
+ "valueMaps": [
+ {
+ "op": "=",
+ "text": "N/A",
+ "value": "null"
+ }
+ ],
+ "valueName": "current"
}
],
"repeat": null,
@@ -246,7 +430,7 @@ data:
"schemaVersion": 14,
"style": "dark",
"tags": [
-
+ "kubernetes-mixin"
],
"templating": {
"list": [
@@ -270,6 +454,32 @@ data:
"allValue": null,
"current": {
+ },
+ "datasource": "$datasource",
+ "hide": 2,
+ "includeAll": false,
+ "label": "cluster",
+ "multi": false,
+ "name": "cluster",
+ "options": [
+
+ ],
+ "query": "label_values(kubelet_volume_stats_capacity_bytes, cluster)",
+ "refresh": 2,
+ "regex": "",
+ "sort": 0,
+ "tagValuesQuery": "",
+ "tags": [
+
+ ],
+ "tagsQuery": "",
+ "type": "query",
+ "useTags": false
+ },
+ {
+ "allValue": null,
+ "current": {
+
},
"datasource": "$datasource",
"hide": 0,
@@ -280,7 +490,7 @@ data:
"options": [
],
- "query": "label_values(kubelet_volume_stats_capacity_bytes{job=\"kubelet\"}, exported_namespace)",
+ "query": "label_values(kubelet_volume_stats_capacity_bytes{cluster=\"$cluster\", job=\"kubelet\"}, namespace)",
"refresh": 2,
"regex": "",
"sort": 0,
@@ -306,7 +516,7 @@ data:
"options": [
],
- "query": "label_values(kubelet_volume_stats_capacity_bytes{job=\"kubelet\", exported_namespace=\"$namespace\"}, persistentvolumeclaim)",
+ "query": "label_values(kubelet_volume_stats_capacity_bytes{cluster=\"$cluster\", job=\"kubelet\", namespace=\"$namespace\"}, persistentvolumeclaim)",
"refresh": 2,
"regex": "",
"sort": 0,
@@ -350,7 +560,7 @@ data:
]
},
"timezone": "",
- "title": "Persistent Volumes",
+ "title": "Kubernetes / Persistent Volumes",
"uid": "919b92a8e8041bd567af9edab12c840c",
"version": 0
}
diff --git a/stable/prometheus-operator/templates/grafana/dashboards/pods.yaml b/stable/prometheus-operator/templates/grafana/dashboards/pods.yaml
index a0af807cc2e7..4d9dca921f61 100644
--- a/stable/prometheus-operator/templates/grafana/dashboards/pods.yaml
+++ b/stable/prometheus-operator/templates/grafana/dashboards/pods.yaml
@@ -1,4 +1,6 @@
-# Generated from 'pods' from https://raw.githubusercontent.com/coreos/prometheus-operator/master/contrib/kube-prometheus/manifests/grafana-dashboardDefinitions.yaml
+# Generated from 'pods' from https://raw.githubusercontent.com/coreos/kube-prometheus/master/manifests/grafana-dashboardDefinitions.yaml
+# Do not change in-place! To change this file, first read the following link:
+# https://github.com/helm/charts/tree/master/stable/prometheus-operator/hack
{{- if and .Values.grafana.enabled .Values.grafana.defaultDashboardsEnabled }}
apiVersion: v1
kind: ConfigMap
@@ -21,7 +23,20 @@ data:
],
"annotations": {
"list": [
-
+ {
+ "builtIn": 1,
+ "datasource": "$datasource",
+ "enable": true,
+ "expr": "time() == BOOL timestamp(rate(kube_pod_container_status_restarts_total{job=\"kube-state-metrics\", cluster=\"$cluster\", namespace=\"$namespace\", pod=\"$pod\"}[2m]) > 0)",
+ "hide": false,
+ "iconColor": "rgba(215, 44, 44, 1)",
+ "name": "Restarts",
+ "showIn": 0,
+ "tags": [
+ "restart"
+ ],
+ "type": "rows"
+ }
]
},
"editable": false,
@@ -77,29 +92,37 @@ data:
],
"spaceLength": 10,
+ "span": 12,
"stack": false,
"steppedLine": false,
"targets": [
{
- "expr": "sum by(container_name) (container_memory_usage_bytes{job=\"kubelet\", namespace=\"$namespace\", pod_name=\"$pod\", container_name=\u007e\"$container\", container_name!=\"POD\"})",
+ "expr": "sum by(container_name) (container_memory_usage_bytes{job=\"kubelet\", cluster=\"$cluster\", namespace=\"$namespace\", pod_name=\"$pod\", container_name=~\"$container\", container_name!=\"POD\"})",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "Current: {{`{{ container_name }}`}}",
"refId": "A"
},
{
- "expr": "sum by(container) (kube_pod_container_resource_requests_memory_bytes{job=\"kube-state-metrics\", namespace=\"$namespace\", pod=\"$pod\", container=\u007e\"$container\"})",
+ "expr": "sum by(container) (kube_pod_container_resource_requests{job=\"kube-state-metrics\", cluster=\"$cluster\", namespace=\"$namespace\", resource=\"memory\", pod=\"$pod\", container=~\"$container\"})",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "Requested: {{`{{ container }}`}}",
"refId": "B"
},
{
- "expr": "sum by(container) (kube_pod_container_resource_limits_memory_bytes{job=\"kube-state-metrics\", namespace=\"$namespace\", pod=\"$pod\", container=\u007e\"$container\"})",
+ "expr": "sum by(container) (kube_pod_container_resource_limits{job=\"kube-state-metrics\", cluster=\"$cluster\", namespace=\"$namespace\", resource=\"memory\", pod=\"$pod\", container=~\"$container\"})",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "Limit: {{`{{ container }}`}}",
"refId": "C"
+ },
+ {
+ "expr": "sum by(container_name) (container_memory_cache{job=\"kubelet\", cluster=\"$cluster\", namespace=\"$namespace\", pod_name=~\"$pod\", container_name=~\"$container\", container_name!=\"POD\"})",
+ "format": "time_series",
+ "intervalFactor": 2,
+ "legendFormat": "Cache: {{`{{ container_name }}`}}",
+ "refId": "D"
}
],
"thresholds": [
@@ -109,7 +132,7 @@ data:
"timeShift": null,
"title": "Memory Usage",
"tooltip": {
- "shared": true,
+ "shared": false,
"sort": 0,
"value_type": "individual"
},
@@ -194,15 +217,30 @@ data:
],
"spaceLength": 10,
+ "span": 12,
"stack": false,
"steppedLine": false,
"targets": [
{
- "expr": "sum by (container_name) (rate(container_cpu_usage_seconds_total{job=\"kubelet\", namespace=\"$namespace\", image!=\"\",container_name!=\"POD\",pod_name=\"$pod\"}[1m]))",
+ "expr": "sum by (container_name) (rate(container_cpu_usage_seconds_total{job=\"kubelet\", cluster=\"$cluster\", namespace=\"$namespace\", image!=\"\", pod_name=\"$pod\", container_name=~\"$container\", container_name!=\"POD\"}[1m]))",
"format": "time_series",
"intervalFactor": 2,
- "legendFormat": "{{`{{ container_name }}`}}",
+ "legendFormat": "Current: {{`{{ container_name }}`}}",
"refId": "A"
+ },
+ {
+ "expr": "sum by(container) (kube_pod_container_resource_requests{job=\"kube-state-metrics\", cluster=\"$cluster\", namespace=\"$namespace\", resource=\"cpu\", pod=\"$pod\", container=~\"$container\"})",
+ "format": "time_series",
+ "intervalFactor": 2,
+ "legendFormat": "Requested: {{`{{ container }}`}}",
+ "refId": "B"
+ },
+ {
+ "expr": "sum by(container) (kube_pod_container_resource_limits{job=\"kube-state-metrics\", cluster=\"$cluster\", namespace=\"$namespace\", resource=\"cpu\", pod=\"$pod\", container=~\"$container\"})",
+ "format": "time_series",
+ "intervalFactor": 2,
+ "legendFormat": "Limit: {{`{{ container }}`}}",
+ "refId": "C"
}
],
"thresholds": [
@@ -212,7 +250,7 @@ data:
"timeShift": null,
"title": "CPU Usage",
"tooltip": {
- "shared": true,
+ "shared": false,
"sort": 0,
"value_type": "individual"
},
@@ -297,15 +335,23 @@ data:
],
"spaceLength": 10,
+ "span": 12,
"stack": false,
"steppedLine": false,
"targets": [
{
- "expr": "sort_desc(sum by (pod_name) (rate(container_network_receive_bytes_total{job=\"kubelet\", namespace=\"$namespace\", pod_name=\"$pod\"}[1m])))",
+ "expr": "sort_desc(sum by (pod_name) (rate(container_network_receive_bytes_total{job=\"kubelet\", cluster=\"$cluster\", namespace=\"$namespace\", pod_name=\"$pod\"}[1m])))",
"format": "time_series",
"intervalFactor": 2,
- "legendFormat": "{{`{{ pod_name }}`}}",
+ "legendFormat": "RX: {{`{{ pod_name }}`}}",
"refId": "A"
+ },
+ {
+ "expr": "sort_desc(sum by (pod_name) (rate(container_network_transmit_bytes_total{job=\"kubelet\", cluster=\"$cluster\", namespace=\"$namespace\", pod_name=\"$pod\"}[1m])))",
+ "format": "time_series",
+ "intervalFactor": 2,
+ "legendFormat": "TX: {{`{{ pod_name }}`}}",
+ "refId": "B"
}
],
"thresholds": [
@@ -315,7 +361,7 @@ data:
"timeShift": null,
"title": "Network I/O",
"tooltip": {
- "shared": true,
+ "shared": false,
"sort": 0,
"value_type": "individual"
},
@@ -356,12 +402,116 @@ data:
"title": "Dashboard Row",
"titleSize": "h6",
"type": "row"
+ },
+ {
+ "collapse": false,
+ "collapsed": false,
+ "panels": [
+ {
+ "aliasColors": {
+
+ },
+ "bars": false,
+ "dashLength": 10,
+ "dashes": false,
+ "datasource": "$datasource",
+ "fill": 1,
+ "gridPos": {
+
+ },
+ "id": 5,
+ "legend": {
+ "alignAsTable": true,
+ "avg": true,
+ "current": true,
+ "max": false,
+ "min": false,
+ "rightSide": true,
+ "show": true,
+ "total": false,
+ "values": false
+ },
+ "lines": true,
+ "linewidth": 1,
+ "links": [
+
+ ],
+ "nullPointMode": "null",
+ "percentage": false,
+ "pointradius": 5,
+ "points": false,
+ "renderer": "flot",
+ "repeat": null,
+ "seriesOverrides": [
+
+ ],
+ "spaceLength": 10,
+ "span": 12,
+ "stack": false,
+ "steppedLine": false,
+ "targets": [
+ {
+ "expr": "max by (container) (kube_pod_container_status_restarts_total{job=\"kube-state-metrics\", cluster=\"$cluster\", namespace=\"$namespace\", pod=\"$pod\", container=~\"$container\"})",
+ "format": "time_series",
+ "intervalFactor": 2,
+ "legendFormat": "Restarts: {{`{{ container }}`}}",
+ "refId": "A"
+ }
+ ],
+ "thresholds": [
+
+ ],
+ "timeFrom": null,
+ "timeShift": null,
+ "title": "Total Restarts Per Container",
+ "tooltip": {
+ "shared": false,
+ "sort": 0,
+ "value_type": "individual"
+ },
+ "type": "graph",
+ "xaxis": {
+ "buckets": null,
+ "mode": "time",
+ "name": null,
+ "show": true,
+ "values": [
+
+ ]
+ },
+ "yaxes": [
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": 0,
+ "show": true
+ },
+ {
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": 0,
+ "show": true
+ }
+ ]
+ }
+ ],
+ "repeat": null,
+ "repeatIteration": null,
+ "repeatRowId": null,
+ "showTitle": false,
+ "title": "Dashboard Row",
+ "titleSize": "h6",
+ "type": "row"
}
],
"schemaVersion": 14,
"style": "dark",
"tags": [
-
+ "kubernetes-mixin"
],
"templating": {
"list": [
@@ -385,6 +535,32 @@ data:
"allValue": null,
"current": {
+ },
+ "datasource": "$datasource",
+ "hide": 2,
+ "includeAll": false,
+ "label": "cluster",
+ "multi": false,
+ "name": "cluster",
+ "options": [
+
+ ],
+ "query": "label_values(kube_pod_info, cluster)",
+ "refresh": 2,
+ "regex": "",
+ "sort": 0,
+ "tagValuesQuery": "",
+ "tags": [
+
+ ],
+ "tagsQuery": "",
+ "type": "query",
+ "useTags": false
+ },
+ {
+ "allValue": null,
+ "current": {
+
},
"datasource": "$datasource",
"hide": 0,
@@ -395,7 +571,7 @@ data:
"options": [
],
- "query": "label_values(kube_pod_info, namespace)",
+ "query": "label_values(kube_pod_info{cluster=\"$cluster\"}, namespace)",
"refresh": 2,
"regex": "",
"sort": 0,
@@ -421,7 +597,7 @@ data:
"options": [
],
- "query": "label_values(kube_pod_info{namespace=\u007e\"$namespace\"}, pod)",
+ "query": "label_values(kube_pod_info{cluster=\"$cluster\", namespace=~\"$namespace\"}, pod)",
"refresh": 2,
"regex": "",
"sort": 0,
@@ -447,7 +623,7 @@ data:
"options": [
],
- "query": "label_values(kube_pod_container_info{namespace=\"$namespace\", pod=\"$pod\"}, container)",
+ "query": "label_values(kube_pod_container_info{cluster=\"$cluster\", namespace=\"$namespace\", pod=\"$pod\"}, container)",
"refresh": 2,
"regex": "",
"sort": 0,
@@ -491,7 +667,7 @@ data:
]
},
"timezone": "",
- "title": "Pods",
+ "title": "Kubernetes / Pods",
"uid": "ab4f13a9892a76a4d21ce8c2445bf4ea",
"version": 0
}
diff --git a/stable/prometheus-operator/templates/grafana/dashboards/statefulset.yaml b/stable/prometheus-operator/templates/grafana/dashboards/statefulset.yaml
index f73865f8e7da..d7a3f881e177 100644
--- a/stable/prometheus-operator/templates/grafana/dashboards/statefulset.yaml
+++ b/stable/prometheus-operator/templates/grafana/dashboards/statefulset.yaml
@@ -1,4 +1,6 @@
-# Generated from 'statefulset' from https://raw.githubusercontent.com/coreos/prometheus-operator/master/contrib/kube-prometheus/manifests/grafana-dashboardDefinitions.yaml
+# Generated from 'statefulset' from https://raw.githubusercontent.com/coreos/kube-prometheus/master/manifests/grafana-dashboardDefinitions.yaml
+# Do not change in-place! To change this file, first read the following link:
+# https://github.com/helm/charts/tree/master/stable/prometheus-operator/hack
{{- if and .Values.grafana.enabled .Values.grafana.defaultDashboardsEnabled }}
apiVersion: v1
kind: ConfigMap
@@ -98,7 +100,7 @@ data:
"tableColumn": "",
"targets": [
{
- "expr": "sum(rate(container_cpu_usage_seconds_total{job=\"kubelet\", namespace=\"$namespace\", pod_name=\u007e\"$statefulset.*\"}[3m]))",
+ "expr": "sum(rate(container_cpu_usage_seconds_total{job=\"kubelet\", cluster=\"$cluster\", namespace=\"$namespace\", pod_name=~\"$statefulset.*\"}[3m]))",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "",
@@ -107,6 +109,9 @@ data:
],
"thresholds": "",
"title": "CPU",
+ "tooltip": {
+ "shared": false
+ },
"type": "singlestat",
"valueFontSize": "80%",
"valueMaps": [
@@ -178,7 +183,7 @@ data:
"tableColumn": "",
"targets": [
{
- "expr": "sum(container_memory_usage_bytes{job=\"kubelet\", namespace=\"$namespace\", pod_name=\u007e\"$statefulset.*\"}) / 1024^3",
+ "expr": "sum(container_memory_usage_bytes{job=\"kubelet\", cluster=\"$cluster\", namespace=\"$namespace\", pod_name=~\"$statefulset.*\"}) / 1024^3",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "",
@@ -187,6 +192,9 @@ data:
],
"thresholds": "",
"title": "Memory",
+ "tooltip": {
+ "shared": false
+ },
"type": "singlestat",
"valueFontSize": "80%",
"valueMaps": [
@@ -258,7 +266,7 @@ data:
"tableColumn": "",
"targets": [
{
- "expr": "sum(rate(container_network_transmit_bytes_total{job=\"kubelet\", namespace=\"$namespace\", pod_name=\u007e\"$statefulset.*\"}[3m])) + sum(rate(container_network_receive_bytes_total{namespace=\"$namespace\",pod_name=\u007e\"$statefulset.*\"}[3m]))",
+ "expr": "sum(rate(container_network_transmit_bytes_total{job=\"kubelet\", cluster=\"$cluster\", namespace=\"$namespace\", pod_name=~\"$statefulset.*\"}[3m])) + sum(rate(container_network_receive_bytes_total{cluster=\"$cluster\", namespace=\"$namespace\",pod_name=~\"$statefulset.*\"}[3m]))",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "",
@@ -267,6 +275,9 @@ data:
],
"thresholds": "",
"title": "Network",
+ "tooltip": {
+ "shared": false
+ },
"type": "singlestat",
"valueFontSize": "80%",
"valueMaps": [
@@ -353,7 +364,7 @@ data:
"tableColumn": "",
"targets": [
{
- "expr": "max(kube_statefulset_replicas{job=\"kube-state-metrics\", namespace=\"$namespace\", statefulset=\"$statefulset\"}) without (instance, pod)",
+ "expr": "max(kube_statefulset_replicas{job=\"kube-state-metrics\", cluster=\"$cluster\", namespace=\"$namespace\", statefulset=\"$statefulset\"}) without (instance, pod)",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "",
@@ -362,6 +373,9 @@ data:
],
"thresholds": "",
"title": "Desired Replicas",
+ "tooltip": {
+ "shared": false
+ },
"type": "singlestat",
"valueFontSize": "80%",
"valueMaps": [
@@ -434,7 +448,7 @@ data:
"tableColumn": "",
"targets": [
{
- "expr": "min(kube_statefulset_status_replicas_current{job=\"kube-state-metrics\", namespace=\"$namespace\", statefulset=\"$statefulset\"}) without (instance, pod)",
+ "expr": "min(kube_statefulset_status_replicas_current{job=\"kube-state-metrics\", cluster=\"$cluster\", namespace=\"$namespace\", statefulset=\"$statefulset\"}) without (instance, pod)",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "",
@@ -443,6 +457,9 @@ data:
],
"thresholds": "",
"title": "Replicas of current version",
+ "tooltip": {
+ "shared": false
+ },
"type": "singlestat",
"valueFontSize": "80%",
"valueMaps": [
@@ -515,7 +532,7 @@ data:
"tableColumn": "",
"targets": [
{
- "expr": "max(kube_statefulset_status_observed_generation{job=\"kube-state-metrics\", namespace=\"$namespace\", statefulset=\"$statefulset\"}) without (instance, pod)",
+ "expr": "max(kube_statefulset_status_observed_generation{job=\"kube-state-metrics\", cluster=\"$cluster\", namespace=\"$namespace\", statefulset=\"$statefulset\"}) without (instance, pod)",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "",
@@ -524,6 +541,9 @@ data:
],
"thresholds": "",
"title": "Observed Generation",
+ "tooltip": {
+ "shared": false
+ },
"type": "singlestat",
"valueFontSize": "80%",
"valueMaps": [
@@ -596,7 +616,7 @@ data:
"tableColumn": "",
"targets": [
{
- "expr": "max(kube_statefulset_metadata_generation{job=\"kube-state-metrics\", statefulset=\"$statefulset\", namespace=\"$namespace\"}) without (instance, pod)",
+ "expr": "max(kube_statefulset_metadata_generation{job=\"kube-state-metrics\", statefulset=\"$statefulset\", cluster=\"$cluster\", namespace=\"$namespace\"}) without (instance, pod)",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "",
@@ -605,6 +625,9 @@ data:
],
"thresholds": "",
"title": "Metadata Generation",
+ "tooltip": {
+ "shared": false
+ },
"type": "singlestat",
"valueFontSize": "80%",
"valueMaps": [
@@ -672,35 +695,35 @@ data:
"steppedLine": false,
"targets": [
{
- "expr": "max(kube_statefulset_replicas{job=\"kube-state-metrics\", statefulset=\"$statefulset\",namespace=\"$namespace\"}) without (instance, pod)",
+ "expr": "max(kube_statefulset_replicas{job=\"kube-state-metrics\", statefulset=\"$statefulset\", cluster=\"$cluster\", namespace=\"$namespace\"}) without (instance, pod)",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "replicas specified",
"refId": "A"
},
{
- "expr": "max(kube_statefulset_status_replicas{job=\"kube-state-metrics\", statefulset=\"$statefulset\",namespace=\"$namespace\"}) without (instance, pod)",
+ "expr": "max(kube_statefulset_status_replicas{job=\"kube-state-metrics\", statefulset=\"$statefulset\", cluster=\"$cluster\", namespace=\"$namespace\"}) without (instance, pod)",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "replicas created",
"refId": "B"
},
{
- "expr": "min(kube_statefulset_status_replicas_ready{job=\"kube-state-metrics\", statefulset=\"$statefulset\",namespace=\"$namespace\"}) without (instance, pod)",
+ "expr": "min(kube_statefulset_status_replicas_ready{job=\"kube-state-metrics\", statefulset=\"$statefulset\", cluster=\"$cluster\", namespace=\"$namespace\"}) without (instance, pod)",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "ready",
"refId": "C"
},
{
- "expr": "min(kube_statefulset_status_replicas_current{job=\"kube-state-metrics\", statefulset=\"$statefulset\",namespace=\"$namespace\"}) without (instance, pod)",
+ "expr": "min(kube_statefulset_status_replicas_current{job=\"kube-state-metrics\", statefulset=\"$statefulset\", cluster=\"$cluster\", namespace=\"$namespace\"}) without (instance, pod)",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "replicas of current version",
"refId": "D"
},
{
- "expr": "min(kube_statefulset_status_replicas_updated{job=\"kube-state-metrics\", statefulset=\"$statefulset\",namespace=\"$namespace\"}) without (instance, pod)",
+ "expr": "min(kube_statefulset_status_replicas_updated{job=\"kube-state-metrics\", statefulset=\"$statefulset\", cluster=\"$cluster\", namespace=\"$namespace\"}) without (instance, pod)",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "updated",
@@ -714,7 +737,7 @@ data:
"timeShift": null,
"title": "Replicas",
"tooltip": {
- "shared": true,
+ "shared": false,
"sort": 0,
"value_type": "individual"
},
@@ -760,7 +783,7 @@ data:
"schemaVersion": 14,
"style": "dark",
"tags": [
-
+ "kubernetes-mixin"
],
"templating": {
"list": [
@@ -784,6 +807,32 @@ data:
"allValue": null,
"current": {
+ },
+ "datasource": "$datasource",
+ "hide": 2,
+ "includeAll": false,
+ "label": "cluster",
+ "multi": false,
+ "name": "cluster",
+ "options": [
+
+ ],
+ "query": "label_values(kube_statefulset_metadata_generation, cluster)",
+ "refresh": 2,
+ "regex": "",
+ "sort": 0,
+ "tagValuesQuery": "",
+ "tags": [
+
+ ],
+ "tagsQuery": "",
+ "type": "query",
+ "useTags": false
+ },
+ {
+ "allValue": null,
+ "current": {
+
},
"datasource": "$datasource",
"hide": 0,
@@ -864,7 +913,7 @@ data:
]
},
"timezone": "",
- "title": "StatefulSets",
+ "title": "Kubernetes / StatefulSets",
"uid": "a31c1f46e6f727cb37c0d731a7245005",
"version": 0
}
diff --git a/stable/prometheus-operator/templates/grafana/servicemonitor.yaml b/stable/prometheus-operator/templates/grafana/servicemonitor.yaml
new file mode 100644
index 000000000000..c76a87cd4799
--- /dev/null
+++ b/stable/prometheus-operator/templates/grafana/servicemonitor.yaml
@@ -0,0 +1,31 @@
+{{- if and .Values.grafana.enabled .Values.grafana.serviceMonitor.selfMonitor }}
+apiVersion: {{ printf "%s/v1" (.Values.prometheusOperator.crdApiGroup | default "monitoring.coreos.com") }}
+kind: ServiceMonitor
+metadata:
+ name: {{ template "prometheus-operator.fullname" . }}-grafana
+ labels:
+ app: {{ template "prometheus-operator.name" . }}-grafana
+{{ include "prometheus-operator.labels" . | indent 4 }}
+spec:
+ selector:
+ matchLabels:
+ app: grafana
+ release: {{ .Release.Name | quote }}
+ namespaceSelector:
+ matchNames:
+ - {{ .Release.Namespace | quote }}
+ endpoints:
+ - port: service
+ {{- if .Values.grafana.serviceMonitor.interval }}
+ interval: {{ .Values.grafana.serviceMonitor.interval }}
+ {{- end }}
+ path: "/metrics"
+{{- if .Values.grafana.serviceMonitor.metricRelabelings }}
+ metricRelabelings:
+{{ toYaml .Values.grafana.serviceMonitor.metricRelabelings | indent 6 }}
+{{- end }}
+{{- if .Values.grafana.serviceMonitor.relabelings }}
+ relabelings:
+{{ toYaml .Values.grafana.serviceMonitor.relabelings | indent 6 }}
+{{- end }}
+{{- end }}
diff --git a/stable/prometheus-operator/templates/prometheus-operator/clusterrole.yaml b/stable/prometheus-operator/templates/prometheus-operator/clusterrole.yaml
index 594a20192a47..53724a23cc73 100644
--- a/stable/prometheus-operator/templates/prometheus-operator/clusterrole.yaml
+++ b/stable/prometheus-operator/templates/prometheus-operator/clusterrole.yaml
@@ -48,11 +48,13 @@ rules:
- ""
resources:
- services
+ - services/finalizers
- endpoints
verbs:
- get
- create
- update
+ - delete
- apiGroups:
- ""
resources:
diff --git a/stable/prometheus-operator/templates/prometheus-operator/crd-alertmanager.yaml b/stable/prometheus-operator/templates/prometheus-operator/crd-alertmanager.yaml
index 1834d02f720b..16514e55ec8f 100644
--- a/stable/prometheus-operator/templates/prometheus-operator/crd-alertmanager.yaml
+++ b/stable/prometheus-operator/templates/prometheus-operator/crd-alertmanager.yaml
@@ -742,7 +742,6 @@ spec:
configMapRef:
description: |-
ConfigMapEnvSource selects a ConfigMap to populate the environment variables with.
-
The contents of the target ConfigMap's Data field will represent the key-value pairs as environment variables.
properties:
name:
@@ -758,7 +757,6 @@ spec:
secretRef:
description: |-
SecretEnvSource selects a Secret to populate the environment variables with.
-
The contents of the target Secret's Data field will represent the key-value pairs as environment variables.
properties:
name:
@@ -1323,8 +1321,7 @@ spec:
type: boolean
volumeDevices:
description: volumeDevices is the list of block devices to be
- used by the container. This is an alpha feature and may change
- in the future.
+ used by the container. This is a beta feature.
items:
description: volumeDevice describes a mapping of a raw block
device within a container.
@@ -1386,6 +1383,12 @@ spec:
under. This is necessary to generate correct URLs. This is necessary
if Alertmanager is not served from root of a DNS name.
type: string
+ image:
+ description: Image if specified has precedence over baseImage, tag and
+ sha combinations. Specifying the version is still necessary to ensure
+ the Prometheus Operator knows what version of Alertmanager is being
+ configured.
+ type: string
imagePullSecrets:
description: An optional list of references to secrets in the same namespace
to use for pulling prometheus and alertmanager images from registries
@@ -1459,9 +1462,7 @@ spec:
generateName:
description: |-
GenerateName is an optional prefix, used by the server, to generate a unique name ONLY IF the Name field has not been provided. If this field is used, the name returned to the client will be different than the name passed. This value will also be combined with a unique suffix. The provided value has the same validation rules as the Name field, and may be truncated by the length of the suffix required to make the value unique on the server.
-
If this field is specified and the generated name exists, the server will NOT return a 409 - instead, it will either return 201 Created or 500 with Reason ServerTimeout indicating a unique name could not be found in the time allotted, and the client should retry (optionally after the time indicated in the Retry-After header).
-
Applied only if Name is not specified. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#idempotency
type: string
generation:
@@ -1525,7 +1526,6 @@ spec:
field:
description: |-
The field of the resource that has caused this error, as named by its JSON serialization. May include dot and postfix notation for nested attributes. Arrays are zero-indexed. Fields may appear more than once in an array of causes due to fields having multiple errors. Optional.
-
Examples:
"name" - the field "name" on the current resource
"items[0].name" - the field "name" on the first array entry in "items"
@@ -1638,7 +1638,6 @@ spec:
namespace:
description: |-
Namespace defines the space within each name must be unique. An empty namespace is equivalent to the "default" namespace, but "default" is the canonical representation. Not all objects are required to be scoped to a namespace - the value of this field for those objects will be empty.
-
Must be a DNS_LABEL. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/namespaces
type: string
ownerReferences:
@@ -1649,8 +1648,9 @@ spec:
set to true. There cannot be more than one managing controller.
items:
description: OwnerReference contains enough information to let
- you identify an owning object. Currently, an owning object must
- be in the same namespace, so there is no namespace field.
+ you identify an owning object. An owning object must be in the
+ same namespace as the dependent, or be cluster-scoped, so there
+ is no namespace field.
properties:
apiVersion:
description: API version of the referent.
@@ -1684,7 +1684,6 @@ spec:
resourceVersion:
description: |-
An opaque value that represents the internal version of this object that can be used by clients to determine when objects have changed. May be used for optimistic concurrency, change detection, and the watch operation on a resource or set of resources. Clients must treat these values as opaque and passed unmodified back to the server. They may only be valid for a particular resource or set of resources.
-
Populated by the system. Read-only. Value must be treated as opaque by clients and . More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#concurrency-control-and-consistency
type: string
selfLink:
@@ -1694,7 +1693,6 @@ spec:
uid:
description: |-
UID is the unique in time and space value for this object. It is typically generated by the server on successful creation of a resource and is not allowed to change on PUT operations.
-
Populated by the system. Read-only. More info: http://kubernetes.io/docs/user-guide/identifiers#uids
type: string
priorityClassName:
@@ -1721,8 +1719,8 @@ spec:
type: object
retention:
description: Time duration Alertmanager shall retain data for. Default
- is '120h', and must match the regular expression `[0-9]+(ms|s|m|h|d|w|y)`
- (milliseconds seconds minutes hours days weeks years).
+ is '120h', and must match the regular expression `[0-9]+(ms|s|m|h)`
+ (milliseconds seconds minutes hours).
type: string
routePrefix:
description: The route prefix Alertmanager registers HTTP handlers for.
@@ -1747,9 +1745,7 @@ spec:
fsGroup:
description: |-
A special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod:
-
1. The owning GID will be the FSGroup 2. The setgid bit is set (new files created in the volume will be owned by FSGroup) 3. The permission bits are OR'd with rw-rw----
-
If unset, the Kubelet will not modify the ownership and permissions of any volume.
format: int64
type: integer
@@ -1838,11 +1834,6 @@ spec:
is specified, then by default an [EmptyDir](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir)
will be used.
properties:
- class:
- description: 'Name of the StorageClass to use when requesting storage
- provisioning. More info: https://kubernetes.io/docs/user-guide/persistent-volumes/#storageclasses
- (DEPRECATED - instead use `volumeClaimTemplate.spec.storageClassName`)'
- type: string
emptyDir:
description: Represents an empty directory for a pod. Empty directory
volumes support ownership management and SELinux relabeling.
@@ -1853,63 +1844,6 @@ spec:
Must be an empty string (default) or Memory. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir'
type: string
sizeLimit: {}
- resources:
- description: ResourceRequirements describes the compute resource
- requirements.
- properties:
- limits:
- description: 'Limits describes the maximum amount of compute
- resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/'
- type: object
- requests:
- description: 'Requests describes the minimum amount of compute
- resources required. If Requests is omitted for a container,
- it defaults to Limits if that is explicitly specified, otherwise
- to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/'
- type: object
- selector:
- description: A label selector is a label query over a set of resources.
- The result of matchLabels and matchExpressions are ANDed. An empty
- label selector matches all objects. A null label selector matches
- no objects.
- properties:
- matchExpressions:
- description: matchExpressions is a list of label selector requirements.
- The requirements are ANDed.
- items:
- description: A label selector requirement is a selector that
- contains values, a key, and an operator that relates the
- key and values.
- properties:
- key:
- description: key is the label key that the selector applies
- to.
- type: string
- operator:
- description: operator represents a key's relationship
- to a set of values. Valid operators are In, NotIn, Exists
- and DoesNotExist.
- type: string
- values:
- description: values is an array of string values. If the
- operator is In or NotIn, the values array must be non-empty.
- If the operator is Exists or DoesNotExist, the values
- array must be empty. This array is replaced during a
- strategic merge patch.
- items:
- type: string
- type: array
- required:
- - key
- - operator
- type: array
- matchLabels:
- description: matchLabels is a map of {key,value} pairs. A single
- {key,value} in the matchLabels map is equivalent to an element
- of matchExpressions, whose key field is "key", the operator
- is "In", and the values array contains only "value". The requirements
- are ANDed.
- type: object
volumeClaimTemplate:
description: PersistentVolumeClaim is a user's request for and claim
to a persistent volume
@@ -1977,9 +1911,7 @@ spec:
generateName:
description: |-
GenerateName is an optional prefix, used by the server, to generate a unique name ONLY IF the Name field has not been provided. If this field is used, the name returned to the client will be different than the name passed. This value will also be combined with a unique suffix. The provided value has the same validation rules as the Name field, and may be truncated by the length of the suffix required to make the value unique on the server.
-
If this field is specified and the generated name exists, the server will NOT return a 409 - instead, it will either return 201 Created or 500 with Reason ServerTimeout indicating a unique name could not be found in the time allotted, and the client should retry (optionally after the time indicated in the Retry-After header).
-
Applied only if Name is not specified. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#idempotency
type: string
generation:
@@ -2047,7 +1979,6 @@ spec:
field:
description: |-
The field of the resource that has caused this error, as named by its JSON serialization. May include dot and postfix notation for nested attributes. Arrays are zero-indexed. Fields may appear more than once in an array of causes due to fields having multiple errors. Optional.
-
Examples:
"name" - the field "name" on the current resource
"items[0].name" - the field "name" on the first array entry in "items"
@@ -2168,7 +2099,6 @@ spec:
namespace:
description: |-
Namespace defines the space within each name must be unique. An empty namespace is equivalent to the "default" namespace, but "default" is the canonical representation. Not all objects are required to be scoped to a namespace - the value of this field for those objects will be empty.
-
Must be a DNS_LABEL. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/namespaces
type: string
ownerReferences:
@@ -2180,9 +2110,9 @@ spec:
There cannot be more than one managing controller.
items:
description: OwnerReference contains enough information
- to let you identify an owning object. Currently, an
- owning object must be in the same namespace, so there
- is no namespace field.
+ to let you identify an owning object. An owning object
+ must be in the same namespace as the dependent, or be
+ cluster-scoped, so there is no namespace field.
properties:
apiVersion:
description: API version of the referent.
@@ -2217,7 +2147,6 @@ spec:
resourceVersion:
description: |-
An opaque value that represents the internal version of this object that can be used by clients to determine when objects have changed. May be used for optimistic concurrency, change detection, and the watch operation on a resource or set of resources. Clients must treat these values as opaque and passed unmodified back to the server. They may only be valid for a particular resource or set of resources.
-
Populated by the system. Read-only. Value must be treated as opaque by clients and . More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#concurrency-control-and-consistency
type: string
selfLink:
@@ -2227,7 +2156,6 @@ spec:
uid:
description: |-
UID is the unique in time and space value for this object. It is typically generated by the server on successful creation of a resource and is not allowed to change on PUT operations.
-
Populated by the system. Read-only. More info: http://kubernetes.io/docs/user-guide/identifiers#uids
type: string
spec:
@@ -2327,8 +2255,7 @@ spec:
volumeMode:
description: volumeMode defines what type of volume is required
by the claim. Value of Filesystem is implied when not
- included in claim spec. This is an alpha feature and may
- change in the future.
+ included in claim spec. This is a beta feature.
type: string
volumeName:
description: VolumeName is the binding reference to the
diff --git a/stable/prometheus-operator/templates/prometheus-operator/crd-prometheus.yaml b/stable/prometheus-operator/templates/prometheus-operator/crd-prometheus.yaml
index 0debca787c32..8fedf928b53b 100644
--- a/stable/prometheus-operator/templates/prometheus-operator/crd-prometheus.yaml
+++ b/stable/prometheus-operator/templates/prometheus-operator/crd-prometheus.yaml
@@ -1488,8 +1488,7 @@ spec:
type: boolean
volumeDevices:
description: volumeDevices is the list of block devices to be
- used by the container. This is an alpha feature and may change
- in the future.
+ used by the container. This is a beta feature.
items:
description: volumeDevice describes a mapping of a raw block
device within a container.
@@ -1546,6 +1545,14 @@ spec:
required:
- name
type: array
+ enableAdminAPI:
+ description: 'Enable access to prometheus web admin API. Defaults to
+ the value of `false`. WARNING: Enabling the admin APIs enables mutating
+ endpoints, to delete data, shutdown Prometheus, and more. Enabling
+ this should be done with care and the user is advised to add additional
+ authentication authorization via a proxy to ensure only clients authorized
+ to perform these actions can do so. For more information see https://prometheus.io/docs/prometheus/latest/querying/api/#tsdb-admin-apis'
+ type: boolean
evaluationInterval:
description: Interval between consecutive evaluations.
type: string
@@ -1558,6 +1565,12 @@ spec:
under. This is necessary to generate correct URLs. This is necessary
if Prometheus is not served from root of a DNS name.
type: string
+ image:
+ description: Image if specified has precedence over baseImage, tag and
+ sha combinations. Specifying the version is still necessary to ensure
+ the Prometheus Operator knows what version of Prometheus is being
+ configured.
+ type: string
imagePullSecrets:
description: An optional list of references to secrets in the same namespace
to use for pulling prometheus and alertmanager images from registries
@@ -1574,6 +1587,9 @@ spec:
description: ListenLocal makes the Prometheus server listen on loopback,
so that it does not bind against the Pod IP.
type: boolean
+ logFormat:
+ description: Log format for Prometheus to be configured with.
+ type: string
logLevel:
description: Log level for Prometheus to be configured with.
type: string
@@ -1820,8 +1836,9 @@ spec:
set to true. There cannot be more than one managing controller.
items:
description: OwnerReference contains enough information to let
- you identify an owning object. Currently, an owning object must
- be in the same namespace, so there is no namespace field.
+ you identify an owning object. An owning object must be in the
+ same namespace as the dependent, or be cluster-scoped, so there
+ is no namespace field.
properties:
apiVersion:
description: API version of the referent.
@@ -1871,6 +1888,21 @@ spec:
priorityClassName:
description: Priority class assigned to the Pods
type: string
+ query:
+ description: QuerySpec defines the query command line flags when starting
+ Prometheus.
+ properties:
+ lookbackDelta:
+ description: The delta difference allowed for retrieving metrics
+ during expression evaluations.
+ type: string
+ maxConcurrency:
+ description: Number of concurrent queries that can be run at once.
+ format: int32
+ type: integer
+ timeout:
+ description: Maximum time a query may take before being aborted.
+ type: string
remoteRead:
description: If specified, the remote_read spec. This is an experimental
feature, it may change in any upcoming release in a breaking way.
@@ -2046,6 +2078,11 @@ spec:
description: MinBackoff is the initial retry delay. Gets doubled
for every retry.
type: string
+ minShards:
+ description: MinShards is the minimum number of shards, i.e.
+ amount of concurrency.
+ format: int32
+ type: integer
remoteTimeout:
description: Timeout for requests to the remote write endpoint.
type: string
@@ -2117,6 +2154,10 @@ spec:
required:
- url
type: array
+ replicaExternalLabelName:
+ description: Name of Prometheus external label used to denote replica
+ name. Defaults to the value of `prometheus_replica`.
+ type: string
replicas:
description: Number of instances to deploy for a Prometheus deployment.
format: int32
@@ -2230,6 +2271,25 @@ spec:
"In", and the values array contains only "value". The requirements
are ANDed.
type: object
+ rules:
+ description: /--rules.*/ command-line arguments
+ properties:
+ alert:
+ description: /--rules.alert.*/ command-line arguments
+ properties:
+ forGracePeriod:
+ description: Minimum duration between alert and restored 'for'
+ state. This is maintained only for alerts with configured
+ 'for' time greater than grace period.
+ type: string
+ forOutageTolerance:
+ description: Max time to tolerate prometheus outage for restoring
+ 'for' state of alert.
+ type: string
+ resendDelay:
+ description: Minimum amount of time to wait before resending
+ an alert to Alertmanager.
+ type: string
scrapeInterval:
description: Interval between consecutive scrapes.
type: string
@@ -2424,11 +2484,6 @@ spec:
is specified, then by default an [EmptyDir](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir)
will be used.
properties:
- class:
- description: 'Name of the StorageClass to use when requesting storage
- provisioning. More info: https://kubernetes.io/docs/user-guide/persistent-volumes/#storageclasses
- (DEPRECATED - instead use `volumeClaimTemplate.spec.storageClassName`)'
- type: string
emptyDir:
description: Represents an empty directory for a pod. Empty directory
volumes support ownership management and SELinux relabeling.
@@ -2439,63 +2494,6 @@ spec:
Must be an empty string (default) or Memory. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir'
type: string
sizeLimit: {}
- resources:
- description: ResourceRequirements describes the compute resource
- requirements.
- properties:
- limits:
- description: 'Limits describes the maximum amount of compute
- resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/'
- type: object
- requests:
- description: 'Requests describes the minimum amount of compute
- resources required. If Requests is omitted for a container,
- it defaults to Limits if that is explicitly specified, otherwise
- to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/'
- type: object
- selector:
- description: A label selector is a label query over a set of resources.
- The result of matchLabels and matchExpressions are ANDed. An empty
- label selector matches all objects. A null label selector matches
- no objects.
- properties:
- matchExpressions:
- description: matchExpressions is a list of label selector requirements.
- The requirements are ANDed.
- items:
- description: A label selector requirement is a selector that
- contains values, a key, and an operator that relates the
- key and values.
- properties:
- key:
- description: key is the label key that the selector applies
- to.
- type: string
- operator:
- description: operator represents a key's relationship
- to a set of values. Valid operators are In, NotIn, Exists
- and DoesNotExist.
- type: string
- values:
- description: values is an array of string values. If the
- operator is In or NotIn, the values array must be non-empty.
- If the operator is Exists or DoesNotExist, the values
- array must be empty. This array is replaced during a
- strategic merge patch.
- items:
- type: string
- type: array
- required:
- - key
- - operator
- type: array
- matchLabels:
- description: matchLabels is a map of {key,value} pairs. A single
- {key,value} in the matchLabels map is equivalent to an element
- of matchExpressions, whose key field is "key", the operator
- is "In", and the values array contains only "value". The requirements
- are ANDed.
- type: object
volumeClaimTemplate:
description: PersistentVolumeClaim is a user's request for and claim
to a persistent volume
@@ -2766,9 +2764,9 @@ spec:
There cannot be more than one managing controller.
items:
description: OwnerReference contains enough information
- to let you identify an owning object. Currently, an
- owning object must be in the same namespace, so there
- is no namespace field.
+ to let you identify an owning object. An owning object
+ must be in the same namespace as the dependent, or be
+ cluster-scoped, so there is no namespace field.
properties:
apiVersion:
description: API version of the referent.
@@ -2913,8 +2911,7 @@ spec:
volumeMode:
description: volumeMode defines what type of volume is required
by the claim. Value of Filesystem is implied when not
- included in claim spec. This is an alpha feature and may
- change in the future.
+ included in claim spec. This is a beta feature.
type: string
volumeName:
description: VolumeName is the binding reference to the
@@ -2989,9 +2986,14 @@ spec:
baseImage:
description: Thanos base image if other than default.
type: string
+ clusterAdvertiseAddress:
+ description: Explicit (external) ip:port address to advertise for
+ gossip in gossip cluster. Used internally for membership only.
+ type: string
gcs:
- description: ThanosGCSSpec defines parameters for use of Google
- Cloud Storage (GCS) with Thanos.
+ description: 'Deprecated: ThanosGCSSpec should be configured with
+ an ObjectStorageConfig secret starting with Thanos v0.2.0. ThanosGCSSpec
+ will be removed.'
properties:
bucket:
description: Google Cloud Storage bucket name for stored blocks.
@@ -3013,6 +3015,33 @@ spec:
type: boolean
required:
- key
+ grpcAdvertiseAddress:
+ description: Explicit (external) host:port address to advertise
+ for gRPC StoreAPI in gossip cluster. If empty, 'grpc-address'
+ will be used.
+ type: string
+ image:
+ description: Image if specified has precedence over baseImage, tag
+ and sha combinations. Specifying the version is still necessary
+ to ensure the Prometheus Operator knows what version of Thanos
+ is being configured.
+ type: string
+ objectStorageConfig:
+ description: SecretKeySelector selects a key of a Secret.
+ properties:
+ key:
+ description: The key of the secret to select from. Must be
+ a valid secret key.
+ type: string
+ name:
+ description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names'
+ type: string
+ optional:
+ description: Specify whether the Secret or its key must be
+ defined
+ type: boolean
+ required:
+ - key
peers:
description: Peers is a DNS name for Thanos to discover peers through.
type: string
@@ -3031,8 +3060,9 @@ spec:
to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/'
type: object
s3:
- description: ThanosS3Spec defines parameters for of AWS Simple Storage
- Service (S3) with Thanos. (S3 compatible services apply as well)
+ description: 'Deprecated: ThanosS3Spec should be configured with
+ an ObjectStorageConfig secret starting with Thanos v0.2.0. ThanosS3Spec
+ will be removed.'
properties:
accessKey:
description: SecretKeySelector selects a key of a Secret.
diff --git a/stable/prometheus-operator/templates/prometheus-operator/crd-prometheusrules.yaml b/stable/prometheus-operator/templates/prometheus-operator/crd-prometheusrules.yaml
index 9839687eecf9..02a21b36d36c 100644
--- a/stable/prometheus-operator/templates/prometheus-operator/crd-prometheusrules.yaml
+++ b/stable/prometheus-operator/templates/prometheus-operator/crd-prometheusrules.yaml
@@ -12,20 +12,10 @@ metadata:
"helm.sh/hook": crd-install
"helm.sh/hook-delete-policy": "before-hook-creation"
spec:
- additionalPrinterColumns:
- - JSONPath: .metadata.creationTimestamp
- description: |-
- CreationTimestamp is a timestamp representing the server time when this object was created. It is not guaranteed to be set in happens-before order across separate operations. Clients may not set this value. It is represented in RFC3339 form and is in UTC.
-
- Populated by the system. Read-only. Null for lists. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
- name: Age
- type: date
group: {{ .Values.prometheusOperator.crdApiGroup | default "monitoring.coreos.com" }}
names:
kind: PrometheusRule
- listKind: PrometheusRuleList
plural: prometheusrules
- singular: prometheusrule
scope: Namespaced
validation:
openAPIV3Schema:
@@ -85,9 +75,7 @@ spec:
generateName:
description: |-
GenerateName is an optional prefix, used by the server, to generate a unique name ONLY IF the Name field has not been provided. If this field is used, the name returned to the client will be different than the name passed. This value will also be combined with a unique suffix. The provided value has the same validation rules as the Name field, and may be truncated by the length of the suffix required to make the value unique on the server.
-
If this field is specified and the generated name exists, the server will NOT return a 409 - instead, it will either return 201 Created or 500 with Reason ServerTimeout indicating a unique name could not be found in the time allotted, and the client should retry (optionally after the time indicated in the Retry-After header).
-
Applied only if Name is not specified. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#idempotency
type: string
generation:
@@ -151,7 +139,6 @@ spec:
field:
description: |-
The field of the resource that has caused this error, as named by its JSON serialization. May include dot and postfix notation for nested attributes. Arrays are zero-indexed. Fields may appear more than once in an array of causes due to fields having multiple errors. Optional.
-
Examples:
"name" - the field "name" on the current resource
"items[0].name" - the field "name" on the first array entry in "items"
@@ -214,11 +201,12 @@ spec:
server has more data available. The value is opaque and
may be used to issue another request to the endpoint that
served this list to retrieve the next set of available
- objects. Continuing a list may not be possible if the
- server configuration has changed or more than a few minutes
- have passed. The resourceVersion field returned when using
- this continue value will be identical to the value in
- the first response.
+ objects. Continuing a consistent list may not be possible
+ if the server configuration has changed or more than a
+ few minutes have passed. The resourceVersion field returned
+ when using this continue value will be identical to the
+ value in the first response, unless you have received
+ this token from an error message.
type: string
resourceVersion:
description: 'String that identifies the server''s internal
@@ -259,7 +247,6 @@ spec:
namespace:
description: |-
Namespace defines the space within each name must be unique. An empty namespace is equivalent to the "default" namespace, but "default" is the canonical representation. Not all objects are required to be scoped to a namespace - the value of this field for those objects will be empty.
-
Must be a DNS_LABEL. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/namespaces
type: string
ownerReferences:
@@ -270,8 +257,9 @@ spec:
There cannot be more than one managing controller.
items:
description: OwnerReference contains enough information to let you
- identify an owning object. Currently, an owning object must be in
- the same namespace, so there is no namespace field.
+ identify an owning object. An owning object must be in the same
+ namespace as the dependent, or be cluster-scoped, so there is no
+ namespace field.
properties:
apiVersion:
description: API version of the referent.
@@ -304,7 +292,6 @@ spec:
resourceVersion:
description: |-
An opaque value that represents the internal version of this object that can be used by clients to determine when objects have changed. May be used for optimistic concurrency, change detection, and the watch operation on a resource or set of resources. Clients must treat these values as opaque and passed unmodified back to the server. They may only be valid for a particular resource or set of resources.
-
Populated by the system. Read-only. Value must be treated as opaque by clients and . More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#concurrency-control-and-consistency
type: string
selfLink:
@@ -314,7 +301,6 @@ spec:
uid:
description: |-
UID is the unique in time and space value for this object. It is typically generated by the server on successful creation of a resource and is not allowed to change on PUT operations.
-
Populated by the system. Read-only. More info: http://kubernetes.io/docs/user-guide/identifiers#uids
type: string
spec:
diff --git a/stable/prometheus-operator/templates/prometheus-operator/crd-servicemonitor.yaml b/stable/prometheus-operator/templates/prometheus-operator/crd-servicemonitor.yaml
index ac0a633bc0cd..7223d86ce4bf 100644
--- a/stable/prometheus-operator/templates/prometheus-operator/crd-servicemonitor.yaml
+++ b/stable/prometheus-operator/templates/prometheus-operator/crd-servicemonitor.yaml
@@ -12,20 +12,10 @@ metadata:
"helm.sh/hook": crd-install
"helm.sh/hook-delete-policy": "before-hook-creation"
spec:
- additionalPrinterColumns:
- - JSONPath: .metadata.creationTimestamp
- description: |-
- CreationTimestamp is a timestamp representing the server time when this object was created. It is not guaranteed to be set in happens-before order across separate operations. Clients may not set this value. It is represented in RFC3339 form and is in UTC.
-
- Populated by the system. Read-only. Null for lists. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
- name: Age
- type: date
group: {{ .Values.prometheusOperator.crdApiGroup | default "monitoring.coreos.com" }}
names:
kind: ServiceMonitor
- listKind: ServiceMonitorList
plural: servicemonitors
- singular: servicemonitor
scope: Namespaced
validation:
openAPIV3Schema:
@@ -156,7 +146,7 @@ spec:
type: string
relabelings:
description: 'RelabelConfigs to apply to samples before ingestion.
- More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#'
+ More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config'
items:
description: 'RelabelConfig allows dynamic rewriting of the
label set, being applied to samples before ingestion. It defines
@@ -174,7 +164,7 @@ spec:
type: integer
regex:
description: Regular expression against which the extracted
- value is matched. default is '(.*)'
+ value is matched. default is '(.*)'
type: string
replacement:
description: Replacement value against which a regex replace
diff --git a/stable/prometheus-operator/templates/prometheus-operator/deployment.yaml b/stable/prometheus-operator/templates/prometheus-operator/deployment.yaml
index b2cfea07c4a9..74587ac413b8 100644
--- a/stable/prometheus-operator/templates/prometheus-operator/deployment.yaml
+++ b/stable/prometheus-operator/templates/prometheus-operator/deployment.yaml
@@ -19,6 +19,10 @@ spec:
{{ include "prometheus-operator.labels" . | indent 8 }}
{{- if .Values.prometheusOperator.podLabels }}
{{ toYaml .Values.prometheusOperator.podLabels | indent 8 }}
+{{- end }}
+{{- if .Values.prometheusOperator.podAnnotations }}
+ annotations:
+{{ toYaml .Values.prometheusOperator.podAnnotations | indent 8 }}
{{- end }}
spec:
{{- if .Values.prometheusOperator.priorityClassName }}
@@ -31,12 +35,24 @@ spec:
args:
{{- if .Values.prometheusOperator.kubeletService.enabled }}
- --kubelet-service={{ .Values.prometheusOperator.kubeletService.namespace }}/{{ template "prometheus-operator.fullname" . }}-kubelet
+ {{- end }}
+ {{- if .Values.prometheusOperator.logFormat }}
+ - --log-format={{ .Values.prometheusOperator.logFormat }}
+ {{- end }}
+ {{- if .Values.prometheusOperator.logLevel }}
+ - --log-level={{ .Values.prometheusOperator.logLevel }}
{{- end }}
- --logtostderr=true
- --crd-apigroup={{ .Values.prometheusOperator.crdApiGroup | default "monitoring.coreos.com" }}
- --localhost=127.0.0.1
- --prometheus-config-reloader={{ .Values.prometheusOperator.prometheusConfigReloaderImage.repository }}:{{ .Values.prometheusOperator.prometheusConfigReloaderImage.tag }}
- --config-reloader-image={{ .Values.prometheusOperator.configmapReloadImage.repository }}:{{ .Values.prometheusOperator.configmapReloadImage.tag }}
+ {{- if .Values.prometheusOperator.configReloaderCpu }}
+ - --config-reloader-cpu={{ .Values.prometheusOperator.configReloaderCpu }}
+ {{- end }}
+ {{- if .Values.prometheusOperator.configReloaderMemory }}
+ - --config-reloader-memory={{ .Values.prometheusOperator.configReloaderMemory }}
+ {{- end }}
ports:
- containerPort: 8080
name: http
diff --git a/stable/prometheus-operator/templates/prometheus-operator/serviceaccount.yaml b/stable/prometheus-operator/templates/prometheus-operator/serviceaccount.yaml
index 2cffa7de9292..fbf876855ecf 100644
--- a/stable/prometheus-operator/templates/prometheus-operator/serviceaccount.yaml
+++ b/stable/prometheus-operator/templates/prometheus-operator/serviceaccount.yaml
@@ -1,4 +1,4 @@
-{{- if and .Values.prometheusOperator.enabled .Values.global.rbac.create .Values.prometheusOperator.serviceAccount.create }}
+{{- if and .Values.prometheusOperator.enabled .Values.prometheusOperator.serviceAccount.create }}
apiVersion: v1
kind: ServiceAccount
metadata:
diff --git a/stable/prometheus-operator/templates/prometheus-operator/servicemonitor.yaml b/stable/prometheus-operator/templates/prometheus-operator/servicemonitor.yaml
index 9532c1f973cc..00584406e471 100644
--- a/stable/prometheus-operator/templates/prometheus-operator/servicemonitor.yaml
+++ b/stable/prometheus-operator/templates/prometheus-operator/servicemonitor.yaml
@@ -10,6 +10,17 @@ spec:
endpoints:
- port: http
honorLabels: true
+ {{- if .Values.prometheusOperator.serviceMonitor.interval }}
+ interval: {{ .Values.prometheusOperator.serviceMonitor.interval }}
+ {{- end }}
+{{- if .Values.prometheusOperator.serviceMonitor.metricRelabelings }}
+ metricRelabelings:
+{{ toYaml .Values.prometheusOperator.serviceMonitor.metricRelabelings | indent 6 }}
+{{- end }}
+{{- if .Values.prometheusOperator.serviceMonitor.relabelings }}
+ relabelings:
+{{ toYaml .Values.prometheusOperator.serviceMonitor.relabelings | indent 6 }}
+{{- end }}
selector:
matchLabels:
app: {{ template "prometheus-operator.name" . }}-operator
diff --git a/stable/prometheus-operator/templates/prometheus/additionalPrometheusRules.yaml b/stable/prometheus-operator/templates/prometheus/additionalPrometheusRules.yaml
new file mode 100644
index 000000000000..0d85c9bd00e1
--- /dev/null
+++ b/stable/prometheus-operator/templates/prometheus/additionalPrometheusRules.yaml
@@ -0,0 +1,20 @@
+{{- if .Values.additionalPrometheusRules }}
+apiVersion: v1
+kind: List
+items:
+{{- range .Values.additionalPrometheusRules }}
+ - apiVersion: {{ printf "%s/v1" ($.Values.prometheusOperator.crdApiGroup | default "monitoring.coreos.com") }}
+ kind: PrometheusRule
+ metadata:
+ name: {{ template "prometheus-operator.name" $ }}-{{ .name }}
+ labels:
+ app: {{ template "prometheus-operator.name" $ }}
+{{ include "prometheus-operator.labels" $ | indent 8 }}
+ {{- if .additionalLabels }}
+{{ toYaml .additionalLabels | indent 8 }}
+ {{- end }}
+ spec:
+ groups:
+{{ toYaml .groups | indent 8 }}
+{{- end }}
+{{- end }}
diff --git a/stable/prometheus-operator/templates/prometheus/ingress.yaml b/stable/prometheus-operator/templates/prometheus/ingress.yaml
index e013e9608b30..d6b16ace3b83 100644
--- a/stable/prometheus-operator/templates/prometheus/ingress.yaml
+++ b/stable/prometheus-operator/templates/prometheus/ingress.yaml
@@ -1,6 +1,8 @@
{{- if and .Values.prometheus.enabled .Values.prometheus.ingress.enabled }}
-{{- $routePrefix := .Values.prometheus.prometheusSpec.routePrefix }}
{{- $serviceName := printf "%s-%s" (include "prometheus-operator.fullname" .) "prometheus" }}
+{{- $servicePort := 9090 -}}
+{{- $routePrefix := list .Values.prometheus.prometheusSpec.routePrefix }}
+{{- $paths := .Values.prometheus.ingress.paths | default $routePrefix -}}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
@@ -17,17 +19,30 @@ metadata:
{{- end }}
spec:
rules:
- {{- range $host := .Values.prometheus.ingress.hosts }}
- - host: {{ . }}
+ {{- if .Values.prometheus.ingress.hosts }}
+ {{- range $host := .Values.prometheus.ingress.hosts }}
+ - host: {{ tpl $host $ }}
http:
paths:
- - path: "{{ $routePrefix }}"
+ {{- range $p := $paths }}
+ - path: {{ tpl $p $ }}
backend:
serviceName: {{ $serviceName }}
- servicePort: 9090
- {{- end }}
-{{- if .Values.prometheus.ingress.tls }}
+ servicePort: {{ $servicePort }}
+ {{- end -}}
+ {{- end -}}
+ {{- else }}
+ - http:
+ paths:
+ {{- range $p := $paths }}
+ - path: {{ tpl $p $ }}
+ backend:
+ serviceName: {{ $serviceName }}
+ servicePort: {{ $servicePort }}
+ {{- end -}}
+ {{- end -}}
+ {{- if .Values.prometheus.ingress.tls }}
tls:
{{ toYaml .Values.prometheus.ingress.tls | indent 4 }}
-{{- end }}
-{{- end }}
\ No newline at end of file
+ {{- end -}}
+{{- end -}}
diff --git a/stable/prometheus-operator/templates/prometheus/prometheus.yaml b/stable/prometheus-operator/templates/prometheus/prometheus.yaml
index f528b2e151f2..7138f9f22d84 100644
--- a/stable/prometheus-operator/templates/prometheus/prometheus.yaml
+++ b/stable/prometheus-operator/templates/prometheus/prometheus.yaml
@@ -41,7 +41,9 @@ spec:
paused: {{ .Values.prometheus.prometheusSpec.paused }}
replicas: {{ .Values.prometheus.prometheusSpec.replicas }}
logLevel: {{ .Values.prometheus.prometheusSpec.logLevel }}
+ logFormat: {{ .Values.prometheus.prometheusSpec.logFormat }}
listenLocal: {{ .Values.prometheus.prometheusSpec.listenLocal }}
+ enableAdminAPI: {{ .Values.prometheus.prometheusSpec.enableAdminAPI }}
{{- if .Values.prometheus.prometheusSpec.scrapeInterval }}
scrapeInterval: {{ .Values.prometheus.prometheusSpec.scrapeInterval }}
{{- end }}
@@ -93,30 +95,41 @@ spec:
securityContext:
{{ toYaml .Values.prometheus.prometheusSpec.securityContext | indent 4 }}
{{- end }}
-
{{- if .Values.prometheus.prometheusSpec.ruleNamespaceSelector }}
ruleNamespaceSelector:
{{ toYaml .Values.prometheus.prometheusSpec.ruleNamespaceSelector | indent 4 }}
+{{ else }}
+ ruleNamespaceSelector: {}
{{- end }}
{{- if .Values.prometheus.prometheusSpec.ruleSelector }}
ruleSelector:
{{ toYaml .Values.prometheus.prometheusSpec.ruleSelector | indent 4}}
{{- else if .Values.prometheus.prometheusSpec.ruleSelectorNilUsesHelmValues }}
- ruleSelector:
+ ruleSelector:
matchLabels:
app: {{ template "prometheus-operator.name" . }}
release: {{ .Release.Name | quote }}
- {{- end }}
+{{ else }}
+ ruleSelector: {}
+{{- end }}
{{- if .Values.prometheus.prometheusSpec.storageSpec }}
storage:
{{ toYaml .Values.prometheus.prometheusSpec.storageSpec | indent 4 }}
{{- end }}
- {{- if .Values.prometheus.prometheusSpec.podMetadata }}
+{{- if .Values.prometheus.prometheusSpec.podMetadata }}
podMetadata:
{{ toYaml .Values.prometheus.prometheusSpec.podMetadata | indent 4 }}
- {{- end }}
-{{- if eq .Values.prometheus.prometheusSpec.podAntiAffinity "hard" }}
+{{- end }}
+{{- if .Values.prometheus.prometheusSpec.query }}
+ query:
+{{ toYaml .Values.prometheus.prometheusSpec.query | indent 4}}
+{{- end }}
+{{- if or .Values.prometheus.prometheusSpec.podAntiAffinity .Values.prometheus.prometheusSpec.affinity }}
affinity:
+{{- if .Values.prometheus.prometheusSpec.affinity }}
+{{ toYaml .Values.prometheus.prometheusSpec.affinity | indent 4 }}
+{{- end }}
+{{- if eq .Values.prometheus.prometheusSpec.podAntiAffinity "hard" }}
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- topologyKey: {{ .Values.prometheus.prometheusSpec.podAntiAffinityTopologyKey }}
@@ -125,7 +138,6 @@ spec:
app: prometheus
prometheus: {{ template "prometheus-operator.fullname" . }}-prometheus
{{- else if eq .Values.prometheus.prometheusSpec.podAntiAffinity "soft" }}
- affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
@@ -136,6 +148,7 @@ spec:
app: prometheus
prometheus: {{ template "prometheus-operator.fullname" . }}-prometheus
{{- end }}
+{{- end }}
{{- if .Values.prometheus.prometheusSpec.tolerations }}
tolerations:
{{ toYaml .Values.prometheus.prometheusSpec.tolerations | indent 4 }}
diff --git a/stable/prometheus-operator/templates/alertmanager/rules/alertmanager.rules.yaml b/stable/prometheus-operator/templates/prometheus/rules/alertmanager.rules.yaml
similarity index 74%
rename from stable/prometheus-operator/templates/alertmanager/rules/alertmanager.rules.yaml
rename to stable/prometheus-operator/templates/prometheus/rules/alertmanager.rules.yaml
index ed4df1aaaaa4..8d0b248407d8 100644
--- a/stable/prometheus-operator/templates/alertmanager/rules/alertmanager.rules.yaml
+++ b/stable/prometheus-operator/templates/prometheus/rules/alertmanager.rules.yaml
@@ -1,7 +1,10 @@
-# Generated from 'alertmanager.rules' group from https://raw.githubusercontent.com/coreos/prometheus-operator/master/contrib/kube-prometheus/manifests/prometheus-rules.yaml
+# Generated from 'alertmanager.rules' group from https://raw.githubusercontent.com/coreos/kube-prometheus/master/manifests/prometheus-rules.yaml
+# Do not change in-place! In order to change this file first read following link:
+# https://github.com/helm/charts/tree/master/stable/prometheus-operator/hack
{{- if and .Values.defaultRules.create .Values.defaultRules.rules.alertmanager }}
{{- $operatorJob := printf "%s-%s" (include "prometheus-operator.fullname" .) "operator" }}
{{- $alertmanagerJob := printf "%s-%s" (include "prometheus-operator.fullname" .) "alertmanager" }}
+{{- $namespace := .Release.Namespace }}
apiVersion: {{ printf "%s/v1" (.Values.prometheusOperator.crdApiGroup | default "monitoring.coreos.com") }}
kind: PrometheusRule
metadata:
@@ -23,14 +26,14 @@ spec:
- alert: AlertmanagerConfigInconsistent
annotations:
message: The configuration of the instances of the Alertmanager cluster `{{`{{$labels.service}}`}}` are out of sync.
- expr: count_values("config_hash", alertmanager_config_hash{job="{{ $alertmanagerJob }}"}) BY (service) / ON(service) GROUP_LEFT() label_replace(prometheus_operator_spec_replicas{job="{{ $operatorJob }}",controller="alertmanager"}, "service", "alertmanager-$1", "name", "(.*)") != 1
+ expr: count_values("config_hash", alertmanager_config_hash{job="{{ $alertmanagerJob }}",namespace="{{ $namespace }}"}) BY (service) / ON(service) GROUP_LEFT() label_replace(prometheus_operator_spec_replicas{job="{{ $operatorJob }}",namespace="{{ $namespace }}",controller="alertmanager"}, "service", "$1", "name", "(.*)") != 1
for: 5m
labels:
severity: critical
- alert: AlertmanagerFailedReload
annotations:
message: Reloading Alertmanager's configuration has failed for {{`{{ $labels.namespace }}`}}/{{`{{ $labels.pod}}`}}.
- expr: alertmanager_config_last_reload_successful{job="{{ $alertmanagerJob }}"} == 0
+ expr: alertmanager_config_last_reload_successful{job="{{ $alertmanagerJob }}",namespace="{{ $namespace }}"} == 0
for: 10m
labels:
severity: warning
@@ -38,9 +41,9 @@ spec:
annotations:
message: Alertmanager has not found all other members of the cluster.
expr: |-
- alertmanager_cluster_members{job="{{ $alertmanagerJob }}"}
+ alertmanager_cluster_members{job="{{ $alertmanagerJob }}",namespace="{{ $namespace }}"}
!= on (service) GROUP_LEFT()
- count by (service) (alertmanager_cluster_members{job="{{ $alertmanagerJob }}"})
+ count by (service) (alertmanager_cluster_members{job="{{ $alertmanagerJob }}",namespace="{{ $namespace }}"})
for: 5m
labels:
severity: critical
diff --git a/stable/prometheus-operator/templates/alertmanager/rules/etcd.yaml b/stable/prometheus-operator/templates/prometheus/rules/etcd.yaml
similarity index 97%
rename from stable/prometheus-operator/templates/alertmanager/rules/etcd.yaml
rename to stable/prometheus-operator/templates/prometheus/rules/etcd.yaml
index 7370bb99f179..a68eeff2a5e5 100644
--- a/stable/prometheus-operator/templates/alertmanager/rules/etcd.yaml
+++ b/stable/prometheus-operator/templates/prometheus/rules/etcd.yaml
@@ -1,4 +1,6 @@
# Generated from 'etcd' group from https://raw.githubusercontent.com/etcd-io/etcd/master/Documentation/op-guide/etcd3_alert.rules.yml
+# Do not change in-place! In order to change this file first read following link:
+# https://github.com/helm/charts/tree/master/stable/prometheus-operator/hack
{{- if and .Values.defaultRules.create .Values.kubeEtcd.enabled .Values.defaultRules.rules.etcd }}
apiVersion: {{ printf "%s/v1" (.Values.prometheusOperator.crdApiGroup | default "monitoring.coreos.com") }}
kind: PrometheusRule
diff --git a/stable/prometheus-operator/templates/alertmanager/rules/general.rules.yaml b/stable/prometheus-operator/templates/prometheus/rules/general.rules.yaml
similarity index 61%
rename from stable/prometheus-operator/templates/alertmanager/rules/general.rules.yaml
rename to stable/prometheus-operator/templates/prometheus/rules/general.rules.yaml
index 93ff5a7ce666..fcf2351a679b 100644
--- a/stable/prometheus-operator/templates/alertmanager/rules/general.rules.yaml
+++ b/stable/prometheus-operator/templates/prometheus/rules/general.rules.yaml
@@ -1,4 +1,6 @@
-# Generated from 'general.rules' group from https://raw.githubusercontent.com/coreos/prometheus-operator/master/contrib/kube-prometheus/manifests/prometheus-rules.yaml
+# Generated from 'general.rules' group from https://raw.githubusercontent.com/coreos/kube-prometheus/master/manifests/prometheus-rules.yaml
+# Do not change in-place! In order to change this file first read following link:
+# https://github.com/helm/charts/tree/master/stable/prometheus-operator/hack
{{- if and .Values.defaultRules.create .Values.defaultRules.rules.general }}
apiVersion: {{ printf "%s/v1" (.Values.prometheusOperator.crdApiGroup | default "monitoring.coreos.com") }}
kind: PrometheusRule
@@ -25,9 +27,19 @@ spec:
for: 10m
labels:
severity: warning
- - alert: DeadMansSwitch
+ - alert: Watchdog
annotations:
- message: This is a DeadMansSwitch meant to ensure that the entire alerting pipeline is functional.
+ message: 'This is an alert meant to ensure that the entire alerting pipeline is functional.
+
+ This alert is always firing, therefore it should always be firing in Alertmanager
+
+ and always fire against a receiver. There are integrations with various notification
+
+ mechanisms that send a notification when this alert is not firing. For example the
+
+ "DeadMansSnitch" integration in PagerDuty.
+
+ '
expr: vector(1)
labels:
severity: none
diff --git a/stable/prometheus-operator/templates/alertmanager/rules/k8s.rules.yaml b/stable/prometheus-operator/templates/prometheus/rules/k8s.rules.yaml
similarity index 63%
rename from stable/prometheus-operator/templates/alertmanager/rules/k8s.rules.yaml
rename to stable/prometheus-operator/templates/prometheus/rules/k8s.rules.yaml
index 9c3fed5750ec..ba2e30441ea9 100644
--- a/stable/prometheus-operator/templates/alertmanager/rules/k8s.rules.yaml
+++ b/stable/prometheus-operator/templates/prometheus/rules/k8s.rules.yaml
@@ -1,4 +1,6 @@
-# Generated from 'k8s.rules' group from https://raw.githubusercontent.com/coreos/prometheus-operator/master/contrib/kube-prometheus/manifests/prometheus-rules.yaml
+# Generated from 'k8s.rules' group from https://raw.githubusercontent.com/coreos/kube-prometheus/master/manifests/prometheus-rules.yaml
+# Do not change in-place! In order to change this file first read following link:
+# https://github.com/helm/charts/tree/master/stable/prometheus-operator/hack
{{- if and .Values.defaultRules.create .Values.defaultRules.rules.k8s }}
apiVersion: {{ printf "%s/v1" (.Values.prometheusOperator.crdApiGroup | default "monitoring.coreos.com") }}
kind: PrometheusRule
@@ -43,16 +45,49 @@ spec:
record: namespace_name:container_memory_usage_bytes:sum
- expr: |-
sum by (namespace, label_name) (
- sum(kube_pod_container_resource_requests_memory_bytes{job="kube-state-metrics"}) by (namespace, pod)
+ sum(kube_pod_container_resource_requests_memory_bytes{job="kube-state-metrics"} * on (endpoint, instance, job, namespace, pod, service) group_left(phase) (kube_pod_status_phase{phase=~"^(Pending|Running)$"} == 1)) by (namespace, pod)
* on (namespace, pod) group_left(label_name)
label_replace(kube_pod_labels{job="kube-state-metrics"}, "pod_name", "$1", "pod", "(.*)")
)
record: namespace_name:kube_pod_container_resource_requests_memory_bytes:sum
- expr: |-
sum by (namespace, label_name) (
- sum(kube_pod_container_resource_requests_cpu_cores{job="kube-state-metrics"} and on(pod) kube_pod_status_scheduled{condition="true"}) by (namespace, pod)
+ sum(kube_pod_container_resource_requests_cpu_cores{job="kube-state-metrics"} * on (endpoint, instance, job, namespace, pod, service) group_left(phase) (kube_pod_status_phase{phase=~"^(Pending|Running)$"} == 1)) by (namespace, pod)
* on (namespace, pod) group_left(label_name)
label_replace(kube_pod_labels{job="kube-state-metrics"}, "pod_name", "$1", "pod", "(.*)")
)
record: namespace_name:kube_pod_container_resource_requests_cpu_cores:sum
+ - expr: |-
+ sum(
+ label_replace(
+ label_replace(
+ kube_pod_owner{job="kube-state-metrics", owner_kind="ReplicaSet"},
+ "replicaset", "$1", "owner_name", "(.*)"
+ ) * on(replicaset, namespace) group_left(owner_name) kube_replicaset_owner{job="kube-state-metrics"},
+ "workload", "$1", "owner_name", "(.*)"
+ )
+ ) by (namespace, workload, pod)
+ labels:
+ workload_type: deployment
+ record: mixin_pod_workload
+ - expr: |-
+ sum(
+ label_replace(
+ kube_pod_owner{job="kube-state-metrics", owner_kind="DaemonSet"},
+ "workload", "$1", "owner_name", "(.*)"
+ )
+ ) by (namespace, workload, pod)
+ labels:
+ workload_type: daemonset
+ record: mixin_pod_workload
+ - expr: |-
+ sum(
+ label_replace(
+ kube_pod_owner{job="kube-state-metrics", owner_kind="StatefulSet"},
+ "workload", "$1", "owner_name", "(.*)"
+ )
+ ) by (namespace, workload, pod)
+ labels:
+ workload_type: statefulset
+ record: mixin_pod_workload
{{- end }}
\ No newline at end of file
diff --git a/stable/prometheus-operator/templates/alertmanager/rules/kube-apiserver.rules.yaml b/stable/prometheus-operator/templates/prometheus/rules/kube-apiserver.rules.yaml
similarity index 86%
rename from stable/prometheus-operator/templates/alertmanager/rules/kube-apiserver.rules.yaml
rename to stable/prometheus-operator/templates/prometheus/rules/kube-apiserver.rules.yaml
index a2afeb1807d6..fc7da48e4bd5 100644
--- a/stable/prometheus-operator/templates/alertmanager/rules/kube-apiserver.rules.yaml
+++ b/stable/prometheus-operator/templates/prometheus/rules/kube-apiserver.rules.yaml
@@ -1,4 +1,6 @@
-# Generated from 'kube-apiserver.rules' group from https://raw.githubusercontent.com/coreos/prometheus-operator/master/contrib/kube-prometheus/manifests/prometheus-rules.yaml
+# Generated from 'kube-apiserver.rules' group from https://raw.githubusercontent.com/coreos/kube-prometheus/master/manifests/prometheus-rules.yaml
+# Do not change in-place! In order to change this file first read following link:
+# https://github.com/helm/charts/tree/master/stable/prometheus-operator/hack
{{- if and .Values.defaultRules.create .Values.kubeApiServer.enabled .Values.defaultRules.rules.kubeApiserver }}
apiVersion: {{ printf "%s/v1" (.Values.prometheusOperator.crdApiGroup | default "monitoring.coreos.com") }}
kind: PrometheusRule
diff --git a/stable/prometheus-operator/templates/alertmanager/rules/kube-prometheus-node-alerting.rules.yaml b/stable/prometheus-operator/templates/prometheus/rules/kube-prometheus-node-alerting.rules.yaml
similarity index 86%
rename from stable/prometheus-operator/templates/alertmanager/rules/kube-prometheus-node-alerting.rules.yaml
rename to stable/prometheus-operator/templates/prometheus/rules/kube-prometheus-node-alerting.rules.yaml
index 3a99e2752038..8c19e4d96248 100644
--- a/stable/prometheus-operator/templates/alertmanager/rules/kube-prometheus-node-alerting.rules.yaml
+++ b/stable/prometheus-operator/templates/prometheus/rules/kube-prometheus-node-alerting.rules.yaml
@@ -1,4 +1,6 @@
-# Generated from 'kube-prometheus-node-alerting.rules' group from https://raw.githubusercontent.com/coreos/prometheus-operator/master/contrib/kube-prometheus/manifests/prometheus-rules.yaml
+# Generated from 'kube-prometheus-node-alerting.rules' group from https://raw.githubusercontent.com/coreos/kube-prometheus/master/manifests/prometheus-rules.yaml
+# Do not change in-place! In order to change this file first read following link:
+# https://github.com/helm/charts/tree/master/stable/prometheus-operator/hack
{{- if and .Values.defaultRules.create .Values.defaultRules.rules.kubePrometheusNodeAlerting }}
apiVersion: {{ printf "%s/v1" (.Values.prometheusOperator.crdApiGroup | default "monitoring.coreos.com") }}
kind: PrometheusRule
diff --git a/stable/prometheus-operator/templates/alertmanager/rules/kube-prometheus-node-recording.rules.yaml b/stable/prometheus-operator/templates/prometheus/rules/kube-prometheus-node-recording.rules.yaml
similarity index 88%
rename from stable/prometheus-operator/templates/alertmanager/rules/kube-prometheus-node-recording.rules.yaml
rename to stable/prometheus-operator/templates/prometheus/rules/kube-prometheus-node-recording.rules.yaml
index 208644cebcb8..e7298edf2d3d 100644
--- a/stable/prometheus-operator/templates/alertmanager/rules/kube-prometheus-node-recording.rules.yaml
+++ b/stable/prometheus-operator/templates/prometheus/rules/kube-prometheus-node-recording.rules.yaml
@@ -1,4 +1,6 @@
-# Generated from 'kube-prometheus-node-recording.rules' group from https://raw.githubusercontent.com/coreos/prometheus-operator/master/contrib/kube-prometheus/manifests/prometheus-rules.yaml
+# Generated from 'kube-prometheus-node-recording.rules' group from https://raw.githubusercontent.com/coreos/kube-prometheus/master/manifests/prometheus-rules.yaml
+# Do not change in-place! In order to change this file, first read the following link:
+# https://github.com/helm/charts/tree/master/stable/prometheus-operator/hack
{{- if and .Values.defaultRules.create .Values.defaultRules.rules.kubePrometheusNodeRecording }}
apiVersion: {{ printf "%s/v1" (.Values.prometheusOperator.crdApiGroup | default "monitoring.coreos.com") }}
kind: PrometheusRule
diff --git a/stable/prometheus-operator/templates/alertmanager/rules/kube-scheduler.rules.yaml b/stable/prometheus-operator/templates/prometheus/rules/kube-scheduler.rules.yaml
similarity index 93%
rename from stable/prometheus-operator/templates/alertmanager/rules/kube-scheduler.rules.yaml
rename to stable/prometheus-operator/templates/prometheus/rules/kube-scheduler.rules.yaml
index a2f2fef0d866..f6e3e00d8a06 100644
--- a/stable/prometheus-operator/templates/alertmanager/rules/kube-scheduler.rules.yaml
+++ b/stable/prometheus-operator/templates/prometheus/rules/kube-scheduler.rules.yaml
@@ -1,4 +1,6 @@
-# Generated from 'kube-scheduler.rules' group from https://raw.githubusercontent.com/coreos/prometheus-operator/master/contrib/kube-prometheus/manifests/prometheus-rules.yaml
+# Generated from 'kube-scheduler.rules' group from https://raw.githubusercontent.com/coreos/kube-prometheus/master/manifests/prometheus-rules.yaml
+# Do not change in-place! In order to change this file, first read the following link:
+# https://github.com/helm/charts/tree/master/stable/prometheus-operator/hack
{{- if and .Values.defaultRules.create .Values.kubeScheduler.enabled .Values.defaultRules.rules.kubeScheduler }}
apiVersion: {{ printf "%s/v1" (.Values.prometheusOperator.crdApiGroup | default "monitoring.coreos.com") }}
kind: PrometheusRule
diff --git a/stable/prometheus-operator/templates/alertmanager/rules/kubernetes-absent.yaml b/stable/prometheus-operator/templates/prometheus/rules/kubernetes-absent.yaml
similarity index 89%
rename from stable/prometheus-operator/templates/alertmanager/rules/kubernetes-absent.yaml
rename to stable/prometheus-operator/templates/prometheus/rules/kubernetes-absent.yaml
index ce021553bf4b..56334fbf7545 100644
--- a/stable/prometheus-operator/templates/alertmanager/rules/kubernetes-absent.yaml
+++ b/stable/prometheus-operator/templates/prometheus/rules/kubernetes-absent.yaml
@@ -1,8 +1,11 @@
-# Generated from 'kubernetes-absent' group from https://raw.githubusercontent.com/coreos/prometheus-operator/master/contrib/kube-prometheus/manifests/prometheus-rules.yaml
+# Generated from 'kubernetes-absent' group from https://raw.githubusercontent.com/coreos/kube-prometheus/master/manifests/prometheus-rules.yaml
+# Do not change in-place! In order to change this file, first read the following link:
+# https://github.com/helm/charts/tree/master/stable/prometheus-operator/hack
{{- if and .Values.defaultRules.create .Values.defaultRules.rules.kubernetesAbsent }}
{{- $operatorJob := printf "%s-%s" (include "prometheus-operator.fullname" .) "operator" }}
{{- $prometheusJob := printf "%s-%s" (include "prometheus-operator.fullname" .) "prometheus" }}
{{- $alertmanagerJob := printf "%s-%s" (include "prometheus-operator.fullname" .) "alertmanager" }}
+{{- $namespace := .Release.Namespace }}
apiVersion: {{ printf "%s/v1" (.Values.prometheusOperator.crdApiGroup | default "monitoring.coreos.com") }}
kind: PrometheusRule
metadata:
@@ -21,14 +24,16 @@ spec:
groups:
- name: kubernetes-absent
rules:
+{{- if .Values.alertmanager.enabled }}
- alert: AlertmanagerDown
annotations:
message: Alertmanager has disappeared from Prometheus target discovery.
runbook_url: https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#alert-name-alertmanagerdown
- expr: absent(up{job="{{ $alertmanagerJob }}"} == 1)
+ expr: absent(up{job="{{ $alertmanagerJob }}",namespace="{{ $namespace }}"} == 1)
for: 15m
labels:
severity: critical
+{{- end }}
{{- if .Values.kubeDns.enabled }}
- alert: CoreDNSDown
annotations:
@@ -38,8 +43,8 @@ spec:
for: 15m
labels:
severity: critical
-{{- if .Values.kubeApiServer.enabled }}
{{- end }}
+{{- if .Values.kubeApiServer.enabled }}
- alert: KubeAPIDown
annotations:
message: KubeAPI has disappeared from Prometheus target discovery.
@@ -103,7 +108,7 @@ spec:
annotations:
message: Prometheus has disappeared from Prometheus target discovery.
runbook_url: https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#alert-name-prometheusdown
- expr: absent(up{job="{{ $prometheusJob }}"} == 1)
+ expr: absent(up{job="{{ $prometheusJob }}",namespace="{{ $namespace }}"} == 1)
for: 15m
labels:
severity: critical
@@ -112,7 +117,7 @@ spec:
annotations:
message: PrometheusOperator has disappeared from Prometheus target discovery.
runbook_url: https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#alert-name-prometheusoperatordown
- expr: absent(up{job="{{ $operatorJob }}"} == 1)
+ expr: absent(up{job="{{ $operatorJob }}",namespace="{{ $namespace }}"} == 1)
for: 15m
labels:
severity: critical
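The hunks above add a `namespace` matcher to each `absent(up{...})` expression, scoping the "target disappeared" alerts to the release's own namespace. A toy Python model of `absent()` semantics (label matching simplified, job and namespace names hypothetical) shows why that matters when several releases run side by side:

```python
# Toy model of PromQL absent(): it emits a single sample with value 1
# only when NO series matches the selector. Adding a namespace matcher
# means a healthy target in another namespace can no longer mask a
# missing target in this release's namespace.
def absent(samples, **matchers):
    """samples: list of label dicts; matchers: required label values."""
    matched = [s for s in samples
               if all(s.get(k) == v for k, v in matchers.items())]
    return [] if matched else [1]

# One Alertmanager target is up, but only in monitoring-a.
up = [{"job": "release-a-alertmanager", "namespace": "monitoring-a"}]

# Unscoped selector: the foreign target suppresses the alert everywhere.
assert absent(up, job="release-a-alertmanager") == []
# Namespace-scoped selector: the release in monitoring-b still alerts.
assert absent(up, job="release-a-alertmanager", namespace="monitoring-b") == [1]
```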
diff --git a/stable/prometheus-operator/templates/alertmanager/rules/kubernetes-apps.yaml b/stable/prometheus-operator/templates/prometheus/rules/kubernetes-apps.yaml
similarity index 97%
rename from stable/prometheus-operator/templates/alertmanager/rules/kubernetes-apps.yaml
rename to stable/prometheus-operator/templates/prometheus/rules/kubernetes-apps.yaml
index 11ed563637bd..54668c789fd1 100644
--- a/stable/prometheus-operator/templates/alertmanager/rules/kubernetes-apps.yaml
+++ b/stable/prometheus-operator/templates/prometheus/rules/kubernetes-apps.yaml
@@ -1,4 +1,6 @@
-# Generated from 'kubernetes-apps' group from https://raw.githubusercontent.com/coreos/prometheus-operator/master/contrib/kube-prometheus/manifests/prometheus-rules.yaml
+# Generated from 'kubernetes-apps' group from https://raw.githubusercontent.com/coreos/kube-prometheus/master/manifests/prometheus-rules.yaml
+# Do not change in-place! In order to change this file, first read the following link:
+# https://github.com/helm/charts/tree/master/stable/prometheus-operator/hack
{{- if and .Values.defaultRules.create .Values.kubeStateMetrics.enabled .Values.defaultRules.rules.kubernetesApps }}
apiVersion: {{ printf "%s/v1" (.Values.prometheusOperator.crdApiGroup | default "monitoring.coreos.com") }}
kind: PrometheusRule
diff --git a/stable/prometheus-operator/templates/alertmanager/rules/kubernetes-resources.yaml b/stable/prometheus-operator/templates/prometheus/rules/kubernetes-resources.yaml
similarity index 93%
rename from stable/prometheus-operator/templates/alertmanager/rules/kubernetes-resources.yaml
rename to stable/prometheus-operator/templates/prometheus/rules/kubernetes-resources.yaml
index 26f3b17b4e28..df551b093ea5 100644
--- a/stable/prometheus-operator/templates/alertmanager/rules/kubernetes-resources.yaml
+++ b/stable/prometheus-operator/templates/prometheus/rules/kubernetes-resources.yaml
@@ -1,4 +1,6 @@
-# Generated from 'kubernetes-resources' group from https://raw.githubusercontent.com/coreos/prometheus-operator/master/contrib/kube-prometheus/manifests/prometheus-rules.yaml
+# Generated from 'kubernetes-resources' group from https://raw.githubusercontent.com/coreos/kube-prometheus/master/manifests/prometheus-rules.yaml
+# Do not change in-place! In order to change this file, first read the following link:
+# https://github.com/helm/charts/tree/master/stable/prometheus-operator/hack
{{- if and .Values.defaultRules.create .Values.defaultRules.rules.kubernetesResources }}
apiVersion: {{ printf "%s/v1" (.Values.prometheusOperator.crdApiGroup | default "monitoring.coreos.com") }}
kind: PrometheusRule
@@ -51,7 +53,7 @@ spec:
message: Cluster has overcommitted CPU resource requests for Namespaces.
runbook_url: https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#alert-name-kubecpuovercommit
expr: |-
- sum(kube_resourcequota{job="kube-state-metrics", type="hard", resource="requests.cpu"})
+ sum(kube_resourcequota{job="kube-state-metrics", type="hard", resource="cpu"})
/
sum(node:node_num_cpu:sum)
> 1.5
@@ -63,7 +65,7 @@ spec:
message: Cluster has overcommitted memory resource requests for Namespaces.
runbook_url: https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#alert-name-kubememovercommit
expr: |-
- sum(kube_resourcequota{job="kube-state-metrics", type="hard", resource="requests.memory"})
+ sum(kube_resourcequota{job="kube-state-metrics", type="hard", resource="memory"})
/
sum(node_memory_MemTotal_bytes{job="node-exporter"})
> 1.5
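The overcommit hunks only swap the quota label values (`requests.cpu` → `cpu`, `requests.memory` → `memory`) to match what the upstream mixin now selects; the check itself is unchanged: sum the hard `ResourceQuota` limits across namespaces and compare against total cluster capacity. A sketch with illustrative numbers:

```python
# Sketch of the KubeCPUOvercommit check: total hard CPU quota across
# namespaces divided by total cluster cores, alerting above 1.5x.
# Namespace names and sizes are made up for illustration.
quotas = {"team-a": 8.0, "team-b": 6.0, "team-c": 12.0}  # kube_resourcequota, resource="cpu", type="hard"
node_cpus = {"node-1": 8, "node-2": 8}                   # node:node_num_cpu:sum

ratio = sum(quotas.values()) / sum(node_cpus.values())
assert ratio == 26.0 / 16  # 1.625 > 1.5, so the alert would fire
```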
diff --git a/stable/prometheus-operator/templates/alertmanager/rules/kubernetes-storage.yaml b/stable/prometheus-operator/templates/prometheus/rules/kubernetes-storage.yaml
similarity index 91%
rename from stable/prometheus-operator/templates/alertmanager/rules/kubernetes-storage.yaml
rename to stable/prometheus-operator/templates/prometheus/rules/kubernetes-storage.yaml
index 60ab6812aff0..522397c0e2ee 100644
--- a/stable/prometheus-operator/templates/alertmanager/rules/kubernetes-storage.yaml
+++ b/stable/prometheus-operator/templates/prometheus/rules/kubernetes-storage.yaml
@@ -1,4 +1,6 @@
-# Generated from 'kubernetes-storage' group from https://raw.githubusercontent.com/coreos/prometheus-operator/master/contrib/kube-prometheus/manifests/prometheus-rules.yaml
+# Generated from 'kubernetes-storage' group from https://raw.githubusercontent.com/coreos/kube-prometheus/master/manifests/prometheus-rules.yaml
+# Do not change in-place! In order to change this file, first read the following link:
+# https://github.com/helm/charts/tree/master/stable/prometheus-operator/hack
{{- if and .Values.defaultRules.create .Values.defaultRules.rules.kubernetesStorage }}
apiVersion: {{ printf "%s/v1" (.Values.prometheusOperator.crdApiGroup | default "monitoring.coreos.com") }}
kind: PrometheusRule
diff --git a/stable/prometheus-operator/templates/alertmanager/rules/kubernetes-system.yaml b/stable/prometheus-operator/templates/prometheus/rules/kubernetes-system.yaml
similarity index 70%
rename from stable/prometheus-operator/templates/alertmanager/rules/kubernetes-system.yaml
rename to stable/prometheus-operator/templates/prometheus/rules/kubernetes-system.yaml
index 653bb047fea6..313abf679ebe 100644
--- a/stable/prometheus-operator/templates/alertmanager/rules/kubernetes-system.yaml
+++ b/stable/prometheus-operator/templates/prometheus/rules/kubernetes-system.yaml
@@ -1,4 +1,6 @@
-# Generated from 'kubernetes-system' group from https://raw.githubusercontent.com/coreos/prometheus-operator/master/contrib/kube-prometheus/manifests/prometheus-rules.yaml
+# Generated from 'kubernetes-system' group from https://raw.githubusercontent.com/coreos/kube-prometheus/master/manifests/prometheus-rules.yaml
+# Do not change in-place! In order to change this file, first read the following link:
+# https://github.com/helm/charts/tree/master/stable/prometheus-operator/hack
{{- if and .Values.defaultRules.create .Values.defaultRules.rules.kubernetesSystem }}
apiVersion: {{ printf "%s/v1" (.Values.prometheusOperator.crdApiGroup | default "monitoring.coreos.com") }}
kind: PrometheusRule
@@ -28,9 +30,9 @@ spec:
severity: warning
- alert: KubeVersionMismatch
annotations:
- message: There are {{`{{ $value }}`}} different versions of Kubernetes components running.
+ message: There are {{`{{ $value }}`}} different semantic versions of Kubernetes components running.
runbook_url: https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#alert-name-kubeversionmismatch
- expr: count(count(kubernetes_build_info{job!="kube-dns"}) by (gitVersion)) > 1
+ expr: count(count by (gitVersion) (label_replace(kubernetes_build_info{job!="kube-dns"},"gitVersion","$1","gitVersion","(v[0-9]*.[0-9]*.[0-9]*).*"))) > 1
for: 1h
labels:
severity: warning
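The reworked `KubeVersionMismatch` expression uses `label_replace` to strip build metadata before counting distinct `gitVersion` values, so `v1.13.4` and `v1.13.4-gke.10` no longer count as a mismatch. The same regex in Python (version strings are made up):

```python
import re

# The capture group keeps only the semantic version prefix; label_replace
# substitutes it back into gitVersion, collapsing vendor build suffixes.
pattern = re.compile(r"(v[0-9]*.[0-9]*.[0-9]*).*")

versions = ["v1.13.4", "v1.13.4-gke.10", "v1.13.5"]
collapsed = {pattern.sub(r"\1", v) for v in versions}

# The GKE build suffix is dropped, but a genuine patch-level skew between
# v1.13.4 and v1.13.5 is still detected (2 distinct versions > 1).
assert collapsed == {"v1.13.4", "v1.13.5"}
```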
@@ -83,9 +85,9 @@ spec:
message: API server is returning errors for {{`{{ $value }}`}}% of requests.
runbook_url: https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#alert-name-kubeapierrorshigh
expr: |-
- sum(rate(apiserver_request_count{job="apiserver",code=~"^(?:5..)$"}[5m])) without(instance, pod)
+ sum(rate(apiserver_request_count{job="apiserver",code=~"^(?:5..)$"}[5m]))
/
- sum(rate(apiserver_request_count{job="apiserver"}[5m])) without(instance, pod) * 100 > 10
+ sum(rate(apiserver_request_count{job="apiserver"}[5m])) * 100 > 3
for: 10m
labels:
severity: critical
@@ -94,24 +96,46 @@ spec:
message: API server is returning errors for {{`{{ $value }}`}}% of requests.
runbook_url: https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#alert-name-kubeapierrorshigh
expr: |-
- sum(rate(apiserver_request_count{job="apiserver",code=~"^(?:5..)$"}[5m])) without(instance, pod)
+ sum(rate(apiserver_request_count{job="apiserver",code=~"^(?:5..)$"}[5m]))
/
- sum(rate(apiserver_request_count{job="apiserver"}[5m])) without(instance, pod) * 100 > 5
+ sum(rate(apiserver_request_count{job="apiserver"}[5m])) * 100 > 1
+ for: 10m
+ labels:
+ severity: warning
+ - alert: KubeAPIErrorsHigh
+ annotations:
+ message: API server is returning errors for {{`{{ $value }}`}}% of requests for {{`{{ $labels.verb }}`}} {{`{{ $labels.resource }}`}} {{`{{ $labels.subresource }}`}}.
+ runbook_url: https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#alert-name-kubeapierrorshigh
+ expr: |-
+ sum(rate(apiserver_request_count{job="apiserver",code=~"^(?:5..)$"}[5m])) by (resource,subresource,verb)
+ /
+ sum(rate(apiserver_request_count{job="apiserver"}[5m])) by (resource,subresource,verb) * 100 > 10
+ for: 10m
+ labels:
+ severity: critical
+ - alert: KubeAPIErrorsHigh
+ annotations:
+ message: API server is returning errors for {{`{{ $value }}`}}% of requests for {{`{{ $labels.verb }}`}} {{`{{ $labels.resource }}`}} {{`{{ $labels.subresource }}`}}.
+ runbook_url: https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#alert-name-kubeapierrorshigh
+ expr: |-
+ sum(rate(apiserver_request_count{job="apiserver",code=~"^(?:5..)$"}[5m])) by (resource,subresource,verb)
+ /
+ sum(rate(apiserver_request_count{job="apiserver"}[5m])) by (resource,subresource,verb) * 100 > 5
for: 10m
labels:
severity: warning
- alert: KubeClientCertificateExpiration
annotations:
- message: A client certificate used to authenticate to the apiserver is expiring in less than 7 days.
+ message: A client certificate used to authenticate to the apiserver is expiring in less than 7.0 days.
runbook_url: https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#alert-name-kubeclientcertificateexpiration
- expr: histogram_quantile(0.01, sum by (job, le) (rate(apiserver_client_certificate_expiration_seconds_bucket{job="apiserver"}[5m]))) < 604800
+ expr: apiserver_client_certificate_expiration_seconds_count{job="apiserver"} > 0 and histogram_quantile(0.01, sum by (job, le) (rate(apiserver_client_certificate_expiration_seconds_bucket{job="apiserver"}[5m]))) < 604800
labels:
severity: warning
- alert: KubeClientCertificateExpiration
annotations:
- message: A client certificate used to authenticate to the apiserver is expiring in less than 24 hours.
+ message: A client certificate used to authenticate to the apiserver is expiring in less than 24.0 hours.
runbook_url: https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/runbook.md#alert-name-kubeclientcertificateexpiration
- expr: histogram_quantile(0.01, sum by (job, le) (rate(apiserver_client_certificate_expiration_seconds_bucket{job="apiserver"}[5m]))) < 86400
+ expr: apiserver_client_certificate_expiration_seconds_count{job="apiserver"} > 0 and histogram_quantile(0.01, sum by (job, le) (rate(apiserver_client_certificate_expiration_seconds_bucket{job="apiserver"}[5m]))) < 86400
labels:
severity: critical
{{- end }}
\ No newline at end of file
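The `KubeClientCertificateExpiration` rules now guard the quantile comparison with `apiserver_client_certificate_expiration_seconds_count{...} > 0 and`. PromQL's `and` keeps a right-hand sample only when a matching left-hand sample exists, so an apiserver with no recorded client-certificate handshakes can no longer fire a spurious alert. A toy model (label matching between the two vectors is elided):

```python
def vector_and(left, right):
    # Simplified PromQL `and`: pass the right-hand samples through only
    # when the left-hand vector is non-empty; real PromQL also requires
    # the label sets to match, which this sketch ignores.
    return right if left else []

# No client-certificate observations recorded: the quantile side may
# still produce a "low" value, but the guard keeps the alert silent.
assert vector_and(left=[], right=[12345.0]) == []

# Observations exist and the 1st percentile is under the 7-day threshold:
# the alert fires as before.
assert vector_and(left=[42.0], right=[12345.0]) == [12345.0]
```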
diff --git a/stable/prometheus-operator/templates/prometheus/rules/node-network.yaml b/stable/prometheus-operator/templates/prometheus/rules/node-network.yaml
new file mode 100644
index 000000000000..79a704f773fc
--- /dev/null
+++ b/stable/prometheus-operator/templates/prometheus/rules/node-network.yaml
@@ -0,0 +1,44 @@
+# Generated from 'node-network' group from https://raw.githubusercontent.com/coreos/kube-prometheus/master/manifests/prometheus-rules.yaml
+# Do not change in-place! In order to change this file, first read the following link:
+# https://github.com/helm/charts/tree/master/stable/prometheus-operator/hack
+{{- if .Values.defaultRules.create }}
+apiVersion: {{ printf "%s/v1" (.Values.prometheusOperator.crdApiGroup | default "monitoring.coreos.com") }}
+kind: PrometheusRule
+metadata:
+ name: {{ printf "%s-%s" (include "prometheus-operator.fullname" .) "node-network" | trunc 63 | trimSuffix "-" }}
+ labels:
+ app: {{ template "prometheus-operator.name" . }}
+{{ include "prometheus-operator.labels" . | indent 4 }}
+{{- if .Values.defaultRules.labels }}
+{{ toYaml .Values.defaultRules.labels | indent 4 }}
+{{- end }}
+{{- if .Values.defaultRules.annotations }}
+ annotations:
+{{ toYaml .Values.defaultRules.annotations | indent 4 }}
+{{- end }}
+spec:
+ groups:
+ - name: node-network
+ rules:
+ - alert: NetworkReceiveErrors
+ annotations:
+ message: Network interface "{{`{{ $labels.device }}`}}" showing receive errors on node-exporter {{`{{ $labels.namespace }}`}}/{{`{{ $labels.pod }}`}}
+ expr: rate(node_network_receive_errs_total{job="node-exporter",device!~"veth.+"}[2m]) > 0
+ for: 2m
+ labels:
+ severity: warning
+ - alert: NetworkTransmitErrors
+ annotations:
+ message: Network interface "{{`{{ $labels.device }}`}}" showing transmit errors on node-exporter {{`{{ $labels.namespace }}`}}/{{`{{ $labels.pod }}`}}
+ expr: rate(node_network_transmit_errs_total{job="node-exporter",device!~"veth.+"}[2m]) > 0
+ for: 2m
+ labels:
+ severity: warning
+ - alert: NodeNetworkInterfaceFlapping
+ annotations:
+ message: Network interface "{{`{{ $labels.device }}`}}" changing its up status often on node-exporter {{`{{ $labels.namespace }}`}}/{{`{{ $labels.pod }}`}}
+ expr: changes(node_network_up{job="node-exporter",device!~"veth.+"}[2m]) > 2
+ for: 2m
+ labels:
+ severity: warning
+{{- end }}
\ No newline at end of file
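`NodeNetworkInterfaceFlapping` relies on `changes()`, which counts how many times a series' value changed inside the range window; an interface bouncing up and down within 2 minutes exceeds the threshold of 2. A minimal sketch with made-up samples:

```python
def changes(series):
    # PromQL changes(): number of value transitions within the window.
    return sum(1 for a, b in zip(series, series[1:]) if a != b)

stable   = [1, 1, 1, 1, 1]   # node_network_up steady over the 2m window
flapping = [1, 0, 1, 0, 1]   # interface bouncing within the window

assert changes(stable) == 0
assert changes(flapping) == 4   # > 2, so the alert would fire
```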
diff --git a/stable/prometheus-operator/templates/prometheus/rules/node-time.yaml b/stable/prometheus-operator/templates/prometheus/rules/node-time.yaml
new file mode 100644
index 000000000000..78cad3e758bc
--- /dev/null
+++ b/stable/prometheus-operator/templates/prometheus/rules/node-time.yaml
@@ -0,0 +1,30 @@
+# Generated from 'node-time' group from https://raw.githubusercontent.com/coreos/kube-prometheus/master/manifests/prometheus-rules.yaml
+# Do not change in-place! In order to change this file, first read the following link:
+# https://github.com/helm/charts/tree/master/stable/prometheus-operator/hack
+{{- if .Values.defaultRules.create }}
+apiVersion: {{ printf "%s/v1" (.Values.prometheusOperator.crdApiGroup | default "monitoring.coreos.com") }}
+kind: PrometheusRule
+metadata:
+ name: {{ printf "%s-%s" (include "prometheus-operator.fullname" .) "node-time" | trunc 63 | trimSuffix "-" }}
+ labels:
+ app: {{ template "prometheus-operator.name" . }}
+{{ include "prometheus-operator.labels" . | indent 4 }}
+{{- if .Values.defaultRules.labels }}
+{{ toYaml .Values.defaultRules.labels | indent 4 }}
+{{- end }}
+{{- if .Values.defaultRules.annotations }}
+ annotations:
+{{ toYaml .Values.defaultRules.annotations | indent 4 }}
+{{- end }}
+spec:
+ groups:
+ - name: node-time
+ rules:
+ - alert: ClockSkewDetected
+ annotations:
+ message: Clock skew detected on node-exporter {{`{{ $labels.namespace }}`}}/{{`{{ $labels.pod }}`}}. Ensure NTP is configured correctly on this host.
+ expr: abs(node_timex_offset_seconds{job="node-exporter"}) > 0.03
+ for: 2m
+ labels:
+ severity: warning
+{{- end }}
\ No newline at end of file
diff --git a/stable/prometheus-operator/templates/alertmanager/rules/node.rules.yaml b/stable/prometheus-operator/templates/prometheus/rules/node.rules.yaml
similarity index 84%
rename from stable/prometheus-operator/templates/alertmanager/rules/node.rules.yaml
rename to stable/prometheus-operator/templates/prometheus/rules/node.rules.yaml
index 6c92e79c588c..9ea571b4a1d4 100644
--- a/stable/prometheus-operator/templates/alertmanager/rules/node.rules.yaml
+++ b/stable/prometheus-operator/templates/prometheus/rules/node.rules.yaml
@@ -1,4 +1,6 @@
-# Generated from 'node.rules' group from https://raw.githubusercontent.com/coreos/prometheus-operator/master/contrib/kube-prometheus/manifests/prometheus-rules.yaml
+# Generated from 'node.rules' group from https://raw.githubusercontent.com/coreos/kube-prometheus/master/manifests/prometheus-rules.yaml
+# Do not change in-place! In order to change this file, first read the following link:
+# https://github.com/helm/charts/tree/master/stable/prometheus-operator/hack
{{- if and .Values.defaultRules.create .Values.nodeExporter.enabled .Values.defaultRules.rules.node }}
apiVersion: {{ printf "%s/v1" (.Values.prometheusOperator.crdApiGroup | default "monitoring.coreos.com") }}
kind: PrometheusRule
@@ -37,6 +39,13 @@ spec:
* on (namespace, pod) group_left(node)
node_namespace_pod:kube_pod_info:)
record: node:node_cpu_utilisation:avg1m
+ - expr: |-
+ node:node_cpu_utilisation:avg1m
+ *
+ node:node_num_cpu:sum
+ /
+ scalar(sum(node:node_num_cpu:sum))
+ record: node:cluster_cpu_utilisation:ratio
- expr: |-
sum(node_load1{job="node-exporter"})
/
@@ -78,8 +87,13 @@ spec:
- expr: |-
(node:node_memory_bytes_total:sum - node:node_memory_bytes_available:sum)
/
- scalar(sum(node:node_memory_bytes_total:sum))
+ node:node_memory_bytes_total:sum
record: node:node_memory_utilisation:ratio
+ - expr: |-
+ (node:node_memory_bytes_total:sum - node:node_memory_bytes_available:sum)
+ /
+ scalar(sum(node:node_memory_bytes_total:sum))
+ record: node:cluster_memory_utilisation:ratio
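This hunk separates two ratios the old rule conflated: `node:node_memory_utilisation:ratio` now divides by that node's own total (how full is this node?), while the new `node:cluster_memory_utilisation:ratio` divides by the cluster-wide total (what share of cluster usage does this node hold?). Illustrative numbers:

```python
# Two hypothetical nodes; values in GiB, purely illustrative.
total = {"node-1": 16.0, "node-2": 32.0}   # node:node_memory_bytes_total:sum
avail = {"node-1": 4.0,  "node-2": 24.0}   # node:node_memory_bytes_available:sum

# Per-node utilisation: used / that node's own capacity.
node_util = {n: (total[n] - avail[n]) / total[n] for n in total}
# Node's share of cluster-wide usage: used / sum of all nodes' capacity.
cluster_util = {n: (total[n] - avail[n]) / sum(total.values()) for n in total}

assert node_util["node-1"] == 0.75      # node-1 itself is 75% full...
assert cluster_util["node-1"] == 0.25   # ...but accounts for 25% of cluster capacity
```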
- expr: |-
1e3 * sum(
(rate(node_vmstat_pgpgin{job="node-exporter"}[1m])
@@ -110,51 +124,51 @@ spec:
node_namespace_pod:kube_pod_info:
)
record: node:node_memory_swap_io_bytes:sum_rate
- - expr: avg(irate(node_disk_io_time_seconds_total{job="node-exporter",device=~"nvme.+|rbd.+|sd.+|vd.+|xvd.+"}[1m]))
+ - expr: avg(irate(node_disk_io_time_seconds_total{job="node-exporter",device=~"nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+"}[1m]))
record: :node_disk_utilisation:avg_irate
- expr: |-
avg by (node) (
- irate(node_disk_io_time_seconds_total{job="node-exporter",device=~"nvme.+|rbd.+|sd.+|vd.+|xvd.+"}[1m])
+ irate(node_disk_io_time_seconds_total{job="node-exporter",device=~"nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+"}[1m])
* on (namespace, pod) group_left(node)
node_namespace_pod:kube_pod_info:
)
record: node:node_disk_utilisation:avg_irate
- - expr: avg(irate(node_disk_io_time_weighted_seconds_total{job="node-exporter",device=~"nvme.+|rbd.+|sd.+|vd.+|xvd.+"}[1m]) / 1e3)
+ - expr: avg(irate(node_disk_io_time_weighted_seconds_total{job="node-exporter",device=~"nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+"}[1m]))
record: :node_disk_saturation:avg_irate
- expr: |-
avg by (node) (
- irate(node_disk_io_time_weighted_seconds_total{job="node-exporter",device=~"nvme.+|rbd.+|sd.+|vd.+|xvd.+"}[1m]) / 1e3
+ irate(node_disk_io_time_weighted_seconds_total{job="node-exporter",device=~"nvme.+|rbd.+|sd.+|vd.+|xvd.+|dm-.+"}[1m])
* on (namespace, pod) group_left(node)
node_namespace_pod:kube_pod_info:
)
record: node:node_disk_saturation:avg_irate
- expr: |-
- max by (namespace, pod, device) ((node_filesystem_size_bytes{fstype=~"ext[234]|btrfs|xfs|zfs"}
+ max by (instance, namespace, pod, device) ((node_filesystem_size_bytes{fstype=~"ext[234]|btrfs|xfs|zfs"}
- node_filesystem_avail_bytes{fstype=~"ext[234]|btrfs|xfs|zfs"})
/ node_filesystem_size_bytes{fstype=~"ext[234]|btrfs|xfs|zfs"})
record: 'node:node_filesystem_usage:'
- - expr: max by (namespace, pod, device) (node_filesystem_avail_bytes{fstype=~"ext[234]|btrfs|xfs|zfs"} / node_filesystem_size_bytes{fstype=~"ext[234]|btrfs|xfs|zfs"})
+ - expr: max by (instance, namespace, pod, device) (node_filesystem_avail_bytes{fstype=~"ext[234]|btrfs|xfs|zfs"} / node_filesystem_size_bytes{fstype=~"ext[234]|btrfs|xfs|zfs"})
record: 'node:node_filesystem_avail:'
- expr: |-
- sum(irate(node_network_receive_bytes_total{job="node-exporter",device="eth0"}[1m])) +
- sum(irate(node_network_transmit_bytes_total{job="node-exporter",device="eth0"}[1m]))
+ sum(irate(node_network_receive_bytes_total{job="node-exporter",device!~"veth.+"}[1m])) +
+ sum(irate(node_network_transmit_bytes_total{job="node-exporter",device!~"veth.+"}[1m]))
record: :node_net_utilisation:sum_irate
- expr: |-
sum by (node) (
- (irate(node_network_receive_bytes_total{job="node-exporter",device="eth0"}[1m]) +
- irate(node_network_transmit_bytes_total{job="node-exporter",device="eth0"}[1m]))
+ (irate(node_network_receive_bytes_total{job="node-exporter",device!~"veth.+"}[1m]) +
+ irate(node_network_transmit_bytes_total{job="node-exporter",device!~"veth.+"}[1m]))
* on (namespace, pod) group_left(node)
node_namespace_pod:kube_pod_info:
)
record: node:node_net_utilisation:sum_irate
- expr: |-
- sum(irate(node_network_receive_drop_total{job="node-exporter",device="eth0"}[1m])) +
- sum(irate(node_network_transmit_drop_total{job="node-exporter",device="eth0"}[1m]))
+ sum(irate(node_network_receive_drop_total{job="node-exporter",device!~"veth.+"}[1m])) +
+ sum(irate(node_network_transmit_drop_total{job="node-exporter",device!~"veth.+"}[1m]))
record: :node_net_saturation:sum_irate
- expr: |-
sum by (node) (
- (irate(node_network_receive_drop_total{job="node-exporter",device="eth0"}[1m]) +
- irate(node_network_transmit_drop_total{job="node-exporter",device="eth0"}[1m]))
+ (irate(node_network_receive_drop_total{job="node-exporter",device!~"veth.+"}[1m]) +
+ irate(node_network_transmit_drop_total{job="node-exporter",device!~"veth.+"}[1m]))
* on (namespace, pod) group_left(node)
node_namespace_pod:kube_pod_info:
)
diff --git a/stable/prometheus-operator/templates/alertmanager/rules/prometheus-operator.yaml b/stable/prometheus-operator/templates/prometheus/rules/prometheus-operator.yaml
similarity index 78%
rename from stable/prometheus-operator/templates/alertmanager/rules/prometheus-operator.yaml
rename to stable/prometheus-operator/templates/prometheus/rules/prometheus-operator.yaml
index 55725275b76c..80bfcca1534e 100644
--- a/stable/prometheus-operator/templates/alertmanager/rules/prometheus-operator.yaml
+++ b/stable/prometheus-operator/templates/prometheus/rules/prometheus-operator.yaml
@@ -1,6 +1,9 @@
-# Generated from 'prometheus-operator' group from https://raw.githubusercontent.com/coreos/prometheus-operator/master/contrib/kube-prometheus/manifests/prometheus-rules.yaml
+# Generated from 'prometheus-operator' group from https://raw.githubusercontent.com/coreos/kube-prometheus/master/manifests/prometheus-rules.yaml
+# Do not change in-place! In order to change this file, first read the following link:
+# https://github.com/helm/charts/tree/master/stable/prometheus-operator/hack
{{- if and .Values.defaultRules.create .Values.defaultRules.rules.prometheusOperator }}
{{- $operatorJob := printf "%s-%s" (include "prometheus-operator.fullname" .) "operator" }}
+{{- $namespace := .Release.Namespace }}
apiVersion: {{ printf "%s/v1" (.Values.prometheusOperator.crdApiGroup | default "monitoring.coreos.com") }}
kind: PrometheusRule
metadata:
@@ -22,14 +25,14 @@ spec:
- alert: PrometheusOperatorReconcileErrors
annotations:
message: Errors while reconciling {{`{{ $labels.controller }}`}} in {{`{{ $labels.namespace }}`}} Namespace.
- expr: rate(prometheus_operator_reconcile_errors_total{job="{{ $operatorJob }}"}[5m]) > 0.1
+ expr: rate(prometheus_operator_reconcile_errors_total{job="{{ $operatorJob }}",namespace="{{ $namespace }}"}[5m]) > 0.1
for: 10m
labels:
severity: warning
- alert: PrometheusOperatorNodeLookupErrors
annotations:
message: Errors while reconciling Prometheus in {{`{{ $labels.namespace }}`}} Namespace.
- expr: rate(prometheus_operator_node_address_lookup_errors_total{job="{{ $operatorJob }}"}[5m]) > 0.1
+ expr: rate(prometheus_operator_node_address_lookup_errors_total{job="{{ $operatorJob }}",namespace="{{ $namespace }}"}[5m]) > 0.1
for: 10m
labels:
severity: warning
diff --git a/stable/prometheus-operator/templates/alertmanager/rules/prometheus.rules.yaml b/stable/prometheus-operator/templates/prometheus/rules/prometheus.rules.yaml
similarity index 78%
rename from stable/prometheus-operator/templates/alertmanager/rules/prometheus.rules.yaml
rename to stable/prometheus-operator/templates/prometheus/rules/prometheus.rules.yaml
index bb31139c2a6f..c3f837d91da6 100644
--- a/stable/prometheus-operator/templates/alertmanager/rules/prometheus.rules.yaml
+++ b/stable/prometheus-operator/templates/prometheus/rules/prometheus.rules.yaml
@@ -1,6 +1,9 @@
-# Generated from 'prometheus.rules' group from https://raw.githubusercontent.com/coreos/prometheus-operator/master/contrib/kube-prometheus/manifests/prometheus-rules.yaml
+# Generated from 'prometheus.rules' group from https://raw.githubusercontent.com/coreos/kube-prometheus/master/manifests/prometheus-rules.yaml
+# Do not change in-place! In order to change this file, first read the following link:
+# https://github.com/helm/charts/tree/master/stable/prometheus-operator/hack
{{- if and .Values.defaultRules.create .Values.defaultRules.rules.prometheus }}
{{- $prometheusJob := printf "%s-%s" (include "prometheus-operator.fullname" .) "prometheus" }}
+{{- $namespace := .Release.Namespace }}
apiVersion: {{ printf "%s/v1" (.Values.prometheusOperator.crdApiGroup | default "monitoring.coreos.com") }}
kind: PrometheusRule
metadata:
@@ -23,7 +26,7 @@ spec:
annotations:
description: Reloading Prometheus' configuration has failed for {{`{{$labels.namespace}}`}}/{{`{{$labels.pod}}`}}
summary: Reloading Prometheus' configuration failed
- expr: prometheus_config_last_reload_successful{job="{{ $prometheusJob }}"} == 0
+ expr: prometheus_config_last_reload_successful{job="{{ $prometheusJob }}",namespace="{{ $namespace }}"} == 0
for: 10m
labels:
severity: warning
@@ -31,7 +34,7 @@ spec:
annotations:
description: Prometheus' alert notification queue is running full for {{`{{$labels.namespace}}`}}/{{`{{ $labels.pod}}`}}
summary: Prometheus' alert notification queue is running full
- expr: predict_linear(prometheus_notifications_queue_length{job="{{ $prometheusJob }}"}[5m], 60 * 30) > prometheus_notifications_queue_capacity{job="{{ $prometheusJob }}"}
+ expr: predict_linear(prometheus_notifications_queue_length{job="{{ $prometheusJob }}",namespace="{{ $namespace }}"}[5m], 60 * 30) > prometheus_notifications_queue_capacity{job="{{ $prometheusJob }}",namespace="{{ $namespace }}"}
for: 10m
labels:
severity: warning
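The queue alert uses `predict_linear(...[5m], 60 * 30)`: fit a least-squares line through the recent queue-length samples and extrapolate 30 minutes ahead, firing when the projection exceeds the queue capacity. A simplified sketch (sample values and the capacity are made up):

```python
def predict_linear(samples, horizon):
    """Least-squares fit over (t_seconds, value) pairs, extrapolated to
    t_last + horizon; mirrors the idea behind PromQL predict_linear()."""
    n = len(samples)
    mean_t = sum(t for t, _ in samples) / n
    mean_v = sum(v for _, v in samples) / n
    slope = (sum((t - mean_t) * (v - mean_v) for t, v in samples)
             / sum((t - mean_t) ** 2 for t, _ in samples))
    intercept = mean_v - slope * mean_t
    return slope * (samples[-1][0] + horizon) + intercept

# Queue growing by 1 notification/second over the last 4 minutes.
queue = [(0, 100), (60, 160), (120, 220), (180, 280), (240, 340)]
capacity = 2_000

projected = predict_linear(queue, 60 * 30)  # 30 minutes ahead -> 2140.0
assert projected > capacity                 # alert would fire before the queue fills
```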
@@ -39,7 +42,7 @@ spec:
annotations:
description: Errors while sending alerts from Prometheus {{`{{$labels.namespace}}`}}/{{`{{ $labels.pod}}`}} to Alertmanager {{`{{$labels.Alertmanager}}`}}
summary: Errors while sending alert from Prometheus
- expr: rate(prometheus_notifications_errors_total{job="{{ $prometheusJob }}"}[5m]) / rate(prometheus_notifications_sent_total{job="{{ $prometheusJob }}"}[5m]) > 0.01
+ expr: rate(prometheus_notifications_errors_total{job="{{ $prometheusJob }}",namespace="{{ $namespace }}"}[5m]) / rate(prometheus_notifications_sent_total{job="{{ $prometheusJob }}",namespace="{{ $namespace }}"}[5m]) > 0.01
for: 10m
labels:
severity: warning
@@ -47,7 +50,7 @@ spec:
annotations:
description: Errors while sending alerts from Prometheus {{`{{$labels.namespace}}`}}/{{`{{ $labels.pod}}`}} to Alertmanager {{`{{$labels.Alertmanager}}`}}
summary: Errors while sending alerts from Prometheus
- expr: rate(prometheus_notifications_errors_total{job="{{ $prometheusJob }}"}[5m]) / rate(prometheus_notifications_sent_total{job="{{ $prometheusJob }}"}[5m]) > 0.03
+ expr: rate(prometheus_notifications_errors_total{job="{{ $prometheusJob }}",namespace="{{ $namespace }}"}[5m]) / rate(prometheus_notifications_sent_total{job="{{ $prometheusJob }}",namespace="{{ $namespace }}"}[5m]) > 0.03
for: 10m
labels:
severity: critical
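The two notification-error alerts above divide the rate of failed sends by the rate of all sends over the same 5-minute window: a ratio above 1% fires a warning, above 3% a critical. The thresholding logic, sketched with hypothetical rates:

```python
def alert_severity(error_rate, send_rate):
    # Ratio of failed sends to all sends, per the chart's alert thresholds:
    # > 3% -> critical, > 1% -> warning, otherwise no alert
    ratio = error_rate / send_rate
    if ratio > 0.03:
        return "critical"
    if ratio > 0.01:
        return "warning"
    return None

severity = alert_severity(0.5, 10.0)  # 5% of notification sends failing
```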
@@ -55,7 +58,7 @@ spec:
annotations:
description: Prometheus {{`{{ $labels.namespace }}`}}/{{`{{ $labels.pod}}`}} is not connected to any Alertmanagers
summary: Prometheus is not connected to any Alertmanagers
- expr: prometheus_notifications_alertmanagers_discovered{job="{{ $prometheusJob }}"} < 1
+ expr: prometheus_notifications_alertmanagers_discovered{job="{{ $prometheusJob }}",namespace="{{ $namespace }}"} < 1
for: 10m
labels:
severity: warning
@@ -63,7 +66,7 @@ spec:
annotations:
description: '{{`{{$labels.job}}`}} at {{`{{$labels.instance}}`}} had {{`{{$value | humanize}}`}} reload failures over the last two hours.'
summary: Prometheus has issues reloading data blocks from disk
- expr: increase(prometheus_tsdb_reloads_failures_total{job="{{ $prometheusJob }}"}[2h]) > 0
+ expr: increase(prometheus_tsdb_reloads_failures_total{job="{{ $prometheusJob }}",namespace="{{ $namespace }}"}[2h]) > 0
for: 12h
labels:
severity: warning
@@ -71,7 +74,7 @@ spec:
annotations:
description: '{{`{{$labels.job}}`}} at {{`{{$labels.instance}}`}} had {{`{{$value | humanize}}`}} compaction failures over the last two hours.'
summary: Prometheus has issues compacting sample blocks
- expr: increase(prometheus_tsdb_compactions_failed_total{job="{{ $prometheusJob }}"}[2h]) > 0
+ expr: increase(prometheus_tsdb_compactions_failed_total{job="{{ $prometheusJob }}",namespace="{{ $namespace }}"}[2h]) > 0
for: 12h
labels:
severity: warning
@@ -79,7 +82,7 @@ spec:
annotations:
description: '{{`{{$labels.job}}`}} at {{`{{$labels.instance}}`}} has a corrupted write-ahead log (WAL).'
summary: Prometheus write-ahead log is corrupted
- expr: tsdb_wal_corruptions_total{job="{{ $prometheusJob }}"} > 0
+ expr: prometheus_tsdb_wal_corruptions_total{job="{{ $prometheusJob }}",namespace="{{ $namespace }}"} > 0
for: 4h
labels:
severity: warning
@@ -87,7 +90,7 @@ spec:
annotations:
description: Prometheus {{`{{ $labels.namespace }}`}}/{{`{{ $labels.pod}}`}} isn't ingesting samples.
summary: Prometheus isn't ingesting samples
- expr: rate(prometheus_tsdb_head_samples_appended_total{job="{{ $prometheusJob }}"}[5m]) <= 0
+ expr: rate(prometheus_tsdb_head_samples_appended_total{job="{{ $prometheusJob }}",namespace="{{ $namespace }}"}[5m]) <= 0
for: 10m
labels:
severity: warning
@@ -95,7 +98,7 @@ spec:
annotations:
description: '{{`{{$labels.namespace}}`}}/{{`{{$labels.pod}}`}} has many samples rejected due to duplicate timestamps but different values'
summary: Prometheus has many samples rejected
- expr: increase(prometheus_target_scrapes_sample_duplicate_timestamp_total{job="{{ $prometheusJob }}"}[5m]) > 0
+ expr: increase(prometheus_target_scrapes_sample_duplicate_timestamp_total{job="{{ $prometheusJob }}",namespace="{{ $namespace }}"}[5m]) > 0
for: 10m
labels:
severity: warning
diff --git a/stable/prometheus-operator/templates/prometheus/service.yaml b/stable/prometheus-operator/templates/prometheus/service.yaml
index 831a881425b3..e1736dd49672 100644
--- a/stable/prometheus-operator/templates/prometheus/service.yaml
+++ b/stable/prometheus-operator/templates/prometheus/service.yaml
@@ -33,12 +33,15 @@ spec:
nodePort: {{ .Values.prometheus.service.nodePort }}
{{- end }}
port: 9090
- {{- if eq .Values.prometheus.service.type "NodePort" }}
- nodePort: {{ .Values.prometheus.service.nodePort }}
- {{- end }}
- targetPort: web
+ targetPort: {{ .Values.prometheus.service.targetPort }}
+{{- if .Values.prometheus.service.additionalPorts }}
+{{ toYaml .Values.prometheus.service.additionalPorts | indent 2 }}
+{{- end }}
selector:
app: prometheus
prometheus: {{ template "prometheus-operator.fullname" . }}-prometheus
+{{- if .Values.prometheus.service.sessionAffinity }}
+ sessionAffinity: {{ .Values.prometheus.service.sessionAffinity }}
+{{- end }}
type: "{{ .Values.prometheus.service.type }}"
{{- end }}
diff --git a/stable/prometheus-operator/templates/prometheus/serviceaccount.yaml b/stable/prometheus-operator/templates/prometheus/serviceaccount.yaml
index 88df10ad24d9..bed5f58f14f4 100644
--- a/stable/prometheus-operator/templates/prometheus/serviceaccount.yaml
+++ b/stable/prometheus-operator/templates/prometheus/serviceaccount.yaml
@@ -1,4 +1,4 @@
-{{- if and .Values.prometheus.enabled .Values.global.rbac.create .Values.prometheus.serviceAccount.create }}
+{{- if and .Values.prometheus.enabled .Values.prometheus.serviceAccount.create }}
apiVersion: v1
kind: ServiceAccount
metadata:
diff --git a/stable/prometheus-operator/templates/prometheus/servicemonitor.yaml b/stable/prometheus-operator/templates/prometheus/servicemonitor.yaml
index 36790450b8eb..3d62d1756b1e 100644
--- a/stable/prometheus-operator/templates/prometheus/servicemonitor.yaml
+++ b/stable/prometheus-operator/templates/prometheus/servicemonitor.yaml
@@ -16,6 +16,16 @@ spec:
- {{ .Release.Namespace | quote }}
endpoints:
- port: web
- interval: 30s
+ {{- if .Values.prometheus.serviceMonitor.interval }}
+ interval: {{ .Values.prometheus.serviceMonitor.interval }}
+ {{- end }}
path: "{{ trimSuffix "/" .Values.prometheus.prometheusSpec.routePrefix }}/metrics"
+{{- if .Values.prometheus.serviceMonitor.metricRelabelings }}
+ metricRelabelings:
+{{ toYaml .Values.prometheus.serviceMonitor.metricRelabelings | indent 6 }}
+{{- end }}
+{{- if .Values.prometheus.serviceMonitor.relabelings }}
+ relabelings:
+{{ toYaml .Values.prometheus.serviceMonitor.relabelings | indent 6 }}
+{{- end }}
{{- end }}
diff --git a/stable/prometheus-operator/templates/prometheus/servicemonitors.yaml b/stable/prometheus-operator/templates/prometheus/servicemonitors.yaml
index 61f3ca3cf850..92ffa2d3eee5 100644
--- a/stable/prometheus-operator/templates/prometheus/servicemonitors.yaml
+++ b/stable/prometheus-operator/templates/prometheus/servicemonitors.yaml
@@ -25,5 +25,9 @@ items:
{{- end }}
selector:
{{ toYaml .selector | indent 8 }}
+ {{- if .targetLabels }}
+ targetLabels:
+{{ toYaml .targetLabels | indent 8 }}
+ {{- end }}
{{- end }}
{{- end }}
diff --git a/stable/prometheus-operator/values.yaml b/stable/prometheus-operator/values.yaml
index 69a1e78450e4..9e58d56c26f0 100644
--- a/stable/prometheus-operator/values.yaml
+++ b/stable/prometheus-operator/values.yaml
@@ -42,6 +42,16 @@ defaultRules:
## Annotations for default rules
annotations: {}
+## Provide custom recording or alerting rules to be deployed into the cluster.
+##
+additionalPrometheusRules: []
+# - name: my-rule-file
+# groups:
+# - name: my_group
+# rules:
+# - record: my_record
+# expr: 100 * my_record
+
##
global:
rbac:
@@ -95,7 +105,7 @@ alertmanager:
receiver: 'null'
routes:
- match:
- alertname: DeadMansSwitch
+ alertname: Watchdog
receiver: 'null'
receivers:
- name: 'null'
@@ -134,6 +144,11 @@ alertmanager:
hosts: []
# - alertmanager.domain.com
+ ## Paths to use for ingress rules - one path should match the alertmanagerSpec.routePrefix
+ ##
+ paths: []
+ # - /
+
## TLS configuration for Alertmanager Ingress
## Secret must be manually created in the namespace
##
@@ -166,8 +181,28 @@ alertmanager:
## If true, create a serviceMonitor for alertmanager
##
serviceMonitor:
+ ## Scrape interval. If not set, the Prometheus default scrape interval is used.
+ ##
+ interval: ""
selfMonitor: true
+ ## metric relabel configs to apply to samples before ingestion.
+ ##
+ metricRelabelings: []
+ # - action: keep
+ # regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'
+ # sourceLabels: [__name__]
+
+ ## relabel configs to apply to samples before ingestion.
+ ##
+ relabelings: []
+ # - sourceLabels: [__meta_kubernetes_pod_node_name]
+ # separator: ;
+ # regex: ^(.*)$
+ # targetLabel: nodename
+ # replacement: $1
+ # action: replace
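The commented `relabelings` example above performs a `replace` action: the values of `sourceLabels` are joined with `separator`, matched against `regex`, and on a match the expanded `replacement` is written to the target label. A rough Python sketch of those assumed semantics (the sample label set is hypothetical):

```python
import re

def relabel_replace(labels, source_labels, separator, regex, target_label, replacement):
    # Join source label values, and on a full match of `regex` write the
    # expanded replacement into the target label (the 'replace' action).
    value = separator.join(labels.get(name, "") for name in source_labels)
    match = re.fullmatch(regex, value)
    if not match:
        return labels
    updated = dict(labels)
    # Prometheus uses $1-style group references; translate to Python's \1
    updated[target_label] = match.expand(replacement.replace("$", "\\"))
    return updated

labels = {"__meta_kubernetes_pod_node_name": "node-a"}
out = relabel_replace(
    labels,
    source_labels=["__meta_kubernetes_pod_node_name"],
    separator=";",
    regex=r"^(.*)$",
    target_label="nodename",
    replacement="$1",
)
```

With the example's catch-all regex, this simply copies the node name into a `nodename` label on every scraped sample.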
+
## Settings affecting alertmanagerSpec
## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#alertmanagerspec
##
@@ -181,7 +216,12 @@ alertmanager:
##
image:
repository: quay.io/prometheus/alertmanager
- tag: v0.15.3
+ tag: v0.17.0
+
+ ## If true, the user is responsible for providing a secret with the Alertmanager configuration
+ ## When true, the config section (including templateFiles) is ignored and the configuration from the secret is used instead
+ ##
+ useExistingSecret: false
## Secrets is a list of Secrets in the same namespace as the Alertmanager object, which shall be mounted into the
## Alertmanager Pods. The Secrets are mounted into /etc/alertmanager/secrets/.
@@ -257,6 +297,20 @@ alertmanager:
##
podAntiAffinityTopologyKey: kubernetes.io/hostname
+ ## Assign custom affinity rules to the alertmanager instance
+ ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
+ ##
+ affinity: {}
+ # nodeAffinity:
+ # requiredDuringSchedulingIgnoredDuringExecution:
+ # nodeSelectorTerms:
+ # - matchExpressions:
+ # - key: kubernetes.io/e2e-az-name
+ # operator: In
+ # values:
+ # - e2e-az1
+ # - e2e-az2
+
## If specified, the pod's tolerations.
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
@@ -304,11 +358,11 @@ grafana:
adminPassword: prom-operator
ingress:
- ## If true, Prometheus Ingress will be created
+ ## If true, Grafana Ingress will be created
##
enabled: false
- ## Annotations for Prometheus Ingress
+ ## Annotations for Grafana Ingress
##
annotations: {}
# kubernetes.io/ingress.class: nginx
@@ -322,16 +376,19 @@ grafana:
## Must be provided if Ingress is enabled.
##
# hosts:
- # - prometheus.domain.com
+ # - grafana.domain.com
hosts: []
- ## TLS configuration for prometheus Ingress
+ ## Path for grafana ingress
+ path: /
+
+ ## TLS configuration for grafana Ingress
## Secret must be manually created in the namespace
##
tls: []
- # - secretName: prometheus-general-tls
+ # - secretName: grafana-general-tls
# hosts:
- # - prometheus.example.com
+ # - grafana.example.com
sidecar:
dashboards:
@@ -339,6 +396,7 @@ grafana:
label: grafana_dashboard
datasources:
enabled: true
+ defaultDatasourceEnabled: true
label: grafana_datasource
extraConfigmapMounts: []
@@ -347,6 +405,46 @@ grafana:
# configMap: certs-configmap
# readOnly: true
+ ## Configure additional grafana datasources
+ ## ref: http://docs.grafana.org/administration/provisioning/#datasources
+ additionalDataSources: []
+ # - name: prometheus-sample
+ # access: proxy
+ # basicAuth: true
+ # basicAuthPassword: pass
+ # basicAuthUser: daco
+ # editable: false
+ # jsonData:
+ # tlsSkipVerify: true
+ # orgId: 1
+ # type: prometheus
+ # url: https://prometheus.svc:9090
+ # version: 1
+
+ ## If true, create a serviceMonitor for grafana
+ ##
+ serviceMonitor:
+ ## Scrape interval. If not set, the Prometheus default scrape interval is used.
+ ##
+ interval: ""
+ selfMonitor: true
+
+ ## metric relabel configs to apply to samples before ingestion.
+ ##
+ metricRelabelings: []
+ # - action: keep
+ # regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'
+ # sourceLabels: [__name__]
+
+ ## relabel configs to apply to samples before ingestion.
+ ##
+ relabelings: []
+ # - sourceLabels: [__meta_kubernetes_pod_node_name]
+ # separator: ;
+ # regex: ^(.*)$
+ # targetLabel: nodename
+ # replacement: $1
+ # action: replace
## Component scraping the kube api server
##
@@ -356,13 +454,35 @@ kubeApiServer:
serverName: kubernetes
insecureSkipVerify: false
+ ## If your API endpoint address is not reachable (as in AKS), you can replace it with the Kubernetes service
+ ##
+ relabelings: []
+ # - sourceLabels:
+ # - __meta_kubernetes_namespace
+ # - __meta_kubernetes_service_name
+ # - __meta_kubernetes_endpoint_port_name
+ # action: keep
+ # regex: default;kubernetes;https
+ # - targetLabel: __address__
+ # replacement: kubernetes.default.svc:443
+
serviceMonitor:
+ ## Scrape interval. If not set, the Prometheus default scrape interval is used.
+ ##
+ interval: ""
jobLabel: component
selector:
matchLabels:
component: apiserver
provider: kubernetes
+ ## metric relabel configs to apply to samples before ingestion.
+ ##
+ metricRelabelings: []
+ # - action: keep
+ # regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'
+ # sourceLabels: [__name__]
+
## Component scraping the kubelet and kubelet-hosted cAdvisor
##
kubelet:
@@ -370,10 +490,38 @@ kubelet:
namespace: kube-system
serviceMonitor:
+ ## Scrape interval. If not set, the Prometheus default scrape interval is used.
+ ##
+ interval: ""
+
## Enable scraping the kubelet over https. For requirements to enable this see
## https://github.com/coreos/prometheus-operator/issues/926
##
- https: false
+ https: true
+
+ ## Metric relabellings to apply to samples before ingestion
+ ##
+ cAdvisorMetricRelabelings: []
+ # - sourceLabels: [__name__, image]
+ # separator: ;
+ # regex: container_([a-z_]+);
+ # replacement: $1
+ # action: drop
+ # - sourceLabels: [__name__]
+ # separator: ;
+ # regex: container_(network_tcp_usage_total|network_udp_usage_total|tasks_state|cpu_load_average_10s)
+ # replacement: $1
+ # action: drop
+
+ ## relabel configs to apply to samples before ingestion.
+ ##
+ cAdvisorRelabelings: []
+ # - sourceLabels: [__meta_kubernetes_pod_node_name]
+ # separator: ;
+ # regex: ^(.*)$
+ # targetLabel: nodename
+ # replacement: $1
+ # action: replace
## Component scraping the kube controller manager
##
@@ -393,7 +541,35 @@ kubeControllerManager:
port: 10252
targetPort: 10252
selector:
- k8s-app: kube-controller-manager
+ component: kube-controller-manager
+
+ serviceMonitor:
+ ## Scrape interval. If not set, the Prometheus default scrape interval is used.
+ ##
+ interval: ""
+
+ ## Enable scraping kube-controller-manager over https.
+ ## Requires proper certs (not self-signed) and delegated authentication/authorization checks
+ ##
+ https: false
+
+ ## metric relabel configs to apply to samples before ingestion.
+ ##
+ metricRelabelings: []
+ # - action: keep
+ # regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'
+ # sourceLabels: [__name__]
+
+ ## relabel configs to apply to samples before ingestion.
+ ##
+ relabelings: []
+ # - sourceLabels: [__meta_kubernetes_pod_node_name]
+ # separator: ;
+ # regex: ^(.*)$
+ # targetLabel: nodename
+ # replacement: $1
+ # action: replace
+
## Component scraping coreDns. Use either this or kubeDns
##
coreDns:
@@ -402,7 +578,28 @@ coreDns:
port: 9153
targetPort: 9153
selector:
- k8s-app: coredns
+ k8s-app: kube-dns
+ serviceMonitor:
+ ## Scrape interval. If not set, the Prometheus default scrape interval is used.
+ ##
+ interval: ""
+
+ ## metric relabel configs to apply to samples before ingestion.
+ ##
+ metricRelabelings: []
+ # - action: keep
+ # regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'
+ # sourceLabels: [__name__]
+
+ ## relabel configs to apply to samples before ingestion.
+ ##
+ relabelings: []
+ # - sourceLabels: [__meta_kubernetes_pod_node_name]
+ # separator: ;
+ # regex: ^(.*)$
+ # targetLabel: nodename
+ # replacement: $1
+ # action: replace
## Component scraping kubeDns. Use either this or coreDns
##
@@ -411,6 +608,28 @@ kubeDns:
service:
selector:
k8s-app: kube-dns
+ serviceMonitor:
+ ## Scrape interval. If not set, the Prometheus default scrape interval is used.
+ ##
+ interval: ""
+
+ ## metric relabel configs to apply to samples before ingestion.
+ ##
+ metricRelabelings: []
+ # - action: keep
+ # regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'
+ # sourceLabels: [__name__]
+
+ ## relabel configs to apply to samples before ingestion.
+ ##
+ relabelings: []
+ # - sourceLabels: [__meta_kubernetes_pod_node_name]
+ # separator: ;
+ # regex: ^(.*)$
+ # targetLabel: nodename
+ # replacement: $1
+ # action: replace
+
## Component scraping etcd
##
kubeEtcd:
@@ -426,10 +645,10 @@ kubeEtcd:
## Etcd service. If using kubeEtcd.endpoints only the port and targetPort are used
##
service:
- port: 4001
- targetPort: 4001
+ port: 2379
+ targetPort: 2379
selector:
- k8s-app: etcd-server
+ component: etcd
## Configure secure access to the etcd cluster by loading a secret into prometheus and
## specifying security configuration below. For example, with a secret named etcd-client-cert
@@ -443,6 +662,9 @@ kubeEtcd:
## keyFile: /etc/prometheus/secrets/etcd-client-cert/etcd-client-key
##
serviceMonitor:
+ ## Scrape interval. If not set, the Prometheus default scrape interval is used.
+ ##
+ interval: ""
scheme: http
insecureSkipVerify: false
serverName: ""
@@ -450,6 +672,23 @@ kubeEtcd:
certFile: ""
keyFile: ""
+ ## metric relabel configs to apply to samples before ingestion.
+ ##
+ metricRelabelings: []
+ # - action: keep
+ # regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'
+ # sourceLabels: [__name__]
+
+ ## relabel configs to apply to samples before ingestion.
+ ##
+ relabelings: []
+ # - sourceLabels: [__meta_kubernetes_pod_node_name]
+ # separator: ;
+ # regex: ^(.*)$
+ # targetLabel: nodename
+ # replacement: $1
+ # action: replace
+
## Component scraping kube scheduler
##
@@ -469,12 +708,59 @@ kubeScheduler:
port: 10251
targetPort: 10251
selector:
- k8s-app: kube-scheduler
+ component: kube-scheduler
+
+ serviceMonitor:
+ ## Scrape interval. If not set, the Prometheus default scrape interval is used.
+ ##
+ interval: ""
+ ## Enable scraping kube-scheduler over https.
+ ## Requires proper certs (not self-signed) and delegated authentication/authorization checks
+ ##
+ https: false
+
+ ## metric relabel configs to apply to samples before ingestion.
+ ##
+ metricRelabelings: []
+ # - action: keep
+ # regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'
+ # sourceLabels: [__name__]
+
+ ## relabel configs to apply to samples before ingestion.
+ ##
+ relabelings: []
+ # - sourceLabels: [__meta_kubernetes_pod_node_name]
+ # separator: ;
+ # regex: ^(.*)$
+ # targetLabel: nodename
+ # replacement: $1
+ # action: replace
## Component scraping kube state metrics
##
kubeStateMetrics:
enabled: true
+ serviceMonitor:
+ ## Scrape interval. If not set, the Prometheus default scrape interval is used.
+ ##
+ interval: ""
+
+ ## metric relabel configs to apply to samples before ingestion.
+ ##
+ metricRelabelings: []
+ # - action: keep
+ # regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'
+ # sourceLabels: [__name__]
+
+ ## relabel configs to apply to samples before ingestion.
+ ##
+ relabelings: []
+ # - sourceLabels: [__meta_kubernetes_pod_node_name]
+ # separator: ;
+ # regex: ^(.*)$
+ # targetLabel: nodename
+ # replacement: $1
+ # action: replace
## Configuration for kube-state-metrics subchart
##
@@ -493,6 +779,30 @@ nodeExporter:
##
jobLabel: jobLabel
+ serviceMonitor:
+ ## Scrape interval. If not set, the Prometheus default scrape interval is used.
+ ##
+ interval: ""
+
+ ## metric relabel configs to apply to samples before ingestion.
+ ##
+ metricRelabelings: []
+ # - sourceLabels: [__name__]
+ # separator: ;
+ # regex: ^node_mountstats_nfs_(event|operations|transport)_.+
+ # replacement: $1
+ # action: drop
+
+ ## relabel configs to apply to samples before ingestion.
+ ##
+ relabelings: []
+ # - sourceLabels: [__meta_kubernetes_pod_node_name]
+ # separator: ;
+ # regex: ^(.*)$
+ # target_label: nodename
+ # replacement: $1
+ # action: replace
+
## Configuration for prometheus-node-exporter subchart
##
prometheus-node-exporter:
@@ -526,8 +836,15 @@ prometheusOperator:
## Port to expose on each node
## Only used if service.type is 'NodePort'
##
- nodePort: 38080
+ nodePort: 30080
+ ## Additional ports to open for Prometheus service
+ ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#multi-port-services
+ ##
+ additionalPorts: []
+ # - name: thanos-cluster
+ # port: 10900
+ # nodePort: 30111
## Loadbalancer IP
## Only use if service.type is "loadbalancer"
@@ -560,9 +877,20 @@ prometheusOperator:
##
podLabels: {}
+ ## Annotations to add to the operator pod
+ ##
+ podAnnotations: {}
+
## Assign a PriorityClassName to pods if set
# priorityClassName: ""
+ ## Define Log Format
+ # Use logfmt (default) or json-formatted logging
+ # logFormat: logfmt
+
+ ## Decrease log verbosity to errors only
+ # logLevel: error
+
## If true, the operator will create and maintain a service for scraping kubelets
## ref: https://github.com/coreos/prometheus-operator/blob/master/helm/prometheus-operator/README.md
##
@@ -573,8 +901,28 @@ prometheusOperator:
## Create a servicemonitor for the operator
##
serviceMonitor:
+ ## Scrape interval. If not set, the Prometheus default scrape interval is used.
+ ##
+ interval: ""
selfMonitor: true
+ ## metric relabel configs to apply to samples before ingestion.
+ ##
+ metricRelabelings: []
+ # - action: keep
+ # regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'
+ # sourceLabels: [__name__]
+
+ ## relabel configs to apply to samples before ingestion.
+ ##
+ relabelings: []
+ # - sourceLabels: [__meta_kubernetes_pod_node_name]
+ # separator: ;
+ # regex: ^(.*)$
+ # targetLabel: nodename
+ # replacement: $1
+ # action: replace
+
## Resource limits & requests
##
resources: {}
@@ -599,18 +947,19 @@ prometheusOperator:
# value: "value"
# effect: "NoSchedule"
- ## Assign the prometheus operator to run on specific nodes
+ ## Assign custom affinity rules to the prometheus operator
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
##
affinity: {}
- # requiredDuringSchedulingIgnoredDuringExecution:
- # nodeSelectorTerms:
- # - matchExpressions:
- # - key: kubernetes.io/e2e-az-name
- # operator: In
- # values:
- # - e2e-az1
- # - e2e-az2
+ # nodeAffinity:
+ # requiredDuringSchedulingIgnoredDuringExecution:
+ # nodeSelectorTerms:
+ # - matchExpressions:
+ # - key: kubernetes.io/e2e-az-name
+ # operator: In
+ # values:
+ # - e2e-az1
+ # - e2e-az2
securityContext:
runAsNonRoot: true
@@ -620,7 +969,7 @@ prometheusOperator:
##
image:
repository: quay.io/coreos/prometheus-operator
- tag: v0.27.0
+ tag: v0.29.0
pullPolicy: IfNotPresent
## Configmap-reload image to use for reloading configmaps
@@ -633,7 +982,15 @@ prometheusOperator:
##
prometheusConfigReloaderImage:
repository: quay.io/coreos/prometheus-config-reloader
- tag: v0.27.0
+ tag: v0.29.0
+
+ ## Set the prometheus config reloader side-car CPU limit. If unset, uses the prometheus-operator project default
+ ##
+ # configReloaderCpu: 100m
+
+ ## Set the prometheus config reloader side-car memory limit. If unset, uses the prometheus-operator project default
+ ##
+ # configReloaderMemory: 25Mi
## Hyperkube image to use when cleaning up
##
@@ -662,6 +1019,10 @@ prometheus:
labels: {}
clusterIP: ""
+
+ ## To be used with a proxy extraContainer port
+ targetPort: 9090
+
## List of IP addresses at which the Prometheus server service is available
## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips
##
@@ -670,7 +1031,7 @@ prometheus:
## Port to expose on each node
## Only used if service.type is 'NodePort'
##
- nodePort: 39090
+ nodePort: 30090
## Loadbalancer IP
## Only use if service.type is "loadbalancer"
@@ -680,6 +1041,8 @@ prometheus:
##
type: ClusterIP
+ sessionAffinity: ""
+
rbac:
## Create role bindings in the specified namespaces, to allow Prometheus monitoring
## a role binding in the release namespace will always be created.
@@ -709,6 +1072,11 @@ prometheus:
# - prometheus.domain.com
hosts: []
+ ## Paths to use for ingress rules - one path should match the prometheusSpec.routePrefix
+ ##
+ paths: []
+ # - /
+
## TLS configuration for Prometheus Ingress
## Secret must be manually created in the namespace
##
@@ -718,8 +1086,28 @@ prometheus:
# - prometheus.example.com
serviceMonitor:
+ ## Scrape interval. If not set, the Prometheus default scrape interval is used.
+ ##
+ interval: ""
selfMonitor: true
+ ## metric relabel configs to apply to samples before ingestion.
+ ##
+ metricRelabelings: []
+ # - action: keep
+ # regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'
+ # sourceLabels: [__name__]
+
+ ## relabel configs to apply to samples before ingestion.
+ ##
+ relabelings: []
+ # - sourceLabels: [__meta_kubernetes_pod_node_name]
+ # separator: ;
+ # regex: ^(.*)$
+ # targetLabel: nodename
+ # replacement: $1
+ # action: replace
+
## Settings affecting prometheusSpec
## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#prometheusspec
##
@@ -737,11 +1125,17 @@ prometheus:
##
listenLocal: false
+ ## enableAdminAPI enables the Prometheus administrative HTTP API, which includes functionality such as deleting time series.
+ ## This is disabled by default.
+ ## ref: https://prometheus.io/docs/prometheus/latest/querying/api/#tsdb-admin-apis
+ ##
+ enableAdminAPI: false
+
## Image of Prometheus.
##
image:
repository: quay.io/prometheus/prometheus
- tag: v2.6.1
+ tag: v2.9.1
## Tolerations for use with node taints
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
@@ -788,8 +1182,14 @@ prometheus:
##
configMaps: []
+ ## QuerySpec defines the query command line flags when starting Prometheus.
+ ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#queryspec
+ ##
+ query: {}
+
## Namespaces to be selected for PrometheusRules discovery.
- ## If unspecified, only the same namespace as the Prometheus object is in is used.
+ ## If nil, select own namespace.
+ ## See https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#namespaceselector for usage
##
ruleNamespaceSelector: {}
@@ -799,10 +1199,8 @@ prometheus:
##
ruleSelectorNilUsesHelmValues: true
- ## Rules CRD selector
- ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/design.md
- ## If unspecified the release `app` and `release` will be used as the label selector
- ## to load rules
+ ## PrometheusRules to be selected for rule evaluation.
+ ## If {}, select all PrometheusRules
##
ruleSelector: {}
## Example which select all prometheusrules resources
@@ -826,17 +1224,17 @@ prometheus:
##
serviceMonitorSelectorNilUsesHelmValues: true
- ## serviceMonitorSelector will limit which servicemonitors are used to create scrape
- ## configs in Prometheus. See serviceMonitorSelectorUseHelmLabels
+ ## ServiceMonitors to be selected for target discovery.
+ ## If {}, select all ServiceMonitors
##
serviceMonitorSelector: {}
-
- # serviceMonitorSelector: {}
+ ## Example which selects ServiceMonitors with label "prometheus" set to "somelabel"
+ # serviceMonitorSelector:
# matchLabels:
# prometheus: somelabel
- ## serviceMonitorNamespaceSelector will limit namespaces from which serviceMonitors are used to create scrape
- ## configs in Prometheus. By default all namespaces will be used
+ ## Namespaces to be selected for ServiceMonitor discovery.
+ ## See https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#namespaceselector for usage
##
serviceMonitorNamespaceSelector: {}
@@ -856,6 +1254,10 @@ prometheus:
##
logLevel: info
+ ## Log format for Prometheus to be configured with
+ ##
+ logFormat: logfmt
+
## Prefix used to register routes, overriding externalUrl route.
## Useful for proxies that rewrite URLs.
##
@@ -880,16 +1282,29 @@ prometheus:
##
podAntiAffinityTopologyKey: kubernetes.io/hostname
+ ## Assign custom affinity rules to the prometheus instance
+ ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
+ ##
+ affinity: {}
+ # nodeAffinity:
+ # requiredDuringSchedulingIgnoredDuringExecution:
+ # nodeSelectorTerms:
+ # - matchExpressions:
+ # - key: kubernetes.io/e2e-az-name
+ # operator: In
+ # values:
+ # - e2e-az1
+ # - e2e-az2
+
## The remote_read spec configuration for Prometheus.
## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#remotereadspec
- remoteRead: {}
+ remoteRead: []
# - url: http://remote1/read
## The remote_write spec configuration for Prometheus.
## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#remotewritespec
- remoteWrite: {}
- # remoteWrite:
- # - url: http://remote1/push
+ remoteWrite: []
+ # - url: http://remote1/push
## Resource limits & requests
##
@@ -1001,7 +1416,7 @@ prometheus:
thanos: {}
## Containers allows injecting additional containers. This is meant to allow adding an authentication proxy to a Prometheus pod.
- ##
+ ## If using a proxy extraContainer, update targetPort to match the proxy container port
containers: []
## Enable additional scrape configs that are managed externally to this chart. Note that the prometheus
@@ -1024,6 +1439,10 @@ prometheus:
##
# jobLabel: ""
+ ## Labels to transfer from the Kubernetes service to the target
+ ##
+ # targetLabels: ""
+
## Label selector for services to which this ServiceMonitor applies
##
# selector: {}
diff --git a/stable/prometheus-postgres-exporter/Chart.yaml b/stable/prometheus-postgres-exporter/Chart.yaml
index d3f66c4f6219..60db0f0e417a 100644
--- a/stable/prometheus-postgres-exporter/Chart.yaml
+++ b/stable/prometheus-postgres-exporter/Chart.yaml
@@ -2,7 +2,7 @@ apiVersion: v1
appVersion: "0.4.7"
description: A Helm chart for prometheus postgres-exporter
name: prometheus-postgres-exporter
-version: 0.6.1
+version: 0.6.2
home: https://github.com/wrouesnel/postgres_exporter
sources:
- https://github.com/wrouesnel/postgres_exporter
diff --git a/stable/prometheus-postgres-exporter/README.md b/stable/prometheus-postgres-exporter/README.md
index 366b3cb81b0d..9f230c085a48 100644
--- a/stable/prometheus-postgres-exporter/README.md
+++ b/stable/prometheus-postgres-exporter/README.md
@@ -48,7 +48,7 @@ The following table lists the configurable parameters of the postgres Exporter c
| `service.name` | Name of the service port | `http` |
| `service.labels` | Labels to add to the service | `{}` |
| `resources` | | `{}` |
-| `config.datasource` | Postgresql datasource configuration | |
+| `config.datasource` | Postgresql datasource configuration | see [values.yaml](values.yaml) |
| `config.queries` | SQL queries that the exporter will run | [postgres exporter defaults](https://github.com/wrouesnel/postgres_exporter/blob/master/queries.yaml) |
| `config.disableDefaultMetrics` | Specifies whether to use only metrics from `queries.yaml`| `false` |
| `rbac.create` | Specifies whether RBAC resources should be created.| `true` |
diff --git a/stable/prometheus-postgres-exporter/templates/_helpers.tpl b/stable/prometheus-postgres-exporter/templates/_helpers.tpl
index c56c4fd4b587..f3eadedb450d 100644
--- a/stable/prometheus-postgres-exporter/templates/_helpers.tpl
+++ b/stable/prometheus-postgres-exporter/templates/_helpers.tpl
@@ -41,4 +41,12 @@ Create the name of the service account to use
{{- else -}}
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
-{{- end -}}
\ No newline at end of file
+{{- end -}}
+
+
+{{/*
+Set DATA_SOURCE_URI environment variable
+*/}}
+{{- define "prometheus-postgres-exporter.data_source_uri" -}}
+{{ printf "%s:%s/%s?sslmode=%s" .Values.config.datasource.host .Values.config.datasource.port .Values.config.datasource.database .Values.config.datasource.sslmode | quote }}
+{{- end }}
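The new `data_source_uri` helper renders the exporter's `DATA_SOURCE_URI` as `host:port/database?sslmode=...`; the user and password travel separately via `DATA_SOURCE_USER` and `DATA_SOURCE_PASS`. A quick Python rendering of the same format string, with hypothetical datasource values:

```python
def data_source_uri(host, port, database, sslmode):
    # Same shape as the helper's printf "%s:%s/%s?sslmode=%s"
    return f"{host}:{port}/{database}?sslmode={sslmode}"

# hypothetical datasource values
uri = data_source_uri("db.example.com", "5432", "exporter", "disable")
```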
diff --git a/stable/prometheus-postgres-exporter/templates/deployment.yaml b/stable/prometheus-postgres-exporter/templates/deployment.yaml
index 5aa01a2c9926..4d55a51855cd 100644
--- a/stable/prometheus-postgres-exporter/templates/deployment.yaml
+++ b/stable/prometheus-postgres-exporter/templates/deployment.yaml
@@ -1,3 +1,6 @@
+{{- if and .Values.config.datasource.passwordSecret .Values.config.datasource.password -}}
+{{ fail (printf "ERROR: only one of .Values.config.datasource.passwordSecret and .Values.config.datasource.password may be defined") }}
+{{- end -}}
apiVersion: apps/v1beta2
kind: Deployment
metadata:
@@ -36,11 +39,20 @@ spec:
- "--disable-default-metrics"
{{- end }}
env:
- - name: DATA_SOURCE_NAME
- valueFrom:
- secretKeyRef:
- key: data_source_name
- name: {{ template "prometheus-postgres-exporter.fullname" . }}
+ - name: DATA_SOURCE_URI
+ value: {{ template "prometheus-postgres-exporter.data_source_uri" . }}
+ - name: DATA_SOURCE_USER
+ value: {{ .Values.config.datasource.user }}
+ - name: DATA_SOURCE_PASS
+ valueFrom:
+ secretKeyRef:
+ {{- if .Values.config.datasource.passwordSecret }}
+ name: {{ .Values.config.datasource.passwordSecret.name }}
+ key: {{ .Values.config.datasource.passwordSecret.key }}
+ {{- else }}
+ name: {{ template "prometheus-postgres-exporter.fullname" . }}
+ key: data_source_password
+ {{- end }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
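
A sketch of a values override that exercises the new Secret-based path — the Secret name `postgres-credentials` and key `password` are placeholders for a Secret created out of band:

```yaml
config:
  datasource:
    user: exporter
    # Leave `password` unset so the chart-managed Secret is not created
    passwordSecret:
      name: postgres-credentials
      key: password
```

With this set, `DATA_SOURCE_PASS` is sourced via `secretKeyRef` from the referenced Secret rather than the chart's own `data_source_password` key.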
diff --git a/stable/prometheus-postgres-exporter/templates/secrets.yaml b/stable/prometheus-postgres-exporter/templates/secrets.yaml
index 5bd51ed150cd..b1d9858efc69 100644
--- a/stable/prometheus-postgres-exporter/templates/secrets.yaml
+++ b/stable/prometheus-postgres-exporter/templates/secrets.yaml
@@ -1,3 +1,4 @@
+{{- if .Values.config.datasource.password -}}
apiVersion: v1
kind: Secret
metadata:
@@ -9,4 +10,5 @@ metadata:
release: {{ .Release.Name }}
type: Opaque
data:
- data_source_name: "{{ printf "postgresql://%s:%s@%s:%s/%s?sslmode=%s" .Values.config.datasource.user .Values.config.datasource.password .Values.config.datasource.host .Values.config.datasource.port .Values.config.datasource.database .Values.config.datasource.sslmode | b64enc }}"
+ data_source_password: {{ .Values.config.datasource.password | b64enc }}
+{{- end -}}
diff --git a/stable/prometheus-postgres-exporter/values.yaml b/stable/prometheus-postgres-exporter/values.yaml
index 3373cf556874..d71cf852adeb 100644
--- a/stable/prometheus-postgres-exporter/values.yaml
+++ b/stable/prometheus-postgres-exporter/values.yaml
@@ -44,8 +44,15 @@ serviceAccount:
config:
datasource:
host:
- user:
- password:
+ user: postgres
+ # Only one of password and passwordSecret can be specified
+ password: somepassword
+ # Specify passwordSecret if DB password is stored in secret.
+ passwordSecret: {}
+ # Secret name
+ # name:
+ # Password key inside secret
+ # key:
port: "5432"
database: ''
sslmode: disable
diff --git a/stable/prometheus-pushgateway/Chart.yaml b/stable/prometheus-pushgateway/Chart.yaml
index fdae63b8570a..1bc14d72359f 100644
--- a/stable/prometheus-pushgateway/Chart.yaml
+++ b/stable/prometheus-pushgateway/Chart.yaml
@@ -1,8 +1,8 @@
apiVersion: v1
-appVersion: "0.6.0"
+appVersion: "0.8.0"
description: A Helm chart for prometheus pushgateway
name: prometheus-pushgateway
-version: 0.3.0
+version: 0.4.0
home: https://github.com/prometheus/pushgateway
sources:
- https://github.com/prometheus/pushgateway
diff --git a/stable/prometheus-pushgateway/README.md b/stable/prometheus-pushgateway/README.md
index 9a734c889d43..8eeab236c471 100644
--- a/stable/prometheus-pushgateway/README.md
+++ b/stable/prometheus-pushgateway/README.md
@@ -42,8 +42,9 @@ The following table lists the configurable parameters of the pushgateway chart a
| --------------------------- | ----------------------------------------------------------------------------------------------------------------------------- | --------------------------------- |
| `affinity` | Affinity settings for pod assignment | `{}` |
| `extraArgs` | Optional flags for pushgateway | `[]` |
+| `extraVars` | Optional environment variables for pushgateway | `[]` |
| `image.repository` | Image repository | `prom/pushgateway` |
-| `image.tag` | Image tag | `v0.6.0` |
+| `image.tag` | Image tag | `v0.8.0` |
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `ingress.enabled` | Enables Ingress for pushgateway | `false` |
| `ingress.annotations` | Ingress annotations | `{}` |
diff --git a/stable/prometheus-pushgateway/templates/deployment.yaml b/stable/prometheus-pushgateway/templates/deployment.yaml
index 2e8f39b56f40..01fb378225cf 100644
--- a/stable/prometheus-pushgateway/templates/deployment.yaml
+++ b/stable/prometheus-pushgateway/templates/deployment.yaml
@@ -16,13 +16,17 @@ spec:
app: {{ template "prometheus-pushgateway.name" . }}
release: {{ .Release.Name }}
annotations:
-{{ toYaml .Values.podAnnotations | indent 8 }}
+{{ toYaml .Values.podAnnotations | indent 8 }}
spec:
serviceAccountName: {{ template "prometheus-pushgateway.serviceAccountName" . }}
containers:
- name: pushgateway
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
+ {{- if .Values.extraVars }}
+ env:
+{{ toYaml .Values.extraVars | indent 12 }}
+ {{- end }}
{{- if .Values.extraArgs }}
args:
{{ toYaml .Values.extraArgs | indent 12 }}
diff --git a/stable/prometheus-pushgateway/values.yaml b/stable/prometheus-pushgateway/values.yaml
index cc06ed09bddd..77a1921ab158 100644
--- a/stable/prometheus-pushgateway/values.yaml
+++ b/stable/prometheus-pushgateway/values.yaml
@@ -3,7 +3,7 @@
# Declare variables to be passed into your templates.
image:
repository: prom/pushgateway
- tag: v0.6.0
+ tag: v0.8.0
pullPolicy: IfNotPresent
service:
@@ -26,6 +26,9 @@ serviceAccountLabels: {}
# Optional additional arguments
extraArgs: []
+# Optional additional environment variables
+extraVars: []
+
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
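
The new `extraVars` list is passed straight through `toYaml`, so entries use the standard container `env` syntax; a minimal sketch (the secret name and key are illustrative only):

```yaml
extraVars:
  - name: TZ
    value: UTC
  - name: EXAMPLE_FROM_SECRET
    valueFrom:
      secretKeyRef:
        name: my-secret
        key: my-key
```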
diff --git a/stable/prometheus-rabbitmq-exporter/Chart.yaml b/stable/prometheus-rabbitmq-exporter/Chart.yaml
index 6b3734343629..d82c7c708dfe 100644
--- a/stable/prometheus-rabbitmq-exporter/Chart.yaml
+++ b/stable/prometheus-rabbitmq-exporter/Chart.yaml
@@ -1,7 +1,7 @@
apiVersion: v1
description: Rabbitmq metrics exporter for prometheus
name: prometheus-rabbitmq-exporter
-version: 0.3.0
+version: 0.4.1
appVersion: v0.29.0
home: https://github.com/kbudde/rabbitmq_exporter
sources:
diff --git a/stable/prometheus-rabbitmq-exporter/README.md b/stable/prometheus-rabbitmq-exporter/README.md
index adfccb9faf9c..f9220fdb1296 100644
--- a/stable/prometheus-rabbitmq-exporter/README.md
+++ b/stable/prometheus-rabbitmq-exporter/README.md
@@ -44,21 +44,22 @@ The following table lists the configurable parameters and their default values.
| ------------------------ | ---------------------------------------------------------------------- | ------------------------- |
| `replicaCount` | desired number of prometheus-rabbitmq-exporter pods | `1` |
| `image.repository` | prometheus-rabbitmq-exporter image repository | `kbudde/rabbitmq-exporter`|
-| `image.tag` | prometheus-rabbitmq-exporter image tag | `v0.28.0` |
+| `image.tag` | prometheus-rabbitmq-exporter image tag | `v0.29.0` |
| `image.pullPolicy` | image pull policy | `IfNotPresent` |
| `service.type` | desired service type | `ClusterIP` |
-| `service.internalport` | service listening port | `9121` |
+| `service.internalport` | service listening port | `9419` |
| `service.externalPort` | public service port | `9419` |
| `resources` | cpu/memory resource requests/limits | {} |
| `loglevel` | exporter log level | {} |
-| `rabbitmq.url` | rabbitm management url | `http://myrabbit:15672` |
-| `rabbitmq.user` | rabbitm user login | `guest` |
-| `rabbitmq.password` | rabbitm password login | `guest` |
+| `rabbitmq.url` | rabbitmq management url | `http://myrabbit:15672` |
+| `rabbitmq.user` | rabbitmq user login | `guest` |
+| `rabbitmq.password` | rabbitmq password login | `guest` |
| `rabbitmq.capabilities` | comma-separated list of capabilities supported by the RabbitMQ server | `bert,no_sort` |
| `rabbitmq.include_queues`| regex queue filter. just matching names are exported | `.*` |
| `rabbitmq.skip_queues` | regex, matching queue names are not exported | `^$` |
| `rabbitmq.include_vhost` | regex vhost filter. Only queues in matching vhosts are exported | `.*` |
| `rabbitmq.skip_vhost` | regex, matching vhost names are not exported. First performs include_vhost, then skip_vhost | `^$` |
+| `rabbitmq.skip_verify`   | true/1 will ignore certificate errors of the management plugin          | `false`                   |
| `rabbitmq.exporters` | List of enabled modules. Just "connections" is not enabled by default | `exchange,node,overview,queue` |
| `rabbitmq.output_format` | Log output format. TTY and JSON are supported                           | `TTY`                     |
| `rabbitmq.timeout` | timeout in seconds for retrieving data from management plugin | `30` |
diff --git a/stable/prometheus-rabbitmq-exporter/templates/deployment.yaml b/stable/prometheus-rabbitmq-exporter/templates/deployment.yaml
index f7d93baa7633..4f2220e2792d 100644
--- a/stable/prometheus-rabbitmq-exporter/templates/deployment.yaml
+++ b/stable/prometheus-rabbitmq-exporter/templates/deployment.yaml
@@ -44,6 +44,8 @@ spec:
value: "{{ .Values.rabbitmq.include_vhost }}"
- name: SKIP_QUEUES
value: "{{ .Values.rabbitmq.skip_queues }}"
+ - name: SKIPVERIFY
+ value: "{{ .Values.rabbitmq.skip_verify }}"
- name: SKIP_VHOST
value: "{{ .Values.rabbitmq.skip_vhost }}"
- name: RABBIT_EXPORTERS
diff --git a/stable/prometheus-rabbitmq-exporter/values.yaml b/stable/prometheus-rabbitmq-exporter/values.yaml
index 9eb82a5b27c5..5429c20ec6ff 100644
--- a/stable/prometheus-rabbitmq-exporter/values.yaml
+++ b/stable/prometheus-rabbitmq-exporter/values.yaml
@@ -37,6 +37,7 @@ rabbitmq:
include_queues: ".*"
include_vhost: ".*"
skip_queues: "^$"
+ skip_verify: "false"
skip_vhost: "^$"
exporters: "exchange,node,overview,queue"
output_format: "TTY"
@@ -46,4 +47,4 @@ rabbitmq:
annotations: {}
# prometheus.io/scrape: "true"
# prometheus.io/path: "/metrics"
-# prometheus.io/port: 9419
+# prometheus.io/port: "9419"
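
To exercise the new `skip_verify` option against a management endpoint with a self-signed certificate, the override is simply (the TLS URL is hypothetical; note the value is a quoted string, mirroring the chart default):

```yaml
rabbitmq:
  url: https://myrabbit:15671
  skip_verify: "true"
```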
diff --git a/stable/prometheus-redis-exporter/Chart.yaml b/stable/prometheus-redis-exporter/Chart.yaml
index cef664e4e12f..8469e7cc30bc 100644
--- a/stable/prometheus-redis-exporter/Chart.yaml
+++ b/stable/prometheus-redis-exporter/Chart.yaml
@@ -1,8 +1,8 @@
apiVersion: v1
-appVersion: 0.25.0
+appVersion: 0.28.0
description: Prometheus exporter for Redis metrics
name: prometheus-redis-exporter
-version: 1.0.1
+version: 1.0.2
home: https://github.com/oliver006/redis_exporter
sources:
- https://github.com/oliver006/redis_exporter
diff --git a/stable/prometheus-redis-exporter/README.md b/stable/prometheus-redis-exporter/README.md
index b0a807c3c025..3c5feb61d359 100644
--- a/stable/prometheus-redis-exporter/README.md
+++ b/stable/prometheus-redis-exporter/README.md
@@ -44,7 +44,7 @@ The following table lists the configurable parameters and their default values.
| ---------------------- | --------------------------------------------------- | ------------------------- |
| `replicaCount` | desired number of prometheus-redis-exporter pods | `1` |
| `image.repository` | prometheus-redis-exporter image repository | `oliver006/redis_exporter`|
-| `image.tag` | prometheus-redis-exporter image tag | `v0.25.0` |
+| `image.tag` | prometheus-redis-exporter image tag | `v0.28.0` |
| `image.pullPolicy` | image pull policy | `IfNotPresent` |
| `extraArgs` | extra arguments for the binary; possible values [here](https://github.com/oliver006/redis_exporter#flags)| {}
| `env` | additional environment variables in YAML format. Can be used to pass credentials as env variables (via secret) as per the image readme [here](https://github.com/oliver006/redis_exporter#environment-variables) | {} |
diff --git a/stable/prometheus-redis-exporter/values.yaml b/stable/prometheus-redis-exporter/values.yaml
index f42021a83a47..ea0869f14207 100644
--- a/stable/prometheus-redis-exporter/values.yaml
+++ b/stable/prometheus-redis-exporter/values.yaml
@@ -13,7 +13,7 @@ serviceAccount:
replicaCount: 1
image:
repository: oliver006/redis_exporter
- tag: v0.25.0
+ tag: v0.28.0
pullPolicy: IfNotPresent
extraArgs: {}
# Additional Environment variables
diff --git a/stable/prometheus-snmp-exporter/Chart.yaml b/stable/prometheus-snmp-exporter/Chart.yaml
index 19c540f40f1f..2766fa90a295 100644
--- a/stable/prometheus-snmp-exporter/Chart.yaml
+++ b/stable/prometheus-snmp-exporter/Chart.yaml
@@ -1,7 +1,7 @@
apiVersion: v1
description: Prometheus SNMP Exporter
name: prometheus-snmp-exporter
-version: 0.0.1
+version: 0.0.2
appVersion: 0.14.0
home: https://github.com/prometheus/snmp_exporter
sources:
diff --git a/stable/prometheus-snmp-exporter/README.md b/stable/prometheus-snmp-exporter/README.md
index a530fe398ccc..4ba468367e67 100644
--- a/stable/prometheus-snmp-exporter/README.md
+++ b/stable/prometheus-snmp-exporter/README.md
@@ -73,6 +73,8 @@ The following table lists the configurable parameters of the SNMP-Exporter chart
| `rbac.create` | Use Role-based Access Control | `true` |
| `serviceAccount.create` | Should we create a ServiceAccount | `true` |
| `serviceAccount.name` | Name of the ServiceAccount to use | `null` |
+| `serviceMonitor.enabled` | Enables ServiceMonitor | `false` |
+
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
diff --git a/stable/prometheus-snmp-exporter/templates/servicemonitor.yaml b/stable/prometheus-snmp-exporter/templates/servicemonitor.yaml
new file mode 100644
index 000000000000..9e467a3f80fd
--- /dev/null
+++ b/stable/prometheus-snmp-exporter/templates/servicemonitor.yaml
@@ -0,0 +1,30 @@
+{{- if .Values.serviceMonitor.enabled }}
+apiVersion: monitoring.coreos.com/v1
+kind: ServiceMonitor
+metadata:
+ name: {{ template "prometheus-snmp-exporter.name" . }}
+ {{- if .Values.serviceMonitor.namespace }}
+ namespace: {{ .Values.serviceMonitor.namespace }}
+ {{- end }}
+ labels:
+ {{- range $key, $value := .Values.serviceMonitor.selector }}
+ {{ $key }}: {{ $value | quote }}
+ {{- end }}
+spec:
+ endpoints:
+ - port: http
+ {{- if .Values.serviceMonitor.interval }}
+ interval: {{ .Values.serviceMonitor.interval }}
+ {{- end }}
+ honorLabels: {{ .Values.serviceMonitor.honorLabels }}
+ path: {{ .Values.serviceMonitor.path }}
+ {{- if .Values.serviceMonitor.scrapeTimeout }}
+ scrapeTimeout: {{ .Values.serviceMonitor.scrapeTimeout }}
+ {{- end }}
+ {{- if .Values.serviceMonitor.params.enabled }}
+    params:
+{{ toYaml .Values.serviceMonitor.params.conf | indent 6 }}
+ {{- end }}
+ selector:
+ matchLabels:
+ app: {{ template "prometheus-snmp-exporter.name" . }}
+{{- end -}}
diff --git a/stable/prometheus-snmp-exporter/values.yaml b/stable/prometheus-snmp-exporter/values.yaml
index 38dee25cd1cd..0695068a80f3 100644
--- a/stable/prometheus-snmp-exporter/values.yaml
+++ b/stable/prometheus-snmp-exporter/values.yaml
@@ -77,3 +77,31 @@ configmapReload:
## Ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources: {}
+
+# Enable this if you're using https://github.com/coreos/prometheus-operator
+serviceMonitor:
+ enabled: false
+ namespace: monitoring
+
+ # fallback to the prometheus default unless specified
+ # interval: 10s
+
+ ## Defaults to what's used if you follow CoreOS [Prometheus Install Instructions](https://github.com/helm/charts/tree/master/stable/prometheus-operator#tldr)
+ ## [Prometheus Selector Label](https://github.com/helm/charts/tree/master/stable/prometheus-operator#prometheus-operator-1)
+ ## [Kube Prometheus Selector Label](https://github.com/helm/charts/tree/master/stable/prometheus-operator#exporters)
+ selector:
+ prometheus: kube-prometheus
+ # Retain the job and instance labels of the metrics pushed to the snmp-exporter
+ # [Scraping SNMP-exporter](https://github.com/prometheus/snmp_exporter#configure-the-snmp_exporter-as-a-target-to-scrape)
+ honorLabels: true
+
+ params:
+ enabled: false
+ conf:
+ module:
+ - if_mib
+ target:
+ - 127.0.0.1
+
+ path: /snmp
+ scrapeTimeout: 10s
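
Assuming the `params` path resolves under `serviceMonitor` (as the values file defines it), an endpoint rendered with `params.enabled: true` would come out roughly as:

```yaml
endpoints:
  - port: http
    honorLabels: true
    path: /snmp
    scrapeTimeout: 10s
    params:
      module:
        - if_mib
      target:
        - 127.0.0.1
```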
diff --git a/stable/prometheus/Chart.yaml b/stable/prometheus/Chart.yaml
index 244418a3abfc..a8dadc13b0d0 100755
--- a/stable/prometheus/Chart.yaml
+++ b/stable/prometheus/Chart.yaml
@@ -1,6 +1,7 @@
+apiVersion: v1
name: prometheus
-version: 8.4.9
-appVersion: 2.6.1
+version: 8.11.4
+appVersion: 2.9.2
description: Prometheus is a monitoring system and time series database.
home: https://prometheus.io/
icon: https://raw.githubusercontent.com/prometheus/prometheus.github.io/master/assets/prometheus_logo-cb55bb5c346.png
diff --git a/stable/prometheus/README.md b/stable/prometheus/README.md
index cc6f92e34de2..0282c504764f 100644
--- a/stable/prometheus/README.md
+++ b/stable/prometheus/README.md
@@ -177,7 +177,7 @@ Parameter | Description | Default
`nodeExporter.enabled` | If true, create node-exporter | `true`
`nodeExporter.name` | node-exporter container name | `node-exporter`
`nodeExporter.image.repository` | node-exporter container image repository| `prom/node-exporter`
-`nodeExporter.image.tag` | node-exporter container image tag | `v0.17.0`
+`nodeExporter.image.tag` | node-exporter container image tag | `v0.18.0`
`nodeExporter.image.pullPolicy` | node-exporter container image pull policy | `IfNotPresent`
`nodeExporter.extraArgs` | Additional node-exporter container arguments | `{}`
`nodeExporter.extraHostPathMounts` | Additional node-exporter hostPath mounts | `[]`
@@ -215,6 +215,14 @@ Parameter | Description | Default
`pushgateway.podAnnotations` | annotations to be added to pushgateway pods | `{}`
`pushgateway.tolerations` | node taints to tolerate (requires Kubernetes >=1.6) | `[]`
`pushgateway.replicaCount` | desired number of pushgateway pods | `1`
+`pushgateway.persistentVolume.enabled` | If true, Prometheus pushgateway will create a Persistent Volume Claim | `false`
+`pushgateway.persistentVolume.accessModes` | Prometheus pushgateway data Persistent Volume access modes | `[ReadWriteOnce]`
+`pushgateway.persistentVolume.annotations` | Prometheus pushgateway data Persistent Volume annotations | `{}`
+`pushgateway.persistentVolume.existingClaim` | Prometheus pushgateway data Persistent Volume existing claim name | `""`
+`pushgateway.persistentVolume.mountPath` | Prometheus pushgateway data Persistent Volume mount root path | `/data`
+`pushgateway.persistentVolume.size` | Prometheus pushgateway data Persistent Volume size | `2Gi`
+`pushgateway.persistentVolume.storageClass` | Prometheus pushgateway data Persistent Volume Storage Class | `unset`
+`pushgateway.persistentVolume.subPath` | Subdirectory of Prometheus pushgateway data Persistent Volume to mount | `""`
`pushgateway.priorityClassName` | pushgateway priorityClassName | `nil`
`pushgateway.resources` | pushgateway pod resource requests & limits | `{}`
`pushgateway.service.annotations` | annotations for pushgateway service | `{}`
@@ -227,9 +235,10 @@ Parameter | Description | Default
`rbac.create` | If true, create & use RBAC resources | `true`
`server.name` | Prometheus server container name | `server`
`server.image.repository` | Prometheus server container image repository | `prom/prometheus`
-`server.image.tag` | Prometheus server container image tag | `v2.6.1`
+`server.image.tag` | Prometheus server container image tag | `v2.9.2`
`server.image.pullPolicy` | Prometheus server container image pull policy | `IfNotPresent`
`server.enableAdminApi` | If true, Prometheus administrative HTTP API will be enabled. Please note, that you should take care of administrative API access protection (ingress or some frontend Nginx with auth) before enabling it. | `false`
+`server.skipTSDBLock` | If true, Prometheus skip TSDB locking. | `false`
`server.configPath` | Path to a prometheus server config file on the container FS | `/etc/config/prometheus.yml`
`server.global.scrape_interval` | How frequently to scrape targets by default | `1m`
`server.global.scrape_timeout` | How long until a scrape request times out | `10s`
@@ -262,6 +271,7 @@ Parameter | Description | Default
`server.persistentVolume.size` | Prometheus server data Persistent Volume size | `8Gi`
`server.persistentVolume.storageClass` | Prometheus server data Persistent Volume Storage Class | `unset`
`server.persistentVolume.subPath` | Subdirectory of Prometheus server data Persistent Volume to mount | `""`
+`server.emptyDir.sizeLimit` | emptyDir sizeLimit if a Persistent Volume is not used | `""`
`server.podAnnotations` | annotations to be added to Prometheus server pods | `{}`
`server.deploymentAnnotations` | annotations to be added to Prometheus server deployment | `{}`
`server.replicaCount` | desired number of Prometheus server pods | `1`
diff --git a/stable/prometheus/templates/alertmanager-deployment.yaml b/stable/prometheus/templates/alertmanager-deployment.yaml
index 668ffb7db311..12a0d571d275 100644
--- a/stable/prometheus/templates/alertmanager-deployment.yaml
+++ b/stable/prometheus/templates/alertmanager-deployment.yaml
@@ -23,10 +23,6 @@ spec:
labels:
{{- include "prometheus.alertmanager.labels" . | nindent 8 }}
spec:
-{{- if .Values.alertmanager.affinity }}
- affinity:
-{{ toYaml .Values.alertmanager.affinity | indent 8 }}
-{{- end }}
{{- if .Values.alertmanager.schedulerName }}
schedulerName: "{{ .Values.alertmanager.schedulerName }}"
{{- end }}
@@ -81,7 +77,7 @@ spec:
imagePullPolicy: "{{ .Values.configmapReload.image.pullPolicy }}"
args:
- --volume-dir=/etc/config
- - --webhook-url=http://localhost:9093{{ .Values.alertmanager.prefixURL }}/-/reload
+ - --webhook-url=http://127.0.0.1:9093{{ .Values.alertmanager.prefixURL }}/-/reload
resources:
{{ toYaml .Values.configmapReload.resources | indent 12 }}
volumeMounts:
diff --git a/stable/prometheus/templates/alertmanager-statefulset.yaml b/stable/prometheus/templates/alertmanager-statefulset.yaml
index 0748c5b45274..8429ab122677 100644
--- a/stable/prometheus/templates/alertmanager-statefulset.yaml
+++ b/stable/prometheus/templates/alertmanager-statefulset.yaml
@@ -106,10 +106,6 @@ spec:
{{- if .Values.alertmanager.tolerations }}
tolerations:
{{ toYaml .Values.alertmanager.tolerations | indent 8 }}
- {{- end }}
- {{- if .Values.alertmanager.affinity }}
- affinity:
-{{ toYaml .Values.alertmanager.affinity | indent 8 }}
{{- end }}
volumes:
- name: config-volume
diff --git a/stable/prometheus/templates/kube-state-metrics-clusterrole.yaml b/stable/prometheus/templates/kube-state-metrics-clusterrole.yaml
index 1243b0c0b7b5..a58ccc8c1f8f 100644
--- a/stable/prometheus/templates/kube-state-metrics-clusterrole.yaml
+++ b/stable/prometheus/templates/kube-state-metrics-clusterrole.yaml
@@ -30,6 +30,7 @@ rules:
resources:
- daemonsets
- deployments
+ - ingresses
- replicasets
verbs:
- list
@@ -37,6 +38,8 @@ rules:
- apiGroups:
- apps
resources:
+ - daemonsets
+ - deployments
- statefulsets
verbs:
- get
@@ -64,4 +67,11 @@ rules:
verbs:
- list
- watch
+ - apiGroups:
+ - certificates.k8s.io
+ resources:
+ - certificatesigningrequests
+ verbs:
+ - list
+ - watch
{{- end }}
diff --git a/stable/prometheus/templates/pushgateway-deployment.yaml b/stable/prometheus/templates/pushgateway-deployment.yaml
index 7dd7f26fa435..befc4a7973c3 100644
--- a/stable/prometheus/templates/pushgateway-deployment.yaml
+++ b/stable/prometheus/templates/pushgateway-deployment.yaml
@@ -45,6 +45,12 @@ spec:
timeoutSeconds: 10
resources:
{{ toYaml .Values.pushgateway.resources | indent 12 }}
+ {{- if .Values.pushgateway.persistentVolume.enabled }}
+ volumeMounts:
+ - name: storage-volume
+ mountPath: "{{ .Values.pushgateway.persistentVolume.mountPath }}"
+ subPath: "{{ .Values.pushgateway.persistentVolume.subPath }}"
+ {{- end }}
{{- if .Values.imagePullSecrets }}
imagePullSecrets:
{{ toYaml .Values.imagePullSecrets | indent 2 }}
@@ -65,4 +71,10 @@ spec:
affinity:
{{ toYaml .Values.pushgateway.affinity | indent 8 }}
{{- end }}
+ {{- if .Values.pushgateway.persistentVolume.enabled }}
+ volumes:
+ - name: storage-volume
+ persistentVolumeClaim:
+ claimName: {{ if .Values.pushgateway.persistentVolume.existingClaim }}{{ .Values.pushgateway.persistentVolume.existingClaim }}{{- else }}{{ template "prometheus.pushgateway.fullname" . }}{{- end }}
+ {{- end -}}
{{- end }}
diff --git a/stable/prometheus/templates/pushgateway-pvc.yaml b/stable/prometheus/templates/pushgateway-pvc.yaml
new file mode 100644
index 000000000000..061ca19cf27b
--- /dev/null
+++ b/stable/prometheus/templates/pushgateway-pvc.yaml
@@ -0,0 +1,27 @@
+{{- if .Values.pushgateway.persistentVolume.enabled -}}
+{{- if not .Values.pushgateway.persistentVolume.existingClaim -}}
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ {{- if .Values.pushgateway.persistentVolume.annotations }}
+ annotations:
+{{ toYaml .Values.pushgateway.persistentVolume.annotations | indent 4 }}
+ {{- end }}
+ labels:
+ {{- include "prometheus.pushgateway.labels" . | nindent 4 }}
+ name: {{ template "prometheus.pushgateway.fullname" . }}
+spec:
+ accessModes:
+{{ toYaml .Values.pushgateway.persistentVolume.accessModes | indent 4 }}
+{{- if .Values.pushgateway.persistentVolume.storageClass }}
+{{- if (eq "-" .Values.pushgateway.persistentVolume.storageClass) }}
+ storageClassName: ""
+{{- else }}
+ storageClassName: "{{ .Values.pushgateway.persistentVolume.storageClass }}"
+{{- end }}
+{{- end }}
+ resources:
+ requests:
+ storage: "{{ .Values.pushgateway.persistentVolume.size }}"
+{{- end -}}
+{{- end -}}
diff --git a/stable/prometheus/templates/server-clusterrole.yaml b/stable/prometheus/templates/server-clusterrole.yaml
index 9fe94d467efa..e039172a3be2 100644
--- a/stable/prometheus/templates/server-clusterrole.yaml
+++ b/stable/prometheus/templates/server-clusterrole.yaml
@@ -15,16 +15,11 @@ rules:
- endpoints
- pods
- ingresses
+ - configmaps
verbs:
- get
- list
- watch
- - apiGroups:
- - ""
- resources:
- - configmaps
- verbs:
- - get
- apiGroups:
- "extensions"
resources:
diff --git a/stable/prometheus/templates/server-deployment.yaml b/stable/prometheus/templates/server-deployment.yaml
index 73f6171457bf..e4a96403cea4 100644
--- a/stable/prometheus/templates/server-deployment.yaml
+++ b/stable/prometheus/templates/server-deployment.yaml
@@ -27,10 +27,6 @@ spec:
labels:
{{- include "prometheus.server.labels" . | nindent 8 }}
spec:
-{{- if .Values.server.affinity }}
- affinity:
-{{ toYaml .Values.server.affinity | indent 8 }}
-{{- end }}
{{- if .Values.server.priorityClassName }}
priorityClassName: "{{ .Values.server.priorityClassName }}"
{{- end }}
@@ -44,7 +40,7 @@ spec:
image: "{{ .Values.initChownData.image.repository }}:{{ .Values.initChownData.image.tag }}"
imagePullPolicy: "{{ .Values.initChownData.image.pullPolicy }}"
resources:
-{{ toYaml .Values.initChownData.resources | indent 12 }}
+{{ toYaml .Values.initChownData.resources | indent 10 }}
# 65534 is the nobody user that prometheus uses.
command: ["chown", "-R", "65534:65534", "{{ .Values.server.persistentVolume.mountPath }}"]
volumeMounts:
@@ -87,7 +83,7 @@ spec:
{{- end }}
args:
{{- if .Values.server.retention }}
- - --storage.tsdb.retention={{ .Values.server.retention }}
+ - --storage.tsdb.retention.time={{ .Values.server.retention }}
{{- end }}
- --config.file={{ .Values.server.configPath }}
- --storage.tsdb.path={{ .Values.server.persistentVolume.mountPath }}
@@ -103,6 +99,9 @@ spec:
{{- if .Values.server.enableAdminApi }}
- --web.enable-admin-api
{{- end }}
+ {{- if .Values.server.skipTSDBLock }}
+ - --storage.tsdb.no-lockfile
+ {{- end }}
ports:
- containerPort: 9090
readinessProbe:
@@ -180,7 +179,12 @@ spec:
persistentVolumeClaim:
claimName: {{ if .Values.server.persistentVolume.existingClaim }}{{ .Values.server.persistentVolume.existingClaim }}{{- else }}{{ template "prometheus.server.fullname" . }}{{- end }}
{{- else }}
- emptyDir: {}
+ emptyDir:
+ {{- if .Values.server.emptyDir.sizeLimit }}
+ sizeLimit: {{ .Values.server.emptyDir.sizeLimit }}
+ {{- else }}
+ {}
+ {{- end -}}
{{- end -}}
{{- if .Values.server.extraVolumes }}
{{ toYaml .Values.server.extraVolumes | indent 8}}
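
The retention flag rename tracks Prometheus itself: from v2.7 on, `--storage.tsdb.retention` is deprecated in favor of `--storage.tsdb.retention.time`. A values sketch combining this with the new emptyDir size limit (sizes are illustrative):

```yaml
server:
  retention: 15d          # rendered as --storage.tsdb.retention.time=15d
  persistentVolume:
    enabled: false
  emptyDir:
    sizeLimit: 4Gi        # applied only when no Persistent Volume is used
```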
diff --git a/stable/prometheus/templates/server-statefulset.yaml b/stable/prometheus/templates/server-statefulset.yaml
index 82d85f72c61d..b952590daf68 100644
--- a/stable/prometheus/templates/server-statefulset.yaml
+++ b/stable/prometheus/templates/server-statefulset.yaml
@@ -42,7 +42,7 @@ spec:
image: "{{ .Values.initChownData.image.repository }}:{{ .Values.initChownData.image.tag }}"
imagePullPolicy: "{{ .Values.initChownData.image.pullPolicy }}"
resources:
-{{ toYaml .Values.initChownData.resources | indent 12 }}
+{{ toYaml .Values.initChownData.resources | indent 10 }}
# 65534 is the nobody user that prometheus uses.
command: ["chown", "-R", "65534:65534", "{{ .Values.server.persistentVolume.mountPath }}"]
volumeMounts:
@@ -80,7 +80,7 @@ spec:
imagePullPolicy: "{{ .Values.server.image.pullPolicy }}"
args:
{{- if .Values.server.retention }}
- - --storage.tsdb.retention={{ .Values.server.retention }}
+ - --storage.tsdb.retention.time={{ .Values.server.retention }}
{{- end }}
- --config.file={{ .Values.server.configPath }}
- --storage.tsdb.path={{ .Values.server.persistentVolume.mountPath }}
@@ -96,6 +96,9 @@ spec:
{{- if .Values.server.enableAdminApi }}
- --web.enable-admin-api
{{- end }}
+ {{- if .Values.server.skipTSDBLock }}
+ - --storage.tsdb.no-lockfile
+ {{- end }}
ports:
- containerPort: 9090
readinessProbe:
diff --git a/stable/prometheus/values.yaml b/stable/prometheus/values.yaml
index 32f35f4f7381..094980ca0cb2 100644
--- a/stable/prometheus/values.yaml
+++ b/stable/prometheus/values.yaml
@@ -315,7 +315,7 @@ kubeStateMetrics:
##
image:
repository: quay.io/coreos/kube-state-metrics
- tag: v1.5.0
+ tag: v1.6.0
pullPolicy: IfNotPresent
## kube-state-metrics priorityClassName
@@ -404,7 +404,7 @@ nodeExporter:
##
image:
repository: prom/node-exporter
- tag: v0.17.0
+ tag: v0.18.0
pullPolicy: IfNotPresent
## Specify if a Pod Security Policy for node-exporter must be created
@@ -518,7 +518,7 @@ server:
##
image:
repository: prom/prometheus
- tag: v2.6.1
+ tag: v2.9.2
pullPolicy: IfNotPresent
## prometheus server priorityClassName
@@ -555,6 +555,9 @@ server:
## series. This is disabled by default.
enableAdminApi: false
+  ## This flag controls TSDB locking
+ skipTSDBLock: false
+
## Path to a configuration file on prometheus server container FS
configPath: /etc/config/prometheus.yml
@@ -713,6 +716,9 @@ server:
##
subPath: ""
+ emptyDir:
+ sizeLimit: ""
+
## Annotations to be added to Prometheus server pods
##
podAnnotations: {}
@@ -798,6 +804,7 @@ pushgateway:
## Additional pushgateway container arguments
##
+ ## for example: persistence.file: /data/pushgateway.data
extraArgs: {}
ingress:
@@ -877,6 +884,51 @@ pushgateway:
servicePort: 9091
type: ClusterIP
+ persistentVolume:
+ ## If true, pushgateway will create/use a Persistent Volume Claim
+ ## If false, use emptyDir
+ ##
+ enabled: false
+
+ ## pushgateway data Persistent Volume access modes
+ ## Must match those of existing PV or dynamic provisioner
+ ## Ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
+ ##
+ accessModes:
+ - ReadWriteOnce
+
+ ## pushgateway data Persistent Volume Claim annotations
+ ##
+ annotations: {}
+
+ ## pushgateway data Persistent Volume existing claim name
+ ## Requires pushgateway.persistentVolume.enabled: true
+ ## If defined, PVC must be created manually before volume will be bound
+ existingClaim: ""
+
+ ## pushgateway data Persistent Volume mount root path
+ ##
+ mountPath: /data
+
+ ## pushgateway data Persistent Volume size
+ ##
+ size: 2Gi
+
+  ## pushgateway data Persistent Volume Storage Class
+ ## If defined, storageClassName:
+ ## If set to "-", storageClassName: "", which disables dynamic provisioning
+ ## If undefined (the default) or set to null, no storageClassName spec is
+ ## set, choosing the default provisioner. (gp2 on AWS, standard on
+ ## GKE, AWS & OpenStack)
+ ##
+ # storageClass: "-"
+
+  ## Subdirectory of pushgateway data Persistent Volume to mount
+ ## Useful if the volume's root directory is not empty
+ ##
+ subPath: ""
+
+
## alertmanager ConfigMap entries
##
alertmanagerFiles:
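
Enabling the PVC alone does not make pushgateway write its state to disk; as the `extraArgs` comment hints, the `persistence.file` flag must also point into the mounted volume. A sketch combining the two (the flush interval is illustrative):

```yaml
pushgateway:
  persistentVolume:
    enabled: true
    size: 2Gi
  extraArgs:
    persistence.file: /data/pushgateway.data
    persistence.interval: 5m
```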
diff --git a/stable/rabbitmq-ha/Chart.yaml b/stable/rabbitmq-ha/Chart.yaml
index f6d115d00e3c..b6fd261b955f 100644
--- a/stable/rabbitmq-ha/Chart.yaml
+++ b/stable/rabbitmq-ha/Chart.yaml
@@ -1,7 +1,7 @@
name: rabbitmq-ha
apiVersion: v1
-appVersion: 3.7.8
-version: 1.19.0
+appVersion: 3.7.12
+version: 1.27.1
description: Highly available RabbitMQ cluster, the open source message broker
software that implements the Advanced Message Queuing Protocol (AMQP).
keywords:
diff --git a/stable/rabbitmq-ha/README.md b/stable/rabbitmq-ha/README.md
index 013f7648e8a7..e73b62557f63 100644
--- a/stable/rabbitmq-ha/README.md
+++ b/stable/rabbitmq-ha/README.md
@@ -66,7 +66,7 @@ and their default values.
| Parameter | Description | Default |
|------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------|
| `existingConfigMap` | Use an existing ConfigMap | `false` |
-| `existingSecret`                               | Use an existing secret for password & erlang cookie                                                                                                                                                      | `""`                                                        |
+| `existingSecret`                               | Use an existing secret for password, managementPassword & erlang cookie                                                                                                                                  | `""`                                                        |
| `extraPlugins` | Additional plugins to add to the default configmap | `rabbitmq_shovel, rabbitmq_shovel_management, rabbitmq_federation, rabbitmq_federation_management,` |
| `extraConfig` | Additional configuration to add to default configmap | `{}` |
| `advancedConfig` | Additional configuration in classic config format | `""` |
@@ -81,9 +81,9 @@ and their default values.
| `definitionsSource` | Use this key within an existing secret to reference the definitions specification | `"definitions.json"` |
| `image.pullPolicy` | Image pull policy | `Always` if `image` tag is `latest`, else `IfNotPresent` |
| `image.repository` | RabbitMQ container image repository | `rabbitmq` |
-| `image.tag` | RabbitMQ container image tag | `3.7-alpine` |
+| `image.tag` | RabbitMQ container image tag | `3.7.12-alpine` |
| `image.pullSecrets` | Specify docker-registry secret names as an array | `[]` |
-| `managementPassword` | Management user password. Should be changed from default | `E9R3fjZm4ejFkVFE` |
+| `managementPassword` | Management user password. | _random 24 character long alphanumeric string_ |
| `managementUsername` | Management user with minimal permissions used for health checks | `management` |
| `nodeSelector` | Node labels for pod assignment | `{}` |
| `persistentVolume.accessMode` | Persistent volume access modes | `[ReadWriteOnce]` |
@@ -93,6 +93,8 @@ and their default values.
| `persistentVolume.size` | Persistent volume size | `8Gi` |
| `persistentVolume.storageClass` | Persistent volume storage class | `-` |
| `podAntiAffinity` | Pod antiaffinity, `hard` or `soft` | `hard` |
+| `podAntiAffinityTopologyKey`                    | Topology key for pod anti-affinity                                                                                                                                                                      | `kubernetes.io/hostname`                                    |
+| `podDisruptionBudget` | Pod Disruption Budget rules | `{}` |
| `podManagementPolicy` | Whether the pods should be restarted in parallel or one at a time. Either `OrderedReady` or `Parallel`. | `OrderedReady` |
| `prometheus.exporter.enabled` | Configures Prometheus Exporter to expose and scrape stats | `false` |
| `prometheus.exporter.env` | Environment variables to set for Exporter container | `{}` |
@@ -126,7 +128,7 @@ and their default values.
| `rabbitmqMemoryHighWatermark` | Memory high watermark | `256MB` |
| `rabbitmqMemoryHighWatermarkType` | Memory high watermark type. Either absolute or relative | `absolute` |
| `rabbitmqNodePort` | Node port | `5672` |
-| `rabbitmqPassword` | RabbitMQ application password | _random 10 character long alphanumeric string_ |
+| `rabbitmqPassword` | RabbitMQ application password | _random 24 character long alphanumeric string_ |
| `rabbitmqSTOMPPlugin.config` | STOMP configuration | `` |
| `rabbitmqSTOMPPlugin.enabled` | Enable STOMP plugin | `false` |
| `rabbitmqUsername` | RabbitMQ application username | `guest` |
@@ -138,6 +140,11 @@ and their default values.
| `rbac.create` | If true, create & use RBAC resources | `true` |
| `replicaCount` | Number of replica | `3` |
| `resources` | CPU/Memory resource requests/limits | `{}` |
+| `schedulerName`                                 | Alternate scheduler name                                                                                                                                                                                 | `nil`                                                       |
+| `securityContext.fsGroup` | Group ID for the container's volumes | `101` |
+| `securityContext.runAsGroup` | Group ID for the container | `101` |
+| `securityContext.runAsNonRoot` | Enforce non-root user ID for the container | `true` |
+| `securityContext.runAsUser` | User ID for the container | `100` |
| `serviceAccount.create` | Create service account | `true` |
| `serviceAccount.name` | Service account name to use | _name of the release_ |
| `service.annotations` | Annotations to add to the service | `{}` |
@@ -146,6 +153,12 @@ and their default values.
| `service.loadBalancerIP` | IP address to assign to load balancer (if supported) | `""` |
| `service.loadBalancerSourceRanges` | List of IP CIDRs allowed access to load balancer (if supported) | `[]` |
| `service.type` | Type of service to create | `ClusterIP` |
+| `ingress.enabled` | Enable Ingress | `false` |
+| `ingress.path` | Ingress path | `/` |
+| `ingress.hostName` | Ingress hostname | |
+| `ingress.tls` | Enable Ingress TLS | `false` |
+| `ingress.tlsSecret` | Ingress TLS secret name | `myTlsSecret` |
+| `ingress.annotations` | Ingress annotations | `{}` |
| `tolerations` | Toleration labels for pod assignment | `[]` |
| `podAnnotations` | Extra annotations to add to pod | `{}` |
| `terminationGracePeriodSeconds` | Duration pod needs to terminate gracefully | `10` |
@@ -161,12 +174,13 @@ Specify each parameter using the `--set key=value[,key=value]` argument to `helm
```bash
$ helm install --name my-release \
- --set rabbitmqUsername=admin,rabbitmqPassword=secretpassword,rabbitmqErlangCookie=secretcookie \
+ --set rabbitmqUsername=admin,rabbitmqPassword=secretpassword,managementPassword=anothersecretpassword,rabbitmqErlangCookie=secretcookie \
stable/rabbitmq-ha
```
The above command sets the RabbitMQ admin username and password to `admin` and
-`secretpassword` respectively. Additionally the secure erlang cookie is set to
+`secretpassword` respectively. Additionally the management user password is set
+to `anothersecretpassword` and the secure erlang cookie is set to
`secretcookie`.
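
For reference, the same overrides expressed as a values file might look like the following sketch (key names are taken from the parameter table above; all values are placeholders):

```yaml
# my-values.yaml -- illustrative only; install with:
#   helm install --name my-release -f my-values.yaml stable/rabbitmq-ha
rabbitmqUsername: admin
rabbitmqPassword: secretpassword
managementPassword: anothersecretpassword
rabbitmqErlangCookie: secretcookie
```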
Alternatively, a YAML file that specifies the values for the parameters can be
@@ -230,6 +244,8 @@ the following keys:
* `rabbitmq-user`
* `rabbitmq-password`
+* `rabbitmq-management-user`
+* `rabbitmq-management-password`
* `rabbitmq-erlang-cookie`
* `definitions.json` (the name can be altered by setting the `definitionsSource`)
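
A pre-created secret matching these keys could be sketched as follows (the secret name and every value here are illustrative; reference the secret via `existingSecret`):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-rabbitmq-secret  # set existingSecret to this name
type: Opaque
stringData:
  rabbitmq-user: guest
  rabbitmq-password: secretpassword
  rabbitmq-management-user: management
  rabbitmq-management-password: anothersecretpassword
  rabbitmq-erlang-cookie: secretcookie
  definitions.json: "{}"
```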
diff --git a/stable/rabbitmq-ha/templates/NOTES.txt b/stable/rabbitmq-ha/templates/NOTES.txt
index 777c1876842c..d5c97ab3e0e1 100644
--- a/stable/rabbitmq-ha/templates/NOTES.txt
+++ b/stable/rabbitmq-ha/templates/NOTES.txt
@@ -2,14 +2,11 @@
Credentials:
- Username : {{ .Values.rabbitmqUsername -}}
- {{ if .Values.existingSecret }}
- Password : $(kubectl get secret --namespace {{ .Release.Namespace }} {{ template "rabbitmq-ha.secretName" . }} -o jsonpath="{.data.rabbitmq-password}" | base64 --decode)
- ErLang Cookie : $(kubectl get secret --namespace {{ .Release.Namespace }} {{ template "rabbitmq-ha.secretName" . }} -o jsonpath="{.data.rabbitmq-erlang-cookie}" | base64 --decode)
- {{ else }}
- Password : $(kubectl get secret --namespace {{ .Release.Namespace }} {{ template "rabbitmq-ha.fullname" . }} -o jsonpath="{.data.rabbitmq-password}" | base64 --decode)
- ErLang Cookie : $(kubectl get secret --namespace {{ .Release.Namespace }} {{ template "rabbitmq-ha.fullname" . }} -o jsonpath="{.data.rabbitmq-erlang-cookie}" | base64 --decode)
- {{ end }}
+ Username : {{ .Values.rabbitmqUsername }}
+ Password : $(kubectl get secret --namespace {{ .Release.Namespace }} {{ template "rabbitmq-ha.secretName" . }} -o jsonpath="{.data.rabbitmq-password}" | base64 --decode)
+ Management username : {{ .Values.managementUsername }}
+ Management password : $(kubectl get secret --namespace {{ .Release.Namespace }} {{ template "rabbitmq-ha.secretName" . }} -o jsonpath="{.data.rabbitmq-management-password}" | base64 --decode)
+ Erlang Cookie : $(kubectl get secret --namespace {{ .Release.Namespace }} {{ template "rabbitmq-ha.secretName" . }} -o jsonpath="{.data.rabbitmq-erlang-cookie}" | base64 --decode)
RabbitMQ can be accessed within the cluster on port {{ .Values.rabbitmqNodePort }} at {{ template "rabbitmq-ha.fullname" . }}.{{ .Release.Namespace }}.svc.{{ .Values.clusterDomain }}
diff --git a/stable/rabbitmq-ha/templates/_helpers.tpl b/stable/rabbitmq-ha/templates/_helpers.tpl
index bc3fabfac29a..202a388b0031 100644
--- a/stable/rabbitmq-ha/templates/_helpers.tpl
+++ b/stable/rabbitmq-ha/templates/_helpers.tpl
@@ -24,6 +24,13 @@ If release name contains chart name it will be used as a full name.
{{- end -}}
{{- end -}}
+{{/*
+Create chart name and version as used by the chart label.
+*/}}
+{{- define "rabbitmq-ha.chart" -}}
+{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+
{{/*
Create the name of the service account to use
*/}}
diff --git a/stable/rabbitmq-ha/templates/alerts.yaml b/stable/rabbitmq-ha/templates/alerts.yaml
index 32c45a4b1275..4cb728c9b47c 100644
--- a/stable/rabbitmq-ha/templates/alerts.yaml
+++ b/stable/rabbitmq-ha/templates/alerts.yaml
@@ -6,7 +6,7 @@ metadata:
namespace: {{ .Values.prometheus.operator.serviceMonitor.namespace }}
labels:
app: {{ template "rabbitmq-ha.name" . }}
- chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
+ chart: {{ template "rabbitmq-ha.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
{{- if .Values.prometheus.operator.serviceMonitor.selector }}
diff --git a/stable/rabbitmq-ha/templates/configmap.yaml b/stable/rabbitmq-ha/templates/configmap.yaml
index 9a7a268499b4..ff9ebe2b3cf4 100644
--- a/stable/rabbitmq-ha/templates/configmap.yaml
+++ b/stable/rabbitmq-ha/templates/configmap.yaml
@@ -3,9 +3,10 @@ apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "rabbitmq-ha.fullname" . }}
+ namespace: {{ .Release.Namespace }}
labels:
app: {{ template "rabbitmq-ha.name" . }}
- chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
+ chart: {{ template "rabbitmq-ha.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
{{- if .Values.extraLabels }}
diff --git a/stable/rabbitmq-ha/templates/ingress.yaml b/stable/rabbitmq-ha/templates/ingress.yaml
index f9d6831145b6..4949948d0307 100644
--- a/stable/rabbitmq-ha/templates/ingress.yaml
+++ b/stable/rabbitmq-ha/templates/ingress.yaml
@@ -3,9 +3,10 @@ apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: {{ template "rabbitmq-ha.fullname" . }}
+ namespace: {{ .Release.Namespace }}
labels:
app: {{ template "rabbitmq-ha.name" . }}
- chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
+ chart: {{ template "rabbitmq-ha.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
annotations:
diff --git a/stable/rabbitmq-ha/templates/pdb.yaml b/stable/rabbitmq-ha/templates/pdb.yaml
new file mode 100644
index 000000000000..60a8dc80c08d
--- /dev/null
+++ b/stable/rabbitmq-ha/templates/pdb.yaml
@@ -0,0 +1,18 @@
+{{- if .Values.podDisruptionBudget -}}
+apiVersion: policy/v1beta1
+kind: PodDisruptionBudget
+metadata:
+ name: {{ template "rabbitmq-ha.fullname" . }}
+ namespace: {{ .Release.Namespace }}
+ labels:
+ app: {{ template "rabbitmq-ha.name" . }}
+ chart: {{ template "rabbitmq-ha.chart" . }}
+ release: {{ .Release.Name }}
+ heritage: {{ .Release.Service }}
+spec:
+ selector:
+ matchLabels:
+ app: {{ template "rabbitmq-ha.name" . }}
+ release: {{ .Release.Name }}
+{{ toYaml .Values.podDisruptionBudget | indent 2 }}
+{{- end -}}
diff --git a/stable/rabbitmq-ha/templates/role.yaml b/stable/rabbitmq-ha/templates/role.yaml
index 04283066c6e8..0e0fcf7f8237 100644
--- a/stable/rabbitmq-ha/templates/role.yaml
+++ b/stable/rabbitmq-ha/templates/role.yaml
@@ -4,13 +4,14 @@ kind: Role
metadata:
labels:
app: {{ template "rabbitmq-ha.name" . }}
- chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ chart: {{ template "rabbitmq-ha.chart" . }}
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
{{- if .Values.extraLabels }}
{{ toYaml .Values.extraLabels | indent 4 }}
{{- end }}
name: {{ template "rabbitmq-ha.fullname" . }}
+ namespace: {{ .Release.Namespace }}
rules:
- apiGroups: [""]
resources: ["endpoints"]
diff --git a/stable/rabbitmq-ha/templates/rolebinding.yaml b/stable/rabbitmq-ha/templates/rolebinding.yaml
index 0969e3c82ab4..620b0b71acf8 100644
--- a/stable/rabbitmq-ha/templates/rolebinding.yaml
+++ b/stable/rabbitmq-ha/templates/rolebinding.yaml
@@ -4,16 +4,18 @@ kind: RoleBinding
metadata:
labels:
app: {{ template "rabbitmq-ha.name" . }}
- chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ chart: {{ template "rabbitmq-ha.chart" . }}
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
{{- if .Values.extraLabels }}
{{ toYaml .Values.extraLabels | indent 4 }}
{{- end }}
name: {{ template "rabbitmq-ha.fullname" . }}
+ namespace: {{ .Release.Namespace }}
subjects:
- kind: ServiceAccount
name: {{ template "rabbitmq-ha.serviceAccountName" . }}
+ namespace: {{ .Release.Namespace }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
diff --git a/stable/rabbitmq-ha/templates/secret.yaml b/stable/rabbitmq-ha/templates/secret.yaml
index 583116772073..16c89e239fb2 100644
--- a/stable/rabbitmq-ha/templates/secret.yaml
+++ b/stable/rabbitmq-ha/templates/secret.yaml
@@ -3,9 +3,10 @@ apiVersion: v1
kind: Secret
metadata:
name: {{ template "rabbitmq-ha.fullname" . }}
+ namespace: {{ .Release.Namespace }}
labels:
app: {{ template "rabbitmq-ha.name" . }}
- chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ chart: {{ template "rabbitmq-ha.chart" . }}
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
{{- if .Values.extraLabels }}
@@ -15,8 +16,12 @@ type: Opaque
data:
{{- $password := .Values.rabbitmqPassword | default (randAlphaNum 24 | nospace) -}}
{{- $_ := set .Values "rabbitmqPassword" $password }}
+ {{- $managementPassword := .Values.managementPassword | default (randAlphaNum 24 | nospace) -}}
+ {{- $_ := set .Values "managementPassword" $managementPassword }}
rabbitmq-username: {{ .Values.rabbitmqUsername | b64enc | quote }}
rabbitmq-password: {{ .Values.rabbitmqPassword | b64enc | quote }}
+ rabbitmq-management-username: {{ .Values.managementUsername | b64enc | quote }}
+ rabbitmq-management-password: {{ .Values.managementPassword | b64enc | quote }}
rabbitmq-erlang-cookie: {{ .Values.rabbitmqErlangCookie | default (randAlphaNum 32) | b64enc | quote }}
{{ .Values.definitionsSource }}: {{ include "rabbitmq-ha.definitions" . | b64enc | quote }}
{{ end }}
@@ -28,7 +33,7 @@ metadata:
name: {{ template "rabbitmq-ha.fullname" . }}-cert
labels:
app: {{ template "rabbitmq-ha.name" . }}
- chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ chart: {{ template "rabbitmq-ha.chart" . }}
release: "{{ .Release.Name }}"
heritage: "{{ .Release.Service }}"
{{- if .Values.extraLabels }}
diff --git a/stable/rabbitmq-ha/templates/service-discovery.yaml b/stable/rabbitmq-ha/templates/service-discovery.yaml
index 442fe5c51309..e9cee944fd73 100644
--- a/stable/rabbitmq-ha/templates/service-discovery.yaml
+++ b/stable/rabbitmq-ha/templates/service-discovery.yaml
@@ -6,10 +6,11 @@ metadata:
{{- if .Values.service.annotations }}
{{ toYaml .Values.service.annotations | indent 4 }}
{{- end }}
- name: {{ template "rabbitmq-ha.fullname" . }}-discovery
+ name: {{ printf "%s-discovery" (include "rabbitmq-ha.fullname" .) | trunc 63 | trimSuffix "-" }}
+ namespace: {{ .Release.Namespace }}
labels:
app: {{ template "rabbitmq-ha.name" . }}
- chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
+ chart: {{ template "rabbitmq-ha.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
diff --git a/stable/rabbitmq-ha/templates/service.yaml b/stable/rabbitmq-ha/templates/service.yaml
index e9d0f0c1242b..8a6ac8a2dd1f 100644
--- a/stable/rabbitmq-ha/templates/service.yaml
+++ b/stable/rabbitmq-ha/templates/service.yaml
@@ -12,16 +12,19 @@ metadata:
prometheus.io/port: {{ .Values.prometheus.exporter.port | quote }}
{{- end }}
name: {{ template "rabbitmq-ha.fullname" . }}
+ namespace: {{ .Release.Namespace }}
labels:
app: {{ template "rabbitmq-ha.name" . }}
- chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
+ chart: {{ template "rabbitmq-ha.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
{{- if .Values.extraLabels }}
{{ toYaml .Values.extraLabels | indent 4 }}
{{- end }}
spec:
+{{- if ne .Values.service.type "NodePort" }}
clusterIP: "{{ .Values.service.clusterIP }}"
+{{- end }}
{{- if .Values.service.externalIPs }}
externalIPs:
{{ toYaml .Values.service.externalIPs | indent 4 }}
@@ -37,14 +40,17 @@ spec:
- name: http
protocol: TCP
port: {{ .Values.rabbitmqManagerPort }}
+ nodePort: {{ .Values.service.managerNodePort }}
targetPort: http
- name: amqp
protocol: TCP
port: {{ .Values.rabbitmqNodePort }}
+ nodePort: {{ .Values.service.amqpNodePort }}
targetPort: amqp
- name: epmd
protocol: TCP
port: {{ .Values.rabbitmqEpmdPort }}
+ nodePort: {{ .Values.service.epmdNodePort }}
targetPort: epmd
{{- if .Values.rabbitmqSTOMPPlugin.enabled }}
- name: stomp-tcp
diff --git a/stable/rabbitmq-ha/templates/serviceaccount.yaml b/stable/rabbitmq-ha/templates/serviceaccount.yaml
index f0bbefe508ec..04a438f6544e 100644
--- a/stable/rabbitmq-ha/templates/serviceaccount.yaml
+++ b/stable/rabbitmq-ha/templates/serviceaccount.yaml
@@ -4,11 +4,12 @@ kind: ServiceAccount
metadata:
labels:
app: {{ template "rabbitmq-ha.name" . }}
- chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
+ chart: {{ template "rabbitmq-ha.chart" . }}
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
{{- if .Values.extraLabels }}
{{ toYaml .Values.extraLabels | indent 4 }}
{{- end }}
name: {{ template "rabbitmq-ha.serviceAccountName" . }}
-{{- end }}
+ namespace: {{ .Release.Namespace }}
+{{- end }}
\ No newline at end of file
diff --git a/stable/rabbitmq-ha/templates/statefulset.yaml b/stable/rabbitmq-ha/templates/statefulset.yaml
index 3bce97a6c778..c3943c19dc99 100644
--- a/stable/rabbitmq-ha/templates/statefulset.yaml
+++ b/stable/rabbitmq-ha/templates/statefulset.yaml
@@ -2,9 +2,10 @@ apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: {{ template "rabbitmq-ha.fullname" . }}
+ namespace: {{ .Release.Namespace }}
labels:
app: {{ template "rabbitmq-ha.name" . }}
- chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
+ chart: {{ template "rabbitmq-ha.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
{{- if .Values.extraLabels }}
@@ -39,6 +40,8 @@ spec:
{{- end }}
{{- end }}
terminationGracePeriodSeconds: {{ .Values.terminationGracePeriodSeconds }}
+ securityContext:
+{{ toYaml .Values.securityContext | indent 10 }}
serviceAccountName: {{ template "rabbitmq-ha.serviceAccountName" . }}
initContainers:
- name: copy-rabbitmq-config
@@ -99,14 +102,12 @@ spec:
protocol: TCP
containerPort: 5671
{{- end }}
- {{- $managementCredentials := printf "%s:%s" .Values.managementUsername .Values.managementPassword | b64enc }}
- {{- $managementHeader := printf "Authorization: Basic %s" $managementCredentials }}
livenessProbe:
exec:
command:
- /bin/sh
- -c
- - 'wget -O - -q --header "{{ $managementHeader }}" http://localhost:15672/api/healthchecks/node | grep -qF "{\"status\":\"ok\"}"'
+ - 'wget -O - -q --header "Authorization: Basic `echo -n \"$RABBIT_MANAGEMENT_USER:$RABBIT_MANAGEMENT_PASSWORD\" | base64`" http://localhost:15672/api/healthchecks/node | grep -qF "{\"status\":\"ok\"}"'
initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.livenessProbe.periodSeconds }}
timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }}
@@ -116,7 +117,7 @@ spec:
command:
- /bin/sh
- -c
- - 'wget -O - -q --header "{{ $managementHeader }}" http://localhost:15672/api/healthchecks/node | grep -qF "{\"status\":\"ok\"}"'
+ - 'wget -O - -q --header "Authorization: Basic `echo -n \"$RABBIT_MANAGEMENT_USER:$RABBIT_MANAGEMENT_PASSWORD\" | base64`" http://localhost:15672/api/healthchecks/node | grep -qF "{\"status\":\"ok\"}"'
initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }}
periodSeconds: {{ .Values.readinessProbe.periodSeconds }}
@@ -140,6 +141,16 @@ spec:
secretKeyRef:
name: {{ template "rabbitmq-ha.secretName" . }}
key: rabbitmq-erlang-cookie
+ - name: RABBIT_MANAGEMENT_USER
+ valueFrom:
+ secretKeyRef:
+ name: {{ template "rabbitmq-ha.secretName" . }}
+ key: rabbitmq-management-username
+ - name: RABBIT_MANAGEMENT_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ name: {{ template "rabbitmq-ha.secretName" . }}
+ key: rabbitmq-management-password
{{- if .Values.rabbitmqHipeCompile }}
- name: RABBITMQ_HIPE_COMPILE
value: {{ .Values.rabbitmqHipeCompile | quote }}
@@ -200,12 +211,15 @@ spec:
{{- if .Values.tolerations }}
tolerations:
{{ toYaml .Values.tolerations | indent 8 }}
+ {{- end }}
+ {{- if .Values.schedulerName }}
+ schedulerName: "{{ .Values.schedulerName }}"
{{- end }}
{{- if eq .Values.podAntiAffinity "hard" }}
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- - topologyKey: "kubernetes.io/hostname"
+ - topologyKey: "{{ .Values.podAntiAffinityTopologyKey }}"
labelSelector:
matchLabels:
app: {{ template "rabbitmq-ha.name" . }}
@@ -216,7 +230,7 @@ spec:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
podAffinityTerm:
- topologyKey: kubernetes.io/hostname
+ topologyKey: "{{ .Values.podAntiAffinityTopologyKey }}"
labelSelector:
matchLabels:
app: {{ template "rabbitmq-ha.name" . }}
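
The probe changes above build the HTTP Basic auth header at runtime from the injected `RABBIT_MANAGEMENT_*` environment variables instead of baking credentials into the pod spec. In isolation, and with hypothetical credentials, the encoding step the probe performs amounts to:

```shell
# Hypothetical credentials; in the chart these are injected from the Secret via env vars.
RABBIT_MANAGEMENT_USER="management"
RABBIT_MANAGEMENT_PASSWORD="guest"

# Same encoding the probe command performs before calling the management API.
header="Authorization: Basic $(echo -n "$RABBIT_MANAGEMENT_USER:$RABBIT_MANAGEMENT_PASSWORD" | base64)"
echo "$header"  # Authorization: Basic bWFuYWdlbWVudDpndWVzdA==
```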
diff --git a/stable/rabbitmq-ha/values.yaml b/stable/rabbitmq-ha/values.yaml
index f240506bf5ad..f17e5a00b263 100644
--- a/stable/rabbitmq-ha/values.yaml
+++ b/stable/rabbitmq-ha/values.yaml
@@ -6,7 +6,7 @@ rabbitmqUsername: guest
## RabbitMQ Management user used for health checks
managementUsername: management
-managementPassword: E9R3fjZm4ejFkVFE
+# managementPassword:
## Place any additional key/value configuration to add to rabbitmq.conf
## Ref: https://www.rabbitmq.com/configure.html#config-items
@@ -268,7 +268,7 @@ replicaCount: 3
image:
repository: rabbitmq
- tag: 3.7-alpine
+ tag: 3.7.12-alpine
pullPolicy: IfNotPresent
## Optionally specify an array of imagePullSecrets.
## Secrets must be manually created in the namespace.
@@ -298,6 +298,13 @@ service:
loadBalancerSourceRanges: []
type: ClusterIP
+ ## Customize nodePort number when the service type is NodePort
+ ### Ref: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
+ ###
+ epmdNodePort: null
+ amqpNodePort: null
+ managerNodePort: null
+
podManagementPolicy: OrderedReady
## Statefulsets rolling update update strategy
@@ -320,20 +327,25 @@ updateStrategy: OnDelete
##
resources: {}
# limits:
- # cpu: 100mm
+ # cpu: 100m
# memory: 1Gi
# requests:
- # cpu: 100mm
+ # cpu: 100m
# memory: 1Gi
initContainer:
resources: {}
# limits:
- # cpu: 100mm
+ # cpu: 100m
# memory: 128Mi
# requests:
- # cpu: 100mm
+ # cpu: 100m
# memory: 128Mi
+## Use an alternate scheduler, e.g. "stork".
+## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
+##
+# schedulerName:
+
## Data Persistency
persistentVolume:
enabled: false
@@ -366,6 +378,7 @@ podAnnotations: {}
## Pod affinity
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
podAntiAffinity: soft
+podAntiAffinityTopologyKey: "kubernetes.io/hostname"
## Create default configMap
##
@@ -423,9 +436,18 @@ readinessProbe:
timeoutSeconds: 3
periodSeconds: 5
-# Specifies an existing secret to be used for RMQ password and Erlang Cookie
+# Specifies an existing secret to be used for RMQ password, management user password and Erlang Cookie
existingSecret: ""
+
+## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
+##
+securityContext:
+ fsGroup: 101
+ runAsGroup: 101
+ runAsNonRoot: true
+ runAsUser: 100
+
prometheus:
## Configures Prometheus Exporter to expose and scrape stats.
exporter:
@@ -481,3 +503,8 @@ prometheus:
## Kubernetes Cluster Domain
clusterDomain: cluster.local
+
+## Pod Disruption Budget
+podDisruptionBudget: {}
+ # maxUnavailable: 1
+ # minAvailable: 1
diff --git a/stable/rabbitmq/Chart.yaml b/stable/rabbitmq/Chart.yaml
index 096132f2113d..b30de0138305 100644
--- a/stable/rabbitmq/Chart.yaml
+++ b/stable/rabbitmq/Chart.yaml
@@ -1,6 +1,7 @@
+apiVersion: v1
name: rabbitmq
-version: 4.1.0
-appVersion: 3.7.10
+version: 5.8.0
+appVersion: 3.7.15
description: Open source message broker software that implements the Advanced Message Queuing Protocol (AMQP)
keywords:
- rabbitmq
diff --git a/stable/rabbitmq/README.md b/stable/rabbitmq/README.md
index 9b0732c9cd6f..70861b7dfddf 100644
--- a/stable/rabbitmq/README.md
+++ b/stable/rabbitmq/README.md
@@ -12,7 +12,7 @@ $ helm install stable/rabbitmq
This chart bootstraps a [RabbitMQ](https://github.com/bitnami/bitnami-docker-rabbitmq) deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
-Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters.
+Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters. This chart has been tested to work with NGINX Ingress, cert-manager, fluentd and Prometheus on top of the [BKPR](https://kubeprod.io/).
## Prerequisites
@@ -48,6 +48,7 @@ The following table lists the configurable parameters of the RabbitMQ chart and
| Parameter | Description | Default |
| ------------------------------------ | ------------------------------------------------ | ------------------------------------------------------- |
| `global.imageRegistry` | Global Docker image registry | `nil` |
+| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) |
| `image.registry` | Rabbitmq Image registry | `docker.io` |
| `image.repository` | Rabbitmq Image name | `bitnami/rabbitmq` |
| `image.tag` | Rabbitmq Image tag | `{VERSION}` |
@@ -55,31 +56,44 @@ The following table lists the configurable parameters of the RabbitMQ chart and
| `image.pullSecrets` | Specify docker-registry secret names as an array | `nil` |
| `image.debug` | Specify if debug values should be set | `false` |
| `rbacEnabled` | Specify if rbac is enabled in your cluster | `true` |
+| `podManagementPolicy` | Pod management policy | `OrderedReady` |
| `rabbitmq.username` | RabbitMQ application username | `user` |
| `rabbitmq.password` | RabbitMQ application password | _random 10 character long alphanumeric string_ |
+| `rabbitmq.existingPasswordSecret` | Existing secret with RabbitMQ credentials | `nil` |
| `rabbitmq.erlangCookie` | Erlang cookie | _random 32 character long alphanumeric string_ |
-| `rabbitmq.plugins` | configuration file for plugins to enable | `[rabbitmq_management,rabbitmq_peer_discovery_k8s].` |
+| `rabbitmq.existingErlangSecret` | Existing secret with RabbitMQ Erlang cookie | `nil` |
+| `rabbitmq.plugins` | List of plugins to enable | `rabbitmq_management rabbitmq_peer_discovery_k8s` |
+| `rabbitmq.extraPlugins`               | Extra plugins to enable                           | `nil`                                                     |
| `rabbitmq.clustering.address_type` | Switch clustering mode | `ip` or `hostname` |
| `rabbitmq.clustering.k8s_domain` | Customize internal k8s cluster domain | `cluster.local` |
+| `rabbitmq.logs` | Value for the RABBITMQ_LOGS environment variable | `-` |
+| `rabbitmq.setUlimitNofiles` | Specify if max file descriptor limit should be set | `true` |
| `rabbitmq.ulimitNofiles` | Max File Descriptor limit | `65536` |
+| `rabbitmq.maxAvailableSchedulers` | RabbitMQ maximum available scheduler threads | `2` |
+| `rabbitmq.onlineSchedulers` | RabbitMQ online scheduler threads | `1` |
| `rabbitmq.configuration` | Required cluster configuration | See values.yaml |
| `rabbitmq.extraConfiguration` | Extra configuration to add to rabbitmq.conf | See values.yaml |
| `service.type` | Kubernetes Service type | `ClusterIP` |
-| `service.amqpPort` | Amqp port | `5672` |
+| `service.port` | Amqp port | `5672` |
| `service.distPort` | Erlang distribution server port | `25672` |
| `service.nodePort` | Node port override, if serviceType NodePort | _random available between 30000-32767_ |
| `service.managerPort` | RabbitMQ Manager port | `15672` |
-| `persistence.enabled` | Use a PVC to persist data | `false` |
+| `persistence.enabled` | Use a PVC to persist data | `true` |
+| `service.annotations`                 | Service annotations as an array                   | []                                                        |
| `persistence.storageClass` | Storage class of backing PVC | `nil` (uses alpha storage class annotation) |
+| `persistence.existingClaim` | RabbitMQ data Persistent Volume existing claim name, evaluated as a template | "" |
| `persistence.accessMode` | Use volume as ReadOnly or ReadWrite | `ReadWriteOnce` |
| `persistence.size` | Size of data volume | `8Gi` |
+| `persistence.path` | Mount path of the data volume | `/opt/bitnami/rabbitmq/var/lib/rabbitmq` |
| `securityContext.enabled` | Enable security context | `true` |
| `securityContext.fsGroup` | Group ID for the container | `1001` |
| `securityContext.runAsUser` | User ID for the container | `1001` |
| `resources` | resource needs and limits to apply to the pod | {} |
+| `priorityClassName` | Pod priority class name | `` |
| `nodeSelector` | Node labels for pod assignment | {} |
| `affinity` | Affinity settings for pod assignment | {} |
| `tolerations` | Toleration labels for pod assignment | [] |
+| `updateStrategy` | Statefulset update strategy policy | `RollingUpdate` |
| `ingress.enabled` | Enable ingress resource for Management console | `false` |
| `ingress.hostName` | Hostname to your RabbitMQ installation | `nil` |
| `ingress.path` | Path within the url structure | `/` |
@@ -103,8 +117,17 @@ The following table lists the configurable parameters of the RabbitMQ chart and
| `metrics.image.repository` | Exporter image name | `kbudde/rabbitmq-exporter` |
| `metrics.image.tag` | Exporter image tag | `v0.29.0` |
| `metrics.image.pullPolicy` | Exporter image pull policy | `IfNotPresent` |
+| `metrics.env` | Exporter [configuration environment variables](https://github.com/kbudde/rabbitmq_exporter#configuration) | `{}` |
| `metrics.resources` | Exporter resource requests/limit | `nil` |
+| `metrics.capabilities` | Exporter: Comma-separated list of extended [scraping capabilities supported by the target RabbitMQ server](https://github.com/kbudde/rabbitmq_exporter#extended-rabbitmq-capabilities) | `bert,no_sort` |
| `podLabels` | Additional labels for the statefulset pod(s). | {} |
+| `volumePermissions.enabled` | Enable init container that changes volume permissions in the data directory (for cases where the default k8s `runAsUser` and `fsUser` values do not work) | `false` |
+| `volumePermissions.image.registry` | Init container volume-permissions image registry | `docker.io` |
+| `volumePermissions.image.repository` | Init container volume-permissions image name | `bitnami/minideb` |
+| `volumePermissions.image.tag` | Init container volume-permissions image tag | `latest` |
+| `volumePermissions.image.pullPolicy` | Init container volume-permissions image pull policy | `IfNotPresent` |
+| `volumePermissions.resources` | Init container resource requests/limit | `nil` |
+| `forceBoot.enabled`                   | Executes `rabbitmqctl force_boot` to force boot a cluster that was shut down unexpectedly in an unknown order. Use it only if you prefer availability over integrity | `false` |
The above parameters map to the env variables defined in [bitnami/rabbitmq](http://github.com/bitnami/bitnami-docker-rabbitmq). For more information please refer to the [bitnami/rabbitmq](http://github.com/bitnami/bitnami-docker-rabbitmq) image documentation.
@@ -126,6 +149,32 @@ $ helm install --name my-release -f values.yaml stable/rabbitmq
> **Tip**: You can use the default [values.yaml](values.yaml)
+### Load Definitions
+It is possible to [load a RabbitMQ definitions file to configure RabbitMQ](http://www.rabbitmq.com/management.html#load-definitions). Because definitions may contain RabbitMQ credentials, [store the JSON as a Kubernetes secret](https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod). Within the secret's data, choose a key name that corresponds with the desired load definitions filename (e.g. `load_definition.json`) and use the JSON object as the value. For example:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ name: rabbitmq-load-definition
+type: Opaque
+stringData:
+ load_definition.json: |-
+ {
+ "vhosts": [
+ {
+ "name": "/"
+ }
+ ]
+ }
+```
+
+Then, specify the `management.load_definitions` property in `extraConfiguration`, pointing to the load definition file path within the container (e.g. `/app/load_definition.json`), and set `loadDefinition.enable` to `true`.
+
+Any load definitions specified will be available in the container at `/app`.
+
+> Loading a definition will take precedence over any configuration done through [Helm values](#configuration).
+
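+A minimal sketch of the values that wire the secret above into the chart (the `loadDefinition.secretName` key is an assumption for illustration; check the chart's `values.yaml` for the exact key name):
+
+```yaml
+rabbitmq:
+  loadDefinition:
+    enable: true
+    # assumed key name, pointing at the secret created above
+    secretName: rabbitmq-load-definition
+  extraConfiguration: |
+    management.load_definitions = /app/load_definition.json
+```
+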
## Production configuration
A standard configuration is provided by default that will run on most development environments. To operate this chart in a production environment, we recommend you use the alternative file values-production.yaml provided in this repository.
@@ -152,10 +201,17 @@ $ helm install --set persistence.existingClaim=PVC_NAME rabbitmq
## Upgrading
+### To 5.0.0
+
+This major release changes the clustering method from `ip` to `hostname`.
+This change is required to fix persistence: the data directory now depends on the pod hostname, which is stable, rather than the pod IP, which may change.
+
+> IMPORTANT: if you upgrade from a previous version you will lose your data.
+
### To 3.0.0
Backwards compatibility is not guaranteed unless you modify the labels used on the chart's deployments.
-Use the workaround below to upgrade from versions previous to 3.0.0. The following example assumes that the release name is opencart:
+Use the workaround below to upgrade from versions previous to 3.0.0. The following example assumes that the release name is rabbitmq:
```console
$ kubectl delete statefulset rabbitmq --cascade=false
diff --git a/stable/rabbitmq/templates/_helpers.tpl b/stable/rabbitmq/templates/_helpers.tpl
index b6e5b889d7b0..353b649d477b 100644
--- a/stable/rabbitmq/templates/_helpers.tpl
+++ b/stable/rabbitmq/templates/_helpers.tpl
@@ -31,6 +31,19 @@ Create chart name and version as used by the chart label.
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
+{{/*
+Return the proper RabbitMQ plugin list
+*/}}
+{{- define "rabbitmq.plugins" -}}
+{{- $plugins := .Values.rabbitmq.plugins | replace " " ", " -}}
+{{- if .Values.rabbitmq.extraPlugins -}}
+{{- $extraPlugins := .Values.rabbitmq.extraPlugins | replace " " ", " -}}
+{{- printf "[%s, %s]." $plugins $extraPlugins | indent 4 -}}
+{{- else -}}
+{{- printf "[%s]." $plugins | indent 4 -}}
+{{- end -}}
+{{- end -}}
+
{{/*
Return the proper RabbitMQ image name
*/}}
@@ -57,9 +70,108 @@ Also, we can't use a single if because lazy evaluation is not an option
{{/*
Return the proper metrics image name
*/}}
-{{- define "metrics.image" -}}
-{{- $registryName := .Values.metrics.image.registry -}}
+{{- define "rabbitmq.metrics.image" -}}
+{{- $registryName := .Values.metrics.image.registry -}}
{{- $repositoryName := .Values.metrics.image.repository -}}
{{- $tag := .Values.metrics.image.tag | toString -}}
-{{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+{{/*
+Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
+but Helm 2.9 and 2.10 do not support it, so we need this if-else logic.
+Also, we cannot use a single if because lazy evaluation is not an option.
+*/}}
+{{- if .Values.global }}
+ {{- if .Values.global.imageRegistry }}
+ {{- printf "%s/%s:%s" .Values.global.imageRegistry $repositoryName $tag -}}
+ {{- else -}}
+ {{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+ {{- end -}}
+{{- else -}}
+ {{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Get the password secret.
+*/}}
+{{- define "rabbitmq.secretPasswordName" -}}
+ {{- if .Values.rabbitmq.existingPasswordSecret -}}
+ {{- printf "%s" .Values.rabbitmq.existingPasswordSecret -}}
+ {{- else -}}
+ {{- printf "%s" (include "rabbitmq.fullname" .) -}}
+ {{- end -}}
+{{- end -}}
+
+{{/*
+Get the erlang secret.
+*/}}
+{{- define "rabbitmq.secretErlangName" -}}
+ {{- if .Values.rabbitmq.existingErlangSecret -}}
+ {{- printf "%s" .Values.rabbitmq.existingErlangSecret -}}
+ {{- else -}}
+ {{- printf "%s" (include "rabbitmq.fullname" .) -}}
+ {{- end -}}
+{{- end -}}
+
+{{/*
+Return the proper Docker Image Registry Secret Names
+*/}}
+{{- define "rabbitmq.imagePullSecrets" -}}
+{{/*
+Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
+but Helm 2.9 and 2.10 do not support it, so we need this if-else logic.
+Also, we cannot use a single if because lazy evaluation is not an option.
+*/}}
+{{- if .Values.global }}
+{{- if .Values.global.imagePullSecrets }}
+imagePullSecrets:
+{{- range .Values.global.imagePullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- else if or .Values.image.pullSecrets .Values.metrics.image.pullSecrets .Values.volumePermissions.image.pullSecrets }}
+imagePullSecrets:
+{{- range .Values.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- range .Values.metrics.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- range .Values.volumePermissions.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- end -}}
+{{- else if or .Values.image.pullSecrets .Values.metrics.image.pullSecrets .Values.volumePermissions.image.pullSecrets }}
+imagePullSecrets:
+{{- range .Values.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- range .Values.metrics.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- range .Values.volumePermissions.image.pullSecrets }}
+ - name: {{ . }}
+{{- end }}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Return the proper image name (for the init container volume-permissions image)
+*/}}
+{{- define "rabbitmq.volumePermissions.image" -}}
+{{- $registryName := .Values.volumePermissions.image.registry -}}
+{{- $repositoryName := .Values.volumePermissions.image.repository -}}
+{{- $tag := .Values.volumePermissions.image.tag | toString -}}
+{{/*
+Helm 2.11 supports the assignment of a value to a variable defined in a different scope,
+but Helm 2.9 and 2.10 do not support it, so we need this if-else logic.
+Also, we cannot use a single if because lazy evaluation is not an option.
+*/}}
+{{- if .Values.global }}
+ {{- if .Values.global.imageRegistry }}
+ {{- printf "%s/%s:%s" .Values.global.imageRegistry $repositoryName $tag -}}
+ {{- else -}}
+ {{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+ {{- end -}}
+{{- else -}}
+ {{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
+{{- end -}}
{{- end -}}
diff --git a/stable/rabbitmq/templates/configuration.yaml b/stable/rabbitmq/templates/configuration.yaml
index 24ded9399eb6..fd2c61878ec4 100644
--- a/stable/rabbitmq/templates/configuration.yaml
+++ b/stable/rabbitmq/templates/configuration.yaml
@@ -9,11 +9,10 @@ metadata:
heritage: "{{ .Release.Service }}"
data:
enabled_plugins: |-
-{{ .Values.rabbitmq.plugins | indent 4 }}
+{{ template "rabbitmq.plugins" . }}
rabbitmq.conf: |-
##username and password
default_user={{.Values.rabbitmq.username}}
default_pass=CHANGEME
{{ .Values.rabbitmq.configuration | indent 4 }}
{{ .Values.rabbitmq.extraConfiguration | indent 4 }}
-
diff --git a/stable/rabbitmq/templates/secrets.yaml b/stable/rabbitmq/templates/secrets.yaml
index b5362f142f34..19c0296cd364 100644
--- a/stable/rabbitmq/templates/secrets.yaml
+++ b/stable/rabbitmq/templates/secrets.yaml
@@ -1,3 +1,4 @@
+{{ if or (not .Values.rabbitmq.existingErlangSecret) (not .Values.rabbitmq.existingPasswordSecret) }}
apiVersion: v1
kind: Secret
metadata:
@@ -9,13 +10,14 @@ metadata:
heritage: "{{ .Release.Service }}"
type: Opaque
data:
- {{ if .Values.rabbitmq.password }}
+ {{ if not .Values.rabbitmq.existingPasswordSecret }}{{ if .Values.rabbitmq.password }}
rabbitmq-password: {{ .Values.rabbitmq.password | b64enc | quote }}
{{ else }}
rabbitmq-password: {{ randAlphaNum 10 | b64enc | quote }}
- {{ end }}
- {{ if .Values.rabbitmq.erlangCookie }}
+ {{ end }}{{ end }}
+ {{ if not .Values.rabbitmq.existingErlangSecret }}{{ if .Values.rabbitmq.erlangCookie }}
rabbitmq-erlang-cookie: {{ .Values.rabbitmq.erlangCookie | b64enc | quote }}
{{ else }}
rabbitmq-erlang-cookie: {{ randAlphaNum 32 | b64enc | quote }}
- {{ end }}
+ {{ end }}{{ end }}
+{{ end }}
diff --git a/stable/rabbitmq/templates/statefulset.yaml b/stable/rabbitmq/templates/statefulset.yaml
index 2e304a111705..88187387863b 100644
--- a/stable/rabbitmq/templates/statefulset.yaml
+++ b/stable/rabbitmq/templates/statefulset.yaml
@@ -9,7 +9,13 @@ metadata:
heritage: "{{ .Release.Service }}"
spec:
serviceName: {{ template "rabbitmq.fullname" . }}-headless
+ podManagementPolicy: {{ .Values.podManagementPolicy }}
replicas: {{ .Values.replicas }}
+ updateStrategy:
+ type: {{ .Values.updateStrategy.type }}
+ {{- if (eq "Recreate" .Values.updateStrategy.type) }}
+ rollingUpdate: null
+ {{- end }}
selector:
matchLabels:
app: {{ template "rabbitmq.name" . }}
@@ -28,18 +34,16 @@ spec:
{{ toYaml .Values.podAnnotations | indent 8 }}
{{- end }}
spec:
- {{- if .Values.image.pullSecrets }}
- imagePullSecrets:
- {{- range .Values.image.pullSecrets }}
- - name: {{ . }}
- {{- end}}
- {{- end }}
+{{- include "rabbitmq.imagePullSecrets" . | indent 6 }}
{{- if .Values.rbacEnabled}}
serviceAccountName: {{ template "rabbitmq.fullname" . }}
{{- end }}
{{- if .Values.affinity }}
affinity:
{{ toYaml .Values.affinity | indent 8 }}
+ {{- end }}
+ {{- if .Values.priorityClassName }}
+ priorityClassName: {{ .Values.priorityClassName }}
{{- end }}
{{- if .Values.nodeSelector }}
nodeSelector:
@@ -50,6 +54,20 @@ spec:
{{ toYaml .Values.tolerations | indent 8 }}
{{- end }}
terminationGracePeriodSeconds: 10
+ {{- if and .Values.volumePermissions.enabled .Values.persistence.enabled .Values.securityContext.enabled }}
+ initContainers:
+ - name: volume-permissions
+ image: "{{ template "rabbitmq.volumePermissions.image" . }}"
+ imagePullPolicy: {{ default "" .Values.volumePermissions.image.pullPolicy | quote }}
+ command: ["/bin/chown", "-R", "{{ .Values.securityContext.runAsUser }}:{{ .Values.securityContext.fsGroup }}", "{{ .Values.persistence.path }}"]
+ securityContext:
+ runAsUser: 0
+ resources:
+{{ toYaml .Values.volumePermissions.resources | indent 10 }}
+ volumeMounts:
+ - name: data
+ mountPath: "{{ .Values.persistence.path }}"
+ {{- end }}
containers:
- name: rabbitmq
image: {{ template "rabbitmq.image" . }}
@@ -68,12 +86,25 @@ spec:
#copy the mounted configuration to both places
cp /opt/bitnami/rabbitmq/conf/* /opt/bitnami/rabbitmq/etc/rabbitmq
# Apply resources limits
+ {{- if .Values.rabbitmq.setUlimitNofiles }}
ulimit -n "${RABBITMQ_ULIMIT_NOFILES}"
+ {{- end }}
#replace the default password that is generated
sed -i "s/CHANGEME/$RABBITMQ_PASSWORD/g" /opt/bitnami/rabbitmq/etc/rabbitmq/rabbitmq.conf
- # Move logs to stdout
- ln -sF /dev/stdout /opt/bitnami/rabbitmq/var/log/rabbitmq/rabbit@${MY_POD_IP}.log
- ln -sF /dev/stdout /opt/bitnami/rabbitmq/var/log/rabbitmq/rabbit@${MY_POD_IP}_upgrade.log
+ #api check for probes
+ cat > /opt/bitnami/rabbitmq/sbin/rabbitmq-api-check <
@@ -136,9 +183,17 @@ persistence:
# storageClass: "-"
accessMode: ReadWriteOnce
+ ## Existing PersistentVolumeClaims
+ ## The value is evaluated as a template
+ ## So, for example, the name can depend on .Release or .Chart
+ # existingClaim: ""
+
# If you change this value, you might have to adjust `rabbitmq.diskFreeLimit` as well.
size: 8Gi
+ # persistence directory, maps to the rabbitmq data directory
+ path: /opt/bitnami/rabbitmq/var/lib/rabbitmq
+
## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
@@ -147,6 +202,15 @@ resources: {}
## Replica count, set to 1 to provide a default available cluster
replicas: 1
+## Pod priority
+## https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
+# priorityClassName: ""
+
+## updateStrategy for RabbitMQ statefulset
+## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
+updateStrategy:
+ type: RollingUpdate
+
## Node labels and tolerations for pod assignment
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature
@@ -213,7 +277,46 @@ metrics:
repository: kbudde/rabbitmq-exporter
tag: v0.29.0
pullPolicy: IfNotPresent
+ ## Optionally specify an array of imagePullSecrets.
+ ## Secrets must be manually created in the namespace.
+ ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
+ ##
+ # pullSecrets:
+ # - myRegistryKeySecretName
+
+ ## environment variables to configure rabbitmq_exporter
+ ## ref: https://github.com/kbudde/rabbitmq_exporter#configuration
+ env: {}
+ ## Comma-separated list of extended scraping capabilities supported by the target RabbitMQ server
+ ## ref: https://github.com/kbudde/rabbitmq_exporter#extended-rabbitmq-capabilities
+ capabilities: "bert,no_sort"
resources: {}
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "9090"
+
+##
+## Init containers parameters:
+## volumePermissions: Change the owner of the persistent volume mountpoint to runAsUser:fsGroup
+##
+volumePermissions:
+ enabled: false
+ image:
+ registry: docker.io
+ repository: bitnami/minideb
+ tag: latest
+ pullPolicy: IfNotPresent
+ ## Optionally specify an array of imagePullSecrets.
+ ## Secrets must be manually created in the namespace.
+ ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
+ ##
+ # pullSecrets:
+ # - myRegistryKeySecretName
+ resources: {}
+
+## forceBoot: executes 'rabbitmqctl force_boot' to force boot a cluster that was shut down
+## unexpectedly in an unknown order.
+## ref: https://www.rabbitmq.com/rabbitmqctl.8.html#force_boot
+##
+forceBoot:
+ enabled: false
diff --git a/stable/redis-ha/Chart.yaml b/stable/redis-ha/Chart.yaml
index be92ddb85246..53c3c16b0d69 100644
--- a/stable/redis-ha/Chart.yaml
+++ b/stable/redis-ha/Chart.yaml
@@ -1,3 +1,4 @@
+apiVersion: v1
name: redis-ha
home: http://redis.io/
engine: gotpl
@@ -5,7 +6,7 @@ keywords:
- redis
- keyvalue
- database
-version: 3.1.3
+version: 3.4.2
appVersion: 5.0.3
description: Highly available Kubernetes implementation of Redis
icon: https://upload.wikimedia.org/wikipedia/en/thumb/6/6b/Redis_Logo.svg/1200px-Redis_Logo.svg.png
@@ -18,3 +19,4 @@ details:
sources:
- https://redis.io/download
- https://github.com/scality/Zenko/tree/development/1.0/kubernetes/zenko/charts/redis-ha
+- https://github.com/oliver006/redis_exporter
diff --git a/stable/redis-ha/README.md b/stable/redis-ha/README.md
index 9a16ff619792..57e9994a01a7 100644
--- a/stable/redis-ha/README.md
+++ b/stable/redis-ha/README.md
@@ -9,12 +9,12 @@ $ helm install stable/redis-ha
```
By default this chart install 3 pods total:
- * one pod containing a redis master and sentinel containers
- * two pods each containing redis slave and sentinel containers.
+ * one pod containing a redis master and a sentinel container (an optional prometheus metrics exporter sidecar is available)
+ * two pods, each containing a redis slave and a sentinel container (optional prometheus metrics exporter sidecars available)
## Introduction
-This chart bootstraps a [Redis](https://redis.io) highly available master/slave statefulset in a [Kubernetes](http://kubernetes.io) cluster using the Helm package manager.
+This chart bootstraps a [Redis](https://redis.io) highly available master/slave statefulset in a [Kubernetes](http://kubernetes.io) cluster using the Helm package manager.
## Prerequisites
@@ -51,29 +51,47 @@ The command removes all the Kubernetes components associated with the chart and
The following table lists the configurable parameters of the Redis chart and their default values.
-| Parameter | Description | Default |
-| -------------------------------- | ----------------------------------------------------- | --------------------------------------------------------- |
-| `image` | Redis image | `redis` |
-| `tag` | Redis tag | `5.0.3-alpine` |
-| `replicas` | Number of redis master/slave pods | `3` |
-| `redis.port` | Port to access the redis service | `6379` |
-| `redis.masterGroupName` | Redis convention for naming the cluster group | `mymaster` |
-| `redis.config` | Any valid redis config options in this section will be applied to each server (see below) | see values.yaml |
-| `redis.customConfig` | Allows for custom redis.conf files to be applied. If this is used then `redis.config` is ignored | `` |
-| `redis.resources` | CPU/Memory for master/slave nodes resource requests/limits | `{}` |
-| `sentinel.port` | Port to access the sentinel service | `26379` |
-| `sentinel.quorum` | Minimum number of servers necessary to maintain quorum | `2` |
-| `sentinel.config` | Valid sentinel config options in this section will be applied as config options to each sentinel (see below) | see values.yaml |
-| `sentinel.customConfig` | Allows for custom sentinel.conf files to be applied. If this is used then `sentinel.config` is ignored | `` |
-| `sentinel.resources` | CPU/Memory for sentinel node resource requests/limits | `{}` |
-| `init.resources` | CPU/Memory for init Container node resource requests/limits | `{}`
-| `auth` | Enables or disables redis AUTH (Requires `redisPassword` to be set) | `false` |
-| `redisPassword` | A password that configures a `requirepass` and `masterauth` in the conf parameters (Requires `auth: enabled`) | `` |
-| `existingSecret` | An existing secret containing an `auth` key that configures `requirepass` and `masterauth` in the conf parameters (Requires `auth: enabled`, cannot be used in conjunction with `.Values.redisPassword`) | `` |
-| `nodeSelector` | Node labels for pod assignment | `{}` |
-| `tolerations` | Toleration labels for pod assignment | `[]` |
-| `podAntiAffinity.server` | Antiaffinity for pod assignment of servers, `hard` or `soft` | `Hard node and soft zone anti-affinity` |
-
+| Parameter | Description | Default |
+|:-------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------|
+| `image` | Redis image | `redis` |
+| `tag` | Redis tag | `5.0.3-alpine` |
+| `replicas` | Number of redis master/slave pods | `3` |
+| `serviceAccount.create` | Specifies whether a ServiceAccount should be created | `true` |
+| `serviceAccount.name` | The name of the ServiceAccount to create | Generated using the redis-ha.fullname template |
+| `rbac.create` | Create and use RBAC resources | `true` |
+| `redis.port` | Port to access the redis service | `6379` |
+| `redis.masterGroupName` | Redis convention for naming the cluster group | `mymaster` |
+| `redis.config` | Any valid redis config options in this section will be applied to each server (see below) | see values.yaml |
+| `redis.customConfig` | Allows for custom redis.conf files to be applied. If this is used then `redis.config` is ignored | `` |
+| `redis.resources` | CPU/Memory for master/slave nodes resource requests/limits | `{}` |
+| `sentinel.port` | Port to access the sentinel service | `26379` |
+| `sentinel.quorum` | Minimum number of servers necessary to maintain quorum | `2` |
+| `sentinel.config` | Valid sentinel config options in this section will be applied as config options to each sentinel (see below) | see values.yaml |
+| `sentinel.customConfig` | Allows for custom sentinel.conf files to be applied. If this is used then `sentinel.config` is ignored | `` |
+| `sentinel.resources` | CPU/Memory for sentinel node resource requests/limits | `{}` |
+| `init.resources` | CPU/Memory for init Container node resource requests/limits | `{}` |
+| `auth` | Enables or disables redis AUTH (Requires `redisPassword` to be set) | `false` |
+| `redisPassword` | A password that configures a `requirepass` and `masterauth` in the conf parameters (Requires `auth: enabled`) | `` |
+| `authKey` | The key holding the redis password in an existing secret. | `auth` |
+| `existingSecret` | An existing secret containing a key defined by `authKey` that configures `requirepass` and `masterauth` in the conf parameters (Requires `auth: enabled`, cannot be used in conjunction with `.Values.redisPassword`) | `` |
+| `nodeSelector` | Node labels for pod assignment | `{}` |
+| `tolerations` | Toleration labels for pod assignment | `[]` |
+| `podAntiAffinity.server` | Antiaffinity for pod assignment of servers, `hard` or `soft` | `Hard node and soft zone anti-affinity` |
+| `exporter.enabled` | If `true`, the prometheus exporter sidecar is enabled | `false` |
+| `exporter.image` | Exporter image | `oliver006/redis_exporter` |
+| `exporter.tag` | Exporter tag | `v0.31.0` |
+| `exporter.annotations` | Prometheus scrape annotations | `{prometheus.io/path: /metrics, prometheus.io/port: "9121", prometheus.io/scrape: "true"}` |
+| `exporter.extraArgs` | Additional args for the exporter | `{}` |
+| `podDisruptionBudget` | Pod Disruption Budget rules | `{}` |
+| `hostPath.path` | Use this path on the host for data storage | not set |
+| `hostPath.chown` | Run an init-container as root to set ownership on the hostPath | `true` |
+| `sysctlImage.enabled` | Enable an init container to modify Kernel settings | `false` |
+| `sysctlImage.command` | sysctlImage command to execute | `[]` |
+| `sysctlImage.registry` | sysctlImage Init container registry | `docker.io` |
+| `sysctlImage.repository` | sysctlImage Init container name | `bitnami/minideb` |
+| `sysctlImage.tag` | sysctlImage Init container tag | `latest` |
+| `sysctlImage.pullPolicy` | sysctlImage Init container pull policy | `Always` |
+| `sysctlImage.mountHostSys` | Mount the host `/sys` folder to `/host-sys` | `false` |
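+
+For example, a sketch of the values needed to enable the prometheus exporter sidecar described above (image, tag, and annotations taken from the defaults in the table):
+
+```yaml
+exporter:
+  enabled: true
+  image: oliver006/redis_exporter
+  tag: v0.31.0
+  annotations:
+    prometheus.io/scrape: "true"
+    prometheus.io/port: "9121"
+    prometheus.io/path: /metrics
+```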
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
@@ -84,7 +102,7 @@ $ helm install \
stable/redis-ha
```
-The above command sets the Redis server within `default` namespace.
+The above command deploys the Redis server in the `default` namespace.
Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,
@@ -115,3 +133,18 @@ Sentinel options supported must be in the the `sentinel