What problem are you trying to solve?
The linkerd-control-plane chart's templates/podmonitor.yaml ships a fixed set of relabelings that select linkerd-proxy pods and drop unrelated targets. There is no values key for injecting additional relabelings, so any operator who needs to add their own filter (e.g. dropping a specific pod-label combination that would otherwise be scraped twice when a non-Prometheus scraper also runs in the cluster) has to fork the chart.
Concretely, in our environment we run vmagent (VictoriaMetrics) alongside the in-cluster Prometheus that consumes the chart's PodMonitor. vmagent itself runs in the metrics namespace and matches the pod-label selectors, so its own linkerd-proxy gets scraped by both scrapers and we end up with duplicate timeseries. We want to add a single drop rule:
```yaml
- sourceLabels:
    - __meta_kubernetes_namespace
    - __meta_kubernetes_pod_label_app_kubernetes_io_name
  action: drop
  regex: ^metrics;vmagent$
```
right after the existing keep rule, but before the rest of the relabelings — there's no way to do that without forking.
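For placement, an illustrative sketch of the rendered endpoint; the keep rule shown here is a stand-in, not the chart's literal rule (check templates/podmonitor.yaml for the real list):

```yaml
spec:
  podMetricsEndpoints:
    - relabelings:
        # illustrative stand-in for the chart's existing keep rule
        - sourceLabels: [__meta_kubernetes_pod_container_name]
          action: keep
          regex: ^linkerd-proxy$
        # the new drop rule would slot in here...
        - sourceLabels:
            - __meta_kubernetes_namespace
            - __meta_kubernetes_pod_label_app_kubernetes_io_name
          action: drop
          regex: ^metrics;vmagent$
        # ...before the rest of the chart's relabelings
```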
How should the problem be solved?
Add a proxy.podMonitor.extraRelabelings value (and possibly also proxy.podMonitor.extraMetricRelabelings) that gets appended to the relabelings / metricRelabelings lists in the PodMonitor template. An empty default list preserves current behaviour.
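A minimal sketch of the shape, assuming the value names above (guard style and indentation would follow the chart's existing conventions):

```yaml
# values.yaml (proposed; empty defaults keep current behaviour)
proxy:
  podMonitor:
    extraRelabelings: []
    extraMetricRelabelings: []
```

```yaml
# templates/podmonitor.yaml (sketch; indent to match the chart)
      relabelings:
        # ...the chart's fixed relabelings...
        {{- with .Values.proxy.podMonitor.extraRelabelings }}
        {{- toYaml . | nindent 8 }}
        {{- end }}
```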
Related:
- proxy.podMonitor.extraLabels for Prometheus selector wiring — same family of "the PodMonitor needs to be configurable from values", different field.
- Making honorTimestamps configurable on the same PodMonitor, also same family.

A coordinated fix that exposes the whole PodMonitor-spec body as a map of overrides (similar to the priorityClassName / nodeSelector patterns already in the chart) would close all three with one PR. Happy to send the PR if a maintainer confirms the shape.
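For the broader override approach, the values shape might look roughly like this (specOverrides is a hypothetical name, pending maintainer confirmation):

```yaml
proxy:
  podMonitor:
    labels:                 # covers the extraLabels / selector-wiring request
      release: kube-prometheus-stack   # hypothetical example value
    specOverrides:          # deep-merged over the generated PodMonitor spec
      honorTimestamps: false
```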
Any alternatives you've considered?
- Forking the chart (what we currently do; meaningful maintenance burden across upgrades).
- FluxCD HelmRelease postRenderers (works but is invasive; depends on a Flux runtime). See the sketch below.
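For reference, the Flux workaround is roughly the following (a sketch against the helm.toolkit.fluxcd.io/v2 HelmRelease API; the PodMonitor name and the target index 1, i.e. right after the keep rule, are assumptions to verify against the rendered manifest):

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: linkerd-control-plane
spec:
  # ...chart, sourceRef, interval elided...
  postRenderers:
    - kustomize:
        patches:
          - target:
              kind: PodMonitor
              name: linkerd-proxy   # assumed name; check the rendered output
            patch: |
              - op: add
                path: /spec/podMetricsEndpoints/0/relabelings/1
                value:
                  sourceLabels:
                    - __meta_kubernetes_namespace
                    - __meta_kubernetes_pod_label_app_kubernetes_io_name
                  action: drop
                  regex: ^metrics;vmagent$
```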
Would you like to work on this feature?
Yes, if the API shape is confirmed.