---

copyright:

lastupdated: "2021-03-22"

keywords: kubernetes, iks, coredns, kubedns, dns

subcollection: containers

---
{:DomainName: data-hd-keyref="APPDomain"}
{:DomainName: data-hd-keyref="DomainName"}
{:android: data-hd-operatingsystem="android"}
{:api: .ph data-hd-interface='api'}
{:apikey: data-credential-placeholder='apikey'}
{:app_key: data-hd-keyref="app_key"}
{:app_name: data-hd-keyref="app_name"}
{:app_secret: data-hd-keyref="app_secret"}
{:app_url: data-hd-keyref="app_url"}
{:authenticated-content: .authenticated-content}
{:beta: .beta}
{:c#: data-hd-programlang="c#"}
{:cli: .ph data-hd-interface='cli'}
{:codeblock: .codeblock}
{:curl: .ph data-hd-programlang='curl'}
{:deprecated: .deprecated}
{:dotnet-standard: .ph data-hd-programlang='dotnet-standard'}
{:download: .download}
{:external: target="_blank" .external}
{:faq: data-hd-content-type='faq'}
{:fuzzybunny: .ph data-hd-programlang='fuzzybunny'}
{:generic: data-hd-operatingsystem="generic"}
{:generic: data-hd-programlang="generic"}
{:gif: data-image-type='gif'}
{:go: .ph data-hd-programlang='go'}
{:help: data-hd-content-type='help'}
{:hide-dashboard: .hide-dashboard}
{:hide-in-docs: .hide-in-docs}
{:important: .important}
{:ios: data-hd-operatingsystem="ios"}
{:java: .ph data-hd-programlang='java'}
{:java: data-hd-programlang="java"}
{:javascript: .ph data-hd-programlang='javascript'}
{:javascript: data-hd-programlang="javascript"}
{:new_window: target="_blank"}
{:note: .note}
{:objectc: data-hd-programlang="objectc"}
{:org_name: data-hd-keyref="org_name"}
{:php: data-hd-programlang="php"}
{:pre: .pre}
{:preview: .preview}
{:python: .ph data-hd-programlang='python'}
{:python: data-hd-programlang="python"}
{:route: data-hd-keyref="route"}
{:row-headers: .row-headers}
{:ruby: .ph data-hd-programlang='ruby'}
{:ruby: data-hd-programlang="ruby"}
{:runtime: architecture="runtime"}
{:runtimeIcon: .runtimeIcon}
{:runtimeIconList: .runtimeIconList}
{:runtimeLink: .runtimeLink}
{:runtimeTitle: .runtimeTitle}
{:screen: .screen}
{:script: data-hd-video='script'}
{:service: architecture="service"}
{:service_instance_name: data-hd-keyref="service_instance_name"}
{:service_name: data-hd-keyref="service_name"}
{:shortdesc: .shortdesc}
{:space_name: data-hd-keyref="space_name"}
{:step: data-tutorial-type='step'}
{:subsection: outputclass="subsection"}
{:support: data-reuse='support'}
{:swift: .ph data-hd-programlang='swift'}
{:swift: data-hd-programlang="swift"}
{:table: .aria-labeledby="caption"}
{:term: .term}
{:tip: .tip}
{:tooling-url: data-tooling-url-placeholder='tooling-url'}
{:troubleshoot: data-hd-content-type='troubleshoot'}
{:tsCauses: .tsCauses}
{:tsResolve: .tsResolve}
{:tsSymptoms: .tsSymptoms}
{:tutorial: data-hd-content-type='tutorial'}
{:ui: .ph data-hd-interface='ui'}
{:unity: .ph data-hd-programlang='unity'}
{:url: data-credential-placeholder='url'}
{:user_ID: data-hd-keyref="user_ID"}
{:vbnet: .ph data-hd-programlang='vb.net'}
{:video: .video}
# Configuring the cluster DNS provider
{: #cluster_dns}
Each service in your {{site.data.keyword.containerlong}} cluster is assigned a Domain Name System (DNS) name that the cluster DNS provider registers to resolve DNS requests. For more information about DNS for services and pods, see the Kubernetes documentation{: external}.
{: shortdesc}
The cluster DNS provider is CoreDNS{: external}, a general-purpose, authoritative DNS server that provides a backwards-compatible but extensible integration with Kubernetes. Because CoreDNS runs as a single executable in a single process, it has fewer dependencies and moving parts that can fail than other cluster DNS providers. The project is also written in Go, the same language as Kubernetes itself, which helps with memory safety. Finally, CoreDNS supports more flexible use cases than other cluster DNS providers because you can create custom DNS entries, such as the common setups in the CoreDNS docs{: external}.
## Autoscaling the cluster DNS provider
{: #dns_autoscale}
By default, CoreDNS includes a deployment that autoscales the CoreDNS pods in response to the number of worker nodes and cores in the cluster. You can fine-tune the autoscaler parameters by editing the CoreDNS autoscaling configmap. For example, if your apps heavily use the cluster DNS provider, you might need to increase the minimum number of CoreDNS pods to support them. For more information, see the Kubernetes documentation{: external}.
{: shortdesc}
Before you begin: Log in to your account. If applicable, target the appropriate resource group. Set the context for your cluster.
1. Verify that the CoreDNS autoscaler deployment is available. In your CLI output, verify that one deployment is **AVAILABLE**.
    ```sh
    kubectl get deployment -n kube-system coredns-autoscaler
    ```
    {: pre}

    Example output:
    ```
    NAME                 READY   UP-TO-DATE   AVAILABLE   AGE
    coredns-autoscaler   1/1     1            1           69d
    ```
    {: screen}
2. Edit the default settings for the CoreDNS autoscaler. Look for the `data.linear` field, which defaults to one CoreDNS pod per 16 worker nodes or 256 cores, with a minimum of two CoreDNS pods regardless of cluster size (`preventSinglePointFailure: true`). For more information, see the Kubernetes documentation{: external}.
    ```sh
    kubectl edit configmap -n kube-system coredns-autoscaler
    ```
    {: pre}

    Example output:
    ```yaml
    apiVersion: v1
    data:
      linear: '{"coresPerReplica":256,"nodesPerReplica":16,"preventSinglePointFailure":true}'
    kind: ConfigMap
    metadata:
    ...
    ```
    {: screen}
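For example, to guarantee at least three CoreDNS pods regardless of cluster size, you might add a `min` value to the linear parameters. The following is a minimal sketch, assuming the `min` key that the cluster-proportional-autoscaler's linear mode supports; verify the supported keys against your autoscaler version before you apply changes.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-autoscaler
  namespace: kube-system
data:
  # "min" (assumed supported) sets a floor on the number of CoreDNS replicas.
  # The autoscaler still adds pods as the node and core counts grow.
  linear: '{"coresPerReplica":256,"nodesPerReplica":16,"min":3,"preventSinglePointFailure":true}'
```
{: codeblock}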
## Customizing the cluster DNS provider
{: #dns_customize}
You can customize CoreDNS by editing the CoreDNS configmap. For example, you might want to configure stub domains and upstream DNS servers to resolve services that point to external hosts. Additionally, you can configure multiple Corefiles{: external} within the CoreDNS configmap. For more information, see the Kubernetes documentation{: external}.
{: shortdesc}
Before you begin: Log in to your account. If applicable, target the appropriate resource group. Set the context for your cluster.
1. Verify that the CoreDNS deployment is available. In your CLI output, verify that one deployment is **AVAILABLE**.
    ```sh
    kubectl get deployment -n kube-system coredns
    ```
    {: pre}

    Example output:
    ```
    NAME      READY   UP-TO-DATE   AVAILABLE   AGE
    coredns   3/3     3            3           69d
    ```
    {: screen}
2. Edit the default settings for the CoreDNS configmap. Use a Corefile in the `data` section of the configmap to customize stub domains and upstream DNS servers. For more information, see the Kubernetes documentation{: external}.

    The CoreDNS `proxy` plug-in is deprecated and replaced with the `forward` plug-in. If you update the CoreDNS configmap, make sure to replace all `proxy` instances with `forward`.
    {: note}

    ```sh
    kubectl edit configmap -n kube-system coredns
    ```
    {: pre}

    CoreDNS example output:
    ```yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: coredns
      namespace: kube-system
    data:
      Corefile: |
        import <MyCorefile>
        .:53 {
            errors
            health {
                lameduck 10s
            }
            ready
            kubernetes cluster.local in-addr.arpa ip6.arpa {
                pods insecure
                fallthrough in-addr.arpa ip6.arpa
                ttl 30
            }
            prometheus :9153
            forward . /etc/resolv.conf
            cache 30
            loop
            reload
            loadbalance
        }
      <MyCorefile>: |
        abc.com:53 {
            errors
            cache 30
            loop
            forward . 1.2.3.4
        }
    ```
    {: screen}
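    To check that a stub domain resolves as expected, you can run a short-lived test pod. The following is a sketch, assuming the `busybox:1.28` image is reachable from your cluster and that `host1.abc.com` is a hypothetical name in your stub domain:
    ```sh
    # Launch a throwaway pod and query a name in the stub domain.
    # Per the example Corefile, the query is forwarded to 1.2.3.4.
    kubectl run -i --tty --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup host1.abc.com
    ```
    {: pre}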
3. Optional: Add custom Corefiles to the CoreDNS configmap. In the following example, include the `import <MyCorefile>` line in the `data.Corefile` section, and fill out the `data.<MyCorefile>` section with your custom Corefile information. For more information, see the Corefile import documentation{: external}.

    The CoreDNS `proxy` plug-in is deprecated and replaced with the `forward` plug-in. If you update the CoreDNS configmap, make sure to replace all `proxy` instances with `forward`.
    {: note}

    ```sh
    kubectl edit configmap -n kube-system coredns
    ```
    {: pre}

    Custom Corefile example output:
    ```yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: coredns
      namespace: kube-system
    data:
      Corefile: |
        import <MyCorefile>
        .:53 {
            errors
            health
            kubernetes cluster.local in-addr.arpa ip6.arpa {
                pods insecure
                upstream 172.16.0.1
                fallthrough in-addr.arpa ip6.arpa
            }
            prometheus :9153
            forward . /etc/resolv.conf
            cache 30
            loop
            reload
            loadbalance
        }
      <MyCorefile>: |
        abc.com:53 {
            errors
            cache 30
            loop
            forward . 1.2.3.4
        }
    ```
    {: screen}
4. After a few minutes, the CoreDNS pods pick up the configmap changes.
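To confirm that the changes took effect, you can watch the CoreDNS pod logs for a reload message, because the `reload` plug-in picks up Corefile edits without a restart. A sketch, assuming the CoreDNS pods carry the upstream `k8s-app=kube-dns` label:
```sh
# Print recent log lines from the CoreDNS pods; look for a "Reloading complete" entry.
kubectl logs -n kube-system -l k8s-app=kube-dns --tail=20
```
{: pre}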
## Setting up NodeLocal DNS cache
{: #dns_cache}
Set up the NodeLocal DNS caching agent on select worker nodes for improved cluster DNS performance and availability in your {{site.data.keyword.containerlong_notm}} cluster. For more information, see the Kubernetes docs{: external}.
{: shortdesc}
By default, cluster DNS requests for pods that use a ClusterFirst DNS policy{: external} are sent to the cluster DNS service. If you enable NodeLocal DNS caching on a worker node, the cluster DNS requests for these pods that are on the worker node are sent instead to the local DNS cache, which listens on link-local IP address 169.254.20.10. The DNS cache also listens on the cluster IP of the kube-dns service in the kube-system namespace.
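For example, after you enable the cache on a worker node (see the following sections), a pod that is scheduled on that node can query the link-local address directly. A minimal sketch, assuming the `busybox:1.28` image:
```sh
# Resolve the Kubernetes API service against the NodeLocal DNS cache address.
kubectl run -i --tty --rm cache-test --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default.svc.cluster.local 169.254.20.10
```
{: pre}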
Do not add the DNS cache label when you already use zone-aware DNS in your cluster. {: important}
NodeLocal DNS cache is generally available in clusters that run Kubernetes 1.18 or later, but it is still disabled by default. In earlier versions, the feature is beta and available only for select Kubernetes versions, depending on your cluster infrastructure provider.
{: preview}
### Enabling NodeLocal DNS cache
{: #dns_enablecache}
Enable NodeLocal DNS cache for one or more worker nodes in your Kubernetes cluster.
{: shortdesc}
The following steps update DNS pods that run on particular worker nodes. You can also label the worker pool so that future nodes inherit the label. {: note}
Before you begin: Update any DNS egress network policies{: external} that are impacted by this feature, such as policies that rely on pod or namespace selectors for DNS egress.
```sh
kubectl get networkpolicy --all-namespaces -o yaml
```
{: pre}
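For example, a policy that allows DNS egress only to the cluster DNS service might need to also allow the link-local cache address. The following is a hedged sketch with a hypothetical policy name and namespace; adapt it to your own policies:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress    # hypothetical policy name
  namespace: my-namespace   # hypothetical namespace
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    # Allow DNS to the NodeLocal cache address in addition to any
    # selectors that you already use for the cluster DNS pods.
    - ipBlock:
        cidr: 169.254.20.10/32
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
```
{: codeblock}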
**To enable NodeLocal DNS cache**:
1. Log in to your account. If applicable, target the appropriate resource group. Set the context for your cluster.
2. If you customized stub domains and upstream DNS servers for CoreDNS, you must also customize the NodeLocal DNS cache with these stub domains and upstream DNS servers.
3. List the nodes in your cluster. The NodeLocal DNS caching agent pods are part of a daemon set that runs on each labeled node.
    ```sh
    kubectl get nodes
    ```
    {: pre}
4. Add the `ibm-cloud.kubernetes.io/node-local-dns-enabled=true` label to the worker node. The label starts the DNS caching agent pod on the worker node.
    1. Add the label to one or more worker nodes.
        - To label all worker nodes in the cluster: Add the label to all existing worker pools, for example with the worker pool sketch that follows these steps.
        - To label an individual worker node:
            ```sh
            kubectl label node <node_name> --overwrite "ibm-cloud.kubernetes.io/node-local-dns-enabled=true"
            ```
            {: pre}
    2. Verify that the node has the label by checking that the `NODE-LOCAL-DNS-ENABLED` field is set to `true`.
        ```sh
        kubectl get nodes -L "ibm-cloud.kubernetes.io/node-local-dns-enabled"
        ```
        {: pre}

        Example output:
        ```
        NAME            STATUS                     ROLES    AGE   VERSION       NODE-LOCAL-DNS-ENABLED
        10.xxx.xx.xxx   Ready,SchedulingDisabled   <none>   28h   v1.17.1+IKS   true
        ```
        {: screen}
    3. Verify that the DNS caching agent pod is running on the worker node.
        ```sh
        kubectl get pods -n kube-system -l k8s-app=node-local-dns -o wide
        ```
        {: pre}

        Example output:
        ```
        NAME                   READY   STATUS    RESTARTS   AGE   IP              NODE            NOMINATED NODE   READINESS GATES
        node-local-dns-pvnjn   1/1     Running   0          1m    10.xxx.xx.xxx   10.xxx.xx.xxx   <none>           <none>
        ```
        {: screen}
5. Repeat the previous steps for each worker node to enable DNS caching.
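To label a worker pool so that current and future worker nodes inherit the label, you can set it at the worker pool level. A sketch, assuming the `ibmcloud ks worker-pool label set` command that is available in recent versions of the CLI plug-in:
```sh
# Apply the label to the worker pool so that replacement and newly added
# worker nodes start the DNS caching agent automatically.
ibmcloud ks worker-pool label set --cluster <cluster_name_or_ID> --worker-pool <worker_pool> --label "ibm-cloud.kubernetes.io/node-local-dns-enabled=true"
```
{: pre}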
### Disabling NodeLocal DNS cache
{: #dns_disablecache}
You can disable the NodeLocal DNS cache for one or more worker nodes.
{: shortdesc}
1. Remove the `ibm-cloud.kubernetes.io/node-local-dns-enabled` label from the worker node. This action terminates the DNS caching agent pod on the worker node.
    - To remove the label from all worker nodes in the cluster:
        ```sh
        kubectl label node --all --overwrite "ibm-cloud.kubernetes.io/node-local-dns-enabled-"
        ```
        {: pre}
    - To remove the label from an individual worker node:
        ```sh
        kubectl label node <node_name> "ibm-cloud.kubernetes.io/node-local-dns-enabled-"
        ```
        {: pre}
2. Verify that the label is removed by checking that the `NODE-LOCAL-DNS-ENABLED` field is empty.
    ```sh
    kubectl get nodes -L "ibm-cloud.kubernetes.io/node-local-dns-enabled"
    ```
    {: pre}

    Example output:
    ```
    NAME            STATUS                     ROLES    AGE   VERSION       NODE-LOCAL-DNS-ENABLED
    10.xxx.xx.xxx   Ready,SchedulingDisabled   <none>   28h   v1.17.1+IKS
    ```
    {: screen}
3. Verify that the pod is no longer running on the node where DNS cache is disabled. The output shows no pods.
    ```sh
    kubectl get pods -n kube-system -l k8s-app=node-local-dns -o wide
    ```
    {: pre}

    Example output:
    ```
    No resources found.
    ```
    {: screen}
4. Repeat the previous steps for each worker node to disable DNS caching.
### Customizing the NodeLocal DNS cache
{: #dns_nodelocal_customize}
You can customize the NodeLocal DNS cache by editing either of the two configmaps.
{: shortdesc}
- `node-local-dns` configmap: In Kubernetes 1.17 or later, customize the NodeLocal DNS cache configuration.
- `node-local-dns-config` configmap: Extend the NodeLocal DNS cache configuration by customizing stub domains or upstream DNS servers to resolve services that point to external hosts.
#### Customizing the node-local-dns configmap
{: #dns_nodelocal_customize_configmap}
Edit the `node-local-dns` configmap to customize the NodeLocal DNS cache configuration.
{: shortdesc}
Before you begin: Log in to your account. If applicable, target the appropriate resource group. Set the context for your cluster.
1. Verify that the NodeLocal DNS cache daemon set is available.
    ```sh
    kubectl get ds -n kube-system node-local-dns
    ```
    {: pre}

    Example output:
    ```
    NAME             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                                         AGE
    node-local-dns   4         4         4       4            4           ibm-cloud.kubernetes.io/node-local-dns-enabled=true   82d
    ```
    {: screen}
2. Edit the default settings or add custom Corefiles to the NodeLocal DNS cache configmap. Each Corefile that you import must use the `coredns` path. For more information, see the Kubernetes documentation{: external}.

    Only a limited set of plug-ins{: external} is supported for the NodeLocal DNS cache.
    {: important}

    ```sh
    kubectl edit configmap -n kube-system node-local-dns
    ```
    {: pre}

    Example output:
    ```yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: node-local-dns
      namespace: kube-system
    data:
      Corefile: |
        # Add your NodeLocal DNS customizations as import files under ./coredns directory.
        # Refer to https://cloud.ibm.com/docs/containers?topic=containers-cluster_dns for details.
        import ./coredns/<MyCorefile>
        cluster.local:53 abc.com:53 {
            errors
            cache {
                success 9984 30
                denial 9984 5
            }
            reload
            loop
            bind 169.254.20.10 172.21.0.10
            forward . __PILLAR__CLUSTER__DNS__ {
                force_tcp
            }
            prometheus :9253
            health 169.254.20.10:8080
        }
        in-addr.arpa:53 {
            errors
            cache 30
            reload
            loop
            bind 169.254.20.10 172.21.0.10
            forward . __PILLAR__CLUSTER__DNS__ {
                force_tcp
            }
            prometheus :9253
        }
        ip6.arpa:53 {
            errors
            cache 30
            reload
            loop
            bind 169.254.20.10 172.21.0.10
            forward . __PILLAR__CLUSTER__DNS__ {
                force_tcp
            }
            prometheus :9253
        }
        .:53 {
            errors
            cache 30
            reload
            loop
            bind 169.254.20.10 172.21.0.10
            forward . __PILLAR__UPSTREAM__SERVERS__ {
                force_tcp
            }
            prometheus :9253
        }
      <MyCorefile>: |
        # Add custom corefile content
        ...
    ```
    {: screen}
3. After a few minutes, the NodeLocal DNS cache pods pick up the configmap changes.
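As with CoreDNS, you can watch the caching agent logs to confirm the reload. A sketch that uses the `k8s-app=node-local-dns` pod label from the earlier steps:
```sh
# Print recent log lines from the NodeLocal DNS cache pods on labeled nodes.
kubectl logs -n kube-system -l k8s-app=node-local-dns --tail=20
```
{: pre}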
#### Customizing the node-local-dns-config configmap
{: #dns_nodelocal_customize_stub_upstream}
Edit the `node-local-dns-config` configmap to extend the NodeLocal DNS cache configuration, such as by customizing stub domains or upstream DNS servers. For more information, see the Kubernetes documentation{: external}.
{: shortdesc}
Before you begin: Log in to your account. If applicable, target the appropriate resource group. Set the context for your cluster.
1. Verify that the NodeLocal DNS cache daemon set is available.
    ```sh
    kubectl get ds -n kube-system node-local-dns
    ```
    {: pre}

    Example output:
    ```
    NAME             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                                         AGE
    node-local-dns   4         4         4       4            4           ibm-cloud.kubernetes.io/node-local-dns-enabled=true   82d
    ```
    {: screen}
2. Confirm that the NodeLocal DNS cache has a configmap.
    1. Determine whether the NodeLocal DNS cache configmap exists.
        ```sh
        kubectl get cm -n kube-system node-local-dns-config
        ```
        {: pre}

        Example output if no configmap exists:
        ```
        Error from server (NotFound): configmaps "node-local-dns-config" not found
        ```
        {: screen}
    2. If the configmap does not exist, create a NodeLocal DNS cache configmap.
        ```sh
        kubectl create cm -n kube-system node-local-dns-config
        ```
        {: pre}

        Example output:
        ```
        configmap/node-local-dns-config created
        ```
        {: screen}
3. Edit the NodeLocal DNS cache configmap. The configmap uses the KubeDNS syntax to customize stub domains and upstream DNS servers. For more information, see the Kubernetes documentation{: external}.
    ```sh
    kubectl edit cm -n kube-system node-local-dns-config
    ```
    {: pre}

    Example output:
    ```yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: node-local-dns-config
      namespace: kube-system
    data:
      stubDomains: |
        {"abc.com" : ["1.2.3.4"]}
    ```
    {: screen}
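    The same KubeDNS syntax covers upstream DNS servers. A sketch, assuming the `upstreamNameservers` key and `8.8.8.8` as a stand-in for your own resolver:
    ```yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: node-local-dns-config
      namespace: kube-system
    data:
      # Forward queries for abc.com to a custom resolver.
      stubDomains: |
        {"abc.com" : ["1.2.3.4"]}
      # Send all other external queries to these resolvers (stand-in addresses).
      upstreamNameservers: |
        ["8.8.8.8", "8.8.4.4"]
    ```
    {: codeblock}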
4. After a few minutes, the NodeLocal DNS cache pods pick up the configmap changes.
## Setting up zone-aware DNS
{: #dns_zone_aware}
Set up zone-aware DNS for improved cluster DNS performance and availability in your multizone {{site.data.keyword.containerlong_notm}} cluster. This setup extends NodeLocal DNS cache to prefer cluster DNS traffic within the same zone.
{: shortdesc}
By default, your cluster is set up with cluster-wide DNS resources, not zone-aware DNS resources. Even after you set up zone-aware DNS, the cluster-wide DNS resources remain running as a backup DNS. Your zone-aware DNS resources are separate from the cluster-wide DNS, and changing zone-aware DNS does not impact the cluster-wide DNS.
Do not use the DNS cache label when you use zone-aware DNS in your cluster. {: important}
Zone-aware DNS is a beta feature that is subject to change, and available only for clusters that run Kubernetes versions 1.18 and later. {: beta}
### Deploying and enabling zone-aware DNS
{: #dns_zone_aware_deploy}
To use zone-aware DNS, you must first deploy zone-aware DNS resources in your multizone cluster. Then, you can enable zone-aware DNS in each zone.
{: shortdesc}
Before you begin: Update any DNS egress network policies{: external} that are impacted by zone-aware DNS, such as policies that rely on pod or namespace selectors for DNS egress.
```sh
kubectl get networkpolicy --all-namespaces -o yaml
```
{: pre}
Step 1: Deploy zone-aware DNS resources:
1. Add the `ibm-cloud.kubernetes.io/deploy-zone-aware-dns=true` label to the `coredns` configmap in the `kube-system` namespace.
    ```sh
    kubectl label cm -n kube-system coredns --overwrite "ibm-cloud.kubernetes.io/deploy-zone-aware-dns=true"
    ```
    {: pre}
2. Refresh the cluster master to deploy the zone-aware DNS resources.
    ```sh
    ibmcloud ks cluster master refresh -c <cluster_name_or_ID>
    ```
    {: pre}
3. Watch for the refresh operation to complete by reviewing the **Master Health** in the cluster details.
    ```sh
    ibmcloud ks cluster get -c <cluster_name_or_ID>
    ```
    {: pre}
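After the refresh completes, you can check that the per-zone resources exist. A sketch, assuming the zone-suffixed deployment names (such as `coredns-<zone>`) that the following steps scale and query:
```sh
# List the cluster-wide and per-zone CoreDNS deployments. The per-zone
# deployments stay at zero replicas until you scale them up in step 2.
kubectl get deployments -n kube-system | grep coredns
```
{: pre}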
Step 2: Enable zone-aware DNS:
1. If you customized stub domains and upstream DNS servers for CoreDNS, you must also customize the NodeLocal DNS cache with these stub domains and upstream DNS servers.
2. Set an environment variable for the zones of the cluster.
    ```sh
    ZONES=$(kubectl get nodes --no-headers --ignore-not-found=true -o jsonpath='{range .items[*]}{.metadata.labels.failure-domain\.beta\.kubernetes\.io/zone}{"\n"}{end}' | uniq)
    ```
    {: pre}
3. Start the CoreDNS and CoreDNS autoscaler pods in all zones.
    ```sh
    for ZONE in ${ZONES}; do
        kubectl scale deployment -n kube-system "coredns-autoscaler-${ZONE}" --replicas=1
    done
    ```
    {: pre}
4. Verify that the CoreDNS and CoreDNS autoscaler pods are running in all zones.
    ```sh
    for ZONE in ${ZONES}; do
        kubectl get pods -n kube-system -l "k8s-app=coredns-autoscaler-${ZONE}" -o wide
        kubectl get pods -n kube-system -l "k8s-app=coredns-${ZONE}" -o wide
    done
    ```
    {: pre}
5. Start the NodeLocal DNS cache pods on all worker nodes.
    ```sh
    kubectl label nodes --all --overwrite "ibm-cloud.kubernetes.io/zone-aware-dns-enabled=true"
    ```
    {: pre}
6. Verify that the NodeLocal DNS cache pods are running on all worker nodes.
    ```sh
    for ZONE in ${ZONES}; do
        kubectl get pods -n kube-system -l "k8s-app=node-local-dns-${ZONE}" -o wide
    done
    ```
    {: pre}
7. Label your worker pools so that future worker nodes inherit the `ibm-cloud.kubernetes.io/zone-aware-dns-enabled=true` label, for example as in the following sketch.
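A sketch for the worker pool label, assuming the `ibmcloud ks worker-pool label set` command that is available in recent versions of the CLI plug-in:
```sh
# Apply the label at the worker pool level so that replacement and newly
# added worker nodes run the zone-aware NodeLocal DNS cache automatically.
ibmcloud ks worker-pool label set --cluster <cluster_name_or_ID> --worker-pool <worker_pool> --label "ibm-cloud.kubernetes.io/zone-aware-dns-enabled=true"
```
{: pre}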
### Removing zone-aware DNS
{: #dns_zone_aware_delete}
To remove zone-aware DNS, you must first disable zone-aware DNS in each zone of your multizone cluster. Then, delete the zone-aware DNS resources.
{: shortdesc}
Step 1: Disable zone-aware DNS:
1. Remove the `ibm-cloud.kubernetes.io/zone-aware-dns-enabled=true` label from your worker pools.
2. Set an environment variable for the zones in the cluster.
    ```sh
    ZONES=$(kubectl get nodes --no-headers --ignore-not-found=true -o jsonpath='{range .items[*]}{.metadata.labels.failure-domain\.beta\.kubernetes\.io/zone}{"\n"}{end}' | uniq)
    ```
    {: pre}
3. Stop the NodeLocal DNS cache pods on all worker nodes.
    ```sh
    kubectl label nodes --all --overwrite "ibm-cloud.kubernetes.io/zone-aware-dns-enabled-"
    ```
    {: pre}
4. Stop the CoreDNS autoscaler pods in all zones.
    ```sh
    for ZONE in ${ZONES}; do
        kubectl scale deployment -n kube-system "coredns-autoscaler-${ZONE}" --replicas=0
    done
    ```
    {: pre}
5. Verify that the CoreDNS autoscaler pods are no longer running in any zone.
    ```sh
    for ZONE in ${ZONES}; do
        kubectl get pods -n kube-system -l "k8s-app=coredns-autoscaler-${ZONE}"
    done
    ```
    {: pre}
6. Stop the CoreDNS pods in all zones.
    ```sh
    for ZONE in ${ZONES}; do
        kubectl scale deployment -n kube-system "coredns-${ZONE}" --replicas=0
    done
    ```
    {: pre}
Step 2: Delete zone-aware DNS resources:
1. Remove the `ibm-cloud.kubernetes.io/deploy-zone-aware-dns=true` label from the `coredns` configmap in the `kube-system` namespace.
    ```sh
    kubectl label cm -n kube-system coredns --overwrite "ibm-cloud.kubernetes.io/deploy-zone-aware-dns-"
    ```
    {: pre}
2. Refresh the cluster master to delete the zone-aware DNS resources.
    ```sh
    ibmcloud ks cluster master refresh -c <cluster_name_or_ID>
    ```
    {: pre}
3. Watch for the refresh operation to complete by reviewing the **Master Health** in the cluster details.
    ```sh
    ibmcloud ks cluster get -c <cluster_name_or_ID>
    ```
    {: pre}