---
copyright:
  years:
lastupdated: "2021-03-30"
keywords: kubernetes, iks, infrastructure, rbac, policy, http2, quota
subcollection: containers
---
{:DomainName: data-hd-keyref="APPDomain"} {:DomainName: data-hd-keyref="DomainName"} {:android: data-hd-operatingsystem="android"} {:api: .ph data-hd-interface='api'} {:apikey: data-credential-placeholder='apikey'} {:app_key: data-hd-keyref="app_key"} {:app_name: data-hd-keyref="app_name"} {:app_secret: data-hd-keyref="app_secret"} {:app_url: data-hd-keyref="app_url"} {:authenticated-content: .authenticated-content} {:beta: .beta} {:c#: data-hd-programlang="c#"} {:cli: .ph data-hd-interface='cli'} {:codeblock: .codeblock} {:curl: .ph data-hd-programlang='curl'} {:deprecated: .deprecated} {:dotnet-standard: .ph data-hd-programlang='dotnet-standard'} {:download: .download} {:external: target="_blank" .external} {:faq: data-hd-content-type='faq'} {:fuzzybunny: .ph data-hd-programlang='fuzzybunny'} {:generic: data-hd-operatingsystem="generic"} {:generic: data-hd-programlang="generic"} {:gif: data-image-type='gif'} {:go: .ph data-hd-programlang='go'} {:help: data-hd-content-type='help'} {:hide-dashboard: .hide-dashboard} {:hide-in-docs: .hide-in-docs} {:important: .important} {:ios: data-hd-operatingsystem="ios"} {:java: .ph data-hd-programlang='java'} {:java: data-hd-programlang="java"} {:javascript: .ph data-hd-programlang='javascript'} {:javascript: data-hd-programlang="javascript"} {:new_window: target="_blank"} {:note: .note} {:objectc: data-hd-programlang="objectc"} {:org_name: data-hd-keyref="org_name"} {:php: data-hd-programlang="php"} {:pre: .pre} {:preview: .preview} {:python: .ph data-hd-programlang='python'} {:python: data-hd-programlang="python"} {:route: data-hd-keyref="route"} {:row-headers: .row-headers} {:ruby: .ph data-hd-programlang='ruby'} {:ruby: data-hd-programlang="ruby"} {:runtime: architecture="runtime"} {:runtimeIcon: .runtimeIcon} {:runtimeIconList: .runtimeIconList} {:runtimeLink: .runtimeLink} {:runtimeTitle: .runtimeTitle} {:screen: .screen} {:script: data-hd-video='script'} {:service: architecture="service"}
{:service_instance_name: data-hd-keyref="service_instance_name"} {:service_name: data-hd-keyref="service_name"} {:shortdesc: .shortdesc} {:space_name: data-hd-keyref="space_name"} {:step: data-tutorial-type='step'} {:subsection: outputclass="subsection"} {:support: data-reuse='support'} {:swift: .ph data-hd-programlang='swift'} {:swift: data-hd-programlang="swift"} {:table: .aria-labeledby="caption"} {:term: .term} {:tip: .tip} {:tooling-url: data-tooling-url-placeholder='tooling-url'} {:troubleshoot: data-hd-content-type='troubleshoot'} {:tsCauses: .tsCauses} {:tsResolve: .tsResolve} {:tsSymptoms: .tsSymptoms} {:tutorial: data-hd-content-type='tutorial'} {:ui: .ph data-hd-interface='ui'} {:unity: .ph data-hd-programlang='unity'} {:url: data-credential-placeholder='url'} {:user_ID: data-hd-keyref="user_ID"} {:vbnet: .ph data-hd-programlang='vb.net'} {:video: .video}
# Service limitations
{: #limitations}
{{site.data.keyword.containerlong}} and the Kubernetes open source project come with default service settings and limitations to ensure security, convenience, and basic functionality. You might be able to change some of these limitations, where noted.
{: shortdesc}
If you anticipate reaching any of the following {{site.data.keyword.containerlong_notm}} limitations, contact IBM Support and include the cluster ID, the new quota limit, the region, and the infrastructure provider in your support ticket.
{: tip}
## Service limitations and quotas
{: #tech_limits}
{{site.data.keyword.containerlong_notm}} comes with the following service limitations and quotas that apply to all clusters, regardless of the infrastructure provider that you plan to use. Keep in mind that the classic and VPC cluster limitations also apply.
{: shortdesc}
To view quota limits on cluster-related resources in your {{site.data.keyword.cloud_notm}} account, use the `ibmcloud ks quota ls` command.
{: tip}
| Category | Description |
|---|---|
| API rate limits | 200 requests per 10 seconds to the {{site.data.keyword.containerlong_notm}} API from each unique source IP address. |
| App deployment | The apps that you deploy to and services that you integrate with your cluster must be able to run on the operating system of the worker nodes. |
| Calico network plug-in | Changing the Calico plug-in, components, or default Calico settings is not supported. For example, do not deploy a new Calico plug-in version, or modify the daemon sets or deployments for the Calico components, default IPPool resources, or Calico nodes. Instead, you can follow the documentation to create a Calico NetworkPolicy or GlobalNetworkPolicy, to change the Calico MTU, or to disable the port map plug-in for the Calico CNI. |
| Cluster quota | You cannot exceed 100 clusters per region, per infrastructure provider. If you need a higher quota, contact IBM Support. In the support case, include the new quota limit for the region and infrastructure provider that you want. |
| Kubernetes | Make sure to review the Kubernetes project limitations{: external}. |
| KMS provider | Customizing the IP addresses that are allowed to connect to your {{site.data.keyword.keymanagementservicefull}} instance is not supported. |
| Kubernetes pod logs | To check the logs for individual app pods, run `kubectl logs <pod_name>` from the command line. Do not use the Kubernetes dashboard to stream logs for your pods, which might cause a disruption in your access to the Kubernetes dashboard. |
| Load balancers | Kubernetes 1.20 or later: Although the Kubernetes SCTP protocol{: external} and application protocol{: external} features are generally available in the community release, creating load balancers that use these protocols is not supported in {{site.data.keyword.containerlong_notm}} clusters. |
| Pod instances | You can run 110 pods per worker node. If you have worker nodes with 11 CPU cores or more, you can support 10 pods per core, up to a limit of 250 pods per worker node. The number of pods includes kube-system and ibm-system pods that run on the worker node. For improved performance, consider limiting the number of pods that you run per compute core so that you do not overuse the worker node. For example, on a worker node with a b3c.4x16 flavor, you might run 10 pods per core that use no more than 75% of the worker node total capacity. |
| Worker node quota | You cannot exceed 500 worker nodes across all clusters in a region, per infrastructure provider. If you need a higher quota, contact IBM Support. In the support case, include the new quota limit for the region and infrastructure provider that you want. |
| Worker pool size | You must have a minimum of 1 worker node in your cluster and each worker pool at all times. For more information, see What is the smallest size cluster that I can make?. You cannot scale worker pools down to zero. Because of the worker node quota, you are limited in the number of worker pools per cluster and number of worker nodes per worker pool. For example, with the default worker node quota of 500 per region, you might have up to 500 worker pools of 1 worker node each in a region with only 1 cluster. Or, you might have 1 worker pool with up to 500 worker nodes in a region with only 1 cluster. |
{: summary="This table contains information on general {{site.data.keyword.containerlong_notm}} limitations. Columns are read from left to right. In the first column is the type of limitation and in the second column is the description of the limitation."}
{: caption="{{site.data.keyword.containerlong_notm}} limitations"}
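The per-worker pod capacity rule in the table can be sketched as a quick calculation. This is only an illustration of the documented numbers (110 pods by default, 10 pods per core on workers with 11 or more cores, capped at 250), not an official tool:

```shell
# Sketch of the documented per-worker pod limit.
max_pods_per_worker() {
  cores="$1"
  if [ "$cores" -ge 11 ]; then
    # 10 pods per core, capped at 250 pods per worker node
    pods=$((cores * 10))
    if [ "$pods" -gt 250 ]; then
      pods=250
    fi
  else
    # default limit for smaller workers
    pods=110
  fi
  echo "$pods"
}

max_pods_per_worker 16   # a 16-core worker supports up to 160 pods
```

Note that the table's guidance to run about 10 pods per core (for example, on a 4-core b3c.4x16 worker) is a performance recommendation on top of this hard limit.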
## Classic cluster limitations
{: #classic_limits}
Classic infrastructure clusters in {{site.data.keyword.containerlong_notm}} are released with the following limitations.
{: shortdesc}
### Compute
{: #classic_compute_limit}
Keep in mind that the service limitations also apply.
| Category | Description |
|---|---|
| Operating system | You cannot create a cluster with worker nodes that run multiple operating systems, such as {{site.data.keyword.openshiftshort}} on Red Hat Enterprise Linux and community Kubernetes on Ubuntu. |
| Reserved instances | Reserved capacity and reserved instances are not supported. |
| Worker node flavors | Worker nodes are available in select flavors of compute resources. |
| Worker node host access | For security, you cannot SSH into the worker node compute host. |
{: summary="This table contains information on compute limitations for classic clusters. Columns are read from left to right. In the first column is the type of limitation and in the second column is the description of the limitation."}
{: caption="Classic cluster compute limitations"}
### Networking
{: #classic_networking_limit}
Keep in mind that the service limitations also apply.
| Category | Description |
|---|---|
| Ingress ALBs | |
| Istio managed add-on | See Istio add-on limitations. |
| Network load balancers (NLB) | |
| strongSwan VPN service | See strongSwan VPN service considerations. |
| Service IP addresses | You can have 65,000 IP addresses per cluster in the 172.21.0.0/16 range that you can assign to Kubernetes services within the cluster. |
| Subnets per VLAN | Each VLAN has a limit of 40 subnets. |
{: summary="This table contains information on networking limitations for classic clusters. Columns are read from left to right. In the first column is the type of limitation and in the second column is the description of the limitation."}
{: caption="Classic cluster networking limitations"}
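Although you cannot modify the Calico components or default settings, creating your own Calico policies is a supported customization (see the service limitations table). As a hypothetical sketch, a minimal Calico `NetworkPolicy` that allows ingress to pods labeled `app: myapp` only from pods labeled `role: frontend` might look like the following; the policy name, namespace, and labels are placeholders:

```yaml
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-myapp   # placeholder name
  namespace: default
spec:
  selector: app == 'myapp'        # pods that this policy applies to
  types:
  - Ingress
  ingress:
  - action: Allow
    source:
      selector: role == 'frontend'
```

Resources in the `projectcalico.org/v3` API group are managed with `calicoctl apply -f` rather than `kubectl`.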
### Storage
{: #classic_storage_limit}
Keep in mind that the service limitations also apply.
| Category | Description |
|---|---|
| Volume instances | You can have a total of 250 IBM Cloud infrastructure file and block storage volumes per account. If you mount more than this amount, you might see an "out of capacity" message when you provision persistent volumes. For more FAQs, see the file and block storage docs. If you want to mount more volumes, contact IBM Support. In your support ticket, include your account ID and the new file or block storage volume quota that you want. |
| Portworx | Review the Portworx limitations. |
{: summary="This table contains information on storage limitations for classic clusters. Columns are read from left to right. In the first column is the type of limitation and in the second column is the description of the limitation."}
{: caption="Classic cluster storage limitations"}
## VPC cluster limitations
{: #ks_vpc_gen2_limits}
VPC Generation 2 compute clusters in {{site.data.keyword.containerlong_notm}} are released with the following limitations. Additionally, all the underlying VPC quotas, VPC limits, VPC service limitations, and regular service limitations apply.
{: shortdesc}
### Compute
{: #vpc_gen2_compute_limit}
Keep in mind that the service limitations also apply.
| Category | Description |
|---|---|
| Encryption | The secondary disks of your worker nodes are encrypted at rest by default by the underlying VPC infrastructure provider. However, you cannot bring your own encryption to the underlying virtual server instances. |
| Location | VPC Gen 2 clusters are available only in select multizone metro locations. |
| Operating system | You cannot create a cluster with worker nodes that run multiple operating systems, such as {{site.data.keyword.openshiftshort}} on Red Hat Enterprise Linux and community Kubernetes on Ubuntu. |
| Versions | VPC Gen 2 clusters must run Kubernetes version 1.17 or later. |
| Virtual Private Cloud | See Known limitations and Quotas. |
| v2 API | VPC clusters use the {{site.data.keyword.containerlong_notm}} v2 API. The v2 API is currently under development, with only a limited number of API operations currently available. You can run certain v1 API operations against a VPC cluster, such as `GET /v1/clusters` or `ibmcloud ks cluster ls`, but not all the information that a classic cluster returns is available, and you might experience unexpected results. For supported VPC v2 operations, see the CLI reference topic for VPC commands. |
| Worker node flavors | Only certain flavors are available for worker node virtual machines. Bare metal machines are not supported. |
| Worker node host access | For security, you cannot SSH into the worker node compute host. |
| Worker node updates | You cannot update or reload worker nodes. Instead, you can delete the worker node and rebalance the worker pool with the `ibmcloud ks worker replace` command. If you replace multiple worker nodes at the same time, they are deleted and replaced concurrently, not one by one. Make sure that you have enough capacity in your cluster to reschedule your workloads before you replace worker nodes. |
{: summary="This table contains information on compute limitations for VPC Gen 2 clusters. Columns are read from left to right. In the first column is the type of limitation and in the second column is the description of the limitation."}
{: caption="VPC Gen 2 cluster compute limitations"}
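Because worker nodes that you replace at the same time are deleted and replaced concurrently, one way to keep capacity available is to issue replacements one at a time. A hedged sketch, where the cluster name and worker IDs are placeholders and the `KS` variable lets you dry-run with `echo` instead of the real CLI:

```shell
# Replace worker nodes one at a time instead of all at once, so the
# cluster keeps more capacity to reschedule workloads.
replace_workers_serially() {
  cluster="$1"; shift
  for worker in "$@"; do
    # ${KS:-ibmcloud ks} lets you substitute a stub (e.g. KS=echo) for a dry run.
    # In practice, you would also wait for each replacement worker to reach
    # the Normal state (ibmcloud ks worker ls) before starting the next one.
    ${KS:-ibmcloud ks} worker replace --cluster "$cluster" --worker "$worker" || return 1
  done
}

# Dry run example:
# KS=echo replace_workers_serially mycluster kube-worker-1 kube-worker-2
```

The dry-run pattern shows which commands would run without touching the cluster.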
### Networking
{: #vpc_gen2_networking_limit}
Keep in mind that the service limitations also apply.
| Category | Description |
|---|---|
| App URL length | Kubernetes version 1.20 or later only: DNS resolution is managed by the cluster's virtual private endpoint (VPE), which can resolve URLs up to 130 characters. If you expose apps in your cluster with URLs, such as the Ingress subdomain, ensure that the URLs are 130 characters or fewer. |
| Istio managed add-on | See Istio add-on limitations. |
| Network speeds | VPC Gen 2 compute profile network speeds refer to the speeds of the worker node interfaces. The maximum speed available to your worker nodes is 16 Gbps. Because IP in IP encapsulation is required for traffic between pods that are on different VPC Gen 2 worker nodes, data transfer speeds between pods on different worker nodes might be slower, about half the compute profile network speed. Overall network speeds for apps that you deploy to your cluster depend on the worker node size and the application's architecture. |
| NodePort | You can access an app through a NodePort only if you are connected to your private VPC network, such as through a VPN connection. To access an app from the internet, you must use a VPC load balancer or Ingress service instead. |
| Pod network | VPC access control lists (ACLs) filter incoming and outgoing traffic for your cluster at the subnet level, and security groups filter incoming and outgoing traffic for your cluster at the worker nodes level. To control traffic within the cluster at the pod-to-pod level, you cannot use VPC security groups or ACLs. Instead, use Calico and Kubernetes network policies, which can control the pod-level network traffic that uses IP in IP encapsulation. |
| strongSwan VPN service | The strongSwan service is not supported. To connect your cluster to resources in an on-premises network or another VPC, see Using VPN with your VPC. |
| Subnets | |
| VPC load balancer | See VPC load balancer limitations. |
| VPC security groups | VPC Gen 2 clusters that run Kubernetes version 1.18 or earlier only: You must allow inbound traffic requests to node ports on your worker nodes. |
{: summary="This table contains information on networking limitations for VPC Gen 2 clusters. Columns are read from left to right. In the first column is the type of limitation and in the second column is the description of the limitation."}
{: caption="VPC Gen 2 cluster networking limitations"}
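Because VPC security groups and ACLs cannot filter pod-to-pod traffic, traffic inside the cluster is controlled with Calico and Kubernetes network policies instead. As a hypothetical sketch, a minimal Kubernetes `NetworkPolicy` that restricts ingress to pods labeled `app: myapp` so that only pods labeled `role: frontend` can reach them might look like this; the name, namespace, and labels are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-only       # placeholder name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: myapp                  # pods that this policy applies to
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend          # only these pods may connect
```

You apply standard Kubernetes network policies with `kubectl apply -f`, the same way as in any conformant cluster.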
### Storage
{: #vpc_gen2_storage_limit}
Keep in mind that the service limitations also apply.
| Category | Description |
|---|---|
| Storage class for profile sizes | The available volume profiles are limited to 2 TB in size and 20,000 IOPS in capacity. |
| Supported types | You can set up {{site.data.keyword.block_storage_is_short}}, {{site.data.keyword.cos_full_notm}}, and {{site.data.keyword.databases-for}} only. |
| Unsupported types | NFS File Storage is not supported. |
| Volume attachments | See Volume attachment limits. |
| Portworx | Review the Portworx limitations. |
{: summary="This table contains information on storage limitations for VPC Gen 2 clusters. Columns are read from left to right. In the first column is the type of limitation and in the second column is the description of the limitation."}
{: caption="VPC Gen 2 cluster storage limitations"}
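As a sketch of the supported {{site.data.keyword.block_storage_is_short}} setup, a persistent volume claim references one of the VPC block storage classes. The claim name, size, and storage class name here are assumptions for illustration; any size you request must stay within the 2 TB profile limit from the table:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-block-pvc              # placeholder name
spec:
  accessModes:
  - ReadWriteOnce                 # block storage attaches to one worker at a time
  resources:
    requests:
      storage: 100Gi              # must stay within the 2 TB profile limit
  storageClassName: ibmc-vpc-block-10iops-tier   # assumed storage class name
```

Check the storage classes that are actually installed in your cluster with `kubectl get storageclasses` before you pick one.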