---

copyright:
  years: 2014, 2021
lastupdated: "2021-04-28"

keywords: kubernetes, iks, clusters, worker nodes, worker pools

subcollection: containers

---

{:DomainName: data-hd-keyref="APPDomain"} {:DomainName: data-hd-keyref="DomainName"} {:android: data-hd-operatingsystem="android"} {:api: .ph data-hd-interface='api'} {:apikey: data-credential-placeholder='apikey'} {:app_key: data-hd-keyref="app_key"} {:app_name: data-hd-keyref="app_name"} {:app_secret: data-hd-keyref="app_secret"} {:app_url: data-hd-keyref="app_url"} {:authenticated-content: .authenticated-content} {:beta: .beta} {:c#: data-hd-programlang="c#"} {:cli: .ph data-hd-interface='cli'} {:codeblock: .codeblock} {:curl: .ph data-hd-programlang='curl'} {:deprecated: .deprecated} {:dotnet-standard: .ph data-hd-programlang='dotnet-standard'} {:download: .download} {:external: target="_blank" .external} {:faq: data-hd-content-type='faq'} {:fuzzybunny: .ph data-hd-programlang='fuzzybunny'} {:generic: data-hd-operatingsystem="generic"} {:generic: data-hd-programlang="generic"} {:gif: data-image-type='gif'} {:go: .ph data-hd-programlang='go'} {:help: data-hd-content-type='help'} {:hide-dashboard: .hide-dashboard} {:hide-in-docs: .hide-in-docs} {:important: .important} {:ios: data-hd-operatingsystem="ios"} {:java: .ph data-hd-programlang='java'} {:java: data-hd-programlang="java"} {:javascript: .ph data-hd-programlang='javascript'} {:javascript: data-hd-programlang="javascript"} {:new_window: target="_blank"} {:note .note} {:note: .note} {:objectc data-hd-programlang="objectc"} {:org_name: data-hd-keyref="org_name"} {:php: data-hd-programlang="php"} {:pre: .pre} {:preview: .preview} {:python: .ph data-hd-programlang='python'} {:python: data-hd-programlang="python"} {:route: data-hd-keyref="route"} {:row-headers: .row-headers} {:ruby: .ph data-hd-programlang='ruby'} {:ruby: data-hd-programlang="ruby"} {:runtime: architecture="runtime"} {:runtimeIcon: .runtimeIcon} {:runtimeIconList: .runtimeIconList} {:runtimeLink: .runtimeLink} {:runtimeTitle: .runtimeTitle} {:screen: .screen} {:script: data-hd-video='script'} {:service: architecture="service"} {:service_instance_name: data-hd-keyref="service_instance_name"} {:service_name: data-hd-keyref="service_name"} {:shortdesc: .shortdesc} {:space_name: data-hd-keyref="space_name"} {:step: data-tutorial-type='step'} {:subsection: outputclass="subsection"} {:support: data-reuse='support'} {:swift: .ph data-hd-programlang='swift'} {:swift: data-hd-programlang="swift"} {:table: .aria-labeledby="caption"} {:term: .term} {:tip: .tip} {:tooling-url: data-tooling-url-placeholder='tooling-url'} {:troubleshoot: data-hd-content-type='troubleshoot'} {:tsCauses: .tsCauses} {:tsResolve: .tsResolve} {:tsSymptoms: .tsSymptoms} {:tutorial: data-hd-content-type='tutorial'} {:ui: .ph data-hd-interface='ui'} {:unity: .ph data-hd-programlang='unity'} {:url: data-credential-placeholder='url'} {:user_ID: data-hd-keyref="user_ID"} {:vbnet: .ph data-hd-programlang='vb.net'} {:video: .video}

# Creating clusters
{: #clusters}

Create a cluster in {{site.data.keyword.containerlong}}. {: shortdesc}

After getting started, you might want to create a cluster that is customized to your use case and different public and private cloud environments. Consider the following steps to create the right cluster each time.

  1. Prepare your account to create clusters. This step includes creating a billable account, setting up an API key with infrastructure permissions, making sure that you have Administrator access in {{site.data.keyword.cloud_notm}} IAM, planning resource groups, and setting up account networking.
  2. Decide on your cluster setup. This step includes planning cluster network and HA setup, estimating costs, and if applicable, allowing network traffic through a firewall.
  3. Create your VPC Gen2 or classic cluster by following the steps in the {{site.data.keyword.cloud_notm}} console or CLI.

## Sample commands
{: #cluster_create_samples}

Have you created a cluster before and are just looking for quick example commands? Try these examples. {: shortdesc}

Free cluster:

ibmcloud ks cluster create classic --name my_cluster

{: pre}

Classic infrastructure provider icon Classic clusters:

  • Classic cluster, shared virtual machine:
    ibmcloud ks cluster create classic --name my_cluster --zone dal10 --flavor b3c.4x16 --hardware shared --workers 3 --public-vlan <public_VLAN_ID> --private-vlan <private_VLAN_ID>
    
    {: pre}
  • Classic cluster, bare metal:
    ibmcloud ks cluster create classic --name my_cluster --zone dal10 --flavor mb2c.4x32 --hardware dedicated --workers 3 --public-vlan <public_VLAN_ID> --private-vlan <private_VLAN_ID>
    
    {: pre}
  • Classic cluster with a gateway enabled:
    ibmcloud ks cluster create classic --name my_cluster --zone dal10 --flavor b3c.4x16 --hardware shared --workers 3 --gateway-enabled --version 1.20.6 --public-vlan <public_VLAN_ID> --private-vlan <private_VLAN_ID> --public-service-endpoint --private-service-endpoint
    
    {: pre}
  • Classic cluster that uses private VLANs and the private cloud service endpoint only:
    ibmcloud ks cluster create classic --name my_cluster --zone dal10 --flavor b3c.4x16 --hardware shared --workers 3 --private-vlan <private_VLAN_ID> --private-only --private-service-endpoint
    
    {: pre}
  • For a classic multizone cluster, after you created the cluster in a multizone metro, add zones:
    ibmcloud ks zone add classic --zone <zone> --cluster <cluster_name_or_ID> --worker-pool <pool_name> --private-vlan <private_VLAN_ID> --public-vlan <public_VLAN_ID>
    
    {: pre}

VPC infrastructure provider icon VPC clusters:

  • VPC infrastructure provider icon VPC cluster:
    ibmcloud ks cluster create vpc-gen2 --name my_cluster --zone us-east-1 --vpc-id <VPC_ID> --subnet-id <VPC_SUBNET_ID> --flavor b2.4x16 --workers 3
    
    {: pre}
  • For a VPC multizone cluster, after you created the cluster in a multizone metro, add zones:
    ibmcloud ks zone add vpc-gen2 --zone <zone> --cluster <cluster_name_or_ID> --worker-pool <pool_name> --subnet-id <VPC_SUBNET_ID>
    
    {: pre}
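If you need to look up the values for the placeholders in these sample commands, such as zones, flavors, and VLAN IDs, commands like the following can help. The dal10 zone is used only as an example.

ibmcloud ks locations

{: pre}

ibmcloud ks flavors --zone dal10

{: pre}

ibmcloud ks vlan ls --zone dal10

{: pre}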

## Preparing to create clusters at the account level
{: #cluster_prepare}

Prepare your {{site.data.keyword.cloud_notm}} account for {{site.data.keyword.containerlong_notm}}. After the account administrator makes these preparations, you might not need to change them each time that you create a cluster. However, each time that you create a cluster, you still want to verify that the current account-level state is what you need it to be. {: shortdesc}

  1. Create or upgrade your account to a billable account ({{site.data.keyword.cloud_notm}} Pay-As-You-Go or Subscription).

  2. Set up an API key for {{site.data.keyword.containerlong_notm}} in the region and resource groups where you want to create clusters. Assign the API key with the required service and infrastructure permissions to create clusters.

    Are you the account owner? You already have the necessary permissions! When you create a cluster, the API key for that region and resource group is set with your credentials.

  3. Verify that you as a user (not just the API key) have the required permissions to create clusters.

    1. From the {{site.data.keyword.cloud_notm}} console{: external} menu bar, click Manage > Access (IAM).

    2. Click the Users page, and then from the table, select yourself.

    3. From the Access policies tab, confirm that you have the required permissions to create clusters.

Make sure that your account administrator does not assign you the **Administrator** platform access role at the same time as scoping the access policy to a namespace.

  4. If your account uses multiple resource groups, figure out your account's strategy for managing resource groups.
  • The cluster is created in the resource group that you target when you log in to {{site.data.keyword.cloud_notm}}. If you do not target a resource group, the default resource group is automatically targeted. Free clusters are created in the default resource group.
  • If you want to create a cluster in a different resource group than the default, you need at least the Viewer role for the resource group. If you do not have any role for the resource group, your cluster is created in the default resource group.
  • You cannot change a cluster's resource group.
  • If you need to use the ibmcloud ks cluster service bind command to integrate with an {{site.data.keyword.cloud_notm}} service, that service must be in the same resource group as the cluster. Services that do not use resource groups like {{site.data.keyword.registrylong_notm}} or that do not need service binding like {{site.data.keyword.la_full_notm}} work even if the cluster is in a different resource group.
  • Consider giving clusters unique names across resource groups and regions in your account to avoid naming conflicts. You cannot rename a cluster.
  5. Classic infrastructure provider icon Classic clusters only: Consider creating a reservation to lock in a discount over 1 or 3 year terms for your worker nodes. After you create the cluster, add worker pools that use the reserved instances. Typical savings range from 30% to 50% compared to on-demand worker node costs.

  6. Standard clusters: Set up your IBM Cloud infrastructure networking to allow worker-to-master and user-to-master communication. Your cluster network setup varies with the infrastructure provider that you choose (classic or VPC). For more information, see Planning your cluster network setup.

  • VPC infrastructure provider icon VPC clusters only: Your VPC clusters are created with a public and a private cloud service endpoint by default.

    1. Enable VRF in your IBM Cloud account.
    2. Enable your {{site.data.keyword.cloud_notm}} account to use service endpoints.
    3. Optional: If you want your VPC clusters to communicate with classic clusters over the private network interface, you can choose to set up classic infrastructure access from the VPC that your cluster is in. Note that you can set up classic infrastructure access for only one VPC per region and Virtual Routing and Forwarding (VRF) is required in your {{site.data.keyword.cloud_notm}} account. For more information, see Setting up access to your Classic Infrastructure from VPC.
  • Classic infrastructure provider icon Classic clusters only, VRF and service endpoint enabled accounts: You must set up your account to use VRF and service endpoints to support scenarios such as running internet-facing workloads and extending your on-premises data center. After you set up the account, your VPC and classic clusters are created with a public and a private cloud service endpoint by default.

    1. Enable VRF in your IBM Cloud infrastructure account. To check whether a VRF is already enabled, use the ibmcloud account show command.
    2. Enable your {{site.data.keyword.cloud_notm}} account to use service endpoints.
  • Classic infrastructure provider icon Classic clusters only, Non-VRF and non-service endpoint accounts: If you do not set up your account to use VRF and service endpoints, you can create only classic clusters that use VLAN spanning to communicate with each other on the public and private network. Note that you cannot create gateway-enabled clusters.

    • To use the public cloud service endpoint only (run internet-facing workloads):
      1. Enable VLAN spanning for your IBM Cloud infrastructure account so that your worker nodes can communicate with each other on the private network. To perform this action, you need the Network > Manage Network VLAN Spanning infrastructure permission, or you can request the account owner to enable it. To check whether VLAN spanning is already enabled, use the ibmcloud ks vlan spanning get --region <region> command.
    • To use a gateway appliance (extend your on-premises data center):
      1. Enable VLAN spanning for your IBM Cloud infrastructure account so that your worker nodes can communicate with each other on the private network. To perform this action, you need the Network > Manage Network VLAN Spanning infrastructure permission, or you can request the account owner to enable it. To check whether VLAN spanning is already enabled, use the ibmcloud ks vlan spanning get --region <region> command.
      2. Configure a gateway appliance to connect your cluster to the on-premises network. For example, you might choose to set up a Virtual Router Appliance or a Fortigate Security Appliance to act as your firewall to allow required network traffic and to block unwanted network traffic.
      3. Open up the required private IP addresses and ports for each region so that the master and the worker nodes can communicate and for the {{site.data.keyword.cloud_notm}} services that you plan to use.
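To check the account-level networking settings that are described in this section, you can run commands such as the following. VRF status is reported by the account command, and VLAN spanning is reported per region; us-south is used here only as an example region.

ibmcloud account show

{: pre}

ibmcloud ks vlan spanning get --region us-south

{: pre}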

## Deciding on your cluster setup
{: #prepare_cluster_level}
{: help}
{: support}

After you set up your account to create clusters, decide on the setup for your cluster. You must make these decisions every time that you create a cluster. You can click on the options in the following decision tree image for more information, such as comparisons of free and standard, Kubernetes and {{site.data.keyword.openshiftshort}}, or VPC and classic clusters. {: shortdesc}

The following decision tree image walks you through choosing the setup that you want for your cluster.
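If it helps your decision, you can also list the Kubernetes versions and locations that are currently available before you commit to a setup. For example:

ibmcloud ks versions

{: pre}

ibmcloud ks locations

{: pre}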


## Creating a standard classic cluster
{: #clusters_standard}

Classic infrastructure provider icon Use the {{site.data.keyword.cloud_notm}} CLI or the {{site.data.keyword.cloud_notm}} console to create a fully-customizable standard cluster with your choice of hardware isolation and access to features like multiple worker nodes for a highly available environment. {: shortdesc}

Want to try out a free cluster first? See Creating a free classic cluster.

Classic infrastructure provider icon Want to save on your classic worker node costs? Create a reservation to lock in a discount over 1 or 3 year terms! Then, create your worker pool by using the reserved instances. {: tip}

### Creating a standard classic cluster in the console
{: #clusters_ui}

Classic infrastructure provider icon Create your single zone or multizone classic Kubernetes cluster by using the {{site.data.keyword.cloud_notm}} console. {: shortdesc}

  1. Make sure that you complete the prerequisites to prepare your account and decide on your cluster setup.

  2. From the Kubernetes clusters console{: external}, click Create cluster.

  3. Configure your cluster environment.

    1. Select the Standard cluster plan.
    2. From the Kubernetes drop-down list, select the version that you want to use in your cluster, such as 1.20.6.
    3. Select Classic infrastructure.
  4. Configure the Location details for your cluster.

    1. Select the Resource group that you want to create your cluster in.
      • A cluster can be created in only one resource group, and after the cluster is created, you cannot change its resource group.
      • To create clusters in a resource group other than the default, you must have at least the Viewer role for the resource group.
    2. Select a Geography to create the cluster in, such as North America. The geography helps filter the Availability and Metro values that you can select.
    3. Select the Availability that you want for your cluster, Single zone or Multizone. In a multizone cluster, the Kubernetes master is deployed in a multizone-capable zone and three replicas of your master are spread across zones.
    4. Enter the Metro and Worker zones details, depending on the availability that you selected for your cluster.
      • Multizone clusters:
        1. Select a Metro location. For the best performance, select the metro location that is physically closest to you. Your choices might be limited by geography.
        2. Select the specific Worker zones within the metro to host your cluster. You must select at least one zone, but you can select as many as you like. If you select more than one zone, the worker nodes are spread across the zones that you choose, which gives you higher availability. If you select only one zone, you can add zones to your cluster after the cluster is created.
      • Single zone clusters: Select a Worker zone in which you want to host your cluster. For the best performance, select the data center that is physically closest to you. Your choices might be limited by geography.
    5. For each of the selected zones, choose your public and private VLANs. You can change the pre-selected VLANs by clicking the Edit VLANs pencil icon. The first time that you create a cluster in a zone, public and private VLANs are automatically created for you.
      • To create a cluster in which you can run internet-facing workloads:
        1. Select a public VLAN and a private VLAN from your IBM Cloud infrastructure account for each zone. Worker nodes communicate with each other by using the private VLAN, and can communicate with the Kubernetes master by using the public or the private VLAN. If you do not have a public or private VLAN in this zone, a public and a private VLAN are automatically created for you. You can use the same VLAN for multiple clusters.
      • To create a cluster that extends your on-premises data center on the private network only, that extends your on-premises data center with the option of adding limited public access later, or that extends your on-premises data center and provides limited public access through a gateway appliance:
        1. Select a private VLAN from your IBM Cloud infrastructure account for each zone. Worker nodes communicate with each other by using the private VLAN. If you do not have a private VLAN in a zone, a private VLAN is automatically created for you. You can use the same VLAN for multiple clusters.
        2. For the public VLAN, select None.
  5. Configure your Worker pool setup. Worker pools are groups of worker nodes that share the same configuration. You can always add more worker pools to your cluster later.

    1. If you want a larger size for your worker nodes, click Change flavor. The flavor defines the amount of virtual CPU, memory, and disk space that is set up in each worker node and made available to the containers. Available bare metal and virtual machines types vary by the zone in which you deploy the cluster. For more information, see Planning your worker node setup. After you create your cluster, you can add different flavors by adding a worker pool to the cluster.
    • Default: The default flavor is Virtual - shared, Ubuntu 18, which comes with 4 vCPUs of computing power and 16 GB of memory. This virtual flavor is billed hourly. Other types of flavors include the following.
    • Bare metal: Bare metal servers are provisioned manually by IBM Cloud infrastructure after you order, and can take more than one business day to complete. Bare metal is best suited for high-performance applications that need more resources and host control. Be sure that you want to provision a bare metal machine. Because it is billed monthly, if you cancel it immediately after an order by mistake, you are still charged the full month.
    • Virtual - shared: Infrastructure resources, such as the hypervisor and physical hardware, are shared across you and other IBM customers, but each worker node is accessible only by you. Although this option is less expensive and sufficient in most cases, you might want to verify your performance and infrastructure requirements with your company policies. Virtual machines are billed hourly.
    • Virtual - dedicated: Your worker nodes are hosted on infrastructure that is devoted to your account. Your physical resources are completely isolated. Virtual machines are billed hourly.
    2. Set how many worker nodes to create per zone, such as 3. For example, if you selected 2 zones and want to create 3 worker nodes, a total of 6 worker nodes are provisioned in your cluster with 3 worker nodes in each zone. You must set at least 1 worker node. For more information, see What is the smallest size cluster that I can make?.
    3. Toggle disk encryption. By default, worker nodes feature AES 256-bit disk encryption.
  6. Configure your cluster with a private, public, or both a public and a private cloud service endpoint by setting the Master service endpoint. For more information about what setup is required to run internet-facing apps, or to keep your cluster private, see Planning your cluster network setup.

  7. If you do not have the required infrastructure permissions to create a cluster, the Infrastructure permissions checker lists the missing permissions. Ask your account owner to set up the API key with the required permissions.

  8. Complete the Resource details to customize the unique cluster name and any tags that you want to use to organize your {{site.data.keyword.cloud_notm}} resources, such as the team or billing department.

  9. In the Summary pane, review your order summary and then click Create. A worker pool is created with the number of workers that you specified. You can see the progress of the worker node deployment in the Worker nodes tab.

    • Your cluster might take some time to provision the Kubernetes master and all worker nodes and enter a Normal state. Note that even if the cluster is ready, some parts of the cluster that are used by other services, such as Ingress secrets or registry image pull secrets, might still be in process.
    • Every worker node is assigned a unique worker node ID and domain name that must not be changed manually after the cluster is created. Changing the ID or domain name prevents the Kubernetes master from managing your cluster.

      Is your cluster not in a Normal state? Check out the Debugging clusters guide for help. For example, if your cluster is provisioned in an account that is protected by a firewall gateway appliance, you must configure your firewall settings to allow outgoing traffic to the appropriate ports and IP addresses.

  10. After your cluster is created, you can begin working with your cluster by configuring your CLI session. For more possibilities, review the Next steps.
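For example, once the cluster appears in the console, you might confirm its details and worker node status from the CLI with commands like the following, replacing my_cluster with your cluster name:

ibmcloud ks cluster get --cluster my_cluster

{: pre}

ibmcloud ks worker ls --cluster my_cluster

{: pre}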


### Creating a standard classic cluster in the CLI
{: #clusters_cli_steps}

Classic infrastructure provider icon Create your single zone or multizone classic cluster by using the {{site.data.keyword.cloud_notm}} CLI. {: shortdesc}

Before you begin:


To create a classic cluster from the CLI:

  1. Log in to the {{site.data.keyword.cloud_notm}} CLI.

    1. Log in and enter your {{site.data.keyword.cloud_notm}} credentials when prompted. If you have a federated ID, use ibmcloud login --sso to log in to the {{site.data.keyword.cloud_notm}} CLI.

      ibmcloud login [--sso]
      

      {: pre}

    2. If you have multiple {{site.data.keyword.cloud_notm}} accounts, select the account where you want to create your Kubernetes cluster.

    3. To create clusters in a resource group other than default, target that resource group.

      • A cluster can be created in only one resource group, and after the cluster is created, you can't change its resource group.
      • You must have at least the Viewer role for the resource group.
      ibmcloud target -g <resource_group_name>
      

      {: pre}

  2. Review the zones where you can create your cluster. In the output of the following command, zones have a Location Type of dc. To span your cluster across zones, you must create the cluster in a multizone-capable zone. Multizone-capable zones have a metro value in the Multizone Metro column. If you want to create a multizone cluster, you can use the {{site.data.keyword.cloud_notm}} console, or add more zones to your cluster after the cluster is created.

    ibmcloud ks locations
    

    {: pre}

    When you select a zone that is located outside your country, keep in mind that you might require legal authorization before data can be physically stored in a foreign country.
    {: note}

  3. Review the worker node flavors that are available in that zone. The flavor determines the amount of virtual CPU, memory, and disk space that is set up in each worker node and made available to your apps. Worker nodes in classic clusters can be created as virtual machines on shared or dedicated infrastructure, or as bare metal machines that are dedicated to you. For more information, see Planning your worker node setup. After you create your cluster, you can add different flavors by adding a worker pool.

    Before you create a bare metal machine, be sure that you want to provision one. Bare metal machines are billed monthly. If you order a bare metal machine by mistake, you are charged for the entire month, even if you cancel the machine immediately.
    {:tip}

    ibmcloud ks flavors --zone <zone>
    

    {: pre}

  4. Check if you have existing VLANs in the zones that you want to include in your cluster, and note the ID of the VLAN. If you do not have a public or private VLAN in one of the zones that you want to use in your cluster, {{site.data.keyword.containerlong_notm}} automatically creates these VLANs for you when you create the cluster.

    ibmcloud ks vlan ls --zone <zone>
    

    {: pre}

    Example output:

    ID        Name   Number   Type      Router
    1519999   vlan   1355     private   bcr02a.dal10
    1519898   vlan   1357     private   bcr02a.dal10
    1518787   vlan   1252     public    fcr02a.dal10
    1518888   vlan   1254     public    fcr02a.dal10
    

    {: screen}

    • To create a cluster in which you can run internet-facing workloads, check to see whether a public and private VLAN exist. If a public and private VLAN already exist, note the matching routers. Private VLAN routers always begin with bcr (back-end router) and public VLAN routers always begin with fcr (front-end router). When you create a cluster and specify the public and private VLANs, the number and letter combination after those prefixes must match. In the example output, any private VLAN can be used with any public VLAN because the routers all include 02a.dal10.
    • To create a cluster that extends your on-premises data center on the private network only or provides limited public access through a gateway appliance, check to see whether a private VLAN exists. If you have a private VLAN, note the ID.
  5. Create your standard cluster.

    • To create a cluster in which you can run internet-facing workloads:

      ibmcloud ks cluster create classic --zone <zone> --flavor <flavor> --hardware <shared_or_dedicated> --public-vlan <public_VLAN_ID> --private-vlan <private_VLAN_ID> --workers <number> --name <cluster_name> --version <major.minor.patch> [--public-service-endpoint] [--private-service-endpoint] [--pod-subnet] [--service-subnet] [--disable-disk-encrypt]
      

      {: pre}

    • To create a cluster with private network connectivity only, such as to extend your on-premises data center:

      ibmcloud ks cluster create classic --zone <zone> --flavor <flavor> --hardware <shared_or_dedicated> --private-vlan <private_VLAN_ID> --private-only --workers <number> --name <cluster_name> --version <major.minor.patch> --public-service-endpoint [--pod-subnet] [--service-subnet] [--disable-disk-encrypt]
      

      {: pre}

    `cluster create classic` command components:

    - `--zone <zone>`: Specify the {{site.data.keyword.cloud_notm}} zone ID that you chose earlier and that you want to use to create your cluster.
    - `--flavor <flavor>`: Specify the flavor for your worker node that you chose earlier.
    - `--hardware <shared_or_dedicated>`: Specify the level of hardware isolation for your worker node. Use `dedicated` to have available physical resources dedicated to you only, or `shared` to allow physical resources to be shared with other IBM customers. The default is `shared`. This value is optional for VM standard clusters. For bare metal flavors, specify `dedicated`.
    - `--public-vlan <public_vlan_id>`: If you already have a public VLAN set up in your IBM Cloud infrastructure account for that zone, enter the ID of the public VLAN that you retrieved earlier. If you do not have a public VLAN in your account, do not specify this option. {{site.data.keyword.containerlong_notm}} automatically creates a public VLAN for you. Private VLAN routers always begin with `bcr` (back-end router) and public VLAN routers always begin with `fcr` (front-end router). When you create a cluster and specify the public and private VLANs, the number and letter combination after those prefixes must match.
    - `--private-vlan <private_vlan_id>`: If you already have a private VLAN set up in your IBM Cloud infrastructure account for that zone, enter the ID of the private VLAN that you retrieved earlier. If you do not have a private VLAN in your account, do not specify this option. {{site.data.keyword.containerlong_notm}} automatically creates a private VLAN for you. As with the public VLAN, the number and letter combination after the `bcr` or `fcr` router prefix must match.
    - `--private-only`: Create the cluster with private VLANs only. If you include this option, do not include the `--public-vlan` option.
    - `--name <name>`: Specify a name for your cluster. The name must start with a letter, can contain letters, numbers, periods (.), and hyphens (-), and must be 35 characters or fewer. Use a name that is unique across regions. The cluster name and the region in which the cluster is deployed form the fully qualified domain name for the Ingress subdomain. To ensure that the Ingress subdomain is unique within a region, the cluster name might be truncated and appended with a random value within the Ingress domain name.
    - `--workers <number>`: Specify the number of worker nodes to include in the cluster. If you do not specify this option, a cluster with the minimum value of 1 is created. For more information, see [What is the smallest size cluster that I can make?](/docs/containers?topic=containers-faqs#smallest_cluster).
    - `--version <major.minor.patch>`: The Kubernetes version for the cluster master node. This value is optional. When the version is not specified, the cluster is created with the default supported Kubernetes version. To see available versions, run `ibmcloud ks versions`.
    - `--public-service-endpoint`: Enable the public cloud service endpoint so that your Kubernetes master can be accessed over the public network, for example to run `kubectl` commands from your CLI, and so that your Kubernetes master and the worker nodes can communicate over the public VLAN. You can later disable the public cloud service endpoint if you want a private-only cluster. After you create the cluster, you can get the endpoint by running `ibmcloud ks cluster get --cluster <cluster_name_or_ID>`.
    - `--private-service-endpoint`: **In [VRF-enabled](/docs/account?topic=account-vrf-service-endpoint#vrf) and [service endpoint-enabled](/docs/account?topic=account-vrf-service-endpoint#service-endpoint) accounts**: Enable the private cloud service endpoint so that your Kubernetes master and the worker nodes can communicate over the private VLAN. In addition, you can choose to enable the public cloud service endpoint by using the `--public-service-endpoint` flag to access your cluster over the internet. If you enable the private cloud service endpoint only, you must be connected to the private VLAN to communicate with your Kubernetes master. After you enable a private cloud service endpoint, you cannot later disable it. After you create the cluster, you can get the endpoint by running `ibmcloud ks cluster get --cluster <cluster_name_or_ID>`.
    - `--pod-subnet`: All pods that are deployed to a worker node are assigned a private IP address in the 172.30.0.0/16 range by default. If you plan to connect your cluster to on-premises networks through {{site.data.keyword.BluDirectLink}} or a VPN service, you can avoid subnet conflicts by specifying a custom subnet CIDR that provides the private IP addresses for your pods. When you choose a subnet size, consider the size of the cluster that you plan to create and the number of worker nodes that you might add in the future. The subnet must have a CIDR of at least `/23`, which provides enough pod IPs for a maximum of four worker nodes in a cluster. For larger clusters, use `/22` to have enough pod IP addresses for eight worker nodes, `/21` to have enough pod IP addresses for 16 worker nodes, and so on. The subnet that you choose must be within one of the following ranges:
      - `172.17.0.0 - 172.17.255.255`
      - `172.21.0.0 - 172.31.255.255`
      - `192.168.0.0 - 192.168.254.255`
      - `198.18.0.0 - 198.19.255.255`

      Note that the pod and service subnets cannot overlap. The service subnet is in the 172.21.0.0/16 range by default.
    - `--service-subnet`: All services that are deployed to the cluster are assigned a private IP address in the 172.21.0.0/16 range by default. If you plan to connect your cluster to on-premises networks through {{site.data.keyword.dl_full_notm}} or a VPN service, you can avoid subnet conflicts by specifying a custom subnet CIDR that provides the private IP addresses for your services. The subnet must be specified in CIDR format with a size of at least `/24`, which allows a maximum of 255 services in the cluster, or larger. The subnet that you choose must be within one of the same ranges that are listed for `--pod-subnet`. Note that the pod and service subnets cannot overlap. The pod subnet is in the 172.30.0.0/16 range by default.
    - `--disable-disk-encrypt`: Worker nodes feature AES 256-bit [disk encryption](/docs/containers?topic=containers-security#encrypted_disk) by default. If you want to disable encryption, include this option.
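    For example, a sketch that combines several of these options into one command, using the example VLAN IDs from the earlier `ibmcloud ks vlan ls` output and custom, non-overlapping pod and service subnets, might look like the following. Adjust every value for your own account before you run it.

      ibmcloud ks cluster create classic --name my_cluster --zone dal10 --flavor b3c.4x16 --hardware shared --workers 3 --public-vlan 1518787 --private-vlan 1519999 --version 1.20.6 --public-service-endpoint --private-service-endpoint --pod-subnet 192.168.0.0/22 --service-subnet 192.168.4.0/24

      {: pre}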
  6. Verify that the creation of the cluster was requested. For virtual machines, it can take a few minutes for the worker node machines to be ordered, and for the cluster to be set up and provisioned in your account. Bare metal physical machines are provisioned by manual interaction with IBM Cloud infrastructure, and can take more than one business day to complete.

    ibmcloud ks cluster ls
    

    {: pre}

    When the provisioning of your Kubernetes master is completed, the State of your cluster changes to deployed. After your Kubernetes master is ready, the provisioning of your worker nodes is initiated.

    Name         ID                         State      Created          Workers    Zone      Version     Resource Group Name   Provider
    mycluster    blrs3b1d0p0p2f7haq0g       deployed   20170201162433   3          dal10     1.20.6      Default             classic
    

    {: screen}

    Is your cluster not in a deployed state? Check out the Debugging clusters guide for help. For example, if your cluster is provisioned in an account that is protected by a firewall gateway appliance, you must configure your firewall settings to allow outgoing traffic to the appropriate ports and IP addresses. {: tip}

  7. Check the status of the worker nodes.

    ibmcloud ks worker ls --cluster <cluster_name_or_ID>
    

    {: pre}

    When the worker nodes are ready, the worker node state changes to normal and the status changes to Ready. When the node status is Ready, you can then access the cluster. Note that even if the cluster is ready, some parts of the cluster that are used by other services, such as Ingress secrets or registry image pull secrets, might still be in process. Note that if you created your cluster with a private VLAN only, no Public IP addresses are assigned to your worker nodes.

    ID                                                     Public IP        Private IP     Flavor              State    Status   Zone    Version
    kube-blrs3b1d0p0p2f7haq0g-mycluster-default-000001f7   169.xx.xxx.xxx  10.xxx.xx.xxx   u3c.2x4.encrypted   normal   Ready    dal10   1.20.6
    

    {: screen}

    Every worker node is assigned a unique worker node ID and domain name that must not be changed manually after the cluster is created. Changing the ID or domain name prevents the Kubernetes master from managing your cluster. {: important}

  8. Optional: If you created your cluster in a multizone metro location, you can spread the default worker pool across zones to increase the cluster's availability, as shown in the example after these steps.

  9. After your cluster is created, you can begin working with your cluster by configuring your CLI session.
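For example, if you created the cluster in the dal10 zone of the Dallas multizone metro, a sketch of spreading the default worker pool into a second zone might look like the following. The dal12 zone is used only as an example of another zone in the same metro, and the VLAN IDs must be public and private VLANs that exist in that zone.

ibmcloud ks zone add classic --zone dal12 --cluster my_cluster --worker-pool default --private-vlan <private_VLAN_ID> --public-vlan <public_VLAN_ID>

{: pre}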

Your cluster is ready for your workloads! You might also want to add a tag to your cluster, such as the team or billing department that uses the cluster, to help manage {{site.data.keyword.cloud_notm}} resources. For more ideas of what to do with your cluster, review the Next steps.


### Creating a standard classic cluster with a gateway in the CLI
{: #gateway_cluster_cli}

Classic infrastructure provider icon Use the {{site.data.keyword.cloud_notm}} CLI to create a standard, gateway-enabled cluster on classic infrastructure. {: shortdesc}

When you enable a gateway on a classic cluster, the cluster is created with a compute worker pool of compute worker nodes that are connected to a private VLAN only, and a gateway worker pool of gateway worker nodes that are connected to public and private VLANs. Traffic into or out of the cluster is routed through the gateway worker nodes, which provide your cluster with limited public access. For more information about the network setup for gateway-enabled clusters, see Using a gateway-enabled cluster.

Before you begin:


To create a classic gateway cluster:

  1. Log in to the {{site.data.keyword.cloud_notm}} CLI.

    1. Log in and enter your {{site.data.keyword.cloud_notm}} credentials when prompted.

      ibmcloud login
      

      {: pre}

      If you have a federated ID, use ibmcloud login --sso to log in to the {{site.data.keyword.cloud_notm}} CLI. Enter your username and use the provided URL in your CLI output to retrieve your one-time passcode. You know that you have a federated ID when the login fails without the --sso and succeeds with the --sso option. {: tip}

    2. If you have multiple {{site.data.keyword.cloud_notm}} accounts, select the account where you want to create your Kubernetes cluster.

    3. To create clusters in a resource group other than default, target that resource group.

      • A cluster can be created in only one resource group, and after the cluster is created, you can't change its resource group.
      • You must have at least the Viewer role for the resource group.
      ibmcloud target -g <resource_group_name>
      

      {: pre}

  2. Review the zones where you can create your cluster. In the output of the following command, zones have a Location Type of dc. To span your cluster across zones, you must create the cluster in a multizone-capable zone. Multizone-capable zones have a metro value in the Multizone Metro column. If you want to create a multizone cluster, you can use the {{site.data.keyword.cloud_notm}} console, or add more zones to your cluster after the cluster is created.

    ibmcloud ks locations
    

    {: pre}

    When you select a zone that is located outside your country, keep in mind that you might require legal authorization before data can be physically stored in a foreign country.

  3. Review the compute worker node flavors that are available in that zone. The flavor determines the amount of virtual CPU, memory, and disk space that is set up in each worker node and made available to your apps. Worker nodes in classic clusters can be created as virtual machines on shared or dedicated infrastructure, or as bare metal machines that are dedicated to you. For more information, see Planning your worker node setup. After you create your cluster, you can add different flavors by adding a worker pool. Note that the gateway worker nodes are created with the u3c.2x4 flavor by default. If you want to change the isolation and flavor of the gateway worker nodes after you create your cluster, you can create a new gateway worker pool to replace the gateway worker pool that is created by default.

    Before you create a bare metal machine, be sure that you want to provision one. Bare metal machines are billed monthly. If you order a bare metal machine by mistake, you are charged for the entire month, even if you cancel the machine immediately. {:tip}

    ibmcloud ks flavors --zone <zone>
    

    {: pre}

  4. In the zone where you want to create your cluster, check to see whether a public and private VLAN exist. If a public and private VLAN already exist, and they have matching routers, note the VLAN IDs. Private VLAN routers always begin with bcr (back-end router) and public VLAN routers always begin with fcr (front-end router). When you create a cluster and specify the public and private VLANs, the number and letter combination after those prefixes must match. If you do not have a public or private VLAN in the zone that you want to use in your cluster, or if you do not have VLANs that have matching routers, {{site.data.keyword.containerlong_notm}} automatically creates these VLANs for you when you create the cluster.

    ibmcloud ks vlan ls --zone <zone>
    

    {: pre}

    In this example output, any private VLAN can be used with any public VLAN because the routers all include 02a.dal10.

    ID        Name   Number   Type      Router
    1519999   vlan   1355     private   bcr02a.dal10
    1519898   vlan   1357     private   bcr02a.dal10
    1518787   vlan   1252     public    fcr02a.dal10
    1518888   vlan   1254     public    fcr02a.dal10
    

    {: screen}

  5. Create your gateway-enabled cluster.

ibmcloud ks cluster create classic --zone <single_zone> --gateway-enabled --flavor <flavor> --hardware <shared_or_dedicated> --public-vlan <public_VLAN_ID> --private-vlan <private_VLAN_ID> --workers <number> --name <cluster_name> --version 1.20.6 --private-service-endpoint --public-service-endpoint [--pod-subnet] [--service-subnet] [--disable-disk-encrypt]

{: pre}

<table summary="The columns are read from left to right. The first column has the parameter of the command. The second column describes the parameter.">
<caption>cluster create classic components</caption>
<col width="25%">
<thead>
<th>Parameter</th>
<th>Description</th>
</thead>
<tbody>
<tr>
<td><code>--zone <em>&lt;zone&gt;</em></code></td>
<td>Specify the {{site.data.keyword.cloud_notm}} zone ID that you chose earlier and that you want to use to create your cluster.</td>
</tr>
<tr>
<td><code>--gateway-enabled</code></td>
<td>Create a cluster with a `gateway` worker pool of two gateway worker nodes that are connected to public and private VLANs to provide limited public access, and a `compute` worker pool of compute worker nodes that are connected to the private VLAN only. You can specify the number of compute nodes that are created in the `--workers` option. Note that you can later resize both the `compute` and `gateway` worker nodes by using the `ibmcloud ks worker-pool resize` command.</td>
</tr>
<tr>
<td><code>--flavor <em>&lt;flavor&gt;</em></code></td>
<td>Specify the flavor for your compute worker nodes that you chose earlier. Note that the gateway worker nodes are created with the `u3c.2x4` flavor by default. If you want to change the isolation and flavor of the gateway worker nodes, you can [create a new gateway worker pool](/docs/containers?topic=containers-add_workers#gateway_replace) to replace the gateway worker pool that is created by default.</td>
</tr>
<tr>
<td><code>--hardware <em>&lt;shared_or_dedicated&gt;</em></code></td>
<td>Specify the level of hardware isolation for your worker node. Use <code>dedicated</code> to have available physical resources dedicated to you only, or <code>shared</code> to allow physical resources to be shared with other IBM customers. The default is shared. This value is optional for VM standard clusters. For bare metal flavors, specify <code>dedicated</code>.</td>
</tr>
<tr>
<td><code>--public-vlan <em>&lt;public_vlan_id&gt;</em></code></td>
<td>If you already have a public VLAN set up in your IBM Cloud infrastructure account for that zone, enter the ID of the public VLAN that you retrieved earlier. If you do not have a public VLAN in your account, do not specify this option. {{site.data.keyword.containerlong_notm}} automatically creates a public VLAN for you.</td>
</tr>
<tr>
<td><code>--private-vlan <em>&lt;private_vlan_id&gt;</em></code></td>
<td>If you already have a private VLAN set up in your IBM Cloud infrastructure account for that zone, enter the ID of the private VLAN that you retrieved earlier. If you do not have a private VLAN in your account, do not specify this option. {{site.data.keyword.containerlong_notm}} automatically creates a private VLAN for you.</td>
</tr>
<tr>
<td><code>--name <em>&lt;name&gt;</em></code></td>
<td>Specify a name for your cluster. The name must start with a letter, can contain letters, numbers, periods (.), and hyphen (-), and must be 35 characters or fewer. Use a name that is unique across regions. The cluster name and the region in which the cluster is deployed form the fully qualified domain name for the Ingress subdomain. To ensure that the Ingress subdomain is unique within a region, the cluster name might be truncated and appended with a random value within the Ingress domain name.</td>
</tr>
<tr>
<td><code>--workers <em>&lt;number&gt;</em></code></td>
<td>Specify the number of compute worker nodes to include in the `compute` worker pool. If the <code>--workers</code> option is not specified, one compute worker node is created. Note that the `gateway` worker pool is created with two gateway worker nodes by default.</td>
</tr>
<tr>
<td><code>--version <em>&lt;major.minor.patch&gt;</em></code></td>
<td>The Kubernetes version for the cluster master node. This value is optional. When the version is not specified, the cluster is created with the default supported Kubernetes version. To see available versions, run <code>ibmcloud ks versions</code>.</td>
</tr>
<tr>
<td><code>--private-service-endpoint</code></td>
<td>Enable the private cloud service endpoint so that your Kubernetes master and the worker nodes can communicate over the private VLAN. In addition, you can choose to enable the public cloud service endpoint by using the `--public-service-endpoint` flag to access your cluster over the internet. If you enable the private cloud service endpoint only, you must be connected to the private VLAN to communicate with your Kubernetes master. After you enable a private cloud service endpoint, you cannot later disable it.<br><br>After you create the cluster, you can get the endpoint by running `ibmcloud ks cluster get --cluster <cluster_name_or_ID>`.</td>
</tr>
<tr>
<td><code>--public-service-endpoint</code></td>
<td>Enable the public cloud service endpoint so that your Kubernetes master can be accessed over the public network, for example to run `kubectl` commands from your command line, and so that your Kubernetes master and the worker nodes can communicate over the public VLAN. You can later disable the public cloud service endpoint if you want a private-only cluster.<br><br>After you create the cluster, you can get the endpoint by running `ibmcloud ks cluster get --cluster <cluster_name_or_ID>`.</td>
</tr>
<tr>
<td><code>--pod-subnet</code></td>
<td>All pods that are deployed to a worker node are assigned a private IP address in the 172.30.0.0/16 range by default. If you plan to connect your cluster to on-premises networks through {{site.data.keyword.BluDirectLink}} or a VPN service, you can avoid subnet conflicts by specifying a custom subnet CIDR that provides the private IP addresses for your pods.
<p>When you choose a subnet size, consider the size of the cluster that you plan to create and the number of worker nodes that you might add in the future. The subnet must have a CIDR of at least <code>/23</code>, which provides enough pod IPs for a maximum of four worker nodes in a cluster. For larger clusters, use <code>/22</code> to have enough pod IP addresses for eight worker nodes, <code>/21</code> to have enough pod IP addresses for 16 worker nodes, and so on.</p>
<p>The subnet that you choose must be within one of the following ranges:
<ul><li><code>172.17.0.0 - 172.17.255.255</code></li>
<li><code>172.21.0.0 - 172.31.255.255</code></li>
<li><code>192.168.0.0 - 192.168.254.255</code></li>
<li><code>198.18.0.0 - 198.19.255.255</code></li></ul>Note that the pod and service subnets cannot overlap. The service subnet is in the 172.21.0.0/16 range by default.</p></td>
</tr>
<tr>
<td><code>--service-subnet</code></td>
<td>All services that are deployed to the cluster are assigned a private IP address in the 172.21.0.0/16 range by default. If you plan to connect your cluster to on-premises networks through {{site.data.keyword.dl_full_notm}} or a VPN service, you can avoid subnet conflicts by specifying a custom subnet CIDR that provides the private IP addresses for your services.
<p>The subnet must be specified in CIDR format with a size of at least <code>/24</code>, which allows a maximum of 255 services in the cluster, or larger. The subnet that you choose must be within one of the following ranges:
<ul><li><code>172.17.0.0 - 172.17.255.255</code></li>
<li><code>172.21.0.0 - 172.31.255.255</code></li>
<li><code>192.168.0.0 - 192.168.254.255</code></li>
<li><code>198.18.0.0 - 198.19.255.255</code></li></ul>Note that the pod and service subnets cannot overlap. The pod subnet is in the 172.30.0.0/16 range by default.</p></td>
</tr>
<tr>
<td><code>--disable-disk-encrypt</code></td>
<td>Worker nodes feature AES 256-bit [disk encryption](/docs/containers?topic=containers-security#encrypted_disk) by default. If you want to disable encryption, include this option.</td>
</tr>
</tbody></table>
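As noted for the `--gateway-enabled` option, you can resize the `compute` and `gateway` worker pools after the cluster exists. A sketch of resizing both pools might look like the following; the pool names are the defaults that are created for a gateway-enabled cluster, and the sizes are examples only.

ibmcloud ks worker-pool resize --cluster <cluster_name_or_ID> --worker-pool compute --size-per-zone 5

{: pre}

ibmcloud ks worker-pool resize --cluster <cluster_name_or_ID> --worker-pool gateway --size-per-zone 3

{: pre}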

7. Verify that the creation of the cluster was requested. For virtual machines, it can take a few minutes for the worker node machines to be ordered, and for the cluster to be set up and provisioned in your account. Bare metal physical machines are provisioned by manual interaction with IBM Cloud infrastructure, and can take more than one business day to complete.

ibmcloud ks cluster ls

{: pre}

When the provisioning of your Kubernetes master is completed, the **State** of your cluster changes to `deployed`. After your Kubernetes master is ready, the provisioning of your worker nodes is initiated.

Name        ID                     State      Created       Workers   Location   Version       Resource Group Name   Provider
mycluster   blbfcbhd0p6lse558lgg   deployed   1 month ago   1         Dallas     1.20.6_1515   default               classic

{: screen}

Is your cluster not in a **deployed** state? Check out the [Debugging clusters](/docs/containers?topic=containers-cs_troubleshoot) guide for help.
{: tip}

8. Check the status of the worker nodes. When the worker nodes are ready, the worker node **State** changes to `normal` and the **Status** changes to `Ready`. When the node **Status** changes to `Ready`, you can then access the cluster. Note that even if the cluster is ready, some parts of the cluster that are used by other services, such as Ingress secrets or registry image pull secrets, might still be in process.

ibmcloud ks worker ls --cluster <cluster_name_or_ID>

{: pre}

In the output, verify that the number of compute worker nodes that you specified in the `--workers` flag and two gateway worker nodes are provisioned. In the following example output, three compute and two gateway worker nodes are provisioned.

ID                                                     Public IP        Private IP      Flavor               State    Status   Zone    Version
kube-blrs3b1d0p0p2f7haq0g-mycluster-compute-000001f7   -                10.xxx.xx.xxx   b3c.4x16.encrypted   normal   Ready    dal10   1.20.6
kube-blrs3b1d0p0p2f7haq0g-mycluster-compute-000004ea   -                10.xxx.xx.xxx   b3c.4x16.encrypted   normal   Ready    dal10   1.20.6
kube-blrs3b1d0p0p2f7haq0g-mycluster-compute-000003d6   -                10.xxx.xx.xxx   b3c.4x16.encrypted   normal   Ready    dal10   1.20.6
kube-blrs3b1d0p0p2f7haq0g-mycluster-gateway-000004ea   169.xx.xxx.xxx   10.xxx.xx.xxx   u3c.2x4.encrypted    normal   Ready    dal10   1.20.6
kube-blrs3b1d0p0p2f7haq0g-mycluster-gateway-000003d6   169.xx.xxx.xxx   10.xxx.xx.xxx   u3c.2x4.encrypted    normal   Ready    dal10   1.20.6

{: screen}

Every worker node is assigned a unique worker node ID and domain name that must not be changed manually after the cluster is created. Changing the ID or domain name prevents the Kubernetes master from managing your cluster.
{: important}

9. **Optional**: If you created your cluster in a [multizone metro location](/docs/containers?topic=containers-regions-and-zones#zones-mz), you can [spread the default worker pool across zones](/docs/containers?topic=containers-add_workers#add_gateway_zone) to increase the cluster's availability.

10. After your cluster is created, you can [begin working with your cluster by configuring your CLI session](/docs/containers?topic=containers-access_cluster).

Your cluster is ready for your workloads! You might also want to [add a tag to your cluster](/docs/containers?topic=containers-add_workers#cluster_tags), such as the team or billing department that uses the cluster, to help manage {{site.data.keyword.cloud_notm}} resources. For more ideas of what to do with your cluster, review the [Next steps](/docs/containers?topic=containers-clusters#next_steps).
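For example, to retrieve the public and private cloud service endpoint URLs that were set up for your gateway-enabled cluster, you might run:

ibmcloud ks cluster get --cluster <cluster_name_or_ID>

{: pre}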



<br />

## Creating a standard VPC cluster
{: #clusters_vpcg2}

<img src="images/icon-vpc.png" alt="VPC infrastructure provider icon" width="15" style="width:15px; border-style: none"/> Use the {{site.data.keyword.cloud_notm}} CLI or the {{site.data.keyword.cloud_notm}} console to create a standard VPC cluster, and customize your cluster to meet the high availability and security requirements of your apps.
{: shortdesc}

### Creating a standard VPC cluster in the console
{: #clusters_vpcg2_ui}

<img src="images/icon-vpc.png" alt="VPC infrastructure provider icon" width="15" style="width:15px; border-style: none"/> Create your single zone or multizone VPC cluster by using the {{site.data.keyword.cloud_notm}} console.
{: shortdesc}



1. Make sure that you complete the prerequisites to [prepare your account](#cluster_prepare) and decide on your [cluster setup](#prepare_cluster_level).
2. [Create a Virtual Private Cloud (VPC) on generation 2 compute](https://cloud.ibm.com/vpc/provision/vpc){: external} with a subnet that is located in the VPC zone where you want to create the cluster.
* During the VPC creation, you can create one subnet only. Subnets are specific to a zone. If you want to create a multizone cluster, create the subnet in one of the multizone-capable zones that you want to use. Later, you manually create the subnets for the remaining zones that you want to include in your cluster.<p class="important">Do not delete the subnets that you attach to your cluster during cluster creation or when you add worker nodes in a zone. If you delete a VPC subnet that your cluster used, any load balancers that use IP addresses from the subnet might experience issues, and you might be unable to create new load balancers.</p>
* If worker nodes must access public endpoints, attach a public gateway to each subnet.
* If you require access to classic infrastructure resources, you must follow the steps in [Creating VPC subnets for classic access](/docs/containers?topic=containers-vpc-subnets#ca_subnet_ui) to create a classic access VPC and VPC subnets without the automatic default address prefixes.
* For more information, see [Creating a VPC using the IBM Cloud console](/docs/vpc?topic=vpc-creating-a-vpc-using-the-ibm-cloud-console) and [Overview of VPC networking in {{site.data.keyword.containerlong_notm}}: Subnets](/docs/containers?topic=containers-vpc-subnets#vpc_basics_subnets).
3. If you want to create a multizone cluster, create the subnets for all of the remaining zones that you want to include in your cluster. You must have one VPC subnet in all of the zones where you want to create your multizone cluster.
1. From the [VPC subnet dashboard](https://cloud.ibm.com/vpc/network/subnets){: external}, click **New subnet**.
2. Enter a name for your subnet.
3. Select the location of your VPC and zone where you want to create the subnet.
4. Select the name of the VPC that you created.
5. Specify the number of IP addresses to create. VPC subnets provide IP addresses for your worker nodes and load balancer services in the cluster, so [create a VPC subnet with enough IP addresses](/docs/containers?topic=containers-vpc-subnets#vpc_basics_subnets), such as 256. You cannot change the number of IPs that a VPC subnet has later. If you enter a specific IP range, do not use the following reserved ranges: `172.16.0.0/16`, `172.18.0.0/16`, `172.19.0.0/16`, and `172.20.0.0/16`.
6. Choose if you want to attach a public network gateway to your subnet. A public network gateway is required when you want your cluster to access public endpoints, such as a public URL of another app or an {{site.data.keyword.cloud_notm}} service that supports public cloud service endpoints only. Make sure to review the [VPC networking basics](/docs/containers?topic=containers-plan_clusters#plan_vpc_basics) to understand when a public network gateway is required and how you can set up your cluster to limit public access to one or more subnets only.
7. Click **Create subnet**.
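 If you prefer to create the remaining subnets from the CLI instead, the following is a minimal sketch that uses the VPC CLI plug-in. The subnet name and zone are placeholder values; repeat the command for each remaining zone, and set up a public gateway only if the subnet needs public access.
 ```
 ibmcloud is subnet-create my-subnet-us-south-2 <vpc_ID> --zone us-south-2 --ipv4-address-count 256
 ```
 {: pre}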
4. From the [Kubernetes clusters console](https://cloud.ibm.com/kubernetes/clusters){: external}, click **Create cluster**.
5. Configure your cluster environment.
1. Select the **Standard** cluster plan.
2. From the Kubernetes drop-down list, select the version that you want to use in your cluster. You must choose **Kubernetes 1.17 or later**.
3. Select **VPC** infrastructure.
4. From the **Virtual private cloud** drop-down menu, select the VPC that you created earlier.
6. Configure the **Location** details for your cluster.
1. Select the **Resource group** that you want to create your cluster in.
   * A cluster can be created in only one resource group, and after the cluster is created, you cannot change its resource group.
   * To create clusters in a resource group other than the default, you must have at least the [**Viewer** role](/docs/containers?topic=containers-users#platform) for the resource group.
2. Select the zones to create your cluster in.
   * The zones are filtered based on the VPC that you selected, and include the VPC subnets that you previously created.
   * To create a [single zone cluster](/docs/containers?topic=containers-ha_clusters#single_zone), select one zone only. If you select only one zone, you can [add zones to your cluster](/docs/containers?topic=containers-add_workers#add_zone) after the cluster is created.
   * To create a [multizone cluster](/docs/containers?topic=containers-ha_clusters#multizone), select multiple zones.
7. Configure your **Worker pool** setup. Worker pools are groups of worker nodes that share the same configuration. You can always add more worker pools to your cluster later.
1.  If you want a larger size for your worker nodes, click **Change flavor**. The flavor defines the amount of virtual CPU, memory, and disk space that is set up in each worker node and made available to the containers. Available bare metal and virtual machine types vary by the zone in which you deploy the cluster. For more information, see [Planning your worker node setup](/docs/containers?topic=containers-planning_worker_nodes). After you create your cluster, you can add different flavors by adding a worker pool to the cluster.
   * **Default**: The default flavor is **Virtual - shared, Ubuntu 18**, which comes with **4 vCPUs** of computing power and **16 GB** of memory. This virtual flavor is billed hourly. Other types of flavors include the following.
   * **Bare metal**: Bare metal servers are provisioned manually by IBM Cloud infrastructure after you order, and can take more than one business day to complete. Bare metal is best suited for high-performance applications that need more resources and host control. Be sure that you want to provision a bare metal machine. Because it is billed monthly, if you cancel it immediately after an order by mistake, you are still charged the full month.
   * **Virtual - shared**: Infrastructure resources, such as the hypervisor and physical hardware, are shared across you and other IBM customers, but each worker node is accessible only by you. Although this option is less expensive and sufficient in most cases, you might want to verify your performance and infrastructure requirements with your company policies. Virtual machines are billed hourly.
   * **Virtual - dedicated**: Your worker nodes are hosted on infrastructure that is devoted to your account. Your physical resources are completely isolated. Virtual machines are billed hourly.
2. Set how many worker nodes to create per zone, such as **3**. For example, if you selected 2 zones and want to create 3 worker nodes, a total of 6 worker nodes are provisioned in your cluster with 3 worker nodes in each zone. You must set at least 1 worker node. For more information, see [What is the smallest size cluster that I can make?](/docs/containers?topic=containers-faqs#smallest_cluster).
3. Toggle disk encryption. By default, [worker nodes feature AES 256-bit disk encryption](/docs/containers?topic=containers-security#workernodes).
8. Configure your cluster with a private or a public and a private cloud service endpoint by setting the **Master service endpoint**. For more information about what setup is required to run internet-facing apps, or to keep your cluster private, see [Planning your cluster network setup](/docs/containers?topic=containers-plan_clusters#vpc-workeruser-master).
9. If you do not have the required infrastructure permissions to create a cluster, the **Infrastructure permissions checker** lists the missing permissions. Ask your account owner to [set up the API key](/docs/containers?topic=containers-users#api_key) with the required permissions.
10. Complete the **Resource details** to customize the unique cluster name and any [tags](/docs/account?topic=account-tag) that you want to use to organize your {{site.data.keyword.cloud_notm}} resources, such as the team or billing department.
11. In the **Summary** pane, review the order summary and then click **Create**. A worker pool is created with the number of workers that you specified. You can see the progress of the worker node deployment in the **Worker nodes** tab.
*  Your cluster might take some time to provision the Kubernetes master and all worker nodes and to enter a **Normal** state. Note that even if the cluster is ready, some parts of the cluster that are used by other services, such as Ingress secrets or registry image pull secrets, might still be in process.
*  Every worker node is assigned a unique worker node ID and domain name that must not be changed manually after the cluster is created. Changing the ID or domain name prevents the Kubernetes master from managing your cluster.<p class="tip">Is your cluster not in a **Normal** state? Check out the [Debugging clusters](/docs/containers?topic=containers-cs_troubleshoot) guide for help. For example, if your cluster is provisioned in an account that is protected by a firewall gateway appliance, you must [configure your firewall settings to allow outgoing traffic to the appropriate ports and IP addresses](/docs/containers?topic=containers-firewall#firewall_outbound).</p>
12. After your cluster is created, you can [begin working with your cluster by configuring your CLI session](/docs/containers?topic=containers-access_cluster). For more possibilities, review the [Next steps](/docs/containers?topic=containers-clusters#next_steps).
13. Kubernetes version 1.18 or earlier only: To allow any traffic requests to apps that you deploy on your worker nodes, modify the VPC's default security group.
1. From the [Virtual private cloud dashboard](https://cloud.ibm.com/vpc-ext/network/vpcs){: external}, click the name of the **Default Security Group** for the VPC that you created.
2. In the **Inbound rules** section, click **New rule**.
3. Choose the **TCP** protocol, enter `30000` for the **Port min** and `32767` for the **Port max**, and leave the **Any** source type selected.
4. Click **Save**.
5. If you require VPC VPN access or classic infrastructure access into this cluster, repeat these steps to add a rule that uses the **UDP** protocol, `30000` for the **Port min**, `32767` for the **Port max**, and the **Any** source type.

### Creating standard VPC clusters from the CLI
{: #cluster_vpcg2_cli}

<img src="images/icon-vpc.png" alt="VPC infrastructure provider icon" width="15" style="width:15px; border-style: none"/> Create your single zone or multizone VPC cluster by using the {{site.data.keyword.cloud_notm}} CLI.
{: shortdesc}

**Before you begin**:
* Make sure that you complete the prerequisites to [prepare your account](#cluster_prepare) and decide on your [cluster setup](#prepare_cluster_level).
* Install the {{site.data.keyword.cloud_notm}} CLI and the [{{site.data.keyword.containerlong_notm}} plug-in](/docs/containers?topic=containers-cs_cli_install#cs_cli_install).
* Install the [VPC CLI plug-in](/docs/vpc?topic=vpc-infrastructure-cli-plugin-vpc-reference#cli-ref-prereqs).

<br />

**To create a VPC cluster from the CLI**:

1. In your command line, log in to your {{site.data.keyword.cloud_notm}} account and target the {{site.data.keyword.cloud_notm}} region and resource group where you want to create your VPC cluster. For supported regions, see [Creating a VPC in a different region](/docs/vpc?topic=vpc-creating-a-vpc-in-a-different-region). Enter your {{site.data.keyword.cloud_notm}} credentials when prompted. If you have a federated ID, use the `--sso` option to log in.
 ```
 ibmcloud login -r <region> [-g <resource_group>] [--sso]
 ```
 {: pre}

2. [Create a VPC](/docs/vpc?topic=vpc-creating-a-vpc-using-cli#create-a-vpc-cli) in the same region where you want to create the cluster.<p class="important">Do the clusters of worker nodes in your VPC need to send and receive information to and from IBM Cloud classic infrastructure? Follow the steps in [Creating VPC subnets for classic access](/docs/containers?topic=containers-vpc-subnets#ca_subnet_cli) to create a classic-enabled VPC and VPC subnets without the automatic default address prefixes.</p>
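 For example, a minimal sketch that creates a VPC with the VPC CLI plug-in. The VPC name is a placeholder, and the optional flags might vary by plug-in version; include `--classic-access` only if you need a classic-enabled VPC.
 ```
 ibmcloud is vpc-create my-vpc [--classic-access] [--resource-group-name <resource_group>]
 ```
 {: pre}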
3. [Create a subnet for your VPC](/docs/vpc?topic=vpc-creating-a-vpc-using-cli#create-a-subnet-cli). An example command sketch follows this list.
* If you want to create a [multizone cluster](/docs/containers?topic=containers-ha_clusters#multizone), repeat this step to create additional subnets in all of the zones that you want to include in your cluster.
* VPC subnets provide IP addresses for your worker nodes and load balancer services in the cluster, so [create a VPC subnet with enough IP addresses](/docs/containers?topic=containers-vpc-subnets#vpc_basics_subnets), such as 256. You cannot change the number of IPs that a VPC subnet has later.
* Do not use the following reserved ranges: `172.16.0.0/16`, `172.18.0.0/16`, `172.19.0.0/16`, and `172.20.0.0/16`.
* If worker nodes must access public endpoints, [attach a public gateway](/docs/vpc?topic=vpc-creating-a-vpc-using-cli#attach-public-gateway-cli) to each subnet.
* **Important**: Do not delete the subnets that you attach to your cluster during cluster creation or when you add worker nodes in a zone. If you delete a VPC subnet that your cluster used, any load balancers that use IP addresses from the subnet might experience issues, and you might be unable to create new load balancers.
* For more information, see [Overview of VPC networking in {{site.data.keyword.containerlong_notm}}: Subnets](/docs/containers?topic=containers-vpc-subnets#vpc_basics_subnets).
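 For example, a minimal sketch that creates a subnet with 256 IP addresses and a public gateway in one zone. The names and zone are placeholders, and the exact flags might vary by VPC CLI plug-in version; attach the gateway to the subnet as described in the linked VPC documentation if your worker nodes need public access.
 ```
 ibmcloud is subnet-create my-subnet-us-south-1 <vpc_ID> --zone us-south-1 --ipv4-address-count 256
 ibmcloud is public-gateway-create my-gateway-us-south-1 <vpc_ID> us-south-1
 ```
 {: pre}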
4. Create the cluster in your VPC. You can use the `ibmcloud ks cluster create vpc-gen2` command to create a single zone cluster in your VPC with worker nodes that are connected to one VPC subnet only. If you want to create a multizone cluster, you can use the {{site.data.keyword.cloud_notm}} console, or [add more zones](/docs/containers?topic=containers-add_workers#vpc_add_zone) to your cluster after the cluster is created. The cluster takes a few minutes to provision.
 ```
 ibmcloud ks cluster create vpc-gen2 --name <cluster_name> --zone <vpc_zone> --vpc-id <vpc_ID> --subnet-id <vpc_subnet_ID> --flavor <worker_flavor> [--version <major.minor.patch>] [--workers <number_workers_per_zone>] [--pod-subnet <subnet_CIDR>] [--service-subnet <subnet_CIDR>] [--disable-public-service-endpoint]
 ```
 {: pre}

 <table summary="The columns are read from left to right. The first column has the parameter of the command. The second column describes the parameter.">
 <caption>Cluster create components</caption>
 <col width="25%">
 <thead>
 <th>Parameter</th>
 <th>Description</th>
 </thead>
 <tbody>
 <tr>
 <td><code>--name <em>&lt;cluster_name&gt;</em></code></td>
 <td>Specify a name for your cluster. The name must start with a letter, can contain letters, numbers, periods (.), and hyphen (-), and must be 35 characters or fewer. Use a name that is unique across regions. The cluster name and the region in which the cluster is deployed form the fully qualified domain name for the Ingress subdomain. To ensure that the Ingress subdomain is unique within a region, the cluster name might be truncated and appended with a random value within the Ingress domain name.</td>
 </tr>
 <tr>
 <td><code>--zone <em>&lt;zone&gt;</em></code></td>
 <td>Specify the {{site.data.keyword.cloud_notm}} zone where you want to create your cluster. Make sure that you use a zone that matches the metro city location that you selected when you created your VPC and that you have an existing VPC subnet for that zone. For example, if you created your VPC in the Dallas metro city, your zone must be set to <code>us-south-1</code>, <code>us-south-2</code>, or <code>us-south-3</code>. To list available VPC cluster zones, run <code>ibmcloud ks zone ls --provider vpc-gen2</code>. Note that when you select a zone outside of your country, you might require legal authorization before data can be physically stored in a foreign country.</td>
 </tr>
 <tr>
 <td><code>--vpc-id <em>&lt;vpc_ID&gt;</em></code></td>
 <td>Enter the ID of the VPC that you created earlier. To retrieve the ID of your VPC, run <code>ibmcloud ks vpcs</code>. </td>
 </tr>
 <tr>
 <td><code>--subnet-id <em>&lt;subnet_ID&gt;</em></code></td>
 <td>Enter the ID of the VPC subnet that you created earlier. When you create a VPC cluster from the CLI, you can initially create your cluster in one zone with one subnet only. To create a multizone cluster, [add more zones](/docs/containers?topic=containers-add_workers) with the subnets that you created earlier to your cluster after the cluster is created. To list the IDs of your subnets in all resource groups, run <code>ibmcloud ks subnets --provider vpc-gen2 --vpc-id &lt;VPC_ID&gt; --zone &lt;subnet_zone&gt;</code>.</td>
 </tr>
 <tr>
 <td><code>--flavor <em>&lt;worker_flavor&gt;</em></code></td>
 <td>Enter the worker node flavor that you want to use. The flavor determines the amount of virtual CPU, memory, and disk space that is set up in each worker node and made available to your apps. VPC worker nodes can be created as virtual machines on shared infrastructure only. Bare metal or software-defined storage machines are not supported.  For more information, see [Planning your worker node setup](/docs/containers?topic=containers-planning_worker_nodes). To view available flavors, first list available VPC zones with <code>ibmcloud ks zone ls --provider vpc-gen2</code>, and then use the zone to list supported flavors by running <code>ibmcloud ks flavors --zone &lt;VPC_zone&gt; --provider vpc-gen2</code>. After you create your cluster, you can add different flavors by adding a worker node or worker pool to the cluster.</td>
 </tr>
 <tr>
 <td><code>--version <em>&lt;major.minor.patch&gt;</em></code></td>
 <td>The Kubernetes version for the cluster master node. Note that VPC clusters are supported for Kubernetes versions 1.17 and later only. To see available versions, run <code>ibmcloud ks versions</code>.</td>
 </tr>
 <tr>
 <td><code>--workers <em>&lt;number&gt;</em></code></td>
 <td>Specify the number of worker nodes to include in the cluster. If you do not specify this option, a cluster with the minimum value of 1 is created. For more information, see [What is the smallest size cluster that I can make?](/docs/containers?topic=containers-faqs#smallest_cluster). This value is optional.</td>
 </tr>
 <tr>
 <td><code>--pod-subnet</code></td>
 <td>In the first cluster that you create in a VPC, the default pod subnet is `172.17.0.0/18`. In the second cluster that you create in that VPC, the default pod subnet is `172.17.64.0/18`. In each subsequent cluster, the pod subnet range is the next available, non-overlapping `/18` subnet. If you plan to connect your cluster to on-premises networks through {{site.data.keyword.BluDirectLink}} or a VPN service, you can avoid subnet conflicts by specifying a custom subnet CIDR that provides the private IP addresses for your pods.
 <p>When you choose a subnet size, consider the size of the cluster that you plan to create and the number of worker nodes that you might add in the future. The subnet must have a CIDR of at least <code>/23</code>, which provides enough pod IPs for a maximum of four worker nodes in a cluster. For larger clusters, use <code>/22</code> to have enough pod IP addresses for eight worker nodes, <code>/21</code> to have enough pod IP addresses for 16 worker nodes, and so on.</p>
 <p>The subnet that you choose must be within one of the following ranges:
 <ul><li><code>172.17.0.0 - 172.17.255.255</code></li>
 <li><code>172.21.0.0 - 172.31.255.255</code></li>
 <li><code>192.168.0.0 - 192.168.254.255</code></li>
 <li><code>198.18.0.0 - 198.19.255.255</code></li></ul>Note that the pod and service subnets cannot overlap. If you use custom-range subnets for your worker nodes, you must [ensure that your worker node subnets do not overlap with your cluster's pod subnet](/docs/containers?topic=containers-vpc-subnets#vpc-ip-range).</p></td>
 </tr>
 <tr>
 <td><code>--service-subnet</code></td>
 <td>All services that are deployed to the cluster are assigned a private IP address in the 172.21.0.0/16 range by default. If you plan to connect your cluster to on-premises networks through {{site.data.keyword.dl_full_notm}} or a VPN service, you can avoid subnet conflicts by specifying a custom subnet CIDR that provides the private IP addresses for your services.
 <p>The subnet must be specified in CIDR format with a size of at least <code>/24</code>, which allows a maximum of 255 services in the cluster, or larger. The subnet that you choose must be within one of the following ranges:
 <ul><li><code>172.17.0.0 - 172.17.255.255</code></li>
 <li><code>172.21.0.0 - 172.31.255.255</code></li>
 <li><code>192.168.0.0 - 192.168.254.255</code></li>
 <li><code>198.18.0.0 - 198.19.255.255</code></li></ul>Note that the pod and service subnets cannot overlap.</p></td>
 </tr>
 <tr>
 <td><code>--disable-public-service-endpoint</code></td>
 <td>Include this option in your command to create your VPC cluster with a private cloud service endpoint only. If you do not include this option, your cluster is set up with a public and a private cloud service endpoint. The service endpoint determines how your Kubernetes master and the worker nodes communicate, how your cluster accesses other {{site.data.keyword.cloud_notm}} services and apps outside the cluster, and how your users connect to your cluster. For more information, see [Planning your cluster network setup](/docs/containers?topic=containers-plan_clusters).</td>
 </tr>
 </tbody></table>
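 For example, a sketch of a single zone cluster creation command with sample values. The cluster name, zone, flavor, version, and worker count are illustrative assumptions; replace the VPC and subnet IDs with the IDs from your own account.
 ```
 ibmcloud ks cluster create vpc-gen2 --name mycluster --zone us-south-1 --vpc-id <vpc_ID> --subnet-id <subnet_ID> --flavor bx2.4x16 --workers 3 --version 1.20.6
 ```
 {: pre}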
5. Verify that the creation of the cluster was requested. It can take a few minutes for the worker node machines to be ordered, and for the cluster to be set up and provisioned in your account.
 ```
 ibmcloud ks cluster ls
 ```
 {: pre}

 When the provisioning of your Kubernetes master is completed, the status of your cluster changes to **deployed**. After the Kubernetes master is ready, your worker nodes are set up.
 ```
 Name         ID                                   State      Created          Workers    Zone      Version     Resource Group Name   Provider
 mycluster    aaf97a8843a29941b49a598f516da72101   deployed   20170201162433   3          mil01     1.20.6      Default               vpc-gen2
 ```
 {: screen}

 Is your cluster not in a **deployed** state? Check out the [Debugging clusters](/docs/containers?topic=containers-cs_troubleshoot) guide for help. For example, if your cluster is provisioned in an account that is protected by a firewall gateway appliance, you must [configure your firewall settings to allow outgoing traffic to the appropriate ports and IP addresses](/docs/containers?topic=containers-firewall#firewall_outbound).
 {: tip}

6. Check the status of the worker nodes.
 ```
 ibmcloud ks worker ls --cluster <cluster_name_or_ID>
 ```
 {: pre}

 When the worker nodes are ready, the worker node **State** changes to `deployed` and the **Status** changes to `Ready`, which means that you can access the cluster. Note that even if the cluster is ready, some parts of the cluster that are used by other services, such as Ingress secrets or registry image pull secrets, might still be in process.

 ```
 ID                                                     Public IP        Private IP      Flavor              State    Status   Zone    Version
 kube-blrs3b1d0p0p2f7haq0g-mycluster-default-000001f7   169.xx.xxx.xxx   10.xxx.xx.xxx   u3c.2x4.encrypted   normal   Ready    dal10   1.20.6
 ```
 {: screen}

Every worker node is assigned a unique worker node ID and domain name that must not be changed manually after the cluster is created. Changing the ID or domain name prevents the Kubernetes master from managing your cluster.
{: important}

7. **Optional**: If you created your cluster in a [multizone metro location](/docs/containers?topic=containers-regions-and-zones#zones-vpc), you can [spread the default worker pool across zones](/docs/containers?topic=containers-add_workers#vpc_add_zone) to increase the cluster's availability.
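 For example, a minimal sketch that adds a second zone to the `default` worker pool, assuming that you already created a VPC subnet in that zone. The zone and subnet ID are placeholders.
 ```
 ibmcloud ks zone add vpc-gen2 --zone us-south-2 --subnet-id <subnet_ID> --cluster <cluster_name_or_ID> --worker-pool default
 ```
 {: pre}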

8. After your cluster is created, you can [begin working with your cluster by configuring your CLI session](/docs/containers?topic=containers-access_cluster).
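 For example, a minimal sketch that downloads the cluster configuration and verifies access with `kubectl`, assuming that the {{site.data.keyword.containerlong_notm}} plug-in and the Kubernetes CLI are installed.
 ```
 ibmcloud ks cluster config --cluster <cluster_name_or_ID>
 kubectl get nodes
 ```
 {: pre}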

9. Kubernetes version 1.18 or earlier only: To allow any traffic requests to apps that you deploy on your worker nodes, modify the VPC's default security group.
 1. List your security groups. For the **VPC** that you created, note the ID of the default security group.
   ```
   ibmcloud is security-groups
   ```
   {: pre}
   Example output, where the default security group has a randomly generated name, `preppy-swimmer-island-green-refreshment`:
   ```
   ID                                     Name                                       Rules   Network interfaces         Created                     VPC                      Resource group
   1a111a1a-a111-11a1-a111-111111111111   preppy-swimmer-island-green-refreshment    4       -                          2019-08-12T13:24:45-04:00   <vpc_name>(bbbb222b-.)   c3c33cccc33c333ccc3c33cc3c333cc3
   ```
   {: screen}

 2. Add a security group rule to allow inbound TCP traffic on ports 30000-32767.
   ```
   ibmcloud is security-group-rule-add <security_group_ID> inbound tcp --port-min 30000 --port-max 32767
   ```
   {: pre}

 3. If you require VPC VPN access or classic infrastructure access into this cluster, add a security group rule to allow inbound UDP traffic on ports 30000-32767.
   ```
   ibmcloud is security-group-rule-add <security_group_ID> inbound udp --port-min 30000 --port-max 32767
   ```
   {: pre}

Your cluster is ready for your workloads! You might also want to [add a tag to your cluster](/docs/containers?topic=containers-add_workers#cluster_tags), such as the team or billing department that uses the cluster, to help manage {{site.data.keyword.cloud_notm}} resources. For more ideas of what to do with your cluster, review the [Next steps](/docs/containers?topic=containers-clusters#next_steps).

<br />

## Next steps
{: #next_steps}

When the cluster is up and running, you can check out the following cluster administration tasks:
- If you created the cluster in a multizone-capable zone, [spread worker nodes by adding a zone to your cluster](/docs/containers?topic=containers-add_workers).
- [Deploy an app in your cluster.](/docs/containers?topic=containers-deploy_app#app_cli)
- [Set up your own private registry in {{site.data.keyword.cloud_notm}} to store and share Docker images with other users.](/docs/Registry?topic=Registry-getting-started)
- [Set up the cluster autoscaler](/docs/containers?topic=containers-ca#ca) to automatically add or remove worker nodes from your worker pools based on your workload resource requests.
- Control who can create pods in your cluster with [pod security policies](/docs/containers?topic=containers-psp).
- Enable the [Istio](/docs/containers?topic=containers-istio) managed add-on to extend your cluster capabilities.

Then, you can check out the following network configuration steps for your cluster setup:
* <img src="images/icon-classic.png" alt="Classic infrastructure provider icon" width="15" style="width:15px; border-style: none"/> Classic clusters:
* Isolate networking workloads to edge worker nodes [in classic clusters without a gateway](/docs/containers?topic=containers-edge) or [in gateway-enabled classic clusters](/docs/containers?topic=containers-edge#edge_gateway).
* Expose your apps with [public networking services](/docs/containers?topic=containers-cs_network_planning#public_access) or [private networking services](/docs/containers?topic=containers-cs_network_planning#private_access).
* Connect your cluster with services in private networks outside of your {{site.data.keyword.cloud_notm}} account by setting up [{{site.data.keyword.dl_full_notm}}](/docs/dl?topic=dl-get-started-with-ibm-cloud-dl) or the [strongSwan IPSec VPN service](/docs/containers?topic=containers-vpn).
* Create Calico host network policies to isolate your cluster on the [public network](/docs/containers?topic=containers-network_policies#isolate_workers_public) and on the [private network](/docs/containers?topic=containers-network_policies#isolate_workers).
* If you use a gateway appliance, such as a Virtual Router Appliance (VRA), [open up the required ports and IP addresses](/docs/containers?topic=containers-firewall#firewall_inbound) in the public firewall to permit inbound traffic to networking services. If you also have a firewall on the private network, [allow communication between worker nodes and let your cluster access infrastructure resources over the private network](/docs/containers?topic=containers-firewall#firewall_private).
* <img src="images/icon-vpc.png" alt="VPC infrastructure provider icon" width="15" style="width:15px; border-style: none"/> VPC clusters:
* Expose your apps with [public networking services](/docs/containers?topic=containers-cs_network_planning#public_access) or [private networking services](/docs/containers?topic=containers-cs_network_planning#private_access).
* Connect your cluster with services in private networks outside of your {{site.data.keyword.cloud_notm}} account or with resources in other VPCs by [setting up the {{site.data.keyword.vpc_short}} VPN](/docs/containers?topic=containers-vpc-vpnaas).
* [Add rules to the security group for your worker nodes](/docs/containers?topic=containers-vpc-network-policy) to control ingress and egress traffic to your VPC subnets.