This page documents the language specification for the gcp package. If you're looking for help working with the inputs, outputs, or functions of gcp resources in a Pulumi program, please see the resource documentation for examples and API reference.
container¶
This provider is a derived work of the Terraform Provider distributed under MPL 2.0. If you encounter a bug or missing feature, first check the pulumi/pulumi-gcp repo; however, if that doesn’t turn up anything, please consult the source terraform-providers/terraform-provider-google repo.
- class
pulumi_gcp.container.AwaitableGetClusterResult(additional_zones=None, addons_configs=None, authenticator_groups_configs=None, cluster_autoscalings=None, cluster_ipv4_cidr=None, cluster_telemetries=None, database_encryptions=None, default_max_pods_per_node=None, description=None, enable_binary_authorization=None, enable_intranode_visibility=None, enable_kubernetes_alpha=None, enable_legacy_abac=None, enable_shielded_nodes=None, enable_tpu=None, endpoint=None, id=None, initial_node_count=None, instance_group_urls=None, ip_allocation_policies=None, label_fingerprint=None, location=None, logging_service=None, maintenance_policies=None, master_authorized_networks_configs=None, master_auths=None, master_version=None, min_master_version=None, monitoring_service=None, name=None, network=None, network_policies=None, node_configs=None, node_locations=None, node_pools=None, node_version=None, operation=None, pod_security_policy_configs=None, private_cluster_configs=None, project=None, region=None, release_channels=None, remove_default_node_pool=None, resource_labels=None, resource_usage_export_configs=None, services_ipv4_cidr=None, subnetwork=None, tpu_ipv4_cidr_block=None, vertical_pod_autoscalings=None, workload_identity_configs=None, zone=None)¶
- class
pulumi_gcp.container.AwaitableGetEngineVersionsResult(default_cluster_version=None, id=None, latest_master_version=None, latest_node_version=None, location=None, project=None, release_channel_default_version=None, valid_master_versions=None, valid_node_versions=None, version_prefix=None)¶
- class
pulumi_gcp.container.AwaitableGetRegistryImageResult(digest=None, id=None, image_url=None, name=None, project=None, region=None, tag=None)¶
- class
pulumi_gcp.container.AwaitableGetRegistryRepositoryResult(id=None, project=None, region=None, repository_url=None)¶
- class
pulumi_gcp.container.Cluster(resource_name, opts=None, addons_config=None, authenticator_groups_config=None, cluster_autoscaling=None, cluster_ipv4_cidr=None, cluster_telemetry=None, database_encryption=None, default_max_pods_per_node=None, description=None, enable_binary_authorization=None, enable_intranode_visibility=None, enable_kubernetes_alpha=None, enable_legacy_abac=None, enable_shielded_nodes=None, enable_tpu=None, initial_node_count=None, ip_allocation_policy=None, location=None, logging_service=None, maintenance_policy=None, master_auth=None, master_authorized_networks_config=None, min_master_version=None, monitoring_service=None, name=None, network=None, network_policy=None, node_config=None, node_locations=None, node_pools=None, node_version=None, pod_security_policy_config=None, private_cluster_config=None, project=None, release_channel=None, remove_default_node_pool=None, resource_labels=None, resource_usage_export_config=None, subnetwork=None, vertical_pod_autoscaling=None, workload_identity_config=None, __props__=None, __name__=None, __opts__=None)¶ Manages a Google Kubernetes Engine (GKE) cluster. For more information see the official documentation and the API reference.
Note: All arguments and attributes, including basic auth username and passwords as well as certificate outputs will be stored in the raw state as plaintext. Read more about secrets in state.
import pulumi
import pulumi_gcp as gcp

primary = gcp.container.Cluster("primary",
    location="us-central1",
    remove_default_node_pool=True,
    initial_node_count=1,
    master_auth={
        "username": "",
        "password": "",
        "client_certificate_config": {
            "issueClientCertificate": False,
        },
    })
primary_preemptible_nodes = gcp.container.NodePool("primaryPreemptibleNodes",
    location="us-central1",
    cluster=primary.name,
    node_count=1,
    node_config={
        "preemptible": True,
        "machine_type": "n1-standard-1",
        "metadata": {
            "disable-legacy-endpoints": "true",
        },
        "oauthScopes": [
            "https://www.googleapis.com/auth/logging.write",
            "https://www.googleapis.com/auth/monitoring",
        ],
    })
import pulumi
import pulumi_gcp as gcp

primary = gcp.container.Cluster("primary",
    initial_node_count=3,
    location="us-central1-a",
    master_auth={
        "clientCertificateConfig": {
            "issueClientCertificate": False,
        },
        "password": "",
        "username": "",
    },
    node_config={
        "labels": {
            "foo": "bar",
        },
        "metadata": {
            "disable-legacy-endpoints": "true",
        },
        "oauthScopes": [
            "https://www.googleapis.com/auth/logging.write",
            "https://www.googleapis.com/auth/monitoring",
        ],
        "tags": [
            "foo",
            "bar",
        ],
    })
- Parameters
resource_name (str) – The name of the resource.
opts (pulumi.ResourceOptions) – Options for the resource.
addons_config (pulumi.Input[dict]) – The configuration for addons supported by GKE. Structure is documented below.
authenticator_groups_config (pulumi.Input[dict]) – Configuration for the Google Groups for GKE feature. Structure is documented below.
cluster_autoscaling (pulumi.Input[dict]) – Per-cluster configuration of Node Auto-Provisioning with Cluster Autoscaler to automatically adjust the size of the cluster and create/delete node pools based on the current needs of the cluster’s workload. See the guide to using Node Auto-Provisioning for more details. Structure is documented below.
cluster_ipv4_cidr (pulumi.Input[str]) – The IP address range of the Kubernetes pods in this cluster in CIDR notation (e.g. 10.96.0.0/14). Leave blank to have one automatically chosen or specify a /14 block in 10.0.0.0/8. This field will only work for routes-based clusters, where ip_allocation_policy is not defined.
cluster_telemetry (pulumi.Input[dict]) – Configuration for the ClusterTelemetry feature. Structure is documented below.
database_encryption (pulumi.Input[dict]) – Configuration for application-layer secrets encryption, which encrypts Kubernetes secrets at rest with a Cloud KMS key. Structure is documented below.
default_max_pods_per_node (pulumi.Input[float]) – The default maximum number of pods per node in this cluster. This doesn’t work on “routes-based” clusters, clusters that don’t have IP Aliasing enabled. See the official documentation for more information.
description (pulumi.Input[str]) – Description of the cluster.
enable_binary_authorization (pulumi.Input[bool]) – Enable Binary Authorization for this cluster. If enabled, all container images will be validated by Google Binary Authorization.
enable_intranode_visibility (pulumi.Input[bool]) – Whether Intra-node visibility is enabled for this cluster. This makes same node pod to pod traffic visible for VPC network.
enable_kubernetes_alpha (pulumi.Input[bool]) – Whether to enable Kubernetes Alpha features for this cluster. Note that when this option is enabled, the cluster cannot be upgraded and will be automatically deleted after 30 days.
enable_legacy_abac (pulumi.Input[bool]) – Whether the ABAC authorizer is enabled for this cluster. When enabled, identities in the system, including service accounts, nodes, and controllers, will have statically granted permissions beyond those provided by the RBAC configuration or IAM. Defaults to false.
enable_shielded_nodes (pulumi.Input[bool]) – Enable Shielded Nodes features on all nodes in this cluster. Defaults to false.
enable_tpu (pulumi.Input[bool]) – Whether to enable Cloud TPU resources in this cluster. See the official documentation.
initial_node_count (pulumi.Input[float]) – The number of nodes to create in this cluster’s default node pool. In regional or multi-zonal clusters, this is the number of nodes per zone. Must be set if node_pool is not set. If you’re using container.NodePool objects with no default node pool, you’ll need to set this to a value of at least 1, alongside setting remove_default_node_pool to true.
ip_allocation_policy (pulumi.Input[dict]) – Configuration of cluster IP allocation for VPC-native clusters. Adding this block enables IP aliasing, making the cluster VPC-native instead of routes-based. Structure is documented below.
location (pulumi.Input[str]) – The location (region or zone) in which the cluster master will be created, as well as the default node location. If you specify a zone (such as us-central1-a), the cluster will be a zonal cluster with a single cluster master. If you specify a region (such as us-west1), the cluster will be a regional cluster with multiple masters spread across zones in the region, and with default node locations in those zones as well.
logging_service (pulumi.Input[str]) – The logging service that the cluster should write logs to. Available options include logging.googleapis.com (Legacy Stackdriver), logging.googleapis.com/kubernetes (Stackdriver Kubernetes Engine Logging), and none. Defaults to logging.googleapis.com/kubernetes.
maintenance_policy (pulumi.Input[dict]) – The maintenance policy to use for the cluster. Structure is documented below.
master_auth (pulumi.Input[dict]) – The authentication information for accessing the Kubernetes master. Some values in this block are only returned by the API if your service account has permission to get credentials for your GKE cluster. If you see an unexpected diff removing a username/password or unsetting your client cert, ensure you have the container.clusters.getCredentials permission. Structure is documented below.
master_authorized_networks_config (pulumi.Input[dict]) – The desired configuration options for master authorized networks. Omit the nested cidr_blocks attribute to disallow external access (except the cluster node IPs, which GKE automatically whitelists).
min_master_version (pulumi.Input[str]) – The minimum version of the master. GKE will auto-update the master to new versions, so this does not guarantee the current master version–use the read-only master_version field to obtain that. If unset, the cluster’s version will be set by GKE to the version of the most recent official release (which is not necessarily the latest version). Most users will find the container.getEngineVersions data source useful - it indicates which versions are available. If you intend to specify versions manually, the docs describe the various acceptable formats for this field.
monitoring_service (pulumi.Input[str]) – The monitoring service that the cluster should write metrics to. Automatically send metrics from pods in the cluster to the Google Cloud Monitoring API. VM metrics will be collected by Google Compute Engine regardless of this setting. Available options include monitoring.googleapis.com (Legacy Stackdriver), monitoring.googleapis.com/kubernetes (Stackdriver Kubernetes Engine Monitoring), and none. Defaults to monitoring.googleapis.com/kubernetes.
name (pulumi.Input[str]) – The name of the cluster, unique within the project and location.
network (pulumi.Input[str]) – The name or self_link of the Google Compute Engine network to which the cluster is connected. For Shared VPC, set this to the self link of the shared network.
network_policy (pulumi.Input[dict]) – Configuration options for the NetworkPolicy feature. Structure is documented below.
node_config (pulumi.Input[dict]) – Parameters used in creating the default node pool. Generally, this field should not be used at the same time as a container.NodePool or a node_pool block; this configuration manages the default node pool, which isn’t recommended to be used. Structure is documented below.
node_locations (pulumi.Input[list]) – The list of zones in which the cluster’s nodes are located. Nodes must be in the region of their regional cluster or in the same region as their cluster’s zone for zonal clusters. If this is specified for a zonal cluster, omit the cluster’s zone.
node_pools (pulumi.Input[list]) – List of node pools associated with this cluster. See container.NodePool for schema. Warning: node pools defined inside a cluster can’t be changed (or added/removed) after cluster creation without deleting and recreating the entire cluster. Unless you absolutely need the ability to say “these are the only node pools associated with this cluster”, use the container.NodePool resource instead of this property.
node_version (pulumi.Input[str]) – The Kubernetes version on the nodes. Must either be unset or set to the same value as min_master_version on create. Defaults to the default version set by GKE, which is not necessarily the latest version. This only affects nodes in the default node pool. While a fuzzy version can be specified, it’s recommended that you specify explicit versions as the provider will see spurious diffs when fuzzy versions are used. See the container.getEngineVersions data source’s version_prefix field to approximate fuzzy versions. To update nodes in other node pools, use the version attribute on the node pool.
pod_security_policy_config (pulumi.Input[dict]) – Configuration for the PodSecurityPolicy feature. Structure is documented below.
private_cluster_config (pulumi.Input[dict]) – Configuration for private clusters, clusters with private nodes. Structure is documented below.
project (pulumi.Input[str]) – The ID of the project in which the resource belongs. If it is not provided, the provider project is used.
release_channel (pulumi.Input[dict]) – Configuration options for the Release channel feature, which provide more control over automatic upgrades of your GKE clusters. When updating this field, GKE imposes specific version requirements. See Migrating between release channels for more details; the container.getEngineVersions data source can provide the default version for a channel. Note that removing the release_channel field from your config will cause this provider to stop managing your cluster’s release channel, but will not unenroll it. Instead, use the "UNSPECIFIED" channel. Structure is documented below.
remove_default_node_pool (pulumi.Input[bool]) – If true, deletes the default node pool upon cluster creation. If you’re using container.NodePool resources with no default node pool, this should be set to true, alongside setting initial_node_count to at least 1.
resource_labels (pulumi.Input[dict]) – The GCE resource labels (a map of key/value pairs) to be applied to the cluster.
resource_usage_export_config (pulumi.Input[dict]) – Configuration for the ResourceUsageExportConfig feature. Structure is documented below.
subnetwork (pulumi.Input[str]) – The name or self_link of the Google Compute Engine subnetwork in which the cluster’s instances are launched.
vertical_pod_autoscaling (pulumi.Input[dict]) – Vertical Pod Autoscaling automatically adjusts the resources of pods controlled by it. Structure is documented below.
workload_identity_config (pulumi.Input[dict]) – Workload Identity allows Kubernetes service accounts to act as a user-managed Google IAM Service Account. Structure is documented below.
The addons_config object supports the following:
cloudrunConfig (pulumi.Input[dict]) - The status of the CloudRun addon. It is disabled by default. Set disabled = false to enable.
disabled (pulumi.Input[bool]) - Whether the addon is disabled. Set disabled = false to enable.
configConnectorConfig (pulumi.Input[dict]) - The status of the ConfigConnector addon. It is disabled by default; set enabled = true to enable.
enabled (pulumi.Input[bool]) - Whether the addon is enabled. Set enabled = true to enable.
dnsCacheConfig (pulumi.Input[dict]) - The status of the NodeLocal DNSCache addon. It is disabled by default. Set enabled = true to enable.
enabled (pulumi.Input[bool]) - Whether the addon is enabled. Set enabled = true to enable.
gcePersistentDiskCsiDriverConfig (pulumi.Input[dict]) - Whether this cluster should enable the Google Compute Engine Persistent Disk Container Storage Interface (CSI) Driver. Defaults to disabled; set enabled = true to enable.
enabled (pulumi.Input[bool]) - Whether the addon is enabled. Set enabled = true to enable.
horizontalPodAutoscaling (pulumi.Input[dict]) - The status of the Horizontal Pod Autoscaling addon, which increases or decreases the number of replica pods a replication controller has based on the resource usage of the existing pods. It ensures that a Heapster pod is running in the cluster, which is also used by the Cloud Monitoring service. It is enabled by default; set disabled = true to disable.
disabled (pulumi.Input[bool]) - Whether the addon is disabled. Set disabled = true to disable.
httpLoadBalancing (pulumi.Input[dict]) - The status of the HTTP (L7) load balancing controller addon, which makes it easy to set up HTTP load balancers for services in a cluster. It is enabled by default; set disabled = true to disable.
disabled (pulumi.Input[bool]) - Whether the addon is disabled. Set disabled = true to disable.
istioConfig (pulumi.Input[dict]) - The status of the Istio addon, which makes it easy to set up Istio for services in a cluster. It is disabled by default. Set disabled = false to enable. Structure is documented below.
auth (pulumi.Input[str]) - The authentication type between services in Istio. Available options include AUTH_MUTUAL_TLS.
disabled (pulumi.Input[bool]) - Whether the addon is disabled. Set disabled = false to enable.
kalmConfig (pulumi.Input[dict]) - Configuration for the KALM addon, which manages the lifecycle of k8s applications. It is disabled by default; set enabled = true to enable.
enabled (pulumi.Input[bool]) - Whether the addon is enabled. Set enabled = true to enable.
networkPolicyConfig (pulumi.Input[dict]) - Whether we should enable the network policy addon for the master. This must be enabled in order to enable network policy for the nodes. To enable this, you must also define a network_policy block, otherwise nothing will happen. It can only be disabled if the nodes already do not have network policies enabled. Defaults to disabled; set disabled = false to enable.
disabled (pulumi.Input[bool]) - Whether the addon is disabled. Set disabled = false to enable.
The authenticator_groups_config object supports the following:
securityGroup (pulumi.Input[str]) - The name of the RBAC security group for use with Google security groups in Kubernetes RBAC. Group name must be in format gke-security-groups@yourdomain.com.
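As a rough sketch of the block above, a cluster wiring up Google Groups for GKE might look like the following; the cluster name, location, and group address are placeholders for your own values.

import pulumi_gcp as gcp

# Hypothetical cluster enabling Google Groups for GKE via authenticator_groups_config.
grouped = gcp.container.Cluster("grouped",
    location="us-central1",
    initial_node_count=1,
    authenticator_groups_config={
        # Must be a real group named gke-security-groups@<your domain>.
        "securityGroup": "gke-security-groups@yourdomain.com",
    })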
The cluster_autoscaling object supports the following:
autoProvisioningDefaults (pulumi.Input[dict]) - Contains defaults for a node pool created by NAP. Structure is documented below.
min_cpu_platform (pulumi.Input[str]) - Minimum CPU platform to be used by this instance. The instance may be scheduled on the specified or newer CPU platform. Applicable values are the friendly names of CPU platforms, such as Intel Haswell. See the official documentation for more information.
oauthScopes (pulumi.Input[list]) - The set of Google API scopes to be made available on all of the node VMs under the “default” service account. These can be either FQDNs, or scope aliases. The following scopes are necessary to ensure the correct functioning of the cluster:
service_account (pulumi.Input[str]) - The service account to be used by the Node VMs. If not specified, the “default” service account is used. In order to use the configured oauth_scopes for logging and monitoring, the service account being used needs the roles/logging.logWriter and roles/monitoring.metricWriter roles.
autoscalingProfile (pulumi.Input[str]) - Configuration options for the Autoscaling profile feature, which lets you choose whether the cluster autoscaler should optimize for resource utilization or resource availability when deciding to remove nodes from a cluster. Can be BALANCED or OPTIMIZE_UTILIZATION. Defaults to BALANCED.
enabled (pulumi.Input[bool]) - Whether node auto-provisioning is enabled. Resource limits for cpu and memory must be defined to enable node auto-provisioning.
resourceLimits (pulumi.Input[list]) - Global constraints for machine resources in the cluster. Configuring the cpu and memory types is required if node auto-provisioning is enabled. These limits will apply to node pool autoscaling in addition to node auto-provisioning. Structure is documented below.
maximum (pulumi.Input[float]) - Maximum amount of the resource in the cluster.
minimum (pulumi.Input[float]) - Minimum amount of the resource in the cluster.
resourceType (pulumi.Input[str]) - The type of the resource. For example, cpu and memory. See the guide to using Node Auto-Provisioning for a list of types.
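A minimal sketch of node auto-provisioning using the fields above; the cpu and memory limits are illustrative placeholders, not recommended values.

import pulumi_gcp as gcp

# Hypothetical cluster with node auto-provisioning and global cpu/memory limits.
autoscaled = gcp.container.Cluster("autoscaled",
    location="us-central1",
    initial_node_count=1,
    cluster_autoscaling={
        "enabled": True,
        "resourceLimits": [
            {"resourceType": "cpu", "minimum": 1, "maximum": 16},
            {"resourceType": "memory", "minimum": 4, "maximum": 64},
        ],
    })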
The cluster_telemetry object supports the following:
type (pulumi.Input[str]) - The type of telemetry integration to use for the cluster.
The database_encryption object supports the following:
keyName (pulumi.Input[str]) - The key to use to encrypt/decrypt secrets. See the DatabaseEncryption definition for more information.
state (pulumi.Input[str]) - ENCRYPTED or DECRYPTED.
The ip_allocation_policy object supports the following:
clusterIpv4CidrBlock (pulumi.Input[str]) - The IP address range for the cluster pod IPs. Set to blank to have a range chosen with the default size. Set to /netmask (e.g. /14) to have a range chosen with a specific netmask. Set to a CIDR notation (e.g. 10.96.0.0/14) from the RFC-1918 private networks (e.g. 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16) to pick a specific range to use.
clusterSecondaryRangeName (pulumi.Input[str]) - The name of the existing secondary range in the cluster’s subnetwork to use for pod IP addresses. Alternatively, cluster_ipv4_cidr_block can be used to automatically create a GKE-managed one.
servicesIpv4CidrBlock (pulumi.Input[str]) - The IP address range of the services IPs in this cluster. Set to blank to have a range chosen with the default size. Set to /netmask (e.g. /14) to have a range chosen with a specific netmask. Set to a CIDR notation (e.g. 10.96.0.0/14) from the RFC-1918 private networks (e.g. 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16) to pick a specific range to use.
servicesSecondaryRangeName (pulumi.Input[str]) - The name of the existing secondary range in the cluster’s subnetwork to use for service ClusterIPs. Alternatively, services_ipv4_cidr_block can be used to automatically create a GKE-managed one.
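A minimal VPC-native sketch, assuming the named secondary ranges already exist on the subnetwork; the network, subnetwork, and range names are placeholders.

import pulumi_gcp as gcp

# Hypothetical VPC-native cluster using pre-created secondary ranges.
vpc_native = gcp.container.Cluster("vpcNative",
    location="us-central1",
    initial_node_count=1,
    network="default",            # placeholder network
    subnetwork="default",         # placeholder subnetwork
    ip_allocation_policy={
        "clusterSecondaryRangeName": "pods-range",       # placeholder range name
        "servicesSecondaryRangeName": "services-range",  # placeholder range name
    })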
The maintenance_policy object supports the following:
dailyMaintenanceWindow (pulumi.Input[dict]) - Time window specified for daily maintenance operations. Specify start_time in RFC3339 format “HH:MM”, where HH : [00-23] and MM : [00-59] GMT. For example:
duration (pulumi.Input[str])
startTime (pulumi.Input[str])
recurringWindow (pulumi.Input[dict]) - Time window for recurring maintenance operations.
endTime (pulumi.Input[str])
recurrence (pulumi.Input[str])
startTime (pulumi.Input[str])
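For instance, a daily maintenance window starting at 03:00 GMT might be declared as in this sketch; the cluster name and location are placeholders.

import pulumi_gcp as gcp

# Hypothetical cluster with a daily maintenance window at 03:00 GMT.
maintained = gcp.container.Cluster("maintained",
    location="us-central1",
    initial_node_count=1,
    maintenance_policy={
        "dailyMaintenanceWindow": {
            "startTime": "03:00",
        },
    })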
The master_auth object supports the following:
clientCertificate (pulumi.Input[str])
clientCertificateConfig (pulumi.Input[dict]) - Whether client certificate authorization is enabled for this cluster. For example:
issueClientCertificate (pulumi.Input[bool])
clientKey (pulumi.Input[str])
clusterCaCertificate (pulumi.Input[str])
password (pulumi.Input[str]) - The password to use for HTTP basic authentication when accessing the Kubernetes master endpoint.
username (pulumi.Input[str]) - The username to use for HTTP basic authentication when accessing the Kubernetes master endpoint. If not present basic auth will be disabled.
The master_authorized_networks_config object supports the following:
cidrBlocks (pulumi.Input[list]) - External networks that can access the Kubernetes cluster master through HTTPS.
cidr_block (pulumi.Input[str]) - External network that can access Kubernetes master through HTTPS. Must be specified in CIDR notation.
display_name (pulumi.Input[str]) - Field for users to identify CIDR blocks.
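A sketch restricting master access to a single authorized network; the CIDR range and display name are placeholders.

import pulumi_gcp as gcp

# Hypothetical cluster that only allows master access from one office range.
restricted = gcp.container.Cluster("restricted",
    location="us-central1",
    initial_node_count=1,
    master_authorized_networks_config={
        "cidrBlocks": [{
            "cidr_block": "203.0.113.0/24",  # placeholder office range
            "display_name": "office",
        }],
    })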
The network_policy object supports the following:
enabled (pulumi.Input[bool]) - Whether network policy is enabled on the cluster.
provider (pulumi.Input[str]) - The selected network policy provider. Defaults to PROVIDER_UNSPECIFIED.
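Because the networkPolicyConfig addon (documented above) must also be enabled, a sketch turning on Calico network policy typically sets both blocks; the names are placeholders.

import pulumi_gcp as gcp

# Hypothetical cluster enabling Calico network policy together with the addon.
netpol = gcp.container.Cluster("netpol",
    location="us-central1",
    initial_node_count=1,
    network_policy={
        "enabled": True,
        "provider": "CALICO",
    },
    addons_config={
        "networkPolicyConfig": {
            "disabled": False,
        },
    })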
The node_config object supports the following:
bootDiskKmsKey (pulumi.Input[str]) - The Customer Managed Encryption Key used to encrypt the boot disk attached to each node in the node pool. This should be of the form projects/[KEY_PROJECT_ID]/locations/[LOCATION]/keyRings/[RING_NAME]/cryptoKeys/[KEY_NAME]. For more information about protecting resources with Cloud KMS Keys please see: https://cloud.google.com/compute/docs/disks/customer-managed-encryption
disk_size_gb (pulumi.Input[float]) - Size of the disk attached to each node, specified in GB. The smallest allowed disk size is 10GB. Defaults to 100GB.
diskType (pulumi.Input[str]) - Type of the disk attached to each node (e.g. ‘pd-standard’ or ‘pd-ssd’). If unspecified, the default disk type is ‘pd-standard’.
guest_accelerators (pulumi.Input[list]) - List of the type and count of accelerator cards attached to the instance. Structure documented below.
count (pulumi.Input[float]) - The number of the guest accelerator cards exposed to this instance.
type (pulumi.Input[str]) - The accelerator type resource to expose to this instance. E.g. nvidia-tesla-k80.
imageType (pulumi.Input[str]) - The image type to use for this node. Note that changing the image type will delete and recreate all nodes in the node pool.
labels (pulumi.Input[dict]) - The Kubernetes labels (key/value pairs) to be applied to each node.
localSsdCount (pulumi.Input[float]) - The amount of local SSD disks that will be attached to each cluster node. Defaults to 0.
machine_type (pulumi.Input[str]) - The name of a Google Compute Engine machine type. Defaults to n1-standard-1. To create a custom machine type, value should be set as specified here.
metadata (pulumi.Input[dict]) - The metadata key/value pairs assigned to instances in the cluster. From GKE 1.12 onwards, disable-legacy-endpoints is set to true by the API; if metadata is set but that default value is not included, the provider will attempt to unset the value. To avoid this, set the value in your config.
min_cpu_platform (pulumi.Input[str]) - Minimum CPU platform to be used by this instance. The instance may be scheduled on the specified or newer CPU platform. Applicable values are the friendly names of CPU platforms, such as Intel Haswell. See the official documentation for more information.
oauthScopes (pulumi.Input[list]) - The set of Google API scopes to be made available on all of the node VMs under the “default” service account. These can be either FQDNs, or scope aliases. The following scopes are necessary to ensure the correct functioning of the cluster:
preemptible (pulumi.Input[bool]) - A boolean that represents whether or not the underlying node VMs are preemptible. See the official documentation for more information. Defaults to false.
sandboxConfig (pulumi.Input[dict]) - GKE Sandbox configuration. When enabling this feature you must specify image_type = "COS_CONTAINERD" and node_version = "1.12.7-gke.17" or later to use it. Structure is documented below.
sandboxType (pulumi.Input[str]) - Which sandbox to use for pods in the node pool. Accepted values are:
service_account (pulumi.Input[str]) - The service account to be used by the Node VMs. If not specified, the “default” service account is used. In order to use the configured oauth_scopes for logging and monitoring, the service account being used needs the roles/logging.logWriter and roles/monitoring.metricWriter roles.
shielded_instance_config (pulumi.Input[dict]) - Shielded Instance options. Structure is documented below.
enableIntegrityMonitoring (pulumi.Input[bool]) - Defines if the instance has integrity monitoring enabled.
enableSecureBoot (pulumi.Input[bool]) - Defines if the instance has Secure Boot enabled.
tags (pulumi.Input[list]) - The list of instance tags applied to all nodes. Tags are used to identify valid sources or targets for network firewalls.
taints (pulumi.Input[list]) - A list of Kubernetes taints to apply to nodes. GKE’s API can only set this field on cluster creation. However, GKE will add taints to your nodes if you enable certain features such as GPUs. If this field is set, any diffs on this field will cause the provider to recreate the underlying resource. Taint values can be updated safely in Kubernetes (eg. through kubectl), and it’s recommended that you do not use this field to manage taints. If you do, lifecycle.ignore_changes is recommended. Structure is documented below.
effect (pulumi.Input[str]) - Effect for taint. Accepted values are NO_SCHEDULE, PREFER_NO_SCHEDULE, and NO_EXECUTE.
key (pulumi.Input[str]) - Key for taint.
value (pulumi.Input[str]) - Value for taint.
workloadMetadataConfig (pulumi.Input[dict]) - Metadata configuration to expose to workloads on the node pool. Structure is documented below.
nodeMetadata (pulumi.Input[str]) - How to expose the node metadata to the workload running on the node. Accepted values are:
UNSPECIFIED: Not Set
SECURE: Prevent workloads not in hostNetwork from accessing certain VM metadata, specifically kube-env, which contains Kubelet credentials, and the instance identity token. See Metadata Concealment documentation.
EXPOSE: Expose all VM metadata to pods.
GKE_METADATA_SERVER: Enables workload identity on the node.
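A sketch of a default node pool customized through node_config; the machine type, labels, and taint shown are illustrative placeholders. Note the warning above that taint diffs force the provider to recreate the resource.

import pulumi_gcp as gcp

# Hypothetical zonal cluster with a customized default node pool.
custom_nodes = gcp.container.Cluster("customNodes",
    location="us-central1-a",
    initial_node_count=2,
    node_config={
        "machine_type": "n1-standard-2",
        "disk_size_gb": 50,
        "labels": {"team": "platform"},                  # placeholder label
        "metadata": {"disable-legacy-endpoints": "true"},
        "oauthScopes": [
            "https://www.googleapis.com/auth/logging.write",
            "https://www.googleapis.com/auth/monitoring",
        ],
        "taints": [{
            "key": "dedicated",
            "value": "batch",
            "effect": "NO_SCHEDULE",
        }],
    })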
The node_pools object supports the following:
autoscaling (pulumi.Input[dict])
maxNodeCount (pulumi.Input[float])
minNodeCount (pulumi.Input[float])
initial_node_count (pulumi.Input[float]) - The number of nodes to create in this cluster’s default node pool. In regional or multi-zonal clusters, this is the number of nodes per zone. Must be set if node_pool is not set. If you’re using container.NodePool objects with no default node pool, you’ll need to set this to a value of at least 1, alongside setting remove_default_node_pool to true.
instance_group_urls (pulumi.Input[list]) - List of instance group URLs which have been assigned to the cluster.
management (pulumi.Input[dict])
autoRepair (pulumi.Input[bool])
autoUpgrade (pulumi.Input[bool])
max_pods_per_node(pulumi.Input[float])name(pulumi.Input[str]) - The name of the cluster, unique within the project and location.name_prefix(pulumi.Input[str])node_config(pulumi.Input[dict]) - Parameters used in creating the default node pool. Generally, this field should not be used at the same time as acontainer.NodePoolor anode_poolblock; this configuration manages the default node pool, which isn’t recommended to be used. Structure is documented below.bootDiskKmsKey(pulumi.Input[str]) - The Customer Managed Encryption Key used to encrypt the boot disk attached to each node in the node pool. This should be of the form projects/[KEY_PROJECT_ID]/locations/[LOCATION]/keyRings/[RING_NAME]/cryptoKeys/[KEY_NAME]. For more information about protecting resources with Cloud KMS Keys please see: https://cloud.google.com/compute/docs/disks/customer-managed-encryptiondisk_size_gb(pulumi.Input[float]) - Size of the disk attached to each node, specified in GB. The smallest allowed disk size is 10GB. Defaults to 100GB.diskType(pulumi.Input[str]) - Type of the disk attached to each node (e.g. ‘pd-standard’ or ‘pd-ssd’). If unspecified, the default disk type is ‘pd-standard’guest_accelerators(pulumi.Input[list]) - List of the type and count of accelerator cards attached to the instance. Structure documented below.count(pulumi.Input[float]) - The number of the guest accelerator cards exposed to this instance.type(pulumi.Input[str]) - The accelerator type resource to expose to this instance. E.g.nvidia-tesla-k80.
imageType(pulumi.Input[str]) - The image type to use for this node. Note that changing the image type will delete and recreate all nodes in the node pool.labels(pulumi.Input[dict]) - The Kubernetes labels (key/value pairs) to be applied to each node.localSsdCount(pulumi.Input[float]) - The amount of local SSD disks that will be attached to each cluster node. Defaults to 0.machine_type(pulumi.Input[str]) - The name of a Google Compute Engine machine type. Defaults ton1-standard-1. To create a custom machine type, value should be set as specified here.metadata(pulumi.Input[dict]) - The metadata key/value pairs assigned to instances in the cluster. From GKE1.12onwards,disable-legacy-endpointsis set totrueby the API; ifmetadatais set but that default value is not included, the provider will attempt to unset the value. To avoid this, set the value in your config.min_cpu_platform(pulumi.Input[str]) - Minimum CPU platform to be used by this instance. The instance may be scheduled on the specified or newer CPU platform. Applicable values are the friendly names of CPU platforms, such asIntel Haswell. See the official documentation for more information.oauthScopes(pulumi.Input[list]) - The set of Google API scopes to be made available on all of the node VMs under the “default” service account. These can be either FQDNs, or scope aliases. The following scopes are necessary to ensure the correct functioning of the cluster:preemptible(pulumi.Input[bool]) - A boolean that represents whether or not the underlying node VMs are preemptible. See the official documentation for more information. Defaults to false.sandboxConfig(pulumi.Input[dict]) - GKE Sandbox configuration. When enabling this feature you must specifyimage_type = "COS_CONTAINERD"andnode_version = "1.12.7-gke.17"or later to use it. Structure is documented below.sandboxType(pulumi.Input[str]) - Which sandbox to use for pods in the node pool. Accepted values are:
service_account(pulumi.Input[str]) - The service account to be used by the Node VMs. If not specified, the “default” service account is used. In order to use the configuredoauth_scopesfor logging and monitoring, the service account being used needs the roles/logging.logWriter and roles/monitoring.metricWriter roles.shielded_instance_config(pulumi.Input[dict]) - Shielded Instance options. Structure is documented below.enableIntegrityMonitoring(pulumi.Input[bool]) - Defines if the instance has integrity monitoring enabled.enableSecureBoot(pulumi.Input[bool]) - Defines if the instance has Secure Boot enabled.
tags(pulumi.Input[list]) - The list of instance tags applied to all nodes. Tags are used to identify valid sources or targets for network firewalls.taints(pulumi.Input[list]) - A list of Kubernetes taints to apply to nodes. GKE’s API can only set this field on cluster creation. However, GKE will add taints to your nodes if you enable certain features such as GPUs. If this field is set, any diffs on this field will cause the provider to recreate the underlying resource. Taint values can be updated safely in Kubernetes (eg. throughkubectl), and it’s recommended that you do not use this field to manage taints. If you do,lifecycle.ignore_changesis recommended. Structure is documented below.effect(pulumi.Input[str]) - Effect for taint. Accepted values areNO_SCHEDULE,PREFER_NO_SCHEDULE, andNO_EXECUTE.key(pulumi.Input[str]) - Key for taint.value(pulumi.Input[str]) - Value for taint.
workloadMetadataConfig(pulumi.Input[dict]) - Metadata configuration to expose to workloads on the node pool. Structure is documented below.nodeMetadata(pulumi.Input[str]) - How to expose the node metadata to the workload running on the node. Accepted values are:UNSPECIFIED: Not Set
SECURE: Prevent workloads not in hostNetwork from accessing certain VM metadata, specifically kube-env, which contains Kubelet credentials, and the instance identity token. See Metadata Concealment documentation.
EXPOSE: Expose all VM metadata to pods.
GKE_METADATA_SERVER: Enables workload identity on the node.
node_count (pulumi.Input[float])
node_locations (pulumi.Input[list]) - The list of zones in which the cluster’s nodes are located. Nodes must be in the region of their regional cluster or in the same region as their cluster’s zone for zonal clusters. If this is specified for a zonal cluster, omit the cluster’s zone.
upgrade_settings (pulumi.Input[dict])
maxSurge (pulumi.Input[float])
maxUnavailable (pulumi.Input[float])
version (pulumi.Input[str])
The pod_security_policy_config object supports the following:
enabled (pulumi.Input[bool]) - Enable the PodSecurityPolicy controller for this cluster. If enabled, pods must be valid under a PodSecurityPolicy to be created.
The private_cluster_config object supports the following:
enablePrivateEndpoint (pulumi.Input[bool]) - When true, the cluster’s private endpoint is used as the cluster endpoint and access through the public endpoint is disabled. When false, either endpoint can be used. This field only applies to private clusters, when enable_private_nodes is true.
enablePrivateNodes (pulumi.Input[bool]) - Enables the private cluster feature, creating a private endpoint on the cluster. In a private cluster, nodes only have RFC 1918 private addresses and communicate with the master’s private endpoint via private networking.
masterGlobalAccessConfig (pulumi.Input[dict]) - Controls cluster master global access settings.
enabled (pulumi.Input[bool]) - Whether the cluster master is accessible globally or not.
masterIpv4CidrBlock (pulumi.Input[str]) - The IP range in CIDR notation to use for the hosted master network. This range will be used for assigning private IP addresses to the cluster master(s) and the ILB VIP. This range must not overlap with any other ranges in use within the cluster’s network, and it must be a /28 subnet. See Private Cluster Limitations for more details. This field only applies to private clusters, when enable_private_nodes is true.
peeringName (pulumi.Input[str]) - The name of the peering between this cluster and the Google owned VPC.
privateEndpoint (pulumi.Input[str]) - The internal IP address of this cluster’s master endpoint.
publicEndpoint (pulumi.Input[str]) - The external IP address of this cluster’s master endpoint.
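A rough sketch of a private cluster. Private clusters must be VPC-native, so an ip_allocation_policy block is included (blank values let GKE choose default ranges, per the block documented above); the /28 master range is a placeholder that must not overlap other ranges in the network.

import pulumi_gcp as gcp

# Hypothetical private cluster with private nodes but a reachable public endpoint.
private = gcp.container.Cluster("private",
    location="us-central1",
    initial_node_count=1,
    ip_allocation_policy={
        "clusterIpv4CidrBlock": "",    # blank: GKE picks a default-sized range
        "servicesIpv4CidrBlock": "",
    },
    private_cluster_config={
        "enablePrivateNodes": True,
        "enablePrivateEndpoint": False,
        "masterIpv4CidrBlock": "172.16.0.0/28",  # placeholder /28 range
    })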
The release_channel object supports the following:
channel (pulumi.Input[str]) - The selected release channel. Accepted values are:
UNSPECIFIED: Not set.
RAPID: Weekly upgrade cadence; Early testers and developers who require new features.
REGULAR: Multiple per month upgrade cadence; Production users who need features not yet offered in the Stable channel.
STABLE: Every few months upgrade cadence; Production users who need stability above all else, and for whom frequent upgrades are too risky.
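For example, enrolling a cluster in the REGULAR channel might look like this sketch; the cluster name and location are placeholders.

import pulumi_gcp as gcp

# Hypothetical cluster enrolled in the REGULAR release channel.
channeled = gcp.container.Cluster("channeled",
    location="us-central1",
    initial_node_count=1,
    release_channel={
        "channel": "REGULAR",
    })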
The resource_usage_export_config object supports the following:
bigqueryDestination (pulumi.Input[dict]) - Parameters for using BigQuery as the destination of resource usage export.
dataset_id (pulumi.Input[str])
enableNetworkEgressMetering (pulumi.Input[bool]) - Whether to enable network egress metering for this cluster. If enabled, a daemonset will be created in the cluster to meter network egress traffic.
enableResourceConsumptionMetering (pulumi.Input[bool]) - Whether to enable resource consumption metering on this cluster. When enabled, a table will be created in the resource export BigQuery dataset to store resource consumption data. The resulting table can be joined with the resource usage table or with BigQuery billing export. Defaults to true.
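A sketch exporting usage data to a BigQuery dataset; the dataset resource and its id are illustrative placeholders.

import pulumi_gcp as gcp

# Hypothetical dataset to receive usage export data.
usage_dataset = gcp.bigquery.Dataset("usageDataset",
    dataset_id="cluster_usage",          # placeholder dataset id
    delete_contents_on_destroy=True)

# Hypothetical cluster exporting resource consumption metering into that dataset.
metered = gcp.container.Cluster("metered",
    location="us-central1",
    initial_node_count=1,
    resource_usage_export_config={
        "enableNetworkEgressMetering": False,
        "enableResourceConsumptionMetering": True,
        "bigqueryDestination": {
            "dataset_id": usage_dataset.dataset_id,
        },
    })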
The vertical_pod_autoscaling object supports the following:
enabled (pulumi.Input[bool]) - Enables vertical pod autoscaling for this cluster.
The workload_identity_config object supports the following:
identityNamespace (pulumi.Input[str]) - Currently, the only supported identity namespace is the project’s default.
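A sketch enabling Workload Identity, assuming the project id placeholder below is replaced with the cluster’s own project; the only supported namespace has the form PROJECT_ID.svc.id.goog.

import pulumi_gcp as gcp

project_id = "my-project"  # placeholder project id

# Hypothetical cluster with Workload Identity enabled.
wi_cluster = gcp.container.Cluster("wiCluster",
    location="us-central1",
    initial_node_count=1,
    workload_identity_config={
        "identityNamespace": f"{project_id}.svc.id.goog",
    })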
addons_config: pulumi.Output[dict] = None¶The configuration for addons supported by GKE. Structure is documented below.
cloudrunConfig(dict) - . The status of the CloudRun addon. It is disabled by default. Setdisabled = falseto enable.disabled(bool) - The status of the Istio addon, which makes it easy to set up Istio for services in a cluster. It is disabled by default. Setdisabled = falseto enable.
configConnectorConfig(dict) - . The status of the ConfigConnector addon. It is disabled by default; Setenabled = trueto enable.enabled(bool) - Enable the PodSecurityPolicy controller for this cluster. If enabled, pods must be valid under a PodSecurityPolicy to be created.
dnsCacheConfig(dict) - . The status of the NodeLocal DNSCache addon. It is disabled by default. Setenabled = trueto enable.enabled(bool) - Enable the PodSecurityPolicy controller for this cluster. If enabled, pods must be valid under a PodSecurityPolicy to be created.
gcePersistentDiskCsiDriverConfig(dict) - . Whether this cluster should enable the Google Compute Engine Persistent Disk Container Storage Interface (CSI) Driver. Defaults to disabled; setenabled = trueto enable.enabled(bool) - Enable the PodSecurityPolicy controller for this cluster. If enabled, pods must be valid under a PodSecurityPolicy to be created.
horizontalPodAutoscaling(dict) - The status of the Horizontal Pod Autoscaling addon, which increases or decreases the number of replica pods a replication controller has based on the resource usage of the existing pods. It ensures that a Heapster pod is running in the cluster, which is also used by the Cloud Monitoring service. It is enabled by default; setdisabled = trueto disable.disabled(bool) - The status of the Istio addon, which makes it easy to set up Istio for services in a cluster. It is disabled by default. Setdisabled = falseto enable.
httpLoadBalancing(dict) - The status of the HTTP (L7) load balancing controller addon, which makes it easy to set up HTTP load balancers for services in a cluster. It is enabled by default; setdisabled = trueto disable.disabled(bool) - The status of the Istio addon, which makes it easy to set up Istio for services in a cluster. It is disabled by default. Setdisabled = falseto enable.
istioConfig(dict) - . Structure is documented below.auth(str) - The authentication type between services in Istio. Available options includeAUTH_MUTUAL_TLS.disabled(bool) - The status of the Istio addon, which makes it easy to set up Istio for services in a cluster. It is disabled by default. Setdisabled = falseto enable.
kalmConfig(dict) - . Configuration for the KALM addon, which manages the lifecycle of k8s. It is disabled by default; Setenabled = trueto enable.enabled(bool) - Enable the PodSecurityPolicy controller for this cluster. If enabled, pods must be valid under a PodSecurityPolicy to be created.
networkPolicyConfig(dict) - Whether we should enable the network policy addon for the master. This must be enabled in order to enable network policy for the nodes. To enable this, you must also define anetwork_policyblock, otherwise nothing will happen. It can only be disabled if the nodes already do not have network policies enabled. Defaults to disabled; setdisabled = falseto enable.disabled(bool) - The status of the Istio addon, which makes it easy to set up Istio for services in a cluster. It is disabled by default. Setdisabled = falseto enable.
authenticator_groups_config: pulumi.Output[dict] = None¶Configuration for the Google Groups for GKE feature. Structure is documented below.
securityGroup(str) - The name of the RBAC security group for use with Google security groups in Kubernetes RBAC. Group name must be in formatgke-security-groups@yourdomain.com.
cluster_autoscaling: pulumi.Output[dict] = None¶Per-cluster configuration of Node Auto-Provisioning with Cluster Autoscaler to automatically adjust the size of the cluster and create/delete node pools based on the current needs of the cluster’s workload. See the guide to using Node Auto-Provisioning for more details. Structure is documented below.
autoProvisioningDefaults(dict) - Contains defaults for a node pool created by NAP. Structure is documented below.min_cpu_platform(str) - Minimum CPU platform to be used by this instance. The instance may be scheduled on the specified or newer CPU platform. Applicable values are the friendly names of CPU platforms, such asIntel Haswell. See the official documentation for more information.oauthScopes(list) - The set of Google API scopes to be made available on all of the node VMs under the “default” service account. These can be either FQDNs, or scope aliases. The following scopes are necessary to ensure the correct functioning of the cluster:service_account(str) - The service account to be used by the Node VMs. If not specified, the “default” service account is used. In order to use the configuredoauth_scopesfor logging and monitoring, the service account being used needs the roles/logging.logWriter and roles/monitoring.metricWriter roles.
autoscalingProfile(str) - ) Configuration options for the Autoscaling profile feature, which lets you choose whether the cluster autoscaler should optimize for resource utilization or resource availability when deciding to remove nodes from a cluster. Can beBALANCEDorOPTIMIZE_UTILIZATION. Defaults toBALANCED.enabled(bool) - Enable the PodSecurityPolicy controller for this cluster. If enabled, pods must be valid under a PodSecurityPolicy to be created.resourceLimits(list) - Global constraints for machine resources in the cluster. Configuring thecpuandmemorytypes is required if node auto-provisioning is enabled. These limits will apply to node pool autoscaling in addition to node auto-provisioning. Structure is documented below.maximum(float) - Maximum amount of the resource in the cluster.minimum(float) - Minimum amount of the resource in the cluster.resourceType(str) - The type of the resource. For example,cpuandmemory. See the guide to using Node Auto-Provisioning for a list of types.
cluster_ipv4_cidr: pulumi.Output[str] = None¶The IP address range of the Kubernetes pods in this cluster in CIDR notation (e.g. 10.96.0.0/14). Leave blank to have one automatically chosen or specify a /14 block in 10.0.0.0/8. This field will only work for routes-based clusters, where ip_allocation_policy is not defined.
cluster_telemetry: pulumi.Output[dict] = None¶Configuration for the ClusterTelemetry feature. Structure is documented below.
type (str) - The type of telemetry integration to use for the cluster.
database_encryption: pulumi.Output[dict] = None¶Configuration for application-layer secrets encryption, which encrypts Kubernetes secrets at rest with a Cloud KMS key. Structure is documented below.
keyName (str) - The key to use to encrypt/decrypt secrets. See the DatabaseEncryption definition for more information.
state (str) - ENCRYPTED or DECRYPTED.
default_max_pods_per_node: pulumi.Output[float] = None¶The default maximum number of pods per node in this cluster. This doesn’t work on “routes-based” clusters, clusters that don’t have IP Aliasing enabled. See the official documentation for more information.
description: pulumi.Output[str] = None¶Description of the cluster.
enable_binary_authorization: pulumi.Output[bool] = None¶Enable Binary Authorization for this cluster. If enabled, all container images will be validated by Google Binary Authorization.
enable_intranode_visibility: pulumi.Output[bool] = None¶Whether Intra-node visibility is enabled for this cluster. This makes same node pod to pod traffic visible for VPC network.
enable_kubernetes_alpha: pulumi.Output[bool] = None¶Whether to enable Kubernetes Alpha features for this cluster. Note that when this option is enabled, the cluster cannot be upgraded and will be automatically deleted after 30 days.
enable_legacy_abac: pulumi.Output[bool] = None¶Whether the ABAC authorizer is enabled for this cluster. When enabled, identities in the system, including service accounts, nodes, and controllers, will have statically granted permissions beyond those provided by the RBAC configuration or IAM. Defaults to false.
enable_shielded_nodes: pulumi.Output[bool] = None¶Enable Shielded Nodes features on all nodes in this cluster. Defaults to false.
enable_tpu: pulumi.Output[bool] = None¶Whether to enable Cloud TPU resources in this cluster. See the official documentation.
endpoint: pulumi.Output[str] = None¶The IP address of this cluster’s Kubernetes master.
initial_node_count: pulumi.Output[float] = None¶The number of nodes to create in this cluster’s default node pool. In regional or multi-zonal clusters, this is the number of nodes per zone. Must be set if node_pool is not set. If you’re using container.NodePool objects with no default node pool, you’ll need to set this to a value of at least 1, alongside setting remove_default_node_pool to true.
instance_group_urls: pulumi.Output[list] = None¶List of instance group URLs which have been assigned to the cluster.
ip_allocation_policy: pulumi.Output[dict] = None¶Configuration of cluster IP allocation for VPC-native clusters. Adding this block enables IP aliasing, making the cluster VPC-native instead of routes-based. Structure is documented below.
clusterIpv4CidrBlock(str) - The IP address range for the cluster pod IPs. Set to blank to have a range chosen with the default size. Set to /netmask (e.g. /14) to have a range chosen with a specific netmask. Set to a CIDR notation (e.g. 10.96.0.0/14) from the RFC-1918 private networks (e.g. 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16) to pick a specific range to use.clusterSecondaryRangeName(str) - The name of the existing secondary range in the cluster’s subnetwork to use for pod IP addresses. Alternatively,cluster_ipv4_cidr_blockcan be used to automatically create a GKE-managed one.servicesIpv4CidrBlock(str) - The IP address range of the services IPs in this cluster. Set to blank to have a range chosen with the default size. Set to /netmask (e.g. /14) to have a range chosen with a specific netmask. Set to a CIDR notation (e.g. 10.96.0.0/14) from the RFC-1918 private networks (e.g. 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16) to pick a specific range to use.servicesSecondaryRangeName(str) - The name of the existing secondary range in the cluster’s subnetwork to use for serviceClusterIPs. Alternatively,services_ipv4_cidr_blockcan be used to automatically create a GKE-managed one.
label_fingerprint: pulumi.Output[str] = None¶The fingerprint of the set of labels for this cluster.
location: pulumi.Output[str] = None¶The location (region or zone) in which the cluster master will be created, as well as the default node location. If you specify a zone (such as us-central1-a), the cluster will be a zonal cluster with a single cluster master. If you specify a region (such as us-west1), the cluster will be a regional cluster with multiple masters spread across zones in the region, and with default node locations in those zones as well.
logging_service: pulumi.Output[str] = None¶The logging service that the cluster should write logs to. Available options include logging.googleapis.com (Legacy Stackdriver), logging.googleapis.com/kubernetes (Stackdriver Kubernetes Engine Logging), and none. Defaults to logging.googleapis.com/kubernetes.
maintenance_policy: pulumi.Output[dict] = None¶The maintenance policy to use for the cluster. Structure is documented below.
dailyMaintenanceWindow(dict) - Time window specified for daily maintenance operations. Specifystart_timein RFC3339 format “HH:MM”, where HH : [00-23] and MM : [00-59] GMT. For example:duration(str)startTime(str)
recurringWindow(dict) - Time window for recurring maintenance operations.endTime(str)recurrence(str)startTime(str)
master_auth: pulumi.Output[dict] = None¶The authentication information for accessing the Kubernetes master. Some values in this block are only returned by the API if your service account has permission to get credentials for your GKE cluster. If you see an unexpected diff removing a username/password or unsetting your client cert, ensure you have the
container.clusters.getCredentialspermission. Structure is documented below.clientCertificate(str)clientCertificateConfig(dict) - Whether client certificate authorization is enabled for this cluster. For example:issueClientCertificate(bool)
clientKey(str)clusterCaCertificate(str)password(str) - The password to use for HTTP basic authentication when accessing the Kubernetes master endpoint.username(str) - The username to use for HTTP basic authentication when accessing the Kubernetes master endpoint. If not present basic auth will be disabled.
master_authorized_networks_config: pulumi.Output[dict] = None¶The desired configuration options for master authorized networks. Omit the nested cidr_blocks attribute to disallow external access (except the cluster node IPs, which GKE automatically whitelists).
cidrBlocks (list) - External networks that can access the Kubernetes cluster master through HTTPS.
cidr_block (str) - External network that can access Kubernetes master through HTTPS. Must be specified in CIDR notation.
display_name (str) - Field for users to identify CIDR blocks.
master_version: pulumi.Output[str] = None¶The current version of the master in the cluster. This may be different than the min_master_version set in the config if the master has been updated by GKE.
min_master_version: pulumi.Output[str] = None¶The minimum version of the master. GKE will auto-update the master to new versions, so this does not guarantee the current master version–use the read-only master_version field to obtain that. If unset, the cluster’s version will be set by GKE to the version of the most recent official release (which is not necessarily the latest version). Most users will find the container.getEngineVersions data source useful - it indicates which versions are available. If you intend to specify versions manually, the docs describe the various acceptable formats for this field.
monitoring_service: pulumi.Output[str] = None¶The monitoring service that the cluster should write metrics to. Automatically send metrics from pods in the cluster to the Google Cloud Monitoring API. VM metrics will be collected by Google Compute Engine regardless of this setting. Available options include monitoring.googleapis.com (Legacy Stackdriver), monitoring.googleapis.com/kubernetes (Stackdriver Kubernetes Engine Monitoring), and none. Defaults to monitoring.googleapis.com/kubernetes.
name: pulumi.Output[str] = None¶The name of the cluster, unique within the project and location.
network: pulumi.Output[str] = None¶The name or self_link of the Google Compute Engine network to which the cluster is connected. For Shared VPC, set this to the self link of the shared network.
network_policy: pulumi.Output[dict] = None¶Configuration options for the NetworkPolicy feature. Structure is documented below.
enabled (bool) - Whether network policy is enabled on the cluster.
provider (str) - The selected network policy provider. Defaults to PROVIDER_UNSPECIFIED.
node_config: pulumi.Output[dict] = None¶Parameters used in creating the default node pool. Generally, this field should not be used at the same time as a
container.NodePool or a node_pool block; this configuration manages the default node pool, which isn’t recommended to be used. Structure is documented below.
- bootDiskKmsKey (str) - The Customer Managed Encryption Key used to encrypt the boot disk attached to each node in the node pool. This should be of the form projects/[KEY_PROJECT_ID]/locations/[LOCATION]/keyRings/[RING_NAME]/cryptoKeys/[KEY_NAME]. For more information about protecting resources with Cloud KMS Keys please see: https://cloud.google.com/compute/docs/disks/customer-managed-encryption
- disk_size_gb (float) - Size of the disk attached to each node, specified in GB. The smallest allowed disk size is 10GB. Defaults to 100GB.
- diskType (str) - Type of the disk attached to each node (e.g. ‘pd-standard’ or ‘pd-ssd’). If unspecified, the default disk type is ‘pd-standard’.
- guest_accelerators (list) - List of the type and count of accelerator cards attached to the instance. Structure documented below.
  - count (float) - The number of the guest accelerator cards exposed to this instance.
  - type (str) - The accelerator type resource to expose to this instance, e.g. nvidia-tesla-k80.
- imageType (str) - The image type to use for this node. Note that changing the image type will delete and recreate all nodes in the node pool.
- labels (dict) - The Kubernetes labels (key/value pairs) to be applied to each node.
- localSsdCount (float) - The amount of local SSD disks that will be attached to each cluster node. Defaults to 0.
- machine_type (str) - The name of a Google Compute Engine machine type. Defaults to n1-standard-1. To create a custom machine type, value should be set as specified here.
- metadata (dict) - The metadata key/value pairs assigned to instances in the cluster. From GKE 1.12 onwards, disable-legacy-endpoints is set to true by the API; if metadata is set but that default value is not included, the provider will attempt to unset the value. To avoid this, set the value in your config.
- min_cpu_platform (str) - Minimum CPU platform to be used by this instance. The instance may be scheduled on the specified or newer CPU platform. Applicable values are the friendly names of CPU platforms, such as Intel Haswell. See the official documentation for more information.
- oauthScopes (list) - The set of Google API scopes to be made available on all of the node VMs under the “default” service account. These can be either FQDNs, or scope aliases.
- preemptible (bool) - A boolean that represents whether or not the underlying node VMs are preemptible. See the official documentation for more information. Defaults to false.
- sandboxConfig (dict) - GKE Sandbox configuration. When enabling this feature you must specify image_type = "COS_CONTAINERD" and node_version = "1.12.7-gke.17" or later to use it. Structure is documented below.
  - sandboxType (str) - Which sandbox to use for pods in the node pool.
- service_account (str) - The service account to be used by the Node VMs. If not specified, the “default” service account is used. In order to use the configured oauth_scopes for logging and monitoring, the service account being used needs the roles/logging.logWriter and roles/monitoring.metricWriter roles.
- shielded_instance_config (dict) - Shielded Instance options. Structure is documented below.
  - enableIntegrityMonitoring (bool) - Defines if the instance has integrity monitoring enabled.
  - enableSecureBoot (bool) - Defines if the instance has Secure Boot enabled.
- tags (list) - The list of instance tags applied to all nodes. Tags are used to identify valid sources or targets for network firewalls.
- taints (list) - A list of Kubernetes taints to apply to nodes. GKE’s API can only set this field on cluster creation. However, GKE will add taints to your nodes if you enable certain features such as GPUs. If this field is set, any diffs on this field will cause the provider to recreate the underlying resource. Taint values can be updated safely in Kubernetes (e.g. through kubectl), and it’s recommended that you do not use this field to manage taints. If you do, lifecycle.ignore_changes is recommended. Structure is documented below.
  - effect (str) - Effect for taint. Accepted values are NO_SCHEDULE, PREFER_NO_SCHEDULE, and NO_EXECUTE.
  - key (str) - Key for taint.
  - value (str) - Value for taint.
- workloadMetadataConfig (dict) - Metadata configuration to expose to workloads on the node pool. Structure is documented below.
  - nodeMetadata (str) - How to expose the node metadata to the workload running on the node. Accepted values are:
    - UNSPECIFIED: Not set.
    - SECURE: Prevent workloads not in hostNetwork from accessing certain VM metadata, specifically kube-env, which contains Kubelet credentials, and the instance identity token. See Metadata Concealment documentation.
    - EXPOSE: Expose all VM metadata to pods.
    - GKE_METADATA_SERVER: Enables workload identity on the node.
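A minimal sketch that combines a few of the node_config fields above; the machine type, disk settings, scopes, and labels are illustrative values, not requirements:

import pulumi_gcp as gcp

# Sketch: a cluster whose default node pool is tuned via node_config.
cluster = gcp.container.Cluster("nodeconfig-example",
    location="us-central1-a",
    initial_node_count=1,
    node_config={
        "machine_type": "n1-standard-1",
        "disk_size_gb": 50,
        "diskType": "pd-ssd",
        "oauthScopes": [
            "https://www.googleapis.com/auth/logging.write",
            "https://www.googleapis.com/auth/monitoring",
        ],
        "labels": {"team": "platform"},  # illustrative Kubernetes node labels
        "metadata": {"disable-legacy-endpoints": "true"},
    })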
node_locations: pulumi.Output[list] = None¶The list of zones in which the cluster’s nodes are located. Nodes must be in the region of their regional cluster or in the same region as their cluster’s zone for zonal clusters. If this is specified for a zonal cluster, omit the cluster’s zone.
node_pools: pulumi.Output[list] = None¶List of node pools associated with this cluster. See container.NodePool for schema. Warning: node pools defined inside a cluster can’t be changed (or added/removed) after cluster creation without deleting and recreating the entire cluster. Unless you absolutely need the ability to say “these are the only node pools associated with this cluster”, use the container.NodePool resource instead of this property.
- autoscaling (dict)
  - maxNodeCount (float)
  - minNodeCount (float)
- initial_node_count (float) - The number of nodes to create in this cluster’s default node pool. In regional or multi-zonal clusters, this is the number of nodes per zone. Must be set if node_pool is not set. If you’re using container.NodePool objects with no default node pool, you’ll need to set this to a value of at least 1, alongside setting remove_default_node_pool to true.
- instance_group_urls (list) - List of instance group URLs which have been assigned to the cluster.
- management (dict)
  - autoRepair (bool)
  - autoUpgrade (bool)
- max_pods_per_node (float)
- name (str) - The name of the cluster, unique within the project and location.
- name_prefix (str)
- node_config (dict) - Parameters used in creating the node pool. This block accepts the same fields as the node_config block documented above.
- node_count (float)
- node_locations (list) - The list of zones in which the cluster’s nodes are located. Nodes must be in the region of their regional cluster or in the same region as their cluster’s zone for zonal clusters. If this is specified for a zonal cluster, omit the cluster’s zone.
- upgrade_settings (dict)
  - maxSurge (float)
  - maxUnavailable (float)
- version (str)
node_version: pulumi.Output[str] = None¶The Kubernetes version on the nodes. Must either be unset or set to the same value as
min_master_version on create. Defaults to the default version set by GKE which is not necessarily the latest version. This only affects nodes in the default node pool. While a fuzzy version can be specified, it’s recommended that you specify explicit versions as the provider will see spurious diffs when fuzzy versions are used. See the container.getEngineVersions data source’s version_prefix field to approximate fuzzy versions. To update nodes in other node pools, use the version attribute on the node pool.
pod_security_policy_config: pulumi.Output[dict] = None¶Configuration for the PodSecurityPolicy feature. Structure is documented below.
enabled(bool) - Enable the PodSecurityPolicy controller for this cluster. If enabled, pods must be valid under a PodSecurityPolicy to be created.
private_cluster_config: pulumi.Output[dict] = None¶Configuration for private clusters, clusters with private nodes. Structure is documented below.
- enablePrivateEndpoint (bool) - When true, the cluster’s private endpoint is used as the cluster endpoint and access through the public endpoint is disabled. When false, either endpoint can be used. This field only applies to private clusters, when enable_private_nodes is true.
- enablePrivateNodes (bool) - Enables the private cluster feature, creating a private endpoint on the cluster. In a private cluster, nodes only have RFC 1918 private addresses and communicate with the master’s private endpoint via private networking.
- masterGlobalAccessConfig (dict)
  - enabled (bool) - Whether the cluster master is accessible globally (from any region) over the private endpoint.
- masterIpv4CidrBlock (str) - The IP range in CIDR notation to use for the hosted master network. This range will be used for assigning private IP addresses to the cluster master(s) and the ILB VIP. This range must not overlap with any other ranges in use within the cluster’s network, and it must be a /28 subnet. See Private Cluster Limitations for more details. This field only applies to private clusters, when enable_private_nodes is true.
- peeringName (str) - The name of the peering between this cluster and the Google owned VPC.
- privateEndpoint (str) - The internal IP address of this cluster’s master endpoint.
- publicEndpoint (str) - The external IP address of this cluster’s master endpoint.
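A hedged sketch of a private cluster using this block; the CIDR is illustrative, and an ip_allocation_policy block (left empty here so GKE picks ranges) is assumed to be required because private clusters must be VPC-native:

import pulumi_gcp as gcp

# Sketch: private nodes with a public master endpoint and a /28 master range.
private = gcp.container.Cluster("private-example",
    location="us-central1",
    initial_node_count=1,
    ip_allocation_policy={},  # assumption: empty block lets GKE choose ranges
    private_cluster_config={
        "enablePrivateNodes": True,
        "enablePrivateEndpoint": False,
        "masterIpv4CidrBlock": "172.16.0.0/28",  # illustrative /28 range
    })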
project: pulumi.Output[str] = None¶The ID of the project in which the resource belongs. If it is not provided, the provider project is used.
release_channel: pulumi.Output[dict] = None¶Configuration options for the Release channel feature, which provide more control over automatic upgrades of your GKE clusters. When updating this field, GKE imposes specific version requirements. See Migrating between release channels for more details; the
container.getEngineVersions data source can provide the default version for a channel. Note that removing the release_channel field from your config will cause this provider to stop managing your cluster’s release channel, but will not unenroll it. Instead, use the "UNSPECIFIED" channel. Structure is documented below.
- channel (str) - The selected release channel. Accepted values are:
  - UNSPECIFIED: Not set.
  - RAPID: Weekly upgrade cadence; early testers and developers who require new features.
  - REGULAR: Multiple upgrades per month; production users who need features not yet offered in the Stable channel.
  - STABLE: Upgrades every few months; production users who need stability above all else, and for whom frequent upgrades are too risky.
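A minimal sketch of enrolling a cluster in the REGULAR channel (cluster name and location are illustrative):

import pulumi_gcp as gcp

# Sketch: let GKE manage upgrades through the REGULAR release channel.
cluster = gcp.container.Cluster("channel-example",
    location="us-central1",
    initial_node_count=1,
    release_channel={
        "channel": "REGULAR",
    })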
remove_default_node_pool: pulumi.Output[bool] = None¶If
true, deletes the default node pool upon cluster creation. If you’re using container.NodePool resources with no default node pool, this should be set to true, alongside setting initial_node_count to at least 1.
resource_labels: pulumi.Output[dict] = None¶The GCE resource labels (a map of key/value pairs) to be applied to the cluster.
resource_usage_export_config: pulumi.Output[dict] = None¶Configuration for the ResourceUsageExportConfig feature. Structure is documented below.
- bigqueryDestination (dict) - Parameters for using BigQuery as the destination of resource usage export.
  - dataset_id (str)
- enableNetworkEgressMetering (bool) - Whether to enable network egress metering for this cluster. If enabled, a daemonset will be created in the cluster to meter network egress traffic.
- enableResourceConsumptionMetering (bool) - Whether to enable resource consumption metering on this cluster. When enabled, a table will be created in the resource export BigQuery dataset to store resource consumption data. The resulting table can be joined with the resource usage table or with BigQuery billing export. Defaults to true.
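A minimal sketch of exporting usage data to BigQuery; the dataset ID is illustrative and the dataset is assumed to already exist:

import pulumi_gcp as gcp

# Sketch: export cluster resource usage (including egress) to BigQuery.
cluster = gcp.container.Cluster("usage-export-example",
    location="us-central1",
    initial_node_count=1,
    resource_usage_export_config={
        "enableNetworkEgressMetering": True,
        "bigqueryDestination": {
            "dataset_id": "usage_metering",  # illustrative, pre-existing dataset
        },
    })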
services_ipv4_cidr: pulumi.Output[str] = None¶The IP address range of the Kubernetes services in this cluster, in CIDR notation (e.g.
1.2.3.4/29). Service addresses are typically put in the last /16 from the container CIDR.
subnetwork: pulumi.Output[str] = None¶The name or self_link of the Google Compute Engine subnetwork in which the cluster’s instances are launched.
tpu_ipv4_cidr_block: pulumi.Output[str] = None¶The IP address range of the Cloud TPUs in this cluster, in CIDR notation (e.g.
1.2.3.4/29).
vertical_pod_autoscaling: pulumi.Output[dict] = None¶Vertical Pod Autoscaling automatically adjusts the resources of pods controlled by it. Structure is documented below.
- enabled (bool) - Enables vertical pod autoscaling for this cluster.
workload_identity_config: pulumi.Output[dict] = None¶Workload Identity allows Kubernetes service accounts to act as a user-managed Google IAM Service Account. Structure is documented below.
identityNamespace(str) - Currently, the only supported identity namespace is the project’s default.
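A minimal sketch of enabling Workload Identity; the project ID is illustrative, and the identity namespace is assumed to follow the <project-id>.svc.id.goog convention:

import pulumi_gcp as gcp

# Sketch: allow Kubernetes service accounts to act as Google service accounts.
project = "my-project"  # assumption: replace with your project ID
cluster = gcp.container.Cluster("wi-example",
    location="us-central1",
    initial_node_count=1,
    workload_identity_config={
        "identityNamespace": f"{project}.svc.id.goog",
    })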
- static
get(resource_name, id, opts=None, addons_config=None, authenticator_groups_config=None, cluster_autoscaling=None, cluster_ipv4_cidr=None, cluster_telemetry=None, database_encryption=None, default_max_pods_per_node=None, description=None, enable_binary_authorization=None, enable_intranode_visibility=None, enable_kubernetes_alpha=None, enable_legacy_abac=None, enable_shielded_nodes=None, enable_tpu=None, endpoint=None, initial_node_count=None, instance_group_urls=None, ip_allocation_policy=None, label_fingerprint=None, location=None, logging_service=None, maintenance_policy=None, master_auth=None, master_authorized_networks_config=None, master_version=None, min_master_version=None, monitoring_service=None, name=None, network=None, network_policy=None, node_config=None, node_locations=None, node_pools=None, node_version=None, operation=None, pod_security_policy_config=None, private_cluster_config=None, project=None, release_channel=None, remove_default_node_pool=None, resource_labels=None, resource_usage_export_config=None, services_ipv4_cidr=None, subnetwork=None, tpu_ipv4_cidr_block=None, vertical_pod_autoscaling=None, workload_identity_config=None)¶ Get an existing Cluster resource’s state with the given name, id, and optional extra properties used to qualify the lookup.
- Parameters
resource_name (str) – The unique name of the resulting resource.
id (str) – The unique provider ID of the resource to lookup.
opts (pulumi.ResourceOptions) – Options for the resource.
addons_config (pulumi.Input[dict]) – The configuration for addons supported by GKE. Structure is documented below.
authenticator_groups_config (pulumi.Input[dict]) –
Configuration for the Google Groups for GKE feature. Structure is documented below.
cluster_autoscaling (pulumi.Input[dict]) –
Per-cluster configuration of Node Auto-Provisioning with Cluster Autoscaler to automatically adjust the size of the cluster and create/delete node pools based on the current needs of the cluster’s workload. See the guide to using Node Auto-Provisioning for more details. Structure is documented below.
cluster_ipv4_cidr (pulumi.Input[str]) – The IP address range of the Kubernetes pods in this cluster in CIDR notation (e.g.
10.96.0.0/14). Leave blank to have one automatically chosen or specify a /14 block in 10.0.0.0/8. This field will only work for routes-based clusters, where ip_allocation_policy is not defined.
cluster_telemetry (pulumi.Input[dict]) – Configuration for the ClusterTelemetry feature. Structure is documented below.
database_encryption (pulumi.Input[dict]) –
Configuration for application-layer secrets encryption (encrypting Kubernetes secrets at rest with Cloud KMS). Structure is documented below.
default_max_pods_per_node (pulumi.Input[float]) –
The default maximum number of pods per node in this cluster. This doesn’t work on “routes-based” clusters, clusters that don’t have IP Aliasing enabled. See the official documentation for more information.
description (pulumi.Input[str]) – Description of the cluster.
enable_binary_authorization (pulumi.Input[bool]) – Enable Binary Authorization for this cluster. If enabled, all container images will be validated by Google Binary Authorization.
enable_intranode_visibility (pulumi.Input[bool]) – Whether Intra-node visibility is enabled for this cluster. This makes same node pod to pod traffic visible for VPC network.
enable_kubernetes_alpha (pulumi.Input[bool]) – Whether to enable Kubernetes Alpha features for this cluster. Note that when this option is enabled, the cluster cannot be upgraded and will be automatically deleted after 30 days.
enable_legacy_abac (pulumi.Input[bool]) – Whether the ABAC authorizer is enabled for this cluster. When enabled, identities in the system, including service accounts, nodes, and controllers, will have statically granted permissions beyond those provided by the RBAC configuration or IAM. Defaults to
false.
enable_shielded_nodes (pulumi.Input[bool]) – Enable Shielded Nodes features on all nodes in this cluster. Defaults to false.
enable_tpu (pulumi.Input[bool]) –
Whether to enable Cloud TPU resources in this cluster. See the official documentation.
endpoint (pulumi.Input[str]) – The IP address of this cluster’s Kubernetes master.
initial_node_count (pulumi.Input[float]) – The number of nodes to create in this cluster’s default node pool. In regional or multi-zonal clusters, this is the number of nodes per zone. Must be set if
node_pool is not set. If you’re using container.NodePool objects with no default node pool, you’ll need to set this to a value of at least 1, alongside setting remove_default_node_pool to true.
instance_group_urls (pulumi.Input[list]) – List of instance group URLs which have been assigned to the cluster.
ip_allocation_policy (pulumi.Input[dict]) –
Configuration of cluster IP allocation for VPC-native clusters. Adding this block enables IP aliasing, making the cluster VPC-native instead of routes-based. Structure is documented below.
label_fingerprint (pulumi.Input[str]) – The fingerprint of the set of labels for this cluster.
location (pulumi.Input[str]) – The location (region or zone) in which the cluster master will be created, as well as the default node location. If you specify a zone (such as
us-central1-a), the cluster will be a zonal cluster with a single cluster master. If you specify a region (such as us-west1), the cluster will be a regional cluster with multiple masters spread across zones in the region, and with default node locations in those zones as well.
logging_service (pulumi.Input[str]) – The logging service that the cluster should write logs to. Available options include logging.googleapis.com (Legacy Stackdriver), logging.googleapis.com/kubernetes (Stackdriver Kubernetes Engine Logging), and none. Defaults to logging.googleapis.com/kubernetes.
maintenance_policy (pulumi.Input[dict]) – The maintenance policy to use for the cluster. Structure is documented below.
master_auth (pulumi.Input[dict]) – The authentication information for accessing the Kubernetes master. Some values in this block are only returned by the API if your service account has permission to get credentials for your GKE cluster. If you see an unexpected diff removing a username/password or unsetting your client cert, ensure you have the container.clusters.getCredentials permission. Structure is documented below.
master_authorized_networks_config (pulumi.Input[dict]) – The desired configuration options for master authorized networks. Omit the nested cidr_blocks attribute to disallow external access (except the cluster node IPs, which GKE automatically whitelists).
master_version (pulumi.Input[str]) – The current version of the master in the cluster. This may be different than the min_master_version set in the config if the master has been updated by GKE.
min_master_version (pulumi.Input[str]) – The minimum version of the master. GKE will auto-update the master to new versions, so this does not guarantee the current master version; use the read-only master_version field to obtain that. If unset, the cluster’s version will be set by GKE to the version of the most recent official release (which is not necessarily the latest version). Most users will find the container.getEngineVersions data source useful - it indicates which versions are available. If you intend to specify versions manually, the docs describe the various acceptable formats for this field.
monitoring_service (pulumi.Input[str]) – The monitoring service that the cluster should write metrics to. Automatically send metrics from pods in the cluster to the Google Cloud Monitoring API. VM metrics will be collected by Google Compute Engine regardless of this setting. Available options include monitoring.googleapis.com (Legacy Stackdriver), monitoring.googleapis.com/kubernetes (Stackdriver Kubernetes Engine Monitoring), and none. Defaults to monitoring.googleapis.com/kubernetes.
name (pulumi.Input[str]) – The name of the cluster, unique within the project and location.
network (pulumi.Input[str]) – The name or self_link of the Google Compute Engine network to which the cluster is connected. For Shared VPC, set this to the self link of the shared network.
network_policy (pulumi.Input[dict]) –
Configuration options for the NetworkPolicy feature. Structure is documented below.
node_config (pulumi.Input[dict]) – Parameters used in creating the default node pool. Generally, this field should not be used at the same time as a
container.NodePool or a node_pool block; this configuration manages the default node pool, which isn’t recommended to be used. Structure is documented below.
node_locations (pulumi.Input[list]) – The list of zones in which the cluster’s nodes are located. Nodes must be in the region of their regional cluster or in the same region as their cluster’s zone for zonal clusters. If this is specified for a zonal cluster, omit the cluster’s zone.
node_pools (pulumi.Input[list]) – List of node pools associated with this cluster. See container.NodePool for schema. Warning: node pools defined inside a cluster can’t be changed (or added/removed) after cluster creation without deleting and recreating the entire cluster. Unless you absolutely need the ability to say “these are the only node pools associated with this cluster”, use the container.NodePool resource instead of this property.
node_version (pulumi.Input[str]) – The Kubernetes version on the nodes. Must either be unset or set to the same value as
min_master_version on create. Defaults to the default version set by GKE which is not necessarily the latest version. This only affects nodes in the default node pool. While a fuzzy version can be specified, it’s recommended that you specify explicit versions as the provider will see spurious diffs when fuzzy versions are used. See the container.getEngineVersions data source’s version_prefix field to approximate fuzzy versions. To update nodes in other node pools, use the version attribute on the node pool.
pod_security_policy_config (pulumi.Input[dict]) –
Configuration for the PodSecurityPolicy feature. Structure is documented below.
private_cluster_config (pulumi.Input[dict]) –
Configuration for private clusters, clusters with private nodes. Structure is documented below.
project (pulumi.Input[str]) – The ID of the project in which the resource belongs. If it is not provided, the provider project is used.
release_channel (pulumi.Input[dict]) –
Configuration options for the Release channel feature, which provide more control over automatic upgrades of your GKE clusters. When updating this field, GKE imposes specific version requirements. See Migrating between release channels for more details; the
container.getEngineVersions data source can provide the default version for a channel. Note that removing the release_channel field from your config will cause this provider to stop managing your cluster’s release channel, but will not unenroll it. Instead, use the "UNSPECIFIED" channel. Structure is documented below.
remove_default_node_pool (pulumi.Input[bool]) – If true, deletes the default node pool upon cluster creation. If you’re using container.NodePool resources with no default node pool, this should be set to true, alongside setting initial_node_count to at least 1.
resource_labels (pulumi.Input[dict]) – The GCE resource labels (a map of key/value pairs) to be applied to the cluster.
resource_usage_export_config (pulumi.Input[dict]) –
Configuration for the ResourceUsageExportConfig feature. Structure is documented below.
services_ipv4_cidr (pulumi.Input[str]) –
The IP address range of the Kubernetes services in this cluster, in CIDR notation (e.g.
1.2.3.4/29). Service addresses are typically put in the last /16 from the container CIDR.
subnetwork (pulumi.Input[str]) – The name or self_link of the Google Compute Engine subnetwork in which the cluster’s instances are launched.
tpu_ipv4_cidr_block (pulumi.Input[str]) –
The IP address range of the Cloud TPUs in this cluster, in CIDR notation (e.g.
1.2.3.4/29).
vertical_pod_autoscaling (pulumi.Input[dict]) – Vertical Pod Autoscaling automatically adjusts the resources of pods controlled by it. Structure is documented below.
workload_identity_config (pulumi.Input[dict]) –
Workload Identity allows Kubernetes service accounts to act as a user-managed Google IAM Service Account. Structure is documented below.
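As a quick, hedged illustration of the get method described above, the following sketch looks up an existing cluster and exports its endpoint; the resource name and the ID format shown are assumptions and may differ by provider version:

import pulumi
import pulumi_gcp as gcp

# Sketch: adopt an existing cluster's state into the program by ID.
existing = gcp.container.Cluster.get("existing-cluster",
    "my-project/us-central1/my-cluster")  # assumed project/location/name ID format
pulumi.export("endpoint", existing.endpoint)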
The addons_config object supports the following:
- cloudrunConfig (pulumi.Input[dict]) - The status of the CloudRun addon. It is disabled by default. Set disabled = false to enable.
  - disabled (pulumi.Input[bool]) - Whether the addon is disabled.
- configConnectorConfig (pulumi.Input[dict]) - The status of the ConfigConnector addon. It is disabled by default; set enabled = true to enable.
  - enabled (pulumi.Input[bool]) - Whether the addon is enabled.
- dnsCacheConfig (pulumi.Input[dict]) - The status of the NodeLocal DNSCache addon. It is disabled by default. Set enabled = true to enable.
  - enabled (pulumi.Input[bool]) - Whether the addon is enabled.
- gcePersistentDiskCsiDriverConfig (pulumi.Input[dict]) - Whether this cluster should enable the Google Compute Engine Persistent Disk Container Storage Interface (CSI) Driver. Defaults to disabled; set enabled = true to enable.
  - enabled (pulumi.Input[bool]) - Whether the addon is enabled.
- horizontalPodAutoscaling (pulumi.Input[dict]) - The status of the Horizontal Pod Autoscaling addon, which increases or decreases the number of replica pods a replication controller has based on the resource usage of the existing pods. It ensures that a Heapster pod is running in the cluster, which is also used by the Cloud Monitoring service. It is enabled by default; set disabled = true to disable.
  - disabled (pulumi.Input[bool]) - Whether the addon is disabled.
- httpLoadBalancing (pulumi.Input[dict]) - The status of the HTTP (L7) load balancing controller addon, which makes it easy to set up HTTP load balancers for services in a cluster. It is enabled by default; set disabled = true to disable.
  - disabled (pulumi.Input[bool]) - Whether the addon is disabled.
- istioConfig (pulumi.Input[dict]) - The status of the Istio addon, which makes it easy to set up Istio for services in a cluster. It is disabled by default. Set disabled = false to enable. Structure is documented below.
  - auth (pulumi.Input[str]) - The authentication type between services in Istio. Available options include AUTH_MUTUAL_TLS.
  - disabled (pulumi.Input[bool]) - Whether the addon is disabled.
- kalmConfig (pulumi.Input[dict]) - Configuration for the KALM addon, which manages the lifecycle of k8s applications. It is disabled by default; set enabled = true to enable.
  - enabled (pulumi.Input[bool]) - Whether the addon is enabled.
- networkPolicyConfig (pulumi.Input[dict]) - Whether we should enable the network policy addon for the master. This must be enabled in order to enable network policy for the nodes. To enable this, you must also define a network_policy block, otherwise nothing will happen. It can only be disabled if the nodes already do not have network policies enabled. Defaults to disabled; set disabled = false to enable.
  - disabled (pulumi.Input[bool]) - Whether the addon is disabled.
The authenticator_groups_config object supports the following:
securityGroup (pulumi.Input[str]) - The name of the RBAC security group for use with Google security groups in Kubernetes RBAC. Group name must be in format gke-security-groups@yourdomain.com.
The cluster_autoscaling object supports the following:
- autoProvisioningDefaults (pulumi.Input[dict]) - Contains defaults for a node pool created by NAP. Structure is documented below.
  - min_cpu_platform (pulumi.Input[str]) - Minimum CPU platform to be used by this instance. The instance may be scheduled on the specified or newer CPU platform. Applicable values are the friendly names of CPU platforms, such as Intel Haswell. See the official documentation for more information.
  - oauthScopes (pulumi.Input[list]) - The set of Google API scopes to be made available on all of the node VMs under the “default” service account. These can be either FQDNs, or scope aliases.
  - service_account (pulumi.Input[str]) - The service account to be used by the Node VMs. If not specified, the “default” service account is used. In order to use the configured oauth_scopes for logging and monitoring, the service account being used needs the roles/logging.logWriter and roles/monitoring.metricWriter roles.
- autoscalingProfile (pulumi.Input[str]) - Configuration options for the Autoscaling profile feature, which lets you choose whether the cluster autoscaler should optimize for resource utilization or resource availability when deciding to remove nodes from a cluster. Can be BALANCED or OPTIMIZE_UTILIZATION. Defaults to BALANCED.
- enabled (pulumi.Input[bool]) - Whether node auto-provisioning is enabled.
- resourceLimits (pulumi.Input[list]) - Global constraints for machine resources in the cluster. Configuring the cpu and memory types is required if node auto-provisioning is enabled. These limits will apply to node pool autoscaling in addition to node auto-provisioning. Structure is documented below.
  - maximum (pulumi.Input[float]) - Maximum amount of the resource in the cluster.
  - minimum (pulumi.Input[float]) - Minimum amount of the resource in the cluster.
  - resourceType (pulumi.Input[str]) - The type of the resource. For example, cpu and memory. See the guide to using Node Auto-Provisioning for a list of types.
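A minimal sketch of enabling node auto-provisioning with cpu and memory limits (the limit values are illustrative):

import pulumi_gcp as gcp

# Sketch: let NAP create and delete node pools within the given limits.
cluster = gcp.container.Cluster("nap-example",
    location="us-central1",
    initial_node_count=1,
    cluster_autoscaling={
        "enabled": True,
        "resourceLimits": [
            {"resourceType": "cpu", "minimum": 1, "maximum": 16},
            {"resourceType": "memory", "minimum": 4, "maximum": 64},
        ],
    })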
The cluster_telemetry object supports the following:
type (pulumi.Input[str]) - The type of telemetry to collect for the cluster, such as ENABLED, DISABLED, or SYSTEM_ONLY.
The database_encryption object supports the following:
- keyName (pulumi.Input[str]) - The key to use to encrypt/decrypt secrets. See the DatabaseEncryption definition for more information.
- state (pulumi.Input[str]) - ENCRYPTED or DECRYPTED.
The ip_allocation_policy object supports the following:
- clusterIpv4CidrBlock (pulumi.Input[str]) - The IP address range for the cluster pod IPs. Set to blank to have a range chosen with the default size. Set to /netmask (e.g. /14) to have a range chosen with a specific netmask. Set to a CIDR notation (e.g. 10.96.0.0/14) from the RFC-1918 private networks (e.g. 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16) to pick a specific range to use.
- clusterSecondaryRangeName (pulumi.Input[str]) - The name of the existing secondary range in the cluster’s subnetwork to use for pod IP addresses. Alternatively, cluster_ipv4_cidr_block can be used to automatically create a GKE-managed one.
- servicesIpv4CidrBlock (pulumi.Input[str]) - The IP address range of the services IPs in this cluster. Set to blank to have a range chosen with the default size. Set to /netmask (e.g. /14) to have a range chosen with a specific netmask. Set to a CIDR notation (e.g. 10.96.0.0/14) from the RFC-1918 private networks (e.g. 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16) to pick a specific range to use.
- servicesSecondaryRangeName (pulumi.Input[str]) - The name of the existing secondary range in the cluster’s subnetwork to use for service ClusterIPs. Alternatively, services_ipv4_cidr_block can be used to automatically create a GKE-managed one.
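A minimal sketch of a VPC-native cluster using named secondary ranges; the range names are illustrative and assumed to already exist on the subnetwork:

import pulumi_gcp as gcp

# Sketch: enable IP aliasing with pre-created secondary ranges.
cluster = gcp.container.Cluster("vpc-native-example",
    location="us-central1",
    initial_node_count=1,
    ip_allocation_policy={
        "clusterSecondaryRangeName": "pods",       # illustrative range name
        "servicesSecondaryRangeName": "services",  # illustrative range name
    })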
The maintenance_policy object supports the following:
- dailyMaintenanceWindow (pulumi.Input[dict]) - Time window specified for daily maintenance operations. Specify start_time in RFC3339 format "HH:MM", where HH : [00-23] and MM : [00-59] GMT. See the sketch below for an example.
  - duration (pulumi.Input[str])
  - startTime (pulumi.Input[str])
- recurringWindow (pulumi.Input[dict]) - Time window for recurring maintenance operations.
  - endTime (pulumi.Input[str])
  - recurrence (pulumi.Input[str])
  - startTime (pulumi.Input[str])
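A minimal sketch of a daily maintenance window starting at 03:00 GMT (the start time is illustrative):

import pulumi_gcp as gcp

# Sketch: run daily maintenance at 03:00 GMT.
cluster = gcp.container.Cluster("maintenance-example",
    location="us-central1",
    initial_node_count=1,
    maintenance_policy={
        "dailyMaintenanceWindow": {
            "startTime": "03:00",
        },
    })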
The master_auth object supports the following:
- clientCertificate (pulumi.Input[str])
- clientCertificateConfig (pulumi.Input[dict]) - Whether client certificate authorization is enabled for this cluster. See the sketch below for an example.
  - issueClientCertificate (pulumi.Input[bool])
- clientKey (pulumi.Input[str])
- clusterCaCertificate (pulumi.Input[str])
- password (pulumi.Input[str]) - The password to use for HTTP basic authentication when accessing the Kubernetes master endpoint.
- username (pulumi.Input[str]) - The username to use for HTTP basic authentication when accessing the Kubernetes master endpoint. If not present basic auth will be disabled.
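A minimal sketch that disables basic auth and client certificate issuance, mirroring the example later on this page (key casing follows that example):

import pulumi_gcp as gcp

# Sketch: no basic auth credentials and no issued client certificate.
cluster = gcp.container.Cluster("masterauth-example",
    location="us-central1-a",
    initial_node_count=1,
    master_auth={
        "username": "",
        "password": "",
        "client_certificate_config": {
            "issueClientCertificate": False,
        },
    })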
The master_authorized_networks_config object supports the following:
- cidrBlocks (pulumi.Input[list]) - External networks that can access the Kubernetes cluster master through HTTPS.
  - cidr_block (pulumi.Input[str]) - External network that can access Kubernetes master through HTTPS. Must be specified in CIDR notation.
  - display_name (pulumi.Input[str]) - Field for users to identify CIDR blocks.
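A minimal sketch restricting master access to one illustrative CIDR block:

import pulumi_gcp as gcp

# Sketch: only allow the named network to reach the master over HTTPS.
cluster = gcp.container.Cluster("authorized-networks-example",
    location="us-central1",
    initial_node_count=1,
    master_authorized_networks_config={
        "cidrBlocks": [{
            "cidr_block": "203.0.113.0/24",  # illustrative office range
            "display_name": "office",
        }],
    })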
The network_policy object supports the following:
- enabled (pulumi.Input[bool]) - Whether network policy is enabled on the cluster.
- provider (pulumi.Input[str]) - The selected network policy provider. Defaults to PROVIDER_UNSPECIFIED.
The node_config object accepts the same fields as the node_config block documented above, provided as pulumi.Input values.
The node_pools object accepts the same fields as the node_pools attribute documented above, provided as pulumi.Input values.
The pod_security_policy_config object supports the following:
enabled(pulumi.Input[bool]) - Enable the PodSecurityPolicy controller for this cluster. If enabled, pods must be valid under a PodSecurityPolicy to be created.
The private_cluster_config object supports the following:
- enablePrivateEndpoint (pulumi.Input[bool]) - When true, the cluster’s private endpoint is used as the cluster endpoint and access through the public endpoint is disabled. When false, either endpoint can be used. This field only applies to private clusters, when enable_private_nodes is true.
- enablePrivateNodes (pulumi.Input[bool]) - Enables the private cluster feature, creating a private endpoint on the cluster. In a private cluster, nodes only have RFC 1918 private addresses and communicate with the master’s private endpoint via private networking.
- masterGlobalAccessConfig (pulumi.Input[dict])
  - enabled (pulumi.Input[bool]) - Whether the cluster master is accessible globally (from any region) over the private endpoint.
- masterIpv4CidrBlock (pulumi.Input[str]) - The IP range in CIDR notation to use for the hosted master network. This range will be used for assigning private IP addresses to the cluster master(s) and the ILB VIP. This range must not overlap with any other ranges in use within the cluster’s network, and it must be a /28 subnet. See Private Cluster Limitations for more details. This field only applies to private clusters, when enable_private_nodes is true.
- peeringName (pulumi.Input[str]) - The name of the peering between this cluster and the Google owned VPC.
- privateEndpoint (pulumi.Input[str]) - The internal IP address of this cluster’s master endpoint.
- publicEndpoint (pulumi.Input[str]) - The external IP address of this cluster’s master endpoint.
The release_channel object supports the following:
- channel (pulumi.Input[str]) - The selected release channel. Accepted values are:
  - UNSPECIFIED: Not set.
  - RAPID: Weekly upgrade cadence; early testers and developers who require new features.
  - REGULAR: Multiple upgrades per month; production users who need features not yet offered in the Stable channel.
  - STABLE: Upgrades every few months; production users who need stability above all else, and for whom frequent upgrades are too risky.
The resource_usage_export_config object supports the following:
- bigqueryDestination (pulumi.Input[dict]) - Parameters for using BigQuery as the destination of resource usage export.
  - dataset_id (pulumi.Input[str])
- enableNetworkEgressMetering (pulumi.Input[bool]) - Whether to enable network egress metering for this cluster. If enabled, a daemonset will be created in the cluster to meter network egress traffic.
- enableResourceConsumptionMetering (pulumi.Input[bool]) - Whether to enable resource consumption metering on this cluster. When enabled, a table will be created in the resource export BigQuery dataset to store resource consumption data. The resulting table can be joined with the resource usage table or with BigQuery billing export. Defaults to true.
The vertical_pod_autoscaling object supports the following:
- enabled (pulumi.Input[bool]) - Enables vertical pod autoscaling for this cluster.
The workload_identity_config object supports the following:
identityNamespace(pulumi.Input[str]) - Currently, the only supported identity namespace is the project’s default.
translate_output_property(prop)¶Provides subclasses of Resource an opportunity to translate names of output properties into a format of their choosing before writing those properties to the resource object.
- Parameters
prop (str) – A property name.
- Returns
A potentially transformed property name.
- Return type
str
translate_input_property(prop)¶Provides subclasses of Resource an opportunity to translate names of input properties into a format of their choosing before sending those properties to the Pulumi engine.
- Parameters
prop (str) – A property name.
- Returns
A potentially transformed property name.
- Return type
str
- class
pulumi_gcp.container.GetClusterResult(additional_zones=None, addons_configs=None, authenticator_groups_configs=None, cluster_autoscalings=None, cluster_ipv4_cidr=None, cluster_telemetries=None, database_encryptions=None, default_max_pods_per_node=None, description=None, enable_binary_authorization=None, enable_intranode_visibility=None, enable_kubernetes_alpha=None, enable_legacy_abac=None, enable_shielded_nodes=None, enable_tpu=None, endpoint=None, id=None, initial_node_count=None, instance_group_urls=None, ip_allocation_policies=None, label_fingerprint=None, location=None, logging_service=None, maintenance_policies=None, master_authorized_networks_configs=None, master_auths=None, master_version=None, min_master_version=None, monitoring_service=None, name=None, network=None, network_policies=None, node_configs=None, node_locations=None, node_pools=None, node_version=None, operation=None, pod_security_policy_configs=None, private_cluster_configs=None, project=None, region=None, release_channels=None, remove_default_node_pool=None, resource_labels=None, resource_usage_export_configs=None, services_ipv4_cidr=None, subnetwork=None, tpu_ipv4_cidr_block=None, vertical_pod_autoscalings=None, workload_identity_configs=None, zone=None)¶ A collection of values returned by getCluster.
id= None¶The provider-assigned unique ID for this managed resource.
- class
pulumi_gcp.container.GetEngineVersionsResult(default_cluster_version=None, id=None, latest_master_version=None, latest_node_version=None, location=None, project=None, release_channel_default_version=None, valid_master_versions=None, valid_node_versions=None, version_prefix=None)¶ A collection of values returned by getEngineVersions.
default_cluster_version= None¶Version of Kubernetes the service deploys by default.
id= None¶The provider-assigned unique ID for this managed resource.
latest_master_version= None¶The latest version available in the given zone for use with master instances.
latest_node_version= None¶The latest version available in the given zone for use with node instances.
release_channel_default_version= None¶A map from a release channel name to the channel’s default version.
valid_master_versions= None¶A list of versions available in the given zone for use with master instances.
valid_node_versions= None¶A list of versions available in the given zone for use with node instances.
- class
pulumi_gcp.container.GetRegistryImageResult(digest=None, id=None, image_url=None, name=None, project=None, region=None, tag=None)¶ A collection of values returned by getRegistryImage.
id= None¶The provider-assigned unique ID for this managed resource.
- class
pulumi_gcp.container.GetRegistryRepositoryResult(id=None, project=None, region=None, repository_url=None)¶ A collection of values returned by getRegistryRepository.
id= None¶The provider-assigned unique ID for this managed resource.
- class
pulumi_gcp.container.NodePool(resource_name, opts=None, autoscaling=None, cluster=None, initial_node_count=None, location=None, management=None, max_pods_per_node=None, name=None, name_prefix=None, node_config=None, node_count=None, node_locations=None, project=None, upgrade_settings=None, version=None, __props__=None, __name__=None, __opts__=None)¶ Manages a node pool in a Google Kubernetes Engine (GKE) cluster separately from the cluster control plane. For more information see the official documentation and the API reference.
import pulumi
import pulumi_gcp as gcp

primary = gcp.container.Cluster("primary",
    location="us-central1",
    remove_default_node_pool=True,
    initial_node_count=1)
primary_preemptible_nodes = gcp.container.NodePool("primaryPreemptibleNodes",
    location="us-central1",
    cluster=primary.name,
    node_count=1,
    node_config={
        "preemptible": True,
        "machine_type": "n1-standard-1",
        "oauthScopes": [
            "https://www.googleapis.com/auth/logging.write",
            "https://www.googleapis.com/auth/monitoring",
        ],
    })
import pulumi
import pulumi_gcp as gcp

primary = gcp.container.Cluster("primary",
    location="us-central1-a",
    initial_node_count=3,
    node_locations=["us-central1-c"],
    master_auth={
        "username": "",
        "password": "",
        "client_certificate_config": {
            "issueClientCertificate": False,
        },
    },
    node_config={
        "oauthScopes": [
            "https://www.googleapis.com/auth/logging.write",
            "https://www.googleapis.com/auth/monitoring",
        ],
        "metadata": {
            "disable-legacy-endpoints": "true",
        },
        "guest_accelerator": [{
            "type": "nvidia-tesla-k80",
            "count": 1,
        }],
    })
np = gcp.container.NodePool("np",
    location="us-central1-a",
    cluster=primary.name,
    node_count=3,
    # Custom timeouts are set through resource options rather than a
    # `timeouts` argument on the resource itself.
    opts=pulumi.ResourceOptions(custom_timeouts=pulumi.CustomTimeouts(
        create="30m",
        update="20m")))
- Parameters
resource_name (str) – The name of the resource.
opts (pulumi.ResourceOptions) – Options for the resource.
autoscaling (pulumi.Input[dict]) – Configuration required by cluster autoscaler to adjust the size of the node pool to the current cluster usage. Structure is documented below.
cluster (pulumi.Input[str]) – The cluster to create the node pool for. Cluster must be present in location provided for zonal clusters.
initial_node_count (pulumi.Input[float]) – The initial number of nodes for the pool. In regional or multi-zonal clusters, this is the number of nodes per zone. Changing this will force recreation of the resource.
location (pulumi.Input[str]) – The location (region or zone) of the cluster.
management (pulumi.Input[dict]) – Node management configuration, wherein auto-repair and auto-upgrade is configured. Structure is documented below.
max_pods_per_node (pulumi.Input[float]) –
The maximum number of pods per node in this node pool. Note that this does not work on node pools which are “route-based” - that is, node pools belonging to clusters that do not have IP Aliasing enabled. See the official documentation for more information.
name (pulumi.Input[str]) – The name of the node pool. If left blank, the provider will auto-generate a unique name.
name_prefix (pulumi.Input[str]) – Creates a unique name for the node pool beginning with the specified prefix. Conflicts with name.
node_config (pulumi.Input[dict]) – The node configuration of the pool. See container.Cluster for schema.
node_count (pulumi.Input[float]) – The number of nodes per instance group. This field can be used to update the number of nodes per instance group but should not be used alongside autoscaling.
node_locations (pulumi.Input[list]) – The list of zones in which the node pool’s nodes should be located. Nodes must be in the region of their regional cluster or in the same region as their cluster’s zone for zonal clusters. If unspecified, the cluster-level node_locations will be used.
project (pulumi.Input[str]) – The ID of the project in which to create the node pool. If blank, the provider-configured project will be used.
upgrade_settings (pulumi.Input[dict]) – Specify node upgrade settings to change how many nodes GKE attempts to upgrade at once. The number of nodes upgraded simultaneously is the sum of max_surge and max_unavailable. The maximum number of nodes upgraded simultaneously is limited to 20.
version (pulumi.Input[str]) – The Kubernetes version for the nodes in this pool. Note that if this field and auto_upgrade are both specified, they will fight each other for what the node version should be, so setting both is highly discouraged. While a fuzzy version can be specified, it’s recommended that you specify explicit versions as the provider will see spurious diffs when fuzzy versions are used. See the container.getEngineVersions data source’s version_prefix field to approximate fuzzy versions in a provider-compatible way.
The autoscaling object supports the following:
maxNodeCount (pulumi.Input[float]) - Maximum number of nodes in the NodePool. Must be >= min_node_count.
minNodeCount (pulumi.Input[float]) - Minimum number of nodes in the NodePool. Must be >= 0 and <= max_node_count.
The management object supports the following:
autoRepair (pulumi.Input[bool]) - Whether the nodes will be automatically repaired.
autoUpgrade (pulumi.Input[bool]) - Whether the nodes will be automatically upgraded.
The node_config object supports the following:
bootDiskKmsKey (pulumi.Input[str])
disk_size_gb (pulumi.Input[float])
diskType (pulumi.Input[str])
guest_accelerators (pulumi.Input[list])
  count (pulumi.Input[float])
  type (pulumi.Input[str])
imageType (pulumi.Input[str])
labels (pulumi.Input[dict])
localSsdCount (pulumi.Input[float])
machine_type (pulumi.Input[str])
metadata (pulumi.Input[dict])
min_cpu_platform (pulumi.Input[str])
oauthScopes (pulumi.Input[list])
preemptible (pulumi.Input[bool])
sandboxConfig (pulumi.Input[dict])
  sandboxType (pulumi.Input[str])
service_account (pulumi.Input[str])
shielded_instance_config (pulumi.Input[dict])
  enableIntegrityMonitoring (pulumi.Input[bool])
  enableSecureBoot (pulumi.Input[bool])
tags (pulumi.Input[list])
taints (pulumi.Input[list])
  effect (pulumi.Input[str])
  key (pulumi.Input[str])
  value (pulumi.Input[str])
workloadMetadataConfig (pulumi.Input[dict])
  nodeMetadata (pulumi.Input[str])
The upgrade_settings object supports the following:
maxSurge (pulumi.Input[float]) - The number of additional nodes that can be added to the node pool during an upgrade. Increasing max_surge raises the number of nodes that can be upgraded simultaneously. Can be set to 0 or greater.
maxUnavailable (pulumi.Input[float]) - The number of nodes that can be simultaneously unavailable during an upgrade. Increasing max_unavailable raises the number of nodes that can be upgraded in parallel. Can be set to 0 or greater.
A combined sketch of the autoscaling, management, and upgrade_settings objects follows below.
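The following is a minimal sketch, not part of the generated reference, that combines the three nested objects documented above on a single node pool. The cluster, pool names, and numeric values are illustrative assumptions only.
import pulumi_gcp as gcp

# Sketch only: a small cluster whose default node pool is replaced by a managed pool.
primary = gcp.container.Cluster("primary",
    location="us-central1",
    remove_default_node_pool=True,
    initial_node_count=1)

np_managed = gcp.container.NodePool("npManaged",
    location="us-central1",
    cluster=primary.name,
    initial_node_count=1,
    # autoscaling: the pool may grow from 1 to 5 nodes; node_count is omitted on purpose.
    autoscaling={
        "minNodeCount": 1,
        "maxNodeCount": 5,
    },
    # management: let GKE repair and upgrade nodes automatically.
    management={
        "autoRepair": True,
        "autoUpgrade": True,
    },
    # upgrade_settings: at most maxSurge + maxUnavailable nodes are upgraded at once.
    upgrade_settings={
        "maxSurge": 1,
        "maxUnavailable": 0,
    })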
autoscaling: pulumi.Output[dict] = None¶Configuration required by cluster autoscaler to adjust the size of the node pool to the current cluster usage. Structure is documented below.
maxNodeCount (float) - Maximum number of nodes in the NodePool. Must be >= min_node_count.
minNodeCount (float) - Minimum number of nodes in the NodePool. Must be >= 0 and <= max_node_count.
cluster: pulumi.Output[str] = None¶The cluster to create the node pool for. Cluster must be present in location provided for zonal clusters.
initial_node_count: pulumi.Output[float] = None¶The initial number of nodes for the pool. In regional or multi-zonal clusters, this is the number of nodes per zone. Changing this will force recreation of the resource.
instance_group_urls: pulumi.Output[list] = None¶The resource URLs of the managed instance groups associated with this node pool.
location: pulumi.Output[str] = None¶The location (region or zone) of the cluster.
management: pulumi.Output[dict] = None¶Node management configuration, wherein auto-repair and auto-upgrade is configured. Structure is documented below.
autoRepair (bool) - Whether the nodes will be automatically repaired.
autoUpgrade (bool) - Whether the nodes will be automatically upgraded.
max_pods_per_node: pulumi.Output[float] = None¶The maximum number of pods per node in this node pool. Note that this does not work on node pools which are “route-based” - that is, node pools belonging to clusters that do not have IP Aliasing enabled. See the official documentation for more information.
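Because max_pods_per_node only applies to VPC-native clusters (those with IP Aliasing enabled), the cluster needs an ip_allocation_policy. The following is a hedged sketch; the netmask-only secondary ranges, names, and pod limit are illustrative assumptions, not values taken from this reference.
import pulumi_gcp as gcp

# Sketch only: a VPC-native cluster so that max_pods_per_node can be set on its node pools.
vpc_native = gcp.container.Cluster("vpcNative",
    location="us-central1-a",
    remove_default_node_pool=True,
    initial_node_count=1,
    ip_allocation_policy={
        # Netmask-only values are assumed to let GKE pick the secondary ranges.
        "clusterIpv4CidrBlock": "/16",
        "servicesIpv4CidrBlock": "/22",
    })

small_pods_pool = gcp.container.NodePool("smallPodsPool",
    location="us-central1-a",
    cluster=vpc_native.name,
    node_count=1,
    max_pods_per_node=32)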
name: pulumi.Output[str] = None¶The name of the node pool. If left blank, the provider will auto-generate a unique name.
name_prefix: pulumi.Output[str] = None¶Creates a unique name for the node pool beginning with the specified prefix. Conflicts with name.
node_config: pulumi.Output[dict] = None¶The node configuration of the pool. See container.Cluster for schema.
bootDiskKmsKey (str)
disk_size_gb (float)
diskType (str)
guest_accelerators (list)
  count (float)
  type (str)
imageType (str)
labels (dict)
localSsdCount (float)
machine_type (str)
metadata (dict)
min_cpu_platform (str)
oauthScopes (list)
preemptible (bool)
sandboxConfig (dict)
  sandboxType (str)
service_account (str)
shielded_instance_config (dict)
  enableIntegrityMonitoring (bool)
  enableSecureBoot (bool)
tags (list)
taints (list)
  effect (str)
  key (str)
  value (str)
workloadMetadataConfig (dict)
  nodeMetadata (str)
node_count: pulumi.Output[float] = None¶The number of nodes per instance group. This field can be used to update the number of nodes per instance group but should not be used alongside autoscaling.
node_locations: pulumi.Output[list] = None¶The list of zones in which the node pool’s nodes should be located. Nodes must be in the region of their regional cluster or in the same region as their cluster’s zone for zonal clusters. If unspecified, the cluster-level node_locations will be used.
project: pulumi.Output[str] = None¶The ID of the project in which to create the node pool. If blank, the provider-configured project will be used.
upgrade_settings: pulumi.Output[dict] = None¶Specify node upgrade settings to change how many nodes GKE attempts to upgrade at once. The number of nodes upgraded simultaneously is the sum of max_surge and max_unavailable. The maximum number of nodes upgraded simultaneously is limited to 20.
maxSurge (float) - The number of additional nodes that can be added to the node pool during an upgrade. Increasing max_surge raises the number of nodes that can be upgraded simultaneously. Can be set to 0 or greater.
maxUnavailable (float) - The number of nodes that can be simultaneously unavailable during an upgrade. Increasing max_unavailable raises the number of nodes that can be upgraded in parallel. Can be set to 0 or greater.
version: pulumi.Output[str] = None¶The Kubernetes version for the nodes in this pool. Note that if this field and auto_upgrade are both specified, they will fight each other for what the node version should be, so setting both is highly discouraged. While a fuzzy version can be specified, it’s recommended that you specify explicit versions as the provider will see spurious diffs when fuzzy versions are used. See the container.getEngineVersions data source’s version_prefix field to approximate fuzzy versions in a provider-compatible way.
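As a concrete illustration of the version_prefix recommendation above, the following minimal sketch resolves an explicit node version from a prefix instead of passing a fuzzy version directly. The location, version prefix, and resource names are illustrative assumptions.
import pulumi_gcp as gcp

# Sketch only: pin the node pool to the newest version in an assumed 1.16 series.
versions = gcp.container.get_engine_versions(location="us-central1-a",
    version_prefix="1.16.")

primary = gcp.container.Cluster("primary",
    location="us-central1-a",
    min_master_version=versions.latest_master_version,
    remove_default_node_pool=True,
    initial_node_count=1)

pinned_pool = gcp.container.NodePool("pinnedPool",
    location="us-central1-a",
    cluster=primary.name,
    node_count=1,
    version=versions.latest_node_version)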
- static
get(resource_name, id, opts=None, autoscaling=None, cluster=None, initial_node_count=None, instance_group_urls=None, location=None, management=None, max_pods_per_node=None, name=None, name_prefix=None, node_config=None, node_count=None, node_locations=None, project=None, upgrade_settings=None, version=None)¶ Get an existing NodePool resource’s state with the given name, id, and optional extra properties used to qualify the lookup.
- Parameters
resource_name (str) – The unique name of the resulting resource.
id (str) – The unique provider ID of the resource to lookup.
opts (pulumi.ResourceOptions) – Options for the resource.
autoscaling (pulumi.Input[dict]) – Configuration required by cluster autoscaler to adjust the size of the node pool to the current cluster usage. Structure is documented below.
cluster (pulumi.Input[str]) – The cluster to create the node pool for. Cluster must be present in location provided for zonal clusters.
initial_node_count (pulumi.Input[float]) – The initial number of nodes for the pool. In regional or multi-zonal clusters, this is the number of nodes per zone. Changing this will force recreation of the resource.
instance_group_urls (pulumi.Input[list]) – The resource URLs of the managed instance groups associated with this node pool.
location (pulumi.Input[str]) – The location (region or zone) of the cluster.
management (pulumi.Input[dict]) – Node management configuration, wherein auto-repair and auto-upgrade is configured. Structure is documented below.
max_pods_per_node (pulumi.Input[float]) –
The maximum number of pods per node in this node pool. Note that this does not work on node pools which are “route-based” - that is, node pools belonging to clusters that do not have IP Aliasing enabled. See the official documentation for more information.
name (pulumi.Input[str]) – The name of the node pool. If left blank, the provider will auto-generate a unique name.
name_prefix (pulumi.Input[str]) – Creates a unique name for the node pool beginning with the specified prefix. Conflicts with name.
node_config (pulumi.Input[dict]) – The node configuration of the pool. See container.Cluster for schema.
node_count (pulumi.Input[float]) – The number of nodes per instance group. This field can be used to update the number of nodes per instance group but should not be used alongside autoscaling.
node_locations (pulumi.Input[list]) – The list of zones in which the node pool’s nodes should be located. Nodes must be in the region of their regional cluster or in the same region as their cluster’s zone for zonal clusters. If unspecified, the cluster-level node_locations will be used.
project (pulumi.Input[str]) – The ID of the project in which to create the node pool. If blank, the provider-configured project will be used.
upgrade_settings (pulumi.Input[dict]) – Specify node upgrade settings to change how many nodes GKE attempts to upgrade at once. The number of nodes upgraded simultaneously is the sum of max_surge and max_unavailable. The maximum number of nodes upgraded simultaneously is limited to 20.
version (pulumi.Input[str]) – The Kubernetes version for the nodes in this pool. Note that if this field and auto_upgrade are both specified, they will fight each other for what the node version should be, so setting both is highly discouraged. While a fuzzy version can be specified, it’s recommended that you specify explicit versions as the provider will see spurious diffs when fuzzy versions are used. See the container.getEngineVersions data source’s version_prefix field to approximate fuzzy versions in a provider-compatible way.
The autoscaling object supports the following:
maxNodeCount (pulumi.Input[float]) - Maximum number of nodes in the NodePool. Must be >= min_node_count.
minNodeCount (pulumi.Input[float]) - Minimum number of nodes in the NodePool. Must be >= 0 and <= max_node_count.
The management object supports the following:
autoRepair (pulumi.Input[bool]) - Whether the nodes will be automatically repaired.
autoUpgrade (pulumi.Input[bool]) - Whether the nodes will be automatically upgraded.
The node_config object supports the following:
bootDiskKmsKey (pulumi.Input[str])
disk_size_gb (pulumi.Input[float])
diskType (pulumi.Input[str])
guest_accelerators (pulumi.Input[list])
  count (pulumi.Input[float])
  type (pulumi.Input[str])
imageType (pulumi.Input[str])
labels (pulumi.Input[dict])
localSsdCount (pulumi.Input[float])
machine_type (pulumi.Input[str])
metadata (pulumi.Input[dict])
min_cpu_platform (pulumi.Input[str])
oauthScopes (pulumi.Input[list])
preemptible (pulumi.Input[bool])
sandboxConfig (pulumi.Input[dict])
  sandboxType (pulumi.Input[str])
service_account (pulumi.Input[str])
shielded_instance_config (pulumi.Input[dict])
  enableIntegrityMonitoring (pulumi.Input[bool])
  enableSecureBoot (pulumi.Input[bool])
tags (pulumi.Input[list])
taints (pulumi.Input[list])
  effect (pulumi.Input[str])
  key (pulumi.Input[str])
  value (pulumi.Input[str])
workloadMetadataConfig (pulumi.Input[dict])
  nodeMetadata (pulumi.Input[str])
The upgrade_settings object supports the following:
maxSurge (pulumi.Input[float]) - The number of additional nodes that can be added to the node pool during an upgrade. Increasing max_surge raises the number of nodes that can be upgraded simultaneously. Can be set to 0 or greater.
maxUnavailable (pulumi.Input[float]) - The number of nodes that can be simultaneously unavailable during an upgrade. Increasing max_unavailable raises the number of nodes that can be upgraded in parallel. Can be set to 0 or greater.
translate_output_property(prop)¶Provides subclasses of Resource an opportunity to translate names of output properties into a format of their choosing before writing those properties to the resource object.
- Parameters
prop (str) – A property name.
- Returns
A potentially transformed property name.
- Return type
str
translate_input_property(prop)¶Provides subclasses of Resource an opportunity to translate names of input properties into a format of their choosing before sending those properties to the Pulumi engine.
- Parameters
prop (str) – A property name.
- Returns
A potentially transformed property name.
- Return type
str
- class
pulumi_gcp.container.Registry(resource_name, opts=None, location=None, project=None, __props__=None, __name__=None, __opts__=None)¶ Ensures that the Google Cloud Storage bucket that backs Google Container Registry exists. Creating this resource will create the backing bucket if it does not exist, or do nothing if the bucket already exists. Destroying this resource does NOT destroy the backing bucket. For more information see the official documentation.
This resource can be used to ensure that the GCS bucket exists prior to assigning permissions. For more information see the access control page for GCR.
import pulumi
import pulumi_gcp as gcp

registry = gcp.container.Registry("registry", location="EU", project="my-project")
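Building on the registry from the example above, the following hedged sketch grants read access on the backing bucket to a hypothetical service account. It assumes the registry resource’s id resolves to the backing bucket name, as in the upstream provider.
import pulumi_gcp as gcp

# Sketch only: `registry` is the gcp.container.Registry from the example above; the
# member is a hypothetical service account, and registry.id is assumed to be the
# name of the backing GCS bucket.
puller = gcp.storage.BucketIAMMember("puller",
    bucket=registry.id,
    role="roles/storage.objectViewer",
    member="serviceAccount:image-puller@my-project.iam.gserviceaccount.com")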
- Parameters
resource_name (str) – The name of the resource.
opts (pulumi.ResourceOptions) – Options for the resource.
location (pulumi.Input[str]) –
The location of the registry. One of ASIA, EU, US or not specified. See the official documentation for more information on registry locations.
project (pulumi.Input[str]) – The ID of the project in which the resource belongs. If it is not provided, the provider project is used.
bucket_self_link: pulumi.Output[str] = None¶The URI of the created resource.
location: pulumi.Output[str] = None¶The location of the registry. One of ASIA, EU, US or not specified. See the official documentation for more information on registry locations.
project: pulumi.Output[str] = None¶The ID of the project in which the resource belongs. If it is not provided, the provider project is used.
- static
get(resource_name, id, opts=None, bucket_self_link=None, location=None, project=None)¶ Get an existing Registry resource’s state with the given name, id, and optional extra properties used to qualify the lookup.
- Parameters
resource_name (str) – The unique name of the resulting resource.
id (str) – The unique provider ID of the resource to lookup.
opts (pulumi.ResourceOptions) – Options for the resource.
bucket_self_link (pulumi.Input[str]) – The URI of the created resource.
location (pulumi.Input[str]) –
The location of the registry. One of ASIA, EU, US or not specified. See the official documentation for more information on registry locations.
project (pulumi.Input[str]) – The ID of the project in which the resource belongs. If it is not provided, the provider project is used.
translate_output_property(prop)¶Provides subclasses of Resource an opportunity to translate names of output properties into a format of their choosing before writing those properties to the resource object.
- Parameters
prop (str) – A property name.
- Returns
A potentially transformed property name.
- Return type
str
translate_input_property(prop)¶Provides subclasses of Resource an opportunity to translate names of input properties into a format of their choosing before sending those properties to the Pulumi engine.
- Parameters
prop (str) – A property name.
- Returns
A potentially transformed property name.
- Return type
str
pulumi_gcp.container.get_cluster(location=None, name=None, project=None, region=None, zone=None, opts=None)¶Get info about a GKE cluster from its name and location.
import pulumi
import pulumi_gcp as gcp

my_cluster = gcp.container.get_cluster(name="my-cluster",
    location="us-east1-a")
pulumi.export("clusterUsername", my_cluster.master_auths[0]["username"])
pulumi.export("clusterPassword", my_cluster.master_auths[0]["password"])
pulumi.export("endpoint", my_cluster.endpoint)
pulumi.export("instanceGroupUrls", my_cluster.instance_group_urls)
pulumi.export("nodeConfig", my_cluster.node_configs)
pulumi.export("nodePools", my_cluster.node_pools)
- Parameters
location (str) – The location (zone or region) this cluster has been created in. One of location, region, zone, or a provider-level zone must be specified.
name (str) – The name of the cluster.
project (str) – The project in which the resource belongs. If it is not provided, the provider project is used.
region (str) – The region this cluster has been created in. Deprecated in favour of location.
zone (str) – The zone this cluster has been created in. Deprecated in favour of location.
pulumi_gcp.container.get_engine_versions(location=None, project=None, version_prefix=None, opts=None)¶Provides access to available Google Kubernetes Engine versions in a zone or region for a given project.
If you are using the container.getEngineVersions datasource with a regional cluster, ensure that you have provided a region as the location to the datasource. A region can have a different set of supported versions than its component zones, and not all zones in a region are guaranteed to support the same version.
import pulumi
import pulumi_gcp as gcp

central1b = gcp.container.get_engine_versions(location="us-central1-b",
    version_prefix="1.12.")
foo = gcp.container.Cluster("foo",
    location="us-central1-b",
    node_version=central1b.latest_node_version,
    initial_node_count=1,
    master_auth={
        "username": "mr.yoda",
        "password": "adoy.rm",
    })
pulumi.export("stableChannelVersion", central1b.release_channel_default_version["STABLE"])
- Parameters
location (str) – The location (region or zone) to list versions for. Must exactly match the location the cluster will be deployed in, or listed versions may not be available. If location, region, and zone are not specified, the provider-level zone must be set and is used instead.
project (str) – ID of the project to list available cluster versions for. Should match the project the cluster will be deployed to. Defaults to the project that the provider is authenticated with.
version_prefix (str) – If provided, the provider will only return versions that match the string prefix. For example, 1.11. will match all 1.11 series releases. Since this is just a string match, it’s recommended that you append a . after minor versions to ensure that prefixes such as 1.1 don’t match versions like 1.12.5-gke.10 accidentally. See the docs on versioning schema for full details on how version strings are formatted.
pulumi_gcp.container.get_registry_image(digest=None, name=None, project=None, region=None, tag=None, opts=None)¶This data source fetches the project name and constructs the Google Container Registry URL for the requested image in that project.
The URLs are computed entirely offline - as long as the project exists, they will be valid, but this data source does not contact Google Container Registry (GCR) at any point.
import pulumi
import pulumi_gcp as gcp

debian = gcp.container.get_registry_image(name="debian")
pulumi.export("gcrLocation", debian.image_url)
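The optional region, tag, and digest arguments in the signature above are simply folded into the computed URL; no call to GCR is made. A minimal sketch, with an illustrative image name, region, and tag:
import pulumi
import pulumi_gcp as gcp

# Sketch only: the image name, region, and tag below are illustrative assumptions.
ubuntu = gcp.container.get_registry_image(name="ubuntu", region="eu", tag="18.04")
pulumi.export("taggedImageUrl", ubuntu.image_url)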
pulumi_gcp.container.get_registry_repository(project=None, region=None, opts=None)¶This data source fetches the project name and constructs the Google Container Registry repository URL for that project.
The URLs are computed entirely offline - as long as the project exists, they will be valid, but this data source does not contact Google Container Registry (GCR) at any point.
import pulumi
import pulumi_gcp as gcp

foo = gcp.container.get_registry_repository()
pulumi.export("gcrLocation", foo.repository_url)