NodePool
Manages a node pool in a Google Kubernetes Engine (GKE) cluster separately from the cluster control plane. For more information, see the official documentation and the API reference.
Create a NodePool Resource
TypeScript: new NodePool(name: string, args: NodePoolArgs, opts?: CustomResourceOptions);
Python: def NodePool(resource_name, opts=None, autoscaling=None, cluster=None, initial_node_count=None, location=None, management=None, max_pods_per_node=None, name=None, name_prefix=None, node_config=None, node_count=None, node_locations=None, project=None, upgrade_settings=None, version=None, __props__=None);
Go: func NewNodePool(ctx *Context, name string, args NodePoolArgs, opts ...ResourceOption) (*NodePool, error)
C#: public NodePool(string name, NodePoolArgs args, CustomResourceOptions? opts = null)
- name string
- The unique name of the resource.
- args NodePoolArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- resource_name str
- The unique name of the resource.
- opts ResourceOptions
- A bag of options that control this resource's behavior.
- ctx Context
- Context object for the current deployment.
- name string
- The unique name of the resource.
- args NodePoolArgs
- The arguments to resource properties.
- opts ResourceOption
- Bag of options to control resource's behavior.
- name string
- The unique name of the resource.
- args NodePoolArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
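A minimal creation sketch in TypeScript; the cluster name and zone below are placeholders, not values from this document:

```typescript
import * as gcp from "@pulumi/gcp";

// Create a node pool in an existing zonal cluster.
// "my-cluster" and "us-central1-a" are assumptions for illustration.
const pool = new gcp.container.NodePool("primary-pool", {
    cluster: "my-cluster",
    location: "us-central1-a",
    nodeCount: 1,
    nodeConfig: {
        machineType: "e2-medium",
        oauthScopes: ["https://www.googleapis.com/auth/cloud-platform"],
    },
});
```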
NodePool Resource Properties
To learn more about resource properties and how to use them, see Inputs and Outputs in the Programming Model docs.
Inputs
The NodePool resource accepts the following input properties:
C#
- Cluster string
The cluster to create the node pool for. Cluster must be present in the location provided for zonal clusters.
- Autoscaling NodePoolAutoscalingArgs
Configuration required by cluster autoscaler to adjust the size of the node pool to the current cluster usage. Structure is documented below.
- InitialNodeCount int
The initial number of nodes for the pool. In regional or multi-zonal clusters, this is the number of nodes per zone. Changing this will force recreation of the resource.
- Location string
The location (region or zone) of the cluster.
- Management NodePoolManagementArgs
Node management configuration, wherein auto-repair and auto-upgrade are configured. Structure is documented below.
- MaxPodsPerNode int
The maximum number of pods per node in this node pool. Note that this does not work on node pools which are “route-based”, that is, node pools belonging to clusters that do not have IP Aliasing enabled. See the official documentation for more information.
- Name string
The name of the node pool. If left blank, the provider will auto-generate a unique name.
- NamePrefix string
Creates a unique name for the node pool beginning with the specified prefix. Conflicts with name.
- NodeConfig NodePoolNodeConfigArgs
The node configuration of the pool. See gcp.container.Cluster for schema.
- NodeCount int
The number of nodes per instance group. This field can be used to update the number of nodes per instance group, but should not be used alongside autoscaling.
- NodeLocations List<string>
The list of zones in which the node pool’s nodes should be located. Nodes must be in the region of their regional cluster or in the same region as their cluster’s zone for zonal clusters. If unspecified, the cluster-level node_locations will be used.
- Project string
The ID of the project in which to create the node pool. If blank, the provider-configured project will be used.
- UpgradeSettings NodePoolUpgradeSettingsArgs
Specify node upgrade settings to change how many nodes GKE attempts to upgrade at once. The number of nodes upgraded simultaneously is the sum of max_surge and max_unavailable. The maximum number of nodes upgraded simultaneously is limited to 20.
- Version string
The Kubernetes version for the nodes in this pool. Note that if this field and auto_upgrade are both specified, they will fight each other for what the node version should be, so setting both is highly discouraged. While a fuzzy version can be specified, it’s recommended that you specify explicit versions, as the provider will see spurious diffs when fuzzy versions are used. See the gcp.container.getEngineVersions data source’s version_prefix field to approximate fuzzy versions in a provider-compatible way.
Go
- Cluster string
The cluster to create the node pool for. Cluster must be present in the location provided for zonal clusters.
- Autoscaling NodePoolAutoscaling
Configuration required by cluster autoscaler to adjust the size of the node pool to the current cluster usage. Structure is documented below.
- InitialNodeCount int
The initial number of nodes for the pool. In regional or multi-zonal clusters, this is the number of nodes per zone. Changing this will force recreation of the resource.
- Location string
The location (region or zone) of the cluster.
- Management NodePoolManagement
Node management configuration, wherein auto-repair and auto-upgrade are configured. Structure is documented below.
- MaxPodsPerNode int
The maximum number of pods per node in this node pool. Note that this does not work on node pools which are “route-based”, that is, node pools belonging to clusters that do not have IP Aliasing enabled. See the official documentation for more information.
- Name string
The name of the node pool. If left blank, the provider will auto-generate a unique name.
- NamePrefix string
Creates a unique name for the node pool beginning with the specified prefix. Conflicts with name.
- NodeConfig NodePoolNodeConfig
The node configuration of the pool. See gcp.container.Cluster for schema.
- NodeCount int
The number of nodes per instance group. This field can be used to update the number of nodes per instance group, but should not be used alongside autoscaling.
- NodeLocations []string
The list of zones in which the node pool’s nodes should be located. Nodes must be in the region of their regional cluster or in the same region as their cluster’s zone for zonal clusters. If unspecified, the cluster-level node_locations will be used.
- Project string
The ID of the project in which to create the node pool. If blank, the provider-configured project will be used.
- UpgradeSettings NodePoolUpgradeSettings
Specify node upgrade settings to change how many nodes GKE attempts to upgrade at once. The number of nodes upgraded simultaneously is the sum of max_surge and max_unavailable. The maximum number of nodes upgraded simultaneously is limited to 20.
- Version string
The Kubernetes version for the nodes in this pool. Note that if this field and auto_upgrade are both specified, they will fight each other for what the node version should be, so setting both is highly discouraged. While a fuzzy version can be specified, it’s recommended that you specify explicit versions, as the provider will see spurious diffs when fuzzy versions are used. See the gcp.container.getEngineVersions data source’s version_prefix field to approximate fuzzy versions in a provider-compatible way.
TypeScript
- cluster string
The cluster to create the node pool for. Cluster must be present in the location provided for zonal clusters.
- autoscaling NodePoolAutoscaling
Configuration required by cluster autoscaler to adjust the size of the node pool to the current cluster usage. Structure is documented below.
- initialNodeCount number
The initial number of nodes for the pool. In regional or multi-zonal clusters, this is the number of nodes per zone. Changing this will force recreation of the resource.
- location string
The location (region or zone) of the cluster.
- management NodePoolManagement
Node management configuration, wherein auto-repair and auto-upgrade are configured. Structure is documented below.
- maxPodsPerNode number
The maximum number of pods per node in this node pool. Note that this does not work on node pools which are “route-based”, that is, node pools belonging to clusters that do not have IP Aliasing enabled. See the official documentation for more information.
- name string
The name of the node pool. If left blank, the provider will auto-generate a unique name.
- namePrefix string
Creates a unique name for the node pool beginning with the specified prefix. Conflicts with name.
- nodeConfig NodePoolNodeConfig
The node configuration of the pool. See gcp.container.Cluster for schema.
- nodeCount number
The number of nodes per instance group. This field can be used to update the number of nodes per instance group, but should not be used alongside autoscaling.
- nodeLocations string[]
The list of zones in which the node pool’s nodes should be located. Nodes must be in the region of their regional cluster or in the same region as their cluster’s zone for zonal clusters. If unspecified, the cluster-level node_locations will be used.
- project string
The ID of the project in which to create the node pool. If blank, the provider-configured project will be used.
- upgradeSettings NodePoolUpgradeSettings
Specify node upgrade settings to change how many nodes GKE attempts to upgrade at once. The number of nodes upgraded simultaneously is the sum of max_surge and max_unavailable. The maximum number of nodes upgraded simultaneously is limited to 20.
- version string
The Kubernetes version for the nodes in this pool. Note that if this field and auto_upgrade are both specified, they will fight each other for what the node version should be, so setting both is highly discouraged. While a fuzzy version can be specified, it’s recommended that you specify explicit versions, as the provider will see spurious diffs when fuzzy versions are used. See the gcp.container.getEngineVersions data source’s version_prefix field to approximate fuzzy versions in a provider-compatible way.
Python
- cluster str
The cluster to create the node pool for. Cluster must be present in the location provided for zonal clusters.
- autoscaling Dict[NodePoolAutoscaling]
Configuration required by cluster autoscaler to adjust the size of the node pool to the current cluster usage. Structure is documented below.
- initial_node_count float
The initial number of nodes for the pool. In regional or multi-zonal clusters, this is the number of nodes per zone. Changing this will force recreation of the resource.
- location str
The location (region or zone) of the cluster.
- management Dict[NodePoolManagement]
Node management configuration, wherein auto-repair and auto-upgrade are configured. Structure is documented below.
- max_pods_per_node float
The maximum number of pods per node in this node pool. Note that this does not work on node pools which are “route-based”, that is, node pools belonging to clusters that do not have IP Aliasing enabled. See the official documentation for more information.
- name str
The name of the node pool. If left blank, the provider will auto-generate a unique name.
- name_prefix str
Creates a unique name for the node pool beginning with the specified prefix. Conflicts with name.
- node_config Dict[NodePoolNodeConfig]
The node configuration of the pool. See gcp.container.Cluster for schema.
- node_count float
The number of nodes per instance group. This field can be used to update the number of nodes per instance group, but should not be used alongside autoscaling.
- node_locations List[str]
The list of zones in which the node pool’s nodes should be located. Nodes must be in the region of their regional cluster or in the same region as their cluster’s zone for zonal clusters. If unspecified, the cluster-level node_locations will be used.
- project str
The ID of the project in which to create the node pool. If blank, the provider-configured project will be used.
- upgrade_settings Dict[NodePoolUpgradeSettings]
Specify node upgrade settings to change how many nodes GKE attempts to upgrade at once. The number of nodes upgraded simultaneously is the sum of max_surge and max_unavailable. The maximum number of nodes upgraded simultaneously is limited to 20.
- version str
The Kubernetes version for the nodes in this pool. Note that if this field and auto_upgrade are both specified, they will fight each other for what the node version should be, so setting both is highly discouraged. While a fuzzy version can be specified, it’s recommended that you specify explicit versions, as the provider will see spurious diffs when fuzzy versions are used. See the gcp.container.getEngineVersions data source’s version_prefix field to approximate fuzzy versions in a provider-compatible way.
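The autoscaling, management, and initialNodeCount inputs above can be combined as follows; this is a sketch with placeholder cluster name and region, and nodeCount is deliberately omitted because it should not be used alongside autoscaling:

```typescript
import * as gcp from "@pulumi/gcp";

// An autoscaled, auto-managed regional pool. "my-cluster" and
// "us-central1" are assumptions for illustration.
const autoscaledPool = new gcp.container.NodePool("autoscaled-pool", {
    cluster: "my-cluster",
    location: "us-central1",
    initialNodeCount: 1, // per zone in a regional cluster
    autoscaling: {
        minNodeCount: 1,
        maxNodeCount: 5,
    },
    management: {
        autoRepair: true,
        autoUpgrade: true,
    },
});
```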
Outputs
All input properties are implicitly available as output properties. Additionally, the NodePool resource produces the following output properties:
C#
- Id string
- The provider-assigned unique ID for this managed resource.
- InstanceGroupUrls List<string>
The resource URLs of the managed instance groups associated with this node pool.
Go
- Id string
- The provider-assigned unique ID for this managed resource.
- InstanceGroupUrls []string
The resource URLs of the managed instance groups associated with this node pool.
TypeScript
- id string
- The provider-assigned unique ID for this managed resource.
- instanceGroupUrls string[]
The resource URLs of the managed instance groups associated with this node pool.
Python
- id str
- The provider-assigned unique ID for this managed resource.
- instance_group_urls List[str]
The resource URLs of the managed instance groups associated with this node pool.
Look up an Existing NodePool Resource
Get an existing NodePool resource’s state with the given name, ID, and optional extra properties used to qualify the lookup.
TypeScript: public static get(name: string, id: Input<ID>, state?: NodePoolState, opts?: CustomResourceOptions): NodePool
Python: static get(resource_name, id, opts=None, autoscaling=None, cluster=None, initial_node_count=None, instance_group_urls=None, location=None, management=None, max_pods_per_node=None, name=None, name_prefix=None, node_config=None, node_count=None, node_locations=None, project=None, upgrade_settings=None, version=None, __props__=None);
Go: func GetNodePool(ctx *Context, name string, id IDInput, state *NodePoolState, opts ...ResourceOption) (*NodePool, error)
C#: public static NodePool Get(string name, Input<string> id, NodePoolState? state, CustomResourceOptions? opts = null)
- name
- The unique name of the resulting resource.
- id
- The unique provider ID of the resource to lookup.
- state
- Any extra arguments used during the lookup.
- opts
- A bag of options that control this resource's behavior.
- resource_name
- The unique name of the resulting resource.
- id
- The unique provider ID of the resource to lookup.
- name
- The unique name of the resulting resource.
- id
- The unique provider ID of the resource to lookup.
- state
- Any extra arguments used during the lookup.
- opts
- A bag of options that control this resource's behavior.
- name
- The unique name of the resulting resource.
- id
- The unique provider ID of the resource to lookup.
- state
- Any extra arguments used during the lookup.
- opts
- A bag of options that control this resource's behavior.
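A lookup sketch in TypeScript. The ID format "{project}/{location}/{cluster}/{name}" and all concrete values here are assumptions for illustration; check the provider's import documentation for the exact format it accepts:

```typescript
import * as gcp from "@pulumi/gcp";

// Reference an existing node pool's state without creating it.
// Project, location, cluster, and pool names are placeholders.
const existing = gcp.container.NodePool.get(
    "existing-pool",
    "my-project/us-central1-a/my-cluster/default-pool",
);

// Output properties become available on the returned resource.
export const groupUrls = existing.instanceGroupUrls;
```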
The following state arguments are supported:
C#
- Autoscaling NodePoolAutoscalingArgs
Configuration required by cluster autoscaler to adjust the size of the node pool to the current cluster usage. Structure is documented below.
- Cluster string
The cluster to create the node pool for. Cluster must be present in the location provided for zonal clusters.
- InitialNodeCount int
The initial number of nodes for the pool. In regional or multi-zonal clusters, this is the number of nodes per zone. Changing this will force recreation of the resource.
- InstanceGroupUrls List<string>
The resource URLs of the managed instance groups associated with this node pool.
- Location string
The location (region or zone) of the cluster.
- Management NodePoolManagementArgs
Node management configuration, wherein auto-repair and auto-upgrade are configured. Structure is documented below.
- MaxPodsPerNode int
The maximum number of pods per node in this node pool. Note that this does not work on node pools which are “route-based”, that is, node pools belonging to clusters that do not have IP Aliasing enabled. See the official documentation for more information.
- Name string
The name of the node pool. If left blank, the provider will auto-generate a unique name.
- NamePrefix string
Creates a unique name for the node pool beginning with the specified prefix. Conflicts with name.
- NodeConfig NodePoolNodeConfigArgs
The node configuration of the pool. See gcp.container.Cluster for schema.
- NodeCount int
The number of nodes per instance group. This field can be used to update the number of nodes per instance group, but should not be used alongside autoscaling.
- NodeLocations List<string>
The list of zones in which the node pool’s nodes should be located. Nodes must be in the region of their regional cluster or in the same region as their cluster’s zone for zonal clusters. If unspecified, the cluster-level node_locations will be used.
- Project string
The ID of the project in which to create the node pool. If blank, the provider-configured project will be used.
- UpgradeSettings NodePoolUpgradeSettingsArgs
Specify node upgrade settings to change how many nodes GKE attempts to upgrade at once. The number of nodes upgraded simultaneously is the sum of max_surge and max_unavailable. The maximum number of nodes upgraded simultaneously is limited to 20.
- Version string
The Kubernetes version for the nodes in this pool. Note that if this field and auto_upgrade are both specified, they will fight each other for what the node version should be, so setting both is highly discouraged. While a fuzzy version can be specified, it’s recommended that you specify explicit versions, as the provider will see spurious diffs when fuzzy versions are used. See the gcp.container.getEngineVersions data source’s version_prefix field to approximate fuzzy versions in a provider-compatible way.
Go
- Autoscaling NodePoolAutoscaling
Configuration required by cluster autoscaler to adjust the size of the node pool to the current cluster usage. Structure is documented below.
- Cluster string
The cluster to create the node pool for. Cluster must be present in the location provided for zonal clusters.
- InitialNodeCount int
The initial number of nodes for the pool. In regional or multi-zonal clusters, this is the number of nodes per zone. Changing this will force recreation of the resource.
- InstanceGroupUrls []string
The resource URLs of the managed instance groups associated with this node pool.
- Location string
The location (region or zone) of the cluster.
- Management NodePoolManagement
Node management configuration, wherein auto-repair and auto-upgrade are configured. Structure is documented below.
- MaxPodsPerNode int
The maximum number of pods per node in this node pool. Note that this does not work on node pools which are “route-based”, that is, node pools belonging to clusters that do not have IP Aliasing enabled. See the official documentation for more information.
- Name string
The name of the node pool. If left blank, the provider will auto-generate a unique name.
- NamePrefix string
Creates a unique name for the node pool beginning with the specified prefix. Conflicts with name.
- NodeConfig NodePoolNodeConfig
The node configuration of the pool. See gcp.container.Cluster for schema.
- NodeCount int
The number of nodes per instance group. This field can be used to update the number of nodes per instance group, but should not be used alongside autoscaling.
- NodeLocations []string
The list of zones in which the node pool’s nodes should be located. Nodes must be in the region of their regional cluster or in the same region as their cluster’s zone for zonal clusters. If unspecified, the cluster-level node_locations will be used.
- Project string
The ID of the project in which to create the node pool. If blank, the provider-configured project will be used.
- UpgradeSettings NodePoolUpgradeSettings
Specify node upgrade settings to change how many nodes GKE attempts to upgrade at once. The number of nodes upgraded simultaneously is the sum of max_surge and max_unavailable. The maximum number of nodes upgraded simultaneously is limited to 20.
- Version string
The Kubernetes version for the nodes in this pool. Note that if this field and auto_upgrade are both specified, they will fight each other for what the node version should be, so setting both is highly discouraged. While a fuzzy version can be specified, it’s recommended that you specify explicit versions, as the provider will see spurious diffs when fuzzy versions are used. See the gcp.container.getEngineVersions data source’s version_prefix field to approximate fuzzy versions in a provider-compatible way.
TypeScript
- autoscaling NodePoolAutoscaling
Configuration required by cluster autoscaler to adjust the size of the node pool to the current cluster usage. Structure is documented below.
- cluster string
The cluster to create the node pool for. Cluster must be present in the location provided for zonal clusters.
- initialNodeCount number
The initial number of nodes for the pool. In regional or multi-zonal clusters, this is the number of nodes per zone. Changing this will force recreation of the resource.
- instanceGroupUrls string[]
The resource URLs of the managed instance groups associated with this node pool.
- location string
The location (region or zone) of the cluster.
- management NodePoolManagement
Node management configuration, wherein auto-repair and auto-upgrade are configured. Structure is documented below.
- maxPodsPerNode number
The maximum number of pods per node in this node pool. Note that this does not work on node pools which are “route-based”, that is, node pools belonging to clusters that do not have IP Aliasing enabled. See the official documentation for more information.
- name string
The name of the node pool. If left blank, the provider will auto-generate a unique name.
- namePrefix string
Creates a unique name for the node pool beginning with the specified prefix. Conflicts with name.
- nodeConfig NodePoolNodeConfig
The node configuration of the pool. See gcp.container.Cluster for schema.
- nodeCount number
The number of nodes per instance group. This field can be used to update the number of nodes per instance group, but should not be used alongside autoscaling.
- nodeLocations string[]
The list of zones in which the node pool’s nodes should be located. Nodes must be in the region of their regional cluster or in the same region as their cluster’s zone for zonal clusters. If unspecified, the cluster-level node_locations will be used.
- project string
The ID of the project in which to create the node pool. If blank, the provider-configured project will be used.
- upgradeSettings NodePoolUpgradeSettings
Specify node upgrade settings to change how many nodes GKE attempts to upgrade at once. The number of nodes upgraded simultaneously is the sum of max_surge and max_unavailable. The maximum number of nodes upgraded simultaneously is limited to 20.
- version string
The Kubernetes version for the nodes in this pool. Note that if this field and auto_upgrade are both specified, they will fight each other for what the node version should be, so setting both is highly discouraged. While a fuzzy version can be specified, it’s recommended that you specify explicit versions, as the provider will see spurious diffs when fuzzy versions are used. See the gcp.container.getEngineVersions data source’s version_prefix field to approximate fuzzy versions in a provider-compatible way.
Python
- autoscaling Dict[NodePoolAutoscaling]
Configuration required by cluster autoscaler to adjust the size of the node pool to the current cluster usage. Structure is documented below.
- cluster str
The cluster to create the node pool for. Cluster must be present in the location provided for zonal clusters.
- initial_node_count float
The initial number of nodes for the pool. In regional or multi-zonal clusters, this is the number of nodes per zone. Changing this will force recreation of the resource.
- instance_group_urls List[str]
The resource URLs of the managed instance groups associated with this node pool.
- location str
The location (region or zone) of the cluster.
- management Dict[NodePoolManagement]
Node management configuration, wherein auto-repair and auto-upgrade are configured. Structure is documented below.
- max_pods_per_node float
The maximum number of pods per node in this node pool. Note that this does not work on node pools which are “route-based”, that is, node pools belonging to clusters that do not have IP Aliasing enabled. See the official documentation for more information.
- name str
The name of the node pool. If left blank, the provider will auto-generate a unique name.
- name_prefix str
Creates a unique name for the node pool beginning with the specified prefix. Conflicts with name.
- node_config Dict[NodePoolNodeConfig]
The node configuration of the pool. See gcp.container.Cluster for schema.
- node_count float
The number of nodes per instance group. This field can be used to update the number of nodes per instance group, but should not be used alongside autoscaling.
- node_locations List[str]
The list of zones in which the node pool’s nodes should be located. Nodes must be in the region of their regional cluster or in the same region as their cluster’s zone for zonal clusters. If unspecified, the cluster-level node_locations will be used.
- project str
The ID of the project in which to create the node pool. If blank, the provider-configured project will be used.
- upgrade_settings Dict[NodePoolUpgradeSettings]
Specify node upgrade settings to change how many nodes GKE attempts to upgrade at once. The number of nodes upgraded simultaneously is the sum of max_surge and max_unavailable. The maximum number of nodes upgraded simultaneously is limited to 20.
- version str
The Kubernetes version for the nodes in this pool. Note that if this field and auto_upgrade are both specified, they will fight each other for what the node version should be, so setting both is highly discouraged. While a fuzzy version can be specified, it’s recommended that you specify explicit versions, as the provider will see spurious diffs when fuzzy versions are used. See the gcp.container.getEngineVersions data source’s version_prefix field to approximate fuzzy versions in a provider-compatible way.
Supporting Types
NodePoolAutoscaling
C#
- MaxNodeCount int
Maximum number of nodes in the NodePool. Must be >= min_node_count.
- MinNodeCount int
Minimum number of nodes in the NodePool. Must be >= 0 and <= max_node_count.

Go
- MaxNodeCount int
Maximum number of nodes in the NodePool. Must be >= min_node_count.
- MinNodeCount int
Minimum number of nodes in the NodePool. Must be >= 0 and <= max_node_count.

TypeScript
- maxNodeCount number
Maximum number of nodes in the NodePool. Must be >= min_node_count.
- minNodeCount number
Minimum number of nodes in the NodePool. Must be >= 0 and <= max_node_count.

Python
- max_node_count float
Maximum number of nodes in the NodePool. Must be >= min_node_count.
- min_node_count float
Minimum number of nodes in the NodePool. Must be >= 0 and <= max_node_count.
NodePoolManagement
C#
- AutoRepair bool
Whether the nodes will be automatically repaired.
- AutoUpgrade bool
Whether the nodes will be automatically upgraded.

Go
- AutoRepair bool
Whether the nodes will be automatically repaired.
- AutoUpgrade bool
Whether the nodes will be automatically upgraded.

TypeScript
- autoRepair boolean
Whether the nodes will be automatically repaired.
- autoUpgrade boolean
Whether the nodes will be automatically upgraded.

Python
- auto_repair bool
Whether the nodes will be automatically repaired.
- auto_upgrade bool
Whether the nodes will be automatically upgraded.
NodePoolNodeConfig
C#
- BootDiskKmsKey string
- DiskSizeGb int
- DiskType string
- GuestAccelerators List<NodePoolNodeConfigGuestAcceleratorArgs>
- ImageType string
- Labels Dictionary<string, string>
- LocalSsdCount int
- MachineType string
- Metadata Dictionary<string, string>
- MinCpuPlatform string
- OauthScopes List<string>
- Preemptible bool
- SandboxConfig NodePoolNodeConfigSandboxConfigArgs
- ServiceAccount string
- ShieldedInstanceConfig NodePoolNodeConfigShieldedInstanceConfigArgs
- Tags List<string>
- Taints List<NodePoolNodeConfigTaintArgs>
- WorkloadMetadataConfig NodePoolNodeConfigWorkloadMetadataConfigArgs

Go
- BootDiskKmsKey string
- DiskSizeGb int
- DiskType string
- GuestAccelerators []NodePoolNodeConfigGuestAccelerator
- ImageType string
- Labels map[string]string
- LocalSsdCount int
- MachineType string
- Metadata map[string]string
- MinCpuPlatform string
- OauthScopes []string
- Preemptible bool
- SandboxConfig NodePoolNodeConfigSandboxConfig
- ServiceAccount string
- ShieldedInstanceConfig NodePoolNodeConfigShieldedInstanceConfig
- Tags []string
- Taints []NodePoolNodeConfigTaint
- WorkloadMetadataConfig NodePoolNodeConfigWorkloadMetadataConfig

TypeScript
- bootDiskKmsKey string
- diskSizeGb number
- diskType string
- guestAccelerators NodePoolNodeConfigGuestAccelerator[]
- imageType string
- labels {[key: string]: string}
- localSsdCount number
- machineType string
- metadata {[key: string]: string}
- minCpuPlatform string
- oauthScopes string[]
- preemptible boolean
- sandboxConfig NodePoolNodeConfigSandboxConfig
- serviceAccount string
- shieldedInstanceConfig NodePoolNodeConfigShieldedInstanceConfig
- tags string[]
- taints NodePoolNodeConfigTaint[]
- workloadMetadataConfig NodePoolNodeConfigWorkloadMetadataConfig

Python
- boot_disk_kms_key str
- disk_type str
- disk_size_gb float
- guest_accelerators List[NodePoolNodeConfigGuestAccelerator]
- image_type str
- labels Dict[str, str]
- local_ssd_count float
- machine_type str
- metadata Dict[str, str]
- min_cpu_platform str
- oauth_scopes List[str]
- preemptible bool
- sandbox_config Dict[NodePoolNodeConfigSandboxConfig]
- service_account str
- shielded_instance_config Dict[NodePoolNodeConfigShieldedInstanceConfig]
- tags List[str]
- taints List[NodePoolNodeConfigTaint]
- workload_metadata_config Dict[NodePoolNodeConfigWorkloadMetadataConfig]
NodePoolNodeConfigGuestAccelerator
NodePoolNodeConfigSandboxConfig
NodePoolNodeConfigShieldedInstanceConfig
NodePoolNodeConfigTaint
NodePoolNodeConfigWorkloadMetadataConfig
NodePoolUpgradeSettings
C#
- MaxSurge int
The number of additional nodes that can be added to the node pool during an upgrade. Increasing max_surge raises the number of nodes that can be upgraded simultaneously. Can be set to 0 or greater.
- MaxUnavailable int
The number of nodes that can be simultaneously unavailable during an upgrade. Increasing max_unavailable raises the number of nodes that can be upgraded in parallel. Can be set to 0 or greater.

Go
- MaxSurge int
The number of additional nodes that can be added to the node pool during an upgrade. Increasing max_surge raises the number of nodes that can be upgraded simultaneously. Can be set to 0 or greater.
- MaxUnavailable int
The number of nodes that can be simultaneously unavailable during an upgrade. Increasing max_unavailable raises the number of nodes that can be upgraded in parallel. Can be set to 0 or greater.

TypeScript
- maxSurge number
The number of additional nodes that can be added to the node pool during an upgrade. Increasing max_surge raises the number of nodes that can be upgraded simultaneously. Can be set to 0 or greater.
- maxUnavailable number
The number of nodes that can be simultaneously unavailable during an upgrade. Increasing max_unavailable raises the number of nodes that can be upgraded in parallel. Can be set to 0 or greater.

Python
- max_surge float
The number of additional nodes that can be added to the node pool during an upgrade. Increasing max_surge raises the number of nodes that can be upgraded simultaneously. Can be set to 0 or greater.
- max_unavailable float
The number of nodes that can be simultaneously unavailable during an upgrade. Increasing max_unavailable raises the number of nodes that can be upgraded in parallel. Can be set to 0 or greater.
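A sketch of surge upgrades in TypeScript: with maxSurge of 1 and maxUnavailable of 0, GKE adds one extra node per upgrade step and never takes a node offline before its replacement is ready. Cluster name and zone are placeholders:

```typescript
import * as gcp from "@pulumi/gcp";

// Upgrade at most one node at a time (1 surge + 0 unavailable),
// keeping full capacity during the rollout.
const surgePool = new gcp.container.NodePool("surge-pool", {
    cluster: "my-cluster",
    location: "us-central1-a",
    nodeCount: 3,
    upgradeSettings: {
        maxSurge: 1,
        maxUnavailable: 0,
    },
});
```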
Package Details
- Repository
- https://github.com/pulumi/pulumi-gcp
- License
- Apache-2.0
- Notes
- This Pulumi package is based on the
google-beta Terraform Provider.