Google Cloud Native is in preview. Google Cloud Classic is fully supported.
Google Cloud Native v0.32.0 published on Wednesday, Nov 29, 2023 by Pulumi
google-native.dataproc/v1beta2.getCluster
Gets the resource representation for a cluster in a project.
Using getCluster
Two invocation forms are available. The direct form accepts plain arguments and either blocks until the result value is available, or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result.
function getCluster(args: GetClusterArgs, opts?: InvokeOptions): Promise<GetClusterResult>
function getClusterOutput(args: GetClusterOutputArgs, opts?: InvokeOptions): Output<GetClusterResult>

def get_cluster(cluster_name: Optional[str] = None,
                project: Optional[str] = None,
                region: Optional[str] = None,
                opts: Optional[InvokeOptions] = None) -> GetClusterResult
def get_cluster_output(cluster_name: Optional[pulumi.Input[str]] = None,
                project: Optional[pulumi.Input[str]] = None,
                region: Optional[pulumi.Input[str]] = None,
                opts: Optional[InvokeOptions] = None) -> Output[GetClusterResult]

func LookupCluster(ctx *Context, args *LookupClusterArgs, opts ...InvokeOption) (*LookupClusterResult, error)
func LookupClusterOutput(ctx *Context, args *LookupClusterOutputArgs, opts ...InvokeOption) LookupClusterResultOutput

> Note: This function is named LookupCluster in the Go SDK.
public static class GetCluster 
{
    public static Task<GetClusterResult> InvokeAsync(GetClusterArgs args, InvokeOptions? opts = null)
    public static Output<GetClusterResult> Invoke(GetClusterInvokeArgs args, InvokeOptions? opts = null)
}

public static CompletableFuture<GetClusterResult> getCluster(GetClusterArgs args, InvokeOptions options)
public static Output<GetClusterResult> getCluster(GetClusterArgs args, InvokeOptions options)
fn::invoke:
  function: google-native:dataproc/v1beta2:getCluster
  arguments:
    # arguments dictionary

The following arguments are supported:
- ClusterName string
- Region string
- Project string
- ClusterName string
- Region string
- Project string
- clusterName String
- region String
- project String
- clusterName string
- region string
- project string
- cluster_name str
- region str
- project str
- clusterName String
- region String
- project String
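As a quick illustration of the direct form, here is a minimal TypeScript sketch. The import path follows the @pulumi/google-native SDK layout, and the cluster name, project, and region values are placeholders:

import * as google_native from "@pulumi/google-native";

// Look up an existing Dataproc cluster (all identifiers below are placeholders).
const cluster = google_native.dataproc.v1beta2.getCluster({
    clusterName: "my-cluster",
    project: "my-project",
    region: "us-central1",
});

// The direct form resolves to a plain GetClusterResult.
cluster.then(result => console.log(result.clusterUuid));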
getCluster Result
The following output properties are available:
- ClusterName string
- The cluster name. Cluster names within a project must be unique. Names of deleted clusters can be reused.
- ClusterUuid string
- A cluster UUID (Unique Universal Identifier). Dataproc generates this value when it creates the cluster.
- Config Pulumi.GoogleNative.Dataproc.V1Beta2.Outputs.ClusterConfigResponse
- The cluster config. Note that Dataproc may set default values, and values may change when clusters are updated.
- Labels Dictionary<string, string>
- Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.
- Metrics Pulumi.GoogleNative.Dataproc.V1Beta2.Outputs.ClusterMetricsResponse
- Contains cluster daemon metrics such as HDFS and YARN stats.Beta Feature: This report is available for testing purposes only. It may be changed before final release.
- Project string
- The Google Cloud Platform project ID that the cluster belongs to.
- Status Pulumi.GoogleNative.Dataproc.V1Beta2.Outputs.ClusterStatusResponse
- Cluster status.
- StatusHistory List<Pulumi.GoogleNative.Dataproc.V1Beta2.Outputs.ClusterStatusResponse>
- The previous cluster status.
- ClusterName string
- The cluster name. Cluster names within a project must be unique. Names of deleted clusters can be reused.
- ClusterUuid string
- A cluster UUID (Unique Universal Identifier). Dataproc generates this value when it creates the cluster.
- Config ClusterConfigResponse
- The cluster config. Note that Dataproc may set default values, and values may change when clusters are updated.
- Labels map[string]string
- Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.
- Metrics ClusterMetricsResponse
- Contains cluster daemon metrics such as HDFS and YARN stats.Beta Feature: This report is available for testing purposes only. It may be changed before final release.
- Project string
- The Google Cloud Platform project ID that the cluster belongs to.
- Status ClusterStatusResponse
- Cluster status.
- StatusHistory []ClusterStatusResponse
- The previous cluster status.
- clusterName String
- The cluster name. Cluster names within a project must be unique. Names of deleted clusters can be reused.
- clusterUuid String
- A cluster UUID (Unique Universal Identifier). Dataproc generates this value when it creates the cluster.
- config ClusterConfigResponse
- The cluster config. Note that Dataproc may set default values, and values may change when clusters are updated.
- labels Map<String,String>
- Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.
- metrics ClusterMetricsResponse
- Contains cluster daemon metrics such as HDFS and YARN stats.Beta Feature: This report is available for testing purposes only. It may be changed before final release.
- project String
- The Google Cloud Platform project ID that the cluster belongs to.
- status ClusterStatusResponse
- Cluster status.
- statusHistory List<ClusterStatusResponse>
- The previous cluster status.
- clusterName string
- The cluster name. Cluster names within a project must be unique. Names of deleted clusters can be reused.
- clusterUuid string
- A cluster UUID (Unique Universal Identifier). Dataproc generates this value when it creates the cluster.
- config ClusterConfigResponse
- The cluster config. Note that Dataproc may set default values, and values may change when clusters are updated.
- labels {[key: string]: string}
- Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.
- metrics ClusterMetricsResponse
- Contains cluster daemon metrics such as HDFS and YARN stats.Beta Feature: This report is available for testing purposes only. It may be changed before final release.
- project string
- The Google Cloud Platform project ID that the cluster belongs to.
- status ClusterStatusResponse
- Cluster status.
- statusHistory ClusterStatusResponse[]
- The previous cluster status.
- cluster_name str
- The cluster name. Cluster names within a project must be unique. Names of deleted clusters can be reused.
- cluster_uuid str
- A cluster UUID (Unique Universal Identifier). Dataproc generates this value when it creates the cluster.
- config ClusterConfigResponse
- The cluster config. Note that Dataproc may set default values, and values may change when clusters are updated.
- labels Mapping[str, str]
- Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.
- metrics ClusterMetricsResponse
- Contains cluster daemon metrics such as HDFS and YARN stats.Beta Feature: This report is available for testing purposes only. It may be changed before final release.
- project str
- The Google Cloud Platform project ID that the cluster belongs to.
- status ClusterStatusResponse
- Cluster status.
- status_history Sequence[ClusterStatusResponse]
- The previous cluster status.
- clusterName String
- The cluster name. Cluster names within a project must be unique. Names of deleted clusters can be reused.
- clusterUuid String
- A cluster UUID (Unique Universal Identifier). Dataproc generates this value when it creates the cluster.
- config Property Map
- The cluster config. Note that Dataproc may set default values, and values may change when clusters are updated.
- labels Map<String>
- Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.
- metrics Property Map
- Contains cluster daemon metrics such as HDFS and YARN stats.Beta Feature: This report is available for testing purposes only. It may be changed before final release.
- project String
- The Google Cloud Platform project ID that the cluster belongs to.
- status Property Map
- Cluster status.
- statusHistory List<Property Map>
- The previous cluster status.
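As a sketch of how these output properties might be consumed (again assuming the @pulumi/google-native TypeScript SDK and placeholder identifiers), the output form lifts each property to a Pulumi Output that can be exported or passed to other resources:

import * as google_native from "@pulumi/google-native";

const cluster = google_native.dataproc.v1beta2.getClusterOutput({
    clusterName: "my-cluster",   // placeholder
    project: "my-project",       // placeholder
    region: "us-central1",
});

// Export a few of the result properties documented above.
export const clusterUuid = cluster.clusterUuid;
export const clusterState = cluster.status.apply(s => s.state);
export const stagingBucket = cluster.config.apply(c => c.configBucket);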
Supporting Types
AcceleratorConfigResponse  
- AcceleratorCount int
- The number of the accelerator cards of this type exposed to this instance.
- AcceleratorTypeUri string
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes)Examples * https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * nvidia-tesla-k80Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- AcceleratorCount int
- The number of the accelerator cards of this type exposed to this instance.
- AcceleratorTypeUri string
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes)Examples * https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * nvidia-tesla-k80Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- acceleratorCount Integer
- The number of the accelerator cards of this type exposed to this instance.
- acceleratorTypeUri String
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes)Examples * https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * nvidia-tesla-k80Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- acceleratorCount number
- The number of the accelerator cards of this type exposed to this instance.
- acceleratorTypeUri string
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes)Examples * https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * nvidia-tesla-k80Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- accelerator_count int
- The number of the accelerator cards of this type exposed to this instance.
- accelerator_type_uri str
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes)Examples * https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * nvidia-tesla-k80Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- acceleratorCount Number
- The number of the accelerator cards of this type exposed to this instance.
- acceleratorTypeUri String
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes)Examples * https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * nvidia-tesla-k80Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
AutoscalingConfigResponse  
- PolicyUri string
- Optional. The autoscaling policy used by the cluster.Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]Note that the policy must be in the same project and Dataproc region.
- PolicyUri string
- Optional. The autoscaling policy used by the cluster.Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]Note that the policy must be in the same project and Dataproc region.
- policyUri String
- Optional. The autoscaling policy used by the cluster.Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]Note that the policy must be in the same project and Dataproc region.
- policyUri string
- Optional. The autoscaling policy used by the cluster.Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]Note that the policy must be in the same project and Dataproc region.
- policy_uri str
- Optional. The autoscaling policy used by the cluster.Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]Note that the policy must be in the same project and Dataproc region.
- policyUri String
- Optional. The autoscaling policy used by the cluster.Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]Note that the policy must be in the same project and Dataproc region.
ClusterConfigResponse  
- AutoscalingConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.AutoscalingConfigResponse
- Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- ConfigBucket string
- Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
- EncryptionConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.EncryptionConfigResponse
- Optional. Encryption settings for the cluster.
- EndpointConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.EndpointConfigResponse
- Optional. Port/endpoint configuration for this cluster
- GceClusterConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.GceClusterConfigResponse
- Optional. The shared Compute Engine config settings for all instances in a cluster.
- GkeClusterConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.GkeClusterConfigResponse
- Optional. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. Setting this is considered mutually exclusive with Compute Engine-based options such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- InitializationActions List<Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.NodeInitializationActionResponse>
- Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1beta2/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- LifecycleConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.LifecycleConfigResponse
- Optional. The config setting for auto delete cluster schedule.
- MasterConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.InstanceGroupConfigResponse
- Optional. The Compute Engine config settings for the master instance in a cluster.
- MetastoreConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.MetastoreConfigResponse
- Optional. Metastore configuration.
- SecondaryWorkerConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.InstanceGroupConfigResponse
- Optional. The Compute Engine config settings for additional worker instances in a cluster.
- SecurityConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.SecurityConfigResponse
- Optional. Security related configuration.
- SoftwareConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.SoftwareConfigResponse
- Optional. The config settings for software inside the cluster.
- TempBucket string
- Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket. This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
- WorkerConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.InstanceGroupConfigResponse
- Optional. The Compute Engine config settings for worker instances in a cluster.
- AutoscalingConfig AutoscalingConfigResponse
- Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- ConfigBucket string
- Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
- EncryptionConfig EncryptionConfigResponse
- Optional. Encryption settings for the cluster.
- EndpointConfig EndpointConfigResponse
- Optional. Port/endpoint configuration for this cluster
- GceClusterConfig GceClusterConfigResponse
- Optional. The shared Compute Engine config settings for all instances in a cluster.
- GkeClusterConfig GkeClusterConfigResponse
- Optional. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. Setting this is considered mutually exclusive with Compute Engine-based options such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- InitializationActions []NodeInitializationActionResponse
- Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1beta2/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- LifecycleConfig LifecycleConfigResponse
- Optional. The config setting for auto delete cluster schedule.
- MasterConfig InstanceGroupConfigResponse
- Optional. The Compute Engine config settings for the master instance in a cluster.
- MetastoreConfig MetastoreConfigResponse
- Optional. Metastore configuration.
- SecondaryWorkerConfig InstanceGroupConfigResponse
- Optional. The Compute Engine config settings for additional worker instances in a cluster.
- SecurityConfig SecurityConfigResponse
- Optional. Security related configuration.
- SoftwareConfig SoftwareConfigResponse
- Optional. The config settings for software inside the cluster.
- TempBucket string
- Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket. This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
- WorkerConfig InstanceGroupConfigResponse
- Optional. The Compute Engine config settings for worker instances in a cluster.
- autoscalingConfig AutoscalingConfigResponse
- Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- configBucket String
- Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
- encryptionConfig EncryptionConfigResponse
- Optional. Encryption settings for the cluster.
- endpointConfig EndpointConfigResponse
- Optional. Port/endpoint configuration for this cluster
- gceClusterConfig GceClusterConfigResponse
- Optional. The shared Compute Engine config settings for all instances in a cluster.
- gkeClusterConfig GkeClusterConfigResponse
- Optional. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. Setting this is considered mutually exclusive with Compute Engine-based options such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- initializationActions List<NodeInitializationActionResponse>
- Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1beta2/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- lifecycleConfig LifecycleConfigResponse
- Optional. The config setting for auto delete cluster schedule.
- masterConfig InstanceGroupConfigResponse
- Optional. The Compute Engine config settings for the master instance in a cluster.
- metastoreConfig MetastoreConfigResponse
- Optional. Metastore configuration.
- secondaryWorkerConfig InstanceGroupConfigResponse
- Optional. The Compute Engine config settings for additional worker instances in a cluster.
- securityConfig SecurityConfigResponse
- Optional. Security related configuration.
- softwareConfig SoftwareConfigResponse
- Optional. The config settings for software inside the cluster.
- tempBucket String
- Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket. This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
- workerConfig InstanceGroupConfigResponse
- Optional. The Compute Engine config settings for worker instances in a cluster.
- autoscalingConfig AutoscalingConfigResponse
- Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- configBucket string
- Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
- encryptionConfig EncryptionConfigResponse
- Optional. Encryption settings for the cluster.
- endpointConfig EndpointConfigResponse
- Optional. Port/endpoint configuration for this cluster
- gceClusterConfig GceClusterConfigResponse
- Optional. The shared Compute Engine config settings for all instances in a cluster.
- gkeClusterConfig GkeClusterConfigResponse
- Optional. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. Setting this is considered mutually exclusive with Compute Engine-based options such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- initializationActions NodeInitializationActionResponse[]
- Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1beta2/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- lifecycleConfig LifecycleConfigResponse
- Optional. The config setting for auto delete cluster schedule.
- masterConfig InstanceGroupConfigResponse
- Optional. The Compute Engine config settings for the master instance in a cluster.
- metastoreConfig MetastoreConfigResponse
- Optional. Metastore configuration.
- secondaryWorkerConfig InstanceGroupConfigResponse
- Optional. The Compute Engine config settings for additional worker instances in a cluster.
- securityConfig SecurityConfigResponse
- Optional. Security related configuration.
- softwareConfig SoftwareConfigResponse
- Optional. The config settings for software inside the cluster.
- tempBucket string
- Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket. This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
- workerConfig InstanceGroupConfigResponse
- Optional. The Compute Engine config settings for worker instances in a cluster.
- autoscaling_config AutoscalingConfigResponse
- Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- config_bucket str
- Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
- encryption_config EncryptionConfigResponse
- Optional. Encryption settings for the cluster.
- endpoint_config EndpointConfigResponse
- Optional. Port/endpoint configuration for this cluster
- gce_cluster_config GceClusterConfigResponse
- Optional. The shared Compute Engine config settings for all instances in a cluster.
- gke_cluster_config GkeClusterConfigResponse
- Optional. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. Setting this is considered mutually exclusive with Compute Engine-based options such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- initialization_actions Sequence[NodeInitializationActionResponse]
- Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1beta2/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- lifecycle_config LifecycleConfigResponse
- Optional. The config setting for auto delete cluster schedule.
- master_config InstanceGroupConfigResponse
- Optional. The Compute Engine config settings for the master instance in a cluster.
- metastore_config MetastoreConfigResponse
- Optional. Metastore configuration.
- secondary_worker_config InstanceGroupConfigResponse
- Optional. The Compute Engine config settings for additional worker instances in a cluster.
- security_config SecurityConfigResponse
- Optional. Security related configuration.
- software_config SoftwareConfigResponse
- Optional. The config settings for software inside the cluster.
- temp_bucket str
- Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket. This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
- worker_config InstanceGroupConfigResponse
- Optional. The Compute Engine config settings for worker instances in a cluster.
- autoscalingConfig Property Map
- Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- configBucket String
- Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
- encryptionConfig Property Map
- Optional. Encryption settings for the cluster.
- endpointConfig Property Map
- Optional. Port/endpoint configuration for this cluster
- gceClusterConfig Property Map
- Optional. The shared Compute Engine config settings for all instances in a cluster.
- gkeClusterConfig Property Map
- Optional. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. Setting this is considered mutually exclusive with Compute Engine-based options such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- initializationActions List<Property Map>
- Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1beta2/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- lifecycleConfig Property Map
- Optional. The config setting for auto delete cluster schedule.
- masterConfig Property Map
- Optional. The Compute Engine config settings for the master instance in a cluster.
- metastoreConfig Property Map
- Optional. Metastore configuration.
- secondaryWorkerConfig Property Map
- Optional. The Compute Engine config settings for additional worker instances in a cluster.
- securityConfig Property Map
- Optional. Security related configuration.
- softwareConfig Property Map
- Optional. The config settings for software inside the cluster.
- tempBucket String
- Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket. This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
- workerConfig Property Map
- Optional. The Compute Engine config settings for worker instances in a cluster.
ClusterMetricsResponse  
- HdfsMetrics Dictionary<string, string>
- The HDFS metrics.
- YarnMetrics Dictionary<string, string>
- The YARN metrics.
- HdfsMetrics map[string]string
- The HDFS metrics.
- YarnMetrics map[string]string
- The YARN metrics.
- hdfsMetrics Map<String,String>
- The HDFS metrics.
- yarnMetrics Map<String,String>
- The YARN metrics.
- hdfsMetrics {[key: string]: string}
- The HDFS metrics.
- yarnMetrics {[key: string]: string}
- The YARN metrics.
- hdfs_metrics Mapping[str, str]
- The HDFS metrics.
- yarn_metrics Mapping[str, str]
- The YARN metrics.
- hdfsMetrics Map<String>
- The HDFS metrics.
- yarnMetrics Map<String>
- The YARN metrics.
ClusterStatusResponse  
- Detail string
- Optional details of cluster's state.
- State string
- The cluster's state.
- StateStartTime string
- Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- Substate string
- Additional state information that includes status reported by the agent.
- Detail string
- Optional details of cluster's state.
- State string
- The cluster's state.
- StateStartTime string
- Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- Substate string
- Additional state information that includes status reported by the agent.
- detail String
- Optional details of cluster's state.
- state String
- The cluster's state.
- stateStartTime String
- Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- substate String
- Additional state information that includes status reported by the agent.
- detail string
- Optional details of cluster's state.
- state string
- The cluster's state.
- stateStartTime string
- Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- substate string
- Additional state information that includes status reported by the agent.
- detail str
- Optional details of cluster's state.
- state str
- The cluster's state.
- state_start_time str
- Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- substate str
- Additional state information that includes status reported by the agent.
- detail String
- Optional details of cluster's state.
- state String
- The cluster's state.
- stateStartTime String
- Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- substate String
- Additional state information that includes status reported by the agent.
DiskConfigResponse  
- BootDiskSizeGb int
- Optional. Size in GB of the boot disk (default is 500GB).
- BootDiskType string
- Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- NumLocalSsds int
- Number of attached SSDs, from 0 to 4 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.
- BootDiskSizeGb int
- Optional. Size in GB of the boot disk (default is 500GB).
- BootDiskType string
- Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- NumLocalSsds int
- Number of attached SSDs, from 0 to 4 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.
- bootDiskSizeGb Integer
- Optional. Size in GB of the boot disk (default is 500GB).
- bootDiskType String
- Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- numLocalSsds Integer
- Number of attached SSDs, from 0 to 4 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.
- bootDiskSizeGb number
- Optional. Size in GB of the boot disk (default is 500GB).
- bootDiskType string
- Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- numLocalSsds number
- Number of attached SSDs, from 0 to 4 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.
- boot_disk_size_gb int
- Optional. Size in GB of the boot disk (default is 500GB).
- boot_disk_type str
- Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- num_local_ssds int
- Number of attached SSDs, from 0 to 4 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.
- bootDiskSizeGb Number
- Optional. Size in GB of the boot disk (default is 500GB).
- bootDiskType String
- Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- numLocalSsds Number
- Number of attached SSDs, from 0 to 4 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.
EncryptionConfigResponse  
- GcePdKmsKeyName string
- Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- GcePdKmsKeyName string
- Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- gcePdKmsKeyName String
- Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- gcePdKmsKeyName string
- Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- gce_pd_kms_key_name str
- Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- gcePdKmsKeyName String
- Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
EndpointConfigResponse  
- EnableHttpPortAccess bool
- Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- HttpPorts Dictionary<string, string>
- The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
- EnableHttpPortAccess bool
- Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- HttpPorts map[string]string
- The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
- enableHttpPortAccess Boolean
- Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- httpPorts Map<String,String>
- The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
- enableHttpPortAccess boolean
- Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- httpPorts {[key: string]: string}
- The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
- enable_http_port_access bool
- Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- http_ports Mapping[str, str]
- The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
- enableHttpPortAccess Boolean
- Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- httpPorts Map<String>
- The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
GceClusterConfigResponse   
- InternalIpOnly bool
- Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- Metadata Dictionary<string, string>
- The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- NetworkUri string
- Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information).A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default projects/[project_id]/regions/global/default default
- NodeGroupAffinity Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.NodeGroupAffinityResponse
- Optional. Node Group Affinity for sole-tenant clusters.
- PrivateIpv6GoogleAccess string
- Optional. The type of IPv6 access for a cluster.
- ReservationAffinity Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.ReservationAffinityResponse
- Optional. Reservation Affinity for consuming Zonal reservation.
- ServiceAccount string
- Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- ServiceAccountScopes List<string>
- Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.writeIf no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- ShieldedInstanceConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.ShieldedInstanceConfigResponse
- Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- SubnetworkUri string
- Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/subnetworks/sub0 projects/[project_id]/regions/us-east1/subnetworks/sub0 sub0
- Tags List<string>
- The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- ZoneUri string
- Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] us-central1-f
- InternalIpOnly bool
- Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- Metadata map[string]string
- The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- NetworkUri string
- Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information).A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default projects/[project_id]/regions/global/default default
- NodeGroupAffinity NodeGroupAffinityResponse
- Optional. Node Group Affinity for sole-tenant clusters.
- PrivateIpv6GoogleAccess string
- Optional. The type of IPv6 access for a cluster.
- ReservationAffinity ReservationAffinityResponse
- Optional. Reservation Affinity for consuming Zonal reservation.
- ServiceAccount string
- Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- ServiceAccountScopes []string
- Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.writeIf no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- ShieldedInstanceConfig ShieldedInstanceConfigResponse
- Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- SubnetworkUri string
- Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/subnetworks/sub0 projects/[project_id]/regions/us-east1/subnetworks/sub0 sub0
- Tags []string
- The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- ZoneUri string
- Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] us-central1-f
- internalIpOnly Boolean
- Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- metadata Map<String,String>
- The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- networkUri String
- Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information).A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default projects/[project_id]/regions/global/default default
- nodeGroupAffinity NodeGroupAffinityResponse
- Optional. Node Group Affinity for sole-tenant clusters.
- privateIpv6GoogleAccess String
- Optional. The type of IPv6 access for a cluster.
- reservationAffinity ReservationAffinity Response 
- Optional. Reservation Affinity for consuming Zonal reservation.
- serviceAccount String
- Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- serviceAccountScopes List<String>
- Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.writeIf no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- shieldedInstanceConfig ShieldedInstanceConfigResponse
- Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- subnetworkUri String
- Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/subnetworks/sub0 projects/[project_id]/regions/us-east1/subnetworks/sub0 sub0
- tags List<String>
- The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- zoneUri String
- Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] us-central1-f
- internalIpOnly boolean
- Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- metadata {[key: string]: string}
- The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- networkUri string
- Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information).A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default projects/[project_id]/regions/global/default default
- nodeGroupAffinity NodeGroupAffinityResponse
- Optional. Node Group Affinity for sole-tenant clusters.
- privateIpv6GoogleAccess string
- Optional. The type of IPv6 access for a cluster.
- reservationAffinity ReservationAffinity Response 
- Optional. Reservation Affinity for consuming Zonal reservation.
- serviceAccount string
- Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- serviceAccountScopes string[]
- Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.writeIf no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- shieldedInstanceConfig ShieldedInstanceConfigResponse
- Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- subnetworkUri string
- Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/subnetworks/sub0 projects/[project_id]/regions/us-east1/subnetworks/sub0 sub0
- tags string[]
- The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- zoneUri string
- Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] us-central1-f
- internal_ip_only bool
- Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- metadata Mapping[str, str]
- The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- network_uri str
- Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information).A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default projects/[project_id]/regions/global/default default
- node_group_affinity NodeGroupAffinityResponse
- Optional. Node Group Affinity for sole-tenant clusters.
- private_ipv6_google_access str
- Optional. The type of IPv6 access for a cluster.
- reservation_affinity ReservationAffinity Response 
- Optional. Reservation Affinity for consuming Zonal reservation.
- service_account str
- Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- service_account_scopes Sequence[str]
- Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.writeIf no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- shielded_instance_config ShieldedInstanceConfigResponse
- Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- subnetwork_uri str
- Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/subnetworks/sub0 projects/[project_id]/regions/us-east1/subnetworks/sub0 sub0
- tags Sequence[str]
- The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- zone_uri str
- Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] us-central1-f
- internalIpOnly Boolean
- Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- metadata Map<String>
- The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- networkUri String
- Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information).A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default projects/[project_id]/regions/global/default default
- nodeGroupAffinity Property Map
- Optional. Node Group Affinity for sole-tenant clusters.
- privateIpv6GoogleAccess String
- Optional. The type of IPv6 access for a cluster.
- reservationAffinity Property Map
- Optional. Reservation Affinity for consuming Zonal reservation.
- serviceAccount String
- Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- serviceAccountScopes List<String>
- Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.writeIf no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- shieldedInstanceConfig Property Map
- Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- subnetworkUri String
- Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/subnetworks/sub0 projects/[project_id]/regions/us-east1/subnetworks/sub0 sub0
- tags List<String>
- The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- zoneUri String
- Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] us-central1-f
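For orientation, here is a minimal TypeScript sketch of reading a few of the GceClusterConfigResponse fields above from a getCluster lookup. It assumes the parent ClusterConfigResponse exposes this object as config.gceClusterConfig; the cluster name, project, and region are placeholders.
import * as google_native from "@pulumi/google-native";

// Look up an existing cluster (placeholder identifiers).
const cluster = google_native.dataproc.v1beta2.getClusterOutput({
    clusterName: "my-cluster",
    project: "my-project",
    region: "us-central1",
});

// GceClusterConfigResponse fields come back as plain strings/booleans.
export const zoneUri = cluster.config.apply(c => c.gceClusterConfig?.zoneUri);
export const networkUri = cluster.config.apply(c => c.gceClusterConfig?.networkUri);
export const internalIpOnly = cluster.config.apply(c => c.gceClusterConfig?.internalIpOnly);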
GkeClusterConfigResponse   
- NamespacedGkeDeploymentTarget Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.NamespacedGkeDeploymentTargetResponse
- Optional. A target for the deployment.
- NamespacedGkeDeploymentTarget NamespacedGkeDeploymentTargetResponse
- Optional. A target for the deployment.
- namespacedGkeDeploymentTarget NamespacedGkeDeploymentTargetResponse
- Optional. A target for the deployment.
- namespacedGkeDeploymentTarget NamespacedGkeDeploymentTargetResponse
- Optional. A target for the deployment.
- namespaced_gke_deployment_target NamespacedGkeDeploymentTargetResponse
- Optional. A target for the deployment.
- namespacedGkeDeploymentTarget Property Map
- Optional. A target for the deployment.
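A similar TypeScript sketch for inspecting the GKE deployment target; the path config.gkeClusterConfig is assumed from ClusterConfigResponse, the identifiers are placeholders, and both outputs simply resolve to undefined for clusters running on Compute Engine.
import * as google_native from "@pulumi/google-native";

const cluster = google_native.dataproc.v1beta2.getClusterOutput({
    clusterName: "my-gke-cluster",
    project: "my-project",
    region: "us-central1",
});

// Namespace and target GKE cluster from NamespacedGkeDeploymentTargetResponse.
export const gkeTarget = cluster.config.apply(
    c => c.gkeClusterConfig?.namespacedGkeDeploymentTarget?.targetGkeCluster);
export const gkeNamespace = cluster.config.apply(
    c => c.gkeClusterConfig?.namespacedGkeDeploymentTarget?.clusterNamespace);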
InstanceGroupConfigResponse   
- Accelerators
List<Pulumi.Google Native. Dataproc. V1Beta2. Inputs. Accelerator Config Response> 
- Optional. The Compute Engine accelerator configuration for these instances.
- DiskConfig Pulumi.Google Native. Dataproc. V1Beta2. Inputs. Disk Config Response 
- Optional. Disk option config settings.
- ImageUri string
- Optional. The Compute Engine image resource used for cluster instances.The URI can represent an image or image family.Image examples: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-idImage family examples. Dataproc will use the most recent image from the family: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
- InstanceNames List<string>
- The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
- InstanceReferences List<Pulumi.Google Native. Dataproc. V1Beta2. Inputs. Instance Reference Response> 
- List of references to Compute Engine instances.
- IsPreemptible bool
- Specifies that this instance group contains preemptible instances.
- MachineTypeUri string
- Optional. The Compute Engine machine type used for cluster instances.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 n1-standard-2Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
- ManagedGroupConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.ManagedGroupConfigResponse
- The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
- MinCpuPlatform string
- Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
- NumInstances int
- Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
- Preemptibility string
- Optional. Specifies the preemptibility of the instance group.The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed.The default value for secondary instances is PREEMPTIBLE.
- Accelerators
[]AcceleratorConfig Response 
- Optional. The Compute Engine accelerator configuration for these instances.
- DiskConfig DiskConfig Response 
- Optional. Disk option config settings.
- ImageUri string
- Optional. The Compute Engine image resource used for cluster instances.The URI can represent an image or image family.Image examples: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-idImage family examples. Dataproc will use the most recent image from the family: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
- InstanceNames []string
- The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
- InstanceReferences []InstanceReference Response 
- List of references to Compute Engine instances.
- IsPreemptible bool
- Specifies that this instance group contains preemptible instances.
- MachineTypeUri string
- Optional. The Compute Engine machine type used for cluster instances.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 n1-standard-2Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
- ManagedGroupConfig ManagedGroupConfigResponse
- The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
- MinCpuPlatform string
- Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
- NumInstances int
- Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
- Preemptibility string
- Optional. Specifies the preemptibility of the instance group.The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed.The default value for secondary instances is PREEMPTIBLE.
- accelerators
List<AcceleratorConfig Response> 
- Optional. The Compute Engine accelerator configuration for these instances.
- diskConfig DiskConfig Response 
- Optional. Disk option config settings.
- imageUri String
- Optional. The Compute Engine image resource used for cluster instances.The URI can represent an image or image family.Image examples: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-idImage family examples. Dataproc will use the most recent image from the family: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
- instanceNames List<String>
- The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
- instanceReferences List<InstanceReference Response> 
- List of references to Compute Engine instances.
- isPreemptible Boolean
- Specifies that this instance group contains preemptible instances.
- machineTypeUri String
- Optional. The Compute Engine machine type used for cluster instances.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 n1-standard-2Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
- managedGroupConfig ManagedGroupConfigResponse
- The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
- minCpuPlatform String
- Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
- numInstances Integer
- Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
- preemptibility String
- Optional. Specifies the preemptibility of the instance group.The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed.The default value for secondary instances is PREEMPTIBLE.
- accelerators
AcceleratorConfig Response[] 
- Optional. The Compute Engine accelerator configuration for these instances.
- diskConfig DiskConfig Response 
- Optional. Disk option config settings.
- imageUri string
- Optional. The Compute Engine image resource used for cluster instances.The URI can represent an image or image family.Image examples: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-idImage family examples. Dataproc will use the most recent image from the family: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
- instanceNames string[]
- The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
- instanceReferences InstanceReference Response[] 
- List of references to Compute Engine instances.
- isPreemptible boolean
- Specifies that this instance group contains preemptible instances.
- machineTypeUri string
- Optional. The Compute Engine machine type used for cluster instances.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 n1-standard-2Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
- managedGroupConfig ManagedGroupConfigResponse
- The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
- minCpuPlatform string
- Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
- numInstances number
- Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
- preemptibility string
- Optional. Specifies the preemptibility of the instance group.The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed.The default value for secondary instances is PREEMPTIBLE.
- accelerators
Sequence[AcceleratorConfig Response] 
- Optional. The Compute Engine accelerator configuration for these instances.
- disk_config DiskConfig Response 
- Optional. Disk option config settings.
- image_uri str
- Optional. The Compute Engine image resource used for cluster instances.The URI can represent an image or image family.Image examples: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-idImage family examples. Dataproc will use the most recent image from the family: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
- instance_names Sequence[str]
- The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
- instance_references Sequence[InstanceReference Response] 
- List of references to Compute Engine instances.
- is_preemptible bool
- Specifies that this instance group contains preemptible instances.
- machine_type_uri str
- Optional. The Compute Engine machine type used for cluster instances.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 n1-standard-2Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
- managed_group_config ManagedGroupConfigResponse
- The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
- min_cpu_platform str
- Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
- num_instances int
- Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
- preemptibility str
- Optional. Specifies the preemptibility of the instance group.The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed.The default value for secondary instances is PREEMPTIBLE.
- accelerators List<Property Map>
- Optional. The Compute Engine accelerator configuration for these instances.
- diskConfig Property Map
- Optional. Disk option config settings.
- imageUri String
- Optional. The Compute Engine image resource used for cluster instances.The URI can represent an image or image family.Image examples: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-idImage family examples. Dataproc will use the most recent image from the family: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
- instanceNames List<String>
- The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
- instanceReferences List<Property Map>
- List of references to Compute Engine instances.
- isPreemptible Boolean
- Specifies that this instance group contains preemptible instances.
- machineTypeUri String
- Optional. The Compute Engine machine type used for cluster instances.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 n1-standard-2Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
- managedGroupConfig Property Map
- The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
- minCpuPlatform String
- Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
- numInstances Number
- Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
- preemptibility String
- Optional. Specifies the preemptibility of the instance group.The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed.The default value for secondary instances is PREEMPTIBLE.
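The InstanceGroupConfigResponse shape above is shared by the master, worker, and secondary worker groups. A TypeScript sketch, assuming those groups are exposed on ClusterConfigResponse as config.masterConfig and config.workerConfig (placeholder identifiers):
import * as google_native from "@pulumi/google-native";

const cluster = google_native.dataproc.v1beta2.getClusterOutput({
    clusterName: "my-cluster",
    project: "my-project",
    region: "us-central1",
});

// Each group uses the InstanceGroupConfigResponse fields documented above.
export const masterCount = cluster.config.apply(c => c.masterConfig?.numInstances);
export const workerMachineType = cluster.config.apply(c => c.workerConfig?.machineTypeUri);
export const workerInstanceNames = cluster.config.apply(c => c.workerConfig?.instanceNames);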
InstanceReferenceResponse  
- InstanceId string
- The unique identifier of the Compute Engine instance.
- InstanceName string
- The user-friendly name of the Compute Engine instance.
- PublicKey string
- The public key used for sharing data with this instance.
- InstanceId string
- The unique identifier of the Compute Engine instance.
- InstanceName string
- The user-friendly name of the Compute Engine instance.
- PublicKey string
- The public key used for sharing data with this instance.
- instanceId String
- The unique identifier of the Compute Engine instance.
- instanceName String
- The user-friendly name of the Compute Engine instance.
- publicKey String
- The public key used for sharing data with this instance.
- instanceId string
- The unique identifier of the Compute Engine instance.
- instanceName string
- The user-friendly name of the Compute Engine instance.
- publicKey string
- The public key used for sharing data with this instance.
- instance_id str
- The unique identifier of the Compute Engine instance.
- instance_name str
- The user-friendly name of the Compute Engine instance.
- public_key str
- The public key used for sharing data with this instance.
- instanceId String
- The unique identifier of the Compute Engine instance.
- instanceName String
- The user-friendly name of the Compute Engine instance.
- publicKey String
- The public key used for sharing data with this instance.
KerberosConfigResponse  
- CrossRealmTrustAdminServer string
- Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- CrossRealmTrustKdc string
- Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- CrossRealmTrustRealm string
- Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
- CrossRealmTrustSharedPasswordUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
- EnableKerberos bool
- Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
- KdcDbKeyUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
- KeyPasswordUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
- KeystorePasswordUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
- KeystoreUri string
- Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- KmsKeyUri string
- Optional. The uri of the KMS key used to encrypt various sensitive files.
- Realm string
- Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
- RootPrincipalPasswordUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
- TgtLifetimeHours int
- Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
- TruststorePasswordUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
- TruststoreUri string
- Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- CrossRealmTrustAdminServer string
- Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- CrossRealmTrustKdc string
- Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- CrossRealmTrustRealm string
- Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
- CrossRealmTrustSharedPasswordUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
- EnableKerberos bool
- Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
- KdcDbKeyUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
- KeyPasswordUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
- KeystorePasswordUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
- KeystoreUri string
- Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- KmsKeyUri string
- Optional. The uri of the KMS key used to encrypt various sensitive files.
- Realm string
- Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
- RootPrincipalPasswordUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
- TgtLifetimeHours int
- Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
- TruststorePasswordUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
- TruststoreUri string
- Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- crossRealmTrustAdminServer String
- Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- crossRealmTrustKdc String
- Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- crossRealmTrustRealm String
- Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
- crossRealmTrustSharedPasswordUri String
- Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
- enableKerberos Boolean
- Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
- kdcDbKeyUri String
- Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
- keyPasswordUri String
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
- keystorePasswordUri String
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
- keystoreUri String
- Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- kmsKeyUri String
- Optional. The uri of the KMS key used to encrypt various sensitive files.
- realm String
- Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
- rootPrincipalPasswordUri String
- Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
- tgtLifetimeHours Integer
- Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
- truststorePasswordUri String
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
- truststoreUri String
- Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- crossRealmTrustAdminServer string
- Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- crossRealmTrustKdc string
- Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- crossRealmTrustRealm string
- Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
- crossRealmTrustSharedPasswordUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
- enableKerberos boolean
- Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
- kdcDbKeyUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
- keyPasswordUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
- keystorePasswordUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
- keystoreUri string
- Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- kmsKeyUri string
- Optional. The uri of the KMS key used to encrypt various sensitive files.
- realm string
- Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
- rootPrincipalPasswordUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
- tgtLifetimeHours number
- Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
- truststorePasswordUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
- truststoreUri string
- Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- cross_realm_trust_admin_server str
- Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- cross_realm_trust_kdc str
- Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- cross_realm_trust_realm str
- Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
- cross_realm_trust_shared_password_uri str
- Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
- enable_kerberos bool
- Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
- kdc_db_key_uri str
- Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
- key_password_uri str
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
- keystore_password_uri str
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
- keystore_uri str
- Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- kms_key_uri str
- Optional. The uri of the KMS key used to encrypt various sensitive files.
- realm str
- Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
- root_principal_password_uri str
- Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
- tgt_lifetime_hours int
- Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
- truststore_password_uri str
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
- truststore_uri str
- Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- crossRealmTrustAdminServer String
- Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- crossRealmTrustKdc String
- Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- crossRealmTrustRealm String
- Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
- crossRealmTrustSharedPasswordUri String
- Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
- enableKerberos Boolean
- Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
- kdcDbKeyUri String
- Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
- keyPasswordUri String
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
- keystorePasswordUri String
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
- keystoreUri String
- Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- kmsKeyUri String
- Optional. The uri of the KMS key used to encrypt various sensitive files.
- realm String
- Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
- rootPrincipalPasswordUri String
- Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
- tgtLifetimeHours Number
- Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
- truststorePasswordUri String
- Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
- truststoreUri String
- Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
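KerberosConfigResponse is nested under the cluster's security configuration. A TypeScript sketch of checking whether Kerberos is enabled, assuming the path config.securityConfig.kerberosConfig (placeholder identifiers):
import * as google_native from "@pulumi/google-native";

const cluster = google_native.dataproc.v1beta2.getClusterOutput({
    clusterName: "my-cluster",
    project: "my-project",
    region: "us-central1",
});

// enableKerberos defaults to false; realm is only meaningful when it is true.
export const kerberosEnabled = cluster.config.apply(
    c => c.securityConfig?.kerberosConfig?.enableKerberos ?? false);
export const kerberosRealm = cluster.config.apply(
    c => c.securityConfig?.kerberosConfig?.realm);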
LifecycleConfigResponse  
- AutoDeleteTime string
- Optional. The time when cluster will be auto-deleted. (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- AutoDeleteTtl string
- Optional. The lifetime duration of cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- IdleDeleteTtl string
- Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- IdleStartTime string
- The time when cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- AutoDeleteTime string
- Optional. The time when cluster will be auto-deleted. (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- AutoDeleteTtl string
- Optional. The lifetime duration of cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- IdleDeleteTtl string
- Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- IdleStartTime string
- The time when cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- autoDeleteTime String
- Optional. The time when cluster will be auto-deleted. (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- autoDeleteTtl String
- Optional. The lifetime duration of cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- idleDeleteTtl String
- Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- idleStartTime String
- The time when cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- autoDeleteTime string
- Optional. The time when cluster will be auto-deleted. (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- autoDeleteTtl string
- Optional. The lifetime duration of cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- idleDeleteTtl string
- Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- idleStartTime string
- The time when cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- auto_delete_time str
- Optional. The time when cluster will be auto-deleted. (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- auto_delete_ttl str
- Optional. The lifetime duration of cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- idle_delete_ttl str
- Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- idle_start_time str
- The time when cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- autoDeleteTime String
- Optional. The time when cluster will be auto-deleted. (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- autoDeleteTtl String
- Optional. The lifetime duration of cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- idleDeleteTtl String
- Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- idleStartTime String
- The time when cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
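A TypeScript sketch of reading the scheduled-deletion settings above, assuming LifecycleConfigResponse is exposed as config.lifecycleConfig (placeholder identifiers). The TTL fields are duration strings and the time fields are timestamp strings, per the JSON representations linked above.
import * as google_native from "@pulumi/google-native";

const cluster = google_native.dataproc.v1beta2.getClusterOutput({
    clusterName: "my-cluster",
    project: "my-project",
    region: "us-central1",
});

// e.g. idleDeleteTtl as a duration string such as "600s",
// autoDeleteTime as an RFC 3339 timestamp string.
export const idleDeleteTtl = cluster.config.apply(c => c.lifecycleConfig?.idleDeleteTtl);
export const autoDeleteTime = cluster.config.apply(c => c.lifecycleConfig?.autoDeleteTime);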
ManagedGroupConfigResponse   
- InstanceGroupManagerName string
- The name of the Instance Group Manager for this group.
- InstanceTemplateName string
- The name of the Instance Template used for the Managed Instance Group.
- InstanceGroupManagerName string
- The name of the Instance Group Manager for this group.
- InstanceTemplateName string
- The name of the Instance Template used for the Managed Instance Group.
- instanceGroupManagerName String
- The name of the Instance Group Manager for this group.
- instanceTemplateName String
- The name of the Instance Template used for the Managed Instance Group.
- instanceGroupManagerName string
- The name of the Instance Group Manager for this group.
- instanceTemplateName string
- The name of the Instance Template used for the Managed Instance Group.
- instance_group_manager_name str
- The name of the Instance Group Manager for this group.
- instance_template_name str
- The name of the Instance Template used for the Managed Instance Group.
- instanceGroupManagerName String
- The name of the Instance Group Manager for this group.
- instanceTemplateName String
- The name of the Instance Template used for the Managed Instance Group.
MetastoreConfigResponse  
- DataprocMetastoreService string
- Resource name of an existing Dataproc Metastore service.Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
- DataprocMetastoreService string
- Resource name of an existing Dataproc Metastore service.Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
- dataprocMetastoreService String
- Resource name of an existing Dataproc Metastore service.Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
- dataprocMetastoreService string
- Resource name of an existing Dataproc Metastore service.Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
- dataproc_metastore_service str
- Resource name of an existing Dataproc Metastore service.Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
- dataprocMetastoreService String
- Resource name of an existing Dataproc Metastore service.Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
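A TypeScript sketch of reading the attached Dataproc Metastore service, assuming MetastoreConfigResponse is exposed as config.metastoreConfig (placeholder identifiers):
import * as google_native from "@pulumi/google-native";

const cluster = google_native.dataproc.v1beta2.getClusterOutput({
    clusterName: "my-cluster",
    project: "my-project",
    region: "us-central1",
});

// Resolves to projects/[project_id]/locations/[dataproc_region]/services/[service-name]
// when a metastore service is attached, otherwise undefined.
export const metastoreService = cluster.config.apply(
    c => c.metastoreConfig?.dataprocMetastoreService);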
NamespacedGkeDeploymentTargetResponse    
- ClusterNamespace string
- Optional. A namespace within the GKE cluster to deploy into.
- TargetGkeCluster string
- Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- ClusterNamespace string
- Optional. A namespace within the GKE cluster to deploy into.
- TargetGkeCluster string
- Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- clusterNamespace String
- Optional. A namespace within the GKE cluster to deploy into.
- targetGkeCluster String
- Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- clusterNamespace string
- Optional. A namespace within the GKE cluster to deploy into.
- targetGkeCluster string
- Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- cluster_namespace str
- Optional. A namespace within the GKE cluster to deploy into.
- target_gke_cluster str
- Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- clusterNamespace String
- Optional. A namespace within the GKE cluster to deploy into.
- targetGkeCluster String
- Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
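A minimal sketch of reading the GKE deployment target back, assuming (per the v1beta2 ClusterConfig shape) that it is surfaced under the cluster's gkeClusterConfig; all identifiers are placeholders.

import * as google from "@pulumi/google-native";

// Minimal sketch: for a Dataproc-on-GKE cluster, report where it is deployed.
// Identifiers are placeholders.
google.dataproc.v1beta2.getCluster({
    clusterName: "my-gke-backed-cluster",
    region: "us-central1",
    project: "my-project",
}).then(cluster => {
    const target = cluster.config.gkeClusterConfig?.namespacedGkeDeploymentTarget;
    console.log(target?.targetGkeCluster, target?.clusterNamespace);
});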
NodeGroupAffinityResponse   
- NodeGroupUri string
- The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on.A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 node-group-1
- NodeGroupUri string
- The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on.A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 node-group-1
- nodeGroupUri String
- The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on.A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 node-group-1
- nodeGroupUri string
- The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on.A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 node-group-1
- node_group_uri str
- The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on.A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 node-group-1
- nodeGroupUri String
- The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on.A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 node-group-1
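Node group affinity is part of the cluster's Compute Engine settings; a minimal sketch (placeholder identifiers) reading the sole-tenant node group URI back:

import * as google from "@pulumi/google-native";

// Minimal sketch: read the sole-tenant node group the cluster was created on.
// Identifiers are placeholders.
google.dataproc.v1beta2.getCluster({
    clusterName: "my-cluster",
    region: "us-central1",
    project: "my-project",
}).then(cluster =>
    console.log(cluster.config.gceClusterConfig?.nodeGroupAffinity?.nodeGroupUri));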
NodeInitializationActionResponse   
- ExecutableFile string
- Cloud Storage URI of executable file.
- ExecutionTimeout string
- Optional. Amount of time executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at end of the timeout period.
- ExecutableFile string
- Cloud Storage URI of executable file.
- ExecutionTimeout string
- Optional. Amount of time executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at end of the timeout period.
- executableFile String
- Cloud Storage URI of executable file.
- executionTimeout String
- Optional. Amount of time executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at end of the timeout period.
- executableFile string
- Cloud Storage URI of executable file.
- executionTimeout string
- Optional. Amount of time executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at end of the timeout period.
- executable_file str
- Cloud Storage URI of executable file.
- execution_timeout str
- Optional. Amount of time executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at end of the timeout period.
- executableFile String
- Cloud Storage URI of executable file.
- executionTimeout String
- Optional. Amount of time executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at end of the timeout period.
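Initialization actions are returned as a list on the cluster config; a minimal sketch (placeholder identifiers) that prints each script and its timeout:

import * as google from "@pulumi/google-native";

// Minimal sketch: list the startup scripts configured for an existing cluster.
// Identifiers are placeholders; executionTimeout is a Duration string like "600s".
google.dataproc.v1beta2.getCluster({
    clusterName: "my-cluster",
    region: "us-central1",
    project: "my-project",
}).then(cluster => {
    for (const action of cluster.config.initializationActions ?? []) {
        console.log(`${action.executableFile} (timeout: ${action.executionTimeout})`);
    }
});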
ReservationAffinityResponse  
- ConsumeReservationType string
- Optional. Type of reservation to consume
- Key string
- Optional. Corresponds to the label key of reservation resource.
- Values List<string>
- Optional. Corresponds to the label values of reservation resource.
- ConsumeReservationType string
- Optional. Type of reservation to consume
- Key string
- Optional. Corresponds to the label key of reservation resource.
- Values []string
- Optional. Corresponds to the label values of reservation resource.
- consumeReservationType String
- Optional. Type of reservation to consume
- key String
- Optional. Corresponds to the label key of reservation resource.
- values List<String>
- Optional. Corresponds to the label values of reservation resource.
- consumeReservationType string
- Optional. Type of reservation to consume
- key string
- Optional. Corresponds to the label key of reservation resource.
- values string[]
- Optional. Corresponds to the label values of reservation resource.
- consume_reservation_type str
- Optional. Type of reservation to consume
- key str
- Optional. Corresponds to the label key of reservation resource.
- values Sequence[str]
- Optional. Corresponds to the label values of reservation resource.
- consumeReservationType String
- Optional. Type of reservation to consume
- key String
- Optional. Corresponds to the label key of reservation resource.
- values List<String>
- Optional. Corresponds to the label values of reservation resource.
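Reservation affinity also lives under the Compute Engine settings; a minimal sketch (placeholder identifiers):

import * as google from "@pulumi/google-native";

// Minimal sketch: show which Compute Engine reservation the cluster's VMs consume.
// Identifiers are placeholders.
google.dataproc.v1beta2.getCluster({
    clusterName: "my-cluster",
    region: "us-central1",
    project: "my-project",
}).then(cluster => {
    const affinity = cluster.config.gceClusterConfig?.reservationAffinity;
    console.log(affinity?.consumeReservationType, affinity?.key, affinity?.values);
});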
SecurityConfigResponse  
- KerberosConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.KerberosConfigResponse
- Optional. Kerberos related configuration.
- KerberosConfig KerberosConfigResponse
- Optional. Kerberos related configuration.
- kerberosConfig KerberosConfigResponse
- Optional. Kerberos related configuration.
- kerberosConfig KerberosConfigResponse
- Optional. Kerberos related configuration.
- kerberos_config KerberosConfigResponse
- Optional. Kerberos related configuration.
- kerberosConfig Property Map
- Optional. Kerberos related configuration.
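A minimal sketch (placeholder identifiers) that checks whether a fetched cluster carries any Kerberos configuration at all:

import * as google from "@pulumi/google-native";

// Minimal sketch: determine whether the cluster has Kerberos-related settings.
// Identifiers are placeholders; the detailed fields are documented under
// KerberosConfigResponse.
google.dataproc.v1beta2.getCluster({
    clusterName: "my-cluster",
    region: "us-central1",
    project: "my-project",
}).then(cluster => {
    const kerberos = cluster.config.securityConfig?.kerberosConfig;
    console.log(kerberos ? "Kerberos config present" : "no Kerberos config");
});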
ShieldedInstanceConfigResponse   
- EnableIntegrityMonitoring bool
- Optional. Defines whether instances have integrity monitoring enabled.
- EnableSecureBoot bool
- Optional. Defines whether instances have Secure Boot enabled.
- EnableVtpm bool
- Optional. Defines whether instances have the vTPM enabled.
- EnableIntegrityMonitoring bool
- Optional. Defines whether instances have integrity monitoring enabled.
- EnableSecureBoot bool
- Optional. Defines whether instances have Secure Boot enabled.
- EnableVtpm bool
- Optional. Defines whether instances have the vTPM enabled.
- enableIntegrityMonitoring Boolean
- Optional. Defines whether instances have integrity monitoring enabled.
- enableSecureBoot Boolean
- Optional. Defines whether instances have Secure Boot enabled.
- enableVtpm Boolean
- Optional. Defines whether instances have the vTPM enabled.
- enableIntegrityMonitoring boolean
- Optional. Defines whether instances have integrity monitoring enabled.
- enableSecureBoot boolean
- Optional. Defines whether instances have Secure Boot enabled.
- enableVtpm boolean
- Optional. Defines whether instances have the vTPM enabled.
- enable_integrity_monitoring bool
- Optional. Defines whether instances have integrity monitoring enabled.
- enable_secure_boot bool
- Optional. Defines whether instances have Secure Boot enabled.
- enable_vtpm bool
- Optional. Defines whether instances have the vTPM enabled.
- enableIntegrityMonitoring Boolean
- Optional. Defines whether instances have integrity monitoring enabled.
- enableSecureBoot Boolean
- Optional. Defines whether instances have Secure Boot enabled.
- enableVtpm Boolean
- Optional. Defines whether instances have the vTPM enabled.
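Shielded VM settings hang off the Compute Engine config. The helper below is a minimal sketch typed against the SDK's result shape (no API call), summarizing the three flags:

import * as google from "@pulumi/google-native";

// Minimal sketch: summarize Shielded VM flags from an already-fetched result.
function shieldedVmSummary(cluster: google.dataproc.v1beta2.GetClusterResult): string {
    const s = cluster.config.gceClusterConfig?.shieldedInstanceConfig;
    return `secureBoot=${s?.enableSecureBoot}, vtpm=${s?.enableVtpm}, ` +
        `integrityMonitoring=${s?.enableIntegrityMonitoring}`;
}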
SoftwareConfigResponse  
- ImageVersion string
- Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
- OptionalComponents List<string>
- The set of optional components to activate on the cluster.
- Properties Dictionary<string, string>
- Optional. The properties to set on daemon config files.Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml core: core-site.xml distcp: distcp-default.xml hdfs: hdfs-site.xml hive: hive-site.xml mapred: mapred-site.xml pig: pig.properties spark: spark-defaults.conf yarn: yarn-site.xmlFor more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- ImageVersion string
- Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
- OptionalComponents []string
- The set of optional components to activate on the cluster.
- Properties map[string]string
- Optional. The properties to set on daemon config files.Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml core: core-site.xml distcp: distcp-default.xml hdfs: hdfs-site.xml hive: hive-site.xml mapred: mapred-site.xml pig: pig.properties spark: spark-defaults.conf yarn: yarn-site.xmlFor more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- imageVersion String
- Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
- optionalComponents List<String>
- The set of optional components to activate on the cluster.
- properties Map<String,String>
- Optional. The properties to set on daemon config files.Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml core: core-site.xml distcp: distcp-default.xml hdfs: hdfs-site.xml hive: hive-site.xml mapred: mapred-site.xml pig: pig.properties spark: spark-defaults.conf yarn: yarn-site.xmlFor more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- imageVersion string
- Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
- optionalComponents string[]
- The set of optional components to activate on the cluster.
- properties {[key: string]: string}
- Optional. The properties to set on daemon config files.Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml core: core-site.xml distcp: distcp-default.xml hdfs: hdfs-site.xml hive: hive-site.xml mapred: mapred-site.xml pig: pig.properties spark: spark-defaults.conf yarn: yarn-site.xmlFor more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- image_version str
- Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
- optional_components Sequence[str]
- The set of optional components to activate on the cluster.
- properties Mapping[str, str]
- Optional. The properties to set on daemon config files.Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml core: core-site.xml distcp: distcp-default.xml hdfs: hdfs-site.xml hive: hive-site.xml mapred: mapred-site.xml pig: pig.properties spark: spark-defaults.conf yarn: yarn-site.xmlFor more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- imageVersion String
- Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
- optionalComponents List<String>
- The set of optional components to activate on the cluster.
- properties Map<String>
- Optional. The properties to set on daemon config files.Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml core: core-site.xml distcp: distcp-default.xml hdfs: hdfs-site.xml hive: hive-site.xml mapred: mapred-site.xml pig: pig.properties spark: spark-defaults.conf yarn: yarn-site.xmlFor more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
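A minimal sketch (placeholder identifiers) reading the image version and one daemon property back; note the prefix:property key format described above.

import * as google from "@pulumi/google-native";

// Minimal sketch: read the Dataproc image version and a Spark default from an
// existing cluster. Identifiers are placeholders.
google.dataproc.v1beta2.getCluster({
    clusterName: "my-cluster",
    region: "us-central1",
    project: "my-project",
}).then(cluster => {
    const software = cluster.config.softwareConfig;
    console.log(software?.imageVersion);
    // Property keys use the prefix:property form, e.g. spark:spark.executor.memory.
    console.log(software?.properties?.["spark:spark.executor.memory"]);
});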
Package Details
- Repository
- Google Cloud Native pulumi/pulumi-google-native
- License
- Apache-2.0