Google Cloud Native is in preview. Google Cloud Classic is fully supported.
google-native.dataplex/v1.Task
Creates a task resource within a lake. Auto-naming is currently not supported for this resource, so a task identifier must be supplied explicitly.
Create Task Resource
Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.
Constructor syntax
new Task(name: string, args: TaskArgs, opts?: CustomResourceOptions);
@overload
def Task(resource_name: str,
         args: TaskArgs,
         opts: Optional[ResourceOptions] = None)
@overload
def Task(resource_name: str,
         opts: Optional[ResourceOptions] = None,
         execution_spec: Optional[GoogleCloudDataplexV1TaskExecutionSpecArgs] = None,
         lake_id: Optional[str] = None,
         task_id: Optional[str] = None,
         trigger_spec: Optional[GoogleCloudDataplexV1TaskTriggerSpecArgs] = None,
         description: Optional[str] = None,
         display_name: Optional[str] = None,
         labels: Optional[Mapping[str, str]] = None,
         location: Optional[str] = None,
         notebook: Optional[GoogleCloudDataplexV1TaskNotebookTaskConfigArgs] = None,
         project: Optional[str] = None,
         spark: Optional[GoogleCloudDataplexV1TaskSparkTaskConfigArgs] = None)
func NewTask(ctx *Context, name string, args TaskArgs, opts ...ResourceOption) (*Task, error)
public Task(string name, TaskArgs args, CustomResourceOptions? opts = null)
type: google-native:dataplex/v1:Task
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.
Parameters
- name string
- The unique name of the resource.
- args TaskArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- resource_name str
- The unique name of the resource.
- args TaskArgs
- The arguments to resource properties.
- opts ResourceOptions
- Bag of options to control resource's behavior.
- ctx Context
- Context object for the current deployment.
- name string
- The unique name of the resource.
- args TaskArgs
- The arguments to resource properties.
- opts ResourceOption
- Bag of options to control resource's behavior.
- name string
- The unique name of the resource.
- args TaskArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- name String
- The unique name of the resource.
- args TaskArgs
- The arguments to resource properties.
- options CustomResourceOptions
- Bag of options to control resource's behavior.
Constructor example
The following reference example uses placeholder values for all input properties.
var exampletaskResourceResourceFromDataplexv1 = new GoogleNative.Dataplex.V1.Task("exampletaskResourceResourceFromDataplexv1", new()
{
    ExecutionSpec = new GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1TaskExecutionSpecArgs
    {
        ServiceAccount = "string",
        Args = 
        {
            { "string", "string" },
        },
        KmsKey = "string",
        MaxJobExecutionLifetime = "string",
        Project = "string",
    },
    LakeId = "string",
    TaskId = "string",
    TriggerSpec = new GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1TaskTriggerSpecArgs
    {
        Type = GoogleNative.Dataplex.V1.GoogleCloudDataplexV1TaskTriggerSpecType.TypeUnspecified,
        Disabled = false,
        MaxRetries = 0,
        Schedule = "string",
        StartTime = "string",
    },
    Description = "string",
    DisplayName = "string",
    Labels = 
    {
        { "string", "string" },
    },
    Location = "string",
    Notebook = new GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1TaskNotebookTaskConfigArgs
    {
        Notebook = "string",
        ArchiveUris = new[]
        {
            "string",
        },
        FileUris = new[]
        {
            "string",
        },
        InfrastructureSpec = new GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1TaskInfrastructureSpecArgs
        {
            Batch = new GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1TaskInfrastructureSpecBatchComputeResourcesArgs
            {
                ExecutorsCount = 0,
                MaxExecutorsCount = 0,
            },
            ContainerImage = new GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1TaskInfrastructureSpecContainerImageRuntimeArgs
            {
                Image = "string",
                JavaJars = new[]
                {
                    "string",
                },
                Properties = 
                {
                    { "string", "string" },
                },
                PythonPackages = new[]
                {
                    "string",
                },
            },
            VpcNetwork = new GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1TaskInfrastructureSpecVpcNetworkArgs
            {
                Network = "string",
                NetworkTags = new[]
                {
                    "string",
                },
                SubNetwork = "string",
            },
        },
    },
    Project = "string",
    Spark = new GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1TaskSparkTaskConfigArgs
    {
        ArchiveUris = new[]
        {
            "string",
        },
        FileUris = new[]
        {
            "string",
        },
        InfrastructureSpec = new GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1TaskInfrastructureSpecArgs
        {
            Batch = new GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1TaskInfrastructureSpecBatchComputeResourcesArgs
            {
                ExecutorsCount = 0,
                MaxExecutorsCount = 0,
            },
            ContainerImage = new GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1TaskInfrastructureSpecContainerImageRuntimeArgs
            {
                Image = "string",
                JavaJars = new[]
                {
                    "string",
                },
                Properties = 
                {
                    { "string", "string" },
                },
                PythonPackages = new[]
                {
                    "string",
                },
            },
            VpcNetwork = new GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1TaskInfrastructureSpecVpcNetworkArgs
            {
                Network = "string",
                NetworkTags = new[]
                {
                    "string",
                },
                SubNetwork = "string",
            },
        },
        MainClass = "string",
        MainJarFileUri = "string",
        PythonScriptFile = "string",
        SqlScript = "string",
        SqlScriptFile = "string",
    },
});
example, err := dataplex.NewTask(ctx, "exampletaskResourceResourceFromDataplexv1", &dataplex.TaskArgs{
	ExecutionSpec: &dataplex.GoogleCloudDataplexV1TaskExecutionSpecArgs{
		ServiceAccount: pulumi.String("string"),
		Args: pulumi.StringMap{
			"string": pulumi.String("string"),
		},
		KmsKey:                  pulumi.String("string"),
		MaxJobExecutionLifetime: pulumi.String("string"),
		Project:                 pulumi.String("string"),
	},
	LakeId: pulumi.String("string"),
	TaskId: pulumi.String("string"),
	TriggerSpec: &dataplex.GoogleCloudDataplexV1TaskTriggerSpecArgs{
		Type:       dataplex.GoogleCloudDataplexV1TaskTriggerSpecTypeTypeUnspecified,
		Disabled:   pulumi.Bool(false),
		MaxRetries: pulumi.Int(0),
		Schedule:   pulumi.String("string"),
		StartTime:  pulumi.String("string"),
	},
	Description: pulumi.String("string"),
	DisplayName: pulumi.String("string"),
	Labels: pulumi.StringMap{
		"string": pulumi.String("string"),
	},
	Location: pulumi.String("string"),
	Notebook: &dataplex.GoogleCloudDataplexV1TaskNotebookTaskConfigArgs{
		Notebook: pulumi.String("string"),
		ArchiveUris: pulumi.StringArray{
			pulumi.String("string"),
		},
		FileUris: pulumi.StringArray{
			pulumi.String("string"),
		},
		InfrastructureSpec: &dataplex.GoogleCloudDataplexV1TaskInfrastructureSpecArgs{
			Batch: &dataplex.GoogleCloudDataplexV1TaskInfrastructureSpecBatchComputeResourcesArgs{
				ExecutorsCount:    pulumi.Int(0),
				MaxExecutorsCount: pulumi.Int(0),
			},
			ContainerImage: &dataplex.GoogleCloudDataplexV1TaskInfrastructureSpecContainerImageRuntimeArgs{
				Image: pulumi.String("string"),
				JavaJars: pulumi.StringArray{
					pulumi.String("string"),
				},
				Properties: pulumi.StringMap{
					"string": pulumi.String("string"),
				},
				PythonPackages: pulumi.StringArray{
					pulumi.String("string"),
				},
			},
			VpcNetwork: &dataplex.GoogleCloudDataplexV1TaskInfrastructureSpecVpcNetworkArgs{
				Network: pulumi.String("string"),
				NetworkTags: pulumi.StringArray{
					pulumi.String("string"),
				},
				SubNetwork: pulumi.String("string"),
			},
		},
	},
	Project: pulumi.String("string"),
	Spark: &dataplex.GoogleCloudDataplexV1TaskSparkTaskConfigArgs{
		ArchiveUris: pulumi.StringArray{
			pulumi.String("string"),
		},
		FileUris: pulumi.StringArray{
			pulumi.String("string"),
		},
		InfrastructureSpec: &dataplex.GoogleCloudDataplexV1TaskInfrastructureSpecArgs{
			Batch: &dataplex.GoogleCloudDataplexV1TaskInfrastructureSpecBatchComputeResourcesArgs{
				ExecutorsCount:    pulumi.Int(0),
				MaxExecutorsCount: pulumi.Int(0),
			},
			ContainerImage: &dataplex.GoogleCloudDataplexV1TaskInfrastructureSpecContainerImageRuntimeArgs{
				Image: pulumi.String("string"),
				JavaJars: pulumi.StringArray{
					pulumi.String("string"),
				},
				Properties: pulumi.StringMap{
					"string": pulumi.String("string"),
				},
				PythonPackages: pulumi.StringArray{
					pulumi.String("string"),
				},
			},
			VpcNetwork: &dataplex.GoogleCloudDataplexV1TaskInfrastructureSpecVpcNetworkArgs{
				Network: pulumi.String("string"),
				NetworkTags: pulumi.StringArray{
					pulumi.String("string"),
				},
				SubNetwork: pulumi.String("string"),
			},
		},
		MainClass:        pulumi.String("string"),
		MainJarFileUri:   pulumi.String("string"),
		PythonScriptFile: pulumi.String("string"),
		SqlScript:        pulumi.String("string"),
		SqlScriptFile:    pulumi.String("string"),
	},
})
var exampletaskResourceResourceFromDataplexv1 = new Task("exampletaskResourceResourceFromDataplexv1", TaskArgs.builder()
    .executionSpec(GoogleCloudDataplexV1TaskExecutionSpecArgs.builder()
        .serviceAccount("string")
        .args(Map.of("string", "string"))
        .kmsKey("string")
        .maxJobExecutionLifetime("string")
        .project("string")
        .build())
    .lakeId("string")
    .taskId("string")
    .triggerSpec(GoogleCloudDataplexV1TaskTriggerSpecArgs.builder()
        .type("TYPE_UNSPECIFIED")
        .disabled(false)
        .maxRetries(0)
        .schedule("string")
        .startTime("string")
        .build())
    .description("string")
    .displayName("string")
    .labels(Map.of("string", "string"))
    .location("string")
    .notebook(GoogleCloudDataplexV1TaskNotebookTaskConfigArgs.builder()
        .notebook("string")
        .archiveUris("string")
        .fileUris("string")
        .infrastructureSpec(GoogleCloudDataplexV1TaskInfrastructureSpecArgs.builder()
            .batch(GoogleCloudDataplexV1TaskInfrastructureSpecBatchComputeResourcesArgs.builder()
                .executorsCount(0)
                .maxExecutorsCount(0)
                .build())
            .containerImage(GoogleCloudDataplexV1TaskInfrastructureSpecContainerImageRuntimeArgs.builder()
                .image("string")
                .javaJars("string")
                .properties(Map.of("string", "string"))
                .pythonPackages("string")
                .build())
            .vpcNetwork(GoogleCloudDataplexV1TaskInfrastructureSpecVpcNetworkArgs.builder()
                .network("string")
                .networkTags("string")
                .subNetwork("string")
                .build())
            .build())
        .build())
    .project("string")
    .spark(GoogleCloudDataplexV1TaskSparkTaskConfigArgs.builder()
        .archiveUris("string")
        .fileUris("string")
        .infrastructureSpec(GoogleCloudDataplexV1TaskInfrastructureSpecArgs.builder()
            .batch(GoogleCloudDataplexV1TaskInfrastructureSpecBatchComputeResourcesArgs.builder()
                .executorsCount(0)
                .maxExecutorsCount(0)
                .build())
            .containerImage(GoogleCloudDataplexV1TaskInfrastructureSpecContainerImageRuntimeArgs.builder()
                .image("string")
                .javaJars("string")
                .properties(Map.of("string", "string"))
                .pythonPackages("string")
                .build())
            .vpcNetwork(GoogleCloudDataplexV1TaskInfrastructureSpecVpcNetworkArgs.builder()
                .network("string")
                .networkTags("string")
                .subNetwork("string")
                .build())
            .build())
        .mainClass("string")
        .mainJarFileUri("string")
        .pythonScriptFile("string")
        .sqlScript("string")
        .sqlScriptFile("string")
        .build())
    .build());
exampletask_resource_resource_from_dataplexv1 = google_native.dataplex.v1.Task("exampletaskResourceResourceFromDataplexv1",
    execution_spec={
        "service_account": "string",
        "args": {
            "string": "string",
        },
        "kms_key": "string",
        "max_job_execution_lifetime": "string",
        "project": "string",
    },
    lake_id="string",
    task_id="string",
    trigger_spec={
        "type": google_native.dataplex.v1.GoogleCloudDataplexV1TaskTriggerSpecType.TYPE_UNSPECIFIED,
        "disabled": False,
        "max_retries": 0,
        "schedule": "string",
        "start_time": "string",
    },
    description="string",
    display_name="string",
    labels={
        "string": "string",
    },
    location="string",
    notebook={
        "notebook": "string",
        "archive_uris": ["string"],
        "file_uris": ["string"],
        "infrastructure_spec": {
            "batch": {
                "executors_count": 0,
                "max_executors_count": 0,
            },
            "container_image": {
                "image": "string",
                "java_jars": ["string"],
                "properties": {
                    "string": "string",
                },
                "python_packages": ["string"],
            },
            "vpc_network": {
                "network": "string",
                "network_tags": ["string"],
                "sub_network": "string",
            },
        },
    },
    project="string",
    spark={
        "archive_uris": ["string"],
        "file_uris": ["string"],
        "infrastructure_spec": {
            "batch": {
                "executors_count": 0,
                "max_executors_count": 0,
            },
            "container_image": {
                "image": "string",
                "java_jars": ["string"],
                "properties": {
                    "string": "string",
                },
                "python_packages": ["string"],
            },
            "vpc_network": {
                "network": "string",
                "network_tags": ["string"],
                "sub_network": "string",
            },
        },
        "main_class": "string",
        "main_jar_file_uri": "string",
        "python_script_file": "string",
        "sql_script": "string",
        "sql_script_file": "string",
    })
const exampletaskResourceResourceFromDataplexv1 = new google_native.dataplex.v1.Task("exampletaskResourceResourceFromDataplexv1", {
    executionSpec: {
        serviceAccount: "string",
        args: {
            string: "string",
        },
        kmsKey: "string",
        maxJobExecutionLifetime: "string",
        project: "string",
    },
    lakeId: "string",
    taskId: "string",
    triggerSpec: {
        type: google_native.dataplex.v1.GoogleCloudDataplexV1TaskTriggerSpecType.TypeUnspecified,
        disabled: false,
        maxRetries: 0,
        schedule: "string",
        startTime: "string",
    },
    description: "string",
    displayName: "string",
    labels: {
        string: "string",
    },
    location: "string",
    notebook: {
        notebook: "string",
        archiveUris: ["string"],
        fileUris: ["string"],
        infrastructureSpec: {
            batch: {
                executorsCount: 0,
                maxExecutorsCount: 0,
            },
            containerImage: {
                image: "string",
                javaJars: ["string"],
                properties: {
                    string: "string",
                },
                pythonPackages: ["string"],
            },
            vpcNetwork: {
                network: "string",
                networkTags: ["string"],
                subNetwork: "string",
            },
        },
    },
    project: "string",
    spark: {
        archiveUris: ["string"],
        fileUris: ["string"],
        infrastructureSpec: {
            batch: {
                executorsCount: 0,
                maxExecutorsCount: 0,
            },
            containerImage: {
                image: "string",
                javaJars: ["string"],
                properties: {
                    string: "string",
                },
                pythonPackages: ["string"],
            },
            vpcNetwork: {
                network: "string",
                networkTags: ["string"],
                subNetwork: "string",
            },
        },
        mainClass: "string",
        mainJarFileUri: "string",
        pythonScriptFile: "string",
        sqlScript: "string",
        sqlScriptFile: "string",
    },
});
type: google-native:dataplex/v1:Task
properties:
    description: string
    displayName: string
    executionSpec:
        args:
            string: string
        kmsKey: string
        maxJobExecutionLifetime: string
        project: string
        serviceAccount: string
    labels:
        string: string
    lakeId: string
    location: string
    notebook:
        archiveUris:
            - string
        fileUris:
            - string
        infrastructureSpec:
            batch:
                executorsCount: 0
                maxExecutorsCount: 0
            containerImage:
                image: string
                javaJars:
                    - string
                properties:
                    string: string
                pythonPackages:
                    - string
            vpcNetwork:
                network: string
                networkTags:
                    - string
                subNetwork: string
        notebook: string
    project: string
    spark:
        archiveUris:
            - string
        fileUris:
            - string
        infrastructureSpec:
            batch:
                executorsCount: 0
                maxExecutorsCount: 0
            containerImage:
                image: string
                javaJars:
                    - string
                properties:
                    string: string
                pythonPackages:
                    - string
            vpcNetwork:
                network: string
                networkTags:
                    - string
                subNetwork: string
        mainClass: string
        mainJarFileUri: string
        pythonScriptFile: string
        sqlScript: string
        sqlScriptFile: string
    taskId: string
    triggerSpec:
        disabled: false
        maxRetries: 0
        schedule: string
        startTime: string
        type: TYPE_UNSPECIFIED
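As a concrete illustration of the placeholder examples above, a minimal recurring Spark SQL task in Pulumi YAML might look like the following sketch; the project, lake, service account, and bucket names are all hypothetical, and `RECURRING` is the trigger type for cron-scheduled tasks (as opposed to `ON_DEMAND`):

```yaml
resources:
  nightlySqlTask:
    type: google-native:dataplex/v1:Task
    properties:
      project: my-project          # hypothetical project ID
      location: us-central1
      lakeId: my-lake              # hypothetical lake
      taskId: nightly-sql-task
      executionSpec:
        serviceAccount: dataplex-runner@my-project.iam.gserviceaccount.com
      triggerSpec:
        type: RECURRING
        schedule: "0 2 * * *"      # cron format: run daily at 02:00 UTC
      spark:
        sqlScriptFile: gs://my-bucket/scripts/nightly.sql
```

A task carries either a `spark` or a `notebook` config, and within `spark` a single entry point (`mainClass`, `mainJarFileUri`, `pythonScriptFile`, `sqlScript`, or `sqlScriptFile`) identifies the workload.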
Task Resource Properties
To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.
Inputs
In Python, inputs that are objects can be passed either as argument classes or as dictionary literals.
The Task resource accepts the following input properties:
- ExecutionSpec Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1TaskExecutionSpec
- Spec related to how a task is executed.
- LakeId string
- TaskId string
- Required. Task identifier.
- TriggerSpec Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1TaskTriggerSpec
- Spec related to how often and when a task should be triggered.
- Description string
- Optional. Description of the task.
- DisplayName string
- Optional. User-friendly display name.
- Labels Dictionary<string, string>
- Optional. User-defined labels for the task.
- Location string
- Notebook Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1TaskNotebookTaskConfig
- Config related to running scheduled Notebooks.
- Project string
- Spark Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1TaskSparkTaskConfig
- Config related to running custom Spark tasks.
- ExecutionSpec GoogleCloudDataplexV1TaskExecutionSpecArgs
- Spec related to how a task is executed.
- LakeId string
- TaskId string
- Required. Task identifier.
- TriggerSpec GoogleCloudDataplexV1TaskTriggerSpecArgs
- Spec related to how often and when a task should be triggered.
- Description string
- Optional. Description of the task.
- DisplayName string
- Optional. User-friendly display name.
- Labels map[string]string
- Optional. User-defined labels for the task.
- Location string
- Notebook GoogleCloudDataplexV1TaskNotebookTaskConfigArgs
- Config related to running scheduled Notebooks.
- Project string
- Spark GoogleCloudDataplexV1TaskSparkTaskConfigArgs
- Config related to running custom Spark tasks.
- executionSpec GoogleCloudDataplexV1TaskExecutionSpec
- Spec related to how a task is executed.
- lakeId String
- taskId String
- Required. Task identifier.
- triggerSpec GoogleCloudDataplexV1TaskTriggerSpec
- Spec related to how often and when a task should be triggered.
- description String
- Optional. Description of the task.
- displayName String
- Optional. User-friendly display name.
- labels Map<String,String>
- Optional. User-defined labels for the task.
- location String
- notebook GoogleCloudDataplexV1TaskNotebookTaskConfig
- Config related to running scheduled Notebooks.
- project String
- spark GoogleCloudDataplexV1TaskSparkTaskConfig
- Config related to running custom Spark tasks.
- executionSpec GoogleCloudDataplexV1TaskExecutionSpec
- Spec related to how a task is executed.
- lakeId string
- taskId string
- Required. Task identifier.
- triggerSpec GoogleCloudDataplexV1TaskTriggerSpec
- Spec related to how often and when a task should be triggered.
- description string
- Optional. Description of the task.
- displayName string
- Optional. User-friendly display name.
- labels {[key: string]: string}
- Optional. User-defined labels for the task.
- location string
- notebook GoogleCloudDataplexV1TaskNotebookTaskConfig
- Config related to running scheduled Notebooks.
- project string
- spark GoogleCloudDataplexV1TaskSparkTaskConfig
- Config related to running custom Spark tasks.
- execution_spec GoogleCloudDataplexV1TaskExecutionSpecArgs
- Spec related to how a task is executed.
- lake_id str
- task_id str
- Required. Task identifier.
- trigger_spec GoogleCloudDataplexV1TaskTriggerSpecArgs
- Spec related to how often and when a task should be triggered.
- description str
- Optional. Description of the task.
- display_name str
- Optional. User-friendly display name.
- labels Mapping[str, str]
- Optional. User-defined labels for the task.
- location str
- notebook GoogleCloudDataplexV1TaskNotebookTaskConfigArgs
- Config related to running scheduled Notebooks.
- project str
- spark GoogleCloudDataplexV1TaskSparkTaskConfigArgs
- Config related to running custom Spark tasks.
- executionSpec Property Map
- Spec related to how a task is executed.
- lakeId String
- taskId String
- Required. Task identifier.
- triggerSpec Property Map
- Spec related to how often and when a task should be triggered.
- description String
- Optional. Description of the task.
- displayName String
- Optional. User-friendly display name.
- labels Map<String>
- Optional. User-defined labels for the task.
- location String
- notebook Property Map
- Config related to running scheduled Notebooks.
- project String
- spark Property Map
- Config related to running custom Spark tasks.
Outputs
All input properties are implicitly available as output properties. Additionally, the Task resource produces the following output properties:
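In a Pulumi YAML program, these output properties can be referenced with interpolation and exported at the top level. A short sketch, assuming a Task resource named `myTask` has been declared in the same program (the resource name is hypothetical):

```yaml
outputs:
  taskName: ${myTask.name}     # relative resource name assigned by the service
  taskState: ${myTask.state}   # current state of the task
```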
- CreateTime string
- The time when the task was created.
- ExecutionStatus Pulumi.GoogleNative.Dataplex.V1.Outputs.GoogleCloudDataplexV1TaskExecutionStatusResponse
- Status of the latest task executions.
- Id string
- The provider-assigned unique ID for this managed resource.
- Name string
- The relative resource name of the task, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/tasks/{task_id}.
- State string
- Current state of the task.
- Uid string
- System generated globally unique ID for the task. This ID will be different if the task is deleted and re-created with the same name.
- UpdateTime string
- The time when the task was last updated.
- CreateTime string
- The time when the task was created.
- ExecutionStatus GoogleCloudDataplexV1TaskExecutionStatusResponse
- Status of the latest task executions.
- Id string
- The provider-assigned unique ID for this managed resource.
- Name string
- The relative resource name of the task, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/tasks/{task_id}.
- State string
- Current state of the task.
- Uid string
- System generated globally unique ID for the task. This ID will be different if the task is deleted and re-created with the same name.
- UpdateTime string
- The time when the task was last updated.
- createTime String
- The time when the task was created.
- executionStatus GoogleCloudDataplexV1TaskExecutionStatusResponse
- Status of the latest task executions.
- id String
- The provider-assigned unique ID for this managed resource.
- name String
- The relative resource name of the task, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/tasks/{task_id}.
- state String
- Current state of the task.
- uid String
- System generated globally unique ID for the task. This ID will be different if the task is deleted and re-created with the same name.
- updateTime String
- The time when the task was last updated.
- createTime string
- The time when the task was created.
- executionStatus GoogleCloudDataplexV1TaskExecutionStatusResponse
- Status of the latest task executions.
- id string
- The provider-assigned unique ID for this managed resource.
- name string
- The relative resource name of the task, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/tasks/{task_id}.
- state string
- Current state of the task.
- uid string
- System generated globally unique ID for the task. This ID will be different if the task is deleted and re-created with the same name.
- updateTime string
- The time when the task was last updated.
- create_time str
- The time when the task was created.
- execution_status GoogleCloudDataplexV1TaskExecutionStatusResponse
- Status of the latest task executions.
- id str
- The provider-assigned unique ID for this managed resource.
- name str
- The relative resource name of the task, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/tasks/{task_id}.
- state str
- Current state of the task.
- uid str
- System generated globally unique ID for the task. This ID will be different if the task is deleted and re-created with the same name.
- update_time str
- The time when the task was last updated.
- createTime String
- The time when the task was created.
- executionStatus Property Map
- Status of the latest task executions.
- id String
- The provider-assigned unique ID for this managed resource.
- name String
- The relative resource name of the task, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/tasks/{task_id}.
- state String
- Current state of the task.
- uid String
- System generated globally unique ID for the task. This ID will be different if the task is deleted and re-created with the same name.
- updateTime String
- The time when the task was last updated.
Supporting Types
GoogleCloudDataplexV1JobResponse, GoogleCloudDataplexV1JobResponseArgs
- EndTime string
- The time when the job ended.
- ExecutionSpec Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1TaskExecutionSpecResponse
- Spec related to how a task is executed.
- Labels Dictionary<string, string>
- User-defined labels for the task.
- Message string
- Additional information about the current state.
- Name string
- The relative resource name of the job, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/tasks/{task_id}/jobs/{job_id}.
- RetryCount int
- The number of times the job has been retried (excluding the initial attempt).
- Service string
- The underlying service running a job.
- ServiceJob string
- The full resource name for the job run under a particular service.
- StartTime string
- The time when the job was started.
- State string
- Execution state for the job.
- Trigger string
- Job execution trigger.
- Uid string
- System generated globally unique ID for the job.
- EndTime string
- The time when the job ended.
- ExecutionSpec GoogleCloudDataplexV1TaskExecutionSpecResponse
- Spec related to how a task is executed.
- Labels map[string]string
- User-defined labels for the task.
- Message string
- Additional information about the current state.
- Name string
- The relative resource name of the job, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/tasks/{task_id}/jobs/{job_id}.
- RetryCount int
- The number of times the job has been retried (excluding the initial attempt).
- Service string
- The underlying service running a job.
- ServiceJob string
- The full resource name for the job run under a particular service.
- StartTime string
- The time when the job was started.
- State string
- Execution state for the job.
- Trigger string
- Job execution trigger.
- Uid string
- System generated globally unique ID for the job.
- endTime String
- The time when the job ended.
- executionSpec GoogleCloudDataplexV1TaskExecutionSpecResponse
- Spec related to how a task is executed.
- labels Map<String,String>
- User-defined labels for the task.
- message String
- Additional information about the current state.
- name String
- The relative resource name of the job, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/tasks/{task_id}/jobs/{job_id}.
- retryCount Integer
- The number of times the job has been retried (excluding the initial attempt).
- service String
- The underlying service running a job.
- serviceJob String
- The full resource name for the job run under a particular service.
- startTime String
- The time when the job was started.
- state String
- Execution state for the job.
- trigger String
- Job execution trigger.
- uid String
- System generated globally unique ID for the job.
- endTime string
- The time when the job ended.
- executionSpec GoogleCloudDataplexV1TaskExecutionSpecResponse
- Spec related to how a task is executed.
- labels {[key: string]: string}
- User-defined labels for the task.
- message string
- Additional information about the current state.
- name string
- The relative resource name of the job, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/tasks/{task_id}/jobs/{job_id}.
- retryCount number
- The number of times the job has been retried (excluding the initial attempt).
- service string
- The underlying service running a job.
- serviceJob string
- The full resource name for the job run under a particular service.
- startTime string
- The time when the job was started.
- state string
- Execution state for the job.
- trigger string
- Job execution trigger.
- uid string
- System generated globally unique ID for the job.
- end_time str
- The time when the job ended.
- execution_spec GoogleCloudDataplexV1TaskExecutionSpecResponse
- Spec related to how a task is executed.
- labels Mapping[str, str]
- User-defined labels for the task.
- message str
- Additional information about the current state.
- name str
- The relative resource name of the job, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/tasks/{task_id}/jobs/{job_id}.
- retry_count int
- The number of times the job has been retried (excluding the initial attempt).
- service str
- The underlying service running a job.
- service_job str
- The full resource name for the job run under a particular service.
- start_time str
- The time when the job was started.
- state str
- Execution state for the job.
- trigger str
- Job execution trigger.
- uid str
- System generated globally unique ID for the job.
- endTime String
- The time when the job ended.
- executionSpec Property Map
- Spec related to how a task is executed.
- labels Map<String>
- User-defined labels for the task.
- message String
- Additional information about the current state.
- name String
- The relative resource name of the job, of the form: projects/{project_number}/locations/{location_id}/lakes/{lake_id}/tasks/{task_id}/jobs/{job_id}.
- retryCount Number
- The number of times the job has been retried (excluding the initial attempt).
- service String
- The underlying service running a job.
- serviceJob String
- The full resource name for the job run under a particular service.
- startTime String
- The time when the job was started.
- state String
- Execution state for the job.
- trigger String
- Job execution trigger.
- uid String
- System generated globally unique ID for the job.
GoogleCloudDataplexV1TaskExecutionSpec, GoogleCloudDataplexV1TaskExecutionSpecArgs            
- ServiceAccount string
- Service account to use to execute a task. If not provided, the default Compute service account for the project is used.
- Args Dictionary<string, string>
- Optional. The arguments to pass to the task. The args can use placeholders of the format ${placeholder} as part of a key/value string. These will be interpolated before the args are passed to the driver. Currently supported placeholders: ${task_id} and ${job_time}. To pass positional args, set the key as TASK_ARGS; the value should be a comma-separated string of all the positional arguments. To use a delimiter other than a comma, refer to https://cloud.google.com/sdk/gcloud/reference/topic/escaping. If other keys are present in the args, TASK_ARGS will be passed as the last argument.
- KmsKey string
- Optional. The Cloud KMS key to use for encryption, of the form: projects/{project_number}/locations/{location_id}/keyRings/{key-ring-name}/cryptoKeys/{key-name}.
- MaxJobExecutionLifetime string
- Optional. The maximum duration after which the job execution expires.
- Project string
- Optional. The project in which jobs are run. By default, the project containing the Lake is used. If a project is provided, the ExecutionSpec.service_account must belong to this project.
- ServiceAccount string
- Service account to use to execute a task. If not provided, the default Compute service account for the project is used.
- Args map[string]string
- Optional. The arguments to pass to the task. The args can use placeholders of the format ${placeholder} as part of a key/value string. These will be interpolated before the args are passed to the driver. Currently supported placeholders: ${task_id} and ${job_time}. To pass positional args, set the key as TASK_ARGS; the value should be a comma-separated string of all the positional arguments. To use a delimiter other than a comma, refer to https://cloud.google.com/sdk/gcloud/reference/topic/escaping. If other keys are present in the args, TASK_ARGS will be passed as the last argument.
- KmsKey string
- Optional. The Cloud KMS key to use for encryption, of the form: projects/{project_number}/locations/{location_id}/keyRings/{key-ring-name}/cryptoKeys/{key-name}.
- MaxJobExecutionLifetime string
- Optional. The maximum duration after which the job execution expires.
- Project string
- Optional. The project in which jobs are run. By default, the project containing the Lake is used. If a project is provided, the ExecutionSpec.service_account must belong to this project.
- serviceAccount String
- Service account to use to execute a task. If not provided, the default Compute service account for the project is used.
- args Map<String,String>
- Optional. The arguments to pass to the task. The args can use placeholders of the format ${placeholder} as part of a key/value string. These will be interpolated before the args are passed to the driver. Currently supported placeholders: ${task_id} and ${job_time}. To pass positional args, set the key as TASK_ARGS; the value should be a comma-separated string of all the positional arguments. To use a delimiter other than a comma, refer to https://cloud.google.com/sdk/gcloud/reference/topic/escaping. If other keys are present in the args, TASK_ARGS will be passed as the last argument.
- kmsKey String
- Optional. The Cloud KMS key to use for encryption, of the form: projects/{project_number}/locations/{location_id}/keyRings/{key-ring-name}/cryptoKeys/{key-name}.
- maxJobExecutionLifetime String
- Optional. The maximum duration after which the job execution expires.
- project String
- Optional. The project in which jobs are run. By default, the project containing the Lake is used. If a project is provided, the ExecutionSpec.service_account must belong to this project.
- serviceAccount string
- Service account to use to execute a task. If not provided, the default Compute service account for the project is used.
- args {[key: string]: string}
- Optional. The arguments to pass to the task. The args can use placeholders of the format ${placeholder} as part of a key/value string. These will be interpolated before the args are passed to the driver. Currently supported placeholders: ${task_id} and ${job_time}. To pass positional args, set the key as TASK_ARGS; the value should be a comma-separated string of all the positional arguments. To use a delimiter other than a comma, refer to https://cloud.google.com/sdk/gcloud/reference/topic/escaping. If other keys are present in the args, TASK_ARGS will be passed as the last argument.
- kmsKey string
- Optional. The Cloud KMS key to use for encryption, of the form: projects/{project_number}/locations/{location_id}/keyRings/{key-ring-name}/cryptoKeys/{key-name}.
- maxJobExecutionLifetime string
- Optional. The maximum duration after which the job execution expires.
- project string
- Optional. The project in which jobs are run. By default, the project containing the Lake is used. If a project is provided, the ExecutionSpec.service_account must belong to this project.
- service_account str
- Service account to use to execute a task. If not provided, the default Compute service account for the project is used.
- args Mapping[str, str]
- Optional. The arguments to pass to the task. The args can use placeholders of the format ${placeholder} as part of a key/value string. These will be interpolated before the args are passed to the driver. Currently supported placeholders: ${task_id} and ${job_time}. To pass positional args, set the key as TASK_ARGS; the value should be a comma-separated string of all the positional arguments. To use a delimiter other than a comma, refer to https://cloud.google.com/sdk/gcloud/reference/topic/escaping. If other keys are present in the args, TASK_ARGS will be passed as the last argument.
- kms_key str
- Optional. The Cloud KMS key to use for encryption, of the form: projects/{project_number}/locations/{location_id}/keyRings/{key-ring-name}/cryptoKeys/{key-name}.
- max_job_execution_lifetime str
- Optional. The maximum duration after which the job execution expires.
- project str
- Optional. The project in which jobs are run. By default, the project containing the Lake is used. If a project is provided, the ExecutionSpec.service_account must belong to this project.
- serviceAccount String
- Service account to use to execute a task. If not provided, the default Compute service account for the project is used.
- args Map<String>
- Optional. The arguments to pass to the task. The args can use placeholders of the format ${placeholder} as part of a key/value string. These will be interpolated before the args are passed to the driver. Currently supported placeholders: ${task_id} and ${job_time}. To pass positional args, set the key as TASK_ARGS; the value should be a comma-separated string of all the positional arguments. To use a delimiter other than a comma, refer to https://cloud.google.com/sdk/gcloud/reference/topic/escaping. If other keys are present in the args, TASK_ARGS will be passed as the last argument.
- kmsKey String
- Optional. The Cloud KMS key to use for encryption, of the form: projects/{project_number}/locations/{location_id}/keyRings/{key-ring-name}/cryptoKeys/{key-name}.
- maxJobExecutionLifetime String
- Optional. The maximum duration after which the job execution expires.
- project String
- Optional. The project in which jobs are run. By default, the project containing the Lake is used. If a project is provided, the ExecutionSpec.service_account must belong to this project.
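The args interpolation rules above can be sketched in plain Python. This is an illustrative model of the documented behavior, not the service implementation (the real interpolation happens inside Dataplex), and the key names and placeholder values are hypothetical:

```python
# Illustrative model of the args handling documented above: ${task_id} and
# ${job_time} placeholders are interpolated, non-TASK_ARGS keys become
# key/value flags, and TASK_ARGS positional arguments are appended last.
def render_args(args: dict, task_id: str, job_time: str) -> list:
    def interpolate(value: str) -> str:
        return value.replace("${task_id}", task_id).replace("${job_time}", job_time)

    rendered = []
    for key, value in args.items():
        if key != "TASK_ARGS":
            rendered.append(f"--{key}={interpolate(value)}")
    if "TASK_ARGS" in args:
        # TASK_ARGS holds a comma-separated string of positional arguments,
        # passed after all other keys per the documentation above.
        rendered.extend(interpolate(args["TASK_ARGS"]).split(","))
    return rendered

args = {"output_dir": "gs://my-bucket/${task_id}", "TASK_ARGS": "run,${job_time}"}
print(render_args(args, task_id="t1", job_time="2024-01-01"))
```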
GoogleCloudDataplexV1TaskExecutionSpecResponse, GoogleCloudDataplexV1TaskExecutionSpecResponseArgs              
- Args Dictionary<string, string>
- Optional. The arguments to pass to the task. The args can use placeholders of the format ${placeholder} as part of a key/value string. These will be interpolated before the args are passed to the driver. Currently supported placeholders: ${task_id} and ${job_time}. To pass positional args, set the key as TASK_ARGS; the value should be a comma-separated string of all the positional arguments. To use a delimiter other than a comma, refer to https://cloud.google.com/sdk/gcloud/reference/topic/escaping. If other keys are present in the args, TASK_ARGS will be passed as the last argument.
- KmsKey string
- Optional. The Cloud KMS key to use for encryption, of the form: projects/{project_number}/locations/{location_id}/keyRings/{key-ring-name}/cryptoKeys/{key-name}.
- MaxJobExecutionLifetime string
- Optional. The maximum duration after which the job execution expires.
- Project string
- Optional. The project in which jobs are run. By default, the project containing the Lake is used. If a project is provided, the ExecutionSpec.service_account must belong to this project.
- ServiceAccount string
- Service account to use to execute a task. If not provided, the default Compute service account for the project is used.
- Args map[string]string
- Optional. The arguments to pass to the task. The args can use placeholders of the format ${placeholder} as part of a key/value string. These will be interpolated before the args are passed to the driver. Currently supported placeholders: ${task_id} and ${job_time}. To pass positional args, set the key as TASK_ARGS; the value should be a comma-separated string of all the positional arguments. To use a delimiter other than a comma, refer to https://cloud.google.com/sdk/gcloud/reference/topic/escaping. If other keys are present in the args, TASK_ARGS will be passed as the last argument.
- KmsKey string
- Optional. The Cloud KMS key to use for encryption, of the form: projects/{project_number}/locations/{location_id}/keyRings/{key-ring-name}/cryptoKeys/{key-name}.
- MaxJobExecutionLifetime string
- Optional. The maximum duration after which the job execution expires.
- Project string
- Optional. The project in which jobs are run. By default, the project containing the Lake is used. If a project is provided, the ExecutionSpec.service_account must belong to this project.
- ServiceAccount string
- Service account to use to execute a task. If not provided, the default Compute service account for the project is used.
- args Map<String,String>
- Optional. The arguments to pass to the task. The args can use placeholders of the format ${placeholder} as part of a key/value string. These will be interpolated before the args are passed to the driver. Currently supported placeholders: ${task_id} and ${job_time}. To pass positional args, set the key as TASK_ARGS; the value should be a comma-separated string of all the positional arguments. To use a delimiter other than a comma, refer to https://cloud.google.com/sdk/gcloud/reference/topic/escaping. If other keys are present in the args, TASK_ARGS will be passed as the last argument.
- kmsKey String
- Optional. The Cloud KMS key to use for encryption, of the form: projects/{project_number}/locations/{location_id}/keyRings/{key-ring-name}/cryptoKeys/{key-name}.
- maxJobExecutionLifetime String
- Optional. The maximum duration after which the job execution expires.
- project String
- Optional. The project in which jobs are run. By default, the project containing the Lake is used. If a project is provided, the ExecutionSpec.service_account must belong to this project.
- serviceAccount String
- Service account to use to execute a task. If not provided, the default Compute service account for the project is used.
- args {[key: string]: string}
- Optional. The arguments to pass to the task. The args can use placeholders of the format ${placeholder} as part of a key/value string. These will be interpolated before the args are passed to the driver. Currently supported placeholders: ${task_id} and ${job_time}. To pass positional args, set the key as TASK_ARGS; the value should be a comma-separated string of all the positional arguments. To use a delimiter other than a comma, refer to https://cloud.google.com/sdk/gcloud/reference/topic/escaping. If other keys are present in the args, TASK_ARGS will be passed as the last argument.
- kmsKey string
- Optional. The Cloud KMS key to use for encryption, of the form: projects/{project_number}/locations/{location_id}/keyRings/{key-ring-name}/cryptoKeys/{key-name}.
- maxJobExecutionLifetime string
- Optional. The maximum duration after which the job execution expires.
- project string
- Optional. The project in which jobs are run. By default, the project containing the Lake is used. If a project is provided, the ExecutionSpec.service_account must belong to this project.
- serviceAccount string
- Service account to use to execute a task. If not provided, the default Compute service account for the project is used.
- args Mapping[str, str]
- Optional. The arguments to pass to the task. The args can use placeholders of the format ${placeholder} as part of a key/value string. These will be interpolated before the args are passed to the driver. Currently supported placeholders: ${task_id} and ${job_time}. To pass positional args, set the key as TASK_ARGS; the value should be a comma-separated string of all the positional arguments. To use a delimiter other than a comma, refer to https://cloud.google.com/sdk/gcloud/reference/topic/escaping. If other keys are present in the args, TASK_ARGS will be passed as the last argument.
- kms_key str
- Optional. The Cloud KMS key to use for encryption, of the form: projects/{project_number}/locations/{location_id}/keyRings/{key-ring-name}/cryptoKeys/{key-name}.
- max_job_execution_lifetime str
- Optional. The maximum duration after which the job execution expires.
- project str
- Optional. The project in which jobs are run. By default, the project containing the Lake is used. If a project is provided, the ExecutionSpec.service_account must belong to this project.
- service_account str
- Service account to use to execute a task. If not provided, the default Compute service account for the project is used.
- args Map<String>
- Optional. The arguments to pass to the task. The args can use placeholders of the format ${placeholder} as part of a key/value string. These will be interpolated before the args are passed to the driver. Currently supported placeholders: ${task_id} and ${job_time}. To pass positional args, set the key as TASK_ARGS; the value should be a comma-separated string of all the positional arguments. To use a delimiter other than a comma, refer to https://cloud.google.com/sdk/gcloud/reference/topic/escaping. If other keys are present in the args, TASK_ARGS will be passed as the last argument.
- kmsKey String
- Optional. The Cloud KMS key to use for encryption, of the form: projects/{project_number}/locations/{location_id}/keyRings/{key-ring-name}/cryptoKeys/{key-name}.
- maxJobExecutionLifetime String
- Optional. The maximum duration after which the job execution expires.
- project String
- Optional. The project in which jobs are run. By default, the project containing the Lake is used. If a project is provided, the ExecutionSpec.service_account must belong to this project.
- serviceAccount String
- Service account to use to execute a task. If not provided, the default Compute service account for the project is used.
GoogleCloudDataplexV1TaskExecutionStatusResponse, GoogleCloudDataplexV1TaskExecutionStatusResponseArgs              
- LatestJob Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1JobResponse
- The latest job execution.
- UpdateTime string
- Last update time of the status.
- LatestJob GoogleCloudDataplexV1JobResponse
- The latest job execution.
- UpdateTime string
- Last update time of the status.
- latestJob GoogleCloudDataplexV1JobResponse
- The latest job execution.
- updateTime String
- Last update time of the status.
- latestJob GoogleCloudDataplexV1JobResponse
- The latest job execution.
- updateTime string
- Last update time of the status.
- latest_job GoogleCloudDataplexV1JobResponse
- The latest job execution.
- update_time str
- Last update time of the status.
- latestJob Property Map
- The latest job execution.
- updateTime String
- Last update time of the status.
GoogleCloudDataplexV1TaskInfrastructureSpec, GoogleCloudDataplexV1TaskInfrastructureSpecArgs            
- Batch Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1TaskInfrastructureSpecBatchComputeResources
- Compute resources needed for a Task when using Dataproc Serverless.
- ContainerImage Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1TaskInfrastructureSpecContainerImageRuntime
- Container Image Runtime Configuration.
- VpcNetwork Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1TaskInfrastructureSpecVpcNetwork
- Vpc network.
- Batch GoogleCloudDataplexV1TaskInfrastructureSpecBatchComputeResources
- Compute resources needed for a Task when using Dataproc Serverless.
- ContainerImage GoogleCloudDataplexV1TaskInfrastructureSpecContainerImageRuntime
- Container Image Runtime Configuration.
- VpcNetwork GoogleCloudDataplexV1TaskInfrastructureSpecVpcNetwork
- Vpc network.
- batch GoogleCloudDataplexV1TaskInfrastructureSpecBatchComputeResources
- Compute resources needed for a Task when using Dataproc Serverless.
- containerImage GoogleCloudDataplexV1TaskInfrastructureSpecContainerImageRuntime
- Container Image Runtime Configuration.
- vpcNetwork GoogleCloudDataplexV1TaskInfrastructureSpecVpcNetwork
- Vpc network.
- batch GoogleCloudDataplexV1TaskInfrastructureSpecBatchComputeResources
- Compute resources needed for a Task when using Dataproc Serverless.
- containerImage GoogleCloudDataplexV1TaskInfrastructureSpecContainerImageRuntime
- Container Image Runtime Configuration.
- vpcNetwork GoogleCloudDataplexV1TaskInfrastructureSpecVpcNetwork
- Vpc network.
- batch GoogleCloudDataplexV1TaskInfrastructureSpecBatchComputeResources
- Compute resources needed for a Task when using Dataproc Serverless.
- container_image GoogleCloudDataplexV1TaskInfrastructureSpecContainerImageRuntime
- Container Image Runtime Configuration.
- vpc_network GoogleCloudDataplexV1TaskInfrastructureSpecVpcNetwork
- Vpc network.
- batch Property Map
- Compute resources needed for a Task when using Dataproc Serverless.
- containerImage Property Map
- Container Image Runtime Configuration.
- vpcNetwork Property Map
- Vpc network.
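Assembled together, the infrastructure spec above is a nested object with optional batch, container image, and VPC network blocks. The sketch below uses the Python (snake_case) property names from this page; every value is a hypothetical placeholder, not a recommendation:

```python
# A sketch of a nested infrastructure spec, using the snake_case field
# names documented above. All values are hypothetical placeholders.
infrastructure_spec = {
    "batch": {
        "executors_count": 2,       # between 2 and 100, default 2
        "max_executors_count": 10,  # between 2 and 1000, default 1000
    },
    "container_image": {
        "image": "gcr.io/my-project/my-image:latest",   # hypothetical image
        "java_jars": ["gs://my-bucket/libs/helper.jar"],
    },
    "vpc_network": {
        "network": "default",       # hypothetical VPC network name
    },
}

# max_executors_count > executors_count here, so auto-scaling would be enabled.
assert (infrastructure_spec["batch"]["max_executors_count"]
        > infrastructure_spec["batch"]["executors_count"])
```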
GoogleCloudDataplexV1TaskInfrastructureSpecBatchComputeResources, GoogleCloudDataplexV1TaskInfrastructureSpecBatchComputeResourcesArgs                  
- ExecutorsCount int
- Optional. Total number of job executors. Executor Count should be between 2 and 100. Default=2
- MaxExecutorsCount int
- Optional. Max configurable executors. If max_executors_count > executors_count, then auto-scaling is enabled. Max Executor Count should be between 2 and 1000. Default=1000
- ExecutorsCount int
- Optional. Total number of job executors. Executor Count should be between 2 and 100. Default=2
- MaxExecutorsCount int
- Optional. Max configurable executors. If max_executors_count > executors_count, then auto-scaling is enabled. Max Executor Count should be between 2 and 1000. Default=1000
- executorsCount Integer
- Optional. Total number of job executors. Executor Count should be between 2 and 100. Default=2
- maxExecutorsCount Integer
- Optional. Max configurable executors. If max_executors_count > executors_count, then auto-scaling is enabled. Max Executor Count should be between 2 and 1000. Default=1000
- executorsCount number
- Optional. Total number of job executors. Executor Count should be between 2 and 100. Default=2
- maxExecutorsCount number
- Optional. Max configurable executors. If max_executors_count > executors_count, then auto-scaling is enabled. Max Executor Count should be between 2 and 1000. Default=1000
- executors_count int
- Optional. Total number of job executors. Executor Count should be between 2 and 100. Default=2
- max_executors_count int
- Optional. Max configurable executors. If max_executors_count > executors_count, then auto-scaling is enabled. Max Executor Count should be between 2 and 1000. Default=1000
- executorsCount Number
- Optional. Total number of job executors. Executor Count should be between 2 and 100. Default=2
- maxExecutorsCount Number
- Optional. Max configurable executors. If max_executors_count > executors_count, then auto-scaling is enabled. Max Executor Count should be between 2 and 1000. Default=1000
GoogleCloudDataplexV1TaskInfrastructureSpecBatchComputeResourcesResponse, GoogleCloudDataplexV1TaskInfrastructureSpecBatchComputeResourcesResponseArgs                    
- ExecutorsCount int
- Optional. Total number of job executors. Executor Count should be between 2 and 100. Default=2
- MaxExecutorsCount int
- Optional. Max configurable executors. If max_executors_count > executors_count, then auto-scaling is enabled. Max Executor Count should be between 2 and 1000. Default=1000
- ExecutorsCount int
- Optional. Total number of job executors. Executor Count should be between 2 and 100. Default=2
- MaxExecutorsCount int
- Optional. Max configurable executors. If max_executors_count > executors_count, then auto-scaling is enabled. Max Executor Count should be between 2 and 1000. Default=1000
- executorsCount Integer
- Optional. Total number of job executors. Executor Count should be between 2 and 100. Default=2
- maxExecutorsCount Integer
- Optional. Max configurable executors. If max_executors_count > executors_count, then auto-scaling is enabled. Max Executor Count should be between 2 and 1000. Default=1000
- executorsCount number
- Optional. Total number of job executors. Executor Count should be between 2 and 100. Default=2
- maxExecutorsCount number
- Optional. Max configurable executors. If max_executors_count > executors_count, then auto-scaling is enabled. Max Executor Count should be between 2 and 1000. Default=1000
- executors_count int
- Optional. Total number of job executors. Executor Count should be between 2 and 100. Default=2
- max_executors_count int
- Optional. Max configurable executors. If max_executors_count > executors_count, then auto-scaling is enabled. Max Executor Count should be between 2 and 1000. Default=1000
- executorsCount Number
- Optional. Total number of job executors. Executor Count should be between 2 and 100. Default=2
- maxExecutorsCount Number
- Optional. Max configurable executors. If max_executors_count > executors_count, then auto-scaling is enabled. Max Executor Count should be between 2 and 1000. Default=1000
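The executor-count constraints and the auto-scaling condition described above can be captured in a small validation helper. This is a sketch of the documented rules, not part of the API; the `autoscaling` key is a derived flag added for illustration:

```python
# Sketch of the documented batch compute resource rules: executors_count in
# [2, 100] (default 2), max_executors_count in [2, 1000] (default 1000), and
# auto-scaling enabled when max_executors_count > executors_count.
def batch_compute_resources(executors_count: int = 2,
                            max_executors_count: int = 1000) -> dict:
    if not 2 <= executors_count <= 100:
        raise ValueError("executors_count must be between 2 and 100")
    if not 2 <= max_executors_count <= 1000:
        raise ValueError("max_executors_count must be between 2 and 1000")
    return {
        "executorsCount": executors_count,
        "maxExecutorsCount": max_executors_count,
        # Derived for illustration only; not a real API field.
        "autoscaling": max_executors_count > executors_count,
    }
```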
GoogleCloudDataplexV1TaskInfrastructureSpecContainerImageRuntime, GoogleCloudDataplexV1TaskInfrastructureSpecContainerImageRuntimeArgs                  
- Image string
- Optional. Container image to use.
- JavaJars List<string>
- Optional. A list of Java JARs to add to the classpath. Valid input includes Cloud Storage URIs to JAR binaries. For example, gs://bucket-name/my/path/to/file.jar
- Properties Dictionary<string, string>
- Optional. Overrides the common configuration of open source components installed on the Dataproc cluster. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- PythonPackages List<string>
- Optional. A list of Python packages to be installed. Valid formats include a Cloud Storage URI to a pip-installable library. For example, gs://bucket-name/my/path/to/lib.tar.gz
- Image string
- Optional. Container image to use.
- JavaJars []string
- Optional. A list of Java JARs to add to the classpath. Valid input includes Cloud Storage URIs to JAR binaries. For example, gs://bucket-name/my/path/to/file.jar
- Properties map[string]string
- Optional. Overrides the common configuration of open source components installed on the Dataproc cluster. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- PythonPackages []string
- Optional. A list of Python packages to be installed. Valid formats include a Cloud Storage URI to a pip-installable library. For example, gs://bucket-name/my/path/to/lib.tar.gz
- image String
- Optional. Container image to use.
- javaJars List<String>
- Optional. A list of Java JARs to add to the classpath. Valid input includes Cloud Storage URIs to JAR binaries. For example, gs://bucket-name/my/path/to/file.jar
- properties Map<String,String>
- Optional. Overrides the common configuration of open source components installed on the Dataproc cluster. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- pythonPackages List<String>
- Optional. A list of Python packages to be installed. Valid formats include a Cloud Storage URI to a pip-installable library. For example, gs://bucket-name/my/path/to/lib.tar.gz
- image string
- Optional. Container image to use.
- javaJars string[]
- Optional. A list of Java JARs to add to the classpath. Valid input includes Cloud Storage URIs to JAR binaries. For example, gs://bucket-name/my/path/to/file.jar
- properties {[key: string]: string}
- Optional. Overrides the common configuration of open source components installed on the Dataproc cluster. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- pythonPackages string[]
- Optional. A list of Python packages to be installed. Valid formats include a Cloud Storage URI to a pip-installable library. For example, gs://bucket-name/my/path/to/lib.tar.gz
- image str
- Optional. Container image to use.
- java_jars Sequence[str]
- Optional. A list of Java JARs to add to the classpath. Valid input includes Cloud Storage URIs to JAR binaries. For example, gs://bucket-name/my/path/to/file.jar
- properties Mapping[str, str]
- Optional. Overrides the common configuration of open source components installed on the Dataproc cluster. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- python_packages Sequence[str]
- Optional. A list of Python packages to be installed. Valid formats include a Cloud Storage URI to a pip-installable library. For example, gs://bucket-name/my/path/to/lib.tar.gz
- image String
- Optional. Container image to use.
- javaJars List<String>
- Optional. A list of Java JARs to add to the classpath. Valid input includes Cloud Storage URIs to JAR binaries. For example, gs://bucket-name/my/path/to/file.jar
- properties Map<String>
- Optional. Overrides the common configuration of open source components installed on the Dataproc cluster. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- pythonPackages List<String>
- Optional. A list of Python packages to be installed. Valid formats include a Cloud Storage URI to a pip-installable library. For example, gs://bucket-name/my/path/to/lib.tar.gz
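A container image runtime config combining the fields above might look like the following sketch. The image, JAR, and package paths are hypothetical placeholders; the properties keys follow the documented prefix:property format:

```python
# Sketch of a container image runtime config using the camelCase field names
# documented above. All paths and values are hypothetical placeholders.
container_image_runtime = {
    "image": "gcr.io/my-project/my-task-image:latest",
    "javaJars": ["gs://my-bucket/my/path/to/file.jar"],
    "pythonPackages": ["gs://my-bucket/my/path/to/lib.tar.gz"],
    "properties": {
        "core:hadoop.tmp.dir": "/tmp/hadoop",        # prefix:property format
        "spark:spark.executor.memory": "4g",
    },
}

# Every property key must follow the prefix:property pattern.
assert all(":" in key for key in container_image_runtime["properties"])
```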
GoogleCloudDataplexV1TaskInfrastructureSpecContainerImageRuntimeResponse, GoogleCloudDataplexV1TaskInfrastructureSpecContainerImageRuntimeResponseArgs                    
- Image string
- Optional. Container image to use.
- JavaJars List<string>
- Optional. A list of Java JARs to add to the classpath. Valid input includes Cloud Storage URIs to JAR binaries. For example, gs://bucket-name/my/path/to/file.jar
- Properties Dictionary<string, string>
- Optional. Overrides the common configuration of open source components installed on the Dataproc cluster. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- PythonPackages List<string>
- Optional. A list of Python packages to be installed. Valid formats include a Cloud Storage URI to a pip-installable library. For example, gs://bucket-name/my/path/to/lib.tar.gz
- Image string
- Optional. Container image to use.
- JavaJars []string
- Optional. A list of Java JARs to add to the classpath. Valid input includes Cloud Storage URIs to JAR binaries. For example, gs://bucket-name/my/path/to/file.jar
- Properties map[string]string
- Optional. Overrides the common configuration of open source components installed on the Dataproc cluster. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- PythonPackages []string
- Optional. A list of Python packages to be installed. Valid formats include a Cloud Storage URI to a pip-installable library. For example, gs://bucket-name/my/path/to/lib.tar.gz
- image String
- Optional. Container image to use.
- javaJars List<String>
- Optional. A list of Java JARs to add to the classpath. Valid input includes Cloud Storage URIs to JAR binaries. For example, gs://bucket-name/my/path/to/file.jar
- properties Map<String,String>
- Optional. Overrides the common configuration of open source components installed on the Dataproc cluster. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- pythonPackages List<String>
- Optional. A list of Python packages to be installed. Valid formats include a Cloud Storage URI to a pip-installable library. For example, gs://bucket-name/my/path/to/lib.tar.gz
- image string
- Optional. Container image to use.
- javaJars string[]
- Optional. A list of Java JARs to add to the classpath. Valid input includes Cloud Storage URIs to JAR binaries, for example gs://bucket-name/my/path/to/file.jar.
- properties {[key: string]: string}
- Optional. Overrides to the common configuration of open source components installed on the Dataproc cluster. These are the properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- pythonPackages string[]
- Optional. A list of Python packages to be installed. Valid input includes a Cloud Storage URI to a pip-installable library, for example gs://bucket-name/my/path/to/lib.tar.gz.
- image str
- Optional. Container image to use.
- java_jars Sequence[str]
- Optional. A list of Java JARs to add to the classpath. Valid input includes Cloud Storage URIs to JAR binaries, for example gs://bucket-name/my/path/to/file.jar.
- properties Mapping[str, str]
- Optional. Overrides to the common configuration of open source components installed on the Dataproc cluster. These are the properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- python_packages Sequence[str]
- Optional. A list of Python packages to be installed. Valid input includes a Cloud Storage URI to a pip-installable library, for example gs://bucket-name/my/path/to/lib.tar.gz.
- image String
- Optional. Container image to use.
- javaJars List<String>
- Optional. A list of Java JARs to add to the classpath. Valid input includes Cloud Storage URIs to JAR binaries, for example gs://bucket-name/my/path/to/file.jar.
- properties Map<String>
- Optional. Overrides to the common configuration of open source components installed on the Dataproc cluster. These are the properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- pythonPackages List<String>
- Optional. A list of Python packages to be installed. Valid input includes a Cloud Storage URI to a pip-installable library, for example gs://bucket-name/my/path/to/lib.tar.gz.
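Concretely, the container image runtime block above could be filled in like this, using the Pulumi YAML form shown earlier. This is only a sketch: the image tag, bucket paths, and the spark:-prefixed property key are placeholder assumptions, not values from this reference.

```yaml
# Hypothetical containerImage block inside a Task's infrastructureSpec.
# All names (image tag, bucket paths, property key) are placeholders.
containerImage:
  image: gcr.io/my-project/my-spark-image:latest   # optional custom container image
  javaJars:
    - gs://bucket-name/my/path/to/file.jar         # added to the classpath
  properties:
    spark:spark.executor.memory: 4g                # prefix:property format
  pythonPackages:
    - gs://bucket-name/my/path/to/lib.tar.gz       # pip-installable archive
```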
GoogleCloudDataplexV1TaskInfrastructureSpecResponse, GoogleCloudDataplexV1TaskInfrastructureSpecResponseArgs              
- Batch Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1TaskInfrastructureSpecBatchComputeResourcesResponse
- Compute resources needed for a Task when using Dataproc Serverless.
- ContainerImage Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1TaskInfrastructureSpecContainerImageRuntimeResponse
- Container image runtime configuration.
- VpcNetwork Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1TaskInfrastructureSpecVpcNetworkResponse
- VPC network.
- Batch GoogleCloudDataplexV1TaskInfrastructureSpecBatchComputeResourcesResponse
- Compute resources needed for a Task when using Dataproc Serverless.
- ContainerImage GoogleCloudDataplexV1TaskInfrastructureSpecContainerImageRuntimeResponse
- Container image runtime configuration.
- VpcNetwork GoogleCloudDataplexV1TaskInfrastructureSpecVpcNetworkResponse
- VPC network.
- batch GoogleCloudDataplexV1TaskInfrastructureSpecBatchComputeResourcesResponse
- Compute resources needed for a Task when using Dataproc Serverless.
- containerImage GoogleCloudDataplexV1TaskInfrastructureSpecContainerImageRuntimeResponse
- Container image runtime configuration.
- vpcNetwork GoogleCloudDataplexV1TaskInfrastructureSpecVpcNetworkResponse
- VPC network.
- batch GoogleCloudDataplexV1TaskInfrastructureSpecBatchComputeResourcesResponse
- Compute resources needed for a Task when using Dataproc Serverless.
- containerImage GoogleCloudDataplexV1TaskInfrastructureSpecContainerImageRuntimeResponse
- Container image runtime configuration.
- vpcNetwork GoogleCloudDataplexV1TaskInfrastructureSpecVpcNetworkResponse
- VPC network.
- batch GoogleCloudDataplexV1TaskInfrastructureSpecBatchComputeResourcesResponse
- Compute resources needed for a Task when using Dataproc Serverless.
- container_image GoogleCloudDataplexV1TaskInfrastructureSpecContainerImageRuntimeResponse
- Container image runtime configuration.
- vpc_network GoogleCloudDataplexV1TaskInfrastructureSpecVpcNetworkResponse
- VPC network.
- batch Property Map
- Compute resources needed for a Task when using Dataproc Serverless.
- containerImage Property Map
- Container image runtime configuration.
- vpcNetwork Property Map
- VPC network.
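For the input (non-response) counterpart, the three sub-blocks compose under infrastructureSpec roughly as follows. Treat this as a sketch: the executorsCount and maxExecutorsCount field names inside batch are assumptions about the batch compute-resources type, and every value is a placeholder.

```yaml
# Hypothetical infrastructureSpec combining all three sub-blocks.
infrastructureSpec:
  batch:
    executorsCount: 2        # assumed field name; initial executor count
    maxExecutorsCount: 10    # assumed field name; autoscaling ceiling
  containerImage:
    image: gcr.io/my-project/my-image:latest   # placeholder image
  vpcNetwork:
    network: default
```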
GoogleCloudDataplexV1TaskInfrastructureSpecVpcNetwork, GoogleCloudDataplexV1TaskInfrastructureSpecVpcNetworkArgs                
- Network string
- Optional. The Cloud VPC network in which the job is run. By default, the Cloud VPC network named Default within the project is used.
- NetworkTags List<string>
- Optional. List of network tags to apply to the job.
- SubNetwork string
- Optional. The Cloud VPC sub-network in which the job is run.
- Network string
- Optional. The Cloud VPC network in which the job is run. By default, the Cloud VPC network named Default within the project is used.
- NetworkTags []string
- Optional. List of network tags to apply to the job.
- SubNetwork string
- Optional. The Cloud VPC sub-network in which the job is run.
- network String
- Optional. The Cloud VPC network in which the job is run. By default, the Cloud VPC network named Default within the project is used.
- networkTags List<String>
- Optional. List of network tags to apply to the job.
- subNetwork String
- Optional. The Cloud VPC sub-network in which the job is run.
- network string
- Optional. The Cloud VPC network in which the job is run. By default, the Cloud VPC network named Default within the project is used.
- networkTags string[]
- Optional. List of network tags to apply to the job.
- subNetwork string
- Optional. The Cloud VPC sub-network in which the job is run.
- network str
- Optional. The Cloud VPC network in which the job is run. By default, the Cloud VPC network named Default within the project is used.
- network_tags Sequence[str]
- Optional. List of network tags to apply to the job.
- sub_network str
- Optional. The Cloud VPC sub-network in which the job is run.
- network String
- Optional. The Cloud VPC network in which the job is run. By default, the Cloud VPC network named Default within the project is used.
- networkTags List<String>
- Optional. List of network tags to apply to the job.
- subNetwork String
- Optional. The Cloud VPC sub-network in which the job is run.
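As a sketch, the VPC block might look like the following. The network name and tag are placeholders, and network versus subNetwork are understood here as alternative ways of selecting where the job runs, so only one is shown.

```yaml
# Hypothetical vpcNetwork block for infrastructureSpec.
vpcNetwork:
  network: my-vpc            # or set subNetwork: my-subnet instead
  networkTags:
    - dataplex-task          # tag applied to the job
```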
GoogleCloudDataplexV1TaskInfrastructureSpecVpcNetworkResponse, GoogleCloudDataplexV1TaskInfrastructureSpecVpcNetworkResponseArgs                  
- Network string
- Optional. The Cloud VPC network in which the job is run. By default, the Cloud VPC network named Default within the project is used.
- NetworkTags List<string>
- Optional. List of network tags to apply to the job.
- SubNetwork string
- Optional. The Cloud VPC sub-network in which the job is run.
- Network string
- Optional. The Cloud VPC network in which the job is run. By default, the Cloud VPC network named Default within the project is used.
- NetworkTags []string
- Optional. List of network tags to apply to the job.
- SubNetwork string
- Optional. The Cloud VPC sub-network in which the job is run.
- network String
- Optional. The Cloud VPC network in which the job is run. By default, the Cloud VPC network named Default within the project is used.
- networkTags List<String>
- Optional. List of network tags to apply to the job.
- subNetwork String
- Optional. The Cloud VPC sub-network in which the job is run.
- network string
- Optional. The Cloud VPC network in which the job is run. By default, the Cloud VPC network named Default within the project is used.
- networkTags string[]
- Optional. List of network tags to apply to the job.
- subNetwork string
- Optional. The Cloud VPC sub-network in which the job is run.
- network str
- Optional. The Cloud VPC network in which the job is run. By default, the Cloud VPC network named Default within the project is used.
- network_tags Sequence[str]
- Optional. List of network tags to apply to the job.
- sub_network str
- Optional. The Cloud VPC sub-network in which the job is run.
- network String
- Optional. The Cloud VPC network in which the job is run. By default, the Cloud VPC network named Default within the project is used.
- networkTags List<String>
- Optional. List of network tags to apply to the job.
- subNetwork String
- Optional. The Cloud VPC sub-network in which the job is run.
GoogleCloudDataplexV1TaskNotebookTaskConfig, GoogleCloudDataplexV1TaskNotebookTaskConfigArgs              
- Notebook string
- Path to input notebook. This can be the Cloud Storage URI of the notebook file or the path to a Notebook Content. The execution args are accessible as environment variables (TASK_key=value).
- ArchiveUris List<string>
- Optional. Cloud Storage URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- FileUris List<string>
- Optional. Cloud Storage URIs of files to be placed in the working directory of each executor.
- InfrastructureSpec Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1TaskInfrastructureSpec
- Optional. Infrastructure specification for the execution.
- Notebook string
- Path to input notebook. This can be the Cloud Storage URI of the notebook file or the path to a Notebook Content. The execution args are accessible as environment variables (TASK_key=value).
- ArchiveUris []string
- Optional. Cloud Storage URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- FileUris []string
- Optional. Cloud Storage URIs of files to be placed in the working directory of each executor.
- InfrastructureSpec GoogleCloudDataplexV1TaskInfrastructureSpec
- Optional. Infrastructure specification for the execution.
- notebook String
- Path to input notebook. This can be the Cloud Storage URI of the notebook file or the path to a Notebook Content. The execution args are accessible as environment variables (TASK_key=value).
- archiveUris List<String>
- Optional. Cloud Storage URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- fileUris List<String>
- Optional. Cloud Storage URIs of files to be placed in the working directory of each executor.
- infrastructureSpec GoogleCloudDataplexV1TaskInfrastructureSpec
- Optional. Infrastructure specification for the execution.
- notebook string
- Path to input notebook. This can be the Cloud Storage URI of the notebook file or the path to a Notebook Content. The execution args are accessible as environment variables (TASK_key=value).
- archiveUris string[]
- Optional. Cloud Storage URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- fileUris string[]
- Optional. Cloud Storage URIs of files to be placed in the working directory of each executor.
- infrastructureSpec GoogleCloudDataplexV1TaskInfrastructureSpec
- Optional. Infrastructure specification for the execution.
- notebook str
- Path to input notebook. This can be the Cloud Storage URI of the notebook file or the path to a Notebook Content. The execution args are accessible as environment variables (TASK_key=value).
- archive_uris Sequence[str]
- Optional. Cloud Storage URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- file_uris Sequence[str]
- Optional. Cloud Storage URIs of files to be placed in the working directory of each executor.
- infrastructure_spec GoogleCloudDataplexV1TaskInfrastructureSpec
- Optional. Infrastructure specification for the execution.
- notebook String
- Path to input notebook. This can be the Cloud Storage URI of the notebook file or the path to a Notebook Content. The execution args are accessible as environment variables (TASK_key=value).
- archiveUris List<String>
- Optional. Cloud Storage URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- fileUris List<String>
- Optional. Cloud Storage URIs of files to be placed in the working directory of each executor.
- infrastructureSpec Property Map
- Optional. Infrastructure specification for the execution.
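Putting the notebook fields together, a hypothetical task might carry a config like the following (Pulumi YAML form). All URIs are placeholders invented for illustration.

```yaml
# Hypothetical notebook task config; every path is a placeholder.
notebook:
  notebook: gs://bucket-name/my/path/to/notebook.ipynb  # required input notebook
  archiveUris:
    - gs://bucket-name/deps.tar.gz    # extracted into each executor's working dir
  fileUris:
    - gs://bucket-name/config.json    # placed in each executor's working dir
  infrastructureSpec:
    containerImage:
      pythonPackages:
        - gs://bucket-name/my/path/to/lib.tar.gz
```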
GoogleCloudDataplexV1TaskNotebookTaskConfigResponse, GoogleCloudDataplexV1TaskNotebookTaskConfigResponseArgs                
- ArchiveUris List<string>
- Optional. Cloud Storage URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- FileUris List<string>
- Optional. Cloud Storage URIs of files to be placed in the working directory of each executor.
- InfrastructureSpec Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1TaskInfrastructureSpecResponse
- Optional. Infrastructure specification for the execution.
- Notebook string
- Path to input notebook. This can be the Cloud Storage URI of the notebook file or the path to a Notebook Content. The execution args are accessible as environment variables (TASK_key=value).
- ArchiveUris []string
- Optional. Cloud Storage URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- FileUris []string
- Optional. Cloud Storage URIs of files to be placed in the working directory of each executor.
- InfrastructureSpec GoogleCloudDataplexV1TaskInfrastructureSpecResponse
- Optional. Infrastructure specification for the execution.
- Notebook string
- Path to input notebook. This can be the Cloud Storage URI of the notebook file or the path to a Notebook Content. The execution args are accessible as environment variables (TASK_key=value).
- archiveUris List<String>
- Optional. Cloud Storage URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- fileUris List<String>
- Optional. Cloud Storage URIs of files to be placed in the working directory of each executor.
- infrastructureSpec GoogleCloudDataplexV1TaskInfrastructureSpecResponse
- Optional. Infrastructure specification for the execution.
- notebook String
- Path to input notebook. This can be the Cloud Storage URI of the notebook file or the path to a Notebook Content. The execution args are accessible as environment variables (TASK_key=value).
- archiveUris string[]
- Optional. Cloud Storage URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- fileUris string[]
- Optional. Cloud Storage URIs of files to be placed in the working directory of each executor.
- infrastructureSpec GoogleCloudDataplexV1TaskInfrastructureSpecResponse
- Optional. Infrastructure specification for the execution.
- notebook string
- Path to input notebook. This can be the Cloud Storage URI of the notebook file or the path to a Notebook Content. The execution args are accessible as environment variables (TASK_key=value).
- archive_uris Sequence[str]
- Optional. Cloud Storage URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- file_uris Sequence[str]
- Optional. Cloud Storage URIs of files to be placed in the working directory of each executor.
- infrastructure_spec GoogleCloudDataplexV1TaskInfrastructureSpecResponse
- Optional. Infrastructure specification for the execution.
- notebook str
- Path to input notebook. This can be the Cloud Storage URI of the notebook file or the path to a Notebook Content. The execution args are accessible as environment variables (TASK_key=value).
- archiveUris List<String>
- Optional. Cloud Storage URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- fileUris List<String>
- Optional. Cloud Storage URIs of files to be placed in the working directory of each executor.
- infrastructureSpec Property Map
- Optional. Infrastructure specification for the execution.
- notebook String
- Path to input notebook. This can be the Cloud Storage URI of the notebook file or the path to a Notebook Content. The execution args are accessible as environment variables (TASK_key=value).
GoogleCloudDataplexV1TaskSparkTaskConfig, GoogleCloudDataplexV1TaskSparkTaskConfigArgs              
- ArchiveUris List<string>
- Optional. Cloud Storage URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- FileUris List<string>
- Optional. Cloud Storage URIs of files to be placed in the working directory of each executor.
- InfrastructureSpec Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1TaskInfrastructureSpec
- Optional. Infrastructure specification for the execution.
- MainClass string
- The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris. The execution args are passed in as a sequence of named process arguments (--key=value).
- MainJarFileUri string
- The Cloud Storage URI of the jar file that contains the main class. The execution args are passed in as a sequence of named process arguments (--key=value).
- PythonScriptFile string
- The Cloud Storage URI of the main Python file to use as the driver. Must be a .py file. The execution args are passed in as a sequence of named process arguments (--key=value).
- SqlScript string
- The query text. The execution args are used to declare a set of script variables (set key="value";).
- SqlScriptFile string
- A reference to a query file. This can be the Cloud Storage URI of the query file or the path to a SqlScript Content. The execution args are used to declare a set of script variables (set key="value";).
- ArchiveUris []string
- Optional. Cloud Storage URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- FileUris []string
- Optional. Cloud Storage URIs of files to be placed in the working directory of each executor.
- InfrastructureSpec GoogleCloudDataplexV1TaskInfrastructureSpec
- Optional. Infrastructure specification for the execution.
- MainClass string
- The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris. The execution args are passed in as a sequence of named process arguments (--key=value).
- MainJarFileUri string
- The Cloud Storage URI of the jar file that contains the main class. The execution args are passed in as a sequence of named process arguments (--key=value).
- PythonScriptFile string
- The Cloud Storage URI of the main Python file to use as the driver. Must be a .py file. The execution args are passed in as a sequence of named process arguments (--key=value).
- SqlScript string
- The query text. The execution args are used to declare a set of script variables (set key="value";).
- SqlScriptFile string
- A reference to a query file. This can be the Cloud Storage URI of the query file or the path to a SqlScript Content. The execution args are used to declare a set of script variables (set key="value";).
- archiveUris List<String>
- Optional. Cloud Storage URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- fileUris List<String>
- Optional. Cloud Storage URIs of files to be placed in the working directory of each executor.
- infrastructureSpec GoogleCloudDataplexV1TaskInfrastructureSpec
- Optional. Infrastructure specification for the execution.
- mainClass String
- The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris. The execution args are passed in as a sequence of named process arguments (--key=value).
- mainJarFileUri String
- The Cloud Storage URI of the jar file that contains the main class. The execution args are passed in as a sequence of named process arguments (--key=value).
- pythonScriptFile String
- The Cloud Storage URI of the main Python file to use as the driver. Must be a .py file. The execution args are passed in as a sequence of named process arguments (--key=value).
- sqlScript String
- The query text. The execution args are used to declare a set of script variables (set key="value";).
- sqlScriptFile String
- A reference to a query file. This can be the Cloud Storage URI of the query file or the path to a SqlScript Content. The execution args are used to declare a set of script variables (set key="value";).
- archiveUris string[]
- Optional. Cloud Storage URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- fileUris string[]
- Optional. Cloud Storage URIs of files to be placed in the working directory of each executor.
- infrastructureSpec GoogleCloudDataplexV1TaskInfrastructureSpec
- Optional. Infrastructure specification for the execution.
- mainClass string
- The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris. The execution args are passed in as a sequence of named process arguments (--key=value).
- mainJarFileUri string
- The Cloud Storage URI of the jar file that contains the main class. The execution args are passed in as a sequence of named process arguments (--key=value).
- pythonScriptFile string
- The Cloud Storage URI of the main Python file to use as the driver. Must be a .py file. The execution args are passed in as a sequence of named process arguments (--key=value).
- sqlScript string
- The query text. The execution args are used to declare a set of script variables (set key="value";).
- sqlScriptFile string
- A reference to a query file. This can be the Cloud Storage URI of the query file or the path to a SqlScript Content. The execution args are used to declare a set of script variables (set key="value";).
- archive_uris Sequence[str]
- Optional. Cloud Storage URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- file_uris Sequence[str]
- Optional. Cloud Storage URIs of files to be placed in the working directory of each executor.
- infrastructure_spec GoogleCloudDataplexV1TaskInfrastructureSpec
- Optional. Infrastructure specification for the execution.
- main_class str
- The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris. The execution args are passed in as a sequence of named process arguments (--key=value).
- main_jar_file_uri str
- The Cloud Storage URI of the jar file that contains the main class. The execution args are passed in as a sequence of named process arguments (--key=value).
- python_script_file str
- The Cloud Storage URI of the main Python file to use as the driver. Must be a .py file. The execution args are passed in as a sequence of named process arguments (--key=value).
- sql_script str
- The query text. The execution args are used to declare a set of script variables (set key="value";).
- sql_script_file str
- A reference to a query file. This can be the Cloud Storage URI of the query file or the path to a SqlScript Content. The execution args are used to declare a set of script variables (set key="value";).
- archiveUris List<String>
- Optional. Cloud Storage URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- fileUris List<String>
- Optional. Cloud Storage URIs of files to be placed in the working directory of each executor.
- infrastructureSpec Property Map
- Optional. Infrastructure specification for the execution.
- mainClass String
- The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris. The execution args are passed in as a sequence of named process arguments (--key=value).
- mainJarFileUri String
- The Cloud Storage URI of the jar file that contains the main class. The execution args are passed in as a sequence of named process arguments (--key=value).
- pythonScriptFile String
- The Cloud Storage URI of the main Python file to use as the driver. Must be a .py file. The execution args are passed in as a sequence of named process arguments (--key=value).
- sqlScript String
- The query text. The execution args are used to declare a set of script variables (set key="value";).
- sqlScriptFile String
- A reference to a query file. This can be the Cloud Storage URI of the query file or the path to a SqlScript Content. The execution args are used to declare a set of script variables (set key="value";).
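A hypothetical Spark task config built from the fields above might look like this in Pulumi YAML. All paths are placeholders, and the comment reflects our reading that the driver fields are alternatives in the underlying API.

```yaml
# Hypothetical spark task config; every URI is a placeholder.
# mainClass, mainJarFileUri, pythonScriptFile, sqlScript, and sqlScriptFile
# are alternatives; set only one of them.
spark:
  pythonScriptFile: gs://bucket-name/my/path/to/driver.py   # .py driver file
  fileUris:
    - gs://bucket-name/lookup.csv             # placed in each executor's working dir
  archiveUris:
    - gs://bucket-name/site-packages.tar.gz   # extracted into each executor's working dir
```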
GoogleCloudDataplexV1TaskSparkTaskConfigResponse, GoogleCloudDataplexV1TaskSparkTaskConfigResponseArgs                
- ArchiveUris List<string>
- Optional. Cloud Storage URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- FileUris List<string>
- Optional. Cloud Storage URIs of files to be placed in the working directory of each executor.
- InfrastructureSpec Pulumi.GoogleNative.Dataplex.V1.Inputs.GoogleCloudDataplexV1TaskInfrastructureSpecResponse
- Optional. Infrastructure specification for the execution.
- MainClass string
- The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris. The execution args are passed in as a sequence of named process arguments (--key=value).
- MainJarFileUri string
- The Cloud Storage URI of the jar file that contains the main class. The execution args are passed in as a sequence of named process arguments (--key=value).
- PythonScriptFile string
- The Cloud Storage URI of the main Python file to use as the driver. Must be a .py file. The execution args are passed in as a sequence of named process arguments (--key=value).
- SqlScript string
- The query text. The execution args are used to declare a set of script variables (set key="value";).
- SqlScriptFile string
- A reference to a query file. This can be the Cloud Storage URI of the query file or the path to a SqlScript Content. The execution args are used to declare a set of script variables (set key="value";).
- ArchiveUris []string
- Optional. Cloud Storage URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- FileUris []string
- Optional. Cloud Storage URIs of files to be placed in the working directory of each executor.
- InfrastructureSpec GoogleCloudDataplexV1TaskInfrastructureSpecResponse
- Optional. Infrastructure specification for the execution.
- MainClass string
- The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris. The execution args are passed in as a sequence of named process arguments (--key=value).
- MainJarFileUri string
- The Cloud Storage URI of the jar file that contains the main class. The execution args are passed in as a sequence of named process arguments (--key=value).
- PythonScriptFile string
- The Cloud Storage URI of the main Python file to use as the driver. Must be a .py file. The execution args are passed in as a sequence of named process arguments (--key=value).
- SqlScript string
- The query text. The execution args are used to declare a set of script variables (set key="value";).
- SqlScriptFile string
- A reference to a query file. This can be the Cloud Storage URI of the query file or the path to a SqlScript Content. The execution args are used to declare a set of script variables (set key="value";).
- archiveUris List<String>
- Optional. Cloud Storage URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- fileUris List<String>
- Optional. Cloud Storage URIs of files to be placed in the working directory of each executor.
- infrastructureSpec GoogleCloudDataplexV1TaskInfrastructureSpecResponse
- Optional. Infrastructure specification for the execution.
- mainClass String
- The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris. The execution args are passed in as a sequence of named process arguments (--key=value).
- mainJarFileUri String
- The Cloud Storage URI of the jar file that contains the main class. The execution args are passed in as a sequence of named process arguments (--key=value).
- pythonScriptFile String
- The Cloud Storage URI of the main Python file to use as the driver. Must be a .py file. The execution args are passed in as a sequence of named process arguments (--key=value).
- sqlScript String
- The query text. The execution args are used to declare a set of script variables (set key="value";).
- sqlScriptFile String
- A reference to a query file. This can be the Cloud Storage URI of the query file or the path to a SqlScript Content. The execution args are used to declare a set of script variables (set key="value";).
- archiveUris string[]
- Optional. Cloud Storage URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- fileUris string[]
- Optional. Cloud Storage URIs of files to be placed in the working directory of each executor.
- infrastructureSpec GoogleCloudDataplexV1TaskInfrastructureSpecResponse
- Optional. Infrastructure specification for the execution.
- mainClass string
- The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris. The execution args are passed in as a sequence of named process arguments (--key=value).
- mainJarFileUri string
- The Cloud Storage URI of the jar file that contains the main class. The execution args are passed in as a sequence of named process arguments (--key=value).
- pythonScriptFile string
- The Cloud Storage URI of the main Python file to use as the driver. Must be a .py file. The execution args are passed in as a sequence of named process arguments (--key=value).
- sqlScript string
- The query text. The execution args are used to declare a set of script variables (set key="value";).
- sqlScriptFile string
- A reference to a query file. This can be the Cloud Storage URI of the query file or the path to a SqlScript Content. The execution args are used to declare a set of script variables (set key="value";).
- archive_uris Sequence[str]
- Optional. Cloud Storage URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- file_uris Sequence[str]
- Optional. Cloud Storage URIs of files to be placed in the working directory of each executor.
- infrastructure_spec GoogleCloudDataplexV1TaskInfrastructureSpecResponse
- Optional. Infrastructure specification for the execution.
- main_class str
- The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris. The execution args are passed in as a sequence of named process arguments (--key=value).
- main_jar_file_uri str
- The Cloud Storage URI of the jar file that contains the main class. The execution args are passed in as a sequence of named process arguments (--key=value).
- python_script_file str
- The Cloud Storage URI of the main Python file to use as the driver. Must be a .py file. The execution args are passed in as a sequence of named process arguments (--key=value).
- sql_script str
- The query text. The execution args are used to declare a set of script variables (set key="value";).
- sql_script_file str
- A reference to a query file. This can be the Cloud Storage URI of the query file or it can be the path to a SqlScript Content. The execution args are used to declare a set of script variables (set key="value";).
- archiveUris List<String>
- Optional. Cloud Storage URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- fileUris List<String>
- Optional. Cloud Storage URIs of files to be placed in the working directory of each executor.
- infrastructureSpec Property Map
- Optional. Infrastructure specification for the execution.
- mainClass String
- The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris. The execution args are passed in as a sequence of named process arguments (--key=value).
- mainJarFileUri String
- The Cloud Storage URI of the jar file that contains the main class. The execution args are passed in as a sequence of named process arguments (--key=value).
- pythonScriptFile String
- The Cloud Storage URI of the main Python file to use as the driver. Must be a .py file. The execution args are passed in as a sequence of named process arguments (--key=value).
- sqlScript String
- The query text. The execution args are used to declare a set of script variables (set key="value";).
- sqlScriptFile String
- A reference to a query file. This can be the Cloud Storage URI of the query file or it can be the path to a SqlScript Content. The execution args are used to declare a set of script variables (set key="value";).
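Several of the Spark task fields above note that execution args are passed to the driver "as a sequence of named process arguments (--key=value)". As a minimal sketch of what that means in practice, the hypothetical helper below (not part of the provider API) renders an args mapping into that form:

```python
def format_execution_args(args: dict) -> list:
    """Render an execution-args mapping as --key=value process arguments,
    the shape in which the Spark driver ultimately receives them."""
    return [f"--{key}={value}" for key, value in args.items()]


# A mapping like {"input": "gs://bucket/data"} becomes ["--input=gs://bucket/data"].
print(format_execution_args({"input": "gs://bucket/data", "mode": "full"}))
```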
GoogleCloudDataplexV1TaskTriggerSpec, GoogleCloudDataplexV1TaskTriggerSpecArgs            
- Type
Pulumi.GoogleNative.Dataplex.V1.GoogleCloudDataplexV1TaskTriggerSpecType
- Immutable. Trigger type of the user-specified Task.
- Disabled bool
- Optional. Prevent the task from executing. This does not cancel already running tasks. It is intended to temporarily disable RECURRING tasks.
- MaxRetries int
- Optional. Number of retry attempts before aborting. Set to zero to never attempt to retry a failed task.
- Schedule string
- Optional. Cron schedule (https://en.wikipedia.org/wiki/Cron) for running tasks periodically. To explicitly set a timezone to the cron tab, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from IANA time zone database. For example, CRON_TZ=America/New_York 1 * * * *, or TZ=America/New_York 1 * * * *. This field is required for RECURRING tasks.
- StartTime string
- Optional. The first run of the task will be after this time. If not specified, the task will run shortly after being submitted if ON_DEMAND and based on the schedule if RECURRING.
- Type
GoogleCloudDataplexV1TaskTriggerSpecType
- Immutable. Trigger type of the user-specified Task.
- Disabled bool
- Optional. Prevent the task from executing. This does not cancel already running tasks. It is intended to temporarily disable RECURRING tasks.
- MaxRetries int
- Optional. Number of retry attempts before aborting. Set to zero to never attempt to retry a failed task.
- Schedule string
- Optional. Cron schedule (https://en.wikipedia.org/wiki/Cron) for running tasks periodically. To explicitly set a timezone to the cron tab, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from IANA time zone database. For example, CRON_TZ=America/New_York 1 * * * *, or TZ=America/New_York 1 * * * *. This field is required for RECURRING tasks.
- StartTime string
- Optional. The first run of the task will be after this time. If not specified, the task will run shortly after being submitted if ON_DEMAND and based on the schedule if RECURRING.
- type
GoogleCloudDataplexV1TaskTriggerSpecType
- Immutable. Trigger type of the user-specified Task.
- disabled Boolean
- Optional. Prevent the task from executing. This does not cancel already running tasks. It is intended to temporarily disable RECURRING tasks.
- maxRetries Integer
- Optional. Number of retry attempts before aborting. Set to zero to never attempt to retry a failed task.
- schedule String
- Optional. Cron schedule (https://en.wikipedia.org/wiki/Cron) for running tasks periodically. To explicitly set a timezone to the cron tab, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from IANA time zone database. For example, CRON_TZ=America/New_York 1 * * * *, or TZ=America/New_York 1 * * * *. This field is required for RECURRING tasks.
- startTime String
- Optional. The first run of the task will be after this time. If not specified, the task will run shortly after being submitted if ON_DEMAND and based on the schedule if RECURRING.
- type
GoogleCloudDataplexV1TaskTriggerSpecType
- Immutable. Trigger type of the user-specified Task.
- disabled boolean
- Optional. Prevent the task from executing. This does not cancel already running tasks. It is intended to temporarily disable RECURRING tasks.
- maxRetries number
- Optional. Number of retry attempts before aborting. Set to zero to never attempt to retry a failed task.
- schedule string
- Optional. Cron schedule (https://en.wikipedia.org/wiki/Cron) for running tasks periodically. To explicitly set a timezone to the cron tab, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from IANA time zone database. For example, CRON_TZ=America/New_York 1 * * * *, or TZ=America/New_York 1 * * * *. This field is required for RECURRING tasks.
- startTime string
- Optional. The first run of the task will be after this time. If not specified, the task will run shortly after being submitted if ON_DEMAND and based on the schedule if RECURRING.
- type
GoogleCloudDataplexV1TaskTriggerSpecType
- Immutable. Trigger type of the user-specified Task.
- disabled bool
- Optional. Prevent the task from executing. This does not cancel already running tasks. It is intended to temporarily disable RECURRING tasks.
- max_retries int
- Optional. Number of retry attempts before aborting. Set to zero to never attempt to retry a failed task.
- schedule str
- Optional. Cron schedule (https://en.wikipedia.org/wiki/Cron) for running tasks periodically. To explicitly set a timezone to the cron tab, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from IANA time zone database. For example, CRON_TZ=America/New_York 1 * * * *, or TZ=America/New_York 1 * * * *. This field is required for RECURRING tasks.
- start_time str
- Optional. The first run of the task will be after this time. If not specified, the task will run shortly after being submitted if ON_DEMAND and based on the schedule if RECURRING.
- type "TYPE_UNSPECIFIED" | "ON_DEMAND" | "RECURRING"
- Immutable. Trigger type of the user-specified Task.
- disabled Boolean
- Optional. Prevent the task from executing. This does not cancel already running tasks. It is intended to temporarily disable RECURRING tasks.
- maxRetries Number
- Optional. Number of retry attempts before aborting. Set to zero to never attempt to retry a failed task.
- schedule String
- Optional. Cron schedule (https://en.wikipedia.org/wiki/Cron) for running tasks periodically. To explicitly set a timezone to the cron tab, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from IANA time zone database. For example, CRON_TZ=America/New_York 1 * * * *, or TZ=America/New_York 1 * * * *. This field is required for RECURRING tasks.
- startTime String
- Optional. The first run of the task will be after this time. If not specified, the task will run shortly after being submitted if ON_DEMAND and based on the schedule if RECURRING.
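The schedule field above requires the "CRON_TZ=${IANA_TIME_ZONE}" (or "TZ=${IANA_TIME_ZONE}") prefix to pin a cron expression to an explicit time zone. As a small illustrative sketch (the helper name is hypothetical, not part of the SDK), building such a schedule string is just:

```python
def tz_schedule(iana_tz: str, cron: str) -> str:
    """Prefix a cron expression with an explicit IANA time zone,
    in the "CRON_TZ=${IANA_TIME_ZONE} <cron>" form TriggerSpec.schedule expects."""
    return f"CRON_TZ={iana_tz} {cron}"


# Reproduces the documented example: "CRON_TZ=America/New_York 1 * * * *"
print(tz_schedule("America/New_York", "1 * * * *"))
```

The resulting string can be passed as the schedule argument of a RECURRING trigger spec.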
GoogleCloudDataplexV1TaskTriggerSpecResponse, GoogleCloudDataplexV1TaskTriggerSpecResponseArgs              
- Disabled bool
- Optional. Prevent the task from executing. This does not cancel already running tasks. It is intended to temporarily disable RECURRING tasks.
- MaxRetries int
- Optional. Number of retry attempts before aborting. Set to zero to never attempt to retry a failed task.
- Schedule string
- Optional. Cron schedule (https://en.wikipedia.org/wiki/Cron) for running tasks periodically. To explicitly set a timezone to the cron tab, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from IANA time zone database. For example, CRON_TZ=America/New_York 1 * * * *, or TZ=America/New_York 1 * * * *. This field is required for RECURRING tasks.
- StartTime string
- Optional. The first run of the task will be after this time. If not specified, the task will run shortly after being submitted if ON_DEMAND and based on the schedule if RECURRING.
- Type string
- Immutable. Trigger type of the user-specified Task.
- Disabled bool
- Optional. Prevent the task from executing. This does not cancel already running tasks. It is intended to temporarily disable RECURRING tasks.
- MaxRetries int
- Optional. Number of retry attempts before aborting. Set to zero to never attempt to retry a failed task.
- Schedule string
- Optional. Cron schedule (https://en.wikipedia.org/wiki/Cron) for running tasks periodically. To explicitly set a timezone to the cron tab, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from IANA time zone database. For example, CRON_TZ=America/New_York 1 * * * *, or TZ=America/New_York 1 * * * *. This field is required for RECURRING tasks.
- StartTime string
- Optional. The first run of the task will be after this time. If not specified, the task will run shortly after being submitted if ON_DEMAND and based on the schedule if RECURRING.
- Type string
- Immutable. Trigger type of the user-specified Task.
- disabled Boolean
- Optional. Prevent the task from executing. This does not cancel already running tasks. It is intended to temporarily disable RECURRING tasks.
- maxRetries Integer
- Optional. Number of retry attempts before aborting. Set to zero to never attempt to retry a failed task.
- schedule String
- Optional. Cron schedule (https://en.wikipedia.org/wiki/Cron) for running tasks periodically. To explicitly set a timezone to the cron tab, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from IANA time zone database. For example, CRON_TZ=America/New_York 1 * * * *, or TZ=America/New_York 1 * * * *. This field is required for RECURRING tasks.
- startTime String
- Optional. The first run of the task will be after this time. If not specified, the task will run shortly after being submitted if ON_DEMAND and based on the schedule if RECURRING.
- type String
- Immutable. Trigger type of the user-specified Task.
- disabled boolean
- Optional. Prevent the task from executing. This does not cancel already running tasks. It is intended to temporarily disable RECURRING tasks.
- maxRetries number
- Optional. Number of retry attempts before aborting. Set to zero to never attempt to retry a failed task.
- schedule string
- Optional. Cron schedule (https://en.wikipedia.org/wiki/Cron) for running tasks periodically. To explicitly set a timezone to the cron tab, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from IANA time zone database. For example, CRON_TZ=America/New_York 1 * * * *, or TZ=America/New_York 1 * * * *. This field is required for RECURRING tasks.
- startTime string
- Optional. The first run of the task will be after this time. If not specified, the task will run shortly after being submitted if ON_DEMAND and based on the schedule if RECURRING.
- type string
- Immutable. Trigger type of the user-specified Task.
- disabled bool
- Optional. Prevent the task from executing. This does not cancel already running tasks. It is intended to temporarily disable RECURRING tasks.
- max_retries int
- Optional. Number of retry attempts before aborting. Set to zero to never attempt to retry a failed task.
- schedule str
- Optional. Cron schedule (https://en.wikipedia.org/wiki/Cron) for running tasks periodically. To explicitly set a timezone to the cron tab, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from IANA time zone database. For example, CRON_TZ=America/New_York 1 * * * *, or TZ=America/New_York 1 * * * *. This field is required for RECURRING tasks.
- start_time str
- Optional. The first run of the task will be after this time. If not specified, the task will run shortly after being submitted if ON_DEMAND and based on the schedule if RECURRING.
- type str
- Immutable. Trigger type of the user-specified Task.
- disabled Boolean
- Optional. Prevent the task from executing. This does not cancel already running tasks. It is intended to temporarily disable RECURRING tasks.
- maxRetries Number
- Optional. Number of retry attempts before aborting. Set to zero to never attempt to retry a failed task.
- schedule String
- Optional. Cron schedule (https://en.wikipedia.org/wiki/Cron) for running tasks periodically. To explicitly set a timezone to the cron tab, apply a prefix in the cron tab: "CRON_TZ=${IANA_TIME_ZONE}" or "TZ=${IANA_TIME_ZONE}". The ${IANA_TIME_ZONE} may only be a valid string from IANA time zone database. For example, CRON_TZ=America/New_York 1 * * * *, or TZ=America/New_York 1 * * * *. This field is required for RECURRING tasks.
- startTime String
- Optional. The first run of the task will be after this time. If not specified, the task will run shortly after being submitted if ON_DEMAND and based on the schedule if RECURRING.
- type String
- Immutable. Trigger type of the user-specified Task.
GoogleCloudDataplexV1TaskTriggerSpecType, GoogleCloudDataplexV1TaskTriggerSpecTypeArgs              
- TypeUnspecified
- TYPE_UNSPECIFIED: Unspecified trigger type.
- OnDemand
- ON_DEMAND: The task runs one time, shortly after task creation.
- Recurring
- RECURRING: The task is scheduled to run periodically.
- GoogleCloudDataplexV1TaskTriggerSpecTypeTypeUnspecified
- TYPE_UNSPECIFIED: Unspecified trigger type.
- GoogleCloudDataplexV1TaskTriggerSpecTypeOnDemand
- ON_DEMAND: The task runs one time, shortly after task creation.
- GoogleCloudDataplexV1TaskTriggerSpecTypeRecurring
- RECURRING: The task is scheduled to run periodically.
- TypeUnspecified
- TYPE_UNSPECIFIED: Unspecified trigger type.
- OnDemand
- ON_DEMAND: The task runs one time, shortly after task creation.
- Recurring
- RECURRING: The task is scheduled to run periodically.
- TypeUnspecified
- TYPE_UNSPECIFIED: Unspecified trigger type.
- OnDemand
- ON_DEMAND: The task runs one time, shortly after task creation.
- Recurring
- RECURRING: The task is scheduled to run periodically.
- TYPE_UNSPECIFIED
- TYPE_UNSPECIFIED: Unspecified trigger type.
- ON_DEMAND
- ON_DEMAND: The task runs one time, shortly after task creation.
- RECURRING
- RECURRING: The task is scheduled to run periodically.
- "TYPE_UNSPECIFIED"
- TYPE_UNSPECIFIED: Unspecified trigger type.
- "ON_DEMAND"
- ON_DEMAND: The task runs one time, shortly after task creation.
- "RECURRING"
- RECURRING: The task is scheduled to run periodically.
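The trigger type determines which other TriggerSpec fields matter: schedule is required only for RECURRING tasks, while ON_DEMAND tasks run once shortly after creation. A minimal sketch of that rule, using an illustrative plain-Python enum mirroring the wire values (not the SDK's own class):

```python
from enum import Enum


class TaskTriggerSpecType(Enum):
    """Illustrative mirror of the GoogleCloudDataplexV1TaskTriggerSpecType wire values."""
    TYPE_UNSPECIFIED = "TYPE_UNSPECIFIED"
    ON_DEMAND = "ON_DEMAND"    # runs one time, shortly after task creation
    RECURRING = "RECURRING"    # runs periodically per TriggerSpec.schedule


def requires_schedule(trigger_type: TaskTriggerSpecType) -> bool:
    # Per the docs above, the schedule field is required only for RECURRING tasks.
    return trigger_type is TaskTriggerSpecType.RECURRING
```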
Package Details
- Repository
- Google Cloud Native pulumi/pulumi-google-native
- License
- Apache-2.0