Google Cloud Native is in preview. Google Cloud Classic is fully supported.
google-native.dataproc/v1beta2.Job
Submits a job to a cluster. Auto-naming is currently not supported for this resource.
Create Job Resource
Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.
Constructor syntax
new Job(name: string, args: JobArgs, opts?: CustomResourceOptions);
@overload
def Job(resource_name: str,
        args: JobArgs,
        opts: Optional[ResourceOptions] = None)
@overload
def Job(resource_name: str,
        opts: Optional[ResourceOptions] = None,
        placement: Optional[JobPlacementArgs] = None,
        region: Optional[str] = None,
        pig_job: Optional[PigJobArgs] = None,
        hadoop_job: Optional[HadoopJobArgs] = None,
        labels: Optional[Mapping[str, str]] = None,
        presto_job: Optional[PrestoJobArgs] = None,
        project: Optional[str] = None,
        pyspark_job: Optional[PySparkJobArgs] = None,
        reference: Optional[JobReferenceArgs] = None,
        hive_job: Optional[HiveJobArgs] = None,
        request_id: Optional[str] = None,
        scheduling: Optional[JobSchedulingArgs] = None,
        spark_job: Optional[SparkJobArgs] = None,
        spark_r_job: Optional[SparkRJobArgs] = None,
        spark_sql_job: Optional[SparkSqlJobArgs] = None)
func NewJob(ctx *Context, name string, args JobArgs, opts ...ResourceOption) (*Job, error)
public Job(string name, JobArgs args, CustomResourceOptions? opts = null)
type: google-native:dataproc/v1beta2:Job
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.
Parameters
- name string
- The unique name of the resource.
- args JobArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- resource_name str
- The unique name of the resource.
- args JobArgs
- The arguments to resource properties.
- opts ResourceOptions
- Bag of options to control resource's behavior.
- ctx Context
- Context object for the current deployment.
- name string
- The unique name of the resource.
- args JobArgs
- The arguments to resource properties.
- opts ResourceOption
- Bag of options to control resource's behavior.
- name string
- The unique name of the resource.
- args JobArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- name String
- The unique name of the resource.
- args JobArgs
- The arguments to resource properties.
- options CustomResourceOptions
- Bag of options to control resource's behavior.
Constructor example
The following reference example uses placeholder values for all input properties.
var examplejobResourceResourceFromDataprocv1beta2 = new GoogleNative.Dataproc.V1Beta2.Job("examplejobResourceResourceFromDataprocv1beta2", new()
{
    Placement = new GoogleNative.Dataproc.V1Beta2.Inputs.JobPlacementArgs
    {
        ClusterName = "string",
        ClusterLabels = 
        {
            { "string", "string" },
        },
    },
    Region = "string",
    PigJob = new GoogleNative.Dataproc.V1Beta2.Inputs.PigJobArgs
    {
        ContinueOnFailure = false,
        JarFileUris = new[]
        {
            "string",
        },
        LoggingConfig = new GoogleNative.Dataproc.V1Beta2.Inputs.LoggingConfigArgs
        {
            DriverLogLevels = 
            {
                { "string", "string" },
            },
        },
        Properties = 
        {
            { "string", "string" },
        },
        QueryFileUri = "string",
        QueryList = new GoogleNative.Dataproc.V1Beta2.Inputs.QueryListArgs
        {
            Queries = new[]
            {
                "string",
            },
        },
        ScriptVariables = 
        {
            { "string", "string" },
        },
    },
    HadoopJob = new GoogleNative.Dataproc.V1Beta2.Inputs.HadoopJobArgs
    {
        ArchiveUris = new[]
        {
            "string",
        },
        Args = new[]
        {
            "string",
        },
        FileUris = new[]
        {
            "string",
        },
        JarFileUris = new[]
        {
            "string",
        },
        LoggingConfig = new GoogleNative.Dataproc.V1Beta2.Inputs.LoggingConfigArgs
        {
            DriverLogLevels = 
            {
                { "string", "string" },
            },
        },
        MainClass = "string",
        MainJarFileUri = "string",
        Properties = 
        {
            { "string", "string" },
        },
    },
    Labels = 
    {
        { "string", "string" },
    },
    PrestoJob = new GoogleNative.Dataproc.V1Beta2.Inputs.PrestoJobArgs
    {
        ClientTags = new[]
        {
            "string",
        },
        ContinueOnFailure = false,
        LoggingConfig = new GoogleNative.Dataproc.V1Beta2.Inputs.LoggingConfigArgs
        {
            DriverLogLevels = 
            {
                { "string", "string" },
            },
        },
        OutputFormat = "string",
        Properties = 
        {
            { "string", "string" },
        },
        QueryFileUri = "string",
        QueryList = new GoogleNative.Dataproc.V1Beta2.Inputs.QueryListArgs
        {
            Queries = new[]
            {
                "string",
            },
        },
    },
    Project = "string",
    PysparkJob = new GoogleNative.Dataproc.V1Beta2.Inputs.PySparkJobArgs
    {
        MainPythonFileUri = "string",
        ArchiveUris = new[]
        {
            "string",
        },
        Args = new[]
        {
            "string",
        },
        FileUris = new[]
        {
            "string",
        },
        JarFileUris = new[]
        {
            "string",
        },
        LoggingConfig = new GoogleNative.Dataproc.V1Beta2.Inputs.LoggingConfigArgs
        {
            DriverLogLevels = 
            {
                { "string", "string" },
            },
        },
        Properties = 
        {
            { "string", "string" },
        },
        PythonFileUris = new[]
        {
            "string",
        },
    },
    Reference = new GoogleNative.Dataproc.V1Beta2.Inputs.JobReferenceArgs
    {
        JobId = "string",
        Project = "string",
    },
    HiveJob = new GoogleNative.Dataproc.V1Beta2.Inputs.HiveJobArgs
    {
        ContinueOnFailure = false,
        JarFileUris = new[]
        {
            "string",
        },
        Properties = 
        {
            { "string", "string" },
        },
        QueryFileUri = "string",
        QueryList = new GoogleNative.Dataproc.V1Beta2.Inputs.QueryListArgs
        {
            Queries = new[]
            {
                "string",
            },
        },
        ScriptVariables = 
        {
            { "string", "string" },
        },
    },
    RequestId = "string",
    Scheduling = new GoogleNative.Dataproc.V1Beta2.Inputs.JobSchedulingArgs
    {
        MaxFailuresPerHour = 0,
        MaxFailuresTotal = 0,
    },
    SparkJob = new GoogleNative.Dataproc.V1Beta2.Inputs.SparkJobArgs
    {
        ArchiveUris = new[]
        {
            "string",
        },
        Args = new[]
        {
            "string",
        },
        FileUris = new[]
        {
            "string",
        },
        JarFileUris = new[]
        {
            "string",
        },
        LoggingConfig = new GoogleNative.Dataproc.V1Beta2.Inputs.LoggingConfigArgs
        {
            DriverLogLevels = 
            {
                { "string", "string" },
            },
        },
        MainClass = "string",
        MainJarFileUri = "string",
        Properties = 
        {
            { "string", "string" },
        },
    },
    SparkRJob = new GoogleNative.Dataproc.V1Beta2.Inputs.SparkRJobArgs
    {
        MainRFileUri = "string",
        ArchiveUris = new[]
        {
            "string",
        },
        Args = new[]
        {
            "string",
        },
        FileUris = new[]
        {
            "string",
        },
        LoggingConfig = new GoogleNative.Dataproc.V1Beta2.Inputs.LoggingConfigArgs
        {
            DriverLogLevels = 
            {
                { "string", "string" },
            },
        },
        Properties = 
        {
            { "string", "string" },
        },
    },
    SparkSqlJob = new GoogleNative.Dataproc.V1Beta2.Inputs.SparkSqlJobArgs
    {
        JarFileUris = new[]
        {
            "string",
        },
        LoggingConfig = new GoogleNative.Dataproc.V1Beta2.Inputs.LoggingConfigArgs
        {
            DriverLogLevels = 
            {
                { "string", "string" },
            },
        },
        Properties = 
        {
            { "string", "string" },
        },
        QueryFileUri = "string",
        QueryList = new GoogleNative.Dataproc.V1Beta2.Inputs.QueryListArgs
        {
            Queries = new[]
            {
                "string",
            },
        },
        ScriptVariables = 
        {
            { "string", "string" },
        },
    },
});
example, err := dataproc.NewJob(ctx, "examplejobResourceResourceFromDataprocv1beta2", &dataproc.JobArgs{
	Placement: &dataproc.JobPlacementArgs{
		ClusterName: pulumi.String("string"),
		ClusterLabels: pulumi.StringMap{
			"string": pulumi.String("string"),
		},
	},
	Region: pulumi.String("string"),
	PigJob: &dataproc.PigJobArgs{
		ContinueOnFailure: pulumi.Bool(false),
		JarFileUris: pulumi.StringArray{
			pulumi.String("string"),
		},
		LoggingConfig: &dataproc.LoggingConfigArgs{
			DriverLogLevels: pulumi.StringMap{
				"string": pulumi.String("string"),
			},
		},
		Properties: pulumi.StringMap{
			"string": pulumi.String("string"),
		},
		QueryFileUri: pulumi.String("string"),
		QueryList: &dataproc.QueryListArgs{
			Queries: pulumi.StringArray{
				pulumi.String("string"),
			},
		},
		ScriptVariables: pulumi.StringMap{
			"string": pulumi.String("string"),
		},
	},
	HadoopJob: &dataproc.HadoopJobArgs{
		ArchiveUris: pulumi.StringArray{
			pulumi.String("string"),
		},
		Args: pulumi.StringArray{
			pulumi.String("string"),
		},
		FileUris: pulumi.StringArray{
			pulumi.String("string"),
		},
		JarFileUris: pulumi.StringArray{
			pulumi.String("string"),
		},
		LoggingConfig: &dataproc.LoggingConfigArgs{
			DriverLogLevels: pulumi.StringMap{
				"string": pulumi.String("string"),
			},
		},
		MainClass:      pulumi.String("string"),
		MainJarFileUri: pulumi.String("string"),
		Properties: pulumi.StringMap{
			"string": pulumi.String("string"),
		},
	},
	Labels: pulumi.StringMap{
		"string": pulumi.String("string"),
	},
	PrestoJob: &dataproc.PrestoJobArgs{
		ClientTags: pulumi.StringArray{
			pulumi.String("string"),
		},
		ContinueOnFailure: pulumi.Bool(false),
		LoggingConfig: &dataproc.LoggingConfigArgs{
			DriverLogLevels: pulumi.StringMap{
				"string": pulumi.String("string"),
			},
		},
		OutputFormat: pulumi.String("string"),
		Properties: pulumi.StringMap{
			"string": pulumi.String("string"),
		},
		QueryFileUri: pulumi.String("string"),
		QueryList: &dataproc.QueryListArgs{
			Queries: pulumi.StringArray{
				pulumi.String("string"),
			},
		},
	},
	Project: pulumi.String("string"),
	PysparkJob: &dataproc.PySparkJobArgs{
		MainPythonFileUri: pulumi.String("string"),
		ArchiveUris: pulumi.StringArray{
			pulumi.String("string"),
		},
		Args: pulumi.StringArray{
			pulumi.String("string"),
		},
		FileUris: pulumi.StringArray{
			pulumi.String("string"),
		},
		JarFileUris: pulumi.StringArray{
			pulumi.String("string"),
		},
		LoggingConfig: &dataproc.LoggingConfigArgs{
			DriverLogLevels: pulumi.StringMap{
				"string": pulumi.String("string"),
			},
		},
		Properties: pulumi.StringMap{
			"string": pulumi.String("string"),
		},
		PythonFileUris: pulumi.StringArray{
			pulumi.String("string"),
		},
	},
	Reference: &dataproc.JobReferenceArgs{
		JobId:   pulumi.String("string"),
		Project: pulumi.String("string"),
	},
	HiveJob: &dataproc.HiveJobArgs{
		ContinueOnFailure: pulumi.Bool(false),
		JarFileUris: pulumi.StringArray{
			pulumi.String("string"),
		},
		Properties: pulumi.StringMap{
			"string": pulumi.String("string"),
		},
		QueryFileUri: pulumi.String("string"),
		QueryList: &dataproc.QueryListArgs{
			Queries: pulumi.StringArray{
				pulumi.String("string"),
			},
		},
		ScriptVariables: pulumi.StringMap{
			"string": pulumi.String("string"),
		},
	},
	RequestId: pulumi.String("string"),
	Scheduling: &dataproc.JobSchedulingArgs{
		MaxFailuresPerHour: pulumi.Int(0),
		MaxFailuresTotal:   pulumi.Int(0),
	},
	SparkJob: &dataproc.SparkJobArgs{
		ArchiveUris: pulumi.StringArray{
			pulumi.String("string"),
		},
		Args: pulumi.StringArray{
			pulumi.String("string"),
		},
		FileUris: pulumi.StringArray{
			pulumi.String("string"),
		},
		JarFileUris: pulumi.StringArray{
			pulumi.String("string"),
		},
		LoggingConfig: &dataproc.LoggingConfigArgs{
			DriverLogLevels: pulumi.StringMap{
				"string": pulumi.String("string"),
			},
		},
		MainClass:      pulumi.String("string"),
		MainJarFileUri: pulumi.String("string"),
		Properties: pulumi.StringMap{
			"string": pulumi.String("string"),
		},
	},
	SparkRJob: &dataproc.SparkRJobArgs{
		MainRFileUri: pulumi.String("string"),
		ArchiveUris: pulumi.StringArray{
			pulumi.String("string"),
		},
		Args: pulumi.StringArray{
			pulumi.String("string"),
		},
		FileUris: pulumi.StringArray{
			pulumi.String("string"),
		},
		LoggingConfig: &dataproc.LoggingConfigArgs{
			DriverLogLevels: pulumi.StringMap{
				"string": pulumi.String("string"),
			},
		},
		Properties: pulumi.StringMap{
			"string": pulumi.String("string"),
		},
	},
	SparkSqlJob: &dataproc.SparkSqlJobArgs{
		JarFileUris: pulumi.StringArray{
			pulumi.String("string"),
		},
		LoggingConfig: &dataproc.LoggingConfigArgs{
			DriverLogLevels: pulumi.StringMap{
				"string": pulumi.String("string"),
			},
		},
		Properties: pulumi.StringMap{
			"string": pulumi.String("string"),
		},
		QueryFileUri: pulumi.String("string"),
		QueryList: &dataproc.QueryListArgs{
			Queries: pulumi.StringArray{
				pulumi.String("string"),
			},
		},
		ScriptVariables: pulumi.StringMap{
			"string": pulumi.String("string"),
		},
	},
})
var examplejobResourceResourceFromDataprocv1beta2 = new Job("examplejobResourceResourceFromDataprocv1beta2", JobArgs.builder()
    .placement(JobPlacementArgs.builder()
        .clusterName("string")
        .clusterLabels(Map.of("string", "string"))
        .build())
    .region("string")
    .pigJob(PigJobArgs.builder()
        .continueOnFailure(false)
        .jarFileUris("string")
        .loggingConfig(LoggingConfigArgs.builder()
            .driverLogLevels(Map.of("string", "string"))
            .build())
        .properties(Map.of("string", "string"))
        .queryFileUri("string")
        .queryList(QueryListArgs.builder()
            .queries("string")
            .build())
        .scriptVariables(Map.of("string", "string"))
        .build())
    .hadoopJob(HadoopJobArgs.builder()
        .archiveUris("string")
        .args("string")
        .fileUris("string")
        .jarFileUris("string")
        .loggingConfig(LoggingConfigArgs.builder()
            .driverLogLevels(Map.of("string", "string"))
            .build())
        .mainClass("string")
        .mainJarFileUri("string")
        .properties(Map.of("string", "string"))
        .build())
    .labels(Map.of("string", "string"))
    .prestoJob(PrestoJobArgs.builder()
        .clientTags("string")
        .continueOnFailure(false)
        .loggingConfig(LoggingConfigArgs.builder()
            .driverLogLevels(Map.of("string", "string"))
            .build())
        .outputFormat("string")
        .properties(Map.of("string", "string"))
        .queryFileUri("string")
        .queryList(QueryListArgs.builder()
            .queries("string")
            .build())
        .build())
    .project("string")
    .pysparkJob(PySparkJobArgs.builder()
        .mainPythonFileUri("string")
        .archiveUris("string")
        .args("string")
        .fileUris("string")
        .jarFileUris("string")
        .loggingConfig(LoggingConfigArgs.builder()
            .driverLogLevels(Map.of("string", "string"))
            .build())
        .properties(Map.of("string", "string"))
        .pythonFileUris("string")
        .build())
    .reference(JobReferenceArgs.builder()
        .jobId("string")
        .project("string")
        .build())
    .hiveJob(HiveJobArgs.builder()
        .continueOnFailure(false)
        .jarFileUris("string")
        .properties(Map.of("string", "string"))
        .queryFileUri("string")
        .queryList(QueryListArgs.builder()
            .queries("string")
            .build())
        .scriptVariables(Map.of("string", "string"))
        .build())
    .requestId("string")
    .scheduling(JobSchedulingArgs.builder()
        .maxFailuresPerHour(0)
        .maxFailuresTotal(0)
        .build())
    .sparkJob(SparkJobArgs.builder()
        .archiveUris("string")
        .args("string")
        .fileUris("string")
        .jarFileUris("string")
        .loggingConfig(LoggingConfigArgs.builder()
            .driverLogLevels(Map.of("string", "string"))
            .build())
        .mainClass("string")
        .mainJarFileUri("string")
        .properties(Map.of("string", "string"))
        .build())
    .sparkRJob(SparkRJobArgs.builder()
        .mainRFileUri("string")
        .archiveUris("string")
        .args("string")
        .fileUris("string")
        .loggingConfig(LoggingConfigArgs.builder()
            .driverLogLevels(Map.of("string", "string"))
            .build())
        .properties(Map.of("string", "string"))
        .build())
    .sparkSqlJob(SparkSqlJobArgs.builder()
        .jarFileUris("string")
        .loggingConfig(LoggingConfigArgs.builder()
            .driverLogLevels(Map.of("string", "string"))
            .build())
        .properties(Map.of("string", "string"))
        .queryFileUri("string")
        .queryList(QueryListArgs.builder()
            .queries("string")
            .build())
        .scriptVariables(Map.of("string", "string"))
        .build())
    .build());
examplejob_resource_resource_from_dataprocv1beta2 = google_native.dataproc.v1beta2.Job("examplejobResourceResourceFromDataprocv1beta2",
    placement={
        "cluster_name": "string",
        "cluster_labels": {
            "string": "string",
        },
    },
    region="string",
    pig_job={
        "continue_on_failure": False,
        "jar_file_uris": ["string"],
        "logging_config": {
            "driver_log_levels": {
                "string": "string",
            },
        },
        "properties": {
            "string": "string",
        },
        "query_file_uri": "string",
        "query_list": {
            "queries": ["string"],
        },
        "script_variables": {
            "string": "string",
        },
    },
    hadoop_job={
        "archive_uris": ["string"],
        "args": ["string"],
        "file_uris": ["string"],
        "jar_file_uris": ["string"],
        "logging_config": {
            "driver_log_levels": {
                "string": "string",
            },
        },
        "main_class": "string",
        "main_jar_file_uri": "string",
        "properties": {
            "string": "string",
        },
    },
    labels={
        "string": "string",
    },
    presto_job={
        "client_tags": ["string"],
        "continue_on_failure": False,
        "logging_config": {
            "driver_log_levels": {
                "string": "string",
            },
        },
        "output_format": "string",
        "properties": {
            "string": "string",
        },
        "query_file_uri": "string",
        "query_list": {
            "queries": ["string"],
        },
    },
    project="string",
    pyspark_job={
        "main_python_file_uri": "string",
        "archive_uris": ["string"],
        "args": ["string"],
        "file_uris": ["string"],
        "jar_file_uris": ["string"],
        "logging_config": {
            "driver_log_levels": {
                "string": "string",
            },
        },
        "properties": {
            "string": "string",
        },
        "python_file_uris": ["string"],
    },
    reference={
        "job_id": "string",
        "project": "string",
    },
    hive_job={
        "continue_on_failure": False,
        "jar_file_uris": ["string"],
        "properties": {
            "string": "string",
        },
        "query_file_uri": "string",
        "query_list": {
            "queries": ["string"],
        },
        "script_variables": {
            "string": "string",
        },
    },
    request_id="string",
    scheduling={
        "max_failures_per_hour": 0,
        "max_failures_total": 0,
    },
    spark_job={
        "archive_uris": ["string"],
        "args": ["string"],
        "file_uris": ["string"],
        "jar_file_uris": ["string"],
        "logging_config": {
            "driver_log_levels": {
                "string": "string",
            },
        },
        "main_class": "string",
        "main_jar_file_uri": "string",
        "properties": {
            "string": "string",
        },
    },
    spark_r_job={
        "main_r_file_uri": "string",
        "archive_uris": ["string"],
        "args": ["string"],
        "file_uris": ["string"],
        "logging_config": {
            "driver_log_levels": {
                "string": "string",
            },
        },
        "properties": {
            "string": "string",
        },
    },
    spark_sql_job={
        "jar_file_uris": ["string"],
        "logging_config": {
            "driver_log_levels": {
                "string": "string",
            },
        },
        "properties": {
            "string": "string",
        },
        "query_file_uri": "string",
        "query_list": {
            "queries": ["string"],
        },
        "script_variables": {
            "string": "string",
        },
    })
const examplejobResourceResourceFromDataprocv1beta2 = new google_native.dataproc.v1beta2.Job("examplejobResourceResourceFromDataprocv1beta2", {
    placement: {
        clusterName: "string",
        clusterLabels: {
            string: "string",
        },
    },
    region: "string",
    pigJob: {
        continueOnFailure: false,
        jarFileUris: ["string"],
        loggingConfig: {
            driverLogLevels: {
                string: "string",
            },
        },
        properties: {
            string: "string",
        },
        queryFileUri: "string",
        queryList: {
            queries: ["string"],
        },
        scriptVariables: {
            string: "string",
        },
    },
    hadoopJob: {
        archiveUris: ["string"],
        args: ["string"],
        fileUris: ["string"],
        jarFileUris: ["string"],
        loggingConfig: {
            driverLogLevels: {
                string: "string",
            },
        },
        mainClass: "string",
        mainJarFileUri: "string",
        properties: {
            string: "string",
        },
    },
    labels: {
        string: "string",
    },
    prestoJob: {
        clientTags: ["string"],
        continueOnFailure: false,
        loggingConfig: {
            driverLogLevels: {
                string: "string",
            },
        },
        outputFormat: "string",
        properties: {
            string: "string",
        },
        queryFileUri: "string",
        queryList: {
            queries: ["string"],
        },
    },
    project: "string",
    pysparkJob: {
        mainPythonFileUri: "string",
        archiveUris: ["string"],
        args: ["string"],
        fileUris: ["string"],
        jarFileUris: ["string"],
        loggingConfig: {
            driverLogLevels: {
                string: "string",
            },
        },
        properties: {
            string: "string",
        },
        pythonFileUris: ["string"],
    },
    reference: {
        jobId: "string",
        project: "string",
    },
    hiveJob: {
        continueOnFailure: false,
        jarFileUris: ["string"],
        properties: {
            string: "string",
        },
        queryFileUri: "string",
        queryList: {
            queries: ["string"],
        },
        scriptVariables: {
            string: "string",
        },
    },
    requestId: "string",
    scheduling: {
        maxFailuresPerHour: 0,
        maxFailuresTotal: 0,
    },
    sparkJob: {
        archiveUris: ["string"],
        args: ["string"],
        fileUris: ["string"],
        jarFileUris: ["string"],
        loggingConfig: {
            driverLogLevels: {
                string: "string",
            },
        },
        mainClass: "string",
        mainJarFileUri: "string",
        properties: {
            string: "string",
        },
    },
    sparkRJob: {
        mainRFileUri: "string",
        archiveUris: ["string"],
        args: ["string"],
        fileUris: ["string"],
        loggingConfig: {
            driverLogLevels: {
                string: "string",
            },
        },
        properties: {
            string: "string",
        },
    },
    sparkSqlJob: {
        jarFileUris: ["string"],
        loggingConfig: {
            driverLogLevels: {
                string: "string",
            },
        },
        properties: {
            string: "string",
        },
        queryFileUri: "string",
        queryList: {
            queries: ["string"],
        },
        scriptVariables: {
            string: "string",
        },
    },
});
type: google-native:dataproc/v1beta2:Job
properties:
    hadoopJob:
        archiveUris:
            - string
        args:
            - string
        fileUris:
            - string
        jarFileUris:
            - string
        loggingConfig:
            driverLogLevels:
                string: string
        mainClass: string
        mainJarFileUri: string
        properties:
            string: string
    hiveJob:
        continueOnFailure: false
        jarFileUris:
            - string
        properties:
            string: string
        queryFileUri: string
        queryList:
            queries:
                - string
        scriptVariables:
            string: string
    labels:
        string: string
    pigJob:
        continueOnFailure: false
        jarFileUris:
            - string
        loggingConfig:
            driverLogLevels:
                string: string
        properties:
            string: string
        queryFileUri: string
        queryList:
            queries:
                - string
        scriptVariables:
            string: string
    placement:
        clusterLabels:
            string: string
        clusterName: string
    prestoJob:
        clientTags:
            - string
        continueOnFailure: false
        loggingConfig:
            driverLogLevels:
                string: string
        outputFormat: string
        properties:
            string: string
        queryFileUri: string
        queryList:
            queries:
                - string
    project: string
    pysparkJob:
        archiveUris:
            - string
        args:
            - string
        fileUris:
            - string
        jarFileUris:
            - string
        loggingConfig:
            driverLogLevels:
                string: string
        mainPythonFileUri: string
        properties:
            string: string
        pythonFileUris:
            - string
    reference:
        jobId: string
        project: string
    region: string
    requestId: string
    scheduling:
        maxFailuresPerHour: 0
        maxFailuresTotal: 0
    sparkJob:
        archiveUris:
            - string
        args:
            - string
        fileUris:
            - string
        jarFileUris:
            - string
        loggingConfig:
            driverLogLevels:
                string: string
        mainClass: string
        mainJarFileUri: string
        properties:
            string: string
    sparkRJob:
        archiveUris:
            - string
        args:
            - string
        fileUris:
            - string
        loggingConfig:
            driverLogLevels:
                string: string
        mainRFileUri: string
        properties:
            string: string
    sparkSqlJob:
        jarFileUris:
            - string
        loggingConfig:
            driverLogLevels:
                string: string
        properties:
            string: string
        queryFileUri: string
        queryList:
            queries:
                - string
        scriptVariables:
            string: string
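The reference examples above fill every input with placeholder values; a real job usually sets exactly one of the job-type inputs. A minimal TypeScript sketch, assuming an existing cluster named my-cluster in project my-project and a PySpark script already uploaded to gs://my-bucket/scripts/word_count.py (all placeholder names), might look like this:
import * as google_native from "@pulumi/google-native";

// Submit a PySpark job to an existing Dataproc cluster.
// Project, region, cluster, and bucket names are placeholders.
const wordCount = new google_native.dataproc.v1beta2.Job("word-count", {
    project: "my-project",
    region: "us-central1",
    placement: {
        clusterName: "my-cluster",
    },
    pysparkJob: {
        mainPythonFileUri: "gs://my-bucket/scripts/word_count.py",
        args: ["gs://my-bucket/input/", "gs://my-bucket/output/"],
    },
    labels: {
        team: "data-eng",
    },
});

// The location of the driver's stdout becomes available once the job runs.
export const driverOutputUri = wordCount.driverOutputResourceUri;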
Job Resource Properties
To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.
Inputs
In Python, inputs that are objects can be passed either as argument classes or as dictionary literals.
The Job resource accepts the following input properties:
- Placement
Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.JobPlacement
- Job information, including how, when, and where to run the job.
- Region string
- HadoopJob Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.HadoopJob
- Optional. Job is a Hadoop job.
- HiveJob Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.HiveJob
- Optional. Job is a Hive job.
- Labels Dictionary<string, string>
- Optional. The labels to associate with this job. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a job.
- PigJob Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.PigJob
- Optional. Job is a Pig job.
- PrestoJob Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.PrestoJob
- Optional. Job is a Presto job.
- Project string
- PysparkJob Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.PySparkJob
- Optional. Job is a PySpark job.
- Reference
Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.JobReference
- Optional. The fully qualified reference to the job, which can be used to obtain the equivalent REST path of the job resource. If this property is not specified when a job is created, the server generates a job_id.
- RequestId string
- Optional. A unique id used to identify the request. If the server receives two SubmitJobRequest (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1beta2#google.cloud.dataproc.v1.SubmitJobRequest)s with the same id, then the second request will be ignored and the first Job created and stored in the backend is returned.It is recommended to always set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier).The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
- Scheduling
Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.JobScheduling
- Optional. Job scheduling configuration.
- SparkJob Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.SparkJob
- Optional. Job is a Spark job.
- SparkRJob Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.SparkRJob
- Optional. Job is a SparkR job.
- SparkSqlJob Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.SparkSqlJob
- Optional. Job is a SparkSql job.
- Placement
JobPlacementArgs
- Job information, including how, when, and where to run the job.
- Region string
- HadoopJob HadoopJobArgs
- Optional. Job is a Hadoop job.
- HiveJob HiveJobArgs
- Optional. Job is a Hive job.
- Labels map[string]string
- Optional. The labels to associate with this job. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a job.
- PigJob PigJobArgs
- Optional. Job is a Pig job.
- PrestoJob PrestoJobArgs
- Optional. Job is a Presto job.
- Project string
- PysparkJob PySparkJobArgs
- Optional. Job is a PySpark job.
- Reference
JobReferenceArgs
- Optional. The fully qualified reference to the job, which can be used to obtain the equivalent REST path of the job resource. If this property is not specified when a job is created, the server generates a job_id.
- RequestId string
- Optional. A unique id used to identify the request. If the server receives two SubmitJobRequest (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1beta2#google.cloud.dataproc.v1.SubmitJobRequest)s with the same id, then the second request will be ignored and the first Job created and stored in the backend is returned.It is recommended to always set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier).The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
- Scheduling
JobSchedulingArgs
- Optional. Job scheduling configuration.
- SparkJob SparkJobArgs
- Optional. Job is a Spark job.
- SparkRJob SparkRJobArgs
- Optional. Job is a SparkR job.
- SparkSqlJob SparkSqlJobArgs
- Optional. Job is a SparkSql job.
- placement
JobPlacement 
- Job information, including how, when, and where to run the job.
- region String
- hadoopJob HadoopJob 
- Optional. Job is a Hadoop job.
- hiveJob HiveJob 
- Optional. Job is a Hive job.
- labels Map<String,String>
- Optional. The labels to associate with this job. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a job.
- pigJob PigJob 
- Optional. Job is a Pig job.
- prestoJob PrestoJob 
- Optional. Job is a Presto job.
- project String
- pysparkJob PySparkJob
- Optional. Job is a PySpark job.
- reference
JobReference 
- Optional. The fully qualified reference to the job, which can be used to obtain the equivalent REST path of the job resource. If this property is not specified when a job is created, the server generates a job_id.
- requestId String
- Optional. A unique id used to identify the request. If the server receives two SubmitJobRequest (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1beta2#google.cloud.dataproc.v1.SubmitJobRequest)s with the same id, then the second request will be ignored and the first Job created and stored in the backend is returned.It is recommended to always set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier).The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
- scheduling
JobScheduling 
- Optional. Job scheduling configuration.
- sparkJob SparkJob 
- Optional. Job is a Spark job.
- sparkRJob SparkRJob 
- Optional. Job is a SparkR job.
- sparkSqlJob SparkSqlJob
- Optional. Job is a SparkSql job.
- placement
JobPlacement 
- Job information, including how, when, and where to run the job.
- region string
- hadoopJob HadoopJob 
- Optional. Job is a Hadoop job.
- hiveJob HiveJob 
- Optional. Job is a Hive job.
- labels {[key: string]: string}
- Optional. The labels to associate with this job. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a job.
- pigJob PigJob 
- Optional. Job is a Pig job.
- prestoJob PrestoJob 
- Optional. Job is a Presto job.
- project string
- pysparkJob PySparkJob
- Optional. Job is a PySpark job.
- reference
JobReference 
- Optional. The fully qualified reference to the job, which can be used to obtain the equivalent REST path of the job resource. If this property is not specified when a job is created, the server generates a job_id.
- requestId string
- Optional. A unique id used to identify the request. If the server receives two SubmitJobRequest (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1beta2#google.cloud.dataproc.v1.SubmitJobRequest)s with the same id, then the second request will be ignored and the first Job created and stored in the backend is returned.It is recommended to always set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier).The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
- scheduling
JobScheduling 
- Optional. Job scheduling configuration.
- sparkJob SparkJob 
- Optional. Job is a Spark job.
- sparkRJob SparkRJob 
- Optional. Job is a SparkR job.
- sparkSqlJob SparkSqlJob
- Optional. Job is a SparkSql job.
- placement
JobPlacementArgs
- Job information, including how, when, and where to run the job.
- region str
- hadoop_job HadoopJobArgs
- Optional. Job is a Hadoop job.
- hive_job HiveJobArgs
- Optional. Job is a Hive job.
- labels Mapping[str, str]
- Optional. The labels to associate with this job. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a job.
- pig_job PigJobArgs
- Optional. Job is a Pig job.
- presto_job PrestoJobArgs
- Optional. Job is a Presto job.
- project str
- pyspark_job PySparkJobArgs
- Optional. Job is a PySpark job.
- reference
JobReferenceArgs
- Optional. The fully qualified reference to the job, which can be used to obtain the equivalent REST path of the job resource. If this property is not specified when a job is created, the server generates a job_id.
- request_id str
- Optional. A unique id used to identify the request. If the server receives two SubmitJobRequest (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1beta2#google.cloud.dataproc.v1.SubmitJobRequest)s with the same id, then the second request will be ignored and the first Job created and stored in the backend is returned.It is recommended to always set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier).The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
- scheduling
JobSchedulingArgs
- Optional. Job scheduling configuration.
- spark_job SparkJobArgs
- Optional. Job is a Spark job.
- spark_r_job SparkRJobArgs
- Optional. Job is a SparkR job.
- spark_sql_job SparkSqlJobArgs
- Optional. Job is a SparkSql job.
- placement Property Map
- Job information, including how, when, and where to run the job.
- region String
- hadoopJob Property Map
- Optional. Job is a Hadoop job.
- hiveJob Property Map
- Optional. Job is a Hive job.
- labels Map<String>
- Optional. The labels to associate with this job. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a job.
- pigJob Property Map
- Optional. Job is a Pig job.
- prestoJob Property Map
- Optional. Job is a Presto job.
- project String
- pysparkJob Property Map
- Optional. Job is a PySpark job.
- reference Property Map
- Optional. The fully qualified reference to the job, which can be used to obtain the equivalent REST path of the job resource. If this property is not specified when a job is created, the server generates a job_id.
- requestId String
- Optional. A unique id used to identify the request. If the server receives two SubmitJobRequest (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1beta2#google.cloud.dataproc.v1.SubmitJobRequest)s with the same id, then the second request will be ignored and the first Job created and stored in the backend is returned.It is recommended to always set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier).The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
- scheduling Property Map
- Optional. Job scheduling configuration.
- sparkJob Property Map
- Optional. Job is a Spark job.
- sparkRJob Property Map
- Optional. Job is a SparkR job.
- sparkSqlJob Property Map
- Optional. Job is a SparkSql job.
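Because requestId should be a stable UUID for duplicate submissions to be deduplicated, one approach (a TypeScript sketch, not the only option) is to pin the id in stack configuration and pass it through; the cluster name, config key, and the Spark examples jar path below are assumptions for a typical Dataproc image:
import * as pulumi from "@pulumi/pulumi";
import * as google_native from "@pulumi/google-native";

const config = new pulumi.Config();
// e.g. run `pulumi config set requestId $(uuidgen)` once per logical submission,
// so retries of the same request are ignored by the service.
const requestId = config.get("requestId");

const pi = new google_native.dataproc.v1beta2.Job("pi", {
    region: "us-central1",
    placement: { clusterName: "my-cluster" }, // placeholder cluster name
    sparkJob: {
        mainClass: "org.apache.spark.examples.SparkPi",
        jarFileUris: ["file:///usr/lib/spark/examples/jars/spark-examples.jar"],
        args: ["1000"],
    },
    requestId: requestId,
});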
Outputs
All input properties are implicitly available as output properties. Additionally, the Job resource produces the following output properties:
- Done bool
- Indicates whether the job is completed. If the value is false, the job is still in progress. If true, the job is completed, and status.state field will indicate if it was successful, failed, or cancelled.
- DriverControlFilesUri string
- If present, the location of miscellaneous control files which may be used as part of job setup and handling. If not present, control files may be placed in the same location as driver_output_uri.
- DriverOutputResourceUri string
- A URI pointing to the location of the stdout of the job's driver program.
- Id string
- The provider-assigned unique ID for this managed resource.
- JobUuid string
- A UUID that uniquely identifies a job within the project over time. This is in contrast to a user-settable reference.job_id that may be reused over time.
- Status
Pulumi.GoogleNative.Dataproc.V1Beta2.Outputs.JobStatusResponse
- The job status. Additional application-specific status information may be contained in the type_job and yarn_applications fields.
- StatusHistory List<Pulumi.GoogleNative.Dataproc.V1Beta2.Outputs.JobStatusResponse>
- The previous job status.
- SubmittedBy string
- The email address of the user submitting the job. For jobs submitted on the cluster, the address is username@hostname.
- YarnApplications List<Pulumi.GoogleNative.Dataproc.V1Beta2.Outputs.YarnApplicationResponse>
- The collection of YARN applications spun up by this job.Beta Feature: This report is available for testing purposes only. It may be changed before final release.
- Done bool
- Indicates whether the job is completed. If the value is false, the job is still in progress. If true, the job is completed, and status.state field will indicate if it was successful, failed, or cancelled.
- DriverControlFilesUri string
- If present, the location of miscellaneous control files which may be used as part of job setup and handling. If not present, control files may be placed in the same location as driver_output_uri.
- DriverOutputResourceUri string
- A URI pointing to the location of the stdout of the job's driver program.
- Id string
- The provider-assigned unique ID for this managed resource.
- JobUuid string
- A UUID that uniquely identifies a job within the project over time. This is in contrast to a user-settable reference.job_id that may be reused over time.
- Status
JobStatusResponse
- The job status. Additional application-specific status information may be contained in the type_job and yarn_applications fields.
- StatusHistory []JobStatusResponse
- The previous job status.
- SubmittedBy string
- The email address of the user submitting the job. For jobs submitted on the cluster, the address is username@hostname.
- YarnApplications []YarnApplicationResponse
- The collection of YARN applications spun up by this job.Beta Feature: This report is available for testing purposes only. It may be changed before final release.
- done Boolean
- Indicates whether the job is completed. If the value is false, the job is still in progress. If true, the job is completed, and status.state field will indicate if it was successful, failed, or cancelled.
- driverControlFilesUri String
- If present, the location of miscellaneous control files which may be used as part of job setup and handling. If not present, control files may be placed in the same location as driver_output_uri.
- driverOutputResourceUri String
- A URI pointing to the location of the stdout of the job's driver program.
- id String
- The provider-assigned unique ID for this managed resource.
- jobUuid String
- A UUID that uniquely identifies a job within the project over time. This is in contrast to a user-settable reference.job_id that may be reused over time.
- status
JobStatusResponse
- The job status. Additional application-specific status information may be contained in the type_job and yarn_applications fields.
- statusHistory List<JobStatusResponse>
- The previous job status.
- submittedBy String
- The email address of the user submitting the job. For jobs submitted on the cluster, the address is username@hostname.
- yarnApplications List<YarnApplicationResponse>
- The collection of YARN applications spun up by this job.Beta Feature: This report is available for testing purposes only. It may be changed before final release.
- done boolean
- Indicates whether the job is completed. If the value is false, the job is still in progress. If true, the job is completed, and status.state field will indicate if it was successful, failed, or cancelled.
- driverControlFilesUri string
- If present, the location of miscellaneous control files which may be used as part of job setup and handling. If not present, control files may be placed in the same location as driver_output_uri.
- driverOutputResourceUri string
- A URI pointing to the location of the stdout of the job's driver program.
- id string
- The provider-assigned unique ID for this managed resource.
- jobUuid string
- A UUID that uniquely identifies a job within the project over time. This is in contrast to a user-settable reference.job_id that may be reused over time.
- status
JobStatusResponse
- The job status. Additional application-specific status information may be contained in the type_job and yarn_applications fields.
- statusHistory JobStatusResponse[]
- The previous job status.
- submittedBy string
- The email address of the user submitting the job. For jobs submitted on the cluster, the address is username@hostname.
- yarnApplications YarnApplicationResponse[]
- The collection of YARN applications spun up by this job.Beta Feature: This report is available for testing purposes only. It may be changed before final release.
- done bool
- Indicates whether the job is completed. If the value is false, the job is still in progress. If true, the job is completed, and status.state field will indicate if it was successful, failed, or cancelled.
- driver_control_files_uri str
- If present, the location of miscellaneous control files which may be used as part of job setup and handling. If not present, control files may be placed in the same location as driver_output_uri.
- driver_output_resource_uri str
- A URI pointing to the location of the stdout of the job's driver program.
- id str
- The provider-assigned unique ID for this managed resource.
- job_uuid str
- A UUID that uniquely identifies a job within the project over time. This is in contrast to a user-settable reference.job_id that may be reused over time.
- status
JobStatusResponse
- The job status. Additional application-specific status information may be contained in the type_job and yarn_applications fields.
- status_history Sequence[JobStatusResponse]
- The previous job status.
- submitted_by str
- The email address of the user submitting the job. For jobs submitted on the cluster, the address is username@hostname.
- yarn_applications Sequence[YarnApplicationResponse]
- The collection of YARN applications spun up by this job.Beta Feature: This report is available for testing purposes only. It may be changed before final release.
- done Boolean
- Indicates whether the job is completed. If the value is false, the job is still in progress. If true, the job is completed, and status.state field will indicate if it was successful, failed, or cancelled.
- driverControlFilesUri String
- If present, the location of miscellaneous control files which may be used as part of job setup and handling. If not present, control files may be placed in the same location as driver_output_uri.
- driverOutputResourceUri String
- A URI pointing to the location of the stdout of the job's driver program.
- id String
- The provider-assigned unique ID for this managed resource.
- jobUuid String
- A UUID that uniquely identifies a job within the project over time. This is in contrast to a user-settable reference.job_id that may be reused over time.
- status Property Map
- The job status. Additional application-specific status information may be contained in the type_job and yarn_applications fields.
- statusHistory List<Property Map>
- The previous job status.
- submittedBy String
- The email address of the user submitting the job. For jobs submitted on the cluster, the address is username@hostname.
- yarnApplications List<Property Map>
- The collection of YARN applications spun up by this job.Beta Feature: This report is available for testing purposes only. It may be changed before final release.
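Output properties can be consumed like any other Pulumi outputs. Continuing the TypeScript sketch above (assuming a Job resource named wordCount), the final state and submitting user could be exported as stack outputs:
// Surface the job's terminal state and the submitting user as stack outputs.
export const jobState = wordCount.status.apply(s => s.state);
export const submittedBy = wordCount.submittedBy;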
Supporting Types
HadoopJob, HadoopJobArgs    
- ArchiveUris List<string>
- Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
- Args List<string>
- Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- FileUris List<string>
- Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
- JarFileUris List<string>
- Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
- LoggingConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.LoggingConfig
- Optional. The runtime log config for job execution.
- MainClass string
- The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
- MainJarFileUri string
- The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
- Properties Dictionary<string, string>
- Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
- ArchiveUris []string
- Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
- Args []string
- Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- FileUris []string
- Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
- JarFileUris []string
- Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
- LoggingConfig LoggingConfig 
- Optional. The runtime log config for job execution.
- MainClass string
- The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
- MainJarFileUri string
- The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
- Properties map[string]string
- Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
- archiveUris List<String>
- Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
- args List<String>
- Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- fileUris List<String>
- Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
- jarFileUris List<String>
- Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
- loggingConfig LoggingConfig 
- Optional. The runtime log config for job execution.
- mainClass String
- The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
- mainJarFileUri String
- The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
- properties Map<String,String>
- Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
- archiveUris string[]
- Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
- args string[]
- Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- fileUris string[]
- Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
- jarFileUris string[]
- Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
- loggingConfig LoggingConfig 
- Optional. The runtime log config for job execution.
- mainClass string
- The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
- mainJarFileUri string
- The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
- properties {[key: string]: string}
- Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
- archive_uris Sequence[str]
- Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
- args Sequence[str]
- Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- file_uris Sequence[str]
- Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
- jar_file_uris Sequence[str]
- Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
- logging_config LoggingConfig 
- Optional. The runtime log config for job execution.
- main_class str
- The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
- main_jar_file_uri str
- The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
- properties Mapping[str, str]
- Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
- archiveUris List<String>
- Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
- args List<String>
- Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- fileUris List<String>
- Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
- jarFileUris List<String>
- Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
- loggingConfig Property Map
- Optional. The runtime log config for job execution.
- mainClass String
- The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
- mainJarFileUri String
- The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
- properties Map<String>
- Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
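To make the HadoopJob fields above concrete, here is a minimal TypeScript sketch that submits a MapReduce job, assuming the @pulumi/google-native SDK; the region, cluster name, and gs:// paths are placeholder values rather than part of the reference.
import * as gcpNative from "@pulumi/google-native";

// Minimal Hadoop job: a main jar plus driver arguments.
// "example-cluster", "us-central1", and the gs:// paths are placeholders.
const hadoopExample = new gcpNative.dataproc.v1beta2.Job("wordcount-job", {
    region: "us-central1",
    placement: { clusterName: "example-cluster" },
    hadoopJob: {
        mainJarFileUri: "file:///usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar",
        args: ["wordcount", "gs://example-bucket/input/", "gs://example-bucket/output/"],
        loggingConfig: { driverLogLevels: { root: "INFO" } },
    },
});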
HadoopJobResponse, HadoopJobResponseArgs      
- ArchiveUris List<string>
- Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
- Args List<string>
- Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- FileUris List<string>
- Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
- JarFileUris List<string>
- Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
- LoggingConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.LoggingConfigResponse
- Optional. The runtime log config for job execution.
- MainClass string
- The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
- MainJarFileUri string
- The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
- Properties Dictionary<string, string>
- Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
- ArchiveUris []string
- Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
- Args []string
- Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- FileUris []string
- Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
- JarFileUris []string
- Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
- LoggingConfig LoggingConfigResponse
- Optional. The runtime log config for job execution.
- MainClass string
- The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
- MainJarFileUri string
- The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
- Properties map[string]string
- Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
- archiveUris List<String>
- Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
- args List<String>
- Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- fileUris List<String>
- Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
- jarFileUris List<String>
- Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
- loggingConfig LoggingConfigResponse
- Optional. The runtime log config for job execution.
- mainClass String
- The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
- mainJarFileUri String
- The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
- properties Map<String,String>
- Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
- archiveUris string[]
- Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
- args string[]
- Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- fileUris string[]
- Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
- jarFileUris string[]
- Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
- loggingConfig LoggingConfigResponse
- Optional. The runtime log config for job execution.
- mainClass string
- The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
- mainJarFileUri string
- The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
- properties {[key: string]: string}
- Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
- archive_uris Sequence[str]
- Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
- args Sequence[str]
- Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- file_uris Sequence[str]
- Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
- jar_file_uris Sequence[str]
- Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
- logging_config LoggingConfigResponse
- Optional. The runtime log config for job execution.
- main_class str
- The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
- main_jar_file_uri str
- The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
- properties Mapping[str, str]
- Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
- archiveUris List<String>
- Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
- args List<String>
- Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- fileUris List<String>
- Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
- jarFileUris List<String>
- Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
- loggingConfig Property Map
- Optional. The runtime log config for job execution.
- mainClass String
- The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
- mainJarFileUri String
- The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
- properties Map<String>
- Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
HiveJob, HiveJobArgs    
- ContinueOnFailure bool
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- JarFileUris List<string>
- Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
- Properties Dictionary<string, string>
- Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
- QueryFileUri string
- The HCFS URI of the script that contains Hive queries.
- QueryList Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.QueryList
- A list of queries.
- ScriptVariables Dictionary<string, string>
- Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
- ContinueOnFailure bool
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- JarFileUris []string
- Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
- Properties map[string]string
- Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
- QueryFileUri string
- The HCFS URI of the script that contains Hive queries.
- QueryList QueryList 
- A list of queries.
- ScriptVariables map[string]string
- Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
- continueOnFailure Boolean
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- jarFileUris List<String>
- Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
- properties Map<String,String>
- Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
- queryFileUri String
- The HCFS URI of the script that contains Hive queries.
- queryList QueryList 
- A list of queries.
- scriptVariables Map<String,String>
- Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
- continueOnFailure boolean
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- jarFileUris string[]
- Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
- properties {[key: string]: string}
- Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
- queryFileUri string
- The HCFS URI of the script that contains Hive queries.
- queryList QueryList 
- A list of queries.
- scriptVariables {[key: string]: string}
- Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
- continue_on_failure bool
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- jar_file_uris Sequence[str]
- Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
- properties Mapping[str, str]
- Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
- query_file_uri str
- The HCFS URI of the script that contains Hive queries.
- query_list QueryList 
- A list of queries.
- script_variables Mapping[str, str]
- Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
- continueOnFailure Boolean
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- jarFileUris List<String>
- Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
- properties Map<String>
- Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
- queryFileUri String
- The HCFS URI of the script that contains Hive queries.
- queryList Property Map
- A list of queries.
- scriptVariables Map<String>
- Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
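A minimal TypeScript sketch of the HiveJob fields above, using an inline query list (QueryList's queries field) and a script variable; the cluster, region, table name, and query text are placeholder values.
import * as gcpNative from "@pulumi/google-native";

// Hive job with an inline query list; the hiveconf variable "table" is
// bound via scriptVariables (equivalent to SET table="example_table";).
const hiveExample = new gcpNative.dataproc.v1beta2.Job("hive-example", {
    region: "us-central1",
    placement: { clusterName: "example-cluster" },
    hiveJob: {
        queryList: { queries: ["SHOW TABLES;", "SELECT COUNT(*) FROM ${hiveconf:table};"] },
        scriptVariables: { table: "example_table" },
        continueOnFailure: false,
    },
});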
HiveJobResponse, HiveJobResponseArgs      
- ContinueOnFailure bool
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- JarFileUris List<string>
- Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
- Properties Dictionary<string, string>
- Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
- QueryFileUri string
- The HCFS URI of the script that contains Hive queries.
- QueryList Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.QueryListResponse
- A list of queries.
- ScriptVariables Dictionary<string, string>
- Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
- ContinueOnFailure bool
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- JarFileUris []string
- Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
- Properties map[string]string
- Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
- QueryFileUri string
- The HCFS URI of the script that contains Hive queries.
- QueryList QueryListResponse
- A list of queries.
- ScriptVariables map[string]string
- Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
- continueOnFailure Boolean
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- jarFileUris List<String>
- Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
- properties Map<String,String>
- Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
- queryFileUri String
- The HCFS URI of the script that contains Hive queries.
- queryList QueryListResponse
- A list of queries.
- scriptVariables Map<String,String>
- Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
- continueOnFailure boolean
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- jarFileUris string[]
- Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
- properties {[key: string]: string}
- Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
- queryFileUri string
- The HCFS URI of the script that contains Hive queries.
- queryList QueryListResponse
- A list of queries.
- scriptVariables {[key: string]: string}
- Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
- continue_on_failure bool
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- jar_file_uris Sequence[str]
- Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
- properties Mapping[str, str]
- Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
- query_file_uri str
- The HCFS URI of the script that contains Hive queries.
- query_list QueryListResponse
- A list of queries.
- script_variables Mapping[str, str]
- Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
- continueOnFailure Boolean
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- jarFileUris List<String>
- Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
- properties Map<String>
- Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
- queryFileUri String
- The HCFS URI of the script that contains Hive queries.
- queryList Property Map
- A list of queries.
- scriptVariables Map<String>
- Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
JobPlacement, JobPlacementArgs    
- ClusterName string
- The name of the cluster where the job will be submitted.
- ClusterLabels Dictionary<string, string>
- Optional. Cluster labels to identify a cluster where the job will be submitted.
- ClusterName string
- The name of the cluster where the job will be submitted.
- ClusterLabels map[string]string
- Optional. Cluster labels to identify a cluster where the job will be submitted.
- clusterName String
- The name of the cluster where the job will be submitted.
- clusterLabels Map<String,String>
- Optional. Cluster labels to identify a cluster where the job will be submitted.
- clusterName string
- The name of the cluster where the job will be submitted.
- clusterLabels {[key: string]: string}
- Optional. Cluster labels to identify a cluster where the job will be submitted.
- cluster_name str
- The name of the cluster where the job will be submitted.
- cluster_labels Mapping[str, str]
- Optional. Cluster labels to identify a cluster where the job will be submitted.
- clusterName String
- The name of the cluster where the job will be submitted.
- clusterLabels Map<String>
- Optional. Cluster labels to identify a cluster where the job will be submitted.
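As a brief TypeScript sketch of JobPlacement, the job below targets a named cluster; the names, region, and query are placeholders, and clusterLabels could be used to identify the target cluster by its labels instead.
import * as gcpNative from "@pulumi/google-native";

// Placement by cluster name ("example-cluster" is a placeholder).
// clusterLabels can alternatively identify a target cluster by its labels.
const placedJob = new gcpNative.dataproc.v1beta2.Job("placed-job", {
    region: "us-central1",
    placement: { clusterName: "example-cluster" },
    sparkSqlJob: { queryList: { queries: ["SHOW DATABASES"] } },
});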
JobPlacementResponse, JobPlacementResponseArgs      
- ClusterLabels Dictionary<string, string>
- Optional. Cluster labels to identify a cluster where the job will be submitted.
- ClusterName string
- The name of the cluster where the job will be submitted.
- ClusterUuid string
- A cluster UUID generated by the Dataproc service when the job is submitted.
- ClusterLabels map[string]string
- Optional. Cluster labels to identify a cluster where the job will be submitted.
- ClusterName string
- The name of the cluster where the job will be submitted.
- ClusterUuid string
- A cluster UUID generated by the Dataproc service when the job is submitted.
- clusterLabels Map<String,String>
- Optional. Cluster labels to identify a cluster where the job will be submitted.
- clusterName String
- The name of the cluster where the job will be submitted.
- clusterUuid String
- A cluster UUID generated by the Dataproc service when the job is submitted.
- clusterLabels {[key: string]: string}
- Optional. Cluster labels to identify a cluster where the job will be submitted.
- clusterName string
- The name of the cluster where the job will be submitted.
- clusterUuid string
- A cluster UUID generated by the Dataproc service when the job is submitted.
- cluster_labels Mapping[str, str]
- Optional. Cluster labels to identify a cluster where the job will be submitted.
- cluster_name str
- The name of the cluster where the job will be submitted.
- cluster_uuid str
- A cluster UUID generated by the Dataproc service when the job is submitted.
- clusterLabels Map<String>
- Optional. Cluster labels to identify a cluster where the job will be submitted.
- clusterName String
- The name of the cluster where the job will be submitted.
- clusterUuid String
- A cluster UUID generated by the Dataproc service when the job is submitted.
JobReference, JobReferenceArgs    
- JobId string
- Optional. The job ID, which must be unique within the project. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or hyphens (-). The maximum length is 100 characters. If not specified by the caller, the job ID will be provided by the server.
- Project string
- Optional. The ID of the Google Cloud Platform project that the job belongs to. If specified, must match the request project ID.
- JobId string
- Optional. The job ID, which must be unique within the project. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or hyphens (-). The maximum length is 100 characters. If not specified by the caller, the job ID will be provided by the server.
- Project string
- Optional. The ID of the Google Cloud Platform project that the job belongs to. If specified, must match the request project ID.
- jobId String
- Optional. The job ID, which must be unique within the project. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or hyphens (-). The maximum length is 100 characters. If not specified by the caller, the job ID will be provided by the server.
- project String
- Optional. The ID of the Google Cloud Platform project that the job belongs to. If specified, must match the request project ID.
- jobId string
- Optional. The job ID, which must be unique within the project. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or hyphens (-). The maximum length is 100 characters. If not specified by the caller, the job ID will be provided by the server.
- project string
- Optional. The ID of the Google Cloud Platform project that the job belongs to. If specified, must match the request project ID.
- job_id str
- Optional. The job ID, which must be unique within the project. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or hyphens (-). The maximum length is 100 characters. If not specified by the caller, the job ID will be provided by the server.
- project str
- Optional. The ID of the Google Cloud Platform project that the job belongs to. If specified, must match the request project ID.
- jobId String
- Optional. The job ID, which must be unique within the project. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or hyphens (-). The maximum length is 100 characters. If not specified by the caller, the job ID will be provided by the server.
- project String
- Optional. The ID of the Google Cloud Platform project that the job belongs to. If specified, must match the request project ID.
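A TypeScript sketch of JobReference: pinning an explicit job ID makes reruns easy to correlate in logs and the console. The ID, project, cluster, and jar URIs are placeholder values; if reference is omitted the server generates the job ID.
import * as gcpNative from "@pulumi/google-native";

// "nightly-etl-2021-06-01" and "example-project" are placeholder values.
const referencedJob = new gcpNative.dataproc.v1beta2.Job("referenced-job", {
    region: "us-central1",
    placement: { clusterName: "example-cluster" },
    reference: {
        jobId: "nightly-etl-2021-06-01",
        project: "example-project",
    },
    hadoopJob: {
        mainClass: "org.apache.hadoop.examples.WordCount",
        jarFileUris: ["gs://example-bucket/jars/wordcount.jar"],
    },
});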
JobReferenceResponse, JobReferenceResponseArgs      
- JobId string
- Optional. The job ID, which must be unique within the project. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or hyphens (-). The maximum length is 100 characters. If not specified by the caller, the job ID will be provided by the server.
- Project string
- Optional. The ID of the Google Cloud Platform project that the job belongs to. If specified, must match the request project ID.
- JobId string
- Optional. The job ID, which must be unique within the project. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or hyphens (-). The maximum length is 100 characters. If not specified by the caller, the job ID will be provided by the server.
- Project string
- Optional. The ID of the Google Cloud Platform project that the job belongs to. If specified, must match the request project ID.
- jobId String
- Optional. The job ID, which must be unique within the project. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or hyphens (-). The maximum length is 100 characters. If not specified by the caller, the job ID will be provided by the server.
- project String
- Optional. The ID of the Google Cloud Platform project that the job belongs to. If specified, must match the request project ID.
- jobId string
- Optional. The job ID, which must be unique within the project. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or hyphens (-). The maximum length is 100 characters. If not specified by the caller, the job ID will be provided by the server.
- project string
- Optional. The ID of the Google Cloud Platform project that the job belongs to. If specified, must match the request project ID.
- job_id str
- Optional. The job ID, which must be unique within the project. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or hyphens (-). The maximum length is 100 characters. If not specified by the caller, the job ID will be provided by the server.
- project str
- Optional. The ID of the Google Cloud Platform project that the job belongs to. If specified, must match the request project ID.
- jobId String
- Optional. The job ID, which must be unique within the project. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or hyphens (-). The maximum length is 100 characters. If not specified by the caller, the job ID will be provided by the server.
- project String
- Optional. The ID of the Google Cloud Platform project that the job belongs to. If specified, must match the request project ID.
JobScheduling, JobSchedulingArgs    
- MaxFailuresPerHour int
- Optional. Maximum number of times per hour a driver may be restarted as a result of driver terminating with non-zero code before job is reported failed. A job may be reported as thrashing if driver exits with non-zero code 4 times within a 10-minute window. Maximum value is 10.
- MaxFailuresTotal int
- Optional. Maximum number of times in total a driver may be restarted as a result of driver exiting with non-zero code before job is reported failed. Maximum value is 240.
- MaxFailuresPerHour int
- Optional. Maximum number of times per hour a driver may be restarted as a result of driver terminating with non-zero code before job is reported failed. A job may be reported as thrashing if driver exits with non-zero code 4 times within a 10-minute window. Maximum value is 10.
- MaxFailuresTotal int
- Optional. Maximum number of times in total a driver may be restarted as a result of driver exiting with non-zero code before job is reported failed. Maximum value is 240.
- maxFailuresPerHour Integer
- Optional. Maximum number of times per hour a driver may be restarted as a result of driver terminating with non-zero code before job is reported failed. A job may be reported as thrashing if driver exits with non-zero code 4 times within a 10-minute window. Maximum value is 10.
- maxFailuresTotal Integer
- Optional. Maximum number of times in total a driver may be restarted as a result of driver exiting with non-zero code before job is reported failed. Maximum value is 240.
- maxFailuresPerHour number
- Optional. Maximum number of times per hour a driver may be restarted as a result of driver terminating with non-zero code before job is reported failed. A job may be reported as thrashing if driver exits with non-zero code 4 times within a 10-minute window. Maximum value is 10.
- maxFailuresTotal number
- Optional. Maximum number of times in total a driver may be restarted as a result of driver exiting with non-zero code before job is reported failed. Maximum value is 240.
- max_failures_per_hour int
- Optional. Maximum number of times per hour a driver may be restarted as a result of driver terminating with non-zero code before job is reported failed. A job may be reported as thrashing if driver exits with non-zero code 4 times within a 10-minute window. Maximum value is 10.
- max_failures_total int
- Optional. Maximum number of times in total a driver may be restarted as a result of driver exiting with non-zero code before job is reported failed. Maximum value is 240.
- maxFailuresPerHour Number
- Optional. Maximum number of times per hour a driver may be restarted as a result of driver terminating with non-zero code before job is reported failed. A job may be reported as thrashing if driver exits with non-zero code 4 times within a 10-minute window. Maximum value is 10.
- maxFailuresTotal Number
- Optional. Maximum number of times in total a driver may be restarted as a result of driver exiting with non-zero code before job is reported failed. Maximum value is 240.
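A TypeScript sketch of JobScheduling with a restartable driver; the limits below are placeholders within the documented maxima (10 per hour, 240 total), and the cluster, region, and script URI are also placeholders.
import * as gcpNative from "@pulumi/google-native";

// Allow the driver to be restarted a bounded number of times before the
// job is reported failed (placeholder values within the allowed maxima).
const restartableJob = new gcpNative.dataproc.v1beta2.Job("restartable-job", {
    region: "us-central1",
    placement: { clusterName: "example-cluster" },
    scheduling: {
        maxFailuresPerHour: 3,
        maxFailuresTotal: 10,
    },
    pysparkJob: { mainPythonFileUri: "gs://example-bucket/jobs/etl.py" },
});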
JobSchedulingResponse, JobSchedulingResponseArgs      
- MaxFailuresPerHour int
- Optional. Maximum number of times per hour a driver may be restarted as a result of driver terminating with non-zero code before job is reported failed. A job may be reported as thrashing if driver exits with non-zero code 4 times within a 10-minute window. Maximum value is 10.
- MaxFailuresTotal int
- Optional. Maximum number of times in total a driver may be restarted as a result of driver exiting with non-zero code before job is reported failed. Maximum value is 240.
- MaxFailuresPerHour int
- Optional. Maximum number of times per hour a driver may be restarted as a result of driver terminating with non-zero code before job is reported failed. A job may be reported as thrashing if driver exits with non-zero code 4 times within a 10-minute window. Maximum value is 10.
- MaxFailuresTotal int
- Optional. Maximum number of times in total a driver may be restarted as a result of driver exiting with non-zero code before job is reported failed. Maximum value is 240.
- maxFailuresPerHour Integer
- Optional. Maximum number of times per hour a driver may be restarted as a result of driver terminating with non-zero code before job is reported failed. A job may be reported as thrashing if driver exits with non-zero code 4 times within a 10-minute window. Maximum value is 10.
- maxFailuresTotal Integer
- Optional. Maximum number of times in total a driver may be restarted as a result of driver exiting with non-zero code before job is reported failed. Maximum value is 240.
- maxFailuresPerHour number
- Optional. Maximum number of times per hour a driver may be restarted as a result of driver terminating with non-zero code before job is reported failed. A job may be reported as thrashing if driver exits with non-zero code 4 times within a 10-minute window. Maximum value is 10.
- maxFailuresTotal number
- Optional. Maximum number of times in total a driver may be restarted as a result of driver exiting with non-zero code before job is reported failed. Maximum value is 240.
- max_failures_per_hour int
- Optional. Maximum number of times per hour a driver may be restarted as a result of driver terminating with non-zero code before job is reported failed. A job may be reported as thrashing if driver exits with non-zero code 4 times within a 10-minute window. Maximum value is 10.
- max_failures_total int
- Optional. Maximum number of times in total a driver may be restarted as a result of driver exiting with non-zero code before job is reported failed. Maximum value is 240.
- maxFailuresPerHour Number
- Optional. Maximum number of times per hour a driver may be restarted as a result of driver terminating with non-zero code before job is reported failed. A job may be reported as thrashing if driver exits with non-zero code 4 times within a 10-minute window. Maximum value is 10.
- maxFailuresTotal Number
- Optional. Maximum number of times in total a driver may be restarted as a result of driver exiting with non-zero code before job is reported failed. Maximum value is 240.
JobStatusResponse, JobStatusResponseArgs      
- Details string
- Optional Job state details, such as an error description if the state is ERROR.
- State string
- A state message specifying the overall job state.
- StateStartTime string
- The time when this state was entered.
- Substate string
- Additional state information, which includes status reported by the agent.
- Details string
- Optional Job state details, such as an error description if the state is ERROR.
- State string
- A state message specifying the overall job state.
- StateStartTime string
- The time when this state was entered.
- Substate string
- Additional state information, which includes status reported by the agent.
- details String
- Optional Job state details, such as an error description if the state is ERROR.
- state String
- A state message specifying the overall job state.
- stateStartTime String
- The time when this state was entered.
- substate String
- Additional state information, which includes status reported by the agent.
- details string
- Optional Job state details, such as an error description if the state is ERROR.
- state string
- A state message specifying the overall job state.
- stateStartTime string
- The time when this state was entered.
- substate string
- Additional state information, which includes status reported by the agent.
- details str
- Optional Job state details, such as an error description if the state is ERROR.
- state str
- A state message specifying the overall job state.
- state_start_time str
- The time when this state was entered.
- substate str
- Additional state information, which includes status reported by the agent.
- details String
- Optional Job state details, such as an error description if the state is ERROR.
- state String
- A state message specifying the overall job state.
- stateStartTime String
- The time when this state was entered.
- substate String
- Additional state information, which includes status reported by the agent.
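JobStatusResponse is an output-only type. As a sketch, the TypeScript program below reads it back after submission, assuming the Job resource surfaces a status output of this type; the cluster, region, and jar URI are placeholders.
import * as gcpNative from "@pulumi/google-native";

const statusExample = new gcpNative.dataproc.v1beta2.Job("status-example", {
    region: "us-central1",
    placement: { clusterName: "example-cluster" },
    sparkJob: { mainClass: "org.example.App", jarFileUris: ["gs://example-bucket/app.jar"] },
});

// Surface the reported job state and when that state was entered.
export const jobState = statusExample.status.apply(s => s.state);
export const jobStateStartTime = statusExample.status.apply(s => s.stateStartTime);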
LoggingConfig, LoggingConfigArgs    
- DriverLogLevels Dictionary<string, string>
- The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
- DriverLogLevels map[string]string
- The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
- driverLogLevels Map<String,String>
- The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
- driverLogLevels {[key: string]: string}
- The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
- driver_log_levels Mapping[str, str]
- The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
- driverLogLevels Map<String>
- The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
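A TypeScript sketch of LoggingConfig, using the per-package levels from the examples above; the cluster, region, jar URI, and main class are placeholder values.
import * as gcpNative from "@pulumi/google-native";

// Keep most driver logging at INFO, raise Hadoop internals to DEBUG for
// troubleshooting, and silence a noisy package (placeholder values).
const loggingExample = new gcpNative.dataproc.v1beta2.Job("logging-example", {
    region: "us-central1",
    placement: { clusterName: "example-cluster" },
    hadoopJob: {
        mainClass: "org.example.App",
        jarFileUris: ["gs://example-bucket/app.jar"],
        loggingConfig: {
            driverLogLevels: {
                "root": "INFO",
                "org.apache": "DEBUG",
                "com.google": "FATAL",
            },
        },
    },
});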
LoggingConfigResponse, LoggingConfigResponseArgs      
- DriverLogLevels Dictionary<string, string>
- The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
- DriverLogLevels map[string]string
- The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
- driverLogLevels Map<String,String>
- The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
- driverLogLevels {[key: string]: string}
- The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
- driver_log_levels Mapping[str, str]
- The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
- driverLogLevels Map<String>
- The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
PigJob, PigJobArgs    
- ContinueOnFailure bool
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- JarFileUris List<string>
- Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
- LoggingConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.LoggingConfig
- Optional. The runtime log config for job execution.
- Properties Dictionary<string, string>
- Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
- QueryFileUri string
- The HCFS URI of the script that contains the Pig queries.
- QueryList Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.QueryList
- A list of queries.
- ScriptVariables Dictionary<string, string>
- Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
- ContinueOnFailure bool
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- JarFileUris []string
- Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
- LoggingConfig LoggingConfig 
- Optional. The runtime log config for job execution.
- Properties map[string]string
- Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
- QueryFileUri string
- The HCFS URI of the script that contains the Pig queries.
- QueryList QueryList 
- A list of queries.
- ScriptVariables map[string]string
- Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
- continueOnFailure Boolean
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- jarFileUris List<String>
- Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
- loggingConfig LoggingConfig 
- Optional. The runtime log config for job execution.
- properties Map<String,String>
- Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
- queryFileUri String
- The HCFS URI of the script that contains the Pig queries.
- queryList QueryList 
- A list of queries.
- scriptVariables Map<String,String>
- Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
- continueOnFailure boolean
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- jarFileUris string[]
- Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
- loggingConfig LoggingConfig 
- Optional. The runtime log config for job execution.
- properties {[key: string]: string}
- Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
- queryFileUri string
- The HCFS URI of the script that contains the Pig queries.
- queryList QueryList 
- A list of queries.
- scriptVariables {[key: string]: string}
- Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
- continue_on_failure bool
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- jar_file_uris Sequence[str]
- Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
- logging_config LoggingConfig 
- Optional. The runtime log config for job execution.
- properties Mapping[str, str]
- Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
- query_file_uri str
- The HCFS URI of the script that contains the Pig queries.
- query_list QueryList 
- A list of queries.
- script_variables Mapping[str, str]
- Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
- continueOnFailure Boolean
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- jarFileUris List<String>
- Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
- loggingConfig Property Map
- Optional. The runtime log config for job execution.
- properties Map<String>
- Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
- queryFileUri String
- The HCFS URI of the script that contains the Pig queries.
- queryList Property Map
- A list of queries.
- scriptVariables Map<String>
- Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
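A TypeScript sketch of the PigJob fields above, with an inline query and a script variable (equivalent to name=[value] on the Pig command line); the cluster, region, script, and paths are placeholder values.
import * as gcpNative from "@pulumi/google-native";

// Pig job: the $input parameter in the script is bound via scriptVariables.
const pigExample = new gcpNative.dataproc.v1beta2.Job("pig-example", {
    region: "us-central1",
    placement: { clusterName: "example-cluster" },
    pigJob: {
        queryList: { queries: ["lines = LOAD '$input' AS (line:chararray);", "DUMP lines;"] },
        scriptVariables: { input: "gs://example-bucket/data/input.txt" },
        continueOnFailure: false,
    },
});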
PigJobResponse, PigJobResponseArgs      
- ContinueOnFailure bool
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- JarFileUris List<string>
- Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
- LoggingConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.LoggingConfigResponse
- Optional. The runtime log config for job execution.
- Properties Dictionary<string, string>
- Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
- QueryFileUri string
- The HCFS URI of the script that contains the Pig queries.
- QueryList Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.QueryListResponse
- A list of queries.
- ScriptVariables Dictionary<string, string>
- Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
- ContinueOnFailure bool
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- JarFileUris []string
- Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
- LoggingConfig LoggingConfigResponse
- Optional. The runtime log config for job execution.
- Properties map[string]string
- Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
- QueryFileUri string
- The HCFS URI of the script that contains the Pig queries.
- QueryList QueryListResponse
- A list of queries.
- ScriptVariables map[string]string
- Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
- continueOnFailure Boolean
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- jarFileUris List<String>
- Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
- loggingConfig LoggingConfigResponse
- Optional. The runtime log config for job execution.
- properties Map<String,String>
- Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
- queryFileUri String
- The HCFS URI of the script that contains the Pig queries.
- queryList QueryListResponse
- A list of queries.
- scriptVariables Map<String,String>
- Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
- continueOnFailure boolean
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- jarFileUris string[]
- Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
- loggingConfig LoggingConfigResponse
- Optional. The runtime log config for job execution.
- properties {[key: string]: string}
- Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
- queryFileUri string
- The HCFS URI of the script that contains the Pig queries.
- queryList QueryListResponse
- A list of queries.
- scriptVariables {[key: string]: string}
- Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
- continue_on_failure bool
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- jar_file_uris Sequence[str]
- Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
- logging_config LoggingConfigResponse
- Optional. The runtime log config for job execution.
- properties Mapping[str, str]
- Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
- query_file_uri str
- The HCFS URI of the script that contains the Pig queries.
- query_list QueryListResponse
- A list of queries.
- script_variables Mapping[str, str]
- Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
- continueOnFailure Boolean
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- jarFileUris List<String>
- Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
- loggingConfig Property Map
- Optional. The runtime log config for job execution.
- properties Map<String>
- Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
- queryFileUri String
- The HCFS URI of the script that contains the Pig queries.
- queryList Property Map
- A list of queries.
- scriptVariables Map<String>
- Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
PrestoJob, PrestoJobArgs    
- ClientTags List<string>
- Optional. Presto client tags to attach to this query
- ContinueOnFailure bool
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- LoggingConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.LoggingConfig
- Optional. The runtime log config for job execution.
- OutputFormat string
- Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
- Properties Dictionary<string, string>
- Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html) Equivalent to using the --session flag in the Presto CLI
- QueryFileUri string
- The HCFS URI of the script that contains SQL queries.
- QueryList Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.QueryList
- A list of queries.
- ClientTags []string
- Optional. Presto client tags to attach to this query
- ContinueOnFailure bool
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- LoggingConfig LoggingConfig 
- Optional. The runtime log config for job execution.
- OutputFormat string
- Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
- Properties map[string]string
- Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html) Equivalent to using the --session flag in the Presto CLI
- QueryFileUri string
- The HCFS URI of the script that contains SQL queries.
- QueryList QueryList 
- A list of queries.
- clientTags List<String>
- Optional. Presto client tags to attach to this query
- continueOnFailure Boolean
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- loggingConfig LoggingConfig 
- Optional. The runtime log config for job execution.
- outputFormat String
- Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
- properties Map<String,String>
- Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html) Equivalent to using the --session flag in the Presto CLI
- queryFileUri String
- The HCFS URI of the script that contains SQL queries.
- queryList QueryList 
- A list of queries.
- clientTags string[]
- Optional. Presto client tags to attach to this query
- continueOnFailure boolean
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- loggingConfig LoggingConfig 
- Optional. The runtime log config for job execution.
- outputFormat string
- Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
- properties {[key: string]: string}
- Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html) Equivalent to using the --session flag in the Presto CLI
- queryFileUri string
- The HCFS URI of the script that contains SQL queries.
- queryList QueryList 
- A list of queries.
- client_tags Sequence[str]
- Optional. Presto client tags to attach to this query
- continue_on_failure bool
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- logging_config LoggingConfig 
- Optional. The runtime log config for job execution.
- output_format str
- Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
- properties Mapping[str, str]
- Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html) Equivalent to using the --session flag in the Presto CLI
- query_file_uri str
- The HCFS URI of the script that contains SQL queries.
- query_list QueryList 
- A list of queries.
- clientTags List<String>
- Optional. Presto client tags to attach to this query
- continueOnFailure Boolean
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- loggingConfig Property Map
- Optional. The runtime log config for job execution.
- outputFormat String
- Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
- properties Map<String>
- Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html) Equivalent to using the --session flag in the Presto CLI
- queryFileUri String
- The HCFS URI of the script that contains SQL queries.
- queryList Property Map
- A list of queries.
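A matching TypeScript sketch for the Presto variant; the query text, tag, and cluster values are illustrative placeholders.

import * as google_native from "@pulumi/google-native";

// Run two independent Presto queries. clientTags and outputFormat map to the
// fields documented above.
const prestoJob = new google_native.dataproc.v1beta2.Job("presto-job", {
    region: "us-central1",                          // placeholder region
    placement: { clusterName: "example-cluster" },  // placeholder cluster
    prestoJob: {
        queryList: { queries: ["SELECT 1", "SELECT 2"] },
        continueOnFailure: true,
        clientTags: ["example-tag"],
        outputFormat: "CSV",
    },
});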
PrestoJobResponse, PrestoJobResponseArgs      
- ClientTags List<string>
- Optional. Presto client tags to attach to this query
- ContinueOnFailure bool
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- LoggingConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.LoggingConfigResponse
- Optional. The runtime log config for job execution.
- OutputFormat string
- Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
- Properties Dictionary<string, string>
- Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html) Equivalent to using the --session flag in the Presto CLI
- QueryFileUri string
- The HCFS URI of the script that contains SQL queries.
- QueryList Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.QueryListResponse
- A list of queries.
- ClientTags []string
- Optional. Presto client tags to attach to this query
- ContinueOnFailure bool
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- LoggingConfig LoggingConfigResponse
- Optional. The runtime log config for job execution.
- OutputFormat string
- Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
- Properties map[string]string
- Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html) Equivalent to using the --session flag in the Presto CLI
- QueryFileUri string
- The HCFS URI of the script that contains SQL queries.
- QueryList QueryListResponse
- A list of queries.
- clientTags List<String>
- Optional. Presto client tags to attach to this query
- continueOnFailure Boolean
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- loggingConfig LoggingConfigResponse
- Optional. The runtime log config for job execution.
- outputFormat String
- Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
- properties Map<String,String>
- Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html) Equivalent to using the --session flag in the Presto CLI
- queryFileUri String
- The HCFS URI of the script that contains SQL queries.
- queryList QueryListResponse
- A list of queries.
- clientTags string[]
- Optional. Presto client tags to attach to this query
- continueOnFailure boolean
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- loggingConfig LoggingConfigResponse
- Optional. The runtime log config for job execution.
- outputFormat string
- Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
- properties {[key: string]: string}
- Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html) Equivalent to using the --session flag in the Presto CLI
- queryFileUri string
- The HCFS URI of the script that contains SQL queries.
- queryList QueryListResponse
- A list of queries.
- client_tags Sequence[str]
- Optional. Presto client tags to attach to this query
- continue_on_failure bool
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- logging_config LoggingConfigResponse
- Optional. The runtime log config for job execution.
- output_format str
- Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
- properties Mapping[str, str]
- Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html) Equivalent to using the --session flag in the Presto CLI
- query_file_uri str
- The HCFS URI of the script that contains SQL queries.
- query_list QueryListResponse
- A list of queries.
- clientTags List<String>
- Optional. Presto client tags to attach to this query
- continueOnFailure Boolean
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- loggingConfig Property Map
- Optional. The runtime log config for job execution.
- outputFormat String
- Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
- properties Map<String>
- Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html) Equivalent to using the --session flag in the Presto CLI
- queryFileUri String
- The HCFS URI of the script that contains SQL queries.
- queryList Property Map
- A list of queries.
PySparkJob, PySparkJobArgs      
- MainPythonFileUri string
- The HCFS URI of the main Python file to use as the driver. Must be a .py file.
- ArchiveUris List<string>
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- Args List<string>
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- FileUris List<string>
- Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- JarFileUris List<string>
- Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
- LoggingConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.LoggingConfig
- Optional. The runtime log config for job execution.
- Properties Dictionary<string, string>
- Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- PythonFileUris List<string>
- Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
- MainPythonFileUri string
- The HCFS URI of the main Python file to use as the driver. Must be a .py file.
- ArchiveUris []string
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- Args []string
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- FileUris []string
- Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- JarFileUris []string
- Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
- LoggingConfig LoggingConfig 
- Optional. The runtime log config for job execution.
- Properties map[string]string
- Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- PythonFileUris []string
- Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
- mainPythonFileUri String
- The HCFS URI of the main Python file to use as the driver. Must be a .py file.
- archiveUris List<String>
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args List<String>
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- fileUris List<String>
- Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- jarFileUris List<String>
- Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
- loggingConfig LoggingConfig 
- Optional. The runtime log config for job execution.
- properties Map<String,String>
- Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- pythonFileUris List<String>
- Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
- mainPythonFileUri string
- The HCFS URI of the main Python file to use as the driver. Must be a .py file.
- archiveUris string[]
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args string[]
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- fileUris string[]
- Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- jarFileUris string[]
- Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
- loggingConfig LoggingConfig 
- Optional. The runtime log config for job execution.
- properties {[key: string]: string}
- Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- pythonFileUris string[]
- Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
- main_python_file_uri str
- The HCFS URI of the main Python file to use as the driver. Must be a .py file.
- archive_uris Sequence[str]
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args Sequence[str]
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- file_uris Sequence[str]
- Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- jar_file_uris Sequence[str]
- Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
- logging_config LoggingConfig 
- Optional. The runtime log config for job execution.
- properties Mapping[str, str]
- Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- python_file_uris Sequence[str]
- Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
- mainPythonFileUri String
- The HCFS URI of the main Python file to use as the driver. Must be a .py file.
- archiveUris List<String>
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args List<String>
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- fileUris List<String>
- Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- jarFileUris List<String>
- Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
- loggingConfig Property Map
- Optional. The runtime log config for job execution.
- properties Map<String>
- Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- pythonFileUris List<String>
- Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
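The PySpark shape, sketched in TypeScript with placeholder bucket paths; only mainPythonFileUri is required, everything else shown is optional.

import * as google_native from "@pulumi/google-native";

// Drive a PySpark job from a main .py file, shipping a helper module and a
// connector jar alongside it.
const pysparkJob = new google_native.dataproc.v1beta2.Job("pyspark-job", {
    region: "us-central1",                          // placeholder region
    placement: { clusterName: "example-cluster" },  // placeholder cluster
    pysparkJob: {
        mainPythonFileUri: "gs://example-bucket/jobs/main.py",    // placeholder driver
        pythonFileUris: ["gs://example-bucket/jobs/helpers.py"],  // extra Python files
        jarFileUris: ["gs://example-bucket/libs/connector.jar"],  // classpath additions
        args: ["--date", "2021-01-01"],
        properties: { "spark.executor.memory": "4g" },
    },
});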
PySparkJobResponse, PySparkJobResponseArgs        
- ArchiveUris List<string>
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- Args List<string>
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- FileUris List<string>
- Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- JarFileUris List<string>
- Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
- LoggingConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.LoggingConfigResponse
- Optional. The runtime log config for job execution.
- MainPythonFileUri string
- The HCFS URI of the main Python file to use as the driver. Must be a .py file.
- Properties Dictionary<string, string>
- Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- PythonFileUris List<string>
- Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
- ArchiveUris []string
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- Args []string
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- FileUris []string
- Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- JarFileUris []string
- Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
- LoggingConfig LoggingConfigResponse
- Optional. The runtime log config for job execution.
- MainPythonFileUri string
- The HCFS URI of the main Python file to use as the driver. Must be a .py file.
- Properties map[string]string
- Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- PythonFileUris []string
- Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
- archiveUris List<String>
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args List<String>
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- fileUris List<String>
- Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- jarFileUris List<String>
- Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
- loggingConfig LoggingConfigResponse
- Optional. The runtime log config for job execution.
- mainPythonFileUri String
- The HCFS URI of the main Python file to use as the driver. Must be a .py file.
- properties Map<String,String>
- Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- pythonFileUris List<String>
- Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
- archiveUris string[]
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args string[]
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- fileUris string[]
- Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- jarFileUris string[]
- Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
- loggingConfig LoggingConfigResponse
- Optional. The runtime log config for job execution.
- mainPythonFileUri string
- The HCFS URI of the main Python file to use as the driver. Must be a .py file.
- properties {[key: string]: string}
- Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- pythonFileUris string[]
- Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
- archive_uris Sequence[str]
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args Sequence[str]
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- file_uris Sequence[str]
- Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- jar_file_uris Sequence[str]
- Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
- logging_config LoggingConfigResponse
- Optional. The runtime log config for job execution.
- main_python_file_uri str
- The HCFS URI of the main Python file to use as the driver. Must be a .py file.
- properties Mapping[str, str]
- Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- python_file_uris Sequence[str]
- Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
- archiveUris List<String>
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args List<String>
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- fileUris List<String>
- Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- jarFileUris List<String>
- Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
- loggingConfig Property Map
- Optional. The runtime log config for job execution.
- mainPythonFileUri String
- The HCFS URI of the main Python file to use as the driver. Must be a .py file.
- properties Map<String>
- Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- pythonFileUris List<String>
- Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
QueryList, QueryListArgs    
- Queries List<string>
- The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }
- Queries []string
- The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }
- queries List<String>
- The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }
- queries string[]
- The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }
- queries Sequence[str]
- The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }
- queries List<String>
- The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }
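A short TypeScript rendering of the snippet in the description above, using a Hive job; the cluster and region are placeholders. Note that "query3;query4" remains a single entry in queries even though it contains two statements.

import * as google_native from "@pulumi/google-native";

// Inline queries instead of a queryFileUri.
const hiveJob = new google_native.dataproc.v1beta2.Job("hive-job", {
    region: "us-central1",                          // placeholder region
    placement: { clusterName: "example-cluster" },  // placeholder cluster
    hiveJob: {
        queryList: { queries: ["query1", "query2", "query3;query4"] },
    },
});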
QueryListResponse, QueryListResponseArgs      
- Queries List<string>
- The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }
- Queries []string
- The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }
- queries List<String>
- The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }
- queries string[]
- The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }
- queries Sequence[str]
- The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }
- queries List<String>
- The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }
SparkJob, SparkJobArgs    
- ArchiveUris List<string>
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- Args List<string>
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- FileUris List<string>
- Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- JarFileUris List<string>
- Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
- LoggingConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.LoggingConfig
- Optional. The runtime log config for job execution.
- MainClass string
- The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris.
- MainJarFileUri string
- The HCFS URI of the jar file that contains the main class.
- Properties Dictionary<string, string>
- Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- ArchiveUris []string
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- Args []string
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- FileUris []string
- Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- JarFileUris []string
- Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
- LoggingConfig LoggingConfig 
- Optional. The runtime log config for job execution.
- MainClass string
- The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris.
- MainJarFileUri string
- The HCFS URI of the jar file that contains the main class.
- Properties map[string]string
- Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- archiveUris List<String>
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args List<String>
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- fileUris List<String>
- Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- jarFileUris List<String>
- Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
- loggingConfig LoggingConfig 
- Optional. The runtime log config for job execution.
- mainClass String
- The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris.
- mainJarFileUri String
- The HCFS URI of the jar file that contains the main class.
- properties Map<String,String>
- Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- archiveUris string[]
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args string[]
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- fileUris string[]
- Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- jarFileUris string[]
- Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
- loggingConfig LoggingConfig 
- Optional. The runtime log config for job execution.
- mainClass string
- The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris.
- mainJarFileUri string
- The HCFS URI of the jar file that contains the main class.
- properties {[key: string]: string}
- Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- archive_uris Sequence[str]
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args Sequence[str]
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- file_uris Sequence[str]
- Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- jar_file_uris Sequence[str]
- Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
- logging_config LoggingConfig 
- Optional. The runtime log config for job execution.
- main_class str
- The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris.
- main_jar_file_uri str
- The HCFS URI of the jar file that contains the main class.
- properties Mapping[str, str]
- Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- archiveUris List<String>
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args List<String>
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- fileUris List<String>
- Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- jarFileUris List<String>
- Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
- loggingConfig Property Map
- Optional. The runtime log config for job execution.
- mainClass String
- The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris.
- mainJarFileUri String
- The HCFS URI of the jar file that contains the main class.
- properties Map<String>
- Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
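A TypeScript sketch of the Spark variant; the class name, jar, and bucket paths are placeholders. Per the field descriptions above, the driver entry point is identified either by mainClass (with its jar supplied via jarFileUris or already on the default CLASSPATH) or by mainJarFileUri.

import * as google_native from "@pulumi/google-native";

// Launch a JVM Spark driver by class name, with its jar supplied via
// jarFileUris so it lands on the CLASSPATH.
const sparkJob = new google_native.dataproc.v1beta2.Job("spark-job", {
    region: "us-central1",                          // placeholder region
    placement: { clusterName: "example-cluster" },  // placeholder cluster
    sparkJob: {
        mainClass: "com.example.WordCount",                       // placeholder class
        jarFileUris: ["gs://example-bucket/jars/wordcount.jar"],  // placeholder jar
        args: ["gs://example-bucket/input/", "gs://example-bucket/output/"],
        properties: { "spark.executor.cores": "2" },
    },
});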
SparkJobResponse, SparkJobResponseArgs      
- ArchiveUris List<string>
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- Args List<string>
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- FileUris List<string>
- Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- JarFileUris List<string>
- Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
- LoggingConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.LoggingConfigResponse
- Optional. The runtime log config for job execution.
- MainClass string
- The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris.
- MainJarFileUri string
- The HCFS URI of the jar file that contains the main class.
- Properties Dictionary<string, string>
- Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- ArchiveUris []string
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- Args []string
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- FileUris []string
- Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- JarFileUris []string
- Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
- LoggingConfig LoggingConfigResponse
- Optional. The runtime log config for job execution.
- MainClass string
- The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris.
- MainJarFileUri string
- The HCFS URI of the jar file that contains the main class.
- Properties map[string]string
- Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- archiveUris List<String>
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args List<String>
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- fileUris List<String>
- Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- jarFileUris List<String>
- Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
- loggingConfig LoggingConfigResponse
- Optional. The runtime log config for job execution.
- mainClass String
- The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris.
- mainJarFileUri String
- The HCFS URI of the jar file that contains the main class.
- properties Map<String,String>
- Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- archiveUris string[]
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args string[]
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- fileUris string[]
- Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- jarFileUris string[]
- Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
- loggingConfig LoggingConfigResponse
- Optional. The runtime log config for job execution.
- mainClass string
- The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris.
- mainJarFileUri string
- The HCFS URI of the jar file that contains the main class.
- properties {[key: string]: string}
- Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- archive_uris Sequence[str]
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args Sequence[str]
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- file_uris Sequence[str]
- Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- jar_file_uris Sequence[str]
- Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
- logging_config LoggingConfigResponse
- Optional. The runtime log config for job execution.
- main_class str
- The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris.
- main_jar_file_uri str
- The HCFS URI of the jar file that contains the main class.
- properties Mapping[str, str]
- Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- archiveUris List<String>
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args List<String>
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- fileUris List<String>
- Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- jarFileUris List<String>
- Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
- loggingConfig Property Map
- Optional. The runtime log config for job execution.
- mainClass String
- The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris.
- mainJarFileUri String
- The HCFS URI of the jar file that contains the main class.
- properties Map<String>
- Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
SparkRJob, SparkRJobArgs    
- MainRFileUri string
- The HCFS URI of the main R file to use as the driver. Must be a .R file.
- ArchiveUris List<string>
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- Args List<string>
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- FileUris List<string>
- Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- LoggingConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.LoggingConfig
- Optional. The runtime log config for job execution.
- Properties Dictionary<string, string>
- Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- MainRFileUri string
- The HCFS URI of the main R file to use as the driver. Must be a .R file.
- ArchiveUris []string
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- Args []string
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- FileUris []string
- Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- LoggingConfig LoggingConfig 
- Optional. The runtime log config for job execution.
- Properties map[string]string
- Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- mainRFileUri String
- The HCFS URI of the main R file to use as the driver. Must be a .R file.
- archiveUris List<String>
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args List<String>
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- fileUris List<String>
- Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- loggingConfig LoggingConfig 
- Optional. The runtime log config for job execution.
- properties Map<String,String>
- Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- mainRFileUri string
- The HCFS URI of the main R file to use as the driver. Must be a .R file.
- archiveUris string[]
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args string[]
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- fileUris string[]
- Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- loggingConfig LoggingConfig 
- Optional. The runtime log config for job execution.
- properties {[key: string]: string}
- Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- main_r_file_uri str
- The HCFS URI of the main R file to use as the driver. Must be a .R file.
- archive_uris Sequence[str]
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args Sequence[str]
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- file_uris Sequence[str]
- Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- logging_config LoggingConfig 
- Optional. The runtime log config for job execution.
- properties Mapping[str, str]
- Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- mainRFileUri String
- The HCFS URI of the main R file to use as the driver. Must be a .R file.
- archiveUris List<String>
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args List<String>
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- fileUris List<String>
- Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- loggingConfig Property Map
- Optional. The runtime log config for job execution.
- properties Map<String>
- Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
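The SparkR variant in the same TypeScript style; the .R script and data file URIs are placeholders.

import * as google_native from "@pulumi/google-native";

// Run an R driver script; fileUris stages an extra data file into each
// executor's working directory.
const sparkRJob = new google_native.dataproc.v1beta2.Job("sparkr-job", {
    region: "us-central1",                          // placeholder region
    placement: { clusterName: "example-cluster" },  // placeholder cluster
    sparkRJob: {
        mainRFileUri: "gs://example-bucket/jobs/analysis.R",  // placeholder driver
        fileUris: ["gs://example-bucket/data/lookup.csv"],    // staged data file
        args: ["--iterations", "10"],
    },
});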
SparkRJobResponse, SparkRJobResponseArgs      
- ArchiveUris List<string>
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- Args List<string>
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- FileUris List<string>
- Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- LoggingConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.LoggingConfigResponse
- Optional. The runtime log config for job execution.
- MainRFileUri string
- The HCFS URI of the main R file to use as the driver. Must be a .R file.
- Properties Dictionary<string, string>
- Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- ArchiveUris []string
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- Args []string
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- FileUris []string
- Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- LoggingConfig LoggingConfigResponse
- Optional. The runtime log config for job execution.
- MainRFileUri string
- The HCFS URI of the main R file to use as the driver. Must be a .R file.
- Properties map[string]string
- Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- archiveUris List<String>
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args List<String>
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- fileUris List<String>
- Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- loggingConfig LoggingConfigResponse
- Optional. The runtime log config for job execution.
- mainRFileUri String
- The HCFS URI of the main R file to use as the driver. Must be a .R file.
- properties Map<String,String>
- Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- archiveUris string[]
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args string[]
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- fileUris string[]
- Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- loggingConfig LoggingConfigResponse
- Optional. The runtime log config for job execution.
- mainRFileUri string
- The HCFS URI of the main R file to use as the driver. Must be a .R file.
- properties {[key: string]: string}
- Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- archive_uris Sequence[str]
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args Sequence[str]
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- file_uris Sequence[str]
- Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- logging_config LoggingConfigResponse
- Optional. The runtime log config for job execution.
- main_r_file_uri str
- The HCFS URI of the main R file to use as the driver. Must be a .R file.
- properties Mapping[str, str]
- Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- archiveUris List<String>
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args List<String>
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- fileUris List<String>
- Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- loggingConfig Property Map
- Optional. The runtime log config for job execution.
- mainRFileUri String
- The HCFS URI of the main R file to use as the driver. Must be a .R file.
- properties Map<String>
- Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
SparkSqlJob, SparkSqlJobArgs      
- JarFileUris List<string>
- Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
- LoggingConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.LoggingConfig
- Optional. The runtime log config for job execution.
- Properties Dictionary<string, string>
- Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API may be overwritten.
- QueryFileUri string
- The HCFS URI of the script that contains SQL queries.
- QueryList Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.QueryList
- A list of queries.
- ScriptVariables Dictionary<string, string>
- Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
- JarFileUris []string
- Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
- LoggingConfig LoggingConfig 
- Optional. The runtime log config for job execution.
- Properties map[string]string
- Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API may be overwritten.
- QueryFileUri string
- The HCFS URI of the script that contains SQL queries.
- QueryList QueryList 
- A list of queries.
- ScriptVariables map[string]string
- Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
- jarFileUris List<String>
- Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
- loggingConfig LoggingConfig 
- Optional. The runtime log config for job execution.
- properties Map<String,String>
- Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API may be overwritten.
- queryFileUri String
- The HCFS URI of the script that contains SQL queries.
- queryList QueryList 
- A list of queries.
- scriptVariables Map<String,String>
- Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
- jarFileUris string[]
- Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
- loggingConfig LoggingConfig 
- Optional. The runtime log config for job execution.
- properties {[key: string]: string}
- Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API may be overwritten.
- queryFileUri string
- The HCFS URI of the script that contains SQL queries.
- queryList QueryList 
- A list of queries.
- scriptVariables {[key: string]: string}
- Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
- jar_file_uris Sequence[str]
- Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
- logging_config LoggingConfig 
- Optional. The runtime log config for job execution.
- properties Mapping[str, str]
- Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API may be overwritten.
- query_file_uri str
- The HCFS URI of the script that contains SQL queries.
- query_list QueryList 
- A list of queries.
- script_variables Mapping[str, str]
- Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
- jarFileUris List<String>
- Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
- loggingConfig Property Map
- Optional. The runtime log config for job execution.
- properties Map<String>
- Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API may be overwritten.
- queryFileUri String
- The HCFS URI of the script that contains SQL queries.
- queryList Property Map
- A list of queries.
- scriptVariables Map<String>
- Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
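As a rough illustration of the SparkSqlJob inputs listed above, the TypeScript sketch below submits an inline query with a script variable and an extra jar on the classpath. The cluster name, bucket, table, and query text are placeholders.

import * as google_native from "@pulumi/google-native";

// Placeholder cluster, bucket, and query values for illustration only.
const sparkSqlJob = new google_native.dataproc.v1beta2.Job("example-sparksql-job", {
    region: "us-central1",
    placement: {
        clusterName: "example-cluster",
    },
    sparkSqlJob: {
        queryList: {
            queries: ["SELECT * FROM ${table} LIMIT 10;"],
        },
        // Equivalent to running: SET table="sales.orders"; before the queries.
        scriptVariables: {
            table: "sales.orders",
        },
        jarFileUris: ["gs://example-bucket/jars/custom-udfs.jar"],
        properties: {
            "spark.sql.shuffle.partitions": "32",
        },
    },
});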
SparkSqlJobResponse, SparkSqlJobResponseArgs        
- JarFileUris List<string>
- Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
- LoggingConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.LoggingConfigResponse
- Optional. The runtime log config for job execution.
- Properties Dictionary<string, string>
- Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API may be overwritten.
- QueryFileUri string
- The HCFS URI of the script that contains SQL queries.
- QueryList Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.QueryListResponse
- A list of queries.
- ScriptVariables Dictionary<string, string>
- Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
- JarFileUris []string
- Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
- LoggingConfig LoggingConfigResponse
- Optional. The runtime log config for job execution.
- Properties map[string]string
- Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API may be overwritten.
- QueryFileUri string
- The HCFS URI of the script that contains SQL queries.
- QueryList QueryListResponse
- A list of queries.
- ScriptVariables map[string]string
- Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
- jarFileUris List<String>
- Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
- loggingConfig LoggingConfigResponse
- Optional. The runtime log config for job execution.
- properties Map<String,String>
- Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API may be overwritten.
- queryFileUri String
- The HCFS URI of the script that contains SQL queries.
- queryList QueryListResponse
- A list of queries.
- scriptVariables Map<String,String>
- Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
- jarFileUris string[]
- Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
- loggingConfig LoggingConfigResponse
- Optional. The runtime log config for job execution.
- properties {[key: string]: string}
- Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API may be overwritten.
- queryFileUri string
- The HCFS URI of the script that contains SQL queries.
- queryList QueryListResponse
- A list of queries.
- scriptVariables {[key: string]: string}
- Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
- jar_file_uris Sequence[str]
- Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
- logging_config LoggingConfigResponse
- Optional. The runtime log config for job execution.
- properties Mapping[str, str]
- Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API may be overwritten.
- query_file_uri str
- The HCFS URI of the script that contains SQL queries.
- query_list QueryListResponse
- A list of queries.
- script_variables Mapping[str, str]
- Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
- jarFileUris List<String>
- Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
- loggingConfig Property Map
- Optional. The runtime log config for job execution.
- properties Map<String>
- Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API may be overwritten.
- queryFileUri String
- The HCFS URI of the script that contains SQL queries.
- queryList Property Map
- A list of queries.
- scriptVariables Map<String>
- Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
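The SparkSqlJobResponse fields above describe the same values as the inputs, echoed back through the resource's outputs once the job has been submitted. A minimal sketch, assuming a job configured with a query file; the cluster name and gs:// path are placeholders.

import * as google_native from "@pulumi/google-native";

const job = new google_native.dataproc.v1beta2.Job("example-sql-file-job", {
    region: "us-central1",
    placement: { clusterName: "example-cluster" },
    sparkSqlJob: {
        queryFileUri: "gs://example-bucket/queries/report.sql",
    },
});

// Read the resolved SparkSqlJobResponse back from the resource's outputs.
export const queryFileUri = job.sparkSqlJob.apply(j => j?.queryFileUri);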
YarnApplicationResponse, YarnApplicationResponseArgs      
- Name string
- The application name.
- Progress double
- The numerical progress of the application, from 1 to 100.
- State string
- The application state.
- TrackingUrl string
- The HTTP URL of the ApplicationMaster, HistoryServer, or TimelineServer that provides application-specific information. The URL uses the internal hostname, and requires a proxy server for resolution and, possibly, access.
- Name string
- The application name.
- Progress float64
- The numerical progress of the application, from 1 to 100.
- State string
- The application state.
- TrackingUrl string
- The HTTP URL of the ApplicationMaster, HistoryServer, or TimelineServer that provides application-specific information. The URL uses the internal hostname, and requires a proxy server for resolution and, possibly, access.
- name String
- The application name.
- progress Double
- The numerical progress of the application, from 1 to 100.
- state String
- The application state.
- trackingUrl String
- The HTTP URL of the ApplicationMaster, HistoryServer, or TimelineServer that provides application-specific information. The URL uses the internal hostname, and requires a proxy server for resolution and, possibly, access.
- name string
- The application name.
- progress number
- The numerical progress of the application, from 1 to 100.
- state string
- The application state.
- trackingUrl string
- The HTTP URL of the ApplicationMaster, HistoryServer, or TimelineServer that provides application-specific information. The URL uses the internal hostname, and requires a proxy server for resolution and, possibly, access.
- name str
- The application name.
- progress float
- The numerical progress of the application, from 1 to 100.
- state str
- The application state.
- tracking_url str
- The HTTP URL of the ApplicationMaster, HistoryServer, or TimelineServer that provides application-specific information. The URL uses the internal hostname, and requires a proxy server for resolution and, possibly, access.
- name String
- The application name.
- progress Number
- The numerical progress of the application, from 1 to 100.
- state String
- The application state.
- trackingUrl String
- The HTTP URL of the ApplicationMaster, HistoryServer, or TimelineServer that provides application-specific information. The URL uses the internal hostname, and requires a proxy server for resolution and, possibly, access.
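YarnApplicationResponse values are output-only. A sketch like the one below, assuming the Job resource surfaces them through its yarnApplications output, collects each application's name, state, progress, and tracking URL; the cluster name and Spark example class are placeholders.

import * as google_native from "@pulumi/google-native";

const job = new google_native.dataproc.v1beta2.Job("example-spark-job", {
    region: "us-central1",
    placement: { clusterName: "example-cluster" },
    sparkJob: {
        mainClass: "org.apache.spark.examples.SparkPi",
        jarFileUris: ["file:///usr/lib/spark/examples/jars/spark-examples.jar"],
        args: ["1000"],
    },
});

// Summarize each YARN application spun up by the job (assumed output field).
export const yarnApplicationSummaries = job.yarnApplications.apply(apps =>
    (apps ?? []).map(app => `${app.name} [${app.state}] ${app.progress}%: ${app.trackingUrl}`),
);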
Package Details
- Repository
- Google Cloud Native pulumi/pulumi-google-native
- License
- Apache-2.0