Job

Jobs are actions that BigQuery runs on your behalf to load data, export data, query data, or copy data. Once a BigQuery job is created, it cannot be changed or deleted.

Create a Job Resource

new Job(name: string, args: JobArgs, opts?: CustomResourceOptions);
def Job(resource_name, opts=None, copy=None, extract=None, job_id=None, job_timeout_ms=None, labels=None, load=None, location=None, project=None, query=None, __props__=None);
func NewJob(ctx *Context, name string, args JobArgs, opts ...ResourceOption) (*Job, error)
public Job(string name, JobArgs args, CustomResourceOptions? opts = null)
name string
The unique name of the resource.
args JobArgs
The arguments to resource properties.
opts CustomResourceOptions
Bag of options to control resource's behavior.
resource_name str
The unique name of the resource.
opts ResourceOptions
A bag of options that control this resource's behavior.
ctx Context
Context object for the current deployment.
name string
The unique name of the resource.
args JobArgs
The arguments to resource properties.
opts ResourceOption
Bag of options to control resource's behavior.
name string
The unique name of the resource.
args JobArgs
The arguments to resource properties.
opts CustomResourceOptions
Bag of options to control resource's behavior.
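As a quick illustration, here is a minimal sketch in TypeScript of creating a query job with the Pulumi GCP provider. The project, dataset, and table names are hypothetical placeholders, and the fields inside query are only a small subset of JobQueryArgs:

import * as gcp from "@pulumi/gcp";

// Create a BigQuery job that runs a query and writes the result to a table.
const queryJob = new gcp.bigquery.Job("query-job", {
    jobId: "example-query-job",            // must be unique; letters, numbers, _ or - only
    location: "US",
    labels: { team: "analytics" },
    query: {
        query: "SELECT 1 AS x",
        destinationTable: {
            projectId: "my-project",       // hypothetical project
            datasetId: "example_dataset",  // hypothetical dataset
            tableId: "query_results",      // hypothetical table
        },
    },
});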

Job Resource Properties

To learn more about resource properties and how to use them, see Inputs and Outputs in the Programming Model docs.

Inputs

The Job resource accepts the following input properties:

JobId string

The ID of the job. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or dashes (-). The maximum length is 1,024 characters.

Copy JobCopyArgs

Copies a table. Structure is documented below.

Extract JobExtractArgs

Configures an extract job. Structure is documented below.

JobTimeoutMs string

Job timeout in milliseconds. If this time limit is exceeded, BigQuery may attempt to terminate the job.

Labels Dictionary<string, string>

The labels associated with this job. You can use these to organize and group your jobs.

Load JobLoadArgs

Configures a load job. Structure is documented below.

Location string

The geographic location of the job. The default value is US.

Project string

The ID of the project in which the resource belongs. If it is not provided, the provider project is used.

Query JobQueryArgs

Configures a query job. Structure is documented below.

JobId string

The ID of the job. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or dashes (-). The maximum length is 1,024 characters.

Copy JobCopy

Copies a table. Structure is documented below.

Extract JobExtract

Configures an extract job. Structure is documented below.

JobTimeoutMs string

Job timeout in milliseconds. If this time limit is exceeded, BigQuery may attempt to terminate the job.

Labels map[string]string

The labels associated with this job. You can use these to organize and group your jobs.

Load JobLoad

Configures a load job. Structure is documented below.

Location string

The geographic location of the job. The default value is US.

Project string

The ID of the project in which the resource belongs. If it is not provided, the provider project is used.

Query JobQuery

Configures a query job. Structure is documented below.

jobId string

The ID of the job. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or dashes (-). The maximum length is 1,024 characters.

copy JobCopy

Copies a table. Structure is documented below.

extract JobExtract

Configures an extract job. Structure is documented below.

jobTimeoutMs string

Job timeout in milliseconds. If this time limit is exceeded, BigQuery may attempt to terminate the job.

labels {[key: string]: string}

The labels associated with this job. You can use these to organize and group your jobs.

load JobLoad

Configures a load job. Structure is documented below.

location string

The geographic location of the job. The default value is US.

project string

The ID of the project in which the resource belongs. If it is not provided, the provider project is used.

query JobQuery

Configures a query job. Structure is documented below.

job_id str

The ID of the job. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or dashes (-). The maximum length is 1,024 characters.

copy Dict[JobCopy]

Copies a table. Structure is documented below.

extract Dict[JobExtract]

Configures an extract job. Structure is documented below.

job_timeout_ms str

Job timeout in milliseconds. If this time limit is exceeded, BigQuery may attempt to terminate the job.

labels Dict[str, str]

The labels associated with this job. You can use these to organize and group your jobs.

load Dict[JobLoad]

Configures a load job. Structure is documented below.

location str

The geographic location of the job. The default value is US.

project str

The ID of the project in which the resource belongs. If it is not provided, the provider project is used.

query Dict[JobQuery]

Configures a query job. Structure is documented below.

Outputs

All input properties are implicitly available as output properties. Additionally, the Job resource produces the following output properties:

Id string
The provider-assigned unique ID for this managed resource.
JobType string

The type of the job.

UserEmail string

Email address of the user who ran the job.

Id string
The provider-assigned unique ID for this managed resource.
JobType string

The type of the job.

UserEmail string

Email address of the user who ran the job.

id string
The provider-assigned unique ID for this managed resource.
jobType string

The type of the job.

userEmail string

Email address of the user who ran the job.

id str
The provider-assigned unique ID for this managed resource.
job_type str

The type of the job.

user_email str

Email address of the user who ran the job.
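For example, a TypeScript program can export these output properties once a job resource has been declared (a minimal sketch; the job configuration is abbreviated):

import * as gcp from "@pulumi/gcp";

const job = new gcp.bigquery.Job("output-demo-job", {
    jobId: "example-output-demo-job",
    query: { query: "SELECT 1 AS x" },  // abbreviated configuration
});

// The provider populates these after the job is created.
export const jobType = job.jobType;     // e.g. "QUERY"
export const userEmail = job.userEmail; // email of the caller that ran the job
export const resourceId = job.id;       // provider-assigned unique ID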

Look up an Existing Job Resource

Get an existing Job resource’s state with the given name, ID, and optional extra properties used to qualify the lookup.

public static get(name: string, id: Input<ID>, state?: JobState, opts?: CustomResourceOptions): Job
static get(resource_name, id, opts=None, copy=None, extract=None, job_id=None, job_timeout_ms=None, job_type=None, labels=None, load=None, location=None, project=None, query=None, user_email=None, __props__=None);
func GetJob(ctx *Context, name string, id IDInput, state *JobState, opts ...ResourceOption) (*Job, error)
public static Job Get(string name, Input<string> id, JobState? state, CustomResourceOptions? opts = null)
name
The unique name of the resulting resource.
id
The unique provider ID of the resource to lookup.
state
Any extra arguments used during the lookup.
opts
A bag of options that control this resource's behavior.
resource_name
The unique name of the resulting resource.
id
The unique provider ID of the resource to lookup.
name
The unique name of the resulting resource.
id
The unique provider ID of the resource to lookup.
state
Any extra arguments used during the lookup.
opts
A bag of options that control this resource's behavior.
name
The unique name of the resulting resource.
id
The unique provider ID of the resource to lookup.
state
Any extra arguments used during the lookup.
opts
A bag of options that control this resource's behavior.
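For instance, an existing job can be looked up in TypeScript as follows; the ID format shown is an assumption and should match whatever ID your provider reports for the job:

import * as gcp from "@pulumi/gcp";

// Read the state of a job that already exists, without creating or managing it.
const existingJob = gcp.bigquery.Job.get(
    "existing-job",
    "projects/my-project/jobs/example-query-job", // assumed ID format
);

export const existingJobType = existingJob.jobType;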

The following state arguments are supported:

Copy JobCopyArgs

Copies a table. Structure is documented below.

Extract JobExtractArgs

Configures an extract job. Structure is documented below.

JobId string

The ID of the job. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or dashes (-). The maximum length is 1,024 characters.

JobTimeoutMs string

Job timeout in milliseconds. If this time limit is exceeded, BigQuery may attempt to terminate the job.

JobType string

The type of the job.

Labels Dictionary<string, string>

The labels associated with this job. You can use these to organize and group your jobs.

Load JobLoadArgs

Configures a load job. Structure is documented below.

Location string

The geographic location of the job. The default value is US.

Project string

The ID of the project in which the resource belongs. If it is not provided, the provider project is used.

Query JobQueryArgs

Configures a query job. Structure is documented below.

UserEmail string

Email address of the user who ran the job.

Copy JobCopy

Copies a table. Structure is documented below.

Extract JobExtract

Configures an extract job. Structure is documented below.

JobId string

The ID of the job. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or dashes (-). The maximum length is 1,024 characters.

JobTimeoutMs string

Job timeout in milliseconds. If this time limit is exceeded, BigQuery may attempt to terminate the job.

JobType string

The type of the job.

Labels map[string]string

The labels associated with this job. You can use these to organize and group your jobs.

Load JobLoad

Configures a load job. Structure is documented below.

Location string

The geographic location of the job. The default value is US.

Project string

The ID of the project in which the resource belongs. If it is not provided, the provider project is used.

Query JobQuery

Configures a query job. Structure is documented below.

UserEmail string

Email address of the user who ran the job.

copy JobCopy

Copies a table. Structure is documented below.

extract JobExtract

Configures an extract job. Structure is documented below.

jobId string

The ID of the job. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or dashes (-). The maximum length is 1,024 characters.

jobTimeoutMs string

Job timeout in milliseconds. If this time limit is exceeded, BigQuery may attempt to terminate the job.

jobType string

The type of the job.

labels {[key: string]: string}

The labels associated with this job. You can use these to organize and group your jobs.

load JobLoad

Configures a load job. Structure is documented below.

location string

The geographic location of the job. The default value is US.

project string

The ID of the project in which the resource belongs. If it is not provided, the provider project is used.

query JobQuery

Configures a query job. Structure is documented below.

userEmail string

Email address of the user who ran the job.

copy Dict[JobCopy]

Copies a table. Structure is documented below.

extract Dict[JobExtract]

Configures an extract job. Structure is documented below.

job_id str

The ID of the job. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or dashes (-). The maximum length is 1,024 characters.

job_timeout_ms str

Job timeout in milliseconds. If this time limit is exceeded, BigQuery may attempt to terminate the job.

job_type str

The type of the job.

labels Dict[str, str]

The labels associated with this job. You can use these to organize and group your jobs.

load Dict[JobLoad]

Configures a load job. Structure is documented below.

location str

The geographic location of the job. The default value is US.

project str

The ID of the project in which the resource belongs. If it is not provided, the provider project is used.

query Dict[JobQuery]

Configures a query job. Structure is documented below.

user_email str

Email address of the user who ran the job.

Supporting Types

JobCopy

See the input and output API doc for this type.

SourceTables List<JobCopySourceTableArgs>

Source tables to copy. Structure is documented below.

CreateDisposition string

Specifies whether the job is allowed to create new tables. The following values are supported: CREATE_IF_NEEDED: If the table does not exist, BigQuery creates the table. CREATE_NEVER: The table must already exist. If it does not, a ‘notFound’ error is returned in the job result. Creation, truncation and append actions occur as one atomic update upon job completion.

DestinationEncryptionConfiguration JobCopyDestinationEncryptionConfigurationArgs

Custom encryption configuration (e.g., Cloud KMS keys). Structure is documented below.

DestinationTable JobCopyDestinationTableArgs

The destination table. Structure is documented below.

WriteDisposition string

Specifies the action that occurs if the destination table already exists. The following values are supported: WRITE_TRUNCATE: If the table already exists, BigQuery overwrites the table data and uses the schema from the query result. WRITE_APPEND: If the table already exists, BigQuery appends the data to the table. WRITE_EMPTY: If the table already exists and contains data, a ‘duplicate’ error is returned in the job result. Each action is atomic and only occurs if BigQuery is able to complete the job successfully. Creation, truncation and append actions occur as one atomic update upon job completion.

SourceTables []JobCopySourceTable

Source tables to copy. Structure is documented below.

CreateDisposition string

Specifies whether the job is allowed to create new tables. The following values are supported: CREATE_IF_NEEDED: If the table does not exist, BigQuery creates the table. CREATE_NEVER: The table must already exist. If it does not, a ‘notFound’ error is returned in the job result. Creation, truncation and append actions occur as one atomic update upon job completion.

DestinationEncryptionConfiguration JobCopyDestinationEncryptionConfiguration

Custom encryption configuration (e.g., Cloud KMS keys). Structure is documented below.

DestinationTable JobCopyDestinationTable

The destination table. Structure is documented below.

WriteDisposition string

Specifies the action that occurs if the destination table already exists. The following values are supported: WRITE_TRUNCATE: If the table already exists, BigQuery overwrites the table data and uses the schema from the query result. WRITE_APPEND: If the table already exists, BigQuery appends the data to the table. WRITE_EMPTY: If the table already exists and contains data, a ‘duplicate’ error is returned in the job result. Each action is atomic and only occurs if BigQuery is able to complete the job successfully. Creation, truncation and append actions occur as one atomic update upon job completion.

sourceTables JobCopySourceTable[]

Source tables to copy. Structure is documented below.

createDisposition string

Specifies whether the job is allowed to create new tables. The following values are supported: CREATE_IF_NEEDED: If the table does not exist, BigQuery creates the table. CREATE_NEVER: The table must already exist. If it does not, a ‘notFound’ error is returned in the job result. Creation, truncation and append actions occur as one atomic update upon job completion.

destinationEncryptionConfiguration JobCopyDestinationEncryptionConfiguration

Custom encryption configuration (e.g., Cloud KMS keys). Structure is documented below.

destinationTable JobCopyDestinationTable

The destination table. Structure is documented below.

writeDisposition string

Specifies the action that occurs if the destination table already exists. The following values are supported: WRITE_TRUNCATE: If the table already exists, BigQuery overwrites the table data and uses the schema from the query result. WRITE_APPEND: If the table already exists, BigQuery appends the data to the table. WRITE_EMPTY: If the table already exists and contains data, a ‘duplicate’ error is returned in the job result. Each action is atomic and only occurs if BigQuery is able to complete the job successfully. Creation, truncation and append actions occur as one atomic update upon job completion.

sourceTables List[JobCopySourceTable]

Source tables to copy. Structure is documented below.

createDisposition str

Specifies whether the job is allowed to create new tables. The following values are supported: CREATE_IF_NEEDED: If the table does not exist, BigQuery creates the table. CREATE_NEVER: The table must already exist. If it does not, a ‘notFound’ error is returned in the job result. Creation, truncation and append actions occur as one atomic update upon job completion.

destinationEncryptionConfiguration Dict[JobCopyDestinationEncryptionConfiguration]

Custom encryption configuration (e.g., Cloud KMS keys). Structure is documented below.

destinationTable Dict[JobCopyDestinationTable]

The destination table. Structure is documented below.

writeDisposition str

Specifies the action that occurs if the destination table already exists. The following values are supported: WRITE_TRUNCATE: If the table already exists, BigQuery overwrites the table data and uses the schema from the query result. WRITE_APPEND: If the table already exists, BigQuery appends the data to the table. WRITE_EMPTY: If the table already exists and contains data, a ‘duplicate’ error is returned in the job result. Each action is atomic and only occurs if BigQuery is able to complete the job successfully. Creation, truncation and append actions occur as one atomic update upon job completion.
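Putting the JobCopy fields together, a copy job might be configured as in the following TypeScript sketch (project, dataset, and table names are hypothetical):

import * as gcp from "@pulumi/gcp";

const copyJob = new gcp.bigquery.Job("copy-job", {
    jobId: "example-copy-job",
    copy: {
        sourceTables: [{
            projectId: "my-project",
            datasetId: "source_dataset",
            tableId: "source_table",
        }],
        destinationTable: {
            projectId: "my-project",
            datasetId: "dest_dataset",
            tableId: "dest_table",
        },
        createDisposition: "CREATE_IF_NEEDED", // create the destination table if it is missing
        writeDisposition: "WRITE_TRUNCATE",    // overwrite any existing data in the destination
    },
});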

JobCopyDestinationEncryptionConfiguration

See the input and output API doc for this type.

KmsKeyName string

Describes the Cloud KMS encryption key that will be used to protect the destination BigQuery table. The BigQuery Service Account associated with your project requires access to this encryption key.

KmsKeyName string

Describes the Cloud KMS encryption key that will be used to protect the destination BigQuery table. The BigQuery Service Account associated with your project requires access to this encryption key.

kmsKeyName string

Describes the Cloud KMS encryption key that will be used to protect the destination BigQuery table. The BigQuery Service Account associated with your project requires access to this encryption key.

kms_key_name str

Describes the Cloud KMS encryption key that will be used to protect the destination BigQuery table. The BigQuery Service Account associated with your project requires access to this encryption key.
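For example, a copy job can protect its destination with a customer-managed key by passing this block; the KMS key resource name below is a hypothetical placeholder, and the BigQuery service account must be granted access to it:

import * as gcp from "@pulumi/gcp";

const encryptedCopyJob = new gcp.bigquery.Job("encrypted-copy-job", {
    jobId: "example-encrypted-copy-job",
    copy: {
        sourceTables: [{
            tableId: "projects/my-project/datasets/source_dataset/tables/source_table",
        }],
        destinationTable: {
            tableId: "projects/my-project/datasets/dest_dataset/tables/dest_table",
        },
        destinationEncryptionConfiguration: {
            kmsKeyName: "projects/my-project/locations/us/keyRings/example-ring/cryptoKeys/example-key",
        },
    },
});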

JobCopyDestinationTable

See the input and output API doc for this type.

TableId string

The table. Can be specified as {{table_id}} if project_id and dataset_id are also set, or in the form projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}} if not.

DatasetId string

The ID of the dataset containing this table.

ProjectId string

The ID of the project containing this table.

TableId string

The table. Can be specified as {{table_id}} if project_id and dataset_id are also set, or in the form projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}} if not.

DatasetId string

The ID of the dataset containing this table.

ProjectId string

The ID of the project containing this table.

tableId string

The table. Can be specified as {{table_id}} if project_id and dataset_id are also set, or in the form projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}} if not.

datasetId string

The ID of the dataset containing this table.

projectId string

The ID of the project containing this table.

table_id str

The table. Can be specified as {{table_id}} if project_id and dataset_id are also set, or in the form projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}} if not.

dataset_id str

The ID of the dataset containing this table.

project_id str

The ID of the project containing this table.

JobCopySourceTable

See the input and output API doc for this type.

TableId string

The table. Can be specified as {{table_id}} if project_id and dataset_id are also set, or in the form projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}} if not.

DatasetId string

The ID of the dataset containing this table.

ProjectId string

The ID of the project containing this table.

TableId string

The table. Can be specified as {{table_id}} if project_id and dataset_id are also set, or in the form projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}} if not.

DatasetId string

The ID of the dataset containing this table.

ProjectId string

The ID of the project containing this table.

tableId string

The table. Can be specified as {{table_id}} if project_id and dataset_id are also set, or in the form projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}} if not.

datasetId string

The ID of the dataset containing this table.

projectId string

The ID of the project containing this table.

table_id str

The table. Can be specified as {{table_id}} if project_id and dataset_id are also set, or in the form projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}} if not.

dataset_id str

The ID of the dataset containing this table.

project_id str

The ID of the project containing this table.

JobExtract

See the input and output API doc for this type.

DestinationUris List<string>

A list of fully-qualified Google Cloud Storage URIs where the extracted table should be written.

Compression string

The compression type to use for exported files. Possible values include GZIP, DEFLATE, SNAPPY, and NONE. The default value is NONE. DEFLATE and SNAPPY are only supported for Avro.

DestinationFormat string

The exported file format. Possible values include CSV, NEWLINE_DELIMITED_JSON and AVRO for tables and SAVED_MODEL for models. The default value for tables is CSV. Tables with nested or repeated fields cannot be exported as CSV. The default value for models is SAVED_MODEL.

FieldDelimiter string

When extracting data in CSV format, this defines the delimiter to use between fields in the exported data. Default is ‘,’

PrintHeader bool

Whether to print out a header row in the results. Default is true.

SourceModel JobExtractSourceModelArgs

A reference to the model being exported. Structure is documented below.

SourceTable JobExtractSourceTableArgs

A reference to the table being exported. Structure is documented below.

UseAvroLogicalTypes bool

Whether to use logical types when extracting to AVRO format.

DestinationUris []string

A list of fully-qualified Google Cloud Storage URIs where the extracted table should be written.

Compression string

The compression type to use for exported files. Possible values include GZIP, DEFLATE, SNAPPY, and NONE. The default value is NONE. DEFLATE and SNAPPY are only supported for Avro.

DestinationFormat string

The exported file format. Possible values include CSV, NEWLINE_DELIMITED_JSON and AVRO for tables and SAVED_MODEL for models. The default value for tables is CSV. Tables with nested or repeated fields cannot be exported as CSV. The default value for models is SAVED_MODEL.

FieldDelimiter string

When extracting data in CSV format, this defines the delimiter to use between fields in the exported data. Default is ‘,’

PrintHeader bool

Whether to print out a header row in the results. Default is true.

SourceModel JobExtractSourceModel

A reference to the model being exported. Structure is documented below.

SourceTable JobExtractSourceTable

A reference to the table being exported. Structure is documented below.

UseAvroLogicalTypes bool

Whether to use logical types when extracting to AVRO format.

destinationUris string[]

A list of fully-qualified Google Cloud Storage URIs where the extracted table should be written.

compression string

The compression type to use for exported files. Possible values include GZIP, DEFLATE, SNAPPY, and NONE. The default value is NONE. DEFLATE and SNAPPY are only supported for Avro.

destinationFormat string

The exported file format. Possible values include CSV, NEWLINE_DELIMITED_JSON and AVRO for tables and SAVED_MODEL for models. The default value for tables is CSV. Tables with nested or repeated fields cannot be exported as CSV. The default value for models is SAVED_MODEL.

fieldDelimiter string

When extracting data in CSV format, this defines the delimiter to use between fields in the exported data. Default is ‘,’

printHeader boolean

Whether to print out a header row in the results. Default is true.

sourceModel JobExtractSourceModel

A reference to the model being exported. Structure is documented below.

sourceTable JobExtractSourceTable

A reference to the table being exported. Structure is documented below.

useAvroLogicalTypes boolean

Whether to use logical types when extracting to AVRO format.

destinationUris List[str]

A list of fully-qualified Google Cloud Storage URIs where the extracted table should be written.

compression str

The compression type to use for exported files. Possible values include GZIP, DEFLATE, SNAPPY, and NONE. The default value is NONE. DEFLATE and SNAPPY are only supported for Avro.

destinationFormat str

The exported file format. Possible values include CSV, NEWLINE_DELIMITED_JSON and AVRO for tables and SAVED_MODEL for models. The default value for tables is CSV. Tables with nested or repeated fields cannot be exported as CSV. The default value for models is SAVED_MODEL.

fieldDelimiter str

When extracting data in CSV format, this defines the delimiter to use between fields in the exported data. Default is ‘,’

printHeader bool

Whether to print out a header row in the results. Default is true.

sourceModel Dict[JobExtractSourceModel]

A reference to the model being exported. Structure is documented below.

sourceTable Dict[JobExtractSourceTable]

A reference to the table being exported. Structure is documented below.

useAvroLogicalTypes bool

Whether to use logical types when extracting to AVRO format.
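As an illustration, the following TypeScript sketch exports a table to Cloud Storage as gzipped CSV; the bucket and table names are hypothetical:

import * as gcp from "@pulumi/gcp";

const extractJob = new gcp.bigquery.Job("extract-job", {
    jobId: "example-extract-job",
    extract: {
        sourceTable: {
            projectId: "my-project",
            datasetId: "example_dataset",
            tableId: "example_table",
        },
        destinationUris: ["gs://example-bucket/exports/example_table-*.csv.gz"], // hypothetical bucket
        destinationFormat: "CSV",
        compression: "GZIP",
        printHeader: true,
    },
});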

JobExtractSourceModel

See the input and output API doc for this type.

DatasetId string

The ID of the dataset containing this model.

ModelId string

The ID of the model.

ProjectId string

The ID of the project containing this model.

DatasetId string

The ID of the dataset containing this model.

ModelId string

The ID of the model.

ProjectId string

The ID of the project containing this model.

datasetId string

The ID of the dataset containing this model.

modelId string

The ID of the model.

projectId string

The ID of the project containing this model.

dataset_id str

The ID of the dataset containing this model.

modelId str

The ID of the model.

project_id str

The ID of the project containing this model.

JobExtractSourceTable

See the input and output API doc for this type.

TableId string

The table. Can be specified as {{table_id}} if project_id and dataset_id are also set, or in the form projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}} if not.

DatasetId string

The ID of the dataset containing this table.

ProjectId string

The ID of the project containing this table.

TableId string

The table. Can be specified as {{table_id}} if project_id and dataset_id are also set, or in the form projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}} if not.

DatasetId string

The ID of the dataset containing this table.

ProjectId string

The ID of the project containing this table.

tableId string

The table. Can be specified as {{table_id}} if project_id and dataset_id are also set, or in the form projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}} if not.

datasetId string

The ID of the dataset containing this table.

projectId string

The ID of the project containing this table.

table_id str

The table. Can be specified as {{table_id}} if project_id and dataset_id are also set, or in the form projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}} if not.

dataset_id str

The ID of the dataset containing this table.

project_id str

The ID of the project containing this table.

JobLoad

See the input and output API doc for this type.

DestinationTable JobLoadDestinationTableArgs

The destination table. Structure is documented below.

SourceUris List<string>

The fully-qualified URIs that point to your data in Google Cloud. For Google Cloud Storage URIs: Each URI can contain one ‘*’ wildcard character and it must come after the ‘bucket’ name. Size limits related to load jobs apply to external data sources. For Google Cloud Bigtable URIs: Exactly one URI can be specified and it has to be a fully specified and valid HTTPS URL for a Google Cloud Bigtable table. For Google Cloud Datastore backups: Exactly one URI can be specified. Also, the ‘*’ wildcard character is not allowed.

AllowJaggedRows bool

Accept rows that are missing trailing optional columns. The missing values are treated as nulls. If false, records with missing trailing columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false. Only applicable to CSV, ignored for other formats.

AllowQuotedNewlines bool

Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file. The default value is false.

Autodetect bool

Indicates if BigQuery should automatically infer the options and schema for CSV and JSON sources.

CreateDisposition string

Specifies whether the job is allowed to create new tables. The following values are supported: CREATE_IF_NEEDED: If the table does not exist, BigQuery creates the table. CREATE_NEVER: The table must already exist. If it does not, a ‘notFound’ error is returned in the job result. Creation, truncation and append actions occur as one atomic update upon job completion.

DestinationEncryptionConfiguration JobLoadDestinationEncryptionConfigurationArgs

Custom encryption configuration (e.g., Cloud KMS keys). Structure is documented below.

Encoding string

The character encoding of the data. The supported values are UTF-8 or ISO-8859-1. The default value is UTF-8. BigQuery decodes the data after the raw, binary data has been split using the values of the quote and fieldDelimiter properties.

FieldDelimiter string

The separator for fields in a CSV file. BigQuery uses this delimiter when reading the source data. Default is ‘,’

IgnoreUnknownValues bool

Indicates if BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false. The sourceFormat property determines what BigQuery treats as an extra value: CSV: Trailing columns JSON: Named values that don’t match any column names

MaxBadRecords int

The maximum number of bad records that BigQuery can ignore when running the job. If the number of bad records exceeds this value, an invalid error is returned in the job result. The default value is 0, which requires that all records are valid.

NullMarker string

Specifies a string that represents a null value in a CSV file. The default value is the empty string. If you set this property to a custom value, BigQuery throws an error if an empty string is present for all data types except for STRING and BYTE. For STRING and BYTE columns, BigQuery interprets the empty string as an empty value.

ProjectionFields List<string>

If sourceFormat is set to “DATASTORE_BACKUP”, indicates which entity properties to load into BigQuery from a Cloud Datastore backup. Property names are case sensitive and must be top-level properties. If no properties are specified, BigQuery loads all properties. If any named property isn’t found in the Cloud Datastore backup, an invalid error is returned in the job result.

Quote string

The value that is used to quote data sections in a CSV file. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. The default value is a double-quote (‘“’). If your data does not contain quoted sections, set the property value to an empty string. If your data contains quoted newline characters, you must also set the allowQuotedNewlines property to true.

SchemaUpdateOptions List<string>

Allows the schema of the destination table to be updated as a side effect of the load job if a schema is autodetected or supplied in the job configuration. Schema update options are supported in two cases: when writeDisposition is WRITE_APPEND; when writeDisposition is WRITE_TRUNCATE and the destination table is a partition of a table, specified by partition decorators. For normal tables, WRITE_TRUNCATE will always overwrite the schema. One or more of the following values are specified: ALLOW_FIELD_ADDITION: allow adding a nullable field to the schema. ALLOW_FIELD_RELAXATION: allow relaxing a required field in the original schema to nullable.

SkipLeadingRows int

The number of rows at the top of a CSV file that BigQuery will skip when loading the data. The default value is 0. This property is useful if you have header rows in the file that should be skipped. When autodetect is on, the behavior is the following: skipLeadingRows unspecified - Autodetect tries to detect headers in the first row. If they are not detected, the row is read as data. Otherwise data is read starting from the second row. skipLeadingRows is 0 - Instructs autodetect that there are no headers and data should be read starting from the first row. skipLeadingRows = N > 0 - Autodetect skips N-1 rows and tries to detect headers in row N. If headers are not detected, row N is just skipped. Otherwise row N is used to extract column names for the detected schema.

SourceFormat string

The format of the data files. For CSV files, specify “CSV”. For datastore backups, specify “DATASTORE_BACKUP”. For newline-delimited JSON, specify “NEWLINE_DELIMITED_JSON”. For Avro, specify “AVRO”. For parquet, specify “PARQUET”. For orc, specify “ORC”. The default value is CSV.

TimePartitioning JobLoadTimePartitioningArgs

Time-based partitioning specification for the destination table. Structure is documented below.

WriteDisposition string

Specifies the action that occurs if the destination table already exists. The following values are supported: WRITE_TRUNCATE: If the table already exists, BigQuery overwrites the table data and uses the schema from the query result. WRITE_APPEND: If the table already exists, BigQuery appends the data to the table. WRITE_EMPTY: If the table already exists and contains data, a ‘duplicate’ error is returned in the job result. Each action is atomic and only occurs if BigQuery is able to complete the job successfully. Creation, truncation and append actions occur as one atomic update upon job completion.

DestinationTable JobLoadDestinationTable

The destination table. Structure is documented below.

SourceUris []string

The fully-qualified URIs that point to your data in Google Cloud. For Google Cloud Storage URIs: Each URI can contain one ‘*’ wildcard character and it must come after the ‘bucket’ name. Size limits related to load jobs apply to external data sources. For Google Cloud Bigtable URIs: Exactly one URI can be specified and it has to be a fully specified and valid HTTPS URL for a Google Cloud Bigtable table. For Google Cloud Datastore backups: Exactly one URI can be specified. Also, the ‘*’ wildcard character is not allowed.

AllowJaggedRows bool

Accept rows that are missing trailing optional columns. The missing values are treated as nulls. If false, records with missing trailing columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false. Only applicable to CSV, ignored for other formats.

AllowQuotedNewlines bool

Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file. The default value is false.

Autodetect bool

Indicates if BigQuery should automatically infer the options and schema for CSV and JSON sources.

CreateDisposition string

Specifies whether the job is allowed to create new tables. The following values are supported: CREATE_IF_NEEDED: If the table does not exist, BigQuery creates the table. CREATE_NEVER: The table must already exist. If it does not, a ‘notFound’ error is returned in the job result. Creation, truncation and append actions occur as one atomic update upon job completion.

DestinationEncryptionConfiguration JobLoadDestinationEncryptionConfiguration

Custom encryption configuration (e.g., Cloud KMS keys). Structure is documented below.

Encoding string

The character encoding of the data. The supported values are UTF-8 or ISO-8859-1. The default value is UTF-8. BigQuery decodes the data after the raw, binary data has been split using the values of the quote and fieldDelimiter properties.

FieldDelimiter string

The separator for fields in a CSV file. BigQuery uses this delimiter when reading the source data. Default is ‘,’

IgnoreUnknownValues bool

Indicates if BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false. The sourceFormat property determines what BigQuery treats as an extra value: CSV: Trailing columns JSON: Named values that don’t match any column names

MaxBadRecords int

The maximum number of bad records that BigQuery can ignore when running the job. If the number of bad records exceeds this value, an invalid error is returned in the job result. The default value is 0, which requires that all records are valid.

NullMarker string

Specifies a string that represents a null value in a CSV file. The default value is the empty string. If you set this property to a custom value, BigQuery throws an error if an empty string is present for all data types except for STRING and BYTE. For STRING and BYTE columns, BigQuery interprets the empty string as an empty value.

ProjectionFields []string

If sourceFormat is set to “DATASTORE_BACKUP”, indicates which entity properties to load into BigQuery from a Cloud Datastore backup. Property names are case sensitive and must be top-level properties. If no properties are specified, BigQuery loads all properties. If any named property isn’t found in the Cloud Datastore backup, an invalid error is returned in the job result.

Quote string

The value that is used to quote data sections in a CSV file. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. The default value is a double-quote (‘“’). If your data does not contain quoted sections, set the property value to an empty string. If your data contains quoted newline characters, you must also set the allowQuotedNewlines property to true.

SchemaUpdateOptions []string

Allows the schema of the destination table to be updated as a side effect of the load job if a schema is autodetected or supplied in the job configuration. Schema update options are supported in two cases: when writeDisposition is WRITE_APPEND; when writeDisposition is WRITE_TRUNCATE and the destination table is a partition of a table, specified by partition decorators. For normal tables, WRITE_TRUNCATE will always overwrite the schema. One or more of the following values are specified: ALLOW_FIELD_ADDITION: allow adding a nullable field to the schema. ALLOW_FIELD_RELAXATION: allow relaxing a required field in the original schema to nullable.

SkipLeadingRows int

The number of rows at the top of a CSV file that BigQuery will skip when loading the data. The default value is 0. This property is useful if you have header rows in the file that should be skipped. When autodetect is on, the behavior is the following: skipLeadingRows unspecified - Autodetect tries to detect headers in the first row. If they are not detected, the row is read as data. Otherwise data is read starting from the second row. skipLeadingRows is 0 - Instructs autodetect that there are no headers and data should be read starting from the first row. skipLeadingRows = N > 0 - Autodetect skips N-1 rows and tries to detect headers in row N. If headers are not detected, row N is just skipped. Otherwise row N is used to extract column names for the detected schema.

SourceFormat string

The format of the data files. For CSV files, specify “CSV”. For datastore backups, specify “DATASTORE_BACKUP”. For newline-delimited JSON, specify “NEWLINE_DELIMITED_JSON”. For Avro, specify “AVRO”. For parquet, specify “PARQUET”. For orc, specify “ORC”. The default value is CSV.

TimePartitioning JobLoadTimePartitioning

Time-based partitioning specification for the destination table. Structure is documented below.

WriteDisposition string

Specifies the action that occurs if the destination table already exists. The following values are supported: WRITE_TRUNCATE: If the table already exists, BigQuery overwrites the table data and uses the schema from the query result. WRITE_APPEND: If the table already exists, BigQuery appends the data to the table. WRITE_EMPTY: If the table already exists and contains data, a ‘duplicate’ error is returned in the job result. Each action is atomic and only occurs if BigQuery is able to complete the job successfully. Creation, truncation and append actions occur as one atomic update upon job completion.

destinationTable JobLoadDestinationTable

The destination table. Structure is documented below.

sourceUris string[]

The fully-qualified URIs that point to your data in Google Cloud. For Google Cloud Storage URIs: Each URI can contain one ‘*’ wildcard character and it must come after the ‘bucket’ name. Size limits related to load jobs apply to external data sources. For Google Cloud Bigtable URIs: Exactly one URI can be specified and it has to be a fully specified and valid HTTPS URL for a Google Cloud Bigtable table. For Google Cloud Datastore backups: Exactly one URI can be specified. Also, the ‘*’ wildcard character is not allowed.

allowJaggedRows boolean

Accept rows that are missing trailing optional columns. The missing values are treated as nulls. If false, records with missing trailing columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false. Only applicable to CSV, ignored for other formats.

allowQuotedNewlines boolean

Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file. The default value is false.

autodetect boolean

Indicates if BigQuery should automatically infer the options and schema for CSV and JSON sources.

createDisposition string

Specifies whether the job is allowed to create new tables. The following values are supported: CREATE_IF_NEEDED: If the table does not exist, BigQuery creates the table. CREATE_NEVER: The table must already exist. If it does not, a ‘notFound’ error is returned in the job result. Creation, truncation and append actions occur as one atomic update upon job completion.

destinationEncryptionConfiguration JobLoadDestinationEncryptionConfiguration

Custom encryption configuration (e.g., Cloud KMS keys). Structure is documented below.

encoding string

The character encoding of the data. The supported values are UTF-8 or ISO-8859-1. The default value is UTF-8. BigQuery decodes the data after the raw, binary data has been split using the values of the quote and fieldDelimiter properties.

fieldDelimiter string

The separator for fields in a CSV file. BigQuery uses this delimiter when reading the source data. Default is ‘,’

ignoreUnknownValues boolean

Indicates if BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false. The sourceFormat property determines what BigQuery treats as an extra value: CSV: Trailing columns JSON: Named values that don’t match any column names

maxBadRecords number

The maximum number of bad records that BigQuery can ignore when running the job. If the number of bad records exceeds this value, an invalid error is returned in the job result. The default value is 0, which requires that all records are valid.

nullMarker string

Specifies a string that represents a null value in a CSV file. The default value is the empty string. If you set this property to a custom value, BigQuery throws an error if an empty string is present for all data types except for STRING and BYTE. For STRING and BYTE columns, BigQuery interprets the empty string as an empty value.

projectionFields string[]

If sourceFormat is set to “DATASTORE_BACKUP”, indicates which entity properties to load into BigQuery from a Cloud Datastore backup. Property names are case sensitive and must be top-level properties. If no properties are specified, BigQuery loads all properties. If any named property isn’t found in the Cloud Datastore backup, an invalid error is returned in the job result.

quote string

The value that is used to quote data sections in a CSV file. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. The default value is a double-quote (‘“’). If your data does not contain quoted sections, set the property value to an empty string. If your data contains quoted newline characters, you must also set the allowQuotedNewlines property to true.

schemaUpdateOptions string[]

Allows the schema of the destination table to be updated as a side effect of the load job if a schema is autodetected or supplied in the job configuration. Schema update options are supported in two cases: when writeDisposition is WRITE_APPEND; when writeDisposition is WRITE_TRUNCATE and the destination table is a partition of a table, specified by partition decorators. For normal tables, WRITE_TRUNCATE will always overwrite the schema. One or more of the following values are specified: ALLOW_FIELD_ADDITION: allow adding a nullable field to the schema. ALLOW_FIELD_RELAXATION: allow relaxing a required field in the original schema to nullable.

skipLeadingRows number

The number of rows at the top of a CSV file that BigQuery will skip when loading the data. The default value is 0. This property is useful if you have header rows in the file that should be skipped. When autodetect is on, the behavior is the following: skipLeadingRows unspecified - Autodetect tries to detect headers in the first row. If they are not detected, the row is read as data. Otherwise data is read starting from the second row. skipLeadingRows is 0 - Instructs autodetect that there are no headers and data should be read starting from the first row. skipLeadingRows = N > 0 - Autodetect skips N-1 rows and tries to detect headers in row N. If headers are not detected, row N is just skipped. Otherwise row N is used to extract column names for the detected schema.

sourceFormat string

The format of the data files. For CSV files, specify “CSV”. For datastore backups, specify “DATASTORE_BACKUP”. For newline-delimited JSON, specify “NEWLINE_DELIMITED_JSON”. For Avro, specify “AVRO”. For parquet, specify “PARQUET”. For orc, specify “ORC”. The default value is CSV.

timePartitioning JobLoadTimePartitioning

Time-based partitioning specification for the destination table. Structure is documented below.

writeDisposition string

Specifies the action that occurs if the destination table already exists. The following values are supported: WRITE_TRUNCATE: If the table already exists, BigQuery overwrites the table data and uses the schema from the query result. WRITE_APPEND: If the table already exists, BigQuery appends the data to the table. WRITE_EMPTY: If the table already exists and contains data, a ‘duplicate’ error is returned in the job result. Each action is atomic and only occurs if BigQuery is able to complete the job successfully. Creation, truncation and append actions occur as one atomic update upon job completion.

destinationTable Dict[JobLoadDestinationTable]

The destination table. Structure is documented below.

sourceUris List[str]

The fully-qualified URIs that point to your data in Google Cloud. For Google Cloud Storage URIs: Each URI can contain one ‘*’ wildcard character and it must come after the ‘bucket’ name. Size limits related to load jobs apply to external data sources. For Google Cloud Bigtable URIs: Exactly one URI can be specified and it has to be a fully specified and valid HTTPS URL for a Google Cloud Bigtable table. For Google Cloud Datastore backups: Exactly one URI can be specified. Also, the ‘*’ wildcard character is not allowed.

allowJaggedRows bool

Accept rows that are missing trailing optional columns. The missing values are treated as nulls. If false, records with missing trailing columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false. Only applicable to CSV, ignored for other formats.

allowQuotedNewlines bool

Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file. The default value is false.

autodetect bool

Indicates if BigQuery should automatically infer the options and schema for CSV and JSON sources.

createDisposition str

Specifies whether the job is allowed to create new tables. The following values are supported: CREATE_IF_NEEDED: If the table does not exist, BigQuery creates the table. CREATE_NEVER: The table must already exist. If it does not, a ‘notFound’ error is returned in the job result. Creation, truncation and append actions occur as one atomic update upon job completion.

destinationEncryptionConfiguration Dict[JobLoadDestinationEncryptionConfiguration]

Custom encryption configuration (e.g., Cloud KMS keys). Structure is documented below.

encoding str

The character encoding of the data. The supported values are UTF-8 or ISO-8859-1. The default value is UTF-8. BigQuery decodes the data after the raw, binary data has been split using the values of the quote and fieldDelimiter properties.

fieldDelimiter str

The separator for fields in a CSV file. BigQuery uses this delimiter when reading the source data. Default is ‘,’

ignoreUnknownValues bool

Indicates if BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false. The sourceFormat property determines what BigQuery treats as an extra value: CSV: Trailing columns JSON: Named values that don’t match any column names

maxBadRecords float

The maximum number of bad records that BigQuery can ignore when running the job. If the number of bad records exceeds this value, an invalid error is returned in the job result. The default value is 0, which requires that all records are valid.

nullMarker str

Specifies a string that represents a null value in a CSV file. The default value is the empty string. If you set this property to a custom value, BigQuery throws an error if an empty string is present for all data types except for STRING and BYTE. For STRING and BYTE columns, BigQuery interprets the empty string as an empty value.

projectionFields List[str]

If sourceFormat is set to “DATASTORE_BACKUP”, indicates which entity properties to load into BigQuery from a Cloud Datastore backup. Property names are case sensitive and must be top-level properties. If no properties are specified, BigQuery loads all properties. If any named property isn’t found in the Cloud Datastore backup, an invalid error is returned in the job result.

quote str

The value that is used to quote data sections in a CSV file. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. The default value is a double-quote (‘“’). If your data does not contain quoted sections, set the property value to an empty string. If your data contains quoted newline characters, you must also set the allowQuotedNewlines property to true.

schemaUpdateOptions List[str]

Allows the schema of the destination table to be updated as a side effect of the load job if a schema is autodetected or supplied in the job configuration. Schema update options are supported in two cases: when writeDisposition is WRITE_APPEND; when writeDisposition is WRITE_TRUNCATE and the destination table is a partition of a table, specified by partition decorators. For normal tables, WRITE_TRUNCATE will always overwrite the schema. One or more of the following values are specified: ALLOW_FIELD_ADDITION: allow adding a nullable field to the schema. ALLOW_FIELD_RELAXATION: allow relaxing a required field in the original schema to nullable.

skipLeadingRows float

The number of rows at the top of a CSV file that BigQuery will skip when loading the data. The default value is 0. This property is useful if you have header rows in the file that should be skipped. When autodetect is on, the behavior is the following: skipLeadingRows unspecified - Autodetect tries to detect headers in the first row. If they are not detected, the row is read as data. Otherwise data is read starting from the second row. skipLeadingRows is 0 - Instructs autodetect that there are no headers and data should be read starting from the first row. skipLeadingRows = N > 0 - Autodetect skips N-1 rows and tries to detect headers in row N. If headers are not detected, row N is just skipped. Otherwise row N is used to extract column names for the detected schema.

sourceFormat str

The format of the data files. For CSV files, specify “CSV”. For datastore backups, specify “DATASTORE_BACKUP”. For newline-delimited JSON, specify “NEWLINE_DELIMITED_JSON”. For Avro, specify “AVRO”. For parquet, specify “PARQUET”. For orc, specify “ORC”. The default value is CSV.

time_partitioning Dict[JobLoadTimePartitioning]

Time-based partitioning specification for the destination table. Structure is documented below.

writeDisposition str

Specifies the action that occurs if the destination table already exists. The following values are supported: WRITE_TRUNCATE: If the table already exists, BigQuery overwrites the table data and uses the schema from the query result. WRITE_APPEND: If the table already exists, BigQuery appends the data to the table. WRITE_EMPTY: If the table already exists and contains data, a ‘duplicate’ error is returned in the job result. Each action is atomic and only occurs if BigQuery is able to complete the job successfully. Creation, truncation and append actions occur as one atomic update upon job completion.
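Putting the JobLoad fields together, a CSV load might look like the following TypeScript sketch; the bucket, dataset, and table names are hypothetical:

import * as gcp from "@pulumi/gcp";

const loadJob = new gcp.bigquery.Job("load-job", {
    jobId: "example-load-job",
    load: {
        sourceUris: ["gs://example-bucket/data/events-*.csv"], // hypothetical bucket
        destinationTable: {
            projectId: "my-project",
            datasetId: "example_dataset",
            tableId: "events",
        },
        sourceFormat: "CSV",
        skipLeadingRows: 1,               // skip the header row
        autodetect: true,                 // infer the schema from the source data
        writeDisposition: "WRITE_APPEND", // append to the table if it already exists
    },
});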

JobLoadDestinationEncryptionConfiguration

See the input and output API doc for this type.

KmsKeyName string

Describes the Cloud KMS encryption key that will be used to protect the destination BigQuery table. The BigQuery Service Account associated with your project requires access to this encryption key.

KmsKeyName string

Describes the Cloud KMS encryption key that will be used to protect the destination BigQuery table. The BigQuery Service Account associated with your project requires access to this encryption key.

kmsKeyName string

Describes the Cloud KMS encryption key that will be used to protect the destination BigQuery table. The BigQuery Service Account associated with your project requires access to this encryption key.

kms_key_name str

Describes the Cloud KMS encryption key that will be used to protect the destination BigQuery table. The BigQuery Service Account associated with your project requires access to this encryption key.

JobLoadDestinationTable

See the input and output API doc for this type.

TableId string

The table. Can be specified as {{table_id}} if project_id and dataset_id are also set, or in the form projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}} if not.

DatasetId string

The ID of the dataset containing this table.

ProjectId string

The ID of the project containing this table.

TableId string

The table. Can be specified as {{table_id}} if project_id and dataset_id are also set, or of the form projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}} if not.

DatasetId string

The ID of the dataset containing this table.

ProjectId string

The ID of the project containing this table.

tableId string

The table. Can be specified as {{table_id}} if project_id and dataset_id are also set, or of the form projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}} if not.

datasetId string

The ID of the dataset containing this table.

projectId string

The ID of the project containing this table.

table_id str

The table. Can be specified as {{table_id}} if project_id and dataset_id are also set, or of the form projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}} if not.

dataset_id str

The ID of the dataset containing this table.

project_id str

The ID of the project containing this table.
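
Both reference styles are accepted; the two objects below (with placeholder names) describe the same destination table:

// Split reference: project, dataset, and table given separately.
const splitReference = {
    projectId: "my-project",
    datasetId: "my_dataset",
    tableId: "events",
};

// Path reference: the full resource path carried in tableId alone.
const pathReference = {
    tableId: "projects/my-project/datasets/my_dataset/tables/events",
};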

JobLoadTimePartitioning

See the input and output API doc for this type.

Type string

The only type supported is DAY, which will generate one partition per day. Providing an empty string used to cause an error, but in OnePlatform the field will be treated as unset.

ExpirationMs string

Number of milliseconds for which to keep the storage for a partition. A wrapper is used here because 0 is an invalid value.

Field string

If not set, the table is partitioned by pseudo column ‘_PARTITIONTIME’; if set, the table is partitioned by this field. The field must be a top-level TIMESTAMP or DATE field. Its mode must be NULLABLE or REQUIRED. A wrapper is used here because an empty string is an invalid value.

Type string

The only type supported is DAY, which will generate one partition per day. Providing an empty string used to cause an error, but in OnePlatform the field will be treated as unset.

ExpirationMs string

Number of milliseconds for which to keep the storage for a partition. A wrapper is used here because 0 is an invalid value.

Field string

If not set, the table is partitioned by pseudo column ‘_PARTITIONTIME’; if set, the table is partitioned by this field. The field must be a top-level TIMESTAMP or DATE field. Its mode must be NULLABLE or REQUIRED. A wrapper is used here because an empty string is an invalid value.

type string

The only type supported is DAY, which will generate one partition per day. Providing an empty string used to cause an error, but in OnePlatform the field will be treated as unset.

expirationMs string

Number of milliseconds for which to keep the storage for a partition. A wrapper is used here because 0 is an invalid value.

field string

If not set, the table is partitioned by pseudo column ‘_PARTITIONTIME’; if set, the table is partitioned by this field. The field must be a top-level TIMESTAMP or DATE field. Its mode must be NULLABLE or REQUIRED. A wrapper is used here because an empty string is an invalid value.

type str

The only type supported is DAY, which will generate one partition per day. Providing an empty string used to cause an error, but in OnePlatform the field will be treated as unset.

expirationMs str

Number of milliseconds for which to keep the storage for a partition. A wrapper is used here because 0 is an invalid value.

field str

If not set, the table is partitioned by pseudo column ‘_PARTITIONTIME’; if set, the table is partitioned by this field. The field must be a top-level TIMESTAMP or DATE field. Its mode must be NULLABLE or REQUIRED. A wrapper is used here because an empty string is an invalid value.
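
A sketch of a load job that writes into a column-partitioned destination table; the field, dataset, and bucket names are placeholders:

import * as gcp from "@pulumi/gcp";

const partitionedLoad = new gcp.bigquery.Job("partitioned-load", {
    jobId: "job_load_partitioned_example",
    load: {
        sourceUris: ["gs://my-bucket/data/events.json"],   // placeholder object
        sourceFormat: "NEWLINE_DELIMITED_JSON",
        autodetect: true,
        destinationTable: {
            projectId: "my-project",
            datasetId: "my_dataset",
            tableId: "events_partitioned",
        },
        timePartitioning: {
            type: "DAY",                 // one partition per day
            field: "event_ts",           // a top-level TIMESTAMP column in the data
            expirationMs: "7776000000",  // keep each partition for 90 days
        },
    },
});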

JobQuery

See the input and output API doc for this type.

Query string

The SQL query text to execute. Whether the query is interpreted as legacy or standard SQL is controlled by the useLegacySql field.

AllowLargeResults bool

If true and query uses legacy SQL dialect, allows the query to produce arbitrarily large result tables at a slight cost in performance. Requires destinationTable to be set. For standard SQL queries, this flag is ignored and large results are always allowed. However, you must still set destinationTable when result size exceeds the allowed maximum response size.

CreateDisposition string

Specifies whether the job is allowed to create new tables. The following values are supported: CREATE_IF_NEEDED: If the table does not exist, BigQuery creates the table. CREATE_NEVER: The table must already exist. If it does not, a ‘notFound’ error is returned in the job result. Creation, truncation and append actions occur as one atomic update upon job completion.

DefaultDataset JobQueryDefaultDatasetArgs

Specifies the default dataset to use for unqualified table names in the query. Note that this does not alter behavior of unqualified dataset names. Structure is documented below.

DestinationEncryptionConfiguration JobQueryDestinationEncryptionConfigurationArgs

Custom encryption configuration (e.g., Cloud KMS keys) Structure is documented below.

DestinationTable JobQueryDestinationTableArgs

The destination table. Structure is documented below.

FlattenResults bool

If true and query uses legacy SQL dialect, flattens all nested and repeated fields in the query results. allowLargeResults must be true if this is set to false. For standard SQL queries, this flag is ignored and results are never flattened.

MaximumBillingTier int

Limits the billing tier for this job. Queries that have resource usage beyond this tier will fail (without incurring a charge). If unspecified, this will be set to your project default.

MaximumBytesBilled string

Limits the bytes billed for this job. Queries that will have bytes billed beyond this limit will fail (without incurring a charge). If unspecified, this will be set to your project default.

ParameterMode string

Standard SQL only. Set to POSITIONAL to use positional (?) query parameters or to NAMED to use named (@myparam) query parameters in this query.

Priority string

Specifies a priority for the query. Possible values are INTERACTIVE and BATCH. The default value is INTERACTIVE.

SchemaUpdateOptions List<string>

Allows the schema of the destination table to be updated as a side effect of the query job. Schema update options are supported in two cases: when writeDisposition is WRITE_APPEND; when writeDisposition is WRITE_TRUNCATE and the destination table is a partition of a table, specified by partition decorators. For normal tables, WRITE_TRUNCATE will always overwrite the schema. One or more of the following values may be specified: ALLOW_FIELD_ADDITION: allow adding a nullable field to the schema. ALLOW_FIELD_RELAXATION: allow relaxing a required field in the original schema to nullable.

ScriptOptions JobQueryScriptOptionsArgs

Options controlling the execution of scripts. Structure is documented below.

UseLegacySql bool

Specifies whether to use BigQuery’s legacy SQL dialect for this query. The default value is true. If set to false, the query will use BigQuery’s standard SQL.

UseQueryCache bool

Whether to look for the result in the query cache. The query cache is a best-effort cache that will be flushed whenever tables in the query are modified. Moreover, the query cache is only available when a query does not have a destination table specified. The default value is true.

UserDefinedFunctionResources List<JobQueryUserDefinedFunctionResourceArgs>

Describes user-defined function resources used in the query. Structure is documented below.

WriteDisposition string

Specifies the action that occurs if the destination table already exists. The following values are supported: WRITE_TRUNCATE: If the table already exists, BigQuery overwrites the table data and uses the schema from the query result. WRITE_APPEND: If the table already exists, BigQuery appends the data to the table. WRITE_EMPTY: If the table already exists and contains data, a ‘duplicate’ error is returned in the job result. Each action is atomic and only occurs if BigQuery is able to complete the job successfully. Creation, truncation and append actions occur as one atomic update upon job completion.

Query string

The SQL query text to execute. Whether the query is interpreted as legacy or standard SQL is controlled by the useLegacySql field.

AllowLargeResults bool

If true and query uses legacy SQL dialect, allows the query to produce arbitrarily large result tables at a slight cost in performance. Requires destinationTable to be set. For standard SQL queries, this flag is ignored and large results are always allowed. However, you must still set destinationTable when result size exceeds the allowed maximum response size.

CreateDisposition string

Specifies whether the job is allowed to create new tables. The following values are supported: CREATE_IF_NEEDED: If the table does not exist, BigQuery creates the table. CREATE_NEVER: The table must already exist. If it does not, a ‘notFound’ error is returned in the job result. Creation, truncation and append actions occur as one atomic update upon job completion.

DefaultDataset JobQueryDefaultDataset

Specifies the default dataset to use for unqualified table names in the query. Note that this does not alter behavior of unqualified dataset names. Structure is documented below.

DestinationEncryptionConfiguration JobQueryDestinationEncryptionConfiguration

Custom encryption configuration (e.g., Cloud KMS keys) Structure is documented below.

DestinationTable JobQueryDestinationTable

The destination table. Structure is documented below.

FlattenResults bool

If true and query uses legacy SQL dialect, flattens all nested and repeated fields in the query results. allowLargeResults must be true if this is set to false. For standard SQL queries, this flag is ignored and results are never flattened.

MaximumBillingTier int

Limits the billing tier for this job. Queries that have resource usage beyond this tier will fail (without incurring a charge). If unspecified, this will be set to your project default.

MaximumBytesBilled string

Limits the bytes billed for this job. Queries that will have bytes billed beyond this limit will fail (without incurring a charge). If unspecified, this will be set to your project default.

ParameterMode string

Standard SQL only. Set to POSITIONAL to use positional (?) query parameters or to NAMED to use named (@myparam) query parameters in this query.

Priority string

Specifies a priority for the query. Possible values are INTERACTIVE and BATCH. The default value is INTERACTIVE.

SchemaUpdateOptions []string

Allows the schema of the destination table to be updated as a side effect of the query job. Schema update options are supported in two cases: when writeDisposition is WRITE_APPEND; when writeDisposition is WRITE_TRUNCATE and the destination table is a partition of a table, specified by partition decorators. For normal tables, WRITE_TRUNCATE will always overwrite the schema. One or more of the following values may be specified: ALLOW_FIELD_ADDITION: allow adding a nullable field to the schema. ALLOW_FIELD_RELAXATION: allow relaxing a required field in the original schema to nullable.

ScriptOptions JobQueryScriptOptions

Options controlling the execution of scripts. Structure is documented below.

UseLegacySql bool

Specifies whether to use BigQuery’s legacy SQL dialect for this query. The default value is true. If set to false, the query will use BigQuery’s standard SQL.

UseQueryCache bool

Whether to look for the result in the query cache. The query cache is a best-effort cache that will be flushed whenever tables in the query are modified. Moreover, the query cache is only available when a query does not have a destination table specified. The default value is true.

UserDefinedFunctionResources []JobQueryUserDefinedFunctionResource

Describes user-defined function resources used in the query. Structure is documented below.

WriteDisposition string

Specifies the action that occurs if the destination table already exists. The following values are supported: WRITE_TRUNCATE: If the table already exists, BigQuery overwrites the table data and uses the schema from the query result. WRITE_APPEND: If the table already exists, BigQuery appends the data to the table. WRITE_EMPTY: If the table already exists and contains data, a ‘duplicate’ error is returned in the job result. Each action is atomic and only occurs if BigQuery is able to complete the job successfully. Creation, truncation and append actions occur as one atomic update upon job completion.

query string

The SQL query text to execute. Whether the query is interpreted as legacy or standard SQL is controlled by the useLegacySql field.

allowLargeResults boolean

If true and query uses legacy SQL dialect, allows the query to produce arbitrarily large result tables at a slight cost in performance. Requires destinationTable to be set. For standard SQL queries, this flag is ignored and large results are always allowed. However, you must still set destinationTable when result size exceeds the allowed maximum response size.

createDisposition string

Specifies whether the job is allowed to create new tables. The following values are supported: CREATE_IF_NEEDED: If the table does not exist, BigQuery creates the table. CREATE_NEVER: The table must already exist. If it does not, a ‘notFound’ error is returned in the job result. Creation, truncation and append actions occur as one atomic update upon job completion.

defaultDataset JobQueryDefaultDataset

Specifies the default dataset to use for unqualified table names in the query. Note that this does not alter behavior of unqualified dataset names. Structure is documented below.

destinationEncryptionConfiguration JobQueryDestinationEncryptionConfiguration

Custom encryption configuration (e.g., Cloud KMS keys) Structure is documented below.

destinationTable JobQueryDestinationTable

The destination table. Structure is documented below.

flattenResults boolean

If true and query uses legacy SQL dialect, flattens all nested and repeated fields in the query results. allowLargeResults must be true if this is set to false. For standard SQL queries, this flag is ignored and results are never flattened.

maximumBillingTier number

Limits the billing tier for this job. Queries that have resource usage beyond this tier will fail (without incurring a charge). If unspecified, this will be set to your project default.

maximumBytesBilled string

Limits the bytes billed for this job. Queries that will have bytes billed beyond this limit will fail (without incurring a charge). If unspecified, this will be set to your project default.

parameterMode string

Standard SQL only. Set to POSITIONAL to use positional (?) query parameters or to NAMED to use named (@myparam) query parameters in this query.

priority string

Specifies a priority for the query. Possible values are INTERACTIVE and BATCH. The default value is INTERACTIVE.

schemaUpdateOptions string[]

Allows the schema of the destination table to be updated as a side effect of the query job. Schema update options are supported in two cases: when writeDisposition is WRITE_APPEND; when writeDisposition is WRITE_TRUNCATE and the destination table is a partition of a table, specified by partition decorators. For normal tables, WRITE_TRUNCATE will always overwrite the schema. One or more of the following values may be specified: ALLOW_FIELD_ADDITION: allow adding a nullable field to the schema. ALLOW_FIELD_RELAXATION: allow relaxing a required field in the original schema to nullable.

scriptOptions JobQueryScriptOptions

Options controlling the execution of scripts. Structure is documented below.

useLegacySql boolean

Specifies whether to use BigQuery’s legacy SQL dialect for this query. The default value is true. If set to false, the query will use BigQuery’s standard SQL.

useQueryCache boolean

Whether to look for the result in the query cache. The query cache is a best-effort cache that will be flushed whenever tables in the query are modified. Moreover, the query cache is only available when a query does not have a destination table specified. The default value is true.

userDefinedFunctionResources JobQueryUserDefinedFunctionResource[]

Describes user-defined function resources used in the query. Structure is documented below.

writeDisposition string

Specifies the action that occurs if the destination table already exists. The following values are supported: WRITE_TRUNCATE: If the table already exists, BigQuery overwrites the table data and uses the schema from the query result. WRITE_APPEND: If the table already exists, BigQuery appends the data to the table. WRITE_EMPTY: If the table already exists and contains data, a ‘duplicate’ error is returned in the job result. Each action is atomic and only occurs if BigQuery is able to complete the job successfully. Creation, truncation and append actions occur as one atomic update upon job completion.

query str

The SQL query text to execute. Whether the query is interpreted as legacy or standard SQL is controlled by the useLegacySql field.

allowLargeResults bool

If true and query uses legacy SQL dialect, allows the query to produce arbitrarily large result tables at a slight cost in performance. Requires destinationTable to be set. For standard SQL queries, this flag is ignored and large results are always allowed. However, you must still set destinationTable when result size exceeds the allowed maximum response size.

createDisposition str

Specifies whether the job is allowed to create new tables. The following values are supported: CREATE_IF_NEEDED: If the table does not exist, BigQuery creates the table. CREATE_NEVER: The table must already exist. If it does not, a ‘notFound’ error is returned in the job result. Creation, truncation and append actions occur as one atomic update upon job completion.

defaultDataset Dict[JobQueryDefaultDataset]

Specifies the default dataset to use for unqualified table names in the query. Note that this does not alter behavior of unqualified dataset names. Structure is documented below.

destinationEncryptionConfiguration Dict[JobQueryDestinationEncryptionConfiguration]

Custom encryption configuration (e.g., Cloud KMS keys) Structure is documented below.

destinationTable Dict[JobQueryDestinationTable]

The destination table. Structure is documented below.

flattenResults bool

If true and query uses legacy SQL dialect, flattens all nested and repeated fields in the query results. allowLargeResults must be true if this is set to false. For standard SQL queries, this flag is ignored and results are never flattened.

maximumBillingTier float

Limits the billing tier for this job. Queries that have resource usage beyond this tier will fail (without incurring a charge). If unspecified, this will be set to your project default.

maximumBytesBilled str

Limits the bytes billed for this job. Queries that will have bytes billed beyond this limit will fail (without incurring a charge). If unspecified, this will be set to your project default.

parameterMode str

Standard SQL only. Set to POSITIONAL to use positional (?) query parameters or to NAMED to use named (@myparam) query parameters in this query.

priority str

Specifies a priority for the query. Possible values are INTERACTIVE and BATCH. The default value is INTERACTIVE.

schemaUpdateOptions List[str]

Allows the schema of the destination table to be updated as a side effect of the query job. Schema update options are supported in two cases: when writeDisposition is WRITE_APPEND; when writeDisposition is WRITE_TRUNCATE and the destination table is a partition of a table, specified by partition decorators. For normal tables, WRITE_TRUNCATE will always overwrite the schema. One or more of the following values may be specified: ALLOW_FIELD_ADDITION: allow adding a nullable field to the schema. ALLOW_FIELD_RELAXATION: allow relaxing a required field in the original schema to nullable.

scriptOptions Dict[JobQueryScriptOptions]

Options controlling the execution of scripts. Structure is documented below.

useLegacySql bool

Specifies whether to use BigQuery’s legacy SQL dialect for this query. The default value is true. If set to false, the query will use BigQuery’s standard SQL.

useQueryCache bool

Whether to look for the result in the query cache. The query cache is a best-effort cache that will be flushed whenever tables in the query are modified. Moreover, the query cache is only available when a query does not have a destination table specified. The default value is true.

userDefinedFunctionResources List[JobQueryUserDefinedFunctionResource]

Describes user-defined function resources used in the query. Structure is documented below.

writeDisposition str

Specifies the action that occurs if the destination table already exists. The following values are supported: WRITE_TRUNCATE: If the table already exists, BigQuery overwrites the table data and uses the schema from the query result. WRITE_APPEND: If the table already exists, BigQuery appends the data to the table. WRITE_EMPTY: If the table already exists and contains data, a ‘duplicate’ error is returned in the job result. Each action is atomic and only occurs if BigQuery is able to complete the job successfully. Creation, truncation and append actions occur as one atomic update upon job completion.
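
A sketch of a standard SQL query job that materializes its result into a table; project, dataset, and table names are placeholders:

import * as gcp from "@pulumi/gcp";

const dailyRollup = new gcp.bigquery.Job("daily-rollup", {
    jobId: "job_query_rollup_example",
    query: {
        query: "SELECT country, COUNT(*) AS hits FROM my_dataset.events GROUP BY country",
        useLegacySql: false,                  // standard SQL
        priority: "BATCH",                    // queue as a batch query
        createDisposition: "CREATE_IF_NEEDED",
        writeDisposition: "WRITE_TRUNCATE",   // replace the rollup table on each run
        destinationTable: {
            projectId: "my-project",
            datasetId: "my_dataset",
            tableId: "daily_rollup",
        },
    },
});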

JobQueryDefaultDataset

See the input and output API doc for this type.

DatasetId string

The ID of the default dataset to use for unqualified table names in the query.

ProjectId string

The ID of the project containing the default dataset.

DatasetId string

The ID of the default dataset to use for unqualified table names in the query.

ProjectId string

The ID of the project containing the default dataset.

datasetId string

The ID of the default dataset to use for unqualified table names in the query.

projectId string

The ID of the project containing the default dataset.

dataset_id str

The ID of the default dataset to use for unqualified table names in the query.

project_id str

The ID of the project containing the default dataset.
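
For instance, setting defaultDataset lets the query refer to a table as events instead of my_dataset.events (names below are placeholders):

import * as gcp from "@pulumi/gcp";

const unqualifiedQuery = new gcp.bigquery.Job("unqualified-query", {
    jobId: "job_query_default_dataset_example",
    query: {
        query: "SELECT COUNT(*) AS total FROM events",   // resolves against the default dataset
        useLegacySql: false,
        defaultDataset: {
            datasetId: "my_dataset",
            projectId: "my-project",
        },
    },
});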

JobQueryDestinationEncryptionConfiguration

See the input and output API doc for this type.

KmsKeyName string

Describes the Cloud KMS encryption key that will be used to protect the destination BigQuery table. The BigQuery Service Account associated with your project requires access to this encryption key.

KmsKeyName string

Describes the Cloud KMS encryption key that will be used to protect the destination BigQuery table. The BigQuery Service Account associated with your project requires access to this encryption key.

kmsKeyName string

Describes the Cloud KMS encryption key that will be used to protect the destination BigQuery table. The BigQuery Service Account associated with your project requires access to this encryption key.

kms_key_name str

Describes the Cloud KMS encryption key that will be used to protect the destination BigQuery table. The BigQuery Service Account associated with your project requires access to this encryption key.

JobQueryDestinationTable

See the input and output API doc for this type.

TableId string

The table. Can be specified as {{table_id}} if project_id and dataset_id are also set, or of the form projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}} if not.

DatasetId string

The ID of the dataset containing this table.

ProjectId string

The ID of the project containing this table.

TableId string

The table. Can be specified as {{table_id}} if project_id and dataset_id are also set, or of the form projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}} if not.

DatasetId string

The ID of the dataset containing this table.

ProjectId string

The ID of the project containing this table.

tableId string

The table. Can be specified as {{table_id}} if project_id and dataset_id are also set, or of the form projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}} if not.

datasetId string

The ID of the dataset containing this table.

projectId string

The ID of the project containing this table.

table_id str

The table. Can be specified as {{table_id}} if project_id and dataset_id are also set, or of the form projects/{{project}}/datasets/{{dataset_id}}/tables/{{table_id}} if not.

dataset_id str

The ID of the dataset containing this table.

project_id str

The ID of the project containing this table.

JobQueryScriptOptions

See the input and output API doc for this type.

KeyResultStatement string

Determines which statement in the script represents the “key result”, used to populate the schema and query results of the script job.

StatementByteBudget string

Limit on the number of bytes billed per statement. Exceeding this budget results in an error.

StatementTimeoutMs string

Timeout period for each statement in a script.

KeyResultStatement string

Determines which statement in the script represents the “key result”, used to populate the schema and query results of the script job.

StatementByteBudget string

Limit on the number of bytes billed per statement. Exceeding this budget results in an error.

StatementTimeoutMs string

Timeout period for each statement in a script.

keyResultStatement string

Determines which statement in the script represents the “key result”, used to populate the schema and query results of the script job.

statementByteBudget string

Limit on the number of bytes billed per statement. Exceeding this budget results in an error.

statementTimeoutMs string

Timeout period for each statement in a script.

keyResultStatement str

Determines which statement in the script represents the “key result”, used to populate the schema and query results of the script job.

statementByteBudget str

Limit on the number of bytes billed per statement. Exceeding this budget results in an error.

statementTimeoutMs str

Timeout period for each statement in a script.
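
A sketch of a multi-statement script whose execution is bounded by scriptOptions; the query text and limits below are illustrative only:

import * as gcp from "@pulumi/gcp";

const scriptedQuery = new gcp.bigquery.Job("scripted-query", {
    jobId: "job_query_script_example",
    query: {
        query: "DECLARE cutoff DATE DEFAULT DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY); " +
               "SELECT COUNT(*) AS recent FROM my_dataset.events WHERE event_date >= cutoff;",
        useLegacySql: false,
        scriptOptions: {
            keyResultStatement: "LAST",         // the final SELECT becomes the job result
            statementTimeoutMs: "300000",       // at most 5 minutes per statement
            statementByteBudget: "1073741824",  // at most 1 GiB billed per statement
        },
    },
});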

JobQueryUserDefinedFunctionResource

See the input and output API doc for this type.

InlineCode string

An inline resource that contains code for a user-defined function (UDF). Providing an inline code resource is equivalent to providing a URI for a file containing the same code.

ResourceUri string

A code resource to load from a Google Cloud Storage URI (gs://bucket/path).

InlineCode string

An inline resource that contains code for a user-defined function (UDF). Providing an inline code resource is equivalent to providing a URI for a file containing the same code.

ResourceUri string

A code resource to load from a Google Cloud Storage URI (gs://bucket/path).

inlineCode string

An inline resource that contains code for a user-defined function (UDF). Providing an inline code resource is equivalent to providing a URI for a file containing the same code.

resourceUri string

A code resource to load from a Google Cloud Storage URI (gs://bucket/path).

inlineCode str

An inline resource that contains code for a user-defined function (UDF). Providing an inline code resource is equivalent to providing a URI for a file containing the same code.

resource_uri str

A code resource to load from a Google Cloud Storage URI (gs://bucket/path).
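
UDF resources apply to legacy SQL queries. The sketch below supplies one function inline and loads another from a placeholder Cloud Storage object; the query text and function bodies are illustrative stand-ins, not complete UDF definitions:

import * as gcp from "@pulumi/gcp";

const udfQuery = new gcp.bigquery.Job("udf-query", {
    jobId: "job_query_udf_example",
    query: {
        query: "SELECT name FROM [my_dataset.users]",   // placeholder legacy SQL query
        useLegacySql: true,
        userDefinedFunctionResources: [
            { inlineCode: "/* placeholder inline UDF code */" },
            { resourceUri: "gs://my-bucket/udfs/helpers.js" },   // placeholder object
        ],
    },
});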

Package Details

Repository: https://github.com/pulumi/pulumi-gcp
License: Apache-2.0
Notes: This Pulumi package is based on the google-beta Terraform Provider.