Table

Creates a table resource in a dataset for Google BigQuery. For more information, see the official documentation and API.

Create a Table Resource

new Table(name: string, args: TableArgs, opts?: CustomResourceOptions);
def Table(resource_name, opts=None, clusterings=None, dataset_id=None, description=None, encryption_configuration=None, expiration_time=None, external_data_configuration=None, friendly_name=None, labels=None, project=None, range_partitioning=None, schema=None, table_id=None, time_partitioning=None, view=None, __props__=None);
func NewTable(ctx *Context, name string, args TableArgs, opts ...ResourceOption) (*Table, error)
public Table(string name, TableArgs args, CustomResourceOptions? opts = null)
name string
The unique name of the resource.
args TableArgs
The arguments to resource properties.
opts CustomResourceOptions
Bag of options to control resource's behavior.
resource_name str
The unique name of the resource.
opts ResourceOptions
A bag of options that control this resource's behavior.
ctx Context
Context object for the current deployment.
name string
The unique name of the resource.
args TableArgs
The arguments to resource properties.
opts ResourceOption
Bag of options to control resource's behavior.
name string
The unique name of the resource.
args TableArgs
The arguments to resource properties.
opts CustomResourceOptions
Bag of options to control resource's behavior.
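
As a minimal sketch of the constructor in TypeScript (the dataset, table, and field names below are illustrative, not prescribed):

import * as gcp from "@pulumi/gcp";

// A dataset to hold the table (names are placeholders).
const dataset = new gcp.bigquery.Dataset("example-dataset", {
    datasetId: "example_dataset",
});

// A minimal table with an explicit JSON schema.
const table = new gcp.bigquery.Table("example-table", {
    datasetId: dataset.datasetId,
    tableId: "example_table",
    schema: JSON.stringify([
        { name: "ts", type: "TIMESTAMP", mode: "REQUIRED" },
        { name: "payload", type: "STRING", mode: "NULLABLE" },
    ]),
});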

Table Resource Properties

To learn more about resource properties and how to use them, see Inputs and Outputs in the Programming Model docs.

Inputs

The Table resource accepts the following input properties:

DatasetId string

The dataset ID to create the table in. Changing this forces a new resource to be created.

TableId string

A unique ID for the resource. Changing this forces a new resource to be created.

Clusterings List<string>

Specifies column names to use for data clustering. Up to four top-level columns are allowed, and should be specified in descending priority order.

Description string

The field description.

EncryptionConfiguration TableEncryptionConfigurationArgs

Specifies how the table should be encrypted. If left blank, the table will be encrypted with a Google-managed key; that process is transparent to the user. Structure is documented below.

ExpirationTime int

The time when this table expires, in milliseconds since the epoch. If not present, the table will persist indefinitely. Expired tables will be deleted and their storage reclaimed.

ExternalDataConfiguration TableExternalDataConfigurationArgs

Describes the data format, location, and other properties of a table stored outside of BigQuery. By defining these properties, the data source can then be queried as if it were a standard BigQuery table. Structure is documented below.

FriendlyName string

A descriptive name for the table.

Labels Dictionary<string, string>

A mapping of labels to assign to the resource.

Project string

The ID of the project in which the resource belongs. If it is not provided, the provider project is used.

RangePartitioning TableRangePartitioningArgs

If specified, configures range-based partitioning for this table. Structure is documented below.

Schema string

A JSON schema for the table. Schema is required for CSV and JSON formats and is disallowed for Google Cloud Bigtable, Cloud Datastore backups, and Avro formats when using external tables. For more information, see the BigQuery API documentation. NOTE: Because this field expects a JSON string, any changes to the string will create a diff, even if the JSON itself hasn’t changed. If the API returns a different value for the same schema, e.g. it switched the order of values or replaced STRUCT field type with RECORD field type, we currently cannot suppress the recurring diff this causes. As a workaround, we recommend using the schema as returned by the API.

TimePartitioning TableTimePartitioningArgs

If specified, configures time-based partitioning for this table. Structure is documented below.

View TableViewArgs

If specified, configures this table as a view. Structure is documented below.

DatasetId string

The dataset ID to create the table in. Changing this forces a new resource to be created.

TableId string

A unique ID for the resource. Changing this forces a new resource to be created.

Clusterings []string

Specifies column names to use for data clustering. Up to four top-level columns are allowed, and should be specified in descending priority order.

Description string

The field description.

EncryptionConfiguration TableEncryptionConfiguration

Specifies how the table should be encrypted. If left blank, the table will be encrypted with a Google-managed key; that process is transparent to the user. Structure is documented below.

ExpirationTime int

The time when this table expires, in milliseconds since the epoch. If not present, the table will persist indefinitely. Expired tables will be deleted and their storage reclaimed.

ExternalDataConfiguration TableExternalDataConfiguration

Describes the data format, location, and other properties of a table stored outside of BigQuery. By defining these properties, the data source can then be queried as if it were a standard BigQuery table. Structure is documented below.

FriendlyName string

A descriptive name for the table.

Labels map[string]string

A mapping of labels to assign to the resource.

Project string

The ID of the project in which the resource belongs. If it is not provided, the provider project is used.

RangePartitioning TableRangePartitioning

If specified, configures range-based partitioning for this table. Structure is documented below.

Schema string

A JSON schema for the table. Schema is required for CSV and JSON formats and is disallowed for Google Cloud Bigtable, Cloud Datastore backups, and Avro formats when using external tables. For more information, see the BigQuery API documentation. NOTE: Because this field expects a JSON string, any changes to the string will create a diff, even if the JSON itself hasn’t changed. If the API returns a different value for the same schema, e.g. it switched the order of values or replaced STRUCT field type with RECORD field type, we currently cannot suppress the recurring diff this causes. As a workaround, we recommend using the schema as returned by the API.

TimePartitioning TableTimePartitioning

If specified, configures time-based partitioning for this table. Structure is documented below.

View TableView

If specified, configures this table as a view. Structure is documented below.

datasetId string

The dataset ID to create the table in. Changing this forces a new resource to be created.

tableId string

A unique ID for the resource. Changing this forces a new resource to be created.

clusterings string[]

Specifies column names to use for data clustering. Up to four top-level columns are allowed, and should be specified in descending priority order.

description string

The field description.

encryptionConfiguration TableEncryptionConfiguration

Specifies how the table should be encrypted. If left blank, the table will be encrypted with a Google-managed key; that process is transparent to the user. Structure is documented below.

expirationTime number

The time when this table expires, in milliseconds since the epoch. If not present, the table will persist indefinitely. Expired tables will be deleted and their storage reclaimed.

externalDataConfiguration TableExternalDataConfiguration

Describes the data format, location, and other properties of a table stored outside of BigQuery. By defining these properties, the data source can then be queried as if it were a standard BigQuery table. Structure is documented below.

friendlyName string

A descriptive name for the table.

labels {[key: string]: string}

A mapping of labels to assign to the resource.

project string

The ID of the project in which the resource belongs. If it is not provided, the provider project is used.

rangePartitioning TableRangePartitioning

If specified, configures range-based partitioning for this table. Structure is documented below.

schema string

A JSON schema for the table. Schema is required for CSV and JSON formats and is disallowed for Google Cloud Bigtable, Cloud Datastore backups, and Avro formats when using external tables. For more information, see the BigQuery API documentation. NOTE: Because this field expects a JSON string, any changes to the string will create a diff, even if the JSON itself hasn’t changed. If the API returns a different value for the same schema, e.g. it switched the order of values or replaced STRUCT field type with RECORD field type, we currently cannot suppress the recurring diff this causes. As a workaround, we recommend using the schema as returned by the API.

timePartitioning TableTimePartitioning

If specified, configures time-based partitioning for this table. Structure is documented below.

view TableView

If specified, configures this table as a view. Structure is documented below.

dataset_id str

The dataset ID to create the table in. Changing this forces a new resource to be created.

table_id str

A unique ID for the resource. Changing this forces a new resource to be created.

clusterings List[str]

Specifies column names to use for data clustering. Up to four top-level columns are allowed, and should be specified in descending priority order.

description str

The field description.

encryption_configuration Dict[TableEncryptionConfiguration]

Specifies how the table should be encrypted. If left blank, the table will be encrypted with a Google-managed key; that process is transparent to the user. Structure is documented below.

expiration_time float

The time when this table expires, in milliseconds since the epoch. If not present, the table will persist indefinitely. Expired tables will be deleted and their storage reclaimed.

external_data_configuration Dict[TableExternalDataConfiguration]

Describes the data format, location, and other properties of a table stored outside of BigQuery. By defining these properties, the data source can then be queried as if it were a standard BigQuery table. Structure is documented below.

friendly_name str

A descriptive name for the table.

labels Dict[str, str]

A mapping of labels to assign to the resource.

project str

The ID of the project in which the resource belongs. If it is not provided, the provider project is used.

range_partitioning Dict[TableRangePartitioning]

If specified, configures range-based partitioning for this table. Structure is documented below.

schema str

A JSON schema for the table. Schema is required for CSV and JSON formats and is disallowed for Google Cloud Bigtable, Cloud Datastore backups, and Avro formats when using external tables. For more information, see the BigQuery API documentation. NOTE: Because this field expects a JSON string, any changes to the string will create a diff, even if the JSON itself hasn’t changed. If the API returns a different value for the same schema, e.g. it switched the order of values or replaced STRUCT field type with RECORD field type, we currently cannot suppress the recurring diff this causes. As a workaround, we recommend using the schema as returned by the API.

time_partitioning Dict[TableTimePartitioning]

If specified, configures time-based partitioning for this table. Structure is documented below.

view Dict[TableView]

If specified, configures this table as a view. Structure is documented below.
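
To show how several of these inputs combine, here is a sketch of a partitioned, clustered table with labels in TypeScript. All identifiers are illustrative; note that clustering assumes a partitioned table.

import * as gcp from "@pulumi/gcp";

// A daily-partitioned table, clustered by user_id and action.
const events = new gcp.bigquery.Table("events", {
    datasetId: "example_dataset",
    tableId: "events",
    friendlyName: "Events",
    description: "Clickstream events, partitioned by day and clustered by user.",
    labels: { env: "dev" },
    schema: JSON.stringify([
        { name: "event_time", type: "TIMESTAMP", mode: "REQUIRED" },
        { name: "user_id", type: "STRING", mode: "REQUIRED" },
        { name: "action", type: "STRING", mode: "NULLABLE" },
    ]),
    timePartitioning: { type: "DAY", field: "event_time" },
    clusterings: ["user_id", "action"],
});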

Outputs

All input properties are implicitly available as output properties. Additionally, the Table resource produces the following output properties:

CreationTime int

The time when this table was created, in milliseconds since the epoch.

Etag string

A hash of the resource.

Id string
The provider-assigned unique ID for this managed resource.
LastModifiedTime int

The time when this table was last modified, in milliseconds since the epoch.

Location string

The geographic location where the table resides. This value is inherited from the dataset.

NumBytes int

The size of this table in bytes, excluding any data in the streaming buffer.

NumLongTermBytes int

The number of bytes in the table that are considered “long-term storage”.

NumRows int

The number of rows of data in this table, excluding any data in the streaming buffer.

SelfLink string

The URI of the created resource.

Type string

Describes the table type.

CreationTime int

The time when this table was created, in milliseconds since the epoch.

Etag string

A hash of the resource.

Id string
The provider-assigned unique ID for this managed resource.
LastModifiedTime int

The time when this table was last modified, in milliseconds since the epoch.

Location string

The geographic location where the table resides. This value is inherited from the dataset.

NumBytes int

The size of this table in bytes, excluding any data in the streaming buffer.

NumLongTermBytes int

The number of bytes in the table that are considered “long-term storage”.

NumRows int

The number of rows of data in this table, excluding any data in the streaming buffer.

SelfLink string

The URI of the created resource.

Type string

Describes the table type.

creationTime number

The time when this table was created, in milliseconds since the epoch.

etag string

A hash of the resource.

id string
The provider-assigned unique ID for this managed resource.
lastModifiedTime number

The time when this table was last modified, in milliseconds since the epoch.

location string

The geographic location where the table resides. This value is inherited from the dataset.

numBytes number

The size of this table in bytes, excluding any data in the streaming buffer.

numLongTermBytes number

The number of bytes in the table that are considered “long-term storage”.

numRows number

The number of rows of data in this table, excluding any data in the streaming buffer.

selfLink string

The URI of the created resource.

type string

Describes the table type.

creation_time float

The time when this table was created, in milliseconds since the epoch.

etag str

A hash of the resource.

id str
The provider-assigned unique ID for this managed resource.
last_modified_time float

The time when this table was last modified, in milliseconds since the epoch.

location str

The geographic location where the table resides. This value is inherited from the dataset.

num_bytes float

The size of this table in bytes, excluding any data in the streaming buffer.

num_long_term_bytes float

The number of bytes in the table that are considered “long-term storage”.

num_rows float

The number of rows of data in this table, excluding any data in the streaming buffer.

self_link str

The URI of the created resource.

type str

Describes the table type.

Look up an Existing Table Resource

Get an existing Table resource’s state with the given name, ID, and optional extra properties used to qualify the lookup.

public static get(name: string, id: Input<ID>, state?: TableState, opts?: CustomResourceOptions): Table
static get(resource_name, id, opts=None, clusterings=None, creation_time=None, dataset_id=None, description=None, encryption_configuration=None, etag=None, expiration_time=None, external_data_configuration=None, friendly_name=None, labels=None, last_modified_time=None, location=None, num_bytes=None, num_long_term_bytes=None, num_rows=None, project=None, range_partitioning=None, schema=None, self_link=None, table_id=None, time_partitioning=None, type=None, view=None, __props__=None);
func GetTable(ctx *Context, name string, id IDInput, state *TableState, opts ...ResourceOption) (*Table, error)
public static Table Get(string name, Input<string> id, TableState? state, CustomResourceOptions? opts = null)
name
The unique name of the resulting resource.
id
The unique provider ID of the resource to lookup.
state
Any extra arguments used during the lookup.
opts
A bag of options that control this resource's behavior.
resource_name
The unique name of the resulting resource.
id
The unique provider ID of the resource to lookup.
name
The unique name of the resulting resource.
id
The unique provider ID of the resource to lookup.
state
Any extra arguments used during the lookup.
opts
A bag of options that control this resource's behavior.
name
The unique name of the resulting resource.
id
The unique provider ID of the resource to lookup.
state
Any extra arguments used during the lookup.
opts
A bag of options that control this resource's behavior.
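
For example, a TypeScript sketch of looking up an existing table (the ID shown is a placeholder; it generally follows projects/{project}/datasets/{dataset}/tables/{table}):

import * as gcp from "@pulumi/gcp";

// Recover a reference to a table that already exists outside this program.
const existing = gcp.bigquery.Table.get(
    "existing-table",
    "projects/my-project/datasets/example_dataset/tables/example_table",
);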

The following state arguments are supported:

Clusterings List<string>

Specifies column names to use for data clustering. Up to four top-level columns are allowed, and should be specified in descending priority order.

CreationTime int

The time when this table was created, in milliseconds since the epoch.

DatasetId string

The dataset ID to create the table in. Changing this forces a new resource to be created.

Description string

The field description.

EncryptionConfiguration TableEncryptionConfigurationArgs

Specifies how the table should be encrypted. If left blank, the table will be encrypted with a Google-managed key; that process is transparent to the user. Structure is documented below.

Etag string

A hash of the resource.

ExpirationTime int

The time when this table expires, in milliseconds since the epoch. If not present, the table will persist indefinitely. Expired tables will be deleted and their storage reclaimed.

ExternalDataConfiguration TableExternalDataConfigurationArgs

Describes the data format, location, and other properties of a table stored outside of BigQuery. By defining these properties, the data source can then be queried as if it were a standard BigQuery table. Structure is documented below.

FriendlyName string

A descriptive name for the table.

Labels Dictionary<string, string>

A mapping of labels to assign to the resource.

LastModifiedTime int

The time when this table was last modified, in milliseconds since the epoch.

Location string

The geographic location where the table resides. This value is inherited from the dataset.

NumBytes int

The size of this table in bytes, excluding any data in the streaming buffer.

NumLongTermBytes int

The number of bytes in the table that are considered “long-term storage”.

NumRows int

The number of rows of data in this table, excluding any data in the streaming buffer.

Project string

The ID of the project in which the resource belongs. If it is not provided, the provider project is used.

RangePartitioning TableRangePartitioningArgs

If specified, configures range-based partitioning for this table. Structure is documented below.

Schema string

A JSON schema for the table. Schema is required for CSV and JSON formats and is disallowed for Google Cloud Bigtable, Cloud Datastore backups, and Avro formats when using external tables. For more information, see the BigQuery API documentation. NOTE: Because this field expects a JSON string, any changes to the string will create a diff, even if the JSON itself hasn’t changed. If the API returns a different value for the same schema, e.g. it switched the order of values or replaced STRUCT field type with RECORD field type, we currently cannot suppress the recurring diff this causes. As a workaround, we recommend using the schema as returned by the API.

SelfLink string

The URI of the created resource.

TableId string

A unique ID for the resource. Changing this forces a new resource to be created.

TimePartitioning TableTimePartitioningArgs

If specified, configures time-based partitioning for this table. Structure is documented below.

Type string

Describes the table type.

View TableViewArgs

If specified, configures this table as a view. Structure is documented below.

Clusterings []string

Specifies column names to use for data clustering. Up to four top-level columns are allowed, and should be specified in descending priority order.

CreationTime int

The time when this table was created, in milliseconds since the epoch.

DatasetId string

The dataset ID to create the table in. Changing this forces a new resource to be created.

Description string

The field description.

EncryptionConfiguration TableEncryptionConfiguration

Specifies how the table should be encrypted. If left blank, the table will be encrypted with a Google-managed key; that process is transparent to the user. Structure is documented below.

Etag string

A hash of the resource.

ExpirationTime int

The time when this table expires, in milliseconds since the epoch. If not present, the table will persist indefinitely. Expired tables will be deleted and their storage reclaimed.

ExternalDataConfiguration TableExternalDataConfiguration

Describes the data format, location, and other properties of a table stored outside of BigQuery. By defining these properties, the data source can then be queried as if it were a standard BigQuery table. Structure is documented below.

FriendlyName string

A descriptive name for the table.

Labels map[string]string

A mapping of labels to assign to the resource.

LastModifiedTime int

The time when this table was last modified, in milliseconds since the epoch.

Location string

The geographic location where the table resides. This value is inherited from the dataset.

NumBytes int

The size of this table in bytes, excluding any data in the streaming buffer.

NumLongTermBytes int

The number of bytes in the table that are considered “long-term storage”.

NumRows int

The number of rows of data in this table, excluding any data in the streaming buffer.

Project string

The ID of the project in which the resource belongs. If it is not provided, the provider project is used.

RangePartitioning TableRangePartitioning

If specified, configures range-based partitioning for this table. Structure is documented below.

Schema string

A JSON schema for the table. Schema is required for CSV and JSON formats and is disallowed for Google Cloud Bigtable, Cloud Datastore backups, and Avro formats when using external tables. For more information, see the BigQuery API documentation. NOTE: Because this field expects a JSON string, any changes to the string will create a diff, even if the JSON itself hasn’t changed. If the API returns a different value for the same schema, e.g. it switched the order of values or replaced STRUCT field type with RECORD field type, we currently cannot suppress the recurring diff this causes. As a workaround, we recommend using the schema as returned by the API.

SelfLink string

The URI of the created resource.

TableId string

A unique ID for the resource. Changing this forces a new resource to be created.

TimePartitioning TableTimePartitioning

If specified, configures time-based partitioning for this table. Structure is documented below.

Type string

Describes the table type.

View TableView

If specified, configures this table as a view. Structure is documented below.

clusterings string[]

Specifies column names to use for data clustering. Up to four top-level columns are allowed, and should be specified in descending priority order.

creationTime number

The time when this table was created, in milliseconds since the epoch.

datasetId string

The dataset ID to create the table in. Changing this forces a new resource to be created.

description string

The field description.

encryptionConfiguration TableEncryptionConfiguration

Specifies how the table should be encrypted. If left blank, the table will be encrypted with a Google-managed key; that process is transparent to the user. Structure is documented below.

etag string

A hash of the resource.

expirationTime number

The time when this table expires, in milliseconds since the epoch. If not present, the table will persist indefinitely. Expired tables will be deleted and their storage reclaimed.

externalDataConfiguration TableExternalDataConfiguration

Describes the data format, location, and other properties of a table stored outside of BigQuery. By defining these properties, the data source can then be queried as if it were a standard BigQuery table. Structure is documented below.

friendlyName string

A descriptive name for the table.

labels {[key: string]: string}

A mapping of labels to assign to the resource.

lastModifiedTime number

The time when this table was last modified, in milliseconds since the epoch.

location string

The geographic location where the table resides. This value is inherited from the dataset.

numBytes number

The size of this table in bytes, excluding any data in the streaming buffer.

numLongTermBytes number

The number of bytes in the table that are considered “long-term storage”.

numRows number

The number of rows of data in this table, excluding any data in the streaming buffer.

project string

The ID of the project in which the resource belongs. If it is not provided, the provider project is used.

rangePartitioning TableRangePartitioning

If specified, configures range-based partitioning for this table. Structure is documented below.

schema string

A JSON schema for the table. Schema is required for CSV and JSON formats and is disallowed for Google Cloud Bigtable, Cloud Datastore backups, and Avro formats when using external tables. For more information, see the BigQuery API documentation. NOTE: Because this field expects a JSON string, any changes to the string will create a diff, even if the JSON itself hasn’t changed. If the API returns a different value for the same schema, e.g. it switched the order of values or replaced STRUCT field type with RECORD field type, we currently cannot suppress the recurring diff this causes. As a workaround, we recommend using the schema as returned by the API.

selfLink string

The URI of the created resource.

tableId string

A unique ID for the resource. Changing this forces a new resource to be created.

timePartitioning TableTimePartitioning

If specified, configures time-based partitioning for this table. Structure is documented below.

type string

Describes the table type.

view TableView

If specified, configures this table as a view. Structure is documented below.

clusterings List[str]

Specifies column names to use for data clustering. Up to four top-level columns are allowed, and should be specified in descending priority order.

creation_time float

The time when this table was created, in milliseconds since the epoch.

dataset_id str

The dataset ID to create the table in. Changing this forces a new resource to be created.

description str

The field description.

encryption_configuration Dict[TableEncryptionConfiguration]

Specifies how the table should be encrypted. If left blank, the table will be encrypted with a Google-managed key; that process is transparent to the user. Structure is documented below.

etag str

A hash of the resource.

expiration_time float

The time when this table expires, in milliseconds since the epoch. If not present, the table will persist indefinitely. Expired tables will be deleted and their storage reclaimed.

external_data_configuration Dict[TableExternalDataConfiguration]

Describes the data format, location, and other properties of a table stored outside of BigQuery. By defining these properties, the data source can then be queried as if it were a standard BigQuery table. Structure is documented below.

friendly_name str

A descriptive name for the table.

labels Dict[str, str]

A mapping of labels to assign to the resource.

last_modified_time float

The time when this table was last modified, in milliseconds since the epoch.

location str

The geographic location where the table resides. This value is inherited from the dataset.

num_bytes float

The size of this table in bytes, excluding any data in the streaming buffer.

num_long_term_bytes float

The number of bytes in the table that are considered “long-term storage”.

num_rows float

The number of rows of data in this table, excluding any data in the streaming buffer.

project str

The ID of the project in which the resource belongs. If it is not provided, the provider project is used.

range_partitioning Dict[TableRangePartitioning]

If specified, configures range-based partitioning for this table. Structure is documented below.

schema str

A JSON schema for the table. Schema is required for CSV and JSON formats and is disallowed for Google Cloud Bigtable, Cloud Datastore backups, and Avro formats when using external tables. For more information, see the BigQuery API documentation. NOTE: Because this field expects a JSON string, any changes to the string will create a diff, even if the JSON itself hasn’t changed. If the API returns a different value for the same schema, e.g. it switched the order of values or replaced STRUCT field type with RECORD field type, we currently cannot suppress the recurring diff this causes. As a workaround, we recommend using the schema as returned by the API.

self_link str

The URI of the created resource.

table_id str

A unique ID for the resource. Changing this forces a new resource to be created.

time_partitioning Dict[TableTimePartitioning]

If specified, configures time-based partitioning for this table. Structure is documented below.

type str

Describes the table type.

view Dict[TableView]

If specified, configures this table as a view. Structure is documented below.

Supporting Types

TableEncryptionConfiguration

See the input and output API doc for this type.

See the input and output API doc for this type.

See the input and output API doc for this type.

KmsKeyName string

The self link or full name of a key which should be used to encrypt this table. Note that the default BigQuery service account will need to have encrypt/decrypt permissions on this key; you may want to see the gcp.bigquery.getDefaultServiceAccount datasource and the gcp.kms.CryptoKeyIAMBinding resource.

KmsKeyName string

The self link or full name of a key which should be used to encrypt this table. Note that the default BigQuery service account will need to have encrypt/decrypt permissions on this key; you may want to see the gcp.bigquery.getDefaultServiceAccount datasource and the gcp.kms.CryptoKeyIAMBinding resource.

kmsKeyName string

The self link or full name of a key which should be used to encrypt this table. Note that the default BigQuery service account will need to have encrypt/decrypt permissions on this key; you may want to see the gcp.bigquery.getDefaultServiceAccount datasource and the gcp.kms.CryptoKeyIAMBinding resource.

kms_key_name str

The self link or full name of a key which should be used to encrypt this table. Note that the default BigQuery service account will need to have encrypt/decrypt permissions on this key; you may want to see the gcp.bigquery.getDefaultServiceAccount datasource and the gcp.kms.CryptoKeyIAMBinding resource.
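
A TypeScript sketch of supplying a customer-managed key. The key name is a placeholder, and the default BigQuery service account is assumed to already have encrypt/decrypt permissions on it:

import * as gcp from "@pulumi/gcp";

// Encrypt the table with a customer-managed KMS key instead of a Google-managed key.
const encryptedTable = new gcp.bigquery.Table("encrypted-table", {
    datasetId: "example_dataset",
    tableId: "encrypted_table",
    encryptionConfiguration: {
        kmsKeyName: "projects/my-project/locations/us/keyRings/my-ring/cryptoKeys/my-key",
    },
});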

TableExternalDataConfiguration

See the input and output API doc for this type.

See the input and output API doc for this type.

See the input and output API doc for this type.

Autodetect bool

Let BigQuery try to autodetect the schema and format of the table.

SourceFormat string

The data format. Supported values are: “CSV”, “GOOGLE_SHEETS”, “NEWLINE_DELIMITED_JSON”, “AVRO”, “PARQUET”, and “DATASTORE_BACKUP”. To use “GOOGLE_SHEETS”, the scopes must include “https://www.googleapis.com/auth/drive.readonly”.

SourceUris List<string>

A list of the fully-qualified URIs that point to your data in Google Cloud.

Compression string

The compression type of the data source. Valid values are “NONE” or “GZIP”.

CsvOptions TableExternalDataConfigurationCsvOptionsArgs

Additional properties to set if source_format is set to “CSV”. Structure is documented below.

GoogleSheetsOptions TableExternalDataConfigurationGoogleSheetsOptionsArgs

Additional options if source_format is set to “GOOGLE_SHEETS”. Structure is documented below.

HivePartitioningOptions TableExternalDataConfigurationHivePartitioningOptionsArgs

When set, configures hive partitioning support. Not all storage formats support hive partitioning – requesting hive partitioning on an unsupported format will lead to an error, as will providing an invalid specification.

IgnoreUnknownValues bool

Indicates if BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false.

MaxBadRecords int

The maximum number of bad records that BigQuery can ignore when reading data.

Autodetect bool

Let BigQuery try to autodetect the schema and format of the table.

SourceFormat string

The data format. Supported values are: “CSV”, “GOOGLE_SHEETS”, “NEWLINE_DELIMITED_JSON”, “AVRO”, “PARQUET”, and “DATASTORE_BACKUP”. To use “GOOGLE_SHEETS”, the scopes must include “https://www.googleapis.com/auth/drive.readonly”.

SourceUris []string

A list of the fully-qualified URIs that point to your data in Google Cloud.

Compression string

The compression type of the data source. Valid values are “NONE” or “GZIP”.

CsvOptions TableExternalDataConfigurationCsvOptions

Additional properties to set if source_format is set to “CSV”. Structure is documented below.

GoogleSheetsOptions TableExternalDataConfigurationGoogleSheetsOptions

Additional options if source_format is set to “GOOGLE_SHEETS”. Structure is documented below.

HivePartitioningOptions TableExternalDataConfigurationHivePartitioningOptions

When set, configures hive partitioning support. Not all storage formats support hive partitioning – requesting hive partitioning on an unsupported format will lead to an error, as will providing an invalid specification.

IgnoreUnknownValues bool

Indicates if BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false.

MaxBadRecords int

The maximum number of bad records that BigQuery can ignore when reading data.

autodetect boolean

Let BigQuery try to autodetect the schema and format of the table.

sourceFormat string

The data format. Supported values are: “CSV”, “GOOGLE_SHEETS”, “NEWLINE_DELIMITED_JSON”, “AVRO”, “PARQUET”, and “DATASTORE_BACKUP”. To use “GOOGLE_SHEETS”, the scopes must include “https://www.googleapis.com/auth/drive.readonly”.

sourceUris string[]

A list of the fully-qualified URIs that point to your data in Google Cloud.

compression string

The compression type of the data source. Valid values are “NONE” or “GZIP”.

csvOptions TableExternalDataConfigurationCsvOptions

Additional properties to set if source_format is set to “CSV”. Structure is documented below.

googleSheetsOptions TableExternalDataConfigurationGoogleSheetsOptions

Additional options if source_format is set to “GOOGLE_SHEETS”. Structure is documented below.

hivePartitioningOptions TableExternalDataConfigurationHivePartitioningOptions

When set, configures hive partitioning support. Not all storage formats support hive partitioning – requesting hive partitioning on an unsupported format will lead to an error, as will providing an invalid specification.

ignoreUnknownValues boolean

Indicates if BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false.

maxBadRecords number

The maximum number of bad records that BigQuery can ignore when reading data.

autodetect bool

Let BigQuery try to autodetect the schema and format of the table.

sourceFormat str

The data format. Supported values are: “CSV”, “GOOGLE_SHEETS”, “NEWLINE_DELIMITED_JSON”, “AVRO”, “PARQUET”, and “DATASTORE_BACKUP”. To use “GOOGLE_SHEETS”, the scopes must include “https://www.googleapis.com/auth/drive.readonly”.

sourceUris List[str]

A list of the fully-qualified URIs that point to your data in Google Cloud.

compression str

The compression type of the data source. Valid values are “NONE” or “GZIP”.

csvOptions Dict[TableExternalDataConfigurationCsvOptions]

Additional properties to set if source_format is set to “CSV”. Structure is documented below.

googleSheetsOptions Dict[TableExternalDataConfigurationGoogleSheetsOptions]

Additional options if source_format is set to “GOOGLE_SHEETS”. Structure is documented below.

hivePartitioningOptions Dict[TableExternalDataConfigurationHivePartitioningOptions]

When set, configures hive partitioning support. Not all storage formats support hive partitioning – requesting hive partitioning on an unsupported format will lead to an error, as will providing an invalid specification.

ignoreUnknownValues bool

Indicates if BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false.

maxBadRecords float

The maximum number of bad records that BigQuery can ignore when reading data.
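
A TypeScript sketch of an external table backed by CSV files in Cloud Storage (the bucket path is a placeholder):

import * as gcp from "@pulumi/gcp";

// Query CSV files in a bucket as if they were a native BigQuery table.
const externalTable = new gcp.bigquery.Table("external-table", {
    datasetId: "example_dataset",
    tableId: "external_table",
    externalDataConfiguration: {
        sourceFormat: "CSV",
        autodetect: true,
        sourceUris: ["gs://my-bucket/data/*.csv"],
        csvOptions: {
            quote: "\"",          // a literal double quote, escaped for TypeScript
            skipLeadingRows: 1,   // skip the header row
        },
    },
});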

TableExternalDataConfigurationCsvOptions

See the input and output API doc for this type.

See the input and output API doc for this type.

See the input and output API doc for this type.

Quote string

The value that is used to quote data sections in a CSV file. If your data does not contain quoted sections, set the property value to an empty string. If your data contains quoted newline characters, you must also set the allow_quoted_newlines property to true. The API-side default is ", specified in the provider escaped as \". Due to limitations with default values, this value is required to be explicitly set.

AllowJaggedRows bool

Indicates if BigQuery should accept rows that are missing trailing optional columns.

AllowQuotedNewlines bool

Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file. The default value is false.

Encoding string

The character encoding of the data. The supported values are UTF-8 or ISO-8859-1.

FieldDelimiter string

The separator for fields in a CSV file.

SkipLeadingRows int

The number of rows at the top of a CSV file that BigQuery will skip when reading the data.

Quote string

The value that is used to quote data sections in a CSV file. If your data does not contain quoted sections, set the property value to an empty string. If your data contains quoted newline characters, you must also set the allow_quoted_newlines property to true. The API-side default is ", specified in the provider escaped as \". Due to limitations with default values, this value is required to be explicitly set.

AllowJaggedRows bool

Indicates if BigQuery should accept rows that are missing trailing optional columns.

AllowQuotedNewlines bool

Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file. The default value is false.

Encoding string

The character encoding of the data. The supported values are UTF-8 or ISO-8859-1.

FieldDelimiter string

The separator for fields in a CSV file.

SkipLeadingRows int

The number of rows at the top of a CSV file that BigQuery will skip when reading the data.

quote string

The value that is used to quote data sections in a CSV file. If your data does not contain quoted sections, set the property value to an empty string. If your data contains quoted newline characters, you must also set the allow_quoted_newlines property to true. The API-side default is ", specified in the provider escaped as \". Due to limitations with default values, this value is required to be explicitly set.

allowJaggedRows boolean

Indicates if BigQuery should accept rows that are missing trailing optional columns.

allowQuotedNewlines boolean

Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file. The default value is false.

encoding string

The character encoding of the data. The supported values are UTF-8 or ISO-8859-1.

fieldDelimiter string

The separator for fields in a CSV file.

skipLeadingRows number

The number of rows at the top of a CSV file that BigQuery will skip when reading the data.

quote str

The value that is used to quote data sections in a CSV file. If your data does not contain quoted sections, set the property value to an empty string. If your data contains quoted newline characters, you must also set the allow_quoted_newlines property to true. The API-side default is ", specified in the provider escaped as \". Due to limitations with default values, this value is required to be explicitly set.

allowJaggedRows bool

Indicates if BigQuery should accept rows that are missing trailing optional columns.

allowQuotedNewlines bool

Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file. The default value is false.

encoding str

The character encoding of the data. The supported values are UTF-8 or ISO-8859-1.

fieldDelimiter str

The separator for fields in a CSV file.

skipLeadingRows float

The number of rows at the top of a CSV file that BigQuery will skip when reading the data.

TableExternalDataConfigurationGoogleSheetsOptions

See the input and output API doc for this type.

See the input and output API doc for this type.

See the input and output API doc for this type.

Range string

Range of a sheet to query from. Only used when non-empty. At least one of range or skip_leading_rows must be set. Typical format: “sheet_name!top_left_cell_id:bottom_right_cell_id”, for example “sheet1!A1:B20”.

SkipLeadingRows int

The number of rows at the top of the sheet that BigQuery will skip when reading the data. At least one of range or skip_leading_rows must be set.

Range string

Range of a sheet to query from. Only used when non-empty. At least one of range or skip_leading_rows must be set. Typical format: “sheet_name!top_left_cell_id:bottom_right_cell_id”, for example “sheet1!A1:B20”.

SkipLeadingRows int

The number of rows at the top of the sheet that BigQuery will skip when reading the data. At least one of range or skip_leading_rows must be set.

range string

Range of a sheet to query from. Only used when non-empty. At least one of range or skip_leading_rows must be set. Typical format: “sheet_name!top_left_cell_id:bottom_right_cell_id”, for example “sheet1!A1:B20”.

skipLeadingRows number

The number of rows at the top of the sheet that BigQuery will skip when reading the data. At least one of range or skip_leading_rows must be set.

range str

Range of a sheet to query from. Only used when non-empty. At least one of range or skip_leading_rows must be set. Typical format: “sheet_name!top_left_cell_id:bottom_right_cell_id”, for example “sheet1!A1:B20”.

skipLeadingRows float

The number of rows at the top of the sheet that BigQuery will skip when reading the data. At least one of range or skip_leading_rows must be set.
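
A TypeScript sketch of a sheet-backed table. The spreadsheet URL and range are placeholders, and the provider credentials are assumed to carry the drive.readonly scope:

import * as gcp from "@pulumi/gcp";

// Expose a Google Sheet range as a BigQuery table.
const sheetTable = new gcp.bigquery.Table("sheet-table", {
    datasetId: "example_dataset",
    tableId: "sheet_table",
    externalDataConfiguration: {
        sourceFormat: "GOOGLE_SHEETS",
        autodetect: true,
        sourceUris: ["https://docs.google.com/spreadsheets/d/SPREADSHEET_ID"],
        googleSheetsOptions: {
            range: "sheet1!A1:B20",
            skipLeadingRows: 1,
        },
    },
});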

TableExternalDataConfigurationHivePartitioningOptions

See the input and output API doc for this type.

See the input and output API doc for this type.

See the input and output API doc for this type.

Mode string

When set, what mode of hive partitioning to use when reading data. The following modes are supported. * AUTO: automatically infer partition key name(s) and type(s). * STRINGS: automatically infer partition key name(s); all types are interpreted as strings. * CUSTOM: you must encode the partition key schema within the source_uri_prefix by setting source_uri_prefix to gs://bucket/path_to_table/{key1:TYPE1}/{key2:TYPE2}/{key3:TYPE3}. Not all storage formats support hive partitioning; requesting hive partitioning on an unsupported format will lead to an error. Currently supported formats are: JSON, CSV, ORC, Avro, and Parquet.

SourceUriPrefix string

When hive partition detection is requested, a common prefix for all source URIs is required. The prefix must end immediately before the partition key encoding begins. For example, consider files following this data layout: gs://bucket/path_to_table/dt=2019-06-01/country=USA/id=7/file.avro and gs://bucket/path_to_table/dt=2019-05-31/country=CA/id=3/file.avro. When hive partitioning is requested with either AUTO or STRINGS detection, the common prefix can be either gs://bucket/path_to_table or gs://bucket/path_to_table/. Note that when mode is set to CUSTOM, you must encode the partition key schema within the source_uri_prefix by setting source_uri_prefix to gs://bucket/path_to_table/{key1:TYPE1}/{key2:TYPE2}/{key3:TYPE3}.

Mode string

When set, what mode of hive partitioning to use when reading data. The following modes are supported. * AUTO: automatically infer partition key name(s) and type(s). * STRINGS: automatically infer partition key name(s); all types are interpreted as strings. * CUSTOM: you must encode the partition key schema within the source_uri_prefix by setting source_uri_prefix to gs://bucket/path_to_table/{key1:TYPE1}/{key2:TYPE2}/{key3:TYPE3}. Not all storage formats support hive partitioning; requesting hive partitioning on an unsupported format will lead to an error. Currently supported formats are: JSON, CSV, ORC, Avro, and Parquet.

SourceUriPrefix string

When hive partition detection is requested, a common prefix for all source URIs is required. The prefix must end immediately before the partition key encoding begins. For example, consider files following this data layout: gs://bucket/path_to_table/dt=2019-06-01/country=USA/id=7/file.avro and gs://bucket/path_to_table/dt=2019-05-31/country=CA/id=3/file.avro. When hive partitioning is requested with either AUTO or STRINGS detection, the common prefix can be either gs://bucket/path_to_table or gs://bucket/path_to_table/. Note that when mode is set to CUSTOM, you must encode the partition key schema within the source_uri_prefix by setting source_uri_prefix to gs://bucket/path_to_table/{key1:TYPE1}/{key2:TYPE2}/{key3:TYPE3}.

mode string

When set, what mode of hive partitioning to use when reading data. The following modes are supported. * AUTO: automatically infer partition key name(s) and type(s). * STRINGS: automatically infer partition key name(s); all types are interpreted as strings. * CUSTOM: you must encode the partition key schema within the source_uri_prefix by setting source_uri_prefix to gs://bucket/path_to_table/{key1:TYPE1}/{key2:TYPE2}/{key3:TYPE3}. Not all storage formats support hive partitioning; requesting hive partitioning on an unsupported format will lead to an error. Currently supported formats are: JSON, CSV, ORC, Avro, and Parquet.

sourceUriPrefix string

When hive partition detection is requested, a common prefix for all source URIs is required. The prefix must end immediately before the partition key encoding begins. For example, consider files following this data layout: gs://bucket/path_to_table/dt=2019-06-01/country=USA/id=7/file.avro and gs://bucket/path_to_table/dt=2019-05-31/country=CA/id=3/file.avro. When hive partitioning is requested with either AUTO or STRINGS detection, the common prefix can be either gs://bucket/path_to_table or gs://bucket/path_to_table/. Note that when mode is set to CUSTOM, you must encode the partition key schema within the source_uri_prefix by setting source_uri_prefix to gs://bucket/path_to_table/{key1:TYPE1}/{key2:TYPE2}/{key3:TYPE3}.

mode str

When set, what mode of hive partitioning to use when reading data. The following modes are supported. * AUTO: automatically infer partition key name(s) and type(s). * STRINGS: automatically infer partition key name(s); all types are interpreted as strings. * CUSTOM: you must encode the partition key schema within the source_uri_prefix by setting source_uri_prefix to gs://bucket/path_to_table/{key1:TYPE1}/{key2:TYPE2}/{key3:TYPE3}. Not all storage formats support hive partitioning; requesting hive partitioning on an unsupported format will lead to an error. Currently supported formats are: JSON, CSV, ORC, Avro, and Parquet.

sourceUriPrefix str

When hive partition detection is requested, a common prefix for all source URIs is required. The prefix must end immediately before the partition key encoding begins. For example, consider files following this data layout: gs://bucket/path_to_table/dt=2019-06-01/country=USA/id=7/file.avro and gs://bucket/path_to_table/dt=2019-05-31/country=CA/id=3/file.avro. When hive partitioning is requested with either AUTO or STRINGS detection, the common prefix can be either gs://bucket/path_to_table or gs://bucket/path_to_table/. Note that when mode is set to CUSTOM, you must encode the partition key schema within the source_uri_prefix by setting source_uri_prefix to gs://bucket/path_to_table/{key1:TYPE1}/{key2:TYPE2}/{key3:TYPE3}.
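
A TypeScript sketch of hive partitioning with AUTO key detection over a layout like gs://my-bucket/path_to_table/dt=2019-06-01/... (bucket and paths are placeholders):

import * as gcp from "@pulumi/gcp";

// Infer partition keys (e.g. dt=...) from the object paths under the prefix.
const hiveTable = new gcp.bigquery.Table("hive-table", {
    datasetId: "example_dataset",
    tableId: "hive_table",
    externalDataConfiguration: {
        sourceFormat: "PARQUET",
        autodetect: true,
        sourceUris: ["gs://my-bucket/path_to_table/*"],
        hivePartitioningOptions: {
            mode: "AUTO",
            sourceUriPrefix: "gs://my-bucket/path_to_table/",
        },
    },
});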

TableRangePartitioning

See the input and output API doc for this type.

See the input and output API doc for this type.

See the input and output API doc for this type.

Field string

The field used to determine how to create a range-based partition.

Range TableRangePartitioningRangeArgs

Information required to partition based on ranges. Structure is documented below.

Field string

The field used to determine how to create a range-based partition.

Range TableRangePartitioningRange

Information required to partition based on ranges. Structure is documented below.

field string

The field used to determine how to create a range-based partition.

range TableRangePartitioningRange

Information required to partition based on ranges. Structure is documented below.

field str

The field used to determine how to create a range-based partition.

range Dict[TableRangePartitioningRange]

Information required to partition based on ranges. Structure is documented below.

TableRangePartitioningRange

See the input and output API doc for this type.

See the input and output API doc for this type.

See the input and output API doc for this type.

End int

End of the range partitioning, exclusive.

Interval int

The width of each range within the partition.

Start int

Start of the range partitioning, inclusive.

End int

End of the range partitioning, exclusive.

Interval int

The width of each range within the partition.

Start int

Start of the range partitioning, inclusive.

end number

End of the range partitioning, exclusive.

interval number

The width of each range within the partition.

start number

Start of the range partitioning, inclusive.

end float

End of the range partitioning, exclusive.

interval float

The width of each range within the partition.

start float

Start of the range partitioning, inclusive.
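
A TypeScript sketch of integer range partitioning (names are illustrative): partitions cover [0, 100) in steps of 10, so ids 0-9 share one partition, 10-19 the next, and so on.

import * as gcp from "@pulumi/gcp";

// Partition rows by an integer column rather than by time.
const rangeTable = new gcp.bigquery.Table("range-table", {
    datasetId: "example_dataset",
    tableId: "range_table",
    schema: JSON.stringify([
        { name: "customer_id", type: "INTEGER", mode: "REQUIRED" },
    ]),
    rangePartitioning: {
        field: "customer_id",
        range: {
            start: 0,      // inclusive
            end: 100,      // exclusive
            interval: 10,  // width of each partition
        },
    },
});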

TableTimePartitioning

See the input and output API doc for this type.

See the input and output API doc for this type.

See the input and output API doc for this type.

Type string

The only type supported is DAY, which will generate one partition per day based on data loading time.

ExpirationMs int

Number of milliseconds for which to keep the storage for a partition.

Field string

The field used to determine how to create a time-based partition. If time-based partitioning is enabled without this value, the table is partitioned based on the load time.

RequirePartitionFilter bool

If set to true, queries over this table must specify a partition filter that can be used for partition elimination.

Type string

The only type supported is DAY, which will generate one partition per day based on data loading time.

ExpirationMs int

Number of milliseconds for which to keep the storage for a partition.

Field string

The field used to determine how to create a time-based partition. If time-based partitioning is enabled without this value, the table is partitioned based on the load time.

RequirePartitionFilter bool

If set to true, queries over this table must specify a partition filter that can be used for partition elimination.

type string

The only type supported is DAY, which will generate one partition per day based on data loading time.

expirationMs number

Number of milliseconds for which to keep the storage for a partition.

field string

The field used to determine how to create a time-based partition. If time-based partitioning is enabled without this value, the table is partitioned based on the load time.

requirePartitionFilter boolean

If set to true, queries over this table must specify a partition filter that can be used for partition elimination.

type str

The only type supported is DAY, which will generate one partition per day based on data loading time.

expirationMs float

Number of milliseconds for which to keep the storage for a partition.

field str

The field used to determine how to create a time-based partition. If time-based partitioning is enabled without this value, the table is partitioned based on the load time.

requirePartitionFilter bool

If set to true, queries over this table must specify a partition filter that can be used for partition elimination.
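
A TypeScript sketch of daily partitioning on a timestamp column with a 90-day partition expiry (column and dataset names are illustrative):

import * as gcp from "@pulumi/gcp";

// One partition per day of event_time; partitions older than 90 days are dropped.
const partitionedTable = new gcp.bigquery.Table("partitioned-table", {
    datasetId: "example_dataset",
    tableId: "partitioned_table",
    schema: JSON.stringify([
        { name: "event_time", type: "TIMESTAMP", mode: "REQUIRED" },
    ]),
    timePartitioning: {
        type: "DAY",
        field: "event_time",
        expirationMs: 90 * 24 * 60 * 60 * 1000,
        requirePartitionFilter: true,
    },
});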

TableView

See the input and output API doc for this type.

See the input and output API doc for this type.

See the input and output API doc for this type.

Query string

A query that BigQuery executes when the view is referenced.

UseLegacySql bool

Specifies whether to use BigQuery’s legacy SQL for this view. The default value is true. If set to false, the view will use BigQuery’s standard SQL.

Query string

A query that BigQuery executes when the view is referenced.

UseLegacySql bool

Specifies whether to use BigQuery’s legacy SQL for this view. The default value is true. If set to false, the view will use BigQuery’s standard SQL.

query string

A query that BigQuery executes when the view is referenced.

useLegacySql boolean

Specifies whether to use BigQuery’s legacy SQL for this view. The default value is true. If set to false, the view will use BigQuery’s standard SQL.

query str

A query that BigQuery executes when the view is referenced.

useLegacySql bool

Specifies whether to use BigQuery’s legacy SQL for this view. The default value is true. If set to false, the view will use BigQuery’s standard SQL.
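
A TypeScript sketch of defining a view; the query references placeholder names, and useLegacySql: false opts into standard SQL:

import * as gcp from "@pulumi/gcp";

// A logical view whose query runs whenever the view is referenced.
const exampleView = new gcp.bigquery.Table("example-view", {
    datasetId: "example_dataset",
    tableId: "example_view",
    view: {
        query: "SELECT ts, payload FROM `my-project.example_dataset.example_table`",
        useLegacySql: false,
    },
});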

Package Details

Repository
https://github.com/pulumi/pulumi-gcp
License
Apache-2.0
Notes
This Pulumi package is based on the google-beta Terraform Provider.