Table
Creates a table resource in a dataset for Google BigQuery. For more information, see the official documentation and the API reference.
Create a Table Resource
new Table(name: string, args: TableArgs, opts?: CustomResourceOptions);
def Table(resource_name, opts=None, clusterings=None, dataset_id=None, description=None, encryption_configuration=None, expiration_time=None, external_data_configuration=None, friendly_name=None, labels=None, project=None, range_partitioning=None, schema=None, table_id=None, time_partitioning=None, view=None, __props__=None);
func NewTable(ctx *Context, name string, args *TableArgs, opts ...ResourceOption) (*Table, error)
public Table(string name, TableArgs args, CustomResourceOptions? opts = null)
- name string
- The unique name of the resource.
- args TableArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- resource_name str
- The unique name of the resource.
- opts ResourceOptions
- A bag of options that control this resource's behavior.
- ctx Context
- Context object for the current deployment.
- name string
- The unique name of the resource.
- args TableArgs
- The arguments to resource properties.
- opts ResourceOption
- Bag of options to control resource's behavior.
- name string
- The unique name of the resource.
- args TableArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
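As a rough illustration of how the constructor arguments fit together, the sketch below assembles the two required identifiers (`dataset_id`, `table_id`) plus a few optional inputs into a plain Python dict mirroring the keyword arguments of the Python constructor above; the dataset, table, and label values are hypothetical examples, not provider defaults:

```python
# Assemble keyword arguments for the Table constructor shown above.
# All names and values here are hypothetical.
table_args = {
    # Required; changing either forces a new resource to be created.
    "dataset_id": "analytics_dataset",
    "table_id": "page_views",
    # Optional inputs.
    "description": "Daily page view events",
    "friendly_name": "Page Views",
    "labels": {"team": "data-eng", "env": "dev"},
    # Up to four clustering columns, in descending priority order.
    "clusterings": ["country", "user_id"],
}

# A Pulumi Python program would then unpack these into the constructor:
#   table = gcp.bigquery.Table("page-views", **table_args)
assert len(table_args["clusterings"]) <= 4
```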
Table Resource Properties
To learn more about resource properties and how to use them, see Inputs and Outputs in the Programming Model docs.
Inputs
The Table resource accepts the following input properties:
- DatasetId string
The dataset ID to create the table in. Changing this forces a new resource to be created.
- TableId string
A unique ID for the resource. Changing this forces a new resource to be created.
- Clusterings List<string>
Specifies column names to use for data clustering. Up to four top-level columns are allowed, and should be specified in descending priority order.
- Description string
The field description.
- EncryptionConfiguration TableEncryptionConfigurationArgs
Specifies how the table should be encrypted. If left blank, the table will be encrypted with a Google-managed key; that process is transparent to the user. Structure is documented below.
- ExpirationTime int
The time when this table expires, in milliseconds since the epoch. If not present, the table will persist indefinitely. Expired tables will be deleted and their storage reclaimed.
- ExternalDataConfiguration TableExternalDataConfigurationArgs
Describes the data format, location, and other properties of a table stored outside of BigQuery. By defining these properties, the data source can then be queried as if it were a standard BigQuery table. Structure is documented below.
- FriendlyName string
A descriptive name for the table.
- Labels Dictionary<string, string>
A mapping of labels to assign to the resource.
- Project string
The ID of the project in which the resource belongs. If it is not provided, the provider project is used.
- RangePartitioning TableRangePartitioningArgs
If specified, configures range-based partitioning for this table. Structure is documented below.
- Schema string
A JSON schema for the table. Schema is required for CSV and JSON formats and is disallowed for Google Cloud Bigtable, Cloud Datastore backups, and Avro formats when using external tables. For more information see the BigQuery API documentation. Note: Because this field expects a JSON string, any changes to the string will create a diff, even if the JSON itself hasn't changed. If the API returns a different value for the same schema, e.g. it switched the order of values or replaced a STRUCT field type with a RECORD field type, we currently cannot suppress the recurring diff this causes. As a workaround, we recommend using the schema as returned by the API.
- TimePartitioning TableTimePartitioningArgs
If specified, configures time-based partitioning for this table. Structure is documented below.
- View TableViewArgs
If specified, configures this table as a view. Structure is documented below.
- DatasetId string
The dataset ID to create the table in. Changing this forces a new resource to be created.
- TableId string
A unique ID for the resource. Changing this forces a new resource to be created.
- Clusterings []string
Specifies column names to use for data clustering. Up to four top-level columns are allowed, and should be specified in descending priority order.
- Description string
The field description.
- EncryptionConfiguration TableEncryptionConfiguration
Specifies how the table should be encrypted. If left blank, the table will be encrypted with a Google-managed key; that process is transparent to the user. Structure is documented below.
- ExpirationTime int
The time when this table expires, in milliseconds since the epoch. If not present, the table will persist indefinitely. Expired tables will be deleted and their storage reclaimed.
- ExternalDataConfiguration TableExternalDataConfiguration
Describes the data format, location, and other properties of a table stored outside of BigQuery. By defining these properties, the data source can then be queried as if it were a standard BigQuery table. Structure is documented below.
- FriendlyName string
A descriptive name for the table.
- Labels map[string]string
A mapping of labels to assign to the resource.
- Project string
The ID of the project in which the resource belongs. If it is not provided, the provider project is used.
- RangePartitioning TableRangePartitioning
If specified, configures range-based partitioning for this table. Structure is documented below.
- Schema string
A JSON schema for the table. Schema is required for CSV and JSON formats and is disallowed for Google Cloud Bigtable, Cloud Datastore backups, and Avro formats when using external tables. For more information see the BigQuery API documentation. Note: Because this field expects a JSON string, any changes to the string will create a diff, even if the JSON itself hasn't changed. If the API returns a different value for the same schema, e.g. it switched the order of values or replaced a STRUCT field type with a RECORD field type, we currently cannot suppress the recurring diff this causes. As a workaround, we recommend using the schema as returned by the API.
- TimePartitioning TableTimePartitioning
If specified, configures time-based partitioning for this table. Structure is documented below.
- View TableView
If specified, configures this table as a view. Structure is documented below.
- datasetId string
The dataset ID to create the table in. Changing this forces a new resource to be created.
- tableId string
A unique ID for the resource. Changing this forces a new resource to be created.
- clusterings string[]
Specifies column names to use for data clustering. Up to four top-level columns are allowed, and should be specified in descending priority order.
- description string
The field description.
- encryptionConfiguration TableEncryptionConfiguration
Specifies how the table should be encrypted. If left blank, the table will be encrypted with a Google-managed key; that process is transparent to the user. Structure is documented below.
- expirationTime number
The time when this table expires, in milliseconds since the epoch. If not present, the table will persist indefinitely. Expired tables will be deleted and their storage reclaimed.
- externalDataConfiguration TableExternalDataConfiguration
Describes the data format, location, and other properties of a table stored outside of BigQuery. By defining these properties, the data source can then be queried as if it were a standard BigQuery table. Structure is documented below.
- friendlyName string
A descriptive name for the table.
- labels {[key: string]: string}
A mapping of labels to assign to the resource.
- project string
The ID of the project in which the resource belongs. If it is not provided, the provider project is used.
- rangePartitioning TableRangePartitioning
If specified, configures range-based partitioning for this table. Structure is documented below.
- schema string
A JSON schema for the table. Schema is required for CSV and JSON formats and is disallowed for Google Cloud Bigtable, Cloud Datastore backups, and Avro formats when using external tables. For more information see the BigQuery API documentation. Note: Because this field expects a JSON string, any changes to the string will create a diff, even if the JSON itself hasn't changed. If the API returns a different value for the same schema, e.g. it switched the order of values or replaced a STRUCT field type with a RECORD field type, we currently cannot suppress the recurring diff this causes. As a workaround, we recommend using the schema as returned by the API.
- timePartitioning TableTimePartitioning
If specified, configures time-based partitioning for this table. Structure is documented below.
- view TableView
If specified, configures this table as a view. Structure is documented below.
- dataset_id str
The dataset ID to create the table in. Changing this forces a new resource to be created.
- table_id str
A unique ID for the resource. Changing this forces a new resource to be created.
- clusterings List[str]
Specifies column names to use for data clustering. Up to four top-level columns are allowed, and should be specified in descending priority order.
- description str
The field description.
- encryption_configuration Dict[TableEncryptionConfiguration]
Specifies how the table should be encrypted. If left blank, the table will be encrypted with a Google-managed key; that process is transparent to the user. Structure is documented below.
- expiration_time float
The time when this table expires, in milliseconds since the epoch. If not present, the table will persist indefinitely. Expired tables will be deleted and their storage reclaimed.
- external_data_configuration Dict[TableExternalDataConfiguration]
Describes the data format, location, and other properties of a table stored outside of BigQuery. By defining these properties, the data source can then be queried as if it were a standard BigQuery table. Structure is documented below.
- friendly_name str
A descriptive name for the table.
- labels Dict[str, str]
A mapping of labels to assign to the resource.
- project str
The ID of the project in which the resource belongs. If it is not provided, the provider project is used.
- range_partitioning Dict[TableRangePartitioning]
If specified, configures range-based partitioning for this table. Structure is documented below.
- schema str
A JSON schema for the table. Schema is required for CSV and JSON formats and is disallowed for Google Cloud Bigtable, Cloud Datastore backups, and Avro formats when using external tables. For more information see the BigQuery API documentation. Note: Because this field expects a JSON string, any changes to the string will create a diff, even if the JSON itself hasn't changed. If the API returns a different value for the same schema, e.g. it switched the order of values or replaced a STRUCT field type with a RECORD field type, we currently cannot suppress the recurring diff this causes. As a workaround, we recommend using the schema as returned by the API.
- time_partitioning Dict[TableTimePartitioning]
If specified, configures time-based partitioning for this table. Structure is documented below.
- view Dict[TableView]
If specified, configures this table as a view. Structure is documented below.
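Since `schema` is a JSON string rather than a structured value, one low-friction way to keep the diff stable (per the note above) is to build the schema as Python data and serialize it once with `json.dumps`; the field names below are hypothetical:

```python
import json

# Hypothetical two-column schema, expressed as data and serialized once.
# Serializing from a single source of truth keeps the string stable,
# which matters because any textual change to `schema` produces a diff.
schema_fields = [
    {"name": "ts", "type": "TIMESTAMP", "mode": "REQUIRED"},
    {"name": "url", "type": "STRING", "mode": "NULLABLE"},
]
schema = json.dumps(schema_fields)

parsed = json.loads(schema)  # round-trips back to the original structure
assert parsed == schema_fields
```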
Outputs
All input properties are implicitly available as output properties. Additionally, the Table resource produces the following output properties:
- CreationTime int
The time when this table was created, in milliseconds since the epoch.
- Etag string
A hash of the resource.
- Id string
The provider-assigned unique ID for this managed resource.
- LastModifiedTime int
The time when this table was last modified, in milliseconds since the epoch.
- Location string
The geographic location where the table resides. This value is inherited from the dataset.
- NumBytes int
The size of this table in bytes, excluding any data in the streaming buffer.
- NumLongTermBytes int
The number of bytes in the table that are considered "long-term storage".
- NumRows int
The number of rows of data in this table, excluding any data in the streaming buffer.
- SelfLink string
The URI of the created resource.
- Type string
The only type supported is DAY, which will generate one partition per day based on data loading time.
- CreationTime int
The time when this table was created, in milliseconds since the epoch.
- Etag string
A hash of the resource.
- Id string
The provider-assigned unique ID for this managed resource.
- LastModifiedTime int
The time when this table was last modified, in milliseconds since the epoch.
- Location string
The geographic location where the table resides. This value is inherited from the dataset.
- NumBytes int
The size of this table in bytes, excluding any data in the streaming buffer.
- NumLongTermBytes int
The number of bytes in the table that are considered "long-term storage".
- NumRows int
The number of rows of data in this table, excluding any data in the streaming buffer.
- SelfLink string
The URI of the created resource.
- Type string
The only type supported is DAY, which will generate one partition per day based on data loading time.
- creationTime number
The time when this table was created, in milliseconds since the epoch.
- etag string
A hash of the resource.
- id string
The provider-assigned unique ID for this managed resource.
- lastModifiedTime number
The time when this table was last modified, in milliseconds since the epoch.
- location string
The geographic location where the table resides. This value is inherited from the dataset.
- numBytes number
The size of this table in bytes, excluding any data in the streaming buffer.
- numLongTermBytes number
The number of bytes in the table that are considered "long-term storage".
- numRows number
The number of rows of data in this table, excluding any data in the streaming buffer.
- selfLink string
The URI of the created resource.
- type string
The only type supported is DAY, which will generate one partition per day based on data loading time.
- creation_time float
The time when this table was created, in milliseconds since the epoch.
- etag str
A hash of the resource.
- id str
The provider-assigned unique ID for this managed resource.
- last_modified_time float
The time when this table was last modified, in milliseconds since the epoch.
- location str
The geographic location where the table resides. This value is inherited from the dataset.
- num_bytes float
The size of this table in bytes, excluding any data in the streaming buffer.
- num_long_term_bytes float
The number of bytes in the table that are considered "long-term storage".
- num_rows float
The number of rows of data in this table, excluding any data in the streaming buffer.
- self_link str
The URI of the created resource.
- type str
The only type supported is DAY, which will generate one partition per day based on data loading time.
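Several of the outputs above (`creation_time`, `last_modified_time`, and the `expiration_time` input) are expressed in milliseconds since the epoch. As a quick sketch of converting between those values and ordinary timestamps:

```python
from datetime import datetime, timedelta, timezone

def ms_to_datetime(ms: int) -> datetime:
    """Convert an epoch-millisecond value (e.g. creation_time) to a UTC datetime."""
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc)

def expiration_in(days: int, now: datetime) -> int:
    """Compute an expiration_time value `days` from `now`, in epoch milliseconds."""
    return int((now + timedelta(days=days)).timestamp() * 1000)

# 1609459200000 ms is 2021-01-01T00:00:00Z.
jan1 = datetime(2021, 1, 1, tzinfo=timezone.utc)
assert ms_to_datetime(1609459200000) == jan1
assert ms_to_datetime(expiration_in(7, jan1)) == jan1 + timedelta(days=7)
```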
Look up an Existing Table Resource
Get an existing Table resource’s state with the given name, ID, and optional extra properties used to qualify the lookup.
public static get(name: string, id: Input<ID>, state?: TableState, opts?: CustomResourceOptions): Table
static get(resource_name, id, opts=None, clusterings=None, creation_time=None, dataset_id=None, description=None, encryption_configuration=None, etag=None, expiration_time=None, external_data_configuration=None, friendly_name=None, labels=None, last_modified_time=None, location=None, num_bytes=None, num_long_term_bytes=None, num_rows=None, project=None, range_partitioning=None, schema=None, self_link=None, table_id=None, time_partitioning=None, type=None, view=None, __props__=None);
func GetTable(ctx *Context, name string, id IDInput, state *TableState, opts ...ResourceOption) (*Table, error)
public static Table Get(string name, Input<string> id, TableState? state, CustomResourceOptions? opts = null)
- name
- The unique name of the resulting resource.
- id
- The unique provider ID of the resource to lookup.
- state
- Any extra arguments used during the lookup.
- opts
- A bag of options that control this resource's behavior.
- resource_name
- The unique name of the resulting resource.
- id
- The unique provider ID of the resource to lookup.
- name
- The unique name of the resulting resource.
- id
- The unique provider ID of the resource to lookup.
- state
- Any extra arguments used during the lookup.
- opts
- A bag of options that control this resource's behavior.
- name
- The unique name of the resulting resource.
- id
- The unique provider ID of the resource to lookup.
- state
- Any extra arguments used during the lookup.
- opts
- A bag of options that control this resource's behavior.
The following state arguments are supported:
- Clusterings List<string>
Specifies column names to use for data clustering. Up to four top-level columns are allowed, and should be specified in descending priority order.
- CreationTime int
The time when this table was created, in milliseconds since the epoch.
- DatasetId string
The dataset ID to create the table in. Changing this forces a new resource to be created.
- Description string
The field description.
- EncryptionConfiguration TableEncryptionConfigurationArgs
Specifies how the table should be encrypted. If left blank, the table will be encrypted with a Google-managed key; that process is transparent to the user. Structure is documented below.
- Etag string
A hash of the resource.
- ExpirationTime int
The time when this table expires, in milliseconds since the epoch. If not present, the table will persist indefinitely. Expired tables will be deleted and their storage reclaimed.
- ExternalDataConfiguration TableExternalDataConfigurationArgs
Describes the data format, location, and other properties of a table stored outside of BigQuery. By defining these properties, the data source can then be queried as if it were a standard BigQuery table. Structure is documented below.
- FriendlyName string
A descriptive name for the table.
- Labels Dictionary<string, string>
A mapping of labels to assign to the resource.
- LastModifiedTime int
The time when this table was last modified, in milliseconds since the epoch.
- Location string
The geographic location where the table resides. This value is inherited from the dataset.
- NumBytes int
The size of this table in bytes, excluding any data in the streaming buffer.
- NumLongTermBytes int
The number of bytes in the table that are considered "long-term storage".
- NumRows int
The number of rows of data in this table, excluding any data in the streaming buffer.
- Project string
The ID of the project in which the resource belongs. If it is not provided, the provider project is used.
- RangePartitioning TableRangePartitioningArgs
If specified, configures range-based partitioning for this table. Structure is documented below.
- Schema string
A JSON schema for the table. Schema is required for CSV and JSON formats and is disallowed for Google Cloud Bigtable, Cloud Datastore backups, and Avro formats when using external tables. For more information see the BigQuery API documentation. Note: Because this field expects a JSON string, any changes to the string will create a diff, even if the JSON itself hasn't changed. If the API returns a different value for the same schema, e.g. it switched the order of values or replaced a STRUCT field type with a RECORD field type, we currently cannot suppress the recurring diff this causes. As a workaround, we recommend using the schema as returned by the API.
- SelfLink string
The URI of the created resource.
- TableId string
A unique ID for the resource. Changing this forces a new resource to be created.
- TimePartitioning TableTimePartitioningArgs
If specified, configures time-based partitioning for this table. Structure is documented below.
- Type string
The only type supported is DAY, which will generate one partition per day based on data loading time.
- View TableViewArgs
If specified, configures this table as a view. Structure is documented below.
- Clusterings []string
Specifies column names to use for data clustering. Up to four top-level columns are allowed, and should be specified in descending priority order.
- CreationTime int
The time when this table was created, in milliseconds since the epoch.
- DatasetId string
The dataset ID to create the table in. Changing this forces a new resource to be created.
- Description string
The field description.
- EncryptionConfiguration TableEncryptionConfiguration
Specifies how the table should be encrypted. If left blank, the table will be encrypted with a Google-managed key; that process is transparent to the user. Structure is documented below.
- Etag string
A hash of the resource.
- ExpirationTime int
The time when this table expires, in milliseconds since the epoch. If not present, the table will persist indefinitely. Expired tables will be deleted and their storage reclaimed.
- ExternalDataConfiguration TableExternalDataConfiguration
Describes the data format, location, and other properties of a table stored outside of BigQuery. By defining these properties, the data source can then be queried as if it were a standard BigQuery table. Structure is documented below.
- FriendlyName string
A descriptive name for the table.
- Labels map[string]string
A mapping of labels to assign to the resource.
- LastModifiedTime int
The time when this table was last modified, in milliseconds since the epoch.
- Location string
The geographic location where the table resides. This value is inherited from the dataset.
- NumBytes int
The size of this table in bytes, excluding any data in the streaming buffer.
- NumLongTermBytes int
The number of bytes in the table that are considered "long-term storage".
- NumRows int
The number of rows of data in this table, excluding any data in the streaming buffer.
- Project string
The ID of the project in which the resource belongs. If it is not provided, the provider project is used.
- RangePartitioning TableRangePartitioning
If specified, configures range-based partitioning for this table. Structure is documented below.
- Schema string
A JSON schema for the table. Schema is required for CSV and JSON formats and is disallowed for Google Cloud Bigtable, Cloud Datastore backups, and Avro formats when using external tables. For more information see the BigQuery API documentation. Note: Because this field expects a JSON string, any changes to the string will create a diff, even if the JSON itself hasn't changed. If the API returns a different value for the same schema, e.g. it switched the order of values or replaced a STRUCT field type with a RECORD field type, we currently cannot suppress the recurring diff this causes. As a workaround, we recommend using the schema as returned by the API.
- SelfLink string
The URI of the created resource.
- TableId string
A unique ID for the resource. Changing this forces a new resource to be created.
- TimePartitioning TableTimePartitioning
If specified, configures time-based partitioning for this table. Structure is documented below.
- Type string
The only type supported is DAY, which will generate one partition per day based on data loading time.
- View TableView
If specified, configures this table as a view. Structure is documented below.
- clusterings string[]
Specifies column names to use for data clustering. Up to four top-level columns are allowed, and should be specified in descending priority order.
- creationTime number
The time when this table was created, in milliseconds since the epoch.
- datasetId string
The dataset ID to create the table in. Changing this forces a new resource to be created.
- description string
The field description.
- encryptionConfiguration TableEncryptionConfiguration
Specifies how the table should be encrypted. If left blank, the table will be encrypted with a Google-managed key; that process is transparent to the user. Structure is documented below.
- etag string
A hash of the resource.
- expirationTime number
The time when this table expires, in milliseconds since the epoch. If not present, the table will persist indefinitely. Expired tables will be deleted and their storage reclaimed.
- externalDataConfiguration TableExternalDataConfiguration
Describes the data format, location, and other properties of a table stored outside of BigQuery. By defining these properties, the data source can then be queried as if it were a standard BigQuery table. Structure is documented below.
- friendlyName string
A descriptive name for the table.
- labels {[key: string]: string}
A mapping of labels to assign to the resource.
- lastModifiedTime number
The time when this table was last modified, in milliseconds since the epoch.
- location string
The geographic location where the table resides. This value is inherited from the dataset.
- numBytes number
The size of this table in bytes, excluding any data in the streaming buffer.
- numLongTermBytes number
The number of bytes in the table that are considered "long-term storage".
- numRows number
The number of rows of data in this table, excluding any data in the streaming buffer.
- project string
The ID of the project in which the resource belongs. If it is not provided, the provider project is used.
- rangePartitioning TableRangePartitioning
If specified, configures range-based partitioning for this table. Structure is documented below.
- schema string
A JSON schema for the table. Schema is required for CSV and JSON formats and is disallowed for Google Cloud Bigtable, Cloud Datastore backups, and Avro formats when using external tables. For more information see the BigQuery API documentation. Note: Because this field expects a JSON string, any changes to the string will create a diff, even if the JSON itself hasn't changed. If the API returns a different value for the same schema, e.g. it switched the order of values or replaced a STRUCT field type with a RECORD field type, we currently cannot suppress the recurring diff this causes. As a workaround, we recommend using the schema as returned by the API.
- selfLink string
The URI of the created resource.
- tableId string
A unique ID for the resource. Changing this forces a new resource to be created.
- timePartitioning TableTimePartitioning
If specified, configures time-based partitioning for this table. Structure is documented below.
- type string
The only type supported is DAY, which will generate one partition per day based on data loading time.
- view TableView
If specified, configures this table as a view. Structure is documented below.
- clusterings List[str]
Specifies column names to use for data clustering. Up to four top-level columns are allowed, and should be specified in descending priority order.
- creation_time float
The time when this table was created, in milliseconds since the epoch.
- dataset_id str
The dataset ID to create the table in. Changing this forces a new resource to be created.
- description str
The field description.
- encryption_configuration Dict[TableEncryptionConfiguration]
Specifies how the table should be encrypted. If left blank, the table will be encrypted with a Google-managed key; that process is transparent to the user. Structure is documented below.
- etag str
A hash of the resource.
- expiration_time float
The time when this table expires, in milliseconds since the epoch. If not present, the table will persist indefinitely. Expired tables will be deleted and their storage reclaimed.
- external_data_configuration Dict[TableExternalDataConfiguration]
Describes the data format, location, and other properties of a table stored outside of BigQuery. By defining these properties, the data source can then be queried as if it were a standard BigQuery table. Structure is documented below.
- friendly_name str
A descriptive name for the table.
- labels Dict[str, str]
A mapping of labels to assign to the resource.
- last_modified_time float
The time when this table was last modified, in milliseconds since the epoch.
- location str
The geographic location where the table resides. This value is inherited from the dataset.
- num_bytes float
The size of this table in bytes, excluding any data in the streaming buffer.
- num_long_term_bytes float
The number of bytes in the table that are considered "long-term storage".
- num_rows float
The number of rows of data in this table, excluding any data in the streaming buffer.
- project str
The ID of the project in which the resource belongs. If it is not provided, the provider project is used.
- range_partitioning Dict[TableRangePartitioning]
If specified, configures range-based partitioning for this table. Structure is documented below.
- schema str
A JSON schema for the table. Schema is required for CSV and JSON formats and is disallowed for Google Cloud Bigtable, Cloud Datastore backups, and Avro formats when using external tables. For more information see the BigQuery API documentation. Note: Because this field expects a JSON string, any changes to the string will create a diff, even if the JSON itself hasn't changed. If the API returns a different value for the same schema, e.g. it switched the order of values or replaced a STRUCT field type with a RECORD field type, we currently cannot suppress the recurring diff this causes. As a workaround, we recommend using the schema as returned by the API.
- self_link str
The URI of the created resource.
- table_id str
A unique ID for the resource. Changing this forces a new resource to be created.
- time_partitioning Dict[TableTimePartitioning]
If specified, configures time-based partitioning for this table. Structure is documented below.
- type str
The only type supported is DAY, which will generate one partition per day based on data loading time.
- view Dict[TableView]
If specified, configures this table as a view. Structure is documented below.
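The `num_bytes` and `num_long_term_bytes` state values above are raw byte counts; a small helper like the following (an illustrative sketch, not part of the provider) makes them readable when printed from a program:

```python
def human_bytes(n: float) -> str:
    """Render a byte count such as num_bytes as a human-readable string."""
    units = ["B", "KiB", "MiB", "GiB", "TiB", "PiB"]
    for unit in units:
        # Stop once the value fits in this unit (or we run out of units).
        if n < 1024 or unit == units[-1]:
            return f"{n:.1f} {unit}"
        n /= 1024

assert human_bytes(512) == "512.0 B"
assert human_bytes(1536) == "1.5 KiB"
assert human_bytes(5 * 1024**3) == "5.0 GiB"
```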
Supporting Types
TableEncryptionConfiguration
- KmsKeyName string
The self link or full name of a key which should be used to encrypt this table. Note that the default BigQuery service account will need to have encrypt/decrypt permissions on this key - you may want to see the gcp.bigquery.getDefaultServiceAccount data source and the gcp.kms.CryptoKeyIAMBinding resource.
- KmsKeyName string
The self link or full name of a key which should be used to encrypt this table. Note that the default BigQuery service account will need to have encrypt/decrypt permissions on this key - you may want to see the gcp.bigquery.getDefaultServiceAccount data source and the gcp.kms.CryptoKeyIAMBinding resource.
- kmsKeyName string
The self link or full name of a key which should be used to encrypt this table. Note that the default BigQuery service account will need to have encrypt/decrypt permissions on this key - you may want to see the gcp.bigquery.getDefaultServiceAccount data source and the gcp.kms.CryptoKeyIAMBinding resource.
- kms_key_name str
The self link or full name of a key which should be used to encrypt this table. Note that the default BigQuery service account will need to have encrypt/decrypt permissions on this key - you may want to see the gcp.bigquery.getDefaultServiceAccount data source and the gcp.kms.CryptoKeyIAMBinding resource.
TableExternalDataConfiguration
- Autodetect bool
- Let BigQuery try to autodetect the schema and format of the table.
- SourceFormat string
The data format. Supported values are: “CSV”, “GOOGLE_SHEETS”, “NEWLINE_DELIMITED_JSON”, “AVRO”, “PARQUET”, and “DATASTORE_BACKUP”. To use “GOOGLE_SHEETS” the `scopes` must include “https://www.googleapis.com/auth/drive.readonly”.
- SourceUris List<string>
A list of the fully-qualified URIs that point to your data in Google Cloud.
- Compression string
The compression type of the data source. Valid values are “NONE” or “GZIP”.
- CsvOptions TableExternalDataConfigurationCsvOptionsArgs
Additional properties to set if `source_format` is set to “CSV”. Structure is documented below.
- GoogleSheetsOptions TableExternalDataConfigurationGoogleSheetsOptionsArgs
Additional options if `source_format` is set to “GOOGLE_SHEETS”. Structure is documented below.
- HivePartitioningOptions TableExternalDataConfigurationHivePartitioningOptionsArgs
When set, configures hive partitioning support. Not all storage formats support hive partitioning – requesting hive partitioning on an unsupported format will lead to an error, as will providing an invalid specification.
- IgnoreUnknownValues bool
Indicates if BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false.
- MaxBadRecords int
The maximum number of bad records that BigQuery can ignore when reading data.
- Autodetect bool
- Let BigQuery try to autodetect the schema and format of the table.
- SourceFormat string
The data format. Supported values are: “CSV”, “GOOGLE_SHEETS”, “NEWLINE_DELIMITED_JSON”, “AVRO”, “PARQUET”, and “DATASTORE_BACKUP”. To use “GOOGLE_SHEETS” the `scopes` must include “https://www.googleapis.com/auth/drive.readonly”.
- SourceUris []string
A list of the fully-qualified URIs that point to your data in Google Cloud.
- Compression string
The compression type of the data source. Valid values are “NONE” or “GZIP”.
- CsvOptions TableExternalDataConfigurationCsvOptions
Additional properties to set if `source_format` is set to “CSV”. Structure is documented below.
- GoogleSheetsOptions TableExternalDataConfigurationGoogleSheetsOptions
Additional options if `source_format` is set to “GOOGLE_SHEETS”. Structure is documented below.
- HivePartitioningOptions TableExternalDataConfigurationHivePartitioningOptions
When set, configures hive partitioning support. Not all storage formats support hive partitioning – requesting hive partitioning on an unsupported format will lead to an error, as will providing an invalid specification.
- IgnoreUnknownValues bool
Indicates if BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false.
- MaxBadRecords int
The maximum number of bad records that BigQuery can ignore when reading data.
- autodetect boolean
- Let BigQuery try to autodetect the schema and format of the table.
- sourceFormat string
The data format. Supported values are: “CSV”, “GOOGLE_SHEETS”, “NEWLINE_DELIMITED_JSON”, “AVRO”, “PARQUET”, and “DATASTORE_BACKUP”. To use “GOOGLE_SHEETS” the `scopes` must include “https://www.googleapis.com/auth/drive.readonly”.
- sourceUris string[]
A list of the fully-qualified URIs that point to your data in Google Cloud.
- compression string
The compression type of the data source. Valid values are “NONE” or “GZIP”.
- csvOptions TableExternalDataConfigurationCsvOptions
Additional properties to set if `source_format` is set to “CSV”. Structure is documented below.
- googleSheetsOptions TableExternalDataConfigurationGoogleSheetsOptions
Additional options if `source_format` is set to “GOOGLE_SHEETS”. Structure is documented below.
- hivePartitioningOptions TableExternalDataConfigurationHivePartitioningOptions
When set, configures hive partitioning support. Not all storage formats support hive partitioning – requesting hive partitioning on an unsupported format will lead to an error, as will providing an invalid specification.
- ignoreUnknownValues boolean
Indicates if BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false.
- maxBadRecords number
The maximum number of bad records that BigQuery can ignore when reading data.
- autodetect bool
- Let BigQuery try to autodetect the schema and format of the table.
- source_format str
The data format. Supported values are: “CSV”, “GOOGLE_SHEETS”, “NEWLINE_DELIMITED_JSON”, “AVRO”, “PARQUET”, and “DATASTORE_BACKUP”. To use “GOOGLE_SHEETS” the `scopes` must include “https://www.googleapis.com/auth/drive.readonly”.
- source_uris List[str]
A list of the fully-qualified URIs that point to your data in Google Cloud.
- compression str
The compression type of the data source. Valid values are “NONE” or “GZIP”.
- csv_options Dict[TableExternalDataConfigurationCsvOptions]
Additional properties to set if `source_format` is set to “CSV”. Structure is documented below.
- google_sheets_options Dict[TableExternalDataConfigurationGoogleSheetsOptions]
Additional options if `source_format` is set to “GOOGLE_SHEETS”. Structure is documented below.
- hive_partitioning_options Dict[TableExternalDataConfigurationHivePartitioningOptions]
When set, configures hive partitioning support. Not all storage formats support hive partitioning – requesting hive partitioning on an unsupported format will lead to an error, as will providing an invalid specification.
- ignore_unknown_values bool
Indicates if BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false.
- max_bad_records float
The maximum number of bad records that BigQuery can ignore when reading data.
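A minimal sketch of an external table over newline-delimited JSON files in GCS (dataset, table, and bucket names are hypothetical; dict keys follow the Python property names above):

```python
import pulumi_gcp as gcp

# Hypothetical external table; BigQuery reads the JSON files in place
# rather than loading them into managed storage.
external_table = gcp.bigquery.Table(
    "events-external",
    dataset_id="my_dataset",
    table_id="events_external",
    external_data_configuration={
        "autodetect": True,
        "source_format": "NEWLINE_DELIMITED_JSON",
        "source_uris": ["gs://my-bucket/events/*.json"],
        "ignore_unknown_values": True,
        "max_bad_records": 10,
    },
)
```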
TableExternalDataConfigurationCsvOptions
- Quote string
The value that is used to quote data sections in a CSV file. If your data does not contain quoted sections, set the property value to an empty string. If your data contains quoted newline characters, you must also set the `allow_quoted_newlines` property to true. The API-side default is `"`, specified in the provider escaped as `\"`. Due to limitations with default values, this value is required to be explicitly set.
- AllowJaggedRows bool
Indicates if BigQuery should accept rows that are missing trailing optional columns.
- AllowQuotedNewlines bool
Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file. The default value is false.
- Encoding string
The character encoding of the data. The supported values are UTF-8 or ISO-8859-1.
- FieldDelimiter string
The separator for fields in a CSV file.
- SkipLeadingRows int
The number of rows at the top of a CSV file that BigQuery will skip when reading the data.
- Quote string
The value that is used to quote data sections in a CSV file. If your data does not contain quoted sections, set the property value to an empty string. If your data contains quoted newline characters, you must also set the `allow_quoted_newlines` property to true. The API-side default is `"`, specified in the provider escaped as `\"`. Due to limitations with default values, this value is required to be explicitly set.
- AllowJaggedRows bool
Indicates if BigQuery should accept rows that are missing trailing optional columns.
- AllowQuotedNewlines bool
Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file. The default value is false.
- Encoding string
The character encoding of the data. The supported values are UTF-8 or ISO-8859-1.
- FieldDelimiter string
The separator for fields in a CSV file.
- SkipLeadingRows int
The number of rows at the top of a CSV file that BigQuery will skip when reading the data.
- quote string
The value that is used to quote data sections in a CSV file. If your data does not contain quoted sections, set the property value to an empty string. If your data contains quoted newline characters, you must also set the `allow_quoted_newlines` property to true. The API-side default is `"`, specified in the provider escaped as `\"`. Due to limitations with default values, this value is required to be explicitly set.
- allowJaggedRows boolean
Indicates if BigQuery should accept rows that are missing trailing optional columns.
- allowQuotedNewlines boolean
Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file. The default value is false.
- encoding string
The character encoding of the data. The supported values are UTF-8 or ISO-8859-1.
- fieldDelimiter string
The separator for fields in a CSV file.
- skipLeadingRows number
The number of rows at the top of a CSV file that BigQuery will skip when reading the data.
- quote str
The value that is used to quote data sections in a CSV file. If your data does not contain quoted sections, set the property value to an empty string. If your data contains quoted newline characters, you must also set the `allow_quoted_newlines` property to true. The API-side default is `"`, specified in the provider escaped as `\"`. Due to limitations with default values, this value is required to be explicitly set.
- allow_jagged_rows bool
Indicates if BigQuery should accept rows that are missing trailing optional columns.
- allow_quoted_newlines bool
Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file. The default value is false.
- encoding str
The character encoding of the data. The supported values are UTF-8 or ISO-8859-1.
- field_delimiter str
The separator for fields in a CSV file.
- skip_leading_rows float
The number of rows at the top of a CSV file that BigQuery will skip when reading the data.
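A sketch of a CSV-backed external table that sets the options above; note the explicit `quote` value, since the provider requires it to be set (all resource names and paths are hypothetical):

```python
import pulumi_gcp as gcp

# Hypothetical CSV external table; "quote" is the escaped double quote,
# matching the API-side default described above.
csv_table = gcp.bigquery.Table(
    "csv-external",
    dataset_id="my_dataset",
    table_id="csv_external",
    external_data_configuration={
        "autodetect": True,
        "source_format": "CSV",
        "source_uris": ["gs://my-bucket/exports/*.csv"],
        "csv_options": {
            "quote": "\"",
            "skip_leading_rows": 1,
            "field_delimiter": ",",
            "allow_quoted_newlines": True,
        },
    },
)
```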
TableExternalDataConfigurationGoogleSheetsOptions
- Range string
The range of the Google Sheet to query from, e.g. “sheet1!A1:B20”. At least one of `range` or `skip_leading_rows` must be set.
- SkipLeadingRows int
The number of rows at the top of the sheet that BigQuery will skip when reading the data. At least one of `range` or `skip_leading_rows` must be set.
- Range string
The range of the Google Sheet to query from, e.g. “sheet1!A1:B20”. At least one of `range` or `skip_leading_rows` must be set.
- SkipLeadingRows int
The number of rows at the top of the sheet that BigQuery will skip when reading the data. At least one of `range` or `skip_leading_rows` must be set.
- range string
The range of the Google Sheet to query from, e.g. “sheet1!A1:B20”. At least one of `range` or `skip_leading_rows` must be set.
- skipLeadingRows number
The number of rows at the top of the sheet that BigQuery will skip when reading the data. At least one of `range` or `skip_leading_rows` must be set.
- range str
The range of the Google Sheet to query from, e.g. “sheet1!A1:B20”. At least one of `range` or `skip_leading_rows` must be set.
- skip_leading_rows float
The number of rows at the top of the sheet that BigQuery will skip when reading the data. At least one of `range` or `skip_leading_rows` must be set.
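A sketch of a Google Sheets-backed table (the sheet URL, range, and names are hypothetical; remember that the provider's `scopes` must include drive.readonly for this format):

```python
import pulumi_gcp as gcp

# Hypothetical Sheets-backed table; both "range" and "skip_leading_rows"
# are set here, though only one of the two is strictly required.
sheet_table = gcp.bigquery.Table(
    "sheet-table",
    dataset_id="my_dataset",
    table_id="sheet_data",
    external_data_configuration={
        "autodetect": True,
        "source_format": "GOOGLE_SHEETS",
        "source_uris": ["https://docs.google.com/spreadsheets/d/SHEET_ID"],
        "google_sheets_options": {
            "range": "sheet1!A1:B20",
            "skip_leading_rows": 1,
        },
    },
)
```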
TableExternalDataConfigurationHivePartitioningOptions
- Mode string
When set, what mode of hive partitioning to use when reading data. The following modes are supported. * AUTO: automatically infer partition key name(s) and type(s). * STRINGS: automatically infer partition key name(s). All types are interpreted as strings. * CUSTOM: when set to CUSTOM, you must encode the partition key schema within the `source_uri_prefix` by setting `source_uri_prefix` to gs://bucket/path_to_table/{key1:TYPE1}/{key2:TYPE2}/{key3:TYPE3}. Not all storage formats support hive partitioning. Requesting hive partitioning on an unsupported format will lead to an error. Currently supported formats are: JSON, CSV, ORC, Avro and Parquet.
- SourceUriPrefix string
When hive partition detection is requested, a common prefix for all source uris must be supplied. The prefix must end immediately before the partition key encoding begins. For example, consider files following this data layout: gs://bucket/path_to_table/dt=2019-06-01/country=USA/id=7/file.avro and gs://bucket/path_to_table/dt=2019-05-31/country=CA/id=3/file.avro. When hive partitioning is requested with either AUTO or STRINGS detection, the common prefix can be either of gs://bucket/path_to_table or gs://bucket/path_to_table/. Note that when `mode` is set to CUSTOM, you must encode the partition key schema within the `source_uri_prefix` by setting `source_uri_prefix` to gs://bucket/path_to_table/{key1:TYPE1}/{key2:TYPE2}/{key3:TYPE3}.
- Mode string
When set, what mode of hive partitioning to use when reading data. The following modes are supported. * AUTO: automatically infer partition key name(s) and type(s). * STRINGS: automatically infer partition key name(s). All types are interpreted as strings. * CUSTOM: when set to CUSTOM, you must encode the partition key schema within the `source_uri_prefix` by setting `source_uri_prefix` to gs://bucket/path_to_table/{key1:TYPE1}/{key2:TYPE2}/{key3:TYPE3}. Not all storage formats support hive partitioning. Requesting hive partitioning on an unsupported format will lead to an error. Currently supported formats are: JSON, CSV, ORC, Avro and Parquet.
- SourceUriPrefix string
When hive partition detection is requested, a common prefix for all source uris must be supplied. The prefix must end immediately before the partition key encoding begins. For example, consider files following this data layout: gs://bucket/path_to_table/dt=2019-06-01/country=USA/id=7/file.avro and gs://bucket/path_to_table/dt=2019-05-31/country=CA/id=3/file.avro. When hive partitioning is requested with either AUTO or STRINGS detection, the common prefix can be either of gs://bucket/path_to_table or gs://bucket/path_to_table/. Note that when `mode` is set to CUSTOM, you must encode the partition key schema within the `source_uri_prefix` by setting `source_uri_prefix` to gs://bucket/path_to_table/{key1:TYPE1}/{key2:TYPE2}/{key3:TYPE3}.
- mode string
When set, what mode of hive partitioning to use when reading data. The following modes are supported. * AUTO: automatically infer partition key name(s) and type(s). * STRINGS: automatically infer partition key name(s). All types are interpreted as strings. * CUSTOM: when set to CUSTOM, you must encode the partition key schema within the `source_uri_prefix` by setting `source_uri_prefix` to gs://bucket/path_to_table/{key1:TYPE1}/{key2:TYPE2}/{key3:TYPE3}. Not all storage formats support hive partitioning. Requesting hive partitioning on an unsupported format will lead to an error. Currently supported formats are: JSON, CSV, ORC, Avro and Parquet.
- sourceUriPrefix string
When hive partition detection is requested, a common prefix for all source uris must be supplied. The prefix must end immediately before the partition key encoding begins. For example, consider files following this data layout: gs://bucket/path_to_table/dt=2019-06-01/country=USA/id=7/file.avro and gs://bucket/path_to_table/dt=2019-05-31/country=CA/id=3/file.avro. When hive partitioning is requested with either AUTO or STRINGS detection, the common prefix can be either of gs://bucket/path_to_table or gs://bucket/path_to_table/. Note that when `mode` is set to CUSTOM, you must encode the partition key schema within the `source_uri_prefix` by setting `source_uri_prefix` to gs://bucket/path_to_table/{key1:TYPE1}/{key2:TYPE2}/{key3:TYPE3}.
- mode str
When set, what mode of hive partitioning to use when reading data. The following modes are supported. * AUTO: automatically infer partition key name(s) and type(s). * STRINGS: automatically infer partition key name(s). All types are interpreted as strings. * CUSTOM: when set to CUSTOM, you must encode the partition key schema within the `source_uri_prefix` by setting `source_uri_prefix` to gs://bucket/path_to_table/{key1:TYPE1}/{key2:TYPE2}/{key3:TYPE3}. Not all storage formats support hive partitioning. Requesting hive partitioning on an unsupported format will lead to an error. Currently supported formats are: JSON, CSV, ORC, Avro and Parquet.
- source_uri_prefix str
When hive partition detection is requested, a common prefix for all source uris must be supplied. The prefix must end immediately before the partition key encoding begins. For example, consider files following this data layout: gs://bucket/path_to_table/dt=2019-06-01/country=USA/id=7/file.avro and gs://bucket/path_to_table/dt=2019-05-31/country=CA/id=3/file.avro. When hive partitioning is requested with either AUTO or STRINGS detection, the common prefix can be either of gs://bucket/path_to_table or gs://bucket/path_to_table/. Note that when `mode` is set to CUSTOM, you must encode the partition key schema within the `source_uri_prefix` by setting `source_uri_prefix` to gs://bucket/path_to_table/{key1:TYPE1}/{key2:TYPE2}/{key3:TYPE3}.
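The CUSTOM-mode encoding is just a string convention, so a small helper (purely illustrative, not part of the provider) can build such a prefix from a base path and a list of key/type pairs:

```python
def custom_source_uri_prefix(base, keys):
    """Build a CUSTOM-mode source_uri_prefix such as
    gs://bucket/path_to_table/{key1:TYPE1}/{key2:TYPE2}."""
    # Encode each (name, type) pair as {name:TYPE} and join with "/".
    encoded = "/".join("{%s:%s}" % (name, typ) for name, typ in keys)
    # Normalize a possible trailing slash on the base path.
    return base.rstrip("/") + "/" + encoded

prefix = custom_source_uri_prefix(
    "gs://bucket/path_to_table",
    [("dt", "DATE"), ("country", "STRING"), ("id", "INTEGER")],
)
# prefix == "gs://bucket/path_to_table/{dt:DATE}/{country:STRING}/{id:INTEGER}"
```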
TableRangePartitioning
- Field string
The field used to determine how to create a range-based partition.
- Range TableRangePartitioningRangeArgs
Information required to partition based on ranges. Structure is documented below.
- Field string
The field used to determine how to create a range-based partition.
- Range TableRangePartitioningRange
Information required to partition based on ranges. Structure is documented below.
- field string
The field used to determine how to create a range-based partition.
- range TableRangePartitioningRange
Information required to partition based on ranges. Structure is documented below.
- field str
The field used to determine how to create a range-based partition.
- range Dict[TableRangePartitioningRange]
Information required to partition based on ranges. Structure is documented below.
TableRangePartitioningRange
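A sketch of a range-partitioned table. The nested range block is not enumerated above, so the `start`/`end`/`interval` keys below are assumed from the underlying BigQuery range spec; all names are hypothetical:

```python
import pulumi_gcp as gcp

# Hypothetical integer-range-partitioned table: one partition per
# 1000 customer_id values between 0 and 100000.
table = gcp.bigquery.Table(
    "range-partitioned",
    dataset_id="my_dataset",
    table_id="sales",
    range_partitioning={
        "field": "customer_id",
        "range": {
            "start": 0,
            "end": 100000,
            "interval": 1000,
        },
    },
    schema="""[
      {"name": "customer_id", "type": "INTEGER"},
      {"name": "amount", "type": "NUMERIC"}
    ]""",
)
```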
TableTimePartitioning
- Type string
The only type supported is DAY, which will generate one partition per day based on data loading time.
- ExpirationMs int
Number of milliseconds for which to keep the storage for a partition.
- Field string
The field used to determine how to create a time-based partition.
- RequirePartitionFilter bool
If set to true, queries over this table require a partition filter that can be used for partition elimination to be specified.
- Type string
The only type supported is DAY, which will generate one partition per day based on data loading time.
- ExpirationMs int
Number of milliseconds for which to keep the storage for a partition.
- Field string
The field used to determine how to create a time-based partition.
- RequirePartitionFilter bool
If set to true, queries over this table require a partition filter that can be used for partition elimination to be specified.
- type string
The only type supported is DAY, which will generate one partition per day based on data loading time.
- expirationMs number
Number of milliseconds for which to keep the storage for a partition.
- field string
The field used to determine how to create a time-based partition.
- requirePartitionFilter boolean
If set to true, queries over this table require a partition filter that can be used for partition elimination to be specified.
- type str
The only type supported is DAY, which will generate one partition per day based on data loading time.
- expiration_ms float
Number of milliseconds for which to keep the storage for a partition.
- field str
The field used to determine how to create a time-based partition.
- require_partition_filter bool
If set to true, queries over this table require a partition filter that can be used for partition elimination to be specified.
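A sketch of a day-partitioned table that expires partition storage after 90 days and forces queries to supply a partition filter (all names are hypothetical):

```python
import pulumi_gcp as gcp

# Hypothetical day-partitioned table; expiration_ms is 90 days
# expressed in milliseconds.
events = gcp.bigquery.Table(
    "events",
    dataset_id="my_dataset",
    table_id="events",
    time_partitioning={
        "type": "DAY",
        "expiration_ms": 90 * 24 * 60 * 60 * 1000,  # 90 days
        "require_partition_filter": True,
    },
)
```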
TableView
- Query string
A query that BigQuery executes when the view is referenced.
- UseLegacySql bool
Specifies whether to use BigQuery’s legacy SQL for this view. The default value is true. If set to false, the view will use BigQuery’s standard SQL.
- Query string
A query that BigQuery executes when the view is referenced.
- UseLegacySql bool
Specifies whether to use BigQuery’s legacy SQL for this view. The default value is true. If set to false, the view will use BigQuery’s standard SQL.
- query string
A query that BigQuery executes when the view is referenced.
- useLegacySql boolean
Specifies whether to use BigQuery’s legacy SQL for this view. The default value is true. If set to false, the view will use BigQuery’s standard SQL.
- query str
A query that BigQuery executes when the view is referenced.
- use_legacy_sql bool
Specifies whether to use BigQuery’s legacy SQL for this view. The default value is true. If set to false, the view will use BigQuery’s standard SQL.
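A sketch of a standard-SQL view defined as a Table resource (project, dataset, and table names are hypothetical):

```python
import pulumi_gcp as gcp

# Hypothetical view; use_legacy_sql is set to False so the query is
# interpreted as BigQuery standard SQL.
events_view = gcp.bigquery.Table(
    "events-view",
    dataset_id="my_dataset",
    table_id="events_view",
    view={
        "query": "SELECT event_id, created_at FROM `my-project.my_dataset.events`",
        "use_legacy_sql": False,
    },
)
```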
Package Details
- Repository
- https://github.com/pulumi/pulumi-gcp
- License
- Apache-2.0
- Notes
- This Pulumi package is based on the `google-beta` Terraform Provider.