Google BigQuery
The Google BigQuery destination loads new data or change data capture (CDC) data to Google BigQuery. The destination can compensate for data drift to support loading to new or existing datasets, tables, and columns. For information about supported versions, see Supported Systems and Versions in the Data Collector documentation.
To load data, the destination first stages the pipeline data in Avro, CSV, JSON, or Parquet files in a staging area in Google Cloud Storage. Then, the destination creates a BigQuery batch load job that copies the staged files into BigQuery. For information about the Google BigQuery quota policy for batch loading data, see the Google Cloud BigQuery documentation.
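To make the load step concrete, the following sketch shows the kind of batch load job that copies a staged file from Cloud Storage into BigQuery, using the google-cloud-bigquery Python client. It is illustrative only; the bucket, dataset, and table names are hypothetical.

```python
# Illustrative sketch of a BigQuery batch load job that copies a staged
# file from Cloud Storage into a table. Names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.AVRO,  # staged files can also be CSV, JSON, or Parquet
)

load_job = client.load_table_from_uri(
    "gs://my-staging-bucket/staging/part-00000.avro",  # staged file in Cloud Storage
    "my-project.my_dataset.my_table",                  # target BigQuery table
    job_config=job_config,
)
load_job.result()  # block until the batch load job completes
```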
When you configure the destination, you specify authentication information for Google BigQuery and for your Google Cloud Storage staging area. You can also use a connection to define the information required to connect to BigQuery and to the staging area. You can optionally configure the destination to connect to Google BigQuery through a proxy server.
You specify the name of the dataset and tables to load the data to. The destination loads data from record fields to table columns based on matching names.
You can configure the destination to compensate for data drift by creating new columns in existing tables when new fields appear in records or by creating new tables or datasets as needed. When creating new tables, you can configure how the destination partitions the tables.
You can configure the root field for the row, and any first-level fields that you want to exclude from the record. You can specify characters to represent null values.
When processing CDC data, you can specify the key columns in the BigQuery table that the destination uses to evaluate the merge condition.
You can configure the destination to replace missing fields or fields with invalid data types with user-defined default values. You can also configure the destination to replace newline characters and trim leading and trailing spaces.
Before you use the Google BigQuery destination, you must complete some prerequisite tasks.
Prerequisites
Prepare the Google BigQuery Data Warehouse
Before configuring the Google BigQuery destination, prepare your BigQuery data warehouse.
Prepare the Google Cloud Storage Staging Area
Before configuring the Google BigQuery destination, prepare a staging area in Google Cloud Storage. The destination stages files in the staging area before loading them to Google BigQuery.
Credentials
When the Google BigQuery destination stages files in the Cloud Storage staging area and then loads the files to BigQuery, the destination must pass credentials to Google Cloud Storage and then to Google BigQuery. You can provide credentials in one of the following ways:
- Google Cloud default credentials
- Credentials in a file
- Credentials in a stage property
For details on how to configure each option, see Security in Google Cloud Stages.
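As a rough illustration of the first two options, the sketch below builds Cloud Storage and BigQuery clients with the Google Cloud Python client libraries, either from default credentials or from a service account key file. The key file path is a placeholder.

```python
# Illustrative sketch of the credential options, using the Google Cloud
# Python client libraries. The key file path is a placeholder.
from google.cloud import bigquery, storage

# Google Cloud default credentials: the clients pick up the environment's
# default identity (for example, GOOGLE_APPLICATION_CREDENTIALS).
storage_client = storage.Client()
bigquery_client = bigquery.Client()

# Credentials in a file: build the clients from a service account key file.
storage_client = storage.Client.from_service_account_json("/path/to/key.json")
bigquery_client = bigquery.Client.from_service_account_json("/path/to/key.json")
```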
Staging File Formats
The Google BigQuery destination stages data in Avro, CSV, JSON, or Parquet files. For more information about staging file formats in BigQuery, see the BigQuery documentation.
Note the following staging file format limits:
- The destination supports nested data types for Avro or JSON staging files only.
- For CSV staging files, the destination does not support nested Data Collector data types. The destination treats these fields as having an invalid data type.
For more information, see Row Generation.
You specify the file format using the Staging File Format property on the Data tab.
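For reference, the staging file formats correspond to the following BigQuery load-job source formats in the Python client. This mapping is illustrative only, not a destination setting.

```python
# Staging file formats and the matching BigQuery load-job source formats.
from google.cloud import bigquery

SOURCE_FORMATS = {
    "AVRO": bigquery.SourceFormat.AVRO,
    "CSV": bigquery.SourceFormat.CSV,
    "JSON": bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
    "PARQUET": bigquery.SourceFormat.PARQUET,
}
```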
Specifying Datasets and Tables
You can use the Google BigQuery destination to load data to one or more tables in one or more datasets.
- Single dataset or table
- To write data to a single dataset, enter the dataset name as follows: <dataset_name>
- Multiple datasets or tables
- To write data to multiple datasets and tables, specify a field in the record that defines the datasets and tables.
Use the Dataset and Table properties on the Table Definition tab to specify the datasets and tables to write to.
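For example, assuming each record carries the target names in /dataset and /table fields, you might set the Dataset property to an expression such as ${record:value('/dataset')} and the Table property to ${record:value('/table')}. The field names here are only illustrative.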
Enabling Data Drift Handling
The Google BigQuery destination can automatically compensate for changes in column, table, or dataset requirements, also known as data drift.
- Create new columns
- The destination can create new columns in existing tables when new fields appear in records. For example, if a record suddenly includes a new Address2 field, the destination creates a new Address2 column in the target table. The destination always creates new columns in existing tables as Nullable columns.
By default, the destination creates new columns based on the data in the new fields, such as creating an Int column for integer data. You can, however, configure the destination to create all new columns as String. A sketch of the corresponding BigQuery schema changes follows this list.
To enable the automatic creation of new columns, select the Enable Data Drift property on the Table Definition tab. To create all new columns as String, select the Create New Columns as String property.
- Create new tables
- When data drift handling is enabled, you can also configure the destination to create new tables as needed. For example, say the destination writes data to tables based on the region name in the Region field. When a new SW-3 region shows up in a record, the destination creates a new SW-3 table and writes the record to the new table.
- Create new datasets
- When data drift handling and the creation of new tables are enabled, you can also configure the destination to create new datasets as needed. For example, say the destination writes data to datasets based on the department name in the dept field. When a new Engineering department shows up in a record, the destination creates a new Engineering dataset and writes the record to a new table in the new dataset.
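The BigQuery operations behind these data drift behaviors can be sketched with the Python client, as shown below. This is only an illustration of the underlying schema and table changes, not the destination's implementation; the project, dataset, table, and column names are hypothetical.

```python
# Illustrative sketch of the BigQuery changes that data drift handling
# corresponds to. All names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()

# New column: append a NULLABLE field to an existing table's schema,
# as when a new Address2 field appears in records.
table = client.get_table("my-project.my_dataset.customers")
table.schema = list(table.schema) + [
    bigquery.SchemaField("Address2", "STRING", mode="NULLABLE")
]
client.update_table(table, ["schema"])

# New dataset and table: create them on demand, as when a new Engineering
# department or a new SW-3 region shows up in a record.
client.create_dataset(bigquery.Dataset("my-project.Engineering"), exists_ok=True)
client.create_table(
    bigquery.Table(
        "my-project.Engineering.employees",
        schema=[bigquery.SchemaField("name", "STRING", mode="NULLABLE")],
    ),
    exists_ok=True,
)
```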
Partitioning New Tables
When creating a new table, the Google BigQuery destination can also partition the table.
To enable partitioning for new tables, first enable data drift and select the Create Table property on the Table Definition tab. Then, select the Partition Table property and define partition configurations.
You can add multiple partition configurations so that the destination can partition new tables in different ways. Because a BigQuery table can have only one partitioning specification, the destination applies a single partition configuration when it creates a table.
For each partition configuration, you specify whether the configuration should be applied to a specific table or whether the configuration is the default. Then, you specify the partition options.
Partition Configurations
- To a specific table
- To create a partition configuration for a specific table, enter the dataset and table name to apply the partition configuration to.
- As the default
- To create the default partition configuration that the destination uses when it cannot find a matching configuration for a new table, select Default Partition. You can create one default partition configuration.
When you enable the creation and partitioning of new tables, the destination first looks for a matching partition configuration by dataset and table name. If the destination cannot find a match, it uses the default configuration. If a default configuration does not exist, the destination creates the table without partitioning.
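The lookup order can be summarized in a short sketch. This is plain illustrative Python; the configuration structure and function are hypothetical.

```python
# Illustrative sketch of the partition configuration lookup order.
# The configs mapping and this helper are hypothetical.
def resolve_partition_config(dataset, table, configs, default_config=None):
    """Return the partition configuration to apply to a new table, or None."""
    specific = configs.get((dataset, table))
    if specific is not None:
        return specific           # matching configuration by dataset and table
    if default_config is not None:
        return default_config     # fall back to the default configuration
    return None                   # create the table without partitioning
```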
For example, say the destination creates tables based on the region name in the Region field. The Region field can have one of the following values: WestRegion, EastRegion, and NorthRegion. You also enable the creation and partitioning of new tables. You can set up partition configurations for the three new tables in the following ways:
- Different partitioning for each table - Create three partition configurations, one for each table: WestRegion, EastRegion, and NorthRegion.
- Same partitioning for each table - Create a single default partition configuration.
- Different partitioning for one table, same partitioning for the remaining tables - Create a partition configuration for the WestRegion table. Then create a default partition configuration used for the remaining tables.
- Partition one table, do not partition the remaining tables - Create a partition configuration for the WestRegion table. Do not create a default partition configuration.
Partition Options
When you define a partition configuration, you select one of the following partition types:
- Date
- Datetime
- Timestamp
- Integer
- Ingestion
For the Date, Datetime, Timestamp, and Integer partition types, you must also specify the column to partition by. The data type of the column must match the selected partition type. For example, to partition by the Integer type, the data type of the column must be Integer.
You configure additional options for each partition type as required by BigQuery. For details about the available options for each partition type, see the Google Cloud BigQuery documentation.
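As an illustration of what these partition types correspond to in BigQuery, the sketch below creates tables with column-based time partitioning, integer range partitioning, and ingestion-time partitioning using the Python client. Table names, column names, and range values are hypothetical.

```python
# Illustrative sketch of the BigQuery partition specs behind each partition
# type. All names and range values are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()
schema = [
    bigquery.SchemaField("event_ts", "TIMESTAMP", mode="NULLABLE"),
    bigquery.SchemaField("customer_id", "INTEGER", mode="NULLABLE"),
]

# Date, Datetime, or Timestamp partitioning on a column:
by_day = bigquery.Table("my-project.my_dataset.events_by_day", schema=schema)
by_day.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY,
    field="event_ts",  # the column's data type must match the partition type
)

# Integer range partitioning on a column:
by_id = bigquery.Table("my-project.my_dataset.events_by_id", schema=schema)
by_id.range_partitioning = bigquery.RangePartitioning(
    field="customer_id",
    range_=bigquery.PartitionRange(start=0, end=100000, interval=1000),
)

# Ingestion-time partitioning: no partition column is specified.
by_load = bigquery.Table("my-project.my_dataset.events_by_load_time", schema=schema)
by_load.time_partitioning = bigquery.TimePartitioning(type_=bigquery.TimePartitioningType.DAY)

for t in (by_day, by_id, by_load):
    client.create_table(t, exists_ok=True)
```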
Row Generation
When writing a record to a table, the Google BigQuery destination includes all record fields in the resulting row by default. The destination uses the root field, /, as the basis for the resulting row.
You can configure the Row Field property on the Data tab to specify a map or list-map field in the record as the basis for the row. The resulting record includes only the data from the specified map or list-map field and excludes all other record data. Use this functionality when the data that you want to write to BigQuery exists in a single map or list-map field within the record.
When you want to use the root field but do not want to include all fields in the resulting row, configure the destination to ignore specified first-level fields.
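The effect of the Row Field property and of ignoring first-level fields can be sketched with a plain Python record. The field names here are hypothetical.

```python
# Illustrative sketch of row generation options. Field names are hypothetical.
record = {
    "op": "insert",
    "payload": {"id": 557, "name": "Jane Smith", "state": "NC"},
}

# Row Field set to the /payload map field: only its entries become columns.
row = record["payload"]        # {"id": 557, "name": "Jane Smith", "state": "NC"}

# Root field / with "op" configured as an ignored first-level field.
row = {k: v for k, v in record.items() if k != "op"}
```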
The destination supports nested data types for Avro or JSON staging files only.
For example, say the destination stages data in CSV files, which do not support nested data types, and uses the root field, /, as the basis for the resulting row. A record contains the following fields:
{
    "name": "Jane Smith",
    "id": "557",
    "address": {
        "street": "101 3rd St",
        "city": "Huntsville",
        "state": "NC",
        "zipcode": "27023"
    }
}
The destination treats the address map field as a field with an invalid data type, processing the field as an error record by default. You can configure the destination to ignore the field and process the remaining record data, as described in Missing Fields and Fields with Invalid Types.
Missing Fields and Fields with Invalid Types
By default, the destination treats records with missing fields or with invalid data types in fields as error records.
You can configure the destination to ignore missing fields or fields with invalid data types, replacing the data in the field with an empty value.
The default for each data type is an empty string, which is the default null value in BigQuery. You can specify a different default value to use for each data type on the Data Advanced tab. For example, you might define the default value for a missing String field or a String field with an invalid data type as none or not_applicable.
To configure the destination to ignore missing fields and fields with invalid data types, select the Ignore Missing Fields and the Ignore Fields with Invalid Types properties on the Data Advanced tab.
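The replacement behavior can be sketched as follows. The column metadata, helper function, and default values are hypothetical.

```python
# Illustrative sketch of replacing missing fields with configured defaults.
# The helper and default values are hypothetical.
DEFAULTS = {"STRING": "not_applicable"}   # per-type defaults; empty string otherwise

def fill_missing(row, columns):
    """Fill columns missing from the row with the configured default value."""
    for name, col_type in columns.items():
        row.setdefault(name, DEFAULTS.get(col_type, ""))
    return row

fill_missing({"id": 557}, {"id": "INTEGER", "name": "STRING"})
# -> {"id": 557, "name": "not_applicable"}
```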
Performance Optimization
Use the following tips to optimize for performance and cost-effectiveness when using the Google BigQuery destination:
- Increase the batch size
- The maximum batch size is determined by the origin in the pipeline and typically has a default value of 1,000 records. To take advantage of the loading capabilities that Google BigQuery provides, increase the maximum batch size in the pipeline origin to 20,000-50,000 records. Be sure to increase the Data Collector Java heap size, as needed. For more information, see Java Heap Size in the Data Collector documentation.
- Use multiple threads
- You can use multiple threads to improve performance when you include a multithreaded origin in the pipeline. When Data Collector resources allow, using multiple threads enables processing multiple batches of data concurrently.
- Enable additional connections to Google BigQuery
- When loading data into multiple tables or datasets, increase the number of connections that the destination makes to Google BigQuery. Each additional connection allows the destination to load data into an additional table, concurrently.
CRUD Operation Processing
The Google BigQuery destination can insert, update, upsert, or delete data when you configure the destination to process CDC data. When processing CDC data, the destination merges data into BigQuery tables.
The destination uses the CRUD operation specified in the sdc.operation.type record header attribute. The destination performs operations based on the following numeric values:
- 1 for INSERT
- 2 for DELETE
- 3 for UPDATE
- 4 for UPSERT
If your pipeline has a CRUD-enabled origin that processes changed data, the destination simply reads the operation type from the sdc.operation.type header attribute that the origin generates. If your pipeline has a non-CDC origin, you can use the Expression Evaluator processor or a scripting processor to define the record header attribute. For more information about Data Collector changed data processing and a list of CDC-enabled origins, see Processing Changed Data.
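To make the merge step concrete, the sketch below runs a BigQuery MERGE statement of the kind used to apply CDC operations from a staged table by key column, through the Python client. The table names, key column, and operation column are hypothetical.

```python
# Illustrative sketch of merging staged CDC data into a target table by key
# column. Table names, the key column, and the op column are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()

merge_sql = """
MERGE `my-project.my_dataset.customers` AS target
USING `my-project.my_dataset.customers_staged` AS staged
ON target.id = staged.id                 -- key columns drive the merge condition
WHEN MATCHED AND staged.op = 2 THEN      -- 2 = DELETE
  DELETE
WHEN MATCHED THEN                        -- 3 = UPDATE, 4 = UPSERT (existing row)
  UPDATE SET name = staged.name, address = staged.address
WHEN NOT MATCHED THEN                    -- 1 = INSERT, 4 = UPSERT (new row)
  INSERT (id, name, address) VALUES (staged.id, staged.name, staged.address)
"""
client.query(merge_sql).result()  # run the merge as a query job and wait
```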
BigQuery Data Types
The Google BigQuery destination converts Data Collector data types into BigQuery data types before writing data to BigQuery tables.
When you configure the destination to compensate for data drift, you can also configure the destination to create all new columns as String. However, by default, the destination converts record data to the appropriate data type.
The destination converts the following Data Collector data types into these BigQuery data types:
| Data Collector Data Type | BigQuery Data Type |
| --- | --- |
| Boolean | Boolean |
| Byte | Bytes |
| Byte Array | Bytes |
| Character | String |
| Date | Date |
| Datetime | Datetime or Timestamp |
| Decimal | Decimal |
| Double | Float |
| Float | Float |
| Integer | Integer |
| List | Repeated mode column of the specified data type. Supported for Avro staging files only. For more information, see Staging File Formats. |
| List-Map | Struct or Record Repeated mode column with subfields created from the key-value pairs. Supported for Avro staging files only. For more information, see Staging File Formats. |
| Long | Integer |
| Map | Struct or Record Repeated mode column with subfields created from the key-value pairs. Supported for Avro staging files only. For more information, see Staging File Formats. |
| Short | Integer |
| String | String |
| Time | Time |
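As an illustration of the List and Map rows in the table above, the following schema fields show the REPEATED and RECORD (Struct) column shapes in the Python client. The field names are hypothetical.

```python
# Illustrative schema fields for the List and Map conversions above.
# Field names are hypothetical.
from google.cloud import bigquery

schema = [
    bigquery.SchemaField("tags", "STRING", mode="REPEATED"),   # List of String values
    bigquery.SchemaField(                                      # Map or List-Map field
        "address",
        "RECORD",                                              # Struct column
        mode="NULLABLE",
        fields=[
            bigquery.SchemaField("street", "STRING"),
            bigquery.SchemaField("city", "STRING"),
        ],
    ),
]
```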
Configuring a Google BigQuery Destination
Configure a Google BigQuery destination to load data to BigQuery tables. Before you use the destination in a pipeline, complete the prerequisite tasks.