Amazon S3
The Amazon S3 origin reads objects stored in Amazon S3. The object names must share a prefix pattern and should be fully written. To read messages from Amazon SQS, use the Amazon SQS Consumer origin. The Amazon S3 origin can process objects in parallel with multiple threads. For information about supported versions, see Supported Systems and Versions in the Data Collector documentation.
With the Amazon S3 origin, you define the region, bucket, prefix pattern, optional common prefix, and read order. These properties determine the objects that the origin processes. You configure the authentication method that the origin uses to connect to Amazon S3. You can optionally include Amazon S3 object metadata in the record as record header attributes.
After processing an object or upon encountering errors, the origin can keep, archive, or delete the object. When archiving, the origin can copy or move the object.
When a pipeline stops, the Amazon S3 origin notes where it stops reading. When the pipeline starts again, the origin continues processing from where it stopped by default. You can reset the origin to process all requested objects.
You can configure the origin to decrypt data stored on Amazon S3 with server-side encryption and customer-provided encryption keys. You can optionally use a proxy to connect to Amazon S3. You can also use a connection to configure the origin.
The origin can generate events for an event stream. For more information about dataflow triggers and the event framework, see Dataflow Triggers Overview.
Authentication Method
You can configure the Amazon S3 origin to authenticate with Amazon Web Services (AWS) using an instance profile or AWS access keys. When accessing a public bucket, you can connect anonymously using no authentication.
For more information about the authentication methods and details on how to configure each method, see Security in Amazon Stages.
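As a rough illustration outside of Data Collector, the three connection styles can be sketched with boto3. The bucket, key ID, and secret below are placeholders, not part of the origin configuration.

```python
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Instance profile: no credentials in code; they come from the IAM role
# attached to the instance (or the default credential chain).
s3_instance_profile = boto3.client("s3")

# AWS access keys: credentials supplied explicitly (placeholder values).
s3_access_keys = boto3.client(
    "s3",
    aws_access_key_id="AKIAEXAMPLE",
    aws_secret_access_key="placeholder-secret",
)

# Anonymous access to a public bucket: no authentication at all.
s3_anonymous = boto3.client("s3", config=Config(signature_version=UNSIGNED))
```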
Common Prefix, Prefix Pattern, and Wildcards
The Amazon S3 origin appends the prefix pattern to the common prefix to define the objects that the origin processes. You can specify an exact prefix pattern, or you can use Ant-style path patterns to read multiple objects recursively. Ant-style path patterns can include the following wildcards:
- Question mark (?) to match a single character
- Asterisk (*) to match zero or more characters
- Double asterisks (**) to match zero or more directories
For example, to process all log objects in US/East/MD/ and all nested prefixes, you can use the following common prefix and prefix pattern:
Common Prefix: US/East/MD/
Prefix Pattern: **/*.log
If the prefixes that you want to include appear earlier in the hierarchy, such as US/**/weblogs/, you can include the nested prefixes in the prefix pattern or define the entire hierarchy in the prefix pattern, as follows:
Common Prefix: US/
Prefix Pattern: **/weblogs/*.log
Common Prefix:
Prefix Pattern: US/**/weblogs/*.log
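To make the wildcard semantics concrete, here is a simplified Python sketch of Ant-style matching applied to keys under a common prefix. It is an illustration only, with hypothetical helper names, and it does not cover every Ant edge case (for example, ** also matching zero directories); the origin performs the actual matching itself.

```python
import re

def ant_to_regex(pattern: str) -> re.Pattern:
    """Translate an Ant-style prefix pattern into a regular expression."""
    parts = []
    i = 0
    while i < len(pattern):
        if pattern[i:i + 2] == "**":
            parts.append(".*")       # ** spans any number of prefixes
            i += 2
        elif pattern[i] == "*":
            parts.append("[^/]*")    # * stays within one prefix segment
            i += 1
        elif pattern[i] == "?":
            parts.append("[^/]")     # ? matches a single character
            i += 1
        else:
            parts.append(re.escape(pattern[i]))
            i += 1
    return re.compile("".join(parts) + r"\Z")

common_prefix = "US/East/MD/"
prefix_pattern = "**/*.log"
matcher = ant_to_regex(prefix_pattern)

for key in ["US/East/MD/server1/web.log", "US/East/MD/readme.txt"]:
    relative = key[len(common_prefix):]   # match against the part after the common prefix
    print(key, "->", bool(matcher.match(relative)))
```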
Multithreaded Processing
The Amazon S3 origin uses multiple concurrent threads to process data based on the Number of Threads property.
Each thread reads data from a single object, and each object can have a maximum of one thread read from it at a time. The object read order is based on the configuration for the Read Order property.
As the pipeline runs, each thread connects to the origin system, creates a batch of data, and passes the batch to an available pipeline runner. A pipeline runner is a sourceless pipeline instance - an instance of the pipeline that includes all of the processors, executors, and destinations in the pipeline and handles all pipeline processing after the origin.
Each pipeline runner processes one batch at a time, just like a pipeline that runs on a single thread. When the flow of data slows, the pipeline runners wait idly until they are needed, generating an empty batch at regular intervals. You can configure the Runner Idle Time pipeline property to specify the interval or to opt out of empty batch generation.
Multithreaded pipelines preserve the order of records within each batch, just like a single-threaded pipeline. But since batches are processed by different pipeline runners, the order that batches are written to destinations is not ensured.
For example, suppose you configure the origin to use five threads to read objects in the order of last-modified timestamp. When you start the pipeline, the origin creates five threads, and Data Collector creates a matching number of pipeline runners.
The Amazon S3 origin assigns a thread to each of the five oldest objects. Each thread processes its assigned object, passing batches of data to the origin. Upon receiving data, the origin passes a batch to each of the pipeline runners for processing.
After a thread completes processing an object, the origin assigns the thread to the next object based on the last-modified timestamp, until all objects are processed.
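The assignment of objects to threads can be sketched as follows. The object list, keys, and helper function are hypothetical, and the real origin hands batches to pipeline runners rather than printing results.

```python
from concurrent.futures import ThreadPoolExecutor

NUMBER_OF_THREADS = 5  # corresponds to the Number of Threads property

# Hypothetical objects, already sorted by last-modified timestamp.
objects_in_read_order = [
    {"key": "logs/a.log", "last_modified": 1},
    {"key": "logs/b.log", "last_modified": 2},
    {"key": "logs/c.log", "last_modified": 3},
]

def read_object(obj):
    # One thread owns this object at a time; it reads records in order and
    # groups them into batches for an available pipeline runner.
    return f"finished {obj['key']}"

with ThreadPoolExecutor(max_workers=NUMBER_OF_THREADS) as pool:
    # Threads pick up the next object in read order as they become free, so
    # batches from different objects may reach destinations in any order,
    # even though record order within each batch is preserved.
    for result in pool.map(read_object, objects_in_read_order):
        print(result)
```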
For more information about multithreaded pipelines, see Multithreaded Pipeline Overview.
Record Header Attributes
When the Amazon S3 origin processes Avro data, it includes the Avro schema in an avroSchema record header attribute. When the origin processes Parquet data and Skip Union Indexes is not enabled, it generates an avro.union.typeIndex./id record header attribute identifying the index number of the element in the union that the data is read from. You can also configure the origin to include Amazon S3 object metadata in record header attributes.
You can use the record:attribute or record:attributeOrDefault functions to access the information in the attributes. For more information about working with record header attributes, see Working with Header Attributes.
Object Metadata in Record Header Attributes
You can include Amazon S3 object metadata in record header attributes. Include metadata when you want to use the information to help process records. For example, you might include metadata if you want to route records to different branches of a pipeline based on the last-modified timestamp.
- System-defined metadata
- The origin includes the following system-defined metadata:
- Name - The object name. Bucket and prefix information is included as follows: <bucket>/<prefix>/<object_name>
- Cache-Control
- Content-Disposition
- Content-Encoding
- Content-Length
- Content-MD5
- Content-Range
- Content-Type
- ETag
- Expires
- Last-Modified
- User-defined metadata
- When available, the Amazon S3 origin also includes user-defined metadata in record header attributes.
For more information about record header attributes, see Record Header Attributes.
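For reference, the same system-defined and user-defined metadata can be inspected directly with boto3's head_object call; the bucket and key below are placeholders.

```python
import boto3

s3 = boto3.client("s3")
head = s3.head_object(Bucket="my-bucket", Key="US/East/MD/web.log")

print(head["ContentLength"])     # system-defined: Content-Length
print(head.get("ContentType"))   # system-defined: Content-Type
print(head["ETag"])              # system-defined: ETag
print(head["LastModified"])      # system-defined: Last-Modified
print(head.get("Metadata", {}))  # user-defined metadata, for example {"department": "ops"}
```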
Read Order
The Amazon S3 origin reads objects in ascending order based on the object key name or the last modified timestamp. For best performance when reading a large number of objects, configure the origin to read objects based on the key name.
You can configure one of the following read orders:
- Lexicographically Ascending Key Names
- The Amazon S3 origin can read objects in lexicographically ascending order based on key names. Lexicographically ascending order reads the numbers 1 through 11 as follows: 1, 10, 11, 2, 3, 4, 5, 6, 7, 8, 9. To read numbered objects in numerical order, use key names with leading zeros, such as 01 through 11.
- Last Modified Timestamp
- The Amazon S3 origin can read objects in ascending order based on the last modified timestamp. When you start a pipeline, the origin starts processing data with the earliest object that matches the common prefix and prefix pattern, and then progresses in chronological order. If two or more objects have the same timestamp, the origin processes the objects in lexicographically increasing order by key name.
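The difference between the two read orders can be seen in a small sketch; the keys and timestamps are hypothetical.

```python
keys = ["logs/2.log", "logs/10.log", "logs/1.log", "logs/11.log", "logs/9.log"]

# Lexicographically ascending key names: a plain string sort.
print(sorted(keys))
# ['logs/1.log', 'logs/10.log', 'logs/11.log', 'logs/2.log', 'logs/9.log']

# Last modified timestamp: sort by timestamp, breaking ties by key name.
objects = [
    {"key": "logs/2.log", "last_modified": 100},
    {"key": "logs/10.log", "last_modified": 100},
    {"key": "logs/1.log", "last_modified": 50},
]
ordered = sorted(objects, key=lambda o: (o["last_modified"], o["key"]))
print([o["key"] for o in ordered])
# ['logs/1.log', 'logs/10.log', 'logs/2.log']
```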
Buffer Limit and Error Handling
The Amazon S3 origin uses a buffer to read objects into memory to produce records. The size of the buffer determines the maximum size of the record that can be processed.
The buffer limit helps prevent out of memory errors. Decrease the buffer limit when memory on the Data Collector machine is limited. Increase the buffer limit to process larger records when memory is available. When a record is larger than the buffer limit, the origin cannot fully process it and handles the object based on the error handling configured for the stage:
- Discard
- The origin discards the record and all remaining records in the object, and then continues processing the next object.
- Send to Error
- With a buffer limit error, the origin cannot send the record to the pipeline for error handling because it is unable to fully process the record. Instead, the origin displays a message in the pipeline history indicating that a buffer overrun error occurred. If an error directory is configured for the stage, the origin moves the object to the error directory and continues processing the next object.
- Stop Pipeline
- The origin stops the pipeline and displays a message indicating that a buffer overrun error occurred. The message includes the object and offset where the buffer overrun error occurred. The information displays in the pipeline history.
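As a rough sketch of the behavior described above (the names and limit value are hypothetical, not origin configuration):

```python
BUFFER_LIMIT_BYTES = 128 * 1024  # hypothetical buffer limit

def handle_record(raw: bytes, on_overrun: str, object_key: str):
    """Return the record if it fits in the buffer, otherwise apply the
    configured buffer-overrun handling."""
    if len(raw) <= BUFFER_LIMIT_BYTES:
        return raw
    if on_overrun == "DISCARD":
        return None                                           # drop the rest of this object, move on
    if on_overrun == "SEND_TO_ERROR":
        print(f"buffer overrun while reading {object_key}")   # noted in the pipeline history
        return None                                           # optionally move the object, then continue
    raise RuntimeError(f"buffer overrun in {object_key}")     # stop the pipeline
```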
Server Side Encryption
You can configure the origin to decrypt data stored on Amazon S3 with Amazon Web Services server-side encryption. When the data is encrypted with customer-provided encryption keys (SSE-C), the origin requires the following information:
- Base64 encoded 256-bit encryption key
- Base64 encoded 128-bit MD5 digest of the encryption key using RFC 1321
For information about implementing customer-provided encryption keys in the origin system, see the Amazon S3 documentation.
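The two values can be produced as follows; the key material here is a placeholder for illustration only.

```python
import base64
import hashlib

# Hypothetical 256-bit (32-byte) customer-provided encryption key.
raw_key = bytes(32)

encryption_key = base64.b64encode(raw_key).decode()
key_md5_digest = base64.b64encode(hashlib.md5(raw_key).digest()).decode()

print("Base64 encoded 256-bit encryption key:", encryption_key)
print("Base64 encoded 128-bit MD5 digest:    ", key_md5_digest)
```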
Event Generation
The Amazon S3 origin can generate events that you can use in an event stream. When you enable event generation, the origin generates event records each time the origin starts or completes reading an object and after the configured batch wait time has elapsed following all processing of the available data. You can use the events in any logical way. For example:
- With the Pipeline Finisher executor to stop the pipeline and transition the pipeline to a Finished state when the origin completes processing available data.
When you restart a pipeline stopped by the Pipeline Finisher executor, the origin continues processing from the last-saved offset unless you reset the origin.
For an example, see Stopping a Pipeline After Processing All Available Data.
- With a destination to store event information.
For an example, see Preserving an Audit Trail of Events.
For more information about dataflow triggers and the event framework, see Dataflow Triggers Overview.
Event Records
Event records generated by the Amazon S3 origin have the following event-related record header attributes:

Record Header Attribute | Description |
---|---|
sdc.event.type | Event type. Uses one of the following types: new-file, finished-file, no-more-data |
sdc.event.version | Integer that indicates the version of the event record type. |
sdc.event.creation_timestamp | Epoch timestamp when the stage created the event. |
The Amazon S3 origin can generate the following types of event records:
- new-file
- The Amazon S3 origin generates a new-file event record when it starts processing a new object.
- finished-file
- The Amazon S3 origin generates a finished-file event record when it finishes processing an object.
- no-more-data
- The Amazon S3 origin generates a no-more-data event record when the origin completes processing all available records and the number of seconds configured for Batch Wait Time elapses without any new objects appearing to be processed.
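For illustration, an event record's header attributes might look like the following; the values shown are hypothetical.

```python
# Hypothetical header attribute values for a finished-file event record.
event_header_attributes = {
    "sdc.event.type": "finished-file",
    "sdc.event.version": "1",
    "sdc.event.creation_timestamp": "1700000000000",
}

# Downstream logic might branch on the event type, for example triggering
# the Pipeline Finisher executor only on no-more-data events.
if event_header_attributes["sdc.event.type"] == "no-more-data":
    print("all available data processed")
```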
Data Formats
- Avro
- Generates a record for every Avro record. Includes a precision and scale field attribute for each Decimal field.
- Binary
- Generates a record with a single byte array field at the root of the record.
- Delimited
- Generates a record for each delimited line.
- Excel
- Generates a record for every row in the file. Can process .xls or .xlsx files.
You can configure the origin to read from all sheets in a workbook or from particular sheets in a workbook. You can specify whether files include a header row and whether to ignore the header row. You can also configure the origin to skip cells that do not have a corresponding header value. A header row must be the first row of a file. Vertical header columns are not recognized.
The origin cannot process Excel files with large numbers of rows. You can save such files as CSV files in Excel, and then use the origin to process them with the delimited data format.
- JSON
- Generates a record for each JSON object. You can process JSON files that include multiple JSON objects or a single JSON array.
- Parquet
- The origin generates records for every Parquet record in the file. The file must contain the Parquet schema. The origin uses the Parquet schema to generate records.
The stage includes the Parquet schema in a parquetSchema record header attribute. When Skip Union Indexes is not enabled, the origin generates an avro.union.typeIndex./id record header attribute identifying the index number of the element in the union that the data is read from. If a schema contains many unions and the pipeline does not depend on index information, you can enable Skip Union Indexes to avoid long processing times associated with storing a large number of indexes.
- Log
- Generates a record for every log line.
- Protobuf
- Generates a record for every protobuf message.
- SDC Record
- Generates a record for every record. Use to process records generated by a Data Collector pipeline using the SDC Record data format.
- Text
- Generates a record for each line of text or for each section of text based on a custom delimiter.
- Whole File
- Streams whole files from the origin system to the destination system. You can specify a transfer rate or use all available resources to perform the transfer.
- XML
- Generates records based on a user-defined delimiter element. Use an XML element directly under the root element or define a simplified XPath expression. If you do not define a delimiter element, the origin treats the XML file as a single record.
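As a rough analogy in Python (the element names and document are hypothetical), splitting on a delimiter element directly under the root produces one record per matching element:

```python
import xml.etree.ElementTree as ET

xml_doc = """<orders>
  <order id="1"><item>book</item></order>
  <order id="2"><item>pen</item></order>
</orders>"""

root = ET.fromstring(xml_doc)
for element in root.findall("order"):   # delimiter element directly under the root
    record = {"id": element.get("id"), "item": element.findtext("item")}
    print(record)
```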