HTTP Server
The HTTP Server origin listens on an HTTP endpoint and processes the contents of all authorized HTTP POST and PUT requests. Use the HTTP Server origin to read high volumes of HTTP POST and PUT requests using multiple threads. For information about supported versions, see Supported Systems and Versions in the Data Collector documentation.
The HTTP Server origin can read requests containing messages with no compression or with the Gzip or Snappy compression format.
The HTTP Server origin can use multiple threads to enable parallel processing of data from multiple HTTP clients. The origin can require that requests specify an allowed application ID. Before you run a pipeline with the origin, perform additional steps to configure the HTTP clients.
When a pipeline stops, the HTTP Server origin notes where it stops reading. When the pipeline starts again, the origin continues processing from where it stopped by default. You can reset the origin to process all available data.
When you configure the HTTP Server origin, you specify the maximum number of concurrent requests to determine how many threads to use. You define the listening port, allowed application IDs, and the maximum message size. You can also configure SSL/TLS properties, including default transport protocols and cipher suites.
The origin provides request header fields as record header attributes so you can use the information in the pipeline when needed.
Prerequisites
Before you run a pipeline with the HTTP Server origin, complete the following prerequisites to configure the HTTP clients.
Send Data to the Listening Port
Configure the HTTP clients to send data to the HTTP Server listening port.
When you configure the origin, you define a listening port number where the origin listens for data. To pass data to the pipeline, configure each HTTP client to send data to a URL that includes the listening port number.
<http | https>://<sdc_hostname>:<listening_port>/
- <http | https> - Use https for secure HTTP connections.
- <sdc_hostname> - The Data Collector host name.
- <listening_port> - The port number where the origin listens for data.
For example: https://localhost:8000/
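As a sketch, an HTTP client can be pointed at the listening port with a few lines of Python. The host name, port, and payload below are example values; substitute the Data Collector host and the listening port configured on the origin.

```python
import urllib.request

# Example values only: replace with your Data Collector host and the
# listening port configured on the HTTP Server origin.
url = "https://localhost:8000/"
payload = b'{"event": "example"}'

# Build a POST request carrying the payload.
req = urllib.request.Request(url, data=payload, method="POST")
req.add_header("Content-Type", "application/json")

# urllib.request.urlopen(req)  # uncomment to send; the pipeline must be running
print(req.full_url, req.get_method())
```

The same request construction works for PUT by passing `method="PUT"`.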
Include the Application ID in Requests
For origins configured to require requests from allowed application IDs, configure the HTTP clients to include an allowed application ID in each request.
When you configure the HTTP Server origin, you can optionally define a list of application IDs allowed to pass requests to the origin. If you specify a list of application IDs, then all messages sent to the origin must include one of the specified application IDs.
Client requests can include the application ID in one of the following ways:
- In request headers
- Add the following information to the request header for all HTTP requests that you want the origin to process:
X-SDC-APPLICATION-ID: <application_ID>
- In a query parameter in the URL
- If you cannot configure the client request headers, for example, when the requests are generated by another system, configure each HTTP client to send data to a URL that includes the application ID in a query parameter.
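Both options can be sketched in Python. The application ID below is a placeholder, and the `sdcApplicationId` query parameter name is an assumption based on the Data Collector convention; verify it against your Data Collector version before relying on it.

```python
import urllib.parse
import urllib.request

app_id = "myAppId"  # placeholder: an application ID allowed by the origin
base_url = "http://localhost:8000/"

# Option 1: pass the application ID in a request header.
req = urllib.request.Request(base_url, data=b"payload", method="POST")
req.add_header("X-SDC-APPLICATION-ID", app_id)

# Option 2: pass the application ID as a query parameter, for clients
# whose request headers cannot be customized. The parameter name
# "sdcApplicationId" is assumed here.
query = urllib.parse.urlencode({"sdcApplicationId": app_id})
url_with_param = f"{base_url}?{query}"
print(url_with_param)
```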
Multithreaded Processing
The HTTP Server origin can use multiple threads to perform parallel processing based on the Max Concurrent Requests property.
When you start a multithreaded pipeline, the origin creates the number of threads specified in the Max Concurrent Requests property. Each thread generates a batch from an incoming request and passes the batch to an available pipeline runner.
A pipeline runner is a sourceless pipeline instance - an instance of the pipeline that includes all of the processors, executors, and destinations in the pipeline and handles all pipeline processing after the origin. Each pipeline runner processes one batch at a time, just like a pipeline that runs on a single thread. When the flow of data slows, the pipeline runners wait idly until they are needed, generating an empty batch at regular intervals. You can configure the Runner Idle Time pipeline property to specify the interval or to opt out of empty batch generation.
Multithreaded pipelines preserve the order of records within each batch, just like a single-threaded pipeline. But since batches are processed by different pipeline runners, the order that batches are written to destinations is not ensured.
For example, say you set the Max Concurrent Requests property to 5. When you start the pipeline, the origin creates five threads, and Data Collector creates a matching number of pipeline runners. Upon receiving data, the origin passes a batch to each of the pipeline runners for processing. In the batch, HTTP Server includes only the HTTP POST and PUT requests with the specified Application ID.
Each pipeline runner performs the processing associated with the rest of the pipeline. After a batch is written to pipeline destinations, the pipeline runner becomes available for another batch of data. Each batch is processed and written as quickly as possible, independent from other batches processed by other pipeline runners, so batches may be written in a different order than they were read.
At any given moment, the five pipeline runners can each process a batch, so this multithreaded pipeline processes up to five batches at a time. When incoming data slows, the pipeline runners sit idle, available for use as soon as the data flow increases.
For more information about multithreaded pipelines, see Multithreaded Pipeline Overview.
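The behavior described above can be modeled with a toy Python simulation, where a worker pool stands in for the pipeline runners. This is an illustration of the concurrency model only, not Data Collector code; the batch count and processing function are made up for the example.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

MAX_CONCURRENT_REQUESTS = 5  # mirrors the Max Concurrent Requests property

def process_batch(batch_id):
    # Stand-in for a pipeline runner: each runner handles one batch
    # at a time, like a single-threaded pipeline.
    return f"batch-{batch_id} written"

# Up to five batches are in flight at once. Completion order is not
# guaranteed, which mirrors how batch write order is not ensured.
with ThreadPoolExecutor(max_workers=MAX_CONCURRENT_REQUESTS) as pool:
    futures = [pool.submit(process_batch, i) for i in range(10)]
    results = [f.result() for f in as_completed(futures)]

print(sorted(results))
```

Records within each simulated batch would keep their order; only the order in which batches complete varies between runs.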
Data Formats
The HTTP Server origin processes data differently based on the data format that you select.
The HTTP Server origin processes data formats as follows:
- Avro
- Generates a record for every Avro record. The origin includes the Avro schema in the avroSchema record header attribute. It also includes a precision and scale field attribute for each Decimal field.
- Binary
- Generates a record with a single byte array field at the root of the record.
- Datagram
- Generates a record for every message. The origin can process collectd messages, NetFlow 5 and NetFlow 9 messages, and syslog messages.
- Delimited
- Generates a record for each delimited line.
- JSON
- Generates a record for each JSON object. You can process JSON files that include multiple JSON objects or a single JSON array.
- Protobuf
- Generates a record for every protobuf message. By default, the origin assumes messages contain multiple protobuf messages.
- SDC Record
- Generates a record for every record. Use to process records generated by a Data Collector pipeline using the SDC Record data format.
- XML
- Generates records based on a user-defined delimiter element. Use an XML element directly under the root element or define a simplified XPath expression. If you do not define a delimiter element, the origin treats the XML file as a single record.
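As a client-side sketch tying the compression support to the JSON data format, the snippet below gzips a body of newline-separated JSON objects before posting it. Whether a Content-Encoding header is required, or compression is instead selected in the origin's data format configuration, depends on your setup; treat the header here as an assumption to verify.

```python
import gzip
import json
import urllib.request

# Example records; with the JSON data format, each object becomes a record.
records = [{"id": 1}, {"id": 2}]
body = "\n".join(json.dumps(r) for r in records).encode("utf-8")

# Gzip is one of the compression formats the origin can read.
compressed = gzip.compress(body)

req = urllib.request.Request("http://localhost:8000/",
                             data=compressed, method="POST")
req.add_header("Content-Type", "application/json")
req.add_header("Content-Encoding", "gzip")  # assumption: verify against your config
```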
Record Header Attributes
The HTTP Server origin creates record header attributes that include information about the requested URL.
You can use the record:attribute or record:attributeOrDefault functions to access the information in the attributes. For more information about working with record header attributes, see Working with Header Attributes.
- method - The HTTP method for the request, such as GET, POST, or DELETE.
- path - The path of the URL.
- queryString - The parameters of the URL that come after the path. Can be empty if there are no query parameters on the URL.
- remoteHost - The name of the client or proxy that made the request.
The HTTP Server origin also includes HTTP request header fields, such as Host or Content-Type, in records as record header attributes. The attribute names match the original HTTP request header field names.
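As a toy illustration of the URL-derived attributes, the snippet below splits an example request URL into the pieces the origin exposes. The URL, method, and remote host values are made up; only the path and queryString derivation is shown mechanically.

```python
from urllib.parse import urlsplit

# Hypothetical request URL sent by a client.
request_url = "http://sdc.example.com:8000/events/ingest?source=app1"
parts = urlsplit(request_url)

# The header attributes the origin would derive from such a request:
attributes = {
    "method": "POST",            # HTTP method of the request (example value)
    "path": parts.path,          # "/events/ingest"
    "queryString": parts.query,  # "source=app1"; empty when no parameters
    "remoteHost": "client01",    # client or proxy host name (example value)
}
print(attributes["path"], attributes["queryString"])
```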
Configuring an HTTP Server Origin
Configure an HTTP Server origin to generate multiple threads for parallel processing of HTTP POST and PUT requests.