You can ingest data from AVEVA PI System into Maximo® Monitor.
Before you begin
This document describes the SCADA connector implementation for AVEVA PI historians. The connector
is a containerized application that extracts data from the AVEVA PI Data Archive and sends it to
Maximo Monitor. It is configured through the user interface and is deployed by running a
container-run command on the client system. The customer system requires the following for the
SCADA connector container to run:
- A container engine, such as Podman or Docker
- Access to Maximo Monitor
- Access to the AVEVA PI Data Archive through the PI SQL Data Access Server (OLEDB)
To access the AVEVA PI Data Archive, the connector relies on the following AVEVA PI components,
which must be in place for it to work and can be downloaded from the AVEVA customer portal:
- PI SQL Data Access Server (OLEDB), installed either on the same host as the PI Data Archive or
on a different host that can connect to it
- PI OLEDB Provider, installed on the same host as the PI SQL DAS (OLEDB)
- PI JDBC driver, installed on the host that runs the AVEVA connector
Note: These components belong to the AVEVA PI SQL family of products that preceded the PI SQL DAS
(RTQP) and PI SQL Client components, which are available in newer installations of the PI System.
They also require a PI System Access license at run time.
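As a quick check before you run the connector container, a minimal sketch such as the following can confirm that a container engine is available and that the PI JDBC driver is present on the host. This is only an illustration; the driver path and file name are placeholders for the actual location on your system.

import os
import shutil

# Placeholder path and file name; substitute the actual location of the PI JDBC driver.
JDBC_DRIVER = "/opt/pi-jdbc/PIJDBCDriver.jar"

engine = shutil.which("podman") or shutil.which("docker")
print("Container engine:", engine or "NOT FOUND")
print("PI JDBC driver present:", os.path.isfile(JDBC_DRIVER))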
About this task
Complete the following steps to configure the connector to ingest data from the AVEVA PI
System:
Procedure
- To set up the SCADA connector, navigate to Maximo Monitor. Click
Setup and then select the Integrations
menu.
- On the Setup - Integrations page, click the Add Integration
button.
- Select the AVEVA PI Historian
tile.
- To configure the connection details, enter the configuration information in the following UI
fields. A sketch of how these fields feed the JDBC connection follows the list.
- Integration Name: Use only alphanumeric characters, underscores, and hyphens (a-z, A-Z, 0-9, _, -).
- Start Time to fetch data: The timestamp from which to start fetching data.
- Hostname: Name of the PI SQL DAS (OLEDB) server. Used to construct the JDBC URL.
- Port: Port set by the PI SQL DAS (OLEDB) to allow HTTP communication. Default is 5461. Used to
configure the JDBC driver.
- Database name: Name of the PI System server to connect to for extracting data from the PI Data
Archive. Used to construct the JDBC URL.
- Path to JDBC driver: Absolute path to the PI JDBC driver on the host machine that runs the
connector.
- Username & Password: Credentials for the PI SQL DAS (OLEDB) login.
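For illustration only, the following sketch shows how the connection fields above might be combined into a JDBC connection. The URL scheme, the driver class name, and all values are assumptions for this example; the connector builds the actual connection internally, so refer to the PI JDBC driver documentation for the exact format.

# Illustrative sketch only: the connector builds and manages the JDBC connection itself.
# The driver class, URL scheme, and values below are assumptions, not the connector's code.
import jaydebeapi  # third-party Python-to-JDBC bridge, used here only for illustration

hostname = "pi-das.example.com"               # Hostname field (PI SQL DAS OLEDB server)
port = 5461                                   # Port field (default 5461)
database = "MyPIServer"                       # Database name field (PI System server)
driver_jar = "/opt/pi-jdbc/PIJDBCDriver.jar"  # Path to JDBC driver field (placeholder)

# Hypothetical URL shape; check your PI JDBC driver version for the exact scheme.
url = f"jdbc:pioledb://{hostname}:{port}/Data Source={database}"

conn = jaydebeapi.connect(
    "com.osisoft.jdbc.Driver",                 # assumed driver class name
    url,
    {"user": "piuser", "password": "secret"},  # Username & Password fields
    jars=driver_jar,
)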
- In the Data Mapping step, specify device types and their metric data types, and associate one or
more tag name filter expressions with each device type to map the tags in the PI System to it.
For each device type, only the PI System tags that match one of its filter expressions and have
the same data type (Number or Literal) are mapped to it; each matching tag becomes a device under
that device type.
- To add a device type entry to the integration, click the (+) icon.
- For each entry, you can select an existing AVEVA device type by choosing one of the options from
the drop-down list or by typing its name in the field. Alternatively, you can create a new device
type by typing a new name in the device type field.
- For a newly created device type, the tag type field is a drop-down list from which you select the
type of tag to link to that device type: Number allows any numeric tag, while Literal allows
string tags.
- For an existing device type, the tag type field shows the type supported by that device type.
- Enter the tag filter expressions in the tag name filter text field. These expressions are used to
filter the tags that are mapped into that device type. One device type can have more than one tag
name filter expression; multiple expressions are combined with OR logic, as illustrated in the
sketch after this procedure.
Valid wildcards are:
- % : ANSI SQL wildcard, matches zero or more occurrences of any character
- _ : ANSI SQL wildcard, matches a single occurrence of any character
- * : PI System wildcard, similar to the % wildcard
- ? : PI System wildcard, similar to the _ wildcard
Note:
The filters TagName_Example and TagName?Example will match tags such as
TagName1Example and TagNameFExample, but they will NOT match
TagNameExample, TagName2TestExample, or TagNameRandomExample.
The filters TagName*Example and TagName%Example will match
any tag that begins with TagName and ends with Example,
including all the examples given above.
Valid characters are:
- Alphanumeric: A-Z, a-z, 0-9
- Comma (,)
- Period (.)
- Blank space
Note: A maximum of 5 device types can be created for an integration at once. If you need more
than 5 device types, create the integration with 5 device types first, and then edit it to add the
remaining ones in batches of 5.
- Navigate to the Summary tab, which presents the container-run command, an overview of the
configured integration, and the most relevant information for review. When all the previous steps
are completed successfully, the container-run command appears after you click Submit. The Validate
toggle controls whether the connector runs in Validation mode. Copy the command and run it on a
system that fulfills the prerequisites.
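As a conceptual illustration of the tag name filter semantics from the data mapping step, the following sketch shows how filter expressions could be evaluated against tag names, with the PI System wildcards (* and ?) treated like the ANSI SQL wildcards (% and _) and multiple filters for one device type combined with OR logic. The tag names and filters are made-up examples; the connector performs this matching internally.

import re

def filter_to_regex(expr: str) -> str:
    """Translate a tag name filter into a regular expression:
    % and * match zero or more characters; _ and ? match exactly one."""
    parts = []
    for ch in expr:
        if ch in "%*":
            parts.append(".*")
        elif ch in "_?":
            parts.append(".")
        else:
            parts.append(re.escape(ch))
    return "^" + "".join(parts) + "$"

def tag_matches(tag: str, filters: list) -> bool:
    # Multiple filter expressions for one device type are combined with OR logic.
    return any(re.match(filter_to_regex(f), tag) for f in filters)

# Made-up examples that mirror the note above.
filters = ["TagName_Example", "TagName?Example"]
print(tag_matches("TagName1Example", filters))      # True
print(tag_matches("TagNameExample", filters))       # False: no character between TagName and Example
print(tag_matches("TagName2TestExample", filters))  # False: more than one character in between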
There are two main
modes of operation for the SCADA connector:
- Validation mode
- When the Validate toggle is enabled in the container command pop-up on the Summary tab, the
system checks connectivity to the AVEVA PI Data Archive and the Monitor MQTT broker, then discovers
AVEVA PI tags that match the provided filters. It reports the mapping configuration, including each
device type and its associated tag filter expressions, and lists the tags that will be mapped under
each device type.
- The system also identifies any tags that match a filter but whose data type does not match their
configured device type (for example, a String tag where a Number is expected), and flags any tags
that are duplicated across multiple device types.
- Normal mode
- When the Validate flag is not set, the system checks connections to the AVEVA PI Data Archive
and the Monitor MQTT broker, discovers AVEVA PI tags that match the provided filters and the
expected data types for their device types, and ensures that corresponding devices exist in Maximo
Monitor, creating them if necessary. It then begins a continuous query-publish cycle, retrieving
data from AVEVA PI and sending it to Maximo Monitor via MQTT.
- If the query timestamp is more than 30 minutes old, data is fetched in 30-minute windows until
the current time is reached. If it is more than 1 minute old, data is fetched up to the current
time before switching to 1-minute intervals. If it is less than 1 minute old, the system waits
until it is 1 minute behind the current time before starting the 1-minute cycle. The connector
updates its status every 2 minutes so that it can resume from the last known state if it is
interrupted. A sketch of this windowing logic is shown below.
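The following sketch summarizes the windowing rules described above. It is a simplified model, not the connector's actual code; fetch(start, end) and publish(data) stand in for the AVEVA PI query and the MQTT publish to Maximo Monitor.

import time
from datetime import datetime, timedelta, timezone

THIRTY_MIN = timedelta(minutes=30)
ONE_MIN = timedelta(minutes=1)

def run_cycle(query_ts, fetch, publish):
    """Simplified query-publish loop that mirrors the documented windowing rules."""
    while True:
        now = datetime.now(timezone.utc)
        age = now - query_ts
        if age > THIRTY_MIN:
            # More than 30 minutes behind: catch up in 30-minute windows.
            window_end = query_ts + THIRTY_MIN
        elif age > ONE_MIN:
            # Between 1 and 30 minutes behind: fetch up to the current time,
            # then switch to 1-minute intervals.
            window_end = now
        else:
            # Less than 1 minute behind: wait until the data is 1 minute old.
            time.sleep((ONE_MIN - age).total_seconds())
            window_end = query_ts + ONE_MIN
        publish(fetch(query_ts, window_end))
        query_ts = window_end
        # The real connector also saves its status every 2 minutes so that it
        # can resume from the last known state if it is interrupted.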
Alternatively, use the dedicated POST API to create an integration:
/api/v2/core/connectors/scada. The payload format is as follows; an example call is shown after
the field descriptions.
{
"serviceName": "dev-env-pi",
"description": "aveva-connector-description",
"serviceType": "AVEVA_PI",
"databaseConfig": {
"type": "PISQL",
"hostname": "hostname",
"port": 1234,
"db": "db",
"username": "username",
"password": "pwd",
"path": "/path/jdbc/driver"
},
"startTimestamp": "2024-05-31T14:23:52.674+00:00",
"extractInterval": 60,
"mappings": [
{
"deviceTypeName": "aveva-device-01",
"tagDataType": "NUMBER",
"filters": [
"a",
"b",
"c"
],
"action": "CREATE"
}
]
}
- serviceName: Name of the integration
- description (optional): Any additional information about the integration
- serviceType: Set it to AVEVA_PI
- databaseConfig - type: Set it to PISQL
- databaseConfig - hostname: Name or IP address of the PI SQL DAS (OLEDB) server that the connector
connects to. Used to construct the JDBC URL
- databaseConfig - port: Port set by the PI SQL DAS (OLEDB) server. Default is 5461. Used to
configure the JDBC driver
- databaseConfig - db: Name or IP address of the PI System server to connect to for extracting data
from the PI Data Archive. Used to construct the JDBC URL
- databaseConfig - username: Username for the PI SQL DAS (OLEDB) login
- databaseConfig - password: Password for the given username
- databaseConfig - path: Absolute path to the PI JDBC driver on the host machine that runs the
connector
- startTimestamp: Timestamp from which to start getting data (UTC). The API accepts either a
string-formatted timestamp ("2025-04-25T10:25:32.457+00:00") or an integer value representing
epoch milliseconds
- extractInterval: Interval, in seconds, between consecutive data queries when working with present
data
- mappings - deviceTypeName: Name of the device type to create or use for a specific mapping
- mappings - tagDataType: Data type of the tags to capture under the specific device type (can be
either Number or Literal)
- mappings - filters: List of tag name filters that determine which tags are mapped into the
specific device type. If more than one filter is present, they are combined with an OR expression
- mappings - action: The action to apply to the provided device type mapping. Set it to CREATE.
Even if the device type already exists, the accepted action is CREATE because the action refers to
the mapping object, which does not yet exist and must be created. Therefore, for any new
integration, all mapping objects should have CREATE in this field
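As an illustration, the payload described above can be posted to the endpoint with any HTTP client. The following sketch uses Python with the requests library; the base URL and authentication headers are placeholders, so substitute the values required by your Maximo Monitor instance.

import requests

# Placeholder base URL and headers; use your Maximo Monitor host and API credentials.
BASE_URL = "https://<your-monitor-host>"
HEADERS = {
    "Content-Type": "application/json",
    # Add the authentication headers required by your Maximo Monitor tenant.
}

payload = {
    "serviceName": "dev-env-pi",
    "description": "aveva-connector-description",
    "serviceType": "AVEVA_PI",
    "databaseConfig": {
        "type": "PISQL",
        "hostname": "hostname",
        "port": 1234,
        "db": "db",
        "username": "username",
        "password": "pwd",
        "path": "/path/jdbc/driver",
    },
    "startTimestamp": "2024-05-31T14:23:52.674+00:00",
    "extractInterval": 60,
    "mappings": [
        {
            "deviceTypeName": "aveva-device-01",
            "tagDataType": "NUMBER",
            "filters": ["a", "b", "c"],
            "action": "CREATE",
        }
    ],
}

response = requests.post(f"{BASE_URL}/api/v2/core/connectors/scada",
                         json=payload, headers=HEADERS, timeout=30)
response.raise_for_status()
print(response.status_code, response.text)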