Defining File Observer jobs

Using the File Observer functionality, you can write bespoke data to a file in a specific format, upload this data to the topology service, and then visualize this data as a topology view in the Agile Service Manager UI. The File Observer is installed as part of the core installation procedure.

Before you begin

Remember: Swagger documentation for the observer is available at the following default location: https://<your host>/1.0/file-observer/swagger

About this task

File Observer jobs are HTTP POST requests that can be triggered via cURL or Swagger, or via the example scripts provided in the $ASM_HOME/bin directory.
file_observer_common.sh
The configuration file used to customize File Observer job parameters, such as the job unique_id or the service host.
The parameters defined here are used by the file_observer_load_start.sh script to trigger the File Observer job.
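The contents of file_observer_common.sh vary by release; the following is a hypothetical excerpt showing the kind of shell variables it defines (the names unique_id and file are illustrative assumptions, not guaranteed names):

```shell
# Hypothetical excerpt of $ASM_HOME/bin/file_observer_common.sh
# (variable names are illustrative; check the file shipped with your installation)
unique_id='my file load job'     # unique name identifying this observer job
file='dncim.file'                # data file, relative to $ASM_HOME/data/file-observer
echo "unique_id=${unique_id} file=${file}"
# prints: unique_id=my file load job file=dncim.file
```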

The File Observer runs a 'loadFile' job that loads all requested topology data for each tenant. The loadFile job takes the name of the file to parse and load.

Tip:
  • An example file is available in the $ASM_HOME/data/file-observer directory.
  • See the related links for more information on available timestamp formats.
Important: Ensure that the file is structured correctly. For each line of the file, any information after the closing } that matches an opening { is ignored, and no error is recorded.
Restriction:

Files to be read by File Observer must be located in the following directory: $ASM_HOME/data/file-observer

A file name specified in a File Observer job must be relative to that directory (and not absolute).

Procedure

  1. Define your data file.

    Topology data in a file is comprised of vertices (nodes) and edges. A vertex represents an object (resource), while an edge represents the relationship between two objects.

    Each line of the file you create should be in one of the formats below, loading a single resource vertex (including optional relationships in the _references field) or a single edge, deleting a single vertex, or pausing execution.

    Lines starting with V: (vertex), E: (edge), G: (group), D: (delete), or W: (wait) are treated as instruction lines to be processed. Other lines, for example empty or commented-out lines, are ignored.

    Line format
    V:
    Load a resource vertex, with a JSON representation as documented for the body of the topology service API method: POST /resources
    If specifying the _status element, acceptable state values are open, closed, or clear, and acceptable severity values are clear, indeterminate, warning, minor, major, or critical.
    Geolocation example: The following example adds 'point' geolocation data as a GeoJSON object inside the vertex JSON, with the coordinates being longitude first, then latitude.
    V:{"name":"Smallbrook Junction", "uniqueId":"gb-nr-SAB", "entityTypes":["railStation"], "geolocation":{"geometry":{"coordinates":[-1.15511,50.71141],"type":"Point"}}}
    Edge example: In the following example the first line creates the AAAA resource, and the second line replaces the edges for that resource.
    V:{"uniqueId":"AAAA","entityTypes":["interface"],"name":"ge-1/0/2"}
    
    V: {"_operation":"InsertReplace","uniqueId":"AAAA","_references":[{"_toUniqueId":"BBBB","_edgeType":"dependsOn"}, {"_edgeType":"contains","_direction":"out"}]}       
    Note that the second line should contain only the uniqueId resource property; if it contains any additional properties, those are replaced too.
    Deprecated:
    E:
    Load an edge, with a JSON representation as documented for the _references section of the body of the topology service API method POST /resources
    W:
    Pause for the given duration (for testing purposes only).
    Takes an integer period followed by a string specifying the units.
    G:
    Load a group, identified by its uniqueId, with a JSON representation as documented for the body of the topology service API method: POST /groups
    See the following example syntax:
    G:{"uniqueId":"EA","entityTypes":["group"],"name":"Event Analytics"}
    Note: When creating a group's edge, it must be based on uniqueId.
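    A minimal sample data file, combining the line formats above, might look like the following sketch. The resource names, IDs, and the dependsOn edge type are invented for illustration:

```shell
# Write a hypothetical sample data file using the V: and W: line formats;
# comment lines and blank lines are ignored by the observer.
cat > sample.file <<'EOF'
# a host and its interface
V:{"uniqueId":"hostA","entityTypes":["host"],"name":"host-a"}

V:{"uniqueId":"nicA","entityTypes":["interface"],"name":"eth0","_references":[{"_toUniqueId":"hostA","_edgeType":"dependsOn"}]}
W:5 seconds
EOF
# Count the instruction lines (the prefixes documented above)
grep -Ec '^(V|E|G|D|W):' sample.file
# prints: 3
```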
  2. Copy the file to the following location: /opt/ibm/netcool/asm/data/file-observer/
    For example:
    cp dncim.file $ASM_HOME/data/file-observer/
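    After copying the file, you can sanity-check its JSON payloads before starting the job. This check is not part of the product; it is a hypothetical pre-flight step (the file name check.file and its contents are invented for the demonstration), and it helps catch the trailing-text pitfall described earlier, which the observer silently ignores:

```shell
# Hypothetical pre-flight check: flag instruction lines whose JSON payload
# does not parse. A demonstration file is created first.
printf '%s\n' \
  'V:{"uniqueId":"AAAA","entityTypes":["host"],"name":"a"}' \
  'V:{"uniqueId":"BBBB" broken' > check.file
bad=0
while IFS= read -r line; do
  case "$line" in
    V:*|E:*|G:*|D:*)
      # strip the two-character prefix, then try to parse the rest as JSON
      printf '%s\n' "${line#??}" | python3 -m json.tool >/dev/null 2>&1 \
        || { echo "Invalid JSON: $line"; bad=$((bad+1)); } ;;
  esac
done < check.file
echo "invalid lines: $bad"
# prints: invalid lines: 1
```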
  3. Edit the file_observer_common.sh config file.
    Define the following File Observer job parameters:
    Table 1. File Observer job parameters
    Parameter Action Details
    Unique ID Enter a unique name for the job Required
    Provider The name of the data provider. Resources can be loaded from multiple files with the same provider, and linked as if they were all loaded from the same file. To link to resources from another provider (for example, from another observer), the cross-provider semantics are required. Optional. If omitted, the name of the file, prefixed with FILE.OBSERVER:, is used.
    Observer job description Enter additional information to describe the job. Optional
    Job schedule

    Specify when the job should run, and whether it should run at regular intervals.

    By default the job runs immediately, and only once.

    Optionally you can specify a future date and time for the job to run, and then set it to run at regular intervals after that.

    Optional. Transient (one-off) jobs only.

    If you set a job schedule, the run intervals must be at least 90 seconds apart, and if you set them at less than 15 minutes, a warning is displayed, as the frequency can impact system performance.

  4. To start the File Observer Load job, use one of the following commands:
    To run the File Observer start script
    $ASM_HOME/bin/file_observer_load_start.sh
    To define the data file via a command line argument
    ./file_observer_load_start.sh --file dncim.file
    To define the data file via the environment
    env file=dncim.file $ASM_HOME/bin/file_observer_load_start.sh
    The load job loads all requested topology data from the file specified. This job runs only once, unless a schedule has been defined.

Example

The following cURL command example invokes the File Observer job:
curl -u PROXY_USER[:PROXY_PASSWORD] -X POST \
     --header 'Content-Type: application/json' \
     --header 'Accept: application/json' \
     --header 'X-TenantID: cfd95b7e-3bc7-4006-a4a8-a73a79c71255' \
     -d '{ "unique_id": "dncim.file", "type": "load", "parameters": { "file": "dncim.file" } }' \
     https://localhost/1.0/file-observer/jobs

What to do next

Optional file size configuration:
The maximum size of a file posted to the File Observer is set by the file-observer and observer-service parameters.
file-observer
Determines the file size limit when the file is uploaded via Swagger or the oc cp command.
The default is 35 MB.
observer-service
Determines the file size limit when the file is uploaded in the Observer Configuration UI via drag-and-drop.
The default for on-prem deployments is 8 MB. A limit is useful in guarding against, for example, some types of denial-of-service (DoS) attacks. You can increase the default to a suggested maximum of 35 MB.
To do so, access the /opt/ibm/netcool/asm/etc/nginx/conf.d/nasm-file-observer.rules file and change the following property to 35m:
client_max_body_size 8m
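The edit can be scripted. The following sketch operates on a local copy of the rules file so it can be run safely anywhere; on a real system you would target /opt/ibm/netcool/asm/etc/nginx/conf.d/nasm-file-observer.rules instead:

```shell
# Create a local stand-in for nasm-file-observer.rules for demonstration
echo 'client_max_body_size 8m;' > nasm-file-observer.rules
# Raise the upload limit from 8m to 35m (use sed -i.bak to keep a backup)
sed -i 's/client_max_body_size 8m/client_max_body_size 35m/' nasm-file-observer.rules
cat nasm-file-observer.rules
# prints: client_max_body_size 35m;
```

After changing the real file, reload nginx for the new limit to take effect.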
You can also use the following scripts:
file_observer_load_stop.sh
Stops the job
file_observer_job_list.sh
Lists the current job status
file_observer_log_level.sh
Sets the log level
Remember: As an alternative to the Observer Configuration UI, each observer provides scripts to start and stop all available jobs, list the status of a current job, and set its logging levels. Run these scripts with -h or --help to display help information, or with -v or --verbose to print the details of the actions performed, including the full cURL command. For the on-prem version of Agile Service Manager, observer scripts are configured for specific jobs by editing the script configuration files.