Configuring Network Discovery Observer jobs

The Network Discovery Observer job retrieves network topology data, including discovered devices and relationships, from the Network Discovery database via REST API, and uses this data to create topologies within the Agile Service Manager topology service.

Before you begin

The Network Discovery services are installed as part of the core installation procedure. This includes the Network Discovery Observer, Collector, and Engine, all of which should be installed and running, as well as the scripts required to manage jobs.

Before you configure a network discovery job, you must configure the discovery details. This procedure is described here: Configuring the network discovery services

When successful, the configuration details are saved in topology management artifacts.

Known issue and workaround: The Kafka service may not yet be fully initialized when the nasm-net-disco-schema-registry is started during an Agile Service Manager start up or restart, and it may subsequently fail a consul health check.
Workaround
  1. Verify via consul that the nasm-net-disco-schema-registry has failed its healthcheck:
    https://localhost:8501/ui/nasm_net_disco/services
  2. Restart the nasm-net-disco-schema-registry:
    docker restart nasm-net-disco-schema-registry
  3. Wait until all Agile Service Manager services have registered with consul, and only then run the network discovery.
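The workaround steps above can be scripted. The following sketch assumes that consul's HTTP API is reachable on the same host and port as the UI link shown in step 1; the URL (and any ACL token your deployment requires) may differ in your environment:

```shell
SERVICE=nasm-net-disco-schema-registry
CONSUL=https://localhost:8501

# Step 1: query consul's health checks for the service and look for
# "Status": "critical". (/v1/health/checks is the HTTP API counterpart
# of the UI page referenced above.)
curl -ks "$CONSUL/v1/health/checks/$SERVICE" || true

# Step 2: if the check has failed, restart the container.
docker restart "$SERVICE" 2>/dev/null || true

# Step 3: repeat the health query until all Agile Service Manager services
# have registered with consul, and only then run the network discovery.
```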
Important:
OCP requirement: On the OCP hosts, network discovery requires that pids_limit be set to at least 44406 in the crio.conf file.
For information about changing the values in the crio.conf file using Machine Configs, see Creating a ContainerRuntimeConfig CR to edit CRI-O parameters
For reference information about Machine Configs, see Red Hat Enterprise Linux CoreOS (RHCOS)
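For reference, a ContainerRuntimeConfig CR that raises the pids limit might look like the following sketch. ContainerRuntimeConfig and its pidsLimit field are standard OCP constructs, but the CR name is hypothetical and you should verify the exact fields against the Machine Config documentation for your OCP version before applying:

```shell
# Write a hypothetical ContainerRuntimeConfig CR that sets CRI-O's pids limit
# on worker nodes to the minimum required by network discovery.
cat <<'EOF' > netdisco-pids-limit.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: ContainerRuntimeConfig
metadata:
  name: netdisco-pids-limit
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ""
  containerRuntimeConfig:
    pidsLimit: 44406
EOF

# Apply it with: oc apply -f netdisco-pids-limit.yaml
```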

About this task

The Network Discovery Observer receives network data from the network services. Once the discovery configuration has been successfully sent to the topology management artifact and the Network Discovery Engine, you can run a full load job to discover Network Discovery data.

You define and start the following job. You must edit the parameters in the configuration file before running this job.
Load job
By default, these jobs are one-off, transient jobs that perform a full upload of all requested topology data as soon as they are triggered.
You can also run these jobs (again) manually from the Observer UI, or schedule them to run at set times when configuring them.

Procedure

  1. On the Observer jobs page, perform one of the following actions:
    To edit an existing job
    Open the List of options overflow menu next to the job and click View & edit.
    To create a new job
    Click Add a new job + and select the NetDisco Observer tile.
  2. Enter or edit the following parameters, then click Save to save your job and begin retrieving information:
    Table 1. Network Discovery Observer job parameters
    Parameter Action Details
    Unique ID Enter a unique name for the job Required
    Config ID Specify the Network Discovery configuration ID Required
    Specify record process threads Specify the number of threads to process Kafka records. The default (and minimum) is 30, the maximum is 50. Optional.
    Tip: If you increase threads, you must also increase the observer pod's memory and CPU.
    Data tracking using legacy provider Specify whether to track data using the legacy provider. Optional
      'False' (default) tracks the data using only the configId parameter.
      'True' uses both the configId and the EdgeModelPreference parameters.
    Topology edge modeling Specify the edge modeling for the topology. The options are ASM and ITNM. Optional
    Access scope

    Enter text to provide a scope for the resources.

    Access scope can help map alerts to resources when resources in different scopes share the same parameters, such as matchTokens.

    Optional.
    Tip: You can define access scope for locations, project names, namespaces, etc.
    Generate debug support file
    Set the optional Generate debug support file parameter to 'True' in order to capture the output of the next scheduled job run as a file. This file will be stored with an observer's log files and can be used to debug observer issues, for example at the request of your designated Support team, or while using a test environment. For one-off jobs (that is, Load jobs), this parameter reverts to 'False' after the next completed run. To examine the output produced, you can load the generated debug file using the File Observer. The file is saved to the following locations:
    On-prem
    $ASM_HOME/logs/<obs>-observer/
    On OCP
    /var/log/itsm/<obs>-observer
    Optional
    Observer job description Enter additional information to describe the job. Optional
    Job schedule

    Specify when the job should run, and whether it should run at regular intervals.

    By default the job runs immediately, and only once.

    Optionally you can specify a future date and time for the job to run, and then set it to run at regular intervals after that.

    Optional. Transient (one-off) jobs only.

    If you set a job schedule, the run intervals must be at least 90 seconds apart. If you set an interval of less than 15 minutes, a warning is displayed, because frequent job runs can impact system performance.
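If you prefer to define the job programmatically rather than in the UI, the parameters in Table 1 map to a JSON job request. The endpoint path and field names below are illustrative assumptions only; take the authoritative request schema from the Network Discovery Observer's Swagger documentation:

```shell
# Hypothetical job payload mirroring Table 1 (field names are assumptions,
# not confirmed by this procedure).
cat <<'EOF' > netdisco-load-job.json
{
  "unique_id": "my-netdisco-full-load",
  "parameters": {
    "config_id": "my-discovery-config",
    "topology_edge_modeling": "ASM"
  }
}
EOF

# Submit with something like (host, port, and credentials are placeholders):
#   curl -k -u "$USER:$PASSWORD" -X POST -H 'Content-Type: application/json' \
#        -d @netdisco-load-job.json 'https://localhost/1.0/netdisco-observer/jobs/load'
```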

Results

The Network Discovery Observer job discovers and then uploads the network services data to the Agile Service Manager topology service.

What to do next

Tip: Stopping a job using DELETE /jobs/{id} stops the discovery first and then the job. It can take a while for the discovery engine and the observer job to stop. If you want to resubmit another job, first ensure that both the discovery and the job have stopped by checking GET /jobs/load/status and GET /jobs/{id}.
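The stop-and-verify sequence in the tip above can be sketched as follows. The observer's base URL, the job ID, and the lack of authentication flags are all placeholders for your environment:

```shell
JOB_ID=my-netdisco-full-load
BASE=https://localhost/1.0/netdisco-observer   # placeholder base URL

# Stop the discovery first, and then the job:
curl -ksX DELETE "$BASE/jobs/$JOB_ID" || true

# Before resubmitting, confirm that both the discovery and the job have stopped:
curl -ks "$BASE/jobs/load/status" || true
curl -ks "$BASE/jobs/$JOB_ID" || true
```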