Load information manager (LIM) daemon or service, monitoring host load.


lim [-C] [-t] [-T] [-vm] [-d conf_dir] [-debug_level]
lim -h
lim -V


There is one lim daemon or service on every host in the cluster. Of these, one lim from the management host list is elected management host LIM for the cluster. The management host LIM receives load information from the other lim daemons and provides services to all hosts.

The lim does the following for the host on which it runs:
  • Starts pem on that host
  • Provides system configuration information to vemkd
  • Monitors load and provides load information statistics to vemkd and users

The management host LIM starts vemkd and pem on the management host.

The non-management host LIM daemons monitor the status of the management host LIM and elect a new management host (from the management host list) if the current management host LIM becomes unavailable.

Collectively, the LIMs in the cluster coordinate the collection and transmission of load information. Load information is collected in the form of load indices.

Never start the daemon manually without options: specify the -V option to check the version, the -d option to start the daemon in debug mode, or the -C option to validate its configuration files.


-d conf_dir
Starts the daemon, reading from the LSF configuration file ego.conf in the specified directory, rather than from the directory set via the EGO_CONFDIR environment variable.

Use this option when starting the daemon in debug mode.

Never start the daemon manually unless directed to do so by Product Support.

-debug_level
Starts the lim in debug mode. When running in debug mode, the lim uses a hard-coded port number rather than the one registered in system services.
Specify one of the following values:
  • 1: Starts the lim in the background, with no associated control terminal.
  • 2: Starts the lim in the foreground, displaying log messages on the terminal.

Never start the daemon manually unless directed to do so by Product Support.
-t
Displays host information, such as host type, host architecture, number of physical processors, number of cores per physical processor, number of threads per core, and license requirements.
Note: When running Linux kernel version 2.4, you must run lim -t as root to ensure output consistent with other clustered application management commands (for example, output from the LSF command lshosts).
-T
Displays host topology information for each host or cluster. Topology is displayed by processor unit level: NUMA node (if present), socket, core, and thread.

A socket is a collection of cores with a direct pipe to memory. Each socket contains 1 or more cores. This does not necessarily refer to a physical socket, but rather to the memory architecture of the machine.

A core is a single entity capable of performing computations.

A node contains sockets, a socket contains cores, and a core can contain threads if the core is enabled for multithreading.
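The hierarchy above can be illustrated with a minimal Python sketch. This is not part of LSF; the class and function names are hypothetical, and it simply models the containment rules described (node contains sockets, socket contains cores, core contains threads):

```python
# Illustrative sketch only: model the processor-unit hierarchy that
# lim -T reports (NUMA node > socket > core > thread).
from dataclasses import dataclass, field
from typing import List

@dataclass
class Core:
    threads: int = 1          # >1 only if the core is enabled for multithreading

@dataclass
class Socket:
    cores: List[Core] = field(default_factory=list)

@dataclass
class NumaNode:
    sockets: List[Socket] = field(default_factory=list)

def logical_cpus(node: NumaNode) -> int:
    """Total processor units: every thread of every core of every socket."""
    return sum(core.threads for sock in node.sockets for core in sock.cores)

# Example: 2 sockets x 4 cores x 2 threads per core = 16 processor units
node = NumaNode([Socket([Core(threads=2) for _ in range(4)]) for _ in range(2)])
print(logical_cpus(node))  # 16
```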

The following fields are displayed:

Host[memory] host_name

Maximum memory available on the host followed by the host name. If memory availability cannot be determined, a dash (-) is displayed for the host.

For hosts that do not support affinity scheduling, a dash (-) is displayed for host memory and no host topology is displayed.

NUMA[numa_node: max_mem]
Maximum NUMA node memory. It is possible for requested memory for the NUMA node to be greater than the maximum available memory displayed.

If no NUMA nodes are present, then the NUMA layer in the output is not shown. Other relevant items such as host, socket, core and thread are still shown.

If the host is not available, only the host name is displayed. A dash (-) is shown where available host memory would normally be displayed.

In the following example, full topology (NUMA, socket, and core) information is shown for hostA:
lim -T
Host[24G] hostA
    NUMA[0: 24G]
Host hostB has a different architecture:
lim -T
Host[63G] hostB
        NUMA[0: 16G]
        NUMA[1: 16G]
        NUMA[2: 16G]
        NUMA[3: 16G]
When LSF cannot detect processor unit topology, it displays processor units to the closest level. For example:
lim -T
Host[1009M] hostA
    Socket (0 1)

On hostA there are two processor units: 0 and 1. LSF cannot detect core information, so the processor unit is attached to the socket level.
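As an illustration of the output layout, here is a small Python sketch (not part of LSF; the function name is hypothetical) that extracts host names, memory, and NUMA node memory from the `Host[mem] name` and `NUMA[n: mem]` lines shown in the examples above. The line format is taken from those examples; real output may vary:

```python
import re

def parse_lim_T(text):
    """Parse Host[...] and NUMA[...] lines from lim -T style output."""
    hosts = []
    for raw in text.splitlines():
        line = raw.strip()
        m = re.match(r'Host\[(\S+)\]\s+(\S+)', line)
        if m:
            # New host entry: memory (or "-") followed by the host name.
            hosts.append({'host': m.group(2), 'mem': m.group(1), 'numa': {}})
            continue
        m = re.match(r'NUMA\[(\d+):\s*(\S+)\]', line)
        if m and hosts:
            # NUMA node number and its maximum memory, attached to the
            # most recently seen host.
            hosts[-1]['numa'][int(m.group(1))] = m.group(2)
    return hosts

sample = """Host[63G] hostB
        NUMA[0: 16G]
        NUMA[1: 16G]
"""
print(parse_lim_T(sample))
```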

-h
Outputs command usage and exits.

-V
Outputs product version and exits.


The lim reads the configuration file ego.conf to retrieve configuration information. ego.conf is a generic configuration file shared by all daemons/services and clients. It contains configuration information and other information that dictates the behavior of the software.
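For illustration, a configuration file of this kind can be read with a few lines of Python. This is a sketch only, assuming the common "PARAMETER=value" line format with "#" comments; the file path and parameter names used below are examples, not values prescribed by this document:

```python
import os
import tempfile

def read_ego_conf(path):
    """Read PARAMETER=value pairs from a config file, ignoring comments."""
    params = {}
    with open(path) as f:
        for raw in f:
            line = raw.split('#', 1)[0].strip()   # drop comments and whitespace
            if not line or '=' not in line:
                continue
            key, _, value = line.partition('=')
            params[key.strip()] = value.strip().strip('"')
    return params

# Demonstrate with a throwaway file (example parameter names only).
with tempfile.NamedTemporaryFile('w', suffix='.conf', delete=False) as f:
    f.write('EGO_LOGDIR=/opt/ego/log   # example\nEGO_DEFINE_NCPUS=cores\n')
    path = f.name
conf = read_ego_conf(path)
os.unlink(path)
print(conf)
```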

Some of the parameters lim retrieves from ego.conf are as follows:

  • The TCP port the lim uses to serve all applications.
  • The directory used for reconfiguring the LIM (where the lim binary is stored).
  • The directory used for message logs.
  • The log level that determines the amount of detail logged.
  • The log class setting for lim.
  • The full path to and name of the entitlement file.
Defines whether ncpus is to be defined as procs, cores, or threads. This parameter overrides LSF_ENABLE_DUALCORE. If EGO_ENABLE_DUALCORE is set, the EGO_DEFINE_NCPUS setting takes precedence.
  • procs (if ncpus is defined as procs, then ncpus = nprocs)
  • cores (if ncpus is defined as cores, then ncpus = nprocs x ncores)
  • threads (if ncpus is defined as threads, then ncpus = nprocs x ncores x nthreads)

When EGO_DEFINE_NCPUS is set, run queue-length values (r1* values returned by lsload) are automatically normalized based on the set value.

If EGO_DEFINE_NCPUS is not defined, but EGO_ENABLE_DUALCORE is set, the lim reports the number of cores. If both EGO_DEFINE_NCPUS and LSF_ENABLE_DUALCORE are set, then the EGO parameter takes precedence.
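The three ncpus rules above reduce to a simple computation. The following Python sketch (the function name is hypothetical) makes them concrete, assuming only the three listed EGO_DEFINE_NCPUS values:

```python
def compute_ncpus(define_ncpus, nprocs, ncores, nthreads):
    """Apply the EGO_DEFINE_NCPUS rules: procs, cores, or threads."""
    if define_ncpus == 'procs':
        return nprocs                        # ncpus = nprocs
    if define_ncpus == 'cores':
        return nprocs * ncores               # ncpus = nprocs x ncores
    if define_ncpus == 'threads':
        return nprocs * ncores * nthreads    # ncpus = nprocs x ncores x nthreads
    raise ValueError('unknown EGO_DEFINE_NCPUS value: %s' % define_ncpus)

# Example host: 2 physical processors, 4 cores each, 2 threads per core.
print(compute_ncpus('procs', 2, 4, 2))    # 2
print(compute_ncpus('cores', 2, 4, 2))    # 8
print(compute_ncpus('threads', 2, 4, 2))  # 16
```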

Defines if the hosts have dual cores or not. Is overridden by EGO_DEFINE_NCPUS, if set.


You can customize the lim by changing configuration files in the EGO_CONFDIR directory. Configure ego.cluster.<cluster_name> to define cluster properties such as the resources on individual hosts, the load threshold values for a host, and so on. Configure ego.shared to define the host models read by the lim, or the CPU factors of individual hosts.