Fixing high CPU issue with Agentless Linux
Albook 120000625S
If you have one or more Agentless Linux instances monitoring a meaningful number of remote systems, say 70-80 per instance, you may notice the Agentless processes constantly consuming around 30% of the CPU.
This can cause a resource shortage when other processes are started or when other workloads produce temporary CPU peaks.
This happens because, by default, Agentless collects data for all attribute groups every 60 seconds.
Depending on the number of monitored servers, CPU usage can reach peaks of 20%-30%.
The Processes attribute group typically takes the longest to collect, because some remote systems can run a very large number of processes.
To reduce CPU consumption, fine-tune the Agentless instances by increasing the data collection interval.
The interval is configurable down to the attribute group level through the variable:
CDP_<attribute group name>_REFRESH_INTERVAL
You can raise it from the default 60 seconds to a value such as 120 or 180 seconds, and different attribute groups can be collected at different intervals.
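As a sketch of what this tuning might look like, the fragment below sets different intervals for two groups. The file location, group names, and interval values are illustrative assumptions, not taken from this article; adapt them to your installation.

```shell
# Hypothetical excerpt from an Agentless instance configuration/environment file.
# Group names and interval values are examples only.
CDP_PROCESSES_REFRESH_INTERVAL=180   # collect process data every 3 minutes
CDP_NETWORK_REFRESH_INTERVAL=120     # collect network data every 2 minutes
```

After changing these values, you will typically need to restart the Agentless instance for the new intervals to take effect.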
These are the attribute groups available for the Linux Agentless:
As a best practice, check the situations active on the Agentless instances and review each situation's sampling interval, so that the collection intervals you set remain consistent with how often the situations need fresh data.
For the attribute groups you are not interested in, you can set a very high collection interval (hourly, for example), so that no further CPU is wasted on useless data collection.
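Following the same pattern, a group you never chart or alert on could be collected only once per hour. The group name below is a hypothetical example:

```shell
# Hypothetical example: collect an unneeded attribute group
# only once per hour (3600 seconds) instead of every minute.
CDP_FILE_INFORMATION_REFRESH_INTERVAL=3600
```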
Thanks for reading