IBM Storage Ceph: Health shows warning "x large omap objects" in the rgw.meta pool

Troubleshooting


Problem

ceph health detail shows the warning "X large omap objects" in the rgw.meta pool

Cause

  • The default value of osd_deep_scrub_large_omap_object_key_threshold is 200,000: deep scrubs warn when an omap object contains more than 200,000 keys. Earlier releases used a much higher threshold, but a lower value gives administrators more opportunity to address the situation before it has a substantial impact.
  • All bucket names owned by a user are stored as omap keys under a single object in the rgw.meta pool, so this warning appears when a single user owns a large number of buckets.

Environment

  • IBM Storage Ceph 6.x
  • IBM Storage Ceph 7.x
  • IBM Storage Ceph 8.x
  • IBM Storage Ceph 9.x

Diagnosing The Problem

  • Review the Ceph cluster log for the large omap object details:
    $ grep "Large omap object found" /var/log/ceph/ceph.log
    2020-12-03 05:56:52.557266 osd.49 (osd.49) 1107 : cluster [WRN] Large omap object found. Object: 74:6eff0f81:users.uid::operator.buckets:head PG: 74.81f0ff76 (74.6) Key count: 253683 Size (bytes): 47855479
    
  • Check the number of buckets owned by the RGW user operator:
    $ radosgw-admin bucket stats > bucket_stats.out
    
    $ grep '"owner": "operator"' bucket_stats.out | wc -l
    253685
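
    The owner count above can also be derived programmatically instead of with grep, which is more robust against JSON formatting differences. A minimal sketch, assuming `radosgw-admin bucket stats` output has been saved as a JSON array; the sample records below are illustrative stand-ins for real output:

    ```python
    import json
    from collections import Counter

    def buckets_per_owner(stats_json: str) -> Counter:
        """Count buckets per owner from `radosgw-admin bucket stats` JSON output."""
        stats = json.loads(stats_json)  # the command emits a JSON array of bucket records
        return Counter(bucket["owner"] for bucket in stats)

    # Illustrative sample standing in for real `radosgw-admin bucket stats` output.
    sample = json.dumps([
        {"bucket": "logs-1", "owner": "operator"},
        {"bucket": "logs-2", "owner": "operator"},
        {"bucket": "media", "owner": "webapp"},
    ])

    counts = buckets_per_owner(sample)
    print(counts.most_common())  # owners with the most buckets first
    ```

    On a real cluster, feed the function the contents of bucket_stats.out; an owner whose count approaches the omap key threshold is the likely source of the warning.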

Resolving The Problem

  • Remove unused or empty buckets, if possible.
  • Distribute bucket ownership across multiple users.
  • To suppress these warnings, increase the threshold as required and deep-scrub the affected PG. We recommend setting osd_deep_scrub_large_omap_object_key_threshold at global scope; if the option is already set at osd, mgr, or another scope, remove those settings so that the global value takes effect. Note that while significantly raising this value clears the immediate warning, the threshold exists for a reason: prudent practice is to reduce the number of omap keys via the methods above and then restore the default value.
    # ceph config dump | grep osd_deep_scrub_large_omap_object_key_threshold
    mgr advanced osd_deep_scrub_large_omap_object_key_threshold 200000
    # ceph config set global osd_deep_scrub_large_omap_object_key_threshold 300000
    # ceph config rm mgr osd_deep_scrub_large_omap_object_key_threshold
    # ceph pg deep-scrub 74.6
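
    Before choosing a new threshold, it helps to know which objects were flagged and by how many keys they exceed a candidate value. A minimal sketch that parses "Large omap object found" lines from the cluster log; the regular expression is an assumption based on the log line format shown in the diagnosis section:

    ```python
    import re

    # Matches the "Large omap object found" line format shown in the diagnosis above.
    LARGE_OMAP_RE = re.compile(
        r"Large omap object found\. Object: (?P<object>\S+) "
        r"PG: (?P<pg>\S+) \((?P<pgid>[^)]+)\) "
        r"Key count: (?P<keys>\d+) Size \(bytes\): (?P<size>\d+)"
    )

    def flagged_objects(log_text: str, threshold: int = 200_000):
        """Yield (object, pgid, key_count) for warnings exceeding the given threshold."""
        for m in LARGE_OMAP_RE.finditer(log_text):
            keys = int(m.group("keys"))
            if keys > threshold:
                yield m.group("object"), m.group("pgid"), keys

    # Sample line taken from the diagnosis section above.
    log = ("2020-12-03 05:56:52.557266 osd.49 (osd.49) 1107 : cluster [WRN] "
           "Large omap object found. Object: 74:6eff0f81:users.uid::operator.buckets:head "
           "PG: 74.81f0ff76 (74.6) Key count: 253683 Size (bytes): 47855479")

    for obj, pgid, keys in flagged_objects(log):
        print(f"{obj} in PG {pgid}: {keys} keys")
    ```

    Running this with a higher candidate threshold (for example, threshold=300_000) shows whether that setting would silence the current warnings after the affected PGs are deep-scrubbed.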

Document Location

Worldwide

Document Information

Modified date:
17 September 2025

UID

ibm17237003