Troubleshooting
Problem
The ceph status command shows HEALTH_ERR for the MDS, as below:
1 MDSs report damaged metadata
Symptom
Check whether the damage ls command output is similar to the following:
# ceph tell mds.$filesystem:0 damage ls
[
    {
        "damage_type": "backtrace",
        "id": 2091078454,
        "ino": 1,
        "path": ""
    }
]
Where $filesystem is the name of the CephFS filesystem that has the damaged metadata. This is not the MDS's hostname.
Fetch the CephFS filesystem name with the following command:
# ceph fs ls
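When the damage ls output contains multiple entries, the ids can be filtered programmatically. The following is a minimal Python sketch; the sample JSON is copied from the example output above, not read from a live cluster:

```python
import json

# Sample `damage ls` output, mirroring the example shown above
damage_ls_output = """
[
    {
        "damage_type": "backtrace",
        "id": 2091078454,
        "ino": 1,
        "path": ""
    }
]
"""

entries = json.loads(damage_ls_output)

# Keep only ids for backtrace damage on the root inode (ino 1),
# which is the only case this procedure applies to
repairable_ids = [e["id"] for e in entries
                  if e["damage_type"] == "backtrace" and e["ino"] == 1]
print(repairable_ids)  # → [2091078454]
```

In a live environment you would pipe `ceph tell mds.$filesystem:0 damage ls` into such a script instead of using the embedded sample.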
Cause
Scrub reports the damaged metadata because the on-disk and in-memory backtraces do not match. This happens because older versions of CephFS never wrote backtrace information for the root inode.
Environment
IBM Storage Ceph 5.x and above
Resolving The Problem
** Important Note **
Before applying this fix, confirm that:
- The damage_type is listed as backtrace.
- The ino value is 1, which indicates that the damaged metadata corresponds to the root of the filesystem.
In the following process, $filesystem is the name of the CephFS filesystem that has the damaged metadata. This is not the MDS's hostname. Modify the value to match the proper filesystem in your environment.
To resolve this reported damaged metadata, proceed as follows:
1. Force a repair of the root of the CephFS filesystem:
# ceph tell mds.$filesystem:0 scrub start / force repair
2. Once the repair completes, delete the damage entry, where $id is the "id" value from the damage ls output obtained earlier:
# ceph tell mds.$filesystem:0 damage rm $id
3. Confirm that the ceph status output no longer shows damaged metadata and the health line shows HEALTH_OK.
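The repair sequence above can be assembled into commands ahead of time, which helps avoid typos when substituting values. The sketch below only builds the command strings; the filesystem name and damage id are assumptions taken from the earlier `ceph fs ls` and `damage ls` examples, and nothing here contacts the cluster:

```python
# Hypothetical values; replace with your own from `ceph fs ls` and `damage ls`
filesystem = "cephfs"
damage_id = 2091078454

# Assemble the repair sequence described in the steps above
commands = [
    f"ceph tell mds.{filesystem}:0 scrub start / force repair",
    f"ceph tell mds.{filesystem}:0 damage rm {damage_id}",
    "ceph status",
]
for cmd in commands:
    print(cmd)
```

Review the printed commands, then run them in order on a node with admin keyring access.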
Document Location
Worldwide
Document Information
Modified date:
17 September 2025
UID
ibm17171732