
Error message "mount: block device /dev/drbd1 is write-protected, mounting read-only mount: Wrong medium type" when trying to mount /nz or /export/home

Troubleshooting


Problem

When the DRBD service crashed, an attempt to recover by manually mounting the /nz and /export/home file systems on the primary node failed with the error messages "mount: block device /dev/drbd1 is write-protected, mounting read-only" and "mount: Wrong medium type".
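
For context, a DRBD device whose local role is still Secondary is exposed as a write-protected block device, which is why the mount is refused. A minimal sketch of the failing attempt, using the paths from this technote (the exact wording may differ slightly on other systems):

[root@NZ30222-H1 ~]# mount /nz
mount: block device /dev/drbd1 is write-protected, mounting read-only
mount: Wrong medium type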

Resolving The Problem

Because the DRBD service crashed, the first step is to check the DRBD status on both nodes.

1) Execute the command "service drbd status" on the primary node HA1 and the secondary node HA2. On both nodes the role is Secondary/Secondary:
[root@NZ30222-H1 ~]# service drbd status
drbd driver loaded OK; device status:
version: 8.4.0nz2c (api:1/proto:86-100)
GIT-hash: 28753f559ab51b549d16bcf487fe625d5919c49c build by root@testA, 2012-05-04 09:59:24
m:res cs ro ds p mounted fstype
0:r1 Connected Secondary/Secondary UpToDate/UpToDate C r-----
1:r0 Connected Secondary/Secondary UpToDate/UpToDate C r-----

[root@NZ30222-H2 ~]# service drbd status
drbd driver loaded OK; device status:
version: 8.4.0nz2c (api:1/proto:86-100)
GIT-hash: 28753f559ab51b549d16bcf487fe625d5919c49c build by root@testA, 2012-05-04 09:59:24
m:res cs ro ds p mounted fstype
0:r1 Connected Secondary/Secondary UpToDate/UpToDate C r-----
1:r0 Connected Secondary/Secondary UpToDate/UpToDate C r-----
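
The Secondary/Secondary role on both nodes explains the original mount error: DRBD allows a read-write mount only on a node whose resources are in the Primary role. As a quick cross-check, the same role information can be read from /proc/drbd, the standard DRBD status interface in this 8.4 release (a sketch of the expected lines, not captured from this system):

[root@NZ30222-H1 ~]# cat /proc/drbd
 0: cs:Connected ro:Secondary/Secondary ds:UpToDate/UpToDate C r-----
 1: cs:Connected ro:Secondary/Secondary ds:UpToDate/UpToDate C r-----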

2) Then try to restart the DRBD service on the primary node HA1; however, the start does not complete: the startup script waits for the peer node and reports the message below (type 'yes' to abort the wait):

[root@NZ30222-H1 ~]# service drbd start
Starting DRBD resources: [
create res: r0 r1
prepare disk: r0 r1
adjust disk: r0 r1
adjust net: r0 r1
]
..........
***************************************************************
DRBD's startup script waits for the peer node(s) to appear.
- In case this node was already a degraded cluster before the
reboot the timeout is 0 seconds. [degr-wfc-timeout]
- If the peer was available before the reboot the timeout will
expire after 0 seconds. [wfc-timeout]
(These values are for resource 'r0 r1'; 0 sec -> wait forever)
To abort waiting enter 'yes' [ 12]: yes
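
The prompt above is controlled by the wfc-timeout and degr-wfc-timeout settings in each resource's startup section; a value of 0 makes the init script wait forever for the peer unless the wait is aborted by typing 'yes'. The corresponding configuration looks roughly like this (illustrative snippet; these values were not read from this system's drbd.conf):

startup {
    wfc-timeout      0;   # seconds to wait for the peer on a normal start; 0 = forever
    degr-wfc-timeout 0;   # seconds to wait when the cluster was already degraded; 0 = forever
}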

3) Manually promote the DRBD resources on the primary node HA1 so that the role returns to Primary/Secondary, by executing the following commands:

[root@NZ30222-H1 ~]# drbdadm primary r0
[root@NZ30222-H1 ~]# drbdadm primary r1
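
As an optional cross-check that is not part of the original procedure, drbdadm can also report the role of each resource directly; after the promotion the output on HA1 should read Primary/Secondary:

[root@NZ30222-H1 ~]# drbdadm role r0
Primary/Secondary
[root@NZ30222-H1 ~]# drbdadm role r1
Primary/Secondary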

4) Check that the DRBD status on HA1 is back to normal by executing the following command:

[root@NZ30222-H1 ~]# drbd-overview
0:r1/0 Connected Primary/Secondary UpToDate/UpToDate C r----- /export/home ext4 16G 183M 15G 2%
1:r0/0 Connected Primary/Secondary UpToDate/UpToDate C r----- /nz ext4 296G 38G 243G 14%

5) Mount the /nz and /export/home file systems on the primary node HA1. Both mounts complete cleanly without any error messages:
[root@NZ30222-H1 ~]# mount /nz
[root@NZ30222-H1 ~]# mount /export/home
[root@NZ30222-H1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda5 36G 4.8G 29G 15% /
/dev/sda2 1008M 80M 877M 9% /boot
/dev/sda1 200M 252K 200M 1% /boot/efi
/dev/sda13 426G 18G 387G 5% /nzscratch
/dev/sda8 7.9G 4.3G 3.3G 58% /opt
/dev/sda7 16G 2.5G 13G 17% /tmp
/dev/sda9 7.9G 1.6G 6.0G 21% /usr
/dev/sda12 4.0G 137M 3.7G 4% /usr/local
/dev/sda10 7.9G 600M 6.9G 8% /var
none 77G 298M 76G 1% /dev/shm
/dev/drbd1 296G 39G 243G 14% /nz
/dev/drbd0 16G 183M 15G 2% /export/home
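
As a final sanity check beyond the original steps, confirm that both DRBD-backed file systems are now mounted read-write rather than read-only (expected output sketched below, not captured from this system):

[root@NZ30222-H1 ~]# mount | grep drbd
/dev/drbd1 on /nz type ext4 (rw)
/dev/drbd0 on /export/home type ext4 (rw)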

[{"Product":{"code":"SSULQD","label":"IBM PureData System"},"Business Unit":{"code":"BU053","label":"Cloud & Data Platform"},"Component":"Cluster","Platform":[{"code":"PF025","label":"Platform Independent"}],"Version":"1.0.0","Edition":"","Line of Business":{"code":"LOB10","label":"Data and AI"}}]

Document Information

Modified date:
27 August 2020

UID

swg22002454