Pinned topic: Synchronous replication over a WAN, RE: Metadata performance
  • 8 replies
  • Latest post: 2013-01-23T01:21:51Z by YuanZhengcai

SystemAdmin · 2013-01-11T22:42:55Z
I have a 2-site configuration separated by 100 ms of latency; we'll call the sites siteA and siteB. I have a client in siteA called tsting03.

In this configuration I have dedicated metadataOnly and dataOnly NSDs. Everything appears to be working fine except for metadata performance. The behavior I'm seeing is clear: metadata read operations are issued by the client to both sites, whereas data IO is performed only against the "local" site. I have "readReplicaPolicy local" set for all nodes in the cluster configuration, and GPFS has been restarted since that change. All nodes in siteA are within the same subnet; siteB is in a different subnet.
Is there a configuration option besides readReplicaPolicy that is supposed to direct clients' metadata read operations?
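For reference, checking and applying that setting is just a matter of the standard commands (a minimal sketch; the grep filter is only for convenience):

  # show whether readReplicaPolicy appears in the active configuration
  mmlsconfig | grep -i readReplicaPolicy
  # apply it cluster-wide (GPFS was restarted afterwards in our case)
  mmchconfig readReplicaPolicy=local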

To put it into perspective… I have siteA and siteB, and the client is in siteA. Doing a find over 10,000 files while both siteA and siteB are online takes about 4 minutes; doing the same find when siteB is offline takes about 2 seconds. Doing a dd read of a file in the filesystem performs the same regardless of whether siteB is online. These findings are reinforced by the mmfsadm dump waiters output:

tsting03.FQDN: 0x7FDCB4000E00 waiting 7.285068451 seconds, PrefetchWorkerThread: on ThCond 0x7FDC90002088 (0x7FDC90002088) (MsgRecordCondvar), reason 'RPC wait' for NSD I/O completion on node 10.248.59.72 <c0n1>

(There are I/Os waiting on both .71 and .72; those IPs are siteB-nsd1 and siteB-nsd2.)
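For completeness, the tests behind those numbers were nothing more elaborate than the following (a sketch; the paths and block size are illustrative):

  # metadata-heavy test: stat roughly 10,000 files
  time find /siteAfs/testdirectory -ls > /dev/null
  # data test: sequential read of a single file
  dd if=/siteAfs/testdirectory/bigfile of=/dev/null bs=2M
  # in another shell, see where the resulting I/O is waiting
  mmfsadm dump waiters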
Updated on 2013-01-23T01:21:51Z by YuanZhengcai
  • dlmcnabb · 2013-01-11T22:53:53Z · Re: Synchronous replication over a WAN, RE: Metadata performance

    ReadReplicaPolicy=local works the same for data and metadata. So the differential-diagnosis questions are (commands sketched below):
    1) Is the metadata really replicated, including all of the system metadata? Confirm that mmlsfs $fsname -m reports the value 2, and make sure everything is actually replicated by running mmrestripefs $fsname -R.

    2) Do the metadata disks somehow look like they are locally attached, so that the IO does not go through the NSD servers?

    3) Are any of the metadata LUNs in "suspended" state? Check mmlsdisk $fsname -L.
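    A quick way to run the three checks from a node in the cluster (a sketch; $fsname stands in for the file system name, e.g. siteAfs):

      # 1) default and maximum metadata replication should both be 2
      mmlsfs $fsname -m -M
      # re-replicate anything that is not at the default replication level
      mmrestripefs $fsname -R
      # 2) check which nodes see the NSDs as locally attached devices
      mmlsnsd -m
      # 3) look for disks whose status is not "ready"/"up"
      mmlsdisk $fsname -L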
  • dlmcnabb · 2013-01-11T23:02:46Z · Re: Synchronous replication over a WAN, RE: Metadata performance

    (quoting his reply of 2013-01-11T22:53:53Z above)
    Also, what code release are you running? There was a fix in 3.4.0.13 where, after an NSD node failure and recovery, GPFS was not resetting the NSD server and the internal relativeAccTime variable that is used to determine which replica to read.

    Use "mmfsadm dump nsd" and look at relAcctime for each disk. It should have the value 0 for local access, 1 for a local NSD server, and 2 for a remote NSD server.
  • SystemAdmin · 2013-01-11T23:05:36Z · Re: Synchronous replication over a WAN, RE: Metadata performance

    (quoting dlmcnabb's questions of 2013-01-11T22:53:53Z above)
    Thank you for the quick response!

    Some feedback: I don't have direct access to their system, but I have captured the state of one of the systems, so hopefully I can get you all the information you're looking for. Just as an FYI, the filesystem was created with -m2 -M2 -r2 -R2.

    Some other notes:
    - We are using gpfs-3.4.0.15
    - siteA has 2 NSD servers, which see the disks through /dev/mapper/
    - siteB has 2 NSD servers, which see the disks through /dev/mapper/
    - Each site has the same number of devices for metadata and data
    - Each site has a unique failure group assigned to its NSDs (the server/device mapping can be double-checked as sketched below)
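    The NSD-to-server and device mapping behind those notes can be confirmed with (a sketch; failure groups show up in the mmlsdisk output further down):

      # which nodes see each NSD as a local device, and which servers serve it
      mmlsnsd -m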
    RE request 1:

    File system attributes for /dev/siteAfs:
    ========================================
    flag                value                    description
    ------------------- ------------------------ -----------------------------------
    -f 65536 Minimum fragment size in bytes
    -i 512 Inode size in bytes
    -I 32768 Indirect block size in bytes
    -m 2 Default number of metadata replicas
    -M 2 Maximum number of metadata replicas
    -r 2 Default number of data replicas
    -R 2 Maximum number of data replicas
    -j scatter Block allocation type
    -D nfs4 File locking semantics in effect
    -k all ACL semantics in effect
    -n 256 Estimated number of nodes that will mount file system
    -B 2097152 Block size
    -Q none Quotas enforced
    none Default quotas enabled
    --filesetdf No Fileset df enabled?
    -V 12.10 (3.4.0.7) File system version
    --create-time Wed Dec 12 22:44:33 2012 File system creation time
    -u Yes Support for large LUNs?
    -z No Is DMAPI enabled?
    -L 4194304 Logfile size
    -E Yes Exact mtime mount option
    -S Yes Suppress atime mount option
    -K whenpossible Strict replica allocation option
    --fastea Yes Fast external attributes enabled?
    --inode-limit 134217728 Maximum number of inodes
    -P system Disk storage pools in file system
    -d siteA_nsd000;siteA_nsd002;siteA_nsd004;siteA_nsd006;siteA_nsd008;siteA_nsd010;siteA_nsd012;siteA_nsd014;siteA_nsd200;siteA_nsd202;siteA_nsd210;siteA_nsd212;siteB_nsd001;
    -d siteB_nsd003;siteB_nsd005;siteB_nsd007;siteB_nsd009;siteB_nsd011;siteB_nsd013;siteB_nsd015;siteB_nsd201;siteB_nsd203;siteB_nsd211;siteB_nsd213 Disks in file system
    -A yes Automatic mount option
    -o none Additional mount options
    -T /siteAfs Default mount point
    --mount-priority 0 Mount priority
    As it relates to running mmrestripefs, we haven't run that yet, but the metadata is definitely replicated for the files we're testing (their metadata remains available even while siteB is offline; per-file replication can also be checked directly, as sketched below). I will run a restripefs when I have access to the system.
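    For what it's worth, the direct per-file check is just (a sketch; the file path is illustrative):

      # shows the current and maximum data/metadata replication factors for one file
      mmlsattr -L /siteAfs/testdirectory/somefile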

    RE question number 2:

    No, the client is a network client and has no access to the SAN disks.
    RE question number 3:

    No. All disks are in a healthy running state.

    disk name     driver type  sector size  failure group  holds metadata  holds data  status  availability  disk id  storage pool  remarks
    ------------- ------------ ------------ -------------- --------------- ----------- ------- ------------- -------- ------------- --------
    siteA_nsd000 nsd 512 200 No Yes ready up 1 system
    siteA_nsd002 nsd 512 200 No Yes ready up 2 system
    siteA_nsd004 nsd 512 200 No Yes ready up 3 system
    siteA_nsd006 nsd 512 200 No Yes ready up 4 system
    siteA_nsd008 nsd 512 200 No Yes ready up 5 system
    siteA_nsd010 nsd 512 200 No Yes ready up 6 system
    siteA_nsd012 nsd 512 200 No Yes ready up 7 system
    siteA_nsd014 nsd 512 200 No Yes ready up 8 system
    siteA_nsd200 nsd 512 200 Yes No ready up 9 system desc
    siteA_nsd202 nsd 512 200 Yes No ready up 10 system desc
    siteA_nsd210 nsd 512 200 Yes No ready up 11 system
    siteA_nsd212 nsd 512 200 Yes No ready up 12 system
    siteB_nsd001 nsd 512 400 No Yes ready up 13 system
    siteB_nsd003 nsd 512 400 No Yes ready up 14 system
    siteB_nsd005 nsd 512 400 No Yes ready up 15 system
    siteB_nsd007 nsd 512 400 No Yes ready up 16 system
    siteB_nsd009 nsd 512 400 No Yes ready up 17 system
    siteB_nsd011 nsd 512 400 No Yes ready up 18 system
    siteB_nsd013 nsd 512 400 No Yes ready up 19 system
    siteB_nsd015 nsd 512 400 No Yes ready up 20 system
    siteB_nsd201 nsd 512 400 Yes No ready up 21 system
    siteB_nsd203 nsd 512 400 Yes No ready up 22 system desc
    siteB_nsd211 nsd 512 400 Yes No ready up 23 system
    siteB_nsd213 nsd 512 400 Yes No ready up 24 system
    Number of quorum disks: 3
    Read quorum value: 2
    Write quorum value: 2
  • SystemAdmin · 2013-01-11T23:13:00Z · Re: Synchronous replication over a WAN, RE: Metadata performance

    (quoting dlmcnabb's reply of 2013-01-11T23:02:46Z above)
    Thanks, I will run the mmfsadm dump nsd command shortly and get back to you with feedback on the result. I'll run this from both the client and one of the NSD servers.
    Again, thank you for your quick responses.
  • SystemAdmin · 2013-01-14T18:41:33Z · Re: Synchronous replication over a WAN, RE: Metadata performance

    So - I have run mmfsadm dump nsd on both the client and the NSD servers, and I have attached the output of one of those commands. This is the output from the client, which doesn't show much for relAcctime:

    NSD configuration:

    Disk name     NsdId               Cl  St  Local dev  Dev type  Servers        Addr/rcfg
    ------------- ------------------- --- --- ---------- --------- -------------- ---------
    siteA_nsd000 0AF83B47:50C907D3 0 N generic <c0n0> <c0n1> 0x7F5FE00B3660/0x00000000
    siteA_nsd002 0AF83B48:50C907D4 0 N generic <c0n1> <c0n0> 0x7F5FE00B3790/0x00000000
    siteA_nsd004 0AF83B47:50C907D4 0 N generic <c0n0> <c0n1> 0x7F5FE00B38C0/0x00000000
    siteA_nsd006 0AF83B48:50C907D5 0 N generic <c0n1> <c0n0> 0x7F5FE00B39F0/0x00000000
    siteA_nsd008 0AF83B47:50C907D5 0 N generic <c0n0> <c0n1> 0x7F5FE00B3B20/0x00000000
    siteA_nsd010 0AF83B48:50C907D6 0 N generic <c0n1> <c0n0> 0x7F5FE00B3C50/0x00000000
    siteA_nsd012 0AF83B47:50C907D6 0 N generic <c0n0> <c0n1> 0x7F5FE00B3D80/0x00000000
    siteA_nsd014 0AF83B48:50C907D7 0 N generic <c0n1> <c0n0> 0x7F5FE00B3EB0/0x00000000
    siteA_nsd200 0AF83B47:50C907D7 0 N generic <c0n0> <c0n1> 0x7F5FE00B3FE0/0x00000000
    siteA_nsd202 0AF83B48:50C907D8 0 N generic <c0n1> <c0n0> 0x7F5FE00B4110/0x00000000
    siteA_nsd210 0AF83B47:50C907D8 0 N generic <c0n0> <c0n1> 0x7F5FE00B4240/0x00000000
    siteA_nsd212 0AF83B48:50C907D9 0 N generic <c0n1> <c0n0> 0x7F5FE00B4370/0x00000000
    siteB_nsd001 0AF83B83:50C907D8 0 N generic <c0n2> <c0n3> 0x7F5FE00B44A0/0x00000000
    siteB_nsd003 0AF83B84:50C907D9 0 N generic <c0n3> <c0n2> 0x7F5FE00B45D0/0x00000000
    siteB_nsd005 0AF83B83:50C907D9 0 N generic <c0n2> <c0n3> 0x7F5FE00B4700/0x00000000
    siteB_nsd007 0AF83B84:50C907DA 0 N generic <c0n3> <c0n2> 0x7F5FE00B4830/0x00000000
    siteB_nsd009 0AF83B83:50C907DB 0 N generic <c0n2> <c0n3> 0x7F5FE00B4960/0x00000000
    siteB_nsd011 0AF83B84:50C907DB 0 N generic <c0n3> <c0n2> 0x7F5FE00B4A90/0x00000000
    siteB_nsd013 0AF83B83:50C907DC 0 N generic <c0n2> <c0n3> 0x7F5FE00B4BC0/0x00000000
    siteB_nsd015 0AF83B84:50C907DD 0 N generic <c0n3> <c0n2> 0x7F5FE00B4CF0/0x00000000
    siteB_nsd201 0AF83B83:50C907DD 0 N generic <c0n2> <c0n3> 0x7F5FE00B4E20/0x00000000
    siteB_nsd203 0AF83B83:50C907DE 0 N generic <c0n3> <c0n2> 0x7F5FE00B4F50/0x00000000
    siteB_nsd211 0AF83B83:50C907DF 0 N generic <c0n2> <c0n3> 0x7F5FE00B5080/0x00000000
    siteB_nsd213 0AF83B83:50C907E0 0 N generic <c0n3> <c0n2> 0x7F5FE00B51B0/0x00000000
    A co-worker mentioned to me that he had heard of a similar issue occurring when the subnets parameter wasn't set, so I figured I would mention that.
    Doing a restripe did not change the behavior.
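    For reference, if subnets did turn out to be the issue, it would be inspected and set roughly like this (a sketch; the subnet value is illustrative, not taken from this cluster):

      # is a subnets definition currently in effect?
      mmlsconfig | grep -i subnets
      # example of declaring a site-local subnet for GPFS daemon traffic
      mmchconfig subnets="10.248.59.0"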
  • dlmcnabb · 2013-01-15T17:40:26Z · Re: Synchronous replication over a WAN, RE: Metadata performance

    (quoting SystemAdmin's post of 2013-01-14T18:41:33Z, reproduced in full above)
    The dump only showed the disk stanzas for the siteA NSDs. Maybe what I needed was "mmfsadm dump disk" to see the siteB disks.
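    Something along these lines, filtered for the siteB disks (a sketch):

      mmfsadm dump disk | grep -A 2 siteB_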
  • SystemAdmin · 2013-01-15T17:41:23Z · Re: Synchronous replication over a WAN, RE: Metadata performance

    (quoting dlmcnabb's reply of 2013-01-11T23:02:46Z above)
    Update. It would seem that mmfsadm dump nsd did not have the information you were looking for, but I was able to find it in the dump all output.

    relAcctime appears to be interpreted correctly: tsting03 is in siteA, and all of the siteA NSDs (including those serving metadata) have relAcctime 1, while the siteB NSDs have relAcctime 2. I have again verified that data read IO operations are being serviced correctly (by the local NSDs), but metadata operations are going across both sites.

    The data IO operation is just a simple dd.
    The metadata IO operation is "find /siteAfs/testdirectory -ls > /dev/null"

    A dump of the waiters while running the two test commands confirms that data IO is being serviced locally, while metadata IO is being serviced both locally and remotely.
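    That waiters check is just a filter over the dump output (a sketch):

      # which node is each thread waiting on for NSD I/O?
      mmfsadm dump waiters | grep 'NSD I/O completion'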
    The output you were looking for re: relAcctime:

    root@tsting03 siteAfs# mmfsadm dump all | grep relAcctime -B10 | grep -e ^State -e dtype
    State of Disk siteA_nsd000 (devId FFFFFFFF devType Z devSubT N):
      dtype 'nsd', LAN attached relAcctime 1
    State of Disk siteA_nsd002 (devId FFFFFFFF devType Z devSubT N):
      dtype 'nsd', LAN attached relAcctime 1
    State of Disk siteA_nsd004 (devId FFFFFFFF devType Z devSubT N):
      dtype 'nsd', LAN attached relAcctime 1
    State of Disk siteA_nsd006 (devId FFFFFFFF devType Z devSubT N):
      dtype 'nsd', LAN attached relAcctime 1
    State of Disk siteA_nsd008 (devId FFFFFFFF devType Z devSubT N):
      dtype 'nsd', LAN attached relAcctime 1
    State of Disk siteA_nsd010 (devId FFFFFFFF devType Z devSubT N):
      dtype 'nsd', LAN attached relAcctime 1
    State of Disk siteA_nsd012 (devId FFFFFFFF devType Z devSubT N):
      dtype 'nsd', LAN attached relAcctime 1
    State of Disk siteA_nsd014 (devId FFFFFFFF devType Z devSubT N):
      dtype 'nsd', LAN attached relAcctime 1
    State of Disk siteA_nsd200 (devId FFFFFFFF devType Z devSubT N):
      dtype 'nsd', LAN attached relAcctime 1
    State of Disk siteA_nsd202 (devId FFFFFFFF devType Z devSubT N):
      dtype 'nsd', LAN attached relAcctime 1
    State of Disk siteA_nsd210 (devId FFFFFFFF devType Z devSubT N):
      dtype 'nsd', LAN attached relAcctime 1
    State of Disk siteA_nsd212 (devId FFFFFFFF devType Z devSubT N):
      dtype 'nsd', LAN attached relAcctime 1
    State of Disk siteB_nsd001 (devId FFFFFFFF devType Z devSubT N):
      dtype 'nsd', LAN attached relAcctime 2
    State of Disk siteB_nsd003 (devId FFFFFFFF devType Z devSubT N):
      dtype 'nsd', LAN attached relAcctime 2
    State of Disk siteB_nsd005 (devId FFFFFFFF devType Z devSubT N):
      dtype 'nsd', LAN attached relAcctime 2
    State of Disk siteB_nsd007 (devId FFFFFFFF devType Z devSubT N):
      dtype 'nsd', LAN attached relAcctime 2
    State of Disk siteB_nsd009 (devId FFFFFFFF devType Z devSubT N):
      dtype 'nsd', LAN attached relAcctime 2
    State of Disk siteB_nsd011 (devId FFFFFFFF devType Z devSubT N):
      dtype 'nsd', LAN attached relAcctime 2
    State of Disk siteB_nsd013 (devId FFFFFFFF devType Z devSubT N):
      dtype 'nsd', LAN attached relAcctime 2
    State of Disk siteB_nsd015 (devId FFFFFFFF devType Z devSubT N):
      dtype 'nsd', LAN attached relAcctime 2
    State of Disk siteB_nsd201 (devId FFFFFFFF devType Z devSubT N):
      dtype 'nsd', LAN attached relAcctime 2
    State of Disk siteB_nsd203 (devId FFFFFFFF devType Z devSubT N):
      dtype 'nsd', LAN attached relAcctime 2
    State of Disk siteB_nsd211 (devId FFFFFFFF devType Z devSubT N):
      dtype 'nsd', LAN attached relAcctime 2
    State of Disk siteB_nsd213 (devId FFFFFFFF devType Z devSubT N):
      dtype 'nsd', LAN attached relAcctime 2

    Hopefully we can come to a resolution on this. Right now, a metadata operation that takes about 2 seconds against siteA alone takes nearly 300 seconds with both sites live over a 100ms RTT, even though there is no reason to go from siteA to siteB when all of the IO can be serviced locally.
  • YuanZhengcai · 2013-01-23T01:21:51Z · Re: Synchronous replication over a WAN, RE: Metadata performance

    >> tsting03.FQDN: 0x7FDCB4000E00 waiting 7.285068451 seconds, PrefetchWorkerThread: on ThCond 0x7FDC90002088 (0x7FDC90002088) (MsgRecordCondvar), reason 'RPC wait' for NSD I/O completion on node 10.248.59.72 <c0n1>

    >> (There are I/Os for both .71 and .72; those IPs are siteB-nsd1 and siteB-nsd2.)

    No, <c0n1> (.72) is actually siteA-nsd2, according to the dump and trace.

    trcrpt.130117.23.33.58.tsting03.gz:
    0.100860 8583 TRACE_DISK: doReplicatedRead: da 10:92242539
    0.100863 8583 TRACE_IO: QIO: read inode tag 0 229995 buf 0x4000018000 nsdName siteA_nsd202 da 10:92242539 nSectors 1 align 0 by iocMBHandler (SharedHashTabFetchHandlerThread)
    0.100871 8583 TRACE_NSD: nsdDoIO enter: read bufAddr 0x4000018000 nsdId 0AF83B48:50C907D8 da 10:92242539 nBytes 512
    0.100872 8583 TRACE_NSD: nsdDoIO: serverAddr <c0n1>
    0.100883 8583 TRACE_TS: sendMessage dest <c0n1> 10.248.59.72 siteA-nsd2: msg_id 14393 type 13 tagP 0x7FBAE400E8D8 seq 3972, state initial
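    For anyone reproducing this, the trace excerpt above would have been gathered roughly like this (a sketch; the trace-report location and file names are assumptions based on the defaults):

      # start low-level tracing on the client, run the metadata test, then stop
      mmtracectl --start -N tsting03
      find /siteAfs/testdirectory -ls > /dev/null
      mmtracectl --stop -N tsting03
      # the formatted trace report is written on the traced node (here assumed under /tmp/mmfs);
      # look at where the NSD reads are actually sent
      zgrep -e TRACE_NSD -e TRACE_IO /tmp/mmfs/trcrpt.*tsting03*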

    mmfsadm_dump_nsd.sitea-nsda.log:
    NSD configuration:

    Disk name     NsdId               Cl  St  Local dev                                      Dev type  Servers        Addr/rcfg
    ------------- ------------------- --- --- ---------------------------------------------- --------- -------------- ---------
    siteA_nsd000 0AF83B47:50C907D3 0 N /dev/mapper/360001ff0808f80000000002689440000 generic <c0n0> <c0n1> 0x7F414C0D7DF0/0x00000000