Topic
  • 18 replies
  • Latest Post - ‏2012-05-29T16:17:59Z by JayFurmanek
Sudhanshu_Chopra
Sudhanshu_Chopra
10 Posts

Pinned topic Power Linux - VSCSI disk redundancy not working

‏2012-05-23T15:40:36Z |
We are facing an issue related to redundancy of VSCSI disks coming from the VIO Servers, which are exported from the VIO Servers as PVs (physical volumes).
We are not using NPIV anywhere. The Linux version is Red Hat Enterprise Linux 6.1.
We have two VIO Servers, VIO1 and VIO2 (the same disk is visible to the OS through two different paths, one from VIO1 and one from VIO2). When we shut down one of the VIO Servers, the Red Hat root filesystem remounts itself read-only, we receive I/O errors, and the OS becomes unavailable.

Is there any solution for redundancy of a SCSI disk (one disk) coming from two VIO Servers, or do we need to manage this only with software RAID on Linux?
 
As these are SCSI disks that do not come directly through an FC adapter, do we still need to configure the Linux multipath driver, or is this handled by the ibmvscsi driver itself?
Any documents or links that can guide us on how to set up redundancy of SCSI disks on Power Linux would be appreciated.

Updated on 2012-05-29T16:17:59Z at 2012-05-29T16:17:59Z by JayFurmanek
  • jscheel
    jscheel
    67 Posts

    Re: Power Linux - VSCSI disk redundancy not working

    ‏2012-05-23T19:45:07Z  
Have you configured multipath over the disks? It doesn't sound like you've done so.
  • Storix
    Storix
    9 Posts

    Re: Power Linux - VSCSI disk redundancy not working

    ‏2012-05-23T19:50:17Z  
     jscheel...you just beat me to it.
     
RHEL supports native device-mapper multipath. Typically you set "user_friendly_names yes" in /etc/multipath.conf and you will see /dev/mpath/mpatha instead of sda and sdb; then it should fail over properly. This is not software RAID: with software RAID you are trying to make two disks act as one, whereas with multipath you are making one disk that appears as two devices act as one. ;-)
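In case it helps, here is a minimal sketch of turning that on for a RHEL 6 client (assuming the stock device-mapper-multipath package; mpathconf is available on recent RHEL 6 releases). Note that if the root filesystem was installed over a single path, the installer/initramfs side also needs attention, as discussed later in this thread.
$ yum install device-mapper-multipath
$ mpathconf --enable --user_friendly_names y   # creates a starter /etc/multipath.conf
$ service multipathd start
$ chkconfig multipathd on                      # start multipathd on every boot
$ multipath -ll                                # both VSCSI paths should now sit under one mpath device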
  • rfolco
    rfolco
    6 Posts

    Re: Power Linux - VSCSI disk redundancy not working

    ‏2012-05-23T20:58:37Z  
    • Storix
    • ‏2012-05-23T19:50:17Z
     jscheel...you just beat me to it.
     
RHEL supports native device-mapper multipath. Typically you set "user_friendly_names yes" in /etc/multipath.conf and you will see /dev/mpath/mpatha instead of sda and sdb; then it should fail over properly. This is not software RAID: with software RAID you are trying to make two disks act as one, whereas with multipath you are making one disk that appears as two devices act as one. ;-)
We do redundancy with multipath. The basic difference is that, in the case of multipath, the paths are automatically recovered once they become available again. "multipath -v3" should help you debug what is wrong. When a path is down, "multipath -ll" shows output like this:
    \_ 0:0:1:0 sda 8:0 [failed][faulty]
Multipath reads sector 0 to check paths, so if you didn't install onto the disk as a multipath device, you might run into issues just trying to configure it by hand in /etc/multipath.conf:
defaults {
path_checker readsector0
user_friendly_names yes
}
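After editing /etc/multipath.conf, something like the following should apply the change (a sketch, assuming the multipathd service from device-mapper-multipath is in use):
$ service multipathd restart
$ multipath -r    # rebuild the multipath maps
$ multipath -ll   # verify both paths are grouped under the same mpath device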
     
  • Sudhanshu_Chopra
    Sudhanshu_Chopra
    10 Posts

    Re: Power Linux - VSCSI disk redundancy not working

    ‏2012-05-24T05:23:00Z  
    • jscheel
    • ‏2012-05-23T19:45:07Z
Have you configured multipath over the disks? It doesn't sound like you've done so.
    Jscheel,
     
We have not configured multipath over the disks using any software (not even the native device-mapper multipath), as these are not directly attached FC disks, nor are they coming through NPIV.
  • Sudhanshu_Chopra
    Sudhanshu_Chopra
    10 Posts

    Re: Power Linux - VSCSI disk redundancy not working

    ‏2012-05-24T05:44:12Z  
    • Storix
    • ‏2012-05-23T19:50:17Z
     jscheel...you just beat me to it.
     
RHEL supports native device-mapper multipath. Typically you set "user_friendly_names yes" in /etc/multipath.conf and you will see /dev/mpath/mpatha instead of sda and sdb; then it should fail over properly. This is not software RAID: with software RAID you are trying to make two disks act as one, whereas with multipath you are making one disk that appears as two devices act as one. ;-)
     Storix,
As of now the visible paths are /dev/sda and /dev/sdb. Frankly speaking, we are not aware of how to configure multipath on Linux for SCSI disks that come from VIO Servers rather than from an HBA/NPIV.
     
    Updated on 2012-05-24T05:44:12Z at 2012-05-24T05:44:12Z by Sudhanshu_Chopra
  • Sudhanshu_Chopra
    Sudhanshu_Chopra
    10 Posts

    Re: Power Linux - VSCSI disk redundancy not working

    ‏2012-05-24T05:44:21Z  
    • rfolco
    • ‏2012-05-23T20:58:37Z
We do redundancy with multipath. The basic difference is that, in the case of multipath, the paths are automatically recovered once they become available again. "multipath -v3" should help you debug what is wrong. When a path is down, "multipath -ll" shows output like this:
    \_ 0:0:1:0 sda 8:0 [failed][faulty]
Multipath reads sector 0 to check paths, so if you didn't install onto the disk as a multipath device, you might run into issues just trying to configure it by hand in /etc/multipath.conf:
defaults {
path_checker readsector0
user_friendly_names yes
}
     
     
Rfolco,
As you noted, in our environment we have not configured the disk as multipath. Can you please explain how we do that for SCSI disks coming from the VIO Servers?
  • Sudhanshu_Chopra
    Sudhanshu_Chopra
    10 Posts

    Re: Power Linux - VSCSI disk redundancy not working

    ‏2012-05-24T05:49:13Z  
    • rfolco
    • ‏2012-05-23T20:58:37Z
We do redundancy with multipath. The basic difference is that, in the case of multipath, the paths are automatically recovered once they become available again. "multipath -v3" should help you debug what is wrong. When a path is down, "multipath -ll" shows output like this:
    \_ 0:0:1:0 sda 8:0 [failed][faulty]
Multipath reads sector 0 to check paths, so if you didn't install onto the disk as a multipath device, you might run into issues just trying to configure it by hand in /etc/multipath.conf:
defaults {
path_checker readsector0
user_friendly_names yes
}
     
Also, please find the output of the multipath -v3 command:
    [root@adcuxxxx log]# multipath -v3
    May 24 18:44:37 | ram0: device node name blacklisted
    May 24 18:44:37 | ram1: device node name blacklisted
    May 24 18:44:37 | ram2: device node name blacklisted
    May 24 18:44:37 | ram3: device node name blacklisted
    May 24 18:44:37 | ram4: device node name blacklisted
    May 24 18:44:37 | ram5: device node name blacklisted
    May 24 18:44:37 | ram6: device node name blacklisted
    May 24 18:44:37 | ram7: device node name blacklisted
    May 24 18:44:37 | ram8: device node name blacklisted
    May 24 18:44:37 | ram9: device node name blacklisted
    May 24 18:44:37 | ram10: device node name blacklisted
    May 24 18:44:37 | ram11: device node name blacklisted
    May 24 18:44:37 | ram12: device node name blacklisted
    May 24 18:44:37 | ram13: device node name blacklisted
    May 24 18:44:37 | ram14: device node name blacklisted
    May 24 18:44:37 | ram15: device node name blacklisted
    May 24 18:44:37 | loop0: device node name blacklisted
    May 24 18:44:37 | loop1: device node name blacklisted
    May 24 18:44:37 | loop2: device node name blacklisted
    May 24 18:44:37 | loop3: device node name blacklisted
    May 24 18:44:37 | loop4: device node name blacklisted
    May 24 18:44:37 | loop5: device node name blacklisted
    May 24 18:44:37 | loop6: device node name blacklisted
    May 24 18:44:37 | loop7: device node name blacklisted
    May 24 18:44:37 | sda: not found in pathvec
    May 24 18:44:37 | sda: mask = 0x1f
    May 24 18:44:37 | sda: dev_t = 8:0
    May 24 18:44:37 | sda: size = 83886080
    May 24 18:44:37 | sda: subsystem = scsi
    May 24 18:44:37 | sda: vendor = AIX
    May 24 18:44:37 | sda: product = VDASD
    May 24 18:44:37 | sda: rev = 0001
    May 24 18:44:37 | sda: h:b:t:l = 1:0:1:0
    May 24 18:44:37 | sda: serial = 33213600507680192828040000000000001EA04214503IBMfcp
    May 24 18:44:37 | sda: get_state
    May 24 18:44:37 | sda: path checker = directio (controller setting)
    May 24 18:44:37 | sda: checker timeout = 120000 ms (sysfs setting)
    May 24 18:44:37 | sda: state = running
    May 24 18:44:37 | directio: starting new request
    May 24 18:44:37 | directio: io finished 4096/0
    May 24 18:44:37 | sda: state = 3
    May 24 18:44:37 | sda: getuid = /lib/udev/scsi_id --whitelisted --device=/dev/%n (controller setting)
    May 24 18:44:37 | sda: uid = 3600507680192828040000000000001ea (callout)
    May 24 18:44:37 | sda: prio = const (controller setting)
    May 24 18:44:37 | sda: const prio = 1
    May 24 18:44:37 | dm-0: device node name blacklisted
    May 24 18:44:37 | dm-1: device node name blacklisted
    May 24 18:44:37 | dm-2: device node name blacklisted
    May 24 18:44:37 | dm-3: device node name blacklisted
    May 24 18:44:37 | dm-4: device node name blacklisted
    May 24 18:44:37 | dm-5: device node name blacklisted
    May 24 18:44:37 | sdb: not found in pathvec
    May 24 18:44:37 | sdb: mask = 0x1f
    May 24 18:44:37 | sdb: dev_t = 8:16
    May 24 18:44:37 | sdb: size = 83886080
    May 24 18:44:37 | sdb: subsystem = scsi
    May 24 18:44:37 | sdb: vendor = AIX
    May 24 18:44:37 | sdb: product = VDASD
    May 24 18:44:37 | sdb: rev = 0001
    May 24 18:44:37 | sdb: h:b:t:l = 0:0:1:0
    May 24 18:44:37 | sdb: serial = 33213600507680192828040000000000001EA04214503IBMfcp
    May 24 18:44:37 | sdb: get_state
    May 24 18:44:37 | sdb: path checker = directio (controller setting)
    May 24 18:44:37 | sdb: checker timeout = 120000 ms (sysfs setting)
    May 24 18:44:37 | sdb: state = running
    May 24 18:44:37 | directio: starting new request
    May 24 18:44:37 | directio: io finished 4096/0
    May 24 18:44:37 | sdb: state = 3
    May 24 18:44:37 | sdb: getuid = /lib/udev/scsi_id --whitelisted --device=/dev/%n (controller setting)
    May 24 18:44:37 | sdb: uid = 3600507680192828040000000000001ea (callout)
    May 24 18:44:37 | sdb: prio = const (controller setting)
    May 24 18:44:37 | sdb: const prio = 1
    May 24 18:44:37 | sr0: device node name blacklisted
    ===== paths list =====
    uuid                              hcil    dev dev_t pri dm_st chk_st vend/prod
    3600507680192828040000000000001ea 1:0:1:0 sda 8:0   1   undef ready  AIX,VDASD
    3600507680192828040000000000001ea 0:0:1:0 sdb 8:16  1   undef ready  AIX,VDASD
    May 24 18:44:37 | Found matching wwid [3600507680192828040000000000001ea] in bindings file. Setting alias to mpatha
    May 24 18:44:37 | sda: ownership set to mpatha
    May 24 18:44:37 | sda: not found in pathvec
    May 24 18:44:37 | sda: mask = 0xc
    May 24 18:44:37 | sda: get_state
    May 24 18:44:37 | sda: state = running
    May 24 18:44:37 | directio: starting new request
    May 24 18:44:37 | directio: io finished 4096/0
    May 24 18:44:37 | sda: state = 3
    May 24 18:44:37 | sda: const prio = 1
    May 24 18:44:37 | sdb: ownership set to mpatha
    May 24 18:44:37 | sdb: not found in pathvec
    May 24 18:44:37 | sdb: mask = 0xc
    May 24 18:44:37 | sdb: get_state
    May 24 18:44:37 | sdb: state = running
    May 24 18:44:37 | directio: starting new request
    May 24 18:44:37 | directio: io finished 4096/0
    May 24 18:44:37 | sdb: state = 3
    May 24 18:44:37 | sdb: const prio = 1
    May 24 18:44:37 | mpatha: pgfailback = -2 (controller setting)
    May 24 18:44:37 | mpatha: pgpolicy = multibus (controller setting)
    May 24 18:44:37 | mpatha: selector = round-robin 0 (controller setting)
    May 24 18:44:37 | mpatha: features = 0 (controller setting)
    May 24 18:44:37 | mpatha: hwhandler = 0 (controller setting)
    May 24 18:44:37 | mpatha: rr_weight = 1 (controller setting)
    May 24 18:44:37 | mpatha: minio = 1000 (controller setting)
    May 24 18:44:37 | mpatha: no_path_retry = 60 (controller setting)
    May 24 18:44:37 | pg_timeout = NONE (internal default)
    May 24 18:44:37 | mpatha: set ACT_CREATE (map does not exist)
    May 24 18:44:37 | mpatha: domap (0) failure for create/reload map
    May 24 18:44:37 | Found matching wwid [3600507680192828040000000000001ea] in bindings file. Setting alias to mpatha
    May 24 18:44:37 | sda: ownership set to mpatha
    May 24 18:44:37 | sda: not found in pathvec
    May 24 18:44:37 | sda: mask = 0xc
    May 24 18:44:37 | sda: get_state
    May 24 18:44:37 | sda: path checker = directio (controller setting)
    May 24 18:44:37 | sda: checker timeout = 120000 ms (sysfs setting)
    May 24 18:44:37 | sda: state = running
    May 24 18:44:37 | directio: starting new request
    May 24 18:44:37 | directio: io finished 4096/0
    May 24 18:44:37 | sda: state = 3
    May 24 18:44:37 | sda: prio = const (controller setting)
    May 24 18:44:37 | sda: const prio = 1
    May 24 18:44:37 | sdb: ownership set to mpatha
    May 24 18:44:37 | sdb: not found in pathvec
    May 24 18:44:37 | sdb: mask = 0xc
    May 24 18:44:37 | sdb: get_state
    May 24 18:44:37 | sdb: path checker = directio (controller setting)
    May 24 18:44:37 | sdb: checker timeout = 120000 ms (sysfs setting)
    May 24 18:44:37 | sdb: state = running
    May 24 18:44:37 | directio: starting new request
    May 24 18:44:37 | directio: io finished 4096/0
    May 24 18:44:37 | sdb: state = 3
    May 24 18:44:37 | sdb: prio = const (controller setting)
    May 24 18:44:37 | sdb: const prio = 1
    May 24 18:44:37 | mpatha: pgfailback = -2 (controller setting)
    May 24 18:44:37 | mpatha: pgpolicy = multibus (controller setting)
    May 24 18:44:37 | mpatha: selector = round-robin 0 (controller setting)
    May 24 18:44:37 | mpatha: features = 0 (controller setting)
    May 24 18:44:37 | mpatha: hwhandler = 0 (controller setting)
    May 24 18:44:37 | mpatha: rr_weight = 1 (controller setting)
    May 24 18:44:37 | mpatha: minio = 1000 (controller setting)
    May 24 18:44:37 | mpatha: no_path_retry = 60 (controller setting)
    May 24 18:44:37 | pg_timeout = NONE (internal default)
    May 24 18:44:37 | mpatha: set ACT_CREATE (map does not exist)
    May 24 18:44:37 | mpatha: domap (0) failure for create/reload map
     
  • rfolco
    rfolco
    6 Posts

    Re: Power Linux - VSCSI disk redundancy not working

    ‏2012-05-24T12:59:04Z  
     Storix,
As of now the visible paths are /dev/sda and /dev/sdb. Frankly speaking, we are not aware of how to configure multipath on Linux for SCSI disks that come from VIO Servers rather than from an HBA/NPIV.
     
     
    The device-mapper multipath should be configured on the server and client sides:
     
    * Server (VIOS):
VSCSI mapping requires one server/client SCSI adapter pair for each VIOS. In other words, the same disk should be mapped by both VIOSes.
- Change the error recovery attribute to fast_fail (any I/O operation will fail immediately once a broken path is detected):
$ chdev -dev fscsi0 -attr fc_err_recov=fast_fail dyntrk=yes -perm
- The disk will be accessed by both VIOSes, so change the reserve_policy to no_reserve:
$ chdev -dev hdisk1 -attr reserve_policy=no_reserve
- Do the mapping:
$ mkvdev -vdev hdisk1 -vadapter vhost0 -dev Linux_rootvg
- List the mappings:
$ lsmap -all
     
    * Client (Linux):
Assuming the configuration on the server side is done (the dual-VIOS mapping is set up), you have to install onto the disk as a multipath device, as shown here:
    http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Installation_Guide/images/storagedevices/selectstoragedevices-multipath.png
Notice that the two paths should be detected automatically, and you don't need to change anything after the installation completes.
You can also check multipath -ll, multipath -v3, and multipath.conf. Shut down one of the VIOSes and monitor /var/log/messages; then reactivate the VIOS and check whether the path comes back up.
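For reference, a minimal /etc/multipath.conf sketch for these ibmvscsi disks could look like the following. The vendor/product strings match the AIX,VDASD values that multipath -v3 reports earlier in this thread; the remaining values are only suggestions and should be tuned for your environment:
defaults {
    user_friendly_names yes
}
devices {
    device {
        vendor "AIX"
        product "VDASD"
        path_grouping_policy multibus
        path_checker directio
        no_path_retry 60
        failback immediate
    }
}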
     
    The "IBM PowerVM Virtualization Introduction and Configuration" Redbook has more details on these steps.
     
Hope that helps. Please let me know if there are any other issues or questions.
     
    --Rafael
     
  • Sudhanshu_Chopra
    Sudhanshu_Chopra
    10 Posts

    Re: Power Linux - VSCSI disk redundancy not working

    ‏2012-05-25T17:59:57Z  
    • rfolco
    • ‏2012-05-24T12:59:04Z
     
    The device-mapper multipath should be configured on the server and client sides:
     
    * Server (VIOS):
VSCSI mapping requires one server/client SCSI adapter pair for each VIOS. In other words, the same disk should be mapped by both VIOSes.
- Change the error recovery attribute to fast_fail (any I/O operation will fail immediately once a broken path is detected):
$ chdev -dev fscsi0 -attr fc_err_recov=fast_fail dyntrk=yes -perm
- The disk will be accessed by both VIOSes, so change the reserve_policy to no_reserve:
$ chdev -dev hdisk1 -attr reserve_policy=no_reserve
- Do the mapping:
$ mkvdev -vdev hdisk1 -vadapter vhost0 -dev Linux_rootvg
- List the mappings:
$ lsmap -all
     
    * Client (Linux):
Assuming the configuration on the server side is done (the dual-VIOS mapping is set up), you have to install onto the disk as a multipath device, as shown here:
    http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Installation_Guide/images/storagedevices/selectstoragedevices-multipath.png
Notice that the two paths should be detected automatically, and you don't need to change anything after the installation completes.
You can also check multipath -ll, multipath -v3, and multipath.conf. Shut down one of the VIOSes and monitor /var/log/messages; then reactivate the VIOS and check whether the path comes back up.
     
    The "IBM PowerVM Virtualization Introduction and Configuration" Redbook has more details on these steps.
     
Hope that helps. Please let me know if there are any other issues or questions.
     
    --Rafael
     
     Thanks Rafael for your support.
     
Our issue is now resolved. It came down to two things:
1) We did the RHEL 6.1 installation using only one path, due to which multipath was not working for the root filesystem.
To solve this we followed the link below:
     
2) Automatic recovery of failed paths in multipath was not happening. This was due to a bug in the RHEL 6.1 kernel, which was resolved by the following erratum (a minimal update sketch follows below):
http://rhn.redhat.com/errata/RHSA-2011-1530.html
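A minimal sketch of pulling in that erratum, assuming the system is registered to RHN or to a satellite that carries the update:
$ yum update kernel
$ reboot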
     -------------------------------------------------------
One more question: if we take a FlashCopy of the RHEL OS disk and use it on another IBM machine (for example, the existing one runs on a p570 and the new one will run on a p595), will it work? We will take the FlashCopy using IBM SVC.
     
    Thanks all for your support. 
     
  • Storix
    Storix
    9 Posts

    Re: Power Linux - VSCSI disk redundancy not working

    ‏2012-05-25T21:01:29Z  
     Thanks Rafael for your support.
     
Our issue is now resolved. It came down to two things:
1) We did the RHEL 6.1 installation using only one path, due to which multipath was not working for the root filesystem.
To solve this we followed the link below:
     
2) Automatic recovery of failed paths in multipath was not happening. This was due to a bug in the RHEL 6.1 kernel, which was resolved by the following erratum:
http://rhn.redhat.com/errata/RHSA-2011-1530.html
     -------------------------------------------------------
One more question: if we take a FlashCopy of the RHEL OS disk and use it on another IBM machine (for example, the existing one runs on a p570 and the new one will run on a p595), will it work? We will take the FlashCopy using IBM SVC.
     
    Thanks all for your support. 
     
In theory that might work, but I would not want to do this for a production system. From my understanding, a FlashCopy snapshot is not a complete copy of the data; it is just a way to present links to the original data. If the original data changes, the snapshot copy preserves the original version; if you change the snapshot copy, the original is preserved on the original slice (look up "copy on write"). This avoids storing redundant data. To move to dissimilar hardware, for the OS you will likely need to change the storage configuration, which SCSI and network drivers are used, network settings, etc. I am being biased here, but I would recommend taking a backup of the original and cloning it instead. You could perhaps take a snapshot of the datavg, but the rootvg (OS) should probably not be shared between two production systems. Just my opinion.
  • rfolco
    rfolco
    6 Posts

    Re: Power Linux - VSCSI disk redundancy not working

    ‏2012-05-26T02:42:21Z  
     Thanks Rafael for your support.
     
Our issue is now resolved. It came down to two things:
1) We did the RHEL 6.1 installation using only one path, due to which multipath was not working for the root filesystem.
To solve this we followed the link below:
     
2) Automatic recovery of failed paths in multipath was not happening. This was due to a bug in the RHEL 6.1 kernel, which was resolved by the following erratum:
http://rhn.redhat.com/errata/RHSA-2011-1530.html
     -------------------------------------------------------
One more question: if we take a FlashCopy of the RHEL OS disk and use it on another IBM machine (for example, the existing one runs on a p570 and the new one will run on a p595), will it work? We will take the FlashCopy using IBM SVC.
     
    Thanks all for your support. 
     
I think using a kickstart file to automate the other installations would be safer. If you clone your system, the devices and configurations will be different on the target machine, and that could give you more trouble.
  • jscheel
    jscheel
    67 Posts

    Re: Power Linux - VSCSI disk redundancy not working

    ‏2012-05-26T12:55:30Z  
     Thanks Rafael for your support.
     
Our issue is now resolved. It came down to two things:
1) We did the RHEL 6.1 installation using only one path, due to which multipath was not working for the root filesystem.
To solve this we followed the link below:
     
2) Automatic recovery of failed paths in multipath was not happening. This was due to a bug in the RHEL 6.1 kernel, which was resolved by the following erratum:
http://rhn.redhat.com/errata/RHSA-2011-1530.html
     -------------------------------------------------------
One more question: if we take a FlashCopy of the RHEL OS disk and use it on another IBM machine (for example, the existing one runs on a p570 and the new one will run on a p595), will it work? We will take the FlashCopy using IBM SVC.
     
    Thanks all for your support. 
     
Sudhanshu, thanks for your excellent postings. I appreciate the detailed explanation of how you fixed your issues; it should be very helpful to folks with similar issues in the future.
     
As to your FlashCopy question, my experience is that it will *almost* work. I believe you may need to modify a file or two on the new system to get the volumes mounted correctly, because the system will be looking for them by the UUIDs from the old images and the new storage volumes will likely have new UUIDs. If you have flashed root, you may need to boot into a lower runlevel to adjust these files.
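As a hedged sketch, these are the places usually worth checking on a RHEL 6 system after such a clone (the file names are the standard ones; whether each needs editing depends on the setup):
$ blkid                       # list the UUIDs the new volumes actually carry
$ vi /etc/fstab               # fix any UUID= or /dev/mapper entries that changed
$ vi /etc/multipath/bindings  # per-host mapping of mpath aliases to WWIDs
$ dracut --force              # rebuild the initramfs so early boot sees the new devices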
     
    If you do try it, let us know how it works and which files you had to modify.
    Updated on 2012-05-26T12:55:30Z at 2012-05-26T12:55:30Z by jscheel
  • Sudhanshu_Chopra
    Sudhanshu_Chopra
    10 Posts

    Re: Power Linux - VSCSI disk redundancy not working

    ‏2012-05-28T18:08:46Z  
    • jscheel
    • ‏2012-05-26T12:53:35Z
Sudhanshu, thanks for your excellent postings. I appreciate the detailed explanation of how you fixed your issues; it should be very helpful to folks with similar issues in the future.
     
As to your FlashCopy question, my experience is that it will *almost* work. I believe you may need to modify a file or two on the new system to get the volumes mounted correctly, because the system will be looking for them by the UUIDs from the old images and the new storage volumes will likely have new UUIDs. If you have flashed root, you may need to boot into a lower runlevel to adjust these files.
     
    If you do try it, let us know how it works and which files you had to modify.
     Jscheel,
We did a FlashCopy and everything worked fine. The only differences we faced were:
1) The multipath device alias automatically changed from mpatha to mpathb (we didn't change anything).
2) Instead of using eth0 for the network adapter, we configured eth1 (eth0 was not working, so we configured eth1).
    ----------------------------------------------------- 
Can you please guide us: if we want to dynamically add a new network adapter to the client, how do we scan for the new virtual Ethernet adapter on RHEL 6.1 for PPC?
     
     
     
  • Sudhanshu_Chopra
    Sudhanshu_Chopra
    10 Posts

    Re: Power Linux - VSCSI disk redundancy not working

    ‏2012-05-28T18:10:12Z  
    • Storix
    • ‏2012-05-25T21:01:29Z
In theory that might work, but I would not want to do this for a production system. From my understanding, a FlashCopy snapshot is not a complete copy of the data; it is just a way to present links to the original data. If the original data changes, the snapshot copy preserves the original version; if you change the snapshot copy, the original is preserved on the original slice (look up "copy on write"). This avoids storing redundant data. To move to dissimilar hardware, for the OS you will likely need to change the storage configuration, which SCSI and network drivers are used, network settings, etc. I am being biased here, but I would recommend taking a backup of the original and cloning it instead. You could perhaps take a snapshot of the datavg, but the rootvg (OS) should probably not be shared between two production systems. Just my opinion.
     Storix,
Thanks for your response. While taking the FlashCopy, we shut down the source OS and then did the copy mapping of the disk.
  • Sudhanshu_Chopra
    Sudhanshu_Chopra
    10 Posts

    Re: Power Linux - VSCSI disk redundancy not working

    ‏2012-05-28T18:12:48Z  
    • rfolco
    • ‏2012-05-26T02:42:21Z
I think using a kickstart file to automate the other installations would be safer. If you clone your system, the devices and configurations will be different on the target machine, and that could give you more trouble.
     Rafael,
If we do a kickstart installation, then in order to avoid the above issue, how do we specify multipath (multiple paths to a single disk, like sda and sdb) in the kickstart file?
  • rfolco
    rfolco
    6 Posts

    Re: Power Linux - VSCSI disk redundancy not working

    ‏2012-05-28T18:37:23Z  
     Rafael,
If we do a kickstart installation, then in order to avoid the above issue, how do we specify multipath (multiple paths to a single disk, like sda and sdb) in the kickstart file?
The kickstart suggestion was just for the case where you have already completed a successful multipath install. With that assumption, you could simply try replicating your mpath installation onto another system using kickstart. Otherwise, the clone with a few adjustments seems to be the most appropriate option.
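As a hedged sketch only (please verify the exact device syntax against the kickstart chapter of the RHEL 6 Installation Guide), the storage lines of a kickstart file can refer to the multipath device by its WWID rather than by sda/sdb. Using the WWID reported by multipath -v3 earlier in this thread:
# restrict the installer to the multipath device and partition it
ignoredisk --only-use=disk/by-id/dm-uuid-mpath-3600507680192828040000000000001ea
clearpart --all --initlabel --drives=disk/by-id/dm-uuid-mpath-3600507680192828040000000000001ea
part /boot --fstype=ext4 --size=500 --ondisk=disk/by-id/dm-uuid-mpath-3600507680192828040000000000001ea
part / --fstype=ext4 --size=1 --grow --ondisk=disk/by-id/dm-uuid-mpath-3600507680192828040000000000001ea
# the PPC PReP Boot partition, swap and any LVM layout follow the usual kickstart syntax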
  • JayFurmanek
    JayFurmanek
    113 Posts

    Re: Power Linux - VSCSI disk redundancy not working

    ‏2012-05-29T16:17:59Z  
     Jscheel,
We did a FlashCopy and everything worked fine. The only differences we faced were:
1) The multipath device alias automatically changed from mpatha to mpathb (we didn't change anything).
2) Instead of using eth0 for the network adapter, we configured eth1 (eth0 was not working, so we configured eth1).
    ----------------------------------------------------- 
Can you please guide us: if we want to dynamically add a new network adapter to the client, how do we scan for the new virtual Ethernet adapter on RHEL 6.1 for PPC?
     
     
     
Hi Sudhanshu,
     
    Dynamic I/O operations require the installation of the Service and Productivity Tools. See here:
    http://www-304.ibm.com/webapp/set2/sas/f/lopdiags/home.html
     
    Installation has been made really easy with the available YUM repository. Info and instructions on use are here:
    http://www-304.ibm.com/webapp/set2/sas/f/lopdiags/yum.html
     
    Once set up properly, you won't have to 'scan' for a newly added adapter. It should just show up.
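A hedged install sketch, assuming the IBM Tools repository from the second link has been configured (the package names below are the usual ones for a PowerVM-managed RHEL LPAR and may differ between releases):
$ yum install librtas powerpc-utils DynamicRM devices.chrp.base.ServiceRM rsct.core
# once the RSCT/RMC daemons are running and the HMC shows an active RMC connection
# to the LPAR, a virtual Ethernet adapter added via DLPAR should appear on its own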