Before managing Hitachi storage

When using the Hitachi Block Storage Driver (HBSD) to deploy Hitachi storage, ensure that your environment is set up appropriately and that all requirements are met.

Before managing Hitachi storage - environment setup

  1. Make sure the user has the following roles assigned.
    1. Administrator user group (a built-in user group) OR
    2. All of the roles listed below:
      • Storage Administrator (View Only)
      • Storage Administrator (Provisioning)
      • Storage Administrator (Local Copy)
      • Storage Administrator (Performance Management)

      And one of the following roles assigned:

      • Audit Log Administrator (View & Modify)
      • Audit Log Administrator (View Only)
      • Security Administrator (View & Modify)
      • Security Administrator (View Only)
      • Support Personnel
      • User Maintenance
  2. Existing virtual machines that use Hitachi storage and that you plan to bring under PowerVC management might require some changes. The controller host group names that those virtual machines use must be renamed to conform to the HBSD host group naming convention, which is HBSD-lowercase_initiator_wwpn, where lowercase_initiator_wwpn is the lowest-valued NPIV or VSCSI WWPN, converted to lower case. For example, HBSD-c05076012345678a would be an acceptable name, but HBSD-C05076012345678A would not. A short sketch of this name derivation follows this procedure.
  3. Ensure that the provider is ready for REST communication by installing and configuring a Hitachi Configuration Manager REST API server. Follow these steps to install and configure the Configuration Manager REST API server:
    1. Download and install the Hitachi Configuration Manager REST API from the Hitachi Developer Network on an x86 virtual machine.
    2. Configure secure communications between the PowerVC management server and the Configuration Manager REST API by configuring and installing an appropriate certificate. See Setting up SSL communication in the Hitachi Command Suite Configuration Manager REST API Reference Guide.
    3. Register the Hitachi storage system with the Configuration Manager REST API server. For instructions, see the Hitachi Command Suite Configuration Manager REST API Reference Guide. A basic check that the server is reachable and lists the registered storage system is sketched after this procedure.
      Note: When deploying multiple virtual machines simultaneously, the Hitachi driver issues concurrent requests to the Hitachi Configuration Manager REST API server. Therefore, this server must be sized appropriately to handle the requests. To enable higher scaling of these requests, each Hitachi array might need a private Configuration Manager REST API server. Refer to the Hitachi documentation for sizing recommendations.
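
The host group naming rule in step 2 can be checked mechanically. The following sketch is only illustrative: it assumes the virtual machine's NPIV or VSCSI initiator WWPNs are available as plain hexadecimal strings, and the function name and sample values are not taken from any PowerVC or HBSD interface. It derives the expected name by selecting the lowest-valued WWPN and converting it to lower case.

  # Sketch: derive the HBSD host group name expected for a virtual machine.
  # Assumes the NPIV or VSCSI initiator WWPNs are 16-digit hex strings
  # (with or without colons); names and values here are illustrative.

  def expected_hbsd_host_group(initiator_wwpns):
      """Return the HBSD-style host group name for these initiator WWPNs."""
      # Normalize: strip colons and lower-case so comparisons are consistent.
      normalized = [w.replace(":", "").lower() for w in initiator_wwpns]
      # The convention uses the lowest-valued WWPN, compared numerically.
      lowest = min(normalized, key=lambda w: int(w, 16))
      return "HBSD-" + lowest

  wwpns = ["C050760123456790", "C05076012345678A"]
  print(expected_hbsd_host_group(wwpns))   # HBSD-c05076012345678a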
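
To confirm that step 3 is complete, you can query the Configuration Manager REST API server for its registered storage systems before registering the storage in PowerVC. The sketch below relies on assumptions: the host name and credentials are placeholders, and the HTTPS port (23451), resource path (/ConfigurationManager/v1/objects/storages), and response fields should be verified against the Hitachi Command Suite Configuration Manager REST API Reference Guide for your release.

  # Sketch: confirm that the Configuration Manager REST API server responds
  # and that the Hitachi storage system appears in its registered storage list.
  # Host name, credentials, port, and response fields are assumptions; verify
  # them against the Configuration Manager REST API Reference Guide.
  import requests

  CM_REST = "https://cmrest.example.com:23451"    # placeholder server address
  STORAGES = CM_REST + "/ConfigurationManager/v1/objects/storages"

  # verify=True checks the server certificate against the trust store, which
  # also exercises the SSL configuration performed earlier in step 3.
  resp = requests.get(STORAGES, auth=("rest-user", "rest-password"), verify=True)
  resp.raise_for_status()

  for storage in resp.json().get("data", []):
      # Each entry describes one storage system registered with the server.
      print(storage.get("storageDeviceId"), storage.get("model"))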

Registering Hitachi storage

When registering Hitachi storage, first fill out the Specify the Fibre Channel ports available to PowerVC field. This specifies the Fibre Channel ports that PowerVC can use. Choose this list carefully because it cannot easily be changed after registration.

Next, edit the Specify the Fibre Channel ports for the default template field. These are the ports included in the default storage template. You can only select ports that you made available to PowerVC.

Install a certificate on the Hitachi storage device to enable secure HTTPS communications with PowerVC. PowerVC validates the certificate against the system trust store. For certificates signed by an internal CA, the CA root and intermediate certificates must be present in the system trust store to ensure successful certificate validation.
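
Before registration, it can be useful to confirm that the certificate chain presented by the Hitachi storage device verifies against the system trust store. The following sketch uses the Python standard ssl module; the host name and port are placeholders, and it only approximates the validation that PowerVC performs, so treat it as a quick check rather than the product's own logic.

  # Sketch: verify the certificate presented by the Hitachi storage device
  # against the system trust store. Host and port are placeholders.
  import socket
  import ssl

  HITACHI_HOST = "hitachi-svp.example.com"   # placeholder management address
  HITACHI_PORT = 443                         # placeholder HTTPS port

  # create_default_context() loads the system CA store and enables host name
  # checking; an internal CA's root and intermediate certificates must already
  # be installed in that store for the handshake to succeed.
  context = ssl.create_default_context()

  with socket.create_connection((HITACHI_HOST, HITACHI_PORT), timeout=10) as sock:
      with context.wrap_socket(sock, server_hostname=HITACHI_HOST) as tls:
          # wrap_socket raises ssl.SSLCertVerificationError if validation fails.
          print("Certificate verified for", HITACHI_HOST)
          print("Subject:", tls.getpeercert().get("subject"))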

After registration, you can create additional storage templates that specify different Fibre Channel ports. However, you will only be able to choose Fibre Channel ports that were made available to PowerVC.

Hitachi storage templates

Requirements and recommendations when specifying the Fibre Channel ports in the storage template:
  • For all volumes
    • Consult Hitachi publications for best practices and limitations that are associated with target port selection for your storage array.
    • When a new volume is attached to a preexisting virtual machine that was brought under PowerVC management, be sure to maintain consistent target port usage. It is recommended that you use a storage template that specifies the same set of ports that the virtual machine already uses. Alternatively, specify a set of ports that does not overlap the ports that the virtual machine is already using.
    • There are a limited number of WWPNs that can be mapped to a single Fibre Channel port on the controller. Also, when you use different sets of target ports to attach volumes to virtual machines, additional resources are consumed when the volume is mapped to the virtual machine. Therefore, plan carefully before using different sets of target ports to attach volumes to virtual machines.
    • The Fibre Channel target ports that are specified in the storage template should be dedicated to a single PowerVC instance to avoid contention for the same group of target ports on the controller.
  • For NPIV connected volumes
    • The set of Fibre Channel target ports that are used to map volumes to a virtual machine by using NPIV connectivity must be used consistently for a virtual machine. The Hitachi storage array maps all of the Fibre Channel target ports in the volume's storage template to the virtual machine. When using NPIV connections, the Hitachi volume driver and storage array allow different sets of target ports to be used to connect a volume to a virtual machine if each set of target ports is used consistently. For example, assume the following scenario:
      • A volume is being deployed to virtual machine hds-1 by using storage template hds-template.
      • The template hds-template specifies that Hitachi VSP target ports CL1-A and CL1-B should be used to map the volume to the virtual machine.
      • The volume will be connected to hds-1 via NPIV.

      In the example scenario, no other volumes should be deployed to hds-1 that specify other target ports and either CL1-A or CL1-B. That is, specifying CL1-C and CL1-D would be valid, but specifying CL1-A and CL1-C would not.

      If target ports are mixed when attaching volumes to a virtual machine, the Hitachi volume driver might not be able to use all of the specified target ports to map the volume to the virtual machine. If that happens, the virtual machine might not be able to discover the volume. Therefore, it is recommended that you use the same storage template to attach all NPIV connected volumes to a virtual machine. This practice prevents mixing target ports when volumes with NPIV connections are attached to a virtual machine. In the previous example, all volumes that are being attached to the virtual machine hds-1 could use storage template hds-template.

    • PowerVC zones one initiator Fibre Channel port to one target Fibre Channel port from the backing storage. Therefore, it is recommended that the number of target ports per fabric that is specified in the storage template is not greater than the number of NPIV ports that are allocated to the virtual machine per fabric. Review the I/O characteristics that are required by the virtual machine when determining the required number of target ports. The storage connectivity group has options that control the connectivity characteristics from the host port side.
  • For VSCSI connected volumes
    • The set of Fibre Channel target ports that are used to map volumes to a virtual machine by using VSCSI connectivity must be used consistently for a Virtual I/O Server. The Hitachi storage array maps all of the Fibre Channel target ports in the volume's storage template to the Virtual I/O Server. The Hitachi volume driver and storage array allow different sets of target ports to be used to connect a volume to a Virtual I/O Server if each set of target ports is used consistently. For example, assume the following scenario:
      • A volume is being deployed to virtual machine hds-2 by using storage template hds-template2.
      • The template hds-template2 specifies that Hitachi VSP target ports CL2-A and CL2-B should be used to map the volume to the Virtual I/O Server.
      • The volume will be connected to hds-2 via VSCSI.

      In the example scenario, CL2-A and CL2-B should not be mixed with other target ports from a different storage template for VSCSI connections through that Virtual I/O Server. That is, specifying CL2-C and CL2-D would be valid, but specifying CL2-A and CL2-C would not.

      If target ports are mixed when attaching volumes to a virtual machine, the volume driver might not be able to use all of the specified target ports to map the volume to the virtual machine. If that happens, the virtual machine might not be able to discover the volume. Therefore, it is recommended that you use the same storage template to attach all VSCSI connected volumes to a virtual machine. In the previous example, all volumes that are being attached to the virtual machine hds-2 could use storage template hds-template2. A short sketch of this disjoint-or-identical port check follows this list.
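
Both the NPIV and the VSCSI guidance above reduce to the same rule: for a given virtual machine (NPIV) or Virtual I/O Server (VSCSI), the target port set in a new volume's storage template must either be identical to a set that is already in use or share no ports with it. The following sketch expresses that check; the port names and the existing-attachment data are illustrative and are not read from any PowerVC or HBSD interface.

  # Sketch: check the "disjoint or identical" rule for target port sets.
  # existing_port_sets holds the sets of target ports already used to attach
  # volumes to one virtual machine (NPIV) or one Virtual I/O Server (VSCSI).

  def is_valid_port_set(new_ports, existing_port_sets):
      """Return True if new_ports is identical to or disjoint from every existing set."""
      new_set = set(new_ports)
      for used in existing_port_sets:
          used_set = set(used)
          if new_set != used_set and new_set & used_set:
              # Partial overlap: some, but not all, ports are shared.
              return False
      return True

  existing = [{"CL1-A", "CL1-B"}]                         # ports already used for hds-1
  print(is_valid_port_set({"CL1-C", "CL1-D"}, existing))  # True  (disjoint)
  print(is_valid_port_set({"CL1-A", "CL1-C"}, existing))  # False (partial overlap)
  print(is_valid_port_set({"CL1-A", "CL1-B"}, existing))  # True  (identical)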