The VIOS Install Instructions and Readme files can be found here:
New on 11th Nov 2016: VIOS release 2.2.5 with its first service pack.
The VIOS 2.2.5 Readme, which includes the installation instructions, can be found here:
In it we find the usual limits, but also a few new rules covering the increased number of VIOS nodes. Please read the "Readme" in full.
Here I will highlight the important points. New information is highlighted in Green.
- Minimum CPU: POWER6 and later with 1 CPU of guaranteed entitlement
- Minimum memory: 4 GB
- Minimum disk: 1 Fibre Channel attached disk for the repository (>= 1 GB) plus 10 GB for data (10 GB is a ridiculously small size for the data in a real SSP)
see Special Notes below
- Number of VIOS Nodes in Cluster
- Number of Physical Disks in Pool
- Number of Virtual Disk (LU) Mappings in Pool
- Number of Client LPARs per VIOS node
- Capacity of Physical Disks in Pool
- Storage Capacity of Storage Pool
- Capacity of a Virtual Disk (LU) in Pool
- Number of Repository Disks
- Capacity of Repository Disk
- Number of Client LPARs per Cluster
* Large SSP special notes:
- Over 16 VIOS Nodes requires that the SYSTEM (metadata) tier contains only SSD storage.
- Over 250 Client LPARs per VIOS requires each VIOS have at least 4 CPUs and 8 GB memory.
- Maximum physical volumes added to or replaced at one time: 64
- Cluster name < 63 characters
- Pool name < 127 characters
- I found this part confusing: LU size can be up to 4 TB. However, it is recommended to limit the size of individual LUNs to 16 GB for optimal performance in cases where all of the following conditions are met:
- The server generates a random access pattern for the I/O device.
- There are more than 8 processes concurrently performing I/O.
- The performance of the application is dependent on the I/O subsystem throughput.
- I discussed this with the SSP architect - I would rework these words as follows:
- For I/O intensive SSP clients (an application with a disk I/O bottleneck, a random access pattern and many concurrent I/O processes), using one massive LU is not recommended. Instead, a number of smaller LUs will yield optimal performance. For example, for 256 GB of disk space, use 16 x 16 GB LUs.
- VIOS file system /var >= 3 GB to ensure proper logging.
- The SSP network must be highly reliable and not congested.
- The SSP network can use IPv4 or IPv6, but not both.
- SSP network name resolution should use the local /etc/hosts first and then DNS, with forward and reverse lookup working.
- Keep the VIOS nodes' clocks synchronised - use NTP.
- SSP vSCSI may use more CPU cycles.
- Using SCSI reservations (SCSI Reserve/Release and SCSI-3 Reserve) for fencing physical disks in the Shared Storage pool is not supported.
- SANCOM will not be supported in a Shared Storage Pool environment but no one knows what SANCOM is!!
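To act on the reworked LU advice above (many small LUs rather than one massive one), here is a minimal sketch. The cluster name "ssp_cluster", pool name "default" and LU prefix "applu" are hypothetical examples, and the loop only prints the padmin commands for review rather than running them - check the mkbdsp syntax against your VIOS level before use.

```shell
# Print the padmin commands to carve 256 GB into 16 x 16 GB LUs.
# "ssp_cluster", "default" and "applu" are made-up names - substitute your own.
i=1
while [ "$i" -le 16 ]; do
    printf 'mkbdsp -clustername ssp_cluster -sp default 16G -bd applu%02d\n' "$i"
    i=$((i + 1))
done
```

Pipe the output into a file, eyeball it, then paste the commands into the padmin shell once you are happy.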
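The /var >= 3 GB rule above is easy to pre-check from a shell. This sketch uses POSIX `df -kP`; on AIX/VIOS the usual fix for an undersized /var is `chfs` (run as root via oem_setup_env), but treat that command as an assumption to verify on your system.

```shell
# Warn if /var is below the 3 GB minimum needed for proper SSP logging.
# On AIX the fix would be something like: chfs -a size=3G /var
size_kb=$(df -kP /var | awk 'NR==2 {print $2}')
min_kb=$((3 * 1024 * 1024))   # 3 GB expressed in KB
if [ "$size_kb" -lt "$min_kb" ]; then
    echo "/var is ${size_kb} KB - below the 3 GB minimum"
else
    echo "/var is ${size_kb} KB - OK"
fi
```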
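The /etc/hosts-then-DNS rule above can also be pre-checked. This sketch uses a throwaway sample hosts file and made-up node names (vios1, vios2, vios3); on a real VIOS you would point hosts_file at /etc/hosts and list your actual cluster nodes.

```shell
# Check every cluster node has a hosts-file entry before relying on DNS.
# The sample file contents and node names below are made up.
hosts_file=$(mktemp)
cat > "$hosts_file" <<'EOF'
10.0.0.11 vios1
10.0.0.12 vios2
EOF
missing=0
for node in vios1 vios2 vios3; do
    grep -qw "$node" "$hosts_file" || { echo "no entry for $node"; missing=$((missing + 1)); }
done
echo "$missing node(s) missing an /etc/hosts entry"
rm -f "$hosts_file"
```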
Shared Storage Pool capabilities and limitations
- SSP only supports vSCSI to the virtual machines.
- The SSP vSCSI must not be set-up in "Any client partition can connect" mode.
- SSP VIOS Shared Ethernet Adapter(s) (SEA) must be set up for threaded mode (the default mode). SEA in interrupt mode is not supported with SSP.
- SSP disk space cannot be used for a Paging Space Partition (PSP) nor as Active Memory Sharing (AMS) paging space.
Installing VIOS 2.2.5
- rootvg needs at least 30 GB of disk space with at least 4 GB free.
- Nigel's Note: I would make it at least double that to allow long term logging, nmon/part performance data and diagnostic snap file collection.
- - - The End - - -