If you download any software from this web site, please be aware of the Warranty Disclaimer and Limitation of Liabilities.
|upstream kernel 2.6.25||2008-05-07: kernel 2.6.25 - upstream|
|linux-2.6.25-s390-00.tar.gz / MD5||2008-05-07: kernel 2.6.25 - "Development Stream" patch 00|
|linux-2.6.25-s390-01.tar.gz / MD5||2008-05-07: kernel 2.6.25 - "Development Stream" patch 01|
To download linux-2.6.25.tar.gz visit: http://www.kernel.org/pub/linux/kernel/v2.6
The upstream kernel 2.6.25 contains the following functionality developed by the Linux on System z development team:
- kernel (new function): Standby CPU activation/deactivation.
- With this feature it is possible to make use of standby CPUs for instruction execution.
A CPU can be in one of the states "configured", "standby", or "reserved". Before a CPU can be used for instruction execution, it must be in the "configured" state. Previously, the kernel could only operate with "configured" CPUs. With this feature, the state of "standby" CPUs can be changed to "configured" and vice versa via a sysfs attribute.
This support is available only on IBM System z10, when running Linux on System z in an LPAR.
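As a sketch of how this sysfs interface could be used (the CPU number is an arbitrary example, and the "configure" attribute path is an assumption based on the standard sysfs CPU layout):

```shell
# Sketch, assuming a per-CPU "configure" attribute; cpu2 is an arbitrary
# example CPU that is currently in the "standby" state.

# Move the standby CPU to the "configured" state so it can execute instructions:
echo 1 > /sys/devices/system/cpu/cpu2/configure

# Bring it online for the scheduler (standard Linux CPU hotplug):
echo 1 > /sys/devices/system/cpu/cpu2/online

# Later, return it to "standby" (it must be taken offline first):
echo 0 > /sys/devices/system/cpu/cpu2/online
echo 0 > /sys/devices/system/cpu/cpu2/configure
```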
- kernel (new function): Add fallback driver for in-kernel crypto AES-s390.
- CPUs prior to IBM System z10 support only AES with 128-bit keys in hardware.
This patch adds software fallback support for the other key lengths which may be required. The generic algorithm and the block mode must be available in case of a fallback.
- kernel (new function): Shutdown Actions Interface.
- The new shutdown actions interface allows specifying, for each shutdown trigger (halt, power off, reboot, panic), one of the five available shutdown actions (stop, ipl, reipl, dump, vmcmd).
A sysfs interface under /sys/firmware is provided for that purpose.
Possible use cases are, for example, automatically triggering a vmdump in case of a kernel panic, or executing the z/VM logoff command on halt.
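The two use cases mentioned above could be set up roughly as follows (the exact attribute names under /sys/firmware are an assumption; check your kernel's sysfs layout):

```shell
# Sketch, assuming shutdown-action attributes under /sys/firmware.

# Trigger a dump automatically on kernel panic:
echo dump > /sys/firmware/shutdown_actions/on_panic

# Run a z/VM CP command on halt (here: LOGOFF):
echo vmcmd > /sys/firmware/shutdown_actions/on_halt
echo LOGOFF > /sys/firmware/vmcmd/on_halt

# Inspect the current setting:
cat /sys/firmware/shutdown_actions/on_panic
```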
- dasd (new function): Add support for HyperPAV to DASD device driver.
- Parallel access volumes (PAV) is a storage server feature that allows starting multiple channel programs on the same DASD in parallel. It defines alias devices which can be used as alternative paths to the same disk. (See IBM U.S. Hardware Announcement 106-811, "IBM System Storage DS8000 series (machine type 2107) delivers HyperPAV")
With the old base-PAV support, only rudimentary functionality was needed in the DASD device driver. As the mapping between base and alias devices was static, the driver just had to export an identifier (uid) and could leave the combining of devices to external layers such as device-mapper multipath.
HyperPAV removes the requirement to dedicate alias devices to specific base devices. Instead, each alias device can be combined with multiple base devices on a per-request basis. This requires full support by the DASD device driver, as each channel program itself now has to identify the target base device.
The DASD device driver and the ECKD discipline are changed so that HyperPAV is activated automatically when the necessary prerequisites are met.
If the prerequisites for HyperPAV are not met, base-PAV is used if the PAV feature is enabled on the storage server. Otherwise the DASD driver works without using PAV.
For more details refer to "How to Improve Performance with PAV" on the "Documentation" page.
This patch contains the following functionality for Linux on System z:
- dasd (new function): SIM handling.
- With this feature the system reports system information messages (SIM) to the user. The System Reference Code (SRC), which is part of the SIM, is reported to the user and allows looking up the cause of the SIM online in the documentation of the storage server.
- kernel (new function): Merged CTC/CTCMPC driver CTCM.
- The CTCM driver supports the channel-to-channel connections of the old CTC driver plus an additional MPC protocol to provide SNA connectivity (which was formerly provided via the separate CTCMPC driver).
Note that the MPC protocol is used when running Communication Server for Linux on System z (CSL).
- kernel (new function): Add support for large random numbers.
- Allow user space applications to access large amounts of truly random data. The random data source is the built-in hardware random number generator on the CEX2C cards.
- kernel (new function): STSI change for capacity provisioning.
- Make the permanent and temporary capacity information as provided by the STSI instruction of the IBM System z10 available to user space via /proc/sysinfo.
Using this support when running Linux on System z on IBM System z10 as a VM guest requires z/VM 5.3.
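The new fields can be inspected directly from user space; a minimal check (the exact field names in /proc/sysinfo vary by machine and are not spelled out here):

```shell
# Sketch: show the capacity-related lines that the STSI instruction
# exposes via procfs on System z.
grep -i capacity /proc/sysinfo
```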
- kernel (new function): Support for hardware accelerated in-kernel crypto.
- Add support for the new hardware accelerated crypto algorithms of the IBM System z10.
The new algorithms are SHA-512 (including SHA-384) and AES-192, AES-256.
This support is available only on IBM System z10, running Linux on System z in an LPAR or as a VM guest.
- kernel (new function): CPU node affinity.
- With this feature the kernel uses CPU topology information as supplied by the IBM System z10. This information is used by the scheduler to build scheduling domains and should increase overall performance on SMP machines.
This support is available only on IBM System z10, when running Linux on System z in an LPAR.
- kernel (new function): Vertical CPU management.
- With this feature it is possible to switch between horizontal and vertical CPU polarization via a sysfs attribute.
If vertical CPU polarization is active, the hypervisor dispatches certain CPUs for longer time slices than others to maximize performance.
There are three different types of vertical CPUs: high, medium and low. "Low" CPUs get hardly any real CPU time, while "high" CPUs get a full real CPU; "medium" CPUs get something in between.
By default the old horizontal CPU polarization is active.
This support is available only on z10, running Linux on System z in an LPAR.
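A sketch of the sysfs usage (the "dispatching" control attribute and the per-CPU "polarization" attribute are assumptions about the interface layout):

```shell
# Sketch, assuming the system-wide "dispatching" control and the
# per-CPU "polarization" attribute.

# Switch from the default horizontal to vertical CPU polarization:
echo 1 > /sys/devices/system/cpu/dispatching

# Check the polarization type of an individual CPU
# (e.g. "vertical:high", "vertical:medium", "vertical:low"):
cat /sys/devices/system/cpu/cpu0/polarization

# Switch back to horizontal polarization:
echo 0 > /sys/devices/system/cpu/dispatching
```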
- kernel (new function): System z large page support.
- This adds hugetlbfs support on System z, using hardware large page support where available (IBM System z10) and software large page emulation (with shared hugetlbfs page tables) on older hardware.
Exploitation of the IBM System z10 hardware large page support is only available when running Linux on System z in an LPAR.
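Using hugetlbfs follows the common Linux pattern; a minimal sketch (the pool size and mount point are arbitrary examples):

```shell
# Reserve a pool of large pages (standard hugetlbfs procfs knob):
echo 20 > /proc/sys/vm/nr_hugepages

# Mount the hugetlbfs filesystem; files created here are backed by large pages:
mkdir -p /mnt/hugetlbfs
mount -t hugetlbfs none /mnt/hugetlbfs

# Verify the pool:
grep Huge /proc/meminfo
```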
- kernel (new function): Collaborative Memory Management Stage II.
- Support for the Collaborative Memory Management Assist (CMMA) in z/VM 5.3 reduces hypervisor paging I/O overhead.
Please apply the PTFs for APARs VM64265 and VM64297 before using this support.
The Linux support for CMM2 is activated per IPL-option cmma=on (default is cmma=off).
You may be interested in the article about Collaborative Memory Management (cmm2) and Cooperative Memory Management (cmm1) at: http://www.vm.ibm.com/perf/reports/zvm/html/530cmm.htm
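As cmma= is a kernel parameter, with the zipl boot loader it could be enabled roughly like this (the section name and paths below are placeholders, not taken from an actual configuration):

```shell
# Excerpt from /etc/zipl.conf -- add cmma=on to the kernel parameters
# of the boot section (illustrative section; paths are placeholders):
#
#   [linux]
#   target = /boot
#   image = /boot/image
#   parameters = "root=/dev/dasda1 cmma=on"
#
# Then rewrite the boot record and reboot:
zipl
```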
- qeth (new function): Support two OSA ports per CHPID.
- Exploit the next OSA adapter generation, which offers two ports within one CHPID (See IBM U.S. Hardware Announcement 108-296, "Four-port exploitation on OSA-Express3 GbE SX and LX"). The additional port number 1 can be specified with the qeth sysfs attribute "portno".
This support is available only for OSA-Express3 GbE SX and LX on IBM System z10, running Linux on System z in an LPAR or as a VM guest (PTF for z/VM APAR VM64277 required).
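A sketch of selecting the second port via the "portno" attribute (the ccwgroup bus ID 0.0.f500 is a placeholder; the attribute is assumed to be writable only while the device is offline):

```shell
# Sketch, assuming a qeth ccwgroup device 0.0.f500 (bus ID is a placeholder).
echo 0 > /sys/bus/ccwgroup/drivers/qeth/0.0.f500/online

# Select port 1 instead of the default port 0:
echo 1 > /sys/bus/ccwgroup/drivers/qeth/0.0.f500/portno

echo 1 > /sys/bus/ccwgroup/drivers/qeth/0.0.f500/online
```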
- qeth (new function): System z HiperSockets layer-2 support.
- HiperSockets are enhanced to support layer-2 functionality.
The existing OSA layer-2 support is utilized to enable HiperSockets layer-2. This includes IPv6 support for HiperSockets layer-2. Connecting layer-2 and layer-3 hosts is not supported by the System z firmware.
This support is available only on z10, running Linux on System z in an LPAR or as a VM guest (z/VM 5.2 or later).
- qeth (new function): QETH Componentization.
- The qeth driver module is split into a core module and layer2- and layer3-specific modules. The default operation mode for OSA devices is changed to layer2; for HiperSockets devices the layer3 default mode is kept.
For layer3 mode devices, the existence of (possibly faked) Ethernet headers is guaranteed to enable smooth integration of qeth devices into Linux.
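The operation mode is controlled per device via the "layer2" sysfs attribute; a sketch (the bus ID 0.0.f500 is a placeholder, and the attribute is assumed to be changeable only while the device is offline):

```shell
# Sketch, assuming a qeth ccwgroup device 0.0.f500 (bus ID is a placeholder).
echo 0 > /sys/bus/ccwgroup/drivers/qeth/0.0.f500/online

# Select the mode explicitly (0 = layer3, 1 = layer2):
echo 0 > /sys/bus/ccwgroup/drivers/qeth/0.0.f500/layer2

echo 1 > /sys/bus/ccwgroup/drivers/qeth/0.0.f500/online
```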
- zfcp (new function): FCP adapter statistics.
- The FCP adapter statistics (available since IBM System z9) provide a variety of information about the virtual adapter (subchannel). To collect this information, the zfcp device driver is extended to query the adapter and to summarize certain values, which can then be fetched on demand. This information is made available via files (attributes) in the sysfs filesystem.
The information provided by the FCP adapter statistics can be fetched by reading from the following files in the sysfs filesystem:
/sys/class/scsi_host/host<n>/seconds_active
/sys/class/scsi_host/host<n>/requests
/sys/class/scsi_host/host<n>/megabytes
/sys/class/scsi_host/host<n>/utilization
These are the statistics on a virtual adapter (subchannel) level.
In addition latency information is provided on a SCSI device level (LUN) which can be found at the following location:
/sys/class/scsi_device/<H:C:T:L>/device/cmd_latency
/sys/class/scsi_device/<H:C:T:L>/device/read_latency
/sys/class/scsi_device/<H:C:T:L>/device/write_latency
The information provided is raw, as collected from the FCP adapter. The zfcp device driver does not interpret or modify the values. The individual values are summed up during normal operation of the virtual adapter. An overrun of the counters is neither detected nor treated. Therefore, the files have to be read twice to make a meaningful statement, because only the differences between the values of the two reads can be used.
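The read-twice pattern can be sketched as follows (host0 and the 10-second interval are arbitrary examples; only the difference between the two samples is meaningful):

```shell
# Average per-second increase between two counter samples taken
# "seconds" apart (integer arithmetic).
rate() {
    first=$1; second=$2; seconds=$3
    echo $(( (second - first) / seconds ))
}

# Intended use against the sysfs counters described above, e.g.:
#   r1=$(cat /sys/class/scsi_host/host0/requests); sleep 10
#   r2=$(cat /sys/class/scsi_host/host0/requests)
#   rate "$r1" "$r2" 10
#
# Self-contained demonstration with fixed sample values:
rate 100 700 10   # prints 60
```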
This support is available on z9 or z10 ("FCP performance metrics"), running Linux on System z in an LPAR or as a VM guest.
Equivalent zfcp functionality was available in earlier Linux on System z streams ("April 2004 stream" and "October 2005 stream") in a different form.