Let's begin by covering the new features and innovations in the latest releases of the three major commercial UNIX operating systems.
HP-UX, HP's flavor of UNIX, is now up to release 11i v3. HP-UX is based on System V and runs on both HP9000 RISC servers and HP Integrity Itanium systems. In this respect, it is similar to Solaris, which can run on the SPARC RISC architecture as well as x86 machines. AIX can only run on the POWER® architecture; however, given that UNIX is a high-end operating system, it is a positive thing that AIX and the POWER architecture are tightly integrated.
HP-UX 11i v3 supports up to 128 processor cores, 2TB of memory, a maximum file size of 2TB, a maximum filesystem size of 16TB, and storage addressability of 100 million zettabytes. Recent innovations and improvements include:
- 10% reduced power usage with dynamic savings capability
- Improved application performance by up to 20%, with locality-optimized resource alignment
- Increased performance for performance-sensitive workloads, through tune-N-Tools tuning
About a year ago, HP made available sets of operating system environments, which provide new choices for clients. They include the following builds: datacenter, high availability, virtual server, and the base environment, without all the bells and whistles.
- Accelerated virtual I/O provides increased bandwidth and up to 60% greater CPU efficiency when working with HP Integrity virtual machines. It does this by providing a gatekeeper function that prioritizes critical data traffic.
- Online JFS, through VxFS, deploys a new method of indexing files and provides increased performance for directories.
- HP Logical Volume Manager improvements:
- Support for on-line disk replacement
- Dynamic line support
- Support for multi-path I/O
- Performance improvements
- Increased maximum logical volume size, from 2TB to 16TB
- Virtualization enhancements, through Dynamic nPartitions. This allows cell-based HP Integrity and HP9000 servers to be configured into partitions of varying sizes, which can be adjusted to the application workload while applications remain available.
- Network enhancements:
- Improved throughput with mobile clients, by avoiding unnecessary TCP communications.
- Enhancements to TCP stack to improve performance.
- Change in the tcphashsz tunable. This is now auto-tunable; the system can determine the optimal value at boot time.
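You can confirm the auto-tuned setting from the command line (a quick sketch; the kctune command is covered in more depth in the tuning section later in this article):

# kctune tcphashsz        (a value of 0 means the system picks the value at boot)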
Certainly it's good that improvements were made in LVM, but in most cases, AIX already had these features.
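For reference, here is a minimal sketch of what an online grow looks like with HP LVM and Online JFS. The volume group, logical volume, sizes, and mount point are all illustrative, and fsadm here takes the new size in 1KB sectors:

# lvextend -L 2048 /dev/vg01/lvol1     (grow the logical volume to 2048MB)
# fsadm -F vxfs -b 2097152 /data       (grow the mounted VxFS filesystem to match)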
Solaris 10 was first released in 2005. The latest update is Solaris 10 10/08. Recent enhancements in this particular version include:
- The ability to boot from ZFS and use ZFS as the root filesystem. Many other improvements were also made to Solaris ZFS (a short command sketch follows this list), including: the ability to roll back a fileset without unmounting it; enhancements to the zfs send command; ZFS quotas and reservations (for file system data only); command-line enhancements, including zpool history; the ability to use the upgrade command to bring an existing filesystem up to date with the new filesystem enhancements; and the ability for non-root users to perform granular ZFS administration tasks.
- The ability of Solaris containers to automatically update their environment when moving from one system to another.
- LDOM support for dynamically reconfigurable disk and network I/O.
- Support for up to 256 processors on x86 systems -- up from 64.
- Improvements in Solaris zones, including the ability to set a default router in shared-IP zones and support for placing the zonepath on ZFS.
- Security enhancements, including support for separation of duties through the Solaris Management Console and improved encryption algorithms.
- Networking enhancements: the ability to provide SIP end-to-end traffic measurements and logging, and new communication protocol parser utilities.
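Here is the short command sketch mentioned above, showing a few of those ZFS operations (the pool and filesystem names are illustrative):

# zfs snapshot mypool/home@before      (take a snapshot)
# zfs rollback mypool/home@before      (roll back without unmounting)
# zfs set quota=10G mypool/home        (set a filesystem quota)
# zpool history mypool                 (show the pool's command history)
# zpool upgrade mypool                 (pick up new on-disk format enhancements)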
These recent improvements to ZFS are very important. When ZFS first came out, it looked incredible, but the inability to boot from a ZFS root was a glaring omission. With that ability now added, Solaris compares favorably in many ways to JFS2 from AIX and VxFS from HP.
AIX 6.1, first released about two years ago, is now available in two editions: the Standard edition, which includes only base AIX, and the Enterprise edition, which adds the workload partition manager and several Tivoli® products. In this respect, it has some similarities to HP-UX, which offers several operating environments. Recent enhancements include:
- Workload partitions (WPARs): operating system virtualization, similar to Solaris containers, that allows for the creation of multiple AIX 6.1 environments inside one AIX 6.1 instance. Application WPARs, lightweight wrappers around a single application running inside the global instance, can be created in seconds, which allows for quick testing of new applications.
- Live Application Mobility: this allows partitions to be moved from one system to another without restarting applications or disrupting end users. In addition to allowing for planned outages, this feature helps manage workload by allowing partitions to be moved off underutilized systems during non-peak periods, saving energy and cost.
- Support for concurrent AIX kernel updates: allows you to update systems without a reboot.
- Support for storage keys: allows you to reduce the number of outages associated with memory overlays inside the AIX kernel.
- Support for dynamic tracing: allows simplified debugging of system or application code.
- Enhanced functional recovery routines: allow recovery from errors that would normally crash the system.
- Improved default tuning parameters in AIX 6.1, allowing for better performance.
- A new name resolver caching daemon, which improves the efficiency of hostname resolution requests.
- Improved NIM support for NFS version 4.
- Improved manageability features, such as IBM Systems Director Console for AIX.
Recent security enhancements include:
- Role-Based Access Control (RBAC) allows for improved security and manageability by letting administrators delegate administrative duties to non-root users (a short sketch follows this list).
- Trusted AIX makes AIX 6.1 an option for meeting the most critical government and private-industry security requirements.
- Encrypted Filesystems provide JFS2 with greater security, through the ability to encrypt data in a filesystem.
- Enhancements to AIX Security Expert, including the ability to store security templates in LDAP.
- The Secure-by-Default installation enables only a minimal set of services and packages, for a higher level of security at installation.
- Support for long password phrases.
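Here is the short RBAC sketch mentioned above. The role name, authorization, and user are illustrative; the commands themselves are the standard AIX 6.1 RBAC tools:

# mkrole authorizations=aix.fs.manage.backup bkuprole   (create a role with an authorization)
# setkst                                                (load it into the kernel security tables)
# chuser roles=bkuprole jdoe                            (assign the role to a non-root user)

The user then activates the role in a session with swrole bkuprole.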
These AIX 6.1 innovations are supported on all platforms, except for the following, which are only supported on the POWER6™ architecture: application storage keys, kernel storage keys, automatic variable page size, firmware-assisted dump, and hardware decimal floating point. Most IBM POWER administrators are very excited about AIX 6.1 and have already started taking it out of the sandbox and putting it into production. What it is about AIX 6.1 that has so many people excited, and how it compares to recent versions of Solaris and HP-UX, is covered next.
First and foremost, there are workload partitions and Live Application Mobility. While Solaris has zones/containers, which are similar in some respects to workload partitions (WPARs), it cannot do what WPARs can. No other UNIX can boast the ability to move running workloads on a workload partition from one system to another without shutting down the partition. Why is this important? Because it increases availability by keeping systems up during planned outages. It does this by allowing either the systems administrator or even operators (through WPAR Manager) to move these virtual operating system partitions to other systems without incurring any downtime. It also lends itself to green computing, because it allows operators to shift partitions off underutilized boxes onto more heavily utilized boxes during non-peak periods. This feature alone can save a company a lot of money, while at the same time helping the environment. Of all the innovations discussed, AIX WPARs and Live Application Mobility are clearly the biggest winners.
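A minimal sketch of how quickly a WPAR comes to life on AIX 6.1 (the WPAR name is illustrative):

# mkwpar -n testwpar      (create a system WPAR)
# startwpar testwpar      (start it)
# lswpar                  (list WPARs and their states)
# clogin testwpar         (log in to the new environment)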
Now, let's take a high-level look at the virtualization capabilities available on the big three.
- nPartitions: These are hard partitions, similar in some respects to Sun's Dynamic System Domains (DSDs). One capability they have that Sun's do not is the ability to service one partition while the others stay online. They also support multiple operating systems, such as HP-UX, VMS, Linux®, and Windows®, albeit only on the Itanium processor, not PA-RISC. Similar to Solaris domains, they are only available on high-end systems, and they do not support moving resources around without a reboot.
- vPars: These are separate and distinct operating system instances that can reside on either one nPartition or one physical box. They allow you to dynamically move both CPU and RAM resources between partitions as requirements evolve. It's important to note that resources cannot be shared between partitions.
- Integrity Virtual Machines: These allow you to have separate guest instances on one partition, each a fully isolated environment with its own copy of the OS. Of all the virtualization strategies that HP (or Sun, for that matter) offers, this one most clearly mimics IBM's PowerVM™. The granularity actually exceeds what is available with PowerVM, as you can allocate as little as 1/20th of a CPU to a virtual machine (a quick status sketch follows this list). The huge downside here is that this system simply does not scale very well: there is a limitation of 4 CPUs and only 64GB of memory. Other limitations include the inability to move storage adapters around while the system is up and the inability to dedicate processors to a single partition.
- Resource Partitions: This is HP's equivalent to Solaris containers and AIX WPARs.
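As the quick status sketch for the Integrity Virtual Machines item above, guests are managed from the VM host with the hpvm* commands, for example:

# hpvmstatus              (list Integrity VM guests and their run states)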
Among these three hardware vendors, IBM is clearly the only one with a single, consolidated technology and vision. Each of the other vendors has myriad strategies that tend to confuse even the most experienced systems people. With IBM, you have PowerVM. Period. It's more scalable than anything HP or Sun has, more innovative with its ability to move live partitions around, and it has a 40-year history of virtualization behind it (I include the mainframe here). Finally, its features and functionality extend across the entire POWER product line. This is a severe shortcoming of both HP and Sun, each of which has products that are only supported on low-end or high-end models, or on a given architecture.
Sun has multiple methods by which they implement virtualization on Solaris:
- Containers or zones: In essence, this feature allows multiple virtual operating systems to run inside one kernel instance of Solaris. This is a form of operating system virtualization, similar to AIX 6.1's implementation of WPARs (a short zone sketch follows this list).
- xVM Server: This innovation, introduced in February of 2008, is a hypervisor-based solution, based on Xen, which can run under Solaris on x86 machines. On SPARC, it is still based on logical domains.
- Logical Domains (LDOMs): This enables customers to run multiple operating systems simultaneously. The truth is that it has many issues, among them scalability, limited micro-partitioning, and no dynamic allocation between systems. It also runs only on low-end SPARC servers.
- Hardware partitioning (DSDs): These are similar in some ways to IBM's logical partitioning, which is not part of PowerVM. Hardware partitioning has no real virtualization capabilities, because you cannot share resources between partitions.
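Here is the short zone sketch mentioned above (the zone name and path are illustrative):

# zonecfg -z myzone
zonecfg:myzone> create
zonecfg:myzone> set zonepath=/zones/myzone
zonecfg:myzone> commit
zonecfg:myzone> exit
# zoneadm -z myzone install
# zoneadm -z myzone boot
# zlogin -C myzone        (connect to the zone console)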
PowerVM's virtualization is based on IBM's paravirtualization hypervisor strategy. It includes the following features:
- Micro-partitioning: This feature allows you to slice a POWER CPU into as many as 10 logical partitions, each with as little as 1/10th of a CPU. It also allows a partition to exceed the entitled capacity it has been granted, through uncapped partitions (see the lparstat sketch after this list).
- Shared processor pools: This feature allows virtual partitions to reach into a shared pool to gain more resources as demand increases. When demand is light, shared partitions give back to the shared processor pool.
- Virtual I/O Server: This is a special type of partition that allows for shared I/O, in the form of Shared Ethernet and virtual SCSI.
- Live Partition Mobility: This innovation allows you to move entire running partitions from one machine to another. It increases availability by allowing systems to keep running during planned outages, without incurring downtime. This feature is only available on POWER6.
- Shared dedicated capacity: This feature allows partitions that have dedicated processors to contribute unused capacity to the shared processor pool.
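Here is the lparstat sketch mentioned above. Run inside a micro-partitioned AIX LPAR, it reports the entitlement and capping behavior described in this list (output varies by configuration):

# lparstat -i             (shows Entitled Capacity, Mode: Capped/Uncapped, and pool information)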
This section compares and contrasts how networking is configured on HP-UX, Solaris, and AIX, and shows how to configure a default router on all three systems.
When you first boot an HP-UX system after installation, the /sbin/set_parms program is run. You can also run this later using the set_parms initial command. This program is used to configure the system's hostname, IP address, DNS, and other network parameters. So let's run it:
# set_parms initial (see Listing 1).
Listing 1. Running the set_parms program
_______________________________________________________________________________

                            Welcome to HP-UX!

Before using your system, you will need to answer a few questions.

The first question is whether you plan to use this system on a network.

Answer "yes" if you have connected the system to a network and are ready
to link with a network.

Answer "no" if you:

     * Plan to set up this system as a standalone (no networking).

     * Want to use the system now as a standalone and connect to a
       network later.
_______________________________________________________________________________

Are you ready to link this system to a network?
Press [y] for yes or [n] for no, then press [Enter]

Do you wish to use DHCP to obtain networking information?
Press [y] for yes or [n] for no, then press [Enter]
From here you will also enter your IP address and add additional network parameters.
Let's also configure an Ethernet card. After installing the LAN card, you would run ioscan (see Listing 2).
Listing 2. Running ioscan
# ioscan -fnC lan
Class     I  H/W Path  Driver  S/W State  H/W Type   Description
===================================================================
lan       0  0/0/1/0   iether  CLAIMED    INTERFACE  HP PCI/PCI-X 1000Base-T
#
Next, let's look at the IP addresses, which we can display using netstat (see Listing 3).
Listing 3. Displaying the IP addresses using netstat
# netstat -in
Name    Mtu    Network     Address      Ipkts    Ierrs  Opkts    Oerrs  Coll
lan0    1500   192.0.2.0   192.0.2.10   32657    0      24500    0      0
lo0     32808  127.0.0.0   127.0.0.1    131689   0      131689   0      0
From here we can clearly see that lan0 is correlated with 192.0.2.10 (see Listing 4).
Listing 4. Checking the lan0 correlation
# ifconfig lan0
lan0: flags=1843<UP,BROADCAST,RUNNING,MULTICAST,CKO>
        inet 192.0.2.10 netmask ffffff80 broadcast 192.0.2.127
Next, you'll want to configure your default router. First, you'll need to modify the /etc/rc.config.d/netconf file, which stores your configuration values (see Listing 5).
Listing 5. Checking your configuration values in the netconf file
# more /etc/rc.config.d/netconf
# netconf: configuration values for core networking subsystems
#
# @(#) netconf $Date: 2007/10/05 20:09:28 $Revision: r11.31/1 PATCH_11.31 (PHNE_36281)
#
# HOSTNAME:         Name of your system for uname -S and hostname
#
# OPERATING_SYSTEM: Name of operating system returned by uname -s
#                   ---- DO NOT CHANGE THIS VALUE ----
#
# LOOPBACK_ADDRESS: Loopback address
#                   ---- DO NOT CHANGE THIS VALUE ----
#
HOSTNAME="vital24.testdrive.hp.com"
OPERATING_SYSTEM=HP-UX
LOOPBACK_ADDRESS=127.0.0.1
DEFAULT_INTERFACE_MODULES=""

INTERFACE_NAME=lan0
IP_ADDRESS=192.0.2.10
DHCP_ENABLE=1
SUBNET_MASK=255.255.255.128
ROUTE_MASK=0.0.0.0
ROUTE_GATEWAY=192.0.2.1
BROADCAST_ADDRESS=""
ROUTE_COUNT=1
ROUTE_DESTINATION=default
Then you use the route command to put the new route into effect:
# route add default 192.0.2.1 1
To initiate the new route, you need to restart the networking services (see Listing 6).
Listing 6. Starting services and initiating the route
# /sbin/init.d/inetd start
# /sbin/init.d/net start
Let's examine SAM, HP's answer to IBM's SMIT. While not as powerful as SMIT, it does at least provide a text-based menuing system (see Listing 7).
Listing 7. HP's SAM
# sam
HP-UX System Management Homepage (Text User Interface)

SMH
---------------------------------------------------------------------------------
a - Auditing and Security
c - Auditing and Security Attributes Configuration (new)
d - Peripheral Devices
e - Resource Management
f - Disks and File Systems
g - Display
k - Kernel Configuration
l - Printers and Plotters (new)
m - Event Monitoring Service
n - Networking and Communications
p - Printers and Plotters
s - Software Management
u - Accounts for Users and Groups
From here, navigate to Networking and Communications > Network Interfaces Configuration > Network Interface Cards. Listing 8 shows the output.
Listing 8. Output for network interface cards
Interface  Subsystem  Hardware  Interface  Interface    IPv4 Address  IPv6 Address
Name                  Path      State      Type
----------------------------------------------------------------------------------------
lan0       iether     0/0/1/0   up         1000Base-T   192.0.2.10    Not Configured
The way to configure networking in HP-UX is fairly straightforward, though I did find it a bit cumbersome at times.
With Solaris, you need to work with text files; there is no SAM or SMIT equivalent. Let's first check the hostname of your box in /etc/nodename (see Listing 9).
Listing 9. Checking the hostname of your box
# more /etc/nodename
ezqspc03z1
You should also use ifconfig to gather information (see Listing 10).
Listing 10. Using ifconfig to gather information
# ifconfig -a
lo0:9: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
bge0:9: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 10.24.116.121 netmask ffffff00 broadcast 10.24.116.255
To enable a card, you would use the plumb parameter.
For example, to enable bge0:
# ifconfig bge0 plumb
To bring up the interface, you would do this:
# ifconfig bge0 up
To make the change permanent, you need to edit the following file:
- /etc/hostname.bge0 (for our bge0 interface)
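A minimal sketch of making the bge0 configuration stick across reboots (the interface address comes from Listing 10; the netmasks entry is illustrative):

# echo "10.24.116.121" > /etc/hostname.bge0
# echo "10.24.116.0 255.255.255.0" >> /etc/inet/netmasks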
Let's change a default route. Changing a default route in Solaris requires the following steps:
- You need to first edit your /etc/defaultrouter file.
- Delete your default router's IP address:
# route delete default theipaddress
- Assign your new one:
# route add default newipaddress
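The /etc/defaultrouter edit from the first step amounts to something like this (the gateway address is illustrative):

# echo "10.24.116.1" > /etc/defaultrouter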
While at times I enjoy the ability to just edit text files, most administrators prefer the flexibility and ease of a menu-driven system to configure networking. Furthermore, with Solaris you need to plumb the interface -- other UNIX operating systems do not require this. Both IBM and HP have system administration menu systems, though IBM's SMIT is more powerful.
With AIX, you usually go straight to SMIT when configuring networking. This is how you list the Ethernet adapters: # smit devices > Communication > Ethernet Adapter > Adapter > List All Ethernet Adapters (see Listing 11).
Listing 11. Listing the Ethernet adapters
                                 COMMAND STATUS

Command: OK            stdout: yes           stderr: no

Before command completion, additional instructions may appear below.

ent0 Available  Virtual I/O Ethernet Adapter (l-lan)
ent1 Available  Virtual I/O Ethernet Adapter (l-lan)

F1=Help             F2=Refresh          F3=Cancel           F6=Command
F8=Image            F9=Shell            F10=Exit            /=Find
n=Find Next
To make changes, go to Change/Show Characteristics of an Ethernet Adapter (see Listing 12).
Listing 12. Change/Show characteristics of an Ethernet adapter
                Change / Show Characteristics of an Ethernet Adapter

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

[TOP]                                                   [Entry Fields]
  Ethernet Adapter                                    ent0
  Description                                         Virtual I/O Ethernet >
  Status                                              Available
  Location
  Enable ALTERNATE ETHERNET address                   no                   +
  ALTERNATE ETHERNET address                          [0x000000000000]     +
  Minimum Tiny Buffers                                                     +#
  Maximum Tiny Buffers                                                     +#
  Minimum Small Buffers                                                    +#
  Maximum Small Buffers                                                    +#
  Minimum Medium Buffers                                                   +#
  Maximum Medium Buffers                                                   +#
  Minimum Large Buffers                                                    +#
[MORE...8]
With AIX, you can also use ifconfig. But be advised that any changes you make with ifconfig are not saved in the Object Data Manager (ODM) and on a reboot will be lost. This is why ifconfig is not the preferred method; you should always use SMIT when doing work on your networks with AIX.
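If you do want a persistent command-line change on AIX, a rough equivalent of what SMIT does under the covers is the chdev command, which updates the ODM (a sketch; the address and mask are taken from Listing 13):

# chdev -l en0 -a netaddr=172.29.141.94 -a netmask=255.255.192.0 -a state=up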
To add a default route on AIX, all you need to do is go to smit > TCP/IP > Minimum Configuration & Startup and click on your interface to get the screen shown in Listing 13.
Listing 13. Adding a default route on AIX
                        Minimum Configuration & Startup

To Delete existing configuration data, please use Further Configuration menus

Type or select values in entry fields.
Press Enter AFTER making all desired changes.

[TOP]                                                   [Entry Fields]
* HOSTNAME                                            [lpar21ml162f_pub]
* Internet ADDRESS (dotted decimal)                   [172.29.141.94]
  Network MASK (dotted decimal)                       [255.255.192.0]
* Network INTERFACE                                   en0
  NAMESERVER
        Internet ADDRESS (dotted decimal)             [10.153.50.201]
        DOMAIN Name                                   [vlp.com]
  Default Gateway
        Address (dotted decimal or symbolic name)     [172.29.128.13]
        Cost                                                            #
        Do Active Dead Gateway Detection?             no                +
[MORE...2]

F1=Help             F2=Refresh          F3=Cancel           F4=List
F5=Reset            F6=Command          F7=Edit             F8=Image
Then change your default gateway and press Enter twice (see Listing 14).
Listing 14. Changing the default gateway
                                 COMMAND STATUS

Command: OK            stdout: yes           stderr: no

Before command completion, additional instructions may appear below.

en0
lpar21ml162f_pub
inet0 changed
en0 changed
inet0 changed
The simplicity with which AIX allows you to configure networking is very clear.
This section compares and contrasts how systems are tuned in HP-UX, Solaris, and AIX.
With HP-UX, remember that it runs on both Itanium and PA-RISC. Programs compiled for Itanium will run faster on Integrity servers than programs compiled for PA-RISC, which must run in emulation mode. HP-UX 11.31 has per-thread locks, and as a result there are significant performance gains in the latest version of HP-UX -- up to 30% more performance compared to 11i v2. For disk I/O, it is recommended that you use VxFS with 8KB block sizes. You will get better performance using HP Online JFS.
To tune the kernel, see the kctune command and kcweb in SAM.
Here is the list of parameters, as seen by kctune (see Listing 15).
Listing 15. List of parameters for kctune
# kctune
Tunable                      Value       Expression  Changes
NSTREVENT                    50          Default
NSTRPUSH                     16          Default
NSTRSCHED                    0           Default
STRCTLSZ                     1024        Default
STRMSGSZ                     0           Default
acctresume                   4           Default
acctsuspend                  2           Default
aio_iosize_max               0           Default     Immed
aio_listio_max               256         Default     Immed
aio_max_ops                  2048        Default     Immed
aio_monitor_run_sec          30          Default     Immed
aio_physmem_pct              10          Default     Immed
aio_prio_delta_max           20          Default     Immed
aio_proc_max                 0           Default     Immed
aio_proc_thread_pct          70          Default     Immed
aio_proc_threads             1024        Default     Immed
aio_req_per_thread           1           Default     Immed
allocate_fs_swapmap          0           Default
alwaysdump                   0           Default     Immed
audit_memory_usage           5           Default     Immed
audit_track_paths            0           Default     Auto
base_pagesize                4           Default
copy_on_write                1           Default     Immed
core_addshmem_read           0           Default     Immed
core_addshmem_write          0           Default     Immed
create_fastlinks             0           Default
default_disk_ir              0           Default
diskaudit_flush_interval     5           Default     Immed
dlpi_max_ub_promisc          1           Default     Immed
dma32_pool_size              4194304     4194304
dmp_rootdev_is_vol           0           Default
dmp_swapdev_is_vol           0           Default
dnlc_hash_locks              512         Default
dontdump                     0           Default     Immed
dst                          1           Default
dump_compress_on             1           Default     Immed
dump_concurrent_on           1           Default     Immed
executable_stack             0           Default     Immed
expanded_node_host_names     0           Default     Immed
fcache_fb_policy             0           Default     Immed
fcache_seqlimit_file         100         Default     Immed
fcache_seqlimit_system       100         Default     Immed
fcd_disable_mgmt_lun         0           Default     Immed
fclp_ifc_disable_mgmt_lun    0           Default     Immed
filecache_max                1018863616  Default     Auto
filecache_min                101883904   Default     Auto
fr_statemax                  800000      Default
fr_tcpidletimeout            86400       Default
fs_async                     0           Default
fs_symlinks                  20          Default     Immed
ftable_hash_locks            64          Default
gvid_no_claim_dev            0           Default
hires_timeout_enable         0           Default     Immed
hp_hfs_mtra_enabled          1           Default
intr_strobe_ics_pct          80          Default     Immed
io_ports_hash_locks          64          Default
ipf_icmp6_passthru           0           Default
ipl_buffer_sz                8192        Default
ipl_logall                   0           Default
ipl_suppress                 1           Default
ipmi_watchdog_action         0           Default     Immed
ksi_alloc_max                33600       Default     Immed
ksi_send_max                 32          Default
lcpu_attr                    0           Default     Auto
max_acct_file_size           2560000     Default     Immed
max_async_ports              4096        Default     Immed
max_mem_window               0           Default     Immed
max_thread_proc              1100        1100        Immed
maxdsiz                      1073741824  Default     Immed
maxdsiz_64bit                4294967296  Default     Immed
maxfiles                     2048        Default
maxfiles_lim                 4096        Default     Immed
maxrsessiz                   8388608     Default
maxrsessiz_64bit             8388608     Default
maxssiz                      8388608     Default     Immed
maxssiz_64bit                268435456   Default     Immed
maxtsiz                      100663296   Default     Immed
maxtsiz_64bit                1073741824  Default     Immed
maxuprc                      256         Default     Immed
mca_recovery_on              0           Default     Auto
msgmbs                       8           Default     Immed
msgmnb                       16384       Default     Immed
msgmni                       512         Default     Immed
msgtql                       1024        Default     Immed
ncdnode                      150         Default
nclist                       8292        Default
ncsize                       8976        Default
nflocks                      4096        Default     Auto
nfs2_max_threads             8           Default     Immed
nfs2_nra                     4           Default     Immed
nfs3_bsize                   32768       Default     Immed
nfs3_do_readdirplus          1           Default     Immed
nfs3_jukebox_delay           1000        Default     Immed
nfs3_max_threads             8           Default     Immed
nfs3_max_transfer_size       1048576     Default     Immed
nfs3_max_transfer_size_cots  1048576     Default     Immed
nfs3_nra                     4           Default     Immed
nfs4_bsize                   32768       Default     Immed
nfs4_max_threads             8           Default     Immed
nfs4_max_transfer_size       1048576     Default     Immed
nfs4_max_transfer_size_cots  1048576     Default     Immed
nfs4_nra                     4           Default     Immed
nfs_portmon                  0           Default     Immed
ngroups_max                  20          Default     Immed
ninode                       8192        Default
nkthread                     8416        Default     Immed
nproc                        4200        Default     Immed
npty                         60          Default
nstrpty                      60          Default
nstrtel                      60          Default
nswapdev                     32          Default
nswapfs                      32          Default
numa_policy                  0           Default     Immed
pa_maxssiz_32bit             83648512    Default
pa_maxssiz_64bit             536870912   Default
pagezero_daemon_enabled      1           Default     Immed
patch_active_text            1           Default     Immed
pci_eh_enable                1           Default
pci_error_tolerance_time     1440        Default     Immed
process_id_max               30000       Default     Auto
process_id_min               0           Default     Auto
pwr_idle_ctl                 0           Default     Auto
remote_nfs_swap              0           Default
rng_bitvals                  9876543210  Default
rng_sleeptime                2           Default
rtsched_numpri               32          Default
sched_thread_affinity        6           Default     Immed
scroll_lines                 100         Default
secure_sid_scripts           1           Default     Immed
semaem                       16384       Default
semmni                       2048        Default
semmns                       4096        Default
semmnu                       256         Default
semmsl                       2048        Default     Immed
semume                       100         Default
semvmx                       32767       Default
shmmax                       1073741824  Default     Immed
shmmni                       400         Default     Immed
shmseg                       300         Default     Immed
streampipes                  0           Default
swchunk                      2048        Default
sysv_hash_locks              128         Default
tcphashsz                    0           Default
timeslice                    10          Default
timezone                     420         Default
uname_eoverflow              1           Default     Immed
vnode_cd_hash_locks          128         Default
vnode_hash_locks             128         Default
vol_checkpt_default          10240       Default
vol_dcm_replay_size          262144      Default
vol_default_iodelay          50          Default
vol_fmr_logsz                4           Default
vol_max_bchain               32          Default
vol_max_nconfigs             20          Default
vol_max_nlogs                20          Default
vol_max_nmpool_sz            4194304     Default     Immed
vol_max_prm_dgs              1024        Default
vol_max_rdback_sz            4194304     Default     Immed
vol_max_vol                  8388608     Default
vol_max_wrspool_sz           4194304     Default     Immed
vol_maxio                    256         Default
vol_maxioctl                 32768       Default
vol_maxkiocount              2048        Default
vol_maxparallelio            256         Default
vol_maxspecialio             256         Default
vol_maxstablebufsize         256         Default
vol_min_lowmem_sz            532480      Default     Immed
vol_mvr_maxround             256         Default
vol_nm_hb_timeout            10          Default
vol_rootdev_is_vol           0           Default
vol_rvio_maxpool_sz          4194304     Default     Immed
vol_subdisk_num              4096        Default
vol_swapdev_is_vol           0           Default
vol_vvr_transport            1           Default
vol_vvr_use_nat              0           Default
volcvm_cluster_size          16          Default
volcvm_smartsync             1           Default
voldrl_max_drtregs           2048        Default
voldrl_min_regionsz          512         Default
voliomem_chunk_size          65536       Default
voliomem_maxpool_sz          4194304     Default
voliot_errbuf_dflt           16384       Default
voliot_iobuf_default         8192        Default
voliot_iobuf_limit           131072      Default
voliot_iobuf_max             65536       Default
voliot_max_open              32          Default
volpagemod_max_memsz         6144        Default     Immed
volraid_rsrtransmax          1           Default
vps_ceiling                  16          Default     Immed
vps_chatr_ceiling            1048576     Default     Immed
vps_pagesize                 16          Default     Immed
vx_maxlink                   32767       Default
vx_ninode                    0           Default     Immed
vxfs_bc_bufhwm               0           Default     Immed
vxfs_ifree_timelag           0           Default     Immed
vxtask_max_monitors          32          Default
As you can see, there are two types of kernel tunables: changes to those marked Immed in the Changes column take effect immediately, while changes to the others require a reboot to take effect.
To view one parameter, use the kctune command followed by the tunable name, as shown in Listing 16.
Listing 16. Using kctune
# kctune vx_ninode
Tunable    Value  Expression  Changes
vx_ninode      0  Default     Immed
#
I like to use the -B parameter when making a change, because it backs up the older value. Let's make a change (see Listing 17).
Listing 17. Using the -B parameter
# kctune -B vps_ceiling=32
  * The automatic 'backup' configuration has been updated.
  * Future operations will update the backup without prompting.
  * The requested changes have been applied to the currently
    running configuration.
Tunable               Value  Expression  Changes
vps_ceiling (before)     16  Default     Immed
            (now)        32  32
Let's look at SAM. In the kernel configuration section shown in Listing 18, you can view the usage of the kernel tunables.
Listing 18. Kernel configuration section
SMH->Kernel Configuration
---------------------------------------------------------------------------------
t - Tunables                      View or modify kernel tunables
m - Modules                       View or modify kernel modules and drivers
a - Alarms                        View or modify alarms for kernel tunables
l - Log Viewer                    View the changes made to kernel tunables or modules
u - Usage                         View usage of kernel tunables
c - Manage Configuration          View the options available to manage configurations
b - Restore Previous Boot Values  Restores Previous Boot Values for Tunables And Modules

SMH->Kernel Configuration->Usage

Usage Monitoring is On
--------------------------------------------------------------------------
Tunable              Current Usage     Current Setting
==========================================================================
filecache_max        76054528          1018863616
maxdsiz              11403264          1073741824
maxdsiz_64bit        42663936          4294967296
maxfiles_lim         38                4096
maxssiz              786432            8388608
maxssiz_64bit        98304             268435456
maxtsiz              35823616          100663296
maxtsiz_64bit        1409024           1073741824
maxuprc              3                 256
max_thread_proc      21                1100
msgmni               2                 512
msgtql               0                 1024
nflocks              27                4096
ninode               727               8192
nkthread             330               8416
nproc                151               4200
npty                 0                 60
nstrpty              1                 60
nstrtel              0                 60
nswapdev             1                 32
nswapfs              0                 32
semmni               28                2048
semmns               146               4096
shmmax               17868904          1073741824
shmmni               7                 400
shmseg               3                 300
HP-UX provides a strong command line, in addition to its menu-driven system, SAM, for performing tuning tasks. I like the overall approach that HP-UX takes with performance tuning, though I think there is just too much lumped together in kctune. AIX separates tuning parameters by area, as you'll see.
Unlike with HP-UX or AIX, with Solaris you are going to use text files to do most of your work. The major text file is /etc/system. It is actually recommended that, when moving to a new release, you start with an empty file and only add the tunables required by third-party applications. Any changes made to /etc/system are applied only after a reboot.
Let's make a change in /etc/system:
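An entry of this kind would look something like the following line (assuming the nfs:nfs_nra tunable; the value shown is illustrative):

set nfs:nfs_nra=4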
This change sets the number of read-ahead blocks that are read for file systems mounted using NFS version 2 software. One important change in Solaris 10 is that many Solaris kernel parameters have been replaced by resource controls, which are changed with the prctl command. For example, all shared memory and semaphore settings are now handled via resource controls, which means that any entries regarding shared memory or semaphores (for example, sem entries) in /etc/system are no longer relevant. One example is Oracle tuning: in earlier versions, we would configure SHMMAX in /etc/system and then reboot. Now we use prctl. The advantage is that you do not need a reboot for the change to take effect. The downside is that the setting is lost on a reboot, so the command needs to be put into a startup script or user profile. The command to set max-shm-memory to 6GB would be:
# prctl -n project.max-shm-memory -v 6gb -r -i project user.root
Other ways to tune include:
- Using kmdb, the kernel debugger
- Using mdb, the modular debugger
- Using ndd to configure your TCP/IP parameters
- Using /etc/default to tune NCA parameters
- Using prctl to change resource controls
An example of tuning ndd is:
# ndd -get /dev/tcp tcp_time_wait_interval
What about NFS? These parameters are also set in /etc/system. Some of the parameters include nfs_cots_timeo, nfs_allow_preepoch_time, and nfs4_pathconf_disable_cache.
While I know some administrators prefer the old method of editing text files, generally speaking most administrators prefer the simplicity and ease of use that either HP or IBM provides for tuning systems. While in some ways prctl is helpful, it also confuses things, because for some areas you use prctl and for others you still use /etc/system.
With AIX, there are several tuning commands, depending on whether you are tuning I/O (with separate utilities for network and disk), memory, or CPU: ioo, no, vmo, and schedo. For NFS, nfso is used to tune the NFS subsystem (a quick sketch of this split follows the list below). There were also some nice improvements on the performance front in AIX 6.1. Some of the more important changes include improving default parameters to more accurately reflect day-to-day work, and the incorporation of restricted tunables to help prevent administrators from messing things up. Here is a summary of the recent improvements:
- Improvement on default tunables for each of the following areas: vmo, ioo, aio, no, nfso, schedo
- On the filesystem front, changes were made to the Enhanced Journaled File System that allow you to mount a JFS2 filesystem without logging. Sure, this can improve performance, but I don't recommend it, because it can cause availability issues.
- I/O pacing limits the number of pending I/O requests to a file, which has the effect of preventing disk-I/O-intensive processes from monopolizing the system. AIX 6.1 enables I/O pacing by default; in AIX 5.3, you needed to explicitly enable this feature.
- AIO is an AIX software subsystem that allows processes to issue I/O operations without waiting for the I/O to finish. With AIX 6.1, the AIO subsystems are loaded by default but not activated; they start automatically when an application initiates an AIO request. Furthermore, the aioo command, which had been used to configure AIO servers, is gone.
- A new name-resolver caching daemon has also been introduced to improve performance when resolving hostnames using DNS. It is started with AIX's System Resource Controller (SRC), and the netcdctrl utility is used to manage it.
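Here is the quick sketch of that per-area split mentioned above. Each subsystem has its own command with a common syntax (these invocations only display values; they change nothing):

# vmo -a                  (list all virtual memory tunables)
# ioo -a                  (list all disk I/O tunables)
# no -o tcp_sendspace     (display a single network tunable)
# schedo -a               (list all CPU scheduling tunables)
# nfso -a                 (list all NFS tunables)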
Let's make some changes.
This command configures 256 large pages of 16777216 bytes (16MB) each, which is particularly useful in an Oracle environment:
# vmo -r -o lgpg_size=16777216 -o lgpg_regions=256
Let's look at virtual memory. The AIX virtual memory manager serves all memory on the box, not just virtual memory. It's always important to reduce the amount of paging on UNIX systems; how can we force the AIX virtual memory manager to do so? Computational memory consists of working segments: transitory, temporary segments used while your processes compute, with no permanent disk storage location. File memory, on the other hand, uses persistent storage, not working segments. Given the choice, you would much rather have file memory page to disk than computational memory. There are several parameters within the virtual memory settings that let us enforce this: we need to tune minperm, maxperm, and maxclient. To prevent AIX from paging working storage, and to take advantage of the caching from your database, you need to set maxperm to a high value (greater than 90) and make sure that lru_file_repage=0. This parameter indicates whether the VMM re-page counts should be considered and what type of memory it should steal; the default setting is 1, so you need to change it to 0 using the vmo command. Setting the parameter to 0 tells the VMM that you prefer it to steal only file pages rather than computational pages. In AIX 6.1, minperm, maxperm, and maxclient are already set to the proper values. In AIX 5.3, you would do what's shown in Listing 19.
Listing 19. Setting the minperm, maxperm and maxclient values in AIX 5.3
# vmo -p -o minperm%=3
# vmo -p -o maxperm%=97
# vmo -p -o maxclient%=97
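And to tell the VMM to steal only file pages, as described above (the -p flag makes the change persist across reboots):

# vmo -p -o lru_file_repage=0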
IBM wins out in performance tuning, because its tools are by far the most intuitive to use. While HP-UX has made tremendous strides in recent years towards becoming a self-tuning system, it's not quite there yet. Solaris introduced some positive change with prctl, though there are just too many facilities for configuring changes. With IBM, it's all very simple: vmo for memory, ioo for disk I/O, schedo for CPU, no for network, and nfso for NFS. It can't be any easier. Furthermore, the improvements to tuning parameters in AIX 6.1 separate IBM even further from the pack.
This article compared and contrasted recent innovations and feature/functionality improvements of AIX 6.1 against recent versions of HP-UX (11i v3) and Solaris 10 (10/08). It also looked at how the commands and overall approach differ in certain areas, such as configuring networking and performance tuning. The article also summarized virtualization and some of the basic differences between the UNIX flavors. You can decide what you prefer, but in my comparisons, IBM compared very favorably in every area. HP-UX was most similar to AIX, while Solaris mostly maintained its text-file-centric approach to system administration.
- AIX 6 Workload Partition and Live Application Mobility: Read a white paper that introduces WPAR concepts, gives hands-on details to show ease of use, and provides steps that correctly outline the Live Application Mobility function.
- New to AIX and UNIX?: Visit the "New to AIX and UNIX" page to learn more about AIX and UNIX.
- Optimizing AIX 5L performance: Tuning network performance, Part 1 (Ken Milberg, developerWorks, November 2007): Read Part 1 of a three-part series on AIX networking, which focuses on the challenges of optimizing network performance.
- For a three-part series on memory tuning on AIX, see Optimizing AIX 5L performance: Tuning your memory settings, Part 1 (Ken Milberg, developerWorks, June 2007).
- Read the IBM whitepaper Improving Database Performance with AIX concurrent I/O.
- Learn about AIX memory affinity support from the IBM System p and AIX InfoCenter.
- The Redbook, Database Performance Tuning on AIX, is designed to help system designers, system administrators, and database administrators design, size, implement, maintain, monitor, and tune a Relational Database Management System (RDBMS) for optimal performance on AIX.