A file system describes the way information is stored on a hard disk; examples include ext2, ext3, ReiserFS, and JFS. The General Parallel File System (GPFS) is a file system designed for clustered environments, offering high throughput and fault tolerance. This article walks through a simple GPFS implementation. To keep things easy, you'll use machines with two hard disks: the first holds a Linux® installation, and the second is left "as is" (in raw format).
Hardware, software, and setup
The required hardware and software are listed below. Figure 1 shows the system setup.
- eServer™ x336
- Red Hat Enterprise Linux Version 4.0 Advanced Server with Update 2
- GPFS Version 2.3
- GPFS Version 2.3.0-9 Fix
Figure 1. Sample setup
Installation and configuration
Before you start, please note the following assumptions:
- All machines have Red Hat Enterprise Linux AS installed (for example, RHEL4 AS with Update 2).
- The /etc/hosts file is updated on all the nodes.
- SSH is configured so that "root" can log in to any machine without a password prompt.
- The nodes gpfs1.my.com and gpfs2.my.com act as GPFS servers and offer /dev/sdb for storage. Node gpfs3.my.com acts as a GPFS client.
- GPFS code is available in tar format in the /dump folder.
- Steps 1, 2, 3, 4, and 5 are required on all the nodes (gpfs1.my.com, gpfs2.my.com, and gpfs3.my.com).
- Steps 6 and 7 are required only on gpfs1.my.com, the node where gpfs.gpl-2.3.0-9.noarch.rpm is installed.
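As a reference for the second assumption, the /etc/hosts entries on each node might look like the following (the addresses are illustrative; substitute your own):

```
# Illustrative addresses -- substitute your own network's values
192.168.1.1   gpfs1.my.com   gpfs1
192.168.1.2   gpfs2.my.com   gpfs2
192.168.1.3   gpfs3.my.com   gpfs3
```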
- If you have the code available in tar format, extract the GPFS files; otherwise, mount the CD-ROM or DVD containing the GPFS RPMs, as follows:
#cd /dump
#tar zxvf gpfs_code.tar.gz
Figure 2 illustrates some of the extracted contents:
Figure 2. Contents of GPFS
- Extract the GPFS RPMs, as follows:
#./gpfs_install-2.3.0-0_i386 --dir /dump
This command displays the "License" message. Accept it. The install script also offers an option to suppress the license message.
Figure 3. GPFS extraction
After extraction, you should see:
Figure 4. Extracted files
- For GPFS installation, use:
#cd /dump
#rpm -ivh gpfs.msg.en_US-2.3.0-0.noarch.rpm gpfs.base-2.3.0-0.i386.rpm gpfs.docs-2.3.0-0.noarch.rpm gpfs.gpl-2.3.0-0.noarch.rpm
or, more simply:
#rpm -ivh *.rpm
Figure 5. GPFS installation
The gpfs.gpl-2.3.0-0.noarch.rpm install is only required on gpfs1.my.com.
- After the GPFS RPMs are installed, the binaries are available under /usr/lpp/mmfs, as shown in the sample below.
Figure 6. Binary paths
- For GPFS fix installation, use:
#tar zxvf gpfs-2.3.0-9.i386.update.tar.gz
#rpm -Uvh gpfs.msg.en_US-2.3.0-9.noarch.rpm gpfs.base-2.3.0-9.i386.rpm gpfs.docs-2.3.0-9.noarch.rpm gpfs.gpl-2.3.0-9.noarch.rpm
or, more simply:
#rpm -Uvh *.rpm
Figure 7. GPFS fix install
The gpfs.gpl-2.3.0-9.noarch.rpm upgrade is required only on gpfs1.my.com.
- You are now ready to build the GPFS GPL modules on the node selected for this activity (gpfs1.my.com). You'll build on just this one node, then distribute the generated binaries to the other nodes (gpfs2.my.com and gpfs3.my.com).
#cd /usr/lpp/mmfs/src/config
#cp site.mcr.proto site.mcr
#vi site.mcr
Refer to the README in /usr/lpp/mmfs/src for the desired changes.
Figure 8. GPFS GPL modules
In the sample above, LINUX_DISTRIBUTION, LINUX_DISTRIBUTION_LEVEL, and LINUX_KERNEL_VERSION were specifically updated to match RHEL4 Update 2.
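For illustration, the relevant site.mcr lines might look like the following for RHEL4 Update 2 on i386. The specific values shown are assumptions; verify them against the README for your kernel level:

```
/* Illustrative excerpt of site.mcr -- verify values against the README */
#define GPFS_ARCH_I386
LINUX_DISTRIBUTION = REDHAT_AS_LINUX
#define LINUX_DISTRIBUTION_LEVEL 40
#define LINUX_KERNEL_VERSION 2060922
```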
#cd /usr/lpp/mmfs/src
#export SHARKCLONEROOT=/usr/lpp/mmfs/src
#make World
#make InstallImages
Figure 9. Install images
#cd /usr/lpp/mmfs/bin
#scp mmfslinux mmfs26 lxtrace dumpconv tracedev gpfs2:/usr/lpp/mmfs/bin   /* Copy to all the other nodes */
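The copy to the remaining nodes can be scripted. The following is a dry-run sketch using this article's hostnames; it only prints each scp command so you can inspect it first:

```shell
# Dry-run sketch: print the copy command for each binary and node.
# Remove 'echo' to actually perform the copies.
for node in gpfs2.my.com gpfs3.my.com; do
  for f in mmfslinux mmfs26 lxtrace dumpconv tracedev; do
    echo scp /usr/lpp/mmfs/bin/$f root@$node:/usr/lpp/mmfs/bin/
  done
done
```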
- At this point, you're ready to create a GPFS cluster and configure the Network Shared Disk (NSD).
- Start the GPFS daemons on all the nodes using:
Figure 10. GPFS daemons
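For reference, GPFS daemons are started with the mmstartup command; a sketch, assuming the default install path:

```
# Start the GPFS daemon on every node in the cluster ('-a' = all nodes)
/usr/lpp/mmfs/bin/mmstartup -a
```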
- Create a file that identifies the nodes participating in the GPFS cluster. See the Configuration and Tuning GPFS for Digital Media Environments Redbook, listed in Resources, for detailed information.
Figure 11. GPFS nodes
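The node file might look like the following. The node designations shown are an assumption for this setup (two quorum server nodes and one client); check the mmcrcluster documentation for the exact designation syntax in your GPFS release:

```
gpfs1.my.com:quorum
gpfs2.my.com:quorum
gpfs3.my.com:client
```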
- Create a separate file that contains the disk information for creating the NSD. See the Configuration and Tuning GPFS for Digital Media Environments Redbook, listed in Resources, for detailed information.
Figure 12. GPFS disks
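The disk descriptor file might contain a single line like the following. The format shown (DiskName:PrimaryServer:BackupServer:DiskUsage:FailureGroup) is an assumption based on the GPFS 2.3 descriptor syntax; verify it against the mmcrnsd documentation:

```
/dev/sdb:gpfs1.my.com:gpfs2.my.com:dataAndMetadata:1
```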
- Create the GPFS cluster using:
#/usr/lpp/mmfs/bin/mmcrcluster -p gpfs1.my.com -s gpfs2.my.com -n /dump/gpfs.nodes -r /usr/bin/ssh -R /usr/bin/scp
Figure 13. GPFS cluster
List the GPFS cluster details using:
Figure 14. GPFS mmls cluster
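The listing is produced by the mmlscluster command (assuming the default install path):

```
# Display the cluster name, nodes, and primary/secondary configuration servers
/usr/lpp/mmfs/bin/mmlscluster
```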
- Create the NSD using:
#/usr/lpp/mmfs/bin/mmcrnsd -F /dump/gpfs.disks -v yes
Figure 15. Creating the NSD
List the NSD details using:
Figure 16. NSD details
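The listing is produced by the mmlsnsd command (assuming the default install path):

```
# Display the NSDs known to the cluster and their server nodes
/usr/lpp/mmfs/bin/mmlsnsd
```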
The contents of /dump/gpfs.disks change after you execute the mmcrnsd command:
Figure 17. Modified GPFS disks
- Create the GPFS file system using:
#/usr/lpp/mmfs/bin/mmcrfs /gpfs gpfsdev -F /dump/gpfs.disks -B 1024K -m 1 -M 2 -r 1 -R 2
Figure 18. Creating the GPFS file system
The above command creates a GPFS file system (in this example, 286GB) and mounts it at /gpfs. It also adds an entry to the /etc/fstab file on every node so that the file system is mounted automatically at reboot.
Figure 19. fstab file
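The generated /etc/fstab entry resembles the following. This line is illustrative only; mmcrfs writes the actual entry and options automatically:

```
/dev/gpfsdev   /gpfs   gpfs   rw,mtime,atime,dev=gpfsdev,autostartup=true 0 0
```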
- Your GPFS environment is now ready for use.
To uninstall GPFS RPMs, do the following:
#rpm -e gpfs.msg.en_US gpfs.docs gpfs.base            /* gpfs2.my.com and gpfs3.my.com */
#rpm -e gpfs.msg.en_US gpfs.docs gpfs.base gpfs.gpl   /* gpfs1.my.com */
#rm -rf /usr/lpp/mmfs                                 /* gpfs1.my.com, gpfs2.my.com, and gpfs3.my.com */
- Configuration and Tuning GPFS for Digital Media Environments: Read this IBM Redbook, which focuses on the newest GPFS functions and on-demand characteristics related to (but not limited to) digital media environments.
- General Parallel File System FAQs: Peruse the latest FAQs.
- Benchmark Using x345/GPFS/Linux Clients and x345/Linux/NSD File Servers: Check out this IBM Redpaper on benchmark results.
- Stay current with developerWorks technical events and Webcasts.
- Want more? The developerWorks eServer zone hosts hundreds of informative articles and introductory, intermediate, and advanced tutorials on the eServer brand.
Get products and technologies
- General Parallel File System -- Support for UNIX servers and AMD and Intel based servers: Download corrective service packages for General Parallel File System (GPFS) clustering software.
- Build your next development project with IBM trial software, available for download directly from developerWorks.
- Participate in developerWorks blogs and get involved in the developerWorks community.