Exercise #1A: Install and Configure a GPFS Cluster
Objectives:
Use the GPFS web-based administration tool to install a GPFS cluster
Requirements:
Node names and IP addresses provided by the instructor:
Node1: ________________________
Node2: ________________________
Account/cluster name used for the exercise:
Name: root
Password: __________________
ClusterName: _______________
In this step you will create a GPFS cluster on two nodes using the GPFS web-based administration interface.
1. Open a web browser and go to the GPFS management console:
http://[node1 ip address]/ibm/console
Account: root
Password:
4. In the navigation pane select "GPFS Management" then "Install GPFS". This will start the GPFS configuration wizard.
5. Select "Create a new Session". This will take you to the Define hosts page. 6. Under "Defined hosts" click "Add" 1. Enter the name or ipaddress of node1, and the root password. 2. Select "Add this host and add the next host."
8. Enter the IP address and root password for node2 and select "Add host." Close the task process dialog when completed.
9. The hosts have now been added. Select "Next" to go to the Install and Verify Packages page. On this page select "Check existing package installation." Close the task dialog when the check is complete.
10. GPFS ships an open source component called the GPL layer that allows the support of a wide variety of Linux kernels. The GPL layer installation page checks that the GPL layer is built and installed correctly. If it is not, the installer will complete the build and install. Select "Check existing GPL layer installation". Close the task dialog when the check is complete.
11. GPFS verifies the network configuration of all nodes in the cluster. Select "Check current settings" to verify the network configuration. Close the task dialog when the check is complete.
12. GPFS uses ssh (or another remote command tool) for some cluster operations. The installer will verify that ssh is configured properly for a GPFS cluster. Select "Check Current Settings" to verify the ssh configuration. Close the task dialog when the check is complete.
13. It is recommended, though not required, that all the servers synchronize their time using a protocol such as NTP. For this lab we will skip the NTP setup. Choose "Skip Setup" to continue.
14. Next, set the name of the GPFS cluster. Enter the cluster name and select "Next."
15. The last step is to define the primary and secondary cluster configuration servers. Since this is a two-node cluster we will leave the defaults. Select "Next" to continue.
16. Select "Next" to complete the cluster configuration. Close the task dialog when the configuration is complete.
17. When you select "Finish" you will be directed to the cluster management page.
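The key-generation steps for passwordless root ssh do not appear in this extract. A minimal sketch, assuming OpenSSH and that root's home directory is / (as on AIX), with node2's public key staged in /tmp as the next step expects:
node1# ssh-keygen -t rsa -N "" -f /.ssh/id_rsa
node2# ssh-keygen -t rsa -N "" -f /.ssh/id_rsa
node2# scp /.ssh/id_rsa.pub node1:/tmp/id_rsa.pub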
Add the public key from node2 to the authorized_keys file on node1
cat /tmp/id_rsa.pub >> /.ssh/authorized_keys
To test your ssh configuration, ssh as root between the nodes until you are no longer prompted for a password or for addition to the known_hosts file:
node1# ssh node1 date
node1# ssh node2 date
node2# ssh node1 date
node2# ssh node2 date
Suppress ssh banners by creating a .hushlogin file in the root home directory:
touch /.hushlogin
Verify the disks are available to the system. For this lab you should have 4 disks available for use (hdiskn-hdiskt).
1. Use lspv to verify the disks exist.
2. Ensure you see 4 disks besides hdisk0.
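Sample lspv output (disk names, PVIDs, and volume groups here are purely illustrative; on your system the four disks not assigned to a volume group are the candidates for GPFS):
# lspv
hdisk0   00c39bfb2df5a65e   rootvg   active
hdisk1   none               None
hdisk2   none               None
hdisk3   none               None
hdisk4   none               None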
Verify that the GPFS software is installed using the lslpp command:

# lslpp -L gpfs.\*
  Fileset                      Level  State  Type  Description (Uninstaller)
  ----------------------------------------------------------------------------
  gpfs.base                  3.3.0.3    A     F    GPFS File Manager
  gpfs.docs.data             3.3.0.3    A     F    GPFS Server Manpages and
                                                   Documentation
  gpfs.gui                   3.3.0.3    C     F    GPFS GUI
  gpfs.msg.en_US             3.3.0.1    A     F    GPFS Server Messages U.S. English
Note: Exact versions of GPFS may vary from this example; the important part is that the GPFS filesets are present.
Confirm the GPFS binaries are in your path using the mmlscluster command:
# mmlscluster
mmlscluster: 6027-1382 This node does not belong to a GPFS cluster.
mmlscluster: 6027-1639 Command failed. Examine previous error messages to determine cause.
Create the cluster using node1 as the primary configuration server, and give node1 the designations quorum and manager. Use ssh and scp as the remote shell and remote file copy commands.
Primary configuration server (node1): ________________
Verify the fully qualified path to ssh and scp:
ssh path: ________________
scp path: ________________
Use the mmcrcluster command to create the cluster:
mmcrcluster -N node01:manager-quorum -p node01 -r /usr/bin/ssh -R /usr/bin/scp
Run the mmlscluster command again to see that the cluster was created
# mmlscluster

GPFS cluster information
========================
  GPFS cluster name:         node1.ibm.com
  GPFS cluster id:           13882390374179224464
  GPFS UID domain:           node1.ibm.com
  Remote shell command:      /usr/bin/ssh
  Remote file copy command:  /usr/bin/scp

GPFS cluster configuration servers:
-----------------------------------
  Primary server:    node1.ibm.com
  Secondary server:  (none)

 Node  Daemon node name         IP address  Admin node name  Designation
 ---------------------------------------------------------------------------
   1   perf3-c2-aix.bvnssg.net  10.0.0.1    node1.ibm.com    quorum-manager
Set the license mode for the node using the mmchlicense command. Use a server license for this node.
mmchlicense server --accept -N node01
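The step that adds the second node to the cluster does not appear in this extract; a typical command, assuming node02 is the admin node name of the second node (the node designations may differ in your lab), would be:
mmaddnode -N node02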
Confirm the node was added to the cluster using the mmlscluster command
# mmlscluster
Use the mmchcluster command to set node2 as the secondary configuration server
# mmchcluster -s node2
Set the license mode for the node using the mmchlicense command. Use a server license for this node.
mmchlicense server --accept -N node02
Use the mmgetstate command to verify that both nodes are in the active state
# mmgetstate -a
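Sample output (node names and numbers are illustrative; both nodes should report active):
 Node number  Node name   GPFS state
------------------------------------------
      1       node1       active
      2       node2       active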
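The NSD-creation steps for this exercise are not shown in this extract. A minimal sketch, assuming four free hdisks and the descriptor format used later in this lab (the disk names and file contents are hypothetical):
Contents of diskdesc.txt:
hdisk1:::dataAndMetadata:-1:nsd1:
hdisk2:::dataAndMetadata:-1:nsd2:
hdisk3:::dataAndMetadata:-1:nsd3:
hdisk4:::dataAndMetadata:-1:nsd4:
Create the NSDs from the descriptor file:
mmcrnsd -F diskdesc.txt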
Create the file system using the mmcrfs command:
* Set the file system block size to 64KB
* Mount the file system at /gpfs
mmcrfs /gpfs fs1 -F diskdesc.txt -B 64k
Verify the file system was created correctly using the mmlsfs command
mmlsfs fs1
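The mount step is not shown in this extract; the file system is typically mounted on all nodes with the mmmount command, after which the df output below shows /gpfs:
mmmount fs1 -a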
# df -k
Filesystem      1024-blocks       Free  %Used  Iused  %Iused  Mounted on
/dev/hd4              65536       6508    91%   3375     64%  /
/dev/hd2            1769472     465416    74%  35508     24%  /usr
/dev/hd9var          131072      75660    43%    620      4%  /var
/dev/hd3             196608     192864     2%     37      1%  /tmp
/dev/hd1              65536      65144     1%     13      1%  /home
/proc                     -          -      -      -       -  /proc
/dev/hd10opt         327680      47572    86%   7766     41%  /opt
/dev/fs1          398929107  398929000     1%      1      1%  /gpfs
How many inodes are currently used in the file system? ______________
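One way to answer this is with the mmdf command, which reports inode usage for the file system (the inode summaries shown later in this lab come from the same command):
mmdf fs1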
Requirements:
1. Complete Exercise 1: Installing the cluster
2. List of available devices: /dev/sd__ /dev/sd__ /dev/sd__ /dev/sd__ /dev/sd__
The two storage pools will be system and pool1.
1. Create a backup copy of the disk descriptor file:
/gpfs-course/data/pooldesc_bak.txt
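A minimal way to make the backup, assuming the descriptor file is the /gpfs-course/data/pooldesc.txt used in the next step:
cp /gpfs-course/data/pooldesc.txt /gpfs-course/data/pooldesc_bak.txt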
3. Create a file system based on these NSDs using the mmcrfs command
   * Set the file system block size to 64KB
   * Mount the file system at /gpfs
   Command: "mmcrfs /gpfs fs1 -F /gpfs-course/data/pooldesc.txt -B 64k -M 2 -R 2"
   Example:
gpfs1:~ # mmcrfs /gpfs fs1 -F /gpfs-course/data/pooldesc.txt -B 64k -M2 -R2

The following disks of fs1 will be formatted on node gpfs1:
    nsd1: size 20971520 KB
    nsd2: size 20971520 KB
    nsd3: size 20971520 KB
    nsd4: size 20971520 KB
Formatting file system ...
Disks up to size 53 GB can be added to storage pool 'system'.
Disks up to size 53 GB can be added to storage pool 'pool1'.
Creating Inode File
  45 % complete on Wed Sep 26 10:05:27 2007
  89 % complete on Wed Sep 26 10:05:32 2007
 100 % complete on Wed Sep 26 10:05:33 2007
Creating Allocation Maps
Clearing Inode Allocation Map
Clearing Block Allocation Map
  42 % complete on Wed Sep 26 10:05:52 2007
  83 % complete on Wed Sep 26 10:05:57 2007
 100 % complete on Wed Sep 26 10:05:59 2007
  43 % complete on Wed Sep 26 10:06:04 2007
  85 % complete on Wed Sep 26 10:06:09 2007
 100 % complete on Wed Sep 26 10:06:10 2007
Completed creation of file system /dev/fs1.
mmcrfs: Propagating the cluster configuration data to all affected nodes.
This is an asynchronous process.
4. Verify the file system was created correctly using the mmlsfs command
> mmlsfs fs1
> mmdf fs1
disk                disk size  failure holds    holds           free KB             free KB
name                    in KB    group metadata data        in full blocks        in fragments
--------------- ------------- -------- -------- ----- -------------------- -------------------
Disks in storage pool: system
nsd1                102734400       -1 yes      yes       102565184 (100%)            90 ( 0%)
nsd2                102734400       -1 yes      yes       102564608 (100%)            96 ( 0%)
                -------------                         -------------------- -------------------
(pool total)        205468800                             205129792 (100%)           186 ( 0%)

Disks in storage pool: pool1
nsd3                102734400       -1 no       yes       102732288 (100%)            62 ( 0%)
nsd4                102734400       -1 no       yes       102732288 (100%)            62 ( 0%)
                -------------                         -------------------- -------------------
(pool total)        205468800                             205464576 (100%)           124 ( 0%)

                =============                         ==================== ===================
(data)              410937600                             410594368 (100%)           310 ( 0%)
(metadata)          205468800                             205129792 (100%)           186 ( 0%)
                =============                         ==================== ===================
(total)             410937600                             410594368 (100%)           310 ( 0%)
Inode Information
-----------------
Number of used inodes:        4038
Number of free inodes:      397370
Number of allocated inodes: 401408
Maximum number of inodes:   401408
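The fileset-creation step does not appear in this extract; filesets are typically created with the mmcrfileset command, for example:
mmcrfileset fs1 fileset1
mmcrfileset fs1 fileset2
mmcrfileset fs1 fileset3
mmcrfileset fs1 fileset4
mmcrfileset fs1 fileset5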
What is the status of fileset1-fileset5? _______________
3. Link the filesets into the file system using the mmlinkfileset command:
# mmlinkfileset fs1 fileset1 -J /gpfs/fileset1
# mmlinkfileset fs1 fileset2 -J /gpfs/fileset2
# mmlinkfileset fs1 fileset3 -J /gpfs/fileset3
# mmlinkfileset fs1 fileset4 -J /gpfs/fileset4
# mmlinkfileset fs1 fileset5 -J /gpfs/fileset5
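To confirm the filesets are now linked, you can list them with mmlsfileset (the status column should change from Unlinked to Linked):
mmlsfileset fs1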
1. Record the "Before" free space in the chart 2. Create a file in fileset1 called bigfile1
dd if=/dev/zero of=/gpfs/fileset1/bigfile1 bs=64k count=10000
3. Record the free space in each pool using the mmdf command (Bigfile1)
> mmdf fs1
disk                disk size  failure holds    holds           free KB             free KB
name                    in KB    group metadata data        in full blocks        in fragments
--------------- ------------- -------- -------- ----- -------------------- -------------------
Disks in storage pool: system
nsd1                 20971520       -1 yes      yes        20588288 ( 98%)           930 ( 0%)
nsd2                 20971520       -1 yes      yes        20588608 ( 98%)           806 ( 0%)
                -------------                         -------------------- -------------------
(pool total)         41943040                              41176896 ( 98%)          1736 ( 0%)

Disks in storage pool: pool1
nsd3                 20971520       -1 no       yes        20969408 (100%)            62 ( 0%)
nsd4                 20971520       -1 no       yes        20969408 (100%)            62 ( 0%)
                -------------                         -------------------- -------------------
(pool total)         41943040                              41938816 (100%)           124 ( 0%)

                                                      ==================== ===================
(data)                                                     83115712 ( 99%)          1860 ( 0%)
(metadata)                                                 41176896 ( 98%)          1736 ( 0%)
                                                      ==================== ===================
(total)                                                    83115712 ( 99%)          1860 ( 0%)
Inode Information
-----------------
Number of used inodes:
Number of free inodes:
Number of allocated inodes:
Maximum number of inodes:
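Step 4, which creates bigfile1.dat, does not appear in this extract; following the pattern of the neighboring steps it would be something like the command below (the target fileset and size are assumptions):
dd if=/dev/zero of=/gpfs/fileset1/bigfile1.dat bs=64k count=10000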
Record the free space (bigfile1.dat)
5. Create a file in fileset5 called bigfile2:
dd if=/dev/zero of=/gpfs/fileset5/bigfile2 bs=64k count=1000
Record the free space (bigfile2)
6. Questions: Where did the data go for each file?
Bigfile1 ______________
Bigfile1.dat ______________
Bigfile2 ______________
Why?
7. Create a couple more files (These will be used in the next step)
> dd if=/dev/zero of=/gpfs/fileset3/bigfile3 bs=64k count=10000
> dd if=/dev/zero of=/gpfs/fileset4/bigfile4 bs=64k count=10000
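The command for this step is not shown in this extract. A typical dry run, assuming the managementpolicy.txt policy file used in the next step (-I test evaluates the policy without executing it), would be:
mmapplypolicy fs1 -P managementpolicy.txt -I test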
This command will show you what mmapplypolicy will do but will not actually perform the delete or migrate.
Actually perform the migration and deletion using the mmapplypolicy command
> mmapplypolicy fs1 -P managementpolicy.txt
Review the output of the mmapplypolicy command to answer these questions.
How many files were deleted? ____________
How many files were moved? ____________
How many KB total were moved? ___________
File listrule1.txt
RULE EXTERNAL POOL 'externalpoolA' EXEC '/tmp/expool1.bash'
RULE 'MigToExt' MIGRATE TO POOL 'externalpoolA' WHERE FILE_SIZE > 2
Note: You may need to modify the WHERE clause to get a list of files on your file system.
2. Make the external pool script executable:
chmod +x /tmp/expool1.bash
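The command that generates the file list is not shown in this extract; a typical invocation, assuming the rule file above was saved as /tmp/listrule1.txt, would be:
mmapplypolicy fs1 -P /tmp/listrule1.txt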
It will print output to the screen. When it is done it will print the location of the results file. For example:
The file list report has been placed in /tmp/FileReport_Jul3108-20_15_50
4. What information do you see in the file?
Objectives:
Enable data and metadata replication
Verify and monitor a file's replication status

Requirements:
1. Complete Exercise 1: Installing the cluster
2. List of available devices: /dev/sd__ /dev/sd__ /dev/sd__ /dev/sd__
The failure group should be set to a value of -1.
2. Change the failure group to 1 for nsd1 and nsd3 and to 2 for nsd2 and nsd4 using the mmchdisk command.
> mmchdisk fs1 change -d "nsd1:::dataAndMetadata:1:::"
> mmchdisk fs1 change -d "nsd2:::dataAndMetadata:2:::"
> mmchdisk fs1 change -d "nsd3:::dataOnly:1:::"
> mmchdisk fs1 change -d "nsd4:::dataOnly:2:::"
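The step that creates bigfile10 does not appear in this extract; following the pattern of the earlier exercises it would be something like the command below (the size is an assumption):
dd if=/dev/zero of=/gpfs/fileset1/bigfile10 bs=64k count=1000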
Notice that the data was not written to both failure groups, because the default replication level is still set to 1. Now that there are two failure groups you can see how to change the replication status of a file.
2. Use the mmlsattr command to check the replication status of the file bigfile10
> mmlsattr /gpfs/fileset1/bigfile10
replication factors
metadata(max) data(max) file [flags]
------------- --------- ---------------
      1 (  2)   1 (  2) /gpfs/fileset1/bigfile10
3. Change the file replication status of bigfile10 so that it is replicated in two failure groups using the mmchattr command.
mmchattr -m 2 -r 2 /gpfs/fileset1/bigfile10
Notice that this command takes a few moments to execute; as you change the replication status of a file, the data is copied before the command completes unless you use the "-I defer" option.
4. Again use the mmlsattr command to check the replication status of the file bigfile10
> mmlsattr /gpfs/fileset1/bigfile10
2. Use the mmlsattr command to check the replication status of the file bigfile11
mmlsattr /gpfs/fileset1/bigfile11
3. Using the mmchfs command, change the default replication status for fs1.
mmchfs fs1 -m 2 -r 2
4. Use the mmlsattr command to check the replication status of the file bigfile11
mmlsattr /gpfs/fileset1/bigfile11
Has the replication status of bigfile11 changed? _________________
5. The replication status of a file does not change until mmrestripefs is run or a new file is created. To test this, create a new file called bigfile12:
dd if=/dev/zero of=/gpfs/fileset1/bigfile12 bs=64k count=1000
6. Use the mmlsattr command to check the replication status of the file bigfile12
mmlsattr /gpfs/fileset1/bigfile12
Is the file replicated?
7. You can replicate the existing files in the file system using the mmrestripefs command:
mmrestripefs fs1 -R
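To confirm that existing files such as bigfile11 have picked up the new replication factors after the restripe, re-check them with mmlsattr:
mmlsattr /gpfs/fileset1/bigfile11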
In this lab we will use the snapshot feature to create online copies of files.
Objectives:
Create a file system snapshot
Restore a user-deleted file from a snapshot image
Manage multiple snapshot images
Requirements:
1. Complete Exercise 1: Installing the cluster
2. A File System - Use Exercise 2 to create a file system if you do not already have one.
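Steps 1 through 5, which create snapfile1 and the snap1/snap2 snapshots, are not shown in this extract. A minimal sketch of what they typically look like, assuming the file and snapshot names referenced below (the file contents are arbitrary):
echo "hello" > /gpfs/fileset1/snapfile1
mmcrsnapshot fs1 snap1
echo "hello again" >> /gpfs/fileset1/snapfile1
mmcrsnapshot fs1 snap2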
6. Delete the file /gpfs/fileset1/snapfile1. Now that the file is deleted, let's see what is in the snapshots.
7. Take a look at the snapshot images. To view the images, change directories to the .snapshots directory:
cd /gpfs/.snapshots
What directories do you see? _____________________
8. Compare the snapfile1 stored in each snapshot:
cat snap1/fileset1/snapfile1
cat snap2/fileset1/snapfile1
Are the file contents the same? _______________
9. To restore the file from the snapshot, copy the file back into the original location:
cp /gpfs/.snapshots/snap2/fileset1/snapfile1 /gpfs/fileset1/snapfile1
10. When you are done with a snapshot you can delete it. Delete both of these snapshots.
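The deletion commands are not shown here; snapshots are removed with the mmdelsnapshot command, for example:
mmdelsnapshot fs1 snap1
mmdelsnapshot fs1 snap2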
11. Verify the snapshots were deleted using the mmlssnapshot command
mmlssnapshot fs1
Requirements:
1. Complete Exercise 1: Installing the cluster
2. A File System (Use Exercise 2 to create a file system if you do not already have one).
3. Device to add: /dev/sd___
b. The df command will display the mounted GPFS file system.
2. Create a disk descriptor file /gpfs-course/data/adddisk.txt for the new disk using the format:
#DiskName:serverlist::DiskUsage:FailureGroup:DesiredName:StoragePool
/dev/sd_:::dataOnly::nsd5:pool1
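Step 3, which creates the NSD from this descriptor, is not shown in this extract; a typical command would be:
mmcrnsd -F /gpfs-course/data/adddisk.txt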
4. Verify the disk has been created using the mmlsnsd command
> mmlsnsd
The disk you just added should show as a (free disk).
5. Add the new NSD to the fs1 file system using the mmadddisk command:
> mmadddisk fs1 -F /gpfs-course/data/adddisk.txt
> mmdf fs1
disk                disk size  failure holds    holds           free KB             free KB
name                    in KB    group metadata data        in full blocks        in fragments
--------------- ------------- -------- -------- ----- -------------------- -------------------
Disks in storage pool: system
nsd1                 20971520        1 yes      yes        20873984 (100%)           284 ( 0%)
nsd2                 20971520        2 yes      yes        20873984 (100%)           202 ( 0%)
                -------------                         -------------------- -------------------
(pool total)         41943040                              41747968 (100%)           486 ( 0%)

Disks in storage pool: pool1
nsd3                 20971520        1 no       yes        20969408 (100%)            62 ( 0%)
nsd4                 20971520        2 no       yes        20969408 (100%)            62 ( 0%)
nsd5                 20971520       -1 no       yes        20969408 (100%)            62 ( 0%)
                -------------                         -------------------- -------------------
(pool total)         62914560                              62908224 (100%)           186 ( 0%)

                =============                         ==================== ===================
(data)                                                    104656192 (100%)           672 ( 0%)
(metadata)                                                 41747968 (100%)           486 ( 0%)
                =============                         ==================== ===================
(total)                                                   104656192 (100%)           672 ( 0%)
Inode Information
-----------------
Number of used inodes:        4045
Number of free inodes:       78131
Number of allocated inodes:  82176
Maximum number of inodes:    82176