Installing Platform LSF on UNIX and Linux
Platform LSF
Version 9 Release 1.2
SC27-5314-02
Note
Before using this information and the product it supports, read the information in “Notices” on page 39.
First edition
This edition applies to version 9, release 1 of IBM Platform LSF (product number 5725G82) and to all subsequent
releases and modifications until otherwise indicated in new editions.
Significant changes or additions to the text and illustrations are indicated by a vertical line (|) to the left of the
change.
If you find an error in any Platform Computing documentation, or you have a suggestion for improving it, please
let us know. Send your suggestions, comments and questions to the following email address:
[email protected]
Be sure to include the publication title and order number, and, if applicable, the
specific location of the information about which you have comments (for example, a
page number or a browser URL). When you send information to
IBM, you grant IBM a nonexclusive right to use or distribute the information in any way it believes appropriate
without incurring any obligation to you.
© Copyright IBM Corporation 1992, 2013.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract
with IBM Corp.
Contents
Chapter 1. Example installation directory structure
Chapter 2. Plan your installation
  EGO in the LSF cluster
Chapter 3. Prepare your systems for installation
Chapter 4. Install a new LSF cluster (lsfinstall)
Chapter 5. After installing LSF
Chapter 6. If you install LSF as a non-root user
Chapter 7. Add hosts
  Running host setup remotely (rhostsetup)
  Enable LSF HPC Features
  Optional LSF HPC features configuration
Chapter 8. install.config
  About install.config
  Parameters
Chapter 9. slave.config
  EGO_DAEMON_CONTROL
  ENABLE_EGO
  EP_BACKUP
  LSF_ADMINS
| LSF_ENTITLEMENT_FILE
  LSF_LIM_PORT
  LSF_SERVER_HOSTS
  LSF_TARDIR
  LSF_LOCAL_RESOURCES
  LSF_TOP
  SILENT_INSTALL
  LSF_SILENT_INSTALL_TARLIST
Notices
  Trademarks
  Privacy policy considerations
Important: Do not use the name of any host, user, or user group as the name of
your cluster.
v Choose LSF server hosts that are candidates to become the master host for the
cluster, if you are installing a new host to be dynamically added to the cluster
(for example, LSF_MASTER_LIST="hosta hostb")
v Choose a cluster name (39 characters or less with no white spaces; for example,
LSF_CLUSTER_NAME="cluster1")
| v If you are installing LSF Standard Edition, choose a configuration template to
| determine the initial configuration of your new cluster (for example,
| CONFIGURATION_TEMPLATE="HIGH_THROUGHPUT"). Select one of the following
| templates depending on the type of jobs your cluster will run:
| DEFAULT
| Select this template for clusters with mixed workload. This configuration
| can serve different types of workload with good performance, but is not
| specifically tuned for a particular type of cluster.
| PARALLEL
| Select this template for clusters running large parallel jobs. This
| configuration is designed for long running parallel jobs and should not
| be used for clusters that mainly run short jobs due to the longer
| reporting time for each job.
| HIGH_THROUGHPUT
| This template is designed to be used for clusters that mainly run short
| jobs, where over 80% of jobs finish within one minute. This high
| turnover rate requires LSF to be more responsive and fast acting, but
| will consume more resources as the daemons become busier.
See the LSF administrator documentation for more details on the benefits of
enabling EGO and using EGO to control the services.
Installation choices
When you install the cluster and enable EGO, you can configure the following
separately:
v EGO control of sbatchd and res
To install LSF in an LDAP environment, check that the following are satisfied:
v The LSF administrator is a defined user in LDAP.
v The OS is configured to use LDAP for authentication.
v The LDAP administrator grants privileges to the LSF installer user (usually root)
to retrieve the user list from the LDAP server.
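A quick way to confirm the first two prerequisites is to check that the OS can
resolve the LSF administrator account through LDAP. This is a hedged sketch: it
assumes a Linux host where getent is available and an administrator account
named lsfadmin.
# getent passwd lsfadmin
The command should return the account entry served by the LDAP directory; if it
returns nothing, the OS is not resolving the administrator through LDAP.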
LSF uses entitlement files to determine which features are enabled or disabled,
based on the edition of the product. The entitlement files are:
v LSF Standard Edition - platform_lsf_std_entitlement.dat
v LSF Express Edition - platform_lsf_exp_entitlement.dat
v LSF Advanced Edition - platform_lsf_adv_entitlement.dat
You must download the entitlement file for the edition of the product you are
running, and set LSF_ENTITLEMENT_FILE in install.config to the full path to the
entitlement file you downloaded.
If you are installing LSF Express Edition, you can later upgrade to LSF Standard
Edition to take advantage of the additional functionality of LSF Standard Edition.
Simply reinstall the cluster with the LSF Standard entitlement file
(platform_lsf_std_entitlement.dat). You can also upgrade to LSF Advanced
Edition to take advantage of even more functionality. Simply reinstall the cluster
with the LSF Advanced entitlement file (platform_lsf_adv_entitlement.dat).
You can also manually upgrade from LSF Express Edition to Standard Edition or
Advanced Edition. Get the LSF Standard entitlement configuration file
platform_lsf_std_entitlement.dat or platform_lsf_adv_entitlement.dat, copy it
to <LSF_TOP>/conf/lsf.entitlement and restart your cluster. The new entitlement
configuration enables additional functionality, but you may need to change some
of the default LSF Express configuration parameters to use the LSF Standard
Edition or Advanced Edition features.
Once LSF is installed and running, run the lsid command to see which edition of
LSF is enabled.
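The manual upgrade amounts to copying the new entitlement file into place and
restarting the daemons. A minimal sketch, assuming LSF_TOP=/usr/share/lsf and an
entitlement file already downloaded to the current directory:
# cp platform_lsf_std_entitlement.dat /usr/share/lsf/conf/lsf.entitlement
# lsfshutdown     # stop the LSF daemons on all hosts
# lsfstartup      # restart the cluster with the new entitlement
# lsid            # confirm the edition now reported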
Tip: The sample values in the install.config and slave.config template files
are examples only. They are not default installation values.
The following install.config parameters are required for installation:
v LSF_TOP
v LSF_ADMINS
v LSF_CLUSTER_NAME
v LSF_MASTER_LIST
v LSF_ENTITLEMENT_FILE
v LSF_TARDIR
If you do not specify this parameter, the default value is the parent directory
of the current working directory from which lsfinstall is run.
v CONFIGURATION_TEMPLATE (LSF Standard Edition only)
If you do not specify this parameter, the default value is DEFAULT.
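Taken together, a minimal working install.config might look like the following
sketch (host names and paths are placeholders to adapt to your site):
LSF_TOP="/usr/share/lsf"
LSF_ADMINS="lsfadmin"
LSF_CLUSTER_NAME="cluster1"
LSF_MASTER_LIST="hosta hostb"
LSF_ENTITLEMENT_FILE="/usr/share/lsf_distrib/platform_lsf_std_entitlement.dat"
LSF_TARDIR="/usr/share/lsf_distrib"
CONFIGURATION_TEMPLATE="DEFAULT"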
If you intend to include servers in your cluster that will not share the
specified LSF_TOP, you must complete the slave.config file and run
lsfinstall -s -f slave.config on those hosts.
For details on install.config parameters, refer to install.config.
For details on slave.config parameters, refer to slave.config.
4. Run lsfinstall -f install.config to install the cluster.
5. Test your cluster by running some basic LSF commands (for example, lsid,
lshosts, bhosts).
Note: Running hostsetup is required if you will be running IBM POE jobs
using IBM Parallel Environment Runtime Edition (or IBM PE Runtime Edition).
a. Log on to each LSF server host as root. Start with the LSF master host.
If you are integrating LSF with IBM Parallel Environment (PE), you must
log on as root.
Otherwise, you can continue with host setup if you are not root, but by
default, only root can start the LSF daemons.
b. Run hostsetup on each LSF server host. For example, to use the LSF cluster
installed in /usr/share/lsf and configure LSF daemons to start
automatically at boot time:
# cd /usr/share/lsf/9.1/install
# ./hostsetup --top="/usr/share/lsf" --boot="y"
For complete hostsetup usage, enter hostsetup -h.
2. Log on to the LSF master host as root, and set your LSF environment:
v For csh or tcsh: % source <LSF_TOP>/conf/cshrc.lsf
v For sh, ksh, or bash: $ . <LSF_TOP>/conf/profile.lsf
3. Optional. Enable LSF for users.
Ensure that all users include <LSF_TOP>/conf/cshrc.lsf or
<LSF_TOP>/conf/profile.lsf in their .cshrc or .profile.
4. Run lsfstartup to start the cluster.
lsfstartup will use RSH to connect to all nodes in the cluster and start LSF. If
RSH is not configured in your environment, you can configure lsfstartup to
use SSH by adding the following line to your lsf.conf:
LSF_RSH=ssh
5. Test your cluster by running some basic LSF commands (for example, lsid,
lshosts, bhosts).
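The following sketch summarizes what each verification command reports; the
comments are a summary, not full command output:
% lsid      # reports the LSF version, edition, cluster name, and master host
% lshosts   # lists static resource information for the hosts
% bhosts    # lists batch server hosts and their status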
After testing your cluster, be sure all LSF users include LSF_CONFDIR/cshrc.lsf
or LSF_CONFDIR/profile.lsf in their .cshrc or .profile.
Note:
If you will be running IBM POE jobs using IBM Parallel Environment Runtime
Edition (or IBM PE Runtime Edition) you must run hostsetup.
If you are integrating LSF with IBM Parallel Environment (PE), you must run
hostsetup as root.
1. # hostsetup --top="/usr/share/lsf" --boot="y"
This sets up a host to use the cluster installed in /usr/share/lsf. It also
configures the LSF daemons to start automatically (--boot="y").
2. # hostsetup --top="/usr/share/lsf" --silent
This is the silent installation option which does not display any output
messages.
If you are integrating LSF with IBM Parallel Environment (PE), you must run
rhostsetup as root.
rhostsetup uses either ssh or rsh. It is included in the installer script package
lsf9.1.2_lsfinstall.tar.Z and is located in the lsf9.1.2_lsfinstall directory
created when you uncompress and extract the installer script package.
For example:
LSF_RSHCMD="ssh -n"
LSF_HOSTS="hostA hostB hostC"
LSF_TOPDIR=/usr/local/ls
LSF_BOOT=y
LSF_QUIET=n
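After setting these variables at the top of the rhostsetup script, run the script
from the directory where the installer package was extracted. A sketch, assuming
the default extraction location:
# cd lsf9.1.2_lsfinstall
# ./rhostsetup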
lsb.modules
v Adds the external scheduler plugin module names to the PluginModule section
of lsb.modules:
Begin PluginModule
SCH_PLUGIN RB_PLUGIN SCH_DISABLE_PHASES
schmod_default () ()
schmod_fcfs () ()
schmod_fairshare () ()
schmod_limit () ()
schmod_parallel () ()
schmod_reserve () ()
schmod_mc () ()
schmod_preemption () ()
schmod_advrsv () ()
schmod_ps () ()
schmod_affinity () ()
#schmod_dc () ()
schmod_aps () ()
schmod_cpuset () ()
End PluginModule
Note:
The HPC plugin names must be configured after the standard LSF plugin names in
the PluginModule list.
lsb.queues
Configures the hpc_ibm queue for IBM POE jobs and the hpc_ibm_tv queue for
debugging IBM POE jobs:
Begin Queue
QUEUE_NAME = hpc_linux
PRIORITY = 30
NICE = 20
#RUN_WINDOW = 5:19:00-1:8:30 20:00-8:30
#r1m = 0.7/2.0 # loadSched/loadStop
#r15m = 1.0/2.5
#pg = 4.0/8
#ut = 0.2
#io = 50/240
#CPULIMIT = 180/hostA # 3 hours of host hostA
#FILELIMIT = 20000
#DATALIMIT = 20000 # jobs data segment limit
#CORELIMIT = 20000
#PROCLIMIT = 5 # job processor limit
#USERS = all # users who can submit jobs to this queue
#HOSTS = all # hosts on which jobs in this queue can run
#PRE_EXEC = /usr/local/lsf/misc/testq_pre >> /tmp/pre.out
#POST_EXEC = /usr/local/lsf/misc/testq_post |grep -v Hey
DESCRIPTION = IBM Platform LSF 9.1 for linux.
End Queue
Begin Queue
QUEUE_NAME = hpc_linux_tv
PRIORITY = 30
NICE = 20
#RUN_WINDOW = 5:19:00-1:8:30 20:00-8:30
#r1m = 0.7/2.0 # loadSched/loadStop
#r15m = 1.0/2.5
#pg = 4.0/8
#ut = 0.2
#io = 50/240
#CPULIMIT = 180/hostA # 3 hours of host hostA
#FILELIMIT = 20000
#DATALIMIT = 20000 # jobs data segment limit
#CORELIMIT = 20000
#PROCLIMIT = 5 # job processor limit
#USERS = all # users who can submit jobs to this queue
#HOSTS = all # hosts on which jobs in this queue can run
#PRE_EXEC = /usr/local/lsf/misc/testq_pre >> /tmp/pre.out
#POST_EXEC = /usr/local/lsf/misc/testq_post |grep -v Hey
TERMINATE_WHEN = LOAD PREEMPT WINDOW
RERUNNABLE = NO
INTERACTIVE = NO
DESCRIPTION = IBM Platform LSF 9.1 for linux debug queue.
End Queue
Begin Queue
QUEUE_NAME = hpc_ibm
PRIORITY = 30
NICE = 20
#RUN_WINDOW = 5:19:00-1:8:30 20:00-8:30
#r1m = 0.7/2.0 # loadSched/loadStop
#r15m = 1.0/2.5
#pg = 4.0/8
#ut = 0.2
#io = 50/240
#CPULIMIT = 180/hostA # 3 hours of host hostA
#FILELIMIT = 20000
#DATALIMIT = 20000 # jobs data segment limit
#CORELIMIT = 20000
#PROCLIMIT = 5 # job processor limit
#USERS = all # users who can submit jobs to this queue
#HOSTS = all # hosts on which jobs in this queue can run
#PRE_EXEC = /usr/local/lsf/misc/testq_pre >> /tmp/pre.out
#POST_EXEC = /usr/local/lsf/misc/testq_post |grep -v Hey
RES_REQ = select[ poe > 0 ]
REQUEUE_EXIT_VALUES = 133 134 135
DESCRIPTION = IBM Platform LSF 9.1 for IBM. This queue is to run POE jobs ONLY.
End Queue
Begin Queue
QUEUE_NAME = hpc_ibm_tv
PRIORITY = 30
NICE = 20
#RUN_WINDOW = 5:19:00-1:8:30 20:00-8:30
#r1m = 0.7/2.0 # loadSched/loadStop
#r15m = 1.0/2.5
#pg = 4.0/8
#ut = 0.2
#io = 50/240
#CPULIMIT = 180/hostA # 3 hours of host hostA
#FILELIMIT = 20000
#DATALIMIT = 20000 # jobs data segment limit
#CORELIMIT = 20000
#PROCLIMIT = 5 # job processor limit
#USERS = all # users who can submit jobs to this queue
#HOSTS = all # hosts on which jobs in this queue can run
#PRE_EXEC = /usr/local/lsf/misc/testq_pre >> /tmp/pre.out
#POST_EXEC = /usr/local/lsf/misc/testq_post |grep -v Hey
RES_REQ = select[ poe > 0 ]
REQUEUE_EXIT_VALUES = 133 134 135
TERMINATE_WHEN = LOAD PREEMPT WINDOW
RERUNNABLE = NO
INTERACTIVE = NO
DESCRIPTION = IBM Platform LSF 9.1 for IBM debug queue. This queue is to run POE jobs ONLY.
End Queue
lsf.cluster.cluster_name
lsf.conf
v Adds LSB_SUB_COMMANDNAME=Y to lsf.conf to enable the LSF_SUB_COMMANDLINE
environment variable required by esub.
v LSF_ENABLE_EXTSCHEDULER=Y: LSF uses an external scheduler for topology-aware
external scheduling.
v LSB_CPUSET_BESTCPUS=Y: LSF schedules jobs based on the shortest CPU radius in
the processor topology using a best-fit algorithm.
v LSF_VPLUGIN="/opt/mpi/lib/pa1.1/libmpirm.sl": On HP-UX hosts, sets the full
path to the HP vendor MPI library libmpirm.sl.
lsf.shared
Template location
Important:
The sample values in the install.config template file are examples only. They are
not default installation values.
Format
The equal sign = must follow each NAME, even if no value follows, and there must
be no spaces around the equal sign.
Blank lines and lines starting with a pound sign (#) are ignored.
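For example, the following sketch shows the expected format (the parameter values
are placeholders):
LSF_ADMINS="lsfadmin"
LSF_TARDIR=
The first line sets a value; the second is valid even though no value follows the
equal sign. A line such as LSF_ADMINS = "lsfadmin", with spaces around the equal
sign, is not valid.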
Parameters
| v CONFIGURATION_TEMPLATE
v EGO_DAEMON_CONTROL
v ENABLE_DYNAMIC_HOSTS
v ENABLE_EGO
| v ENABLE_STREAM
v LSF_ADD_SERVERS
v LSF_ADD_CLIENTS
v LSF_ADMINS
v LSF_CLUSTER_NAME
v LSF_DYNAMIC_HOST_WAIT_TIME
v LSF_ENTITLEMENT_FILE
v LSF_MASTER_LIST
v LSF_QUIET_INST
v LSF_SILENT_INSTALL_TARLIST
v LSF_TARDIR
v LSF_TOP
v PATCH_BACKUP_DIR
v PATCH_HISTORY_DIR
v SILENT_INSTALL
CONFIGURATION_TEMPLATE
Syntax
CONFIGURATION_TEMPLATE="DEFAULT" | "PARALLEL" | "HIGH_THROUGHPUT"
Description
LSF Standard Edition on UNIX or Linux only. Selects the configuration template
for this installation, which determines the initial LSF configuration parameters
specified when the installation is complete. The following are valid values for this
parameter:
DEFAULT
This template should be used for clusters with mixed workload. This
configuration can serve different types of workload with good
performance, but is not specifically tuned for a particular type of cluster.
PARALLEL
This template provides extra support for large parallel jobs. This
configuration is designed for long running parallel jobs, and should not be
used for clusters that mainly run short jobs due to the longer reporting
time for each job.
HIGH_THROUGHPUT
This template is designed to be used for clusters that mainly run short
jobs, where over 80% of jobs finish within one minute. This high turnover
rate requires LSF to be more responsive and fast acting. However, this
configuration will consume more resources as the daemons become busier.
The installer uses the DEFAULT configuration template when installing LSF
Standard Edition on Windows.
The installer specifies the following initial configuration file parameter values
based on the selected configuration template:
v DEFAULT
– lsf.conf:
DAEMON_SHUTDOWN_DELAY=180
LSF_LINUX_CGROUP_ACCT=Y
LSF_PROCESS_TRACKING=Y
– lsb.params:
The installer specifies the following initial configuration parameters for all
configuration templates:
v lsf.conf:
EGO_ENABLE_AUTO_DAEMON_SHUTDOWN=Y
LSB_DISABLE_LIMLOCK_EXCL=Y
LSB_MOD_ALL_JOBS=Y
LSF_DISABLE_LSRUN=Y
LSB_SUBK_SHOW_EXEC_HOST=Y
LSF_PIM_LINUX_ENHANCE=Y
LSF_PIM_SLEEPTIME_UPDATE=Y
LSF_STRICT_RESREQ=Y
LSF_UNIT_FOR_LIMITS=MB
v lsb.params:
ABS_RUNLIMIT=Y
DEFAULT_QUEUE=normal interactive
JOB_ACCEPT_INTERVAL=0
MAX_CONCURRENT_JOB_QUERY=100
MBD_SLEEP_TIME=10
PARALLEL_SCHED_BY_SLOT=Y
In addition, the installer enables the following features for all configuration
templates:
v Fairshare scheduling (LSF Standard Edition and Advanced Edition): All queues
except admin and license have fairshare scheduling enabled as follows in
lsb.queues:
Begin Queue
...
FAIRSHARE=USER_SHARES[[default, 1]]
...
End Queue
v Host groups (LSF Standard Edition on UNIX or Linux): Master candidate hosts
are assigned to the master_hosts host group.
v User groups (LSF Standard Edition on UNIX or Linux): LSF administrators are
assigned to the lsfadmins user group.
| v Affinity scheduling is enabled in both lsb.modules and lsb.hosts.
Example
CONFIGURATION_TEMPLATE="HIGH_THROUGHPUT"
Default
DEFAULT
EGO_DAEMON_CONTROL
Syntax
EGO_DAEMON_CONTROL="Y" | "N"
Description
Enables EGO to control LSF res and sbatchd. Set the value to "Y" if you want EGO
Service Controller to start res and sbatchd, and restart if they fail. To avoid
conflicts, leave this parameter undefined if you use a script to start up LSF
daemons.
Note:
Example
EGO_DAEMON_CONTROL="N"
Default
N (res and sbatchd are started manually)
ENABLE_DYNAMIC_HOSTS
Syntax
ENABLE_DYNAMIC_HOSTS="Y" | "N"
Description
Enables dynamically adding and removing hosts. Set the value to "Y" if you want
to allow dynamically added hosts.
If you enable dynamic hosts, any host can connect to the cluster. To enable
security, configure LSF_HOST_ADDR_RANGE in lsf.cluster.cluster_name after
installation.
Example
ENABLE_DYNAMIC_HOSTS="N"
Default
N
ENABLE_EGO
Syntax
ENABLE_EGO="Y" | "N"
Description
Enables EGO functionality in the LSF cluster.
Set the value to "Y" if you want to take advantage of the following LSF features
that depend on EGO:
v LSF daemon control by EGO Service Controller
v EGO-enabled SLA scheduling
Default
N (EGO is disabled in the LSF cluster)
| ENABLE_STREAM
| Syntax
| ENABLE_STREAM="Y" | "N"
| Description
| Enable LSF event streaming if you intend to install IBM Platform Analytics or IBM
| Platform Application Center.
| Default
| N
LSF_ADD_SERVERS
Syntax
LSF_ADD_SERVERS="host_name [ host_name...]"
Description
The hosts in LSF_MASTER_LIST are always LSF servers. You can specify additional
server hosts. Specify a list of host names in one of two ways:
v Host names separated by spaces
v Name of a file containing a list of host names, one host per line
Valid Values
Any valid LSF host name.
Example 1
Example 2
Default
None. Only hosts in LSF_MASTER_LIST are LSF servers.
LSF_ADD_CLIENTS
Syntax
LSF_ADD_CLIENTS="host_name [ host_name...]"
Description
Tip:
Valid Values
Example 2
Default
LSF_ADMINS
Syntax
LSF_ADMINS="user_name [ user_name ... ]"
Description
The first user account name in the list is the primary LSF administrator. It cannot
be the root user account.
Typically this account is named lsfadmin. It owns the LSF configuration files and
log files for job events. It also has permission to reconfigure LSF and to control
batch jobs submitted by other users. It typically does not have authority to start
LSF daemons. Usually, only root has permission to start LSF daemons.
All the LSF administrator accounts must exist on all hosts in the cluster before you
install LSF. Secondary LSF administrators are optional.
CAUTION:
You should not configure the root account as the primary LSF administrator.
Valid Values
Example
LSF_ADMINS="lsfadmin user1 user2"
Default
None—required variable
LSF_CLUSTER_NAME
Syntax
LSF_CLUSTER_NAME="cluster_name"
Description
Defines the name of the LSF cluster.
Example
LSF_CLUSTER_NAME="cluster1"
Valid Values
Any alphanumeric string containing no more than 39 characters. The name cannot
contain white spaces.
Important:
Do not use the name of any host, user, or user group as the name of your cluster.
Default
None—required variable
LSF_DYNAMIC_HOST_WAIT_TIME
Syntax
LSF_DYNAMIC_HOST_WAIT_TIME=seconds
Description
Time in seconds the slave LIM waits after startup before calling the master LIM
to dynamically add the slave host.
Recommended value
Up to 60 seconds for every 1000 hosts in the cluster, for a maximum of 15 minutes.
Selecting a smaller value will result in a quicker response time for new hosts at the
expense of an increased load on the master LIM.
Example
LSF_DYNAMIC_HOST_WAIT_TIME=60
A host waits 60 seconds from startup for an acknowledgement from the master LIM.
If no acknowledgement arrives within 60 seconds, the host sends a request to the
master LIM to add it to the cluster.
LSF_ENTITLEMENT_FILE
Syntax
LSF_ENTITLEMENT_FILE=path
Description
Full path to the LSF entitlement file. LSF uses the entitlement file to determine
which features are enabled or disabled, based on the edition of the product. The
entitlement file for LSF Standard Edition is platform_lsf_std_entitlement.dat.
For LSF Express Edition, the file is platform_lsf_exp_entitlement.dat. For LSF
Advanced Edition, the file is platform_lsf_adv_entitlement.dat. The entitlement
file is installed as <LSF_TOP>/conf/lsf.entitlement.
You must download the entitlement file for the edition of the product you are
running, and set LSF_ENTITLEMENT_FILE to the full path to the entitlement file you
downloaded.
Once LSF is installed and running, run the lsid command to see which edition of
LSF is enabled.
Example
LSF_ENTITLEMENT_FILE=/usr/share/lsf_distrib/lsf.entitlement
Default
None—required variable
LSF_MASTER_LIST
Syntax
LSF_MASTER_LIST="host_name [ host_name ...]"
Description
You must specify at least one valid server host to start the cluster. The first host
listed is the LSF master host.
Valid Values
Example
LSF_MASTER_LIST="hosta hostb"
Default
None—required variable
LSF_QUIET_INST
Syntax
LSF_QUIET_INST="Y" | "N"
Description
Set the value to Y if you want to hide the LSF installation messages.
Example
LSF_QUIET_INST="Y"
Default
N
LSF_SILENT_INSTALL_TARLIST
Syntax
LSF_SILENT_INSTALL_TARLIST="ALL" | "Package_Name ..."
Description
A string containing the names of all LSF packages to install. This list applies
only to the silent install mode. The keywords "all", "ALL", and "All" install all
packages in LSF_TARDIR.
Example
LSF_SILENT_INSTALL_TARLIST="ALL" | "lsf9.1.2_linux2.6-glibc2.3-x86_64.tar.Z"
Default
None
LSF_TARDIR
Syntax
LSF_TARDIR="/path"
Description
Full path to the directory containing the LSF distribution tar files.
Example
LSF_TARDIR="/usr/share/lsf_distrib"
Default
The parent directory of the current working directory. For example, if lsfinstall
is running under /usr/share/lsf_distrib/lsf_lsfinstall, the LSF_TARDIR default
value is /usr/share/lsf_distrib.
LSF_TOP
Syntax
LSF_TOP="/path"
Description
Full path to the top-level LSF installation directory.
Valid Value
The path to LSF_TOP must be shared and accessible to all hosts in the cluster. It
cannot be the root directory (/). The file system containing LSF_TOP must have
enough disk space for all host types (approximately 300 MB per host type).
Example
LSF_TOP="/usr/share/lsf"
Default
None—required variable
PATCH_BACKUP_DIR
Syntax
PATCH_BACKUP_DIR="/path"
Description
Full path to the patch backup directory. This parameter is used when you install a
new cluster for the first time, and is ignored for all other cases.
The file system containing the patch backup directory must have sufficient disk
space to back up your files (approximately 400 MB per binary type if you want to
be able to install and roll back one enhancement pack and a few additional fixes).
It cannot be the root directory (/).
Example
PATCH_BACKUP_DIR="/usr/share/lsf/patch/backup"
Default
LSF_TOP/patch/backup
PATCH_HISTORY_DIR
Syntax
PATCH_HISTORY_DIR="/path"
Description
Full path to the patch history directory. This parameter is used when you install a
new cluster for the first time, and is ignored for all other cases.
It cannot be the root directory (/). If the directory already exists, it must be
writable by lsfadmin.
Example
PATCH_HISTORY_DIR="/usr/share/lsf/patch"
Default
LSF_TOP/patch
SILENT_INSTALL
Syntax
SILENT_INSTALL="Y" | "N"
Description
Setting this parameter to Y enables the silent installation and indicates that
you accept the license agreement.
Default
N
Dynamically added LSF hosts that will not be master candidates are slave hosts.
Each dynamic slave host has its own LSF binaries and local lsf.conf and shell
environment scripts (cshrc.lsf and profile.lsf). You must install LSF on each
slave host.
The slave.config file contains options for installing and configuring a slave host
that can be dynamically added or removed.
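On each slave host, you run the installer against this file. A minimal sketch,
run from the lsf9.1.2_lsfinstall directory created when you extract the
installer package:
# ./lsfinstall -s -f slave.config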
Template location
Important:
The sample values in the slave.config template file are examples only. They are not
default installation values.
Format
The equal sign = must follow each NAME, even if no value follows, and there must
be no spaces around the equal sign.
Blank lines and lines starting with a pound sign (#) are ignored.
Parameters
v EGO_DAEMON_CONTROL
v ENABLE_EGO
v EP_BACKUP
v LSF_ADMINS
| v LSF_ENTITLEMENT_FILE
v LSF_LIM_PORT
v LSF_SERVER_HOSTS
v LSF_TARDIR
v LSF_LOCAL_RESOURCES
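Taken together, a minimal slave.config for a non-shared slave host might look
like the following sketch (host names, ports, and paths are placeholders to
adapt to your site):
LSF_TOP="/usr/local/lsf"
LSF_ADMINS="lsfadmin"
LSF_ENTITLEMENT_FILE="/usr/share/lsf_distrib/platform_lsf_std_entitlement.dat"
LSF_SERVER_HOSTS="hostb hostc"
LSF_LIM_PORT="7869"
LSF_LOCAL_RESOURCES="[resource linux]"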
EGO_DAEMON_CONTROL
Syntax
EGO_DAEMON_CONTROL="Y" | "N"
Description
Enables EGO to control LSF res and sbatchd. Set the value to "Y" if you want EGO
Service Controller to start res and sbatchd, and restart if they fail.
All hosts in the cluster must use the same value for this parameter: the value of
EGO_DAEMON_CONTROL in this file must match the value of EGO_DAEMON_CONTROL in
install.config.
To avoid conflicts, leave this parameter undefined if you use a script to start up
LSF daemons.
Note:
Example
EGO_DAEMON_CONTROL="N"
Default
N (res and sbatchd are started manually)
ENABLE_EGO
Syntax
ENABLE_EGO="Y" | "N"
Description
Set the value to "Y" if you want to take advantage of the following LSF features
that depend on EGO:
v LSF daemon control by EGO Service Controller
v EGO-enabled SLA scheduling
EP_BACKUP
Syntax
EP_BACKUP="Y" | "N"
Description
Enables backup and rollback for enhancement packs. Set the value to "N" to
disable backups when installing enhancement packs (you will not be able to roll
back to the previous patch level after installing an EP, but you will still be able to
roll back any fixes installed on the new EP).
You may disable backups to speed up install time, to save disk space, or because
you have your own methods to back up the cluster.
Default
Y (backup and rollback are fully enabled)
LSF_ADMINS
Syntax
LSF_ADMINS="user_name [ user_name ... ]"
Description
The first user account name in the list is the primary LSF administrator. It cannot
be the root user account.
Typically this account is named lsfadmin. It owns the LSF configuration files and
log files for job events. It also has permission to reconfigure LSF and to control
batch jobs submitted by other users. It typically does not have authority to start
LSF daemons. Usually, only root has permission to start LSF daemons.
All the LSF administrator accounts must exist on all hosts in the cluster before you
install LSF. Secondary LSF administrators are optional.
Valid Values
Example
LSF_ADMINS="lsfadmin user1 user2"
Default
None—required variable
| LSF_ENTITLEMENT_FILE
| Syntax
| LSF_ENTITLEMENT_FILE=path
| Description
| Full path to the LSF entitlement file. LSF uses the entitlement file to determine
| which features are enabled or disabled, based on the edition of the product. The
| entitlement file for LSF Standard Edition is platform_lsf_std_entitlement.dat.
| For LSF Express Edition, the file is platform_lsf_exp_entitlement.dat. The
| entitlement file is installed as <LSF_TOP>/conf/lsf.entitlement.
| You must download the entitlement file for the edition of the product you are
| running, and set LSF_ENTITLEMENT_FILE to the full path to the entitlement file you
| downloaded.
| Once LSF is installed and running, run the lsid command to see which edition of
| LSF is enabled.
| Example
| LSF_ENTITLEMENT_FILE=/usr/share/lsf_distrib/lsf.entitlement
| Default
| None—required variable
LSF_LIM_PORT
Syntax
LSF_LIM_PORT="port_number"
Description
Use the same port number as LSF_LIM_PORT in lsf.conf on the master host.
Default
7869
LSF_SERVER_HOSTS
Syntax
LSF_SERVER_HOSTS="host_name [ host_name ...]"
Description
Required for non-shared slave host installation. This parameter defines a list of
hosts that can provide host and load information to client hosts. If you do not
define this parameter, client commands contact the master LIM for host and load
information.
Recommended for large clusters to decrease the load on the master LIM. Do not
specify the master host in the list. Client commands will query the LIMs on the
LSF_SERVER_HOSTS, which off-loads traffic from the master LIM.
Define this parameter to ensure that commands execute successfully when no LIM
is running on the local host, or when the local LIM has just started.
Valid Values
Any valid LSF host name.
Examples
Default
None
LSF_TARDIR
Syntax
LSF_TARDIR="/path"
Description
Full path to the directory containing the LSF distribution tar files.
Example
LSF_TARDIR="/usr/local/lsf_distrib"
Default
The parent directory of the current working directory. For example, if lsfinstall
is running under /usr/share/lsf_distrib/lsf_lsfinstall, the LSF_TARDIR default
value is /usr/share/lsf_distrib.
LSF_LOCAL_RESOURCES
Syntax
LSF_LOCAL_RESOURCES="resource ..."
Description
When the slave host calls the master host to add itself, it also reports its local
resources. The local resources to be added must be defined in lsf.shared.
Tip:
Important:
Example
LSF_LOCAL_RESOURCES="[resourcemap 1*verilog] [resource linux]"
Default
None
LSF_TOP
Syntax
LSF_TOP="/path"
Important:
You must use the same path for every slave host you install.
Valid value
Example
LSF_TOP="/usr/local/lsf"
Default
None—required variable
SILENT_INSTALL
Syntax
SILENT_INSTALL="Y" | "N"
Description
Setting this parameter to Y enables the silent installation and indicates that
you accept the license agreement.
Default
N
LSF_SILENT_INSTALL_TARLIST
Syntax
LSF_SILENT_INSTALL_TARLIST="ALL" | "Package_Name ..."
Description
A string containing the names of all LSF packages to install. This list applies
only to the silent install mode. The keywords "all", "ALL", and "All" install all
packages in LSF_TARDIR.
Example
LSF_SILENT_INSTALL_TARLIST="ALL" | "lsf9.1.2_linux2.6-glibc2.3-x86_64.tar.Z"
Default
None
Notices
This information was developed for products and services offered in the U.S.A.
IBM® may not offer the products, services, or features discussed in this document
in other countries. Consult your local IBM representative for information on the
products and services currently available in your area. Any reference to an IBM
product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product,
program, or service that does not infringe any IBM intellectual property right may
be used instead. However, it is the user's responsibility to evaluate and verify the
operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not grant you
any license to these patents. You can send license inquiries, in writing, to:
The following paragraph does not apply to the United Kingdom or any other
country where such provisions are inconsistent with local law:
Any references in this information to non-IBM Web sites are provided for
convenience only and do not in any manner serve as an endorsement of those Web
sites. The materials at those Web sites are not part of the materials for this
IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it
believes appropriate without incurring any obligation to you.
Licensees of this program who wish to have information about it for the purpose
of enabling: (i) the exchange of information between independently created
programs and other programs (including this one) and (ii) the mutual use of the
information which has been exchanged, should contact:
IBM Corporation
Intellectual Property Law
Mail Station P300
2455 South Road,
Poughkeepsie, NY 12601-5400
USA
The licensed program described in this document and all licensed material
available for it are provided by IBM under terms of the IBM Customer Agreement,
IBM International Program License Agreement or any equivalent agreement
between us.
All statements regarding IBM's future direction or intent are subject to change or
withdrawal without notice, and represent goals and objectives only.
This information contains examples of data and reports used in daily business
operations. To illustrate them as completely as possible, the examples include the
names of individuals, companies, brands, and products. All of these names are
fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
Each copy or any portion of these sample programs or any derivative work must
include a copyright notice as follows:
© (your company name) (year). Portions of this code are derived from IBM Corp.
Sample Programs. © Copyright IBM Corp. _enter the year or years_.
If you are viewing this information softcopy, the photographs and color
illustrations may not appear.
Trademarks
IBM, the IBM logo, and ibm.com® are trademarks of International Business
Machines Corp., registered in many jurisdictions worldwide. Other product and
service names might be trademarks of IBM or other companies. A current list of
IBM trademarks is available on the Web at "Copyright and trademark information"
at http://www.ibm.com/legal/copytrade.shtml.
Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo,
Celeron, Intel Xeon, Intel SpeedStep, Itanium, and Pentium are trademarks or
registered trademarks of Intel Corporation or its subsidiaries in the United States
and other countries.
Java™ and all Java-based trademarks and logos are trademarks or registered
trademarks of Oracle and/or its affiliates.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of
Microsoft Corporation in the United States, other countries, or both.
Privacy policy considerations
If this Software Offering uses cookies to collect personally identifiable
information, specific information about this offering’s use of cookies is set
forth below.
This Software Offering does not use cookies or other technologies to collect
personally identifiable information.
If the configurations deployed for this Software Offering provide you as customer
the ability to collect personally identifiable information from end users via cookies
and other technologies, you should seek your own legal advice about any laws
applicable to such data collection, including any requirements for notice and
consent.
For more information about the use of various technologies, including cookies,
for these purposes, see IBM’s Privacy Policy at http://www.ibm.com/privacy and
IBM’s Online Privacy Statement at http://www.ibm.com/privacy/details, in
particular the section entitled “Cookies, Web Beacons and Other Technologies,”
and the “IBM Software Products and Software-as-a-Service Privacy Statement” at
http://www.ibm.com/software/info/product-privacy.