Protocol Components: Remote Procedure Call (RPC) Protocol: Rpcbind
A service is a set of RPC procedures that have been grouped together into
programs. Each service is identified by a unique number, which allows
more than one service to operate at any given time. An application that
needs to use a service can use the different programs that make up the
service to perform specific actions. For example, when designing an NFS
service, one program might be responsible for determining a file's attributes,
and another program might be responsible for the actual transfer of data
between the client and server computers.
The unique service number is used to identify different network services that
run on a particular system, and the mapping for this is usually found in the
file /etc/rpc.
Note
Certain operations, such as file or record locking, do require a stateful protocol
of some sort, and many implementations of NFS accomplish this by using
another protocol to handle the specific function. NFS itself is composed of a
set of procedures that deal only with file access.
The RPC procedures that make up the NFS protocol are the following:
Set File Attributes: Sets the file attributes of a file on the remote server.
Create Link to File: Creates a hard link (in the same file system) to a file.
When compared to the NFS protocol, the Mount protocol consists of only a
very few procedures:
Null: The "do nothing" procedure, just like the one listed under the NFS
protocol.
MNT: Mounts a file system and returns to the client a file handle and the
name of the remote file system.
Note
The commands shown in the following sections might differ from one version
of Unix to another. As always with Unix or Linux, consult the man pages to
determine the exact syntax for commands and the locations of files mentioned
in relation to the commands.
On the client side of the NFS process, there are actually three daemon
processes that are used. The first is biod, which stands for block input/output
daemon. This daemon processes the input/output with the NFS server on
behalf of the user process that is making requests of the remote file system. If
you use NFS heavily on a client, you can improve performance by starting up
more than one biod daemon. The syntax used to start the daemon is as
follows:
/etc/biod [number of daemon processes]
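For example, on a client that makes heavy use of NFS you might start four daemons (the number here is only an illustration; choose a value that matches the client's workload):
/etc/biod 4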
Also note that the biod daemon is a client process. You should not run it on an
NFS server unless that server is also a client of another NFS server.
The mount Command
The mount command is used to mount a local file system, and you can also
use the command to mount a remote NFS file system. The syntax for
using mount to make available a file system being exported by an NFS server is
as follows:
mount -F nfs -o options machine:filesystem mountpoint
In some versions of Unix, the syntax for mounting a remote NFS file system is
a little different. For example, in SCO Unix you use a lowercase f and an
uppercase NFS:
mount -f NFS -o options machine:filesystem mountpoint
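For example, to make the /usr/projectx/docs directory exported by host zira available under the local directory /usr/docs, the command might look like this (the host and path names are taken from the discussion that follows):
mount -F nfs zira:/usr/projectx/docs /usr/docs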
This is the same way you mount other local file systems into the local
hierarchy. Once the mount completes, any subdirectories that exist on host
zira under /usr/projectx/docs can be accessed under the local /usr/docs directory.
Other options that can be used when mounting a remote file system include
the following:
rw Mounts the file system for local read-write access, which is the
default.
suid Allows setuid execution.
nosuid Disallows setuid execution.
For more command-line parameters and options, see the man page for
the mount command for your particular system.
Caution
A computer can be an NFS server, an NFS client, or perhaps both a server
and a client. However, you should not try to mount an exported file system on
the same server that is exporting it. This can lead to looping problems,
causing unpredictable behavior.
The mountpoint is the path to the location in the local file system where the
remote NFS file system will appear, and this path must exist before
the mount command is issued. Any files existing in the mountpoint directory will
no longer be accessible to users after a remote file system is attached to the
directory with the mount command, so do not use just any directory. Note that
the files are not lost. They reappear when the remote file system is
unmounted.
Server-Side Daemons
The nfsd daemon process handles requests from NFS clients for the server.
The nfsd daemon interprets requests and sends them to the I/O system to
perform the requests' actual functions. The daemon communicates with
the biod daemon on the client, processing requests and returning data to the
requestor's daemon.
An NFS server will usually be set up to serve multiple clients. You can set up
multiple copies of the nfsd daemon on the server so that the server can handle
multiple client requests in a timely manner.
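On many systems the number of server daemons to run is given as an argument when nfsd is started, much as with biod on the client; for example (the path and count here are only illustrative, so check your system's documentation):
/etc/nfsd 8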
Unix systems and the utilities that are closely associated with them are
continually being updated or improved. Some new versions include using the
concept of threads to make it possible for a daemon to be implemented as a
multithreaded process, capable of handling many requests at one time. Digital
Unix 4.0 (now HP Tru64 UNIX) is an operating system that provides a
multithreaded NFS server daemon.
Other daemons the NFS server runs include the lockd daemon to handle file
locking and the statd daemon to help coordinate the status of current file
locks.
For an NFS server, choose a computer that has the hardware capabilities
needed to support your network clients. If the NFS server will be used to allow
clients to view seldom-used documentation, a less powerful hardware
configuration might be all you need. If the server is going to be used to export
a large number of directories, say from a powerful disk storage subsystem,
the hardware requirements become much more important. You will have to
make capacity judgments concerning the CPU power, disk subsystems, and
network adapter card performance.
Setting up an NFS server is a simple task. Create a list of the directories that
are to be exported, and place entries for these in the /etc/exports file on the
server. At boot time the exportfs program starts and obtains information from
this file. The exportfs program uses this data to make exported directories
available to clients that make requests.
The syntax for this command varies, depending on what actions you want to
perform:
/usr/sbin/exportfs [-auv]
/usr/sbin/exportfs [-uv] [dir ...]
/usr/sbin/exportfs -i [-o options] [-v] [dir ...]
The parameters and options you can use with this command are listed here:
-a Causes exportfs to read the /etc/exports file and export all directories
for which it finds an entry. When used with the -u parameter, it causes
all directories to be unexported.
-u Used to stop exporting a directory (or all directories if used with the
-a option).
-v Tells exportfs to operate in "verbose" mode, giving you additional
feedback in response to your commands.
The options you can specify after the -o qualifier are the same as you use in
the /etc/exports file (see the following section, "Configuration Files").
The following example causes your NFS server to stop sharing all the
directories listed for export in the /etc/exports file:
exportfs -au
You can also dismount and mount remote file systems using different options
when troubleshooting or when researching the commands you will need when
preparing to upgrade a network segment where connections need to change.
Configuration Files
To make a file system or a directory in a file system available for export, add
the pathnames to the /etc/exports file. The format for an entry in this file is as
follows:
directory [-option, ...]
The term directory is a pathname for the directory you want to share with
other systems. The options you can include are the following:
ro This makes the directory available to remote users in a read-only
mode. The default is read-write, and remote users can change data
in files on your system if you do not specify ro here.
anon=uid Use this parameter to set the uid (user ID) that will be used for
anonymous users, if allowed.
For example:
/etc/users/acctpay -access=acct
/etc/users/docs -ro
/etc/users/reports/monthend -rw=ono
Caution
You should give considerable thought to the matter before using NFS to
export sensitive or critical data. If the information could cause great harm if it
were to be altered or exposed, you should not treat it lightly and make it
available on the network via NFS. NFS is better suited for ordinary user data
files and programs, directories, or other resources that are shared by a large
number of users. There are not enough security mechanisms in place when
using many implementations of NFS to make it a candidate for a high-security
environment.
Automounting File Systems
The Mount protocol takes care of the details of making a connection for the
NFS client to the NFS server. This means that it is necessary to use
the mount command to make the remote file system available at a mountpoint in
the local file system. To make this process even easier, the automountd daemon
has been created. This daemon listens for NFS requests and mounts a
remote file system locally on an as-needed basis. The mounted condition
usually persists for a specified number of minutes (the default is usually five
minutes) in order to satisfy any further requests.
The automount map is a file that tells the daemon where the file system to be
mounted is located and where it should be mounted in the local file system.
Options can also be included for the mount process, for example, to make it
read-write or read-only. The automountd daemon mounts a file system under
the mountpoint /tmp_mnt. It then creates a symbolic link that appears to the
user as part of his file system.
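As a rough sketch (the map file entry format shown here follows the Sun-style automounter and is only an illustration; consult your system's automount man pages), a direct map entry that mounts the documentation directory from host zira read-only might look like this:
/usr/docs  -ro  zira:/usr/projectx/docs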
It is assumed that you will be setting up both a server and a client. If you are just
setting up a client to work off of somebody else's server (say in your department), you
can skip to Section 4. However, every client that is set up requires modifications on
the server to authorize that client (unless the server setup is done in a very insecure
way), so even if you are not setting up a server you may wish to read this section to
get an idea what kinds of authorization problems to look out for.
Setting up the server will be done in two steps: Setting up the configuration files for
NFS, and then starting the NFS services.
There are three main configuration files you will need to edit to set up an NFS
server: /etc/exports, /etc/hosts.allow, and /etc/hosts.deny. Strictly speaking, you
only need to edit /etc/exports to get NFS to work, but you would be left with an
extremely insecure setup. You may also need to edit your startup scripts; see Section
3.3.3 for more on that.
3.2.1. /etc/exports
This file contains a list of entries; each entry indicates a volume that is shared and
how it is shared. Check the man pages (man exports) for a complete description of all
the setup options for the file, although the description here will probably satisfy most
people's needs.
Each entry has the general form
directory machine1(option11,option12) machine2(option21,option22)
where
directory
the directory that you want to share. It may be an entire volume though it need
not be. If you share a directory, then all directories under it within the same file
system will be shared as well.
machine1 and machine2
client machines that will have access to the directory. The machines may be
listed by their DNS address or their IP address
(e.g., machine.company.com or 192.168.0.8). Using IP addresses is more
reliable and more secure. If you need to use DNS addresses, and they do not
seem to be resolving to the right machine, see Section 7.3.
optionxx
the option listing for each machine will describe what kind of access that
machine will have. Important options are:
ro: The directory is shared read only; the client machine will not be able
to write to it. This is the default.
rw: The client machine will have read and write access to the directory.
sync: By default, all but the most recent version (version 1.11) of
the exportfs command will use async behavior, telling a client machine
that a file write is complete - that is, has been written to stable storage -
when NFS has finished handing the write over to the filesystem. This
behavior may cause data corruption if the server reboots, and
the sync option prevents this. See Section 5.9 for a complete discussion
of sync and async behavior.
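For example, a simple /etc/exports that shares a read-only directory and a read-write home directory with a single client machine (the paths and the IP address here echo the examples used elsewhere in this section) might look like:
/usr/local 192.168.0.1(ro)
/home 192.168.0.1(rw,sync)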
If you have a large installation, you may find that you have a bunch of computers all
on the same local network that require access to your server. There are a few ways of
simplifying references to large numbers of machines. First, you can give access to a
range of machines at once by specifying a network and a netmask. For example, if
you wanted to allow access to all the machines with IP addresses
between 192.168.0.0 and 192.168.0.255 then you could have the entries:
/usr/local 192.168.0.0/255.255.255.0(ro)
/home 192.168.0.0/255.255.255.0(rw)
Second, you can use NIS netgroups in your entry. To specify a netgroup in your
exports file, simply prepend the name of the netgroup with an "@". See the NIS
HOWTO for details on how netgroups work.
However, you should keep in mind that any of these simplifications could cause a
security risk if there are machines in your netgroup or local network that you do not
trust completely.
A few cautions are in order about what cannot (or should not) be exported. First, if a
directory is exported, its parent and child directories cannot be exported if they are in
the same filesystem. However, exporting both should not be necessary because listing
the parent directory in the /etc/exports file will cause all underlying directories
within that file system to be exported.
Second, it is a poor idea to export a FAT or VFAT (i.e., MS-DOS or Windows 95/98)
filesystem with NFS. FAT is not designed for use on a multi-user machine, and as a
result, operations that depend on permissions will not work well. Moreover, some of
the underlying filesystem design is reported to work poorly with NFS's expectations.
Third, device or other special files may not export correctly to non-Linux clients.
See Section 8 for details on particular operating systems.
3.2.2. /etc/hosts.allow and /etc/hosts.deny
These two files specify which computers on the network can use services on your
machine. Each line of the file contains a single entry listing a service and a set of
machines. When the server gets a request from a machine, it does the following:
It first checks hosts.allow to see if the machine matches an entry listed there.
If it does, then the machine is allowed access.
If the machine does not match an entry in hosts.allow, the server then
checks hosts.deny to see if the client matches a listing in there. If it does then
the machine is denied access.
If the client matches no listings in either file, then it is allowed access.
The first daemon to restrict access to is the portmapper. This daemon essentially just
tells requesting clients how to find all the NFS services on the system. Restricting
access to the portmapper is the best defense against someone breaking into your
system through NFS because completely unauthorized clients won't know where to
find the NFS daemons. However, there are two things to watch out for. First,
restricting portmapper isn't enough if the intruder already knows for some reason how
to find those daemons. And second, if you are running NIS, restricting portmapper
will also restrict requests to NIS. That should usually be harmless since you usually
want to restrict NFS and NIS in a similar way, but just be cautioned. (Running NIS is
generally a good idea if you are running NFS, because the client machines need a way
of knowing who owns what files on the exported volumes. Of course there are other
ways of doing this such as syncing password files. See the NIS HOWTO for
information on setting up NIS.)
In general it is a good idea with NFS (as with most internet services) to explicitly
deny access to IP addresses that you don't need to allow access to.
The first step in doing this is to add the following entries to /etc/hosts.deny:
portmap:ALL
lockd:ALL
mountd:ALL
rquotad:ALL
statd:ALL
Even if you have an older version of nfs-utils, adding these entries is at worst
harmless (since they will just be ignored) and at best will save you some trouble when
you upgrade. Some sys admins choose to put the entry ALL:ALL in the
file /etc/hosts.deny, which causes any service that looks at these files to deny access
to all hosts unless it is explicitly allowed. While this is more secure behavior, it may
also get you in trouble when you are installing new services, you forget you put it
there, and you can't figure out for the life of you why they won't work.
Next, we need to add entries to hosts.allow to give access to the hosts that
should have it. (If we just leave the above lines in hosts.deny then nobody will have
access to NFS.) Entries in hosts.allow follow the same service: host-list format
used in hosts.deny, with multiple hosts separated by commas.
Suppose we have the setup above and we just want to allow access
to slave1.foo.com and slave2.foo.com, and suppose that the IP addresses of these
machines are 192.168.0.1 and 192.168.0.2, respectively. We could add the following
entry to /etc/hosts.allow:
portmap: 192.168.0.1 , 192.168.0.2
For recent nfs-utils versions, we would also add the following (again, these entries are
harmless even if they are not supported):
lockd: 192.168.0.1 , 192.168.0.2
rquotad: 192.168.0.1 , 192.168.0.2
mountd: 192.168.0.1 , 192.168.0.2
statd: 192.168.0.1 , 192.168.0.2
3.3.1. Pre-requisites
The NFS server should now be configured and we can start it running. First, you will
need to have the appropriate packages installed. This consists mainly of a new enough
kernel and a new enough version of the nfs-utils package. See Section 2.4 if you are in
doubt.
Next, before you can start NFS, you will need to have TCP/IP networking functioning
correctly on your machine. If you can use telnet, FTP, and so on, then chances are
your TCP networking is fine.
That said, with most recent Linux distributions you may be able to get NFS up and
running simply by rebooting your machine, and the startup scripts should detect that
you have set up your /etc/exports file and will start up NFS correctly. If you try this,
see Section 3.4 Verifying that NFS is running. If this does not work, or if you are not
in a position to reboot your machine, then the following section will tell you which
daemons need to be started in order to run NFS services. If for some reason nfsd was
already running when you edited your configuration files above, you will have to
flush your configuration; see Section 3.5 for details.
3.3.2. Starting the Portmapper
NFS serving is taken care of by five daemons: rpc.nfsd, which does most of the
work; rpc.lockd and rpc.statd, which handle file locking; rpc.mountd, which
handles the initial mount requests, and rpc.rquotad, which handles user file quotas on
exported volumes. Starting with 2.2.18, lockd is called by nfsd upon demand, so you
do not need to worry about starting it yourself. statd will need to be started separately.
Most recent Linux distributions will have startup scripts for these daemons.
The daemons are all part of the nfs-utils package, and may be either in
the /sbin directory or the /usr/sbin directory.
If your distribution does not include them in the startup scripts, then you should
add them, configured to start in the following order:
rpc.portmap
rpc.mountd, rpc.nfsd
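If you need to start the daemons by hand for testing, the sequence would look roughly like the following (the paths, the thread count, and the statd step are only illustrative; daemon names and locations vary between distributions):
/sbin/portmap
/usr/sbin/rpc.statd
/usr/sbin/rpc.mountd
/usr/sbin/rpc.nfsd 8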
The nfs-utils package has sample startup scripts for RedHat and Debian. If you are
using a different distribution, in general you can just copy the RedHat script, but you
will probably have to take out the line that says:
. ../init.d/functions
3.4. Verifying that NFS is running
To verify that everything is working, query the portmapper with the command rpcinfo -p
to find out what services it is providing. You should get something like this:
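(The exact port numbers and the set of program versions shown will differ from system to system; the listing below is only an illustration of the general format.)
   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  32768  status
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100021    1   udp  32769  nlockmgr
    100021    3   udp  32769  nlockmgr
    100021    4   udp  32769  nlockmgr
    100005    1   udp  32770  mountd
    100005    2   udp  32770  mountd
    100005    3   udp  32770  mountd
    100005    3   tcp  32771  mountd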
This says that we have NFS versions 2 and 3, rpc.statd version 1, network lock
manager (the service name for rpc.lockd) versions 1, 3, and 4. There are also different
service listings depending on whether NFS is travelling over TCP or UDP. Linux
systems use UDP by default unless TCP is explicitly requested; however other OSes
such as Solaris default to TCP.
If you do not at least see a line that says portmapper, a line that says nfs, and a line
that says mountd then you will need to backtrack and try again to start up the daemons
(see Section 7, Troubleshooting, if this still doesn't work).
If you do see these services listed, then you should be ready to set up NFS clients to
access files from your server.
3.5. Making changes to /etc/exports later on
If you come back and change your /etc/exports file, the changes you make may not
take effect immediately. You should run the command exportfs -ra to force nfsd to
re-read the /etc/exports file. If you can't find the exportfs command, then you can
kill nfsd with the -HUP flag (see the man pages for kill for details).
If that still doesn't work, don't forget to check hosts.allow to make sure you haven't
forgotten to list any new client machines there. Also check the host listings on any
firewalls you may have set up (see Section 7 and Section 6 for more details on
firewalls and NFS).
Before beginning, you should double-check to make sure your mount program is new
enough (version 2.10m if you want to use Version 3 NFS), and that the client machine
supports NFS mounting, though most standard distributions do. If you are using a 2.2
or later kernel with the /proc filesystem you can check the latter by reading the
file /proc/filesystems and making sure there is a line containing nfs. If not,
typing insmod nfs may make it magically appear if NFS has been compiled as a
module; otherwise, you will need to build (or download) a kernel that has NFS
support built in. In general, kernels that do not have NFS compiled in will give a very
specific error when the mount command below is run.
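For example, the following command shows whether the running kernel currently has NFS support available:
grep nfs /proc/filesystems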
To begin using machine as an NFS client, you will need the portmapper running on
that machine, and to use NFS file locking, you will also
need rpc.statd and rpc.lockd running on both the client and the server. Most recent
distributions start those services by default at boot time; if yours doesn't, see Section
3.2 for information on how to start them up.
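With the portmapper and lock daemons running, you can mount the exported directory. Using the names that also appear in the fstab examples later in this section (the server master.foo.com, its exported /home directory, and the local mountpoint /mnt/home), the command from the client would be something like:
# mount master.foo.com:/home /mnt/home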
If this does not work, see the Troubleshooting section (Section 7).
To unmount the file system later, use umount just as you would with a local file system:
# umount /mnt/home
NFS file systems can be added to your /etc/fstab file the same way local file systems
can, so that they mount when your system starts up. The only difference is that the file
system type will be set to nfs and the dump and fsck order (the last two entries) will
have to be set to zero. So for our example above, the entry in /etc/fstab would look
like:
# device mountpoint fs-type options dump fsckorder
...
master.foo.com:/home /mnt/home nfs rw 0 0
...
See the man pages for fstab if you are unfamiliar with the syntax of this file. If you
are using an automounter such as amd or autofs, the options in the corresponding
fields of your mount listings should look very similar if not identical.
At this point you should have NFS working, though a few tweaks may still be
necessary to get it to work well. You should also read Section 6 to be sure your setup
is reasonably secure.
4.3. Mount options
4.3.1. Soft vs. Hard Mounting
There are some options you should consider adding at once. They govern the way the
NFS client handles a server crash or network outage. One of the cool things about
NFS is that it can handle this gracefully, if you set up the clients right. There are two
distinct failure modes:
soft
If a file request fails, the NFS client will report an error to the process on the
client machine requesting the file access. Some programs can handle this with
composure, but most won't. We do not recommend using this setting; it is a recipe
for corrupted files and lost data. You should especially not use this for mail
disks --- if you value your mail, that is.
hard
The program accessing a file on an NFS mounted file system will hang when the
server crashes. The process cannot be interrupted or killed (except by a "sure
kill") unless you also specify intr. When the NFS server is back online the
program will continue undisturbed from where it was. We recommend
using hard,intr on all NFS mounted file systems.
Picking up from the previous example, the fstab entry would now look like:
# device mountpoint fs-type options dump fsckorder
...
master.foo.com:/home /mnt/home nfs rw,hard,intr 0 0
...
4.3.2. Setting Block Size to Optimize Transfer Speeds
The rsize and wsize mount options specify the size of the chunks of data that the
client and server pass back and forth to each other.
The defaults may be too big or too small; there is no size that works well on all or most
setups. On the one hand, some combinations of Linux kernels and network cards
(largely on older machines) cannot handle blocks that large. On the other hand, if they
can handle larger blocks, a bigger size might be faster.
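As an illustration (the value 8192 bytes is only an example; the best size depends on your hardware and kernel, as discussed in Section 5), rsize and wsize are set as ordinary mount options, for instance in /etc/fstab:
master.foo.com:/home /mnt/home nfs rw,hard,intr,rsize=8192,wsize=8192 0 0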
Getting the block size right is an important factor in performance and is a must if you
are planning to use the NFS server in a production environment. See Section 5 for
details.