

nfsd

NFS v2 & v3 and MOUNT v1 & v3 protocol server


Note: You must be root to start this daemon.

Syntax:

nfsd [-f n] [-h n] [-o nfsvers=2] [-P] [-s n] [-t]
     [-x n] &

Options:

-f n
Open file cache size (default is 16).

The open file cache is used to cache open files and directories (with a 5-second idle timeout). If you know that nfsd services only one client that reads and writes a single file, reducing this cache may be beneficial memory-wise. If you know that nfsd services many clients that read and write many files, increasing this cache could improve read/write performance.


Note: Keep this cache reasonable, as fds (open files) are a limited resource -- by default, QNX Neutrino sets a maximum of 1000 open files per process. Besides this cache, nfsd needs fds for sockets (servicing TCP consumes more fds than just UDP) and internal readdir() operations.
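
For example, on a memory-constrained target that serves a single client, you might start nfsd with a smaller open file cache; the value below is only illustrative:

nfsd -f 4 &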

-h n
File handle cache size (default is 200).

The file handle cache is a straight memory/performance tradeoff; however, it doesn't significantly affect read/write performance. It helps speed up ls-type operations (very useful when compiling or running makefiles). To get a rough idea of how large this cache should (optimally) be, use the output of:

find mnt1 ... mntN | wc -l
      
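For example, if the count reported for the exported directories is about 350, you might size the file handle cache to match (the numbers here are purely illustrative):

nfsd -h 350 &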
-o nfsvers=2
Support NFS v2 only (default is to support both NFS v2 and NFS v3).
-P
Only parse and check the exports file; don't service requests.
-s n
Flush cache every n idle seconds (default is 5).
-t
Service TCP transport.
-x n
XID cache size (default is 16).

The XID cache isn't used for performance, but rather to ensure that nonidempotent operations are responded to correctly. Consider what happens when a client issues a remove request. The server receives this request, removes the file, and sends back a successful response. For some reason, the server doesn't respond fast enough for the client, and the client retransmits the request. If the server tries to remove the file again, it fails. Instead, the server matches the remove request with the previous one (each request is assigned a transaction identifier, known as an xid, which remains constant for retransmissions) and simply replies with the previous status. Generally, the busier the network and server are, the more requests are retransmitted by the client(s), and the larger the XID cache should be.
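
For example, on a busy network with many clients (especially over TCP), you might enlarge the XID cache; the value shown is only an illustrative starting point:

nfsd -x 64 -t &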

Description:

The nfsd daemon services both NFS mount requests and NFS requests, as specified by the exports file. Upon startup, nfsd reads the /etc/exports.hostname file (if this file doesn't exist, nfsd reads /etc/exports instead) to determine which mountpoints to service. Changes made to this file don't take effect until nfsd is restarted.
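
As a purely hypothetical illustration (the mountpoints and host names below are invented; see the exports entry for the exact syntax), an exports file lists one exported mountpoint per line, followed by the hosts allowed to mount it:

/home/qnxuser    hostA hostB
/nfs_shared      10.0.0.5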

There's no direct check for root on the mount. nfsd only checks that requests come in on a privileged port, which implies root access.


Note: The nfsd command doesn't tolerate any parsing errors while reading the exports file. If an error is detected, nfsd terminates. To keep downtime to a minimum, it's recommended that you start another nfsd after modifying the exports file. If no parsing errors are detected, the second nfsd reports a bind failure (because the original nfsd is still running) and exits -- this indicates that the exports file was parsed correctly. Alternatively, use the -P option.
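
For example, after editing the exports file you could verify it in either of the ways described above; the sequence below is only a sketch:

# Check the file without starting a server:
nfsd -P

# Or, with the original nfsd still running, start a second one;
# a bind failure means the file parsed correctly:
nfsd &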

Security Issues

NFS is a very insecure protocol. Although nfsd checks each request's origin against the restrictions specified in the exports file, this helps only in an "honest" network. It's not difficult to spoof NFS requests.

Configuring Caches

Fine-tuning the nfsd caches may result in less memory usage or improved performance, but these goals are usually mutually exclusive. Before modifying the default behavior of nfsd, it's important to know what its clients will demand from it. Note also that these caches are shared across all mountpoints.
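
For example, a server that handles many clients building source trees over TCP might combine several of the options described above; the values shown are illustrative starting points, not recommendations:

nfsd -f 32 -h 500 -x 64 -t &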


Note: The tiny TCP/IP stack (npm-ttcpip.so) doesn't support the rpcbind utility when it's run in secure mode and uses Unix domain sockets. If you want to use rpcbind in combination with the tiny TCP/IP stack, start it with both the -L and -i options.
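
A minimal sketch of that combination, assuming rpcbind is started before nfsd:

rpcbind -L -i &
nfsd &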

See also:

/etc/exports, fs-nfs2, fs-nfs3, io-net, mount, npm-tcpip.so, npm-ttcpip.so, syslogd, umount

"NFS filesystem" in the Working With Filesystems chapter of the User's Guide

