The client-initiated model puts a lot of load on the server if reads are common but writes are few, and it's not clear that it is much faster than simply passing read operations through to the server unless network bandwidth is severely limited.
The server-initiated approach will be less expensive when writes are rare, but it requires that the server maintain state about what files each client has cached. Changes in the semantics of the file system can affect cache costs. For example, the write-on-close policy of the Andrew File System (AFS) ensures that all updates to a file are consolidated into a single giant write operation, which means that the server only needs to notify interested clients once, when the modified file is closed, instead of after every write to a part of the file.
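To make the effect concrete, here is a sketch in C of a client-side cache that buffers writes locally and contacts the server only once, on close. The names here (cache_write, cache_close, store_back_to_server) are hypothetical illustrations of the idea, not AFS's actual interface.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical RPC that ships the whole file back in one operation. */
void store_back_to_server(const char *data, size_t size);

struct cached_file {
    char  *data;   /* local copy of the file contents */
    size_t size;
    int    dirty;  /* set by any local write */
};

/* Writes update only the local copy; no network traffic yet. */
void cache_write(struct cached_file *f, size_t off, const void *buf, size_t n) {
    memcpy(f->data + off, buf, n);
    f->dirty = 1;
}

/* On close, the server sees one big write, so it sends at most one
 * notification to other clients caching this file. */
void cache_close(struct cached_file *f) {
    if (f->dirty) {
        store_back_to_server(f->data, f->size);
        f->dirty = 0;
    }
}
```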
The cost here is that it is no longer possible to have interleaved write operations on the same file by different clients. The ultimate solution to consistency problems is typically locking. The history of NFS is illustrative here. Early versions of NFS did not provide any locking at all, punting the issue to separate lock servers that were ultimately not widely deployed. The problems with such ad-hoc solutions (mostly the need to rewrite any program that used locks to use lockfiles instead) eventually forced NFS to incorporate explicit support for POSIX-style advisory fcntl locks.
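As a concrete illustration, here is a minimal sketch of acquiring an advisory lock through the standard fcntl interface; the helper name is ours, and how the lock is enforced across machines depends on the server's lock manager.

```c
#include <fcntl.h>
#include <unistd.h>

/* Block until we hold an exclusive advisory lock on the whole file.
 * "Advisory" means only processes that also call fcntl are excluded. */
int lock_whole_file(int fd) {
    struct flock fl = {
        .l_type   = F_WRLCK,   /* exclusive (write) lock */
        .l_whence = SEEK_SET,
        .l_start  = 0,
        .l_len    = 0,         /* 0 means "to end of file" */
    };
    return fcntl(fd, F_SETLKW, &fl);  /* F_SETLKW blocks; F_SETLK fails fast */
}
```

On an NFS mount, the client forwards such requests to the server side so that locks taken by different clients can exclude each other.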
Stateful vs stateless servers

The original distributed version of NFS (NFS version 2) used a stateless protocol, in which the server didn't keep track of any information about clients or what files they were working on. This has a number of advantages:

Scalability. Because the server knows nothing about clients, adding more clients consumes no resources on the server (although satisfying their increased requests may).
Consistency. There is no possibility of inconsistency between client and server state, because there is no server state. This means that problems like the Two Generals problem don't come up with a stateless server, and there is no need for a special recovery mechanism after a client or server crashes.
The problem with a stateless server is that it requires careful design of the protocol so that the clients can send all necessary information along with a request. For example, an NFSv2 read request carries a file handle, a starting offset, and a byte count, so the server needs no memory of any previous operation to satisfy it.
The inclusion of an explicit offset, and the translation of the local file descriptor into a file handle that contains enough information to uniquely identify the target file without a server-side mapping table, means that the server can satisfy this request without remembering it. A second feature we want with a stateless server is idempotence: performing the same operation twice should have the same effect as performing it once.
This allows a client to deal with lost messages, lost acknowledgments, or a crashed server in the same way: retransmit the original request and hope that it works this time. It is not hard to see that including offsets and explicit file handles gives us this property.
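To see why, consider this sketch of a self-describing read request and a retry loop. Here send_request, await_reply, and the retry constants are hypothetical stand-ins for a real RPC transport, and the field sizes are merely illustrative (NFSv2 happens to use a 32-byte opaque file handle).

```c
#include <stdint.h>

#define MAX_RETRIES 5      /* illustrative client policy, not part of NFS */
#define TIMEOUT_MS  1000

/* Everything the server needs is inside the request itself. */
struct read_request {
    uint8_t  fhandle[32];  /* identifies the file; no server-side fd table */
    uint32_t offset;       /* absolute position; no server-side file cursor */
    uint32_t count;        /* number of bytes wanted */
};

/* Hypothetical transport functions. */
void send_request(const struct read_request *req);
int  await_reply(void *buf, int timeout_ms);   /* 0 on success */

/* Because the request is self-describing, reading the same bytes twice has
 * the same effect as reading them once, so the client handles a lost request,
 * a lost reply, or a rebooted server all the same way: retransmit. */
int do_read(const struct read_request *req, void *buf) {
    for (int attempt = 0; attempt < MAX_RETRIES; attempt++) {
        send_request(req);
        if (await_reply(buf, TIMEOUT_MS) == 0)
            return 0;      /* got the data */
    }
    return -1;             /* give up; server may be down */
}
```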
Data representation

An issue that arises for any network service, but that is particularly tricky for filesystems, is machine-independent data representation. Many of the values that will be sent across the network (e.g., file offsets, sizes, and block counts) are integers, and different machine architectures represent them differently. So an x86-based client talking to a PowerPC-based server will need to agree on the number of bytes (called octets in IETF RFC documents, to emphasize 8-bit bytes as opposed to the now-bizarre-seeming non-8-bit bytes that haunted the early history of computing) in each field of a data structure, as well as the order in which they arrive.
The convention used in most older network services is to use a standard network byte order, which is defined to be big-endian, or most-significant-byte first. This means that our hypothetical x86 client will need to byte-swap all of its integer values before sending them out.
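The standard C sockets API provides exactly this conversion; a small example:

```c
#include <arpa/inet.h>   /* htonl/ntohl: host <-> network (big-endian) order */
#include <stdint.h>

/* On a little-endian x86 host, htonl() byte-swaps; on a big-endian host it
 * is a no-op. Either way, the bytes on the wire look the same. */
uint32_t encode_offset(uint32_t offset_host_order) {
    return htonl(offset_host_order);   /* host order -> network order */
}

uint32_t decode_offset(uint32_t offset_net_order) {
    return ntohl(offset_net_order);    /* network order -> host order */
}
```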
File locking and caching interact, and both must be properly specified for shared file access to work. If file data that has been read or written resides only in one host's cache and some other host tries to access the same file, the data it reads could be wrong, unless all clients of the NFS storage server use the same locking options and caching options for the mounted file system.

File locking was designed to support shared file access, that is, access to a file by more than one application or compute thread. There are other problems with NFS, but these are the most significant. Yes, the block size restrictions could easily be made larger, but then the timeouts would need to be adjusted and perhaps rethought. And yes, parallel file access is coming, but the protocol chattiness and the locking-and-caching problems of shared file access listed above are much more difficult to solve. NFS has worked well for over 35 years now, but it offers limited performance and scalability for modern environments, and it is also not efficient at managing metadata.