Homework 6: Write a review about technologies of distributed file systems, and an example of DFS such as NFS
👤 Author: by aeonorbitgmailcom 2019-04-30 09:13:37
Distributed File Systems (DFS) are systems that pool disks, storage areas and other resources across both local and wide area networks. Today, DFSs make it possible to operate on big data and to run large-scale computations and transactions. DFSs are classified according to their working logic and architecture, using criteria such as fault tolerance, replication, naming, synchronization and design purpose. This review first examines the general design of DFSs, and then discusses the advantages and disadvantages of Ceph, Hadoop and the Network File System (NFS), three systems that are widely used today.

Computer systems have gone through several major evolutions. The first was the development of powerful microprocessors in the 1980s, moving from 8-bit to 64-bit processing; these machines approached the strength of mainframe computers while keeping the cost of processing a command low. The second was the widespread adoption of high-speed local networks connecting large numbers of nodes, which made it possible to transfer data on the order of a gigabit per second. As a result of these developments, distributed systems built from many computers connected by fast networks appeared as an alternative to a single powerful computer with one processor. The first DFSs were developed in the 1970s. They were storage systems connected through FTP-like structures, and they were not widely used because of their limited storage space. L. Svobodova reported the first survey of DFSs, and several DFSs were developed in that period, such as LOCUS, ACORN, SWALLOW, and XDFS. Work on DFSs has continued ever since. Today's DFSs are generally designed by analogy to classical time-sharing systems and are usually based on the UNIX file system. Their purpose is to combine the files and storage systems of different computers.

DFSs process data generated in different places on digital platforms, and they do so safely, efficiently and rapidly. The rapid growth of data, and the need for fast access to it, have driven the growth of data storage resources; this large increase in data created a new concept, Big Data. Distributed file systems are used to process big data and to carry out operations on it quickly, and they are now used effectively by cloud systems. In a DFS, a file is stored on one or more computers acting as servers, and other computers, called clients, access those files as if they belonged to a single local file system.
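As a small illustration of this client/server split, the sketch below (plain Python, not any particular DFS) shows a client-side view in which one logical namespace is mapped onto several servers. The server names and the hash-based placement are assumptions made for the example, standing in for a real DFS's metadata or placement service.

```python
import hashlib

# Hypothetical list of storage servers that together hold the file system;
# the names are placeholders, not real hosts.
SERVERS = ["server-a", "server-b", "server-c"]

def locate(path):
    """Map a logical file path to the server that stores it.

    The client sees a single namespace (e.g. "/projects/report.txt") and does
    not care which machine actually holds the bytes; a simple hash of the path
    picks the server here.
    """
    digest = hashlib.sha1(path.encode("utf-8")).hexdigest()
    index = int(digest, 16) % len(SERVERS)
    return SERVERS[index]

if __name__ == "__main__":
    for p in ["/projects/report.txt", "/data/logs/2019-04-30.log"]:
        print(p, "->", locate(p))
```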

DFSs were designed with different goals. For example, the Andrew File System (AFS) was designed as a distributed system that can support up to 5,000 clients. The Network File System (NFS) uses the Remote Procedure Call (RPC) communication model: RPC creates an intermediate layer between server and client, so the client performs operations without knowing anything about the server's file system. This allows clients and servers with different file systems to work together smoothly. The Google File System (GFS) was designed to work with big data, which it achieves by using a large amount of low-cost hardware. Another DFS with a very different structure is XFS, which keeps very large files stable; it has no central server, and the entire file system is distributed over the clients. Ceph separates the metadata describing the data from the data itself and replicates it, which increases the system's fault tolerance.
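To make the RPC idea behind NFS concrete, here is a minimal sketch using Python's standard xmlrpc modules. It is not the real NFS protocol; the procedures read_file and getattr_file, the export directory /srv/export, and the host and port are all assumptions chosen for illustration. The point is that the client calls remote procedures and never touches the server's local file system directly.

```python
# Server side: exposes NFS-like procedures over RPC (illustrative only).
import os
from xmlrpc.client import Binary
from xmlrpc.server import SimpleXMLRPCServer

EXPORT_ROOT = "/srv/export"  # hypothetical exported directory

def read_file(path, offset, count):
    """READ-style call: return `count` bytes of `path` starting at `offset`."""
    full = os.path.join(EXPORT_ROOT, path.lstrip("/"))
    with open(full, "rb") as f:
        f.seek(offset)
        return Binary(f.read(count))  # bytes travel as base64 over XML-RPC

def getattr_file(path):
    """GETATTR-style call: return a few basic attributes of `path`."""
    st = os.stat(os.path.join(EXPORT_ROOT, path.lstrip("/")))
    return {"size": st.st_size, "mtime": int(st.st_mtime)}

if __name__ == "__main__":
    server = SimpleXMLRPCServer(("0.0.0.0", 8000), allow_none=True)
    server.register_function(read_file)
    server.register_function(getattr_file)
    server.serve_forever()
```

On the client side the same operations look like ordinary function calls, even though the files live on another machine (the server name below is also hypothetical):

```python
from xmlrpc.client import ServerProxy

fs = ServerProxy("http://fileserver.example:8000")
print(fs.getattr_file("notes.txt"))
data = fs.read_file("notes.txt", 0, 4096).data  # .data unwraps the Binary
```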
