A review on Distributed Shared Memory
Author: damajibode@gmail.com, 2020-05-12 12:42:10
BRIEF INTRODUCTION

Distributed systems, otherwise known as DS, are a major research area in the field of computer science in this century. The vision and research efforts devoted to this area have continuously increased since the late seventies. Nowadays, distributed systems have changed drastically and new fundamental issues have arisen.

The need for resources has been one of the main motivations behind distributed systems. Take computational resources as an example: multiprocessor systems allow computational tasks to be shared among several processors, while distributed machines can execute multiple computational tasks at the same time, one on each machine. Nowadays, while powerful multiprocessor systems remain expensive, a novel form of computational collaboration has arisen thanks to the growth of the Internet and of various networks.

A peer-to-peer (p2p) system is a distributed system that has no centralized control. The p2p paradigm relies on the fact that distributed entities not only consume resources but also provide them. In traditional distributed systems, services are hosted by servers and accessed by clients; in p2p systems, by contrast, every entity acts as a client and as a server at the same time.

Distributed systems can be differentiated along two major paradigms: shared-memory and message-passing. In shared-memory, entities communicate by reading from and writing to a single memory. In message-passing, entities communicate by sending and receiving messages whose transmission time is, in general, arbitrarily long. Traditionally, tightly-coupled distributed architectures, such as multi-processor machines, use the shared-memory model for communication among processors. The motivation for shared-memory stems from the simplicity with which algorithms can be designed and programs written, compared to the message-passing model. Conversely, the prevalent model for loosely-coupled distributed architectures, such as networks of workstations, is message-passing. The motivation for message-passing stems from the ability to replicate data on several workstations so that each workstation can fail independently without affecting the rest of the system. Despite the more complex programming tasks it requires, message-passing is the more appropriate model when message delays are arbitrarily long. This motivates the need for emulating shared memory on top of the message-passing model; this emulation is known as distributed shared memory (DSM).
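As an illustration, a shared read/write register can be emulated over message passing by replicating its value on several nodes and having every operation contact a majority of them, in the spirit of classical majority-quorum emulations. The sketch below simulates this in-process; the class names and the fixed choice of majority are illustrative simplifications, not a definitive implementation.

```python
# A minimal in-process sketch of emulating a shared read/write register over
# message passing, in the spirit of majority-quorum emulations.
# All names (Node, Register) are illustrative, not taken from any library.

class Node:
    """A replica that stores a (timestamp, value) pair."""
    def __init__(self):
        self.ts = 0          # logical timestamp of the stored value
        self.value = None

    def query(self):
        return self.ts, self.value

    def update(self, ts, value):
        # Keep only the freshest value (the one with the highest timestamp).
        if ts > self.ts:
            self.ts, self.value = ts, value


class Register:
    """A shared register emulated over a set of replicas via majority quorums."""
    def __init__(self, nodes):
        self.nodes = nodes
        self.majority = len(nodes) // 2 + 1

    def write(self, value):
        # Phase 1: ask a majority for the highest timestamp seen so far.
        # (For simplicity we always contact the same fixed majority; a real
        # emulation would use any majority that responds.)
        ts = max(n.query()[0] for n in self.nodes[:self.majority])
        # Phase 2: propagate the new value with a larger timestamp to a majority.
        for n in self.nodes[:self.majority]:
            n.update(ts + 1, value)

    def read(self):
        # Phase 1: collect (timestamp, value) pairs from a majority
        # and keep the freshest one.
        ts, value = max(n.query() for n in self.nodes[:self.majority])
        # Phase 2 (write-back): make sure a majority stores the value returned,
        # which is what keeps concurrent reads consistent.
        for n in self.nodes[:self.majority]:
            n.update(ts, value)
        return value


if __name__ == "__main__":
    reg = Register([Node() for _ in range(5)])
    reg.write("x = 1")
    print(reg.read())   # -> "x = 1"
```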

Shared Memory Models

Shared memory is used for communication among the participants of a distributed system. As said previously, it is one of the two major communication paradigms for distributed systems, the other being the message-passing model. Shared memory simplifies algorithm formalization as well as programming. The shared memory is accessed by nodes through read and write operations. The read operation simply returns to the requester a piece of information contained in the memory without modifying it. The write operation modifies the information stored in the shared memory. Distinct shared-memory models exist, each depending on the number of nodes allowed to read the memory and the number of nodes allowed to write it. First, observe that a model in which a single node is allowed both to read and to write the memory is equivalent to a single node with exclusive access to its own memory. In contrast, we are interested in the ability of several nodes to share a memory. The standard models are listed below; a small illustrative sketch follows the list.

  • MWSR: In the multi-writer/single-reader (MWSR) model, multiple nodes can execute write operations whereas only one node can execute read operations.

  • SWMR: In the single-writer/multi-reader (SWMR) model, multiple nodes can execute read operations whereas only a single node can execute write operations.

  • MWMR: In the multi-writer/multi-reader (MWMR) model, multiple nodes can execute both write and read operations.
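To make the distinction concrete, the sketch below contrasts a SWMR register, which rejects writes from any node other than its designated writer, with an MWMR register that accepts reads and writes from every node. The class and method names are hypothetical and purely illustrative.

```python
# Illustrative sketch of the access models above; names are hypothetical.

class SWMRRegister:
    """Single-writer/multi-reader: only the designated owner may write."""
    def __init__(self, owner):
        self.owner = owner
        self.value = None

    def write(self, node, value):
        if node != self.owner:
            raise PermissionError(f"{node} is not the single writer {self.owner}")
        self.value = value

    def read(self, node):
        # Any node may read.
        return self.value


class MWMRRegister:
    """Multi-writer/multi-reader: any node may read or write."""
    def __init__(self):
        self.value = None

    def write(self, node, value):
        self.value = value

    def read(self, node):
        return self.value


if __name__ == "__main__":
    r = SWMRRegister(owner="n1")
    r.write("n1", 42)       # allowed: n1 is the single writer
    print(r.read("n2"))     # 42: any node may read
    try:
        r.write("n2", 7)    # rejected: n2 is not the designated writer
    except PermissionError as e:
        print(e)
```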


It is noteworthy that the models presented above do not differentiate by the number of operations executed simultaneously. In fact, ensuring consistency when many operations are concurrent can be reduced to solving concurrency when only two operations are concurrent: hardware synchronization might be required at some nodes in both cases. However, the complexity depends heavily on the type of concurrent operations: if two write operations are concurrent, the result of one overwrites the result of the other, so these operations must be ordered; if two read operations occur concurrently, both may return the same value, so there is no need to order them.
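One common way to order concurrent writes is to tag every write with a totally ordered timestamp, for example a (counter, node id) pair, and to let each replica keep the value carried by the largest tag. The sketch below illustrates this idea; the names are illustrative and the tags are chosen by hand rather than generated by a real protocol.

```python
# Minimal sketch: concurrent writes are ordered by totally ordered tags,
# while concurrent reads need no ordering. Names are illustrative only.

class Replica:
    """Keeps the value carried by the largest (counter, node_id) tag."""
    def __init__(self):
        self.tag = (0, "")
        self.value = None

    def apply_write(self, tag, value):
        # Concurrent writes are ordered by their tags; the larger tag wins,
        # so every replica converges to the same value regardless of arrival order.
        if tag > self.tag:
            self.tag, self.value = tag, value

    def read(self):
        # Concurrent reads do not modify anything: both may return the same value.
        return self.tag, self.value


if __name__ == "__main__":
    w1 = ((1, "nodeA"), "first")    # two writes issued concurrently
    w2 = ((1, "nodeB"), "second")   # same counter, ties broken by node id

    r1, r2 = Replica(), Replica()
    # The two replicas receive the writes in opposite orders...
    for w in (w1, w2):
        r1.apply_write(*w)
    for w in (w2, w1):
        r2.apply_write(*w)
    # ...yet they agree, because the tag order is total.
    assert r1.read() == r2.read()
    print(r1.read())                # -> ((1, 'nodeB'), 'second')
```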

MAIN POINTS

Probabilistic consistency offers a way to define acceptable consistency conditions and time complexity while achieving ideal performance in terms of communication complexity. It stems essentially from the tradeoff faced when emulating a deterministic Distributed Shared Memory (DSM): either congestion causes requests to be lost, or enlarging the memory delays operations. There are two major points in favor of probabilistic consistency:

The first point in favor is that large-scale dynamic systems cannot be modeled with participants whose actions are always dependent. Indeed, it is unreasonable to think of a dynamic system in which many participants act in concert so that only a small number of nodes leave the system during a small period of time. If the system is large, participants are more likely to act independently and to join or leave the memory at arbitrary instants. Because of this unpredictability, due to the independence of behaviors, many nodes may leave at the same time, even though this is very unlikely. It is far more realistic to admit that nodes act independently and that there exists a small probability that some nodes leave at the same time. In this case, it is only possible to ensure properties with specific probabilities. Some research has presented probabilistic atomicity as a promising consistency criterion that allows all operations to satisfy the atomicity partial ordering with high probability.

The second point in favor is that there exist implementations of Timed Quorum Systems (TQS), with probabilistic requirements, that outperform deterministic solutions. The TQS implementation presented in the literature achieves faster operations than Square can. More generally, any solution that improves on Square by giving its communication graph a larger degree requires a more costly reconfiguration mechanism, which is completely absent from the TQS implementation. Furthermore, this TQS implementation improves on RDS by using a constant degree in the underlying communication graph and by hiding the power of reconfiguration behind existing operations. That is, each node communicates with only a constant number of neighbors and no additional, costly reconfiguration is required.
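The key probabilistic ingredient behind such constructions is that two quorums drawn at random, each of size on the order of the square root of the system size, intersect with high probability. The Monte Carlo sketch below estimates this intersection probability empirically; the constant, the system size, and the function names are illustrative assumptions, not figures from the TQS work itself.

```python
# Monte Carlo estimate of the probability that two random quorums of size
# ~ c * sqrt(n) intersect. Illustrative parameters only.

import math
import random

def random_quorum(n, size):
    """Pick `size` distinct nodes uniformly at random among n nodes."""
    return set(random.sample(range(n), size))

def intersection_probability(n, size, trials=10_000):
    """Estimate the probability that two independent random quorums intersect."""
    hits = sum(bool(random_quorum(n, size) & random_quorum(n, size))
               for _ in range(trials))
    return hits / trials

if __name__ == "__main__":
    n = 10_000                          # system size (illustrative)
    size = int(3 * math.sqrt(n))        # quorum size ~ c * sqrt(n), here c = 3
    p = intersection_probability(n, size)
    print(f"n={n}, |Q|={size}, estimated intersection probability = {p:.4f}")
    # With c = 3 the non-intersection probability is roughly e^{-c^2} (about 0.01%),
    # so the estimate should be very close to 1.
```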

Finally, as a step towards an effective application of DSM in large-scale dynamic systems, these high-probability guarantees can be translated into a practical quality of service that system designers can exploit.
