Homework 5: Write a review paper on Communication Technology used in Distributed Systems (Version 0)
Author: Bhupesh (bhupeshaawasthi952@gmail.com), 2020-05-25 10:34:48
Communication Technology in Distributed Systems

A distributed system, also known as distributed computing, is a system with multiple components located on different machines that communicate and coordinate actions in order to appear as a single coherent system to the end-user.

The machines that are a part of a distributed system may be computers, physical servers, virtual machines, containers, or any other node that can connect to the network, have local memory, and communicate by passing messages.

There are two general ways that distributed systems function:

  1. Each machine works toward a common goal and the end-user views results as one cohesive unit.

  2. Each machine has its own end-user and the distributed system facilitates sharing resources or communication services.


Although distributed systems can sometimes be obscure, they usually have three primary characteristics: all components run concurrently, there is no global clock, and all components fail independently of each other.

Example of a Distributed System

Distributed systems have endless use cases, a few being electronic banking systems, massive multiplayer online games, and sensor networks.

StackPath utilizes a particularly large distributed system to power its content delivery network service. Every one of our points of presence (PoPs) has nodes that form a worldwide distributed system. To provide top-notch content delivery, StackPath stores the most recently and most frequently requested content in the edge locations closest to where it is being used.

Technologies for Supporting Distributed Computing 

To address these challenges, three levels of support for distributed computing were developed: ad hoc network programming, structured communication, and middleware. Ad hoc network programming includes interprocess communication (IPC) mechanisms, such as shared memory, pipes, and sockets, that allow distributed components to connect and exchange information. These IPC mechanisms help address a key challenge of distributed computing: enabling components from different address spaces to cooperate with one another.
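The ad hoc IPC level can be illustrated with a minimal Python sketch (Python is used here only for brevity; the same idea applies to the C socket API). A `socketpair` yields two connected endpoints, much like the pipe and socket mechanisms named above; all names in the sketch are illustrative.

```python
import socket
import threading

def ipc_demo():
    # socketpair() returns two connected endpoints, analogous to a pipe
    # between two cooperating components in different threads/processes.
    a, b = socket.socketpair()

    def child():
        # One component passes a message to the other.
        b.sendall(b"hello over socketpair")
        b.close()

    t = threading.Thread(target=child)
    t.start()
    data = a.recv(1024)       # the peer receives the raw bytes
    t.join()
    a.close()
    return data.decode()
```

Note how the application deals directly with raw bytes and endpoint lifecycles; this is exactly the low-level burden the higher levels of support aim to remove.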

Certain drawbacks arise, however, when developing distributed systems using only ad hoc network programming support. For instance, using sockets directly within application code tightly couples this code to the socket API. Porting this code to another IPC mechanism, or redeploying components to different nodes in a network, thus becomes a costly manual programming effort. Even porting this code to another version of the same operating system can require code changes if each platform has slightly different APIs for the IPC mechanisms [POSA2] [SH02]. Programming directly to an IPC mechanism can also cause a paradigm mismatch, e.g., local communication uses object-oriented classes and method invocations, whereas remote communication uses the function-oriented socket API and message passing.
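The paradigm mismatch can be made concrete with a hypothetical sketch: the same `add` operation written once as a local method invocation and once directly against the socket API, where the caller must frame a message, parse bytes, and know the ad hoc wire format ("add 2 3" here is an invented protocol, not a standard).

```python
import socket
import threading

class Calculator:
    """Local communication: an object-oriented method invocation."""
    def add(self, x, y):
        return x + y

def serve_once(srv):
    # Remote side, coded directly to the socket API: accept, parse a
    # hand-rolled text protocol, and write raw bytes back.
    conn, _ = srv.accept()
    op, x, y = conn.recv(1024).decode().split()   # e.g. "add 2 3"
    conn.sendall(str(int(x) + int(y)).encode())
    conn.close()

def remote_add(host, port, x, y):
    # Remote communication: function-oriented message passing.
    s = socket.create_connection((host, port))
    s.sendall(f"add {x} {y}".encode())
    result = int(s.recv(1024).decode())
    s.close()
    return result

def demo():
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))        # port 0: the OS picks a free port
    srv.listen(1)
    port = srv.getsockname()[1]
    t = threading.Thread(target=serve_once, args=(srv,))
    t.start()
    local = Calculator().add(2, 3)                 # method call
    remote = remote_add("127.0.0.1", port, 2, 3)   # socket messages
    t.join()
    srv.close()
    return local, remote
```

Both paths compute the same result, but the remote path hard-codes the wire format and socket calls into application code, which is the coupling the paragraph above describes.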

The next level of support for distributed computing is structured communication, which overcomes the limitations of ad hoc network programming by not coupling application code to low-level IPC mechanisms, but instead offering higher-level communication mechanisms to distributed systems. Structured communication encapsulates machine-level details, such as bits and bytes and binary reads and writes. Application developers are therefore presented with a programming model that embodies types and a communication style closer to their application domain.

Historically significant examples of structured communication are remote procedure call (RPC) platforms, such as Sun RPC and the Distributed Computing Environment (DCE). RPC platforms allow distributed applications to cooperate with one another much like they would in a local environment: they invoke functions on each other, pass parameters along with each invocation, and receive results from the functions they called. The RPC platform shields them from details of specific IPC mechanisms and low-level operating system APIs. Another example of structured communication is ACE [SH02] [SH03], which provides reusable C++ wrapper facades and frameworks that perform common structured communication tasks across a range of OS platforms. 
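The RPC style described above can be sketched with Python's standard-library `xmlrpc` modules (chosen for brevity; Sun RPC and DCE are C-based platforms). The `add` procedure and loopback address are illustrative; the point is that the client call reads like a local invocation while the platform handles marshalling and transport.

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def rpc_demo():
    # Server side: register an ordinary function as a remote procedure.
    server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
    port = server.server_address[1]          # OS-assigned free port
    server.register_function(lambda x, y: x + y, "add")
    t = threading.Thread(target=server.handle_request)  # serve one call
    t.start()

    # Client side: the invocation looks local; parameter marshalling
    # and the wire protocol are hidden by the proxy object.
    proxy = ServerProxy(f"http://127.0.0.1:{port}")
    result = proxy.add(2, 3)
    t.join()
    server.server_close()
    return result
```

Compare this with the raw-socket version earlier: the application code contains no framing, parsing, or byte handling at all.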

Despite its improvements over ad hoc network programming, structured communication does not fully resolve the challenges described above. In particular, components in a distributed system that communicate via structured communication are still aware of their peers' remoteness, and sometimes even of their location in the network. While location awareness may suffice for certain types of distributed systems, such as statically configured embedded systems whose component deployment rarely changes, structured communication does not fulfill the following properties needed for more complex distributed systems:

• Location-independence of components. Ideally, clients in a distributed system should communicate with collocated or remote services using the same programming model. Providing this degree of location-independence requires separating the code that deals with remoting or location-specific details from client and service application code. Even then, of course, distributed systems have failure modes that local systems do not have [WWWK96].

• Flexible component (re)deployment. The original deployment of an application's services to network nodes could become suboptimal as hardware is upgraded, new nodes are incorporated, and/or new requirements are added. A redeployment of distributed system services may therefore be needed, ideally without breaking code or shutting down the entire system.
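Location-independence is commonly achieved by having clients program against a shared interface while remoting details live in a proxy. The sketch below is a minimal, hypothetical illustration: `Quote`, `LocalQuote`, and `RemoteQuoteProxy` are invented names, and the "transport" is a placeholder callable rather than a real protocol.

```python
from abc import ABC, abstractmethod

class Quote(ABC):
    """The contract clients program against, regardless of location."""
    @abstractmethod
    def price(self, symbol): ...

class LocalQuote(Quote):
    """A collocated implementation."""
    def __init__(self, table):
        self.table = table
    def price(self, symbol):
        return self.table[symbol]

class RemoteQuoteProxy(Quote):
    """Stand-in for a remote service: remoting details are confined
    here. The transport is a placeholder callable, not a protocol."""
    def __init__(self, transport):
        self.transport = transport
    def price(self, symbol):
        return self.transport("price", symbol)   # would marshal & send

def client(quote):
    # Same programming model whether the service is local or remote.
    return quote.price("ACME") * 2

def demo():
    local = LocalQuote({"ACME": 10.0})
    remote = RemoteQuoteProxy(lambda op, sym: 10.0)  # fake transport
    return client(local), client(remote)
```

Because `client` depends only on the `Quote` contract, a redeployment that moves the service to another node changes the proxy's configuration, not the client code.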

Mastering these challenges requires more than structured communication support for distributed systems. Instead it requires dedicated middleware [ScSc02], which is distribution infrastructure software that resides between an application and the operating system, network, or database underneath it. Middleware provides the properties described above so that application developers can focus on their primary responsibility: implementing their domain-specific functionality. Realizing the need for middleware has motivated companies, such as Microsoft, IBM, and Sun, and consortia, such as the Object Management Group (OMG) and the World Wide Web Consortium (W3C), to develop technologies for distributed computing. Below, we describe a number of popular middleware technologies, including distributed object computing, component middleware, publish/subscribe middleware, and service-oriented architectures and Web Services [Vin04a].

Distributed Object Computing Middleware 

A key contribution to distributed system development was the emergence of distributed object computing (DOC) middleware in the late 1980s and early 1990s. DOC middleware represented the confluence of two major information technologies: RPC-based distributed computing systems and object-oriented design and programming. Techniques for developing RPC-based distributed systems, such as DCE, focused on integrating multiple computers to act as a unified scalable computational resource. Likewise, techniques for developing object-oriented systems focused on reducing complexity by creating reusable frameworks and components that reify successful patterns and software architectures. DOC middleware therefore used object-oriented techniques to distribute reusable services and applications efficiently, flexibly, and robustly over multiple, often heterogeneous, computing and networking elements. 

CORBA 2.x and Java RMI are examples of DOC middleware technologies for building applications for distributed systems. These technologies focus on interfaces, which are contracts between clients and servers that define a location-independent means for clients to view and access object services provided by a server. Standard DOC middleware technologies like CORBA also define communication protocols and object information models to enable interoperability between heterogeneous applications written in various languages running on various platforms.

Despite its maturity and performance, however, DOC middleware had key limitations, including:

• Lack of functional boundaries. The CORBA 2.x and Java RMI object models treat all interfaces as client/server contracts. These object models do not, however, provide standard assembly mechanisms to decouple dependencies among collaborating object implementations. For example, objects whose implementations depend on other objects need to discover and connect to those objects explicitly. To build complex distributed applications, therefore, application developers must explicitly program the connections among interdependent services and object interfaces, which is extra work that can yield brittle and non-reusable implementations.

• Lack of software deployment and configuration standards. There is no standard way to distribute and start up object implementations remotely in DOC middleware. Application administrators must therefore resort to in-house scripts and procedures to deliver software implementations to target machines, configure the target machine and software implementations for execution, and then instantiate software implementations to make them ready for clients. Moreover, software implementations are often modified to accommodate such ad hoc deployment mechanisms. The need for most reusable software implementations to interact with other software implementations and services further aggravates the problem. The lack of higher-level software management standards results in systems that are harder to maintain and software component implementations that are much harder to reuse.
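The "lack of functional boundaries" point can be seen in a small hypothetical sketch of the explicit wiring that DOC middleware leaves to developers: each object must look up and connect to its dependencies by hand, so registry keys and construction order leak into application code. All names below are invented for illustration.

```python
# A hand-rolled registry standing in for explicit object discovery.
registry = {}

class Logger:
    def log(self, msg):
        return f"LOG: {msg}"

class OrderService:
    def __init__(self):
        # Explicit discovery and connection: this service must know the
        # registry key, and must be created AFTER its dependency exists,
        # or the lookup fails. Nothing enforces or standardizes this.
        self.logger = registry["logger"]

    def place(self, item):
        return self.logger.log(f"order placed: {item}")

def demo():
    registry["logger"] = Logger()   # wiring done in application code
    return OrderService().place("book")
```

Every such connection is extra, order-sensitive code; component middleware (next section) moves this assembly work into standard mechanisms instead.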

Component Middleware  

Starting in the mid to late 1990s, component middleware emerged to address the limitations of DOC middleware described above. In particular, to address the lack of functional boundaries, component middleware allows a group of cohesive component objects to interact with each other through multiple provided and required interfaces and defines standard runtime mechanisms needed to execute these component objects in generic applications servers. To address the lack of standard deployment and configuration mechanisms, component middleware specifies the infrastructure to package, customize, assemble, and disseminate components throughout a distributed system.  

Enterprise JavaBeans and the CORBA Component Model (CCM) are examples of component middleware that define the following general roles and relationships:

• A component is an implementation entity that exposes a set of named interfaces and connection points that components use to collaborate with each other. Named interfaces service method invocations that other components call synchronously. Connection points are joined with named interfaces provided by other components to associate clients with their servers. Some component models also offer event sources and event sinks, which can be joined together to support asynchronous message passing.
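These roles can be sketched in miniature: the hypothetical component below exposes a named interface for synchronous calls (`read`) and an event source that pushes notifications to any connected event sinks asynchronously via a queue. The class names are invented; EJB and CCM define far richer contracts.

```python
import queue

class EventSource:
    """A connection point that pushes events to joined sinks."""
    def __init__(self):
        self.sinks = []
    def connect(self, sink):
        self.sinks.append(sink)          # assembly: join source to sink
    def push(self, event):
        for sink in self.sinks:
            sink.put(event)              # asynchronous message passing

class SensorComponent:
    """Named interface: read(). Event source: 'updated'."""
    def __init__(self):
        self.value = 0
        self.updated = EventSource()
    def read(self):
        return self.value                # synchronous invocation
    def set(self, v):
        self.value = v
        self.updated.push(v)             # notify connected sinks

def demo():
    sensor = SensorComponent()
    sink = queue.Queue()                 # a consumer's event sink
    sensor.updated.connect(sink)         # wiring done by assembly, once
    sensor.set(42)
    return sensor.read(), sink.get()
```

The wiring step (`connect`) is what component middleware standardizes, in contrast to the ad hoc registry lookups of the DOC example above.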


Understanding Distributed Systems Software Technologies via Patterns

Although the various middleware technologies described above differ widely in their programming interfaces and language mappings, they share many of the same patterns [VKZ04]. Design-focused patterns provide a vocabulary for expressing architectural visions, as well as examples of representative designs and detailed implementations that are clear and concise. Presenting pieces of software in terms of their constituent patterns also allows developers to communicate more effectively, with greater conciseness and less ambiguity.

Distributed computing has been a popular focus for pattern authors for many years. For example, [POSA2] and [VKZ04] present collections of patterns for developing distributed object computing middleware. Likewise, [HOHPE03] and [FOW02] present collections of patterns for enterprise message-oriented middleware and service-oriented architectures. Most recently, [POSA4] has captured an extensive pattern language for building distributed software systems that connects over 250 patterns addressing topics ranging from defining and selecting an appropriate baseline architecture and communication infrastructure, to specifying component interfaces, their implementations, and their interactions. Together, the patterns covered in these books address key technical aspects of distributed computing, such as adaptation and extension, concurrency, database access, event handling, synchronization, and resource management.

As software is integrated into mission-critical systems, there is an increasing need for robust techniques to meet user dependability requirements. Patterns on fault tolerance and fault management have therefore been an active focus over the past decade. Several recent books [UTAS05] [HAN07] contain patterns and pattern languages that address fault tolerance and fault management for systems with stringent operational requirements. Likewise, developing high-quality distributed real-time and embedded (DRE) systems that provide predictable behavior in networked environments is also increasingly crucial to support mission-critical systems. Patterns that guide the development of resource management algorithms and architectures for DRE software appear in [DIP07] and [POSA3].

