By definition, a distributed system consists of multiple processors. These may be organized as a collection of personal workstations, a public processor pool, or some hybrid form. In all cases, some algorithm is needed for deciding which process should be run on which machine. For the workstation model, the question is when to run a process locally and when to look for an idle workstation. For the processor pool model, a decision must be made for every new process. In this section we will study the algorithms used to determine which process is assigned to which processor. We will follow tradition and refer to this subject as "processor allocation" rather than "process allocation," although a good case can be made for the latter.
In a distributed system, processor allocation is the policy that shares the available processing capacity among the systems connected to the network: each connected system is assigned processors (or a share of processor time) on which to run its jobs. The allocation mechanism must decide when a process should be migrated, how to select a new host for it, and how to make the resources originally located at one host available at another.
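To make the decision concrete, here is a minimal sketch of one simple allocation heuristic (run locally if the owner's workstation is lightly loaded, otherwise pick the least-loaded remote node). The `NodeLoad` structure, the threshold value, and the node names are illustrative assumptions, not taken from any particular system.

```python
# A minimal sketch of a processor-allocation decision, assuming each node
# reports a load value (e.g., its run-queue length). NodeLoad and the
# threshold are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class NodeLoad:
    node_id: str
    run_queue_length: int   # current number of runnable processes on the node

LOCAL_LOAD_THRESHOLD = 2    # below this, just run the new process locally

def choose_host(local: NodeLoad, remote_nodes: list[NodeLoad]) -> str:
    """Decide where a newly created process should run."""
    # Workstation model: keep the process at home if the owner's machine is lightly loaded.
    if local.run_queue_length < LOCAL_LOAD_THRESHOLD:
        return local.node_id

    # Otherwise look for the least-loaded (ideally idle) remote node.
    candidates = [n for n in remote_nodes if n.run_queue_length < local.run_queue_length]
    if not candidates:
        return local.node_id          # no better host found; stay local
    best = min(candidates, key=lambda n: n.run_queue_length)
    return best.node_id

# Example: the local workstation is busy, so the process goes to an idle node.
local = NodeLoad("ws-owner", 5)
pool = [NodeLoad("ws-a", 3), NodeLoad("ws-b", 0), NodeLoad("ws-c", 1)]
print(choose_host(local, pool))       # -> "ws-b"
```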
As we know, a distributed system is a collection of a large number of autonomous computers connected by a communication network that operate in a unified way to achieve better performance and throughput.
The potential power of a distributed system comes from the way it manages its resources. Management of processing resources (processors, or CPUs) is governed by two system policies: processor allocation and processor scheduling.
Each node in a distributed system contains its own scheduler that executes processes on the local processor, usually in some timeshared way, whereas the higher-level decision of assigning a task to a node is carried out by a processor allocation algorithm. Although there are slight variations, this scheme seems the most natural one for distributed systems, for two reasons. First, each node usually has its own operating system, which is already capable of scheduling processes. Second, modularity: designers can concentrate on the relatively complicated load-distribution issues without being burdened by every detail of local scheduling.
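The following sketch illustrates this two-level structure under those assumptions: each node owns a local, timeshared (round-robin) ready queue, while a separate allocator decides which node receives each new task. The `Node` and `Allocator` classes and the least-loaded selection rule are hypothetical, chosen only to show how the two levels divide the work.

```python
# A minimal sketch of the two-level scheme: local scheduling per node,
# global processor allocation across nodes. All names are illustrative.

from collections import deque

class Node:
    def __init__(self, name: str):
        self.name = name
        self.ready_queue: deque[str] = deque()   # local scheduler's queue

    def submit(self, task: str) -> None:
        self.ready_queue.append(task)

    def run_one_quantum(self) -> None:
        # Local, timeshared scheduling: run the head task for one quantum,
        # then move it to the back of the queue (simple round robin).
        if self.ready_queue:
            task = self.ready_queue.popleft()
            print(f"{self.name}: running {task} for one quantum")
            self.ready_queue.append(task)

class Allocator:
    """Higher-level processor allocation: pick a node for each new task."""
    def __init__(self, nodes: list[Node]):
        self.nodes = nodes

    def allocate(self, task: str) -> Node:
        # Send the task to the node with the shortest ready queue.
        target = min(self.nodes, key=lambda n: len(n.ready_queue))
        target.submit(task)
        return target

nodes = [Node("node-1"), Node("node-2")]
allocator = Allocator(nodes)
for t in ["t1", "t2", "t3"]:
    chosen = allocator.allocate(t)
    print(f"allocator: {t} -> {chosen.name}")
for n in nodes:
    n.run_one_quantum()
```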