Homework 5 - Processor Allocation Algorithms Version 0
👤 Author: ernestovegaavgmailcom, 2019-04-30 05:08:38
Processor allocation

A distributed system consists of several processors. These can be organized as a collection of non-dedicated workstations, as a pool of processors, or in some hybrid form. In every case a decision must be made about which process to execute and on which machine. In the workstation model, the decision is whether to execute a process locally or to look for an idle station and run it remotely. In the processor-pool model, the allocation decision for each process must be made at the time of execution. In general, allocation algorithms trade off complexity against economy along five design axes:

1.- Deterministic vs. heuristic algorithms.

2.- Centralized vs. distributed algorithms.

3.- Optimal vs. suboptimal algorithms.

4.- Local vs. global algorithms.

5.- Sender-initiated vs. receiver-initiated algorithms.

 

Deterministic algorithms are appropriate when the behavior of the processes has been parameterized and systematized, which is not an easy task but can be approached statistically. Heuristic algorithms are suitable when the load is unpredictable and the operation cannot be described formally in terms of system states, but they require considerable ingenuity.

Centralized designs allow all the information to be gathered in one place, where a better decision can be made; the disadvantages are that the central machine becomes a bottleneck and that robustness is lost if it fails.

By definition, optimal algorithms consume more resources than suboptimal ones, but this can be very relative, since everything depends on how far a solution is from the optimum point of maximum performance at minimum cost. In practice, most real systems settle for suboptimal, heuristic, distributed solutions, except for critical and real-time operations.

When a process is about to be created, a transfer policy must decide where it will execute. The decision can be made with local information only or with global information; the more complete the information, the better the decision but the higher the cost. We are thus faced with the fact that local algorithms are simple but not optimal, while global algorithms are better but consume many resources.

When a machine delegates the execution of a process, a location policy must decide where to send it. This requires information about the load everywhere, but it should be triggered by events, since continuously monitoring the overall state of the system is not feasible when no load balancing is needed. The communication to query the state of the system can therefore be initiated by:

- An overloaded sender that looks for an idle machine to offload work to.

- An idle receiver that looks for work to take on.
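The two triggering policies above can be sketched as random probing, as in well-known sender- and receiver-initiated load-sharing schemes. The threshold value and probe count below are hypothetical parameters, not from the text:

```python
import random

THRESHOLD = 3  # hypothetical load threshold (number of runnable processes)

def sender_initiated(local_load, loads, probes=3):
    """An overloaded sender probes a few random machines for an idle one."""
    if local_load <= THRESHOLD:
        return None  # not overloaded: keep the process local
    for target in random.sample(range(len(loads)), probes):
        if loads[target] < THRESHOLD:
            return target  # transfer the new process to this machine
    return None  # all probed machines are busy: execute locally anyway

def receiver_initiated(local_load, loads, probes=3):
    """An idle receiver probes a few random machines looking for surplus work."""
    if local_load > 0:
        return None  # not idle: no need to look for work
    for source in random.sample(range(len(loads)), probes):
        if loads[source] > THRESHOLD:
            return source  # request a process from this overloaded machine
    return None
```

Note the asymmetry: sender-initiated probing generates traffic precisely when the system is already loaded, while receiver-initiated probing generates traffic when machines are idle and cycles are cheap.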

 

Aspects of the Implementation of Processor Allocation Algorithms.

Almost all algorithms assume that each machine knows its own load and can report its status, because measuring load centrally is not simple. One method is to count the number of processes (including latent, non-active ones); another is to count only the processes that are running or ready to run. The fraction of time the CPU is busy can also be measured, but any performance metric can change its perception of overload depending on the point of measurement.
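The load measures mentioned above can be illustrated with a small sketch. The process states and the CPU-busy figure are hypothetical:

```python
def load_metrics(processes, cpu_busy_fraction):
    """Compute the competing load measures discussed above for one machine."""
    runnable = sum(1 for p in processes if p["state"] in ("running", "ready"))
    total = len(processes)  # also counts latent, non-active processes
    return {
        "runnable_processes": runnable,      # only processes competing for CPU
        "all_processes": total,              # includes idle daemons etc.
        "cpu_busy": cpu_busy_fraction,       # fraction of time CPU was busy
    }

sample = [{"state": "running"}, {"state": "ready"}, {"state": "sleeping"}]
print(load_metrics(sample, 0.75))
```

The three numbers can rank the same machine very differently, which is exactly the perception problem the text points out.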

Another important aspect is the high cost of collecting measurements and moving processes: doing so consumes CPU time, temporarily doubles the memory footprint, and uses network bandwidth for a period of time. That is why great importance must be attached to the stability of the system; a migration strategy should not be used unless the nature of the application requires it.

Regardless of the allocation model used, we make the following assumptions:

a) All machines are identical (or at least code-compatible); they differ at most in speed.

b) Each processor can communicate with the others.

c) Non-migratory allocation strategies are preferred.

d) Migratory strategies are used only if the load must be balanced at execution time.

e) We seek to maximize the number of CPU cycles executed on behalf of user jobs, to minimize idle time, and to obtain a reasonable average response time, rather than optimizing individual times.

f) We try to minimize the response ratio, which is the time needed to execute a process on a given machine divided by the time it would take on a reference processor.
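The response ratio defined in item f) can be computed directly. The machine names and run times below are hypothetical figures for illustration:

```python
# Response ratio: time to run a job on machine i divided by the time
# the same job would take on a reference processor.
reference_time = 10.0  # seconds on the reference processor (hypothetical)

machine_times = {"A": 12.0, "B": 30.0, "C": 9.0}  # measured run times

ratios = {m: t / reference_time for m, t in machine_times.items()}
best = min(ratios, key=ratios.get)  # allocate to the machine minimizing the ratio
print(best, ratios[best])  # -> C 0.9
```

A ratio below 1.0 means the machine outperforms the reference processor for that job.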


 

Processor allocation algorithms

 

Scheduling algorithms

 

FCFS (First Come First Served)

When a process arrives, its PCB is added to the end of the ready queue. The CPU is granted to the first process in the list, and once a process begins to run it works without stopping until it finishes. The average waiting time for this algorithm is usually quite long.
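The long average waiting time can be seen with a small sketch. The burst values are the classic textbook example, used here purely for illustration:

```python
def fcfs_waiting_times(bursts):
    """Waiting time of each process under FCFS, given CPU bursts in arrival order."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)  # each process waits for everything before it
        elapsed += burst       # then holds the CPU until it finishes
    return waits

bursts = [24, 3, 3]  # one long job arriving first, two short ones behind it
waits = fcfs_waiting_times(bursts)
print(waits, sum(waits) / len(waits))  # -> [0, 24, 27] 17.0
```

One long job at the head of the queue inflates the average wait for everyone behind it (the convoy effect).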

 

SJF (Shortest job first)

Once a process starts running it does not stop until it voluntarily changes state (there is no preemption by time). Associated with each process is the length of its next CPU burst, and the CPU is given to the process with the shortest one. If there is more than one, FCFS is used to break the tie. The biggest problem with this algorithm lies in predicting the CPU time of each process.

Another problem with this algorithm is the waiting time of the processes: short processes are constantly being served, while longer ones may never be executed.
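Running the same illustrative bursts shortest-first shows the improvement over FCFS (assuming, as a simplification, that all jobs are ready at time zero and burst lengths are known exactly):

```python
def sjf_average_wait(bursts):
    """Average waiting time when all jobs are ready at t=0 and run shortest first."""
    waits, elapsed = [], 0
    for burst in sorted(bursts):  # shortest job first
        waits.append(elapsed)
        elapsed += burst
    return sum(waits) / len(waits)

bursts = [24, 3, 3]
print(sjf_average_wait(bursts))  # -> 3.0  (vs 17.0 under FCFS)
```

The catch, as the text notes, is that real systems must estimate the next burst rather than sort known values.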

Priority scheduling

Each process is associated with a priority, and the process with the highest priority is selected; FCFS breaks ties. There are two kinds of priority: external priority is defined outside the operating system, while internal priority is defined by measurable quantities such as CPU usage time, I/O activity, etc. The algorithm can be preemptive or non-preemptive. Its main problem is that processes with a very low priority may wait indefinitely; to solve this waiting problem, aging is used, gradually raising the priority of processes that wait.
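A well-known remedy for indefinite waiting under priority scheduling is aging: the longer a process waits, the higher its effective priority becomes. A minimal sketch, assuming the common convention that a lower number means a higher priority (names and values are hypothetical):

```python
def pick_with_aging(processes, age_step=1):
    """Select the highest-priority process (lowest number wins), then age the
    rest so that low-priority processes eventually get to run."""
    chosen = min(processes, key=lambda p: p["priority"])
    for p in processes:
        if p is not chosen and p["priority"] > 0:
            p["priority"] -= age_step  # waiting raises effective priority
    return chosen["name"]

queue = [{"name": "batch", "priority": 5}, {"name": "editor", "priority": 1}]
print(pick_with_aging(queue))       # -> editor
print(queue[0]["priority"])         # -> 4 (the batch job has aged one step)
```

After enough selection rounds the batch job's number reaches that of the editor and it is finally chosen, which is exactly the starvation-avoidance behavior aging is meant to provide.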

 

Round robin

This algorithm is based on FCFS. It treats the ready queue as a circular list and introduces the concept of a "quantum" or "time slice": the maximum CPU time a process may use in each round. If the quantum is very large, the algorithm behaves like FCFS; if it is very small, the context-switch overhead becomes excessive. The quantum should be longer than roughly 80% of the CPU bursts of the processes, considering each individual burst rather than the total CPU time.
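The circular behavior can be sketched as follows; bursts and quantum are hypothetical values chosen to make the rotation visible:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Return the sequence of (pid, time_run) slices under round robin."""
    ready = deque(enumerate(bursts))  # (pid, remaining burst), FCFS order
    trace = []
    while ready:
        pid, remaining = ready.popleft()
        run = min(quantum, remaining)  # each process gets at most one quantum
        trace.append((pid, run))
        if remaining > run:
            ready.append((pid, remaining - run))  # back to the end of the queue
    return trace

print(round_robin([5, 3, 1], quantum=2))
# -> [(0, 2), (1, 2), (2, 1), (0, 2), (1, 1), (0, 1)]
```

With a quantum of 1000 the same call would produce one slice per process, i.e. plain FCFS, matching the observation in the text.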

 

MQS (Multi-level queue scheduling)

This algorithm partitions the ready queue into a number n of queues, with a criterion for classifying each process into one of them. Each queue can use its own scheduling algorithm.

 

MFQS (Multi-level feedback queue scheduling)

This algorithm is defined by the following parameters:

- The number of queues.

- The scheduling algorithm used in each queue.

- The method used to decide which queue a process enters.

- The method used to decide when a process is moved to a lower-priority queue.
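A minimal sketch of the feedback mechanism: a process that exhausts its quantum is demoted to the next lower-priority queue. The per-level quanta (2, 4, 8) are hypothetical, and for simplicity the sketch runs a fixed batch with no new arrivals:

```python
from collections import deque

class MLFQ:
    """Minimal multi-level feedback queue sketch with quantum-based demotion."""

    def __init__(self, quanta=(2, 4, 8)):       # one quantum per queue level
        self.quanta = quanta
        self.queues = [deque() for _ in quanta]

    def add(self, pid, burst):
        self.queues[0].append((pid, burst))     # new processes enter the top queue

    def run(self):
        trace = []                              # (pid, level, time_run) slices
        for level, queue in enumerate(self.queues):
            quantum = self.quanta[level]
            while queue:
                pid, remaining = queue.popleft()
                run = min(quantum, remaining)
                trace.append((pid, level, run))
                if remaining > run and level + 1 < len(self.queues):
                    self.queues[level + 1].append((pid, remaining - run))  # demote
                elif remaining > run:
                    queue.append((pid, remaining - run))  # bottom queue: round robin
        return trace

sched = MLFQ()
sched.add(1, 5)   # CPU-bound job: will be demoted after its first quantum
sched.add(2, 1)   # short job: finishes at the top level
print(sched.run())  # -> [(1, 0, 2), (2, 0, 1), (1, 1, 3)]
```

Short jobs finish in the high-priority queue, while long jobs sink to lower queues with longer quanta, which is the behavior the parameters above are meant to tune.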

 

PDF File: Processor allocation algorithms - Vega, Ernesto

