Processor sharing

Processor sharing or egalitarian processor sharing is a service policy in which the customers, clients or jobs are all served simultaneously, each receiving an equal fraction of the service capacity available. In such a system all jobs start service immediately; there is no waiting in a queue.
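
As a simple illustration (not taken from the article), the following Python sketch simulates egalitarian processor sharing on a single server of unit capacity: at every instant each of the n unfinished jobs receives 1/n of the capacity, and completion times follow from an event-driven simulation. The Job fields and the example job sizes are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Job:
    arrival: float              # arrival time
    size: float                 # total service requirement
    remaining: float = 0.0      # service still needed
    finish: Optional[float] = None

def processor_sharing(jobs, capacity=1.0):
    """Event-driven simulation of egalitarian processor sharing:
    every job currently in the system receives capacity / n of the
    server, where n is the number of unfinished jobs present."""
    for j in jobs:
        j.remaining = j.size
    pending = sorted(jobs, key=lambda j: j.arrival)   # not yet arrived
    active = []                                       # currently in service
    t = 0.0
    while pending or active:
        rate = capacity / len(active) if active else 0.0
        # time until the next arrival and until the earliest completion
        next_arrival = pending[0].arrival - t if pending else float("inf")
        next_finish = min(j.remaining / rate for j in active) if active else float("inf")
        dt = min(next_arrival, next_finish)
        for j in active:                              # all active jobs served equally
            j.remaining -= rate * dt
        t += dt
        for j in [j for j in active if j.remaining <= 1e-12]:
            j.finish = t
            active.remove(j)
        while pending and pending[0].arrival <= t + 1e-12:
            active.append(pending.pop(0))
    return jobs

# Example: two jobs arrive at time 0, a third at time 2 (illustrative values).
for j in processor_sharing([Job(0.0, 3.0), Job(0.0, 1.0), Job(2.0, 2.0)]):
    print(f"arrived {j.arrival}, size {j.size}, finished {j.finish:.2f}")
```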

The processor sharing algorithm "emerged as an idealisation of round-robin scheduling algorithms in time-shared computer systems".[1][2]

Queueing theory

A single server queue operating subject to Poisson arrivals (such as an M/M/1 queue or M/G/1 queue) with a processor sharing discipline has a geometric stationary distribution for the number of jobs in the system.[1]
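
For concreteness, this can be written as follows (the notation is an assumption of this sketch, not the article's own): with arrival rate λ, mean service requirement 1/μ and load ρ = λ/μ < 1, the stationary number of jobs in the system is geometric.

```latex
% Stationary number-in-system distribution under processor sharing
% (assumed notation: arrival rate \lambda, service rate \mu, load \rho = \lambda/\mu < 1)
\[
  \pi_n = (1 - \rho)\,\rho^{\,n}, \qquad n = 0, 1, 2, \dots
\]
```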

The distribution of the sojourn time that jobs experience has no closed-form solution, even in an M/M/1 queue.[3]
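
The mean sojourn time, by contrast, is known in closed form; a standard result for the M/G/1 processor-sharing queue (stated here as an aside, with the notation assumed as above) is that a job requiring x units of service spends on average x/(1 − ρ) in the system.

```latex
% Mean sojourn time under M/G/1 processor sharing (assumed notation as above;
% x is the service requirement of a tagged job, S a generic service requirement)
\[
  \mathbb{E}[T \mid \text{size } x] = \frac{x}{1 - \rho},
  \qquad
  \mathbb{E}[T] = \frac{\mathbb{E}[S]}{1 - \rho}
  \;=\; \frac{1}{\mu - \lambda} \ \text{in the M/M/1 case.}
\]
```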

Generalized processor sharing

Main page: Generalized processor sharing

Generalized processor sharing is a multi-class adaptation of the policy which shares the service capacity among all non-empty job classes at the node in proportion to positive weight factors, irrespective of the number of jobs of each class present. Often it is assumed that the jobs within a class form a queue and that queue is served on a first-come, first-served basis, but this assumption is not necessary for many GPS applications.[1]
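
Written out in the usual GPS notation (assumed here, not quoted from the article), a non-empty class i receives the instantaneous service rate below; classes with no jobs present receive nothing, and their share is redistributed among the backlogged classes.

```latex
% Instantaneous service rate of a non-empty class i under GPS
% (assumed notation: total capacity C, weights w_j > 0, and B(t) the set of
%  classes with at least one job present at time t)
\[
  r_i(t) = \frac{w_i}{\sum_{j \in B(t)} w_j}\, C, \qquad i \in B(t).
\]
```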

In processor scheduling, generalized processor sharing is "an idealized scheduling algorithm that achieves perfect fairness. All practical schedulers approximate GPS and use it as a reference to measure fairness."[4]

Multilevel processor sharing

In multilevel processor sharing a finite set of thresholds is defined and jobs are partitioned according to how much service they have already received. The lowest level (containing the jobs which have received the least service) has the highest priority, and higher levels have monotonically decreasing priorities. Within each level an internal scheduling discipline is used.[1]
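
A minimal sketch of the level assignment, assuming thresholds on attained service (the threshold values and boundary convention below are illustrative assumptions, not taken from the article):

```python
import bisect

def mlps_level(attained_service, thresholds):
    """Priority level of a job under multilevel processor sharing:
    level 0 (highest priority) holds jobs whose attained service is below
    the first threshold, level k holds jobs between thresholds k-1 and k,
    and so on. `thresholds` is an increasing sequence a_1 < a_2 < ... < a_N."""
    return bisect.bisect_right(thresholds, attained_service)

# Example: thresholds at 1 and 5 units of attained service (illustrative values).
# The server would serve the lowest non-empty level using its internal discipline.
thresholds = [1.0, 5.0]
for s in (0.2, 1.0, 3.0, 7.0):
    print(f"attained service {s}: level {mlps_level(s, thresholds)}")
```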

References