
Preemptive Weighted Fair-Queueing Algorithm: The WFQ algorithm is designed to ensure fairness among multiple servers. As you will see shortly, the algorithm closely resembles the total bandwidth server algorithm. Both are greedy, that is, work conserving. Both provide the same schedulability guarantee,
and hence, the same worst-case response time. At first glance, the replenishment rules of a WFQ server appear to be the same as those of a total bandwidth server, except for how the deadline is computed at each replenishment time. This difference, however, leads to a significant difference in their behavior: the total bandwidth server algorithm is unfair, while the WFQ algorithm provides bounded fairness.

Emulation of GPS Algorithm. Again, a WFQ server consumes its budget only when it executes. Its budget is replenished when it first becomes backlogged after being idle. As long as it is backlogged, its budget is replenished each time it completes a job. At each replenishment time, the server budget is set to the execution time of the job at the head of its queue.
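As a quick summary of these budget rules, here is a minimal Python sketch; the class and method names are ours, not the text's, and deadline computation is deliberately left out because it is discussed next.

```python
class WFQServerBudget:
    """Budget rules of a WFQ server (deadline computation handled separately)."""

    def __init__(self):
        self.queue = []   # execution times of queued jobs, in FIFO order
        self.budget = 0

    def on_job_arrival(self, execution_time):
        was_idle = not self.queue
        self.queue.append(execution_time)
        if was_idle:
            # Server becomes backlogged after being idle: budget is set to the
            # execution time of the job at the head of the queue.
            self.budget = self.queue[0]

    def on_job_completion(self):
        self.queue.pop(0)
        if self.queue:
            # Still backlogged: replenish to the new head job's execution time.
            self.budget = self.queue[0]
        # The budget is consumed only while the server actually executes,
        # which is not modeled in this sketch.
```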

In short, the replenishment rules of the WFQ algorithm are such that a WFQ server emulates a GPS server of the same size; the deadline of the WFQ server is the time at which a GPS server would complete the job at the head of the server queue. To illustrate, we look at the example in Figure 7–14 again. Figure 7–14(c) shows the schedule of the four tasks Ai, for i = 1, 2, 3, 4, when they are scheduled according to the GPS algorithm. Specifically, the figure shows that they are executed by GPS servers GPS1, GPS2, GPS3, and GPS4, respectively, and the sizes of these servers are 1/4, 1/8, 1/4, and 3/8, respectively. The scheduler schedules backlogged servers on a weighted round-robin basis, with an infinitesimally small round length; the time per round given to each server is proportional to the server size. The numbers below each time line in Figure 7–14(c) give the completion times of jobs executed by the respective server.

  • Before time 18, server GPS4 is idle, so the backlogged servers share the processor proportionally. In other words, the servers attain 2/5, 1/5, and 2/5 of the available time, and their jobs complete in 2.5, 5, and 7.5 units of time, respectively.
  • By time 18, the eighth, fourth, and third jobs of A1, A2, and A3 are at the heads of the queues of the backlogged servers, and their remaining execution times are 0.8, 0.4, and 1.8, respectively.
  • Starting from 18, all four servers are backlogged, and each server attains 1/4, 1/8, 1/4, and 3/8 of the available time, respectively. This is why the first three servers take an additional 3.2, 3.2, and 7.2 units of time to complete the jobs at the heads of their respective queues. (These jobs are completed at 21.2, 21.2, and 25.2, respectively.) Afterwards, each server GPSi completes each job in Ai in 4, 8, 12, and 8 units of time, for i = 1, 2, 3, 4, respectively. (A small sketch of this fluid-sharing arithmetic follows the list.)
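The fluid-sharing arithmetic above is easy to check mechanically. The following minimal Python sketch is not from the text: the function name is ours, and the server sizes and remaining execution times are the ones quoted in the example (the situation at time 18, when all four servers are backlogged).

```python
from fractions import Fraction as F

def gps_finish_increments(sizes, remaining, backlogged):
    """Time each backlogged server needs to finish its head job under GPS.

    Each backlogged server receives a share of the processor proportional to
    its size, so a job with r units of execution left completes after
    r / (size / total_backlogged_size) units of real time, assuming the set
    of backlogged servers does not change in the meantime.
    """
    total = sum(sizes[i] for i in backlogged)
    return {i: remaining[i] / (sizes[i] / total) for i in backlogged}

# Situation at time 18 in the example: all four servers are backlogged.
sizes = [F(1, 4), F(1, 8), F(1, 4), F(3, 8)]
remaining = [F(8, 10), F(4, 10), F(18, 10), 3]   # head jobs of A1..A4
incr = gps_finish_increments(sizes, remaining, backlogged=[0, 1, 2, 3])
print({i: float(18 + incr[i]) for i in incr})    # {0: 21.2, 1: 21.2, 2: 25.2, 3: 26.0}
```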

Figure 7–14(d) gives the corresponding WFQ schedule. The budget of each WFQ server (called FQi in the figure) is replenished in the same manner as the corresponding total bandwidth server, except for the way the server deadline is computed. Specifically, we note the following.

  • Before time 18, the backlogged WFQ servers behave just like total bandwidth servers, except that the backlogged servers get 2/5, 1/5, and 2/5 of the processor time, respectively. Hence, whenever a WFQ server FQi completes a job, the scheduler gives the server 1, 1, or 3 units of budget and sets its deadline at its current deadline plus 2.5 (i.e., 1 × 5/2), 5 (i.e., 1 × 5/1), or 7.5 (i.e., 3 × 5/2), for i = 1, 2, 3, respectively (see the sketch after this list).
  • Immediately before time 18, the three backlogged servers have completed 8, 4, and 2 jobs and their deadlines are at 22.5, 25, and 22.5, respectively.
  • At time 18, when FQ4 also becomes backlogged, the scheduler recomputes the deadlines of FQ1, FQ2, and FQ3 and makes them equal to the completion times of the ninth job, fifth job, and third job of the corresponding tasks according to the GPS schedule. These completion times are 25.2, 29.2, and 25.2, respectively, and they become the new deadlines of servers FQ1, FQ2, and FQ3. Also, at this time, the scheduler gives FQ4 a budget of 3 units and sets its deadline at 26. The scheduler then queues the servers according to their new deadlines.
  • After time 18, the WFQ servers behave just like total bandwidth servers again.
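The deadline bookkeeping in the list above can be written down compactly. The sketch below is ours (the function name is illustrative); it computes by how much a WFQ server's deadline advances at a replenishment while the set of backlogged servers stays unchanged.

```python
from fractions import Fraction as F

def wfq_deadline_increment(budget, size, total_backlogged_size):
    """Amount by which a WFQ server's deadline advances at a replenishment,
    assuming the set of backlogged servers does not change.

    The server's instantaneous share is size / total_backlogged_size, so its
    new budget is exhausted budget / share time units after the point at
    which the previous budget was exhausted (the current deadline).
    """
    share = size / total_backlogged_size
    return budget / share

# Before time 18, only FQ1, FQ2, and FQ3 are backlogged (total size 5/8):
total = F(5, 8)
print(float(wfq_deadline_increment(1, F(1, 4), total)))  # 2.5  (= 1 x 5/2)
print(float(wfq_deadline_increment(1, F(1, 8), total)))  # 5.0  (= 1 x 5/1)
print(float(wfq_deadline_increment(3, F(1, 4), total)))  # 7.5  (= 3 x 5/2)
```

When the set of backlogged servers changes, as it does at time 18, these real-time deadlines become stale and must be recomputed; the virtual-time formulation described next avoids exactly this recomputation.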

Virtual Time versus Real Time. We note that if the scheduler were to replenish server budget in the manner illustrated by the above example, it would have to recompute the deadlines of all backlogged servers whenever some server changes from idle to backlogged or vice versa. While this recomputation may be acceptable for CPU scheduling, it is not for scheduling packet transmissions. A “budget replenishment” in a packet switch corresponds to the scheduler giving a ready packet a time stamp (i.e., a deadline) and inserting the packet into the outgoing queue, which is sorted in order of packet time stamps. Recomputing the deadline and time stamp of an already queued packet would be unacceptable, from the standpoint of both scheduling overhead and switch complexity. Fortunately, this recomputation of server deadlines is not necessary if, instead of giving servers deadlines measured in real time as we have done in this example, the scheduler gives servers virtual-time deadlines, called finish numbers. The finish number of a server gives the number of the round in which the server budget would be exhausted if the backlogged servers were scheduled according to the GPS algorithm.
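A minimal sketch of this virtual-time bookkeeping, consistent with the description above (class and method names are ours, not the text's): the system finish number advances at rate 1 / (total size of backlogged servers), and a server's finish number advances by budget / size at each replenishment, so it never has to be recomputed when other servers become backlogged or idle.

```python
from fractions import Fraction as F

class WFQVirtualClock:
    """Finish-number (virtual-time) bookkeeping for WFQ servers; a sketch."""

    def __init__(self):
        self.system_fn = F(0)         # system finish number (virtual time)
        self.last_update = F(0)       # real time of the last update
        self.total_backlogged = F(0)  # total size of backlogged servers

    def _advance(self, t):
        # The system finish number grows at rate 1 / (total backlogged size).
        if self.total_backlogged > 0:
            self.system_fn += (F(t) - self.last_update) / self.total_backlogged
        self.last_update = F(t)

    def server_becomes_backlogged(self, t, size, budget):
        """Finish number of a server that becomes backlogged at time t."""
        self._advance(t)
        self.total_backlogged += size
        # Round in which the new budget would be exhausted under GPS.
        return self.system_fn + F(budget) / size

    def replenish(self, server_fn, size, budget):
        """Finish number after a backlogged server completes a job."""
        return server_fn + F(budget) / size

    def server_becomes_idle(self, t, size):
        self._advance(t)
        self.total_backlogged -= size
```

Servers are dispatched in order of their finish numbers. For instance, at time 18 the system finish number is 28.8, so when FQ4 becomes backlogged with 3 units of budget it would be assigned finish number 28.8 + 3/(3/8) = 36.8, while the finish numbers of FQ1, FQ2, and FQ3 need not change.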

Figure 7–14(e) shows how finish numbers are related to time for our example. Before time 18, the total size of the backlogged servers is only 5/8. If the tasks were executed according to the GPS algorithm, the length of each round would be only 5/8 of what the round length would be when the total size of backlogged servers is 1. In other words, the finish number of the system increases at the rate of 8/5 per unit time. So, at time 2.5, the system just finishes round 4 (= 2.5 × 8/5); at time 5, the system just finishes round 8; and at time 18, the system is in round 28.8 (= 18 × 8/5). After 18, the total size of backlogged servers is 1. Consequently, the finish number of the system increases at the rate of 1 per unit time.
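For this example, the mapping from real time to the system finish number is piecewise linear. The tiny helper below is a sketch specific to this example (the breakpoint at time 18 and the rates 8/5 and 1 are taken from the text):

```python
def system_finish_number(t):
    """System finish number at real time t for the example of Figure 7-14.

    Before time 18 the total size of backlogged servers is 5/8, so the finish
    number grows at rate 8/5; from time 18 on the total size is 1, so it
    grows at rate 1.
    """
    if t <= 18:
        return t * 8 / 5
    return 18 * 8 / 5 + (t - 18)

print(system_finish_number(2.5))   # 4.0
print(system_finish_number(5))     # 8.0
print(system_finish_number(18))    # 28.8
```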