
Real Time Systems

OFF-LINE VERSUS ON-LINE SCHEDULING: In Section 4.1, we mentioned that a clock-driven scheduler typically makes use of a precomputed schedule of all hard real-time jobs. This schedule is computed off-line before the system begins to execute, and the computation is based on the knowledge of the release times and processor-time/resource requirements of all the jobs for all times. When the operation mode of the system changes, the new schedule specifying when each job in the new mode executes is also precomputed and stored for use. In this case, we say that scheduling is (done) off-line, and the precomputed schedules are off-line schedules.
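The sketch below illustrates one way such a precomputed schedule might be stored and consulted at run time. The mode names, decision times, job names, and hyperperiod in the table are illustrative assumptions for the sketch, not values from the text.

```python
# A minimal sketch, assuming a hypothetical system with two operation modes and
# a hyperperiod of 12 time units; all table entries are illustrative only.

# One static schedule per operation mode: (decision time, job) pairs covering
# one hyperperiod, computed off-line and stored before the system runs.
OFFLINE_SCHEDULES = {
    "normal":   [(0, "J1"), (4, "J2"), (6, "J1"), (10, "idle")],
    "degraded": [(0, "J1"), (8, "idle")],
}

HYPERPERIOD = 12.0

def job_to_run(mode: str, t: float) -> str:
    """Return the job that the stored off-line schedule assigns to time t."""
    phase = t % HYPERPERIOD
    current = "idle"
    for start, job in OFFLINE_SCHEDULES[mode]:
        if phase >= start:
            current = job          # the latest entry whose start time has passed
    return current

# At run time the scheduler only performs a table lookup; no scheduling
# decision is computed on-line.
print(job_to_run("normal", 13.0))  # "J1": time 13 falls in phase 1 of the hyperperiod
```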

An obvious disadvantage of off-line scheduling is inflexibility. This approach is possible only when the system is deterministic, meaning that the system provides some fixed set(s) of functions and that the release times and processor-time/resource demands of all its jobs are known and do not vary or vary only slightly. For a deterministic system, however, off-line scheduling has several advantages, the deterministic timing behavior of the resultant system being one of them. Because the computation of the schedules is done off-line, the complexity of the scheduling algorithm(s) used for this purpose is not important. Indeed, as we will see in the next chapter, complex heuristic algorithms are typically used to find good off-line schedules that can make nearly full use of the resources.

Competitiveness of On-Line Scheduling. We say that scheduling is done on-line, or that we use an on-line scheduling algorithm, if the scheduler makes each scheduling decision without knowledge about the jobs that will be released in the future; the parameters of each job become known to the on-line scheduler only after the job is released. The priority-driven algorithms described earlier and in subsequent chapters are on-line algorithms. In Chapter 2 we talked about the admission of each new task depending on the outcome of an acceptance test that is based on the parameters of the new task and tasks admitted earlier. Such an acceptance test is on-line.
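As an illustration of such an on-line acceptance test, the following sketch admits a new periodic task only if the total processor utilization stays at or below 1, which is a sufficient condition for preemptive EDF on one processor when relative deadlines equal periods. This particular test and its interface are assumptions chosen for concreteness, not necessarily the test discussed in Chapter 2.

```python
# A minimal sketch of a utilization-based acceptance test (an assumption for
# illustration): admit a new periodic task only if total utilization <= 1.

admitted = []   # (execution_time, period) of the tasks admitted so far

def accept(execution_time: float, period: float) -> bool:
    """On-line acceptance test run when a new periodic task requests admission."""
    new_total = sum(e / p for e, p in admitted) + execution_time / period
    if new_total <= 1.0:
        admitted.append((execution_time, period))
        return True
    return False

print(accept(2, 10))   # True:  utilization becomes 0.2
print(accept(3, 5))    # True:  utilization becomes 0.8
print(accept(1, 4))    # False: admission would push utilization to 1.05
```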

Clearly, on-line scheduling is the only option in a system whose future workload is unpredictable. An on-line scheduler can accommodate dynamic variations in user demands and resource availability. The price of this flexibility and adaptability is a reduced ability for the scheduler to make the best use of system resources. Without prior knowledge about future jobs, the scheduler cannot make optimal scheduling decisions, whereas a clairvoyant scheduler that knows about all future jobs can.

As a simple example, suppose that at time 0, a nonpreemptive job J1 with execution time 1 and deadline 2 is released. An on-line scheduler has two options at time 0: It either schedules J1 to start execution at time 0 or postpones the execution of J1 to some later time. Suppose that the on-line scheduler decides to schedule J1 at time 0. Later, at time x < 1, a job J2 with execution time 1 − x and deadline 1 is released. J2 misses its deadline because it cannot start execution until time 1. In contrast, a clairvoyant scheduler, which knows about J2 at time 0, would schedule J1 to start execution at time 1 and thus allow both jobs to complete in time. In the second case, the on-line scheduler decides at time 0 to postpone the execution of J1. Now suppose that at some time x < 1, before J1 has started, a job J3 is released instead of J2. The execution time of J3 is 1, and its deadline is 2. It is impossible for the on-line scheduler to schedule both J1 and J3 so that they complete in time. Again, a clairvoyant scheduler, knowing of the future release of J3 at time 0, would schedule J1 to start execution at time 0 and could therefore complete both J1 and J3 on time.
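The short sketch below replays this example. The helper function feasible, the value x = 0.5, and the dictionary representation of jobs are illustrative choices, not part of the example itself; any 0 < x < 1 gives the same outcome.

```python
# A small sketch (illustrative only) that replays the example above.  Jobs are
# dictionaries with release time r, execution time e, and absolute deadline d.

def feasible(order, jobs, t0=0.0):
    """Run the jobs in 'order' nonpreemptively, back to back, starting no
    earlier than time t0; return True if every job meets its deadline."""
    t = t0
    for name in order:
        j = jobs[name]
        t = max(t, j["r"]) + j["e"]   # a job cannot start before its release time
        if t > j["d"]:
            return False
    return True

x = 0.5  # any 0 < x < 1 leads to the same conclusions

# Case 1: the on-line scheduler starts J1 at time 0; the adversary releases J2.
jobs1 = {"J1": {"r": 0, "e": 1, "d": 2}, "J2": {"r": x, "e": 1 - x, "d": 1}}
print(feasible(["J1", "J2"], jobs1))        # False: J2 cannot start before time 1
print(feasible(["J2", "J1"], jobs1))        # True:  the clairvoyant order works

# Case 2: the scheduler postpones J1, so nothing has run by time x, and the
# adversary releases J3 instead of J2.
jobs2 = {"J1": {"r": 0, "e": 1, "d": 2}, "J3": {"r": x, "e": 1, "d": 2}}
print(feasible(["J1", "J3"], jobs2, t0=x))  # False: only 2 - x time units remain
print(feasible(["J3", "J1"], jobs2, t0=x))  # False
print(feasible(["J1", "J3"], jobs2))        # True:  the clairvoyant order, from time 0
```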

The system is said to be overloaded when the jobs offered to the scheduler cannot be feasibly scheduled even by a clairvoyant scheduler. When the system is not overloaded, an optimal on-line scheduling algorithm is one that always produces a feasible schedule of all offered jobs. The example above shows that no optimal on-line scheduling algorithm exists when some jobs are nonpreemptable. On the other hand, if all the jobs are preemptable and there is only one processor, optimal on-line algorithms exist, and the EDF and LST algorithms are examples.

During an overload, some jobs must be discarded in order to allow other jobs to complete in time. A reasonable way to measure the performance of a scheduling algorithm during an overload is by the amount of work the scheduler can feasibly schedule according to the algorithm: the larger this amount, the better the algorithm. The competitive factor of an algorithm captures this aspect of performance. To define this performance measure, we say that the value of a job is equal to its execution time if the job completes by its deadline according to a given schedule and is equal to zero if the job fails to complete in time according to the schedule. The value of a schedule of a sequence of jobs is equal to the sum of the values of all the jobs in the sequence according to the schedule. A scheduling algorithm is optimal if it always produces a schedule of the maximum possible value for every finite set of jobs. An on-line algorithm has a competitive factor c if and only if the value of the schedule of any finite sequence of jobs produced by the algorithm is at least c times the value of the schedule of the jobs produced by an optimal clairvoyant algorithm.
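A short sketch may help make this value measure concrete. The job set, completion times, and function names below are illustrative assumptions; the competitive factor of an algorithm is the worst case, over all finite job sequences, of the ratio computed here.

```python
# A minimal sketch of the value measure defined above (names and data are
# illustrative assumptions, not the book's notation).

def schedule_value(jobs, completion_times):
    """jobs: {name: (execution_time, absolute_deadline)};
    completion_times: {name: finish time, or None if the job was discarded}."""
    value = 0.0
    for name, (execution, deadline) in jobs.items():
        finish = completion_times.get(name)
        if finish is not None and finish <= deadline:
            value += execution       # a job counts only if it completes in time
    return value

def competitive_ratio(online_value, clairvoyant_value):
    """Ratio achieved on one job sequence; an algorithm's competitive factor
    is the worst case of this ratio over all finite job sequences."""
    return online_value / clairvoyant_value if clairvoyant_value > 0 else 1.0

jobs = {"A": (2.0, 2.0), "B": (5.0, 7.0)}
online = {"A": 2.0, "B": None}         # the on-line run completed only A
clairvoyant = {"A": 2.0, "B": 7.0}     # the clairvoyant run completed both
print(competitive_ratio(schedule_value(jobs, online),
                        schedule_value(jobs, clairvoyant)))   # 2/7, about 0.29
```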

In terms of this performance measure, the EDF and LST algorithms are optimal under the condition that the jobs are preemptable, there is only one processor, and the processor is not overloaded. Their competitive factors are equal to 1 under this condition. On the other hand, when the system is overloaded, their competitive factors are 0. To demonstrate, let us consider two jobs. The first one is released at time 0, and its execution time is 2ε; its deadline is ε for some arbitrarily small positive number ε. At time ε, a job whose relative deadline is equal to its execution time e is released. Under EDF (or LST), the first job has the earlier deadline (and the smaller slack), so it occupies the processor until it completes at time 2ε even though it can never meet its deadline; the second job, which has no slack, consequently misses its deadline as well. The value achieved by the EDF or LST algorithm is therefore 0, while the maximum possible value achievable is e: a clairvoyant scheduler would discard the first job and execute the second from ε to ε + e.
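The following numeric walk-through traces this two-job example with ε = 0.01 and e = 1.0 chosen only for illustration, under the assumption that the EDF scheduler never aborts a job it has begun, even after the job's deadline has passed.

```python
# A numeric walk-through of the overload example (illustrative values only),
# assuming an EDF scheduler that does not abort a late job once it has started.

eps, e = 0.01, 1.0

# Job 1: released at 0, execution 2*eps, deadline eps (it can never complete in time).
# Job 2: released at eps, execution e, deadline eps + e (zero slack).

# EDF: at time eps, job 1's deadline (eps) is earlier than job 2's (eps + e),
# so EDF keeps running job 1 until 2*eps, and job 2 starts too late.
job1_finish = 2 * eps                 # 0.02 > deadline 0.01 -> value 0
job2_finish = 2 * eps + e             # 1.02 > deadline 1.01 -> value 0
edf_value = 0.0

# Clairvoyant: discard job 1, run job 2 from eps to eps + e, and collect value e.
clairvoyant_value = e

print(edf_value / clairvoyant_value)  # 0.0 -> competitive factor 0 under overload
```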

As it turns out, the EDF and LST algorithms are not the only algorithms with poor performance when the system is overloaded. In general, all on-line scheduling algorithms perform rather poorly. The following theorem, due to Baruah et al. [BKMM], gives the performance limitation of on-line scheduling when the system is overloaded.

THEOREM 1. No on-line scheduling algorithm can achieve a competitive factor greater than 0.25 when the system is overloaded.
Informal Proof of Theorem 1. To gain some insight into why this upper bound of 0.25 holds, we summarize the proof of Theorem 1; you can find the complete formal proof in [BKMM]. Suppose that there is an adversary of the on-line scheduler. Over time, the adversary creates two kinds of jobs and offers (i.e., releases) them to the scheduler: major jobs and jobs associated with major jobs. The relative deadline of every job is equal to its execution time. (In other words, the job has no slack; the scheduler must either schedule the job immediately after it is released or discard it.) We name the major jobs Ji for i = 0, 1, . . . , max in increasing order of their release times and denote the execution time and release time of each major job Ji by ei and ri, respectively. (max is some positive integer that we will define shortly.)

The adversary creates a sequence of jobs associated with each major job Ji. The execution times of all the associated jobs in the sequence are equal to some small number ε > 0, which is negligible compared with ei. The first job in the sequence associated with each major job Ji is released at the same time as Ji. If there is more than one associated job in a sequence, each subsequent associated job is released at the deadline of the previous associated job in the sequence. The number of associated jobs in the sequence depends on the actions of the on-line scheduler. Specifically, as long as the on-line scheduler chooses to execute Ji, the adversary continues to create jobs associated with Ji until the deadline di of Ji. As soon as the scheduler decides to execute a job associated with Ji, the adversary stops releasing any more associated jobs.

Let us now consider a busy interval that begins at r0, when the adversary releases the first major job J0 and the first of the sequence of jobs associated with J0. Depending on the actions of the on-line scheduler, the adversary may release major job Ji for i > 0 at time ri; the release time ri of the ith major job and the first of its associated jobs is equal to ri−1 + ei−1 − ε. In other words, Ji is released ε units of time before the deadline of Ji−1. It is not possible to schedule both Ji−1 and Ji to complete in time. At time ri, the scheduler must choose either to discard Ji−1 and start to execute Ji, or to continue to execute Ji−1 and therefore discard Ji.
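The schematic sketch below mimics the adversary just described. The parameter values, the callback standing in for the on-line scheduler's choices, and the finite list of execution times are illustrative assumptions; in the full proof the execution times of the major jobs are chosen carefully, whereas here they are arbitrary.

```python
# A schematic sketch (illustrative, not the formal construction) of the
# adversary: as long as the on-line scheduler keeps executing the current
# major job J_i, the adversary keeps the associated zero-slack jobs coming
# and releases the next major job eps time units before J_i's deadline.

eps = 0.01

def adversary(scheduler_sticks_with_major, execution_times):
    """scheduler_sticks_with_major(i) -> True if the on-line scheduler keeps
    executing major job J_i when tempted by its associated jobs.
    Returns the released major jobs as (release, execution, deadline) triples."""
    majors = []
    r = 0.0
    for i, e in enumerate(execution_times):
        majors.append((r, e, r + e))          # J_i has no slack: d_i = r_i + e_i
        if not scheduler_sticks_with_major(i):
            break                             # the scheduler switched: stop releasing
        r = r + e - eps                       # r_{i+1} = r_i + e_i - eps
    return majors

# A scheduler that always stays with the major job is offered every major job,
# yet at each new release r_i it must discard either J_{i-1} or J_i.
print(adversary(lambda i: True, execution_times=[1.0, 2.0, 4.0, 8.0]))
```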