Resource Access Control
In Section 3.1, we briefly mentioned that in addition to a processor, each job may
require some other resource in order to execute; thus far we have ignored this requirement
in order to focus on the problems in processor scheduling. We are now ready to take the
resource requirements into account.
This chapter first extends our workload model to include resources and introduces addi-
tional notations needed in later sections. It then discusses how resource contention affects the
execution behavior and schedulability of jobs, how various resource access-control protocols
work to reduce the undesirable effect of resource contention, and how well these protocols
succeed in achieving this goal. We focus on priority-driven systems. Clock-driven systems do
not have these problems as we can avoid resource contention among jobs by scheduling them
according to a cyclic schedule that keeps jobs’ resource accesses serialized.
We continue to focus on the case where the system contains only one processor. In addition,
the system contains ρ types of serially reusable resources named R1, R2, . . . , Rρ. There
are νi indistinguishable units of resource (type) Ri, for 1 ≤ i ≤ ρ. Serially reusable
resources are typically granted (i.e., allocated) to jobs on a nonpreemptive basis and used in a
mutually exclusive manner. In other words, when a unit of a resource Ri is granted to a job, this
unit is no longer available to other jobs until the job frees the unit. Again, examples of such
resources are mutexes, reader/writer locks, connection sockets, printers, and remote servers.
A binary semaphore is a resource (type) that has only 1 unit while a counting semaphore has
many units. A system containing five printers has 5 units of the printer resource. There is only
1 unit of an exclusive write-lock.
A resource that has an infinite number of units has no effect on the timing behavior of
any job since every job can have the resource at any time; there is no need to include the
resource in our model. Therefore, we lose no generality by assuming that every resource Ri
has a finite number of units.
Some resources can be used by more than one job at the same time. We model such a
resource as a resource type that has many units, each used in a mutually exclusive manner. For
Chapter 8 Resources and Resource Access Control
example, a file that can be read by at most ν users at the same time is modeled as a resource
that has ν exclusive units. By modeling shared resources in this manner, we do not need to
treat them differently.
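A resource with ν mutually exclusive units thus behaves like a counting semaphore: a request is granted only when enough free units remain. The following is a minimal sketch of this allocation discipline; the class and method names are our own illustrative choices, not notation from the text.

```python
# A serially reusable resource with nu indistinguishable units, each
# granted on a mutually exclusive basis (e.g., a file readable by at
# most nu users, modeled as nu exclusive units).

class MultiUnitResource:
    def __init__(self, units):
        self.free = units            # number of currently free units

    def allocate(self, k=1):
        """Grant k units if that many are free; otherwise the
        requesting job would become blocked."""
        if k <= self.free:
            self.free -= k
            return True
        return False

    def release(self, k=1):
        """Return k units to the free pool."""
        self.free += k

# A system with five printers has 5 units of the printer resource.
printer = MultiUnitResource(5)
print(printer.allocate(2))    # granted; 3 units remain free
print(printer.allocate(4))    # denied; only 3 units are free
```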
The notation extends to nested combinations of critical sections. Specifically, we denote nested critical sections by nested square
brackets. For example, the nested critical section in J3 is [R1 ; 14 [R4 , 3; 9 [R5 , 4; 3]]]. This
notation indicates that the critical section beginning from L(R1 ) includes the one beginning
from L(R4, 3), which in turn includes the one beginning from L(R5, 4). Similarly, the critical
sections in J2 are [R2; 7 [R3; 2]][R3; 3], indicating that there are two nonoverlapping critical sections
in this job.
FIGURE 8–2 Example of job interaction due to resource contention.
This example illustrates how resource contention can delay the completion of higher-priority
jobs. It is easy to see that if J1 and J2 do not require the resource, they can complete by times
11 and 14, respectively.
FIGURE 8–3 Example illustrating timing anomaly.
should complete sooner. Indeed, this reduction does allow jobs J2 and J3 to complete sooner.
Unfortunately, rather than meeting its deadline at 14, J1 misses its deadline because it does
not complete until 14.5.
More seriously, without good resource access control, the duration of a priority inver-
sion can be unbounded. The example in Figure 8–4 illustrates this fact. Here, jobs J1 and J3
have the highest priority and lowest priority, respectively. At time 0, J3 becomes ready and
executes. It acquires the resource R shortly afterwards and continues to execute. After R is
allocated to J3 , J1 becomes ready. It preempts J3 and executes until it requests resource R at
time 3. Because the resource is in use, J1 becomes blocked, and a priority inversion begins.
While J3 is holding the resource and executes, a job J2 with a priority higher than J3 but
lower than J1 is released. Moreover, J2 does not require the resource R. This job preempts J3
and executes to completion. Thus, J2 lengthens the duration of this priority inversion. In this
situation, the priority inversion is said to be uncontrolled [ShRL90]. There can be an arbitrary
number of jobs with priorities lower than J1 and higher than J3 released in the meantime. They
can further lengthen the duration of the priority inversion. Indeed, when priority inversion is
uncontrolled, a job can be blocked for an infinitely long time.
Nonpreemptivity of resource allocation can also lead to deadlocks. The classic example
is one where there are two jobs that both require resources X and Y . The jobs are in deadlock
0 2 4 6 8 10 12 14 16 18
J1
J2
J3
FIGURE 8–4 Uncontrolled priority inversion.
when one of them holds X and requests for Y , while the other holds Y and requests for X .
The conditions that allow this circular wait of jobs for each other (i.e., a deadlock) to occur
are well-known [Crow, SiGa].
From these examples, we see that no resource access-control protocol can eliminate
priority inversion and the anomalous behavior caused by resource contention. A more realistic
goal of such a protocol is to keep the delays thus incurred as short as possible. For this
reason, a criterion we use to measure the performance of a resource access-control protocol is
the blocking time of each job. A good resource access-control protocol should control priority
inversion and prevent deadlock and, thus, keep the blocking time of every job bounded from
above.
FIGURE 8–5 A wait-for graph of the system in Figure 8–2 at time 5.
Section 8.2 Effects of Resource Contention and Resource Access Control
blocked by J3 when it requests and is denied the resource, and the wait-for graph of the system
will have an additional edge from J1 to R. Since there is only one resource, deadlock can never
occur.
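The circular-wait condition can be checked mechanically: the system is deadlocked exactly when its wait-for graph contains a cycle. The sketch below uses an illustrative graph encoding of our own (an edge from a job to a resource means the job waits for the resource; an edge from a resource to a job means a unit of the resource is allocated to that job); it is a minimal illustration, not part of any protocol in the text.

```python
# Deadlock detection by finding a cycle in a wait-for graph,
# represented as a dict mapping each node to its successor list.

def has_cycle(graph):
    """Return True if the directed graph contains a cycle,
    i.e., there is a circular wait (a deadlock)."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on DFS stack / finished
    color = {node: WHITE for node in graph}

    def dfs(node):
        color[node] = GRAY
        for succ in graph.get(node, []):
            if color.get(succ, WHITE) == GRAY:    # back edge closes a cycle
                return True
            if color.get(succ, WHITE) == WHITE and dfs(succ):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in list(graph))

# The wait-for graph of Figure 8-5 at time 5, with the extra edge from
# J1 to R: R is allocated to J3, and J1 waits for R.  With only one
# resource there can be no cycle, hence no deadlock.
g = {"J1": ["R"], "R": ["J3"], "J3": []}
print(has_cycle(g))

# The classic two-job deadlock: one job holds X and requests Y,
# while the other holds Y and requests X.
d = {"J": ["Y"], "Y": ["J'"], "J'": ["X"], "X": ["J"]}
print(has_cycle(d))
```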
In a system of periodic tasks, we say that a periodic task Ti has a critical section
[R, x; y], or that the task requires x units of resource R for y units of time, if every job
in the task requires at most x units of R for at most y units of time. We sometimes denote the
periodic task by the tuple (φi , pi , ei , Di , [R, x; y]). In general, when jobs in the task require
more than one resource and hence have more than one critical section, we put all the critical
sections in the tuple. Hence if J2 in Figure 8–1 is a job in a periodic task of period 100, exe-
cution time 20, zero phase, and relative deadline 100, and if no job in the task has any longer
critical section, we call the task (100, 20; [R2 ; 7[R3 ; 2]][R3 ; 3]). When the parameters of the
critical sections are not relevant, we also denote the task Ti by a tuple (φi , pi , ei , Di ; ci ); the
last element ci is the maximum execution time of the longest critical section of the jobs in the
periodic task. (The execution times of all the critical sections are included in the execution
time ei of the task.) In this simpler way, the task (100, 20; [R2 ; 7[R3 ; 2]][R3 ; 3]) is also called
(100, 20; 7).
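The tuple notation above can be captured directly in code. The following sketch represents a periodic task together with its outermost critical sections and derives the simplified parameter ci; the class and field names are our own choices, not notation fixed by the text.

```python
# A critical section is (resource, units, duration) and may contain
# nested critical sections; a periodic task lists only its outermost
# critical sections, since those determine c_i.
from dataclasses import dataclass, field

@dataclass
class CriticalSection:
    resource: str
    units: int
    duration: int
    nested: list = field(default_factory=list)   # contained critical sections

@dataclass
class PeriodicTask:
    phase: int
    period: int
    execution_time: int
    relative_deadline: int
    critical_sections: list = field(default_factory=list)  # outermost only

    @property
    def c(self):
        """Execution time of the longest outermost critical section,
        the c_i in the simplified tuple (phi, p, e, D; c_i)."""
        return max((cs.duration for cs in self.critical_sections), default=0)

# The task (100, 20; [R2; 7 [R3; 2]][R3; 3]) from the text:
t = PeriodicTask(0, 100, 20, 100, [
    CriticalSection("R2", 1, 7, nested=[CriticalSection("R3", 1, 2)]),
    CriticalSection("R3", 1, 3),
])
print(t.c)    # the longest outermost critical section, giving (100, 20; 7)
```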
We will specify resource requirements of a system by a bipartite graph in which there
is a vertex for every job (or periodic task) and every resource. Each vertex is named by the
name of the job (or task) or resource it represents. The integer next to each resource vertex
Ri gives the number νi of units of the resource. The fact that a job J (or task T ) requires a
resource Ri is represented by an edge from the job (or task) vertex J (or T ) to the resource
vertex Ri . We may label each edge by one or more 2-tuples or numbers, each for a critical
section of the job (or task) which uses the resource. The first element of each 2-tuple gives the
number of units used in the critical section. We usually omit this element when there is only
1 unit of the resource. The second element, which is always given, specifies the duration of
the critical section. Figure 8–6 gives two examples. The simple graph in Figure 8–6(a) gives
FIGURE 8–6 Graphs that specify resource requirements.
the resource requirements of the jobs in the system in Figure 8–2. The labels of the edges are
the durations of the critical sections of the respective jobs. The graph in Figure 8–6(b) uses
a combination of ways to specify resource requirements. The system has four periodic tasks.
The edges from T1 tell us that (each job in) this task requires 2 units of R1 for at most 3
units of time and it requires the resource R2 for at most 1 unit of time. Similarly, T4 requires
R2 for 2 units of time. Rather than specifying the resource requirement of T3 by complicated
labels of edges connecting vertex T3 to resource vertices, we simply provide the information
on critical sections of T3 next to the vertex T3 . There is no edge from T2 , meaning that this
task does not require any resource.
Finally, for brevity, whenever there is no ambiguity, by a critical section we
mean an outermost critical section, and by a critical section of a periodic task we mean a critical
section of each job in the periodic task.
The simplest way to control access of resources is to schedule all critical sections on the
processor nonpreemptively [Mok]. (In other words, when a job requests a resource, it is always
allocated the resource. When a job holds any resource, it executes at a priority higher than
the priorities of all jobs.) This protocol is called the Nonpreemptive Critical Section (NPCS)
protocol. Because no job is ever preempted when it holds any resource, deadlock can never
occur.
Take the jobs in Figure 8–4 for example. Figure 8–7(a) shows the schedule of these
jobs when their critical sections are scheduled nonpreemptively on the processor. According
to this schedule, J1 is forced to wait for J3 when J3 holds the resource. However, as soon as
J3 releases the resource, J1 becomes unblocked and executes to completion. Because J1 is not
delayed by J2 , it completes at time 10, rather than 15 according to the schedule in Figure 8–4.
In general, uncontrolled priority inversion illustrated by Figure 8–4 can never occur. The
reason is that a job Jh can be blocked only if it is released when some lower-priority job is in a
critical section. If it is blocked, once the blocking critical section completes, all resources are
free. No lower-priority job can get the processor and acquire any resource until Jh completes.
Hence, Jh can be blocked only once, and its blocking time due to resource conflict is at most
equal to the maximum execution time of the critical sections of all lower-priority jobs.
Specifically, the blocking time bi(rc) due to resource conflict of a periodic task Ti in a
fixed-priority system of n periodic tasks is equal to

bi(rc) = max{ck : i + 1 ≤ k ≤ n}    (8.1)

when the tasks are indexed in order of nonincreasing priority. In a system where periodic tasks
are scheduled on the EDF basis, a job in task Ti with relative deadline Di can be blocked only
by jobs in tasks with relative deadlines longer than Di. This was stated in Theorem 6.18.
Therefore, the blocking time bi(rc) of Ti is again given by Eq. (8.1) if we index the periodic
tasks according to their relative deadlines so that i < j if Di < Dj.
As an example, suppose that the tasks in the system in Figure 8–6(b) are scheduled
on a fixed-priority basis. The blocking time b1(rc) of the highest priority task T1 is 8, the
execution time of the (outermost) critical section of T3. Similarly, b2(rc) is 8, while b3(rc) is
2, the execution time of the critical section of T4. No task blocks T4 since it has the lowest
priority.
Section 8.3 Nonpreemptive Critical Sections
FIGURE 8–7 Example to illustrate two simple protocols. (a) Controlling priority inversion by disallowing preemption of critical sections. (b) Controlling priority inversion by using priority inheritance.
Suppose that the relative deadlines of the tasks are D1 < D2 < D3 < D4 and the
tasks are scheduled on an EDF basis. Then the blocking times bi(rc) for i = 1, 2, 3, and 4 are
also 8, 8, 2, and 0, respectively.
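Eq. (8.1) is straightforward to compute. The sketch below takes the list of ci values, indexed in nonincreasing order of priority (or nondecreasing relative deadline for EDF), and returns the blocking times; the function name and list encoding are our own.

```python
# NPCS blocking time due to resource conflict, Eq. (8.1):
# b_i(rc) is the maximum c_k over all lower-priority tasks T_k, k > i.

def npcs_blocking_times(c):
    """c[k] is the execution time of the longest outermost critical
    section of the (k+1)-st highest priority task; returns b_i(rc)
    for every task, in the same order."""
    n = len(c)
    return [max(c[i + 1:], default=0) for i in range(n)]

# Tasks T1..T4 of Figure 8-6(b): their longest (outermost) critical
# sections have execution times 3, 0, 8, and 2, respectively.
print(npcs_blocking_times([3, 0, 8, 2]))   # [8, 8, 2, 0], as in the text
```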
The most important advantage of the NPCS protocol is its simplicity, especially when
the numbers of resource units are arbitrary. The protocol does not need any prior knowl-
edge about resource requirements of jobs. It is simple to implement and can be used in both
fixed-priority and dynamic-priority systems. It is clearly a good protocol when all the critical
sections are short and when most of the jobs conflict with each other.
An obvious shortcoming of this protocol is that every job can be blocked by every
lower-priority job that requires some resource even when there is no resource conflict between
them. When the resource requirements of all jobs are known, an improvement is to let a
job holding any resource execute at the highest priority of all jobs requiring the resource.
This is indeed how the ceiling-priority protocol supported by the Real-Time Systems Annex
of Ada95 [Cohe96] works. We will discuss the ceiling-priority protocol and its worst-case
performance in detail in Section 8.6, after examining two other well-known protocols that
aim at improving upon the NPCS protocol.
The priority-inheritance protocol proposed by Sha et al. [ShRL90] is also a simple protocol.
It works with any preemptive, priority-driven scheduling algorithm. Like the NPCS protocol,
it does not require prior knowledge of the resource requirements of jobs. The priority-inheritance
protocol does not prevent deadlock. When there is no deadlock (i.e., when some other method
is used to prevent deadlock), the protocol ensures that no job is ever blocked for an indefinitely
long time because uncontrolled priority inversion cannot occur.
In this and the next three sections, we confine our attention to the special case where
there is only 1 unit of each resource. (This is the reason for calling the version described here
the basic version.) By doing so, we relieve ourselves temporarily of the details that arise due
to multiple resource units, so we can focus on the essence of the protocols and the behavior
of the system under their control. We will return in Section 8.9 to remove this restrictive
assumption.
According to this protocol, a job J is denied a resource only when the resource re-
quested by it is held by another job. (You will see shortly that this is not true for some other
protocols.) At time t when it requests the resource, J has the highest priority among all ready
jobs. The current priority πl (t) of the job Jl directly blocking J is never higher than the pri-
ority π(t) of J . Rule 3 relies on this fact.
The simple example in Figure 8–7(b) illustrates how priority inheritance affects the
way jobs are scheduled and executed. The three jobs in this figure are the same as the ones
in Figure 8–7(a). When J1 requests resource R and becomes blocked by J3 at time 3, job J3
inherits the priority π1 of J1 . When job J2 becomes ready at 5, it cannot preempt J3 because
its priority π2 is lower than the inherited priority π1 of J3 . As a consequence, J3 completes
its critical section as soon as possible. In this way, the protocol ensures that the duration of
priority inversion is never longer than the duration of an outermost critical section each time
a job is blocked.
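The inheritance and restoration of priorities in this example can be sketched for a single-unit resource as follows. The integer convention follows the text (the smaller the integer, the higher the priority, so inheritance takes a minimum); the class and attribute names are our own illustrative scaffolding, not a definitive implementation.

```python
# A minimal sketch of the basic priority-inheritance rule for one
# single-unit resource: when a job's request is denied, the holder
# inherits the requester's current priority if it is higher; on
# release, the holder's priority returns to what it was when the
# resource was granted.

class Job:
    def __init__(self, name, priority):
        self.name = name
        self.assigned = priority     # assigned priority
        self.current = priority      # current priority pi(t)

class Resource:
    def __init__(self):
        self.holder = None
        self._restore = None         # holder's priority at grant time

    def request(self, job):
        if self.holder is None:
            self.holder = job
            self._restore = job.current
            return True              # granted
        # Priority-inheritance rule: the directly blocking job
        # inherits the requester's current priority if higher.
        self.holder.current = min(self.holder.current, job.current)
        return False                 # requester is blocked

    def release(self):
        self.holder.current = self._restore
        self.holder = None

# J1, J2, J3 of Figure 8-7(b): J3 acquires R; J1's request is denied
# and J3 inherits priority 1, so J2 (priority 2) cannot preempt J3.
J1, J2, J3 = Job("J1", 1), Job("J2", 2), Job("J3", 3)
R = Resource()
R.request(J3)                 # granted
R.request(J1)                 # denied; J3 inherits J1's priority
print(J3.current)             # 1 while J3 is in its critical section
R.release()
print(J3.current)             # back to 3, its priority at grant time
```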
Figure 8–8 gives a more complex example. In this example, there are five jobs and two
resources Black and Shaded. The parameters of the jobs and their critical sections are listed
in part (a). As usual, jobs are indexed in decreasing order of their priorities: The priority πi
of Ji is i, and the smaller the integer, the higher the priority. In the schedule in part (b) of this
figure, black boxes show the critical sections when the jobs are holding Black. Shaded boxes
show the critical sections when the jobs are holding Shaded.
1. At time 0, job J5 becomes ready and executes at its assigned priority 5. At time 1, it is
granted the resource Black.
2. At time 2, J4 is released. It preempts J5 and starts to execute.
3. At time 3, J4 requests Shaded. Shaded, being free, is granted to the job. The job contin-
ues to execute.
4. At time 4, J3 is released and preempts J4 . At time 5, J2 is released and preempts J3 .
5. At time 6, J2 executes L(Black) to request Black; L(Black) fails because Black is in use
by J5 . J2 is now directly blocked by J5 . According to rule 3, J5 inherits the priority 2 of
J2 . Because J5 ’s priority is now the highest among all ready jobs, J5 starts to execute.
6. J1 is released at time 7. Having the highest priority 1, it preempts J5 and starts to exe-
cute.
7. At time 8, J1 executes L(Shaded), which fails, and becomes blocked. Since J4 has
Shaded at the time, it directly blocks J1 and, consequently, inherits J1 ’s priority 1. J4
now has the highest priority among the ready jobs J3 , J4 , and J5 . Therefore, it starts to
execute.
8. At time 9, J4 requests the resource Black and becomes directly blocked by J5 . At this
time the current priority of J4 is 1, the priority it has inherited from J1 since time 8.
Therefore, J5 inherits priority 1 and begins to execute.
9. At time 11, J5 releases the resource Black. Its priority returns to 5, which was its priority
when it acquired Black. The job with the highest priority among all unblocked jobs is
J4 . Consequently, J4 enters its inner critical section and proceeds to complete this and
the outer critical section.
10. At time 13, J4 releases Shaded. The job no longer holds any resource; its priority re-
turns to 4, its assigned priority. J1 becomes unblocked, acquires Shaded, and begins to
execute.
FIGURE 8–8 Example illustrating transitive priority inheritance. (a) Parameters of jobs. (b) Schedule under priority inheritance.
11. At time 15, J1 completes. J2 is granted the resource Black and is now the job with the
highest priority. Consequently, it begins to execute.
12. At time 17, J2 completes. Afterwards, jobs J3 , J4 , and J5 execute in turn to completion.
FIGURE 8–9 A worst-case blocking scenario for the priority-inheritance protocol.
To define the protocol, we need two additional terms. The protocol makes use of a
parameter, called priority ceiling, of every resource. The priority ceiling of any resource Ri
is the highest priority of all the jobs that require Ri and is denoted by Π(Ri). For example,
the priority ceiling Π(Black) of the resource Black in the example in Figure 8–8 is 2 because
J2 is the highest priority job among the jobs requiring it. Similarly, Π(Shaded) is 1. Because
of assumption 2, the priority ceilings of all resources are known a priori. We note that if the
resource access-control protocol includes the priority-inheritance rule, then a job can inherit a
priority as high as x during its execution if it requires a resource with priority ceiling x.
At any time t, the current priority ceiling (or simply the ceiling) Π̂(t) of the system
is equal to the highest priority ceiling of the resources that are in use at the time, if some
resources are in use. If all the resources are free at the time, the current ceiling Π̂(t) is equal
to Ω, a nonexistent priority level that is lower than the lowest priority of all jobs. As an
example, we again look at the system in Figure 8–8. In the interval [0, 1) when both resources
in the system are free, the current ceiling of the system is equal to Ω, lower than 5, the priority
of the lowest priority job J5. In (1, 3], Black is held by J5; hence, the current ceiling of the
system is 2. In (3, 13] when Shaded is also in use, the current ceiling of the system is 1, and
so it is in (13, 14].
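The definitions of Π(Ri) and Π̂(t) translate directly into code. In the sketch below, the requirements map, the name OMEGA, and the function names are our own choices; priorities are integers with smaller meaning higher, so the "highest" ceiling is a minimum.

```python
# Priority ceilings.  Pi(R) is the highest priority (smallest integer)
# among the jobs that require R; the current ceiling of the system is
# the highest Pi(R) over the resources in use, or OMEGA, a priority
# lower than every job's, when all resources are free.

OMEGA = float("inf")   # nonexistent lowest priority level

def priority_ceiling(requirements, resource):
    """requirements maps a resource name to the list of priorities
    of the jobs that require it."""
    return min(requirements[resource])

def current_ceiling(requirements, in_use):
    """Ceiling of the system given the set of resources in use."""
    if not in_use:
        return OMEGA
    return min(priority_ceiling(requirements, r) for r in in_use)

# The system of Figure 8-8: Black is required by J2, J4, and J5;
# Shaded is required by J1 and J4.
req = {"Black": [2, 4, 5], "Shaded": [1, 4]}
print(priority_ceiling(req, "Black"))        # 2
print(priority_ceiling(req, "Shaded"))       # 1
print(current_ceiling(req, {"Black"}))       # 2, as in the interval (1, 3]
print(current_ceiling(req, set()))           # OMEGA: all resources free
```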
(ii) If J's priority π(t) is not higher than the ceiling Π̂(t) of the system, R is
allocated to J only if J is the job holding the resource(s) whose priority ceiling
is equal to Π̂(t); otherwise, J's request is denied, and J becomes blocked.
3. Priority-Inheritance Rule: When J becomes blocked, the job Jl which blocks J inherits
the current priority π(t) of J. Jl executes at its inherited priority until the time when
it releases every resource whose priority ceiling is equal to or higher than π(t); at that
time, the priority of Jl returns to its priority πl(t′) at the time t′ when it was granted the
resource(s).
We note that (ii) in rule 2 assumes that only one job holds all the resources with priority
ceiling equal to Π̂(t). Similarly, rule 3 assumes that only one job is responsible for J's request
being denied, because it holds either the requested resource or a resource with priority ceiling
Π̂(t). We will return shortly to show that these assumptions are true.
Figure 8–10 shows the schedule of the system of jobs whose parameters are listed in
Figure 8–8(a) when their accesses to resources are controlled by the priority-ceiling protocol.
FIGURE 8–10 A schedule illustrating the priority-ceiling protocol.
As stated earlier, the priority ceilings of the resources Black and Shaded are 2 and 1,
respectively.
1. In the interval (0, 3], this schedule is the same as the schedule shown in Figure 8–8,
which is produced under the basic priority-inheritance protocol. In particular, the ceiling
of the system at time 1 is Ω. When J5 requests Black, it is allocated the resource
according to (i) in part (b) of rule 2. After Black is allocated, the ceiling of the system
is raised to 2, the priority ceiling of Black.
2. At time 3, J4 requests Shaded. Shaded is free; however, because the ceiling Π̂(3) (= 2)
of the system is higher than the priority of J4, J4's request is denied according to (ii) in
part (b) of rule 2. J4 is blocked, and J5 inherits J4's priority and executes at priority 4.
3. At time 4, J3 preempts J5 , and at time 5, J2 preempts J3 . At time 6, J2 requests Black
and becomes directly blocked by J5 . Consequently, J5 inherits the priority 2; it executes
until J1 becomes ready and preempts it. During all this time, the ceiling of the system
remains at 2.
4. When J1 requests Shaded at time 8, its priority is higher than the ceiling of the system.
Hence, its request is granted according to (i) in part (b) of rule 2, allowing it to enter its
critical section and complete by the time 10. At time 10, J3 and J5 are ready. The latter
has a higher priority (i.e., 2); it resumes.
5. At 11, when J5 releases Black, its priority returns to 5, and the ceiling of the system
drops to Ω. J2 becomes unblocked, is allocated Black [according to (i) in part (b) of
rule 2], and starts to execute.
6. At time 14, after J2 and J3 complete, J4 has the processor and is granted the resource
Shaded because its priority is higher than Ω, the ceiling of the system at the time. It
starts to execute. The ceiling of the system is raised to 1, the priority ceiling of Shaded.
7. At time 16, J4 requests Black, which is free. The priority of J4 is lower than Π̂(16),
but J4 is the job holding the resource (i.e., Shaded) whose priority ceiling is equal to
Π̂(16). Hence, according to (ii) of part (b) of rule 2, J4 is granted Black. It continues to
execute. The rest of the schedule is self-explanatory.
Comparing the schedules in Figures 8–8 and 8–10, we see that when the priority-ceiling
protocol is used, J4 is blocked at time 3 according to (ii) of part (b) of rule 2. A consequence
is that the higher priority jobs J1 , J2 , and J3 all complete earlier at the expense of the lower
priority job J4 . This is the desired effect of the protocol.
FIGURE 8–11 Ways for a job to block another job. (a) Direct blocking. (b) Priority-inheritance blocking. (c) Avoidance blocking.
Shaded boxes mark critical sections associated with resource Shaded, and black boxes mark
critical sections associated with resource Black.
1. When J3 requests Shaded at time 0.5, no resource is allocated at the time. J3 ’s request
is granted. When job J2 becomes ready at time 1, it preempts J3 .
2. At time 2.5, J2 requests Black. Because Shaded is already allocated to J3 and has pri-
ority ceiling 2, the current ceiling of the system is 2. J2 ’s priority is 2. According to (ii)
of part (b) of rule 2, J2 is denied Black, even though the resource is free. Since J2 is
blocked, J3 inherits the priority 2 (rule 3), resumes, and starts to execute.
3. When J3 requests Black at time 3, it is holding the resource whose priority ceiling is the
current ceiling of the system. According to (ii) of part (b) of rule 2, J3 is granted the
resource Black, and it continues to execute.
4. J3 is preempted again at time 3.5 when J1 becomes ready. When J1 requests Dotted
at time 4.5, the resource is free and the priority of J1 is higher than the ceiling of the
system. (i) of part (b) of rule 2 applies, and Dotted is allocated to J1 , allowing the job
to enter into its critical section and proceed to complete at 7.3. The description of the
segment after this time is left to you.
We now use this example to explain intuitively why priority-ceiling protocol prevents
deadlock. To see the rationale behind (ii) of part (b) of rule 2, which leads to the denial of J2 ’s
request for Black, we note that at the time Black is free but Shaded is already allocated to J3 .
The fact that the priority ceiling of Shaded is 2 indicates that some job with priority 2 requires
this resource and this job may very well be J2 , as is indeed the case in this example. If J2 were
allowed to have Black, it might later request Shaded and would be blocked by J3 . J3 would
execute and might later request Black. Denying J2 ’s access to Black is one way to prevent
this deadlock. On the other hand, suppose that the priority ceiling of Shaded were lower than
the priority of J2 . This fact would indicate that J2 does not require Shaded. Moreover, no job
with priority equal to or higher than J2 requires this resource. Consequently, it would not be
possible for the job holding Shaded to later inherit a higher priority than J2, preempt J2, and
request Black. This is the rationale of (i) of part (b) of rule 2. Indeed, this is the reason that J1
was granted Dotted.
Section 8.5 Basic Priority-Ceiling Protocol
Let us now state in general terms what was said above. At any time t, the priority π(t)
of a job J being higher than the current ceiling Π̂(t) of the system means that (1) job J will
not require any of the resources in use at t and (2) jobs with priorities equal to or higher than
J's will not require any of these resources. In other words, the priority ceiling Π̂(t) of the system
tells us the subset of all jobs to which we can safely grant free resources at time t; this subset
contains all the jobs that have priorities higher than Π̂(t). Because of (1), J will not request
any resource that is in use at the time. Because of (2), no job holding any resource at the time
can inherit a higher priority than J, later preempt J, and request resources allocated to J after
t. For these reasons, (i) of part (b) of rule 2 will not lead to any deadlock.
(ii) in part (b) of rule 2 states an exception to the rule that J's request for any resource
is denied if its priority is not higher than the ceiling of the system. The exception applies
when J is the job holding the resource(s) whose priority ceiling(s) is equal to Π̂(t); under this
condition, J is granted the resource it requests at t. (This exception is necessary in order to
ensure that a job can make progress as it acquires resources. Otherwise, the job would block
itself!) J's priority π(t) must be equal to Π̂(t) when it is granted its requested resource under
this condition. Moreover, because of (i) and (ii) in part (b) of rule 2, no other job is holding
resources with priority ceiling equal to Π̂(t). Consequently, (ii) in part (b) of rule 2 cannot
lead to any deadlock.
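The allocation decision of part (b) of rule 2 can be distilled into a small predicate. In the sketch below, the function and parameter names are our own; the caller is assumed to know the requesting job's current priority, the current ceiling of the system, and whether the job itself holds the resource(s) whose ceiling equals the system ceiling.

```python
# The allocation rule (part (b) of rule 2) of the basic
# priority-ceiling protocol.  Smaller integers are higher priorities;
# OMEGA stands for the nonexistent priority level below all jobs.

OMEGA = float("inf")

def grant(job_priority, system_ceiling, holds_ceiling_resource):
    """(i) Grant if the job's priority is higher than the current
    ceiling of the system; (ii) otherwise grant only if the job itself
    holds the resource(s) whose priority ceiling equals the ceiling."""
    if job_priority < system_ceiling:          # (i)
        return True
    return holds_ceiling_resource              # (ii)

# At time 3 in Figure 8-10 the ceiling is 2 (Black is in use):
# J4 (priority 4) is denied Shaded even though Shaded is free.
print(grant(4, 2, False))     # denied
# At time 8, J1 (priority 1) is granted Shaded: 1 is higher than 2.
print(grant(1, 2, False))     # granted
# At time 16, J4 holds Shaded, whose ceiling 1 equals the system
# ceiling, so its request for Black is granted by the exception (ii).
print(grant(4, 1, True))      # granted
```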
The following theorem summarizes this discussion. A formal proof of the theorem can
be found in [ShRL90].
Informal Proof of Theorem 8.2. Rather than proving the theorem formally, we use
an intuitive argument to convince you that the theorem is true. There are two parts to this
argument: (1) when a job becomes blocked, it is blocked by only one job, and (2) a job which
blocks another job cannot be blocked in turn by some other job. Stated another way, (2) says
that there can be no transitive blocking under the priority-ceiling protocol.