Types of Semaphores
Mutual Exclusion Semaphores: a special binary semaphore, optimized specifically for mutual exclusion.
Binary Semaphores: the best way to implement mutual exclusion and synchronization; the fastest and most commonly used.
Counting Semaphores: like a binary semaphore, but keeps a count of how many times it has been given, so it can monitor multiple instances of the same resource.
======== Mutual Exclusion Semaphores ==========================================
A mutex semaphore is a special binary semaphore, designed to address the problems that arise when a plain binary semaphore is used for mutual exclusion.
It mainly adds handling for priority inversion, deletion safety, and recursive access.
1. A mutex semaphore can be used only for mutual exclusion.
2. Only the task that has taken a mutex semaphore may release it.
3. An interrupt service routine (ISR) may not release (semGive()) a mutex semaphore.
4. Mutex semaphores do not support the semFlush() operation.
A mutual exclusion (mutex) semaphore is a special binary semaphore that supports
ownership, recursive access, task deletion safety, and one or more protocols
for avoiding problems inherent to mutual exclusion.
When a task owns the mutex, it is not possible for any other task to lock or unlock that mutex.
Contrast this concept with the binary semaphore, which can be released by any task,
even a task that did not originally acquire the semaphore.
A mutex is a synchronization object that can have only two states:
Not owned.
Owned.
Two operations are defined for mutexes:
Lock: this operation attempts to take ownership of a mutex;
if the mutex is already owned by another thread, the invoking thread is queued.
Unlock: this operation relinquishes ownership of a mutex.
If there are queued threads, a thread is removed from the queue and resumed,
and ownership is implicitly assigned to that thread.
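The ownership rule above can be sketched with Python's threading module standing in for an RTOS mutex (an illustration only, not the mutex API itself): an RLock records its owning thread and refuses a release from any other thread.

```python
import threading

# Sketch of mutex ownership using threading.RLock, which, unlike a plain
# semaphore, tracks its owning thread: only the locker may unlock it.
mutex = threading.RLock()

mutex.acquire()          # this thread now owns the mutex

result = {}

def intruder():
    # A different thread trying to unlock the owned mutex fails.
    try:
        mutex.release()
        result["error"] = None
    except RuntimeError as exc:
        result["error"] = type(exc).__name__

t = threading.Thread(target=intruder)
t.start()
t.join()
print(result["error"])   # RuntimeError

mutex.release()          # the owner can release it normally
```

This is exactly the contrast with a plain binary semaphore drawn above: a semaphore has no owner, so any thread may give it.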
======== Binary Semaphores ====================================================
1. Mutual exclusion: different tasks use the semaphore to access a critical resource mutually exclusively.
This form of mutual exclusion has much finer granularity than either disabling interrupts (interrupt disable)
or preemption locks (preemptive locks).
For mutual exclusion, the initial state is set to available (SEM_FULL),
and semTake() and semGive() are called as a pair, in order, within the same task.
2. Synchronization: a task uses the semaphore to control its own progress,
synchronizing itself with a set of external events.
For synchronization, the initial state is set to unavailable (SEM_EMPTY),
and semTake() and semGive() are called separately, from different tasks.
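These two initial states can be sketched with Python's threading.Semaphore standing in for the VxWorks semaphore (initial count 1 for SEM_FULL, 0 for SEM_EMPTY); the worker/waiter names are invented for the illustration.

```python
import threading

# Mutual exclusion: semaphore created available (count 1, like SEM_FULL).
# The same task takes then gives, bracketing the critical section.
mutex_sem = threading.Semaphore(1)

shared = []

def worker(n):
    mutex_sem.acquire()              # semTake() analogue
    shared.append(n)                 # critical section
    mutex_sem.release()              # semGive() analogue

# Synchronization: semaphore created unavailable (count 0, like SEM_EMPTY).
# One task gives, a different task takes.
sync_sem = threading.Semaphore(0)

def waiter():
    sync_sem.acquire()               # blocks until the event is signalled
    shared.append("event handled")

t1 = threading.Thread(target=worker, args=(1,))
t2 = threading.Thread(target=waiter)
t1.start(); t2.start()
t1.join()
sync_sem.release()                   # signal the external event
t2.join()
print(shared)                        # [1, 'event handled']
```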
Because Wait may cause a thread to block (i.e., when the counter is zero),
it has an effect similar to the lock operation of a mutex lock.
Similarly, a Signal may release a waiting thread,
and is similar to the unlock operation.
In fact, semaphores can be used as mutex locks.
Consider a semaphore S with initial value 1.
Then Wait and Signal correspond to lock and unlock.
A binary semaphore can have a value of either 0 or 1.
When a binary semaphore’s value is 0, the semaphore is considered unavailable (or empty);
when the value is 1, the binary semaphore is considered available (or full).
Note that when a binary semaphore is first created, it can be initialized to
either available or unavailable (1 or 0, respectively).
However, there is an advantage in using semaphores.
When a mutex lock is created, it is always in the "unlock" position.
If a binary semaphore is used and initialized to 0, it is equivalent to having a mutex lock
that is locked initially. Therefore, the use of binary semaphores is a little more flexible.
A binary semaphore is a synchronization object that can have only two states:
Not taken.
Taken.
Two operations are defined:
Take: taking a binary semaphore brings it into the “taken” state;
trying to take a semaphore that is already taken places the invoking thread in a waiting queue.
Release: releasing a binary semaphore brings it into the “not taken” state
if there are no queued threads. If there are queued threads, a thread is removed
from the queue and resumed, and the binary semaphore remains in the “taken” state.
Releasing a semaphore that is already in its “not taken” state has no effect.
======== Counting Semaphores ==================================================
Both counting semaphores and binary semaphores can be used for synchronization and mutual exclusion between tasks.
The difference is that a counting semaphore keeps a count of how many times it has been given, so it can be used to monitor the usage of a resource with multiple instances.
A counting semaphore is a synchronization object that can have an arbitrarily large number of states.
The internal state is defined by a signed integer variable, the counter.
The counter value (N) has a precise meaning:
Negative: there are exactly -N threads queued on the semaphore.
Zero: there are no waiting threads; a wait operation would queue the invoking thread.
Positive: there are no waiting threads; a wait operation would not queue the invoking thread.
Two operations are defined for counting semaphores:
Wait: this operation decreases the semaphore counter;
if the result is negative, the invoking thread is queued.
Signal: this operation increases the semaphore counter;
if the result is non-positive, a waiting thread is removed from the queue and resumed.
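The signed-counter semantics above can be sketched as a small hypothetical class (CountingSemaphore is invented for this illustration; Python's built-in semaphore never exposes a negative counter):

```python
import threading

class CountingSemaphore:
    """Sketch of a counting semaphore whose signed counter follows the
    convention above: a counter of -N means N threads are queued."""

    def __init__(self, initial=0):
        self.counter = initial
        self._cond = threading.Condition()

    def wait(self):
        with self._cond:
            self.counter -= 1
            if self.counter < 0:
                self._cond.wait()    # queue the invoking thread

    def signal(self):
        with self._cond:
            self.counter += 1
            if self.counter <= 0:
                self._cond.notify()  # resume one queued thread

sem = CountingSemaphore(2)
sem.wait()
sem.wait()
print(sem.counter)   # 0: no waiters; one more wait would block this thread
```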
======== Mutexes ==============================================================
Mutexes are binary semaphores that include a priority inheritance mechanism.
Whereas binary semaphores are the better choice for implementing synchronisation
(between tasks or between tasks and an interrupt),
mutexes are the better choice for implementing
simple mutual exclusion (hence 'MUT'ual 'EX'clusion).
When used for mutual exclusion the mutex acts
like a token that is used to guard a resource.
When a task wishes to access the resource it must first obtain ('take') the token.
When it has finished with the resource it must 'give' the token back -
allowing other tasks the opportunity to access the same resource.
Priority inheritance does not cure priority inversion!
It just minimises its effect in some situations.
Hard real time applications should be designed such that priority inversion
does not happen in the first place.
======== Recursive Mutexes ====================================================
A mutex used recursively can be 'taken' repeatedly by the owner.
The mutex doesn't become available again until the owner has called
xSemaphoreGiveRecursive() for each successful xSemaphoreTakeRecursive() request.
For example, if a task successfully 'takes' the same mutex 5 times then the mutex
will not be available to any other task until it has also 'given'
the mutex back exactly five times.
This type of semaphore uses a priority inheritance mechanism, so a task
'taking' a semaphore MUST ALWAYS 'give' the semaphore back
once the semaphore is no longer required.
Mutex type semaphores cannot be used from within interrupt service routines.
Task() ----- xSemaphoreTakeRecursive()
|funcA --- xSemaphoreTakeRecursive(), xSemaphoreGiveRecursive()
|funcB --- xSemaphoreTakeRecursive(), xSemaphoreGiveRecursive()
Task() ----- xSemaphoreGiveRecursive()
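The take/give pattern in the diagram can be sketched with Python's threading.RLock, a recursive mutex analogue (illustration only; RLock has no priority inheritance mechanism):

```python
import threading

# Sketch of recursive take/give: the lock only becomes available to
# other threads after release() has balanced every acquire().
rmutex = threading.RLock()

def take_five():
    for _ in range(5):
        rmutex.acquire()         # xSemaphoreTakeRecursive() analogue
    for _ in range(5):
        rmutex.release()         # xSemaphoreGiveRecursive() analogue

take_five()

# Fully released: another thread can now take it.
grabbed = []
def other():
    grabbed.append(rmutex.acquire(blocking=False))
    if grabbed[-1]:
        rmutex.release()

t = threading.Thread(target=other)
t.start(); t.join()
print(grabbed)   # [True]
```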
======== Binary Semaphores ====================================================
Binary semaphores are used for both mutual exclusion and synchronisation purposes.
Binary semaphores and mutexes are very similar but have some subtle differences:
Mutexes include a priority inheritance mechanism; binary semaphores do not.
This makes binary semaphores the better choice for implementing synchronisation
(between tasks or between tasks and an interrupt),
and mutexes the better choice for implementing simple mutual exclusion.
Think of a binary semaphore as a queue that can only hold one item.
The queue can therefore only be empty or full (hence binary).
Tasks and interrupts using the queue don't care what the queue holds
- they only want to know if the queue is empty or full.
This mechanism can be exploited to synchronise (for example) a task with an interrupt.
Consider the case where a task is used to service a peripheral.
Polling the peripheral would be wasteful of CPU resources,
and prevent other tasks from executing.
It is therefore preferable that the task spends most of its time
in the Blocked state (allowing other tasks to execute) and
only execute itself when there is actually something for it to do.
This is achieved using a binary semaphore by having the task Block
while attempting to 'take' the semaphore.
An interrupt routine is then written for the peripheral that just 'gives'
the semaphore when the peripheral requires servicing.
The task always 'takes' the semaphore (reads from the queue to make the queue empty),
but never 'gives' it.
The interrupt always 'gives' the semaphore (writes to the queue to make it full)
but never takes it.
Task prioritisation can be used to ensure peripherals get serviced in a timely manner
- effectively generating a 'deferred interrupt' scheme.
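A sketch of this deferred-interrupt scheme, with ordinary Python threads standing in for the task and the ISR (an analogy only; real interrupt routines are not threads):

```python
import threading

# The "task" blocks taking a binary semaphore created empty; a simulated
# "interrupt" gives it when the peripheral needs servicing.
service_sem = threading.Semaphore(0)
serviced = []

def handler_task():
    service_sem.acquire()              # block until the 'interrupt' gives
    serviced.append("peripheral serviced")

task = threading.Thread(target=handler_task)
task.start()

# ... later, the simulated interrupt fires:
service_sem.release()                  # 'give' from the ISR, never take
task.join()
print(serviced)                        # ['peripheral serviced']
```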
An alternative approach is to use a queue in place of the semaphore.
When this is done the interrupt routine can capture the data associated with the peripheral event
and send it on a queue to the task. The task unblocks when data becomes available on the queue,
retrieves the data from the queue, then performs any data processing that is required.
This second scheme permits interrupts to remain as short as possible,
with all post processing instead occurring within a task.
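The queue-based alternative can be sketched the same way, using Python's queue.Queue in place of an RTOS queue:

```python
import queue
import threading

# The simulated interrupt posts the event data itself; the task blocks
# on the queue, then does all post-processing outside the 'ISR'.
events = queue.Queue()
processed = []

def handler_task():
    data = events.get()         # blocks until the 'interrupt' sends data
    processed.append(data * 2)  # post-processing happens in the task

task = threading.Thread(target=handler_task)
task.start()
events.put(21)                  # the ISR stays short: enqueue and return
task.join()
print(processed)                # [42]
```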
======== Counting Semaphores ==================================================
Just as binary semaphores can be thought of as queues of length one,
counting semaphores can be thought of as queues of length greater than one.
Again, users of the semaphore are not interested in the data that is stored in the queue
- just whether or not the queue is empty.
Counting semaphores are typically used for two things:
Counting events.
In this usage scenario an event handler will 'give' a semaphore each time an event occurs
(incrementing the semaphore count value), and a handler task will 'take' a semaphore each time
it processes an event (decrementing the semaphore count value).
The count value is therefore the difference between the number of events that have occurred
and the number that have been processed.
In this case it is desirable for the count value to be zero when the semaphore is created.
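A sketch of this event-counting usage with Python's threading.Semaphore standing in for the counting semaphore (the backlog variable is invented for the illustration):

```python
import threading

# Event counting: the event handler gives once per event, the handler
# task takes once per event processed; the count is the backlog.
event_sem = threading.Semaphore(0)   # created with a count of zero

for _ in range(3):                   # three events occur
    event_sem.release()              # 'give' increments the count

backlog = 0
while event_sem.acquire(blocking=False):  # 'take' decrements the count
    backlog += 1
print(backlog)   # 3 events occurred but had not yet been processed
```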
Resource management.
In this usage scenario the count value indicates the number of resources available.
To obtain control of a resource a task must first obtain a semaphore
- decrementing the semaphore count value.
When the count value reaches zero there are no free resources.
When a task finishes with the resource it 'gives' the semaphore back
- incrementing the semaphore count value.
In this case it is desirable for the count value to be equal to the maximum count value
when the semaphore is created.
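A sketch of this resource-management usage, again with Python's threading.Semaphore as the counting semaphore, for a hypothetical pool of two resources:

```python
import threading

# Resource management: the count starts at the number of resources (2);
# each acquire claims one, each release returns one.
pool_sem = threading.Semaphore(2)

a = pool_sem.acquire(blocking=False)   # claim resource 1 -> True
b = pool_sem.acquire(blocking=False)   # claim resource 2 -> True
c = pool_sem.acquire(blocking=False)   # no free resources -> False
print(a, b, c)                         # True True False

pool_sem.release()                     # give one resource back
d = pool_sem.acquire(blocking=False)   # available again -> True
print(d)                               # True
```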
======== Typical Semaphore Use =================================================
Semaphores are useful either for synchronizing execution of multiple tasks
or for coordinating access to a shared resource.
The following examples and general discussions illustrate using different types of semaphores
to address common synchronization design requirements effectively, as listed:
wait-and-signal synchronization
multiple-task wait-and-signal synchronization
credit-tracking synchronization
single shared-resource-access synchronization
multiple shared-resource-access synchronization
recursive shared-resource-access synchronization
Deadlock (or Deadly Embrace)
Deadlock, also called deadly embrace, occurs when two tasks wait indefinitely for resources controlled by each other.
Suppose task T1 has exclusive use of resource R1 and task T2 has exclusive use of resource R2;
if T1 then also requests exclusive use of R2 while T2 requests R1, neither task can make any progress:
a deadlock has occurred. The simplest way to prevent deadlock is to have every task:
acquire all the resources it needs before doing any further work;
request multiple resources in the same order;
release resources in the reverse order.
Most kernels allow a timeout to be specified when requesting a semaphore, which can break a deadlock.
If the wait exceeds a given limit while the semaphore is still unavailable,
the call returns some form of timeout error code, which tells the task that
it received a system error rather than the right to use the resource.
Deadlock generally occurs in large multitasking systems and is less common in embedded systems.
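Two of the remedies above, a fixed acquisition order and a timeout on the take, can be sketched with Python locks (the names r1/r2 are invented; acquire(timeout=...) plays the role of a semTake() timeout):

```python
import threading

r1, r2 = threading.Lock(), threading.Lock()

def use_both():
    # Every task takes r1 before r2: the same global order rules out
    # the circular wait that causes deadlock.
    with r1:
        with r2:
            return "work done"

print(use_both())                 # work done

# Timeout remedy: a resource held elsewhere makes the take time out
# cleanly instead of blocking forever.
r1.acquire()                      # simulates a resource held by another task
ok = r1.acquire(timeout=0.05)     # a semTake() with a timeout, roughly
print(ok)                         # False -> report a timeout error, not success
r1.release()
```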
Priority inversion: HP_task's effective priority drops to LP_task's priority.
HP_task waits on a resource held by LP_task and therefore pends; a medium-priority MP_task then becomes ready and preempts LP_task.
The result is that the lower-priority MP_task runs before the high-priority HP_task. This is priority inversion.
Priority inheritance: LP_task's priority is raised to HP_task's priority.
While HP_task pends on LP_task's resource, LP_task is raised to HP_task's priority;
after LP_task calls semGive(), LP_task's original priority is restored. This prevents tasks with priority
below HP_task's from running while HP_task waits. This is priority inheritance:
LP_task inherits HP_task's priority.
The rule to go by for the scheduler is:
Activate the task that has the highest priority of all tasks in the READY state.
But what happens if the highest-priority task is blocked because it is waiting for a resource owned by a lower-priority task?
According to the above rule, it would wait until the low-priority-task becomes running again and releases the resource.
Up to this point, everything works as expected.
Problems arise when a task with medium priority becomes ready during the execution of the higher-priority task.
While the higher-priority task is suspended waiting for the resource, the task with medium priority will run
until it finishes its work, because it has higher priority than the low-priority task.
In this scenario, a task with medium priority runs before the task with high priority.
This is known as priority inversion.
The low priority task claims the semaphore with OS_Use().
An interrupt activates the high priority task, which also calls OS_Use().
Meanwhile a task with medium priority became ready and runs while the high-priority task is suspended.
After doing some operations, the task with medium priority calls OS_Delay() and is therefore suspended.
The task with lower priority continues now and calls OS_Unuse() to release the resource semaphore.
After the low priority task releases the semaphore, the high priority task is activated and claims the semaphore.
To avoid this kind of situation, the low-priority task that is blocking the highest-priority task gets assigned the highest priority
until it releases the resource, unblocking the task which originally had highest priority. This is known as priority inheritance.
With priority inheritance, the low priority task inherits the priority of the waiting high priority task
as long as it holds the resource semaphore. The lower priority task is activated instead of the medium priority task
when the high priority task tries to claim the semaphore.
mutex — specify the task-waiting order and enable task deletion safety, recursion,
and priority-inversion avoidance protocols, if supported.
binary — specify the initial semaphore state and the task-waiting order.
counting — specify the initial semaphore count and the task-waiting order.
Reposted from: https://www.cnblogs.com/shangdawei/p/3939376.html