| author | Syed Rameez Mustafa <rameezmustafa@codeaurora.org> | 2016-12-07 17:00:27 -0800 |
|---|---|---|
| committer | Syed Rameez Mustafa <rameezmustafa@codeaurora.org> | 2016-12-09 14:30:41 -0800 |
| commit | 6e24ba90a2787bb55fdcaca404adca1c3012b84e (patch) | |
| tree | e7f83d404ad399568184e4c2f2a270bcc2a28156 /kernel/sched/rt.c | |
| parent | 368fecd7df5b203a5ce684a0c77726a5690c1147 (diff) | |
sched: Ensure proper task migration when a CPU is isolated
migrate_tasks() migrates all tasks off a CPU by using pick_next_task().
This works in the hotplug case because we force-migrate every single
task, allowing pick_next_task() to return a new task on every loop
iteration. In the isolation case, however, migration is not guaranteed:
tasks pinned to the isolated CPU stay put, so pick_next_task() keeps
returning the same task over and over again until we terminate the loop
without having migrated all the tasks that were supposed to be
migrated.
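For illustration, a heavily simplified sketch of the loop in question
(the real migrate_tasks() lives in kernel/sched/core.c; locking,
put_prev_task() and the fake_task setup are elided):

```c
/*
 * Heavily simplified sketch of migrate_tasks(); locking and the
 * fake_task bookkeeping of the real function are elided.
 */
static void migrate_tasks_sketch(struct rq *dead_rq)
{
	struct rq *rq = dead_rq;
	struct task_struct *next;
	int dest_cpu;

	for (;;) {
		/* Only the migration thread is left: nothing more to move. */
		if (rq->nr_running == 1)
			break;

		next = pick_next_task(rq, &fake_task);

		/*
		 * Hotplug: every task is force-migrated away, so this call
		 * returns a new task on each iteration. Isolation: a task
		 * pinned to this CPU stays enqueued here, so the same task
		 * is returned over and over.
		 */
		dest_cpu = select_fallback_rq(dead_rq->cpu, next);
		rq = __migrate_task(rq, next, dest_cpu);
	}
}
```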
Fix the above problem by temporarily dequeuing tasks that are pinned
and marking them with TASK_ON_RQ_MIGRATING. This not only allows
pick_next_task() to properly walk the runqueue, but also prevents any
migration of, or change in affinity for, the dequeued tasks. Once we
are done migrating all movable tasks, we re-enqueue the dequeued
tasks.
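A sketch of the shape of that fix, under the assumption that
migrate_tasks() parks pinned tasks on a local list (the core.c side of
the change is not part of this rt.c diff, and the pinned list, the
pinned_entry field and the task_pinned_to() helper are hypothetical
names used only for illustration):

```c
/*
 * Hypothetical sketch of the fix; the holding list, pinned_entry
 * field and task_pinned_to() helper are illustrative names only.
 */
static void migrate_tasks_sketch(struct rq *rq)
{
	struct task_struct *next, *p, *tmp;
	LIST_HEAD(pinned);			/* hypothetical holding list */
	int dest_cpu;

	for (;;) {
		if (rq->nr_running == 1)
			break;

		next = pick_next_task(rq, &fake_task);

		if (task_pinned_to(next, rq->cpu)) {	/* hypothetical helper */
			/*
			 * Mark as migrating *before* dequeueing, so that
			 * affinity changes and load balancing back off;
			 * once off the rq, pick_next_task() moves past it.
			 */
			next->on_rq = TASK_ON_RQ_MIGRATING;
			deactivate_task(rq, next, 0);
			list_add(&next->pinned_entry, &pinned);	/* hypothetical field */
			continue;
		}

		dest_cpu = select_fallback_rq(rq->cpu, next);
		rq = __migrate_task(rq, next, dest_cpu);
	}

	/* Everything movable is gone; put the dequeued tasks back. */
	list_for_each_entry_safe(p, tmp, &pinned, pinned_entry) {
		list_del(&p->pinned_entry);
		activate_task(rq, p, 0);
		p->on_rq = TASK_ON_RQ_QUEUED;
	}
}
```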
While at it, ensure consistent ordering between task de-activation and
setting the TASK_ON_RQ_MIGRATING flag across all scheduling classes.
Change-Id: Id06151a8e34edab49ac76b4bffd50c132f0b792f
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
Diffstat (limited to 'kernel/sched/rt.c')
| -rw-r--r-- | kernel/sched/rt.c | 8 |
1 file changed, 4 insertions, 4 deletions
```diff
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 12a04f30ef77..52edd6b158ed 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1970,11 +1970,11 @@ retry:
 		goto retry;
 	}
 
-	deactivate_task(rq, next_task, 0);
 	next_task->on_rq = TASK_ON_RQ_MIGRATING;
+	deactivate_task(rq, next_task, 0);
 	set_task_cpu(next_task, lowest_rq->cpu);
-	next_task->on_rq = TASK_ON_RQ_QUEUED;
 	activate_task(lowest_rq, next_task, 0);
+	next_task->on_rq = TASK_ON_RQ_QUEUED;
 	ret = 1;
 
 	resched_curr(lowest_rq);
@@ -2226,11 +2226,11 @@ static void pull_rt_task(struct rq *this_rq)
 			resched = true;
 
-			deactivate_task(src_rq, p, 0);
 			p->on_rq = TASK_ON_RQ_MIGRATING;
+			deactivate_task(src_rq, p, 0);
 			set_task_cpu(p, this_cpu);
-			p->on_rq = TASK_ON_RQ_QUEUED;
 			activate_task(this_rq, p, 0);
+			p->on_rq = TASK_ON_RQ_QUEUED;
 
 			/*
 			 * We continue with the search, just in
 			 * case there's an even higher prio task
```
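Both hunks apply the ordering discipline called out in the last
paragraph of the commit message; distilled, the migration sequence for
a task p is (src_rq/dst_rq stand for the source and destination
runqueues):

```c
p->on_rq = TASK_ON_RQ_MIGRATING;	/* mark in-flight first ...          */
deactivate_task(src_rq, p, 0);		/* ... then dequeue from the source  */
set_task_cpu(p, dest_cpu);
activate_task(dst_rq, p, 0);		/* enqueue on the destination ...    */
p->on_rq = TASK_ON_RQ_QUEUED;		/* ... and only then mark queued     */
```

An observer that finds the task off a runqueue therefore also sees
TASK_ON_RQ_MIGRATING, never a stale TASK_ON_RQ_QUEUED while the task is
in flight between runqueues.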
