From 09417ad30eeee22816471313bf13417c3039b930 Mon Sep 17 00:00:00 2001
From: Vikram Mulukutla
Date: Wed, 10 Jun 2015 17:17:46 -0700
Subject: sched: Fix racy invocation of fixup_busy_time via move_queued_task

set_task_cpu() uses fixup_busy_time() to redistribute a task's load
information between the source and destination runqueues.
fixup_busy_time() assumes that both the source and destination runqueue
locks have been acquired if the task is not being concurrently woken up.
However, this is no longer true: move_queued_task() does not acquire the
destination CPU's runqueue lock, due to optimizations brought in by
recent kernels.

Acquire both the source and destination runqueue locks before invoking
set_task_cpu() in move_queued_task().

Change-Id: I39fadf0508ad42e511db43428e52c8aa8bf9baf6
Signed-off-by: Vikram Mulukutla
[joonwoop@codeaurora.org: fixed conflict in move_queued_task().]
Signed-off-by: Joonwoo Park
---
 kernel/sched/core.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 2baf7e319942..7b3be71b6e2f 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2678,7 +2678,9 @@ static struct rq *move_queued_task(struct rq *rq, struct task_struct *p, int new
 	p->on_rq = TASK_ON_RQ_MIGRATING;
 	dequeue_task(rq, p, 0);
+	double_lock_balance(rq, cpu_rq(new_cpu));
 	set_task_cpu(p, new_cpu);
+	double_unlock_balance(rq, cpu_rq(new_cpu));
 	raw_spin_unlock(&rq->lock);
 	rq = cpu_rq(new_cpu);
--
cgit v1.2.3