author    Syed Rameez Mustafa <rameezmustafa@codeaurora.org>    2015-07-21 10:30:01 -0700
committer David Keitel <dkeitel@codeaurora.org>    2016-03-23 20:02:14 -0700
commit    17bb9bcd5481e0b9c2bd519fced7d166b0d7315d (patch)
tree      cafda39d7b157ae36b7c14dd22c99aa5411ee932 /kernel
parent    a509c84de711d7dd31e28a9cc04c76a83acf5b3c (diff)
sched: Avoid running idle_balance() consecutively
With the introduction of "6dd123a sched: update ld_moved for active balance
from the load balancer", load_balance() returns a non-zero number of migrated
tasks in anticipation of tasks that will end up on the destination CPU via
active migration. Unfortunately, on kernel versions 3.14 and beyond this
breaks pick_next_task_fair(), which assumes that the load balancer returns a
non-zero count only for tasks already migrated onto the destination CPU. A
non-zero count triggers a rerun of the pick_next_task_fair() logic so that it
can return one of the migrated tasks as the next task. When the load balancer
returns a non-zero count for tasks that will only be moved via active
migration, the rerun of pick_next_task_fair() finds the CPU still has no
runnable tasks. This in turn causes a rerun of idle_balance(), possibly
migrating yet another task. The destination CPU can thus unintentionally end
up pulling several tasks.

The intent of the change above is still needed, though: it indicates
termination of load balance at higher scheduling domains when active
migration occurs. Achieve the same effect by using continue_balancing
instead of faking the number of pulled tasks. This way pick_next_task_fair()
stays happy and load balance still stops at higher scheduling domains.

Change-Id: Id223a3287e5d401e10fbc67316f8551303c7ff96
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
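To make the control flow concrete, the following is a minimal sketch of the
new-idle domain walk after this patch. It is not the verbatim msm kernel
code: the HMP-specific balance_rq selection, locking, and load_balance()'s
full argument list are simplified away, and the function name is illustrative.

/*
 * Hedged sketch of the new-idle balance loop after this change
 * (simplified; balance_rq is collapsed to this_rq).
 */
static int idle_balance_sketch(struct rq *this_rq)
{
	int this_cpu = cpu_of(this_rq);
	int continue_balancing = 1;
	int pulled_task = 0;
	struct sched_domain *sd;

	rcu_read_lock();
	for_each_domain(this_cpu, sd) {
		if (sd->flags & SD_BALANCE_NEWIDLE)
			pulled_task = load_balance(this_cpu, this_rq, sd,
						   CPU_NEWLY_IDLE,
						   &continue_balancing);

		/*
		 * load_balance() now returns 0 when it only kicked an
		 * active migration and clears continue_balancing instead.
		 * The domain walk still terminates here, but the caller no
		 * longer sees a phantom pulled task.
		 */
		if (pulled_task || this_rq->nr_running > 0 ||
		    !continue_balancing)
			break;
	}
	rcu_read_unlock();

	return pulled_task;
}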
Diffstat (limited to 'kernel')
-rw-r--r--    kernel/sched/fair.c    9
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 9d046d58050b..6e7ba8bce1fb 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9553,7 +9553,7 @@ no_move:
 			stop_one_cpu_nowait(cpu_of(busiest),
 				active_load_balance_cpu_stop, busiest,
 				&busiest->active_balance_work);
-			ld_moved++;
+			*continue_balancing = 0;
 		}
 
 		/*
@@ -9761,9 +9761,12 @@ static int idle_balance(struct rq *this_rq)
 
 		/*
 		 * Stop searching for tasks to pull if there are
-		 * now runnable tasks on the balance rq.
+		 * now runnable tasks on the balance rq or if
+		 * continue_balancing has been unset (only possible
+		 * due to active migration).
 		 */
-		if (pulled_task || balance_rq->nr_running > 0)
+		if (pulled_task || balance_rq->nr_running > 0 ||
+		    !continue_balancing)
 			break;
 	}
 	rcu_read_unlock();
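For context, here is a simplified sketch of pick_next_task_fair()'s idle
path on 3.14-era kernels, showing why an inflated return value from
load_balance() led to repeated pulls. The function name is illustrative and
the normal entity-pick logic is elided; this is not verbatim kernel source.

/*
 * Simplified sketch of pick_next_task_fair()'s idle path on 3.14+
 * kernels (illustrative, not verbatim).
 */
static struct task_struct *pick_next_task_fair_sketch(struct rq *rq)
{
	int new_tasks;

again:
	if (!rq->cfs.h_nr_running)
		goto idle;

	/* ... normal CFS entity pick elided ... */

idle:
	new_tasks = idle_balance(rq);

	/*
	 * A positive return is taken to mean "tasks have already arrived
	 * on this runqueue", so the pick is retried. When load_balance()
	 * counted pending active migrations as pulled tasks, the retry
	 * found the runqueue still empty, reran idle_balance(), and could
	 * pull yet another task.
	 */
	if (new_tasks > 0)
		goto again;

	return NULL;
}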