| field | value | |
|---|---|---|
| author | Steve Muckle <smuckle@codeaurora.org> | 2014-04-02 20:16:39 -0700 |
| committer | David Keitel <dkeitel@codeaurora.org> | 2016-03-23 19:59:25 -0700 |
| commit | 5b7076b4e584c6f8f053338ef8f10133738ca761 (patch) | |
| tree | 91bf4389e69cfa734a1033c1363c1230c9a1964e | /kernel/sched |
| parent | 9931863046ec2ba6c79838e3b8bb0839b1104aa8 (diff) | |
sched: check for power inefficient task placement in tick
Although tasks are routed to the most power-efficient CPUs at
wakeup, a CPU-bound task never revisits that decision point.
Load balancing can help if it is modified to dislodge a single task
from an inefficient CPU. The situation can be further improved by
checking at each tick whether the current task's placement is
optimal.
This sort of checking is already done to ensure proper task
placement in heterogeneous CPU topologies, so checking for
power-efficient task placement fits in naturally.
Change-Id: I71e56d406d314702bc26dee1438c0eeda7699027
Signed-off-by: Steve Muckle <smuckle@codeaurora.org>
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
Diffstat (limited to 'kernel/sched')
| -rw-r--r-- | kernel/sched/fair.c | 36 |
|---|---|---|

1 file changed, 35 insertions(+), 1 deletion(-)
```diff
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index bb6704e0ee09..0f38e31eea4a 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2973,6 +2973,33 @@ static inline int find_new_hmp_ilb(void)
 }
 
 /*
+ * For the current task's CPU, we don't check whether there are
+ * multiple tasks. Just see if running the task on another CPU is
+ * lower power than running only this task on the current CPU. This is
+ * not the most accurate model, but we should be load balanced most of
+ * the time anyway. */
+static int lower_power_cpu_available(struct task_struct *p, int cpu)
+{
+	int i;
+	int lowest_power_cpu = task_cpu(p);
+	int lowest_power = power_cost(p, task_cpu(p));
+
+	/* Is a lower-powered idle CPU available which will fit this task? */
+	for_each_cpu_and(i, tsk_cpus_allowed(p), cpu_online_mask) {
+		if (idle_cpu(i) && task_will_fit(p, i)) {
+			int idle_power_cost = power_cost(p, i);
+			if (idle_power_cost < lowest_power) {
+				lowest_power_cpu = i;
+				lowest_power = idle_power_cost;
+			}
+		}
+	}
+
+	return (lowest_power_cpu != task_cpu(p));
+}
+
+
+/*
  * Check if a task is on the "wrong" cpu (i.e its current cpu is not the ideal
  * cpu as per its demand or priority)
  */
@@ -2985,7 +3012,14 @@ static inline int migration_needed(struct rq *rq, struct task_struct *p)
 	    rq->capacity > min_capacity)
 		return 1;
 
-	return !task_will_fit(p, cpu_of(rq));
+	if (!task_will_fit(p, cpu_of(rq)))
+		return 1;
+
+	if (sysctl_sched_enable_power_aware &&
+	    lower_power_cpu_available(p, cpu_of(rq)))
+		return 1;
+
+	return 0;
 }
 
 /*
```
