| author | Srivatsa Vaddagiri <vatsa@codeaurora.org> | 2014-11-18 13:19:39 +0530 |
|---|---|---|
| committer | David Keitel <dkeitel@codeaurora.org> | 2016-03-23 20:01:17 -0700 |
| commit | 29a412dffa5cbd6d7d913909cd57d04d9d5cb172 (patch) | |
| tree | 034ba56d1040d86df67375ab1683e058c521b65b | /include/linux |
| parent | d1b240ccc7317c502b8a051a7d94466de482f8a4 (diff) | |
sched: Avoid frequent migration of running task
Power values for cpus can drop quite considerably when they go idle.
As a result, the best choice for running a single task in a cluster
can vary quite rapidly. As the task keeps hopping cpus, other cpus go
idle and start being seen as more favorable targets for running a task,
leading to the task migrating almost every scheduler tick!
Prevent this by keeping track of when a task started running on a cpu
and allowing task migration in the tick path (migration_needed()) for
energy-efficiency reasons only if the task has run sufficiently long
(as determined by the sysctl_sched_min_runtime variable).
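As a rough illustration of that gating idea, here is a minimal standalone sketch (not the actual kernel implementation; the helper names, struct layout, and the example default value are assumptions made only for this sketch): a task records a run_start timestamp when it begins running on a cpu, and a power-motivated migration in the tick path is allowed only once the task has run for at least sysctl_sched_min_runtime nanoseconds.

```c
/*
 * Standalone sketch of the min-runtime gate.  task_started_running()
 * and power_migration_needed() are hypothetical helpers used only to
 * illustrate the idea; the 2 ms default is an assumed example value.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static uint64_t sysctl_sched_min_runtime = 2000000; /* ns, example default */

struct task {
	uint64_t run_start;	/* when the task started running on this cpu */
};

/* Called when the task is placed on a cpu and begins executing. */
static void task_started_running(struct task *p, uint64_t now)
{
	p->run_start = now;
}

/*
 * Tick-path check: permit a migration justified only by energy
 * efficiency once the task has run "long enough" on this cpu.
 */
static bool power_migration_needed(const struct task *p, uint64_t now)
{
	return (now - p->run_start) >= sysctl_sched_min_runtime;
}

int main(void)
{
	struct task p;

	task_started_running(&p, 1000000);			/* starts at t = 1 ms */
	printf("%d\n", power_migration_needed(&p, 1500000));	/* 0: ran only 0.5 ms */
	printf("%d\n", power_migration_needed(&p, 4000000));	/* 1: ran 3 ms >= 2 ms */
	return 0;
}
```

Gating on a minimum runtime dampens the ping-pong behavior described above: a task that has just landed on a cpu cannot immediately be bounced off it again merely because another cpu has since gone idle and started looking cheaper.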
Note that currently the sysctl_sched_min_runtime setting is considered
only in the scheduler_tick()->migration_needed() path and not in the
idle_balance() path. In other words, a task could still be migrated to
another cpu which did an idle_balance(). This limitation should not
affect the high-frequency migrations seen typically (when a single
high-demand task runs on a high-performance cpu).
CRs-Fixed: 756570
Change-Id: I96413b7a81b623193c3bbcec6f3fa9dfec367d99
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
[joonwoop@codeaurora.org: fixed conflict in set_task_cpu() and
__schedule().]
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
Diffstat (limited to 'include/linux')
| -rw-r--r-- | include/linux/sched.h | 1 |
| -rw-r--r-- | include/linux/sched/sysctl.h | 1 |
2 files changed, 2 insertions, 0 deletions
```diff
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 5398a8aea026..0876b298c76e 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1453,6 +1453,7 @@ struct task_struct {
 	 * of this task
 	 */
 	u32 init_load_pct;
+	u64 run_start;
 #endif
 #ifdef CONFIG_CGROUP_SCHED
 	struct task_group *sched_task_group;
diff --git a/include/linux/sched/sysctl.h b/include/linux/sched/sysctl.h
index 25bdacde2d83..0ec9fc8cd361 100644
--- a/include/linux/sched/sysctl.h
+++ b/include/linux/sched/sysctl.h
@@ -48,6 +48,7 @@ extern unsigned int sysctl_sched_cpu_high_irqload;
 extern unsigned int sysctl_sched_freq_account_wait_time;
 extern unsigned int sysctl_sched_migration_fixup;
 extern unsigned int sysctl_sched_heavy_task_pct;
+extern unsigned int sysctl_sched_min_runtime;
 #if defined(CONFIG_SCHED_FREQ_INPUT) || defined(CONFIG_SCHED_HMP)
 extern unsigned int sysctl_sched_init_task_load_pct;
```
