| author | Pavankumar Kondeti <pkondeti@codeaurora.org> | 2017-05-10 15:43:29 +0530 |
|---|---|---|
| committer | Pavankumar Kondeti <pkondeti@codeaurora.org> | 2017-05-31 08:33:48 +0530 |
| commit | 57fd979fc92aac87bc6745883940d32fbdeb4ac4 (patch) | |
| tree | 091e3dd0e0381f295f57e68b2fdc5ff96ee336df /include/linux | |
| parent | f37f0680d728df428d75278597402c53b34366b0 (diff) | |
core_ctl: un-isolate BIG CPUs more aggressively
The current algorithm for bringing up additional BIG CPUs is very
conservative. It works well when only BIG tasks run on the BIG
cluster. When the co-location and scheduler boost features are
active, small/medium tasks also run on the BIG cluster. We don't
want these tasks to downmigrate when BIG CPUs are available but
isolated. The following changes are made to un-isolate CPUs more
aggressively.
(1) Round up big_avg. When big_avg indicates that there were
1.5 tasks on average in the last window, we need 2 BIG CPUs,
not 1.
(2) Track the maximum number of runnable tasks seen on each CPU
during the last window. If any CPU in a cluster had more than 4
runnable tasks in the last window, bring up an additional CPU to
help out, as sketched below.
Change-Id: Id05d9983af290760cec6d93d1bdc45bc5e924cce
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
Diffstat (limited to 'include/linux')
| -rw-r--r-- | include/linux/sched.h | 4 |
1 file changed, 3 insertions(+), 1 deletion(-)
```diff
diff --git a/include/linux/sched.h b/include/linux/sched.h
index c71978453864..138fcf72508a 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -178,7 +178,9 @@
 extern u64 nr_running_integral(unsigned int cpu);
 #endif
 
 extern void sched_update_nr_prod(int cpu, long delta, bool inc);
-extern void sched_get_nr_running_avg(int *avg, int *iowait_avg, int *big_avg);
+extern void sched_get_nr_running_avg(int *avg, int *iowait_avg, int *big_avg,
+				     unsigned int *max_nr,
+				     unsigned int *big_max_nr);
 
 extern void calc_global_load(unsigned long ticks);
```
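A hedged usage sketch of the extended interface. Only the `sched_get_nr_running_avg()` prototype comes from this patch; the caller, and the reading of the two new out-parameters as the peak number of runnable tasks seen on any CPU (`max_nr`) and on any BIG CPU (`big_max_nr`) in the last window, are assumptions based on the commit message:

```c
/* Hypothetical caller; everything except the prototype is illustrative. */
#include <linux/sched.h>

static void sample_window_stats(void)
{
	int avg, iowait_avg, big_avg;
	unsigned int max_nr, big_max_nr;

	sched_get_nr_running_avg(&avg, &iowait_avg, &big_avg,
				 &max_nr, &big_max_nr);

	/* big_max_nr > 4 would now trigger un-isolating one more BIG CPU. */
	pr_debug("avg=%d iowait=%d big_avg=%d max_nr=%u big_max_nr=%u\n",
		 avg, iowait_avg, big_avg, max_nr, big_max_nr);
}
```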
