path: root/kernel/sched/rt.c
author    Syed Rameez Mustafa <rameezmustafa@codeaurora.org>  2016-08-31 16:54:12 -0700
committer Syed Rameez Mustafa <rameezmustafa@codeaurora.org>  2016-11-16 17:57:56 -0800
commit    30fc7742350f45a6b3667880de0a72c68509ccc5 (patch)
tree      840761c4bedb6168d261de7f30c4a64ac8740bb2 /kernel/sched/rt.c
parent    fd5b530593795c9aa145c7f9aa3eb817d3841af3 (diff)
sched/hmp: Enhance co-location and scheduler boost features
The recent introduction of the schedtune cgroup controller has provided
the scheduler with added flexibility in terms of some of its placement
features. In particular, each cgroup under the schedtune controller can
now specify:

1) Whether it needs co-location along with other cgroups
2) Whether it is eligible for scheduler boost (sched_boost_enabled)
3) Whether the kernel can override the boost eligibility when necessary
   (sched_boost_no_override)

The scheduler now creates a reserved co-location group at boot. This
group is used to co-locate all tasks that form part of any one of the
cgroups that have co-location enabled. This reserved group can neither
be destroyed nor reused for other purposes. Furthermore, cgroups are
only allowed to indicate their co-location preference once at boot;
further updates are disallowed.

Since we are now creating co-location groups for an extended period of
time, there are a few other factors to consider when determining the
preferred cluster for the group. First, we exclude any tasks in the
group that have not been observed to be running for a significant
amount of time. Second, we introduce group up-migrate and down-migrate
tunables to allow migration policies different from those of individual
tasks. Lastly, we break co-location if a single task in a group exceeds
up-migrate but the total load of the group does not exceed group
up-migrate.

In terms of sched_boost, the scheduler now supports multiple types of
boost:

1) FULL_THROTTLE: Force up-migrate tasks belonging to any cgroup that
   has the sched_boost_enabled flag turned on. Little CPUs will only be
   used when big CPUs can no longer accommodate tasks. Also up-migrate
   all RT tasks.
2) CONSERVATIVE: Override the sched_boost_enabled flag for all cgroups
   except those that have the sched_boost_no_override flag set. Force
   up-migrate all tasks belonging to only those cgroups that still
   remain eligible for boost. RT tasks do not get force up-migrated.
3) RESTRAINED: Start frequency aggregation for co-located tasks. This
   type of boost does not force up-migrate any task.

Finally, the boost API removes ref-counting. This means that there can
only be a single entity using boost at any given time. If multiple
entities are managing boost, they are required to be well behaved so
that they don't interfere with one another. Even for a single client,
it is not possible to switch directly from one boost type to another;
boost must first be turned off before switching over to a new type.

Change-Id: I8d224a70cbef162f27078b62b73acaa22670861d
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
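[Editor's note] A minimal sketch of the boost types and the
"switch off before switching" rule described above is given here for
orientation. Apart from FULL_THROTTLE_BOOST and SCHED_BOOST_ON_BIG,
which appear in the diff below, the enumerator values and the
sched_set_boost() name/signature are assumptions for illustration and
are not taken from this patch.

	/* Illustrative only -- not part of this patch. */
	enum {
		NO_BOOST = 0,
		FULL_THROTTLE_BOOST,	/* up-migrate eligible cgroups and RT tasks   */
		CONSERVATIVE_BOOST,	/* honor sched_boost_no_override; RT excluded */
		RESTRAINED_BOOST,	/* frequency aggregation only                  */
	};

	/*
	 * With ref-counting removed, only one client may drive boost at a
	 * time, and a client must disable boost before selecting another
	 * type. sched_set_boost() is assumed here for illustration.
	 */
	static int example_switch_boost(void)
	{
		int ret;

		ret = sched_set_boost(NO_BOOST);	/* turn the old type off */
		if (ret)
			return ret;

		return sched_set_boost(RESTRAINED_BOOST); /* enable the new type */
	}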
Diffstat (limited to 'kernel/sched/rt.c')
-rw-r--r--  kernel/sched/rt.c  12
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index ba4403e910d8..12a04f30ef77 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1677,8 +1677,13 @@ static int find_lowest_rq_hmp(struct task_struct *task)
int prev_cpu = task_cpu(task);
u64 cpu_load, min_load = ULLONG_MAX;
int i;
- int restrict_cluster = sched_boost() ? 0 :
- sysctl_sched_restrict_cluster_spill;
+ int restrict_cluster;
+ int boost_on_big;
+
+ boost_on_big = sched_boost() == FULL_THROTTLE_BOOST &&
+ sched_boost_policy() == SCHED_BOOST_ON_BIG;
+
+ restrict_cluster = sysctl_sched_restrict_cluster_spill;
/* Make sure the mask is initialized first */
if (unlikely(!lowest_mask))
@@ -1697,6 +1702,9 @@ static int find_lowest_rq_hmp(struct task_struct *task)
*/
for_each_sched_cluster(cluster) {
+ if (boost_on_big && cluster->capacity != max_possible_capacity)
+ continue;
+
cpumask_and(&candidate_mask, &cluster->cpus, lowest_mask);
cpumask_andnot(&candidate_mask, &candidate_mask,
cpu_isolated_mask);
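[Editor's note] The gate added in the hunk above could equally be read
as a small predicate: under FULL_THROTTLE boost with a big-cluster
policy, RT placement skips any cluster below max_possible_capacity.
The helper below is only an illustrative restatement of that check;
the name rt_boost_on_big() is made up and the function is not part of
this patch.

	/* Illustrative only -- mirrors the check added to find_lowest_rq_hmp(). */
	static inline bool rt_boost_on_big(void)
	{
		return sched_boost() == FULL_THROTTLE_BOOST &&
		       sched_boost_policy() == SCHED_BOOST_ON_BIG;
	}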