From c459d156283b9ca32c053ce327ece301e5821db4 Mon Sep 17 00:00:00 2001
From: Joonwoo Park <joonwoop@codeaurora.org>
Date: Wed, 8 Jul 2015 15:42:30 -0700
Subject: sched: avoid unnecessary HMP scheduler stat re-accounting

When a sched_entity's runnable average changes, we decrease and then
increase the HMP scheduler's statistics for that sched_entity to account
for the updated runnable average.  During that window, however, other
CPUs see the updating CPU's load as less than it actually is.  This is
suboptimal and can lead to improper task placement and load balance
decisions.

We can avoid this situation, at least with window-based load tracking,
since the sched_entity's load average, which is for PELT, won't affect
the HMP scheduler's load tracking statistics.  Thus, fix this by
updating the HMP statistics only when the HMP scheduler uses PELT-based
load statistics.

Change-Id: I9eb615c248c79daab5d22cbb4a994f94be6a968d
[joonwoop@codeaurora.org: applied fix into __update_load_avg() instead
 of update_entity_load_avg().]
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
---
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

(limited to 'kernel')

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c564897cbd4e..39f656fcc0ac 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4466,7 +4466,7 @@ __update_load_avg(u64 now, int cpu, struct sched_avg *sa,
 		return 0;
 	sa->last_update_time = now;
 
-	if (!cfs_rq && weight) {
+	if (sched_use_pelt && !cfs_rq && weight) {
 		se = container_of(sa, struct sched_entity, avg);
 		if (entity_is_task(se) && se->on_rq)
 			dec_hmp_sched_stats_fair(rq_of(cfs_rq), task_of(se));
-- 
cgit v1.2.3