| author    | Joonwoo Park <joonwoop@codeaurora.org> | 2017-05-26 11:19:36 -0700 |
|-----------|----------------------------------------|---------------------------|
| committer | Joonwoo Park <joonwoop@codeaurora.org> | 2017-09-01 17:20:51 -0700 |
| commit    | 48f67ea85de468a9b3e47e723e7681cf7771dea6 (patch) | |
| tree      | 7893a1bbc808f4e000eb6812939b7be7ede335cf /kernel/sched/walt.c | |
| parent    | 26b37261ea25c6928c6a69e77a8f7d39ee3267c9 (diff) | |
sched: WALT: fix broken cumulative runnable average accounting
When a running task's ravg.demand changes, update_history() adjusts
rq->cumulative_runnable_avg to reflect the change in CPU load. This
fixup is currently broken: it accumulates the task's new demand
without subtracting the task's old demand.
Fix the fixup logic to subtract the task's old demand.
Change-Id: I61beb32a4850879ccb39b733f5564251e465bfeb
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
Diffstat (limited to 'kernel/sched/walt.c')
-rw-r--r-- | kernel/sched/walt.c | 4 |
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/walt.c b/kernel/sched/walt.c
index 92c3aae8e056..166641ed1f39 100644
--- a/kernel/sched/walt.c
+++ b/kernel/sched/walt.c
@@ -111,8 +111,10 @@ walt_dec_cumulative_runnable_avg(struct rq *rq,
 
 static void
 fixup_cumulative_runnable_avg(struct rq *rq,
-			      struct task_struct *p, s64 task_load_delta)
+			      struct task_struct *p, u64 new_task_load)
 {
+	s64 task_load_delta = (s64)new_task_load - task_load(p);
+
 	rq->cumulative_runnable_avg += task_load_delta;
 	if ((s64)rq->cumulative_runnable_avg < 0)
 		panic("cra less than zero: tld: %lld, task_load(p) = %u\n",
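
For illustration only (not part of the patch): the sketch below shows, in a standalone userspace program, why accumulating the new demand without removing the old one inflates the cumulative runnable average, and how taking the delta against the task's previous demand fixes it. `struct rq` and `struct task` here are simplified stand-ins for the kernel's `struct rq` and `struct task_struct`/`p->ravg.demand`; the fixed variant mirrors the `(s64)new_task_load - task_load(p)` computation introduced by the patch.

```c
#include <assert.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified stand-ins for the kernel structures (illustration only). */
struct rq   { uint64_t cumulative_runnable_avg; };
struct task { uint32_t demand; };	/* stands in for p->ravg.demand */

/* Broken fixup: accumulates the new demand without removing the old one. */
static void fixup_broken(struct rq *rq, struct task *p, uint64_t new_demand)
{
	rq->cumulative_runnable_avg += new_demand;
	p->demand = (uint32_t)new_demand;
}

/* Fixed fixup: apply only the delta between the new and the old demand. */
static void fixup_fixed(struct rq *rq, struct task *p, uint64_t new_demand)
{
	int64_t delta = (int64_t)new_demand - p->demand;

	rq->cumulative_runnable_avg += (uint64_t)delta;
	assert((int64_t)rq->cumulative_runnable_avg >= 0);
	p->demand = (uint32_t)new_demand;
}

int main(void)
{
	/* One runnable task with demand 100 already accounted on the rq. */
	struct rq rq = { .cumulative_runnable_avg = 100 };
	struct task p = { .demand = 100 };

	fixup_fixed(&rq, &p, 150);
	printf("fixed:  cra = %" PRIu64 " (expected 150)\n",
	       rq.cumulative_runnable_avg);

	rq.cumulative_runnable_avg = 100;
	p.demand = 100;
	fixup_broken(&rq, &p, 150);
	printf("broken: cra = %" PRIu64 " (old demand of 100 never removed)\n",
	       rq.cumulative_runnable_avg);
	return 0;
}
```

With a single task whose demand grows from 100 to 150, the fixed accounting ends at 150, while the broken version ends at 250 because the stale demand is never subtracted.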