| author | Syed Rameez Mustafa <rameezmustafa@codeaurora.org> | 2015-06-09 16:15:37 -0700 |
|---|---|---|
| committer | David Keitel <dkeitel@codeaurora.org> | 2016-03-23 20:02:22 -0700 |
| commit | 425e3f0cc4e5ea70fbbcfb71662ae62157e0d0fa | |
| tree | 1549a55b17c88f849ebedb3efe1055b13f903d0d | |
| parent | ca42a1bec8eedc327d4986479349cdc16ff7661c | |
sched: remove temporary demand fixups in fixup_busy_time()
On older kernel versions, p->on_rq was a binary value that did not
allow distinguishing between enqueued and migrating tasks. As a
result, fixup_busy_time() had to apply temporary load adjustments to
ensure that update_history() did not make incorrect demand adjustments
for migrating tasks. Since p->on_rq can now be used to distinguish
between migrating and enqueued tasks, there is no need for these
temporary load calculations. Instead, make sure update_history() only
does load adjustments on enqueued tasks.
Change-Id: I1f800ac61a045a66ab44b9219516c39aa08db087
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
Diffstat (limited to 'kernel')
| -rw-r--r-- | kernel/sched/core.c | 21 |
1 file changed, 2 insertions(+), 19 deletions(-)
```diff
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 94883c846424..ebaeda755c91 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1709,7 +1709,8 @@ static void update_history(struct rq *rq, struct task_struct *p,
	 * changing p->on_rq. Since the dequeue decrements hmp stats
	 * avoid decrementing it here again.
	 */
-	if (p->on_rq && (!task_has_dl_policy(p) || !p->dl.dl_throttled))
+	if (task_on_rq_queued(p) && (!task_has_dl_policy(p) ||
+	    !p->dl.dl_throttled))
		p->sched_class->fixup_hmp_sched_stats(rq, p, demand);
	else
		p->ravg.demand = demand;
@@ -2297,27 +2298,9 @@ static void fixup_busy_time(struct task_struct *p, int new_cpu)
	update_task_ravg(dest_rq->curr, dest_rq,
			 TASK_UPDATE, wallclock, 0);

-	/*
-	 * In case of migration of task on runqueue, on_rq =1,
-	 * however its load is removed from its runqueue.
-	 * update_task_ravg() below can update its demand, which
-	 * will require its load on runqueue to be adjusted to
-	 * reflect new demand. Restore load temporarily for such
-	 * task on its runqueue
-	 */
-	if (p->on_rq)
-		p->sched_class->inc_hmp_sched_stats(src_rq, p);
-
	update_task_ravg(p, task_rq(p), TASK_MIGRATE,
			 wallclock, 0);

-	/*
-	 * Remove task's load from rq as its now migrating to
-	 * another cpu.
-	 */
-	if (p->on_rq)
-		p->sched_class->dec_hmp_sched_stats(src_rq, p);
-
	if (p->ravg.curr_window) {
		src_rq->curr_runnable_sum -= p->ravg.curr_window;
		dest_rq->curr_runnable_sum += p->ravg.curr_window;
```
