| author | Joonwoo Park <joonwoop@codeaurora.org> | 2016-04-29 10:58:21 -0700 |
|---|---|---|
| committer | Kyle Yan <kyan@codeaurora.org> | 2016-06-09 15:08:01 -0700 |
| commit | 96818d6f1dd9c5a6e86678861fbdd28ff71e3493 | |
| tree | b9a4c216f81124fde8057958dd8618c50b4bab45 /kernel/sched/cputime.c | |
| parent | 6e8c9ac98d71360e0edc345928e67e47cd7e2bcf | |
sched: fix potential deflated frequency estimation during IRQ handling
The time between the idle task's mark_start and IRQ handler entry is a
period during which the CPU cycle counter is stalled. It is therefore
inappropriate to include that duration in the sample period used for
frequency estimation.
Fix this by replenishing the idle task's CPU cycle counter upon IRQ
entry and using irqtime as the time delta.
Change-Id: I274d5047a50565cfaaa2fb821ece21c8cf4c991d
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
Diffstat (limited to 'kernel/sched/cputime.c')
| -rw-r--r-- | kernel/sched/cputime.c | 2 |
1 file changed, 2 insertions(+), 0 deletions(-)
```diff
diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
index 930d3ce4f34e..647f184f8aec 100644
--- a/kernel/sched/cputime.c
+++ b/kernel/sched/cputime.c
@@ -80,6 +80,8 @@ void irqtime_account_irq(struct task_struct *curr)
 	if (account)
 		sched_account_irqtime(cpu, curr, delta, wallclock);
+	else if (curr != this_cpu_ksoftirqd())
+		sched_account_irqstart(cpu, curr, wallclock);

 	local_irq_restore(flags);
 }
```
