| author | Srivatsa Vaddagiri <vatsa@codeaurora.org> | 2014-10-13 14:40:01 +0530 |
|---|---|---|
| committer | David Keitel <dkeitel@codeaurora.org> | 2016-03-23 20:00:53 -0700 |
| commit | 6139e8a16f92ab392de53264cf3000af68f6a85a (patch) | |
| tree | c92f16cc96482149d6692207285e4200a388006d /kernel/sched/core.c | |
| parent | 604c41065b982ee94b45233d2da876d873284544 (diff) | |
sched: window-stats: Retain idle thread's mark_start
init_idle() is called on a cpu's idle thread once at bootup and
subsequently every time the cpu is hot-added. Since init_idle() calls
__sched_fork(), we end up blowing away the idle thread's ravg.mark_start
value. As a result, we fail to accurately maintain the cpu's
curr/prev_runnable_sum counters. The example below illustrates such a
failure:
CS = curr_runnable_sum, PS = prev_runnable_sum

t0 -> New window starts for CPU2
      <after some_task_activity> CS = X, PS = Y

t1 -> CPU2 is hot-removed; its idle task starts running on CPU2.
      At this time, cpu2_idle_thread.ravg.mark_start = t1

t0 + W -> One window elapses. CPU2 is still hot-removed. We defer
      swapping CS and PS until some future task event occurs.

t2 -> CPU2 is hot-added. _cpu_up()->idle_thread_get()->init_idle()
      ->__sched_fork() results in cpu2_idle_thread.ravg.mark_start = 0

t3 -> Some task wakes on CPU2. Since mark_start = 0, we don't swap CS
      and PS, which is a BUG!
Fix this by retaining the idle task's original mark_start value across
the init_idle() call. (A standalone simulation of the failure is
sketched after the diff below.)
Change-Id: I4ac9bfe3a58fb5da8a6c7bc378c79d9930d17942
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
Diffstat (limited to 'kernel/sched/core.c')
| -rw-r--r-- | kernel/sched/core.c | 23 |
1 file changed, 23 insertions, 0 deletions
```diff
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 13ae48336c92..2ecc87e12491 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2566,6 +2566,16 @@ static int register_sched_callback(void)
  */
 core_initcall(register_sched_callback);
 
+static u64 orig_mark_start(struct task_struct *p)
+{
+	return p->ravg.mark_start;
+}
+
+static void restore_orig_mark_start(struct task_struct *p, u64 mark_start)
+{
+	p->ravg.mark_start = mark_start;
+}
+
 #else	/* CONFIG_SCHED_HMP */
 
 static inline void fixup_busy_time(struct task_struct *p, int new_cpu) { }
@@ -2590,6 +2600,13 @@ static inline void set_window_start(struct rq *rq) {}
 
 static inline void migrate_sync_cpu(int cpu) {}
 
+static inline u64 orig_mark_start(struct task_struct *p) { return 0; }
+
+static inline void
+restore_orig_mark_start(struct task_struct *p, u64 mark_start)
+{
+}
+
 #endif	/* CONFIG_SCHED_HMP */
 
 #ifdef CONFIG_SMP
@@ -6639,11 +6656,17 @@ void init_idle(struct task_struct *idle, int cpu)
 {
 	struct rq *rq = cpu_rq(cpu);
 	unsigned long flags;
+	u64 mark_start = orig_mark_start(idle);
 
 	raw_spin_lock_irqsave(&idle->pi_lock, flags);
 	raw_spin_lock(&rq->lock);
 
 	__sched_fork(0, idle);
+	/*
+	 * Restore idle thread's original mark_start as we rely on it being
+	 * correct for maintaining per-cpu counters, curr/prev_runnable_sum.
+	 */
+	restore_orig_mark_start(idle, mark_start);
 	idle->state = TASK_RUNNING;
 	idle->se.exec_start = sched_clock();
```
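To make the failure timeline concrete, here is a hypothetical userspace simulation of the deferred CS/PS swap. It is an illustration only, not the actual HMP window-stats code: `struct sim_cpu`, `task_event()`, and the 10 ms window size are invented stand-ins, and the `!mark_start` early return merely mimics the effect described in the commit message.

```c
#include <stdio.h>
#include <stdint.h>

#define WINDOW_SIZE 10000000ULL		/* 10 ms window, in ns */

struct sim_cpu {
	uint64_t window_start;		/* start of the current window */
	uint64_t curr_runnable_sum;	/* CS */
	uint64_t prev_runnable_sum;	/* PS */
};

/*
 * A task event at time 'now' rolls the window forward, swapping CS
 * into PS once per elapsed window. The rollover is keyed off the
 * waking task's mark_start: mark_start == 0 is (mis)read as
 * "accounting has not started", so the swap is silently skipped --
 * this mirrors the bug described above.
 */
static void task_event(struct sim_cpu *cpu, uint64_t mark_start, uint64_t now)
{
	if (!mark_start)	/* idle thread's mark_start was wiped */
		return;

	while (now >= cpu->window_start + WINDOW_SIZE) {
		cpu->prev_runnable_sum = cpu->curr_runnable_sum; /* swap */
		cpu->curr_runnable_sum = 0;
		cpu->window_start += WINDOW_SIZE;
	}
}

int main(void)
{
	struct sim_cpu cpu2 = {
		.window_start = 0,
		.curr_runnable_sum = 4000000,	/* CS = X */
		.prev_runnable_sum = 2000000,	/* PS = Y */
	};
	uint64_t t1 = 6000000;	/* idle thread starts running (hot-remove) */
	uint64_t t3 = 25000000;	/* a task wakes, two-plus windows later */

	/* Buggy: init_idle() wiped mark_start to 0; no swap happens. */
	task_event(&cpu2, 0, t3);
	printf("buggy: CS=%llu PS=%llu (stale)\n",
	       (unsigned long long)cpu2.curr_runnable_sum,
	       (unsigned long long)cpu2.prev_runnable_sum);

	/* Fixed: original mark_start retained; windows roll over. */
	task_event(&cpu2, t1, t3);
	printf("fixed: CS=%llu PS=%llu\n",
	       (unsigned long long)cpu2.curr_runnable_sum,
	       (unsigned long long)cpu2.prev_runnable_sum);
	return 0;
}
```

With mark_start wiped to 0, the stale CS = X / PS = Y values survive past the window boundary, which is exactly the missed swap the patch fixes; with the retained mark_start, the rollover proceeds as expected.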
