| author | Brendan Jackman <brendan.jackman@arm.com> | 2017-01-10 11:31:01 +0000 |
|---|---|---|
| committer | Amit Pundir <amit.pundir@linaro.org> | 2017-01-16 15:03:08 +0530 |
| commit | ee620ddd6581cf9779d27677f6f0f11e3f939a8c | |
| tree | 161d9110166690e22d6f05333f99f01cfc8362d2 | |
| parent | 52a2ef75c34af99c4c383dfe357ed1bb84a49bcc | |
DEBUG: sched/fair: Fix sched_load_avg_cpu events for task_groups
The sched_load_avg_cpu event currently traces the load for any cfs_rq that is
updated. This is not representative of the CPU load; we should instead trace
this event only when the cfs_rq being updated belongs to the root_task_group.
Change-Id: I345c2f13f6b5718cb4a89beb247f7887ce97ed6b
Signed-off-by: Brendan Jackman <brendan.jackman@arm.com>
 kernel/sched/fair.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
```diff
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ad1507e420e8..3331f453a17f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2757,7 +2757,9 @@ static inline int update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
 	cfs_rq->load_last_update_time_copy = sa->last_update_time;
 #endif
 
-	trace_sched_load_avg_cpu(cpu_of(rq_of(cfs_rq)), cfs_rq);
+	/* Trace CPU load, unless cfs_rq belongs to a non-root task_group */
+	if (cfs_rq == &rq_of(cfs_rq)->cfs)
+		trace_sched_load_avg_cpu(cpu_of(rq_of(cfs_rq)), cfs_rq);
 
 	return decayed || removed;
 }
```
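The test `cfs_rq == &rq_of(cfs_rq)->cfs` works because each per-CPU runqueue embeds the root task_group's cfs_rq directly as its `cfs` member, so pointer identity is enough to distinguish it from the per-task_group cfs_rqs allocated elsewhere. A minimal userspace sketch of that idiom, using simplified stand-in structs rather than the kernel's real definitions:

```c
#include <stdio.h>
#include <stdbool.h>

/* Simplified stand-ins for the kernel's structures. */
struct cfs_rq {
	unsigned long load_avg;
};

struct rq {
	struct cfs_rq cfs;     /* root task_group's cfs_rq, embedded in the rq */
	struct cfs_rq *tg_rq;  /* a child task_group's cfs_rq, allocated separately */
};

/* Mirrors the patch's test: only the embedded root cfs_rq compares equal. */
static bool is_root_cfs_rq(struct rq *rq, struct cfs_rq *cfs_rq)
{
	return cfs_rq == &rq->cfs;
}

int main(void)
{
	struct cfs_rq child = { .load_avg = 128 };
	struct rq rq = { .cfs = { .load_avg = 512 }, .tg_rq = &child };

	printf("root:  %d\n", is_root_cfs_rq(&rq, &rq.cfs));  /* 1: trace  */
	printf("child: %d\n", is_root_cfs_rq(&rq, rq.tg_rq)); /* 0: skip   */
	return 0;
}
```

Checking pointer identity against the embedded member needs no extra bookkeeping in the update path, which matters since update_cfs_rq_load_avg() runs frequently.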
