| author | Brendan Jackman <brendan.jackman@arm.com> | 2017-01-10 11:31:01 +0000 |
|---|---|---|
| committer | Brendan Jackman <brendan.jackman@arm.com> | 2017-01-13 14:22:27 +0000 |
| commit | 1cb392e10307ba3ef7d9a602e59e54ae3b6399ad (patch) | |
| tree | 521c88bcb3d8028a453ead73fc12825231415404 /kernel | |
| parent | 7f18f0963d81a096e741cfc14ec9c2915f633e0a (diff) | |
DEBUG: sched/fair: Fix sched_load_avg_cpu events for task_groups
The sched_load_avg_cpu event is currently traced for every cfs_rq that is
updated. That is not representative of the CPU load; the event should only be
traced when the cfs_rq being updated belongs to the root_task_group.
Change-Id: I345c2f13f6b5718cb4a89beb247f7887ce97ed6b
Signed-off-by: Brendan Jackman <brendan.jackman@arm.com>
Diffstat (limited to 'kernel')
| -rw-r--r-- | kernel/sched/fair.c | 4 |
1 file changed, 3 insertions(+), 1 deletion(-)
```diff
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index aeb9b550470b..7d4151601860 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2726,7 +2726,9 @@ static inline int update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
 	cfs_rq->load_last_update_time_copy = sa->last_update_time;
 #endif
 
-	trace_sched_load_avg_cpu(cpu_of(rq_of(cfs_rq)), cfs_rq);
+	/* Trace CPU load, unless cfs_rq belongs to a non-root task_group */
+	if (cfs_rq == &rq_of(cfs_rq)->cfs)
+		trace_sched_load_avg_cpu(cpu_of(rq_of(cfs_rq)), cfs_rq);
 
 	return decayed || removed;
 }
```
