| author | Uladzislau 2 Rezki <uladzislau2.rezki@sonymobile.com> | 2017-02-08 09:43:27 +0100 |
|---|---|---|
| committer | Michael Bestas <mkbestas@lineageos.org> | 2019-12-23 23:43:33 +0200 |
| commit | 2590a10d3fffb0271466a169db6aabc314503b7d (patch) | |
| tree | 7d41f639277b0ae0b87a46abadaecf06a78fc0ef /kernel | |
| parent | 541b9f854f466ae2578a52d1ec1d07eaec2d5d50 (diff) | |
sched: set loop_max after rq lock is taken
While doing a load balance there is a race in setting
the loop_max variable, since nr_running can change in the
meantime, causing an incorrect iteration bound.
As a result we may skip some candidates or check the same
tasks again.
Change-Id: I2f58f8fe96c14bd70674e600bc33caeb8aa960c6
Signed-off-by: Uladzislau 2 Rezki <uladzislau2.rezki@sonymobile.com>
Signed-off-by: Artem Labazov <123321artyom@gmail.com>
Diffstat (limited to 'kernel')
| -rw-r--r-- | kernel/sched/fair.c | 7 |
1 file changed, 6 insertions(+), 1 deletion(-)
```diff
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index df2e6dd2c665..18ce8cb02272 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -10536,7 +10536,6 @@ redo:
 	 * correctly treated as an imbalance.
 	 */
 	env.flags |= LBF_ALL_PINNED;
-	env.loop_max = min(sysctl_sched_nr_migrate, busiest->nr_running);
 
 more_balance:
 	raw_spin_lock_irqsave(&busiest->lock, flags);
@@ -10550,6 +10549,12 @@ more_balance:
 	}
 
 	/*
+	 * Set loop_max when rq's lock is taken to prevent a race.
+	 */
+	env.loop_max = min(sysctl_sched_nr_migrate,
+			   busiest->nr_running);
+
+	/*
 	 * cur_ld_moved - load moved in current iteration
 	 * ld_moved - cumulative load moved across iterations
 	 */
```
