| author | Joonwoo Park <joonwoop@codeaurora.org> | 2015-03-30 17:29:16 -0700 |
|---|---|---|
| committer | David Keitel <dkeitel@codeaurora.org> | 2016-03-23 20:01:54 -0700 |
| commit | 1cac3260d4e8fd180a7c30408c5f4ffb7b7ec4d1 | |
| tree | b44ff703f3f50a54d15e39dfc4fa8fb444b33254 /kernel/sched | |
| parent | 81280a6963792fe18b6c38935557b8161e330ac4 | |
sched: fix race conditions when HMP tunables change
When multiple threads race to update HMP scheduler tunables, a tunable that requires a big/small task count fix-up can currently be updated without that fix-up, which can trigger a BUG_ON().

This happens because sched_hmp_proc_update_handler() acquires the rq locks and performs the fix-up only when the tunable being written affects the big/small task counts, even though the function always calls set_hmp_defaults(), which re-calculates all sysctl-derived values at that point. Consequently, a thread updating a tunable that does not affect the big/small task counts can call set_hmp_defaults() and, without lock or fix-up, pick up a fix-up-requiring sysctl value that another thread has just written, updating a big/small task count affecting tunable without the fix-up.
Example of the problem scenario:

```
thread 0                                  thread 1
--------                                  --------
Sets sched_small_task -- needs fix-up.
                                          Sets sched_init_task_load -- no
                                          fix-up needed.
proc_dointvec_minmax() completes, so
sysctl_sched_small_task now holds the
new value.
                                          Calls set_hmp_defaults() without
                                          lock/fix-up. set_hmp_defaults()
                                          still updates sched_small_tasks
                                          with the new
                                          sysctl_sched_small_task value set
                                          by thread 0.
```
Fix this by wrapping the proc update handler in the already existing policy mutex.
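The serialization the patch introduces can be sketched in plain userspace C. This is a minimal sketch, not the kernel code: `proc_update`, `tunable`, and `derived` are illustrative stand-ins for the handler, `*table->data`, and a `set_hmp_defaults()` product, and a pthread mutex plays the role of `policy_mutex`:

```c
#include <assert.h>
#include <errno.h>
#include <pthread.h>

/* Userspace stand-ins -- names are illustrative, not the kernel symbols. */
static pthread_mutex_t policy_mutex = PTHREAD_MUTEX_INITIALIZER;
static int tunable;   /* plays the role of *table->data */
static int derived;   /* plays the role of a value set_hmp_defaults()
                       * derives from the tunable */

/*
 * Mirrors the patched handler: the mutex is taken BEFORE old_val is
 * snapshotted, so no concurrent writer can change the tunable between
 * the snapshot and the validation/rollback below.
 */
static int proc_update(int new_val)
{
	int ret = 0;
	int old_val;

	pthread_mutex_lock(&policy_mutex);
	old_val = tunable;

	tunable = new_val;                   /* what proc_dointvec_minmax() does */

	if (new_val < -20 || new_val > 19) { /* same bounds as min_nice */
		tunable = old_val;           /* roll back the invalid write */
		ret = -EINVAL;
		goto done;
	}

	derived = tunable * 2;               /* stands in for set_hmp_defaults() */
done:
	pthread_mutex_unlock(&policy_mutex);
	return ret;
}
```

The key ordering change matches the diff below: `old_val` is read only after the mutex is held, so a validation failure can safely roll the tunable back to a value no concurrent writer could have modified in the meantime, and every exit path funnels through `done:` to release the lock.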
CRs-fixed: 812443
Change-Id: I7aa4c0efc1ca56e28dc0513480aca3264786d4f7
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
Diffstat (limited to 'kernel/sched')
| -rw-r--r-- | kernel/sched/fair.c | 28 |
1 file changed, 17 insertions(+), 11 deletions(-)
```diff
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c1b7bf841fdf..c7d456ba3960 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3965,37 +3965,41 @@ int sched_hmp_proc_update_handler(struct ctl_table *table, int write,
 		loff_t *ppos)
 {
 	int ret;
+	unsigned int old_val;
 	unsigned int *data = (unsigned int *)table->data;
-	unsigned int old_val = *data;
 	int update_min_nice = 0;
 
+	mutex_lock(&policy_mutex);
+
+	old_val = *data;
+
 	ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
 
 	if (ret || !write || !sched_enable_hmp)
-		return ret;
+		goto done;
 
 	if (write && (old_val == *data))
-		return 0;
+		goto done;
 
 	if (data == &sysctl_sched_min_runtime) {
 		sched_min_runtime = ((u64) sysctl_sched_min_runtime) * 1000;
-		return 0;
+		goto done;
 	}
 
-	if (data == (unsigned int *)&sysctl_sched_upmigrate_min_nice)
-		update_min_nice = 1;
-
-	if (update_min_nice) {
+	if (data == (unsigned int *)&sysctl_sched_upmigrate_min_nice) {
 		if ((*(int *)data) < -20 || (*(int *)data) > 19) {
 			*data = old_val;
-			return -EINVAL;
+			ret = -EINVAL;
+			goto done;
 		}
+		update_min_nice = 1;
 	} else {
 		/* all tunables other than min_nice are in percentage */
 		if (sysctl_sched_downmigrate_pct > sysctl_sched_upmigrate_pct ||
 				*data > 100) {
 			*data = old_val;
-			return -EINVAL;
+			ret = -EINVAL;
+			goto done;
 		}
 	}
@@ -4021,7 +4025,9 @@ int sched_hmp_proc_update_handler(struct ctl_table *table, int write,
 		put_online_cpus();
 	}
 
-	return 0;
+done:
+	mutex_unlock(&policy_mutex);
+	return ret;
 }
 
 /*
```
