author | Tejun Heo <tj@kernel.org> | 2017-06-17 08:10:08 -0400 |
---|---|---|
committer | Georg Veichtlbauer <georg@vware.at> | 2023-07-16 12:47:43 +0200 |
commit | b9b6bc6ea3c06ab2edac96db7a3f9e51c9e459d1 (patch) | |
tree | 30845ae8ee52675068ead00e6d899f67ac97d727 | |
parent | a9314f9d8ad402f17e107f2f4a11636e50301cfa (diff) |
sched: Allow migrating kthreads into online but inactive CPUs
Per-cpu workqueues have been tripping CPU affinity sanity checks while
a CPU is being offlined. A per-cpu kworker ends up running on a CPU
which isn't its target CPU while the CPU is online but inactive.
While the scheduler allows kthreads to wake up on an online but
inactive CPU, it doesn't allow a running kthread to be migrated to
such a CPU. This leads to an odd situation where setting the affinity
of a sleeping kthread and of a running kthread produces different
results.
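For context, the wake-up side already special-cases kthreads; simplified from
select_fallback_rq() in upstream kernels of this era (the exact code in this
tree may differ), the fallback loop is roughly:

    /*
     * Simplified sketch, not the exact source: kthreads may fall back to
     * any online CPU, everything else only to active CPUs.
     * __migrate_task() had no such distinction before this patch.
     */
    for_each_cpu(dest_cpu, tsk_cpus_allowed(p)) {
            if (!(p->flags & PF_KTHREAD) && !cpu_active(dest_cpu))
                    continue;
            if (!cpu_online(dest_cpu))
                    continue;
            goto out;       /* usable CPU found */
    }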
Each mem-reclaim workqueue has one rescuer which guarantees forward
progress and the rescuer needs to bind itself to the CPU which needs
help in making forward progress; however, due to the above issue,
while set_cpus_allowed_ptr() succeeds, the rescuer doesn't end up on
the correct CPU if the CPU is in the process of going offline,
tripping the sanity check and executing the work item on the wrong
CPU.
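The affinity sanity check being tripped is the one in the workqueue code's
process_one_work(); in simplified form (the exact condition may differ in
this tree):

    /*
     * Simplified sketch: a bound (not disassociated) per-cpu pool must only
     * execute work on its own CPU.  A rescuer that asked to move to
     * pool->cpu but was never actually migrated trips this warning.
     */
    WARN_ON_ONCE(!(pool->flags & POOL_DISASSOCIATED) &&
                 raw_smp_processor_id() != pool->cpu);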
This patch updates __migrate_task() so that kthreads can be migrated
into an inactive but online CPU.
Change-Id: I38cc3eb3b2ec5b7034cc72a2bcdd32a549314915
Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Reported-by: Steven Rostedt <rostedt@goodmis.org>
-rw-r--r-- | kernel/sched/core.c | 9 |
1 file changed, 7 insertions, 2 deletions
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index b33433586774..3c64cd08e8e2 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1144,8 +1144,13 @@ static struct rq *__migrate_task(struct rq *rq, struct task_struct *p, int dest_cpu)
 {
 	int src_cpu;
 
-	if (unlikely(!cpu_active(dest_cpu)))
-		return rq;
+	if (p->flags & PF_KTHREAD) {
+		if (unlikely(!cpu_online(dest_cpu)))
+			return rq;
+	} else {
+		if (unlikely(!cpu_active(dest_cpu)))
+			return rq;
+	}
 
 	/* Affinity changed (again). */
 	if (!cpumask_test_cpu(dest_cpu, tsk_cpus_allowed(p)))
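As a usage illustration (bind_helper_to_cpu() is a hypothetical helper, not
part of this patch): with the kthread case relaxed to cpu_online(), a
rescuer-style kthread can now be bound to a CPU that is online but already
inactive, regardless of whether the kthread is sleeping or running at the time.

    /*
     * Hypothetical illustration only: bind a kernel thread to @cpu.
     * After this patch the migration also succeeds while @cpu is
     * online && !active, i.e. in the middle of going offline.
     */
    static void bind_helper_to_cpu(struct task_struct *kthread, int cpu)
    {
            WARN_ON_ONCE(set_cpus_allowed_ptr(kthread, cpumask_of(cpu)));
    }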