author     Vikram Mulukutla <markivx@codeaurora.org>     2017-09-21 17:24:24 -0700
committer  Andres Oportus <andresoportus@google.com>     2017-09-23 01:25:03 +0000
commit     650b6a5c418563f2a6cb4f12b3402a5b4870eea8 (patch)
tree       fd47a0ab34c39d0ab943ef7a32b1a6fb281bf726 /fs/f2fs/debug.c
parent     047200481e7f98e6334d751010c9470efb531b71 (diff)
Revert "ANDROID: sched/tune: Initialize raw_spin_lock in boosted_groups"
This reverts commit c5616f2f874faa20b59b116177b99bf3948586df.
If we re-initialize the per-cpu boostgroup spinlock every time a
new boosted cgroup is added, we can easily wipe out (re-init) a
spinlock struct while another CPU is inside a critical section.
Adding a cgroup should only set up the per-cpu boostgroup data;
the spin_lock initialization need happen only once, and we already
do that in a postcore_initcall.
For example:
-------- CPU 0 --------   | -------- CPU 1 --------
cgroupX boost group added |
schedtune_enqueue_task    |
acquires(bg->lock)        | cgroupY boost group added
                          | for_each_cpu()
                          |   raw_spin_lock_init(bg->lock)
releases(bg->lock)        |
BUG (already unlocked)    |
This results in the following BUG from the debug spinlock code:
BUG: spinlock already unlocked on CPU#5, rcuop/6/68
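
The pattern is easier to see in code. The sketch below is illustrative
only and assumes the shape of the schedtune code rather than quoting
kernel/sched/tune.c: struct boost_groups is reduced to its lock, and
schedtune_add_boost_group() / schedtune_spinlock_init() are hypothetical
names; raw_spin_lock_init(), DEFINE_PER_CPU(), for_each_possible_cpu()
and postcore_initcall() are the real kernel primitives involved.

#include <linux/init.h>
#include <linux/percpu.h>
#include <linux/spinlock.h>

/* Illustrative stand-in for the per-cpu boost group state. */
struct boost_groups {
	raw_spinlock_t lock;
	/* per-cpu boost bookkeeping elided */
};

static DEFINE_PER_CPU(struct boost_groups, cpu_boost_groups);

/*
 * Buggy pattern (what the reverted commit did): this runs every time
 * a boosted cgroup is added, so it can re-initialize a lock that
 * another CPU currently holds in schedtune_enqueue_task(), producing
 * the "already unlocked" BUG above.
 */
static void schedtune_add_boost_group(void)
{
	int cpu;

	for_each_possible_cpu(cpu) {
		struct boost_groups *bg = &per_cpu(cpu_boost_groups, cpu);

		raw_spin_lock_init(&bg->lock);	/* wipes live lock state */
		/* ... set up per-cpu boost group data only ... */
	}
}

/*
 * Correct pattern (the init path the revert leaves in place):
 * initialize each per-cpu lock exactly once, early at boot.
 */
static int __init schedtune_spinlock_init(void)
{
	int cpu;

	for_each_possible_cpu(cpu)
		raw_spin_lock_init(&per_cpu(cpu_boost_groups, cpu).lock);

	return 0;
}
postcore_initcall(schedtune_spinlock_init);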
Change-Id: I3016702780b461a0cd95e26c538cd18df27d6316
Signed-off-by: Vikram Mulukutla <markivx@codeaurora.org>