| field | value | date |
|---|---|---|
| author | Vikram Mulukutla <markivx@codeaurora.org> | 2017-09-21 17:24:24 -0700 |
| committer | Gerrit - the friendly Code Review server <code-review@localhost> | 2017-09-22 04:24:28 -0700 |
| commit | 2bbf3a762a3b4208ae4990e5cfbd9843666449fc | |
| tree | 1cba94648e072e01ee4847acebdec0ad50355988 | |
| parent | 6f777b2385c98a17d69bbeead6edbc7ad7470f72 | |
Revert "ANDROID: sched/tune: Initialize raw_spin_lock in boosted_groups"
This reverts commit c5616f2f874faa20b59b116177b99bf3948586df.
If we re-initialize the per-cpu boostgroup spinlock every time we
add a new boosted cgroup, we can easily wipe out (re-init) a spinlock
struct while another CPU is inside a critical section protected by it.
The attach path should only set up the per-cpu boostgroup data; the
spinlock initialization need only happen once, which we already do in
a postcore_initcall.
For example:
-------- CPU 0 --------    |  -------- CPU 1 --------
cgroupX boost group added  |
schedtune_enqueue_task     |
acquires(bg->lock)         |  cgroupY boost group added
                           |  for_each_cpu()
                           |    raw_spin_lock_init(bg->lock)
releases(bg->lock)         |
BUG (already unlocked)     |
This results in the following BUG from the debug spinlock code:
BUG: spinlock already unlocked on CPU#5, rcuop/6/68
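For illustration, here is a minimal sketch of the pattern this revert
restores: each per-cpu lock is initialized exactly once from an
initcall, and the cgroup-attach path only writes the per-cpu data under
the lock. The names (boost_groups, cpu_boost_groups, boost_group_attach,
boost_max) are hypothetical stand-ins, not the actual schedtune symbols.

```c
#include <linux/cpumask.h>
#include <linux/init.h>
#include <linux/percpu.h>
#include <linux/spinlock.h>

/* Hypothetical per-cpu state; the real schedtune struct differs. */
struct boost_groups {
	raw_spinlock_t lock;
	int boost_max;
};

static DEFINE_PER_CPU(struct boost_groups, cpu_boost_groups);

/* One-time lock init, matching what the postcore_initcall already does. */
static int __init boost_groups_init(void)
{
	int cpu;

	for_each_possible_cpu(cpu)
		raw_spin_lock_init(&per_cpu(cpu_boost_groups, cpu).lock);
	return 0;
}
postcore_initcall(boost_groups_init);

/*
 * Attach path: take the lock and update the data, but never call
 * raw_spin_lock_init() here. Re-initializing the lock would silently
 * unlock it under a concurrent schedtune_enqueue_task() holder,
 * producing the "spinlock already unlocked" splat above.
 */
static void boost_group_attach(int boost)
{
	int cpu;

	for_each_possible_cpu(cpu) {
		struct boost_groups *bg = &per_cpu(cpu_boost_groups, cpu);
		unsigned long flags;

		raw_spin_lock_irqsave(&bg->lock, flags);
		bg->boost_max = boost;
		raw_spin_unlock_irqrestore(&bg->lock, flags);
	}
}
```

The reverted change ran the raw_spin_lock_init() loop from the attach
path instead, which is exactly the write that can land inside another
CPU's lock/unlock window.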
CRs-fixed: 2113062
Change-Id: I1cd780d9ba5801cf99bfe46504b18a88e45f17a8
Signed-off-by: Vikram Mulukutla <markivx@codeaurora.org>
