author     Pavankumar Kondeti <pkondeti@codeaurora.org>    2018-09-20 15:31:36 +0530
committer  Gerrit - the friendly Code Review server <code-review@localhost>    2019-06-25 20:37:06 -0700
commit     f395d5810f27a22e5ff0230ca7b3ef88857d98d9 (patch)
tree       0657de979d070f93516d97c9069b3b81b7eb5ca9 /kernel/sched/core.c
parent     c94369b4c1fc531a4521dacdb6b08a70ea71fc1b (diff)
sched/walt: Fix the memory leak of idle task load pointers
The memory for the task load pointers is allocated twice for each
idle thread except the boot CPU's. This happens during boot from
idle_threads_init()->idle_init() via the following two paths:

1. idle_init()->fork_idle()->copy_process()->
   sched_fork()->init_new_task_load()

2. idle_init()->fork_idle()->init_idle()->init_new_task_load()

The memory allocation for all other tasks happens through the 1st
path, so use the same path for idle tasks and kill the 2nd one.
Since the idle thread of the boot CPU does not go through
fork_idle(), allocate its memory separately.
Change-Id: I4696a414ffe07d4114b56d326463026019e278f1
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
[schikk@codeaurora.org: resolved merge conflicts]
Signed-off-by: Swetha Chikkaboraiah <schikk@codeaurora.org>
Diffstat (limited to 'kernel/sched/core.c')
-rw-r--r--  kernel/sched/core.c | 11 ++++-------
1 file changed, 4 insertions(+), 7 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index cccb3564410b..543f7113b1d2 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2447,7 +2447,7 @@ int sched_fork(unsigned long clone_flags, struct task_struct *p)
 	unsigned long flags;
 	int cpu;
 
-	init_new_task_load(p, false);
+	init_new_task_load(p);
 
 	cpu = get_cpu();
 	__sched_fork(clone_flags, p);
@@ -5407,19 +5407,15 @@ void init_idle_bootup_task(struct task_struct *idle)
 /**
  * init_idle - set up an idle thread for a given CPU
  * @idle: task in question
  * @cpu: cpu the idle task belongs to
- * @cpu_up: differentiate between initial boot vs hotplug
  *
  * NOTE: this function does not set the idle thread's NEED_RESCHED
  * flag, to make booting more robust.
  */
-void init_idle(struct task_struct *idle, int cpu, bool cpu_up)
+void init_idle(struct task_struct *idle, int cpu)
 {
 	struct rq *rq = cpu_rq(cpu);
 	unsigned long flags;
 
-	if (!cpu_up)
-		init_new_task_load(idle, true);
-
 	raw_spin_lock_irqsave(&idle->pi_lock, flags);
 	raw_spin_lock(&rq->lock);
@@ -8571,7 +8567,8 @@ void __init sched_init(void)
 	 * but because we are the idle thread, we just pick up running again
 	 * when this runqueue becomes "idle".
 	 */
-	init_idle(current, smp_processor_id(), false);
+	init_idle(current, smp_processor_id());
+	init_new_task_load(current);
 
 	calc_load_update = jiffies + LOAD_FREQ;