path: root/kernel/sched/hmp.c

Commit message                                                              Author                          Age
* sched: WALT: increase WALT minimum window size to 20ms    Joonwoo Park    2023-07-16

      Increase the WALT minimum window size to 20ms. 10ms isn't large enough
      to capture the workload's pattern.

      [beykerykt]: Adapt for HMP

      Change-Id: I4d69577fbfeac2bc23db4ff414939cc51ada30d6
      Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>

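A rough illustration of the change: it is a single-constant bump. A minimal
sketch (MIN_SCHED_RAVG_WINDOW is the symbol named in the window-size commit
below; expressing it via NSEC_PER_MSEC is an assumption, not the literal diff):

    #include <stdio.h>

    #define NSEC_PER_MSEC 1000000ULL
    /* Sketch: raise the WALT minimum window from 10 ms to 20 ms. */
    #define MIN_SCHED_RAVG_WINDOW (20 * NSEC_PER_MSEC)  /* was 10 * NSEC_PER_MSEC */

    int main(void)
    {
        printf("min window = %llu ns\n",
               (unsigned long long)MIN_SCHED_RAVG_WINDOW);
        return 0;
    }
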
* sched: cpufreq: Limit governor updates to WALT changes alone    Vikram Mulukutla    2023-07-16

      It's not necessary to keep reporting load to the governor if it doesn't
      change in a window. Limit updates to when we expect load changes - after
      window rollover and when we send updates related to intercluster
      migrations.

      [beykerykt]: Adapt for HMP

      Change-Id: I3232d40f3d54b0b81cfafdcdb99b534df79327bf
      Signed-off-by: Vikram Mulukutla <markivx@codeaurora.org>

* sched: walt: Correct WALT window size initialization    Vikram Mulukutla    2023-07-16

      It is preferable that WALT window rollover occurs just before a tick,
      since the tick is an opportune moment to record a complete window's
      statistics, as well as report those stats to the cpu frequency governor.
      When CONFIG_HZ results in a TICK_NSEC that doesn't evenly divide the
      window size, this requirement may be violated. Account for this by
      reducing the WALT window size to the nearest multiple of TICK_NSEC.

      Commit d368c6faa19b ("sched: walt: fix window misalignment when HZ=300")
      attempted to do this, but WALT isn't using MIN_SCHED_RAVG_WINDOW as the
      window size, so that patch was doing nothing.

      Also, change the type of 'walt_disabled' to bool and warn if an invalid
      window size causes WALT to be disabled.

      [beykerykt]: Adapt for HMP

      Change-Id: Ie3dcfc21a3df4408254ca1165a355bbe391ed5c7
      Signed-off-by: Vikram Mulukutla <markivx@codeaurora.org>

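A minimal sketch of the rounding described above, written as a standalone
program (the variable names are illustrative, not the hmp.c symbols):

    #include <stdint.h>
    #include <stdio.h>

    #define NSEC_PER_SEC 1000000000ULL

    int main(void)
    {
        uint64_t hz = 300;                       /* CONFIG_HZ */
        uint64_t tick_nsec = NSEC_PER_SEC / hz;  /* 3333333 ns (truncated) */
        uint64_t window = 20000000ULL;           /* requested 20 ms WALT window */

        /* Reduce the window to the nearest multiple of TICK_NSEC so that
         * window rollover lands just before a scheduler tick. */
        uint64_t aligned = window - (window % tick_nsec);

        printf("tick = %llu ns, aligned window = %llu ns\n",
               (unsigned long long)tick_nsec, (unsigned long long)aligned);
        return 0;                                /* prints 19999998 ns */
    }
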
* sched: walt: fix window misalignment when HZ=300    Joonwoo Park    2023-07-16

      Due to rounding error the hrtimer tick interval becomes 3333333 ns when
      HZ=300. Consequently the tick time stamp nearest to WALT's default
      window size of 20ms will be 19999998 ns (3333333 * 6).

      [beykerykt]: Adapt for HMP

      Change-Id: I08f9bd2dbecccbb683e4490d06d8b0da703d3ab2
      Suggested-by: Joel Fernandes <joelaf@google.com>
      Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>

* sched: hmp: Reduce number of load reports in a window    Vikram Mulukutla    2019-12-23

      There's no use reporting load more than once in a window via the
      cpufreq_update_util path (unless there's a migration). Set the
      load_reported_window flag in sched_get_cpus_busy to remove these
      redundant updates.

      Change-Id: If43dd5abc7e0e52a8e0f0df3a20ca99ed92f5361
      Signed-off-by: Vikram Mulukutla <markivx@codeaurora.org>

* sched: cpufreq: HMP load reporting changes    Vikram Mulukutla    2019-12-23

      Since HMP uses WALT, ensure that load is reported just once per window,
      with the exception of intercluster migrations. Further, try to report
      load whenever WALT stats are updated.

      Change-Id: I6539f8c916f6f271cf26f03249de7f953d5b12c2
      Signed-off-by: Vikram Mulukutla <markivx@codeaurora.org>

* sched/walt: Fix the memory leak of idle task load pointers    Pavankumar Kondeti    2019-06-25

      The memory for task load pointers is allocated twice for each idle
      thread except for the boot CPU. This happens during boot from
      idle_threads_init()->idle_init() in the following 2 paths.

      1. idle_init()->fork_idle()->copy_process()->
         sched_fork()->init_new_task_load()

      2. idle_init()->fork_idle()->
         init_idle()->init_new_task_load()

      The memory allocation for all tasks happens through the 1st path, so use
      the same for idle tasks and kill the 2nd path. Since the idle thread of
      the boot CPU does not go through fork_idle(), allocate the memory for it
      separately.

      Change-Id: I4696a414ffe07d4114b56d326463026019e278f1
      Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
      [schikk@codeaurora.org: resolved merge conflicts]
      Signed-off-by: Swetha Chikkaboraiah <schikk@codeaurora.org>

* sched: walt: fix out-of-bounds access    John Dias    2018-08-08

      A computation in update_top_tasks() is indexing off the end of a
      top_tasks array. There's code to limit the index in the computation,
      but it's insufficient.

      Bug: 110529282
      Change-Id: Idb5ff5e5800c014394bcb04638844bf1e057a40c
      Signed-off-by: John Dias <joaodias@google.com>
      [pkondeti@codeaurora.org: Backported to 4.4 for HMP scheduler]
      Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>

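The general shape of such a clamp, sketched with an illustrative array size and
a hypothetical helper (the real index computation in update_top_tasks() is more
involved):

    #include <stdio.h>

    #define NUM_LOAD_INDICES 1000   /* illustrative size of a top_tasks array */

    /* Clamp a computed bucket index so it can never step past the end of the
     * array, even if the computation produces NUM_LOAD_INDICES or more. */
    static int clamp_index(int computed)
    {
        if (computed < 0)
            computed = 0;
        if (computed > NUM_LOAD_INDICES - 1)
            computed = NUM_LOAD_INDICES - 1;
        return computed;
    }

    int main(void)
    {
        int top_tasks[NUM_LOAD_INDICES] = {0};

        top_tasks[clamp_index(NUM_LOAD_INDICES + 5)]++;   /* stays in bounds */
        printf("last bucket = %d\n", top_tasks[NUM_LOAD_INDICES - 1]);
        return 0;
    }
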
* sched/walt: Fix use after free in trace_sched_update_task_ravg()    Pavankumar Kondeti    2018-05-02

      commit 4d09122c1868 ("sched: Fix spinlock recursion in sched_exit()")
      moved freeing of the task's current and previous window arrays outside
      the rq->lock. These arrays can be accessed from another CPU in parallel
      and end up using freed memory. For example,

      CPU#0                               CPU#1
      ----------------------------------  -------------------------------
      sched_exit()                        try_to_wake_up() --> The task
                                                   wakes up on CPU#0
      task_rq_lock()                      set_task_cpu()
                                          fixup_busy_time() --> waiting
                                               for CPU#0's rq->lock
      task_rq_unlock()                    fixup_busy_time() --> lock acquired
      free_task_load_ptrs()
      kfree(p->ravg.curr_window_cpu)      update_task_ravg() --> called on
                                               current of CPU#0
                                          trace_sched_update_task_ravg()
                                               --> access freed memory
      p->ravg.curr_window_cpu = NULL;

      To fix this issue, the window array pointers must be set to NULL before
      freeing the memory. Since this happens outside the lock, memory barriers
      are needed on the write and read paths. A much simpler alternative is to
      skip the update_task_ravg() trace point for tasks that are marked as
      dead; the window stats of dead tasks are not updated anyway. While at
      it, skip this trace point for newly created tasks as well, since their
      window stats are also not updated.

      Change-Id: I4d7cb8a3cf7cf84270b09721140d35205643b7ab
      Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
      [spathi@codeaurora.org: moved changes to hmp.c since EAS is not supported]
      Signed-off-by: Srinivasarao P <spathi@codeaurora.org>

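A sketch of the simpler alternative described above, with userspace stand-ins
for the task structure and the tracepoint (the exact predicates used in hmp.c
may differ):

    #include <stdbool.h>
    #include <stdio.h>

    /* Minimal stand-ins; the real code deals with struct task_struct and the
     * sched_update_task_ravg tracepoint. */
    struct task { bool exiting; bool is_new; const char *comm; };

    static void trace_sched_update_task_ravg(const struct task *p)
    {
        printf("trace: %s\n", p->comm);
    }

    /* Only emit the tracepoint for tasks whose window statistics are actually
     * maintained: skip exiting tasks (their window arrays may already be
     * freed) and newly created tasks. */
    static void update_task_ravg_trace(const struct task *p)
    {
        if (p->exiting || p->is_new)
            return;
        trace_sched_update_task_ravg(p);
    }

    int main(void)
    {
        struct task dying  = { true,  false, "dying"  };
        struct task worker = { false, false, "worker" };

        update_task_ravg_trace(&dying);    /* skipped */
        update_task_ravg_trace(&worker);   /* traced  */
        return 0;
    }
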
* Merge "sched: Update tracepoint to include task info"Linux Build Service Account2018-01-09
|\
| * sched: Update tracepoint to include task info    Puja Gupta    2018-01-05

      Update the sched_get_task_cpu_cycles trace to include the pid and name
      of the task, to help with debugging.

      Change-Id: Ic307ebcf0a44c94bf0a2aa1a02b8aeff39010b29
      Signed-off-by: Puja Gupta <pujag@codeaurora.org>
      Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>

* | sched: Fix spinlock recursion in sched_exit()    Pavankumar Kondeti    2017-12-30
|/

      The exiting task's prev_window and curr_window arrays are freed with
      rq->lock acquired. The kfree() may wake up kswapd, and if the kswapd
      wakeup needs the same rq->lock, we hit a deadlock. Fix this issue by
      freeing these arrays after releasing the lock.

      Since the task is already marked as exiting under the lock, delaying the
      freeing of the current and previous window arrays will not have any side
      effect.

      Change-Id: I3282d91ba715765e38177b9d66be32aaed989303
      Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>

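The free-after-unlock pattern, sketched with userspace stand-ins for rq->lock
and kfree() (illustrative only, not the hmp.c code):

    #include <stdlib.h>

    /* Minimal stand-ins for the kernel objects involved. */
    struct ravg { int *curr_window; int *prev_window; };
    struct task { struct ravg ravg; };

    static void raw_spin_lock(int *lock)   { (void)lock; }  /* rq->lock stand-in */
    static void raw_spin_unlock(int *lock) { (void)lock; }

    /* Detach the window arrays while holding rq->lock, but only free them
     * after the lock is released, so a reclaim wakeup triggered by kfree()
     * cannot recurse on the same rq->lock. */
    static void task_exit_free_windows(struct task *p, int *rq_lock)
    {
        int *curr, *prev;

        raw_spin_lock(rq_lock);
        curr = p->ravg.curr_window;
        prev = p->ravg.prev_window;
        p->ravg.curr_window = NULL;
        p->ravg.prev_window = NULL;
        raw_spin_unlock(rq_lock);

        free(curr);   /* kfree() in the kernel */
        free(prev);
    }

    int main(void)
    {
        int rq_lock = 0;
        struct task p = { { malloc(8 * sizeof(int)), malloc(8 * sizeof(int)) } };

        task_exit_free_windows(&p, &rq_lock);
        return 0;
    }
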
* Merge "sched: hmp: Optimize cycle counter reads"Linux Build Service Account2017-06-06
|\
| * sched: hmp: Optimize cycle counter reads    Vikram Mulukutla    2017-05-31

      The cycle counter read is a bit of an expensive operation and requires
      locking across all CPUs in a cluster. Optimize this by returning the
      same value if the delta between two reads is zero (so if two reads are
      done in the same sched context) or if the last read was within a
      specific time period prior to the current read.

      Change-Id: I99da5a704d3652f53c8564ba7532783d3288f227
      Signed-off-by: Vikram Mulukutla <markivx@codeaurora.org>

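A sketch of the caching idea, with made-up field and threshold names (the
zero-delta, same-sched-context case mentioned above would short-circuit the
same way):

    #include <stdint.h>
    #include <stdio.h>

    #define CYCLE_READ_CACHE_NS 100000ULL   /* illustrative "recent enough" window */

    struct cluster_ctr {
        uint64_t last_cycles;    /* last value returned */
        uint64_t last_read_ns;   /* timestamp of that read */
    };

    static uint64_t read_hw_cycle_counter(void) { return 123456789ULL; /* stub */ }

    /* Return the cached cycle count when the previous read is recent enough,
     * avoiding the cross-CPU locking cost of a real counter read. */
    static uint64_t cluster_cycles(struct cluster_ctr *c, uint64_t now_ns)
    {
        if (c->last_read_ns && now_ns - c->last_read_ns < CYCLE_READ_CACHE_NS)
            return c->last_cycles;

        c->last_cycles = read_hw_cycle_counter();  /* expensive, cluster-wide lock */
        c->last_read_ns = now_ns;
        return c->last_cycles;
    }

    int main(void)
    {
        struct cluster_ctr c = {0};

        printf("%llu\n", (unsigned long long)cluster_cycles(&c, 1000));
        printf("%llu\n", (unsigned long long)cluster_cycles(&c, 2000));  /* cached */
        return 0;
    }
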
* | Merge "sched: Fix load tracking bug to avoid adding phantom task demand"Linux Build Service Account2017-06-06
|\ \
| * | sched: Fix load tracking bug to avoid adding phantom task demand    Syed Rameez Mustafa    2017-05-19
| |/

      When update_task_ravg() is called with the TASK_UPDATE event on a task
      that is not on the runqueue, task demand accounting incorrectly treats
      the time delta as execution time. This can happen when a sleeping task
      is moved to/from colocation groups. This phantom execution time can
      cause unpredictable changes to demand that in turn can result in
      incorrect task placement. Fix the issue by adding special handling of
      TASK_UPDATE in task demand accounting. CPU busy time accounting already
      has all the necessary checks.

      Change-Id: Ibb42d83ac353bf2e849055fa3cb5c22e7acd56de
      Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>

* | core_ctl: un-isolate BIG CPUs more aggressively    Pavankumar Kondeti    2017-05-31

      The current algorithm for bringing in additional BIG CPUs is very
      conservative. It works when BIG tasks alone run on the BIG cluster. When
      the co-location and scheduler boost features are activated, small/medium
      tasks also run on the BIG cluster. We don't want these tasks to
      downmigrate when BIG CPUs are available but isolated. The following
      changes are done to un-isolate CPUs more aggressively.

      (1) Round up the big_avg. When big_avg indicates that there are 1.5
          tasks on average in the last window, we need 2 BIG CPUs, not 1
          BIG CPU.

      (2) Track the maximum number of running tasks in the last window on all
          CPUs. If any CPU in a cluster has more than 4 runnable tasks in the
          last window, bring an additional CPU to help out.

      Change-Id: Id05d9983af290760cec6d93d1bdc45bc5e924cce
      Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>

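The round-up in change (1) above amounts to a ceiling division; a small sketch,
assuming big_avg is reported in hundredths of a task (the actual fixed-point
scale used by core_ctl is an assumption):

    #include <stdio.h>

    /* If big_avg says 1.5 tasks ran on average in the last window, ask for
     * 2 BIG CPUs rather than 1. */
    static int big_cpus_needed(int big_avg_hundredths)
    {
        return (big_avg_hundredths + 99) / 100;   /* round up */
    }

    int main(void)
    {
        printf("big_avg 1.50 -> %d CPUs\n", big_cpus_needed(150));  /* 2 */
        printf("big_avg 0.90 -> %d CPU\n",  big_cpus_needed(90));   /* 1 */
        return 0;
    }
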
* | sched: Improve short sleeping tasks detection    Pavankumar Kondeti    2017-05-31
|/

      When a short sleeping task goes for a long sleep, the task's
      avg_sleep_time signal gets boosted. This signal will not go below the
      short_sleep threshold for a long time, even when the task runs in short
      bursts. This results in frequent preemption of other tasks as the short
      burst tasks are placed on busy CPUs.

      The idea behind tracking the avg_sleep_time signal is to detect whether
      a task is short sleeping or not. Limit the sleep time to twice the short
      sleep threshold to make the avg_sleep_time signal more responsive. This
      won't affect regular long sleeping tasks, as their avg_sleep_time would
      be higher than the threshold.

      Change-Id: Ic0838e81ef7f5d83864a58b318553afc42812853
      Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>

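A sketch of the clamping idea (the threshold value and names are illustrative;
the real signal is a per-task average maintained by the scheduler):

    #include <stdint.h>
    #include <stdio.h>

    #define SHORT_SLEEP_NS 50000000ULL   /* illustrative short-sleep threshold */

    /* Before folding a sleep interval into avg_sleep_time, clamp it to twice
     * the short-sleep threshold so a single long sleep cannot keep the
     * average above the threshold long after the task returns to short
     * bursts. */
    static uint64_t clamp_sleep(uint64_t sleep_ns)
    {
        uint64_t limit = 2 * SHORT_SLEEP_NS;

        return sleep_ns > limit ? limit : sleep_ns;
    }

    int main(void)
    {
        printf("%llu\n", (unsigned long long)clamp_sleep(5000000000ULL)); /* clamped */
        return 0;
    }
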
* sched: Print aggregation status in sched_get_busy trace event    Pavankumar Kondeti    2017-02-27

      Aggregation for frequency is not enabled all the time. The aggregated
      load is attached to the most busy CPU only when the group load is above
      a certain threshold. Print the aggregation status in the sched_get_busy
      trace event to make debugging and testing easier.

      Change-Id: Icb916f362ea0fa8b5dc7d23cb384168d86159687
      Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>

* Merge "sched: don't assume higher capacity means higher power in tick migration"Linux Build Service Account2017-02-15
|\
| * sched: don't assume higher capacity means higher power in tick migration    Pavankumar Kondeti    2017-02-15

      When an upmigrate-ineligible task is running on the maximum capacity
      CPU, we check in the tick path whether it can be migrated to a lower
      capacity CPU. Add a power cost based check there to prevent migrating
      the task away from a power efficient CPU.

      Change-Id: I291c62d7dbf169d5123faba5f5246ad44a7a40dd
      Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>

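The added check boils down to comparing power costs before migrating; a sketch
with stand-in lookups (the actual condition in hmp.c may differ):

    #include <stdbool.h>
    #include <stdio.h>

    /* Stand-ins for the scheduler's per-CPU lookups. */
    static unsigned int power_cost(int cpu) { return cpu < 4 ? 100 : 300; }
    static unsigned int capacity(int cpu)   { return cpu < 4 ? 512 : 1024; }

    /* In the tick path, only move an upmigrate-ineligible task off the
     * max-capacity CPU if the target is both lower capacity and cheaper in
     * power; never migrate away from an already power-efficient CPU. */
    static bool should_tick_migrate(int src_cpu, int dst_cpu)
    {
        if (capacity(dst_cpu) >= capacity(src_cpu))
            return false;
        return power_cost(dst_cpu) < power_cost(src_cpu);
    }

    int main(void)
    {
        printf("migrate CPU7 -> CPU2? %d\n", should_tick_migrate(7, 2));
        return 0;
    }
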
* | Merge "sched: remove sched_new_task_windows tunable"Linux Build Service Account2017-02-09
|\ \
| * | sched: remove sched_new_task_windows tunable    Pavankumar Kondeti    2017-02-08
| |/

      The sched_new_task_windows tunable is set to 5 in the scheduler and it
      is not changed from user space. Remove this unused tunable.

      Change-Id: I771e12b44876efe75ce87a90e4e9d69c22168b64
      Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>

* / sched: fix bug in auto adjustment of group upmigrate/downmigrate    Pavankumar Kondeti    2017-02-08
|/

      sched_group_upmigrate tunable can accept values greater than 100%.
      Don't limit it to 100% while doing the auto adjustment.

      Change-Id: I3d1c1e84f2f4dec688235feb1536b9261a3e808b
      Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>

* Merge "sched: fix argument type in update_task_burst()"Linux Build Service Account2017-02-07
|\
| * sched: fix argument type in update_task_burst()    Pavankumar Kondeti    2017-02-02

      update_task_burst() function's runtime argument type should be u64,
      not int. Fix this to avoid potential overflow.

      Change-Id: I33757b7b42f142138c1a099bb8be18c2a3bed331
      Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>

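The overflow risk is concrete: a signed 32-bit int holds nanosecond runtimes
only up to about 2.1 seconds. A small demonstration:

    #include <stdint.h>
    #include <stdio.h>

    /* With an int parameter, a ~3 second runtime expressed in nanoseconds no
     * longer fits; with u64 it does. */
    static void update_task_burst_int(int runtime) { printf("int: %d\n", runtime); }
    static void update_task_burst_u64(uint64_t runtime)
    {
        printf("u64: %llu\n", (unsigned long long)runtime);
    }

    int main(void)
    {
        uint64_t runtime_ns = 3000000000ULL;      /* 3 s of runtime in ns */

        update_task_burst_int((int)runtime_ns);   /* truncated/implementation-defined */
        update_task_burst_u64(runtime_ns);        /* correct */
        return 0;
    }
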
* | Merge "sysctl: define upper limit for sched_freq_reporting_policy"Linux Build Service Account2017-02-07
|\ \
| * | sysctl: define upper limit for sched_freq_reporting_policy    Pavankumar Kondeti    2017-02-03
| |/

      Setting the sched_freq_reporting_policy tunable to an unsupported value
      results in a warning from the scheduler. The previous policy setting is
      also lost. As sched_freq_reporting_policy can no longer be set to an
      incorrect value, remove the WARN_ON_ONCE from the scheduler.

      Change-Id: I58d7e5dfefb7d11d2309bc05a1dd66acdc11b766
      Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>

* | Merge "sched: Remove sched_enable_hmp flag"Linux Build Service Account2017-02-03
|\ \
| * | sched: Remove sched_enable_hmp flag    Olav Haugan    2017-02-02
| |/

      Clean up the code and make it more maintainable by removing dependency
      on the sched_enable_hmp flag. We do not support HMP scheduler without
      recompiling. Enabling the HMP scheduler is done through enabling the
      CONFIG_SCHED_HMP config.

      Change-Id: I246c1b1889f8dcbc8f0a0805077c0ce5d4f083b0
      Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>

* / sched: maintain group busy time counters in runqueue    Pavankumar Kondeti    2017-02-01
|/

      There is no advantage in tracking busy time counters per related thread
      group. We need busy time across all groups for either a CPU or a
      frequency domain. Hence maintain the group busy time counters in the
      runqueue itself. When the CPU window is rolled over, the group busy
      counters are also rolled over. This eliminates the overhead of
      maintaining each group's window_start individually.

      As we are preallocating related thread groups now, this patch saves
      40 * nr_cpu_ids * (nr_grp - 1) bytes of memory.

      Change-Id: Ieaaccea483b377f54ea1761e6939ee23a78a5e9c
      Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>

* sched: Update capacity and load scale factor for all clusters at boot    Syed Rameez Mustafa    2017-01-20

      Cluster capacities should reflect differences in efficiency of different
      clusters even in the absence of cpufreq. Currently capacity is updated
      only when the cpufreq policy notifier is received. Therefore placement
      is suboptimal when cpufreq is turned off. Fix this by updating
      capacities and load scaling factors during cluster detection.

      Change-Id: I47f63c1e374bbfd247a4302525afb37d55334bad
      Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>

* Merge "sched: kill sync_cpu maintenance"Linux Build Service Account2017-01-19
|\
| * sched: kill sync_cpu maintenance    Pavankumar Kondeti    2017-01-19

      We treat the boot CPU as the sync CPU and initialize its window_start to
      sched_ktime_clock(). As windows are synchronized across all CPUs, the
      secondary CPUs' window_start is initialized from the sync CPU's
      window_start. A CPU's window_start is never reset, so this
      synchronization happens only once for a given CPU.

      Given this fact, there is no need to reassign the sync_cpu role to
      another CPU when the boot CPU goes offline. Remove this unnecessary
      maintenance of sync_cpu and use any online CPU's window_start as the
      reference.

      Change-Id: I169a8e80573c6dbcb1edeab0659c07c17102f4c9
      Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>

* | Merge "sched: hmp: Remove the global sysctl_sched_enable_colocation tunable"Linux Build Service Account2017-01-18
|\ \
| |/
|/|
| * sched: hmp: Remove the global sysctl_sched_enable_colocation tunable    Vikram Mulukutla    2017-01-18

      Colocation in HMP includes a tunable that turns on or off the feature
      globally across all colocation groups. Supporting this tunable correctly
      would result in complexity that would outweigh any foreseeable benefits.
      For example, disabling the feature globally would involve deleting all
      colocation groups one by one while ensuring no placement decisions are
      made during the process.

      Remove the tunable. Adding or removing a task from a colocation group is
      still possible and so we're not losing functionality.

      Change-Id: I4cb8bcdbee98d3bdd168baacbac345eca9ea8879
      Signed-off-by: Vikram Mulukutla <markivx@codeaurora.org>

| * sched: hmp: Ensure that best_cluster() never returns NULL    Vikram Mulukutla    2017-01-18

      There are certain conditions under which group_will_fit() may return 0
      for all clusters in the system, especially under changing thermal
      conditions. This may result in crashes such as this one:

      CPU 0                              | CPU 1
      ====================================================================
      select_best_cpu()                  |
       -> env.rtg = rtgA                 |
          rtgA.pref_cluster=C_big        |
                                         | set_pref_cluster() for rtgA
                                         |  -> best_cluster()
                                         |     C_little doesn't fit
                                         |
                                         | IRQ: thermal mitigation
                                         | C_big capacity now less
                                         | than C_little capacity
                                         |
                                         | -> best_cluster() continues
                                         |    C_big doesn't fit
                                         |    set_pref_cluster() sets
                                         |    rtgA.pref_cluster = NULL
                                         |
      select_least_power_cluster()       |
       -> cluster_first_cpu()            |
          -> BUG()                       |

      To add lock protection around accesses to the group's preferred cluster
      would be expensive and defeat the point of using RCU to protect access
      to the related_thread_group structure. Therefore, ensure that
      best_cluster() can never return NULL. In the worst case, we'll select
      the wrong cluster for a related_thread_group's demand, but this should
      be fixed in the next tick or wakeup etc. Locking would have still led to
      the momentary wrong decision with the additional expense!

      Also, don't set the preferred cluster to NULL when colocation is
      disabled.

      Change-Id: Id3f514b149add9b3ed33d104fa6a9bd57bec27e2
      Signed-off-by: Vikram Mulukutla <markivx@codeaurora.org>

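A sketch of the guarantee, with the cluster walk and the fit test reduced to
stand-ins (illustrative only):

    #include <stddef.h>
    #include <stdio.h>

    struct cluster { int id; unsigned long capacity; };

    static int group_will_fit(const struct cluster *c, unsigned long demand)
    {
        return demand <= c->capacity;
    }

    /* Pick the first cluster that fits the group's demand; if nothing fits
     * (e.g. capacities just changed under thermal mitigation), fall back to
     * the last cluster examined instead of returning NULL.  The choice is
     * corrected on the next tick or wakeup. */
    static const struct cluster *best_cluster(const struct cluster *clusters,
                                              size_t nr, unsigned long demand)
    {
        const struct cluster *fallback = &clusters[0];
        size_t i;

        for (i = 0; i < nr; i++) {
            if (group_will_fit(&clusters[i], demand))
                return &clusters[i];
            fallback = &clusters[i];
        }
        return fallback;   /* never NULL */
    }

    int main(void)
    {
        struct cluster cl[2] = { { 0, 400 }, { 1, 900 } };

        printf("picked cluster %d\n", best_cluster(cl, 2, 1200)->id);  /* falls back */
        return 0;
    }
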
* | Merge "sched: Initialize variables"Linux Build Service Account2017-01-16
|\ \
| * | sched: Initialize variables    Olav Haugan    2017-01-13
| |/

      Initialize variable at definition to avoid compiler warning when
      compiling with CONFIG_OPTIMIZE_FOR_SIZE=n.

      Change-Id: Ibd201877b2274c70ced9d7240d0e527bc77402f3
      Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>

* | Merge "sched: fix a bug in handling top task table rollover"Linux Build Service Account2017-01-14
|\ \
| * | sched: fix a bug in handling top task table rollover    Pavankumar Kondeti    2017-01-07
| |/

      When frequency aggregation is enabled, there is a possibility of rolling
      over the top task table multiple times in a single window. For example

      - utra() is called with PUT_PREV_TASK for task 'A' which does not belong
        to any related thread grp. Let's say window rollover happens. rq
        counters and the top task table rollover is done.

      - utra() is called with PICK_NEXT_TASK/TASK_WAKE for task 'B' which
        belongs to a related thread grp. Let's say this happens before the
        grp's cpu_time->window_start is in sync with rq->window_start. In this
        case, the grp's cpu_time counters are rolled over and the top task
        table is also rolled over again.

      Roll over the top task table in the context of the current running task
      to fix this.

      Change-Id: Iea3075e0ea460a9279a01ba42725890c46edd713
      Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>

* / sched: Convert the global wake_up_idle flag to a per cluster flag    Syed Rameez Mustafa    2017-01-10
|/

      Since clusters can vary significantly in their power and performance
      characteristics, there may be a need to have different CPU selection
      policies based on which cluster a task is being placed on. For example,
      the placement policy can be more aggressive in using idle CPUs on
      clusters that are power efficient and less aggressive on clusters that
      are geared towards performance. Add support for a per cluster
      wake_up_idle flag to allow greater flexibility in placement policies.

      Change-Id: I18cd3d907cd965db03a13f4655870dc10c07acfe
      Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>

* sched: fix stale predicted load in trace_sched_get_busy()    Pavankumar Kondeti    2017-01-07

      When an early detection notification is pending, we skip calculating the
      predicted load. Initialize it to 0 so that a stale value does not get
      printed in trace_sched_get_busy().

      Change-Id: I36287c0081f6c12191235104666172b7cae2a583
      Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>

* Merge "sched: Delete heavy task heuristics in prediction code"Linux Build Service Account2017-01-05
|\
| * sched: Delete heavy task heuristics in prediction code    Rohit Gupta    2017-01-04

      Heavy task prediction code needs further tuning to avoid any negative
      power impact. Delete the code for now instead of adding tunables to
      avoid inefficiencies in the scheduler path.

      Change-Id: I71e3b37a5c99e24bc5be93cc825d7e171e8ff7ce
      Signed-off-by: Rohit Gupta <rohgup@codeaurora.org>

* | Merge "sched: Fix new task accounting bug in transfer_busy_time()"Linux Build Service Account2017-01-05
|\ \
| |/
|/|
| * sched: Fix new task accounting bug in transfer_busy_time()    Syed Rameez Mustafa    2017-01-03

      In transfer_busy_time(), the new_task flag is set based on the active
      window count prior to the call to update_task_ravg().
      update_task_ravg(), however, can then increment the active window count
      and consequently the new_task flag above becomes stale. This in turn
      leads to inaccurate accounting whereby update_task_ravg() does
      accounting based on the fact that the task is not new whereas
      transfer_busy_time() then continues to do further accounting assuming
      that the task is new. The accounting discrepancies are sometimes caught
      by some of the scheduler BUGs.

      Fix the described problem by moving the is_new_task() check after the
      call to update_task_ravg(). Also add two missing BUGs that would catch
      the problem sooner rather than later.

      Change-Id: I8dc4822e97cc03ebf2ca1ee2de95eb4e5851f459
      Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>

* | sched: Fix deadlock between cpu hotplug and upmigrate change    Pavankumar Kondeti    2016-12-30
|/

      There is a circular dependency between cpu_hotplug.lock and HMP
      scheduler policy mutex. Prevent this by enforcing the same lock order.
      Here CPU0 and CPU4 are governed by different cpufreq policies.

      ----------------                     --------------------
           CPU 0                                  CPU 4
      ----------------                     --------------------
      proc_sys_call_handler()              cpu_up()
                                           --> acquired cpu_hotplug.lock
      sched_hmp_proc_update_handler()      cpufreq_cpu_callback()
      --> acquired policy_mutex            cpufreq_governor_interactive()
      get_online_cpus()                    sched_set_window()
      --> waiting for cpu_hotplug.lock     --> waiting for policy_mutex

      Change-Id: I39efc394f4f00815b72adc975021fdb16fe6e30a
      Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>

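A sketch of the enforced ordering, with userspace mutexes standing in for
get_online_cpus() and the HMP policy mutex (which path was reordered is an
assumption):

    #include <pthread.h>

    static pthread_mutex_t cpu_hotplug_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t policy_mutex     = PTHREAD_MUTEX_INITIALIZER;

    /* Every path takes the hotplug lock before the policy mutex, so the
     * sysctl path below can no longer deadlock against cpu_up(), which
     * already holds the hotplug lock when it reaches the governor. */
    static void sched_hmp_proc_update_handler_fixed(void)
    {
        pthread_mutex_lock(&cpu_hotplug_lock);    /* get_online_cpus() */
        pthread_mutex_lock(&policy_mutex);

        /* ... update window / tunable state ... */

        pthread_mutex_unlock(&policy_mutex);
        pthread_mutex_unlock(&cpu_hotplug_lock);  /* put_online_cpus() */
    }

    int main(void)
    {
        sched_hmp_proc_update_handler_fixed();
        return 0;
    }
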
* Merge "sched: Fix out of bounds array access in sched_reset_all_window_stats()"Linux Build Service Account2016-12-21
|\
| * sched: Fix out of bounds array access in sched_reset_all_window_stats()    Pavankumar Kondeti    2016-11-29

      A new reset reason code "FREQ_AGGREGATE_CHANGE" is added to the
      reset_reason_code enum but the corresponding string array is not
      updated. Fix this.

      Change-Id: I2a17d95328bef91c4a5dd4dde418296efca44431
      Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>