Commit log for path: root/kernel
...
* sched: Provide tunable to switch between PELT and window-based stats
  Srivatsa Vaddagiri, 2016-03-23

  Provide a runtime tunable to switch between using PELT-based load stats and window-based load stats. This will be needed for runtime analysis of the two load tracking schemes.

  Change-Id: I018f6a90b49844bf2c4e5666912621d87acc7217
  Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>

* sched: Provide scaled load information for tasks in /proc
  Srivatsa Vaddagiri, 2016-03-23

  Extend the "sched" file in /proc for every task to provide information on scaled load statistics and percentage-scaled load (load_avg) for a task. This will be a valuable debug aid.

  Change-Id: I6ee0394b409c77c7f79f5b9ac560da03dc879758
  Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>

* sched: Add additional ftrace events
  Srivatsa Vaddagiri, 2016-03-23

  This patch adds two ftrace events:

    sched_task_load -> records information about a task, such as scaled demand
    sched_cpu_load  -> records information about a cpu, such as nr_running, nr_big_tasks, etc.

  These will be useful for debugging HMP-related task placement decisions made by the scheduler.

  Change-Id: If91587149bcd9bed157b5d2bfdecc3c3bf6652ff
  Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>

* sched: Extend /proc/sched_debug with additional information
  Srivatsa Vaddagiri, 2016-03-23

  Provide additional information in /proc/sched_debug for every cpu. This will be a valuable debug aid.

  Change-Id: If22ee530e880cd21719242be7bc2c41308ad4186
  Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
  [rameezmustafa@codeaurora.org: Port to msm-3.18]
  Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>

* sched: Tighten controls for task spillover to an idle cluster
  Srivatsa Vaddagiri, 2016-03-23

  Several conditions can cause an idle cluster to pick up load from a busy cluster. One such condition is when the busy cluster has a number of tasks exceeding its capacity (or its number of cpus). This patch extends that condition to consider the small and big tasks on a cluster. Too many "small" tasks should not cause them to spill over to another idle cluster. Likewise, the presence of big tasks should be considered by a cluster when picking up load from another cluster with lower capacity.

  Change-Id: I0545bf2989c37217d84ed18756c6f5c8946d5ae5
  Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
  Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
  [rameezmustafa@codeaurora.org: Port to msm-3.18]
  Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
  [joonwoop@codeaurora.org: fixed minor conflict in fair.c.]
  Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>

* sched: Track number of big and small tasks on a cpu
  Srivatsa Vaddagiri, 2016-03-23

  This patch adds 'nr_big_tasks' and 'nr_small_tasks' per-cpu counters that track the number of big and small tasks on a cpu, respectively. This will be used in load balance decisions introduced in a subsequent patch.

  Change-Id: Ia174904140f81dd6d1946286889a50be3f16ea83
  Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
  [rameezmustafa@codeaurora.org: Port to msm-3.18]
  Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
  [joonwoop@codeaurora.org: fix conflicts in fair.c]
  Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>

* sched: Handle cpu-bound tasks stuck on wrong cpu
  Srivatsa Vaddagiri, 2016-03-23

  CPU-bound tasks that don't sleep for long intervals can stay stuck on the wrong cpu, as the selection of the "ideal" cpu for a task largely happens at task wakeup time. This patch adds a check in the scheduler tick for a task/cpu mismatch (big task on little cpu OR little task on big cpu) and forces migration of such tasks to their ideal cpu (via select_best_cpu()).

  Change-Id: Icac3485b6aa4b558c4ed9df23c2e81fb8f4bb9d9
  Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>

* sched: Extend active balance to accept 'push_task' argument
  Srivatsa Vaddagiri, 2016-03-23

  Active balance currently picks one task to migrate from a busy cpu to a chosen cpu (push_cpu). This patch extends active load balance to recognize a particular task ('push_task') that needs to be migrated to 'push_cpu'. This capability will be leveraged by HMP-aware task placement in a subsequent patch.

  Change-Id: If31320111e6cc7044e617b5c3fd6d8e0c0e16952
  Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
  [rameezmustafa@codeaurora.org: Port to msm-3.18]
  Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>

* sched: Send NOHZ kick to idle cpu in same cluster
  Srivatsa Vaddagiri, 2016-03-23

  A busy cpu will kick (via IPI) one of the idle cpus in tickless state to run load balance and help move tasks off itself. The cpu chosen to receive the kick is simply the "first" idle cpu found in nohz.idle_cpus_mask, which could cause unnecessary wakeups of a cluster. A better choice is an idle cpu in the same cluster as the busy cpu, which should minimize cluster wakeups.

  Change-Id: Ia63038d7c34b416b53c8feef3c3b31dab5200e42
  Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
  [rameezmustafa@codeaurora.org: Port to msm-3.18]
  Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
  [joonwoop@codeaurora.org: fixed minor conflict about return value.]
  Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>

* sched: Basic task placement support for HMP systems
  Srivatsa Vaddagiri, 2016-03-23

  HMP systems have cpus with different power and performance characteristics. Some cpus offer better power at the cost of lower performance, while others offer better performance at the cost of higher power. As a result, the bandwidth consumed by a task to do a "fixed" amount of work can vary across cpus. Optimal task placement on HMP involves placing a task on a cpu where it can meet its performance goals at the lowest power cost.

  Since the kernel has little to no awareness of the performance goals of applications, we guesstimate whether a task is meeting its performance goals by looking at its cpu bandwidth consumption. High bandwidth consumption could imply that the task's performance can improve by running on cpus with better capacity/performance characteristics.

  This patch makes the basic changes to support HMP. It provides a configurable threshold; any task consuming bandwidth in excess of the threshold will be placed on a cpu with better capacity.

  Change-Id: I3fd98edd430f73342fbef06411e8b2d1cf2f56fa
  Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
  [rameezmustafa@codeaurora.org: Port to msm-3.18]
  Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
  [joonwoop@codeaurora.org: fixed conflict about members of p->se which are not available anymore.]
  Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>

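  The placement rule this commit describes — compare a task's bandwidth consumption against a configurable threshold and prefer a higher-capacity cpu when it is exceeded — can be sketched as follows. This is a hypothetical model, not the kernel code; the names (select_cluster, upmigrate_threshold_pct) are illustrative:

  ```python
  # Hypothetical model of threshold-based HMP placement. A task whose
  # demand (as a percentage of cpu bandwidth) exceeds the configurable
  # threshold is steered toward the higher-capacity cluster.

  def select_cluster(task_demand_pct, upmigrate_threshold_pct=80):
      """Prefer a better-capacity cpu once demand exceeds the threshold."""
      return "big" if task_demand_pct > upmigrate_threshold_pct else "little"

  assert select_cluster(95) == "big"      # heavy task: up-migrate
  assert select_cluster(30) == "little"   # light task: stay on efficient cpu
  ```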
* sched: Use rq->efficiency in scaling load stats
  Srivatsa Vaddagiri, 2016-03-23

  Extend the task load scaling function to account for the cpu efficiency factor. Task load is scaled in reference to the "most" efficient cpu.

  Change-Id: I7bf829211a6e1293076e8ba0f93b4f6abcf20d92
  Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>

* sched: Introduce efficiency, load_scale_factor and capacity
  Srivatsa Vaddagiri, 2016-03-23

  Efficiency reflects the instructions-per-cycle capability of a cpu. load_scale_factor reflects the magnification factor applied to a task's load when estimating the bandwidth it will consume on a cpu. It accounts for the fact that task load is scaled in reference to the "best" cpu, the one with the best efficiency factor and the best possible max_freq. Note that there may be no single cpu in the system that has both the best efficiency and the best possible max_freq, but that is still the combination that all task load in the system is scaled against.

  capacity reflects the max_freq and efficiency metric of a cpu. It is defined such that the "least" performing cpu (the one with the lowest efficiency factor and max_freq) gets a capacity of 1024. Again, there may be no cpu in the system that has both the lowest efficiency and the lowest max_freq; this is still the combination that is assigned a capacity of 1024, and other cpu capacities are relative to it.

  Change-Id: I4a853f1f0f90020721d2a4ee8b10db3d226b287c
  Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
  [rameezmustafa@codeaurora.org: Port to msm-3.18]
  Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>

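  One plausible formulation consistent with these definitions (an assumption for illustration; the exact kernel arithmetic may differ) scales the product of efficiency and max_freq against the least-performing combination for capacity, and against the best-performing combination for load_scale_factor:

  ```python
  # Sketch of the two metrics described above. Units are arbitrary;
  # integer division mirrors the fixed-point style of kernel code.

  def capacity(eff, max_freq, min_eff, min_max_freq):
      # The least-capable (lowest eff, lowest max_freq) combination maps
      # to 1024; every other cpu's capacity is relative to it.
      return 1024 * (eff * max_freq) // (min_eff * min_max_freq)

  def load_scale_factor(eff, max_freq, best_eff, best_max_freq):
      # The best combination maps to 1024; slower or less efficient cpus
      # magnify task load (factor > 1024).
      return 1024 * (best_eff * best_max_freq) // (eff * max_freq)

  assert capacity(1024, 1500, 1024, 1500) == 1024            # least cpu
  assert load_scale_factor(2048, 2100, 2048, 2100) == 1024   # best cpu
  ```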
* sched: Add CONFIG_SCHED_HMP Kconfig option
  Srivatsa Vaddagiri, 2016-03-23

  Add a compile-time flag to enable or disable scheduler features for HMP (heterogeneous multi-processor) systems. The main feature deals with optimizing task placement for the best power/performance tradeoff. Also extend features currently dependent on CONFIG_SCHED_FREQ_INPUT to be enabled for CONFIG_SCHED_HMP as well.

  Change-Id: I03b3942709a80cc19f7b934a8089e1d84c14d72d
  Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
  [rameezmustafa@codeaurora.org: Port to msm-3.18]
  Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
  [joonwoop@codeaurora.org: fixed minor ifdefry conflict.]
  Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>

* sched: Add scaled task load statistics
  Srivatsa Vaddagiri, 2016-03-23

  Scheduler-guided frequency selection as well as task placement on heterogeneous systems require scaled task load statistics. This patch adds a 'runnable_avg_sum_scaled' metric per task that is a scaled derivative of 'runnable_avg_sum'. Load is scaled in reference to the "best" cpu, i.e. the one with the best possible max_freq.

  Change-Id: Ie8ae450d0b02753e9927fb769aee734c6d33190f
  Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
  [rameezmustafa@codeaurora.org: Port to msm-3.18]
  Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
  [joonwoop@codeaurora.org: incorporated with change 9d89c257df ("sched/fair: Rewrite runnable load and utilization average tracking"). Used container_of() to get sched_entity.]
  Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>

* sched: Introduce CONFIG_SCHED_FREQ_INPUT
  Srivatsa Vaddagiri, 2016-03-23

  Introduce a compile-time flag to enable scheduler guidance of frequency selection. This flag is also used to turn the window-based load stats feature on or off. Having a compile-time flag lets some platforms avoid any overhead that may be present with this scheduler feature.

  Change-Id: Id8dec9839f90dcac82f58ef7e2bd0ccd0b6bd16c
  Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
  [rameezmustafa@codeaurora.org: Port to msm-3.18]
  Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
  [joonwoop@codeaurora.org: fixed minor conflict around sysctl_timer_migration.]
  Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>

* sched: window-based load stats improvements
  Srivatsa Vaddagiri, 2016-03-23

  The following cleanups and improvements are made to the window-based load stats feature:

  * Add a sysctl to pick max, avg, or most recent samples as a task's demand.
  * Fix an overflow possibility in the calculation of the sum for the average policy.
  * Use unscaled statistics when a task is running on a cpu that is thermally throttled.

  Change-Id: I8293565ca0c2a785dadf8adb6c67f579a445ed29
  Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>

* sched: Add min_max_freq and rq->max_possible_freq
  Srivatsa Vaddagiri, 2016-03-23

  rq->max_possible_freq represents the maximum frequency a cpu is capable of attaining, while rq->max_freq represents the maximum frequency a cpu can attain at a given instant. rq->max_freq includes constraints imposed by the user or the thermal driver, so rq->max_freq <= rq->max_possible_freq.

  max_possible_freq is derived as max(rq->max_possible_freq) and represents the "best" cpu, the one that can attain the best possible frequency. min_max_freq is derived as min(rq->max_possible_freq). For homogeneous systems, max_possible_freq and min_max_freq will be the same, while they can differ on heterogeneous systems.

  Change-Id: Iec485fde35cfd33f55ebf2c2dce4864faa2083c5
  Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
  [joonwoop@codeaurora.org: fixed conflict around max_possible_freq.]
  Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>

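  The derivation of the two globals can be shown with a toy per-cpu table (the frequencies and cpu set are illustrative, in kHz):

  ```python
  # Per-cpu max_possible_freq values for a hypothetical big.LITTLE system.
  cpu_max_possible_freq = {0: 1497600, 1: 1497600, 2: 2150400, 3: 2150400}

  # max_possible_freq corresponds to the "best" cpu; min_max_freq to the
  # least-capable one. On a homogeneous system the two coincide.
  max_possible_freq = max(cpu_max_possible_freq.values())
  min_max_freq = min(cpu_max_possible_freq.values())

  assert (max_possible_freq, min_max_freq) == (2150400, 1497600)
  ```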
* sched: move task load based functions
  Steve Muckle, 2016-03-23

  The task load based functions will need to make use of LOAD_AVG_MAX in a subsequent patch, so move them below the definition of that macro.

  Change-Id: I02f18ba069b81033e611f8f8bba6dccd7cd81252
  Signed-off-by: Steve Muckle <smuckle@codeaurora.org>

* sched: fix race between try_to_wake_up() and move_task()
  Steve Muckle, 2016-03-23

  Until a task's state has been seen as interruptible/uninterruptible and it is no longer on_cpu, it is possible that the task may move to another cpu (load balancing may cause this). Here is an example where the race condition results in incorrect operation:

  - cpu 0 calls put_prev_task on task A; task A's state is TASK_RUNNING
  - cpu 0 runs task B, which attempts to wake up A
  - cpu 0 begins try_to_wake_up(), recording src_cpu for task A as cpu 0
  - cpu 1 then pulls task A (perhaps due to idle balance)
  - cpu 1 runs task A, which then sleeps, becoming INTERRUPTIBLE
  - cpu 0 continues in try_to_wake_up(), thinking task A's previous cpu is 0, where it is actually 1
  - if select_task_rq returns cpu 0, task A will be woken up on cpu 0 without properly updating its cpu to 0 in set_task_cpu()

  CRs-Fixed: 665958
  Change-Id: Icee004cb320bd8edfc772d9f74e670a9d4978a99
  Signed-off-by: Steve Muckle <smuckle@codeaurora.org>

* sched: Skip load update for idle task
  Srivatsa Vaddagiri, 2016-03-23

  Load statistics for idle tasks are not useful in any manner, so skip the load update for idle tasks.

  CRs-Fixed: 665706
  Change-Id: If3a908bad7fbb42dcb3d0a1d073a3750cf32fcf9
  Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>

* sched: Window-based load stat improvements
  Srivatsa Vaddagiri, 2016-03-23

  Some tasks have a sporadic load pattern: after running only for short durations, they can suddenly start running for longer intervals. To recognize such a sharp increase in a task's demand, the max of the average of 5 window load samples and the most recent sample is chosen as the task demand.

  Make the window size (sched_ravg_window) configurable at boot time. To prevent users from setting inappropriate values for the window size, min and max limits are defined.

  As the 'ravg' struct tracks load for both real-time and non-real-time tasks, it is moved out of the sched_entity struct. To avoid changing the function signatures of move_tasks() and move_one_task(), per-cpu variables are defined to track the total load moved. In case multiple tasks are selected to migrate in one load balance operation, loads > 100 could be sent through migration notifiers; prevent this scenario by setting mnd.load to 100 in such cases.

  Define wrapper functions to compute cpu demand for tasks and to change rq->cumulative_runnable_avg.

  Change-Id: I9abfbf3b5fe23ae615a6acd3db9580cfdeb515b4
  Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
  Signed-off-by: Rohit Gupta <rohgup@codeaurora.org>
  [rameezmustafa@codeaurora.org: Port to msm-3.18 and squash "dcf7256 sched: window-stats: Fix overflow bug" into this patch.]
  Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
  [joonwoop@codeaurora.org: fixed conflict in __migrate_task().]
  Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>

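  The demand policy above — the max of the average of the recent window samples and the most recent sample — can be sketched with an assumed helper (not a kernel function):

  ```python
  # Demand = max(average of recent window samples, most recent sample).
  # A sudden long-running window is recognized immediately via the max,
  # while the average smooths out one-off dips.

  def task_demand(samples):
      return max(sum(samples) // len(samples), samples[-1])

  assert task_demand([10, 10, 10, 10, 80]) == 80  # sharp ramp-up caught
  assert task_demand([60, 60, 60, 60, 20]) == 52  # average dominates a dip
  ```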
* sched: Call the notify_on_migrate notifier chain for wakeups as well
  Rohit Gupta, 2016-03-23

  Add a change to send notify_on_migrate hints from the scheduler on wakeups of foreground tasks if their load is above sched_wakeup_load_threshold (default value 60). These hints can be used to choose an appropriate cpu frequency corresponding to the load of the task being woken up.

  By default sched_wakeup_load_threshold is set to 60, so wakeup hints are sent out for those tasks whose loads are higher than that value. This might cause unnecessary wakeup boosts when load based syncing is turned ON for cpu-boost. Disable the wakeup hints by setting sched_wakeup_load_threshold to a value higher than 100, so that wakeup boost doesn't happen unless it is explicitly turned ON from the adb shell.

  Change-Id: Ieca413c1a8bd2b14a15a7591e8e15d22925c42ca
  Signed-off-by: Rohit Gupta <rohgup@codeaurora.org>
  [rameezmustafa@codeaurora.org: Squash "a26fcce sched: Disable wakeup hints for foreground tasks by default" into this patch and update commit text.]
  Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>

* cpufreq: cpu-boost: Introduce scheduler assisted load based syncs
  Rohit Gupta, 2016-03-23

  Previously, on getting a migration notification, cpu-boost changed the scaling min of the destination cpu to match the source frequency or the sync_threshold, whichever was lower.

  If the scheduler migration notification is extended with task load (cpu demand) information, the cpu-boost driver can use this load to compute a suitable frequency for the migrating task. The required frequency for the task is calculated by taking the load percentage of the max frequency, and no sync is performed if the load is less than a particular value (migration_load_threshold). This change is beneficial for both performance and power, as the demand of a task is taken into consideration while making cpufreq decisions and unnecessary syncs for lightweight tasks are avoided.

  The task load information provided by the scheduler comes from a window-based load collection mechanism, which also normalizes the load collected by the scheduler to the max possible frequency across all cpus.

  Change-Id: Id2ba91cc4139c90602557f9b3801fb06b3c38992
  Signed-off-by: Rohit Gupta <rohgup@codeaurora.org>
  [rameezmustafa@codeaurora.org: Port to msm-3.18]
  Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
  [joonwoop@codeaurora.org: fixed conflict in __migrate_task().]
  Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>

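  The sync-frequency computation described here — load percentage of fmax, skipping the sync entirely for lightweight tasks — can be sketched as follows (the function name and the threshold default are illustrative assumptions, not the driver's actual symbols):

  ```python
  # Sketch of load-based sync on migration: return the frequency to sync
  # the destination cpu to, or None when the task is too light to bother.

  def boost_freq(load_pct, fmax_khz, migration_load_threshold=30):
      if load_pct < migration_load_threshold:
          return None  # skip the sync for lightweight tasks
      return fmax_khz * load_pct // 100

  assert boost_freq(50, 2000000) == 1000000  # 50% load -> half of fmax
  assert boost_freq(10, 2000000) is None     # light task: no sync
  ```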
* sched: window-based load stats for tasks
  Srivatsa Vaddagiri, 2016-03-23

  Provide a metric per task that specifies how cpu-bound a task is. Task execution is monitored over several time windows, and the fraction of the window for which the task was found to be executing or wanting to run is recorded as the task's demand. Windows over which the task was sleeping are ignored. We track the last 5 recent windows for every task, and the maximum demand seen in any of those windows (where the task had some activity) drives the frequency demand for that task.

  A per-cpu metric (rq->cumulative_runnable_avg) is also provided, which is an aggregation of the cpu demand of all tasks currently enqueued on the cpu. rq->cumulative_runnable_avg is useful to know whether the cpu frequency needs to be changed to match task demand.

  Change-Id: Ib83207b9ba8683cd3304ee8a2290695c34f08fe2
  Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
  [rameezmustafa@codeaurora.org: Port to msm-3.18]
  Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
  [joonwoop@codeaurora.org: fixed conflict in ttwu_do_wakeup() to incorporate with changed trace_sched_wakeup() location.]
  Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>

* sched: Make scheduler aware of cpu frequency state
  Srivatsa Vaddagiri, 2016-03-23

  The capacity of a cpu (how much performance it can deliver) is partly determined by its frequency (P) state, both the current frequency and the max frequency it can reach. Knowing the frequency state of cpus helps the scheduler optimize various functions, such as tracking every task's cpu demand and placing tasks on various cpus.

  This patch has the scheduler register for cpufreq notifications to become aware of each cpu's frequency state. Subsequent patches will make use of the derived information for various purposes, such as a task's scaled load (cpu demand) accounting and task placement.

  Change-Id: I376dffa1e7f3f47d0496cd7e6ef8b5642ab79016
  Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
  [joonwoop@codeaurora.org: fixed minor conflict in kernel/sched/core.c.]
  Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>

* sched/debug: Make sysrq prints of sched debug data optional
  Matt Wagantall, 2016-03-23

  Calls to sysrq_sched_debug_show() can yield rather verbose output, which contributes to log spew and, under heavy load, may increase the chances of a watchdog bark. Make printing of this data optional with the introduction of a new Kconfig option, CONFIG_SYSRQ_SCHED_DEBUG.

  Change-Id: I5f54d901d0dea403109f7ac33b8881d967a899ed
  Signed-off-by: Matt Wagantall <mattw@codeaurora.org>

* tracing/sched: add load balancer tracepoint
  Steve Muckle, 2016-03-23

  When doing performance analysis it can be useful to see exactly what the load balancer is doing: when it runs and why exactly it may not be redistributing load. This additional tracepoint shows the idle context of the load balance operation (idle, not idle, newly idle), various values from the load balancing operation, the final result, and the new balance interval.

  Change-Id: I1538c411c5f9d17d7d37d84ead6210756be2d884
  Signed-off-by: Steve Muckle <smuckle@codeaurora.org>
  [rameezmustafa@codeaurora.org: Initialize variables in load_balance() to avoid crashes and inaccurate tracing.]
  Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
  [joonwoop@codeaurora.org: fixed minor conflict in include/trace/events/sched.h.]
  Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>

* sched: change WARN_ON_ONCE to printk_deferred() in try_to_wake_up_local()
  Steve Muckle, 2016-03-23

  try_to_wake_up_local() is called with the rq lock held. Printing to the console in this context can result in a deadlock if klogd needs to be woken up. Print to the kernel log buffer via printk_sched() instead, which avoids the wakeup.

  Change-Id: Ia07baea3cb7e0b158545207fdbbb866203256d3c
  Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>

* tracing/sched: Track per-cpu rt and non-rt cpu_load
  Arun Bharadwaj, 2016-03-23

  Add a new tracepoint, trace_sched_enq_deq_task, to track per-cpu rt and non-rt cpu_load during task enqueue and dequeue. This is useful for visualizing and comparing the load on different cpus, and for understanding how balanced the load is at any point in time.

  Note: we only print cpu_load[0] because we only care about the most recent load history for tracking load balancer effectiveness.

  Change-Id: I46f0bb84e81652099ed5edf8c2686c70c8b8330c
  Signed-off-by: Arun Bharadwaj <abharadw@codeaurora.org>

* sched: re-calculate a cpu's next_balance point upon sched domain changes
  Srivatsa Vaddagiri, 2016-03-23

  A cpu's next_balance point can be stale when the cpu is being attached to a sched domain hierarchy. That leads to an undesirable delay in the cpu doing a load balance and hence can affect scheduling latencies for tasks. Fix this by initializing the cpu's next_balance point when it is attached to a sched domain hierarchy.

  Change-Id: I855cff8da5ca28d278596c3bb0163b839d4704bc
  Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
  [rameezmustafa@codeaurora.org: Modify commit text to reflect dropped patches]
  Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>

* sched: provide per cpu-cgroup option to notify on migrations
  Steve Muckle, 2016-03-23

  On systems where cpus may run asynchronously, task migrations between cpus running at grossly different speeds can cause problems. This change provides a mechanism to notify a subsystem in the kernel if a task in a particular cgroup migrates to a different cpu. Other subsystems (such as cpufreq) may then register for this notifier to take appropriate action when such a task is migrated. The cgroup attribute to set for this behavior is "notify_on_migrate".

  Change-Id: Ie1868249e53ef901b89c837fdc33b0ad0c0a4590
  Signed-off-by: Steve Muckle <smuckle@codeaurora.org>
  [rameezmustafa@codeaurora.org: Use new cgroup APIs, fix 64-bit compilation issues and resolve some merge conflicts. Also squash "2bd8075 sched: remove migration notification from RT class" into this patch.]
  Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
  [joonwoop@codeaurora.org: Incorporated with new __migrate_task().]
  Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>

* sched: Fix SCHED_HRTICK bug leading to late preemption of tasks
  Srivatsa Vaddagiri, 2016-03-23

  The SCHED_HRTICK feature is useful to preempt SCHED_FAIR tasks on-the-dot (just when they would have exceeded their ideal_runtime). It makes use of a per-cpu hrtimer resource, and hence the hrtimer alarm should be based on the total number of SCHED_FAIR tasks a cpu has across its various cfs_rqs, rather than on the number of tasks in a particular cfs_rq (as implemented currently). As a result, with the current code, it is possible for a running task (which is the sole task in its cfs_rq) to be preempted long after its ideal_runtime has elapsed, resulting in increased latency for tasks in other cfs_rqs on the same cpu.

  Fix this by arming the sched hrtimer based on the total number of SCHED_FAIR tasks a cpu has across its various cfs_rqs.

  Change-Id: I1f23680a64872f8ce0f451ac4bcae28e8967918f
  Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
  [rameezmustafa@codeaurora.org: Squash "c24fb502 sched: fix reference to wrong cfs_rq" into this patch]
  Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>

* kernel: reduce sleep duration in wait_task_inactive
  Steve Muckle, 2016-03-23

  Sleeping for an entire tick adds unnecessary latency to hotplugging a cpu (cpu_up).

  Change-Id: Iab323a79f4048bc9101ecfd368e0f275827ed4ab
  Signed-off-by: Steve Muckle <smuckle@codeaurora.org>
  [rameezmustafa@codeaurora.org: Port to msm-3.18]
  Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>

* sched: add sysctl for controlling task migrations on wake
  Steve Muckle, 2016-03-23

  The PF_WAKE_UP_IDLE per-task flag made it impossible to enable the old behavior of SD_SHARE_PKG_RESOURCES, where every task migrates to an idle cpu on wakeup.

  The sched_wake_to_idle sysctl value, when made nonzero, causes all tasks to migrate to an idle cpu if one is available when the task is woken up, regardless of how PF_WAKE_UP_IDLE is configured for tasks in the system.

  As with PF_WAKE_UP_IDLE, the SD_SHARE_PKG_RESOURCES scheduler domain flag must be enabled for the sysctl value to have an effect.

  Change-Id: I23bed846d26502c7aed600bfcf1c13053a7e5f61
  Signed-off-by: Steve Muckle <smuckle@codeaurora.org>
  (cherry picked from commit 9d5b38dc0025d19df5b756b16024b4269e73f282)

* sched/rt: Add Kconfig option to enable panicking for RT throttling
  Matt Wagantall, 2016-03-23

  This may be useful for detecting and debugging RT throttling issues.

  Change-Id: I5807a897d11997d76421c1fcaa2918aad988c6c9
  Signed-off-by: Matt Wagantall <mattw@codeaurora.org>
  [rameezmustafa@codeaurora.org: Port to msm-3.18]
  Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
  [joonwoop@codeaurora.org: fixed conflict in lib/Kconfig.debug]
  Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>

* sched/rt: print RT tasks when RT throttling is activated
  Matt Wagantall, 2016-03-23

  Existing debug prints do not provide any clues about which tasks may have triggered RT throttling. Print the names and PIDs of all tasks on the throttled rt_rq to help narrow down the source of the problem.

  Change-Id: I180534c8a647254ed38e89d0c981a8f8bccd741c
  Signed-off-by: Matt Wagantall <mattw@codeaurora.org>
  [rameezmustafa@codeaurora.org: Port to msm-3.18]
  Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>

* sched: add PF_WAKE_UP_IDLE
  Steve Muckle, 2016-03-23

  Certain workloads may benefit from the SD_SHARE_PKG_RESOURCES behavior of waking their tasks up on idle cpus. The feature has too much of a negative impact on other workloads, however, to apply it globally. The PF_WAKE_UP_IDLE flag tells the scheduler to wake up tasks that have this flag set, or tasks woken by tasks with this flag set, on an idle cpu if one is available.

  Change-Id: I20b28faf35029f9395e9d9f5ddd57ce2de795039
  Signed-off-by: Steve Muckle <smuckle@codeaurora.org>
  [joonwoop@codeaurora.org: fixed conflict around set_wake_up_idle() in include/linux/sched.h]
  Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>

* sched: Make the scheduler aware of C-state for cpus
  Srivatsa Vaddagiri, 2016-03-23

  A C-state represents a power state of a cpu. A cpu can have one or more C-states associated with it. C-state transitions are based on various factors (expected sleep time, for example). "Deeper" C-states imply longer wakeup latencies.

  The scheduler needs to know the wakeup latency associated with various C-states. Having this information allows the scheduler to make better decisions during task placement. For example:

  - Prefer an idle cpu that is in the shallowest C-state
  - Avoid waking up small tasks on an idle cpu unless it is in the shallowest C-state

  This patch introduces APIs in the scheduler that can be used by the architecture-specific power management driver to inform the scheduler about the C-states of cpus.

  Change-Id: I39c5ae6dbace4f8bd96e88f75cd2d72620436dd1
  Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
  Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
  Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>

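  The first placement preference can be sketched as a selection over idle cpus by C-state depth (lower index = shallower state = lower wakeup latency). The cpu-to-C-state mapping and the helper name are illustrative, not the kernel's API:

  ```python
  # Among idle cpus, pick the one whose C-state has the lowest wakeup
  # latency. idle_cstate maps cpu id -> C-state index (0 = shallowest).

  def pick_idle_cpu(idle_cstate):
      return min(idle_cstate, key=idle_cstate.get)

  assert pick_idle_cpu({1: 3, 2: 1, 3: 2}) == 2  # cpu 2 wakes up fastest
  ```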
* | | lib: spinlock: Trigger a watchdog bite on spin_dump for rwlockPrasad Sodagudi2016-03-22
| | | Currently dump_stack is printed once a spin_bug is detected for an rwlock. So provide an option to trigger a panic or watchdog bite for debugging rwlock magic corruptions and lockups. Change-Id: I20807e8eceb8b81635e58701d1f9f9bd36ab5877 [abhimany: replace msm with qcom] Signed-off-by: Prasad Sodagudi <psodagud@codeaurora.org> Signed-off-by: Abhimanyu Kapur <abhimany@codeaurora.org>
* | | lib: spinlock: Cause a watchdog bite on spin_dumpRohit Vaswani2016-03-22
| | | Currently we trigger a BUG_ON() once a spin_bug is detected, but that involves a lot of processing; the other CPUs proceed with other work in the meantime, and the state of the system has moved on by the time we can analyze it. Provide an option to trigger a watchdog bite instead so that we can get the traces as close to the issue as possible. Change-Id: Ic8d692ebd02c6940a3b4e5798463744db20b0026 Signed-off-by: Rohit Vaswani <rvaswani@codeaurora.org> Signed-off-by: Prasad Sodagudi <psodagud@codeaurora.org>
* | | lib: spinlock_debug: Prevent continuous spin dump logs on panicRohit Vaswani2016-03-22
| | | Once a spinlock lockup is detected on a CPU, we invoke a kernel panic. During the panic handling, we might see more instances of spinlock lockup from other CPUs. This clutters dmesg and makes it cumbersome to determine what exactly happened. Call spin_bug instead of calling spin_dump directly. Change-Id: I57857a991345a8dac3cd952463d05129145867a8 Signed-off-by: Rohit Vaswani <rvaswani@codeaurora.org> Signed-off-by: Prasad Sodagudi <psodagud@codeaurora.org>
* | | kernel/lib: add additional debug capabilities for data corruptionSyed Rameez Mustafa2016-03-22
| | | Data corruption in the kernel often ends in system crashes that are easier to debug closer to the time of detection. Specifically, if we do not panic immediately after lock or list corruptions have been detected, the problem context is lost in the ensuing system mayhem. Add support for crashing the system immediately after such corruptions are detected. A CONFIG option controls enabling/disabling of the feature. Change-Id: I9b2eb62da506a13007acff63e85e9515145909ff Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org> [abhimany: minor merge conflict resolution] Signed-off-by: Abhimanyu Kapur <abhimany@codeaurora.org>
* | | kernel: fork: Call KASan alloc before releasing the thread info pagesSe Wang (Patrick) Oh2016-03-22
| | | The pages allocated for thread info are used for the stack. KASan marks some stack memory regions as guard areas, and the bitmasks for those regions are not cleared until the pages are freed. When CONFIG_PAGE_POISONING is enabled, because the pages still carry these special bitmasks, an out-of-bounds access report arises from KASan during page poisoning. So mark the pages as allocated before poisoning them. ================================================================== BUG: KASan: out of bounds on stack in memset+0x24/0x44 at addr ffffffc0b8e3f000 Write of size 4096 by task swapper/0/0 page:ffffffbacc38e760 count:0 mapcount:0 mapping: (null) index:0x0 flags: 0x4000000000000000() page dumped because: kasan: bad access detected CPU: 0 PID: 0 Comm: swapper/0 Tainted: G W 3.18.0-g5a4a5d5-07244-g488682c-dirty #12 Hardware name: Qualcomm Technologies, Inc. MSM 8996 v2.0 LiQUID (DT) Call trace: [<ffffffc00008c010>] dump_backtrace+0x0/0x250 [<ffffffc00008c270>] show_stack+0x10/0x1c [<ffffffc001b6f9e4>] dump_stack+0x74/0xfc [<ffffffc0002debf4>] kasan_report_error+0x2b0/0x408 [<ffffffc0002dee28>] kasan_report+0x34/0x40 [<ffffffc0002de240>] __asan_storeN+0x15c/0x168 [<ffffffc0002de47c>] memset+0x20/0x44 [<ffffffc0002d77bc>] kernel_map_pages+0x2e8/0x384 [<ffffffc000266458>] free_pages_prepare+0x340/0x3a0 [<ffffffc0002694cc>] __free_pages_ok+0x20/0x12c [<ffffffc00026a698>] __free_pages+0x34/0x44 [<ffffffc00026abb0>] free_kmem_pages+0x68/0x80 [<ffffffc0000b0424>] free_task+0x80/0xac [<ffffffc0000b05a8>] __put_task_struct+0x158/0x23c [<ffffffc0000b9194>] delayed_put_task_struct+0x188/0x1cc [<ffffffc00018586c>] rcu_process_callbacks+0x6cc/0xbb0 [<ffffffc0000bfdb0>] __do_softirq+0x368/0x750 [<ffffffc0000c0630>] irq_exit+0xd8/0x15c [<ffffffc00016f610>] __handle_domain_irq+0x108/0x168 [<ffffffc000081af8>] gic_handle_irq+0x50/0xc0 Memory state around the buggy address: ffffffc0b8e3f980: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ffffffc0b8e3fa00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 >ffffffc0b8e3fa80: 00 00 00 00 00 00 00 00 f1 f1 f1 f1 00 00 00 00 ^ ffffffc0b8e3fb00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ffffffc0b8e3fb80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 Change-Id: I90aa1c6e82a0bde58d2d5d68d84e67f932728a88 Signed-off-by: Se Wang (Patrick) Oh <sewango@codeaurora.org>
* | | kernel: printk: specify alignment for struct printk_logAndrey Ryabinin2016-03-22
| | | On architectures that support efficient unaligned access, struct printk_log has 4-byte alignment. Specify the alignment attribute in the type declaration. The whole point of this patch is to fix a deadlock that happens when UBSAN detects an unaligned access in printk(): UBSAN then recursively calls printk() while logbuf_lock is still held by the top-level printk() call. CRs-Fixed: 969533 Change-Id: Iee822667b9104a69c3d717caeded0fcde3f6d560 Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Sasha Levin <sasha.levin@oracle.com> Cc: Randy Dunlap <rdunlap@infradead.org> Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Michal Marek <mmarek@suse.cz> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Yury Gribov <y.gribov@samsung.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Konstantin Khlebnikov <koct9i@gmail.com> Cc: Kostya Serebryany <kcc@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Git-repo: https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/ Git-commit: 5c9cf8af2e77388f1da81c39237fb4f20c2f85d5 Signed-off-by: Trilok Soni <tsoni@codeaurora.org> [satyap@codeaurora.org: trivial merge conflict resolution] Signed-off-by: Satya Durga Srinivasu Prabhala <satyap@codeaurora.org>
* | | net/ipv6/addrconf: IPv6 tethering enhancementTianyi Gou2016-03-22
| | | Add a new procfs flag to toggle the automatic addition of prefix routes on a per-device basis. The new flag is accept_ra_prefix_route. It defaults to 1 so as not to break existing behavior. CRs-Fixed: 435320 Change-Id: If25493890c7531c27f5b2c4855afebbbbf5d072a Acked-by: Harout S. Hedeshian <harouth@qti.qualcomm.com> Signed-off-by: Tianyi Gou <tgou@codeaurora.org> [subashab@codeaurora.org: resolve trivial merge conflicts] Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
* | | rtc: alarm: Add power-on alarm featureMao Jinlong2016-03-22
| | | Android does not support powering up the phone through an alarm. Set an RTC alarm in timerfd to power up the phone after the alarm expires. Change-Id: I781389c658fb00ba7f0ce089d706c10f202a7dc6 Signed-off-by: Mao Jinlong <c_jmao@codeaurora.org>
* | | alarmtimer: add rtc irq support for alarmMao Jinlong2016-03-22
| | | Add RTC IRQ support to alarmtimer so that an alarm can wake the system during suspend. Change-Id: I41b774ed4e788359321e1c6a564551cc9cd40c8e Signed-off-by: Xiaocheng Li <lix@codeaurora.org>
* | | soc: qcom: rq_stats: add snapshot of run queue stats driverMatt Wagantall2016-03-22
| | | Add a snapshot of the run queue stats driver as of msm-3.10 commit 4bf320bd ("Merge "ASoC: msm8952: set async flag for 8952 dailink"") Resolve checkpatch warnings in the process, notably by replacing sscanf with kstrtouint. Change-Id: I7e2f98223677e6477df114ffe770c0740ed37de9 Signed-off-by: Matt Wagantall <mattw@codeaurora.org> Signed-off-by: Abhimanyu Kapur <abhimany@codeaurora.org>
* | | trace: ipc_logging: Use virtual counterKarthikeyan Ramasubramanian2016-03-22
| | | Using the physical counter leads to a kernel BUG_ON(). Update the IPC Logging driver to use the virtual counter. Signed-off-by: Karthikeyan Ramasubramanian <kramasub@codeaurora.org>
* | | trace: Add snapshot of ipc_logging driverKarthikeyan Ramasubramanian2016-03-22
| | | This snapshot is taken as of msm-3.18 commit e70ad0cd (Promotion of kernel.lnx.3.18-151201.) Signed-off-by: Karthikeyan Ramasubramanian <kramasub@codeaurora.org>