Do not take locks in wakeup functions, as doing so may block the
current thread: a wakeup can trigger a callback that tries to acquire
the same lock. Also rename the wakeup command names so that client
and library commands are clearly identified.
Change-Id: I28411714d2d7f0104364726fc5ce0593e5ccff91
Signed-off-by: Ajay Singh Parmar <aparmar@codeaurora.org>
|
Isolate the execution environments of the HDMI HDCP 2.2 driver and
the HDCP library by creating separate threads and executing each work
item on a dedicated kworker. The two modules no longer call each
other's functions directly; instead, each wakes the other thread when
needed, lets that module carry out its work independently, and is
acknowledged by a wakeup in return.
Change-Id: I67bca61b92c831451ce3482a759a214b1e5d6578
Signed-off-by: Ajay Singh Parmar <aparmar@codeaurora.org>
|
Check whether the hdcp2p2 app is present every time the HDMI cable
is connected. If the app is not present, HDCP 2.2 is treated as not
supported on the target. This scenario occurs on devices that are not
HDCP 2.2 provisioned; with this change the HDMI core continues in
non-encrypted mode.
Change-Id: I72ebcc1e6844f46dbbc974efb6ba926948e1bbde
Signed-off-by: Tatenda Chipeperekwa <tatendac@codeaurora.org>
|
Define APIs to start and stop authentication from the client. Handle
internal states within the HDCP library and do not call HDCP library
internal functions directly from the client. Remove unnecessary
threads and locks and execute on a single thread, since the standard
requires this processing to be sequential.
Change-Id: I4cd924fb836e0e01ff1d6eba58d817fe0ca383e1
Signed-off-by: Ajay Singh Parmar <aparmar@codeaurora.org>
[cip@codeaurora.org: Snapshot hdcp.c/hdcp_qseecom.h,
add hdcp Kconfig/Makefile changes]
Signed-off-by: Clarence Ip <cip@codeaurora.org>
|
Fast-forward the msm_mdp.h file for the 3.14 kernel upgrade. Commit
history is not essential for future debugging.
Change-Id: Id9525ef3361c7f73b5c341a29e7a3e91b8037fa7
Signed-off-by: Terence Hampson <thampson@codeaurora.org>
|
FB clients are tracked by process ID. The process ID does not match
when the open and close API callers differ. In such cases, track FB
clients by file descriptor node ID and gracefully release all
resources associated with that process ID.
CRs-fixed: 652449
Change-Id: I09c965a421197c6464a64684e9706f30df327882
Signed-off-by: Dhaval Patel <pdhaval@codeaurora.org>
Signed-off-by: Veera Sundaram Sankaran <veeras@codeaurora.org>
|
Move the contents of msm_hdmi_audio_codec.h into msm_hdmi.h in the
common linux directory to remove the platform dependency.
Change-Id: I6331073ba1e5e119770c5e8cb50f6ff677807292
Signed-off-by: Casey Piper <cpiper@codeaurora.org>
|
The MDSS I/O utility contains APIs to handle driver-specific
resources such as GPIOs, power supplies, and clocks. Sharing these
APIs with other drivers lets them do resource management without
rewriting the same code.
Change-Id: Ib699407667239336cf82211e3f6e8eec97a104a8
Signed-off-by: Dhaval Patel <pdhaval@codeaurora.org>
[cip@codeaurora.org: Move mdss_io_util.h to include/linux]
Signed-off-by: Clarence Ip <cip@codeaurora.org>
|
Provide a callback mechanism for all modules registered for HDMI
cable notifications. The registered modules are notified every time
the cable is connected or disconnected. The callback is scheduled on
the global kernel workqueue.
Change-Id: I74630748448d8479b6e0cda6ccd148cbeae48a51
Signed-off-by: Ajay Singh Parmar <aparmar@codeaurora.org>
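For illustration only (not part of this patch), a client of the
notification mechanism might look roughly like the sketch below; the
hdmi_cable_notify handler type, its status field, and the
register_hdmi_cable_notification() helper are assumed names:

    #include <linux/module.h>
    #include <linux/printk.h>
    #include <linux/msm_hdmi.h>    /* assumed header for the API */

    static void client_hpd_notify(struct hdmi_cable_notify *h)
    {
            /* Runs from the global kernel workqueue whenever the HDMI
             * cable is connected or disconnected. */
            pr_info("hdmi cable state changed: %d\n", h->status);
    }

    static struct hdmi_cable_notify client_handler = {
            .hpd_notify = client_hpd_notify,
    };

    static int __init client_init(void)
    {
            return register_hdmi_cable_notification(&client_handler);
    }
    module_init(client_init);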
|
While merging to the 3.14 kernel, this includes only the
display-related changes from the commit below.
Change-Id: Ia44a224d3cea7bc78dd45e8a8279860d35d4b008
msm: reap unused kernel files
This change removes source files from the kernel tree that were not
used during make. The list of used files was generated from an
annotated make log and compared with the new files added since the
public release of kernel version 3.10.00. New files which were added
but not used have been removed from the tree.
A diff was also run to determine the list of files that had been
modified since the release of kernel version 3.10.00. These files
were then scrubbed against the current kernel configuration, removing
invalid and unused conditionals.
Some files which support planned functionality or are useful in
debugging have been excluded from this reap.
Change-Id: Ib2af17d073ea4b9e4a1b7572875dbb6801e78a63
Signed-off-by: Ian Maund <imaund@codeaurora.org>
[cip@codeaurora.org: Resolved removed file locations]
Signed-off-by: Clarence Ip <cip@codeaurora.org>
|
Signed-off-by: Dima Zavin <dima@android.com>
Signed-off-by: Carl Vanderlip <carlv@codeaurora.org>
Signed-off-by: David Brown <davidb@codeaurora.org>
[cip@codeaurora.org: Removed mdp_hw.h, mdp_ppp.c]
Signed-off-by: Clarence Ip <cip@codeaurora.org>
|
Needed to get the driver to compile ;(
Signed-off-by: Daniel Walker <dwalker@codeaurora.org>
|
Battery monitor services enumeration is used by voice and
other subsystem modules. Add the enumeration to support
backward compatibility and to avoid compilation issues.
Signed-off-by: Sudheer Papothi <spapothi@codeaurora.org>
|
Add snapshot of audio codec drivers for MSM targets. The
code is migrated from msm-3.18 kernel at the below commit/AU level-
AU_LINUX_ANDROID_LA.HB.1.3.1.06.00.00.187.056
(e70ad0cd5efdd9dc91a77dcdac31d6132e1315c1)
(Promotion of kernel.lnx.3.18-151201.)
Signed-off-by: Sudheer Papothi <spapothi@codeaurora.org>
|
Add snapshot for audio drivers for MSM targets. The code is
migrated from msm-3.18 kernel at the below commit/AU level -
AU_LINUX_ANDROID_LA.HB.1.3.1.06.00.00.187.056
(e70ad0cd5efdd9dc91a77dcdac31d6132e1315c1)
(Promotion of kernel.lnx.3.18-151201.)
Signed-off-by: Sudheer Papothi <spapothi@codeaurora.org>
|
Add snapshot of avtimer changes for MSM targets. The code
is migrated from msm-3.18 kernel.
Signed-off-by: Banajit Goswami <bgoswami@codeaurora.org>
Signed-off-by: Sudheer Papothi <spapothi@codeaurora.org>
|
Add soundwire bus support to regmap. This change enables codec
drivers that use the soundwire hardware interface to use the regmap
interface for register read/write functionality.
Signed-off-by: Sudheer Papothi <spapothi@codeaurora.org>
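A hedged sketch of how a soundwire codec driver might use regmap
after this change; regmap_read()/regmap_write() and struct
regmap_config are the standard regmap API, while the
devm_regmap_init_swr() helper, the swr_device type, and the header
path are assumptions:

    #include <linux/err.h>
    #include <linux/regmap.h>
    #include <linux/soundwire/soundwire.h>    /* assumed soundwire header */

    static const struct regmap_config codec_regmap_cfg = {
            .reg_bits = 16,
            .val_bits = 8,
    };

    static int codec_swr_probe(struct swr_device *sdev)
    {
            struct regmap *map;
            unsigned int val;

            /* assumed helper added by this change */
            map = devm_regmap_init_swr(sdev, &codec_regmap_cfg);
            if (IS_ERR(map))
                    return PTR_ERR(map);

            /* standard regmap register access, now over soundwire */
            regmap_read(map, 0x0100, &val);
            return regmap_write(map, 0x0100, val | 0x01);
    }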
|
With this patch, soundwire drivers can use an id_table and the
MODULE_DEVICE_TABLE() mechanism to bind to devices, just like I2C or
SPI drivers.
Change-Id: I4e8eee3cb9626a5dc4fbfa238b5d2a578355f2a3
Signed-off-by: Sudheer Papothi <spapothi@codeaurora.org>
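For reference, this is the familiar id_table/MODULE_DEVICE_TABLE()
binding pattern as it looks for an I2C driver, which the commit cites
as the model; the equivalent soundwire table and driver types are
introduced by this patch and are not reproduced here:

    #include <linux/i2c.h>
    #include <linux/module.h>

    static const struct i2c_device_id my_codec_id[] = {
            { "my-codec", 0 },
            { }
    };
    MODULE_DEVICE_TABLE(i2c, my_codec_id);

    static struct i2c_driver my_codec_driver = {
            .driver   = { .name = "my-codec" },
            .id_table = my_codec_id,
            /* .probe and .remove omitted for brevity */
    };
    module_i2c_driver(my_codec_driver);

    MODULE_LICENSE("GPL");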
|
Increase the input device SW ID limit from 15 to 32. This is needed
to accommodate more input devices.
Signed-off-by: Banajit Goswami <bgoswami@codeaurora.org>
|
Signed-off-by: Skylar Chang <chiaweic@codeaurora.org>
|
This is a snapshot of the Documentation and header file for
the stub regulator driver as of msm-3.18 kernel commit:
2642c0adc79c06c0f3225da0177e910a1cea8cb5 ("Merge "ARM: dts:
msm: Add support for truly 720p command mode panel on msmgold"")
Signed-off-by: Devesh Jhunjhunwala <deveshj@codeaurora.org>
|
The devfreq framework calls a frequency targeting function with
a flag parameter. Allow the governors to influence that parameter.
Change-Id: I4058bd9dcd027dd246ccdb90d25c68f1dc055901
Signed-off-by: Lucille Sylvester <lsylvest@codeaurora.org>
|
Some modules can benefit from the additional information that cpufreq
governors use to make frequency-switching decisions. This change lays
down a basic framework that governors can use to report such
information (e.g. a CPU's load) to clients that subscribe to the
cpufreq govinfo notifier chain.
Change-Id: I511b4bdb7d12394a31ce5352ae47553861e49303
Signed-off-by: Rohit Gupta <rohgup@codeaurora.org>
[imaund@codeaurora.org: resolved context conflicts]
Signed-off-by: Ian Maund <imaund@codeaurora.org>
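As an illustration (not taken from the patch), a client subscribing
to the govinfo chain could look like the sketch below; the
cpufreq_govinfo payload fields and the
cpufreq_register_govinfo_notifier() name are assumptions based on the
commit text:

    #include <linux/cpufreq.h>
    #include <linux/module.h>
    #include <linux/notifier.h>
    #include <linux/printk.h>

    static int my_govinfo_cb(struct notifier_block *nb,
                             unsigned long event, void *data)
    {
            struct cpufreq_govinfo *info = data;    /* assumed payload */

            pr_debug("cpu%u load %u\n", info->cpu, info->load);
            return NOTIFY_OK;
    }

    static struct notifier_block my_govinfo_nb = {
            .notifier_call = my_govinfo_cb,
    };

    static int __init my_client_init(void)
    {
            /* assumed registration helper for the govinfo chain */
            return cpufreq_register_govinfo_notifier(&my_govinfo_nb);
    }
    module_init(my_client_init);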
|
Select a task's prev_cpu when the task has slept only for a short
period, to reduce the latency of task placement and migrations. A new
tunable, /proc/sys/kernel/sched_select_prev_cpu_us, is introduced to
determine whether a task is eligible for this fast path.
CRs-fixed: 947467
Change-Id: Ia507665b91f4e9f0e6ee1448d8df8994ead9739a
[joonwoop@codeaurora.org: fixed conflict in include/linux/sched.h,
include/linux/sched/sysctl.h, kernel/sched/core.c and kernel/sysctl.c]
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
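A minimal userspace sketch of setting the new knob; the path comes
from the commit text, while the 2000 us value is only an illustrative
choice:

    #include <stdio.h>

    int main(void)
    {
            /* Tasks that slept less than this many microseconds become
             * eligible for the prev_cpu fast path (value illustrative). */
            FILE *f = fopen("/proc/sys/kernel/sched_select_prev_cpu_us", "w");

            if (!f) {
                    perror("fopen");
                    return 1;
            }
            fprintf(f, "%d\n", 2000);
            fclose(f);
            return 0;
    }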
|
At present, the HMP scheduler uses sched_clock to set up the window
boundary so that it is aligned with the timer interrupt, ensuring the
timer interrupt fires after the window rolls over. However, this
alignment does not last long, since the timer interrupt rearms the
next timer based on time measured by ktime, which is not coupled with
sched_clock.
Convert sched_clock to ktime to avoid a wallclock discrepancy between
the scheduler and the timer, so that the scheduler's window boundary
stays aligned with the timer.
CRs-fixed: 933330
Change-Id: I4108819a4382f725b3ce6075eb46aab0cf670b7e
[joonwoop@codeaurora.org: fixed minor conflict in include/linux/tick.h
and kernel/sched/core.c. omitted fixes for kernel/sched/qhmp_core.c]
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
|
Tasks that stay on the runqueue continuously for a certain amount of
time have the potential to be big tasks by the end of the window in
which they are runnable. In such scenarios, ramping the CPU frequency
early can boost performance rather than waiting until the end of a
window for the governor to query load. Notify the governor early, at
every tick, when a task has been observed to execute beyond some
percentage of the tick period.
The threshold beyond which a task is eligible for early detection can
be changed via the tunable sched_early_detection_duration. The
feature itself is enabled only when scheduler boost is in effect.
Change-Id: I528b72bbc79a55b4593d1b8ab45450411c6d70f3
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
[joonwoop@codeaurora.org: fixed conflict in scheduler_tick() in
kernel/sched/core.c. fixed minor conflicts in include/linux/sched.h,
include/linux/sched/sysctl.h and kernel/sysctl.c due to
CONFIG_SCHED_QHMP.]
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
|
Account the amount of load contributed by new tasks within the CPU
load so that the governor can apply a different policy when the CPU
is loaded by new tasks. To be able to distinguish new-task load, a
new tunable, sched_new_task_windows, is also introduced. The tunable
defines a task as new when it has been active for fewer than the
configured number of windows.
Change-Id: I2e2e62e4103882f7362154b792ab978b181b9f59
Suggested-by: Saravana Kannan <skannan@codeaurora.org>
[joonwoop@codeaurora.org: ommited changes for
drivers/cpufreq/cpufreq_interactive.c. cpufreq changes needs to be
applied separately later. fixed conflict in include/linux/sched.h and
include/linux/sched/sysctl.h. omitted changes for qhmp_core.c]
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
|
Add a per-CPU tunable to set the extra cost of using a CPU that is
idle. Add the same for a cluster.
Change-Id: I4aa53f3c42c963df7abc7480980f747f0413d389
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
[joonwoop@codeaurora.org: omitted changes for qhmp*.[c,h] stripped out
CONFIG_SCHED_QHMP in drivers/base/cpu.c and include/linux/sched.h]
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
|
Add a new API to the scheduler that allows the low power mode driver
to inform the scheduler about the D-state of a cluster. The scheduler
can leverage this to make an informed decision about the cost of
placing a task on that cluster.
Change-Id: If0fe0fdba7acad1c2eb73654ebccfdb421225e62
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
[joonwoop@codeaurora.org: omitted fixes for qhmp_core.c and qhmp_core.h]
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
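A hedged sketch of how the low power mode driver might use the new
hook; the function name and prototype shown here are assumptions
based on the commit text, not the actual interface:

    #include <linux/cpumask.h>
    #include <linux/sched.h>

    /* assumed prototype of the new scheduler hook */
    extern int sched_set_cluster_dstate(const struct cpumask *cluster_cpus,
                                        int dstate);

    static void lpm_cluster_enter(const struct cpumask *cluster_cpus,
                                  int dstate)
    {
            /* Report the cluster's D-state so that task placement can
             * account for the cost of waking the cluster back up. */
            sched_set_cluster_dstate(cluster_cpus, dstate);
    }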
|
At present, the HMP scheduler packs tasks onto a busy CPU until that
CPU's load reaches 100%, to avoid waking up an idle CPU as much as
possible. Such aggressive packing leads to unintended CPU frequency
increases, because the governor raises the busy CPU's frequency once
its load exceeds the configured frequency max load, which can be less
than 100%.
Fix this by taking the governor's frequency max load into account and
packing tasks only when the CPU's projected load is below that max
load, to avoid unnecessary frequency increases.
Change-Id: I4447e5e0c2fa5214ae7a9128f04fd7585ed0dcac
[joonwoop@codeaurora.org: fixed minor conflict in kernel/sched/sched.h]
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
|
Add zone awareness to the load balancer. Remove all earlier
restrictions that the load balancer had for inter-cluster kicks and
migrations.
Change-Id: I12ad3d0c2d2e9bb498f49a231810f2ad418b061f
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
[joonwoop@codeaurora.org: fixed minor conflict in nohz_kick_needed() due
to its return type change.]
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
|
For the fair sched class, update the select_best_cpu() policy to do
power-based placement. The hope is to minimize the voltage at which
the CPU runs.
While RT tasks already do power-based placement, their placement
preference now has to take into account the power cost of all tasks
on a given CPU. Also remove the check for sched_boost, since
sched_boost no longer intends to elevate all tasks to the highest
capacity cluster.
Change-Id: Ic6a7625c97d567254d93b94cec3174a91727cb87
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
|
Task packing will now be determined solely on the basis of the
power cost of task placement. All tasks are eligible for packing.
Remove the notion of "small" tasks from the scheduler.
Change-Id: I72d52d04b2677c6a8d0bc6aa7d50ff0f1a4f5ebb
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
|
Energy-aware core rotation is not compatible with the power-based
task placement being introduced in subsequent patches. Remove all
existing EA-based task placement/migration logic. power_cost() is the
only function that remains; it has been modified to return the total
power cost of a task on a given CPU, taking the existing load on that
CPU into account.
Change-Id: Ia00501e3cbfc6e11446a9a2e93e318c4c42bdab4
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
[joonwoop@codeaurora.org: fixed multiple conflicts in fair.c and minor
conflict in features.h]
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
|
At present, the governor retrieves each CPU's load sequentially. This
leaves a window for a race between the governor's CPU load query and
task migration, which can result in less CPU load being reported than
is actually present. For example, with CPU 0 at 30% load and CPU 1 at
50% load:
1. Governor: sched_get_busy(cpu 0) returns 30%.
2. Load balancer: a task 'p' migrates from CPU 1 to CPU 0 with
   p->ravg->prev_window = 50. CPU 0's load is now 80% and CPU 1's
   load is 0%.
3. Governor: sched_get_busy(cpu 1) returns 0%.
The 50% of load that moved from CPU 1 to CPU 0 is never accounted.
Fix such issues by introducing a new API, sched_get_cpus_busy(), which
lets the governor retrieve the load of a set of CPUs in one call. The
load set is constructed internally while blocking the load balancer,
so that migration cannot occur in the meantime.
Change-Id: I4fa4dd1195eff26aa603829aca2054871521495e
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
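A sketch of how a governor might consume the new API; the sched_load
layout, its prev_load field, and the exact prototype are assumptions
inferred from the commit text:

    #include <linux/cpumask.h>
    #include <linux/printk.h>
    #include <linux/sched.h>

    static void sample_cluster_load(const struct cpumask *cluster_cpus)
    {
            struct sched_load busy[8];  /* assumed: one entry per queried CPU */
            int i = 0, cpu;

            /* One call returns a consistent snapshot for the whole set,
             * taken while the load balancer is blocked. */
            sched_get_cpus_busy(busy, cluster_cpus);

            for_each_cpu(cpu, cluster_cpus)
                    pr_debug("cpu%d prev window load %llu\n",
                             cpu, busy[i++].prev_load);
    }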
|
Add a new procfs node, /proc/sys/kernel/sched_max_latency_us, to
track the worst scheduling latency. It provides an easier way to
identify the maximum scheduling latency seen across the CPUs.
Change-Id: I6e435bbf825c0a4dff2eded4a1256fb93f108d0e
[joonwoop@codeaurora.org: fixed conflict in update_stats_wait_end().]
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
|
Add new tunables /proc/sys/kernel/sched_latency_warn_threshold_us and
/proc/sys/kernel/sched_latency_panic_threshold_us to warn or panic
when a task is runnable but not scheduled for longer than the
configured time. This makes it easier to find unacceptably high
scheduling latencies.
Change-Id: If077aba6211062cf26ee289970c5abcd1c218c82
[joonwoop@codeaurora.org: fixed conflict in update_stats_wait_end().]
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
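A small userspace helper showing how the two thresholds could be
configured; the paths come from the commit text and the values are
illustrative only:

    #include <stdio.h>

    static int write_sysctl(const char *path, long us)
    {
            FILE *f = fopen(path, "w");

            if (!f) {
                    perror(path);
                    return -1;
            }
            fprintf(f, "%ld\n", us);
            fclose(f);
            return 0;
    }

    int main(void)
    {
            /* warn at 10 ms runnable-but-not-running, panic at 100 ms */
            write_sysctl("/proc/sys/kernel/sched_latency_warn_threshold_us",
                         10000);
            return write_sysctl("/proc/sys/kernel/sched_latency_panic_threshold_us",
                                100000);
    }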
|
Extend sched_get_nr_running_avg() API to return average nr_big_tasks,
in addition to average nr_running and average nr_io_wait tasks. Also
add a new trace point to record values returned by
sched_get_nr_running_avg() API.
Change-Id: Id3591e6d04da8db484b4d1cb9d95dba075f5ab9a
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
[rameezmustafa@codeaurora.org: Resolve trivial merge conflicts]
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
|
sched_get_nr_running_avg() returns the average nr_running and
nr_iowait task counts since it was last invoked. Fix several bugs in
their calculation:
* sched_update_nr_prod() needs to consider that the nr_running count
  can change by more than 1 when the CFS_BANDWIDTH feature is used.
* sched_get_nr_running_avg() needs to sum the nr_iowait count across
  all CPUs, rather than just one.
* sched_get_nr_running_avg() could race with sched_update_nr_prod(),
  as a result of which it could use a curr_time that is behind a
  CPU's 'last_time' value. That would lead to an erroneous average
  nr_running or nr_iowait calculation.
While at it, also fix a bug in the BUG_ON() check in
sched_update_nr_prod() and remove the unnecessary nr_running argument
of sched_update_nr_prod().
Change-Id: I46737614737292fae0d7204c4648fb9b862f65b2
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
[rameezmustafa@codeaurora.org: Port to msm-3.18]
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
|
The sched_prefer_idle flag controls whether tasks can be woken to any
available idle CPU. It may be desirable to set sched_prefer_idle to 0
so that most tasks wake up to non-idle CPUs under the mostly_idle
threshold, and to have specialized tasks override this behavior
through other means. The per-task PF_WAKE_UP_IDLE flag provides
exactly that: it lets tasks with PF_WAKE_UP_IDLE set be woken up to
any available idle CPU, independent of the sched_prefer_idle setting.
Currently only a kernel-space API exists to set PF_WAKE_UP_IDLE for a
task.
This patch adds a user-space API (in the /proc filesystem) to set
PF_WAKE_UP_IDLE for a given task. The /proc/[pid]/sched_wake_up_idle
file can be written to set or clear PF_WAKE_UP_IDLE for that task.
Change-Id: I13a37e740195e503f457ebe291d54e83b230fbeb
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
[rameezmustafa@codeaurora.org: Port to msm-3.18]
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
[joonwoop@codeaurora.org: fixed minor conflict in kernel/sched/fair.c]
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
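A userspace example of driving the new per-task interface; the proc
path is taken from the commit text, and writing "1" or "0" to set or
clear the flag is the assumed convention:

    #include <stdio.h>

    int main(int argc, char **argv)
    {
            char path[64];
            FILE *f;

            if (argc != 3) {
                    fprintf(stderr, "usage: %s <pid> <0|1>\n", argv[0]);
                    return 1;
            }
            /* /proc/[pid]/sched_wake_up_idle sets or clears PF_WAKE_UP_IDLE */
            snprintf(path, sizeof(path), "/proc/%s/sched_wake_up_idle", argv[1]);
            f = fopen(path, "w");
            if (!f) {
                    perror(path);
                    return 1;
            }
            fprintf(f, "%s\n", argv[2]);
            fclose(f);
            return 0;
    }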
|
Add code to calculate the runqueue depth and the iowait depth of a
CPU.
The scheduler calls sched_update_nr_prod() whenever there is a
runqueue change; this function maintains the runqueue average and the
iowait average of that CPU over the interval. Whoever wants the
runqueue average is expected to call sched_get_nr_running_avg()
periodically to get the accumulated runqueue and iowait averages for
all CPUs.
Change-Id: Id8cb2ecf0ed479f090a83ccb72dd59c53fa73e0c
Signed-off-by: Jeff Ohlstein <johlstei@codeaurora.org>
(cherry picked from commit 0299fcaaad80e2c0ac9aa583c95107f6edc27750)
[rameezmustafa@codeaurora.org: Port to msm-3.18]
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
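As an illustration of the intended usage (not from the patch), a
consumer could poll the averages from a delayed work item; the
two-out-parameter prototype shown is an assumption:

    #include <linux/jiffies.h>
    #include <linux/module.h>
    #include <linux/printk.h>
    #include <linux/sched.h>
    #include <linux/workqueue.h>

    static void poll_rq_avg(struct work_struct *w);
    static DECLARE_DELAYED_WORK(rq_avg_work, poll_rq_avg);

    static void poll_rq_avg(struct work_struct *w)
    {
            int run_avg = 0, iowait_avg = 0;

            /* returns averages accumulated since the previous call */
            sched_get_nr_running_avg(&run_avg, &iowait_avg);
            pr_debug("avg nr_running=%d avg nr_iowait=%d\n",
                     run_avg, iowait_avg);

            schedule_delayed_work(&rq_avg_work, HZ);   /* poll every second */
    }

    static int __init rq_avg_init(void)
    {
            schedule_delayed_work(&rq_avg_work, HZ);
            return 0;
    }
    module_init(rq_avg_init);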
|
Remove the global sysctl_sched_prefer_idle flag and replace it with a
per-CPU prefer_idle flag. The per-CPU flag is expected to be the same
for all CPUs in a cluster. It thus provides a convenient means to
disable packing in one cluster while allowing packing in another.
Change-Id: Ie4cc73bb1a55b4eac5697be38e558546161faca1
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
|
Add a sysctl to enable energy awareness at runtime. This is useful
for performance/power tuning, measurements, and debugging. In
addition, this matches the Documentation/scheduler/sched-hmp.txt
documentation.
Change-Id: I0a9185498640d66917b38bf5d55f6c59fc60ad5c
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
[rameezmustafa@codeaurora.org: Port to msm-3.18]
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
|
Power values for CPUs can drop considerably when they go idle. As a
result, the best choice for running a single task in a cluster can
vary quite rapidly: as the task keeps hopping between CPUs, other
CPUs go idle and start being seen as more favorable targets, leading
to the task migrating almost every scheduler tick!
Prevent this by keeping track of when a task started running on a CPU
and allowing task migration in the tick path (migration_needed()) for
energy-efficiency reasons only if the task has run sufficiently long
(as determined by the sysctl_sched_min_runtime variable).
Note that currently the sysctl_sched_min_runtime setting is considered
only in the scheduler_tick()->migration_needed() path and not in the
idle_balance() path. In other words, a task could still be migrated
to another CPU that performed an idle_balance(). This limitation
should not affect the high-frequency migrations typically seen when a
single high-demand task runs on a high-performance CPU.
CRs-Fixed: 756570
Change-Id: I96413b7a81b623193c3bbcec6f3fa9dfec367d99
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
[joonwoop@codeaurora.org: fixed conflict in set_task_cpu() and
__schedule().]
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
|
sysctl_sched_prefer_idle lets the scheduler bias task placement
toward idle CPUs over mostly-idle CPUs. This knob can be useful to
control the balance between power and performance.
Change-Id: Ide6eef684ef94ac8b9927f53c220ccf94976fe67
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
|
It may be desirable to alter the sched_cpu_high_irqload setting
easily, so make it a runtime-tunable value.
Change-Id: I832030eec2aafa101f0f435a4fd2d401d447880d
Signed-off-by: Steve Muckle <smuckle@codeaurora.org>
|
The system initially uses a jiffy-based sched clock. When the platform
registers a new timer for sched_clock, sched_clock can jump backwards.
Once sched_clock_postinit() runs it should be safe to rely on it.
Also, sched_clock_cpu() relies on sched_clock_init() having
completed, and until that happens sched_clock_cpu() returns zero. It
is used in the IRQ accounting path, which window-based stats rely
upon. So do not set window_start until sched_clock_cpu() is working.
Change-Id: Ided349de8f8554f80a027ace0f63ea52b1c38c68
Signed-off-by: Steve Muckle <smuckle@codeaurora.org>
|
Add another dimension to task packing, based on frequency. This patch
adds a per-CPU tunable, rq->mostly_idle_freq, which when set results
in tasks being packed onto a single CPU in the cluster as long as the
cluster frequency is below the set threshold.
Change-Id: I318e9af6c8788ddf5dfcda407d621449ea5343c0
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
|
The tick code already tracks the exact time a tick is expected to
arrive. This can be used to eliminate slack in the jiffy-to-sched_clock
mapping that aligns windows between a caller of sched_set_window and
the scheduler itself.
Change-Id: I9d47466658d01e6857d7457405459436d504a2ca
Signed-off-by: Steve Muckle <smuckle@codeaurora.org>
[joonwoop@codeaurora.org: fixed minor conflict in include/linux/tick.h]
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
|
The sched_mostly_idle_load and sched_mostly_idle_nr_run knobs help
pack tasks onto CPUs to some extent. In some cases it may be
desirable to have different packing limits for different CPUs; for
example, pack to a higher limit on high-performance CPUs than on
power-efficient CPUs.
This patch removes the global mostly_idle tunables and makes them
per-CPU, letting task packing behavior be controlled in a
fine-grained manner.
Change-Id: Ifc254cda34b928eae9d6c342ce4c0f64e531e6c2
Signed-off-by: Srivatsa Vaddagiri <vatsa@codeaurora.org>
|