Ethernet drivers implementing both {GS}RXFH and {GS}CHANNELS ethtool ops
incorrectly allow SCHANNELS when it would conflict with the settings
from SRXFH. This occurs because drivers have no way of knowing whether
their Rx flow indirection table has been configured by the user or is
still in its default state.

In addition, drivers currently behave in various ways when increasing
the number of Rx channels. Some drivers will always destroy the Rx flow
indirection table when this occurs, whether it has been set by the user
or not. Others will attempt to preserve the table even if the user has
never modified it from the default driver settings. Neither of these
situations is desirable, because both lead to unexpected behavior or
loss of user configuration.

The correct behavior is to simply return -EINVAL when SCHANNELS would
conflict with the current Rx flow table settings, but only if those
settings were modified by the user. If we required that the new settings
never conflict with the current (default) Rx flow settings, we would
force users to first reduce their Rx flow settings and then reduce the
number of Rx channels.

This patch proposes a solution implemented in net/core/ethtool.c which
ensures that all drivers behave correctly. It checks whether the RXFH
table has been configured to non-default settings and stores this
information in a private netdev flag. When a change in the number of
channels is requested, it first ensures that the current Rx flow table
will not assign flows to now-disabled channels.
Change-Id: I3c7ecdeed66e20a423fd46a0ebc937020e951105
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
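
A minimal sketch of the core check described above, as it might sit in
net/core/ethtool.c; the IFF_RXFH_CONFIGURED flag name and the helper are
assumptions standing in for whatever the actual patch used:

  /* Called from ethtool_set_channels() before the driver op (sketch). */
  static int ethtool_check_rxfh_vs_channels(struct net_device *dev,
                                            const struct ethtool_channels *ch)
  {
      u32 i, indir_size, max_ring = 0;
      u32 *indir;

      /* Table never touched by the user: drivers may rebuild it freely. */
      if (!(dev->priv_flags & IFF_RXFH_CONFIGURED))
          return 0;

      indir_size = dev->ethtool_ops->get_rxfh_indir_size(dev);
      indir = kcalloc(indir_size, sizeof(u32), GFP_KERNEL);
      if (!indir)
          return -ENOMEM;

      dev->ethtool_ops->get_rxfh(dev, indir, NULL, NULL);
      for (i = 0; i < indir_size; i++)
          max_ring = max(max_ring, indir[i]);
      kfree(indir);

      /* Refuse counts that would leave configured flows on dead rings. */
      if (ch->combined_count + ch->rx_count <= max_ring)
          return -EINVAL;
      return 0;
  }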
As suggested by Eric, these helpers should take a const dev parameter.
Suggested-by: Eric Dumazet <eric.dumazet@gmail.com>
Change-Id: I320e01f441a2b266680cdaefc5eb659f329143fb
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Some code does not care whether a device is a bond slave or a team
port and treats them the same, as generic LAG ports.
Change-Id: I9f1940897da8079deaec8d59bfba5e8a2c91c230
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Some code does not care whether the master is a bond or a team device
and treats them the same, as a generic LAG master.
Change-Id: I97803f67bdaf42a2cbf3eb6adf417c1aeebb5be1
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Similar to the other helpers, callers can use this to find out whether
a device is a team port.
Change-Id: Id39d3c7a188de21768a4f18190e25f27e5846b28
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Similar to the other helpers, callers can use this to find out whether
a device is a team master.
Change-Id: I8301442073b4af1e19a1ee207fa5055a3ab5c908
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
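
Taken together, this and the three preceding helper commits likely add a
small family like the sketch below; the IFF_TEAM and IFF_TEAM_PORT
priv_flags bits and the existing bonding helpers are the standard ones,
but this is an illustration rather than the verbatim patches:

  /* include/linux/netdevice.h (sketch) */
  static inline bool netif_is_team_master(const struct net_device *dev)
  {
      return dev->priv_flags & IFF_TEAM;
  }

  static inline bool netif_is_team_port(const struct net_device *dev)
  {
      return dev->priv_flags & IFF_TEAM_PORT;
  }

  /* Generic LAG views: bond or team, callers don't care which. */
  static inline bool netif_is_lag_master(const struct net_device *dev)
  {
      return netif_is_bond_master(dev) || netif_is_team_master(dev);
  }

  static inline bool netif_is_lag_port(const struct net_device *dev)
  {
      return netif_is_bond_slave(dev) || netif_is_team_port(dev);
  }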
sound/soc/codecs/wcd_cpe_services.c:667:17: error: implicit
conversion from enumeration type 'enum cpe_svc_result' to different
enumeration type 'enum cmi_api_result' [-Werror,-Wenum-conversion]
notif.result = result;
~ ^~~~~~
sound/soc/codecs/wcd_cpe_services.c:1358:8: error: implicit
conversion from enumeration type 'enum cpe_svc_result' to different
enumeration type 'enum cpe_process_result' [-Werror,-Wenum-conversion]
rc = cpe_send_msg_to_inbox(t_info, 0, m);
~ ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2 errors generated.
Change-Id: Ib9fce60017066e9c96e79195d7dba9ffb9177148
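
The warnings flag assignments between distinct enum types. Assuming the
numeric values are intentionally shared between the enums, the usual fix
is to make the conversions explicit; whether this change used casts or
retyped the fields is an assumption. The same pattern applies to the
icnss and rmnet_ipa diagnostics below:

  notif.result = (enum cmi_api_result)result;
  rc = (enum cpe_process_result)cpe_send_msg_to_inbox(t_info, 0, m);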
../drivers/soc/qcom/icnss.c:3418:37: warning: implicit conversion from
enumeration type 'enum icnss_driver_mode' to different enumeration type
'enum wlfw_driver_mode_enum_v01' [-Wenum-conversion]
ret = wlfw_wlan_mode_send_sync_msg(mode);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^~~~
1 warning generated.
Change-Id: I7ff0326411b4b2a6e020cf50bc655ec26c1e4992
Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
Change-Id: I70ebba2c049700cebd94a23df33cb2efbb588875
clang-15 complains:
drivers/platform/msm/ipa/ipa_v3/rmnet_ipa.c:457:41: error: implicit
conversion from enumeration type 'enum ipa_ip_type_enum_v01'
to different enumeration type 'enum ipa_ip_type'
[-Werror,-Wenum-conversion]
q6_ul_flt_rule_ptr->ip = flt_spec_ptr->ip_type;
~ ~~~~~~~~~~~~~~^~~~~~~
drivers/platform/msm/ipa/ipa_v3/rmnet_ipa.c:459:45: error: implicit
conversion from enumeration type 'enum ipa_filter_action_enum_v01'
to different enumeration type 'enum ipa_flt_action'
[-Werror,-Wenum-conversion]
q6_ul_flt_rule_ptr->action = flt_spec_ptr->filter_action;
~ ~~~~~~~~~~~~~~^~~~~~~~~~~~~
2 errors generated.
Change-Id: I7a40b0d7b082836670b6551f2a04aa141d240153
This is trivial to do:
- add a flags argument to simple_rename()
- check that flags contains nothing other than RENAME_NOREPLACE
- assign simple_rename() to .rename2 instead of .rename
Filesystems converted:
hugetlbfs, ramfs, bpf.
Debugfs uses simple_rename() to implement debugfs_rename(), which is for
debugfs instances to rename files internally, not for userspace filesystem
access. For this case pass zero flags to simple_rename().
Change-Id: I1a46ece3b40b05c9f18fd13b98062d2a959b76a0
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Alexei Starovoitov <ast@kernel.org>
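
A sketch of the converted helper in fs/libfs.c, per the list above:

  int simple_rename(struct inode *old_dir, struct dentry *old_dentry,
                    struct inode *new_dir, struct dentry *new_dentry,
                    unsigned int flags)
  {
      /* Only RENAME_NOREPLACE is supported by the trivial implementation. */
      if (flags & ~RENAME_NOREPLACE)
          return -EINVAL;

      /* ... existing rename body unchanged ... */
      return 0;
  }

The converted filesystems then point .rename2 (rather than .rename) at
it, and debugfs_rename() passes 0 for flags.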
Change-Id: Ifb8fe11c1d647ab1286e9bb0a63ee0c83641f9a8
Fix a compile error with mdie[SIR_MDIE_SIZE]; use
mdie[] instead.
Change-Id: I934d3f02a19b511583141deeca7af5b4d4c0ef30
CRs-Fixed: 3364146
cgroup_path() returns a length, not a char *.
Change-Id: I8bdfcc0fc58789aa23f730866f27fbb932b24be1
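
In other words, after the upstream strlcpy-style conversion, callers
treat the return value as a length (sketch):

  char buf[PATH_MAX];
  int len = cgroup_path(cgrp, buf, sizeof(buf)); /* a length, not a pointer */

  if (len >= PATH_MAX)
      pr_warn("cgroup path truncated\n");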
Change-Id: I1b61a6df9d028fdff86bca6c6f3642a050e8f005
Change-Id: Iedf64ccf3ab2364a5d1ba04469afd89be010e835
Change-Id: I5e80493d5c7600ecdafc0b78e2c72be27095fae0
Change-Id: I756cfd498312be1ecc264576af8c874c89d93813
Change-Id: I55fbab53a24e395a6f34c699d5b55ad38d023c24
Change-Id: If0e72ed9ae5ac9ea2db67f92e34dbf9675049d26
Change-Id: I59c35db0bbf8d15e8b814ae7b729c8301ac4ed97
Change-Id: I8bfc0e53b4bd347adaa298594402a2210aed3b49
commit 14831fad73f5ac30ac61760487d95a538e6ab3cb upstream.
When running the following command without arm-linux-gnueabi-gcc in
one's $PATH, the following warning is observed:
$ ARCH=arm64 CROSS_COMPILE_COMPAT=arm-linux-gnueabi- make -j72 LLVM=1 mrproper
make[1]: arm-linux-gnueabi-gcc: No such file or directory
This is because KCONFIG is not run for mrproper, so CONFIG_CC_IS_CLANG
is not set, and we end up eagerly evaluating various variables that try
to invoke CC_COMPAT.
This is a similar problem to what was observed in
commit dc960bfeedb0 ("h8300: suppress error messages for 'make clean'")
Reported-by: Lucas Henneman <henneman@google.com>
Suggested-by: Masahiro Yamada <masahiroy@kernel.org>
Change-Id: I672930f558d087230c119ece518b83cac7d8baa7
Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Reviewed-by: Nathan Chancellor <nathan@kernel.org>
Tested-by: Nathan Chancellor <nathan@kernel.org>
Link: https://lore.kernel.org/r/20211019223646.1146945-4-ndesaulniers@google.com
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 8d75785a814241587802655cc33e384230744f0c ]
Add the 32-bit vdso Makefile to the vdso_install rule so that 'make
vdso_install' installs the 32-bit compat vdso when it is compiled.
Fixes: a7f71a2c8903 ("arm64: compat: Add vDSO")
Change-Id: I554988049d9f4395d226988a410a0a68eeed304d
Signed-off-by: Stephen Boyd <swboyd@chromium.org>
Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Acked-by: Will Deacon <will@kernel.org>
Cc: Vincenzo Frascino <vincenzo.frascino@arm.com>
Link: https://lore.kernel.org/r/20200818014950.42492-1-swboyd@chromium.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
KBUILD_CPPFLAGS is defined differently depending on whether the main
compiler is clang or not. This means that it is not possible to build
the compat vDSO with GCC if the rest of the kernel is built with clang.
Define VDSO_CPPFLAGS directly to break this dependency and allow a clang
kernel to build a compat vDSO with GCC:
$ make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- \
CROSS_COMPILE_COMPAT=arm-linux-gnueabihf- CC=clang \
COMPATCC=arm-linux-gnueabihf-gcc
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Change-Id: Iaa7f6197bc99ca5c8162c6e34e3dd38c31d894cd
Signed-off-by: Will Deacon <will@kernel.org>
The jump labels are not used in vdso32, since runtime patching of them
is not possible there.
Remove the configuration option from the Makefile.
Cc: Will Deacon <will@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Change-Id: Ie0f311218a761a6b18b3ed3cccf30fd10f440407
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
The globally declared mmap_handle can be left dangling
when the handle is freed by the calling function.
Fix this, and also add a check to make sure
mmap_handle is accessed legally.
Change-Id: I367f8a41339aa0025b545b125ee820220efedeee
Signed-off-by: Soumya Managoli <quic_c_smanag@quicinc.com>
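
A hypothetical sketch of the two parts of the fix; the function names
and the handle's type here are invented for illustration:

  static void *mmap_handle; /* the global from the message */

  static void release_mmap_handle(void)
  {
      kfree(mmap_handle);
      mmap_handle = NULL; /* don't leave the global dangling */
  }

  static int use_mmap_handle(void)
  {
      if (!mmap_handle) /* validate before every access */
          return -EINVAL;
      /* ... */
      return 0;
  }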
Add a check for the voice session index.
Change-Id: Ifff36add5d62f2fdc3395de1447075d297f2c2df
Signed-off-by: Soumya Managoli <quic_c_smanag@quicinc.com>
There is no check that the VoIP packet's pkt_len contains the
minimum required data. This can lead to an integer underflow.
Add a check for this.
Change-Id: I4f57eb125967d52ad8da60d21a440af1f81d2579
Signed-off-by: Soumya Managoli <quic_c_smanag@quicinc.com>
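
A hypothetical sketch of such a check; the minimum-length constant is
invented:

  /* Reject runt packets before any 'pkt_len - header' arithmetic. */
  if (pkt_len < MIN_VOIP_PKT_LEN)
      return -EINVAL;

  payload_len = pkt_len - MIN_VOIP_PKT_LEN; /* cannot underflow now */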
Currently, the STA disconnects from the AP if it receives an
invalid channel in the CSA IE. The STA shouldn't disconnect
in this case, as the request may come from a spoofed AP.
Ignore such CSA requests: if the request is from a spoofed AP
nothing is lost, and if it is from a genuine AP a heartbeat
failure will follow and result in disconnection anyway. After
disconnection the DUT may reconnect to the same or another AP.
Change-Id: I840508dd27d8c313a3e8f74c4e1f5aa64eecf6f9
CRs-Fixed: 3390251
If the next_freq field of struct sugov_policy is set to UINT_MAX,
it shouldn't be used for updating the CPU frequency (this is a
special "invalid" value), but after commit b7eaf1aab9f8 (cpufreq:
schedutil: Avoid reducing frequency of busy CPUs prematurely) it
may be passed as the new frequency to sugov_update_commit() in
sugov_update_single().
Fix that by adding an extra check for the special UINT_MAX value
of next_freq to sugov_update_single().
Fixes: b7eaf1aab9f8 (cpufreq: schedutil: Avoid reducing frequency of busy CPUs prematurely)
Reported-by: Viresh Kumar <viresh.kumar@linaro.org>
Cc: 4.12+ <stable@vger.kernel.org> # 4.12+
Change-Id: Idf4ebe9e912f983598255167d2065e47562ab83d
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
(cherry picked from commit 97739501f207efe33145b918817f305b822987f8)
Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
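
The essence of the fix in sugov_update_single() (sketch):

  /* Keep a busy CPU's current frequency only if the cached value is real. */
  if (busy && next_f < sg_policy->next_freq &&
      sg_policy->next_freq != UINT_MAX)
      next_f = sg_policy->next_freq;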
The schedutil driver sets sg_policy->next_freq to UINT_MAX on certain
occasions to discard the cached value of next freq:
- In sugov_start(), when the schedutil governor is started for a group
of CPUs.
- And whenever we need to force a freq update before rate-limit
duration, which happens when:
- there is an update in cpufreq policy limits.
- Or when the utilization of DL scheduling class increases.
In return, get_next_freq() doesn't return a cached next_freq value but
recalculates the next frequency instead.
But giving special meaning to a particular frequency value makes the
code less readable and error-prone. We recently fixed a bug where the
UINT_MAX value was considered a valid frequency in
sugov_update_single().
All we need is a flag which can be used to discard the value of
sg_policy->next_freq, and we already have need_freq_update for that. Let's
reuse it instead of setting next_freq to UINT_MAX.
Change-Id: Ia37ef416d5ecac11fe0c6a2be7e21fdbca708a1a
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Reviewed-by: Joel Fernandes (Google) <joel@joelfernandes.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Yaroslav Furman <yaro330@gmail.com> - backported to 4.4
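
The resulting shape of get_next_freq()'s caching logic (sketch):

  /* need_freq_update replaces the old 'next_freq == UINT_MAX' magic. */
  if (freq == sg_policy->cached_raw_freq && !sg_policy->need_freq_update)
      return sg_policy->next_freq;

  sg_policy->need_freq_update = false;
  sg_policy->cached_raw_freq = freq;
  return cpufreq_driver_resolve_freq(policy, freq);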
Currently there is a chance of a schedutil cpufreq update request to be
dropped if there is a pending update request. This pending request can
be delayed if there is a scheduling delay of the irq_work and the wake
up of the schedutil governor kthread.
A very bad scenario is when a schedutil request has just been made, such
as one to reduce the CPU frequency, and then a newer request to increase
the CPU frequency (even an urgent sched-deadline frequency increase) is
dropped, even though the rate limits suggest that it's OK to process a
request. This is because of the way the work_in_progress flag
is used.
This patch improves the situation by allowing new requests to happen
even though the old one is still being processed. Note that in this
approach, if an irq_work was already issued, we just update next_freq
and don't bother to queue another request so there's no extra work being
done to make this happen.
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Acked-by: Juri Lelli <juri.lelli@redhat.com>
Change-Id: I7b6e19971b2ce3bd8e8336a5a4bc1acb920493b5
Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Yaroslav Furman <yaro330@gmail.com> - backport to 4.4
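
The key piece is on the kthread side, which now rereads next_freq under
the update lock so a request made while the kthread was waking up is not
lost (sketch):

  static void sugov_work(struct kthread_work *work)
  {
      struct sugov_policy *sg_policy =
          container_of(work, struct sugov_policy, work);
      unsigned int freq;
      unsigned long flags;

      /* next_freq may have been updated again since the irq_work fired. */
      raw_spin_lock_irqsave(&sg_policy->update_lock, flags);
      freq = sg_policy->next_freq;
      sg_policy->work_in_progress = false;
      raw_spin_unlock_irqrestore(&sg_policy->update_lock, flags);

      mutex_lock(&sg_policy->work_lock);
      __cpufreq_driver_target(sg_policy->policy, freq, CPUFREQ_RELATION_L);
      mutex_unlock(&sg_policy->work_lock);
  }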
A more energy-efficient update of the IO wait boosting mechanism was
introduced in:
commit a5a0809 ("cpufreq: schedutil: Make iowait boost more energy
efficient")
where the boost value is expected to be:
- doubled at each successive wakeup from IO, starting from the minimum
frequency supported by a CPU
- reset when a CPU is not updated for more than one tick, by either
disabling the IO wait boost or resetting its value to the minimum
frequency if the new update requires an IO boost
This approach is supposed to "ignore" boosting for sporadic wakeups from
IO, while still getting the frequency boosted to the maximum to benefit
long sequences of IO wakeups.
However, these assumptions are not always satisfied.
For example, when an IO-boosted CPU enters idle for more than one tick
and then wakes up after an IO wait, sugov_set_iowait_boost() checks the
IOWAIT flag first and thus keeps doubling the iowait boost instead of
restarting from the minimum frequency value.
This misbehavior happens mainly on non-shared frequency domains, thus
defeating the energy-efficiency optimization, but it can also happen on
shared frequency domain systems.
Fix this issue in sugov_set_iowait_boost() by:
- first checking the IO wait boost reset conditions, eventually
resetting the boost value
- then applying the correct IO boost value if required by the caller
Fixes: a5a0809 (cpufreq: schedutil: Make iowait boost more energy
efficient)
Reported-by: Viresh Kumar <viresh.kumar@linaro.org>
Change-Id: I196b5c464cd43164807c12b2dbc2e5d814bf1d33
Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
Reviewed-by: Joel Fernandes (Google) <joel@joelfernandes.org>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Yaroslav Furman <yaro330@gmail.com> - backport to 4.4
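
A sketch of the reordered helper:

  static void sugov_set_iowait_boost(struct sugov_cpu *sg_cpu, u64 time,
                                     unsigned int flags)
  {
      /* 1) Reset a stale boost if the CPU was idle for over a tick. */
      if (sg_cpu->iowait_boost &&
          (s64)(time - sg_cpu->last_update) > TICK_NSEC)
          sg_cpu->iowait_boost = 0;

      /* 2) Only then start or double the boost on an IO-wait wakeup. */
      if (flags & SCHED_CPUFREQ_IOWAIT) {
          if (sg_cpu->iowait_boost)
              sg_cpu->iowait_boost = min(sg_cpu->iowait_boost << 1,
                                         sg_cpu->iowait_boost_max);
          else
              sg_cpu->iowait_boost = sg_cpu->sg_policy->policy->min;
      }
  }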
When lower capacity CPUs are load balancing and considering to pull
something from a higher capacity group, we should not pull tasks from a
cpu with only one task running as this is guaranteed to impede progress
for that task. If there is more than one task running, load balance in
the higher capacity group would have already made any possible moves to
resolve imbalance and we should make better use of system compute
capacity by moving a task if we still have more than one running.
cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
[from https://lore.kernel.org/lkml/1530699470-29808-11-git-send-email-morten.rasmussen@arm.com/]
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Change-Id: Ib86570abdd453a51be885b086c8d80be2773a6f2
Signed-off-by: Adam W. Willis <return.of.octobot@gmail.com>
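
The corresponding guard in find_busiest_queue() looks roughly like this
(sketch; the asymmetric-capacity flag name follows later kernels):

  /* Never pull the only runnable task from a higher-capacity CPU. */
  if (env->sd->flags & SD_ASYM_CPUCAPACITY &&
      capacity_of(env->dst_cpu) < capacity_of(i) &&
      rq->nr_running == 1)
      continue;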
Change-Id: Icd8f2cde6cac1b7bb07e54b8e6989c65eea4e4a5
joshuous: Adapted to work with CAF's "softirq: defer softirq processing
to ksoftirqd if CPU is busy with RT" commit.
ajaivasudeve: adapted for the commit "softirq: Don't defer all softirq during RT task"
We're finding audio glitches caused by audio-producing RT tasks
that are either interrupted to handle softirqs or scheduled onto
CPUs that are handling softirqs.
In a previous patch, we attempted to catch many cases of the
latter problem, but it's clear that we are still losing
significant numbers of races in some apps.
This patch attempts to address both problems:
1. It prohibits handling softirq's when interrupting
an RT task, by delaying the softirq to the ksoftirqd
thread.
2. It attempts to reduce the most common windows in which
we lose the race between scheduling an RT task on a remote
core and starting to handle softirq's on that core.
We still lose some races, but we lose significantly fewer.
(And we don't want to introduce any heavyweight forms
of synchronization on these paths.)
Bug: 64912585
Change-Id: Ida89a903be0f1965552dd0e84e67ef1d3158c7d8
Signed-off-by: John Dias <joaodias@google.com>
Signed-off-by: joshuous <joshuous@gmail.com>
Signed-off-by: ajaivasudeve <ajaivasudeve@gmail.com>
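
The first half boils down to a check like this in the softirq entry
path (sketch; the exact placement differs between the CAF and upstream
variants mentioned above):

  /* softirq.c (sketch): don't run softirqs on top of an RT task. */
  if (rt_task(current)) {
      wakeup_softirqd(); /* let ksoftirqd handle the pending work */
      return;
  }
  __do_softirq();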
For effective interplay between RT and fair tasks, enable SCHED_FIFO
for UI and Render tasks. Critical for improving user experience.
bug: 30377696
Change-Id: I2a210c567c3f5c7edbdd7674244822f848e6d0cf
Signed-off-by: Srinath Sridharan <srinathsr@google.com>
(cherry picked from commit dfe0f16b6fd3a694173c5c62cf825643eef184a3)
With a shared policy in place, when one of the CPUs in the policy is
hotplugged out and then brought back online, sugov_stop and
sugov_start are called in order.
sugov_stop removes utilization hooks for each CPU in the policy and
does nothing else in the for_each_cpu loop. sugov_start on the other
hand iterates through the CPUs in the policy and re-initializes the
per-cpu structure _and_ adds the utilization hook. This implies that
the scheduler is allowed to invoke a CPU's utilization update hook
when the rest of the per-cpu structures have yet to be re-inited.
Apart from some strange values in tracepoints this doesn't cause a
problem, but if we do end up accessing a pointer from the per-cpu
sugov_cpu structure somewhere in the sugov_update_shared path,
we will likely see crashes since the memset for another CPU in the
policy is free to race with sugov_update_shared from the CPU that is
ready to go. So let's fix this now to first init all per-cpu
structures, and then add the per-cpu utilization update hooks all at
once.
Change-Id: I399e0e159b3db3ae3258843c9231f92312fe18ef
Signed-off-by: Vikram Mulukutla <markivx@codeaurora.org>
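
A sketch of the reordered sugov_start():

  static int sugov_start(struct cpufreq_policy *policy)
  {
      struct sugov_policy *sg_policy = policy->governor_data;
      unsigned int cpu;

      /* ... rate-limit and flag setup elided ... */

      /* Pass 1: fully initialize every per-cpu structure. */
      for_each_cpu(cpu, policy->cpus) {
          struct sugov_cpu *sg_cpu = &per_cpu(sugov_cpu, cpu);

          memset(sg_cpu, 0, sizeof(*sg_cpu));
          sg_cpu->sg_policy = sg_policy;
      }

      /* Pass 2: only now expose the update hooks to the scheduler. */
      for_each_cpu(cpu, policy->cpus) {
          struct sugov_cpu *sg_cpu = &per_cpu(sugov_cpu, cpu);

          cpufreq_add_update_util_hook(cpu, &sg_cpu->update_util,
                                       policy_is_shared(policy) ?
                                       sugov_update_shared :
                                       sugov_update_single);
      }
      return 0;
  }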
Currently, when all the related CPUs of a policy go offline or the
governor is switched, the cpufreq framework calls sugov_exit(), which
frees the governor tunables. When any of the related CPUs comes back
online or the governor is switched back to schedutil, sugov_init() gets
called, which allocates a fresh set of tunables that are set to
default values. This can cause the userspace settings of those
tunables to be lost across governor switches or when an entire
cluster is hotplugged out.
To prevent this, save the tunable values on governor exit. Restore
these values to the newly allocated tunables on governor init.
Change-Id: I671d4d0e1a4e63e948bfddb0005367df33c0c249
Signed-off-by: Rohit Gupta <rohgup@codeaurora.org>
[Caching and restoring different tunables.]
Signed-off-by: joshuous <joshuous@gmail.com>
Change-Id: I852ae2d23f10c9337e7057a47adcc46fe0623c6a
Signed-off-by: joshuous <joshuous@gmail.com>
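
A sketch of the save/restore pattern; the cache variable and which
tunables are preserved are assumptions (the backport note above says a
different set of tunables was cached):

  static struct sugov_tunables saved_tunables; /* survives governor exit */
  static bool have_saved_tunables;

  static void sugov_tunables_save(const struct sugov_tunables *t)
  {
      saved_tunables.rate_limit_us = t->rate_limit_us; /* field assumed */
      have_saved_tunables = true;
  }

  static void sugov_tunables_restore(struct sugov_tunables *t)
  {
      if (have_saved_tunables)
          t->rate_limit_us = saved_tunables.rate_limit_us;
  }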
Return immediately upon memory allocation failure.
Change-Id: I30947d55f0f4abd55c51e42912a0762df57cbc1d
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
* Render - added back missing header
When tasks come and go from a runqueue quickly, this can lead to boost
being applied and removed quickly which sometimes means we cannot raise
the CPU frequency again when we need to (due to the rate limit on
frequency updates). This has proved to be a particular issue for RT tasks
and alternative methods have been used in the past to work around it.
This is an attempt to solve the issue for all task classes and cpufreq
governors by introducing a generic mechanism in schedtune to retain
the max boost level from task enqueue for a minimum period - defined
here as 50ms. This timeout was determined experimentally and is not
configurable.
A sched_feat guards the application of this to tasks: in the default
configuration, boost hold is only applied to tasks which have RT
policy. Change SCHEDTUNE_BOOST_HOLD_ALL to true to apply it to all
tasks regardless of class.
It works like so:
Every task enqueue (in an allowed class) stores a cpu-local timestamp.
If the task is not a member of an allowed class (all or RT depending
upon feature selection), the timestamp is not updated.
The boost group will stay active, regardless of tasks present, until
50ms beyond the last timestamp stored. We also store the timestamp
of the active boost group to avoid unnecessarily revisiting the boost
groups when checking the CPU boost level.
If the timestamp is more than 50ms in the past when we check boost then
we re-evaluate the boost groups for that CPU, taking into account the
timestamps associated with each group.
Idea based on rt-boost-retention patches from Joel.
Change-Id: I52cc2d2e82d1c5aa03550378c8836764f41630c1
Suggested-by: Joel Fernandes <joelaf@google.com>
Reviewed-by: Patrick Bellasi <patrick.bellasi@arm.com>
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: RenderBroken <zkennedy87@gmail.com>
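
The hold logic reduces to a timestamp comparison (sketch, mirroring the
description above):

  #define SCHEDTUNE_BOOST_HOLD_NS 50000000ULL /* 50ms, not configurable */

  /* True once 50ms have passed since the last qualifying enqueue. */
  static inline bool schedtune_boost_timeout(u64 now, u64 ts)
  {
      return (now - ts) > SCHEDTUNE_BOOST_HOLD_NS;
  }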
The current implementation of the energy-aware wake-up path relies on
find_best_target() to select an ordered list of CPU candidates for task
placement. The first candidate of the list saving energy is then chosen,
disregarding all the others to avoid the overhead of an expensive
energy_diff.
With the recent refactoring of select_energy_cpu_idx(), the cost of
exploring multiple CPUs has been reduced, hence offering the opportunity
to select the most energy-efficient candidate at a lower cost. This
commit seizes that opportunity by allowing select_energy_cpu_idx() to
ignore the order of CPUs returned by find_best_target() and pick the
best candidate energy-wise.
As this functionality is still considered as experimental, it is hidden
behind a sched_feature named FBT_STRICT_ORDER (like the equivalent
feature in Android 4.14) which defaults to true, hence keeping the
current behaviour by default.
Change-Id: I0cb833bfec1a4a053eddaff1652c0b6cad554f97
Suggested-by: Patrick Bellasi <patrick.bellasi@arm.com>
Suggested-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Quentin Perret <quentin.perret@arm.com>
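
As a sched_feature it would be declared along these lines (sketch):

  /* kernel/sched/features.h (sketch): true keeps the strict order. */
  SCHED_FEAT(FBT_STRICT_ORDER, true)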
We are using an incorrect index while initializing the nrg_delta for
the previous CPU in select_energy_cpu_idx(). This initialization
itself is not needed, as the nrg_delta for the previous CPU is already
initialized to 0 while preparing the energy_env struct.
Change-Id: Iee4e2c62f904050d2680a0a1df646d4d515c62cc
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
Most walt variables in hot code paths are __read_mostly and grouped
together in the .data section, including these variables that show
up as frequently accessed in gem5 simulation: walt_ravg_window,
walt_disabled, walt_account_wait_time, and
walt_freq_account_wait_time.
The exception is walt_ktime_suspended, which is also accessed in many
of the same hot paths. It is also almost entirely accessed by reads.
Move it to __read_mostly in hopes of keeping it in the same cache line
as the other hot data.
Change-Id: I8c9e4ee84e5a0328b943752ee9ed47d4e006e7de
Signed-off-by: Todd Poynor <toddpoynor@google.com>
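
The change itself is a one-liner (sketch):

  /* kernel/sched/walt.c (sketch): co-locate with the other hot data. */
  static bool walt_ktime_suspended __read_mostly;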
When the normal utilization value for all the CPUs in a policy is 0, the
current aggregation scheme of using a > check will result in the aggregated
utilization being 0 and the max value being 1.
This is not a problem for upstream code. However, since we also use other
statistics provided by WALT to update the aggregated utilization value
across all CPUs in a policy, we can end up with a non-zero aggregated
utilization while the max remains as 1.
Then when get_next_freq() tries to compute the frequency using:
max-frequency * 1.25 * (util / max)
it ends up with a frequency that is greater than max-frequency. So the
policy frequency jumps to max-frequency.
By changing the aggregation check to >=, we make sure that the max value is
updated with something reported by the scheduler for a CPU in that policy.
With the correct max, we can continue using the WALT specific statistics
without spurious jumps to max-frequency.
Change-Id: I14996cd796191192ea112f949dc42450782105f7
Signed-off-by: Saravana Kannan <skannan@codeaurora.org>
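
The one-character essence of the change, in the policy-wide aggregation
loop (sketch):

  /* '>=' guarantees (util, max) is taken from some CPU in the policy,
   * so max can never be left at its dummy initial value of 1. */
  if (j_util * max >= j_max * util) {
      util = j_util;
      max = j_max;
  }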
Change-Id: Ibbcb281c2f048e2af0ded0b1cbbbedcc49b29e45
Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
schedtune sets 0 as the floor for calculating CPU max boost, so
negative schedtune.boost values do not affect CPU frequency
decisions. Remove this restriction to allow userspace to apply
negative boosts when appropriate.
Also change treatment of the root boost group to match the other
groups, so it only affects the CPU boost if it has runnable tasks on
the CPU.
Test: set all boosts negative; sched_boost_cpu trace events show
negative CPU margins.
Change-Id: I89f3470299aef96a18797c105f02ebc8f367b5e1
Signed-off-by: Connor O'Brien <connoro@google.com>
In commit ac087abe1358 ("Merge android-msm-8998-4.4-common into
android-msm-muskie-4.4"),
.allow_attach = subsys_cgroup_allow_attach,
was dropped in the merge.
This patch brings back allow_attach,
but with the marlin-3.18 behavior of allowing all cgroup changes
rather than the subsys_cgroup_allow_attach behavior of
requiring CAP_SYS_NICE.
Bug: 36592053
Change-Id: Iaa51597b49a955fd5709ca504e968ea19a9ca8f5
Signed-off-by: Andres Oportus <andresoportus@google.com>
Signed-off-by: Andrew Chant <achant@google.com>
When we are calculating what the impact of placing a task on a specific
cpu is, we should include the information that there might be a minimum
capacity imposed upon that cpu which could change the performance and/or
energy cost decisions.
When choosing an active target CPU, favour CPUs that won't end up
running at a high OPP due to a min capacity cap imposed by external
actors.
Change-Id: Ibc3302304345b63107f172b1fc3ffdabc19aa9d4
Signed-off-by: Ionela Voinescu <ionela.voinescu@arm.com>
Signed-off-by: Chris Redpath <chris.redpath@arm.com>