| author | Viresh Kumar <viresh.kumar@linaro.org> | 2018-06-17 03:10:11 +0300 |
|---|---|---|
| committer | Georg Veichtlbauer <georg@vware.at> | 2023-07-27 17:52:37 +0200 |
| commit | 25c2cb9e8de74aa2460141542cdcb2fd03c94cba (patch) | |
| tree | 76fea1c6d50fe9f8cfa9683e1fb146879a6d0a62 /kernel/sched | |
| parent | 7c6d3914a32b1d536813dd95b1742eaa50065624 (diff) | |
cpufreq: schedutil: Don't set next_freq to UINT_MAX
The schedutil governor sets sg_policy->next_freq to UINT_MAX on certain
occasions to discard the cached next-frequency value:
- In sugov_start(), when the schedutil governor is started for a group
of CPUs.
- And whenever a frequency update must be forced before the rate-limit
duration has elapsed, which happens when:
- the cpufreq policy limits change,
- or the utilization of the DL scheduling class increases.
In these cases, get_next_freq() doesn't return the cached next_freq
value but recalculates the next frequency instead.
But assigning special meaning to a particular frequency value makes the
code less readable and error-prone. We recently fixed a bug where
UINT_MAX was treated as a valid frequency in sugov_update_single().
All we need is a flag to discard the cached value of
sg_policy->next_freq, and need_freq_update already serves that purpose.
Let's reuse it instead of setting next_freq to UINT_MAX.
Change-Id: Ia37ef416d5ecac11fe0c6a2be7e21fdbca708a1a
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Reviewed-by: Joel Fernandes (Google) <joel@joelfernandes.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Yaroslav Furman <yaro330@gmail.com> - backported to 4.4
Diffstat (limited to 'kernel/sched')
| -rw-r--r-- | kernel/sched/cpufreq_schedutil.c | 12 |
1 file changed, 4 insertions(+), 8 deletions(-)
```diff
diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index d625301e83de..a50b0435db39 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -91,12 +91,6 @@ static bool sugov_should_update_freq(struct sugov_policy *sg_policy, u64 time)
 	s64 delta_ns;
 
 	if (unlikely(sg_policy->need_freq_update)) {
-		sg_policy->need_freq_update = false;
-		/*
-		 * This happens when limits change, so forget the previous
-		 * next_freq value and force an update.
-		 */
-		sg_policy->next_freq = UINT_MAX;
 		return true;
 	}
@@ -185,8 +179,10 @@ static unsigned int get_next_freq(struct sugov_policy *sg_policy,
 	freq = (freq + (freq >> 2)) * util / max;
 
-	if (freq == sg_policy->cached_raw_freq && sg_policy->next_freq != UINT_MAX)
+	if (freq == sg_policy->cached_raw_freq && !sg_policy->need_freq_update)
 		return sg_policy->next_freq;
+
+	sg_policy->need_freq_update = false;
 	sg_policy->cached_raw_freq = freq;
 	return cpufreq_driver_resolve_freq(policy, freq);
 }
@@ -838,7 +834,7 @@ static int sugov_start(struct cpufreq_policy *policy)
 		sg_policy->tunables->down_rate_limit_us * NSEC_PER_USEC;
 	update_min_rate_limit_us(sg_policy);
 	sg_policy->last_freq_update_time = 0;
-	sg_policy->next_freq = UINT_MAX;
+	sg_policy->next_freq = 0;
 	sg_policy->work_in_progress = false;
 	sg_policy->need_freq_update = false;
 	sg_policy->cached_raw_freq = 0;
```
