path: root/kernel/sched/fair.c
...
| * | sched/fair: remove erroneous RCU_LOCKDEP_WARN from start_cpu() (Dietmar Eggemann, 2017-10-20)
    Fixes: https://bugs.linaro.org/show_bug.cgi?id=3075
    Change-Id: I62d714fc4b9366a9b2535649aa92d1edc840cf94
    Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org>
    Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
    Signed-off-by: Brendan Jackman <brendan.jackman@arm.com>
    Signed-off-by: Chris Redpath <chris.redpath@arm.com>
| * | sched/fair: prevent meaningless active migration (Joonwoo Park, 2017-10-19)
    At present, need_active_balance() determines whether an active upmigration is needed using capacity_of(). A CPU's capacity may be reduced by RT pressure, so distinguishing capability differences with capacity_of() alone can lead to suboptimal active migrations towards less capable CPUs. Use capacity_orig_of() in addition to capacity_of() to distinguish differently capable CPUs, thus avoiding placing tasks on less capable CPUs because of instantaneous RT pressure.

    Change-Id: I3e1435246a8edc3ad618ef98a34866cfbd8c16a5
    Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
    [markivx: Reworked the commit text a bit]
    Signed-off-by: Vikram Mulukutla <markivx@codeaurora.org>
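    The check this suggests, as a hedged C sketch (capacity_of() and capacity_orig_of() are the helpers named above; the wrapper and its name are assumptions, not the actual patch):

        #include <stdbool.h>

        /* Declarations standing in for the kernel/sched/fair.c helpers. */
        extern unsigned long capacity_of(int cpu);      /* capacity minus RT pressure */
        extern unsigned long capacity_orig_of(int cpu); /* original CPU capacity */

        static bool upmigration_is_meaningful(int src_cpu, int dst_cpu)
        {
            /* The destination must be bigger by design, not merely less RT-loaded. */
            if (capacity_orig_of(dst_cpu) <= capacity_orig_of(src_cpu))
                return false;

            /* It must also offer more usable capacity right now. */
            return capacity_of(dst_cpu) > capacity_of(src_cpu);
        }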
* | | Merge android-4.4@a8935c9 (v4.4.87) into msm-4.4 (Blagovest Kolenichev, 2017-09-21)
    * refs/heads/tmp-a8935c9:
      Linux 4.4.87
      crypto: algif_skcipher - only call put_page on referenced and used pages
      epoll: fix race between ep_poll_callback(POLLFREE) and ep_free()/ep_remove()
      kvm: arm/arm64: Force reading uncached stage2 PGD
      kvm: arm/arm64: Fix race in resetting stage2 PGD
      drm/ttm: Fix accounting error when fail to get pages for pool
      xfrm: policy: check policy direction value
      wl1251: add a missing spin_lock_init()
      CIFS: remove endian related sparse warning
      CIFS: Fix maximum SMB2 header size
      alpha: uapi: Add support for __SANE_USERSPACE_TYPES__
      cpuset: Fix incorrect memory_pressure control file mapping
      cpumask: fix spurious cpumask_of_node() on non-NUMA multi-node configs
      ceph: fix readpage from fscache
      i2c: ismt: Return EMSGSIZE for block reads with bogus length
      i2c: ismt: Don't duplicate the receive length for block reads
      irqchip: mips-gic: SYNC after enabling GIC region
      ANDROID: cpufreq-dt: Set sane defaults for schedutil rate limits
      BACKPORT: cpufreq: schedutil: Use policy-dependent transition delays
      FROMLIST: binder: fix an ret value override
      FROMLIST: binder: fix memory corruption in binder_transaction binder
      Linux 4.4.86
      drm/i915: fix compiler warning in drivers/gpu/drm/i915/intel_uncore.c
      scsi: sg: reset 'res_in_use' after unlinking reserved array
      scsi: sg: protect accesses to 'reserved' page array
      arm64: fpsimd: Prevent registers leaking across exec
      x86/io: Add "memory" clobber to insb/insw/insl/outsb/outsw/outsl
      arm64: mm: abort uaccess retries upon fatal signal
      lpfc: Fix Device discovery failures during switch reboot test.
      p54: memset(0) whole array
      lightnvm: initialize ppa_addr in dev_to_generic_addr()
      gcov: support GCC 7.1
      gcov: add support for gcc version >= 6
      i2c: jz4780: drop superfluous init
      btrfs: remove duplicate const specifier
      ALSA: au88x0: Fix zero clear of stream->resources
      scsi: isci: avoid array subscript warning
      sched: WALT: fix window mis-alignment
      sched: EAS: kill incorrect nohz idle cpu kick
      sched: EAS: fix incorrect energy delta calculation due to rounding error
      sched: EAS/WALT: take into account of waking task's load
      cpufreq: sched: WALT: don't apply capacity margin twice
      sched: WALT: fix potential overflow
      sched: EAS: schedfreq: fix CPU util over estimation
      sched: EAS/WALT: use cr_avg instead of prev_runnable_sum
      sched: WALT: fix broken cumulative runnable average accounting
      sched: deadline: WALT: account cumulative runnable avg
      FROMLIST: android: binder: Add page usage in binder stats
      FROMLIST: android: binder: Add shrinker tracepoints
      FROMLIST: android: binder: Add global lru shrinker to binder
      FROMLIST: android: binder: Move buffer out of area shared with user space
      FROMLIST: android: binder: Add allocator selftest
      FROMLIST: android: binder: Refactor prev and next buffer into a helper function
      android: android-base.config: enable IP6_NF_MATCH_RPFILTER
      UPSTREAM: cpufreq: schedutil: Use unsigned int for iowait boost
      UPSTREAM: cpufreq: schedutil: Make iowait boost more energy efficient

    Conflicts:
        drivers/cpufreq/cpufreq-dt.c
        kernel/sched/deadline.c
        kernel/sched/fair.c
        kernel/sched/sched.h

    Change-Id: Iee31db3fd1a0d1650ebf3d6de307a4e4637120b4
    Signed-off-by: Blagovest Kolenichev <bkolenichev@codeaurora.org>
| * | sched: EAS: kill incorrect nohz idle cpu kick (Joonwoo Park, 2017-09-01)
    EAS does not allow the NOHZ idle balancer to run until a CPU becomes overutilized, but nohz_kick_needed() can still return true, waking up an idle CPU for nothing.

    Change-Id: I6e548442e29e4f85cda695e4c7101dd591b12fe6
    Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
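    A rough sketch of the guard this implies (function and helper names are assumptions, not the patch itself):

        #include <stdbool.h>

        extern bool energy_aware(void);          /* is EAS active? */
        extern bool system_overutilized(void);   /* any CPU over capacity? */

        static bool nohz_kick_needed_sketch(bool other_kick_reasons)
        {
            /*
             * Under EAS the nohz idle balancer bails out until the system is
             * overutilized, so kicking an idle CPU before then wakes it for
             * nothing.
             */
            if (energy_aware() && !system_overutilized())
                return false;

            return other_kick_reasons;
        }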
| * | sched: EAS: fix incorrect energy delta calculation due to rounding error (Joonwoo Park, 2017-09-01)
    To calculate the energy difference we currently iterate over the CPUs in the same sched domain, accumulate the total energy cost, and compare before and after:

        for_each_domain(cpu)
            total_energy_before += (cpu_util * power) >> SCHED_CAPACITY_SHIFT;
        for_each_domain(cpu)
            total_energy_after += (cpu_util * power) >> SCHED_CAPACITY_SHIFT;

    This can incorrectly calculate and report abs(delta) > 0 when there is actually no energy delta between before and after, because the same accumulated cpu_util of all CPUs can be distributed differently before and after, which causes a different amount of rounding error per term. Fix this incorrectness by shifting just once, on the accumulated total_energy.

    Change-Id: I82f1e2e358367058960938b4ef81714f57e921cf
    Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
    (moved part to another commit)
    Signed-off-by: Chris Redpath <chris.redpath@arm.com>
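    A standalone C demonstration of the rounding error and of the shift-once fix; the utilization values are invented for illustration:

        #include <stdio.h>

        #define SCHED_CAPACITY_SHIFT 10

        int main(void)
        {
            unsigned long power = 1000;
            unsigned long before[2] = { 513, 511 }; /* util distribution before */
            unsigned long after[2]  = { 512, 512 }; /* same total, redistributed */
            unsigned long per_cpu_before = 0, per_cpu_after = 0;
            unsigned long acc_before = 0, acc_after = 0;

            for (int i = 0; i < 2; i++) {
                /* Broken: each term loses up to one unit to truncation. */
                per_cpu_before += (before[i] * power) >> SCHED_CAPACITY_SHIFT;
                per_cpu_after  += (after[i]  * power) >> SCHED_CAPACITY_SHIFT;
                /* Fixed: accumulate first, shift once at the end. */
                acc_before += before[i] * power;
                acc_after  += after[i]  * power;
            }

            printf("shift per CPU: %lu vs %lu (spurious delta)\n",
                   per_cpu_before, per_cpu_after);
            printf("shift once   : %lu vs %lu (no delta)\n",
                   acc_before >> SCHED_CAPACITY_SHIFT,
                   acc_after >> SCHED_CAPACITY_SHIFT);
            return 0;
        }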
| * | sched: EAS/WALT: take into account of waking task's load (Joonwoo Park, 2017-09-01)
    WALT's cpu_util(cpu) reports a CPU's load without taking the waking task's load into account, so cpu_overutilized() currently underestimates the load on the waking task's previous CPU. Take the task's load into account when determining whether the previous CPU is overutilized, so we can bail out early without running the expensive energy_diff().

    Change-Id: I30f146984a880ad2cc1b8a4ce35bd239a8c9a607
    Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
    (minor rebase conflicts)
    Signed-off-by: Chris Redpath <chris.redpath@arm.com>
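    As a sketch, assuming the 1280/1024 capacity margin commonly used by EAS kernels of this era (all names are illustrative):

        #include <stdbool.h>

        struct task;                                    /* opaque, illustrative */
        extern unsigned long cpu_util(int cpu);         /* WALT load, excl. waking task */
        extern unsigned long task_util(struct task *p);
        extern unsigned long capacity_of(int cpu);

        #define CAPACITY_MARGIN 1280                    /* ~125% of 1024, assumed */

        static bool prev_cpu_overutilized(int cpu, struct task *p)
        {
            /* Count the waking task's own demand against its previous CPU. */
            unsigned long util = cpu_util(cpu) + task_util(p);

            return util * CAPACITY_MARGIN > capacity_of(cpu) * 1024;
        }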
| * | sched: EAS/WALT: use cr_avg instead of prev_runnable_sum (Joonwoo Park, 2017-09-01)
    WALT accounts two major statistics: CPU load and cumulative task demand. CPU load, the accumulated absolute execution time on each CPU, is meant for CPU frequency guidance. Cumulative task demand, each CPU's instantaneous load at a given time, is meant for task placement decisions. Use cumulative task demand in cpu_util() for task placement and introduce cpu_util_freq() for frequency guidance.

    Change-Id: Id928f01dbc8cb2a617cdadc584c1f658022565c5
    Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
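    A sketch of the resulting split; the structure and helpers are illustrative stand-ins for WALT's internals, not its real layout:

        struct walt_cpu_stats {
            unsigned long long cumulative_runnable_avg; /* instantaneous task demand */
            unsigned long long prev_runnable_sum;       /* last window's busy time */
        };

        extern struct walt_cpu_stats *walt_stats_of(int cpu);
        extern unsigned long walt_to_capacity(unsigned long long raw);

        /* Task placement: how much demand sits on this CPU right now? */
        static unsigned long cpu_util_sketch(int cpu)
        {
            return walt_to_capacity(walt_stats_of(cpu)->cumulative_runnable_avg);
        }

        /* Frequency guidance: how busy was this CPU over the last window? */
        static unsigned long cpu_util_freq_sketch(int cpu)
        {
            return walt_to_capacity(walt_stats_of(cpu)->prev_runnable_sum);
        }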
* | | Merge android-4.4@9f764bb (v4.4.80) into msm-4.4 (Blagovest Kolenichev, 2017-08-15)
    * refs/heads/tmp-9f764bb:
      Linux 4.4.80
      ASoC: dpcm: Avoid putting stream state to STOP when FE stream is paused
      scsi: snic: Return error code on memory allocation failure
      scsi: fnic: Avoid sending reset to firmware when another reset is in progress
      HID: ignore Petzl USB headlamp
      ALSA: usb-audio: test EP_FLAG_RUNNING at urb completion
      sh_eth: enable RX descriptor word 0 shift on SH7734
      nvmem: imx-ocotp: Fix wrong register size
      arm64: mm: fix show_pte KERN_CONT fallout
      vfio-pci: Handle error from pci_iomap
      video: fbdev: cobalt_lcdfb: Handle return NULL error from devm_ioremap
      perf symbols: Robustify reading of build-id from sysfs
      perf tools: Install tools/lib/traceevent plugins with install-bin
      xfrm: Don't use sk_family for socket policy lookups
      tools lib traceevent: Fix prev/next_prio for deadline tasks
      Btrfs: adjust outstanding_extents counter properly when dio write is split
      usb: gadget: Fix copy/pasted error message
      ACPI / scan: Prefer devices without _HID/_CID for _ADR matching
      ARM: s3c2410_defconfig: Fix invalid values for NF_CT_PROTO_*
      ARM64: zynqmp: Fix i2c node's compatible string
      ARM64: zynqmp: Fix W=1 dtc 1.4 warnings
      dmaengine: ti-dma-crossbar: Add some 'of_node_put()' in error path.
      dmaengine: ioatdma: workaround SKX ioatdma version
      dmaengine: ioatdma: Add Skylake PCI Dev ID
      openrisc: Add _text symbol to fix ksym build error
      irqchip/mxs: Enable SKIP_SET_WAKE and MASK_ON_SUSPEND
      ASoC: nau8825: fix invalid configuration in Pre-Scalar of FLL
      spi: dw: Make debugfs name unique between instances
      ASoC: tlv320aic3x: Mark the RESET register as volatile
      irqchip/keystone: Fix "scheduling while atomic" on rt
      vfio-pci: use 32-bit comparisons for register address for gcc-4.5
      drm/msm: Verify that MSM_SUBMIT_BO_FLAGS are set
      drm/msm: Ensure that the hardware write pointer is valid
      net/mlx4: Remove BUG_ON from ICM allocation routine
      ipv6: Should use consistent conditional judgement for ip6 fragment between __ip6_append_data and ip6_finish_output
      ARM: dts: n900: Mark eMMC slot with no-sdio and no-sd flags
      r8169: add support for RTL8168 series add-on card.
      x86/mce/AMD: Make the init code more robust
      tpm: Replace device number bitmap with IDR
      tpm: fix a kernel memory leak in tpm-sysfs.c
      xen/blkback: don't use xen_blkif_get() in xen-blkback kthread
      xen/blkback: don't free be structure too early
      sched/cputime: Fix prev steal time accouting during CPU hotplug
      net: skb_needs_check() accepts CHECKSUM_NONE for tx
      pstore: Use dynamic spinlock initializer
      pstore: Correctly initialize spinlock and flags
      pstore: Allow prz to control need for locking
      vlan: Propagate MAC address to VLANs
      /proc/iomem: only expose physical resource addresses to privileged users
      Make file credentials available to the seqfile interfaces
      v4l: s5c73m3: fix negation operator
      dentry name snapshots
      ipmi/watchdog: fix watchdog timeout set on reboot
      libnvdimm, btt: fix btt_rw_page not returning errors
      RDMA/uverbs: Fix the check for port number
      PM / Domains: defer dev_pm_domain_set() until genpd->attach_dev succeeds if present
      sched/cgroup: Move sched_online_group() back into css_online() to fix crash
      kaweth: fix oops upon failed memory allocation
      kaweth: fix firmware download
      mpt3sas: Don't overreach ioc->reply_post[] during initialization
      mailbox: handle empty message in tx_tick
      mailbox: skip complete wait event if timer expired
      mailbox: always wait in mbox_send_message for blocking Tx mode
      wil6210: fix deadlock when using fw_no_recovery option
      ath10k: fix null deref on wmi-tlv when trying spectral scan
      isdn/i4l: fix buffer overflow
      isdn: Fix a sleep-in-atomic bug
      net: phy: Do not perform software reset for Generic PHY
      nfc: fdp: fix NULL pointer dereference
      xfs: don't BUG() on mixed direct and mapped I/O
      perf intel-pt: Ensure never to set 'last_ip' when packet 'count' is zero
      perf intel-pt: Use FUP always when scanning for an IP
      perf intel-pt: Fix last_ip usage
      perf intel-pt: Fix ip compression
      drm: rcar-du: Simplify and fix probe error handling
      drm: rcar-du: Perform initialization/cleanup at probe/remove time
      drm/rcar: Nuke preclose hook
      Staging: comedi: comedi_fops: Avoid orphaned proc entry
      Revert "powerpc/numa: Fix percpu allocations to be NUMA aware"
      KVM: PPC: Book3S HV: Save/restore host values of debug registers
      KVM: PPC: Book3S HV: Reload HTM registers explicitly
      KVM: PPC: Book3S HV: Restore critical SPRs to host values on guest exit
      KVM: PPC: Book3S HV: Context-switch EBB registers properly
      drm/nouveau/bar/gf100: fix access to upper half of BAR2
      drm/vmwgfx: Fix gcc-7.1.1 warning
      md/raid5: add thread_group worker async_tx_issue_pending_all
      crypto: authencesn - Fix digest_null crash
      powerpc/pseries: Fix of_node_put() underflow during reconfig remove
      net: reduce skb_warn_bad_offload() noise
      pstore: Make spinlock per zone instead of global
      af_key: Add lock to key dump
      ANDROID: binder: Don't BUG_ON(!spin_is_locked()).
      Linux 4.4.79
      alarmtimer: don't rate limit one-shot timers
      tracing: Fix kmemleak in instance_rmdir
      spmi: Include OF based modalias in device uevent
      of: device: Export of_device_{get_modalias, uvent_modalias} to modules
      drm/mst: Avoid processing partially received up/down message transactions
      drm/mst: Avoid dereferencing a NULL mstb in drm_dp_mst_handle_up_req()
      drm/mst: Fix error handling during MST sideband message reception
      RDMA/core: Initialize port_num in qp_attr
      ceph: fix race in concurrent readdir
      staging: rtl8188eu: add TL-WN722N v2 support
      Revert "perf/core: Drop kernel samples even though :u is specified"
      perf annotate: Fix broken arrow at row 0 connecting jmp instruction to its target
      target: Fix COMPARE_AND_WRITE caw_sem leak during se_cmd quiesce
      udf: Fix deadlock between writeback and udf_setsize()
      NFS: only invalidate dentrys that are clearly invalid.
      Input: i8042 - fix crash at boot time
      MIPS: Fix a typo: s/preset/present/ in r2-to-r6 emulation error message
      MIPS: Send SIGILL for linked branches in `__compute_return_epc_for_insn'
      MIPS: Rename `sigill_r6' to `sigill_r2r6' in `__compute_return_epc_for_insn'
      MIPS: Send SIGILL for BPOSGE32 in `__compute_return_epc_for_insn'
      MIPS: math-emu: Prevent wrong ISA mode instruction emulation
      MIPS: Fix unaligned PC interpretation in `compute_return_epc'
      MIPS: Actually decode JALX in `__compute_return_epc_for_insn'
      MIPS: Save static registers before sysmips
      MIPS: Fix MIPS I ISA /proc/cpuinfo reporting
      x86/ioapic: Pass the correct data to unmask_ioapic_irq()
      x86/acpi: Prevent out of bound access caused by broken ACPI tables
      MIPS: Negate error syscall return in trace
      MIPS: Fix mips_atomic_set() with EVA
      MIPS: Fix mips_atomic_set() retry condition
      ftrace: Fix uninitialized variable in match_records()
      vfio: New external user group/file match
      vfio: Fix group release deadlock
      f2fs: Don't clear SGID when inheriting ACLs
      ipmi:ssif: Add missing unlock in error branch
      ipmi: use rcu lock around call to intf->handlers->sender()
      drm/radeon: Fix eDP for single-display iMac10,1 (v2)
      drm/radeon/ci: disable mclk switching for high refresh rates (v2)
      drm/amd/amdgpu: Return error if initiating read out of range on vram
      s390/syscalls: Fix out of bounds arguments access
      Raid5 should update rdev->sectors after reshape
      cx88: Fix regression in initial video standard setting
      x86/xen: allow userspace access during hypercalls
      md: don't use flush_signals in userspace processes
      usb: renesas_usbhs: gadget: disable all eps when the driver stops
      usb: renesas_usbhs: fix usbhsc_resume() for !USBHSF_RUNTIME_PWCTRL
      USB: cdc-acm: add device-id for quirky printer
      usb: storage: return on error to avoid a null pointer dereference
      xhci: Fix NULL pointer dereference when cleaning up streams for removed host
      xhci: fix 20000ms port resume timeout
      ipvs: SNAT packet replies only for NATed connections
      PCI/PM: Restore the status of PCI devices across hibernation
      af_key: Fix sadb_x_ipsecrequest parsing
      powerpc/asm: Mark cr0 as clobbered in mftb()
      powerpc: Fix emulation of mfocrf in emulate_step()
      powerpc: Fix emulation of mcrf in emulate_step()
      powerpc/64: Fix atomic64_inc_not_zero() to return an int
      iscsi-target: Add login_keys_workaround attribute for non RFC initiators
      scsi: ses: do not add a device to an enclosure if enclosure_add_links() fails.
      PM / Domains: Fix unsafe iteration over modified list of domain providers
      PM / Domains: Fix unsafe iteration over modified list of device links
      ASoC: compress: Derive substream from stream based on direction
      wlcore: fix 64K page support
      Bluetooth: use constant time memory comparison for secret values
      perf intel-pt: Clear FUP flag on error
      perf intel-pt: Ensure IP is zero when state is INTEL_PT_STATE_NO_IP
      perf intel-pt: Fix missing stack clear
      perf intel-pt: Improve sample timestamp
      perf intel-pt: Move decoder error setting into one condition
      NFC: Add sockaddr length checks before accessing sa_family in bind handlers
      nfc: Fix the sockaddr length sanitization in llcp_sock_connect
      nfc: Ensure presence of required attributes in the activate_target handler
      NFC: nfcmrvl: fix firmware-management initialisation
      NFC: nfcmrvl: use nfc-device for firmware download
      NFC: nfcmrvl: do not use device-managed resources
      NFC: nfcmrvl_uart: add missing tty-device sanity check
      NFC: fix broken device allocation
      ath9k: fix tx99 bus error
      ath9k: fix tx99 use after free
      thermal: cpu_cooling: Avoid accessing potentially freed structures
      s5p-jpeg: don't return a random width/height
      ir-core: fix gcc-7 warning on bool arithmetic
      disable new gcc-7.1.1 warnings for now
      sched/fair: Add a backup_cpu to find_best_target
      sched/fair: Try to estimate possible idle states.
      sched/fair: Sync task util before EAS wakeup
      Revert "sched/fair: ensure utilization signals are synchronized before use"
      sched/fair: kick nohz idle balance for misfit task
      sched/fair: Update signals of nohz cpus if we are going idle
      events: add tracepoint for find_best_target
      sched/fair: streamline find_best_target heuristics
      UPSTREAM: af_key: Fix sadb_x_ipsecrequest parsing
      ANDROID: lowmemorykiller: Add tgid to kill message
      Revert "proc: smaps: Allow smaps access for CAP_SYS_RESOURCE"

    Conflicts:
        drivers/gpu/drm/msm/adreno/adreno_gpu.c
        drivers/gpu/drm/msm/msm_ringbuffer.c
        drivers/staging/android/lowmemorykiller.c
        kernel/sched/fair.c

    Change-Id: Ic3b3a522b79b1deb178e513b56b9c39eea48e079
    Signed-off-by: Blagovest Kolenichev <bkolenichev@codeaurora.org>
| * | sched/fair: Add a backup_cpu to find_best_target (Chris Redpath, 2017-07-25)
    Sometimes we find a target CPU but then do not use it, because energy_diff indicates that we would increase energy usage or save nothing. To offer an additional option for those cases, also return a second choice: the CPU we would have selected if the target CPU had not been found. This gives us another chance to try to save some energy.

    Change-Id: I42c4f20aba10e4cf65b51ac4153e2e00e534c8c7
    Signed-off-by: Chris Redpath <chris.redpath@arm.com>
    Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
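    The control flow this describes, as a hedged sketch (signatures are assumptions, not the actual patch):

        #include <stdbool.h>

        struct task;
        extern int find_best_target(struct task *p, int *backup_cpu);
        extern bool energy_diff_saves(struct task *p, int dst_cpu);

        static int select_energy_cpu_sketch(struct task *p, int prev_cpu)
        {
            int backup_cpu = -1;
            int target_cpu = find_best_target(p, &backup_cpu);

            if (target_cpu >= 0 && energy_diff_saves(p, target_cpu))
                return target_cpu;

            /* Second chance: the CPU we would have picked without the target. */
            if (backup_cpu >= 0 && energy_diff_saves(p, backup_cpu))
                return backup_cpu;

            return prev_cpu; /* no saving anywhere: stay put */
        }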
| * | sched/fair: Try to estimate possible idle states. (Chris Redpath, 2017-07-25)
    In the current EAS group energy calculations, we only use the group's idle state as it is right now. This means EAS sometimes cannot see that we are about to remove all utilization from a group, which would likely let the entire group go idle. This is an attempt to detect that situation and at least allow the energy calculation to include the savings in that scenario, regardless of what we might actually achieve in the real world. If a cluster or CPU looks like it will have some idle time available to it, we try to map the utilization onto an idle state.

    Change-Id: I8fcb1e507f65ae6a2c5647eeef75a4bf28c7a0c0
    Signed-off-by: Chris Redpath <chris.redpath@arm.com>
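    A sketch of the estimation idea; all names are assumptions, and the real code maps utilization onto the platform's idle-state table rather than a binary choice:

        struct sched_group;
        extern unsigned long group_util_after_move(struct sched_group *sg);
        extern int deepest_idle_state(struct sched_group *sg);

        static int estimate_group_idle_state(struct sched_group *sg)
        {
            /* If the move drains the group, credit it with an idle state. */
            if (group_util_after_move(sg) == 0)
                return deepest_idle_state(sg);

            return 0; /* still has utilization: treat as active */
        }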
| * | sched/fair: Sync task util before EAS wakeup (Brendan Jackman, 2017-07-25)
    Before using a task's util_avg signal in EAS, we need to ensure it has been synced up to the last_update_time of prev_cpu's root cfs_rq. We previously relied on the side effect of wake_cap() to do that; however, that does not happen when the waking CPU has the same capacity as the prev_cpu. Therefore just call sync_entity_load_avg() explicitly. This may result in calling that function twice within the same select_task_rq_fair(), but since last_update_time hasn't changed, the second call will bail out very quickly.

    Change-Id: I91f1fcd71dfeb96b7f5b73418f1cf9ac311d4655
    Signed-off-by: Brendan Jackman <brendan.jackman@arm.com>
    Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
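    A sketch of the explicit call site; sync_entity_load_avg() is the function named above, while the wrapper and helper names are assumptions:

        struct task;
        struct sched_entity;
        extern struct sched_entity *se_of(struct task *p);
        extern void sync_entity_load_avg(struct sched_entity *se);
        extern int energy_aware_wake_cpu(struct task *p, int prev_cpu);

        static int eas_select_cpu_sketch(struct task *p, int prev_cpu)
        {
            /*
             * Do not rely on wake_cap() syncing the signal as a side effect;
             * that is skipped when waker and prev_cpu capacities are equal.
             */
            sync_entity_load_avg(se_of(p));

            return energy_aware_wake_cpu(p, prev_cpu); /* reads fresh util_avg */
        }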
| * | Revert "sched/fair: ensure utilization signals are synchronized before use" (Brendan Jackman, 2017-07-25)
    This reverts commit 83f462daa328f2f42c3c1f7f5277f71e3fa0f750.

    Change-Id: I37ba36da61df2beb3a005557d9b673027f446916
    Signed-off-by: Brendan Jackman <brendan.jackman@arm.com>
| * | sched/fair: kick nohz idle balance for misfit task (Leo Yan, 2017-07-25)
    If there is a misfit task on one CPU, the current code does not handle this situation in the nohz idle balance; as a result, the misfit task can stay running on a little core for a long time. So this patch checks whether the CPU has a misfit task, and if so kicks the nohz idle balancer so that an active balance can finally be executed.

    Change-Id: I117d3b7404296f8de11cb960a87a6b9a54a9f348
    Signed-off-by: Leo Yan <leo.yan@linaro.org>
    [taken from https://lists.linaro.org/pipermail/eas-dev/2016-September/000551.html]
    Signed-off-by: Chris Redpath <chris.redpath@arm.com>
    Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
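    The extra kick condition, sketched with an illustrative runqueue model (not the kernel's struct rq):

        #include <stdbool.h>

        struct rq_sketch {
            int nr_running;
            bool misfit_task; /* a task too big for this CPU's capacity */
        };

        extern struct rq_sketch *rq_of_cpu(int cpu);

        /* Also kick the nohz idle balancer when this CPU carries a misfit task. */
        static bool nohz_kick_for_misfit(int cpu)
        {
            struct rq_sketch *rq = rq_of_cpu(cpu);

            return rq->nr_running >= 1 && rq->misfit_task;
        }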
| * | sched/fair: Update signals of nohz cpus if we are going idle (Chris Redpath, 2017-07-25)
    Stale CPU utilization signals can cause havoc for energy-aware systems; they arise because no updates are performed for CPUs that have no tick running. There is open debate about when is the correct time to update these CPUs, and general recognition that something needs to be done. This is an attempt to do something useful. When we are looking for a task to pull for a newly-idle CPU, we have an opportunity to update the stats for any CPU that has no tick running, without causing too much disturbance to the system or waking it up.

    Change-Id: I0280104ea9c53e56c26f1c56a62bacab5d3e951b
    Signed-off-by: Chris Redpath <chris.redpath@arm.com>
    Signed-off-by: Brendan Jackman <brendan.jackman@arm.com>
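    A sketch of the opportunity described above; the predicate is an assumption, and update_blocked_averages() is the usual fair.c updater for blocked load:

        #include <stdbool.h>

        extern int nr_cpus;
        extern bool cpu_is_tickless_idle(int cpu);    /* assumed predicate */
        extern void update_blocked_averages(int cpu); /* decays stale PELT sums */

        /* A newly-idle CPU looking for work disturbs nothing, so refresh here. */
        static void update_nohz_stats_sketch(void)
        {
            for (int cpu = 0; cpu < nr_cpus; cpu++)
                if (cpu_is_tickless_idle(cpu))
                    update_blocked_averages(cpu);
        }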
| * | events: add tracepoint for find_best_target (Patrick Bellasi, 2017-07-25)
    Change-Id: I4c245ffacb207d7ea826c5763a426efe5399e0a2
    Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
| * | sched/fair: streamline find_best_target heuristics (Patrick Bellasi, 2017-07-25)
    The find_best_target() code has evolved over time, integrating different micro-optimizations to the point of being quite difficult to follow. This patch refactors the existing code to make it more readable and easier to maintain. It does that by properly identifying the three main use-cases and addressing them in priority order:

      A) latency-sensitive tasks
      B) non-latency-sensitive tasks on IDLE CPUs
      C) non-latency-sensitive tasks on ACTIVE CPUs

    The original behaviors are preserved. Some tests comparing power and performance before and after this patch were done using Jankbench and YouTube, and we did not notice significant differences. The only difference with respect to the original code is a small update to favor lower-capacity idle CPUs in case B. The same preference is not enforced in case A, since that can lead to selecting a non-reserved CPU for TOP_APP tasks, which ultimately can lead to undesirable co-scheduling side effects.

    Change-Id: I871e5d95af89176217e4e239b64d44a420baabe8
    Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
    (removed checkpatch whitespace error)
    Signed-off-by: Chris Redpath <chris.redpath@arm.com>
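    The priority order, as a sketch; the real function folds these scans into a single pass over the candidate CPUs, and all helper names here are assumptions:

        #include <stdbool.h>

        struct task;
        extern bool latency_sensitive(struct task *p);
        extern int best_active_cpu(struct task *p);           /* case A */
        extern int best_idle_cpu_lowest_cap(struct task *p);  /* case B */
        extern int least_utilized_active_cpu(struct task *p); /* case C */

        static int find_best_target_sketch(struct task *p)
        {
            int cpu;

            if (latency_sensitive(p))           /* case A: performance first */
                return best_active_cpu(p);

            cpu = best_idle_cpu_lowest_cap(p);  /* case B: prefer small idles */
            if (cpu >= 0)
                return cpu;

            return least_utilized_active_cpu(p); /* case C */
        }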
* | | Merge "Merge branch 'android-4.4@6fc0573' into branch 'msm-4.4'" (Linux Build Service Account, 2017-06-22)
|\ \ \
| * | | Merge branch 'android-4.4@6fc0573' into branch 'msm-4.4' (Blagovest Kolenichev, 2017-06-19)
    * refs/heads/tmp-6fc0573:
      Linux 4.4.71
      xfs: only return -errno or success from attr ->put_listent
      xfs: in _attrlist_by_handle, copy the cursor back to userspace
      xfs: fix unaligned access in xfs_btree_visit_blocks
      xfs: bad assertion for delalloc an extent that start at i_size
      xfs: fix indlen accounting error on partial delalloc conversion
      xfs: wait on new inodes during quotaoff dquot release
      xfs: update ag iterator to support wait on new inodes
      xfs: support ability to wait on new inodes
      xfs: fix up quotacheck buffer list error handling
      xfs: prevent multi-fsb dir readahead from reading random blocks
      xfs: handle array index overrun in xfs_dir2_leaf_readbuf()
      xfs: fix over-copying of getbmap parameters from userspace
      xfs: fix off-by-one on max nr_pages in xfs_find_get_desired_pgoff()
      xfs: Fix missed holes in SEEK_HOLE implementation
      mlock: fix mlock count can not decrease in race condition
      mm/migrate: fix refcount handling when !hugepage_migration_supported()
      drm/gma500/psb: Actually use VBT mode when it is found
      slub/memcg: cure the brainless abuse of sysfs attributes
      ALSA: hda - apply STAC_9200_DELL_M22 quirk for Dell Latitude D430
      pcmcia: remove left-over %Z format
      drm/radeon: Unbreak HPD handling for r600+
      drm/radeon/ci: disable mclk switching for high refresh rates (v2)
      scsi: mpt3sas: Force request partial completion alignment
      HID: wacom: Have wacom_tpc_irq guard against possible NULL dereference
      mmc: sdhci-iproc: suppress spurious interrupt with Multiblock read
      i2c: i2c-tiny-usb: fix buffer not being DMA capable
      vlan: Fix tcp checksum offloads in Q-in-Q vlans
      net: phy: marvell: Limit errata to 88m1101
      netem: fix skb_orphan_partial()
      ipv4: add reference counting to metrics
      sctp: fix ICMP processing if skb is non-linear
      tcp: avoid fastopen API to be used on AF_UNSPEC
      virtio-net: enable TSO/checksum offloads for Q-in-Q vlans
      be2net: Fix offload features for Q-in-Q packets
      ipv6: fix out of bound writes in __ip6_append_data()
      bridge: start hello_timer when enabling KERNEL_STP in br_stp_start
      qmi_wwan: add another Lenovo EM74xx device ID
      bridge: netlink: check vlan_default_pvid range
      ipv6: Check ip6_find_1stfragopt() return value properly.
      ipv6: Prevent overrun when parsing v6 header options
      net: Improve handling of failures on link and route dumps
      tcp: eliminate negative reordering in tcp_clean_rtx_queue
      sctp: do not inherit ipv6_{mc|ac|fl}_list from parent
      sctp: fix src address selection if using secondary addresses for ipv6
      tcp: avoid fragmenting peculiar skbs in SACK
      s390/qeth: avoid null pointer dereference on OSN
      s390/qeth: unbreak OSM and OSN support
      s390/qeth: handle sysfs error during initialization
      ipv6/dccp: do not inherit ipv6_mc_list from parent
      dccp/tcp: do not inherit mc_list from parent
      sparc: Fix -Wstringop-overflow warning
      android: base-cfg: disable CONFIG_NFS_FS and CONFIG_NFSD
      schedstats/eas: guard properly to avoid breaking non-smp schedstats users
      BACKPORT: f2fs: sanity check size of nat and sit cache
      FROMLIST: f2fs: sanity check checkpoint segno and blkoff
      sched/tune: don't use schedtune before it is ready
      sched/fair: use SCHED_CAPACITY_SCALE for energy normalization
      sched/{fair,tune}: use reciprocal_value to compute boost margin
      sched/tune: Initialize raw_spin_lock in boosted_groups
      sched/tune: report when SchedTune has not been initialized
      sched/tune: fix sched_energy_diff tracepoint
      sched/tune: increase group count to 5
      cpufreq/schedutil: use boosted_cpu_util for PELT to match WALT
      sched/fair: Fix sched_group_energy() to support per-cpu capacity states
      sched/fair: discount task contribution to find CPU with lowest utilization
      sched/fair: ensure utilization signals are synchronized before use
      sched/fair: remove task util from own cpu when placing waking task
      trace:sched: Make util_avg in load_avg trace reflect PELT/WALT as used
      sched/fair: Add eas (& cas) specific rq, sd and task stats
      sched/core: Fix PELT jump to max OPP upon util increase
      sched: EAS & 'single cpu per cluster'/cpu hotplug interoperability
      UPSTREAM: sched/core: Fix group_entity's share update
      UPSTREAM: sched/fair: Fix calc_cfs_shares() fixed point arithmetics width confusion
      UPSTREAM: sched/fair: Fix incorrect task group ->load_avg
      UPSTREAM: sched/fair: Fix effective_load() to consistently use smoothed load
      UPSTREAM: sched/fair: Propagate asynchrous detach
      UPSTREAM: sched/fair: Propagate load during synchronous attach/detach
      UPSTREAM: sched/fair: Fix hierarchical order in rq->leaf_cfs_rq_list
      BACKPORT: sched/fair: Factorize PELT update
      UPSTREAM: sched/fair: Factorize attach/detach entity
      UPSTREAM: sched/fair: Improve PELT stuff some more
      UPSTREAM: sched/fair: Apply more PELT fixes
      UPSTREAM: sched/fair: Fix post_init_entity_util_avg() serialization
      BACKPORT: sched/fair: Initiate a new task's util avg to a bounded value
      sched/fair: Simplify idle_idx handling in select_idle_sibling()
      sched/fair: refactor find_best_target() for simplicity
      sched/fair: Change cpu iteration order in find_best_target()
      sched/core: Add first cpu w/ max/min orig capacity to root domain
      sched/core: Remove remnants of commit fd5c98da1a42
      sched: Remove sysctl_sched_is_big_little
      sched/fair: Code !is_big_little path into select_energy_cpu_brute()
      EAS: sched/fair: Re-integrate 'honor sync wakeups' into wakeup path
      Fixup!: sched/fair.c: Set SchedTune specific struct energy_env.task
      sched/fair: Energy-aware wake-up task placement
      sched/fair: Add energy_diff dead-zone margin
      sched/fair: Decommission energy_aware_wake_cpu()
      sched/fair: Do not force want_affine eq. true if EAS is enabled
      arm64: Set SD_ASYM_CPUCAPACITY sched_domain flag on DIE level
      UPSTREAM: sched/fair: Fix incorrect comment for capacity_margin
      UPSTREAM: sched/fair: Avoid pulling tasks from non-overloaded higher capacity groups
      UPSTREAM: sched/fair: Add per-CPU min capacity to sched_group_capacity
      UPSTREAM: sched/fair: Consider spare capacity in find_idlest_group()
      UPSTREAM: sched/fair: Compute task/cpu utilization at wake-up correctly
      UPSTREAM: sched/fair: Let asymmetric CPU configurations balance at wake-up
      UPSTREAM: sched/core: Enable SD_BALANCE_WAKE for asymmetric capacity systems
      UPSTREAM: sched/core: Pass child domain into sd_init()
      UPSTREAM: sched/core: Introduce SD_ASYM_CPUCAPACITY sched_domain topology flag
      UPSTREAM: sched/core: Remove unnecessary NULL-pointer check
      UPSTREAM: sched/fair: Optimize find_idlest_cpu() when there is no choice
      BACKPORT: sched/fair: Make the use of prev_cpu consistent in the wakeup path
      UPSTREAM: sched/core: Fix power to capacity renaming in comment
      Partial Revert: "WIP: sched: Add cpu capacity awareness to wakeup balancing"
      Revert "WIP: sched: Consider spare cpu capacity at task wake-up"
      FROM-LIST: cpufreq: schedutil: Redefine the rate_limit_us tunable
      cpufreq: schedutil: add up/down frequency transition rate limits
      trace/sched: add rq utilization signal for WALT
      sched/cpufreq: make schedutil use WALT signal
      sched: cpufreq: use rt_avg as estimate of required RT CPU capacity
      cpufreq: schedutil: move slow path from workqueue to SCHED_FIFO task
      BACKPORT: kthread: allow to cancel kthread work
      sched/cpufreq: fix tunables for schedfreq governor
      BACKPORT: cpufreq: schedutil: New governor based on scheduler utilization data
      sched: backport cpufreq hooks from 4.9-rc4
      ANDROID: Kconfig: add depends for UID_SYS_STATS
      ANDROID: hid: uhid: implement refcount for open and close
      Revert "ext4: require encryption feature for EXT4_IOC_SET_ENCRYPTION_POLICY"
      ANDROID: mnt: Fix next_descendent

    Conflicts:
        include/trace/events/sched.h
        kernel/sched/Makefile
        kernel/sched/core.c
        kernel/sched/fair.c
        kernel/sched/sched.h

    Change-Id: I55318828f2c858e192ac7015bcf2bf0ec5c5b2c5
    Signed-off-by: Blagovest Kolenichev <bkolenichev@codeaurora.org>
| | * | sched/tune: don't use schedtune before it is ready (Chris Redpath, 2017-06-02)
    When EAS is enabled during boot, we have to be careful not to use schedtune from fair.c before it is ready; otherwise it will warn and we get a traceback in the console.

    Change-Id: I1a5cf29b18af626545c636c51219f9ed497c19fa
    Signed-off-by: Chris Redpath <chris.redpath@arm.com>
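    A sketch of the guard, assuming a ready flag of the kind this patch family uses (names are assumptions):

        #include <stdbool.h>

        struct task;
        extern bool schedtune_initialized;  /* set once the cgroup css is up */
        extern int schedtune_task_boost_unchecked(struct task *p);

        /* Return a neutral boost until SchedTune has finished initializing. */
        static int schedtune_task_boost(struct task *p)
        {
            if (!schedtune_initialized)
                return 0; /* avoids the early-boot warning/traceback */

            return schedtune_task_boost_unchecked(p);
        }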
| | * | sched/fair: use SCHED_CAPACITY_SCALE for energy normalization (Patrick Bellasi, 2017-06-02)
    Change-Id: I686d26975f4a7dd830ff8441ff986e35461a7d55
    Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
    Signed-off-by: Srinath Sridharan <srinathsr@google.com>
| | * | sched/{fair,tune}: use reciprocal_value to compute boost margin (Patrick Bellasi, 2017-06-02)
    Change-Id: I493b07360c46eee0b72c2a046dab9ec6cb3427ef
    Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
    Signed-off-by: Srinath Sridharan <srinathsr@google.com>
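    A sketch using the kernel's reciprocal-divide helpers from linux/reciprocal_div.h; the margin formula (boost percent of the headroom above the current utilization) matches how SchedTune-era kernels compute it, but the names here are illustrative:

        #include <linux/reciprocal_div.h> /* reciprocal_value(), reciprocal_divide() */

        #define SCHED_CAPACITY_SCALE 1024

        /* Computed once at init: dividing by 100 becomes multiply-and-shift. */
        static struct reciprocal_value spc_rdiv;

        static void boost_margin_init(void)
        {
            spc_rdiv = reciprocal_value(100);
        }

        static unsigned long boost_margin(unsigned long util, int boost)
        {
            u32 margin = (SCHED_CAPACITY_SCALE - util) * boost;

            return reciprocal_divide(margin, spc_rdiv);
        }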
| | * | sched/tune: fix sched_energy_diff tracepoint (Chris Redpath, 2017-06-02)
    The sched_energy_diff tracepoint is in a place where it can never trace payoff or nrg.delta. If CONFIG_SCHED_TUNE is enabled, put it in a place where those values exist. If it is not enabled, trace from the current location.

    Change-Id: Id5442f2b34ec76625491d27c0f4285433ca12699
    Reported-by: Valentin Schneider <valentin.schneider@arm.com>
    Signed-off-by: Chris Redpath <chris.redpath@arm.com>
| | * | sched/fair: Fix sched_group_energy() to support per-cpu capacity states (Morten Rasmussen, 2017-06-02)
    sched_group_energy() was supposed to support per-cpu capacity states (DVFS); however, while fixing a hotplug issue this was broken, since we bail out if the SD_SHARE_CAP_STATES flag is not set. This patch implements the hotplug race check differently and should therefore reinstate support for per-cpu capacity states.

    Change-Id: I5b865666c9ce833dcfa6514c574580d75aa0a195
    Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
| | * | sched/fair: discount task contribution to find CPU with lowest utilization (Valentin Schneider, 2017-06-02)
    In some cases, the new_util of a task can be the same on several CPUs. This causes an issue because target_util is only updated if the current new_util is strictly smaller than target_util. To fix that, use the cpu_util_wake() return value alongside the new_util value: if two CPUs compute the same new_util, also compare their cpu_util_wake() return values, so the CPU that last ran the task is chosen in priority.

    Change-Id: Ia1ea2c4b3ec39621372c2f748862317d5b497723
    Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
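    The tie-break, sketched as a best-so-far update; the wrapper signature is invented for illustration:

        struct task;
        extern unsigned long cpu_util_wake(int cpu, struct task *p);
        extern unsigned long task_util(struct task *p);

        static void consider_candidate(int cpu, struct task *p, int *best_cpu,
                                       unsigned long *best_util,
                                       unsigned long *best_wake_util)
        {
            unsigned long wake_util = cpu_util_wake(cpu, p); /* util minus task */
            unsigned long new_util = wake_util + task_util(p);

            /*
             * On equal new_util, prefer the CPU that last ran the task: it
             * reports the lowest task-discounted utilization.
             */
            if (new_util < *best_util ||
                (new_util == *best_util && wake_util < *best_wake_util)) {
                *best_cpu = cpu;
                *best_util = new_util;
                *best_wake_util = wake_util;
            }
        }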
| | * | sched/fair: ensure utilization signals are synchronized before use (Chris Redpath, 2017-06-02)
    wake_cap() performs task and CPU utilization synchronization, which is what allows us to subtract the current task's util from prev_cpu's util and have a sensible number to work with. It looks as though, if wake_wide() returns 0, we could potentially skip wake_cap(), which would leave us with unsynced signals that we then use for energy calculations. This is not necessarily an issue we've seen in traces, but it looks as though it should be changed.

    Change-Id: Ic54a3cba2a10d946ea20113a04371dea04115e82
    Signed-off-by: Chris Redpath <chris.redpath@arm.com>
| | * | sched/fair: remove task util from own cpu when placing waking task (Chris Redpath, 2017-06-02)
    When we place a waking task with find_best_target(), we calculate the existing and new utilization of each candidate CPU, but we do not remove any blocked load resulting from the waking task on the previous CPU, which might cause unnecessary migrations. Switch to cpu_util_wake(), which does this for us; this requires moving cpu_util_wake() a few functions earlier. Also, we have multiple potential CPU utilization signals here, so update the necessary bits to let WALT work properly (including not subtracting task util under WALT). When WALT is in use, CPU utilization is the utilization in the previous completed window, whilst the task utilization ignores fully idle windows. There seems to be no way to get a decently accurate estimate of how much (if any) utilization from this task remains on the previous CPU, so just return cpu_util() when we're using WALT.

    Change-Id: I448203ab98ffb5c020dfb6b218581eef1f5601f7
    Reported-by: Patrick Bellasi <patrick.bellasi@arm.com>
    Signed-off-by: Chris Redpath <chris.redpath@arm.com>
| | * | trace:sched: Make util_avg in load_avg trace reflect PELT/WALT as used (Chris Redpath, 2017-06-02)
    With the ability to choose between WALT and PELT for utilization tracking, we can have a situation where WALT drives all the decisions while the sched_load_avg_(cpu|task) tracepoints report PELT figures. This is not too much of an issue, but when analysing a trace it is nice to see numbers representing what the scheduler is actually using, rather than having to add extra sched_walt_* traces to figure it out. Add reporting for both types, and make the util_avg member reflect what will be seen from the cpu or task util functions in the scheduler.

    Change-Id: I2abbd2c5fa70822096d0f3372b4c12b1c6af1590
    Signed-off-by: Chris Redpath <chris.redpath@arm.com>
| | * | sched/fair: Add eas (& cas) specific rq, sd and task stats (Dietmar Eggemann, 2017-06-02)
    The statistic counters are placed in the eas (& cas) wakeup path. Each of them has one representation for the runqueue (rq), the sched_domain (sd) and the task. A task counter is always incremented. A rq counter is always incremented for the rq the scheduler is currently running on. A sd counter is only incremented if a relation to a sd exists.

    The counters are exposed:

    (1) In /proc/schedstat for rq's and sd's:

        $ cat /proc/schedstat
        ...
        cpu0 71422 0 2321254 ...
        eas 44144 0 0 19446 0 24698 568435 51621 156932 133 222011 17459 120279 516814 83 0 156962 359235 176439 139981 <- runqueue for cpu0
        ...
        domain0 3 42430 42331 ...
        eas 0 0 0 14200 0 0 0 0 0 0 0 0 0 0 0 0 0 0 66355 0 <- MC sched domain for cpu0
        ...

    The per-cpu eas vector has the following elements:

        sis_attempts sis_idle sis_cache_affine sis_suff_cap sis_idle_cpu sis_count ||
        secb_attempts secb_sync secb_idle_bt secb_insuff_cap secb_no_nrg_sav secb_nrg_sav secb_count ||
        fbt_attempts fbt_no_cpu fbt_no_sd fbt_pref_idle fbt_count ||
        cas_attempts cas_count

    The following relations exist between these counters (from the cpu0 eas vector above):

        sis_attempts = sis_idle + sis_cache_affine + sis_suff_cap + sis_idle_cpu + sis_count
        44144 = 0 + 0 + 19446 + 0 + 24698

        secb_attempts = secb_sync + secb_idle_bt + secb_insuff_cap + secb_no_nrg_sav + secb_nrg_sav + secb_count
        568435 = 51621 + 156932 + 133 + 222011 + 17459 + 120279

        fbt_attempts = fbt_no_cpu + fbt_no_sd + fbt_pref_idle + fbt_count + (return -1)
        516814 = 83 + 0 + 156962 + 359235 + (534)

        cas_attempts = cas_count + (return -1 or smp_processor_id())
        176439 = 139981 + (36458)

    (2) In /proc/$PROCESS_PID/task/$TASK_PID/sched for a task. Example: main thread of system_server:

        $ cat /proc/1083/task/1083/sched
        ...
        se.statistics.nr_wakeups_sis_attempts      : 945
        se.statistics.nr_wakeups_sis_idle          : 0
        se.statistics.nr_wakeups_sis_cache_affine  : 0
        se.statistics.nr_wakeups_sis_suff_cap      : 219
        se.statistics.nr_wakeups_sis_idle_cpu      : 0
        se.statistics.nr_wakeups_sis_count         : 726
        se.statistics.nr_wakeups_secb_attempts     : 10376
        se.statistics.nr_wakeups_secb_sync         : 1462
        se.statistics.nr_wakeups_secb_idle_bt      : 6984
        se.statistics.nr_wakeups_secb_insuff_cap   : 3
        se.statistics.nr_wakeups_secb_no_nrg_sav   : 927
        se.statistics.nr_wakeups_secb_nrg_sav      : 206
        se.statistics.nr_wakeups_secb_count        : 794
        se.statistics.nr_wakeups_fbt_attempts      : 8914
        se.statistics.nr_wakeups_fbt_no_cpu        : 0
        se.statistics.nr_wakeups_fbt_no_sd         : 0
        se.statistics.nr_wakeups_fbt_pref_idle     : 6987
        se.statistics.nr_wakeups_fbt_count         : 1554
        se.statistics.nr_wakeups_cas_attempts      : 3107
        se.statistics.nr_wakeups_cas_count         : 1195
        ...

    The same relations between the counters as in the per-cpu case apply.

    Change-Id: Ie7d01267c78a3f41f60a3ef52917d5a5d463f195
    Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
    Signed-off-by: Chris Redpath <chris.redpath@arm.com>
| | * | sched: EAS & 'single cpu per cluster'/cpu hotplug interoperability (Dietmar Eggemann, 2017-06-02)
    For Energy-Aware Scheduling (EAS) to work properly, even in the case that there is only one cpu per cluster or that cpus are hot-plugged out, the Energy Model (EM) data on all energy-aware sched domains (sd) has to be present for all online cpus.

    Mainline sd hierarchy setup code will remove sd's which are not useful for task scheduling, e.g. in the following situations:

    1. Only 1 cpu is/remains in one cluster of a multi-cluster system. This remaining cpu only has DIE and no MC sd.

    2. A complete cluster in a two-cluster system is hot-plugged out. The cpus of the remaining cluster only have MC and no DIE sd.

    To make sure that all online cpus keep all their energy-aware sd's, the sd degenerate functionality has been changed to not free a sd if its first sched group (sg) contains EM data, in case:

    1. There is only 1 cpu left in the sd.

    2. There have to be at least 2 sg's if certain sd flags are set.

    Instead of freeing such a sd it now clears only its SD_LOAD_BALANCE flag. This will make sure that the EAS functionality will always see all energy-aware sd's for all online cpus. It will introduce a tiny performance degradation for operations on affected cpus, since the hot-path macro for_each_domain() now has to deal with sd's not contributing to task scheduling at all. In most cases the existing code makes sure that task scheduling is not invoked on a sd with !SD_LOAD_BALANCE. However, a small change is necessary in update_sd_lb_stats() to make sure that sd->parent is only initialized to !NULL in case the parent sd contains more than 1 sg. The handling of newidle decay values before the SD_LOAD_BALANCE check in rebalance_domains() stays unchanged.

    Test (w/ CONFIG_SCHED_DEBUG): JUNO r0 default system:

        $ cat /proc/cpuinfo | grep "^CPU part"
        CPU part : 0xd03
        CPU part : 0xd07
        CPU part : 0xd07
        CPU part : 0xd03
        CPU part : 0xd03
        CPU part : 0xd03

    SD names and flags:

        $ cat /proc/sys/kernel/sched_domain/cpu*/domain*/name
        MC DIE MC DIE MC DIE MC DIE MC DIE MC DIE
        $ printf "%x\n" `cat /proc/sys/kernel/sched_domain/cpu*/domain*/flags`
        832f 102f 832f 102f 832f 102f 832f 102f 832f 102f 832f 102f

    Test 1: Hotplug-out one A57 (CPU part 0xd07) cpu:

        $ echo 0 > /sys/devices/system/cpu/cpu1/online
        $ cat /proc/cpuinfo | grep "^CPU part"
        CPU part : 0xd03
        CPU part : 0xd07
        CPU part : 0xd03
        CPU part : 0xd03
        CPU part : 0xd03

    SD names and flags for the remaining A57 (cpu2) cpu:

        $ cat /proc/sys/kernel/sched_domain/cpu2/domain*/name
        MC DIE
        $ printf "%x\n" `cat /proc/sys/kernel/sched_domain/cpu2/domain*/flags`
        832e <-- MC SD with !SD_LOAD_BALANCE
        102f

    Test 2: Hotplug-out the entire A57 cluster:

        $ echo 0 > /sys/devices/system/cpu/cpu1/online
        $ echo 0 > /sys/devices/system/cpu/cpu2/online
        $ cat /proc/cpuinfo | grep "^CPU part"
        CPU part : 0xd03
        CPU part : 0xd03
        CPU part : 0xd03
        CPU part : 0xd03

    SD names and flags for the remaining A53 (CPU part 0xd03) cluster:

        $ cat /proc/sys/kernel/sched_domain/cpu*/domain*/name
        MC DIE MC DIE MC DIE MC DIE
        $ printf "%x\n" `cat /proc/sys/kernel/sched_domain/cpu*/domain*/flags`
        832f 102e <-- DIE SD with !SD_LOAD_BALANCE
        832f 102e
        832f 102e
        832f 102e

    Change-Id: If24aa2b2628f334abbf0207d39e2a86168d9d673
    Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
| | * | UPSTREAM: sched/core: Fix group_entity's share update (Vincent Guittot, 2017-06-02)
    The update of the share of a cfs_rq is done when its load_avg is updated, but before the group_entity's load_avg has been updated for the past time slot. This generates wrong load_avg accounting, which can be significant when small tasks are involved in the scheduling.

    Let's take the example of a task "a" that is dequeued from its task group A:

        root (cfs_rq)
          \
          (se) A (cfs_rq)
                 \
                 (se) a

    Task "a" was the only task in task group A, which becomes idle when a is dequeued. We have the sequence:

    - dequeue_entity a->se
        - update_load_avg(a->se)
        - dequeue_entity_load_avg(A->cfs_rq, a->se)
        - update_cfs_shares(A->cfs_rq)
          A->cfs_rq->load.weight == 0
          A->se->load.weight is updated with the new share (0 in this case)
    - dequeue_entity A->se
        - update_load_avg(A->se), but its weight is now null, so the last time slot (up to a tick) will be accounted with a weight of 0 instead of its real weight during the slot.

    The last time slot will be accounted as an idle one whereas it was a running one. If the running time of task a is short enough that no tick happens when it runs, all running time of group entity A->se will be accounted as idle time.

    Instead, we should update the share of a cfs_rq (in fact the weight of its group entity) only after having updated the load_avg of the group_entity. update_cfs_shares() now takes the sched_entity as a parameter instead of the cfs_rq, and the weight of the group_entity is updated only once its load_avg has been synced with current time.

    Change-Id: Id6ce3be1767b44b444ce2a77ed1ba063e57c0664
    Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Mike Galbraith <efault@gmx.de>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: pjt@google.com
    Link: http://lkml.kernel.org/r/1482335426-7664-1-git-send-email-vincent.guittot@linaro.org
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    (cherry picked from commit 89ee048f3cc796db6f26906c6bef4edf0bee70fd)
    [minor cherry pick stuff]
    Signed-off-by: Chris Redpath <chris.redpath@arm.com>
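    The ordering, reduced to a sketch; the dequeue wrapper is invented, while the two updates are the ones discussed above:

        struct sched_entity;
        extern void update_load_avg(struct sched_entity *se);   /* accrue at old weight */
        extern void update_cfs_shares(struct sched_entity *se); /* install new weight */

        static void dequeue_group_entity_sketch(struct sched_entity *group_se)
        {
            /*
             * Account the elapsed time slot at the weight the entity really
             * had, before the share update overwrites it with the new share.
             */
            update_load_avg(group_se);
            update_cfs_shares(group_se);
        }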
| | * | UPSTREAM: sched/fair: Fix calc_cfs_shares() fixed point arithmetics width confusion (Peter Zijlstra, 2017-06-02)
    Commit fde7d22e01aa ("sched/fair: Fix overly small weight for interactive group entities") did something non-obvious but also did it buggy yet latent. The problem was exposed for real by a later commit in the v4.7 merge window, 2159197d6677 ("sched/core: Enable increased load resolution on 64-bit kernels"), after which tg->load_avg and cfs_rq->load.weight had different units (10-bit and 20-bit fixed point, respectively).

    Add a comment to explain the use of cfs_rq->load.weight over the 'natural' cfs_rq->avg.load_avg, and add scale_load_down() to correct for the difference in unit. Since this is (now, as per a previous commit) the only user of calc_tg_weight(), collapse it.

    The effects of this bug should be randomly inconsistent SMP balancing of cgroup workloads.

    Change-Id: If1e565662ea163485edd94a12aef644d0e0dfe7a
    Reported-by: Jirka Hladky <jhladky@redhat.com>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Fixes: 2159197d6677 ("sched/core: Enable increased load resolution on 64-bit kernels")
    Fixes: fde7d22e01aa ("sched/fair: Fix overly small weight for interactive group entities")
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    (cherry picked from commit ea1dc6fc6242f991656e35e2ed3d90ec1cd13418)
    Signed-off-by: Chris Redpath <chris.redpath@arm.com>
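    The unit fix, sketched after the shape of upstream's calc_cfs_shares(); this is a simplification, not the verbatim patch:

        #define SCHED_FIXEDPOINT_SHIFT 10 /* extra weight resolution on 64-bit */
        #define scale_load_down(w) ((w) >> SCHED_FIXEDPOINT_SHIFT)

        static long calc_shares_sketch(unsigned long tg_load_avg,
                                       unsigned long tg_shares,
                                       unsigned long cfs_rq_weight,
                                       unsigned long cfs_rq_contrib)
        {
            /*
             * cfs_rq->load.weight carries 20-bit fixed point on 64-bit kernels
             * while tg->load_avg is 10-bit, so scale the weight down before
             * mixing the two.
             */
            unsigned long load = scale_load_down(cfs_rq_weight);
            unsigned long tg_weight = tg_load_avg - cfs_rq_contrib + load;

            if (!tg_weight)
                return tg_shares;

            return (long)((tg_shares * load) / tg_weight);
        }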
| | * | UPSTREAM: sched/fair: Fix incorrect task group ->load_avg (Vincent Guittot, 2017-06-02)
    A scheduler performance regression was reported by Joseph Salisbury, who bisected it back to commit 3d30544f0212 ("sched/fair: Apply more PELT fixes"). The regression triggers when several levels of task groups are involved (read: SystemD) and cpu_possible_mask != cpu_present_mask.

    The root cause is that a group entity's load (tg_child->se[i]->avg.load_avg) is initialized to scale_load_down(se->load.weight). During the creation of a child task group, its group entities on possible CPUs are attached to the parent's cfs_rq (tg_parent) and their loads are added to the parent's load (tg_parent->load_avg) with update_tg_load_avg(). But only the load on online CPUs will then be updated to reflect real load, whereas the load on other CPUs will stay at the initial value. The result is a tg_parent->load_avg that is higher than the real load; the weight of group entities (tg_parent->se[i]->load.weight) on online CPUs is smaller than it should be, and the task group gets less running time than it could expect.

    ( This situation can be detected with /proc/sched_debug: the ".tg_load_avg" of the task group will be much higher than the sum of ".tg_load_avg_contrib" of the online cfs_rqs of the task group. )

    The load of group entities doesn't have to be initialized to anything other than 0, because their load will increase when an entity is attached.

    Change-Id: Ie55021ff98ba49016adfddb2444e9c9709939226
    Reported-by: Joseph Salisbury <joseph.salisbury@canonical.com>
    Tested-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
    Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
    Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Cc: <stable@vger.kernel.org> # 4.8.x
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: joonwoop@codeaurora.org
    Fixes: 3d30544f0212 ("sched/fair: Apply more PELT fixes")
    Link: http://lkml.kernel.org/r/1476881123-10159-1-git-send-email-vincent.guittot@linaro.org
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    (cherry picked from commit b5a9b340789b2b24c6896bcf7a065c31a4db671c)
    Signed-off-by: Chris Redpath <chris.redpath@arm.com>
| | * | UPSTREAM: sched/fair: Fix effective_load() to consistently use smoothed load (Peter Zijlstra, 2017-06-02)
    Starting with commit fde7d22e01aa ("sched/fair: Fix overly small weight for interactive group entities"), calc_tg_weight() doesn't compute the right value as expected by effective_load(). The difference is in the 'correction' term. In order to ensure \Sum rw_j >= rw_i, we cannot use tg->load_avg directly, since that might be lagging a correction on the current cfs_rq->avg.load_avg value. Therefore we use:

        tg->load_avg - cfs_rq->tg_load_avg_contrib + cfs_rq->avg.load_avg

    Now, per the referenced commit, calc_tg_weight() doesn't use cfs_rq->avg.load_avg, as is later used in @w, but uses cfs_rq->load.weight instead. So stop using calc_tg_weight() and do it explicitly. The effects of this bug are wake_affine() making randomly poor choices in cgroup-intense workloads.

    Change-Id: I1c0058ff674650cf295c8dc3b88a5a3de4bddab0
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Cc: <stable@vger.kernel.org> # v4.3+
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Fixes: fde7d22e01aa ("sched/fair: Fix overly small weight for interactive group entities")
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    (cherry picked from commit 7dd4912594daf769a46744848b05bd5bc6d62469)
    Signed-off-by: Chris Redpath <chris.redpath@arm.com>
| | * | UPSTREAM: sched/fair: Propagate asynchronous detach  Vincent Guittot  2017-06-02
A task can be asynchronously detached from a cfs_rq when migrating
between CPUs. The load of the migrated task is then removed from the
source cfs_rq during its next update. We use this event to set the
propagation flag.

During load balance, we take advantage of the update of blocked load
to propagate any pending changes.

The propagation relies on the patch:

  "sched: Fix hierarchical order in rq->leaf_cfs_rq_list"

... which orders children and parents, to ensure that it's done in one
pass.

Change-Id: I33782e35fc4711f5901e8c23d6aa7ec5f2ff7ee5
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Morten.Rasmussen@arm.com
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: bsegall@google.com
Cc: kernellwp@gmail.com
Cc: pjt@google.com
Cc: yuyang.du@intel.com
Link: http://lkml.kernel.org/r/1478598827-32372-6-git-send-email-vincent.guittot@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 4e5160766fcc9f41bbd38bac11f92dce993644aa)
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
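A minimal sketch of where the flag is raised, assuming the
set_tg_cfs_propagate() helper named in the upstream series
(illustrative):

  	if (atomic_long_read(&cfs_rq->removed_load_avg)) {
  		s64 r = atomic_long_xchg(&cfs_rq->removed_load_avg, 0);

  		sa->load_avg = max_t(long, sa->load_avg - r, 0);
  		sa->load_sum = max_t(s64, sa->load_sum - r * LOAD_AVG_MAX, 0);

  		/* a task was detached asynchronously on another CPU:
  		 * flag the hierarchy so load balance propagates it */
  		set_tg_cfs_propagate(cfs_rq);
  	}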
| | * | UPSTREAM: sched/fair: Propagate load during synchronous attach/detach  Vincent Guittot  2017-06-02
When a task moves from/to a cfs_rq, we set a flag which is then used
to propagate the change at parent level (sched_entity and cfs_rq)
during the next update. If the cfs_rq is throttled, the flag will stay
pending until the cfs_rq is unthrottled.

For propagating the utilization, we copy the utilization of the group
cfs_rq to the sched_entity.

For propagating the load, we have to take into account the load of the
whole task group in order to evaluate the load of the sched_entity.
Similarly to what was done before the rewrite of PELT, we add a
correction factor in case the task group's load is greater than its
share, so it will contribute the same load as a task of equal weight.

Change-Id: Id34a9888484716961c9027299c0b4d82881a39d1
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Morten.Rasmussen@arm.com
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: bsegall@google.com
Cc: kernellwp@gmail.com
Cc: pjt@google.com
Cc: yuyang.du@intel.com
Link: http://lkml.kernel.org/r/1478598827-32372-5-git-send-email-vincent.guittot@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 09a43ace1f986b003c118fdf6ddf1fd685692d49)
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
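For the utilization side, a hedged sketch of the copy described above,
modeled on the upstream update_tg_cfs_util() (gcfs_rq is the group's
own cfs_rq; illustrative):

  	/* copy the group cfs_rq's utilization to its sched_entity and
  	 * apply the resulting delta to the parent cfs_rq */
  	long delta = gcfs_rq->avg.util_avg - se->avg.util_avg;

  	se->avg.util_avg = gcfs_rq->avg.util_avg;
  	se->avg.util_sum = se->avg.util_avg * LOAD_AVG_MAX;

  	cfs_rq->avg.util_avg += delta;
  	cfs_rq->avg.util_sum = cfs_rq->avg.util_avg * LOAD_AVG_MAX;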
| | * | UPSTREAM: sched/fair: Fix hierarchical order in rq->leaf_cfs_rq_list  Vincent Guittot  2017-06-02
Fix the insertion of cfs_rq in rq->leaf_cfs_rq_list to ensure that a
child will always be called before its parent.

The hierarchical order in the shares update list has been introduced
by commit:

  67e86250f8ea ("sched: Introduce hierarchal order on shares update list")

With the current implementation a child can still be put after its
parent.

Let's take the example of:

       root
        \
         b
         /\
        c  d*
           |
           e*

with root -> b -> c already enqueued but not d -> e, so the
leaf_cfs_rq_list looks like:

  head -> c -> b -> root -> tail

The branch d -> e will be added the first time that they are enqueued,
starting with e then d.

When e is added, its parent is not already on the list, so e is put at
the tail:

  head -> c -> b -> root -> e -> tail

Then, d is added at the head because its parent is already on the
list:

  head -> d -> c -> b -> root -> e -> tail

e is not placed at the right position and will be called last, whereas
it should be called at the beginning.

Because it follows the bottom-up enqueue sequence, we are sure that we
will finish by adding either a cfs_rq without a parent or a cfs_rq
whose parent is already on the list. We can use this event to detect
when we have finished adding a new branch. For the others, whose
parents are not already added, we have to ensure that they will be
added after their children that have just been inserted in the steps
before, and after any potential parents that are already in the list.

The easiest way is to put the cfs_rq just after the last inserted one
and to keep track of it until the branch is fully added.

Change-Id: I4fe0b8502ea628c13d14e8e5c5279bce67fb8845
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Morten.Rasmussen@arm.com
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: bsegall@google.com
Cc: kernellwp@gmail.com
Cc: pjt@google.com
Cc: yuyang.du@intel.com
Link: http://lkml.kernel.org/r/1478598827-32372-3-git-send-email-vincent.guittot@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 9c2791f936ef5fd04a118b5c284f2c9a95f4a647)
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
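A sketch of the resulting insertion logic, with rq->tmp_alone_branch
tracking the last inserted cfs_rq of the branch being added (names per
the upstream commit; illustrative, error paths elided):

  	if (cfs_rq->tg->parent &&
  	    cfs_rq->tg->parent->cfs_rq[cpu]->on_list) {
  		/* parent already on the list: insert in front of it and
  		 * reset the branch cursor, the branch is complete */
  		list_add_rcu(&cfs_rq->leaf_cfs_rq_list,
  			&(cfs_rq->tg->parent->cfs_rq[cpu]->leaf_cfs_rq_list));
  		rq->tmp_alone_branch = &rq->leaf_cfs_rq_list;
  	} else if (!cfs_rq->tg->parent) {
  		/* root cfs_rq: everyone's ancestor, put it at the tail */
  		list_add_tail_rcu(&cfs_rq->leaf_cfs_rq_list,
  			&rq->leaf_cfs_rq_list);
  		rq->tmp_alone_branch = &rq->leaf_cfs_rq_list;
  	} else {
  		/* parent not added yet: queue after the last inserted
  		 * child and remember it until the branch is complete */
  		list_add_rcu(&cfs_rq->leaf_cfs_rq_list, rq->tmp_alone_branch);
  		rq->tmp_alone_branch = &cfs_rq->leaf_cfs_rq_list;
  	}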
| | * | BACKPORT: sched/fair: Factorize PELT update  Vincent Guittot  2017-06-02
Every time we modify the load/utilization of a sched_entity, we start
to sync it with its cfs_rq. This update is done in different ways:

 - when attaching/detaching a sched_entity, we update the cfs_rq and
   then we sync the entity with the cfs_rq.

 - when enqueueing/dequeuing the sched_entity, we update both
   sched_entity and cfs_rq metrics to now.

Use update_load_avg() every time we have to update and sync the cfs_rq
and sched_entity before changing the state of a sched_entity.

Change-Id: Ibde9a7e07ac80e9d5753bb4a0c30dfb3643cc666
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Morten.Rasmussen@arm.com
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: bsegall@google.com
Cc: kernellwp@gmail.com
Cc: pjt@google.com
Cc: yuyang.du@intel.com
Link: http://lkml.kernel.org/r/1478598827-32372-4-git-send-email-vincent.guittot@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
[backported FROMLIST]
Signed-off-by: Andres Oportus <andresoportus@google.com>
(cherry picked from commit d31b1a66cbe0931733583ad9d9e8c6cfd710907d)
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
| | * | UPSTREAM: sched/fair: Factorize attach/detach entity  Vincent Guittot  2017-06-02
Factorize post_init_entity_util_avg() and part of attach_task_cfs_rq()
into one function, attach_entity_cfs_rq().

Create a symmetric detach_entity_cfs_rq() function.

Change-Id: I44fc6bb5e71460be65f6b8928d4620c6c27a6a67
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Morten.Rasmussen@arm.com
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: bsegall@google.com
Cc: kernellwp@gmail.com
Cc: pjt@google.com
Cc: yuyang.du@intel.com
Link: http://lkml.kernel.org/r/1478598827-32372-2-git-send-email-vincent.guittot@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit df217913e72ec7e603d8b68cc4c70646cf7000db)
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
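A hedged sketch of the factorized pair (flags simplified relative to
the real patch; illustrative):

  static void attach_entity_cfs_rq(struct sched_entity *se)
  {
  	struct cfs_rq *cfs_rq = cfs_rq_of(se);

  	/* sync the entity with its cfs_rq, then attach its load */
  	update_load_avg(se, 0);
  	attach_entity_load_avg(cfs_rq, se);
  	update_tg_load_avg(cfs_rq, false);
  }

  static void detach_entity_cfs_rq(struct sched_entity *se)
  {
  	struct cfs_rq *cfs_rq = cfs_rq_of(se);

  	/* symmetric: sync, then remove the entity's contribution */
  	update_load_avg(se, 0);
  	detach_entity_load_avg(cfs_rq, se);
  	update_tg_load_avg(cfs_rq, false);
  }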
| | * | UPSTREAM: sched/fair: Improve PELT stuff some more  Peter Zijlstra  2017-06-02
Vincent noted that the update_tg_load_avg() usage in commit:

  3d30544f0212 ("sched/fair: Apply more PELT fixes")

isn't entirely sufficient. We need to call this function every time
cfs_rq->avg.load changes; this includes when update_cfs_rq_load_avg()
returns true, but {attach,detach}_entity_load_avg() themselves also
change it. This means we need to unconditionally call
update_tg_load_avg().

Also, add more comments.

Change-Id: I7e55fceb587601f73c760c8b0d47a7ef2b777b9e
Reported-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 7c3edd2c300b7ef2005a69dc727692ee07434aa5)
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
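In sketch form (illustrative, not the exact patch), the attach path
must propagate even when update_cfs_rq_load_avg() reported no change:

  	/* the return value alone is not enough: the attach below also
  	 * changes cfs_rq->avg.load, so the propagation to the task
  	 * group must be unconditional */
  	update_cfs_rq_load_avg(now, cfs_rq, true);
  	attach_entity_load_avg(cfs_rq, se);
  	update_tg_load_avg(cfs_rq, false);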
| | * | UPSTREAM: sched/fair: Apply more PELT fixes  Peter Zijlstra  2017-06-02
One additional 'rule' for using update_cfs_rq_load_avg() is that one
should call update_tg_load_avg() if it returns true.

Add a bunch of comments to hopefully clarify some of the rules:

 o You need to update the cfs_rq _before_ any entity attach/detach;
   this is important, because while for mathematical consistency this
   isn't strictly needed, it is required for the physical
   interpretation of the model: you attach/detach _now_.

 o When you modify the cfs_rq avg, you have to then call
   update_tg_load_avg() in order to propagate changes upwards.

 o (Fair) entities are always attached; switched_{to,from}_fair() deal
   with !fair. This directly follows from the definition of the cfs_rq
   averages, namely that they are a direct sum of all (runnable or
   blocked) entities on that rq.

It is the second rule that this patch enforces, but it adds comments
pertaining to all of them.

Change-Id: Icdc906e98c67b84cb9582c893bc761a9886be57a
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 3d30544f02120b884bba2a9466c87dba980e3be5)
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
| | * | UPSTREAM: sched/fair: Fix post_init_entity_util_avg() serialization  Peter Zijlstra  2017-06-02
Chris Wilson reported a divide by 0 at:

  post_init_entity_util_avg():

  >    725	if (cfs_rq->avg.util_avg != 0) {
  >    726		sa->util_avg  = cfs_rq->avg.util_avg * se->load.weight;
  > -> 727		sa->util_avg /= (cfs_rq->avg.load_avg + 1);
  >    728
  >    729		if (sa->util_avg > cap)
  >    730			sa->util_avg = cap;
  >    731	} else {

Which given the lack of serialization, and the code generated from
update_cfs_rq_load_avg() is entirely possible:

  	if (atomic_long_read(&cfs_rq->removed_load_avg)) {
  		s64 r = atomic_long_xchg(&cfs_rq->removed_load_avg, 0);
  		sa->load_avg = max_t(long, sa->load_avg - r, 0);
  		sa->load_sum = max_t(s64, sa->load_sum - r * LOAD_AVG_MAX, 0);
  		removed_load = 1;
  	}

turns into:

  ffffffff81087064:  49 8b 85 98 00 00 00  mov    0x98(%r13),%rax
  ffffffff8108706b:  48 85 c0              test   %rax,%rax
  ffffffff8108706e:  74 40                 je     ffffffff810870b0
  ffffffff81087070:  4c 89 f8              mov    %r15,%rax
  ffffffff81087073:  49 87 85 98 00 00 00  xchg   %rax,0x98(%r13)
  ffffffff8108707a:  49 29 45 70           sub    %rax,0x70(%r13)
  ffffffff8108707e:  4c 89 f9              mov    %r15,%rcx
  ffffffff81087081:  bb 01 00 00 00        mov    $0x1,%ebx
  ffffffff81087086:  49 83 7d 70 00        cmpq   $0x0,0x70(%r13)
  ffffffff8108708b:  49 0f 49 4d 70        cmovns 0x70(%r13),%rcx

Which you'll note ends up with 'sa->load_avg - r' in memory at
ffffffff8108707a.

By calling post_init_entity_util_avg() under rq->lock we're sure to be
fully serialized against PELT updates and cannot observe intermediate
state like this.

Change-Id: I56c11886102b7859df82e26c88b1b7c200a39f6e
Reported-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Yuyang Du <yuyang.du@intel.com>
Cc: bsegall@google.com
Cc: morten.rasmussen@arm.com
Cc: pjt@google.com
Cc: steve.muckle@linaro.org
Fixes: 2b8c41daba32 ("sched/fair: Initiate a new task's util avg to a bounded value")
Link: http://lkml.kernel.org/r/20160609130750.GQ30909@twins.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit b7fa30c9cc48c4f55663420472505d3b4f6e1705)
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
| | * | BACKPORT: sched/fair: Initiate a new task's util avg to a bounded value  Yuyang Du  2017-06-02
A new task's util_avg is set to full utilization of a CPU (100% time
running). This accelerates a new task's utilization ramp-up, useful to
boost its execution in early time. However, it may result in
(insanely) high utilization for a transient time period when a flood
of tasks are spawned. Importantly, it violates the "fundamentally
bounded" CPU utilization, and its side effect is negative if we don't
take any measure to bound it.

This patch proposes an algorithm to address this issue. It has two
methods to approach a sensible initial util_avg:

(1) An expected (or average) util_avg based on its cfs_rq's util_avg:

      util_avg = cfs_rq->util_avg / (cfs_rq->load_avg + 1) * se.load.weight

(2) A trajectory of how successive new tasks' util develops, which
    gives 1/2 of the left utilization budget to a new task such that
    the additional util is noticeably large (when overall util is low)
    or unnoticeably small (when overall util is high enough). In the
    meantime, the aggregate utilization is well bounded:

      util_avg_cap = (1024 - cfs_rq->avg.util_avg) / 2^n

    where n denotes the nth task.

If util_avg is larger than util_avg_cap, then the effective util is
clamped to the util_avg_cap.

Change-Id: Idafe989b24d9e70911666f09800bf1d5a011e1f4
Reported-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Signed-off-by: Yuyang Du <yuyang.du@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: bsegall@google.com
Cc: morten.rasmussen@arm.com
Cc: pjt@google.com
Cc: steve.muckle@linaro.org
Link: http://lkml.kernel.org/r/1459283456-21682-1-git-send-email-yuyang.du@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 2b8c41daba327c633228169e8bd8ec067ab443f8)
[integrate with schedfreq - schedfreq has a tuneable for init task util
 but this commit removes the use of the tuneable since we have a new
 algorithm for calculating an initial utilisation. I've left the
 tuneable in place, but it is no longer used even when schedfreq is the
 CPUFreq governor]
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
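Both methods combined, as a hedged sketch of post_init_entity_util_avg()
per the upstream patch (SCHED_CAPACITY_SCALE is 1024; illustrative):

  void post_init_entity_util_avg(struct sched_entity *se)
  {
  	struct cfs_rq *cfs_rq = cfs_rq_of(se);
  	struct sched_avg *sa = &se->avg;
  	/* method (2): half of the remaining utilization budget */
  	long cap = (long)(SCHED_CAPACITY_SCALE - cfs_rq->avg.util_avg) / 2;

  	if (cap > 0) {
  		if (cfs_rq->avg.util_avg != 0) {
  			/* method (1): expected util, clamped to cap */
  			sa->util_avg  = cfs_rq->avg.util_avg * se->load.weight;
  			sa->util_avg /= (cfs_rq->avg.load_avg + 1);
  			if (sa->util_avg > cap)
  				sa->util_avg = cap;
  		} else {
  			sa->util_avg = cap;
  		}
  		sa->util_sum = sa->util_avg * LOAD_AVG_MAX;
  	}
  }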
| | * | sched/fair: Simplify idle_idx handling in select_idle_sibling()  Dietmar Eggemann  2017-06-02
Rename best_idle to best_idle_cpu so the same name is used as in
find_best_target().

Fix the 'if (best_idle > 0)' check, since best_idle_cpu = 0 is a valid
target.

Use the 'unsigned long' data type for best_idle_capacity.

Since we're looking for the shallowest best_idle_cstate, initialize
best_idle_cstate = INT_MAX. For cpus which are not idle
(idle_idx = -1) the condition 'if (idle_idx < best_idle_cstate && ...)'
is never executed.

Change-Id: Ic5b63d58478696b3d1ec6253cf739a69a574cf99
Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
(cherry picked from commit 8bff5e9c0968108d465e1f2a4624fc5ec2f00849)
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
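The bookkeeping after the cleanup, sketched below; idle_get_state_idx()
and the idle_cpu() guard are assumed from the Android 4.4 EAS tree
(illustrative):

  	int best_idle_cpu = -1;			/* cpu 0 is a valid target */
  	int best_idle_cstate = INT_MAX;		/* looking for shallowest */
  	unsigned long best_idle_capacity = ULONG_MAX;

  	/* inside the cpu loop; an idle_cpu(i) guard keeps non-idle
  	 * cpus (idle_idx = -1) from reaching this comparison */
  	if (idle_idx < best_idle_cstate &&
  	    capacity_orig_of(i) <= best_idle_capacity) {
  		best_idle_cpu = i;
  		best_idle_cstate = idle_idx;
  		best_idle_capacity = capacity_orig_of(i);
  	}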
| | * | sched/fair: refactor find_best_target() for simplicity  Dietmar Eggemann  2017-06-02
Simplify backup_capacity handling and use the 'unsigned long' data
type for cpu capacity; simplify target_util handling; simplify
idle_idx handling and refactor min_util, new_util.

Also, return the first idle cpu immediately for a prefer_idle task.

Change-Id: Ic89e140f7b369f3965703fdc8463013d16e9b94a
Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
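The prefer_idle shortcut mentioned above, sketched (illustrative):

  	/* inside the cpu iteration of find_best_target() */
  	if (prefer_idle && idle_cpu(i))
  		return i;	/* first idle cpu wins immediately */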
| | * | sched/fair: Change cpu iteration order in find_best_target()  Dietmar Eggemann  2017-06-02
The schedtune task parameter 'boosted' is mapped into the cpu
iteration order. Currently for 'boosted' equal true the iteration
starts at the last cpu (NR_CPUS-1), whereas for 'boosted' equal false
it starts at the first cpu (0).

This only has the desired effect if the cpu topology ordering matches
the underlying assumption. This e.g. is the case for the Qc
snapdragon 821 with its [L0 L1 b0 b1] cpu topology layout
(L=lower max freq, b=higher max freq). This results in cpus with
higher maximum capacity being given the highest logical cpu ids.
However not all big.LITTLE systems enumerate their cpus in the same
way. For example, the ARM Versatile Express Juno board has 6 cpus for
which the default configuration has topology [L0 b0 b1 L1 L2 L3].

To make this approach independent of the cpu topology layout, it now
iterates over the cpus in the order of the sched_groups of the EAS
sched_domain (sd_ea). The order of cpu iteration is different for the
different cpu types in case the cpu is used to dereference sd_ea.
Considering the Qc snapdragon 821 again, for cpu L0 and L1 the order
is 'L0->L1->b0->b1' whereas for b0 and b1 the order is
'b0->b1->L0->L1'.

This approach does not allow the exact same iteration order as with
the currently used flat iteration over [0 .. NR_CPUS-1], but the cpus
are ordered by the original cpu capacity.

The cpu iteration is now done in the sd_ea sched_group order required
by the 'boosted' value ['L0->L1->b0->b1'/'b0->b1->L0->L1'] rather than
forward/backward over the flat cpu space ['L0->L1->b0->b1'/
'b1->b0->L1->L0'].

Change-Id: I8fbe2073dedd2ecb1c750620c6000c11a5ff4358
Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
(cherry picked from commit a0c6a4272c3968c0ff50d3fed65f5865b72d777b)
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
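A sketch of the sched_group-based iteration; per_cpu(sd_ea, ...) is
the EAS sched_domain pointer assumed from this tree, and start_cpu is
a hypothetical cpu chosen according to 'boosted' (illustrative, NULL
checks elided):

  	struct sched_domain *sd;
  	struct sched_group *sg;
  	int i;

  	/* the cpu used to dereference sd_ea picks the starting group:
  	 * a big cpu for boosted tasks, a little cpu otherwise */
  	sd = rcu_dereference(per_cpu(sd_ea, start_cpu));
  	sg = sd->groups;

  	do {
  		for_each_cpu_and(i, tsk_cpus_allowed(p),
  				 sched_group_cpus(sg)) {
  			/* evaluate candidate cpu i */
  		}
  	} while (sg = sg->next, sg != sd->groups);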
| | * | sched: Remove sysctl_sched_is_big_little  Dietmar Eggemann  2017-06-02
With the new wakeup approach this sysctl is not necessary any more.

Change-Id: I52114b3c918791f6a4f9f30f50002919ccbc1a9c
Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
(cherry picked from commit 885c0d503bcdf0ef4e9b46822496f16b20aa3bbd)
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
| | * | sched/fair: Code !is_big_little path into select_energy_cpu_brute()  Dietmar Eggemann  2017-06-02
This patch replaces the existing EAS upstream implementation of
select_energy_cpu_brute() with the find_best_target() based one used
previously in Android.

It also removes the cpumask 'and' from select_energy_cpu_brute(); see
the existing use of 'cpu = smp_processor_id()' in
select_task_rq_fair().

Change-Id: If678c002efaa87d1ba3ec9989a4e9f8df98b83ec
Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
[ added guarding for non-schedtune builds ]
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
| | * | EAS: sched/fair: Re-integrate 'honor sync wakeups' into wakeup path  Dietmar Eggemann  2017-06-02
This patch re-integrates into select_energy_cpu_brute() the sync-wakeup
handling which was initially added to energy_aware_wake_cpu() by
3b9d7554aeec ("EAS: sched/fair: tunable to honor sync wakeups").

Change-Id: I748fde3ecdeb44651179bce0a5bb8dd82d1903f6
Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
(cherry picked from commit b75b7286cb068d5761621ea134c23dd131db953f)
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
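The re-integrated check, sketched from the referenced patch (the
sysctl name is assumed from that patch; illustrative):

  	/* sync wakeup: prefer the waking cpu if the hint is enabled
  	 * and the task is allowed to run there */
  	if (sysctl_sched_sync_hint_enable && sync) {
  		int cpu = smp_processor_id();

  		if (cpumask_test_cpu(cpu, tsk_cpus_allowed(p)))
  			return cpu;
  	}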
| | * | Fixup!: sched/fair.c: Set SchedTune specific struct energy_env.task  Dietmar Eggemann  2017-06-02
This has to be done in the caller of the SchedTune version of
energy_diff() to avoid a NULL pointer dereference in energy_diff().

Change-Id: I3f0f68dbd11efb15bbb3b1832f8294419ed85241
Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
(cherry picked from commit 14531d4e245d063f713ee5ed835df958e6c7838f)
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
| | * | sched/fair: Energy-aware wake-up task placement  Morten Rasmussen  2017-06-02
When the system is not overutilized, place waking tasks on the most
energy efficient cpu. Previous attempts reduced the search space by
matching task utilization to cpu capacity before consulting the energy
model, as this is an expensive operation. The search heuristics didn't
work very well and, lacking any better alternatives, this patch takes
the brute-force route and tries all potential targets.

This approach doesn't scale, but it might be sufficient for many
embedded applications while work is continuing on a heuristic that can
minimize the necessary computations. The heuristic must be derived
from the platform energy model rather than make additional
assumptions, such as that lower capacity implies better energy
efficiency. PeterZ mentioned in the past that we might be able to
derive some simpler deciding functions using mathematical (modal?)
analysis.

Change-Id: I772bacb4c8fd599f8006fa422f842e66377a9c6c
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
[rebase: on top of msm-google/android-msm-marlin-3.18]
Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
(cherry picked from commit a894422dbdb7b77ea2acfe7ff909ccb5ded23514)
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
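A hedged sketch of the brute-force search; struct energy_env and
energy_diff() are assumed from the EAS patches in this tree, the loop
structure itself is illustrative:

  	int target_cpu = prev_cpu;
  	int best_diff = 0;	/* energy_diff() < 0 means energy saved */
  	int cpu;

  	for_each_cpu_and(cpu, tsk_cpus_allowed(p), cpu_online_mask) {
  		struct energy_env eenv = {
  			.util_delta	= task_util(p),
  			.src_cpu	= prev_cpu,
  			.dst_cpu	= cpu,
  		};
  		int diff;

  		if (cpu == prev_cpu)
  			continue;

  		/* try every potential target: no capacity-based pruning */
  		diff = energy_diff(&eenv);
  		if (diff < best_diff) {
  			best_diff = diff;
  			target_cpu = cpu;
  		}
  	}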