494 files changed, 9324 insertions, 4200 deletions
diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu index e4cd3be77663..f97d1aaec1f9 100644 --- a/Documentation/ABI/testing/sysfs-devices-system-cpu +++ b/Documentation/ABI/testing/sysfs-devices-system-cpu @@ -279,6 +279,8 @@ What: /sys/devices/system/cpu/vulnerabilities /sys/devices/system/cpu/vulnerabilities/spec_store_bypass /sys/devices/system/cpu/vulnerabilities/l1tf /sys/devices/system/cpu/vulnerabilities/mds + /sys/devices/system/cpu/vulnerabilities/tsx_async_abort + /sys/devices/system/cpu/vulnerabilities/itlb_multihit Date: January 2018 Contact: Linux kernel mailing list <linux-kernel@vger.kernel.org> Description: Information about CPU vulnerabilities diff --git a/Documentation/hw-vuln/tsx_async_abort.rst b/Documentation/hw-vuln/tsx_async_abort.rst new file mode 100644 index 000000000000..38beda735f39 --- /dev/null +++ b/Documentation/hw-vuln/tsx_async_abort.rst @@ -0,0 +1,268 @@ +.. SPDX-License-Identifier: GPL-2.0 + +TAA - TSX Asynchronous Abort +====================================== + +TAA is a hardware vulnerability that allows unprivileged speculative access to +data which is available in various CPU internal buffers by using asynchronous +aborts within an Intel TSX transactional region. + +Affected processors +------------------- + +This vulnerability only affects Intel processors that support Intel +Transactional Synchronization Extensions (TSX) when the TAA_NO bit (bit 8) +is 0 in the IA32_ARCH_CAPABILITIES MSR. On processors where the MDS_NO bit +(bit 5) is 0 in the IA32_ARCH_CAPABILITIES MSR, the existing MDS mitigations +also mitigate against TAA. + +Whether a processor is affected or not can be read out from the TAA +vulnerability file in sysfs. See :ref:`tsx_async_abort_sys_info`. + +Related CVEs +------------ + +The following CVE entry is related to this TAA issue: + + ============== ===== =================================================== + CVE-2019-11135 TAA TSX Asynchronous Abort (TAA) condition on some + microprocessors utilizing speculative execution may + allow an authenticated user to potentially enable + information disclosure via a side channel with + local access. + ============== ===== =================================================== + +Problem +------- + +When performing store, load or L1 refill operations, processors write +data into temporary microarchitectural structures (buffers). The data in +those buffers can be forwarded to load operations as an optimization. + +Intel TSX is an extension to the x86 instruction set architecture that adds +hardware transactional memory support to improve performance of multi-threaded +software. TSX lets the processor expose and exploit concurrency hidden in an +application due to dynamically avoiding unnecessary synchronization. + +TSX supports atomic memory transactions that are either committed (success) or +aborted. During an abort, operations that happened within the transactional region +are rolled back. An asynchronous abort takes place, among other options, when a +different thread accesses a cache line that is also used within the transactional +region when that access might lead to a data race. + +Immediately after an uncompleted asynchronous abort, certain speculatively +executed loads may read data from those internal buffers and pass it to dependent +operations. This can be then used to infer the value via a cache side channel +attack. 
+ +Because the buffers are potentially shared between Hyper-Threads, cross +Hyper-Thread attacks are possible. + +The victim of a malicious actor does not need to make use of TSX. Only the +attacker needs to begin a TSX transaction and raise an asynchronous abort +which in turn potentially leaks data stored in the buffers. + +More detailed technical information is available in the TAA specific x86 +architecture section: :ref:`Documentation/x86/tsx_async_abort.rst <tsx_async_abort>`. + + +Attack scenarios +---------------- + +Attacks against the TAA vulnerability can be implemented from unprivileged +applications running on hosts or guests. + +As for MDS, the attacker has no control over the memory addresses that can +be leaked. Only the victim is responsible for bringing data to the CPU. As +a result, the malicious actor has to sample as much data as possible and +then postprocess it to try to infer any useful information from it. + +A potential attacker only has read access to the data. Also, there is no direct +privilege escalation by using this technique. + + +.. _tsx_async_abort_sys_info: + +TAA system information +----------------------- + +The Linux kernel provides a sysfs interface to enumerate the current TAA status +of mitigated systems. The relevant sysfs file is: + +/sys/devices/system/cpu/vulnerabilities/tsx_async_abort + +The possible values in this file are: + +.. list-table:: + + * - 'Vulnerable' + - The CPU is affected by this vulnerability and the microcode and kernel mitigation are not applied. + * - 'Vulnerable: Clear CPU buffers attempted, no microcode' + - The system tries to clear the buffers but the microcode might not support the operation. + * - 'Mitigation: Clear CPU buffers' + - The microcode has been updated to clear the buffers. TSX is still enabled. + * - 'Mitigation: TSX disabled' + - TSX is disabled. + * - 'Not affected' + - The CPU is not affected by this issue. + +.. _ucode_needed: + +Best effort mitigation mode +^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +If the processor is vulnerable, but the availability of the microcode-based +mitigation mechanism is not advertised via CPUID, the kernel selects a best +effort mitigation mode. This mode invokes the mitigation instructions +without a guarantee that they clear the CPU buffers. + +This is done to address virtualization scenarios where the host has the +microcode update applied, but the hypervisor is not yet updated to expose the +CPUID to the guest. If the host has updated microcode, the protection takes +effect; otherwise a few CPU cycles are wasted pointlessly. + +The state in the tsx_async_abort sysfs file reflects this situation +accordingly. + + +Mitigation mechanism +-------------------- + +The kernel detects the affected CPUs and the presence of the microcode which is +required. If a CPU is affected and the microcode is available, then the kernel +enables the mitigation by default. + + +The mitigation can be controlled at boot time via a kernel command line option. +See :ref:`taa_mitigation_control_command_line`. + +.. _virt_mechanism: + +Virtualization mitigation +^^^^^^^^^^^^^^^^^^^^^^^^^ + +Affected systems where the host has TAA microcode and TAA is mitigated by +having disabled TSX previously are not vulnerable regardless of the status +of the VMs. + +In all other cases, if the host either does not have the TAA microcode or +the kernel is not mitigated, the system might be vulnerable. + + +..
_taa_mitigation_control_command_line: + +Mitigation control on the kernel command line +--------------------------------------------- + +The kernel command line allows control of the TAA mitigations at boot time with +the option "tsx_async_abort=". The valid arguments for this option are: + + ============ ============================================================= + off This option disables the TAA mitigation on affected platforms. + If the system has TSX enabled (see next parameter) and the CPU + is affected, the system is vulnerable. + + full TAA mitigation is enabled. If TSX is enabled, on an affected + system it will clear CPU buffers on ring transitions. On + systems which are MDS-affected and deploy MDS mitigation, + TAA is also mitigated. Specifying this option on those + systems will have no effect. + ============ ============================================================= + +Not specifying this option is equivalent to "tsx_async_abort=full". + +The kernel command line also allows control of the TSX feature using the +parameter "tsx=" on CPUs which support TSX control. MSR_IA32_TSX_CTRL is used +to control the TSX feature and the enumeration of the TSX feature bits (RTM +and HLE) in CPUID. + +The valid options are: + + ============ ============================================================= + off Disables TSX on the system. + + Note that this option takes effect only on newer CPUs which are + not vulnerable to MDS, i.e., have MSR_IA32_ARCH_CAPABILITIES.MDS_NO=1 + and which get the new IA32_TSX_CTRL MSR through a microcode + update. This new MSR allows for the reliable deactivation of + the TSX functionality. + + on Enables TSX. + + Although there are mitigations for all known security + vulnerabilities, TSX has been known to be an accelerator for + several previous speculation-related CVEs, and so there may be + unknown security risks associated with leaving it enabled. + + auto Disables TSX if X86_BUG_TAA is present, otherwise enables TSX + on the system. + ============ ============================================================= + +Not specifying this option is equivalent to "tsx=off". + +The following combinations of the "tsx_async_abort" and "tsx" options are possible. For +affected platforms, tsx=auto is equivalent to tsx=off and the result will be: + + ========= ========================== ========================================= + tsx=on tsx_async_abort=full The system will use VERW to clear CPU + buffers. Cross-thread attacks are still + possible on SMT machines. + tsx=on tsx_async_abort=off The system is vulnerable. + tsx=off tsx_async_abort=full TSX might be disabled if microcode + provides a TSX control MSR. If so, + system is not vulnerable. + tsx=off tsx_async_abort=off ditto + ========= ========================== ========================================= + + +For unaffected platforms "tsx=on" and "tsx_async_abort=full" do not clear CPU +buffers. For platforms without TSX control (MSR_IA32_ARCH_CAPABILITIES.MDS_NO=0) +the "tsx" command line argument has no effect. + +For the affected platforms, the table below indicates the mitigation status for the +combinations of CPUID bit MD_CLEAR and IA32_ARCH_CAPABILITIES MSR bits MDS_NO +and TSX_CTRL_MSR.
+ + ======= ========= ============= ======================================== + MDS_NO MD_CLEAR TSX_CTRL_MSR Status + ======= ========= ============= ======================================== + 0 0 0 Vulnerable (needs microcode) + 0 1 0 MDS and TAA mitigated via VERW + 1 1 0 MDS fixed, TAA vulnerable if TSX enabled + because MD_CLEAR has no meaning and + VERW is not guaranteed to clear buffers + 1 X 1 MDS fixed, TAA can be mitigated by + VERW or TSX_CTRL_MSR + ======= ========= ============= ======================================== + +Mitigation selection guide +-------------------------- + +1. Trusted userspace and guests +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +If all user space applications are from a trusted source and do not execute +untrusted code which is supplied externally, then the mitigation can be +disabled. The same applies to virtualized environments with trusted guests. + + +2. Untrusted userspace and guests +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +If there are untrusted applications or guests on the system, enabling TSX +might allow a malicious actor to leak data from the host or from other +processes running on the same physical core. + +If the microcode is available and the TSX is disabled on the host, attacks +are prevented in a virtualized environment as well, even if the VMs do not +explicitly enable the mitigation. + + +.. _taa_default_mitigations: + +Default mitigations +------------------- + +The kernel's default action for vulnerable processors is: + + - Deploy TSX disable mitigation (tsx_async_abort=full tsx=off). diff --git a/Documentation/input/event-codes.txt b/Documentation/input/event-codes.txt index 3f0f5ce3338b..a65b3bf50389 100644 --- a/Documentation/input/event-codes.txt +++ b/Documentation/input/event-codes.txt @@ -297,7 +297,10 @@ them as any other INPUT_PROP_BUTTONPAD device. INPUT_PROP_ACCELEROMETER ------------------------- Directional axes on this device (absolute and/or relative x, y, z) represent -accelerometer data. All other axes retain their meaning. A device must not mix +accelerometer data. Some devices also report gyroscope data, which devices +can report through the rotational axes (absolute and/or relative rx, ry, rz). + +All other axes retain their meaning. A device must not mix regular directional axes and accelerometer axes on the same event node. Guidelines: diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt index b3c7bdffbdc0..d170c8fcced6 100644 --- a/Documentation/kernel-parameters.txt +++ b/Documentation/kernel-parameters.txt @@ -2240,6 +2240,7 @@ bytes respectively. Such letter suffixes can also be entirely omitted. spectre_v2_user=off [X86] spec_store_bypass_disable=off [X86] mds=off [X86] + tsx_async_abort=off [X86] auto (default) Mitigate all CPU vulnerabilities, but leave SMT @@ -4131,6 +4132,67 @@ bytes respectively. Such letter suffixes can also be entirely omitted. platforms where RDTSC is slow and this accounting can add overhead. + tsx= [X86] Control Transactional Synchronization + Extensions (TSX) feature in Intel processors that + support TSX control. + + This parameter controls the TSX feature. The options are: + + on - Enable TSX on the system. Although there are + mitigations for all known security vulnerabilities, + TSX has been known to be an accelerator for + several previous speculation-related CVEs, and + so there may be unknown security risks associated + with leaving it enabled. + + off - Disable TSX on the system. 
(Note that this + option takes effect only on newer CPUs which are + not vulnerable to MDS, i.e., have + MSR_IA32_ARCH_CAPABILITIES.MDS_NO=1 and which get + the new IA32_TSX_CTRL MSR through a microcode + update. This new MSR allows for the reliable + deactivation of the TSX functionality.) + + auto - Disable TSX if X86_BUG_TAA is present, + otherwise enable TSX on the system. + + Not specifying this option is equivalent to tsx=off. + + See Documentation/hw-vuln/tsx_async_abort.rst + for more details. + + tsx_async_abort= [X86,INTEL] Control mitigation for the TSX Async + Abort (TAA) vulnerability. + + Similar to Micro-architectural Data Sampling (MDS) + certain CPUs that support Transactional + Synchronization Extensions (TSX) are vulnerable to an + exploit against CPU internal buffers which can forward + information to a disclosure gadget under certain + conditions. + + In vulnerable processors, the speculatively forwarded + data can be used in a cache side channel attack, to + access data to which the attacker does not have direct + access. + + This parameter controls the TAA mitigation. The + options are: + + full - Enable TAA mitigation on vulnerable CPUs + if TSX is enabled. + + off - Unconditionally disable TAA mitigation + + Not specifying this option is equivalent to + tsx_async_abort=full. On CPUs which are MDS affected + and deploy MDS mitigation, TAA mitigation is not + required and doesn't provide any additional + mitigation. + + For details see: + Documentation/hw-vuln/tsx_async_abort.rst + turbografx.map[2|3]= [HW,JOY] TurboGraFX parallel port interface Format: diff --git a/Documentation/usb/rio.txt b/Documentation/usb/rio.txt deleted file mode 100644 index aee715af7db7..000000000000 --- a/Documentation/usb/rio.txt +++ /dev/null @@ -1,138 +0,0 @@ -Copyright (C) 1999, 2000 Bruce Tenison -Portions Copyright (C) 1999, 2000 David Nelson -Thanks to David Nelson for guidance and the usage of the scanner.txt -and scanner.c files to model our driver and this informative file. - -Mar. 2, 2000 - -CHANGES - -- Initial Revision - - -OVERVIEW - -This README will address issues regarding how to configure the kernel -to access a RIO 500 mp3 player. -Before I explain how to use this to access the Rio500 please be warned: - -W A R N I N G: --------------- - -Please note that this software is still under development. The authors -are in no way responsible for any damage that may occur, no matter how -inconsequential. - -It seems that the Rio has a problem when sending .mp3 with low batteries. -I suggest when the batteries are low and you want to transfer stuff that you -replace it with a fresh one. In my case, what happened is I lost two 16kb -blocks (they are no longer usable to store information to it). But I don't -know if that's normal or not; it could simply be a problem with the flash -memory. - -In an extreme case, I left my Rio playing overnight and the batteries wore -down to nothing and appear to have corrupted the flash memory. My RIO -needed to be replaced as a result. Diamond tech support is aware of the -problem. Do NOT allow your batteries to wear down to nothing before -changing them. It appears RIO 500 firmware does not handle low battery -power well at all. - -On systems with OHCI controllers, the kernel OHCI code appears to have -power on problems with some chipsets. If you are having problems -connecting to your RIO 500, try turning it on first and then plugging it -into the USB cable. 
- -Contact information: --------------------- - - The main page for the project is hosted at sourceforge.net in the following - URL: <http://rio500.sourceforge.net>. You can also go to the project's - sourceforge home page at: <http://sourceforge.net/projects/rio500/>. - There is also a mailing list: rio500-users@lists.sourceforge.net - -Authors: -------- - -Most of the code was written by Cesar Miquel <miquel@df.uba.ar>. Keith -Clayton <kclayton@jps.net> is incharge of the PPC port and making sure -things work there. Bruce Tenison <btenison@dibbs.net> is adding support -for .fon files and also does testing. The program will mostly sure be -re-written and Pete Ikusz along with the rest will re-design it. I would -also like to thank Tri Nguyen <tmn_3022000@hotmail.com> who provided use -with some important information regarding the communication with the Rio. - -ADDITIONAL INFORMATION and Userspace tools - -http://rio500.sourceforge.net/ - - -REQUIREMENTS - -A host with a USB port. Ideally, either a UHCI (Intel) or OHCI -(Compaq and others) hardware port should work. - -A Linux development kernel (2.3.x) with USB support enabled or a -backported version to linux-2.2.x. See http://www.linux-usb.org for -more information on accomplishing this. - -A Linux kernel with RIO 500 support enabled. - -'lspci' which is only needed to determine the type of USB hardware -available in your machine. - -CONFIGURATION - -Using `lspci -v`, determine the type of USB hardware available. - - If you see something like: - - USB Controller: ...... - Flags: ..... - I/O ports at .... - - Then you have a UHCI based controller. - - If you see something like: - - USB Controller: ..... - Flags: .... - Memory at ..... - - Then you have a OHCI based controller. - -Using `make menuconfig` or your preferred method for configuring the -kernel, select 'Support for USB', 'OHCI/UHCI' depending on your -hardware (determined from the steps above), 'USB Diamond Rio500 support', and -'Preliminary USB device filesystem'. Compile and install the modules -(you may need to execute `depmod -a` to update the module -dependencies). - -Add a device for the USB rio500: - `mknod /dev/usb/rio500 c 180 64` - -Set appropriate permissions for /dev/usb/rio500 (don't forget about -group and world permissions). Both read and write permissions are -required for proper operation. - -Load the appropriate modules (if compiled as modules): - - OHCI: - modprobe usbcore - modprobe usb-ohci - modprobe rio500 - - UHCI: - modprobe usbcore - modprobe usb-uhci (or uhci) - modprobe rio500 - -That's it. The Rio500 Utils at: http://rio500.sourceforge.net should -be able to access the rio500. - -BUGS - -If you encounter any problems feel free to drop me an email. - -Bruce Tenison -btenison@dibbs.net - diff --git a/Documentation/x86/tsx_async_abort.rst b/Documentation/x86/tsx_async_abort.rst new file mode 100644 index 000000000000..4a4336a89372 --- /dev/null +++ b/Documentation/x86/tsx_async_abort.rst @@ -0,0 +1,117 @@ +.. SPDX-License-Identifier: GPL-2.0 + +TSX Async Abort (TAA) mitigation +================================ + +.. _tsx_async_abort: + +Overview +-------- + +TSX Async Abort (TAA) is a side channel attack on internal buffers in some +Intel processors similar to Microachitectural Data Sampling (MDS). In this +case certain loads may speculatively pass invalid data to dependent operations +when an asynchronous abort condition is pending in a Transactional +Synchronization Extensions (TSX) transaction. 
This includes loads with no +fault or assist condition. Such loads may speculatively expose stale data from +the same uarch data structures as in MDS, with same scope of exposure i.e. +same-thread and cross-thread. This issue affects all current processors that +support TSX. + +Mitigation strategy +------------------- + +a) TSX disable - one of the mitigations is to disable TSX. A new MSR +IA32_TSX_CTRL will be available in future and current processors after +microcode update which can be used to disable TSX. In addition, it +controls the enumeration of the TSX feature bits (RTM and HLE) in CPUID. + +b) Clear CPU buffers - similar to MDS, clearing the CPU buffers mitigates this +vulnerability. More details on this approach can be found in +:ref:`Documentation/hw-vuln/mds.rst <mds>`. + +Kernel internal mitigation modes +-------------------------------- + + ============= ============================================================ + off Mitigation is disabled. Either the CPU is not affected or + tsx_async_abort=off is supplied on the kernel command line. + + tsx disabled Mitigation is enabled. TSX feature is disabled by default at + bootup on processors that support TSX control. + + verw Mitigation is enabled. CPU is affected and MD_CLEAR is + advertised in CPUID. + + ucode needed Mitigation is enabled. CPU is affected and MD_CLEAR is not + advertised in CPUID. That is mainly for virtualization + scenarios where the host has the updated microcode but the + hypervisor does not expose MD_CLEAR in CPUID. It's a best + effort approach without guarantee. + ============= ============================================================ + +If the CPU is affected and the "tsx_async_abort" kernel command line parameter is +not provided then the kernel selects an appropriate mitigation depending on the +status of RTM and MD_CLEAR CPUID bits. + +Below tables indicate the impact of tsx=on|off|auto cmdline options on state of +TAA mitigation, VERW behavior and TSX feature for various combinations of +MSR_IA32_ARCH_CAPABILITIES bits. + +1. "tsx=off" + +========= ========= ============ ============ ============== =================== ====================== +MSR_IA32_ARCH_CAPABILITIES bits Result with cmdline tsx=off +---------------------------------- ------------------------------------------------------------------------- +TAA_NO MDS_NO TSX_CTRL_MSR TSX state VERW can clear TAA mitigation TAA mitigation + after bootup CPU buffers tsx_async_abort=off tsx_async_abort=full +========= ========= ============ ============ ============== =================== ====================== + 0 0 0 HW default Yes Same as MDS Same as MDS + 0 0 1 Invalid case Invalid case Invalid case Invalid case + 0 1 0 HW default No Need ucode update Need ucode update + 0 1 1 Disabled Yes TSX disabled TSX disabled + 1 X 1 Disabled X None needed None needed +========= ========= ============ ============ ============== =================== ====================== + +2. 
"tsx=on" + +========= ========= ============ ============ ============== =================== ====================== +MSR_IA32_ARCH_CAPABILITIES bits Result with cmdline tsx=on +---------------------------------- ------------------------------------------------------------------------- +TAA_NO MDS_NO TSX_CTRL_MSR TSX state VERW can clear TAA mitigation TAA mitigation + after bootup CPU buffers tsx_async_abort=off tsx_async_abort=full +========= ========= ============ ============ ============== =================== ====================== + 0 0 0 HW default Yes Same as MDS Same as MDS + 0 0 1 Invalid case Invalid case Invalid case Invalid case + 0 1 0 HW default No Need ucode update Need ucode update + 0 1 1 Enabled Yes None Same as MDS + 1 X 1 Enabled X None needed None needed +========= ========= ============ ============ ============== =================== ====================== + +3. "tsx=auto" + +========= ========= ============ ============ ============== =================== ====================== +MSR_IA32_ARCH_CAPABILITIES bits Result with cmdline tsx=auto +---------------------------------- ------------------------------------------------------------------------- +TAA_NO MDS_NO TSX_CTRL_MSR TSX state VERW can clear TAA mitigation TAA mitigation + after bootup CPU buffers tsx_async_abort=off tsx_async_abort=full +========= ========= ============ ============ ============== =================== ====================== + 0 0 0 HW default Yes Same as MDS Same as MDS + 0 0 1 Invalid case Invalid case Invalid case Invalid case + 0 1 0 HW default No Need ucode update Need ucode update + 0 1 1 Disabled Yes TSX disabled TSX disabled + 1 X 1 Enabled X None needed None needed +========= ========= ============ ============ ============== =================== ====================== + +In the tables, TSX_CTRL_MSR is a new bit in MSR_IA32_ARCH_CAPABILITIES that +indicates whether MSR_IA32_TSX_CTRL is supported. + +There are two control bits in IA32_TSX_CTRL MSR: + + Bit 0: When set it disables the Restricted Transactional Memory (RTM) + sub-feature of TSX (will force all transactions to abort on the + XBEGIN instruction). + + Bit 1: When set it disables the enumeration of the RTM and HLE feature + (i.e. it will make CPUID(EAX=7).EBX{bit4} and + CPUID(EAX=7).EBX{bit11} read as 0). 
diff --git a/MAINTAINERS b/MAINTAINERS index 1a92bf2191e6..c927bdcef19b 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -11186,13 +11186,6 @@ W: http://www.linux-usb.org/usbnet S: Maintained F: drivers/net/usb/dm9601.c -USB DIAMOND RIO500 DRIVER -M: Cesar Miquel <miquel@df.uba.ar> -L: rio500-users@lists.sourceforge.net -W: http://rio500.sourceforge.net -S: Maintained -F: drivers/usb/misc/rio500* - USB EHCI DRIVER M: Alan Stern <stern@rowland.harvard.edu> L: linux-usb@vger.kernel.org @@ -1,6 +1,6 @@ VERSION = 4 PATCHLEVEL = 4 -SUBLEVEL = 194 +SUBLEVEL = 202 EXTRAVERSION = NAME = Blurry Fish Butt @@ -844,6 +844,12 @@ KBUILD_CFLAGS += $(call cc-option,-Werror=strict-prototypes) # Prohibit date/time macros, which would make the build non-deterministic KBUILD_CFLAGS += $(call cc-option,-Werror=date-time) +# ensure -fcf-protection is disabled when using retpoline as it is +# incompatible with -mindirect-branch=thunk-extern +ifdef CONFIG_RETPOLINE +KBUILD_CFLAGS += $(call cc-option,-fcf-protection=none) +endif + # use the deterministic mode of AR if available KBUILD_ARFLAGS := $(call ar-option,D) diff --git a/arch/arm/boot/dts/am4372.dtsi b/arch/arm/boot/dts/am4372.dtsi index 3ef1d5a26389..3bb5254a227a 100644 --- a/arch/arm/boot/dts/am4372.dtsi +++ b/arch/arm/boot/dts/am4372.dtsi @@ -1002,6 +1002,8 @@ ti,hwmods = "dss_dispc"; clocks = <&disp_clk>; clock-names = "fck"; + + max-memory-bandwidth = <230000000>; }; rfbi: rfbi@4832a800 { diff --git a/arch/arm/boot/dts/logicpd-torpedo-som.dtsi b/arch/arm/boot/dts/logicpd-torpedo-som.dtsi index e05670423d8b..a6c59bf698b3 100644 --- a/arch/arm/boot/dts/logicpd-torpedo-som.dtsi +++ b/arch/arm/boot/dts/logicpd-torpedo-som.dtsi @@ -169,3 +169,7 @@ &twl_gpio { ti,use-leds; }; + +&twl_keypad { + status = "disabled"; +}; diff --git a/arch/arm/configs/badge4_defconfig b/arch/arm/configs/badge4_defconfig index d59009878312..067d73e3b28b 100644 --- a/arch/arm/configs/badge4_defconfig +++ b/arch/arm/configs/badge4_defconfig @@ -97,7 +97,6 @@ CONFIG_USB_SERIAL_PL2303=m CONFIG_USB_SERIAL_CYBERJACK=m CONFIG_USB_SERIAL_XIRCOM=m CONFIG_USB_SERIAL_OMNINET=m -CONFIG_USB_RIO500=m CONFIG_EXT2_FS=m CONFIG_EXT3_FS=m CONFIG_MSDOS_FS=y diff --git a/arch/arm/configs/corgi_defconfig b/arch/arm/configs/corgi_defconfig index c1470a00f55a..031d9d3549b9 100644 --- a/arch/arm/configs/corgi_defconfig +++ b/arch/arm/configs/corgi_defconfig @@ -207,7 +207,6 @@ CONFIG_USB_SERIAL_XIRCOM=m CONFIG_USB_SERIAL_OMNINET=m CONFIG_USB_EMI62=m CONFIG_USB_EMI26=m -CONFIG_USB_RIO500=m CONFIG_USB_LEGOTOWER=m CONFIG_USB_LCD=m CONFIG_USB_LED=m diff --git a/arch/arm/configs/s3c2410_defconfig b/arch/arm/configs/s3c2410_defconfig index 01116ee1284b..a199b0e1a6ea 100644 --- a/arch/arm/configs/s3c2410_defconfig +++ b/arch/arm/configs/s3c2410_defconfig @@ -354,7 +354,6 @@ CONFIG_USB_EMI62=m CONFIG_USB_EMI26=m CONFIG_USB_ADUTUX=m CONFIG_USB_SEVSEG=m -CONFIG_USB_RIO500=m CONFIG_USB_LEGOTOWER=m CONFIG_USB_LCD=m CONFIG_USB_LED=m diff --git a/arch/arm/configs/spitz_defconfig b/arch/arm/configs/spitz_defconfig index a1ede1966baf..7d9aa284cb6f 100644 --- a/arch/arm/configs/spitz_defconfig +++ b/arch/arm/configs/spitz_defconfig @@ -202,7 +202,6 @@ CONFIG_USB_SERIAL_XIRCOM=m CONFIG_USB_SERIAL_OMNINET=m CONFIG_USB_EMI62=m CONFIG_USB_EMI26=m -CONFIG_USB_RIO500=m CONFIG_USB_LEGOTOWER=m CONFIG_USB_LCD=m CONFIG_USB_LED=m diff --git a/arch/arm/include/asm/arch_gicv3.h b/arch/arm/include/asm/arch_gicv3.h index e08d15184056..af25c32b1ccc 100644 --- a/arch/arm/include/asm/arch_gicv3.h +++ b/arch/arm/include/asm/arch_gicv3.h @@ 
-22,9 +22,7 @@ #include <linux/io.h> #include <asm/barrier.h> - -#define __ACCESS_CP15(CRn, Op1, CRm, Op2) p15, Op1, %0, CRn, CRm, Op2 -#define __ACCESS_CP15_64(Op1, CRm) p15, Op1, %Q0, %R0, CRm +#include <asm/cp15.h> #define ICC_EOIR1 __ACCESS_CP15(c12, 0, c12, 1) #define ICC_DIR __ACCESS_CP15(c12, 0, c11, 1) @@ -102,58 +100,55 @@ static inline void gic_write_eoir(u32 irq) { - asm volatile("mcr " __stringify(ICC_EOIR1) : : "r" (irq)); + write_sysreg(irq, ICC_EOIR1); isb(); } static inline void gic_write_dir(u32 val) { - asm volatile("mcr " __stringify(ICC_DIR) : : "r" (val)); + write_sysreg(val, ICC_DIR); isb(); } static inline u32 gic_read_iar(void) { - u32 irqstat; + u32 irqstat = read_sysreg(ICC_IAR1); - asm volatile("mrc " __stringify(ICC_IAR1) : "=r" (irqstat)); dsb(sy); + return irqstat; } static inline void gic_write_pmr(u32 val) { - asm volatile("mcr " __stringify(ICC_PMR) : : "r" (val)); + write_sysreg(val, ICC_PMR); } static inline void gic_write_ctlr(u32 val) { - asm volatile("mcr " __stringify(ICC_CTLR) : : "r" (val)); + write_sysreg(val, ICC_CTLR); isb(); } static inline void gic_write_grpen1(u32 val) { - asm volatile("mcr " __stringify(ICC_IGRPEN1) : : "r" (val)); + write_sysreg(val, ICC_IGRPEN1); isb(); } static inline void gic_write_sgi1r(u64 val) { - asm volatile("mcrr " __stringify(ICC_SGI1R) : : "r" (val)); + write_sysreg(val, ICC_SGI1R); } static inline u32 gic_read_sre(void) { - u32 val; - - asm volatile("mrc " __stringify(ICC_SRE) : "=r" (val)); - return val; + return read_sysreg(ICC_SRE); } static inline void gic_write_sre(u32 val) { - asm volatile("mcr " __stringify(ICC_SRE) : : "r" (val)); + write_sysreg(val, ICC_SRE); isb(); } diff --git a/arch/arm/include/asm/assembler.h b/arch/arm/include/asm/assembler.h index 4a275fba6059..f2624fbd0336 100644 --- a/arch/arm/include/asm/assembler.h +++ b/arch/arm/include/asm/assembler.h @@ -441,11 +441,34 @@ THUMB( orr \reg , \reg , #PSR_T_BIT ) .size \name , . 
- \name .endm + .macro csdb +#ifdef CONFIG_THUMB2_KERNEL + .inst.w 0xf3af8014 +#else + .inst 0xe320f014 +#endif + .endm + .macro check_uaccess, addr:req, size:req, limit:req, tmp:req, bad:req #ifndef CONFIG_CPU_USE_DOMAINS adds \tmp, \addr, #\size - 1 sbcccs \tmp, \tmp, \limit bcs \bad +#ifdef CONFIG_CPU_SPECTRE + movcs \addr, #0 + csdb +#endif +#endif + .endm + + .macro uaccess_mask_range_ptr, addr:req, size:req, limit:req, tmp:req +#ifdef CONFIG_CPU_SPECTRE + sub \tmp, \limit, #1 + subs \tmp, \tmp, \addr @ tmp = limit - 1 - addr + addhs \tmp, \tmp, #1 @ if (tmp >= 0) { + subhss \tmp, \tmp, \size @ tmp = limit - (addr + size) } + movlo \addr, #0 @ if (tmp < 0) addr = NULL + csdb #endif .endm diff --git a/arch/arm/include/asm/barrier.h b/arch/arm/include/asm/barrier.h index 27c1d26b05b5..8514b70704de 100644 --- a/arch/arm/include/asm/barrier.h +++ b/arch/arm/include/asm/barrier.h @@ -18,6 +18,12 @@ #define isb(option) __asm__ __volatile__ ("isb " #option : : : "memory") #define dsb(option) __asm__ __volatile__ ("dsb " #option : : : "memory") #define dmb(option) __asm__ __volatile__ ("dmb " #option : : : "memory") +#ifdef CONFIG_THUMB2_KERNEL +#define CSDB ".inst.w 0xf3af8014" +#else +#define CSDB ".inst 0xe320f014" +#endif +#define csdb() __asm__ __volatile__(CSDB : : : "memory") #elif defined(CONFIG_CPU_XSC3) || __LINUX_ARM_ARCH__ == 6 #define isb(x) __asm__ __volatile__ ("mcr p15, 0, %0, c7, c5, 4" \ : : "r" (0) : "memory") @@ -38,6 +44,13 @@ #define dmb(x) __asm__ __volatile__ ("" : : : "memory") #endif +#ifndef CSDB +#define CSDB +#endif +#ifndef csdb +#define csdb() +#endif + #ifdef CONFIG_ARM_HEAVY_MB extern void (*soc_mb)(void); extern void arm_heavy_mb(void); @@ -95,5 +108,26 @@ do { \ #define smp_mb__before_atomic() smp_mb() #define smp_mb__after_atomic() smp_mb() +#ifdef CONFIG_CPU_SPECTRE +static inline unsigned long array_index_mask_nospec(unsigned long idx, + unsigned long sz) +{ + unsigned long mask; + + asm volatile( + "cmp %1, %2\n" + " sbc %0, %1, %1\n" + CSDB + : "=r" (mask) + : "r" (idx), "Ir" (sz) + : "cc"); + + return mask; +} +#define array_index_mask_nospec array_index_mask_nospec +#endif + +#include <asm-generic/barrier.h> + #endif /* !__ASSEMBLY__ */ #endif /* __ASM_BARRIER_H */ diff --git a/arch/arm/include/asm/bugs.h b/arch/arm/include/asm/bugs.h index a97f1ea708d1..73a99c72a930 100644 --- a/arch/arm/include/asm/bugs.h +++ b/arch/arm/include/asm/bugs.h @@ -10,12 +10,14 @@ #ifndef __ASM_BUGS_H #define __ASM_BUGS_H -#ifdef CONFIG_MMU extern void check_writebuffer_bugs(void); -#define check_bugs() check_writebuffer_bugs() +#ifdef CONFIG_MMU +extern void check_bugs(void); +extern void check_other_bugs(void); #else #define check_bugs() do { } while (0) +#define check_other_bugs() do { } while (0) #endif #endif diff --git a/arch/arm/include/asm/cp15.h b/arch/arm/include/asm/cp15.h index c3f11524f10c..b74b174ac9fc 100644 --- a/arch/arm/include/asm/cp15.h +++ b/arch/arm/include/asm/cp15.h @@ -49,6 +49,24 @@ #ifdef CONFIG_CPU_CP15 +#define __ACCESS_CP15(CRn, Op1, CRm, Op2) \ + "mrc", "mcr", __stringify(p15, Op1, %0, CRn, CRm, Op2), u32 +#define __ACCESS_CP15_64(Op1, CRm) \ + "mrrc", "mcrr", __stringify(p15, Op1, %Q0, %R0, CRm), u64 + +#define __read_sysreg(r, w, c, t) ({ \ + t __val; \ + asm volatile(r " " c : "=r" (__val)); \ + __val; \ +}) +#define read_sysreg(...) __read_sysreg(__VA_ARGS__) + +#define __write_sysreg(v, r, w, c, t) asm volatile(w " " c : : "r" ((t)(v))) +#define write_sysreg(v, ...) 
__write_sysreg(v, __VA_ARGS__) + +#define BPIALL __ACCESS_CP15(c7, 0, c5, 6) +#define ICIALLU __ACCESS_CP15(c7, 0, c5, 0) + extern unsigned long cr_alignment; /* defined in entry-armv.S */ static inline unsigned long get_cr(void) diff --git a/arch/arm/include/asm/cputype.h b/arch/arm/include/asm/cputype.h index e9d04f475929..53125dad6edd 100644 --- a/arch/arm/include/asm/cputype.h +++ b/arch/arm/include/asm/cputype.h @@ -74,8 +74,16 @@ #define ARM_CPU_PART_CORTEX_A12 0x4100c0d0 #define ARM_CPU_PART_CORTEX_A17 0x4100c0e0 #define ARM_CPU_PART_CORTEX_A15 0x4100c0f0 +#define ARM_CPU_PART_CORTEX_A53 0x4100d030 +#define ARM_CPU_PART_CORTEX_A57 0x4100d070 +#define ARM_CPU_PART_CORTEX_A72 0x4100d080 +#define ARM_CPU_PART_CORTEX_A73 0x4100d090 +#define ARM_CPU_PART_CORTEX_A75 0x4100d0a0 #define ARM_CPU_PART_MASK 0xff00fff0 +/* Broadcom cores */ +#define ARM_CPU_PART_BRAHMA_B15 0x420000f0 + #define ARM_CPU_XSCALE_ARCH_MASK 0xe000 #define ARM_CPU_XSCALE_ARCH_V1 0x2000 #define ARM_CPU_XSCALE_ARCH_V2 0x4000 @@ -85,6 +93,7 @@ #define ARM_CPU_PART_SCORPION 0x510002d0 extern unsigned int processor_id; +struct proc_info_list *lookup_processor(u32 midr); #ifdef CONFIG_CPU_CP15 #define read_cpuid(reg) \ diff --git a/arch/arm/include/asm/proc-fns.h b/arch/arm/include/asm/proc-fns.h index 8877ad5ffe10..1bfcc3bcfc6d 100644 --- a/arch/arm/include/asm/proc-fns.h +++ b/arch/arm/include/asm/proc-fns.h @@ -23,7 +23,7 @@ struct mm_struct; /* * Don't change this structure - ASM code relies on it. */ -extern struct processor { +struct processor { /* MISC * get data abort address/flags */ @@ -37,6 +37,10 @@ extern struct processor { */ void (*_proc_init)(void); /* + * Check for processor bugs + */ + void (*check_bugs)(void); + /* * Disable any processor specifics */ void (*_proc_fin)(void); @@ -75,9 +79,13 @@ extern struct processor { unsigned int suspend_size; void (*do_suspend)(void *); void (*do_resume)(void *); -} processor; +}; #ifndef MULTI_CPU +static inline void init_proc_vtable(const struct processor *p) +{ +} + extern void cpu_proc_init(void); extern void cpu_proc_fin(void); extern int cpu_do_idle(void); @@ -94,17 +102,50 @@ extern void cpu_reset(unsigned long addr) __attribute__((noreturn)); extern void cpu_do_suspend(void *); extern void cpu_do_resume(void *); #else -#define cpu_proc_init processor._proc_init -#define cpu_proc_fin processor._proc_fin -#define cpu_reset processor.reset -#define cpu_do_idle processor._do_idle -#define cpu_dcache_clean_area processor.dcache_clean_area -#define cpu_set_pte_ext processor.set_pte_ext -#define cpu_do_switch_mm processor.switch_mm -/* These three are private to arch/arm/kernel/suspend.c */ -#define cpu_do_suspend processor.do_suspend -#define cpu_do_resume processor.do_resume +extern struct processor processor; +#if defined(CONFIG_BIG_LITTLE) && defined(CONFIG_HARDEN_BRANCH_PREDICTOR) +#include <linux/smp.h> +/* + * This can't be a per-cpu variable because we need to access it before + * per-cpu has been initialised. We have a couple of functions that are + * called in a pre-emptible context, and so can't use smp_processor_id() + * there, hence PROC_TABLE(). We insist in init_proc_vtable() that the + * function pointers for these are identical across all CPUs. 
+ */ +extern struct processor *cpu_vtable[]; +#define PROC_VTABLE(f) cpu_vtable[smp_processor_id()]->f +#define PROC_TABLE(f) cpu_vtable[0]->f +static inline void init_proc_vtable(const struct processor *p) +{ + unsigned int cpu = smp_processor_id(); + *cpu_vtable[cpu] = *p; + WARN_ON_ONCE(cpu_vtable[cpu]->dcache_clean_area != + cpu_vtable[0]->dcache_clean_area); + WARN_ON_ONCE(cpu_vtable[cpu]->set_pte_ext != + cpu_vtable[0]->set_pte_ext); +} +#else +#define PROC_VTABLE(f) processor.f +#define PROC_TABLE(f) processor.f +static inline void init_proc_vtable(const struct processor *p) +{ + processor = *p; +} +#endif + +#define cpu_proc_init PROC_VTABLE(_proc_init) +#define cpu_check_bugs PROC_VTABLE(check_bugs) +#define cpu_proc_fin PROC_VTABLE(_proc_fin) +#define cpu_reset PROC_VTABLE(reset) +#define cpu_do_idle PROC_VTABLE(_do_idle) +#define cpu_dcache_clean_area PROC_TABLE(dcache_clean_area) +#define cpu_set_pte_ext PROC_TABLE(set_pte_ext) +#define cpu_do_switch_mm PROC_VTABLE(switch_mm) + +/* These two are private to arch/arm/kernel/suspend.c */ +#define cpu_do_suspend PROC_VTABLE(do_suspend) +#define cpu_do_resume PROC_VTABLE(do_resume) #endif extern void cpu_resume(void); diff --git a/arch/arm/include/asm/system_misc.h b/arch/arm/include/asm/system_misc.h index 062c48452148..84e65cb22c4b 100644 --- a/arch/arm/include/asm/system_misc.h +++ b/arch/arm/include/asm/system_misc.h @@ -7,6 +7,7 @@ #include <linux/linkage.h> #include <linux/irqflags.h> #include <linux/reboot.h> +#include <linux/percpu.h> extern void cpu_init(void); @@ -14,6 +15,20 @@ void soft_restart(unsigned long); extern void (*arm_pm_restart)(enum reboot_mode reboot_mode, const char *cmd); extern void (*arm_pm_idle)(void); +#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR +typedef void (*harden_branch_predictor_fn_t)(void); +DECLARE_PER_CPU(harden_branch_predictor_fn_t, harden_branch_predictor_fn); +static inline void harden_branch_predictor(void) +{ + harden_branch_predictor_fn_t fn = per_cpu(harden_branch_predictor_fn, + smp_processor_id()); + if (fn) + fn(); +} +#else +#define harden_branch_predictor() do { } while (0) +#endif + #define UDBG_UNDEFINED (1 << 0) #define UDBG_SYSCALL (1 << 1) #define UDBG_BADABORT (1 << 2) diff --git a/arch/arm/include/asm/thread_info.h b/arch/arm/include/asm/thread_info.h index f23454db246f..cfbf32bb9fc4 100644 --- a/arch/arm/include/asm/thread_info.h +++ b/arch/arm/include/asm/thread_info.h @@ -124,10 +124,10 @@ extern void vfp_flush_hwstate(struct thread_info *); struct user_vfp; struct user_vfp_exc; -extern int vfp_preserve_user_clear_hwstate(struct user_vfp __user *, - struct user_vfp_exc __user *); -extern int vfp_restore_user_hwstate(struct user_vfp __user *, - struct user_vfp_exc __user *); +extern int vfp_preserve_user_clear_hwstate(struct user_vfp *, + struct user_vfp_exc *); +extern int vfp_restore_user_hwstate(struct user_vfp *, + struct user_vfp_exc *); #endif /* diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h index 7665bd2f4871..942be2f596f7 100644 --- a/arch/arm/include/asm/uaccess.h +++ b/arch/arm/include/asm/uaccess.h @@ -99,6 +99,14 @@ extern int __put_user_bad(void); static inline void set_fs(mm_segment_t fs) { current_thread_info()->addr_limit = fs; + + /* + * Prevent a mispredicted conditional call to set_fs from forwarding + * the wrong address limit to access_ok under speculation. + */ + dsb(nsh); + isb(); + modify_domain(DOMAIN_KERNEL, fs ? 
DOMAIN_CLIENT : DOMAIN_MANAGER); } @@ -123,6 +131,39 @@ static inline void set_fs(mm_segment_t fs) flag; }) /* + * This is a type: either unsigned long, if the argument fits into + * that type, or otherwise unsigned long long. + */ +#define __inttype(x) \ + __typeof__(__builtin_choose_expr(sizeof(x) > sizeof(0UL), 0ULL, 0UL)) + +/* + * Sanitise a uaccess pointer such that it becomes NULL if addr+size + * is above the current addr_limit. + */ +#define uaccess_mask_range_ptr(ptr, size) \ + ((__typeof__(ptr))__uaccess_mask_range_ptr(ptr, size)) +static inline void __user *__uaccess_mask_range_ptr(const void __user *ptr, + size_t size) +{ + void __user *safe_ptr = (void __user *)ptr; + unsigned long tmp; + + asm volatile( + " sub %1, %3, #1\n" + " subs %1, %1, %0\n" + " addhs %1, %1, #1\n" + " subhss %1, %1, %2\n" + " movlo %0, #0\n" + : "+r" (safe_ptr), "=&r" (tmp) + : "r" (size), "r" (current_thread_info()->addr_limit) + : "cc"); + + csdb(); + return safe_ptr; +} + +/* * Single-value transfer routines. They automatically use the right * size if we just have the right pointer type. Note that the functions * which read from user space (*get_*) need to take care not to leak @@ -191,7 +232,7 @@ extern int __get_user_64t_4(void *); ({ \ unsigned long __limit = current_thread_info()->addr_limit - 1; \ register const typeof(*(p)) __user *__p asm("r0") = (p);\ - register typeof(x) __r2 asm("r2"); \ + register __inttype(x) __r2 asm("r2"); \ register unsigned long __l asm("r1") = __limit; \ register int __e asm("r0"); \ unsigned int __ua_flags = uaccess_save_and_enable(); \ @@ -238,49 +279,23 @@ extern int __put_user_2(void *, unsigned int); extern int __put_user_4(void *, unsigned int); extern int __put_user_8(void *, unsigned long long); -#define __put_user_x(__r2, __p, __e, __l, __s) \ - __asm__ __volatile__ ( \ - __asmeq("%0", "r0") __asmeq("%2", "r2") \ - __asmeq("%3", "r1") \ - "bl __put_user_" #__s \ - : "=&r" (__e) \ - : "0" (__p), "r" (__r2), "r" (__l) \ - : "ip", "lr", "cc") - -#define __put_user_check(x, p) \ +#define __put_user_check(__pu_val, __ptr, __err, __s) \ ({ \ unsigned long __limit = current_thread_info()->addr_limit - 1; \ - const typeof(*(p)) __user *__tmp_p = (p); \ - register typeof(*(p)) __r2 asm("r2") = (x); \ - register const typeof(*(p)) __user *__p asm("r0") = __tmp_p; \ + register typeof(__pu_val) __r2 asm("r2") = __pu_val; \ + register const void __user *__p asm("r0") = __ptr; \ register unsigned long __l asm("r1") = __limit; \ register int __e asm("r0"); \ - unsigned int __ua_flags = uaccess_save_and_enable(); \ - switch (sizeof(*(__p))) { \ - case 1: \ - __put_user_x(__r2, __p, __e, __l, 1); \ - break; \ - case 2: \ - __put_user_x(__r2, __p, __e, __l, 2); \ - break; \ - case 4: \ - __put_user_x(__r2, __p, __e, __l, 4); \ - break; \ - case 8: \ - __put_user_x(__r2, __p, __e, __l, 8); \ - break; \ - default: __e = __put_user_bad(); break; \ - } \ - uaccess_restore(__ua_flags); \ - __e; \ + __asm__ __volatile__ ( \ + __asmeq("%0", "r0") __asmeq("%2", "r2") \ + __asmeq("%3", "r1") \ + "bl __put_user_" #__s \ + : "=&r" (__e) \ + : "0" (__p), "r" (__r2), "r" (__l) \ + : "ip", "lr", "cc"); \ + __err = __e; \ }) -#define put_user(x, p) \ - ({ \ - might_fault(); \ - __put_user_check(x, p); \ - }) - #else /* CONFIG_MMU */ /* @@ -298,7 +313,7 @@ static inline void set_fs(mm_segment_t fs) } #define get_user(x, p) __get_user(x, p) -#define put_user(x, p) __put_user(x, p) +#define __put_user_check __put_user_nocheck #endif /* CONFIG_MMU */ @@ -307,6 +322,16 @@ static inline void 
set_fs(mm_segment_t fs) #define user_addr_max() \ (segment_eq(get_fs(), KERNEL_DS) ? ~0UL : get_fs()) +#ifdef CONFIG_CPU_SPECTRE +/* + * When mitigating Spectre variant 1, it is not worth fixing the non- + * verifying accessors, because we need to add verification of the + * address space there. Force these to use the standard get_user() + * version instead. + */ +#define __get_user(x, ptr) get_user(x, ptr) +#else + /* * The "__xxx" versions of the user access functions do not verify the * address space - it must have been done previously with a separate @@ -323,12 +348,6 @@ static inline void set_fs(mm_segment_t fs) __gu_err; \ }) -#define __get_user_error(x, ptr, err) \ -({ \ - __get_user_err((x), (ptr), err); \ - (void) 0; \ -}) - #define __get_user_err(x, ptr, err) \ do { \ unsigned long __gu_addr = (unsigned long)(ptr); \ @@ -388,37 +407,58 @@ do { \ #define __get_user_asm_word(x, addr, err) \ __get_user_asm(x, addr, err, ldr) +#endif -#define __put_user(x, ptr) \ + +#define __put_user_switch(x, ptr, __err, __fn) \ + do { \ + const __typeof__(*(ptr)) __user *__pu_ptr = (ptr); \ + __typeof__(*(ptr)) __pu_val = (x); \ + unsigned int __ua_flags; \ + might_fault(); \ + __ua_flags = uaccess_save_and_enable(); \ + switch (sizeof(*(ptr))) { \ + case 1: __fn(__pu_val, __pu_ptr, __err, 1); break; \ + case 2: __fn(__pu_val, __pu_ptr, __err, 2); break; \ + case 4: __fn(__pu_val, __pu_ptr, __err, 4); break; \ + case 8: __fn(__pu_val, __pu_ptr, __err, 8); break; \ + default: __err = __put_user_bad(); break; \ + } \ + uaccess_restore(__ua_flags); \ + } while (0) + +#define put_user(x, ptr) \ ({ \ - long __pu_err = 0; \ - __put_user_err((x), (ptr), __pu_err); \ + int __pu_err = 0; \ + __put_user_switch((x), (ptr), __pu_err, __put_user_check); \ __pu_err; \ }) -#define __put_user_error(x, ptr, err) \ +#ifdef CONFIG_CPU_SPECTRE +/* + * When mitigating Spectre variant 1.1, all accessors need to include + * verification of the address space. 
+ */ +#define __put_user(x, ptr) put_user(x, ptr) + +#else +#define __put_user(x, ptr) \ ({ \ - __put_user_err((x), (ptr), err); \ - (void) 0; \ + long __pu_err = 0; \ + __put_user_switch((x), (ptr), __pu_err, __put_user_nocheck); \ + __pu_err; \ }) -#define __put_user_err(x, ptr, err) \ -do { \ - unsigned long __pu_addr = (unsigned long)(ptr); \ - unsigned int __ua_flags; \ - __typeof__(*(ptr)) __pu_val = (x); \ - __chk_user_ptr(ptr); \ - might_fault(); \ - __ua_flags = uaccess_save_and_enable(); \ - switch (sizeof(*(ptr))) { \ - case 1: __put_user_asm_byte(__pu_val, __pu_addr, err); break; \ - case 2: __put_user_asm_half(__pu_val, __pu_addr, err); break; \ - case 4: __put_user_asm_word(__pu_val, __pu_addr, err); break; \ - case 8: __put_user_asm_dword(__pu_val, __pu_addr, err); break; \ - default: __put_user_bad(); \ - } \ - uaccess_restore(__ua_flags); \ -} while (0) +#define __put_user_nocheck(x, __pu_ptr, __err, __size) \ + do { \ + unsigned long __pu_addr = (unsigned long)__pu_ptr; \ + __put_user_nocheck_##__size(x, __pu_addr, __err); \ + } while (0) + +#define __put_user_nocheck_1 __put_user_asm_byte +#define __put_user_nocheck_2 __put_user_asm_half +#define __put_user_nocheck_4 __put_user_asm_word +#define __put_user_nocheck_8 __put_user_asm_dword #define __put_user_asm(x, __pu_addr, err, instr) \ __asm__ __volatile__( \ @@ -488,6 +528,7 @@ do { \ : "r" (x), "i" (-EFAULT) \ : "cc") +#endif /* !CONFIG_CPU_SPECTRE */ #ifdef CONFIG_MMU extern unsigned long __must_check diff --git a/arch/arm/kernel/Makefile b/arch/arm/kernel/Makefile index 82bdac0f2804..649bc3300c93 100644 --- a/arch/arm/kernel/Makefile +++ b/arch/arm/kernel/Makefile @@ -30,6 +30,7 @@ else obj-y += entry-armv.o endif +obj-$(CONFIG_MMU) += bugs.o obj-$(CONFIG_CPU_IDLE) += cpuidle.o obj-$(CONFIG_ISA_DMA_API) += dma.o obj-$(CONFIG_FIQ) += fiq.o fiqasm.o diff --git a/arch/arm/kernel/bugs.c b/arch/arm/kernel/bugs.c new file mode 100644 index 000000000000..d41d3598e5e5 --- /dev/null +++ b/arch/arm/kernel/bugs.c @@ -0,0 +1,18 @@ +// SPDX-Identifier: GPL-2.0 +#include <linux/init.h> +#include <asm/bugs.h> +#include <asm/proc-fns.h> + +void check_other_bugs(void) +{ +#ifdef MULTI_CPU + if (cpu_check_bugs) + cpu_check_bugs(); +#endif +} + +void __init check_bugs(void) +{ + check_writebuffer_bugs(); + check_other_bugs(); +} diff --git a/arch/arm/kernel/entry-common.S b/arch/arm/kernel/entry-common.S index 9440b320a8a3..e3e0c4627214 100644 --- a/arch/arm/kernel/entry-common.S +++ b/arch/arm/kernel/entry-common.S @@ -233,9 +233,7 @@ local_restart: tst r10, #_TIF_SYSCALL_WORK @ are we tracing syscalls? bne __sys_trace - cmp scno, #NR_syscalls @ check upper syscall limit - badr lr, ret_fast_syscall @ return address - ldrcc pc, [tbl, scno, lsl #2] @ call sys_* routine + invoke_syscall tbl, scno, r10, ret_fast_syscall add r1, sp, #S_OFF 2: cmp scno, #(__ARM_NR_BASE - __NR_SYSCALL_BASE) @@ -268,14 +266,8 @@ __sys_trace: mov r1, scno add r0, sp, #S_OFF bl syscall_trace_enter - - badr lr, __sys_trace_return @ return address - mov scno, r0 @ syscall number (possibly new) - add r1, sp, #S_R0 + S_OFF @ pointer to regs - cmp scno, #NR_syscalls @ check upper syscall limit - ldmccia r1, {r0 - r6} @ have to reload r0 - r6 - stmccia sp, {r4, r5} @ and update the stack args - ldrcc pc, [tbl, scno, lsl #2] @ call sys_* routine + mov scno, r0 + invoke_syscall tbl, scno, r10, __sys_trace_return, reload=1 cmp scno, #-1 @ skip the syscall? 
bne 2b add sp, sp, #S_OFF @ restore stack @@ -327,6 +319,10 @@ sys_syscall: bic scno, r0, #__NR_OABI_SYSCALL_BASE cmp scno, #__NR_syscall - __NR_SYSCALL_BASE cmpne scno, #NR_syscalls @ check range +#ifdef CONFIG_CPU_SPECTRE + movhs scno, #0 + csdb +#endif stmloia sp, {r5, r6} @ shuffle args movlo r0, r1 movlo r1, r2 diff --git a/arch/arm/kernel/entry-header.S b/arch/arm/kernel/entry-header.S index 6d243e830516..86dfee487e24 100644 --- a/arch/arm/kernel/entry-header.S +++ b/arch/arm/kernel/entry-header.S @@ -373,6 +373,31 @@ #endif .endm + .macro invoke_syscall, table, nr, tmp, ret, reload=0 +#ifdef CONFIG_CPU_SPECTRE + mov \tmp, \nr + cmp \tmp, #NR_syscalls @ check upper syscall limit + movcs \tmp, #0 + csdb + badr lr, \ret @ return address + .if \reload + add r1, sp, #S_R0 + S_OFF @ pointer to regs + ldmccia r1, {r0 - r6} @ reload r0-r6 + stmccia sp, {r4, r5} @ update stack arguments + .endif + ldrcc pc, [\table, \tmp, lsl #2] @ call sys_* routine +#else + cmp \nr, #NR_syscalls @ check upper syscall limit + badr lr, \ret @ return address + .if \reload + add r1, sp, #S_R0 + S_OFF @ pointer to regs + ldmccia r1, {r0 - r6} @ reload r0-r6 + stmccia sp, {r4, r5} @ update stack arguments + .endif + ldrcc pc, [\table, \nr, lsl #2] @ call sys_* routine +#endif + .endm + /* * These are the registers used in the syscall handler, and allow us to * have in theory up to 7 arguments to a function - r0 to r6. diff --git a/arch/arm/kernel/head-common.S b/arch/arm/kernel/head-common.S index 8733012d231f..7e662bdd5cb3 100644 --- a/arch/arm/kernel/head-common.S +++ b/arch/arm/kernel/head-common.S @@ -122,6 +122,9 @@ __mmap_switched_data: .long init_thread_union + THREAD_START_SP @ sp .size __mmap_switched_data, . - __mmap_switched_data + __FINIT + .text + /* * This provides a C-API version of __lookup_processor_type */ @@ -133,9 +136,6 @@ ENTRY(lookup_processor_type) ldmfd sp!, {r4 - r6, r9, pc} ENDPROC(lookup_processor_type) - __FINIT - .text - /* * Read processor ID register (CP#15, CR0), and look up in the linker-built * supported processor list. Note that we can't use the absolute addresses diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c index 74d792dd2977..1ad40fc316b2 100644 --- a/arch/arm/kernel/setup.c +++ b/arch/arm/kernel/setup.c @@ -122,6 +122,11 @@ EXPORT_SYMBOL(cold_boot); #ifdef MULTI_CPU struct processor processor __read_mostly; +#if defined(CONFIG_BIG_LITTLE) && defined(CONFIG_HARDEN_BRANCH_PREDICTOR) +struct processor *cpu_vtable[NR_CPUS] = { + [0] = &processor, +}; +#endif #endif #ifdef MULTI_TLB struct cpu_tlb_fns cpu_tlb __read_mostly; @@ -608,28 +613,33 @@ static void __init smp_build_mpidr_hash(void) } #endif -static void __init setup_processor(void) +/* + * locate processor in the list of supported processor types. The linker + * builds this table for us from the entries in arch/arm/mm/proc-*.S + */ +struct proc_info_list *lookup_processor(u32 midr) { - struct proc_info_list *list; + struct proc_info_list *list = lookup_processor_type(midr); - /* - * locate processor in the list of supported processor - * types. 
The linker builds this table for us from the - * entries in arch/arm/mm/proc-*.S - */ - list = lookup_processor_type(read_cpuid_id()); if (!list) { - pr_err("CPU configuration botched (ID %08x), unable to continue.\n", - read_cpuid_id()); - while (1); + pr_err("CPU%u: configuration botched (ID %08x), CPU halted\n", + smp_processor_id(), midr); + while (1) + /* can't use cpu_relax() here as it may require MMU setup */; } + return list; +} + +static void __init setup_processor(void) +{ + unsigned int midr = read_cpuid_id(); + struct proc_info_list *list = lookup_processor(midr); + cpu_name = list->cpu_name; __cpu_architecture = __get_cpu_architecture(); -#ifdef MULTI_CPU - processor = *list->proc; -#endif + init_proc_vtable(list->proc); #ifdef MULTI_TLB cpu_tlb = *list->tlb; #endif @@ -641,7 +651,7 @@ static void __init setup_processor(void) #endif pr_info("CPU: %s [%08x] revision %d (ARMv%s), cr=%08lx\n", - cpu_name, read_cpuid_id(), read_cpuid_id() & 15, + list->cpu_name, midr, midr & 15, proc_arch[cpu_architecture()], get_cr()); snprintf(init_utsname()->machine, __NEW_UTS_LEN + 1, "%s%c", diff --git a/arch/arm/kernel/signal.c b/arch/arm/kernel/signal.c index 304e68408f9c..7abc908ebea0 100644 --- a/arch/arm/kernel/signal.c +++ b/arch/arm/kernel/signal.c @@ -95,34 +95,34 @@ static int restore_iwmmxt_context(struct iwmmxt_sigframe *frame) static int preserve_vfp_context(struct vfp_sigframe __user *frame) { - const unsigned long magic = VFP_MAGIC; - const unsigned long size = VFP_STORAGE_SIZE; + struct vfp_sigframe kframe; int err = 0; - __put_user_error(magic, &frame->magic, err); - __put_user_error(size, &frame->size, err); + memset(&kframe, 0, sizeof(kframe)); + kframe.magic = VFP_MAGIC; + kframe.size = VFP_STORAGE_SIZE; + err = vfp_preserve_user_clear_hwstate(&kframe.ufp, &kframe.ufp_exc); if (err) - return -EFAULT; + return err; - return vfp_preserve_user_clear_hwstate(&frame->ufp, &frame->ufp_exc); + return __copy_to_user(frame, &kframe, sizeof(kframe)); } -static int restore_vfp_context(struct vfp_sigframe __user *frame) +static int restore_vfp_context(struct vfp_sigframe __user *auxp) { - unsigned long magic; - unsigned long size; - int err = 0; + struct vfp_sigframe frame; + int err; - __get_user_error(magic, &frame->magic, err); - __get_user_error(size, &frame->size, err); + err = __copy_from_user(&frame, (char __user *) auxp, sizeof(frame)); if (err) - return -EFAULT; - if (magic != VFP_MAGIC || size != VFP_STORAGE_SIZE) + return err; + + if (frame.magic != VFP_MAGIC || frame.size != VFP_STORAGE_SIZE) return -EINVAL; - return vfp_restore_user_hwstate(&frame->ufp, &frame->ufp_exc); + return vfp_restore_user_hwstate(&frame.ufp, &frame.ufp_exc); } #endif @@ -142,6 +142,7 @@ struct rt_sigframe { static int restore_sigframe(struct pt_regs *regs, struct sigframe __user *sf) { + struct sigcontext context; struct aux_sigframe __user *aux; sigset_t set; int err; @@ -150,23 +151,26 @@ static int restore_sigframe(struct pt_regs *regs, struct sigframe __user *sf) if (err == 0) set_current_blocked(&set); - __get_user_error(regs->ARM_r0, &sf->uc.uc_mcontext.arm_r0, err); - __get_user_error(regs->ARM_r1, &sf->uc.uc_mcontext.arm_r1, err); - __get_user_error(regs->ARM_r2, &sf->uc.uc_mcontext.arm_r2, err); - __get_user_error(regs->ARM_r3, &sf->uc.uc_mcontext.arm_r3, err); - __get_user_error(regs->ARM_r4, &sf->uc.uc_mcontext.arm_r4, err); - __get_user_error(regs->ARM_r5, &sf->uc.uc_mcontext.arm_r5, err); - __get_user_error(regs->ARM_r6, &sf->uc.uc_mcontext.arm_r6, err); - 
__get_user_error(regs->ARM_r7, &sf->uc.uc_mcontext.arm_r7, err); - __get_user_error(regs->ARM_r8, &sf->uc.uc_mcontext.arm_r8, err); - __get_user_error(regs->ARM_r9, &sf->uc.uc_mcontext.arm_r9, err); - __get_user_error(regs->ARM_r10, &sf->uc.uc_mcontext.arm_r10, err); - __get_user_error(regs->ARM_fp, &sf->uc.uc_mcontext.arm_fp, err); - __get_user_error(regs->ARM_ip, &sf->uc.uc_mcontext.arm_ip, err); - __get_user_error(regs->ARM_sp, &sf->uc.uc_mcontext.arm_sp, err); - __get_user_error(regs->ARM_lr, &sf->uc.uc_mcontext.arm_lr, err); - __get_user_error(regs->ARM_pc, &sf->uc.uc_mcontext.arm_pc, err); - __get_user_error(regs->ARM_cpsr, &sf->uc.uc_mcontext.arm_cpsr, err); + err |= __copy_from_user(&context, &sf->uc.uc_mcontext, sizeof(context)); + if (err == 0) { + regs->ARM_r0 = context.arm_r0; + regs->ARM_r1 = context.arm_r1; + regs->ARM_r2 = context.arm_r2; + regs->ARM_r3 = context.arm_r3; + regs->ARM_r4 = context.arm_r4; + regs->ARM_r5 = context.arm_r5; + regs->ARM_r6 = context.arm_r6; + regs->ARM_r7 = context.arm_r7; + regs->ARM_r8 = context.arm_r8; + regs->ARM_r9 = context.arm_r9; + regs->ARM_r10 = context.arm_r10; + regs->ARM_fp = context.arm_fp; + regs->ARM_ip = context.arm_ip; + regs->ARM_sp = context.arm_sp; + regs->ARM_lr = context.arm_lr; + regs->ARM_pc = context.arm_pc; + regs->ARM_cpsr = context.arm_cpsr; + } err |= !valid_user_regs(regs); @@ -254,30 +258,35 @@ static int setup_sigframe(struct sigframe __user *sf, struct pt_regs *regs, sigset_t *set) { struct aux_sigframe __user *aux; + struct sigcontext context; int err = 0; - __put_user_error(regs->ARM_r0, &sf->uc.uc_mcontext.arm_r0, err); - __put_user_error(regs->ARM_r1, &sf->uc.uc_mcontext.arm_r1, err); - __put_user_error(regs->ARM_r2, &sf->uc.uc_mcontext.arm_r2, err); - __put_user_error(regs->ARM_r3, &sf->uc.uc_mcontext.arm_r3, err); - __put_user_error(regs->ARM_r4, &sf->uc.uc_mcontext.arm_r4, err); - __put_user_error(regs->ARM_r5, &sf->uc.uc_mcontext.arm_r5, err); - __put_user_error(regs->ARM_r6, &sf->uc.uc_mcontext.arm_r6, err); - __put_user_error(regs->ARM_r7, &sf->uc.uc_mcontext.arm_r7, err); - __put_user_error(regs->ARM_r8, &sf->uc.uc_mcontext.arm_r8, err); - __put_user_error(regs->ARM_r9, &sf->uc.uc_mcontext.arm_r9, err); - __put_user_error(regs->ARM_r10, &sf->uc.uc_mcontext.arm_r10, err); - __put_user_error(regs->ARM_fp, &sf->uc.uc_mcontext.arm_fp, err); - __put_user_error(regs->ARM_ip, &sf->uc.uc_mcontext.arm_ip, err); - __put_user_error(regs->ARM_sp, &sf->uc.uc_mcontext.arm_sp, err); - __put_user_error(regs->ARM_lr, &sf->uc.uc_mcontext.arm_lr, err); - __put_user_error(regs->ARM_pc, &sf->uc.uc_mcontext.arm_pc, err); - __put_user_error(regs->ARM_cpsr, &sf->uc.uc_mcontext.arm_cpsr, err); - - __put_user_error(current->thread.trap_no, &sf->uc.uc_mcontext.trap_no, err); - __put_user_error(current->thread.error_code, &sf->uc.uc_mcontext.error_code, err); - __put_user_error(current->thread.address, &sf->uc.uc_mcontext.fault_address, err); - __put_user_error(set->sig[0], &sf->uc.uc_mcontext.oldmask, err); + context = (struct sigcontext) { + .arm_r0 = regs->ARM_r0, + .arm_r1 = regs->ARM_r1, + .arm_r2 = regs->ARM_r2, + .arm_r3 = regs->ARM_r3, + .arm_r4 = regs->ARM_r4, + .arm_r5 = regs->ARM_r5, + .arm_r6 = regs->ARM_r6, + .arm_r7 = regs->ARM_r7, + .arm_r8 = regs->ARM_r8, + .arm_r9 = regs->ARM_r9, + .arm_r10 = regs->ARM_r10, + .arm_fp = regs->ARM_fp, + .arm_ip = regs->ARM_ip, + .arm_sp = regs->ARM_sp, + .arm_lr = regs->ARM_lr, + .arm_pc = regs->ARM_pc, + .arm_cpsr = regs->ARM_cpsr, + + .trap_no = current->thread.trap_no, + 
.error_code = current->thread.error_code, + .fault_address = current->thread.address, + .oldmask = set->sig[0], + }; + + err |= __copy_to_user(&sf->uc.uc_mcontext, &context, sizeof(context)); err |= __copy_to_user(&sf->uc.uc_sigmask, set, sizeof(*set)); @@ -294,7 +303,7 @@ setup_sigframe(struct sigframe __user *sf, struct pt_regs *regs, sigset_t *set) if (err == 0) err |= preserve_vfp_context(&aux->vfp); #endif - __put_user_error(0, &aux->end_magic, err); + err |= __put_user(0, &aux->end_magic); return err; } @@ -426,7 +435,7 @@ setup_frame(struct ksignal *ksig, sigset_t *set, struct pt_regs *regs) /* * Set uc.uc_flags to a value which sc.trap_no would never have. */ - __put_user_error(0x5ac3c35a, &frame->uc.uc_flags, err); + err = __put_user(0x5ac3c35a, &frame->uc.uc_flags); err |= setup_sigframe(frame, regs, set); if (err == 0) @@ -446,8 +455,8 @@ setup_rt_frame(struct ksignal *ksig, sigset_t *set, struct pt_regs *regs) err |= copy_siginfo_to_user(&frame->info, &ksig->info); - __put_user_error(0, &frame->sig.uc.uc_flags, err); - __put_user_error(NULL, &frame->sig.uc.uc_link, err); + err |= __put_user(0, &frame->sig.uc.uc_flags); + err |= __put_user(NULL, &frame->sig.uc.uc_link); err |= __save_altstack(&frame->sig.uc.uc_stack, regs->ARM_sp); err |= setup_sigframe(&frame->sig, regs, set); diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c index 2467a5e0410f..35a4633d5f1a 100644 --- a/arch/arm/kernel/smp.c +++ b/arch/arm/kernel/smp.c @@ -27,8 +27,10 @@ #include <linux/completion.h> #include <linux/cpufreq.h> #include <linux/irq_work.h> +#include <linux/slab.h> #include <linux/atomic.h> +#include <asm/bugs.h> #include <asm/smp.h> #include <asm/cacheflush.h> #include <asm/cpu.h> @@ -39,6 +41,7 @@ #include <asm/mmu_context.h> #include <asm/pgtable.h> #include <asm/pgalloc.h> +#include <asm/procinfo.h> #include <asm/processor.h> #include <asm/sections.h> #include <asm/tlbflush.h> @@ -95,6 +98,30 @@ static unsigned long get_arch_pgd(pgd_t *pgd) #endif } +#if defined(CONFIG_BIG_LITTLE) && defined(CONFIG_HARDEN_BRANCH_PREDICTOR) +static int secondary_biglittle_prepare(unsigned int cpu) +{ + if (!cpu_vtable[cpu]) + cpu_vtable[cpu] = kzalloc(sizeof(*cpu_vtable[cpu]), GFP_KERNEL); + + return cpu_vtable[cpu] ? 0 : -ENOMEM; +} + +static void secondary_biglittle_init(void) +{ + init_proc_vtable(lookup_processor(read_cpuid_id())->proc); +} +#else +static int secondary_biglittle_prepare(unsigned int cpu) +{ + return 0; +} + +static void secondary_biglittle_init(void) +{ +} +#endif + int __cpu_up(unsigned int cpu, struct task_struct *idle) { int ret; @@ -102,6 +129,10 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle) if (!smp_ops.smp_boot_secondary) return -ENOSYS; + ret = secondary_biglittle_prepare(cpu); + if (ret) + return ret; + /* * We need to tell the secondary core where to find * its stack and the page tables. @@ -353,6 +384,8 @@ asmlinkage void secondary_start_kernel(void) struct mm_struct *mm = &init_mm; unsigned int cpu; + secondary_biglittle_init(); + /* * The identity mapping is uncached (strongly ordered), so * switch away from it before attempting any exclusive accesses. @@ -396,6 +429,9 @@ asmlinkage void secondary_start_kernel(void) * before we continue - which happens after __cpu_up returns. 
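
The smp.c and suspend.c hunks here wire check_other_bugs() into secondary-CPU bring-up and cpu_suspend(), so each CPU (re)installs its own Spectre workaround after hotplug or resume. The helper itself is added outside this excerpt; a minimal sketch of what it is assumed to do, dispatching through the per-CPU bugs hook that the proc-macros.S change further down adds to the processor functions table:

    #include <asm/proc-fns.h>

    /* sketch only; the real definition is not part of this excerpt */
    void check_other_bugs(void)
    {
    #ifdef MULTI_CPU
            if (processor.check_bugs)       /* per-CPU bugs hook, if installed */
                    processor.check_bugs();
    #endif
    }
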
*/ set_cpu_online(cpu, true); + + check_other_bugs(); + complete(&cpu_running); local_irq_enable(); diff --git a/arch/arm/kernel/suspend.c b/arch/arm/kernel/suspend.c index 9a2f882a0a2d..134f0d432610 100644 --- a/arch/arm/kernel/suspend.c +++ b/arch/arm/kernel/suspend.c @@ -1,6 +1,7 @@ #include <linux/init.h> #include <linux/slab.h> +#include <asm/bugs.h> #include <asm/cacheflush.h> #include <asm/idmap.h> #include <asm/pgalloc.h> @@ -34,6 +35,7 @@ int cpu_suspend(unsigned long arg, int (*fn)(unsigned long)) cpu_switch_mm(mm->pgd, mm); local_flush_bp_all(); local_flush_tlb_all(); + check_other_bugs(); } return ret; diff --git a/arch/arm/kernel/sys_oabi-compat.c b/arch/arm/kernel/sys_oabi-compat.c index 5f221acd21ae..d844c5c9364b 100644 --- a/arch/arm/kernel/sys_oabi-compat.c +++ b/arch/arm/kernel/sys_oabi-compat.c @@ -276,6 +276,7 @@ asmlinkage long sys_oabi_epoll_wait(int epfd, int maxevents, int timeout) { struct epoll_event *kbuf; + struct oabi_epoll_event e; mm_segment_t fs; long ret, err, i; @@ -294,8 +295,11 @@ asmlinkage long sys_oabi_epoll_wait(int epfd, set_fs(fs); err = 0; for (i = 0; i < ret; i++) { - __put_user_error(kbuf[i].events, &events->events, err); - __put_user_error(kbuf[i].data, &events->data, err); + e.events = kbuf[i].events; + e.data = kbuf[i].data; + err = __copy_to_user(events, &e, sizeof(e)); + if (err) + break; events++; } kfree(kbuf); @@ -328,9 +332,11 @@ asmlinkage long sys_oabi_semtimedop(int semid, return -ENOMEM; err = 0; for (i = 0; i < nsops; i++) { - __get_user_error(sops[i].sem_num, &tsops->sem_num, err); - __get_user_error(sops[i].sem_op, &tsops->sem_op, err); - __get_user_error(sops[i].sem_flg, &tsops->sem_flg, err); + struct oabi_sembuf osb; + err |= __copy_from_user(&osb, tsops, sizeof(osb)); + sops[i].sem_num = osb.sem_num; + sops[i].sem_op = osb.sem_op; + sops[i].sem_flg = osb.sem_flg; tsops++; } if (timeout) { diff --git a/arch/arm/lib/copy_from_user.S b/arch/arm/lib/copy_from_user.S index 1512bebfbf1b..e32b51838439 100644 --- a/arch/arm/lib/copy_from_user.S +++ b/arch/arm/lib/copy_from_user.S @@ -90,6 +90,11 @@ .text ENTRY(arm_copy_from_user) +#ifdef CONFIG_CPU_SPECTRE + get_thread_info r3 + ldr r3, [r3, #TI_ADDR_LIMIT] + uaccess_mask_range_ptr r1, r2, r3, ip +#endif #include "copy_template.S" diff --git a/arch/arm/mach-omap2/omap_hwmod_33xx_43xx_ipblock_data.c b/arch/arm/mach-omap2/omap_hwmod_33xx_43xx_ipblock_data.c index b31ad596be79..6b09debcf484 100644 --- a/arch/arm/mach-omap2/omap_hwmod_33xx_43xx_ipblock_data.c +++ b/arch/arm/mach-omap2/omap_hwmod_33xx_43xx_ipblock_data.c @@ -1020,7 +1020,8 @@ static struct omap_hwmod_class_sysconfig am33xx_timer_sysc = { .rev_offs = 0x0000, .sysc_offs = 0x0010, .syss_offs = 0x0014, - .sysc_flags = (SYSC_HAS_SIDLEMODE | SYSC_HAS_SOFTRESET), + .sysc_flags = SYSC_HAS_SIDLEMODE | SYSC_HAS_SOFTRESET | + SYSC_HAS_RESET_STATUS, .idlemodes = (SIDLE_FORCE | SIDLE_NO | SIDLE_SMART | SIDLE_SMART_WKUP), .sysc_fields = &omap_hwmod_sysc_type2, diff --git a/arch/arm/mach-zynq/platsmp.c b/arch/arm/mach-zynq/platsmp.c index f66816c49186..dabe33ac988e 100644 --- a/arch/arm/mach-zynq/platsmp.c +++ b/arch/arm/mach-zynq/platsmp.c @@ -65,7 +65,7 @@ int zynq_cpun_start(u32 address, int cpu) * 0x4: Jump by mov instruction * 0x8: Jumping address */ - memcpy((__force void *)zero, &zynq_secondary_trampoline, + memcpy_toio(zero, &zynq_secondary_trampoline, trampoline_size); writel(address, zero + trampoline_size); diff --git a/arch/arm/mm/Kconfig b/arch/arm/mm/Kconfig index 41218867a9a6..71115afb71a0 100644 --- a/arch/arm/mm/Kconfig 
+++ b/arch/arm/mm/Kconfig @@ -396,6 +396,7 @@ config CPU_V7 select CPU_CP15_MPU if !MMU select CPU_HAS_ASID if MMU select CPU_PABRT_V7 + select CPU_SPECTRE if MMU select CPU_TLB_V7 if MMU # ARMv7M @@ -793,6 +794,28 @@ config CPU_BPREDICT_DISABLE help Say Y here to disable branch prediction. If unsure, say N. +config CPU_SPECTRE + bool + +config HARDEN_BRANCH_PREDICTOR + bool "Harden the branch predictor against aliasing attacks" if EXPERT + depends on CPU_SPECTRE + default y + help + Speculation attacks against some high-performance processors rely + on being able to manipulate the branch predictor for a victim + context by executing aliasing branches in the attacker context. + Such attacks can be partially mitigated against by clearing + internal branch predictor state and limiting the prediction + logic in some situations. + + This config option will take CPU-specific actions to harden + the branch predictor against aliasing attacks and may rely on + specific instruction sequences or control bits being set by + the system firmware. + + If unsure, say Y. + config TLS_REG_EMUL bool select NEED_KUSER_HELPERS diff --git a/arch/arm/mm/Makefile b/arch/arm/mm/Makefile index 7f76d96ce546..35307176e46c 100644 --- a/arch/arm/mm/Makefile +++ b/arch/arm/mm/Makefile @@ -92,7 +92,7 @@ obj-$(CONFIG_CPU_MOHAWK) += proc-mohawk.o obj-$(CONFIG_CPU_FEROCEON) += proc-feroceon.o obj-$(CONFIG_CPU_V6) += proc-v6.o obj-$(CONFIG_CPU_V6K) += proc-v6.o -obj-$(CONFIG_CPU_V7) += proc-v7.o +obj-$(CONFIG_CPU_V7) += proc-v7.o proc-v7-bugs.o obj-$(CONFIG_CPU_V7M) += proc-v7m.o AFLAGS_proc-v6.o :=-Wa,-march=armv6 diff --git a/arch/arm/mm/alignment.c b/arch/arm/mm/alignment.c index 7d5f4c736a16..cd18eda014c2 100644 --- a/arch/arm/mm/alignment.c +++ b/arch/arm/mm/alignment.c @@ -767,6 +767,36 @@ do_alignment_t32_to_handler(unsigned long *pinstr, struct pt_regs *regs, return NULL; } +static int alignment_get_arm(struct pt_regs *regs, u32 *ip, unsigned long *inst) +{ + u32 instr = 0; + int fault; + + if (user_mode(regs)) + fault = get_user(instr, ip); + else + fault = probe_kernel_address(ip, instr); + + *inst = __mem_to_opcode_arm(instr); + + return fault; +} + +static int alignment_get_thumb(struct pt_regs *regs, u16 *ip, u16 *inst) +{ + u16 instr = 0; + int fault; + + if (user_mode(regs)) + fault = get_user(instr, ip); + else + fault = probe_kernel_address(ip, instr); + + *inst = __mem_to_opcode_thumb16(instr); + + return fault; +} + static int do_alignment(unsigned long addr, unsigned int fsr, struct pt_regs *regs) { @@ -774,10 +804,10 @@ do_alignment(unsigned long addr, unsigned int fsr, struct pt_regs *regs) unsigned long instr = 0, instrptr; int (*handler)(unsigned long addr, unsigned long instr, struct pt_regs *regs); unsigned int type; - unsigned int fault; u16 tinstr = 0; int isize = 4; int thumb2_32b = 0; + int fault; if (interrupts_enabled(regs)) local_irq_enable(); @@ -786,15 +816,14 @@ do_alignment(unsigned long addr, unsigned int fsr, struct pt_regs *regs) if (thumb_mode(regs)) { u16 *ptr = (u16 *)(instrptr & ~1); - fault = probe_kernel_address(ptr, tinstr); - tinstr = __mem_to_opcode_thumb16(tinstr); + + fault = alignment_get_thumb(regs, ptr, &tinstr); if (!fault) { if (cpu_architecture() >= CPU_ARCH_ARMv7 && IS_T32(tinstr)) { /* Thumb-2 32-bit */ - u16 tinst2 = 0; - fault = probe_kernel_address(ptr + 1, tinst2); - tinst2 = __mem_to_opcode_thumb16(tinst2); + u16 tinst2; + fault = alignment_get_thumb(regs, ptr + 1, &tinst2); instr = __opcode_thumb32_compose(tinstr, tinst2); thumb2_32b = 1; } else { @@ -803,8 
+832,7 @@ do_alignment(unsigned long addr, unsigned int fsr, struct pt_regs *regs) } } } else { - fault = probe_kernel_address((void *)instrptr, instr); - instr = __mem_to_opcode_arm(instr); + fault = alignment_get_arm(regs, (void *)instrptr, &instr); } if (fault) { diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c index 83519afe3254..8c6b7c18843a 100644 --- a/arch/arm/mm/fault.c +++ b/arch/arm/mm/fault.c @@ -163,6 +163,9 @@ __do_user_fault(struct task_struct *tsk, unsigned long addr, { struct siginfo si; + if (addr > TASK_SIZE) + harden_branch_predictor(); + #ifdef CONFIG_DEBUG_USER if (((user_debug & UDBG_SEGV) && (sig == SIGSEGV)) || ((user_debug & UDBG_BUS) && (sig == SIGBUS))) { @@ -211,7 +214,7 @@ static inline bool access_error(unsigned int fsr, struct vm_area_struct *vma) { unsigned int mask = VM_READ | VM_WRITE | VM_EXEC; - if (fsr & FSR_WRITE) + if ((fsr & FSR_WRITE) && !(fsr & FSR_CM)) mask = VM_WRITE; if (fsr & FSR_LNX_PF) mask = VM_EXEC; @@ -281,7 +284,7 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs) if (user_mode(regs)) flags |= FAULT_FLAG_USER; - if (fsr & FSR_WRITE) + if ((fsr & FSR_WRITE) && !(fsr & FSR_CM)) flags |= FAULT_FLAG_WRITE; /* diff --git a/arch/arm/mm/fault.h b/arch/arm/mm/fault.h index 78830657cab3..b014e5724804 100644 --- a/arch/arm/mm/fault.h +++ b/arch/arm/mm/fault.h @@ -5,6 +5,7 @@ * Fault status register encodings. We steal bit 31 for our own purposes. */ #define FSR_LNX_PF (1 << 31) +#define FSR_CM (1 << 13) #define FSR_WRITE (1 << 11) #define FSR_FS4 (1 << 10) #define FSR_FS3_0 (15) diff --git a/arch/arm/mm/proc-macros.S b/arch/arm/mm/proc-macros.S index 9d14cdd02e00..277d78d5410d 100644 --- a/arch/arm/mm/proc-macros.S +++ b/arch/arm/mm/proc-macros.S @@ -258,13 +258,21 @@ mcr p15, 0, ip, c7, c10, 4 @ data write barrier .endm -.macro define_processor_functions name:req, dabort:req, pabort:req, nommu=0, suspend=0 +.macro define_processor_functions name:req, dabort:req, pabort:req, nommu=0, suspend=0, bugs=0 +/* + * If we are building for big.Little with branch predictor hardening, + * we need the processor function tables to remain available after boot. + */ +#if defined(CONFIG_BIG_LITTLE) && defined(CONFIG_HARDEN_BRANCH_PREDICTOR) + .section ".rodata" +#endif .type \name\()_processor_functions, #object .align 2 ENTRY(\name\()_processor_functions) .word \dabort .word \pabort .word cpu_\name\()_proc_init + .word \bugs .word cpu_\name\()_proc_fin .word cpu_\name\()_reset .word cpu_\name\()_do_idle @@ -293,6 +301,9 @@ ENTRY(\name\()_processor_functions) .endif .size \name\()_processor_functions, . - \name\()_processor_functions +#if defined(CONFIG_BIG_LITTLE) && defined(CONFIG_HARDEN_BRANCH_PREDICTOR) + .previous +#endif .endm .macro define_cache_functions name:req diff --git a/arch/arm/mm/proc-v7-2level.S b/arch/arm/mm/proc-v7-2level.S index c6141a5435c3..f8d45ad2a515 100644 --- a/arch/arm/mm/proc-v7-2level.S +++ b/arch/arm/mm/proc-v7-2level.S @@ -41,11 +41,6 @@ * even on Cortex-A8 revisions not affected by 430973. * If IBE is not set, the flush BTAC/BTB won't do anything. 
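
The fault.c hunk above calls harden_branch_predictor() when a user-mode fault lands on a kernel address (addr > TASK_SIZE), treating that as a likely sign of a mistrained branch predictor. The helper is defined outside this excerpt; a hedged sketch of its expected shape, using the per-CPU function pointer that proc-v7-bugs.c (below) populates:

    #include <linux/percpu.h>
    #include <linux/smp.h>

    /* illustrative sketch of the helper used above */
    static inline void harden_branch_predictor(void)
    {
    #ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
            harden_branch_predictor_fn_t fn;

            fn = per_cpu(harden_branch_predictor_fn, smp_processor_id());
            if (fn)
                    fn();   /* BPIALL, ICIALLU or a firmware call, per CPU type */
    #endif
    }
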
*/ -ENTRY(cpu_ca8_switch_mm) -#ifdef CONFIG_MMU - mov r2, #0 - mcr p15, 0, r2, c7, c5, 6 @ flush BTAC/BTB -#endif ENTRY(cpu_v7_switch_mm) #ifdef CONFIG_MMU mmid r1, r1 @ get mm->context.id @@ -66,7 +61,6 @@ ENTRY(cpu_v7_switch_mm) #endif bx lr ENDPROC(cpu_v7_switch_mm) -ENDPROC(cpu_ca8_switch_mm) /* * cpu_v7_set_pte_ext(ptep, pte) diff --git a/arch/arm/mm/proc-v7-bugs.c b/arch/arm/mm/proc-v7-bugs.c new file mode 100644 index 000000000000..9a07916af8dd --- /dev/null +++ b/arch/arm/mm/proc-v7-bugs.c @@ -0,0 +1,161 @@ +// SPDX-License-Identifier: GPL-2.0 +#include <linux/arm-smccc.h> +#include <linux/kernel.h> +#include <linux/psci.h> +#include <linux/smp.h> + +#include <asm/cp15.h> +#include <asm/cputype.h> +#include <asm/proc-fns.h> +#include <asm/system_misc.h> + +#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR +DEFINE_PER_CPU(harden_branch_predictor_fn_t, harden_branch_predictor_fn); + +extern void cpu_v7_iciallu_switch_mm(phys_addr_t pgd_phys, struct mm_struct *mm); +extern void cpu_v7_bpiall_switch_mm(phys_addr_t pgd_phys, struct mm_struct *mm); +extern void cpu_v7_smc_switch_mm(phys_addr_t pgd_phys, struct mm_struct *mm); +extern void cpu_v7_hvc_switch_mm(phys_addr_t pgd_phys, struct mm_struct *mm); + +static void harden_branch_predictor_bpiall(void) +{ + write_sysreg(0, BPIALL); +} + +static void harden_branch_predictor_iciallu(void) +{ + write_sysreg(0, ICIALLU); +} + +static void __maybe_unused call_smc_arch_workaround_1(void) +{ + arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL); +} + +static void __maybe_unused call_hvc_arch_workaround_1(void) +{ + arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL); +} + +static void cpu_v7_spectre_init(void) +{ + const char *spectre_v2_method = NULL; + int cpu = smp_processor_id(); + + if (per_cpu(harden_branch_predictor_fn, cpu)) + return; + + switch (read_cpuid_part()) { + case ARM_CPU_PART_CORTEX_A8: + case ARM_CPU_PART_CORTEX_A9: + case ARM_CPU_PART_CORTEX_A12: + case ARM_CPU_PART_CORTEX_A17: + case ARM_CPU_PART_CORTEX_A73: + case ARM_CPU_PART_CORTEX_A75: + per_cpu(harden_branch_predictor_fn, cpu) = + harden_branch_predictor_bpiall; + spectre_v2_method = "BPIALL"; + break; + + case ARM_CPU_PART_CORTEX_A15: + case ARM_CPU_PART_BRAHMA_B15: + per_cpu(harden_branch_predictor_fn, cpu) = + harden_branch_predictor_iciallu; + spectre_v2_method = "ICIALLU"; + break; + +#ifdef CONFIG_ARM_PSCI + default: + /* Other ARM CPUs require no workaround */ + if (read_cpuid_implementor() == ARM_CPU_IMP_ARM) + break; + /* fallthrough */ + /* Cortex A57/A72 require firmware workaround */ + case ARM_CPU_PART_CORTEX_A57: + case ARM_CPU_PART_CORTEX_A72: { + struct arm_smccc_res res; + + if (psci_ops.smccc_version == SMCCC_VERSION_1_0) + break; + + switch (psci_ops.conduit) { + case PSCI_CONDUIT_HVC: + arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID, + ARM_SMCCC_ARCH_WORKAROUND_1, &res); + if ((int)res.a0 != 0) + break; + per_cpu(harden_branch_predictor_fn, cpu) = + call_hvc_arch_workaround_1; + cpu_do_switch_mm = cpu_v7_hvc_switch_mm; + spectre_v2_method = "hypervisor"; + break; + + case PSCI_CONDUIT_SMC: + arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID, + ARM_SMCCC_ARCH_WORKAROUND_1, &res); + if ((int)res.a0 != 0) + break; + per_cpu(harden_branch_predictor_fn, cpu) = + call_smc_arch_workaround_1; + cpu_do_switch_mm = cpu_v7_smc_switch_mm; + spectre_v2_method = "firmware"; + break; + + default: + break; + } + } +#endif + } + + if (spectre_v2_method) + pr_info("CPU%u: Spectre v2: using %s workaround\n", + smp_processor_id(), spectre_v2_method); +} +#else +static void 
cpu_v7_spectre_init(void) +{ +} +#endif + +static __maybe_unused bool cpu_v7_check_auxcr_set(bool *warned, + u32 mask, const char *msg) +{ + u32 aux_cr; + + asm("mrc p15, 0, %0, c1, c0, 1" : "=r" (aux_cr)); + + if ((aux_cr & mask) != mask) { + if (!*warned) + pr_err("CPU%u: %s", smp_processor_id(), msg); + *warned = true; + return false; + } + return true; +} + +static DEFINE_PER_CPU(bool, spectre_warned); + +static bool check_spectre_auxcr(bool *warned, u32 bit) +{ + return IS_ENABLED(CONFIG_HARDEN_BRANCH_PREDICTOR) && + cpu_v7_check_auxcr_set(warned, bit, + "Spectre v2: firmware did not set auxiliary control register IBE bit, system vulnerable\n"); +} + +void cpu_v7_ca8_ibe(void) +{ + if (check_spectre_auxcr(this_cpu_ptr(&spectre_warned), BIT(6))) + cpu_v7_spectre_init(); +} + +void cpu_v7_ca15_ibe(void) +{ + if (check_spectre_auxcr(this_cpu_ptr(&spectre_warned), BIT(0))) + cpu_v7_spectre_init(); +} + +void cpu_v7_bugs_init(void) +{ + cpu_v7_spectre_init(); +} diff --git a/arch/arm/mm/proc-v7.S b/arch/arm/mm/proc-v7.S index 8e1ea433c3f1..90cddff176f6 100644 --- a/arch/arm/mm/proc-v7.S +++ b/arch/arm/mm/proc-v7.S @@ -9,6 +9,7 @@ * * This is the "shell" of the ARMv7 processor support. */ +#include <linux/arm-smccc.h> #include <linux/init.h> #include <linux/linkage.h> #include <asm/assembler.h> @@ -87,6 +88,37 @@ ENTRY(cpu_v7_dcache_clean_area) ret lr ENDPROC(cpu_v7_dcache_clean_area) +#ifdef CONFIG_ARM_PSCI + .arch_extension sec +ENTRY(cpu_v7_smc_switch_mm) + stmfd sp!, {r0 - r3} + movw r0, #:lower16:ARM_SMCCC_ARCH_WORKAROUND_1 + movt r0, #:upper16:ARM_SMCCC_ARCH_WORKAROUND_1 + smc #0 + ldmfd sp!, {r0 - r3} + b cpu_v7_switch_mm +ENDPROC(cpu_v7_smc_switch_mm) + .arch_extension virt +ENTRY(cpu_v7_hvc_switch_mm) + stmfd sp!, {r0 - r3} + movw r0, #:lower16:ARM_SMCCC_ARCH_WORKAROUND_1 + movt r0, #:upper16:ARM_SMCCC_ARCH_WORKAROUND_1 + hvc #0 + ldmfd sp!, {r0 - r3} + b cpu_v7_switch_mm +ENDPROC(cpu_v7_hvc_switch_mm) +#endif +ENTRY(cpu_v7_iciallu_switch_mm) + mov r3, #0 + mcr p15, 0, r3, c7, c5, 0 @ ICIALLU + b cpu_v7_switch_mm +ENDPROC(cpu_v7_iciallu_switch_mm) +ENTRY(cpu_v7_bpiall_switch_mm) + mov r3, #0 + mcr p15, 0, r3, c7, c5, 6 @ flush BTAC/BTB + b cpu_v7_switch_mm +ENDPROC(cpu_v7_bpiall_switch_mm) + string cpu_v7_name, "ARMv7 Processor" .align @@ -152,31 +184,6 @@ ENTRY(cpu_v7_do_resume) ENDPROC(cpu_v7_do_resume) #endif -/* - * Cortex-A8 - */ - globl_equ cpu_ca8_proc_init, cpu_v7_proc_init - globl_equ cpu_ca8_proc_fin, cpu_v7_proc_fin - globl_equ cpu_ca8_reset, cpu_v7_reset - globl_equ cpu_ca8_do_idle, cpu_v7_do_idle - globl_equ cpu_ca8_dcache_clean_area, cpu_v7_dcache_clean_area - globl_equ cpu_ca8_set_pte_ext, cpu_v7_set_pte_ext - globl_equ cpu_ca8_suspend_size, cpu_v7_suspend_size -#ifdef CONFIG_ARM_CPU_SUSPEND - globl_equ cpu_ca8_do_suspend, cpu_v7_do_suspend - globl_equ cpu_ca8_do_resume, cpu_v7_do_resume -#endif - -/* - * Cortex-A9 processor functions - */ - globl_equ cpu_ca9mp_proc_init, cpu_v7_proc_init - globl_equ cpu_ca9mp_proc_fin, cpu_v7_proc_fin - globl_equ cpu_ca9mp_reset, cpu_v7_reset - globl_equ cpu_ca9mp_do_idle, cpu_v7_do_idle - globl_equ cpu_ca9mp_dcache_clean_area, cpu_v7_dcache_clean_area - globl_equ cpu_ca9mp_switch_mm, cpu_v7_switch_mm - globl_equ cpu_ca9mp_set_pte_ext, cpu_v7_set_pte_ext .globl cpu_ca9mp_suspend_size .equ cpu_ca9mp_suspend_size, cpu_v7_suspend_size + 4 * 2 #ifdef CONFIG_ARM_CPU_SUSPEND @@ -488,12 +495,79 @@ __v7_setup_stack: __INITDATA + .weak cpu_v7_bugs_init + @ define struct processor (see <asm/proc-fns.h> and proc-macros.S) - 
define_processor_functions v7, dabort=v7_early_abort, pabort=v7_pabort, suspend=1 + define_processor_functions v7, dabort=v7_early_abort, pabort=v7_pabort, suspend=1, bugs=cpu_v7_bugs_init + +#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR + @ generic v7 bpiall on context switch + globl_equ cpu_v7_bpiall_proc_init, cpu_v7_proc_init + globl_equ cpu_v7_bpiall_proc_fin, cpu_v7_proc_fin + globl_equ cpu_v7_bpiall_reset, cpu_v7_reset + globl_equ cpu_v7_bpiall_do_idle, cpu_v7_do_idle + globl_equ cpu_v7_bpiall_dcache_clean_area, cpu_v7_dcache_clean_area + globl_equ cpu_v7_bpiall_set_pte_ext, cpu_v7_set_pte_ext + globl_equ cpu_v7_bpiall_suspend_size, cpu_v7_suspend_size +#ifdef CONFIG_ARM_CPU_SUSPEND + globl_equ cpu_v7_bpiall_do_suspend, cpu_v7_do_suspend + globl_equ cpu_v7_bpiall_do_resume, cpu_v7_do_resume +#endif + define_processor_functions v7_bpiall, dabort=v7_early_abort, pabort=v7_pabort, suspend=1, bugs=cpu_v7_bugs_init + +#define HARDENED_BPIALL_PROCESSOR_FUNCTIONS v7_bpiall_processor_functions +#else +#define HARDENED_BPIALL_PROCESSOR_FUNCTIONS v7_processor_functions +#endif + #ifndef CONFIG_ARM_LPAE - define_processor_functions ca8, dabort=v7_early_abort, pabort=v7_pabort, suspend=1 - define_processor_functions ca9mp, dabort=v7_early_abort, pabort=v7_pabort, suspend=1 + @ Cortex-A8 - always needs bpiall switch_mm implementation + globl_equ cpu_ca8_proc_init, cpu_v7_proc_init + globl_equ cpu_ca8_proc_fin, cpu_v7_proc_fin + globl_equ cpu_ca8_reset, cpu_v7_reset + globl_equ cpu_ca8_do_idle, cpu_v7_do_idle + globl_equ cpu_ca8_dcache_clean_area, cpu_v7_dcache_clean_area + globl_equ cpu_ca8_set_pte_ext, cpu_v7_set_pte_ext + globl_equ cpu_ca8_switch_mm, cpu_v7_bpiall_switch_mm + globl_equ cpu_ca8_suspend_size, cpu_v7_suspend_size +#ifdef CONFIG_ARM_CPU_SUSPEND + globl_equ cpu_ca8_do_suspend, cpu_v7_do_suspend + globl_equ cpu_ca8_do_resume, cpu_v7_do_resume #endif + define_processor_functions ca8, dabort=v7_early_abort, pabort=v7_pabort, suspend=1, bugs=cpu_v7_ca8_ibe + + @ Cortex-A9 - needs more registers preserved across suspend/resume + @ and bpiall switch_mm for hardening + globl_equ cpu_ca9mp_proc_init, cpu_v7_proc_init + globl_equ cpu_ca9mp_proc_fin, cpu_v7_proc_fin + globl_equ cpu_ca9mp_reset, cpu_v7_reset + globl_equ cpu_ca9mp_do_idle, cpu_v7_do_idle + globl_equ cpu_ca9mp_dcache_clean_area, cpu_v7_dcache_clean_area +#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR + globl_equ cpu_ca9mp_switch_mm, cpu_v7_bpiall_switch_mm +#else + globl_equ cpu_ca9mp_switch_mm, cpu_v7_switch_mm +#endif + globl_equ cpu_ca9mp_set_pte_ext, cpu_v7_set_pte_ext + define_processor_functions ca9mp, dabort=v7_early_abort, pabort=v7_pabort, suspend=1, bugs=cpu_v7_bugs_init +#endif + + @ Cortex-A15 - needs iciallu switch_mm for hardening + globl_equ cpu_ca15_proc_init, cpu_v7_proc_init + globl_equ cpu_ca15_proc_fin, cpu_v7_proc_fin + globl_equ cpu_ca15_reset, cpu_v7_reset + globl_equ cpu_ca15_do_idle, cpu_v7_do_idle + globl_equ cpu_ca15_dcache_clean_area, cpu_v7_dcache_clean_area +#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR + globl_equ cpu_ca15_switch_mm, cpu_v7_iciallu_switch_mm +#else + globl_equ cpu_ca15_switch_mm, cpu_v7_switch_mm +#endif + globl_equ cpu_ca15_set_pte_ext, cpu_v7_set_pte_ext + globl_equ cpu_ca15_suspend_size, cpu_v7_suspend_size + globl_equ cpu_ca15_do_suspend, cpu_v7_do_suspend + globl_equ cpu_ca15_do_resume, cpu_v7_do_resume + define_processor_functions ca15, dabort=v7_early_abort, pabort=v7_pabort, suspend=1, bugs=cpu_v7_ca15_ibe #ifdef CONFIG_CPU_PJ4B define_processor_functions pj4b, dabort=v7_early_abort, 
pabort=v7_pabort, suspend=1 #endif @@ -600,7 +674,7 @@ __v7_ca7mp_proc_info: __v7_ca12mp_proc_info: .long 0x410fc0d0 .long 0xff0ffff0 - __v7_proc __v7_ca12mp_proc_info, __v7_ca12mp_setup + __v7_proc __v7_ca12mp_proc_info, __v7_ca12mp_setup, proc_fns = HARDENED_BPIALL_PROCESSOR_FUNCTIONS .size __v7_ca12mp_proc_info, . - __v7_ca12mp_proc_info /* @@ -610,7 +684,7 @@ __v7_ca12mp_proc_info: __v7_ca15mp_proc_info: .long 0x410fc0f0 .long 0xff0ffff0 - __v7_proc __v7_ca15mp_proc_info, __v7_ca15mp_setup + __v7_proc __v7_ca15mp_proc_info, __v7_ca15mp_setup, proc_fns = ca15_processor_functions .size __v7_ca15mp_proc_info, . - __v7_ca15mp_proc_info /* @@ -620,7 +694,7 @@ __v7_ca15mp_proc_info: __v7_b15mp_proc_info: .long 0x420f00f0 .long 0xff0ffff0 - __v7_proc __v7_b15mp_proc_info, __v7_b15mp_setup + __v7_proc __v7_b15mp_proc_info, __v7_b15mp_setup, proc_fns = ca15_processor_functions .size __v7_b15mp_proc_info, . - __v7_b15mp_proc_info /* @@ -630,9 +704,25 @@ __v7_b15mp_proc_info: __v7_ca17mp_proc_info: .long 0x410fc0e0 .long 0xff0ffff0 - __v7_proc __v7_ca17mp_proc_info, __v7_ca17mp_setup + __v7_proc __v7_ca17mp_proc_info, __v7_ca17mp_setup, proc_fns = HARDENED_BPIALL_PROCESSOR_FUNCTIONS .size __v7_ca17mp_proc_info, . - __v7_ca17mp_proc_info + /* ARM Ltd. Cortex A73 processor */ + .type __v7_ca73_proc_info, #object +__v7_ca73_proc_info: + .long 0x410fd090 + .long 0xff0ffff0 + __v7_proc __v7_ca73_proc_info, __v7_setup, proc_fns = HARDENED_BPIALL_PROCESSOR_FUNCTIONS + .size __v7_ca73_proc_info, . - __v7_ca73_proc_info + + /* ARM Ltd. Cortex A75 processor */ + .type __v7_ca75_proc_info, #object +__v7_ca75_proc_info: + .long 0x410fd0a0 + .long 0xff0ffff0 + __v7_proc __v7_ca75_proc_info, __v7_setup, proc_fns = HARDENED_BPIALL_PROCESSOR_FUNCTIONS + .size __v7_ca75_proc_info, . - __v7_ca75_proc_info + /* * Qualcomm Inc. Krait processors. */ diff --git a/arch/arm/vfp/vfpmodule.c b/arch/arm/vfp/vfpmodule.c index 73085d3482ed..7c5a99974442 100644 --- a/arch/arm/vfp/vfpmodule.c +++ b/arch/arm/vfp/vfpmodule.c @@ -554,12 +554,11 @@ void vfp_flush_hwstate(struct thread_info *thread) * Save the current VFP state into the provided structures and prepare * for entry into a new function (signal handler). */ -int vfp_preserve_user_clear_hwstate(struct user_vfp __user *ufp, - struct user_vfp_exc __user *ufp_exc) +int vfp_preserve_user_clear_hwstate(struct user_vfp *ufp, + struct user_vfp_exc *ufp_exc) { struct thread_info *thread = current_thread_info(); struct vfp_hard_struct *hwstate = &thread->vfpstate.hard; - int err = 0; /* Ensure that the saved hwstate is up-to-date. */ vfp_sync_hwstate(thread); @@ -568,22 +567,19 @@ int vfp_preserve_user_clear_hwstate(struct user_vfp __user *ufp, * Copy the floating point registers. There can be unused * registers see asm/hwcap.h for details. */ - err |= __copy_to_user(&ufp->fpregs, &hwstate->fpregs, - sizeof(hwstate->fpregs)); + memcpy(&ufp->fpregs, &hwstate->fpregs, sizeof(hwstate->fpregs)); + /* * Copy the status and control register. */ - __put_user_error(hwstate->fpscr, &ufp->fpscr, err); + ufp->fpscr = hwstate->fpscr; /* * Copy the exception registers. */ - __put_user_error(hwstate->fpexc, &ufp_exc->fpexc, err); - __put_user_error(hwstate->fpinst, &ufp_exc->fpinst, err); - __put_user_error(hwstate->fpinst2, &ufp_exc->fpinst2, err); - - if (err) - return -EFAULT; + ufp_exc->fpexc = hwstate->fpexc; + ufp_exc->fpinst = hwstate->fpinst; + ufp_exc->fpinst2 = hwstate->fpinst2; /* Ensure that VFP is disabled. 
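
The signal.c and VFP changes in this patch all follow one pattern: assemble the frame in a kernel-side struct and move it with a single copy_to_user()/copy_from_user(), rather than a run of __put_user_error()/__get_user_error() calls that each open a separate user-access window. A generic illustration of the pattern, using a made-up example_frame type:

    #include <linux/uaccess.h>

    struct example_frame {          /* hypothetical type, for illustration */
            unsigned long magic;
            unsigned long size;
    };

    static int write_example_frame(struct example_frame __user *uframe)
    {
            struct example_frame kframe = {
                    .magic = 0x1234,
                    .size  = sizeof(kframe),
            };

            /* one bounds-checked copy instead of many small put_user()s */
            return copy_to_user(uframe, &kframe, sizeof(kframe)) ? -EFAULT : 0;
    }
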
*/ vfp_flush_hwstate(thread); @@ -597,13 +593,11 @@ int vfp_preserve_user_clear_hwstate(struct user_vfp __user *ufp, } /* Sanitise and restore the current VFP state from the provided structures. */ -int vfp_restore_user_hwstate(struct user_vfp __user *ufp, - struct user_vfp_exc __user *ufp_exc) +int vfp_restore_user_hwstate(struct user_vfp *ufp, struct user_vfp_exc *ufp_exc) { struct thread_info *thread = current_thread_info(); struct vfp_hard_struct *hwstate = &thread->vfpstate.hard; unsigned long fpexc; - int err = 0; /* Disable VFP to avoid corrupting the new thread state. */ vfp_flush_hwstate(thread); @@ -612,17 +606,16 @@ int vfp_restore_user_hwstate(struct user_vfp __user *ufp, * Copy the floating point registers. There can be unused * registers see asm/hwcap.h for details. */ - err |= __copy_from_user(&hwstate->fpregs, &ufp->fpregs, - sizeof(hwstate->fpregs)); + memcpy(&hwstate->fpregs, &ufp->fpregs, sizeof(hwstate->fpregs)); /* * Copy the status and control register. */ - __get_user_error(hwstate->fpscr, &ufp->fpscr, err); + hwstate->fpscr = ufp->fpscr; /* * Sanitise and restore the exception registers. */ - __get_user_error(fpexc, &ufp_exc->fpexc, err); + fpexc = ufp_exc->fpexc; /* Ensure the VFP is enabled. */ fpexc |= FPEXC_EN; @@ -631,10 +624,10 @@ int vfp_restore_user_hwstate(struct user_vfp __user *ufp, fpexc &= ~(FPEXC_EX | FPEXC_FP2V); hwstate->fpexc = fpexc; - __get_user_error(hwstate->fpinst, &ufp_exc->fpinst, err); - __get_user_error(hwstate->fpinst2, &ufp_exc->fpinst2, err); + hwstate->fpinst = ufp_exc->fpinst; + hwstate->fpinst2 = ufp_exc->fpinst2; - return err ? -EFAULT : 0; + return 0; } /* diff --git a/arch/arm64/configs/msm-auto-perf_defconfig b/arch/arm64/configs/msm-auto-perf_defconfig index 1c83060d2d2f..f5130c4c4611 100644 --- a/arch/arm64/configs/msm-auto-perf_defconfig +++ b/arch/arm64/configs/msm-auto-perf_defconfig @@ -329,6 +329,7 @@ CONFIG_SENSORS_BMG_FIFO=y # CONFIG_LEGACY_PTYS is not set # CONFIG_DEVMEM is not set # CONFIG_DEVKMEM is not set +CONFIG_SERIAL_MSM=y CONFIG_SERIAL_MSM_HS=y CONFIG_SERIAL_MSM_SMD=y CONFIG_DIAG_CHAR=y diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h index bb718233bbdb..2c94aecc8992 100644 --- a/arch/arm64/include/asm/cpufeature.h +++ b/arch/arm64/include/asm/cpufeature.h @@ -96,9 +96,10 @@ struct arm64_cpu_capabilities { struct { /* Feature register checking */ u32 sys_reg; - int field_pos; - int min_field_value; - int hwcap_type; + u8 field_pos; + u8 min_field_value; + u8 hwcap_type; + bool sign; unsigned long hwcap; }; }; @@ -128,15 +129,15 @@ static inline void cpus_set_cap(unsigned int num) } static inline int __attribute_const__ -cpuid_feature_extract_field_width(u64 features, int field, int width) +cpuid_feature_extract_signed_field_width(u64 features, int field, int width) { return (s64)(features << (64 - width - field)) >> (64 - width); } static inline int __attribute_const__ -cpuid_feature_extract_field(u64 features, int field) +cpuid_feature_extract_signed_field(u64 features, int field) { - return cpuid_feature_extract_field_width(features, field, 4); + return cpuid_feature_extract_signed_field_width(features, field, 4); } static inline unsigned int __attribute_const__ @@ -156,17 +157,23 @@ static inline u64 arm64_ftr_mask(struct arm64_ftr_bits *ftrp) return (u64)GENMASK(ftrp->shift + ftrp->width - 1, ftrp->shift); } +static inline int __attribute_const__ +cpuid_feature_extract_field(u64 features, int field, bool sign) +{ + return (sign) ? 
+ cpuid_feature_extract_signed_field(features, field) : + cpuid_feature_extract_unsigned_field(features, field); +} + static inline s64 arm64_ftr_value(struct arm64_ftr_bits *ftrp, u64 val) { - return ftrp->sign ? - cpuid_feature_extract_field_width(val, ftrp->shift, ftrp->width) : - cpuid_feature_extract_unsigned_field_width(val, ftrp->shift, ftrp->width); + return (s64)cpuid_feature_extract_field(val, ftrp->shift, ftrp->sign); } static inline bool id_aa64mmfr0_mixed_endian_el0(u64 mmfr0) { - return cpuid_feature_extract_field(mmfr0, ID_AA64MMFR0_BIGENDEL_SHIFT) == 0x1 || - cpuid_feature_extract_field(mmfr0, ID_AA64MMFR0_BIGENDEL0_SHIFT) == 0x1; + return cpuid_feature_extract_unsigned_field(mmfr0, ID_AA64MMFR0_BIGENDEL_SHIFT) == 0x1 || + cpuid_feature_extract_unsigned_field(mmfr0, ID_AA64MMFR0_BIGENDEL0_SHIFT) == 0x1; } void __init setup_cpu_features(void); diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index cb475b0a3422..50033b91ce4f 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -617,7 +617,7 @@ u64 read_system_reg(u32 id) static bool feature_matches(u64 reg, const struct arm64_cpu_capabilities *entry) { - int val = cpuid_feature_extract_field(reg, entry->field_pos); + int val = cpuid_feature_extract_field(reg, entry->field_pos, entry->sign); return val >= entry->min_field_value; } @@ -703,6 +703,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = { .matches = has_useable_gicv3_cpuif, .sys_reg = SYS_ID_AA64PFR0_EL1, .field_pos = ID_AA64PFR0_GIC_SHIFT, + .sign = FTR_UNSIGNED, .min_field_value = 1, }, #ifdef CONFIG_ARM64_PAN @@ -712,6 +713,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = { .matches = has_cpuid_feature, .sys_reg = SYS_ID_AA64MMFR1_EL1, .field_pos = ID_AA64MMFR1_PAN_SHIFT, + .sign = FTR_UNSIGNED, .min_field_value = 1, .enable = cpu_enable_pan, }, @@ -723,6 +725,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = { .matches = has_cpuid_feature, .sys_reg = SYS_ID_AA64ISAR0_EL1, .field_pos = ID_AA64ISAR0_ATOMICS_SHIFT, + .sign = FTR_UNSIGNED, .min_field_value = 2, }, #endif /* CONFIG_AS_LSE && CONFIG_ARM64_LSE_ATOMICS */ @@ -765,37 +768,39 @@ static const struct arm64_cpu_capabilities arm64_features[] = { .matches = has_cpuid_feature, .sys_reg = SYS_ID_AA64PFR0_EL1, .field_pos = ID_AA64PFR0_EL0_SHIFT, + .sign = FTR_UNSIGNED, .min_field_value = ID_AA64PFR0_EL0_32BIT_64BIT, }, {}, }; -#define HWCAP_CAP(reg, field, min_value, type, cap) \ +#define HWCAP_CAP(reg, field, s, min_value, type, cap) \ { \ .desc = #cap, \ .matches = has_cpuid_feature, \ .sys_reg = reg, \ .field_pos = field, \ + .sign = s, \ .min_field_value = min_value, \ .hwcap_type = type, \ .hwcap = cap, \ } static const struct arm64_cpu_capabilities arm64_hwcaps[] = { - HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_AES_SHIFT, 2, CAP_HWCAP, HWCAP_PMULL), - HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_AES_SHIFT, 1, CAP_HWCAP, HWCAP_AES), - HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SHA1_SHIFT, 1, CAP_HWCAP, HWCAP_SHA1), - HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SHA2_SHIFT, 1, CAP_HWCAP, HWCAP_SHA2), - HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_CRC32_SHIFT, 1, CAP_HWCAP, HWCAP_CRC32), - HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_ATOMICS_SHIFT, 2, CAP_HWCAP, HWCAP_ATOMICS), - HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_FP_SHIFT, 0, CAP_HWCAP, HWCAP_FP), - HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_ASIMD_SHIFT, 0, CAP_HWCAP, HWCAP_ASIMD), + HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_AES_SHIFT, 
FTR_UNSIGNED, 2, CAP_HWCAP, HWCAP_PMULL), + HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_AES_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_AES), + HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SHA1_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_SHA1), + HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SHA2_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_SHA2), + HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_CRC32_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_CRC32), + HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_ATOMICS_SHIFT, FTR_UNSIGNED, 2, CAP_HWCAP, HWCAP_ATOMICS), + HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_FP_SHIFT, FTR_SIGNED, 0, CAP_HWCAP, HWCAP_FP), + HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_ASIMD_SHIFT, FTR_SIGNED, 0, CAP_HWCAP, HWCAP_ASIMD), #ifdef CONFIG_COMPAT - HWCAP_CAP(SYS_ID_ISAR5_EL1, ID_ISAR5_AES_SHIFT, 2, CAP_COMPAT_HWCAP2, COMPAT_HWCAP2_PMULL), - HWCAP_CAP(SYS_ID_ISAR5_EL1, ID_ISAR5_AES_SHIFT, 1, CAP_COMPAT_HWCAP2, COMPAT_HWCAP2_AES), - HWCAP_CAP(SYS_ID_ISAR5_EL1, ID_ISAR5_SHA1_SHIFT, 1, CAP_COMPAT_HWCAP2, COMPAT_HWCAP2_SHA1), - HWCAP_CAP(SYS_ID_ISAR5_EL1, ID_ISAR5_SHA2_SHIFT, 1, CAP_COMPAT_HWCAP2, COMPAT_HWCAP2_SHA2), - HWCAP_CAP(SYS_ID_ISAR5_EL1, ID_ISAR5_CRC32_SHIFT, 1, CAP_COMPAT_HWCAP2, COMPAT_HWCAP2_CRC32), + HWCAP_CAP(SYS_ID_ISAR5_EL1, ID_ISAR5_AES_SHIFT, FTR_UNSIGNED, 2, CAP_COMPAT_HWCAP2, COMPAT_HWCAP2_PMULL), + HWCAP_CAP(SYS_ID_ISAR5_EL1, ID_ISAR5_AES_SHIFT, FTR_UNSIGNED, 1, CAP_COMPAT_HWCAP2, COMPAT_HWCAP2_AES), + HWCAP_CAP(SYS_ID_ISAR5_EL1, ID_ISAR5_SHA1_SHIFT, FTR_UNSIGNED, 1, CAP_COMPAT_HWCAP2, COMPAT_HWCAP2_SHA1), + HWCAP_CAP(SYS_ID_ISAR5_EL1, ID_ISAR5_SHA2_SHIFT, FTR_UNSIGNED, 1, CAP_COMPAT_HWCAP2, COMPAT_HWCAP2_SHA2), + HWCAP_CAP(SYS_ID_ISAR5_EL1, ID_ISAR5_CRC32_SHIFT, FTR_UNSIGNED, 1, CAP_COMPAT_HWCAP2, COMPAT_HWCAP2_CRC32), #endif {}, }; diff --git a/arch/arm64/kernel/debug-monitors.c b/arch/arm64/kernel/debug-monitors.c index 6de6d9f43b95..858454466142 100644 --- a/arch/arm64/kernel/debug-monitors.c +++ b/arch/arm64/kernel/debug-monitors.c @@ -35,7 +35,7 @@ /* Determine debug architecture. 
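
The arm64 cpufeature changes above make the signedness of each ID-register field explicit because the two extraction rules disagree exactly where it matters: a signed 4-bit field of 0xf means "not implemented" and must read back as -1, while an unsigned field of 0xf is simply 15. A small worked example of the two rules for a 4-bit field:

    #include <linux/types.h>

    static inline int extract_signed4(u64 reg, int shift)
    {
            return (s64)(reg << (64 - 4 - shift)) >> 60;    /* field 0xf -> -1 */
    }

    static inline unsigned int extract_unsigned4(u64 reg, int shift)
    {
            return (reg >> shift) & 0xf;                    /* field 0xf -> 15 */
    }

    /* e.g. an ID_AA64PFR0_EL1.FP field of 0xf means "no FP implemented":
     * signed extraction yields -1, which correctly fails the ">= 0" check
     * behind HWCAP_FP, whereas unsigned extraction would wrongly pass it. */
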
*/ u8 debug_monitors_arch(void) { - return cpuid_feature_extract_field(read_system_reg(SYS_ID_AA64DFR0_EL1), + return cpuid_feature_extract_unsigned_field(read_system_reg(SYS_ID_AA64DFR0_EL1), ID_AA64DFR0_DEBUGVER_SHIFT); } diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 3ff507c177a5..2127fdf74f20 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -688,7 +688,7 @@ static bool trap_dbgidr(struct kvm_vcpu *vcpu, } else { u64 dfr = read_system_reg(SYS_ID_AA64DFR0_EL1); u64 pfr = read_system_reg(SYS_ID_AA64PFR0_EL1); - u32 el3 = !!cpuid_feature_extract_field(pfr, ID_AA64PFR0_EL3_SHIFT); + u32 el3 = !!cpuid_feature_extract_unsigned_field(pfr, ID_AA64PFR0_EL3_SHIFT); p->regval = ((((dfr >> ID_AA64DFR0_WRPS_SHIFT) & 0xf) << 28) | (((dfr >> ID_AA64DFR0_BRPS_SHIFT) & 0xf) << 24) | diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c index e7e8ebae2775..1e31cd3871ee 100644 --- a/arch/arm64/mm/context.c +++ b/arch/arm64/mm/context.c @@ -213,7 +213,8 @@ asmlinkage void post_ttbr_update_workaround(void) static int asids_init(void) { - int fld = cpuid_feature_extract_field(read_cpuid(SYS_ID_AA64MMFR0_EL1), 4); + int fld = cpuid_feature_extract_unsigned_field(read_cpuid(SYS_ID_AA64MMFR0_EL1), + ID_AA64MMFR0_ASID_SHIFT); switch (fld) { default: diff --git a/arch/ia64/kernel/module.c b/arch/ia64/kernel/module.c index 36b2c94a8eb5..14c7184daaf6 100644 --- a/arch/ia64/kernel/module.c +++ b/arch/ia64/kernel/module.c @@ -912,8 +912,12 @@ module_finalize (const Elf_Ehdr *hdr, const Elf_Shdr *sechdrs, struct module *mo void module_arch_cleanup (struct module *mod) { - if (mod->arch.init_unw_table) + if (mod->arch.init_unw_table) { unw_remove_unwind_table(mod->arch.init_unw_table); - if (mod->arch.core_unw_table) + mod->arch.init_unw_table = NULL; + } + if (mod->arch.core_unw_table) { unw_remove_unwind_table(mod->arch.core_unw_table); + mod->arch.core_unw_table = NULL; + } } diff --git a/arch/mips/bcm63xx/prom.c b/arch/mips/bcm63xx/prom.c index 7019e2967009..bbbf8057565b 100644 --- a/arch/mips/bcm63xx/prom.c +++ b/arch/mips/bcm63xx/prom.c @@ -84,7 +84,7 @@ void __init prom_init(void) * Here we will start up CPU1 in the background and ask it to * reconfigure itself then go back to sleep. 
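
The bmips.h change below turns the assembly-provided vector symbols into incomplete arrays (extern char sym[]) and drops the address-of operators at the call sites. The array form decays to a pointer naturally and carries no bogus one-byte size, so a 0x20-byte memcpy from it no longer looks out of bounds to the compiler. Illustration (symbol name taken from the patch):

    #include <linux/string.h>

    extern char bmips_smp_movevec[];        /* blob provided by assembly */

    static void copy_movevec(void *dst)
    {
            memcpy(dst, bmips_smp_movevec, 0x20);   /* no '&' needed any more */
    }
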
*/ - memcpy((void *)0xa0000200, &bmips_smp_movevec, 0x20); + memcpy((void *)0xa0000200, bmips_smp_movevec, 0x20); __sync(); set_c0_cause(C_SW0); cpumask_set_cpu(1, &bmips_booted_mask); diff --git a/arch/mips/bcm63xx/reset.c b/arch/mips/bcm63xx/reset.c index d1fe51edf5e6..4d411da2497b 100644 --- a/arch/mips/bcm63xx/reset.c +++ b/arch/mips/bcm63xx/reset.c @@ -119,7 +119,7 @@ #define BCM6368_RESET_DSL 0 #define BCM6368_RESET_SAR SOFTRESET_6368_SAR_MASK #define BCM6368_RESET_EPHY SOFTRESET_6368_EPHY_MASK -#define BCM6368_RESET_ENETSW 0 +#define BCM6368_RESET_ENETSW SOFTRESET_6368_ENETSW_MASK #define BCM6368_RESET_PCM SOFTRESET_6368_PCM_MASK #define BCM6368_RESET_MPI SOFTRESET_6368_MPI_MASK #define BCM6368_RESET_PCIE 0 diff --git a/arch/mips/configs/mtx1_defconfig b/arch/mips/configs/mtx1_defconfig index 9b6926d6bb32..6ac662afd5c1 100644 --- a/arch/mips/configs/mtx1_defconfig +++ b/arch/mips/configs/mtx1_defconfig @@ -638,7 +638,6 @@ CONFIG_USB_SERIAL_OMNINET=m CONFIG_USB_EMI62=m CONFIG_USB_EMI26=m CONFIG_USB_ADUTUX=m -CONFIG_USB_RIO500=m CONFIG_USB_LEGOTOWER=m CONFIG_USB_LCD=m CONFIG_USB_LED=m diff --git a/arch/mips/configs/rm200_defconfig b/arch/mips/configs/rm200_defconfig index db029f4ff759..4acaa3fb4818 100644 --- a/arch/mips/configs/rm200_defconfig +++ b/arch/mips/configs/rm200_defconfig @@ -351,7 +351,6 @@ CONFIG_USB_SERIAL_SAFE_PADDED=y CONFIG_USB_SERIAL_CYBERJACK=m CONFIG_USB_SERIAL_XIRCOM=m CONFIG_USB_SERIAL_OMNINET=m -CONFIG_USB_RIO500=m CONFIG_USB_LEGOTOWER=m CONFIG_USB_LCD=m CONFIG_USB_LED=m diff --git a/arch/mips/fw/sni/sniprom.c b/arch/mips/fw/sni/sniprom.c index 6aa264b9856a..7c6151d412bd 100644 --- a/arch/mips/fw/sni/sniprom.c +++ b/arch/mips/fw/sni/sniprom.c @@ -42,7 +42,7 @@ /* O32 stack has to be 8-byte aligned. */ static u64 o32_stk[4096]; -#define O32_STK &o32_stk[sizeof(o32_stk)] +#define O32_STK (&o32_stk[ARRAY_SIZE(o32_stk)]) #define __PROM_O32(fun, arg) fun arg __asm__(#fun); \ __asm__(#fun " = call_o32") diff --git a/arch/mips/include/asm/bmips.h b/arch/mips/include/asm/bmips.h index 6d25ad33ec78..860e4cef61be 100644 --- a/arch/mips/include/asm/bmips.h +++ b/arch/mips/include/asm/bmips.h @@ -75,11 +75,11 @@ static inline int register_bmips_smp_ops(void) #endif } -extern char bmips_reset_nmi_vec; -extern char bmips_reset_nmi_vec_end; -extern char bmips_smp_movevec; -extern char bmips_smp_int_vec; -extern char bmips_smp_int_vec_end; +extern char bmips_reset_nmi_vec[]; +extern char bmips_reset_nmi_vec_end[]; +extern char bmips_smp_movevec[]; +extern char bmips_smp_int_vec[]; +extern char bmips_smp_int_vec_end[]; extern int bmips_smp_enabled; extern int bmips_cpu_offset; diff --git a/arch/mips/kernel/smp-bmips.c b/arch/mips/kernel/smp-bmips.c index 4874712b475e..a62d24169d75 100644 --- a/arch/mips/kernel/smp-bmips.c +++ b/arch/mips/kernel/smp-bmips.c @@ -451,10 +451,10 @@ static void bmips_wr_vec(unsigned long dst, char *start, char *end) static inline void bmips_nmi_handler_setup(void) { - bmips_wr_vec(BMIPS_NMI_RESET_VEC, &bmips_reset_nmi_vec, - &bmips_reset_nmi_vec_end); - bmips_wr_vec(BMIPS_WARM_RESTART_VEC, &bmips_smp_int_vec, - &bmips_smp_int_vec_end); + bmips_wr_vec(BMIPS_NMI_RESET_VEC, bmips_reset_nmi_vec, + bmips_reset_nmi_vec_end); + bmips_wr_vec(BMIPS_WARM_RESTART_VEC, bmips_smp_int_vec, + bmips_smp_int_vec_end); } struct reset_vec_info { diff --git a/arch/mips/loongson64/common/serial.c b/arch/mips/loongson64/common/serial.c index ffefc1cb2612..98c3a7feb10f 100644 --- a/arch/mips/loongson64/common/serial.c +++ b/arch/mips/loongson64/common/serial.c @@ -110,7 
+110,7 @@ static int __init serial_init(void) } module_init(serial_init); -static void __init serial_exit(void) +static void __exit serial_exit(void) { platform_device_unregister(&uart8250_device); } diff --git a/arch/parisc/mm/ioremap.c b/arch/parisc/mm/ioremap.c index 838d0259cd27..3741f91fc186 100644 --- a/arch/parisc/mm/ioremap.c +++ b/arch/parisc/mm/ioremap.c @@ -2,7 +2,7 @@ * arch/parisc/mm/ioremap.c * * (C) Copyright 1995 1996 Linus Torvalds - * (C) Copyright 2001-2006 Helge Deller <deller@gmx.de> + * (C) Copyright 2001-2019 Helge Deller <deller@gmx.de> * (C) Copyright 2005 Kyle McMartin <kyle@parisc-linux.org> */ @@ -83,7 +83,7 @@ void __iomem * __ioremap(unsigned long phys_addr, unsigned long size, unsigned l addr = (void __iomem *) area->addr; if (ioremap_page_range((unsigned long)addr, (unsigned long)addr + size, phys_addr, pgprot)) { - vfree(addr); + vunmap(addr); return NULL; } @@ -91,9 +91,11 @@ void __iomem * __ioremap(unsigned long phys_addr, unsigned long size, unsigned l } EXPORT_SYMBOL(__ioremap); -void iounmap(const volatile void __iomem *addr) +void iounmap(const volatile void __iomem *io_addr) { - if (addr > high_memory) - return vfree((void *) (PAGE_MASK & (unsigned long __force) addr)); + unsigned long addr = (unsigned long)io_addr & PAGE_MASK; + + if (is_vmalloc_addr((void *)addr)) + vunmap((void *)addr); } EXPORT_SYMBOL(iounmap); diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile index 96efd8213c1c..d7eb035a9c96 100644 --- a/arch/powerpc/Makefile +++ b/arch/powerpc/Makefile @@ -66,29 +66,35 @@ endif UTS_MACHINE := $(OLDARCH) ifeq ($(CONFIG_CPU_LITTLE_ENDIAN),y) -override CC += -mlittle-endian -ifneq ($(cc-name),clang) -override CC += -mno-strict-align -endif -override AS += -mlittle-endian override LD += -EL -override CROSS32CC += -mlittle-endian override CROSS32AS += -mlittle-endian LDEMULATION := lppc GNUTARGET := powerpcle MULTIPLEWORD := -mno-multiple KBUILD_CFLAGS_MODULE += $(call cc-option,-mno-save-toc-indirect) else -ifeq ($(call cc-option-yn,-mbig-endian),y) -override CC += -mbig-endian -override AS += -mbig-endian -endif override LD += -EB LDEMULATION := ppc GNUTARGET := powerpc MULTIPLEWORD := -mmultiple endif +ifdef CONFIG_PPC64 +cflags-$(CONFIG_CPU_BIG_ENDIAN) += $(call cc-option,-mabi=elfv1) +cflags-$(CONFIG_CPU_BIG_ENDIAN) += $(call cc-option,-mcall-aixdesc) +aflags-$(CONFIG_CPU_BIG_ENDIAN) += $(call cc-option,-mabi=elfv1) +aflags-$(CONFIG_CPU_LITTLE_ENDIAN) += -mabi=elfv2 +endif + +cflags-$(CONFIG_CPU_LITTLE_ENDIAN) += -mlittle-endian +cflags-$(CONFIG_CPU_BIG_ENDIAN) += $(call cc-option,-mbig-endian) +ifneq ($(cc-name),clang) + cflags-$(CONFIG_CPU_LITTLE_ENDIAN) += -mno-strict-align +endif + +aflags-$(CONFIG_CPU_BIG_ENDIAN) += $(call cc-option,-mbig-endian) +aflags-$(CONFIG_CPU_LITTLE_ENDIAN) += -mlittle-endian + ifeq ($(HAS_BIARCH),y) override AS += -a$(CONFIG_WORD_SIZE) override LD += -m elf$(CONFIG_WORD_SIZE)$(LDEMULATION) @@ -121,7 +127,9 @@ ifeq ($(CONFIG_CPU_LITTLE_ENDIAN),y) CFLAGS-$(CONFIG_PPC64) += $(call cc-option,-mabi=elfv2,$(call cc-option,-mcall-aixdesc)) AFLAGS-$(CONFIG_PPC64) += $(call cc-option,-mabi=elfv2) else +CFLAGS-$(CONFIG_PPC64) += $(call cc-option,-mabi=elfv1) CFLAGS-$(CONFIG_PPC64) += $(call cc-option,-mcall-aixdesc) +AFLAGS-$(CONFIG_PPC64) += $(call cc-option,-mabi=elfv1) endif CFLAGS-$(CONFIG_PPC64) += $(call cc-option,-mcmodel=medium,$(call cc-option,-mminimal-toc)) CFLAGS-$(CONFIG_PPC64) += $(call cc-option,-mno-pointers-to-nested-functions) @@ -212,6 +220,9 @@ cpu-as-$(CONFIG_E200) += -Wa,-me200 KBUILD_AFLAGS 
+= $(cpu-as-y) KBUILD_CFLAGS += $(cpu-as-y) +KBUILD_AFLAGS += $(aflags-y) +KBUILD_CFLAGS += $(cflags-y) + head-y := arch/powerpc/kernel/head_$(CONFIG_WORD_SIZE).o head-$(CONFIG_8xx) := arch/powerpc/kernel/head_8xx.o head-$(CONFIG_40x) := arch/powerpc/kernel/head_40x.o diff --git a/arch/powerpc/boot/wrapper b/arch/powerpc/boot/wrapper index ceaa75d5a684..be4831acda22 100755 --- a/arch/powerpc/boot/wrapper +++ b/arch/powerpc/boot/wrapper @@ -161,6 +161,28 @@ case "$elfformat" in elf32-powerpc) format=elf32ppc ;; esac +ld_version() +{ + # Poached from scripts/ld-version.sh, but we don't want to call that because + # this script (wrapper) is distributed separately from the kernel source. + # Extract linker version number from stdin and turn into single number. + awk '{ + gsub(".*\\)", ""); + gsub(".*version ", ""); + gsub("-.*", ""); + split($1,a, "."); + print a[1]*100000000 + a[2]*1000000 + a[3]*10000; + exit + }' +} + +# Do not include PT_INTERP segment when linking pie. Non-pie linking +# just ignores this option. +LD_VERSION=$(${CROSS}ld --version | ld_version) +LD_NO_DL_MIN_VERSION=$(echo 2.26 | ld_version) +if [ "$LD_VERSION" -ge "$LD_NO_DL_MIN_VERSION" ] ; then + nodl="--no-dynamic-linker" +fi platformo=$object/"$platform".o lds=$object/zImage.lds @@ -412,7 +434,7 @@ if [ "$platform" != "miboot" ]; then if [ -n "$link_address" ] ; then text_start="-Ttext $link_address" fi - ${CROSS}ld -m $format -T $lds $text_start $pie -o "$ofile" \ + ${CROSS}ld -m $format -T $lds $text_start $pie $nodl -o "$ofile" \ $platformo $tmp $object/wrapper.a rm $tmp fi diff --git a/arch/powerpc/include/asm/futex.h b/arch/powerpc/include/asm/futex.h index f4c7467f7465..b73ab8a7ebc3 100644 --- a/arch/powerpc/include/asm/futex.h +++ b/arch/powerpc/include/asm/futex.h @@ -60,8 +60,7 @@ static inline int arch_futex_atomic_op_inuser(int op, int oparg, int *oval, pagefault_enable(); - if (!ret) - *oval = oldval; + *oval = oldval; return ret; } diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S index a44f1755dc4b..536718ed033f 100644 --- a/arch/powerpc/kernel/exceptions-64s.S +++ b/arch/powerpc/kernel/exceptions-64s.S @@ -1465,6 +1465,10 @@ machine_check_handle_early: RFI_TO_USER_OR_KERNEL 9: /* Deliver the machine check to host kernel in V mode. */ +BEGIN_FTR_SECTION + ld r10,ORIG_GPR3(r1) + mtspr SPRN_CFAR,r10 +END_FTR_SECTION_IFSET(CPU_FTR_CFAR) MACHINE_CHECK_HANDLER_WINDUP b machine_check_pSeries diff --git a/arch/powerpc/kernel/rtas.c b/arch/powerpc/kernel/rtas.c index 5a753fae8265..0c42e872d548 100644 --- a/arch/powerpc/kernel/rtas.c +++ b/arch/powerpc/kernel/rtas.c @@ -857,15 +857,17 @@ static int rtas_cpu_state_change_mask(enum rtas_cpu_state state, return 0; for_each_cpu(cpu, cpus) { + struct device *dev = get_cpu_device(cpu); + switch (state) { case DOWN: - cpuret = cpu_down(cpu); + cpuret = device_offline(dev); break; case UP: - cpuret = cpu_up(cpu); + cpuret = device_online(dev); break; } - if (cpuret) { + if (cpuret < 0) { pr_debug("%s: cpu_%s for cpu#%d returned %d.\n", __func__, ((state == UP) ? 
"up" : "down"), @@ -954,6 +956,8 @@ int rtas_ibm_suspend_me(u64 handle) data.token = rtas_token("ibm,suspend-me"); data.complete = &done; + lock_device_hotplug(); + /* All present CPUs must be online */ cpumask_andnot(offline_mask, cpu_present_mask, cpu_online_mask); cpuret = rtas_online_cpus_mask(offline_mask); @@ -985,6 +989,7 @@ int rtas_ibm_suspend_me(u64 handle) __func__); out: + unlock_device_hotplug(); free_cpumask_var(offline_mask); return atomic_read(&data.error); } diff --git a/arch/powerpc/platforms/powernv/opal.c b/arch/powerpc/platforms/powernv/opal.c index b40606051efe..d3787618315f 100644 --- a/arch/powerpc/platforms/powernv/opal.c +++ b/arch/powerpc/platforms/powernv/opal.c @@ -580,7 +580,10 @@ static ssize_t symbol_map_read(struct file *fp, struct kobject *kobj, bin_attr->size); } -static BIN_ATTR_RO(symbol_map, 0); +static struct bin_attribute symbol_map_attr = { + .attr = {.name = "symbol_map", .mode = 0400}, + .read = symbol_map_read +}; static void opal_export_symmap(void) { @@ -597,10 +600,10 @@ static void opal_export_symmap(void) return; /* Setup attributes */ - bin_attr_symbol_map.private = __va(be64_to_cpu(syms[0])); - bin_attr_symbol_map.size = be64_to_cpu(syms[1]); + symbol_map_attr.private = __va(be64_to_cpu(syms[0])); + symbol_map_attr.size = be64_to_cpu(syms[1]); - rc = sysfs_create_bin_file(opal_kobj, &bin_attr_symbol_map); + rc = sysfs_create_bin_file(opal_kobj, &symbol_map_attr); if (rc) pr_warn("Error %d creating OPAL symbols file\n", rc); } diff --git a/arch/powerpc/platforms/pseries/mobility.c b/arch/powerpc/platforms/pseries/mobility.c index c773396d0969..8d30a425a88a 100644 --- a/arch/powerpc/platforms/pseries/mobility.c +++ b/arch/powerpc/platforms/pseries/mobility.c @@ -11,6 +11,7 @@ #include <linux/kernel.h> #include <linux/kobject.h> +#include <linux/sched.h> #include <linux/smp.h> #include <linux/stat.h> #include <linux/completion.h> @@ -206,7 +207,11 @@ static int update_dt_node(__be32 phandle, s32 scope) prop_data += vd; } + + cond_resched(); } + + cond_resched(); } while (rtas_rc == 1); of_node_put(dn); @@ -282,8 +287,12 @@ int pseries_devicetree_update(s32 scope) add_dt_node(phandle, drc_index); break; } + + cond_resched(); } } + + cond_resched(); } while (rc == 1); kfree(rtas_buf); diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c index 9cc976ff7fec..88fcf6a95fa6 100644 --- a/arch/powerpc/platforms/pseries/setup.c +++ b/arch/powerpc/platforms/pseries/setup.c @@ -369,6 +369,9 @@ static void pseries_lpar_idle(void) * low power mode by cedeing processor to hypervisor */ + if (!prep_irq_for_idle()) + return; + /* Indicate to hypervisor that we are idle. 
*/ get_lppaca()->idle = 1; diff --git a/arch/s390/hypfs/inode.c b/arch/s390/hypfs/inode.c index c670279b33f0..1de3fdfc3537 100644 --- a/arch/s390/hypfs/inode.c +++ b/arch/s390/hypfs/inode.c @@ -267,7 +267,7 @@ static int hypfs_show_options(struct seq_file *s, struct dentry *root) static int hypfs_fill_super(struct super_block *sb, void *data, int silent) { struct inode *root_inode; - struct dentry *root_dentry; + struct dentry *root_dentry, *update_file; int rc = 0; struct hypfs_sb_info *sbi; @@ -298,9 +298,10 @@ static int hypfs_fill_super(struct super_block *sb, void *data, int silent) rc = hypfs_diag_create_files(root_dentry); if (rc) return rc; - sbi->update_file = hypfs_create_update_file(root_dentry); - if (IS_ERR(sbi->update_file)) - return PTR_ERR(sbi->update_file); + update_file = hypfs_create_update_file(root_dentry); + if (IS_ERR(update_file)) + return PTR_ERR(update_file); + sbi->update_file = update_file; hypfs_update_update(sb); pr_info("Hypervisor filesystem mounted\n"); return 0; diff --git a/arch/s390/kernel/topology.c b/arch/s390/kernel/topology.c index 40b8102fdadb..ff95ddd031d1 100644 --- a/arch/s390/kernel/topology.c +++ b/arch/s390/kernel/topology.c @@ -291,7 +291,8 @@ int arch_update_cpu_topology(void) topology_update_polarization_simple(); for_each_online_cpu(cpu) { dev = get_cpu_device(cpu); - kobject_uevent(&dev->kobj, KOBJ_CHANGE); + if (dev) + kobject_uevent(&dev->kobj, KOBJ_CHANGE); } return rc; } diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c index 14d2ca9c779e..3e46f62d32ad 100644 --- a/arch/s390/kvm/kvm-s390.c +++ b/arch/s390/kvm/kvm-s390.c @@ -2471,7 +2471,7 @@ static long kvm_s390_guest_mem_op(struct kvm_vcpu *vcpu, const u64 supported_flags = KVM_S390_MEMOP_F_INJECT_EXCEPTION | KVM_S390_MEMOP_F_CHECK_ONLY; - if (mop->flags & ~supported_flags) + if (mop->flags & ~supported_flags || mop->ar >= NUM_ACRS || !mop->size) return -EINVAL; if (mop->size > MEM_OP_MAX_SIZE) diff --git a/arch/s390/mm/cmm.c b/arch/s390/mm/cmm.c index 79ddd580d605..ca6fab51eea1 100644 --- a/arch/s390/mm/cmm.c +++ b/arch/s390/mm/cmm.c @@ -306,16 +306,16 @@ static int cmm_timeout_handler(struct ctl_table *ctl, int write, } if (write) { - len = *lenp; - if (copy_from_user(buf, buffer, - len > sizeof(buf) ? sizeof(buf) : len)) + len = min(*lenp, sizeof(buf)); + if (copy_from_user(buf, buffer, len)) return -EFAULT; - buf[sizeof(buf) - 1] = '\0'; + buf[len - 1] = '\0'; cmm_skip_blanks(buf, &p); nr = simple_strtoul(p, &p, 0); cmm_skip_blanks(p, &p); seconds = simple_strtoul(p, &p, 0); cmm_set_timeout(nr, seconds); + *ppos += *lenp; } else { len = sprintf(buf, "%ld %ld\n", cmm_timeout_pages, cmm_timeout_seconds); @@ -323,9 +323,9 @@ static int cmm_timeout_handler(struct ctl_table *ctl, int write, len = *lenp; if (copy_to_user(buffer, buf, len)) return -EFAULT; + *lenp = len; + *ppos += len; } - *lenp = len; - *ppos += len; return 0; } diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 75b5d8f59940..2dccd604c6eb 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -1718,6 +1718,51 @@ config X86_INTEL_MPX If unsure, say N. +choice + prompt "TSX enable mode" + depends on CPU_SUP_INTEL + default X86_INTEL_TSX_MODE_OFF + help + Intel's TSX (Transactional Synchronization Extensions) feature + allows to optimize locking protocols through lock elision which + can lead to a noticeable performance boost. + + On the other hand it has been shown that TSX can be exploited + to form side channel attacks (e.g. 
TAA) and chances are there + will be more of those attacks discovered in the future. + + Therefore TSX is not enabled by default (aka tsx=off). An admin + might override this decision by tsx=on the command line parameter. + Even with TSX enabled, the kernel will attempt to enable the best + possible TAA mitigation setting depending on the microcode available + for the particular machine. + + This option allows to set the default tsx mode between tsx=on, =off + and =auto. See Documentation/kernel-parameters.txt for more + details. + + Say off if not sure, auto if TSX is in use but it should be used on safe + platforms or on if TSX is in use and the security aspect of tsx is not + relevant. + +config X86_INTEL_TSX_MODE_OFF + bool "off" + help + TSX is disabled if possible - equals to tsx=off command line parameter. + +config X86_INTEL_TSX_MODE_ON + bool "on" + help + TSX is always enabled on TSX capable HW - equals the tsx=on command + line parameter. + +config X86_INTEL_TSX_MODE_AUTO + bool "auto" + help + TSX is enabled on TSX capable HW that is believed to be safe against + side channel attacks- equals the tsx=auto command line parameter. +endchoice + config EFI bool "EFI runtime service support" depends on ACPI diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h index 113cb01ebaac..94491e4d21a7 100644 --- a/arch/x86/include/asm/cpufeatures.h +++ b/arch/x86/include/asm/cpufeatures.h @@ -340,5 +340,7 @@ #define X86_BUG_MDS X86_BUG(19) /* CPU is affected by Microarchitectural data sampling */ #define X86_BUG_MSBDS_ONLY X86_BUG(20) /* CPU is only affected by the MSDBS variant of BUG_MDS */ #define X86_BUG_SWAPGS X86_BUG(21) /* CPU is affected by speculation through SWAPGS */ +#define X86_BUG_TAA X86_BUG(22) /* CPU is affected by TSX Async Abort(TAA) */ +#define X86_BUG_ITLB_MULTIHIT X86_BUG(23) /* CPU may incur MCE during certain page attribute changes */ #endif /* _ASM_X86_CPUFEATURES_H */ diff --git a/arch/x86/include/asm/intel-family.h b/arch/x86/include/asm/intel-family.h index 6801f958e254..aaa0bd820cf4 100644 --- a/arch/x86/include/asm/intel-family.h +++ b/arch/x86/include/asm/intel-family.h @@ -5,7 +5,7 @@ * "Big Core" Processors (Branded as Core, Xeon, etc...) * * The "_X" parts are generally the EP and EX Xeons, or the - * "Extreme" ones, like Broadwell-E. + * "Extreme" ones, like Broadwell-E, or Atom microserver. * * Things ending in "2" are usually because we have no better * name for them. There's no processor called "WESTMERE2". 
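/*
 * (Editor's aside, hedged — not part of the patch.) The X86_BUG_TAA and
 * X86_BUG_ITLB_MULTIHIT bits defined above are what the new sysfs
 * vulnerability files added later in this series report on. A minimal
 * user-space sketch for inspecting them; only the sysfs paths come from
 * the patch, everything else is illustrative:
 */
#include <stdio.h>

static void show_vuln(const char *name)
{
	char path[160], line[256];
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/devices/system/cpu/vulnerabilities/%s", name);
	f = fopen(path, "r");
	if (f && fgets(line, sizeof(line), f))
		printf("%-16s %s", name, line);
	if (f)
		fclose(f);
}

/* Typical use: show_vuln("tsx_async_abort"); show_vuln("itlb_multihit"); */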
@@ -67,6 +67,7 @@ #define INTEL_FAM6_ATOM_GOLDMONT 0x5C /* Apollo Lake */ #define INTEL_FAM6_ATOM_GOLDMONT_X 0x5F /* Denverton */ #define INTEL_FAM6_ATOM_GOLDMONT_PLUS 0x7A /* Gemini Lake */ +#define INTEL_FAM6_ATOM_TREMONT_X 0x86 /* Jacobsville */ /* Xeon Phi */ diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 39f202462029..dac449879113 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -408,6 +408,7 @@ struct kvm_vcpu_arch { u64 smbase; bool tpr_access_reporting; u64 ia32_xss; + u64 arch_capabilities; /* * Paging state of the vcpu @@ -1226,6 +1227,7 @@ void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu); void kvm_arch_mmu_notifier_invalidate_page(struct kvm *kvm, unsigned long address); +u64 kvm_get_arch_capabilities(void); void kvm_define_shared_msr(unsigned index, u32 msr); int kvm_set_shared_msr(unsigned index, u64 val, u64 mask); diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h index 30183770132a..854a20efa771 100644 --- a/arch/x86/include/asm/msr-index.h +++ b/arch/x86/include/asm/msr-index.h @@ -71,10 +71,26 @@ * Microarchitectural Data * Sampling (MDS) vulnerabilities. */ +#define ARCH_CAP_PSCHANGE_MC_NO BIT(6) /* + * The processor is not susceptible to a + * machine check error due to modifying the + * code page size along with either the + * physical address or cache type + * without TLB invalidation. + */ +#define ARCH_CAP_TSX_CTRL_MSR BIT(7) /* MSR for TSX control is available. */ +#define ARCH_CAP_TAA_NO BIT(8) /* + * Not susceptible to + * TSX Async Abort (TAA) vulnerabilities. + */ #define MSR_IA32_BBL_CR_CTL 0x00000119 #define MSR_IA32_BBL_CR_CTL3 0x0000011e +#define MSR_IA32_TSX_CTRL 0x00000122 +#define TSX_CTRL_RTM_DISABLE BIT(0) /* Disable RTM feature */ +#define TSX_CTRL_CPUID_CLEAR BIT(1) /* Disable TSX enumeration */ + #define MSR_IA32_SYSENTER_CS 0x00000174 #define MSR_IA32_SYSENTER_ESP 0x00000175 #define MSR_IA32_SYSENTER_EIP 0x00000176 diff --git a/arch/x86/include/asm/mwait.h b/arch/x86/include/asm/mwait.h index b98dbdaee8ac..1ab7f37150d2 100644 --- a/arch/x86/include/asm/mwait.h +++ b/arch/x86/include/asm/mwait.h @@ -19,7 +19,7 @@ #define MWAIT_ECX_INTERRUPT_BREAK 0x1 #define MWAITX_ECX_TIMER_ENABLE BIT(1) #define MWAITX_MAX_LOOPS ((u32)-1) -#define MWAITX_DISABLE_CSTATES 0xf +#define MWAITX_DISABLE_CSTATES 0xf0 static inline void __monitor(const void *eax, unsigned long ecx, unsigned long edx) diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h index c3138ac80db2..783f0711895b 100644 --- a/arch/x86/include/asm/nospec-branch.h +++ b/arch/x86/include/asm/nospec-branch.h @@ -268,7 +268,7 @@ DECLARE_STATIC_KEY_FALSE(mds_idle_clear); #include <asm/segment.h> /** - * mds_clear_cpu_buffers - Mitigation for MDS vulnerability + * mds_clear_cpu_buffers - Mitigation for MDS and TAA vulnerability * * This uses the otherwise unused and obsolete VERW instruction in * combination with microcode which triggers a CPU buffer flush when the @@ -291,7 +291,7 @@ static inline void mds_clear_cpu_buffers(void) } /** - * mds_user_clear_cpu_buffers - Mitigation for MDS vulnerability + * mds_user_clear_cpu_buffers - Mitigation for MDS and TAA vulnerability * * Clear CPU buffers if the corresponding static key is enabled */ diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h index dab73faef9b0..cac54e61c299 100644 --- a/arch/x86/include/asm/processor.h +++ b/arch/x86/include/asm/processor.h @@ -852,4 +852,11 @@ 
enum mds_mitigations { MDS_MITIGATION_VMWERV, }; +enum taa_mitigations { + TAA_MITIGATION_OFF, + TAA_MITIGATION_UCODE_NEEDED, + TAA_MITIGATION_VERW, + TAA_MITIGATION_TSX_DISABLED, +}; + #endif /* _ASM_X86_PROCESSOR_H */ diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c index 834d1b5b4355..be3d4dcf3a10 100644 --- a/arch/x86/kernel/apic/apic.c +++ b/arch/x86/kernel/apic/apic.c @@ -1265,6 +1265,14 @@ void setup_local_APIC(void) return; } + /* + * If this comes from kexec/kcrash the APIC might be enabled in + * SPIV. Soft disable it before doing further initialization. + */ + value = apic_read(APIC_SPIV); + value &= ~APIC_SPIV_APIC_ENABLED; + apic_write(APIC_SPIV, value); + #ifdef CONFIG_X86_32 /* Pound the ESR really hard over the head with a big hammer - mbligh */ if (lapic_is_integrated() && apic->disable_esr) { diff --git a/arch/x86/kernel/cpu/Makefile b/arch/x86/kernel/cpu/Makefile index 1e5184092ee6..ea8f887da6cc 100644 --- a/arch/x86/kernel/cpu/Makefile +++ b/arch/x86/kernel/cpu/Makefile @@ -25,7 +25,7 @@ obj-y += bugs.o obj-$(CONFIG_PROC_FS) += proc.o obj-$(CONFIG_X86_FEATURE_NAMES) += capflags.o powerflags.o -obj-$(CONFIG_CPU_SUP_INTEL) += intel.o +obj-$(CONFIG_CPU_SUP_INTEL) += intel.o tsx.o obj-$(CONFIG_CPU_SUP_AMD) += amd.o obj-$(CONFIG_CPU_SUP_CYRIX_32) += cyrix.o obj-$(CONFIG_CPU_SUP_CENTAUR) += centaur.o diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c index 917c63aa1599..7fd0a13ae0ba 100644 --- a/arch/x86/kernel/cpu/bugs.c +++ b/arch/x86/kernel/cpu/bugs.c @@ -30,11 +30,14 @@ #include <asm/intel-family.h> #include <asm/e820.h> +#include "cpu.h" + static void __init spectre_v1_select_mitigation(void); static void __init spectre_v2_select_mitigation(void); static void __init ssb_select_mitigation(void); static void __init l1tf_select_mitigation(void); static void __init mds_select_mitigation(void); +static void __init taa_select_mitigation(void); /* The base value of the SPEC_CTRL MSR that always has to be preserved. */ u64 x86_spec_ctrl_base; @@ -94,6 +97,7 @@ void __init check_bugs(void) ssb_select_mitigation(); l1tf_select_mitigation(); mds_select_mitigation(); + taa_select_mitigation(); arch_smt_update(); @@ -247,6 +251,93 @@ static int __init mds_cmdline(char *str) early_param("mds", mds_cmdline); #undef pr_fmt +#define pr_fmt(fmt) "TAA: " fmt + +/* Default mitigation for TAA-affected CPUs */ +static enum taa_mitigations taa_mitigation = TAA_MITIGATION_VERW; + +static const char * const taa_strings[] = { + [TAA_MITIGATION_OFF] = "Vulnerable", + [TAA_MITIGATION_UCODE_NEEDED] = "Vulnerable: Clear CPU buffers attempted, no microcode", + [TAA_MITIGATION_VERW] = "Mitigation: Clear CPU buffers", + [TAA_MITIGATION_TSX_DISABLED] = "Mitigation: TSX disabled", +}; + +static void __init taa_select_mitigation(void) +{ + u64 ia32_cap; + + if (!boot_cpu_has_bug(X86_BUG_TAA)) { + taa_mitigation = TAA_MITIGATION_OFF; + return; + } + + /* TSX previously disabled by tsx=off */ + if (!boot_cpu_has(X86_FEATURE_RTM)) { + taa_mitigation = TAA_MITIGATION_TSX_DISABLED; + goto out; + } + + if (cpu_mitigations_off()) { + taa_mitigation = TAA_MITIGATION_OFF; + return; + } + + /* TAA mitigation is turned off on the cmdline (tsx_async_abort=off) */ + if (taa_mitigation == TAA_MITIGATION_OFF) + goto out; + + if (boot_cpu_has(X86_FEATURE_MD_CLEAR)) + taa_mitigation = TAA_MITIGATION_VERW; + else + taa_mitigation = TAA_MITIGATION_UCODE_NEEDED; + + /* + * VERW doesn't clear the CPU buffers when MD_CLEAR=1 and MDS_NO=1. 
+ * A microcode update fixes this behavior to clear CPU buffers. It also + * adds support for MSR_IA32_TSX_CTRL which is enumerated by the + * ARCH_CAP_TSX_CTRL_MSR bit. + * + * On MDS_NO=1 CPUs if ARCH_CAP_TSX_CTRL_MSR is not set, microcode + * update is required. + */ + ia32_cap = x86_read_arch_cap_msr(); + if ( (ia32_cap & ARCH_CAP_MDS_NO) && + !(ia32_cap & ARCH_CAP_TSX_CTRL_MSR)) + taa_mitigation = TAA_MITIGATION_UCODE_NEEDED; + + /* + * TSX is enabled, select alternate mitigation for TAA which is + * the same as MDS. Enable MDS static branch to clear CPU buffers. + * + * For guests that can't determine whether the correct microcode is + * present on host, enable the mitigation for UCODE_NEEDED as well. + */ + static_branch_enable(&mds_user_clear); + +out: + pr_info("%s\n", taa_strings[taa_mitigation]); +} + +static int __init tsx_async_abort_parse_cmdline(char *str) +{ + if (!boot_cpu_has_bug(X86_BUG_TAA)) + return 0; + + if (!str) + return -EINVAL; + + if (!strcmp(str, "off")) { + taa_mitigation = TAA_MITIGATION_OFF; + } else if (!strcmp(str, "full")) { + taa_mitigation = TAA_MITIGATION_VERW; + } + + return 0; +} +early_param("tsx_async_abort", tsx_async_abort_parse_cmdline); + +#undef pr_fmt #define pr_fmt(fmt) "Spectre V1 : " fmt enum spectre_v1_mitigation { @@ -758,13 +849,10 @@ static void update_mds_branch_idle(void) } #define MDS_MSG_SMT "MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.\n" +#define TAA_MSG_SMT "TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.\n" void arch_smt_update(void) { - /* Enhanced IBRS implies STIBP. No update required. */ - if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED) - return; - mutex_lock(&spec_ctrl_mutex); switch (spectre_v2_user) { @@ -790,6 +878,17 @@ void arch_smt_update(void) break; } + switch (taa_mitigation) { + case TAA_MITIGATION_VERW: + case TAA_MITIGATION_UCODE_NEEDED: + if (sched_smt_active()) + pr_warn_once(TAA_MSG_SMT); + break; + case TAA_MITIGATION_TSX_DISABLED: + case TAA_MITIGATION_OFF: + break; + } + mutex_unlock(&spec_ctrl_mutex); } @@ -1178,6 +1277,11 @@ static void __init l1tf_select_mitigation(void) #ifdef CONFIG_SYSFS +static ssize_t itlb_multihit_show_state(char *buf) +{ + return sprintf(buf, "Processor vulnerable\n"); +} + static ssize_t mds_show_state(char *buf) { #ifdef CONFIG_HYPERVISOR_GUEST @@ -1197,6 +1301,21 @@ static ssize_t mds_show_state(char *buf) sched_smt_active() ? "vulnerable" : "disabled"); } +static ssize_t tsx_async_abort_show_state(char *buf) +{ + if ((taa_mitigation == TAA_MITIGATION_TSX_DISABLED) || + (taa_mitigation == TAA_MITIGATION_OFF)) + return sprintf(buf, "%s\n", taa_strings[taa_mitigation]); + + if (boot_cpu_has(X86_FEATURE_HYPERVISOR)) { + return sprintf(buf, "%s; SMT Host state unknown\n", + taa_strings[taa_mitigation]); + } + + return sprintf(buf, "%s; SMT %s\n", taa_strings[taa_mitigation], + sched_smt_active() ? 
"vulnerable" : "disabled"); +} + static char *stibp_state(void) { if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED) @@ -1262,6 +1381,12 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr case X86_BUG_MDS: return mds_show_state(buf); + case X86_BUG_TAA: + return tsx_async_abort_show_state(buf); + + case X86_BUG_ITLB_MULTIHIT: + return itlb_multihit_show_state(buf); + default: break; } @@ -1298,4 +1423,14 @@ ssize_t cpu_show_mds(struct device *dev, struct device_attribute *attr, char *bu { return cpu_show_common(dev, attr, buf, X86_BUG_MDS); } + +ssize_t cpu_show_tsx_async_abort(struct device *dev, struct device_attribute *attr, char *buf) +{ + return cpu_show_common(dev, attr, buf, X86_BUG_TAA); +} + +ssize_t cpu_show_itlb_multihit(struct device *dev, struct device_attribute *attr, char *buf) +{ + return cpu_show_common(dev, attr, buf, X86_BUG_ITLB_MULTIHIT); +} #endif diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c index 3965235973c8..e8fa12c7ad5b 100644 --- a/arch/x86/kernel/cpu/common.c +++ b/arch/x86/kernel/cpu/common.c @@ -847,13 +847,14 @@ static void identify_cpu_without_cpuid(struct cpuinfo_x86 *c) #endif } -#define NO_SPECULATION BIT(0) -#define NO_MELTDOWN BIT(1) -#define NO_SSB BIT(2) -#define NO_L1TF BIT(3) -#define NO_MDS BIT(4) -#define MSBDS_ONLY BIT(5) -#define NO_SWAPGS BIT(6) +#define NO_SPECULATION BIT(0) +#define NO_MELTDOWN BIT(1) +#define NO_SSB BIT(2) +#define NO_L1TF BIT(3) +#define NO_MDS BIT(4) +#define MSBDS_ONLY BIT(5) +#define NO_SWAPGS BIT(6) +#define NO_ITLB_MULTIHIT BIT(7) #define VULNWL(_vendor, _family, _model, _whitelist) \ { X86_VENDOR_##_vendor, _family, _model, X86_FEATURE_ANY, _whitelist } @@ -871,26 +872,26 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = { VULNWL(NSC, 5, X86_MODEL_ANY, NO_SPECULATION), /* Intel Family 6 */ - VULNWL_INTEL(ATOM_SALTWELL, NO_SPECULATION), - VULNWL_INTEL(ATOM_SALTWELL_TABLET, NO_SPECULATION), - VULNWL_INTEL(ATOM_SALTWELL_MID, NO_SPECULATION), - VULNWL_INTEL(ATOM_BONNELL, NO_SPECULATION), - VULNWL_INTEL(ATOM_BONNELL_MID, NO_SPECULATION), - - VULNWL_INTEL(ATOM_SILVERMONT, NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS), - VULNWL_INTEL(ATOM_SILVERMONT_X, NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS), - VULNWL_INTEL(ATOM_SILVERMONT_MID, NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS), - VULNWL_INTEL(ATOM_AIRMONT, NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS), - VULNWL_INTEL(XEON_PHI_KNL, NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS), - VULNWL_INTEL(XEON_PHI_KNM, NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS), + VULNWL_INTEL(ATOM_SALTWELL, NO_SPECULATION | NO_ITLB_MULTIHIT), + VULNWL_INTEL(ATOM_SALTWELL_TABLET, NO_SPECULATION | NO_ITLB_MULTIHIT), + VULNWL_INTEL(ATOM_SALTWELL_MID, NO_SPECULATION | NO_ITLB_MULTIHIT), + VULNWL_INTEL(ATOM_BONNELL, NO_SPECULATION | NO_ITLB_MULTIHIT), + VULNWL_INTEL(ATOM_BONNELL_MID, NO_SPECULATION | NO_ITLB_MULTIHIT), + + VULNWL_INTEL(ATOM_SILVERMONT, NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS | NO_ITLB_MULTIHIT), + VULNWL_INTEL(ATOM_SILVERMONT_X, NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS | NO_ITLB_MULTIHIT), + VULNWL_INTEL(ATOM_SILVERMONT_MID, NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS | NO_ITLB_MULTIHIT), + VULNWL_INTEL(ATOM_AIRMONT, NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS | NO_ITLB_MULTIHIT), + VULNWL_INTEL(XEON_PHI_KNL, NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS | NO_ITLB_MULTIHIT), + VULNWL_INTEL(XEON_PHI_KNM, NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS | NO_ITLB_MULTIHIT), VULNWL_INTEL(CORE_YONAH, NO_SSB), - VULNWL_INTEL(ATOM_AIRMONT_MID, 
NO_L1TF | MSBDS_ONLY | NO_SWAPGS), + VULNWL_INTEL(ATOM_AIRMONT_MID, NO_L1TF | MSBDS_ONLY | NO_SWAPGS | NO_ITLB_MULTIHIT), - VULNWL_INTEL(ATOM_GOLDMONT, NO_MDS | NO_L1TF | NO_SWAPGS), - VULNWL_INTEL(ATOM_GOLDMONT_X, NO_MDS | NO_L1TF | NO_SWAPGS), - VULNWL_INTEL(ATOM_GOLDMONT_PLUS, NO_MDS | NO_L1TF | NO_SWAPGS), + VULNWL_INTEL(ATOM_GOLDMONT, NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT), + VULNWL_INTEL(ATOM_GOLDMONT_X, NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT), + VULNWL_INTEL(ATOM_GOLDMONT_PLUS, NO_MDS | NO_L1TF | NO_SWAPGS | NO_ITLB_MULTIHIT), /* * Technically, swapgs isn't serializing on AMD (despite it previously @@ -901,13 +902,13 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = { */ /* AMD Family 0xf - 0x12 */ - VULNWL_AMD(0x0f, NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS), - VULNWL_AMD(0x10, NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS), - VULNWL_AMD(0x11, NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS), - VULNWL_AMD(0x12, NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS), + VULNWL_AMD(0x0f, NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT), + VULNWL_AMD(0x10, NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT), + VULNWL_AMD(0x11, NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT), + VULNWL_AMD(0x12, NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT), /* FAMILY_ANY must be last, otherwise 0x0f - 0x12 matches won't work */ - VULNWL_AMD(X86_FAMILY_ANY, NO_MELTDOWN | NO_L1TF | NO_MDS | NO_SWAPGS), + VULNWL_AMD(X86_FAMILY_ANY, NO_MELTDOWN | NO_L1TF | NO_MDS | NO_SWAPGS | NO_ITLB_MULTIHIT), {} }; @@ -918,19 +919,30 @@ static bool __init cpu_matches(unsigned long which) return m && !!(m->driver_data & which); } -static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c) +u64 x86_read_arch_cap_msr(void) { u64 ia32_cap = 0; + if (boot_cpu_has(X86_FEATURE_ARCH_CAPABILITIES)) + rdmsrl(MSR_IA32_ARCH_CAPABILITIES, ia32_cap); + + return ia32_cap; +} + +static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c) +{ + u64 ia32_cap = x86_read_arch_cap_msr(); + + /* Set ITLB_MULTIHIT bug if cpu is not in the whitelist and not mitigated */ + if (!cpu_matches(NO_ITLB_MULTIHIT) && !(ia32_cap & ARCH_CAP_PSCHANGE_MC_NO)) + setup_force_cpu_bug(X86_BUG_ITLB_MULTIHIT); + if (cpu_matches(NO_SPECULATION)) return; setup_force_cpu_bug(X86_BUG_SPECTRE_V1); setup_force_cpu_bug(X86_BUG_SPECTRE_V2); - if (cpu_has(c, X86_FEATURE_ARCH_CAPABILITIES)) - rdmsrl(MSR_IA32_ARCH_CAPABILITIES, ia32_cap); - if (!cpu_matches(NO_SSB) && !(ia32_cap & ARCH_CAP_SSB_NO) && !cpu_has(c, X86_FEATURE_AMD_SSB_NO)) setup_force_cpu_bug(X86_BUG_SPEC_STORE_BYPASS); @@ -947,6 +959,21 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c) if (!cpu_matches(NO_SWAPGS)) setup_force_cpu_bug(X86_BUG_SWAPGS); + /* + * When the CPU is not mitigated for TAA (TAA_NO=0) set TAA bug when: + * - TSX is supported or + * - TSX_CTRL is present + * + * TSX_CTRL check is needed for cases when TSX could be disabled before + * the kernel boot e.g. kexec. + * TSX_CTRL check alone is not sufficient for cases when the microcode + * update is not present or running as guest that don't get TSX_CTRL. 
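 *
 * (Editor's aside, hedged.) Restated, the condition that follows is
 * roughly:
 *
 *	set X86_BUG_TAA  if  !TAA_NO && (CPUID.RTM || TSX_CTRL enumerated)
 *
 * so a CPU is still flagged when RTM has been hidden from CPUID but the
 * TSX_CTRL MSR shows that TSX exists and was merely disabled earlier
 * (e.g. across kexec), while CPUs that advertise TAA_NO are never
 * flagged.
 *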
+ */ + if (!(ia32_cap & ARCH_CAP_TAA_NO) && + (cpu_has(c, X86_FEATURE_RTM) || + (ia32_cap & ARCH_CAP_TSX_CTRL_MSR))) + setup_force_cpu_bug(X86_BUG_TAA); + if (cpu_matches(NO_MELTDOWN)) return; @@ -1287,6 +1314,8 @@ void __init identify_boot_cpu(void) enable_sep_cpu(); #endif cpu_detect_tlb(&boot_cpu_data); + + tsx_init(); } void identify_secondary_cpu(struct cpuinfo_x86 *c) diff --git a/arch/x86/kernel/cpu/cpu.h b/arch/x86/kernel/cpu/cpu.h index 3b19d82f7932..c42cc1acd668 100644 --- a/arch/x86/kernel/cpu/cpu.h +++ b/arch/x86/kernel/cpu/cpu.h @@ -44,9 +44,27 @@ struct _tlb_table { extern const struct cpu_dev *const __x86_cpu_dev_start[], *const __x86_cpu_dev_end[]; +#ifdef CONFIG_CPU_SUP_INTEL +enum tsx_ctrl_states { + TSX_CTRL_ENABLE, + TSX_CTRL_DISABLE, + TSX_CTRL_NOT_SUPPORTED, +}; + +extern enum tsx_ctrl_states tsx_ctrl_state; + +extern void __init tsx_init(void); +extern void tsx_enable(void); +extern void tsx_disable(void); +#else +static inline void tsx_init(void) { } +#endif /* CONFIG_CPU_SUP_INTEL */ + extern void get_cpu_cap(struct cpuinfo_x86 *c); extern void cpu_detect_cache_sizes(struct cpuinfo_x86 *c); extern void x86_spec_ctrl_setup_ap(void); +extern u64 x86_read_arch_cap_msr(void); + #endif /* ARCH_X86_CPU_H */ diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c index b0e0c7a12e61..7beef3da5904 100644 --- a/arch/x86/kernel/cpu/intel.c +++ b/arch/x86/kernel/cpu/intel.c @@ -582,6 +582,11 @@ static void init_intel(struct cpuinfo_x86 *c) detect_vmx_virtcap(c); init_intel_energy_perf(c); + + if (tsx_ctrl_state == TSX_CTRL_ENABLE) + tsx_enable(); + if (tsx_ctrl_state == TSX_CTRL_DISABLE) + tsx_disable(); } #ifdef CONFIG_X86_32 diff --git a/arch/x86/kernel/cpu/perf_event_amd_ibs.c b/arch/x86/kernel/cpu/perf_event_amd_ibs.c index 989d3c215d2b..66ca6ec09bd4 100644 --- a/arch/x86/kernel/cpu/perf_event_amd_ibs.c +++ b/arch/x86/kernel/cpu/perf_event_amd_ibs.c @@ -555,7 +555,7 @@ static int perf_ibs_handle_irq(struct perf_ibs *perf_ibs, struct pt_regs *iregs) if (event->attr.sample_type & PERF_SAMPLE_RAW) offset_max = perf_ibs->offset_max; else if (check_rip) - offset_max = 2; + offset_max = 3; else offset_max = 1; do { diff --git a/arch/x86/kernel/cpu/tsx.c b/arch/x86/kernel/cpu/tsx.c new file mode 100644 index 000000000000..c2a9dd816c5c --- /dev/null +++ b/arch/x86/kernel/cpu/tsx.c @@ -0,0 +1,140 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Intel Transactional Synchronization Extensions (TSX) control. + * + * Copyright (C) 2019 Intel Corporation + * + * Author: + * Pawan Gupta <pawan.kumar.gupta@linux.intel.com> + */ + +#include <linux/cpufeature.h> + +#include <asm/cmdline.h> + +#include "cpu.h" + +enum tsx_ctrl_states tsx_ctrl_state = TSX_CTRL_NOT_SUPPORTED; + +void tsx_disable(void) +{ + u64 tsx; + + rdmsrl(MSR_IA32_TSX_CTRL, tsx); + + /* Force all transactions to immediately abort */ + tsx |= TSX_CTRL_RTM_DISABLE; + + /* + * Ensure TSX support is not enumerated in CPUID. + * This is visible to userspace and will ensure they + * do not waste resources trying TSX transactions that + * will always abort. + */ + tsx |= TSX_CTRL_CPUID_CLEAR; + + wrmsrl(MSR_IA32_TSX_CTRL, tsx); +} + +void tsx_enable(void) +{ + u64 tsx; + + rdmsrl(MSR_IA32_TSX_CTRL, tsx); + + /* Enable the RTM feature in the cpu */ + tsx &= ~TSX_CTRL_RTM_DISABLE; + + /* + * Ensure TSX support is enumerated in CPUID. + * This is visible to userspace and will ensure they + * can enumerate and use the TSX feature. 
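 *
 * (Editor's aside, hedged.) User space typically probes this through
 * CPUID leaf 7, sub-leaf 0, where EBX bit 11 is RTM and bit 4 is HLE.
 * A rough sketch using GCC's cpuid helper (illustrative only):
 *
 *	#include <cpuid.h>
 *	unsigned int a, b, c, d;
 *	__cpuid_count(7, 0, a, b, c, d);
 *	int has_rtm = (b >> 11) & 1;
 *
 * Once TSX_CTRL_CPUID_CLEAR is set by tsx_disable(), has_rtm reads as 0
 * even on RTM-capable parts, which is exactly the point.
 *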
+ */ + tsx &= ~TSX_CTRL_CPUID_CLEAR; + + wrmsrl(MSR_IA32_TSX_CTRL, tsx); +} + +static bool __init tsx_ctrl_is_supported(void) +{ + u64 ia32_cap = x86_read_arch_cap_msr(); + + /* + * TSX is controlled via MSR_IA32_TSX_CTRL. However, support for this + * MSR is enumerated by ARCH_CAP_TSX_MSR bit in MSR_IA32_ARCH_CAPABILITIES. + * + * TSX control (aka MSR_IA32_TSX_CTRL) is only available after a + * microcode update on CPUs that have their MSR_IA32_ARCH_CAPABILITIES + * bit MDS_NO=1. CPUs with MDS_NO=0 are not planned to get + * MSR_IA32_TSX_CTRL support even after a microcode update. Thus, + * tsx= cmdline requests will do nothing on CPUs without + * MSR_IA32_TSX_CTRL support. + */ + return !!(ia32_cap & ARCH_CAP_TSX_CTRL_MSR); +} + +static enum tsx_ctrl_states x86_get_tsx_auto_mode(void) +{ + if (boot_cpu_has_bug(X86_BUG_TAA)) + return TSX_CTRL_DISABLE; + + return TSX_CTRL_ENABLE; +} + +void __init tsx_init(void) +{ + char arg[5] = {}; + int ret; + + if (!tsx_ctrl_is_supported()) + return; + + ret = cmdline_find_option(boot_command_line, "tsx", arg, sizeof(arg)); + if (ret >= 0) { + if (!strcmp(arg, "on")) { + tsx_ctrl_state = TSX_CTRL_ENABLE; + } else if (!strcmp(arg, "off")) { + tsx_ctrl_state = TSX_CTRL_DISABLE; + } else if (!strcmp(arg, "auto")) { + tsx_ctrl_state = x86_get_tsx_auto_mode(); + } else { + tsx_ctrl_state = TSX_CTRL_DISABLE; + pr_err("tsx: invalid option, defaulting to off\n"); + } + } else { + /* tsx= not provided */ + if (IS_ENABLED(CONFIG_X86_INTEL_TSX_MODE_AUTO)) + tsx_ctrl_state = x86_get_tsx_auto_mode(); + else if (IS_ENABLED(CONFIG_X86_INTEL_TSX_MODE_OFF)) + tsx_ctrl_state = TSX_CTRL_DISABLE; + else + tsx_ctrl_state = TSX_CTRL_ENABLE; + } + + if (tsx_ctrl_state == TSX_CTRL_DISABLE) { + tsx_disable(); + + /* + * tsx_disable() will change the state of the + * RTM CPUID bit. Clear it here since it is now + * expected to be not set. + */ + setup_clear_cpu_cap(X86_FEATURE_RTM); + } else if (tsx_ctrl_state == TSX_CTRL_ENABLE) { + + /* + * HW defaults TSX to be enabled at bootup. + * We may still need the TSX enable support + * during init for special cases like + * kexec after TSX is disabled. + */ + tsx_enable(); + + /* + * tsx_enable() will change the state of the + * RTM CPUID bit. Force it here since it is now + * expected to be set. + */ + setup_force_cpu_cap(X86_FEATURE_RTM); + } +} diff --git a/arch/x86/kernel/smp.c b/arch/x86/kernel/smp.c index 12c8286206ce..6a0ba9d09b0e 100644 --- a/arch/x86/kernel/smp.c +++ b/arch/x86/kernel/smp.c @@ -176,6 +176,12 @@ asmlinkage __visible void smp_reboot_interrupt(void) irq_exit(); } +static int register_stop_handler(void) +{ + return register_nmi_handler(NMI_LOCAL, smp_stop_nmi_callback, + NMI_FLAG_FIRST, "smp_stop"); +} + static void native_stop_other_cpus(int wait) { unsigned long flags; @@ -209,39 +215,41 @@ static void native_stop_other_cpus(int wait) apic->send_IPI_allbutself(REBOOT_VECTOR); /* - * Don't wait longer than a second if the caller - * didn't ask us to wait. + * Don't wait longer than a second for IPI completion. The + * wait request is not checked here because that would + * prevent an NMI shutdown attempt in case that not all + * CPUs reach shutdown state. 
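 *
 * (Editor's aside, hedged.) The reworked shutdown path is, in outline:
 *
 *	send REBOOT_VECTOR IPI to all other CPUs;
 *	wait up to 1 second, regardless of 'wait';
 *	if some CPUs are still online:
 *		if NMI IPIs are allowed and the handler registers,
 *			send NMI IPI to all other CPUs;
 *		wait 10 ms, or indefinitely when 'wait' was requested;
 *	disable the local APIC;
 *
 * i.e. the first timeout is now unconditional so that the NMI fallback is
 * always reachable.
 *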
*/ timeout = USEC_PER_SEC; - while (num_online_cpus() > 1 && (wait || timeout--)) + while (num_online_cpus() > 1 && timeout--) udelay(1); } - - /* if the REBOOT_VECTOR didn't work, try with the NMI */ - if ((num_online_cpus() > 1) && (!smp_no_nmi_ipi)) { - if (register_nmi_handler(NMI_LOCAL, smp_stop_nmi_callback, - NMI_FLAG_FIRST, "smp_stop")) - /* Note: we ignore failures here */ - /* Hope the REBOOT_IRQ is good enough */ - goto finish; - - /* sync above data before sending IRQ */ - wmb(); - pr_emerg("Shutting down cpus with NMI\n"); + /* if the REBOOT_VECTOR didn't work, try with the NMI */ + if (num_online_cpus() > 1) { + /* + * If NMI IPI is enabled, try to register the stop handler + * and send the IPI. In any case try to wait for the other + * CPUs to stop. + */ + if (!smp_no_nmi_ipi && !register_stop_handler()) { + /* Sync above data before sending IRQ */ + wmb(); - apic->send_IPI_allbutself(NMI_VECTOR); + pr_emerg("Shutting down cpus with NMI\n"); + apic->send_IPI_allbutself(NMI_VECTOR); + } /* - * Don't wait longer than a 10 ms if the caller - * didn't ask us to wait. + * Don't wait longer than 10 ms if the caller didn't + * reqeust it. If wait is true, the machine hangs here if + * one or more CPUs do not reach shutdown state. */ timeout = USEC_PER_MSEC * 10; while (num_online_cpus() > 1 && (wait || timeout--)) udelay(1); } -finish: local_irq_save(flags); disable_local_APIC(); mcheck_cpu_clear(this_cpu_ptr(&cpu_info)); diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c index 53918abccbc3..40e415fedcee 100644 --- a/arch/x86/kvm/cpuid.c +++ b/arch/x86/kvm/cpuid.c @@ -447,6 +447,18 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function, entry->ebx |= F(TSC_ADJUST); entry->edx &= kvm_cpuid_7_0_edx_x86_features; cpuid_mask(&entry->edx, CPUID_7_EDX); + if (boot_cpu_has(X86_FEATURE_IBPB) && + boot_cpu_has(X86_FEATURE_IBRS)) + entry->edx |= F(SPEC_CTRL); + if (boot_cpu_has(X86_FEATURE_STIBP)) + entry->edx |= F(INTEL_STIBP); + if (boot_cpu_has(X86_FEATURE_SSBD)) + entry->edx |= F(SPEC_CTRL_SSBD); + /* + * We emulate ARCH_CAPABILITIES in software even + * if the host doesn't support it. + */ + entry->edx |= F(ARCH_CAPABILITIES); } else { entry->ebx = 0; entry->edx = 0; diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c index 5dd56e3517f3..6c7847b3aa2d 100644 --- a/arch/x86/kvm/emulate.c +++ b/arch/x86/kvm/emulate.c @@ -5245,6 +5245,8 @@ done_prefixes: ctxt->memopp->addr.mem.ea + ctxt->_eip); done: + if (rc == X86EMUL_PROPAGATE_FAULT) + ctxt->have_exception = true; return (rc != X86EMUL_CONTINUE) ? 
EMULATION_FAILED : EMULATION_OK; } diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c index 343c8ddad86a..1b3a432f6fd5 100644 --- a/arch/x86/kvm/vmx.c +++ b/arch/x86/kvm/vmx.c @@ -546,7 +546,6 @@ struct vcpu_vmx { u64 msr_guest_kernel_gs_base; #endif - u64 arch_capabilities; u64 spec_ctrl; u32 vm_entry_controls_shadow; @@ -2866,12 +2865,6 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info) msr_info->data = to_vmx(vcpu)->spec_ctrl; break; - case MSR_IA32_ARCH_CAPABILITIES: - if (!msr_info->host_initiated && - !guest_cpuid_has_arch_capabilities(vcpu)) - return 1; - msr_info->data = to_vmx(vcpu)->arch_capabilities; - break; case MSR_IA32_SYSENTER_CS: msr_info->data = vmcs_read32(GUEST_SYSENTER_CS); break; @@ -3028,11 +3021,6 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info) vmx_disable_intercept_for_msr(vmx->vmcs01.msr_bitmap, MSR_IA32_PRED_CMD, MSR_TYPE_W); break; - case MSR_IA32_ARCH_CAPABILITIES: - if (!msr_info->host_initiated) - return 1; - vmx->arch_capabilities = data; - break; case MSR_IA32_CR_PAT: if (vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_IA32_PAT) { if (!kvm_mtrr_valid(vcpu, MSR_IA32_CR_PAT, data)) @@ -5079,9 +5067,6 @@ static int vmx_vcpu_setup(struct vcpu_vmx *vmx) ++vmx->nmsrs; } - if (boot_cpu_has(X86_FEATURE_ARCH_CAPABILITIES)) - rdmsrl(MSR_IA32_ARCH_CAPABILITIES, vmx->arch_capabilities); - vm_exit_controls_init(vmx, vmcs_config.vmexit_ctrl); /* 22.2.1, 20.8.1 */ @@ -7276,7 +7261,7 @@ static int handle_vmread(struct kvm_vcpu *vcpu) /* _system ok, as nested_vmx_check_permission verified cpl=0 */ if (kvm_write_guest_virt_system(vcpu, gva, &field_value, (is_long_mode(vcpu) ? 8 : 4), - NULL)) + &e)) kvm_inject_page_fault(vcpu, &e); } diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 74674a6e4827..3b711cd261d7 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -523,8 +523,14 @@ static int kvm_read_nested_guest_page(struct kvm_vcpu *vcpu, gfn_t gfn, data, offset, len, access); } +static inline u64 pdptr_rsvd_bits(struct kvm_vcpu *vcpu) +{ + return rsvd_bits(cpuid_maxphyaddr(vcpu), 63) | rsvd_bits(5, 8) | + rsvd_bits(1, 2); +} + /* - * Load the pae pdptrs. Return true is they are all valid. + * Load the pae pdptrs. Return 1 if they are all valid, 0 otherwise. */ int load_pdptrs(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, unsigned long cr3) { @@ -543,8 +549,7 @@ int load_pdptrs(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, unsigned long cr3) } for (i = 0; i < ARRAY_SIZE(pdpte); ++i) { if (is_present_gpte(pdpte[i]) && - (pdpte[i] & - vcpu->arch.mmu.guest_rsvd_check.rsvd_bits_mask[0][2])) { + (pdpte[i] & pdptr_rsvd_bits(vcpu))) { ret = 0; goto out; } @@ -570,7 +575,7 @@ static bool pdptrs_changed(struct kvm_vcpu *vcpu) gfn_t gfn; int r; - if (is_long_mode(vcpu) || !is_pae(vcpu)) + if (is_long_mode(vcpu) || !is_pae(vcpu) || !is_paging(vcpu)) return false; if (!test_bit(VCPU_EXREG_PDPTR, @@ -990,6 +995,43 @@ static u32 emulated_msrs[] = { static unsigned num_emulated_msrs; +u64 kvm_get_arch_capabilities(void) +{ + u64 data; + + rdmsrl_safe(MSR_IA32_ARCH_CAPABILITIES, &data); + + if (!boot_cpu_has_bug(X86_BUG_CPU_MELTDOWN)) + data |= ARCH_CAP_RDCL_NO; + if (!boot_cpu_has_bug(X86_BUG_SPEC_STORE_BYPASS)) + data |= ARCH_CAP_SSB_NO; + if (!boot_cpu_has_bug(X86_BUG_MDS)) + data |= ARCH_CAP_MDS_NO; + + /* + * On TAA affected systems, export MDS_NO=0 when: + * - TSX is enabled on the host, i.e. X86_FEATURE_RTM=1. + * - Updated microcode is present. 
This is detected by + * the presence of ARCH_CAP_TSX_CTRL_MSR and ensures + * that VERW clears CPU buffers. + * + * When MDS_NO=0 is exported, guests deploy clear CPU buffer + * mitigation and don't complain: + * + * "Vulnerable: Clear CPU buffers attempted, no microcode" + * + * If TSX is disabled on the system, guests are also mitigated against + * TAA and clear CPU buffer mitigation is not required for guests. + */ + if (boot_cpu_has_bug(X86_BUG_TAA) && boot_cpu_has(X86_FEATURE_RTM) && + (data & ARCH_CAP_TSX_CTRL_MSR)) + data &= ~ARCH_CAP_MDS_NO; + + return data; +} + +EXPORT_SYMBOL_GPL(kvm_get_arch_capabilities); + static bool __kvm_valid_efer(struct kvm_vcpu *vcpu, u64 efer) { if (efer & EFER_FFXSR) { @@ -2065,6 +2107,11 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info) case MSR_AMD64_BU_CFG2: break; + case MSR_IA32_ARCH_CAPABILITIES: + if (!msr_info->host_initiated) + return 1; + vcpu->arch.arch_capabilities = data; + break; case MSR_EFER: return set_efer(vcpu, msr_info); case MSR_K7_HWCR: @@ -2339,6 +2386,12 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info) case MSR_IA32_UCODE_REV: msr_info->data = 0x100000000ULL; break; + case MSR_IA32_ARCH_CAPABILITIES: + if (!msr_info->host_initiated && + !guest_cpuid_has_arch_capabilities(vcpu)) + return 1; + msr_info->data = vcpu->arch.arch_capabilities; + break; case MSR_MTRRcap: case 0x200 ... 0x2ff: return kvm_mtrr_get_msr(vcpu, msr_info->index, &msr_info->data); @@ -5486,8 +5539,16 @@ int x86_emulate_instruction(struct kvm_vcpu *vcpu, if (reexecute_instruction(vcpu, cr2, write_fault_to_spt, emulation_type)) return EMULATE_DONE; - if (ctxt->have_exception && inject_emulated_exception(vcpu)) + if (ctxt->have_exception) { + /* + * #UD should result in just EMULATION_FAILED, and trap-like + * exception should not be encountered during decode. + */ + WARN_ON_ONCE(ctxt->exception.vector == UD_VECTOR || + exception_type(ctxt->exception.vector) == EXCPT_TRAP); + inject_emulated_exception(vcpu); return EMULATE_DONE; + } if (emulation_type & EMULTYPE_SKIP) return EMULATE_FAIL; return handle_emulation_failure(vcpu); @@ -7155,7 +7216,7 @@ int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu, kvm_update_cpuid(vcpu); idx = srcu_read_lock(&vcpu->kvm->srcu); - if (!is_long_mode(vcpu) && is_pae(vcpu)) { + if (!is_long_mode(vcpu) && is_pae(vcpu) && is_paging(vcpu)) { load_pdptrs(vcpu, vcpu->arch.walk_mmu, kvm_read_cr3(vcpu)); mmu_reset_needed = 1; } @@ -7379,6 +7440,7 @@ int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu) { int r; + vcpu->arch.arch_capabilities = kvm_get_arch_capabilities(); kvm_vcpu_mtrr_init(vcpu); r = vcpu_load(vcpu); if (r) diff --git a/arch/x86/lib/delay.c b/arch/x86/lib/delay.c index 45772560aceb..fc0cc6d08157 100644 --- a/arch/x86/lib/delay.c +++ b/arch/x86/lib/delay.c @@ -112,8 +112,8 @@ static void delay_mwaitx(unsigned long __loops) __monitorx(this_cpu_ptr(&cpu_tss), 0, 0); /* - * AMD, like Intel, supports the EAX hint and EAX=0xf - * means, do not enter any deep C-state and we use it + * AMD, like Intel's MWAIT version, supports the EAX hint and + * EAX=0xf0 means, do not enter any deep C-state and we use it * here in delay() to minimize wakeup latency. 
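 *
 * (Editor's aside, hedged.) This pairs with the mwait.h hunk above that
 * changed MWAITX_DISABLE_CSTATES from 0xf to 0xf0: the MWAITX C-state
 * hint lives in EAX bits [7:4], so the intended "no C-states" value is
 *
 *	#define MWAITX_DISABLE_CSTATES	(0xf << 4)	i.e. 0xf0
 *
 * The old 0xf sat in bits [3:0] and never reached the hint field at all.
 *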
*/ __mwaitx(MWAITX_DISABLE_CSTATES, delay, MWAITX_ECX_TIMER_ENABLE); diff --git a/arch/x86/platform/efi/efi.c b/arch/x86/platform/efi/efi.c index ad285404ea7f..4bc352fc08f1 100644 --- a/arch/x86/platform/efi/efi.c +++ b/arch/x86/platform/efi/efi.c @@ -859,9 +859,6 @@ static void __init kexec_enter_virtual_mode(void) if (efi_enabled(EFI_OLD_MEMMAP) && (__supported_pte_mask & _PAGE_NX)) runtime_code_page_mkexec(); - - /* clean DUMMY object */ - efi_delete_dummy_variable(); #endif } diff --git a/arch/xtensa/kernel/xtensa_ksyms.c b/arch/xtensa/kernel/xtensa_ksyms.c index 4d2872fd9bb5..e2dd9109df63 100644 --- a/arch/xtensa/kernel/xtensa_ksyms.c +++ b/arch/xtensa/kernel/xtensa_ksyms.c @@ -116,13 +116,6 @@ EXPORT_SYMBOL(__invalidate_icache_range); // FIXME EXPORT_SYMBOL(screen_info); #endif -EXPORT_SYMBOL(outsb); -EXPORT_SYMBOL(outsw); -EXPORT_SYMBOL(outsl); -EXPORT_SYMBOL(insb); -EXPORT_SYMBOL(insw); -EXPORT_SYMBOL(insl); - extern long common_exception_return; EXPORT_SYMBOL(common_exception_return); diff --git a/build.config.aarch64 b/build.config.aarch64 new file mode 100644 index 000000000000..523bbc0449f7 --- /dev/null +++ b/build.config.aarch64 @@ -0,0 +1,11 @@ +ARCH=arm64 + +CLANG_TRIPLE=aarch64-linux-gnu- +CROSS_COMPILE=aarch64-linux-androidkernel- +LINUX_GCC_CROSS_COMPILE_PREBUILTS_BIN=prebuilts/gcc/linux-x86/aarch64/aarch64-linux-android-4.9/bin + +FILES=" +arch/arm64/boot/Image.gz +vmlinux +System.map +" diff --git a/build.config.common b/build.config.common new file mode 100644 index 000000000000..81cbb76e3aef --- /dev/null +++ b/build.config.common @@ -0,0 +1,9 @@ +BRANCH=android-4.4-p +KERNEL_DIR=common + +CC=clang +CLANG_PREBUILT_BIN=prebuilts-master/clang/host/linux-x86/clang-r365631c/bin +BUILDTOOLS_PREBUILT_BIN=build/build-tools/path/linux-x86 + +EXTRA_CMDS='' +STOP_SHIP_TRACEPRINTK=1 diff --git a/build.config.cuttlefish.aarch64 b/build.config.cuttlefish.aarch64 index 818b3dc1b3a3..0cb601958972 100644 --- a/build.config.cuttlefish.aarch64 +++ b/build.config.cuttlefish.aarch64 @@ -1,16 +1,5 @@ -ARCH=arm64 -BRANCH=android-4.4 -CLANG_TRIPLE=aarch64-linux-gnu- -CROSS_COMPILE=aarch64-linux-androidkernel- +. ${ROOT_DIR}/common/build.config.common +. ${ROOT_DIR}/common/build.config.aarch64 + DEFCONFIG=cuttlefish_defconfig -EXTRA_CMDS='' -KERNEL_DIR=common POST_DEFCONFIG_CMDS="check_defconfig" -CLANG_PREBUILT_BIN=prebuilts-master/clang/host/linux-x86/clang-r353983c/bin -LINUX_GCC_CROSS_COMPILE_PREBUILTS_BIN=prebuilts/gcc/linux-x86/aarch64/aarch64-linux-android-4.9/bin -FILES=" -arch/arm64/boot/Image.gz -vmlinux -System.map -" -STOP_SHIP_TRACEPRINTK=1 diff --git a/build.config.cuttlefish.x86_64 b/build.config.cuttlefish.x86_64 index 84556472221d..fed773ccc64a 100644 --- a/build.config.cuttlefish.x86_64 +++ b/build.config.cuttlefish.x86_64 @@ -1,16 +1,5 @@ -ARCH=x86_64 -BRANCH=android-4.4 -CLANG_TRIPLE=x86_64-linux-gnu- -CROSS_COMPILE=x86_64-linux-androidkernel- +. ${ROOT_DIR}/common/build.config.common +. 
${ROOT_DIR}/common/build.config.x86_64 + DEFCONFIG=x86_64_cuttlefish_defconfig -EXTRA_CMDS='' -KERNEL_DIR=common POST_DEFCONFIG_CMDS="check_defconfig" -CLANG_PREBUILT_BIN=prebuilts-master/clang/host/linux-x86/clang-r353983c/bin -LINUX_GCC_CROSS_COMPILE_PREBUILTS_BIN=prebuilts/gcc/linux-x86/x86/x86_64-linux-android-4.9/bin -FILES=" -arch/x86/boot/bzImage -vmlinux -System.map -" -STOP_SHIP_TRACEPRINTK=1 diff --git a/build.config.x86_64 b/build.config.x86_64 new file mode 100644 index 000000000000..df73a47e7220 --- /dev/null +++ b/build.config.x86_64 @@ -0,0 +1,11 @@ +ARCH=x86_64 + +CLANG_TRIPLE=x86_64-linux-gnu- +CROSS_COMPILE=x86_64-linux-androidkernel- +LINUX_GCC_CROSS_COMPILE_PREBUILTS_BIN=prebuilts/gcc/linux-x86/x86/x86_64-linux-android-4.9/bin + +FILES=" +arch/x86/boot/bzImage +vmlinux +System.map +" diff --git a/drivers/acpi/cppc_acpi.c b/drivers/acpi/cppc_acpi.c index 0afd1981e350..43c27c04c40a 100644 --- a/drivers/acpi/cppc_acpi.c +++ b/drivers/acpi/cppc_acpi.c @@ -137,8 +137,10 @@ static int acpi_get_psd(struct cpc_desc *cpc_ptr, acpi_handle handle) union acpi_object *psd = NULL; struct acpi_psd_package *pdomain; - status = acpi_evaluate_object_typed(handle, "_PSD", NULL, &buffer, - ACPI_TYPE_PACKAGE); + status = acpi_evaluate_object_typed(handle, "_PSD", NULL, + &buffer, ACPI_TYPE_PACKAGE); + if (status == AE_NOT_FOUND) /* _PSD is optional */ + return 0; if (ACPI_FAILURE(status)) return -ENODEV; diff --git a/drivers/acpi/custom_method.c b/drivers/acpi/custom_method.c index c68e72414a67..435bd0ffc8c0 100644 --- a/drivers/acpi/custom_method.c +++ b/drivers/acpi/custom_method.c @@ -48,8 +48,10 @@ static ssize_t cm_write(struct file *file, const char __user * user_buf, if ((*ppos > max_size) || (*ppos + count > max_size) || (*ppos + count < count) || - (count > uncopied_bytes)) + (count > uncopied_bytes)) { + kfree(buf); return -EINVAL; + } if (copy_from_user(buf + (*ppos), user_buf, count)) { kfree(buf); @@ -69,6 +71,7 @@ static ssize_t cm_write(struct file *file, const char __user * user_buf, add_taint(TAINT_OVERRIDDEN_ACPI_TABLE, LOCKDEP_NOW_UNRELIABLE); } + kfree(buf); return count; } diff --git a/drivers/base/core.c b/drivers/base/core.c index 4e27b3abf5a5..146d8c081695 100644 --- a/drivers/base/core.c +++ b/drivers/base/core.c @@ -10,6 +10,7 @@ * */ +#include <linux/cpufreq.h> #include <linux/device.h> #include <linux/err.h> #include <linux/fwnode.h> @@ -2129,6 +2130,8 @@ void device_shutdown(void) { struct device *dev, *parent; + cpufreq_suspend(); + spin_lock(&devices_kset->list_lock); /* * Walk the devices list backward, shutting down each in turn. 
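/*
 * (Editor's aside, hedged.) Read together with the cpufreq.c hunk further
 * down, which removes the syscore .shutdown hook, the cpufreq_suspend()
 * call added at the top of device_shutdown() changes the ordering to
 * roughly:
 *
 *	kernel_restart() / kernel_power_off()
 *	    device_shutdown()
 *	        cpufreq_suspend();            <-- now runs first
 *	        for each device, newest first:
 *	            ->shutdown(dev);
 *	    syscore_shutdown();               <-- no longer touches cpufreq
 *
 * so cpufreq is quiesced before any device shutdown callback or secondary
 * CPU teardown can race with it.
 */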
diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c index e5ac568c7c9b..2a5aad29c8a3 100644 --- a/drivers/base/cpu.c +++ b/drivers/base/cpu.c @@ -711,12 +711,27 @@ ssize_t __weak cpu_show_mds(struct device *dev, return sprintf(buf, "Not affected\n"); } +ssize_t __weak cpu_show_tsx_async_abort(struct device *dev, + struct device_attribute *attr, + char *buf) +{ + return sprintf(buf, "Not affected\n"); +} + +ssize_t __weak cpu_show_itlb_multihit(struct device *dev, + struct device_attribute *attr, char *buf) +{ + return sprintf(buf, "Not affected\n"); +} + static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL); static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL); static DEVICE_ATTR(spectre_v2, 0444, cpu_show_spectre_v2, NULL); static DEVICE_ATTR(spec_store_bypass, 0444, cpu_show_spec_store_bypass, NULL); static DEVICE_ATTR(l1tf, 0444, cpu_show_l1tf, NULL); static DEVICE_ATTR(mds, 0444, cpu_show_mds, NULL); +static DEVICE_ATTR(tsx_async_abort, 0444, cpu_show_tsx_async_abort, NULL); +static DEVICE_ATTR(itlb_multihit, 0444, cpu_show_itlb_multihit, NULL); static struct attribute *cpu_root_vulnerabilities_attrs[] = { &dev_attr_meltdown.attr, @@ -725,6 +740,8 @@ static struct attribute *cpu_root_vulnerabilities_attrs[] = { &dev_attr_spec_store_bypass.attr, &dev_attr_l1tf.attr, &dev_attr_mds.attr, + &dev_attr_tsx_async_abort.attr, + &dev_attr_itlb_multihit.attr, NULL }; diff --git a/drivers/base/soc.c b/drivers/base/soc.c index 75b98aad6faf..84242e6b2897 100644 --- a/drivers/base/soc.c +++ b/drivers/base/soc.c @@ -146,6 +146,7 @@ out2: out1: return ERR_PTR(ret); } +EXPORT_SYMBOL_GPL(soc_device_register); /* Ensure soc_dev->attr is freed prior to calling soc_device_unregister. */ void soc_device_unregister(struct soc_device *soc_dev) @@ -154,6 +155,7 @@ void soc_device_unregister(struct soc_device *soc_dev) device_unregister(&soc_dev->dev); } +EXPORT_SYMBOL_GPL(soc_device_unregister); static int __init soc_bus_register(void) { diff --git a/drivers/block/loop.c b/drivers/block/loop.c index 1a1b094da6cb..aa7895f2cf95 100644 --- a/drivers/block/loop.c +++ b/drivers/block/loop.c @@ -1614,6 +1614,7 @@ static int lo_compat_ioctl(struct block_device *bdev, fmode_t mode, case LOOP_SET_FD: case LOOP_CHANGE_FD: case LOOP_SET_BLOCK_SIZE: + case LOOP_SET_DIRECT_IO: err = lo_ioctl(bdev, mode, cmd, arg); break; default: diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c index b0a12e6dae43..fcc12c879659 100644 --- a/drivers/bluetooth/btusb.c +++ b/drivers/bluetooth/btusb.c @@ -353,6 +353,9 @@ static const struct usb_device_id blacklist_table[] = { /* Additional Realtek 8822BE Bluetooth devices */ { USB_DEVICE(0x0b05, 0x185c), .driver_info = BTUSB_REALTEK }, + /* Additional Realtek 8822CE Bluetooth devices */ + { USB_DEVICE(0x04ca, 0x4005), .driver_info = BTUSB_REALTEK }, + /* Silicon Wave based devices */ { USB_DEVICE(0x0c10, 0x0000), .driver_info = BTUSB_SWAVE }, diff --git a/drivers/char/Kconfig b/drivers/char/Kconfig index efdff5d16c5d..c603312bae17 100644 --- a/drivers/char/Kconfig +++ b/drivers/char/Kconfig @@ -633,8 +633,8 @@ config MSM_RDBG for a debugger running on a host PC to communicate with a remote stub running on peripheral subsystems such as the ADSP, MODEM etc. 
-config QCOM_SDIO_CLIENT - bool "QCOM_SDIO_CLIENT support" +config QTI_SDIO_CLIENT + bool "QTI_SDIO_CLIENT support" depends on SDIO_QCN default y help diff --git a/drivers/char/Makefile b/drivers/char/Makefile index dc00ba448d20..731f8fdd2ec6 100644 --- a/drivers/char/Makefile +++ b/drivers/char/Makefile @@ -67,4 +67,4 @@ ifdef CONFIG_COMPAT obj-$(CONFIG_MSM_ADSPRPC) += adsprpc_compat.o endif obj-$(CONFIG_MSM_RDBG) += rdbg.o -obj-$(CONFIG_QCOM_SDIO_CLIENT) += qti_sdio_client.o +obj-$(CONFIG_QTI_SDIO_CLIENT) += qti_sdio_client.o diff --git a/drivers/char/diag/Kconfig b/drivers/char/diag/Kconfig index 1bcacb8d83dd..f8a072025e9e 100644 --- a/drivers/char/diag/Kconfig +++ b/drivers/char/diag/Kconfig @@ -25,7 +25,7 @@ endmenu menu "HSIC/SMUX support for DIAG" config DIAGFWD_BRIDGE_CODE - depends on USB_QCOM_DIAG_BRIDGE || MSM_MHI || QCOM_SDIO_CLIENT + depends on USB_QCOM_DIAG_BRIDGE || MSM_MHI || QTI_SDIO_CLIENT default y bool "Enable QSC/9K DIAG traffic over SMUX/HSIC" help diff --git a/drivers/char/diag/Makefile b/drivers/char/diag/Makefile index 40c5387998b3..b090f228535c 100644 --- a/drivers/char/diag/Makefile +++ b/drivers/char/diag/Makefile @@ -3,6 +3,6 @@ obj-$(CONFIG_DIAGFWD_BRIDGE_CODE) += diagfwd_bridge.o obj-$(CONFIG_USB_QCOM_DIAG_BRIDGE) += diagfwd_hsic.o obj-$(CONFIG_MSM_MHI) += diagfwd_mhi.o -obj-$(CONFIG_QCOM_SDIO_CLIENT) += diagfwd_sdio.o +obj-$(CONFIG_QTI_SDIO_CLIENT) += diagfwd_sdio.o diagchar-objs := diagchar_core.o diagchar_hdlc.o diagfwd.o diagfwd_glink.o diagfwd_peripheral.o diagfwd_smd.o diagfwd_socket.o diag_mux.o diag_memorydevice.o diag_usb.o diagmem.o diagfwd_cntl.o diag_dci.o diag_masks.o diag_debugfs.o diff --git a/drivers/char/diag/diag_dci.c b/drivers/char/diag/diag_dci.c index 2301e1e566e0..e9a6711c9990 100644 --- a/drivers/char/diag/diag_dci.c +++ b/drivers/char/diag/diag_dci.c @@ -998,6 +998,7 @@ void extract_dci_pkt_rsp(unsigned char *buf, int len, int data_source, unsigned char *temp = buf; int save_req_uid = 0; struct diag_dci_pkt_rsp_header_t pkt_rsp_header; + int header_len = sizeof(struct diag_dci_pkt_rsp_header_t); if (!buf || len <= 0) { pr_err("diag: Invalid pointer in %s\n", __func__); @@ -1066,23 +1067,24 @@ void extract_dci_pkt_rsp(unsigned char *buf, int len, int data_source, mutex_lock(&rsp_buf->data_mutex); /* * Check if we can fit the data in the rsp buffer. 
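 *
 * (Editor's aside, hedged.) The hunk below replaces the in-place
 * krealloc() grow with an allocate-copy-free sequence and swaps the magic
 * "9" for sizeof(struct diag_dci_pkt_rsp_header_t). The generic pattern,
 * with illustrative names only:
 *
 *	new_buf = vzalloc(capacity + header_len + rsp_len);
 *	if (!new_buf)
 *		goto bail;			// old buffer stays valid
 *	memcpy(new_buf, old_buf, old_data_len);
 *	vfree(old_buf);
 *	old_buf = new_buf;
 *	capacity += header_len + rsp_len;
 *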
The total length of - * the rsp is the rsp length (write_len) + DCI_PKT_RSP_TYPE header (int) - * + field for length (int) + delete_flag (uint8_t) + * the rsp is the rsp length (write_len) + dci response packet header + * length (sizeof(struct diag_dci_pkt_rsp_header_t)) */ - if ((rsp_buf->data_len + 9 + rsp_len) > rsp_buf->capacity) { + if ((rsp_buf->data_len + header_len + rsp_len) > rsp_buf->capacity) { pr_alert("diag: create capacity for pkt rsp\n"); - rsp_buf->capacity += 9 + rsp_len; - temp_buf = krealloc(rsp_buf->data, rsp_buf->capacity, - GFP_KERNEL); + temp_buf = vzalloc(rsp_buf->capacity + header_len + rsp_len); if (!temp_buf) { pr_err("diag: DCI realloc failed\n"); mutex_unlock(&rsp_buf->data_mutex); mutex_unlock(&entry->buffers[data_source].buf_mutex); mutex_unlock(&driver->dci_mutex); return; - } else { - rsp_buf->data = temp_buf; } + rsp_buf->capacity += header_len + rsp_len; + if (rsp_buf->capacity > rsp_buf->data_len) + memcpy(temp_buf, rsp_buf->data, rsp_buf->data_len); + vfree(rsp_buf->data); + rsp_buf->data = temp_buf; } /* Fill in packet response header information */ @@ -1091,9 +1093,8 @@ void extract_dci_pkt_rsp(unsigned char *buf, int len, int data_source, pkt_rsp_header.length = rsp_len + sizeof(int); pkt_rsp_header.delete_flag = delete_flag; pkt_rsp_header.uid = save_req_uid; - memcpy(rsp_buf->data + rsp_buf->data_len, &pkt_rsp_header, - sizeof(struct diag_dci_pkt_rsp_header_t)); - rsp_buf->data_len += sizeof(struct diag_dci_pkt_rsp_header_t); + memcpy(rsp_buf->data + rsp_buf->data_len, &pkt_rsp_header, header_len); + rsp_buf->data_len += header_len; memcpy(rsp_buf->data + rsp_buf->data_len, temp, rsp_len); rsp_buf->data_len += rsp_len; rsp_buf->data_source = data_source; diff --git a/drivers/char/diag/diagfwd_bridge.c b/drivers/char/diag/diagfwd_bridge.c index bfe85d9fd19d..298813917dec 100644 --- a/drivers/char/diag/diagfwd_bridge.c +++ b/drivers/char/diag/diagfwd_bridge.c @@ -43,7 +43,7 @@ static int diag_mhi_init(void) } #endif -#ifndef CONFIG_QCOM_SDIO_CLIENT +#ifndef CONFIG_QTI_SDIO_CLIENT static int diag_sdio_init(void) { return -EINVAL; diff --git a/drivers/char/diag/diagfwd_sdio.h b/drivers/char/diag/diagfwd_sdio.h index 1fa258e901c0..42c300d47c68 100644 --- a/drivers/char/diag/diagfwd_sdio.h +++ b/drivers/char/diag/diagfwd_sdio.h @@ -13,7 +13,7 @@ #ifndef DIAGFWD_SDIO_H #define DIAGFWD_SDIO_H -#ifdef CONFIG_QCOM_SDIO_CLIENT +#ifdef CONFIG_QTI_SDIO_CLIENT #ifdef CONFIG_DIAG_OVER_USB #include <linux/usb/usbdiag.h> @@ -52,4 +52,4 @@ int diag_sdio_init(void); void diag_sdio_exit(void); #endif -#endif /*CONFIG_QCOM_SDIO_CLIENT*/ +#endif /*CONFIG_QTI_SDIO_CLIENT*/ diff --git a/drivers/char/hw_random/core.c b/drivers/char/hw_random/core.c index 340f96e44642..bba54422d2ca 100644 --- a/drivers/char/hw_random/core.c +++ b/drivers/char/hw_random/core.c @@ -88,7 +88,7 @@ static void add_early_randomness(struct hwrng *rng) size_t size = min_t(size_t, 16, rng_buffer_size()); mutex_lock(&reading_mutex); - bytes_read = rng_get_data(rng, rng_buffer, size, 1); + bytes_read = rng_get_data(rng, rng_buffer, size, 0); mutex_unlock(&reading_mutex); if (bytes_read > 0) add_device_randomness(rng_buffer, bytes_read); diff --git a/drivers/char/ipmi/ipmi_si_intf.c b/drivers/char/ipmi/ipmi_si_intf.c index 2f9abe0d04dc..2f8ff63bbbe4 100644 --- a/drivers/char/ipmi/ipmi_si_intf.c +++ b/drivers/char/ipmi/ipmi_si_intf.c @@ -281,6 +281,9 @@ struct smi_info { */ bool irq_enable_broken; + /* Is the driver in maintenance mode? 
*/ + bool in_maintenance_mode; + /* * Did we get an attention that we did not handle? */ @@ -1091,11 +1094,20 @@ static int ipmi_thread(void *data) spin_unlock_irqrestore(&(smi_info->si_lock), flags); busy_wait = ipmi_thread_busy_wait(smi_result, smi_info, &busy_until); - if (smi_result == SI_SM_CALL_WITHOUT_DELAY) + if (smi_result == SI_SM_CALL_WITHOUT_DELAY) { ; /* do nothing */ - else if (smi_result == SI_SM_CALL_WITH_DELAY && busy_wait) - schedule(); - else if (smi_result == SI_SM_IDLE) { + } else if (smi_result == SI_SM_CALL_WITH_DELAY && busy_wait) { + /* + * In maintenance mode we run as fast as + * possible to allow firmware updates to + * complete as fast as possible, but normally + * don't bang on the scheduler. + */ + if (smi_info->in_maintenance_mode) + schedule(); + else + usleep_range(100, 200); + } else if (smi_result == SI_SM_IDLE) { if (atomic_read(&smi_info->need_watch)) { schedule_timeout_interruptible(100); } else { @@ -1103,8 +1115,9 @@ static int ipmi_thread(void *data) __set_current_state(TASK_INTERRUPTIBLE); schedule(); } - } else + } else { schedule_timeout_interruptible(1); + } } return 0; } @@ -1283,6 +1296,7 @@ static void set_maintenance_mode(void *send_info, bool enable) if (!enable) atomic_set(&smi_info->req_events, 0); + smi_info->in_maintenance_mode = enable; } static const struct ipmi_smi_handlers handlers = { diff --git a/drivers/char/mem.c b/drivers/char/mem.c index 23f52a897283..6ebe2b86d8eb 100644 --- a/drivers/char/mem.c +++ b/drivers/char/mem.c @@ -95,6 +95,13 @@ void __weak unxlate_dev_mem_ptr(phys_addr_t phys, void *addr) } #endif +static inline bool should_stop_iteration(void) +{ + if (need_resched()) + cond_resched(); + return fatal_signal_pending(current); +} + /* * This funcion reads the *physical* memory. The f_pos points directly to the * memory location. 
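/*
 * (Editor's aside, hedged.) should_stop_iteration() above captures a
 * common pattern for long, chunked copy loops: call cond_resched() so the
 * task can be preempted, and abandon the loop early when a fatal signal
 * is pending, rather than finishing a huge /dev/mem or /dev/kmem transfer
 * that the dying task will never consume. The hunks below simply drop it
 * into each existing loop:
 *
 *	while (count > 0) {
 *		... copy one sz-sized chunk ...
 *		if (should_stop_iteration())
 *			break;
 *	}
 */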
@@ -161,6 +168,8 @@ static ssize_t read_mem(struct file *file, char __user *buf, p += sz; count -= sz; read += sz; + if (should_stop_iteration()) + break; } *ppos += read; @@ -232,6 +241,8 @@ static ssize_t write_mem(struct file *file, const char __user *buf, p += sz; count -= sz; written += sz; + if (should_stop_iteration()) + break; } *ppos += written; @@ -443,6 +454,10 @@ static ssize_t read_kmem(struct file *file, char __user *buf, read += sz; low_count -= sz; count -= sz; + if (should_stop_iteration()) { + count = 0; + break; + } } } @@ -467,6 +482,8 @@ static ssize_t read_kmem(struct file *file, char __user *buf, buf += sz; read += sz; p += sz; + if (should_stop_iteration()) + break; } free_page((unsigned long)kbuf); } @@ -517,6 +534,8 @@ static ssize_t do_write_kmem(unsigned long p, const char __user *buf, p += sz; count -= sz; written += sz; + if (should_stop_iteration()) + break; } *ppos += written; @@ -568,6 +587,8 @@ static ssize_t write_kmem(struct file *file, const char __user *buf, buf += sz; virtr += sz; p += sz; + if (should_stop_iteration()) + break; } free_page((unsigned long)kbuf); } diff --git a/drivers/clk/clk-qoriq.c b/drivers/clk/clk-qoriq.c index a5070f9cb0d4..7244a621c61b 100644 --- a/drivers/clk/clk-qoriq.c +++ b/drivers/clk/clk-qoriq.c @@ -540,7 +540,7 @@ static const struct clockgen_chipinfo chipinfo[] = { .guts_compat = "fsl,qoriq-device-config-1.0", .init_periph = p5020_init_periph, .cmux_groups = { - &p2041_cmux_grp1, &p2041_cmux_grp2 + &p5020_cmux_grp1, &p5020_cmux_grp2 }, .cmux_to_group = { 0, 1, -1 diff --git a/drivers/clk/sirf/clk-common.c b/drivers/clk/sirf/clk-common.c index 77e1e2491689..edb7197cc4b4 100644 --- a/drivers/clk/sirf/clk-common.c +++ b/drivers/clk/sirf/clk-common.c @@ -298,9 +298,10 @@ static u8 dmn_clk_get_parent(struct clk_hw *hw) { struct clk_dmn *clk = to_dmnclk(hw); u32 cfg = clkc_readl(clk->regofs); + const char *name = clk_hw_get_name(hw); /* parent of io domain can only be pll3 */ - if (strcmp(hw->init->name, "io") == 0) + if (strcmp(name, "io") == 0) return 4; WARN_ON((cfg & (BIT(3) - 1)) > 4); @@ -312,9 +313,10 @@ static int dmn_clk_set_parent(struct clk_hw *hw, u8 parent) { struct clk_dmn *clk = to_dmnclk(hw); u32 cfg = clkc_readl(clk->regofs); + const char *name = clk_hw_get_name(hw); /* parent of io domain can only be pll3 */ - if (strcmp(hw->init->name, "io") == 0) + if (strcmp(name, "io") == 0) return -EINVAL; cfg &= ~(BIT(3) - 1); @@ -354,7 +356,8 @@ static long dmn_clk_round_rate(struct clk_hw *hw, unsigned long rate, { unsigned long fin; unsigned ratio, wait, hold; - unsigned bits = (strcmp(hw->init->name, "mem") == 0) ? 3 : 4; + const char *name = clk_hw_get_name(hw); + unsigned bits = (strcmp(name, "mem") == 0) ? 3 : 4; fin = *parent_rate; ratio = fin / rate; @@ -376,7 +379,8 @@ static int dmn_clk_set_rate(struct clk_hw *hw, unsigned long rate, struct clk_dmn *clk = to_dmnclk(hw); unsigned long fin; unsigned ratio, wait, hold, reg; - unsigned bits = (strcmp(hw->init->name, "mem") == 0) ? 3 : 4; + const char *name = clk_hw_get_name(hw); + unsigned bits = (strcmp(name, "mem") == 0) ? 
3 : 4; fin = parent_rate; ratio = fin / rate; diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c index 12c5210eabbc..e8fca2cf056b 100644 --- a/drivers/cpufreq/cpufreq.c +++ b/drivers/cpufreq/cpufreq.c @@ -2631,14 +2631,6 @@ int cpufreq_unregister_driver(struct cpufreq_driver *driver) } EXPORT_SYMBOL_GPL(cpufreq_unregister_driver); -/* - * Stop cpufreq at shutdown to make sure it isn't holding any locks - * or mutexes when secondary CPUs are halted. - */ -static struct syscore_ops cpufreq_syscore_ops = { - .shutdown = cpufreq_suspend, -}; - struct kobject *cpufreq_global_kobject; EXPORT_SYMBOL(cpufreq_global_kobject); @@ -2650,8 +2642,6 @@ static int __init cpufreq_core_init(void) cpufreq_global_kobject = kobject_create_and_add("cpufreq", &cpu_subsys.dev_root->kobj); BUG_ON(!cpufreq_global_kobject); - register_syscore_ops(&cpufreq_syscore_ops); - return 0; } core_initcall(cpufreq_core_init); diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c index f2d1fea23fbf..492432dd5cd6 100644 --- a/drivers/crypto/caam/caamalg.c +++ b/drivers/crypto/caam/caamalg.c @@ -75,7 +75,7 @@ #define DESC_AEAD_BASE (4 * CAAM_CMD_SZ) #define DESC_AEAD_ENC_LEN (DESC_AEAD_BASE + 11 * CAAM_CMD_SZ) #define DESC_AEAD_DEC_LEN (DESC_AEAD_BASE + 15 * CAAM_CMD_SZ) -#define DESC_AEAD_GIVENC_LEN (DESC_AEAD_ENC_LEN + 9 * CAAM_CMD_SZ) +#define DESC_AEAD_GIVENC_LEN (DESC_AEAD_ENC_LEN + 10 * CAAM_CMD_SZ) /* Note: Nonce is counted in enckeylen */ #define DESC_AEAD_CTR_RFC3686_LEN (4 * CAAM_CMD_SZ) @@ -437,6 +437,7 @@ static int aead_set_sh_desc(struct crypto_aead *aead) u32 geniv, moveiv; u32 ctx1_iv_off = 0; u32 *desc; + u32 *wait_cmd; const bool ctr_mode = ((ctx->class1_alg_type & OP_ALG_AAI_MASK) == OP_ALG_AAI_CTR_MOD128); const bool is_rfc3686 = alg->caam.rfc3686; @@ -702,6 +703,14 @@ copy_iv: /* Will read cryptlen */ append_math_add(desc, VARSEQINLEN, SEQINLEN, REG0, CAAM_CMD_SZ); + + /* + * Wait for IV transfer (ofifo -> class2) to finish before starting + * ciphertext transfer (ofifo -> external memory). 
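 *
 * (Editor's aside, hedged.) The two commands added below form a
 * jump-to-self: append_jump() emits a conditional JUMP and
 * set_jump_tgt_here() points its target at the very next command, so the
 * descriptor effectively stalls at this point until the tested NIFP
 * condition is met instead of branching anywhere. The matching
 * DESC_AEAD_GIVENC_LEN bump from 9 to 10 CAAM_CMD_SZ earlier in this diff
 * presumably accounts for the extra command.
 *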
+ */ + wait_cmd = append_jump(desc, JUMP_JSL | JUMP_TEST_ALL | JUMP_COND_NIFP); + set_jump_tgt_here(desc, wait_cmd); + append_seq_fifo_load(desc, 0, FIFOLD_CLASS_BOTH | KEY_VLF | FIFOLD_TYPE_MSG1OUT2 | FIFOLD_TYPE_LASTBOTH); append_seq_fifo_store(desc, 0, FIFOST_TYPE_MESSAGE_DATA | KEY_VLF); diff --git a/drivers/crypto/qat/qat_common/adf_common_drv.h b/drivers/crypto/qat/qat_common/adf_common_drv.h index aa1dbeaa9b49..5358162018dd 100644 --- a/drivers/crypto/qat/qat_common/adf_common_drv.h +++ b/drivers/crypto/qat/qat_common/adf_common_drv.h @@ -95,7 +95,7 @@ struct service_hndl { static inline int get_current_node(void) { - return topology_physical_package_id(smp_processor_id()); + return topology_physical_package_id(raw_smp_processor_id()); } int adf_service_register(struct service_hndl *service); diff --git a/drivers/crypto/talitos.c b/drivers/crypto/talitos.c index 014745271bb4..1c8857e7db89 100644 --- a/drivers/crypto/talitos.c +++ b/drivers/crypto/talitos.c @@ -2730,6 +2730,7 @@ static int talitos_remove(struct platform_device *ofdev) break; case CRYPTO_ALG_TYPE_AEAD: crypto_unregister_aead(&t_alg->algt.alg.aead); + break; case CRYPTO_ALG_TYPE_AHASH: crypto_unregister_ahash(&t_alg->algt.alg.hash); break; diff --git a/drivers/dma/bcm2835-dma.c b/drivers/dma/bcm2835-dma.c index 996c4b00d323..d6cdc3be03fc 100644 --- a/drivers/dma/bcm2835-dma.c +++ b/drivers/dma/bcm2835-dma.c @@ -595,8 +595,10 @@ static int bcm2835_dma_probe(struct platform_device *pdev) pdev->dev.dma_mask = &pdev->dev.coherent_dma_mask; rc = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)); - if (rc) + if (rc) { + dev_err(&pdev->dev, "Unable to set DMA mask\n"); return rc; + } od = devm_kzalloc(&pdev->dev, sizeof(*od), GFP_KERNEL); if (!od) diff --git a/drivers/dma/edma.c b/drivers/dma/edma.c index 85674a8d0436..e508c8c5f3fd 100644 --- a/drivers/dma/edma.c +++ b/drivers/dma/edma.c @@ -2218,9 +2218,6 @@ static int edma_probe(struct platform_device *pdev) ecc->default_queue = info->default_queue; - for (i = 0; i < ecc->num_slots; i++) - edma_write_slot(ecc, i, &dummy_paramset); - if (info->rsv) { /* Set the reserved slots in inuse list */ rsv_slots = info->rsv->rsv_slots; @@ -2233,6 +2230,12 @@ static int edma_probe(struct platform_device *pdev) } } + for (i = 0; i < ecc->num_slots; i++) { + /* Reset only unused - not reserved - paRAM slots */ + if (!test_bit(i, ecc->slot_inuse)) + edma_write_slot(ecc, i, &dummy_paramset); + } + /* Clear the xbar mapped channels in unused list */ xbar_chans = info->xbar_chans; if (xbar_chans) { diff --git a/drivers/dma/iop-adma.c b/drivers/dma/iop-adma.c index e4f43125e0fb..a390415c97a8 100644 --- a/drivers/dma/iop-adma.c +++ b/drivers/dma/iop-adma.c @@ -126,9 +126,9 @@ static void __iop_adma_slot_cleanup(struct iop_adma_chan *iop_chan) list_for_each_entry_safe(iter, _iter, &iop_chan->chain, chain_node) { pr_debug("\tcookie: %d slot: %d busy: %d " - "this_desc: %#x next_desc: %#x ack: %d\n", + "this_desc: %#x next_desc: %#llx ack: %d\n", iter->async_tx.cookie, iter->idx, busy, - iter->async_tx.phys, iop_desc_get_next_desc(iter), + iter->async_tx.phys, (u64)iop_desc_get_next_desc(iter), async_tx_test_ack(&iter->async_tx)); prefetch(_iter); prefetch(&_iter->async_tx); @@ -316,9 +316,9 @@ retry: int i; dev_dbg(iop_chan->device->common.dev, "allocated slot: %d " - "(desc %p phys: %#x) slots_per_op %d\n", + "(desc %p phys: %#llx) slots_per_op %d\n", iter->idx, iter->hw_desc, - iter->async_tx.phys, slots_per_op); + (u64)iter->async_tx.phys, slots_per_op); /* pre-ack all but the last 
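On the qat get_current_node() change above: smp_processor_id() warns under CONFIG_DEBUG_PREEMPT when called from preemptible context. Here the CPU number is only used as a NUMA placement hint, so a momentarily stale value is harmless and the raw variant suffices. For cases where the value must stay accurate while used, a sketch of the stricter pattern (illustrative, not part of the patch):

	int cpu, node;

	cpu = get_cpu();			/* disables preemption */
	node = topology_physical_package_id(cpu);
	/* ... use node while it is guaranteed current ... */
	put_cpu();				/* re-enables preemption */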
descriptor */ if (num_slots != slots_per_op) @@ -526,7 +526,7 @@ iop_adma_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dma_dest, return NULL; BUG_ON(len > IOP_ADMA_MAX_BYTE_COUNT); - dev_dbg(iop_chan->device->common.dev, "%s len: %u\n", + dev_dbg(iop_chan->device->common.dev, "%s len: %zu\n", __func__, len); spin_lock_bh(&iop_chan->lock); @@ -559,7 +559,7 @@ iop_adma_prep_dma_xor(struct dma_chan *chan, dma_addr_t dma_dest, BUG_ON(len > IOP_ADMA_XOR_MAX_BYTE_COUNT); dev_dbg(iop_chan->device->common.dev, - "%s src_cnt: %d len: %u flags: %lx\n", + "%s src_cnt: %d len: %zu flags: %lx\n", __func__, src_cnt, len, flags); spin_lock_bh(&iop_chan->lock); @@ -592,7 +592,7 @@ iop_adma_prep_dma_xor_val(struct dma_chan *chan, dma_addr_t *dma_src, if (unlikely(!len)) return NULL; - dev_dbg(iop_chan->device->common.dev, "%s src_cnt: %d len: %u\n", + dev_dbg(iop_chan->device->common.dev, "%s src_cnt: %d len: %zu\n", __func__, src_cnt, len); spin_lock_bh(&iop_chan->lock); @@ -630,7 +630,7 @@ iop_adma_prep_dma_pq(struct dma_chan *chan, dma_addr_t *dst, dma_addr_t *src, BUG_ON(len > IOP_ADMA_XOR_MAX_BYTE_COUNT); dev_dbg(iop_chan->device->common.dev, - "%s src_cnt: %d len: %u flags: %lx\n", + "%s src_cnt: %d len: %zu flags: %lx\n", __func__, src_cnt, len, flags); if (dmaf_p_disabled_continue(flags)) @@ -693,7 +693,7 @@ iop_adma_prep_dma_pq_val(struct dma_chan *chan, dma_addr_t *pq, dma_addr_t *src, return NULL; BUG_ON(len > IOP_ADMA_XOR_MAX_BYTE_COUNT); - dev_dbg(iop_chan->device->common.dev, "%s src_cnt: %d len: %u\n", + dev_dbg(iop_chan->device->common.dev, "%s src_cnt: %d len: %zu\n", __func__, src_cnt, len); spin_lock_bh(&iop_chan->lock); diff --git a/drivers/dma/qcom_bam_dma.c b/drivers/dma/qcom_bam_dma.c index 5a250cdc8376..eca5b106d7d4 100644 --- a/drivers/dma/qcom_bam_dma.c +++ b/drivers/dma/qcom_bam_dma.c @@ -671,7 +671,21 @@ static int bam_dma_terminate_all(struct dma_chan *chan) /* remove all transactions, including active transaction */ spin_lock_irqsave(&bchan->vc.lock, flag); + /* + * If we have transactions queued, then some might be committed to the + * hardware in the desc fifo. The only way to reset the desc fifo is + * to do a hardware reset (either by pipe or the entire block). + * bam_chan_init_hw() will trigger a pipe reset, and also reinit the + * pipe. If the pipe is left disabled (default state after pipe reset) + * and is accessed by a connected hardware engine, a fatal error in + * the BAM will occur. There is a small window where this could happen + * with bam_chan_init_hw(), but it is assumed that the caller has + * stopped activity on any attached hardware engine. Make sure to do + * this first so that the BAM hardware doesn't cause memory corruption + * by accessing freed resources. 
+ */ if (bchan->curr_txd) { + bam_chan_init_hw(bchan, bchan->curr_txd->dir); list_add(&bchan->curr_txd->vd.node, &bchan->vc.desc_issued); bchan->curr_txd = NULL; } diff --git a/drivers/firmware/efi/cper.c b/drivers/firmware/efi/cper.c index d42537425438..c0e54396f250 100644 --- a/drivers/firmware/efi/cper.c +++ b/drivers/firmware/efi/cper.c @@ -375,7 +375,7 @@ static void cper_print_pcie(const char *pfx, const struct cper_sec_pcie *pcie, printk("%s""vendor_id: 0x%04x, device_id: 0x%04x\n", pfx, pcie->device_id.vendor_id, pcie->device_id.device_id); p = pcie->device_id.class_code; - printk("%s""class_code: %02x%02x%02x\n", pfx, p[0], p[1], p[2]); + printk("%s""class_code: %02x%02x%02x\n", pfx, p[2], p[1], p[0]); } if (pcie->validation_bits & CPER_PCIE_VALID_SERIAL_NUMBER) printk("%s""serial number: 0x%04x, 0x%04x\n", pfx, @@ -384,6 +384,21 @@ static void cper_print_pcie(const char *pfx, const struct cper_sec_pcie *pcie, printk( "%s""bridge: secondary_status: 0x%04x, control: 0x%04x\n", pfx, pcie->bridge.secondary_status, pcie->bridge.control); + + /* Fatal errors call __ghes_panic() before AER handler prints this */ + if ((pcie->validation_bits & CPER_PCIE_VALID_AER_INFO) && + (gdata->error_severity & CPER_SEV_FATAL)) { + struct aer_capability_regs *aer; + + aer = (struct aer_capability_regs *)pcie->aer_info; + printk("%saer_uncor_status: 0x%08x, aer_uncor_mask: 0x%08x\n", + pfx, aer->uncor_status, aer->uncor_mask); + printk("%saer_uncor_severity: 0x%08x\n", + pfx, aer->uncor_severity); + printk("%sTLP Header: %08x %08x %08x %08x\n", pfx, + aer->header_log.dw0, aer->header_log.dw1, + aer->header_log.dw2, aer->header_log.dw3); + } } static void cper_estatus_print_section( diff --git a/drivers/firmware/psci.c b/drivers/firmware/psci.c index 07dc692851d8..5a271b3f9bfb 100644 --- a/drivers/firmware/psci.c +++ b/drivers/firmware/psci.c @@ -58,7 +58,10 @@ bool psci_tos_resident_on(int cpu) return cpu == resident_cpu; } -struct psci_operations psci_ops; +struct psci_operations psci_ops = { + .conduit = PSCI_CONDUIT_NONE, + .smccc_version = SMCCC_VERSION_1_0, +}; typedef unsigned long (psci_fn)(unsigned long, unsigned long, unsigned long, unsigned long); @@ -189,6 +192,22 @@ static unsigned long psci_migrate_info_up_cpu(void) 0, 0, 0); } +static void set_conduit(enum psci_conduit conduit) +{ + switch (conduit) { + case PSCI_CONDUIT_HVC: + invoke_psci_fn = __invoke_psci_fn_hvc; + break; + case PSCI_CONDUIT_SMC: + invoke_psci_fn = __invoke_psci_fn_smc; + break; + default: + WARN(1, "Unexpected PSCI conduit %d\n", conduit); + } + + psci_ops.conduit = conduit; +} + static int get_set_conduit_method(struct device_node *np) { const char *method; @@ -201,9 +220,9 @@ static int get_set_conduit_method(struct device_node *np) } if (!strcmp("hvc", method)) { - invoke_psci_fn = __invoke_psci_fn_hvc; + set_conduit(PSCI_CONDUIT_HVC); } else if (!strcmp("smc", method)) { - invoke_psci_fn = __invoke_psci_fn_smc; + set_conduit(PSCI_CONDUIT_SMC); } else { pr_warn("invalid \"method\" property: %s\n", method); return -EINVAL; @@ -428,6 +447,31 @@ static void __init psci_init_migrate(void) pr_info("Trusted OS resident on physical CPU 0x%lx\n", cpuid); } +static void __init psci_init_smccc(void) +{ + u32 ver = ARM_SMCCC_VERSION_1_0; + int feature; + + feature = psci_features(ARM_SMCCC_VERSION_FUNC_ID); + + if (feature != PSCI_RET_NOT_SUPPORTED) { + u32 ret; + ret = invoke_psci_fn(ARM_SMCCC_VERSION_FUNC_ID, 0, 0, 0); + if (ret == ARM_SMCCC_VERSION_1_1) { + psci_ops.smccc_version = SMCCC_VERSION_1_1; + ver = ret; + } + } 
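On the cper.c class_code fix above: the three class-code bytes are stored low byte first (programming interface, then subclass, then base class), so printing p[0]p[1]p[2] reversed the value. Worked example (illustrative): a PCI-to-PCI bridge has class code 0x060400, stored as p[0]=0x00, p[1]=0x04, p[2]=0x06; printing p[2], p[1], p[0] yields the expected "060400" where the old order produced "000406".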
+ + /* + * Conveniently, the SMCCC and PSCI versions are encoded the + * same way. No, this isn't accidental. + */ + pr_info("SMC Calling Convention v%d.%d\n", + PSCI_VERSION_MAJOR(ver), PSCI_VERSION_MINOR(ver)); + +} + static void __init psci_0_2_set_functions(void) { pr_info("Using standard PSCI v0.2 function IDs\n"); @@ -476,6 +520,7 @@ static int __init psci_probe(void) psci_init_migrate(); if (PSCI_VERSION_MAJOR(ver) >= 1) { + psci_init_smccc(); psci_init_cpu_suspend(); psci_init_system_suspend(); } @@ -589,9 +634,9 @@ int __init psci_acpi_init(void) pr_info("probing for conduit method from ACPI.\n"); if (acpi_psci_use_hvc()) - invoke_psci_fn = __invoke_psci_fn_hvc; + set_conduit(PSCI_CONDUIT_HVC); else - invoke_psci_fn = __invoke_psci_fn_smc; + set_conduit(PSCI_CONDUIT_SMC); return psci_probe(); } diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c index a5c824078472..e35e603710b4 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c @@ -406,6 +406,9 @@ static int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file if (sh_num == AMDGPU_INFO_MMR_SH_INDEX_MASK) sh_num = 0xffffffff; + if (info->read_mmr_reg.count > 128) + return -EINVAL; + regs = kmalloc_array(info->read_mmr_reg.count, sizeof(*regs), GFP_KERNEL); if (!regs) return -ENOMEM; diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c index d3f3e33230f2..edef90810c84 100644 --- a/drivers/gpu/drm/drm_edid.c +++ b/drivers/gpu/drm/drm_edid.c @@ -158,6 +158,9 @@ static struct edid_quirk { /* Medion MD 30217 PG */ { "MED", 0x7b8, EDID_QUIRK_PREFER_LARGE_75 }, + /* Lenovo G50 */ + { "SDC", 18514, EDID_QUIRK_FORCE_6BPC }, + /* Panel in Samsung NP700G7A-S01PL notebook reports 6bpc */ { "SEC", 0xd033, EDID_QUIRK_FORCE_8BPC }, diff --git a/drivers/gpu/drm/drm_probe_helper.c b/drivers/gpu/drm/drm_probe_helper.c index 81dfc23e1c6a..e83ba7c61d90 100644 --- a/drivers/gpu/drm/drm_probe_helper.c +++ b/drivers/gpu/drm/drm_probe_helper.c @@ -326,6 +326,9 @@ static void output_poll_execute(struct work_struct *work) enum drm_connector_status old_status; bool repoll = false, changed; + if (!dev->mode_config.poll_enabled) + return; + /* Pick up any changes detected by the probe functions. */ changed = dev->mode_config.delayed_event; dev->mode_config.delayed_event = false; @@ -489,7 +492,11 @@ EXPORT_SYMBOL(drm_kms_helper_poll_init); */ void drm_kms_helper_poll_fini(struct drm_device *dev) { - drm_kms_helper_poll_disable(dev); + if (!dev->mode_config.poll_enabled) + return; + + dev->mode_config.poll_enabled = false; + cancel_delayed_work_sync(&dev->mode_config.output_poll_work); } EXPORT_SYMBOL(drm_kms_helper_poll_fini); diff --git a/drivers/gpu/drm/i915/i915_cmd_parser.c b/drivers/gpu/drm/i915/i915_cmd_parser.c index db58c8d664c2..6188b70c2790 100644 --- a/drivers/gpu/drm/i915/i915_cmd_parser.c +++ b/drivers/gpu/drm/i915/i915_cmd_parser.c @@ -50,13 +50,11 @@ * granting userspace undue privileges. There are three categories of privilege. * * First, commands which are explicitly defined as privileged or which should - * only be used by the kernel driver. The parser generally rejects such - * commands, though it may allow some from the drm master process. + * only be used by the kernel driver. The parser rejects such commands * * Second, commands which access registers. 
To support correct/enhanced * userspace functionality, particularly certain OpenGL extensions, the parser - * provides a whitelist of registers which userspace may safely access (for both - * normal and drm master processes). + * provides a whitelist of registers which userspace may safely access * * Third, commands which access privileged memory (i.e. GGTT, HWS page, etc). * The parser always rejects such commands. @@ -81,9 +79,9 @@ * in the per-ring command tables. * * Other command table entries map fairly directly to high level categories - * mentioned above: rejected, master-only, register whitelist. The parser - * implements a number of checks, including the privileged memory checks, via a - * general bitmasking mechanism. + * mentioned above: rejected, register whitelist. The parser implements a number + * of checks, including the privileged memory checks, via a general bitmasking + * mechanism. */ #define STD_MI_OPCODE_MASK 0xFF800000 @@ -94,7 +92,7 @@ #define CMD(op, opm, f, lm, fl, ...) \ { \ .flags = (fl) | ((f) ? CMD_DESC_FIXED : 0), \ - .cmd = { (op), (opm) }, \ + .cmd = { (op) & (opm), (opm) }, \ .length = { (lm) }, \ __VA_ARGS__ \ } @@ -109,14 +107,13 @@ #define R CMD_DESC_REJECT #define W CMD_DESC_REGISTER #define B CMD_DESC_BITMASK -#define M CMD_DESC_MASTER /* Command Mask Fixed Len Action ---------------------------------------------------------- */ -static const struct drm_i915_cmd_descriptor common_cmds[] = { +static const struct drm_i915_cmd_descriptor gen7_common_cmds[] = { CMD( MI_NOOP, SMI, F, 1, S ), CMD( MI_USER_INTERRUPT, SMI, F, 1, R ), - CMD( MI_WAIT_FOR_EVENT, SMI, F, 1, M ), + CMD( MI_WAIT_FOR_EVENT, SMI, F, 1, R ), CMD( MI_ARB_CHECK, SMI, F, 1, S ), CMD( MI_REPORT_HEAD, SMI, F, 1, S ), CMD( MI_SUSPEND_FLUSH, SMI, F, 1, S ), @@ -146,7 +143,7 @@ static const struct drm_i915_cmd_descriptor common_cmds[] = { CMD( MI_BATCH_BUFFER_START, SMI, !F, 0xFF, S ), }; -static const struct drm_i915_cmd_descriptor render_cmds[] = { +static const struct drm_i915_cmd_descriptor gen7_render_cmds[] = { CMD( MI_FLUSH, SMI, F, 1, S ), CMD( MI_ARB_ON_OFF, SMI, F, 1, R ), CMD( MI_PREDICATE, SMI, F, 1, S ), @@ -213,7 +210,7 @@ static const struct drm_i915_cmd_descriptor hsw_render_cmds[] = { CMD( MI_URB_ATOMIC_ALLOC, SMI, F, 1, S ), CMD( MI_SET_APPID, SMI, F, 1, S ), CMD( MI_RS_CONTEXT, SMI, F, 1, S ), - CMD( MI_LOAD_SCAN_LINES_INCL, SMI, !F, 0x3F, M ), + CMD( MI_LOAD_SCAN_LINES_INCL, SMI, !F, 0x3F, R ), CMD( MI_LOAD_SCAN_LINES_EXCL, SMI, !F, 0x3F, R ), CMD( MI_LOAD_REGISTER_REG, SMI, !F, 0xFF, R ), CMD( MI_RS_STORE_DATA_IMM, SMI, !F, 0xFF, S ), @@ -229,7 +226,7 @@ static const struct drm_i915_cmd_descriptor hsw_render_cmds[] = { CMD( GFX_OP_3DSTATE_BINDING_TABLE_EDIT_PS, S3D, !F, 0x1FF, S ), }; -static const struct drm_i915_cmd_descriptor video_cmds[] = { +static const struct drm_i915_cmd_descriptor gen7_video_cmds[] = { CMD( MI_ARB_ON_OFF, SMI, F, 1, R ), CMD( MI_SET_APPID, SMI, F, 1, S ), CMD( MI_STORE_DWORD_IMM, SMI, !F, 0xFF, B, @@ -273,7 +270,7 @@ static const struct drm_i915_cmd_descriptor video_cmds[] = { CMD( MFX_WAIT, SMFX, F, 1, S ), }; -static const struct drm_i915_cmd_descriptor vecs_cmds[] = { +static const struct drm_i915_cmd_descriptor gen7_vecs_cmds[] = { CMD( MI_ARB_ON_OFF, SMI, F, 1, R ), CMD( MI_SET_APPID, SMI, F, 1, S ), CMD( MI_STORE_DWORD_IMM, SMI, !F, 0xFF, B, @@ -311,7 +308,7 @@ static const struct drm_i915_cmd_descriptor vecs_cmds[] = { }}, ), }; -static const struct drm_i915_cmd_descriptor blt_cmds[] = { +static const struct drm_i915_cmd_descriptor 
gen7_blt_cmds[] = { CMD( MI_DISPLAY_FLIP, SMI, !F, 0xFF, R ), CMD( MI_STORE_DWORD_IMM, SMI, !F, 0x3FF, B, .bits = {{ @@ -345,10 +342,62 @@ static const struct drm_i915_cmd_descriptor blt_cmds[] = { }; static const struct drm_i915_cmd_descriptor hsw_blt_cmds[] = { - CMD( MI_LOAD_SCAN_LINES_INCL, SMI, !F, 0x3F, M ), + CMD( MI_LOAD_SCAN_LINES_INCL, SMI, !F, 0x3F, R ), CMD( MI_LOAD_SCAN_LINES_EXCL, SMI, !F, 0x3F, R ), }; +/* + * For Gen9 we can still rely on the h/w to enforce cmd security, and only + * need to re-enforce the register access checks. We therefore only need to + * teach the cmdparser how to find the end of each command, and identify + * register accesses. The table doesn't need to reject any commands, and so + * the only commands listed here are: + * 1) Those that touch registers + * 2) Those that do not have the default 8-bit length + * + * Note that the default MI length mask chosen for this table is 0xFF, not + * the 0x3F used on older devices. This is because the vast majority of MI + * cmds on Gen9 use a standard 8-bit Length field. + * All the Gen9 blitter instructions are standard 0xFF length mask, and + * none allow access to non-general registers, so in fact no BLT cmds are + * included in the table at all. + * + */ +static const struct drm_i915_cmd_descriptor gen9_blt_cmds[] = { + CMD( MI_NOOP, SMI, F, 1, S ), + CMD( MI_USER_INTERRUPT, SMI, F, 1, S ), + CMD( MI_WAIT_FOR_EVENT, SMI, F, 1, S ), + CMD( MI_FLUSH, SMI, F, 1, S ), + CMD( MI_ARB_CHECK, SMI, F, 1, S ), + CMD( MI_REPORT_HEAD, SMI, F, 1, S ), + CMD( MI_ARB_ON_OFF, SMI, F, 1, S ), + CMD( MI_SUSPEND_FLUSH, SMI, F, 1, S ), + CMD( MI_LOAD_SCAN_LINES_INCL, SMI, !F, 0x3F, S ), + CMD( MI_LOAD_SCAN_LINES_EXCL, SMI, !F, 0x3F, S ), + CMD( MI_STORE_DWORD_IMM, SMI, !F, 0x3FF, S ), + CMD( MI_LOAD_REGISTER_IMM(1), SMI, !F, 0xFF, W, + .reg = { .offset = 1, .mask = 0x007FFFFC, .step = 2 } ), + CMD( MI_UPDATE_GTT, SMI, !F, 0x3FF, S ), + CMD( MI_STORE_REGISTER_MEM_GEN8, SMI, F, 4, W, + .reg = { .offset = 1, .mask = 0x007FFFFC } ), + CMD( MI_FLUSH_DW, SMI, !F, 0x3F, S ), + CMD( MI_LOAD_REGISTER_MEM_GEN8, SMI, F, 4, W, + .reg = { .offset = 1, .mask = 0x007FFFFC } ), + CMD( MI_LOAD_REGISTER_REG, SMI, !F, 0xFF, W, + .reg = { .offset = 1, .mask = 0x007FFFFC, .step = 1 } ), + + /* + * We allow BB_START but apply further checks. We just sanitize the + * basic fields here. 
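A minimal sketch of how the descriptor length masks in the tables above are consumed, condensed (as an assumption) from the parser loop later in this file: the mask selects the command's length field and a fixed bias is added, which is how the scanner steps from one command to the next even for commands it takes no other action on. Here desc is the matched drm_i915_cmd_descriptor and cmd points at the current dword:

/* Illustrative stepping logic, not a verbatim quote of the parser. */
u32 header = *cmd;
u32 length;

if (desc->flags & CMD_DESC_FIXED)
	length = desc->length.fixed;
else
	length = (header & desc->length.mask) + LENGTH_BIAS;	/* LENGTH_BIAS == 2 */

cmd += length;	/* advance to the next command in the batch */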
+ */ + CMD( MI_BATCH_BUFFER_START_GEN8, SMI, !F, 0xFF, B, + .bits = {{ + .offset = 0, + .mask = ~SMI, + .expected = (MI_BATCH_PPGTT_HSW | 1), + }}, ), +}; + #undef CMD #undef SMI #undef S3D @@ -359,40 +408,44 @@ static const struct drm_i915_cmd_descriptor hsw_blt_cmds[] = { #undef R #undef W #undef B -#undef M -static const struct drm_i915_cmd_table gen7_render_cmds[] = { - { common_cmds, ARRAY_SIZE(common_cmds) }, - { render_cmds, ARRAY_SIZE(render_cmds) }, +static const struct drm_i915_cmd_table gen7_render_cmd_table[] = { + { gen7_common_cmds, ARRAY_SIZE(gen7_common_cmds) }, + { gen7_render_cmds, ARRAY_SIZE(gen7_render_cmds) }, }; -static const struct drm_i915_cmd_table hsw_render_ring_cmds[] = { - { common_cmds, ARRAY_SIZE(common_cmds) }, - { render_cmds, ARRAY_SIZE(render_cmds) }, +static const struct drm_i915_cmd_table hsw_render_ring_cmd_table[] = { + { gen7_common_cmds, ARRAY_SIZE(gen7_common_cmds) }, + { gen7_render_cmds, ARRAY_SIZE(gen7_render_cmds) }, { hsw_render_cmds, ARRAY_SIZE(hsw_render_cmds) }, }; -static const struct drm_i915_cmd_table gen7_video_cmds[] = { - { common_cmds, ARRAY_SIZE(common_cmds) }, - { video_cmds, ARRAY_SIZE(video_cmds) }, +static const struct drm_i915_cmd_table gen7_video_cmd_table[] = { + { gen7_common_cmds, ARRAY_SIZE(gen7_common_cmds) }, + { gen7_video_cmds, ARRAY_SIZE(gen7_video_cmds) }, }; -static const struct drm_i915_cmd_table hsw_vebox_cmds[] = { - { common_cmds, ARRAY_SIZE(common_cmds) }, - { vecs_cmds, ARRAY_SIZE(vecs_cmds) }, +static const struct drm_i915_cmd_table hsw_vebox_cmd_table[] = { + { gen7_common_cmds, ARRAY_SIZE(gen7_common_cmds) }, + { gen7_vecs_cmds, ARRAY_SIZE(gen7_vecs_cmds) }, }; -static const struct drm_i915_cmd_table gen7_blt_cmds[] = { - { common_cmds, ARRAY_SIZE(common_cmds) }, - { blt_cmds, ARRAY_SIZE(blt_cmds) }, +static const struct drm_i915_cmd_table gen7_blt_cmd_table[] = { + { gen7_common_cmds, ARRAY_SIZE(gen7_common_cmds) }, + { gen7_blt_cmds, ARRAY_SIZE(gen7_blt_cmds) }, }; -static const struct drm_i915_cmd_table hsw_blt_ring_cmds[] = { - { common_cmds, ARRAY_SIZE(common_cmds) }, - { blt_cmds, ARRAY_SIZE(blt_cmds) }, +static const struct drm_i915_cmd_table hsw_blt_ring_cmd_table[] = { + { gen7_common_cmds, ARRAY_SIZE(gen7_common_cmds) }, + { gen7_blt_cmds, ARRAY_SIZE(gen7_blt_cmds) }, { hsw_blt_cmds, ARRAY_SIZE(hsw_blt_cmds) }, }; +static const struct drm_i915_cmd_table gen9_blt_cmd_table[] = { + { gen9_blt_cmds, ARRAY_SIZE(gen9_blt_cmds) }, +}; + + /* * Register whitelists, sorted by increasing register offset. 
*/ @@ -426,6 +479,10 @@ struct drm_i915_reg_descriptor { #define REG64(addr) \ REG32(addr), REG32(addr + sizeof(u32)) +#define REG64_IDX(_reg, idx) \ + { .addr = _reg(idx) }, \ + { .addr = _reg ## _UDW(idx) } + static const struct drm_i915_reg_descriptor gen7_render_regs[] = { REG64(GPGPU_THREADS_DISPATCHED), REG64(HS_INVOCATION_COUNT), @@ -479,17 +536,27 @@ static const struct drm_i915_reg_descriptor gen7_blt_regs[] = { REG32(BCS_SWCTRL), }; -static const struct drm_i915_reg_descriptor ivb_master_regs[] = { - REG32(FORCEWAKE_MT), - REG32(DERRMR), - REG32(GEN7_PIPE_DE_LOAD_SL(PIPE_A)), - REG32(GEN7_PIPE_DE_LOAD_SL(PIPE_B)), - REG32(GEN7_PIPE_DE_LOAD_SL(PIPE_C)), -}; - -static const struct drm_i915_reg_descriptor hsw_master_regs[] = { - REG32(FORCEWAKE_MT), - REG32(DERRMR), +static const struct drm_i915_reg_descriptor gen9_blt_regs[] = { + REG64_IDX(RING_TIMESTAMP, RENDER_RING_BASE), + REG64_IDX(RING_TIMESTAMP, BSD_RING_BASE), + REG32(BCS_SWCTRL), + REG64_IDX(RING_TIMESTAMP, BLT_RING_BASE), + REG64_IDX(BCS_GPR, 0), + REG64_IDX(BCS_GPR, 1), + REG64_IDX(BCS_GPR, 2), + REG64_IDX(BCS_GPR, 3), + REG64_IDX(BCS_GPR, 4), + REG64_IDX(BCS_GPR, 5), + REG64_IDX(BCS_GPR, 6), + REG64_IDX(BCS_GPR, 7), + REG64_IDX(BCS_GPR, 8), + REG64_IDX(BCS_GPR, 9), + REG64_IDX(BCS_GPR, 10), + REG64_IDX(BCS_GPR, 11), + REG64_IDX(BCS_GPR, 12), + REG64_IDX(BCS_GPR, 13), + REG64_IDX(BCS_GPR, 14), + REG64_IDX(BCS_GPR, 15), }; #undef REG64 @@ -550,6 +617,17 @@ static u32 gen7_blt_get_cmd_length_mask(u32 cmd_header) return 0; } +static u32 gen9_blt_get_cmd_length_mask(u32 cmd_header) +{ + u32 client = (cmd_header & INSTR_CLIENT_MASK) >> INSTR_CLIENT_SHIFT; + + if (client == INSTR_MI_CLIENT || client == INSTR_BC_CLIENT) + return 0xFF; + + DRM_DEBUG_DRIVER("CMD: Abnormal blt cmd length! 0x%08X\n", cmd_header); + return 0; +} + static bool validate_cmds_sorted(struct intel_engine_cs *ring, const struct drm_i915_cmd_table *cmd_tables, int cmd_table_count) @@ -608,9 +686,7 @@ static bool check_sorted(int ring_id, static bool validate_regs_sorted(struct intel_engine_cs *ring) { - return check_sorted(ring->id, ring->reg_table, ring->reg_count) && - check_sorted(ring->id, ring->master_reg_table, - ring->master_reg_count); + return check_sorted(ring->id, ring->reg_table, ring->reg_count); } struct cmd_node { @@ -691,63 +767,61 @@ int i915_cmd_parser_init_ring(struct intel_engine_cs *ring) int cmd_table_count; int ret; - if (!IS_GEN7(ring->dev)) + if (!IS_GEN7(ring->dev) && !(IS_GEN9(ring->dev) && ring->id == BCS)) return 0; switch (ring->id) { case RCS: if (IS_HASWELL(ring->dev)) { - cmd_tables = hsw_render_ring_cmds; + cmd_tables = hsw_render_ring_cmd_table; cmd_table_count = - ARRAY_SIZE(hsw_render_ring_cmds); + ARRAY_SIZE(hsw_render_ring_cmd_table); } else { - cmd_tables = gen7_render_cmds; - cmd_table_count = ARRAY_SIZE(gen7_render_cmds); + cmd_tables = gen7_render_cmd_table; + cmd_table_count = ARRAY_SIZE(gen7_render_cmd_table); } ring->reg_table = gen7_render_regs; ring->reg_count = ARRAY_SIZE(gen7_render_regs); - if (IS_HASWELL(ring->dev)) { - ring->master_reg_table = hsw_master_regs; - ring->master_reg_count = ARRAY_SIZE(hsw_master_regs); - } else { - ring->master_reg_table = ivb_master_regs; - ring->master_reg_count = ARRAY_SIZE(ivb_master_regs); - } - ring->get_cmd_length_mask = gen7_render_get_cmd_length_mask; break; case VCS: - cmd_tables = gen7_video_cmds; - cmd_table_count = ARRAY_SIZE(gen7_video_cmds); + cmd_tables = gen7_video_cmd_table; + cmd_table_count = ARRAY_SIZE(gen7_video_cmd_table); ring->get_cmd_length_mask = 
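Worked expansion of the new REG64_IDX() helper above, using the BCS_GPR definitions added to i915_reg.h later in this diff: REG64_IDX(BCS_GPR, 0) emits two whitelist entries, one per half of the 64-bit general purpose register.

/* REG64_IDX(BCS_GPR, 0) expands to: */
{ .addr = BCS_GPR(0) },		/* 0x22600 + 0 * 8     = 0x22600 (low dword)  */
{ .addr = BCS_GPR_UDW(0) },	/* 0x22600 + 0 * 8 + 4 = 0x22604 (high dword) */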
gen7_bsd_get_cmd_length_mask; break; case BCS: - if (IS_HASWELL(ring->dev)) { - cmd_tables = hsw_blt_ring_cmds; - cmd_table_count = ARRAY_SIZE(hsw_blt_ring_cmds); + ring->get_cmd_length_mask = gen7_blt_get_cmd_length_mask; + if (IS_GEN9(ring->dev)) { + cmd_tables = gen9_blt_cmd_table; + cmd_table_count = ARRAY_SIZE(gen9_blt_cmd_table); + ring->get_cmd_length_mask = + gen9_blt_get_cmd_length_mask; + + /* BCS Engine unsafe without parser */ + ring->requires_cmd_parser = 1; + } + else if (IS_HASWELL(ring->dev)) { + cmd_tables = hsw_blt_ring_cmd_table; + cmd_table_count = ARRAY_SIZE(hsw_blt_ring_cmd_table); } else { - cmd_tables = gen7_blt_cmds; - cmd_table_count = ARRAY_SIZE(gen7_blt_cmds); + cmd_tables = gen7_blt_cmd_table; + cmd_table_count = ARRAY_SIZE(gen7_blt_cmd_table); } - ring->reg_table = gen7_blt_regs; - ring->reg_count = ARRAY_SIZE(gen7_blt_regs); - - if (IS_HASWELL(ring->dev)) { - ring->master_reg_table = hsw_master_regs; - ring->master_reg_count = ARRAY_SIZE(hsw_master_regs); + if (IS_GEN9(ring->dev)) { + ring->reg_table = gen9_blt_regs; + ring->reg_count = ARRAY_SIZE(gen9_blt_regs); } else { - ring->master_reg_table = ivb_master_regs; - ring->master_reg_count = ARRAY_SIZE(ivb_master_regs); + ring->reg_table = gen7_blt_regs; + ring->reg_count = ARRAY_SIZE(gen7_blt_regs); } - ring->get_cmd_length_mask = gen7_blt_get_cmd_length_mask; break; case VECS: - cmd_tables = hsw_vebox_cmds; - cmd_table_count = ARRAY_SIZE(hsw_vebox_cmds); + cmd_tables = hsw_vebox_cmd_table; + cmd_table_count = ARRAY_SIZE(hsw_vebox_cmd_table); /* VECS can use the same length_mask function as VCS */ ring->get_cmd_length_mask = gen7_bsd_get_cmd_length_mask; break; @@ -769,7 +843,7 @@ int i915_cmd_parser_init_ring(struct intel_engine_cs *ring) return ret; } - ring->needs_cmd_parser = true; + ring->using_cmd_parser = true; return 0; } @@ -783,7 +857,7 @@ int i915_cmd_parser_init_ring(struct intel_engine_cs *ring) */ void i915_cmd_parser_fini_ring(struct intel_engine_cs *ring) { - if (!ring->needs_cmd_parser) + if (!ring->using_cmd_parser) return; fini_hash_table(ring); @@ -949,30 +1023,9 @@ unpin_src: return ret ? ERR_PTR(ret) : dst; } -/** - * i915_needs_cmd_parser() - should a given ring use software command parsing? - * @ring: the ring in question - * - * Only certain platforms require software batch buffer command parsing, and - * only when enabled via module parameter. 
- * - * Return: true if the ring requires software command parsing - */ -bool i915_needs_cmd_parser(struct intel_engine_cs *ring) -{ - if (!ring->needs_cmd_parser) - return false; - - if (!USES_PPGTT(ring->dev)) - return false; - - return (i915.enable_cmd_parser == 1); -} - -static bool check_cmd(const struct intel_engine_cs *ring, +static int check_cmd(const struct intel_engine_cs *ring, const struct drm_i915_cmd_descriptor *desc, const u32 *cmd, u32 length, - const bool is_master, bool *oacontrol_set) { if (desc->flags & CMD_DESC_REJECT) { @@ -980,12 +1033,6 @@ static bool check_cmd(const struct intel_engine_cs *ring, return false; } - if ((desc->flags & CMD_DESC_MASTER) && !is_master) { - DRM_DEBUG_DRIVER("CMD: Rejected master-only command: 0x%08X\n", - *cmd); - return false; - } - if (desc->flags & CMD_DESC_REGISTER) { /* * Get the distance between individual register offset @@ -1002,11 +1049,6 @@ static bool check_cmd(const struct intel_engine_cs *ring, find_reg(ring->reg_table, ring->reg_count, reg_addr); - if (!reg && is_master) - reg = find_reg(ring->master_reg_table, - ring->master_reg_count, - reg_addr); - if (!reg) { DRM_DEBUG_DRIVER("CMD: Rejected register 0x%08X in command: 0x%08X (ring=%d)\n", reg_addr, *cmd, ring->id); @@ -1091,16 +1133,113 @@ static bool check_cmd(const struct intel_engine_cs *ring, return true; } +static int check_bbstart(struct intel_context *ctx, + u32 *cmd, u64 offset, u32 length, + u32 batch_len, + u64 batch_start, + u64 shadow_batch_start) +{ + + u64 jump_offset, jump_target; + u32 target_cmd_offset, target_cmd_index; + + /* For igt compatibility on older platforms */ + if (CMDPARSER_USES_GGTT(ctx->i915)) { + DRM_DEBUG("CMD: Rejecting BB_START for ggtt based submission\n"); + return -EACCES; + } + + if (length != 3) { + DRM_DEBUG("CMD: Recursive BB_START with bad length(%u)\n", + length); + return -EINVAL; + } + + jump_target = *(u64*)(cmd+1); + jump_offset = jump_target - batch_start; + + /* + * Any underflow of jump_target is guaranteed to be outside the range + * of a u32, so >= test catches both too large and too small + */ + if (jump_offset >= batch_len) { + DRM_DEBUG("CMD: BB_START to 0x%llx jumps out of BB\n", + jump_target); + return -EINVAL; + } + + /* + * This cannot overflow a u32 because we already checked jump_offset + * is within the BB, and the batch_len is a u32 + */ + target_cmd_offset = lower_32_bits(jump_offset); + target_cmd_index = target_cmd_offset / sizeof(u32); + + *(u64*)(cmd + 1) = shadow_batch_start + target_cmd_offset; + + if (target_cmd_index == offset) + return 0; + + if (ctx->jump_whitelist_cmds <= target_cmd_index) { + DRM_DEBUG("CMD: Rejecting BB_START - truncated whitelist array\n"); + return -EINVAL; + } else if (!test_bit(target_cmd_index, ctx->jump_whitelist)) { + DRM_DEBUG("CMD: BB_START to 0x%llx not a previously executed cmd\n", + jump_target); + return -EINVAL; + } + + return 0; +} + +static void init_whitelist(struct intel_context *ctx, u32 batch_len) +{ + const u32 batch_cmds = DIV_ROUND_UP(batch_len, sizeof(u32)); + const u32 exact_size = BITS_TO_LONGS(batch_cmds); + u32 next_size = BITS_TO_LONGS(roundup_pow_of_two(batch_cmds)); + unsigned long *next_whitelist; + + if (CMDPARSER_USES_GGTT(ctx->i915)) + return; + + if (batch_cmds <= ctx->jump_whitelist_cmds) { + bitmap_zero(ctx->jump_whitelist, batch_cmds); + return; + } + +again: + next_whitelist = kcalloc(next_size, sizeof(long), GFP_KERNEL); + if (next_whitelist) { + kfree(ctx->jump_whitelist); + ctx->jump_whitelist = next_whitelist; + 
ctx->jump_whitelist_cmds = + next_size * BITS_PER_BYTE * sizeof(long); + return; + } + + if (next_size > exact_size) { + next_size = exact_size; + goto again; + } + + DRM_DEBUG("CMD: Failed to extend whitelist. BB_START may be disallowed\n"); + bitmap_zero(ctx->jump_whitelist, ctx->jump_whitelist_cmds); + + return; +} + #define LENGTH_BIAS 2 /** * i915_parse_cmds() - parse a submitted batch buffer for privilege violations + * @ctx: the context in which the batch is to execute * @ring: the ring on which the batch is to execute * @batch_obj: the batch buffer in question - * @shadow_batch_obj: copy of the batch buffer in question + * @user_batch_start: Canonical base address of original user batch * @batch_start_offset: byte offset in the batch at which execution starts * @batch_len: length of the commands in batch_obj - * @is_master: is the submitting process the drm master? + * @shadow_batch_obj: copy of the batch buffer in question + * @shadow_batch_start: Canonical base address of shadow_batch_obj * * Parses the specified batch buffer looking for privilege violations as * described in the overview. @@ -1108,14 +1247,16 @@ static bool check_cmd(const struct intel_engine_cs *ring, * Return: non-zero if the parser finds violations or otherwise fails; -EACCES * if the batch appears legal but should use hardware parsing */ -int i915_parse_cmds(struct intel_engine_cs *ring, +int i915_parse_cmds(struct intel_context *ctx, + struct intel_engine_cs *ring, struct drm_i915_gem_object *batch_obj, - struct drm_i915_gem_object *shadow_batch_obj, + u64 user_batch_start, u32 batch_start_offset, u32 batch_len, - bool is_master) + struct drm_i915_gem_object *shadow_batch_obj, + u64 shadow_batch_start) { - u32 *cmd, *batch_base, *batch_end; + u32 *cmd, *batch_base, *batch_end, offset = 0; struct drm_i915_cmd_descriptor default_desc = { 0 }; bool oacontrol_set = false; /* OACONTROL tracking. See check_cmd() */ int ret = 0; @@ -1127,6 +1268,8 @@ int i915_parse_cmds(struct intel_engine_cs *ring, return PTR_ERR(batch_base); } + init_whitelist(ctx, batch_len); + /* * We use the batch length as size because the shadow object is as * large or larger and copy_batch() will write MI_NOPs to the extra @@ -1150,16 +1293,6 @@ int i915_parse_cmds(struct intel_engine_cs *ring, break; } - /* - * If the batch buffer contains a chained batch, return an - * error that tells the caller to abort and dispatch the - * workload as a non-secure batch. - */ - if (desc->cmd.value == MI_BATCH_BUFFER_START) { - ret = -EACCES; - break; - } - if (desc->flags & CMD_DESC_FIXED) length = desc->length.fixed; else @@ -1174,13 +1307,23 @@ int i915_parse_cmds(struct intel_engine_cs *ring, break; } - if (!check_cmd(ring, desc, cmd, length, is_master, - &oacontrol_set)) { - ret = -EINVAL; + if (!check_cmd(ring, desc, cmd, length, &oacontrol_set)) { + ret = CMDPARSER_USES_GGTT(ring->dev) ? 
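A worked sizing example for init_whitelist() above (illustrative numbers): a 4 KiB batch contains 4096 / 4 = 1024 dword slots, so the jump whitelist needs BITS_TO_LONGS(1024) = 16 longs on a 64-bit kernel (16 * 64 = 1024 bits), and jump_whitelist_cmds is then recorded as 16 * BITS_PER_BYTE * sizeof(long) = 1024.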
-EINVAL : -EACCES; break; } + if (desc->cmd.value == MI_BATCH_BUFFER_START) { + ret = check_bbstart(ctx, cmd, offset, length, + batch_len, user_batch_start, + shadow_batch_start); + break; + } + + if (ctx->jump_whitelist_cmds > offset) + set_bit(offset, ctx->jump_whitelist); + cmd += length; + offset += length; } if (oacontrol_set) { @@ -1206,7 +1349,7 @@ int i915_parse_cmds(struct intel_engine_cs *ring, * * Return: the current version number of the cmd parser */ -int i915_cmd_parser_get_version(void) +int i915_cmd_parser_get_version(struct drm_i915_private *dev_priv) { /* * Command parser version history @@ -1218,6 +1361,7 @@ int i915_cmd_parser_get_version(void) * 3. Allow access to the GPGPU_THREADS_DISPATCHED register. * 4. L3 atomic chicken bits of HSW_SCRATCH1 and HSW_ROW_CHICKEN3. * 5. GPGPU dispatch compute indirect registers. + * 10. Gen9 only - Supports the new ppgtt based BLIT parser */ - return 5; + return CMDPARSER_USES_GGTT(dev_priv) ? 5 : 10; } diff --git a/drivers/gpu/drm/i915/i915_dma.c b/drivers/gpu/drm/i915/i915_dma.c index 61fcb3b22297..7b61078c2330 100644 --- a/drivers/gpu/drm/i915/i915_dma.c +++ b/drivers/gpu/drm/i915/i915_dma.c @@ -133,7 +133,7 @@ static int i915_getparam(struct drm_device *dev, void *data, value = 1; break; case I915_PARAM_HAS_SECURE_BATCHES: - value = capable(CAP_SYS_ADMIN); + value = HAS_SECURE_BATCHES(dev_priv) && capable(CAP_SYS_ADMIN); break; case I915_PARAM_HAS_PINNED_BATCHES: value = 1; @@ -145,7 +145,7 @@ static int i915_getparam(struct drm_device *dev, void *data, value = 1; break; case I915_PARAM_CMD_PARSER_VERSION: - value = i915_cmd_parser_get_version(); + value = i915_cmd_parser_get_version(dev_priv); break; case I915_PARAM_HAS_COHERENT_PHYS_GTT: value = 1; diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c index a6ad938f44a6..697b2499c7a1 100644 --- a/drivers/gpu/drm/i915/i915_drv.c +++ b/drivers/gpu/drm/i915/i915_drv.c @@ -698,6 +698,8 @@ static int i915_drm_suspend_late(struct drm_device *drm_dev, bool hibernation) return ret; } + i915_rc6_ctx_wa_suspend(dev_priv); + pci_disable_device(drm_dev->pdev); /* * During hibernation on some platforms the BIOS may try to access @@ -849,6 +851,8 @@ static int i915_drm_resume_early(struct drm_device *dev) intel_uncore_sanitize(dev); intel_power_domains_init_hw(dev_priv); + i915_rc6_ctx_wa_resume(dev_priv); + return ret; } diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h index 6fca39e1c419..5799356f6b6b 100644 --- a/drivers/gpu/drm/i915/i915_drv.h +++ b/drivers/gpu/drm/i915/i915_drv.h @@ -891,6 +891,12 @@ struct intel_context { int pin_count; } engine[I915_NUM_RINGS]; + /* jump_whitelist: Bit array for tracking cmds during cmdparsing */ + unsigned long *jump_whitelist; + + /* jump_whitelist_cmds: No of cmd slots available */ + uint32_t jump_whitelist_cmds; + struct list_head link; }; @@ -1153,6 +1159,7 @@ struct intel_gen6_power_mgmt { bool client_boost; bool enabled; + bool ctx_corrupted; struct delayed_work delayed_resume_work; unsigned boosts; @@ -2539,6 +2546,9 @@ struct drm_i915_cmd_table { #define HAS_BSD2(dev) (INTEL_INFO(dev)->ring_mask & BSD2_RING) #define HAS_BLT(dev) (INTEL_INFO(dev)->ring_mask & BLT_RING) #define HAS_VEBOX(dev) (INTEL_INFO(dev)->ring_mask & VEBOX_RING) + +#define HAS_SECURE_BATCHES(dev_priv) (INTEL_INFO(dev_priv)->gen < 6) + #define HAS_LLC(dev) (INTEL_INFO(dev)->has_llc) #define HAS_WT(dev) ((IS_HASWELL(dev) || IS_BROADWELL(dev)) && \ __I915__(dev)->ellc_size) @@ -2553,8 +2563,18 @@ struct drm_i915_cmd_table 
{ #define HAS_OVERLAY(dev) (INTEL_INFO(dev)->has_overlay) #define OVERLAY_NEEDS_PHYSICAL(dev) (INTEL_INFO(dev)->overlay_needs_physical) +/* + * The Gen7 cmdparser copies the scanned buffer to the ggtt for execution + * All later gens can run the final buffer from the ppgtt + */ +#define CMDPARSER_USES_GGTT(dev_priv) IS_GEN7(dev_priv) + /* Early gen2 have a totally busted CS tlb and require pinned batches. */ #define HAS_BROKEN_CS_TLB(dev) (IS_I830(dev) || IS_845G(dev)) + +#define NEEDS_RC6_CTX_CORRUPTION_WA(dev) \ + (IS_BROADWELL(dev) || INTEL_INFO(dev)->gen == 9) + /* * dp aux and gmbus irq on gen4 seems to be able to generate legacy interrupts * even when in MSI mode. This results in spurious interrupt warnings if the @@ -3276,16 +3296,19 @@ void i915_get_extra_instdone(struct drm_device *dev, uint32_t *instdone); const char *i915_cache_level_str(struct drm_i915_private *i915, int type); /* i915_cmd_parser.c */ -int i915_cmd_parser_get_version(void); +int i915_cmd_parser_get_version(struct drm_i915_private *dev_priv); int i915_cmd_parser_init_ring(struct intel_engine_cs *ring); void i915_cmd_parser_fini_ring(struct intel_engine_cs *ring); bool i915_needs_cmd_parser(struct intel_engine_cs *ring); -int i915_parse_cmds(struct intel_engine_cs *ring, +int i915_parse_cmds(struct intel_context *cxt, + struct intel_engine_cs *ring, struct drm_i915_gem_object *batch_obj, - struct drm_i915_gem_object *shadow_batch_obj, + u64 user_batch_start, u32 batch_start_offset, u32 batch_len, - bool is_master); + struct drm_i915_gem_object *shadow_batch_obj, + u64 shadow_batch_start); + /* i915_suspend.c */ extern int i915_save_state(struct drm_device *dev); diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c index 0433d25f9d23..20fb0ee1df4f 100644 --- a/drivers/gpu/drm/i915/i915_gem_context.c +++ b/drivers/gpu/drm/i915/i915_gem_context.c @@ -157,6 +157,8 @@ void i915_gem_context_free(struct kref *ctx_ref) if (i915.enable_execlists) intel_lr_context_free(ctx); + kfree(ctx->jump_whitelist); + /* * This context is going away and we need to remove all VMAs still * around. 
This is to handle imported shared objects for which @@ -246,6 +248,9 @@ __create_hw_context(struct drm_device *dev, ctx->hang_stats.ban_period_seconds = DRM_I915_CTX_BAN_PERIOD; + ctx->jump_whitelist = NULL; + ctx->jump_whitelist_cmds = 0; + return ctx; err_out: diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c index 8800f410b2d2..56f6f5c5941e 100644 --- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c +++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c @@ -1121,17 +1121,52 @@ i915_reset_gen7_sol_offsets(struct drm_device *dev, return 0; } +static struct i915_vma* +shadow_batch_pin(struct drm_i915_gem_object *obj, struct i915_address_space *vm) +{ + struct drm_i915_private *dev_priv = to_i915(obj->base.dev); + struct i915_address_space *pin_vm = vm; + u64 flags; + int ret; + + /* + * PPGTT backed shadow buffers must be mapped RO, to prevent + * post-scan tampering + */ + if (CMDPARSER_USES_GGTT(dev_priv)) { + flags = PIN_GLOBAL; + pin_vm = &dev_priv->gtt.base; + } else if (vm->has_read_only) { + flags = PIN_USER; + obj->gt_ro = 1; + } else { + DRM_DEBUG("Cannot prevent post-scan tampering without RO capable vm\n"); + return ERR_PTR(-EINVAL); + } + + ret = i915_gem_object_pin(obj, pin_vm, 0, flags); + if (ret) + return ERR_PTR(ret); + else + return i915_gem_obj_to_vma(obj, pin_vm); +} + static struct drm_i915_gem_object* -i915_gem_execbuffer_parse(struct intel_engine_cs *ring, +i915_gem_execbuffer_parse(struct intel_context *ctx, + struct intel_engine_cs *ring, struct drm_i915_gem_exec_object2 *shadow_exec_entry, struct eb_vmas *eb, + struct i915_address_space *vm, struct drm_i915_gem_object *batch_obj, u32 batch_start_offset, - u32 batch_len, - bool is_master) + u32 batch_len) { struct drm_i915_gem_object *shadow_batch_obj; struct i915_vma *vma; + struct i915_vma *user_vma = list_entry(eb->vmas.prev, + typeof(*user_vma), exec_list); + u64 batch_start; + u64 shadow_batch_start; int ret; shadow_batch_obj = i915_gem_batch_pool_get(&ring->batch_pool, @@ -1139,24 +1174,34 @@ i915_gem_execbuffer_parse(struct intel_engine_cs *ring, if (IS_ERR(shadow_batch_obj)) return shadow_batch_obj; - ret = i915_parse_cmds(ring, + vma = shadow_batch_pin(shadow_batch_obj, vm); + if (IS_ERR(vma)) { + ret = PTR_ERR(vma); + goto err; + } + + batch_start = user_vma->node.start + batch_start_offset; + + shadow_batch_start = vma->node.start; + + ret = i915_parse_cmds(ctx, + ring, batch_obj, - shadow_batch_obj, + batch_start, batch_start_offset, batch_len, - is_master); - if (ret) - goto err; - - ret = i915_gem_obj_ggtt_pin(shadow_batch_obj, 0, 0); - if (ret) + shadow_batch_obj, + shadow_batch_start); + if (ret) { + WARN_ON(vma->pin_count == 0); + vma->pin_count--; goto err; + } i915_gem_object_unpin_pages(shadow_batch_obj); memset(shadow_exec_entry, 0, sizeof(*shadow_exec_entry)); - vma = i915_gem_obj_to_ggtt(shadow_batch_obj); vma->exec_entry = shadow_exec_entry; vma->exec_entry->flags = __EXEC_OBJECT_HAS_PIN; drm_gem_object_reference(&shadow_batch_obj->base); @@ -1168,7 +1213,14 @@ i915_gem_execbuffer_parse(struct intel_engine_cs *ring, err: i915_gem_object_unpin_pages(shadow_batch_obj); - if (ret == -EACCES) /* unhandled chained batch */ + + /* + * Unsafe GGTT-backed buffers can still be submitted safely + * as non-secure. 
+ * For PPGTT backing however, we have no choice but to forcibly + * reject unsafe buffers + */ + if (CMDPARSER_USES_GGTT(batch_obj->base.dev) && (ret == -EACCES)) return batch_obj; else return ERR_PTR(ret); @@ -1320,6 +1372,13 @@ eb_get_batch(struct eb_vmas *eb) return vma->obj; } +static inline bool use_cmdparser(const struct intel_engine_cs *ring, + u32 batch_len) +{ + return ring->requires_cmd_parser || + (ring->using_cmd_parser && batch_len && USES_PPGTT(ring->dev)); +} + static int i915_gem_do_execbuffer(struct drm_device *dev, void *data, struct drm_file *file, @@ -1349,6 +1408,10 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data, dispatch_flags = 0; if (args->flags & I915_EXEC_SECURE) { + /* Return -EPERM to trigger fallback code on old binaries. */ + if (!HAS_SECURE_BATCHES(dev_priv)) + return -EPERM; + if (!file->is_master || !capable(CAP_SYS_ADMIN)) return -EPERM; @@ -1487,16 +1550,20 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data, } params->args_batch_start_offset = args->batch_start_offset; - if (i915_needs_cmd_parser(ring) && args->batch_len) { + if (use_cmdparser(ring, args->batch_len)) { struct drm_i915_gem_object *parsed_batch_obj; - parsed_batch_obj = i915_gem_execbuffer_parse(ring, + u32 batch_off = args->batch_start_offset; + u32 batch_len = args->batch_len; + if (batch_len == 0) + batch_len = batch_obj->base.size - batch_off; + + parsed_batch_obj = i915_gem_execbuffer_parse(ctx, ring, &shadow_exec_entry, - eb, + eb, vm, batch_obj, - args->batch_start_offset, - args->batch_len, - file->is_master); + batch_off, + batch_len); if (IS_ERR(parsed_batch_obj)) { ret = PTR_ERR(parsed_batch_obj); goto err; @@ -1506,18 +1573,9 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data, * parsed_batch_obj == batch_obj means batch not fully parsed: * Accept, but don't promote to secure. */ - if (parsed_batch_obj != batch_obj) { - /* - * Batch parsed and accepted: - * - * Set the DISPATCH_SECURE bit to remove the NON_SECURE - * bit from MI_BATCH_BUFFER_START commands issued in - * the dispatch_execbuffer implementations. We - * specifically don't want that set on batches the - * command parser has accepted. - */ - dispatch_flags |= I915_DISPATCH_SECURE; + if (CMDPARSER_USES_GGTT(dev_priv)) + dispatch_flags |= I915_DISPATCH_SECURE; params->args_batch_start_offset = 0; batch_obj = parsed_batch_obj; } diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c index b37fe0df743e..65a53ee398b8 100644 --- a/drivers/gpu/drm/i915/i915_gem_gtt.c +++ b/drivers/gpu/drm/i915/i915_gem_gtt.c @@ -119,7 +119,8 @@ static int sanitize_enable_ppgtt(struct drm_device *dev, int enable_ppgtt) (enable_ppgtt == 0 || !has_aliasing_ppgtt)) return 0; - if (enable_ppgtt == 1) + /* Full PPGTT is required by the Gen9 cmdparser */ + if (enable_ppgtt == 1 && INTEL_INFO(dev)->gen != 9) return 1; if (enable_ppgtt == 2 && has_full_ppgtt) @@ -152,7 +153,8 @@ static int ppgtt_bind_vma(struct i915_vma *vma, { u32 pte_flags = 0; - /* Currently applicable only to VLV */ + /* Applicable to VLV, and gen8+ */ + pte_flags = 0; if (vma->obj->gt_ro) pte_flags |= PTE_READ_ONLY; @@ -172,11 +174,14 @@ static void ppgtt_unbind_vma(struct i915_vma *vma) static gen8_pte_t gen8_pte_encode(dma_addr_t addr, enum i915_cache_level level, - bool valid) + bool valid, u32 flags) { gen8_pte_t pte = valid ? 
_PAGE_PRESENT | _PAGE_RW : 0; pte |= addr; + if (unlikely(flags & PTE_READ_ONLY)) + pte &= ~_PAGE_RW; + switch (level) { case I915_CACHE_NONE: pte |= PPAT_UNCACHED_INDEX; @@ -460,7 +465,7 @@ static void gen8_initialize_pt(struct i915_address_space *vm, gen8_pte_t scratch_pte; scratch_pte = gen8_pte_encode(px_dma(vm->scratch_page), - I915_CACHE_LLC, true); + I915_CACHE_LLC, true, 0); fill_px(vm->dev, pt, scratch_pte); } @@ -757,8 +762,9 @@ static void gen8_ppgtt_clear_range(struct i915_address_space *vm, { struct i915_hw_ppgtt *ppgtt = container_of(vm, struct i915_hw_ppgtt, base); - gen8_pte_t scratch_pte = gen8_pte_encode(px_dma(vm->scratch_page), - I915_CACHE_LLC, use_scratch); + gen8_pte_t scratch_pte = + gen8_pte_encode(px_dma(vm->scratch_page), + I915_CACHE_LLC, use_scratch, 0); if (!USES_FULL_48BIT_PPGTT(vm->dev)) { gen8_ppgtt_clear_pte_range(vm, &ppgtt->pdp, start, length, @@ -779,7 +785,8 @@ gen8_ppgtt_insert_pte_entries(struct i915_address_space *vm, struct i915_page_directory_pointer *pdp, struct sg_page_iter *sg_iter, uint64_t start, - enum i915_cache_level cache_level) + enum i915_cache_level cache_level, + u32 flags) { struct i915_hw_ppgtt *ppgtt = container_of(vm, struct i915_hw_ppgtt, base); @@ -799,7 +806,7 @@ gen8_ppgtt_insert_pte_entries(struct i915_address_space *vm, pt_vaddr[pte] = gen8_pte_encode(sg_page_iter_dma_address(sg_iter), - cache_level, true); + cache_level, true, flags); if (++pte == GEN8_PTES) { kunmap_px(ppgtt, pt_vaddr); pt_vaddr = NULL; @@ -820,7 +827,7 @@ static void gen8_ppgtt_insert_entries(struct i915_address_space *vm, struct sg_table *pages, uint64_t start, enum i915_cache_level cache_level, - u32 unused) + u32 flags) { struct i915_hw_ppgtt *ppgtt = container_of(vm, struct i915_hw_ppgtt, base); @@ -830,7 +837,7 @@ static void gen8_ppgtt_insert_entries(struct i915_address_space *vm, if (!USES_FULL_48BIT_PPGTT(vm->dev)) { gen8_ppgtt_insert_pte_entries(vm, &ppgtt->pdp, &sg_iter, start, - cache_level); + cache_level, flags); } else { struct i915_page_directory_pointer *pdp; uint64_t templ4, pml4e; @@ -838,7 +845,7 @@ static void gen8_ppgtt_insert_entries(struct i915_address_space *vm, gen8_for_each_pml4e(pdp, &ppgtt->pml4, start, length, templ4, pml4e) { gen8_ppgtt_insert_pte_entries(vm, pdp, &sg_iter, - start, cache_level); + start, cache_level, flags); } } } @@ -1447,7 +1454,7 @@ static void gen8_dump_ppgtt(struct i915_hw_ppgtt *ppgtt, struct seq_file *m) uint64_t start = ppgtt->base.start; uint64_t length = ppgtt->base.total; gen8_pte_t scratch_pte = gen8_pte_encode(px_dma(vm->scratch_page), - I915_CACHE_LLC, true); + I915_CACHE_LLC, true, 0); if (!USES_FULL_48BIT_PPGTT(vm->dev)) { gen8_dump_pdp(&ppgtt->pdp, start, length, scratch_pte, m); @@ -1515,6 +1522,14 @@ static int gen8_ppgtt_init(struct i915_hw_ppgtt *ppgtt) ppgtt->base.clear_range = gen8_ppgtt_clear_range; ppgtt->base.unbind_vma = ppgtt_unbind_vma; ppgtt->base.bind_vma = ppgtt_bind_vma; + + /* + * From bdw, there is support for read-only pages in the PPGTT. + * + * XXX GVT is not honouring the lack of RW in the PTE bits. 
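A small usage sketch for the flags parameter added to gen8_pte_encode() above (illustrative call, not quoted from the patch): with PTE_READ_ONLY set, the encoder first builds _PAGE_PRESENT | _PAGE_RW for a valid entry and then clears _PAGE_RW, producing a present but write-protected PPGTT entry.

/* Illustrative: encode a valid, LLC-cached, read-only PTE for 'addr'. */
gen8_pte_t ro_pte = gen8_pte_encode(addr, I915_CACHE_LLC, true, PTE_READ_ONLY);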
+ */ + ppgtt->base.has_read_only = !intel_vgpu_active(ppgtt->base.dev); + ppgtt->debug_dump = gen8_dump_ppgtt; if (USES_FULL_48BIT_PPGTT(ppgtt->base.dev)) { @@ -2343,7 +2358,7 @@ static void gen8_set_pte(void __iomem *addr, gen8_pte_t pte) static void gen8_ggtt_insert_entries(struct i915_address_space *vm, struct sg_table *st, uint64_t start, - enum i915_cache_level level, u32 unused) + enum i915_cache_level level, u32 flags) { struct drm_i915_private *dev_priv = vm->dev->dev_private; unsigned first_entry = start >> PAGE_SHIFT; @@ -2357,7 +2372,7 @@ static void gen8_ggtt_insert_entries(struct i915_address_space *vm, addr = sg_dma_address(sg_iter.sg) + (sg_iter.sg_pgoffset << PAGE_SHIFT); gen8_set_pte(>t_entries[i], - gen8_pte_encode(addr, level, true)); + gen8_pte_encode(addr, level, true, flags)); i++; } @@ -2370,7 +2385,7 @@ static void gen8_ggtt_insert_entries(struct i915_address_space *vm, */ if (i != 0) WARN_ON(readq(>t_entries[i-1]) - != gen8_pte_encode(addr, level, true)); + != gen8_pte_encode(addr, level, true, flags)); /* This next bit makes the above posting read even more important. We * want to flush the TLBs only after we're certain all the PTE updates @@ -2444,7 +2459,7 @@ static void gen8_ggtt_clear_range(struct i915_address_space *vm, scratch_pte = gen8_pte_encode(px_dma(vm->scratch_page), I915_CACHE_LLC, - use_scratch); + use_scratch, 0); for (i = 0; i < num_entries; i++) gen8_set_pte(>t_base[i], scratch_pte); readl(gtt_base); @@ -2510,7 +2525,8 @@ static int ggtt_bind_vma(struct i915_vma *vma, if (ret) return ret; - /* Currently applicable only to VLV */ + /* Applicable to VLV (gen8+ do not support RO in the GGTT) */ + pte_flags = 0; if (obj->gt_ro) pte_flags |= PTE_READ_ONLY; @@ -2653,6 +2669,9 @@ static int i915_gem_setup_global_gtt(struct drm_device *dev, i915_address_space_init(ggtt_vm, dev_priv); ggtt_vm->total += PAGE_SIZE; + /* Only VLV supports read-only GGTT mappings */ + ggtt_vm->has_read_only = IS_VALLEYVIEW(dev_priv); + if (intel_vgpu_active(dev)) { ret = intel_vgt_balloon(dev); if (ret) diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.h b/drivers/gpu/drm/i915/i915_gem_gtt.h index a216397ead52..d36f2d77576a 100644 --- a/drivers/gpu/drm/i915/i915_gem_gtt.h +++ b/drivers/gpu/drm/i915/i915_gem_gtt.h @@ -307,6 +307,9 @@ struct i915_address_space { */ struct list_head inactive_list; + /* Some systems support read-only mappings for GGTT and/or PPGTT */ + bool has_read_only:1; + /* FIXME: Need a more generic return type */ gen6_pte_t (*pte_encode)(dma_addr_t addr, enum i915_cache_level level, diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h index cace154bbdc0..603d8cdfc5f1 100644 --- a/drivers/gpu/drm/i915/i915_reg.h +++ b/drivers/gpu/drm/i915/i915_reg.h @@ -170,6 +170,8 @@ #define ECOCHK_PPGTT_WT_HSW (0x2<<3) #define ECOCHK_PPGTT_WB_HSW (0x3<<3) +#define GEN8_RC6_CTX_INFO 0x8504 + #define GAC_ECO_BITS 0x14090 #define ECOBITS_SNB_BIT (1<<13) #define ECOBITS_PPGTT_CACHE64B (3<<8) @@ -511,6 +513,10 @@ */ #define BCS_SWCTRL 0x22200 +/* There are 16 GPR registers */ +#define BCS_GPR(n) (0x22600 + (n) * 8) +#define BCS_GPR_UDW(n) (0x22600 + (n) * 8 + 4) + #define GPGPU_THREADS_DISPATCHED 0x2290 #define HS_INVOCATION_COUNT 0x2300 #define DS_INVOCATION_COUNT 0x2308 @@ -1567,6 +1573,7 @@ enum skl_disp_power_wells { #define RING_IMR(base) ((base)+0xa8) #define RING_HWSTAM(base) ((base)+0x98) #define RING_TIMESTAMP(base) ((base)+0x358) +#define RING_TIMESTAMP_UDW(base) ((base) + 0x358 + 4) #define TAIL_ADDR 0x001FFFF8 #define HEAD_WRAP_COUNT 
0xFFE00000 #define HEAD_WRAP_ONE 0x00200000 @@ -5704,6 +5711,10 @@ enum skl_disp_power_wells { #define GAMMA_MODE_MODE_12BIT (2 << 0) #define GAMMA_MODE_MODE_SPLIT (3 << 0) +/* Display Internal Timeout Register */ +#define RM_TIMEOUT 0x42060 +#define MMIO_TIMEOUT_US(us) ((us) << 0) + /* interrupts */ #define DE_MASTER_IRQ_CONTROL (1 << 31) #define DE_SPRITEB_FLIP_DONE (1 << 29) diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c index 4f5d07bb3511..a9166ff48a26 100644 --- a/drivers/gpu/drm/i915/intel_display.c +++ b/drivers/gpu/drm/i915/intel_display.c @@ -10747,6 +10747,10 @@ void intel_mark_busy(struct drm_device *dev) return; intel_runtime_pm_get(dev_priv); + + if (NEEDS_RC6_CTX_CORRUPTION_WA(dev_priv)) + intel_uncore_forcewake_get(dev_priv, FORCEWAKE_ALL); + i915_update_gfx_val(dev_priv); if (INTEL_INFO(dev)->gen >= 6) gen6_rps_busy(dev_priv); @@ -10765,6 +10769,11 @@ void intel_mark_idle(struct drm_device *dev) if (INTEL_INFO(dev)->gen >= 6) gen6_rps_idle(dev->dev_private); + if (NEEDS_RC6_CTX_CORRUPTION_WA(dev_priv)) { + i915_rc6_ctx_wa_check(dev_priv); + intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL); + } + intel_runtime_pm_put(dev_priv); } diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h index 722aa159cd28..78503e481313 100644 --- a/drivers/gpu/drm/i915/intel_drv.h +++ b/drivers/gpu/drm/i915/intel_drv.h @@ -1410,6 +1410,9 @@ void intel_enable_gt_powersave(struct drm_device *dev); void intel_disable_gt_powersave(struct drm_device *dev); void intel_suspend_gt_powersave(struct drm_device *dev); void intel_reset_gt_powersave(struct drm_device *dev); +bool i915_rc6_ctx_wa_check(struct drm_i915_private *i915); +void i915_rc6_ctx_wa_suspend(struct drm_i915_private *i915); +void i915_rc6_ctx_wa_resume(struct drm_i915_private *i915); void gen6_update_ring_freq(struct drm_device *dev); void gen6_rps_busy(struct drm_i915_private *dev_priv); void gen6_rps_reset_ei(struct drm_i915_private *dev_priv); diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c index fd4690ed93c0..81bd84f9156b 100644 --- a/drivers/gpu/drm/i915/intel_pm.c +++ b/drivers/gpu/drm/i915/intel_pm.c @@ -66,6 +66,14 @@ static void bxt_init_clock_gating(struct drm_device *dev) */ I915_WRITE(GEN8_UCGCTL6, I915_READ(GEN8_UCGCTL6) | GEN8_HDCUNIT_CLOCK_GATE_DISABLE_HDCREQ); + + /* + * Lower the display internal timeout. + * This is needed to avoid any hard hangs when DSI port PLL + * is off and a MMIO access is attempted by any privilege + * application, using batch buffers or any other means. 
+ */ + I915_WRITE(RM_TIMEOUT, MMIO_TIMEOUT_US(950)); } static void i915_pineview_get_mem_freq(struct drm_device *dev) @@ -4591,30 +4599,42 @@ void intel_set_rps(struct drm_device *dev, u8 val) gen6_set_rps(dev, val); } -static void gen9_disable_rps(struct drm_device *dev) +static void gen9_disable_rc6(struct drm_device *dev) { struct drm_i915_private *dev_priv = dev->dev_private; I915_WRITE(GEN6_RC_CONTROL, 0); +} + +static void gen9_disable_rps(struct drm_device *dev) +{ + struct drm_i915_private *dev_priv = dev->dev_private; + I915_WRITE(GEN9_PG_ENABLE, 0); } -static void gen6_disable_rps(struct drm_device *dev) +static void gen6_disable_rc6(struct drm_device *dev) { struct drm_i915_private *dev_priv = dev->dev_private; I915_WRITE(GEN6_RC_CONTROL, 0); +} + +static void gen6_disable_rps(struct drm_device *dev) +{ + struct drm_i915_private *dev_priv = dev->dev_private; + I915_WRITE(GEN6_RPNSWREQ, 1 << 31); } -static void cherryview_disable_rps(struct drm_device *dev) +static void cherryview_disable_rc6(struct drm_device *dev) { struct drm_i915_private *dev_priv = dev->dev_private; I915_WRITE(GEN6_RC_CONTROL, 0); } -static void valleyview_disable_rps(struct drm_device *dev) +static void valleyview_disable_rc6(struct drm_device *dev) { struct drm_i915_private *dev_priv = dev->dev_private; @@ -4818,7 +4838,8 @@ static void gen9_enable_rc6(struct drm_device *dev) I915_WRITE(GEN9_RENDER_PG_IDLE_HYSTERESIS, 25); /* 3a: Enable RC6 */ - if (intel_enable_rc6(dev) & INTEL_RC6_ENABLE) + if (!dev_priv->rps.ctx_corrupted && + intel_enable_rc6(dev) & INTEL_RC6_ENABLE) rc6_mask = GEN6_RC_CTL_RC6_ENABLE; DRM_INFO("RC6 %s\n", (rc6_mask & GEN6_RC_CTL_RC6_ENABLE) ? "on" : "off"); @@ -4841,7 +4862,7 @@ static void gen9_enable_rc6(struct drm_device *dev) * WaRsDisableCoarsePowerGating:skl,bxt - Render/Media PG need to be disabled with RC6. */ if ((IS_BROXTON(dev) && (INTEL_REVID(dev) < BXT_REVID_B0)) || - ((IS_SKL_GT3(dev) || IS_SKL_GT4(dev)) && (INTEL_REVID(dev) <= SKL_REVID_F0))) + INTEL_INFO(dev)->gen == 9) I915_WRITE(GEN9_PG_ENABLE, 0); else I915_WRITE(GEN9_PG_ENABLE, (rc6_mask & GEN6_RC_CTL_RC6_ENABLE) ? @@ -4884,7 +4905,8 @@ static void gen8_enable_rps(struct drm_device *dev) I915_WRITE(GEN6_RC6_THRESHOLD, 50000); /* 50/125ms per EI */ /* 3: Enable RC6 */ - if (intel_enable_rc6(dev) & INTEL_RC6_ENABLE) + if (!dev_priv->rps.ctx_corrupted && + intel_enable_rc6(dev) & INTEL_RC6_ENABLE) rc6_mask = GEN6_RC_CTL_RC6_ENABLE; intel_print_rc6_info(dev, rc6_mask); if (IS_BROADWELL(dev)) @@ -6128,10 +6150,101 @@ static void intel_init_emon(struct drm_device *dev) dev_priv->ips.corr = (lcfuse & LCFUSE_HIV_MASK); } +static bool i915_rc6_ctx_corrupted(struct drm_i915_private *dev_priv) +{ + return !I915_READ(GEN8_RC6_CTX_INFO); +} + +static void i915_rc6_ctx_wa_init(struct drm_i915_private *i915) +{ + if (!NEEDS_RC6_CTX_CORRUPTION_WA(i915)) + return; + + if (i915_rc6_ctx_corrupted(i915)) { + DRM_INFO("RC6 context corrupted, disabling runtime power management\n"); + i915->rps.ctx_corrupted = true; + intel_runtime_pm_get(i915); + } +} + +static void i915_rc6_ctx_wa_cleanup(struct drm_i915_private *i915) +{ + if (i915->rps.ctx_corrupted) { + intel_runtime_pm_put(i915); + i915->rps.ctx_corrupted = false; + } +} + +/** + * i915_rc6_ctx_wa_suspend - system suspend sequence for the RC6 CTX WA + * @i915: i915 device + * + * Perform any steps needed to clean up the RC6 CTX WA before system suspend. 
+ */ +void i915_rc6_ctx_wa_suspend(struct drm_i915_private *i915) +{ + if (i915->rps.ctx_corrupted) + intel_runtime_pm_put(i915); +} + +/** + * i915_rc6_ctx_wa_resume - system resume sequence for the RC6 CTX WA + * @i915: i915 device + * + * Perform any steps needed to re-init the RC6 CTX WA after system resume. + */ +void i915_rc6_ctx_wa_resume(struct drm_i915_private *i915) +{ + if (!i915->rps.ctx_corrupted) + return; + + if (i915_rc6_ctx_corrupted(i915)) { + intel_runtime_pm_get(i915); + return; + } + + DRM_INFO("RC6 context restored, re-enabling runtime power management\n"); + i915->rps.ctx_corrupted = false; +} + +static void intel_disable_rc6(struct drm_device *dev); + +/** + * i915_rc6_ctx_wa_check - check for a new RC6 CTX corruption + * @i915: i915 device + * + * Check if an RC6 CTX corruption has happened since the last check and if so + * disable RC6 and runtime power management. + * + * Return false if no context corruption has happened since the last call of + * this function, true otherwise. +*/ +bool i915_rc6_ctx_wa_check(struct drm_i915_private *i915) +{ + if (!NEEDS_RC6_CTX_CORRUPTION_WA(i915)) + return false; + + if (i915->rps.ctx_corrupted) + return false; + + if (!i915_rc6_ctx_corrupted(i915)) + return false; + + DRM_NOTE("RC6 context corruption, disabling runtime power management\n"); + + intel_disable_rc6(i915->dev); + i915->rps.ctx_corrupted = true; + intel_runtime_pm_get_noresume(i915); + + return true; +} + void intel_init_gt_powersave(struct drm_device *dev) { i915.enable_rc6 = sanitize_rc6_option(dev, i915.enable_rc6); + i915_rc6_ctx_wa_init(to_i915(dev)); + if (IS_CHERRYVIEW(dev)) cherryview_init_gt_powersave(dev); else if (IS_VALLEYVIEW(dev)) @@ -6144,6 +6257,8 @@ void intel_cleanup_gt_powersave(struct drm_device *dev) return; else if (IS_VALLEYVIEW(dev)) valleyview_cleanup_gt_powersave(dev); + + i915_rc6_ctx_wa_cleanup(to_i915(dev)); } static void gen6_suspend_rps(struct drm_device *dev) @@ -6176,6 +6291,38 @@ void intel_suspend_gt_powersave(struct drm_device *dev) gen6_rps_idle(dev_priv); } +static void __intel_disable_rc6(struct drm_device *dev) +{ + if (INTEL_INFO(dev)->gen >= 9) + gen9_disable_rc6(dev); + else if (IS_CHERRYVIEW(dev)) + cherryview_disable_rc6(dev); + else if (IS_VALLEYVIEW(dev)) + valleyview_disable_rc6(dev); + else + gen6_disable_rc6(dev); +} + +static void intel_disable_rc6(struct drm_device *dev) +{ + struct drm_i915_private *dev_priv = to_i915(dev); + + mutex_lock(&dev_priv->rps.hw_lock); + __intel_disable_rc6(dev); + mutex_unlock(&dev_priv->rps.hw_lock); +} + +static void intel_disable_rps(struct drm_device *dev) +{ + if (IS_CHERRYVIEW(dev) || IS_VALLEYVIEW(dev)) + return; + + if (INTEL_INFO(dev)->gen >= 9) + gen9_disable_rps(dev); + else + gen6_disable_rps(dev); +} + void intel_disable_gt_powersave(struct drm_device *dev) { struct drm_i915_private *dev_priv = dev->dev_private; @@ -6186,16 +6333,12 @@ void intel_disable_gt_powersave(struct drm_device *dev) intel_suspend_gt_powersave(dev); mutex_lock(&dev_priv->rps.hw_lock); - if (INTEL_INFO(dev)->gen >= 9) - gen9_disable_rps(dev); - else if (IS_CHERRYVIEW(dev)) - cherryview_disable_rps(dev); - else if (IS_VALLEYVIEW(dev)) - valleyview_disable_rps(dev); - else - gen6_disable_rps(dev); + + __intel_disable_rc6(dev); + intel_disable_rps(dev); dev_priv->rps.enabled = false; + mutex_unlock(&dev_priv->rps.hw_lock); } } diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c index 9d48443bca2e..df6547f60a5c 100644 --- 
a/drivers/gpu/drm/i915/intel_ringbuffer.c +++ b/drivers/gpu/drm/i915/intel_ringbuffer.c @@ -2058,6 +2058,8 @@ static void intel_destroy_ringbuffer_obj(struct intel_ringbuffer *ringbuf) static int intel_alloc_ringbuffer_obj(struct drm_device *dev, struct intel_ringbuffer *ringbuf) { + struct drm_i915_private *dev_priv = to_i915(dev); + struct i915_address_space *vm = &dev_priv->gtt.base; struct drm_i915_gem_object *obj; obj = NULL; @@ -2068,8 +2070,12 @@ static int intel_alloc_ringbuffer_obj(struct drm_device *dev, if (obj == NULL) return -ENOMEM; - /* mark ring buffers as read-only from GPU side by default */ - obj->gt_ro = 1; + /* + * Mark ring buffers as read-only from GPU side (so no stray overwrites) + * if supported by the platform's GGTT. + */ + if (vm->has_read_only) + obj->gt_ro = 1; ringbuf->obj = obj; diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h index 49fa41dc0eb6..56c872b89a92 100644 --- a/drivers/gpu/drm/i915/intel_ringbuffer.h +++ b/drivers/gpu/drm/i915/intel_ringbuffer.h @@ -314,7 +314,8 @@ struct intel_engine_cs { volatile u32 *cpu_page; } scratch; - bool needs_cmd_parser; + bool using_cmd_parser; + bool requires_cmd_parser; /* * Table of commands the command parser needs to know about diff --git a/drivers/gpu/drm/msm/hdmi-staging/sde_hdmi.c b/drivers/gpu/drm/msm/hdmi-staging/sde_hdmi.c index 5a9e56b58158..d755ba959c20 100644 --- a/drivers/gpu/drm/msm/hdmi-staging/sde_hdmi.c +++ b/drivers/gpu/drm/msm/hdmi-staging/sde_hdmi.c @@ -78,7 +78,12 @@ static ssize_t _sde_hdmi_debugfs_dump_info_read(struct file *file, if (!buf) return -ENOMEM; - len += snprintf(buf, SZ_4K, "name = %s\n", display->name); + len += snprintf(buf, SZ_1K, "name = %s\n", display->name); + + if (len > count) { + kfree(buf); + return -ENOMEM; + } if (copy_to_user(buff, buf, len)) { kfree(buf); diff --git a/drivers/gpu/drm/radeon/radeon_connectors.c b/drivers/gpu/drm/radeon/radeon_connectors.c index c6bf378534f8..bebcef2ce6b8 100644 --- a/drivers/gpu/drm/radeon/radeon_connectors.c +++ b/drivers/gpu/drm/radeon/radeon_connectors.c @@ -758,7 +758,7 @@ static int radeon_connector_set_property(struct drm_connector *connector, struct radeon_encoder->output_csc = val; - if (connector->encoder->crtc) { + if (connector->encoder && connector->encoder->crtc) { struct drm_crtc *crtc = connector->encoder->crtc; const struct drm_crtc_helper_funcs *crtc_funcs = crtc->helper_private; struct radeon_crtc *radeon_crtc = to_radeon_crtc(crtc); diff --git a/drivers/gpu/drm/radeon/si_dpm.c b/drivers/gpu/drm/radeon/si_dpm.c index 892d0a71d766..57724a3afe78 100644 --- a/drivers/gpu/drm/radeon/si_dpm.c +++ b/drivers/gpu/drm/radeon/si_dpm.c @@ -1956,6 +1956,7 @@ static void si_initialize_powertune_defaults(struct radeon_device *rdev) case 0x682C: si_pi->cac_weights = cac_weights_cape_verde_pro; si_pi->dte_data = dte_data_sun_xt; + update_dte_from_pl2 = true; break; case 0x6825: case 0x6827: diff --git a/drivers/hid/Kconfig b/drivers/hid/Kconfig index 2729ab3557bb..1107d218b0c5 100644 --- a/drivers/hid/Kconfig +++ b/drivers/hid/Kconfig @@ -726,6 +726,14 @@ config HID_SPEEDLINK ---help--- Support for Speedlink Vicious and Divine Cezanne mouse. +config HID_STEAM + tristate "Steam Controller support" + depends on HID + ---help--- + Say Y here if you have a Steam Controller if you want to use it + without running the Steam Client. It supports both the wired and + the wireless adaptor. 
+ config HID_STEELSERIES tristate "Steelseries SRW-S1 steering wheel support" depends on HID diff --git a/drivers/hid/Makefile b/drivers/hid/Makefile index 00011fee08b9..c21ff739fb31 100644 --- a/drivers/hid/Makefile +++ b/drivers/hid/Makefile @@ -85,6 +85,7 @@ obj-$(CONFIG_HID_SAMSUNG) += hid-samsung.o obj-$(CONFIG_HID_SMARTJOYPLUS) += hid-sjoy.o obj-$(CONFIG_HID_SONY) += hid-sony.o obj-$(CONFIG_HID_SPEEDLINK) += hid-speedlink.o +obj-$(CONFIG_HID_STEAM) += hid-steam.o obj-$(CONFIG_HID_STEELSERIES) += hid-steelseries.o obj-$(CONFIG_HID_SUNPLUS) += hid-sunplus.o obj-$(CONFIG_HID_GREENASIA) += hid-gaff.o diff --git a/drivers/hid/hid-apple.c b/drivers/hid/hid-apple.c index 37e8b61b5c98..ec4250061922 100644 --- a/drivers/hid/hid-apple.c +++ b/drivers/hid/hid-apple.c @@ -55,7 +55,6 @@ MODULE_PARM_DESC(swap_opt_cmd, "Swap the Option (\"Alt\") and Command (\"Flag\") struct apple_sc { unsigned long quirks; unsigned int fn_on; - DECLARE_BITMAP(pressed_fn, KEY_CNT); DECLARE_BITMAP(pressed_numlock, KEY_CNT); }; @@ -182,6 +181,8 @@ static int hidinput_apple_event(struct hid_device *hid, struct input_dev *input, { struct apple_sc *asc = hid_get_drvdata(hid); const struct apple_key_translation *trans, *table; + bool do_translate; + u16 code = 0; if (usage->code == KEY_FN) { asc->fn_on = !!value; @@ -190,8 +191,6 @@ static int hidinput_apple_event(struct hid_device *hid, struct input_dev *input, } if (fnmode) { - int do_translate; - if (hid->product >= USB_DEVICE_ID_APPLE_WELLSPRING4_ANSI && hid->product <= USB_DEVICE_ID_APPLE_WELLSPRING4A_JIS) table = macbookair_fn_keys; @@ -203,25 +202,33 @@ static int hidinput_apple_event(struct hid_device *hid, struct input_dev *input, trans = apple_find_translation (table, usage->code); if (trans) { - if (test_bit(usage->code, asc->pressed_fn)) - do_translate = 1; - else if (trans->flags & APPLE_FLAG_FKEY) - do_translate = (fnmode == 2 && asc->fn_on) || - (fnmode == 1 && !asc->fn_on); - else - do_translate = asc->fn_on; - - if (do_translate) { - if (value) - set_bit(usage->code, asc->pressed_fn); - else - clear_bit(usage->code, asc->pressed_fn); - - input_event(input, usage->type, trans->to, - value); - - return 1; + if (test_bit(trans->from, input->key)) + code = trans->from; + else if (test_bit(trans->to, input->key)) + code = trans->to; + + if (!code) { + if (trans->flags & APPLE_FLAG_FKEY) { + switch (fnmode) { + case 1: + do_translate = !asc->fn_on; + break; + case 2: + do_translate = asc->fn_on; + break; + default: + /* should never happen */ + do_translate = false; + } + } else { + do_translate = asc->fn_on; + } + + code = do_translate ? 
trans->to : trans->from; } + + input_event(input, usage->type, code, value); + return 1; } if (asc->quirks & APPLE_NUMLOCK_EMULATION && diff --git a/drivers/hid/hid-axff.c b/drivers/hid/hid-axff.c index a594e478a1e2..843aed4dec80 100644 --- a/drivers/hid/hid-axff.c +++ b/drivers/hid/hid-axff.c @@ -75,13 +75,20 @@ static int axff_init(struct hid_device *hid) { struct axff_device *axff; struct hid_report *report; - struct hid_input *hidinput = list_first_entry(&hid->inputs, struct hid_input, list); + struct hid_input *hidinput; struct list_head *report_list =&hid->report_enum[HID_OUTPUT_REPORT].report_list; - struct input_dev *dev = hidinput->input; + struct input_dev *dev; int field_count = 0; int i, j; int error; + if (list_empty(&hid->inputs)) { + hid_err(hid, "no inputs found\n"); + return -ENODEV; + } + hidinput = list_first_entry(&hid->inputs, struct hid_input, list); + dev = hidinput->input; + if (list_empty(report_list)) { hid_err(hid, "no output reports found\n"); return -ENODEV; diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c index a22881befa27..f6476d1cfdb2 100644 --- a/drivers/hid/hid-core.c +++ b/drivers/hid/hid-core.c @@ -976,6 +976,7 @@ int hid_open_report(struct hid_device *device) __u8 *start; __u8 *buf; __u8 *end; + __u8 *next; int ret; static int (*dispatch_type[])(struct hid_parser *parser, struct hid_item *item) = { @@ -1029,7 +1030,8 @@ int hid_open_report(struct hid_device *device) device->collection_size = HID_DEFAULT_NUM_COLLECTIONS; ret = -EINVAL; - while ((start = fetch_item(start, end, &item)) != NULL) { + while ((next = fetch_item(start, end, &item)) != NULL) { + start = next; if (item.format != HID_ITEM_FORMAT_SHORT) { hid_err(device, "unexpected long global item\n"); @@ -1058,7 +1060,8 @@ int hid_open_report(struct hid_device *device) } } - hid_err(device, "item fetching failed at offset %d\n", (int)(end - start)); + hid_err(device, "item fetching failed at offset %u/%u\n", + size - (unsigned int)(end - start), size); err: vfree(parser); hid_close_report(device); @@ -2070,6 +2073,9 @@ static const struct hid_device_id hid_have_special_driver[] = { { HID_USB_DEVICE(USB_VENDOR_ID_UCLOGIC, USB_DEVICE_ID_UCLOGIC_TABLET_WP1062) }, { HID_USB_DEVICE(USB_VENDOR_ID_UCLOGIC, USB_DEVICE_ID_UCLOGIC_WIRELESS_TABLET_TWHL850) }, { HID_USB_DEVICE(USB_VENDOR_ID_UCLOGIC, USB_DEVICE_ID_UCLOGIC_TABLET_TWHA60) }, + { HID_USB_DEVICE(USB_VENDOR_ID_VALVE, USB_DEVICE_ID_STEAM_CONTROLLER) }, + { HID_USB_DEVICE(USB_VENDOR_ID_VALVE, USB_DEVICE_ID_STEAM_CONTROLLER_WIRELESS) }, + { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_VALVE, USB_DEVICE_ID_STEAM_CONTROLLER_BT) }, { HID_USB_DEVICE(USB_VENDOR_ID_WISEGROUP, USB_DEVICE_ID_SMARTJOY_PLUS) }, { HID_USB_DEVICE(USB_VENDOR_ID_WISEGROUP, USB_DEVICE_ID_SUPER_JOY_BOX_3) }, { HID_USB_DEVICE(USB_VENDOR_ID_WISEGROUP, USB_DEVICE_ID_DUAL_USB_JOYPAD) }, diff --git a/drivers/hid/hid-dr.c b/drivers/hid/hid-dr.c index 1d78ba3b799e..fac829d2b19a 100644 --- a/drivers/hid/hid-dr.c +++ b/drivers/hid/hid-dr.c @@ -87,13 +87,19 @@ static int drff_init(struct hid_device *hid) { struct drff_device *drff; struct hid_report *report; - struct hid_input *hidinput = list_first_entry(&hid->inputs, - struct hid_input, list); + struct hid_input *hidinput; struct list_head *report_list = &hid->report_enum[HID_OUTPUT_REPORT].report_list; - struct input_dev *dev = hidinput->input; + struct input_dev *dev; int error; + if (list_empty(&hid->inputs)) { + hid_err(hid, "no inputs found\n"); + return -ENODEV; + } + hidinput = list_first_entry(&hid->inputs, struct hid_input, 
list); + dev = hidinput->input; + if (list_empty(report_list)) { hid_err(hid, "no output reports found\n"); return -ENODEV; diff --git a/drivers/hid/hid-emsff.c b/drivers/hid/hid-emsff.c index d82d75bb11f7..80f9a02dfa69 100644 --- a/drivers/hid/hid-emsff.c +++ b/drivers/hid/hid-emsff.c @@ -59,13 +59,19 @@ static int emsff_init(struct hid_device *hid) { struct emsff_device *emsff; struct hid_report *report; - struct hid_input *hidinput = list_first_entry(&hid->inputs, - struct hid_input, list); + struct hid_input *hidinput; struct list_head *report_list = &hid->report_enum[HID_OUTPUT_REPORT].report_list; - struct input_dev *dev = hidinput->input; + struct input_dev *dev; int error; + if (list_empty(&hid->inputs)) { + hid_err(hid, "no inputs found\n"); + return -ENODEV; + } + hidinput = list_first_entry(&hid->inputs, struct hid_input, list); + dev = hidinput->input; + if (list_empty(report_list)) { hid_err(hid, "no output reports found\n"); return -ENODEV; diff --git a/drivers/hid/hid-gaff.c b/drivers/hid/hid-gaff.c index 2d8cead3adca..5a02c50443cb 100644 --- a/drivers/hid/hid-gaff.c +++ b/drivers/hid/hid-gaff.c @@ -77,14 +77,20 @@ static int gaff_init(struct hid_device *hid) { struct gaff_device *gaff; struct hid_report *report; - struct hid_input *hidinput = list_entry(hid->inputs.next, - struct hid_input, list); + struct hid_input *hidinput; struct list_head *report_list = &hid->report_enum[HID_OUTPUT_REPORT].report_list; struct list_head *report_ptr = report_list; - struct input_dev *dev = hidinput->input; + struct input_dev *dev; int error; + if (list_empty(&hid->inputs)) { + hid_err(hid, "no inputs found\n"); + return -ENODEV; + } + hidinput = list_entry(hid->inputs.next, struct hid_input, list); + dev = hidinput->input; + if (list_empty(report_list)) { hid_err(hid, "no output reports found\n"); return -ENODEV; diff --git a/drivers/hid/hid-holtekff.c b/drivers/hid/hid-holtekff.c index 9325545fc3ae..3e84551cca9c 100644 --- a/drivers/hid/hid-holtekff.c +++ b/drivers/hid/hid-holtekff.c @@ -140,13 +140,19 @@ static int holtekff_init(struct hid_device *hid) { struct holtekff_device *holtekff; struct hid_report *report; - struct hid_input *hidinput = list_entry(hid->inputs.next, - struct hid_input, list); + struct hid_input *hidinput; struct list_head *report_list = &hid->report_enum[HID_OUTPUT_REPORT].report_list; - struct input_dev *dev = hidinput->input; + struct input_dev *dev; int error; + if (list_empty(&hid->inputs)) { + hid_err(hid, "no inputs found\n"); + return -ENODEV; + } + hidinput = list_entry(hid->inputs.next, struct hid_input, list); + dev = hidinput->input; + if (list_empty(report_list)) { hid_err(hid, "no output report found\n"); return -ENODEV; diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h index e1807296a1a0..b4bc316310bc 100644 --- a/drivers/hid/hid-ids.h +++ b/drivers/hid/hid-ids.h @@ -900,6 +900,11 @@ #define USB_VENDOR_ID_STANTUM_SITRONIX 0x1403 #define USB_DEVICE_ID_MTP_SITRONIX 0x5001 +#define USB_VENDOR_ID_VALVE 0x28de +#define USB_DEVICE_ID_STEAM_CONTROLLER 0x1102 +#define USB_DEVICE_ID_STEAM_CONTROLLER_WIRELESS 0x1142 +#define USB_DEVICE_ID_STEAM_CONTROLLER_BT 0x1106 + #define USB_VENDOR_ID_STEELSERIES 0x1038 #define USB_DEVICE_ID_STEELSERIES_SRWS1 0x1410 diff --git a/drivers/hid/hid-lg.c b/drivers/hid/hid-lg.c index c690fae02cf8..0fd9fc135f3d 100644 --- a/drivers/hid/hid-lg.c +++ b/drivers/hid/hid-lg.c @@ -701,11 +701,16 @@ static int lg_probe(struct hid_device *hdev, const struct hid_device_id *id) /* Setup wireless link with Logitech Wii wheel */ 
if (hdev->product == USB_DEVICE_ID_LOGITECH_WII_WHEEL) { - unsigned char buf[] = { 0x00, 0xAF, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }; + const unsigned char cbuf[] = { 0x00, 0xAF, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }; + u8 *buf = kmemdup(cbuf, sizeof(cbuf), GFP_KERNEL); - ret = hid_hw_raw_request(hdev, buf[0], buf, sizeof(buf), - HID_FEATURE_REPORT, HID_REQ_SET_REPORT); + if (!buf) { + ret = -ENOMEM; + goto err_stop; + } + ret = hid_hw_raw_request(hdev, buf[0], buf, sizeof(cbuf), + HID_FEATURE_REPORT, HID_REQ_SET_REPORT); if (ret >= 0) { /* insert a little delay of 10 jiffies ~ 40ms */ wait_queue_head_t wait; @@ -717,9 +722,10 @@ static int lg_probe(struct hid_device *hdev, const struct hid_device_id *id) buf[1] = 0xB2; get_random_bytes(&buf[2], 2); - ret = hid_hw_raw_request(hdev, buf[0], buf, sizeof(buf), + ret = hid_hw_raw_request(hdev, buf[0], buf, sizeof(cbuf), HID_FEATURE_REPORT, HID_REQ_SET_REPORT); } + kfree(buf); } if (drv_data->quirks & LG_FF) @@ -732,9 +738,12 @@ static int lg_probe(struct hid_device *hdev, const struct hid_device_id *id) ret = lg4ff_init(hdev); if (ret) - goto err_free; + goto err_stop; return 0; + +err_stop: + hid_hw_stop(hdev); err_free: kfree(drv_data); return ret; @@ -745,8 +754,7 @@ static void lg_remove(struct hid_device *hdev) struct lg_drv_data *drv_data = hid_get_drvdata(hdev); if (drv_data->quirks & LG_FF4) lg4ff_deinit(hdev); - else - hid_hw_stop(hdev); + hid_hw_stop(hdev); kfree(drv_data); } diff --git a/drivers/hid/hid-lg2ff.c b/drivers/hid/hid-lg2ff.c index 0e3fb1a7e421..6909d9c2fc67 100644 --- a/drivers/hid/hid-lg2ff.c +++ b/drivers/hid/hid-lg2ff.c @@ -62,11 +62,17 @@ int lg2ff_init(struct hid_device *hid) { struct lg2ff_device *lg2ff; struct hid_report *report; - struct hid_input *hidinput = list_entry(hid->inputs.next, - struct hid_input, list); - struct input_dev *dev = hidinput->input; + struct hid_input *hidinput; + struct input_dev *dev; int error; + if (list_empty(&hid->inputs)) { + hid_err(hid, "no inputs found\n"); + return -ENODEV; + } + hidinput = list_entry(hid->inputs.next, struct hid_input, list); + dev = hidinput->input; + /* Check that the report looks ok */ report = hid_validate_values(hid, HID_OUTPUT_REPORT, 0, 0, 7); if (!report) diff --git a/drivers/hid/hid-lg3ff.c b/drivers/hid/hid-lg3ff.c index 8c2da183d3bc..acf739fc4060 100644 --- a/drivers/hid/hid-lg3ff.c +++ b/drivers/hid/hid-lg3ff.c @@ -129,12 +129,19 @@ static const signed short ff3_joystick_ac[] = { int lg3ff_init(struct hid_device *hid) { - struct hid_input *hidinput = list_entry(hid->inputs.next, struct hid_input, list); - struct input_dev *dev = hidinput->input; + struct hid_input *hidinput; + struct input_dev *dev; const signed short *ff_bits = ff3_joystick_ac; int error; int i; + if (list_empty(&hid->inputs)) { + hid_err(hid, "no inputs found\n"); + return -ENODEV; + } + hidinput = list_entry(hid->inputs.next, struct hid_input, list); + dev = hidinput->input; + /* Check that the report looks ok */ if (!hid_validate_values(hid, HID_OUTPUT_REPORT, 0, 0, 35)) return -ENODEV; diff --git a/drivers/hid/hid-lg4ff.c b/drivers/hid/hid-lg4ff.c index fbddcb37ae98..d8432d9f10f0 100644 --- a/drivers/hid/hid-lg4ff.c +++ b/drivers/hid/hid-lg4ff.c @@ -1158,8 +1158,8 @@ static int lg4ff_handle_multimode_wheel(struct hid_device *hid, u16 *real_produc int lg4ff_init(struct hid_device *hid) { - struct hid_input *hidinput = list_entry(hid->inputs.next, struct hid_input, list); - struct input_dev *dev = hidinput->input; + struct hid_input *hidinput; + struct input_dev *dev; 
struct list_head *report_list = &hid->report_enum[HID_OUTPUT_REPORT].report_list; struct hid_report *report = list_entry(report_list->next, struct hid_report, list); const struct usb_device_descriptor *udesc = &(hid_to_usb_dev(hid)->descriptor); @@ -1171,6 +1171,13 @@ int lg4ff_init(struct hid_device *hid) int mmode_ret, mmode_idx = -1; u16 real_product_id; + if (list_empty(&hid->inputs)) { + hid_err(hid, "no inputs found\n"); + return -ENODEV; + } + hidinput = list_entry(hid->inputs.next, struct hid_input, list); + dev = hidinput->input; + /* Check that the report looks ok */ if (!hid_validate_values(hid, HID_OUTPUT_REPORT, 0, 0, 7)) return -1; @@ -1378,7 +1385,6 @@ int lg4ff_deinit(struct hid_device *hid) } } #endif - hid_hw_stop(hid); drv_data->device_props = NULL; kfree(entry); diff --git a/drivers/hid/hid-lgff.c b/drivers/hid/hid-lgff.c index e1394af0ae7b..1871cdcd1e0a 100644 --- a/drivers/hid/hid-lgff.c +++ b/drivers/hid/hid-lgff.c @@ -127,12 +127,19 @@ static void hid_lgff_set_autocenter(struct input_dev *dev, u16 magnitude) int lgff_init(struct hid_device* hid) { - struct hid_input *hidinput = list_entry(hid->inputs.next, struct hid_input, list); - struct input_dev *dev = hidinput->input; + struct hid_input *hidinput; + struct input_dev *dev; const signed short *ff_bits = ff_joystick; int error; int i; + if (list_empty(&hid->inputs)) { + hid_err(hid, "no inputs found\n"); + return -ENODEV; + } + hidinput = list_entry(hid->inputs.next, struct hid_input, list); + dev = hidinput->input; + /* Check that the report looks ok */ if (!hid_validate_values(hid, HID_OUTPUT_REPORT, 0, 0, 7)) return -ENODEV; diff --git a/drivers/hid/hid-prodikeys.c b/drivers/hid/hid-prodikeys.c index 3a207c0ac0e3..cba15edd47c2 100644 --- a/drivers/hid/hid-prodikeys.c +++ b/drivers/hid/hid-prodikeys.c @@ -556,10 +556,14 @@ static void pcmidi_setup_extra_keys( static int pcmidi_set_operational(struct pcmidi_snd *pm) { + int rc; + if (pm->ifnum != 1) return 0; /* only set up ONCE for interace 1 */ - pcmidi_get_output_report(pm); + rc = pcmidi_get_output_report(pm); + if (rc < 0) + return rc; pcmidi_submit_output_report(pm, 0xc1); return 0; } @@ -688,7 +692,11 @@ static int pcmidi_snd_initialise(struct pcmidi_snd *pm) spin_lock_init(&pm->rawmidi_in_lock); init_sustain_timers(pm); - pcmidi_set_operational(pm); + err = pcmidi_set_operational(pm); + if (err < 0) { + pk_error("failed to find output report\n"); + goto fail_register; + } /* register it */ err = snd_card_register(card); diff --git a/drivers/hid/hid-sony.c b/drivers/hid/hid-sony.c index 6f3d47185bf0..353629aab49a 100644 --- a/drivers/hid/hid-sony.c +++ b/drivers/hid/hid-sony.c @@ -8,7 +8,7 @@ * Copyright (c) 2012 David Dillow <dave@thedillows.org> * Copyright (c) 2006-2013 Jiri Kosina * Copyright (c) 2013 Colin Leitner <colin.leitner@gmail.com> - * Copyright (c) 2014 Frank Praznik <frank.praznik@gmail.com> + * Copyright (c) 2014-2016 Frank Praznik <frank.praznik@gmail.com> */ /* @@ -36,6 +36,8 @@ #include <linux/list.h> #include <linux/idr.h> #include <linux/input/mt.h> +#include <linux/crc32.h> +#include <asm/unaligned.h> #include "hid-ids.h" @@ -46,17 +48,19 @@ #define PS3REMOTE BIT(4) #define DUALSHOCK4_CONTROLLER_USB BIT(5) #define DUALSHOCK4_CONTROLLER_BT BIT(6) -#define MOTION_CONTROLLER_USB BIT(7) -#define MOTION_CONTROLLER_BT BIT(8) -#define NAVIGATION_CONTROLLER_USB BIT(9) -#define NAVIGATION_CONTROLLER_BT BIT(10) +#define DUALSHOCK4_DONGLE BIT(7) +#define MOTION_CONTROLLER_USB BIT(8) +#define MOTION_CONTROLLER_BT BIT(9) +#define 
NAVIGATION_CONTROLLER_USB BIT(10) +#define NAVIGATION_CONTROLLER_BT BIT(11) #define SIXAXIS_CONTROLLER (SIXAXIS_CONTROLLER_USB | SIXAXIS_CONTROLLER_BT) #define MOTION_CONTROLLER (MOTION_CONTROLLER_USB | MOTION_CONTROLLER_BT) #define NAVIGATION_CONTROLLER (NAVIGATION_CONTROLLER_USB |\ NAVIGATION_CONTROLLER_BT) #define DUALSHOCK4_CONTROLLER (DUALSHOCK4_CONTROLLER_USB |\ - DUALSHOCK4_CONTROLLER_BT) + DUALSHOCK4_CONTROLLER_BT | \ + DUALSHOCK4_DONGLE) #define SONY_LED_SUPPORT (SIXAXIS_CONTROLLER | BUZZ_CONTROLLER |\ DUALSHOCK4_CONTROLLER | MOTION_CONTROLLER |\ NAVIGATION_CONTROLLER) @@ -64,95 +68,14 @@ MOTION_CONTROLLER_BT | NAVIGATION_CONTROLLER) #define SONY_FF_SUPPORT (SIXAXIS_CONTROLLER | DUALSHOCK4_CONTROLLER |\ MOTION_CONTROLLER) +#define SONY_BT_DEVICE (SIXAXIS_CONTROLLER_BT | DUALSHOCK4_CONTROLLER_BT |\ + MOTION_CONTROLLER_BT | NAVIGATION_CONTROLLER_BT) #define MAX_LEDS 4 -/* - * The Sixaxis reports both digital and analog values for each button on the - * controller except for Start, Select and the PS button. The controller ends - * up reporting 27 axes which causes them to spill over into the multi-touch - * axis values. Additionally, the controller only has 20 actual, physical axes - * so there are several unused axes in between the used ones. - */ -static __u8 sixaxis_rdesc[] = { - 0x05, 0x01, /* Usage Page (Desktop), */ - 0x09, 0x04, /* Usage (Joystick), */ - 0xA1, 0x01, /* Collection (Application), */ - 0xA1, 0x02, /* Collection (Logical), */ - 0x85, 0x01, /* Report ID (1), */ - 0x75, 0x08, /* Report Size (8), */ - 0x95, 0x01, /* Report Count (1), */ - 0x15, 0x00, /* Logical Minimum (0), */ - 0x26, 0xFF, 0x00, /* Logical Maximum (255), */ - 0x81, 0x03, /* Input (Constant, Variable), */ - 0x75, 0x01, /* Report Size (1), */ - 0x95, 0x13, /* Report Count (19), */ - 0x15, 0x00, /* Logical Minimum (0), */ - 0x25, 0x01, /* Logical Maximum (1), */ - 0x35, 0x00, /* Physical Minimum (0), */ - 0x45, 0x01, /* Physical Maximum (1), */ - 0x05, 0x09, /* Usage Page (Button), */ - 0x19, 0x01, /* Usage Minimum (01h), */ - 0x29, 0x13, /* Usage Maximum (13h), */ - 0x81, 0x02, /* Input (Variable), */ - 0x75, 0x01, /* Report Size (1), */ - 0x95, 0x0D, /* Report Count (13), */ - 0x06, 0x00, 0xFF, /* Usage Page (FF00h), */ - 0x81, 0x03, /* Input (Constant, Variable), */ - 0x15, 0x00, /* Logical Minimum (0), */ - 0x26, 0xFF, 0x00, /* Logical Maximum (255), */ - 0x05, 0x01, /* Usage Page (Desktop), */ - 0x09, 0x01, /* Usage (Pointer), */ - 0xA1, 0x00, /* Collection (Physical), */ - 0x75, 0x08, /* Report Size (8), */ - 0x95, 0x04, /* Report Count (4), */ - 0x35, 0x00, /* Physical Minimum (0), */ - 0x46, 0xFF, 0x00, /* Physical Maximum (255), */ - 0x09, 0x30, /* Usage (X), */ - 0x09, 0x31, /* Usage (Y), */ - 0x09, 0x32, /* Usage (Z), */ - 0x09, 0x35, /* Usage (Rz), */ - 0x81, 0x02, /* Input (Variable), */ - 0xC0, /* End Collection, */ - 0x05, 0x01, /* Usage Page (Desktop), */ - 0x95, 0x13, /* Report Count (19), */ - 0x09, 0x01, /* Usage (Pointer), */ - 0x81, 0x02, /* Input (Variable), */ - 0x95, 0x0C, /* Report Count (12), */ - 0x81, 0x01, /* Input (Constant), */ - 0x75, 0x10, /* Report Size (16), */ - 0x95, 0x04, /* Report Count (4), */ - 0x26, 0xFF, 0x03, /* Logical Maximum (1023), */ - 0x46, 0xFF, 0x03, /* Physical Maximum (1023), */ - 0x09, 0x01, /* Usage (Pointer), */ - 0x81, 0x02, /* Input (Variable), */ - 0xC0, /* End Collection, */ - 0xA1, 0x02, /* Collection (Logical), */ - 0x85, 0x02, /* Report ID (2), */ - 0x75, 0x08, /* Report Size (8), */ - 0x95, 0x30, /* Report Count (48), */ - 0x09, 0x01, /* 
Usage (Pointer), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0xC0, /* End Collection, */ - 0xA1, 0x02, /* Collection (Logical), */ - 0x85, 0xEE, /* Report ID (238), */ - 0x75, 0x08, /* Report Size (8), */ - 0x95, 0x30, /* Report Count (48), */ - 0x09, 0x01, /* Usage (Pointer), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0xC0, /* End Collection, */ - 0xA1, 0x02, /* Collection (Logical), */ - 0x85, 0xEF, /* Report ID (239), */ - 0x75, 0x08, /* Report Size (8), */ - 0x95, 0x30, /* Report Count (48), */ - 0x09, 0x01, /* Usage (Pointer), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0xC0, /* End Collection, */ - 0xC0 /* End Collection */ -}; /* PS/3 Motion controller */ -static __u8 motion_rdesc[] = { +static u8 motion_rdesc[] = { 0x05, 0x01, /* Usage Page (Desktop), */ 0x09, 0x04, /* Usage (Joystick), */ 0xA1, 0x01, /* Collection (Application), */ @@ -248,568 +171,7 @@ static __u8 motion_rdesc[] = { 0xC0 /* End Collection */ }; -/* PS/3 Navigation controller */ -static __u8 navigation_rdesc[] = { - 0x05, 0x01, /* Usage Page (Desktop), */ - 0x09, 0x04, /* Usage (Joystik), */ - 0xA1, 0x01, /* Collection (Application), */ - 0xA1, 0x02, /* Collection (Logical), */ - 0x85, 0x01, /* Report ID (1), */ - 0x75, 0x08, /* Report Size (8), */ - 0x95, 0x01, /* Report Count (1), */ - 0x15, 0x00, /* Logical Minimum (0), */ - 0x26, 0xFF, 0x00, /* Logical Maximum (255), */ - 0x81, 0x03, /* Input (Constant, Variable), */ - 0x75, 0x01, /* Report Size (1), */ - 0x95, 0x13, /* Report Count (19), */ - 0x15, 0x00, /* Logical Minimum (0), */ - 0x25, 0x01, /* Logical Maximum (1), */ - 0x35, 0x00, /* Physical Minimum (0), */ - 0x45, 0x01, /* Physical Maximum (1), */ - 0x05, 0x09, /* Usage Page (Button), */ - 0x19, 0x01, /* Usage Minimum (01h), */ - 0x29, 0x13, /* Usage Maximum (13h), */ - 0x81, 0x02, /* Input (Variable), */ - 0x75, 0x01, /* Report Size (1), */ - 0x95, 0x0D, /* Report Count (13), */ - 0x06, 0x00, 0xFF, /* Usage Page (FF00h), */ - 0x81, 0x03, /* Input (Constant, Variable), */ - 0x15, 0x00, /* Logical Minimum (0), */ - 0x26, 0xFF, 0x00, /* Logical Maximum (255), */ - 0x05, 0x01, /* Usage Page (Desktop), */ - 0x09, 0x01, /* Usage (Pointer), */ - 0xA1, 0x00, /* Collection (Physical), */ - 0x75, 0x08, /* Report Size (8), */ - 0x95, 0x02, /* Report Count (2), */ - 0x35, 0x00, /* Physical Minimum (0), */ - 0x46, 0xFF, 0x00, /* Physical Maximum (255), */ - 0x09, 0x30, /* Usage (X), */ - 0x09, 0x31, /* Usage (Y), */ - 0x81, 0x02, /* Input (Variable), */ - 0xC0, /* End Collection, */ - 0x06, 0x00, 0xFF, /* Usage Page (FF00h), */ - 0x95, 0x06, /* Report Count (6), */ - 0x81, 0x03, /* Input (Constant, Variable), */ - 0x05, 0x01, /* Usage Page (Desktop), */ - 0x75, 0x08, /* Report Size (8), */ - 0x95, 0x05, /* Report Count (5), */ - 0x09, 0x01, /* Usage (Pointer), */ - 0x81, 0x02, /* Input (Variable), */ - 0x06, 0x00, 0xFF, /* Usage Page (FF00h), */ - 0x95, 0x01, /* Report Count (1), */ - 0x81, 0x02, /* Input (Variable), */ - 0x05, 0x01, /* Usage Page (Desktop), */ - 0x95, 0x01, /* Report Count (1), */ - 0x09, 0x01, /* Usage (Pointer), */ - 0x81, 0x02, /* Input (Variable), */ - 0x06, 0x00, 0xFF, /* Usage Page (FF00h), */ - 0x95, 0x1E, /* Report Count (24), */ - 0x81, 0x02, /* Input (Variable), */ - 0x75, 0x08, /* Report Size (8), */ - 0x95, 0x30, /* Report Count (48), */ - 0x09, 0x01, /* Usage (Pointer), */ - 0x91, 0x02, /* Output (Variable), */ - 0x75, 0x08, /* Report Size (8), */ - 0x95, 0x30, /* Report Count (48), */ - 0x09, 0x01, /* Usage (Pointer), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0xC0, /* End 
Collection, */ - 0xA1, 0x02, /* Collection (Logical), */ - 0x85, 0x02, /* Report ID (2), */ - 0x75, 0x08, /* Report Size (8), */ - 0x95, 0x30, /* Report Count (48), */ - 0x09, 0x01, /* Usage (Pointer), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0xC0, /* End Collection, */ - 0xA1, 0x02, /* Collection (Logical), */ - 0x85, 0xEE, /* Report ID (238), */ - 0x75, 0x08, /* Report Size (8), */ - 0x95, 0x30, /* Report Count (48), */ - 0x09, 0x01, /* Usage (Pointer), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0xC0, /* End Collection, */ - 0xA1, 0x02, /* Collection (Logical), */ - 0x85, 0xEF, /* Report ID (239), */ - 0x75, 0x08, /* Report Size (8), */ - 0x95, 0x30, /* Report Count (48), */ - 0x09, 0x01, /* Usage (Pointer), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0xC0, /* End Collection, */ - 0xC0 /* End Collection */ -}; - -/* - * The default descriptor doesn't provide mapping for the accelerometers - * or orientation sensors. This fixed descriptor maps the accelerometers - * to usage values 0x40, 0x41 and 0x42 and maps the orientation sensors - * to usage values 0x43, 0x44 and 0x45. - */ -static u8 dualshock4_usb_rdesc[] = { - 0x05, 0x01, /* Usage Page (Desktop), */ - 0x09, 0x05, /* Usage (Gamepad), */ - 0xA1, 0x01, /* Collection (Application), */ - 0x85, 0x01, /* Report ID (1), */ - 0x09, 0x30, /* Usage (X), */ - 0x09, 0x31, /* Usage (Y), */ - 0x09, 0x32, /* Usage (Z), */ - 0x09, 0x35, /* Usage (Rz), */ - 0x15, 0x00, /* Logical Minimum (0), */ - 0x26, 0xFF, 0x00, /* Logical Maximum (255), */ - 0x75, 0x08, /* Report Size (8), */ - 0x95, 0x04, /* Report Count (4), */ - 0x81, 0x02, /* Input (Variable), */ - 0x09, 0x39, /* Usage (Hat Switch), */ - 0x15, 0x00, /* Logical Minimum (0), */ - 0x25, 0x07, /* Logical Maximum (7), */ - 0x35, 0x00, /* Physical Minimum (0), */ - 0x46, 0x3B, 0x01, /* Physical Maximum (315), */ - 0x65, 0x14, /* Unit (Degrees), */ - 0x75, 0x04, /* Report Size (4), */ - 0x95, 0x01, /* Report Count (1), */ - 0x81, 0x42, /* Input (Variable, Null State), */ - 0x65, 0x00, /* Unit, */ - 0x05, 0x09, /* Usage Page (Button), */ - 0x19, 0x01, /* Usage Minimum (01h), */ - 0x29, 0x0E, /* Usage Maximum (0Eh), */ - 0x15, 0x00, /* Logical Minimum (0), */ - 0x25, 0x01, /* Logical Maximum (1), */ - 0x75, 0x01, /* Report Size (1), */ - 0x95, 0x0E, /* Report Count (14), */ - 0x81, 0x02, /* Input (Variable), */ - 0x06, 0x00, 0xFF, /* Usage Page (FF00h), */ - 0x09, 0x20, /* Usage (20h), */ - 0x75, 0x06, /* Report Size (6), */ - 0x95, 0x01, /* Report Count (1), */ - 0x15, 0x00, /* Logical Minimum (0), */ - 0x25, 0x3F, /* Logical Maximum (63), */ - 0x81, 0x02, /* Input (Variable), */ - 0x05, 0x01, /* Usage Page (Desktop), */ - 0x09, 0x33, /* Usage (Rx), */ - 0x09, 0x34, /* Usage (Ry), */ - 0x15, 0x00, /* Logical Minimum (0), */ - 0x26, 0xFF, 0x00, /* Logical Maximum (255), */ - 0x75, 0x08, /* Report Size (8), */ - 0x95, 0x02, /* Report Count (2), */ - 0x81, 0x02, /* Input (Variable), */ - 0x06, 0x00, 0xFF, /* Usage Page (FF00h), */ - 0x09, 0x21, /* Usage (21h), */ - 0x95, 0x03, /* Report Count (3), */ - 0x81, 0x02, /* Input (Variable), */ - 0x05, 0x01, /* Usage Page (Desktop), */ - 0x19, 0x40, /* Usage Minimum (40h), */ - 0x29, 0x42, /* Usage Maximum (42h), */ - 0x16, 0x00, 0x80, /* Logical Minimum (-32768), */ - 0x26, 0x00, 0x7F, /* Logical Maximum (32767), */ - 0x75, 0x10, /* Report Size (16), */ - 0x95, 0x03, /* Report Count (3), */ - 0x81, 0x02, /* Input (Variable), */ - 0x19, 0x43, /* Usage Minimum (43h), */ - 0x29, 0x45, /* Usage Maximum (45h), */ - 0x16, 0x00, 0xE0, /* Logical Minimum 
(-8192), */ - 0x26, 0xFF, 0x1F, /* Logical Maximum (8191), */ - 0x95, 0x03, /* Report Count (3), */ - 0x81, 0x02, /* Input (Variable), */ - 0x06, 0x00, 0xFF, /* Usage Page (FF00h), */ - 0x09, 0x21, /* Usage (21h), */ - 0x15, 0x00, /* Logical Minimum (0), */ - 0x26, 0xFF, 0x00, /* Logical Maximum (255), */ - 0x75, 0x08, /* Report Size (8), */ - 0x95, 0x27, /* Report Count (39), */ - 0x81, 0x02, /* Input (Variable), */ - 0x85, 0x05, /* Report ID (5), */ - 0x09, 0x22, /* Usage (22h), */ - 0x95, 0x1F, /* Report Count (31), */ - 0x91, 0x02, /* Output (Variable), */ - 0x85, 0x04, /* Report ID (4), */ - 0x09, 0x23, /* Usage (23h), */ - 0x95, 0x24, /* Report Count (36), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0x02, /* Report ID (2), */ - 0x09, 0x24, /* Usage (24h), */ - 0x95, 0x24, /* Report Count (36), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0x08, /* Report ID (8), */ - 0x09, 0x25, /* Usage (25h), */ - 0x95, 0x03, /* Report Count (3), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0x10, /* Report ID (16), */ - 0x09, 0x26, /* Usage (26h), */ - 0x95, 0x04, /* Report Count (4), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0x11, /* Report ID (17), */ - 0x09, 0x27, /* Usage (27h), */ - 0x95, 0x02, /* Report Count (2), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0x12, /* Report ID (18), */ - 0x06, 0x02, 0xFF, /* Usage Page (FF02h), */ - 0x09, 0x21, /* Usage (21h), */ - 0x95, 0x0F, /* Report Count (15), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0x13, /* Report ID (19), */ - 0x09, 0x22, /* Usage (22h), */ - 0x95, 0x16, /* Report Count (22), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0x14, /* Report ID (20), */ - 0x06, 0x05, 0xFF, /* Usage Page (FF05h), */ - 0x09, 0x20, /* Usage (20h), */ - 0x95, 0x10, /* Report Count (16), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0x15, /* Report ID (21), */ - 0x09, 0x21, /* Usage (21h), */ - 0x95, 0x2C, /* Report Count (44), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x06, 0x80, 0xFF, /* Usage Page (FF80h), */ - 0x85, 0x80, /* Report ID (128), */ - 0x09, 0x20, /* Usage (20h), */ - 0x95, 0x06, /* Report Count (6), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0x81, /* Report ID (129), */ - 0x09, 0x21, /* Usage (21h), */ - 0x95, 0x06, /* Report Count (6), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0x82, /* Report ID (130), */ - 0x09, 0x22, /* Usage (22h), */ - 0x95, 0x05, /* Report Count (5), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0x83, /* Report ID (131), */ - 0x09, 0x23, /* Usage (23h), */ - 0x95, 0x01, /* Report Count (1), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0x84, /* Report ID (132), */ - 0x09, 0x24, /* Usage (24h), */ - 0x95, 0x04, /* Report Count (4), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0x85, /* Report ID (133), */ - 0x09, 0x25, /* Usage (25h), */ - 0x95, 0x06, /* Report Count (6), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0x86, /* Report ID (134), */ - 0x09, 0x26, /* Usage (26h), */ - 0x95, 0x06, /* Report Count (6), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0x87, /* Report ID (135), */ - 0x09, 0x27, /* Usage (27h), */ - 0x95, 0x23, /* Report Count (35), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0x88, /* Report ID (136), */ - 0x09, 0x28, /* Usage (28h), */ - 0x95, 0x22, /* Report Count (34), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0x89, /* Report ID (137), */ - 0x09, 0x29, /* Usage (29h), */ - 0x95, 0x02, /* Report Count (2), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0x90, /* Report ID (144), */ - 
0x09, 0x30, /* Usage (30h), */ - 0x95, 0x05, /* Report Count (5), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0x91, /* Report ID (145), */ - 0x09, 0x31, /* Usage (31h), */ - 0x95, 0x03, /* Report Count (3), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0x92, /* Report ID (146), */ - 0x09, 0x32, /* Usage (32h), */ - 0x95, 0x03, /* Report Count (3), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0x93, /* Report ID (147), */ - 0x09, 0x33, /* Usage (33h), */ - 0x95, 0x0C, /* Report Count (12), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0xA0, /* Report ID (160), */ - 0x09, 0x40, /* Usage (40h), */ - 0x95, 0x06, /* Report Count (6), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0xA1, /* Report ID (161), */ - 0x09, 0x41, /* Usage (41h), */ - 0x95, 0x01, /* Report Count (1), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0xA2, /* Report ID (162), */ - 0x09, 0x42, /* Usage (42h), */ - 0x95, 0x01, /* Report Count (1), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0xA3, /* Report ID (163), */ - 0x09, 0x43, /* Usage (43h), */ - 0x95, 0x30, /* Report Count (48), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0xA4, /* Report ID (164), */ - 0x09, 0x44, /* Usage (44h), */ - 0x95, 0x0D, /* Report Count (13), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0xA5, /* Report ID (165), */ - 0x09, 0x45, /* Usage (45h), */ - 0x95, 0x15, /* Report Count (21), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0xA6, /* Report ID (166), */ - 0x09, 0x46, /* Usage (46h), */ - 0x95, 0x15, /* Report Count (21), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0xF0, /* Report ID (240), */ - 0x09, 0x47, /* Usage (47h), */ - 0x95, 0x3F, /* Report Count (63), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0xF1, /* Report ID (241), */ - 0x09, 0x48, /* Usage (48h), */ - 0x95, 0x3F, /* Report Count (63), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0xF2, /* Report ID (242), */ - 0x09, 0x49, /* Usage (49h), */ - 0x95, 0x0F, /* Report Count (15), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0xA7, /* Report ID (167), */ - 0x09, 0x4A, /* Usage (4Ah), */ - 0x95, 0x01, /* Report Count (1), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0xA8, /* Report ID (168), */ - 0x09, 0x4B, /* Usage (4Bh), */ - 0x95, 0x01, /* Report Count (1), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0xA9, /* Report ID (169), */ - 0x09, 0x4C, /* Usage (4Ch), */ - 0x95, 0x08, /* Report Count (8), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0xAA, /* Report ID (170), */ - 0x09, 0x4E, /* Usage (4Eh), */ - 0x95, 0x01, /* Report Count (1), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0xAB, /* Report ID (171), */ - 0x09, 0x4F, /* Usage (4Fh), */ - 0x95, 0x39, /* Report Count (57), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0xAC, /* Report ID (172), */ - 0x09, 0x50, /* Usage (50h), */ - 0x95, 0x39, /* Report Count (57), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0xAD, /* Report ID (173), */ - 0x09, 0x51, /* Usage (51h), */ - 0x95, 0x0B, /* Report Count (11), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0xAE, /* Report ID (174), */ - 0x09, 0x52, /* Usage (52h), */ - 0x95, 0x01, /* Report Count (1), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0xAF, /* Report ID (175), */ - 0x09, 0x53, /* Usage (53h), */ - 0x95, 0x02, /* Report Count (2), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0xB0, /* Report ID (176), */ - 0x09, 0x54, /* Usage (54h), */ - 0x95, 0x3F, /* Report Count (63), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0xC0 /* End Collection */ 
-}; - -/* - * The default behavior of the Dualshock 4 is to send reports using report - * type 1 when running over Bluetooth. However, when feature report 2 is - * requested during the controller initialization it starts sending input - * reports in report 17. Since report 17 is undefined in the default HID - * descriptor the button and axis definitions must be moved to report 17 or - * the HID layer won't process the received input. - */ -static u8 dualshock4_bt_rdesc[] = { - 0x05, 0x01, /* Usage Page (Desktop), */ - 0x09, 0x05, /* Usage (Gamepad), */ - 0xA1, 0x01, /* Collection (Application), */ - 0x85, 0x01, /* Report ID (1), */ - 0x75, 0x08, /* Report Size (8), */ - 0x95, 0x0A, /* Report Count (9), */ - 0x81, 0x02, /* Input (Variable), */ - 0x06, 0x04, 0xFF, /* Usage Page (FF04h), */ - 0x85, 0x02, /* Report ID (2), */ - 0x09, 0x24, /* Usage (24h), */ - 0x95, 0x24, /* Report Count (36), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0xA3, /* Report ID (163), */ - 0x09, 0x25, /* Usage (25h), */ - 0x95, 0x30, /* Report Count (48), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0x05, /* Report ID (5), */ - 0x09, 0x26, /* Usage (26h), */ - 0x95, 0x28, /* Report Count (40), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0x06, /* Report ID (6), */ - 0x09, 0x27, /* Usage (27h), */ - 0x95, 0x34, /* Report Count (52), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0x07, /* Report ID (7), */ - 0x09, 0x28, /* Usage (28h), */ - 0x95, 0x30, /* Report Count (48), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0x08, /* Report ID (8), */ - 0x09, 0x29, /* Usage (29h), */ - 0x95, 0x2F, /* Report Count (47), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x06, 0x03, 0xFF, /* Usage Page (FF03h), */ - 0x85, 0x03, /* Report ID (3), */ - 0x09, 0x21, /* Usage (21h), */ - 0x95, 0x26, /* Report Count (38), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0x04, /* Report ID (4), */ - 0x09, 0x22, /* Usage (22h), */ - 0x95, 0x2E, /* Report Count (46), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0xF0, /* Report ID (240), */ - 0x09, 0x47, /* Usage (47h), */ - 0x95, 0x3F, /* Report Count (63), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0xF1, /* Report ID (241), */ - 0x09, 0x48, /* Usage (48h), */ - 0x95, 0x3F, /* Report Count (63), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0xF2, /* Report ID (242), */ - 0x09, 0x49, /* Usage (49h), */ - 0x95, 0x0F, /* Report Count (15), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0x11, /* Report ID (17), */ - 0x06, 0x00, 0xFF, /* Usage Page (FF00h), */ - 0x09, 0x20, /* Usage (20h), */ - 0x95, 0x02, /* Report Count (2), */ - 0x81, 0x02, /* Input (Variable), */ - 0x05, 0x01, /* Usage Page (Desktop), */ - 0x09, 0x30, /* Usage (X), */ - 0x09, 0x31, /* Usage (Y), */ - 0x09, 0x32, /* Usage (Z), */ - 0x09, 0x35, /* Usage (Rz), */ - 0x15, 0x00, /* Logical Minimum (0), */ - 0x26, 0xFF, 0x00, /* Logical Maximum (255), */ - 0x75, 0x08, /* Report Size (8), */ - 0x95, 0x04, /* Report Count (4), */ - 0x81, 0x02, /* Input (Variable), */ - 0x09, 0x39, /* Usage (Hat Switch), */ - 0x15, 0x00, /* Logical Minimum (0), */ - 0x25, 0x07, /* Logical Maximum (7), */ - 0x75, 0x04, /* Report Size (4), */ - 0x95, 0x01, /* Report Count (1), */ - 0x81, 0x42, /* Input (Variable, Null State), */ - 0x05, 0x09, /* Usage Page (Button), */ - 0x19, 0x01, /* Usage Minimum (01h), */ - 0x29, 0x0E, /* Usage Maximum (0Eh), */ - 0x15, 0x00, /* Logical Minimum (0), */ - 0x25, 0x01, /* Logical Maximum (1), */ - 0x75, 0x01, /* Report Size (1), */ - 0x95, 0x0E, /* Report Count 
(14), */ - 0x81, 0x02, /* Input (Variable), */ - 0x75, 0x06, /* Report Size (6), */ - 0x95, 0x01, /* Report Count (1), */ - 0x81, 0x01, /* Input (Constant), */ - 0x05, 0x01, /* Usage Page (Desktop), */ - 0x09, 0x33, /* Usage (Rx), */ - 0x09, 0x34, /* Usage (Ry), */ - 0x15, 0x00, /* Logical Minimum (0), */ - 0x26, 0xFF, 0x00, /* Logical Maximum (255), */ - 0x75, 0x08, /* Report Size (8), */ - 0x95, 0x02, /* Report Count (2), */ - 0x81, 0x02, /* Input (Variable), */ - 0x06, 0x00, 0xFF, /* Usage Page (FF00h), */ - 0x09, 0x20, /* Usage (20h), */ - 0x95, 0x03, /* Report Count (3), */ - 0x81, 0x02, /* Input (Variable), */ - 0x05, 0x01, /* Usage Page (Desktop), */ - 0x19, 0x40, /* Usage Minimum (40h), */ - 0x29, 0x42, /* Usage Maximum (42h), */ - 0x16, 0x00, 0x80, /* Logical Minimum (-32768), */ - 0x26, 0x00, 0x7F, /* Logical Maximum (32767), */ - 0x75, 0x10, /* Report Size (16), */ - 0x95, 0x03, /* Report Count (3), */ - 0x81, 0x02, /* Input (Variable), */ - 0x19, 0x43, /* Usage Minimum (43h), */ - 0x29, 0x45, /* Usage Maximum (45h), */ - 0x16, 0x00, 0xE0, /* Logical Minimum (-8192), */ - 0x26, 0xFF, 0x1F, /* Logical Maximum (8191), */ - 0x95, 0x03, /* Report Count (3), */ - 0x81, 0x02, /* Input (Variable), */ - 0x06, 0x00, 0xFF, /* Usage Page (FF00h), */ - 0x09, 0x20, /* Usage (20h), */ - 0x15, 0x00, /* Logical Minimum (0), */ - 0x26, 0xFF, 0x00, /* Logical Maximum (255), */ - 0x75, 0x08, /* Report Size (8), */ - 0x95, 0x31, /* Report Count (51), */ - 0x81, 0x02, /* Input (Variable), */ - 0x09, 0x21, /* Usage (21h), */ - 0x75, 0x08, /* Report Size (8), */ - 0x95, 0x4D, /* Report Count (77), */ - 0x91, 0x02, /* Output (Variable), */ - 0x85, 0x12, /* Report ID (18), */ - 0x09, 0x22, /* Usage (22h), */ - 0x95, 0x8D, /* Report Count (141), */ - 0x81, 0x02, /* Input (Variable), */ - 0x09, 0x23, /* Usage (23h), */ - 0x91, 0x02, /* Output (Variable), */ - 0x85, 0x13, /* Report ID (19), */ - 0x09, 0x24, /* Usage (24h), */ - 0x95, 0xCD, /* Report Count (205), */ - 0x81, 0x02, /* Input (Variable), */ - 0x09, 0x25, /* Usage (25h), */ - 0x91, 0x02, /* Output (Variable), */ - 0x85, 0x14, /* Report ID (20), */ - 0x09, 0x26, /* Usage (26h), */ - 0x96, 0x0D, 0x01, /* Report Count (269), */ - 0x81, 0x02, /* Input (Variable), */ - 0x09, 0x27, /* Usage (27h), */ - 0x91, 0x02, /* Output (Variable), */ - 0x85, 0x15, /* Report ID (21), */ - 0x09, 0x28, /* Usage (28h), */ - 0x96, 0x4D, 0x01, /* Report Count (333), */ - 0x81, 0x02, /* Input (Variable), */ - 0x09, 0x29, /* Usage (29h), */ - 0x91, 0x02, /* Output (Variable), */ - 0x85, 0x16, /* Report ID (22), */ - 0x09, 0x2A, /* Usage (2Ah), */ - 0x96, 0x8D, 0x01, /* Report Count (397), */ - 0x81, 0x02, /* Input (Variable), */ - 0x09, 0x2B, /* Usage (2Bh), */ - 0x91, 0x02, /* Output (Variable), */ - 0x85, 0x17, /* Report ID (23), */ - 0x09, 0x2C, /* Usage (2Ch), */ - 0x96, 0xCD, 0x01, /* Report Count (461), */ - 0x81, 0x02, /* Input (Variable), */ - 0x09, 0x2D, /* Usage (2Dh), */ - 0x91, 0x02, /* Output (Variable), */ - 0x85, 0x18, /* Report ID (24), */ - 0x09, 0x2E, /* Usage (2Eh), */ - 0x96, 0x0D, 0x02, /* Report Count (525), */ - 0x81, 0x02, /* Input (Variable), */ - 0x09, 0x2F, /* Usage (2Fh), */ - 0x91, 0x02, /* Output (Variable), */ - 0x85, 0x19, /* Report ID (25), */ - 0x09, 0x30, /* Usage (30h), */ - 0x96, 0x22, 0x02, /* Report Count (546), */ - 0x81, 0x02, /* Input (Variable), */ - 0x09, 0x31, /* Usage (31h), */ - 0x91, 0x02, /* Output (Variable), */ - 0x06, 0x80, 0xFF, /* Usage Page (FF80h), */ - 0x85, 0x82, /* Report ID (130), */ - 0x09, 0x22, /* Usage 
(22h), */ - 0x95, 0x3F, /* Report Count (63), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0x83, /* Report ID (131), */ - 0x09, 0x23, /* Usage (23h), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0x84, /* Report ID (132), */ - 0x09, 0x24, /* Usage (24h), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0x90, /* Report ID (144), */ - 0x09, 0x30, /* Usage (30h), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0x91, /* Report ID (145), */ - 0x09, 0x31, /* Usage (31h), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0x92, /* Report ID (146), */ - 0x09, 0x32, /* Usage (32h), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0x93, /* Report ID (147), */ - 0x09, 0x33, /* Usage (33h), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0xA0, /* Report ID (160), */ - 0x09, 0x40, /* Usage (40h), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0x85, 0xA4, /* Report ID (164), */ - 0x09, 0x44, /* Usage (44h), */ - 0xB1, 0x02, /* Feature (Variable), */ - 0xC0 /* End Collection */ -}; - -static __u8 ps3remote_rdesc[] = { +static u8 ps3remote_rdesc[] = { 0x05, 0x01, /* GUsagePage Generic Desktop */ 0x09, 0x05, /* LUsage 0x05 [Game Pad] */ 0xA1, 0x01, /* MCollection Application (mouse, keyboard) */ @@ -817,14 +179,18 @@ static __u8 ps3remote_rdesc[] = { /* Use collection 1 for joypad buttons */ 0xA1, 0x02, /* MCollection Logical (interrelated data) */ - /* Ignore the 1st byte, maybe it is used for a controller - * number but it's not needed for correct operation */ + /* + * Ignore the 1st byte, maybe it is used for a controller + * number but it's not needed for correct operation + */ 0x75, 0x08, /* GReportSize 0x08 [8] */ 0x95, 0x01, /* GReportCount 0x01 [1] */ 0x81, 0x01, /* MInput 0x01 (Const[0] Arr[1] Abs[2]) */ - /* Bytes from 2nd to 4th are a bitmap for joypad buttons, for these - * buttons multiple keypresses are allowed */ + /* + * Bytes from 2nd to 4th are a bitmap for joypad buttons, for these + * buttons multiple keypresses are allowed + */ 0x05, 0x09, /* GUsagePage Button */ 0x19, 0x01, /* LUsageMinimum 0x01 [Button 1 (primary/trigger)] */ 0x29, 0x18, /* LUsageMaximum 0x18 [Button 24] */ @@ -849,8 +215,10 @@ static __u8 ps3remote_rdesc[] = { 0x95, 0x01, /* GReportCount 0x01 [1] */ 0x80, /* MInput */ - /* Ignore bytes from 6th to 11th, 6th to 10th are always constant at - * 0xff and 11th is for press indication */ + /* + * Ignore bytes from 6th to 11th, 6th to 10th are always constant at + * 0xff and 11th is for press indication + */ 0x75, 0x08, /* GReportSize 0x08 [8] */ 0x95, 0x06, /* GReportCount 0x06 [6] */ 0x81, 0x01, /* MInput 0x01 (Const[0] Arr[1] Abs[2]) */ @@ -929,7 +297,7 @@ static const unsigned int buzz_keymap[] = { /* * The controller has 4 remote buzzers, each with one LED and 5 * buttons. 
- * + * * We use the mapping chosen by the controller, which is: * * Key Offset @@ -943,15 +311,15 @@ static const unsigned int buzz_keymap[] = { * So, for example, the orange button on the third buzzer is mapped to * BTN_TRIGGER_HAPPY14 */ - [ 1] = BTN_TRIGGER_HAPPY1, - [ 2] = BTN_TRIGGER_HAPPY2, - [ 3] = BTN_TRIGGER_HAPPY3, - [ 4] = BTN_TRIGGER_HAPPY4, - [ 5] = BTN_TRIGGER_HAPPY5, - [ 6] = BTN_TRIGGER_HAPPY6, - [ 7] = BTN_TRIGGER_HAPPY7, - [ 8] = BTN_TRIGGER_HAPPY8, - [ 9] = BTN_TRIGGER_HAPPY9, + [1] = BTN_TRIGGER_HAPPY1, + [2] = BTN_TRIGGER_HAPPY2, + [3] = BTN_TRIGGER_HAPPY3, + [4] = BTN_TRIGGER_HAPPY4, + [5] = BTN_TRIGGER_HAPPY5, + [6] = BTN_TRIGGER_HAPPY6, + [7] = BTN_TRIGGER_HAPPY7, + [8] = BTN_TRIGGER_HAPPY8, + [9] = BTN_TRIGGER_HAPPY9, [10] = BTN_TRIGGER_HAPPY10, [11] = BTN_TRIGGER_HAPPY11, [12] = BTN_TRIGGER_HAPPY12, @@ -965,6 +333,97 @@ static const unsigned int buzz_keymap[] = { [20] = BTN_TRIGGER_HAPPY20, }; +/* The Navigation controller is a partial DS3 and uses the same HID report + * and hence the same keymap indices, however not not all axes/buttons + * are physically present. We use the same axis and button mapping as + * the DS3, which uses the Linux gamepad spec. + */ +static const unsigned int navigation_absmap[] = { + [0x30] = ABS_X, + [0x31] = ABS_Y, + [0x33] = ABS_Z, /* L2 */ +}; + +/* Buttons not physically available on the device, but still available + * in the reports are explicitly set to 0 for documentation purposes. + */ +static const unsigned int navigation_keymap[] = { + [0x01] = 0, /* Select */ + [0x02] = BTN_THUMBL, /* L3 */ + [0x03] = 0, /* R3 */ + [0x04] = 0, /* Start */ + [0x05] = BTN_DPAD_UP, /* Up */ + [0x06] = BTN_DPAD_RIGHT, /* Right */ + [0x07] = BTN_DPAD_DOWN, /* Down */ + [0x08] = BTN_DPAD_LEFT, /* Left */ + [0x09] = BTN_TL2, /* L2 */ + [0x0a] = 0, /* R2 */ + [0x0b] = BTN_TL, /* L1 */ + [0x0c] = 0, /* R1 */ + [0x0d] = BTN_NORTH, /* Triangle */ + [0x0e] = BTN_EAST, /* Circle */ + [0x0f] = BTN_SOUTH, /* Cross */ + [0x10] = BTN_WEST, /* Square */ + [0x11] = BTN_MODE, /* PS */ +}; + +static const unsigned int sixaxis_absmap[] = { + [0x30] = ABS_X, + [0x31] = ABS_Y, + [0x32] = ABS_RX, /* right stick X */ + [0x35] = ABS_RY, /* right stick Y */ +}; + +static const unsigned int sixaxis_keymap[] = { + [0x01] = BTN_SELECT, /* Select */ + [0x02] = BTN_THUMBL, /* L3 */ + [0x03] = BTN_THUMBR, /* R3 */ + [0x04] = BTN_START, /* Start */ + [0x05] = BTN_DPAD_UP, /* Up */ + [0x06] = BTN_DPAD_RIGHT, /* Right */ + [0x07] = BTN_DPAD_DOWN, /* Down */ + [0x08] = BTN_DPAD_LEFT, /* Left */ + [0x09] = BTN_TL2, /* L2 */ + [0x0a] = BTN_TR2, /* R2 */ + [0x0b] = BTN_TL, /* L1 */ + [0x0c] = BTN_TR, /* R1 */ + [0x0d] = BTN_NORTH, /* Triangle */ + [0x0e] = BTN_EAST, /* Circle */ + [0x0f] = BTN_SOUTH, /* Cross */ + [0x10] = BTN_WEST, /* Square */ + [0x11] = BTN_MODE, /* PS */ +}; + +static const unsigned int ds4_absmap[] = { + [0x30] = ABS_X, + [0x31] = ABS_Y, + [0x32] = ABS_RX, /* right stick X */ + [0x33] = ABS_Z, /* L2 */ + [0x34] = ABS_RZ, /* R2 */ + [0x35] = ABS_RY, /* right stick Y */ +}; + +static const unsigned int ds4_keymap[] = { + [0x1] = BTN_WEST, /* Square */ + [0x2] = BTN_SOUTH, /* Cross */ + [0x3] = BTN_EAST, /* Circle */ + [0x4] = BTN_NORTH, /* Triangle */ + [0x5] = BTN_TL, /* L1 */ + [0x6] = BTN_TR, /* R1 */ + [0x7] = BTN_TL2, /* L2 */ + [0x8] = BTN_TR2, /* R2 */ + [0x9] = BTN_SELECT, /* Share */ + [0xa] = BTN_START, /* Options */ + [0xb] = BTN_THUMBL, /* L3 */ + [0xc] = BTN_THUMBR, /* R3 */ + [0xd] = BTN_MODE, /* PS */ +}; + +static const struct {int x; int y; } 
ds4_hat_mapping[] = { + {0, -1}, {1, -1}, {1, 0}, {1, 1}, {0, 1}, {-1, 1}, {-1, 0}, {-1, -1}, + {0, 0} +}; + static enum power_supply_property sony_battery_props[] = { POWER_SUPPLY_PROP_PRESENT, POWER_SUPPLY_PROP_CAPACITY, @@ -973,33 +432,33 @@ static enum power_supply_property sony_battery_props[] = { }; struct sixaxis_led { - __u8 time_enabled; /* the total time the led is active (0xff means forever) */ - __u8 duty_length; /* how long a cycle is in deciseconds (0 means "really fast") */ - __u8 enabled; - __u8 duty_off; /* % of duty_length the led is off (0xff means 100%) */ - __u8 duty_on; /* % of duty_length the led is on (0xff mean 100%) */ + u8 time_enabled; /* the total time the led is active (0xff means forever) */ + u8 duty_length; /* how long a cycle is in deciseconds (0 means "really fast") */ + u8 enabled; + u8 duty_off; /* % of duty_length the led is off (0xff means 100%) */ + u8 duty_on; /* % of duty_length the led is on (0xff mean 100%) */ } __packed; struct sixaxis_rumble { - __u8 padding; - __u8 right_duration; /* Right motor duration (0xff means forever) */ - __u8 right_motor_on; /* Right (small) motor on/off, only supports values of 0 or 1 (off/on) */ - __u8 left_duration; /* Left motor duration (0xff means forever) */ - __u8 left_motor_force; /* left (large) motor, supports force values from 0 to 255 */ + u8 padding; + u8 right_duration; /* Right motor duration (0xff means forever) */ + u8 right_motor_on; /* Right (small) motor on/off, only supports values of 0 or 1 (off/on) */ + u8 left_duration; /* Left motor duration (0xff means forever) */ + u8 left_motor_force; /* left (large) motor, supports force values from 0 to 255 */ } __packed; struct sixaxis_output_report { - __u8 report_id; + u8 report_id; struct sixaxis_rumble rumble; - __u8 padding[4]; - __u8 leds_bitmap; /* bitmap of enabled LEDs: LED_1 = 0x02, LED_2 = 0x04, ... */ + u8 padding[4]; + u8 leds_bitmap; /* bitmap of enabled LEDs: LED_1 = 0x02, LED_2 = 0x04, ... */ struct sixaxis_led led[4]; /* LEDx at (4 - x) */ struct sixaxis_led _reserved; /* LED5, not actually soldered */ } __packed; union sixaxis_output_report_01 { struct sixaxis_output_report data; - __u8 buf[36]; + u8 buf[36]; }; struct motion_output_report_02 { @@ -1009,68 +468,176 @@ struct motion_output_report_02 { u8 rumble; }; -#define DS4_REPORT_0x02_SIZE 37 -#define DS4_REPORT_0x05_SIZE 32 -#define DS4_REPORT_0x11_SIZE 78 -#define DS4_REPORT_0x81_SIZE 7 +#define DS4_FEATURE_REPORT_0x02_SIZE 37 +#define DS4_FEATURE_REPORT_0x05_SIZE 41 +#define DS4_FEATURE_REPORT_0x81_SIZE 7 +#define DS4_INPUT_REPORT_0x11_SIZE 78 +#define DS4_OUTPUT_REPORT_0x05_SIZE 32 +#define DS4_OUTPUT_REPORT_0x11_SIZE 78 #define SIXAXIS_REPORT_0xF2_SIZE 17 #define SIXAXIS_REPORT_0xF5_SIZE 8 #define MOTION_REPORT_0x02_SIZE 49 +/* Offsets relative to USB input report (0x1). Bluetooth (0x11) requires an + * additional +2. + */ +#define DS4_INPUT_REPORT_AXIS_OFFSET 1 +#define DS4_INPUT_REPORT_BUTTON_OFFSET 5 +#define DS4_INPUT_REPORT_TIMESTAMP_OFFSET 10 +#define DS4_INPUT_REPORT_GYRO_X_OFFSET 13 +#define DS4_INPUT_REPORT_BATTERY_OFFSET 30 +#define DS4_INPUT_REPORT_TOUCHPAD_OFFSET 33 + +#define SENSOR_SUFFIX " Motion Sensors" +#define DS4_TOUCHPAD_SUFFIX " Touchpad" + +/* Default to 4ms poll interval, which is same as USB (not adjustable). 
*/ +#define DS4_BT_DEFAULT_POLL_INTERVAL_MS 4 +#define DS4_BT_MAX_POLL_INTERVAL_MS 62 +#define DS4_GYRO_RES_PER_DEG_S 1024 +#define DS4_ACC_RES_PER_G 8192 + +#define SIXAXIS_INPUT_REPORT_ACC_X_OFFSET 41 +#define SIXAXIS_ACC_RES_PER_G 113 + static DEFINE_SPINLOCK(sony_dev_list_lock); static LIST_HEAD(sony_device_list); static DEFINE_IDA(sony_device_id_allocator); +/* Used for calibration of DS4 accelerometer and gyro. */ +struct ds4_calibration_data { + int abs_code; + short bias; + /* Calibration requires scaling against a sensitivity value, which is a + * float. Store sensitivity as a fraction to limit floating point + * calculations until final calibration. + */ + int sens_numer; + int sens_denom; +}; + +enum ds4_dongle_state { + DONGLE_DISCONNECTED, + DONGLE_CALIBRATING, + DONGLE_CONNECTED, + DONGLE_DISABLED +}; + +enum sony_worker { + SONY_WORKER_STATE, + SONY_WORKER_HOTPLUG +}; + struct sony_sc { spinlock_t lock; struct list_head list_node; struct hid_device *hdev; + struct input_dev *touchpad; + struct input_dev *sensor_dev; struct led_classdev *leds[MAX_LEDS]; unsigned long quirks; + struct work_struct hotplug_worker; struct work_struct state_worker; + void (*send_output_report)(struct sony_sc *); struct power_supply *battery; struct power_supply_desc battery_desc; int device_id; - __u8 *output_report_dmabuf; + u8 *output_report_dmabuf; #ifdef CONFIG_SONY_FF - __u8 left; - __u8 right; + u8 left; + u8 right; #endif - __u8 mac_address[6]; - __u8 worker_initialized; - __u8 cable_state; - __u8 battery_charging; - __u8 battery_capacity; - __u8 led_state[MAX_LEDS]; - __u8 led_delay_on[MAX_LEDS]; - __u8 led_delay_off[MAX_LEDS]; - __u8 led_count; + u8 mac_address[6]; + u8 hotplug_worker_initialized; + u8 state_worker_initialized; + u8 defer_initialization; + u8 cable_state; + u8 battery_charging; + u8 battery_capacity; + u8 led_state[MAX_LEDS]; + u8 led_delay_on[MAX_LEDS]; + u8 led_delay_off[MAX_LEDS]; + u8 led_count; + + bool timestamp_initialized; + u16 prev_timestamp; + unsigned int timestamp_us; + + u8 ds4_bt_poll_interval; + enum ds4_dongle_state ds4_dongle_state; + /* DS4 calibration data */ + struct ds4_calibration_data ds4_calib_data[6]; }; -static __u8 *sixaxis_fixup(struct hid_device *hdev, __u8 *rdesc, - unsigned int *rsize) +static void sony_set_leds(struct sony_sc *sc); + +static inline void sony_schedule_work(struct sony_sc *sc, + enum sony_worker which) { - *rsize = sizeof(sixaxis_rdesc); - return sixaxis_rdesc; + unsigned long flags; + + switch (which) { + case SONY_WORKER_STATE: + spin_lock_irqsave(&sc->lock, flags); + if (!sc->defer_initialization && sc->state_worker_initialized) + schedule_work(&sc->state_worker); + spin_unlock_irqrestore(&sc->lock, flags); + break; + case SONY_WORKER_HOTPLUG: + if (sc->hotplug_worker_initialized) + schedule_work(&sc->hotplug_worker); + break; + } } -static u8 *motion_fixup(struct hid_device *hdev, u8 *rdesc, - unsigned int *rsize) +static ssize_t ds4_show_poll_interval(struct device *dev, + struct device_attribute + *attr, char *buf) { - *rsize = sizeof(motion_rdesc); - return motion_rdesc; + struct hid_device *hdev = container_of(dev, struct hid_device, dev); + struct sony_sc *sc = hid_get_drvdata(hdev); + + return snprintf(buf, PAGE_SIZE, "%i\n", sc->ds4_bt_poll_interval); +} + +static ssize_t ds4_store_poll_interval(struct device *dev, + struct device_attribute *attr, + const char *buf, size_t count) +{ + struct hid_device *hdev = container_of(dev, struct hid_device, dev); + struct sony_sc *sc = hid_get_drvdata(hdev); + unsigned long 
flags; + u8 interval; + + if (kstrtou8(buf, 0, &interval)) + return -EINVAL; + + if (interval > DS4_BT_MAX_POLL_INTERVAL_MS) + return -EINVAL; + + spin_lock_irqsave(&sc->lock, flags); + sc->ds4_bt_poll_interval = interval; + spin_unlock_irqrestore(&sc->lock, flags); + + sony_schedule_work(sc, SONY_WORKER_STATE); + + return count; } -static u8 *navigation_fixup(struct hid_device *hdev, u8 *rdesc, +static DEVICE_ATTR(bt_poll_interval, 0644, ds4_show_poll_interval, + ds4_store_poll_interval); + + +static u8 *motion_fixup(struct hid_device *hdev, u8 *rdesc, unsigned int *rsize) { - *rsize = sizeof(navigation_rdesc); - return navigation_rdesc; + *rsize = sizeof(motion_rdesc); + return motion_rdesc; } -static __u8 *ps3remote_fixup(struct hid_device *hdev, __u8 *rdesc, +static u8 *ps3remote_fixup(struct hid_device *hdev, u8 *rdesc, unsigned int *rsize) { *rsize = sizeof(ps3remote_rdesc); @@ -1111,7 +678,134 @@ static int ps3remote_mapping(struct hid_device *hdev, struct hid_input *hi, return 1; } -static __u8 *sony_report_fixup(struct hid_device *hdev, __u8 *rdesc, +static int navigation_mapping(struct hid_device *hdev, struct hid_input *hi, + struct hid_field *field, struct hid_usage *usage, + unsigned long **bit, int *max) +{ + if ((usage->hid & HID_USAGE_PAGE) == HID_UP_BUTTON) { + unsigned int key = usage->hid & HID_USAGE; + + if (key >= ARRAY_SIZE(sixaxis_keymap)) + return -1; + + key = navigation_keymap[key]; + if (!key) + return -1; + + hid_map_usage_clear(hi, usage, bit, max, EV_KEY, key); + return 1; + } else if (usage->hid == HID_GD_POINTER) { + /* See comment in sixaxis_mapping, basically the L2 (and R2) + * triggers are reported through GD Pointer. + * In addition we ignore any analog button 'axes' and only + * support digital buttons. + */ + switch (usage->usage_index) { + case 8: /* L2 */ + usage->hid = HID_GD_Z; + break; + default: + return -1; + } + + hid_map_usage_clear(hi, usage, bit, max, EV_ABS, usage->hid & 0xf); + return 1; + } else if ((usage->hid & HID_USAGE_PAGE) == HID_UP_GENDESK) { + unsigned int abs = usage->hid & HID_USAGE; + + if (abs >= ARRAY_SIZE(navigation_absmap)) + return -1; + + abs = navigation_absmap[abs]; + + hid_map_usage_clear(hi, usage, bit, max, EV_ABS, abs); + return 1; + } + + return -1; +} + + +static int sixaxis_mapping(struct hid_device *hdev, struct hid_input *hi, + struct hid_field *field, struct hid_usage *usage, + unsigned long **bit, int *max) +{ + if ((usage->hid & HID_USAGE_PAGE) == HID_UP_BUTTON) { + unsigned int key = usage->hid & HID_USAGE; + + if (key >= ARRAY_SIZE(sixaxis_keymap)) + return -1; + + key = sixaxis_keymap[key]; + hid_map_usage_clear(hi, usage, bit, max, EV_KEY, key); + return 1; + } else if (usage->hid == HID_GD_POINTER) { + /* The DS3 provides analog values for most buttons and even + * for HAT axes through GD Pointer. L2 and R2 are reported + * among these as well instead of as GD Z / RZ. Remap L2 + * and R2 and ignore other analog 'button axes' as there is + * no good way for reporting them. 
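/*
 * Illustrative sketch, not part of the patch: what the interval stored by the
 * bt_poll_interval store handler above is later used for. According to the
 * output-report code further down in this diff, the value (e.g. written with
 * "echo 4 > .../bt_poll_interval"; the full sysfs path depends on the HID
 * device instance) lands in the lower 6 bits of byte 1 of the Bluetooth
 * output report, where 0x00/0x01 mean 1 ms, 0x02 means 2 ms, ... 0x3E means
 * 62 ms and 0x3F disables reporting, while the top bits 0xC0 request
 * HID + CRC framing.
 */
static u8 ds4_bt_output_report_byte1(u8 poll_interval_ms)
{
	/* poll_interval_ms was already capped at DS4_BT_MAX_POLL_INTERVAL_MS (62) */
	return 0xC0 | (poll_interval_ms & 0x3F);
}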
+ */ + switch (usage->usage_index) { + case 8: /* L2 */ + usage->hid = HID_GD_Z; + break; + case 9: /* R2 */ + usage->hid = HID_GD_RZ; + break; + default: + return -1; + } + + hid_map_usage_clear(hi, usage, bit, max, EV_ABS, usage->hid & 0xf); + return 1; + } else if ((usage->hid & HID_USAGE_PAGE) == HID_UP_GENDESK) { + unsigned int abs = usage->hid & HID_USAGE; + + if (abs >= ARRAY_SIZE(sixaxis_absmap)) + return -1; + + abs = sixaxis_absmap[abs]; + + hid_map_usage_clear(hi, usage, bit, max, EV_ABS, abs); + return 1; + } + + return -1; +} + +static int ds4_mapping(struct hid_device *hdev, struct hid_input *hi, + struct hid_field *field, struct hid_usage *usage, + unsigned long **bit, int *max) +{ + if ((usage->hid & HID_USAGE_PAGE) == HID_UP_BUTTON) { + unsigned int key = usage->hid & HID_USAGE; + + if (key >= ARRAY_SIZE(ds4_keymap)) + return -1; + + key = ds4_keymap[key]; + hid_map_usage_clear(hi, usage, bit, max, EV_KEY, key); + return 1; + } else if ((usage->hid & HID_USAGE_PAGE) == HID_UP_GENDESK) { + unsigned int abs = usage->hid & HID_USAGE; + + /* Let the HID parser deal with the HAT. */ + if (usage->hid == HID_GD_HATSWITCH) + return 0; + + if (abs >= ARRAY_SIZE(ds4_absmap)) + return -1; + + abs = ds4_absmap[abs]; + hid_map_usage_clear(hi, usage, bit, max, EV_ABS, abs); + return 1; + } + + return 0; +} + +static u8 *sony_report_fixup(struct hid_device *hdev, u8 *rdesc, unsigned int *rsize) { struct sony_sc *sc = hid_get_drvdata(hdev); @@ -1132,42 +826,21 @@ static __u8 *sony_report_fixup(struct hid_device *hdev, __u8 *rdesc, rdesc[55] = 0x06; } - /* - * The default Dualshock 4 USB descriptor doesn't assign - * the gyroscope values to corresponding axes so we need a - * modified one. - */ - if ((sc->quirks & DUALSHOCK4_CONTROLLER_USB) && *rsize == 467) { - hid_info(hdev, "Using modified Dualshock 4 report descriptor with gyroscope axes\n"); - rdesc = dualshock4_usb_rdesc; - *rsize = sizeof(dualshock4_usb_rdesc); - } else if ((sc->quirks & DUALSHOCK4_CONTROLLER_BT) && *rsize == 357) { - hid_info(hdev, "Using modified Dualshock 4 Bluetooth report descriptor\n"); - rdesc = dualshock4_bt_rdesc; - *rsize = sizeof(dualshock4_bt_rdesc); - } - - if (sc->quirks & SIXAXIS_CONTROLLER) - return sixaxis_fixup(hdev, rdesc, rsize); - if (sc->quirks & MOTION_CONTROLLER) return motion_fixup(hdev, rdesc, rsize); - if (sc->quirks & NAVIGATION_CONTROLLER) - return navigation_fixup(hdev, rdesc, rsize); - if (sc->quirks & PS3REMOTE) return ps3remote_fixup(hdev, rdesc, rsize); return rdesc; } -static void sixaxis_parse_report(struct sony_sc *sc, __u8 *rd, int size) +static void sixaxis_parse_report(struct sony_sc *sc, u8 *rd, int size) { - static const __u8 sixaxis_battery_capacity[] = { 0, 1, 25, 50, 75, 100 }; + static const u8 sixaxis_battery_capacity[] = { 0, 1, 25, 50, 75, 100 }; unsigned long flags; int offset; - __u8 cable_state, battery_capacity, battery_charging; + u8 cable_state, battery_capacity, battery_charging; /* * The sixaxis is charging if the battery value is 0xee @@ -1182,7 +855,7 @@ static void sixaxis_parse_report(struct sony_sc *sc, __u8 *rd, int size) battery_charging = !(rd[offset] & 0x01); cable_state = 1; } else { - __u8 index = rd[offset] <= 5 ? rd[offset] : 5; + u8 index = rd[offset] <= 5 ? 
rd[offset] : 5; battery_capacity = sixaxis_battery_capacity[index]; battery_charging = 0; cable_state = 0; @@ -1193,27 +866,136 @@ static void sixaxis_parse_report(struct sony_sc *sc, __u8 *rd, int size) sc->battery_capacity = battery_capacity; sc->battery_charging = battery_charging; spin_unlock_irqrestore(&sc->lock, flags); + + if (sc->quirks & SIXAXIS_CONTROLLER) { + int val; + + offset = SIXAXIS_INPUT_REPORT_ACC_X_OFFSET; + val = ((rd[offset+1] << 8) | rd[offset]) - 511; + input_report_abs(sc->sensor_dev, ABS_X, val); + + /* Y and Z are swapped and inversed */ + val = 511 - ((rd[offset+5] << 8) | rd[offset+4]); + input_report_abs(sc->sensor_dev, ABS_Y, val); + + val = 511 - ((rd[offset+3] << 8) | rd[offset+2]); + input_report_abs(sc->sensor_dev, ABS_Z, val); + + input_sync(sc->sensor_dev); + } } -static void dualshock4_parse_report(struct sony_sc *sc, __u8 *rd, int size) +static void dualshock4_parse_report(struct sony_sc *sc, u8 *rd, int size) { struct hid_input *hidinput = list_entry(sc->hdev->inputs.next, struct hid_input, list); struct input_dev *input_dev = hidinput->input; unsigned long flags; - int n, offset; - __u8 cable_state, battery_capacity, battery_charging; + int n, m, offset, num_touch_data, max_touch_data; + u8 cable_state, battery_capacity, battery_charging; + u16 timestamp; + + /* When using Bluetooth the header is 2 bytes longer, so skip these. */ + int data_offset = (sc->quirks & DUALSHOCK4_CONTROLLER_BT) ? 2 : 0; + + /* Second bit of third button byte is for the touchpad button. */ + offset = data_offset + DS4_INPUT_REPORT_BUTTON_OFFSET; + input_report_key(sc->touchpad, BTN_LEFT, rd[offset+2] & 0x2); /* - * Battery and touchpad data starts at byte 30 in the USB report and - * 32 in Bluetooth report. + * The default behavior of the Dualshock 4 is to send reports using + * report type 1 when running over Bluetooth. However, when feature + * report 2 is requested during the controller initialization it starts + * sending input reports in report 17. Since report 17 is undefined + * in the default HID descriptor, the HID layer won't generate events. + * While it is possible (and this was done before) to fixup the HID + * descriptor to add this mapping, it was better to do this manually. + * The reason is there were various pieces software both open and closed + * source, relying on the descriptors to be the same across various + * operating systems. If the descriptors wouldn't match some + * applications e.g. games on Wine would not be able to function due + * to different descriptors, which such applications are not parsing. */ - offset = (sc->quirks & DUALSHOCK4_CONTROLLER_USB) ? 
30 : 32; + if (rd[0] == 17) { + int value; + + offset = data_offset + DS4_INPUT_REPORT_AXIS_OFFSET; + input_report_abs(input_dev, ABS_X, rd[offset]); + input_report_abs(input_dev, ABS_Y, rd[offset+1]); + input_report_abs(input_dev, ABS_RX, rd[offset+2]); + input_report_abs(input_dev, ABS_RY, rd[offset+3]); + + value = rd[offset+4] & 0xf; + if (value > 7) + value = 8; /* Center 0, 0 */ + input_report_abs(input_dev, ABS_HAT0X, ds4_hat_mapping[value].x); + input_report_abs(input_dev, ABS_HAT0Y, ds4_hat_mapping[value].y); + + input_report_key(input_dev, BTN_WEST, rd[offset+4] & 0x10); + input_report_key(input_dev, BTN_SOUTH, rd[offset+4] & 0x20); + input_report_key(input_dev, BTN_EAST, rd[offset+4] & 0x40); + input_report_key(input_dev, BTN_NORTH, rd[offset+4] & 0x80); + + input_report_key(input_dev, BTN_TL, rd[offset+5] & 0x1); + input_report_key(input_dev, BTN_TR, rd[offset+5] & 0x2); + input_report_key(input_dev, BTN_TL2, rd[offset+5] & 0x4); + input_report_key(input_dev, BTN_TR2, rd[offset+5] & 0x8); + input_report_key(input_dev, BTN_SELECT, rd[offset+5] & 0x10); + input_report_key(input_dev, BTN_START, rd[offset+5] & 0x20); + input_report_key(input_dev, BTN_THUMBL, rd[offset+5] & 0x40); + input_report_key(input_dev, BTN_THUMBR, rd[offset+5] & 0x80); + + input_report_key(input_dev, BTN_MODE, rd[offset+6] & 0x1); + + input_report_abs(input_dev, ABS_Z, rd[offset+7]); + input_report_abs(input_dev, ABS_RZ, rd[offset+8]); + + input_sync(input_dev); + } + + /* Convert timestamp (in 5.33us unit) to timestamp_us */ + offset = data_offset + DS4_INPUT_REPORT_TIMESTAMP_OFFSET; + timestamp = get_unaligned_le16(&rd[offset]); + if (!sc->timestamp_initialized) { + sc->timestamp_us = ((unsigned int)timestamp * 16) / 3; + sc->timestamp_initialized = true; + } else { + u16 delta; + + if (sc->prev_timestamp > timestamp) + delta = (U16_MAX - sc->prev_timestamp + timestamp + 1); + else + delta = timestamp - sc->prev_timestamp; + sc->timestamp_us += (delta * 16) / 3; + } + sc->prev_timestamp = timestamp; + input_event(sc->sensor_dev, EV_MSC, MSC_TIMESTAMP, sc->timestamp_us); + + offset = data_offset + DS4_INPUT_REPORT_GYRO_X_OFFSET; + for (n = 0; n < 6; n++) { + /* Store data in int for more precision during mult_frac. */ + int raw_data = (short)((rd[offset+1] << 8) | rd[offset]); + struct ds4_calibration_data *calib = &sc->ds4_calib_data[n]; + + /* High precision is needed during calibration, but the + * calibrated values are within 32-bit. + * Note: we swap numerator 'x' and 'numer' in mult_frac for + * precision reasons so we don't need 64-bit. + */ + int calib_data = mult_frac(calib->sens_numer, + raw_data - calib->bias, + calib->sens_denom); + + input_report_abs(sc->sensor_dev, calib->abs_code, calib_data); + offset += 2; + } + input_sync(sc->sensor_dev); /* - * The lower 4 bits of byte 30 contain the battery level + * The lower 4 bits of byte 30 (or 32 for BT) contain the battery level * and the 5th bit contains the USB cable state. */ + offset = data_offset + DS4_INPUT_REPORT_BATTERY_OFFSET; cable_state = (rd[offset] >> 4) & 0x01; battery_capacity = rd[offset] & 0x0F; @@ -1240,35 +1022,57 @@ static void dualshock4_parse_report(struct sony_sc *sc, __u8 *rd, int size) sc->battery_charging = battery_charging; spin_unlock_irqrestore(&sc->lock, flags); - offset += 5; - /* - * The Dualshock 4 multi-touch trackpad data starts at offset 35 on USB - * and 37 on Bluetooth. - * The first 7 bits of the first byte is a counter and bit 8 is a touch - * indicator that is 0 when pressed and 1 when not pressed. 
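/*
 * Illustrative sketch, not part of the patch: the two conversions applied in
 * dualshock4_parse_report() above, written out as stand-alone helpers. The
 * DS4 timestamp is a 16-bit counter in units of 5.33 us (16/3 us); adding the
 * wrapped 16-bit delta keeps MSC_TIMESTAMP monotonic across rollovers. The
 * calibrated sensor value is (raw - bias) * sens_numer / sens_denom, which
 * the driver computes with mult_frac() to stay within 32 bits; the sketch
 * below uses a 64-bit intermediate and div_s64() from <linux/math64.h>
 * instead.
 */
static unsigned int ds4_advance_timestamp(unsigned int timestamp_us,
					  u16 prev, u16 cur)
{
	u16 delta = cur - prev;	/* 16-bit subtraction wraps naturally */

	return timestamp_us + (delta * 16) / 3;
}

static int ds4_calibrate(int raw, const struct ds4_calibration_data *calib)
{
	/* the final calibrated value is documented to fit in 32 bits */
	return div_s64((s64)calib->sens_numer * (raw - calib->bias),
		       calib->sens_denom);
}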
- * The next 3 bytes are two 12 bit touch coordinates, X and Y. - * The data for the second touch is in the same format and immediatly - * follows the data for the first. + * The Dualshock 4 multi-touch trackpad data starts at offset 33 on USB + * and 35 on Bluetooth. + * The first byte indicates the number of touch data in the report. + * Trackpad data starts 2 bytes later (e.g. 35 for USB). */ - for (n = 0; n < 2; n++) { - __u16 x, y; + offset = data_offset + DS4_INPUT_REPORT_TOUCHPAD_OFFSET; + max_touch_data = (sc->quirks & DUALSHOCK4_CONTROLLER_BT) ? 4 : 3; + if (rd[offset] > 0 && rd[offset] <= max_touch_data) + num_touch_data = rd[offset]; + else + num_touch_data = 1; + offset += 1; + + for (m = 0; m < num_touch_data; m++) { + /* Skip past timestamp */ + offset += 1; + + /* + * The first 7 bits of the first byte is a counter and bit 8 is + * a touch indicator that is 0 when pressed and 1 when not + * pressed. + * The next 3 bytes are two 12 bit touch coordinates, X and Y. + * The data for the second touch is in the same format and + * immediately follows the data for the first. + */ + for (n = 0; n < 2; n++) { + u16 x, y; + bool active; - x = rd[offset+1] | ((rd[offset+2] & 0xF) << 8); - y = ((rd[offset+2] & 0xF0) >> 4) | (rd[offset+3] << 4); + x = rd[offset+1] | ((rd[offset+2] & 0xF) << 8); + y = ((rd[offset+2] & 0xF0) >> 4) | (rd[offset+3] << 4); - input_mt_slot(input_dev, n); - input_mt_report_slot_state(input_dev, MT_TOOL_FINGER, - !(rd[offset] >> 7)); - input_report_abs(input_dev, ABS_MT_POSITION_X, x); - input_report_abs(input_dev, ABS_MT_POSITION_Y, y); + active = !(rd[offset] >> 7); + input_mt_slot(sc->touchpad, n); + input_mt_report_slot_state(sc->touchpad, MT_TOOL_FINGER, active); - offset += 4; + if (active) { + input_report_abs(sc->touchpad, ABS_MT_POSITION_X, x); + input_report_abs(sc->touchpad, ABS_MT_POSITION_Y, y); + } + + offset += 4; + } + input_mt_sync_frame(sc->touchpad); + input_sync(sc->touchpad); } } static int sony_raw_event(struct hid_device *hdev, struct hid_report *report, - __u8 *rd, int size) + u8 *rd, int size) { struct sony_sc *sc = hid_get_drvdata(hdev); @@ -1299,12 +1103,89 @@ static int sony_raw_event(struct hid_device *hdev, struct hid_report *report, } else if ((sc->quirks & NAVIGATION_CONTROLLER) && rd[0] == 0x01 && size == 49) { sixaxis_parse_report(sc, rd, size); - } else if (((sc->quirks & DUALSHOCK4_CONTROLLER_USB) && rd[0] == 0x01 && - size == 64) || ((sc->quirks & DUALSHOCK4_CONTROLLER_BT) - && rd[0] == 0x11 && size == 78)) { + } else if ((sc->quirks & DUALSHOCK4_CONTROLLER_USB) && rd[0] == 0x01 && + size == 64) { + dualshock4_parse_report(sc, rd, size); + } else if (((sc->quirks & DUALSHOCK4_CONTROLLER_BT) && rd[0] == 0x11 && + size == 78)) { + /* CRC check */ + u8 bthdr = 0xA1; + u32 crc; + u32 report_crc; + + crc = crc32_le(0xFFFFFFFF, &bthdr, 1); + crc = ~crc32_le(crc, rd, DS4_INPUT_REPORT_0x11_SIZE-4); + report_crc = get_unaligned_le32(&rd[DS4_INPUT_REPORT_0x11_SIZE-4]); + if (crc != report_crc) { + hid_dbg(sc->hdev, "DualShock 4 input report's CRC check failed, received crc 0x%0x != 0x%0x\n", + report_crc, crc); + return -EILSEQ; + } + + dualshock4_parse_report(sc, rd, size); + } else if ((sc->quirks & DUALSHOCK4_DONGLE) && rd[0] == 0x01 && + size == 64) { + unsigned long flags; + enum ds4_dongle_state dongle_state; + + /* + * In the case of a DS4 USB dongle, bit[2] of byte 31 indicates + * if a DS4 is actually connected (indicated by '0'). + * For non-dongle, this bit is always 0 (connected). + */ + bool connected = (rd[31] & 0x04) ? 
false : true; + + spin_lock_irqsave(&sc->lock, flags); + dongle_state = sc->ds4_dongle_state; + spin_unlock_irqrestore(&sc->lock, flags); + + /* + * The dongle always sends input reports even when no + * DS4 is attached. When a DS4 is connected, we need to + * obtain calibration data before we can use it. + * The code below tracks dongle state and kicks of + * calibration when needed and only allows us to process + * input if a DS4 is actually connected. + */ + if (dongle_state == DONGLE_DISCONNECTED && connected) { + hid_info(sc->hdev, "DualShock 4 USB dongle: controller connected\n"); + sony_set_leds(sc); + + spin_lock_irqsave(&sc->lock, flags); + sc->ds4_dongle_state = DONGLE_CALIBRATING; + spin_unlock_irqrestore(&sc->lock, flags); + + sony_schedule_work(sc, SONY_WORKER_HOTPLUG); + + /* Don't process the report since we don't have + * calibration data, but let hidraw have it anyway. + */ + return 0; + } else if ((dongle_state == DONGLE_CONNECTED || + dongle_state == DONGLE_DISABLED) && !connected) { + hid_info(sc->hdev, "DualShock 4 USB dongle: controller disconnected\n"); + + spin_lock_irqsave(&sc->lock, flags); + sc->ds4_dongle_state = DONGLE_DISCONNECTED; + spin_unlock_irqrestore(&sc->lock, flags); + + /* Return 0, so hidraw can get the report. */ + return 0; + } else if (dongle_state == DONGLE_CALIBRATING || + dongle_state == DONGLE_DISABLED || + dongle_state == DONGLE_DISCONNECTED) { + /* Return 0, so hidraw can get the report. */ + return 0; + } + dualshock4_parse_report(sc, rd, size); } + if (sc->defer_initialization) { + sc->defer_initialization = 0; + sony_schedule_work(sc, SONY_WORKER_STATE); + } + return 0; } @@ -1340,49 +1221,189 @@ static int sony_mapping(struct hid_device *hdev, struct hid_input *hi, if (sc->quirks & PS3REMOTE) return ps3remote_mapping(hdev, hi, field, usage, bit, max); + if (sc->quirks & NAVIGATION_CONTROLLER) + return navigation_mapping(hdev, hi, field, usage, bit, max); + + if (sc->quirks & SIXAXIS_CONTROLLER) + return sixaxis_mapping(hdev, hi, field, usage, bit, max); + + if (sc->quirks & DUALSHOCK4_CONTROLLER) + return ds4_mapping(hdev, hi, field, usage, bit, max); + + /* Let hid-core decide for the others */ return 0; } -static int sony_register_touchpad(struct hid_input *hi, int touch_count, +static int sony_register_touchpad(struct sony_sc *sc, int touch_count, int w, int h) { - struct input_dev *input_dev = hi->input; + size_t name_sz; + char *name; int ret; - ret = input_mt_init_slots(input_dev, touch_count, 0); + sc->touchpad = input_allocate_device(); + if (!sc->touchpad) + return -ENOMEM; + + input_set_drvdata(sc->touchpad, sc); + sc->touchpad->dev.parent = &sc->hdev->dev; + sc->touchpad->phys = sc->hdev->phys; + sc->touchpad->uniq = sc->hdev->uniq; + sc->touchpad->id.bustype = sc->hdev->bus; + sc->touchpad->id.vendor = sc->hdev->vendor; + sc->touchpad->id.product = sc->hdev->product; + sc->touchpad->id.version = sc->hdev->version; + + /* Append a suffix to the controller name as there are various + * DS4 compatible non-Sony devices with different names. + */ + name_sz = strlen(sc->hdev->name) + sizeof(DS4_TOUCHPAD_SUFFIX); + name = kzalloc(name_sz, GFP_KERNEL); + if (!name) { + ret = -ENOMEM; + goto err; + } + snprintf(name, name_sz, "%s" DS4_TOUCHPAD_SUFFIX, sc->hdev->name); + sc->touchpad->name = name; + + ret = input_mt_init_slots(sc->touchpad, touch_count, INPUT_MT_POINTER); if (ret < 0) - return ret; + goto err; + + /* We map the button underneath the touchpad to BTN_LEFT. 
*/ + __set_bit(EV_KEY, sc->touchpad->evbit); + __set_bit(BTN_LEFT, sc->touchpad->keybit); + __set_bit(INPUT_PROP_BUTTONPAD, sc->touchpad->propbit); - input_set_abs_params(input_dev, ABS_MT_POSITION_X, 0, w, 0, 0); - input_set_abs_params(input_dev, ABS_MT_POSITION_Y, 0, h, 0, 0); + input_set_abs_params(sc->touchpad, ABS_MT_POSITION_X, 0, w, 0, 0); + input_set_abs_params(sc->touchpad, ABS_MT_POSITION_Y, 0, h, 0, 0); + + ret = input_register_device(sc->touchpad); + if (ret < 0) + goto err; return 0; + +err: + kfree(sc->touchpad->name); + sc->touchpad->name = NULL; + + input_free_device(sc->touchpad); + sc->touchpad = NULL; + + return ret; } -static int sony_input_configured(struct hid_device *hdev, - struct hid_input *hidinput) +static void sony_unregister_touchpad(struct sony_sc *sc) { - struct sony_sc *sc = hid_get_drvdata(hdev); + if (!sc->touchpad) + return; + + kfree(sc->touchpad->name); + sc->touchpad->name = NULL; + + input_unregister_device(sc->touchpad); + sc->touchpad = NULL; +} + +static int sony_register_sensors(struct sony_sc *sc) +{ + size_t name_sz; + char *name; int ret; + int range; - /* - * The Dualshock 4 touchpad supports 2 touches and has a - * resolution of 1920x942 (44.86 dots/mm). + sc->sensor_dev = input_allocate_device(); + if (!sc->sensor_dev) + return -ENOMEM; + + input_set_drvdata(sc->sensor_dev, sc); + sc->sensor_dev->dev.parent = &sc->hdev->dev; + sc->sensor_dev->phys = sc->hdev->phys; + sc->sensor_dev->uniq = sc->hdev->uniq; + sc->sensor_dev->id.bustype = sc->hdev->bus; + sc->sensor_dev->id.vendor = sc->hdev->vendor; + sc->sensor_dev->id.product = sc->hdev->product; + sc->sensor_dev->id.version = sc->hdev->version; + + /* Append a suffix to the controller name as there are various + * DS4 compatible non-Sony devices with different names. */ - if (sc->quirks & DUALSHOCK4_CONTROLLER) { - ret = sony_register_touchpad(hidinput, 2, 1920, 942); - if (ret) { - hid_err(sc->hdev, - "Unable to initialize multi-touch slots: %d\n", - ret); - return ret; - } + name_sz = strlen(sc->hdev->name) + sizeof(SENSOR_SUFFIX); + name = kzalloc(name_sz, GFP_KERNEL); + if (!name) { + ret = -ENOMEM; + goto err; } + snprintf(name, name_sz, "%s" SENSOR_SUFFIX, sc->hdev->name); + sc->sensor_dev->name = name; + + if (sc->quirks & SIXAXIS_CONTROLLER) { + /* For the DS3 we only support the accelerometer, which works + * quite well even without calibration. The device also has + * a 1-axis gyro, but it is very difficult to manage from within + * the driver even to get data, the sensor is inaccurate and + * the behavior is very different between hardware revisions. 
+ */ + input_set_abs_params(sc->sensor_dev, ABS_X, -512, 511, 4, 0); + input_set_abs_params(sc->sensor_dev, ABS_Y, -512, 511, 4, 0); + input_set_abs_params(sc->sensor_dev, ABS_Z, -512, 511, 4, 0); + input_abs_set_res(sc->sensor_dev, ABS_X, SIXAXIS_ACC_RES_PER_G); + input_abs_set_res(sc->sensor_dev, ABS_Y, SIXAXIS_ACC_RES_PER_G); + input_abs_set_res(sc->sensor_dev, ABS_Z, SIXAXIS_ACC_RES_PER_G); + } else if (sc->quirks & DUALSHOCK4_CONTROLLER) { + range = DS4_ACC_RES_PER_G*4; + input_set_abs_params(sc->sensor_dev, ABS_X, -range, range, 16, 0); + input_set_abs_params(sc->sensor_dev, ABS_Y, -range, range, 16, 0); + input_set_abs_params(sc->sensor_dev, ABS_Z, -range, range, 16, 0); + input_abs_set_res(sc->sensor_dev, ABS_X, DS4_ACC_RES_PER_G); + input_abs_set_res(sc->sensor_dev, ABS_Y, DS4_ACC_RES_PER_G); + input_abs_set_res(sc->sensor_dev, ABS_Z, DS4_ACC_RES_PER_G); + + range = DS4_GYRO_RES_PER_DEG_S*2048; + input_set_abs_params(sc->sensor_dev, ABS_RX, -range, range, 16, 0); + input_set_abs_params(sc->sensor_dev, ABS_RY, -range, range, 16, 0); + input_set_abs_params(sc->sensor_dev, ABS_RZ, -range, range, 16, 0); + input_abs_set_res(sc->sensor_dev, ABS_RX, DS4_GYRO_RES_PER_DEG_S); + input_abs_set_res(sc->sensor_dev, ABS_RY, DS4_GYRO_RES_PER_DEG_S); + input_abs_set_res(sc->sensor_dev, ABS_RZ, DS4_GYRO_RES_PER_DEG_S); + + __set_bit(EV_MSC, sc->sensor_dev->evbit); + __set_bit(MSC_TIMESTAMP, sc->sensor_dev->mscbit); + } + + __set_bit(INPUT_PROP_ACCELEROMETER, sc->sensor_dev->propbit); + + ret = input_register_device(sc->sensor_dev); + if (ret < 0) + goto err; return 0; + +err: + kfree(sc->sensor_dev->name); + sc->sensor_dev->name = NULL; + + input_free_device(sc->sensor_dev); + sc->sensor_dev = NULL; + + return ret; } +static void sony_unregister_sensors(struct sony_sc *sc) +{ + if (!sc->sensor_dev) + return; + + kfree(sc->sensor_dev->name); + sc->sensor_dev->name = NULL; + + input_unregister_device(sc->sensor_dev); + sc->sensor_dev = NULL; +} + + /* * Sending HID_REQ_GET_REPORT changes the operation mode of the ps3 controller * to "operational". Without this, the ps3 controller will not report any @@ -1392,7 +1413,7 @@ static int sixaxis_set_operational_usb(struct hid_device *hdev) { const int buf_size = max(SIXAXIS_REPORT_0xF2_SIZE, SIXAXIS_REPORT_0xF5_SIZE); - __u8 *buf; + u8 *buf; int ret; buf = kmalloc(buf_size, GFP_KERNEL); @@ -1431,8 +1452,8 @@ out: static int sixaxis_set_operational_bt(struct hid_device *hdev) { - static const __u8 report[] = { 0xf4, 0x42, 0x03, 0x00, 0x00 }; - __u8 *buf; + static const u8 report[] = { 0xf4, 0x42, 0x03, 0x00, 0x00 }; + u8 *buf; int ret; buf = kmemdup(report, sizeof(report), GFP_KERNEL); @@ -1448,29 +1469,179 @@ static int sixaxis_set_operational_bt(struct hid_device *hdev) } /* - * Requesting feature report 0x02 in Bluetooth mode changes the state of the - * controller so that it sends full input reports of type 0x11. + * Request DS4 calibration data for the motion sensors. + * For Bluetooth this also affects the operating mode (see below). 
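/*
 * Illustrative sketch, not part of the patch: because each sensor axis above
 * advertises its resolution via input_abs_set_res() (8192 counts per g and
 * 1024 counts per deg/s on the DS4, 113 counts per g on the DS3
 * accelerometer), an evdev consumer can convert the reported counts back to
 * physical units by dividing by absinfo.resolution. A user-space example:
 */
#include <linux/input.h>
#include <sys/ioctl.h>

static double axis_value_in_units(int evdev_fd, unsigned int abs_code, int value)
{
	struct input_absinfo info;

	if (ioctl(evdev_fd, EVIOCGABS(abs_code), &info) < 0 || !info.resolution)
		return 0.0;	/* resolution not advertised */

	return (double)value / info.resolution;
}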
*/ -static int dualshock4_set_operational_bt(struct hid_device *hdev) +static int dualshock4_get_calibration_data(struct sony_sc *sc) { - __u8 *buf; + u8 *buf; int ret; + short gyro_pitch_bias, gyro_pitch_plus, gyro_pitch_minus; + short gyro_yaw_bias, gyro_yaw_plus, gyro_yaw_minus; + short gyro_roll_bias, gyro_roll_plus, gyro_roll_minus; + short gyro_speed_plus, gyro_speed_minus; + short acc_x_plus, acc_x_minus; + short acc_y_plus, acc_y_minus; + short acc_z_plus, acc_z_minus; + int speed_2x; + int range_2g; + + /* For Bluetooth we use a different request, which supports CRC. + * Note: in Bluetooth mode feature report 0x02 also changes the state + * of the controller, so that it sends input reports of type 0x11. + */ + if (sc->quirks & (DUALSHOCK4_CONTROLLER_USB | DUALSHOCK4_DONGLE)) { + buf = kmalloc(DS4_FEATURE_REPORT_0x02_SIZE, GFP_KERNEL); + if (!buf) + return -ENOMEM; - buf = kmalloc(DS4_REPORT_0x02_SIZE, GFP_KERNEL); - if (!buf) - return -ENOMEM; + ret = hid_hw_raw_request(sc->hdev, 0x02, buf, + DS4_FEATURE_REPORT_0x02_SIZE, + HID_FEATURE_REPORT, + HID_REQ_GET_REPORT); + if (ret < 0) + goto err_stop; + } else { + u8 bthdr = 0xA3; + u32 crc; + u32 report_crc; + int retries; - ret = hid_hw_raw_request(hdev, 0x02, buf, DS4_REPORT_0x02_SIZE, - HID_FEATURE_REPORT, HID_REQ_GET_REPORT); + buf = kmalloc(DS4_FEATURE_REPORT_0x05_SIZE, GFP_KERNEL); + if (!buf) + return -ENOMEM; - kfree(buf); + for (retries = 0; retries < 3; retries++) { + ret = hid_hw_raw_request(sc->hdev, 0x05, buf, + DS4_FEATURE_REPORT_0x05_SIZE, + HID_FEATURE_REPORT, + HID_REQ_GET_REPORT); + if (ret < 0) + goto err_stop; + + /* CRC check */ + crc = crc32_le(0xFFFFFFFF, &bthdr, 1); + crc = ~crc32_le(crc, buf, DS4_FEATURE_REPORT_0x05_SIZE-4); + report_crc = get_unaligned_le32(&buf[DS4_FEATURE_REPORT_0x05_SIZE-4]); + if (crc != report_crc) { + hid_warn(sc->hdev, "DualShock 4 calibration report's CRC check failed, received crc 0x%0x != 0x%0x\n", + report_crc, crc); + if (retries < 2) { + hid_warn(sc->hdev, "Retrying DualShock 4 get calibration report request\n"); + continue; + } else { + ret = -EILSEQ; + goto err_stop; + } + } else { + break; + } + } + } + + gyro_pitch_bias = get_unaligned_le16(&buf[1]); + gyro_yaw_bias = get_unaligned_le16(&buf[3]); + gyro_roll_bias = get_unaligned_le16(&buf[5]); + if (sc->quirks & DUALSHOCK4_CONTROLLER_USB) { + gyro_pitch_plus = get_unaligned_le16(&buf[7]); + gyro_pitch_minus = get_unaligned_le16(&buf[9]); + gyro_yaw_plus = get_unaligned_le16(&buf[11]); + gyro_yaw_minus = get_unaligned_le16(&buf[13]); + gyro_roll_plus = get_unaligned_le16(&buf[15]); + gyro_roll_minus = get_unaligned_le16(&buf[17]); + } else { + /* BT + Dongle */ + gyro_pitch_plus = get_unaligned_le16(&buf[7]); + gyro_yaw_plus = get_unaligned_le16(&buf[9]); + gyro_roll_plus = get_unaligned_le16(&buf[11]); + gyro_pitch_minus = get_unaligned_le16(&buf[13]); + gyro_yaw_minus = get_unaligned_le16(&buf[15]); + gyro_roll_minus = get_unaligned_le16(&buf[17]); + } + gyro_speed_plus = get_unaligned_le16(&buf[19]); + gyro_speed_minus = get_unaligned_le16(&buf[21]); + acc_x_plus = get_unaligned_le16(&buf[23]); + acc_x_minus = get_unaligned_le16(&buf[25]); + acc_y_plus = get_unaligned_le16(&buf[27]); + acc_y_minus = get_unaligned_le16(&buf[29]); + acc_z_plus = get_unaligned_le16(&buf[31]); + acc_z_minus = get_unaligned_le16(&buf[33]); + + /* Set gyroscope calibration and normalization parameters. + * Data values will be normalized to 1/DS4_GYRO_RES_PER_DEG_S degree/s. 
+ */ + speed_2x = (gyro_speed_plus + gyro_speed_minus); + sc->ds4_calib_data[0].abs_code = ABS_RX; + sc->ds4_calib_data[0].bias = gyro_pitch_bias; + sc->ds4_calib_data[0].sens_numer = speed_2x*DS4_GYRO_RES_PER_DEG_S; + sc->ds4_calib_data[0].sens_denom = gyro_pitch_plus - gyro_pitch_minus; + + sc->ds4_calib_data[1].abs_code = ABS_RY; + sc->ds4_calib_data[1].bias = gyro_yaw_bias; + sc->ds4_calib_data[1].sens_numer = speed_2x*DS4_GYRO_RES_PER_DEG_S; + sc->ds4_calib_data[1].sens_denom = gyro_yaw_plus - gyro_yaw_minus; + + sc->ds4_calib_data[2].abs_code = ABS_RZ; + sc->ds4_calib_data[2].bias = gyro_roll_bias; + sc->ds4_calib_data[2].sens_numer = speed_2x*DS4_GYRO_RES_PER_DEG_S; + sc->ds4_calib_data[2].sens_denom = gyro_roll_plus - gyro_roll_minus; + + /* Set accelerometer calibration and normalization parameters. + * Data values will be normalized to 1/DS4_ACC_RES_PER_G G. + */ + range_2g = acc_x_plus - acc_x_minus; + sc->ds4_calib_data[3].abs_code = ABS_X; + sc->ds4_calib_data[3].bias = acc_x_plus - range_2g / 2; + sc->ds4_calib_data[3].sens_numer = 2*DS4_ACC_RES_PER_G; + sc->ds4_calib_data[3].sens_denom = range_2g; + + range_2g = acc_y_plus - acc_y_minus; + sc->ds4_calib_data[4].abs_code = ABS_Y; + sc->ds4_calib_data[4].bias = acc_y_plus - range_2g / 2; + sc->ds4_calib_data[4].sens_numer = 2*DS4_ACC_RES_PER_G; + sc->ds4_calib_data[4].sens_denom = range_2g; + + range_2g = acc_z_plus - acc_z_minus; + sc->ds4_calib_data[5].abs_code = ABS_Z; + sc->ds4_calib_data[5].bias = acc_z_plus - range_2g / 2; + sc->ds4_calib_data[5].sens_numer = 2*DS4_ACC_RES_PER_G; + sc->ds4_calib_data[5].sens_denom = range_2g; +err_stop: + kfree(buf); return ret; } +static void dualshock4_calibration_work(struct work_struct *work) +{ + struct sony_sc *sc = container_of(work, struct sony_sc, hotplug_worker); + unsigned long flags; + enum ds4_dongle_state dongle_state; + int ret; + + ret = dualshock4_get_calibration_data(sc); + if (ret < 0) { + /* This call is very unlikely to fail for the dongle. When it + * fails we are probably in a very bad state, so mark the + * dongle as disabled. We will re-enable the dongle if a new + * DS4 hotplug is detect from sony_raw_event as any issues + * are likely resolved then (the dongle is quite stupid). 
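/*
 * Illustrative sketch, not part of the patch: the CRC handling used for the
 * Bluetooth input report (0x11), for the calibration feature report requested
 * above and for the output report further down all follow one pattern. The
 * CRC-32 is seeded over a single Bluetooth HID header byte (0xA1 for input,
 * 0xA2 for output, 0xA3 for feature reports), continued over the report minus
 * its last four bytes, and compared against the little-endian CRC stored at
 * the end of the report.
 */
static bool ds4_bt_crc_valid(u8 bthdr, const u8 *report, int len)
{
	u32 crc = crc32_le(0xFFFFFFFF, &bthdr, 1);

	crc = ~crc32_le(crc, report, len - 4);
	return crc == get_unaligned_le32(&report[len - 4]);
}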
+ */ + hid_err(sc->hdev, "DualShock 4 USB dongle: calibration failed, disabling device\n"); + dongle_state = DONGLE_DISABLED; + } else { + hid_info(sc->hdev, "DualShock 4 USB dongle: calibration completed\n"); + dongle_state = DONGLE_CONNECTED; + } + + spin_lock_irqsave(&sc->lock, flags); + sc->ds4_dongle_state = dongle_state; + spin_unlock_irqrestore(&sc->lock, flags); +} + static void sixaxis_set_leds_from_id(struct sony_sc *sc) { - static const __u8 sixaxis_leds[10][4] = { + static const u8 sixaxis_leds[10][4] = { { 0x01, 0x00, 0x00, 0x00 }, { 0x00, 0x01, 0x00, 0x00 }, { 0x00, 0x00, 0x01, 0x00 }, @@ -1497,11 +1668,11 @@ static void sixaxis_set_leds_from_id(struct sony_sc *sc) static void dualshock4_set_leds_from_id(struct sony_sc *sc) { /* The first 4 color/index entries match what the PS4 assigns */ - static const __u8 color_code[7][3] = { - /* Blue */ { 0x00, 0x00, 0x01 }, - /* Red */ { 0x01, 0x00, 0x00 }, - /* Green */ { 0x00, 0x01, 0x00 }, - /* Pink */ { 0x02, 0x00, 0x01 }, + static const u8 color_code[7][3] = { + /* Blue */ { 0x00, 0x00, 0x40 }, + /* Red */ { 0x40, 0x00, 0x00 }, + /* Green */ { 0x00, 0x40, 0x00 }, + /* Pink */ { 0x20, 0x00, 0x20 }, /* Orange */ { 0x02, 0x01, 0x00 }, /* Teal */ { 0x00, 0x01, 0x01 }, /* White */ { 0x01, 0x01, 0x01 } @@ -1525,7 +1696,7 @@ static void buzz_set_leds(struct sony_sc *sc) &hdev->report_enum[HID_OUTPUT_REPORT].report_list; struct hid_report *report = list_entry(report_list->next, struct hid_report, list); - __s32 *value = report->field[0]->value; + s32 *value = report->field[0]->value; BUILD_BUG_ON(MAX_LEDS < 4); @@ -1542,7 +1713,7 @@ static void buzz_set_leds(struct sony_sc *sc) static void sony_set_leds(struct sony_sc *sc) { if (!(sc->quirks & BUZZ_CONTROLLER)) - schedule_work(&sc->state_worker); + sony_schedule_work(sc, SONY_WORKER_STATE); else buzz_set_leds(sc); } @@ -1619,7 +1790,7 @@ static int sony_led_blink_set(struct led_classdev *led, unsigned long *delay_on, struct hid_device *hdev = container_of(dev, struct hid_device, dev); struct sony_sc *drv_data = hid_get_drvdata(hdev); int n; - __u8 new_on, new_off; + u8 new_on, new_off; if (!drv_data) { hid_err(hdev, "No device data\n"); @@ -1653,7 +1824,7 @@ static int sony_led_blink_set(struct led_classdev *led, unsigned long *delay_on, new_off != drv_data->led_delay_off[n]) { drv_data->led_delay_on[n] = new_on; drv_data->led_delay_off[n] = new_off; - schedule_work(&drv_data->state_worker); + sony_schedule_work(drv_data, SONY_WORKER_STATE); } return 0; @@ -1690,8 +1861,8 @@ static int sony_leds_init(struct sony_sc *sc) const char *name_fmt; static const char * const ds4_name_str[] = { "red", "green", "blue", "global" }; - __u8 max_brightness[MAX_LEDS] = { [0 ... (MAX_LEDS - 1)] = 1 }; - __u8 use_hw_blink[MAX_LEDS] = { 0 }; + u8 max_brightness[MAX_LEDS] = { [0 ... 
(MAX_LEDS - 1)] = 1 }; + u8 use_hw_blink[MAX_LEDS] = { 0 }; BUG_ON(!(sc->quirks & SONY_LED_SUPPORT)); @@ -1719,7 +1890,7 @@ static int sony_leds_init(struct sony_sc *sc) name_len = 0; name_fmt = "%s:%s"; } else if (sc->quirks & NAVIGATION_CONTROLLER) { - static const __u8 navigation_leds[4] = {0x01, 0x00, 0x00, 0x00}; + static const u8 navigation_leds[4] = {0x01, 0x00, 0x00, 0x00}; memcpy(sc->led_state, navigation_leds, sizeof(navigation_leds)); sc->led_count = 1; @@ -1766,6 +1937,7 @@ static int sony_leds_init(struct sony_sc *sc) led->name = name; led->brightness = sc->led_state[n]; led->max_brightness = max_brightness[n]; + led->flags = LED_CORE_SUSPENDRESUME; led->brightness_get = sony_led_get_brightness; led->brightness_set = sony_led_set_brightness; @@ -1791,12 +1963,12 @@ error_leds: return ret; } -static void sixaxis_state_worker(struct work_struct *work) +static void sixaxis_send_output_report(struct sony_sc *sc) { static const union sixaxis_output_report_01 default_report = { .buf = { 0x01, - 0x00, 0xff, 0x00, 0xff, 0x00, + 0x01, 0xff, 0x00, 0xff, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xff, 0x27, 0x10, 0x00, 0x32, 0xff, 0x27, 0x10, 0x00, 0x32, @@ -1805,7 +1977,6 @@ static void sixaxis_state_worker(struct work_struct *work) 0x00, 0x00, 0x00, 0x00, 0x00 } }; - struct sony_sc *sc = container_of(work, struct sony_sc, state_worker); struct sixaxis_output_report *report = (struct sixaxis_output_report *)sc->output_report_dmabuf; int n; @@ -1843,28 +2014,36 @@ static void sixaxis_state_worker(struct work_struct *work) } } - hid_hw_raw_request(sc->hdev, report->report_id, (__u8 *)report, + hid_hw_raw_request(sc->hdev, report->report_id, (u8 *)report, sizeof(struct sixaxis_output_report), HID_OUTPUT_REPORT, HID_REQ_SET_REPORT); } -static void dualshock4_state_worker(struct work_struct *work) +static void dualshock4_send_output_report(struct sony_sc *sc) { - struct sony_sc *sc = container_of(work, struct sony_sc, state_worker); struct hid_device *hdev = sc->hdev; - __u8 *buf = sc->output_report_dmabuf; + u8 *buf = sc->output_report_dmabuf; int offset; - if (sc->quirks & DUALSHOCK4_CONTROLLER_USB) { - memset(buf, 0, DS4_REPORT_0x05_SIZE); + /* + * NOTE: The lower 6 bits of buf[1] field of the Bluetooth report + * control the interval at which Dualshock 4 reports data: + * 0x00 - 1ms + * 0x01 - 1ms + * 0x02 - 2ms + * 0x3E - 62ms + * 0x3F - disabled + */ + if (sc->quirks & (DUALSHOCK4_CONTROLLER_USB | DUALSHOCK4_DONGLE)) { + memset(buf, 0, DS4_OUTPUT_REPORT_0x05_SIZE); buf[0] = 0x05; - buf[1] = 0xFF; + buf[1] = 0x07; /* blink + LEDs + motor */ offset = 4; } else { - memset(buf, 0, DS4_REPORT_0x11_SIZE); + memset(buf, 0, DS4_OUTPUT_REPORT_0x11_SIZE); buf[0] = 0x11; - buf[1] = 0x80; - buf[3] = 0x0F; + buf[1] = 0xC0 /* HID + CRC */ | sc->ds4_bt_poll_interval; + buf[3] = 0x07; /* blink + LEDs + motor */ offset = 6; } @@ -1888,16 +2067,22 @@ static void dualshock4_state_worker(struct work_struct *work) buf[offset++] = sc->led_delay_on[3]; buf[offset++] = sc->led_delay_off[3]; - if (sc->quirks & DUALSHOCK4_CONTROLLER_USB) - hid_hw_output_report(hdev, buf, DS4_REPORT_0x05_SIZE); - else - hid_hw_raw_request(hdev, 0x11, buf, DS4_REPORT_0x11_SIZE, - HID_OUTPUT_REPORT, HID_REQ_SET_REPORT); + if (sc->quirks & (DUALSHOCK4_CONTROLLER_USB | DUALSHOCK4_DONGLE)) + hid_hw_output_report(hdev, buf, DS4_OUTPUT_REPORT_0x05_SIZE); + else { + /* CRC generation */ + u8 bthdr = 0xA2; + u32 crc; + + crc = crc32_le(0xFFFFFFFF, &bthdr, 1); + crc = ~crc32_le(crc, buf, DS4_OUTPUT_REPORT_0x11_SIZE-4); + put_unaligned_le32(crc, 
&buf[74]); + hid_hw_output_report(hdev, buf, DS4_OUTPUT_REPORT_0x11_SIZE); + } } -static void motion_state_worker(struct work_struct *work) +static void motion_send_output_report(struct sony_sc *sc) { - struct sony_sc *sc = container_of(work, struct sony_sc, state_worker); struct hid_device *hdev = sc->hdev; struct motion_output_report_02 *report = (struct motion_output_report_02 *)sc->output_report_dmabuf; @@ -1913,7 +2098,20 @@ static void motion_state_worker(struct work_struct *work) report->rumble = max(sc->right, sc->left); #endif - hid_hw_output_report(hdev, (__u8 *)report, MOTION_REPORT_0x02_SIZE); + hid_hw_output_report(hdev, (u8 *)report, MOTION_REPORT_0x02_SIZE); +} + +static inline void sony_send_output_report(struct sony_sc *sc) +{ + if (sc->send_output_report) + sc->send_output_report(sc); +} + +static void sony_state_worker(struct work_struct *work) +{ + struct sony_sc *sc = container_of(work, struct sony_sc, state_worker); + + sc->send_output_report(sc); } static int sony_allocate_output_report(struct sony_sc *sc) @@ -1924,10 +2122,10 @@ static int sony_allocate_output_report(struct sony_sc *sc) kmalloc(sizeof(union sixaxis_output_report_01), GFP_KERNEL); else if (sc->quirks & DUALSHOCK4_CONTROLLER_BT) - sc->output_report_dmabuf = kmalloc(DS4_REPORT_0x11_SIZE, + sc->output_report_dmabuf = kmalloc(DS4_OUTPUT_REPORT_0x11_SIZE, GFP_KERNEL); - else if (sc->quirks & DUALSHOCK4_CONTROLLER_USB) - sc->output_report_dmabuf = kmalloc(DS4_REPORT_0x05_SIZE, + else if (sc->quirks & (DUALSHOCK4_CONTROLLER_USB | DUALSHOCK4_DONGLE)) + sc->output_report_dmabuf = kmalloc(DS4_OUTPUT_REPORT_0x05_SIZE, GFP_KERNEL); else if (sc->quirks & MOTION_CONTROLLER) sc->output_report_dmabuf = kmalloc(MOTION_REPORT_0x02_SIZE, @@ -1954,15 +2152,21 @@ static int sony_play_effect(struct input_dev *dev, void *data, sc->left = effect->u.rumble.strong_magnitude / 256; sc->right = effect->u.rumble.weak_magnitude / 256; - schedule_work(&sc->state_worker); + sony_schedule_work(sc, SONY_WORKER_STATE); return 0; } static int sony_init_ff(struct sony_sc *sc) { - struct hid_input *hidinput = list_entry(sc->hdev->inputs.next, - struct hid_input, list); - struct input_dev *input_dev = hidinput->input; + struct hid_input *hidinput; + struct input_dev *input_dev; + + if (list_empty(&sc->hdev->inputs)) { + hid_err(sc->hdev, "no inputs found\n"); + return -ENODEV; + } + hidinput = list_entry(sc->hdev->inputs.next, struct hid_input, list); + input_dev = hidinput->input; input_set_capability(input_dev, EV_FF, FF_RUMBLE); return input_ff_create_memless(input_dev, NULL, sony_play_effect); @@ -2017,8 +2221,11 @@ static int sony_battery_get_property(struct power_supply *psy, return ret; } -static int sony_battery_probe(struct sony_sc *sc) +static int sony_battery_probe(struct sony_sc *sc, int append_dev_id) { + const char *battery_str_fmt = append_dev_id ? 
+ "sony_controller_battery_%pMR_%i" : + "sony_controller_battery_%pMR"; struct power_supply_config psy_cfg = { .drv_data = sc, }; struct hid_device *hdev = sc->hdev; int ret; @@ -2034,9 +2241,8 @@ static int sony_battery_probe(struct sony_sc *sc) sc->battery_desc.get_property = sony_battery_get_property; sc->battery_desc.type = POWER_SUPPLY_TYPE_BATTERY; sc->battery_desc.use_for_apm = 0; - sc->battery_desc.name = kasprintf(GFP_KERNEL, - "sony_controller_battery_%pMR", - sc->mac_address); + sc->battery_desc.name = kasprintf(GFP_KERNEL, battery_str_fmt, + sc->mac_address, sc->device_id); if (!sc->battery_desc.name) return -ENOMEM; @@ -2072,7 +2278,21 @@ static void sony_battery_remove(struct sony_sc *sc) * it will show up as two devices. A global list of connected controllers and * their MAC addresses is maintained to ensure that a device is only connected * once. + * + * Some USB-only devices masquerade as Sixaxis controllers and all have the + * same dummy Bluetooth address, so a comparison of the connection type is + * required. Devices are only rejected in the case where two devices have + * matching Bluetooth addresses on different bus types. */ +static inline int sony_compare_connection_type(struct sony_sc *sc0, + struct sony_sc *sc1) +{ + const int sc0_not_bt = !(sc0->quirks & SONY_BT_DEVICE); + const int sc1_not_bt = !(sc1->quirks & SONY_BT_DEVICE); + + return sc0_not_bt == sc1_not_bt; +} + static int sony_check_add_dev_list(struct sony_sc *sc) { struct sony_sc *entry; @@ -2085,9 +2305,14 @@ static int sony_check_add_dev_list(struct sony_sc *sc) ret = memcmp(sc->mac_address, entry->mac_address, sizeof(sc->mac_address)); if (!ret) { - ret = -EEXIST; - hid_info(sc->hdev, "controller with MAC address %pMR already connected\n", + if (sony_compare_connection_type(sc, entry)) { + ret = 1; + } else { + ret = -EEXIST; + hid_info(sc->hdev, + "controller with MAC address %pMR already connected\n", sc->mac_address); + } goto unlock; } } @@ -2133,7 +2358,7 @@ static int sony_get_bt_devaddr(struct sony_sc *sc) static int sony_check_add(struct sony_sc *sc) { - __u8 *buf = NULL; + u8 *buf = NULL; int n, ret; if ((sc->quirks & DUALSHOCK4_CONTROLLER_BT) || @@ -2150,8 +2375,8 @@ static int sony_check_add(struct sony_sc *sc) hid_warn(sc->hdev, "UNIQ does not contain a MAC address; duplicate check skipped\n"); return 0; } - } else if (sc->quirks & DUALSHOCK4_CONTROLLER_USB) { - buf = kmalloc(DS4_REPORT_0x81_SIZE, GFP_KERNEL); + } else if (sc->quirks & (DUALSHOCK4_CONTROLLER_USB | DUALSHOCK4_DONGLE)) { + buf = kmalloc(DS4_FEATURE_REPORT_0x81_SIZE, GFP_KERNEL); if (!buf) return -ENOMEM; @@ -2161,16 +2386,22 @@ static int sony_check_add(struct sony_sc *sc) * offset 1. */ ret = hid_hw_raw_request(sc->hdev, 0x81, buf, - DS4_REPORT_0x81_SIZE, HID_FEATURE_REPORT, + DS4_FEATURE_REPORT_0x81_SIZE, HID_FEATURE_REPORT, HID_REQ_GET_REPORT); - if (ret != DS4_REPORT_0x81_SIZE) { + if (ret != DS4_FEATURE_REPORT_0x81_SIZE) { hid_err(sc->hdev, "failed to retrieve feature report 0x81 with the DualShock 4 MAC address\n"); ret = ret < 0 ? 
ret : -EINVAL; goto out_free; } memcpy(sc->mac_address, &buf[1], sizeof(sc->mac_address)); + + snprintf(sc->hdev->uniq, sizeof(sc->hdev->uniq), + "%02hhx:%02hhx:%02hhx:%02hhx:%02hhx:%02hhx", + sc->mac_address[5], sc->mac_address[4], + sc->mac_address[3], sc->mac_address[2], + sc->mac_address[1], sc->mac_address[0]); } else if ((sc->quirks & SIXAXIS_CONTROLLER_USB) || (sc->quirks & NAVIGATION_CONTROLLER_USB)) { buf = kmalloc(SIXAXIS_REPORT_0xF2_SIZE, GFP_KERNEL); @@ -2198,6 +2429,12 @@ static int sony_check_add(struct sony_sc *sc) */ for (n = 0; n < 6; n++) sc->mac_address[5-n] = buf[4+n]; + + snprintf(sc->hdev->uniq, sizeof(sc->hdev->uniq), + "%02hhx:%02hhx:%02hhx:%02hhx:%02hhx:%02hhx", + sc->mac_address[5], sc->mac_address[4], + sc->mac_address[3], sc->mac_address[2], + sc->mac_address[1], sc->mac_address[0]); } else { return 0; } @@ -2243,56 +2480,37 @@ static void sony_release_device_id(struct sony_sc *sc) } } -static inline void sony_init_work(struct sony_sc *sc, - void (*worker)(struct work_struct *)) +static inline void sony_init_output_report(struct sony_sc *sc, + void (*send_output_report)(struct sony_sc *)) { - if (!sc->worker_initialized) - INIT_WORK(&sc->state_worker, worker); + sc->send_output_report = send_output_report; + + if (!sc->state_worker_initialized) + INIT_WORK(&sc->state_worker, sony_state_worker); - sc->worker_initialized = 1; + sc->state_worker_initialized = 1; } static inline void sony_cancel_work_sync(struct sony_sc *sc) { - if (sc->worker_initialized) + unsigned long flags; + + if (sc->hotplug_worker_initialized) + cancel_work_sync(&sc->hotplug_worker); + if (sc->state_worker_initialized) { + spin_lock_irqsave(&sc->lock, flags); + sc->state_worker_initialized = 0; + spin_unlock_irqrestore(&sc->lock, flags); cancel_work_sync(&sc->state_worker); + } } -static int sony_probe(struct hid_device *hdev, const struct hid_device_id *id) +static int sony_input_configured(struct hid_device *hdev, + struct hid_input *hidinput) { + struct sony_sc *sc = hid_get_drvdata(hdev); + int append_dev_id; int ret; - unsigned long quirks = id->driver_data; - struct sony_sc *sc; - unsigned int connect_mask = HID_CONNECT_DEFAULT; - - sc = devm_kzalloc(&hdev->dev, sizeof(*sc), GFP_KERNEL); - if (sc == NULL) { - hid_err(hdev, "can't alloc sony descriptor\n"); - return -ENOMEM; - } - - spin_lock_init(&sc->lock); - - sc->quirks = quirks; - hid_set_drvdata(hdev, sc); - sc->hdev = hdev; - - ret = hid_parse(hdev); - if (ret) { - hid_err(hdev, "parse failed\n"); - return ret; - } - - if (sc->quirks & VAIO_RDESC_CONSTANT) - connect_mask |= HID_CONNECT_HIDDEV_FORCE; - else if (sc->quirks & SIXAXIS_CONTROLLER) - connect_mask |= HID_CONNECT_HIDDEV_FORCE; - - ret = hid_hw_start(hdev, connect_mask); - if (ret) { - hid_err(hdev, "hw start failed\n"); - return ret; - } ret = sony_set_device_id(sc); if (ret < 0) { @@ -2300,14 +2518,17 @@ static int sony_probe(struct hid_device *hdev, const struct hid_device_id *id) goto err_stop; } + ret = append_dev_id = sony_check_add(sc); + if (ret < 0) + goto err_stop; + ret = sony_allocate_output_report(sc); if (ret < 0) { hid_err(hdev, "failed to allocate the output report buffer\n"); goto err_stop; } - if ((sc->quirks & SIXAXIS_CONTROLLER_USB) || - (sc->quirks & NAVIGATION_CONTROLLER_USB)) { + if (sc->quirks & NAVIGATION_CONTROLLER_USB) { /* * The Sony Sixaxis does not handle HID Output Reports on the * Interrupt EP like it could, so we need to force HID Output @@ -2317,48 +2538,132 @@ static int sony_probe(struct hid_device *hdev, const struct hid_device_id *id) 
* the Sixaxis does not want the report_id as part of the data * packet, so we have to discard buf[0] when sending the actual * control message, even for numbered reports, humpf! + * + * Additionally, the Sixaxis on USB isn't properly initialized + * until the PS logo button is pressed and as such won't retain + * any state set by an output report, so the initial + * configuration report is deferred until the first input + * report arrives. */ hdev->quirks |= HID_QUIRK_NO_OUTPUT_REPORTS_ON_INTR_EP; hdev->quirks |= HID_QUIRK_SKIP_OUTPUT_REPORT_ID; + sc->defer_initialization = 1; + ret = sixaxis_set_operational_usb(hdev); - sony_init_work(sc, sixaxis_state_worker); - } else if ((sc->quirks & SIXAXIS_CONTROLLER_BT) || - (sc->quirks & NAVIGATION_CONTROLLER_BT)) { + if (ret < 0) { + hid_err(hdev, "Failed to set controller into operational mode\n"); + goto err_stop; + } + + sony_init_output_report(sc, sixaxis_send_output_report); + } else if (sc->quirks & NAVIGATION_CONTROLLER_BT) { + /* + * The Navigation controller wants output reports sent on the ctrl + * endpoint when connected via Bluetooth. + */ + hdev->quirks |= HID_QUIRK_NO_OUTPUT_REPORTS_ON_INTR_EP; + + ret = sixaxis_set_operational_bt(hdev); + if (ret < 0) { + hid_err(hdev, "Failed to set controller into operational mode\n"); + goto err_stop; + } + + sony_init_output_report(sc, sixaxis_send_output_report); + } else if (sc->quirks & SIXAXIS_CONTROLLER_USB) { + /* + * The Sony Sixaxis does not handle HID Output Reports on the + * Interrupt EP and the device only becomes active when the + * PS button is pressed. See comment for Navigation controller + * above for more details. + */ + hdev->quirks |= HID_QUIRK_NO_OUTPUT_REPORTS_ON_INTR_EP; + hdev->quirks |= HID_QUIRK_SKIP_OUTPUT_REPORT_ID; + sc->defer_initialization = 1; + + ret = sixaxis_set_operational_usb(hdev); + if (ret < 0) { + hid_err(hdev, "Failed to set controller into operational mode\n"); + goto err_stop; + } + + ret = sony_register_sensors(sc); + if (ret) { + hid_err(sc->hdev, + "Unable to initialize motion sensors: %d\n", ret); + goto err_stop; + } + + sony_init_output_report(sc, sixaxis_send_output_report); + } else if (sc->quirks & SIXAXIS_CONTROLLER_BT) { /* * The Sixaxis wants output reports sent on the ctrl endpoint * when connected via Bluetooth. */ hdev->quirks |= HID_QUIRK_NO_OUTPUT_REPORTS_ON_INTR_EP; + ret = sixaxis_set_operational_bt(hdev); - sony_init_work(sc, sixaxis_state_worker); + if (ret < 0) { + hid_err(hdev, "Failed to set controller into operational mode\n"); + goto err_stop; + } + + ret = sony_register_sensors(sc); + if (ret) { + hid_err(sc->hdev, + "Unable to initialize motion sensors: %d\n", ret); + goto err_stop; + } + + sony_init_output_report(sc, sixaxis_send_output_report); } else if (sc->quirks & DUALSHOCK4_CONTROLLER) { + ret = dualshock4_get_calibration_data(sc); + if (ret < 0) { + hid_err(hdev, "Failed to get calibration data from Dualshock 4\n"); + goto err_stop; + } + + /* + * The Dualshock 4 touchpad supports 2 touches and has a + * resolution of 1920x942 (44.86 dots/mm). + */ + ret = sony_register_touchpad(sc, 2, 1920, 942); + if (ret) { + hid_err(sc->hdev, + "Unable to initialize multi-touch slots: %d\n", + ret); + goto err_stop; + } + + ret = sony_register_sensors(sc); + if (ret) { + hid_err(sc->hdev, + "Unable to initialize motion sensors: %d\n", ret); + goto err_stop; + } + if (sc->quirks & DUALSHOCK4_CONTROLLER_BT) { - /* - * The DualShock 4 wants output reports sent on the ctrl - * endpoint when connected via Bluetooth. 
- */ - hdev->quirks |= HID_QUIRK_NO_OUTPUT_REPORTS_ON_INTR_EP; - ret = dualshock4_set_operational_bt(hdev); - if (ret < 0) { - hid_err(hdev, "failed to set the Dualshock 4 operational mode\n"); - goto err_stop; - } + sc->ds4_bt_poll_interval = DS4_BT_DEFAULT_POLL_INTERVAL_MS; + ret = device_create_file(&sc->hdev->dev, &dev_attr_bt_poll_interval); + if (ret) + hid_warn(sc->hdev, + "can't create sysfs bt_poll_interval attribute err: %d\n", + ret); } - sony_init_work(sc, dualshock4_state_worker); + if (sc->quirks & DUALSHOCK4_DONGLE) { + INIT_WORK(&sc->hotplug_worker, dualshock4_calibration_work); + sc->hotplug_worker_initialized = 1; + sc->ds4_dongle_state = DONGLE_DISCONNECTED; + } + + sony_init_output_report(sc, dualshock4_send_output_report); } else if (sc->quirks & MOTION_CONTROLLER) { - sony_init_work(sc, motion_state_worker); + sony_init_output_report(sc, motion_send_output_report); } else { ret = 0; } - if (ret < 0) - goto err_stop; - - ret = sony_check_add(sc); - if (ret < 0) - goto err_stop; - if (sc->quirks & SONY_LED_SUPPORT) { ret = sony_leds_init(sc); if (ret < 0) @@ -2366,7 +2671,7 @@ static int sony_probe(struct hid_device *hdev, const struct hid_device_id *id) } if (sc->quirks & SONY_BATTERY_SUPPORT) { - ret = sony_battery_probe(sc); + ret = sony_battery_probe(sc, append_dev_id); if (ret < 0) goto err_stop; @@ -2388,15 +2693,86 @@ static int sony_probe(struct hid_device *hdev, const struct hid_device_id *id) err_close: hid_hw_close(hdev); err_stop: + /* Piggy back on the default ds4_bt_ poll_interval to determine + * if we need to remove the file as we don't know for sure if we + * executed that logic. + */ + if (sc->ds4_bt_poll_interval) + device_remove_file(&sc->hdev->dev, &dev_attr_bt_poll_interval); if (sc->quirks & SONY_LED_SUPPORT) sony_leds_remove(sc); if (sc->quirks & SONY_BATTERY_SUPPORT) sony_battery_remove(sc); + if (sc->touchpad) + sony_unregister_touchpad(sc); + if (sc->sensor_dev) + sony_unregister_sensors(sc); sony_cancel_work_sync(sc); kfree(sc->output_report_dmabuf); sony_remove_dev_list(sc); sony_release_device_id(sc); - hid_hw_stop(hdev); + return ret; +} + +static int sony_probe(struct hid_device *hdev, const struct hid_device_id *id) +{ + int ret; + unsigned long quirks = id->driver_data; + struct sony_sc *sc; + unsigned int connect_mask = HID_CONNECT_DEFAULT; + + sc = devm_kzalloc(&hdev->dev, sizeof(*sc), GFP_KERNEL); + if (sc == NULL) { + hid_err(hdev, "can't alloc sony descriptor\n"); + return -ENOMEM; + } + + spin_lock_init(&sc->lock); + + sc->quirks = quirks; + hid_set_drvdata(hdev, sc); + sc->hdev = hdev; + + ret = hid_parse(hdev); + if (ret) { + hid_err(hdev, "parse failed\n"); + return ret; + } + + if (sc->quirks & VAIO_RDESC_CONSTANT) + connect_mask |= HID_CONNECT_HIDDEV_FORCE; + else if (sc->quirks & SIXAXIS_CONTROLLER) + connect_mask |= HID_CONNECT_HIDDEV_FORCE; + + /* Patch the hw version on DS3/4 compatible devices, so applications can + * distinguish between the default HID mappings and the mappings defined + * by the Linux game controller spec. This is important for the SDL2 + * library, which has a game controller database, which uses device ids + * in combination with version as a key. + */ + if (sc->quirks & (SIXAXIS_CONTROLLER | DUALSHOCK4_CONTROLLER)) + hdev->version |= 0x8000; + + ret = hid_hw_start(hdev, connect_mask); + if (ret) { + hid_err(hdev, "hw start failed\n"); + return ret; + } + + /* sony_input_configured can fail, but this doesn't result + * in hid_hw_start failures (intended). 
Check whether + * the HID layer claimed the device else fail. + * We don't know the actual reason for the failure, most + * likely it is due to EEXIST in case of double connection + * of USB and Bluetooth, but could have been due to ENOMEM + * or other reasons as well. + */ + if (!(hdev->claimed & HID_CLAIMED_INPUT)) { + hid_err(hdev, "failed to claim input\n"); + hid_hw_stop(hdev); + return -ENODEV; + } + return ret; } @@ -2404,13 +2780,22 @@ static void sony_remove(struct hid_device *hdev) { struct sony_sc *sc = hid_get_drvdata(hdev); + hid_hw_close(hdev); + if (sc->quirks & SONY_LED_SUPPORT) sony_leds_remove(sc); - if (sc->quirks & SONY_BATTERY_SUPPORT) { - hid_hw_close(hdev); + if (sc->quirks & SONY_BATTERY_SUPPORT) sony_battery_remove(sc); - } + + if (sc->touchpad) + sony_unregister_touchpad(sc); + + if (sc->sensor_dev) + sony_unregister_sensors(sc); + + if (sc->quirks & DUALSHOCK4_CONTROLLER_BT) + device_remove_file(&sc->hdev->dev, &dev_attr_bt_poll_interval); sony_cancel_work_sync(sc); @@ -2423,6 +2808,43 @@ static void sony_remove(struct hid_device *hdev) hid_hw_stop(hdev); } +#ifdef CONFIG_PM + +static int sony_suspend(struct hid_device *hdev, pm_message_t message) +{ +#ifdef CONFIG_SONY_FF + + /* On suspend stop any running force-feedback events */ + if (SONY_FF_SUPPORT) { + struct sony_sc *sc = hid_get_drvdata(hdev); + + sc->left = sc->right = 0; + sony_send_output_report(sc); + } + +#endif + return 0; +} + +static int sony_resume(struct hid_device *hdev) +{ + struct sony_sc *sc = hid_get_drvdata(hdev); + + /* + * The Sixaxis and navigation controllers on USB need to be + * reinitialized on resume or they won't behave properly. + */ + if ((sc->quirks & SIXAXIS_CONTROLLER_USB) || + (sc->quirks & NAVIGATION_CONTROLLER_USB)) { + sixaxis_set_operational_usb(sc->hdev); + sc->defer_initialization = 1; + } + + return 0; +} + +#endif + static const struct hid_device_id sony_devices[] = { { HID_USB_DEVICE(USB_VENDOR_ID_SONY, USB_DEVICE_ID_SONY_PS3_CONTROLLER), .driver_data = SIXAXIS_CONTROLLER_USB }, @@ -2440,8 +2862,10 @@ static const struct hid_device_id sony_devices[] = { .driver_data = VAIO_RDESC_CONSTANT }, { HID_USB_DEVICE(USB_VENDOR_ID_SONY, USB_DEVICE_ID_SONY_VAIO_VGP_MOUSE), .driver_data = VAIO_RDESC_CONSTANT }, - /* Wired Buzz Controller. Reported as Sony Hub from its USB ID and as - * Logitech joystick from the device descriptor. */ + /* + * Wired Buzz Controller. Reported as Sony Hub from its USB ID and as + * Logitech joystick from the device descriptor. 
+ */ { HID_USB_DEVICE(USB_VENDOR_ID_SONY, USB_DEVICE_ID_SONY_BUZZ_CONTROLLER), .driver_data = BUZZ_CONTROLLER }, { HID_USB_DEVICE(USB_VENDOR_ID_SONY, USB_DEVICE_ID_SONY_WIRELESS_BUZZ_CONTROLLER), @@ -2465,7 +2889,7 @@ static const struct hid_device_id sony_devices[] = { { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_SONY, USB_DEVICE_ID_SONY_PS4_CONTROLLER_2), .driver_data = DUALSHOCK4_CONTROLLER_BT }, { HID_USB_DEVICE(USB_VENDOR_ID_SONY, USB_DEVICE_ID_SONY_PS4_CONTROLLER_DONGLE), - .driver_data = DUALSHOCK4_CONTROLLER_USB }, + .driver_data = DUALSHOCK4_DONGLE }, { } }; MODULE_DEVICE_TABLE(hid, sony_devices); @@ -2478,7 +2902,13 @@ static struct hid_driver sony_driver = { .probe = sony_probe, .remove = sony_remove, .report_fixup = sony_report_fixup, - .raw_event = sony_raw_event + .raw_event = sony_raw_event, + +#ifdef CONFIG_PM + .suspend = sony_suspend, + .resume = sony_resume, + .reset_resume = sony_resume, +#endif }; static int __init sony_init(void) diff --git a/drivers/hid/hid-steam.c b/drivers/hid/hid-steam.c new file mode 100644 index 000000000000..44e1eefc5b24 --- /dev/null +++ b/drivers/hid/hid-steam.c @@ -0,0 +1,1141 @@ +// SPDX-License-Identifier: GPL-2.0+ +/* + * HID driver for Valve Steam Controller + * + * Copyright (c) 2018 Rodrigo Rivas Costa <rodrigorivascosta@gmail.com> + * + * Supports both the wired and wireless interfaces. + * + * This controller has a builtin emulation of mouse and keyboard: the right pad + * can be used as a mouse, the shoulder buttons are mouse buttons, A and B + * buttons are ENTER and ESCAPE, and so on. This is implemented as additional + * HID interfaces. + * + * This is known as the "lizard mode", because apparently lizards like to use + * the computer from the coach, without a proper mouse and keyboard. + * + * This driver will disable the lizard mode when the input device is opened + * and re-enable it when the input device is closed, so as not to break user + * mode behaviour. The lizard_mode parameter can be used to change that. + * + * There are a few user space applications (notably Steam Client) that use + * the hidraw interface directly to create input devices (XTest, uinput...). + * In order to avoid breaking them this driver creates a layered hidraw device, + * so it can detect when the client is running and then: + * - it will not send any command to the controller. + * - this input device will be removed, to avoid double input of the same + * user action. + * When the client is closed, this input device will be created again. 
+ * + * For additional functions, such as changing the right-pad margin or switching + * the led, you can use the user-space tool at: + * + * https://github.com/rodrigorc/steamctrl + */ + +#include <linux/device.h> +#include <linux/input.h> +#include <linux/hid.h> +#include <linux/module.h> +#include <linux/workqueue.h> +#include <linux/mutex.h> +#include <linux/rcupdate.h> +#include <linux/delay.h> +#include <linux/power_supply.h> +#include "hid-ids.h" + +MODULE_LICENSE("GPL"); +MODULE_AUTHOR("Rodrigo Rivas Costa <rodrigorivascosta@gmail.com>"); + +static bool lizard_mode = true; + +static DEFINE_MUTEX(steam_devices_lock); +static LIST_HEAD(steam_devices); + +#define STEAM_QUIRK_WIRELESS BIT(0) + +/* Touch pads are 40 mm in diameter and 65535 units */ +#define STEAM_PAD_RESOLUTION 1638 +/* Trigger runs are about 5 mm and 256 units */ +#define STEAM_TRIGGER_RESOLUTION 51 +/* Joystick runs are about 5 mm and 256 units */ +#define STEAM_JOYSTICK_RESOLUTION 51 + +#define STEAM_PAD_FUZZ 256 + +/* + * Commands that can be sent in a feature report. + * Thanks to Valve for some valuable hints. + */ +#define STEAM_CMD_SET_MAPPINGS 0x80 +#define STEAM_CMD_CLEAR_MAPPINGS 0x81 +#define STEAM_CMD_GET_MAPPINGS 0x82 +#define STEAM_CMD_GET_ATTRIB 0x83 +#define STEAM_CMD_GET_ATTRIB_LABEL 0x84 +#define STEAM_CMD_DEFAULT_MAPPINGS 0x85 +#define STEAM_CMD_FACTORY_RESET 0x86 +#define STEAM_CMD_WRITE_REGISTER 0x87 +#define STEAM_CMD_CLEAR_REGISTER 0x88 +#define STEAM_CMD_READ_REGISTER 0x89 +#define STEAM_CMD_GET_REGISTER_LABEL 0x8a +#define STEAM_CMD_GET_REGISTER_MAX 0x8b +#define STEAM_CMD_GET_REGISTER_DEFAULT 0x8c +#define STEAM_CMD_SET_MODE 0x8d +#define STEAM_CMD_DEFAULT_MOUSE 0x8e +#define STEAM_CMD_FORCEFEEDBAK 0x8f +#define STEAM_CMD_REQUEST_COMM_STATUS 0xb4 +#define STEAM_CMD_GET_SERIAL 0xae + +/* Some useful register ids */ +#define STEAM_REG_LPAD_MODE 0x07 +#define STEAM_REG_RPAD_MODE 0x08 +#define STEAM_REG_RPAD_MARGIN 0x18 +#define STEAM_REG_LED 0x2d +#define STEAM_REG_GYRO_MODE 0x30 + +/* Raw event identifiers */ +#define STEAM_EV_INPUT_DATA 0x01 +#define STEAM_EV_CONNECT 0x03 +#define STEAM_EV_BATTERY 0x04 + +/* Values for GYRO_MODE (bitmask) */ +#define STEAM_GYRO_MODE_OFF 0x0000 +#define STEAM_GYRO_MODE_STEERING 0x0001 +#define STEAM_GYRO_MODE_TILT 0x0002 +#define STEAM_GYRO_MODE_SEND_ORIENTATION 0x0004 +#define STEAM_GYRO_MODE_SEND_RAW_ACCEL 0x0008 +#define STEAM_GYRO_MODE_SEND_RAW_GYRO 0x0010 + +/* Other random constants */ +#define STEAM_SERIAL_LEN 10 + +struct steam_device { + struct list_head list; + spinlock_t lock; + struct hid_device *hdev, *client_hdev; + struct mutex mutex; + bool client_opened; + struct input_dev __rcu *input; + unsigned long quirks; + struct work_struct work_connect; + bool connected; + char serial_no[STEAM_SERIAL_LEN + 1]; + struct power_supply_desc battery_desc; + struct power_supply __rcu *battery; + u8 battery_charge; + u16 voltage; +}; + +static int steam_recv_report(struct steam_device *steam, + u8 *data, int size) +{ + struct hid_report *r; + u8 *buf; + int ret; + + r = steam->hdev->report_enum[HID_FEATURE_REPORT].report_id_hash[0]; + if (hid_report_len(r) < 64) + return -EINVAL; + + buf = hid_alloc_report_buf(r, GFP_KERNEL); + if (!buf) + return -ENOMEM; + + /* + * The report ID is always 0, so strip the first byte from the output. + * hid_report_len() is not counting the report ID, so +1 to the length + * or else we get a EOVERFLOW. We are safe from a buffer overflow + * because hid_alloc_report_buf() allocates +7 bytes. 
+ */ + ret = hid_hw_raw_request(steam->hdev, 0x00, + buf, hid_report_len(r) + 1, + HID_FEATURE_REPORT, HID_REQ_GET_REPORT); + if (ret > 0) + memcpy(data, buf + 1, min(size, ret - 1)); + kfree(buf); + return ret; +} + +static int steam_send_report(struct steam_device *steam, + u8 *cmd, int size) +{ + struct hid_report *r; + u8 *buf; + unsigned int retries = 50; + int ret; + + r = steam->hdev->report_enum[HID_FEATURE_REPORT].report_id_hash[0]; + if (hid_report_len(r) < 64) + return -EINVAL; + + buf = hid_alloc_report_buf(r, GFP_KERNEL); + if (!buf) + return -ENOMEM; + + /* The report ID is always 0 */ + memcpy(buf + 1, cmd, size); + + /* + * Sometimes the wireless controller fails with EPIPE + * when sending a feature report. + * Doing a HID_REQ_GET_REPORT and waiting for a while + * seems to fix that. + */ + do { + ret = hid_hw_raw_request(steam->hdev, 0, + buf, size + 1, + HID_FEATURE_REPORT, HID_REQ_SET_REPORT); + if (ret != -EPIPE) + break; + msleep(20); + } while (--retries); + + kfree(buf); + if (ret < 0) + hid_err(steam->hdev, "%s: error %d (%*ph)\n", __func__, + ret, size, cmd); + return ret; +} + +static inline int steam_send_report_byte(struct steam_device *steam, u8 cmd) +{ + return steam_send_report(steam, &cmd, 1); +} + +static int steam_write_registers(struct steam_device *steam, + /* u8 reg, u16 val */...) +{ + /* Send: 0x87 len (reg valLo valHi)* */ + u8 reg; + u16 val; + u8 cmd[64] = {STEAM_CMD_WRITE_REGISTER, 0x00}; + va_list args; + + va_start(args, steam); + for (;;) { + reg = va_arg(args, int); + if (reg == 0) + break; + val = va_arg(args, int); + cmd[cmd[1] + 2] = reg; + cmd[cmd[1] + 3] = val & 0xff; + cmd[cmd[1] + 4] = val >> 8; + cmd[1] += 3; + } + va_end(args); + + return steam_send_report(steam, cmd, 2 + cmd[1]); +} + +static int steam_get_serial(struct steam_device *steam) +{ + /* + * Send: 0xae 0x15 0x01 + * Recv: 0xae 0x15 0x01 serialnumber (10 chars) + */ + int ret; + u8 cmd[] = {STEAM_CMD_GET_SERIAL, 0x15, 0x01}; + u8 reply[3 + STEAM_SERIAL_LEN + 1]; + + ret = steam_send_report(steam, cmd, sizeof(cmd)); + if (ret < 0) + return ret; + ret = steam_recv_report(steam, reply, sizeof(reply)); + if (ret < 0) + return ret; + if (reply[0] != 0xae || reply[1] != 0x15 || reply[2] != 0x01) + return -EIO; + reply[3 + STEAM_SERIAL_LEN] = 0; + strlcpy(steam->serial_no, reply + 3, sizeof(steam->serial_no)); + return 0; +} + +/* + * This command requests the wireless adaptor to post an event + * with the connection status. Useful if this driver is loaded when + * the controller is already connected. 
+ */ +static inline int steam_request_conn_status(struct steam_device *steam) +{ + return steam_send_report_byte(steam, STEAM_CMD_REQUEST_COMM_STATUS); +} + +static void steam_set_lizard_mode(struct steam_device *steam, bool enable) +{ + if (enable) { + /* enable esc, enter, cursors */ + steam_send_report_byte(steam, STEAM_CMD_DEFAULT_MAPPINGS); + /* enable mouse */ + steam_send_report_byte(steam, STEAM_CMD_DEFAULT_MOUSE); + steam_write_registers(steam, + STEAM_REG_RPAD_MARGIN, 0x01, /* enable margin */ + 0); + } else { + /* disable esc, enter, cursor */ + steam_send_report_byte(steam, STEAM_CMD_CLEAR_MAPPINGS); + steam_write_registers(steam, + STEAM_REG_RPAD_MODE, 0x07, /* disable mouse */ + STEAM_REG_RPAD_MARGIN, 0x00, /* disable margin */ + 0); + } +} + +static int steam_input_open(struct input_dev *dev) +{ + struct steam_device *steam = input_get_drvdata(dev); + + mutex_lock(&steam->mutex); + if (!steam->client_opened && lizard_mode) + steam_set_lizard_mode(steam, false); + mutex_unlock(&steam->mutex); + return 0; +} + +static void steam_input_close(struct input_dev *dev) +{ + struct steam_device *steam = input_get_drvdata(dev); + + mutex_lock(&steam->mutex); + if (!steam->client_opened && lizard_mode) + steam_set_lizard_mode(steam, true); + mutex_unlock(&steam->mutex); +} + +static enum power_supply_property steam_battery_props[] = { + POWER_SUPPLY_PROP_PRESENT, + POWER_SUPPLY_PROP_SCOPE, + POWER_SUPPLY_PROP_VOLTAGE_NOW, + POWER_SUPPLY_PROP_CAPACITY, +}; + +static int steam_battery_get_property(struct power_supply *psy, + enum power_supply_property psp, + union power_supply_propval *val) +{ + struct steam_device *steam = power_supply_get_drvdata(psy); + unsigned long flags; + s16 volts; + u8 batt; + int ret = 0; + + spin_lock_irqsave(&steam->lock, flags); + volts = steam->voltage; + batt = steam->battery_charge; + spin_unlock_irqrestore(&steam->lock, flags); + + switch (psp) { + case POWER_SUPPLY_PROP_PRESENT: + val->intval = 1; + break; + case POWER_SUPPLY_PROP_SCOPE: + val->intval = POWER_SUPPLY_SCOPE_DEVICE; + break; + case POWER_SUPPLY_PROP_VOLTAGE_NOW: + val->intval = volts * 1000; /* mV -> uV */ + break; + case POWER_SUPPLY_PROP_CAPACITY: + val->intval = batt; + break; + default: + ret = -EINVAL; + break; + } + return ret; +} + +static int steam_battery_register(struct steam_device *steam) +{ + struct power_supply *battery; + struct power_supply_config battery_cfg = { .drv_data = steam, }; + unsigned long flags; + int ret; + + steam->battery_desc.type = POWER_SUPPLY_TYPE_BATTERY; + steam->battery_desc.properties = steam_battery_props; + steam->battery_desc.num_properties = ARRAY_SIZE(steam_battery_props); + steam->battery_desc.get_property = steam_battery_get_property; + steam->battery_desc.name = devm_kasprintf(&steam->hdev->dev, + GFP_KERNEL, "steam-controller-%s-battery", + steam->serial_no); + if (!steam->battery_desc.name) + return -ENOMEM; + + /* avoid the warning of 0% battery while waiting for the first info */ + spin_lock_irqsave(&steam->lock, flags); + steam->voltage = 3000; + steam->battery_charge = 100; + spin_unlock_irqrestore(&steam->lock, flags); + + battery = power_supply_register(&steam->hdev->dev, + &steam->battery_desc, &battery_cfg); + if (IS_ERR(battery)) { + ret = PTR_ERR(battery); + hid_err(steam->hdev, + "%s:power_supply_register failed with error %d\n", + __func__, ret); + return ret; + } + rcu_assign_pointer(steam->battery, battery); + power_supply_powers(battery, &steam->hdev->dev); + return 0; +} + +static int steam_input_register(struct steam_device 
*steam) +{ + struct hid_device *hdev = steam->hdev; + struct input_dev *input; + int ret; + + rcu_read_lock(); + input = rcu_dereference(steam->input); + rcu_read_unlock(); + if (input) { + dbg_hid("%s: already connected\n", __func__); + return 0; + } + + input = input_allocate_device(); + if (!input) + return -ENOMEM; + + input_set_drvdata(input, steam); + input->dev.parent = &hdev->dev; + input->open = steam_input_open; + input->close = steam_input_close; + + input->name = (steam->quirks & STEAM_QUIRK_WIRELESS) ? + "Wireless Steam Controller" : + "Steam Controller"; + input->phys = hdev->phys; + input->uniq = steam->serial_no; + input->id.bustype = hdev->bus; + input->id.vendor = hdev->vendor; + input->id.product = hdev->product; + input->id.version = hdev->version; + + input_set_capability(input, EV_KEY, BTN_TR2); + input_set_capability(input, EV_KEY, BTN_TL2); + input_set_capability(input, EV_KEY, BTN_TR); + input_set_capability(input, EV_KEY, BTN_TL); + input_set_capability(input, EV_KEY, BTN_Y); + input_set_capability(input, EV_KEY, BTN_B); + input_set_capability(input, EV_KEY, BTN_X); + input_set_capability(input, EV_KEY, BTN_A); + input_set_capability(input, EV_KEY, BTN_DPAD_UP); + input_set_capability(input, EV_KEY, BTN_DPAD_RIGHT); + input_set_capability(input, EV_KEY, BTN_DPAD_LEFT); + input_set_capability(input, EV_KEY, BTN_DPAD_DOWN); + input_set_capability(input, EV_KEY, BTN_SELECT); + input_set_capability(input, EV_KEY, BTN_MODE); + input_set_capability(input, EV_KEY, BTN_START); + input_set_capability(input, EV_KEY, BTN_GEAR_DOWN); + input_set_capability(input, EV_KEY, BTN_GEAR_UP); + input_set_capability(input, EV_KEY, BTN_THUMBR); + input_set_capability(input, EV_KEY, BTN_THUMBL); + input_set_capability(input, EV_KEY, BTN_THUMB); + input_set_capability(input, EV_KEY, BTN_THUMB2); + + input_set_abs_params(input, ABS_HAT2Y, 0, 255, 0, 0); + input_set_abs_params(input, ABS_HAT2X, 0, 255, 0, 0); + input_set_abs_params(input, ABS_X, -32767, 32767, 0, 0); + input_set_abs_params(input, ABS_Y, -32767, 32767, 0, 0); + input_set_abs_params(input, ABS_RX, -32767, 32767, + STEAM_PAD_FUZZ, 0); + input_set_abs_params(input, ABS_RY, -32767, 32767, + STEAM_PAD_FUZZ, 0); + input_set_abs_params(input, ABS_HAT0X, -32767, 32767, + STEAM_PAD_FUZZ, 0); + input_set_abs_params(input, ABS_HAT0Y, -32767, 32767, + STEAM_PAD_FUZZ, 0); + input_abs_set_res(input, ABS_X, STEAM_JOYSTICK_RESOLUTION); + input_abs_set_res(input, ABS_Y, STEAM_JOYSTICK_RESOLUTION); + input_abs_set_res(input, ABS_RX, STEAM_PAD_RESOLUTION); + input_abs_set_res(input, ABS_RY, STEAM_PAD_RESOLUTION); + input_abs_set_res(input, ABS_HAT0X, STEAM_PAD_RESOLUTION); + input_abs_set_res(input, ABS_HAT0Y, STEAM_PAD_RESOLUTION); + input_abs_set_res(input, ABS_HAT2Y, STEAM_TRIGGER_RESOLUTION); + input_abs_set_res(input, ABS_HAT2X, STEAM_TRIGGER_RESOLUTION); + + ret = input_register_device(input); + if (ret) + goto input_register_fail; + + rcu_assign_pointer(steam->input, input); + return 0; + +input_register_fail: + input_free_device(input); + return ret; +} + +static void steam_input_unregister(struct steam_device *steam) +{ + struct input_dev *input; + rcu_read_lock(); + input = rcu_dereference(steam->input); + rcu_read_unlock(); + if (!input) + return; + RCU_INIT_POINTER(steam->input, NULL); + synchronize_rcu(); + input_unregister_device(input); +} + +static void steam_battery_unregister(struct steam_device *steam) +{ + struct power_supply *battery; + + rcu_read_lock(); + battery = rcu_dereference(steam->battery); + rcu_read_unlock(); + 
+ if (!battery) + return; + RCU_INIT_POINTER(steam->battery, NULL); + synchronize_rcu(); + power_supply_unregister(battery); +} + +static int steam_register(struct steam_device *steam) +{ + int ret; + bool client_opened; + + /* + * This function can be called several times in a row with the + * wireless adaptor, without steam_unregister() between them, because + * another client send a get_connection_status command, for example. + * The battery and serial number are set just once per device. + */ + if (!steam->serial_no[0]) { + /* + * Unlikely, but getting the serial could fail, and it is not so + * important, so make up a serial number and go on. + */ + mutex_lock(&steam->mutex); + if (steam_get_serial(steam) < 0) + strlcpy(steam->serial_no, "XXXXXXXXXX", + sizeof(steam->serial_no)); + mutex_unlock(&steam->mutex); + + hid_info(steam->hdev, "Steam Controller '%s' connected", + steam->serial_no); + + /* ignore battery errors, we can live without it */ + if (steam->quirks & STEAM_QUIRK_WIRELESS) + steam_battery_register(steam); + + mutex_lock(&steam_devices_lock); + list_add(&steam->list, &steam_devices); + mutex_unlock(&steam_devices_lock); + } + + mutex_lock(&steam->mutex); + client_opened = steam->client_opened; + if (!client_opened) + steam_set_lizard_mode(steam, lizard_mode); + mutex_unlock(&steam->mutex); + + if (!client_opened) + ret = steam_input_register(steam); + else + ret = 0; + + return ret; +} + +static void steam_unregister(struct steam_device *steam) +{ + steam_battery_unregister(steam); + steam_input_unregister(steam); + if (steam->serial_no[0]) { + hid_info(steam->hdev, "Steam Controller '%s' disconnected", + steam->serial_no); + mutex_lock(&steam_devices_lock); + list_del(&steam->list); + mutex_unlock(&steam_devices_lock); + steam->serial_no[0] = 0; + } +} + +static void steam_work_connect_cb(struct work_struct *work) +{ + struct steam_device *steam = container_of(work, struct steam_device, + work_connect); + unsigned long flags; + bool connected; + int ret; + + spin_lock_irqsave(&steam->lock, flags); + connected = steam->connected; + spin_unlock_irqrestore(&steam->lock, flags); + + if (connected) { + ret = steam_register(steam); + if (ret) { + hid_err(steam->hdev, + "%s:steam_register failed with error %d\n", + __func__, ret); + } + } else { + steam_unregister(steam); + } +} + +static bool steam_is_valve_interface(struct hid_device *hdev) +{ + struct hid_report_enum *rep_enum; + + /* + * The wired device creates 3 interfaces: + * 0: emulated mouse. + * 1: emulated keyboard. + * 2: the real game pad. + * The wireless device creates 5 interfaces: + * 0: emulated keyboard. + * 1-4: slots where up to 4 real game pads will be connected to. + * We know which one is the real gamepad interface because they are the + * only ones with a feature report. 
+ */ + rep_enum = &hdev->report_enum[HID_FEATURE_REPORT]; + return !list_empty(&rep_enum->report_list); +} + +static int steam_client_ll_parse(struct hid_device *hdev) +{ + struct steam_device *steam = hdev->driver_data; + + return hid_parse_report(hdev, steam->hdev->dev_rdesc, + steam->hdev->dev_rsize); +} + +static int steam_client_ll_start(struct hid_device *hdev) +{ + return 0; +} + +static void steam_client_ll_stop(struct hid_device *hdev) +{ +} + +static int steam_client_ll_open(struct hid_device *hdev) +{ + struct steam_device *steam = hdev->driver_data; + + mutex_lock(&steam->mutex); + steam->client_opened = true; + mutex_unlock(&steam->mutex); + + steam_input_unregister(steam); + + return 0; +} + +static void steam_client_ll_close(struct hid_device *hdev) +{ + struct steam_device *steam = hdev->driver_data; + + unsigned long flags; + bool connected; + + spin_lock_irqsave(&steam->lock, flags); + connected = steam->connected; + spin_unlock_irqrestore(&steam->lock, flags); + + mutex_lock(&steam->mutex); + steam->client_opened = false; + if (connected) + steam_set_lizard_mode(steam, lizard_mode); + mutex_unlock(&steam->mutex); + + if (connected) + steam_input_register(steam); +} + +static int steam_client_ll_raw_request(struct hid_device *hdev, + unsigned char reportnum, u8 *buf, + size_t count, unsigned char report_type, + int reqtype) +{ + struct steam_device *steam = hdev->driver_data; + + return hid_hw_raw_request(steam->hdev, reportnum, buf, count, + report_type, reqtype); +} + +static struct hid_ll_driver steam_client_ll_driver = { + .parse = steam_client_ll_parse, + .start = steam_client_ll_start, + .stop = steam_client_ll_stop, + .open = steam_client_ll_open, + .close = steam_client_ll_close, + .raw_request = steam_client_ll_raw_request, +}; + +static struct hid_device *steam_create_client_hid(struct hid_device *hdev) +{ + struct hid_device *client_hdev; + + client_hdev = hid_allocate_device(); + if (IS_ERR(client_hdev)) + return client_hdev; + + client_hdev->ll_driver = &steam_client_ll_driver; + client_hdev->dev.parent = hdev->dev.parent; + client_hdev->bus = hdev->bus; + client_hdev->vendor = hdev->vendor; + client_hdev->product = hdev->product; + client_hdev->version = hdev->version; + client_hdev->type = hdev->type; + client_hdev->country = hdev->country; + strlcpy(client_hdev->name, hdev->name, + sizeof(client_hdev->name)); + strlcpy(client_hdev->phys, hdev->phys, + sizeof(client_hdev->phys)); + /* + * Since we use the same device info than the real interface to + * trick userspace, we will be calling steam_probe recursively. + * We need to recognize the client interface somehow. + */ + client_hdev->group = HID_GROUP_STEAM; + return client_hdev; +} + +static int steam_probe(struct hid_device *hdev, + const struct hid_device_id *id) +{ + struct steam_device *steam; + int ret; + + ret = hid_parse(hdev); + if (ret) { + hid_err(hdev, + "%s:parse of hid interface failed\n", __func__); + return ret; + } + + /* + * The virtual client_dev is only used for hidraw. + * Also avoid the recursive probe. + */ + if (hdev->group == HID_GROUP_STEAM) + return hid_hw_start(hdev, HID_CONNECT_HIDRAW); + /* + * The non-valve interfaces (mouse and keyboard emulation) are + * connected without changes. 
+ */ + if (!steam_is_valve_interface(hdev)) + return hid_hw_start(hdev, HID_CONNECT_DEFAULT); + + steam = devm_kzalloc(&hdev->dev, sizeof(*steam), GFP_KERNEL); + if (!steam) { + ret = -ENOMEM; + goto steam_alloc_fail; + } + steam->hdev = hdev; + hid_set_drvdata(hdev, steam); + spin_lock_init(&steam->lock); + mutex_init(&steam->mutex); + steam->quirks = id->driver_data; + INIT_WORK(&steam->work_connect, steam_work_connect_cb); + + steam->client_hdev = steam_create_client_hid(hdev); + if (IS_ERR(steam->client_hdev)) { + ret = PTR_ERR(steam->client_hdev); + goto client_hdev_fail; + } + steam->client_hdev->driver_data = steam; + + /* + * With the real steam controller interface, do not connect hidraw. + * Instead, create the client_hid and connect that. + */ + ret = hid_hw_start(hdev, HID_CONNECT_DEFAULT & ~HID_CONNECT_HIDRAW); + if (ret) + goto hid_hw_start_fail; + + ret = hid_add_device(steam->client_hdev); + if (ret) + goto client_hdev_add_fail; + + ret = hid_hw_open(hdev); + if (ret) { + hid_err(hdev, + "%s:hid_hw_open\n", + __func__); + goto hid_hw_open_fail; + } + + if (steam->quirks & STEAM_QUIRK_WIRELESS) { + hid_info(hdev, "Steam wireless receiver connected"); + steam_request_conn_status(steam); + } else { + ret = steam_register(steam); + if (ret) { + hid_err(hdev, + "%s:steam_register failed with error %d\n", + __func__, ret); + goto input_register_fail; + } + } + + return 0; + +input_register_fail: +hid_hw_open_fail: +client_hdev_add_fail: + hid_hw_stop(hdev); +hid_hw_start_fail: + hid_destroy_device(steam->client_hdev); +client_hdev_fail: + cancel_work_sync(&steam->work_connect); +steam_alloc_fail: + hid_err(hdev, "%s: failed with error %d\n", + __func__, ret); + return ret; +} + +static void steam_remove(struct hid_device *hdev) +{ + struct steam_device *steam = hid_get_drvdata(hdev); + + if (!steam || hdev->group == HID_GROUP_STEAM) { + hid_hw_stop(hdev); + return; + } + + hid_destroy_device(steam->client_hdev); + steam->client_opened = false; + cancel_work_sync(&steam->work_connect); + if (steam->quirks & STEAM_QUIRK_WIRELESS) { + hid_info(hdev, "Steam wireless receiver disconnected"); + } + hid_hw_close(hdev); + hid_hw_stop(hdev); + steam_unregister(steam); +} + +static void steam_do_connect_event(struct steam_device *steam, bool connected) +{ + unsigned long flags; + bool changed; + + spin_lock_irqsave(&steam->lock, flags); + changed = steam->connected != connected; + steam->connected = connected; + spin_unlock_irqrestore(&steam->lock, flags); + + if (changed && schedule_work(&steam->work_connect) == 0) + dbg_hid("%s: connected=%d event already queued\n", + __func__, connected); +} + +/* + * Some input data in the protocol has the opposite sign. + * Clamp the values to 32767..-32767 so that the range is + * symmetrical and can be negated safely. + */ +static inline s16 steam_le16(u8 *data) +{ + s16 x = (s16) le16_to_cpup((__le16 *)data); + + return x == -32768 ? -32767 : x; +} + +/* + * The size for this message payload is 60. 
+ * The known values are: + * (* values are not sent through wireless) + * (* accelerator/gyro is disabled by default) + * Offset| Type | Mapped to |Meaning + * -------+-------+-----------+-------------------------- + * 4-7 | u32 | -- | sequence number + * 8-10 | 24bit | see below | buttons + * 11 | u8 | ABS_HAT2Y | left trigger + * 12 | u8 | ABS_HAT2X | right trigger + * 13-15 | -- | -- | always 0 + * 16-17 | s16 | ABS_X/ABS_HAT0X | X value + * 18-19 | s16 | ABS_Y/ABS_HAT0Y | Y value + * 20-21 | s16 | ABS_RX | right-pad X value + * 22-23 | s16 | ABS_RY | right-pad Y value + * 24-25 | s16 | -- | * left trigger + * 26-27 | s16 | -- | * right trigger + * 28-29 | s16 | -- | * accelerometer X value + * 30-31 | s16 | -- | * accelerometer Y value + * 32-33 | s16 | -- | * accelerometer Z value + * 34-35 | s16 | -- | gyro X value + * 36-36 | s16 | -- | gyro Y value + * 38-39 | s16 | -- | gyro Z value + * 40-41 | s16 | -- | quaternion W value + * 42-43 | s16 | -- | quaternion X value + * 44-45 | s16 | -- | quaternion Y value + * 46-47 | s16 | -- | quaternion Z value + * 48-49 | -- | -- | always 0 + * 50-51 | s16 | -- | * left trigger (uncalibrated) + * 52-53 | s16 | -- | * right trigger (uncalibrated) + * 54-55 | s16 | -- | * joystick X value (uncalibrated) + * 56-57 | s16 | -- | * joystick Y value (uncalibrated) + * 58-59 | s16 | -- | * left-pad X value + * 60-61 | s16 | -- | * left-pad Y value + * 62-63 | u16 | -- | * battery voltage + * + * The buttons are: + * Bit | Mapped to | Description + * ------+------------+-------------------------------- + * 8.0 | BTN_TR2 | right trigger fully pressed + * 8.1 | BTN_TL2 | left trigger fully pressed + * 8.2 | BTN_TR | right shoulder + * 8.3 | BTN_TL | left shoulder + * 8.4 | BTN_Y | button Y + * 8.5 | BTN_B | button B + * 8.6 | BTN_X | button X + * 8.7 | BTN_A | button A + * 9.0 | BTN_DPAD_UP | lef-pad up + * 9.1 | BTN_DPAD_RIGHT | lef-pad right + * 9.2 | BTN_DPAD_LEFT | lef-pad left + * 9.3 | BTN_DPAD_DOWN | lef-pad down + * 9.4 | BTN_SELECT | menu left + * 9.5 | BTN_MODE | steam logo + * 9.6 | BTN_START | menu right + * 9.7 | BTN_GEAR_DOWN | left back lever + * 10.0 | BTN_GEAR_UP | right back lever + * 10.1 | -- | left-pad clicked + * 10.2 | BTN_THUMBR | right-pad clicked + * 10.3 | BTN_THUMB | left-pad touched (but see explanation below) + * 10.4 | BTN_THUMB2 | right-pad touched + * 10.5 | -- | unknown + * 10.6 | BTN_THUMBL | joystick clicked + * 10.7 | -- | lpad_and_joy + */ + +static void steam_do_input_event(struct steam_device *steam, + struct input_dev *input, u8 *data) +{ + /* 24 bits of buttons */ + u8 b8, b9, b10; + s16 x, y; + bool lpad_touched, lpad_and_joy; + + b8 = data[8]; + b9 = data[9]; + b10 = data[10]; + + input_report_abs(input, ABS_HAT2Y, data[11]); + input_report_abs(input, ABS_HAT2X, data[12]); + + /* + * These two bits tells how to interpret the values X and Y. + * lpad_and_joy tells that the joystick and the lpad are used at the + * same time. + * lpad_touched tells whether X/Y are to be read as lpad coord or + * joystick values. + * (lpad_touched || lpad_and_joy) tells if the lpad is really touched. + */ + lpad_touched = b10 & BIT(3); + lpad_and_joy = b10 & BIT(7); + x = steam_le16(data + 16); + y = -steam_le16(data + 18); + + input_report_abs(input, lpad_touched ? ABS_HAT0X : ABS_X, x); + input_report_abs(input, lpad_touched ? 
ABS_HAT0Y : ABS_Y, y); + /* Check if joystick is centered */ + if (lpad_touched && !lpad_and_joy) { + input_report_abs(input, ABS_X, 0); + input_report_abs(input, ABS_Y, 0); + } + /* Check if lpad is untouched */ + if (!(lpad_touched || lpad_and_joy)) { + input_report_abs(input, ABS_HAT0X, 0); + input_report_abs(input, ABS_HAT0Y, 0); + } + + input_report_abs(input, ABS_RX, steam_le16(data + 20)); + input_report_abs(input, ABS_RY, -steam_le16(data + 22)); + + input_event(input, EV_KEY, BTN_TR2, !!(b8 & BIT(0))); + input_event(input, EV_KEY, BTN_TL2, !!(b8 & BIT(1))); + input_event(input, EV_KEY, BTN_TR, !!(b8 & BIT(2))); + input_event(input, EV_KEY, BTN_TL, !!(b8 & BIT(3))); + input_event(input, EV_KEY, BTN_Y, !!(b8 & BIT(4))); + input_event(input, EV_KEY, BTN_B, !!(b8 & BIT(5))); + input_event(input, EV_KEY, BTN_X, !!(b8 & BIT(6))); + input_event(input, EV_KEY, BTN_A, !!(b8 & BIT(7))); + input_event(input, EV_KEY, BTN_SELECT, !!(b9 & BIT(4))); + input_event(input, EV_KEY, BTN_MODE, !!(b9 & BIT(5))); + input_event(input, EV_KEY, BTN_START, !!(b9 & BIT(6))); + input_event(input, EV_KEY, BTN_GEAR_DOWN, !!(b9 & BIT(7))); + input_event(input, EV_KEY, BTN_GEAR_UP, !!(b10 & BIT(0))); + input_event(input, EV_KEY, BTN_THUMBR, !!(b10 & BIT(2))); + input_event(input, EV_KEY, BTN_THUMBL, !!(b10 & BIT(6))); + input_event(input, EV_KEY, BTN_THUMB, lpad_touched || lpad_and_joy); + input_event(input, EV_KEY, BTN_THUMB2, !!(b10 & BIT(4))); + input_event(input, EV_KEY, BTN_DPAD_UP, !!(b9 & BIT(0))); + input_event(input, EV_KEY, BTN_DPAD_RIGHT, !!(b9 & BIT(1))); + input_event(input, EV_KEY, BTN_DPAD_LEFT, !!(b9 & BIT(2))); + input_event(input, EV_KEY, BTN_DPAD_DOWN, !!(b9 & BIT(3))); + + input_sync(input); +} + +/* + * The size for this message payload is 11. + * The known values are: + * Offset| Type | Meaning + * -------+-------+--------------------------- + * 4-7 | u32 | sequence number + * 8-11 | -- | always 0 + * 12-13 | u16 | voltage (mV) + * 14 | u8 | battery percent + */ +static void steam_do_battery_event(struct steam_device *steam, + struct power_supply *battery, u8 *data) +{ + unsigned long flags; + + s16 volts = steam_le16(data + 12); + u8 batt = data[14]; + + /* Creating the battery may have failed */ + rcu_read_lock(); + battery = rcu_dereference(steam->battery); + if (likely(battery)) { + spin_lock_irqsave(&steam->lock, flags); + steam->voltage = volts; + steam->battery_charge = batt; + spin_unlock_irqrestore(&steam->lock, flags); + power_supply_changed(battery); + } + rcu_read_unlock(); +} + +static int steam_raw_event(struct hid_device *hdev, + struct hid_report *report, u8 *data, + int size) +{ + struct steam_device *steam = hid_get_drvdata(hdev); + struct input_dev *input; + struct power_supply *battery; + + if (!steam) + return 0; + + if (steam->client_opened) + hid_input_report(steam->client_hdev, HID_FEATURE_REPORT, + data, size, 0); + /* + * All messages are size=64, all values little-endian. + * The format is: + * Offset| Meaning + * -------+-------------------------------------------- + * 0-1 | always 0x01, 0x00, maybe protocol version? 
+ * 2 | type of message + * 3 | length of the real payload (not checked) + * 4-n | payload data, depends on the type + * + * There are these known types of message: + * 0x01: input data (60 bytes) + * 0x03: wireless connect/disconnect (1 byte) + * 0x04: battery status (11 bytes) + */ + + if (size != 64 || data[0] != 1 || data[1] != 0) + return 0; + + switch (data[2]) { + case STEAM_EV_INPUT_DATA: + if (steam->client_opened) + return 0; + rcu_read_lock(); + input = rcu_dereference(steam->input); + if (likely(input)) + steam_do_input_event(steam, input, data); + rcu_read_unlock(); + break; + case STEAM_EV_CONNECT: + /* + * The payload of this event is a single byte: + * 0x01: disconnected. + * 0x02: connected. + */ + switch (data[4]) { + case 0x01: + steam_do_connect_event(steam, false); + break; + case 0x02: + steam_do_connect_event(steam, true); + break; + } + break; + case STEAM_EV_BATTERY: + if (steam->quirks & STEAM_QUIRK_WIRELESS) { + rcu_read_lock(); + battery = rcu_dereference(steam->battery); + if (likely(battery)) { + steam_do_battery_event(steam, battery, data); + } else { + dbg_hid( + "%s: battery data without connect event\n", + __func__); + steam_do_connect_event(steam, true); + } + rcu_read_unlock(); + } + break; + } + return 0; +} + +static int steam_param_set_lizard_mode(const char *val, + const struct kernel_param *kp) +{ + struct steam_device *steam; + int ret; + + ret = param_set_bool(val, kp); + if (ret) + return ret; + + mutex_lock(&steam_devices_lock); + list_for_each_entry(steam, &steam_devices, list) { + mutex_lock(&steam->mutex); + if (!steam->client_opened) + steam_set_lizard_mode(steam, lizard_mode); + mutex_unlock(&steam->mutex); + } + mutex_unlock(&steam_devices_lock); + return 0; +} + +static const struct kernel_param_ops steam_lizard_mode_ops = { + .set = steam_param_set_lizard_mode, + .get = param_get_bool, +}; + +module_param_cb(lizard_mode, &steam_lizard_mode_ops, &lizard_mode, 0644); +MODULE_PARM_DESC(lizard_mode, + "Enable mouse and keyboard emulation (lizard mode) when the gamepad is not in use"); + +static const struct hid_device_id steam_controllers[] = { + { /* Wired Steam Controller */ + HID_USB_DEVICE(USB_VENDOR_ID_VALVE, + USB_DEVICE_ID_STEAM_CONTROLLER) + }, + { /* Wireless Steam Controller */ + HID_USB_DEVICE(USB_VENDOR_ID_VALVE, + USB_DEVICE_ID_STEAM_CONTROLLER_WIRELESS), + .driver_data = STEAM_QUIRK_WIRELESS + }, + {} +}; + +MODULE_DEVICE_TABLE(hid, steam_controllers); + +static struct hid_driver steam_controller_driver = { + .name = "hid-steam", + .id_table = steam_controllers, + .probe = steam_probe, + .remove = steam_remove, + .raw_event = steam_raw_event, +}; + +module_hid_driver(steam_controller_driver);
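The message framing and input-payload layout documented in the comments above can also be exercised from user space through the hidraw node exposed by the layered client device. The following is a minimal, hypothetical reader (the /dev/hidraw0 path, and the assumption that it is the Valve gamepad interface, are not taken from the driver); it decodes only the fields spelled out in the tables: the type byte at offset 2, the three button bytes at offsets 8-10, the triggers at 11-12, and the little-endian 16-bit axes starting at offset 16.

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	unsigned char d[64];
	int fd = open(argc > 1 ? argv[1] : "/dev/hidraw0", O_RDONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Reports are unnumbered, so read() returns the raw 64-byte packet. */
	while (read(fd, d, sizeof(d)) == sizeof(d)) {
		if (d[0] != 0x01 || d[1] != 0x00 || d[2] != 0x01)
			continue; /* not an input-data message (type 0x01) */

		int16_t x  = (int16_t)(d[16] | (d[17] << 8));
		int16_t y  = (int16_t)(d[18] | (d[19] << 8));
		int16_t rx = (int16_t)(d[20] | (d[21] << 8));
		int16_t ry = (int16_t)(d[22] | (d[23] << 8));

		printf("buttons=%02x %02x %02x lt=%3d rt=%3d xy=(%6d,%6d) rpad=(%6d,%6d)\n",
		       d[8], d[9], d[10], d[11], d[12], x, y, rx, ry);
	}

	close(fd);
	return 0;
}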
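In the same spirit, the feature-report framing used by steam_send_report() and steam_write_registers() above (report ID 0 followed by a 64-byte command buffer; command 0x87 carries a length byte and reg/valLo/valHi triples) can be reproduced with the hidraw HIDIOCSFEATURE ioctl. The sketch below mirrors what steam_set_lizard_mode(steam, false) already does in-kernel: clear the mappings with 0x81, then write RPAD_MODE (0x07) = 7 and RPAD_MARGIN (0x18) = 0. It is purely illustrative; the device path is a placeholder, and the driver issues the same sequence itself whenever the input device is opened.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/hidraw.h>

static int send_cmd(int fd, const unsigned char *cmd, int len)
{
	unsigned char buf[65];	/* report ID 0 + 64 bytes of payload */

	memset(buf, 0, sizeof(buf));
	memcpy(buf + 1, cmd, len);
	return ioctl(fd, HIDIOCSFEATURE(sizeof(buf)), buf);
}

int main(int argc, char **argv)
{
	/* STEAM_CMD_CLEAR_MAPPINGS: disable esc/enter/cursor keys */
	static const unsigned char clear_mappings[] = { 0x81 };
	/* 0x87 len=6: RPAD_MODE(0x07)=0x0007, RPAD_MARGIN(0x18)=0x0000 */
	static const unsigned char write_regs[] = {
		0x87, 0x06, 0x07, 0x07, 0x00, 0x18, 0x00, 0x00
	};
	int fd = open(argc > 1 ? argv[1] : "/dev/hidraw0", O_RDWR);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	if (send_cmd(fd, clear_mappings, sizeof(clear_mappings)) < 0 ||
	    send_cmd(fd, write_regs, sizeof(write_regs)) < 0)
		perror("HIDIOCSFEATURE");

	close(fd);
	return 0;
}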
\ No newline at end of file diff --git a/drivers/hid/hid-tmff.c b/drivers/hid/hid-tmff.c index cfa0cb22c9b3..d98e471a5f7b 100644 --- a/drivers/hid/hid-tmff.c +++ b/drivers/hid/hid-tmff.c @@ -136,12 +136,18 @@ static int tmff_init(struct hid_device *hid, const signed short *ff_bits) struct tmff_device *tmff; struct hid_report *report; struct list_head *report_list; - struct hid_input *hidinput = list_entry(hid->inputs.next, - struct hid_input, list); - struct input_dev *input_dev = hidinput->input; + struct hid_input *hidinput; + struct input_dev *input_dev; int error; int i; + if (list_empty(&hid->inputs)) { + hid_err(hid, "no inputs found\n"); + return -ENODEV; + } + hidinput = list_entry(hid->inputs.next, struct hid_input, list); + input_dev = hidinput->input; + tmff = kzalloc(sizeof(struct tmff_device), GFP_KERNEL); if (!tmff) return -ENOMEM; diff --git a/drivers/hid/hid-zpff.c b/drivers/hid/hid-zpff.c index a29756c6ca02..4e7e01be99b1 100644 --- a/drivers/hid/hid-zpff.c +++ b/drivers/hid/hid-zpff.c @@ -66,11 +66,17 @@ static int zpff_init(struct hid_device *hid) { struct zpff_device *zpff; struct hid_report *report; - struct hid_input *hidinput = list_entry(hid->inputs.next, - struct hid_input, list); - struct input_dev *dev = hidinput->input; + struct hid_input *hidinput; + struct input_dev *dev; int i, error; + if (list_empty(&hid->inputs)) { + hid_err(hid, "no inputs found\n"); + return -ENODEV; + } + hidinput = list_entry(hid->inputs.next, struct hid_input, list); + dev = hidinput->input; + for (i = 0; i < 4; i++) { report = hid_validate_values(hid, HID_OUTPUT_REPORT, 0, i, 1); if (!report) diff --git a/drivers/hid/hidraw.c b/drivers/hid/hidraw.c index c0c4df198725..627a24d3ea7c 100644 --- a/drivers/hid/hidraw.c +++ b/drivers/hid/hidraw.c @@ -383,7 +383,7 @@ static long hidraw_ioctl(struct file *file, unsigned int cmd, mutex_lock(&minors_lock); dev = hidraw_table[minor]; - if (!dev) { + if (!dev || !dev->exist) { ret = -ENODEV; goto out; } diff --git a/drivers/hwmon/acpi_power_meter.c b/drivers/hwmon/acpi_power_meter.c index 579bdf93be43..e27f7e12c05b 100644 --- a/drivers/hwmon/acpi_power_meter.c +++ b/drivers/hwmon/acpi_power_meter.c @@ -693,8 +693,8 @@ static int setup_attrs(struct acpi_power_meter_resource *resource) if (resource->caps.flags & POWER_METER_CAN_CAP) { if (!can_cap_in_hardware()) { - dev_err(&resource->acpi_dev->dev, - "Ignoring unsafe software power cap!\n"); + dev_warn(&resource->acpi_dev->dev, + "Ignoring unsafe software power cap!\n"); goto skip_unsafe_cap; } diff --git a/drivers/i2c/busses/i2c-riic.c b/drivers/i2c/busses/i2c-riic.c index 16833365475f..a4eceb994f7e 100644 --- a/drivers/i2c/busses/i2c-riic.c +++ b/drivers/i2c/busses/i2c-riic.c @@ -212,6 +212,7 @@ static irqreturn_t riic_tend_isr(int irq, void *data) if (readb(riic->base + RIIC_ICSR2) & ICSR2_NACKF) { /* We got a NACKIE */ readb(riic->base + RIIC_ICDRR); /* dummy read */ + riic_clear_set_bit(riic, ICSR2_NACKF, 0, RIIC_ICSR2); riic->err = -ENXIO; } else if (riic->bytes_left) { return IRQ_NONE; diff --git a/drivers/iio/accel/bmc150-accel-core.c b/drivers/iio/accel/bmc150-accel-core.c index c7122919a8c0..ec7ddf867349 100644 --- a/drivers/iio/accel/bmc150-accel-core.c +++ b/drivers/iio/accel/bmc150-accel-core.c @@ -126,7 +126,7 @@ #define BMC150_ACCEL_SLEEP_1_SEC 0x0F #define BMC150_ACCEL_REG_TEMP 0x08 -#define BMC150_ACCEL_TEMP_CENTER_VAL 24 +#define BMC150_ACCEL_TEMP_CENTER_VAL 23 #define BMC150_ACCEL_AXIS_TO_REG(axis) (BMC150_ACCEL_REG_XOUT_L + (axis * 2)) #define BMC150_AUTO_SUSPEND_DELAY_MS 2000 
diff --git a/drivers/iio/adc/ad799x.c b/drivers/iio/adc/ad799x.c index ba82de25a797..46681e399e22 100644 --- a/drivers/iio/adc/ad799x.c +++ b/drivers/iio/adc/ad799x.c @@ -822,10 +822,10 @@ static int ad799x_probe(struct i2c_client *client, ret = ad799x_write_config(st, st->chip_config->default_config); if (ret < 0) - goto error_disable_reg; + goto error_disable_vref; ret = ad799x_read_config(st); if (ret < 0) - goto error_disable_reg; + goto error_disable_vref; st->config = ret; ret = iio_triggered_buffer_setup(indio_dev, NULL, diff --git a/drivers/iio/imu/adis16480.c b/drivers/iio/imu/adis16480.c index 1880105cc8c4..778a46247f8d 100644 --- a/drivers/iio/imu/adis16480.c +++ b/drivers/iio/imu/adis16480.c @@ -266,8 +266,11 @@ static int adis16480_set_freq(struct iio_dev *indio_dev, int val, int val2) struct adis16480 *st = iio_priv(indio_dev); unsigned int t; + if (val < 0 || val2 < 0) + return -EINVAL; + t = val * 1000 + val2 / 1000; - if (t <= 0) + if (t == 0) return -EINVAL; t = 2460000 / t; diff --git a/drivers/iio/light/opt3001.c b/drivers/iio/light/opt3001.c index 01e111e72d4b..eecdc50ed282 100644 --- a/drivers/iio/light/opt3001.c +++ b/drivers/iio/light/opt3001.c @@ -646,6 +646,7 @@ static irqreturn_t opt3001_irq(int irq, void *_iio) struct iio_dev *iio = _iio; struct opt3001 *opt = iio_priv(iio); int ret; + bool wake_result_ready_queue = false; if (!opt->ok_to_ignore_lock) mutex_lock(&opt->lock); @@ -680,13 +681,16 @@ static irqreturn_t opt3001_irq(int irq, void *_iio) } opt->result = ret; opt->result_ready = true; - wake_up(&opt->result_ready_queue); + wake_result_ready_queue = true; } out: if (!opt->ok_to_ignore_lock) mutex_unlock(&opt->lock); + if (wake_result_ready_queue) + wake_up(&opt->result_ready_queue); + return IRQ_HANDLED; } diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c index 1454290078de..8ad9c6b04769 100644 --- a/drivers/infiniband/core/cma.c +++ b/drivers/infiniband/core/cma.c @@ -1976,9 +1976,10 @@ static int iw_conn_req_handler(struct iw_cm_id *cm_id, conn_id->cm_id.iw = NULL; cma_exch(conn_id, RDMA_CM_DESTROYING); mutex_unlock(&conn_id->handler_mutex); + mutex_unlock(&listen_id->handler_mutex); cma_deref_id(conn_id); rdma_destroy_id(&conn_id->id); - goto out; + return ret; } mutex_unlock(&conn_id->handler_mutex); diff --git a/drivers/infiniband/hw/cxgb4/mem.c b/drivers/infiniband/hw/cxgb4/mem.c index 8218d714fa01..4b682375f465 100644 --- a/drivers/infiniband/hw/cxgb4/mem.c +++ b/drivers/infiniband/hw/cxgb4/mem.c @@ -254,13 +254,17 @@ static int write_tpt_entry(struct c4iw_rdev *rdev, u32 reset_tpt_entry, u64 len, u8 page_size, u32 pbl_size, u32 pbl_addr) { int err; - struct fw_ri_tpte tpt; + struct fw_ri_tpte *tpt; u32 stag_idx; static atomic_t key; if (c4iw_fatal_error(rdev)) return -EIO; + tpt = kmalloc(sizeof(*tpt), GFP_KERNEL); + if (!tpt) + return -ENOMEM; + stag_state = stag_state > 0; stag_idx = (*stag) >> 8; @@ -270,6 +274,7 @@ static int write_tpt_entry(struct c4iw_rdev *rdev, u32 reset_tpt_entry, mutex_lock(&rdev->stats.lock); rdev->stats.stag.fail++; mutex_unlock(&rdev->stats.lock); + kfree(tpt); return -ENOMEM; } mutex_lock(&rdev->stats.lock); @@ -284,28 +289,28 @@ static int write_tpt_entry(struct c4iw_rdev *rdev, u32 reset_tpt_entry, /* write TPT entry */ if (reset_tpt_entry) - memset(&tpt, 0, sizeof(tpt)); + memset(tpt, 0, sizeof(*tpt)); else { - tpt.valid_to_pdid = cpu_to_be32(FW_RI_TPTE_VALID_F | + tpt->valid_to_pdid = cpu_to_be32(FW_RI_TPTE_VALID_F | FW_RI_TPTE_STAGKEY_V((*stag & FW_RI_TPTE_STAGKEY_M)) | 
FW_RI_TPTE_STAGSTATE_V(stag_state) | FW_RI_TPTE_STAGTYPE_V(type) | FW_RI_TPTE_PDID_V(pdid)); - tpt.locread_to_qpid = cpu_to_be32(FW_RI_TPTE_PERM_V(perm) | + tpt->locread_to_qpid = cpu_to_be32(FW_RI_TPTE_PERM_V(perm) | (bind_enabled ? FW_RI_TPTE_MWBINDEN_F : 0) | FW_RI_TPTE_ADDRTYPE_V((zbva ? FW_RI_ZERO_BASED_TO : FW_RI_VA_BASED_TO))| FW_RI_TPTE_PS_V(page_size)); - tpt.nosnoop_pbladdr = !pbl_size ? 0 : cpu_to_be32( + tpt->nosnoop_pbladdr = !pbl_size ? 0 : cpu_to_be32( FW_RI_TPTE_PBLADDR_V(PBL_OFF(rdev, pbl_addr)>>3)); - tpt.len_lo = cpu_to_be32((u32)(len & 0xffffffffUL)); - tpt.va_hi = cpu_to_be32((u32)(to >> 32)); - tpt.va_lo_fbo = cpu_to_be32((u32)(to & 0xffffffffUL)); - tpt.dca_mwbcnt_pstag = cpu_to_be32(0); - tpt.len_hi = cpu_to_be32((u32)(len >> 32)); + tpt->len_lo = cpu_to_be32((u32)(len & 0xffffffffUL)); + tpt->va_hi = cpu_to_be32((u32)(to >> 32)); + tpt->va_lo_fbo = cpu_to_be32((u32)(to & 0xffffffffUL)); + tpt->dca_mwbcnt_pstag = cpu_to_be32(0); + tpt->len_hi = cpu_to_be32((u32)(len >> 32)); } err = write_adapter_mem(rdev, stag_idx + (rdev->lldi.vr->stag.start >> 5), - sizeof(tpt), &tpt); + sizeof(*tpt), tpt); if (reset_tpt_entry) { c4iw_put_resource(&rdev->resource.tpt_table, stag_idx); @@ -313,6 +318,7 @@ static int write_tpt_entry(struct c4iw_rdev *rdev, u32 reset_tpt_entry, rdev->stats.stag.cur -= 32; mutex_unlock(&rdev->stats.lock); } + kfree(tpt); return err; } diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c index 114d5883d497..cf11d43ce241 100644 --- a/drivers/irqchip/irq-gic-v3-its.c +++ b/drivers/irqchip/irq-gic-v3-its.c @@ -1372,14 +1372,13 @@ static void its_irq_domain_free(struct irq_domain *domain, unsigned int virq, struct its_device *its_dev = irq_data_get_irq_chip_data(d); int i; + bitmap_release_region(its_dev->event_map.lpi_map, + its_get_event_id(irq_domain_get_irq_data(domain, virq)), + get_count_order(nr_irqs)); + for (i = 0; i < nr_irqs; i++) { struct irq_data *data = irq_domain_get_irq_data(domain, virq + i); - u32 event = its_get_event_id(data); - - /* Mark interrupt index as unused */ - clear_bit(event, its_dev->event_map.lpi_map); - /* Nuke the entry in the domain */ irq_domain_reset_irq_data(data); } diff --git a/drivers/isdn/mISDN/socket.c b/drivers/isdn/mISDN/socket.c index 8cbb75d09a1d..75962c62304d 100644 --- a/drivers/isdn/mISDN/socket.c +++ b/drivers/isdn/mISDN/socket.c @@ -763,6 +763,8 @@ base_sock_create(struct net *net, struct socket *sock, int protocol, int kern) if (sock->type != SOCK_RAW) return -ESOCKTNOSUPPORT; + if (!capable(CAP_NET_RAW)) + return -EPERM; sk = sk_alloc(net, PF_ISDN, GFP_KERNEL, &mISDN_proto, kern); if (!sk) diff --git a/drivers/leds/leds-lp5562.c b/drivers/leds/leds-lp5562.c index 0360c59dbdc9..fc8b2e7bcfef 100644 --- a/drivers/leds/leds-lp5562.c +++ b/drivers/leds/leds-lp5562.c @@ -263,7 +263,11 @@ static void lp5562_firmware_loaded(struct lp55xx_chip *chip) { const struct firmware *fw = chip->fw; - if (fw->size > LP5562_PROGRAM_LENGTH) { + /* + * the firmware is encoded in ascii hex character, with 2 chars + * per byte + */ + if (fw->size > (LP5562_PROGRAM_LENGTH * 2)) { dev_err(&chip->cl->dev, "firmware data size overflow: %zu\n", fw->size); return; diff --git a/drivers/md/dm-bio-prison.c b/drivers/md/dm-bio-prison.c index 03af174485d3..fa2432a89bac 100644 --- a/drivers/md/dm-bio-prison.c +++ b/drivers/md/dm-bio-prison.c @@ -32,7 +32,7 @@ static struct kmem_cache *_cell_cache; */ struct dm_bio_prison *dm_bio_prison_create(void) { - struct dm_bio_prison *prison = kmalloc(sizeof(*prison), 
GFP_KERNEL); + struct dm_bio_prison *prison = kzalloc(sizeof(*prison), GFP_KERNEL); if (!prison) return NULL; diff --git a/drivers/md/dm-io.c b/drivers/md/dm-io.c index 1b84d2890fbf..ad9a470e5382 100644 --- a/drivers/md/dm-io.c +++ b/drivers/md/dm-io.c @@ -50,7 +50,7 @@ struct dm_io_client *dm_io_client_create(void) struct dm_io_client *client; unsigned min_ios = dm_get_reserved_bio_based_ios(); - client = kmalloc(sizeof(*client), GFP_KERNEL); + client = kzalloc(sizeof(*client), GFP_KERNEL); if (!client) return ERR_PTR(-ENOMEM); diff --git a/drivers/md/dm-kcopyd.c b/drivers/md/dm-kcopyd.c index 04248394843e..09df2c688ba9 100644 --- a/drivers/md/dm-kcopyd.c +++ b/drivers/md/dm-kcopyd.c @@ -827,7 +827,7 @@ struct dm_kcopyd_client *dm_kcopyd_client_create(struct dm_kcopyd_throttle *thro int r = -ENOMEM; struct dm_kcopyd_client *kc; - kc = kmalloc(sizeof(*kc), GFP_KERNEL); + kc = kzalloc(sizeof(*kc), GFP_KERNEL); if (!kc) return ERR_PTR(-ENOMEM); diff --git a/drivers/md/dm-region-hash.c b/drivers/md/dm-region-hash.c index 74cb7b991d41..a93a4e683999 100644 --- a/drivers/md/dm-region-hash.c +++ b/drivers/md/dm-region-hash.c @@ -179,7 +179,7 @@ struct dm_region_hash *dm_region_hash_create( ; nr_buckets >>= 1; - rh = kmalloc(sizeof(*rh), GFP_KERNEL); + rh = kzalloc(sizeof(*rh), GFP_KERNEL); if (!rh) { DMERR("unable to allocate region hash memory"); return ERR_PTR(-ENOMEM); diff --git a/drivers/md/dm-snap.c b/drivers/md/dm-snap.c index 2a855e5429ab..55e158553700 100644 --- a/drivers/md/dm-snap.c +++ b/drivers/md/dm-snap.c @@ -19,7 +19,6 @@ #include <linux/vmalloc.h> #include <linux/log2.h> #include <linux/dm-kcopyd.h> -#include <linux/semaphore.h> #include "dm.h" @@ -48,7 +47,7 @@ struct dm_exception_table { }; struct dm_snapshot { - struct rw_semaphore lock; + struct mutex lock; struct dm_dev *origin; struct dm_dev *cow; @@ -106,8 +105,8 @@ struct dm_snapshot { /* The on disk metadata handler */ struct dm_exception_store *store; - /* Maximum number of in-flight COW jobs. 
*/ - struct semaphore cow_count; + unsigned in_progress; + wait_queue_head_t in_progress_wait; struct dm_kcopyd_client *kcopyd_client; @@ -158,8 +157,8 @@ struct dm_snapshot { */ #define DEFAULT_COW_THRESHOLD 2048 -static int cow_threshold = DEFAULT_COW_THRESHOLD; -module_param_named(snapshot_cow_threshold, cow_threshold, int, 0644); +static unsigned cow_threshold = DEFAULT_COW_THRESHOLD; +module_param_named(snapshot_cow_threshold, cow_threshold, uint, 0644); MODULE_PARM_DESC(snapshot_cow_threshold, "Maximum number of chunks being copied on write"); DECLARE_DM_KCOPYD_THROTTLE_WITH_MODULE_PARM(snapshot_copy_throttle, @@ -456,9 +455,9 @@ static int __find_snapshots_sharing_cow(struct dm_snapshot *snap, if (!bdev_equal(s->cow->bdev, snap->cow->bdev)) continue; - down_read(&s->lock); + mutex_lock(&s->lock); active = s->active; - up_read(&s->lock); + mutex_unlock(&s->lock); if (active) { if (snap_src) @@ -926,7 +925,7 @@ static int remove_single_exception_chunk(struct dm_snapshot *s) int r; chunk_t old_chunk = s->first_merging_chunk + s->num_merging_chunks - 1; - down_write(&s->lock); + mutex_lock(&s->lock); /* * Process chunks (and associated exceptions) in reverse order @@ -941,7 +940,7 @@ static int remove_single_exception_chunk(struct dm_snapshot *s) b = __release_queued_bios_after_merge(s); out: - up_write(&s->lock); + mutex_unlock(&s->lock); if (b) flush_bios(b); @@ -1000,9 +999,9 @@ static void snapshot_merge_next_chunks(struct dm_snapshot *s) if (linear_chunks < 0) { DMERR("Read error in exception store: " "shutting down merge"); - down_write(&s->lock); + mutex_lock(&s->lock); s->merge_failed = 1; - up_write(&s->lock); + mutex_unlock(&s->lock); } goto shut; } @@ -1043,10 +1042,10 @@ static void snapshot_merge_next_chunks(struct dm_snapshot *s) previous_count = read_pending_exceptions_done_count(); } - down_write(&s->lock); + mutex_lock(&s->lock); s->first_merging_chunk = old_chunk; s->num_merging_chunks = linear_chunks; - up_write(&s->lock); + mutex_unlock(&s->lock); /* Wait until writes to all 'linear_chunks' drain */ for (i = 0; i < linear_chunks; i++) @@ -1088,10 +1087,10 @@ static void merge_callback(int read_err, unsigned long write_err, void *context) return; shut: - down_write(&s->lock); + mutex_lock(&s->lock); s->merge_failed = 1; b = __release_queued_bios_after_merge(s); - up_write(&s->lock); + mutex_unlock(&s->lock); error_bios(b); merge_shutdown(s); @@ -1137,7 +1136,7 @@ static int snapshot_ctr(struct dm_target *ti, unsigned int argc, char **argv) origin_mode = FMODE_WRITE; } - s = kmalloc(sizeof(*s), GFP_KERNEL); + s = kzalloc(sizeof(*s), GFP_KERNEL); if (!s) { ti->error = "Cannot allocate private snapshot structure"; r = -ENOMEM; @@ -1190,7 +1189,7 @@ static int snapshot_ctr(struct dm_target *ti, unsigned int argc, char **argv) s->exception_start_sequence = 0; s->exception_complete_sequence = 0; INIT_LIST_HEAD(&s->out_of_order_list); - init_rwsem(&s->lock); + mutex_init(&s->lock); INIT_LIST_HEAD(&s->list); spin_lock_init(&s->pe_lock); s->state_bits = 0; @@ -1206,7 +1205,7 @@ static int snapshot_ctr(struct dm_target *ti, unsigned int argc, char **argv) goto bad_hash_tables; } - sema_init(&s->cow_count, (cow_threshold > 0) ? 
cow_threshold : INT_MAX); + init_waitqueue_head(&s->in_progress_wait); s->kcopyd_client = dm_kcopyd_client_create(&dm_kcopyd_throttle); if (IS_ERR(s->kcopyd_client)) { @@ -1357,9 +1356,9 @@ static void snapshot_dtr(struct dm_target *ti) /* Check whether exception handover must be cancelled */ (void) __find_snapshots_sharing_cow(s, &snap_src, &snap_dest, NULL); if (snap_src && snap_dest && (s == snap_src)) { - down_write(&snap_dest->lock); + mutex_lock(&snap_dest->lock); snap_dest->valid = 0; - up_write(&snap_dest->lock); + mutex_unlock(&snap_dest->lock); DMERR("Cancelling snapshot handover."); } up_read(&_origins_lock); @@ -1390,13 +1389,62 @@ static void snapshot_dtr(struct dm_target *ti) dm_exception_store_destroy(s->store); + mutex_destroy(&s->lock); + dm_put_device(ti, s->cow); dm_put_device(ti, s->origin); + WARN_ON(s->in_progress); + kfree(s); } +static void account_start_copy(struct dm_snapshot *s) +{ + spin_lock(&s->in_progress_wait.lock); + s->in_progress++; + spin_unlock(&s->in_progress_wait.lock); +} + +static void account_end_copy(struct dm_snapshot *s) +{ + spin_lock(&s->in_progress_wait.lock); + BUG_ON(!s->in_progress); + s->in_progress--; + if (likely(s->in_progress <= cow_threshold) && + unlikely(waitqueue_active(&s->in_progress_wait))) + wake_up_locked(&s->in_progress_wait); + spin_unlock(&s->in_progress_wait.lock); +} + +static bool wait_for_in_progress(struct dm_snapshot *s, bool unlock_origins) +{ + if (unlikely(s->in_progress > cow_threshold)) { + spin_lock(&s->in_progress_wait.lock); + if (likely(s->in_progress > cow_threshold)) { + /* + * NOTE: this throttle doesn't account for whether + * the caller is servicing an IO that will trigger a COW + * so excess throttling may result for chunks not required + * to be COW'd. But if cow_threshold was reached, extra + * throttling is unlikely to negatively impact performance. + */ + DECLARE_WAITQUEUE(wait, current); + __add_wait_queue(&s->in_progress_wait, &wait); + __set_current_state(TASK_UNINTERRUPTIBLE); + spin_unlock(&s->in_progress_wait.lock); + if (unlock_origins) + up_read(&_origins_lock); + io_schedule(); + remove_wait_queue(&s->in_progress_wait, &wait); + return false; + } + spin_unlock(&s->in_progress_wait.lock); + } + return true; +} + /* * Flush a list of buffers. */ @@ -1412,7 +1460,7 @@ static void flush_bios(struct bio *bio) } } -static int do_origin(struct dm_dev *origin, struct bio *bio); +static int do_origin(struct dm_dev *origin, struct bio *bio, bool limit); /* * Flush a list of buffers. 
@@ -1425,7 +1473,7 @@ static void retry_origin_bios(struct dm_snapshot *s, struct bio *bio) while (bio) { n = bio->bi_next; bio->bi_next = NULL; - r = do_origin(s->origin, bio); + r = do_origin(s->origin, bio, false); if (r == DM_MAPIO_REMAPPED) generic_make_request(bio); bio = n; @@ -1477,7 +1525,7 @@ static void pending_complete(void *context, int success) if (!success) { /* Read/write error - snapshot is unusable */ - down_write(&s->lock); + mutex_lock(&s->lock); __invalidate_snapshot(s, -EIO); error = 1; goto out; @@ -1485,14 +1533,14 @@ static void pending_complete(void *context, int success) e = alloc_completed_exception(GFP_NOIO); if (!e) { - down_write(&s->lock); + mutex_lock(&s->lock); __invalidate_snapshot(s, -ENOMEM); error = 1; goto out; } *e = pe->e; - down_write(&s->lock); + mutex_lock(&s->lock); if (!s->valid) { free_completed_exception(e); error = 1; @@ -1517,7 +1565,7 @@ out: full_bio->bi_end_io = pe->full_bio_end_io; increment_pending_exceptions_done_count(); - up_write(&s->lock); + mutex_unlock(&s->lock); /* Submit any pending write bios */ if (error) { @@ -1579,7 +1627,7 @@ static void copy_callback(int read_err, unsigned long write_err, void *context) } list_add(&pe->out_of_order_entry, lh); } - up(&s->cow_count); + account_end_copy(s); } /* @@ -1603,7 +1651,7 @@ static void start_copy(struct dm_snap_pending_exception *pe) dest.count = src.count; /* Hand over to kcopyd */ - down(&s->cow_count); + account_start_copy(s); dm_kcopyd_copy(s->kcopyd_client, &src, 1, &dest, 0, copy_callback, pe); } @@ -1623,7 +1671,7 @@ static void start_full_bio(struct dm_snap_pending_exception *pe, pe->full_bio = bio; pe->full_bio_end_io = bio->bi_end_io; - down(&s->cow_count); + account_start_copy(s); callback_data = dm_kcopyd_prepare_callback(s->kcopyd_client, copy_callback, pe); @@ -1714,9 +1762,12 @@ static int snapshot_map(struct dm_target *ti, struct bio *bio) if (!s->valid) return -EIO; - /* FIXME: should only take write lock if we need - * to copy an exception */ - down_write(&s->lock); + if (bio_data_dir(bio) == WRITE) { + while (unlikely(!wait_for_in_progress(s, false))) + ; /* wait_for_in_progress() has slept */ + } + + mutex_lock(&s->lock); if (!s->valid || (unlikely(s->snapshot_overflowed) && bio_rw(bio) == WRITE)) { r = -EIO; @@ -1738,9 +1789,9 @@ static int snapshot_map(struct dm_target *ti, struct bio *bio) if (bio_rw(bio) == WRITE) { pe = __lookup_pending_exception(s, chunk); if (!pe) { - up_write(&s->lock); + mutex_unlock(&s->lock); pe = alloc_pending_exception(s); - down_write(&s->lock); + mutex_lock(&s->lock); if (!s->valid || s->snapshot_overflowed) { free_pending_exception(pe); @@ -1775,7 +1826,7 @@ static int snapshot_map(struct dm_target *ti, struct bio *bio) bio->bi_iter.bi_size == (s->store->chunk_size << SECTOR_SHIFT)) { pe->started = 1; - up_write(&s->lock); + mutex_unlock(&s->lock); start_full_bio(pe, bio); goto out; } @@ -1785,7 +1836,7 @@ static int snapshot_map(struct dm_target *ti, struct bio *bio) if (!pe->started) { /* this is protected by snap->lock */ pe->started = 1; - up_write(&s->lock); + mutex_unlock(&s->lock); start_copy(pe); goto out; } @@ -1795,7 +1846,7 @@ static int snapshot_map(struct dm_target *ti, struct bio *bio) } out_unlock: - up_write(&s->lock); + mutex_unlock(&s->lock); out: return r; } @@ -1831,7 +1882,7 @@ static int snapshot_merge_map(struct dm_target *ti, struct bio *bio) chunk = sector_to_chunk(s->store, bio->bi_iter.bi_sector); - down_write(&s->lock); + mutex_lock(&s->lock); /* Full merging snapshots are redirected to the origin */ 
if (!s->valid) @@ -1862,12 +1913,12 @@ redirect_to_origin: bio->bi_bdev = s->origin->bdev; if (bio_rw(bio) == WRITE) { - up_write(&s->lock); - return do_origin(s->origin, bio); + mutex_unlock(&s->lock); + return do_origin(s->origin, bio, false); } out_unlock: - up_write(&s->lock); + mutex_unlock(&s->lock); return r; } @@ -1898,7 +1949,7 @@ static int snapshot_preresume(struct dm_target *ti) down_read(&_origins_lock); (void) __find_snapshots_sharing_cow(s, &snap_src, &snap_dest, NULL); if (snap_src && snap_dest) { - down_read(&snap_src->lock); + mutex_lock(&snap_src->lock); if (s == snap_src) { DMERR("Unable to resume snapshot source until " "handover completes."); @@ -1908,7 +1959,7 @@ static int snapshot_preresume(struct dm_target *ti) "source is suspended."); r = -EINVAL; } - up_read(&snap_src->lock); + mutex_unlock(&snap_src->lock); } up_read(&_origins_lock); @@ -1954,11 +2005,11 @@ static void snapshot_resume(struct dm_target *ti) (void) __find_snapshots_sharing_cow(s, &snap_src, &snap_dest, NULL); if (snap_src && snap_dest) { - down_write(&snap_src->lock); - down_write_nested(&snap_dest->lock, SINGLE_DEPTH_NESTING); + mutex_lock(&snap_src->lock); + mutex_lock_nested(&snap_dest->lock, SINGLE_DEPTH_NESTING); __handover_exceptions(snap_src, snap_dest); - up_write(&snap_dest->lock); - up_write(&snap_src->lock); + mutex_unlock(&snap_dest->lock); + mutex_unlock(&snap_src->lock); } up_read(&_origins_lock); @@ -1973,9 +2024,9 @@ static void snapshot_resume(struct dm_target *ti) /* Now we have correct chunk size, reregister */ reregister_snapshot(s); - down_write(&s->lock); + mutex_lock(&s->lock); s->active = 1; - up_write(&s->lock); + mutex_unlock(&s->lock); } static uint32_t get_origin_minimum_chunksize(struct block_device *bdev) @@ -2015,7 +2066,7 @@ static void snapshot_status(struct dm_target *ti, status_type_t type, switch (type) { case STATUSTYPE_INFO: - down_write(&snap->lock); + mutex_lock(&snap->lock); if (!snap->valid) DMEMIT("Invalid"); @@ -2040,7 +2091,7 @@ static void snapshot_status(struct dm_target *ti, status_type_t type, DMEMIT("Unknown"); } - up_write(&snap->lock); + mutex_unlock(&snap->lock); break; @@ -2106,7 +2157,7 @@ static int __origin_write(struct list_head *snapshots, sector_t sector, if (dm_target_is_snapshot_merge(snap->ti)) continue; - down_write(&snap->lock); + mutex_lock(&snap->lock); /* Only deal with valid and active snapshots */ if (!snap->valid || !snap->active) @@ -2133,9 +2184,9 @@ static int __origin_write(struct list_head *snapshots, sector_t sector, pe = __lookup_pending_exception(snap, chunk); if (!pe) { - up_write(&snap->lock); + mutex_unlock(&snap->lock); pe = alloc_pending_exception(snap); - down_write(&snap->lock); + mutex_lock(&snap->lock); if (!snap->valid) { free_pending_exception(pe); @@ -2178,7 +2229,7 @@ static int __origin_write(struct list_head *snapshots, sector_t sector, } next_snapshot: - up_write(&snap->lock); + mutex_unlock(&snap->lock); if (pe_to_start_now) { start_copy(pe_to_start_now); @@ -2199,15 +2250,24 @@ next_snapshot: /* * Called on a write from the origin driver. 
*/ -static int do_origin(struct dm_dev *origin, struct bio *bio) +static int do_origin(struct dm_dev *origin, struct bio *bio, bool limit) { struct origin *o; int r = DM_MAPIO_REMAPPED; +again: down_read(&_origins_lock); o = __lookup_origin(origin->bdev); - if (o) + if (o) { + if (limit) { + struct dm_snapshot *s; + list_for_each_entry(s, &o->snapshots, list) + if (unlikely(!wait_for_in_progress(s, true))) + goto again; + } + r = __origin_write(&o->snapshots, bio->bi_iter.bi_sector, bio); + } up_read(&_origins_lock); return r; @@ -2320,7 +2380,7 @@ static int origin_map(struct dm_target *ti, struct bio *bio) dm_accept_partial_bio(bio, available_sectors); /* Only tell snapshots if this is a write */ - return do_origin(o->dev, bio); + return do_origin(o->dev, bio, true); } /* diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c index ccb76fc5cb69..aa05a2fca354 100644 --- a/drivers/md/dm-thin.c +++ b/drivers/md/dm-thin.c @@ -2882,7 +2882,7 @@ static struct pool *pool_create(struct mapped_device *pool_md, return (struct pool *)pmd; } - pool = kmalloc(sizeof(*pool), GFP_KERNEL); + pool = kzalloc(sizeof(*pool), GFP_KERNEL); if (!pool) { *error = "Error allocating memory for pool"; err_p = ERR_PTR(-ENOMEM); diff --git a/drivers/md/md.c b/drivers/md/md.c index dd41ee2ee19d..c7a006e102de 100644 --- a/drivers/md/md.c +++ b/drivers/md/md.c @@ -1667,8 +1667,15 @@ static int super_1_validate(struct mddev *mddev, struct md_rdev *rdev) if (!(le32_to_cpu(sb->feature_map) & MD_FEATURE_RECOVERY_BITMAP)) rdev->saved_raid_disk = -1; - } else - set_bit(In_sync, &rdev->flags); + } else { + /* + * If the array is FROZEN, then the device can't + * be in_sync with rest of array. + */ + if (!test_bit(MD_RECOVERY_FROZEN, + &mddev->recovery)) + set_bit(In_sync, &rdev->flags); + } rdev->raid_disk = role; break; } @@ -8445,7 +8452,8 @@ void md_reap_sync_thread(struct mddev *mddev) /* resync has finished, collect result */ md_unregister_thread(&mddev->sync_thread); if (!test_bit(MD_RECOVERY_INTR, &mddev->recovery) && - !test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery)) { + !test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery) && + mddev->degraded != mddev->raid_disks) { /* success...*/ /* activate any spares */ if (mddev->pers->spare_active(mddev)) { diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c index b64915a54a02..c2a5378c5e08 100644 --- a/drivers/md/raid1.c +++ b/drivers/md/raid1.c @@ -2958,6 +2958,13 @@ static int run(struct mddev *mddev) !test_bit(In_sync, &conf->mirrors[i].rdev->flags) || test_bit(Faulty, &conf->mirrors[i].rdev->flags)) mddev->degraded++; + /* + * RAID1 needs at least one disk in active + */ + if (conf->raid_disks - mddev->degraded < 1) { + ret = -EINVAL; + goto abort; + } if (conf->raid_disks - mddev->degraded == 1) mddev->recovery_cp = MaxSector; @@ -2992,8 +2999,12 @@ static int run(struct mddev *mddev) ret = md_integrity_register(mddev); if (ret) { md_unregister_thread(&mddev->thread); - raid1_free(mddev, conf); + goto abort; } + return 0; + +abort: + raid1_free(mddev, conf); return ret; } diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c index 38e62418c6ff..8d712a939a7b 100644 --- a/drivers/md/raid5.c +++ b/drivers/md/raid5.c @@ -2394,7 +2394,9 @@ static void raid5_end_read_request(struct bio * bi) && !test_bit(R5_ReadNoMerge, &sh->dev[i].flags)) retry = 1; if (retry) - if (test_bit(R5_ReadNoMerge, &sh->dev[i].flags)) { + if (sh->qd_idx >= 0 && sh->pd_idx == i) + set_bit(R5_ReadError, &sh->dev[i].flags); + else if (test_bit(R5_ReadNoMerge, &sh->dev[i].flags)) { 
set_bit(R5_ReadError, &sh->dev[i].flags); clear_bit(R5_ReadNoMerge, &sh->dev[i].flags); } else diff --git a/drivers/media/i2c/ov9650.c b/drivers/media/i2c/ov9650.c index 1ee6a5527c38..d11de02ecb63 100644 --- a/drivers/media/i2c/ov9650.c +++ b/drivers/media/i2c/ov9650.c @@ -707,6 +707,11 @@ static int ov965x_set_gain(struct ov965x *ov965x, int auto_gain) for (m = 6; m >= 0; m--) if (gain >= (1 << m) * 16) break; + + /* Sanity check: don't adjust the gain with a negative value */ + if (m < 0) + return -EINVAL; + rgain = (gain - ((1 << m) * 16)) / (1 << m); rgain |= (((1 << m) - 1) << 4); diff --git a/drivers/media/pci/saa7134/saa7134-i2c.c b/drivers/media/pci/saa7134/saa7134-i2c.c index bc957528f69f..e636fca36e3d 100644 --- a/drivers/media/pci/saa7134/saa7134-i2c.c +++ b/drivers/media/pci/saa7134/saa7134-i2c.c @@ -355,7 +355,11 @@ static struct i2c_client saa7134_client_template = { /* ----------------------------------------------------------- */ -/* On Medion 7134 reading EEPROM needs DVB-T demod i2c gate open */ +/* + * On Medion 7134 reading the SAA7134 chip config EEPROM needs DVB-T + * demod i2c gate closed due to an address clash between this EEPROM + * and the demod one. + */ static void saa7134_i2c_eeprom_md7134_gate(struct saa7134_dev *dev) { u8 subaddr = 0x7, dmdregval; @@ -372,14 +376,14 @@ static void saa7134_i2c_eeprom_md7134_gate(struct saa7134_dev *dev) ret = i2c_transfer(&dev->i2c_adap, i2cgatemsg_r, 2); if ((ret == 2) && (dmdregval & 0x2)) { - pr_debug("%s: DVB-T demod i2c gate was left closed\n", + pr_debug("%s: DVB-T demod i2c gate was left open\n", dev->name); data[0] = subaddr; data[1] = (dmdregval & ~0x2); if (i2c_transfer(&dev->i2c_adap, i2cgatemsg_w, 1) != 1) - pr_err("%s: EEPROM i2c gate open failure\n", - dev->name); + pr_err("%s: EEPROM i2c gate close failure\n", + dev->name); } } diff --git a/drivers/media/pci/saa7146/hexium_gemini.c b/drivers/media/pci/saa7146/hexium_gemini.c index d4b3ce828285..343cd75fcd8d 100644 --- a/drivers/media/pci/saa7146/hexium_gemini.c +++ b/drivers/media/pci/saa7146/hexium_gemini.c @@ -304,6 +304,9 @@ static int hexium_attach(struct saa7146_dev *dev, struct saa7146_pci_extension_d ret = saa7146_register_device(&hexium->video_dev, dev, "hexium gemini", VFL_TYPE_GRABBER); if (ret < 0) { pr_err("cannot register capture v4l2 device. skipping.\n"); + saa7146_vv_release(dev); + i2c_del_adapter(&hexium->i2c_adapter); + kfree(hexium); return ret; } diff --git a/drivers/media/platform/omap3isp/isp.c b/drivers/media/platform/omap3isp/isp.c index 136ea1848701..f41e0d08de93 100644 --- a/drivers/media/platform/omap3isp/isp.c +++ b/drivers/media/platform/omap3isp/isp.c @@ -917,6 +917,10 @@ static int isp_pipeline_enable(struct isp_pipeline *pipe, s_stream, mode); pipe->do_propagation = true; } + + /* Stop at the first external sub-device. */ + if (subdev->dev != isp->dev) + break; } return 0; @@ -1031,6 +1035,10 @@ static int isp_pipeline_disable(struct isp_pipeline *pipe) isp->crashed |= 1U << subdev->entity.id; failure = -ETIMEDOUT; } + + /* Stop at the first external sub-device. */ + if (subdev->dev != isp->dev) + break; } return failure; diff --git a/drivers/media/platform/omap3isp/ispccdc.c b/drivers/media/platform/omap3isp/ispccdc.c index a6a61cce43dd..e349f5d990b7 100644 --- a/drivers/media/platform/omap3isp/ispccdc.c +++ b/drivers/media/platform/omap3isp/ispccdc.c @@ -2603,6 +2603,7 @@ int omap3isp_ccdc_register_entities(struct isp_ccdc_device *ccdc, int ret; /* Register the subdev and video node. 
*/ + ccdc->subdev.dev = vdev->mdev->dev; ret = v4l2_device_register_subdev(vdev, &ccdc->subdev); if (ret < 0) goto error; diff --git a/drivers/media/platform/omap3isp/ispccp2.c b/drivers/media/platform/omap3isp/ispccp2.c index 38e6a974c5b1..e6b19b785c2f 100644 --- a/drivers/media/platform/omap3isp/ispccp2.c +++ b/drivers/media/platform/omap3isp/ispccp2.c @@ -1025,6 +1025,7 @@ int omap3isp_ccp2_register_entities(struct isp_ccp2_device *ccp2, int ret; /* Register the subdev and video nodes. */ + ccp2->subdev.dev = vdev->mdev->dev; ret = v4l2_device_register_subdev(vdev, &ccp2->subdev); if (ret < 0) goto error; diff --git a/drivers/media/platform/omap3isp/ispcsi2.c b/drivers/media/platform/omap3isp/ispcsi2.c index a78338d012b4..029b434b7609 100644 --- a/drivers/media/platform/omap3isp/ispcsi2.c +++ b/drivers/media/platform/omap3isp/ispcsi2.c @@ -1201,6 +1201,7 @@ int omap3isp_csi2_register_entities(struct isp_csi2_device *csi2, int ret; /* Register the subdev and video nodes. */ + csi2->subdev.dev = vdev->mdev->dev; ret = v4l2_device_register_subdev(vdev, &csi2->subdev); if (ret < 0) goto error; diff --git a/drivers/media/platform/omap3isp/isppreview.c b/drivers/media/platform/omap3isp/isppreview.c index 13803270d104..c9e8845de1b1 100644 --- a/drivers/media/platform/omap3isp/isppreview.c +++ b/drivers/media/platform/omap3isp/isppreview.c @@ -2223,6 +2223,7 @@ int omap3isp_preview_register_entities(struct isp_prev_device *prev, int ret; /* Register the subdev and video nodes. */ + prev->subdev.dev = vdev->mdev->dev; ret = v4l2_device_register_subdev(vdev, &prev->subdev); if (ret < 0) goto error; diff --git a/drivers/media/platform/omap3isp/ispresizer.c b/drivers/media/platform/omap3isp/ispresizer.c index 7cfb43dc0ffd..d4e53cbe9193 100644 --- a/drivers/media/platform/omap3isp/ispresizer.c +++ b/drivers/media/platform/omap3isp/ispresizer.c @@ -1679,6 +1679,7 @@ int omap3isp_resizer_register_entities(struct isp_res_device *res, int ret; /* Register the subdev and video nodes. 
*/ + res->subdev.dev = vdev->mdev->dev; ret = v4l2_device_register_subdev(vdev, &res->subdev); if (ret < 0) goto error; diff --git a/drivers/media/platform/omap3isp/ispstat.c b/drivers/media/platform/omap3isp/ispstat.c index 94d4c295d3d0..c54c5c494b75 100644 --- a/drivers/media/platform/omap3isp/ispstat.c +++ b/drivers/media/platform/omap3isp/ispstat.c @@ -1010,6 +1010,8 @@ void omap3isp_stat_unregister_entities(struct ispstat *stat) int omap3isp_stat_register_entities(struct ispstat *stat, struct v4l2_device *vdev) { + stat->subdev.dev = vdev->mdev->dev; + return v4l2_device_register_subdev(vdev, &stat->subdev); } diff --git a/drivers/media/radio/si470x/radio-si470x-usb.c b/drivers/media/radio/si470x/radio-si470x-usb.c index 091d793f6583..c9347d5aac04 100644 --- a/drivers/media/radio/si470x/radio-si470x-usb.c +++ b/drivers/media/radio/si470x/radio-si470x-usb.c @@ -743,7 +743,7 @@ static int si470x_usb_driver_probe(struct usb_interface *intf, /* start radio */ retval = si470x_start_usb(radio); if (retval < 0) - goto err_all; + goto err_buf; /* set initial frequency */ si470x_set_freq(radio, 87.5 * FREQ_MUL); /* available in all regions */ @@ -758,6 +758,8 @@ static int si470x_usb_driver_probe(struct usb_interface *intf, return 0; err_all: + usb_kill_urb(radio->int_in_urb); +err_buf: kfree(radio->buffer); err_ctrl: v4l2_ctrl_handler_free(&radio->hdl); @@ -831,6 +833,7 @@ static void si470x_usb_driver_disconnect(struct usb_interface *intf) mutex_lock(&radio->lock); v4l2_device_disconnect(&radio->v4l2_dev); video_unregister_device(&radio->videodev); + usb_kill_urb(radio->int_in_urb); usb_set_intfdata(intf, NULL); mutex_unlock(&radio->lock); v4l2_device_put(&radio->v4l2_dev); diff --git a/drivers/media/rc/iguanair.c b/drivers/media/rc/iguanair.c index ee60e17fba05..cda4ce612dcf 100644 --- a/drivers/media/rc/iguanair.c +++ b/drivers/media/rc/iguanair.c @@ -430,6 +430,10 @@ static int iguanair_probe(struct usb_interface *intf, int ret, pipein, pipeout; struct usb_host_interface *idesc; + idesc = intf->altsetting; + if (idesc->desc.bNumEndpoints < 2) + return -ENODEV; + ir = kzalloc(sizeof(*ir), GFP_KERNEL); rc = rc_allocate_device(); if (!ir || !rc) { @@ -444,18 +448,13 @@ static int iguanair_probe(struct usb_interface *intf, ir->urb_in = usb_alloc_urb(0, GFP_KERNEL); ir->urb_out = usb_alloc_urb(0, GFP_KERNEL); - if (!ir->buf_in || !ir->packet || !ir->urb_in || !ir->urb_out) { + if (!ir->buf_in || !ir->packet || !ir->urb_in || !ir->urb_out || + !usb_endpoint_is_int_in(&idesc->endpoint[0].desc) || + !usb_endpoint_is_int_out(&idesc->endpoint[1].desc)) { ret = -ENOMEM; goto out; } - idesc = intf->altsetting; - - if (idesc->desc.bNumEndpoints < 2) { - ret = -ENODEV; - goto out; - } - ir->rc = rc; ir->dev = &intf->dev; ir->udev = udev; diff --git a/drivers/media/usb/cpia2/cpia2_usb.c b/drivers/media/usb/cpia2/cpia2_usb.c index 41ea00ac3a87..76b9cb940b87 100644 --- a/drivers/media/usb/cpia2/cpia2_usb.c +++ b/drivers/media/usb/cpia2/cpia2_usb.c @@ -665,6 +665,10 @@ static int submit_urbs(struct camera_data *cam) ERR("%s: usb_alloc_urb error!\n", __func__); for (j = 0; j < i; j++) usb_free_urb(cam->sbuf[j].urb); + for (j = 0; j < NUM_SBUF; j++) { + kfree(cam->sbuf[j].data); + cam->sbuf[j].data = NULL; + } return -ENOMEM; } diff --git a/drivers/media/usb/dvb-usb/dib0700_devices.c b/drivers/media/usb/dvb-usb/dib0700_devices.c index 38c03283a441..e1316c7b7c2e 100644 --- a/drivers/media/usb/dvb-usb/dib0700_devices.c +++ b/drivers/media/usb/dvb-usb/dib0700_devices.c @@ -2418,9 +2418,13 @@ static int 
dib9090_tuner_attach(struct dvb_usb_adapter *adap) 8, 0x0486, }; + if (!IS_ENABLED(CONFIG_DVB_DIB9000)) + return -ENODEV; if (dvb_attach(dib0090_fw_register, adap->fe_adap[0].fe, i2c, &dib9090_dib0090_config) == NULL) return -ENODEV; i2c = dib9000_get_i2c_master(adap->fe_adap[0].fe, DIBX000_I2C_INTERFACE_GPIO_1_2, 0); + if (!i2c) + return -ENODEV; if (dib01x0_pmu_update(i2c, data_dib190, 10) != 0) return -ENODEV; dib0700_set_i2c_speed(adap->dev, 1500); @@ -2496,10 +2500,14 @@ static int nim9090md_tuner_attach(struct dvb_usb_adapter *adap) 0, 0x00ef, 8, 0x0406, }; + if (!IS_ENABLED(CONFIG_DVB_DIB9000)) + return -ENODEV; i2c = dib9000_get_tuner_interface(adap->fe_adap[0].fe); if (dvb_attach(dib0090_fw_register, adap->fe_adap[0].fe, i2c, &nim9090md_dib0090_config[0]) == NULL) return -ENODEV; i2c = dib9000_get_i2c_master(adap->fe_adap[0].fe, DIBX000_I2C_INTERFACE_GPIO_1_2, 0); + if (!i2c) + return -ENODEV; if (dib01x0_pmu_update(i2c, data_dib190, 10) < 0) return -ENODEV; diff --git a/drivers/media/usb/gspca/konica.c b/drivers/media/usb/gspca/konica.c index 0f6d57fbf91b..624b4d24716d 100644 --- a/drivers/media/usb/gspca/konica.c +++ b/drivers/media/usb/gspca/konica.c @@ -127,6 +127,11 @@ static void reg_r(struct gspca_dev *gspca_dev, u16 value, u16 index) if (ret < 0) { pr_err("reg_r err %d\n", ret); gspca_dev->usb_err = ret; + /* + * Make sure the buffer is zeroed to avoid uninitialized + * values. + */ + memset(gspca_dev->usb_buf, 0, 2); } } diff --git a/drivers/media/usb/gspca/nw80x.c b/drivers/media/usb/gspca/nw80x.c index 599f755e75b8..7ebeee98dc1b 100644 --- a/drivers/media/usb/gspca/nw80x.c +++ b/drivers/media/usb/gspca/nw80x.c @@ -1584,6 +1584,11 @@ static void reg_r(struct gspca_dev *gspca_dev, if (ret < 0) { pr_err("reg_r err %d\n", ret); gspca_dev->usb_err = ret; + /* + * Make sure the buffer is zeroed to avoid uninitialized + * values. + */ + memset(gspca_dev->usb_buf, 0, USB_BUF_SZ); return; } if (len == 1) diff --git a/drivers/media/usb/gspca/ov519.c b/drivers/media/usb/gspca/ov519.c index c95f32a0c02b..c7aafdbb5738 100644 --- a/drivers/media/usb/gspca/ov519.c +++ b/drivers/media/usb/gspca/ov519.c @@ -2116,6 +2116,11 @@ static int reg_r(struct sd *sd, u16 index) } else { PERR("reg_r %02x failed %d\n", index, ret); sd->gspca_dev.usb_err = ret; + /* + * Make sure the result is zeroed to avoid uninitialized + * values. + */ + gspca_dev->usb_buf[0] = 0; } return ret; @@ -2142,6 +2147,11 @@ static int reg_r8(struct sd *sd, } else { PERR("reg_r8 %02x failed %d\n", index, ret); sd->gspca_dev.usb_err = ret; + /* + * Make sure the buffer is zeroed to avoid uninitialized + * values. + */ + memset(gspca_dev->usb_buf, 0, 8); } return ret; diff --git a/drivers/media/usb/gspca/ov534.c b/drivers/media/usb/gspca/ov534.c index bfff1d1c70ab..466f984312dd 100644 --- a/drivers/media/usb/gspca/ov534.c +++ b/drivers/media/usb/gspca/ov534.c @@ -644,6 +644,11 @@ static u8 ov534_reg_read(struct gspca_dev *gspca_dev, u16 reg) if (ret < 0) { pr_err("read failed %d\n", ret); gspca_dev->usb_err = ret; + /* + * Make sure the result is zeroed to avoid uninitialized + * values. 
+ */ + gspca_dev->usb_buf[0] = 0; } return gspca_dev->usb_buf[0]; } diff --git a/drivers/media/usb/gspca/ov534_9.c b/drivers/media/usb/gspca/ov534_9.c index 47085cf2d723..f2dca0606935 100644 --- a/drivers/media/usb/gspca/ov534_9.c +++ b/drivers/media/usb/gspca/ov534_9.c @@ -1157,6 +1157,7 @@ static u8 reg_r(struct gspca_dev *gspca_dev, u16 reg) if (ret < 0) { pr_err("reg_r err %d\n", ret); gspca_dev->usb_err = ret; + return 0; } return gspca_dev->usb_buf[0]; } diff --git a/drivers/media/usb/gspca/se401.c b/drivers/media/usb/gspca/se401.c index 5102cea50471..6adbb0eca71f 100644 --- a/drivers/media/usb/gspca/se401.c +++ b/drivers/media/usb/gspca/se401.c @@ -115,6 +115,11 @@ static void se401_read_req(struct gspca_dev *gspca_dev, u16 req, int silent) pr_err("read req failed req %#04x error %d\n", req, err); gspca_dev->usb_err = err; + /* + * Make sure the buffer is zeroed to avoid uninitialized + * values. + */ + memset(gspca_dev->usb_buf, 0, READ_REQ_SIZE); } } diff --git a/drivers/media/usb/gspca/sn9c20x.c b/drivers/media/usb/gspca/sn9c20x.c index d0ee899584a9..6136eb683306 100644 --- a/drivers/media/usb/gspca/sn9c20x.c +++ b/drivers/media/usb/gspca/sn9c20x.c @@ -139,6 +139,13 @@ static const struct dmi_system_id flip_dmi_table[] = { } }, { + .ident = "MSI MS-1039", + .matches = { + DMI_MATCH(DMI_SYS_VENDOR, "MICRO-STAR INT'L CO.,LTD."), + DMI_MATCH(DMI_PRODUCT_NAME, "MS-1039"), + } + }, + { .ident = "MSI MS-1632", .matches = { DMI_MATCH(DMI_BOARD_VENDOR, "MSI"), @@ -924,6 +931,11 @@ static void reg_r(struct gspca_dev *gspca_dev, u16 reg, u16 length) if (unlikely(result < 0 || result != length)) { pr_err("Read register %02x failed %d\n", reg, result); gspca_dev->usb_err = result; + /* + * Make sure the buffer is zeroed to avoid uninitialized + * values. + */ + memset(gspca_dev->usb_buf, 0, USB_BUF_SZ); } } diff --git a/drivers/media/usb/gspca/sonixb.c b/drivers/media/usb/gspca/sonixb.c index 6696b2ec34e9..83e98b85ab6a 100644 --- a/drivers/media/usb/gspca/sonixb.c +++ b/drivers/media/usb/gspca/sonixb.c @@ -466,6 +466,11 @@ static void reg_r(struct gspca_dev *gspca_dev, dev_err(gspca_dev->v4l2_dev.dev, "Error reading register %02x: %d\n", value, res); gspca_dev->usb_err = res; + /* + * Make sure the result is zeroed to avoid uninitialized + * values. + */ + gspca_dev->usb_buf[0] = 0; } } diff --git a/drivers/media/usb/gspca/sonixj.c b/drivers/media/usb/gspca/sonixj.c index fd1c8706d86a..67e23557a1a9 100644 --- a/drivers/media/usb/gspca/sonixj.c +++ b/drivers/media/usb/gspca/sonixj.c @@ -1175,6 +1175,11 @@ static void reg_r(struct gspca_dev *gspca_dev, if (ret < 0) { pr_err("reg_r err %d\n", ret); gspca_dev->usb_err = ret; + /* + * Make sure the buffer is zeroed to avoid uninitialized + * values. + */ + memset(gspca_dev->usb_buf, 0, USB_BUF_SZ); } } diff --git a/drivers/media/usb/gspca/spca1528.c b/drivers/media/usb/gspca/spca1528.c index f38fd8949609..ee93bd443df5 100644 --- a/drivers/media/usb/gspca/spca1528.c +++ b/drivers/media/usb/gspca/spca1528.c @@ -84,6 +84,11 @@ static void reg_r(struct gspca_dev *gspca_dev, if (ret < 0) { pr_err("reg_r err %d\n", ret); gspca_dev->usb_err = ret; + /* + * Make sure the buffer is zeroed to avoid uninitialized + * values. 
+ */ + memset(gspca_dev->usb_buf, 0, USB_BUF_SZ); } } diff --git a/drivers/media/usb/gspca/sq930x.c b/drivers/media/usb/gspca/sq930x.c index e274cf19a3ea..b236e9dcd468 100644 --- a/drivers/media/usb/gspca/sq930x.c +++ b/drivers/media/usb/gspca/sq930x.c @@ -438,6 +438,11 @@ static void reg_r(struct gspca_dev *gspca_dev, if (ret < 0) { pr_err("reg_r %04x failed %d\n", value, ret); gspca_dev->usb_err = ret; + /* + * Make sure the buffer is zeroed to avoid uninitialized + * values. + */ + memset(gspca_dev->usb_buf, 0, USB_BUF_SZ); } } diff --git a/drivers/media/usb/gspca/sunplus.c b/drivers/media/usb/gspca/sunplus.c index 46c9f2229a18..cc3e1478c5a0 100644 --- a/drivers/media/usb/gspca/sunplus.c +++ b/drivers/media/usb/gspca/sunplus.c @@ -268,6 +268,11 @@ static void reg_r(struct gspca_dev *gspca_dev, if (ret < 0) { pr_err("reg_r err %d\n", ret); gspca_dev->usb_err = ret; + /* + * Make sure the buffer is zeroed to avoid uninitialized + * values. + */ + memset(gspca_dev->usb_buf, 0, USB_BUF_SZ); } } diff --git a/drivers/media/usb/gspca/vc032x.c b/drivers/media/usb/gspca/vc032x.c index b4efb2fb36fa..5032b9d7d9bb 100644 --- a/drivers/media/usb/gspca/vc032x.c +++ b/drivers/media/usb/gspca/vc032x.c @@ -2919,6 +2919,11 @@ static void reg_r_i(struct gspca_dev *gspca_dev, if (ret < 0) { pr_err("reg_r err %d\n", ret); gspca_dev->usb_err = ret; + /* + * Make sure the buffer is zeroed to avoid uninitialized + * values. + */ + memset(gspca_dev->usb_buf, 0, USB_BUF_SZ); } } static void reg_r(struct gspca_dev *gspca_dev, diff --git a/drivers/media/usb/gspca/w996Xcf.c b/drivers/media/usb/gspca/w996Xcf.c index fb9fe2ef3a6f..a74ac595656f 100644 --- a/drivers/media/usb/gspca/w996Xcf.c +++ b/drivers/media/usb/gspca/w996Xcf.c @@ -139,6 +139,11 @@ static int w9968cf_read_sb(struct sd *sd) } else { pr_err("Read SB reg [01] failed\n"); sd->gspca_dev.usb_err = ret; + /* + * Make sure the buffer is zeroed to avoid uninitialized + * values. 
+ */ + memset(sd->gspca_dev.usb_buf, 0, 2); } udelay(W9968CF_I2C_BUS_DELAY); diff --git a/drivers/media/usb/hdpvr/hdpvr-core.c b/drivers/media/usb/hdpvr/hdpvr-core.c index 08f0ca7aa012..7b5c493f02b0 100644 --- a/drivers/media/usb/hdpvr/hdpvr-core.c +++ b/drivers/media/usb/hdpvr/hdpvr-core.c @@ -143,6 +143,7 @@ static int device_authorization(struct hdpvr_device *dev) dev->fw_ver = dev->usbc_buf[1]; + dev->usbc_buf[46] = '\0'; v4l2_info(&dev->v4l2_dev, "firmware version 0x%x dated %s\n", dev->fw_ver, &dev->usbc_buf[2]); @@ -278,6 +279,7 @@ static int hdpvr_probe(struct usb_interface *interface, #endif size_t buffer_size; int i; + int dev_num; int retval = -ENOMEM; /* allocate memory for our device state and initialize it */ @@ -386,8 +388,17 @@ static int hdpvr_probe(struct usb_interface *interface, } #endif + dev_num = atomic_inc_return(&dev_nr); + if (dev_num >= HDPVR_MAX) { + v4l2_err(&dev->v4l2_dev, + "max device number reached, device register failed\n"); + atomic_dec(&dev_nr); + retval = -ENODEV; + goto reg_fail; + } + retval = hdpvr_register_videodev(dev, &interface->dev, - video_nr[atomic_inc_return(&dev_nr)]); + video_nr[dev_num]); if (retval < 0) { v4l2_err(&dev->v4l2_dev, "registering videodev failed\n"); goto reg_fail; diff --git a/drivers/media/usb/stkwebcam/stk-webcam.c b/drivers/media/usb/stkwebcam/stk-webcam.c index c21c4c004f97..17ee9cde4156 100644 --- a/drivers/media/usb/stkwebcam/stk-webcam.c +++ b/drivers/media/usb/stkwebcam/stk-webcam.c @@ -642,8 +642,7 @@ static int v4l_stk_release(struct file *fp) dev->owner = NULL; } - if (is_present(dev)) - usb_autopm_put_interface(dev->interface); + usb_autopm_put_interface(dev->interface); mutex_unlock(&dev->lock); return v4l2_fh_release(fp); } diff --git a/drivers/media/usb/ttusb-dec/ttusb_dec.c b/drivers/media/usb/ttusb-dec/ttusb_dec.c index a5de46f04247..f9b5de7ace01 100644 --- a/drivers/media/usb/ttusb-dec/ttusb_dec.c +++ b/drivers/media/usb/ttusb-dec/ttusb_dec.c @@ -272,7 +272,7 @@ static int ttusb_dec_send_command(struct ttusb_dec *dec, const u8 command, dprintk("%s\n", __func__); - b = kmalloc(COMMAND_PACKET_SIZE + 4, GFP_KERNEL); + b = kzalloc(COMMAND_PACKET_SIZE + 4, GFP_KERNEL); if (!b) return -ENOMEM; diff --git a/drivers/memstick/host/jmb38x_ms.c b/drivers/memstick/host/jmb38x_ms.c index 48db922075e2..08fa6400d255 100644 --- a/drivers/memstick/host/jmb38x_ms.c +++ b/drivers/memstick/host/jmb38x_ms.c @@ -947,7 +947,7 @@ static int jmb38x_ms_probe(struct pci_dev *pdev, if (!cnt) { rc = -ENODEV; pci_dev_busy = 1; - goto err_out; + goto err_out_int; } jm = kzalloc(sizeof(struct jmb38x_ms) diff --git a/drivers/mfd/intel-lpss-pci.c b/drivers/mfd/intel-lpss-pci.c index 5bfdfccbb9a1..032c95157497 100644 --- a/drivers/mfd/intel-lpss-pci.c +++ b/drivers/mfd/intel-lpss-pci.c @@ -38,6 +38,8 @@ static int intel_lpss_pci_probe(struct pci_dev *pdev, info->mem = &pdev->resource[0]; info->irq = pdev->irq; + pdev->d3cold_delay = 0; + /* Probably it is enough to set this for iDMA capable devices only */ pci_set_master(pdev); diff --git a/drivers/misc/qseecom.c b/drivers/misc/qseecom.c index d50144e0fed9..21ad3dc3abaa 100644 --- a/drivers/misc/qseecom.c +++ b/drivers/misc/qseecom.c @@ -1,4 +1,5 @@ -/*Qualcomm Secure Execution Environment Communicator (QSEECOM) driver +/* + * QTI Secure Execution Environment Communicator (QSEECOM) driver * * Copyright (c) 2012-2019, The Linux Foundation. All rights reserved. 
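/*
 * The gspca reg_r() hunks above all apply the same fix: gspca_dev->usb_buf
 * is a buffer shared by every register read, and several callers use its
 * contents without checking gspca_dev->usb_err, so a failed control
 * transfer must not leave stale or uninitialized bytes behind.  A sketch of
 * the pattern with a hypothetical read helper (the function name, request
 * value and length below are illustrative, not taken from any one driver):
 */
static void example_reg_r(struct gspca_dev *gspca_dev, u16 value, int len)
{
	int ret;

	ret = usb_control_msg(gspca_dev->dev,
			      usb_rcvctrlpipe(gspca_dev->dev, 0),
			      0x00,		/* illustrative request */
			      USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
			      value, 0, gspca_dev->usb_buf, len, 500);
	if (ret < 0) {
		pr_err("reg_r err %d\n", ret);
		gspca_dev->usb_err = ret;
		/* never hand stale/uninitialized data back to the caller */
		memset(gspca_dev->usb_buf, 0, len);
	}
}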
* @@ -43,13 +44,13 @@ #include <linux/msm-bus-board.h> #include <soc/qcom/qseecomi.h> #include <asm/cacheflush.h> -#include "qseecom_legacy.h" #include "qseecom_kernel.h" #include <crypto/ice.h> #include <linux/delay.h> #include <linux/compat.h> #include "compat_qseecom.h" +#include <linux/kthread.h> #define QSEECOM_DEV "qseecom" #define QSEOS_VERSION_14 0x14 @@ -65,7 +66,7 @@ #define QSEE_CE_CLK_100MHZ 100000000 #define CE_CLK_DIV 1000000 -#define QSEECOM_MAX_SG_ENTRY 512 +#define QSEECOM_MAX_SG_ENTRY 4096 #define QSEECOM_SG_ENTRY_MSG_BUF_SZ_64BIT \ (QSEECOM_MAX_SG_ENTRY * SG_ENTRY_SZ_64BIT) @@ -100,6 +101,10 @@ #define QSEECOM_STATE_READY 2 #define QSEECOM_ICE_FDE_KEY_SIZE_MASK 2 +#define QSEECOM_SCM_EBUSY_WAIT_MS 30 +#define QSEECOM_SCM_EBUSY_MAX_RETRY 67 +#define QSEECOM_SCM_SECURE_WORLD_BUSY_COUNT 33 + /* * default ce info unit to 0 for * services which @@ -109,6 +114,9 @@ #define DEFAULT_CE_INFO_UNIT 0 #define DEFAULT_NUM_CE_INFO_UNIT 1 +#define FDE_FLAG_POS 4 +#define ENABLE_KEY_WRAP_IN_KS (1 << FDE_FLAG_POS) + enum qseecom_clk_definitions { CLK_DFAB = 0, CLK_SFPB, @@ -137,12 +145,25 @@ enum qseecom_ce_hw_instance { CLK_INVALID, }; +enum qseecom_listener_unregister_kthread_state { + LSNR_UNREG_KT_SLEEP = 0, + LSNR_UNREG_KT_WAKEUP, +}; + +enum qseecom_unload_app_kthread_state { + UNLOAD_APP_KT_SLEEP = 0, + UNLOAD_APP_KT_WAKEUP, +}; + static struct class *driver_class; static dev_t qseecom_device_no; static DEFINE_MUTEX(qsee_bw_mutex); static DEFINE_MUTEX(app_access_lock); static DEFINE_MUTEX(clk_access_lock); +static DEFINE_MUTEX(listener_access_lock); +static DEFINE_MUTEX(unload_app_pending_list_lock); + struct sglist_info { uint32_t indexAndFlags; @@ -182,13 +203,21 @@ struct qseecom_registered_listener_list { size_t sb_length; struct ion_handle *ihandle; /* Retrieve phy addr */ wait_queue_head_t rcv_req_wq; + /* rcv_req_flag: 0: ready and empty; 1: received req */ int rcv_req_flag; int send_resp_flag; bool listener_in_use; /* wq for thread blocked on this listener*/ wait_queue_head_t listener_block_app_wq; - struct sglist_info sglistinfo_ptr[MAX_ION_FD]; - uint32_t sglist_cnt; + struct sglist_info sglistinfo_ptr[MAX_ION_FD]; + uint32_t sglist_cnt; + int abort; + bool unregister_pending; +}; + +struct qseecom_unregister_pending_list { + struct list_head list; + struct qseecom_dev_handle *data; }; struct qseecom_registered_app_list { @@ -198,6 +227,7 @@ struct qseecom_registered_app_list { char app_name[MAX_APP_NAME_SIZE]; u32 app_arch; bool app_blocked; + u32 check_block; u32 blocked_on_listener_id; }; @@ -235,7 +265,6 @@ struct qseecom_clk { struct qseecom_control { struct ion_client *ion_clnt; /* Ion client */ struct list_head registered_listener_list_head; - spinlock_t registered_listener_list_lock; struct list_head registered_app_list_head; spinlock_t registered_app_list_lock; @@ -275,12 +304,29 @@ struct qseecom_control { unsigned int ce_opp_freq_hz; bool appsbl_qseecom_support; uint32_t qsee_reentrancy_support; + bool enable_key_wrap_in_ks; uint32_t app_block_ref_cnt; wait_queue_head_t app_block_wq; atomic_t qseecom_state; int is_apps_region_protected; bool smcinvoke_support; + + struct list_head unregister_lsnr_pending_list_head; + wait_queue_head_t register_lsnr_pending_wq; + struct task_struct *unregister_lsnr_kthread_task; + wait_queue_head_t unregister_lsnr_kthread_wq; + atomic_t unregister_lsnr_kthread_state; + + struct list_head unload_app_pending_list_head; + struct task_struct *unload_app_kthread_task; + wait_queue_head_t unload_app_kthread_wq; + atomic_t 
unload_app_kthread_state; +}; + +struct qseecom_unload_app_pending_list { + struct list_head list; + struct qseecom_dev_handle *data; }; struct qseecom_sec_buf_fd_info { @@ -305,10 +351,14 @@ struct qseecom_client_handle { char app_name[MAX_APP_NAME_SIZE]; u32 app_arch; struct qseecom_sec_buf_fd_info sec_buf_fd[MAX_ION_FD]; + bool from_smcinvoke; + bool unload_pending; }; struct qseecom_listener_handle { u32 id; + bool unregister_pending; + bool release_called; }; static struct qseecom_control qseecom; @@ -380,6 +430,8 @@ static int qseecom_free_ce_info(struct qseecom_dev_handle *data, void __user *argp); static int qseecom_query_ce_info(struct qseecom_dev_handle *data, void __user *argp); +static int __qseecom_unload_app(struct qseecom_dev_handle *data, + uint32_t app_id); static int get_qseecom_keymaster_status(char *str) { @@ -388,6 +440,25 @@ static int get_qseecom_keymaster_status(char *str) } __setup("androidboot.keymaster=", get_qseecom_keymaster_status); +static int __qseecom_scm_call2_locked(uint32_t smc_id, struct scm_desc *desc) +{ + int ret = 0; + int retry_count = 0; + + do { + ret = scm_call2_noretry(smc_id, desc); + if (ret == -EBUSY) { + mutex_unlock(&app_access_lock); + msleep(QSEECOM_SCM_EBUSY_WAIT_MS); + mutex_lock(&app_access_lock); + } + if (retry_count == QSEECOM_SCM_SECURE_WORLD_BUSY_COUNT) + pr_warn("secure world has been busy for 1 second!\n"); + } while (ret == -EBUSY && + (retry_count++ < QSEECOM_SCM_EBUSY_MAX_RETRY)); + return ret; +} + static int qseecom_scm_call2(uint32_t svc_id, uint32_t tz_cmd_id, const void *req_buf, void *resp_buf) { @@ -415,7 +486,7 @@ static int qseecom_scm_call2(uint32_t svc_id, uint32_t tz_cmd_id, svc_id, tz_cmd_id); return -EINVAL; } - ret = scm_call2(smc_id, &desc); + ret = __qseecom_scm_call2_locked(smc_id, &desc); break; } case SCM_SVC_ES: { @@ -426,10 +497,9 @@ static int qseecom_scm_call2(uint32_t svc_id, uint32_t tz_cmd_id, (struct qseecom_save_partition_hash_req *) req_buf; char *tzbuf = kzalloc(tzbuflen, GFP_KERNEL); - if (!tzbuf) { - pr_err("error allocating data\n"); + + if (!tzbuf) return -ENOMEM; - } memset(tzbuf, 0, tzbuflen); memcpy(tzbuf, p_hash_req->digest, SHA256_DIGEST_LENGTH); @@ -439,7 +509,7 @@ static int qseecom_scm_call2(uint32_t svc_id, uint32_t tz_cmd_id, desc.args[0] = p_hash_req->partition_id; desc.args[1] = virt_to_phys(tzbuf); desc.args[2] = SHA256_DIGEST_LENGTH; - ret = scm_call2(smc_id, &desc); + ret = __qseecom_scm_call2_locked(smc_id, &desc); kzfree(tzbuf); break; } @@ -457,6 +527,7 @@ static int qseecom_scm_call2(uint32_t svc_id, uint32_t tz_cmd_id, case QSEOS_APP_START_COMMAND: { struct qseecom_load_app_ireq *req; struct qseecom_load_app_64bit_ireq *req_64bit; + smc_id = TZ_OS_APP_START_ID; desc.arginfo = TZ_OS_APP_START_ID_PARAM_ID; if (qseecom.qsee_version < QSEE_VERSION_40) { @@ -473,11 +544,12 @@ static int qseecom_scm_call2(uint32_t svc_id, uint32_t tz_cmd_id, desc.args[2] = req_64bit->phy_addr; } __qseecom_reentrancy_check_if_no_app_blocked(smc_id); - ret = scm_call2(smc_id, &desc); + ret = __qseecom_scm_call2_locked(smc_id, &desc); break; } case QSEOS_APP_SHUTDOWN_COMMAND: { struct qseecom_unload_app_ireq *req; + req = (struct qseecom_unload_app_ireq *)req_buf; smc_id = TZ_OS_APP_SHUTDOWN_ID; desc.arginfo = TZ_OS_APP_SHUTDOWN_ID_PARAM_ID; @@ -489,11 +561,9 @@ static int qseecom_scm_call2(uint32_t svc_id, uint32_t tz_cmd_id, struct qseecom_check_app_ireq *req; u32 tzbuflen = PAGE_ALIGN(sizeof(req->app_name)); char *tzbuf = kzalloc(tzbuflen, GFP_KERNEL); - if (!tzbuf) { - pr_err("Allocate %d 
bytes buffer failed\n", - tzbuflen); + + if (!tzbuf) return -ENOMEM; - } req = (struct qseecom_check_app_ireq *)req_buf; pr_debug("Lookup app_name = %s\n", req->app_name); strlcpy(tzbuf, req->app_name, sizeof(req->app_name)); @@ -503,13 +573,14 @@ static int qseecom_scm_call2(uint32_t svc_id, uint32_t tz_cmd_id, desc.args[0] = virt_to_phys(tzbuf); desc.args[1] = strlen(req->app_name); __qseecom_reentrancy_check_if_no_app_blocked(smc_id); - ret = scm_call2(smc_id, &desc); + ret = __qseecom_scm_call2_locked(smc_id, &desc); kzfree(tzbuf); break; } case QSEOS_APP_REGION_NOTIFICATION: { struct qsee_apps_region_info_ireq *req; struct qsee_apps_region_info_64bit_ireq *req_64bit; + smc_id = TZ_OS_APP_REGION_NOTIFICATION_ID; desc.arginfo = TZ_OS_APP_REGION_NOTIFICATION_ID_PARAM_ID; @@ -526,12 +597,13 @@ static int qseecom_scm_call2(uint32_t svc_id, uint32_t tz_cmd_id, desc.args[1] = req_64bit->size; } __qseecom_reentrancy_check_if_no_app_blocked(smc_id); - ret = scm_call2(smc_id, &desc); + ret = __qseecom_scm_call2_locked(smc_id, &desc); break; } case QSEOS_LOAD_SERV_IMAGE_COMMAND: { struct qseecom_load_lib_image_ireq *req; struct qseecom_load_lib_image_64bit_ireq *req_64bit; + smc_id = TZ_OS_LOAD_SERVICES_IMAGE_ID; desc.arginfo = TZ_OS_LOAD_SERVICES_IMAGE_ID_PARAM_ID; if (qseecom.qsee_version < QSEE_VERSION_40) { @@ -549,19 +621,20 @@ static int qseecom_scm_call2(uint32_t svc_id, uint32_t tz_cmd_id, desc.args[2] = req_64bit->phy_addr; } __qseecom_reentrancy_check_if_no_app_blocked(smc_id); - ret = scm_call2(smc_id, &desc); + ret = __qseecom_scm_call2_locked(smc_id, &desc); break; } case QSEOS_UNLOAD_SERV_IMAGE_COMMAND: { smc_id = TZ_OS_UNLOAD_SERVICES_IMAGE_ID; desc.arginfo = TZ_OS_UNLOAD_SERVICES_IMAGE_ID_PARAM_ID; __qseecom_reentrancy_check_if_no_app_blocked(smc_id); - ret = scm_call2(smc_id, &desc); + ret = __qseecom_scm_call2_locked(smc_id, &desc); break; } case QSEOS_REGISTER_LISTENER: { struct qseecom_register_listener_ireq *req; struct qseecom_register_listener_64bit_ireq *req_64bit; + desc.arginfo = TZ_OS_REGISTER_LISTENER_ID_PARAM_ID; if (qseecom.qsee_version < QSEE_VERSION_40) { @@ -580,30 +653,29 @@ static int qseecom_scm_call2(uint32_t svc_id, uint32_t tz_cmd_id, } qseecom.smcinvoke_support = true; smc_id = TZ_OS_REGISTER_LISTENER_SMCINVOKE_ID; - __qseecom_reentrancy_check_if_no_app_blocked(smc_id); - ret = scm_call2(smc_id, &desc); - if (ret) { + ret = __qseecom_scm_call2_locked(smc_id, &desc); + if (ret == -EIO) { + /* smcinvoke is not supported */ qseecom.smcinvoke_support = false; smc_id = TZ_OS_REGISTER_LISTENER_ID; - __qseecom_reentrancy_check_if_no_app_blocked( - smc_id); - ret = scm_call2(smc_id, &desc); + ret = __qseecom_scm_call2_locked(smc_id, &desc); } break; } case QSEOS_DEREGISTER_LISTENER: { struct qseecom_unregister_listener_ireq *req; + req = (struct qseecom_unregister_listener_ireq *) req_buf; smc_id = TZ_OS_DEREGISTER_LISTENER_ID; desc.arginfo = TZ_OS_DEREGISTER_LISTENER_ID_PARAM_ID; desc.args[0] = req->listener_id; - __qseecom_reentrancy_check_if_no_app_blocked(smc_id); - ret = scm_call2(smc_id, &desc); + ret = __qseecom_scm_call2_locked(smc_id, &desc); break; } case QSEOS_LISTENER_DATA_RSP_COMMAND: { struct qseecom_client_listener_data_irsp *req; + req = (struct qseecom_client_listener_data_irsp *) req_buf; smc_id = TZ_OS_LISTENER_RESPONSE_HANDLER_ID; @@ -611,7 +683,7 @@ static int qseecom_scm_call2(uint32_t svc_id, uint32_t tz_cmd_id, TZ_OS_LISTENER_RESPONSE_HANDLER_ID_PARAM_ID; desc.args[0] = req->listener_id; desc.args[1] = req->status; - ret = scm_call2(smc_id, 
&desc); + ret = __qseecom_scm_call2_locked(smc_id, &desc); break; } case QSEOS_LISTENER_DATA_RSP_COMMAND_WHITELIST: { @@ -639,12 +711,13 @@ static int qseecom_scm_call2(uint32_t svc_id, uint32_t tz_cmd_id, desc.args[2] = req_64->sglistinfo_ptr; desc.args[3] = req_64->sglistinfo_len; } - ret = scm_call2(smc_id, &desc); + ret = __qseecom_scm_call2_locked(smc_id, &desc); break; } case QSEOS_LOAD_EXTERNAL_ELF_COMMAND: { struct qseecom_load_app_ireq *req; struct qseecom_load_app_64bit_ireq *req_64bit; + smc_id = TZ_OS_LOAD_EXTERNAL_IMAGE_ID; desc.arginfo = TZ_OS_LOAD_SERVICES_IMAGE_ID_PARAM_ID; if (qseecom.qsee_version < QSEE_VERSION_40) { @@ -660,20 +733,21 @@ static int qseecom_scm_call2(uint32_t svc_id, uint32_t tz_cmd_id, desc.args[2] = req_64bit->phy_addr; } __qseecom_reentrancy_check_if_no_app_blocked(smc_id); - ret = scm_call2(smc_id, &desc); + ret = __qseecom_scm_call2_locked(smc_id, &desc); break; } case QSEOS_UNLOAD_EXTERNAL_ELF_COMMAND: { smc_id = TZ_OS_UNLOAD_EXTERNAL_IMAGE_ID; desc.arginfo = TZ_OS_UNLOAD_SERVICES_IMAGE_ID_PARAM_ID; __qseecom_reentrancy_check_if_no_app_blocked(smc_id); - ret = scm_call2(smc_id, &desc); + ret = __qseecom_scm_call2_locked(smc_id, &desc); break; } case QSEOS_CLIENT_SEND_DATA_COMMAND: { struct qseecom_client_send_data_ireq *req; struct qseecom_client_send_data_64bit_ireq *req_64bit; + smc_id = TZ_APP_QSAPP_SEND_DATA_ID; desc.arginfo = TZ_APP_QSAPP_SEND_DATA_ID_PARAM_ID; if (qseecom.qsee_version < QSEE_VERSION_40) { @@ -694,7 +768,7 @@ static int qseecom_scm_call2(uint32_t svc_id, uint32_t tz_cmd_id, desc.args[3] = req_64bit->rsp_ptr; desc.args[4] = req_64bit->rsp_len; } - ret = scm_call2(smc_id, &desc); + ret = __qseecom_scm_call2_locked(smc_id, &desc); break; } case QSEOS_CLIENT_SEND_DATA_COMMAND_WHITELIST: { @@ -726,32 +800,33 @@ static int qseecom_scm_call2(uint32_t svc_id, uint32_t tz_cmd_id, desc.args[5] = req_64bit->sglistinfo_ptr; desc.args[6] = req_64bit->sglistinfo_len; } - ret = scm_call2(smc_id, &desc); + ret = __qseecom_scm_call2_locked(smc_id, &desc); break; } case QSEOS_RPMB_PROVISION_KEY_COMMAND: { struct qseecom_client_send_service_ireq *req; + req = (struct qseecom_client_send_service_ireq *) req_buf; smc_id = TZ_OS_RPMB_PROVISION_KEY_ID; desc.arginfo = TZ_OS_RPMB_PROVISION_KEY_ID_PARAM_ID; desc.args[0] = req->key_type; __qseecom_reentrancy_check_if_no_app_blocked(smc_id); - ret = scm_call2(smc_id, &desc); + ret = __qseecom_scm_call2_locked(smc_id, &desc); break; } case QSEOS_RPMB_ERASE_COMMAND: { smc_id = TZ_OS_RPMB_ERASE_ID; desc.arginfo = TZ_OS_RPMB_ERASE_ID_PARAM_ID; __qseecom_reentrancy_check_if_no_app_blocked(smc_id); - ret = scm_call2(smc_id, &desc); + ret = __qseecom_scm_call2_locked(smc_id, &desc); break; } case QSEOS_RPMB_CHECK_PROV_STATUS_COMMAND: { smc_id = TZ_OS_RPMB_CHECK_PROV_STATUS_ID; desc.arginfo = TZ_OS_RPMB_CHECK_PROV_STATUS_ID_PARAM_ID; __qseecom_reentrancy_check_if_no_app_blocked(smc_id); - ret = scm_call2(smc_id, &desc); + ret = __qseecom_scm_call2_locked(smc_id, &desc); break; } case QSEOS_GENERATE_KEY: { @@ -759,6 +834,7 @@ static int qseecom_scm_call2(uint32_t svc_id, uint32_t tz_cmd_id, (struct qseecom_key_generate_ireq) - sizeof(uint32_t)); char *tzbuf = kzalloc(tzbuflen, GFP_KERNEL); + if (!tzbuf) return -ENOMEM; memset(tzbuf, 0, tzbuflen); @@ -771,7 +847,7 @@ static int qseecom_scm_call2(uint32_t svc_id, uint32_t tz_cmd_id, desc.args[0] = virt_to_phys(tzbuf); desc.args[1] = tzbuflen; __qseecom_reentrancy_check_if_no_app_blocked(smc_id); - ret = scm_call2(smc_id, &desc); + ret = 
__qseecom_scm_call2_locked(smc_id, &desc); kzfree(tzbuf); break; } @@ -780,11 +856,9 @@ static int qseecom_scm_call2(uint32_t svc_id, uint32_t tz_cmd_id, (struct qseecom_key_delete_ireq) - sizeof(uint32_t)); char *tzbuf = kzalloc(tzbuflen, GFP_KERNEL); - if (!tzbuf) { - pr_err("Allocate %d bytes buffer failed\n", - tzbuflen); + + if (!tzbuf) return -ENOMEM; - } memset(tzbuf, 0, tzbuflen); memcpy(tzbuf, req_buf + sizeof(uint32_t), (sizeof(struct qseecom_key_delete_ireq) - @@ -795,7 +869,7 @@ static int qseecom_scm_call2(uint32_t svc_id, uint32_t tz_cmd_id, desc.args[0] = virt_to_phys(tzbuf); desc.args[1] = tzbuflen; __qseecom_reentrancy_check_if_no_app_blocked(smc_id); - ret = scm_call2(smc_id, &desc); + ret = __qseecom_scm_call2_locked(smc_id, &desc); kzfree(tzbuf); break; } @@ -804,11 +878,9 @@ static int qseecom_scm_call2(uint32_t svc_id, uint32_t tz_cmd_id, (struct qseecom_key_select_ireq) - sizeof(uint32_t)); char *tzbuf = kzalloc(tzbuflen, GFP_KERNEL); - if (!tzbuf) { - pr_err("Allocate %d bytes buffer failed\n", - tzbuflen); + + if (!tzbuf) return -ENOMEM; - } memset(tzbuf, 0, tzbuflen); memcpy(tzbuf, req_buf + sizeof(uint32_t), (sizeof(struct qseecom_key_select_ireq) - @@ -819,7 +891,7 @@ static int qseecom_scm_call2(uint32_t svc_id, uint32_t tz_cmd_id, desc.args[0] = virt_to_phys(tzbuf); desc.args[1] = tzbuflen; __qseecom_reentrancy_check_if_no_app_blocked(smc_id); - ret = scm_call2(smc_id, &desc); + ret = __qseecom_scm_call2_locked(smc_id, &desc); kzfree(tzbuf); break; } @@ -828,11 +900,9 @@ static int qseecom_scm_call2(uint32_t svc_id, uint32_t tz_cmd_id, (struct qseecom_key_userinfo_update_ireq) - sizeof(uint32_t)); char *tzbuf = kzalloc(tzbuflen, GFP_KERNEL); - if (!tzbuf) { - pr_err("Allocate %d bytes buffer failed\n", - tzbuflen); + + if (!tzbuf) return -ENOMEM; - } memset(tzbuf, 0, tzbuflen); memcpy(tzbuf, req_buf + sizeof(uint32_t), (sizeof (struct qseecom_key_userinfo_update_ireq) - @@ -843,13 +913,14 @@ static int qseecom_scm_call2(uint32_t svc_id, uint32_t tz_cmd_id, desc.args[0] = virt_to_phys(tzbuf); desc.args[1] = tzbuflen; __qseecom_reentrancy_check_if_no_app_blocked(smc_id); - ret = scm_call2(smc_id, &desc); + ret = __qseecom_scm_call2_locked(smc_id, &desc); kzfree(tzbuf); break; } case QSEOS_TEE_OPEN_SESSION: { struct qseecom_qteec_ireq *req; struct qseecom_qteec_64bit_ireq *req_64bit; + smc_id = TZ_APP_GPAPP_OPEN_SESSION_ID; desc.arginfo = TZ_APP_GPAPP_OPEN_SESSION_ID_PARAM_ID; if (qseecom.qsee_version < QSEE_VERSION_40) { @@ -868,7 +939,7 @@ static int qseecom_scm_call2(uint32_t svc_id, uint32_t tz_cmd_id, desc.args[3] = req_64bit->resp_ptr; desc.args[4] = req_64bit->resp_len; } - ret = scm_call2(smc_id, &desc); + ret = __qseecom_scm_call2_locked(smc_id, &desc); break; } case QSEOS_TEE_OPEN_SESSION_WHITELIST: { @@ -898,12 +969,13 @@ static int qseecom_scm_call2(uint32_t svc_id, uint32_t tz_cmd_id, desc.args[5] = req_64bit->sglistinfo_ptr; desc.args[6] = req_64bit->sglistinfo_len; } - ret = scm_call2(smc_id, &desc); + ret = __qseecom_scm_call2_locked(smc_id, &desc); break; } case QSEOS_TEE_INVOKE_COMMAND: { struct qseecom_qteec_ireq *req; struct qseecom_qteec_64bit_ireq *req_64bit; + smc_id = TZ_APP_GPAPP_INVOKE_COMMAND_ID; desc.arginfo = TZ_APP_GPAPP_INVOKE_COMMAND_ID_PARAM_ID; if (qseecom.qsee_version < QSEE_VERSION_40) { @@ -922,7 +994,7 @@ static int qseecom_scm_call2(uint32_t svc_id, uint32_t tz_cmd_id, desc.args[3] = req_64bit->resp_ptr; desc.args[4] = req_64bit->resp_len; } - ret = scm_call2(smc_id, &desc); + ret = __qseecom_scm_call2_locked(smc_id, &desc); 
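/*
 * For reference, the retry behaviour of __qseecom_scm_call2_locked() used
 * throughout the hunks above follows directly from the constants added
 * earlier in this file:
 *
 *   QSEECOM_SCM_EBUSY_WAIT_MS           30  ms slept per -EBUSY retry
 *   QSEECOM_SCM_SECURE_WORLD_BUSY_COUNT 33  -> 33 * 30 ms ~= 1 s, the point
 *                                              at which the "secure world
 *                                              has been busy for 1 second!"
 *                                              warning is printed
 *   QSEECOM_SCM_EBUSY_MAX_RETRY         67  -> up to 67 * 30 ms ~= 2 s of
 *                                              retrying before -EBUSY is
 *                                              returned to the caller
 *
 * app_access_lock is dropped around each msleep() so other ioctls are not
 * blocked while the secure world is busy.
 */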
break; } case QSEOS_TEE_INVOKE_COMMAND_WHITELIST: { @@ -952,12 +1024,13 @@ static int qseecom_scm_call2(uint32_t svc_id, uint32_t tz_cmd_id, desc.args[5] = req_64bit->sglistinfo_ptr; desc.args[6] = req_64bit->sglistinfo_len; } - ret = scm_call2(smc_id, &desc); + ret = __qseecom_scm_call2_locked(smc_id, &desc); break; } case QSEOS_TEE_CLOSE_SESSION: { struct qseecom_qteec_ireq *req; struct qseecom_qteec_64bit_ireq *req_64bit; + smc_id = TZ_APP_GPAPP_CLOSE_SESSION_ID; desc.arginfo = TZ_APP_GPAPP_CLOSE_SESSION_ID_PARAM_ID; if (qseecom.qsee_version < QSEE_VERSION_40) { @@ -976,12 +1049,13 @@ static int qseecom_scm_call2(uint32_t svc_id, uint32_t tz_cmd_id, desc.args[3] = req_64bit->resp_ptr; desc.args[4] = req_64bit->resp_len; } - ret = scm_call2(smc_id, &desc); + ret = __qseecom_scm_call2_locked(smc_id, &desc); break; } case QSEOS_TEE_REQUEST_CANCELLATION: { struct qseecom_qteec_ireq *req; struct qseecom_qteec_64bit_ireq *req_64bit; + smc_id = TZ_APP_GPAPP_REQUEST_CANCELLATION_ID; desc.arginfo = TZ_APP_GPAPP_REQUEST_CANCELLATION_ID_PARAM_ID; @@ -1001,7 +1075,7 @@ static int qseecom_scm_call2(uint32_t svc_id, uint32_t tz_cmd_id, desc.args[3] = req_64bit->resp_ptr; desc.args[4] = req_64bit->resp_len; } - ret = scm_call2(smc_id, &desc); + ret = __qseecom_scm_call2_locked(smc_id, &desc); break; } case QSEOS_CONTINUE_BLOCKED_REQ_COMMAND: { @@ -1016,11 +1090,11 @@ static int qseecom_scm_call2(uint32_t svc_id, uint32_t tz_cmd_id, desc.arginfo = TZ_OS_CONTINUE_BLOCKED_REQUEST_ID_PARAM_ID; desc.args[0] = req->app_or_session_id; - ret = scm_call2(smc_id, &desc); + ret = __qseecom_scm_call2_locked(smc_id, &desc); break; } default: { - pr_err("qseos_cmd_id 0x%d is not supported by armv8 scm_call2.\n", + pr_err("qseos_cmd_id %d is not supported by armv8 scm_call2.\n", qseos_cmd_id); ret = -EINVAL; break; @@ -1056,42 +1130,18 @@ static int qseecom_scm_call(u32 svc_id, u32 tz_cmd_id, const void *cmd_buf, return qseecom_scm_call2(svc_id, tz_cmd_id, cmd_buf, resp_buf); } -static int __qseecom_is_svc_unique(struct qseecom_dev_handle *data, - struct qseecom_register_listener_req *svc) -{ - struct qseecom_registered_listener_list *ptr; - int unique = 1; - unsigned long flags; - - spin_lock_irqsave(&qseecom.registered_listener_list_lock, flags); - list_for_each_entry(ptr, &qseecom.registered_listener_list_head, list) { - if (ptr->svc.listener_id == svc->listener_id) { - pr_err("Service id: %u is already registered\n", - ptr->svc.listener_id); - unique = 0; - break; - } - } - spin_unlock_irqrestore(&qseecom.registered_listener_list_lock, flags); - return unique; -} - static struct qseecom_registered_listener_list *__qseecom_find_svc( int32_t listener_id) { struct qseecom_registered_listener_list *entry = NULL; - unsigned long flags; - spin_lock_irqsave(&qseecom.registered_listener_list_lock, flags); - list_for_each_entry(entry, &qseecom.registered_listener_list_head, list) - { + list_for_each_entry(entry, + &qseecom.registered_listener_list_head, list) { if (entry->svc.listener_id == listener_id) break; } - spin_unlock_irqrestore(&qseecom.registered_listener_list_lock, flags); - if ((entry != NULL) && (entry->svc.listener_id != listener_id)) { - pr_err("Service id: %u is not found\n", listener_id); + pr_debug("Service id: %u is not found\n", listener_id); return NULL; } @@ -1151,8 +1201,14 @@ static int __qseecom_set_sb_memory(struct qseecom_registered_listener_list *svc, resp.result = QSEOS_RESULT_INCOMPLETE; + mutex_unlock(&listener_access_lock); + mutex_lock(&app_access_lock); + 
__qseecom_reentrancy_check_if_no_app_blocked( + TZ_OS_REGISTER_LISTENER_SMCINVOKE_ID); ret = qseecom_scm_call(SCM_SVC_TZSCHEDULER, 1, cmd_buf, cmd_len, &resp, sizeof(resp)); + mutex_unlock(&app_access_lock); + mutex_lock(&listener_access_lock); if (ret) { pr_err("qseecom_scm_call failed with err: %d\n", ret); return -EINVAL; @@ -1170,9 +1226,9 @@ static int qseecom_register_listener(struct qseecom_dev_handle *data, void __user *argp) { int ret = 0; - unsigned long flags; struct qseecom_register_listener_req rcvd_lstnr; struct qseecom_registered_listener_list *new_entry; + struct qseecom_registered_listener_list *ptr_svc; ret = copy_from_user(&rcvd_lstnr, argp, sizeof(rcvd_lstnr)); if (ret) { @@ -1183,18 +1239,33 @@ static int qseecom_register_listener(struct qseecom_dev_handle *data, rcvd_lstnr.sb_size)) return -EFAULT; - data->listener.id = 0; - if (!__qseecom_is_svc_unique(data, &rcvd_lstnr)) { - pr_err("Service is not unique and is already registered\n"); - data->released = true; - return -EBUSY; + ptr_svc = __qseecom_find_svc(rcvd_lstnr.listener_id); + if (ptr_svc) { + if (ptr_svc->unregister_pending == false) { + pr_err("Service %d is not unique\n", + rcvd_lstnr.listener_id); + data->released = true; + return -EBUSY; + } + /*wait until listener is unregistered*/ + pr_debug("register %d has to wait\n", + rcvd_lstnr.listener_id); + mutex_unlock(&listener_access_lock); + ret = wait_event_interruptible( + qseecom.register_lsnr_pending_wq, + list_empty( + &qseecom.unregister_lsnr_pending_list_head)); + if (ret) { + pr_err("interrupted register_pending_wq %d\n", + rcvd_lstnr.listener_id); + mutex_lock(&listener_access_lock); + return -ERESTARTSYS; + } + mutex_lock(&listener_access_lock); } - new_entry = kzalloc(sizeof(*new_entry), GFP_KERNEL); - if (!new_entry) { - pr_err("kmalloc failed\n"); + if (!new_entry) return -ENOMEM; - } memcpy(&new_entry->svc, &rcvd_lstnr, sizeof(rcvd_lstnr)); new_entry->rcv_req_flag = 0; @@ -1202,30 +1273,28 @@ static int qseecom_register_listener(struct qseecom_dev_handle *data, new_entry->sb_length = rcvd_lstnr.sb_size; new_entry->user_virt_sb_base = rcvd_lstnr.virt_sb_base; if (__qseecom_set_sb_memory(new_entry, data, &rcvd_lstnr)) { - pr_err("qseecom_set_sb_memoryfailed\n"); + pr_err("qseecom_set_sb_memory failed for listener %d, size %d\n", + rcvd_lstnr.listener_id, rcvd_lstnr.sb_size); kzfree(new_entry); return -ENOMEM; } - data->listener.id = rcvd_lstnr.listener_id; init_waitqueue_head(&new_entry->rcv_req_wq); init_waitqueue_head(&new_entry->listener_block_app_wq); new_entry->send_resp_flag = 0; new_entry->listener_in_use = false; - spin_lock_irqsave(&qseecom.registered_listener_list_lock, flags); list_add_tail(&new_entry->list, &qseecom.registered_listener_list_head); - spin_unlock_irqrestore(&qseecom.registered_listener_list_lock, flags); + data->listener.id = rcvd_lstnr.listener_id; + pr_debug("Service %d is registered\n", rcvd_lstnr.listener_id); return ret; } -static int qseecom_unregister_listener(struct qseecom_dev_handle *data) +static int __qseecom_unregister_listener(struct qseecom_dev_handle *data, + struct qseecom_registered_listener_list *ptr_svc) { int ret = 0; - unsigned long flags; - uint32_t unmap_mem = 0; struct qseecom_register_listener_ireq req; - struct qseecom_registered_listener_list *ptr_svc = NULL; struct qseecom_command_scm_resp resp; struct ion_handle *ihandle = NULL; /* Retrieve phy addr */ @@ -1233,68 +1302,157 @@ static int qseecom_unregister_listener(struct qseecom_dev_handle *data) req.listener_id = data->listener.id; 
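/*
 * The listener paths above repeatedly open-code the same lock dance:
 * listener_access_lock is dropped and app_access_lock taken around the SCM
 * call (so calls into TZ stay serialized by app_access_lock alone and the
 * two mutexes are never held together across a sleeping call), then the
 * original ordering is restored for the caller.  A hypothetical helper
 * capturing that convention -- not part of this patch, shown only to make
 * the ordering explicit -- would be:
 */
static int __qseecom_listener_scm_call(const void *req, size_t req_len,
				       struct qseecom_command_scm_resp *resp)
{
	int ret;

	/* caller holds listener_access_lock */
	mutex_unlock(&listener_access_lock);
	mutex_lock(&app_access_lock);
	ret = qseecom_scm_call(SCM_SVC_TZSCHEDULER, 1, req, req_len,
			       resp, sizeof(*resp));
	mutex_unlock(&app_access_lock);
	mutex_lock(&listener_access_lock);
	return ret;
}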
resp.result = QSEOS_RESULT_INCOMPLETE; + mutex_unlock(&listener_access_lock); + mutex_lock(&app_access_lock); + __qseecom_reentrancy_check_if_no_app_blocked( + TZ_OS_DEREGISTER_LISTENER_ID); ret = qseecom_scm_call(SCM_SVC_TZSCHEDULER, 1, &req, sizeof(req), &resp, sizeof(resp)); + mutex_unlock(&app_access_lock); + mutex_lock(&listener_access_lock); if (ret) { pr_err("scm_call() failed with err: %d (lstnr id=%d)\n", ret, data->listener.id); - return ret; + if (ret == -EBUSY) + return ret; + goto exit; } if (resp.result != QSEOS_RESULT_SUCCESS) { pr_err("Failed resp.result=%d,(lstnr id=%d)\n", resp.result, data->listener.id); - return -EPERM; - } - - data->abort = 1; - spin_lock_irqsave(&qseecom.registered_listener_list_lock, flags); - list_for_each_entry(ptr_svc, &qseecom.registered_listener_list_head, - list) { - if (ptr_svc->svc.listener_id == data->listener.id) { - wake_up_all(&ptr_svc->rcv_req_wq); - break; - } + ret = -EPERM; + goto exit; } - spin_unlock_irqrestore(&qseecom.registered_listener_list_lock, flags); while (atomic_read(&data->ioctl_count) > 1) { - if (wait_event_freezable(data->abort_wq, + if (wait_event_interruptible(data->abort_wq, atomic_read(&data->ioctl_count) <= 1)) { pr_err("Interrupted from abort\n"); ret = -ERESTARTSYS; - return ret; } } - spin_lock_irqsave(&qseecom.registered_listener_list_lock, flags); - list_for_each_entry(ptr_svc, - &qseecom.registered_listener_list_head, - list) - { - if (ptr_svc->svc.listener_id == data->listener.id) { - if (ptr_svc->sb_virt) { - unmap_mem = 1; - ihandle = ptr_svc->ihandle; - } - list_del(&ptr_svc->list); - kzfree(ptr_svc); - break; - } - } - spin_unlock_irqrestore(&qseecom.registered_listener_list_lock, flags); - - /* Unmap the memory */ - if (unmap_mem) { +exit: + if (ptr_svc->sb_virt) { + ihandle = ptr_svc->ihandle; if (!IS_ERR_OR_NULL(ihandle)) { ion_unmap_kernel(qseecom.ion_clnt, ihandle); ion_free(qseecom.ion_clnt, ihandle); - } + } } + list_del(&ptr_svc->list); + kzfree(ptr_svc); + data->released = true; + pr_debug("Service %d is unregistered\n", data->listener.id); return ret; } +static int qseecom_unregister_listener(struct qseecom_dev_handle *data) +{ + struct qseecom_registered_listener_list *ptr_svc = NULL; + struct qseecom_unregister_pending_list *entry = NULL; + + if (data->released) { + pr_err("Don't unregister lsnr %d\n", data->listener.id); + return -EINVAL; + } + + ptr_svc = __qseecom_find_svc(data->listener.id); + if (!ptr_svc) { + pr_err("Unregiser invalid listener ID %d\n", data->listener.id); + return -ENODATA; + } + /* stop CA thread waiting for listener response */ + ptr_svc->abort = 1; + wake_up_interruptible_all(&qseecom.send_resp_wq); + + /* stop listener thread waiting for listener request */ + data->abort = 1; + wake_up_all(&ptr_svc->rcv_req_wq); + + /* return directly if pending*/ + if (ptr_svc->unregister_pending) + return 0; + + /*add unregistration into pending list*/ + entry = kzalloc(sizeof(*entry), GFP_KERNEL); + if (!entry) + return -ENOMEM; + entry->data = data; + list_add_tail(&entry->list, + &qseecom.unregister_lsnr_pending_list_head); + ptr_svc->unregister_pending = true; + pr_debug("unregister %d pending\n", data->listener.id); + return 0; +} + +static void __qseecom_processing_pending_lsnr_unregister(void) +{ + struct qseecom_unregister_pending_list *entry = NULL; + struct qseecom_registered_listener_list *ptr_svc = NULL; + struct list_head *pos; + int ret = 0; + + mutex_lock(&listener_access_lock); + while (!list_empty(&qseecom.unregister_lsnr_pending_list_head)) { + pos = 
qseecom.unregister_lsnr_pending_list_head.next; + entry = list_entry(pos, + struct qseecom_unregister_pending_list, list); + if (entry && entry->data) { + pr_debug("process pending unregister %d\n", + entry->data->listener.id); + /* don't process if qseecom_release is not called*/ + if (!entry->data->listener.release_called) + break; + ptr_svc = __qseecom_find_svc( + entry->data->listener.id); + if (ptr_svc) { + ret = __qseecom_unregister_listener( + entry->data, ptr_svc); + if (ret == -EBUSY) { + pr_debug("unregister %d pending again\n", + entry->data->listener.id); + mutex_unlock(&listener_access_lock); + return; + } + } else + pr_err("invalid listener %d\n", + entry->data->listener.id); + kzfree(entry->data); + } + list_del(pos); + kzfree(entry); + } + mutex_unlock(&listener_access_lock); + wake_up_interruptible(&qseecom.register_lsnr_pending_wq); +} + +static void __wakeup_unregister_listener_kthread(void) +{ + atomic_set(&qseecom.unregister_lsnr_kthread_state, + LSNR_UNREG_KT_WAKEUP); + wake_up_interruptible(&qseecom.unregister_lsnr_kthread_wq); +} + +static int __qseecom_unregister_listener_kthread_func(void *data) +{ + while (!kthread_should_stop()) { + wait_event_interruptible( + qseecom.unregister_lsnr_kthread_wq, + atomic_read(&qseecom.unregister_lsnr_kthread_state) + == LSNR_UNREG_KT_WAKEUP); + pr_debug("kthread to unregister listener is called %d\n", + atomic_read(&qseecom.unregister_lsnr_kthread_state)); + __qseecom_processing_pending_lsnr_unregister(); + atomic_set(&qseecom.unregister_lsnr_kthread_state, + LSNR_UNREG_KT_SLEEP); + } + pr_warn("kthread to unregister listener stopped\n"); + return 0; +} + static int __qseecom_set_msm_bus_request(uint32_t mode) { int ret = 0; @@ -1343,39 +1501,35 @@ static void qseecom_bw_inactive_req_work(struct work_struct *work) qseecom.timer_running = false; mutex_unlock(&qsee_bw_mutex); mutex_unlock(&app_access_lock); - return; } static void qseecom_scale_bus_bandwidth_timer_callback(unsigned long data) { schedule_work(&qseecom.bw_inactive_req_ws); - return; } static int __qseecom_decrease_clk_ref_count(enum qseecom_ce_hw_instance ce) { struct qseecom_clk *qclk; int ret = 0; + mutex_lock(&clk_access_lock); if (ce == CLK_QSEE) qclk = &qseecom.qsee; else qclk = &qseecom.ce_drv; - if (qclk->clk_access_cnt > 2) { + if (qclk->clk_access_cnt > 0) { + qclk->clk_access_cnt--; + } else { pr_err("Invalid clock ref count %d\n", qclk->clk_access_cnt); ret = -EINVAL; - goto err_dec_ref_cnt; } - if (qclk->clk_access_cnt == 2) - qclk->clk_access_cnt--; -err_dec_ref_cnt: mutex_unlock(&clk_access_lock); return ret; } - static int qseecom_scale_bus_bandwidth_timer(uint32_t mode) { int32_t ret = 0; @@ -1445,6 +1599,7 @@ static int __qseecom_register_bus_bandwidth_needs( static int qseecom_perf_enable(struct qseecom_dev_handle *data) { int ret = 0; + ret = qsee_vote_for_clock(data, CLK_DFAB); if (ret) { pr_err("Failed to vote for DFAB clock with err %d\n", ret); @@ -1481,9 +1636,9 @@ static int qseecom_scale_bus_bandwidth(struct qseecom_dev_handle *data, } /* - * Register bus bandwidth needs if bus scaling feature is enabled; - * otherwise, qseecom enable/disable clocks for the client directly. - */ + * Register bus bandwidth needs if bus scaling feature is enabled; + * otherwise, qseecom enable/disable clocks for the client directly. 
+ */ if (qseecom.support_bus_scaling) { mutex_lock(&qsee_bw_mutex); ret = __qseecom_register_bus_bandwidth_needs(data, req_mode); @@ -1526,12 +1681,12 @@ static void __qseecom_disable_clk_scale_down(struct qseecom_dev_handle *data) else __qseecom_add_bw_scale_down_timer( QSEECOM_LOAD_APP_CRYPTO_TIMEOUT); - return; } static int __qseecom_enable_clk_scale_up(struct qseecom_dev_handle *data) { int ret = 0; + if (qseecom.support_bus_scaling) { ret = qseecom_scale_bus_bandwidth_timer(MEDIUM); if (ret) @@ -1583,7 +1738,7 @@ static int qseecom_set_client_mem_param(struct qseecom_dev_handle *data, } if (len < req.sb_len) { - pr_err("Requested length (0x%x) is > allocated (0x%zu)\n", + pr_err("Requested length (0x%x) is > allocated (%zu)\n", req.sb_len, len); return -EINVAL; } @@ -1600,11 +1755,13 @@ static int qseecom_set_client_mem_param(struct qseecom_dev_handle *data, return 0; } -static int __qseecom_listener_has_sent_rsp(struct qseecom_dev_handle *data) +static int __qseecom_listener_has_sent_rsp(struct qseecom_dev_handle *data, + struct qseecom_registered_listener_list *ptr_svc) { int ret; + ret = (qseecom.send_resp_flag != 0); - return ret || data->abort; + return ret || data->abort || ptr_svc->abort; } static int __qseecom_reentrancy_listener_has_sent_rsp( @@ -1614,56 +1771,7 @@ static int __qseecom_reentrancy_listener_has_sent_rsp( int ret; ret = (ptr_svc->send_resp_flag != 0); - return ret || data->abort; -} - -static int __qseecom_qseos_fail_return_resp_tz(struct qseecom_dev_handle *data, - struct qseecom_command_scm_resp *resp, - struct qseecom_client_listener_data_irsp *send_data_rsp, - struct qseecom_registered_listener_list *ptr_svc, - uint32_t lstnr) { - int ret = 0; - - send_data_rsp->status = QSEOS_RESULT_FAILURE; - qseecom.send_resp_flag = 0; - send_data_rsp->qsee_cmd_id = QSEOS_LISTENER_DATA_RSP_COMMAND; - send_data_rsp->listener_id = lstnr; - if (ptr_svc) - pr_warn("listener_id:%x, lstnr: %x\n", - ptr_svc->svc.listener_id, lstnr); - if (ptr_svc && ptr_svc->ihandle) { - ret = msm_ion_do_cache_op(qseecom.ion_clnt, ptr_svc->ihandle, - ptr_svc->sb_virt, ptr_svc->sb_length, - ION_IOC_CLEAN_INV_CACHES); - if (ret) { - pr_err("cache operation failed %d\n", ret); - return ret; - } - } - - if (lstnr == RPMB_SERVICE) { - ret = __qseecom_enable_clk(CLK_QSEE); - if (ret) - return ret; - } - ret = qseecom_scm_call(SCM_SVC_TZSCHEDULER, 1, send_data_rsp, - sizeof(send_data_rsp), resp, sizeof(*resp)); - if (ret) { - pr_err("scm_call() failed with err: %d (app_id = %d)\n", - ret, data->client.app_id); - if (lstnr == RPMB_SERVICE) - __qseecom_disable_clk(CLK_QSEE); - return ret; - } - if ((resp->result != QSEOS_RESULT_SUCCESS) && - (resp->result != QSEOS_RESULT_INCOMPLETE)) { - pr_err("fail:resp res= %d,app_id = %d,lstr = %d\n", - resp->result, data->client.app_id, lstnr); - ret = -EINVAL; - } - if (lstnr == RPMB_SERVICE) - __qseecom_disable_clk(CLK_QSEE); - return ret; + return ret || data->abort || ptr_svc->abort; } static void __qseecom_clean_listener_sglistinfo( @@ -1682,9 +1790,9 @@ static int __qseecom_process_incomplete_cmd(struct qseecom_dev_handle *data, int ret = 0; int rc = 0; uint32_t lstnr; - unsigned long flags; - struct qseecom_client_listener_data_irsp send_data_rsp; - struct qseecom_client_listener_data_64bit_irsp send_data_rsp_64bit; + struct qseecom_client_listener_data_irsp send_data_rsp = {0}; + struct qseecom_client_listener_data_64bit_irsp send_data_rsp_64bit + = {0}; struct qseecom_registered_listener_list *ptr_svc = NULL; sigset_t new_sigset; sigset_t old_sigset; @@ 
-1693,13 +1801,13 @@ static int __qseecom_process_incomplete_cmd(struct qseecom_dev_handle *data, size_t cmd_len; struct sglist_info *table = NULL; + qseecom.app_block_ref_cnt++; while (resp->result == QSEOS_RESULT_INCOMPLETE) { lstnr = resp->data; /* * Wake up blocking lsitener service with the lstnr id */ - spin_lock_irqsave(&qseecom.registered_listener_list_lock, - flags); + mutex_lock(&listener_access_lock); list_for_each_entry(ptr_svc, &qseecom.registered_listener_list_head, list) { if (ptr_svc->svc.listener_id == lstnr) { @@ -1709,29 +1817,38 @@ static int __qseecom_process_incomplete_cmd(struct qseecom_dev_handle *data, break; } } - spin_unlock_irqrestore(&qseecom.registered_listener_list_lock, - flags); if (ptr_svc == NULL) { pr_err("Listener Svc %d does not exist\n", lstnr); - __qseecom_qseos_fail_return_resp_tz(data, resp, - &send_data_rsp, ptr_svc, lstnr); - return -EINVAL; + rc = -EINVAL; + status = QSEOS_RESULT_FAILURE; + goto err_resp; } if (!ptr_svc->ihandle) { pr_err("Client handle is not initialized\n"); - __qseecom_qseos_fail_return_resp_tz(data, resp, - &send_data_rsp, ptr_svc, lstnr); - return -EINVAL; + rc = -EINVAL; + status = QSEOS_RESULT_FAILURE; + goto err_resp; } if (ptr_svc->svc.listener_id != lstnr) { - pr_warn("Service requested does not exist\n"); - __qseecom_qseos_fail_return_resp_tz(data, resp, - &send_data_rsp, NULL, lstnr); - return -ERESTARTSYS; + pr_err("Service %d does not exist\n", + lstnr); + rc = -ERESTARTSYS; + ptr_svc = NULL; + status = QSEOS_RESULT_FAILURE; + goto err_resp; + } + + if (ptr_svc->abort == 1) { + pr_debug("Service %d abort %d\n", + lstnr, ptr_svc->abort); + rc = -ENODEV; + status = QSEOS_RESULT_FAILURE; + goto err_resp; } + pr_debug("waking up rcv_req_wq and waiting for send_resp_wq\n"); /* initialize the new signal mask with all signals*/ @@ -1739,6 +1856,7 @@ static int __qseecom_process_incomplete_cmd(struct qseecom_dev_handle *data, /* block all signals */ sigprocmask(SIG_SETMASK, &new_sigset, &old_sigset); + mutex_unlock(&listener_access_lock); do { /* * When reentrancy is not supported, check global @@ -1746,22 +1864,23 @@ static int __qseecom_process_incomplete_cmd(struct qseecom_dev_handle *data, * send_resp_flag. 
*/ if (!qseecom.qsee_reentrancy_support && - !wait_event_freezable(qseecom.send_resp_wq, - __qseecom_listener_has_sent_rsp(data))) { - break; + !wait_event_interruptible(qseecom.send_resp_wq, + __qseecom_listener_has_sent_rsp( + data, ptr_svc))) { + break; } if (qseecom.qsee_reentrancy_support && - !wait_event_freezable(qseecom.send_resp_wq, + !wait_event_interruptible(qseecom.send_resp_wq, __qseecom_reentrancy_listener_has_sent_rsp( data, ptr_svc))) { - break; + break; } } while (1); - + mutex_lock(&listener_access_lock); /* restore signal mask */ sigprocmask(SIG_SETMASK, &old_sigset, NULL); - if (data->abort) { + if (data->abort || ptr_svc->abort) { pr_err("Abort clnt %d waiting on lstnr svc %d, ret %d", data->client.app_id, lstnr, ret); rc = -ENODEV; @@ -1769,76 +1888,91 @@ static int __qseecom_process_incomplete_cmd(struct qseecom_dev_handle *data, } else { status = QSEOS_RESULT_SUCCESS; } - +err_resp: qseecom.send_resp_flag = 0; - ptr_svc->send_resp_flag = 0; - table = ptr_svc->sglistinfo_ptr; + if (ptr_svc) { + ptr_svc->send_resp_flag = 0; + table = ptr_svc->sglistinfo_ptr; + } if (qseecom.qsee_version < QSEE_VERSION_40) { send_data_rsp.listener_id = lstnr; send_data_rsp.status = status; - send_data_rsp.sglistinfo_ptr = - (uint32_t)virt_to_phys(table); - send_data_rsp.sglistinfo_len = - SGLISTINFO_TABLE_SIZE; - dmac_flush_range((void *)table, - (void *)table + SGLISTINFO_TABLE_SIZE); + if (table) { + send_data_rsp.sglistinfo_ptr = + (uint32_t)virt_to_phys(table); + send_data_rsp.sglistinfo_len = + SGLISTINFO_TABLE_SIZE; + dmac_flush_range((void *)table, + (void *)table + SGLISTINFO_TABLE_SIZE); + } cmd_buf = (void *)&send_data_rsp; cmd_len = sizeof(send_data_rsp); } else { send_data_rsp_64bit.listener_id = lstnr; send_data_rsp_64bit.status = status; - send_data_rsp_64bit.sglistinfo_ptr = - virt_to_phys(table); - send_data_rsp_64bit.sglistinfo_len = - SGLISTINFO_TABLE_SIZE; - dmac_flush_range((void *)table, - (void *)table + SGLISTINFO_TABLE_SIZE); + if (table) { + send_data_rsp_64bit.sglistinfo_ptr = + virt_to_phys(table); + send_data_rsp_64bit.sglistinfo_len = + SGLISTINFO_TABLE_SIZE; + dmac_flush_range((void *)table, + (void *)table + SGLISTINFO_TABLE_SIZE); + } cmd_buf = (void *)&send_data_rsp_64bit; cmd_len = sizeof(send_data_rsp_64bit); } - if (qseecom.whitelist_support == false) + if (qseecom.whitelist_support == false || table == NULL) *(uint32_t *)cmd_buf = QSEOS_LISTENER_DATA_RSP_COMMAND; else *(uint32_t *)cmd_buf = QSEOS_LISTENER_DATA_RSP_COMMAND_WHITELIST; - if (ptr_svc) { + if (ptr_svc && ptr_svc->ihandle) { ret = msm_ion_do_cache_op(qseecom.ion_clnt, ptr_svc->ihandle, ptr_svc->sb_virt, ptr_svc->sb_length, ION_IOC_CLEAN_INV_CACHES); if (ret) { pr_err("cache operation failed %d\n", ret); - return ret; + goto exit; } } if ((lstnr == RPMB_SERVICE) || (lstnr == SSD_SERVICE)) { ret = __qseecom_enable_clk(CLK_QSEE); if (ret) - return ret; + goto exit; } ret = qseecom_scm_call(SCM_SVC_TZSCHEDULER, 1, cmd_buf, cmd_len, resp, sizeof(*resp)); - ptr_svc->listener_in_use = false; - __qseecom_clean_listener_sglistinfo(ptr_svc); + if (ptr_svc) { + ptr_svc->listener_in_use = false; + __qseecom_clean_listener_sglistinfo(ptr_svc); + } if (ret) { pr_err("scm_call() failed with err: %d (app_id = %d)\n", ret, data->client.app_id); if ((lstnr == RPMB_SERVICE) || (lstnr == SSD_SERVICE)) __qseecom_disable_clk(CLK_QSEE); - return ret; + goto exit; } + pr_debug("resp status %d, res= %d, app_id = %d, lstr = %d\n", + status, resp->result, data->client.app_id, lstnr); if ((resp->result != 
QSEOS_RESULT_SUCCESS) && (resp->result != QSEOS_RESULT_INCOMPLETE)) { pr_err("fail:resp res= %d,app_id = %d,lstr = %d\n", resp->result, data->client.app_id, lstnr); ret = -EINVAL; + goto exit; } +exit: + mutex_unlock(&listener_access_lock); if ((lstnr == RPMB_SERVICE) || (lstnr == SSD_SERVICE)) __qseecom_disable_clk(CLK_QSEE); } + qseecom.app_block_ref_cnt--; + wake_up_interruptible_all(&qseecom.app_block_wq); if (rc) return rc; @@ -1859,6 +1993,7 @@ static int __qseecom_process_reentrancy_blocked_on_listener( sigset_t old_sigset; unsigned long flags; bool found_app = false; + struct qseecom_registered_app_list dummy_app_entry = { {NULL} }; if (!resp || !data) { pr_err("invalid resp or data pointer\n"); @@ -1868,33 +2003,42 @@ static int __qseecom_process_reentrancy_blocked_on_listener( /* find app_id & img_name from list */ if (!ptr_app) { - spin_lock_irqsave(&qseecom.registered_app_list_lock, flags); - list_for_each_entry(ptr_app, &qseecom.registered_app_list_head, - list) { - if ((ptr_app->app_id == data->client.app_id) && - (!strcmp(ptr_app->app_name, + if (data->client.from_smcinvoke) { + pr_debug("This request is from smcinvoke\n"); + ptr_app = &dummy_app_entry; + ptr_app->app_id = data->client.app_id; + } else { + spin_lock_irqsave(&qseecom.registered_app_list_lock, + flags); + list_for_each_entry(ptr_app, + &qseecom.registered_app_list_head, list) { + if ((ptr_app->app_id == data->client.app_id) && + (!strcmp(ptr_app->app_name, data->client.app_name))) { - found_app = true; - break; + found_app = true; + break; + } + } + spin_unlock_irqrestore( + &qseecom.registered_app_list_lock, flags); + if (!found_app) { + pr_err("app_id %d (%s) is not found\n", + data->client.app_id, + (char *)data->client.app_name); + ret = -ENOENT; + goto exit; } - } - spin_unlock_irqrestore(&qseecom.registered_app_list_lock, - flags); - if (!found_app) { - pr_err("app_id %d (%s) is not found\n", - data->client.app_id, - (char *)data->client.app_name); - ret = -ENOENT; - goto exit; } } do { session_id = resp->resp_type; + mutex_lock(&listener_access_lock); list_ptr = __qseecom_find_svc(resp->data); if (!list_ptr) { pr_err("Invalid listener ID %d\n", resp->data); ret = -ENODATA; + mutex_unlock(&listener_access_lock); goto exit; } ptr_app->blocked_on_listener_id = resp->data; @@ -1910,11 +2054,13 @@ static int __qseecom_process_reentrancy_blocked_on_listener( do { qseecom.app_block_ref_cnt++; ptr_app->app_blocked = true; + mutex_unlock(&listener_access_lock); mutex_unlock(&app_access_lock); - wait_event_freezable( + wait_event_interruptible( list_ptr->listener_block_app_wq, !list_ptr->listener_in_use); mutex_lock(&app_access_lock); + mutex_lock(&listener_access_lock); ptr_app->app_blocked = false; qseecom.app_block_ref_cnt--; } while (list_ptr->listener_in_use); @@ -1947,9 +2093,11 @@ static int __qseecom_process_reentrancy_blocked_on_listener( if (ret) { pr_err("unblock app %d or session %d fail\n", data->client.app_id, session_id); + mutex_unlock(&listener_access_lock); goto exit; } } + mutex_unlock(&listener_access_lock); resp->result = continue_resp.result; resp->resp_type = continue_resp.resp_type; resp->data = continue_resp.data; @@ -1971,9 +2119,9 @@ static int __qseecom_reentrancy_process_incomplete_cmd( int ret = 0; int rc = 0; uint32_t lstnr; - unsigned long flags; - struct qseecom_client_listener_data_irsp send_data_rsp; - struct qseecom_client_listener_data_64bit_irsp send_data_rsp_64bit; + struct qseecom_client_listener_data_irsp send_data_rsp = {0}; + struct 
qseecom_client_listener_data_64bit_irsp send_data_rsp_64bit + = {0}; struct qseecom_registered_listener_list *ptr_svc = NULL; sigset_t new_sigset; sigset_t old_sigset; @@ -1982,13 +2130,12 @@ static int __qseecom_reentrancy_process_incomplete_cmd( size_t cmd_len; struct sglist_info *table = NULL; - while (ret == 0 && rc == 0 && resp->result == QSEOS_RESULT_INCOMPLETE) { + while (ret == 0 && resp->result == QSEOS_RESULT_INCOMPLETE) { lstnr = resp->data; /* * Wake up blocking lsitener service with the lstnr id */ - spin_lock_irqsave(&qseecom.registered_listener_list_lock, - flags); + mutex_lock(&listener_access_lock); list_for_each_entry(ptr_svc, &qseecom.registered_listener_list_head, list) { if (ptr_svc->svc.listener_id == lstnr) { @@ -1998,23 +2145,38 @@ static int __qseecom_reentrancy_process_incomplete_cmd( break; } } - spin_unlock_irqrestore(&qseecom.registered_listener_list_lock, - flags); if (ptr_svc == NULL) { pr_err("Listener Svc %d does not exist\n", lstnr); - return -EINVAL; + rc = -EINVAL; + status = QSEOS_RESULT_FAILURE; + goto err_resp; } if (!ptr_svc->ihandle) { pr_err("Client handle is not initialized\n"); - return -EINVAL; + rc = -EINVAL; + status = QSEOS_RESULT_FAILURE; + goto err_resp; } if (ptr_svc->svc.listener_id != lstnr) { - pr_warn("Service requested does not exist\n"); - return -ERESTARTSYS; + pr_err("Service %d does not exist\n", + lstnr); + rc = -ERESTARTSYS; + ptr_svc = NULL; + status = QSEOS_RESULT_FAILURE; + goto err_resp; + } + + if (ptr_svc->abort == 1) { + pr_debug("Service %d abort %d\n", + lstnr, ptr_svc->abort); + rc = -ENODEV; + status = QSEOS_RESULT_FAILURE; + goto err_resp; } + pr_debug("waking up rcv_req_wq and waiting for send_resp_wq\n"); /* initialize the new signal mask with all signals*/ @@ -2024,22 +2186,24 @@ static int __qseecom_reentrancy_process_incomplete_cmd( sigprocmask(SIG_SETMASK, &new_sigset, &old_sigset); /* unlock mutex btw waking listener and sleep-wait */ + mutex_unlock(&listener_access_lock); mutex_unlock(&app_access_lock); do { - if (!wait_event_freezable(qseecom.send_resp_wq, + if (!wait_event_interruptible(qseecom.send_resp_wq, __qseecom_reentrancy_listener_has_sent_rsp( data, ptr_svc))) { - break; + break; } } while (1); /* lock mutex again after resp sent */ mutex_lock(&app_access_lock); + mutex_lock(&listener_access_lock); ptr_svc->send_resp_flag = 0; qseecom.send_resp_flag = 0; /* restore signal mask */ sigprocmask(SIG_SETMASK, &old_sigset, NULL); - if (data->abort) { + if (data->abort || ptr_svc->abort) { pr_err("Abort clnt %d waiting on lstnr svc %d, ret %d", data->client.app_id, lstnr, ret); rc = -ENODEV; @@ -2047,55 +2211,64 @@ static int __qseecom_reentrancy_process_incomplete_cmd( } else { status = QSEOS_RESULT_SUCCESS; } - table = ptr_svc->sglistinfo_ptr; +err_resp: + if (ptr_svc) + table = ptr_svc->sglistinfo_ptr; if (qseecom.qsee_version < QSEE_VERSION_40) { send_data_rsp.listener_id = lstnr; send_data_rsp.status = status; - send_data_rsp.sglistinfo_ptr = - (uint32_t)virt_to_phys(table); - send_data_rsp.sglistinfo_len = SGLISTINFO_TABLE_SIZE; - dmac_flush_range((void *)table, - (void *)table + SGLISTINFO_TABLE_SIZE); + if (table) { + send_data_rsp.sglistinfo_ptr = + (uint32_t)virt_to_phys(table); + send_data_rsp.sglistinfo_len = + SGLISTINFO_TABLE_SIZE; + dmac_flush_range((void *)table, + (void *)table + SGLISTINFO_TABLE_SIZE); + } cmd_buf = (void *)&send_data_rsp; cmd_len = sizeof(send_data_rsp); } else { send_data_rsp_64bit.listener_id = lstnr; send_data_rsp_64bit.status = status; - 
send_data_rsp_64bit.sglistinfo_ptr = - virt_to_phys(table); - send_data_rsp_64bit.sglistinfo_len = - SGLISTINFO_TABLE_SIZE; - dmac_flush_range((void *)table, - (void *)table + SGLISTINFO_TABLE_SIZE); + if (table) { + send_data_rsp_64bit.sglistinfo_ptr = + virt_to_phys(table); + send_data_rsp_64bit.sglistinfo_len = + SGLISTINFO_TABLE_SIZE; + dmac_flush_range((void *)table, + (void *)table + SGLISTINFO_TABLE_SIZE); + } cmd_buf = (void *)&send_data_rsp_64bit; cmd_len = sizeof(send_data_rsp_64bit); } - if (qseecom.whitelist_support == false) + if (qseecom.whitelist_support == false || table == NULL) *(uint32_t *)cmd_buf = QSEOS_LISTENER_DATA_RSP_COMMAND; else *(uint32_t *)cmd_buf = QSEOS_LISTENER_DATA_RSP_COMMAND_WHITELIST; - if (ptr_svc) { + if (ptr_svc && ptr_svc->ihandle) { ret = msm_ion_do_cache_op(qseecom.ion_clnt, ptr_svc->ihandle, ptr_svc->sb_virt, ptr_svc->sb_length, ION_IOC_CLEAN_INV_CACHES); if (ret) { pr_err("cache operation failed %d\n", ret); - return ret; + goto exit; } } if (lstnr == RPMB_SERVICE) { ret = __qseecom_enable_clk(CLK_QSEE); if (ret) - return ret; + goto exit; } ret = qseecom_scm_call(SCM_SVC_TZSCHEDULER, 1, cmd_buf, cmd_len, resp, sizeof(*resp)); - ptr_svc->listener_in_use = false; - __qseecom_clean_listener_sglistinfo(ptr_svc); - wake_up_interruptible(&ptr_svc->listener_block_app_wq); + if (ptr_svc) { + ptr_svc->listener_in_use = false; + __qseecom_clean_listener_sglistinfo(ptr_svc); + wake_up_interruptible(&ptr_svc->listener_block_app_wq); + } if (ret) { pr_err("scm_call() failed with err: %d (app_id = %d)\n", @@ -2113,8 +2286,10 @@ static int __qseecom_reentrancy_process_incomplete_cmd( ret = -EINVAL; goto exit; } + mutex_unlock(&listener_access_lock); ret = __qseecom_process_reentrancy_blocked_on_listener( resp, NULL, data); + mutex_lock(&listener_access_lock); if (ret) { pr_err("failed to process App(%d) %s blocked on listener %d\n", data->client.app_id, @@ -2131,6 +2306,7 @@ static int __qseecom_reentrancy_process_incomplete_cmd( goto exit; } exit: + mutex_unlock(&listener_access_lock); if (lstnr == RPMB_SERVICE) __qseecom_disable_clk(CLK_QSEE); @@ -2149,23 +2325,15 @@ exit: */ static void __qseecom_reentrancy_check_if_no_app_blocked(uint32_t smc_id) { - sigset_t new_sigset, old_sigset; - if (qseecom.qsee_reentrancy_support > QSEE_REENTRANCY_PHASE_0 && qseecom.qsee_reentrancy_support < QSEE_REENTRANCY_PHASE_3 && IS_OWNER_TRUSTED_OS(TZ_SYSCALL_OWNER_ID(smc_id))) { /* thread sleep until this app unblocked */ while (qseecom.app_block_ref_cnt > 0) { - sigfillset(&new_sigset); - sigprocmask(SIG_SETMASK, &new_sigset, &old_sigset); mutex_unlock(&app_access_lock); - do { - if (!wait_event_freezable(qseecom.app_block_wq, - (qseecom.app_block_ref_cnt == 0))) - break; - } while (1); + wait_event_interruptible(qseecom.app_block_wq, + (!qseecom.app_block_ref_cnt)); mutex_lock(&app_access_lock); - sigprocmask(SIG_SETMASK, &old_sigset, NULL); } } } @@ -2178,22 +2346,17 @@ static void __qseecom_reentrancy_check_if_no_app_blocked(uint32_t smc_id) static void __qseecom_reentrancy_check_if_this_app_blocked( struct qseecom_registered_app_list *ptr_app) { - sigset_t new_sigset, old_sigset; if (qseecom.qsee_reentrancy_support) { + ptr_app->check_block++; while (ptr_app->app_blocked || qseecom.app_block_ref_cnt > 1) { /* thread sleep until this app unblocked */ - sigfillset(&new_sigset); - sigprocmask(SIG_SETMASK, &new_sigset, &old_sigset); mutex_unlock(&app_access_lock); - do { - if (!wait_event_freezable(qseecom.app_block_wq, - (!ptr_app->app_blocked && - qseecom.app_block_ref_cnt 
<= 1))) - break; - } while (1); + wait_event_interruptible(qseecom.app_block_wq, + (!ptr_app->app_blocked && + qseecom.app_block_ref_cnt <= 1)); mutex_lock(&app_access_lock); - sigprocmask(SIG_SETMASK, &old_sigset, NULL); } + ptr_app->check_block--; } } @@ -2420,8 +2583,11 @@ static int qseecom_load_app(struct qseecom_dev_handle *data, void __user *argp) if (resp.result == QSEOS_RESULT_INCOMPLETE) { ret = __qseecom_process_incomplete_cmd(data, &resp); if (ret) { - pr_err("process_incomplete_cmd failed err: %d\n", - ret); + /* TZ has created app_id, need to unload it */ + pr_err("incomp_cmd err %d, %d, unload %d %s\n", + ret, resp.result, resp.data, + load_img_req.img_name); + __qseecom_unload_app(data, resp.data); if (!IS_ERR_OR_NULL(ihandle)) ion_free(qseecom.ion_clnt, ihandle); ret = -EFAULT; @@ -2442,7 +2608,6 @@ static int qseecom_load_app(struct qseecom_dev_handle *data, void __user *argp) entry = kmalloc(sizeof(*entry), GFP_KERNEL); if (!entry) { - pr_err("kmalloc failed\n"); ret = -ENOMEM; goto loadapp_err; } @@ -2450,11 +2615,11 @@ static int qseecom_load_app(struct qseecom_dev_handle *data, void __user *argp) entry->ref_cnt = 1; entry->app_arch = load_img_req.app_arch; /* - * keymaster app may be first loaded as "keymaste" by qseecomd, - * and then used as "keymaster" on some targets. To avoid app - * name checking error, register "keymaster" into app_list and - * thread private data. - */ + * keymaster app may be first loaded as "keymaste" by qseecomd, + * and then used as "keymaster" on some targets. To avoid app + * name checking error, register "keymaster" into app_list and + * thread private data. + */ if (!strcmp(load_img_req.img_name, "keymaste")) strlcpy(entry->app_name, "keymaster", MAX_APP_NAME_SIZE); @@ -2463,6 +2628,7 @@ static int qseecom_load_app(struct qseecom_dev_handle *data, void __user *argp) MAX_APP_NAME_SIZE); entry->app_blocked = false; entry->blocked_on_listener_id = 0; + entry->check_block = 0; /* Deallocate the handle */ if (!IS_ERR_OR_NULL(ihandle)) @@ -2473,7 +2639,7 @@ static int qseecom_load_app(struct qseecom_dev_handle *data, void __user *argp) spin_unlock_irqrestore(&qseecom.registered_app_list_lock, flags); - pr_warn("App with id %d (%s) now loaded\n", app_id, + pr_warn("App with id %u (%s) now loaded\n", app_id, (char *)(load_img_req.img_name)); } data->client.app_id = app_id; @@ -2511,11 +2677,12 @@ enable_clk_err: static int __qseecom_cleanup_app(struct qseecom_dev_handle *data) { int ret = 1; /* Set unload app */ + wake_up_all(&qseecom.send_resp_wq); if (qseecom.qsee_reentrancy_support) mutex_unlock(&app_access_lock); while (atomic_read(&data->ioctl_count) > 1) { - if (wait_event_freezable(data->abort_wq, + if (wait_event_interruptible(data->abort_wq, atomic_read(&data->ioctl_count) <= 1)) { pr_err("Interrupted from abort\n"); ret = -ERESTARTSYS; @@ -2530,6 +2697,7 @@ static int __qseecom_cleanup_app(struct qseecom_dev_handle *data) static int qseecom_unmap_ion_allocated_memory(struct qseecom_dev_handle *data) { int ret = 0; + if (!IS_ERR_OR_NULL(data->client.ihandle)) { ion_unmap_kernel(qseecom.ion_clnt, data->client.ihandle); ion_free(qseecom.ion_clnt, data->client.ihandle); @@ -2539,17 +2707,61 @@ static int qseecom_unmap_ion_allocated_memory(struct qseecom_dev_handle *data) return ret; } +static int __qseecom_unload_app(struct qseecom_dev_handle *data, + uint32_t app_id) +{ + struct qseecom_unload_app_ireq req; + struct qseecom_command_scm_resp resp; + int ret = 0; + + /* Populate the structure for sending scm call to load image */ + 
req.qsee_cmd_id = QSEOS_APP_SHUTDOWN_COMMAND; + req.app_id = app_id; + + /* SCM_CALL to unload the app */ + ret = qseecom_scm_call(SCM_SVC_TZSCHEDULER, 1, &req, + sizeof(struct qseecom_unload_app_ireq), + &resp, sizeof(resp)); + if (ret) { + pr_err("scm_call to unload app (id = %d) failed\n", app_id); + return -EFAULT; + } + switch (resp.result) { + case QSEOS_RESULT_SUCCESS: + pr_warn("App (%d) is unloaded\n", app_id); + break; + case QSEOS_RESULT_INCOMPLETE: + ret = __qseecom_process_incomplete_cmd(data, &resp); + if (ret) + pr_err("unload app %d fail proc incom cmd: %d,%d,%d\n", + app_id, ret, resp.result, resp.data); + else + pr_warn("App (%d) is unloaded\n", app_id); + break; + case QSEOS_RESULT_FAILURE: + pr_err("app (%d) unload_failed!!\n", app_id); + ret = -EFAULT; + break; + default: + pr_err("unload app %d get unknown resp.result %d\n", + app_id, resp.result); + ret = -EFAULT; + break; + } + return ret; +} + static int qseecom_unload_app(struct qseecom_dev_handle *data, bool app_crash) { unsigned long flags; unsigned long flags1; int ret = 0; - struct qseecom_command_scm_resp resp; struct qseecom_registered_app_list *ptr_app = NULL; bool unload = false; bool found_app = false; bool found_dead_app = false; + bool doublecheck = false; if (!data) { pr_err("Invalid/uninitialized device handle\n"); @@ -2572,15 +2784,15 @@ static int qseecom_unload_app(struct qseecom_dev_handle *data, if (!strcmp((void *)ptr_app->app_name, (void *)data->client.app_name)) { found_app = true; - if (ptr_app->app_blocked) + if (ptr_app->app_blocked || + ptr_app->check_block) app_crash = false; if (app_crash || ptr_app->ref_cnt == 1) unload = true; break; - } else { - found_dead_app = true; - break; } + found_dead_app = true; + break; } } spin_unlock_irqrestore(&qseecom.registered_app_list_lock, @@ -2599,42 +2811,30 @@ static int qseecom_unload_app(struct qseecom_dev_handle *data, (char *)data->client.app_name); if (unload) { - struct qseecom_unload_app_ireq req; - /* Populate the structure for sending scm call to load image */ - req.qsee_cmd_id = QSEOS_APP_SHUTDOWN_COMMAND; - req.app_id = data->client.app_id; + ret = __qseecom_unload_app(data, data->client.app_id); - /* SCM_CALL to unload the app */ - ret = qseecom_scm_call(SCM_SVC_TZSCHEDULER, 1, &req, - sizeof(struct qseecom_unload_app_ireq), - &resp, sizeof(resp)); - if (ret) { - pr_err("scm_call to unload app (id = %d) failed\n", - req.app_id); - ret = -EFAULT; - goto unload_exit; - } else { - pr_warn("App id %d now unloaded\n", req.app_id); - } - if (resp.result == QSEOS_RESULT_FAILURE) { - pr_err("app (%d) unload_failed!!\n", - data->client.app_id); - ret = -EFAULT; - goto unload_exit; - } - if (resp.result == QSEOS_RESULT_SUCCESS) - pr_debug("App (%d) is unloaded!!\n", - data->client.app_id); - if (resp.result == QSEOS_RESULT_INCOMPLETE) { - ret = __qseecom_process_incomplete_cmd(data, &resp); - if (ret) { - pr_err("process_incomplete_cmd fail err: %d\n", - ret); - goto unload_exit; + /* double check if this app_entry still exists */ + spin_lock_irqsave(&qseecom.registered_app_list_lock, flags1); + list_for_each_entry(ptr_app, + &qseecom.registered_app_list_head, list) { + if ((ptr_app->app_id == data->client.app_id) && + (!strcmp((void *)ptr_app->app_name, + (void *)data->client.app_name))) { + doublecheck = true; + break; } } + spin_unlock_irqrestore(&qseecom.registered_app_list_lock, + flags1); + if (!doublecheck) { + pr_warn("app %d(%s) entry is already removed\n", + data->client.app_id, + (char *)data->client.app_name); + found_app = false; + } 
} +unload_exit: if (found_app) { spin_lock_irqsave(&qseecom.registered_app_list_lock, flags1); if (app_crash) { @@ -2657,12 +2857,105 @@ static int qseecom_unload_app(struct qseecom_dev_handle *data, spin_unlock_irqrestore(&qseecom.registered_app_list_lock, flags1); } -unload_exit: qseecom_unmap_ion_allocated_memory(data); data->released = true; return ret; } + +static int qseecom_prepare_unload_app(struct qseecom_dev_handle *data) +{ + struct qseecom_unload_app_pending_list *entry = NULL; + + pr_debug("prepare to unload app(%d)(%s), pending %d\n", + data->client.app_id, data->client.app_name, + data->client.unload_pending); + if (data->client.unload_pending) + return 0; + entry = kzalloc(sizeof(*entry), GFP_KERNEL); + if (!entry) + return -ENOMEM; + entry->data = data; + list_add_tail(&entry->list, + &qseecom.unload_app_pending_list_head); + data->client.unload_pending = true; + pr_debug("unload ta %d pending\n", data->client.app_id); + return 0; +} + +static void __wakeup_unload_app_kthread(void) +{ + atomic_set(&qseecom.unload_app_kthread_state, + UNLOAD_APP_KT_WAKEUP); + wake_up_interruptible(&qseecom.unload_app_kthread_wq); +} + +static bool __qseecom_find_pending_unload_app(uint32_t app_id, char *app_name) +{ + struct qseecom_unload_app_pending_list *entry = NULL; + bool found = false; + + mutex_lock(&unload_app_pending_list_lock); + list_for_each_entry(entry, &qseecom.unload_app_pending_list_head, + list) { + if ((entry->data->client.app_id == app_id) && + (!strcmp(entry->data->client.app_name, app_name))) { + found = true; + break; + } + } + mutex_unlock(&unload_app_pending_list_lock); + return found; +} + +static void __qseecom_processing_pending_unload_app(void) +{ + struct qseecom_unload_app_pending_list *entry = NULL; + struct list_head *pos; + int ret = 0; + + mutex_lock(&unload_app_pending_list_lock); + while (!list_empty(&qseecom.unload_app_pending_list_head)) { + pos = qseecom.unload_app_pending_list_head.next; + entry = list_entry(pos, + struct qseecom_unload_app_pending_list, list); + if (entry && entry->data) { + pr_debug("process pending unload app %d (%s)\n", + entry->data->client.app_id, + entry->data->client.app_name); + mutex_unlock(&unload_app_pending_list_lock); + mutex_lock(&app_access_lock); + ret = qseecom_unload_app(entry->data, true); + if (ret) + pr_err("unload app %d pending failed %d\n", + entry->data->client.app_id, ret); + mutex_unlock(&app_access_lock); + mutex_lock(&unload_app_pending_list_lock); + kzfree(entry->data); + } + list_del(pos); + kzfree(entry); + } + mutex_unlock(&unload_app_pending_list_lock); +} + +static int __qseecom_unload_app_kthread_func(void *data) +{ + while (!kthread_should_stop()) { + wait_event_interruptible( + qseecom.unload_app_kthread_wq, + atomic_read(&qseecom.unload_app_kthread_state) + == UNLOAD_APP_KT_WAKEUP); + pr_debug("kthread to unload app is called, state %d\n", + atomic_read(&qseecom.unload_app_kthread_state)); + __qseecom_processing_pending_unload_app(); + atomic_set(&qseecom.unload_app_kthread_state, + UNLOAD_APP_KT_SLEEP); + } + pr_warn("kthread to unload app stopped\n"); + return 0; +} + static phys_addr_t __qseecom_uvirt_to_kphys(struct qseecom_dev_handle *data, unsigned long virt) { @@ -3066,7 +3359,7 @@ int __qseecom_process_reentrancy(struct qseecom_command_scm_resp *resp, ret = __qseecom_reentrancy_process_incomplete_cmd(data, resp); ptr_app->app_blocked = false; qseecom.app_block_ref_cnt--; - wake_up_interruptible(&qseecom.app_block_wq); + wake_up_interruptible_all(&qseecom.app_block_wq); if (ret) 
pr_err("process_incomplete_cmd failed err: %d\n", ret); @@ -3115,6 +3408,13 @@ static int __qseecom_send_cmd(struct qseecom_dev_handle *data, return -ENOENT; } + if (__qseecom_find_pending_unload_app(data->client.app_id, + data->client.app_name)) { + pr_err("app %d (%s) unload is pending\n", + data->client.app_id, data->client.app_name); + return -ENOENT; + } + if (qseecom.qsee_version < QSEE_VERSION_40) { send_data_req.app_id = data->client.app_id; send_data_req.req_ptr = (uint32_t)(__qseecom_uvirt_to_kphys( @@ -3246,23 +3546,23 @@ int __boundary_checks_offset(struct qseecom_send_modfd_cmd_req *req, if ((data->type != QSEECOM_LISTENER_SERVICE) && (req->ifd_data[i].fd > 0)) { - if ((req->cmd_req_len < sizeof(uint32_t)) || - (req->ifd_data[i].cmd_buf_offset > - req->cmd_req_len - sizeof(uint32_t))) { - pr_err("Invalid offset (req len) 0x%x\n", - req->ifd_data[i].cmd_buf_offset); - return -EINVAL; - } + if ((req->cmd_req_len < sizeof(uint32_t)) || + (req->ifd_data[i].cmd_buf_offset > + req->cmd_req_len - sizeof(uint32_t))) { + pr_err("Invalid offset (req len) 0x%x\n", + req->ifd_data[i].cmd_buf_offset); + return -EINVAL; + } } else if ((data->type == QSEECOM_LISTENER_SERVICE) && (lstnr_resp->ifd_data[i].fd > 0)) { - if ((lstnr_resp->resp_len < sizeof(uint32_t)) || - (lstnr_resp->ifd_data[i].cmd_buf_offset > - lstnr_resp->resp_len - sizeof(uint32_t))) { - pr_err("Invalid offset (lstnr resp len) 0x%x\n", - lstnr_resp->ifd_data[i].cmd_buf_offset); - return -EINVAL; - } + if ((lstnr_resp->resp_len < sizeof(uint32_t)) || + (lstnr_resp->ifd_data[i].cmd_buf_offset > + lstnr_resp->resp_len - sizeof(uint32_t))) { + pr_err("Invalid offset (lstnr resp len) 0x%x\n", + lstnr_resp->ifd_data[i].cmd_buf_offset); + return -EINVAL; } + } return 0; } @@ -3370,6 +3670,7 @@ static int __qseecom_update_cmd_buf(void *msg, bool cleanup, sg = sg_ptr->sgl; if (sg_ptr->nents == 1) { uint32_t *update; + if (__boundary_checks_offset(req, lstnr_resp, data, i)) goto err; if ((data->type == QSEECOM_CLIENT_APP && @@ -3433,8 +3734,8 @@ static int __qseecom_update_cmd_buf(void *msg, bool cleanup, update = (struct qseecom_sg_entry *)field; for (j = 0; j < sg_ptr->nents; j++) { /* - * Check if sg list PA is under 4GB - */ + * Check if sg list PA is under 4GB + */ if ((qseecom.qsee_version >= QSEE_VERSION_40) && (!cleanup) && @@ -3650,8 +3951,9 @@ static int __qseecom_update_cmd_buf_64(void *msg, bool cleanup, sg = sg_ptr->sgl; if (sg_ptr->nents == 1) { uint64_t *update_64bit; + if (__boundary_checks_offset_64(req, lstnr_resp, - data, i)) + data, i)) goto err; /* 64bit app uses 64bit address */ update_64bit = (uint64_t *) field; @@ -3834,7 +4136,8 @@ static int __qseecom_listener_has_rcvd_req(struct qseecom_dev_handle *data, struct qseecom_registered_listener_list *svc) { int ret; - ret = (svc->rcv_req_flag != 0); + + ret = (svc->rcv_req_flag == 1); return ret || data->abort; } @@ -3843,14 +4146,17 @@ static int qseecom_receive_req(struct qseecom_dev_handle *data) int ret = 0; struct qseecom_registered_listener_list *this_lstnr; + mutex_lock(&listener_access_lock); this_lstnr = __qseecom_find_svc(data->listener.id); if (!this_lstnr) { pr_err("Invalid listener ID\n"); + mutex_unlock(&listener_access_lock); return -ENODATA; } + mutex_unlock(&listener_access_lock); while (1) { - if (wait_event_freezable(this_lstnr->rcv_req_wq, + if (wait_event_interruptible(this_lstnr->rcv_req_wq, __qseecom_listener_has_rcvd_req(data, this_lstnr))) { pr_debug("Interrupted: exiting Listener Service = %d\n", @@ -3861,10 +4167,12 @@ static int 
qseecom_receive_req(struct qseecom_dev_handle *data) if (data->abort) { pr_err("Aborting Listener Service = %d\n", - (uint32_t)data->listener.id); + (uint32_t)data->listener.id); return -ENODEV; } + mutex_lock(&listener_access_lock); this_lstnr->rcv_req_flag = 0; + mutex_unlock(&listener_access_lock); break; } return ret; @@ -4080,9 +4388,19 @@ static int __qseecom_allocate_img_data(struct ion_handle **pihandle, ion_phys_addr_t pa; struct ion_handle *ihandle = NULL; u8 *img_data = NULL; + int retry = 0; + int ion_flag = ION_FLAG_CACHED; - ihandle = ion_alloc(qseecom.ion_clnt, fw_size, - SZ_4K, ION_HEAP(ION_QSECOM_HEAP_ID), 0); + do { + if (retry++) { + mutex_unlock(&app_access_lock); + msleep(QSEECOM_TA_ION_ALLOCATE_DELAY); + mutex_lock(&app_access_lock); + } + ihandle = ion_alloc(qseecom.ion_clnt, fw_size, + SZ_4K, ION_HEAP(ION_QSECOM_HEAP_ID), ion_flag); + } while (IS_ERR_OR_NULL(ihandle) && + (retry <= QSEECOM_TA_ION_ALLOCATE_MAX_ATTEMP)); if (IS_ERR_OR_NULL(ihandle)) { pr_err("ION alloc failed\n"); @@ -4228,8 +4546,12 @@ static int __qseecom_load_fw(struct qseecom_dev_handle *data, char *appname, ret = qseecom_scm_call(SCM_SVC_TZSCHEDULER, 1, cmd_buf, cmd_len, &resp, sizeof(resp)); if (ret) { - pr_err("scm_call to load failed : ret %d\n", ret); - ret = -EIO; + pr_err("scm_call to load failed : ret %d, result %x\n", + ret, resp.result); + if (resp.result == QSEOS_RESULT_FAIL_APP_ALREADY_LOADED) + ret = -EEXIST; + else + ret = -EIO; goto exit_disable_clk_vote; } @@ -4239,10 +4561,14 @@ static int __qseecom_load_fw(struct qseecom_dev_handle *data, char *appname, break; case QSEOS_RESULT_INCOMPLETE: ret = __qseecom_process_incomplete_cmd(data, &resp); - if (ret) - pr_err("process_incomplete_cmd FAILED\n"); - else + if (ret) { + pr_err("incomp_cmd err %d, %d, unload %d %s\n", + ret, resp.result, resp.data, appname); + __qseecom_unload_app(data, resp.data); + ret = -EFAULT; + } else { *app_id = resp.data; + } break; case QSEOS_RESULT_FAILURE: pr_err("scm call failed with response QSEOS_RESULT FAILURE\n"); @@ -4438,6 +4764,9 @@ int qseecom_start_app(struct qseecom_handle **handle, uint32_t fw_size, app_arch; uint32_t app_id = 0; + __wakeup_unregister_listener_kthread(); + __wakeup_unload_app_kthread(); + if (atomic_read(&qseecom.qseecom_state) != QSEECOM_STATE_READY) { pr_err("Not allowed to be called in %d state\n", atomic_read(&qseecom.qseecom_state)); @@ -4455,19 +4784,13 @@ int qseecom_start_app(struct qseecom_handle **handle, } *handle = kzalloc(sizeof(struct qseecom_handle), GFP_KERNEL); - if (!(*handle)) { - pr_err("failed to allocate memory for kernel client handle\n"); + if (!(*handle)) return -ENOMEM; - } data = kzalloc(sizeof(*data), GFP_KERNEL); if (!data) { - pr_err("kmalloc failed\n"); - if (ret == 0) { - kfree(*handle); - *handle = NULL; - } - return -ENOMEM; + ret = -ENOMEM; + goto exit_handle_free; } data->abort = 0; data->type = QSEECOM_CLIENT_APP; @@ -4482,18 +4805,17 @@ int qseecom_start_app(struct qseecom_handle **handle, ION_HEAP(ION_QSECOM_HEAP_ID), 0); if (IS_ERR_OR_NULL(data->client.ihandle)) { pr_err("Ion client could not retrieve the handle\n"); - kfree(data); - kfree(*handle); - *handle = NULL; - return -EINVAL; + ret = -ENOMEM; + goto exit_data_free; } mutex_lock(&app_access_lock); +recheck: app_ireq.qsee_cmd_id = QSEOS_APP_LOOKUP_COMMAND; strlcpy(app_ireq.app_name, app_name, MAX_APP_NAME_SIZE); ret = __qseecom_check_app_exists(app_ireq, &app_id); if (ret) - goto err; + goto exit_ion_free; strlcpy(data->client.app_name, app_name, MAX_APP_NAME_SIZE); if (app_id) { 
@@ -4518,28 +4840,31 @@ int qseecom_start_app(struct qseecom_handle **handle, pr_debug("%s: Loading app for the first time'\n", qseecom.pdev->init_name); ret = __qseecom_load_fw(data, app_name, &app_id); - if (ret < 0) - goto err; + if (ret == -EEXIST) { + pr_err("recheck if TA %s is loaded\n", app_name); + goto recheck; + } else if (ret < 0) + goto exit_ion_free; } data->client.app_id = app_id; if (!found_app) { entry = kmalloc(sizeof(*entry), GFP_KERNEL); if (!entry) { pr_err("kmalloc for app entry failed\n"); - ret = -ENOMEM; - goto err; + ret = -ENOMEM; + goto exit_ion_free; } entry->app_id = app_id; entry->ref_cnt = 1; strlcpy(entry->app_name, app_name, MAX_APP_NAME_SIZE); if (__qseecom_get_fw_size(app_name, &fw_size, &app_arch)) { ret = -EIO; - kfree(entry); - goto err; + goto exit_entry_free; } entry->app_arch = app_arch; entry->app_blocked = false; entry->blocked_on_listener_id = 0; + entry->check_block = 0; spin_lock_irqsave(&qseecom.registered_app_list_lock, flags); list_add_tail(&entry->list, &qseecom.registered_app_list_head); spin_unlock_irqrestore(&qseecom.registered_app_list_lock, @@ -4551,7 +4876,7 @@ int qseecom_start_app(struct qseecom_handle **handle, if (ret) { pr_err("Cannot get phys_addr for the Ion Client, ret = %d\n", ret); - goto err; + goto exit_entry_free; } /* Populate the structure for sending scm call to load image */ @@ -4560,7 +4885,7 @@ int qseecom_start_app(struct qseecom_handle **handle, if (IS_ERR_OR_NULL(data->client.sb_virt)) { pr_err("ION memory mapping for client shared buf failed\n"); ret = -ENOMEM; - goto err; + goto exit_entry_free; } data->client.user_virt_sb_base = (uintptr_t)data->client.sb_virt; data->client.sb_phys = (phys_addr_t)pa; @@ -4570,9 +4895,8 @@ int qseecom_start_app(struct qseecom_handle **handle, kclient_entry = kzalloc(sizeof(*kclient_entry), GFP_KERNEL); if (!kclient_entry) { - pr_err("kmalloc failed\n"); ret = -ENOMEM; - goto err; + goto exit_ion_unmap_kernel; } kclient_entry->handle = *handle; @@ -4582,13 +4906,28 @@ int qseecom_start_app(struct qseecom_handle **handle, spin_unlock_irqrestore(&qseecom.registered_kclient_list_lock, flags); mutex_unlock(&app_access_lock); + __wakeup_unload_app_kthread(); return 0; -err: - kfree(data); - kfree(*handle); - *handle = NULL; +exit_ion_unmap_kernel: + if (!IS_ERR_OR_NULL(data->client.ihandle)) + ion_unmap_kernel(qseecom.ion_clnt, data->client.ihandle); +exit_entry_free: + kfree(entry); +exit_ion_free: mutex_unlock(&app_access_lock); + if (!IS_ERR_OR_NULL(data->client.ihandle)) { + ion_free(qseecom.ion_clnt, data->client.ihandle); + data->client.ihandle = NULL; + } +exit_data_free: + kfree(data); +exit_handle_free: + if (*handle) { + kfree(*handle); + *handle = NULL; + } + __wakeup_unload_app_kthread(); return ret; } EXPORT_SYMBOL(qseecom_start_app); @@ -4602,6 +4941,9 @@ int qseecom_shutdown_app(struct qseecom_handle **handle) unsigned long flags = 0; bool found_handle = false; + __wakeup_unregister_listener_kthread(); + __wakeup_unload_app_kthread(); + if (atomic_read(&qseecom.qseecom_state) != QSEECOM_STATE_READY) { pr_err("Not allowed to be called in %d state\n", atomic_read(&qseecom.qseecom_state)); @@ -4637,7 +4979,7 @@ int qseecom_shutdown_app(struct qseecom_handle **handle) kzfree(kclient); *handle = NULL; } - + __wakeup_unload_app_kthread(); return ret; } EXPORT_SYMBOL(qseecom_shutdown_app); @@ -4650,6 +4992,9 @@ int qseecom_send_command(struct qseecom_handle *handle, void *send_buf, struct qseecom_dev_handle *data; bool perf_enabled = false; + 
__wakeup_unregister_listener_kthread(); + __wakeup_unload_app_kthread(); + if (atomic_read(&qseecom.qseecom_state) != QSEECOM_STATE_READY) { pr_err("Not allowed to be called in %d state\n", atomic_read(&qseecom.qseecom_state)); @@ -4680,11 +5025,11 @@ int qseecom_send_command(struct qseecom_handle *handle, void *send_buf, } } /* - * On targets where crypto clock is handled by HLOS, - * if clk_access_cnt is zero and perf_enabled is false, - * then the crypto clock was not enabled before sending cmd - * to tz, qseecom will enable the clock to avoid service failure. - */ + * On targets where crypto clock is handled by HLOS, + * if clk_access_cnt is zero and perf_enabled is false, + * then the crypto clock was not enabled before sending cmd + * to tz, qseecom will enable the clock to avoid service failure. + */ if (!qseecom.no_clock_support && !qseecom.qsee.clk_access_cnt && !data->perf_enabled) { pr_debug("ce clock is not enabled!\n"); @@ -4725,6 +5070,7 @@ EXPORT_SYMBOL(qseecom_send_command); int qseecom_set_bandwidth(struct qseecom_handle *handle, bool high) { int ret = 0; + if ((handle == NULL) || (handle->dev == NULL)) { pr_err("No valid kernel client\n"); return -EINVAL; @@ -4772,6 +5118,7 @@ int qseecom_process_listener_from_smcinvoke(struct scm_desc *desc) resp.data = desc->ret[2]; /*listener_id*/ dummy_private_data.client.app_id = desc->ret[1]; + dummy_private_data.client.from_smcinvoke = true; dummy_app_entry.app_id = desc->ret[1]; mutex_lock(&app_access_lock); @@ -5441,6 +5788,7 @@ static int qseecom_query_app_loaded(struct qseecom_dev_handle *data, MAX_APP_NAME_SIZE); entry->app_blocked = false; entry->blocked_on_listener_id = 0; + entry->check_block = 0; spin_lock_irqsave(&qseecom.registered_app_list_lock, flags); list_add_tail(&entry->list, @@ -5722,9 +6070,9 @@ static int __qseecom_update_current_key_user_info( int ret; if (usage < QSEOS_KM_USAGE_DISK_ENCRYPTION || - usage >= QSEOS_KM_USAGE_MAX) { - pr_err("Error:: unsupported usage %d\n", usage); - return -EFAULT; + usage >= QSEOS_KM_USAGE_MAX) { + pr_err("Error:: unsupported usage %d\n", usage); + return -EFAULT; } ret = __qseecom_enable_clk(CLK_QSEE); if (ret) @@ -5890,6 +6238,9 @@ static int qseecom_create_key(struct qseecom_dev_handle *data, else flags |= QSEECOM_ICE_FDE_KEY_SIZE_16_BYTE; + if (qseecom.enable_key_wrap_in_ks == true) + flags |= ENABLE_KEY_WRAP_IN_KS; + generate_key_ireq.flags = flags; generate_key_ireq.qsee_command_id = QSEOS_GENERATE_KEY; memset((void *)generate_key_ireq.key_id, @@ -5938,10 +6289,12 @@ static int qseecom_create_key(struct qseecom_dev_handle *data, memcpy((void *)set_key_ireq.hash32, (void *)create_key_req.hash32, QSEECOM_HASH_SIZE); - - /* It will return false if it is GPCE based crypto instance or - ICE is setup properly */ - if (qseecom_enable_ice_setup(create_key_req.usage)) + /* + * It will return false if it is GPCE based crypto instance or + * ICE is setup properly + */ + ret = qseecom_enable_ice_setup(create_key_req.usage); + if (ret) goto free_buf; do { @@ -6066,9 +6419,12 @@ static int qseecom_wipe_key(struct qseecom_dev_handle *data, clear_key_ireq.key_id[i] = QSEECOM_INVALID_KEY_ID; memset((void *)clear_key_ireq.hash32, 0, QSEECOM_HASH_SIZE); - /* It will return false if it is GPCE based crypto instance or - ICE is setup properly */ - if (qseecom_enable_ice_setup(wipe_key_req.usage)) + /* + * It will return false if it is GPCE based crypto instance or + * ICE is setup properly + */ + ret = qseecom_enable_ice_setup(wipe_key_req.usage); + if (ret) goto free_buf; ret = 
__qseecom_set_clear_ce_key(data, wipe_key_req.usage, @@ -6149,7 +6505,7 @@ static int qseecom_update_key_user_info(struct qseecom_dev_handle *data, } static int qseecom_is_es_activated(void __user *argp) { - struct qseecom_is_es_activated_req req; + struct qseecom_is_es_activated_req req = {0}; struct qseecom_command_scm_resp resp; int ret; @@ -6240,9 +6596,9 @@ static int qseecom_mdtp_cipher_dip(void __user *argp) req.in_buf_size == 0 || req.in_buf_size > MAX_DIP || req.out_buf_size == 0 || req.out_buf_size > MAX_DIP || req.direction > 1) { - pr_err("invalid parameters\n"); - ret = -EINVAL; - break; + pr_err("invalid parameters\n"); + ret = -EINVAL; + break; } /* Copy the input buffer from userspace to kernel space */ @@ -6285,7 +6641,7 @@ static int qseecom_mdtp_cipher_dip(void __user *argp) if (ret) break; - ret = scm_call2(TZ_MDTP_CIPHER_DIP_ID, &desc); + ret = __qseecom_scm_call2_locked(TZ_MDTP_CIPHER_DIP_ID, &desc); __qseecom_disable_clk(CLK_QSEE); @@ -6400,10 +6756,10 @@ static int __qseecom_qteec_handle_pre_alc_fd(struct qseecom_dev_handle *data, return -ENOMEM; } /* - * Allocate a buffer, populate it with number of entry plus - * each sg entry's phy addr and lenth; then return the - * phy_addr of the buffer. - */ + * Allocate a buffer, populate it with number of entry plus + * each sg entry's phy addr and length; then return the + * phy_addr of the buffer. + */ size = sizeof(uint32_t) + sizeof(struct qseecom_sg_entry) * sg_ptr->nents; size = (size + PAGE_SIZE) & PAGE_MASK; @@ -6604,6 +6960,12 @@ static int __qseecom_qteec_issue_cmd(struct qseecom_dev_handle *data, (char *)data->client.app_name); return -ENOENT; } + if (__qseecom_find_pending_unload_app(data->client.app_id, + data->client.app_name)) { + pr_err("app %d (%s) unload is pending\n", + data->client.app_id, data->client.app_name); + return -ENOENT; + } req->req_ptr = (void *)__qseecom_uvirt_to_kvirt(data, (uintptr_t)req->req_ptr); @@ -6807,6 +7169,12 @@ static int qseecom_qteec_invoke_modfd_cmd(struct qseecom_dev_handle *data, (char *)data->client.app_name); return -ENOENT; } + if (__qseecom_find_pending_unload_app(data->client.app_id, + data->client.app_name)) { + pr_err("app %d (%s) unload is pending\n", + data->client.app_id, data->client.app_name); + return -ENOENT; + } /* validate offsets */ for (i = 0; i < MAX_ION_FD; i++) { @@ -6937,61 +7305,8 @@ static void __qseecom_clean_data_sglistinfo(struct qseecom_dev_handle *data) } } - -static int __qseecom_bus_scaling_enable(struct qseecom_dev_handle *data, - bool *perf_enabled) -{ - int ret = 0; - - if (qseecom.support_bus_scaling) { - if (!data->mode) { - mutex_lock(&qsee_bw_mutex); - __qseecom_register_bus_bandwidth_needs( - data, HIGH); - mutex_unlock(&qsee_bw_mutex); - } - ret = qseecom_scale_bus_bandwidth_timer(INACTIVE); - if (ret) { - pr_err("Failed to set bw\n"); - ret = -EINVAL; - goto exit; - } - } - /* - * On targets where crypto clock is handled by HLOS, - * if clk_access_cnt is zero and perf_enabled is false, - * then the crypto clock was not enabled before sending cmd - * to tz, qseecom will enable the clock to avoid service failure. 
- */ - if (!qseecom.no_clock_support && - !qseecom.qsee.clk_access_cnt && !data->perf_enabled) { - pr_debug("ce clock is not enabled\n"); - ret = qseecom_perf_enable(data); - if (ret) { - pr_err("Failed to vote for clock with err %d\n", - ret); - ret = -EINVAL; - goto exit; - } - *perf_enabled = true; - } -exit: - return ret; -} - -static void __qseecom_bus_scaling_disable(struct qseecom_dev_handle *data, - bool perf_enabled) -{ - if (qseecom.support_bus_scaling) - __qseecom_add_bw_scale_down_timer( - QSEECOM_SEND_CMD_CRYPTO_TIMEOUT); - if (perf_enabled) { - qsee_disable_clock_vote(data, CLK_DFAB); - qsee_disable_clock_vote(data, CLK_SFPB); - } -} - -long qseecom_ioctl(struct file *file, unsigned cmd, unsigned long arg) +long qseecom_ioctl(struct file *file, + unsigned int cmd, unsigned long arg) { int ret = 0; struct qseecom_dev_handle *data = file->private_data; @@ -7007,6 +7322,12 @@ long qseecom_ioctl(struct file *file, unsigned cmd, unsigned long arg) pr_err("Aborting qseecom driver\n"); return -ENODEV; } + if (cmd != QSEECOM_IOCTL_RECEIVE_REQ && + cmd != QSEECOM_IOCTL_SEND_RESP_REQ && + cmd != QSEECOM_IOCTL_SEND_MODFD_RESP && + cmd != QSEECOM_IOCTL_SEND_MODFD_RESP_64) + __wakeup_unregister_listener_kthread(); + __wakeup_unload_app_kthread(); switch (cmd) { case QSEECOM_IOCTL_REGISTER_LISTENER_REQ: { @@ -7017,17 +7338,18 @@ long qseecom_ioctl(struct file *file, unsigned cmd, unsigned long arg) break; } pr_debug("ioctl register_listener_req()\n"); - mutex_lock(&app_access_lock); + mutex_lock(&listener_access_lock); atomic_inc(&data->ioctl_count); data->type = QSEECOM_LISTENER_SERVICE; ret = qseecom_register_listener(data, argp); atomic_dec(&data->ioctl_count); wake_up_all(&data->abort_wq); - mutex_unlock(&app_access_lock); + mutex_unlock(&listener_access_lock); if (ret) pr_err("failed qseecom_register_listener: %d\n", ret); break; } + case QSEECOM_IOCTL_UNREGISTER_LISTENER_REQ: { if ((data->listener.id == 0) || (data->type != QSEECOM_LISTENER_SERVICE)) { @@ -7037,12 +7359,12 @@ long qseecom_ioctl(struct file *file, unsigned cmd, unsigned long arg) break; } pr_debug("ioctl unregister_listener_req()\n"); - mutex_lock(&app_access_lock); + mutex_lock(&listener_access_lock); atomic_inc(&data->ioctl_count); ret = qseecom_unregister_listener(data); atomic_dec(&data->ioctl_count); wake_up_all(&data->abort_wq); - mutex_unlock(&app_access_lock); + mutex_unlock(&listener_access_lock); if (ret) pr_err("failed qseecom_unregister_listener: %d\n", ret); break; @@ -7057,14 +7379,50 @@ long qseecom_ioctl(struct file *file, unsigned cmd, unsigned long arg) } /* Only one client allowed here at a time */ mutex_lock(&app_access_lock); - ret = __qseecom_bus_scaling_enable(data, &perf_enabled); - if (ret) { - mutex_unlock(&app_access_lock); - break; + if (qseecom.support_bus_scaling) { + /* register bus bw in case the client doesn't do it */ + if (!data->mode) { + mutex_lock(&qsee_bw_mutex); + __qseecom_register_bus_bandwidth_needs( + data, HIGH); + mutex_unlock(&qsee_bw_mutex); + } + ret = qseecom_scale_bus_bandwidth_timer(INACTIVE); + if (ret) { + pr_err("Failed to set bw.\n"); + ret = -EINVAL; + mutex_unlock(&app_access_lock); + break; + } + } + /* + * On targets where crypto clock is handled by HLOS, + * if clk_access_cnt is zero and perf_enabled is false, + * then the crypto clock was not enabled before sending cmd to + * tz, qseecom will enable the clock to avoid service failure. 
+ */ + if (!qseecom.no_clock_support && + !qseecom.qsee.clk_access_cnt && !data->perf_enabled) { + pr_debug("ce clock is not enabled!\n"); + ret = qseecom_perf_enable(data); + if (ret) { + pr_err("Failed to vote for clock with err %d\n", + ret); + mutex_unlock(&app_access_lock); + ret = -EINVAL; + break; + } + perf_enabled = true; } atomic_inc(&data->ioctl_count); ret = qseecom_send_cmd(data, argp); - __qseecom_bus_scaling_disable(data, perf_enabled); + if (qseecom.support_bus_scaling) + __qseecom_add_bw_scale_down_timer( + QSEECOM_SEND_CMD_CRYPTO_TIMEOUT); + if (perf_enabled) { + qsee_disable_clock_vote(data, CLK_DFAB); + qsee_disable_clock_vote(data, CLK_SFPB); + } atomic_dec(&data->ioctl_count); wake_up_all(&data->abort_wq); mutex_unlock(&app_access_lock); @@ -7083,17 +7441,52 @@ long qseecom_ioctl(struct file *file, unsigned cmd, unsigned long arg) } /* Only one client allowed here at a time */ mutex_lock(&app_access_lock); - ret = __qseecom_bus_scaling_enable(data, &perf_enabled); - if (ret) { - mutex_unlock(&app_access_lock); - break; + if (qseecom.support_bus_scaling) { + if (!data->mode) { + mutex_lock(&qsee_bw_mutex); + __qseecom_register_bus_bandwidth_needs( + data, HIGH); + mutex_unlock(&qsee_bw_mutex); + } + ret = qseecom_scale_bus_bandwidth_timer(INACTIVE); + if (ret) { + pr_err("Failed to set bw.\n"); + mutex_unlock(&app_access_lock); + ret = -EINVAL; + break; + } + } + /* + * On targets where crypto clock is handled by HLOS, + * if clk_access_cnt is zero and perf_enabled is false, + * then the crypto clock was not enabled before sending cmd to + * tz, qseecom will enable the clock to avoid service failure. + */ + if (!qseecom.no_clock_support && + !qseecom.qsee.clk_access_cnt && !data->perf_enabled) { + pr_debug("ce clock is not enabled!\n"); + ret = qseecom_perf_enable(data); + if (ret) { + pr_err("Failed to vote for clock with err %d\n", + ret); + mutex_unlock(&app_access_lock); + ret = -EINVAL; + break; + } + perf_enabled = true; } atomic_inc(&data->ioctl_count); if (cmd == QSEECOM_IOCTL_SEND_MODFD_CMD_REQ) ret = qseecom_send_modfd_cmd(data, argp); else ret = qseecom_send_modfd_cmd_64(data, argp); - __qseecom_bus_scaling_disable(data, perf_enabled); + if (qseecom.support_bus_scaling) + __qseecom_add_bw_scale_down_timer( + QSEECOM_SEND_CMD_CRYPTO_TIMEOUT); + if (perf_enabled) { + qsee_disable_clock_vote(data, CLK_DFAB); + qsee_disable_clock_vote(data, CLK_SFPB); + } atomic_dec(&data->ioctl_count); wake_up_all(&data->abort_wq); mutex_unlock(&app_access_lock); @@ -7126,6 +7519,7 @@ long qseecom_ioctl(struct file *file, unsigned cmd, unsigned long arg) ret = -EINVAL; break; } + mutex_lock(&listener_access_lock); atomic_inc(&data->ioctl_count); if (!qseecom.qsee_reentrancy_support) ret = qseecom_send_resp(); @@ -7133,6 +7527,7 @@ long qseecom_ioctl(struct file *file, unsigned cmd, unsigned long arg) ret = qseecom_reentrancy_send_resp(data); atomic_dec(&data->ioctl_count); wake_up_all(&data->abort_wq); + mutex_unlock(&listener_access_lock); if (ret) pr_err("failed qseecom_send_resp: %d\n", ret); break; @@ -7174,6 +7569,7 @@ long qseecom_ioctl(struct file *file, unsigned cmd, unsigned long arg) mutex_unlock(&app_access_lock); if (ret) pr_err("failed load_app request: %d\n", ret); + __wakeup_unload_app_kthread(); break; } case QSEECOM_IOCTL_UNLOAD_APP_REQ: { @@ -7192,6 +7588,7 @@ long qseecom_ioctl(struct file *file, unsigned cmd, unsigned long arg) mutex_unlock(&app_access_lock); if (ret) pr_err("failed unload_app request: %d\n", ret); + __wakeup_unload_app_kthread(); break; } 
case QSEECOM_IOCTL_GET_QSEOS_VERSION_REQ: { @@ -7475,6 +7872,7 @@ long qseecom_ioctl(struct file *file, unsigned cmd, unsigned long arg) ret = -EINVAL; break; } + mutex_lock(&listener_access_lock); atomic_inc(&data->ioctl_count); if (cmd == QSEECOM_IOCTL_SEND_MODFD_RESP) ret = qseecom_send_modfd_resp(data, argp); @@ -7482,6 +7880,7 @@ long qseecom_ioctl(struct file *file, unsigned cmd, unsigned long arg) ret = qseecom_send_modfd_resp_64(data, argp); atomic_dec(&data->ioctl_count); wake_up_all(&data->abort_wq); + mutex_unlock(&listener_access_lock); if (ret) pr_err("failed qseecom_send_mod_resp: %d\n", ret); __qseecom_clean_data_sglistinfo(data); @@ -7502,14 +7901,8 @@ long qseecom_ioctl(struct file *file, unsigned cmd, unsigned long arg) } /* Only one client allowed here at a time */ mutex_lock(&app_access_lock); - ret = __qseecom_bus_scaling_enable(data, &perf_enabled); - if (ret) { - mutex_unlock(&app_access_lock); - break; - } atomic_inc(&data->ioctl_count); ret = qseecom_qteec_open_session(data, argp); - __qseecom_bus_scaling_disable(data, perf_enabled); atomic_dec(&data->ioctl_count); wake_up_all(&data->abort_wq); mutex_unlock(&app_access_lock); @@ -7557,14 +7950,8 @@ long qseecom_ioctl(struct file *file, unsigned cmd, unsigned long arg) } /* Only one client allowed here at a time */ mutex_lock(&app_access_lock); - ret = __qseecom_bus_scaling_enable(data, &perf_enabled); - if (ret) { - mutex_unlock(&app_access_lock); - break; - } atomic_inc(&data->ioctl_count); ret = qseecom_qteec_invoke_modfd_cmd(data, argp); - __qseecom_bus_scaling_disable(data, perf_enabled); atomic_dec(&data->ioctl_count); wake_up_all(&data->abort_wq); mutex_unlock(&app_access_lock); @@ -7634,10 +8021,8 @@ static int qseecom_open(struct inode *inode, struct file *file) struct qseecom_dev_handle *data; data = kzalloc(sizeof(*data), GFP_KERNEL); - if (!data) { - pr_err("kmalloc failed\n"); + if (!data) return -ENOMEM; - } file->private_data = data; data->abort = 0; data->type = QSEECOM_GENERIC; @@ -7649,24 +8034,57 @@ static int qseecom_open(struct inode *inode, struct file *file) return ret; } +static void __qseecom_release_disable_clk(struct qseecom_dev_handle *data) +{ + if (qseecom.no_clock_support) + return; + if (qseecom.support_bus_scaling) { + mutex_lock(&qsee_bw_mutex); + if (data->mode != INACTIVE) { + qseecom_unregister_bus_bandwidth_needs(data); + if (qseecom.cumulative_mode == INACTIVE) + __qseecom_set_msm_bus_request(INACTIVE); + } + mutex_unlock(&qsee_bw_mutex); + } else { + if (data->fast_load_enabled) + qsee_disable_clock_vote(data, CLK_SFPB); + if (data->perf_enabled) + qsee_disable_clock_vote(data, CLK_DFAB); + } +} + static int qseecom_release(struct inode *inode, struct file *file) { struct qseecom_dev_handle *data = file->private_data; int ret = 0; + bool free_private_data = true; - if (data->released == false) { + __qseecom_release_disable_clk(data); + if (!data->released) { pr_debug("data: released=false, type=%d, mode=%d, data=0x%pK\n", data->type, data->mode, data); switch (data->type) { case QSEECOM_LISTENER_SERVICE: - mutex_lock(&app_access_lock); + pr_debug("release lsnr svc %d\n", data->listener.id); + mutex_lock(&listener_access_lock); ret = qseecom_unregister_listener(data); - mutex_unlock(&app_access_lock); + if (!ret) + free_private_data = false; + data->listener.release_called = true; + mutex_unlock(&listener_access_lock); + __wakeup_unregister_listener_kthread(); break; case QSEECOM_CLIENT_APP: - mutex_lock(&app_access_lock); - ret = qseecom_unload_app(data, true); - 
mutex_unlock(&app_access_lock); + pr_debug("release app %d (%s)\n", + data->client.app_id, data->client.app_name); + if (data->client.app_id) { + free_private_data = false; + mutex_lock(&unload_app_pending_list_lock); + ret = qseecom_prepare_unload_app(data); + mutex_unlock(&unload_app_pending_list_lock); + __wakeup_unload_app_kthread(); + } break; case QSEECOM_SECURE_SERVICE: case QSEECOM_GENERIC: @@ -7683,34 +8101,19 @@ static int qseecom_release(struct inode *inode, struct file *file) } } - if (qseecom.support_bus_scaling) { - mutex_lock(&qsee_bw_mutex); - if (data->mode != INACTIVE) { - qseecom_unregister_bus_bandwidth_needs(data); - if (qseecom.cumulative_mode == INACTIVE) { - ret = __qseecom_set_msm_bus_request(INACTIVE); - if (ret) - pr_err("Fail to scale down bus\n"); - } - } - mutex_unlock(&qsee_bw_mutex); - } else { - if (data->fast_load_enabled == true) - qsee_disable_clock_vote(data, CLK_SFPB); - if (data->perf_enabled == true) - qsee_disable_clock_vote(data, CLK_DFAB); - } - kfree(data); - + if (free_private_data) + kfree(data); return ret; } +#ifndef CONFIG_COMPAT +#define compat_qseecom_ioctl NULL +#endif + static const struct file_operations qseecom_fops = { .owner = THIS_MODULE, .unlocked_ioctl = qseecom_ioctl, -#ifdef CONFIG_COMPAT .compat_ioctl = compat_qseecom_ioctl, -#endif .open = qseecom_open, .release = qseecom_release }; @@ -7762,14 +8165,14 @@ static int __qseecom_init_clk(enum qseecom_ce_hw_instance ce) /* Get CE3 src core clk. */ qclk->ce_core_src_clk = clk_get(pdev, core_clk_src); if (!IS_ERR(qclk->ce_core_src_clk)) { - rc = clk_set_rate(qclk->ce_core_src_clk, - qseecom.ce_opp_freq_hz); - if (rc) { - clk_put(qclk->ce_core_src_clk); - qclk->ce_core_src_clk = NULL; - pr_err("Unable to set the core src clk @%uMhz.\n", - qseecom.ce_opp_freq_hz/CE_CLK_DIV); - return -EIO; + rc = clk_set_rate(qclk->ce_core_src_clk, + qseecom.ce_opp_freq_hz); + if (rc) { + clk_put(qclk->ce_core_src_clk); + qclk->ce_core_src_clk = NULL; + pr_err("Unable to set the core src clk @%uMhz.\n", + qseecom.ce_opp_freq_hz/CE_CLK_DIV); + return -EIO; } } else { pr_warn("Unable to get CE core src clk, set to NULL\n"); @@ -8525,17 +8928,23 @@ static int qseecom_probe(struct platform_device *pdev) } INIT_LIST_HEAD(&qseecom.registered_listener_list_head); - spin_lock_init(&qseecom.registered_listener_list_lock); INIT_LIST_HEAD(&qseecom.registered_app_list_head); spin_lock_init(&qseecom.registered_app_list_lock); + INIT_LIST_HEAD(&qseecom.unregister_lsnr_pending_list_head); INIT_LIST_HEAD(&qseecom.registered_kclient_list_head); spin_lock_init(&qseecom.registered_kclient_list_lock); init_waitqueue_head(&qseecom.send_resp_wq); + init_waitqueue_head(&qseecom.register_lsnr_pending_wq); + init_waitqueue_head(&qseecom.unregister_lsnr_kthread_wq); + INIT_LIST_HEAD(&qseecom.unload_app_pending_list_head); + init_waitqueue_head(&qseecom.unload_app_kthread_wq); qseecom.send_resp_flag = 0; qseecom.qsee_version = QSEEE_VERSION_00; + mutex_lock(&app_access_lock); rc = qseecom_scm_call(6, 3, &feature, sizeof(feature), &resp, sizeof(resp)); + mutex_unlock(&app_access_lock); pr_info("qseecom.qsee_version = 0x%x\n", resp.result); if (rc) { pr_err("Failed to get QSEE version info %d\n", rc); @@ -8550,11 +8959,7 @@ static int qseecom_probe(struct platform_device *pdev) qseecom.ion_clnt = msm_ion_client_create("qseecom-kernel"); if (IS_ERR_OR_NULL(qseecom.ion_clnt)) { pr_err("Ion client cannot be created\n"); - - if (qseecom.ion_clnt != ERR_PTR(-EPROBE_DEFER)) - rc = -ENOMEM; - else - rc = -EPROBE_DEFER; + rc = -ENOMEM; 
goto exit_del_cdev; } @@ -8601,6 +9006,14 @@ static int qseecom_probe(struct platform_device *pdev) qseecom.qsee_reentrancy_support); } + qseecom.enable_key_wrap_in_ks = + of_property_read_bool((&pdev->dev)->of_node, + "qcom,enable-key-wrap-in-ks"); + if (qseecom.enable_key_wrap_in_ks) { + pr_warn("qseecom.enable_key_wrap_in_ks = %d\n", + qseecom.enable_key_wrap_in_ks); + } + /* * The qseecom bus scaling flag can not be enabled when * crypto clock is not handled by HLOS. @@ -8686,9 +9099,11 @@ static int qseecom_probe(struct platform_device *pdev) rc = -EIO; goto exit_deinit_clock; } + mutex_lock(&app_access_lock); rc = qseecom_scm_call(SCM_SVC_TZSCHEDULER, 1, cmd_buf, cmd_len, &resp, sizeof(resp)); + mutex_unlock(&app_access_lock); __qseecom_disable_clk(CLK_QSEE); if (rc || (resp.result != QSEOS_RESULT_SUCCESS)) { pr_err("send secapp reg fail %d resp.res %d\n", @@ -8727,9 +9142,36 @@ static int qseecom_probe(struct platform_device *pdev) if (!qseecom.qsee_perf_client) pr_err("Unable to register bus client\n"); + /*create a kthread to process pending listener unregister task */ + qseecom.unregister_lsnr_kthread_task = kthread_run( + __qseecom_unregister_listener_kthread_func, + NULL, "qseecom-unreg-lsnr"); + if (IS_ERR(qseecom.unregister_lsnr_kthread_task)) { + pr_err("failed to create kthread to unregister listener\n"); + rc = -EINVAL; + goto exit_deinit_clock; + } + atomic_set(&qseecom.unregister_lsnr_kthread_state, + LSNR_UNREG_KT_SLEEP); + + /*create a kthread to process pending ta unloading task */ + qseecom.unload_app_kthread_task = kthread_run( + __qseecom_unload_app_kthread_func, + NULL, "qseecom-unload-ta"); + if (IS_ERR(qseecom.unload_app_kthread_task)) { + pr_err("failed to create kthread to unload ta\n"); + rc = -EINVAL; + goto exit_kill_unreg_lsnr_kthread; + } + atomic_set(&qseecom.unload_app_kthread_state, + UNLOAD_APP_KT_SLEEP); + atomic_set(&qseecom.qseecom_state, QSEECOM_STATE_READY); return 0; +exit_kill_unreg_lsnr_kthread: + kthread_stop(qseecom.unregister_lsnr_kthread_task); + exit_deinit_clock: __qseecom_deinit_clk(CLK_QSEE); if ((qseecom.qsee.instance != qseecom.ce_drv.instance) && @@ -8843,6 +9285,10 @@ static int qseecom_remove(struct platform_device *pdev) ion_client_destroy(qseecom.ion_clnt); + kthread_stop(qseecom.unload_app_kthread_task); + + kthread_stop(qseecom.unregister_lsnr_kthread_task); + cdev_del(&qseecom.cdev); device_destroy(driver_class, qseecom_device_no); @@ -8858,8 +9304,8 @@ static int qseecom_suspend(struct platform_device *pdev, pm_message_t state) { int ret = 0; struct qseecom_clk *qclk; - qclk = &qseecom.qsee; + qclk = &qseecom.qsee; atomic_set(&qseecom.qseecom_state, QSEECOM_STATE_SUSPEND); if (qseecom.no_clock_support) return 0; @@ -8900,8 +9346,8 @@ static int qseecom_resume(struct platform_device *pdev) int mode = 0; int ret = 0; struct qseecom_clk *qclk; - qclk = &qseecom.qsee; + qclk = &qseecom.qsee; if (qseecom.no_clock_support) goto exit; @@ -8974,7 +9420,8 @@ exit: atomic_set(&qseecom.qseecom_state, QSEECOM_STATE_READY); return ret; } -static struct of_device_id qseecom_match[] = { + +static const struct of_device_id qseecom_match[] = { { .compatible = "qcom,qseecom", }, @@ -9004,7 +9451,7 @@ static void qseecom_exit(void) } MODULE_LICENSE("GPL v2"); -MODULE_DESCRIPTION("Qualcomm Secure Execution Environment Communicator"); +MODULE_DESCRIPTION("QTI Secure Execution Environment Communicator"); module_init(qseecom_init); module_exit(qseecom_exit); diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c index 
b96f27f861ba..82bfc9df04e8 100644 --- a/drivers/mmc/host/sdhci.c +++ b/drivers/mmc/host/sdhci.c @@ -1845,7 +1845,9 @@ void sdhci_set_uhs_signaling(struct sdhci_host *host, unsigned timing) ctrl_2 |= SDHCI_CTRL_UHS_SDR104; else if (timing == MMC_TIMING_UHS_SDR12) ctrl_2 |= SDHCI_CTRL_UHS_SDR12; - else if (timing == MMC_TIMING_UHS_SDR25) + else if (timing == MMC_TIMING_SD_HS || + timing == MMC_TIMING_MMC_HS || + timing == MMC_TIMING_UHS_SDR25) ctrl_2 |= SDHCI_CTRL_UHS_SDR25; else if (timing == MMC_TIMING_UHS_SDR50) ctrl_2 |= SDHCI_CTRL_UHS_SDR50; diff --git a/drivers/mtd/chips/cfi_cmdset_0002.c b/drivers/mtd/chips/cfi_cmdset_0002.c index fb5a3052f144..7589d891b311 100644 --- a/drivers/mtd/chips/cfi_cmdset_0002.c +++ b/drivers/mtd/chips/cfi_cmdset_0002.c @@ -1626,29 +1626,35 @@ static int __xipram do_write_oneword(struct map_info *map, struct flchip *chip, continue; } - if (time_after(jiffies, timeo) && !chip_ready(map, adr)){ + /* + * We check "time_after" and "!chip_good" before checking + * "chip_good" to avoid the failure due to scheduling. + */ + if (time_after(jiffies, timeo) && !chip_good(map, adr, datum)) { xip_enable(map, chip, adr); printk(KERN_WARNING "MTD %s(): software timeout\n", __func__); xip_disable(map, chip, adr); + ret = -EIO; break; } - if (chip_ready(map, adr)) + if (chip_good(map, adr, datum)) break; /* Latency issues. Drop the lock, wait a while and retry */ UDELAY(map, chip, adr, 1); } + /* Did we succeed? */ - if (!chip_good(map, adr, datum)) { + if (ret) { /* reset on all failures. */ map_write( map, CMD(0xF0), chip->start ); /* FIXME - should have reset delay before continuing */ - if (++retry_cnt <= MAX_RETRIES) + if (++retry_cnt <= MAX_RETRIES) { + ret = 0; goto retry; - - ret = -EIO; + } } xip_enable(map, chip, adr); op_done: diff --git a/drivers/net/arcnet/arcnet.c b/drivers/net/arcnet/arcnet.c index 6ea963e3b89a..85ffd0561827 100644 --- a/drivers/net/arcnet/arcnet.c +++ b/drivers/net/arcnet/arcnet.c @@ -1009,31 +1009,34 @@ EXPORT_SYMBOL(arcnet_interrupt); static void arcnet_rx(struct net_device *dev, int bufnum) { struct arcnet_local *lp = netdev_priv(dev); - struct archdr pkt; + union { + struct archdr pkt; + char buf[512]; + } rxdata; struct arc_rfc1201 *soft; int length, ofs; - soft = &pkt.soft.rfc1201; + soft = &rxdata.pkt.soft.rfc1201; - lp->hw.copy_from_card(dev, bufnum, 0, &pkt, ARC_HDR_SIZE); - if (pkt.hard.offset[0]) { - ofs = pkt.hard.offset[0]; + lp->hw.copy_from_card(dev, bufnum, 0, &rxdata.pkt, ARC_HDR_SIZE); + if (rxdata.pkt.hard.offset[0]) { + ofs = rxdata.pkt.hard.offset[0]; length = 256 - ofs; } else { - ofs = pkt.hard.offset[1]; + ofs = rxdata.pkt.hard.offset[1]; length = 512 - ofs; } /* get the full header, if possible */ - if (sizeof(pkt.soft) <= length) { - lp->hw.copy_from_card(dev, bufnum, ofs, soft, sizeof(pkt.soft)); + if (sizeof(rxdata.pkt.soft) <= length) { + lp->hw.copy_from_card(dev, bufnum, ofs, soft, sizeof(rxdata.pkt.soft)); } else { - memset(&pkt.soft, 0, sizeof(pkt.soft)); + memset(&rxdata.pkt.soft, 0, sizeof(rxdata.pkt.soft)); lp->hw.copy_from_card(dev, bufnum, ofs, soft, length); } arc_printk(D_DURING, dev, "Buffer #%d: received packet from %02Xh to %02Xh (%d+4 bytes)\n", - bufnum, pkt.hard.source, pkt.hard.dest, length); + bufnum, rxdata.pkt.hard.source, rxdata.pkt.hard.dest, length); dev->stats.rx_packets++; dev->stats.rx_bytes += length + ARC_HDR_SIZE; @@ -1042,13 +1045,13 @@ static void arcnet_rx(struct net_device *dev, int bufnum) if (arc_proto_map[soft->proto]->is_ip) { if (BUGLVL(D_PROTO)) { struct ArcProto - *oldp = 
arc_proto_map[lp->default_proto[pkt.hard.source]], + *oldp = arc_proto_map[lp->default_proto[rxdata.pkt.hard.source]], *newp = arc_proto_map[soft->proto]; if (oldp != newp) { arc_printk(D_PROTO, dev, "got protocol %02Xh; encap for host %02Xh is now '%c' (was '%c')\n", - soft->proto, pkt.hard.source, + soft->proto, rxdata.pkt.hard.source, newp->suffix, oldp->suffix); } } @@ -1057,10 +1060,10 @@ static void arcnet_rx(struct net_device *dev, int bufnum) lp->default_proto[0] = soft->proto; /* in striking contrast, the following isn't a hack. */ - lp->default_proto[pkt.hard.source] = soft->proto; + lp->default_proto[rxdata.pkt.hard.source] = soft->proto; } /* call the protocol-specific receiver. */ - arc_proto_map[soft->proto]->rx(dev, bufnum, &pkt, length); + arc_proto_map[soft->proto]->rx(dev, bufnum, &rxdata.pkt, length); } static void null_rx(struct net_device *dev, int bufnum, diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c index fd6aff9f0052..e31b4c7d2522 100644 --- a/drivers/net/bonding/bond_main.c +++ b/drivers/net/bonding/bond_main.c @@ -1719,7 +1719,8 @@ err_detach: slave_disable_netpoll(new_slave); err_close: - slave_dev->priv_flags &= ~IFF_BONDING; + if (!netif_is_bond_master(slave_dev)) + slave_dev->priv_flags &= ~IFF_BONDING; dev_close(slave_dev); err_restore_mac: @@ -1915,7 +1916,8 @@ static int __bond_release_one(struct net_device *bond_dev, dev_set_mtu(slave_dev, slave->original_mtu); - slave_dev->priv_flags &= ~IFF_BONDING; + if (!netif_is_bond_master(slave_dev)) + slave_dev->priv_flags &= ~IFF_BONDING; bond_free_slave(slave); @@ -3889,7 +3891,7 @@ out: * this to-be-skipped slave to send a packet out. */ old_arr = rtnl_dereference(bond->slave_arr); - for (idx = 0; idx < old_arr->count; idx++) { + for (idx = 0; old_arr != NULL && idx < old_arr->count; idx++) { if (skipslave == old_arr->arr[idx]) { old_arr->arr[idx] = old_arr->arr[old_arr->count-1]; diff --git a/drivers/net/can/c_can/c_can.c b/drivers/net/can/c_can/c_can.c index e3dccd3200d5..7d35f6737499 100644 --- a/drivers/net/can/c_can/c_can.c +++ b/drivers/net/can/c_can/c_can.c @@ -97,6 +97,9 @@ #define BTR_TSEG2_SHIFT 12 #define BTR_TSEG2_MASK (0x7 << BTR_TSEG2_SHIFT) +/* interrupt register */ +#define INT_STS_PENDING 0x8000 + /* brp extension register */ #define BRP_EXT_BRPE_MASK 0x0f #define BRP_EXT_BRPE_SHIFT 0 @@ -1029,10 +1032,16 @@ static int c_can_poll(struct napi_struct *napi, int quota) u16 curr, last = priv->last_status; int work_done = 0; - priv->last_status = curr = priv->read_reg(priv, C_CAN_STS_REG); - /* Ack status on C_CAN. D_CAN is self clearing */ - if (priv->type != BOSCH_D_CAN) - priv->write_reg(priv, C_CAN_STS_REG, LEC_UNUSED); + /* Only read the status register if a status interrupt was pending */ + if (atomic_xchg(&priv->sie_pending, 0)) { + priv->last_status = curr = priv->read_reg(priv, C_CAN_STS_REG); + /* Ack status on C_CAN. D_CAN is self clearing */ + if (priv->type != BOSCH_D_CAN) + priv->write_reg(priv, C_CAN_STS_REG, LEC_UNUSED); + } else { + /* no change detected ... 
*/ + curr = last; + } /* handle state changes */ if ((curr & STATUS_EWARN) && (!(last & STATUS_EWARN))) { @@ -1083,10 +1092,16 @@ static irqreturn_t c_can_isr(int irq, void *dev_id) { struct net_device *dev = (struct net_device *)dev_id; struct c_can_priv *priv = netdev_priv(dev); + int reg_int; - if (!priv->read_reg(priv, C_CAN_INT_REG)) + reg_int = priv->read_reg(priv, C_CAN_INT_REG); + if (!reg_int) return IRQ_NONE; + /* save for later use */ + if (reg_int & INT_STS_PENDING) + atomic_set(&priv->sie_pending, 1); + /* disable all interrupts and schedule the NAPI */ c_can_irq_control(priv, false); napi_schedule(&priv->napi); diff --git a/drivers/net/can/c_can/c_can.h b/drivers/net/can/c_can/c_can.h index 8acdc7fa4792..d5567a7c1c6d 100644 --- a/drivers/net/can/c_can/c_can.h +++ b/drivers/net/can/c_can/c_can.h @@ -198,6 +198,7 @@ struct c_can_priv { struct net_device *dev; struct device *device; atomic_t tx_active; + atomic_t sie_pending; unsigned long tx_dir; int last_status; u16 (*read_reg) (const struct c_can_priv *priv, enum reg index); diff --git a/drivers/net/can/flexcan.c b/drivers/net/can/flexcan.c index baef09b9449f..6b866d0451b2 100644 --- a/drivers/net/can/flexcan.c +++ b/drivers/net/can/flexcan.c @@ -923,6 +923,7 @@ static int flexcan_chip_start(struct net_device *dev) reg_mecr = flexcan_read(®s->mecr); reg_mecr &= ~FLEXCAN_MECR_ECRWRDIS; flexcan_write(reg_mecr, ®s->mecr); + reg_mecr |= FLEXCAN_MECR_ECCDIS; reg_mecr &= ~(FLEXCAN_MECR_NCEFAFRZ | FLEXCAN_MECR_HANCEI_MSK | FLEXCAN_MECR_FANCEI_MSK); flexcan_write(reg_mecr, ®s->mecr); diff --git a/drivers/net/can/spi/mcp251x.c b/drivers/net/can/spi/mcp251x.c index 3bcbfcf0455a..83ddb7d8214a 100644 --- a/drivers/net/can/spi/mcp251x.c +++ b/drivers/net/can/spi/mcp251x.c @@ -627,7 +627,7 @@ static int mcp251x_setup(struct net_device *net, struct mcp251x_priv *priv, static int mcp251x_hw_reset(struct spi_device *spi) { struct mcp251x_priv *priv = spi_get_drvdata(spi); - u8 reg; + unsigned long timeout; int ret; /* Wait for oscillator startup timer after power up */ @@ -641,10 +641,19 @@ static int mcp251x_hw_reset(struct spi_device *spi) /* Wait for oscillator startup timer after reset */ mdelay(MCP251X_OST_DELAY_MS); - reg = mcp251x_read_reg(spi, CANSTAT); - if ((reg & CANCTRL_REQOP_MASK) != CANCTRL_REQOP_CONF) - return -ENODEV; - + /* Wait for reset to finish */ + timeout = jiffies + HZ; + while ((mcp251x_read_reg(spi, CANSTAT) & CANCTRL_REQOP_MASK) != + CANCTRL_REQOP_CONF) { + usleep_range(MCP251X_OST_DELAY_MS * 1000, + MCP251X_OST_DELAY_MS * 1000 * 2); + + if (time_after(jiffies, timeout)) { + dev_err(&spi->dev, + "MCP251x didn't enter in conf mode after reset\n"); + return -EBUSY; + } + } return 0; } diff --git a/drivers/net/can/usb/gs_usb.c b/drivers/net/can/usb/gs_usb.c index b227f81e4a7e..6982ab8777b7 100644 --- a/drivers/net/can/usb/gs_usb.c +++ b/drivers/net/can/usb/gs_usb.c @@ -617,6 +617,7 @@ static int gs_can_open(struct net_device *netdev) rc); usb_unanchor_urb(urb); + usb_free_urb(urb); break; } diff --git a/drivers/net/can/usb/peak_usb/pcan_usb.c b/drivers/net/can/usb/peak_usb/pcan_usb.c index 838545ce468d..e626c2afbbb1 100644 --- a/drivers/net/can/usb/peak_usb/pcan_usb.c +++ b/drivers/net/can/usb/peak_usb/pcan_usb.c @@ -108,7 +108,7 @@ struct pcan_usb_msg_context { u8 *end; u8 rec_cnt; u8 rec_idx; - u8 rec_data_idx; + u8 rec_ts_idx; struct net_device *netdev; struct pcan_usb *pdev; }; @@ -552,10 +552,15 @@ static int pcan_usb_decode_status(struct pcan_usb_msg_context *mc, mc->ptr += PCAN_USB_CMD_ARGS; if (status_len & 
PCAN_USB_STATUSLEN_TIMESTAMP) { - int err = pcan_usb_decode_ts(mc, !mc->rec_idx); + int err = pcan_usb_decode_ts(mc, !mc->rec_ts_idx); if (err) return err; + + /* Next packet in the buffer will have a timestamp on a single + * byte + */ + mc->rec_ts_idx++; } switch (f) { @@ -638,10 +643,13 @@ static int pcan_usb_decode_data(struct pcan_usb_msg_context *mc, u8 status_len) cf->can_dlc = get_can_dlc(rec_len); - /* first data packet timestamp is a word */ - if (pcan_usb_decode_ts(mc, !mc->rec_data_idx)) + /* Only first packet timestamp is a word */ + if (pcan_usb_decode_ts(mc, !mc->rec_ts_idx)) goto decode_failed; + /* Next packet in the buffer will have a timestamp on a single byte */ + mc->rec_ts_idx++; + /* read data */ memset(cf->data, 0x0, sizeof(cf->data)); if (status_len & PCAN_USB_STATUSLEN_RTR) { @@ -695,7 +703,6 @@ static int pcan_usb_decode_msg(struct peak_usb_device *dev, u8 *ibuf, u32 lbuf) /* handle normal can frames here */ } else { err = pcan_usb_decode_data(&mc, sl); - mc.rec_data_idx++; } } diff --git a/drivers/net/can/usb/peak_usb/pcan_usb_core.c b/drivers/net/can/usb/peak_usb/pcan_usb_core.c index b1d68f49b398..8c47cc8dc896 100644 --- a/drivers/net/can/usb/peak_usb/pcan_usb_core.c +++ b/drivers/net/can/usb/peak_usb/pcan_usb_core.c @@ -776,7 +776,7 @@ static int peak_usb_create_dev(const struct peak_usb_adapter *peak_usb_adapter, dev = netdev_priv(netdev); /* allocate a buffer large enough to send commands */ - dev->cmd_buf = kmalloc(PCAN_USB_MAX_CMD_LEN, GFP_KERNEL); + dev->cmd_buf = kzalloc(PCAN_USB_MAX_CMD_LEN, GFP_KERNEL); if (!dev->cmd_buf) { err = -ENOMEM; goto lbl_free_candev; diff --git a/drivers/net/can/usb/usb_8dev.c b/drivers/net/can/usb/usb_8dev.c index 522286cc0f9c..50d9b945089e 100644 --- a/drivers/net/can/usb/usb_8dev.c +++ b/drivers/net/can/usb/usb_8dev.c @@ -1010,9 +1010,8 @@ static void usb_8dev_disconnect(struct usb_interface *intf) netdev_info(priv->netdev, "device disconnected\n"); unregister_netdev(priv->netdev); - free_candev(priv->netdev); - unlink_all_urbs(priv); + free_candev(priv->netdev); } } diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.h b/drivers/net/ethernet/broadcom/genet/bcmgenet.h index ce20bc939b38..e651845c6605 100644 --- a/drivers/net/ethernet/broadcom/genet/bcmgenet.h +++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.h @@ -362,6 +362,7 @@ struct bcmgenet_mib_counters { #define EXT_ENERGY_DET_MASK (1 << 12) #define EXT_RGMII_OOB_CTRL 0x0C +#define RGMII_MODE_EN_V123 (1 << 0) #define RGMII_LINK (1 << 4) #define OOB_DISABLE (1 << 5) #define RGMII_MODE_EN (1 << 6) diff --git a/drivers/net/ethernet/broadcom/genet/bcmmii.c b/drivers/net/ethernet/broadcom/genet/bcmmii.c index 0565efad6e6e..3ad016f500b5 100644 --- a/drivers/net/ethernet/broadcom/genet/bcmmii.c +++ b/drivers/net/ethernet/broadcom/genet/bcmmii.c @@ -328,7 +328,11 @@ int bcmgenet_mii_config(struct net_device *dev) */ if (priv->ext_phy) { reg = bcmgenet_ext_readl(priv, EXT_RGMII_OOB_CTRL); - reg |= RGMII_MODE_EN | id_mode_dis; + reg |= id_mode_dis; + if (GENET_IS_V1(priv) || GENET_IS_V2(priv) || GENET_IS_V3(priv)) + reg |= RGMII_MODE_EN_V123; + else + reg |= RGMII_MODE_EN; bcmgenet_ext_writel(priv, reg, EXT_RGMII_OOB_CTRL); } @@ -342,11 +346,12 @@ int bcmgenet_mii_probe(struct net_device *dev) struct bcmgenet_priv *priv = netdev_priv(dev); struct device_node *dn = priv->pdev->dev.of_node; struct phy_device *phydev; - u32 phy_flags; + u32 phy_flags = 0; int ret; /* Communicate the integrated PHY revision */ - phy_flags = priv->gphy_rev; + if (priv->internal_phy) + 
phy_flags = priv->gphy_rev; /* Initialize link state variables that bcmgenet_mii_setup() uses */ priv->old_link = -1; diff --git a/drivers/net/ethernet/hisilicon/hip04_eth.c b/drivers/net/ethernet/hisilicon/hip04_eth.c index def831c89d35..2a7dfac20546 100644 --- a/drivers/net/ethernet/hisilicon/hip04_eth.c +++ b/drivers/net/ethernet/hisilicon/hip04_eth.c @@ -174,6 +174,7 @@ struct hip04_priv { dma_addr_t rx_phys[RX_DESC_NUM]; unsigned int rx_head; unsigned int rx_buf_size; + unsigned int rx_cnt_remaining; struct device_node *phy_node; struct phy_device *phy; @@ -487,7 +488,6 @@ static int hip04_rx_poll(struct napi_struct *napi, int budget) struct hip04_priv *priv = container_of(napi, struct hip04_priv, napi); struct net_device *ndev = priv->ndev; struct net_device_stats *stats = &ndev->stats; - unsigned int cnt = hip04_recv_cnt(priv); struct rx_desc *desc; struct sk_buff *skb; unsigned char *buf; @@ -500,8 +500,8 @@ static int hip04_rx_poll(struct napi_struct *napi, int budget) /* clean up tx descriptors */ tx_remaining = hip04_tx_reclaim(ndev, false); - - while (cnt && !last) { + priv->rx_cnt_remaining += hip04_recv_cnt(priv); + while (priv->rx_cnt_remaining && !last) { buf = priv->rx_buf[priv->rx_head]; skb = build_skb(buf, priv->rx_buf_size); if (unlikely(!skb)) @@ -544,11 +544,13 @@ static int hip04_rx_poll(struct napi_struct *napi, int budget) hip04_set_recv_desc(priv, phys); priv->rx_head = RX_NEXT(priv->rx_head); - if (rx >= budget) + if (rx >= budget) { + --priv->rx_cnt_remaining; goto done; + } - if (--cnt == 0) - cnt = hip04_recv_cnt(priv); + if (--priv->rx_cnt_remaining == 0) + priv->rx_cnt_remaining += hip04_recv_cnt(priv); } if (!(priv->reg_inten & RCV_INT)) { @@ -633,6 +635,7 @@ static int hip04_mac_open(struct net_device *ndev) int i; priv->rx_head = 0; + priv->rx_cnt_remaining = 0; priv->tx_head = 0; priv->tx_tail = 0; hip04_reset_ppe(priv); @@ -947,7 +950,6 @@ static int hip04_remove(struct platform_device *pdev) hip04_free_ring(ndev, d); unregister_netdev(ndev); - free_irq(ndev->irq, ndev); of_node_put(priv->phy_node); cancel_work_sync(&priv->tx_timeout_task); free_netdev(ndev); diff --git a/drivers/net/ethernet/hisilicon/hns_mdio.c b/drivers/net/ethernet/hisilicon/hns_mdio.c index 6ff13c559e52..09fcc821b7da 100644 --- a/drivers/net/ethernet/hisilicon/hns_mdio.c +++ b/drivers/net/ethernet/hisilicon/hns_mdio.c @@ -156,11 +156,15 @@ static int mdio_sc_cfg_reg_write(struct hns_mdio_device *mdio_dev, { u32 time_cnt; u32 reg_value; + int ret; regmap_write(mdio_dev->subctrl_vbase, cfg_reg, set_val); for (time_cnt = MDIO_TIMEOUT; time_cnt; time_cnt--) { - regmap_read(mdio_dev->subctrl_vbase, st_reg, ®_value); + ret = regmap_read(mdio_dev->subctrl_vbase, st_reg, ®_value); + if (ret) + return ret; + reg_value &= st_msk; if ((!!check_st) == (!!reg_value)) break; diff --git a/drivers/net/ethernet/intel/e1000/e1000_ethtool.c b/drivers/net/ethernet/intel/e1000/e1000_ethtool.c index d70b2e5d5222..cbb0bdf85177 100644 --- a/drivers/net/ethernet/intel/e1000/e1000_ethtool.c +++ b/drivers/net/ethernet/intel/e1000/e1000_ethtool.c @@ -628,6 +628,7 @@ static int e1000_set_ringparam(struct net_device *netdev, for (i = 0; i < adapter->num_rx_queues; i++) rxdr[i].count = rxdr->count; + err = 0; if (netif_running(adapter->netdev)) { /* Try to get new resources before deleting old */ err = e1000_setup_all_rx_resources(adapter); @@ -648,14 +649,13 @@ static int e1000_set_ringparam(struct net_device *netdev, adapter->rx_ring = rxdr; adapter->tx_ring = txdr; err = e1000_up(adapter); - if (err) - goto 
err_setup; } kfree(tx_old); kfree(rx_old); clear_bit(__E1000_RESETTING, &adapter->flags); - return 0; + return err; + err_setup_tx: e1000_free_all_rx_resources(adapter); err_setup_rx: @@ -667,7 +667,6 @@ err_alloc_rx: err_alloc_tx: if (netif_running(adapter->netdev)) e1000_up(adapter); -err_setup: clear_bit(__E1000_RESETTING, &adapter->flags); return err; } diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c index 70ed5e5c3514..9404f38d9d0d 100644 --- a/drivers/net/ethernet/intel/igb/igb_main.c +++ b/drivers/net/ethernet/intel/igb/igb_main.c @@ -1673,7 +1673,8 @@ static void igb_check_swap_media(struct igb_adapter *adapter) if ((hw->phy.media_type == e1000_media_type_copper) && (!(connsw & E1000_CONNSW_AUTOSENSE_EN))) { swap_now = true; - } else if (!(connsw & E1000_CONNSW_SERDESD)) { + } else if ((hw->phy.media_type != e1000_media_type_copper) && + !(connsw & E1000_CONNSW_SERDESD)) { /* copper signal takes time to appear */ if (adapter->copper_tries < 4) { adapter->copper_tries++; diff --git a/drivers/net/ethernet/marvell/skge.c b/drivers/net/ethernet/marvell/skge.c index c9f4b5412844..b97a070074b7 100644 --- a/drivers/net/ethernet/marvell/skge.c +++ b/drivers/net/ethernet/marvell/skge.c @@ -3114,7 +3114,7 @@ static struct sk_buff *skge_rx_get(struct net_device *dev, skb_put(skb, len); if (dev->features & NETIF_F_RXCSUM) { - skb->csum = csum; + skb->csum = le16_to_cpu(csum); skb->ip_summed = CHECKSUM_COMPLETE; } diff --git a/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c b/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c index 37dfdb1329f4..170a49a6803e 100644 --- a/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c +++ b/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c @@ -463,12 +463,31 @@ void mlx4_init_quotas(struct mlx4_dev *dev) priv->mfunc.master.res_tracker.res_alloc[RES_MPT].quota[pf]; } -static int get_max_gauranteed_vfs_counter(struct mlx4_dev *dev) +static int +mlx4_calc_res_counter_guaranteed(struct mlx4_dev *dev, + struct resource_allocator *res_alloc, + int vf) { - /* reduce the sink counter */ - return (dev->caps.max_counters - 1 - - (MLX4_PF_COUNTERS_PER_PORT * MLX4_MAX_PORTS)) - / MLX4_MAX_PORTS; + struct mlx4_active_ports actv_ports; + int ports, counters_guaranteed; + + /* For master, only allocate according to the number of phys ports */ + if (vf == mlx4_master_func_num(dev)) + return MLX4_PF_COUNTERS_PER_PORT * dev->caps.num_ports; + + /* calculate real number of ports for the VF */ + actv_ports = mlx4_get_active_ports(dev, vf); + ports = bitmap_weight(actv_ports.ports, dev->caps.num_ports); + counters_guaranteed = ports * MLX4_VF_COUNTERS_PER_PORT; + + /* If we do not have enough counters for this VF, do not + * allocate any for it. '-1' to reduce the sink counter. 
+ */ + if ((res_alloc->res_reserved + counters_guaranteed) > + (dev->caps.max_counters - 1)) + return 0; + + return counters_guaranteed; } int mlx4_init_resource_tracker(struct mlx4_dev *dev) @@ -476,7 +495,6 @@ int mlx4_init_resource_tracker(struct mlx4_dev *dev) struct mlx4_priv *priv = mlx4_priv(dev); int i, j; int t; - int max_vfs_guarantee_counter = get_max_gauranteed_vfs_counter(dev); priv->mfunc.master.res_tracker.slave_list = kzalloc(dev->num_slaves * sizeof(struct slave_list), @@ -593,16 +611,8 @@ int mlx4_init_resource_tracker(struct mlx4_dev *dev) break; case RES_COUNTER: res_alloc->quota[t] = dev->caps.max_counters; - if (t == mlx4_master_func_num(dev)) - res_alloc->guaranteed[t] = - MLX4_PF_COUNTERS_PER_PORT * - MLX4_MAX_PORTS; - else if (t <= max_vfs_guarantee_counter) - res_alloc->guaranteed[t] = - MLX4_VF_COUNTERS_PER_PORT * - MLX4_MAX_PORTS; - else - res_alloc->guaranteed[t] = 0; + res_alloc->guaranteed[t] = + mlx4_calc_res_counter_guaranteed(dev, res_alloc, t); res_alloc->res_free -= res_alloc->guaranteed[t]; break; default: diff --git a/drivers/net/ethernet/nxp/lpc_eth.c b/drivers/net/ethernet/nxp/lpc_eth.c index 057665180f13..ba14bad81a21 100644 --- a/drivers/net/ethernet/nxp/lpc_eth.c +++ b/drivers/net/ethernet/nxp/lpc_eth.c @@ -1417,13 +1417,14 @@ static int lpc_eth_drv_probe(struct platform_device *pdev) pldat->dma_buff_base_p = dma_handle; netdev_dbg(ndev, "IO address space :%pR\n", res); - netdev_dbg(ndev, "IO address size :%d\n", resource_size(res)); + netdev_dbg(ndev, "IO address size :%zd\n", + (size_t)resource_size(res)); netdev_dbg(ndev, "IO address (mapped) :0x%p\n", pldat->net_base); netdev_dbg(ndev, "IRQ number :%d\n", ndev->irq); - netdev_dbg(ndev, "DMA buffer size :%d\n", pldat->dma_buff_size); - netdev_dbg(ndev, "DMA buffer P address :0x%08x\n", - pldat->dma_buff_base_p); + netdev_dbg(ndev, "DMA buffer size :%zd\n", pldat->dma_buff_size); + netdev_dbg(ndev, "DMA buffer P address :%pad\n", + &pldat->dma_buff_base_p); netdev_dbg(ndev, "DMA buffer V address :0x%p\n", pldat->dma_buff_base_v); @@ -1470,8 +1471,8 @@ static int lpc_eth_drv_probe(struct platform_device *pdev) if (ret) goto err_out_unregister_netdev; - netdev_info(ndev, "LPC mac at 0x%08x irq %d\n", - res->start, ndev->irq); + netdev_info(ndev, "LPC mac at 0x%08lx irq %d\n", + (unsigned long)res->start, ndev->irq); phydev = pldat->phy_dev; diff --git a/drivers/net/ethernet/qlogic/qede/qede_main.c b/drivers/net/ethernet/qlogic/qede/qede_main.c index f4657a2e730a..8b63c9d183a2 100644 --- a/drivers/net/ethernet/qlogic/qede/qede_main.c +++ b/drivers/net/ethernet/qlogic/qede/qede_main.c @@ -1465,8 +1465,16 @@ enum qede_remove_mode { static void __qede_remove(struct pci_dev *pdev, enum qede_remove_mode mode) { struct net_device *ndev = pci_get_drvdata(pdev); - struct qede_dev *edev = netdev_priv(ndev); - struct qed_dev *cdev = edev->cdev; + struct qede_dev *edev; + struct qed_dev *cdev; + + if (!ndev) { + dev_info(&pdev->dev, "Device has already been removed\n"); + return; + } + + edev = netdev_priv(ndev); + cdev = edev->cdev; DP_INFO(edev, "Starting qede_remove\n"); diff --git a/drivers/net/ethernet/qlogic/qla3xxx.c b/drivers/net/ethernet/qlogic/qla3xxx.c index 355c5fb802cd..c653b97d84d5 100644 --- a/drivers/net/ethernet/qlogic/qla3xxx.c +++ b/drivers/net/ethernet/qlogic/qla3xxx.c @@ -2783,6 +2783,7 @@ static int ql_alloc_large_buffers(struct ql3_adapter *qdev) netdev_err(qdev->ndev, "PCI mapping failed with error: %d\n", err); + dev_kfree_skb_irq(skb); ql_free_large_buffers(qdev); return -ENOMEM; } 
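The qla3xxx hunk above adds the dev_kfree_skb_irq() that was missing on the PCI-mapping failure path of ql_alloc_large_buffers(). Below is a minimal sketch of that leak-on-error pattern, not taken from the driver itself; MAX_BUFS, RX_BUF_SIZE, map_one() and free_rx_buffers() are hypothetical placeholders for the driver's own ring size and helpers, while netdev_alloc_skb() and dev_kfree_skb_irq() are the regular kernel APIs the patch relies on.

#include <linux/netdevice.h>
#include <linux/skbuff.h>

#define MAX_BUFS	16	/* hypothetical ring size, illustrative only */
#define RX_BUF_SIZE	2048	/* hypothetical buffer length */

/* Hypothetical stand-ins for the driver's own mapping and teardown helpers. */
static int map_one(struct net_device *ndev, struct sk_buff *skb);
static void free_rx_buffers(struct net_device *ndev);

static int alloc_rx_buffers(struct net_device *ndev)
{
	struct sk_buff *skb;
	int i;

	for (i = 0; i < MAX_BUFS; i++) {
		skb = netdev_alloc_skb(ndev, RX_BUF_SIZE);
		if (!skb)
			return -ENOMEM;

		if (map_one(ndev, skb)) {
			/* The skb just allocated is not yet tracked anywhere,
			 * so it must be freed here or it leaks on every early
			 * exit -- the same omission the hunk above fixes.
			 */
			dev_kfree_skb_irq(skb);
			free_rx_buffers(ndev);	/* undo buffers set up so far */
			return -ENOMEM;
		}
	}
	return 0;
}

The point of the sketch is only the ordering on the error path: release the buffer that has no owner yet, then tear down whatever earlier iterations already registered, then return.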
diff --git a/drivers/net/ieee802154/atusb.c b/drivers/net/ieee802154/atusb.c index 3a429f1a8002..d5e0e2aedc55 100644 --- a/drivers/net/ieee802154/atusb.c +++ b/drivers/net/ieee802154/atusb.c @@ -756,10 +756,11 @@ static void atusb_disconnect(struct usb_interface *interface) ieee802154_unregister_hw(atusb->hw); + usb_put_dev(atusb->usb_dev); + ieee802154_free_hw(atusb->hw); usb_set_intfdata(interface, NULL); - usb_put_dev(atusb->usb_dev); pr_debug("atusb_disconnect done\n"); } diff --git a/drivers/net/phy/national.c b/drivers/net/phy/national.c index 0a7b9c7f09a2..5c40655ec808 100644 --- a/drivers/net/phy/national.c +++ b/drivers/net/phy/national.c @@ -110,14 +110,17 @@ static void ns_giga_speed_fallback(struct phy_device *phydev, int mode) static void ns_10_base_t_hdx_loopack(struct phy_device *phydev, int disable) { + u16 lb_dis = BIT(1); + if (disable) - ns_exp_write(phydev, 0x1c0, ns_exp_read(phydev, 0x1c0) | 1); + ns_exp_write(phydev, 0x1c0, + ns_exp_read(phydev, 0x1c0) | lb_dis); else ns_exp_write(phydev, 0x1c0, - ns_exp_read(phydev, 0x1c0) & 0xfffe); + ns_exp_read(phydev, 0x1c0) & ~lb_dis); pr_debug("10BASE-T HDX loopback %s\n", - (ns_exp_read(phydev, 0x1c0) & 0x0001) ? "off" : "on"); + (ns_exp_read(phydev, 0x1c0) & lb_dis) ? "off" : "on"); } static int ns_config_init(struct phy_device *phydev) diff --git a/drivers/net/usb/cdc_ncm.c b/drivers/net/usb/cdc_ncm.c index 1e921e5eddc7..71ef895b4dca 100644 --- a/drivers/net/usb/cdc_ncm.c +++ b/drivers/net/usb/cdc_ncm.c @@ -533,8 +533,8 @@ static void cdc_ncm_set_dgram_size(struct usbnet *dev, int new_size) /* read current mtu value from device */ err = usbnet_read_cmd(dev, USB_CDC_GET_MAX_DATAGRAM_SIZE, USB_TYPE_CLASS | USB_DIR_IN | USB_RECIP_INTERFACE, - 0, iface_no, &max_datagram_size, 2); - if (err < 0) { + 0, iface_no, &max_datagram_size, sizeof(max_datagram_size)); + if (err < sizeof(max_datagram_size)) { dev_dbg(&dev->intf->dev, "GET_MAX_DATAGRAM_SIZE failed\n"); goto out; } @@ -545,7 +545,7 @@ static void cdc_ncm_set_dgram_size(struct usbnet *dev, int new_size) max_datagram_size = cpu_to_le16(ctx->max_datagram_size); err = usbnet_write_cmd(dev, USB_CDC_SET_MAX_DATAGRAM_SIZE, USB_TYPE_CLASS | USB_DIR_OUT | USB_RECIP_INTERFACE, - 0, iface_no, &max_datagram_size, 2); + 0, iface_no, &max_datagram_size, sizeof(max_datagram_size)); if (err < 0) dev_dbg(&dev->intf->dev, "SET_MAX_DATAGRAM_SIZE failed\n"); @@ -636,8 +636,12 @@ cdc_ncm_find_endpoints(struct usbnet *dev, struct usb_interface *intf) u8 ep; for (ep = 0; ep < intf->cur_altsetting->desc.bNumEndpoints; ep++) { - e = intf->cur_altsetting->endpoint + ep; + + /* ignore endpoints which cannot transfer data */ + if (!usb_endpoint_maxp(&e->desc)) + continue; + switch (e->desc.bmAttributes & USB_ENDPOINT_XFERTYPE_MASK) { case USB_ENDPOINT_XFER_INT: if (usb_endpoint_dir_in(&e->desc)) { diff --git a/drivers/net/usb/hso.c b/drivers/net/usb/hso.c index 79cede19e0c4..cbbff16d438f 100644 --- a/drivers/net/usb/hso.c +++ b/drivers/net/usb/hso.c @@ -2650,14 +2650,18 @@ static struct hso_device *hso_create_bulk_serial_device( */ if (serial->tiocmget) { tiocmget = serial->tiocmget; + tiocmget->endp = hso_get_ep(interface, + USB_ENDPOINT_XFER_INT, + USB_DIR_IN); + if (!tiocmget->endp) { + dev_err(&interface->dev, "Failed to find INT IN ep\n"); + goto exit; + } + tiocmget->urb = usb_alloc_urb(0, GFP_KERNEL); if (tiocmget->urb) { mutex_init(&tiocmget->mutex); init_waitqueue_head(&tiocmget->waitq); - tiocmget->endp = hso_get_ep( - interface, - USB_ENDPOINT_XFER_INT, - USB_DIR_IN); } else 
hso_free_tiomget(serial); } diff --git a/drivers/net/usb/sr9800.c b/drivers/net/usb/sr9800.c index 004c955c1fd1..da0ae16f5c74 100644 --- a/drivers/net/usb/sr9800.c +++ b/drivers/net/usb/sr9800.c @@ -336,7 +336,7 @@ static void sr_set_multicast(struct net_device *net) static int sr_mdio_read(struct net_device *net, int phy_id, int loc) { struct usbnet *dev = netdev_priv(net); - __le16 res; + __le16 res = 0; mutex_lock(&dev->phy_mutex); sr_set_sw_mii(dev); diff --git a/drivers/net/usb/usbnet.c b/drivers/net/usb/usbnet.c index a3a66c481e91..db178e921e5d 100644 --- a/drivers/net/usb/usbnet.c +++ b/drivers/net/usb/usbnet.c @@ -114,6 +114,11 @@ int usbnet_get_endpoints(struct usbnet *dev, struct usb_interface *intf) int intr = 0; e = alt->endpoint + ep; + + /* ignore endpoints which cannot transfer data */ + if (!usb_endpoint_maxp(&e->desc)) + continue; + switch (e->desc.bmAttributes) { case USB_ENDPOINT_XFER_INT: if (!usb_endpoint_dir_in(&e->desc)) @@ -349,6 +354,8 @@ void usbnet_update_max_qlen(struct usbnet *dev) { enum usb_device_speed speed = dev->udev->speed; + if (!dev->rx_urb_size || !dev->hard_mtu) + goto insanity; switch (speed) { case USB_SPEED_HIGH: dev->rx_qlen = MAX_QUEUE_MEMORY / dev->rx_urb_size; @@ -364,6 +371,7 @@ void usbnet_update_max_qlen(struct usbnet *dev) dev->tx_qlen = 5 * MAX_QUEUE_MEMORY / dev->hard_mtu; break; default: +insanity: dev->rx_qlen = dev->tx_qlen = 4; } } diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c index 835129152fc4..536fee1e4b70 100644 --- a/drivers/net/vxlan.c +++ b/drivers/net/vxlan.c @@ -2006,8 +2006,11 @@ static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev, ttl = info->key.ttl; tos = info->key.tos; - if (info->options_len) + if (info->options_len) { + if (info->options_len < sizeof(*md)) + goto drop; md = ip_tunnel_info_opts(info); + } } else { md->gbp = skb->mark; } diff --git a/drivers/net/wireless/ath/ath6kl/usb.c b/drivers/net/wireless/ath/ath6kl/usb.c index 9da3594fd010..fc22c5f47927 100644 --- a/drivers/net/wireless/ath/ath6kl/usb.c +++ b/drivers/net/wireless/ath/ath6kl/usb.c @@ -132,6 +132,10 @@ ath6kl_usb_alloc_urb_from_pipe(struct ath6kl_usb_pipe *pipe) struct ath6kl_urb_context *urb_context = NULL; unsigned long flags; + /* bail if this pipe is not initialized */ + if (!pipe->ar_usb) + return NULL; + spin_lock_irqsave(&pipe->ar_usb->cs_lock, flags); if (!list_empty(&pipe->urb_list_head)) { urb_context = @@ -150,6 +154,10 @@ static void ath6kl_usb_free_urb_to_pipe(struct ath6kl_usb_pipe *pipe, { unsigned long flags; + /* bail if this pipe is not initialized */ + if (!pipe->ar_usb) + return; + spin_lock_irqsave(&pipe->ar_usb->cs_lock, flags); pipe->urb_cnt++; diff --git a/drivers/net/wireless/cnss2/bus.c b/drivers/net/wireless/cnss2/bus.c index e33eb647d9bb..5d77d8db4d53 100644 --- a/drivers/net/wireless/cnss2/bus.c +++ b/drivers/net/wireless/cnss2/bus.c @@ -304,7 +304,7 @@ int cnss_bus_dev_shutdown(struct cnss_plat_data *plat_priv) case CNSS_BUS_PCI: return cnss_pci_dev_shutdown(plat_priv->bus_priv); case CNSS_BUS_USB: - return 0; + return cnss_usb_dev_shutdown(plat_priv->bus_priv); case CNSS_BUS_SDIO: return cnss_sdio_dev_shutdown(plat_priv->bus_priv); default: diff --git a/drivers/net/wireless/cnss2/usb.c b/drivers/net/wireless/cnss2/usb.c index c647e0f8e9ba..c42ef2210a41 100644 --- a/drivers/net/wireless/cnss2/usb.c +++ b/drivers/net/wireless/cnss2/usb.c @@ -54,7 +54,8 @@ int cnss_usb_dev_powerup(struct cnss_plat_data *plat_priv) case QCN7605_STANDALONE_DEVICE_ID: case QCN7605_VER20_STANDALONE_DEVICE_ID: case 
QCN7605_VER20_COMPOSITE_DEVICE_ID: - ret = cnss_qcn7605_usb_powerup(plat_priv); + if (test_bit(CNSS_DEV_REMOVED, &plat_priv->driver_state)) + ret = cnss_qcn7605_usb_powerup(plat_priv); break; default: cnss_pr_err("Unknown device_id found: %lu\n", @@ -160,7 +161,6 @@ int cnss_usb_unregister_driver_hdlr(struct cnss_usb_data *usb_priv) set_bit(CNSS_DRIVER_UNLOADING, &plat_priv->driver_state); cnss_usb_dev_shutdown(usb_priv); usb_priv->driver_ops = NULL; - usb_priv->plat_priv = NULL; return 0; } diff --git a/drivers/net/wireless/libertas/if_usb.c b/drivers/net/wireless/libertas/if_usb.c index dff08a2896a3..d271eaf1f949 100644 --- a/drivers/net/wireless/libertas/if_usb.c +++ b/drivers/net/wireless/libertas/if_usb.c @@ -49,7 +49,8 @@ static const struct lbs_fw_table fw_table[] = { { MODEL_8388, "libertas/usb8388_v5.bin", NULL }, { MODEL_8388, "libertas/usb8388.bin", NULL }, { MODEL_8388, "usb8388.bin", NULL }, - { MODEL_8682, "libertas/usb8682.bin", NULL } + { MODEL_8682, "libertas/usb8682.bin", NULL }, + { 0, NULL, NULL } }; static struct usb_device_id if_usb_table[] = { diff --git a/drivers/net/wireless/realtek/rtlwifi/ps.c b/drivers/net/wireless/realtek/rtlwifi/ps.c index 626ff300352b..1fd3f22796a7 100644 --- a/drivers/net/wireless/realtek/rtlwifi/ps.c +++ b/drivers/net/wireless/realtek/rtlwifi/ps.c @@ -781,6 +781,9 @@ static void rtl_p2p_noa_ie(struct ieee80211_hw *hw, void *data, return; } else { noa_num = (noa_len - 2) / 13; + if (noa_num > P2P_MAX_NOA_NUM) + noa_num = P2P_MAX_NOA_NUM; + } noa_index = ie[3]; if (rtlpriv->psc.p2p_ps_info.p2p_ps_mode == @@ -875,6 +878,9 @@ static void rtl_p2p_action_ie(struct ieee80211_hw *hw, void *data, return; } else { noa_num = (noa_len - 2) / 13; + if (noa_num > P2P_MAX_NOA_NUM) + noa_num = P2P_MAX_NOA_NUM; + } noa_index = ie[3]; if (rtlpriv->psc.p2p_ps_info.p2p_ps_mode == diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c index 60b26f32d31d..2008c6a02b8a 100644 --- a/drivers/net/xen-netback/interface.c +++ b/drivers/net/xen-netback/interface.c @@ -620,7 +620,6 @@ err_tx_unbind: err_unmap: xenvif_unmap_frontend_rings(queue); err: - module_put(THIS_MODULE); return err; } diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c index 574c93a24180..89eec6fead75 100644 --- a/drivers/net/xen-netfront.c +++ b/drivers/net/xen-netfront.c @@ -874,9 +874,9 @@ static int xennet_set_skb_gso(struct sk_buff *skb, return 0; } -static RING_IDX xennet_fill_frags(struct netfront_queue *queue, - struct sk_buff *skb, - struct sk_buff_head *list) +static int xennet_fill_frags(struct netfront_queue *queue, + struct sk_buff *skb, + struct sk_buff_head *list) { RING_IDX cons = queue->rx.rsp_cons; struct sk_buff *nskb; @@ -895,7 +895,7 @@ static RING_IDX xennet_fill_frags(struct netfront_queue *queue, if (unlikely(skb_shinfo(skb)->nr_frags >= MAX_SKB_FRAGS)) { queue->rx.rsp_cons = ++cons + skb_queue_len(list); kfree_skb(nskb); - return ~0U; + return -ENOENT; } skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, @@ -906,7 +906,9 @@ static RING_IDX xennet_fill_frags(struct netfront_queue *queue, kfree_skb(nskb); } - return cons; + queue->rx.rsp_cons = cons; + + return 0; } static int checksum_setup(struct net_device *dev, struct sk_buff *skb) @@ -1032,8 +1034,7 @@ err: skb->data_len = rx->status; skb->len += rx->status; - i = xennet_fill_frags(queue, skb, &tmpq); - if (unlikely(i == ~0U)) + if (unlikely(xennet_fill_frags(queue, skb, &tmpq))) goto err; if (rx->flags & XEN_NETRXF_csum_blank) @@ -1043,7 +1044,7 @@ err: 
__skb_queue_tail(&rxq, skb); - queue->rx.rsp_cons = ++i; + i = ++queue->rx.rsp_cons; work_done++; } diff --git a/drivers/nfc/fdp/i2c.c b/drivers/nfc/fdp/i2c.c index 2d043415f326..dc2f3d5b2a36 100644 --- a/drivers/nfc/fdp/i2c.c +++ b/drivers/nfc/fdp/i2c.c @@ -278,7 +278,7 @@ static void fdp_nci_i2c_read_device_properties(struct device *dev, *fw_vsc_cfg, len); if (r) { - devm_kfree(dev, fw_vsc_cfg); + devm_kfree(dev, *fw_vsc_cfg); goto vsc_read_err; } } else { diff --git a/drivers/nfc/st21nfca/core.c b/drivers/nfc/st21nfca/core.c index dd8b150fbffa..1826cd330468 100644 --- a/drivers/nfc/st21nfca/core.c +++ b/drivers/nfc/st21nfca/core.c @@ -726,6 +726,7 @@ static int st21nfca_hci_complete_target_discovered(struct nfc_hci_dev *hdev, NFC_PROTO_FELICA_MASK; } else { kfree_skb(nfcid_skb); + nfcid_skb = NULL; /* P2P in type A */ r = nfc_hci_get_param(hdev, ST21NFCA_RF_READER_F_GATE, ST21NFCA_RF_READER_F_NFCID1, diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c index 2eac3df7dd29..af9e4785b7a6 100644 --- a/drivers/of/unittest.c +++ b/drivers/of/unittest.c @@ -924,6 +924,7 @@ static int __init unittest_data_add(void) of_fdt_unflatten_tree(unittest_data, &unittest_data_node); if (!unittest_data_node) { pr_warn("%s: No tree to attach; not running tests\n", __func__); + kfree(unittest_data); return -ENODATA; } of_node_set_flag(unittest_data_node, OF_DETACHED); diff --git a/drivers/parisc/dino.c b/drivers/parisc/dino.c index 005ea632ba53..8524faf28acb 100644 --- a/drivers/parisc/dino.c +++ b/drivers/parisc/dino.c @@ -160,6 +160,15 @@ struct dino_device (struct dino_device *)__pdata; }) +/* Check if PCI device is behind a Card-mode Dino. */ +static int pci_dev_is_behind_card_dino(struct pci_dev *dev) +{ + struct dino_device *dino_dev; + + dino_dev = DINO_DEV(parisc_walk_tree(dev->bus->bridge)); + return is_card_dino(&dino_dev->hba.dev->id); +} + /* * Dino Configuration Space Accessor Functions */ @@ -442,6 +451,21 @@ static void quirk_cirrus_cardbus(struct pci_dev *dev) } DECLARE_PCI_FIXUP_ENABLE(PCI_VENDOR_ID_CIRRUS, PCI_DEVICE_ID_CIRRUS_6832, quirk_cirrus_cardbus ); +#ifdef CONFIG_TULIP +static void pci_fixup_tulip(struct pci_dev *dev) +{ + if (!pci_dev_is_behind_card_dino(dev)) + return; + if (!(pci_resource_flags(dev, 1) & IORESOURCE_MEM)) + return; + pr_warn("%s: HP HSC-PCI Cards with card-mode Dino not yet supported.\n", + pci_name(dev)); + /* Disable this card by zeroing the PCI resources */ + memset(&dev->resource[0], 0, sizeof(dev->resource[0])); + memset(&dev->resource[1], 0, sizeof(dev->resource[1])); +} +DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_DEC, PCI_ANY_ID, pci_fixup_tulip); +#endif /* CONFIG_TULIP */ static void __init dino_bios_init(void) diff --git a/drivers/pci/host/pci-tegra.c b/drivers/pci/host/pci-tegra.c index 30323114c53c..9865793b538a 100644 --- a/drivers/pci/host/pci-tegra.c +++ b/drivers/pci/host/pci-tegra.c @@ -586,12 +586,15 @@ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_NVIDIA, 0x0bf1, tegra_pcie_fixup_class); DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_NVIDIA, 0x0e1c, tegra_pcie_fixup_class); DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_NVIDIA, 0x0e1d, tegra_pcie_fixup_class); -/* Tegra PCIE requires relaxed ordering */ +/* Tegra20 and Tegra30 PCIE requires relaxed ordering */ static void tegra_pcie_relax_enable(struct pci_dev *dev) { pcie_capability_set_word(dev, PCI_EXP_DEVCTL, PCI_EXP_DEVCTL_RELAX_EN); } -DECLARE_PCI_FIXUP_FINAL(PCI_ANY_ID, PCI_ANY_ID, tegra_pcie_relax_enable); +DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_NVIDIA, 0x0bf0, tegra_pcie_relax_enable); 
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_NVIDIA, 0x0bf1, tegra_pcie_relax_enable); +DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_NVIDIA, 0x0e1c, tegra_pcie_relax_enable); +DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_NVIDIA, 0x0e1d, tegra_pcie_relax_enable); static int tegra_pcie_setup(int nr, struct pci_sys_data *sys) { diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c index 82b0c2cc2fd3..b7f65fc54dc2 100644 --- a/drivers/pci/pci.c +++ b/drivers/pci/pci.c @@ -704,19 +704,6 @@ void pci_update_current_state(struct pci_dev *dev, pci_power_t state) } /** - * pci_power_up - Put the given device into D0 forcibly - * @dev: PCI device to power up - */ -void pci_power_up(struct pci_dev *dev) -{ - if (platform_pci_power_manageable(dev)) - platform_pci_set_power_state(dev, PCI_D0); - - pci_raw_set_power_state(dev, PCI_D0); - pci_update_current_state(dev, PCI_D0); -} - -/** * pci_platform_power_transition - Use platform to change device power state * @dev: PCI device to handle. * @state: State to put the device into. @@ -892,6 +879,17 @@ int pci_set_power_state(struct pci_dev *dev, pci_power_t state) EXPORT_SYMBOL(pci_set_power_state); /** + * pci_power_up - Put the given device into D0 forcibly + * @dev: PCI device to power up + */ +void pci_power_up(struct pci_dev *dev) +{ + __pci_start_power_transition(dev, PCI_D0); + pci_raw_set_power_state(dev, PCI_D0); + pci_update_current_state(dev, PCI_D0); +} + +/** * pci_choose_state - Choose the power state of a PCI device * @dev: PCI device to be suspended * @state: target sleep state for the whole system. This is the value diff --git a/drivers/pinctrl/pinctrl-tegra.c b/drivers/pinctrl/pinctrl-tegra.c index 0fd7fd2b0f72..a30e967d75c2 100644 --- a/drivers/pinctrl/pinctrl-tegra.c +++ b/drivers/pinctrl/pinctrl-tegra.c @@ -52,7 +52,9 @@ static inline u32 pmx_readl(struct tegra_pmx *pmx, u32 bank, u32 reg) static inline void pmx_writel(struct tegra_pmx *pmx, u32 val, u32 bank, u32 reg) { - writel(val, pmx->regs[bank] + reg); + writel_relaxed(val, pmx->regs[bank] + reg); + /* make sure pinmux register write completed */ + pmx_readl(pmx, bank, reg); } static int tegra_pinctrl_get_groups_count(struct pinctrl_dev *pctldev) diff --git a/drivers/regulator/pfuze100-regulator.c b/drivers/regulator/pfuze100-regulator.c index c68556bf6f39..ec185502dceb 100644 --- a/drivers/regulator/pfuze100-regulator.c +++ b/drivers/regulator/pfuze100-regulator.c @@ -609,7 +609,13 @@ static int pfuze100_regulator_probe(struct i2c_client *client, /* SW2~SW4 high bit check and modify the voltage value table */ if (i >= sw_check_start && i <= sw_check_end) { - regmap_read(pfuze_chip->regmap, desc->vsel_reg, &val); + ret = regmap_read(pfuze_chip->regmap, + desc->vsel_reg, &val); + if (ret) { + dev_err(&client->dev, "Fails to read from the register.\n"); + return ret; + } + if (val & sw_hi) { if (pfuze_chip->chip_id == PFUZE3000) { desc->volt_table = pfuze3000_sw2hi; diff --git a/drivers/regulator/ti-abb-regulator.c b/drivers/regulator/ti-abb-regulator.c index d2f994298753..6d17357b3a24 100644 --- a/drivers/regulator/ti-abb-regulator.c +++ b/drivers/regulator/ti-abb-regulator.c @@ -173,19 +173,14 @@ static int ti_abb_wait_txdone(struct device *dev, struct ti_abb *abb) while (timeout++ <= abb->settling_time) { status = ti_abb_check_txdone(abb); if (status) - break; + return 0; udelay(1); } - if (timeout > abb->settling_time) { - dev_warn_ratelimited(dev, - "%s:TRANXDONE timeout(%duS) int=0x%08x\n", - __func__, timeout, readl(abb->int_base)); - return -ETIMEDOUT; - } - - return 0; + 
dev_warn_ratelimited(dev, "%s:TRANXDONE timeout(%duS) int=0x%08x\n", + __func__, timeout, readl(abb->int_base)); + return -ETIMEDOUT; } /** @@ -205,19 +200,14 @@ static int ti_abb_clear_all_txdone(struct device *dev, const struct ti_abb *abb) status = ti_abb_check_txdone(abb); if (!status) - break; + return 0; udelay(1); } - if (timeout > abb->settling_time) { - dev_warn_ratelimited(dev, - "%s:TRANXDONE timeout(%duS) int=0x%08x\n", - __func__, timeout, readl(abb->int_base)); - return -ETIMEDOUT; - } - - return 0; + dev_warn_ratelimited(dev, "%s:TRANXDONE timeout(%duS) int=0x%08x\n", + __func__, timeout, readl(abb->int_base)); + return -ETIMEDOUT; } /** diff --git a/drivers/s390/cio/ccwgroup.c b/drivers/s390/cio/ccwgroup.c index e443b0d0b236..0d59c128f734 100644 --- a/drivers/s390/cio/ccwgroup.c +++ b/drivers/s390/cio/ccwgroup.c @@ -369,7 +369,7 @@ int ccwgroup_create_dev(struct device *parent, struct ccwgroup_driver *gdrv, goto error; } /* Check for trailing stuff. */ - if (i == num_devices && strlen(buf) > 0) { + if (i == num_devices && buf && strlen(buf) > 0) { rc = -EINVAL; goto error; } diff --git a/drivers/s390/cio/css.c b/drivers/s390/cio/css.c index 489e703dc82d..8ecc956ecb59 100644 --- a/drivers/s390/cio/css.c +++ b/drivers/s390/cio/css.c @@ -1120,6 +1120,8 @@ device_initcall(cio_settle_init); int sch_is_pseudo_sch(struct subchannel *sch) { + if (!sch->dev.parent) + return 0; return sch == to_css(sch->dev.parent)->pseudo_subchannel; } diff --git a/drivers/s390/scsi/zfcp_fsf.c b/drivers/s390/scsi/zfcp_fsf.c index 1964391db904..a3aaef4c53a3 100644 --- a/drivers/s390/scsi/zfcp_fsf.c +++ b/drivers/s390/scsi/zfcp_fsf.c @@ -20,6 +20,11 @@ struct kmem_cache *zfcp_fsf_qtcb_cache; +static bool ber_stop = true; +module_param(ber_stop, bool, 0600); +MODULE_PARM_DESC(ber_stop, + "Shuts down FCP devices for FCP channels that report a bit-error count in excess of its threshold (default on)"); + static void zfcp_fsf_request_timeout_handler(unsigned long data) { struct zfcp_adapter *adapter = (struct zfcp_adapter *) data; @@ -231,10 +236,15 @@ static void zfcp_fsf_status_read_handler(struct zfcp_fsf_req *req) case FSF_STATUS_READ_SENSE_DATA_AVAIL: break; case FSF_STATUS_READ_BIT_ERROR_THRESHOLD: - dev_warn(&adapter->ccw_device->dev, - "The error threshold for checksum statistics " - "has been exceeded\n"); zfcp_dbf_hba_bit_err("fssrh_3", req); + if (ber_stop) { + dev_warn(&adapter->ccw_device->dev, + "All paths over this FCP device are disused because of excessive bit errors\n"); + zfcp_erp_adapter_shutdown(adapter, 0, "fssrh_b"); + } else { + dev_warn(&adapter->ccw_device->dev, + "The error threshold for checksum statistics has been exceeded\n"); + } break; case FSF_STATUS_READ_LINK_DOWN: zfcp_fsf_status_read_link_down(req); diff --git a/drivers/scsi/Kconfig b/drivers/scsi/Kconfig index 433c5e3d5733..070359a7eea1 100644 --- a/drivers/scsi/Kconfig +++ b/drivers/scsi/Kconfig @@ -1013,7 +1013,7 @@ config SCSI_SNI_53C710 config 53C700_LE_ON_BE bool - depends on SCSI_LASI700 + depends on SCSI_LASI700 || SCSI_SNI_53C710 default y config SCSI_STEX diff --git a/drivers/scsi/lpfc/lpfc_nportdisc.c b/drivers/scsi/lpfc/lpfc_nportdisc.c index 193733e8c823..3a4613f9fb9f 100644 --- a/drivers/scsi/lpfc/lpfc_nportdisc.c +++ b/drivers/scsi/lpfc/lpfc_nportdisc.c @@ -759,9 +759,9 @@ lpfc_disc_set_adisc(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp) if (!(vport->fc_flag & FC_PT2PT)) { /* Check config parameter use-adisc or FCP-2 */ - if ((vport->cfg_use_adisc && (vport->fc_flag & FC_RSCN_MODE)) || + if 
(vport->cfg_use_adisc && ((vport->fc_flag & FC_RSCN_MODE) || ((ndlp->nlp_fcp_info & NLP_FCP_2_DEVICE) && - (ndlp->nlp_type & NLP_FCP_TARGET))) { + (ndlp->nlp_type & NLP_FCP_TARGET)))) { spin_lock_irq(shost->host_lock); ndlp->nlp_flag |= NLP_NPR_ADISC; spin_unlock_irq(shost->host_lock); diff --git a/drivers/scsi/megaraid.c b/drivers/scsi/megaraid.c index 19bffe0b2cc0..2cbfec6a7466 100644 --- a/drivers/scsi/megaraid.c +++ b/drivers/scsi/megaraid.c @@ -4219,11 +4219,11 @@ megaraid_probe_one(struct pci_dev *pdev, const struct pci_device_id *id) */ if (pdev->subsystem_vendor == PCI_VENDOR_ID_COMPAQ && pdev->subsystem_device == 0xC000) - return -ENODEV; + goto out_disable_device; /* Now check the magic signature byte */ pci_read_config_word(pdev, PCI_CONF_AMISIG, &magic); if (magic != HBA_SIGNATURE_471 && magic != HBA_SIGNATURE) - return -ENODEV; + goto out_disable_device; /* Ok it is probably a megaraid */ } diff --git a/drivers/scsi/qla2xxx/qla_bsg.c b/drivers/scsi/qla2xxx/qla_bsg.c index c26acde797f0..2d5375d67736 100644 --- a/drivers/scsi/qla2xxx/qla_bsg.c +++ b/drivers/scsi/qla2xxx/qla_bsg.c @@ -252,7 +252,7 @@ qla2x00_process_els(struct fc_bsg_job *bsg_job) srb_t *sp; const char *type; int req_sg_cnt, rsp_sg_cnt; - int rval = (DRIVER_ERROR << 16); + int rval = (DID_ERROR << 16); uint16_t nextlid = 0; if (bsg_job->request->msgcode == FC_BSG_RPT_ELS) { @@ -426,7 +426,7 @@ qla2x00_process_ct(struct fc_bsg_job *bsg_job) struct Scsi_Host *host = bsg_job->shost; scsi_qla_host_t *vha = shost_priv(host); struct qla_hw_data *ha = vha->hw; - int rval = (DRIVER_ERROR << 16); + int rval = (DID_ERROR << 16); int req_sg_cnt, rsp_sg_cnt; uint16_t loop_id; struct fc_port *fcport; @@ -1910,7 +1910,7 @@ qlafx00_mgmt_cmd(struct fc_bsg_job *bsg_job) struct Scsi_Host *host = bsg_job->shost; scsi_qla_host_t *vha = shost_priv(host); struct qla_hw_data *ha = vha->hw; - int rval = (DRIVER_ERROR << 16); + int rval = (DID_ERROR << 16); struct qla_mt_iocb_rqst_fx00 *piocb_rqst; srb_t *sp; int req_sg_cnt = 0, rsp_sg_cnt = 0; diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c index ff5df33fc740..611a127f08d8 100644 --- a/drivers/scsi/qla2xxx/qla_os.c +++ b/drivers/scsi/qla2xxx/qla_os.c @@ -2962,6 +2962,10 @@ qla2x00_shutdown(struct pci_dev *pdev) /* Stop currently executing firmware. 
*/ qla2x00_try_to_stop_firmware(vha); + /* Disable timer */ + if (vha->timer_active) + qla2x00_stop_timer(vha); + /* Turn adapter off line */ vha->flags.online = 0; diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c index 824e27eec7a1..6c4f54aa60df 100644 --- a/drivers/scsi/qla2xxx/qla_target.c +++ b/drivers/scsi/qla2xxx/qla_target.c @@ -437,6 +437,7 @@ static void qlt_free_session_done(struct work_struct *work) if (logout_started) { bool traced = false; + u16 cnt = 0; while (!ACCESS_ONCE(sess->logout_completed)) { if (!traced) { @@ -446,6 +447,9 @@ static void qlt_free_session_done(struct work_struct *work) traced = true; } msleep(100); + cnt++; + if (cnt > 200) + break; } ql_dbg(ql_dbg_tgt_mgt, vha, 0xf087, diff --git a/drivers/scsi/scsi_logging.c b/drivers/scsi/scsi_logging.c index bd70339c1242..03d9855a6afd 100644 --- a/drivers/scsi/scsi_logging.c +++ b/drivers/scsi/scsi_logging.c @@ -16,57 +16,15 @@ #include <scsi/scsi_eh.h> #include <scsi/scsi_dbg.h> -#define SCSI_LOG_SPOOLSIZE 4096 - -#if (SCSI_LOG_SPOOLSIZE / SCSI_LOG_BUFSIZE) > BITS_PER_LONG -#warning SCSI logging bitmask too large -#endif - -struct scsi_log_buf { - char buffer[SCSI_LOG_SPOOLSIZE]; - unsigned long map; -}; - -static DEFINE_PER_CPU(struct scsi_log_buf, scsi_format_log); - static char *scsi_log_reserve_buffer(size_t *len) { - struct scsi_log_buf *buf; - unsigned long map_bits = sizeof(buf->buffer) / SCSI_LOG_BUFSIZE; - unsigned long idx = 0; - - preempt_disable(); - buf = this_cpu_ptr(&scsi_format_log); - idx = find_first_zero_bit(&buf->map, map_bits); - if (likely(idx < map_bits)) { - while (test_and_set_bit(idx, &buf->map)) { - idx = find_next_zero_bit(&buf->map, map_bits, idx); - if (idx >= map_bits) - break; - } - } - if (WARN_ON(idx >= map_bits)) { - preempt_enable(); - return NULL; - } - *len = SCSI_LOG_BUFSIZE; - return buf->buffer + idx * SCSI_LOG_BUFSIZE; + *len = 128; + return kmalloc(*len, GFP_ATOMIC); } static void scsi_log_release_buffer(char *bufptr) { - struct scsi_log_buf *buf; - unsigned long idx; - int ret; - - buf = this_cpu_ptr(&scsi_format_log); - if (bufptr >= buf->buffer && - bufptr < buf->buffer + SCSI_LOG_SPOOLSIZE) { - idx = (bufptr - buf->buffer) / SCSI_LOG_BUFSIZE; - ret = test_and_clear_bit(idx, &buf->map); - WARN_ON(!ret); - } - preempt_enable(); + kfree(bufptr); } static inline const char *scmd_name(const struct scsi_cmnd *scmd) diff --git a/drivers/scsi/scsi_sysfs.c b/drivers/scsi/scsi_sysfs.c index bb17221d3d01..1290c542f6d6 100644 --- a/drivers/scsi/scsi_sysfs.c +++ b/drivers/scsi/scsi_sysfs.c @@ -679,6 +679,14 @@ sdev_store_delete(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) { struct kernfs_node *kn; + struct scsi_device *sdev = to_scsi_device(dev); + + /* + * We need to try to get module, avoiding the module been removed + * during delete. + */ + if (scsi_device_get(sdev)) + return -ENODEV; kn = sysfs_break_active_protection(&dev->kobj, &attr->attr); WARN_ON_ONCE(!kn); @@ -693,9 +701,10 @@ sdev_store_delete(struct device *dev, struct device_attribute *attr, * state into SDEV_DEL. 
*/ device_remove_file(dev, attr); - scsi_remove_device(to_scsi_device(dev)); + scsi_remove_device(sdev); if (kn) sysfs_unbreak_active_protection(kn); + scsi_device_put(sdev); return count; }; static DEVICE_ATTR(delete, S_IWUSR, NULL, sdev_store_delete); diff --git a/drivers/scsi/sni_53c710.c b/drivers/scsi/sni_53c710.c index 76278072147e..b0f5220ae23a 100644 --- a/drivers/scsi/sni_53c710.c +++ b/drivers/scsi/sni_53c710.c @@ -78,10 +78,8 @@ static int snirm710_probe(struct platform_device *dev) base = res->start; hostdata = kzalloc(sizeof(*hostdata), GFP_KERNEL); - if (!hostdata) { - dev_printk(KERN_ERR, dev, "Failed to allocate host data\n"); + if (!hostdata) return -ENOMEM; - } hostdata->dev = &dev->dev; dma_set_mask(&dev->dev, DMA_BIT_MASK(32)); diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c index 02f89779ac81..9c77b89eb893 100644 --- a/drivers/scsi/ufs/ufshcd.c +++ b/drivers/scsi/ufs/ufshcd.c @@ -9372,6 +9372,9 @@ int ufshcd_shutdown(struct ufs_hba *hba) { int ret = 0; + if (!hba->is_powered) + goto out; + if (ufshcd_is_ufs_dev_poweroff(hba) && ufshcd_is_link_off(hba)) goto out; diff --git a/drivers/soc/qcom/Kconfig b/drivers/soc/qcom/Kconfig index 70fd4035a627..81db4fe93dbc 100644 --- a/drivers/soc/qcom/Kconfig +++ b/drivers/soc/qcom/Kconfig @@ -327,7 +327,7 @@ config MSM_IPC_ROUTER_GLINK_XPRT message exchange. config MSM_IPC_ROUTER_SDIO_XPRT - depends on QCOM_SDIO_CLIENT + depends on QTI_SDIO_CLIENT depends on IPC_ROUTER bool "MSM SDIO XPRT Layer" help diff --git a/drivers/soc/qcom/scm.c b/drivers/soc/qcom/scm.c index c6bdcee8131e..f4125113efff 100644 --- a/drivers/soc/qcom/scm.c +++ b/drivers/soc/qcom/scm.c @@ -1,4 +1,4 @@ -/* Copyright (c) 2010-2017, The Linux Foundation. All rights reserved. +/* Copyright (c) 2010-2018, The Linux Foundation. All rights reserved. * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 and @@ -635,28 +635,7 @@ static int allocate_extra_arg_buffer(struct scm_desc *desc, gfp_t flags) return 0; } -/** - * scm_call2() - Invoke a syscall in the secure world - * @fn_id: The function ID for this syscall - * @desc: Descriptor structure containing arguments and return values - * - * Sends a command to the SCM and waits for the command to finish processing. - * This should *only* be called in pre-emptible context. - * - * A note on cache maintenance: - * Note that any buffers that are expected to be accessed by the secure world - * must be flushed before invoking scm_call and invalidated in the cache - * immediately after scm_call returns. An important point that must be noted - * is that on ARMV8 architectures, invalidation actually also causes a dirty - * cache line to be cleaned (flushed + unset-dirty-bit). Therefore it is of - * paramount importance that the buffer be flushed before invoking scm_call2, - * even if you don't care about the contents of that buffer. - * - * Note that cache maintenance on the argument buffer (desc->args) is taken care - * of by scm_call2; however, callers are responsible for any other cached - * buffers passed over to the secure world. 
-*/ -int scm_call2(u32 fn_id, struct scm_desc *desc) +static int __scm_call2(u32 fn_id, struct scm_desc *desc, bool retry) { int arglen = desc->arginfo & 0xf; int ret, retry_count = 0; @@ -670,7 +649,6 @@ int scm_call2(u32 fn_id, struct scm_desc *desc) return ret; x0 = fn_id | scm_version_mask; - do { mutex_lock(&scm_lock); @@ -700,13 +678,15 @@ int scm_call2(u32 fn_id, struct scm_desc *desc) mutex_unlock(&scm_lmh_lock); mutex_unlock(&scm_lock); + if (!retry) + goto out; if (ret == SCM_V2_EBUSY) msleep(SCM_EBUSY_WAIT_MS); if (retry_count == 33) pr_warn("scm: secure world has been busy for 1 second!\n"); - } while (ret == SCM_V2_EBUSY && (retry_count++ < SCM_EBUSY_MAX_RETRY)); - + } while (ret == SCM_V2_EBUSY && (retry_count++ < SCM_EBUSY_MAX_RETRY)); +out: if (ret < 0) pr_err("scm_call failed: func id %#llx, ret: %d, syscall returns: %#llx, %#llx, %#llx\n", x0, ret, desc->ret[0], desc->ret[1], desc->ret[2]); @@ -717,9 +697,47 @@ int scm_call2(u32 fn_id, struct scm_desc *desc) return scm_remap_error(ret); return 0; } + +/** + * scm_call2() - Invoke a syscall in the secure world + * @fn_id: The function ID for this syscall + * @desc: Descriptor structure containing arguments and return values + * + * Sends a command to the SCM and waits for the command to finish processing. + * This should *only* be called in pre-emptible context. + * + * A note on cache maintenance: + * Note that any buffers that are expected to be accessed by the secure world + * must be flushed before invoking scm_call and invalidated in the cache + * immediately after scm_call returns. An important point that must be noted + * is that on ARMV8 architectures, invalidation actually also causes a dirty + * cache line to be cleaned (flushed + unset-dirty-bit). Therefore it is of + * paramount importance that the buffer be flushed before invoking scm_call2, + * even if you don't care about the contents of that buffer. + * + * Note that cache maintenance on the argument buffer (desc->args) is taken care + * of by scm_call2; however, callers are responsible for any other cached + * buffers passed over to the secure world. + */ +int scm_call2(u32 fn_id, struct scm_desc *desc) +{ + return __scm_call2(fn_id, desc, true); +} EXPORT_SYMBOL(scm_call2); /** + * scm_call2_noretry() - Invoke a syscall in the secure world + * + * Similar to scm_call2 except that there is no retry mechanism + * implemented. + */ +int scm_call2_noretry(u32 fn_id, struct scm_desc *desc) +{ + return __scm_call2(fn_id, desc, false); +} +EXPORT_SYMBOL(scm_call2_noretry); + +/** * scm_call2_atomic() - Invoke a syscall in the secure world * * Similar to scm_call2 except that this can be invoked in atomic context. diff --git a/drivers/spi/spi_qsd.c b/drivers/spi/spi_qsd.c index 799bf2988b30..cf1ea19dbc6a 100644 --- a/drivers/spi/spi_qsd.c +++ b/drivers/spi/spi_qsd.c @@ -1,4 +1,4 @@ -/* Copyright (c) 2008-2018, The Linux Foundation. All rights reserved. +/* Copyright (c) 2008-2019, The Linux Foundation. All rights reserved. 
* * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 and @@ -1390,8 +1390,8 @@ static int msm_spi_process_transfer(struct msm_spi *dd) dd->write_buf = dd->cur_transfer->tx_buf; dd->tx_done = false; dd->rx_done = false; - init_completion(&dd->tx_transfer_complete); - init_completion(&dd->rx_transfer_complete); + reinit_completion(&dd->tx_transfer_complete); + reinit_completion(&dd->rx_transfer_complete); if (dd->cur_transfer->bits_per_word) bpw = dd->cur_transfer->bits_per_word; else @@ -2716,6 +2716,8 @@ skip_dma_resources: spin_lock_init(&dd->queue_lock); mutex_init(&dd->core_lock); init_waitqueue_head(&dd->continue_suspend); + init_completion(&dd->tx_transfer_complete); + init_completion(&dd->rx_transfer_complete); if (!devm_request_mem_region(&pdev->dev, dd->mem_phys_addr, dd->mem_size, SPI_DRV_NAME)) { diff --git a/drivers/staging/android/lowmemorykiller.c b/drivers/staging/android/lowmemorykiller.c index 378fee418085..80070f67a895 100644 --- a/drivers/staging/android/lowmemorykiller.c +++ b/drivers/staging/android/lowmemorykiller.c @@ -119,19 +119,6 @@ void handle_lmk_event(struct task_struct *selected, int selected_tasksize, int tail; struct lmk_event *events; struct lmk_event *event; - int res; - char taskname[MAX_TASKNAME]; - - res = get_cmdline(selected, taskname, MAX_TASKNAME - 1); - - /* No valid process name means this is definitely not associated with a - * userspace activity. - */ - - if (res <= 0 || res >= MAX_TASKNAME) - return; - - taskname[res] = '\0'; spin_lock(&lmk_event_lock); @@ -147,7 +134,7 @@ void handle_lmk_event(struct task_struct *selected, int selected_tasksize, events = (struct lmk_event *) event_buffer.buf; event = &events[head]; - memcpy(event->taskname, taskname, res + 1); + strncpy(event->taskname, selected->comm, MAX_TASKNAME); event->pid = selected->pid; event->uid = from_kuid_munged(current_user_ns(), task_uid(selected)); diff --git a/drivers/staging/fbtft/fbtft-core.c b/drivers/staging/fbtft/fbtft-core.c index 18c2b6daf588..15937e0ef4d9 100644 --- a/drivers/staging/fbtft/fbtft-core.c +++ b/drivers/staging/fbtft/fbtft-core.c @@ -813,7 +813,7 @@ struct fb_info *fbtft_framebuffer_alloc(struct fbtft_display *display, if (par->gamma.curves && gamma) { if (fbtft_gamma_parse_str(par, par->gamma.curves, gamma, strlen(gamma))) - goto alloc_fail; + goto release_framebuf; } /* Transmit buffer */ @@ -836,7 +836,7 @@ struct fb_info *fbtft_framebuffer_alloc(struct fbtft_display *display, txbuf = devm_kzalloc(par->info->device, txbuflen, GFP_KERNEL); } if (!txbuf) - goto alloc_fail; + goto release_framebuf; par->txbuf.buf = txbuf; par->txbuf.len = txbuflen; } @@ -872,6 +872,9 @@ struct fb_info *fbtft_framebuffer_alloc(struct fbtft_display *display, return info; +release_framebuf: + framebuffer_release(info); + alloc_fail: vfree(vmem); diff --git a/drivers/staging/vt6655/device_main.c b/drivers/staging/vt6655/device_main.c index 58b6403458b7..2f3c9217d650 100644 --- a/drivers/staging/vt6655/device_main.c +++ b/drivers/staging/vt6655/device_main.c @@ -1668,8 +1668,10 @@ vt6655_probe(struct pci_dev *pcid, const struct pci_device_id *ent) priv->hw->max_signal = 100; - if (vnt_init(priv)) + if (vnt_init(priv)) { + device_free_info(priv); return -ENODEV; + } device_print_info(priv); pci_set_drvdata(pcid, priv); diff --git a/drivers/target/target_core_device.c b/drivers/target/target_core_device.c index bb6a6c35324a..4198ed4ac607 100644 --- a/drivers/target/target_core_device.c +++ 
b/drivers/target/target_core_device.c @@ -1057,27 +1057,6 @@ passthrough_parse_cdb(struct se_cmd *cmd, unsigned char *cdb = cmd->t_task_cdb; /* - * Clear a lun set in the cdb if the initiator talking to use spoke - * and old standards version, as we can't assume the underlying device - * won't choke up on it. - */ - switch (cdb[0]) { - case READ_10: /* SBC - RDProtect */ - case READ_12: /* SBC - RDProtect */ - case READ_16: /* SBC - RDProtect */ - case SEND_DIAGNOSTIC: /* SPC - SELF-TEST Code */ - case VERIFY: /* SBC - VRProtect */ - case VERIFY_16: /* SBC - VRProtect */ - case WRITE_VERIFY: /* SBC - VRProtect */ - case WRITE_VERIFY_12: /* SBC - VRProtect */ - case MAINTENANCE_IN: /* SPC - Parameter Data Format for SA RTPG */ - break; - default: - cdb[1] &= 0x1f; /* clear logical unit number */ - break; - } - - /* * For REPORT LUNS we always need to emulate the response, for everything * else, pass it up. */ diff --git a/drivers/thermal/thermal_core.c b/drivers/thermal/thermal_core.c index 04cf31c119ef..651fa346af44 100644 --- a/drivers/thermal/thermal_core.c +++ b/drivers/thermal/thermal_core.c @@ -804,7 +804,7 @@ static void thermal_zone_device_set_polling(struct thermal_zone_device *tz, mod_delayed_work(system_freezable_wq, &tz->poll_queue, msecs_to_jiffies(delay)); else - cancel_delayed_work(&tz->poll_queue); + cancel_delayed_work_sync(&tz->poll_queue); } static void monitor_thermal_zone(struct thermal_zone_device *tz) diff --git a/drivers/thunderbolt/nhi.c b/drivers/thunderbolt/nhi.c index 6713fd1958e7..3a39d7d0175a 100644 --- a/drivers/thunderbolt/nhi.c +++ b/drivers/thunderbolt/nhi.c @@ -94,9 +94,20 @@ static void __iomem *ring_options_base(struct tb_ring *ring) return io; } -static void ring_iowrite16desc(struct tb_ring *ring, u32 value, u32 offset) +static void ring_iowrite_cons(struct tb_ring *ring, u16 cons) { - iowrite16(value, ring_desc_base(ring) + offset); + /* + * The other 16-bits in the register is read-only and writes to it + * are ignored by the hardware so we can save one ioread32() by + * filling the read-only bits with zeroes. + */ + iowrite32(cons, ring_desc_base(ring) + 8); +} + +static void ring_iowrite_prod(struct tb_ring *ring, u16 prod) +{ + /* See ring_iowrite_cons() above for explanation */ + iowrite32(prod << 16, ring_desc_base(ring) + 8); } static void ring_iowrite32desc(struct tb_ring *ring, u32 value, u32 offset) @@ -148,7 +159,10 @@ static void ring_write_descriptors(struct tb_ring *ring) descriptor->sof = frame->sof; } ring->head = (ring->head + 1) % ring->size; - ring_iowrite16desc(ring, ring->head, ring->is_tx ? 10 : 8); + if (ring->is_tx) + ring_iowrite_prod(ring, ring->head); + else + ring_iowrite_cons(ring, ring->head); } } @@ -368,7 +382,7 @@ void ring_stop(struct tb_ring *ring) ring_iowrite32options(ring, 0, 0); ring_iowrite64desc(ring, 0, 0); - ring_iowrite16desc(ring, 0, ring->is_tx ? 
10 : 8); + ring_iowrite32desc(ring, 0, 8); ring_iowrite32desc(ring, 0, 12); ring->head = 0; ring->tail = 0; diff --git a/drivers/tty/serial/sc16is7xx.c b/drivers/tty/serial/sc16is7xx.c index 032f3c13b8c4..a3dfefa33e3c 100644 --- a/drivers/tty/serial/sc16is7xx.c +++ b/drivers/tty/serial/sc16is7xx.c @@ -332,6 +332,7 @@ struct sc16is7xx_port { struct kthread_worker kworker; struct task_struct *kworker_task; struct kthread_work irq_work; + struct mutex efr_lock; struct sc16is7xx_one p[0]; }; @@ -496,6 +497,21 @@ static int sc16is7xx_set_baud(struct uart_port *port, int baud) div /= 4; } + /* In an amazing feat of design, the Enhanced Features Register shares + * the address of the Interrupt Identification Register, and is + * switched in by writing a magic value (0xbf) to the Line Control + * Register. Any interrupt firing during this time will see the EFR + * where it expects the IIR to be, leading to "Unexpected interrupt" + * messages. + * + * Prevent this possibility by claiming a mutex while accessing the + * EFR, and claiming the same mutex from within the interrupt handler. + * This is similar to disabling the interrupt, but that doesn't work + * because the bulk of the interrupt processing is run as a workqueue + * job in thread context. + */ + mutex_lock(&s->efr_lock); + lcr = sc16is7xx_port_read(port, SC16IS7XX_LCR_REG); /* Open the LCR divisors for configuration */ @@ -511,6 +527,8 @@ static int sc16is7xx_set_baud(struct uart_port *port, int baud) /* Put LCR back to the normal mode */ sc16is7xx_port_write(port, SC16IS7XX_LCR_REG, lcr); + mutex_unlock(&s->efr_lock); + sc16is7xx_port_update(port, SC16IS7XX_MCR_REG, SC16IS7XX_MCR_CLKSEL_BIT, prescaler); @@ -693,6 +711,8 @@ static void sc16is7xx_ist(struct kthread_work *ws) { struct sc16is7xx_port *s = to_sc16is7xx_port(ws, irq_work); + mutex_lock(&s->efr_lock); + while (1) { bool keep_polling = false; int i; @@ -702,6 +722,8 @@ static void sc16is7xx_ist(struct kthread_work *ws) if (!keep_polling) break; } + + mutex_unlock(&s->efr_lock); } static irqreturn_t sc16is7xx_irq(int irq, void *dev_id) @@ -888,6 +910,9 @@ static void sc16is7xx_set_termios(struct uart_port *port, if (!(termios->c_cflag & CREAD)) port->ignore_status_mask |= SC16IS7XX_LSR_BRK_ERROR_MASK; + /* As above, claim the mutex while accessing the EFR. 
*/ + mutex_lock(&s->efr_lock); + sc16is7xx_port_write(port, SC16IS7XX_LCR_REG, SC16IS7XX_LCR_CONF_MODE_B); @@ -909,6 +934,8 @@ static void sc16is7xx_set_termios(struct uart_port *port, /* Update LCR register */ sc16is7xx_port_write(port, SC16IS7XX_LCR_REG, lcr); + mutex_unlock(&s->efr_lock); + /* Get baud rate generator configuration */ baud = uart_get_baud_rate(port, termios, old, port->uartclk / 16 / 4 / 0xffff, @@ -1172,6 +1199,7 @@ static int sc16is7xx_probe(struct device *dev, s->regmap = regmap; s->devtype = devtype; dev_set_drvdata(dev, s); + mutex_init(&s->efr_lock); init_kthread_worker(&s->kworker); init_kthread_work(&s->irq_work, sc16is7xx_ist); diff --git a/drivers/tty/serial/serial_mctrl_gpio.c b/drivers/tty/serial/serial_mctrl_gpio.c index 02147361eaa9..2b5329a3d716 100644 --- a/drivers/tty/serial/serial_mctrl_gpio.c +++ b/drivers/tty/serial/serial_mctrl_gpio.c @@ -67,6 +67,9 @@ EXPORT_SYMBOL_GPL(mctrl_gpio_set); struct gpio_desc *mctrl_gpio_to_gpiod(struct mctrl_gpios *gpios, enum mctrl_gpio_idx gidx) { + if (gpios == NULL) + return NULL; + return gpios->gpio[gidx]; } EXPORT_SYMBOL_GPL(mctrl_gpio_to_gpiod); diff --git a/drivers/tty/serial/uartlite.c b/drivers/tty/serial/uartlite.c index b1c6bd3d483f..c27db21b71ba 100644 --- a/drivers/tty/serial/uartlite.c +++ b/drivers/tty/serial/uartlite.c @@ -701,7 +701,8 @@ err_uart: static void __exit ulite_exit(void) { platform_driver_unregister(&ulite_platform_driver); - uart_unregister_driver(&ulite_uart_driver); + if (ulite_uart_driver.state) + uart_unregister_driver(&ulite_uart_driver); } module_init(ulite_init); diff --git a/drivers/usb/class/usblp.c b/drivers/usb/class/usblp.c index 071964c7847f..07c3c3449147 100644 --- a/drivers/usb/class/usblp.c +++ b/drivers/usb/class/usblp.c @@ -458,6 +458,7 @@ static void usblp_cleanup(struct usblp *usblp) kfree(usblp->readbuf); kfree(usblp->device_id_string); kfree(usblp->statusbuf); + usb_put_intf(usblp->intf); kfree(usblp); } @@ -474,10 +475,12 @@ static int usblp_release(struct inode *inode, struct file *file) mutex_lock(&usblp_mutex); usblp->used = 0; - if (usblp->present) { + if (usblp->present) usblp_unlink_urbs(usblp); - usb_autopm_put_interface(usblp->intf); - } else /* finish cleanup from disconnect */ + + usb_autopm_put_interface(usblp->intf); + + if (!usblp->present) /* finish cleanup from disconnect */ usblp_cleanup(usblp); mutex_unlock(&usblp_mutex); return 0; @@ -1118,7 +1121,7 @@ static int usblp_probe(struct usb_interface *intf, init_waitqueue_head(&usblp->wwait); init_usb_anchor(&usblp->urbs); usblp->ifnum = intf->cur_altsetting->desc.bInterfaceNumber; - usblp->intf = intf; + usblp->intf = usb_get_intf(intf); /* Malloc device ID string buffer to the largest expected length, * since we can re-query it on an ioctl and a dynamic string @@ -1207,6 +1210,7 @@ abort: kfree(usblp->readbuf); kfree(usblp->statusbuf); kfree(usblp->device_id_string); + usb_put_intf(usblp->intf); kfree(usblp); abort_ret: return retval; diff --git a/drivers/usb/core/config.c b/drivers/usb/core/config.c index 126987c9da75..eb82ee0a3ba8 100644 --- a/drivers/usb/core/config.c +++ b/drivers/usb/core/config.c @@ -314,6 +314,11 @@ static int usb_parse_endpoint(struct device *ddev, int cfgno, int inum, /* Validate the wMaxPacketSize field */ maxp = usb_endpoint_maxp(&endpoint->desc); + if (maxp == 0) { + dev_warn(ddev, "config %d interface %d altsetting %d endpoint 0x%X has wMaxPacketSize 0, skipping\n", + cfgno, inum, asnum, d->bEndpointAddress); + goto skip_to_next_endpoint_or_interface_descriptor; + } /* Find 
the highest legal maxpacket size for this endpoint */ i = 0; /* additional transactions per microframe */ diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c index 63f7364ac2ce..655279ed6dba 100644 --- a/drivers/usb/core/hub.c +++ b/drivers/usb/core/hub.c @@ -107,6 +107,8 @@ EXPORT_SYMBOL_GPL(ehci_cf_port_reset_rwsem); static void hub_release(struct kref *kref); static int usb_reset_and_verify_device(struct usb_device *udev); static int hub_port_disable(struct usb_hub *hub, int port1, int set_state); +static bool hub_port_warm_reset_required(struct usb_hub *hub, int port1, + u16 portstatus); static inline char *portspeed(struct usb_hub *hub, int portstatus) { @@ -1103,6 +1105,11 @@ static void hub_activate(struct usb_hub *hub, enum hub_activation_type type) USB_PORT_FEAT_ENABLE); } + /* Make sure a warm-reset request is handled by port_event */ + if (type == HUB_RESUME && + hub_port_warm_reset_required(hub, port1, portstatus)) + set_bit(port1, hub->event_bits); + /* * Add debounce if USB3 link is in polling/link training state. * Link will automatically transition to Enabled state after diff --git a/drivers/usb/gadget/Kconfig b/drivers/usb/gadget/Kconfig index b63ecb487aff..5c9ec1fa563a 100644 --- a/drivers/usb/gadget/Kconfig +++ b/drivers/usb/gadget/Kconfig @@ -441,6 +441,7 @@ config USB_CONFIGFS_F_PTP config USB_CONFIGFS_F_ACC boolean "Accessory gadget" depends on USB_CONFIGFS + depends on HID=y select USB_F_ACC help USB gadget Accessory support diff --git a/drivers/usb/gadget/configfs.c b/drivers/usb/gadget/configfs.c index a31322ed9447..6b0c4187aae1 100644 --- a/drivers/usb/gadget/configfs.c +++ b/drivers/usb/gadget/configfs.c @@ -90,6 +90,8 @@ struct gadget_info { bool unbinding; char b_vendor_code; char qw_sign[OS_STRING_QW_SIGN_LEN]; + spinlock_t spinlock; + bool unbind; #ifdef CONFIG_USB_CONFIGFS_UEVENT bool connected; bool sw_connected; @@ -1289,6 +1291,7 @@ static int configfs_composite_bind(struct usb_gadget *gadget, int ret; /* the gi->lock is hold by the caller */ + gi->unbind = 0; cdev->gadget = gadget; set_gadget_data(gadget, cdev); ret = composite_dev_prepare(composite, cdev); @@ -1475,19 +1478,118 @@ static void configfs_composite_unbind(struct usb_gadget *gadget) { struct usb_composite_dev *cdev; struct gadget_info *gi; + unsigned long flags; /* the gi->lock is hold by the caller */ cdev = get_gadget_data(gadget); gi = container_of(cdev, struct gadget_info, cdev); + spin_lock_irqsave(&gi->spinlock, flags); + gi->unbind = 1; + spin_unlock_irqrestore(&gi->spinlock, flags); kfree(otg_desc[0]); otg_desc[0] = NULL; purge_configs_funcs(gi); composite_dev_cleanup(cdev); usb_ep_autoconfig_reset(cdev->gadget); + spin_lock_irqsave(&gi->spinlock, flags); cdev->gadget = NULL; set_gadget_data(gadget, NULL); + spin_unlock_irqrestore(&gi->spinlock, flags); +} + +#if !IS_ENABLED(CONFIG_USB_CONFIGFS_UEVENT) +static int configfs_composite_setup(struct usb_gadget *gadget, + const struct usb_ctrlrequest *ctrl) +{ + struct usb_composite_dev *cdev; + struct gadget_info *gi; + unsigned long flags; + int ret; + + cdev = get_gadget_data(gadget); + if (!cdev) + return 0; + + gi = container_of(cdev, struct gadget_info, cdev); + spin_lock_irqsave(&gi->spinlock, flags); + cdev = get_gadget_data(gadget); + if (!cdev || gi->unbind) { + spin_unlock_irqrestore(&gi->spinlock, flags); + return 0; + } + + ret = composite_setup(gadget, ctrl); + spin_unlock_irqrestore(&gi->spinlock, flags); + return ret; +} + +static void configfs_composite_disconnect(struct usb_gadget *gadget) +{ + struct 
usb_composite_dev *cdev; + struct gadget_info *gi; + unsigned long flags; + + cdev = get_gadget_data(gadget); + if (!cdev) + return; + + gi = container_of(cdev, struct gadget_info, cdev); + spin_lock_irqsave(&gi->spinlock, flags); + cdev = get_gadget_data(gadget); + if (!cdev || gi->unbind) { + spin_unlock_irqrestore(&gi->spinlock, flags); + return; + } + + composite_disconnect(gadget); + spin_unlock_irqrestore(&gi->spinlock, flags); +} +#endif + +static void configfs_composite_suspend(struct usb_gadget *gadget) +{ + struct usb_composite_dev *cdev; + struct gadget_info *gi; + unsigned long flags; + + cdev = get_gadget_data(gadget); + if (!cdev) + return; + + gi = container_of(cdev, struct gadget_info, cdev); + spin_lock_irqsave(&gi->spinlock, flags); + cdev = get_gadget_data(gadget); + if (!cdev || gi->unbind) { + spin_unlock_irqrestore(&gi->spinlock, flags); + return; + } + + composite_suspend(gadget); + spin_unlock_irqrestore(&gi->spinlock, flags); +} + +static void configfs_composite_resume(struct usb_gadget *gadget) +{ + struct usb_composite_dev *cdev; + struct gadget_info *gi; + unsigned long flags; + + cdev = get_gadget_data(gadget); + if (!cdev) + return; + + gi = container_of(cdev, struct gadget_info, cdev); + spin_lock_irqsave(&gi->spinlock, flags); + cdev = get_gadget_data(gadget); + if (!cdev || gi->unbind) { + spin_unlock_irqrestore(&gi->spinlock, flags); + return; + } + + composite_resume(gadget); + spin_unlock_irqrestore(&gi->spinlock, flags); } #ifdef CONFIG_USB_CONFIGFS_UEVENT @@ -1579,12 +1681,12 @@ static const struct usb_gadget_driver configfs_driver_template = { .reset = android_disconnect, .disconnect = android_disconnect, #else - .setup = composite_setup, - .reset = composite_disconnect, - .disconnect = composite_disconnect, + .setup = configfs_composite_setup, + .reset = configfs_composite_disconnect, + .disconnect = configfs_composite_disconnect, #endif - .suspend = composite_suspend, - .resume = composite_resume, + .suspend = configfs_composite_suspend, + .resume = configfs_composite_resume, .max_speed = USB_SPEED_SUPER, .driver = { @@ -1714,6 +1816,7 @@ static struct config_group *gadgets_make( mutex_init(&gi->lock); INIT_LIST_HEAD(&gi->string_list); INIT_LIST_HEAD(&gi->available_func); + spin_lock_init(&gi->spinlock); composite_init_dev(&gi->cdev); gi->cdev.desc.bLength = USB_DT_DEVICE_SIZE; diff --git a/drivers/usb/gadget/udc/atmel_usba_udc.c b/drivers/usb/gadget/udc/atmel_usba_udc.c index 585cb8734f50..668ac5e8681b 100644 --- a/drivers/usb/gadget/udc/atmel_usba_udc.c +++ b/drivers/usb/gadget/udc/atmel_usba_udc.c @@ -403,9 +403,11 @@ static void submit_request(struct usba_ep *ep, struct usba_request *req) next_fifo_transaction(ep, req); if (req->last_transaction) { usba_ep_writel(ep, CTL_DIS, USBA_TX_PK_RDY); - usba_ep_writel(ep, CTL_ENB, USBA_TX_COMPLETE); + if (ep_is_control(ep)) + usba_ep_writel(ep, CTL_ENB, USBA_TX_COMPLETE); } else { - usba_ep_writel(ep, CTL_DIS, USBA_TX_COMPLETE); + if (ep_is_control(ep)) + usba_ep_writel(ep, CTL_DIS, USBA_TX_COMPLETE); usba_ep_writel(ep, CTL_ENB, USBA_TX_PK_RDY); } } diff --git a/drivers/usb/gadget/udc/dummy_hcd.c b/drivers/usb/gadget/udc/dummy_hcd.c index 85f1f282c1d5..0321b9ce9faf 100644 --- a/drivers/usb/gadget/udc/dummy_hcd.c +++ b/drivers/usb/gadget/udc/dummy_hcd.c @@ -50,6 +50,7 @@ #define DRIVER_VERSION "02 May 2005" #define POWER_BUDGET 500 /* in mA; use 8 for low-power port testing */ +#define POWER_BUDGET_3 900 /* in mA */ static const char driver_name[] = "dummy_hcd"; static const char driver_desc[] = "USB 
Host+Gadget Emulator"; @@ -2435,7 +2436,7 @@ static int dummy_start_ss(struct dummy_hcd *dum_hcd) dum_hcd->rh_state = DUMMY_RH_RUNNING; dum_hcd->stream_en_ep = 0; INIT_LIST_HEAD(&dum_hcd->urbp_list); - dummy_hcd_to_hcd(dum_hcd)->power_budget = POWER_BUDGET; + dummy_hcd_to_hcd(dum_hcd)->power_budget = POWER_BUDGET_3; dummy_hcd_to_hcd(dum_hcd)->state = HC_STATE_RUNNING; dummy_hcd_to_hcd(dum_hcd)->uses_new_polling = 1; #ifdef CONFIG_USB_OTG diff --git a/drivers/usb/gadget/udc/fsl_udc_core.c b/drivers/usb/gadget/udc/fsl_udc_core.c index 8991a4070792..bd98557caa28 100644 --- a/drivers/usb/gadget/udc/fsl_udc_core.c +++ b/drivers/usb/gadget/udc/fsl_udc_core.c @@ -2570,7 +2570,7 @@ static int fsl_udc_remove(struct platform_device *pdev) dma_pool_destroy(udc_controller->td_pool); free_irq(udc_controller->irq, udc_controller); iounmap(dr_regs); - if (pdata->operating_mode == FSL_USB2_DR_DEVICE) + if (res && (pdata->operating_mode == FSL_USB2_DR_DEVICE)) release_mem_region(res->start, resource_size(res)); /* free udc --wait for the release() finished */ diff --git a/drivers/usb/gadget/udc/lpc32xx_udc.c b/drivers/usb/gadget/udc/lpc32xx_udc.c index 90d24f62bd81..ea43cb74a6f2 100644 --- a/drivers/usb/gadget/udc/lpc32xx_udc.c +++ b/drivers/usb/gadget/udc/lpc32xx_udc.c @@ -1225,11 +1225,11 @@ static void udc_pop_fifo(struct lpc32xx_udc *udc, u8 *data, u32 bytes) tmp = readl(USBD_RXDATA(udc->udp_baseaddr)); bl = bytes - n; - if (bl > 3) - bl = 3; + if (bl > 4) + bl = 4; for (i = 0; i < bl; i++) - data[n + i] = (u8) ((tmp >> (n * 8)) & 0xFF); + data[n + i] = (u8) ((tmp >> (i * 8)) & 0xFF); } break; diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c index a283665e2ed5..89b4cc8265c5 100644 --- a/drivers/usb/host/xhci.c +++ b/drivers/usb/host/xhci.c @@ -1024,7 +1024,7 @@ int xhci_suspend(struct xhci_hcd *xhci, bool do_wakeup) command |= CMD_CSS; writel(command, &xhci->op_regs->command); if (xhci_handshake(&xhci->op_regs->status, - STS_SAVE, 0, 10 * 1000)) { + STS_SAVE, 0, 20 * 1000)) { xhci_warn(xhci, "WARN: xHC save state timeout\n"); spin_unlock_irq(&xhci->lock); return -ETIMEDOUT; @@ -1085,6 +1085,18 @@ int xhci_resume(struct xhci_hcd *xhci, bool hibernated) hibernated = true; if (!hibernated) { + /* + * Some controllers might lose power during suspend, so wait + * for controller not ready bit to clear, just as in xHC init. + */ + retval = xhci_handshake(&xhci->op_regs->status, + STS_CNR, 0, 10 * 1000 * 1000); + if (retval) { + xhci_warn(xhci, "Controller not ready at resume %d\n", + retval); + spin_unlock_irq(&xhci->lock); + return retval; + } /* step 1: restore register */ xhci_restore_registers(xhci); /* step 2: initialize command ring buffer */ @@ -4575,12 +4587,12 @@ static int xhci_update_timeout_for_endpoint(struct xhci_hcd *xhci, alt_timeout = xhci_call_host_update_timeout_for_endpoint(xhci, udev, desc, state, timeout); - /* If we found we can't enable hub-initiated LPM, or + /* If we found we can't enable hub-initiated LPM, and * the U1 or U2 exit latency was too high to allow - * device-initiated LPM as well, just stop searching. + * device-initiated LPM as well, then we will disable LPM + * for this device, so stop searching any further. 
*/ - if (alt_timeout == USB3_LPM_DISABLED || - alt_timeout == USB3_LPM_DEVICE_INITIATED) { + if (alt_timeout == USB3_LPM_DISABLED) { *timeout = alt_timeout; return -E2BIG; } @@ -4691,10 +4703,12 @@ static u16 xhci_calculate_lpm_timeout(struct usb_hcd *hcd, if (intf->dev.driver) { driver = to_usb_driver(intf->dev.driver); if (driver && driver->disable_hub_initiated_lpm) { - dev_dbg(&udev->dev, "Hub-initiated %s disabled " - "at request of driver %s\n", - state_name, driver->name); - return xhci_get_timeout_no_hub_lpm(udev, state); + dev_dbg(&udev->dev, "Hub-initiated %s disabled at request of driver %s\n", + state_name, driver->name); + timeout = xhci_get_timeout_no_hub_lpm(udev, + state); + if (timeout == USB3_LPM_DISABLED) + return timeout; } } diff --git a/drivers/usb/image/microtek.c b/drivers/usb/image/microtek.c index a4dbb0cd80da..0fecc002fa9f 100644 --- a/drivers/usb/image/microtek.c +++ b/drivers/usb/image/microtek.c @@ -724,6 +724,10 @@ static int mts_usb_probe(struct usb_interface *intf, } + if (ep_in_current != &ep_in_set[2]) { + MTS_WARNING("couldn't find two input bulk endpoints. Bailing out.\n"); + return -ENODEV; + } if ( ep_out == -1 ) { MTS_WARNING( "couldn't find an output bulk endpoint. Bailing out.\n" ); diff --git a/drivers/usb/misc/Kconfig b/drivers/usb/misc/Kconfig index b8f3e4199e20..8006d75efc09 100644 --- a/drivers/usb/misc/Kconfig +++ b/drivers/usb/misc/Kconfig @@ -46,16 +46,6 @@ config USB_SEVSEG To compile this driver as a module, choose M here: the module will be called usbsevseg. -config USB_RIO500 - tristate "USB Diamond Rio500 support" - help - Say Y here if you want to connect a USB Rio500 mp3 player to your - computer's USB port. Please read <file:Documentation/usb/rio.txt> - for more information. - - To compile this driver as a module, choose M here: the - module will be called rio500. 
- config USB_LEGOTOWER tristate "USB Lego Infrared Tower support" help diff --git a/drivers/usb/misc/Makefile b/drivers/usb/misc/Makefile index 4f417e23cf1a..4986df051c22 100644 --- a/drivers/usb/misc/Makefile +++ b/drivers/usb/misc/Makefile @@ -17,7 +17,6 @@ obj-$(CONFIG_USB_LCD) += usblcd.o obj-$(CONFIG_USB_LD) += ldusb.o obj-$(CONFIG_USB_LED) += usbled.o obj-$(CONFIG_USB_LEGOTOWER) += legousbtower.o -obj-$(CONFIG_USB_RIO500) += rio500.o obj-$(CONFIG_USB_TEST) += usbtest.o obj-$(CONFIG_USB_EHSET_TEST_FIXTURE) += ehset.o obj-$(CONFIG_USB_TRANCEVIBRATOR) += trancevibrator.o diff --git a/drivers/usb/misc/adutux.c b/drivers/usb/misc/adutux.c index 3071c0ef909b..6d849e7dc842 100644 --- a/drivers/usb/misc/adutux.c +++ b/drivers/usb/misc/adutux.c @@ -80,6 +80,7 @@ struct adu_device { char serial_number[8]; int open_count; /* number of times this port has been opened */ + unsigned long disconnected:1; char *read_buffer_primary; int read_buffer_length; @@ -121,7 +122,7 @@ static void adu_abort_transfers(struct adu_device *dev) { unsigned long flags; - if (dev->udev == NULL) + if (dev->disconnected) return; /* shutdown transfer */ @@ -151,6 +152,7 @@ static void adu_delete(struct adu_device *dev) kfree(dev->read_buffer_secondary); kfree(dev->interrupt_in_buffer); kfree(dev->interrupt_out_buffer); + usb_put_dev(dev->udev); kfree(dev); } @@ -244,7 +246,7 @@ static int adu_open(struct inode *inode, struct file *file) } dev = usb_get_intfdata(interface); - if (!dev || !dev->udev) { + if (!dev) { retval = -ENODEV; goto exit_no_device; } @@ -327,7 +329,7 @@ static int adu_release(struct inode *inode, struct file *file) } adu_release_internal(dev); - if (dev->udev == NULL) { + if (dev->disconnected) { /* the device was unplugged before the file was released */ if (!dev->open_count) /* ... 
and we're the last user */ adu_delete(dev); @@ -356,7 +358,7 @@ static ssize_t adu_read(struct file *file, __user char *buffer, size_t count, return -ERESTARTSYS; /* verify that the device wasn't unplugged */ - if (dev->udev == NULL) { + if (dev->disconnected) { retval = -ENODEV; pr_err("No device or device unplugged %d\n", retval); goto exit; @@ -525,7 +527,7 @@ static ssize_t adu_write(struct file *file, const __user char *buffer, goto exit_nolock; /* verify that the device wasn't unplugged */ - if (dev->udev == NULL) { + if (dev->disconnected) { retval = -ENODEV; pr_err("No device or device unplugged %d\n", retval); goto exit; @@ -680,7 +682,7 @@ static int adu_probe(struct usb_interface *interface, mutex_init(&dev->mtx); spin_lock_init(&dev->buflock); - dev->udev = udev; + dev->udev = usb_get_dev(udev); init_waitqueue_head(&dev->read_wait); init_waitqueue_head(&dev->write_wait); @@ -800,19 +802,21 @@ error: static void adu_disconnect(struct usb_interface *interface) { struct adu_device *dev; - int minor; dev = usb_get_intfdata(interface); - mutex_lock(&dev->mtx); /* not interruptible */ - dev->udev = NULL; /* poison */ - minor = dev->minor; usb_deregister_dev(interface, &adu_class); - mutex_unlock(&dev->mtx); + + usb_poison_urb(dev->interrupt_in_urb); + usb_poison_urb(dev->interrupt_out_urb); mutex_lock(&adutux_mutex); usb_set_intfdata(interface, NULL); + mutex_lock(&dev->mtx); /* not interruptible */ + dev->disconnected = 1; + mutex_unlock(&dev->mtx); + /* if the device is not opened, then we clean up right now */ if (!dev->open_count) adu_delete(dev); diff --git a/drivers/usb/misc/chaoskey.c b/drivers/usb/misc/chaoskey.c index 23c794813e6a..a3692811248b 100644 --- a/drivers/usb/misc/chaoskey.c +++ b/drivers/usb/misc/chaoskey.c @@ -96,6 +96,7 @@ static void chaoskey_free(struct chaoskey *dev) usb_dbg(dev->interface, "free"); kfree(dev->name); kfree(dev->buf); + usb_put_intf(dev->interface); kfree(dev); } @@ -144,6 +145,8 @@ static int chaoskey_probe(struct usb_interface *interface, if (dev == NULL) return -ENOMEM; + dev->interface = usb_get_intf(interface); + dev->buf = kmalloc(size, GFP_KERNEL); if (dev->buf == NULL) { @@ -169,8 +172,6 @@ static int chaoskey_probe(struct usb_interface *interface, strcat(dev->name, udev->serial); } - dev->interface = interface; - dev->in_ep = in_ep; dev->size = size; diff --git a/drivers/usb/misc/iowarrior.c b/drivers/usb/misc/iowarrior.c index 836fb65c3c72..83342e579233 100644 --- a/drivers/usb/misc/iowarrior.c +++ b/drivers/usb/misc/iowarrior.c @@ -89,6 +89,7 @@ struct iowarrior { char chip_serial[9]; /* the serial number string of the chip connected */ int report_size; /* number of bytes in a report */ u16 product_id; + struct usb_anchor submitted; }; /*--------------*/ @@ -248,6 +249,7 @@ static inline void iowarrior_delete(struct iowarrior *dev) kfree(dev->int_in_buffer); usb_free_urb(dev->int_in_urb); kfree(dev->read_queue); + usb_put_intf(dev->interface); kfree(dev); } @@ -436,11 +438,13 @@ static ssize_t iowarrior_write(struct file *file, retval = -EFAULT; goto error; } + usb_anchor_urb(int_out_urb, &dev->submitted); retval = usb_submit_urb(int_out_urb, GFP_KERNEL); if (retval) { dev_dbg(&dev->interface->dev, "submit error %d for urb nr.%d\n", retval, atomic_read(&dev->write_busy)); + usb_unanchor_urb(int_out_urb); goto error; } /* submit was ok */ @@ -782,11 +786,13 @@ static int iowarrior_probe(struct usb_interface *interface, init_waitqueue_head(&dev->write_wait); dev->udev = udev; - dev->interface = interface; + dev->interface = 
usb_get_intf(interface); iface_desc = interface->cur_altsetting; dev->product_id = le16_to_cpu(udev->descriptor.idProduct); + init_usb_anchor(&dev->submitted); + /* set up the endpoint information */ for (i = 0; i < iface_desc->desc.bNumEndpoints; ++i) { endpoint = &iface_desc->endpoint[i].desc; @@ -898,8 +904,6 @@ static void iowarrior_disconnect(struct usb_interface *interface) dev = usb_get_intfdata(interface); mutex_lock(&iowarrior_open_disc_lock); usb_set_intfdata(interface, NULL); - /* prevent device read, write and ioctl */ - dev->present = 0; minor = dev->minor; mutex_unlock(&iowarrior_open_disc_lock); @@ -910,8 +914,7 @@ static void iowarrior_disconnect(struct usb_interface *interface) mutex_lock(&dev->mutex); /* prevent device read, write and ioctl */ - - mutex_unlock(&dev->mutex); + dev->present = 0; if (dev->opened) { /* There is a process that holds a filedescriptor to the device , @@ -919,10 +922,13 @@ static void iowarrior_disconnect(struct usb_interface *interface) Deleting the device is postponed until close() was called. */ usb_kill_urb(dev->int_in_urb); + usb_kill_anchored_urbs(&dev->submitted); wake_up_interruptible(&dev->read_wait); wake_up_interruptible(&dev->write_wait); + mutex_unlock(&dev->mutex); } else { /* no process is using the device, cleanup now */ + mutex_unlock(&dev->mutex); iowarrior_delete(dev); } diff --git a/drivers/usb/misc/ldusb.c b/drivers/usb/misc/ldusb.c index e9113238d9e3..8f5f8ad98632 100644 --- a/drivers/usb/misc/ldusb.c +++ b/drivers/usb/misc/ldusb.c @@ -158,6 +158,7 @@ MODULE_PARM_DESC(min_interrupt_out_interval, "Minimum interrupt out interval in struct ld_usb { struct mutex mutex; /* locks this structure */ struct usb_interface* intf; /* save off the usb interface pointer */ + unsigned long disconnected:1; int open_count; /* number of times this port has been opened */ @@ -197,12 +198,10 @@ static void ld_usb_abort_transfers(struct ld_usb *dev) /* shutdown transfer */ if (dev->interrupt_in_running) { dev->interrupt_in_running = 0; - if (dev->intf) - usb_kill_urb(dev->interrupt_in_urb); + usb_kill_urb(dev->interrupt_in_urb); } if (dev->interrupt_out_busy) - if (dev->intf) - usb_kill_urb(dev->interrupt_out_urb); + usb_kill_urb(dev->interrupt_out_urb); } /** @@ -210,8 +209,6 @@ static void ld_usb_abort_transfers(struct ld_usb *dev) */ static void ld_usb_delete(struct ld_usb *dev) { - ld_usb_abort_transfers(dev); - /* free data structures */ usb_free_urb(dev->interrupt_in_urb); usb_free_urb(dev->interrupt_out_urb); @@ -267,7 +264,7 @@ static void ld_usb_interrupt_in_callback(struct urb *urb) resubmit: /* resubmit if we're still running */ - if (dev->interrupt_in_running && !dev->buffer_overflow && dev->intf) { + if (dev->interrupt_in_running && !dev->buffer_overflow) { retval = usb_submit_urb(dev->interrupt_in_urb, GFP_ATOMIC); if (retval) { dev_err(&dev->intf->dev, @@ -387,16 +384,13 @@ static int ld_usb_release(struct inode *inode, struct file *file) goto exit; } - if (mutex_lock_interruptible(&dev->mutex)) { - retval = -ERESTARTSYS; - goto exit; - } + mutex_lock(&dev->mutex); if (dev->open_count != 1) { retval = -ENODEV; goto unlock_exit; } - if (dev->intf == NULL) { + if (dev->disconnected) { /* the device was unplugged before the file was released */ mutex_unlock(&dev->mutex); /* unlock here as ld_usb_delete frees dev */ @@ -427,7 +421,7 @@ static unsigned int ld_usb_poll(struct file *file, poll_table *wait) dev = file->private_data; - if (!dev->intf) + if (dev->disconnected) return POLLERR | POLLHUP; poll_wait(file, &dev->read_wait, 
wait); @@ -466,7 +460,7 @@ static ssize_t ld_usb_read(struct file *file, char __user *buffer, size_t count, } /* verify that the device wasn't unplugged */ - if (dev->intf == NULL) { + if (dev->disconnected) { retval = -ENODEV; printk(KERN_ERR "ldusb: No device or device unplugged %d\n", retval); goto unlock_exit; @@ -474,7 +468,7 @@ static ssize_t ld_usb_read(struct file *file, char __user *buffer, size_t count, /* wait for data */ spin_lock_irq(&dev->rbsl); - if (dev->ring_head == dev->ring_tail) { + while (dev->ring_head == dev->ring_tail) { dev->interrupt_in_done = 0; spin_unlock_irq(&dev->rbsl); if (file->f_flags & O_NONBLOCK) { @@ -484,12 +478,17 @@ static ssize_t ld_usb_read(struct file *file, char __user *buffer, size_t count, retval = wait_event_interruptible(dev->read_wait, dev->interrupt_in_done); if (retval < 0) goto unlock_exit; - } else { - spin_unlock_irq(&dev->rbsl); + + spin_lock_irq(&dev->rbsl); } + spin_unlock_irq(&dev->rbsl); /* actual_buffer contains actual_length + interrupt_in_buffer */ actual_buffer = (size_t*)(dev->ring_buffer + dev->ring_tail*(sizeof(size_t)+dev->interrupt_in_endpoint_size)); + if (*actual_buffer > dev->interrupt_in_endpoint_size) { + retval = -EIO; + goto unlock_exit; + } bytes_to_read = min(count, *actual_buffer); if (bytes_to_read < *actual_buffer) dev_warn(&dev->intf->dev, "Read buffer overflow, %zd bytes dropped\n", @@ -500,11 +499,11 @@ static ssize_t ld_usb_read(struct file *file, char __user *buffer, size_t count, retval = -EFAULT; goto unlock_exit; } - dev->ring_tail = (dev->ring_tail+1) % ring_buffer_size; - retval = bytes_to_read; spin_lock_irq(&dev->rbsl); + dev->ring_tail = (dev->ring_tail + 1) % ring_buffer_size; + if (dev->buffer_overflow) { dev->buffer_overflow = 0; spin_unlock_irq(&dev->rbsl); @@ -546,7 +545,7 @@ static ssize_t ld_usb_write(struct file *file, const char __user *buffer, } /* verify that the device wasn't unplugged */ - if (dev->intf == NULL) { + if (dev->disconnected) { retval = -ENODEV; printk(KERN_ERR "ldusb: No device or device unplugged %d\n", retval); goto unlock_exit; @@ -585,7 +584,7 @@ static ssize_t ld_usb_write(struct file *file, const char __user *buffer, 1 << 8, 0, dev->interrupt_out_buffer, bytes_to_write, - USB_CTRL_SET_TIMEOUT * HZ); + USB_CTRL_SET_TIMEOUT); if (retval < 0) dev_err(&dev->intf->dev, "Couldn't submit HID_REQ_SET_REPORT %d\n", @@ -709,7 +708,9 @@ static int ld_usb_probe(struct usb_interface *intf, const struct usb_device_id * dev_warn(&intf->dev, "Interrupt out endpoint not found (using control endpoint instead)\n"); dev->interrupt_in_endpoint_size = usb_endpoint_maxp(dev->interrupt_in_endpoint); - dev->ring_buffer = kmalloc(ring_buffer_size*(sizeof(size_t)+dev->interrupt_in_endpoint_size), GFP_KERNEL); + dev->ring_buffer = kcalloc(ring_buffer_size, + sizeof(size_t) + dev->interrupt_in_endpoint_size, + GFP_KERNEL); if (!dev->ring_buffer) { dev_err(&intf->dev, "Couldn't allocate ring_buffer\n"); goto error; @@ -782,6 +783,9 @@ static void ld_usb_disconnect(struct usb_interface *intf) /* give back our minor */ usb_deregister_dev(intf, &ld_usb_class); + usb_poison_urb(dev->interrupt_in_urb); + usb_poison_urb(dev->interrupt_out_urb); + mutex_lock(&dev->mutex); /* if the device is not opened, then we clean up right now */ @@ -789,7 +793,7 @@ static void ld_usb_disconnect(struct usb_interface *intf) mutex_unlock(&dev->mutex); ld_usb_delete(dev); } else { - dev->intf = NULL; + dev->disconnected = 1; /* wake up pollers */ wake_up_interruptible_all(&dev->read_wait); 
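Note on the teardown pattern used by the adutux and ldusb hunks above (and the legousbtower hunk that follows): disconnect() now poisons the outstanding URBs and sets a driver-private "disconnected" flag under the device lock instead of clearing dev->udev/dev->intf, wakes any sleepers, and the final release() frees the device state. The following is only a minimal sketch of that life-cycle under those assumptions; the names (struct my_dev, my_disconnect, my_delete) are hypothetical and are not code from any of these drivers.

	#include <linux/mutex.h>
	#include <linux/slab.h>
	#include <linux/usb.h>
	#include <linux/wait.h>

	struct my_dev {
		struct usb_device *udev;	/* reference taken with usb_get_dev() in probe */
		struct urb *int_in_urb;
		struct mutex lock;
		wait_queue_head_t read_wait;
		int open_count;
		unsigned long disconnected:1;	/* poison flag; udev/intf pointers stay valid */
	};

	static void my_delete(struct my_dev *dev)
	{
		usb_free_urb(dev->int_in_urb);
		usb_put_dev(dev->udev);		/* drop the probe-time reference */
		kfree(dev);
	}

	static void my_disconnect(struct usb_interface *intf)
	{
		struct my_dev *dev = usb_get_intfdata(intf);

		/* Stop I/O first; a poisoned URB also rejects later resubmission. */
		usb_poison_urb(dev->int_in_urb);

		mutex_lock(&dev->lock);
		dev->disconnected = 1;		/* readers/writers test this, not dev->udev */
		if (!dev->open_count) {
			mutex_unlock(&dev->lock);
			my_delete(dev);		/* no open file: clean up right now */
		} else {
			mutex_unlock(&dev->lock);
			/* open file remains: unblock sleepers; release() calls my_delete() */
			wake_up_interruptible_all(&dev->read_wait);
		}
	}

The point of the flag over NULLing the device pointer is that dev_err()/usb_kill_urb() style calls in the I/O paths can keep dereferencing dev->udev or dev->intf safely until the last reference is dropped.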
wake_up_interruptible_all(&dev->write_wait); diff --git a/drivers/usb/misc/legousbtower.c b/drivers/usb/misc/legousbtower.c index 0ec9ee573ffa..8350ecfbcf21 100644 --- a/drivers/usb/misc/legousbtower.c +++ b/drivers/usb/misc/legousbtower.c @@ -185,7 +185,6 @@ static const struct usb_device_id tower_table[] = { }; MODULE_DEVICE_TABLE (usb, tower_table); -static DEFINE_MUTEX(open_disc_mutex); #define LEGO_USB_TOWER_MINOR_BASE 160 @@ -197,6 +196,7 @@ struct lego_usb_tower { unsigned char minor; /* the starting minor number for this device */ int open_count; /* number of times this port has been opened */ + unsigned long disconnected:1; char* read_buffer; size_t read_buffer_length; /* this much came in */ @@ -296,14 +296,13 @@ static inline void lego_usb_tower_debug_data(struct device *dev, */ static inline void tower_delete (struct lego_usb_tower *dev) { - tower_abort_transfers (dev); - /* free data structures */ usb_free_urb(dev->interrupt_in_urb); usb_free_urb(dev->interrupt_out_urb); kfree (dev->read_buffer); kfree (dev->interrupt_in_buffer); kfree (dev->interrupt_out_buffer); + usb_put_dev(dev->udev); kfree (dev); } @@ -338,18 +337,14 @@ static int tower_open (struct inode *inode, struct file *file) goto exit; } - mutex_lock(&open_disc_mutex); dev = usb_get_intfdata(interface); - if (!dev) { - mutex_unlock(&open_disc_mutex); retval = -ENODEV; goto exit; } /* lock this device */ if (mutex_lock_interruptible(&dev->lock)) { - mutex_unlock(&open_disc_mutex); retval = -ERESTARTSYS; goto exit; } @@ -357,12 +352,9 @@ static int tower_open (struct inode *inode, struct file *file) /* allow opening only once */ if (dev->open_count) { - mutex_unlock(&open_disc_mutex); retval = -EBUSY; goto unlock_exit; } - dev->open_count = 1; - mutex_unlock(&open_disc_mutex); /* reset the tower */ result = usb_control_msg (dev->udev, @@ -402,13 +394,14 @@ static int tower_open (struct inode *inode, struct file *file) dev_err(&dev->udev->dev, "Couldn't submit interrupt_in_urb %d\n", retval); dev->interrupt_in_running = 0; - dev->open_count = 0; goto unlock_exit; } /* save device in the file's private structure */ file->private_data = dev; + dev->open_count = 1; + unlock_exit: mutex_unlock(&dev->lock); @@ -429,22 +422,19 @@ static int tower_release (struct inode *inode, struct file *file) if (dev == NULL) { retval = -ENODEV; - goto exit_nolock; - } - - mutex_lock(&open_disc_mutex); - if (mutex_lock_interruptible(&dev->lock)) { - retval = -ERESTARTSYS; goto exit; } + mutex_lock(&dev->lock); + if (dev->open_count != 1) { dev_dbg(&dev->udev->dev, "%s: device not opened exactly once\n", __func__); retval = -ENODEV; goto unlock_exit; } - if (dev->udev == NULL) { + + if (dev->disconnected) { /* the device was unplugged before the file was released */ /* unlock here as tower_delete frees dev */ @@ -462,10 +452,7 @@ static int tower_release (struct inode *inode, struct file *file) unlock_exit: mutex_unlock(&dev->lock); - exit: - mutex_unlock(&open_disc_mutex); -exit_nolock: return retval; } @@ -483,10 +470,9 @@ static void tower_abort_transfers (struct lego_usb_tower *dev) if (dev->interrupt_in_running) { dev->interrupt_in_running = 0; mb(); - if (dev->udev) - usb_kill_urb (dev->interrupt_in_urb); + usb_kill_urb(dev->interrupt_in_urb); } - if (dev->interrupt_out_busy && dev->udev) + if (dev->interrupt_out_busy) usb_kill_urb(dev->interrupt_out_urb); } @@ -522,7 +508,7 @@ static unsigned int tower_poll (struct file *file, poll_table *wait) dev = file->private_data; - if (!dev->udev) + if (dev->disconnected) return POLLERR | 
POLLHUP; poll_wait(file, &dev->read_wait, wait); @@ -569,7 +555,7 @@ static ssize_t tower_read (struct file *file, char __user *buffer, size_t count, } /* verify that the device wasn't unplugged */ - if (dev->udev == NULL) { + if (dev->disconnected) { retval = -ENODEV; pr_err("No device or device unplugged %d\n", retval); goto unlock_exit; @@ -655,7 +641,7 @@ static ssize_t tower_write (struct file *file, const char __user *buffer, size_t } /* verify that the device wasn't unplugged */ - if (dev->udev == NULL) { + if (dev->disconnected) { retval = -ENODEV; pr_err("No device or device unplugged %d\n", retval); goto unlock_exit; @@ -764,7 +750,7 @@ static void tower_interrupt_in_callback (struct urb *urb) resubmit: /* resubmit if we're still running */ - if (dev->interrupt_in_running && dev->udev) { + if (dev->interrupt_in_running) { retval = usb_submit_urb (dev->interrupt_in_urb, GFP_ATOMIC); if (retval) dev_err(&dev->udev->dev, @@ -832,8 +818,9 @@ static int tower_probe (struct usb_interface *interface, const struct usb_device mutex_init(&dev->lock); - dev->udev = udev; + dev->udev = usb_get_dev(udev); dev->open_count = 0; + dev->disconnected = 0; dev->read_buffer = NULL; dev->read_buffer_length = 0; @@ -923,8 +910,10 @@ static int tower_probe (struct usb_interface *interface, const struct usb_device get_version_reply, sizeof(*get_version_reply), 1000); - if (result < 0) { - dev_err(idev, "LEGO USB Tower get version control request failed\n"); + if (result != sizeof(*get_version_reply)) { + if (result >= 0) + result = -EIO; + dev_err(idev, "get version request failed: %d\n", result); retval = result; goto error; } @@ -942,7 +931,6 @@ static int tower_probe (struct usb_interface *interface, const struct usb_device if (retval) { /* something prevented us from registering this driver */ dev_err(idev, "Not able to get a minor for this device.\n"); - usb_set_intfdata (interface, NULL); goto error; } dev->minor = interface->minor; @@ -974,23 +962,24 @@ static void tower_disconnect (struct usb_interface *interface) int minor; dev = usb_get_intfdata (interface); - mutex_lock(&open_disc_mutex); - usb_set_intfdata (interface, NULL); minor = dev->minor; - /* give back our minor */ + /* give back our minor and prevent further open() */ usb_deregister_dev (interface, &tower_class); + /* stop I/O */ + usb_poison_urb(dev->interrupt_in_urb); + usb_poison_urb(dev->interrupt_out_urb); + mutex_lock(&dev->lock); - mutex_unlock(&open_disc_mutex); /* if the device is not opened, then we clean up right now */ if (!dev->open_count) { mutex_unlock(&dev->lock); tower_delete (dev); } else { - dev->udev = NULL; + dev->disconnected = 1; /* wake up pollers */ wake_up_interruptible_all(&dev->read_wait); wake_up_interruptible_all(&dev->write_wait); diff --git a/drivers/usb/misc/rio500.c b/drivers/usb/misc/rio500.c deleted file mode 100644 index 6e761fabffca..000000000000 --- a/drivers/usb/misc/rio500.c +++ /dev/null @@ -1,578 +0,0 @@ -/* -*- linux-c -*- */ - -/* - * Driver for USB Rio 500 - * - * Cesar Miquel (miquel@df.uba.ar) - * - * based on hp_scanner.c by David E. Nelson (dnelson@jump.net) - * - * This program is free software; you can redistribute it and/or - * modify it under the terms of the GNU General Public License as - * published by the Free Software Foundation; either version 2 of the - * License, or (at your option) any later version. 
- * - * This program is distributed in the hope that it will be useful, but - * WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * General Public License for more details. - * - * You should have received a copy of the GNU General Public License - * along with this program; if not, write to the Free Software - * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. - * - * Based upon mouse.c (Brad Keryan) and printer.c (Michael Gee). - * - * Changelog: - * 30/05/2003 replaced lock/unlock kernel with up/down - * Daniele Bellucci bellucda@tiscali.it - * */ - -#include <linux/module.h> -#include <linux/kernel.h> -#include <linux/signal.h> -#include <linux/sched.h> -#include <linux/mutex.h> -#include <linux/errno.h> -#include <linux/random.h> -#include <linux/poll.h> -#include <linux/slab.h> -#include <linux/spinlock.h> -#include <linux/usb.h> -#include <linux/wait.h> - -#include "rio500_usb.h" - -/* - * Version Information - */ -#define DRIVER_VERSION "v1.1" -#define DRIVER_AUTHOR "Cesar Miquel <miquel@df.uba.ar>" -#define DRIVER_DESC "USB Rio 500 driver" - -#define RIO_MINOR 64 - -/* stall/wait timeout for rio */ -#define NAK_TIMEOUT (HZ) - -#define IBUF_SIZE 0x1000 - -/* Size of the rio buffer */ -#define OBUF_SIZE 0x10000 - -struct rio_usb_data { - struct usb_device *rio_dev; /* init: probe_rio */ - unsigned int ifnum; /* Interface number of the USB device */ - int isopen; /* nz if open */ - int present; /* Device is present on the bus */ - char *obuf, *ibuf; /* transfer buffers */ - char bulk_in_ep, bulk_out_ep; /* Endpoint assignments */ - wait_queue_head_t wait_q; /* for timeouts */ - struct mutex lock; /* general race avoidance */ -}; - -static DEFINE_MUTEX(rio500_mutex); -static struct rio_usb_data rio_instance; - -static int open_rio(struct inode *inode, struct file *file) -{ - struct rio_usb_data *rio = &rio_instance; - - /* against disconnect() */ - mutex_lock(&rio500_mutex); - mutex_lock(&(rio->lock)); - - if (rio->isopen || !rio->present) { - mutex_unlock(&(rio->lock)); - mutex_unlock(&rio500_mutex); - return -EBUSY; - } - rio->isopen = 1; - - init_waitqueue_head(&rio->wait_q); - - mutex_unlock(&(rio->lock)); - - dev_info(&rio->rio_dev->dev, "Rio opened.\n"); - mutex_unlock(&rio500_mutex); - - return 0; -} - -static int close_rio(struct inode *inode, struct file *file) -{ - struct rio_usb_data *rio = &rio_instance; - - /* against disconnect() */ - mutex_lock(&rio500_mutex); - mutex_lock(&(rio->lock)); - - rio->isopen = 0; - if (!rio->present) { - /* cleanup has been delayed */ - kfree(rio->ibuf); - kfree(rio->obuf); - rio->ibuf = NULL; - rio->obuf = NULL; - } else { - dev_info(&rio->rio_dev->dev, "Rio closed.\n"); - } - mutex_unlock(&(rio->lock)); - mutex_unlock(&rio500_mutex); - return 0; -} - -static long ioctl_rio(struct file *file, unsigned int cmd, unsigned long arg) -{ - struct RioCommand rio_cmd; - struct rio_usb_data *rio = &rio_instance; - void __user *data; - unsigned char *buffer; - int result, requesttype; - int retries; - int retval=0; - - mutex_lock(&(rio->lock)); - /* Sanity check to make sure rio is connected, powered, etc */ - if (rio->present == 0 || rio->rio_dev == NULL) { - retval = -ENODEV; - goto err_out; - } - - switch (cmd) { - case RIO_RECV_COMMAND: - data = (void __user *) arg; - if (data == NULL) - break; - if (copy_from_user(&rio_cmd, data, sizeof(struct RioCommand))) { - retval = -EFAULT; - goto err_out; - } - if (rio_cmd.length < 0 || rio_cmd.length > PAGE_SIZE) { - 
retval = -EINVAL; - goto err_out; - } - buffer = (unsigned char *) __get_free_page(GFP_KERNEL); - if (buffer == NULL) { - retval = -ENOMEM; - goto err_out; - } - if (copy_from_user(buffer, rio_cmd.buffer, rio_cmd.length)) { - retval = -EFAULT; - free_page((unsigned long) buffer); - goto err_out; - } - - requesttype = rio_cmd.requesttype | USB_DIR_IN | - USB_TYPE_VENDOR | USB_RECIP_DEVICE; - dev_dbg(&rio->rio_dev->dev, - "sending command:reqtype=%0x req=%0x value=%0x index=%0x len=%0x\n", - requesttype, rio_cmd.request, rio_cmd.value, - rio_cmd.index, rio_cmd.length); - /* Send rio control message */ - retries = 3; - while (retries) { - result = usb_control_msg(rio->rio_dev, - usb_rcvctrlpipe(rio-> rio_dev, 0), - rio_cmd.request, - requesttype, - rio_cmd.value, - rio_cmd.index, buffer, - rio_cmd.length, - jiffies_to_msecs(rio_cmd.timeout)); - if (result == -ETIMEDOUT) - retries--; - else if (result < 0) { - dev_err(&rio->rio_dev->dev, - "Error executing ioctrl. code = %d\n", - result); - retries = 0; - } else { - dev_dbg(&rio->rio_dev->dev, - "Executed ioctl. Result = %d (data=%02x)\n", - result, buffer[0]); - if (copy_to_user(rio_cmd.buffer, buffer, - rio_cmd.length)) { - free_page((unsigned long) buffer); - retval = -EFAULT; - goto err_out; - } - retries = 0; - } - - /* rio_cmd.buffer contains a raw stream of single byte - data which has been returned from rio. Data is - interpreted at application level. For data that - will be cast to data types longer than 1 byte, data - will be little_endian and will potentially need to - be swapped at the app level */ - - } - free_page((unsigned long) buffer); - break; - - case RIO_SEND_COMMAND: - data = (void __user *) arg; - if (data == NULL) - break; - if (copy_from_user(&rio_cmd, data, sizeof(struct RioCommand))) { - retval = -EFAULT; - goto err_out; - } - if (rio_cmd.length < 0 || rio_cmd.length > PAGE_SIZE) { - retval = -EINVAL; - goto err_out; - } - buffer = (unsigned char *) __get_free_page(GFP_KERNEL); - if (buffer == NULL) { - retval = -ENOMEM; - goto err_out; - } - if (copy_from_user(buffer, rio_cmd.buffer, rio_cmd.length)) { - free_page((unsigned long)buffer); - retval = -EFAULT; - goto err_out; - } - - requesttype = rio_cmd.requesttype | USB_DIR_OUT | - USB_TYPE_VENDOR | USB_RECIP_DEVICE; - dev_dbg(&rio->rio_dev->dev, - "sending command: reqtype=%0x req=%0x value=%0x index=%0x len=%0x\n", - requesttype, rio_cmd.request, rio_cmd.value, - rio_cmd.index, rio_cmd.length); - /* Send rio control message */ - retries = 3; - while (retries) { - result = usb_control_msg(rio->rio_dev, - usb_sndctrlpipe(rio-> rio_dev, 0), - rio_cmd.request, - requesttype, - rio_cmd.value, - rio_cmd.index, buffer, - rio_cmd.length, - jiffies_to_msecs(rio_cmd.timeout)); - if (result == -ETIMEDOUT) - retries--; - else if (result < 0) { - dev_err(&rio->rio_dev->dev, - "Error executing ioctrl. code = %d\n", - result); - retries = 0; - } else { - dev_dbg(&rio->rio_dev->dev, - "Executed ioctl. 
Result = %d\n", result); - retries = 0; - - } - - } - free_page((unsigned long) buffer); - break; - - default: - retval = -ENOTTY; - break; - } - - -err_out: - mutex_unlock(&(rio->lock)); - return retval; -} - -static ssize_t -write_rio(struct file *file, const char __user *buffer, - size_t count, loff_t * ppos) -{ - DEFINE_WAIT(wait); - struct rio_usb_data *rio = &rio_instance; - - unsigned long copy_size; - unsigned long bytes_written = 0; - unsigned int partial; - - int result = 0; - int maxretry; - int errn = 0; - int intr; - - intr = mutex_lock_interruptible(&(rio->lock)); - if (intr) - return -EINTR; - /* Sanity check to make sure rio is connected, powered, etc */ - if (rio->present == 0 || rio->rio_dev == NULL) { - mutex_unlock(&(rio->lock)); - return -ENODEV; - } - - - - do { - unsigned long thistime; - char *obuf = rio->obuf; - - thistime = copy_size = - (count >= OBUF_SIZE) ? OBUF_SIZE : count; - if (copy_from_user(rio->obuf, buffer, copy_size)) { - errn = -EFAULT; - goto error; - } - maxretry = 5; - while (thistime) { - if (!rio->rio_dev) { - errn = -ENODEV; - goto error; - } - if (signal_pending(current)) { - mutex_unlock(&(rio->lock)); - return bytes_written ? bytes_written : -EINTR; - } - - result = usb_bulk_msg(rio->rio_dev, - usb_sndbulkpipe(rio->rio_dev, 2), - obuf, thistime, &partial, 5000); - - dev_dbg(&rio->rio_dev->dev, - "write stats: result:%d thistime:%lu partial:%u\n", - result, thistime, partial); - - if (result == -ETIMEDOUT) { /* NAK - so hold for a while */ - if (!maxretry--) { - errn = -ETIME; - goto error; - } - prepare_to_wait(&rio->wait_q, &wait, TASK_INTERRUPTIBLE); - schedule_timeout(NAK_TIMEOUT); - finish_wait(&rio->wait_q, &wait); - continue; - } else if (!result && partial) { - obuf += partial; - thistime -= partial; - } else - break; - } - if (result) { - dev_err(&rio->rio_dev->dev, "Write Whoops - %x\n", - result); - errn = -EIO; - goto error; - } - bytes_written += copy_size; - count -= copy_size; - buffer += copy_size; - } while (count > 0); - - mutex_unlock(&(rio->lock)); - - return bytes_written ? bytes_written : -EIO; - -error: - mutex_unlock(&(rio->lock)); - return errn; -} - -static ssize_t -read_rio(struct file *file, char __user *buffer, size_t count, loff_t * ppos) -{ - DEFINE_WAIT(wait); - struct rio_usb_data *rio = &rio_instance; - ssize_t read_count; - unsigned int partial; - int this_read; - int result; - int maxretry = 10; - char *ibuf; - int intr; - - intr = mutex_lock_interruptible(&(rio->lock)); - if (intr) - return -EINTR; - /* Sanity check to make sure rio is connected, powered, etc */ - if (rio->present == 0 || rio->rio_dev == NULL) { - mutex_unlock(&(rio->lock)); - return -ENODEV; - } - - ibuf = rio->ibuf; - - read_count = 0; - - - while (count > 0) { - if (signal_pending(current)) { - mutex_unlock(&(rio->lock)); - return read_count ? read_count : -EINTR; - } - if (!rio->rio_dev) { - mutex_unlock(&(rio->lock)); - return -ENODEV; - } - this_read = (count >= IBUF_SIZE) ? IBUF_SIZE : count; - - result = usb_bulk_msg(rio->rio_dev, - usb_rcvbulkpipe(rio->rio_dev, 1), - ibuf, this_read, &partial, - 8000); - - dev_dbg(&rio->rio_dev->dev, - "read stats: result:%d this_read:%u partial:%u\n", - result, this_read, partial); - - if (partial) { - count = this_read = partial; - } else if (result == -ETIMEDOUT || result == 15) { /* FIXME: 15 ??? 
*/ - if (!maxretry--) { - mutex_unlock(&(rio->lock)); - dev_err(&rio->rio_dev->dev, - "read_rio: maxretry timeout\n"); - return -ETIME; - } - prepare_to_wait(&rio->wait_q, &wait, TASK_INTERRUPTIBLE); - schedule_timeout(NAK_TIMEOUT); - finish_wait(&rio->wait_q, &wait); - continue; - } else if (result != -EREMOTEIO) { - mutex_unlock(&(rio->lock)); - dev_err(&rio->rio_dev->dev, - "Read Whoops - result:%u partial:%u this_read:%u\n", - result, partial, this_read); - return -EIO; - } else { - mutex_unlock(&(rio->lock)); - return (0); - } - - if (this_read) { - if (copy_to_user(buffer, ibuf, this_read)) { - mutex_unlock(&(rio->lock)); - return -EFAULT; - } - count -= this_read; - read_count += this_read; - buffer += this_read; - } - } - mutex_unlock(&(rio->lock)); - return read_count; -} - -static const struct file_operations usb_rio_fops = { - .owner = THIS_MODULE, - .read = read_rio, - .write = write_rio, - .unlocked_ioctl = ioctl_rio, - .open = open_rio, - .release = close_rio, - .llseek = noop_llseek, -}; - -static struct usb_class_driver usb_rio_class = { - .name = "rio500%d", - .fops = &usb_rio_fops, - .minor_base = RIO_MINOR, -}; - -static int probe_rio(struct usb_interface *intf, - const struct usb_device_id *id) -{ - struct usb_device *dev = interface_to_usbdev(intf); - struct rio_usb_data *rio = &rio_instance; - int retval = 0; - - mutex_lock(&rio500_mutex); - if (rio->present) { - dev_info(&intf->dev, "Second USB Rio at address %d refused\n", dev->devnum); - retval = -EBUSY; - goto bail_out; - } else { - dev_info(&intf->dev, "USB Rio found at address %d\n", dev->devnum); - } - - retval = usb_register_dev(intf, &usb_rio_class); - if (retval) { - dev_err(&dev->dev, - "Not able to get a minor for this device.\n"); - retval = -ENOMEM; - goto bail_out; - } - - rio->rio_dev = dev; - - if (!(rio->obuf = kmalloc(OBUF_SIZE, GFP_KERNEL))) { - dev_err(&dev->dev, - "probe_rio: Not enough memory for the output buffer\n"); - usb_deregister_dev(intf, &usb_rio_class); - retval = -ENOMEM; - goto bail_out; - } - dev_dbg(&intf->dev, "obuf address:%p\n", rio->obuf); - - if (!(rio->ibuf = kmalloc(IBUF_SIZE, GFP_KERNEL))) { - dev_err(&dev->dev, - "probe_rio: Not enough memory for the input buffer\n"); - usb_deregister_dev(intf, &usb_rio_class); - kfree(rio->obuf); - retval = -ENOMEM; - goto bail_out; - } - dev_dbg(&intf->dev, "ibuf address:%p\n", rio->ibuf); - - mutex_init(&(rio->lock)); - - usb_set_intfdata (intf, rio); - rio->present = 1; -bail_out: - mutex_unlock(&rio500_mutex); - - return retval; -} - -static void disconnect_rio(struct usb_interface *intf) -{ - struct rio_usb_data *rio = usb_get_intfdata (intf); - - usb_set_intfdata (intf, NULL); - mutex_lock(&rio500_mutex); - if (rio) { - usb_deregister_dev(intf, &usb_rio_class); - - mutex_lock(&(rio->lock)); - if (rio->isopen) { - rio->isopen = 0; - /* better let it finish - the release will do whats needed */ - rio->rio_dev = NULL; - mutex_unlock(&(rio->lock)); - mutex_unlock(&rio500_mutex); - return; - } - kfree(rio->ibuf); - kfree(rio->obuf); - - dev_info(&intf->dev, "USB Rio disconnected.\n"); - - rio->present = 0; - mutex_unlock(&(rio->lock)); - } - mutex_unlock(&rio500_mutex); -} - -static const struct usb_device_id rio_table[] = { - { USB_DEVICE(0x0841, 1) }, /* Rio 500 */ - { } /* Terminating entry */ -}; - -MODULE_DEVICE_TABLE (usb, rio_table); - -static struct usb_driver rio_driver = { - .name = "rio500", - .probe = probe_rio, - .disconnect = disconnect_rio, - .id_table = rio_table, -}; - -module_usb_driver(rio_driver); - -MODULE_AUTHOR( 
DRIVER_AUTHOR ); -MODULE_DESCRIPTION( DRIVER_DESC ); -MODULE_LICENSE("GPL"); - diff --git a/drivers/usb/misc/rio500_usb.h b/drivers/usb/misc/rio500_usb.h deleted file mode 100644 index 359abc98e706..000000000000 --- a/drivers/usb/misc/rio500_usb.h +++ /dev/null @@ -1,37 +0,0 @@ -/* ---------------------------------------------------------------------- - - Copyright (C) 2000 Cesar Miquel (miquel@df.uba.ar) - - This program is free software; you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation; either version 2 of the License, or - (at your option) any later version. - - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY; without even the implied warranty of - MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - GNU General Public License for more details. - - You should have received a copy of the GNU General Public License - along with this program; if not, write to the Free Software - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. - - ---------------------------------------------------------------------- */ - - - -#define RIO_SEND_COMMAND 0x1 -#define RIO_RECV_COMMAND 0x2 - -#define RIO_DIR_OUT 0x0 -#define RIO_DIR_IN 0x1 - -struct RioCommand { - short length; - int request; - int requesttype; - int value; - int index; - void __user *buffer; - int timeout; -}; diff --git a/drivers/usb/misc/usblcd.c b/drivers/usb/misc/usblcd.c index 1184390508e9..c77974fab29d 100644 --- a/drivers/usb/misc/usblcd.c +++ b/drivers/usb/misc/usblcd.c @@ -17,6 +17,7 @@ #include <linux/slab.h> #include <linux/errno.h> #include <linux/mutex.h> +#include <linux/rwsem.h> #include <linux/uaccess.h> #include <linux/usb.h> @@ -56,6 +57,8 @@ struct usb_lcd { using up all RAM */ struct usb_anchor submitted; /* URBs to wait for before suspend */ + struct rw_semaphore io_rwsem; + unsigned long disconnected:1; }; #define to_lcd_dev(d) container_of(d, struct usb_lcd, kref) @@ -141,6 +144,13 @@ static ssize_t lcd_read(struct file *file, char __user * buffer, dev = file->private_data; + down_read(&dev->io_rwsem); + + if (dev->disconnected) { + retval = -ENODEV; + goto out_up_io; + } + /* do a blocking bulk read to get data from the device */ retval = usb_bulk_msg(dev->udev, usb_rcvbulkpipe(dev->udev, @@ -157,6 +167,9 @@ static ssize_t lcd_read(struct file *file, char __user * buffer, retval = bytes_read; } +out_up_io: + up_read(&dev->io_rwsem); + return retval; } @@ -236,11 +249,18 @@ static ssize_t lcd_write(struct file *file, const char __user * user_buffer, if (r < 0) return -EINTR; + down_read(&dev->io_rwsem); + + if (dev->disconnected) { + retval = -ENODEV; + goto err_up_io; + } + /* create a urb, and a buffer for it, and copy the data to the urb */ urb = usb_alloc_urb(0, GFP_KERNEL); if (!urb) { retval = -ENOMEM; - goto err_no_buf; + goto err_up_io; } buf = usb_alloc_coherent(dev->udev, count, GFP_KERNEL, @@ -277,6 +297,7 @@ static ssize_t lcd_write(struct file *file, const char __user * user_buffer, the USB core will eventually free it entirely */ usb_free_urb(urb); + up_read(&dev->io_rwsem); exit: return count; error_unanchor: @@ -284,7 +305,8 @@ error_unanchor: error: usb_free_coherent(dev->udev, count, buf, urb->transfer_dma); usb_free_urb(urb); -err_no_buf: +err_up_io: + up_read(&dev->io_rwsem); up(&dev->limit_sem); return retval; } @@ -327,6 +349,7 @@ static int lcd_probe(struct usb_interface *interface, } kref_init(&dev->kref); sema_init(&dev->limit_sem, 
USB_LCD_CONCURRENT_WRITES); + init_rwsem(&dev->io_rwsem); init_usb_anchor(&dev->submitted); dev->udev = usb_get_dev(interface_to_usbdev(interface)); @@ -437,6 +460,12 @@ static void lcd_disconnect(struct usb_interface *interface) /* give back our minor */ usb_deregister_dev(interface, &lcd_class); + down_write(&dev->io_rwsem); + dev->disconnected = 1; + up_write(&dev->io_rwsem); + + usb_kill_anchored_urbs(&dev->submitted); + /* decrement our usage count */ kref_put(&dev->kref, lcd_delete); diff --git a/drivers/usb/misc/yurex.c b/drivers/usb/misc/yurex.c index 2222ec2275fc..44c6ced5d442 100644 --- a/drivers/usb/misc/yurex.c +++ b/drivers/usb/misc/yurex.c @@ -64,6 +64,7 @@ struct usb_yurex { struct kref kref; struct mutex io_mutex; + unsigned long disconnected:1; struct fasync_struct *async_queue; wait_queue_head_t waitq; @@ -111,6 +112,7 @@ static void yurex_delete(struct kref *kref) dev->int_buffer, dev->urb->transfer_dma); usb_free_urb(dev->urb); } + usb_put_intf(dev->interface); usb_put_dev(dev->udev); kfree(dev); } @@ -136,6 +138,7 @@ static void yurex_interrupt(struct urb *urb) switch (status) { case 0: /*success*/ break; + /* The device is terminated or messed up, give up */ case -EOVERFLOW: dev_err(&dev->interface->dev, "%s - overflow with length %d, actual length is %d\n", @@ -144,12 +147,13 @@ static void yurex_interrupt(struct urb *urb) case -ENOENT: case -ESHUTDOWN: case -EILSEQ: - /* The device is terminated, clean up */ + case -EPROTO: + case -ETIME: return; default: dev_err(&dev->interface->dev, "%s - unknown status received: %d\n", __func__, status); - goto exit; + return; } /* handle received message */ @@ -181,7 +185,6 @@ static void yurex_interrupt(struct urb *urb) break; } -exit: retval = usb_submit_urb(dev->urb, GFP_ATOMIC); if (retval) { dev_err(&dev->interface->dev, "%s - usb_submit_urb failed: %d\n", @@ -210,7 +213,7 @@ static int yurex_probe(struct usb_interface *interface, const struct usb_device_ init_waitqueue_head(&dev->waitq); dev->udev = usb_get_dev(interface_to_usbdev(interface)); - dev->interface = interface; + dev->interface = usb_get_intf(interface); /* set up the endpoint information */ iface_desc = interface->cur_altsetting; @@ -333,8 +336,9 @@ static void yurex_disconnect(struct usb_interface *interface) /* prevent more I/O from starting */ usb_poison_urb(dev->urb); + usb_poison_urb(dev->cntl_urb); mutex_lock(&dev->io_mutex); - dev->interface = NULL; + dev->disconnected = 1; mutex_unlock(&dev->io_mutex); /* wakeup waiters */ @@ -422,7 +426,7 @@ static ssize_t yurex_read(struct file *file, char __user *buffer, size_t count, dev = file->private_data; mutex_lock(&dev->io_mutex); - if (!dev->interface) { /* already disconnected */ + if (dev->disconnected) { /* already disconnected */ mutex_unlock(&dev->io_mutex); return -ENODEV; } @@ -457,7 +461,7 @@ static ssize_t yurex_write(struct file *file, const char __user *user_buffer, goto error; mutex_lock(&dev->io_mutex); - if (!dev->interface) { /* already disconnected */ + if (dev->disconnected) { /* already disconnected */ mutex_unlock(&dev->io_mutex); retval = -ENODEV; goto error; diff --git a/drivers/usb/renesas_usbhs/common.h b/drivers/usb/renesas_usbhs/common.h index 8c5fc12ad778..b8620aa6b72e 100644 --- a/drivers/usb/renesas_usbhs/common.h +++ b/drivers/usb/renesas_usbhs/common.h @@ -213,6 +213,7 @@ struct usbhs_priv; /* DCPCTR */ #define BSTS (1 << 15) /* Buffer Status */ #define SUREQ (1 << 14) /* Sending SETUP Token */ +#define INBUFM (1 << 14) /* (PIPEnCTR) Transfer Buffer Monitor */ #define CSSTS (1 
<< 12) /* CSSTS Status */ #define ACLRM (1 << 9) /* Buffer Auto-Clear Mode */ #define SQCLR (1 << 8) /* Toggle Bit Clear */ diff --git a/drivers/usb/renesas_usbhs/fifo.c b/drivers/usb/renesas_usbhs/fifo.c index 5e2aa4f85c81..79efb367e5ce 100644 --- a/drivers/usb/renesas_usbhs/fifo.c +++ b/drivers/usb/renesas_usbhs/fifo.c @@ -98,7 +98,7 @@ static void __usbhsf_pkt_del(struct usbhs_pkt *pkt) list_del_init(&pkt->node); } -static struct usbhs_pkt *__usbhsf_pkt_get(struct usbhs_pipe *pipe) +struct usbhs_pkt *__usbhsf_pkt_get(struct usbhs_pipe *pipe) { if (list_empty(&pipe->list)) return NULL; diff --git a/drivers/usb/renesas_usbhs/fifo.h b/drivers/usb/renesas_usbhs/fifo.h index c7d9b86d51bf..3640340e94d6 100644 --- a/drivers/usb/renesas_usbhs/fifo.h +++ b/drivers/usb/renesas_usbhs/fifo.h @@ -106,5 +106,6 @@ void usbhs_pkt_push(struct usbhs_pipe *pipe, struct usbhs_pkt *pkt, void *buf, int len, int zero, int sequence); struct usbhs_pkt *usbhs_pkt_pop(struct usbhs_pipe *pipe, struct usbhs_pkt *pkt); void usbhs_pkt_start(struct usbhs_pipe *pipe); +struct usbhs_pkt *__usbhsf_pkt_get(struct usbhs_pipe *pipe); #endif /* RENESAS_USB_FIFO_H */ diff --git a/drivers/usb/renesas_usbhs/mod_gadget.c b/drivers/usb/renesas_usbhs/mod_gadget.c index c5553028e616..efe8d815cf2c 100644 --- a/drivers/usb/renesas_usbhs/mod_gadget.c +++ b/drivers/usb/renesas_usbhs/mod_gadget.c @@ -731,8 +731,7 @@ static int __usbhsg_ep_set_halt_wedge(struct usb_ep *ep, int halt, int wedge) struct usbhs_priv *priv = usbhsg_gpriv_to_priv(gpriv); struct device *dev = usbhsg_gpriv_to_dev(gpriv); unsigned long flags; - - usbhsg_pipe_disable(uep); + int ret = 0; dev_dbg(dev, "set halt %d (pipe %d)\n", halt, usbhs_pipe_number(pipe)); @@ -740,6 +739,18 @@ static int __usbhsg_ep_set_halt_wedge(struct usb_ep *ep, int halt, int wedge) /******************** spin lock ********************/ usbhs_lock(priv, flags); + /* + * According to usb_ep_set_halt()'s description, this function should + * return -EAGAIN if the IN endpoint has any queue or data. Note + * that the usbhs_pipe_is_dir_in() returns false if the pipe is an + * IN endpoint in the gadget mode. 
+ */ + if (!usbhs_pipe_is_dir_in(pipe) && (__usbhsf_pkt_get(pipe) || + usbhs_pipe_contains_transmittable_data(pipe))) { + ret = -EAGAIN; + goto out; + } + if (halt) usbhs_pipe_stall(pipe); else @@ -750,10 +761,11 @@ static int __usbhsg_ep_set_halt_wedge(struct usb_ep *ep, int halt, int wedge) else usbhsg_status_clr(gpriv, USBHSG_STATUS_WEDGE); +out: usbhs_unlock(priv, flags); /******************** spin unlock ******************/ - return 0; + return ret; } static int usbhsg_ep_set_halt(struct usb_ep *ep, int value) diff --git a/drivers/usb/renesas_usbhs/pipe.c b/drivers/usb/renesas_usbhs/pipe.c index 4f9c3356127a..75fb41d4e9fc 100644 --- a/drivers/usb/renesas_usbhs/pipe.c +++ b/drivers/usb/renesas_usbhs/pipe.c @@ -279,6 +279,21 @@ int usbhs_pipe_is_accessible(struct usbhs_pipe *pipe) return -EBUSY; } +bool usbhs_pipe_contains_transmittable_data(struct usbhs_pipe *pipe) +{ + u16 val; + + /* Do not support for DCP pipe */ + if (usbhs_pipe_is_dcp(pipe)) + return false; + + val = usbhsp_pipectrl_get(pipe); + if (val & INBUFM) + return true; + + return false; +} + /* * PID ctrl */ diff --git a/drivers/usb/renesas_usbhs/pipe.h b/drivers/usb/renesas_usbhs/pipe.h index b0bc7b603016..b7925d363bb4 100644 --- a/drivers/usb/renesas_usbhs/pipe.h +++ b/drivers/usb/renesas_usbhs/pipe.h @@ -89,6 +89,7 @@ void usbhs_pipe_init(struct usbhs_priv *priv, int usbhs_pipe_get_maxpacket(struct usbhs_pipe *pipe); void usbhs_pipe_clear(struct usbhs_pipe *pipe); int usbhs_pipe_is_accessible(struct usbhs_pipe *pipe); +bool usbhs_pipe_contains_transmittable_data(struct usbhs_pipe *pipe); void usbhs_pipe_enable(struct usbhs_pipe *pipe); void usbhs_pipe_disable(struct usbhs_pipe *pipe); void usbhs_pipe_stall(struct usbhs_pipe *pipe); diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c index 7edcd5a8d175..2998da6bd901 100644 --- a/drivers/usb/serial/ftdi_sio.c +++ b/drivers/usb/serial/ftdi_sio.c @@ -1025,6 +1025,9 @@ static const struct usb_device_id id_table_combined[] = { /* EZPrototypes devices */ { USB_DEVICE(EZPROTOTYPES_VID, HJELMSLUND_USB485_ISO_PID) }, { USB_DEVICE_INTERFACE_NUMBER(UNJO_VID, UNJO_ISODEBUG_V1_PID, 1) }, + /* Sienna devices */ + { USB_DEVICE(FTDI_VID, FTDI_SIENNA_PID) }, + { USB_DEVICE(ECHELON_VID, ECHELON_U20_PID) }, { } /* Terminating entry */ }; diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h index ed6b36674c15..2e8161f79b49 100644 --- a/drivers/usb/serial/ftdi_sio_ids.h +++ b/drivers/usb/serial/ftdi_sio_ids.h @@ -38,6 +38,9 @@ #define FTDI_LUMEL_PD12_PID 0x6002 +/* Sienna Serial Interface by Secyourit GmbH */ +#define FTDI_SIENNA_PID 0x8348 + /* Cyber Cortex AV by Fabulous Silicon (http://fabuloussilicon.com) */ #define CYBER_CORTEX_AV_PID 0x8698 @@ -688,6 +691,12 @@ #define BANDB_ZZ_PROG1_USB_PID 0xBA02 /* + * Echelon USB Serial Interface + */ +#define ECHELON_VID 0x0920 +#define ECHELON_U20_PID 0x7500 + +/* * Intrepid Control Systems (http://www.intrepidcs.com/) ValueCAN and NeoVI */ #define INTREPID_VID 0x093C diff --git a/drivers/usb/serial/keyspan.c b/drivers/usb/serial/keyspan.c index 7faa901ee47f..38112be0dbae 100644 --- a/drivers/usb/serial/keyspan.c +++ b/drivers/usb/serial/keyspan.c @@ -1249,8 +1249,8 @@ static struct urb *keyspan_setup_urb(struct usb_serial *serial, int endpoint, ep_desc = find_ep(serial, endpoint); if (!ep_desc) { - /* leak the urb, something's wrong and the callers don't care */ - return urb; + usb_free_urb(urb); + return NULL; } if (usb_endpoint_xfer_int(ep_desc)) { ep_type_name = "INT"; diff --git 
a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c index 1bceb11f3782..00a6e62a68a8 100644 --- a/drivers/usb/serial/option.c +++ b/drivers/usb/serial/option.c @@ -421,6 +421,7 @@ static void option_instat_callback(struct urb *urb); #define CINTERION_PRODUCT_PH8_AUDIO 0x0083 #define CINTERION_PRODUCT_AHXX_2RMNET 0x0084 #define CINTERION_PRODUCT_AHXX_AUDIO 0x0085 +#define CINTERION_PRODUCT_CLS8 0x00b0 /* Olivetti products */ #define OLIVETTI_VENDOR_ID 0x0b3c @@ -1149,6 +1150,14 @@ static const struct usb_device_id option_ids[] = { .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) | RSVD(3) }, { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, TELIT_PRODUCT_LE922_USBCFG5, 0xff), .driver_info = RSVD(0) | RSVD(1) | NCTRL(2) | RSVD(3) }, + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1050, 0xff), /* Telit FN980 (rmnet) */ + .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) }, + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1051, 0xff), /* Telit FN980 (MBIM) */ + .driver_info = NCTRL(0) | RSVD(1) }, + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1052, 0xff), /* Telit FN980 (RNDIS) */ + .driver_info = NCTRL(2) | RSVD(3) }, + { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1053, 0xff), /* Telit FN980 (ECM) */ + .driver_info = NCTRL(0) | RSVD(1) }, { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910), .driver_info = NCTRL(0) | RSVD(1) | RSVD(3) }, { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910_DUAL_MODEM), @@ -1842,6 +1851,8 @@ static const struct usb_device_id option_ids[] = { .driver_info = RSVD(4) }, { USB_DEVICE_INTERFACE_CLASS(CINTERION_VENDOR_ID, CINTERION_PRODUCT_AHXX_2RMNET, 0xff) }, { USB_DEVICE_INTERFACE_CLASS(CINTERION_VENDOR_ID, CINTERION_PRODUCT_AHXX_AUDIO, 0xff) }, + { USB_DEVICE_INTERFACE_CLASS(CINTERION_VENDOR_ID, CINTERION_PRODUCT_CLS8, 0xff), + .driver_info = RSVD(0) | RSVD(4) }, { USB_DEVICE(CINTERION_VENDOR_ID, CINTERION_PRODUCT_HC28_MDM) }, { USB_DEVICE(CINTERION_VENDOR_ID, CINTERION_PRODUCT_HC28_MDMNET) }, { USB_DEVICE(SIEMENS_VENDOR_ID, CINTERION_PRODUCT_HC25_MDM) }, diff --git a/drivers/usb/serial/ti_usb_3410_5052.c b/drivers/usb/serial/ti_usb_3410_5052.c index fe7f5ace6064..a33acb8c16d3 100644 --- a/drivers/usb/serial/ti_usb_3410_5052.c +++ b/drivers/usb/serial/ti_usb_3410_5052.c @@ -542,7 +542,6 @@ static void ti_close(struct usb_serial_port *port) struct ti_port *tport; int port_number; int status; - int do_unlock; unsigned long flags; tdev = usb_get_serial_data(port->serial); @@ -569,16 +568,13 @@ static void ti_close(struct usb_serial_port *port) "%s - cannot send close port command, %d\n" , __func__, status); - /* if mutex_lock is interrupted, continue anyway */ - do_unlock = !mutex_lock_interruptible(&tdev->td_open_close_lock); + mutex_lock(&tdev->td_open_close_lock); --tport->tp_tdev->td_open_port_count; - if (tport->tp_tdev->td_open_port_count <= 0) { + if (tport->tp_tdev->td_open_port_count == 0) { /* last port is closed, shut down interrupt urb */ usb_kill_urb(port->serial->port[0]->interrupt_in_urb); - tport->tp_tdev->td_open_port_count = 0; } - if (do_unlock) - mutex_unlock(&tdev->td_open_close_lock); + mutex_unlock(&tdev->td_open_close_lock); } diff --git a/drivers/usb/serial/usb-serial.c b/drivers/usb/serial/usb-serial.c index e7e29c797824..80ba818d3a21 100644 --- a/drivers/usb/serial/usb-serial.c +++ b/drivers/usb/serial/usb-serial.c @@ -314,10 +314,7 @@ static void serial_cleanup(struct tty_struct *tty) serial = port->serial; owner = serial->type->driver.owner; - mutex_lock(&serial->disc_mutex); - if (!serial->disconnected) - 
usb_autopm_put_interface(serial->interface); - mutex_unlock(&serial->disc_mutex); + usb_autopm_put_interface(serial->interface); usb_serial_put(serial); module_put(owner); diff --git a/drivers/usb/serial/whiteheat.c b/drivers/usb/serial/whiteheat.c index d3ea90bef84d..345211f1a491 100644 --- a/drivers/usb/serial/whiteheat.c +++ b/drivers/usb/serial/whiteheat.c @@ -604,6 +604,10 @@ static int firm_send_command(struct usb_serial_port *port, __u8 command, command_port = port->serial->port[COMMAND_PORT]; command_info = usb_get_serial_port_data(command_port); + + if (command_port->bulk_out_size < datasize + 1) + return -EIO; + mutex_lock(&command_info->mutex); command_info->command_finished = false; @@ -677,6 +681,7 @@ static void firm_setup_port(struct tty_struct *tty) struct device *dev = &port->dev; struct whiteheat_port_settings port_settings; unsigned int cflag = tty->termios.c_cflag; + speed_t baud; port_settings.port = port->port_number + 1; @@ -737,11 +742,13 @@ static void firm_setup_port(struct tty_struct *tty) dev_dbg(dev, "%s - XON = %2x, XOFF = %2x\n", __func__, port_settings.xon, port_settings.xoff); /* get the baud rate wanted */ - port_settings.baud = tty_get_baud_rate(tty); - dev_dbg(dev, "%s - baud rate = %d\n", __func__, port_settings.baud); + baud = tty_get_baud_rate(tty); + port_settings.baud = cpu_to_le32(baud); + dev_dbg(dev, "%s - baud rate = %u\n", __func__, baud); /* fixme: should set validated settings */ - tty_encode_baud_rate(tty, port_settings.baud, port_settings.baud); + tty_encode_baud_rate(tty, baud, baud); + /* handle any settings that aren't specified in the tty structure */ port_settings.lloop = 0; diff --git a/drivers/usb/serial/whiteheat.h b/drivers/usb/serial/whiteheat.h index 38065df4d2d8..30169c859a74 100644 --- a/drivers/usb/serial/whiteheat.h +++ b/drivers/usb/serial/whiteheat.h @@ -91,7 +91,7 @@ struct whiteheat_simple { struct whiteheat_port_settings { __u8 port; /* port number (1 to N) */ - __u32 baud; /* any value 7 - 460800, firmware calculates + __le32 baud; /* any value 7 - 460800, firmware calculates best fit; arrives little endian */ __u8 bits; /* 5, 6, 7, or 8 */ __u8 stop; /* 1 or 2, default 1 (2 = 1.5 if bits = 5) */ diff --git a/drivers/usb/storage/uas.c b/drivers/usb/storage/uas.c index e657b111b320..a7cc0bc68deb 100644 --- a/drivers/usb/storage/uas.c +++ b/drivers/usb/storage/uas.c @@ -772,30 +772,10 @@ static int uas_slave_alloc(struct scsi_device *sdev) { struct uas_dev_info *devinfo = (struct uas_dev_info *)sdev->host->hostdata; - int maxp; sdev->hostdata = devinfo; /* - * We have two requirements here. We must satisfy the requirements - * of the physical HC and the demands of the protocol, as we - * definitely want no additional memory allocation in this path - * ruling out using bounce buffers. - * - * For a transmission on USB to continue we must never send - * a package that is smaller than maxpacket. Hence the length of each - * scatterlist element except the last must be divisible by the - * Bulk maxpacket value. - * If the HC does not ensure that through SG, - * the upper layer must do that. We must assume nothing - * about the capabilities off the HC, so we use the most - * pessimistic requirement. - */ - - maxp = usb_maxpacket(devinfo->udev, devinfo->data_in_pipe, 0); - blk_queue_virt_boundary(sdev->request_queue, maxp - 1); - - /* * The protocol has no requirements on alignment in the strict sense. * Controllers may or may not have alignment restrictions. * As this is not exported, we use an extremely conservative guess. 
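Side note (not part of the patch): the usblcd and yurex hunks above, and the usb-skeleton hunk below, all converge on the same hot-unplug pattern: keep a reference to the interface with usb_get_intf()/usb_put_intf(), and replace the old "set dev->interface to NULL in disconnect()" convention with a dedicated disconnected flag that is set under the I/O lock and checked before submitting I/O. A minimal, self-contained sketch of that pattern follows; the demo_* names are hypothetical and only illustrate the technique, they are not code from these commits.

	#include <linux/kref.h>
	#include <linux/mutex.h>
	#include <linux/slab.h>
	#include <linux/usb.h>

	struct demo_dev {
		struct usb_device	*udev;
		struct usb_interface	*intf;
		struct kref		kref;
		struct mutex		io_mutex;	/* synchronize I/O with disconnect */
		unsigned long		disconnected:1;
	};

	static void demo_delete(struct kref *kref)
	{
		struct demo_dev *dev = container_of(kref, struct demo_dev, kref);

		/* drop the references taken in probe(); runs after the last user */
		usb_put_intf(dev->intf);
		usb_put_dev(dev->udev);
		kfree(dev);
	}

	static int demo_probe(struct usb_interface *interface,
			      const struct usb_device_id *id)
	{
		struct demo_dev *dev;

		dev = kzalloc(sizeof(*dev), GFP_KERNEL);
		if (!dev)
			return -ENOMEM;

		kref_init(&dev->kref);
		mutex_init(&dev->io_mutex);
		/* pin both the device and the interface for the lifetime of dev */
		dev->udev = usb_get_dev(interface_to_usbdev(interface));
		dev->intf = usb_get_intf(interface);
		usb_set_intfdata(interface, dev);
		return 0;
	}

	static void demo_disconnect(struct usb_interface *interface)
	{
		struct demo_dev *dev = usb_get_intfdata(interface);

		/* flag the device dead instead of clearing dev->intf */
		mutex_lock(&dev->io_mutex);
		dev->disconnected = 1;
		mutex_unlock(&dev->io_mutex);

		kref_put(&dev->kref, demo_delete);
	}

	static ssize_t demo_io(struct demo_dev *dev)
	{
		ssize_t ret = 0;

		mutex_lock(&dev->io_mutex);
		if (dev->disconnected)		/* disconnect() was called */
			ret = -ENODEV;
		/* ... otherwise submit URBs here, still under io_mutex ... */
		mutex_unlock(&dev->io_mutex);
		return ret;
	}

Keeping the interface pointer valid (but flagged) lets paths such as release() call usb_autopm_put_interface() unconditionally, which is exactly what the usb-skeleton change below relies on.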
diff --git a/drivers/usb/usb-skeleton.c b/drivers/usb/usb-skeleton.c index 545d09b8081d..871c366d9229 100644 --- a/drivers/usb/usb-skeleton.c +++ b/drivers/usb/usb-skeleton.c @@ -63,6 +63,7 @@ struct usb_skel { spinlock_t err_lock; /* lock for errors */ struct kref kref; struct mutex io_mutex; /* synchronize I/O with disconnect */ + unsigned long disconnected:1; wait_queue_head_t bulk_in_wait; /* to wait for an ongoing read */ }; #define to_skel_dev(d) container_of(d, struct usb_skel, kref) @@ -75,6 +76,7 @@ static void skel_delete(struct kref *kref) struct usb_skel *dev = to_skel_dev(kref); usb_free_urb(dev->bulk_in_urb); + usb_put_intf(dev->interface); usb_put_dev(dev->udev); kfree(dev->bulk_in_buffer); kfree(dev); @@ -126,10 +128,7 @@ static int skel_release(struct inode *inode, struct file *file) return -ENODEV; /* allow the device to be autosuspended */ - mutex_lock(&dev->io_mutex); - if (dev->interface) - usb_autopm_put_interface(dev->interface); - mutex_unlock(&dev->io_mutex); + usb_autopm_put_interface(dev->interface); /* decrement the count on our device */ kref_put(&dev->kref, skel_delete); @@ -241,7 +240,7 @@ static ssize_t skel_read(struct file *file, char *buffer, size_t count, if (rv < 0) return rv; - if (!dev->interface) { /* disconnect() was called */ + if (dev->disconnected) { /* disconnect() was called */ rv = -ENODEV; goto exit; } @@ -422,7 +421,7 @@ static ssize_t skel_write(struct file *file, const char *user_buffer, /* this lock makes sure we don't submit URBs to gone devices */ mutex_lock(&dev->io_mutex); - if (!dev->interface) { /* disconnect() was called */ + if (dev->disconnected) { /* disconnect() was called */ mutex_unlock(&dev->io_mutex); retval = -ENODEV; goto error; @@ -511,7 +510,7 @@ static int skel_probe(struct usb_interface *interface, init_waitqueue_head(&dev->bulk_in_wait); dev->udev = usb_get_dev(interface_to_usbdev(interface)); - dev->interface = interface; + dev->interface = usb_get_intf(interface); /* set up the endpoint information */ /* use only the first bulk-in and bulk-out endpoints */ @@ -590,7 +589,7 @@ static void skel_disconnect(struct usb_interface *interface) /* prevent more I/O from starting */ mutex_lock(&dev->io_mutex); - dev->interface = NULL; + dev->disconnected = 1; mutex_unlock(&dev->io_mutex); usb_kill_anchored_urbs(&dev->submitted); diff --git a/drivers/usb/usbip/vhci_hcd.c b/drivers/usb/usbip/vhci_hcd.c index 4d68a1e9e878..3476d02967f7 100644 --- a/drivers/usb/usbip/vhci_hcd.c +++ b/drivers/usb/usbip/vhci_hcd.c @@ -303,6 +303,7 @@ static int vhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue, default: break; } + break; default: usbip_dbg_vhci_rh(" ClearPortFeature: default %x\n", wValue); diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c index 47b229fa5e8e..4b62eb3b5923 100644 --- a/drivers/vfio/pci/vfio_pci.c +++ b/drivers/vfio/pci/vfio_pci.c @@ -221,11 +221,20 @@ static void vfio_pci_disable(struct vfio_pci_device *vdev) pci_write_config_word(pdev, PCI_COMMAND, PCI_COMMAND_INTX_DISABLE); /* - * Try to reset the device. The success of this is dependent on - * being able to lock the device, which is not always possible. + * Try to get the locks ourselves to prevent a deadlock. The + * success of this is dependent on being able to lock the device, + * which is not always possible. + * We can not use the "try" reset interface here, which will + * overwrite the previously restored configuration information. 
*/ - if (vdev->reset_works && !pci_try_reset_function(pdev)) - vdev->needs_reset = false; + if (vdev->reset_works && pci_cfg_access_trylock(pdev)) { + if (device_trylock(&pdev->dev)) { + if (!__pci_reset_function_locked(pdev)) + vdev->needs_reset = false; + device_unlock(&pdev->dev); + } + pci_cfg_access_unlock(pdev); + } pci_restore_state(pdev); out: diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c index 72e914de473e..81754c33c3a9 100644 --- a/drivers/vhost/vsock.c +++ b/drivers/vhost/vsock.c @@ -21,6 +21,14 @@ #include "vhost.h" #define VHOST_VSOCK_DEFAULT_HOST_CID 2 +/* Max number of bytes transferred before requeueing the job. + * Using this limit prevents one virtqueue from starving others. */ +#define VHOST_VSOCK_WEIGHT 0x80000 +/* Max number of packets transferred before requeueing the job. + * Using this limit prevents one virtqueue from starving others with + * small pkts. + */ +#define VHOST_VSOCK_PKT_WEIGHT 256 enum { VHOST_VSOCK_FEATURES = VHOST_FEATURES, @@ -529,7 +537,8 @@ static int vhost_vsock_dev_open(struct inode *inode, struct file *file) vsock->vqs[VSOCK_VQ_TX].handle_kick = vhost_vsock_handle_tx_kick; vsock->vqs[VSOCK_VQ_RX].handle_kick = vhost_vsock_handle_rx_kick; - vhost_dev_init(&vsock->dev, vqs, ARRAY_SIZE(vsock->vqs)); + vhost_dev_init(&vsock->dev, vqs, ARRAY_SIZE(vsock->vqs), + VHOST_VSOCK_PKT_WEIGHT, VHOST_VSOCK_WEIGHT); file->private_data = vsock; spin_lock_init(&vsock->send_pkt_list_lock); diff --git a/drivers/video/fbdev/ssd1307fb.c b/drivers/video/fbdev/ssd1307fb.c index fa3480815cdb..88e0763edcc7 100644 --- a/drivers/video/fbdev/ssd1307fb.c +++ b/drivers/video/fbdev/ssd1307fb.c @@ -421,7 +421,7 @@ static int ssd1307fb_init(struct ssd1307fb_par *par) if (ret < 0) return ret; - ret = ssd1307fb_write_cmd(par->client, 0x0); + ret = ssd1307fb_write_cmd(par->client, par->page_offset); if (ret < 0) return ret; diff --git a/drivers/xen/pci.c b/drivers/xen/pci.c index 7494dbeb4409..db58aaa4dc59 100644 --- a/drivers/xen/pci.c +++ b/drivers/xen/pci.c @@ -29,6 +29,8 @@ #include "../pci/pci.h" #ifdef CONFIG_PCI_MMCONFIG #include <asm/pci_x86.h> + +static int xen_mcfg_late(void); #endif static bool __read_mostly pci_seg_supported = true; @@ -40,7 +42,18 @@ static int xen_add_device(struct device *dev) #ifdef CONFIG_PCI_IOV struct pci_dev *physfn = pci_dev->physfn; #endif - +#ifdef CONFIG_PCI_MMCONFIG + static bool pci_mcfg_reserved = false; + /* + * Reserve MCFG areas in Xen on first invocation due to this being + * potentially called from inside of acpi_init immediately after + * MCFG table has been finally parsed. + */ + if (!pci_mcfg_reserved) { + xen_mcfg_late(); + pci_mcfg_reserved = true; + } +#endif if (pci_seg_supported) { struct { struct physdev_pci_device_add add; @@ -213,7 +226,7 @@ static int __init register_xen_pci_notifier(void) arch_initcall(register_xen_pci_notifier); #ifdef CONFIG_PCI_MMCONFIG -static int __init xen_mcfg_late(void) +static int xen_mcfg_late(void) { struct pci_mmcfg_region *cfg; int rc; @@ -252,8 +265,4 @@ static int __init xen_mcfg_late(void) } return 0; } -/* - * Needs to be done after acpi_init which are subsys_initcall. 
- */ -subsys_initcall_sync(xen_mcfg_late); #endif diff --git a/fs/9p/vfs_file.c b/fs/9p/vfs_file.c index 373cc50544e9..9dbf37147126 100644 --- a/fs/9p/vfs_file.c +++ b/fs/9p/vfs_file.c @@ -528,6 +528,7 @@ v9fs_mmap_file_mmap(struct file *filp, struct vm_area_struct *vma) v9inode = V9FS_I(inode); mutex_lock(&v9inode->v_mutex); if (!v9inode->writeback_fid && + (vma->vm_flags & VM_SHARED) && (vma->vm_flags & VM_WRITE)) { /* * clone a fid and add it to writeback_fid @@ -629,6 +630,8 @@ static void v9fs_mmap_vm_close(struct vm_area_struct *vma) (vma->vm_end - vma->vm_start - 1), }; + if (!(vma->vm_flags & VM_SHARED)) + return; p9_debug(P9_DEBUG_VFS, "9p VMA close, %p, flushing", vma); diff --git a/fs/binfmt_script.c b/fs/binfmt_script.c index afdf4e3cafc2..37c2093a24d3 100644 --- a/fs/binfmt_script.c +++ b/fs/binfmt_script.c @@ -14,14 +14,31 @@ #include <linux/err.h> #include <linux/fs.h> +static inline bool spacetab(char c) { return c == ' ' || c == '\t'; } +static inline char *next_non_spacetab(char *first, const char *last) +{ + for (; first <= last; first++) + if (!spacetab(*first)) + return first; + return NULL; +} +static inline char *next_terminator(char *first, const char *last) +{ + for (; first <= last; first++) + if (spacetab(*first) || !*first) + return first; + return NULL; +} + static int load_script(struct linux_binprm *bprm) { const char *i_arg, *i_name; - char *cp; + char *cp, *buf_end; struct file *file; char interp[BINPRM_BUF_SIZE]; int retval; + /* Not ours to exec if we don't start with "#!". */ if ((bprm->buf[0] != '#') || (bprm->buf[1] != '!')) return -ENOEXEC; @@ -34,18 +51,40 @@ static int load_script(struct linux_binprm *bprm) if (bprm->interp_flags & BINPRM_FLAGS_PATH_INACCESSIBLE) return -ENOENT; - /* - * This section does the #! interpretation. - * Sorta complicated, but hopefully it will work. -TYT - */ - + /* Release since we are not mapping a binary into memory. */ allow_write_access(bprm->file); fput(bprm->file); bprm->file = NULL; - bprm->buf[BINPRM_BUF_SIZE - 1] = '\0'; - if ((cp = strchr(bprm->buf, '\n')) == NULL) - cp = bprm->buf+BINPRM_BUF_SIZE-1; + /* + * This section handles parsing the #! line into separate + * interpreter path and argument strings. We must be careful + * because bprm->buf is not yet guaranteed to be NUL-terminated + * (though the buffer will have trailing NUL padding when the + * file size was smaller than the buffer size). + * + * We do not want to exec a truncated interpreter path, so either + * we find a newline (which indicates nothing is truncated), or + * we find a space/tab/NUL after the interpreter path (which + * itself may be preceded by spaces/tabs). Truncating the + * arguments is fine: the interpreter can re-read the script to + * parse them on its own. + */ + buf_end = bprm->buf + sizeof(bprm->buf) - 1; + cp = strnchr(bprm->buf, sizeof(bprm->buf), '\n'); + if (!cp) { + cp = next_non_spacetab(bprm->buf + 2, buf_end); + if (!cp) + return -ENOEXEC; /* Entire buf is spaces/tabs */ + /* + * If there is no later space/tab/NUL we must assume the + * interpreter path is truncated. + */ + if (!next_terminator(cp, buf_end)) + return -ENOEXEC; + cp = buf_end; + } + /* NUL-terminate the buffer and any trailing spaces/tabs. 
*/ *cp = '\0'; while (cp > bprm->buf) { cp--; diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c index 8f4baa3cb992..51a0409e1b84 100644 --- a/fs/btrfs/ctree.c +++ b/fs/btrfs/ctree.c @@ -1418,6 +1418,7 @@ get_old_root(struct btrfs_root *root, u64 time_seq) struct tree_mod_elem *tm; struct extent_buffer *eb = NULL; struct extent_buffer *eb_root; + u64 eb_root_owner = 0; struct extent_buffer *old; struct tree_mod_root *old_root = NULL; u64 old_generation = 0; @@ -1451,6 +1452,7 @@ get_old_root(struct btrfs_root *root, u64 time_seq) free_extent_buffer(old); } } else if (old_root) { + eb_root_owner = btrfs_header_owner(eb_root); btrfs_tree_read_unlock(eb_root); free_extent_buffer(eb_root); eb = alloc_dummy_extent_buffer(root->fs_info, logical); @@ -1468,7 +1470,7 @@ get_old_root(struct btrfs_root *root, u64 time_seq) if (old_root) { btrfs_set_header_bytenr(eb, eb->start); btrfs_set_header_backref_rev(eb, BTRFS_MIXED_BACKREF_REV); - btrfs_set_header_owner(eb, btrfs_header_owner(eb_root)); + btrfs_set_header_owner(eb, eb_root_owner); btrfs_set_header_level(eb, old_root->level); btrfs_set_header_generation(eb, old_generation); } @@ -5433,6 +5435,7 @@ int btrfs_compare_trees(struct btrfs_root *left_root, advance_left = advance_right = 0; while (1) { + cond_resched(); if (advance_left && !left_end_reached) { ret = tree_advance(left_root, left_path, &left_level, left_root_level, diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c index df2bb4b61a00..34ffc125763f 100644 --- a/fs/btrfs/extent-tree.c +++ b/fs/btrfs/extent-tree.c @@ -7168,6 +7168,14 @@ search: */ if ((flags & extra) && !(block_group->flags & extra)) goto loop; + + /* + * This block group has different flags than we want. + * It's possible that we have MIXED_GROUP flag but no + * block group is mixed. Just skip such block group. 
+ */ + btrfs_release_block_group(block_group, delalloc); + continue; } have_block_group: @@ -9897,6 +9905,7 @@ int btrfs_read_block_groups(struct btrfs_root *root) btrfs_err(info, "bg %llu is a mixed block group but filesystem hasn't enabled mixed block groups", cache->key.objectid); + btrfs_put_block_group(cache); ret = -EINVAL; goto error; } diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c index 90e29d40aa82..734babb6626c 100644 --- a/fs/btrfs/qgroup.c +++ b/fs/btrfs/qgroup.c @@ -2328,9 +2328,6 @@ out: btrfs_free_path(path); mutex_lock(&fs_info->qgroup_rescan_lock); - if (!btrfs_fs_closing(fs_info)) - fs_info->qgroup_flags &= ~BTRFS_QGROUP_STATUS_FLAG_RESCAN; - if (err > 0 && fs_info->qgroup_flags & BTRFS_QGROUP_STATUS_FLAG_INCONSISTENT) { fs_info->qgroup_flags &= ~BTRFS_QGROUP_STATUS_FLAG_INCONSISTENT; @@ -2346,16 +2343,30 @@ out: trans = btrfs_start_transaction(fs_info->quota_root, 1); if (IS_ERR(trans)) { err = PTR_ERR(trans); + trans = NULL; btrfs_err(fs_info, "fail to start transaction for status update: %d\n", err); - goto done; } - ret = update_qgroup_status_item(trans, fs_info, fs_info->quota_root); - if (ret < 0) { - err = ret; - btrfs_err(fs_info, "fail to update qgroup status: %d\n", err); + + mutex_lock(&fs_info->qgroup_rescan_lock); + if (!btrfs_fs_closing(fs_info)) + fs_info->qgroup_flags &= ~BTRFS_QGROUP_STATUS_FLAG_RESCAN; + if (trans) { + ret = update_qgroup_status_item(trans, fs_info, fs_info->quota_root); + if (ret < 0) { + err = ret; + btrfs_err(fs_info, "fail to update qgroup status: %d", + err); + } } + fs_info->qgroup_rescan_running = false; + complete_all(&fs_info->qgroup_rescan_completion); + mutex_unlock(&fs_info->qgroup_rescan_lock); + + if (!trans) + return; + btrfs_end_transaction(trans, fs_info->quota_root); if (btrfs_fs_closing(fs_info)) { @@ -2366,12 +2377,6 @@ out: } else { btrfs_err(fs_info, "qgroup scan failed with %d", err); } - -done: - mutex_lock(&fs_info->qgroup_rescan_lock); - fs_info->qgroup_rescan_running = false; - mutex_unlock(&fs_info->qgroup_rescan_lock); - complete_all(&fs_info->qgroup_rescan_completion); } /* diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c index e137ff6cd9da..aa4df4a02252 100644 --- a/fs/ceph/caps.c +++ b/fs/ceph/caps.c @@ -926,6 +926,11 @@ void __ceph_remove_cap(struct ceph_cap *cap, bool queue_release) dout("__ceph_remove_cap %p from %p\n", cap, &ci->vfs_inode); + /* remove from inode's cap rbtree, and clear auth cap */ + rb_erase(&cap->ci_node, &ci->i_caps); + if (ci->i_auth_cap == cap) + ci->i_auth_cap = NULL; + /* remove from session list */ spin_lock(&session->s_cap_lock); if (session->s_cap_iterator == cap) { @@ -961,11 +966,6 @@ void __ceph_remove_cap(struct ceph_cap *cap, bool queue_release) spin_unlock(&session->s_cap_lock); - /* remove from inode list */ - rb_erase(&cap->ci_node, &ci->i_caps); - if (ci->i_auth_cap == cap) - ci->i_auth_cap = NULL; - if (removed) ceph_put_cap(mdsc, cap); diff --git a/fs/ceph/inode.c b/fs/ceph/inode.c index a663b676d566..2ad3f4ab4dcf 100644 --- a/fs/ceph/inode.c +++ b/fs/ceph/inode.c @@ -725,7 +725,12 @@ static int fill_inode(struct inode *inode, struct page *locked_page, ci->i_version = le64_to_cpu(info->version); inode->i_version++; inode->i_rdev = le32_to_cpu(info->rdev); - inode->i_blkbits = fls(le32_to_cpu(info->layout.fl_stripe_unit)) - 1; + /* directories have fl_stripe_unit set to zero */ + if (le32_to_cpu(info->layout.fl_stripe_unit)) + inode->i_blkbits = + fls(le32_to_cpu(info->layout.fl_stripe_unit)) - 1; + else + inode->i_blkbits = CEPH_BLOCK_SHIFT; if ((new_version || 
(new_issued & CEPH_CAP_AUTH_SHARED)) && (issued & CEPH_CAP_AUTH_EXCL) == 0) { diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h index 8225de3c9743..6b61d4ad30b5 100644 --- a/fs/cifs/cifsglob.h +++ b/fs/cifs/cifsglob.h @@ -1152,6 +1152,11 @@ void cifsFileInfo_put(struct cifsFileInfo *cifs_file); struct cifsInodeInfo { bool can_cache_brlcks; struct list_head llist; /* locks helb by this inode */ + /* + * NOTE: Some code paths call down_read(lock_sem) twice, so + * we must always use use cifs_down_write() instead of down_write() + * for this semaphore to avoid deadlocks. + */ struct rw_semaphore lock_sem; /* protect the fields above */ /* BB add in lists for dirty pages i.e. write caching info for oplock */ struct list_head openFileList; diff --git a/fs/cifs/cifsproto.h b/fs/cifs/cifsproto.h index 54590fd33df1..257c06c6a6c2 100644 --- a/fs/cifs/cifsproto.h +++ b/fs/cifs/cifsproto.h @@ -138,6 +138,7 @@ extern int cifs_unlock_range(struct cifsFileInfo *cfile, struct file_lock *flock, const unsigned int xid); extern int cifs_push_mandatory_locks(struct cifsFileInfo *cfile); +extern void cifs_down_write(struct rw_semaphore *sem); extern struct cifsFileInfo *cifs_new_fileinfo(struct cifs_fid *fid, struct file *file, struct tcon_link *tlink, diff --git a/fs/cifs/dir.c b/fs/cifs/dir.c index afd317eb9db9..be16da31cbcc 100644 --- a/fs/cifs/dir.c +++ b/fs/cifs/dir.c @@ -830,10 +830,16 @@ lookup_out: static int cifs_d_revalidate(struct dentry *direntry, unsigned int flags) { + struct inode *inode; + if (flags & LOOKUP_RCU) return -ECHILD; if (d_really_is_positive(direntry)) { + inode = d_inode(direntry); + if ((flags & LOOKUP_REVAL) && !CIFS_CACHE_READ(CIFS_I(inode))) + CIFS_I(inode)->time = 0; /* force reval */ + if (cifs_revalidate_dentry(direntry)) return 0; else { @@ -844,7 +850,7 @@ cifs_d_revalidate(struct dentry *direntry, unsigned int flags) * attributes will have been updated by * cifs_revalidate_dentry(). */ - if (IS_AUTOMOUNT(d_inode(direntry)) && + if (IS_AUTOMOUNT(inode) && !(direntry->d_flags & DCACHE_NEED_AUTOMOUNT)) { spin_lock(&direntry->d_lock); direntry->d_flags |= DCACHE_NEED_AUTOMOUNT; diff --git a/fs/cifs/file.c b/fs/cifs/file.c index 5ec1662cfdd3..5cad1109ed80 100644 --- a/fs/cifs/file.c +++ b/fs/cifs/file.c @@ -252,6 +252,12 @@ cifs_nt_open(char *full_path, struct inode *inode, struct cifs_sb_info *cifs_sb, rc = cifs_get_inode_info(&inode, full_path, buf, inode->i_sb, xid, fid); + if (rc) { + server->ops->close(xid, tcon, fid); + if (rc == -ESTALE) + rc = -EOPENSTALE; + } + out: kfree(buf); return rc; @@ -274,6 +280,13 @@ cifs_has_mand_locks(struct cifsInodeInfo *cinode) return has_locks; } +void +cifs_down_write(struct rw_semaphore *sem) +{ + while (!down_write_trylock(sem)) + msleep(10); +} + struct cifsFileInfo * cifs_new_fileinfo(struct cifs_fid *fid, struct file *file, struct tcon_link *tlink, __u32 oplock) @@ -299,7 +312,7 @@ cifs_new_fileinfo(struct cifs_fid *fid, struct file *file, INIT_LIST_HEAD(&fdlocks->locks); fdlocks->cfile = cfile; cfile->llist = fdlocks; - down_write(&cinode->lock_sem); + cifs_down_write(&cinode->lock_sem); list_add(&fdlocks->llist, &cinode->llist); up_write(&cinode->lock_sem); @@ -432,7 +445,7 @@ void cifsFileInfo_put(struct cifsFileInfo *cifs_file) * Delete any outstanding lock records. We'll lose them when the file * is closed anyway. 
*/ - down_write(&cifsi->lock_sem); + cifs_down_write(&cifsi->lock_sem); list_for_each_entry_safe(li, tmp, &cifs_file->llist->locks, llist) { list_del(&li->llist); cifs_del_lock_waiters(li); @@ -941,7 +954,7 @@ static void cifs_lock_add(struct cifsFileInfo *cfile, struct cifsLockInfo *lock) { struct cifsInodeInfo *cinode = CIFS_I(d_inode(cfile->dentry)); - down_write(&cinode->lock_sem); + cifs_down_write(&cinode->lock_sem); list_add_tail(&lock->llist, &cfile->llist->locks); up_write(&cinode->lock_sem); } @@ -963,7 +976,7 @@ cifs_lock_add_if(struct cifsFileInfo *cfile, struct cifsLockInfo *lock, try_again: exist = false; - down_write(&cinode->lock_sem); + cifs_down_write(&cinode->lock_sem); exist = cifs_find_lock_conflict(cfile, lock->offset, lock->length, lock->type, &conf_lock, CIFS_LOCK_OP); @@ -985,7 +998,7 @@ try_again: (lock->blist.next == &lock->blist)); if (!rc) goto try_again; - down_write(&cinode->lock_sem); + cifs_down_write(&cinode->lock_sem); list_del_init(&lock->blist); } @@ -1038,7 +1051,7 @@ cifs_posix_lock_set(struct file *file, struct file_lock *flock) return rc; try_again: - down_write(&cinode->lock_sem); + cifs_down_write(&cinode->lock_sem); if (!cinode->can_cache_brlcks) { up_write(&cinode->lock_sem); return rc; @@ -1236,7 +1249,7 @@ cifs_push_locks(struct cifsFileInfo *cfile) int rc = 0; /* we are going to update can_cache_brlcks here - need a write access */ - down_write(&cinode->lock_sem); + cifs_down_write(&cinode->lock_sem); if (!cinode->can_cache_brlcks) { up_write(&cinode->lock_sem); return rc; @@ -1424,7 +1437,7 @@ cifs_unlock_range(struct cifsFileInfo *cfile, struct file_lock *flock, if (!buf) return -ENOMEM; - down_write(&cinode->lock_sem); + cifs_down_write(&cinode->lock_sem); for (i = 0; i < 2; i++) { cur = buf; num = 0; diff --git a/fs/cifs/inode.c b/fs/cifs/inode.c index 0f210cb5038a..0a219545940d 100644 --- a/fs/cifs/inode.c +++ b/fs/cifs/inode.c @@ -405,6 +405,7 @@ int cifs_get_inode_info_unix(struct inode **pinode, /* if uniqueid is different, return error */ if (unlikely(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SERVER_INUM && CIFS_I(*pinode)->uniqueid != fattr.cf_uniqueid)) { + CIFS_I(*pinode)->time = 0; /* force reval */ rc = -ESTALE; goto cgiiu_exit; } @@ -412,6 +413,7 @@ int cifs_get_inode_info_unix(struct inode **pinode, /* if filetype is different, return error */ if (unlikely(((*pinode)->i_mode & S_IFMT) != (fattr.cf_mode & S_IFMT))) { + CIFS_I(*pinode)->time = 0; /* force reval */ rc = -ESTALE; goto cgiiu_exit; } @@ -829,8 +831,21 @@ cifs_get_inode_info(struct inode **inode, const char *full_path, } } else fattr.cf_uniqueid = iunique(sb, ROOT_I); - } else - fattr.cf_uniqueid = CIFS_I(*inode)->uniqueid; + } else { + if ((cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SERVER_INUM) && + validinum == false && server->ops->get_srv_inum) { + /* + * Pass a NULL tcon to ensure we don't make a round + * trip to the server. This only works for SMB2+. 
+ */ + tmprc = server->ops->get_srv_inum(xid, + NULL, cifs_sb, full_path, + &fattr.cf_uniqueid, data); + if (tmprc) + fattr.cf_uniqueid = CIFS_I(*inode)->uniqueid; + } else + fattr.cf_uniqueid = CIFS_I(*inode)->uniqueid; + } /* query for SFU type info if supported and needed */ if (fattr.cf_cifsattrs & ATTR_SYSTEM && @@ -871,9 +886,18 @@ cifs_get_inode_info(struct inode **inode, const char *full_path, } else { /* we already have inode, update it */ + /* if uniqueid is different, return error */ + if (unlikely(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SERVER_INUM && + CIFS_I(*inode)->uniqueid != fattr.cf_uniqueid)) { + CIFS_I(*inode)->time = 0; /* force reval */ + rc = -ESTALE; + goto cgii_exit; + } + /* if filetype is different, return error */ if (unlikely(((*inode)->i_mode & S_IFMT) != (fattr.cf_mode & S_IFMT))) { + CIFS_I(*inode)->time = 0; /* force reval */ rc = -ESTALE; goto cgii_exit; } diff --git a/fs/cifs/netmisc.c b/fs/cifs/netmisc.c index cc88f4f0325e..bed973330227 100644 --- a/fs/cifs/netmisc.c +++ b/fs/cifs/netmisc.c @@ -130,10 +130,6 @@ static const struct smb_to_posix_error mapping_table_ERRSRV[] = { {0, 0} }; -static const struct smb_to_posix_error mapping_table_ERRHRD[] = { - {0, 0} -}; - /* * Convert a string containing text IPv4 or IPv6 address to binary form. * diff --git a/fs/cifs/smb1ops.c b/fs/cifs/smb1ops.c index f7a9adab0b84..6f5d78b172ba 100644 --- a/fs/cifs/smb1ops.c +++ b/fs/cifs/smb1ops.c @@ -180,6 +180,9 @@ cifs_get_next_mid(struct TCP_Server_Info *server) /* we do not want to loop forever */ last_mid = cur_mid; cur_mid++; + /* avoid 0xFFFF MID */ + if (cur_mid == 0xffff) + cur_mid++; /* * This nested loop looks more expensive than it is. diff --git a/fs/cifs/smb2file.c b/fs/cifs/smb2file.c index dee5250701de..41f1a5dd33a5 100644 --- a/fs/cifs/smb2file.c +++ b/fs/cifs/smb2file.c @@ -138,7 +138,7 @@ smb2_unlock_range(struct cifsFileInfo *cfile, struct file_lock *flock, cur = buf; - down_write(&cinode->lock_sem); + cifs_down_write(&cinode->lock_sem); list_for_each_entry_safe(li, tmp, &cfile->llist->locks, llist) { if (flock->fl_start > li->offset || (flock->fl_start + length) < diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c index 591c93de8c20..0fcf42401a5d 100644 --- a/fs/cifs/smb2ops.c +++ b/fs/cifs/smb2ops.c @@ -1335,6 +1335,11 @@ smb21_set_oplock_level(struct cifsInodeInfo *cinode, __u32 oplock, if (oplock == SMB2_OPLOCK_LEVEL_NOCHANGE) return; + /* Check if the server granted an oplock rather than a lease */ + if (oplock & SMB2_OPLOCK_LEVEL_EXCLUSIVE) + return smb2_set_oplock_level(cinode, oplock, epoch, + purge_cache); + if (oplock & SMB2_LEASE_READ_CACHING_HE) { new_oplock |= CIFS_CACHE_READ_FLG; strcat(message, "R"); diff --git a/fs/configfs/symlink.c b/fs/configfs/symlink.c index 66e8c5d58b21..3af565e8fd51 100644 --- a/fs/configfs/symlink.c +++ b/fs/configfs/symlink.c @@ -157,11 +157,42 @@ int configfs_symlink(struct inode *dir, struct dentry *dentry, const char *symna !type->ct_item_ops->allow_link) goto out_put; + /* + * This is really sick. What they wanted was a hybrid of + * link(2) and symlink(2) - they wanted the target resolved + * at syscall time (as link(2) would've done), be a directory + * (which link(2) would've refused to do) *AND* be a deep + * fucking magic, making the target busy from rmdir POV. + * symlink(2) is nothing of that sort, and the locking it + * gets matches the normal symlink(2) semantics. Without + * attempts to resolve the target (which might very well + * not even exist yet) done prior to locking the parent + * directory. 
This perversion, OTOH, needs to resolve + * the target, which would lead to obvious deadlocks if + * attempted with any directories locked. + * + * Unfortunately, that garbage is userland ABI and we should've + * said "no" back in 2005. Too late now, so we get to + * play very ugly games with locking. + * + * Try *ANYTHING* of that sort in new code, and you will + * really regret it. Just ask yourself - what could a BOFH + * do to me and do I want to find it out first-hand? + * + * AV, a thoroughly annoyed bastard. + */ + inode_unlock(dir); ret = get_target(symname, &path, &target_item, dentry->d_sb); + inode_lock(dir); if (ret) goto out_put; - ret = type->ct_item_ops->allow_link(parent_item, target_item); + if (dentry->d_inode || d_unhashed(dentry)) + ret = -EEXIST; + else + ret = inode_permission(dir, MAY_WRITE | MAY_EXEC); + if (!ret) + ret = type->ct_item_ops->allow_link(parent_item, target_item); if (!ret) { mutex_lock(&configfs_symlink_mutex); ret = create_link(parent_item, target_item, dentry); diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c index 6691d42ede93..a8eeea6bcb7c 100644 --- a/fs/ext4/inode.c +++ b/fs/ext4/inode.c @@ -3772,6 +3772,15 @@ int ext4_punch_hole(struct inode *inode, loff_t offset, loff_t length) trace_ext4_punch_hole(inode, offset, length, 0); + ext4_clear_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA); + if (ext4_has_inline_data(inode)) { + down_write(&EXT4_I(inode)->i_mmap_sem); + ret = ext4_convert_inline_data(inode); + up_write(&EXT4_I(inode)->i_mmap_sem); + if (ret) + return ret; + } + /* * Write out all dirty pages to avoid race conditions * Then release them. diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c index c51d0c531213..a714f430675c 100644 --- a/fs/f2fs/file.c +++ b/fs/f2fs/file.c @@ -1820,6 +1820,8 @@ static int f2fs_ioc_getversion(struct file *filp, unsigned long arg) static int f2fs_ioc_start_atomic_write(struct file *filp) { struct inode *inode = file_inode(filp); + struct f2fs_inode_info *fi = F2FS_I(inode); + struct f2fs_sb_info *sbi = F2FS_I_SB(inode); int ret; if (!inode_owner_or_capable(inode)) @@ -1859,6 +1861,12 @@ static int f2fs_ioc_start_atomic_write(struct file *filp) goto out; } + spin_lock(&sbi->inode_lock[ATOMIC_FILE]); + if (list_empty(&fi->inmem_ilist)) + list_add_tail(&fi->inmem_ilist, &sbi->inode_list[ATOMIC_FILE]); + spin_unlock(&sbi->inode_lock[ATOMIC_FILE]); + + /* add inode in inmem_list first and set atomic_file */ set_inode_flag(inode, FI_ATOMIC_FILE); clear_inode_flag(inode, FI_ATOMIC_REVOKE_REQUEST); up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]); @@ -1900,11 +1908,8 @@ static int f2fs_ioc_commit_atomic_write(struct file *filp) goto err_out; ret = f2fs_do_sync_file(filp, 0, LLONG_MAX, 0, true); - if (!ret) { - clear_inode_flag(inode, FI_ATOMIC_FILE); - F2FS_I(inode)->i_gc_failures[GC_FAILURE_ATOMIC] = 0; - stat_dec_atomic_write(inode); - } + if (!ret) + f2fs_drop_inmem_pages(inode); } else { ret = f2fs_do_sync_file(filp, 0, LLONG_MAX, 1, false); } diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c index 6b772ed7206f..7ddd8664487b 100644 --- a/fs/f2fs/segment.c +++ b/fs/f2fs/segment.c @@ -185,8 +185,6 @@ bool f2fs_need_SSR(struct f2fs_sb_info *sbi) void f2fs_register_inmem_page(struct inode *inode, struct page *page) { - struct f2fs_sb_info *sbi = F2FS_I_SB(inode); - struct f2fs_inode_info *fi = F2FS_I(inode); struct inmem_pages *new; f2fs_trace_pid(page); @@ -200,15 +198,11 @@ void f2fs_register_inmem_page(struct inode *inode, struct page *page) INIT_LIST_HEAD(&new->list); /* increase reference count with clean state */ - 
mutex_lock(&fi->inmem_lock); get_page(page); - list_add_tail(&new->list, &fi->inmem_pages); - spin_lock(&sbi->inode_lock[ATOMIC_FILE]); - if (list_empty(&fi->inmem_ilist)) - list_add_tail(&fi->inmem_ilist, &sbi->inode_list[ATOMIC_FILE]); - spin_unlock(&sbi->inode_lock[ATOMIC_FILE]); + mutex_lock(&F2FS_I(inode)->inmem_lock); + list_add_tail(&new->list, &F2FS_I(inode)->inmem_pages); inc_page_count(F2FS_I_SB(inode), F2FS_INMEM_PAGES); - mutex_unlock(&fi->inmem_lock); + mutex_unlock(&F2FS_I(inode)->inmem_lock); trace_f2fs_register_inmem_page(page, INMEM); } @@ -330,19 +324,17 @@ void f2fs_drop_inmem_pages(struct inode *inode) mutex_lock(&fi->inmem_lock); __revoke_inmem_pages(inode, &fi->inmem_pages, true, false, true); - - if (list_empty(&fi->inmem_pages)) { - spin_lock(&sbi->inode_lock[ATOMIC_FILE]); - if (!list_empty(&fi->inmem_ilist)) - list_del_init(&fi->inmem_ilist); - spin_unlock(&sbi->inode_lock[ATOMIC_FILE]); - } mutex_unlock(&fi->inmem_lock); } clear_inode_flag(inode, FI_ATOMIC_FILE); fi->i_gc_failures[GC_FAILURE_ATOMIC] = 0; stat_dec_atomic_write(inode); + + spin_lock(&sbi->inode_lock[ATOMIC_FILE]); + if (!list_empty(&fi->inmem_ilist)) + list_del_init(&fi->inmem_ilist); + spin_unlock(&sbi->inode_lock[ATOMIC_FILE]); } void f2fs_drop_inmem_page(struct inode *inode, struct page *page) @@ -471,11 +463,6 @@ int f2fs_commit_inmem_pages(struct inode *inode) mutex_lock(&fi->inmem_lock); err = __f2fs_commit_inmem_pages(inode); - - spin_lock(&sbi->inode_lock[ATOMIC_FILE]); - if (!list_empty(&fi->inmem_ilist)) - list_del_init(&fi->inmem_ilist); - spin_unlock(&sbi->inode_lock[ATOMIC_FILE]); mutex_unlock(&fi->inmem_lock); clear_inode_flag(inode, FI_ATOMIC_COMMIT); @@ -4389,7 +4376,9 @@ static int sanity_check_curseg(struct f2fs_sb_info *sbi) continue; out: f2fs_err(sbi, - "Current segment's next free block offset is inconsistent with bitmap, logtype:%u, segno:%u, type:%u, next_blkoff:%u, blkofs:%u", + "Current segment's next free block offset is " + "inconsistent with bitmap, logtype:%u, " + "segno:%u, type:%u, next_blkoff:%u, blkofs:%u", i, curseg->segno, curseg->alloc_type, curseg->next_blkoff, blkofs); return -EFSCORRUPTED; diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c index ee8e3f06ed17..281ec7da8128 100644 --- a/fs/f2fs/super.c +++ b/fs/f2fs/super.c @@ -2670,12 +2670,13 @@ int f2fs_sanity_check_ckpt(struct f2fs_sb_info *sbi) } } for (i = 0; i < NR_CURSEG_NODE_TYPE; i++) { - for (j = i; j < NR_CURSEG_DATA_TYPE; j++) { + for (j = 0; j < NR_CURSEG_DATA_TYPE; j++) { if (le32_to_cpu(ckpt->cur_node_segno[i]) == le32_to_cpu(ckpt->cur_data_segno[j])) { - f2fs_err(sbi, "Data segment (%u) and Data segment (%u) has the same segno: %u", - i, j, - le32_to_cpu(ckpt->cur_node_segno[i])); + f2fs_err(sbi, + "Node segment (%u) and Data segment (%u)" + " has the same segno: %u", i, j, + le32_to_cpu(ckpt->cur_node_segno[i])); return 1; } } diff --git a/fs/fat/dir.c b/fs/fat/dir.c index 8b2127ffb226..9b77e2ad2b59 100644 --- a/fs/fat/dir.c +++ b/fs/fat/dir.c @@ -1097,8 +1097,11 @@ static int fat_zeroed_cluster(struct inode *dir, sector_t blknr, int nr_used, err = -ENOMEM; goto error; } + /* Avoid race with userspace read via bdev */ + lock_buffer(bhs[n]); memset(bhs[n]->b_data, 0, sb->s_blocksize); set_buffer_uptodate(bhs[n]); + unlock_buffer(bhs[n]); mark_buffer_dirty_inode(bhs[n], dir); n++; @@ -1155,6 +1158,8 @@ int fat_alloc_new_dir(struct inode *dir, struct timespec *ts) fat_time_unix2fat(sbi, ts, &time, &date, &time_cs); de = (struct msdos_dir_entry *)bhs[0]->b_data; + /* Avoid race with userspace read via 
bdev */ + lock_buffer(bhs[0]); /* filling the new directory slots ("." and ".." entries) */ memcpy(de[0].name, MSDOS_DOT, MSDOS_NAME); memcpy(de[1].name, MSDOS_DOTDOT, MSDOS_NAME); @@ -1177,6 +1182,7 @@ int fat_alloc_new_dir(struct inode *dir, struct timespec *ts) de[0].size = de[1].size = 0; memset(de + 2, 0, sb->s_blocksize - 2 * sizeof(*de)); set_buffer_uptodate(bhs[0]); + unlock_buffer(bhs[0]); mark_buffer_dirty_inode(bhs[0], dir); err = fat_zeroed_cluster(dir, blknr, 1, bhs, MAX_BUF_PER_PAGE); @@ -1234,11 +1240,14 @@ static int fat_add_new_entries(struct inode *dir, void *slots, int nr_slots, /* fill the directory entry */ copy = min(size, sb->s_blocksize); + /* Avoid race with userspace read via bdev */ + lock_buffer(bhs[n]); memcpy(bhs[n]->b_data, slots, copy); - slots += copy; - size -= copy; set_buffer_uptodate(bhs[n]); + unlock_buffer(bhs[n]); mark_buffer_dirty_inode(bhs[n], dir); + slots += copy; + size -= copy; if (!size) break; n++; diff --git a/fs/fat/fatent.c b/fs/fat/fatent.c index 85358940fb75..94fdb8efd93e 100644 --- a/fs/fat/fatent.c +++ b/fs/fat/fatent.c @@ -390,8 +390,11 @@ static int fat_mirror_bhs(struct super_block *sb, struct buffer_head **bhs, err = -ENOMEM; goto error; } + /* Avoid race with userspace read via bdev */ + lock_buffer(c_bh); memcpy(c_bh->b_data, bhs[n]->b_data, sb->s_blocksize); set_buffer_uptodate(c_bh); + unlock_buffer(c_bh); mark_buffer_dirty_inode(c_bh, sbi->fat_inode); if (sb->s_flags & MS_SYNCHRONOUS) err = sync_dirty_buffer(c_bh); diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c index de1138e4b049..67fbe6837fdb 100644 --- a/fs/fs-writeback.c +++ b/fs/fs-writeback.c @@ -582,10 +582,13 @@ void wbc_attach_and_unlock_inode(struct writeback_control *wbc, spin_unlock(&inode->i_lock); /* - * A dying wb indicates that the memcg-blkcg mapping has changed - * and a new wb is already serving the memcg. Switch immediately. + * A dying wb indicates that either the blkcg associated with the + * memcg changed or the associated memcg is dying. In the first + * case, a replacement wb should already be available and we should + * refresh the wb immediately. In the second case, trying to + * refresh will keep failing. 
*/ - if (unlikely(wb_dying(wbc->wb))) + if (unlikely(wb_dying(wbc->wb) && !css_is_dying(wbc->wb->memcg_css))) inode_switch_wbs(inode, wbc->wb_id); } diff --git a/fs/fuse/cuse.c b/fs/fuse/cuse.c index c5b6b7165489..d9aba9700726 100644 --- a/fs/fuse/cuse.c +++ b/fs/fuse/cuse.c @@ -513,6 +513,7 @@ static int cuse_channel_open(struct inode *inode, struct file *file) rc = cuse_send_init(cc); if (rc) { fuse_dev_free(fud); + fuse_conn_put(&cc->fc); return rc; } file->private_data = fud; diff --git a/fs/fuse/dir.c b/fs/fuse/dir.c index d3c77413dd56..4a9f20a861cf 100644 --- a/fs/fuse/dir.c +++ b/fs/fuse/dir.c @@ -1676,6 +1676,19 @@ int fuse_do_setattr(struct inode *inode, struct iattr *attr, if (attr->ia_valid & ATTR_SIZE) is_truncate = true; + /* Flush dirty data/metadata before non-truncate SETATTR */ + if (is_wb && S_ISREG(inode->i_mode) && + attr->ia_valid & + (ATTR_MODE | ATTR_UID | ATTR_GID | ATTR_MTIME_SET | + ATTR_TIMES_SET)) { + err = write_inode_now(inode, true); + if (err) + return err; + + fuse_set_nowrite(inode); + fuse_release_nowrite(inode); + } + if (is_truncate) { fuse_set_nowrite(inode); set_bit(FUSE_I_SIZE_UNSTABLE, &fi->state); diff --git a/fs/fuse/file.c b/fs/fuse/file.c index 56c9caa7b16a..52f1983868a0 100644 --- a/fs/fuse/file.c +++ b/fs/fuse/file.c @@ -217,7 +217,7 @@ int fuse_open_common(struct inode *inode, struct file *file, bool isdir) { struct fuse_conn *fc = get_fuse_conn(inode); int err; - bool lock_inode = (file->f_flags & O_TRUNC) && + bool is_wb_truncate = (file->f_flags & O_TRUNC) && fc->atomic_o_trunc && fc->writeback_cache; @@ -225,16 +225,20 @@ int fuse_open_common(struct inode *inode, struct file *file, bool isdir) if (err) return err; - if (lock_inode) + if (is_wb_truncate) { mutex_lock(&inode->i_mutex); + fuse_set_nowrite(inode); + } err = fuse_do_open(fc, get_node_id(inode), file, isdir); if (!err) fuse_finish_open(inode, file); - if (lock_inode) + if (is_wb_truncate) { + fuse_release_nowrite(inode); mutex_unlock(&inode->i_mutex); + } return err; } @@ -1775,6 +1779,7 @@ static int fuse_writepage(struct page *page, struct writeback_control *wbc) WARN_ON(wbc->sync_mode == WB_SYNC_ALL); redirty_page_for_writepage(wbc, page); + unlock_page(page); return 0; } diff --git a/fs/nfs/delegation.c b/fs/nfs/delegation.c index 7af5eeabc80e..5dac3382405c 100644 --- a/fs/nfs/delegation.c +++ b/fs/nfs/delegation.c @@ -52,6 +52,16 @@ nfs4_is_valid_delegation(const struct nfs_delegation *delegation, return false; } +struct nfs_delegation *nfs4_get_valid_delegation(const struct inode *inode) +{ + struct nfs_delegation *delegation; + + delegation = rcu_dereference(NFS_I(inode)->delegation); + if (nfs4_is_valid_delegation(delegation, 0)) + return delegation; + return NULL; +} + static int nfs4_do_check_delegation(struct inode *inode, fmode_t flags, bool mark) { diff --git a/fs/nfs/delegation.h b/fs/nfs/delegation.h index 333063e032f0..26a8af7bdca3 100644 --- a/fs/nfs/delegation.h +++ b/fs/nfs/delegation.h @@ -58,6 +58,7 @@ int nfs4_open_delegation_recall(struct nfs_open_context *ctx, struct nfs4_state int nfs4_lock_delegation_recall(struct file_lock *fl, struct nfs4_state *state, const nfs4_stateid *stateid); bool nfs4_copy_delegation_stateid(nfs4_stateid *dst, struct inode *inode, fmode_t flags); +struct nfs_delegation *nfs4_get_valid_delegation(const struct inode *inode); void nfs_mark_delegation_referenced(struct nfs_delegation *delegation); int nfs4_have_delegation(struct inode *inode, fmode_t flags); int nfs4_check_delegation(struct inode *inode, fmode_t flags); diff --git 
a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c index d1816ee0c11b..08207001d475 100644 --- a/fs/nfs/nfs4proc.c +++ b/fs/nfs/nfs4proc.c @@ -1243,8 +1243,6 @@ static int can_open_delegated(struct nfs_delegation *delegation, fmode_t fmode, return 0; if ((delegation->type & fmode) != fmode) return 0; - if (test_bit(NFS_DELEGATION_RETURNING, &delegation->flags)) - return 0; switch (claim) { case NFS4_OPEN_CLAIM_NULL: case NFS4_OPEN_CLAIM_FH: @@ -1473,7 +1471,6 @@ static void nfs4_return_incompatible_delegation(struct inode *inode, fmode_t fmo static struct nfs4_state *nfs4_try_open_cached(struct nfs4_opendata *opendata) { struct nfs4_state *state = opendata->state; - struct nfs_inode *nfsi = NFS_I(state->inode); struct nfs_delegation *delegation; int open_mode = opendata->o_arg.open_flags; fmode_t fmode = opendata->o_arg.fmode; @@ -1490,7 +1487,7 @@ static struct nfs4_state *nfs4_try_open_cached(struct nfs4_opendata *opendata) } spin_unlock(&state->owner->so_lock); rcu_read_lock(); - delegation = rcu_dereference(nfsi->delegation); + delegation = nfs4_get_valid_delegation(state->inode); if (!can_open_delegated(delegation, fmode, claim)) { rcu_read_unlock(); break; @@ -1981,7 +1978,7 @@ static void nfs4_open_prepare(struct rpc_task *task, void *calldata) if (can_open_cached(data->state, data->o_arg.fmode, data->o_arg.open_flags)) goto out_no_action; rcu_read_lock(); - delegation = rcu_dereference(NFS_I(data->state->inode)->delegation); + delegation = nfs4_get_valid_delegation(data->state->inode); if (can_open_delegated(delegation, data->o_arg.fmode, claim)) goto unlock_no_action; rcu_read_unlock(); @@ -5255,6 +5252,7 @@ int nfs4_proc_setclientid(struct nfs_client *clp, u32 program, } status = task->tk_status; if (setclientid.sc_cred) { + kfree(clp->cl_acceptor); clp->cl_acceptor = rpcauth_stringify_acceptor(setclientid.sc_cred); put_rpccred(setclientid.sc_cred); } diff --git a/fs/nfs/nfs4xdr.c b/fs/nfs/nfs4xdr.c index 1cb50bb898b0..15cd9db6d616 100644 --- a/fs/nfs/nfs4xdr.c +++ b/fs/nfs/nfs4xdr.c @@ -1123,7 +1123,7 @@ static void encode_attrs(struct xdr_stream *xdr, const struct iattr *iap, } else *p++ = cpu_to_be32(NFS4_SET_TO_SERVER_TIME); } - if (bmval[2] & FATTR4_WORD2_SECURITY_LABEL) { + if (label && (bmval[2] & FATTR4_WORD2_SECURITY_LABEL)) { *p++ = cpu_to_be32(label->lfs); *p++ = cpu_to_be32(label->pi); *p++ = cpu_to_be32(label->len); diff --git a/fs/ocfs2/dlm/dlmunlock.c b/fs/ocfs2/dlm/dlmunlock.c index 2e3c9dbab68c..d137d4692b91 100644 --- a/fs/ocfs2/dlm/dlmunlock.c +++ b/fs/ocfs2/dlm/dlmunlock.c @@ -105,7 +105,8 @@ static enum dlm_status dlmunlock_common(struct dlm_ctxt *dlm, enum dlm_status status; int actions = 0; int in_use; - u8 owner; + u8 owner; + int recovery_wait = 0; mlog(0, "master_node = %d, valblk = %d\n", master_node, flags & LKM_VALBLK); @@ -208,9 +209,12 @@ static enum dlm_status dlmunlock_common(struct dlm_ctxt *dlm, } if (flags & LKM_CANCEL) lock->cancel_pending = 0; - else - lock->unlock_pending = 0; - + else { + if (!lock->unlock_pending) + recovery_wait = 1; + else + lock->unlock_pending = 0; + } } /* get an extra ref on lock. if we are just switching @@ -244,6 +248,17 @@ leave: spin_unlock(&res->spinlock); wake_up(&res->wq); + if (recovery_wait) { + spin_lock(&res->spinlock); + /* Unlock request will directly succeed after owner dies, + * and the lock is already removed from grant list. We have to + * wait for RECOVERING done or we miss the chance to purge it + * since the removement is much faster than RECOVERING proc. 
+ */ + __dlm_wait_on_lockres_flags(res, DLM_LOCK_RES_RECOVERING); + spin_unlock(&res->spinlock); + } + /* let the caller's final dlm_lock_put handle the actual kfree */ if (actions & DLM_UNLOCK_FREE_LOCK) { /* this should always be coupled with list removal */ diff --git a/fs/ocfs2/ioctl.c b/fs/ocfs2/ioctl.c index 3cb097ccce60..79232296b7d2 100644 --- a/fs/ocfs2/ioctl.c +++ b/fs/ocfs2/ioctl.c @@ -289,7 +289,7 @@ static int ocfs2_info_scan_inode_alloc(struct ocfs2_super *osb, if (inode_alloc) mutex_lock(&inode_alloc->i_mutex); - if (o2info_coherent(&fi->ifi_req)) { + if (inode_alloc && o2info_coherent(&fi->ifi_req)) { status = ocfs2_inode_lock(inode_alloc, &bh, 0); if (status < 0) { mlog_errno(status); diff --git a/fs/ocfs2/xattr.c b/fs/ocfs2/xattr.c index 06faa608e562..dfa6d45dc4dc 100644 --- a/fs/ocfs2/xattr.c +++ b/fs/ocfs2/xattr.c @@ -1475,18 +1475,6 @@ static int ocfs2_xa_check_space(struct ocfs2_xa_loc *loc, return loc->xl_ops->xlo_check_space(loc, xi); } -static void ocfs2_xa_add_entry(struct ocfs2_xa_loc *loc, u32 name_hash) -{ - loc->xl_ops->xlo_add_entry(loc, name_hash); - loc->xl_entry->xe_name_hash = cpu_to_le32(name_hash); - /* - * We can't leave the new entry's xe_name_offset at zero or - * add_namevalue() will go nuts. We set it to the size of our - * storage so that it can never be less than any other entry. - */ - loc->xl_entry->xe_name_offset = cpu_to_le16(loc->xl_size); -} - static void ocfs2_xa_add_namevalue(struct ocfs2_xa_loc *loc, struct ocfs2_xattr_info *xi) { @@ -2118,29 +2106,31 @@ static int ocfs2_xa_prepare_entry(struct ocfs2_xa_loc *loc, if (rc) goto out; - if (loc->xl_entry) { - if (ocfs2_xa_can_reuse_entry(loc, xi)) { - orig_value_size = loc->xl_entry->xe_value_size; - rc = ocfs2_xa_reuse_entry(loc, xi, ctxt); - if (rc) - goto out; - goto alloc_value; - } + if (!loc->xl_entry) { + rc = -EINVAL; + goto out; + } - if (!ocfs2_xattr_is_local(loc->xl_entry)) { - orig_clusters = ocfs2_xa_value_clusters(loc); - rc = ocfs2_xa_value_truncate(loc, 0, ctxt); - if (rc) { - mlog_errno(rc); - ocfs2_xa_cleanup_value_truncate(loc, - "overwriting", - orig_clusters); - goto out; - } + if (ocfs2_xa_can_reuse_entry(loc, xi)) { + orig_value_size = loc->xl_entry->xe_value_size; + rc = ocfs2_xa_reuse_entry(loc, xi, ctxt); + if (rc) + goto out; + goto alloc_value; + } + + if (!ocfs2_xattr_is_local(loc->xl_entry)) { + orig_clusters = ocfs2_xa_value_clusters(loc); + rc = ocfs2_xa_value_truncate(loc, 0, ctxt); + if (rc) { + mlog_errno(rc); + ocfs2_xa_cleanup_value_truncate(loc, + "overwriting", + orig_clusters); + goto out; } - ocfs2_xa_wipe_namevalue(loc); - } else - ocfs2_xa_add_entry(loc, name_hash); + } + ocfs2_xa_wipe_namevalue(loc); /* * If we get here, we have a blank entry. Fill it. We grow our diff --git a/fs/overlayfs/inode.c b/fs/overlayfs/inode.c index 85507909823d..230af81436a4 100644 --- a/fs/overlayfs/inode.c +++ b/fs/overlayfs/inode.c @@ -292,7 +292,8 @@ static bool ovl_can_list(const char *s) return true; /* Never list trusted.overlay, list other trusted for superuser only */ - return !ovl_is_private_xattr(s) && capable(CAP_SYS_ADMIN); + return !ovl_is_private_xattr(s) && + ns_capable_noaudit(&init_user_ns, CAP_SYS_ADMIN); } ssize_t ovl_listxattr(struct dentry *dentry, char *list, size_t size) diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c index dcb70969ff1c..44d65939ed18 100644 --- a/fs/xfs/xfs_buf.c +++ b/fs/xfs/xfs_buf.c @@ -1584,7 +1584,7 @@ xfs_buftarg_isolate( * zero. 
If the value is already zero, we need to reclaim the * buffer, otherwise it gets another trip through the LRU. */ - if (!atomic_add_unless(&bp->b_lru_ref, -1, 0)) { + if (atomic_add_unless(&bp->b_lru_ref, -1, 0)) { spin_unlock(&bp->b_lock); return LRU_ROTATE; } diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c index ef64a1e1a66a..ff3f5812c0fd 100644 --- a/fs/xfs/xfs_super.c +++ b/fs/xfs/xfs_super.c @@ -1572,6 +1572,7 @@ xfs_fs_fill_super( out_close_devices: xfs_close_devices(mp); out_free_fsname: + sb->s_fs_info = NULL; xfs_free_fsname(mp); kfree(mp); out: @@ -1589,6 +1590,10 @@ xfs_fs_put_super( { struct xfs_mount *mp = XFS_M(sb); + /* if ->fill_super failed, we have no mount to tear down */ + if (!sb->s_fs_info) + return; + xfs_notice(mp, "Unmounting Filesystem"); xfs_filestream_unmount(mp); xfs_unmountfs(mp); @@ -1598,6 +1603,8 @@ xfs_fs_put_super( xfs_destroy_percpu_counters(mp); xfs_destroy_mount_workqueues(mp); xfs_close_devices(mp); + + sb->s_fs_info = NULL; xfs_free_fsname(mp); kfree(mp); } @@ -1617,6 +1624,9 @@ xfs_fs_nr_cached_objects( struct super_block *sb, struct shrink_control *sc) { + /* Paranoia: catch incorrect calls during mount setup or teardown */ + if (WARN_ON_ONCE(!sb->s_fs_info)) + return 0; return xfs_reclaim_inodes_count(XFS_M(sb)); } diff --git a/include/linux/arm-smccc.h b/include/linux/arm-smccc.h index 0989e6b5c64c..7a66e5699bae 100644 --- a/include/linux/arm-smccc.h +++ b/include/linux/arm-smccc.h @@ -16,6 +16,7 @@ #include <linux/linkage.h> #include <linux/types.h> +#include <uapi/linux/const.h> /* * This file provides common defines for ARM SMC Calling Convention as @@ -23,8 +24,8 @@ * http://infocenter.arm.com/help/topic/com.arm.doc.den0028a/index.html */ -#define ARM_SMCCC_STD_CALL 0 -#define ARM_SMCCC_FAST_CALL 1 +#define ARM_SMCCC_STD_CALL _AC(0,U) +#define ARM_SMCCC_FAST_CALL _AC(1,U) #define ARM_SMCCC_TYPE_SHIFT 31 #define ARM_SMCCC_SMC_32 0 @@ -60,6 +61,29 @@ #define ARM_SMCCC_OWNER_TRUSTED_OS 50 #define ARM_SMCCC_OWNER_TRUSTED_OS_END 63 +#define ARM_SMCCC_VERSION_1_0 0x10000 +#define ARM_SMCCC_VERSION_1_1 0x10001 + +#define ARM_SMCCC_VERSION_FUNC_ID \ + ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \ + ARM_SMCCC_SMC_32, \ + 0, 0) + +#define ARM_SMCCC_ARCH_FEATURES_FUNC_ID \ + ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \ + ARM_SMCCC_SMC_32, \ + 0, 1) + +#define ARM_SMCCC_ARCH_WORKAROUND_1 \ + ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \ + ARM_SMCCC_SMC_32, \ + 0, 0x8000) + +#ifndef __ASSEMBLY__ + +#include <linux/linkage.h> +#include <linux/types.h> + /** * struct arm_smccc_res - Result from SMC/HVC call * @a0-a3 result values from registers 0 to 3 @@ -122,4 +146,161 @@ static inline unsigned long __invoke_psci_fn_smc(unsigned long function_id, return res.a0; } +/* SMCCC v1.1 implementation madness follows */ +#ifdef CONFIG_ARM64 + +#define SMCCC_SMC_INST "smc #0" +#define SMCCC_HVC_INST "hvc #0" + +#elif defined(CONFIG_ARM) +#include <asm/opcodes-sec.h> +#include <asm/opcodes-virt.h> + +#define SMCCC_SMC_INST __SMC(0) +#define SMCCC_HVC_INST __HVC(0) + +#endif + +#define ___count_args(_0, _1, _2, _3, _4, _5, _6, _7, _8, x, ...) x + +#define __count_args(...) 
\ + ___count_args(__VA_ARGS__, 7, 6, 5, 4, 3, 2, 1, 0) + +#define __constraint_write_0 \ + "+r" (r0), "=&r" (r1), "=&r" (r2), "=&r" (r3) +#define __constraint_write_1 \ + "+r" (r0), "+r" (r1), "=&r" (r2), "=&r" (r3) +#define __constraint_write_2 \ + "+r" (r0), "+r" (r1), "+r" (r2), "=&r" (r3) +#define __constraint_write_3 \ + "+r" (r0), "+r" (r1), "+r" (r2), "+r" (r3) +#define __constraint_write_4 __constraint_write_3 +#define __constraint_write_5 __constraint_write_4 +#define __constraint_write_6 __constraint_write_5 +#define __constraint_write_7 __constraint_write_6 + +#define __constraint_read_0 +#define __constraint_read_1 +#define __constraint_read_2 +#define __constraint_read_3 +#define __constraint_read_4 "r" (r4) +#define __constraint_read_5 __constraint_read_4, "r" (r5) +#define __constraint_read_6 __constraint_read_5, "r" (r6) +#define __constraint_read_7 __constraint_read_6, "r" (r7) + +#define __declare_arg_0(a0, res) \ + struct arm_smccc_res *___res = res; \ + register unsigned long r0 asm("r0") = (u32)a0; \ + register unsigned long r1 asm("r1"); \ + register unsigned long r2 asm("r2"); \ + register unsigned long r3 asm("r3") + +#define __declare_arg_1(a0, a1, res) \ + typeof(a1) __a1 = a1; \ + struct arm_smccc_res *___res = res; \ + register unsigned long r0 asm("r0") = (u32)a0; \ + register unsigned long r1 asm("r1") = __a1; \ + register unsigned long r2 asm("r2"); \ + register unsigned long r3 asm("r3") + +#define __declare_arg_2(a0, a1, a2, res) \ + typeof(a1) __a1 = a1; \ + typeof(a2) __a2 = a2; \ + struct arm_smccc_res *___res = res; \ + register unsigned long r0 asm("r0") = (u32)a0; \ + register unsigned long r1 asm("r1") = __a1; \ + register unsigned long r2 asm("r2") = __a2; \ + register unsigned long r3 asm("r3") + +#define __declare_arg_3(a0, a1, a2, a3, res) \ + typeof(a1) __a1 = a1; \ + typeof(a2) __a2 = a2; \ + typeof(a3) __a3 = a3; \ + struct arm_smccc_res *___res = res; \ + register unsigned long r0 asm("r0") = (u32)a0; \ + register unsigned long r1 asm("r1") = __a1; \ + register unsigned long r2 asm("r2") = __a2; \ + register unsigned long r3 asm("r3") = __a3 + +#define __declare_arg_4(a0, a1, a2, a3, a4, res) \ + typeof(a4) __a4 = a4; \ + __declare_arg_3(a0, a1, a2, a3, res); \ + register unsigned long r4 asm("r4") = __a4 + +#define __declare_arg_5(a0, a1, a2, a3, a4, a5, res) \ + typeof(a5) __a5 = a5; \ + __declare_arg_4(a0, a1, a2, a3, a4, res); \ + register unsigned long r5 asm("r5") = __a5 + +#define __declare_arg_6(a0, a1, a2, a3, a4, a5, a6, res) \ + typeof(a6) __a6 = a6; \ + __declare_arg_5(a0, a1, a2, a3, a4, a5, res); \ + register unsigned long r6 asm("r6") = __a6 + +#define __declare_arg_7(a0, a1, a2, a3, a4, a5, a6, a7, res) \ + typeof(a7) __a7 = a7; \ + __declare_arg_6(a0, a1, a2, a3, a4, a5, a6, res); \ + register unsigned long r7 asm("r7") = __a7 + +#define ___declare_args(count, ...) __declare_arg_ ## count(__VA_ARGS__) +#define __declare_args(count, ...) ___declare_args(count, __VA_ARGS__) + +#define ___constraints(count) \ + : __constraint_write_ ## count \ + : __constraint_read_ ## count \ + : "memory" +#define __constraints(count) ___constraints(count) + +/* + * We have an output list that is not necessarily used, and GCC feels + * entitled to optimise the whole sequence away. "volatile" is what + * makes it stick. + */ +#define __arm_smccc_1_1(inst, ...) 
\ + do { \ + __declare_args(__count_args(__VA_ARGS__), __VA_ARGS__); \ + asm volatile(inst "\n" \ + __constraints(__count_args(__VA_ARGS__))); \ + if (___res) \ + *___res = (typeof(*___res)){r0, r1, r2, r3}; \ + } while (0) + +/* + * arm_smccc_1_1_smc() - make an SMCCC v1.1 compliant SMC call + * + * This is a variadic macro taking one to eight source arguments, and + * an optional return structure. + * + * @a0-a7: arguments passed in registers 0 to 7 + * @res: result values from registers 0 to 3 + * + * This macro is used to make SMC calls following SMC Calling Convention v1.1. + * The content of the supplied param are copied to registers 0 to 7 prior + * to the SMC instruction. The return values are updated with the content + * from register 0 to 3 on return from the SMC instruction if not NULL. + */ +#define arm_smccc_1_1_smc(...) __arm_smccc_1_1(SMCCC_SMC_INST, __VA_ARGS__) + +/* + * arm_smccc_1_1_hvc() - make an SMCCC v1.1 compliant HVC call + * + * This is a variadic macro taking one to eight source arguments, and + * an optional return structure. + * + * @a0-a7: arguments passed in registers 0 to 7 + * @res: result values from registers 0 to 3 + * + * This macro is used to make HVC calls following SMC Calling Convention v1.1. + * The content of the supplied param are copied to registers 0 to 7 prior + * to the HVC instruction. The return values are updated with the content + * from register 0 to 3 on return from the HVC instruction if not NULL. + */ +#define arm_smccc_1_1_hvc(...) __arm_smccc_1_1(SMCCC_HVC_INST, __VA_ARGS__) + +/* Return codes defined in ARM DEN 0070A */ +#define SMCCC_RET_SUCCESS 0 +#define SMCCC_RET_NOT_SUPPORTED -1 +#define SMCCC_RET_NOT_REQUIRED -2 + +#endif /*__ASSEMBLY__*/ #endif /*__LINUX_ARM_SMCCC_H*/ diff --git a/include/linux/bug.h b/include/linux/bug.h index 218ac5875124..bcfd70c217a3 100644 --- a/include/linux/bug.h +++ b/include/linux/bug.h @@ -102,6 +102,11 @@ int is_valid_bugaddr(unsigned long addr); #else /* !CONFIG_GENERIC_BUG */ +static inline void *find_bug(unsigned long bugaddr) +{ + return NULL; +} + static inline enum bug_trap_type report_bug(unsigned long bug_addr, struct pt_regs *regs) { diff --git a/include/linux/cpu.h b/include/linux/cpu.h index 960f2750bc42..1945aa03a52a 100644 --- a/include/linux/cpu.h +++ b/include/linux/cpu.h @@ -65,6 +65,11 @@ extern ssize_t cpu_show_l1tf(struct device *dev, struct device_attribute *attr, char *buf); extern ssize_t cpu_show_mds(struct device *dev, struct device_attribute *attr, char *buf); +extern ssize_t cpu_show_tsx_async_abort(struct device *dev, + struct device_attribute *attr, + char *buf); +extern ssize_t cpu_show_itlb_multihit(struct device *dev, + struct device_attribute *attr, char *buf); extern __printf(4, 5) struct device *cpu_device_create(struct device *parent, void *drvdata, diff --git a/include/linux/gfp.h b/include/linux/gfp.h index 9796b4426710..a5f251d76018 100644 --- a/include/linux/gfp.h +++ b/include/linux/gfp.h @@ -281,6 +281,29 @@ static inline bool gfpflags_allow_blocking(const gfp_t gfp_flags) return (bool __force)(gfp_flags & __GFP_DIRECT_RECLAIM); } +/** + * gfpflags_normal_context - is gfp_flags a normal sleepable context? + * @gfp_flags: gfp_flags to test + * + * Test whether @gfp_flags indicates that the allocation is from the + * %current context and allowed to sleep. + * + * An allocation being allowed to block doesn't mean it owns the %current + * context. 
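For illustration only (this snippet is not part of the patch above): the arm_smccc_1_1_smc()/arm_smccc_1_1_hvc() macros introduced in the arm-smccc hunk are normally used to probe a firmware feature before invoking it. A minimal kernel-context sketch, assuming SMC is the conduit and the firmware implements SMCCC v1.1; the function name is illustrative:

#include <linux/arm-smccc.h>

static void probe_and_invoke_workaround_1(void)
{
	struct arm_smccc_res res;

	/* Ask the firmware whether ARCH_WORKAROUND_1 is implemented. */
	arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
			  ARM_SMCCC_ARCH_WORKAROUND_1, &res);
	if ((int)res.a0 < 0)
		return;		/* SMCCC_RET_NOT_SUPPORTED or an error */

	/* Supported: fire the workaround; no result structure is needed. */
	arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL);
}

On systems that use HVC as the PSCI conduit the same sequence would go through arm_smccc_1_1_hvc(); callers typically consult psci_ops.conduit (added in the psci.h hunk further down) to pick the right variant.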
When direct reclaim path tries to allocate memory, the + * allocation context is nested inside whatever %current was doing at the + * time of the original allocation. The nested allocation may be allowed + * to block but modifying anything %current owns can corrupt the outer + * context's expectations. + * + * %true result from this function indicates that the allocation context + * can sleep and use anything that's associated with %current. + */ +static inline bool gfpflags_normal_context(const gfp_t gfp_flags) +{ + return (gfp_flags & (__GFP_DIRECT_RECLAIM | __GFP_MEMALLOC)) == + __GFP_DIRECT_RECLAIM; +} + #ifdef CONFIG_HIGHMEM #define OPT_ZONE_HIGHMEM ZONE_HIGHMEM #else diff --git a/include/linux/hid.h b/include/linux/hid.h index e558919bd86a..2a6dc0ab9b88 100644 --- a/include/linux/hid.h +++ b/include/linux/hid.h @@ -343,6 +343,7 @@ struct hid_item { #define HID_GROUP_RMI 0x0100 #define HID_GROUP_WACOM 0x0101 #define HID_GROUP_LOGITECH_DJ_DEVICE 0x0102 +#define HID_GROUP_STEAM 0x0103 /* * HID protocol status diff --git a/include/linux/ieee80211.h b/include/linux/ieee80211.h index 6cfef2760e09..c2f46cffaab4 100644 --- a/include/linux/ieee80211.h +++ b/include/linux/ieee80211.h @@ -2571,4 +2571,57 @@ static inline bool ieee80211_action_contains_tpc(struct sk_buff *skb) return true; } +struct element { + u8 id; + u8 datalen; + u8 data[]; +} __packed; + +/* element iteration helpers */ +#define for_each_element(_elem, _data, _datalen) \ + for (_elem = (const struct element *)(_data); \ + (const u8 *)(_data) + (_datalen) - (const u8 *)_elem >= \ + (int)sizeof(*_elem) && \ + (const u8 *)(_data) + (_datalen) - (const u8 *)_elem >= \ + (int)sizeof(*_elem) + _elem->datalen; \ + _elem = (const struct element *)(_elem->data + _elem->datalen)) + +#define for_each_element_id(element, _id, data, datalen) \ + for_each_element(element, data, datalen) \ + if (element->id == (_id)) + +#define for_each_element_extid(element, extid, data, datalen) \ + for_each_element(element, data, datalen) \ + if (element->id == WLAN_EID_EXTENSION && \ + element->datalen > 0 && \ + element->data[0] == (extid)) + +#define for_each_subelement(sub, element) \ + for_each_element(sub, (element)->data, (element)->datalen) + +#define for_each_subelement_id(sub, id, element) \ + for_each_element_id(sub, id, (element)->data, (element)->datalen) + +#define for_each_subelement_extid(sub, extid, element) \ + for_each_element_extid(sub, extid, (element)->data, (element)->datalen) + +/** + * for_each_element_completed - determine if element parsing consumed all data + * @element: element pointer after for_each_element() or friends + * @data: same data pointer as passed to for_each_element() or friends + * @datalen: same data length as passed to for_each_element() or friends + * + * This function returns %true if all the data was parsed or considered + * while walking the elements. Only use this if your for_each_element() + * loop cannot be broken out of, otherwise it always returns %false. + * + * If some data was malformed, this returns %false since the last parsed + * element will not fill the whole remaining data. 
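For illustration only (not part of the patch): the element iteration helpers added to ieee80211.h above are easiest to see in a stand-alone walk over a raw information-element buffer. The sketch below adapts the structure and the for_each_element() logic for userspace (uint8_t in place of u8); the sample bytes are made up:

#include <stdint.h>
#include <stdio.h>

struct element {
	uint8_t id;
	uint8_t datalen;
	uint8_t data[];
} __attribute__((packed));

/* Same loop condition as the kernel macro: stop as soon as the remaining
 * buffer cannot hold the header or the length the element claims. */
#define for_each_element(_elem, _data, _datalen) \
	for (_elem = (const struct element *)(_data); \
	     (const uint8_t *)(_data) + (_datalen) - (const uint8_t *)_elem >= (int)sizeof(*_elem) && \
	     (const uint8_t *)(_data) + (_datalen) - (const uint8_t *)_elem >= (int)sizeof(*_elem) + _elem->datalen; \
	     _elem = (const struct element *)(_elem->data + _elem->datalen))

int main(void)
{
	/* Two well-formed IEs (SSID "test", one supported rate) followed by a
	 * truncated element that claims 5 data bytes but provides only 1. */
	static const uint8_t ies[] = {
		0x00, 0x04, 't', 'e', 's', 't',
		0x01, 0x01, 0x82,
		0x03, 0x05, 0x01
	};
	const struct element *elem;

	for_each_element(elem, ies, sizeof(ies))
		printf("element id %d, %d data bytes\n", elem->id, elem->datalen);

	/* Same test as for_each_element_completed(): did we consume it all? */
	if ((const uint8_t *)elem != ies + sizeof(ies))
		printf("malformed or truncated trailing data\n");
	return 0;
}

The final pointer comparison is exactly what for_each_element_completed() checks; it fails here because the last element claims more data than the buffer holds.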
+ */ +static inline bool for_each_element_completed(const struct element *element, + const void *data, size_t datalen) +{ + return (const u8 *)element == (const u8 *)data + datalen; +} + #endif /* LINUX_IEEE80211_H */ diff --git a/include/linux/psci.h b/include/linux/psci.h index 66499dd612f5..4a24c1d0ff6d 100644 --- a/include/linux/psci.h +++ b/include/linux/psci.h @@ -27,6 +27,17 @@ bool psci_power_state_is_valid(u32 state); int psci_cpu_init_idle(unsigned int cpu); int psci_cpu_suspend_enter(unsigned long index); +enum psci_conduit { + PSCI_CONDUIT_NONE, + PSCI_CONDUIT_SMC, + PSCI_CONDUIT_HVC, +}; + +enum smccc_version { + SMCCC_VERSION_1_0, + SMCCC_VERSION_1_1, +}; + struct psci_operations { u32 (*get_version)(void); int (*cpu_suspend)(u32 state, unsigned long entry_point); @@ -36,6 +47,8 @@ struct psci_operations { int (*affinity_info)(unsigned long target_affinity, unsigned long lowest_affinity_level); int (*migrate_info_type)(void); + enum psci_conduit conduit; + enum smccc_version smccc_version; }; extern struct psci_operations psci_ops; diff --git a/include/linux/quotaops.h b/include/linux/quotaops.h index 7a57c28eb5e7..1f350238445c 100644 --- a/include/linux/quotaops.h +++ b/include/linux/quotaops.h @@ -21,7 +21,7 @@ static inline struct quota_info *sb_dqopt(struct super_block *sb) /* i_mutex must being held */ static inline bool is_quota_modification(struct inode *inode, struct iattr *ia) { - return (ia->ia_valid & ATTR_SIZE && ia->ia_size != inode->i_size) || + return (ia->ia_valid & ATTR_SIZE) || (ia->ia_valid & ATTR_UID && !uid_eq(ia->ia_uid, inode->i_uid)) || (ia->ia_valid & ATTR_GID && !gid_eq(ia->ia_gid, inode->i_gid)); } diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h index a2f12d377d23..735ff1525f48 100644 --- a/include/linux/skbuff.h +++ b/include/linux/skbuff.h @@ -1073,7 +1073,8 @@ static inline __u32 skb_get_hash_flowi4(struct sk_buff *skb, const struct flowi4 return skb->hash; } -__u32 skb_get_hash_perturb(const struct sk_buff *skb, u32 perturb); +__u32 skb_get_hash_perturb(const struct sk_buff *skb, + const siphash_key_t *perturb); static inline __u32 skb_get_hash_raw(const struct sk_buff *skb) { diff --git a/include/linux/usb/gadget.h b/include/linux/usb/gadget.h index b661b6e3496b..8874ed16c836 100644 --- a/include/linux/usb/gadget.h +++ b/include/linux/usb/gadget.h @@ -361,6 +361,16 @@ static inline int usb_ep_enable(struct usb_ep *ep) if (ep->enabled) return 0; + /* UDC drivers can't handle endpoints with maxpacket size 0 */ + if (usb_endpoint_maxp(ep->desc) == 0) { + /* + * We should log an error message here, but we can't call + * dev_err() because there's no way to find the gadget + * given only ep. 
+ */ + return -EINVAL; + } + ret = ep->ops->enable(ep, ep->desc); if (ret) return ret; diff --git a/include/net/flow_dissector.h b/include/net/flow_dissector.h index 8c8548cf5888..62a462413081 100644 --- a/include/net/flow_dissector.h +++ b/include/net/flow_dissector.h @@ -3,6 +3,7 @@ #include <linux/types.h> #include <linux/in6.h> +#include <linux/siphash.h> #include <uapi/linux/if_ether.h> /** @@ -146,7 +147,7 @@ struct flow_dissector { struct flow_keys { struct flow_dissector_key_control control; #define FLOW_KEYS_HASH_START_FIELD basic - struct flow_dissector_key_basic basic; + struct flow_dissector_key_basic basic __aligned(SIPHASH_ALIGNMENT); struct flow_dissector_key_tags tags; struct flow_dissector_key_keyid keyid; struct flow_dissector_key_ports ports; diff --git a/include/net/ip_vs.h b/include/net/ip_vs.h index a6cc576fd467..b0156f8a9ab7 100644 --- a/include/net/ip_vs.h +++ b/include/net/ip_vs.h @@ -880,6 +880,7 @@ struct netns_ipvs { struct delayed_work defense_work; /* Work handler */ int drop_rate; int drop_counter; + int old_secure_tcp; atomic_t dropentry; /* locks in ctl.c */ spinlock_t dropentry_lock; /* drop entry handling */ diff --git a/include/net/llc_conn.h b/include/net/llc_conn.h index df528a623548..ea985aa7a6c5 100644 --- a/include/net/llc_conn.h +++ b/include/net/llc_conn.h @@ -104,7 +104,7 @@ void llc_sk_reset(struct sock *sk); /* Access to a connection */ int llc_conn_state_process(struct sock *sk, struct sk_buff *skb); -int llc_conn_send_pdu(struct sock *sk, struct sk_buff *skb); +void llc_conn_send_pdu(struct sock *sk, struct sk_buff *skb); void llc_conn_rtn_pdu(struct sock *sk, struct sk_buff *skb); void llc_conn_resend_i_pdu_as_cmd(struct sock *sk, u8 nr, u8 first_p_bit); void llc_conn_resend_i_pdu_as_rsp(struct sock *sk, u8 nr, u8 first_f_bit); diff --git a/include/net/neighbour.h b/include/net/neighbour.h index f6017ddc4ded..1c0d07376125 100644 --- a/include/net/neighbour.h +++ b/include/net/neighbour.h @@ -425,8 +425,8 @@ static inline int neigh_event_send(struct neighbour *neigh, struct sk_buff *skb) { unsigned long now = jiffies; - if (neigh->used != now) - neigh->used = now; + if (READ_ONCE(neigh->used) != now) + WRITE_ONCE(neigh->used, now); if (!(neigh->nud_state&(NUD_CONNECTED|NUD_DELAY|NUD_PROBE))) return __neigh_event_send(neigh, skb); return 0; diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h index 4bd7508bedc9..b96df7499600 100644 --- a/include/net/netfilter/nf_tables.h +++ b/include/net/netfilter/nf_tables.h @@ -648,7 +648,8 @@ struct nft_expr_ops { */ struct nft_expr { const struct nft_expr_ops *ops; - unsigned char data[]; + unsigned char data[] + __attribute__((aligned(__alignof__(u64)))); }; static inline void *nft_expr_priv(const struct nft_expr *expr) diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h index 7a5d6a073165..ccd2a964dad7 100644 --- a/include/net/sch_generic.h +++ b/include/net/sch_generic.h @@ -289,6 +289,11 @@ static inline struct Qdisc *qdisc_root(const struct Qdisc *qdisc) return q; } +static inline struct Qdisc *qdisc_root_bh(const struct Qdisc *qdisc) +{ + return rcu_dereference_bh(qdisc->dev_queue->qdisc); +} + static inline struct Qdisc *qdisc_root_sleeping(const struct Qdisc *qdisc) { return qdisc->dev_queue->qdisc_sleeping; diff --git a/include/net/sctp/sctp.h b/include/net/sctp/sctp.h index d33b17ba51d2..8a71f4d42c62 100644 --- a/include/net/sctp/sctp.h +++ b/include/net/sctp/sctp.h @@ -98,6 +98,8 @@ void sctp_addr_wq_mgmt(struct net *, struct sctp_sockaddr_entry *, 
int); /* * sctp/socket.c */ +int sctp_inet_connect(struct socket *sock, struct sockaddr *uaddr, + int addr_len, int flags); int sctp_backlog_rcv(struct sock *sk, struct sk_buff *skb); int sctp_inet_listen(struct socket *sock, int backlog); void sctp_write_space(struct sock *sk); diff --git a/include/net/sock.h b/include/net/sock.h index 28ee24e3e823..8eaba44fdc09 100644 --- a/include/net/sock.h +++ b/include/net/sock.h @@ -2087,12 +2087,17 @@ struct sk_buff *sk_stream_alloc_skb(struct sock *sk, int size, gfp_t gfp, * sk_page_frag - return an appropriate page_frag * @sk: socket * - * If socket allocation mode allows current thread to sleep, it means its - * safe to use the per task page_frag instead of the per socket one. + * Use the per task page_frag instead of the per socket one for + * optimization when we know that we're in the normal context and owns + * everything that's associated with %current. + * + * gfpflags_allow_blocking() isn't enough here as direct reclaim may nest + * inside other socket operations and end up recursing into sk_page_frag() + * while it's already in use. */ static inline struct page_frag *sk_page_frag(struct sock *sk) { - if (gfpflags_allow_blocking(sk->sk_allocation)) + if (gfpflags_normal_context(sk->sk_allocation)) return ¤t->task_frag; return &sk->sk_frag; @@ -2179,7 +2184,7 @@ static inline ktime_t sock_read_timestamp(struct sock *sk) return kt; #else - return sk->sk_stamp; + return READ_ONCE(sk->sk_stamp); #endif } @@ -2190,7 +2195,7 @@ static inline void sock_write_timestamp(struct sock *sk, ktime_t kt) sk->sk_stamp = kt; write_sequnlock(&sk->sk_stamp_seq); #else - sk->sk_stamp = kt; + WRITE_ONCE(sk->sk_stamp, kt); #endif } diff --git a/include/scsi/scsi_dbg.h b/include/scsi/scsi_dbg.h index f8170e90b49d..bbe71a6361db 100644 --- a/include/scsi/scsi_dbg.h +++ b/include/scsi/scsi_dbg.h @@ -5,8 +5,6 @@ struct scsi_cmnd; struct scsi_device; struct scsi_sense_hdr; -#define SCSI_LOG_BUFSIZE 128 - extern void scsi_print_command(struct scsi_cmnd *); extern size_t __scsi_format_command(char *, size_t, const unsigned char *, size_t); diff --git a/include/soc/qcom/qseecomi.h b/include/soc/qcom/qseecomi.h index e199978302bf..46b1c5da4e90 100644 --- a/include/soc/qcom/qseecomi.h +++ b/include/soc/qcom/qseecomi.h @@ -1,5 +1,5 @@ /* - * Copyright (c) 2013-2017, The Linux Foundation. All rights reserved. + * Copyright (c) 2013-2019, The Linux Foundation. All rights reserved. 
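For illustration only (not part of the patch): the sk_page_frag() change above leans on the gfpflags_normal_context() helper introduced in the gfp.h hunk earlier. A tiny stand-alone sketch of the predicate, using made-up flag bits purely for illustration (the real values come from linux/gfp.h):

#include <assert.h>

/* Illustrative bit values only; not the kernel's actual definitions. */
#define __GFP_DIRECT_RECLAIM	0x1u
#define __GFP_MEMALLOC		0x2u
#define GFP_KERNEL		__GFP_DIRECT_RECLAIM	/* simplified */

static int gfpflags_allow_blocking(unsigned int flags)
{
	return !!(flags & __GFP_DIRECT_RECLAIM);
}

static int gfpflags_normal_context(unsigned int flags)
{
	/* Blocking allowed *and* not an emergency/reclaim allocation. */
	return (flags & (__GFP_DIRECT_RECLAIM | __GFP_MEMALLOC)) ==
		__GFP_DIRECT_RECLAIM;
}

int main(void)
{
	unsigned int memalloc_sock = GFP_KERNEL | __GFP_MEMALLOC;

	/* A plain GFP_KERNEL socket may safely use current->task_frag... */
	assert(gfpflags_allow_blocking(GFP_KERNEL));
	assert(gfpflags_normal_context(GFP_KERNEL));

	/* ...but a memalloc socket (e.g. swap over network storage) can be
	 * driven from direct reclaim, so it must stick to sk->sk_frag even
	 * though blocking is nominally allowed. */
	assert(gfpflags_allow_blocking(memalloc_sock));
	assert(!gfpflags_normal_context(memalloc_sock));
	return 0;
}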
* * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 and @@ -19,6 +19,7 @@ #define QSEECOM_KEY_ID_SIZE 32 #define QSEOS_RESULT_FAIL_SEND_CMD_NO_THREAD -19 /*0xFFFFFFED*/ +#define QSEOS_RESULT_FAIL_APP_ALREADY_LOADED -38 /*0xFFFFFFDA*/ #define QSEOS_RESULT_FAIL_UNSUPPORTED_CE_PIPE -63 #define QSEOS_RESULT_FAIL_KS_OP -64 #define QSEOS_RESULT_FAIL_KEY_ID_EXISTS -65 @@ -104,82 +105,82 @@ enum qseecom_qsee_reentrancy_phase { QSEE_REENTRANCY_PHASE_MAX = 0xFF }; -__packed struct qsee_apps_region_info_ireq { +struct qsee_apps_region_info_ireq { uint32_t qsee_cmd_id; uint32_t addr; uint32_t size; -}; +} __attribute__((__packed__)); -__packed struct qsee_apps_region_info_64bit_ireq { +struct qsee_apps_region_info_64bit_ireq { uint32_t qsee_cmd_id; uint64_t addr; uint32_t size; -}; +} __attribute__((__packed__)); -__packed struct qseecom_check_app_ireq { +struct qseecom_check_app_ireq { uint32_t qsee_cmd_id; char app_name[MAX_APP_NAME_SIZE]; -}; +} __attribute__((__packed__)); -__packed struct qseecom_load_app_ireq { +struct qseecom_load_app_ireq { uint32_t qsee_cmd_id; uint32_t mdt_len; /* Length of the mdt file */ uint32_t img_len; /* Length of .bxx and .mdt files */ uint32_t phy_addr; /* phy addr of the start of image */ char app_name[MAX_APP_NAME_SIZE]; /* application name*/ -}; +} __attribute__((__packed__)); -__packed struct qseecom_load_app_64bit_ireq { +struct qseecom_load_app_64bit_ireq { uint32_t qsee_cmd_id; uint32_t mdt_len; uint32_t img_len; uint64_t phy_addr; char app_name[MAX_APP_NAME_SIZE]; -}; +} __attribute__((__packed__)); -__packed struct qseecom_unload_app_ireq { +struct qseecom_unload_app_ireq { uint32_t qsee_cmd_id; uint32_t app_id; -}; +} __attribute__((__packed__)); -__packed struct qseecom_load_lib_image_ireq { +struct qseecom_load_lib_image_ireq { uint32_t qsee_cmd_id; uint32_t mdt_len; uint32_t img_len; uint32_t phy_addr; -}; +} __attribute__((__packed__)); -__packed struct qseecom_load_lib_image_64bit_ireq { +struct qseecom_load_lib_image_64bit_ireq { uint32_t qsee_cmd_id; uint32_t mdt_len; uint32_t img_len; uint64_t phy_addr; -}; +} __attribute__((__packed__)); -__packed struct qseecom_unload_lib_image_ireq { +struct qseecom_unload_lib_image_ireq { uint32_t qsee_cmd_id; -}; +} __attribute__((__packed__)); -__packed struct qseecom_register_listener_ireq { +struct qseecom_register_listener_ireq { uint32_t qsee_cmd_id; uint32_t listener_id; uint32_t sb_ptr; uint32_t sb_len; -}; +} __attribute__((__packed__)); -__packed struct qseecom_register_listener_64bit_ireq { +struct qseecom_register_listener_64bit_ireq { uint32_t qsee_cmd_id; uint32_t listener_id; uint64_t sb_ptr; uint32_t sb_len; -}; +} __attribute__((__packed__)); -__packed struct qseecom_unregister_listener_ireq { +struct qseecom_unregister_listener_ireq { uint32_t qsee_cmd_id; uint32_t listener_id; -}; +} __attribute__((__packed__)); -__packed struct qseecom_client_send_data_ireq { +struct qseecom_client_send_data_ireq { uint32_t qsee_cmd_id; uint32_t app_id; uint32_t req_ptr; @@ -188,9 +189,9 @@ __packed struct qseecom_client_send_data_ireq { uint32_t rsp_len; uint32_t sglistinfo_ptr; uint32_t sglistinfo_len; -}; +} __attribute__((__packed__)); -__packed struct qseecom_client_send_data_64bit_ireq { +struct qseecom_client_send_data_64bit_ireq { uint32_t qsee_cmd_id; uint32_t app_id; uint64_t req_ptr; @@ -199,36 +200,36 @@ __packed struct qseecom_client_send_data_64bit_ireq { uint32_t rsp_len; uint64_t sglistinfo_ptr; uint32_t 
sglistinfo_len; -}; +} __attribute__((__packed__)); -__packed struct qseecom_reg_log_buf_ireq { +struct qseecom_reg_log_buf_ireq { uint32_t qsee_cmd_id; uint32_t phy_addr; uint32_t len; -}; +} __attribute__((__packed__)); -__packed struct qseecom_reg_log_buf_64bit_ireq { +struct qseecom_reg_log_buf_64bit_ireq { uint32_t qsee_cmd_id; uint64_t phy_addr; uint32_t len; -}; +} __attribute__((__packed__)); /* send_data resp */ -__packed struct qseecom_client_listener_data_irsp { +struct qseecom_client_listener_data_irsp { uint32_t qsee_cmd_id; uint32_t listener_id; uint32_t status; uint32_t sglistinfo_ptr; uint32_t sglistinfo_len; -}; +} __attribute__((__packed__)); -__packed struct qseecom_client_listener_data_64bit_irsp { +struct qseecom_client_listener_data_64bit_irsp { uint32_t qsee_cmd_id; uint32_t listener_id; uint32_t status; uint64_t sglistinfo_ptr; uint32_t sglistinfo_len; -}; +} __attribute__((__packed__)); /* * struct qseecom_command_scm_resp - qseecom response buffer @@ -237,40 +238,40 @@ __packed struct qseecom_client_listener_data_64bit_irsp { * buffer * @sb_in_rsp_len: length of command response */ -__packed struct qseecom_command_scm_resp { +struct qseecom_command_scm_resp { uint32_t result; enum qseecom_command_scm_resp_type resp_type; unsigned int data; -}; +} __attribute__((__packed__)); struct qseecom_rpmb_provision_key { uint32_t key_type; }; -__packed struct qseecom_client_send_service_ireq { +struct qseecom_client_send_service_ireq { uint32_t qsee_cmd_id; uint32_t key_type; /* in */ unsigned int req_len; /* in */ uint32_t rsp_ptr; /* in/out */ unsigned int rsp_len; /* in/out */ -}; +} __attribute__((__packed__)); -__packed struct qseecom_client_send_service_64bit_ireq { +struct qseecom_client_send_service_64bit_ireq { uint32_t qsee_cmd_id; uint32_t key_type; unsigned int req_len; uint64_t rsp_ptr; unsigned int rsp_len; -}; +} __attribute__((__packed__)); -__packed struct qseecom_key_generate_ireq { +struct qseecom_key_generate_ireq { uint32_t qsee_command_id; uint32_t flags; uint8_t key_id[QSEECOM_KEY_ID_SIZE]; uint8_t hash32[QSEECOM_HASH_SIZE]; -}; +} __attribute__((__packed__)); -__packed struct qseecom_key_select_ireq { +struct qseecom_key_select_ireq { uint32_t qsee_command_id; uint32_t ce; uint32_t pipe; @@ -278,33 +279,33 @@ __packed struct qseecom_key_select_ireq { uint32_t flags; uint8_t key_id[QSEECOM_KEY_ID_SIZE]; uint8_t hash32[QSEECOM_HASH_SIZE]; -}; +} __attribute__((__packed__)); -__packed struct qseecom_key_delete_ireq { +struct qseecom_key_delete_ireq { uint32_t qsee_command_id; uint32_t flags; uint8_t key_id[QSEECOM_KEY_ID_SIZE]; uint8_t hash32[QSEECOM_HASH_SIZE]; -}; +} __attribute__((__packed__)); -__packed struct qseecom_key_userinfo_update_ireq { +struct qseecom_key_userinfo_update_ireq { uint32_t qsee_command_id; uint32_t flags; uint8_t key_id[QSEECOM_KEY_ID_SIZE]; uint8_t current_hash32[QSEECOM_HASH_SIZE]; uint8_t new_hash32[QSEECOM_HASH_SIZE]; -}; +} __attribute__((__packed__)); -__packed struct qseecom_key_max_count_query_ireq { +struct qseecom_key_max_count_query_ireq { uint32_t flags; -}; +} __attribute__((__packed__)); -__packed struct qseecom_key_max_count_query_irsp { +struct qseecom_key_max_count_query_irsp { uint32_t max_key_count; -}; +} __attribute__((__packed__)); -__packed struct qseecom_qteec_ireq { +struct qseecom_qteec_ireq { uint32_t qsee_cmd_id; uint32_t app_id; uint32_t req_ptr; @@ -313,9 +314,9 @@ __packed struct qseecom_qteec_ireq { uint32_t resp_len; uint32_t sglistinfo_ptr; uint32_t sglistinfo_len; -}; +} 
__attribute__((__packed__)); -__packed struct qseecom_qteec_64bit_ireq { +struct qseecom_qteec_64bit_ireq { uint32_t qsee_cmd_id; uint32_t app_id; uint64_t req_ptr; @@ -324,21 +325,20 @@ __packed struct qseecom_qteec_64bit_ireq { uint32_t resp_len; uint64_t sglistinfo_ptr; uint32_t sglistinfo_len; -}; +} __attribute__((__packed__)); -__packed struct qseecom_client_send_fsm_key_req { +struct qseecom_client_send_fsm_key_req { uint32_t qsee_cmd_id; uint32_t req_ptr; uint32_t req_len; uint32_t rsp_ptr; uint32_t rsp_len; -}; +} __attribute__((__packed__)); -__packed struct qseecom_continue_blocked_request_ireq { +struct qseecom_continue_blocked_request_ireq { uint32_t qsee_cmd_id; uint32_t app_or_session_id; /*legacy: app_id; smcinvoke: session_id*/ -}; - +} __attribute__((__packed__)); /********** ARMV8 SMC INTERFACE TZ MACRO *******************/ @@ -352,7 +352,8 @@ __packed struct qseecom_continue_blocked_request_ireq { /*---------------------------------------------------------------------------- * Owning Entity IDs (defined by ARM SMC doc) - * -------------------------------------------------------------------------*/ + * --------------------------------------------------------------------------- + */ #define TZ_OWNER_ARM 0 /** ARM Architecture call ID */ #define TZ_OWNER_CPU 1 /** CPU service call ID */ #define TZ_OWNER_SIP 2 /** SIP service call ID */ @@ -380,18 +381,18 @@ __packed struct qseecom_continue_blocked_request_ireq { #define TZ_OWNER_OS_RESERVED_13 62 #define TZ_OWNER_OS_RESERVED_14 63 -#define TZ_SVC_INFO 6 /* Misc. information services */ +#define TZ_SVC_INFO 6 /* Misc. information services */ /** Trusted Application call groups */ -#define TZ_SVC_APP_ID_PLACEHOLDER 0 /* SVC bits will contain App ID */ +#define TZ_SVC_APP_ID_PLACEHOLDER 0 /* SVC bits will contain App ID */ /** General helper macro to create a bitmask from bits low to high. */ #define TZ_MASK_BITS(h, l) ((0xffffffff >> (32 - ((h - l) + 1))) << l) -/** - Macro used to define an SMC ID based on the owner ID, - service ID, and function number. -*/ +/* + * Macro used to define an SMC ID based on the owner ID, + * service ID, and function number. + */ #define TZ_SYSCALL_CREATE_SMC_ID(o, s, f) \ ((uint32_t)((((o & 0x3f) << 24) | (s & 0xff) << 8) | (f & 0xff))) @@ -412,8 +413,8 @@ __packed struct qseecom_continue_blocked_request_ireq { ((p9&TZ_SYSCALL_PARAM_TYPE_MASK)<<20)+ \ ((p10&TZ_SYSCALL_PARAM_TYPE_MASK)<<22)) -/** - Macros used to create the Parameter ID associated with the syscall +/* + * Macros used to create the Parameter ID associated with the syscall */ #define TZ_SYSCALL_CREATE_PARAM_ID_0 0 #define TZ_SYSCALL_CREATE_PARAM_ID_1(p1) \ @@ -437,8 +438,8 @@ __packed struct qseecom_continue_blocked_request_ireq { #define TZ_SYSCALL_CREATE_PARAM_ID_10(p1, p2, p3, p4, p5, p6, p7, p8, p9, p10) \ TZ_SYSCALL_CREATE_PARAM_ID(10, p1, p2, p3, p4, p5, p6, p7, p8, p9, p10) -/** - Macro used to obtain the Parameter ID associated with the syscall +/* + * Macro used to obtain the Parameter ID associated with the syscall */ #define TZ_SYSCALL_GET_PARAM_ID(CMD_ID) CMD_ID ## _PARAM_ID diff --git a/include/soc/qcom/scm.h b/include/soc/qcom/scm.h index f0a3124dae00..e741d540a8d6 100644 --- a/include/soc/qcom/scm.h +++ b/include/soc/qcom/scm.h @@ -1,4 +1,4 @@ -/* Copyright (c) 2010-2017, The Linux Foundation. All rights reserved. +/* Copyright (c) 2010-2018, The Linux Foundation. All rights reserved. 
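For illustration only (not part of the patch): the TZ_SYSCALL_CREATE_SMC_ID() macro touched above simply packs the owner, service and function fields into one 32-bit identifier. A worked example using the owner/service values defined in this header; the function number 0x01 is arbitrary:

#include <stdio.h>
#include <stdint.h>

#define TZ_OWNER_SIP	2	/* from the header above */
#define TZ_SVC_INFO	6	/* from the header above */

/* Same encoding as the header, with parameters parenthesised. */
#define TZ_SYSCALL_CREATE_SMC_ID(o, s, f) \
	((uint32_t)((((o) & 0x3f) << 24) | (((s) & 0xff) << 8) | ((f) & 0xff)))

int main(void)
{
	/* owner -> bits 29..24, service -> bits 15..8, function -> bits 7..0 */
	printf("0x%08x\n",
	       (unsigned int)TZ_SYSCALL_CREATE_SMC_ID(TZ_OWNER_SIP,
						      TZ_SVC_INFO, 0x01));
	return 0;	/* prints 0x02000601 */
}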
* * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 and @@ -101,6 +101,8 @@ extern int scm_call(u32 svc_id, u32 cmd_id, const void *cmd_buf, size_t cmd_len, extern int scm_call2(u32 cmd_id, struct scm_desc *desc); +extern int scm_call2_noretry(u32 cmd_id, struct scm_desc *desc); + extern int scm_call2_atomic(u32 cmd_id, struct scm_desc *desc); extern int scm_call_noalloc(u32 svc_id, u32 cmd_id, const void *cmd_buf, @@ -150,6 +152,11 @@ static inline int scm_call2(u32 cmd_id, struct scm_desc *desc) return 0; } +static inline int scm_call2_noretry(u32 cmd_id, struct scm_desc *desc) +{ + return 0; +} + static inline int scm_call2_atomic(u32 cmd_id, struct scm_desc *desc) { return 0; diff --git a/include/sound/soc-dapm.h b/include/sound/soc-dapm.h index d29e39ee28f4..4e1931b7c7bf 100644 --- a/include/sound/soc-dapm.h +++ b/include/sound/soc-dapm.h @@ -340,6 +340,8 @@ struct device; #define SND_SOC_DAPM_WILL_PMD 0x80 /* called at start of sequence */ #define SND_SOC_DAPM_PRE_POST_PMD \ (SND_SOC_DAPM_PRE_PMD | SND_SOC_DAPM_POST_PMD) +#define SND_SOC_DAPM_PRE_POST_PMU \ + (SND_SOC_DAPM_PRE_PMU | SND_SOC_DAPM_POST_PMU) /* convenience event type detection */ #define SND_SOC_DAPM_EVENT_ON(e) \ diff --git a/include/uapi/linux/input.h b/include/uapi/linux/input.h index b115bca91324..7a89b7b62ab8 100644 --- a/include/uapi/linux/input.h +++ b/include/uapi/linux/input.h @@ -61,9 +61,14 @@ struct input_id { * Note that input core does not clamp reported values to the * [minimum, maximum] limits, such task is left to userspace. * - * Resolution for main axes (ABS_X, ABS_Y, ABS_Z) is reported in - * units per millimeter (units/mm), resolution for rotational axes - * (ABS_RX, ABS_RY, ABS_RZ) is reported in units per radian. + * The default resolution for main axes (ABS_X, ABS_Y, ABS_Z) + * is reported in units per millimeter (units/mm), resolution + * for rotational axes (ABS_RX, ABS_RY, ABS_RZ) is reported + * in units per radian. + * When INPUT_PROP_ACCELEROMETER is set the resolution changes. + * The main axes (ABS_X, ABS_Y, ABS_Z) are then reported in + * in units per g (units/g) and in units per degree per second + * (units/deg/s) for rotational axes (ABS_RX, ABS_RY, ABS_RZ). 
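For illustration only (not part of the patch): the input.h comment above changes how struct input_absinfo.resolution (the structure appears just below) is interpreted when INPUT_PROP_ACCELEROMETER is set. A short userspace sketch of the unit handling; the axis parameters and raw value are hypothetical:

#include <stdio.h>
#include <linux/input.h>

int main(void)
{
	/* Hypothetical ABS_X parameters as an accelerometer might report
	 * them; with INPUT_PROP_ACCELEROMETER set, resolution is units/g. */
	struct input_absinfo info = {
		.minimum = -32768, .maximum = 32767, .resolution = 8192,
	};
	int raw = 4096;

	if (info.resolution)	/* 0 means the device reports no resolution */
		printf("raw %d -> %.2f g\n", raw,
		       (double)raw / info.resolution);	/* 0.50 g */
	return 0;
}

For a device without the accelerometer property the same division would yield millimetres for ABS_X/Y/Z (and radians rather than deg/s for the rotational axes).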
*/ struct input_absinfo { __s32 value; diff --git a/include/uapi/linux/qseecom.h b/include/uapi/linux/qseecom.h index 40c96eef3059..63e2a5f2b671 100644 --- a/include/uapi/linux/qseecom.h +++ b/include/uapi/linux/qseecom.h @@ -7,6 +7,11 @@ #define MAX_ION_FD 4 #define MAX_APP_NAME_SIZE 64 #define QSEECOM_HASH_SIZE 32 + +/* qseecom_ta_heap allocation retry delay (ms) and max attemp count */ +#define QSEECOM_TA_ION_ALLOCATE_DELAY 50 +#define QSEECOM_TA_ION_ALLOCATE_MAX_ATTEMP 20 + /* * struct qseecom_register_listener_req - * for register listener ioctl request diff --git a/kernel/elfcore.c b/kernel/elfcore.c index e556751d15d9..a2b29b9bdfcb 100644 --- a/kernel/elfcore.c +++ b/kernel/elfcore.c @@ -2,6 +2,7 @@ #include <linux/fs.h> #include <linux/mm.h> #include <linux/binfmts.h> +#include <linux/elfcore.h> Elf_Half __weak elf_core_extra_phdrs(void) { diff --git a/kernel/fork.c b/kernel/fork.c index b3899603f88e..58514ce46152 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -2190,7 +2190,7 @@ int sysctl_max_threads(struct ctl_table *table, int write, struct ctl_table t; int ret; int threads = max_threads; - int min = MIN_THREADS; + int min = 1; int max = MAX_THREADS; t = *table; @@ -2202,7 +2202,7 @@ int sysctl_max_threads(struct ctl_table *table, int write, if (ret || !write) return ret; - set_max_threads(threads); + max_threads = threads; return 0; } diff --git a/kernel/kprobes.c b/kernel/kprobes.c index a53998cba804..fdde50d39a46 100644 --- a/kernel/kprobes.c +++ b/kernel/kprobes.c @@ -1454,7 +1454,8 @@ static int check_kprobe_address_safe(struct kprobe *p, /* Ensure it is not in reserved area nor out of text */ if (!kernel_text_address((unsigned long) p->addr) || within_kprobe_blacklist((unsigned long) p->addr) || - jump_label_text_reserved(p->addr, p->addr)) { + jump_label_text_reserved(p->addr, p->addr) || + find_bug((unsigned long)p->addr)) { ret = -EINVAL; goto out; } diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c index f2df5f86af28..a419696709a1 100644 --- a/kernel/locking/lockdep.c +++ b/kernel/locking/lockdep.c @@ -3314,6 +3314,9 @@ __lock_set_class(struct lockdep_map *lock, const char *name, unsigned int depth; int i; + if (unlikely(!debug_locks)) + return 0; + depth = curr->lockdep_depth; /* * This function is about (re)setting the class of a held lock, diff --git a/kernel/panic.c b/kernel/panic.c index 75f564a94a82..aceb10bcfc2f 100644 --- a/kernel/panic.c +++ b/kernel/panic.c @@ -90,6 +90,7 @@ void panic(const char *fmt, ...) * after the panic_lock is acquired) from invoking panic again. 
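For illustration only (not part of the patch): the kprobes hunk above extends check_kprobe_address_safe() so that probes landing on BUG()/WARN() table entries are rejected. For context, a minimal probe-registration module that would now receive -EINVAL if its target resolved to such an address; the probed symbol is only an example:

#include <linux/module.h>
#include <linux/kprobes.h>

static struct kprobe kp = {
	.symbol_name = "do_sys_open",	/* example target, nothing special */
};

static int __init kp_example_init(void)
{
	/* Fails with -EINVAL for blacklisted, reserved or (with the hunk
	 * above) BUG()/WARN() addresses. */
	return register_kprobe(&kp);
}

static void __exit kp_example_exit(void)
{
	unregister_kprobe(&kp);
}

module_init(kp_example_init);
module_exit(kp_example_exit);
MODULE_LICENSE("GPL");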
*/ local_irq_disable(); + preempt_disable_notrace(); /* * It's possible to come here directly from a panic-assertion and diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c index eb6a190c7a3a..2deedfc6db54 100644 --- a/kernel/printk/printk.c +++ b/kernel/printk/printk.c @@ -3066,7 +3066,7 @@ bool kmsg_dump_get_buffer(struct kmsg_dumper *dumper, bool syslog, seq = dumper->cur_seq; idx = dumper->cur_idx; prev = 0; - while (l > size && seq < dumper->next_seq) { + while (l >= size && seq < dumper->next_seq) { struct printk_log *msg = log_from_idx(idx); l -= msg_print_text(msg, prev, true, NULL, 0); diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 543f7113b1d2..f6f8bb2f0d95 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -9339,10 +9339,6 @@ static int cpu_cgroup_can_attach(struct cgroup_taskset *tset) #ifdef CONFIG_RT_GROUP_SCHED if (!sched_rt_can_attach(css_tg(css), task)) return -EINVAL; -#else - /* We don't support RT-tasks being in separate groups */ - if (task->sched_class != &fair_sched_class) - return -EINVAL; #endif /* * Serialize against wake_up_new_task() such that if its diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c index e497ef064ab0..f01eb276835d 100644 --- a/kernel/sched/fair.c +++ b/kernel/sched/fair.c @@ -10729,9 +10729,10 @@ no_move: out_balanced: /* * We reach balance although we may have faced some affinity - * constraints. Clear the imbalance flag if it was set. + * constraints. Clear the imbalance flag only if other tasks got + * a chance to move and fix the imbalance. */ - if (sd_parent) { + if (sd_parent && !(env.flags & LBF_ALL_PINNED)) { int *group_imbalance = &sd_parent->groups->sgc->imbalance; if (*group_imbalance) diff --git a/kernel/time/alarmtimer.c b/kernel/time/alarmtimer.c index 4ac0a040e4ef..4171fee2d4ec 100644 --- a/kernel/time/alarmtimer.c +++ b/kernel/time/alarmtimer.c @@ -640,7 +640,7 @@ static int alarm_timer_create(struct k_itimer *new_timer) struct alarm_base *base; if (!alarmtimer_get_rtcdev()) - return -ENOTSUPP; + return -EOPNOTSUPP; if (!capable(CAP_WAKE_ALARM)) return -EPERM; @@ -683,7 +683,7 @@ static void alarm_timer_get(struct k_itimer *timr, static int alarm_timer_del(struct k_itimer *timr) { if (!rtcdev) - return -ENOTSUPP; + return -EOPNOTSUPP; if (alarm_try_to_cancel(&timr->it.alarm.alarmtimer) < 0) return TIMER_RETRY; @@ -707,7 +707,7 @@ static int alarm_timer_set(struct k_itimer *timr, int flags, ktime_t exp; if (!rtcdev) - return -ENOTSUPP; + return -EOPNOTSUPP; if (flags & ~TIMER_ABSTIME) return -EINVAL; @@ -869,7 +869,7 @@ static int alarm_timer_nsleep(const clockid_t which_clock, int flags, struct restart_block *restart; if (!alarmtimer_get_rtcdev()) - return -ENOTSUPP; + return -EOPNOTSUPP; if (flags & ~TIMER_ABSTIME) return -EINVAL; diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c index 12f8bf9bf85d..4509cf51fbfd 100644 --- a/kernel/trace/trace.c +++ b/kernel/trace/trace.c @@ -3448,9 +3448,14 @@ static int show_traces_open(struct inode *inode, struct file *file) if (tracing_disabled) return -ENODEV; + if (trace_array_get(tr) < 0) + return -ENODEV; + ret = seq_open(file, &show_traces_seq_ops); - if (ret) + if (ret) { + trace_array_put(tr); return ret; + } m = file->private_data; m->private = tr; @@ -3458,6 +3463,14 @@ static int show_traces_open(struct inode *inode, struct file *file) return 0; } +static int show_traces_release(struct inode *inode, struct file *file) +{ + struct trace_array *tr = inode->i_private; + + trace_array_put(tr); + return seq_release(inode, file); +} + static 
ssize_t tracing_write_stub(struct file *filp, const char __user *ubuf, size_t count, loff_t *ppos) @@ -3488,8 +3501,8 @@ static const struct file_operations tracing_fops = { static const struct file_operations show_traces_fops = { .open = show_traces_open, .read = seq_read, - .release = seq_release, .llseek = seq_lseek, + .release = show_traces_release, }; static ssize_t @@ -4948,6 +4961,7 @@ waitagain: sizeof(struct trace_iterator) - offsetof(struct trace_iterator, seq)); cpumask_clear(iter->started); + trace_seq_init(&iter->seq); iter->pos = -1; trace_event_read_lock(); diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug index 30f5182927b3..a3a9196432d0 100644 --- a/lib/Kconfig.debug +++ b/lib/Kconfig.debug @@ -529,7 +529,7 @@ config DEBUG_KMEMLEAK_EARLY_LOG_SIZE int "Maximum kmemleak early log entries" depends on DEBUG_KMEMLEAK range 200 40000 - default 400 + default 16000 help Kmemleak must track all the memory allocations to avoid reporting false positives. Since memory may be allocated or diff --git a/lib/dump_stack.c b/lib/dump_stack.c index c30d07e99dba..72de6444934d 100644 --- a/lib/dump_stack.c +++ b/lib/dump_stack.c @@ -44,7 +44,12 @@ retry: was_locked = 1; } else { local_irq_restore(flags); - cpu_relax(); + /* + * Wait for the lock to release before jumping to + * atomic_cmpxchg() in order to mitigate the thundering herd + * problem. + */ + do { cpu_relax(); } while (atomic_read(&dump_lock) != -1); goto retry; } diff --git a/mm/filemap.c b/mm/filemap.c index ea5c7a6e20d2..6f3c539d9e68 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -342,7 +342,8 @@ int __filemap_fdatawrite_range(struct address_space *mapping, loff_t start, .range_end = end, }; - if (!mapping_cap_writeback_dirty(mapping)) + if (!mapping_cap_writeback_dirty(mapping) || + !mapping_tagged(mapping, PAGECACHE_TAG_DIRTY)) return 0; wbc_attach_fdatawrite_inode(&wbc, mapping->host); diff --git a/mm/shmem.c b/mm/shmem.c index f64a55543f3f..75e1d682370c 100644 --- a/mm/shmem.c +++ b/mm/shmem.c @@ -1854,11 +1854,12 @@ static void shmem_tag_pins(struct address_space *mapping) void **slot; pgoff_t start; struct page *page; + unsigned int tagged = 0; lru_add_drain(); start = 0; - rcu_read_lock(); + spin_lock_irq(&mapping->tree_lock); restart: radix_tree_for_each_slot(slot, &mapping->page_tree, &iter, start) { page = radix_tree_deref_slot(slot); @@ -1866,19 +1867,20 @@ restart: if (radix_tree_deref_retry(page)) goto restart; } else if (page_count(page) - page_mapcount(page) > 1) { - spin_lock_irq(&mapping->tree_lock); radix_tree_tag_set(&mapping->page_tree, iter.index, SHMEM_TAG_PINNED); - spin_unlock_irq(&mapping->tree_lock); } - if (need_resched()) { - cond_resched_rcu(); - start = iter.index + 1; - goto restart; - } + if (++tagged % 1024) + continue; + + spin_unlock_irq(&mapping->tree_lock); + cond_resched(); + start = iter.index + 1; + spin_lock_irq(&mapping->tree_lock); + goto restart; } - rcu_read_unlock(); + spin_unlock_irq(&mapping->tree_lock); } /* diff --git a/mm/slub.c b/mm/slub.c index 716b794f3679..7c821c5a30b6 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -4655,7 +4655,17 @@ static ssize_t show_slab_objects(struct kmem_cache *s, } } - get_online_mems(); + /* + * It is impossible to take "mem_hotplug_lock" here with "kernfs_mutex" + * already held which will conflict with an existing lock order: + * + * mem_hotplug_lock->slab_mutex->kernfs_mutex + * + * We don't really need mem_hotplug_lock (to hold off + * slab_mem_going_offline_callback) here because slab's memory hot + * unplug code doesn't destroy the 
kmem_cache->node[] data. + */ + #ifdef CONFIG_SLUB_DEBUG if (flags & SO_ALL) { struct kmem_cache_node *n; @@ -4696,7 +4706,6 @@ static ssize_t show_slab_objects(struct kmem_cache *s, x += sprintf(buf + x, " N%d=%lu", node, nodes[node]); #endif - put_online_mems(); kfree(nodes); return x + sprintf(buf + x, "\n"); } diff --git a/mm/vmstat.c b/mm/vmstat.c index e297ed47aa58..3a07628297da 100644 --- a/mm/vmstat.c +++ b/mm/vmstat.c @@ -1588,7 +1588,7 @@ static int __init setup_vmstat(void) #endif #ifdef CONFIG_PROC_FS proc_create("buddyinfo", S_IRUGO, NULL, &fragmentation_file_operations); - proc_create("pagetypeinfo", S_IRUGO, NULL, &pagetypeinfo_file_ops); + proc_create("pagetypeinfo", 0400, NULL, &pagetypeinfo_file_ops); proc_create("vmstat", S_IRUGO, NULL, &proc_vmstat_file_operations); proc_create("zoneinfo", S_IRUGO, NULL, &proc_zoneinfo_file_operations); #endif diff --git a/net/appletalk/ddp.c b/net/appletalk/ddp.c index 4246df3b7ae8..e23bf739492c 100644 --- a/net/appletalk/ddp.c +++ b/net/appletalk/ddp.c @@ -1029,6 +1029,11 @@ static int atalk_create(struct net *net, struct socket *sock, int protocol, */ if (sock->type != SOCK_RAW && sock->type != SOCK_DGRAM) goto out; + + rc = -EPERM; + if (sock->type == SOCK_RAW && !kern && !capable(CAP_NET_RAW)) + goto out; + rc = -ENOMEM; sk = sk_alloc(net, PF_APPLETALK, GFP_KERNEL, &ddp_proto, kern); if (!sk) diff --git a/net/ax25/af_ax25.c b/net/ax25/af_ax25.c index 2772f6a13fcb..de55a3f001dc 100644 --- a/net/ax25/af_ax25.c +++ b/net/ax25/af_ax25.c @@ -859,6 +859,8 @@ static int ax25_create(struct net *net, struct socket *sock, int protocol, break; case SOCK_RAW: + if (!capable(CAP_NET_RAW)) + return -EPERM; break; default: return -ESOCKTNOSUPPORT; diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c index 8d65bea9d93b..cc1b7488861b 100644 --- a/net/bluetooth/hci_event.c +++ b/net/bluetooth/hci_event.c @@ -5062,11 +5062,6 @@ static void hci_le_remote_conn_param_req_evt(struct hci_dev *hdev, return send_conn_param_neg_reply(hdev, handle, HCI_ERROR_UNKNOWN_CONN_ID); - if (min < hcon->le_conn_min_interval || - max > hcon->le_conn_max_interval) - return send_conn_param_neg_reply(hdev, handle, - HCI_ERROR_INVALID_LL_PARAMS); - if (hci_check_conn_params(min, max, latency, timeout)) return send_conn_param_neg_reply(hdev, handle, HCI_ERROR_INVALID_LL_PARAMS); diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c index 6233951af203..824e46f06e7d 100644 --- a/net/bluetooth/l2cap_core.c +++ b/net/bluetooth/l2cap_core.c @@ -5267,14 +5267,7 @@ static inline int l2cap_conn_param_update_req(struct l2cap_conn *conn, memset(&rsp, 0, sizeof(rsp)); - if (min < hcon->le_conn_min_interval || - max > hcon->le_conn_max_interval) { - BT_DBG("requested connection interval exceeds current bounds."); - err = -EINVAL; - } else { - err = hci_check_conn_params(min, max, latency, to_multiplier); - } - + err = hci_check_conn_params(min, max, latency, to_multiplier); if (err) rsp.result = cpu_to_le16(L2CAP_CONN_PARAM_REJECTED); else diff --git a/net/core/datagram.c b/net/core/datagram.c index d62af69ad844..ba8af8b55f1f 100644 --- a/net/core/datagram.c +++ b/net/core/datagram.c @@ -96,7 +96,7 @@ static int wait_for_more_packets(struct sock *sk, int *err, long *timeo_p, if (error) goto out_err; - if (sk->sk_receive_queue.prev != skb) + if (READ_ONCE(sk->sk_receive_queue.prev) != skb) goto out; /* Socket shut down? 
*/ diff --git a/net/core/ethtool.c b/net/core/ethtool.c index 66428c0eb663..7e4e7deb2542 100644 --- a/net/core/ethtool.c +++ b/net/core/ethtool.c @@ -941,11 +941,13 @@ static int ethtool_reset(struct net_device *dev, char __user *useraddr) static int ethtool_get_wol(struct net_device *dev, char __user *useraddr) { - struct ethtool_wolinfo wol = { .cmd = ETHTOOL_GWOL }; + struct ethtool_wolinfo wol; if (!dev->ethtool_ops->get_wol) return -EOPNOTSUPP; + memset(&wol, 0, sizeof(struct ethtool_wolinfo)); + wol.cmd = ETHTOOL_GWOL; dev->ethtool_ops->get_wol(dev, &wol); if (copy_to_user(useraddr, &wol, sizeof(wol))) diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c index fbb631004b43..b4f2c30e8313 100644 --- a/net/core/flow_dissector.c +++ b/net/core/flow_dissector.c @@ -541,45 +541,34 @@ out_bad: } EXPORT_SYMBOL(__skb_flow_dissect); -static u32 hashrnd __read_mostly; +static siphash_key_t hashrnd __read_mostly; static __always_inline void __flow_hash_secret_init(void) { net_get_random_once(&hashrnd, sizeof(hashrnd)); } -static __always_inline u32 __flow_hash_words(const u32 *words, u32 length, - u32 keyval) +static const void *flow_keys_hash_start(const struct flow_keys *flow) { - return jhash2(words, length, keyval); -} - -static inline const u32 *flow_keys_hash_start(const struct flow_keys *flow) -{ - const void *p = flow; - - BUILD_BUG_ON(FLOW_KEYS_HASH_OFFSET % sizeof(u32)); - return (const u32 *)(p + FLOW_KEYS_HASH_OFFSET); + BUILD_BUG_ON(FLOW_KEYS_HASH_OFFSET % SIPHASH_ALIGNMENT); + return &flow->FLOW_KEYS_HASH_START_FIELD; } static inline size_t flow_keys_hash_length(const struct flow_keys *flow) { - size_t diff = FLOW_KEYS_HASH_OFFSET + sizeof(flow->addrs); - BUILD_BUG_ON((sizeof(*flow) - FLOW_KEYS_HASH_OFFSET) % sizeof(u32)); - BUILD_BUG_ON(offsetof(typeof(*flow), addrs) != - sizeof(*flow) - sizeof(flow->addrs)); + size_t len = offsetof(typeof(*flow), addrs) - FLOW_KEYS_HASH_OFFSET; switch (flow->control.addr_type) { case FLOW_DISSECTOR_KEY_IPV4_ADDRS: - diff -= sizeof(flow->addrs.v4addrs); + len += sizeof(flow->addrs.v4addrs); break; case FLOW_DISSECTOR_KEY_IPV6_ADDRS: - diff -= sizeof(flow->addrs.v6addrs); + len += sizeof(flow->addrs.v6addrs); break; case FLOW_DISSECTOR_KEY_TIPC_ADDRS: - diff -= sizeof(flow->addrs.tipcaddrs); + len += sizeof(flow->addrs.tipcaddrs); break; } - return (sizeof(*flow) - diff) / sizeof(u32); + return len; } __be32 flow_get_u32_src(const struct flow_keys *flow) @@ -645,14 +634,15 @@ static inline void __flow_hash_consistentify(struct flow_keys *keys) } } -static inline u32 __flow_hash_from_keys(struct flow_keys *keys, u32 keyval) +static inline u32 __flow_hash_from_keys(struct flow_keys *keys, + const siphash_key_t *keyval) { u32 hash; __flow_hash_consistentify(keys); - hash = __flow_hash_words(flow_keys_hash_start(keys), - flow_keys_hash_length(keys), keyval); + hash = siphash(flow_keys_hash_start(keys), + flow_keys_hash_length(keys), keyval); if (!hash) hash = 1; @@ -662,12 +652,13 @@ static inline u32 __flow_hash_from_keys(struct flow_keys *keys, u32 keyval) u32 flow_hash_from_keys(struct flow_keys *keys) { __flow_hash_secret_init(); - return __flow_hash_from_keys(keys, hashrnd); + return __flow_hash_from_keys(keys, &hashrnd); } EXPORT_SYMBOL(flow_hash_from_keys); static inline u32 ___skb_get_hash(const struct sk_buff *skb, - struct flow_keys *keys, u32 keyval) + struct flow_keys *keys, + const siphash_key_t *keyval) { skb_flow_dissect_flow_keys(skb, keys, FLOW_DISSECTOR_F_STOP_AT_FLOW_LABEL); @@ -715,7 +706,7 @@ u32 
__skb_get_hash_symmetric(struct sk_buff *skb) NULL, 0, 0, 0, FLOW_DISSECTOR_F_STOP_AT_FLOW_LABEL); - return __flow_hash_from_keys(&keys, hashrnd); + return __flow_hash_from_keys(&keys, &hashrnd); } EXPORT_SYMBOL_GPL(__skb_get_hash_symmetric); @@ -734,12 +725,13 @@ void __skb_get_hash(struct sk_buff *skb) __flow_hash_secret_init(); - __skb_set_sw_hash(skb, ___skb_get_hash(skb, &keys, hashrnd), + __skb_set_sw_hash(skb, ___skb_get_hash(skb, &keys, &hashrnd), flow_keys_have_l4(&keys)); } EXPORT_SYMBOL(__skb_get_hash); -__u32 skb_get_hash_perturb(const struct sk_buff *skb, u32 perturb) +__u32 skb_get_hash_perturb(const struct sk_buff *skb, + const siphash_key_t *perturb) { struct flow_keys keys; diff --git a/net/dccp/ipv4.c b/net/dccp/ipv4.c index b0a577a79a6a..ef4c44d46293 100644 --- a/net/dccp/ipv4.c +++ b/net/dccp/ipv4.c @@ -121,7 +121,7 @@ int dccp_v4_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len) inet->inet_daddr, inet->inet_sport, inet->inet_dport); - inet->inet_id = dp->dccps_iss ^ jiffies; + inet->inet_id = prandom_u32(); err = dccp_connect(sk); rt = NULL; @@ -417,7 +417,7 @@ struct sock *dccp_v4_request_recv_sock(const struct sock *sk, RCU_INIT_POINTER(newinet->inet_opt, rcu_dereference(ireq->ireq_opt)); newinet->mc_index = inet_iif(skb); newinet->mc_ttl = ip_hdr(skb)->ttl; - newinet->inet_id = jiffies; + newinet->inet_id = prandom_u32(); if (dst == NULL && (dst = inet_csk_route_child_sock(sk, newsk, req)) == NULL) goto put_and_exit; diff --git a/net/ieee802154/socket.c b/net/ieee802154/socket.c index 47b397264f24..cb6c0772ea36 100644 --- a/net/ieee802154/socket.c +++ b/net/ieee802154/socket.c @@ -999,6 +999,9 @@ static int ieee802154_create(struct net *net, struct socket *sock, switch (sock->type) { case SOCK_RAW: + rc = -EPERM; + if (!capable(CAP_NET_RAW)) + goto out; proto = &ieee802154_raw_prot; ops = &ieee802154_raw_ops; break; diff --git a/net/ipv4/datagram.c b/net/ipv4/datagram.c index f915abff1350..d3eddfd13875 100644 --- a/net/ipv4/datagram.c +++ b/net/ipv4/datagram.c @@ -75,7 +75,7 @@ int __ip4_datagram_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len inet->inet_dport = usin->sin_port; sk->sk_state = TCP_ESTABLISHED; sk_set_txhash(sk); - inet->inet_id = jiffies; + inet->inet_id = prandom_u32(); sk_dst_set(sk, &rt->dst); err = 0; diff --git a/net/ipv4/route.c b/net/ipv4/route.c index 397b72f15047..6e17149b0983 100644 --- a/net/ipv4/route.c +++ b/net/ipv4/route.c @@ -901,16 +901,15 @@ void ip_rt_send_redirect(struct sk_buff *skb) if (peer->rate_tokens == 0 || time_after(jiffies, (peer->rate_last + - (ip_rt_redirect_load << peer->rate_tokens)))) { + (ip_rt_redirect_load << peer->n_redirects)))) { __be32 gw = rt_nexthop(rt, ip_hdr(skb)->daddr); icmp_send(skb, ICMP_REDIRECT, ICMP_REDIR_HOST, gw); peer->rate_last = jiffies; - ++peer->rate_tokens; ++peer->n_redirects; #ifdef CONFIG_IP_ROUTE_VERBOSE if (log_martians && - peer->rate_tokens == ip_rt_redirect_number) + peer->n_redirects == ip_rt_redirect_number) net_warn_ratelimited("host %pI4/if%d ignores redirects for %pI4 to %pI4\n", &ip_hdr(skb)->saddr, inet_iif(skb), &ip_hdr(skb)->daddr, &gw); @@ -2216,7 +2215,7 @@ struct rtable *__ip_route_output_key_hash(struct net *net, struct flowi4 *fl4, struct fib_result res; struct rtable *rth; int orig_oif; - int err = -ENETUNREACH; + int err; res.tclassid = 0; res.fi = NULL; @@ -2231,11 +2230,14 @@ struct rtable *__ip_route_output_key_hash(struct net *net, struct flowi4 *fl4, rcu_read_lock(); if (fl4->saddr) { - rth = ERR_PTR(-EINVAL); if 
(ipv4_is_multicast(fl4->saddr) || ipv4_is_lbcast(fl4->saddr) || - ipv4_is_zeronet(fl4->saddr)) + ipv4_is_zeronet(fl4->saddr)) { + rth = ERR_PTR(-EINVAL); goto out; + } + + rth = ERR_PTR(-ENETUNREACH); /* I removed check for oif == dev_out->oif here. It was wrong for two reasons: diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c index d2d95cf398e8..1763e02103a3 100644 --- a/net/ipv4/tcp_ipv4.c +++ b/net/ipv4/tcp_ipv4.c @@ -241,7 +241,7 @@ int tcp_v4_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len) inet->inet_sport, usin->sin_port); - inet->inet_id = tp->write_seq ^ jiffies; + inet->inet_id = prandom_u32(); err = tcp_connect(sk); @@ -1307,7 +1307,7 @@ struct sock *tcp_v4_syn_recv_sock(const struct sock *sk, struct sk_buff *skb, inet_csk(newsk)->icsk_ext_hdr_len = 0; if (inet_opt) inet_csk(newsk)->icsk_ext_hdr_len = inet_opt->opt.optlen; - newinet->inet_id = newtp->write_seq ^ jiffies; + newinet->inet_id = prandom_u32(); if (!dst) { dst = inet_csk_route_child_sock(sk, newsk, req); diff --git a/net/ipv6/ip6_input.c b/net/ipv6/ip6_input.c index 9075acf081dd..c83c0faf5ae9 100644 --- a/net/ipv6/ip6_input.c +++ b/net/ipv6/ip6_input.c @@ -151,6 +151,16 @@ int ipv6_rcv(struct sk_buff *skb, struct net_device *dev, struct packet_type *pt if (ipv6_addr_is_multicast(&hdr->saddr)) goto err; + /* While RFC4291 is not explicit about v4mapped addresses + * in IPv6 headers, it seems clear linux dual-stack + * model can not deal properly with these. + * Security models could be fooled by ::ffff:127.0.0.1 for example. + * + * https://tools.ietf.org/html/draft-itojun-v6ops-v4mapped-harmful-02 + */ + if (ipv6_addr_v4mapped(&hdr->saddr)) + goto err; + skb->transport_header = skb->network_header + sizeof(*hdr); IP6CB(skb)->nhoff = offsetof(struct ipv6hdr, nexthdr); diff --git a/net/llc/llc_c_ac.c b/net/llc/llc_c_ac.c index 4b60f68cb492..8354ae40ec85 100644 --- a/net/llc/llc_c_ac.c +++ b/net/llc/llc_c_ac.c @@ -372,6 +372,7 @@ int llc_conn_ac_send_i_cmd_p_set_1(struct sock *sk, struct sk_buff *skb) llc_pdu_init_as_i_cmd(skb, 1, llc->vS, llc->vR); rc = llc_mac_hdr_init(skb, llc->dev->dev_addr, llc->daddr.mac); if (likely(!rc)) { + skb_get(skb); llc_conn_send_pdu(sk, skb); llc_conn_ac_inc_vs_by_1(sk, skb); } @@ -389,7 +390,8 @@ static int llc_conn_ac_send_i_cmd_p_set_0(struct sock *sk, struct sk_buff *skb) llc_pdu_init_as_i_cmd(skb, 0, llc->vS, llc->vR); rc = llc_mac_hdr_init(skb, llc->dev->dev_addr, llc->daddr.mac); if (likely(!rc)) { - rc = llc_conn_send_pdu(sk, skb); + skb_get(skb); + llc_conn_send_pdu(sk, skb); llc_conn_ac_inc_vs_by_1(sk, skb); } return rc; @@ -406,6 +408,7 @@ int llc_conn_ac_send_i_xxx_x_set_0(struct sock *sk, struct sk_buff *skb) llc_pdu_init_as_i_cmd(skb, 0, llc->vS, llc->vR); rc = llc_mac_hdr_init(skb, llc->dev->dev_addr, llc->daddr.mac); if (likely(!rc)) { + skb_get(skb); llc_conn_send_pdu(sk, skb); llc_conn_ac_inc_vs_by_1(sk, skb); } @@ -916,7 +919,8 @@ static int llc_conn_ac_send_i_rsp_f_set_ackpf(struct sock *sk, llc_pdu_init_as_i_cmd(skb, llc->ack_pf, llc->vS, llc->vR); rc = llc_mac_hdr_init(skb, llc->dev->dev_addr, llc->daddr.mac); if (likely(!rc)) { - rc = llc_conn_send_pdu(sk, skb); + skb_get(skb); + llc_conn_send_pdu(sk, skb); llc_conn_ac_inc_vs_by_1(sk, skb); } return rc; diff --git a/net/llc/llc_conn.c b/net/llc/llc_conn.c index 79c346fd859b..d861b74ad068 100644 --- a/net/llc/llc_conn.c +++ b/net/llc/llc_conn.c @@ -30,7 +30,7 @@ #endif static int llc_find_offset(int state, int ev_type); -static int llc_conn_send_pdus(struct sock *sk, struct sk_buff *skb); +static 
void llc_conn_send_pdus(struct sock *sk); static int llc_conn_service(struct sock *sk, struct sk_buff *skb); static int llc_exec_conn_trans_actions(struct sock *sk, struct llc_conn_state_trans *trans, @@ -193,11 +193,11 @@ out_skb_put: return rc; } -int llc_conn_send_pdu(struct sock *sk, struct sk_buff *skb) +void llc_conn_send_pdu(struct sock *sk, struct sk_buff *skb) { /* queue PDU to send to MAC layer */ skb_queue_tail(&sk->sk_write_queue, skb); - return llc_conn_send_pdus(sk, skb); + llc_conn_send_pdus(sk); } /** @@ -255,7 +255,7 @@ void llc_conn_resend_i_pdu_as_cmd(struct sock *sk, u8 nr, u8 first_p_bit) if (howmany_resend > 0) llc->vS = (llc->vS + 1) % LLC_2_SEQ_NBR_MODULO; /* any PDUs to re-send are queued up; start sending to MAC */ - llc_conn_send_pdus(sk, NULL); + llc_conn_send_pdus(sk); out:; } @@ -296,7 +296,7 @@ void llc_conn_resend_i_pdu_as_rsp(struct sock *sk, u8 nr, u8 first_f_bit) if (howmany_resend > 0) llc->vS = (llc->vS + 1) % LLC_2_SEQ_NBR_MODULO; /* any PDUs to re-send are queued up; start sending to MAC */ - llc_conn_send_pdus(sk, NULL); + llc_conn_send_pdus(sk); out:; } @@ -340,16 +340,12 @@ out: /** * llc_conn_send_pdus - Sends queued PDUs * @sk: active connection - * @hold_skb: the skb held by caller, or NULL if does not care * - * Sends queued pdus to MAC layer for transmission. When @hold_skb is - * NULL, always return 0. Otherwise, return 0 if @hold_skb is sent - * successfully, or 1 for failure. + * Sends queued pdus to MAC layer for transmission. */ -static int llc_conn_send_pdus(struct sock *sk, struct sk_buff *hold_skb) +static void llc_conn_send_pdus(struct sock *sk) { struct sk_buff *skb; - int ret = 0; while ((skb = skb_dequeue(&sk->sk_write_queue)) != NULL) { struct llc_pdu_sn *pdu = llc_pdu_sn_hdr(skb); @@ -361,20 +357,10 @@ static int llc_conn_send_pdus(struct sock *sk, struct sk_buff *hold_skb) skb_queue_tail(&llc_sk(sk)->pdu_unack_q, skb); if (!skb2) break; - dev_queue_xmit(skb2); - } else { - bool is_target = skb == hold_skb; - int rc; - - if (is_target) - skb_get(skb); - rc = dev_queue_xmit(skb); - if (is_target) - ret = rc; + skb = skb2; } + dev_queue_xmit(skb); } - - return ret; } /** diff --git a/net/llc/llc_s_ac.c b/net/llc/llc_s_ac.c index a94bd56bcac6..7ae4cc684d3a 100644 --- a/net/llc/llc_s_ac.c +++ b/net/llc/llc_s_ac.c @@ -58,8 +58,10 @@ int llc_sap_action_send_ui(struct llc_sap *sap, struct sk_buff *skb) ev->daddr.lsap, LLC_PDU_CMD); llc_pdu_init_as_ui_cmd(skb); rc = llc_mac_hdr_init(skb, ev->saddr.mac, ev->daddr.mac); - if (likely(!rc)) + if (likely(!rc)) { + skb_get(skb); rc = dev_queue_xmit(skb); + } return rc; } @@ -81,8 +83,10 @@ int llc_sap_action_send_xid_c(struct llc_sap *sap, struct sk_buff *skb) ev->daddr.lsap, LLC_PDU_CMD); llc_pdu_init_as_xid_cmd(skb, LLC_XID_NULL_CLASS_2, 0); rc = llc_mac_hdr_init(skb, ev->saddr.mac, ev->daddr.mac); - if (likely(!rc)) + if (likely(!rc)) { + skb_get(skb); rc = dev_queue_xmit(skb); + } return rc; } @@ -135,8 +139,10 @@ int llc_sap_action_send_test_c(struct llc_sap *sap, struct sk_buff *skb) ev->daddr.lsap, LLC_PDU_CMD); llc_pdu_init_as_test_cmd(skb); rc = llc_mac_hdr_init(skb, ev->saddr.mac, ev->daddr.mac); - if (likely(!rc)) + if (likely(!rc)) { + skb_get(skb); rc = dev_queue_xmit(skb); + } return rc; } diff --git a/net/llc/llc_sap.c b/net/llc/llc_sap.c index 5404d0d195cc..d51ff9df9c95 100644 --- a/net/llc/llc_sap.c +++ b/net/llc/llc_sap.c @@ -197,29 +197,22 @@ out: * After executing actions of the event, upper layer will be indicated * if needed(on receiving an UI frame). 
sk can be null for the * datalink_proto case. + * + * This function always consumes a reference to the skb. */ static void llc_sap_state_process(struct llc_sap *sap, struct sk_buff *skb) { struct llc_sap_state_ev *ev = llc_sap_ev(skb); - /* - * We have to hold the skb, because llc_sap_next_state - * will kfree it in the sending path and we need to - * look at the skb->cb, where we encode llc_sap_state_ev. - */ - skb_get(skb); ev->ind_cfm_flag = 0; llc_sap_next_state(sap, skb); - if (ev->ind_cfm_flag == LLC_IND) { - if (skb->sk->sk_state == TCP_LISTEN) - kfree_skb(skb); - else { - llc_save_primitive(skb->sk, skb, ev->prim); - /* queue skb to the user. */ - if (sock_queue_rcv_skb(skb->sk, skb)) - kfree_skb(skb); - } + if (ev->ind_cfm_flag == LLC_IND && skb->sk->sk_state != TCP_LISTEN) { + llc_save_primitive(skb->sk, skb, ev->prim); + + /* queue skb to the user. */ + if (sock_queue_rcv_skb(skb->sk, skb) == 0) + return; } kfree_skb(skb); } diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h index 424aca76a192..2527294f96c8 100644 --- a/net/mac80211/ieee80211_i.h +++ b/net/mac80211/ieee80211_i.h @@ -2060,6 +2060,9 @@ void ieee80211_tdls_cancel_channel_switch(struct wiphy *wiphy, const u8 *addr); void ieee80211_teardown_tdls_peers(struct ieee80211_sub_if_data *sdata); void ieee80211_tdls_chsw_work(struct work_struct *wk); +void ieee80211_tdls_handle_disconnect(struct ieee80211_sub_if_data *sdata, + const u8 *peer, u16 reason); +const char *ieee80211_get_reason_code_string(u16 reason_code); extern const struct ethtool_ops ieee80211_ethtool_ops; diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c index 1f2b1c8c373e..1e6dd3de116b 100644 --- a/net/mac80211/mlme.c +++ b/net/mac80211/mlme.c @@ -2431,7 +2431,8 @@ struct sk_buff *ieee80211_ap_probereq_get(struct ieee80211_hw *hw, rcu_read_lock(); ssid = ieee80211_bss_get_ie(cbss, WLAN_EID_SSID); - if (WARN_ON_ONCE(ssid == NULL)) + if (WARN_ONCE(!ssid || ssid[1] > IEEE80211_MAX_SSID_LEN, + "invalid SSID element (len=%d)", ssid ? 
ssid[1] : -1)) ssid_len = 0; else ssid_len = ssid[1]; @@ -2743,7 +2744,7 @@ static void ieee80211_rx_mgmt_auth(struct ieee80211_sub_if_data *sdata, #define case_WLAN(type) \ case WLAN_REASON_##type: return #type -static const char *ieee80211_get_reason_code_string(u16 reason_code) +const char *ieee80211_get_reason_code_string(u16 reason_code) { switch (reason_code) { case_WLAN(UNSPECIFIED); @@ -2808,6 +2809,11 @@ static void ieee80211_rx_mgmt_deauth(struct ieee80211_sub_if_data *sdata, if (len < 24 + 2) return; + if (!ether_addr_equal(mgmt->bssid, mgmt->sa)) { + ieee80211_tdls_handle_disconnect(sdata, mgmt->sa, reason_code); + return; + } + if (ifmgd->associated && ether_addr_equal(mgmt->bssid, ifmgd->associated->bssid)) { const u8 *bssid = ifmgd->associated->bssid; @@ -2857,8 +2863,14 @@ static void ieee80211_rx_mgmt_disassoc(struct ieee80211_sub_if_data *sdata, reason_code = le16_to_cpu(mgmt->u.disassoc.reason_code); - sdata_info(sdata, "disassociated from %pM (Reason: %u)\n", - mgmt->sa, reason_code); + if (!ether_addr_equal(mgmt->bssid, mgmt->sa)) { + ieee80211_tdls_handle_disconnect(sdata, mgmt->sa, reason_code); + return; + } + + sdata_info(sdata, "disassociated from %pM (Reason: %u=%s)\n", + mgmt->sa, reason_code, + ieee80211_get_reason_code_string(reason_code)); ieee80211_set_disassoc(sdata, 0, 0, false, NULL); @@ -4658,7 +4670,7 @@ int ieee80211_mgd_assoc(struct ieee80211_sub_if_data *sdata, rcu_read_lock(); ssidie = ieee80211_bss_get_ie(req->bss, WLAN_EID_SSID); - if (!ssidie) { + if (!ssidie || ssidie[1] > sizeof(assoc_data->ssid)) { rcu_read_unlock(); kfree(assoc_data); return -EINVAL; diff --git a/net/mac80211/tdls.c b/net/mac80211/tdls.c index c9eeb3f12808..ce2ece424384 100644 --- a/net/mac80211/tdls.c +++ b/net/mac80211/tdls.c @@ -1963,3 +1963,26 @@ void ieee80211_tdls_chsw_work(struct work_struct *wk) } rtnl_unlock(); } + +void ieee80211_tdls_handle_disconnect(struct ieee80211_sub_if_data *sdata, + const u8 *peer, u16 reason) +{ + struct ieee80211_sta *sta; + + rcu_read_lock(); + sta = ieee80211_find_sta(&sdata->vif, peer); + if (!sta || !sta->tdls) { + rcu_read_unlock(); + return; + } + rcu_read_unlock(); + + tdls_dbg(sdata, "disconnected from TDLS peer %pM (Reason: %u=%s)\n", + peer, reason, + ieee80211_get_reason_code_string(reason)); + + ieee80211_tdls_oper_request(&sdata->vif, peer, + NL80211_TDLS_TEARDOWN, + WLAN_REASON_TDLS_TEARDOWN_UNREACHABLE, + GFP_ATOMIC); +} diff --git a/net/netfilter/ipset/ip_set_core.c b/net/netfilter/ipset/ip_set_core.c index 54f3d7cb23e6..caa26184f7e3 100644 --- a/net/netfilter/ipset/ip_set_core.c +++ b/net/netfilter/ipset/ip_set_core.c @@ -1930,8 +1930,9 @@ ip_set_sockfn_get(struct sock *sk, int optval, void __user *user, int *len) } req_version->version = IPSET_PROTOCOL; - ret = copy_to_user(user, req_version, - sizeof(struct ip_set_req_version)); + if (copy_to_user(user, req_version, + sizeof(struct ip_set_req_version))) + ret = -EFAULT; goto done; } case IP_SET_OP_GET_BYNAME: { @@ -1988,7 +1989,8 @@ ip_set_sockfn_get(struct sock *sk, int optval, void __user *user, int *len) } /* end of switch(op) */ copy: - ret = copy_to_user(user, data, copylen); + if (copy_to_user(user, data, copylen)) + ret = -EFAULT; done: vfree(data); diff --git a/net/netfilter/ipvs/ip_vs_ctl.c b/net/netfilter/ipvs/ip_vs_ctl.c index e9a3e74567b7..79f4ffe7291a 100644 --- a/net/netfilter/ipvs/ip_vs_ctl.c +++ b/net/netfilter/ipvs/ip_vs_ctl.c @@ -97,7 +97,6 @@ static bool __ip_vs_addr_is_local_v6(struct net *net, static void update_defense_level(struct netns_ipvs *ipvs) 
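/*
 * A minimal sketch, not part of the ip_vs_ctl.c hunk around it and using
 * hypothetical names: the change below turns a function-level
 * "static int old_secure_tcp" into a per-netns field (ipvs->old_secure_tcp),
 * so each network namespace keeps its own defense-level history instead of
 * all namespaces sharing one hidden global.  The general pattern:
 */
struct drop_state {
	int secure_tcp;		/* current mode for this instance */
	int old_secure_tcp;	/* was a function-local static before */
};

static void update_defense(struct drop_state *st, int nomem)
{
	/* Decisions compare against this instance's own history only. */
	if (nomem && st->old_secure_tcp < 2)
		st->secure_tcp = 2;
	else if (!nomem && st->old_secure_tcp >= 2)
		st->secure_tcp = 1;

	st->old_secure_tcp = st->secure_tcp;
}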
{ struct sysinfo i; - static int old_secure_tcp = 0; int availmem; int nomem; int to_change = -1; @@ -178,35 +177,35 @@ static void update_defense_level(struct netns_ipvs *ipvs) spin_lock(&ipvs->securetcp_lock); switch (ipvs->sysctl_secure_tcp) { case 0: - if (old_secure_tcp >= 2) + if (ipvs->old_secure_tcp >= 2) to_change = 0; break; case 1: if (nomem) { - if (old_secure_tcp < 2) + if (ipvs->old_secure_tcp < 2) to_change = 1; ipvs->sysctl_secure_tcp = 2; } else { - if (old_secure_tcp >= 2) + if (ipvs->old_secure_tcp >= 2) to_change = 0; } break; case 2: if (nomem) { - if (old_secure_tcp < 2) + if (ipvs->old_secure_tcp < 2) to_change = 1; } else { - if (old_secure_tcp >= 2) + if (ipvs->old_secure_tcp >= 2) to_change = 0; ipvs->sysctl_secure_tcp = 1; } break; case 3: - if (old_secure_tcp < 2) + if (ipvs->old_secure_tcp < 2) to_change = 1; break; } - old_secure_tcp = ipvs->sysctl_secure_tcp; + ipvs->old_secure_tcp = ipvs->sysctl_secure_tcp; if (to_change >= 0) ip_vs_protocol_timeout_change(ipvs, ipvs->sysctl_secure_tcp > 1); diff --git a/net/netfilter/xt_quota2.c b/net/netfilter/xt_quota2.c index 94663440d160..8c481e359d94 100644 --- a/net/netfilter/xt_quota2.c +++ b/net/netfilter/xt_quota2.c @@ -273,8 +273,8 @@ static void quota_mt2_destroy(const struct xt_mtdtor_param *par) } list_del(&e->list); - remove_proc_entry(e->name, proc_xt_quota); spin_unlock_bh(&counter_list_lock); + remove_proc_entry(e->name, proc_xt_quota); kfree(e); } diff --git a/net/nfc/llcp_sock.c b/net/nfc/llcp_sock.c index 9c222a106c7f..44d6b8355eab 100644 --- a/net/nfc/llcp_sock.c +++ b/net/nfc/llcp_sock.c @@ -118,9 +118,14 @@ static int llcp_sock_bind(struct socket *sock, struct sockaddr *addr, int alen) llcp_sock->service_name = kmemdup(llcp_addr.service_name, llcp_sock->service_name_len, GFP_KERNEL); - + if (!llcp_sock->service_name) { + ret = -ENOMEM; + goto put_dev; + } llcp_sock->ssap = nfc_llcp_get_sdp_ssap(local, llcp_sock); if (llcp_sock->ssap == LLCP_SAP_MAX) { + kfree(llcp_sock->service_name); + llcp_sock->service_name = NULL; ret = -EADDRINUSE; goto put_dev; } @@ -1005,10 +1010,13 @@ static int llcp_sock_create(struct net *net, struct socket *sock, sock->type != SOCK_RAW) return -ESOCKTNOSUPPORT; - if (sock->type == SOCK_RAW) + if (sock->type == SOCK_RAW) { + if (!capable(CAP_NET_RAW)) + return -EPERM; sock->ops = &llcp_rawsock_ops; - else + } else { sock->ops = &llcp_sock_ops; + } sk = nfc_llcp_sock_alloc(sock, sock->type, GFP_ATOMIC, kern); if (sk == NULL) diff --git a/net/nfc/netlink.c b/net/nfc/netlink.c index 32cb0c87e852..04d4c388a7a8 100644 --- a/net/nfc/netlink.c +++ b/net/nfc/netlink.c @@ -936,7 +936,8 @@ static int nfc_genl_dep_link_down(struct sk_buff *skb, struct genl_info *info) int rc; u32 idx; - if (!info->attrs[NFC_ATTR_DEVICE_INDEX]) + if (!info->attrs[NFC_ATTR_DEVICE_INDEX] || + !info->attrs[NFC_ATTR_TARGET_INDEX]) return -EINVAL; idx = nla_get_u32(info->attrs[NFC_ATTR_DEVICE_INDEX]); @@ -985,7 +986,8 @@ static int nfc_genl_llc_get_params(struct sk_buff *skb, struct genl_info *info) struct sk_buff *msg = NULL; u32 idx; - if (!info->attrs[NFC_ATTR_DEVICE_INDEX]) + if (!info->attrs[NFC_ATTR_DEVICE_INDEX] || + !info->attrs[NFC_ATTR_FIRMWARE_NAME]) return -EINVAL; idx = nla_get_u32(info->attrs[NFC_ATTR_DEVICE_INDEX]); @@ -1064,7 +1066,6 @@ static int nfc_genl_llc_set_params(struct sk_buff *skb, struct genl_info *info) local = nfc_llcp_find_local(dev); if (!local) { - nfc_put_device(dev); rc = -ENODEV; goto exit; } @@ -1124,7 +1125,6 @@ static int nfc_genl_llc_sdreq(struct sk_buff *skb, struct 
genl_info *info) local = nfc_llcp_find_local(dev); if (!local) { - nfc_put_device(dev); rc = -ENODEV; goto exit; } diff --git a/net/openvswitch/datapath.c b/net/openvswitch/datapath.c index deadfdab1bc3..caa23ee913f0 100644 --- a/net/openvswitch/datapath.c +++ b/net/openvswitch/datapath.c @@ -2152,7 +2152,7 @@ static const struct nla_policy vport_policy[OVS_VPORT_ATTR_MAX + 1] = { [OVS_VPORT_ATTR_STATS] = { .len = sizeof(struct ovs_vport_stats) }, [OVS_VPORT_ATTR_PORT_NO] = { .type = NLA_U32 }, [OVS_VPORT_ATTR_TYPE] = { .type = NLA_U32 }, - [OVS_VPORT_ATTR_UPCALL_PID] = { .type = NLA_U32 }, + [OVS_VPORT_ATTR_UPCALL_PID] = { .type = NLA_UNSPEC }, [OVS_VPORT_ATTR_OPTIONS] = { .type = NLA_NESTED }, }; diff --git a/net/rds/ib.c b/net/rds/ib.c index ed51ccc84b3a..aa5f75d4880c 100644 --- a/net/rds/ib.c +++ b/net/rds/ib.c @@ -146,6 +146,9 @@ static void rds_ib_add_one(struct ib_device *device) atomic_set(&rds_ibdev->refcount, 1); INIT_WORK(&rds_ibdev->free_work, rds_ib_dev_free); + INIT_LIST_HEAD(&rds_ibdev->ipaddr_list); + INIT_LIST_HEAD(&rds_ibdev->conn_list); + rds_ibdev->max_wrs = dev_attr->max_qp_wr; rds_ibdev->max_sge = min(dev_attr->max_sge, RDS_IB_MAX_SGE); @@ -187,9 +190,6 @@ static void rds_ib_add_one(struct ib_device *device) rds_ibdev->fmr_max_remaps, rds_ibdev->max_1m_fmrs, rds_ibdev->max_8k_fmrs); - INIT_LIST_HEAD(&rds_ibdev->ipaddr_list); - INIT_LIST_HEAD(&rds_ibdev->conn_list); - down_write(&rds_ib_devices_lock); list_add_tail_rcu(&rds_ibdev->list, &rds_ib_devices); up_write(&rds_ib_devices_lock); diff --git a/net/rds/tcp.c b/net/rds/tcp.c index 554d4b461983..c10622a9321c 100644 --- a/net/rds/tcp.c +++ b/net/rds/tcp.c @@ -352,9 +352,11 @@ static void rds_tcp_kill_sock(struct net *net) } spin_unlock_irq(&rds_tcp_conn_lock); list_for_each_entry_safe(tc, _tc, &tmp_list, t_tcp_node) { - sk = tc->t_sock->sk; - sk->sk_prot->disconnect(sk, 0); - tcp_done(sk); + if (tc->t_sock) { + sk = tc->t_sock->sk; + sk->sk_prot->disconnect(sk, 0); + tcp_done(sk); + } if (tc->conn->c_passive) rds_conn_destroy(tc->conn->c_passive); rds_conn_destroy(tc->conn); diff --git a/net/sched/act_api.c b/net/sched/act_api.c index f44fea22d69c..b3a165cb63ee 100644 --- a/net/sched/act_api.c +++ b/net/sched/act_api.c @@ -946,10 +946,15 @@ static int tcf_action_add(struct net *net, struct nlattr *nla, struct nlmsghdr *n, u32 portid, int ovr) { - int ret = 0; + int loop, ret; LIST_HEAD(actions); - ret = tcf_action_init(net, nla, NULL, NULL, ovr, 0, &actions); + for (loop = 0; loop < 10; loop++) { + ret = tcf_action_init(net, nla, NULL, NULL, ovr, 0, &actions); + if (ret != -EAGAIN) + break; + } + if (ret) goto done; @@ -992,10 +997,7 @@ static int tc_ctl_action(struct sk_buff *skb, struct nlmsghdr *n) */ if (n->nlmsg_flags & NLM_F_REPLACE) ovr = 1; -replay: ret = tcf_action_add(net, tca[TCA_ACT_TAB], n, portid, ovr); - if (ret == -EAGAIN) - goto replay; break; case RTM_DELACTION: ret = tca_action_gd(net, tca[TCA_ACT_TAB], n, diff --git a/net/sched/cls_u32.c b/net/sched/cls_u32.c index 4fbb67430ce4..4d745a2efd20 100644 --- a/net/sched/cls_u32.c +++ b/net/sched/cls_u32.c @@ -734,6 +734,7 @@ static int u32_change(struct net *net, struct sk_buff *in_skb, struct nlattr *opt = tca[TCA_OPTIONS]; struct nlattr *tb[TCA_U32_MAX + 1]; u32 htid; + size_t sel_size; int err; #ifdef CONFIG_CLS_U32_PERF size_t size; @@ -827,8 +828,11 @@ static int u32_change(struct net *net, struct sk_buff *in_skb, return -EINVAL; s = nla_data(tb[TCA_U32_SEL]); + sel_size = sizeof(*s) + sizeof(*s->keys) * s->nkeys; + if (nla_len(tb[TCA_U32_SEL]) < 
sel_size) + return -EINVAL; - n = kzalloc(sizeof(*n) + s->nkeys*sizeof(struct tc_u32_key), GFP_KERNEL); + n = kzalloc(offsetof(typeof(*n), sel) + sel_size, GFP_KERNEL); if (n == NULL) return -ENOBUFS; @@ -841,7 +845,7 @@ static int u32_change(struct net *net, struct sk_buff *in_skb, } #endif - memcpy(&n->sel, s, sizeof(*s) + s->nkeys*sizeof(struct tc_u32_key)); + memcpy(&n->sel, s, sel_size); RCU_INIT_POINTER(n->ht_up, ht); n->handle = handle; n->fshift = s->hmask ? ffs(ntohl(s->hmask)) - 1 : 0; diff --git a/net/sched/sch_cbq.c b/net/sched/sch_cbq.c index baafddf229ce..8182f9bc197c 100644 --- a/net/sched/sch_cbq.c +++ b/net/sched/sch_cbq.c @@ -1340,6 +1340,26 @@ static const struct nla_policy cbq_policy[TCA_CBQ_MAX + 1] = { [TCA_CBQ_POLICE] = { .len = sizeof(struct tc_cbq_police) }, }; +static int cbq_opt_parse(struct nlattr *tb[TCA_CBQ_MAX + 1], struct nlattr *opt) +{ + int err; + + if (!opt) + return -EINVAL; + + err = nla_parse_nested(tb, TCA_CBQ_MAX, opt, cbq_policy); + if (err < 0) + return err; + + if (tb[TCA_CBQ_WRROPT]) { + const struct tc_cbq_wrropt *wrr = nla_data(tb[TCA_CBQ_WRROPT]); + + if (wrr->priority > TC_CBQ_MAXPRIO) + err = -EINVAL; + } + return err; +} + static int cbq_init(struct Qdisc *sch, struct nlattr *opt) { struct cbq_sched_data *q = qdisc_priv(sch); @@ -1347,7 +1367,7 @@ static int cbq_init(struct Qdisc *sch, struct nlattr *opt) struct tc_ratespec *r; int err; - err = nla_parse_nested(tb, TCA_CBQ_MAX, opt, cbq_policy); + err = cbq_opt_parse(tb, opt); if (err < 0) return err; @@ -1728,10 +1748,7 @@ cbq_change_class(struct Qdisc *sch, u32 classid, u32 parentid, struct nlattr **t struct cbq_class *parent; struct qdisc_rate_table *rtab = NULL; - if (opt == NULL) - return -EINVAL; - - err = nla_parse_nested(tb, TCA_CBQ_MAX, opt, cbq_policy); + err = cbq_opt_parse(tb, opt); if (err < 0) return err; diff --git a/net/sched/sch_dsmark.c b/net/sched/sch_dsmark.c index cce4e6ada7fa..5f8f6d94336c 100644 --- a/net/sched/sch_dsmark.c +++ b/net/sched/sch_dsmark.c @@ -362,6 +362,8 @@ static int dsmark_init(struct Qdisc *sch, struct nlattr *opt) goto errout; err = -EINVAL; + if (!tb[TCA_DSMARK_INDICES]) + goto errout; indices = nla_get_u16(tb[TCA_DSMARK_INDICES]); if (hweight32(indices) != 1) diff --git a/net/sched/sch_fq_codel.c b/net/sched/sch_fq_codel.c index d3fc8f9dd3d4..1800f7977595 100644 --- a/net/sched/sch_fq_codel.c +++ b/net/sched/sch_fq_codel.c @@ -55,7 +55,7 @@ struct fq_codel_sched_data { struct fq_codel_flow *flows; /* Flows table [flows_cnt] */ u32 *backlogs; /* backlog table [flows_cnt] */ u32 flows_cnt; /* number of flows */ - u32 perturbation; /* hash perturbation */ + siphash_key_t perturbation; /* hash perturbation */ u32 quantum; /* psched_mtu(qdisc_dev(sch)); */ struct codel_params cparams; struct codel_stats cstats; @@ -69,7 +69,7 @@ struct fq_codel_sched_data { static unsigned int fq_codel_hash(const struct fq_codel_sched_data *q, struct sk_buff *skb) { - u32 hash = skb_get_hash_perturb(skb, q->perturbation); + u32 hash = skb_get_hash_perturb(skb, &q->perturbation); return reciprocal_scale(hash, q->flows_cnt); } @@ -420,7 +420,7 @@ static int fq_codel_init(struct Qdisc *sch, struct nlattr *opt) sch->limit = 10*1024; q->flows_cnt = 1024; q->quantum = psched_mtu(qdisc_dev(sch)); - q->perturbation = prandom_u32(); + get_random_bytes(&q->perturbation, sizeof(q->perturbation)); INIT_LIST_HEAD(&q->new_flows); INIT_LIST_HEAD(&q->old_flows); codel_params_init(&q->cparams, sch); diff --git a/net/sched/sch_hhf.c b/net/sched/sch_hhf.c index 
dc68dccc6b0c..40ec5b280eb6 100644 --- a/net/sched/sch_hhf.c +++ b/net/sched/sch_hhf.c @@ -4,11 +4,11 @@ * Copyright (C) 2013 Nandita Dukkipati <nanditad@google.com> */ -#include <linux/jhash.h> #include <linux/jiffies.h> #include <linux/module.h> #include <linux/skbuff.h> #include <linux/vmalloc.h> +#include <linux/siphash.h> #include <net/pkt_sched.h> #include <net/sock.h> @@ -125,7 +125,7 @@ struct wdrr_bucket { struct hhf_sched_data { struct wdrr_bucket buckets[WDRR_BUCKET_CNT]; - u32 perturbation; /* hash perturbation */ + siphash_key_t perturbation; /* hash perturbation */ u32 quantum; /* psched_mtu(qdisc_dev(sch)); */ u32 drop_overlimit; /* number of times max qdisc packet * limit was hit @@ -263,7 +263,7 @@ static enum wdrr_bucket_idx hhf_classify(struct sk_buff *skb, struct Qdisc *sch) } /* Get hashed flow-id of the skb. */ - hash = skb_get_hash_perturb(skb, q->perturbation); + hash = skb_get_hash_perturb(skb, &q->perturbation); /* Check if this packet belongs to an already established HH flow. */ flow_pos = hash & HHF_BIT_MASK; @@ -602,7 +602,7 @@ static int hhf_init(struct Qdisc *sch, struct nlattr *opt) sch->limit = 1000; q->quantum = psched_mtu(qdisc_dev(sch)); - q->perturbation = prandom_u32(); + get_random_bytes(&q->perturbation, sizeof(q->perturbation)); INIT_LIST_HEAD(&q->new_buckets); INIT_LIST_HEAD(&q->old_buckets); diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c index 7acf1f2b8dfc..caf33af4f9a7 100644 --- a/net/sched/sch_netem.c +++ b/net/sched/sch_netem.c @@ -464,7 +464,7 @@ static int netem_enqueue(struct sk_buff *skb, struct Qdisc *sch) * skb will be queued. */ if (count > 1 && (skb2 = skb_clone(skb, GFP_ATOMIC)) != NULL) { - struct Qdisc *rootq = qdisc_root(sch); + struct Qdisc *rootq = qdisc_root_bh(sch); u32 dupsave = q->duplicate; /* prevent duplicating a dup... */ q->duplicate = 0; @@ -713,7 +713,7 @@ static int get_dist_table(struct Qdisc *sch, const struct nlattr *attr) int i; size_t s; - if (n > NETEM_DIST_MAX) + if (!n || n > NETEM_DIST_MAX) return -EINVAL; s = sizeof(struct disttable) + n * sizeof(s16); diff --git a/net/sched/sch_sfb.c b/net/sched/sch_sfb.c index c69611640fa5..10c0b184cdbe 100644 --- a/net/sched/sch_sfb.c +++ b/net/sched/sch_sfb.c @@ -22,7 +22,7 @@ #include <linux/errno.h> #include <linux/skbuff.h> #include <linux/random.h> -#include <linux/jhash.h> +#include <linux/siphash.h> #include <net/ip.h> #include <net/pkt_sched.h> #include <net/inet_ecn.h> @@ -48,7 +48,7 @@ struct sfb_bucket { * (Section 4.4 of SFB reference : moving hash functions) */ struct sfb_bins { - u32 perturbation; /* jhash perturbation */ + siphash_key_t perturbation; /* siphash key */ struct sfb_bucket bins[SFB_LEVELS][SFB_NUMBUCKETS]; }; @@ -219,7 +219,8 @@ static u32 sfb_compute_qlen(u32 *prob_r, u32 *avgpm_r, const struct sfb_sched_da static void sfb_init_perturbation(u32 slot, struct sfb_sched_data *q) { - q->bins[slot].perturbation = prandom_u32(); + get_random_bytes(&q->bins[slot].perturbation, + sizeof(q->bins[slot].perturbation)); } static void sfb_swap_slot(struct sfb_sched_data *q) @@ -313,9 +314,9 @@ static int sfb_enqueue(struct sk_buff *skb, struct Qdisc *sch) /* If using external classifiers, get result and record it. 
*/ if (!sfb_classify(skb, fl, &ret, &salt)) goto other_drop; - sfbhash = jhash_1word(salt, q->bins[slot].perturbation); + sfbhash = siphash_1u32(salt, &q->bins[slot].perturbation); } else { - sfbhash = skb_get_hash_perturb(skb, q->bins[slot].perturbation); + sfbhash = skb_get_hash_perturb(skb, &q->bins[slot].perturbation); } @@ -351,7 +352,7 @@ static int sfb_enqueue(struct sk_buff *skb, struct Qdisc *sch) /* Inelastic flow */ if (q->double_buffering) { sfbhash = skb_get_hash_perturb(skb, - q->bins[slot].perturbation); + &q->bins[slot].perturbation); if (!sfbhash) sfbhash = 1; sfb_skb_cb(skb)->hashes[slot] = sfbhash; diff --git a/net/sched/sch_sfq.c b/net/sched/sch_sfq.c index 8b8c084b32cd..e2e4ebc0c4c3 100644 --- a/net/sched/sch_sfq.c +++ b/net/sched/sch_sfq.c @@ -18,7 +18,7 @@ #include <linux/errno.h> #include <linux/init.h> #include <linux/skbuff.h> -#include <linux/jhash.h> +#include <linux/siphash.h> #include <linux/slab.h> #include <linux/vmalloc.h> #include <net/netlink.h> @@ -120,7 +120,7 @@ struct sfq_sched_data { u8 headdrop; u8 maxdepth; /* limit of packets per flow */ - u32 perturbation; + siphash_key_t perturbation; u8 cur_depth; /* depth of longest slot */ u8 flags; unsigned short scaled_quantum; /* SFQ_ALLOT_SIZE(quantum) */ @@ -158,7 +158,7 @@ static inline struct sfq_head *sfq_dep_head(struct sfq_sched_data *q, sfq_index static unsigned int sfq_hash(const struct sfq_sched_data *q, const struct sk_buff *skb) { - return skb_get_hash_perturb(skb, q->perturbation) & (q->divisor - 1); + return skb_get_hash_perturb(skb, &q->perturbation) & (q->divisor - 1); } static unsigned int sfq_classify(struct sk_buff *skb, struct Qdisc *sch, @@ -607,9 +607,11 @@ static void sfq_perturbation(unsigned long arg) struct Qdisc *sch = (struct Qdisc *)arg; struct sfq_sched_data *q = qdisc_priv(sch); spinlock_t *root_lock = qdisc_lock(qdisc_root_sleeping(sch)); + siphash_key_t nkey; + get_random_bytes(&nkey, sizeof(nkey)); spin_lock(root_lock); - q->perturbation = prandom_u32(); + q->perturbation = nkey; if (!q->filter_list && q->tail) sfq_rehash(sch); spin_unlock(root_lock); @@ -681,7 +683,7 @@ static int sfq_change(struct Qdisc *sch, struct nlattr *opt) del_timer(&q->perturb_timer); if (q->perturb_period) { mod_timer(&q->perturb_timer, jiffies + q->perturb_period); - q->perturbation = prandom_u32(); + get_random_bytes(&q->perturbation, sizeof(q->perturbation)); } sch_tree_unlock(sch); kfree(p); @@ -737,7 +739,7 @@ static int sfq_init(struct Qdisc *sch, struct nlattr *opt) q->quantum = psched_mtu(qdisc_dev(sch)); q->scaled_quantum = SFQ_ALLOT_SIZE(q->quantum); q->perturb_period = 0; - q->perturbation = prandom_u32(); + get_random_bytes(&q->perturbation, sizeof(q->perturbation)); if (opt) { int err = sfq_change(sch, opt); diff --git a/net/sctp/ipv6.c b/net/sctp/ipv6.c index 9fa0b0dc3868..ae619cffc3a9 100644 --- a/net/sctp/ipv6.c +++ b/net/sctp/ipv6.c @@ -970,7 +970,7 @@ static const struct proto_ops inet6_seqpacket_ops = { .owner = THIS_MODULE, .release = inet6_release, .bind = inet6_bind, - .connect = inet_dgram_connect, + .connect = sctp_inet_connect, .socketpair = sock_no_socketpair, .accept = inet_accept, .getname = sctp_getname, diff --git a/net/sctp/protocol.c b/net/sctp/protocol.c index 07c54b212cd7..8816e49fd88b 100644 --- a/net/sctp/protocol.c +++ b/net/sctp/protocol.c @@ -1012,7 +1012,7 @@ static const struct proto_ops inet_seqpacket_ops = { .owner = THIS_MODULE, .release = inet_release, /* Needs to be wrapped... 
*/ .bind = inet_bind, - .connect = inet_dgram_connect, + .connect = sctp_inet_connect, .socketpair = sock_no_socketpair, .accept = inet_accept, .getname = inet_getname, /* Semantics are different. */ diff --git a/net/sctp/socket.c b/net/sctp/socket.c index 53f1b33bca4e..2b6c88b9a038 100644 --- a/net/sctp/socket.c +++ b/net/sctp/socket.c @@ -1072,7 +1072,7 @@ out: */ static int __sctp_connect(struct sock *sk, struct sockaddr *kaddrs, - int addrs_size, + int addrs_size, int flags, sctp_assoc_t *assoc_id) { struct net *net = sock_net(sk); @@ -1090,7 +1090,6 @@ static int __sctp_connect(struct sock *sk, union sctp_addr *sa_addr = NULL; void *addr_buf; unsigned short port; - unsigned int f_flags = 0; sp = sctp_sk(sk); ep = sp->ep; @@ -1238,13 +1237,7 @@ static int __sctp_connect(struct sock *sk, sp->pf->to_sk_daddr(sa_addr, sk); sk->sk_err = 0; - /* in-kernel sockets don't generally have a file allocated to them - * if all they do is call sock_create_kern(). - */ - if (sk->sk_socket->file) - f_flags = sk->sk_socket->file->f_flags; - - timeo = sock_sndtimeo(sk, f_flags & O_NONBLOCK); + timeo = sock_sndtimeo(sk, flags & O_NONBLOCK); if (assoc_id) *assoc_id = asoc->assoc_id; @@ -1340,7 +1333,7 @@ static int __sctp_setsockopt_connectx(struct sock *sk, { struct sockaddr *kaddrs; gfp_t gfp = GFP_KERNEL; - int err = 0; + int err = 0, flags = 0; pr_debug("%s: sk:%p addrs:%p addrs_size:%d\n", __func__, sk, addrs, addrs_size); @@ -1360,11 +1353,18 @@ static int __sctp_setsockopt_connectx(struct sock *sk, return -ENOMEM; if (__copy_from_user(kaddrs, addrs, addrs_size)) { - err = -EFAULT; - } else { - err = __sctp_connect(sk, kaddrs, addrs_size, assoc_id); + kfree(kaddrs); + return -EFAULT; } + /* in-kernel sockets don't generally have a file allocated to them + * if all they do is call sock_create_kern(). + */ + if (sk->sk_socket->file) + flags = sk->sk_socket->file->f_flags; + + err = __sctp_connect(sk, kaddrs, addrs_size, flags, assoc_id); + kfree(kaddrs); return err; @@ -3895,31 +3895,36 @@ out_nounlock: * len: the size of the address. */ static int sctp_connect(struct sock *sk, struct sockaddr *addr, - int addr_len) + int addr_len, int flags) { - int err = 0; struct sctp_af *af; + int err = -EINVAL; lock_sock(sk); - pr_debug("%s: sk:%p, sockaddr:%p, addr_len:%d\n", __func__, sk, addr, addr_len); /* Validate addr_len before calling common connect/connectx routine. */ af = sctp_get_af_specific(addr->sa_family); - if (!af || addr_len < af->sockaddr_len) { - err = -EINVAL; - } else { - /* Pass correct addr len to common routine (so it knows there - * is only one address being passed. - */ - err = __sctp_connect(sk, addr, af->sockaddr_len, NULL); - } + if (af && addr_len >= af->sockaddr_len) + err = __sctp_connect(sk, addr, af->sockaddr_len, flags, NULL); release_sock(sk); return err; } +int sctp_inet_connect(struct socket *sock, struct sockaddr *uaddr, + int addr_len, int flags) +{ + if (addr_len < sizeof(uaddr->sa_family)) + return -EINVAL; + + if (uaddr->sa_family == AF_UNSPEC) + return -EOPNOTSUPP; + + return sctp_connect(sock->sk, uaddr, addr_len, flags); +} + /* FIXME: Write comments. 
*/ static int sctp_disconnect(struct sock *sk, int flags) { @@ -7262,7 +7267,7 @@ void sctp_copy_sock(struct sock *newsk, struct sock *sk, newinet->inet_rcv_saddr = inet->inet_rcv_saddr; newinet->inet_dport = htons(asoc->peer.port); newinet->pmtudisc = inet->pmtudisc; - newinet->inet_id = asoc->next_tsn ^ jiffies; + newinet->inet_id = prandom_u32(); newinet->uc_ttl = inet->uc_ttl; newinet->mc_loop = 1; @@ -7428,7 +7433,6 @@ struct proto sctp_prot = { .name = "SCTP", .owner = THIS_MODULE, .close = sctp_close, - .connect = sctp_connect, .disconnect = sctp_disconnect, .accept = sctp_accept, .ioctl = sctp_ioctl, @@ -7443,7 +7447,7 @@ struct proto sctp_prot = { .backlog_rcv = sctp_backlog_rcv, .hash = sctp_hash, .unhash = sctp_unhash, - .get_port = sctp_get_port, + .no_autobind = true, .obj_size = sizeof(struct sctp_sock), .sysctl_mem = sysctl_sctp_mem, .sysctl_rmem = sysctl_sctp_rmem, @@ -7467,7 +7471,6 @@ struct proto sctpv6_prot = { .name = "SCTPv6", .owner = THIS_MODULE, .close = sctp_close, - .connect = sctp_connect, .disconnect = sctp_disconnect, .accept = sctp_accept, .ioctl = sctp_ioctl, @@ -7482,7 +7485,7 @@ struct proto sctpv6_prot = { .backlog_rcv = sctp_backlog_rcv, .hash = sctp_hash, .unhash = sctp_unhash, - .get_port = sctp_get_port, + .no_autobind = true, .obj_size = sizeof(struct sctp6_sock), .sysctl_mem = sysctl_sctp_mem, .sysctl_rmem = sysctl_sctp_rmem, diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c index 2bac09d3ec8c..2150b5f1db7b 100644 --- a/net/wireless/nl80211.c +++ b/net/wireless/nl80211.c @@ -210,6 +210,36 @@ cfg80211_get_dev_from_info(struct net *netns, struct genl_info *info) return __cfg80211_rdev_from_attrs(netns, info->attrs); } +static int validate_beacon_head(const struct nlattr *attr) +{ + const u8 *data = nla_data(attr); + unsigned int len = nla_len(attr); + const struct element *elem; + const struct ieee80211_mgmt *mgmt = (void *)data; + unsigned int fixedlen = offsetof(struct ieee80211_mgmt, + u.beacon.variable); + + if (len < fixedlen) + goto err; + + if (ieee80211_hdrlen(mgmt->frame_control) != + offsetof(struct ieee80211_mgmt, u.beacon)) + goto err; + + data += fixedlen; + len -= fixedlen; + + for_each_element(elem, data, len) { + /* nothing */ + } + + if (for_each_element_completed(elem, data, len)) + return 0; + +err: + return -EINVAL; +} + /* policy for the attributes */ static const struct nla_policy nl80211_policy[NUM_NL80211_ATTR] = { [NL80211_ATTR_WIPHY] = { .type = NLA_U32 }, @@ -262,7 +292,8 @@ static const struct nla_policy nl80211_policy[NUM_NL80211_ATTR] = { [NL80211_ATTR_MNTR_FLAGS] = { /* NLA_NESTED can't be empty */ }, [NL80211_ATTR_MESH_ID] = { .type = NLA_BINARY, .len = IEEE80211_MAX_MESH_ID_LEN }, - [NL80211_ATTR_MPATH_NEXT_HOP] = { .type = NLA_U32 }, + [NL80211_ATTR_MPATH_NEXT_HOP] = { .type = NLA_BINARY, + .len = ETH_ALEN }, [NL80211_ATTR_REG_ALPHA2] = { .type = NLA_STRING, .len = 2 }, [NL80211_ATTR_REG_RULES] = { .type = NLA_NESTED }, @@ -2030,6 +2061,8 @@ static int nl80211_parse_chandef(struct cfg80211_registered_device *rdev, control_freq = nla_get_u32(info->attrs[NL80211_ATTR_WIPHY_FREQ]); + memset(chandef, 0, sizeof(*chandef)); + chandef->chan = ieee80211_get_channel(&rdev->wiphy, control_freq); chandef->width = NL80211_CHAN_WIDTH_20_NOHT; chandef->center_freq1 = control_freq; @@ -2498,7 +2531,7 @@ static int nl80211_send_iface(struct sk_buff *msg, u32 portid, u32 seq, int flag if (rdev->ops->get_channel) { int ret; - struct cfg80211_chan_def chandef; + struct cfg80211_chan_def chandef = {}; ret = 
rdev_get_channel(rdev, wdev, &chandef); if (ret == 0) { @@ -3593,6 +3626,11 @@ static int nl80211_parse_beacon(struct nlattr *attrs[], memset(bcn, 0, sizeof(*bcn)); if (attrs[NL80211_ATTR_BEACON_HEAD]) { + int ret = validate_beacon_head(attrs[NL80211_ATTR_BEACON_HEAD]); + + if (ret) + return ret; + bcn->head = nla_data(attrs[NL80211_ATTR_BEACON_HEAD]); bcn->head_len = nla_len(attrs[NL80211_ATTR_BEACON_HEAD]); if (!bcn->head_len) @@ -5203,6 +5241,9 @@ static int nl80211_del_mpath(struct sk_buff *skb, struct genl_info *info) if (!rdev->ops->del_mpath) return -EOPNOTSUPP; + if (dev->ieee80211_ptr->iftype != NL80211_IFTYPE_MESH_POINT) + return -EOPNOTSUPP; + return rdev_del_mpath(rdev, dev, dst); } diff --git a/net/wireless/reg.c b/net/wireless/reg.c index a3d08bbaa457..647591c5dadd 100644 --- a/net/wireless/reg.c +++ b/net/wireless/reg.c @@ -1609,7 +1609,7 @@ static void reg_call_notifier(struct wiphy *wiphy, static bool reg_wdev_chan_valid(struct wiphy *wiphy, struct wireless_dev *wdev) { - struct cfg80211_chan_def chandef; + struct cfg80211_chan_def chandef = {}; struct cfg80211_registered_device *rdev = wiphy_to_rdev(wiphy); enum nl80211_iftype iftype; diff --git a/net/wireless/util.c b/net/wireless/util.c index 4d3719d7113d..3e7525d0d8e3 100644 --- a/net/wireless/util.c +++ b/net/wireless/util.c @@ -970,6 +970,7 @@ int cfg80211_change_iface(struct cfg80211_registered_device *rdev, } cfg80211_process_rdev_events(rdev); + cfg80211_mlme_purge_registrations(dev->ieee80211_ptr); } err = rdev_change_virtual_intf(rdev, dev, ntype, flags, params); diff --git a/net/wireless/wext-compat.c b/net/wireless/wext-compat.c index fd682832a0e3..cd119943612b 100644 --- a/net/wireless/wext-compat.c +++ b/net/wireless/wext-compat.c @@ -821,7 +821,7 @@ static int cfg80211_wext_giwfreq(struct net_device *dev, { struct wireless_dev *wdev = dev->ieee80211_ptr; struct cfg80211_registered_device *rdev = wiphy_to_rdev(wdev->wiphy); - struct cfg80211_chan_def chandef; + struct cfg80211_chan_def chandef = {}; int ret; switch (wdev->iftype) { diff --git a/net/wireless/wext-sme.c b/net/wireless/wext-sme.c index a4e8af3321d2..98ff9d9e1aa9 100644 --- a/net/wireless/wext-sme.c +++ b/net/wireless/wext-sme.c @@ -225,6 +225,7 @@ int cfg80211_mgd_wext_giwessid(struct net_device *dev, struct iw_point *data, char *ssid) { struct wireless_dev *wdev = dev->ieee80211_ptr; + int ret = 0; /* call only for station! */ if (WARN_ON(wdev->iftype != NL80211_IFTYPE_STATION)) @@ -242,7 +243,10 @@ int cfg80211_mgd_wext_giwessid(struct net_device *dev, if (ie) { data->flags = 1; data->length = ie[1]; - memcpy(ssid, ie + 2, data->length); + if (data->length > IW_ESSID_MAX_SIZE) + ret = -EINVAL; + else + memcpy(ssid, ie + 2, data->length); } rcu_read_unlock(); } else if (wdev->wext.connect.ssid && wdev->wext.connect.ssid_len) { @@ -252,7 +256,7 @@ int cfg80211_mgd_wext_giwessid(struct net_device *dev, } wdev_unlock(wdev); - return 0; + return ret; } int cfg80211_mgd_wext_siwap(struct net_device *dev, diff --git a/scripts/namespace.pl b/scripts/namespace.pl index a71be6b7cdec..9a2a32ce8a3b 100755 --- a/scripts/namespace.pl +++ b/scripts/namespace.pl @@ -65,13 +65,14 @@ require 5; # at least perl 5 use strict; use File::Find; +use File::Spec; my $nm = ($ENV{'NM'} || "nm") . " -p"; my $objdump = ($ENV{'OBJDUMP'} || "objdump") . 
" -s -j .comment"; -my $srctree = ""; -my $objtree = ""; -$srctree = "$ENV{'srctree'}/" if (exists($ENV{'srctree'})); -$objtree = "$ENV{'objtree'}/" if (exists($ENV{'objtree'})); +my $srctree = File::Spec->curdir(); +my $objtree = File::Spec->curdir(); +$srctree = File::Spec->rel2abs($ENV{'srctree'}) if (exists($ENV{'srctree'})); +$objtree = File::Spec->rel2abs($ENV{'objtree'}) if (exists($ENV{'objtree'})); if ($#ARGV != -1) { print STDERR "usage: $0 takes no parameters\n"; @@ -229,9 +230,9 @@ sub do_nm } ($source = $basename) =~ s/\.o$//; if (-e "$source.c" || -e "$source.S") { - $source = "$objtree$File::Find::dir/$source"; + $source = File::Spec->catfile($objtree, $File::Find::dir, $source) } else { - $source = "$srctree$File::Find::dir/$source"; + $source = File::Spec->catfile($srctree, $File::Find::dir, $source) } if (! -e "$source.c" && ! -e "$source.S") { # No obvious source, exclude the object if it is conglomerate diff --git a/scripts/setlocalversion b/scripts/setlocalversion index 91ba51dd88ef..1bbde3a91099 100755 --- a/scripts/setlocalversion +++ b/scripts/setlocalversion @@ -87,8 +87,16 @@ scm_version() printf -- '-svn%s' "`git svn find-rev $head`" fi - # Check for uncommitted changes - if git diff-index --name-only HEAD | grep -qv "^scripts/package"; then + # Check for uncommitted changes. + # First, with git-status, but --no-optional-locks is only + # supported in git >= 2.14, so fall back to git-diff-index if + # it fails. Note that git-diff-index does not refresh the + # index, so it may give misleading results. See + # git-update-index(1), git-diff-index(1), and git-status(1). + if { + git --no-optional-locks status -uno --porcelain 2>/dev/null || + git diff-index --name-only HEAD + } | grep -qvE '^(.. )?scripts/package'; then printf '%s' -dirty fi diff --git a/security/integrity/ima/ima_crypto.c b/security/integrity/ima/ima_crypto.c index a29209fa5674..5c87baaefafb 100644 --- a/security/integrity/ima/ima_crypto.c +++ b/security/integrity/ima/ima_crypto.c @@ -298,8 +298,11 @@ static int ima_calc_file_hash_atfm(struct file *file, rbuf_len = min_t(loff_t, i_size - offset, rbuf_size[active]); rc = integrity_kernel_read(file, offset, rbuf[active], rbuf_len); - if (rc != rbuf_len) + if (rc != rbuf_len) { + if (rc >= 0) + rc = -EINVAL; goto out3; + } if (rbuf[1] && offset) { /* Using two buffers, and it is not the first diff --git a/security/smack/smack_access.c b/security/smack/smack_access.c index a283f9e796c1..0df316c62005 100644 --- a/security/smack/smack_access.c +++ b/security/smack/smack_access.c @@ -474,7 +474,7 @@ char *smk_parse_smack(const char *string, int len) if (i == 0 || i >= SMK_LONGLABEL) return ERR_PTR(-EINVAL); - smack = kzalloc(i + 1, GFP_KERNEL); + smack = kzalloc(i + 1, GFP_NOFS); if (smack == NULL) return ERR_PTR(-ENOMEM); @@ -545,7 +545,7 @@ struct smack_known *smk_import_entry(const char *string, int len) if (skp != NULL) goto freeout; - skp = kzalloc(sizeof(*skp), GFP_KERNEL); + skp = kzalloc(sizeof(*skp), GFP_NOFS); if (skp == NULL) { skp = ERR_PTR(-ENOMEM); goto freeout; diff --git a/security/smack/smack_lsm.c b/security/smack/smack_lsm.c index 9db7c80a74aa..716433e63052 100644 --- a/security/smack/smack_lsm.c +++ b/security/smack/smack_lsm.c @@ -268,7 +268,7 @@ static struct smack_known *smk_fetch(const char *name, struct inode *ip, if (ip->i_op->getxattr == NULL) return ERR_PTR(-EOPNOTSUPP); - buffer = kzalloc(SMK_LONGLABEL, GFP_KERNEL); + buffer = kzalloc(SMK_LONGLABEL, GFP_NOFS); if (buffer == NULL) return ERR_PTR(-ENOMEM); @@ -932,7 +932,8 @@ 
static int smack_bprm_set_creds(struct linux_binprm *bprm) if (rc != 0) return rc; - } else if (bprm->unsafe) + } + if (bprm->unsafe & ~LSM_UNSAFE_PTRACE) return -EPERM; bsp->smk_task = isp->smk_task; @@ -3986,6 +3987,8 @@ access_check: skp = smack_ipv6host_label(&sadd); if (skp == NULL) skp = smack_net_ambient; + if (skb == NULL) + break; #ifdef CONFIG_AUDIT smk_ad_init_net(&ad, __func__, LSM_AUDIT_DATA_NET, &net); ad.a.u.net->family = sk->sk_family; diff --git a/sound/firewire/bebob/bebob_focusrite.c b/sound/firewire/bebob/bebob_focusrite.c index f11090057949..d0a8736613a1 100644 --- a/sound/firewire/bebob/bebob_focusrite.c +++ b/sound/firewire/bebob/bebob_focusrite.c @@ -28,6 +28,8 @@ #define SAFFIRE_CLOCK_SOURCE_SPDIF 1 /* clock sources as returned from register of Saffire Pro 10 and 26 */ +#define SAFFIREPRO_CLOCK_SOURCE_SELECT_MASK 0x000000ff +#define SAFFIREPRO_CLOCK_SOURCE_DETECT_MASK 0x0000ff00 #define SAFFIREPRO_CLOCK_SOURCE_INTERNAL 0 #define SAFFIREPRO_CLOCK_SOURCE_SKIP 1 /* never used on hardware */ #define SAFFIREPRO_CLOCK_SOURCE_SPDIF 2 @@ -190,6 +192,7 @@ saffirepro_both_clk_src_get(struct snd_bebob *bebob, unsigned int *id) map = saffirepro_clk_maps[1]; /* In a case that this driver cannot handle the value of register. */ + value &= SAFFIREPRO_CLOCK_SOURCE_SELECT_MASK; if (value >= SAFFIREPRO_CLOCK_SOURCE_COUNT || map[value] < 0) { err = -EIO; goto end; diff --git a/sound/firewire/bebob/bebob_stream.c b/sound/firewire/bebob/bebob_stream.c index 5022c9b97ddf..15009ecf259d 100644 --- a/sound/firewire/bebob/bebob_stream.c +++ b/sound/firewire/bebob/bebob_stream.c @@ -253,8 +253,7 @@ end: return err; } -static unsigned int -map_data_channels(struct snd_bebob *bebob, struct amdtp_stream *s) +static int map_data_channels(struct snd_bebob *bebob, struct amdtp_stream *s) { unsigned int sec, sections, ch, channels; unsigned int pcm, midi, location; diff --git a/sound/firewire/tascam/tascam-pcm.c b/sound/firewire/tascam/tascam-pcm.c index 380d3db969a5..64edb44d74f6 100644 --- a/sound/firewire/tascam/tascam-pcm.c +++ b/sound/firewire/tascam/tascam-pcm.c @@ -81,6 +81,9 @@ static int pcm_open(struct snd_pcm_substream *substream) goto err_locked; err = snd_tscm_stream_get_clock(tscm, &clock); + if (err < 0) + goto err_locked; + if (clock != SND_TSCM_CLOCK_INTERNAL || amdtp_stream_pcm_running(&tscm->rx_stream) || amdtp_stream_pcm_running(&tscm->tx_stream)) { diff --git a/sound/firewire/tascam/tascam-stream.c b/sound/firewire/tascam/tascam-stream.c index e4c306398b35..d8a9e313eae6 100644 --- a/sound/firewire/tascam/tascam-stream.c +++ b/sound/firewire/tascam/tascam-stream.c @@ -9,20 +9,37 @@ #include <linux/delay.h> #include "tascam.h" +#define CLOCK_STATUS_MASK 0xffff0000 +#define CLOCK_CONFIG_MASK 0x0000ffff + #define CALLBACK_TIMEOUT 500 static int get_clock(struct snd_tscm *tscm, u32 *data) { + int trial = 0; __be32 reg; int err; - err = snd_fw_transaction(tscm->unit, TCODE_READ_QUADLET_REQUEST, - TSCM_ADDR_BASE + TSCM_OFFSET_CLOCK_STATUS, - ®, sizeof(reg), 0); - if (err >= 0) + while (trial++ < 5) { + err = snd_fw_transaction(tscm->unit, TCODE_READ_QUADLET_REQUEST, + TSCM_ADDR_BASE + TSCM_OFFSET_CLOCK_STATUS, + ®, sizeof(reg), 0); + if (err < 0) + return err; + *data = be32_to_cpu(reg); + if (*data & CLOCK_STATUS_MASK) + break; - return err; + // In intermediate state after changing clock status. + msleep(50); + } + + // Still in the intermediate state. 
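/*
 * The get_clock() change above follows a common bounded-poll idiom, completed
 * by the five-trial bail-out that follows: re-read a status register until
 * its status bits are populated, sleep between attempts, and give up with
 * -EAGAIN if the device never leaves the intermediate state.  A stand-alone
 * sketch of that idiom with hypothetical names (assumes <linux/delay.h> for
 * msleep(); this is not the driver's real API):
 */
static int poll_status_until_valid(u32 (*read_status)(void), u32 valid_mask,
				   u32 *out)
{
	int trial;

	for (trial = 0; trial < 5; trial++) {
		*out = read_status();
		if (*out & valid_mask)
			return 0;	/* status bits present, settled */
		msleep(50);		/* still settling, wait and retry */
	}

	return -EAGAIN;			/* never left the intermediate state */
}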
+ if (trial >= 5) + return -EAGAIN; + + return 0; } static int set_clock(struct snd_tscm *tscm, unsigned int rate, @@ -35,7 +52,7 @@ static int set_clock(struct snd_tscm *tscm, unsigned int rate, err = get_clock(tscm, &data); if (err < 0) return err; - data &= 0x0000ffff; + data &= CLOCK_CONFIG_MASK; if (rate > 0) { data &= 0x000000ff; @@ -80,17 +97,14 @@ static int set_clock(struct snd_tscm *tscm, unsigned int rate, int snd_tscm_stream_get_rate(struct snd_tscm *tscm, unsigned int *rate) { - u32 data = 0x0; - unsigned int trials = 0; + u32 data; int err; - while (data == 0x0 || trials++ < 5) { - err = get_clock(tscm, &data); - if (err < 0) - return err; + err = get_clock(tscm, &data); + if (err < 0) + return err; - data = (data & 0xff000000) >> 24; - } + data = (data & 0xff000000) >> 24; /* Check base rate. */ if ((data & 0x0f) == 0x01) diff --git a/sound/i2c/other/ak4xxx-adda.c b/sound/i2c/other/ak4xxx-adda.c index bf377dc192aa..d33e02c31712 100644 --- a/sound/i2c/other/ak4xxx-adda.c +++ b/sound/i2c/other/ak4xxx-adda.c @@ -789,11 +789,12 @@ static int build_adc_controls(struct snd_akm4xxx *ak) return err; memset(&knew, 0, sizeof(knew)); - knew.name = ak->adc_info[mixer_ch].selector_name; - if (!knew.name) { + if (!ak->adc_info || + !ak->adc_info[mixer_ch].selector_name) { knew.name = "Capture Channel"; knew.index = mixer_ch + ak->idx_offset * 2; - } + } else + knew.name = ak->adc_info[mixer_ch].selector_name; knew.iface = SNDRV_CTL_ELEM_IFACE_MIXER; knew.info = ak4xxx_capture_source_info; diff --git a/sound/pci/hda/hda_controller.c b/sound/pci/hda/hda_controller.c index 273364c39171..9cdf86f04e03 100644 --- a/sound/pci/hda/hda_controller.c +++ b/sound/pci/hda/hda_controller.c @@ -667,10 +667,13 @@ static int azx_rirb_get_response(struct hdac_bus *bus, unsigned int addr, */ if (hbus->allow_bus_reset && !hbus->response_reset && !hbus->in_reset) { hbus->response_reset = 1; + dev_err(chip->card->dev, + "No response from codec, resetting bus: last cmd=0x%08x\n", + bus->last_cmd[addr]); return -EAGAIN; /* give a chance to retry */ } - dev_err(chip->card->dev, + dev_WARN(chip->card->dev, "azx_get_response timeout, switching to single_cmd mode: last cmd=0x%08x\n", bus->last_cmd[addr]); chip->single_cmd = 1; diff --git a/sound/pci/hda/patch_analog.c b/sound/pci/hda/patch_analog.c index e0fb8c6d1bc2..7d65c6df9aa8 100644 --- a/sound/pci/hda/patch_analog.c +++ b/sound/pci/hda/patch_analog.c @@ -370,6 +370,7 @@ static const struct hda_fixup ad1986a_fixups[] = { static const struct snd_pci_quirk ad1986a_fixup_tbl[] = { SND_PCI_QUIRK(0x103c, 0x30af, "HP B2800", AD1986A_FIXUP_LAPTOP_IMIC), + SND_PCI_QUIRK(0x1043, 0x1153, "ASUS M9V", AD1986A_FIXUP_LAPTOP_IMIC), SND_PCI_QUIRK(0x1043, 0x1443, "ASUS Z99He", AD1986A_FIXUP_EAPD), SND_PCI_QUIRK(0x1043, 0x1447, "ASUS A8JN", AD1986A_FIXUP_EAPD), SND_PCI_QUIRK_MASK(0x1043, 0xff00, 0x8100, "ASUS P5", AD1986A_FIXUP_3STACK), diff --git a/sound/pci/hda/patch_ca0132.c b/sound/pci/hda/patch_ca0132.c index c55c0131be0a..c0742ee11519 100644 --- a/sound/pci/hda/patch_ca0132.c +++ b/sound/pci/hda/patch_ca0132.c @@ -4440,7 +4440,7 @@ static void hp_callback(struct hda_codec *codec, struct hda_jack_callback *cb) /* Delay enabling the HP amp, to let the mic-detection * state machine run. 
*/ - cancel_delayed_work_sync(&spec->unsol_hp_work); + cancel_delayed_work(&spec->unsol_hp_work); schedule_delayed_work(&spec->unsol_hp_work, msecs_to_jiffies(500)); tbl = snd_hda_jack_tbl_get(codec, cb->nid); if (tbl) diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c index d5ca16048ce0..55bae9e6de27 100644 --- a/sound/pci/hda/patch_realtek.c +++ b/sound/pci/hda/patch_realtek.c @@ -977,6 +977,9 @@ static const struct snd_pci_quirk beep_white_list[] = { SND_PCI_QUIRK(0x1043, 0x834a, "EeePC", 1), SND_PCI_QUIRK(0x1458, 0xa002, "GA-MA790X", 1), SND_PCI_QUIRK(0x8086, 0xd613, "Intel", 1), + /* blacklist -- no beep available */ + SND_PCI_QUIRK(0x17aa, 0x309e, "Lenovo ThinkCentre M73", 0), + SND_PCI_QUIRK(0x17aa, 0x30a3, "Lenovo ThinkCentre M93", 0), {} }; diff --git a/sound/soc/codecs/sgtl5000.c b/sound/soc/codecs/sgtl5000.c index 08b40460663c..4808b70ec12c 100644 --- a/sound/soc/codecs/sgtl5000.c +++ b/sound/soc/codecs/sgtl5000.c @@ -35,6 +35,13 @@ #define SGTL5000_DAP_REG_OFFSET 0x0100 #define SGTL5000_MAX_REG_OFFSET 0x013A +/* Delay for the VAG ramp up */ +#define SGTL5000_VAG_POWERUP_DELAY 500 /* ms */ +/* Delay for the VAG ramp down */ +#define SGTL5000_VAG_POWERDOWN_DELAY 500 /* ms */ + +#define SGTL5000_OUTPUTS_MUTE (SGTL5000_HP_MUTE | SGTL5000_LINE_OUT_MUTE) + /* default value of sgtl5000 registers */ static const struct reg_default sgtl5000_reg_defaults[] = { { SGTL5000_CHIP_DIG_POWER, 0x0000 }, @@ -129,6 +136,13 @@ enum sgtl5000_micbias_resistor { SGTL5000_MICBIAS_8K = 8, }; +enum { + HP_POWER_EVENT, + DAC_POWER_EVENT, + ADC_POWER_EVENT, + LAST_POWER_EVENT = ADC_POWER_EVENT +}; + /* sgtl5000 private structure in codec */ struct sgtl5000_priv { int sysclk; /* sysclk rate */ @@ -141,8 +155,117 @@ struct sgtl5000_priv { int revision; u8 micbias_resistor; u8 micbias_voltage; + u16 mute_state[LAST_POWER_EVENT + 1]; }; +static inline int hp_sel_input(struct snd_soc_component *component) +{ + unsigned int ana_reg = 0; + + snd_soc_component_read(component, SGTL5000_CHIP_ANA_CTRL, &ana_reg); + + return (ana_reg & SGTL5000_HP_SEL_MASK) >> SGTL5000_HP_SEL_SHIFT; +} + +static inline u16 mute_output(struct snd_soc_component *component, + u16 mute_mask) +{ + unsigned int mute_reg = 0; + + snd_soc_component_read(component, SGTL5000_CHIP_ANA_CTRL, &mute_reg); + + snd_soc_component_update_bits(component, SGTL5000_CHIP_ANA_CTRL, + mute_mask, mute_mask); + return mute_reg; +} + +static inline void restore_output(struct snd_soc_component *component, + u16 mute_mask, u16 mute_reg) +{ + snd_soc_component_update_bits(component, SGTL5000_CHIP_ANA_CTRL, + mute_mask, mute_reg); +} + +static void vag_power_on(struct snd_soc_component *component, u32 source) +{ + unsigned int ana_reg = 0; + + snd_soc_component_read(component, SGTL5000_CHIP_ANA_POWER, &ana_reg); + + if (ana_reg & SGTL5000_VAG_POWERUP) + return; + + snd_soc_component_update_bits(component, SGTL5000_CHIP_ANA_POWER, + SGTL5000_VAG_POWERUP, SGTL5000_VAG_POWERUP); + + /* When the VAG is powering on to get a local loop from Line-In, a sleep + * is required to avoid a loud pop.
+ */ + if (hp_sel_input(component) == SGTL5000_HP_SEL_LINE_IN && + source == HP_POWER_EVENT) + msleep(SGTL5000_VAG_POWERUP_DELAY); +} + +static int vag_power_consumers(struct snd_soc_component *component, + u16 ana_pwr_reg, u32 source) +{ + int consumers = 0; + + /* count dac/adc consumers unconditionally */ + if (ana_pwr_reg & SGTL5000_DAC_POWERUP) + consumers++; + if (ana_pwr_reg & SGTL5000_ADC_POWERUP) + consumers++; + + /* + * If the event comes from HP and Line-In is selected, + * the current action is 'DAC to be powered down'. + * As HP_POWERUP is not set when HP is muxed to line-in, + * we need to keep VAG power ON. + */ + if (source == HP_POWER_EVENT) { + if (hp_sel_input(component) == SGTL5000_HP_SEL_LINE_IN) + consumers++; + } else { + if (ana_pwr_reg & SGTL5000_HP_POWERUP) + consumers++; + } + + return consumers; +} + +static void vag_power_off(struct snd_soc_component *component, u32 source) +{ + unsigned int ana_pwr = SGTL5000_VAG_POWERUP; + + snd_soc_component_read(component, SGTL5000_CHIP_ANA_POWER, &ana_pwr); + + if (!(ana_pwr & SGTL5000_VAG_POWERUP)) + return; + + /* + * This function is called when any of the VAG power consumers is disappearing. + * Thus, if there is more than one consumer at the moment, at least + * one consumer will definitely stay after the end of the current + * event. + * Don't clear VAG_POWERUP if 2 or more consumers of VAG are present: + * - LINE_IN (for HP events) / HP (for DAC/ADC events) + * - DAC + * - ADC + * (the current consumer is disappearing right now) + */ + if (vag_power_consumers(component, ana_pwr, source) >= 2) + return; + + snd_soc_component_update_bits(component, SGTL5000_CHIP_ANA_POWER, + SGTL5000_VAG_POWERUP, 0); + /* In the power-down case, we need to wait 400-1000 ms + * for VAG to fully ramp down. + * The longer we wait, the smaller the pop. + */ + msleep(SGTL5000_VAG_POWERDOWN_DELAY); +} + /* * mic_bias power on/off share the same register bits with * output impedance of mic bias, when power on mic bias, we @@ -174,36 +297,46 @@ static int mic_bias_event(struct snd_soc_dapm_widget *w, return 0; } -/* - * As manual described, ADC/DAC only works when VAG powerup, - * So enabled VAG before ADC/DAC up. - * In power down case, we need wait 400ms when vag fully ramped down. - */ -static int power_vag_event(struct snd_soc_dapm_widget *w, - struct snd_kcontrol *kcontrol, int event) +static int vag_and_mute_control(struct snd_soc_component *component, + int event, int event_source) { - struct snd_soc_codec *codec = snd_soc_dapm_to_codec(w->dapm); - const u32 mask = SGTL5000_DAC_POWERUP | SGTL5000_ADC_POWERUP; + static const u16 mute_mask[] = { + /* + * Mask for HP_POWER_EVENT. + * Muxing the headphones has to be wrapped with mute/unmute of + * the headphones only. + */ + SGTL5000_HP_MUTE, + /* + * Masks for DAC_POWER_EVENT/ADC_POWER_EVENT. + * Muxing the DAC or ADC block has to be wrapped with mute/unmute + * of both headphones and line-out.
+ */ + SGTL5000_OUTPUTS_MUTE, + SGTL5000_OUTPUTS_MUTE + }; + + struct sgtl5000_priv *sgtl5000 = + snd_soc_component_get_drvdata(component); switch (event) { + case SND_SOC_DAPM_PRE_PMU: + sgtl5000->mute_state[event_source] = + mute_output(component, mute_mask[event_source]); + break; case SND_SOC_DAPM_POST_PMU: - snd_soc_update_bits(codec, SGTL5000_CHIP_ANA_POWER, - SGTL5000_VAG_POWERUP, SGTL5000_VAG_POWERUP); - msleep(400); + vag_power_on(component, event_source); + restore_output(component, mute_mask[event_source], + sgtl5000->mute_state[event_source]); break; - case SND_SOC_DAPM_PRE_PMD: - /* - * Don't clear VAG_POWERUP, when both DAC and ADC are - * operational to prevent inadvertently starving the - * other one of them. - */ - if ((snd_soc_read(codec, SGTL5000_CHIP_ANA_POWER) & - mask) != mask) { - snd_soc_update_bits(codec, SGTL5000_CHIP_ANA_POWER, - SGTL5000_VAG_POWERUP, 0); - msleep(400); - } + sgtl5000->mute_state[event_source] = + mute_output(component, mute_mask[event_source]); + vag_power_off(component, event_source); + break; + case SND_SOC_DAPM_POST_PMD: + restore_output(component, mute_mask[event_source], + sgtl5000->mute_state[event_source]); break; default: break; @@ -212,6 +345,41 @@ static int power_vag_event(struct snd_soc_dapm_widget *w, return 0; } +/* + * Mute the headphone output when powering it up/down. + * Control VAG power on the HP power path. + */ +static int headphone_pga_event(struct snd_soc_dapm_widget *w, + struct snd_kcontrol *kcontrol, int event) +{ + struct snd_soc_component *component = + snd_soc_dapm_to_component(w->dapm); + + return vag_and_mute_control(component, event, HP_POWER_EVENT); +} + +/* As the manual describes, powering the ADC/DAC up/down requires + * muting the outputs to avoid pops. + * Control VAG power on the ADC/DAC power path.
+ */ +static int adc_updown_depop(struct snd_soc_dapm_widget *w, + struct snd_kcontrol *kcontrol, int event) +{ + struct snd_soc_component *component = + snd_soc_dapm_to_component(w->dapm); + + return vag_and_mute_control(component, event, ADC_POWER_EVENT); +} + +static int dac_updown_depop(struct snd_soc_dapm_widget *w, + struct snd_kcontrol *kcontrol, int event) +{ + struct snd_soc_component *component = + snd_soc_dapm_to_component(w->dapm); + + return vag_and_mute_control(component, event, DAC_POWER_EVENT); +} + /* input sources for ADC */ static const char *adc_mux_text[] = { "MIC_IN", "LINE_IN" @@ -247,7 +415,10 @@ static const struct snd_soc_dapm_widget sgtl5000_dapm_widgets[] = { mic_bias_event, SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_PRE_PMD), - SND_SOC_DAPM_PGA("HP", SGTL5000_CHIP_ANA_POWER, 4, 0, NULL, 0), + SND_SOC_DAPM_PGA_E("HP", SGTL5000_CHIP_ANA_POWER, 4, 0, NULL, 0, + headphone_pga_event, + SND_SOC_DAPM_PRE_POST_PMU | + SND_SOC_DAPM_PRE_POST_PMD), SND_SOC_DAPM_PGA("LO", SGTL5000_CHIP_ANA_POWER, 0, 0, NULL, 0), SND_SOC_DAPM_MUX("Capture Mux", SND_SOC_NOPM, 0, 0, &adc_mux), @@ -263,11 +434,12 @@ static const struct snd_soc_dapm_widget sgtl5000_dapm_widgets[] = { 0, SGTL5000_CHIP_DIG_POWER, 1, 0), - SND_SOC_DAPM_ADC("ADC", "Capture", SGTL5000_CHIP_ANA_POWER, 1, 0), - SND_SOC_DAPM_DAC("DAC", "Playback", SGTL5000_CHIP_ANA_POWER, 3, 0), - - SND_SOC_DAPM_PRE("VAG_POWER_PRE", power_vag_event), - SND_SOC_DAPM_POST("VAG_POWER_POST", power_vag_event), + SND_SOC_DAPM_ADC_E("ADC", "Capture", SGTL5000_CHIP_ANA_POWER, 1, 0, + adc_updown_depop, SND_SOC_DAPM_PRE_POST_PMU | + SND_SOC_DAPM_PRE_POST_PMD), + SND_SOC_DAPM_DAC_E("DAC", "Playback", SGTL5000_CHIP_ANA_POWER, 3, 0, + dac_updown_depop, SND_SOC_DAPM_PRE_POST_PMU | + SND_SOC_DAPM_PRE_POST_PMD), }; /* routes for sgtl5000 */ @@ -1166,12 +1338,17 @@ static int sgtl5000_set_power_regs(struct snd_soc_codec *codec) SGTL5000_INT_OSC_EN); /* Enable VDDC charge pump */ ana_pwr |= SGTL5000_VDDC_CHRGPMP_POWERUP; - } else if (vddio >= 3100 && vdda >= 3100) { + } else { ana_pwr &= ~SGTL5000_VDDC_CHRGPMP_POWERUP; - /* VDDC use VDDIO rail */ - lreg_ctrl |= SGTL5000_VDDC_ASSN_OVRD; - lreg_ctrl |= SGTL5000_VDDC_MAN_ASSN_VDDIO << - SGTL5000_VDDC_MAN_ASSN_SHIFT; + /* + * if vddio == vdda the source of charge pump should be + * assigned manually to VDDIO + */ + if (vddio == vdda) { + lreg_ctrl |= SGTL5000_VDDC_ASSN_OVRD; + lreg_ctrl |= SGTL5000_VDDC_MAN_ASSN_VDDIO << + SGTL5000_VDDC_MAN_ASSN_SHIFT; + } } snd_soc_write(codec, SGTL5000_CHIP_LINREG_CTRL, lreg_ctrl); diff --git a/sound/soc/fsl/fsl_ssi.c b/sound/soc/fsl/fsl_ssi.c index 7ca67613e0d4..d46e9ad600b4 100644 --- a/sound/soc/fsl/fsl_ssi.c +++ b/sound/soc/fsl/fsl_ssi.c @@ -1374,6 +1374,7 @@ static int fsl_ssi_probe(struct platform_device *pdev) struct fsl_ssi_private *ssi_private; int ret = 0; struct device_node *np = pdev->dev.of_node; + struct device_node *root; const struct of_device_id *of_id; const char *p, *sprop; const uint32_t *iprop; @@ -1510,7 +1511,9 @@ static int fsl_ssi_probe(struct platform_device *pdev) * device tree. We also pass the address of the CPU DAI driver * structure. */ - sprop = of_get_property(of_find_node_by_path("/"), "compatible", NULL); + root = of_find_node_by_path("/"); + sprop = of_get_property(root, "compatible", NULL); + of_node_put(root); /* Sometimes the compatible name has a "fsl," prefix, so we strip it. 
*/ p = strrchr(sprop, ','); if (p) diff --git a/sound/soc/intel/common/sst-ipc.c b/sound/soc/intel/common/sst-ipc.c index a12c7bb08d3b..b96bf44be2d5 100644 --- a/sound/soc/intel/common/sst-ipc.c +++ b/sound/soc/intel/common/sst-ipc.c @@ -211,6 +211,8 @@ struct ipc_message *sst_ipc_reply_find_msg(struct sst_generic_ipc *ipc, if (ipc->ops.reply_msg_match != NULL) header = ipc->ops.reply_msg_match(header, &mask); + else + mask = (u64)-1; if (list_empty(&ipc->rx_list)) { dev_err(ipc->dev, "error: rx list empty but received 0x%llx\n", diff --git a/sound/soc/msm/qdsp6v2/q6adm.c b/sound/soc/msm/qdsp6v2/q6adm.c index 5478920e38a9..dc7165263316 100644 --- a/sound/soc/msm/qdsp6v2/q6adm.c +++ b/sound/soc/msm/qdsp6v2/q6adm.c @@ -1219,7 +1219,7 @@ static int adm_process_get_param_response(u32 opcode, u32 idx, u32 *payload, struct adm_cmd_rsp_get_pp_params_v5 *v5_rsp = NULL; struct adm_cmd_rsp_get_pp_params_v6 *v6_rsp = NULL; u32 *param_data = NULL; - int data_size; + int data_size = 0; int struct_size; if (payload == NULL) { @@ -1233,7 +1233,7 @@ static int adm_process_get_param_response(u32 opcode, u32 idx, u32 *payload, if (payload_size < struct_size) { pr_err("%s: payload size %d < expected size %d\n", __func__, payload_size, struct_size); - break; + return -EINVAL; } v5_rsp = (struct adm_cmd_rsp_get_pp_params_v5 *) payload; data_size = v5_rsp->param_hdr.param_size; @@ -1244,7 +1244,7 @@ static int adm_process_get_param_response(u32 opcode, u32 idx, u32 *payload, if (payload_size < struct_size) { pr_err("%s: payload size %d < expected size %d\n", __func__, payload_size, struct_size); - break; + return -EINVAL; } v6_rsp = (struct adm_cmd_rsp_get_pp_params_v6 *) payload; data_size = v6_rsp->param_hdr.param_size; @@ -1272,6 +1272,10 @@ static int adm_process_get_param_response(u32 opcode, u32 idx, u32 *payload, pr_debug("%s: GET_PP PARAM: received parameter length: 0x%x\n", __func__, adm_get_parameters[idx]); /* store params after param_size */ + if (param_data == NULL) { + pr_err("%s: Invalid parameter data got!\n", __func__); + return -EINVAL; + } memcpy(&adm_get_parameters[idx + 1], param_data, data_size); return 0; } diff --git a/sound/soc/rockchip/rockchip_i2s.c b/sound/soc/rockchip/rockchip_i2s.c index 58ee64594f07..f583f317644a 100644 --- a/sound/soc/rockchip/rockchip_i2s.c +++ b/sound/soc/rockchip/rockchip_i2s.c @@ -530,7 +530,7 @@ static int rockchip_i2s_probe(struct platform_device *pdev) ret = devm_snd_dmaengine_pcm_register(&pdev->dev, NULL, 0); if (ret) { dev_err(&pdev->dev, "Could not register PCM\n"); - return ret; + goto err_suspend; } return 0; diff --git a/sound/soc/sh/rcar/core.c b/sound/soc/sh/rcar/core.c index e00dfbec22c5..f18485c6a5d8 100644 --- a/sound/soc/sh/rcar/core.c +++ b/sound/soc/sh/rcar/core.c @@ -524,6 +524,7 @@ static int rsnd_soc_dai_set_fmt(struct snd_soc_dai *dai, unsigned int fmt) } /* set format */ + rdai->bit_clk_inv = 0; switch (fmt & SND_SOC_DAIFMT_FORMAT_MASK) { case SND_SOC_DAIFMT_I2S: rdai->sys_delay = 0; diff --git a/sound/soc/soc-generic-dmaengine-pcm.c b/sound/soc/soc-generic-dmaengine-pcm.c index 6fd1906af387..fe65754c2e50 100644 --- a/sound/soc/soc-generic-dmaengine-pcm.c +++ b/sound/soc/soc-generic-dmaengine-pcm.c @@ -301,6 +301,12 @@ static int dmaengine_pcm_new(struct snd_soc_pcm_runtime *rtd) if (!dmaengine_pcm_can_report_residue(dev, pcm->chan[i])) pcm->flags |= SND_DMAENGINE_PCM_FLAG_NO_RESIDUE; + + if (rtd->pcm->streams[i].pcm->name[0] == '\0') { + strncpy(rtd->pcm->streams[i].pcm->name, + rtd->pcm->streams[i].pcm->id, + 
sizeof(rtd->pcm->streams[i].pcm->name)); + } } return 0; diff --git a/sound/usb/pcm.c b/sound/usb/pcm.c index 9bea2aed1ed6..8eaadc0a4cb2 100644 --- a/sound/usb/pcm.c +++ b/sound/usb/pcm.c @@ -460,6 +460,7 @@ static int set_sync_endpoint(struct snd_usb_substream *subs, } ep = get_endpoint(alts, 1)->bEndpointAddress; if (get_endpoint(alts, 0)->bLength >= USB_DT_ENDPOINT_AUDIO_SIZE && + get_endpoint(alts, 0)->bSynchAddress != 0 && ((is_playback && ep != (unsigned int)(get_endpoint(alts, 0)->bSynchAddress | USB_DIR_IN)) || (!is_playback && ep != (unsigned int)(get_endpoint(alts, 0)->bSynchAddress & ~USB_DIR_IN)))) { dev_err(&dev->dev, diff --git a/tools/lib/traceevent/Makefile b/tools/lib/traceevent/Makefile index 7851df1490e0..cc3315da6dc3 100644 --- a/tools/lib/traceevent/Makefile +++ b/tools/lib/traceevent/Makefile @@ -54,15 +54,15 @@ set_plugin_dir := 1 # Set plugin_dir to preffered global plugin location # If we install under $HOME directory we go under -# $(HOME)/.traceevent/plugins +# $(HOME)/.local/lib/traceevent/plugins # # We dont set PLUGIN_DIR in case we install under $HOME # directory, because by default the code looks under: -# $(HOME)/.traceevent/plugins by default. +# $(HOME)/.local/lib/traceevent/plugins by default. # ifeq ($(plugin_dir),) ifeq ($(prefix),$(HOME)) -override plugin_dir = $(HOME)/.traceevent/plugins +override plugin_dir = $(HOME)/.local/lib/traceevent/plugins set_plugin_dir := 0 else override plugin_dir = $(libdir)/traceevent/plugins diff --git a/tools/lib/traceevent/event-parse.c b/tools/lib/traceevent/event-parse.c index df3c73e9dea4..9954b069b3ca 100644 --- a/tools/lib/traceevent/event-parse.c +++ b/tools/lib/traceevent/event-parse.c @@ -265,10 +265,10 @@ static int add_new_comm(struct pevent *pevent, const char *comm, int pid) errno = ENOMEM; return -1; } + pevent->cmdlines = cmdlines; cmdlines[pevent->cmdline_count].comm = strdup(comm); if (!cmdlines[pevent->cmdline_count].comm) { - free(cmdlines); errno = ENOMEM; return -1; } @@ -279,7 +279,6 @@ static int add_new_comm(struct pevent *pevent, const char *comm, int pid) pevent->cmdline_count++; qsort(cmdlines, pevent->cmdline_count, sizeof(*cmdlines), cmdline_cmp); - pevent->cmdlines = cmdlines; return 0; } diff --git a/tools/lib/traceevent/event-plugin.c b/tools/lib/traceevent/event-plugin.c index a16756ae3526..5fe7889606a2 100644 --- a/tools/lib/traceevent/event-plugin.c +++ b/tools/lib/traceevent/event-plugin.c @@ -30,7 +30,7 @@ #include "event-parse.h" #include "event-utils.h" -#define LOCAL_PLUGIN_DIR ".traceevent/plugins" +#define LOCAL_PLUGIN_DIR ".local/lib/traceevent/plugins/" static struct registered_plugin_options { struct registered_plugin_options *next; diff --git a/tools/perf/builtin-kmem.c b/tools/perf/builtin-kmem.c index 93ce665f976f..b62f2f139edf 100644 --- a/tools/perf/builtin-kmem.c +++ b/tools/perf/builtin-kmem.c @@ -664,6 +664,7 @@ static char *compact_gfp_flags(char *gfp_flags) new = realloc(new_flags, len + strlen(cpt) + 2); if (new == NULL) { free(new_flags); + free(orig_flags); return NULL; } diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c index e77880b5094d..65a6922db722 100644 --- a/tools/perf/builtin-stat.c +++ b/tools/perf/builtin-stat.c @@ -1416,7 +1416,7 @@ int cmd_stat(int argc, const char **argv, const char *prefix __maybe_unused) run_idx + 1); status = run_perf_stat(argc, argv); - if (forever && status != -1) { + if (forever && status != -1 && !interval) { print_counters(NULL, argc, argv); perf_stat__reset_stats(); } diff --git a/tools/perf/util/hist.c 
b/tools/perf/util/hist.c index f6720afa9f34..97ebd1d3646d 100644 --- a/tools/perf/util/hist.c +++ b/tools/perf/util/hist.c @@ -1080,7 +1080,7 @@ void hists__collapse_resort(struct hists *hists, struct ui_progress *prog) } } -static int hist_entry__sort(struct hist_entry *a, struct hist_entry *b) +static int64_t hist_entry__sort(struct hist_entry *a, struct hist_entry *b) { struct perf_hpp_fmt *fmt; int64_t cmp = 0; diff --git a/tools/perf/util/llvm-utils.c b/tools/perf/util/llvm-utils.c index 62f6d7dc2dda..9d02aa93ef90 100644 --- a/tools/perf/util/llvm-utils.c +++ b/tools/perf/util/llvm-utils.c @@ -214,14 +214,14 @@ static int detect_kbuild_dir(char **kbuild_dir) const char *prefix_dir = ""; const char *suffix_dir = ""; + /* _UTSNAME_LENGTH is 65 */ + char release[128]; + char *autoconf_path; int err; if (!test_dir) { - /* _UTSNAME_LENGTH is 65 */ - char release[128]; - err = fetch_kernel_version(NULL, release, sizeof(release)); if (err) diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c index afc6b56cf749..97c0684588d9 100644 --- a/tools/perf/util/map.c +++ b/tools/perf/util/map.c @@ -1,4 +1,5 @@ #include "symbol.h" +#include <assert.h> #include <errno.h> #include <inttypes.h> #include <limits.h> @@ -702,6 +703,8 @@ static int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp } after->start = map->end; + after->pgoff += map->end - pos->start; + assert(pos->map_ip(pos, map->end) == after->map_ip(after, map->end)); __map_groups__insert(pos->groups, after); if (verbose >= 2) map__fprintf(after, fp);