Magellan Linux

Annotation of /trunk/kernel-alx/patches-4.19/0164-4.19.65-all-fixes.patch



Revision 3443
Thu Aug 15 09:33:22 2019 UTC by niro
File size: 94954 byte(s)
-linux-4.19.65
1 niro 3443 diff --git a/Documentation/admin-guide/hw-vuln/spectre.rst b/Documentation/admin-guide/hw-vuln/spectre.rst
2     index 25f3b2532198..e05e581af5cf 100644
3     --- a/Documentation/admin-guide/hw-vuln/spectre.rst
4     +++ b/Documentation/admin-guide/hw-vuln/spectre.rst
5     @@ -41,10 +41,11 @@ Related CVEs
6    
7     The following CVE entries describe Spectre variants:
8    
9     - ============= ======================= =================
10     + ============= ======================= ==========================
11     CVE-2017-5753 Bounds check bypass Spectre variant 1
12     CVE-2017-5715 Branch target injection Spectre variant 2
13     - ============= ======================= =================
14     + CVE-2019-1125 Spectre v1 swapgs Spectre variant 1 (swapgs)
15     + ============= ======================= ==========================
16    
17     Problem
18     -------
19     @@ -78,6 +79,13 @@ There are some extensions of Spectre variant 1 attacks for reading data
20     over the network, see :ref:`[12] <spec_ref12>`. However such attacks
21     are difficult, low bandwidth, fragile, and are considered low risk.
22    
23     +Note that, despite "Bounds Check Bypass" name, Spectre variant 1 is not
24     +only about user-controlled array bounds checks. It can affect any
25     +conditional checks. The kernel entry code interrupt, exception, and NMI
26     +handlers all have conditional swapgs checks. Those may be problematic
27     +in the context of Spectre v1, as kernel code can speculatively run with
28     +a user GS.
29     +
30     Spectre variant 2 (Branch Target Injection)
31     -------------------------------------------
32    
33     @@ -132,6 +140,9 @@ not cover all possible attack vectors.
34     1. A user process attacking the kernel
35     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
36    
37     +Spectre variant 1
38     +~~~~~~~~~~~~~~~~~
39     +
40     The attacker passes a parameter to the kernel via a register or
41     via a known address in memory during a syscall. Such parameter may
42     be used later by the kernel as an index to an array or to derive
43     @@ -144,7 +155,40 @@ not cover all possible attack vectors.
44     potentially be influenced for Spectre attacks, new "nospec" accessor
45     macros are used to prevent speculative loading of data.
46    
47     - Spectre variant 2 attacker can :ref:`poison <poison_btb>` the branch
48     +Spectre variant 1 (swapgs)
49     +~~~~~~~~~~~~~~~~~~~~~~~~~~
50     +
51     + An attacker can train the branch predictor to speculatively skip the
52     + swapgs path for an interrupt or exception. If they initialize
53     + the GS register to a user-space value, if the swapgs is speculatively
54     + skipped, subsequent GS-related percpu accesses in the speculation
55     + window will be done with the attacker-controlled GS value. This
56     + could cause privileged memory to be accessed and leaked.
57     +
58     + For example:
59     +
60     + ::
61     +
62     + if (coming from user space)
63     + swapgs
64     + mov %gs:<percpu_offset>, %reg
65     + mov (%reg), %reg1
66     +
67     + When coming from user space, the CPU can speculatively skip the
68     + swapgs, and then do a speculative percpu load using the user GS
69     + value. So the user can speculatively force a read of any kernel
70     + value. If a gadget exists which uses the percpu value as an address
71     + in another load/store, then the contents of the kernel value may
72     + become visible via an L1 side channel attack.
73     +
74     + A similar attack exists when coming from kernel space. The CPU can
75     + speculatively do the swapgs, causing the user GS to get used for the
76     + rest of the speculative window.
77     +
78     +Spectre variant 2
79     +~~~~~~~~~~~~~~~~~
80     +
81     + A spectre variant 2 attacker can :ref:`poison <poison_btb>` the branch
82     target buffer (BTB) before issuing syscall to launch an attack.
83     After entering the kernel, the kernel could use the poisoned branch
84     target buffer on indirect jump and jump to gadget code in speculative
85     @@ -280,11 +324,18 @@ The sysfs file showing Spectre variant 1 mitigation status is:
86    
87     The possible values in this file are:
88    
89     - ======================================= =================================
90     - 'Mitigation: __user pointer sanitation' Protection in kernel on a case by
91     - case base with explicit pointer
92     - sanitation.
93     - ======================================= =================================
94     + .. list-table::
95     +
96     + * - 'Not affected'
97     + - The processor is not vulnerable.
98     + * - 'Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers'
99     + - The swapgs protections are disabled; otherwise it has
100     + protection in the kernel on a case by case base with explicit
101     + pointer sanitation and usercopy LFENCE barriers.
102     + * - 'Mitigation: usercopy/swapgs barriers and __user pointer sanitization'
103     + - Protection in the kernel on a case by case base with explicit
104     + pointer sanitation, usercopy LFENCE barriers, and swapgs LFENCE
105     + barriers.
106    
107     However, the protections are put in place on a case by case basis,
108     and there is no guarantee that all possible attack vectors for Spectre
109     @@ -366,12 +417,27 @@ Turning on mitigation for Spectre variant 1 and Spectre variant 2
110     1. Kernel mitigation
111     ^^^^^^^^^^^^^^^^^^^^
112    
113     +Spectre variant 1
114     +~~~~~~~~~~~~~~~~~
115     +
116     For the Spectre variant 1, vulnerable kernel code (as determined
117     by code audit or scanning tools) is annotated on a case by case
118     basis to use nospec accessor macros for bounds clipping :ref:`[2]
119     <spec_ref2>` to avoid any usable disclosure gadgets. However, it may
120     not cover all attack vectors for Spectre variant 1.
121    
122     + Copy-from-user code has an LFENCE barrier to prevent the access_ok()
123     + check from being mis-speculated. The barrier is done by the
124     + barrier_nospec() macro.
125     +
126     + For the swapgs variant of Spectre variant 1, LFENCE barriers are
127     + added to interrupt, exception and NMI entry where needed. These
128     + barriers are done by the FENCE_SWAPGS_KERNEL_ENTRY and
129     + FENCE_SWAPGS_USER_ENTRY macros.
130     +
131     +Spectre variant 2
132     +~~~~~~~~~~~~~~~~~
133     +
134     For Spectre variant 2 mitigation, the compiler turns indirect calls or
135     jumps in the kernel into equivalent return trampolines (retpolines)
136     :ref:`[3] <spec_ref3>` :ref:`[9] <spec_ref9>` to go to the target
137     @@ -473,6 +539,12 @@ Mitigation control on the kernel command line
138     Spectre variant 2 mitigation can be disabled or force enabled at the
139     kernel command line.
140    
141     + nospectre_v1
142     +
143     + [X86,PPC] Disable mitigations for Spectre Variant 1
144     + (bounds check bypass). With this option data leaks are
145     + possible in the system.
146     +
147     nospectre_v2
148    
149     [X86] Disable all mitigations for the Spectre variant 2
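The spectre.rst hunks above refer to "nospec" accessor macros that do bounds clipping instead of relying on a conditional bounds check. A minimal sketch of that idea in plain C follows; the helper names are illustrative, not the kernel's exact `array_index_nospec()` implementation, and it assumes the common arithmetic-right-shift behavior of gcc/clang for signed values:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Branch-free bounds clipping: build a mask that is all-ones when
 * index < size and all-zeroes otherwise, so a mis-speculated bounds
 * check cannot yield an out-of-bounds index. Assumes size and index
 * are below LONG_MAX and an arithmetic >> on signed longs (true on
 * gcc/clang), as the kernel's generic version does.
 */
static unsigned long index_mask_nospec(unsigned long index, unsigned long size)
{
	/* Top bit of (index | (size - 1 - index)) is clear iff index < size. */
	return (unsigned long)(~(long)(index | (size - 1UL - index)) >>
			       (sizeof(long) * 8 - 1));
}

/* Returns index if index < size, else 0 -- without a branch. */
static unsigned long clip_index_nospec(unsigned long index, unsigned long size)
{
	return index & index_mask_nospec(index, size);
}
```

An out-of-range index is forced to 0, a safe in-bounds value, even in the speculative window where the architectural bounds check has not yet retired.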
150     diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
151     index 1cee1174cde6..c96a8e9ad5c2 100644
152     --- a/Documentation/admin-guide/kernel-parameters.txt
153     +++ b/Documentation/admin-guide/kernel-parameters.txt
154     @@ -2515,6 +2515,7 @@
155     Equivalent to: nopti [X86,PPC]
156     nospectre_v1 [PPC]
157     nobp=0 [S390]
158     + nospectre_v1 [X86]
159     nospectre_v2 [X86,PPC,S390]
160     spectre_v2_user=off [X86]
161     spec_store_bypass_disable=off [X86,PPC]
162     @@ -2861,9 +2862,9 @@
163     nosmt=force: Force disable SMT, cannot be undone
164     via the sysfs control file.
165    
166     - nospectre_v1 [PPC] Disable mitigations for Spectre Variant 1 (bounds
167     - check bypass). With this option data leaks are possible
168     - in the system.
169     + nospectre_v1 [X86,PPC] Disable mitigations for Spectre Variant 1
170     + (bounds check bypass). With this option data leaks
171     + are possible in the system.
172    
173     nospectre_v2 [X86,PPC_FSL_BOOK3E] Disable all mitigations for the Spectre variant 2
174     (indirect branch prediction) vulnerability. System may
175     diff --git a/Makefile b/Makefile
176     index 203d9e80a315..41a565770431 100644
177     --- a/Makefile
178     +++ b/Makefile
179     @@ -1,7 +1,7 @@
180     # SPDX-License-Identifier: GPL-2.0
181     VERSION = 4
182     PATCHLEVEL = 19
183     -SUBLEVEL = 64
184     +SUBLEVEL = 65
185     EXTRAVERSION =
186     NAME = "People's Front"
187    
188     @@ -430,6 +430,7 @@ KBUILD_CFLAGS_MODULE := -DMODULE
189     KBUILD_LDFLAGS_MODULE := -T $(srctree)/scripts/module-common.lds
190     KBUILD_LDFLAGS :=
191     GCC_PLUGINS_CFLAGS :=
192     +CLANG_FLAGS :=
193    
194     export ARCH SRCARCH CONFIG_SHELL HOSTCC KBUILD_HOSTCFLAGS CROSS_COMPILE AS LD CC
195     export CPP AR NM STRIP OBJCOPY OBJDUMP KBUILD_HOSTLDFLAGS KBUILD_HOSTLDLIBS
196     @@ -482,7 +483,7 @@ endif
197    
198     ifeq ($(cc-name),clang)
199     ifneq ($(CROSS_COMPILE),)
200     -CLANG_FLAGS := --target=$(notdir $(CROSS_COMPILE:%-=%))
201     +CLANG_FLAGS += --target=$(notdir $(CROSS_COMPILE:%-=%))
202     GCC_TOOLCHAIN_DIR := $(dir $(shell which $(CROSS_COMPILE)elfedit))
203     CLANG_FLAGS += --prefix=$(GCC_TOOLCHAIN_DIR)
204     GCC_TOOLCHAIN := $(realpath $(GCC_TOOLCHAIN_DIR)/..)
205     diff --git a/arch/arc/Kconfig b/arch/arc/Kconfig
206     index 74953e76a57d..0cce54182cc5 100644
207     --- a/arch/arc/Kconfig
208     +++ b/arch/arc/Kconfig
209     @@ -199,7 +199,6 @@ config NR_CPUS
210    
211     config ARC_SMP_HALT_ON_RESET
212     bool "Enable Halt-on-reset boot mode"
213     - default y if ARC_UBOOT_SUPPORT
214     help
215     In SMP configuration cores can be configured as Halt-on-reset
216     or they could all start at same time. For Halt-on-reset, non
217     @@ -539,18 +538,6 @@ config ARC_DBG_TLB_PARANOIA
218    
219     endif
220    
221     -config ARC_UBOOT_SUPPORT
222     - bool "Support uboot arg Handling"
223     - default n
224     - help
225     - ARC Linux by default checks for uboot provided args as pointers to
226     - external cmdline or DTB. This however breaks in absence of uboot,
227     - when booting from Metaware debugger directly, as the registers are
228     - not zeroed out on reset by mdb and/or ARCv2 based cores. The bogus
229     - registers look like uboot args to kernel which then chokes.
230     - So only enable the uboot arg checking/processing if users are sure
231     - of uboot being in play.
232     -
233     config ARC_BUILTIN_DTB_NAME
234     string "Built in DTB"
235     help
236     diff --git a/arch/arc/configs/nps_defconfig b/arch/arc/configs/nps_defconfig
237     index 6e84060e7c90..621f59407d76 100644
238     --- a/arch/arc/configs/nps_defconfig
239     +++ b/arch/arc/configs/nps_defconfig
240     @@ -31,7 +31,6 @@ CONFIG_ARC_CACHE_LINE_SHIFT=5
241     # CONFIG_ARC_HAS_LLSC is not set
242     CONFIG_ARC_KVADDR_SIZE=402
243     CONFIG_ARC_EMUL_UNALIGNED=y
244     -CONFIG_ARC_UBOOT_SUPPORT=y
245     CONFIG_PREEMPT=y
246     CONFIG_NET=y
247     CONFIG_UNIX=y
248     diff --git a/arch/arc/configs/vdk_hs38_defconfig b/arch/arc/configs/vdk_hs38_defconfig
249     index 1e59a2e9c602..e447ace6fa1c 100644
250     --- a/arch/arc/configs/vdk_hs38_defconfig
251     +++ b/arch/arc/configs/vdk_hs38_defconfig
252     @@ -13,7 +13,6 @@ CONFIG_PARTITION_ADVANCED=y
253     CONFIG_ARC_PLAT_AXS10X=y
254     CONFIG_AXS103=y
255     CONFIG_ISA_ARCV2=y
256     -CONFIG_ARC_UBOOT_SUPPORT=y
257     CONFIG_ARC_BUILTIN_DTB_NAME="vdk_hs38"
258     CONFIG_PREEMPT=y
259     CONFIG_NET=y
260     diff --git a/arch/arc/configs/vdk_hs38_smp_defconfig b/arch/arc/configs/vdk_hs38_smp_defconfig
261     index b5c3f6c54b03..c82cdb10aaf4 100644
262     --- a/arch/arc/configs/vdk_hs38_smp_defconfig
263     +++ b/arch/arc/configs/vdk_hs38_smp_defconfig
264     @@ -15,8 +15,6 @@ CONFIG_AXS103=y
265     CONFIG_ISA_ARCV2=y
266     CONFIG_SMP=y
267     # CONFIG_ARC_TIMERS_64BIT is not set
268     -# CONFIG_ARC_SMP_HALT_ON_RESET is not set
269     -CONFIG_ARC_UBOOT_SUPPORT=y
270     CONFIG_ARC_BUILTIN_DTB_NAME="vdk_hs38_smp"
271     CONFIG_PREEMPT=y
272     CONFIG_NET=y
273     diff --git a/arch/arc/kernel/head.S b/arch/arc/kernel/head.S
274     index 208bf2c9e7b0..a72bbda2f7aa 100644
275     --- a/arch/arc/kernel/head.S
276     +++ b/arch/arc/kernel/head.S
277     @@ -100,7 +100,6 @@ ENTRY(stext)
278     st.ab 0, [r5, 4]
279     1:
280    
281     -#ifdef CONFIG_ARC_UBOOT_SUPPORT
282     ; Uboot - kernel ABI
283     ; r0 = [0] No uboot interaction, [1] cmdline in r2, [2] DTB in r2
284     ; r1 = magic number (always zero as of now)
285     @@ -109,7 +108,6 @@ ENTRY(stext)
286     st r0, [@uboot_tag]
287     st r1, [@uboot_magic]
288     st r2, [@uboot_arg]
289     -#endif
290    
291     ; setup "current" tsk and optionally cache it in dedicated r25
292     mov r9, @init_task
293     diff --git a/arch/arc/kernel/setup.c b/arch/arc/kernel/setup.c
294     index a1218937abd6..89c97dcfa360 100644
295     --- a/arch/arc/kernel/setup.c
296     +++ b/arch/arc/kernel/setup.c
297     @@ -493,7 +493,6 @@ void __init handle_uboot_args(void)
298     bool use_embedded_dtb = true;
299     bool append_cmdline = false;
300    
301     -#ifdef CONFIG_ARC_UBOOT_SUPPORT
302     /* check that we know this tag */
303     if (uboot_tag != UBOOT_TAG_NONE &&
304     uboot_tag != UBOOT_TAG_CMDLINE &&
305     @@ -525,7 +524,6 @@ void __init handle_uboot_args(void)
306     append_cmdline = true;
307    
308     ignore_uboot_args:
309     -#endif
310    
311     if (use_embedded_dtb) {
312     machine_desc = setup_machine_fdt(__dtb_start);
313     diff --git a/arch/arm/boot/dts/rk3288-veyron-mickey.dts b/arch/arm/boot/dts/rk3288-veyron-mickey.dts
314     index 1e0158acf895..a593d0a998fc 100644
315     --- a/arch/arm/boot/dts/rk3288-veyron-mickey.dts
316     +++ b/arch/arm/boot/dts/rk3288-veyron-mickey.dts
317     @@ -124,10 +124,6 @@
318     };
319     };
320    
321     -&emmc {
322     - /delete-property/mmc-hs200-1_8v;
323     -};
324     -
325     &i2c2 {
326     status = "disabled";
327     };
328     diff --git a/arch/arm/boot/dts/rk3288-veyron-minnie.dts b/arch/arm/boot/dts/rk3288-veyron-minnie.dts
329     index f95d0c5fcf71..6e8946052c78 100644
330     --- a/arch/arm/boot/dts/rk3288-veyron-minnie.dts
331     +++ b/arch/arm/boot/dts/rk3288-veyron-minnie.dts
332     @@ -90,10 +90,6 @@
333     pwm-off-delay-ms = <200>;
334     };
335    
336     -&emmc {
337     - /delete-property/mmc-hs200-1_8v;
338     -};
339     -
340     &gpio_keys {
341     pinctrl-0 = <&pwr_key_l &ap_lid_int_l &volum_down_l &volum_up_l>;
342    
343     diff --git a/arch/arm/boot/dts/rk3288.dtsi b/arch/arm/boot/dts/rk3288.dtsi
344     index c706adf4aed2..440d6783faca 100644
345     --- a/arch/arm/boot/dts/rk3288.dtsi
346     +++ b/arch/arm/boot/dts/rk3288.dtsi
347     @@ -227,6 +227,7 @@
348     <GIC_PPI 11 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_HIGH)>,
349     <GIC_PPI 10 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_HIGH)>;
350     clock-frequency = <24000000>;
351     + arm,no-tick-in-suspend;
352     };
353    
354     timer: timer@ff810000 {
355     diff --git a/arch/arm/mach-rpc/dma.c b/arch/arm/mach-rpc/dma.c
356     index fb48f3141fb4..c4c96661eb89 100644
357     --- a/arch/arm/mach-rpc/dma.c
358     +++ b/arch/arm/mach-rpc/dma.c
359     @@ -131,7 +131,7 @@ static irqreturn_t iomd_dma_handle(int irq, void *dev_id)
360     } while (1);
361    
362     idma->state = ~DMA_ST_AB;
363     - disable_irq(irq);
364     + disable_irq_nosync(irq);
365    
366     return IRQ_HANDLED;
367     }
368     @@ -174,6 +174,9 @@ static void iomd_enable_dma(unsigned int chan, dma_t *dma)
369     DMA_FROM_DEVICE : DMA_TO_DEVICE);
370     }
371    
372     + idma->dma_addr = idma->dma.sg->dma_address;
373     + idma->dma_len = idma->dma.sg->length;
374     +
375     iomd_writeb(DMA_CR_C, dma_base + CR);
376     idma->state = DMA_ST_AB;
377     }
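The one-line `disable_irq()` to `disable_irq_nosync()` change above matters because the synchronous variant waits for any running handler on that line to finish; called from inside the handler itself, that wait can never complete. A toy model (not the kernel API) of the distinction:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of an IRQ line; field names are illustrative. */
struct irq_line {
	bool enabled;
	bool handler_running;
};

/* _nosync: just mark the line disabled, no waiting -- safe in a handler. */
static void disable_irq_nosync_model(struct irq_line *l)
{
	l->enabled = false;
}

/* Synchronous variant: must wait until no handler is running. If the
 * caller *is* that handler, this would spin forever; the model returns
 * false instead of deadlocking. */
static bool disable_irq_sync_model(struct irq_line *l)
{
	if (l->handler_running)
		return false;	/* would deadlock in real code */
	l->enabled = false;
	return true;
}

/* Mirrors the patched iomd_dma_handle(): disable from within the handler. */
static bool demo_handler_disables_itself(void)
{
	struct irq_line l = { true, true };
	disable_irq_nosync_model(&l);	/* returns immediately */
	l.handler_running = false;
	return !l.enabled;
}

static bool demo_sync_would_deadlock(void)
{
	struct irq_line l = { true, true };
	return !disable_irq_sync_model(&l);
}
```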
378     diff --git a/arch/arm64/boot/dts/rockchip/rk3399.dtsi b/arch/arm64/boot/dts/rockchip/rk3399.dtsi
379     index df7e62d9a670..cea44a7c7cf9 100644
380     --- a/arch/arm64/boot/dts/rockchip/rk3399.dtsi
381     +++ b/arch/arm64/boot/dts/rockchip/rk3399.dtsi
382     @@ -1643,11 +1643,11 @@
383     reg = <0x0 0xff914000 0x0 0x100>, <0x0 0xff915000 0x0 0x100>;
384     interrupts = <GIC_SPI 43 IRQ_TYPE_LEVEL_HIGH 0>;
385     interrupt-names = "isp0_mmu";
386     - clocks = <&cru ACLK_ISP0_NOC>, <&cru HCLK_ISP0_NOC>;
387     + clocks = <&cru ACLK_ISP0_WRAPPER>, <&cru HCLK_ISP0_WRAPPER>;
388     clock-names = "aclk", "iface";
389     #iommu-cells = <0>;
390     + power-domains = <&power RK3399_PD_ISP0>;
391     rockchip,disable-mmu-reset;
392     - status = "disabled";
393     };
394    
395     isp1_mmu: iommu@ff924000 {
396     @@ -1655,11 +1655,11 @@
397     reg = <0x0 0xff924000 0x0 0x100>, <0x0 0xff925000 0x0 0x100>;
398     interrupts = <GIC_SPI 44 IRQ_TYPE_LEVEL_HIGH 0>;
399     interrupt-names = "isp1_mmu";
400     - clocks = <&cru ACLK_ISP1_NOC>, <&cru HCLK_ISP1_NOC>;
401     + clocks = <&cru ACLK_ISP1_WRAPPER>, <&cru HCLK_ISP1_WRAPPER>;
402     clock-names = "aclk", "iface";
403     #iommu-cells = <0>;
404     + power-domains = <&power RK3399_PD_ISP1>;
405     rockchip,disable-mmu-reset;
406     - status = "disabled";
407     };
408    
409     hdmi_sound: hdmi-sound {
410     diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
411     index 1717ba1db35d..510f687d269a 100644
412     --- a/arch/arm64/include/asm/cpufeature.h
413     +++ b/arch/arm64/include/asm/cpufeature.h
414     @@ -45,9 +45,10 @@
415     */
416    
417     enum ftr_type {
418     - FTR_EXACT, /* Use a predefined safe value */
419     - FTR_LOWER_SAFE, /* Smaller value is safe */
420     - FTR_HIGHER_SAFE,/* Bigger value is safe */
421     + FTR_EXACT, /* Use a predefined safe value */
422     + FTR_LOWER_SAFE, /* Smaller value is safe */
423     + FTR_HIGHER_SAFE, /* Bigger value is safe */
424     + FTR_HIGHER_OR_ZERO_SAFE, /* Bigger value is safe, but 0 is biggest */
425     };
426    
427     #define FTR_STRICT true /* SANITY check strict matching required */
428     diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
429     index 93f69d82225d..bce06083685d 100644
430     --- a/arch/arm64/kernel/cpufeature.c
431     +++ b/arch/arm64/kernel/cpufeature.c
432     @@ -206,8 +206,8 @@ static const struct arm64_ftr_bits ftr_ctr[] = {
433     ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, 31, 1, 1), /* RES1 */
434     ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_DIC_SHIFT, 1, 1),
435     ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_IDC_SHIFT, 1, 1),
436     - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_HIGHER_SAFE, CTR_CWG_SHIFT, 4, 0),
437     - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_HIGHER_SAFE, CTR_ERG_SHIFT, 4, 0),
438     + ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_HIGHER_OR_ZERO_SAFE, CTR_CWG_SHIFT, 4, 0),
439     + ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_HIGHER_OR_ZERO_SAFE, CTR_ERG_SHIFT, 4, 0),
440     ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_DMINLINE_SHIFT, 4, 1),
441     /*
442     * Linux can handle differing I-cache policies. Userspace JITs will
443     @@ -449,6 +449,10 @@ static s64 arm64_ftr_safe_value(const struct arm64_ftr_bits *ftrp, s64 new,
444     case FTR_LOWER_SAFE:
445     ret = new < cur ? new : cur;
446     break;
447     + case FTR_HIGHER_OR_ZERO_SAFE:
448     + if (!cur || !new)
449     + break;
450     + /* Fallthrough */
451     case FTR_HIGHER_SAFE:
452     ret = new > cur ? new : cur;
453     break;
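The new `FTR_HIGHER_OR_ZERO_SAFE` type above handles CTR_EL0 fields such as CWG/ERG where an encoding of 0 means "unknown/maximum", so 0 must win over any non-zero value. A standalone sketch of that selection rule (not the kernel's `arm64_ftr_safe_value()` itself):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Safe-value selection for a "higher is safe, but 0 is biggest" field:
 * if either CPU reports 0 (meaning unknown/maximum), the system-wide
 * safe value is 0; otherwise the larger reported value is safe, same
 * as FTR_HIGHER_SAFE.
 */
static int64_t higher_or_zero_safe(int64_t new, int64_t cur)
{
	if (new == 0 || cur == 0)
		return 0;		/* 0 encodes "biggest", so it wins */
	return new > cur ? new : cur;	/* plain FTR_HIGHER_SAFE rule */
}
```

This matches the patched switch: the `if (!cur || !new) break;` leaves `ret` at its initial 0, and otherwise falls through into the `FTR_HIGHER_SAFE` case.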
454     diff --git a/arch/arm64/kernel/hw_breakpoint.c b/arch/arm64/kernel/hw_breakpoint.c
455     index 8c9644376326..7c0611f5d2ce 100644
456     --- a/arch/arm64/kernel/hw_breakpoint.c
457     +++ b/arch/arm64/kernel/hw_breakpoint.c
458     @@ -547,13 +547,14 @@ int hw_breakpoint_arch_parse(struct perf_event *bp,
459     /* Aligned */
460     break;
461     case 1:
462     - /* Allow single byte watchpoint. */
463     - if (hw->ctrl.len == ARM_BREAKPOINT_LEN_1)
464     - break;
465     case 2:
466     /* Allow halfword watchpoints and breakpoints. */
467     if (hw->ctrl.len == ARM_BREAKPOINT_LEN_2)
468     break;
469     + case 3:
470     + /* Allow single byte watchpoint. */
471     + if (hw->ctrl.len == ARM_BREAKPOINT_LEN_1)
472     + break;
473     default:
474     return -EINVAL;
475     }
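The hw_breakpoint hunk above reorders the switch so that byte offset 3 within a word accepts single-byte watchpoints, while offsets 1 and 2 first try the halfword length and fall through to the byte check. A toy C model of the accepted (offset, length) combinations under that ordering; the function is illustrative, not the kernel's:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Model of the patched alignment check: offset is the watchpoint
 * address modulo 4, len the requested watchpoint length in bytes.
 */
static bool watch_align_ok(unsigned int offset, unsigned int len)
{
	switch (offset) {
	case 0:
		return true;		/* aligned: any supported length */
	case 1:
	case 2:
		if (len == 2)
			return true;	/* halfword fits at offset 1 or 2 */
		/* fallthrough: a single byte also fits here */
	case 3:
		return len == 1;	/* offset 3 can only cover one byte */
	default:
		return false;
	}
}
```

Before the patch, offset 3 fell straight into the default case, so valid single-byte watchpoints at that offset were rejected with -EINVAL.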
476     diff --git a/arch/mips/lantiq/irq.c b/arch/mips/lantiq/irq.c
477     index c4ef1c31e0c4..37caeadb2964 100644
478     --- a/arch/mips/lantiq/irq.c
479     +++ b/arch/mips/lantiq/irq.c
480     @@ -156,8 +156,9 @@ static int ltq_eiu_settype(struct irq_data *d, unsigned int type)
481     if (edge)
482     irq_set_handler(d->hwirq, handle_edge_irq);
483    
484     - ltq_eiu_w32(ltq_eiu_r32(LTQ_EIU_EXIN_C) |
485     - (val << (i * 4)), LTQ_EIU_EXIN_C);
486     + ltq_eiu_w32((ltq_eiu_r32(LTQ_EIU_EXIN_C) &
487     + (~(7 << (i * 4)))) | (val << (i * 4)),
488     + LTQ_EIU_EXIN_C);
489     }
490     }
491    
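The lantiq hunk above converts a plain OR into a read-modify-write that first clears the interrupt's trigger-type field: each EIU external interrupt owns a 4-bit slot in EXIN_C, and OR-ing a new 3-bit type over a stale one can leave old bits set. A sketch of the register arithmetic, with an illustrative helper name:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Update the 3-bit trigger type for external interrupt i inside its
 * 4-bit EXIN_C slot. Clearing the field before OR-ing is the fix:
 * without the mask, switching e.g. type 6 -> 3 would leave 7 behind.
 */
static uint32_t exin_set_type(uint32_t reg, unsigned int i, uint32_t val)
{
	reg &= ~(7u << (i * 4));	/* clear the old trigger bits */
	reg |= (val & 7u) << (i * 4);	/* install the new type */
	return reg;
}
```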
492     diff --git a/arch/parisc/boot/compressed/vmlinux.lds.S b/arch/parisc/boot/compressed/vmlinux.lds.S
493     index 4ebd4e65524c..41ebe97fad10 100644
494     --- a/arch/parisc/boot/compressed/vmlinux.lds.S
495     +++ b/arch/parisc/boot/compressed/vmlinux.lds.S
496     @@ -42,8 +42,8 @@ SECTIONS
497     #endif
498     _startcode_end = .;
499    
500     - /* bootloader code and data starts behind area of extracted kernel */
501     - . = (SZ_end - SZparisc_kernel_start + KERNEL_BINARY_TEXT_START);
502     + /* bootloader code and data starts at least behind area of extracted kernel */
503     + . = MAX(ABSOLUTE(.), (SZ_end - SZparisc_kernel_start + KERNEL_BINARY_TEXT_START));
504    
505     /* align on next page boundary */
506     . = ALIGN(4096);
507     diff --git a/arch/x86/boot/compressed/misc.c b/arch/x86/boot/compressed/misc.c
508     index 8dd1d5ccae58..0387d7a96c84 100644
509     --- a/arch/x86/boot/compressed/misc.c
510     +++ b/arch/x86/boot/compressed/misc.c
511     @@ -17,6 +17,7 @@
512     #include "pgtable.h"
513     #include "../string.h"
514     #include "../voffset.h"
515     +#include <asm/bootparam_utils.h>
516    
517     /*
518     * WARNING!!
519     diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h
520     index a423bdb42686..47fd18db6b3b 100644
521     --- a/arch/x86/boot/compressed/misc.h
522     +++ b/arch/x86/boot/compressed/misc.h
523     @@ -22,7 +22,6 @@
524     #include <asm/page.h>
525     #include <asm/boot.h>
526     #include <asm/bootparam.h>
527     -#include <asm/bootparam_utils.h>
528    
529     #define BOOT_BOOT_H
530     #include "../ctype.h"
531     diff --git a/arch/x86/entry/calling.h b/arch/x86/entry/calling.h
532     index e699b2041665..578b5455334f 100644
533     --- a/arch/x86/entry/calling.h
534     +++ b/arch/x86/entry/calling.h
535     @@ -329,6 +329,23 @@ For 32-bit we have the following conventions - kernel is built with
536    
537     #endif
538    
539     +/*
540     + * Mitigate Spectre v1 for conditional swapgs code paths.
541     + *
542     + * FENCE_SWAPGS_USER_ENTRY is used in the user entry swapgs code path, to
543     + * prevent a speculative swapgs when coming from kernel space.
544     + *
545     + * FENCE_SWAPGS_KERNEL_ENTRY is used in the kernel entry non-swapgs code path,
546     + * to prevent the swapgs from getting speculatively skipped when coming from
547     + * user space.
548     + */
549     +.macro FENCE_SWAPGS_USER_ENTRY
550     + ALTERNATIVE "", "lfence", X86_FEATURE_FENCE_SWAPGS_USER
551     +.endm
552     +.macro FENCE_SWAPGS_KERNEL_ENTRY
553     + ALTERNATIVE "", "lfence", X86_FEATURE_FENCE_SWAPGS_KERNEL
554     +.endm
555     +
556     #endif /* CONFIG_X86_64 */
557    
558     /*
559     diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
560     index 206df099950e..ccb5e3486aee 100644
561     --- a/arch/x86/entry/entry_64.S
562     +++ b/arch/x86/entry/entry_64.S
563     @@ -582,7 +582,7 @@ ENTRY(interrupt_entry)
564     testb $3, CS-ORIG_RAX+8(%rsp)
565     jz 1f
566     SWAPGS
567     -
568     + FENCE_SWAPGS_USER_ENTRY
569     /*
570     * Switch to the thread stack. The IRET frame and orig_ax are
571     * on the stack, as well as the return address. RDI..R12 are
572     @@ -612,8 +612,10 @@ ENTRY(interrupt_entry)
573     UNWIND_HINT_FUNC
574    
575     movq (%rdi), %rdi
576     + jmp 2f
577     1:
578     -
579     + FENCE_SWAPGS_KERNEL_ENTRY
580     +2:
581     PUSH_AND_CLEAR_REGS save_ret=1
582     ENCODE_FRAME_POINTER 8
583    
584     @@ -1196,7 +1198,6 @@ idtentry stack_segment do_stack_segment has_error_code=1
585     #ifdef CONFIG_XEN
586     idtentry xennmi do_nmi has_error_code=0
587     idtentry xendebug do_debug has_error_code=0
588     -idtentry xenint3 do_int3 has_error_code=0
589     #endif
590    
591     idtentry general_protection do_general_protection has_error_code=1
592     @@ -1241,6 +1242,13 @@ ENTRY(paranoid_entry)
593     */
594     SAVE_AND_SWITCH_TO_KERNEL_CR3 scratch_reg=%rax save_reg=%r14
595    
596     + /*
597     + * The above SAVE_AND_SWITCH_TO_KERNEL_CR3 macro doesn't do an
598     + * unconditional CR3 write, even in the PTI case. So do an lfence
599     + * to prevent GS speculation, regardless of whether PTI is enabled.
600     + */
601     + FENCE_SWAPGS_KERNEL_ENTRY
602     +
603     ret
604     END(paranoid_entry)
605    
606     @@ -1291,6 +1299,7 @@ ENTRY(error_entry)
607     * from user mode due to an IRET fault.
608     */
609     SWAPGS
610     + FENCE_SWAPGS_USER_ENTRY
611     /* We have user CR3. Change to kernel CR3. */
612     SWITCH_TO_KERNEL_CR3 scratch_reg=%rax
613    
614     @@ -1312,6 +1321,8 @@ ENTRY(error_entry)
615     CALL_enter_from_user_mode
616     ret
617    
618     +.Lerror_entry_done_lfence:
619     + FENCE_SWAPGS_KERNEL_ENTRY
620     .Lerror_entry_done:
621     TRACE_IRQS_OFF
622     ret
623     @@ -1330,7 +1341,7 @@ ENTRY(error_entry)
624     cmpq %rax, RIP+8(%rsp)
625     je .Lbstep_iret
626     cmpq $.Lgs_change, RIP+8(%rsp)
627     - jne .Lerror_entry_done
628     + jne .Lerror_entry_done_lfence
629    
630     /*
631     * hack: .Lgs_change can fail with user gsbase. If this happens, fix up
632     @@ -1338,6 +1349,7 @@ ENTRY(error_entry)
633     * .Lgs_change's error handler with kernel gsbase.
634     */
635     SWAPGS
636     + FENCE_SWAPGS_USER_ENTRY
637     SWITCH_TO_KERNEL_CR3 scratch_reg=%rax
638     jmp .Lerror_entry_done
639    
640     @@ -1352,6 +1364,7 @@ ENTRY(error_entry)
641     * gsbase and CR3. Switch to kernel gsbase and CR3:
642     */
643     SWAPGS
644     + FENCE_SWAPGS_USER_ENTRY
645     SWITCH_TO_KERNEL_CR3 scratch_reg=%rax
646    
647     /*
648     @@ -1443,6 +1456,7 @@ ENTRY(nmi)
649    
650     swapgs
651     cld
652     + FENCE_SWAPGS_USER_ENTRY
653     SWITCH_TO_KERNEL_CR3 scratch_reg=%rdx
654     movq %rsp, %rdx
655     movq PER_CPU_VAR(cpu_current_top_of_stack), %rsp
656     diff --git a/arch/x86/entry/vdso/vclock_gettime.c b/arch/x86/entry/vdso/vclock_gettime.c
657     index e48ca3afa091..8a88e738f87d 100644
658     --- a/arch/x86/entry/vdso/vclock_gettime.c
659     +++ b/arch/x86/entry/vdso/vclock_gettime.c
660     @@ -29,12 +29,12 @@ extern int __vdso_gettimeofday(struct timeval *tv, struct timezone *tz);
661     extern time_t __vdso_time(time_t *t);
662    
663     #ifdef CONFIG_PARAVIRT_CLOCK
664     -extern u8 pvclock_page
665     +extern u8 pvclock_page[PAGE_SIZE]
666     __attribute__((visibility("hidden")));
667     #endif
668    
669     #ifdef CONFIG_HYPERV_TSCPAGE
670     -extern u8 hvclock_page
671     +extern u8 hvclock_page[PAGE_SIZE]
672     __attribute__((visibility("hidden")));
673     #endif
674    
675     @@ -191,13 +191,24 @@ notrace static inline u64 vgetsns(int *mode)
676    
677     if (gtod->vclock_mode == VCLOCK_TSC)
678     cycles = vread_tsc();
679     +
680     + /*
681     + * For any memory-mapped vclock type, we need to make sure that gcc
682     + * doesn't cleverly hoist a load before the mode check. Otherwise we
683     + * might end up touching the memory-mapped page even if the vclock in
684     + * question isn't enabled, which will segfault. Hence the barriers.
685     + */
686     #ifdef CONFIG_PARAVIRT_CLOCK
687     - else if (gtod->vclock_mode == VCLOCK_PVCLOCK)
688     + else if (gtod->vclock_mode == VCLOCK_PVCLOCK) {
689     + barrier();
690     cycles = vread_pvclock(mode);
691     + }
692     #endif
693     #ifdef CONFIG_HYPERV_TSCPAGE
694     - else if (gtod->vclock_mode == VCLOCK_HVCLOCK)
695     + else if (gtod->vclock_mode == VCLOCK_HVCLOCK) {
696     + barrier();
697     cycles = vread_hvclock(mode);
698     + }
699     #endif
700     else
701     return 0;
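The vDSO hunk above inserts `barrier()` so the compiler cannot hoist the load from a possibly-unmapped clock page above the mode check. A hedged user-space sketch of the pattern; `barrier()` mirrors the kernel macro, and the names are illustrative:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Compiler-only barrier, as in the kernel: an empty asm with a memory
 * clobber that keeps loads/stores from being reordered across it. */
#define barrier() __asm__ __volatile__("" ::: "memory")

enum vclock_mode { VCLOCK_NONE = 0, VCLOCK_PVCLOCK = 1 };

static uint64_t demo_page = 42;	/* stands in for the mapped clock page */

static uint64_t read_cycles(int mode, const volatile uint64_t *clock_page)
{
	if (mode == VCLOCK_PVCLOCK) {
		/* Without the barrier, the compiler may hoist *clock_page
		 * above this branch and fault when the page is unmapped. */
		barrier();
		return *clock_page;
	}
	return 0;	/* clock_page may be NULL/unmapped on this path */
}
```

Passing a NULL page with the mode disabled must not fault, which is exactly the property the barrier preserves against compiler reordering.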
702     diff --git a/arch/x86/include/asm/apic.h b/arch/x86/include/asm/apic.h
703     index 130e81e10fc7..050368db9d35 100644
704     --- a/arch/x86/include/asm/apic.h
705     +++ b/arch/x86/include/asm/apic.h
706     @@ -48,7 +48,7 @@ static inline void generic_apic_probe(void)
707    
708     #ifdef CONFIG_X86_LOCAL_APIC
709    
710     -extern unsigned int apic_verbosity;
711     +extern int apic_verbosity;
712     extern int local_apic_timer_c2_ok;
713    
714     extern int disable_apic;
715     diff --git a/arch/x86/include/asm/cpufeature.h b/arch/x86/include/asm/cpufeature.h
716     index ce95b8cbd229..68889ace9c4c 100644
717     --- a/arch/x86/include/asm/cpufeature.h
718     +++ b/arch/x86/include/asm/cpufeature.h
719     @@ -22,8 +22,8 @@ enum cpuid_leafs
720     CPUID_LNX_3,
721     CPUID_7_0_EBX,
722     CPUID_D_1_EAX,
723     - CPUID_F_0_EDX,
724     - CPUID_F_1_EDX,
725     + CPUID_LNX_4,
726     + CPUID_DUMMY,
727     CPUID_8000_0008_EBX,
728     CPUID_6_EAX,
729     CPUID_8000_000A_EDX,
730     diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
731     index 0cf704933f23..759f0a176612 100644
732     --- a/arch/x86/include/asm/cpufeatures.h
733     +++ b/arch/x86/include/asm/cpufeatures.h
734     @@ -271,13 +271,18 @@
735     #define X86_FEATURE_XGETBV1 (10*32+ 2) /* XGETBV with ECX = 1 instruction */
736     #define X86_FEATURE_XSAVES (10*32+ 3) /* XSAVES/XRSTORS instructions */
737    
738     -/* Intel-defined CPU QoS Sub-leaf, CPUID level 0x0000000F:0 (EDX), word 11 */
739     -#define X86_FEATURE_CQM_LLC (11*32+ 1) /* LLC QoS if 1 */
740     -
741     -/* Intel-defined CPU QoS Sub-leaf, CPUID level 0x0000000F:1 (EDX), word 12 */
742     -#define X86_FEATURE_CQM_OCCUP_LLC (12*32+ 0) /* LLC occupancy monitoring */
743     -#define X86_FEATURE_CQM_MBM_TOTAL (12*32+ 1) /* LLC Total MBM monitoring */
744     -#define X86_FEATURE_CQM_MBM_LOCAL (12*32+ 2) /* LLC Local MBM monitoring */
745     +/*
746     + * Extended auxiliary flags: Linux defined - for features scattered in various
747     + * CPUID levels like 0xf, etc.
748     + *
749     + * Reuse free bits when adding new feature flags!
750     + */
751     +#define X86_FEATURE_CQM_LLC (11*32+ 0) /* LLC QoS if 1 */
752     +#define X86_FEATURE_CQM_OCCUP_LLC (11*32+ 1) /* LLC occupancy monitoring */
753     +#define X86_FEATURE_CQM_MBM_TOTAL (11*32+ 2) /* LLC Total MBM monitoring */
754     +#define X86_FEATURE_CQM_MBM_LOCAL (11*32+ 3) /* LLC Local MBM monitoring */
755     +#define X86_FEATURE_FENCE_SWAPGS_USER (11*32+ 4) /* "" LFENCE in user entry SWAPGS path */
756     +#define X86_FEATURE_FENCE_SWAPGS_KERNEL (11*32+ 5) /* "" LFENCE in kernel entry SWAPGS path */
757    
758     /* AMD-defined CPU features, CPUID level 0x80000008 (EBX), word 13 */
759     #define X86_FEATURE_CLZERO (13*32+ 0) /* CLZERO instruction */
760     @@ -383,5 +388,6 @@
761     #define X86_BUG_L1TF X86_BUG(18) /* CPU is affected by L1 Terminal Fault */
762     #define X86_BUG_MDS X86_BUG(19) /* CPU is affected by Microarchitectural data sampling */
763     #define X86_BUG_MSBDS_ONLY X86_BUG(20) /* CPU is only affected by the MSDBS variant of BUG_MDS */
764     +#define X86_BUG_SWAPGS X86_BUG(21) /* CPU is affected by speculation through SWAPGS */
765    
766     #endif /* _ASM_X86_CPUFEATURES_H */
767     diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
768     index 7014dba23d20..2877e1fbadd8 100644
769     --- a/arch/x86/include/asm/kvm_host.h
770     +++ b/arch/x86/include/asm/kvm_host.h
771     @@ -1427,25 +1427,29 @@ enum {
772     #define kvm_arch_vcpu_memslots_id(vcpu) ((vcpu)->arch.hflags & HF_SMM_MASK ? 1 : 0)
773     #define kvm_memslots_for_spte_role(kvm, role) __kvm_memslots(kvm, (role).smm)
774    
775     +asmlinkage void __noreturn kvm_spurious_fault(void);
776     +
777     /*
778     * Hardware virtualization extension instructions may fault if a
779     * reboot turns off virtualization while processes are running.
780     - * Trap the fault and ignore the instruction if that happens.
781     + * Usually after catching the fault we just panic; during reboot
782     + * instead the instruction is ignored.
783     */
784     -asmlinkage void kvm_spurious_fault(void);
785     -
786     -#define ____kvm_handle_fault_on_reboot(insn, cleanup_insn) \
787     - "666: " insn "\n\t" \
788     - "668: \n\t" \
789     - ".pushsection .fixup, \"ax\" \n" \
790     - "667: \n\t" \
791     - cleanup_insn "\n\t" \
792     - "cmpb $0, kvm_rebooting \n\t" \
793     - "jne 668b \n\t" \
794     - __ASM_SIZE(push) " $666b \n\t" \
795     - "jmp kvm_spurious_fault \n\t" \
796     - ".popsection \n\t" \
797     - _ASM_EXTABLE(666b, 667b)
798     +#define ____kvm_handle_fault_on_reboot(insn, cleanup_insn) \
799     + "666: \n\t" \
800     + insn "\n\t" \
801     + "jmp 668f \n\t" \
802     + "667: \n\t" \
803     + "call kvm_spurious_fault \n\t" \
804     + "668: \n\t" \
805     + ".pushsection .fixup, \"ax\" \n\t" \
806     + "700: \n\t" \
807     + cleanup_insn "\n\t" \
808     + "cmpb $0, kvm_rebooting\n\t" \
809     + "je 667b \n\t" \
810     + "jmp 668b \n\t" \
811     + ".popsection \n\t" \
812     + _ASM_EXTABLE(666b, 700b)
813    
814     #define __kvm_handle_fault_on_reboot(insn) \
815     ____kvm_handle_fault_on_reboot(insn, "")
816     diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
817     index e375d4266b53..a04677038872 100644
818     --- a/arch/x86/include/asm/paravirt.h
819     +++ b/arch/x86/include/asm/paravirt.h
820     @@ -768,6 +768,7 @@ static __always_inline bool pv_vcpu_is_preempted(long cpu)
821     PV_RESTORE_ALL_CALLER_REGS \
822     FRAME_END \
823     "ret;" \
824     + ".size " PV_THUNK_NAME(func) ", .-" PV_THUNK_NAME(func) ";" \
825     ".popsection")
826    
827     /* Get a reference to a callee-save function */
828     diff --git a/arch/x86/include/asm/traps.h b/arch/x86/include/asm/traps.h
829     index afbc87206886..b771bb3d159b 100644
830     --- a/arch/x86/include/asm/traps.h
831     +++ b/arch/x86/include/asm/traps.h
832     @@ -40,7 +40,7 @@ asmlinkage void simd_coprocessor_error(void);
833     asmlinkage void xen_divide_error(void);
834     asmlinkage void xen_xennmi(void);
835     asmlinkage void xen_xendebug(void);
836     -asmlinkage void xen_xenint3(void);
837     +asmlinkage void xen_int3(void);
838     asmlinkage void xen_overflow(void);
839     asmlinkage void xen_bounds(void);
840     asmlinkage void xen_invalid_op(void);
841     diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c
842     index 02020f2e0080..272a12865b2a 100644
843     --- a/arch/x86/kernel/apic/apic.c
844     +++ b/arch/x86/kernel/apic/apic.c
845     @@ -181,7 +181,7 @@ EXPORT_SYMBOL_GPL(local_apic_timer_c2_ok);
846     /*
847     * Debug level, exported for io_apic.c
848     */
849     -unsigned int apic_verbosity;
850     +int apic_verbosity;
851    
852     int pic_mode;
853    
854     diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
855     index c5690440fbd4..ee7d17611ead 100644
856     --- a/arch/x86/kernel/cpu/bugs.c
857     +++ b/arch/x86/kernel/cpu/bugs.c
858     @@ -32,6 +32,7 @@
859     #include <asm/e820/api.h>
860     #include <asm/hypervisor.h>
861    
862     +static void __init spectre_v1_select_mitigation(void);
863     static void __init spectre_v2_select_mitigation(void);
864     static void __init ssb_select_mitigation(void);
865     static void __init l1tf_select_mitigation(void);
866     @@ -96,17 +97,11 @@ void __init check_bugs(void)
867     if (boot_cpu_has(X86_FEATURE_STIBP))
868     x86_spec_ctrl_mask |= SPEC_CTRL_STIBP;
869    
870     - /* Select the proper spectre mitigation before patching alternatives */
871     + /* Select the proper CPU mitigations before patching alternatives: */
872     + spectre_v1_select_mitigation();
873     spectre_v2_select_mitigation();
874     -
875     - /*
876     - * Select proper mitigation for any exposure to the Speculative Store
877     - * Bypass vulnerability.
878     - */
879     ssb_select_mitigation();
880     -
881     l1tf_select_mitigation();
882     -
883     mds_select_mitigation();
884    
885     arch_smt_update();
886     @@ -271,6 +266,98 @@ static int __init mds_cmdline(char *str)
887     }
888     early_param("mds", mds_cmdline);
889    
890     +#undef pr_fmt
891     +#define pr_fmt(fmt) "Spectre V1 : " fmt
892     +
893     +enum spectre_v1_mitigation {
894     + SPECTRE_V1_MITIGATION_NONE,
895     + SPECTRE_V1_MITIGATION_AUTO,
896     +};
897     +
898     +static enum spectre_v1_mitigation spectre_v1_mitigation __ro_after_init =
899     + SPECTRE_V1_MITIGATION_AUTO;
900     +
901     +static const char * const spectre_v1_strings[] = {
902     + [SPECTRE_V1_MITIGATION_NONE] = "Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers",
903     + [SPECTRE_V1_MITIGATION_AUTO] = "Mitigation: usercopy/swapgs barriers and __user pointer sanitization",
904     +};
905     +
906     +/*
907     + * Does SMAP provide full mitigation against speculative kernel access to
908     + * userspace?
909     + */
910     +static bool smap_works_speculatively(void)
911     +{
912     + if (!boot_cpu_has(X86_FEATURE_SMAP))
913     + return false;
914     +
915     + /*
916     + * On CPUs which are vulnerable to Meltdown, SMAP does not
917     + * prevent speculative access to user data in the L1 cache.
918     + * Consider SMAP to be non-functional as a mitigation on these
919     + * CPUs.
920     + */
921     + if (boot_cpu_has(X86_BUG_CPU_MELTDOWN))
922     + return false;
923     +
924     + return true;
925     +}
926     +
927     +static void __init spectre_v1_select_mitigation(void)
928     +{
929     + if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V1) || cpu_mitigations_off()) {
930     + spectre_v1_mitigation = SPECTRE_V1_MITIGATION_NONE;
931     + return;
932     + }
933     +
934     + if (spectre_v1_mitigation == SPECTRE_V1_MITIGATION_AUTO) {
935     + /*
936     + * With Spectre v1, a user can speculatively control either
937     + * path of a conditional swapgs with a user-controlled GS
938     + * value. The mitigation is to add lfences to both code paths.
939     + *
940     + * If FSGSBASE is enabled, the user can put a kernel address in
941     + * GS, in which case SMAP provides no protection.
942     + *
943     + * [ NOTE: Don't check for X86_FEATURE_FSGSBASE until the
944     + * FSGSBASE enablement patches have been merged. ]
945     + *
946     + * If FSGSBASE is disabled, the user can only put a user space
947     + * address in GS. That makes an attack harder, but still
948     + * possible if there's no SMAP protection.
949     + */
950     + if (!smap_works_speculatively()) {
951     + /*
952     + * Mitigation can be provided from SWAPGS itself or
953     + * PTI as the CR3 write in the Meltdown mitigation
954     + * is serializing.
955     + *
956     + * If neither is there, mitigate with an LFENCE to
957     + * stop speculation through swapgs.
958     + */
959     + if (boot_cpu_has_bug(X86_BUG_SWAPGS) &&
960     + !boot_cpu_has(X86_FEATURE_PTI))
961     + setup_force_cpu_cap(X86_FEATURE_FENCE_SWAPGS_USER);
962     +
963     + /*
964     + * Enable lfences in the kernel entry (non-swapgs)
965     + * paths, to prevent user entry from speculatively
966     + * skipping swapgs.
967     + */
968     + setup_force_cpu_cap(X86_FEATURE_FENCE_SWAPGS_KERNEL);
969     + }
970     + }
971     +
972     + pr_info("%s\n", spectre_v1_strings[spectre_v1_mitigation]);
973     +}
974     +
975     +static int __init nospectre_v1_cmdline(char *str)
976     +{
977     + spectre_v1_mitigation = SPECTRE_V1_MITIGATION_NONE;
978     + return 0;
979     +}
980     +early_param("nospectre_v1", nospectre_v1_cmdline);
981     +
982     #undef pr_fmt
983     #define pr_fmt(fmt) "Spectre V2 : " fmt
984    
985     @@ -1258,7 +1345,7 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
986     break;
987    
988     case X86_BUG_SPECTRE_V1:
989     - return sprintf(buf, "Mitigation: __user pointer sanitization\n");
990     + return sprintf(buf, "%s\n", spectre_v1_strings[spectre_v1_mitigation]);
991    
992     case X86_BUG_SPECTRE_V2:
993     return sprintf(buf, "%s%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
994     diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
995     index 1073118b9bf0..b33fdfa0ff49 100644
996     --- a/arch/x86/kernel/cpu/common.c
997     +++ b/arch/x86/kernel/cpu/common.c
998     @@ -808,6 +808,30 @@ static void init_speculation_control(struct cpuinfo_x86 *c)
999     }
1000     }
1001    
1002     +static void init_cqm(struct cpuinfo_x86 *c)
1003     +{
1004     + if (!cpu_has(c, X86_FEATURE_CQM_LLC)) {
1005     + c->x86_cache_max_rmid = -1;
1006     + c->x86_cache_occ_scale = -1;
1007     + return;
1008     + }
1009     +
1010     + /* will be overridden if occupancy monitoring exists */
1011     + c->x86_cache_max_rmid = cpuid_ebx(0xf);
1012     +
1013     + if (cpu_has(c, X86_FEATURE_CQM_OCCUP_LLC) ||
1014     + cpu_has(c, X86_FEATURE_CQM_MBM_TOTAL) ||
1015     + cpu_has(c, X86_FEATURE_CQM_MBM_LOCAL)) {
1016     + u32 eax, ebx, ecx, edx;
1017     +
1018     + /* QoS sub-leaf, EAX=0Fh, ECX=1 */
1019     + cpuid_count(0xf, 1, &eax, &ebx, &ecx, &edx);
1020     +
1021     + c->x86_cache_max_rmid = ecx;
1022     + c->x86_cache_occ_scale = ebx;
1023     + }
1024     +}
1025     +
1026     void get_cpu_cap(struct cpuinfo_x86 *c)
1027     {
1028     u32 eax, ebx, ecx, edx;
1029     @@ -839,33 +863,6 @@ void get_cpu_cap(struct cpuinfo_x86 *c)
1030     c->x86_capability[CPUID_D_1_EAX] = eax;
1031     }
1032    
1033     - /* Additional Intel-defined flags: level 0x0000000F */
1034     - if (c->cpuid_level >= 0x0000000F) {
1035     -
1036     - /* QoS sub-leaf, EAX=0Fh, ECX=0 */
1037     - cpuid_count(0x0000000F, 0, &eax, &ebx, &ecx, &edx);
1038     - c->x86_capability[CPUID_F_0_EDX] = edx;
1039     -
1040     - if (cpu_has(c, X86_FEATURE_CQM_LLC)) {
1041     - /* will be overridden if occupancy monitoring exists */
1042     - c->x86_cache_max_rmid = ebx;
1043     -
1044     - /* QoS sub-leaf, EAX=0Fh, ECX=1 */
1045     - cpuid_count(0x0000000F, 1, &eax, &ebx, &ecx, &edx);
1046     - c->x86_capability[CPUID_F_1_EDX] = edx;
1047     -
1048     - if ((cpu_has(c, X86_FEATURE_CQM_OCCUP_LLC)) ||
1049     - ((cpu_has(c, X86_FEATURE_CQM_MBM_TOTAL)) ||
1050     - (cpu_has(c, X86_FEATURE_CQM_MBM_LOCAL)))) {
1051     - c->x86_cache_max_rmid = ecx;
1052     - c->x86_cache_occ_scale = ebx;
1053     - }
1054     - } else {
1055     - c->x86_cache_max_rmid = -1;
1056     - c->x86_cache_occ_scale = -1;
1057     - }
1058     - }
1059     -
1060     /* AMD-defined flags: level 0x80000001 */
1061     eax = cpuid_eax(0x80000000);
1062     c->extended_cpuid_level = eax;
1063     @@ -896,6 +893,7 @@ void get_cpu_cap(struct cpuinfo_x86 *c)
1064    
1065     init_scattered_cpuid_features(c);
1066     init_speculation_control(c);
1067     + init_cqm(c);
1068    
1069     /*
1070     * Clear/Set all flags overridden by options, after probe.
1071     @@ -954,6 +952,7 @@ static void identify_cpu_without_cpuid(struct cpuinfo_x86 *c)
1072     #define NO_L1TF BIT(3)
1073     #define NO_MDS BIT(4)
1074     #define MSBDS_ONLY BIT(5)
1075     +#define NO_SWAPGS BIT(6)
1076    
1077     #define VULNWL(_vendor, _family, _model, _whitelist) \
1078     { X86_VENDOR_##_vendor, _family, _model, X86_FEATURE_ANY, _whitelist }
1079     @@ -977,29 +976,37 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
1080     VULNWL_INTEL(ATOM_BONNELL, NO_SPECULATION),
1081     VULNWL_INTEL(ATOM_BONNELL_MID, NO_SPECULATION),
1082    
1083     - VULNWL_INTEL(ATOM_SILVERMONT, NO_SSB | NO_L1TF | MSBDS_ONLY),
1084     - VULNWL_INTEL(ATOM_SILVERMONT_X, NO_SSB | NO_L1TF | MSBDS_ONLY),
1085     - VULNWL_INTEL(ATOM_SILVERMONT_MID, NO_SSB | NO_L1TF | MSBDS_ONLY),
1086     - VULNWL_INTEL(ATOM_AIRMONT, NO_SSB | NO_L1TF | MSBDS_ONLY),
1087     - VULNWL_INTEL(XEON_PHI_KNL, NO_SSB | NO_L1TF | MSBDS_ONLY),
1088     - VULNWL_INTEL(XEON_PHI_KNM, NO_SSB | NO_L1TF | MSBDS_ONLY),
1089     + VULNWL_INTEL(ATOM_SILVERMONT, NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS),
1090     + VULNWL_INTEL(ATOM_SILVERMONT_X, NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS),
1091     + VULNWL_INTEL(ATOM_SILVERMONT_MID, NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS),
1092     + VULNWL_INTEL(ATOM_AIRMONT, NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS),
1093     + VULNWL_INTEL(XEON_PHI_KNL, NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS),
1094     + VULNWL_INTEL(XEON_PHI_KNM, NO_SSB | NO_L1TF | MSBDS_ONLY | NO_SWAPGS),
1095    
1096     VULNWL_INTEL(CORE_YONAH, NO_SSB),
1097    
1098     - VULNWL_INTEL(ATOM_AIRMONT_MID, NO_L1TF | MSBDS_ONLY),
1099     + VULNWL_INTEL(ATOM_AIRMONT_MID, NO_L1TF | MSBDS_ONLY | NO_SWAPGS),
1100    
1101     - VULNWL_INTEL(ATOM_GOLDMONT, NO_MDS | NO_L1TF),
1102     - VULNWL_INTEL(ATOM_GOLDMONT_X, NO_MDS | NO_L1TF),
1103     - VULNWL_INTEL(ATOM_GOLDMONT_PLUS, NO_MDS | NO_L1TF),
1104     + VULNWL_INTEL(ATOM_GOLDMONT, NO_MDS | NO_L1TF | NO_SWAPGS),
1105     + VULNWL_INTEL(ATOM_GOLDMONT_X, NO_MDS | NO_L1TF | NO_SWAPGS),
1106     + VULNWL_INTEL(ATOM_GOLDMONT_PLUS, NO_MDS | NO_L1TF | NO_SWAPGS),
1107     +
1108     + /*
1109     + * Technically, swapgs isn't serializing on AMD (despite it previously
1110     + * being documented as such in the APM). But according to AMD, %gs is
1111     + * updated non-speculatively, and the issuing of %gs-relative memory
1112     + * operands will be blocked until the %gs update completes, which is
1113     + * good enough for our purposes.
1114     + */
1115    
1116     /* AMD Family 0xf - 0x12 */
1117     - VULNWL_AMD(0x0f, NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS),
1118     - VULNWL_AMD(0x10, NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS),
1119     - VULNWL_AMD(0x11, NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS),
1120     - VULNWL_AMD(0x12, NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS),
1121     + VULNWL_AMD(0x0f, NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS),
1122     + VULNWL_AMD(0x10, NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS),
1123     + VULNWL_AMD(0x11, NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS),
1124     + VULNWL_AMD(0x12, NO_MELTDOWN | NO_SSB | NO_L1TF | NO_MDS | NO_SWAPGS),
1125    
1126     /* FAMILY_ANY must be last, otherwise 0x0f - 0x12 matches won't work */
1127     - VULNWL_AMD(X86_FAMILY_ANY, NO_MELTDOWN | NO_L1TF | NO_MDS),
1128     + VULNWL_AMD(X86_FAMILY_ANY, NO_MELTDOWN | NO_L1TF | NO_MDS | NO_SWAPGS),
1129     {}
1130     };
1131    
1132     @@ -1036,6 +1043,9 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
1133     setup_force_cpu_bug(X86_BUG_MSBDS_ONLY);
1134     }
1135    
1136     + if (!cpu_matches(NO_SWAPGS))
1137     + setup_force_cpu_bug(X86_BUG_SWAPGS);
1138     +
1139     if (cpu_matches(NO_MELTDOWN))
1140     return;
1141    
1142     diff --git a/arch/x86/kernel/cpu/cpuid-deps.c b/arch/x86/kernel/cpu/cpuid-deps.c
1143     index 2c0bd38a44ab..fa07a224e7b9 100644
1144     --- a/arch/x86/kernel/cpu/cpuid-deps.c
1145     +++ b/arch/x86/kernel/cpu/cpuid-deps.c
1146     @@ -59,6 +59,9 @@ static const struct cpuid_dep cpuid_deps[] = {
1147     { X86_FEATURE_AVX512_4VNNIW, X86_FEATURE_AVX512F },
1148     { X86_FEATURE_AVX512_4FMAPS, X86_FEATURE_AVX512F },
1149     { X86_FEATURE_AVX512_VPOPCNTDQ, X86_FEATURE_AVX512F },
1150     + { X86_FEATURE_CQM_OCCUP_LLC, X86_FEATURE_CQM_LLC },
1151     + { X86_FEATURE_CQM_MBM_TOTAL, X86_FEATURE_CQM_LLC },
1152     + { X86_FEATURE_CQM_MBM_LOCAL, X86_FEATURE_CQM_LLC },
1153     {}
1154     };
1155    
1156     diff --git a/arch/x86/kernel/cpu/scattered.c b/arch/x86/kernel/cpu/scattered.c
1157     index 772c219b6889..5a52672e3f8b 100644
1158     --- a/arch/x86/kernel/cpu/scattered.c
1159     +++ b/arch/x86/kernel/cpu/scattered.c
1160     @@ -21,6 +21,10 @@ struct cpuid_bit {
1161     static const struct cpuid_bit cpuid_bits[] = {
1162     { X86_FEATURE_APERFMPERF, CPUID_ECX, 0, 0x00000006, 0 },
1163     { X86_FEATURE_EPB, CPUID_ECX, 3, 0x00000006, 0 },
1164     + { X86_FEATURE_CQM_LLC, CPUID_EDX, 1, 0x0000000f, 0 },
1165     + { X86_FEATURE_CQM_OCCUP_LLC, CPUID_EDX, 0, 0x0000000f, 1 },
1166     + { X86_FEATURE_CQM_MBM_TOTAL, CPUID_EDX, 1, 0x0000000f, 1 },
1167     + { X86_FEATURE_CQM_MBM_LOCAL, CPUID_EDX, 2, 0x0000000f, 1 },
1168     { X86_FEATURE_CAT_L3, CPUID_EBX, 1, 0x00000010, 0 },
1169     { X86_FEATURE_CAT_L2, CPUID_EBX, 2, 0x00000010, 0 },
1170     { X86_FEATURE_CDP_L3, CPUID_ECX, 2, 0x00000010, 1 },
1171     diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
1172     index 7f89d609095a..cee45d46e67d 100644
1173     --- a/arch/x86/kernel/kvm.c
1174     +++ b/arch/x86/kernel/kvm.c
1175     @@ -830,6 +830,7 @@ asm(
1176     "cmpb $0, " __stringify(KVM_STEAL_TIME_preempted) "+steal_time(%rax);"
1177     "setne %al;"
1178     "ret;"
1179     +".size __raw_callee_save___kvm_vcpu_is_preempted, .-__raw_callee_save___kvm_vcpu_is_preempted;"
1180     ".popsection");
1181    
1182     #endif
1183     diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
1184     index 9a327d5b6d1f..d78a61408243 100644
1185     --- a/arch/x86/kvm/cpuid.h
1186     +++ b/arch/x86/kvm/cpuid.h
1187     @@ -47,8 +47,6 @@ static const struct cpuid_reg reverse_cpuid[] = {
1188     [CPUID_8000_0001_ECX] = {0x80000001, 0, CPUID_ECX},
1189     [CPUID_7_0_EBX] = { 7, 0, CPUID_EBX},
1190     [CPUID_D_1_EAX] = { 0xd, 1, CPUID_EAX},
1191     - [CPUID_F_0_EDX] = { 0xf, 0, CPUID_EDX},
1192     - [CPUID_F_1_EDX] = { 0xf, 1, CPUID_EDX},
1193     [CPUID_8000_0008_EBX] = {0x80000008, 0, CPUID_EBX},
1194     [CPUID_6_EAX] = { 6, 0, CPUID_EAX},
1195     [CPUID_8000_000A_EDX] = {0x8000000a, 0, CPUID_EDX},
1196     diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
1197     index e0f982e35c96..cdc0c460950f 100644
1198     --- a/arch/x86/kvm/mmu.c
1199     +++ b/arch/x86/kvm/mmu.c
1200     @@ -4532,11 +4532,11 @@ static void update_permission_bitmask(struct kvm_vcpu *vcpu,
1201     */
1202    
1203     /* Faults from writes to non-writable pages */
1204     - u8 wf = (pfec & PFERR_WRITE_MASK) ? ~w : 0;
1205     + u8 wf = (pfec & PFERR_WRITE_MASK) ? (u8)~w : 0;
1206     /* Faults from user mode accesses to supervisor pages */
1207     - u8 uf = (pfec & PFERR_USER_MASK) ? ~u : 0;
1208     + u8 uf = (pfec & PFERR_USER_MASK) ? (u8)~u : 0;
1209     /* Faults from fetches of non-executable pages*/
1210     - u8 ff = (pfec & PFERR_FETCH_MASK) ? ~x : 0;
1211     + u8 ff = (pfec & PFERR_FETCH_MASK) ? (u8)~x : 0;
1212     /* Faults from kernel mode fetches of user pages */
1213     u8 smepf = 0;
1214     /* Faults from kernel mode accesses of user pages */
1215     diff --git a/arch/x86/math-emu/fpu_emu.h b/arch/x86/math-emu/fpu_emu.h
1216     index a5a41ec58072..0c122226ca56 100644
1217     --- a/arch/x86/math-emu/fpu_emu.h
1218     +++ b/arch/x86/math-emu/fpu_emu.h
1219     @@ -177,7 +177,7 @@ static inline void reg_copy(FPU_REG const *x, FPU_REG *y)
1220     #define setexponentpos(x,y) { (*(short *)&((x)->exp)) = \
1221     ((y) + EXTENDED_Ebias) & 0x7fff; }
1222     #define exponent16(x) (*(short *)&((x)->exp))
1223     -#define setexponent16(x,y) { (*(short *)&((x)->exp)) = (y); }
1224     +#define setexponent16(x,y) { (*(short *)&((x)->exp)) = (u16)(y); }
1225     #define addexponent(x,y) { (*(short *)&((x)->exp)) += (y); }
1226     #define stdexp(x) { (*(short *)&((x)->exp)) += EXTENDED_Ebias; }
1227    
1228     diff --git a/arch/x86/math-emu/reg_constant.c b/arch/x86/math-emu/reg_constant.c
1229     index 8dc9095bab22..742619e94bdf 100644
1230     --- a/arch/x86/math-emu/reg_constant.c
1231     +++ b/arch/x86/math-emu/reg_constant.c
1232     @@ -18,7 +18,7 @@
1233     #include "control_w.h"
1234    
1235     #define MAKE_REG(s, e, l, h) { l, h, \
1236     - ((EXTENDED_Ebias+(e)) | ((SIGN_##s != 0)*0x8000)) }
1237     + (u16)((EXTENDED_Ebias+(e)) | ((SIGN_##s != 0)*0x8000)) }
1238    
1239     FPU_REG const CONST_1 = MAKE_REG(POS, 0, 0x00000000, 0x80000000);
1240     #if 0
1241     diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
1242     index 782f98b332f0..1730a26ff6ab 100644
1243     --- a/arch/x86/xen/enlighten_pv.c
1244     +++ b/arch/x86/xen/enlighten_pv.c
1245     @@ -597,12 +597,12 @@ struct trap_array_entry {
1246    
1247     static struct trap_array_entry trap_array[] = {
1248     { debug, xen_xendebug, true },
1249     - { int3, xen_xenint3, true },
1250     { double_fault, xen_double_fault, true },
1251     #ifdef CONFIG_X86_MCE
1252     { machine_check, xen_machine_check, true },
1253     #endif
1254     { nmi, xen_xennmi, true },
1255     + { int3, xen_int3, false },
1256     { overflow, xen_overflow, false },
1257     #ifdef CONFIG_IA32_EMULATION
1258     { entry_INT80_compat, xen_entry_INT80_compat, false },
1259     diff --git a/arch/x86/xen/xen-asm_64.S b/arch/x86/xen/xen-asm_64.S
1260     index 417b339e5c8e..3a6feed76dfc 100644
1261     --- a/arch/x86/xen/xen-asm_64.S
1262     +++ b/arch/x86/xen/xen-asm_64.S
1263     @@ -30,7 +30,6 @@ xen_pv_trap divide_error
1264     xen_pv_trap debug
1265     xen_pv_trap xendebug
1266     xen_pv_trap int3
1267     -xen_pv_trap xenint3
1268     xen_pv_trap xennmi
1269     xen_pv_trap overflow
1270     xen_pv_trap bounds
1271     diff --git a/drivers/acpi/blacklist.c b/drivers/acpi/blacklist.c
1272     index 995c4d8922b1..761f0c19a451 100644
1273     --- a/drivers/acpi/blacklist.c
1274     +++ b/drivers/acpi/blacklist.c
1275     @@ -30,7 +30,9 @@
1276    
1277     #include "internal.h"
1278    
1279     +#ifdef CONFIG_DMI
1280     static const struct dmi_system_id acpi_rev_dmi_table[] __initconst;
1281     +#endif
1282    
1283     /*
1284     * POLICY: If *anything* doesn't work, put it on the blacklist.
1285     @@ -74,7 +76,9 @@ int __init acpi_blacklisted(void)
1286     }
1287    
1288     (void)early_acpi_osi_init();
1289     +#ifdef CONFIG_DMI
1290     dmi_check_system(acpi_rev_dmi_table);
1291     +#endif
1292    
1293     return blacklisted;
1294     }
1295     diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
1296     index c13a6d1796a7..fa60f265ee50 100644
1297     --- a/drivers/block/nbd.c
1298     +++ b/drivers/block/nbd.c
1299     @@ -1218,7 +1218,7 @@ static void nbd_clear_sock_ioctl(struct nbd_device *nbd,
1300     struct block_device *bdev)
1301     {
1302     sock_shutdown(nbd);
1303     - kill_bdev(bdev);
1304     + __invalidate_device(bdev, true);
1305     nbd_bdev_reset(bdev);
1306     if (test_and_clear_bit(NBD_HAS_CONFIG_REF,
1307     &nbd->config->runtime_flags))
1308     diff --git a/drivers/clk/sprd/sc9860-clk.c b/drivers/clk/sprd/sc9860-clk.c
1309     index 9980ab55271b..f76305b4bc8d 100644
1310     --- a/drivers/clk/sprd/sc9860-clk.c
1311     +++ b/drivers/clk/sprd/sc9860-clk.c
1312     @@ -2023,6 +2023,7 @@ static int sc9860_clk_probe(struct platform_device *pdev)
1313     {
1314     const struct of_device_id *match;
1315     const struct sprd_clk_desc *desc;
1316     + int ret;
1317    
1318     match = of_match_node(sprd_sc9860_clk_ids, pdev->dev.of_node);
1319     if (!match) {
1320     @@ -2031,7 +2032,9 @@ static int sc9860_clk_probe(struct platform_device *pdev)
1321     }
1322    
1323     desc = match->data;
1324     - sprd_clk_regmap_init(pdev, desc);
1325     + ret = sprd_clk_regmap_init(pdev, desc);
1326     + if (ret)
1327     + return ret;
1328    
1329     return sprd_clk_probe(&pdev->dev, desc->hw_clks);
1330     }
1331     diff --git a/drivers/clk/tegra/clk-tegra210.c b/drivers/clk/tegra/clk-tegra210.c
1332     index 9eb1cb14fce1..4e1bc23c9865 100644
1333     --- a/drivers/clk/tegra/clk-tegra210.c
1334     +++ b/drivers/clk/tegra/clk-tegra210.c
1335     @@ -2214,9 +2214,9 @@ static struct div_nmp pllu_nmp = {
1336     };
1337    
1338     static struct tegra_clk_pll_freq_table pll_u_freq_table[] = {
1339     - { 12000000, 480000000, 40, 1, 0, 0 },
1340     - { 13000000, 480000000, 36, 1, 0, 0 }, /* actual: 468.0 MHz */
1341     - { 38400000, 480000000, 25, 2, 0, 0 },
1342     + { 12000000, 480000000, 40, 1, 1, 0 },
1343     + { 13000000, 480000000, 36, 1, 1, 0 }, /* actual: 468.0 MHz */
1344     + { 38400000, 480000000, 25, 2, 1, 0 },
1345     { 0, 0, 0, 0, 0, 0 },
1346     };
1347    
1348     @@ -3343,6 +3343,7 @@ static struct tegra_clk_init_table init_table[] __initdata = {
1349     { TEGRA210_CLK_DFLL_REF, TEGRA210_CLK_PLL_P, 51000000, 1 },
1350     { TEGRA210_CLK_SBC4, TEGRA210_CLK_PLL_P, 12000000, 1 },
1351     { TEGRA210_CLK_PLL_RE_VCO, TEGRA210_CLK_CLK_MAX, 672000000, 1 },
1352     + { TEGRA210_CLK_PLL_U_OUT1, TEGRA210_CLK_CLK_MAX, 48000000, 1 },
1353     { TEGRA210_CLK_XUSB_GATE, TEGRA210_CLK_CLK_MAX, 0, 1 },
1354     { TEGRA210_CLK_XUSB_SS_SRC, TEGRA210_CLK_PLL_U_480M, 120000000, 0 },
1355     { TEGRA210_CLK_XUSB_FS_SRC, TEGRA210_CLK_PLL_U_48M, 48000000, 0 },
1356     @@ -3367,7 +3368,6 @@ static struct tegra_clk_init_table init_table[] __initdata = {
1357     { TEGRA210_CLK_PLL_DP, TEGRA210_CLK_CLK_MAX, 270000000, 0 },
1358     { TEGRA210_CLK_SOC_THERM, TEGRA210_CLK_PLL_P, 51000000, 0 },
1359     { TEGRA210_CLK_CCLK_G, TEGRA210_CLK_CLK_MAX, 0, 1 },
1360     - { TEGRA210_CLK_PLL_U_OUT1, TEGRA210_CLK_CLK_MAX, 48000000, 1 },
1361     { TEGRA210_CLK_PLL_U_OUT2, TEGRA210_CLK_CLK_MAX, 60000000, 1 },
1362     /* This MUST be the last entry. */
1363     { TEGRA210_CLK_CLK_MAX, TEGRA210_CLK_CLK_MAX, 0, 0 },
1364     diff --git a/drivers/dma/sh/rcar-dmac.c b/drivers/dma/sh/rcar-dmac.c
1365     index 0b05a1e08d21..041ce864097e 100644
1366     --- a/drivers/dma/sh/rcar-dmac.c
1367     +++ b/drivers/dma/sh/rcar-dmac.c
1368     @@ -1164,7 +1164,7 @@ rcar_dmac_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
1369     struct rcar_dmac_chan *rchan = to_rcar_dmac_chan(chan);
1370    
1371     /* Someone calling slave DMA on a generic channel? */
1372     - if (rchan->mid_rid < 0 || !sg_len) {
1373     + if (rchan->mid_rid < 0 || !sg_len || !sg_dma_len(sgl)) {
1374     dev_warn(chan->device->dev,
1375     "%s: bad parameter: len=%d, id=%d\n",
1376     __func__, sg_len, rchan->mid_rid);
1377     diff --git a/drivers/dma/tegra20-apb-dma.c b/drivers/dma/tegra20-apb-dma.c
1378     index 8219ab88a507..fb23993430d3 100644
1379     --- a/drivers/dma/tegra20-apb-dma.c
1380     +++ b/drivers/dma/tegra20-apb-dma.c
1381     @@ -981,8 +981,12 @@ static struct dma_async_tx_descriptor *tegra_dma_prep_slave_sg(
1382     csr |= tdc->slave_id << TEGRA_APBDMA_CSR_REQ_SEL_SHIFT;
1383     }
1384    
1385     - if (flags & DMA_PREP_INTERRUPT)
1386     + if (flags & DMA_PREP_INTERRUPT) {
1387     csr |= TEGRA_APBDMA_CSR_IE_EOC;
1388     + } else {
1389     + WARN_ON_ONCE(1);
1390     + return NULL;
1391     + }
1392    
1393     apb_seq |= TEGRA_APBDMA_APBSEQ_WRAP_WORD_1;
1394    
1395     @@ -1124,8 +1128,12 @@ static struct dma_async_tx_descriptor *tegra_dma_prep_dma_cyclic(
1396     csr |= tdc->slave_id << TEGRA_APBDMA_CSR_REQ_SEL_SHIFT;
1397     }
1398    
1399     - if (flags & DMA_PREP_INTERRUPT)
1400     + if (flags & DMA_PREP_INTERRUPT) {
1401     csr |= TEGRA_APBDMA_CSR_IE_EOC;
1402     + } else {
1403     + WARN_ON_ONCE(1);
1404     + return NULL;
1405     + }
1406    
1407     apb_seq |= TEGRA_APBDMA_APBSEQ_WRAP_WORD_1;
1408    
1409     diff --git a/drivers/firmware/psci_checker.c b/drivers/firmware/psci_checker.c
1410     index 346943657962..cbd53cb1b2d4 100644
1411     --- a/drivers/firmware/psci_checker.c
1412     +++ b/drivers/firmware/psci_checker.c
1413     @@ -366,16 +366,16 @@ static int suspend_test_thread(void *arg)
1414     for (;;) {
1415     /* Needs to be set first to avoid missing a wakeup. */
1416     set_current_state(TASK_INTERRUPTIBLE);
1417     - if (kthread_should_stop()) {
1418     - __set_current_state(TASK_RUNNING);
1419     + if (kthread_should_park())
1420     break;
1421     - }
1422     schedule();
1423     }
1424    
1425     pr_info("CPU %d suspend test results: success %d, shallow states %d, errors %d\n",
1426     cpu, nb_suspend, nb_shallow_sleep, nb_err);
1427    
1428     + kthread_parkme();
1429     +
1430     return nb_err;
1431     }
1432    
1433     @@ -440,8 +440,10 @@ static int suspend_tests(void)
1434    
1435    
1436     /* Stop and destroy all threads, get return status. */
1437     - for (i = 0; i < nb_threads; ++i)
1438     + for (i = 0; i < nb_threads; ++i) {
1439     + err += kthread_park(threads[i]);
1440     err += kthread_stop(threads[i]);
1441     + }
1442     out:
1443     cpuidle_resume_and_unlock();
1444     kfree(threads);
1445     diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
1446     index 4a48c7c47709..b308ce92685d 100644
1447     --- a/drivers/gpio/gpiolib.c
1448     +++ b/drivers/gpio/gpiolib.c
1449     @@ -946,9 +946,11 @@ static int lineevent_create(struct gpio_device *gdev, void __user *ip)
1450     }
1451    
1452     if (eflags & GPIOEVENT_REQUEST_RISING_EDGE)
1453     - irqflags |= IRQF_TRIGGER_RISING;
1454     + irqflags |= test_bit(FLAG_ACTIVE_LOW, &desc->flags) ?
1455     + IRQF_TRIGGER_FALLING : IRQF_TRIGGER_RISING;
1456     if (eflags & GPIOEVENT_REQUEST_FALLING_EDGE)
1457     - irqflags |= IRQF_TRIGGER_FALLING;
1458     + irqflags |= test_bit(FLAG_ACTIVE_LOW, &desc->flags) ?
1459     + IRQF_TRIGGER_RISING : IRQF_TRIGGER_FALLING;
1460     irqflags |= IRQF_ONESHOT;
1461     irqflags |= IRQF_SHARED;
1462    
1463     diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
1464     index 12e4203c06db..66abe061f07b 100644
1465     --- a/drivers/gpu/drm/i915/gvt/kvmgt.c
1466     +++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
1467     @@ -1741,6 +1741,18 @@ int kvmgt_dma_map_guest_page(unsigned long handle, unsigned long gfn,
1468    
1469     entry = __gvt_cache_find_gfn(info->vgpu, gfn);
1470     if (!entry) {
1471     + ret = gvt_dma_map_page(vgpu, gfn, dma_addr, size);
1472     + if (ret)
1473     + goto err_unlock;
1474     +
1475     + ret = __gvt_cache_add(info->vgpu, gfn, *dma_addr, size);
1476     + if (ret)
1477     + goto err_unmap;
1478     + } else if (entry->size != size) {
1479     + /* the same gfn with different size: unmap and re-map */
1480     + gvt_dma_unmap_page(vgpu, gfn, entry->dma_addr, entry->size);
1481     + __gvt_cache_remove_entry(vgpu, entry);
1482     +
1483     ret = gvt_dma_map_page(vgpu, gfn, dma_addr, size);
1484     if (ret)
1485     goto err_unlock;
1486     diff --git a/drivers/gpu/drm/nouveau/nouveau_connector.c b/drivers/gpu/drm/nouveau/nouveau_connector.c
1487     index 247f72cc4d10..fb0094fc5583 100644
1488     --- a/drivers/gpu/drm/nouveau/nouveau_connector.c
1489     +++ b/drivers/gpu/drm/nouveau/nouveau_connector.c
1490     @@ -251,7 +251,7 @@ nouveau_conn_reset(struct drm_connector *connector)
1491     return;
1492    
1493     if (connector->state)
1494     - __drm_atomic_helper_connector_destroy_state(connector->state);
1495     + nouveau_conn_atomic_destroy_state(connector, connector->state);
1496     __drm_atomic_helper_connector_reset(connector, &asyc->state);
1497     asyc->dither.mode = DITHERING_MODE_AUTO;
1498     asyc->dither.depth = DITHERING_DEPTH_AUTO;
1499     diff --git a/drivers/infiniband/hw/hfi1/chip.c b/drivers/infiniband/hw/hfi1/chip.c
1500     index d8eb4dc04d69..6aa5a8a242ff 100644
1501     --- a/drivers/infiniband/hw/hfi1/chip.c
1502     +++ b/drivers/infiniband/hw/hfi1/chip.c
1503     @@ -14586,7 +14586,7 @@ void hfi1_deinit_vnic_rsm(struct hfi1_devdata *dd)
1504     clear_rcvctrl(dd, RCV_CTRL_RCV_RSM_ENABLE_SMASK);
1505     }
1506    
1507     -static void init_rxe(struct hfi1_devdata *dd)
1508     +static int init_rxe(struct hfi1_devdata *dd)
1509     {
1510     struct rsm_map_table *rmt;
1511     u64 val;
1512     @@ -14595,6 +14595,9 @@ static void init_rxe(struct hfi1_devdata *dd)
1513     write_csr(dd, RCV_ERR_MASK, ~0ull);
1514    
1515     rmt = alloc_rsm_map_table(dd);
1516     + if (!rmt)
1517     + return -ENOMEM;
1518     +
1519     /* set up QOS, including the QPN map table */
1520     init_qos(dd, rmt);
1521     init_user_fecn_handling(dd, rmt);
1522     @@ -14621,6 +14624,7 @@ static void init_rxe(struct hfi1_devdata *dd)
1523     val |= ((4ull & RCV_BYPASS_HDR_SIZE_MASK) <<
1524     RCV_BYPASS_HDR_SIZE_SHIFT);
1525     write_csr(dd, RCV_BYPASS, val);
1526     + return 0;
1527     }
1528    
1529     static void init_other(struct hfi1_devdata *dd)
1530     @@ -15163,7 +15167,10 @@ struct hfi1_devdata *hfi1_init_dd(struct pci_dev *pdev,
1531     goto bail_cleanup;
1532    
1533     /* set initial RXE CSRs */
1534     - init_rxe(dd);
1535     + ret = init_rxe(dd);
1536     + if (ret)
1537     + goto bail_cleanup;
1538     +
1539     /* set initial TXE CSRs */
1540     init_txe(dd);
1541     /* set initial non-RXE, non-TXE CSRs */
1542     diff --git a/drivers/infiniband/hw/hfi1/verbs.c b/drivers/infiniband/hw/hfi1/verbs.c
1543     index 27d9c4cefdc7..1ad38c8c1ef9 100644
1544     --- a/drivers/infiniband/hw/hfi1/verbs.c
1545     +++ b/drivers/infiniband/hw/hfi1/verbs.c
1546     @@ -54,6 +54,7 @@
1547     #include <linux/mm.h>
1548     #include <linux/vmalloc.h>
1549     #include <rdma/opa_addr.h>
1550     +#include <linux/nospec.h>
1551    
1552     #include "hfi.h"
1553     #include "common.h"
1554     @@ -1596,6 +1597,7 @@ static int hfi1_check_ah(struct ib_device *ibdev, struct rdma_ah_attr *ah_attr)
1555     sl = rdma_ah_get_sl(ah_attr);
1556     if (sl >= ARRAY_SIZE(ibp->sl_to_sc))
1557     return -EINVAL;
1558     + sl = array_index_nospec(sl, ARRAY_SIZE(ibp->sl_to_sc));
1559    
1560     sc5 = ibp->sl_to_sc[sl];
1561     if (sc_to_vlt(dd, sc5) > num_vls && sc_to_vlt(dd, sc5) != 0xf)
1562     diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
1563     index 320d4dfe8c2f..941d1df54631 100644
1564     --- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
1565     +++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
1566     @@ -467,6 +467,7 @@ struct mlx5_umr_wr {
1567     u64 length;
1568     int access_flags;
1569     u32 mkey;
1570     + u8 ignore_free_state:1;
1571     };
1572    
1573     static inline const struct mlx5_umr_wr *umr_wr(const struct ib_send_wr *wr)
1574     diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
1575     index 7df4a4fe4af4..9bab4fb65c68 100644
1576     --- a/drivers/infiniband/hw/mlx5/mr.c
1577     +++ b/drivers/infiniband/hw/mlx5/mr.c
1578     @@ -548,13 +548,16 @@ void mlx5_mr_cache_free(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr)
1579     return;
1580    
1581     c = order2idx(dev, mr->order);
1582     - if (c < 0 || c >= MAX_MR_CACHE_ENTRIES) {
1583     - mlx5_ib_warn(dev, "order %d, cache index %d\n", mr->order, c);
1584     - return;
1585     - }
1586     + WARN_ON(c < 0 || c >= MAX_MR_CACHE_ENTRIES);
1587    
1588     - if (unreg_umr(dev, mr))
1589     + if (unreg_umr(dev, mr)) {
1590     + mr->allocated_from_cache = false;
1591     + destroy_mkey(dev, mr);
1592     + ent = &cache->ent[c];
1593     + if (ent->cur < ent->limit)
1594     + queue_work(cache->wq, &ent->work);
1595     return;
1596     + }
1597    
1598     ent = &cache->ent[c];
1599     spin_lock_irq(&ent->lock);
1600     @@ -1408,9 +1411,11 @@ static int unreg_umr(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr)
1601     return 0;
1602    
1603     umrwr.wr.send_flags = MLX5_IB_SEND_UMR_DISABLE_MR |
1604     - MLX5_IB_SEND_UMR_FAIL_IF_FREE;
1605     + MLX5_IB_SEND_UMR_UPDATE_PD_ACCESS;
1606     umrwr.wr.opcode = MLX5_IB_WR_UMR;
1607     + umrwr.pd = dev->umrc.pd;
1608     umrwr.mkey = mr->mmkey.key;
1609     + umrwr.ignore_free_state = 1;
1610    
1611     return mlx5_ib_post_send_wait(dev, &umrwr);
1612     }
1613     @@ -1615,10 +1620,10 @@ static void clean_mr(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr)
1614     mr->sig = NULL;
1615     }
1616    
1617     - mlx5_free_priv_descs(mr);
1618     -
1619     - if (!allocated_from_cache)
1620     + if (!allocated_from_cache) {
1621     destroy_mkey(dev, mr);
1622     + mlx5_free_priv_descs(mr);
1623     + }
1624     }
1625    
1626     static void dereg_mr(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr)
1627     diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
1628     index 183fe5c8ceb7..77b1f3fd086a 100644
1629     --- a/drivers/infiniband/hw/mlx5/qp.c
1630     +++ b/drivers/infiniband/hw/mlx5/qp.c
1631     @@ -1501,7 +1501,6 @@ static int create_rss_raw_qp_tir(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
1632     }
1633    
1634     MLX5_SET(tirc, tirc, rx_hash_fn, MLX5_RX_HASH_FN_TOEPLITZ);
1635     - MLX5_SET(tirc, tirc, rx_hash_symmetric, 1);
1636     memcpy(rss_key, ucmd.rx_hash_key, len);
1637     break;
1638     }
1639     @@ -3717,10 +3716,14 @@ static int set_reg_umr_segment(struct mlx5_ib_dev *dev,
1640    
1641     memset(umr, 0, sizeof(*umr));
1642    
1643     - if (wr->send_flags & MLX5_IB_SEND_UMR_FAIL_IF_FREE)
1644     - umr->flags = MLX5_UMR_CHECK_FREE; /* fail if free */
1645     - else
1646     - umr->flags = MLX5_UMR_CHECK_NOT_FREE; /* fail if not free */
1647     + if (!umrwr->ignore_free_state) {
1648     + if (wr->send_flags & MLX5_IB_SEND_UMR_FAIL_IF_FREE)
1649     + /* fail if free */
1650     + umr->flags = MLX5_UMR_CHECK_FREE;
1651     + else
1652     + /* fail if not free */
1653     + umr->flags = MLX5_UMR_CHECK_NOT_FREE;
1654     + }
1655    
1656     umr->xlt_octowords = cpu_to_be16(get_xlt_octo(umrwr->xlt_size));
1657     if (wr->send_flags & MLX5_IB_SEND_UMR_UPDATE_XLT) {
1658     diff --git a/drivers/misc/eeprom/at24.c b/drivers/misc/eeprom/at24.c
1659     index ddfcf4ade7bf..dc3537651b80 100644
1660     --- a/drivers/misc/eeprom/at24.c
1661     +++ b/drivers/misc/eeprom/at24.c
1662     @@ -724,7 +724,7 @@ static int at24_probe(struct i2c_client *client)
1663     nvmem_config.name = dev_name(dev);
1664     nvmem_config.dev = dev;
1665     nvmem_config.read_only = !writable;
1666     - nvmem_config.root_only = true;
1667     + nvmem_config.root_only = !(pdata.flags & AT24_FLAG_IRUGO);
1668     nvmem_config.owner = THIS_MODULE;
1669     nvmem_config.compat = true;
1670     nvmem_config.base_dev = dev;
1671     diff --git a/drivers/mmc/host/dw_mmc.c b/drivers/mmc/host/dw_mmc.c
1672     index 80dc2fd6576c..942da07c9eb8 100644
1673     --- a/drivers/mmc/host/dw_mmc.c
1674     +++ b/drivers/mmc/host/dw_mmc.c
1675     @@ -2038,8 +2038,7 @@ static void dw_mci_tasklet_func(unsigned long priv)
1676     * delayed. Allowing the transfer to take place
1677     * avoids races and keeps things simple.
1678     */
1679     - if ((err != -ETIMEDOUT) &&
1680     - (cmd->opcode == MMC_SEND_TUNING_BLOCK)) {
1681     + if (err != -ETIMEDOUT) {
1682     state = STATE_SENDING_DATA;
1683     continue;
1684     }
1685     diff --git a/drivers/mmc/host/meson-mx-sdio.c b/drivers/mmc/host/meson-mx-sdio.c
1686     index 9841b447ccde..f6c76be2be0d 100644
1687     --- a/drivers/mmc/host/meson-mx-sdio.c
1688     +++ b/drivers/mmc/host/meson-mx-sdio.c
1689     @@ -76,7 +76,7 @@
1690     #define MESON_MX_SDIO_IRQC_IF_CONFIG_MASK GENMASK(7, 6)
1691     #define MESON_MX_SDIO_IRQC_FORCE_DATA_CLK BIT(8)
1692     #define MESON_MX_SDIO_IRQC_FORCE_DATA_CMD BIT(9)
1693     - #define MESON_MX_SDIO_IRQC_FORCE_DATA_DAT_MASK GENMASK(10, 13)
1694     + #define MESON_MX_SDIO_IRQC_FORCE_DATA_DAT_MASK GENMASK(13, 10)
1695     #define MESON_MX_SDIO_IRQC_SOFT_RESET BIT(15)
1696     #define MESON_MX_SDIO_IRQC_FORCE_HALT BIT(30)
1697     #define MESON_MX_SDIO_IRQC_HALT_HOLE BIT(31)
1698     diff --git a/drivers/mtd/nand/raw/nand_micron.c b/drivers/mtd/nand/raw/nand_micron.c
1699     index f5dc0a7a2456..fb401c25732c 100644
1700     --- a/drivers/mtd/nand/raw/nand_micron.c
1701     +++ b/drivers/mtd/nand/raw/nand_micron.c
1702     @@ -400,6 +400,14 @@ static int micron_supports_on_die_ecc(struct nand_chip *chip)
1703     (chip->id.data[4] & MICRON_ID_INTERNAL_ECC_MASK) != 0x2)
1704     return MICRON_ON_DIE_UNSUPPORTED;
1705    
1706     + /*
1707     + * It seems that there are devices which do not support ECC officially.
1708     + * At least the MT29F2G08ABAGA / MT29F2G08ABBGA devices support
1709     + * enabling the ECC feature but don't reflect that to the READ_ID table.
1710     + * So we have to guarantee that we disable the ECC feature directly
1711     + * after we did the READ_ID table command. Later we can evaluate the
1712     + * ECC_ENABLE support.
1713     + */
1714     ret = micron_nand_on_die_ecc_setup(chip, true);
1715     if (ret)
1716     return MICRON_ON_DIE_UNSUPPORTED;
1717     @@ -408,13 +416,13 @@ static int micron_supports_on_die_ecc(struct nand_chip *chip)
1718     if (ret)
1719     return MICRON_ON_DIE_UNSUPPORTED;
1720    
1721     - if (!(id[4] & MICRON_ID_ECC_ENABLED))
1722     - return MICRON_ON_DIE_UNSUPPORTED;
1723     -
1724     ret = micron_nand_on_die_ecc_setup(chip, false);
1725     if (ret)
1726     return MICRON_ON_DIE_UNSUPPORTED;
1727    
1728     + if (!(id[4] & MICRON_ID_ECC_ENABLED))
1729     + return MICRON_ON_DIE_UNSUPPORTED;
1730     +
1731     ret = nand_readid_op(chip, 0, id, sizeof(id));
1732     if (ret)
1733     return MICRON_ON_DIE_UNSUPPORTED;
1734     diff --git a/drivers/net/ethernet/emulex/benet/be_main.c b/drivers/net/ethernet/emulex/benet/be_main.c
1735     index bff74752cef1..3fe6a28027fe 100644
1736     --- a/drivers/net/ethernet/emulex/benet/be_main.c
1737     +++ b/drivers/net/ethernet/emulex/benet/be_main.c
1738     @@ -4700,8 +4700,12 @@ int be_update_queues(struct be_adapter *adapter)
1739     struct net_device *netdev = adapter->netdev;
1740     int status;
1741    
1742     - if (netif_running(netdev))
1743     + if (netif_running(netdev)) {
1744     + /* device cannot transmit now, avoid dev_watchdog timeouts */
1745     + netif_carrier_off(netdev);
1746     +
1747     be_close(netdev);
1748     + }
1749    
1750     be_cancel_worker(adapter);
1751    
1752     diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_dcb.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_dcb.c
1753     index b25048c6c761..21296fa7f7fb 100644
1754     --- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_dcb.c
1755     +++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_dcb.c
1756     @@ -408,14 +408,6 @@ static int mlxsw_sp_port_dcb_app_update(struct mlxsw_sp_port *mlxsw_sp_port)
1757     have_dscp = mlxsw_sp_port_dcb_app_prio_dscp_map(mlxsw_sp_port,
1758     &prio_map);
1759    
1760     - if (!have_dscp) {
1761     - err = mlxsw_sp_port_dcb_toggle_trust(mlxsw_sp_port,
1762     - MLXSW_REG_QPTS_TRUST_STATE_PCP);
1763     - if (err)
1764     - netdev_err(mlxsw_sp_port->dev, "Couldn't switch to trust L2\n");
1765     - return err;
1766     - }
1767     -
1768     mlxsw_sp_port_dcb_app_dscp_prio_map(mlxsw_sp_port, default_prio,
1769     &dscp_map);
1770     err = mlxsw_sp_port_dcb_app_update_qpdpm(mlxsw_sp_port,
1771     @@ -432,6 +424,14 @@ static int mlxsw_sp_port_dcb_app_update(struct mlxsw_sp_port *mlxsw_sp_port)
1772     return err;
1773     }
1774    
1775     + if (!have_dscp) {
1776     + err = mlxsw_sp_port_dcb_toggle_trust(mlxsw_sp_port,
1777     + MLXSW_REG_QPTS_TRUST_STATE_PCP);
1778     + if (err)
1779     + netdev_err(mlxsw_sp_port->dev, "Couldn't switch to trust L2\n");
1780     + return err;
1781     + }
1782     +
1783     err = mlxsw_sp_port_dcb_toggle_trust(mlxsw_sp_port,
1784     MLXSW_REG_QPTS_TRUST_STATE_DSCP);
1785     if (err) {
1786     diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
1787     index d0b7dd8fb184..77995df7fe54 100644
1788     --- a/drivers/perf/arm_pmu.c
1789     +++ b/drivers/perf/arm_pmu.c
1790     @@ -730,8 +730,8 @@ static int cpu_pm_pmu_notify(struct notifier_block *b, unsigned long cmd,
1791     cpu_pm_pmu_setup(armpmu, cmd);
1792     break;
1793     case CPU_PM_EXIT:
1794     - cpu_pm_pmu_setup(armpmu, cmd);
1795     case CPU_PM_ENTER_FAILED:
1796     + cpu_pm_pmu_setup(armpmu, cmd);
1797     armpmu->start(armpmu);
1798     break;
1799     default:
1800     diff --git a/drivers/rapidio/devices/rio_mport_cdev.c b/drivers/rapidio/devices/rio_mport_cdev.c
1801     index cbe467ff1aba..fa0bbda4b3f2 100644
1802     --- a/drivers/rapidio/devices/rio_mport_cdev.c
1803     +++ b/drivers/rapidio/devices/rio_mport_cdev.c
1804     @@ -1688,6 +1688,7 @@ static int rio_mport_add_riodev(struct mport_cdev_priv *priv,
1805    
1806     if (copy_from_user(&dev_info, arg, sizeof(dev_info)))
1807     return -EFAULT;
1808     + dev_info.name[sizeof(dev_info.name) - 1] = '\0';
1809    
1810     rmcd_debug(RDEV, "name:%s ct:0x%x did:0x%x hc:0x%x", dev_info.name,
1811     dev_info.comptag, dev_info.destid, dev_info.hopcount);
1812     @@ -1819,6 +1820,7 @@ static int rio_mport_del_riodev(struct mport_cdev_priv *priv, void __user *arg)
1813    
1814     if (copy_from_user(&dev_info, arg, sizeof(dev_info)))
1815     return -EFAULT;
1816     + dev_info.name[sizeof(dev_info.name) - 1] = '\0';
1817    
1818     mport = priv->md->mport;
1819    
1820     diff --git a/drivers/s390/block/dasd_alias.c b/drivers/s390/block/dasd_alias.c
1821     index b9ce93e9df89..99f86612f775 100644
1822     --- a/drivers/s390/block/dasd_alias.c
1823     +++ b/drivers/s390/block/dasd_alias.c
1824     @@ -383,6 +383,20 @@ suborder_not_supported(struct dasd_ccw_req *cqr)
1825     char msg_format;
1826     char msg_no;
1827    
1828     + /*
1829     + * intrc values ENODEV, ENOLINK and EPERM
1830     + * will be obtained from sleep_on to indicate that no
1831     + * IO operation can be started
1832     + */
1833     + if (cqr->intrc == -ENODEV)
1834     + return 1;
1835     +
1836     + if (cqr->intrc == -ENOLINK)
1837     + return 1;
1838     +
1839     + if (cqr->intrc == -EPERM)
1840     + return 1;
1841     +
1842     sense = dasd_get_sense(&cqr->irb);
1843     if (!sense)
1844     return 0;
1845     @@ -447,12 +461,8 @@ static int read_unit_address_configuration(struct dasd_device *device,
1846     lcu->flags &= ~NEED_UAC_UPDATE;
1847     spin_unlock_irqrestore(&lcu->lock, flags);
1848    
1849     - do {
1850     - rc = dasd_sleep_on(cqr);
1851     - if (rc && suborder_not_supported(cqr))
1852     - return -EOPNOTSUPP;
1853     - } while (rc && (cqr->retries > 0));
1854     - if (rc) {
1855     + rc = dasd_sleep_on(cqr);
1856     + if (rc && !suborder_not_supported(cqr)) {
1857     spin_lock_irqsave(&lcu->lock, flags);
1858     lcu->flags |= NEED_UAC_UPDATE;
1859     spin_unlock_irqrestore(&lcu->lock, flags);
1860     diff --git a/drivers/s390/scsi/zfcp_erp.c b/drivers/s390/scsi/zfcp_erp.c
1861     index ebdbc457003f..332701db7379 100644
1862     --- a/drivers/s390/scsi/zfcp_erp.c
1863     +++ b/drivers/s390/scsi/zfcp_erp.c
1864     @@ -11,6 +11,7 @@
1865     #define pr_fmt(fmt) KMSG_COMPONENT ": " fmt
1866    
1867     #include <linux/kthread.h>
1868     +#include <linux/bug.h>
1869     #include "zfcp_ext.h"
1870     #include "zfcp_reqlist.h"
1871    
1872     @@ -238,6 +239,12 @@ static struct zfcp_erp_action *zfcp_erp_setup_act(int need, u32 act_status,
1873     struct zfcp_erp_action *erp_action;
1874     struct zfcp_scsi_dev *zfcp_sdev;
1875    
1876     + if (WARN_ON_ONCE(need != ZFCP_ERP_ACTION_REOPEN_LUN &&
1877     + need != ZFCP_ERP_ACTION_REOPEN_PORT &&
1878     + need != ZFCP_ERP_ACTION_REOPEN_PORT_FORCED &&
1879     + need != ZFCP_ERP_ACTION_REOPEN_ADAPTER))
1880     + return NULL;
1881     +
1882     switch (need) {
1883     case ZFCP_ERP_ACTION_REOPEN_LUN:
1884     zfcp_sdev = sdev_to_zfcp(sdev);
1885     diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.c b/drivers/scsi/mpt3sas/mpt3sas_base.c
1886     index 8776330175e3..d2ab52026014 100644
1887     --- a/drivers/scsi/mpt3sas/mpt3sas_base.c
1888     +++ b/drivers/scsi/mpt3sas/mpt3sas_base.c
1889     @@ -2565,12 +2565,14 @@ _base_config_dma_addressing(struct MPT3SAS_ADAPTER *ioc, struct pci_dev *pdev)
1890     {
1891     struct sysinfo s;
1892     u64 consistent_dma_mask;
1893     + /* Set 63 bit DMA mask for all SAS3 and SAS35 controllers */
1894     + int dma_mask = (ioc->hba_mpi_version_belonged > MPI2_VERSION) ? 63 : 64;
1895    
1896     if (ioc->is_mcpu_endpoint)
1897     goto try_32bit;
1898    
1899     if (ioc->dma_mask)
1900     - consistent_dma_mask = DMA_BIT_MASK(64);
1901     + consistent_dma_mask = DMA_BIT_MASK(dma_mask);
1902     else
1903     consistent_dma_mask = DMA_BIT_MASK(32);
1904    
1905     @@ -2578,11 +2580,11 @@ _base_config_dma_addressing(struct MPT3SAS_ADAPTER *ioc, struct pci_dev *pdev)
1906     const uint64_t required_mask =
1907     dma_get_required_mask(&pdev->dev);
1908     if ((required_mask > DMA_BIT_MASK(32)) &&
1909     - !pci_set_dma_mask(pdev, DMA_BIT_MASK(64)) &&
1910     + !pci_set_dma_mask(pdev, DMA_BIT_MASK(dma_mask)) &&
1911     !pci_set_consistent_dma_mask(pdev, consistent_dma_mask)) {
1912     ioc->base_add_sg_single = &_base_add_sg_single_64;
1913     ioc->sge_size = sizeof(Mpi2SGESimple64_t);
1914     - ioc->dma_mask = 64;
1915     + ioc->dma_mask = dma_mask;
1916     goto out;
1917     }
1918     }
1919     @@ -2609,7 +2611,7 @@ static int
1920     _base_change_consistent_dma_mask(struct MPT3SAS_ADAPTER *ioc,
1921     struct pci_dev *pdev)
1922     {
1923     - if (pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64))) {
1924     + if (pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(ioc->dma_mask))) {
1925     if (pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32)))
1926     return -ENODEV;
1927     }
1928     @@ -4545,7 +4547,7 @@ _base_allocate_memory_pools(struct MPT3SAS_ADAPTER *ioc)
1929     total_sz += sz;
1930     } while (ioc->rdpq_array_enable && (++i < ioc->reply_queue_count));
1931    
1932     - if (ioc->dma_mask == 64) {
1933     + if (ioc->dma_mask > 32) {
1934     if (_base_change_consistent_dma_mask(ioc, ioc->pdev) != 0) {
1935     pr_warn(MPT3SAS_FMT
1936     "no suitable consistent DMA mask for %s\n",
1937     diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
1938     index aa081f806728..3d9997595d90 100644
1939     --- a/drivers/xen/swiotlb-xen.c
1940     +++ b/drivers/xen/swiotlb-xen.c
1941     @@ -357,8 +357,8 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
1942     /* Convert the size to actually allocated. */
1943     size = 1UL << (order + XEN_PAGE_SHIFT);
1944    
1945     - if (((dev_addr + size - 1 <= dma_mask)) ||
1946     - range_straddles_page_boundary(phys, size))
1947     + if (!WARN_ON((dev_addr + size - 1 > dma_mask) ||
1948     + range_straddles_page_boundary(phys, size)))
1949     xen_destroy_contiguous_region(phys, order);
1950    
1951     xen_free_coherent_pages(hwdev, size, vaddr, (dma_addr_t)phys, attrs);
1952     diff --git a/fs/adfs/super.c b/fs/adfs/super.c
1953     index 7e099a7a4eb1..4dc15b263489 100644
1954     --- a/fs/adfs/super.c
1955     +++ b/fs/adfs/super.c
1956     @@ -369,6 +369,7 @@ static int adfs_fill_super(struct super_block *sb, void *data, int silent)
1957     struct buffer_head *bh;
1958     struct object_info root_obj;
1959     unsigned char *b_data;
1960     + unsigned int blocksize;
1961     struct adfs_sb_info *asb;
1962     struct inode *root;
1963     int ret = -EINVAL;
1964     @@ -420,8 +421,10 @@ static int adfs_fill_super(struct super_block *sb, void *data, int silent)
1965     goto error_free_bh;
1966     }
1967    
1968     + blocksize = 1 << dr->log2secsize;
1969     brelse(bh);
1970     - if (sb_set_blocksize(sb, 1 << dr->log2secsize)) {
1971     +
1972     + if (sb_set_blocksize(sb, blocksize)) {
1973     bh = sb_bread(sb, ADFS_DISCRECORD / sb->s_blocksize);
1974     if (!bh) {
1975     adfs_error(sb, "couldn't read superblock on "
1976     diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
1977     index e46e83e87600..734866ab5194 100644
1978     --- a/fs/btrfs/qgroup.c
1979     +++ b/fs/btrfs/qgroup.c
1980     @@ -2249,6 +2249,7 @@ int btrfs_qgroup_inherit(struct btrfs_trans_handle *trans, u64 srcid,
1981     int ret = 0;
1982     int i;
1983     u64 *i_qgroups;
1984     + bool committing = false;
1985     struct btrfs_fs_info *fs_info = trans->fs_info;
1986     struct btrfs_root *quota_root;
1987     struct btrfs_qgroup *srcgroup;
1988     @@ -2256,7 +2257,25 @@ int btrfs_qgroup_inherit(struct btrfs_trans_handle *trans, u64 srcid,
1989     u32 level_size = 0;
1990     u64 nums;
1991    
1992     - mutex_lock(&fs_info->qgroup_ioctl_lock);
1993     + /*
1994     + * There are only two callers of this function.
1995     + *
1996     + * One in create_subvol() in the ioctl context, which needs to hold
1997     + * the qgroup_ioctl_lock.
1998     + *
1999     + * The other one in create_pending_snapshot() where no other qgroup
2000     + * code can modify the fs as they all need to either start a new trans
2001     + * or hold a trans handle, thus we don't need to hold
2002     + * qgroup_ioctl_lock.
2003     + * This would avoid a long and complex lock chain and make lockdep happy.
2004     + */
2005     + spin_lock(&fs_info->trans_lock);
2006     + if (trans->transaction->state == TRANS_STATE_COMMIT_DOING)
2007     + committing = true;
2008     + spin_unlock(&fs_info->trans_lock);
2009     +
2010     + if (!committing)
2011     + mutex_lock(&fs_info->qgroup_ioctl_lock);
2012     if (!test_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags))
2013     goto out;
2014    
2015     @@ -2420,7 +2439,8 @@ int btrfs_qgroup_inherit(struct btrfs_trans_handle *trans, u64 srcid,
2016     unlock:
2017     spin_unlock(&fs_info->qgroup_lock);
2018     out:
2019     - mutex_unlock(&fs_info->qgroup_ioctl_lock);
2020     + if (!committing)
2021     + mutex_unlock(&fs_info->qgroup_ioctl_lock);
2022     return ret;
2023     }
2024    
2025     diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
2026     index 258392b75048..48ddbc187e58 100644
2027     --- a/fs/btrfs/send.c
2028     +++ b/fs/btrfs/send.c
2029     @@ -6272,68 +6272,21 @@ static int changed_extent(struct send_ctx *sctx,
2030     {
2031     int ret = 0;
2032    
2033     - if (sctx->cur_ino != sctx->cmp_key->objectid) {
2034     -
2035     - if (result == BTRFS_COMPARE_TREE_CHANGED) {
2036     - struct extent_buffer *leaf_l;
2037     - struct extent_buffer *leaf_r;
2038     - struct btrfs_file_extent_item *ei_l;
2039     - struct btrfs_file_extent_item *ei_r;
2040     -
2041     - leaf_l = sctx->left_path->nodes[0];
2042     - leaf_r = sctx->right_path->nodes[0];
2043     - ei_l = btrfs_item_ptr(leaf_l,
2044     - sctx->left_path->slots[0],
2045     - struct btrfs_file_extent_item);
2046     - ei_r = btrfs_item_ptr(leaf_r,
2047     - sctx->right_path->slots[0],
2048     - struct btrfs_file_extent_item);
2049     -
2050     - /*
2051     - * We may have found an extent item that has changed
2052     - * only its disk_bytenr field and the corresponding
2053     - * inode item was not updated. This case happens due to
2054     - * very specific timings during relocation when a leaf
2055     - * that contains file extent items is COWed while
2056     - * relocation is ongoing and its in the stage where it
2057     - * updates data pointers. So when this happens we can
2058     - * safely ignore it since we know it's the same extent,
2059     - * but just at different logical and physical locations
2060     - * (when an extent is fully replaced with a new one, we
2061     - * know the generation number must have changed too,
2062     - * since snapshot creation implies committing the current
2063     - * transaction, and the inode item must have been updated
2064     - * as well).
2065     - * This replacement of the disk_bytenr happens at
2066     - * relocation.c:replace_file_extents() through
2067     - * relocation.c:btrfs_reloc_cow_block().
2068     - */
2069     - if (btrfs_file_extent_generation(leaf_l, ei_l) ==
2070     - btrfs_file_extent_generation(leaf_r, ei_r) &&
2071     - btrfs_file_extent_ram_bytes(leaf_l, ei_l) ==
2072     - btrfs_file_extent_ram_bytes(leaf_r, ei_r) &&
2073     - btrfs_file_extent_compression(leaf_l, ei_l) ==
2074     - btrfs_file_extent_compression(leaf_r, ei_r) &&
2075     - btrfs_file_extent_encryption(leaf_l, ei_l) ==
2076     - btrfs_file_extent_encryption(leaf_r, ei_r) &&
2077     - btrfs_file_extent_other_encoding(leaf_l, ei_l) ==
2078     - btrfs_file_extent_other_encoding(leaf_r, ei_r) &&
2079     - btrfs_file_extent_type(leaf_l, ei_l) ==
2080     - btrfs_file_extent_type(leaf_r, ei_r) &&
2081     - btrfs_file_extent_disk_bytenr(leaf_l, ei_l) !=
2082     - btrfs_file_extent_disk_bytenr(leaf_r, ei_r) &&
2083     - btrfs_file_extent_disk_num_bytes(leaf_l, ei_l) ==
2084     - btrfs_file_extent_disk_num_bytes(leaf_r, ei_r) &&
2085     - btrfs_file_extent_offset(leaf_l, ei_l) ==
2086     - btrfs_file_extent_offset(leaf_r, ei_r) &&
2087     - btrfs_file_extent_num_bytes(leaf_l, ei_l) ==
2088     - btrfs_file_extent_num_bytes(leaf_r, ei_r))
2089     - return 0;
2090     - }
2091     -
2092     - inconsistent_snapshot_error(sctx, result, "extent");
2093     - return -EIO;
2094     - }
2095     + /*
2096     + * We have found an extent item that changed without the inode item
2097     + * having changed. This can happen either after relocation (where the
2098     + * disk_bytenr of an extent item is replaced at
2099     + * relocation.c:replace_file_extents()) or after deduplication into a
2100     + * file in both the parent and send snapshots (where an extent item can
2101     + * get modified or replaced with a new one). Note that deduplication
2102     + * updates the inode item, but it only changes the iversion (sequence
2103     + * field in the inode item) of the inode, so if a file is deduplicated
2104     + * the same amount of times in both the parent and send snapshots, its
2105     + * iversion becomes the same in both snapshots, hence the inode item is
2106     + * the same on both snapshots.
2107     + */
2108     + if (sctx->cur_ino != sctx->cmp_key->objectid)
2109     + return 0;
2110    
2111     if (!sctx->cur_inode_new_gen && !sctx->cur_inode_deleted) {
2112     if (result != BTRFS_COMPARE_TREE_DELETED)
2113     diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
2114     index bb8f6c020d22..f1ca53a3ff0b 100644
2115     --- a/fs/btrfs/transaction.c
2116     +++ b/fs/btrfs/transaction.c
2117     @@ -2027,6 +2027,16 @@ int btrfs_commit_transaction(struct btrfs_trans_handle *trans)
2118     }
2119     } else {
2120     spin_unlock(&fs_info->trans_lock);
2121     + /*
2122     + * The previous transaction was aborted and was already removed
2123     + * from the list of transactions at fs_info->trans_list. So we
2124     + * abort to prevent writing a new superblock that reflects a
2125     + * corrupt state (pointing to trees with unwritten nodes/leaves).
2126     + */
2127     + if (test_bit(BTRFS_FS_STATE_TRANS_ABORTED, &fs_info->fs_state)) {
2128     + ret = -EROFS;
2129     + goto cleanup_transaction;
2130     + }
2131     }
2132    
2133     extwriter_counter_dec(cur_trans, trans->type);
2134     diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
2135     index 2fd000308be7..6e008bd5c8cd 100644
2136     --- a/fs/btrfs/volumes.c
2137     +++ b/fs/btrfs/volumes.c
2138     @@ -5040,8 +5040,7 @@ static inline int btrfs_chunk_max_errors(struct map_lookup *map)
2139    
2140     if (map->type & (BTRFS_BLOCK_GROUP_RAID1 |
2141     BTRFS_BLOCK_GROUP_RAID10 |
2142     - BTRFS_BLOCK_GROUP_RAID5 |
2143     - BTRFS_BLOCK_GROUP_DUP)) {
2144     + BTRFS_BLOCK_GROUP_RAID5)) {
2145     max_errors = 1;
2146     } else if (map->type & BTRFS_BLOCK_GROUP_RAID6) {
2147     max_errors = 2;
2148     diff --git a/fs/ceph/super.h b/fs/ceph/super.h
2149     index 582e28fd1b7b..d8579a56e5dc 100644
2150     --- a/fs/ceph/super.h
2151     +++ b/fs/ceph/super.h
2152     @@ -526,7 +526,12 @@ static inline void __ceph_dir_set_complete(struct ceph_inode_info *ci,
2153     long long release_count,
2154     long long ordered_count)
2155     {
2156     - smp_mb__before_atomic();
2157     + /*
2158     + * Makes sure operations that setup readdir cache (update page
2159     + * cache and i_size) are strongly ordered w.r.t. the following
2160     + * atomic64_set() operations.
2161     + */
2162     + smp_mb();
2163     atomic64_set(&ci->i_complete_seq[0], release_count);
2164     atomic64_set(&ci->i_complete_seq[1], ordered_count);
2165     }
2166     diff --git a/fs/ceph/xattr.c b/fs/ceph/xattr.c
2167     index 5cc8b94f8206..0a2d4898ee16 100644
2168     --- a/fs/ceph/xattr.c
2169     +++ b/fs/ceph/xattr.c
2170     @@ -79,7 +79,7 @@ static size_t ceph_vxattrcb_layout(struct ceph_inode_info *ci, char *val,
2171     const char *ns_field = " pool_namespace=";
2172     char buf[128];
2173     size_t len, total_len = 0;
2174     - int ret;
2175     + ssize_t ret;
2176    
2177     pool_ns = ceph_try_get_string(ci->i_layout.pool_ns);
2178    
2179     @@ -103,11 +103,8 @@ static size_t ceph_vxattrcb_layout(struct ceph_inode_info *ci, char *val,
2180     if (pool_ns)
2181     total_len += strlen(ns_field) + pool_ns->len;
2182    
2183     - if (!size) {
2184     - ret = total_len;
2185     - } else if (total_len > size) {
2186     - ret = -ERANGE;
2187     - } else {
2188     + ret = total_len;
2189     + if (size >= total_len) {
2190     memcpy(val, buf, len);
2191     ret = len;
2192     if (pool_name) {
2193     @@ -817,8 +814,11 @@ ssize_t __ceph_getxattr(struct inode *inode, const char *name, void *value,
2194     if (err)
2195     return err;
2196     err = -ENODATA;
2197     - if (!(vxattr->exists_cb && !vxattr->exists_cb(ci)))
2198     + if (!(vxattr->exists_cb && !vxattr->exists_cb(ci))) {
2199     err = vxattr->getxattr_cb(ci, value, size);
2200     + if (size && size < err)
2201     + err = -ERANGE;
2202     + }
2203     return err;
2204     }
2205    
2206     diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
2207     index f31339db45fd..c53a2e86ed54 100644
2208     --- a/fs/cifs/connect.c
2209     +++ b/fs/cifs/connect.c
2210     @@ -563,10 +563,10 @@ static bool
2211     server_unresponsive(struct TCP_Server_Info *server)
2212     {
2213     /*
2214     - * We need to wait 2 echo intervals to make sure we handle such
2215     + * We need to wait 3 echo intervals to make sure we handle such
2216     * situations right:
2217     * 1s client sends a normal SMB request
2218     - * 2s client gets a response
2219     + * 3s client gets a response
2220     * 30s echo workqueue job pops, and decides we got a response recently
2221     * and don't need to send another
2222     * ...
2223     @@ -575,9 +575,9 @@ server_unresponsive(struct TCP_Server_Info *server)
2224     */
2225     if ((server->tcpStatus == CifsGood ||
2226     server->tcpStatus == CifsNeedNegotiate) &&
2227     - time_after(jiffies, server->lstrp + 2 * server->echo_interval)) {
2228     + time_after(jiffies, server->lstrp + 3 * server->echo_interval)) {
2229     cifs_dbg(VFS, "Server %s has not responded in %lu seconds. Reconnecting...\n",
2230     - server->hostname, (2 * server->echo_interval) / HZ);
2231     + server->hostname, (3 * server->echo_interval) / HZ);
2232     cifs_reconnect(server);
2233     wake_up(&server->response_q);
2234     return true;
2235     diff --git a/fs/coda/psdev.c b/fs/coda/psdev.c
2236     index c5234c21b539..55824cba3245 100644
2237     --- a/fs/coda/psdev.c
2238     +++ b/fs/coda/psdev.c
2239     @@ -187,8 +187,11 @@ static ssize_t coda_psdev_write(struct file *file, const char __user *buf,
2240     if (req->uc_opcode == CODA_OPEN_BY_FD) {
2241     struct coda_open_by_fd_out *outp =
2242     (struct coda_open_by_fd_out *)req->uc_data;
2243     - if (!outp->oh.result)
2244     + if (!outp->oh.result) {
2245     outp->fh = fget(outp->fd);
2246     + if (!outp->fh)
2247     + return -EBADF;
2248     + }
2249     }
2250    
2251     wake_up(&req->uc_sleep);
2252     diff --git a/include/linux/acpi.h b/include/linux/acpi.h
2253     index de8d3d3fa651..b4d23b3a2ef2 100644
2254     --- a/include/linux/acpi.h
2255     +++ b/include/linux/acpi.h
2256     @@ -326,7 +326,10 @@ void acpi_set_irq_model(enum acpi_irq_model_id model,
2257     #ifdef CONFIG_X86_IO_APIC
2258     extern int acpi_get_override_irq(u32 gsi, int *trigger, int *polarity);
2259     #else
2260     -#define acpi_get_override_irq(gsi, trigger, polarity) (-1)
2261     +static inline int acpi_get_override_irq(u32 gsi, int *trigger, int *polarity)
2262     +{
2263     + return -1;
2264     +}
2265     #endif
2266     /*
2267     * This function undoes the effect of one call to acpi_register_gsi().
2268     diff --git a/include/linux/coda.h b/include/linux/coda.h
2269     index d30209b9cef8..0ca0c83fdb1c 100644
2270     --- a/include/linux/coda.h
2271     +++ b/include/linux/coda.h
2272     @@ -58,8 +58,7 @@ Mellon the rights to redistribute these changes without encumbrance.
2273     #ifndef _CODA_HEADER_
2274     #define _CODA_HEADER_
2275    
2276     -#if defined(__linux__)
2277     typedef unsigned long long u_quad_t;
2278     -#endif
2279     +
2280     #include <uapi/linux/coda.h>
2281     #endif
2282     diff --git a/include/linux/coda_psdev.h b/include/linux/coda_psdev.h
2283     index 15170954aa2b..57d2b2faf6a3 100644
2284     --- a/include/linux/coda_psdev.h
2285     +++ b/include/linux/coda_psdev.h
2286     @@ -19,6 +19,17 @@ struct venus_comm {
2287     struct mutex vc_mutex;
2288     };
2289    
2290     +/* messages between coda filesystem in kernel and Venus */
2291     +struct upc_req {
2292     + struct list_head uc_chain;
2293     + caddr_t uc_data;
2294     + u_short uc_flags;
2295     + u_short uc_inSize; /* Size is at most 5000 bytes */
2296     + u_short uc_outSize;
2297     + u_short uc_opcode; /* copied from data to save lookup */
2298     + int uc_unique;
2299     + wait_queue_head_t uc_sleep; /* process' wait queue */
2300     +};
2301    
2302     static inline struct venus_comm *coda_vcp(struct super_block *sb)
2303     {
2304     diff --git a/include/uapi/linux/coda_psdev.h b/include/uapi/linux/coda_psdev.h
2305     index aa6623efd2dd..d50d51a57fe4 100644
2306     --- a/include/uapi/linux/coda_psdev.h
2307     +++ b/include/uapi/linux/coda_psdev.h
2308     @@ -7,19 +7,6 @@
2309     #define CODA_PSDEV_MAJOR 67
2310     #define MAX_CODADEVS 5 /* how many do we allow */
2311    
2312     -
2313     -/* messages between coda filesystem in kernel and Venus */
2314     -struct upc_req {
2315     - struct list_head uc_chain;
2316     - caddr_t uc_data;
2317     - u_short uc_flags;
2318     - u_short uc_inSize; /* Size is at most 5000 bytes */
2319     - u_short uc_outSize;
2320     - u_short uc_opcode; /* copied from data to save lookup */
2321     - int uc_unique;
2322     - wait_queue_head_t uc_sleep; /* process' wait queue */
2323     -};
2324     -
2325     #define CODA_REQ_ASYNC 0x1
2326     #define CODA_REQ_READ 0x2
2327     #define CODA_REQ_WRITE 0x4
2328     diff --git a/ipc/mqueue.c b/ipc/mqueue.c
2329     index bce7af1546d9..de4070d5472f 100644
2330     --- a/ipc/mqueue.c
2331     +++ b/ipc/mqueue.c
2332     @@ -389,7 +389,6 @@ static void mqueue_evict_inode(struct inode *inode)
2333     {
2334     struct mqueue_inode_info *info;
2335     struct user_struct *user;
2336     - unsigned long mq_bytes, mq_treesize;
2337     struct ipc_namespace *ipc_ns;
2338     struct msg_msg *msg, *nmsg;
2339     LIST_HEAD(tmp_msg);
2340     @@ -412,16 +411,18 @@ static void mqueue_evict_inode(struct inode *inode)
2341     free_msg(msg);
2342     }
2343    
2344     - /* Total amount of bytes accounted for the mqueue */
2345     - mq_treesize = info->attr.mq_maxmsg * sizeof(struct msg_msg) +
2346     - min_t(unsigned int, info->attr.mq_maxmsg, MQ_PRIO_MAX) *
2347     - sizeof(struct posix_msg_tree_node);
2348     -
2349     - mq_bytes = mq_treesize + (info->attr.mq_maxmsg *
2350     - info->attr.mq_msgsize);
2351     -
2352     user = info->user;
2353     if (user) {
2354     + unsigned long mq_bytes, mq_treesize;
2355     +
2356     + /* Total amount of bytes accounted for the mqueue */
2357     + mq_treesize = info->attr.mq_maxmsg * sizeof(struct msg_msg) +
2358     + min_t(unsigned int, info->attr.mq_maxmsg, MQ_PRIO_MAX) *
2359     + sizeof(struct posix_msg_tree_node);
2360     +
2361     + mq_bytes = mq_treesize + (info->attr.mq_maxmsg *
2362     + info->attr.mq_msgsize);
2363     +
2364     spin_lock(&mq_lock);
2365     user->mq_bytes -= mq_bytes;
2366     /*
2367     diff --git a/kernel/module.c b/kernel/module.c
2368     index b8f37376856b..3fda10c549a2 100644
2369     --- a/kernel/module.c
2370     +++ b/kernel/module.c
2371     @@ -3388,8 +3388,7 @@ static bool finished_loading(const char *name)
2372     sched_annotate_sleep();
2373     mutex_lock(&module_mutex);
2374     mod = find_module_all(name, strlen(name), true);
2375     - ret = !mod || mod->state == MODULE_STATE_LIVE
2376     - || mod->state == MODULE_STATE_GOING;
2377     + ret = !mod || mod->state == MODULE_STATE_LIVE;
2378     mutex_unlock(&module_mutex);
2379    
2380     return ret;
2381     @@ -3559,8 +3558,7 @@ again:
2382     mutex_lock(&module_mutex);
2383     old = find_module_all(mod->name, strlen(mod->name), true);
2384     if (old != NULL) {
2385     - if (old->state == MODULE_STATE_COMING
2386     - || old->state == MODULE_STATE_UNFORMED) {
2387     + if (old->state != MODULE_STATE_LIVE) {
2388     /* Wait in case it fails to load. */
2389     mutex_unlock(&module_mutex);
2390     err = wait_event_interruptible(module_wq,
2391     diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
2392     index 118ecce14386..d9dd709b3c12 100644
2393     --- a/kernel/trace/ftrace.c
2394     +++ b/kernel/trace/ftrace.c
2395     @@ -1647,6 +1647,11 @@ static bool test_rec_ops_needs_regs(struct dyn_ftrace *rec)
2396     return keep_regs;
2397     }
2398    
2399     +static struct ftrace_ops *
2400     +ftrace_find_tramp_ops_any(struct dyn_ftrace *rec);
2401     +static struct ftrace_ops *
2402     +ftrace_find_tramp_ops_next(struct dyn_ftrace *rec, struct ftrace_ops *ops);
2403     +
2404     static bool __ftrace_hash_rec_update(struct ftrace_ops *ops,
2405     int filter_hash,
2406     bool inc)
2407     @@ -1775,15 +1780,17 @@ static bool __ftrace_hash_rec_update(struct ftrace_ops *ops,
2408     }
2409    
2410     /*
2411     - * If the rec had TRAMP enabled, then it needs to
2412     - * be cleared. As TRAMP can only be enabled iff
2413     - * there is only a single ops attached to it.
2414     - * In otherwords, always disable it on decrementing.
2415     - * In the future, we may set it if rec count is
2416     - * decremented to one, and the ops that is left
2417     - * has a trampoline.
2418     + * The TRAMP needs to be set only if rec count
2419     + * is decremented to one, and the ops that is
2420     + * left has a trampoline. As TRAMP can only be
2421     + * enabled if there is only a single ops attached
2422     + * to it.
2423     */
2424     - rec->flags &= ~FTRACE_FL_TRAMP;
2425     + if (ftrace_rec_count(rec) == 1 &&
2426     + ftrace_find_tramp_ops_any(rec))
2427     + rec->flags |= FTRACE_FL_TRAMP;
2428     + else
2429     + rec->flags &= ~FTRACE_FL_TRAMP;
2430    
2431     /*
2432     * flags will be cleared in ftrace_check_record()
2433     @@ -1976,11 +1983,6 @@ static void print_ip_ins(const char *fmt, const unsigned char *p)
2434     printk(KERN_CONT "%s%02x", i ? ":" : "", p[i]);
2435     }
2436    
2437     -static struct ftrace_ops *
2438     -ftrace_find_tramp_ops_any(struct dyn_ftrace *rec);
2439     -static struct ftrace_ops *
2440     -ftrace_find_tramp_ops_next(struct dyn_ftrace *rec, struct ftrace_ops *ops);
2441     -
2442     enum ftrace_bug_type ftrace_bug_type;
2443     const void *ftrace_expected;
2444    
2445     diff --git a/lib/test_overflow.c b/lib/test_overflow.c
2446     index fc680562d8b6..7a4b6f6c5473 100644
2447     --- a/lib/test_overflow.c
2448     +++ b/lib/test_overflow.c
2449     @@ -486,16 +486,17 @@ static int __init test_overflow_shift(void)
2450     * Deal with the various forms of allocator arguments. See comments above
2451     * the DEFINE_TEST_ALLOC() instances for mapping of the "bits".
2452     */
2453     -#define alloc010(alloc, arg, sz) alloc(sz, GFP_KERNEL)
2454     -#define alloc011(alloc, arg, sz) alloc(sz, GFP_KERNEL, NUMA_NO_NODE)
2455     +#define alloc_GFP (GFP_KERNEL | __GFP_NOWARN)
2456     +#define alloc010(alloc, arg, sz) alloc(sz, alloc_GFP)
2457     +#define alloc011(alloc, arg, sz) alloc(sz, alloc_GFP, NUMA_NO_NODE)
2458     #define alloc000(alloc, arg, sz) alloc(sz)
2459     #define alloc001(alloc, arg, sz) alloc(sz, NUMA_NO_NODE)
2460     -#define alloc110(alloc, arg, sz) alloc(arg, sz, GFP_KERNEL)
2461     +#define alloc110(alloc, arg, sz) alloc(arg, sz, alloc_GFP)
2462     #define free0(free, arg, ptr) free(ptr)
2463     #define free1(free, arg, ptr) free(arg, ptr)
2464    
2465     -/* Wrap around to 8K */
2466     -#define TEST_SIZE (9 << PAGE_SHIFT)
2467     +/* Wrap around to 16K */
2468     +#define TEST_SIZE (5 * 4096)
2469    
2470     #define DEFINE_TEST_ALLOC(func, free_func, want_arg, want_gfp, want_node)\
2471     static int __init test_ ## func (void *arg) \
2472     diff --git a/lib/test_string.c b/lib/test_string.c
2473     index 0fcdb82dca86..98a787e7a1fd 100644
2474     --- a/lib/test_string.c
2475     +++ b/lib/test_string.c
2476     @@ -35,7 +35,7 @@ static __init int memset16_selftest(void)
2477     fail:
2478     kfree(p);
2479     if (i < 256)
2480     - return (i << 24) | (j << 16) | k;
2481     + return (i << 24) | (j << 16) | k | 0x8000;
2482     return 0;
2483     }
2484    
2485     @@ -71,7 +71,7 @@ static __init int memset32_selftest(void)
2486     fail:
2487     kfree(p);
2488     if (i < 256)
2489     - return (i << 24) | (j << 16) | k;
2490     + return (i << 24) | (j << 16) | k | 0x8000;
2491     return 0;
2492     }
2493    
2494     @@ -107,7 +107,7 @@ static __init int memset64_selftest(void)
2495     fail:
2496     kfree(p);
2497     if (i < 256)
2498     - return (i << 24) | (j << 16) | k;
2499     + return (i << 24) | (j << 16) | k | 0x8000;
2500     return 0;
2501     }
2502    
2503     diff --git a/mm/cma.c b/mm/cma.c
2504     index 476dfe13a701..4c2864270a39 100644
2505     --- a/mm/cma.c
2506     +++ b/mm/cma.c
2507     @@ -282,6 +282,12 @@ int __init cma_declare_contiguous(phys_addr_t base,
2508     */
2509     alignment = max(alignment, (phys_addr_t)PAGE_SIZE <<
2510     max_t(unsigned long, MAX_ORDER - 1, pageblock_order));
2511     + if (fixed && base & (alignment - 1)) {
2512     + ret = -EINVAL;
2513     + pr_err("Region at %pa must be aligned to %pa bytes\n",
2514     + &base, &alignment);
2515     + goto err;
2516     + }
2517     base = ALIGN(base, alignment);
2518     size = ALIGN(size, alignment);
2519     limit &= ~(alignment - 1);
2520     @@ -312,6 +318,13 @@ int __init cma_declare_contiguous(phys_addr_t base,
2521     if (limit == 0 || limit > memblock_end)
2522     limit = memblock_end;
2523    
2524     + if (base + size > limit) {
2525     + ret = -EINVAL;
2526     + pr_err("Size (%pa) of region at %pa exceeds limit (%pa)\n",
2527     + &size, &base, &limit);
2528     + goto err;
2529     + }
2530     +
2531     /* Reserve memory */
2532     if (fixed) {
2533     if (memblock_is_region_reserved(base, size) ||
2534     diff --git a/mm/vmscan.c b/mm/vmscan.c
2535     index 576379e87421..b37610c0eac6 100644
2536     --- a/mm/vmscan.c
2537     +++ b/mm/vmscan.c
2538     @@ -670,7 +670,14 @@ static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
2539     unsigned long ret, freed = 0;
2540     struct shrinker *shrinker;
2541    
2542     - if (!mem_cgroup_is_root(memcg))
2543     + /*
2544     + * The root memcg might be allocated even though memcg is disabled
2545     + * via "cgroup_disable=memory" boot parameter. This could make
2546     + * mem_cgroup_is_root() return false, then just run memcg slab
2547     + * shrink, but skip global shrink. This may result in premature
2548     + * oom.
2549     + */
2550     + if (!mem_cgroup_disabled() && !mem_cgroup_is_root(memcg))
2551     return shrink_slab_memcg(gfp_mask, nid, memcg, priority);
2552    
2553     if (!down_read_trylock(&shrinker_rwsem))
2554     diff --git a/scripts/kconfig/confdata.c b/scripts/kconfig/confdata.c
2555     index fd99ae90a618..0dde19cf7486 100644
2556     --- a/scripts/kconfig/confdata.c
2557     +++ b/scripts/kconfig/confdata.c
2558     @@ -784,6 +784,7 @@ int conf_write(const char *name)
2559     const char *str;
2560     char dirname[PATH_MAX+1], tmpname[PATH_MAX+22], newname[PATH_MAX+8];
2561     char *env;
2562     + int i;
2563    
2564     dirname[0] = 0;
2565     if (name && name[0]) {
2566     @@ -860,6 +861,9 @@ next:
2567     }
2568     fclose(out);
2569    
2570     + for_all_symbols(i, sym)
2571     + sym->flags &= ~SYMBOL_WRITTEN;
2572     +
2573     if (*tmpname) {
2574     strcat(dirname, basename);
2575     strcat(dirname, ".old");
2576     diff --git a/security/selinux/ss/policydb.c b/security/selinux/ss/policydb.c
2577     index d31a52e56b9e..91d259c87d10 100644
2578     --- a/security/selinux/ss/policydb.c
2579     +++ b/security/selinux/ss/policydb.c
2580     @@ -275,6 +275,8 @@ static int rangetr_cmp(struct hashtab *h, const void *k1, const void *k2)
2581     return v;
2582     }
2583    
2584     +static int (*destroy_f[SYM_NUM]) (void *key, void *datum, void *datap);
2585     +
2586     /*
2587     * Initialize a policy database structure.
2588     */
2589     @@ -322,8 +324,10 @@ static int policydb_init(struct policydb *p)
2590     out:
2591     hashtab_destroy(p->filename_trans);
2592     hashtab_destroy(p->range_tr);
2593     - for (i = 0; i < SYM_NUM; i++)
2594     + for (i = 0; i < SYM_NUM; i++) {
2595     + hashtab_map(p->symtab[i].table, destroy_f[i], NULL);
2596     hashtab_destroy(p->symtab[i].table);
2597     + }
2598     return rc;
2599     }
2600    
2601     diff --git a/sound/hda/hdac_i915.c b/sound/hda/hdac_i915.c
2602     index 27eb0270a711..3847fe841d33 100644
2603     --- a/sound/hda/hdac_i915.c
2604     +++ b/sound/hda/hdac_i915.c
2605     @@ -143,10 +143,12 @@ int snd_hdac_i915_init(struct hdac_bus *bus)
2606     if (!acomp)
2607     return -ENODEV;
2608     if (!acomp->ops) {
2609     - request_module("i915");
2610     - /* 60s timeout */
2611     - wait_for_completion_timeout(&bind_complete,
2612     - msecs_to_jiffies(60 * 1000));
2613     + if (!IS_ENABLED(CONFIG_MODULES) ||
2614     + !request_module("i915")) {
2615     + /* 60s timeout */
2616     + wait_for_completion_timeout(&bind_complete,
2617     + msecs_to_jiffies(60 * 1000));
2618     + }
2619     }
2620     if (!acomp->ops) {
2621     dev_info(bus->dev, "couldn't bind with audio component\n");
2622     diff --git a/tools/objtool/elf.c b/tools/objtool/elf.c
2623     index abed594a9653..b8f3cca8e58b 100644
2624     --- a/tools/objtool/elf.c
2625     +++ b/tools/objtool/elf.c
2626     @@ -305,7 +305,7 @@ static int read_symbols(struct elf *elf)
2627     if (sym->type != STT_FUNC)
2628     continue;
2629     sym->pfunc = sym->cfunc = sym;
2630     - coldstr = strstr(sym->name, ".cold.");
2631     + coldstr = strstr(sym->name, ".cold");
2632     if (!coldstr)
2633     continue;
2634    
2635     diff --git a/tools/perf/builtin-version.c b/tools/perf/builtin-version.c
2636     index 50df168be326..b02c96104640 100644
2637     --- a/tools/perf/builtin-version.c
2638     +++ b/tools/perf/builtin-version.c
2639     @@ -19,6 +19,7 @@ static struct version version;
2640     static struct option version_options[] = {
2641     OPT_BOOLEAN(0, "build-options", &version.build_options,
2642     "display the build options"),
2643     + OPT_END(),
2644     };
2645    
2646     static const char * const version_usage[] = {
2647     diff --git a/tools/testing/selftests/cgroup/cgroup_util.c b/tools/testing/selftests/cgroup/cgroup_util.c
2648     index 14c9fe284806..075cb0c73014 100644
2649     --- a/tools/testing/selftests/cgroup/cgroup_util.c
2650     +++ b/tools/testing/selftests/cgroup/cgroup_util.c
2651     @@ -181,8 +181,7 @@ int cg_find_unified_root(char *root, size_t len)
2652     strtok(NULL, delim);
2653     strtok(NULL, delim);
2654    
2655     - if (strcmp(fs, "cgroup") == 0 &&
2656     - strcmp(type, "cgroup2") == 0) {
2657     + if (strcmp(type, "cgroup2") == 0) {
2658     strncpy(root, mount, len);
2659     return 0;
2660     }