Magellan Linux

Contents of /trunk/kernel-alx/patches-3.18/0122-3.18.23-all-fixes.patch

Revision 2718
Thu Nov 12 09:12:09 2015 UTC by niro
File size: 336024 bytes
-linux-3.18.23
1 diff --git a/Documentation/ABI/testing/configfs-usb-gadget-loopback b/Documentation/ABI/testing/configfs-usb-gadget-loopback
2 index 9aae5bfb9908..06beefbcf061 100644
3 --- a/Documentation/ABI/testing/configfs-usb-gadget-loopback
4 +++ b/Documentation/ABI/testing/configfs-usb-gadget-loopback
5 @@ -5,4 +5,4 @@ Description:
6 The attributes:
7
8 qlen - depth of loopback queue
9 - bulk_buflen - buffer length
10 + buflen - buffer length
11 diff --git a/Documentation/ABI/testing/configfs-usb-gadget-sourcesink b/Documentation/ABI/testing/configfs-usb-gadget-sourcesink
12 index 29477c319f61..bc7ff731aa0c 100644
13 --- a/Documentation/ABI/testing/configfs-usb-gadget-sourcesink
14 +++ b/Documentation/ABI/testing/configfs-usb-gadget-sourcesink
15 @@ -9,4 +9,4 @@ Description:
16 isoc_maxpacket - 0 - 1023 (fs), 0 - 1024 (hs/ss)
17 isoc_mult - 0..2 (hs/ss only)
18 isoc_maxburst - 0..15 (ss only)
19 - qlen - buffer length
20 + buflen - buffer length
21 diff --git a/Documentation/HOWTO b/Documentation/HOWTO
22 index 93aa8604630e..21152d397b88 100644
23 --- a/Documentation/HOWTO
24 +++ b/Documentation/HOWTO
25 @@ -218,16 +218,16 @@ The development process
26 Linux kernel development process currently consists of a few different
27 main kernel "branches" and lots of different subsystem-specific kernel
28 branches. These different branches are:
29 - - main 3.x kernel tree
30 - - 3.x.y -stable kernel tree
31 - - 3.x -git kernel patches
32 + - main 4.x kernel tree
33 + - 4.x.y -stable kernel tree
34 + - 4.x -git kernel patches
35 - subsystem specific kernel trees and patches
36 - - the 3.x -next kernel tree for integration tests
37 + - the 4.x -next kernel tree for integration tests
38
39 -3.x kernel tree
40 +4.x kernel tree
41 -----------------
42 -3.x kernels are maintained by Linus Torvalds, and can be found on
43 -kernel.org in the pub/linux/kernel/v3.x/ directory. Its development
44 +4.x kernels are maintained by Linus Torvalds, and can be found on
45 +kernel.org in the pub/linux/kernel/v4.x/ directory. Its development
46 process is as follows:
47 - As soon as a new kernel is released a two weeks window is open,
48 during this period of time maintainers can submit big diffs to
49 @@ -262,20 +262,20 @@ mailing list about kernel releases:
50 released according to perceived bug status, not according to a
51 preconceived timeline."
52
53 -3.x.y -stable kernel tree
54 +4.x.y -stable kernel tree
55 ---------------------------
56 Kernels with 3-part versions are -stable kernels. They contain
57 relatively small and critical fixes for security problems or significant
58 -regressions discovered in a given 3.x kernel.
59 +regressions discovered in a given 4.x kernel.
60
61 This is the recommended branch for users who want the most recent stable
62 kernel and are not interested in helping test development/experimental
63 versions.
64
65 -If no 3.x.y kernel is available, then the highest numbered 3.x
66 +If no 4.x.y kernel is available, then the highest numbered 4.x
67 kernel is the current stable kernel.
68
69 -3.x.y are maintained by the "stable" team <stable@vger.kernel.org>, and
70 +4.x.y are maintained by the "stable" team <stable@vger.kernel.org>, and
71 are released as needs dictate. The normal release period is approximately
72 two weeks, but it can be longer if there are no pressing problems. A
73 security-related problem, instead, can cause a release to happen almost
74 @@ -285,7 +285,7 @@ The file Documentation/stable_kernel_rules.txt in the kernel tree
75 documents what kinds of changes are acceptable for the -stable tree, and
76 how the release process works.
77
78 -3.x -git patches
79 +4.x -git patches
80 ------------------
81 These are daily snapshots of Linus' kernel tree which are managed in a
82 git repository (hence the name.) These patches are usually released
83 @@ -317,9 +317,9 @@ revisions to it, and maintainers can mark patches as under review,
84 accepted, or rejected. Most of these patchwork sites are listed at
85 http://patchwork.kernel.org/.
86
87 -3.x -next kernel tree for integration tests
88 +4.x -next kernel tree for integration tests
89 ---------------------------------------------
90 -Before updates from subsystem trees are merged into the mainline 3.x
91 +Before updates from subsystem trees are merged into the mainline 4.x
92 tree, they need to be integration-tested. For this purpose, a special
93 testing repository exists into which virtually all subsystem trees are
94 pulled on an almost daily basis:
95 diff --git a/Documentation/devicetree/bindings/net/ethernet.txt b/Documentation/devicetree/bindings/net/ethernet.txt
96 index 3fc360523bc9..cb115a3b7e00 100644
97 --- a/Documentation/devicetree/bindings/net/ethernet.txt
98 +++ b/Documentation/devicetree/bindings/net/ethernet.txt
99 @@ -19,7 +19,11 @@ The following properties are common to the Ethernet controllers:
100 - phy: the same as "phy-handle" property, not recommended for new bindings.
101 - phy-device: the same as "phy-handle" property, not recommended for new
102 bindings.
103 +- managed: string, specifies the PHY management type. Supported values are:
104 + "auto", "in-band-status". "auto" is the default, it usess MDIO for
105 + management if fixed-link is not specified.
106
107 Child nodes of the Ethernet controller are typically the individual PHY devices
108 connected via the MDIO bus (sometimes the MDIO bus controller is separate).
109 They are described in the phy.txt file in this same directory.
110 +For non-MDIO PHY management see fixed-link.txt.
111 diff --git a/Makefile b/Makefile
112 index 7adbbbeeb421..2ebc49903d33 100644
113 --- a/Makefile
114 +++ b/Makefile
115 @@ -1,6 +1,6 @@
116 VERSION = 3
117 PATCHLEVEL = 18
118 -SUBLEVEL = 22
119 +SUBLEVEL = 23
120 EXTRAVERSION =
121 NAME = Diseased Newt
122
123 diff --git a/arch/arm/Makefile b/arch/arm/Makefile
124 index 034a94904d69..b5d79884b2af 100644
125 --- a/arch/arm/Makefile
126 +++ b/arch/arm/Makefile
127 @@ -50,6 +50,14 @@ AS += -EL
128 LD += -EL
129 endif
130
131 +#
132 +# The Scalar Replacement of Aggregates (SRA) optimization pass in GCC 4.9 and
133 +# later may result in code being generated that handles signed short and signed
134 +# char struct members incorrectly. So disable it.
135 +# (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=65932)
136 +#
137 +KBUILD_CFLAGS += $(call cc-option,-fno-ipa-sra)
138 +
139 # This selects which instruction set is used.
140 # Note that GCC does not numerically define an architecture version
141 # macro, but instead defines a whole series of macros which makes
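
The hunk above only adds the flag; for background, GCC PR 65932 is about the (IPA-)SRA pass decomposing small aggregates with narrow signed members and mishandling their sign on ARM. A minimal sketch of the kind of pattern involved (hypothetical code, not from this patch):

    /* A small struct passed by value: IPA-SRA may rewrite the call to pass
     * the members as scalars, and GCC 4.9+ could get the sign extension of
     * the signed char/short members wrong on ARM. */
    struct sample {
        signed char  a;
        signed short b;
    };

    int widen_sum(struct sample s)
    {
        return (int)s.a + (int)s.b;   /* both members need sign extension */
    }

Wrapping the flag in $(call cc-option,...) keeps the build working on compilers that predate -fno-ipa-sra, where it is simply dropped.
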
142 diff --git a/arch/arm/boot/dts/imx25-pdk.dts b/arch/arm/boot/dts/imx25-pdk.dts
143 index 9c21b1583762..300507fc722f 100644
144 --- a/arch/arm/boot/dts/imx25-pdk.dts
145 +++ b/arch/arm/boot/dts/imx25-pdk.dts
146 @@ -10,6 +10,7 @@
147 */
148
149 /dts-v1/;
150 +#include <dt-bindings/gpio/gpio.h>
151 #include <dt-bindings/input/input.h>
152 #include "imx25.dtsi"
153
154 @@ -93,8 +94,8 @@
155 &esdhc1 {
156 pinctrl-names = "default";
157 pinctrl-0 = <&pinctrl_esdhc1>;
158 - cd-gpios = <&gpio2 1 0>;
159 - wp-gpios = <&gpio2 0 0>;
160 + cd-gpios = <&gpio2 1 GPIO_ACTIVE_LOW>;
161 + wp-gpios = <&gpio2 0 GPIO_ACTIVE_HIGH>;
162 status = "okay";
163 };
164
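
For reference, the macros that replace the bare third cell in this and the following board hunks come from the header the hunk includes; the values are:

    /* include/dt-bindings/gpio/gpio.h */
    #define GPIO_ACTIVE_HIGH 0
    #define GPIO_ACTIVE_LOW  1

So the old cd-gpios = <&gpio2 1 0> literally declared the card-detect line active-high; the fix flips it to active-low (value 1) to match the hardware, while wp-gpios keeps its numeric value 0 and only gains a readable name.
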
165 diff --git a/arch/arm/boot/dts/imx51-apf51dev.dts b/arch/arm/boot/dts/imx51-apf51dev.dts
166 index c5a9a24c280a..cdd72e0eb4d4 100644
167 --- a/arch/arm/boot/dts/imx51-apf51dev.dts
168 +++ b/arch/arm/boot/dts/imx51-apf51dev.dts
169 @@ -90,7 +90,7 @@
170 &esdhc1 {
171 pinctrl-names = "default";
172 pinctrl-0 = <&pinctrl_esdhc1>;
173 - cd-gpios = <&gpio2 29 GPIO_ACTIVE_HIGH>;
174 + cd-gpios = <&gpio2 29 GPIO_ACTIVE_LOW>;
175 bus-width = <4>;
176 status = "okay";
177 };
178 diff --git a/arch/arm/boot/dts/imx53-ard.dts b/arch/arm/boot/dts/imx53-ard.dts
179 index e9337ad52f59..3bc18835fb4b 100644
180 --- a/arch/arm/boot/dts/imx53-ard.dts
181 +++ b/arch/arm/boot/dts/imx53-ard.dts
182 @@ -103,8 +103,8 @@
183 &esdhc1 {
184 pinctrl-names = "default";
185 pinctrl-0 = <&pinctrl_esdhc1>;
186 - cd-gpios = <&gpio1 1 0>;
187 - wp-gpios = <&gpio1 9 0>;
188 + cd-gpios = <&gpio1 1 GPIO_ACTIVE_LOW>;
189 + wp-gpios = <&gpio1 9 GPIO_ACTIVE_HIGH>;
190 status = "okay";
191 };
192
193 diff --git a/arch/arm/boot/dts/imx53-m53evk.dts b/arch/arm/boot/dts/imx53-m53evk.dts
194 index d0e0f57eb432..53f40885c530 100644
195 --- a/arch/arm/boot/dts/imx53-m53evk.dts
196 +++ b/arch/arm/boot/dts/imx53-m53evk.dts
197 @@ -124,8 +124,8 @@
198 &esdhc1 {
199 pinctrl-names = "default";
200 pinctrl-0 = <&pinctrl_esdhc1>;
201 - cd-gpios = <&gpio1 1 0>;
202 - wp-gpios = <&gpio1 9 0>;
203 + cd-gpios = <&gpio1 1 GPIO_ACTIVE_LOW>;
204 + wp-gpios = <&gpio1 9 GPIO_ACTIVE_HIGH>;
205 status = "okay";
206 };
207
208 diff --git a/arch/arm/boot/dts/imx53-qsb-common.dtsi b/arch/arm/boot/dts/imx53-qsb-common.dtsi
209 index 181ae5ebf23f..1f55187ed9ce 100644
210 --- a/arch/arm/boot/dts/imx53-qsb-common.dtsi
211 +++ b/arch/arm/boot/dts/imx53-qsb-common.dtsi
212 @@ -147,8 +147,8 @@
213 &esdhc3 {
214 pinctrl-names = "default";
215 pinctrl-0 = <&pinctrl_esdhc3>;
216 - cd-gpios = <&gpio3 11 0>;
217 - wp-gpios = <&gpio3 12 0>;
218 + cd-gpios = <&gpio3 11 GPIO_ACTIVE_LOW>;
219 + wp-gpios = <&gpio3 12 GPIO_ACTIVE_HIGH>;
220 bus-width = <8>;
221 status = "okay";
222 };
223 diff --git a/arch/arm/boot/dts/imx53-smd.dts b/arch/arm/boot/dts/imx53-smd.dts
224 index 1d325576bcc0..fc89ce1e5763 100644
225 --- a/arch/arm/boot/dts/imx53-smd.dts
226 +++ b/arch/arm/boot/dts/imx53-smd.dts
227 @@ -41,8 +41,8 @@
228 &esdhc1 {
229 pinctrl-names = "default";
230 pinctrl-0 = <&pinctrl_esdhc1>;
231 - cd-gpios = <&gpio3 13 0>;
232 - wp-gpios = <&gpio4 11 0>;
233 + cd-gpios = <&gpio3 13 GPIO_ACTIVE_LOW>;
234 + wp-gpios = <&gpio4 11 GPIO_ACTIVE_HIGH>;
235 status = "okay";
236 };
237
238 diff --git a/arch/arm/boot/dts/imx53-tqma53.dtsi b/arch/arm/boot/dts/imx53-tqma53.dtsi
239 index 4f1f0e2868bf..e03373a58760 100644
240 --- a/arch/arm/boot/dts/imx53-tqma53.dtsi
241 +++ b/arch/arm/boot/dts/imx53-tqma53.dtsi
242 @@ -41,8 +41,8 @@
243 pinctrl-0 = <&pinctrl_esdhc2>,
244 <&pinctrl_esdhc2_cdwp>;
245 vmmc-supply = <&reg_3p3v>;
246 - wp-gpios = <&gpio1 2 0>;
247 - cd-gpios = <&gpio1 4 0>;
248 + wp-gpios = <&gpio1 2 GPIO_ACTIVE_HIGH>;
249 + cd-gpios = <&gpio1 4 GPIO_ACTIVE_LOW>;
250 status = "disabled";
251 };
252
253 diff --git a/arch/arm/boot/dts/imx53-tx53.dtsi b/arch/arm/boot/dts/imx53-tx53.dtsi
254 index 704bd72cbfec..d3e50b22064f 100644
255 --- a/arch/arm/boot/dts/imx53-tx53.dtsi
256 +++ b/arch/arm/boot/dts/imx53-tx53.dtsi
257 @@ -183,7 +183,7 @@
258 };
259
260 &esdhc1 {
261 - cd-gpios = <&gpio3 24 GPIO_ACTIVE_HIGH>;
262 + cd-gpios = <&gpio3 24 GPIO_ACTIVE_LOW>;
263 fsl,wp-controller;
264 pinctrl-names = "default";
265 pinctrl-0 = <&pinctrl_esdhc1>;
266 @@ -191,7 +191,7 @@
267 };
268
269 &esdhc2 {
270 - cd-gpios = <&gpio3 25 GPIO_ACTIVE_HIGH>;
271 + cd-gpios = <&gpio3 25 GPIO_ACTIVE_LOW>;
272 fsl,wp-controller;
273 pinctrl-names = "default";
274 pinctrl-0 = <&pinctrl_esdhc2>;
275 diff --git a/arch/arm/boot/dts/imx53-voipac-bsb.dts b/arch/arm/boot/dts/imx53-voipac-bsb.dts
276 index c17d3ad6dba5..fc51b87ad208 100644
277 --- a/arch/arm/boot/dts/imx53-voipac-bsb.dts
278 +++ b/arch/arm/boot/dts/imx53-voipac-bsb.dts
279 @@ -119,8 +119,8 @@
280 &esdhc2 {
281 pinctrl-names = "default";
282 pinctrl-0 = <&pinctrl_esdhc2>;
283 - cd-gpios = <&gpio3 25 0>;
284 - wp-gpios = <&gpio2 19 0>;
285 + cd-gpios = <&gpio3 25 GPIO_ACTIVE_LOW>;
286 + wp-gpios = <&gpio2 19 GPIO_ACTIVE_HIGH>;
287 vmmc-supply = <&reg_3p3v>;
288 status = "okay";
289 };
290 diff --git a/arch/arm/boot/dts/imx6qdl-rex.dtsi b/arch/arm/boot/dts/imx6qdl-rex.dtsi
291 index df7bcf86c156..4b5e8c87e53f 100644
292 --- a/arch/arm/boot/dts/imx6qdl-rex.dtsi
293 +++ b/arch/arm/boot/dts/imx6qdl-rex.dtsi
294 @@ -35,7 +35,6 @@
295 compatible = "regulator-fixed";
296 reg = <1>;
297 pinctrl-names = "default";
298 - pinctrl-0 = <&pinctrl_usbh1>;
299 regulator-name = "usbh1_vbus";
300 regulator-min-microvolt = <5000000>;
301 regulator-max-microvolt = <5000000>;
302 @@ -47,7 +46,6 @@
303 compatible = "regulator-fixed";
304 reg = <2>;
305 pinctrl-names = "default";
306 - pinctrl-0 = <&pinctrl_usbotg>;
307 regulator-name = "usb_otg_vbus";
308 regulator-min-microvolt = <5000000>;
309 regulator-max-microvolt = <5000000>;
310 diff --git a/arch/arm/boot/dts/omap3-beagle.dts b/arch/arm/boot/dts/omap3-beagle.dts
311 index a9aae88b74f5..bd603aa2cd82 100644
312 --- a/arch/arm/boot/dts/omap3-beagle.dts
313 +++ b/arch/arm/boot/dts/omap3-beagle.dts
314 @@ -176,7 +176,7 @@
315
316 tfp410_pins: pinmux_tfp410_pins {
317 pinctrl-single,pins = <
318 - 0x194 (PIN_OUTPUT | MUX_MODE4) /* hdq_sio.gpio_170 */
319 + 0x196 (PIN_OUTPUT | MUX_MODE4) /* hdq_sio.gpio_170 */
320 >;
321 };
322
323 diff --git a/arch/arm/boot/dts/omap5-uevm.dts b/arch/arm/boot/dts/omap5-uevm.dts
324 index 159720d6c956..ec23e86e7e4f 100644
325 --- a/arch/arm/boot/dts/omap5-uevm.dts
326 +++ b/arch/arm/boot/dts/omap5-uevm.dts
327 @@ -174,8 +174,8 @@
328
329 i2c5_pins: pinmux_i2c5_pins {
330 pinctrl-single,pins = <
331 - 0x184 (PIN_INPUT | MUX_MODE0) /* i2c5_scl */
332 - 0x186 (PIN_INPUT | MUX_MODE0) /* i2c5_sda */
333 + 0x186 (PIN_INPUT | MUX_MODE0) /* i2c5_scl */
334 + 0x188 (PIN_INPUT | MUX_MODE0) /* i2c5_sda */
335 >;
336 };
337
338 diff --git a/arch/arm/kernel/signal.c b/arch/arm/kernel/signal.c
339 index bd1983437205..ea6d69125dde 100644
340 --- a/arch/arm/kernel/signal.c
341 +++ b/arch/arm/kernel/signal.c
342 @@ -354,12 +354,17 @@ setup_return(struct pt_regs *regs, struct ksignal *ksig,
343 */
344 thumb = handler & 1;
345
346 -#if __LINUX_ARM_ARCH__ >= 7
347 +#if __LINUX_ARM_ARCH__ >= 6
348 /*
349 - * Clear the If-Then Thumb-2 execution state
350 - * ARM spec requires this to be all 000s in ARM mode
351 - * Snapdragon S4/Krait misbehaves on a Thumb=>ARM
352 - * signal transition without this.
353 + * Clear the If-Then Thumb-2 execution state. ARM spec
354 + * requires this to be all 000s in ARM mode. Snapdragon
355 + * S4/Krait misbehaves on a Thumb=>ARM signal transition
356 + * without this.
357 + *
358 + * We must do this whenever we are running on a Thumb-2
359 + * capable CPU, which includes ARMv6T2. However, we elect
360 + * to do this whenever we're on an ARMv6 or later CPU for
361 + * simplicity.
362 */
363 cpsr &= ~PSR_IT_MASK;
364 #endif
365 diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
366 index cba52cf6ed3f..3535480e0e6b 100644
367 --- a/arch/arm/kvm/mmu.c
368 +++ b/arch/arm/kvm/mmu.c
369 @@ -1439,8 +1439,10 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
370 if (vma->vm_flags & VM_PFNMAP) {
371 gpa_t gpa = mem->guest_phys_addr +
372 (vm_start - mem->userspace_addr);
373 - phys_addr_t pa = (vma->vm_pgoff << PAGE_SHIFT) +
374 - vm_start - vma->vm_start;
375 + phys_addr_t pa;
376 +
377 + pa = (phys_addr_t)vma->vm_pgoff << PAGE_SHIFT;
378 + pa += vm_start - vma->vm_start;
379
380 ret = kvm_phys_addr_ioremap(kvm, gpa, pa,
381 vm_end - vm_start,
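
This is a 32-bit truncation fix: vm_pgoff is an unsigned long (32 bits on ARM), so the old shift overflowed before the result was widened to the 64-bit phys_addr_t used with LPAE. A standalone illustration:

    unsigned long pgoff = 0x180000;              /* page 0x180000 = 6 GiB with 4 KiB pages */

    phys_addr_t bad  = pgoff << 12;              /* shifted in 32 bits: 0x80000000  */
    phys_addr_t good = (phys_addr_t)pgoff << 12; /* widened first:      0x180000000 */

Casting before the shift makes the arithmetic happen in the 64-bit type, so device memory above 4 GiB maps to the correct physical address.
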
382 diff --git a/arch/arm/mach-omap2/clockdomains7xx_data.c b/arch/arm/mach-omap2/clockdomains7xx_data.c
383 index 57d5df0c1fbd..7581e036bda6 100644
384 --- a/arch/arm/mach-omap2/clockdomains7xx_data.c
385 +++ b/arch/arm/mach-omap2/clockdomains7xx_data.c
386 @@ -331,7 +331,7 @@ static struct clockdomain l4per2_7xx_clkdm = {
387 .dep_bit = DRA7XX_L4PER2_STATDEP_SHIFT,
388 .wkdep_srcs = l4per2_wkup_sleep_deps,
389 .sleepdep_srcs = l4per2_wkup_sleep_deps,
390 - .flags = CLKDM_CAN_HWSUP_SWSUP,
391 + .flags = CLKDM_CAN_SWSUP,
392 };
393
394 static struct clockdomain mpu0_7xx_clkdm = {
395 diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
396 index dc2d66cdf311..00b9c4870230 100644
397 --- a/arch/arm64/Kconfig
398 +++ b/arch/arm64/Kconfig
399 @@ -91,6 +91,10 @@ config NO_IOPORT_MAP
400 config STACKTRACE_SUPPORT
401 def_bool y
402
403 +config ILLEGAL_POINTER_VALUE
404 + hex
405 + default 0xdead000000000000
406 +
407 config LOCKDEP_SUPPORT
408 def_bool y
409
410 @@ -575,6 +579,22 @@ source "drivers/cpuidle/Kconfig"
411
412 source "drivers/cpufreq/Kconfig"
413
414 +config ARM64_ERRATUM_843419
415 + bool "Cortex-A53: 843419: A load or store might access an incorrect address"
416 + depends on MODULES
417 + default y
418 + help
419 + This option builds kernel modules using the large memory model in
420 + order to avoid the use of the ADRP instruction, which can cause
421 + a subsequent memory access to use an incorrect address on Cortex-A53
422 + parts up to r0p4.
423 +
424 + Note that the kernel itself must be linked with a version of ld
425 + which fixes potentially affected ADRP instructions through the
426 + use of veneers.
427 +
428 + If unsure, say Y.
429 +
430 endmenu
431
432 source "net/Kconfig"
433 diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
434 index 20901ffed182..e7391aef5433 100644
435 --- a/arch/arm64/Makefile
436 +++ b/arch/arm64/Makefile
437 @@ -32,6 +32,10 @@ endif
438
439 CHECKFLAGS += -D__aarch64__
440
441 +ifeq ($(CONFIG_ARM64_ERRATUM_843419), y)
442 +CFLAGS_MODULE += -mcmodel=large
443 +endif
444 +
445 # Default value
446 head-y := arch/arm64/kernel/head.o
447
448 diff --git a/arch/arm64/kernel/entry-ftrace.S b/arch/arm64/kernel/entry-ftrace.S
449 index 38e704e597f7..c85a02b6cca0 100644
450 --- a/arch/arm64/kernel/entry-ftrace.S
451 +++ b/arch/arm64/kernel/entry-ftrace.S
452 @@ -177,6 +177,24 @@ ENTRY(ftrace_stub)
453 ENDPROC(ftrace_stub)
454
455 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
456 + /* save return value regs*/
457 + .macro save_return_regs
458 + sub sp, sp, #64
459 + stp x0, x1, [sp]
460 + stp x2, x3, [sp, #16]
461 + stp x4, x5, [sp, #32]
462 + stp x6, x7, [sp, #48]
463 + .endm
464 +
465 + /* restore return value regs*/
466 + .macro restore_return_regs
467 + ldp x0, x1, [sp]
468 + ldp x2, x3, [sp, #16]
469 + ldp x4, x5, [sp, #32]
470 + ldp x6, x7, [sp, #48]
471 + add sp, sp, #64
472 + .endm
473 +
474 /*
475 * void ftrace_graph_caller(void)
476 *
477 @@ -203,11 +221,11 @@ ENDPROC(ftrace_graph_caller)
478 * only when CONFIG_HAVE_FUNCTION_GRAPH_FP_TEST is enabled.
479 */
480 ENTRY(return_to_handler)
481 - str x0, [sp, #-16]!
482 + save_return_regs
483 mov x0, x29 // parent's fp
484 bl ftrace_return_to_handler// addr = ftrace_return_to_hander(fp);
485 mov x30, x0 // restore the original return address
486 - ldr x0, [sp], #16
487 + restore_return_regs
488 ret
489 END(return_to_handler)
490 #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
491 diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
492 index 0a6e4f924df8..2877dd818977 100644
493 --- a/arch/arm64/kernel/head.S
494 +++ b/arch/arm64/kernel/head.S
495 @@ -327,6 +327,11 @@ CPU_LE( movk x0, #0x30d0, lsl #16 ) // Clear EE and E0E on LE systems
496 msr hstr_el2, xzr // Disable CP15 traps to EL2
497 #endif
498
499 + /* EL2 debug */
500 + mrs x0, pmcr_el0 // Disable debug access traps
501 + ubfx x0, x0, #11, #5 // to EL2 and allow access to
502 + msr mdcr_el2, x0 // all PMU counters from EL1
503 +
504 /* Stage-2 translation */
505 msr vttbr_el2, xzr
506
507 diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
508 index 1eb1cc955139..e366329d96d8 100644
509 --- a/arch/arm64/kernel/module.c
510 +++ b/arch/arm64/kernel/module.c
511 @@ -330,12 +330,14 @@ int apply_relocate_add(Elf64_Shdr *sechdrs,
512 ovf = reloc_insn_imm(RELOC_OP_PREL, loc, val, 0, 21,
513 AARCH64_INSN_IMM_ADR);
514 break;
515 +#ifndef CONFIG_ARM64_ERRATUM_843419
516 case R_AARCH64_ADR_PREL_PG_HI21_NC:
517 overflow_check = false;
518 case R_AARCH64_ADR_PREL_PG_HI21:
519 ovf = reloc_insn_imm(RELOC_OP_PAGE, loc, val, 12, 21,
520 AARCH64_INSN_IMM_ADR);
521 break;
522 +#endif
523 case R_AARCH64_ADD_ABS_LO12_NC:
524 case R_AARCH64_LDST8_ABS_LO12_NC:
525 overflow_check = false;
526 diff --git a/arch/arm64/kernel/signal32.c b/arch/arm64/kernel/signal32.c
527 index b4efc2e38336..15dd021b0025 100644
528 --- a/arch/arm64/kernel/signal32.c
529 +++ b/arch/arm64/kernel/signal32.c
530 @@ -206,14 +206,32 @@ int copy_siginfo_from_user32(siginfo_t *to, compat_siginfo_t __user *from)
531
532 /*
533 * VFP save/restore code.
534 + *
535 + * We have to be careful with endianness, since the fpsimd context-switch
536 + * code operates on 128-bit (Q) register values whereas the compat ABI
537 + * uses an array of 64-bit (D) registers. Consequently, we need to swap
538 + * the two halves of each Q register when running on a big-endian CPU.
539 */
540 +union __fpsimd_vreg {
541 + __uint128_t raw;
542 + struct {
543 +#ifdef __AARCH64EB__
544 + u64 hi;
545 + u64 lo;
546 +#else
547 + u64 lo;
548 + u64 hi;
549 +#endif
550 + };
551 +};
552 +
553 static int compat_preserve_vfp_context(struct compat_vfp_sigframe __user *frame)
554 {
555 struct fpsimd_state *fpsimd = &current->thread.fpsimd_state;
556 compat_ulong_t magic = VFP_MAGIC;
557 compat_ulong_t size = VFP_STORAGE_SIZE;
558 compat_ulong_t fpscr, fpexc;
559 - int err = 0;
560 + int i, err = 0;
561
562 /*
563 * Save the hardware registers to the fpsimd_state structure.
564 @@ -229,10 +247,15 @@ static int compat_preserve_vfp_context(struct compat_vfp_sigframe __user *frame)
565 /*
566 * Now copy the FP registers. Since the registers are packed,
567 * we can copy the prefix we want (V0-V15) as it is.
568 - * FIXME: Won't work if big endian.
569 */
570 - err |= __copy_to_user(&frame->ufp.fpregs, fpsimd->vregs,
571 - sizeof(frame->ufp.fpregs));
572 + for (i = 0; i < ARRAY_SIZE(frame->ufp.fpregs); i += 2) {
573 + union __fpsimd_vreg vreg = {
574 + .raw = fpsimd->vregs[i >> 1],
575 + };
576 +
577 + __put_user_error(vreg.lo, &frame->ufp.fpregs[i], err);
578 + __put_user_error(vreg.hi, &frame->ufp.fpregs[i + 1], err);
579 + }
580
581 /* Create an AArch32 fpscr from the fpsr and the fpcr. */
582 fpscr = (fpsimd->fpsr & VFP_FPSCR_STAT_MASK) |
583 @@ -257,7 +280,7 @@ static int compat_restore_vfp_context(struct compat_vfp_sigframe __user *frame)
584 compat_ulong_t magic = VFP_MAGIC;
585 compat_ulong_t size = VFP_STORAGE_SIZE;
586 compat_ulong_t fpscr;
587 - int err = 0;
588 + int i, err = 0;
589
590 __get_user_error(magic, &frame->magic, err);
591 __get_user_error(size, &frame->size, err);
592 @@ -267,12 +290,14 @@ static int compat_restore_vfp_context(struct compat_vfp_sigframe __user *frame)
593 if (magic != VFP_MAGIC || size != VFP_STORAGE_SIZE)
594 return -EINVAL;
595
596 - /*
597 - * Copy the FP registers into the start of the fpsimd_state.
598 - * FIXME: Won't work if big endian.
599 - */
600 - err |= __copy_from_user(fpsimd.vregs, frame->ufp.fpregs,
601 - sizeof(frame->ufp.fpregs));
602 + /* Copy the FP registers into the start of the fpsimd_state. */
603 + for (i = 0; i < ARRAY_SIZE(frame->ufp.fpregs); i += 2) {
604 + union __fpsimd_vreg vreg;
605 +
606 + __get_user_error(vreg.lo, &frame->ufp.fpregs[i], err);
607 + __get_user_error(vreg.hi, &frame->ufp.fpregs[i + 1], err);
608 + fpsimd.vregs[i >> 1] = vreg.raw;
609 + }
610
611 /* Extract the fpsr and the fpcr from the fpscr */
612 __get_user_error(fpscr, &frame->ufp.fpscr, err);
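
A standalone sketch of what the new union buys: each 128-bit Q register holds two of the compat ABI's 64-bit D registers, and the member order flips with endianness so that .lo always names the even D register and .hi the odd one:

    #include <stdint.h>

    union vreg {                        /* little-endian layout shown */
        __uint128_t raw;
        struct { uint64_t lo, hi; };
    };

    union vreg q0 = {
        .raw = ((__uint128_t)0x1111222233334444ULL << 64) | 0x5555666677778888ULL
    };
    /* q0.lo == 0x5555666677778888 -> fpregs[0] (D0)
     * q0.hi == 0x1111222233334444 -> fpregs[1] (D1) */

On a big-endian AArch64 kernel the struct members swap places (the #ifdef __AARCH64EB__ branch above), so the same .lo/.hi names still map to the D registers the AArch32 frame expects, which is exactly the FIXME the old bulk __copy_to_user()/__copy_from_user() code could not satisfy.
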
613 diff --git a/arch/arm64/kvm/hyp.S b/arch/arm64/kvm/hyp.S
614 index a767f6a4ce54..566a457d1803 100644
615 --- a/arch/arm64/kvm/hyp.S
616 +++ b/arch/arm64/kvm/hyp.S
617 @@ -843,8 +843,6 @@
618 mrs x3, cntv_ctl_el0
619 and x3, x3, #3
620 str w3, [x0, #VCPU_TIMER_CNTV_CTL]
621 - bic x3, x3, #1 // Clear Enable
622 - msr cntv_ctl_el0, x3
623
624 isb
625
626 @@ -852,6 +850,9 @@
627 str x3, [x0, #VCPU_TIMER_CNTV_CVAL]
628
629 1:
630 + // Disable the virtual timer
631 + msr cntv_ctl_el0, xzr
632 +
633 // Allow physical timer/counter access for the host
634 mrs x2, cnthctl_el2
635 orr x2, x2, #3
636 diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
637 index 41cb6d3d6075..6094c64b3380 100644
638 --- a/arch/arm64/mm/fault.c
639 +++ b/arch/arm64/mm/fault.c
640 @@ -279,6 +279,7 @@ retry:
641 * starvation.
642 */
643 mm_flags &= ~FAULT_FLAG_ALLOW_RETRY;
644 + mm_flags |= FAULT_FLAG_TRIED;
645 goto retry;
646 }
647 }
648 diff --git a/arch/m68k/include/asm/linkage.h b/arch/m68k/include/asm/linkage.h
649 index 5a822bb790f7..066e74f666ae 100644
650 --- a/arch/m68k/include/asm/linkage.h
651 +++ b/arch/m68k/include/asm/linkage.h
652 @@ -4,4 +4,34 @@
653 #define __ALIGN .align 4
654 #define __ALIGN_STR ".align 4"
655
656 +/*
657 + * Make sure the compiler doesn't do anything stupid with the
658 + * arguments on the stack - they are owned by the *caller*, not
659 + * the callee. This just fools gcc into not spilling into them,
660 + * and keeps it from doing tailcall recursion and/or using the
661 + * stack slots for temporaries, since they are live and "used"
662 + * all the way to the end of the function.
663 + */
664 +#define asmlinkage_protect(n, ret, args...) \
665 + __asmlinkage_protect##n(ret, ##args)
666 +#define __asmlinkage_protect_n(ret, args...) \
667 + __asm__ __volatile__ ("" : "=r" (ret) : "0" (ret), ##args)
668 +#define __asmlinkage_protect0(ret) \
669 + __asmlinkage_protect_n(ret)
670 +#define __asmlinkage_protect1(ret, arg1) \
671 + __asmlinkage_protect_n(ret, "m" (arg1))
672 +#define __asmlinkage_protect2(ret, arg1, arg2) \
673 + __asmlinkage_protect_n(ret, "m" (arg1), "m" (arg2))
674 +#define __asmlinkage_protect3(ret, arg1, arg2, arg3) \
675 + __asmlinkage_protect_n(ret, "m" (arg1), "m" (arg2), "m" (arg3))
676 +#define __asmlinkage_protect4(ret, arg1, arg2, arg3, arg4) \
677 + __asmlinkage_protect_n(ret, "m" (arg1), "m" (arg2), "m" (arg3), \
678 + "m" (arg4))
679 +#define __asmlinkage_protect5(ret, arg1, arg2, arg3, arg4, arg5) \
680 + __asmlinkage_protect_n(ret, "m" (arg1), "m" (arg2), "m" (arg3), \
681 + "m" (arg4), "m" (arg5))
682 +#define __asmlinkage_protect6(ret, arg1, arg2, arg3, arg4, arg5, arg6) \
683 + __asmlinkage_protect_n(ret, "m" (arg1), "m" (arg2), "m" (arg3), \
684 + "m" (arg4), "m" (arg5), "m" (arg6))
685 +
686 #endif
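
These macros mirror what other architectures already carry in asm/linkage.h; a caller pins its stack-passed arguments like this (hypothetical syscall, illustrative only):

    asmlinkage long sys_example(unsigned int fd, size_t len)
    {
        long ret = do_example(fd, len);       /* do_example is made up */
        asmlinkage_protect(2, ret, fd, len);  /* args stay "live" in memory */
        return ret;
    }

The empty asm with "m" constraints tells GCC the argument slots are still read at that point, which suppresses tail-call recursion and stops the compiler from reusing the caller-owned slots as scratch space, exactly as the comment block says.
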
687 diff --git a/arch/mips/mm/dma-default.c b/arch/mips/mm/dma-default.c
688 index 33ba3c558fe4..027ad1f24e32 100644
689 --- a/arch/mips/mm/dma-default.c
690 +++ b/arch/mips/mm/dma-default.c
691 @@ -95,7 +95,7 @@ static gfp_t massage_gfp_flags(const struct device *dev, gfp_t gfp)
692 else
693 #endif
694 #if defined(CONFIG_ZONE_DMA) && !defined(CONFIG_ZONE_DMA32)
695 - if (dev->coherent_dma_mask < DMA_BIT_MASK(64))
696 + if (dev->coherent_dma_mask < DMA_BIT_MASK(sizeof(phys_addr_t) * 8))
697 dma_flag = __GFP_DMA;
698 else
699 #endif
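
The new comparison bounds the check by how wide a physical address can actually be in the current configuration. For reference, DMA_BIT_MASK is the stock helper:

    /* include/linux/dma-mapping.h */
    #define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))

With a 32-bit phys_addr_t, sizeof(phys_addr_t) * 8 is 32, so a device whose coherent mask already covers the whole physical address space is no longer forced into the DMA zone; the old DMA_BIT_MASK(64) comparison did that for every device on such systems.
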
700 diff --git a/arch/parisc/kernel/irq.c b/arch/parisc/kernel/irq.c
701 index cfe056fe7f5c..34f06be569d9 100644
702 --- a/arch/parisc/kernel/irq.c
703 +++ b/arch/parisc/kernel/irq.c
704 @@ -507,8 +507,8 @@ void do_cpu_irq_mask(struct pt_regs *regs)
705 struct pt_regs *old_regs;
706 unsigned long eirr_val;
707 int irq, cpu = smp_processor_id();
708 -#ifdef CONFIG_SMP
709 struct irq_desc *desc;
710 +#ifdef CONFIG_SMP
711 cpumask_t dest;
712 #endif
713
714 @@ -521,8 +521,12 @@ void do_cpu_irq_mask(struct pt_regs *regs)
715 goto set_out;
716 irq = eirr_to_irq(eirr_val);
717
718 -#ifdef CONFIG_SMP
719 + /* Filter out spurious interrupts, mostly from serial port at bootup */
720 desc = irq_to_desc(irq);
721 + if (unlikely(!desc->action))
722 + goto set_out;
723 +
724 +#ifdef CONFIG_SMP
725 cpumask_copy(&dest, desc->irq_data.affinity);
726 if (irqd_is_per_cpu(&desc->irq_data) &&
727 !cpu_isset(smp_processor_id(), dest)) {
728 diff --git a/arch/parisc/kernel/syscall.S b/arch/parisc/kernel/syscall.S
729 index 7ef22e3387e0..0b8d26d3ba43 100644
730 --- a/arch/parisc/kernel/syscall.S
731 +++ b/arch/parisc/kernel/syscall.S
732 @@ -821,7 +821,7 @@ cas2_action:
733 /* 64bit CAS */
734 #ifdef CONFIG_64BIT
735 19: ldd,ma 0(%sr3,%r26), %r29
736 - sub,= %r29, %r25, %r0
737 + sub,*= %r29, %r25, %r0
738 b,n cas2_end
739 20: std,ma %r24, 0(%sr3,%r26)
740 copy %r0, %r28
741 diff --git a/arch/powerpc/include/asm/pgtable-ppc64.h b/arch/powerpc/include/asm/pgtable-ppc64.h
742 index ae153c40ab7c..daf4add50743 100644
743 --- a/arch/powerpc/include/asm/pgtable-ppc64.h
744 +++ b/arch/powerpc/include/asm/pgtable-ppc64.h
745 @@ -135,7 +135,19 @@
746 #define pte_iterate_hashed_end() } while(0)
747
748 #ifdef CONFIG_PPC_HAS_HASH_64K
749 -#define pte_pagesize_index(mm, addr, pte) get_slice_psize(mm, addr)
750 +/*
751 + * We expect this to be called only for user addresses or kernel virtual
752 + * addresses other than the linear mapping.
753 + */
754 +#define pte_pagesize_index(mm, addr, pte) \
755 + ({ \
756 + unsigned int psize; \
757 + if (is_kernel_addr(addr)) \
758 + psize = MMU_PAGE_4K; \
759 + else \
760 + psize = get_slice_psize(mm, addr); \
761 + psize; \
762 + })
763 #else
764 #define pte_pagesize_index(mm, addr, pte) MMU_PAGE_4K
765 #endif
766 diff --git a/arch/powerpc/include/asm/rtas.h b/arch/powerpc/include/asm/rtas.h
767 index b390f55b0df1..af37e69b3b74 100644
768 --- a/arch/powerpc/include/asm/rtas.h
769 +++ b/arch/powerpc/include/asm/rtas.h
770 @@ -316,6 +316,7 @@ extern void rtas_power_off(void);
771 extern void rtas_halt(void);
772 extern void rtas_os_term(char *str);
773 extern int rtas_get_sensor(int sensor, int index, int *state);
774 +extern int rtas_get_sensor_fast(int sensor, int index, int *state);
775 extern int rtas_get_power_level(int powerdomain, int *level);
776 extern int rtas_set_power_level(int powerdomain, int level, int *setlevel);
777 extern bool rtas_indicator_present(int token, int *maxindex);
778 diff --git a/arch/powerpc/kernel/rtas.c b/arch/powerpc/kernel/rtas.c
779 index 8b4c857c1421..af0dafab5807 100644
780 --- a/arch/powerpc/kernel/rtas.c
781 +++ b/arch/powerpc/kernel/rtas.c
782 @@ -584,6 +584,23 @@ int rtas_get_sensor(int sensor, int index, int *state)
783 }
784 EXPORT_SYMBOL(rtas_get_sensor);
785
786 +int rtas_get_sensor_fast(int sensor, int index, int *state)
787 +{
788 + int token = rtas_token("get-sensor-state");
789 + int rc;
790 +
791 + if (token == RTAS_UNKNOWN_SERVICE)
792 + return -ENOENT;
793 +
794 + rc = rtas_call(token, 2, 2, state, sensor, index);
795 + WARN_ON(rc == RTAS_BUSY || (rc >= RTAS_EXTENDED_DELAY_MIN &&
796 + rc <= RTAS_EXTENDED_DELAY_MAX));
797 +
798 + if (rc < 0)
799 + return rtas_error_rc(rc);
800 + return rc;
801 +}
802 +
803 bool rtas_indicator_present(int token, int *maxindex)
804 {
805 int proplen, count, i;
806 diff --git a/arch/powerpc/mm/hugepage-hash64.c b/arch/powerpc/mm/hugepage-hash64.c
807 index 5f5e6328c21c..5061c6f676da 100644
808 --- a/arch/powerpc/mm/hugepage-hash64.c
809 +++ b/arch/powerpc/mm/hugepage-hash64.c
810 @@ -136,7 +136,6 @@ int __hash_page_thp(unsigned long ea, unsigned long access, unsigned long vsid,
811 BUG_ON(index >= 4096);
812
813 vpn = hpt_vpn(ea, vsid, ssize);
814 - hash = hpt_hash(vpn, shift, ssize);
815 hpte_slot_array = get_hpte_slot_array(pmdp);
816 if (psize == MMU_PAGE_4K) {
817 /*
818 @@ -151,6 +150,7 @@ int __hash_page_thp(unsigned long ea, unsigned long access, unsigned long vsid,
819 valid = hpte_valid(hpte_slot_array, index);
820 if (valid) {
821 /* update the hpte bits */
822 + hash = hpt_hash(vpn, shift, ssize);
823 hidx = hpte_hash_index(hpte_slot_array, index);
824 if (hidx & _PTEIDX_SECONDARY)
825 hash = ~hash;
826 @@ -176,6 +176,7 @@ int __hash_page_thp(unsigned long ea, unsigned long access, unsigned long vsid,
827 if (!valid) {
828 unsigned long hpte_group;
829
830 + hash = hpt_hash(vpn, shift, ssize);
831 /* insert new entry */
832 pa = pmd_pfn(__pmd(old_pmd)) << PAGE_SHIFT;
833 new_pmd |= _PAGE_HASHPTE;
834 diff --git a/arch/powerpc/platforms/powernv/pci.c b/arch/powerpc/platforms/powernv/pci.c
835 index 4b20f2c6b3b2..9ff55d59ac76 100644
836 --- a/arch/powerpc/platforms/powernv/pci.c
837 +++ b/arch/powerpc/platforms/powernv/pci.c
838 @@ -100,6 +100,7 @@ static void pnv_teardown_msi_irqs(struct pci_dev *pdev)
839 struct pci_controller *hose = pci_bus_to_host(pdev->bus);
840 struct pnv_phb *phb = hose->private_data;
841 struct msi_desc *entry;
842 + irq_hw_number_t hwirq;
843
844 if (WARN_ON(!phb))
845 return;
846 @@ -107,10 +108,10 @@ static void pnv_teardown_msi_irqs(struct pci_dev *pdev)
847 list_for_each_entry(entry, &pdev->msi_list, list) {
848 if (entry->irq == NO_IRQ)
849 continue;
850 + hwirq = virq_to_hw(entry->irq);
851 irq_set_msi_desc(entry->irq, NULL);
852 - msi_bitmap_free_hwirqs(&phb->msi_bmp,
853 - virq_to_hw(entry->irq) - phb->msi_base, 1);
854 irq_dispose_mapping(entry->irq);
855 + msi_bitmap_free_hwirqs(&phb->msi_bmp, hwirq - phb->msi_base, 1);
856 }
857 }
858 #endif /* CONFIG_PCI_MSI */
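
This hunk and the fsl_msi, mpic_pasemi, mpic_u3 and ppc4xx hunks below all apply the same reordering to close a teardown race: the hardware IRQ number must not go back to the bitmap allocator while the virq mapping still exists, and since irq_dispose_mapping() destroys the virq-to-hwirq association, the number has to be captured up front. As a sketch, using the names from the hunk above:

    /* old order (racy):
     *   CPU0: msi_bitmap_free_hwirqs(hwirq)  -- bit freed, virq still mapped
     *   CPU1: allocates the same hwirq and maps it to a new virq
     *   CPU0: irq_dispose_mapping(virq)      -- now clobbers the new mapping
     * new order: capture hwirq, dispose the mapping, then free the bit. */
    hwirq = virq_to_hw(entry->irq);
    irq_dispose_mapping(entry->irq);
    msi_bitmap_free_hwirqs(&phb->msi_bmp, hwirq - phb->msi_base, 1);
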
859 diff --git a/arch/powerpc/platforms/pseries/ras.c b/arch/powerpc/platforms/pseries/ras.c
860 index 5a4d0fc03b03..d263f7bc80fc 100644
861 --- a/arch/powerpc/platforms/pseries/ras.c
862 +++ b/arch/powerpc/platforms/pseries/ras.c
863 @@ -187,7 +187,8 @@ static irqreturn_t ras_epow_interrupt(int irq, void *dev_id)
864 int state;
865 int critical;
866
867 - status = rtas_get_sensor(EPOW_SENSOR_TOKEN, EPOW_SENSOR_INDEX, &state);
868 + status = rtas_get_sensor_fast(EPOW_SENSOR_TOKEN, EPOW_SENSOR_INDEX,
869 + &state);
870
871 if (state > 3)
872 critical = 1; /* Time Critical */
873 diff --git a/arch/powerpc/sysdev/fsl_msi.c b/arch/powerpc/sysdev/fsl_msi.c
874 index da08ed088157..ea6b3a1c79d8 100644
875 --- a/arch/powerpc/sysdev/fsl_msi.c
876 +++ b/arch/powerpc/sysdev/fsl_msi.c
877 @@ -129,15 +129,16 @@ static void fsl_teardown_msi_irqs(struct pci_dev *pdev)
878 {
879 struct msi_desc *entry;
880 struct fsl_msi *msi_data;
881 + irq_hw_number_t hwirq;
882
883 list_for_each_entry(entry, &pdev->msi_list, list) {
884 if (entry->irq == NO_IRQ)
885 continue;
886 + hwirq = virq_to_hw(entry->irq);
887 msi_data = irq_get_chip_data(entry->irq);
888 irq_set_msi_desc(entry->irq, NULL);
889 - msi_bitmap_free_hwirqs(&msi_data->bitmap,
890 - virq_to_hw(entry->irq), 1);
891 irq_dispose_mapping(entry->irq);
892 + msi_bitmap_free_hwirqs(&msi_data->bitmap, hwirq, 1);
893 }
894
895 return;
896 diff --git a/arch/powerpc/sysdev/mpic_pasemi_msi.c b/arch/powerpc/sysdev/mpic_pasemi_msi.c
897 index 15dccd35fa11..a6add4ae6c5a 100644
898 --- a/arch/powerpc/sysdev/mpic_pasemi_msi.c
899 +++ b/arch/powerpc/sysdev/mpic_pasemi_msi.c
900 @@ -66,6 +66,7 @@ static struct irq_chip mpic_pasemi_msi_chip = {
901 static void pasemi_msi_teardown_msi_irqs(struct pci_dev *pdev)
902 {
903 struct msi_desc *entry;
904 + irq_hw_number_t hwirq;
905
906 pr_debug("pasemi_msi_teardown_msi_irqs, pdev %p\n", pdev);
907
908 @@ -73,10 +74,11 @@ static void pasemi_msi_teardown_msi_irqs(struct pci_dev *pdev)
909 if (entry->irq == NO_IRQ)
910 continue;
911
912 + hwirq = virq_to_hw(entry->irq);
913 irq_set_msi_desc(entry->irq, NULL);
914 - msi_bitmap_free_hwirqs(&msi_mpic->msi_bitmap,
915 - virq_to_hw(entry->irq), ALLOC_CHUNK);
916 irq_dispose_mapping(entry->irq);
917 + msi_bitmap_free_hwirqs(&msi_mpic->msi_bitmap,
918 + hwirq, ALLOC_CHUNK);
919 }
920
921 return;
922 diff --git a/arch/powerpc/sysdev/mpic_u3msi.c b/arch/powerpc/sysdev/mpic_u3msi.c
923 index 623d7fba15b4..db35a4073127 100644
924 --- a/arch/powerpc/sysdev/mpic_u3msi.c
925 +++ b/arch/powerpc/sysdev/mpic_u3msi.c
926 @@ -108,15 +108,16 @@ static u64 find_u4_magic_addr(struct pci_dev *pdev, unsigned int hwirq)
927 static void u3msi_teardown_msi_irqs(struct pci_dev *pdev)
928 {
929 struct msi_desc *entry;
930 + irq_hw_number_t hwirq;
931
932 list_for_each_entry(entry, &pdev->msi_list, list) {
933 if (entry->irq == NO_IRQ)
934 continue;
935
936 + hwirq = virq_to_hw(entry->irq);
937 irq_set_msi_desc(entry->irq, NULL);
938 - msi_bitmap_free_hwirqs(&msi_mpic->msi_bitmap,
939 - virq_to_hw(entry->irq), 1);
940 irq_dispose_mapping(entry->irq);
941 + msi_bitmap_free_hwirqs(&msi_mpic->msi_bitmap, hwirq, 1);
942 }
943
944 return;
945 diff --git a/arch/powerpc/sysdev/ppc4xx_msi.c b/arch/powerpc/sysdev/ppc4xx_msi.c
946 index 22b5200636e7..85d9c1852d19 100644
947 --- a/arch/powerpc/sysdev/ppc4xx_msi.c
948 +++ b/arch/powerpc/sysdev/ppc4xx_msi.c
949 @@ -125,16 +125,17 @@ void ppc4xx_teardown_msi_irqs(struct pci_dev *dev)
950 {
951 struct msi_desc *entry;
952 struct ppc4xx_msi *msi_data = &ppc4xx_msi;
953 + irq_hw_number_t hwirq;
954
955 dev_dbg(&dev->dev, "PCIE-MSI: tearing down msi irqs\n");
956
957 list_for_each_entry(entry, &dev->msi_list, list) {
958 if (entry->irq == NO_IRQ)
959 continue;
960 + hwirq = virq_to_hw(entry->irq);
961 irq_set_msi_desc(entry->irq, NULL);
962 - msi_bitmap_free_hwirqs(&msi_data->bitmap,
963 - virq_to_hw(entry->irq), 1);
964 irq_dispose_mapping(entry->irq);
965 + msi_bitmap_free_hwirqs(&msi_data->bitmap, hwirq, 1);
966 }
967 }
968
969 diff --git a/arch/s390/boot/compressed/Makefile b/arch/s390/boot/compressed/Makefile
970 index f90d1fc6d603..f70b2321071e 100644
971 --- a/arch/s390/boot/compressed/Makefile
972 +++ b/arch/s390/boot/compressed/Makefile
973 @@ -12,7 +12,7 @@ targets += misc.o piggy.o sizes.h head$(BITS).o
974
975 KBUILD_CFLAGS := -m$(BITS) -D__KERNEL__ $(LINUX_INCLUDE) -O2
976 KBUILD_CFLAGS += -DDISABLE_BRANCH_PROFILING
977 -KBUILD_CFLAGS += $(cflags-y) -fno-delete-null-pointer-checks
978 +KBUILD_CFLAGS += $(cflags-y) -fno-delete-null-pointer-checks -msoft-float
979 KBUILD_CFLAGS += $(call cc-option,-mpacked-stack)
980 KBUILD_CFLAGS += $(call cc-option,-ffreestanding)
981
982 diff --git a/arch/s390/kernel/compat_signal.c b/arch/s390/kernel/compat_signal.c
983 index 009f5eb11125..14c3e80e003a 100644
984 --- a/arch/s390/kernel/compat_signal.c
985 +++ b/arch/s390/kernel/compat_signal.c
986 @@ -48,6 +48,19 @@ typedef struct
987 struct ucontext32 uc;
988 } rt_sigframe32;
989
990 +static inline void sigset_to_sigset32(unsigned long *set64,
991 + compat_sigset_word *set32)
992 +{
993 + set32[0] = (compat_sigset_word) set64[0];
994 + set32[1] = (compat_sigset_word)(set64[0] >> 32);
995 +}
996 +
997 +static inline void sigset32_to_sigset(compat_sigset_word *set32,
998 + unsigned long *set64)
999 +{
1000 + set64[0] = (unsigned long) set32[0] | ((unsigned long) set32[1] << 32);
1001 +}
1002 +
1003 int copy_siginfo_to_user32(compat_siginfo_t __user *to, const siginfo_t *from)
1004 {
1005 int err;
1006 @@ -303,10 +316,12 @@ COMPAT_SYSCALL_DEFINE0(sigreturn)
1007 {
1008 struct pt_regs *regs = task_pt_regs(current);
1009 sigframe32 __user *frame = (sigframe32 __user *)regs->gprs[15];
1010 + compat_sigset_t cset;
1011 sigset_t set;
1012
1013 - if (__copy_from_user(&set.sig, &frame->sc.oldmask, _SIGMASK_COPY_SIZE32))
1014 + if (__copy_from_user(&cset.sig, &frame->sc.oldmask, _SIGMASK_COPY_SIZE32))
1015 goto badframe;
1016 + sigset32_to_sigset(cset.sig, set.sig);
1017 set_current_blocked(&set);
1018 if (restore_sigregs32(regs, &frame->sregs))
1019 goto badframe;
1020 @@ -323,10 +338,12 @@ COMPAT_SYSCALL_DEFINE0(rt_sigreturn)
1021 {
1022 struct pt_regs *regs = task_pt_regs(current);
1023 rt_sigframe32 __user *frame = (rt_sigframe32 __user *)regs->gprs[15];
1024 + compat_sigset_t cset;
1025 sigset_t set;
1026
1027 - if (__copy_from_user(&set, &frame->uc.uc_sigmask, sizeof(set)))
1028 + if (__copy_from_user(&cset, &frame->uc.uc_sigmask, sizeof(cset)))
1029 goto badframe;
1030 + sigset32_to_sigset(cset.sig, set.sig);
1031 set_current_blocked(&set);
1032 if (compat_restore_altstack(&frame->uc.uc_stack))
1033 goto badframe;
1034 @@ -407,7 +424,7 @@ static int setup_frame32(struct ksignal *ksig, sigset_t *set,
1035 return -EFAULT;
1036
1037 /* Create struct sigcontext32 on the signal stack */
1038 - memcpy(&sc.oldmask, &set->sig, _SIGMASK_COPY_SIZE32);
1039 + sigset_to_sigset32(set->sig, sc.oldmask);
1040 sc.sregs = (__u32)(unsigned long __force) &frame->sregs;
1041 if (__copy_to_user(&frame->sc, &sc, sizeof(frame->sc)))
1042 return -EFAULT;
1043 @@ -468,6 +485,7 @@ static int setup_frame32(struct ksignal *ksig, sigset_t *set,
1044 static int setup_rt_frame32(struct ksignal *ksig, sigset_t *set,
1045 struct pt_regs *regs)
1046 {
1047 + compat_sigset_t cset;
1048 rt_sigframe32 __user *frame;
1049 unsigned long restorer;
1050 size_t frame_size;
1051 @@ -515,11 +533,12 @@ static int setup_rt_frame32(struct ksignal *ksig, sigset_t *set,
1052 store_sigregs();
1053
1054 /* Create ucontext on the signal stack. */
1055 + sigset_to_sigset32(set->sig, cset.sig);
1056 if (__put_user(uc_flags, &frame->uc.uc_flags) ||
1057 __put_user(0, &frame->uc.uc_link) ||
1058 __compat_save_altstack(&frame->uc.uc_stack, regs->gprs[15]) ||
1059 save_sigregs32(regs, &frame->uc.uc_mcontext) ||
1060 - __copy_to_user(&frame->uc.uc_sigmask, set, sizeof(*set)) ||
1061 + __copy_to_user(&frame->uc.uc_sigmask, &cset, sizeof(cset)) ||
1062 save_sigregs_ext32(regs, &frame->uc.uc_mcontext_ext))
1063 return -EFAULT;
1064
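
The two helpers just split and rejoin the 64-bit mask into the pair of 32-bit words the compat frame stores; a worked value makes the layout explicit:

    unsigned long set64 = 0x123456789abcdef0UL;

    compat_sigset_word set32[2];
    set32[0] = (compat_sigset_word)set64;          /* 0x9abcdef0: signals  1..32 */
    set32[1] = (compat_sigset_word)(set64 >> 32);  /* 0x12345678: signals 33..64 */

    unsigned long again = (unsigned long)set32[0]
                        | ((unsigned long)set32[1] << 32);   /* == set64 */

The explicit conversion matters because s390 is big-endian: copying the native unsigned long straight into the 32-bit frame, as the old __copy_from_user()/__copy_to_user() calls did, lands the two halves in the opposite order from what 31-bit userspace expects.
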
1065 diff --git a/arch/x86/crypto/ghash-clmulni-intel_glue.c b/arch/x86/crypto/ghash-clmulni-intel_glue.c
1066 index 8253d85aa165..de1d72e3ec59 100644
1067 --- a/arch/x86/crypto/ghash-clmulni-intel_glue.c
1068 +++ b/arch/x86/crypto/ghash-clmulni-intel_glue.c
1069 @@ -291,6 +291,7 @@ static struct ahash_alg ghash_async_alg = {
1070 .cra_name = "ghash",
1071 .cra_driver_name = "ghash-clmulni",
1072 .cra_priority = 400,
1073 + .cra_ctxsize = sizeof(struct ghash_async_ctx),
1074 .cra_flags = CRYPTO_ALG_TYPE_AHASH | CRYPTO_ALG_ASYNC,
1075 .cra_blocksize = GHASH_BLOCK_SIZE,
1076 .cra_type = &crypto_ahash_type,
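
The one-line addition matters because the crypto core sizes each tfm's private context from cra_ctxsize; left at zero, the ghash-clmulni async wrapper had no room for its struct ghash_async_ctx and wrote its cryptd handle past the end of the allocation. Conceptually (a sketch of the core's behavior, not the literal allocator code):

    /* crypto core, conceptually: the context lives right behind the tfm */
    struct crypto_tfm *tfm;
    tfm = kzalloc(sizeof(*tfm) + alg->cra_ctxsize, GFP_KERNEL);
    /* ... */
    void *ctx = crypto_tfm_ctx(tfm);   /* just past tfm: cra_ctxsize bytes */

Any algorithm whose init/setkey dereferences a per-tfm context has to declare that size, which is all this hunk does.
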
1077 diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
1078 index ddd8d13a010f..26d5e05a7def 100644
1079 --- a/arch/x86/include/asm/processor.h
1080 +++ b/arch/x86/include/asm/processor.h
1081 @@ -852,7 +852,8 @@ extern unsigned long thread_saved_pc(struct task_struct *tsk);
1082 #define task_pt_regs(task) \
1083 ({ \
1084 struct pt_regs *__regs__; \
1085 - __regs__ = (struct pt_regs *)(KSTK_TOP(task_stack_page(task))-8); \
1086 + __regs__ = (struct pt_regs *)(KSTK_TOP(task_stack_page(task)) - \
1087 + TOP_OF_KERNEL_STACK_PADDING); \
1088 __regs__ - 1; \
1089 })
1090
1091 diff --git a/arch/x86/include/asm/thread_info.h b/arch/x86/include/asm/thread_info.h
1092 index 547e344a6dc6..c4d96943e666 100644
1093 --- a/arch/x86/include/asm/thread_info.h
1094 +++ b/arch/x86/include/asm/thread_info.h
1095 @@ -13,6 +13,33 @@
1096 #include <asm/types.h>
1097
1098 /*
1099 + * TOP_OF_KERNEL_STACK_PADDING is a number of unused bytes that we
1100 + * reserve at the top of the kernel stack. We do it because of a nasty
1101 + * 32-bit corner case. On x86_32, the hardware stack frame is
1102 + * variable-length. Except for vm86 mode, struct pt_regs assumes a
1103 + * maximum-length frame. If we enter from CPL 0, the top 8 bytes of
1104 + * pt_regs don't actually exist. Ordinarily this doesn't matter, but it
1105 + * does in at least one case:
1106 + *
1107 + * If we take an NMI early enough in SYSENTER, then we can end up with
1108 + * pt_regs that extends above sp0. On the way out, in the espfix code,
1109 + * we can read the saved SS value, but that value will be above sp0.
1110 + * Without this offset, that can result in a page fault. (We are
1111 + * careful that, in this case, the value we read doesn't matter.)
1112 + *
1113 + * In vm86 mode, the hardware frame is much longer still, but we neither
1114 + * access the extra members from NMI context, nor do we write such a
1115 + * frame at sp0 at all.
1116 + *
1117 + * x86_64 has a fixed-length stack frame.
1118 + */
1119 +#ifdef CONFIG_X86_32
1120 +# define TOP_OF_KERNEL_STACK_PADDING 8
1121 +#else
1122 +# define TOP_OF_KERNEL_STACK_PADDING 0
1123 +#endif
1124 +
1125 +/*
1126 * low level task data that entry.S needs immediate access to
1127 * - this struct should fit entirely inside of one cache line
1128 * - this struct shares the supervisor stack pages
1129 diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c
1130 index ba6cc041edb1..f7eef03fd4b3 100644
1131 --- a/arch/x86/kernel/apic/apic.c
1132 +++ b/arch/x86/kernel/apic/apic.c
1133 @@ -366,6 +366,13 @@ static void __setup_APIC_LVTT(unsigned int clocks, int oneshot, int irqen)
1134 apic_write(APIC_LVTT, lvtt_value);
1135
1136 if (lvtt_value & APIC_LVT_TIMER_TSCDEADLINE) {
1137 + /*
1138 + * See Intel SDM: TSC-Deadline Mode chapter. In xAPIC mode,
1139 + * writing to the APIC LVTT and TSC_DEADLINE MSR isn't serialized.
1140 + * According to Intel, MFENCE can do the serialization here.
1141 + */
1142 + asm volatile("mfence" : : : "memory");
1143 +
1144 printk_once(KERN_DEBUG "TSC deadline timer enabled\n");
1145 return;
1146 }
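
The fence orders the xAPIC-mode LVTT write against any later arming of the deadline timer; without it, per the SDM note quoted in the comment, a TSC_DEADLINE MSR write issued right after the mode switch can be lost. In sketch form (the deadline value is illustrative):

    apic_write(APIC_LVTT, lvtt_value);       /* switch LVTT to TSC-deadline mode  */
    asm volatile("mfence" : : : "memory");   /* serialize MMIO write vs. MSR write */
    wrmsrl(MSR_IA32_TSC_DEADLINE, deadline); /* now guaranteed to take effect      */
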
1147 diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
1148 index e757fcbe90db..88635b301694 100644
1149 --- a/arch/x86/kernel/cpu/common.c
1150 +++ b/arch/x86/kernel/cpu/common.c
1151 @@ -1405,6 +1405,12 @@ void cpu_init(void)
1152
1153 wait_for_master_cpu(cpu);
1154
1155 + /*
1156 + * Initialize the CR4 shadow before doing anything that could
1157 + * try to read it.
1158 + */
1159 + cr4_init_shadow();
1160 +
1161 show_ucode_info_early();
1162
1163 printk(KERN_INFO "Initializing CPU#%d\n", cpu);
1164 diff --git a/arch/x86/kernel/crash.c b/arch/x86/kernel/crash.c
1165 index f5ab56d14287..3af40315a127 100644
1166 --- a/arch/x86/kernel/crash.c
1167 +++ b/arch/x86/kernel/crash.c
1168 @@ -183,10 +183,9 @@ void native_machine_crash_shutdown(struct pt_regs *regs)
1169 }
1170
1171 #ifdef CONFIG_KEXEC_FILE
1172 -static int get_nr_ram_ranges_callback(unsigned long start_pfn,
1173 - unsigned long nr_pfn, void *arg)
1174 +static int get_nr_ram_ranges_callback(u64 start, u64 end, void *arg)
1175 {
1176 - int *nr_ranges = arg;
1177 + unsigned int *nr_ranges = arg;
1178
1179 (*nr_ranges)++;
1180 return 0;
1181 @@ -212,7 +211,7 @@ static void fill_up_crash_elf_data(struct crash_elf_data *ced,
1182
1183 ced->image = image;
1184
1185 - walk_system_ram_range(0, -1, &nr_ranges,
1186 + walk_system_ram_res(0, -1, &nr_ranges,
1187 get_nr_ram_ranges_callback);
1188
1189 ced->max_nr_ranges = nr_ranges;
1190 diff --git a/arch/x86/kernel/entry_32.S b/arch/x86/kernel/entry_32.S
1191 index 3dddb89ba320..fe611c4ae3ff 100644
1192 --- a/arch/x86/kernel/entry_32.S
1193 +++ b/arch/x86/kernel/entry_32.S
1194 @@ -398,7 +398,7 @@ sysenter_past_esp:
1195 * A tiny bit of offset fixup is necessary - 4*4 means the 4 words
1196 * pushed above; +8 corresponds to copy_thread's esp0 setting.
1197 */
1198 - pushl_cfi ((TI_sysenter_return)-THREAD_SIZE+8+4*4)(%esp)
1199 + pushl_cfi ((TI_sysenter_return)-THREAD_SIZE+TOP_OF_KERNEL_STACK_PADDING+4*4)(%esp)
1200 CFI_REL_OFFSET eip, 0
1201
1202 pushl_cfi %eax
1203 diff --git a/arch/x86/kernel/entry_64.S b/arch/x86/kernel/entry_64.S
1204 index fad5cd9d7c4b..a3255ca219ea 100644
1205 --- a/arch/x86/kernel/entry_64.S
1206 +++ b/arch/x86/kernel/entry_64.S
1207 @@ -1428,7 +1428,18 @@ END(error_exit)
1208 /* runs on exception stack */
1209 ENTRY(nmi)
1210 INTR_FRAME
1211 + /*
1212 + * Fix up the exception frame if we're on Xen.
1213 + * PARAVIRT_ADJUST_EXCEPTION_FRAME is guaranteed to push at most
1214 + * one value to the stack on native, so it may clobber the rdx
1215 + * scratch slot, but it won't clobber any of the important
1216 + * slots past it.
1217 + *
1218 + * Xen is a different story, because the Xen frame itself overlaps
1219 + * the "NMI executing" variable.
1220 + */
1221 PARAVIRT_ADJUST_EXCEPTION_FRAME
1222 +
1223 /*
1224 * We allow breakpoints in NMIs. If a breakpoint occurs, then
1225 * the iretq it performs will take us out of NMI context.
1226 @@ -1446,11 +1457,12 @@ ENTRY(nmi)
1227 * If the variable is not set and the stack is not the NMI
1228 * stack then:
1229 * o Set the special variable on the stack
1230 - * o Copy the interrupt frame into a "saved" location on the stack
1231 - * o Copy the interrupt frame into a "copy" location on the stack
1232 + * o Copy the interrupt frame into an "outermost" location on the
1233 + * stack
1234 + * o Copy the interrupt frame into an "iret" location on the stack
1235 * o Continue processing the NMI
1236 * If the variable is set or the previous stack is the NMI stack:
1237 - * o Modify the "copy" location to jump to the repeate_nmi
1238 + * o Modify the "iret" location to jump to the repeat_nmi
1239 * o return back to the first NMI
1240 *
1241 * Now on exit of the first NMI, we first clear the stack variable
1242 @@ -1479,9 +1491,11 @@ ENTRY(nmi)
1243 * we don't want to enable interrupts, because then we'll end
1244 * up in an awkward situation in which IRQs are on but NMIs
1245 * are off.
1246 + *
1247 + * We also must not push anything to the stack before switching
1248 + * stacks lest we corrupt the "NMI executing" variable.
1249 */
1250 -
1251 - SWAPGS
1252 + SWAPGS_UNSAFE_STACK
1253 cld
1254 movq %rsp, %rdx
1255 movq PER_CPU_VAR(kernel_stack), %rsp
1256 @@ -1530,38 +1544,101 @@ ENTRY(nmi)
1257
1258 .Lnmi_from_kernel:
1259 /*
1260 - * Check the special variable on the stack to see if NMIs are
1261 - * executing.
1262 + * Here's what our stack frame will look like:
1263 + * +---------------------------------------------------------+
1264 + * | original SS |
1265 + * | original Return RSP |
1266 + * | original RFLAGS |
1267 + * | original CS |
1268 + * | original RIP |
1269 + * +---------------------------------------------------------+
1270 + * | temp storage for rdx |
1271 + * +---------------------------------------------------------+
1272 + * | "NMI executing" variable |
1273 + * +---------------------------------------------------------+
1274 + * | iret SS } Copied from "outermost" frame |
1275 + * | iret Return RSP } on each loop iteration; overwritten |
1276 + * | iret RFLAGS } by a nested NMI to force another |
1277 + * | iret CS } iteration if needed. |
1278 + * | iret RIP } |
1279 + * +---------------------------------------------------------+
1280 + * | outermost SS } initialized in first_nmi; |
1281 + * | outermost Return RSP } will not be changed before |
1282 + * | outermost RFLAGS } NMI processing is done. |
1283 + * | outermost CS } Copied to "iret" frame on each |
1284 + * | outermost RIP } iteration. |
1285 + * +---------------------------------------------------------+
1286 + * | pt_regs |
1287 + * +---------------------------------------------------------+
1288 + *
1289 + * The "original" frame is used by hardware. Before re-enabling
1290 + * NMIs, we need to be done with it, and we need to leave enough
1291 + * space for the asm code here.
1292 + *
1293 + * We return by executing IRET while RSP points to the "iret" frame.
1294 + * That will either return for real or it will loop back into NMI
1295 + * processing.
1296 + *
1297 + * The "outermost" frame is copied to the "iret" frame on each
1298 + * iteration of the loop, so each iteration starts with the "iret"
1299 + * frame pointing to the final return target.
1300 + */
1301 +
1302 + /*
1303 + * Determine whether we're a nested NMI.
1304 + *
1305 + * If we interrupted kernel code between repeat_nmi and
1306 + * end_repeat_nmi, then we are a nested NMI. We must not
1307 + * modify the "iret" frame because it's being written by
1308 + * the outer NMI. That's okay; the outer NMI handler is
1309 + * about to call do_nmi anyway, so we can just
1310 + * resume the outer NMI.
1311 + */
1312 + movq $repeat_nmi, %rdx
1313 + cmpq 8(%rsp), %rdx
1314 + ja 1f
1315 + movq $end_repeat_nmi, %rdx
1316 + cmpq 8(%rsp), %rdx
1317 + ja nested_nmi_out
1318 +1:
1319 +
1320 + /*
1321 + * Now check "NMI executing". If it's set, then we're nested.
1322 + * This will not detect if we interrupted an outer NMI just
1323 + * before IRET.
1324 */
1325 cmpl $1, -8(%rsp)
1326 je nested_nmi
1327
1328 /*
1329 - * Now test if the previous stack was an NMI stack.
1330 - * We need the double check. We check the NMI stack to satisfy the
1331 - * race when the first NMI clears the variable before returning.
1332 - * We check the variable because the first NMI could be in a
1333 - * breakpoint routine using a breakpoint stack.
1334 + * Now test if the previous stack was an NMI stack. This covers
1335 + * the case where we interrupt an outer NMI after it clears
1336 + * "NMI executing" but before IRET. We need to be careful, though:
1337 + * there is one case in which RSP could point to the NMI stack
1338 + * despite there being no NMI active: naughty userspace controls
1339 + * RSP at the very beginning of the SYSCALL targets. We can
1340 + * pull a fast one on naughty userspace, though: we program
1341 + * SYSCALL to mask DF, so userspace cannot cause DF to be set
1342 + * if it controls the kernel's RSP. We set DF before we clear
1343 + * "NMI executing".
1344 */
1345 lea 6*8(%rsp), %rdx
1346 test_in_nmi rdx, 4*8(%rsp), nested_nmi, first_nmi
1347 +
1348 + /* Ah, it is within the NMI stack. */
1349 +
1350 + testb $(X86_EFLAGS_DF >> 8), (3*8 + 1)(%rsp)
1351 + jz first_nmi /* RSP was user controlled. */
1352 +
1353 + /* This is a nested NMI. */
1354 +
1355 CFI_REMEMBER_STATE
1356
1357 nested_nmi:
1358 /*
1359 - * Do nothing if we interrupted the fixup in repeat_nmi.
1360 - * It's about to repeat the NMI handler, so we are fine
1361 - * with ignoring this one.
1362 + * Modify the "iret" frame to point to repeat_nmi, forcing another
1363 + * iteration of NMI handling.
1364 */
1365 - movq $repeat_nmi, %rdx
1366 - cmpq 8(%rsp), %rdx
1367 - ja 1f
1368 - movq $end_repeat_nmi, %rdx
1369 - cmpq 8(%rsp), %rdx
1370 - ja nested_nmi_out
1371 -
1372 -1:
1373 - /* Set up the interrupted NMIs stack to jump to repeat_nmi */
1374 leaq -1*8(%rsp), %rdx
1375 movq %rdx, %rsp
1376 CFI_ADJUST_CFA_OFFSET 1*8
1377 @@ -1580,60 +1657,23 @@ nested_nmi_out:
1378 popq_cfi %rdx
1379 CFI_RESTORE rdx
1380
1381 - /* No need to check faults here */
1382 + /* We are returning to kernel mode, so this cannot result in a fault. */
1383 INTERRUPT_RETURN
1384
1385 CFI_RESTORE_STATE
1386 first_nmi:
1387 - /*
1388 - * Because nested NMIs will use the pushed location that we
1389 - * stored in rdx, we must keep that space available.
1390 - * Here's what our stack frame will look like:
1391 - * +-------------------------+
1392 - * | original SS |
1393 - * | original Return RSP |
1394 - * | original RFLAGS |
1395 - * | original CS |
1396 - * | original RIP |
1397 - * +-------------------------+
1398 - * | temp storage for rdx |
1399 - * +-------------------------+
1400 - * | NMI executing variable |
1401 - * +-------------------------+
1402 - * | copied SS |
1403 - * | copied Return RSP |
1404 - * | copied RFLAGS |
1405 - * | copied CS |
1406 - * | copied RIP |
1407 - * +-------------------------+
1408 - * | Saved SS |
1409 - * | Saved Return RSP |
1410 - * | Saved RFLAGS |
1411 - * | Saved CS |
1412 - * | Saved RIP |
1413 - * +-------------------------+
1414 - * | pt_regs |
1415 - * +-------------------------+
1416 - *
1417 - * The saved stack frame is used to fix up the copied stack frame
1418 - * that a nested NMI may change to make the interrupted NMI iret jump
1419 - * to the repeat_nmi. The original stack frame and the temp storage
1420 - * is also used by nested NMIs and can not be trusted on exit.
1421 - */
1422 - /* Do not pop rdx, nested NMIs will corrupt that part of the stack */
1423 + /* Restore rdx. */
1424 movq (%rsp), %rdx
1425 CFI_RESTORE rdx
1426
1427 - /* Set the NMI executing variable on the stack. */
1428 + /* Set "NMI executing" on the stack. */
1429 pushq_cfi $1
1430
1431 - /*
1432 - * Leave room for the "copied" frame
1433 - */
1434 + /* Leave room for the "iret" frame */
1435 subq $(5*8), %rsp
1436 CFI_ADJUST_CFA_OFFSET 5*8
1437
1438 - /* Copy the stack frame to the Saved frame */
1439 + /* Copy the "original" frame to the "outermost" frame */
1440 .rept 5
1441 pushq_cfi 11*8(%rsp)
1442 .endr
1443 @@ -1641,6 +1681,7 @@ first_nmi:
1444
1445 /* Everything up to here is safe from nested NMIs */
1446
1447 +repeat_nmi:
1448 /*
1449 * If there was a nested NMI, the first NMI's iret will return
1450 * here. But NMIs are still enabled and we can take another
1451 @@ -1649,16 +1690,21 @@ first_nmi:
1452 * it will just return, as we are about to repeat an NMI anyway.
1453 * This makes it safe to copy to the stack frame that a nested
1454 * NMI will update.
1455 - */
1456 -repeat_nmi:
1457 - /*
1458 - * Update the stack variable to say we are still in NMI (the update
1459 - * is benign for the non-repeat case, where 1 was pushed just above
1460 - * to this very stack slot).
1461 + *
1462 + * RSP is pointing to "outermost RIP". gsbase is unknown, but, if
1463 + * we're repeating an NMI, gsbase has the same value that it had on
1464 + * the first iteration. paranoid_entry will load the kernel
1465 + * gsbase if needed before we call do_nmi.
1466 + *
1467 + * Set "NMI executing" in case we came back here via IRET.
1468 */
1469 movq $1, 10*8(%rsp)
1470
1471 - /* Make another copy, this one may be modified by nested NMIs */
1472 + /*
1473 + * Copy the "outermost" frame to the "iret" frame. NMIs that nest
1474 + * here must not modify the "iret" frame while we're writing to
1475 + * it or it will end up containing garbage.
1476 + */
1477 addq $(10*8), %rsp
1478 CFI_ADJUST_CFA_OFFSET -10*8
1479 .rept 5
1480 @@ -1669,9 +1715,9 @@ repeat_nmi:
1481 end_repeat_nmi:
1482
1483 /*
1484 - * Everything below this point can be preempted by a nested
1485 - * NMI if the first NMI took an exception and reset our iret stack
1486 - * so that we repeat another NMI.
1487 + * Everything below this point can be preempted by a nested NMI.
1488 + * If this happens, then the inner NMI will change the "iret"
1489 + * frame to point back to repeat_nmi.
1490 */
1491 pushq_cfi $-1 /* ORIG_RAX: no syscall to restart */
1492 subq $ORIG_RAX-R15, %rsp
1493 @@ -1699,9 +1745,23 @@ nmi_restore:
1494 /* Pop the extra iret frame at once */
1495 RESTORE_ALL 6*8
1496
1497 - /* Clear the NMI executing stack variable */
1498 - movq $0, 5*8(%rsp)
1499 - jmp irq_return
1500 + /*
1501 + * Clear "NMI executing". Set DF first so that we can easily
1502 + * distinguish the remaining code between here and IRET from
1503 + * the SYSCALL entry and exit paths. On a native kernel, we
1504 + * could just inspect RIP, but, on paravirt kernels,
1505 + * INTERRUPT_RETURN can translate into a jump into a
1506 + * hypercall page.
1507 + */
1508 + std
1509 + movq $0, 5*8(%rsp) /* clear "NMI executing" */
1510 +
1511 + /*
1512 + * INTERRUPT_RETURN reads the "iret" frame and exits the NMI
1513 + * stack in a single instruction. We are returning to kernel
1514 + * mode, so this cannot result in a fault.
1515 + */
1516 + INTERRUPT_RETURN
1517 CFI_ENDPROC
1518 END(nmi)
1519
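The scheme above is easier to follow as a state machine: the outermost NMI saves a pristine copy of its iret frame, sets the "NMI executing" flag, and any NMI that nests merely redirects the outer handler's return to repeat_nmi. A rough userspace model of that control flow (all names are invented; the real logic manipulates stack frames in entry_64.S):

    #include <stdbool.h>
    #include <stdio.h>

    enum { RETURN_TO_TASK, RETURN_TO_REPEAT_NMI };

    struct nmi_state {
        bool executing;    /* the "NMI executing" stack variable */
        int  iret_target;  /* where the outer handler's IRET will land */
    };

    static void nmi(struct nmi_state *s)
    {
        if (s->executing) {                        /* nested NMI */
            s->iret_target = RETURN_TO_REPEAT_NMI; /* re-arm the outer one */
            return;
        }
        s->executing = true;
        do {                                       /* first_nmi / repeat_nmi */
            s->iret_target = RETURN_TO_TASK;
            /* ... do_nmi() would run here; a nested nmi() call during
             * this window sets iret_target back to RETURN_TO_REPEAT_NMI */
        } while (s->iret_target == RETURN_TO_REPEAT_NMI);
        s->executing = false;                      /* cleared just before IRET */
    }

    int main(void)
    {
        struct nmi_state s = { false, RETURN_TO_TASK };
        nmi(&s);
        printf("executing=%d target=%d\n", s.executing, s.iret_target);
        return 0;
    }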
1520 diff --git a/arch/x86/kernel/nmi.c b/arch/x86/kernel/nmi.c
1521 index 5c5ec7d28d9b..a701b49e8c87 100644
1522 --- a/arch/x86/kernel/nmi.c
1523 +++ b/arch/x86/kernel/nmi.c
1524 @@ -408,8 +408,8 @@ static void default_do_nmi(struct pt_regs *regs)
1525 NOKPROBE_SYMBOL(default_do_nmi);
1526
1527 /*
1528 - * NMIs can hit breakpoints which will cause it to lose its NMI context
1529 - * with the CPU when the breakpoint or page fault does an IRET.
1530 + * NMIs can page fault or hit breakpoints, which will cause them to lose
1531 + * their NMI context with the CPU when the breakpoint or page fault does an IRET.
1532 *
1533 * As a result, NMIs can nest if NMIs get unmasked due an IRET during
1534 * NMI processing. On x86_64, the asm glue protects us from nested NMIs
1535 diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
1536 index 548d25f00c90..8d12f0546dfc 100644
1537 --- a/arch/x86/kernel/paravirt.c
1538 +++ b/arch/x86/kernel/paravirt.c
1539 @@ -41,10 +41,18 @@
1540 #include <asm/timer.h>
1541 #include <asm/special_insns.h>
1542
1543 -/* nop stub */
1544 -void _paravirt_nop(void)
1545 -{
1546 -}
1547 +/*
1548 + * nop stub, which must not clobber anything *including the stack* to
1549 + * avoid confusing the entry prologues.
1550 + */
1551 +extern void _paravirt_nop(void);
1552 +asm (".pushsection .entry.text, \"ax\"\n"
1553 + ".global _paravirt_nop\n"
1554 + "_paravirt_nop:\n\t"
1555 + "ret\n\t"
1556 + ".size _paravirt_nop, . - _paravirt_nop\n\t"
1557 + ".type _paravirt_nop, @function\n\t"
1558 + ".popsection");
1559
1560 /* identity function, which can be inlined */
1561 u32 _paravirt_ident_32(u32 x)
1562 diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
1563 index 63a4b5092203..54cfd5ebd96c 100644
1564 --- a/arch/x86/kernel/process_64.c
1565 +++ b/arch/x86/kernel/process_64.c
1566 @@ -476,27 +476,59 @@ void set_personality_ia32(bool x32)
1567 }
1568 EXPORT_SYMBOL_GPL(set_personality_ia32);
1569
1570 +/*
1571 + * Called from fs/proc with a reference on @p to find the function
1572 + * which called into schedule(). This needs to be done carefully
1573 + * because the task might wake up and we might look at a stack
1574 + * changing under us.
1575 + */
1576 unsigned long get_wchan(struct task_struct *p)
1577 {
1578 - unsigned long stack;
1579 - u64 fp, ip;
1580 + unsigned long start, bottom, top, sp, fp, ip;
1581 int count = 0;
1582
1583 if (!p || p == current || p->state == TASK_RUNNING)
1584 return 0;
1585 - stack = (unsigned long)task_stack_page(p);
1586 - if (p->thread.sp < stack || p->thread.sp >= stack+THREAD_SIZE)
1587 +
1588 + start = (unsigned long)task_stack_page(p);
1589 + if (!start)
1590 + return 0;
1591 +
1592 + /*
1593 + * Layout of the stack page:
1594 + *
1595 + * ----------- topmax = start + THREAD_SIZE - sizeof(unsigned long)
1596 + * PADDING
1597 + * ----------- top = topmax - TOP_OF_KERNEL_STACK_PADDING
1598 + * stack
1599 + * ----------- bottom = start + sizeof(thread_info)
1600 + * thread_info
1601 + * ----------- start
1602 + *
1603 + * The task's stack pointer points at the location where the
1604 + * frame pointer is stored. The data on the stack is:
1605 + * ... IP FP ... IP FP
1606 + *
1607 + * We need to read FP and IP, so we need to adjust the upper
1608 + * bound by another unsigned long.
1609 + */
1610 + top = start + THREAD_SIZE - TOP_OF_KERNEL_STACK_PADDING;
1611 + top -= 2 * sizeof(unsigned long);
1612 + bottom = start + sizeof(struct thread_info);
1613 +
1614 + sp = READ_ONCE(p->thread.sp);
1615 + if (sp < bottom || sp > top)
1616 return 0;
1617 - fp = *(u64 *)(p->thread.sp);
1618 +
1619 + fp = READ_ONCE(*(unsigned long *)sp);
1620 do {
1621 - if (fp < (unsigned long)stack ||
1622 - fp >= (unsigned long)stack+THREAD_SIZE)
1623 + if (fp < bottom || fp > top)
1624 return 0;
1625 - ip = *(u64 *)(fp+8);
1626 + ip = READ_ONCE(*(unsigned long *)(fp + sizeof(unsigned long)));
1627 if (!in_sched_functions(ip))
1628 return ip;
1629 - fp = *(u64 *)fp;
1630 - } while (count++ < 16);
1631 + fp = READ_ONCE(*(unsigned long *)fp);
1632 + } while (count++ < 16 && p->state != TASK_RUNNING);
1633 return 0;
1634 }
1635
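Since the task can wake up mid-walk, every value loaded from its stack is range-checked against [bottom, top] before being dereferenced again. A self-contained model of the bounded frame-pointer walk (array indices stand in for addresses, and 1000 is an arbitrary stand-in for the in_sched_functions() address range; top already leaves room for the fp and fp+1 reads):

    #include <stdio.h>

    /* Each fake frame stores: [saved FP][return IP]; fp == 0 ends the chain. */
    static unsigned long walk(unsigned long *stack, unsigned long sp,
                              unsigned long bottom, unsigned long top)
    {
        unsigned long fp = stack[sp];
        for (int count = 0; count < 16; count++) {
            if (fp < bottom || fp > top)
                return 0;                 /* never chase a wild pointer */
            unsigned long ip = stack[fp + 1];
            if (ip >= 1000)               /* stand-in for !in_sched_functions(ip) */
                return ip;
            fp = stack[fp];
        }
        return 0;
    }

    int main(void)
    {
        /* frame at 8: FP=12, IP=500 (a "sched" function, keep walking);
         * frame at 12: FP=0, IP=4242 (the caller we want to report) */
        unsigned long stack[16] = { [4] = 8, [8] = 12, [9] = 500,
                                    [12] = 0, [13] = 4242 };
        printf("wchan ip = %lu\n", walk(stack, 4, 1, 14));   /* 4242 */
        return 0;
    }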
1636 diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
1637 index 505449700e0c..21187ebee7d0 100644
1638 --- a/arch/x86/kernel/tsc.c
1639 +++ b/arch/x86/kernel/tsc.c
1640 @@ -21,6 +21,7 @@
1641 #include <asm/hypervisor.h>
1642 #include <asm/nmi.h>
1643 #include <asm/x86_init.h>
1644 +#include <asm/geode.h>
1645
1646 unsigned int __read_mostly cpu_khz; /* TSC clocks / usec, not used here */
1647 EXPORT_SYMBOL(cpu_khz);
1648 @@ -1004,15 +1005,17 @@ EXPORT_SYMBOL_GPL(mark_tsc_unstable);
1649
1650 static void __init check_system_tsc_reliable(void)
1651 {
1652 -#ifdef CONFIG_MGEODE_LX
1653 - /* RTSC counts during suspend */
1654 +#if defined(CONFIG_MGEODEGX1) || defined(CONFIG_MGEODE_LX) || defined(CONFIG_X86_GENERIC)
1655 + if (is_geode_lx()) {
1656 + /* RTSC counts during suspend */
1657 #define RTSC_SUSP 0x100
1658 - unsigned long res_low, res_high;
1659 + unsigned long res_low, res_high;
1660
1661 - rdmsr_safe(MSR_GEODE_BUSCONT_CONF0, &res_low, &res_high);
1662 - /* Geode_LX - the OLPC CPU has a very reliable TSC */
1663 - if (res_low & RTSC_SUSP)
1664 - tsc_clocksource_reliable = 1;
1665 + rdmsr_safe(MSR_GEODE_BUSCONT_CONF0, &res_low, &res_high);
1666 + /* Geode_LX - the OLPC CPU has a very reliable TSC */
1667 + if (res_low & RTSC_SUSP)
1668 + tsc_clocksource_reliable = 1;
1669 + }
1670 #endif
1671 if (boot_cpu_has(X86_FEATURE_TSC_RELIABLE))
1672 tsc_clocksource_reliable = 1;
1673 diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
1674 index f696dedb0fa7..23875c26fb34 100644
1675 --- a/arch/x86/kvm/mmu.c
1676 +++ b/arch/x86/kvm/mmu.c
1677 @@ -372,12 +372,6 @@ static u64 __get_spte_lockless(u64 *sptep)
1678 {
1679 return ACCESS_ONCE(*sptep);
1680 }
1681 -
1682 -static bool __check_direct_spte_mmio_pf(u64 spte)
1683 -{
1684 - /* It is valid if the spte is zapped. */
1685 - return spte == 0ull;
1686 -}
1687 #else
1688 union split_spte {
1689 struct {
1690 @@ -493,23 +487,6 @@ retry:
1691
1692 return spte.spte;
1693 }
1694 -
1695 -static bool __check_direct_spte_mmio_pf(u64 spte)
1696 -{
1697 - union split_spte sspte = (union split_spte)spte;
1698 - u32 high_mmio_mask = shadow_mmio_mask >> 32;
1699 -
1700 - /* It is valid if the spte is zapped. */
1701 - if (spte == 0ull)
1702 - return true;
1703 -
1704 - /* It is valid if the spte is being zapped. */
1705 - if (sspte.spte_low == 0ull &&
1706 - (sspte.spte_high & high_mmio_mask) == high_mmio_mask)
1707 - return true;
1708 -
1709 - return false;
1710 -}
1711 #endif
1712
1713 static bool spte_is_locklessly_modifiable(u64 spte)
1714 @@ -3230,21 +3207,6 @@ static bool quickly_check_mmio_pf(struct kvm_vcpu *vcpu, u64 addr, bool direct)
1715 return vcpu_match_mmio_gva(vcpu, addr);
1716 }
1717
1718 -
1719 -/*
1720 - * On direct hosts, the last spte is only allows two states
1721 - * for mmio page fault:
1722 - * - It is the mmio spte
1723 - * - It is zapped or it is being zapped.
1724 - *
1725 - * This function completely checks the spte when the last spte
1726 - * is not the mmio spte.
1727 - */
1728 -static bool check_direct_spte_mmio_pf(u64 spte)
1729 -{
1730 - return __check_direct_spte_mmio_pf(spte);
1731 -}
1732 -
1733 static u64 walk_shadow_page_get_mmio_spte(struct kvm_vcpu *vcpu, u64 addr)
1734 {
1735 struct kvm_shadow_walk_iterator iterator;
1736 @@ -3287,13 +3249,6 @@ int handle_mmio_page_fault_common(struct kvm_vcpu *vcpu, u64 addr, bool direct)
1737 }
1738
1739 /*
1740 - * It's ok if the gva is remapped by other cpus on shadow guest,
1741 - * it's a BUG if the gfn is not a mmio page.
1742 - */
1743 - if (direct && !check_direct_spte_mmio_pf(spte))
1744 - return RET_MMIO_PF_BUG;
1745 -
1746 - /*
1747 * If the page table is zapped by other cpus, let CPU fault again on
1748 * the address.
1749 */
1750 diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
1751 index b83bff87408f..f98baebfa9a7 100644
1752 --- a/arch/x86/kvm/svm.c
1753 +++ b/arch/x86/kvm/svm.c
1754 @@ -512,7 +512,7 @@ static void skip_emulated_instruction(struct kvm_vcpu *vcpu)
1755 struct vcpu_svm *svm = to_svm(vcpu);
1756
1757 if (svm->vmcb->control.next_rip != 0) {
1758 - WARN_ON(!static_cpu_has(X86_FEATURE_NRIPS));
1759 + WARN_ON_ONCE(!static_cpu_has(X86_FEATURE_NRIPS));
1760 svm->next_rip = svm->vmcb->control.next_rip;
1761 }
1762
1763 diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
1764 index c8140e12816a..c23ab1ee3a9a 100644
1765 --- a/arch/x86/mm/init_32.c
1766 +++ b/arch/x86/mm/init_32.c
1767 @@ -137,6 +137,7 @@ page_table_range_init_count(unsigned long start, unsigned long end)
1768
1769 vaddr = start;
1770 pgd_idx = pgd_index(vaddr);
1771 + pmd_idx = pmd_index(vaddr);
1772
1773 for ( ; (pgd_idx < PTRS_PER_PGD) && (vaddr != end); pgd_idx++) {
1774 for (; (pmd_idx < PTRS_PER_PMD) && (vaddr != end);
1775 diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
1776 index 4e5dfec750fc..fa77995b62a4 100644
1777 --- a/arch/x86/mm/init_64.c
1778 +++ b/arch/x86/mm/init_64.c
1779 @@ -1144,7 +1144,7 @@ void mark_rodata_ro(void)
1780 * has been zapped already via cleanup_highmem().
1781 */
1782 all_end = roundup((unsigned long)_brk_end, PMD_SIZE);
1783 - set_memory_nx(rodata_start, (all_end - rodata_start) >> PAGE_SHIFT);
1784 + set_memory_nx(text_end, (all_end - text_end) >> PAGE_SHIFT);
1785
1786 rodata_test();
1787
1788 diff --git a/arch/x86/platform/efi/efi.c b/arch/x86/platform/efi/efi.c
1789 index dbc8627a5cdf..6d6080f3fa35 100644
1790 --- a/arch/x86/platform/efi/efi.c
1791 +++ b/arch/x86/platform/efi/efi.c
1792 @@ -670,6 +670,70 @@ out:
1793 }
1794
1795 /*
1796 + * Iterate the EFI memory map in reverse order because the regions
1797 + * will be mapped top-down. The end result is the same as if we had
1798 + * mapped things forward, but doesn't require us to change the
1799 + * existing implementation of efi_map_region().
1800 + */
1801 +static inline void *efi_map_next_entry_reverse(void *entry)
1802 +{
1803 + /* Initial call */
1804 + if (!entry)
1805 + return memmap.map_end - memmap.desc_size;
1806 +
1807 + entry -= memmap.desc_size;
1808 + if (entry < memmap.map)
1809 + return NULL;
1810 +
1811 + return entry;
1812 +}
1813 +
1814 +/*
1815 + * efi_map_next_entry - Return the next EFI memory map descriptor
1816 + * @entry: Previous EFI memory map descriptor
1817 + *
1818 + * This is a helper function to iterate over the EFI memory map, which
1819 + * we do in different orders depending on the current configuration.
1820 + *
1821 + * To begin traversing the memory map @entry must be %NULL.
1822 + *
1823 + * Returns %NULL when we reach the end of the memory map.
1824 + */
1825 +static void *efi_map_next_entry(void *entry)
1826 +{
1827 + if (!efi_enabled(EFI_OLD_MEMMAP) && efi_enabled(EFI_64BIT)) {
1828 + /*
1829 + * Starting in UEFI v2.5 the EFI_PROPERTIES_TABLE
1830 + * config table feature requires us to map all entries
1831 + * in the same order as they appear in the EFI memory
1832 + * map. That is to say, entry N must have a lower
1833 + * virtual address than entry N+1. This is because the
1834 + * firmware toolchain leaves relative references in
1835 + * the code/data sections, which are split and become
1836 + * separate EFI memory regions. Mapping things
1837 + * out-of-order leads to the firmware accessing
1838 + * unmapped addresses.
1839 + *
1840 + * Since we need to map things this way whether or not
1841 + * the kernel actually makes use of
1842 + * EFI_PROPERTIES_TABLE, let's just switch to this
1843 + * scheme by default for 64-bit.
1844 + */
1845 + return efi_map_next_entry_reverse(entry);
1846 + }
1847 +
1848 + /* Initial call */
1849 + if (!entry)
1850 + return memmap.map;
1851 +
1852 + entry += memmap.desc_size;
1853 + if (entry >= memmap.map_end)
1854 + return NULL;
1855 +
1856 + return entry;
1857 +}
1858 +
1859 +/*
1860 * Map the efi memory ranges of the runtime services and update new_mmap with
1861 * virtual addresses.
1862 */
1863 @@ -679,7 +743,8 @@ static void * __init efi_map_regions(int *count, int *pg_shift)
1864 unsigned long left = 0;
1865 efi_memory_desc_t *md;
1866
1867 - for (p = memmap.map; p < memmap.map_end; p += memmap.desc_size) {
1868 + p = NULL;
1869 + while ((p = efi_map_next_entry(p))) {
1870 md = p;
1871 if (!(md->attribute & EFI_MEMORY_RUNTIME)) {
1872 #ifdef CONFIG_X86_64
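The iterator style used here, NULL to start and NULL to stop, keeps efi_map_regions() agnostic about traversal direction. A userspace sketch of the reverse walk over packed fixed-size descriptors (types simplified; the memmap fields become a small struct):

    #include <stdio.h>

    struct map { char *start, *end; size_t desc_size; };

    /* Walk from the last descriptor back to the first; the traversal
     * begins and ends with NULL, mirroring efi_map_next_entry_reverse(). */
    static void *next_entry_reverse(struct map *m, void *entry)
    {
        if (!entry)                          /* initial call */
            return m->end - m->desc_size;    /* last descriptor */
        entry = (char *)entry - m->desc_size;
        return ((char *)entry < m->start) ? NULL : entry;
    }

    int main(void)
    {
        int descs[4] = { 10, 20, 30, 40 };
        struct map m = { (char *)descs, (char *)(descs + 4), sizeof(int) };
        for (void *p = NULL; (p = next_entry_reverse(&m, p)); )
            printf("%d\n", *(int *)p);       /* prints 40 30 20 10 */
        return 0;
    }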
1873 diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
1874 index d8d81d1aa1d5..7e365d231a93 100644
1875 --- a/arch/x86/xen/enlighten.c
1876 +++ b/arch/x86/xen/enlighten.c
1877 @@ -33,6 +33,10 @@
1878 #include <linux/memblock.h>
1879 #include <linux/edd.h>
1880
1881 +#ifdef CONFIG_KEXEC_CORE
1882 +#include <linux/kexec.h>
1883 +#endif
1884 +
1885 #include <xen/xen.h>
1886 #include <xen/events.h>
1887 #include <xen/interface/xen.h>
1888 @@ -1859,6 +1863,21 @@ static struct notifier_block xen_hvm_cpu_notifier = {
1889 .notifier_call = xen_hvm_cpu_notify,
1890 };
1891
1892 +#ifdef CONFIG_KEXEC_CORE
1893 +static void xen_hvm_shutdown(void)
1894 +{
1895 + native_machine_shutdown();
1896 + if (kexec_in_progress)
1897 + xen_reboot(SHUTDOWN_soft_reset);
1898 +}
1899 +
1900 +static void xen_hvm_crash_shutdown(struct pt_regs *regs)
1901 +{
1902 + native_machine_crash_shutdown(regs);
1903 + xen_reboot(SHUTDOWN_soft_reset);
1904 +}
1905 +#endif
1906 +
1907 static void __init xen_hvm_guest_init(void)
1908 {
1909 init_hvm_pv_info();
1910 @@ -1875,6 +1894,10 @@ static void __init xen_hvm_guest_init(void)
1911 x86_init.irqs.intr_init = xen_init_IRQ;
1912 xen_hvm_init_time_ops();
1913 xen_hvm_init_mmu_ops();
1914 +#ifdef CONFIG_KEXEC_CORE
1915 + machine_ops.shutdown = xen_hvm_shutdown;
1916 + machine_ops.crash_shutdown = xen_hvm_crash_shutdown;
1917 +#endif
1918 }
1919
1920 static bool xen_nopv = false;
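The hunk follows the usual ops-table override pattern: keep the native handler, wrap it, and install the wrapper during guest init so kexec also gets the soft-reset hypercall. A minimal sketch of that pattern (the detection flag and output strings are invented):

    #include <stdio.h>

    struct machine_ops { void (*shutdown)(void); };

    static void native_shutdown(void) { puts("native shutdown"); }

    static struct machine_ops machine_ops = { .shutdown = native_shutdown };

    static void hv_shutdown(void)
    {
        native_shutdown();              /* keep the native behaviour */
        puts("soft reset hypercall");   /* stand-in for xen_reboot() */
    }

    int main(void)
    {
        int running_as_guest = 1;       /* stand-in for HVM guest detection */
        if (running_as_guest)
            machine_ops.shutdown = hv_shutdown;
        machine_ops.shutdown();
        return 0;
    }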
1921 diff --git a/arch/xtensa/include/asm/traps.h b/arch/xtensa/include/asm/traps.h
1922 index 677bfcf4ee5d..28f33a8b7f5f 100644
1923 --- a/arch/xtensa/include/asm/traps.h
1924 +++ b/arch/xtensa/include/asm/traps.h
1925 @@ -25,30 +25,39 @@ static inline void spill_registers(void)
1926 {
1927 #if XCHAL_NUM_AREGS > 16
1928 __asm__ __volatile__ (
1929 - " call12 1f\n"
1930 + " call8 1f\n"
1931 " _j 2f\n"
1932 " retw\n"
1933 " .align 4\n"
1934 "1:\n"
1935 +#if XCHAL_NUM_AREGS == 32
1936 + " _entry a1, 32\n"
1937 + " addi a8, a0, 3\n"
1938 + " _entry a1, 16\n"
1939 + " mov a12, a12\n"
1940 + " retw\n"
1941 +#else
1942 " _entry a1, 48\n"
1943 - " addi a12, a0, 3\n"
1944 -#if XCHAL_NUM_AREGS > 32
1945 - " .rept (" __stringify(XCHAL_NUM_AREGS) " - 32) / 12\n"
1946 + " call12 1f\n"
1947 + " retw\n"
1948 + " .align 4\n"
1949 + "1:\n"
1950 + " .rept (" __stringify(XCHAL_NUM_AREGS) " - 16) / 12\n"
1951 " _entry a1, 48\n"
1952 " mov a12, a0\n"
1953 " .endr\n"
1954 -#endif
1955 - " _entry a1, 48\n"
1956 + " _entry a1, 16\n"
1957 #if XCHAL_NUM_AREGS % 12 == 0
1958 - " mov a8, a8\n"
1959 -#elif XCHAL_NUM_AREGS % 12 == 4
1960 " mov a12, a12\n"
1961 -#elif XCHAL_NUM_AREGS % 12 == 8
1962 +#elif XCHAL_NUM_AREGS % 12 == 4
1963 " mov a4, a4\n"
1964 +#elif XCHAL_NUM_AREGS % 12 == 8
1965 + " mov a8, a8\n"
1966 #endif
1967 " retw\n"
1968 +#endif
1969 "2:\n"
1970 - : : : "a12", "a13", "memory");
1971 + : : : "a8", "a9", "memory");
1972 #else
1973 __asm__ __volatile__ (
1974 " mov a12, a12\n"
1975 diff --git a/arch/xtensa/kernel/entry.S b/arch/xtensa/kernel/entry.S
1976 index 82bbfa5a05b3..a2a902140c4e 100644
1977 --- a/arch/xtensa/kernel/entry.S
1978 +++ b/arch/xtensa/kernel/entry.S
1979 @@ -568,12 +568,13 @@ user_exception_exit:
1980 * (if we have restored WSBITS-1 frames).
1981 */
1982
1983 +2:
1984 #if XCHAL_HAVE_THREADPTR
1985 l32i a3, a1, PT_THREADPTR
1986 wur a3, threadptr
1987 #endif
1988
1989 -2: j common_exception_exit
1990 + j common_exception_exit
1991
1992 /* This is the kernel exception exit.
1993 * We avoided to do a MOVSP when we entered the exception, but we
1994 @@ -1820,7 +1821,7 @@ ENDPROC(system_call)
1995 mov a12, a0
1996 .endr
1997 #endif
1998 - _entry a1, 48
1999 + _entry a1, 16
2000 #if XCHAL_NUM_AREGS % 12 == 0
2001 mov a8, a8
2002 #elif XCHAL_NUM_AREGS % 12 == 4
2003 @@ -1844,7 +1845,7 @@ ENDPROC(system_call)
2004
2005 ENTRY(_switch_to)
2006
2007 - entry a1, 16
2008 + entry a1, 48
2009
2010 mov a11, a3 # and 'next' (a3)
2011
2012 diff --git a/block/blk-mq-sysfs.c b/block/blk-mq-sysfs.c
2013 index 1630a20d5dcf..d477f83f29bf 100644
2014 --- a/block/blk-mq-sysfs.c
2015 +++ b/block/blk-mq-sysfs.c
2016 @@ -141,15 +141,26 @@ static ssize_t blk_mq_sysfs_completed_show(struct blk_mq_ctx *ctx, char *page)
2017
2018 static ssize_t sysfs_list_show(char *page, struct list_head *list, char *msg)
2019 {
2020 - char *start_page = page;
2021 struct request *rq;
2022 + int len = snprintf(page, PAGE_SIZE - 1, "%s:\n", msg);
2023 +
2024 + list_for_each_entry(rq, list, queuelist) {
2025 + const int rq_len = 2 * sizeof(rq) + 2;
2026 +
2027 + /* if the output will be truncated */
2028 + if (PAGE_SIZE - 1 < len + rq_len) {
2029 + /* back up if the remaining space can't hold '\t...\n' */
2030 + if (PAGE_SIZE - 1 < len + 5)
2031 + len -= rq_len;
2032 + len += snprintf(page + len, PAGE_SIZE - 1 - len,
2033 + "\t...\n");
2034 + break;
2035 + }
2036 + len += snprintf(page + len, PAGE_SIZE - 1 - len,
2037 + "\t%p\n", rq);
2038 + }
2039
2040 - page += sprintf(page, "%s:\n", msg);
2041 -
2042 - list_for_each_entry(rq, list, queuelist)
2043 - page += sprintf(page, "\t%p\n", rq);
2044 -
2045 - return page - start_page;
2046 + return len;
2047 }
2048
2049 static ssize_t blk_mq_sysfs_rq_list_show(struct blk_mq_ctx *ctx, char *page)
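A sysfs show() callback must fit its output into a single page, so the rewrite threads a running len through snprintf() and, on overflow, backs up far enough that a "\t...\n" marker always terminates the page cleanly. A userspace model with a deliberately tiny buffer standing in for PAGE_SIZE:

    #include <stdio.h>

    #define BUF_SIZE 64   /* stand-in for PAGE_SIZE */

    static int list_show(char *page, void **items, int n)
    {
        int len = snprintf(page, BUF_SIZE - 1, "%s:\n", "pending");

        for (int i = 0; i < n; i++) {
            const int item_len = 2 * sizeof(void *) + 2;  /* "\t%p\n" worst case */

            if (BUF_SIZE - 1 < len + item_len) {          /* would truncate */
                if (BUF_SIZE - 1 < len + 5)               /* no room for "\t...\n" */
                    len -= item_len;                      /* drop the last entry */
                len += snprintf(page + len, BUF_SIZE - 1 - len, "\t...\n");
                break;
            }
            len += snprintf(page + len, BUF_SIZE - 1 - len, "\t%p\n", items[i]);
        }
        return len;
    }

    int main(void)
    {
        char page[BUF_SIZE];
        void *items[8] = { page, page + 1, page + 2, page + 3,
                           page + 4, page + 5, page + 6, page + 7 };
        int len = list_show(page, items, 8);
        fwrite(page, 1, len, stdout);
        return 0;
    }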
2050 diff --git a/drivers/auxdisplay/ks0108.c b/drivers/auxdisplay/ks0108.c
2051 index 5b93852392b8..0d752851a1ee 100644
2052 --- a/drivers/auxdisplay/ks0108.c
2053 +++ b/drivers/auxdisplay/ks0108.c
2054 @@ -139,6 +139,7 @@ static int __init ks0108_init(void)
2055
2056 ks0108_pardevice = parport_register_device(ks0108_parport, KS0108_NAME,
2057 NULL, NULL, NULL, PARPORT_DEV_EXCL, NULL);
2058 + parport_put_port(ks0108_parport);
2059 if (ks0108_pardevice == NULL) {
2060 printk(KERN_ERR KS0108_NAME ": ERROR: "
2061 "parport didn't register new device\n");
2062 diff --git a/drivers/base/devres.c b/drivers/base/devres.c
2063 index c8a53d1e019f..875464690117 100644
2064 --- a/drivers/base/devres.c
2065 +++ b/drivers/base/devres.c
2066 @@ -297,10 +297,10 @@ void * devres_get(struct device *dev, void *new_res,
2067 if (!dr) {
2068 add_dr(dev, &new_dr->node);
2069 dr = new_dr;
2070 - new_dr = NULL;
2071 + new_res = NULL;
2072 }
2073 spin_unlock_irqrestore(&dev->devres_lock, flags);
2074 - devres_free(new_dr);
2075 + devres_free(new_res);
2076
2077 return dr->data;
2078 }
2079 diff --git a/drivers/base/node.c b/drivers/base/node.c
2080 index 472168cd0c97..74d45823890b 100644
2081 --- a/drivers/base/node.c
2082 +++ b/drivers/base/node.c
2083 @@ -396,6 +396,16 @@ int register_mem_sect_under_node(struct memory_block *mem_blk, int nid)
2084 for (pfn = sect_start_pfn; pfn <= sect_end_pfn; pfn++) {
2085 int page_nid;
2086
2087 + /*
2088 + * memory block could have several absent sections from start.
2089 + * skip pfn range from absent section
2090 + */
2091 + if (!pfn_present(pfn)) {
2092 + pfn = round_down(pfn + PAGES_PER_SECTION,
2093 + PAGES_PER_SECTION) - 1;
2094 + continue;
2095 + }
2096 +
2097 page_nid = get_nid_for_pfn(pfn);
2098 if (page_nid < 0)
2099 continue;
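The skip expression lands on the last pfn of the section containing pfn, so the enclosing for-loop's pfn++ resumes exactly at the next section boundary. A quick arithmetic check (section size is the usual x86_64 sparsemem value; the kernel's round_down() behaves the same way for power-of-two alignments):

    #include <stdio.h>

    #define PAGES_PER_SECTION 32768UL   /* 2^15 */

    /* round_down() as in the kernel: clear the low bits (y is a power of 2) */
    #define round_down(x, y) ((x) & ~((y) - 1))

    int main(void)
    {
        unsigned long pfn = 40000;      /* somewhere inside section 1 */

        pfn = round_down(pfn + PAGES_PER_SECTION, PAGES_PER_SECTION) - 1;
        printf("skip to %lu, next iteration starts at %lu\n", pfn, pfn + 1);
        /* prints: skip to 65535, next iteration starts at 65536 */
        return 0;
    }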
2100 diff --git a/drivers/base/platform.c b/drivers/base/platform.c
2101 index 360272cd4549..317e0e491ea0 100644
2102 --- a/drivers/base/platform.c
2103 +++ b/drivers/base/platform.c
2104 @@ -375,9 +375,7 @@ int platform_device_add(struct platform_device *pdev)
2105
2106 while (--i >= 0) {
2107 struct resource *r = &pdev->resource[i];
2108 - unsigned long type = resource_type(r);
2109 -
2110 - if (type == IORESOURCE_MEM || type == IORESOURCE_IO)
2111 + if (r->parent)
2112 release_resource(r);
2113 }
2114
2115 @@ -408,9 +406,7 @@ void platform_device_del(struct platform_device *pdev)
2116
2117 for (i = 0; i < pdev->num_resources; i++) {
2118 struct resource *r = &pdev->resource[i];
2119 - unsigned long type = resource_type(r);
2120 -
2121 - if (type == IORESOURCE_MEM || type == IORESOURCE_IO)
2122 + if (r->parent)
2123 release_resource(r);
2124 }
2125 }
2126 diff --git a/drivers/base/regmap/regmap-debugfs.c b/drivers/base/regmap/regmap-debugfs.c
2127 index 5799a0b9e6cc..c8941f39c919 100644
2128 --- a/drivers/base/regmap/regmap-debugfs.c
2129 +++ b/drivers/base/regmap/regmap-debugfs.c
2130 @@ -32,8 +32,7 @@ static DEFINE_MUTEX(regmap_debugfs_early_lock);
2131 /* Calculate the length of a fixed format */
2132 static size_t regmap_calc_reg_len(int max_val, char *buf, size_t buf_size)
2133 {
2134 - snprintf(buf, buf_size, "%x", max_val);
2135 - return strlen(buf);
2136 + return snprintf(NULL, 0, "%x", max_val);
2137 }
2138
2139 static ssize_t regmap_name_read_file(struct file *file,
2140 @@ -432,7 +431,7 @@ static ssize_t regmap_access_read_file(struct file *file,
2141 /* If we're in the region the user is trying to read */
2142 if (p >= *ppos) {
2143 /* ...but not beyond it */
2144 - if (buf_pos >= count - 1 - tot_len)
2145 + if (buf_pos + tot_len + 1 >= count)
2146 break;
2147
2148 /* Format the register */
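The regmap change leans on C99 snprintf() semantics: with a zero size the function writes nothing and returns the length the output would have needed, so no scratch buffer is required and the old truncation risk disappears. For instance:

    #include <stdio.h>

    int main(void)
    {
        int max_val = 0x3fff;

        /* C99: size 0 means "measure only" -- nothing is written */
        int len = snprintf(NULL, 0, "%x", max_val);
        printf("register width: %d hex digits\n", len);   /* 4 */
        return 0;
    }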
2149 diff --git a/drivers/block/zram/zcomp.c b/drivers/block/zram/zcomp.c
2150 index f1ff39a3d1c1..54d946a9eee6 100644
2151 --- a/drivers/block/zram/zcomp.c
2152 +++ b/drivers/block/zram/zcomp.c
2153 @@ -325,12 +325,14 @@ void zcomp_destroy(struct zcomp *comp)
2154 * allocate new zcomp and initialize it. return compressing
2155 * backend pointer or ERR_PTR if things went bad. ERR_PTR(-EINVAL)
2156 * if requested algorithm is not supported, ERR_PTR(-ENOMEM) in
2157 - * case of allocation error.
2158 + * case of allocation error, or any other error potentially
2159 + * returned by functions zcomp_strm_{multi,single}_create.
2160 */
2161 struct zcomp *zcomp_create(const char *compress, int max_strm)
2162 {
2163 struct zcomp *comp;
2164 struct zcomp_backend *backend;
2165 + int error;
2166
2167 backend = find_backend(compress);
2168 if (!backend)
2169 @@ -342,12 +344,12 @@ struct zcomp *zcomp_create(const char *compress, int max_strm)
2170
2171 comp->backend = backend;
2172 if (max_strm > 1)
2173 - zcomp_strm_multi_create(comp, max_strm);
2174 + error = zcomp_strm_multi_create(comp, max_strm);
2175 else
2176 - zcomp_strm_single_create(comp);
2177 - if (!comp->stream) {
2178 + error = zcomp_strm_single_create(comp);
2179 + if (error) {
2180 kfree(comp);
2181 - return ERR_PTR(-ENOMEM);
2182 + return ERR_PTR(error);
2183 }
2184 return comp;
2185 }
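Callers of zcomp_create() already use IS_ERR()/PTR_ERR(), so the fix simply forwards whichever negative errno the stream constructors produced instead of collapsing everything to -ENOMEM. A userspace model of the convention (the macros below are simplified stand-ins for the kernel's <linux/err.h>):

    #include <stdio.h>
    #include <errno.h>

    #define ERR_PTR(err)  ((void *)(long)(err))
    #define PTR_ERR(ptr)  ((long)(ptr))
    #define IS_ERR(ptr)   ((unsigned long)(ptr) >= (unsigned long)-4095)

    static void *create_backend(int want_error)
    {
        int error = want_error ? -EINVAL : 0;   /* pretend constructor */
        if (error)
            return ERR_PTR(error);              /* propagate the real code */
        static int backend;
        return &backend;
    }

    int main(void)
    {
        void *b = create_backend(1);
        if (IS_ERR(b))
            printf("create failed: %ld\n", PTR_ERR(b));   /* -22 */
        return 0;
    }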
2186 diff --git a/drivers/clk/ti/clk-3xxx.c b/drivers/clk/ti/clk-3xxx.c
2187 index 0d1750a8aea4..088930c3ee4b 100644
2188 --- a/drivers/clk/ti/clk-3xxx.c
2189 +++ b/drivers/clk/ti/clk-3xxx.c
2190 @@ -170,7 +170,6 @@ static struct ti_dt_clk omap3xxx_clks[] = {
2191 DT_CLK(NULL, "gpio2_ick", "gpio2_ick"),
2192 DT_CLK(NULL, "wdt3_ick", "wdt3_ick"),
2193 DT_CLK(NULL, "uart3_ick", "uart3_ick"),
2194 - DT_CLK(NULL, "uart4_ick", "uart4_ick"),
2195 DT_CLK(NULL, "gpt9_ick", "gpt9_ick"),
2196 DT_CLK(NULL, "gpt8_ick", "gpt8_ick"),
2197 DT_CLK(NULL, "gpt7_ick", "gpt7_ick"),
2198 @@ -313,6 +312,7 @@ static struct ti_dt_clk am35xx_clks[] = {
2199 static struct ti_dt_clk omap36xx_clks[] = {
2200 DT_CLK(NULL, "omap_192m_alwon_fck", "omap_192m_alwon_fck"),
2201 DT_CLK(NULL, "uart4_fck", "uart4_fck"),
2202 + DT_CLK(NULL, "uart4_ick", "uart4_ick"),
2203 { .node_name = NULL },
2204 };
2205
2206 diff --git a/drivers/clk/versatile/clk-sp810.c b/drivers/clk/versatile/clk-sp810.c
2207 index c6e86a9a2aa3..5122ef25f595 100644
2208 --- a/drivers/clk/versatile/clk-sp810.c
2209 +++ b/drivers/clk/versatile/clk-sp810.c
2210 @@ -128,8 +128,8 @@ static struct clk *clk_sp810_timerclken_of_get(struct of_phandle_args *clkspec,
2211 {
2212 struct clk_sp810 *sp810 = data;
2213
2214 - if (WARN_ON(clkspec->args_count != 1 || clkspec->args[0] >
2215 - ARRAY_SIZE(sp810->timerclken)))
2216 + if (WARN_ON(clkspec->args_count != 1 ||
2217 + clkspec->args[0] >= ARRAY_SIZE(sp810->timerclken)))
2218 return NULL;
2219
2220 return sp810->timerclken[clkspec->args[0]].clk;
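Classic off-by-one: an N-entry array accepts indices 0..N-1, so the guard must also reject clkspec->args[0] == ARRAY_SIZE(sp810->timerclken). Compare the two predicates:

    #include <stdio.h>

    #define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

    int main(void)
    {
        int timerclken[4];
        unsigned idx = 4;   /* one past the end */

        printf("old check rejects: %d\n", idx >  ARRAY_SIZE(timerclken)); /* 0 */
        printf("new check rejects: %d\n", idx >= ARRAY_SIZE(timerclken)); /* 1 */
        return 0;
    }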
2221 diff --git a/drivers/cpufreq/cpufreq-dt.c b/drivers/cpufreq/cpufreq-dt.c
2222 index f657c571b18e..bdb6951d0978 100644
2223 --- a/drivers/cpufreq/cpufreq-dt.c
2224 +++ b/drivers/cpufreq/cpufreq-dt.c
2225 @@ -240,7 +240,8 @@ static int cpufreq_init(struct cpufreq_policy *policy)
2226 rcu_read_unlock();
2227
2228 tol_uV = opp_uV * priv->voltage_tolerance / 100;
2229 - if (regulator_is_supported_voltage(cpu_reg, opp_uV,
2230 + if (regulator_is_supported_voltage(cpu_reg,
2231 + opp_uV - tol_uV,
2232 opp_uV + tol_uV)) {
2233 if (opp_uV < min_uV)
2234 min_uV = opp_uV;
2235 diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
2236 index d0d21363c63f..c1da6e121a67 100644
2237 --- a/drivers/cpufreq/intel_pstate.c
2238 +++ b/drivers/cpufreq/intel_pstate.c
2239 @@ -47,9 +47,9 @@ static inline int32_t mul_fp(int32_t x, int32_t y)
2240 return ((int64_t)x * (int64_t)y) >> FRAC_BITS;
2241 }
2242
2243 -static inline int32_t div_fp(int32_t x, int32_t y)
2244 +static inline int32_t div_fp(s64 x, s64 y)
2245 {
2246 - return div_s64((int64_t)x << FRAC_BITS, y);
2247 + return div64_s64((int64_t)x << FRAC_BITS, y);
2248 }
2249
2250 static inline int ceiling_fp(int32_t x)
2251 @@ -659,7 +659,7 @@ static inline void intel_pstate_set_sample_time(struct cpudata *cpu)
2252 static inline int32_t intel_pstate_get_scaled_busy(struct cpudata *cpu)
2253 {
2254 int32_t core_busy, max_pstate, current_pstate, sample_ratio;
2255 - u32 duration_us;
2256 + s64 duration_us;
2257 u32 sample_time;
2258
2259 core_busy = cpu->sample.core_pct_busy;
2260 @@ -668,8 +668,8 @@ static inline int32_t intel_pstate_get_scaled_busy(struct cpudata *cpu)
2261 core_busy = mul_fp(core_busy, div_fp(max_pstate, current_pstate));
2262
2263 sample_time = pid_params.sample_rate_ms * USEC_PER_MSEC;
2264 - duration_us = (u32) ktime_us_delta(cpu->sample.time,
2265 - cpu->last_sample_time);
2266 + duration_us = ktime_us_delta(cpu->sample.time,
2267 + cpu->last_sample_time);
2268 if (duration_us > sample_time * 3) {
2269 sample_ratio = div_fp(int_tofp(sample_time),
2270 int_tofp(duration_us));
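The underlying problem: ktime_us_delta() returns s64, and after a long sleep int_tofp(duration_us) no longer fits in the old int32_t div_fp() parameters, so the fixed-point value was silently truncated at the call boundary. A small demonstration (FRAC_BITS matches the driver):

    #include <stdio.h>
    #include <stdint.h>

    #define FRAC_BITS 8
    #define int_tofp(x) ((int64_t)(x) << FRAC_BITS)

    int main(void)
    {
        /* 30 s between samples, e.g. across a suspend/resume cycle */
        int64_t duration_us = 30LL * 1000 * 1000;
        int64_t fp = int_tofp(duration_us);   /* 7,680,000,000 in 8.8 fp */

        /* with int32_t parameters this value is truncated before the
         * division ever runs; widening to s64 (and div64_s64) avoids it */
        printf("as s64: %lld  truncated to int32: %d\n",
               (long long)fp, (int32_t)fp);
        return 0;
    }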
2271 diff --git a/drivers/dma/dw/core.c b/drivers/dma/dw/core.c
2272 index 244722170410..9da24a5e1561 100644
2273 --- a/drivers/dma/dw/core.c
2274 +++ b/drivers/dma/dw/core.c
2275 @@ -1579,7 +1579,6 @@ int dw_dma_probe(struct dw_dma_chip *chip, struct dw_dma_platform_data *pdata)
2276 INIT_LIST_HEAD(&dw->dma.channels);
2277 for (i = 0; i < nr_channels; i++) {
2278 struct dw_dma_chan *dwc = &dw->chan[i];
2279 - int r = nr_channels - i - 1;
2280
2281 dwc->chan.device = &dw->dma;
2282 dma_cookie_init(&dwc->chan);
2283 @@ -1591,7 +1590,7 @@ int dw_dma_probe(struct dw_dma_chip *chip, struct dw_dma_platform_data *pdata)
2284
2285 /* 7 is highest priority & 0 is lowest. */
2286 if (pdata->chan_priority == CHAN_PRIORITY_ASCENDING)
2287 - dwc->priority = r;
2288 + dwc->priority = nr_channels - i - 1;
2289 else
2290 dwc->priority = i;
2291
2292 @@ -1610,6 +1609,7 @@ int dw_dma_probe(struct dw_dma_chip *chip, struct dw_dma_platform_data *pdata)
2293 /* Hardware configuration */
2294 if (autocfg) {
2295 unsigned int dwc_params;
2296 + unsigned int r = DW_DMA_MAX_NR_CHANNELS - i - 1;
2297 void __iomem *addr = chip->regs + r * sizeof(u32);
2298
2299 dwc_params = dma_read_byaddr(addr, DWC_PARAMS);
2300 diff --git a/drivers/gpu/drm/drm_lock.c b/drivers/gpu/drm/drm_lock.c
2301 index f861361a635e..4924d381b664 100644
2302 --- a/drivers/gpu/drm/drm_lock.c
2303 +++ b/drivers/gpu/drm/drm_lock.c
2304 @@ -61,6 +61,9 @@ int drm_legacy_lock(struct drm_device *dev, void *data,
2305 struct drm_master *master = file_priv->master;
2306 int ret = 0;
2307
2308 + if (drm_core_check_feature(dev, DRIVER_MODESET))
2309 + return -EINVAL;
2310 +
2311 ++file_priv->lock_count;
2312
2313 if (lock->context == DRM_KERNEL_CONTEXT) {
2314 @@ -153,6 +156,9 @@ int drm_legacy_unlock(struct drm_device *dev, void *data, struct drm_file *file_
2315 struct drm_lock *lock = data;
2316 struct drm_master *master = file_priv->master;
2317
2318 + if (drm_core_check_feature(dev, DRIVER_MODESET))
2319 + return -EINVAL;
2320 +
2321 if (lock->context == DRM_KERNEL_CONTEXT) {
2322 DRM_ERROR("Process %d using kernel context %d\n",
2323 task_pid_nr(current), lock->context);
2324 diff --git a/drivers/gpu/drm/i915/intel_bios.c b/drivers/gpu/drm/i915/intel_bios.c
2325 index a4bd90f36a03..d96b152a6e04 100644
2326 --- a/drivers/gpu/drm/i915/intel_bios.c
2327 +++ b/drivers/gpu/drm/i915/intel_bios.c
2328 @@ -41,7 +41,7 @@ find_section(struct bdb_header *bdb, int section_id)
2329 {
2330 u8 *base = (u8 *)bdb;
2331 int index = 0;
2332 - u16 total, current_size;
2333 + u32 total, current_size;
2334 u8 current_id;
2335
2336 /* skip to first section */
2337 @@ -56,6 +56,10 @@ find_section(struct bdb_header *bdb, int section_id)
2338 current_size = *((u16 *)(base + index));
2339 index += 2;
2340
2341 + /* The MIPI Sequence Block v3+ has a separate size field. */
2342 + if (current_id == BDB_MIPI_SEQUENCE && *(base + index) >= 3)
2343 + current_size = *((const u32 *)(base + index + 1));
2344 +
2345 if (index + current_size > total)
2346 return NULL;
2347
2348 @@ -794,6 +798,12 @@ parse_mipi(struct drm_i915_private *dev_priv, struct bdb_header *bdb)
2349 return;
2350 }
2351
2352 + /* Fail gracefully for forward incompatible sequence block. */
2353 + if (sequence->version >= 3) {
2354 + DRM_ERROR("Unable to parse MIPI Sequence Block v3+\n");
2355 + return;
2356 + }
2357 +
2358 DRM_DEBUG_DRIVER("Found MIPI sequence block\n");
2359
2360 block_size = get_blocksize(sequence);
2361 diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c
2362 index 0d1396266857..011b22836fd6 100644
2363 --- a/drivers/gpu/drm/qxl/qxl_display.c
2364 +++ b/drivers/gpu/drm/qxl/qxl_display.c
2365 @@ -136,9 +136,35 @@ static int qxl_add_monitors_config_modes(struct drm_connector *connector,
2366 *pwidth = head->width;
2367 *pheight = head->height;
2368 drm_mode_probed_add(connector, mode);
2369 + /* remember the last custom size for mode validation */
2370 + qdev->monitors_config_width = mode->hdisplay;
2371 + qdev->monitors_config_height = mode->vdisplay;
2372 return 1;
2373 }
2374
2375 +static struct mode_size {
2376 + int w;
2377 + int h;
2378 +} common_modes[] = {
2379 + { 640, 480},
2380 + { 720, 480},
2381 + { 800, 600},
2382 + { 848, 480},
2383 + {1024, 768},
2384 + {1152, 768},
2385 + {1280, 720},
2386 + {1280, 800},
2387 + {1280, 854},
2388 + {1280, 960},
2389 + {1280, 1024},
2390 + {1440, 900},
2391 + {1400, 1050},
2392 + {1680, 1050},
2393 + {1600, 1200},
2394 + {1920, 1080},
2395 + {1920, 1200}
2396 +};
2397 +
2398 static int qxl_add_common_modes(struct drm_connector *connector,
2399 unsigned pwidth,
2400 unsigned pheight)
2401 @@ -146,29 +172,6 @@ static int qxl_add_common_modes(struct drm_connector *connector,
2402 struct drm_device *dev = connector->dev;
2403 struct drm_display_mode *mode = NULL;
2404 int i;
2405 - struct mode_size {
2406 - int w;
2407 - int h;
2408 - } common_modes[] = {
2409 - { 640, 480},
2410 - { 720, 480},
2411 - { 800, 600},
2412 - { 848, 480},
2413 - {1024, 768},
2414 - {1152, 768},
2415 - {1280, 720},
2416 - {1280, 800},
2417 - {1280, 854},
2418 - {1280, 960},
2419 - {1280, 1024},
2420 - {1440, 900},
2421 - {1400, 1050},
2422 - {1680, 1050},
2423 - {1600, 1200},
2424 - {1920, 1080},
2425 - {1920, 1200}
2426 - };
2427 -
2428 for (i = 0; i < ARRAY_SIZE(common_modes); i++) {
2429 mode = drm_cvt_mode(dev, common_modes[i].w, common_modes[i].h,
2430 60, false, false, false);
2431 @@ -598,7 +601,7 @@ static int qxl_crtc_mode_set(struct drm_crtc *crtc,
2432 adjusted_mode->hdisplay,
2433 adjusted_mode->vdisplay);
2434
2435 - if (qcrtc->index == 0)
2436 + if (bo->is_primary == false)
2437 recreate_primary = true;
2438
2439 if (bo->surf.stride * bo->surf.height > qdev->vram_size) {
2440 @@ -806,11 +809,22 @@ static int qxl_conn_get_modes(struct drm_connector *connector)
2441 static int qxl_conn_mode_valid(struct drm_connector *connector,
2442 struct drm_display_mode *mode)
2443 {
2444 + struct drm_device *ddev = connector->dev;
2445 + struct qxl_device *qdev = ddev->dev_private;
2446 + int i;
2447 +
2448 /* TODO: is this called for user defined modes? (xrandr --add-mode)
2449 * TODO: check that the mode fits in the framebuffer */
2450 - DRM_DEBUG("%s: %dx%d status=%d\n", mode->name, mode->hdisplay,
2451 - mode->vdisplay, mode->status);
2452 - return MODE_OK;
2453 +
2454 + if (qdev->monitors_config_width == mode->hdisplay &&
2455 + qdev->monitors_config_height == mode->vdisplay)
2456 + return MODE_OK;
2457 +
2458 + for (i = 0; i < ARRAY_SIZE(common_modes); i++) {
2459 + if (common_modes[i].w == mode->hdisplay && common_modes[i].h == mode->vdisplay)
2460 + return MODE_OK;
2461 + }
2462 + return MODE_BAD;
2463 }
2464
2465 static struct drm_encoder *qxl_best_encoder(struct drm_connector *connector)
2466 @@ -855,13 +869,15 @@ static enum drm_connector_status qxl_conn_detect(
2467 drm_connector_to_qxl_output(connector);
2468 struct drm_device *ddev = connector->dev;
2469 struct qxl_device *qdev = ddev->dev_private;
2470 - int connected;
2471 + bool connected = false;
2472
2473 /* The first monitor is always connected */
2474 - connected = (output->index == 0) ||
2475 - (qdev->client_monitors_config &&
2476 - qdev->client_monitors_config->count > output->index &&
2477 - qxl_head_enabled(&qdev->client_monitors_config->heads[output->index]));
2478 + if (!qdev->client_monitors_config) {
2479 + if (output->index == 0)
2480 + connected = true;
2481 + } else
2482 + connected = qdev->client_monitors_config->count > output->index &&
2483 + qxl_head_enabled(&qdev->client_monitors_config->heads[output->index]);
2484
2485 DRM_DEBUG("#%d connected: %d\n", output->index, connected);
2486 if (!connected)
2487 diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h
2488 index 7c6cafe21f5f..e66143cc1a7a 100644
2489 --- a/drivers/gpu/drm/qxl/qxl_drv.h
2490 +++ b/drivers/gpu/drm/qxl/qxl_drv.h
2491 @@ -325,6 +325,8 @@ struct qxl_device {
2492 struct work_struct fb_work;
2493
2494 struct drm_property *hotplug_mode_update_property;
2495 + int monitors_config_width;
2496 + int monitors_config_height;
2497 };
2498
2499 /* forward declaration for QXL_INFO_IO */
2500 diff --git a/drivers/gpu/drm/radeon/atombios_encoders.c b/drivers/gpu/drm/radeon/atombios_encoders.c
2501 index b8cd7975f797..d8a5db204a81 100644
2502 --- a/drivers/gpu/drm/radeon/atombios_encoders.c
2503 +++ b/drivers/gpu/drm/radeon/atombios_encoders.c
2504 @@ -1586,8 +1586,9 @@ radeon_atom_encoder_dpms_avivo(struct drm_encoder *encoder, int mode)
2505 } else
2506 atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args);
2507 if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) {
2508 - args.ucAction = ATOM_LCD_BLON;
2509 - atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args);
2510 + struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv;
2511 +
2512 + atombios_set_backlight_level(radeon_encoder, dig->backlight_level);
2513 }
2514 break;
2515 case DRM_MODE_DPMS_STANDBY:
2516 @@ -1668,8 +1669,7 @@ radeon_atom_encoder_dpms_dig(struct drm_encoder *encoder, int mode)
2517 atombios_dig_encoder_setup(encoder, ATOM_ENCODER_CMD_DP_VIDEO_ON, 0);
2518 }
2519 if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT))
2520 - atombios_dig_transmitter_setup(encoder,
2521 - ATOM_TRANSMITTER_ACTION_LCD_BLON, 0, 0);
2522 + atombios_set_backlight_level(radeon_encoder, dig->backlight_level);
2523 if (ext_encoder)
2524 atombios_external_encoder_setup(encoder, ext_encoder, ATOM_ENABLE);
2525 break;
2526 diff --git a/drivers/gpu/drm/radeon/radeon_combios.c b/drivers/gpu/drm/radeon/radeon_combios.c
2527 index c097d3a82bda..a9b01bcf7d0a 100644
2528 --- a/drivers/gpu/drm/radeon/radeon_combios.c
2529 +++ b/drivers/gpu/drm/radeon/radeon_combios.c
2530 @@ -3387,6 +3387,14 @@ void radeon_combios_asic_init(struct drm_device *dev)
2531 rdev->pdev->subsystem_device == 0x30ae)
2532 return;
2533
2534 + /* quirk for rs4xx HP Compaq dc5750 Small Form Factor to make it resume
2535 + * - it hangs on resume inside the dynclk 1 table.
2536 + */
2537 + if (rdev->family == CHIP_RS480 &&
2538 + rdev->pdev->subsystem_vendor == 0x103c &&
2539 + rdev->pdev->subsystem_device == 0x280a)
2540 + return;
2541 +
2542 /* DYN CLK 1 */
2543 table = combios_get_table_offset(dev, COMBIOS_DYN_CLK_1_TABLE);
2544 if (table)
2545 diff --git a/drivers/gpu/drm/radeon/radeon_connectors.c b/drivers/gpu/drm/radeon/radeon_connectors.c
2546 index 26baa9c05f6c..15f09068ac00 100644
2547 --- a/drivers/gpu/drm/radeon/radeon_connectors.c
2548 +++ b/drivers/gpu/drm/radeon/radeon_connectors.c
2549 @@ -72,6 +72,11 @@ void radeon_connector_hotplug(struct drm_connector *connector)
2550 if (!radeon_hpd_sense(rdev, radeon_connector->hpd.hpd)) {
2551 drm_helper_connector_dpms(connector, DRM_MODE_DPMS_OFF);
2552 } else if (radeon_dp_needs_link_train(radeon_connector)) {
2553 + /* Don't try to start link training before we
2554 + * have the dpcd */
2555 + if (!radeon_dp_getdpcd(radeon_connector))
2556 + return;
2557 +
2558 /* set it to OFF so that drm_helper_connector_dpms()
2559 * won't return immediately since the current state
2560 * is ON at this point.
2561 diff --git a/drivers/hid/usbhid/hid-core.c b/drivers/hid/usbhid/hid-core.c
2562 index ca6849a0121e..97342ebc7de7 100644
2563 --- a/drivers/hid/usbhid/hid-core.c
2564 +++ b/drivers/hid/usbhid/hid-core.c
2565 @@ -164,7 +164,7 @@ static void hid_io_error(struct hid_device *hid)
2566 if (time_after(jiffies, usbhid->stop_retry)) {
2567
2568 /* Retries failed, so do a port reset unless we lack bandwidth*/
2569 - if (test_bit(HID_NO_BANDWIDTH, &usbhid->iofl)
2570 + if (!test_bit(HID_NO_BANDWIDTH, &usbhid->iofl)
2571 && !test_and_set_bit(HID_RESET_PENDING, &usbhid->iofl)) {
2572
2573 schedule_work(&usbhid->reset_work);
2574 diff --git a/drivers/hwmon/nct6775.c b/drivers/hwmon/nct6775.c
2575 index 6461964f49a8..3aa958b5d45d 100644
2576 --- a/drivers/hwmon/nct6775.c
2577 +++ b/drivers/hwmon/nct6775.c
2578 @@ -350,6 +350,10 @@ static const u16 NCT6775_REG_TEMP_CRIT[ARRAY_SIZE(nct6775_temp_label) - 1]
2579
2580 /* NCT6776 specific data */
2581
2582 +/* STEP_UP_TIME and STEP_DOWN_TIME regs are swapped for all chips but NCT6775 */
2583 +#define NCT6776_REG_FAN_STEP_UP_TIME NCT6775_REG_FAN_STEP_DOWN_TIME
2584 +#define NCT6776_REG_FAN_STEP_DOWN_TIME NCT6775_REG_FAN_STEP_UP_TIME
2585 +
2586 static const s8 NCT6776_ALARM_BITS[] = {
2587 0, 1, 2, 3, 8, 21, 20, 16, /* in0.. in7 */
2588 17, -1, -1, -1, -1, -1, -1, /* in8..in14 */
2589 @@ -3476,8 +3480,8 @@ static int nct6775_probe(struct platform_device *pdev)
2590 data->REG_FAN_PULSES = NCT6776_REG_FAN_PULSES;
2591 data->FAN_PULSE_SHIFT = NCT6775_FAN_PULSE_SHIFT;
2592 data->REG_FAN_TIME[0] = NCT6775_REG_FAN_STOP_TIME;
2593 - data->REG_FAN_TIME[1] = NCT6775_REG_FAN_STEP_UP_TIME;
2594 - data->REG_FAN_TIME[2] = NCT6775_REG_FAN_STEP_DOWN_TIME;
2595 + data->REG_FAN_TIME[1] = NCT6776_REG_FAN_STEP_UP_TIME;
2596 + data->REG_FAN_TIME[2] = NCT6776_REG_FAN_STEP_DOWN_TIME;
2597 data->REG_TOLERANCE_H = NCT6776_REG_TOLERANCE_H;
2598 data->REG_PWM[0] = NCT6775_REG_PWM;
2599 data->REG_PWM[1] = NCT6775_REG_FAN_START_OUTPUT;
2600 @@ -3548,8 +3552,8 @@ static int nct6775_probe(struct platform_device *pdev)
2601 data->REG_FAN_PULSES = NCT6779_REG_FAN_PULSES;
2602 data->FAN_PULSE_SHIFT = NCT6775_FAN_PULSE_SHIFT;
2603 data->REG_FAN_TIME[0] = NCT6775_REG_FAN_STOP_TIME;
2604 - data->REG_FAN_TIME[1] = NCT6775_REG_FAN_STEP_UP_TIME;
2605 - data->REG_FAN_TIME[2] = NCT6775_REG_FAN_STEP_DOWN_TIME;
2606 + data->REG_FAN_TIME[1] = NCT6776_REG_FAN_STEP_UP_TIME;
2607 + data->REG_FAN_TIME[2] = NCT6776_REG_FAN_STEP_DOWN_TIME;
2608 data->REG_TOLERANCE_H = NCT6776_REG_TOLERANCE_H;
2609 data->REG_PWM[0] = NCT6775_REG_PWM;
2610 data->REG_PWM[1] = NCT6775_REG_FAN_START_OUTPUT;
2611 @@ -3624,8 +3628,8 @@ static int nct6775_probe(struct platform_device *pdev)
2612 data->REG_FAN_PULSES = NCT6779_REG_FAN_PULSES;
2613 data->FAN_PULSE_SHIFT = NCT6775_FAN_PULSE_SHIFT;
2614 data->REG_FAN_TIME[0] = NCT6775_REG_FAN_STOP_TIME;
2615 - data->REG_FAN_TIME[1] = NCT6775_REG_FAN_STEP_UP_TIME;
2616 - data->REG_FAN_TIME[2] = NCT6775_REG_FAN_STEP_DOWN_TIME;
2617 + data->REG_FAN_TIME[1] = NCT6776_REG_FAN_STEP_UP_TIME;
2618 + data->REG_FAN_TIME[2] = NCT6776_REG_FAN_STEP_DOWN_TIME;
2619 data->REG_TOLERANCE_H = NCT6776_REG_TOLERANCE_H;
2620 data->REG_PWM[0] = NCT6775_REG_PWM;
2621 data->REG_PWM[1] = NCT6775_REG_FAN_START_OUTPUT;
2622 diff --git a/drivers/iio/imu/adis16480.c b/drivers/iio/imu/adis16480.c
2623 index 989605dd6f78..b94bfd3f595b 100644
2624 --- a/drivers/iio/imu/adis16480.c
2625 +++ b/drivers/iio/imu/adis16480.c
2626 @@ -110,6 +110,10 @@
2627 struct adis16480_chip_info {
2628 unsigned int num_channels;
2629 const struct iio_chan_spec *channels;
2630 + unsigned int gyro_max_val;
2631 + unsigned int gyro_max_scale;
2632 + unsigned int accel_max_val;
2633 + unsigned int accel_max_scale;
2634 };
2635
2636 struct adis16480 {
2637 @@ -497,19 +501,21 @@ static int adis16480_set_filter_freq(struct iio_dev *indio_dev,
2638 static int adis16480_read_raw(struct iio_dev *indio_dev,
2639 const struct iio_chan_spec *chan, int *val, int *val2, long info)
2640 {
2641 + struct adis16480 *st = iio_priv(indio_dev);
2642 +
2643 switch (info) {
2644 case IIO_CHAN_INFO_RAW:
2645 return adis_single_conversion(indio_dev, chan, 0, val);
2646 case IIO_CHAN_INFO_SCALE:
2647 switch (chan->type) {
2648 case IIO_ANGL_VEL:
2649 - *val = 0;
2650 - *val2 = IIO_DEGREE_TO_RAD(20000); /* 0.02 degree/sec */
2651 - return IIO_VAL_INT_PLUS_MICRO;
2652 + *val = st->chip_info->gyro_max_scale;
2653 + *val2 = st->chip_info->gyro_max_val;
2654 + return IIO_VAL_FRACTIONAL;
2655 case IIO_ACCEL:
2656 - *val = 0;
2657 - *val2 = IIO_G_TO_M_S_2(800); /* 0.8 mg */
2658 - return IIO_VAL_INT_PLUS_MICRO;
2659 + *val = st->chip_info->accel_max_scale;
2660 + *val2 = st->chip_info->accel_max_val;
2661 + return IIO_VAL_FRACTIONAL;
2662 case IIO_MAGN:
2663 *val = 0;
2664 *val2 = 100; /* 0.0001 gauss */
2665 @@ -674,18 +680,39 @@ static const struct adis16480_chip_info adis16480_chip_info[] = {
2666 [ADIS16375] = {
2667 .channels = adis16485_channels,
2668 .num_channels = ARRAY_SIZE(adis16485_channels),
2669 + /*
2670 + * storing the value in rad/degree and the scale in degrees
2671 + * gives us the result in rad and better precision than
2672 + * storing the scale directly in rad.
2673 + */
2674 + .gyro_max_val = IIO_RAD_TO_DEGREE(22887),
2675 + .gyro_max_scale = 300,
2676 + .accel_max_val = IIO_M_S_2_TO_G(21973),
2677 + .accel_max_scale = 18,
2678 },
2679 [ADIS16480] = {
2680 .channels = adis16480_channels,
2681 .num_channels = ARRAY_SIZE(adis16480_channels),
2682 + .gyro_max_val = IIO_RAD_TO_DEGREE(22500),
2683 + .gyro_max_scale = 450,
2684 + .accel_max_val = IIO_M_S_2_TO_G(12500),
2685 + .accel_max_scale = 5,
2686 },
2687 [ADIS16485] = {
2688 .channels = adis16485_channels,
2689 .num_channels = ARRAY_SIZE(adis16485_channels),
2690 + .gyro_max_val = IIO_RAD_TO_DEGREE(22500),
2691 + .gyro_max_scale = 450,
2692 + .accel_max_val = IIO_M_S_2_TO_G(20000),
2693 + .accel_max_scale = 5,
2694 },
2695 [ADIS16488] = {
2696 .channels = adis16480_channels,
2697 .num_channels = ARRAY_SIZE(adis16480_channels),
2698 + .gyro_max_val = IIO_RAD_TO_DEGREE(22500),
2699 + .gyro_max_scale = 450,
2700 + .accel_max_val = IIO_M_S_2_TO_G(22500),
2701 + .accel_max_scale = 18,
2702 },
2703 };
2704
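As the comment in the ADIS16375 entry explains, keeping the numerator in degrees and the denominator as the raw count converted to degrees lets IIO_VAL_FRACTIONAL deliver the scale in radians without rounding at table-definition time. What userspace ends up computing for the ADIS16480 gyro (val2 below is my approximate expansion of IIO_RAD_TO_DEGREE(22500)):

    #include <stdio.h>

    int main(void)
    {
        /* IIO_VAL_FRACTIONAL: reported scale = val / val2.
         * val = 450 (full scale in deg/s); val2 ~ 22500 rad in degrees,
         * about 1289155. The degree units cancel, leaving rad/s per LSB. */
        double val = 450.0, val2 = 1289155.0;
        printf("gyro scale ~ %.9f rad/s per LSB\n", val / val2);
        return 0;
    }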
2705 diff --git a/drivers/iio/industrialio-buffer.c b/drivers/iio/industrialio-buffer.c
2706 index f971f79103ec..25c68de393ad 100644
2707 --- a/drivers/iio/industrialio-buffer.c
2708 +++ b/drivers/iio/industrialio-buffer.c
2709 @@ -93,7 +93,7 @@ unsigned int iio_buffer_poll(struct file *filp,
2710 struct iio_buffer *rb = indio_dev->buffer;
2711
2712 if (!indio_dev->info)
2713 - return -ENODEV;
2714 + return 0;
2715
2716 poll_wait(filp, &rb->pollq, wait);
2717 if (iio_buffer_data_available(rb))
2718 diff --git a/drivers/iio/industrialio-event.c b/drivers/iio/industrialio-event.c
2719 index 35c02aeec75e..158a760ada12 100644
2720 --- a/drivers/iio/industrialio-event.c
2721 +++ b/drivers/iio/industrialio-event.c
2722 @@ -84,7 +84,7 @@ static unsigned int iio_event_poll(struct file *filep,
2723 unsigned int events = 0;
2724
2725 if (!indio_dev->info)
2726 - return -ENODEV;
2727 + return events;
2728
2729 poll_wait(filep, &ev_int->wait, wait);
2730
2731 diff --git a/drivers/infiniband/core/uverbs.h b/drivers/infiniband/core/uverbs.h
2732 index 643c08a025a5..1c74d89fd2ad 100644
2733 --- a/drivers/infiniband/core/uverbs.h
2734 +++ b/drivers/infiniband/core/uverbs.h
2735 @@ -85,7 +85,7 @@
2736 */
2737
2738 struct ib_uverbs_device {
2739 - struct kref ref;
2740 + atomic_t refcount;
2741 int num_comp_vectors;
2742 struct completion comp;
2743 struct device *dev;
2744 @@ -94,6 +94,7 @@ struct ib_uverbs_device {
2745 struct cdev cdev;
2746 struct rb_root xrcd_tree;
2747 struct mutex xrcd_tree_mutex;
2748 + struct kobject kobj;
2749 };
2750
2751 struct ib_uverbs_event_file {
2752 diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
2753 index 63a9f04bdb6c..f3748311d79b 100644
2754 --- a/drivers/infiniband/core/uverbs_cmd.c
2755 +++ b/drivers/infiniband/core/uverbs_cmd.c
2756 @@ -2204,6 +2204,12 @@ ssize_t ib_uverbs_post_send(struct ib_uverbs_file *file,
2757 next->send_flags = user_wr->send_flags;
2758
2759 if (is_ud) {
2760 + if (next->opcode != IB_WR_SEND &&
2761 + next->opcode != IB_WR_SEND_WITH_IMM) {
2762 + ret = -EINVAL;
2763 + goto out_put;
2764 + }
2765 +
2766 next->wr.ud.ah = idr_read_ah(user_wr->wr.ud.ah,
2767 file->ucontext);
2768 if (!next->wr.ud.ah) {
2769 @@ -2243,9 +2249,11 @@ ssize_t ib_uverbs_post_send(struct ib_uverbs_file *file,
2770 user_wr->wr.atomic.compare_add;
2771 next->wr.atomic.swap = user_wr->wr.atomic.swap;
2772 next->wr.atomic.rkey = user_wr->wr.atomic.rkey;
2773 + case IB_WR_SEND:
2774 break;
2775 default:
2776 - break;
2777 + ret = -EINVAL;
2778 + goto out_put;
2779 }
2780 }
2781
2782 diff --git a/drivers/infiniband/core/uverbs_main.c b/drivers/infiniband/core/uverbs_main.c
2783 index 71ab83fde472..d3abb7ea2dee 100644
2784 --- a/drivers/infiniband/core/uverbs_main.c
2785 +++ b/drivers/infiniband/core/uverbs_main.c
2786 @@ -128,14 +128,18 @@ static int (*uverbs_ex_cmd_table[])(struct ib_uverbs_file *file,
2787 static void ib_uverbs_add_one(struct ib_device *device);
2788 static void ib_uverbs_remove_one(struct ib_device *device);
2789
2790 -static void ib_uverbs_release_dev(struct kref *ref)
2791 +static void ib_uverbs_release_dev(struct kobject *kobj)
2792 {
2793 struct ib_uverbs_device *dev =
2794 - container_of(ref, struct ib_uverbs_device, ref);
2795 + container_of(kobj, struct ib_uverbs_device, kobj);
2796
2797 - complete(&dev->comp);
2798 + kfree(dev);
2799 }
2800
2801 +static struct kobj_type ib_uverbs_dev_ktype = {
2802 + .release = ib_uverbs_release_dev,
2803 +};
2804 +
2805 static void ib_uverbs_release_event_file(struct kref *ref)
2806 {
2807 struct ib_uverbs_event_file *file =
2808 @@ -299,13 +303,19 @@ static int ib_uverbs_cleanup_ucontext(struct ib_uverbs_file *file,
2809 return context->device->dealloc_ucontext(context);
2810 }
2811
2812 +static void ib_uverbs_comp_dev(struct ib_uverbs_device *dev)
2813 +{
2814 + complete(&dev->comp);
2815 +}
2816 +
2817 static void ib_uverbs_release_file(struct kref *ref)
2818 {
2819 struct ib_uverbs_file *file =
2820 container_of(ref, struct ib_uverbs_file, ref);
2821
2822 module_put(file->device->ib_dev->owner);
2823 - kref_put(&file->device->ref, ib_uverbs_release_dev);
2824 + if (atomic_dec_and_test(&file->device->refcount))
2825 + ib_uverbs_comp_dev(file->device);
2826
2827 kfree(file);
2828 }
2829 @@ -739,9 +749,7 @@ static int ib_uverbs_open(struct inode *inode, struct file *filp)
2830 int ret;
2831
2832 dev = container_of(inode->i_cdev, struct ib_uverbs_device, cdev);
2833 - if (dev)
2834 - kref_get(&dev->ref);
2835 - else
2836 + if (!atomic_inc_not_zero(&dev->refcount))
2837 return -ENXIO;
2838
2839 if (!try_module_get(dev->ib_dev->owner)) {
2840 @@ -762,6 +770,7 @@ static int ib_uverbs_open(struct inode *inode, struct file *filp)
2841 mutex_init(&file->mutex);
2842
2843 filp->private_data = file;
2844 + kobject_get(&dev->kobj);
2845
2846 return nonseekable_open(inode, filp);
2847
2848 @@ -769,13 +778,16 @@ err_module:
2849 module_put(dev->ib_dev->owner);
2850
2851 err:
2852 - kref_put(&dev->ref, ib_uverbs_release_dev);
2853 + if (atomic_dec_and_test(&dev->refcount))
2854 + ib_uverbs_comp_dev(dev);
2855 +
2856 return ret;
2857 }
2858
2859 static int ib_uverbs_close(struct inode *inode, struct file *filp)
2860 {
2861 struct ib_uverbs_file *file = filp->private_data;
2862 + struct ib_uverbs_device *dev = file->device;
2863
2864 ib_uverbs_cleanup_ucontext(file, file->ucontext);
2865
2866 @@ -783,6 +795,7 @@ static int ib_uverbs_close(struct inode *inode, struct file *filp)
2867 kref_put(&file->async_file->ref, ib_uverbs_release_event_file);
2868
2869 kref_put(&file->ref, ib_uverbs_release_file);
2870 + kobject_put(&dev->kobj);
2871
2872 return 0;
2873 }
2874 @@ -878,10 +891,11 @@ static void ib_uverbs_add_one(struct ib_device *device)
2875 if (!uverbs_dev)
2876 return;
2877
2878 - kref_init(&uverbs_dev->ref);
2879 + atomic_set(&uverbs_dev->refcount, 1);
2880 init_completion(&uverbs_dev->comp);
2881 uverbs_dev->xrcd_tree = RB_ROOT;
2882 mutex_init(&uverbs_dev->xrcd_tree_mutex);
2883 + kobject_init(&uverbs_dev->kobj, &ib_uverbs_dev_ktype);
2884
2885 spin_lock(&map_lock);
2886 devnum = find_first_zero_bit(dev_map, IB_UVERBS_MAX_DEVICES);
2887 @@ -908,6 +922,7 @@ static void ib_uverbs_add_one(struct ib_device *device)
2888 cdev_init(&uverbs_dev->cdev, NULL);
2889 uverbs_dev->cdev.owner = THIS_MODULE;
2890 uverbs_dev->cdev.ops = device->mmap ? &uverbs_mmap_fops : &uverbs_fops;
2891 + uverbs_dev->cdev.kobj.parent = &uverbs_dev->kobj;
2892 kobject_set_name(&uverbs_dev->cdev.kobj, "uverbs%d", uverbs_dev->devnum);
2893 if (cdev_add(&uverbs_dev->cdev, base, 1))
2894 goto err_cdev;
2895 @@ -938,9 +953,10 @@ err_cdev:
2896 clear_bit(devnum, overflow_map);
2897
2898 err:
2899 - kref_put(&uverbs_dev->ref, ib_uverbs_release_dev);
2900 + if (atomic_dec_and_test(&uverbs_dev->refcount))
2901 + ib_uverbs_comp_dev(uverbs_dev);
2902 wait_for_completion(&uverbs_dev->comp);
2903 - kfree(uverbs_dev);
2904 + kobject_put(&uverbs_dev->kobj);
2905 return;
2906 }
2907
2908 @@ -960,9 +976,10 @@ static void ib_uverbs_remove_one(struct ib_device *device)
2909 else
2910 clear_bit(uverbs_dev->devnum - IB_UVERBS_MAX_DEVICES, overflow_map);
2911
2912 - kref_put(&uverbs_dev->ref, ib_uverbs_release_dev);
2913 + if (atomic_dec_and_test(&uverbs_dev->refcount))
2914 + ib_uverbs_comp_dev(uverbs_dev);
2915 wait_for_completion(&uverbs_dev->comp);
2916 - kfree(uverbs_dev);
2917 + kobject_put(&uverbs_dev->kobj);
2918 }
2919
2920 static char *uverbs_devnode(struct device *dev, umode_t *mode)
2921 diff --git a/drivers/infiniband/hw/mlx4/ah.c b/drivers/infiniband/hw/mlx4/ah.c
2922 index 2d8c3397774f..e65ee1947279 100644
2923 --- a/drivers/infiniband/hw/mlx4/ah.c
2924 +++ b/drivers/infiniband/hw/mlx4/ah.c
2925 @@ -147,9 +147,13 @@ int mlx4_ib_query_ah(struct ib_ah *ibah, struct ib_ah_attr *ah_attr)
2926 enum rdma_link_layer ll;
2927
2928 memset(ah_attr, 0, sizeof *ah_attr);
2929 - ah_attr->sl = be32_to_cpu(ah->av.ib.sl_tclass_flowlabel) >> 28;
2930 ah_attr->port_num = be32_to_cpu(ah->av.ib.port_pd) >> 24;
2931 ll = rdma_port_get_link_layer(ibah->device, ah_attr->port_num);
2932 + if (ll == IB_LINK_LAYER_ETHERNET)
2933 + ah_attr->sl = be32_to_cpu(ah->av.eth.sl_tclass_flowlabel) >> 29;
2934 + else
2935 + ah_attr->sl = be32_to_cpu(ah->av.ib.sl_tclass_flowlabel) >> 28;
2936 +
2937 ah_attr->dlid = ll == IB_LINK_LAYER_INFINIBAND ? be16_to_cpu(ah->av.ib.dlid) : 0;
2938 if (ah->av.ib.stat_rate)
2939 ah_attr->static_rate = ah->av.ib.stat_rate - MLX4_STAT_RATE_OFFSET;
2940 diff --git a/drivers/infiniband/hw/mlx4/sysfs.c b/drivers/infiniband/hw/mlx4/sysfs.c
2941 index cb4c66e723b5..89b43da1978d 100644
2942 --- a/drivers/infiniband/hw/mlx4/sysfs.c
2943 +++ b/drivers/infiniband/hw/mlx4/sysfs.c
2944 @@ -660,6 +660,8 @@ static int add_port(struct mlx4_ib_dev *dev, int port_num, int slave)
2945 struct mlx4_port *p;
2946 int i;
2947 int ret;
2948 + int is_eth = rdma_port_get_link_layer(&dev->ib_dev, port_num) ==
2949 + IB_LINK_LAYER_ETHERNET;
2950
2951 p = kzalloc(sizeof *p, GFP_KERNEL);
2952 if (!p)
2953 @@ -677,7 +679,8 @@ static int add_port(struct mlx4_ib_dev *dev, int port_num, int slave)
2954
2955 p->pkey_group.name = "pkey_idx";
2956 p->pkey_group.attrs =
2957 - alloc_group_attrs(show_port_pkey, store_port_pkey,
2958 + alloc_group_attrs(show_port_pkey,
2959 + is_eth ? NULL : store_port_pkey,
2960 dev->dev->caps.pkey_table_len[port_num]);
2961 if (!p->pkey_group.attrs) {
2962 ret = -ENOMEM;
2963 diff --git a/drivers/infiniband/hw/qib/qib_keys.c b/drivers/infiniband/hw/qib/qib_keys.c
2964 index 3b9afccaaade..eabe54738be6 100644
2965 --- a/drivers/infiniband/hw/qib/qib_keys.c
2966 +++ b/drivers/infiniband/hw/qib/qib_keys.c
2967 @@ -86,6 +86,10 @@ int qib_alloc_lkey(struct qib_mregion *mr, int dma_region)
2968 * unrestricted LKEY.
2969 */
2970 rkt->gen++;
2971 + /*
2972 + * bits are capped in qib_verbs.c to ensure enough bits
2973 + * for the generation number
2974 + */
2975 mr->lkey = (r << (32 - ib_qib_lkey_table_size)) |
2976 ((((1 << (24 - ib_qib_lkey_table_size)) - 1) & rkt->gen)
2977 << 8);
2978 diff --git a/drivers/infiniband/hw/qib/qib_verbs.c b/drivers/infiniband/hw/qib/qib_verbs.c
2979 index 9bcfbd842980..40afdfce2fbc 100644
2980 --- a/drivers/infiniband/hw/qib/qib_verbs.c
2981 +++ b/drivers/infiniband/hw/qib/qib_verbs.c
2982 @@ -40,6 +40,7 @@
2983 #include <linux/rculist.h>
2984 #include <linux/mm.h>
2985 #include <linux/random.h>
2986 +#include <linux/vmalloc.h>
2987
2988 #include "qib.h"
2989 #include "qib_common.h"
2990 @@ -2086,10 +2087,16 @@ int qib_register_ib_device(struct qib_devdata *dd)
2991 * the LKEY). The remaining bits act as a generation number or tag.
2992 */
2993 spin_lock_init(&dev->lk_table.lock);
2994 + /* ensure the generation is at least 4 bits; see qib_keys.c */
2995 + if (ib_qib_lkey_table_size > MAX_LKEY_TABLE_BITS) {
2996 + qib_dev_warn(dd, "lkey bits %u too large, reduced to %u\n",
2997 + ib_qib_lkey_table_size, MAX_LKEY_TABLE_BITS);
2998 + ib_qib_lkey_table_size = MAX_LKEY_TABLE_BITS;
2999 + }
3000 dev->lk_table.max = 1 << ib_qib_lkey_table_size;
3001 lk_tab_size = dev->lk_table.max * sizeof(*dev->lk_table.table);
3002 dev->lk_table.table = (struct qib_mregion __rcu **)
3003 - __get_free_pages(GFP_KERNEL, get_order(lk_tab_size));
3004 + vmalloc(lk_tab_size);
3005 if (dev->lk_table.table == NULL) {
3006 ret = -ENOMEM;
3007 goto err_lk;
3008 @@ -2262,7 +2269,7 @@ err_tx:
3009 sizeof(struct qib_pio_header),
3010 dev->pio_hdrs, dev->pio_hdrs_phys);
3011 err_hdrs:
3012 - free_pages((unsigned long) dev->lk_table.table, get_order(lk_tab_size));
3013 + vfree(dev->lk_table.table);
3014 err_lk:
3015 kfree(dev->qp_table);
3016 err_qpt:
3017 @@ -2316,8 +2323,7 @@ void qib_unregister_ib_device(struct qib_devdata *dd)
3018 sizeof(struct qib_pio_header),
3019 dev->pio_hdrs, dev->pio_hdrs_phys);
3020 lk_tab_size = dev->lk_table.max * sizeof(*dev->lk_table.table);
3021 - free_pages((unsigned long) dev->lk_table.table,
3022 - get_order(lk_tab_size));
3023 + vfree(dev->lk_table.table);
3024 kfree(dev->qp_table);
3025 }
3026
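Why vmalloc(): with the new 23-bit cap the lkey table can hold 2^23 pointers, and __get_free_pages() tops out at MAX_ORDER-1 contiguous pages (typically about 4 MiB with 4 KiB pages), whereas vmalloc() only needs virtually contiguous memory. The size in question:

    #include <stdio.h>

    int main(void)
    {
        unsigned bits = 23;                  /* MAX_LKEY_TABLE_BITS */
        unsigned long entries = 1UL << bits;
        unsigned long bytes = entries * sizeof(void *);

        /* 64 MiB of pointers on 64-bit -- far beyond the page
         * allocator's maximum contiguous allocation */
        printf("lkey table: %lu entries, %lu MiB\n", entries, bytes >> 20);
        return 0;
    }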
3027 diff --git a/drivers/infiniband/hw/qib/qib_verbs.h b/drivers/infiniband/hw/qib/qib_verbs.h
3028 index bfc8948fdd35..44ca28c83fe6 100644
3029 --- a/drivers/infiniband/hw/qib/qib_verbs.h
3030 +++ b/drivers/infiniband/hw/qib/qib_verbs.h
3031 @@ -647,6 +647,8 @@ struct qib_qpn_table {
3032 struct qpn_map map[QPNMAP_ENTRIES];
3033 };
3034
3035 +#define MAX_LKEY_TABLE_BITS 23
3036 +
3037 struct qib_lkey_table {
3038 spinlock_t lock; /* protect changes in this struct */
3039 u32 next; /* next unused index (speeds search) */
3040 diff --git a/drivers/infiniband/ulp/isert/ib_isert.c b/drivers/infiniband/ulp/isert/ib_isert.c
3041 index 7b8c29b295ac..357206a20017 100644
3042 --- a/drivers/infiniband/ulp/isert/ib_isert.c
3043 +++ b/drivers/infiniband/ulp/isert/ib_isert.c
3044 @@ -3110,9 +3110,16 @@ isert_get_dataout(struct iscsi_conn *conn, struct iscsi_cmd *cmd, bool recovery)
3045 static int
3046 isert_immediate_queue(struct iscsi_conn *conn, struct iscsi_cmd *cmd, int state)
3047 {
3048 - int ret;
3049 + struct isert_cmd *isert_cmd = iscsit_priv_cmd(cmd);
3050 + int ret = 0;
3051
3052 switch (state) {
3053 + case ISTATE_REMOVE:
3054 + spin_lock_bh(&conn->cmd_lock);
3055 + list_del_init(&cmd->i_conn_node);
3056 + spin_unlock_bh(&conn->cmd_lock);
3057 + isert_put_cmd(isert_cmd, true);
3058 + break;
3059 case ISTATE_SEND_NOPIN_WANT_RESPONSE:
3060 ret = isert_put_nopin(cmd, conn, false);
3061 break;
3062 diff --git a/drivers/input/evdev.c b/drivers/input/evdev.c
3063 index 8afa28e4570e..928dec311c2b 100644
3064 --- a/drivers/input/evdev.c
3065 +++ b/drivers/input/evdev.c
3066 @@ -239,19 +239,14 @@ static int evdev_flush(struct file *file, fl_owner_t id)
3067 {
3068 struct evdev_client *client = file->private_data;
3069 struct evdev *evdev = client->evdev;
3070 - int retval;
3071
3072 - retval = mutex_lock_interruptible(&evdev->mutex);
3073 - if (retval)
3074 - return retval;
3075 + mutex_lock(&evdev->mutex);
3076
3077 - if (!evdev->exist || client->revoked)
3078 - retval = -ENODEV;
3079 - else
3080 - retval = input_flush_device(&evdev->handle, file);
3081 + if (evdev->exist && !client->revoked)
3082 + input_flush_device(&evdev->handle, file);
3083
3084 mutex_unlock(&evdev->mutex);
3085 - return retval;
3086 + return 0;
3087 }
3088
3089 static void evdev_free(struct device *dev)
3090 diff --git a/drivers/macintosh/windfarm_core.c b/drivers/macintosh/windfarm_core.c
3091 index 3ee198b65843..cc7ece1712b5 100644
3092 --- a/drivers/macintosh/windfarm_core.c
3093 +++ b/drivers/macintosh/windfarm_core.c
3094 @@ -435,7 +435,7 @@ int wf_unregister_client(struct notifier_block *nb)
3095 {
3096 mutex_lock(&wf_lock);
3097 blocking_notifier_chain_unregister(&wf_client_list, nb);
3098 - wf_client_count++;
3099 + wf_client_count--;
3100 if (wf_client_count == 0)
3101 wf_stop_thread();
3102 mutex_unlock(&wf_lock);
3103 diff --git a/drivers/md/dm-cache-policy-cleaner.c b/drivers/md/dm-cache-policy-cleaner.c
3104 index b04d1f904d07..2eca9084defe 100644
3105 --- a/drivers/md/dm-cache-policy-cleaner.c
3106 +++ b/drivers/md/dm-cache-policy-cleaner.c
3107 @@ -434,7 +434,7 @@ static struct dm_cache_policy *wb_create(dm_cblock_t cache_size,
3108 static struct dm_cache_policy_type wb_policy_type = {
3109 .name = "cleaner",
3110 .version = {1, 0, 0},
3111 - .hint_size = 0,
3112 + .hint_size = 4,
3113 .owner = THIS_MODULE,
3114 .create = wb_create
3115 };
3116 diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
3117 index 07c0fa0fa284..e5d97da5f41e 100644
3118 --- a/drivers/md/dm-raid.c
3119 +++ b/drivers/md/dm-raid.c
3120 @@ -327,8 +327,7 @@ static int validate_region_size(struct raid_set *rs, unsigned long region_size)
3121 */
3122 if (min_region_size > (1 << 13)) {
3123 /* If not a power of 2, make it the next power of 2 */
3124 - if (min_region_size & (min_region_size - 1))
3125 - region_size = 1 << fls(region_size);
3126 + region_size = roundup_pow_of_two(min_region_size);
3127 DMINFO("Choosing default region size of %lu sectors",
3128 region_size);
3129 } else {
3130 diff --git a/drivers/md/md.c b/drivers/md/md.c
3131 index dd7a3701b99c..9c5d53f3e4c6 100644
3132 --- a/drivers/md/md.c
3133 +++ b/drivers/md/md.c
3134 @@ -5073,6 +5073,8 @@ EXPORT_SYMBOL_GPL(md_stop_writes);
3135 static void __md_stop(struct mddev *mddev)
3136 {
3137 mddev->ready = 0;
3138 + /* Ensure ->event_work is done */
3139 + flush_workqueue(md_misc_wq);
3140 mddev->pers->stop(mddev);
3141 if (mddev->pers->sync_request && mddev->to_remove == NULL)
3142 mddev->to_remove = &md_redundancy_group;
3143 diff --git a/drivers/md/persistent-data/dm-btree-internal.h b/drivers/md/persistent-data/dm-btree-internal.h
3144 index bf2b80d5c470..8731b6ea026b 100644
3145 --- a/drivers/md/persistent-data/dm-btree-internal.h
3146 +++ b/drivers/md/persistent-data/dm-btree-internal.h
3147 @@ -138,4 +138,10 @@ int lower_bound(struct btree_node *n, uint64_t key);
3148
3149 extern struct dm_block_validator btree_node_validator;
3150
3151 +/*
3152 + * Value type for upper levels of multi-level btrees.
3153 + */
3154 +extern void init_le64_type(struct dm_transaction_manager *tm,
3155 + struct dm_btree_value_type *vt);
3156 +
3157 #endif /* DM_BTREE_INTERNAL_H */
3158 diff --git a/drivers/md/persistent-data/dm-btree-remove.c b/drivers/md/persistent-data/dm-btree-remove.c
3159 index a03178e91a79..7c0d75547ccf 100644
3160 --- a/drivers/md/persistent-data/dm-btree-remove.c
3161 +++ b/drivers/md/persistent-data/dm-btree-remove.c
3162 @@ -544,14 +544,6 @@ static int remove_raw(struct shadow_spine *s, struct dm_btree_info *info,
3163 return r;
3164 }
3165
3166 -static struct dm_btree_value_type le64_type = {
3167 - .context = NULL,
3168 - .size = sizeof(__le64),
3169 - .inc = NULL,
3170 - .dec = NULL,
3171 - .equal = NULL
3172 -};
3173 -
3174 int dm_btree_remove(struct dm_btree_info *info, dm_block_t root,
3175 uint64_t *keys, dm_block_t *new_root)
3176 {
3177 @@ -559,12 +551,14 @@ int dm_btree_remove(struct dm_btree_info *info, dm_block_t root,
3178 int index = 0, r = 0;
3179 struct shadow_spine spine;
3180 struct btree_node *n;
3181 + struct dm_btree_value_type le64_vt;
3182
3183 + init_le64_type(info->tm, &le64_vt);
3184 init_shadow_spine(&spine, info);
3185 for (level = 0; level < info->levels; level++) {
3186 r = remove_raw(&spine, info,
3187 (level == last_level ?
3188 - &info->value_type : &le64_type),
3189 + &info->value_type : &le64_vt),
3190 root, keys[level], (unsigned *)&index);
3191 if (r < 0)
3192 break;
3193 diff --git a/drivers/md/persistent-data/dm-btree-spine.c b/drivers/md/persistent-data/dm-btree-spine.c
3194 index 1b5e13ec7f96..0dee514ba4c5 100644
3195 --- a/drivers/md/persistent-data/dm-btree-spine.c
3196 +++ b/drivers/md/persistent-data/dm-btree-spine.c
3197 @@ -249,3 +249,40 @@ int shadow_root(struct shadow_spine *s)
3198 {
3199 return s->root;
3200 }
3201 +
3202 +static void le64_inc(void *context, const void *value_le)
3203 +{
3204 + struct dm_transaction_manager *tm = context;
3205 + __le64 v_le;
3206 +
3207 + memcpy(&v_le, value_le, sizeof(v_le));
3208 + dm_tm_inc(tm, le64_to_cpu(v_le));
3209 +}
3210 +
3211 +static void le64_dec(void *context, const void *value_le)
3212 +{
3213 + struct dm_transaction_manager *tm = context;
3214 + __le64 v_le;
3215 +
3216 + memcpy(&v_le, value_le, sizeof(v_le));
3217 + dm_tm_dec(tm, le64_to_cpu(v_le));
3218 +}
3219 +
3220 +static int le64_equal(void *context, const void *value1_le, const void *value2_le)
3221 +{
3222 + __le64 v1_le, v2_le;
3223 +
3224 + memcpy(&v1_le, value1_le, sizeof(v1_le));
3225 + memcpy(&v2_le, value2_le, sizeof(v2_le));
3226 + return v1_le == v2_le;
3227 +}
3228 +
3229 +void init_le64_type(struct dm_transaction_manager *tm,
3230 + struct dm_btree_value_type *vt)
3231 +{
3232 + vt->context = tm;
3233 + vt->size = sizeof(__le64);
3234 + vt->inc = le64_inc;
3235 + vt->dec = le64_dec;
3236 + vt->equal = le64_equal;
3237 +}
3238 diff --git a/drivers/md/persistent-data/dm-btree.c b/drivers/md/persistent-data/dm-btree.c
3239 index fdd3793e22f9..c7726cebc495 100644
3240 --- a/drivers/md/persistent-data/dm-btree.c
3241 +++ b/drivers/md/persistent-data/dm-btree.c
3242 @@ -667,12 +667,7 @@ static int insert(struct dm_btree_info *info, dm_block_t root,
3243 struct btree_node *n;
3244 struct dm_btree_value_type le64_type;
3245
3246 - le64_type.context = NULL;
3247 - le64_type.size = sizeof(__le64);
3248 - le64_type.inc = NULL;
3249 - le64_type.dec = NULL;
3250 - le64_type.equal = NULL;
3251 -
3252 + init_le64_type(info->tm, &le64_type);
3253 init_shadow_spine(&spine, info);
3254
3255 for (level = 0; level < (info->levels - 1); level++) {
3256 diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
3257 index 32e282f4c83c..17eb76760bf5 100644
3258 --- a/drivers/md/raid10.c
3259 +++ b/drivers/md/raid10.c
3260 @@ -3581,6 +3581,7 @@ static struct r10conf *setup_conf(struct mddev *mddev)
3261 /* far_copies must be 1 */
3262 conf->prev.stride = conf->dev_sectors;
3263 }
3264 + conf->reshape_safe = conf->reshape_progress;
3265 spin_lock_init(&conf->device_lock);
3266 INIT_LIST_HEAD(&conf->retry_list);
3267
3268 @@ -3788,7 +3789,6 @@ static int run(struct mddev *mddev)
3269 }
3270 conf->offset_diff = min_offset_diff;
3271
3272 - conf->reshape_safe = conf->reshape_progress;
3273 clear_bit(MD_RECOVERY_SYNC, &mddev->recovery);
3274 clear_bit(MD_RECOVERY_CHECK, &mddev->recovery);
3275 set_bit(MD_RECOVERY_RESHAPE, &mddev->recovery);
3276 @@ -4135,6 +4135,7 @@ static int raid10_start_reshape(struct mddev *mddev)
3277 conf->reshape_progress = size;
3278 } else
3279 conf->reshape_progress = 0;
3280 + conf->reshape_safe = conf->reshape_progress;
3281 spin_unlock_irq(&conf->device_lock);
3282
3283 if (mddev->delta_disks && mddev->bitmap) {
3284 @@ -4201,6 +4202,7 @@ abort:
3285 rdev->new_data_offset = rdev->data_offset;
3286 smp_wmb();
3287 conf->reshape_progress = MaxSector;
3288 + conf->reshape_safe = MaxSector;
3289 mddev->reshape_position = MaxSector;
3290 spin_unlock_irq(&conf->device_lock);
3291 return ret;
3292 @@ -4555,6 +4557,7 @@ static void end_reshape(struct r10conf *conf)
3293 md_finish_reshape(conf->mddev);
3294 smp_wmb();
3295 conf->reshape_progress = MaxSector;
3296 + conf->reshape_safe = MaxSector;
3297 spin_unlock_irq(&conf->device_lock);
3298
3299 /* read-ahead size must cover two whole stripes, which is
3300 diff --git a/drivers/media/platform/omap3isp/isp.c b/drivers/media/platform/omap3isp/isp.c
3301 index 72265e58ca60..233eccc5c33e 100644
3302 --- a/drivers/media/platform/omap3isp/isp.c
3303 +++ b/drivers/media/platform/omap3isp/isp.c
3304 @@ -813,14 +813,14 @@ static int isp_pipeline_link_notify(struct media_link *link, u32 flags,
3305 int ret;
3306
3307 if (notification == MEDIA_DEV_NOTIFY_POST_LINK_CH &&
3308 - !(link->flags & MEDIA_LNK_FL_ENABLED)) {
3309 + !(flags & MEDIA_LNK_FL_ENABLED)) {
3310 /* Powering off entities is assumed to never fail. */
3311 isp_pipeline_pm_power(source, -sink_use);
3312 isp_pipeline_pm_power(sink, -source_use);
3313 return 0;
3314 }
3315
3316 - if (notification == MEDIA_DEV_NOTIFY_POST_LINK_CH &&
3317 + if (notification == MEDIA_DEV_NOTIFY_PRE_LINK_CH &&
3318 (flags & MEDIA_LNK_FL_ENABLED)) {
3319
3320 ret = isp_pipeline_pm_power(source, sink_use);
3321 diff --git a/drivers/media/rc/rc-main.c b/drivers/media/rc/rc-main.c
3322 index fc369b033484..b4ceda856939 100644
3323 --- a/drivers/media/rc/rc-main.c
3324 +++ b/drivers/media/rc/rc-main.c
3325 @@ -1191,9 +1191,6 @@ static int rc_dev_uevent(struct device *device, struct kobj_uevent_env *env)
3326 {
3327 struct rc_dev *dev = to_rc_dev(device);
3328
3329 - if (!dev || !dev->input_dev)
3330 - return -ENODEV;
3331 -
3332 if (dev->rc_map.name)
3333 ADD_HOTPLUG_VAR("NAME=%s", dev->rc_map.name);
3334 if (dev->driver_name)
3335 diff --git a/drivers/misc/cxl/pci.c b/drivers/misc/cxl/pci.c
3336 index eee4fd606dc1..cc55691dbea6 100644
3337 --- a/drivers/misc/cxl/pci.c
3338 +++ b/drivers/misc/cxl/pci.c
3339 @@ -987,8 +987,6 @@ static int cxl_probe(struct pci_dev *dev, const struct pci_device_id *id)
3340 int slice;
3341 int rc;
3342
3343 - pci_dev_get(dev);
3344 -
3345 if (cxl_verbose)
3346 dump_cxl_config_space(dev);
3347
3348 diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
3349 index 297b4f912c2d..9a8cb9cd852d 100644
3350 --- a/drivers/mmc/core/core.c
3351 +++ b/drivers/mmc/core/core.c
3352 @@ -314,8 +314,10 @@ EXPORT_SYMBOL(mmc_start_bkops);
3353 */
3354 static void mmc_wait_data_done(struct mmc_request *mrq)
3355 {
3356 - mrq->host->context_info.is_done_rcv = true;
3357 - wake_up_interruptible(&mrq->host->context_info.wait);
3358 + struct mmc_context_info *context_info = &mrq->host->context_info;
3359 +
3360 + context_info->is_done_rcv = true;
3361 + wake_up_interruptible(&context_info->wait);
3362 }
3363
3364 static void mmc_wait_done(struct mmc_request *mrq)
3365 diff --git a/drivers/mtd/nand/pxa3xx_nand.c b/drivers/mtd/nand/pxa3xx_nand.c
3366 index bc677362bc73..eac876732f97 100644
3367 --- a/drivers/mtd/nand/pxa3xx_nand.c
3368 +++ b/drivers/mtd/nand/pxa3xx_nand.c
3369 @@ -1465,6 +1465,9 @@ static int pxa3xx_nand_scan(struct mtd_info *mtd)
3370 if (pdata->keep_config && !pxa3xx_nand_detect_config(info))
3371 goto KEEP_CONFIG;
3372
3373 + /* Set a default chunk size */
3374 + info->chunk_size = 512;
3375 +
3376 ret = pxa3xx_nand_sensing(info);
3377 if (ret) {
3378 dev_info(&info->pdev->dev, "There is no chip on cs %d!\n",
3379 diff --git a/drivers/mtd/ubi/io.c b/drivers/mtd/ubi/io.c
3380 index d36134925d31..db657f2168d7 100644
3381 --- a/drivers/mtd/ubi/io.c
3382 +++ b/drivers/mtd/ubi/io.c
3383 @@ -921,6 +921,11 @@ static int validate_vid_hdr(const struct ubi_device *ubi,
3384 goto bad;
3385 }
3386
3387 + if (data_size > ubi->leb_size) {
3388 + ubi_err("bad data_size");
3389 + goto bad;
3390 + }
3391 +
3392 if (vol_type == UBI_VID_STATIC) {
3393 /*
3394 * Although from high-level point of view static volumes may
3395 diff --git a/drivers/mtd/ubi/vtbl.c b/drivers/mtd/ubi/vtbl.c
3396 index 07cac5f9ffb8..ec1009407fec 100644
3397 --- a/drivers/mtd/ubi/vtbl.c
3398 +++ b/drivers/mtd/ubi/vtbl.c
3399 @@ -651,6 +651,7 @@ static int init_volumes(struct ubi_device *ubi,
3400 if (ubi->corr_peb_count)
3401 ubi_err("%d PEBs are corrupted and not used",
3402 ubi->corr_peb_count);
3403 + return -ENOSPC;
3404 }
3405 ubi->rsvd_pebs += reserved_pebs;
3406 ubi->avail_pebs -= reserved_pebs;
3407 diff --git a/drivers/mtd/ubi/wl.c b/drivers/mtd/ubi/wl.c
3408 index ef670560971e..21d03130d8a7 100644
3409 --- a/drivers/mtd/ubi/wl.c
3410 +++ b/drivers/mtd/ubi/wl.c
3411 @@ -1982,6 +1982,7 @@ int ubi_wl_init(struct ubi_device *ubi, struct ubi_attach_info *ai)
3412 if (ubi->corr_peb_count)
3413 ubi_err("%d PEBs are corrupted and not used",
3414 ubi->corr_peb_count);
3415 + err = -ENOSPC;
3416 goto out_free;
3417 }
3418 ubi->avail_pebs -= reserved_pebs;
3419 diff --git a/drivers/net/dsa/bcm_sf2.c b/drivers/net/dsa/bcm_sf2.c
3420 index 4f4c2a7888e5..ea26483833f5 100644
3421 --- a/drivers/net/dsa/bcm_sf2.c
3422 +++ b/drivers/net/dsa/bcm_sf2.c
3423 @@ -684,16 +684,12 @@ static void bcm_sf2_sw_fixed_link_update(struct dsa_switch *ds, int port,
3424 struct fixed_phy_status *status)
3425 {
3426 struct bcm_sf2_priv *priv = ds_to_priv(ds);
3427 - u32 link, duplex, pause, speed;
3428 + u32 link, duplex, pause;
3429 u32 reg;
3430
3431 link = core_readl(priv, CORE_LNKSTS);
3432 duplex = core_readl(priv, CORE_DUPSTS);
3433 pause = core_readl(priv, CORE_PAUSESTS);
3434 - speed = core_readl(priv, CORE_SPDSTS);
3435 -
3436 - speed >>= (port * SPDSTS_SHIFT);
3437 - speed &= SPDSTS_MASK;
3438
3439 status->link = 0;
3440
3441 @@ -717,18 +713,6 @@ static void bcm_sf2_sw_fixed_link_update(struct dsa_switch *ds, int port,
3442 status->duplex = !!(duplex & (1 << port));
3443 }
3444
3445 - switch (speed) {
3446 - case SPDSTS_10:
3447 - status->speed = SPEED_10;
3448 - break;
3449 - case SPDSTS_100:
3450 - status->speed = SPEED_100;
3451 - break;
3452 - case SPDSTS_1000:
3453 - status->speed = SPEED_1000;
3454 - break;
3455 - }
3456 -
3457 if ((pause & (1 << port)) &&
3458 (pause & (1 << (port + PAUSESTS_TX_PAUSE_SHIFT)))) {
3459 status->asym_pause = 1;
3460 diff --git a/drivers/net/dsa/bcm_sf2.h b/drivers/net/dsa/bcm_sf2.h
3461 index ee9f650d5026..3ecfda86366e 100644
3462 --- a/drivers/net/dsa/bcm_sf2.h
3463 +++ b/drivers/net/dsa/bcm_sf2.h
3464 @@ -110,8 +110,8 @@ static inline u64 name##_readq(struct bcm_sf2_priv *priv, u32 off) \
3465 spin_unlock(&priv->indir_lock); \
3466 return (u64)indir << 32 | dir; \
3467 } \
3468 -static inline void name##_writeq(struct bcm_sf2_priv *priv, u32 off, \
3469 - u64 val) \
3470 +static inline void name##_writeq(struct bcm_sf2_priv *priv, u64 val, \
3471 + u32 off) \
3472 { \
3473 spin_lock(&priv->indir_lock); \
3474 reg_writel(priv, upper_32_bits(val), REG_DIR_DATA_WRITE); \
3475 diff --git a/drivers/net/ethernet/altera/altera_tse_main.c b/drivers/net/ethernet/altera/altera_tse_main.c
3476 index 4efc4355d345..2eb6404755b1 100644
3477 --- a/drivers/net/ethernet/altera/altera_tse_main.c
3478 +++ b/drivers/net/ethernet/altera/altera_tse_main.c
3479 @@ -501,8 +501,7 @@ static int tse_poll(struct napi_struct *napi, int budget)
3480 if (rxcomplete >= budget || txcomplete > 0)
3481 return rxcomplete;
3482
3483 - napi_gro_flush(napi, false);
3484 - __napi_complete(napi);
3485 + napi_complete(napi);
3486
3487 netdev_dbg(priv->dev,
3488 "NAPI Complete, did %d packets with budget %d\n",
3489 diff --git a/drivers/net/ethernet/broadcom/tg3.c b/drivers/net/ethernet/broadcom/tg3.c
3490 index a37800ecb27c..6fa8272c8f31 100644
3491 --- a/drivers/net/ethernet/broadcom/tg3.c
3492 +++ b/drivers/net/ethernet/broadcom/tg3.c
3493 @@ -10752,7 +10752,7 @@ static ssize_t tg3_show_temp(struct device *dev,
3494 tg3_ape_scratchpad_read(tp, &temperature, attr->index,
3495 sizeof(temperature));
3496 spin_unlock_bh(&tp->lock);
3497 - return sprintf(buf, "%u\n", temperature);
3498 + return sprintf(buf, "%u\n", temperature * 1000);
3499 }
3500
3501
3502 diff --git a/drivers/net/ethernet/brocade/bna/bnad.c b/drivers/net/ethernet/brocade/bna/bnad.c
3503 index c3861de9dc81..d864614f1255 100644
3504 --- a/drivers/net/ethernet/brocade/bna/bnad.c
3505 +++ b/drivers/net/ethernet/brocade/bna/bnad.c
3506 @@ -674,6 +674,7 @@ bnad_cq_process(struct bnad *bnad, struct bna_ccb *ccb, int budget)
3507 if (!next_cmpl->valid)
3508 break;
3509 }
3510 + packets++;
3511
3512 /* TODO: BNA_CQ_EF_LOCAL ? */
3513 if (unlikely(flags & (BNA_CQ_EF_MAC_ERROR |
3514 @@ -690,7 +691,6 @@ bnad_cq_process(struct bnad *bnad, struct bna_ccb *ccb, int budget)
3515 else
3516 bnad_cq_setup_skb_frags(rcb, skb, sop_ci, nvecs, len);
3517
3518 - packets++;
3519 rcb->rxq->rx_packets++;
3520 rcb->rxq->rx_bytes += totlen;
3521 ccb->bytes_per_intr += totlen;
3522 diff --git a/drivers/net/ethernet/intel/igb/igb.h b/drivers/net/ethernet/intel/igb/igb.h
3523 index 82d891e183b1..95f47b9f50d4 100644
3524 --- a/drivers/net/ethernet/intel/igb/igb.h
3525 +++ b/drivers/net/ethernet/intel/igb/igb.h
3526 @@ -531,6 +531,7 @@ void igb_ptp_rx_pktstamp(struct igb_q_vector *q_vector, unsigned char *va,
3527 struct sk_buff *skb);
3528 int igb_ptp_set_ts_config(struct net_device *netdev, struct ifreq *ifr);
3529 int igb_ptp_get_ts_config(struct net_device *netdev, struct ifreq *ifr);
3530 +void igb_set_flag_queue_pairs(struct igb_adapter *, const u32);
3531 #ifdef CONFIG_IGB_HWMON
3532 void igb_sysfs_exit(struct igb_adapter *adapter);
3533 int igb_sysfs_init(struct igb_adapter *adapter);
3534 diff --git a/drivers/net/ethernet/intel/igb/igb_ethtool.c b/drivers/net/ethernet/intel/igb/igb_ethtool.c
3535 index 02cfd3b14762..aa176cea5a41 100644
3536 --- a/drivers/net/ethernet/intel/igb/igb_ethtool.c
3537 +++ b/drivers/net/ethernet/intel/igb/igb_ethtool.c
3538 @@ -2979,6 +2979,7 @@ static int igb_set_channels(struct net_device *netdev,
3539 {
3540 struct igb_adapter *adapter = netdev_priv(netdev);
3541 unsigned int count = ch->combined_count;
3542 + unsigned int max_combined = 0;
3543
3544 /* Verify they are not requesting separate vectors */
3545 if (!count || ch->rx_count || ch->tx_count)
3546 @@ -2989,11 +2990,13 @@ static int igb_set_channels(struct net_device *netdev,
3547 return -EINVAL;
3548
3549 /* Verify the number of channels doesn't exceed hw limits */
3550 - if (count > igb_max_channels(adapter))
3551 + max_combined = igb_max_channels(adapter);
3552 + if (count > max_combined)
3553 return -EINVAL;
3554
3555 if (count != adapter->rss_queues) {
3556 adapter->rss_queues = count;
3557 + igb_set_flag_queue_pairs(adapter, max_combined);
3558
3559 /* Hardware has to reinitialize queues and interrupts to
3560 * match the new configuration.
3561 diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
3562 index 487cd9c4ac0d..e0f36647d3dd 100644
3563 --- a/drivers/net/ethernet/intel/igb/igb_main.c
3564 +++ b/drivers/net/ethernet/intel/igb/igb_main.c
3565 @@ -1207,8 +1207,14 @@ static int igb_alloc_q_vector(struct igb_adapter *adapter,
3566
3567 /* allocate q_vector and rings */
3568 q_vector = adapter->q_vector[v_idx];
3569 - if (!q_vector)
3570 + if (!q_vector) {
3571 + q_vector = kzalloc(size, GFP_KERNEL);
3572 + } else if (size > ksize(q_vector)) {
3573 + kfree_rcu(q_vector, rcu);
3574 q_vector = kzalloc(size, GFP_KERNEL);
3575 + } else {
3576 + memset(q_vector, 0, size);
3577 + }
3578 if (!q_vector)
3579 return -ENOMEM;
3580
3581 @@ -2861,7 +2867,7 @@ static void igb_probe_vfs(struct igb_adapter *adapter)
3582 return;
3583
3584 pci_sriov_set_totalvfs(pdev, 7);
3585 - igb_pci_enable_sriov(pdev, max_vfs);
3586 + igb_enable_sriov(pdev, max_vfs);
3587
3588 #endif /* CONFIG_PCI_IOV */
3589 }
3590 @@ -2902,6 +2908,14 @@ static void igb_init_queue_configuration(struct igb_adapter *adapter)
3591
3592 adapter->rss_queues = min_t(u32, max_rss_queues, num_online_cpus());
3593
3594 + igb_set_flag_queue_pairs(adapter, max_rss_queues);
3595 +}
3596 +
3597 +void igb_set_flag_queue_pairs(struct igb_adapter *adapter,
3598 + const u32 max_rss_queues)
3599 +{
3600 + struct e1000_hw *hw = &adapter->hw;
3601 +
3602 /* Determine if we need to pair queues. */
3603 switch (hw->mac.type) {
3604 case e1000_82575:
3605 diff --git a/drivers/net/usb/usbnet.c b/drivers/net/usb/usbnet.c
3606 index e7ed2513b1d1..7a598932f922 100644
3607 --- a/drivers/net/usb/usbnet.c
3608 +++ b/drivers/net/usb/usbnet.c
3609 @@ -779,7 +779,7 @@ int usbnet_stop (struct net_device *net)
3610 {
3611 struct usbnet *dev = netdev_priv(net);
3612 struct driver_info *info = dev->driver_info;
3613 - int retval, pm;
3614 + int retval, pm, mpn;
3615
3616 clear_bit(EVENT_DEV_OPEN, &dev->flags);
3617 netif_stop_queue (net);
3618 @@ -810,6 +810,8 @@ int usbnet_stop (struct net_device *net)
3619
3620 usbnet_purge_paused_rxq(dev);
3621
3622 + mpn = !test_and_clear_bit(EVENT_NO_RUNTIME_PM, &dev->flags);
3623 +
3624 /* deferred work (task, timer, softirq) must also stop.
3625 * can't flush_scheduled_work() until we drop rtnl (later),
3626 * else workers could deadlock; so make workers a NOP.
3627 @@ -820,8 +822,7 @@ int usbnet_stop (struct net_device *net)
3628 if (!pm)
3629 usb_autopm_put_interface(dev->intf);
3630
3631 - if (info->manage_power &&
3632 - !test_and_clear_bit(EVENT_NO_RUNTIME_PM, &dev->flags))
3633 + if (info->manage_power && mpn)
3634 info->manage_power(dev, 0);
3635 else
3636 usb_autopm_put_interface(dev->intf);
3637 diff --git a/drivers/net/wireless/ath/ath10k/htc.c b/drivers/net/wireless/ath/ath10k/htc.c
3638 index 676bd4ed969b..32003e682351 100644
3639 --- a/drivers/net/wireless/ath/ath10k/htc.c
3640 +++ b/drivers/net/wireless/ath/ath10k/htc.c
3641 @@ -162,8 +162,10 @@ int ath10k_htc_send(struct ath10k_htc *htc,
3642
3643 skb_cb->paddr = dma_map_single(dev, skb->data, skb->len, DMA_TO_DEVICE);
3644 ret = dma_mapping_error(dev, skb_cb->paddr);
3645 - if (ret)
3646 + if (ret) {
3647 + ret = -EIO;
3648 goto err_credits;
3649 + }
3650
3651 sg_item.transfer_id = ep->eid;
3652 sg_item.transfer_context = skb;
3653 diff --git a/drivers/net/wireless/ath/ath10k/htt_tx.c b/drivers/net/wireless/ath/ath10k/htt_tx.c
3654 index bd87a35201d8..55c60783c129 100644
3655 --- a/drivers/net/wireless/ath/ath10k/htt_tx.c
3656 +++ b/drivers/net/wireless/ath/ath10k/htt_tx.c
3657 @@ -402,8 +402,10 @@ int ath10k_htt_mgmt_tx(struct ath10k_htt *htt, struct sk_buff *msdu)
3658 skb_cb->paddr = dma_map_single(dev, msdu->data, msdu->len,
3659 DMA_TO_DEVICE);
3660 res = dma_mapping_error(dev, skb_cb->paddr);
3661 - if (res)
3662 + if (res) {
3663 + res = -EIO;
3664 goto err_free_txdesc;
3665 + }
3666
3667 skb_put(txdesc, len);
3668 cmd = (struct htt_cmd *)txdesc->data;
3669 @@ -488,8 +490,10 @@ int ath10k_htt_tx(struct ath10k_htt *htt, struct sk_buff *msdu)
3670 skb_cb->paddr = dma_map_single(dev, msdu->data, msdu->len,
3671 DMA_TO_DEVICE);
3672 res = dma_mapping_error(dev, skb_cb->paddr);
3673 - if (res)
3674 + if (res) {
3675 + res = -EIO;
3676 goto err_free_txbuf;
3677 + }
3678
3679 if (likely(use_frags)) {
3680 frags = skb_cb->htt.txbuf->frags;
3681 diff --git a/drivers/net/wireless/ath/ath10k/pci.c b/drivers/net/wireless/ath/ath10k/pci.c
3682 index 59e0ea83be50..06620657cc19 100644
3683 --- a/drivers/net/wireless/ath/ath10k/pci.c
3684 +++ b/drivers/net/wireless/ath/ath10k/pci.c
3685 @@ -1298,8 +1298,10 @@ static int ath10k_pci_hif_exchange_bmi_msg(struct ath10k *ar,
3686
3687 req_paddr = dma_map_single(ar->dev, treq, req_len, DMA_TO_DEVICE);
3688 ret = dma_mapping_error(ar->dev, req_paddr);
3689 - if (ret)
3690 + if (ret) {
3691 + ret = -EIO;
3692 goto err_dma;
3693 + }
3694
3695 if (resp && resp_len) {
3696 tresp = kzalloc(*resp_len, GFP_KERNEL);
3697 @@ -1311,8 +1313,10 @@ static int ath10k_pci_hif_exchange_bmi_msg(struct ath10k *ar,
3698 resp_paddr = dma_map_single(ar->dev, tresp, *resp_len,
3699 DMA_FROM_DEVICE);
3700 ret = dma_mapping_error(ar->dev, resp_paddr);
3701 - if (ret)
3702 + if (ret) {
3703 + ret = -EIO;
3704 goto err_req;
3705 + }
3706
3707 xfer.wait_for_resp = true;
3708 xfer.resp_len = 0;
3709 diff --git a/drivers/net/wireless/ath/ath10k/wmi.c b/drivers/net/wireless/ath/ath10k/wmi.c
3710 index 2c42bd504b79..8a091485960d 100644
3711 --- a/drivers/net/wireless/ath/ath10k/wmi.c
3712 +++ b/drivers/net/wireless/ath/ath10k/wmi.c
3713 @@ -1661,6 +1661,7 @@ static void ath10k_wmi_event_host_swba(struct ath10k *ar, struct sk_buff *skb)
3714 ATH10K_SKB_CB(bcn)->paddr);
3715 if (ret) {
3716 ath10k_warn(ar, "failed to map beacon: %d\n", ret);
3717 + ret = -EIO;
3718 dev_kfree_skb_any(bcn);
3719 goto skip;
3720 }
3721 diff --git a/drivers/net/wireless/rtlwifi/rtl8821ae/hw.c b/drivers/net/wireless/rtlwifi/rtl8821ae/hw.c
3722 index 43c14d4da563..e17b728a21aa 100644
3723 --- a/drivers/net/wireless/rtlwifi/rtl8821ae/hw.c
3724 +++ b/drivers/net/wireless/rtlwifi/rtl8821ae/hw.c
3725 @@ -2180,7 +2180,7 @@ static int _rtl8821ae_set_media_status(struct ieee80211_hw *hw,
3726
3727 rtl_write_byte(rtlpriv, (MSR), bt_msr);
3728 rtlpriv->cfg->ops->led_control(hw, ledaction);
3729 - if ((bt_msr & 0xfc) == MSR_AP)
3730 + if ((bt_msr & MSR_MASK) == MSR_AP)
3731 rtl_write_byte(rtlpriv, REG_BCNTCFG + 1, 0x00);
3732 else
3733 rtl_write_byte(rtlpriv, REG_BCNTCFG + 1, 0x66);
3734 diff --git a/drivers/net/wireless/rtlwifi/rtl8821ae/reg.h b/drivers/net/wireless/rtlwifi/rtl8821ae/reg.h
3735 index 53668fc8f23e..1d6110f9c1fb 100644
3736 --- a/drivers/net/wireless/rtlwifi/rtl8821ae/reg.h
3737 +++ b/drivers/net/wireless/rtlwifi/rtl8821ae/reg.h
3738 @@ -429,6 +429,7 @@
3739 #define MSR_ADHOC 0x01
3740 #define MSR_INFRA 0x02
3741 #define MSR_AP 0x03
3742 +#define MSR_MASK 0x03
3743
3744 #define RRSR_RSC_OFFSET 21
3745 #define RRSR_SHORT_OFFSET 23
3746 diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
3747 index 2b0b4e62f171..2a64f28b2dad 100644
3748 --- a/drivers/net/xen-netfront.c
3749 +++ b/drivers/net/xen-netfront.c
3750 @@ -1432,7 +1432,8 @@ static void xennet_disconnect_backend(struct netfront_info *info)
3751 queue->tx_evtchn = queue->rx_evtchn = 0;
3752 queue->tx_irq = queue->rx_irq = 0;
3753
3754 - napi_synchronize(&queue->napi);
3755 + if (netif_running(info->netdev))
3756 + napi_synchronize(&queue->napi);
3757
3758 xennet_release_tx_bufs(queue);
3759 xennet_release_rx_bufs(queue);
3760 diff --git a/drivers/of/address.c b/drivers/of/address.c
3761 index 1dba1a9c1fcf..9e77614391a0 100644
3762 --- a/drivers/of/address.c
3763 +++ b/drivers/of/address.c
3764 @@ -845,10 +845,10 @@ struct device_node *of_find_matching_node_by_address(struct device_node *from,
3765 struct resource res;
3766
3767 while (dn) {
3768 - if (of_address_to_resource(dn, 0, &res))
3769 - continue;
3770 - if (res.start == base_address)
3771 + if (!of_address_to_resource(dn, 0, &res) &&
3772 + res.start == base_address)
3773 return dn;
3774 +
3775 dn = of_find_matching_node(dn, matches);
3776 }
3777
3778 diff --git a/drivers/of/of_mdio.c b/drivers/of/of_mdio.c
3779 index 1bd43053b8c7..5dc1ef955a0f 100644
3780 --- a/drivers/of/of_mdio.c
3781 +++ b/drivers/of/of_mdio.c
3782 @@ -262,7 +262,8 @@ EXPORT_SYMBOL(of_phy_attach);
3783 bool of_phy_is_fixed_link(struct device_node *np)
3784 {
3785 struct device_node *dn;
3786 - int len;
3787 + int len, err;
3788 + const char *managed;
3789
3790 /* New binding */
3791 dn = of_get_child_by_name(np, "fixed-link");
3792 @@ -271,6 +272,10 @@ bool of_phy_is_fixed_link(struct device_node *np)
3793 return true;
3794 }
3795
3796 + err = of_property_read_string(np, "managed", &managed);
3797 + if (err == 0 && strcmp(managed, "auto") != 0)
3798 + return true;
3799 +
3800 /* Old binding */
3801 if (of_get_property(np, "fixed-link", &len) &&
3802 len == (5 * sizeof(__be32)))
3803 @@ -285,8 +290,18 @@ int of_phy_register_fixed_link(struct device_node *np)
3804 struct fixed_phy_status status = {};
3805 struct device_node *fixed_link_node;
3806 const __be32 *fixed_link_prop;
3807 - int len;
3808 + int len, err;
3809 struct phy_device *phy;
3810 + const char *managed;
3811 +
3812 + err = of_property_read_string(np, "managed", &managed);
3813 + if (err == 0) {
3814 + if (strcmp(managed, "in-band-status") == 0) {
3815 + /* status is zeroed, namely its .link member */
3816 + phy = fixed_phy_register(PHY_POLL, &status, np);
3817 + return IS_ERR(phy) ? PTR_ERR(phy) : 0;
3818 + }
3819 + }
3820
3821 /* New binding */
3822 fixed_link_node = of_get_child_by_name(np, "fixed-link");
3823 diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
3824 index 04ea682ab2aa..00b1cd32a4b1 100644
3825 --- a/drivers/pci/quirks.c
3826 +++ b/drivers/pci/quirks.c
3827 @@ -2818,12 +2818,15 @@ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x3c28, vtd_mask_spec_errors);
3828
3829 static void fixup_ti816x_class(struct pci_dev *dev)
3830 {
3831 + u32 class = dev->class;
3832 +
3833 /* TI 816x devices do not have class code set when in PCIe boot mode */
3834 - dev_info(&dev->dev, "Setting PCI class for 816x PCIe device\n");
3835 - dev->class = PCI_CLASS_MULTIMEDIA_VIDEO;
3836 + dev->class = PCI_CLASS_MULTIMEDIA_VIDEO << 8;
3837 + dev_info(&dev->dev, "PCI class overridden (%#08x -> %#08x)\n",
3838 + class, dev->class);
3839 }
3840 DECLARE_PCI_FIXUP_CLASS_EARLY(PCI_VENDOR_ID_TI, 0xb800,
3841 - PCI_CLASS_NOT_DEFINED, 0, fixup_ti816x_class);
3842 + PCI_CLASS_NOT_DEFINED, 0, fixup_ti816x_class);
3843
3844 /* Some PCIe devices do not work reliably with the claimed maximum
3845 * payload size supported.
3846 diff --git a/drivers/platform/x86/hp-wmi.c b/drivers/platform/x86/hp-wmi.c
3847 index 4c559640dcba..301386c4d85b 100644
3848 --- a/drivers/platform/x86/hp-wmi.c
3849 +++ b/drivers/platform/x86/hp-wmi.c
3850 @@ -54,8 +54,9 @@ MODULE_ALIAS("wmi:5FB7F034-2C63-45e9-BE91-3D44E2C707E4");
3851 #define HPWMI_HARDWARE_QUERY 0x4
3852 #define HPWMI_WIRELESS_QUERY 0x5
3853 #define HPWMI_BIOS_QUERY 0x9
3854 +#define HPWMI_FEATURE_QUERY 0xb
3855 #define HPWMI_HOTKEY_QUERY 0xc
3856 -#define HPWMI_FEATURE_QUERY 0xd
3857 +#define HPWMI_FEATURE2_QUERY 0xd
3858 #define HPWMI_WIRELESS2_QUERY 0x1b
3859 #define HPWMI_POSTCODEERROR_QUERY 0x2a
3860
3861 @@ -295,25 +296,33 @@ static int hp_wmi_tablet_state(void)
3862 return (state & 0x4) ? 1 : 0;
3863 }
3864
3865 -static int __init hp_wmi_bios_2009_later(void)
3866 +static int __init hp_wmi_bios_2008_later(void)
3867 {
3868 int state = 0;
3869 int ret = hp_wmi_perform_query(HPWMI_FEATURE_QUERY, 0, &state,
3870 sizeof(state), sizeof(state));
3871 - if (ret)
3872 - return ret;
3873 + if (!ret)
3874 + return 1;
3875
3876 - return (state & 0x10) ? 1 : 0;
3877 + return (ret == HPWMI_RET_UNKNOWN_CMDTYPE) ? 0 : -ENXIO;
3878 }
3879
3880 -static int hp_wmi_enable_hotkeys(void)
3881 +static int __init hp_wmi_bios_2009_later(void)
3882 {
3883 - int ret;
3884 - int query = 0x6e;
3885 + int state = 0;
3886 + int ret = hp_wmi_perform_query(HPWMI_FEATURE2_QUERY, 0, &state,
3887 + sizeof(state), sizeof(state));
3888 + if (!ret)
3889 + return 1;
3890
3891 - ret = hp_wmi_perform_query(HPWMI_BIOS_QUERY, 1, &query, sizeof(query),
3892 - 0);
3893 + return (ret == HPWMI_RET_UNKNOWN_CMDTYPE) ? 0 : -ENXIO;
3894 +}
3895
3896 +static int __init hp_wmi_enable_hotkeys(void)
3897 +{
3898 + int value = 0x6e;
3899 + int ret = hp_wmi_perform_query(HPWMI_BIOS_QUERY, 1, &value,
3900 + sizeof(value), 0);
3901 if (ret)
3902 return -EINVAL;
3903 return 0;
3904 @@ -663,7 +672,7 @@ static int __init hp_wmi_input_setup(void)
3905 hp_wmi_tablet_state());
3906 input_sync(hp_wmi_input_dev);
3907
3908 - if (hp_wmi_bios_2009_later() == 4)
3909 + if (!hp_wmi_bios_2009_later() && hp_wmi_bios_2008_later())
3910 hp_wmi_enable_hotkeys();
3911
3912 status = wmi_install_notify_handler(HPWMI_EVENT_GUID, hp_wmi_notify, NULL);
3913 diff --git a/drivers/power/avs/Kconfig b/drivers/power/avs/Kconfig
3914 index 7f3d389bd601..a67eeace6a89 100644
3915 --- a/drivers/power/avs/Kconfig
3916 +++ b/drivers/power/avs/Kconfig
3917 @@ -13,7 +13,7 @@ menuconfig POWER_AVS
3918
3919 config ROCKCHIP_IODOMAIN
3920 tristate "Rockchip IO domain support"
3921 - depends on ARCH_ROCKCHIP && OF
3922 + depends on POWER_AVS && ARCH_ROCKCHIP && OF
3923 help
3924 Say y here to enable support io domains on Rockchip SoCs. It is
3925 necessary for the io domain setting of the SoC to match the
3926 diff --git a/drivers/s390/char/sclp_early.c b/drivers/s390/char/sclp_early.c
3927 index 5bd6cb145a87..efc9a13bf457 100644
3928 --- a/drivers/s390/char/sclp_early.c
3929 +++ b/drivers/s390/char/sclp_early.c
3930 @@ -7,6 +7,7 @@
3931 #define KMSG_COMPONENT "sclp_early"
3932 #define pr_fmt(fmt) KMSG_COMPONENT ": " fmt
3933
3934 +#include <linux/errno.h>
3935 #include <asm/ctl_reg.h>
3936 #include <asm/sclp.h>
3937 #include <asm/ipl.h>
3938 diff --git a/drivers/scsi/3w-9xxx.c b/drivers/scsi/3w-9xxx.c
3939 index 5f57e3d35e26..6adf9abdf955 100644
3940 --- a/drivers/scsi/3w-9xxx.c
3941 +++ b/drivers/scsi/3w-9xxx.c
3942 @@ -225,6 +225,17 @@ static const struct file_operations twa_fops = {
3943 .llseek = noop_llseek,
3944 };
3945
3946 +/*
3947 + * The controllers use an inline buffer instead of a mapped SGL for small,
3948 + * single entry buffers. Note that we treat a zero-length transfer like
3949 + * a mapped SGL.
3950 + */
3951 +static bool twa_command_mapped(struct scsi_cmnd *cmd)
3952 +{
3953 + return scsi_sg_count(cmd) != 1 ||
3954 + scsi_bufflen(cmd) >= TW_MIN_SGL_LENGTH;
3955 +}
3956 +
3957 /* This function will complete an aen request from the isr */
3958 static int twa_aen_complete(TW_Device_Extension *tw_dev, int request_id)
3959 {
3960 @@ -1351,7 +1362,8 @@ static irqreturn_t twa_interrupt(int irq, void *dev_instance)
3961 }
3962
3963 /* Now complete the io */
3964 - scsi_dma_unmap(cmd);
3965 + if (twa_command_mapped(cmd))
3966 + scsi_dma_unmap(cmd);
3967 cmd->scsi_done(cmd);
3968 tw_dev->state[request_id] = TW_S_COMPLETED;
3969 twa_free_request_id(tw_dev, request_id);
3970 @@ -1594,7 +1606,8 @@ static int twa_reset_device_extension(TW_Device_Extension *tw_dev)
3971 struct scsi_cmnd *cmd = tw_dev->srb[i];
3972
3973 cmd->result = (DID_RESET << 16);
3974 - scsi_dma_unmap(cmd);
3975 + if (twa_command_mapped(cmd))
3976 + scsi_dma_unmap(cmd);
3977 cmd->scsi_done(cmd);
3978 }
3979 }
3980 @@ -1777,12 +1790,14 @@ static int twa_scsi_queue_lck(struct scsi_cmnd *SCpnt, void (*done)(struct scsi_
3981 retval = twa_scsiop_execute_scsi(tw_dev, request_id, NULL, 0, NULL);
3982 switch (retval) {
3983 case SCSI_MLQUEUE_HOST_BUSY:
3984 - scsi_dma_unmap(SCpnt);
3985 + if (twa_command_mapped(SCpnt))
3986 + scsi_dma_unmap(SCpnt);
3987 twa_free_request_id(tw_dev, request_id);
3988 break;
3989 case 1:
3990 SCpnt->result = (DID_ERROR << 16);
3991 - scsi_dma_unmap(SCpnt);
3992 + if (twa_command_mapped(SCpnt))
3993 + scsi_dma_unmap(SCpnt);
3994 done(SCpnt);
3995 tw_dev->state[request_id] = TW_S_COMPLETED;
3996 twa_free_request_id(tw_dev, request_id);
3997 @@ -1843,8 +1858,7 @@ static int twa_scsiop_execute_scsi(TW_Device_Extension *tw_dev, int request_id,
3998 /* Map sglist from scsi layer to cmd packet */
3999
4000 if (scsi_sg_count(srb)) {
4001 - if ((scsi_sg_count(srb) == 1) &&
4002 - (scsi_bufflen(srb) < TW_MIN_SGL_LENGTH)) {
4003 + if (!twa_command_mapped(srb)) {
4004 if (srb->sc_data_direction == DMA_TO_DEVICE ||
4005 srb->sc_data_direction == DMA_BIDIRECTIONAL)
4006 scsi_sg_copy_to_buffer(srb,
4007 @@ -1917,7 +1931,7 @@ static void twa_scsiop_execute_scsi_complete(TW_Device_Extension *tw_dev, int re
4008 {
4009 struct scsi_cmnd *cmd = tw_dev->srb[request_id];
4010
4011 - if (scsi_bufflen(cmd) < TW_MIN_SGL_LENGTH &&
4012 + if (!twa_command_mapped(cmd) &&
4013 (cmd->sc_data_direction == DMA_FROM_DEVICE ||
4014 cmd->sc_data_direction == DMA_BIDIRECTIONAL)) {
4015 if (scsi_sg_count(cmd) == 1) {
4016 diff --git a/drivers/scsi/scsi_error.c b/drivers/scsi/scsi_error.c
4017 index 01a79473350a..3d12c52c3f81 100644
4018 --- a/drivers/scsi/scsi_error.c
4019 +++ b/drivers/scsi/scsi_error.c
4020 @@ -2166,8 +2166,17 @@ int scsi_error_handler(void *data)
4021 * We never actually get interrupted because kthread_run
4022 * disables signal delivery for the created thread.
4023 */
4024 - while (!kthread_should_stop()) {
4025 + while (true) {
4026 + /*
4027 + * The sequence in kthread_stop() sets the stop flag first
4028 + * then wakes the process. To avoid missed wakeups, the task
4029 + * should always be in a non running state before the stop
4030 + * flag is checked
4031 + */
4032 set_current_state(TASK_INTERRUPTIBLE);
4033 + if (kthread_should_stop())
4034 + break;
4035 +
4036 if ((shost->host_failed == 0 && shost->host_eh_scheduled == 0) ||
4037 shost->host_failed != atomic_read(&shost->host_busy)) {
4038 SCSI_LOG_ERROR_RECOVERY(1,
4039 diff --git a/drivers/spi/spi-pxa2xx.c b/drivers/spi/spi-pxa2xx.c
4040 index d95656d05eb6..e56802d85ff9 100644
4041 --- a/drivers/spi/spi-pxa2xx.c
4042 +++ b/drivers/spi/spi-pxa2xx.c
4043 @@ -564,6 +564,10 @@ static irqreturn_t ssp_int(int irq, void *dev_id)
4044 if (!(sccr1_reg & SSCR1_TIE))
4045 mask &= ~SSSR_TFS;
4046
4047 + /* Ignore RX timeout interrupt if it is disabled */
4048 + if (!(sccr1_reg & SSCR1_TINTE))
4049 + mask &= ~SSSR_TINT;
4050 +
4051 if (!(status & mask))
4052 return IRQ_NONE;
4053
4054 diff --git a/drivers/spi/spi-xtensa-xtfpga.c b/drivers/spi/spi-xtensa-xtfpga.c
4055 index 0dc5df5233a9..cb030389a265 100644
4056 --- a/drivers/spi/spi-xtensa-xtfpga.c
4057 +++ b/drivers/spi/spi-xtensa-xtfpga.c
4058 @@ -34,13 +34,13 @@ struct xtfpga_spi {
4059 static inline void xtfpga_spi_write32(const struct xtfpga_spi *spi,
4060 unsigned addr, u32 val)
4061 {
4062 - iowrite32(val, spi->regs + addr);
4063 + __raw_writel(val, spi->regs + addr);
4064 }
4065
4066 static inline unsigned int xtfpga_spi_read32(const struct xtfpga_spi *spi,
4067 unsigned addr)
4068 {
4069 - return ioread32(spi->regs + addr);
4070 + return __raw_readl(spi->regs + addr);
4071 }
4072
4073 static inline void xtfpga_spi_wait_busy(struct xtfpga_spi *xspi)
4074 diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
4075 index 115ad5dbc7c5..85a6da81723a 100644
4076 --- a/drivers/spi/spi.c
4077 +++ b/drivers/spi/spi.c
4078 @@ -1478,8 +1478,7 @@ static struct class spi_master_class = {
4079 *
4080 * The caller is responsible for assigning the bus number and initializing
4081 * the master's methods before calling spi_register_master(); and (after errors
4082 - * adding the device) calling spi_master_put() and kfree() to prevent a memory
4083 - * leak.
4084 + * adding the device) calling spi_master_put() to prevent a memory leak.
4085 */
4086 struct spi_master *spi_alloc_master(struct device *dev, unsigned size)
4087 {
4088 diff --git a/drivers/staging/android/ion/ion.c b/drivers/staging/android/ion/ion.c
4089 index 56604f41ec48..ce3808a868e7 100644
4090 --- a/drivers/staging/android/ion/ion.c
4091 +++ b/drivers/staging/android/ion/ion.c
4092 @@ -1176,13 +1176,13 @@ struct ion_handle *ion_import_dma_buf(struct ion_client *client, int fd)
4093 mutex_unlock(&client->lock);
4094 goto end;
4095 }
4096 - mutex_unlock(&client->lock);
4097
4098 handle = ion_handle_create(client, buffer);
4099 - if (IS_ERR(handle))
4100 + if (IS_ERR(handle)) {
4101 + mutex_unlock(&client->lock);
4102 goto end;
4103 + }
4104
4105 - mutex_lock(&client->lock);
4106 ret = ion_handle_add(client, handle);
4107 mutex_unlock(&client->lock);
4108 if (ret) {
4109 diff --git a/drivers/staging/comedi/drivers/adl_pci7x3x.c b/drivers/staging/comedi/drivers/adl_pci7x3x.c
4110 index fb8e5f582496..3346c0753d7e 100644
4111 --- a/drivers/staging/comedi/drivers/adl_pci7x3x.c
4112 +++ b/drivers/staging/comedi/drivers/adl_pci7x3x.c
4113 @@ -113,8 +113,20 @@ static int adl_pci7x3x_do_insn_bits(struct comedi_device *dev,
4114 {
4115 unsigned long reg = (unsigned long)s->private;
4116
4117 - if (comedi_dio_update_state(s, data))
4118 - outl(s->state, dev->iobase + reg);
4119 + if (comedi_dio_update_state(s, data)) {
4120 + unsigned int val = s->state;
4121 +
4122 + if (s->n_chan == 16) {
4123 + /*
4124 + * It seems the PCI-7230 needs the 16-bit DO state
4125 + * to be shifted left by 16 bits before being written
4126 + * to the 32-bit register. Set the value in both
4127 + * halves of the register to be sure.
4128 + */
4129 + val |= val << 16;
4130 + }
4131 + outl(val, dev->iobase + reg);
4132 + }
4133
4134 data[1] = s->state;
4135
4136 diff --git a/drivers/staging/speakup/fakekey.c b/drivers/staging/speakup/fakekey.c
4137 index 4299cf45f947..5e1f16c36b49 100644
4138 --- a/drivers/staging/speakup/fakekey.c
4139 +++ b/drivers/staging/speakup/fakekey.c
4140 @@ -81,6 +81,7 @@ void speakup_fake_down_arrow(void)
4141 __this_cpu_write(reporting_keystroke, true);
4142 input_report_key(virt_keyboard, KEY_DOWN, PRESSED);
4143 input_report_key(virt_keyboard, KEY_DOWN, RELEASED);
4144 + input_sync(virt_keyboard);
4145 __this_cpu_write(reporting_keystroke, false);
4146
4147 /* reenable preemption */
4148 diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
4149 index 06ea1a113e45..062633295bc2 100644
4150 --- a/drivers/target/iscsi/iscsi_target.c
4151 +++ b/drivers/target/iscsi/iscsi_target.c
4152 @@ -343,7 +343,6 @@ static struct iscsi_np *iscsit_get_np(
4153
4154 struct iscsi_np *iscsit_add_np(
4155 struct __kernel_sockaddr_storage *sockaddr,
4156 - char *ip_str,
4157 int network_transport)
4158 {
4159 struct sockaddr_in *sock_in;
4160 @@ -372,11 +371,9 @@ struct iscsi_np *iscsit_add_np(
4161 np->np_flags |= NPF_IP_NETWORK;
4162 if (sockaddr->ss_family == AF_INET6) {
4163 sock_in6 = (struct sockaddr_in6 *)sockaddr;
4164 - snprintf(np->np_ip, IPV6_ADDRESS_SPACE, "%s", ip_str);
4165 np->np_port = ntohs(sock_in6->sin6_port);
4166 } else {
4167 sock_in = (struct sockaddr_in *)sockaddr;
4168 - sprintf(np->np_ip, "%s", ip_str);
4169 np->np_port = ntohs(sock_in->sin_port);
4170 }
4171
4172 @@ -413,8 +410,8 @@ struct iscsi_np *iscsit_add_np(
4173 list_add_tail(&np->np_list, &g_np_list);
4174 mutex_unlock(&np_lock);
4175
4176 - pr_debug("CORE[0] - Added Network Portal: %s:%hu on %s\n",
4177 - np->np_ip, np->np_port, np->np_transport->name);
4178 + pr_debug("CORE[0] - Added Network Portal: %pISc:%hu on %s\n",
4179 + &np->np_sockaddr, np->np_port, np->np_transport->name);
4180
4181 return np;
4182 }
4183 @@ -483,8 +480,8 @@ int iscsit_del_np(struct iscsi_np *np)
4184 list_del(&np->np_list);
4185 mutex_unlock(&np_lock);
4186
4187 - pr_debug("CORE[0] - Removed Network Portal: %s:%hu on %s\n",
4188 - np->np_ip, np->np_port, np->np_transport->name);
4189 + pr_debug("CORE[0] - Removed Network Portal: %pISc:%hu on %s\n",
4190 + &np->np_sockaddr, np->np_port, np->np_transport->name);
4191
4192 iscsit_put_transport(np->np_transport);
4193 kfree(np);
4194 @@ -3482,11 +3479,18 @@ iscsit_build_sendtargets_response(struct iscsi_cmd *cmd,
4195 target_name_printed = 1;
4196 }
4197
4198 - len = sprintf(buf, "TargetAddress="
4199 - "%s:%hu,%hu",
4200 - inaddr_any ? conn->local_ip : np->np_ip,
4201 - np->np_port,
4202 - tpg->tpgt);
4203 + if (inaddr_any) {
4204 + len = sprintf(buf, "TargetAddress="
4205 + "%s:%hu,%hu",
4206 + conn->local_ip,
4207 + np->np_port,
4208 + tpg->tpgt);
4209 + } else {
4210 + len = sprintf(buf, "TargetAddress="
4211 + "%pISpc,%hu",
4212 + &np->np_sockaddr,
4213 + tpg->tpgt);
4214 + }
4215 len += 1;
4216
4217 if ((len + payload_len) > buffer_len) {
4218 diff --git a/drivers/target/iscsi/iscsi_target.h b/drivers/target/iscsi/iscsi_target.h
4219 index e936d56fb523..3ef6ef582b10 100644
4220 --- a/drivers/target/iscsi/iscsi_target.h
4221 +++ b/drivers/target/iscsi/iscsi_target.h
4222 @@ -13,7 +13,7 @@ extern int iscsit_deaccess_np(struct iscsi_np *, struct iscsi_portal_group *,
4223 extern bool iscsit_check_np_match(struct __kernel_sockaddr_storage *,
4224 struct iscsi_np *, int);
4225 extern struct iscsi_np *iscsit_add_np(struct __kernel_sockaddr_storage *,
4226 - char *, int);
4227 + int);
4228 extern int iscsit_reset_np_thread(struct iscsi_np *, struct iscsi_tpg_np *,
4229 struct iscsi_portal_group *, bool);
4230 extern int iscsit_del_np(struct iscsi_np *);
4231 diff --git a/drivers/target/iscsi/iscsi_target_configfs.c b/drivers/target/iscsi/iscsi_target_configfs.c
4232 index 9059c1e0b26e..49b34655e57a 100644
4233 --- a/drivers/target/iscsi/iscsi_target_configfs.c
4234 +++ b/drivers/target/iscsi/iscsi_target_configfs.c
4235 @@ -103,7 +103,7 @@ static ssize_t lio_target_np_store_sctp(
4236 * Use existing np->np_sockaddr for SCTP network portal reference
4237 */
4238 tpg_np_sctp = iscsit_tpg_add_network_portal(tpg, &np->np_sockaddr,
4239 - np->np_ip, tpg_np, ISCSI_SCTP_TCP);
4240 + tpg_np, ISCSI_SCTP_TCP);
4241 if (!tpg_np_sctp || IS_ERR(tpg_np_sctp))
4242 goto out;
4243 } else {
4244 @@ -181,7 +181,7 @@ static ssize_t lio_target_np_store_iser(
4245 }
4246
4247 tpg_np_iser = iscsit_tpg_add_network_portal(tpg, &np->np_sockaddr,
4248 - np->np_ip, tpg_np, ISCSI_INFINIBAND);
4249 + tpg_np, ISCSI_INFINIBAND);
4250 if (IS_ERR(tpg_np_iser)) {
4251 rc = PTR_ERR(tpg_np_iser);
4252 goto out;
4253 @@ -252,8 +252,8 @@ static struct se_tpg_np *lio_target_call_addnptotpg(
4254 return ERR_PTR(-EINVAL);
4255 }
4256 str++; /* Skip over leading "[" */
4257 - *str2 = '\0'; /* Terminate the IPv6 address */
4258 - str2++; /* Skip over the "]" */
4259 + *str2 = '\0'; /* Terminate the unbracketed IPv6 address */
4260 + str2++; /* Skip over the \0 */
4261 port_str = strstr(str2, ":");
4262 if (!port_str) {
4263 pr_err("Unable to locate \":port\""
4264 @@ -320,7 +320,7 @@ static struct se_tpg_np *lio_target_call_addnptotpg(
4265 * sys/kernel/config/iscsi/$IQN/$TPG/np/$IP:$PORT/
4266 *
4267 */
4268 - tpg_np = iscsit_tpg_add_network_portal(tpg, &sockaddr, str, NULL,
4269 + tpg_np = iscsit_tpg_add_network_portal(tpg, &sockaddr, NULL,
4270 ISCSI_TCP);
4271 if (IS_ERR(tpg_np)) {
4272 iscsit_put_tpg(tpg);
4273 @@ -348,8 +348,8 @@ static void lio_target_call_delnpfromtpg(
4274
4275 se_tpg = &tpg->tpg_se_tpg;
4276 pr_debug("LIO_Target_ConfigFS: DEREGISTER -> %s TPGT: %hu"
4277 - " PORTAL: %s:%hu\n", config_item_name(&se_tpg->se_tpg_wwn->wwn_group.cg_item),
4278 - tpg->tpgt, tpg_np->tpg_np->np_ip, tpg_np->tpg_np->np_port);
4279 + " PORTAL: %pISc:%hu\n", config_item_name(&se_tpg->se_tpg_wwn->wwn_group.cg_item),
4280 + tpg->tpgt, &tpg_np->tpg_np->np_sockaddr, tpg_np->tpg_np->np_port);
4281
4282 ret = iscsit_tpg_del_network_portal(tpg, tpg_np);
4283 if (ret < 0)
4284 diff --git a/drivers/target/iscsi/iscsi_target_login.c b/drivers/target/iscsi/iscsi_target_login.c
4285 index 719ec300cd24..eb320e6eb93d 100644
4286 --- a/drivers/target/iscsi/iscsi_target_login.c
4287 +++ b/drivers/target/iscsi/iscsi_target_login.c
4288 @@ -879,8 +879,8 @@ static void iscsi_handle_login_thread_timeout(unsigned long data)
4289 struct iscsi_np *np = (struct iscsi_np *) data;
4290
4291 spin_lock_bh(&np->np_thread_lock);
4292 - pr_err("iSCSI Login timeout on Network Portal %s:%hu\n",
4293 - np->np_ip, np->np_port);
4294 + pr_err("iSCSI Login timeout on Network Portal %pISc:%hu\n",
4295 + &np->np_sockaddr, np->np_port);
4296
4297 if (np->np_login_timer_flags & ISCSI_TF_STOP) {
4298 spin_unlock_bh(&np->np_thread_lock);
4299 @@ -1357,8 +1357,8 @@ static int __iscsi_target_login_thread(struct iscsi_np *np)
4300 spin_lock_bh(&np->np_thread_lock);
4301 if (np->np_thread_state != ISCSI_NP_THREAD_ACTIVE) {
4302 spin_unlock_bh(&np->np_thread_lock);
4303 - pr_err("iSCSI Network Portal on %s:%hu currently not"
4304 - " active.\n", np->np_ip, np->np_port);
4305 + pr_err("iSCSI Network Portal on %pISc:%hu currently not"
4306 + " active.\n", &np->np_sockaddr, np->np_port);
4307 iscsit_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR,
4308 ISCSI_LOGIN_STATUS_SVC_UNAVAILABLE);
4309 goto new_sess_out;
4310 diff --git a/drivers/target/iscsi/iscsi_target_tpg.c b/drivers/target/iscsi/iscsi_target_tpg.c
4311 index c3cb5c15efda..5530321c44f2 100644
4312 --- a/drivers/target/iscsi/iscsi_target_tpg.c
4313 +++ b/drivers/target/iscsi/iscsi_target_tpg.c
4314 @@ -464,7 +464,6 @@ static bool iscsit_tpg_check_network_portal(
4315 struct iscsi_tpg_np *iscsit_tpg_add_network_portal(
4316 struct iscsi_portal_group *tpg,
4317 struct __kernel_sockaddr_storage *sockaddr,
4318 - char *ip_str,
4319 struct iscsi_tpg_np *tpg_np_parent,
4320 int network_transport)
4321 {
4322 @@ -474,8 +473,8 @@ struct iscsi_tpg_np *iscsit_tpg_add_network_portal(
4323 if (!tpg_np_parent) {
4324 if (iscsit_tpg_check_network_portal(tpg->tpg_tiqn, sockaddr,
4325 network_transport)) {
4326 - pr_err("Network Portal: %s already exists on a"
4327 - " different TPG on %s\n", ip_str,
4328 + pr_err("Network Portal: %pISc already exists on a"
4329 + " different TPG on %s\n", sockaddr,
4330 tpg->tpg_tiqn->tiqn);
4331 return ERR_PTR(-EEXIST);
4332 }
4333 @@ -488,7 +487,7 @@ struct iscsi_tpg_np *iscsit_tpg_add_network_portal(
4334 return ERR_PTR(-ENOMEM);
4335 }
4336
4337 - np = iscsit_add_np(sockaddr, ip_str, network_transport);
4338 + np = iscsit_add_np(sockaddr, network_transport);
4339 if (IS_ERR(np)) {
4340 kfree(tpg_np);
4341 return ERR_CAST(np);
4342 @@ -519,8 +518,8 @@ struct iscsi_tpg_np *iscsit_tpg_add_network_portal(
4343 spin_unlock(&tpg_np_parent->tpg_np_parent_lock);
4344 }
4345
4346 - pr_debug("CORE[%s] - Added Network Portal: %s:%hu,%hu on %s\n",
4347 - tpg->tpg_tiqn->tiqn, np->np_ip, np->np_port, tpg->tpgt,
4348 + pr_debug("CORE[%s] - Added Network Portal: %pISc:%hu,%hu on %s\n",
4349 + tpg->tpg_tiqn->tiqn, &np->np_sockaddr, np->np_port, tpg->tpgt,
4350 np->np_transport->name);
4351
4352 return tpg_np;
4353 @@ -533,8 +532,8 @@ static int iscsit_tpg_release_np(
4354 {
4355 iscsit_clear_tpg_np_login_thread(tpg_np, tpg, true);
4356
4357 - pr_debug("CORE[%s] - Removed Network Portal: %s:%hu,%hu on %s\n",
4358 - tpg->tpg_tiqn->tiqn, np->np_ip, np->np_port, tpg->tpgt,
4359 + pr_debug("CORE[%s] - Removed Network Portal: %pISc:%hu,%hu on %s\n",
4360 + tpg->tpg_tiqn->tiqn, &np->np_sockaddr, np->np_port, tpg->tpgt,
4361 np->np_transport->name);
4362
4363 tpg_np->tpg_np = NULL;
4364 diff --git a/drivers/target/iscsi/iscsi_target_tpg.h b/drivers/target/iscsi/iscsi_target_tpg.h
4365 index e7265337bc43..e216128b5a98 100644
4366 --- a/drivers/target/iscsi/iscsi_target_tpg.h
4367 +++ b/drivers/target/iscsi/iscsi_target_tpg.h
4368 @@ -22,7 +22,7 @@ extern struct iscsi_node_attrib *iscsit_tpg_get_node_attrib(struct iscsi_session
4369 extern void iscsit_tpg_del_external_nps(struct iscsi_tpg_np *);
4370 extern struct iscsi_tpg_np *iscsit_tpg_locate_child_np(struct iscsi_tpg_np *, int);
4371 extern struct iscsi_tpg_np *iscsit_tpg_add_network_portal(struct iscsi_portal_group *,
4372 - struct __kernel_sockaddr_storage *, char *, struct iscsi_tpg_np *,
4373 + struct __kernel_sockaddr_storage *, struct iscsi_tpg_np *,
4374 int);
4375 extern int iscsit_tpg_del_network_portal(struct iscsi_portal_group *,
4376 struct iscsi_tpg_np *);
4377 diff --git a/drivers/tty/n_tty.c b/drivers/tty/n_tty.c
4378 index e3ebb674a693..fea7d905e77c 100644
4379 --- a/drivers/tty/n_tty.c
4380 +++ b/drivers/tty/n_tty.c
4381 @@ -364,8 +364,8 @@ static void n_tty_packet_mode_flush(struct tty_struct *tty)
4382 spin_lock_irqsave(&tty->ctrl_lock, flags);
4383 if (tty->link->packet) {
4384 tty->ctrl_status |= TIOCPKT_FLUSHREAD;
4385 - if (waitqueue_active(&tty->link->read_wait))
4386 - wake_up_interruptible(&tty->link->read_wait);
4387 + spin_unlock_irqrestore(&tty->ctrl_lock, flags);
4388 + wake_up_interruptible(&tty->link->read_wait);
4389 }
4390 spin_unlock_irqrestore(&tty->ctrl_lock, flags);
4391 }
4392 @@ -1387,8 +1387,7 @@ handle_newline:
4393 put_tty_queue(c, ldata);
4394 ldata->canon_head = ldata->read_head;
4395 kill_fasync(&tty->fasync, SIGIO, POLL_IN);
4396 - if (waitqueue_active(&tty->read_wait))
4397 - wake_up_interruptible_poll(&tty->read_wait, POLLIN);
4398 + wake_up_interruptible_poll(&tty->read_wait, POLLIN);
4399 return 0;
4400 }
4401 }
4402 @@ -1671,8 +1670,7 @@ static void __receive_buf(struct tty_struct *tty, const unsigned char *cp,
4403 if ((!ldata->icanon && (read_cnt(ldata) >= ldata->minimum_to_wake)) ||
4404 L_EXTPROC(tty)) {
4405 kill_fasync(&tty->fasync, SIGIO, POLL_IN);
4406 - if (waitqueue_active(&tty->read_wait))
4407 - wake_up_interruptible_poll(&tty->read_wait, POLLIN);
4408 + wake_up_interruptible_poll(&tty->read_wait, POLLIN);
4409 }
4410 }
4411
4412 @@ -1891,10 +1889,8 @@ static void n_tty_set_termios(struct tty_struct *tty, struct ktermios *old)
4413 }
4414
4415 /* The termios change make the tty ready for I/O */
4416 - if (waitqueue_active(&tty->write_wait))
4417 - wake_up_interruptible(&tty->write_wait);
4418 - if (waitqueue_active(&tty->read_wait))
4419 - wake_up_interruptible(&tty->read_wait);
4420 + wake_up_interruptible(&tty->write_wait);
4421 + wake_up_interruptible(&tty->read_wait);
4422 }
4423
4424 /**
4425 diff --git a/drivers/tty/serial/8250/8250_pnp.c b/drivers/tty/serial/8250/8250_pnp.c
4426 index 682a2fbe5c06..2b22cc1e57a2 100644
4427 --- a/drivers/tty/serial/8250/8250_pnp.c
4428 +++ b/drivers/tty/serial/8250/8250_pnp.c
4429 @@ -364,6 +364,11 @@ static const struct pnp_device_id pnp_dev_table[] = {
4430 /* Winbond CIR port, should not be probed. We should keep track
4431 of it to prevent the legacy serial driver from probing it */
4432 { "WEC1022", CIR_PORT },
4433 + /*
4434 + * SMSC IrCC SIR/FIR port, should not be probed by serial driver
4435 + * as well so its own driver can bind to it.
4436 + */
4437 + { "SMCF010", CIR_PORT },
4438 { "", 0 }
4439 };
4440
4441 diff --git a/drivers/usb/chipidea/udc.c b/drivers/usb/chipidea/udc.c
4442 index c42bf8da56db..7b362870277e 100644
4443 --- a/drivers/usb/chipidea/udc.c
4444 +++ b/drivers/usb/chipidea/udc.c
4445 @@ -638,6 +638,44 @@ __acquires(hwep->lock)
4446 return 0;
4447 }
4448
4449 +static int _ep_set_halt(struct usb_ep *ep, int value, bool check_transfer)
4450 +{
4451 + struct ci_hw_ep *hwep = container_of(ep, struct ci_hw_ep, ep);
4452 + int direction, retval = 0;
4453 + unsigned long flags;
4454 +
4455 + if (ep == NULL || hwep->ep.desc == NULL)
4456 + return -EINVAL;
4457 +
4458 + if (usb_endpoint_xfer_isoc(hwep->ep.desc))
4459 + return -EOPNOTSUPP;
4460 +
4461 + spin_lock_irqsave(hwep->lock, flags);
4462 +
4463 + if (value && hwep->dir == TX && check_transfer &&
4464 + !list_empty(&hwep->qh.queue) &&
4465 + !usb_endpoint_xfer_control(hwep->ep.desc)) {
4466 + spin_unlock_irqrestore(hwep->lock, flags);
4467 + return -EAGAIN;
4468 + }
4469 +
4470 + direction = hwep->dir;
4471 + do {
4472 + retval |= hw_ep_set_halt(hwep->ci, hwep->num, hwep->dir, value);
4473 +
4474 + if (!value)
4475 + hwep->wedge = 0;
4476 +
4477 + if (hwep->type == USB_ENDPOINT_XFER_CONTROL)
4478 + hwep->dir = (hwep->dir == TX) ? RX : TX;
4479 +
4480 + } while (hwep->dir != direction);
4481 +
4482 + spin_unlock_irqrestore(hwep->lock, flags);
4483 + return retval;
4484 +}
4485 +
4486 +
4487 /**
4488 * _gadget_stop_activity: stops all USB activity, flushes & disables all endpts
4489 * @gadget: gadget
4490 @@ -1037,7 +1075,7 @@ __acquires(ci->lock)
4491 num += ci->hw_ep_max / 2;
4492
4493 spin_unlock(&ci->lock);
4494 - err = usb_ep_set_halt(&ci->ci_hw_ep[num].ep);
4495 + err = _ep_set_halt(&ci->ci_hw_ep[num].ep, 1, false);
4496 spin_lock(&ci->lock);
4497 if (!err)
4498 isr_setup_status_phase(ci);
4499 @@ -1096,8 +1134,8 @@ delegate:
4500
4501 if (err < 0) {
4502 spin_unlock(&ci->lock);
4503 - if (usb_ep_set_halt(&hwep->ep))
4504 - dev_err(ci->dev, "error: ep_set_halt\n");
4505 + if (_ep_set_halt(&hwep->ep, 1, false))
4506 + dev_err(ci->dev, "error: _ep_set_halt\n");
4507 spin_lock(&ci->lock);
4508 }
4509 }
4510 @@ -1128,9 +1166,9 @@ __acquires(ci->lock)
4511 err = isr_setup_status_phase(ci);
4512 if (err < 0) {
4513 spin_unlock(&ci->lock);
4514 - if (usb_ep_set_halt(&hwep->ep))
4515 + if (_ep_set_halt(&hwep->ep, 1, false))
4516 dev_err(ci->dev,
4517 - "error: ep_set_halt\n");
4518 + "error: _ep_set_halt\n");
4519 spin_lock(&ci->lock);
4520 }
4521 }
4522 @@ -1373,41 +1411,7 @@ static int ep_dequeue(struct usb_ep *ep, struct usb_request *req)
4523 */
4524 static int ep_set_halt(struct usb_ep *ep, int value)
4525 {
4526 - struct ci_hw_ep *hwep = container_of(ep, struct ci_hw_ep, ep);
4527 - int direction, retval = 0;
4528 - unsigned long flags;
4529 -
4530 - if (ep == NULL || hwep->ep.desc == NULL)
4531 - return -EINVAL;
4532 -
4533 - if (usb_endpoint_xfer_isoc(hwep->ep.desc))
4534 - return -EOPNOTSUPP;
4535 -
4536 - spin_lock_irqsave(hwep->lock, flags);
4537 -
4538 -#ifndef STALL_IN
4539 - /* g_file_storage MS compliant but g_zero fails chapter 9 compliance */
4540 - if (value && hwep->type == USB_ENDPOINT_XFER_BULK && hwep->dir == TX &&
4541 - !list_empty(&hwep->qh.queue)) {
4542 - spin_unlock_irqrestore(hwep->lock, flags);
4543 - return -EAGAIN;
4544 - }
4545 -#endif
4546 -
4547 - direction = hwep->dir;
4548 - do {
4549 - retval |= hw_ep_set_halt(hwep->ci, hwep->num, hwep->dir, value);
4550 -
4551 - if (!value)
4552 - hwep->wedge = 0;
4553 -
4554 - if (hwep->type == USB_ENDPOINT_XFER_CONTROL)
4555 - hwep->dir = (hwep->dir == TX) ? RX : TX;
4556 -
4557 - } while (hwep->dir != direction);
4558 -
4559 - spin_unlock_irqrestore(hwep->lock, flags);
4560 - return retval;
4561 + return _ep_set_halt(ep, value, true);
4562 }
4563
4564 /**
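
    The udc.c change above splits the halt logic out of the gadget-facing
    ep_set_halt() into _ep_set_halt() with a check_transfer flag: internal
    callers that must stall an endpoint from the setup path pass false and
    bypass the -EAGAIN "transfer still queued" guard, while usb_ep_set_halt()
    callers keep it. One detail worth noting is the do/while loop: control
    endpoints are bidirectional, so the halt is applied to both directions by
    toggling hwep->dir until it wraps back around. A minimal userspace sketch
    of just that toggle loop (types and names are stand-ins, not the chipidea
    API):

    #include <stdio.h>

    enum dir { RX = 0, TX = 1 };

    /* Stand-in for hw_ep_set_halt(); here it just logs the call. */
    static int hw_set_halt(int ep_num, enum dir d, int value)
    {
        printf("ep%d %s: halt=%d\n", ep_num, d == TX ? "in" : "out", value);
        return 0;
    }

    /* Halt a control endpoint: toggle the direction until we return to
     * the starting one, so RX and TX each get (un)halted exactly once. */
    static int halt_control_ep(int ep_num, enum dir start, int value)
    {
        enum dir d = start;
        int retval = 0;

        do {
            retval |= hw_set_halt(ep_num, d, value);
            d = (d == TX) ? RX : TX;  /* control eps are bidirectional */
        } while (d != start);

        return retval;
    }

    int main(void)
    {
        return halt_control_ep(0, TX, 1); /* stalls ep0 in both directions */
    }
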
4565 diff --git a/drivers/usb/core/config.c b/drivers/usb/core/config.c
4566 index b2a540b43f97..b9ddf0c1ffe5 100644
4567 --- a/drivers/usb/core/config.c
4568 +++ b/drivers/usb/core/config.c
4569 @@ -112,7 +112,7 @@ static void usb_parse_ss_endpoint_companion(struct device *ddev, int cfgno,
4570 cfgno, inum, asnum, ep->desc.bEndpointAddress);
4571 ep->ss_ep_comp.bmAttributes = 16;
4572 } else if (usb_endpoint_xfer_isoc(&ep->desc) &&
4573 - desc->bmAttributes > 2) {
4574 + USB_SS_MULT(desc->bmAttributes) > 3) {
4575 dev_warn(ddev, "Isoc endpoint has Mult of %d in "
4576 "config %d interface %d altsetting %d ep %d: "
4577 "setting to 3\n", desc->bmAttributes + 1,
4578 @@ -121,7 +121,8 @@ static void usb_parse_ss_endpoint_companion(struct device *ddev, int cfgno,
4579 }
4580
4581 if (usb_endpoint_xfer_isoc(&ep->desc))
4582 - max_tx = (desc->bMaxBurst + 1) * (desc->bmAttributes + 1) *
4583 + max_tx = (desc->bMaxBurst + 1) *
4584 + (USB_SS_MULT(desc->bmAttributes)) *
4585 usb_endpoint_maxp(&ep->desc);
4586 else if (usb_endpoint_xfer_int(&ep->desc))
4587 max_tx = usb_endpoint_maxp(&ep->desc) *
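
    The config.c hunk is a masking fix: the SuperSpeed endpoint companion's
    bmAttributes stores the isoc Mult value zero-based in its low two bits,
    so both the range check and the bandwidth product must go through
    USB_SS_MULT() instead of trusting the raw byte, which a noncompliant
    device could pollute with reserved bits. A standalone sketch with the
    macro inlined (the descriptor values are illustrative):

    #include <stdio.h>

    /* From include/uapi/linux/usb/ch9.h: Mult is bits 1:0, zero-based. */
    #define USB_SS_MULT(p) (1 + ((p) & 0x3))

    int main(void)
    {
        unsigned bMaxBurst    = 1;    /* zero-based: 2 packets per burst */
        unsigned bmAttributes = 0x42; /* reserved bit set, Mult field = 2 */
        unsigned maxp         = 1024; /* endpoint wMaxPacketSize */

        /* Buggy: trusts the raw byte -> (0x42 + 1) = 67 "packets". */
        unsigned bad  = (bMaxBurst + 1) * (bmAttributes + 1) * maxp;
        /* Fixed: mask down to the two Mult bits -> 3 packets/interval. */
        unsigned good = (bMaxBurst + 1) * USB_SS_MULT(bmAttributes) * maxp;

        printf("buggy=%u fixed=%u\n", bad, good); /* 137216 vs 6144 */
        return 0;
    }
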
4588 diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
4589 index 41e510ae8c83..8a77a417ccfd 100644
4590 --- a/drivers/usb/core/quirks.c
4591 +++ b/drivers/usb/core/quirks.c
4592 @@ -54,6 +54,13 @@ static const struct usb_device_id usb_quirk_list[] = {
4593 { USB_DEVICE(0x046d, 0x082d), .driver_info = USB_QUIRK_DELAY_INIT },
4594 { USB_DEVICE(0x046d, 0x0843), .driver_info = USB_QUIRK_DELAY_INIT },
4595
4596 + /* Logitech ConferenceCam CC3000e */
4597 + { USB_DEVICE(0x046d, 0x0847), .driver_info = USB_QUIRK_DELAY_INIT },
4598 + { USB_DEVICE(0x046d, 0x0848), .driver_info = USB_QUIRK_DELAY_INIT },
4599 +
4600 + /* Logitech PTZ Pro Camera */
4601 + { USB_DEVICE(0x046d, 0x0853), .driver_info = USB_QUIRK_DELAY_INIT },
4602 +
4603 /* Logitech Quickcam Fusion */
4604 { USB_DEVICE(0x046d, 0x08c1), .driver_info = USB_QUIRK_RESET_RESUME },
4605
4606 @@ -78,6 +85,12 @@ static const struct usb_device_id usb_quirk_list[] = {
4607 /* Philips PSC805 audio device */
4608 { USB_DEVICE(0x0471, 0x0155), .driver_info = USB_QUIRK_RESET_RESUME },
4609
4610 + /* Plantronic Audio 655 DSP */
4611 + { USB_DEVICE(0x047f, 0xc008), .driver_info = USB_QUIRK_RESET_RESUME },
4612 +
4613 + /* Plantronic Audio 648 USB */
4614 + { USB_DEVICE(0x047f, 0xc013), .driver_info = USB_QUIRK_RESET_RESUME },
4615 +
4616 /* Artisman Watchdog Dongle */
4617 { USB_DEVICE(0x04b4, 0x0526), .driver_info =
4618 USB_QUIRK_CONFIG_INTF_STRINGS },
4619 diff --git a/drivers/usb/dwc3/ep0.c b/drivers/usb/dwc3/ep0.c
4620 index bdc995da3420..1c1525b0a1fb 100644
4621 --- a/drivers/usb/dwc3/ep0.c
4622 +++ b/drivers/usb/dwc3/ep0.c
4623 @@ -818,6 +818,11 @@ static void dwc3_ep0_complete_data(struct dwc3 *dwc,
4624 unsigned maxp = ep0->endpoint.maxpacket;
4625
4626 transfer_size += (maxp - (transfer_size % maxp));
4627 +
4628 + /* Maximum of DWC3_EP0_BOUNCE_SIZE can only be received */
4629 + if (transfer_size > DWC3_EP0_BOUNCE_SIZE)
4630 + transfer_size = DWC3_EP0_BOUNCE_SIZE;
4631 +
4632 transferred = min_t(u32, ur->length,
4633 transfer_size - length);
4634 memcpy(ur->buf, dwc->ep0_bounce, transferred);
4635 @@ -937,11 +942,14 @@ static void __dwc3_ep0_do_control_data(struct dwc3 *dwc,
4636 return;
4637 }
4638
4639 - WARN_ON(req->request.length > DWC3_EP0_BOUNCE_SIZE);
4640 -
4641 maxpacket = dep->endpoint.maxpacket;
4642 transfer_size = roundup(req->request.length, maxpacket);
4643
4644 + if (transfer_size > DWC3_EP0_BOUNCE_SIZE) {
4645 + dev_WARN(dwc->dev, "bounce buf can't handle req len\n");
4646 + transfer_size = DWC3_EP0_BOUNCE_SIZE;
4647 + }
4648 +
4649 dwc->ep0_bounced = true;
4650
4651 /*
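
    Both ep0.c hunks enforce the same invariant: control OUT data on dwc3
    lands in a fixed-size bounce buffer, so after rounding the request up to
    a maxpacket multiple the programmed transfer size must be clamped, not
    merely warned about, or the controller would DMA past the end of the
    buffer. A sketch of the round-then-clamp step, assuming the 512-byte
    buffer that DWC3_EP0_BOUNCE_SIZE names in this tree:

    #include <stdio.h>

    #define EP0_BOUNCE_SIZE 512U /* assumed: DWC3_EP0_BOUNCE_SIZE == 512 */

    /* Round a request up to a maxpacket multiple, then clamp it to the
     * bounce buffer so the controller can never DMA past the end. */
    static unsigned ep0_out_transfer_size(unsigned req_len, unsigned maxp)
    {
        unsigned size = ((req_len + maxp - 1) / maxp) * maxp;

        if (size > EP0_BOUNCE_SIZE)
            size = EP0_BOUNCE_SIZE;
        return size;
    }

    int main(void)
    {
        /* A 600-byte control read with 64-byte packets rounds to 640 and
         * clamps to 512; the shortfall is the caller's problem, not a
         * buffer overrun. */
        printf("%u\n", ep0_out_transfer_size(600, 64));
        return 0;
    }
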
4652 diff --git a/drivers/usb/host/ehci-sysfs.c b/drivers/usb/host/ehci-sysfs.c
4653 index f6459dfb6f54..94054dad7710 100644
4654 --- a/drivers/usb/host/ehci-sysfs.c
4655 +++ b/drivers/usb/host/ehci-sysfs.c
4656 @@ -29,7 +29,7 @@ static ssize_t show_companion(struct device *dev,
4657 int count = PAGE_SIZE;
4658 char *ptr = buf;
4659
4660 - ehci = hcd_to_ehci(bus_to_hcd(dev_get_drvdata(dev)));
4661 + ehci = hcd_to_ehci(dev_get_drvdata(dev));
4662 nports = HCS_N_PORTS(ehci->hcs_params);
4663
4664 for (index = 0; index < nports; ++index) {
4665 @@ -54,7 +54,7 @@ static ssize_t store_companion(struct device *dev,
4666 struct ehci_hcd *ehci;
4667 int portnum, new_owner;
4668
4669 - ehci = hcd_to_ehci(bus_to_hcd(dev_get_drvdata(dev)));
4670 + ehci = hcd_to_ehci(dev_get_drvdata(dev));
4671 new_owner = PORT_OWNER; /* Owned by companion */
4672 if (sscanf(buf, "%d", &portnum) != 1)
4673 return -EINVAL;
4674 @@ -85,7 +85,7 @@ static ssize_t show_uframe_periodic_max(struct device *dev,
4675 struct ehci_hcd *ehci;
4676 int n;
4677
4678 - ehci = hcd_to_ehci(bus_to_hcd(dev_get_drvdata(dev)));
4679 + ehci = hcd_to_ehci(dev_get_drvdata(dev));
4680 n = scnprintf(buf, PAGE_SIZE, "%d\n", ehci->uframe_periodic_max);
4681 return n;
4682 }
4683 @@ -101,7 +101,7 @@ static ssize_t store_uframe_periodic_max(struct device *dev,
4684 unsigned long flags;
4685 ssize_t ret;
4686
4687 - ehci = hcd_to_ehci(bus_to_hcd(dev_get_drvdata(dev)));
4688 + ehci = hcd_to_ehci(dev_get_drvdata(dev));
4689 if (kstrtouint(buf, 0, &uframe_periodic_max) < 0)
4690 return -EINVAL;
4691
4692 diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
4693 index d44c904df055..ce3087bd95d2 100644
4694 --- a/drivers/usb/host/xhci-mem.c
4695 +++ b/drivers/usb/host/xhci-mem.c
4696 @@ -1502,10 +1502,10 @@ int xhci_endpoint_init(struct xhci_hcd *xhci,
4697 * use Event Data TRBs, and we don't chain in a link TRB on short
4698 * transfers, we're basically dividing by 1.
4699 *
4700 - * xHCI 1.0 specification indicates that the Average TRB Length should
4701 - * be set to 8 for control endpoints.
4702 + * xHCI 1.0 and 1.1 specification indicates that the Average TRB Length
4703 + * should be set to 8 for control endpoints.
4704 */
4705 - if (usb_endpoint_xfer_control(&ep->desc) && xhci->hci_version == 0x100)
4706 + if (usb_endpoint_xfer_control(&ep->desc) && xhci->hci_version >= 0x100)
4707 ep_ctx->tx_info |= cpu_to_le32(AVG_TRB_LENGTH_FOR_EP(8));
4708 else
4709 ep_ctx->tx_info |=
4710 @@ -1796,8 +1796,7 @@ void xhci_mem_cleanup(struct xhci_hcd *xhci)
4711 int size;
4712 int i, j, num_ports;
4713
4714 - if (timer_pending(&xhci->cmd_timer))
4715 - del_timer_sync(&xhci->cmd_timer);
4716 + del_timer_sync(&xhci->cmd_timer);
4717
4718 /* Free the Event Ring Segment Table and the actual Event Ring */
4719 size = sizeof(struct xhci_erst_entry)*(xhci->erst.num_entries);
4720 @@ -2325,6 +2324,10 @@ int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags)
4721
4722 INIT_LIST_HEAD(&xhci->cmd_list);
4723
4724 + /* init command timeout timer */
4725 + setup_timer(&xhci->cmd_timer, xhci_handle_command_timeout,
4726 + (unsigned long)xhci);
4727 +
4728 page_size = readl(&xhci->op_regs->page_size);
4729 xhci_dbg_trace(xhci, trace_xhci_dbg_init,
4730 "Supported page size register = 0x%x", page_size);
4731 @@ -2509,11 +2512,6 @@ int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags)
4732 "Wrote ERST address to ir_set 0.");
4733 xhci_print_ir_set(xhci, 0);
4734
4735 - /* init command timeout timer */
4736 - init_timer(&xhci->cmd_timer);
4737 - xhci->cmd_timer.data = (unsigned long) xhci;
4738 - xhci->cmd_timer.function = xhci_handle_command_timeout;
4739 -
4740 /*
4741 * XXX: Might need to set the Interrupter Moderation Register to
4742 * something other than the default (~1ms minimum between interrupts).
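
    Two things happen to the command timer here: the open-coded init_timer()
    plus field assignments become a single setup_timer() call, and the
    initialization moves to the top of xhci_mem_init(), so early error paths
    that now reach an unconditional del_timer_sync() in xhci_mem_cleanup()
    never touch an uninitialized timer. A stubbed userspace sketch of the
    setup_timer() equivalence (not the real <linux/timer.h> API):

    #include <stddef.h>

    /* Stub of the timer API, just enough to show the equivalence. */
    struct timer_list {
        void (*function)(unsigned long);
        unsigned long data;
    };

    static void init_timer(struct timer_list *t)
    {
        t->function = NULL;
        t->data = 0;
    }

    /* setup_timer(t, fn, d) is shorthand for init_timer() plus the two
     * field assignments the old xhci code did by hand. */
    static void setup_timer(struct timer_list *t,
                            void (*fn)(unsigned long), unsigned long data)
    {
        init_timer(t);
        t->function = fn;
        t->data = data;
    }

    static void cmd_timeout(unsigned long data) { (void)data; /* ... */ }

    int main(void)
    {
        struct timer_list cmd_timer;

        setup_timer(&cmd_timer, cmd_timeout, 0);
        return cmd_timer.function != cmd_timeout; /* 0 on success */
    }
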
4743 diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
4744 index c70291cffc27..136259bc93b8 100644
4745 --- a/drivers/usb/host/xhci-ring.c
4746 +++ b/drivers/usb/host/xhci-ring.c
4747 @@ -3049,9 +3049,11 @@ static int queue_bulk_sg_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
4748 struct xhci_td *td;
4749 struct scatterlist *sg;
4750 int num_sgs;
4751 - int trb_buff_len, this_sg_len, running_total;
4752 + int trb_buff_len, this_sg_len, running_total, ret;
4753 unsigned int total_packet_count;
4754 + bool zero_length_needed;
4755 bool first_trb;
4756 + int last_trb_num;
4757 u64 addr;
4758 bool more_trbs_coming;
4759
4760 @@ -3067,13 +3069,27 @@ static int queue_bulk_sg_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
4761 total_packet_count = DIV_ROUND_UP(urb->transfer_buffer_length,
4762 usb_endpoint_maxp(&urb->ep->desc));
4763
4764 - trb_buff_len = prepare_transfer(xhci, xhci->devs[slot_id],
4765 + ret = prepare_transfer(xhci, xhci->devs[slot_id],
4766 ep_index, urb->stream_id,
4767 num_trbs, urb, 0, mem_flags);
4768 - if (trb_buff_len < 0)
4769 - return trb_buff_len;
4770 + if (ret < 0)
4771 + return ret;
4772
4773 urb_priv = urb->hcpriv;
4774 +
4775 + /* Deal with URB_ZERO_PACKET - need one more td/trb */
4776 + zero_length_needed = urb->transfer_flags & URB_ZERO_PACKET &&
4777 + urb_priv->length == 2;
4778 + if (zero_length_needed) {
4779 + num_trbs++;
4780 + xhci_dbg(xhci, "Creating zero length td.\n");
4781 + ret = prepare_transfer(xhci, xhci->devs[slot_id],
4782 + ep_index, urb->stream_id,
4783 + 1, urb, 1, mem_flags);
4784 + if (ret < 0)
4785 + return ret;
4786 + }
4787 +
4788 td = urb_priv->td[0];
4789
4790 /*
4791 @@ -3103,6 +3119,7 @@ static int queue_bulk_sg_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
4792 trb_buff_len = urb->transfer_buffer_length;
4793
4794 first_trb = true;
4795 + last_trb_num = zero_length_needed ? 2 : 1;
4796 /* Queue the first TRB, even if it's zero-length */
4797 do {
4798 u32 field = 0;
4799 @@ -3120,12 +3137,15 @@ static int queue_bulk_sg_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
4800 /* Chain all the TRBs together; clear the chain bit in the last
4801 * TRB to indicate it's the last TRB in the chain.
4802 */
4803 - if (num_trbs > 1) {
4804 + if (num_trbs > last_trb_num) {
4805 field |= TRB_CHAIN;
4806 - } else {
4807 - /* FIXME - add check for ZERO_PACKET flag before this */
4808 + } else if (num_trbs == last_trb_num) {
4809 td->last_trb = ep_ring->enqueue;
4810 field |= TRB_IOC;
4811 + } else if (zero_length_needed && num_trbs == 1) {
4812 + trb_buff_len = 0;
4813 + urb_priv->td[1]->last_trb = ep_ring->enqueue;
4814 + field |= TRB_IOC;
4815 }
4816
4817 /* Only set interrupt on short packet for IN endpoints */
4818 @@ -3187,7 +3207,7 @@ static int queue_bulk_sg_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
4819 if (running_total + trb_buff_len > urb->transfer_buffer_length)
4820 trb_buff_len =
4821 urb->transfer_buffer_length - running_total;
4822 - } while (running_total < urb->transfer_buffer_length);
4823 + } while (num_trbs > 0);
4824
4825 check_trb_math(urb, num_trbs, running_total);
4826 giveback_first_trb(xhci, slot_id, ep_index, urb->stream_id,
4827 @@ -3205,7 +3225,9 @@ int xhci_queue_bulk_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
4828 int num_trbs;
4829 struct xhci_generic_trb *start_trb;
4830 bool first_trb;
4831 + int last_trb_num;
4832 bool more_trbs_coming;
4833 + bool zero_length_needed;
4834 int start_cycle;
4835 u32 field, length_field;
4836
4837 @@ -3236,7 +3258,6 @@ int xhci_queue_bulk_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
4838 num_trbs++;
4839 running_total += TRB_MAX_BUFF_SIZE;
4840 }
4841 - /* FIXME: this doesn't deal with URB_ZERO_PACKET - need one more */
4842
4843 ret = prepare_transfer(xhci, xhci->devs[slot_id],
4844 ep_index, urb->stream_id,
4845 @@ -3245,6 +3266,20 @@ int xhci_queue_bulk_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
4846 return ret;
4847
4848 urb_priv = urb->hcpriv;
4849 +
4850 + /* Deal with URB_ZERO_PACKET - need one more td/trb */
4851 + zero_length_needed = urb->transfer_flags & URB_ZERO_PACKET &&
4852 + urb_priv->length == 2;
4853 + if (zero_length_needed) {
4854 + num_trbs++;
4855 + xhci_dbg(xhci, "Creating zero length td.\n");
4856 + ret = prepare_transfer(xhci, xhci->devs[slot_id],
4857 + ep_index, urb->stream_id,
4858 + 1, urb, 1, mem_flags);
4859 + if (ret < 0)
4860 + return ret;
4861 + }
4862 +
4863 td = urb_priv->td[0];
4864
4865 /*
4866 @@ -3266,7 +3301,7 @@ int xhci_queue_bulk_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
4867 trb_buff_len = urb->transfer_buffer_length;
4868
4869 first_trb = true;
4870 -
4871 + last_trb_num = zero_length_needed ? 2 : 1;
4872 /* Queue the first TRB, even if it's zero-length */
4873 do {
4874 u32 remainder = 0;
4875 @@ -3283,12 +3318,15 @@ int xhci_queue_bulk_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
4876 /* Chain all the TRBs together; clear the chain bit in the last
4877 * TRB to indicate it's the last TRB in the chain.
4878 */
4879 - if (num_trbs > 1) {
4880 + if (num_trbs > last_trb_num) {
4881 field |= TRB_CHAIN;
4882 - } else {
4883 - /* FIXME - add check for ZERO_PACKET flag before this */
4884 + } else if (num_trbs == last_trb_num) {
4885 td->last_trb = ep_ring->enqueue;
4886 field |= TRB_IOC;
4887 + } else if (zero_length_needed && num_trbs == 1) {
4888 + trb_buff_len = 0;
4889 + urb_priv->td[1]->last_trb = ep_ring->enqueue;
4890 + field |= TRB_IOC;
4891 }
4892
4893 /* Only set interrupt on short packet for IN endpoints */
4894 @@ -3326,7 +3364,7 @@ int xhci_queue_bulk_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
4895 trb_buff_len = urb->transfer_buffer_length - running_total;
4896 if (trb_buff_len > TRB_MAX_BUFF_SIZE)
4897 trb_buff_len = TRB_MAX_BUFF_SIZE;
4898 - } while (running_total < urb->transfer_buffer_length);
4899 + } while (num_trbs > 0);
4900
4901 check_trb_math(urb, num_trbs, running_total);
4902 giveback_first_trb(xhci, slot_id, ep_index, urb->stream_id,
4903 @@ -3393,8 +3431,8 @@ int xhci_queue_ctrl_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
4904 if (start_cycle == 0)
4905 field |= 0x1;
4906
4907 - /* xHCI 1.0 6.4.1.2.1: Transfer Type field */
4908 - if (xhci->hci_version == 0x100) {
4909 + /* xHCI 1.0/1.1 6.4.1.2.1: Transfer Type field */
4910 + if (xhci->hci_version >= 0x100) {
4911 if (urb->transfer_buffer_length > 0) {
4912 if (setup->bRequestType & USB_DIR_IN)
4913 field |= TRB_TX_TYPE(TRB_DATA_IN);
4914 diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
4915 index 8e5f46082316..98380fa68fbf 100644
4916 --- a/drivers/usb/host/xhci.c
4917 +++ b/drivers/usb/host/xhci.c
4918 @@ -147,7 +147,8 @@ static int xhci_start(struct xhci_hcd *xhci)
4919 "waited %u microseconds.\n",
4920 XHCI_MAX_HALT_USEC);
4921 if (!ret)
4922 - xhci->xhc_state &= ~XHCI_STATE_HALTED;
4923 + xhci->xhc_state &= ~(XHCI_STATE_HALTED | XHCI_STATE_DYING);
4924 +
4925 return ret;
4926 }
4927
4928 @@ -1343,6 +1344,11 @@ int xhci_urb_enqueue(struct usb_hcd *hcd, struct urb *urb, gfp_t mem_flags)
4929
4930 if (usb_endpoint_xfer_isoc(&urb->ep->desc))
4931 size = urb->number_of_packets;
4932 + else if (usb_endpoint_is_bulk_out(&urb->ep->desc) &&
4933 + urb->transfer_buffer_length > 0 &&
4934 + urb->transfer_flags & URB_ZERO_PACKET &&
4935 + !(urb->transfer_buffer_length % usb_endpoint_maxp(&urb->ep->desc)))
4936 + size = 2;
4937 else
4938 size = 1;
4939
4940 @@ -3787,6 +3793,9 @@ static int xhci_setup_device(struct usb_hcd *hcd, struct usb_device *udev,
4941 u64 temp_64;
4942 struct xhci_command *command;
4943
4944 + if (xhci->xhc_state) /* dying or halted */
4945 + return -EINVAL;
4946 +
4947 if (!udev->slot_id) {
4948 xhci_dbg_trace(xhci, trace_xhci_dbg_address,
4949 "Bad Slot ID %d", udev->slot_id);
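
    The urb_enqueue hunk above sizes the TD allocation for the zero-length
    packet handling added in xhci-ring.c: a bulk OUT transfer whose length is
    an exact nonzero multiple of wMaxPacketSize gives the device no short
    packet to mark the end, so URB_ZERO_PACKET asks for an explicit trailing
    ZLP and the driver reserves a second TD (urb_priv->length == 2) for it.
    The decision reduces to a small predicate, sketched here:

    #include <stdio.h>
    #include <stdbool.h>

    /* How many TDs a bulk OUT transfer needs: 2 when a trailing
     * zero-length packet must be queued, 1 otherwise. */
    static int bulk_out_td_count(unsigned len, unsigned maxp, bool zero_flag)
    {
        if (len > 0 && zero_flag && (len % maxp) == 0)
            return 2;
        return 1;
    }

    int main(void)
    {
        printf("%d\n", bulk_out_td_count(1024, 512, true));  /* 2 */
        printf("%d\n", bulk_out_td_count(1000, 512, true));  /* 1: last packet is short */
        printf("%d\n", bulk_out_td_count(1024, 512, false)); /* 1: flag not set */
        return 0;
    }
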
4950 diff --git a/drivers/usb/musb/musb_cppi41.c b/drivers/usb/musb/musb_cppi41.c
4951 index 5a9b977fbc19..d59b232614d5 100644
4952 --- a/drivers/usb/musb/musb_cppi41.c
4953 +++ b/drivers/usb/musb/musb_cppi41.c
4954 @@ -600,7 +600,7 @@ static int cppi41_dma_controller_start(struct cppi41_dma_controller *controller)
4955 {
4956 struct musb *musb = controller->musb;
4957 struct device *dev = musb->controller;
4958 - struct device_node *np = dev->of_node;
4959 + struct device_node *np = dev->parent->of_node;
4960 struct cppi41_dma_channel *cppi41_channel;
4961 int count;
4962 int i;
4963 @@ -650,7 +650,7 @@ static int cppi41_dma_controller_start(struct cppi41_dma_controller *controller)
4964 musb_dma->status = MUSB_DMA_STATUS_FREE;
4965 musb_dma->max_len = SZ_4M;
4966
4967 - dc = dma_request_slave_channel(dev, str);
4968 + dc = dma_request_slave_channel(dev->parent, str);
4969 if (!dc) {
4970 dev_err(dev, "Failed to request %s.\n", str);
4971 ret = -EPROBE_DEFER;
4972 @@ -680,7 +680,7 @@ struct dma_controller *dma_controller_create(struct musb *musb,
4973 struct cppi41_dma_controller *controller;
4974 int ret = 0;
4975
4976 - if (!musb->controller->of_node) {
4977 + if (!musb->controller->parent->of_node) {
4978 dev_err(musb->controller, "Need DT for the DMA engine.\n");
4979 return NULL;
4980 }
4981 diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
4982 index 4c8b3b82103d..a5a0376bbd48 100644
4983 --- a/drivers/usb/serial/ftdi_sio.c
4984 +++ b/drivers/usb/serial/ftdi_sio.c
4985 @@ -605,6 +605,10 @@ static const struct usb_device_id id_table_combined[] = {
4986 { USB_DEVICE(FTDI_VID, FTDI_NT_ORIONLXM_PID),
4987 .driver_info = (kernel_ulong_t)&ftdi_jtag_quirk },
4988 { USB_DEVICE(FTDI_VID, FTDI_SYNAPSE_SS200_PID) },
4989 + { USB_DEVICE(FTDI_VID, FTDI_CUSTOMWARE_MINIPLEX_PID) },
4990 + { USB_DEVICE(FTDI_VID, FTDI_CUSTOMWARE_MINIPLEX2_PID) },
4991 + { USB_DEVICE(FTDI_VID, FTDI_CUSTOMWARE_MINIPLEX2WI_PID) },
4992 + { USB_DEVICE(FTDI_VID, FTDI_CUSTOMWARE_MINIPLEX3_PID) },
4993 /*
4994 * ELV devices:
4995 */
4996 diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h
4997 index 792e054126de..2943b97b2a83 100644
4998 --- a/drivers/usb/serial/ftdi_sio_ids.h
4999 +++ b/drivers/usb/serial/ftdi_sio_ids.h
5000 @@ -568,6 +568,14 @@
5001 */
5002 #define FTDI_SYNAPSE_SS200_PID 0x9090 /* SS200 - SNAP Stick 200 */
5003
5004 +/*
5005 + * CustomWare / ShipModul NMEA multiplexers product ids (FTDI_VID)
5006 + */
5007 +#define FTDI_CUSTOMWARE_MINIPLEX_PID 0xfd48 /* MiniPlex first generation NMEA Multiplexer */
5008 +#define FTDI_CUSTOMWARE_MINIPLEX2_PID 0xfd49 /* MiniPlex-USB and MiniPlex-2 series */
5009 +#define FTDI_CUSTOMWARE_MINIPLEX2WI_PID 0xfd4a /* MiniPlex-2Wi */
5010 +#define FTDI_CUSTOMWARE_MINIPLEX3_PID 0xfd4b /* MiniPlex-3 series */
5011 +
5012
5013 /********************************/
5014 /** third-party VID/PID combos **/
5015 diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
5016 index 463feb836f20..17d04d98358c 100644
5017 --- a/drivers/usb/serial/option.c
5018 +++ b/drivers/usb/serial/option.c
5019 @@ -278,6 +278,10 @@ static void option_instat_callback(struct urb *urb);
5020 #define ZTE_PRODUCT_MF622 0x0001
5021 #define ZTE_PRODUCT_MF628 0x0015
5022 #define ZTE_PRODUCT_MF626 0x0031
5023 +#define ZTE_PRODUCT_ZM8620_X 0x0396
5024 +#define ZTE_PRODUCT_ME3620_MBIM 0x0426
5025 +#define ZTE_PRODUCT_ME3620_X 0x1432
5026 +#define ZTE_PRODUCT_ME3620_L 0x1433
5027 #define ZTE_PRODUCT_AC2726 0xfff1
5028 #define ZTE_PRODUCT_MG880 0xfffd
5029 #define ZTE_PRODUCT_CDMA_TECH 0xfffe
5030 @@ -552,6 +556,18 @@ static const struct option_blacklist_info zte_mc2716_z_blacklist = {
5031 .sendsetup = BIT(1) | BIT(2) | BIT(3),
5032 };
5033
5034 +static const struct option_blacklist_info zte_me3620_mbim_blacklist = {
5035 + .reserved = BIT(2) | BIT(3) | BIT(4),
5036 +};
5037 +
5038 +static const struct option_blacklist_info zte_me3620_xl_blacklist = {
5039 + .reserved = BIT(3) | BIT(4) | BIT(5),
5040 +};
5041 +
5042 +static const struct option_blacklist_info zte_zm8620_x_blacklist = {
5043 + .reserved = BIT(3) | BIT(4) | BIT(5),
5044 +};
5045 +
5046 static const struct option_blacklist_info huawei_cdc12_blacklist = {
5047 .reserved = BIT(1) | BIT(2),
5048 };
5049 @@ -1599,6 +1615,14 @@ static const struct usb_device_id option_ids[] = {
5050 .driver_info = (kernel_ulong_t)&zte_ad3812_z_blacklist },
5051 { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_MC2716, 0xff, 0xff, 0xff),
5052 .driver_info = (kernel_ulong_t)&zte_mc2716_z_blacklist },
5053 + { USB_DEVICE(ZTE_VENDOR_ID, ZTE_PRODUCT_ME3620_L),
5054 + .driver_info = (kernel_ulong_t)&zte_me3620_xl_blacklist },
5055 + { USB_DEVICE(ZTE_VENDOR_ID, ZTE_PRODUCT_ME3620_MBIM),
5056 + .driver_info = (kernel_ulong_t)&zte_me3620_mbim_blacklist },
5057 + { USB_DEVICE(ZTE_VENDOR_ID, ZTE_PRODUCT_ME3620_X),
5058 + .driver_info = (kernel_ulong_t)&zte_me3620_xl_blacklist },
5059 + { USB_DEVICE(ZTE_VENDOR_ID, ZTE_PRODUCT_ZM8620_X),
5060 + .driver_info = (kernel_ulong_t)&zte_zm8620_x_blacklist },
5061 { USB_VENDOR_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0xff, 0x02, 0x01) },
5062 { USB_VENDOR_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0xff, 0x02, 0x05) },
5063 { USB_VENDOR_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0xff, 0x86, 0x10) },
5064 diff --git a/drivers/usb/serial/symbolserial.c b/drivers/usb/serial/symbolserial.c
5065 index 8fceec7298e0..6ed804450a5a 100644
5066 --- a/drivers/usb/serial/symbolserial.c
5067 +++ b/drivers/usb/serial/symbolserial.c
5068 @@ -94,7 +94,7 @@ exit:
5069
5070 static int symbol_open(struct tty_struct *tty, struct usb_serial_port *port)
5071 {
5072 - struct symbol_private *priv = usb_get_serial_data(port->serial);
5073 + struct symbol_private *priv = usb_get_serial_port_data(port);
5074 unsigned long flags;
5075 int result = 0;
5076
5077 @@ -120,7 +120,7 @@ static void symbol_close(struct usb_serial_port *port)
5078 static void symbol_throttle(struct tty_struct *tty)
5079 {
5080 struct usb_serial_port *port = tty->driver_data;
5081 - struct symbol_private *priv = usb_get_serial_data(port->serial);
5082 + struct symbol_private *priv = usb_get_serial_port_data(port);
5083
5084 spin_lock_irq(&priv->lock);
5085 priv->throttled = true;
5086 @@ -130,7 +130,7 @@ static void symbol_throttle(struct tty_struct *tty)
5087 static void symbol_unthrottle(struct tty_struct *tty)
5088 {
5089 struct usb_serial_port *port = tty->driver_data;
5090 - struct symbol_private *priv = usb_get_serial_data(port->serial);
5091 + struct symbol_private *priv = usb_get_serial_port_data(port);
5092 int result;
5093 bool was_throttled;
5094
5095 diff --git a/drivers/usb/serial/whiteheat.c b/drivers/usb/serial/whiteheat.c
5096 index 6c3734d2b45a..d3ea90bef84d 100644
5097 --- a/drivers/usb/serial/whiteheat.c
5098 +++ b/drivers/usb/serial/whiteheat.c
5099 @@ -80,6 +80,8 @@ static int whiteheat_firmware_download(struct usb_serial *serial,
5100 static int whiteheat_firmware_attach(struct usb_serial *serial);
5101
5102 /* function prototypes for the Connect Tech WhiteHEAT serial converter */
5103 +static int whiteheat_probe(struct usb_serial *serial,
5104 + const struct usb_device_id *id);
5105 static int whiteheat_attach(struct usb_serial *serial);
5106 static void whiteheat_release(struct usb_serial *serial);
5107 static int whiteheat_port_probe(struct usb_serial_port *port);
5108 @@ -116,6 +118,7 @@ static struct usb_serial_driver whiteheat_device = {
5109 .description = "Connect Tech - WhiteHEAT",
5110 .id_table = id_table_std,
5111 .num_ports = 4,
5112 + .probe = whiteheat_probe,
5113 .attach = whiteheat_attach,
5114 .release = whiteheat_release,
5115 .port_probe = whiteheat_port_probe,
5116 @@ -217,6 +220,34 @@ static int whiteheat_firmware_attach(struct usb_serial *serial)
5117 /*****************************************************************************
5118 * Connect Tech's White Heat serial driver functions
5119 *****************************************************************************/
5120 +
5121 +static int whiteheat_probe(struct usb_serial *serial,
5122 + const struct usb_device_id *id)
5123 +{
5124 + struct usb_host_interface *iface_desc;
5125 + struct usb_endpoint_descriptor *endpoint;
5126 + size_t num_bulk_in = 0;
5127 + size_t num_bulk_out = 0;
5128 + size_t min_num_bulk;
5129 + unsigned int i;
5130 +
5131 + iface_desc = serial->interface->cur_altsetting;
5132 +
5133 + for (i = 0; i < iface_desc->desc.bNumEndpoints; i++) {
5134 + endpoint = &iface_desc->endpoint[i].desc;
5135 + if (usb_endpoint_is_bulk_in(endpoint))
5136 + ++num_bulk_in;
5137 + if (usb_endpoint_is_bulk_out(endpoint))
5138 + ++num_bulk_out;
5139 + }
5140 +
5141 + min_num_bulk = COMMAND_PORT + 1;
5142 + if (num_bulk_in < min_num_bulk || num_bulk_out < min_num_bulk)
5143 + return -ENODEV;
5144 +
5145 + return 0;
5146 +}
5147 +
5148 static int whiteheat_attach(struct usb_serial *serial)
5149 {
5150 struct usb_serial_port *command_port;
5151 diff --git a/drivers/video/fbdev/Kconfig b/drivers/video/fbdev/Kconfig
5152 index c7bf606a8706..a5f88377cec5 100644
5153 --- a/drivers/video/fbdev/Kconfig
5154 +++ b/drivers/video/fbdev/Kconfig
5155 @@ -298,7 +298,7 @@ config FB_ARMCLCD
5156
5157 # Helper logic selected only by the ARM Versatile platform family.
5158 config PLAT_VERSATILE_CLCD
5159 - def_bool ARCH_VERSATILE || ARCH_REALVIEW || ARCH_VEXPRESS
5160 + def_bool ARCH_VERSATILE || ARCH_REALVIEW || ARCH_VEXPRESS || ARCH_INTEGRATOR
5161 depends on ARM
5162 depends on FB_ARMCLCD && FB=y
5163
5164 diff --git a/drivers/watchdog/sunxi_wdt.c b/drivers/watchdog/sunxi_wdt.c
5165 index b62301e74e5f..a9b37bdac5d3 100644
5166 --- a/drivers/watchdog/sunxi_wdt.c
5167 +++ b/drivers/watchdog/sunxi_wdt.c
5168 @@ -184,7 +184,7 @@ static int sunxi_wdt_start(struct watchdog_device *wdt_dev)
5169 /* Set system reset function */
5170 reg = readl(wdt_base + regs->wdt_cfg);
5171 reg &= ~(regs->wdt_reset_mask);
5172 - reg |= ~(regs->wdt_reset_val);
5173 + reg |= regs->wdt_reset_val;
5174 writel(reg, wdt_base + regs->wdt_cfg);
5175
5176 /* Enable watchdog */
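
    The sunxi watchdog fix above is a one-character bitwise bug: OR-ing in
    the complement (~) of the reset selector set nearly every bit of the
    config register instead of just the reset-function field, so the "whole
    system reset" function was never properly selected. A tiny demonstration
    with made-up register values:

    #include <stdio.h>
    #include <inttypes.h>

    int main(void)
    {
        uint32_t reg        = 0x00000050; /* imagined current cfg value */
        uint32_t reset_mask = 0x00000003; /* bits selecting the wdt function */
        uint32_t reset_val  = 0x00000001; /* "system reset" selection */

        /* Buggy read-modify-write: ~reset_val sets 31 unrelated bits. */
        uint32_t buggy = (reg & ~reset_mask) | ~reset_val;
        /* Fixed: only the function-select bits change. */
        uint32_t fixed = (reg & ~reset_mask) | reset_val;

        printf("buggy=0x%08" PRIx32 " fixed=0x%08" PRIx32 "\n", buggy, fixed);
        return 0;
    }
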
5177 diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
5178 index 02391f3eb9b0..cb239ddae5c9 100644
5179 --- a/fs/btrfs/extent_io.c
5180 +++ b/fs/btrfs/extent_io.c
5181 @@ -2767,7 +2767,8 @@ static int submit_extent_page(int rw, struct extent_io_tree *tree,
5182 bio_end_io_t end_io_func,
5183 int mirror_num,
5184 unsigned long prev_bio_flags,
5185 - unsigned long bio_flags)
5186 + unsigned long bio_flags,
5187 + bool force_bio_submit)
5188 {
5189 int ret = 0;
5190 struct bio *bio;
5191 @@ -2785,6 +2786,7 @@ static int submit_extent_page(int rw, struct extent_io_tree *tree,
5192 contig = bio_end_sector(bio) == sector;
5193
5194 if (prev_bio_flags != bio_flags || !contig ||
5195 + force_bio_submit ||
5196 merge_bio(rw, tree, page, offset, page_size, bio, bio_flags) ||
5197 bio_add_page(bio, page, page_size, offset) < page_size) {
5198 ret = submit_one_bio(rw, bio, mirror_num,
5199 @@ -2876,7 +2878,8 @@ static int __do_readpage(struct extent_io_tree *tree,
5200 get_extent_t *get_extent,
5201 struct extent_map **em_cached,
5202 struct bio **bio, int mirror_num,
5203 - unsigned long *bio_flags, int rw)
5204 + unsigned long *bio_flags, int rw,
5205 + u64 *prev_em_start)
5206 {
5207 struct inode *inode = page->mapping->host;
5208 u64 start = page_offset(page);
5209 @@ -2924,6 +2927,7 @@ static int __do_readpage(struct extent_io_tree *tree,
5210 }
5211 while (cur <= end) {
5212 unsigned long pnr = (last_byte >> PAGE_CACHE_SHIFT) + 1;
5213 + bool force_bio_submit = false;
5214
5215 if (cur >= last_byte) {
5216 char *userpage;
5217 @@ -2974,6 +2978,49 @@ static int __do_readpage(struct extent_io_tree *tree,
5218 block_start = em->block_start;
5219 if (test_bit(EXTENT_FLAG_PREALLOC, &em->flags))
5220 block_start = EXTENT_MAP_HOLE;
5221 +
5222 + /*
5223 + * If we have a file range that points to a compressed extent
5224 + * and it's followed by a consecutive file range that points to
5225 + * the same compressed extent (possibly with a different
5226 + * offset and/or length, so it either points to the whole extent
5227 + * or only part of it), we must make sure we do not submit a
5228 + * single bio to populate the pages for the 2 ranges because
5229 + * this makes the compressed extent read zero out the pages
5230 + * belonging to the 2nd range. Imagine the following scenario:
5231 + *
5232 + * File layout
5233 + * [0 - 8K] [8K - 24K]
5234 + * | |
5235 + * | |
5236 + * points to extent X, points to extent X,
5237 + * offset 4K, length of 8K offset 0, length 16K
5238 + *
5239 + * [extent X, compressed length = 4K uncompressed length = 16K]
5240 + *
5241 + * If the bio to read the compressed extent covers both ranges,
5242 + * it will decompress extent X into the pages belonging to the
5243 + * first range and then it will stop, zeroing out the remaining
5244 + * pages that belong to the other range that points to extent X.
5245 + * So here we make sure we submit 2 bios, one for the first
5246 + * range and another one for the second range. Both will target
5247 + * the same physical extent from disk, but we can't currently
5248 + * make the compressed bio endio callback populate the pages
5249 + * for both ranges because each compressed bio is tightly
5250 + * coupled with a single extent map, and each range can have
5251 + * an extent map with a different offset value relative to the
5252 + * uncompressed data of our extent and different lengths. This
5253 + * is a corner case so we prioritize correctness over
5254 + * non-optimal behavior (submitting 2 bios for the same extent).
5255 + */
5256 + if (test_bit(EXTENT_FLAG_COMPRESSED, &em->flags) &&
5257 + prev_em_start && *prev_em_start != (u64)-1 &&
5258 + *prev_em_start != em->orig_start)
5259 + force_bio_submit = true;
5260 +
5261 + if (prev_em_start)
5262 + *prev_em_start = em->orig_start;
5263 +
5264 free_extent_map(em);
5265 em = NULL;
5266
5267 @@ -3023,7 +3070,8 @@ static int __do_readpage(struct extent_io_tree *tree,
5268 bdev, bio, pnr,
5269 end_bio_extent_readpage, mirror_num,
5270 *bio_flags,
5271 - this_bio_flag);
5272 + this_bio_flag,
5273 + force_bio_submit);
5274 if (!ret) {
5275 nr++;
5276 *bio_flags = this_bio_flag;
5277 @@ -3050,7 +3098,8 @@ static inline void __do_contiguous_readpages(struct extent_io_tree *tree,
5278 get_extent_t *get_extent,
5279 struct extent_map **em_cached,
5280 struct bio **bio, int mirror_num,
5281 - unsigned long *bio_flags, int rw)
5282 + unsigned long *bio_flags, int rw,
5283 + u64 *prev_em_start)
5284 {
5285 struct inode *inode;
5286 struct btrfs_ordered_extent *ordered;
5287 @@ -3070,7 +3119,7 @@ static inline void __do_contiguous_readpages(struct extent_io_tree *tree,
5288
5289 for (index = 0; index < nr_pages; index++) {
5290 __do_readpage(tree, pages[index], get_extent, em_cached, bio,
5291 - mirror_num, bio_flags, rw);
5292 + mirror_num, bio_flags, rw, prev_em_start);
5293 page_cache_release(pages[index]);
5294 }
5295 }
5296 @@ -3080,7 +3129,8 @@ static void __extent_readpages(struct extent_io_tree *tree,
5297 int nr_pages, get_extent_t *get_extent,
5298 struct extent_map **em_cached,
5299 struct bio **bio, int mirror_num,
5300 - unsigned long *bio_flags, int rw)
5301 + unsigned long *bio_flags, int rw,
5302 + u64 *prev_em_start)
5303 {
5304 u64 start = 0;
5305 u64 end = 0;
5306 @@ -3101,7 +3151,7 @@ static void __extent_readpages(struct extent_io_tree *tree,
5307 index - first_index, start,
5308 end, get_extent, em_cached,
5309 bio, mirror_num, bio_flags,
5310 - rw);
5311 + rw, prev_em_start);
5312 start = page_start;
5313 end = start + PAGE_CACHE_SIZE - 1;
5314 first_index = index;
5315 @@ -3112,7 +3162,8 @@ static void __extent_readpages(struct extent_io_tree *tree,
5316 __do_contiguous_readpages(tree, &pages[first_index],
5317 index - first_index, start,
5318 end, get_extent, em_cached, bio,
5319 - mirror_num, bio_flags, rw);
5320 + mirror_num, bio_flags, rw,
5321 + prev_em_start);
5322 }
5323
5324 static int __extent_read_full_page(struct extent_io_tree *tree,
5325 @@ -3138,7 +3189,7 @@ static int __extent_read_full_page(struct extent_io_tree *tree,
5326 }
5327
5328 ret = __do_readpage(tree, page, get_extent, NULL, bio, mirror_num,
5329 - bio_flags, rw);
5330 + bio_flags, rw, NULL);
5331 return ret;
5332 }
5333
5334 @@ -3164,7 +3215,7 @@ int extent_read_full_page_nolock(struct extent_io_tree *tree, struct page *page,
5335 int ret;
5336
5337 ret = __do_readpage(tree, page, get_extent, NULL, &bio, mirror_num,
5338 - &bio_flags, READ);
5339 + &bio_flags, READ, NULL);
5340 if (bio)
5341 ret = submit_one_bio(READ, bio, mirror_num, bio_flags);
5342 return ret;
5343 @@ -3417,7 +3468,7 @@ static noinline_for_stack int __extent_writepage_io(struct inode *inode,
5344 sector, iosize, pg_offset,
5345 bdev, &epd->bio, max_nr,
5346 end_bio_extent_writepage,
5347 - 0, 0, 0);
5348 + 0, 0, 0, false);
5349 if (ret)
5350 SetPageError(page);
5351 }
5352 @@ -3719,7 +3770,7 @@ static noinline_for_stack int write_one_eb(struct extent_buffer *eb,
5353 ret = submit_extent_page(rw, tree, p, offset >> 9,
5354 PAGE_CACHE_SIZE, 0, bdev, &epd->bio,
5355 -1, end_bio_extent_buffer_writepage,
5356 - 0, epd->bio_flags, bio_flags);
5357 + 0, epd->bio_flags, bio_flags, false);
5358 epd->bio_flags = bio_flags;
5359 if (ret) {
5360 set_btree_ioerr(p);
5361 @@ -4123,6 +4174,7 @@ int extent_readpages(struct extent_io_tree *tree,
5362 struct page *page;
5363 struct extent_map *em_cached = NULL;
5364 int nr = 0;
5365 + u64 prev_em_start = (u64)-1;
5366
5367 for (page_idx = 0; page_idx < nr_pages; page_idx++) {
5368 page = list_entry(pages->prev, struct page, lru);
5369 @@ -4139,12 +4191,12 @@ int extent_readpages(struct extent_io_tree *tree,
5370 if (nr < ARRAY_SIZE(pagepool))
5371 continue;
5372 __extent_readpages(tree, pagepool, nr, get_extent, &em_cached,
5373 - &bio, 0, &bio_flags, READ);
5374 + &bio, 0, &bio_flags, READ, &prev_em_start);
5375 nr = 0;
5376 }
5377 if (nr)
5378 __extent_readpages(tree, pagepool, nr, get_extent, &em_cached,
5379 - &bio, 0, &bio_flags, READ);
5380 + &bio, 0, &bio_flags, READ, &prev_em_start);
5381
5382 if (em_cached)
5383 free_extent_map(em_cached);
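
    The long comment in the extent_io.c hunk explains the why; the how is
    simply to remember the previous extent map's orig_start (the offset of
    the range into the uncompressed extent data) and force a bio boundary
    whenever a compressed extent shows up with a different one. A sketch of
    that decision in isolation, with the state threaded through a pointer the
    way prev_em_start is:

    #include <stdio.h>
    #include <stdint.h>
    #include <stdbool.h>

    #define NO_PREV ((uint64_t)-1)

    /* Force a new bio when consecutive page ranges map to a compressed
     * extent but with different uncompressed start offsets: one bio
     * cannot populate pages for two distinct views of the same extent. */
    static bool must_break_bio(bool compressed, uint64_t *prev_start,
                               uint64_t orig_start)
    {
        bool force = compressed && *prev_start != NO_PREV &&
                     *prev_start != orig_start;

        *prev_start = orig_start; /* mirrors *prev_em_start = em->orig_start */
        return force;
    }

    int main(void)
    {
        uint64_t prev = NO_PREV;

        /* range 1: extent X seen from offset 4K */
        printf("%d\n", must_break_bio(true, &prev, 4096)); /* 0: first range */
        /* range 2: extent X seen from offset 0 -> needs its own bio */
        printf("%d\n", must_break_bio(true, &prev, 0));    /* 1 */
        return 0;
    }
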
5384 diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
5385 index edaa6178b4ec..0be09bb34b75 100644
5386 --- a/fs/btrfs/inode.c
5387 +++ b/fs/btrfs/inode.c
5388 @@ -4802,7 +4802,8 @@ void btrfs_evict_inode(struct inode *inode)
5389 goto no_delete;
5390 }
5391 /* do we really want it for ->i_nlink > 0 and zero btrfs_root_refs? */
5392 - btrfs_wait_ordered_range(inode, 0, (u64)-1);
5393 + if (!special_file(inode->i_mode))
5394 + btrfs_wait_ordered_range(inode, 0, (u64)-1);
5395
5396 btrfs_free_io_failure_record(inode, 0, (u64)-1);
5397
5398 diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
5399 index 63c6d05950f2..7dce00b91a71 100644
5400 --- a/fs/btrfs/transaction.c
5401 +++ b/fs/btrfs/transaction.c
5402 @@ -1756,8 +1756,11 @@ int btrfs_commit_transaction(struct btrfs_trans_handle *trans,
5403 spin_unlock(&root->fs_info->trans_lock);
5404
5405 wait_for_commit(root, prev_trans);
5406 + ret = prev_trans->aborted;
5407
5408 btrfs_put_transaction(prev_trans);
5409 + if (ret)
5410 + goto cleanup_transaction;
5411 } else {
5412 spin_unlock(&root->fs_info->trans_lock);
5413 }
5414 diff --git a/fs/cifs/cifsencrypt.c b/fs/cifs/cifsencrypt.c
5415 index 4ac7445e6ec7..da7fbfaa60b4 100644
5416 --- a/fs/cifs/cifsencrypt.c
5417 +++ b/fs/cifs/cifsencrypt.c
5418 @@ -441,6 +441,48 @@ find_domain_name(struct cifs_ses *ses, const struct nls_table *nls_cp)
5419 return 0;
5420 }
5421
5422 +/* Server has provided av pairs/target info in the type 2 challenge
5423 + * packet and we have plucked it and stored within smb session.
5424 + * We parse that blob here to find the server given timestamp
5425 + * as part of ntlmv2 authentication (or local current time as
5426 + * default in case of failure)
5427 + */
5428 +static __le64
5429 +find_timestamp(struct cifs_ses *ses)
5430 +{
5431 + unsigned int attrsize;
5432 + unsigned int type;
5433 + unsigned int onesize = sizeof(struct ntlmssp2_name);
5434 + unsigned char *blobptr;
5435 + unsigned char *blobend;
5436 + struct ntlmssp2_name *attrptr;
5437 +
5438 + if (!ses->auth_key.len || !ses->auth_key.response)
5439 + return 0;
5440 +
5441 + blobptr = ses->auth_key.response;
5442 + blobend = blobptr + ses->auth_key.len;
5443 +
5444 + while (blobptr + onesize < blobend) {
5445 + attrptr = (struct ntlmssp2_name *) blobptr;
5446 + type = le16_to_cpu(attrptr->type);
5447 + if (type == NTLMSSP_AV_EOL)
5448 + break;
5449 + blobptr += 2; /* advance attr type */
5450 + attrsize = le16_to_cpu(attrptr->length);
5451 + blobptr += 2; /* advance attr size */
5452 + if (blobptr + attrsize > blobend)
5453 + break;
5454 + if (type == NTLMSSP_AV_TIMESTAMP) {
5455 + if (attrsize == sizeof(u64))
5456 + return *((__le64 *)blobptr);
5457 + }
5458 + blobptr += attrsize; /* advance attr value */
5459 + }
5460 +
5461 + return cpu_to_le64(cifs_UnixTimeToNT(CURRENT_TIME));
5462 +}
5463 +
5464 static int calc_ntlmv2_hash(struct cifs_ses *ses, char *ntlmv2_hash,
5465 const struct nls_table *nls_cp)
5466 {
5467 @@ -637,6 +679,7 @@ setup_ntlmv2_rsp(struct cifs_ses *ses, const struct nls_table *nls_cp)
5468 struct ntlmv2_resp *ntlmv2;
5469 char ntlmv2_hash[16];
5470 unsigned char *tiblob = NULL; /* target info blob */
5471 + __le64 rsp_timestamp;
5472
5473 if (ses->server->negflavor == CIFS_NEGFLAVOR_EXTENDED) {
5474 if (!ses->domainName) {
5475 @@ -655,6 +698,12 @@ setup_ntlmv2_rsp(struct cifs_ses *ses, const struct nls_table *nls_cp)
5476 }
5477 }
5478
5479 + /* Must be within 5 minutes of the server (or in range +/-2h
5480 + * in case of Mac OS X), so simply carry over server timestamp
5481 + * (as Windows 7 does)
5482 + */
5483 + rsp_timestamp = find_timestamp(ses);
5484 +
5485 baselen = CIFS_SESS_KEY_SIZE + sizeof(struct ntlmv2_resp);
5486 tilen = ses->auth_key.len;
5487 tiblob = ses->auth_key.response;
5488 @@ -671,8 +720,8 @@ setup_ntlmv2_rsp(struct cifs_ses *ses, const struct nls_table *nls_cp)
5489 (ses->auth_key.response + CIFS_SESS_KEY_SIZE);
5490 ntlmv2->blob_signature = cpu_to_le32(0x00000101);
5491 ntlmv2->reserved = 0;
5492 - /* Must be within 5 minutes of the server */
5493 - ntlmv2->time = cpu_to_le64(cifs_UnixTimeToNT(CURRENT_TIME));
5494 + ntlmv2->time = rsp_timestamp;
5495 +
5496 get_random_bytes(&ntlmv2->client_chal, sizeof(ntlmv2->client_chal));
5497 ntlmv2->reserved2 = 0;
5498
5499 diff --git a/fs/cifs/inode.c b/fs/cifs/inode.c
5500 index 0c3ce464cae4..c88a8279e532 100644
5501 --- a/fs/cifs/inode.c
5502 +++ b/fs/cifs/inode.c
5503 @@ -2010,7 +2010,6 @@ cifs_set_file_size(struct inode *inode, struct iattr *attrs,
5504 struct tcon_link *tlink = NULL;
5505 struct cifs_tcon *tcon = NULL;
5506 struct TCP_Server_Info *server;
5507 - struct cifs_io_parms io_parms;
5508
5509 /*
5510 * To avoid spurious oplock breaks from server, in the case of
5511 @@ -2032,18 +2031,6 @@ cifs_set_file_size(struct inode *inode, struct iattr *attrs,
5512 rc = -ENOSYS;
5513 cifsFileInfo_put(open_file);
5514 cifs_dbg(FYI, "SetFSize for attrs rc = %d\n", rc);
5515 - if ((rc == -EINVAL) || (rc == -EOPNOTSUPP)) {
5516 - unsigned int bytes_written;
5517 -
5518 - io_parms.netfid = open_file->fid.netfid;
5519 - io_parms.pid = open_file->pid;
5520 - io_parms.tcon = tcon;
5521 - io_parms.offset = 0;
5522 - io_parms.length = attrs->ia_size;
5523 - rc = CIFSSMBWrite(xid, &io_parms, &bytes_written,
5524 - NULL, NULL, 1);
5525 - cifs_dbg(FYI, "Wrt seteof rc %d\n", rc);
5526 - }
5527 } else
5528 rc = -EINVAL;
5529
5530 @@ -2069,28 +2056,7 @@ cifs_set_file_size(struct inode *inode, struct iattr *attrs,
5531 else
5532 rc = -ENOSYS;
5533 cifs_dbg(FYI, "SetEOF by path (setattrs) rc = %d\n", rc);
5534 - if ((rc == -EINVAL) || (rc == -EOPNOTSUPP)) {
5535 - __u16 netfid;
5536 - int oplock = 0;
5537
5538 - rc = SMBLegacyOpen(xid, tcon, full_path, FILE_OPEN,
5539 - GENERIC_WRITE, CREATE_NOT_DIR, &netfid,
5540 - &oplock, NULL, cifs_sb->local_nls,
5541 - cifs_remap(cifs_sb));
5542 - if (rc == 0) {
5543 - unsigned int bytes_written;
5544 -
5545 - io_parms.netfid = netfid;
5546 - io_parms.pid = current->tgid;
5547 - io_parms.tcon = tcon;
5548 - io_parms.offset = 0;
5549 - io_parms.length = attrs->ia_size;
5550 - rc = CIFSSMBWrite(xid, &io_parms, &bytes_written, NULL,
5551 - NULL, 1);
5552 - cifs_dbg(FYI, "wrt seteof rc %d\n", rc);
5553 - CIFSSMBClose(xid, tcon, netfid);
5554 - }
5555 - }
5556 if (tlink)
5557 cifs_put_tlink(tlink);
5558
5559 diff --git a/fs/cifs/ioctl.c b/fs/cifs/ioctl.c
5560 index 8b7898b7670f..64a9bca976d0 100644
5561 --- a/fs/cifs/ioctl.c
5562 +++ b/fs/cifs/ioctl.c
5563 @@ -67,6 +67,12 @@ static long cifs_ioctl_clone(unsigned int xid, struct file *dst_file,
5564 goto out_drop_write;
5565 }
5566
5567 + if (src_file.file->f_op->unlocked_ioctl != cifs_ioctl) {
5568 + rc = -EBADF;
5569 + cifs_dbg(VFS, "src file seems to be from a different filesystem type\n");
5570 + goto out_fput;
5571 + }
5572 +
5573 if ((!src_file.file->private_data) || (!dst_file->private_data)) {
5574 rc = -EBADF;
5575 cifs_dbg(VFS, "missing cifsFileInfo on copy range src file\n");
5576 diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
5577 index cc93a7ffe8e4..51f5251d7db5 100644
5578 --- a/fs/cifs/smb2ops.c
5579 +++ b/fs/cifs/smb2ops.c
5580 @@ -50,9 +50,13 @@ change_conf(struct TCP_Server_Info *server)
5581 break;
5582 default:
5583 server->echoes = true;
5584 - server->oplocks = true;
5585 + if (enable_oplocks) {
5586 + server->oplocks = true;
5587 + server->oplock_credits = 1;
5588 + } else
5589 + server->oplocks = false;
5590 +
5591 server->echo_credits = 1;
5592 - server->oplock_credits = 1;
5593 }
5594 server->credits -= server->echo_credits + server->oplock_credits;
5595 return 0;
5596 diff --git a/fs/coredump.c b/fs/coredump.c
5597 index 4c5866b948e7..00d75e82f6f2 100644
5598 --- a/fs/coredump.c
5599 +++ b/fs/coredump.c
5600 @@ -506,10 +506,10 @@ void do_coredump(const siginfo_t *siginfo)
5601 const struct cred *old_cred;
5602 struct cred *cred;
5603 int retval = 0;
5604 - int flag = 0;
5605 int ispipe;
5606 struct files_struct *displaced;
5607 - bool need_nonrelative = false;
5608 + /* require nonrelative corefile path and be extra careful */
5609 + bool need_suid_safe = false;
5610 bool core_dumped = false;
5611 static atomic_t core_dump_count = ATOMIC_INIT(0);
5612 struct coredump_params cprm = {
5613 @@ -543,9 +543,8 @@ void do_coredump(const siginfo_t *siginfo)
5614 */
5615 if (__get_dumpable(cprm.mm_flags) == SUID_DUMP_ROOT) {
5616 /* Setuid core dump mode */
5617 - flag = O_EXCL; /* Stop rewrite attacks */
5618 cred->fsuid = GLOBAL_ROOT_UID; /* Dump root private */
5619 - need_nonrelative = true;
5620 + need_suid_safe = true;
5621 }
5622
5623 retval = coredump_wait(siginfo->si_signo, &core_state);
5624 @@ -626,7 +625,7 @@ void do_coredump(const siginfo_t *siginfo)
5625 if (cprm.limit < binfmt->min_coredump)
5626 goto fail_unlock;
5627
5628 - if (need_nonrelative && cn.corename[0] != '/') {
5629 + if (need_suid_safe && cn.corename[0] != '/') {
5630 printk(KERN_WARNING "Pid %d(%s) can only dump core "\
5631 "to fully qualified path!\n",
5632 task_tgid_vnr(current), current->comm);
5633 @@ -634,8 +633,35 @@ void do_coredump(const siginfo_t *siginfo)
5634 goto fail_unlock;
5635 }
5636
5637 + /*
5638 + * Unlink the file if it exists unless this is a SUID
5639 + * binary - in that case, we're running around with root
5640 + * privs and don't want to unlink another user's coredump.
5641 + */
5642 + if (!need_suid_safe) {
5643 + mm_segment_t old_fs;
5644 +
5645 + old_fs = get_fs();
5646 + set_fs(KERNEL_DS);
5647 + /*
5648 + * If it doesn't exist, that's fine. If there's some
5649 + * other problem, we'll catch it at the filp_open().
5650 + */
5651 + (void) sys_unlink((const char __user *)cn.corename);
5652 + set_fs(old_fs);
5653 + }
5654 +
5655 + /*
5656 + * There is a race between unlinking and creating the
5657 + * file, but if that causes an EEXIST here, that's
5658 + * fine - another process raced with us while creating
5659 + * the corefile, and the other process won. To userspace,
5660 + * what matters is that at least one of the two processes
5661 + * writes its coredump successfully, not which one.
5662 + */
5663 cprm.file = filp_open(cn.corename,
5664 - O_CREAT | 2 | O_NOFOLLOW | O_LARGEFILE | flag,
5665 + O_CREAT | 2 | O_NOFOLLOW |
5666 + O_LARGEFILE | O_EXCL,
5667 0600);
5668 if (IS_ERR(cprm.file))
5669 goto fail_unlock;
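
    The reworked coredump open sequence always creates the corefile with
    O_EXCL and, for non-suid dumps only, unlinks any stale file first; as the
    added comment says, losing the unlink/create race to a concurrent dumper
    just means the other process's core survives, which is acceptable. A
    userspace sketch of the sequence (the kernel performs the unlink via
    sys_unlink() under KERNEL_DS, and opens read-write rather than
    write-only):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Non-suid path: drop any stale file, then create exclusively.
     * If another dumper wins the race we get EEXIST and simply bail;
     * exactly one of the racers writes its core, which is enough. */
    static int open_corefile(const char *name, int suid_safe)
    {
        if (!suid_safe)
            (void)unlink(name); /* ignore ENOENT and friends */

        return open(name, O_CREAT | O_WRONLY | O_EXCL | O_NOFOLLOW, 0600);
    }

    int main(void)
    {
        int fd = open_corefile("/tmp/example.core", 0);

        if (fd < 0) {
            perror("open_corefile");
            return 1;
        }
        close(fd);
        unlink("/tmp/example.core");
        return 0;
    }
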
5670 diff --git a/fs/dcache.c b/fs/dcache.c
5671 index a66d6d80e2d9..d25f8fdcd397 100644
5672 --- a/fs/dcache.c
5673 +++ b/fs/dcache.c
5674 @@ -1528,7 +1528,8 @@ void d_set_d_op(struct dentry *dentry, const struct dentry_operations *op)
5675 DCACHE_OP_COMPARE |
5676 DCACHE_OP_REVALIDATE |
5677 DCACHE_OP_WEAK_REVALIDATE |
5678 - DCACHE_OP_DELETE ));
5679 + DCACHE_OP_DELETE |
5680 + DCACHE_OP_SELECT_INODE));
5681 dentry->d_op = op;
5682 if (!op)
5683 return;
5684 @@ -1544,6 +1545,8 @@ void d_set_d_op(struct dentry *dentry, const struct dentry_operations *op)
5685 dentry->d_flags |= DCACHE_OP_DELETE;
5686 if (op->d_prune)
5687 dentry->d_flags |= DCACHE_OP_PRUNE;
5688 + if (op->d_select_inode)
5689 + dentry->d_flags |= DCACHE_OP_SELECT_INODE;
5690
5691 }
5692 EXPORT_SYMBOL(d_set_d_op);
5693 @@ -2889,6 +2892,13 @@ restart:
5694
5695 if (dentry == vfsmnt->mnt_root || IS_ROOT(dentry)) {
5696 struct mount *parent = ACCESS_ONCE(mnt->mnt_parent);
5697 + /* Escaped? */
5698 + if (dentry != vfsmnt->mnt_root) {
5699 + bptr = *buffer;
5700 + blen = *buflen;
5701 + error = 3;
5702 + break;
5703 + }
5704 /* Global root? */
5705 if (mnt != parent) {
5706 dentry = ACCESS_ONCE(mnt->mnt_mountpoint);
5707 diff --git a/fs/ext4/super.c b/fs/ext4/super.c
5708 index bf038468d752..b5a2c29a8db8 100644
5709 --- a/fs/ext4/super.c
5710 +++ b/fs/ext4/super.c
5711 @@ -4754,10 +4754,11 @@ static int ext4_freeze(struct super_block *sb)
5712 error = jbd2_journal_flush(journal);
5713 if (error < 0)
5714 goto out;
5715 +
5716 + /* Journal blocked and flushed, clear needs_recovery flag. */
5717 + EXT4_CLEAR_INCOMPAT_FEATURE(sb, EXT4_FEATURE_INCOMPAT_RECOVER);
5718 }
5719
5720 - /* Journal blocked and flushed, clear needs_recovery flag. */
5721 - EXT4_CLEAR_INCOMPAT_FEATURE(sb, EXT4_FEATURE_INCOMPAT_RECOVER);
5722 error = ext4_commit_super(sb, 1);
5723 out:
5724 if (journal)
5725 @@ -4775,8 +4776,11 @@ static int ext4_unfreeze(struct super_block *sb)
5726 if (sb->s_flags & MS_RDONLY)
5727 return 0;
5728
5729 - /* Reset the needs_recovery flag before the fs is unlocked. */
5730 - EXT4_SET_INCOMPAT_FEATURE(sb, EXT4_FEATURE_INCOMPAT_RECOVER);
5731 + if (EXT4_SB(sb)->s_journal) {
5732 + /* Reset the needs_recovery flag before the fs is unlocked. */
5733 + EXT4_SET_INCOMPAT_FEATURE(sb, EXT4_FEATURE_INCOMPAT_RECOVER);
5734 + }
5735 +
5736 ext4_commit_super(sb, 1);
5737 return 0;
5738 }
5739 diff --git a/fs/hfs/bnode.c b/fs/hfs/bnode.c
5740 index d3fa6bd9503e..221719eac5de 100644
5741 --- a/fs/hfs/bnode.c
5742 +++ b/fs/hfs/bnode.c
5743 @@ -288,7 +288,6 @@ static struct hfs_bnode *__hfs_bnode_create(struct hfs_btree *tree, u32 cnid)
5744 page_cache_release(page);
5745 goto fail;
5746 }
5747 - page_cache_release(page);
5748 node->page[i] = page;
5749 }
5750
5751 @@ -398,11 +397,11 @@ node_error:
5752
5753 void hfs_bnode_free(struct hfs_bnode *node)
5754 {
5755 - //int i;
5756 + int i;
5757
5758 - //for (i = 0; i < node->tree->pages_per_bnode; i++)
5759 - // if (node->page[i])
5760 - // page_cache_release(node->page[i]);
5761 + for (i = 0; i < node->tree->pages_per_bnode; i++)
5762 + if (node->page[i])
5763 + page_cache_release(node->page[i]);
5764 kfree(node);
5765 }
5766
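
    The bnode.c pair of hunks fixes page lifetime: __hfs_bnode_create() used
    to drop its page references right after read_mapping_page(), leaving
    node->page[] pointing at pages the node no longer owned, while the
    matching releases in hfs_bnode_free() sat commented out. The fix makes
    ownership symmetric: keep the reference for as long as the node holds the
    pointer, release it in free. A sketch of that pattern with a toy
    refcount:

    #include <stdio.h>
    #include <stdlib.h>

    struct page { int refcount; };

    /* Stand-in for read_mapping_page(): caller owns one reference. */
    static struct page *get_page_for_node(void)
    {
        struct page *p = malloc(sizeof(*p));

        p->refcount = 1;
        return p;
    }

    static void put_page_ref(struct page *p)
    {
        if (--p->refcount == 0)
            free(p);
    }

    struct bnode { struct page *page[2]; int npages; };

    /* Correct pattern: the node keeps the reference it took at create
     * time and drops it here; releasing early (the old bug) leaves
     * node->page[i] dangling once the page cache evicts the page. */
    static void bnode_free(struct bnode *n)
    {
        for (int i = 0; i < n->npages; i++)
            if (n->page[i])
                put_page_ref(n->page[i]);
    }

    int main(void)
    {
        struct bnode n = { .npages = 2 };

        n.page[0] = get_page_for_node();
        n.page[1] = get_page_for_node();
        bnode_free(&n);
        puts("ok");
        return 0;
    }
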
5767 diff --git a/fs/hfs/brec.c b/fs/hfs/brec.c
5768 index 9f4ee7f52026..6fc766df0461 100644
5769 --- a/fs/hfs/brec.c
5770 +++ b/fs/hfs/brec.c
5771 @@ -131,13 +131,16 @@ skip:
5772 hfs_bnode_write(node, entry, data_off + key_len, entry_len);
5773 hfs_bnode_dump(node);
5774
5775 - if (new_node) {
5776 - /* update parent key if we inserted a key
5777 - * at the start of the first node
5778 - */
5779 - if (!rec && new_node != node)
5780 - hfs_brec_update_parent(fd);
5781 + /*
5782 + * update parent key if we inserted a key
5783 + * at the start of the node and it is not the new node
5784 + */
5785 + if (!rec && new_node != node) {
5786 + hfs_bnode_read_key(node, fd->search_key, data_off + size);
5787 + hfs_brec_update_parent(fd);
5788 + }
5789
5790 + if (new_node) {
5791 hfs_bnode_put(fd->bnode);
5792 if (!new_node->parent) {
5793 hfs_btree_inc_height(tree);
5794 @@ -166,9 +169,6 @@ skip:
5795 goto again;
5796 }
5797
5798 - if (!rec)
5799 - hfs_brec_update_parent(fd);
5800 -
5801 return 0;
5802 }
5803
5804 @@ -366,6 +366,8 @@ again:
5805 if (IS_ERR(parent))
5806 return PTR_ERR(parent);
5807 __hfs_brec_find(parent, fd);
5808 + if (fd->record < 0)
5809 + return -ENOENT;
5810 hfs_bnode_dump(parent);
5811 rec = fd->record;
5812
5813 diff --git a/fs/hfsplus/bnode.c b/fs/hfsplus/bnode.c
5814 index 759708fd9331..63924662aaf3 100644
5815 --- a/fs/hfsplus/bnode.c
5816 +++ b/fs/hfsplus/bnode.c
5817 @@ -454,7 +454,6 @@ static struct hfs_bnode *__hfs_bnode_create(struct hfs_btree *tree, u32 cnid)
5818 page_cache_release(page);
5819 goto fail;
5820 }
5821 - page_cache_release(page);
5822 node->page[i] = page;
5823 }
5824
5825 @@ -566,13 +565,11 @@ node_error:
5826
5827 void hfs_bnode_free(struct hfs_bnode *node)
5828 {
5829 -#if 0
5830 int i;
5831
5832 for (i = 0; i < node->tree->pages_per_bnode; i++)
5833 if (node->page[i])
5834 page_cache_release(node->page[i]);
5835 -#endif
5836 kfree(node);
5837 }
5838
5839 diff --git a/fs/hpfs/namei.c b/fs/hpfs/namei.c
5840 index bdbc2c3080a4..0642cafaab34 100644
5841 --- a/fs/hpfs/namei.c
5842 +++ b/fs/hpfs/namei.c
5843 @@ -8,6 +8,17 @@
5844 #include <linux/sched.h>
5845 #include "hpfs_fn.h"
5846
5847 +static void hpfs_update_directory_times(struct inode *dir)
5848 +{
5849 + time_t t = get_seconds();
5850 + if (t == dir->i_mtime.tv_sec &&
5851 + t == dir->i_ctime.tv_sec)
5852 + return;
5853 + dir->i_mtime.tv_sec = dir->i_ctime.tv_sec = t;
5854 + dir->i_mtime.tv_nsec = dir->i_ctime.tv_nsec = 0;
5855 + hpfs_write_inode_nolock(dir);
5856 +}
5857 +
5858 static int hpfs_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode)
5859 {
5860 const unsigned char *name = dentry->d_name.name;
5861 @@ -99,6 +110,7 @@ static int hpfs_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode)
5862 result->i_mode = mode | S_IFDIR;
5863 hpfs_write_inode_nolock(result);
5864 }
5865 + hpfs_update_directory_times(dir);
5866 d_instantiate(dentry, result);
5867 hpfs_unlock(dir->i_sb);
5868 return 0;
5869 @@ -187,6 +199,7 @@ static int hpfs_create(struct inode *dir, struct dentry *dentry, umode_t mode, b
5870 result->i_mode = mode | S_IFREG;
5871 hpfs_write_inode_nolock(result);
5872 }
5873 + hpfs_update_directory_times(dir);
5874 d_instantiate(dentry, result);
5875 hpfs_unlock(dir->i_sb);
5876 return 0;
5877 @@ -262,6 +275,7 @@ static int hpfs_mknod(struct inode *dir, struct dentry *dentry, umode_t mode, de
5878 insert_inode_hash(result);
5879
5880 hpfs_write_inode_nolock(result);
5881 + hpfs_update_directory_times(dir);
5882 d_instantiate(dentry, result);
5883 brelse(bh);
5884 hpfs_unlock(dir->i_sb);
5885 @@ -340,6 +354,7 @@ static int hpfs_symlink(struct inode *dir, struct dentry *dentry, const char *sy
5886 insert_inode_hash(result);
5887
5888 hpfs_write_inode_nolock(result);
5889 + hpfs_update_directory_times(dir);
5890 d_instantiate(dentry, result);
5891 hpfs_unlock(dir->i_sb);
5892 return 0;
5893 @@ -423,6 +438,8 @@ again:
5894 out1:
5895 hpfs_brelse4(&qbh);
5896 out:
5897 + if (!err)
5898 + hpfs_update_directory_times(dir);
5899 hpfs_unlock(dir->i_sb);
5900 return err;
5901 }
5902 @@ -477,6 +494,8 @@ static int hpfs_rmdir(struct inode *dir, struct dentry *dentry)
5903 out1:
5904 hpfs_brelse4(&qbh);
5905 out:
5906 + if (!err)
5907 + hpfs_update_directory_times(dir);
5908 hpfs_unlock(dir->i_sb);
5909 return err;
5910 }
5911 @@ -595,7 +614,7 @@ static int hpfs_rename(struct inode *old_dir, struct dentry *old_dentry,
5912 goto end1;
5913 }
5914
5915 - end:
5916 +end:
5917 hpfs_i(i)->i_parent_dir = new_dir->i_ino;
5918 if (S_ISDIR(i->i_mode)) {
5919 inc_nlink(new_dir);
5920 @@ -610,6 +629,10 @@ static int hpfs_rename(struct inode *old_dir, struct dentry *old_dentry,
5921 brelse(bh);
5922 }
5923 end1:
5924 + if (!err) {
5925 + hpfs_update_directory_times(old_dir);
5926 + hpfs_update_directory_times(new_dir);
5927 + }
5928 hpfs_unlock(i->i_sb);
5929 return err;
5930 }
5931 diff --git a/fs/internal.h b/fs/internal.h
5932 index 757ba2abf21e..53279bd90b72 100644
5933 --- a/fs/internal.h
5934 +++ b/fs/internal.h
5935 @@ -106,6 +106,7 @@ extern struct file *do_file_open_root(struct dentry *, struct vfsmount *,
5936 extern long do_handle_open(int mountdirfd,
5937 struct file_handle __user *ufh, int open_flag);
5938 extern int open_check_o_direct(struct file *f);
5939 +extern int vfs_open(const struct path *, struct file *, const struct cred *);
5940
5941 /*
5942 * inode.c
5943 diff --git a/fs/namei.c b/fs/namei.c
5944 index d20f061cddd3..be3d538d56b3 100644
5945 --- a/fs/namei.c
5946 +++ b/fs/namei.c
5947 @@ -487,6 +487,24 @@ void path_put(const struct path *path)
5948 }
5949 EXPORT_SYMBOL(path_put);
5950
5951 +/**
5952 + * path_connected - Verify that a path->dentry is below path->mnt.mnt_root
5953 + * @path: nameidate to verify
5954 + *
5955 + * Rename can sometimes move a file or directory outside of a bind
5956 + * mount, path_connected allows those cases to be detected.
5957 + */
5958 +static bool path_connected(const struct path *path)
5959 +{
5960 + struct vfsmount *mnt = path->mnt;
5961 +
5962 + /* Only bind mounts can have disconnected paths */
5963 + if (mnt->mnt_root == mnt->mnt_sb->s_root)
5964 + return true;
5965 +
5966 + return is_subdir(path->dentry, mnt->mnt_root);
5967 +}
5968 +
5969 /*
5970 * Path walking has 2 modes, rcu-walk and ref-walk (see
5971 * Documentation/filesystems/path-lookup.txt). In situations when we can't
5972 @@ -1164,6 +1182,8 @@ static int follow_dotdot_rcu(struct nameidata *nd)
5973 goto failed;
5974 nd->path.dentry = parent;
5975 nd->seq = seq;
5976 + if (unlikely(!path_connected(&nd->path)))
5977 + goto failed;
5978 break;
5979 }
5980 if (!follow_up_rcu(&nd->path))
5981 @@ -1260,7 +1280,7 @@ static void follow_mount(struct path *path)
5982 }
5983 }
5984
5985 -static void follow_dotdot(struct nameidata *nd)
5986 +static int follow_dotdot(struct nameidata *nd)
5987 {
5988 if (!nd->root.mnt)
5989 set_root(nd);
5990 @@ -1276,6 +1296,10 @@ static void follow_dotdot(struct nameidata *nd)
5991 /* rare case of legitimate dget_parent()... */
5992 nd->path.dentry = dget_parent(nd->path.dentry);
5993 dput(old);
5994 + if (unlikely(!path_connected(&nd->path))) {
5995 + path_put(&nd->path);
5996 + return -ENOENT;
5997 + }
5998 break;
5999 }
6000 if (!follow_up(&nd->path))
6001 @@ -1283,6 +1307,7 @@ static void follow_dotdot(struct nameidata *nd)
6002 }
6003 follow_mount(&nd->path);
6004 nd->inode = nd->path.dentry->d_inode;
6005 + return 0;
6006 }
6007
6008 /*
6009 @@ -1503,7 +1528,7 @@ static inline int handle_dots(struct nameidata *nd, int type)
6010 if (follow_dotdot_rcu(nd))
6011 return -ECHILD;
6012 } else
6013 - follow_dotdot(nd);
6014 + return follow_dotdot(nd);
6015 }
6016 return 0;
6017 }
6018 @@ -2239,7 +2264,7 @@ mountpoint_last(struct nameidata *nd, struct path *path)
6019 if (unlikely(nd->last_type != LAST_NORM)) {
6020 error = handle_dots(nd, nd->last_type);
6021 if (error)
6022 - goto out;
6023 + return error;
6024 dentry = dget(nd->path.dentry);
6025 goto done;
6026 }
6027 diff --git a/fs/nfs/filelayout/filelayout.c b/fs/nfs/filelayout/filelayout.c
6028 index 7afb52f6a25a..32879965b255 100644
6029 --- a/fs/nfs/filelayout/filelayout.c
6030 +++ b/fs/nfs/filelayout/filelayout.c
6031 @@ -682,23 +682,18 @@ out_put:
6032 goto out;
6033 }
6034
6035 -static void filelayout_free_fh_array(struct nfs4_filelayout_segment *fl)
6036 +static void _filelayout_free_lseg(struct nfs4_filelayout_segment *fl)
6037 {
6038 int i;
6039
6040 - for (i = 0; i < fl->num_fh; i++) {
6041 - if (!fl->fh_array[i])
6042 - break;
6043 - kfree(fl->fh_array[i]);
6044 + if (fl->fh_array) {
6045 + for (i = 0; i < fl->num_fh; i++) {
6046 + if (!fl->fh_array[i])
6047 + break;
6048 + kfree(fl->fh_array[i]);
6049 + }
6050 + kfree(fl->fh_array);
6051 }
6052 - kfree(fl->fh_array);
6053 - fl->fh_array = NULL;
6054 -}
6055 -
6056 -static void
6057 -_filelayout_free_lseg(struct nfs4_filelayout_segment *fl)
6058 -{
6059 - filelayout_free_fh_array(fl);
6060 kfree(fl);
6061 }
6062
6063 @@ -769,21 +764,21 @@ filelayout_decode_layout(struct pnfs_layout_hdr *flo,
6064 /* Do we want to use a mempool here? */
6065 fl->fh_array[i] = kmalloc(sizeof(struct nfs_fh), gfp_flags);
6066 if (!fl->fh_array[i])
6067 - goto out_err_free;
6068 + goto out_err;
6069
6070 p = xdr_inline_decode(&stream, 4);
6071 if (unlikely(!p))
6072 - goto out_err_free;
6073 + goto out_err;
6074 fl->fh_array[i]->size = be32_to_cpup(p++);
6075 if (sizeof(struct nfs_fh) < fl->fh_array[i]->size) {
6076 printk(KERN_ERR "NFS: Too big fh %d received %d\n",
6077 i, fl->fh_array[i]->size);
6078 - goto out_err_free;
6079 + goto out_err;
6080 }
6081
6082 p = xdr_inline_decode(&stream, fl->fh_array[i]->size);
6083 if (unlikely(!p))
6084 - goto out_err_free;
6085 + goto out_err;
6086 memcpy(fl->fh_array[i]->data, p, fl->fh_array[i]->size);
6087 dprintk("DEBUG: %s: fh len %d\n", __func__,
6088 fl->fh_array[i]->size);
6089 @@ -792,8 +787,6 @@ filelayout_decode_layout(struct pnfs_layout_hdr *flo,
6090 __free_page(scratch);
6091 return 0;
6092
6093 -out_err_free:
6094 - filelayout_free_fh_array(fl);
6095 out_err:
6096 __free_page(scratch);
6097 return -EIO;
6098 diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
6099 index c9ff4a176a25..123a494d018b 100644
6100 --- a/fs/nfs/nfs4proc.c
6101 +++ b/fs/nfs/nfs4proc.c
6102 @@ -2346,7 +2346,7 @@ static int _nfs4_do_open(struct inode *dir,
6103 goto err_free_label;
6104 state = ctx->state;
6105
6106 - if ((opendata->o_arg.open_flags & O_EXCL) &&
6107 + if ((opendata->o_arg.open_flags & (O_CREAT|O_EXCL)) == (O_CREAT|O_EXCL) &&
6108 (opendata->o_arg.createmode != NFS4_CREATE_GUARDED)) {
6109 nfs4_exclusive_attrset(opendata, sattr);
6110
6111 @@ -8443,6 +8443,7 @@ static const struct nfs4_minor_version_ops nfs_v4_2_minor_ops = {
6112 .reboot_recovery_ops = &nfs41_reboot_recovery_ops,
6113 .nograce_recovery_ops = &nfs41_nograce_recovery_ops,
6114 .state_renewal_ops = &nfs41_state_renewal_ops,
6115 + .mig_recovery_ops = &nfs41_mig_recovery_ops,
6116 };
6117 #endif
6118
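The _nfs4_do_open() change tightens the exclusive-create test: O_EXCL only has meaning together with O_CREAT, so the exclusive-attribute handling must check that both bits are set rather than testing O_EXCL alone. A hedged sketch of the mask idiom, using the standard <fcntl.h> flags:

    #include <fcntl.h>
    #include <stdbool.h>

    /* True only for a genuine exclusive create: both bits present.
     * Testing (flags & O_EXCL) alone also fires for opens that never
     * create anything. */
    static bool is_exclusive_create(int open_flags)
    {
        return (open_flags & (O_CREAT | O_EXCL)) == (O_CREAT | O_EXCL);
    }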
6119 diff --git a/fs/nfs/pagelist.c b/fs/nfs/pagelist.c
6120 index ed0db61f8543..54631609d601 100644
6121 --- a/fs/nfs/pagelist.c
6122 +++ b/fs/nfs/pagelist.c
6123 @@ -63,8 +63,8 @@ EXPORT_SYMBOL_GPL(nfs_pgheader_init);
6124 void nfs_set_pgio_error(struct nfs_pgio_header *hdr, int error, loff_t pos)
6125 {
6126 spin_lock(&hdr->lock);
6127 - if (pos < hdr->io_start + hdr->good_bytes) {
6128 - set_bit(NFS_IOHDR_ERROR, &hdr->flags);
6129 + if (!test_and_set_bit(NFS_IOHDR_ERROR, &hdr->flags)
6130 + || pos < hdr->io_start + hdr->good_bytes) {
6131 clear_bit(NFS_IOHDR_EOF, &hdr->flags);
6132 hdr->good_bytes = pos - hdr->io_start;
6133 hdr->error = error;
6134 @@ -486,7 +486,7 @@ size_t nfs_generic_pg_test(struct nfs_pageio_descriptor *desc,
6135 * for it without upsetting the slab allocator.
6136 */
6137 if (((desc->pg_count + req->wb_bytes) >> PAGE_SHIFT) *
6138 - sizeof(struct page) > PAGE_SIZE)
6139 + sizeof(struct page *) > PAGE_SIZE)
6140 return 0;
6141
6142 return min(desc->pg_bsize - desc->pg_count, (size_t)req->wb_bytes);
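Two fixes here: the first hunk makes nfs_set_pgio_error() record an error whenever none has been recorded yet, not only when the failing offset lies below the current good-byte boundary; the second fixes a sizing bug, since the bound being computed covers an array of page pointers, so the element size is sizeof(struct page *), not sizeof(struct page). A self-contained demonstration of how far apart the two sizes can be (struct page below is a padded stand-in, not the kernel's):

    #include <stdio.h>

    struct page { char pad[64]; };          /* illustrative stand-in */

    int main(void)
    {
        size_t n = 100;
        printf("wrong: %zu bytes, right: %zu bytes\n",
               n * sizeof(struct page),     /* sizes 100 structs   */
               n * sizeof(struct page *));  /* sizes 100 pointers  */
        return 0;
    }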
6143 diff --git a/fs/ocfs2/dlm/dlmmaster.c b/fs/ocfs2/dlm/dlmmaster.c
6144 index 9ec1eea7c3a3..5972f5a30713 100644
6145 --- a/fs/ocfs2/dlm/dlmmaster.c
6146 +++ b/fs/ocfs2/dlm/dlmmaster.c
6147 @@ -1447,6 +1447,7 @@ int dlm_master_request_handler(struct o2net_msg *msg, u32 len, void *data,
6148 int found, ret;
6149 int set_maybe;
6150 int dispatch_assert = 0;
6151 + int dispatched = 0;
6152
6153 if (!dlm_grab(dlm))
6154 return DLM_MASTER_RESP_NO;
6155 @@ -1653,14 +1654,17 @@ send_response:
6156 mlog(ML_ERROR, "failed to dispatch assert master work\n");
6157 response = DLM_MASTER_RESP_ERROR;
6158 dlm_lockres_put(res);
6159 - } else
6160 + } else {
6161 + dispatched = 1;
6162 dlm_lockres_grab_inflight_worker(dlm, res);
6163 + }
6164 } else {
6165 if (res)
6166 dlm_lockres_put(res);
6167 }
6168
6169 - dlm_put(dlm);
6170 + if (!dispatched)
6171 + dlm_put(dlm);
6172 return response;
6173 }
6174
6175 @@ -2084,7 +2088,6 @@ int dlm_dispatch_assert_master(struct dlm_ctxt *dlm,
6176
6177
6178 /* queue up work for dlm_assert_master_worker */
6179 - dlm_grab(dlm); /* get an extra ref for the work item */
6180 dlm_init_work_item(dlm, item, dlm_assert_master_worker, NULL);
6181 item->u.am.lockres = res; /* already have a ref */
6182 /* can optionally ignore node numbers higher than this node */
6183 diff --git a/fs/ocfs2/dlm/dlmrecovery.c b/fs/ocfs2/dlm/dlmrecovery.c
6184 index 3365839d2971..8632f9c5fb5d 100644
6185 --- a/fs/ocfs2/dlm/dlmrecovery.c
6186 +++ b/fs/ocfs2/dlm/dlmrecovery.c
6187 @@ -1687,6 +1687,7 @@ int dlm_master_requery_handler(struct o2net_msg *msg, u32 len, void *data,
6188 unsigned int hash;
6189 int master = DLM_LOCK_RES_OWNER_UNKNOWN;
6190 u32 flags = DLM_ASSERT_MASTER_REQUERY;
6191 + int dispatched = 0;
6192
6193 if (!dlm_grab(dlm)) {
6194 /* since the domain has gone away on this
6195 @@ -1708,8 +1709,10 @@ int dlm_master_requery_handler(struct o2net_msg *msg, u32 len, void *data,
6196 mlog_errno(-ENOMEM);
6197 /* retry!? */
6198 BUG();
6199 - } else
6200 + } else {
6201 + dispatched = 1;
6202 __dlm_lockres_grab_inflight_worker(dlm, res);
6203 + }
6204 spin_unlock(&res->spinlock);
6205 } else {
6206 /* put.. in case we are not the master */
6207 @@ -1719,7 +1722,8 @@ int dlm_master_requery_handler(struct o2net_msg *msg, u32 len, void *data,
6208 }
6209 spin_unlock(&dlm->spinlock);
6210
6211 - dlm_put(dlm);
6212 + if (!dispatched)
6213 + dlm_put(dlm);
6214 return master;
6215 }
6216
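The dlmmaster.c and dlmrecovery.c hunks repair the same reference-count pattern: dlm_dispatch_assert_master() no longer grabs an extra reference on the dlm context, so when a handler successfully dispatches a work item, its own reference is inherited by the worker, and the handler drops it only when nothing was dispatched. A hedged userspace sketch of the ownership hand-off; struct ctx and its helpers are illustrative, not ocfs2 API:

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdlib.h>

    struct ctx { atomic_int refs; };

    static void ctx_get(struct ctx *c) { atomic_fetch_add(&c->refs, 1); }

    static void ctx_put(struct ctx *c)
    {
        if (atomic_fetch_sub(&c->refs, 1) == 1)
            free(c);                    /* last reference gone */
    }

    /* Pretend to queue work; on success the worker inherits the
     * caller's reference and calls ctx_put() when it finishes. */
    static bool dispatch_work(struct ctx *c) { (void)c; return true; }

    static void handler(struct ctx *c)
    {
        bool dispatched = false;

        ctx_get(c);                     /* reference for this handler */
        if (dispatch_work(c))
            dispatched = true;          /* ownership moved to the worker */
        if (!dispatched)
            ctx_put(c);                 /* nothing queued: drop it here */
    }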
6217 diff --git a/fs/open.c b/fs/open.c
6218 index 4a8a355ffab8..d058ff1b841b 100644
6219 --- a/fs/open.c
6220 +++ b/fs/open.c
6221 @@ -665,18 +665,18 @@ int open_check_o_direct(struct file *f)
6222 }
6223
6224 static int do_dentry_open(struct file *f,
6225 + struct inode *inode,
6226 int (*open)(struct inode *, struct file *),
6227 const struct cred *cred)
6228 {
6229 static const struct file_operations empty_fops = {};
6230 - struct inode *inode;
6231 int error;
6232
6233 f->f_mode = OPEN_FMODE(f->f_flags) | FMODE_LSEEK |
6234 FMODE_PREAD | FMODE_PWRITE;
6235
6236 path_get(&f->f_path);
6237 - inode = f->f_inode = f->f_path.dentry->d_inode;
6238 + f->f_inode = inode;
6239 f->f_mapping = inode->i_mapping;
6240
6241 if (unlikely(f->f_flags & O_PATH)) {
6242 @@ -780,7 +780,8 @@ int finish_open(struct file *file, struct dentry *dentry,
6243 BUG_ON(*opened & FILE_OPENED); /* once it's opened, it's opened */
6244
6245 file->f_path.dentry = dentry;
6246 - error = do_dentry_open(file, open, current_cred());
6247 + error = do_dentry_open(file, d_backing_inode(dentry), open,
6248 + current_cred());
6249 if (!error)
6250 *opened |= FILE_OPENED;
6251
6252 @@ -809,6 +810,28 @@ int finish_no_open(struct file *file, struct dentry *dentry)
6253 }
6254 EXPORT_SYMBOL(finish_no_open);
6255
6256 +/**
6257 + * vfs_open - open the file at the given path
6258 + * @path: path to open
6259 + * @file: newly allocated file with f_flag initialized
6260 + * @cred: credentials to use
6261 + */
6262 +int vfs_open(const struct path *path, struct file *file,
6263 + const struct cred *cred)
6264 +{
6265 + struct dentry *dentry = path->dentry;
6266 + struct inode *inode = dentry->d_inode;
6267 +
6268 + file->f_path = *path;
6269 + if (dentry->d_flags & DCACHE_OP_SELECT_INODE) {
6270 + inode = dentry->d_op->d_select_inode(dentry, file->f_flags);
6271 + if (IS_ERR(inode))
6272 + return PTR_ERR(inode);
6273 + }
6274 +
6275 + return do_dentry_open(file, inode, NULL, cred);
6276 +}
6277 +
6278 struct file *dentry_open(const struct path *path, int flags,
6279 const struct cred *cred)
6280 {
6281 @@ -840,26 +863,6 @@ struct file *dentry_open(const struct path *path, int flags,
6282 }
6283 EXPORT_SYMBOL(dentry_open);
6284
6285 -/**
6286 - * vfs_open - open the file at the given path
6287 - * @path: path to open
6288 - * @filp: newly allocated file with f_flag initialized
6289 - * @cred: credentials to use
6290 - */
6291 -int vfs_open(const struct path *path, struct file *filp,
6292 - const struct cred *cred)
6293 -{
6294 - struct inode *inode = path->dentry->d_inode;
6295 -
6296 - if (inode->i_op->dentry_open)
6297 - return inode->i_op->dentry_open(path->dentry, filp, cred);
6298 - else {
6299 - filp->f_path = *path;
6300 - return do_dentry_open(filp, NULL, cred);
6301 - }
6302 -}
6303 -EXPORT_SYMBOL(vfs_open);
6304 -
6305 static inline int build_open_flags(int flags, umode_t mode, struct open_flags *op)
6306 {
6307 int lookup_flags = 0;
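The fs/open.c rework replaces the per-inode dentry_open() hook with a dentry operation: do_dentry_open() now takes the inode explicitly, and vfs_open() consults d_select_inode() (when the dentry carries DCACHE_OP_SELECT_INODE) to decide which inode actually gets opened, which lets overlayfs copy a file up and then direct the open at the backing inode. A simplified sketch of the selection step; the structs are cut-down stand-ins, not the real kernel definitions:

    struct inode;                        /* opaque in this sketch */

    struct dentry {
        struct inode *d_inode;
        struct inode *(*select_inode)(struct dentry *, unsigned int);
    };

    /* Let a stacking filesystem redirect the open; otherwise use the
     * dentry's own inode, as plain filesystems always have. */
    static struct inode *inode_for_open(struct dentry *d, unsigned int flags)
    {
        if (d->select_inode)
            return d->select_inode(d, flags);
        return d->d_inode;
    }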
6308 diff --git a/fs/overlayfs/inode.c b/fs/overlayfs/inode.c
6309 index 07d74b24913b..e3903b74a1f2 100644
6310 --- a/fs/overlayfs/inode.c
6311 +++ b/fs/overlayfs/inode.c
6312 @@ -333,37 +333,33 @@ static bool ovl_open_need_copy_up(int flags, enum ovl_path_type type,
6313 return true;
6314 }
6315
6316 -static int ovl_dentry_open(struct dentry *dentry, struct file *file,
6317 - const struct cred *cred)
6318 +struct inode *ovl_d_select_inode(struct dentry *dentry, unsigned file_flags)
6319 {
6320 int err;
6321 struct path realpath;
6322 enum ovl_path_type type;
6323 - bool want_write = false;
6324 +
6325 + if (d_is_dir(dentry))
6326 + return d_backing_inode(dentry);
6327
6328 type = ovl_path_real(dentry, &realpath);
6329 - if (ovl_open_need_copy_up(file->f_flags, type, realpath.dentry)) {
6330 - want_write = true;
6331 + if (ovl_open_need_copy_up(file_flags, type, realpath.dentry)) {
6332 err = ovl_want_write(dentry);
6333 if (err)
6334 - goto out;
6335 + return ERR_PTR(err);
6336
6337 - if (file->f_flags & O_TRUNC)
6338 + if (file_flags & O_TRUNC)
6339 err = ovl_copy_up_last(dentry, NULL, true);
6340 else
6341 err = ovl_copy_up(dentry);
6342 + ovl_drop_write(dentry);
6343 if (err)
6344 - goto out_drop_write;
6345 + return ERR_PTR(err);
6346
6347 ovl_path_upper(dentry, &realpath);
6348 }
6349
6350 - err = vfs_open(&realpath, file, cred);
6351 -out_drop_write:
6352 - if (want_write)
6353 - ovl_drop_write(dentry);
6354 -out:
6355 - return err;
6356 + return d_backing_inode(realpath.dentry);
6357 }
6358
6359 static const struct inode_operations ovl_file_inode_operations = {
6360 @@ -374,7 +370,6 @@ static const struct inode_operations ovl_file_inode_operations = {
6361 .getxattr = ovl_getxattr,
6362 .listxattr = ovl_listxattr,
6363 .removexattr = ovl_removexattr,
6364 - .dentry_open = ovl_dentry_open,
6365 };
6366
6367 static const struct inode_operations ovl_symlink_inode_operations = {
6368 diff --git a/fs/overlayfs/overlayfs.h b/fs/overlayfs/overlayfs.h
6369 index 814bed33dd07..1714fcc7603e 100644
6370 --- a/fs/overlayfs/overlayfs.h
6371 +++ b/fs/overlayfs/overlayfs.h
6372 @@ -165,6 +165,7 @@ ssize_t ovl_getxattr(struct dentry *dentry, const char *name,
6373 void *value, size_t size);
6374 ssize_t ovl_listxattr(struct dentry *dentry, char *list, size_t size);
6375 int ovl_removexattr(struct dentry *dentry, const char *name);
6376 +struct inode *ovl_d_select_inode(struct dentry *dentry, unsigned file_flags);
6377
6378 struct inode *ovl_new_inode(struct super_block *sb, umode_t mode,
6379 struct ovl_entry *oe);
6380 diff --git a/fs/overlayfs/super.c b/fs/overlayfs/super.c
6381 index f16d318b71f8..6256c8ed52c9 100644
6382 --- a/fs/overlayfs/super.c
6383 +++ b/fs/overlayfs/super.c
6384 @@ -269,6 +269,7 @@ static void ovl_dentry_release(struct dentry *dentry)
6385
6386 static const struct dentry_operations ovl_dentry_operations = {
6387 .d_release = ovl_dentry_release,
6388 + .d_select_inode = ovl_d_select_inode,
6389 };
6390
6391 static struct ovl_entry *ovl_alloc_entry(void)
6392 diff --git a/include/linux/dcache.h b/include/linux/dcache.h
6393 index 1c2f1b84468b..340ee0dae93b 100644
6394 --- a/include/linux/dcache.h
6395 +++ b/include/linux/dcache.h
6396 @@ -160,6 +160,7 @@ struct dentry_operations {
6397 char *(*d_dname)(struct dentry *, char *, int);
6398 struct vfsmount *(*d_automount)(struct path *);
6399 int (*d_manage)(struct dentry *, bool);
6400 + struct inode *(*d_select_inode)(struct dentry *, unsigned);
6401 } ____cacheline_aligned;
6402
6403 /*
6404 @@ -222,6 +223,7 @@ struct dentry_operations {
6405 #define DCACHE_FILE_TYPE 0x00400000 /* Other file type */
6406
6407 #define DCACHE_MAY_FREE 0x00800000
6408 +#define DCACHE_OP_SELECT_INODE 0x02000000 /* Unioned entry: dcache op selects inode */
6409
6410 extern seqlock_t rename_lock;
6411
6412 @@ -468,4 +470,61 @@ static inline unsigned long vfs_pressure_ratio(unsigned long val)
6413 {
6414 return mult_frac(val, sysctl_vfs_cache_pressure, 100);
6415 }
6416 +
6417 +/**
6418 + * d_inode - Get the actual inode of this dentry
6419 + * @dentry: The dentry to query
6420 + *
6421 + * This is the helper normal filesystems should use to get at their own inodes
6422 + * in their own dentries and ignore the layering superimposed upon them.
6423 + */
6424 +static inline struct inode *d_inode(const struct dentry *dentry)
6425 +{
6426 + return dentry->d_inode;
6427 +}
6428 +
6429 +/**
6430 + * d_inode_rcu - Get the actual inode of this dentry with ACCESS_ONCE()
6431 + * @dentry: The dentry to query
6432 + *
6433 + * This is the helper normal filesystems should use to get at their own inodes
6434 + * in their own dentries and ignore the layering superimposed upon them.
6435 + */
6436 +static inline struct inode *d_inode_rcu(const struct dentry *dentry)
6437 +{
6438 + return ACCESS_ONCE(dentry->d_inode);
6439 +}
6440 +
6441 +/**
6442 + * d_backing_inode - Get upper or lower inode we should be using
6443 + * @upper: The upper layer
6444 + *
6445 + * This is the helper that should be used to get at the inode that will be used
6446 + * if this dentry were to be opened as a file. The inode may be on the upper
6447 + * dentry or it may be on a lower dentry pinned by the upper.
6448 + *
6449 + * Normal filesystems should not use this to access their own inodes.
6450 + */
6451 +static inline struct inode *d_backing_inode(const struct dentry *upper)
6452 +{
6453 + struct inode *inode = upper->d_inode;
6454 +
6455 + return inode;
6456 +}
6457 +
6458 +/**
6459 + * d_backing_dentry - Get upper or lower dentry we should be using
6460 + * @upper: The upper layer
6461 + *
6462 + * This is the helper that should be used to get the dentry of the inode that
6463 + * will be used if this dentry were opened as a file. It may be the upper
6464 + * dentry or it may be a lower dentry pinned by the upper.
6465 + *
6466 + * Normal filesystems should not use this to access their own dentries.
6467 + */
6468 +static inline struct dentry *d_backing_dentry(struct dentry *upper)
6469 +{
6470 + return upper;
6471 +}
6472 +
6473 #endif /* __LINUX_DCACHE_H */
6474 diff --git a/include/linux/fs.h b/include/linux/fs.h
6475 index 84d672914bd8..6fd017e25c0a 100644
6476 --- a/include/linux/fs.h
6477 +++ b/include/linux/fs.h
6478 @@ -1552,7 +1552,6 @@ struct inode_operations {
6479 int (*set_acl)(struct inode *, struct posix_acl *, int);
6480
6481 /* WARNING: probably going away soon, do not use! */
6482 - int (*dentry_open)(struct dentry *, struct file *, const struct cred *);
6483 } ____cacheline_aligned;
6484
6485 ssize_t rw_copy_check_uvector(int type, const struct iovec __user * uvector,
6486 @@ -2068,7 +2067,6 @@ extern struct file *file_open_name(struct filename *, int, umode_t);
6487 extern struct file *filp_open(const char *, int, umode_t);
6488 extern struct file *file_open_root(struct dentry *, struct vfsmount *,
6489 const char *, int);
6490 -extern int vfs_open(const struct path *, struct file *, const struct cred *);
6491 extern struct file * dentry_open(const struct path *, int, const struct cred *);
6492 extern int filp_close(struct file *, fl_owner_t id);
6493
6494 diff --git a/include/linux/if_link.h b/include/linux/if_link.h
6495 index 119130e9298b..da4929927f69 100644
6496 --- a/include/linux/if_link.h
6497 +++ b/include/linux/if_link.h
6498 @@ -14,5 +14,6 @@ struct ifla_vf_info {
6499 __u32 linkstate;
6500 __u32 min_tx_rate;
6501 __u32 max_tx_rate;
6502 + __u32 rss_query_en;
6503 };
6504 #endif /* _LINUX_IF_LINK_H */
6505 diff --git a/include/linux/iio/iio.h b/include/linux/iio/iio.h
6506 index 15dc6bc2bdd2..6c17af80823c 100644
6507 --- a/include/linux/iio/iio.h
6508 +++ b/include/linux/iio/iio.h
6509 @@ -614,6 +614,15 @@ int iio_str_to_fixpoint(const char *str, int fract_mult, int *integer,
6510 #define IIO_DEGREE_TO_RAD(deg) (((deg) * 314159ULL + 9000000ULL) / 18000000ULL)
6511
6512 /**
6513 + * IIO_RAD_TO_DEGREE() - Convert rad to degree
6514 + * @rad: A value in rad
6515 + *
6516 + * Returns the given value converted from rad to degree
6517 + */
6518 +#define IIO_RAD_TO_DEGREE(rad) \
6519 + (((rad) * 18000000ULL + 314159ULL / 2) / 314159ULL)
6520 +
6521 +/**
6522 * IIO_G_TO_M_S_2() - Convert g to meter / second**2
6523 * @g: A value in g
6524 *
6525 @@ -621,4 +630,12 @@ int iio_str_to_fixpoint(const char *str, int fract_mult, int *integer,
6526 */
6527 #define IIO_G_TO_M_S_2(g) ((g) * 980665ULL / 100000ULL)
6528
6529 +/**
6530 + * IIO_M_S_2_TO_G() - Convert meter / second**2 to g
6531 + * @ms2: A value in meter / second**2
6532 + *
6533 + * Returns the given value converted from meter / second**2 to g
6534 + */
6535 +#define IIO_M_S_2_TO_G(ms2) (((ms2) * 100000ULL + 980665ULL / 2) / 980665ULL)
6536 +
6537 #endif /* _INDUSTRIAL_IO_H_ */
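Both new macros use the usual round-to-nearest idiom for unsigned integer division: add half the divisor before dividing, where a bare division would truncate. A minimal demonstration:

    #include <stdio.h>

    #define DIV_ROUND_NEAREST(x, d) (((x) + (d) / 2) / (d))

    int main(void)
    {
        /* 8/3 = 2.67: truncation yields 2, rounding yields 3. */
        printf("%llu %llu\n", 8ULL / 3ULL,
               DIV_ROUND_NEAREST(8ULL, 3ULL));
        return 0;
    }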
6538 diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
6539 index c3fd34da6c08..70fde9c5c61d 100644
6540 --- a/include/linux/netdevice.h
6541 +++ b/include/linux/netdevice.h
6542 @@ -859,6 +859,11 @@ typedef u16 (*select_queue_fallback_t)(struct net_device *dev,
6543 * int (*ndo_set_vf_link_state)(struct net_device *dev, int vf, int link_state);
6544 * int (*ndo_set_vf_port)(struct net_device *dev, int vf,
6545 * struct nlattr *port[]);
6546 + *
6547 + * Enable or disable the VF ability to query its RSS Redirection Table and
6548 + * Hash Key. This is needed since on some devices VFs share this information
6549 + * with the PF, and querying it may introduce a theoretical security risk.
6550 + * int (*ndo_set_vf_rss_query_en)(struct net_device *dev, int vf, bool setting);
6551 * int (*ndo_get_vf_port)(struct net_device *dev, int vf, struct sk_buff *skb);
6552 * int (*ndo_setup_tc)(struct net_device *dev, u8 tc)
6553 * Called to setup 'tc' number of traffic classes in the net device. This
6554 @@ -1071,6 +1076,9 @@ struct net_device_ops {
6555 struct nlattr *port[]);
6556 int (*ndo_get_vf_port)(struct net_device *dev,
6557 int vf, struct sk_buff *skb);
6558 + int (*ndo_set_vf_rss_query_en)(
6559 + struct net_device *dev,
6560 + int vf, bool setting);
6561 int (*ndo_setup_tc)(struct net_device *dev, u8 tc);
6562 #if IS_ENABLED(CONFIG_FCOE)
6563 int (*ndo_fcoe_enable)(struct net_device *dev);
6564 diff --git a/include/linux/security.h b/include/linux/security.h
6565 index ba96471c11ba..ea9eda4abdd5 100644
6566 --- a/include/linux/security.h
6567 +++ b/include/linux/security.h
6568 @@ -2471,7 +2471,7 @@ static inline int security_task_prctl(int option, unsigned long arg2,
6569 unsigned long arg4,
6570 unsigned long arg5)
6571 {
6572 - return cap_task_prctl(option, arg2, arg3, arg3, arg5);
6573 + return cap_task_prctl(option, arg2, arg3, arg4, arg5);
6574 }
6575
6576 static inline void security_task_to_inode(struct task_struct *p, struct inode *inode)
6577 diff --git a/include/target/iscsi/iscsi_target_core.h b/include/target/iscsi/iscsi_target_core.h
6578 new file mode 100644
6579 index 000000000000..7bd03f867fca
6580 --- /dev/null
6581 +++ b/include/target/iscsi/iscsi_target_core.h
6582 @@ -0,0 +1,904 @@
6583 +#ifndef ISCSI_TARGET_CORE_H
6584 +#define ISCSI_TARGET_CORE_H
6585 +
6586 +#include <linux/in.h>
6587 +#include <linux/configfs.h>
6588 +#include <net/sock.h>
6589 +#include <net/tcp.h>
6590 +#include <scsi/scsi_cmnd.h>
6591 +#include <scsi/iscsi_proto.h>
6592 +#include <target/target_core_base.h>
6593 +
6594 +#define ISCSIT_VERSION "v4.1.0"
6595 +#define ISCSI_MAX_DATASN_MISSING_COUNT 16
6596 +#define ISCSI_TX_THREAD_TCP_TIMEOUT 2
6597 +#define ISCSI_RX_THREAD_TCP_TIMEOUT 2
6598 +#define SECONDS_FOR_ASYNC_LOGOUT 10
6599 +#define SECONDS_FOR_ASYNC_TEXT 10
6600 +#define SECONDS_FOR_LOGOUT_COMP 15
6601 +#define WHITE_SPACE " \t\v\f\n\r"
6602 +#define ISCSIT_MIN_TAGS 16
6603 +#define ISCSIT_EXTRA_TAGS 8
6604 +#define ISCSIT_TCP_BACKLOG 256
6605 +#define ISCSI_RX_THREAD_NAME "iscsi_trx"
6606 +#define ISCSI_TX_THREAD_NAME "iscsi_ttx"
6607 +
6608 +/* struct iscsi_node_attrib sanity values */
6609 +#define NA_DATAOUT_TIMEOUT 3
6610 +#define NA_DATAOUT_TIMEOUT_MAX 60
6611 +#define NA_DATAOUT_TIMEOUT_MIX 2
6612 +#define NA_DATAOUT_TIMEOUT_RETRIES 5
6613 +#define NA_DATAOUT_TIMEOUT_RETRIES_MAX 15
6614 +#define NA_DATAOUT_TIMEOUT_RETRIES_MIN 1
6615 +#define NA_NOPIN_TIMEOUT 15
6616 +#define NA_NOPIN_TIMEOUT_MAX 60
6617 +#define NA_NOPIN_TIMEOUT_MIN 3
6618 +#define NA_NOPIN_RESPONSE_TIMEOUT 30
6619 +#define NA_NOPIN_RESPONSE_TIMEOUT_MAX 60
6620 +#define NA_NOPIN_RESPONSE_TIMEOUT_MIN 3
6621 +#define NA_RANDOM_DATAIN_PDU_OFFSETS 0
6622 +#define NA_RANDOM_DATAIN_SEQ_OFFSETS 0
6623 +#define NA_RANDOM_R2T_OFFSETS 0
6624 +
6625 +/* struct iscsi_tpg_attrib sanity values */
6626 +#define TA_AUTHENTICATION 1
6627 +#define TA_LOGIN_TIMEOUT 15
6628 +#define TA_LOGIN_TIMEOUT_MAX 30
6629 +#define TA_LOGIN_TIMEOUT_MIN 5
6630 +#define TA_NETIF_TIMEOUT 2
6631 +#define TA_NETIF_TIMEOUT_MAX 15
6632 +#define TA_NETIF_TIMEOUT_MIN 2
6633 +#define TA_GENERATE_NODE_ACLS 0
6634 +#define TA_DEFAULT_CMDSN_DEPTH 64
6635 +#define TA_DEFAULT_CMDSN_DEPTH_MAX 512
6636 +#define TA_DEFAULT_CMDSN_DEPTH_MIN 1
6637 +#define TA_CACHE_DYNAMIC_ACLS 0
6638 +/* Enabled by default in demo mode (generic_node_acls=1) */
6639 +#define TA_DEMO_MODE_WRITE_PROTECT 1
6640 +/* Disabled by default in production mode w/ explicit ACLs */
6641 +#define TA_PROD_MODE_WRITE_PROTECT 0
6642 +#define TA_DEMO_MODE_DISCOVERY 1
6643 +#define TA_DEFAULT_ERL 0
6644 +#define TA_CACHE_CORE_NPS 0
6645 +/* T10 protection information disabled by default */
6646 +#define TA_DEFAULT_T10_PI 0
6647 +#define TA_DEFAULT_FABRIC_PROT_TYPE 0
6648 +
6649 +#define ISCSI_IOV_DATA_BUFFER 5
6650 +
6651 +enum iscsit_transport_type {
6652 + ISCSI_TCP = 0,
6653 + ISCSI_SCTP_TCP = 1,
6654 + ISCSI_SCTP_UDP = 2,
6655 + ISCSI_IWARP_TCP = 3,
6656 + ISCSI_IWARP_SCTP = 4,
6657 + ISCSI_INFINIBAND = 5,
6658 +};
6659 +
6660 +/* RFC-3720 7.1.4 Standard Connection State Diagram for a Target */
6661 +enum target_conn_state_table {
6662 + TARG_CONN_STATE_FREE = 0x1,
6663 + TARG_CONN_STATE_XPT_UP = 0x3,
6664 + TARG_CONN_STATE_IN_LOGIN = 0x4,
6665 + TARG_CONN_STATE_LOGGED_IN = 0x5,
6666 + TARG_CONN_STATE_IN_LOGOUT = 0x6,
6667 + TARG_CONN_STATE_LOGOUT_REQUESTED = 0x7,
6668 + TARG_CONN_STATE_CLEANUP_WAIT = 0x8,
6669 +};
6670 +
6671 +/* RFC-3720 7.3.2 Session State Diagram for a Target */
6672 +enum target_sess_state_table {
6673 + TARG_SESS_STATE_FREE = 0x1,
6674 + TARG_SESS_STATE_ACTIVE = 0x2,
6675 + TARG_SESS_STATE_LOGGED_IN = 0x3,
6676 + TARG_SESS_STATE_FAILED = 0x4,
6677 + TARG_SESS_STATE_IN_CONTINUE = 0x5,
6678 +};
6679 +
6680 +/* struct iscsi_data_count->type */
6681 +enum data_count_type {
6682 + ISCSI_RX_DATA = 1,
6683 + ISCSI_TX_DATA = 2,
6684 +};
6685 +
6686 +/* struct iscsi_datain_req->dr_complete */
6687 +enum datain_req_comp_table {
6688 + DATAIN_COMPLETE_NORMAL = 1,
6689 + DATAIN_COMPLETE_WITHIN_COMMAND_RECOVERY = 2,
6690 + DATAIN_COMPLETE_CONNECTION_RECOVERY = 3,
6691 +};
6692 +
6693 +/* struct iscsi_datain_req->recovery */
6694 +enum datain_req_rec_table {
6695 + DATAIN_WITHIN_COMMAND_RECOVERY = 1,
6696 + DATAIN_CONNECTION_RECOVERY = 2,
6697 +};
6698 +
6699 +/* struct iscsi_portal_group->state */
6700 +enum tpg_state_table {
6701 + TPG_STATE_FREE = 0,
6702 + TPG_STATE_ACTIVE = 1,
6703 + TPG_STATE_INACTIVE = 2,
6704 + TPG_STATE_COLD_RESET = 3,
6705 +};
6706 +
6707 +/* struct iscsi_tiqn->tiqn_state */
6708 +enum tiqn_state_table {
6709 + TIQN_STATE_ACTIVE = 1,
6710 + TIQN_STATE_SHUTDOWN = 2,
6711 +};
6712 +
6713 +/* struct iscsi_cmd->cmd_flags */
6714 +enum cmd_flags_table {
6715 + ICF_GOT_LAST_DATAOUT = 0x00000001,
6716 + ICF_GOT_DATACK_SNACK = 0x00000002,
6717 + ICF_NON_IMMEDIATE_UNSOLICITED_DATA = 0x00000004,
6718 + ICF_SENT_LAST_R2T = 0x00000008,
6719 + ICF_WITHIN_COMMAND_RECOVERY = 0x00000010,
6720 + ICF_CONTIG_MEMORY = 0x00000020,
6721 + ICF_ATTACHED_TO_RQUEUE = 0x00000040,
6722 + ICF_OOO_CMDSN = 0x00000080,
6723 + ICF_SENDTARGETS_ALL = 0x00000100,
6724 + ICF_SENDTARGETS_SINGLE = 0x00000200,
6725 +};
6726 +
6727 +/* struct iscsi_cmd->i_state */
6728 +enum cmd_i_state_table {
6729 + ISTATE_NO_STATE = 0,
6730 + ISTATE_NEW_CMD = 1,
6731 + ISTATE_DEFERRED_CMD = 2,
6732 + ISTATE_UNSOLICITED_DATA = 3,
6733 + ISTATE_RECEIVE_DATAOUT = 4,
6734 + ISTATE_RECEIVE_DATAOUT_RECOVERY = 5,
6735 + ISTATE_RECEIVED_LAST_DATAOUT = 6,
6736 + ISTATE_WITHIN_DATAOUT_RECOVERY = 7,
6737 + ISTATE_IN_CONNECTION_RECOVERY = 8,
6738 + ISTATE_RECEIVED_TASKMGT = 9,
6739 + ISTATE_SEND_ASYNCMSG = 10,
6740 + ISTATE_SENT_ASYNCMSG = 11,
6741 + ISTATE_SEND_DATAIN = 12,
6742 + ISTATE_SEND_LAST_DATAIN = 13,
6743 + ISTATE_SENT_LAST_DATAIN = 14,
6744 + ISTATE_SEND_LOGOUTRSP = 15,
6745 + ISTATE_SENT_LOGOUTRSP = 16,
6746 + ISTATE_SEND_NOPIN = 17,
6747 + ISTATE_SENT_NOPIN = 18,
6748 + ISTATE_SEND_REJECT = 19,
6749 + ISTATE_SENT_REJECT = 20,
6750 + ISTATE_SEND_R2T = 21,
6751 + ISTATE_SENT_R2T = 22,
6752 + ISTATE_SEND_R2T_RECOVERY = 23,
6753 + ISTATE_SENT_R2T_RECOVERY = 24,
6754 + ISTATE_SEND_LAST_R2T = 25,
6755 + ISTATE_SENT_LAST_R2T = 26,
6756 + ISTATE_SEND_LAST_R2T_RECOVERY = 27,
6757 + ISTATE_SENT_LAST_R2T_RECOVERY = 28,
6758 + ISTATE_SEND_STATUS = 29,
6759 + ISTATE_SEND_STATUS_BROKEN_PC = 30,
6760 + ISTATE_SENT_STATUS = 31,
6761 + ISTATE_SEND_STATUS_RECOVERY = 32,
6762 + ISTATE_SENT_STATUS_RECOVERY = 33,
6763 + ISTATE_SEND_TASKMGTRSP = 34,
6764 + ISTATE_SENT_TASKMGTRSP = 35,
6765 + ISTATE_SEND_TEXTRSP = 36,
6766 + ISTATE_SENT_TEXTRSP = 37,
6767 + ISTATE_SEND_NOPIN_WANT_RESPONSE = 38,
6768 + ISTATE_SENT_NOPIN_WANT_RESPONSE = 39,
6769 + ISTATE_SEND_NOPIN_NO_RESPONSE = 40,
6770 + ISTATE_REMOVE = 41,
6771 + ISTATE_FREE = 42,
6772 +};
6773 +
6774 +/* Used for iscsi_recover_cmdsn() return values */
6775 +enum recover_cmdsn_ret_table {
6776 + CMDSN_ERROR_CANNOT_RECOVER = -1,
6777 + CMDSN_NORMAL_OPERATION = 0,
6778 + CMDSN_LOWER_THAN_EXP = 1,
6779 + CMDSN_HIGHER_THAN_EXP = 2,
6780 + CMDSN_MAXCMDSN_OVERRUN = 3,
6781 +};
6782 +
6783 +/* Used for iscsi_handle_immediate_data() return values */
6784 +enum immedate_data_ret_table {
6785 + IMMEDIATE_DATA_CANNOT_RECOVER = -1,
6786 + IMMEDIATE_DATA_NORMAL_OPERATION = 0,
6787 + IMMEDIATE_DATA_ERL1_CRC_FAILURE = 1,
6788 +};
6789 +
6790 +/* Used for iscsi_decide_dataout_action() return values */
6791 +enum dataout_action_ret_table {
6792 + DATAOUT_CANNOT_RECOVER = -1,
6793 + DATAOUT_NORMAL = 0,
6794 + DATAOUT_SEND_R2T = 1,
6795 + DATAOUT_SEND_TO_TRANSPORT = 2,
6796 + DATAOUT_WITHIN_COMMAND_RECOVERY = 3,
6797 +};
6798 +
6799 +/* Used for struct iscsi_node_auth->naf_flags */
6800 +enum naf_flags_table {
6801 + NAF_USERID_SET = 0x01,
6802 + NAF_PASSWORD_SET = 0x02,
6803 + NAF_USERID_IN_SET = 0x04,
6804 + NAF_PASSWORD_IN_SET = 0x08,
6805 +};
6806 +
6807 +/* Used by various struct timer_list to manage iSCSI specific state */
6808 +enum iscsi_timer_flags_table {
6809 + ISCSI_TF_RUNNING = 0x01,
6810 + ISCSI_TF_STOP = 0x02,
6811 + ISCSI_TF_EXPIRED = 0x04,
6812 +};
6813 +
6814 +/* Used for struct iscsi_np->np_flags */
6815 +enum np_flags_table {
6816 + NPF_IP_NETWORK = 0x00,
6817 +};
6818 +
6819 +/* Used for struct iscsi_np->np_thread_state */
6820 +enum np_thread_state_table {
6821 + ISCSI_NP_THREAD_ACTIVE = 1,
6822 + ISCSI_NP_THREAD_INACTIVE = 2,
6823 + ISCSI_NP_THREAD_RESET = 3,
6824 + ISCSI_NP_THREAD_SHUTDOWN = 4,
6825 + ISCSI_NP_THREAD_EXIT = 5,
6826 +};
6827 +
6828 +struct iscsi_conn_ops {
6829 + u8 HeaderDigest; /* [0,1] == [None,CRC32C] */
6830 + u8 DataDigest; /* [0,1] == [None,CRC32C] */
6831 + u32 MaxRecvDataSegmentLength; /* [512..2**24-1] */
6832 + u32 MaxXmitDataSegmentLength; /* [512..2**24-1] */
6833 + u8 OFMarker; /* [0,1] == [No,Yes] */
6834 + u8 IFMarker; /* [0,1] == [No,Yes] */
6835 + u32 OFMarkInt; /* [1..65535] */
6836 + u32 IFMarkInt; /* [1..65535] */
6837 + /*
6838 + * iSER specific connection parameters
6839 + */
6840 + u32 InitiatorRecvDataSegmentLength; /* [512..2**24-1] */
6841 + u32 TargetRecvDataSegmentLength; /* [512..2**24-1] */
6842 +};
6843 +
6844 +struct iscsi_sess_ops {
6845 + char InitiatorName[224];
6846 + char InitiatorAlias[256];
6847 + char TargetName[224];
6848 + char TargetAlias[256];
6849 + char TargetAddress[256];
6850 + u16 TargetPortalGroupTag; /* [0..65535] */
6851 + u16 MaxConnections; /* [1..65535] */
6852 + u8 InitialR2T; /* [0,1] == [No,Yes] */
6853 + u8 ImmediateData; /* [0,1] == [No,Yes] */
6854 + u32 MaxBurstLength; /* [512..2**24-1] */
6855 + u32 FirstBurstLength; /* [512..2**24-1] */
6856 + u16 DefaultTime2Wait; /* [0..3600] */
6857 + u16 DefaultTime2Retain; /* [0..3600] */
6858 + u16 MaxOutstandingR2T; /* [1..65535] */
6859 + u8 DataPDUInOrder; /* [0,1] == [No,Yes] */
6860 + u8 DataSequenceInOrder; /* [0,1] == [No,Yes] */
6861 + u8 ErrorRecoveryLevel; /* [0..2] */
6862 + u8 SessionType; /* [0,1] == [Normal,Discovery]*/
6863 + /*
6864 + * iSER specific session parameters
6865 + */
6866 + u8 RDMAExtensions; /* [0,1] == [No,Yes] */
6867 +};
6868 +
6869 +struct iscsi_queue_req {
6870 + int state;
6871 + struct iscsi_cmd *cmd;
6872 + struct list_head qr_list;
6873 +};
6874 +
6875 +struct iscsi_data_count {
6876 + int data_length;
6877 + int sync_and_steering;
6878 + enum data_count_type type;
6879 + u32 iov_count;
6880 + u32 ss_iov_count;
6881 + u32 ss_marker_count;
6882 + struct kvec *iov;
6883 +};
6884 +
6885 +struct iscsi_param_list {
6886 + bool iser;
6887 + struct list_head param_list;
6888 + struct list_head extra_response_list;
6889 +};
6890 +
6891 +struct iscsi_datain_req {
6892 + enum datain_req_comp_table dr_complete;
6893 + int generate_recovery_values;
6894 + enum datain_req_rec_table recovery;
6895 + u32 begrun;
6896 + u32 runlength;
6897 + u32 data_length;
6898 + u32 data_offset;
6899 + u32 data_sn;
6900 + u32 next_burst_len;
6901 + u32 read_data_done;
6902 + u32 seq_send_order;
6903 + struct list_head cmd_datain_node;
6904 +} ____cacheline_aligned;
6905 +
6906 +struct iscsi_ooo_cmdsn {
6907 + u16 cid;
6908 + u32 batch_count;
6909 + u32 cmdsn;
6910 + u32 exp_cmdsn;
6911 + struct iscsi_cmd *cmd;
6912 + struct list_head ooo_list;
6913 +} ____cacheline_aligned;
6914 +
6915 +struct iscsi_datain {
6916 + u8 flags;
6917 + u32 data_sn;
6918 + u32 length;
6919 + u32 offset;
6920 +} ____cacheline_aligned;
6921 +
6922 +struct iscsi_r2t {
6923 + int seq_complete;
6924 + int recovery_r2t;
6925 + int sent_r2t;
6926 + u32 r2t_sn;
6927 + u32 offset;
6928 + u32 targ_xfer_tag;
6929 + u32 xfer_len;
6930 + struct list_head r2t_list;
6931 +} ____cacheline_aligned;
6932 +
6933 +struct iscsi_cmd {
6934 + enum iscsi_timer_flags_table dataout_timer_flags;
6935 + /* DataOUT timeout retries */
6936 + u8 dataout_timeout_retries;
6937 + /* Within command recovery count */
6938 + u8 error_recovery_count;
6939 + /* iSCSI dependent state for out of order CmdSNs */
6940 + enum cmd_i_state_table deferred_i_state;
6941 + /* iSCSI dependent state */
6942 + enum cmd_i_state_table i_state;
6943 + /* Command is an immediate command (ISCSI_OP_IMMEDIATE set) */
6944 + u8 immediate_cmd;
6945 + /* Immediate data present */
6946 + u8 immediate_data;
6947 + /* iSCSI Opcode */
6948 + u8 iscsi_opcode;
6949 + /* iSCSI Response Code */
6950 + u8 iscsi_response;
6951 + /* Logout reason when iscsi_opcode == ISCSI_INIT_LOGOUT_CMND */
6952 + u8 logout_reason;
6953 + /* Logout response code when iscsi_opcode == ISCSI_INIT_LOGOUT_CMND */
6954 + u8 logout_response;
6955 + /* MaxCmdSN has been incremented */
6956 + u8 maxcmdsn_inc;
6957 + /* Immediate Unsolicited Dataout */
6958 + u8 unsolicited_data;
6959 + /* Reject reason code */
6960 + u8 reject_reason;
6961 + /* CID contained in logout PDU when opcode == ISCSI_INIT_LOGOUT_CMND */
6962 + u16 logout_cid;
6963 + /* Command flags */
6964 + enum cmd_flags_table cmd_flags;
6965 + /* Initiator Task Tag assigned from Initiator */
6966 + itt_t init_task_tag;
6967 + /* Target Transfer Tag assigned from Target */
6968 + u32 targ_xfer_tag;
6969 + /* CmdSN assigned from Initiator */
6970 + u32 cmd_sn;
6971 + /* ExpStatSN assigned from Initiator */
6972 + u32 exp_stat_sn;
6973 + /* StatSN assigned to this ITT */
6974 + u32 stat_sn;
6975 + /* DataSN Counter */
6976 + u32 data_sn;
6977 + /* R2TSN Counter */
6978 + u32 r2t_sn;
6979 + /* Last DataSN acknowledged via DataAck SNACK */
6980 + u32 acked_data_sn;
6981 + /* Used for echoing NOPOUT ping data */
6982 + u32 buf_ptr_size;
6983 + /* Used to store DataDigest */
6984 + u32 data_crc;
6985 + /* Counter for MaxOutstandingR2T */
6986 + u32 outstanding_r2ts;
6987 + /* Next R2T Offset when DataSequenceInOrder=Yes */
6988 + u32 r2t_offset;
6989 + /* Iovec current and orig count for iscsi_cmd->iov_data */
6990 + u32 iov_data_count;
6991 + u32 orig_iov_data_count;
6992 + /* Number of miscellaneous iovecs used for IP stack calls */
6993 + u32 iov_misc_count;
6994 + /* Number of struct iscsi_pdu in struct iscsi_cmd->pdu_list */
6995 + u32 pdu_count;
6996 + /* Next struct iscsi_pdu to send in struct iscsi_cmd->pdu_list */
6997 + u32 pdu_send_order;
6998 + /* Current struct iscsi_pdu in struct iscsi_cmd->pdu_list */
6999 + u32 pdu_start;
7000 + /* Next struct iscsi_seq to send in struct iscsi_cmd->seq_list */
7001 + u32 seq_send_order;
7002 + /* Number of struct iscsi_seq in struct iscsi_cmd->seq_list */
7003 + u32 seq_count;
7004 + /* Current struct iscsi_seq in struct iscsi_cmd->seq_list */
7005 + u32 seq_no;
7006 + /* Lowest offset in current DataOUT sequence */
7007 + u32 seq_start_offset;
7008 + /* Highest offset in current DataOUT sequence */
7009 + u32 seq_end_offset;
7010 + /* Total size in bytes received so far of READ data */
7011 + u32 read_data_done;
7012 + /* Total size in bytes received so far of WRITE data */
7013 + u32 write_data_done;
7014 + /* Counter for FirstBurstLength key */
7015 + u32 first_burst_len;
7016 + /* Counter for MaxBurstLength key */
7017 + u32 next_burst_len;
7018 + /* Transfer size used for IP stack calls */
7019 + u32 tx_size;
7020 + /* Buffer used for various purposes */
7021 + void *buf_ptr;
7022 + /* Used by SendTargets=[iqn.,eui.] discovery */
7023 + void *text_in_ptr;
7024 + /* See include/linux/dma-mapping.h */
7025 + enum dma_data_direction data_direction;
7026 + /* iSCSI PDU Header + CRC */
7027 + unsigned char pdu[ISCSI_HDR_LEN + ISCSI_CRC_LEN];
7028 + /* Number of times struct iscsi_cmd is present in immediate queue */
7029 + atomic_t immed_queue_count;
7030 + atomic_t response_queue_count;
7031 + spinlock_t datain_lock;
7032 + spinlock_t dataout_timeout_lock;
7033 + /* spinlock for protecting struct iscsi_cmd->i_state */
7034 + spinlock_t istate_lock;
7035 + /* spinlock for adding within command recovery entries */
7036 + spinlock_t error_lock;
7037 + /* spinlock for adding R2Ts */
7038 + spinlock_t r2t_lock;
7039 + /* DataIN List */
7040 + struct list_head datain_list;
7041 + /* R2T List */
7042 + struct list_head cmd_r2t_list;
7043 + /* Timer for DataOUT */
7044 + struct timer_list dataout_timer;
7045 + /* Iovecs for SCSI data payload RX/TX w/ kernel level sockets */
7046 + struct kvec *iov_data;
7047 + /* Iovecs for miscellaneous purposes */
7048 +#define ISCSI_MISC_IOVECS 5
7049 + struct kvec iov_misc[ISCSI_MISC_IOVECS];
7050 + /* Array of struct iscsi_pdu used for DataPDUInOrder=No */
7051 + struct iscsi_pdu *pdu_list;
7052 + /* Current struct iscsi_pdu used for DataPDUInOrder=No */
7053 + struct iscsi_pdu *pdu_ptr;
7054 + /* Array of struct iscsi_seq used for DataSequenceInOrder=No */
7055 + struct iscsi_seq *seq_list;
7056 + /* Current struct iscsi_seq used for DataSequenceInOrder=No */
7057 + struct iscsi_seq *seq_ptr;
7058 + /* TMR Request when iscsi_opcode == ISCSI_OP_SCSI_TMFUNC */
7059 + struct iscsi_tmr_req *tmr_req;
7060 + /* Connection this command is allegiant to */
7061 + struct iscsi_conn *conn;
7062 + /* Pointer to connection recovery entry */
7063 + struct iscsi_conn_recovery *cr;
7064 + /* Session the command is part of, used for connection recovery */
7065 + struct iscsi_session *sess;
7066 + /* list_head for connection list */
7067 + struct list_head i_conn_node;
7068 + /* The TCM I/O descriptor that is accessed via container_of() */
7069 + struct se_cmd se_cmd;
7070 + /* Sense buffer that will be mapped into outgoing status */
7071 +#define ISCSI_SENSE_BUFFER_LEN (TRANSPORT_SENSE_BUFFER + 2)
7072 + unsigned char sense_buffer[ISCSI_SENSE_BUFFER_LEN];
7073 +
7074 + u32 padding;
7075 + u8 pad_bytes[4];
7076 +
7077 + struct scatterlist *first_data_sg;
7078 + u32 first_data_sg_off;
7079 + u32 kmapped_nents;
7080 + sense_reason_t sense_reason;
7081 +} ____cacheline_aligned;
7082 +
7083 +struct iscsi_tmr_req {
7084 + bool task_reassign:1;
7085 + u32 exp_data_sn;
7086 + struct iscsi_cmd *ref_cmd;
7087 + struct iscsi_conn_recovery *conn_recovery;
7088 + struct se_tmr_req *se_tmr_req;
7089 +};
7090 +
7091 +struct iscsi_conn {
7092 + wait_queue_head_t queues_wq;
7093 + /* Authentication Successful for this connection */
7094 + u8 auth_complete;
7095 + /* State connection is currently in */
7096 + u8 conn_state;
7097 + u8 conn_logout_reason;
7098 + u8 network_transport;
7099 + enum iscsi_timer_flags_table nopin_timer_flags;
7100 + enum iscsi_timer_flags_table nopin_response_timer_flags;
7101 + /* Used to know what thread encountered a transport failure */
7102 + u8 which_thread;
7103 + /* connection id assigned by the Initiator */
7104 + u16 cid;
7105 + /* Remote TCP Port */
7106 + u16 login_port;
7107 + u16 local_port;
7108 + int net_size;
7109 + int login_family;
7110 + u32 auth_id;
7111 + u32 conn_flags;
7112 + /* Used for iscsi_tx_login_rsp() */
7113 + itt_t login_itt;
7114 + u32 exp_statsn;
7115 + /* Per connection status sequence number */
7116 + u32 stat_sn;
7117 + /* IFMarkInt's Current Value */
7118 + u32 if_marker;
7119 + /* OFMarkInt's Current Value */
7120 + u32 of_marker;
7121 + /* Used for calculating OFMarker offset to next PDU */
7122 + u32 of_marker_offset;
7123 +#define IPV6_ADDRESS_SPACE 48
7124 + unsigned char login_ip[IPV6_ADDRESS_SPACE];
7125 + unsigned char local_ip[IPV6_ADDRESS_SPACE];
7126 + int conn_usage_count;
7127 + int conn_waiting_on_uc;
7128 + atomic_t check_immediate_queue;
7129 + atomic_t conn_logout_remove;
7130 + atomic_t connection_exit;
7131 + atomic_t connection_recovery;
7132 + atomic_t connection_reinstatement;
7133 + atomic_t connection_wait_rcfr;
7134 + atomic_t sleep_on_conn_wait_comp;
7135 + atomic_t transport_failed;
7136 + struct completion conn_post_wait_comp;
7137 + struct completion conn_wait_comp;
7138 + struct completion conn_wait_rcfr_comp;
7139 + struct completion conn_waiting_on_uc_comp;
7140 + struct completion conn_logout_comp;
7141 + struct completion tx_half_close_comp;
7142 + struct completion rx_half_close_comp;
7143 + /* socket used by this connection */
7144 + struct socket *sock;
7145 + void (*orig_data_ready)(struct sock *);
7146 + void (*orig_state_change)(struct sock *);
7147 +#define LOGIN_FLAGS_READ_ACTIVE 1
7148 +#define LOGIN_FLAGS_CLOSED 2
7149 +#define LOGIN_FLAGS_READY 4
7150 + unsigned long login_flags;
7151 + struct delayed_work login_work;
7152 + struct delayed_work login_cleanup_work;
7153 + struct iscsi_login *login;
7154 + struct timer_list nopin_timer;
7155 + struct timer_list nopin_response_timer;
7156 + struct timer_list transport_timer;
7157 + struct task_struct *login_kworker;
7158 + /* Spinlock used for add/deleting cmd's from conn_cmd_list */
7159 + spinlock_t cmd_lock;
7160 + spinlock_t conn_usage_lock;
7161 + spinlock_t immed_queue_lock;
7162 + spinlock_t nopin_timer_lock;
7163 + spinlock_t response_queue_lock;
7164 + spinlock_t state_lock;
7165 + /* libcrypto RX and TX contexts for crc32c */
7166 + struct hash_desc conn_rx_hash;
7167 + struct hash_desc conn_tx_hash;
7168 + /* Used for scheduling TX and RX connection kthreads */
7169 + cpumask_var_t conn_cpumask;
7170 + unsigned int conn_rx_reset_cpumask:1;
7171 + unsigned int conn_tx_reset_cpumask:1;
7172 + /* list_head of struct iscsi_cmd for this connection */
7173 + struct list_head conn_cmd_list;
7174 + struct list_head immed_queue_list;
7175 + struct list_head response_queue_list;
7176 + struct iscsi_conn_ops *conn_ops;
7177 + struct iscsi_login *conn_login;
7178 + struct iscsit_transport *conn_transport;
7179 + struct iscsi_param_list *param_list;
7180 + /* Used for per connection auth state machine */
7181 + void *auth_protocol;
7182 + void *context;
7183 + struct iscsi_login_thread_s *login_thread;
7184 + struct iscsi_portal_group *tpg;
7185 + struct iscsi_tpg_np *tpg_np;
7186 + /* Pointer to parent session */
7187 + struct iscsi_session *sess;
7188 + int bitmap_id;
7189 + int rx_thread_active;
7190 + struct task_struct *rx_thread;
7191 + struct completion rx_login_comp;
7192 + int tx_thread_active;
7193 + struct task_struct *tx_thread;
7194 + /* list_head for session connection list */
7195 + struct list_head conn_list;
7196 +} ____cacheline_aligned;
7197 +
7198 +struct iscsi_conn_recovery {
7199 + u16 cid;
7200 + u32 cmd_count;
7201 + u32 maxrecvdatasegmentlength;
7202 + u32 maxxmitdatasegmentlength;
7203 + int ready_for_reallegiance;
7204 + struct list_head conn_recovery_cmd_list;
7205 + spinlock_t conn_recovery_cmd_lock;
7206 + struct timer_list time2retain_timer;
7207 + struct iscsi_session *sess;
7208 + struct list_head cr_list;
7209 +} ____cacheline_aligned;
7210 +
7211 +struct iscsi_session {
7212 + u8 initiator_vendor;
7213 + u8 isid[6];
7214 + enum iscsi_timer_flags_table time2retain_timer_flags;
7215 + u8 version_active;
7216 + u16 cid_called;
7217 + u16 conn_recovery_count;
7218 + u16 tsih;
7219 + /* state session is currently in */
7220 + u32 session_state;
7221 + /* session wide counter: initiator assigned task tag */
7222 + itt_t init_task_tag;
7223 + /* session wide counter: target assigned task tag */
7224 + u32 targ_xfer_tag;
7225 + u32 cmdsn_window;
7226 +
7227 + /* protects cmdsn values */
7228 + struct mutex cmdsn_mutex;
7229 + /* session wide counter: expected command sequence number */
7230 + u32 exp_cmd_sn;
7231 + /* session wide counter: maximum allowed command sequence number */
7232 + u32 max_cmd_sn;
7233 + struct list_head sess_ooo_cmdsn_list;
7234 +
7235 + /* LIO specific session ID */
7236 + u32 sid;
7237 + char auth_type[8];
7238 + /* unique within the target */
7239 + int session_index;
7240 + /* Used for session reference counting */
7241 + int session_usage_count;
7242 + int session_waiting_on_uc;
7243 + atomic_long_t cmd_pdus;
7244 + atomic_long_t rsp_pdus;
7245 + atomic_long_t tx_data_octets;
7246 + atomic_long_t rx_data_octets;
7247 + atomic_long_t conn_digest_errors;
7248 + atomic_long_t conn_timeout_errors;
7249 + u64 creation_time;
7250 + /* Number of active connections */
7251 + atomic_t nconn;
7252 + atomic_t session_continuation;
7253 + atomic_t session_fall_back_to_erl0;
7254 + atomic_t session_logout;
7255 + atomic_t session_reinstatement;
7256 + atomic_t session_stop_active;
7257 + atomic_t sleep_on_sess_wait_comp;
7258 + /* connection list */
7259 + struct list_head sess_conn_list;
7260 + struct list_head cr_active_list;
7261 + struct list_head cr_inactive_list;
7262 + spinlock_t conn_lock;
7263 + spinlock_t cr_a_lock;
7264 + spinlock_t cr_i_lock;
7265 + spinlock_t session_usage_lock;
7266 + spinlock_t ttt_lock;
7267 + struct completion async_msg_comp;
7268 + struct completion reinstatement_comp;
7269 + struct completion session_wait_comp;
7270 + struct completion session_waiting_on_uc_comp;
7271 + struct timer_list time2retain_timer;
7272 + struct iscsi_sess_ops *sess_ops;
7273 + struct se_session *se_sess;
7274 + struct iscsi_portal_group *tpg;
7275 +} ____cacheline_aligned;
7276 +
7277 +struct iscsi_login {
7278 + u8 auth_complete;
7279 + u8 checked_for_existing;
7280 + u8 current_stage;
7281 + u8 leading_connection;
7282 + u8 first_request;
7283 + u8 version_min;
7284 + u8 version_max;
7285 + u8 login_complete;
7286 + u8 login_failed;
7287 + bool zero_tsih;
7288 + char isid[6];
7289 + u32 cmd_sn;
7290 + itt_t init_task_tag;
7291 + u32 initial_exp_statsn;
7292 + u32 rsp_length;
7293 + u16 cid;
7294 + u16 tsih;
7295 + char req[ISCSI_HDR_LEN];
7296 + char rsp[ISCSI_HDR_LEN];
7297 + char *req_buf;
7298 + char *rsp_buf;
7299 + struct iscsi_conn *conn;
7300 + struct iscsi_np *np;
7301 +} ____cacheline_aligned;
7302 +
7303 +struct iscsi_node_attrib {
7304 + u32 dataout_timeout;
7305 + u32 dataout_timeout_retries;
7306 + u32 default_erl;
7307 + u32 nopin_timeout;
7308 + u32 nopin_response_timeout;
7309 + u32 random_datain_pdu_offsets;
7310 + u32 random_datain_seq_offsets;
7311 + u32 random_r2t_offsets;
7312 + u32 tmr_cold_reset;
7313 + u32 tmr_warm_reset;
7314 + struct iscsi_node_acl *nacl;
7315 +};
7316 +
7317 +struct se_dev_entry_s;
7318 +
7319 +struct iscsi_node_auth {
7320 + enum naf_flags_table naf_flags;
7321 + int authenticate_target;
7322 + /* Used for iscsit_global->discovery_auth,
7323 + * set to zero (auth disabled) by default */
7324 + int enforce_discovery_auth;
7325 +#define MAX_USER_LEN 256
7326 +#define MAX_PASS_LEN 256
7327 + char userid[MAX_USER_LEN];
7328 + char password[MAX_PASS_LEN];
7329 + char userid_mutual[MAX_USER_LEN];
7330 + char password_mutual[MAX_PASS_LEN];
7331 +};
7332 +
7333 +#include "iscsi_target_stat.h"
7334 +
7335 +struct iscsi_node_stat_grps {
7336 + struct config_group iscsi_sess_stats_group;
7337 + struct config_group iscsi_conn_stats_group;
7338 +};
7339 +
7340 +struct iscsi_node_acl {
7341 + struct iscsi_node_attrib node_attrib;
7342 + struct iscsi_node_auth node_auth;
7343 + struct iscsi_node_stat_grps node_stat_grps;
7344 + struct se_node_acl se_node_acl;
7345 +};
7346 +
7347 +struct iscsi_tpg_attrib {
7348 + u32 authentication;
7349 + u32 login_timeout;
7350 + u32 netif_timeout;
7351 + u32 generate_node_acls;
7352 + u32 cache_dynamic_acls;
7353 + u32 default_cmdsn_depth;
7354 + u32 demo_mode_write_protect;
7355 + u32 prod_mode_write_protect;
7356 + u32 demo_mode_discovery;
7357 + u32 default_erl;
7358 + u8 t10_pi;
7359 + u32 fabric_prot_type;
7360 + struct iscsi_portal_group *tpg;
7361 +};
7362 +
7363 +struct iscsi_np {
7364 + int np_network_transport;
7365 + int np_ip_proto;
7366 + int np_sock_type;
7367 + enum np_thread_state_table np_thread_state;
7368 + bool enabled;
7369 + enum iscsi_timer_flags_table np_login_timer_flags;
7370 + u32 np_exports;
7371 + enum np_flags_table np_flags;
7372 + u16 np_port;
7373 + spinlock_t np_thread_lock;
7374 + struct completion np_restart_comp;
7375 + struct socket *np_socket;
7376 + struct __kernel_sockaddr_storage np_sockaddr;
7377 + struct task_struct *np_thread;
7378 + struct timer_list np_login_timer;
7379 + void *np_context;
7380 + struct iscsit_transport *np_transport;
7381 + struct list_head np_list;
7382 +} ____cacheline_aligned;
7383 +
7384 +struct iscsi_tpg_np {
7385 + struct iscsi_np *tpg_np;
7386 + struct iscsi_portal_group *tpg;
7387 + struct iscsi_tpg_np *tpg_np_parent;
7388 + struct list_head tpg_np_list;
7389 + struct list_head tpg_np_child_list;
7390 + struct list_head tpg_np_parent_list;
7391 + struct se_tpg_np se_tpg_np;
7392 + spinlock_t tpg_np_parent_lock;
7393 + struct completion tpg_np_comp;
7394 + struct kref tpg_np_kref;
7395 +};
7396 +
7397 +struct iscsi_portal_group {
7398 + unsigned char tpg_chap_id;
7399 + /* TPG State */
7400 + enum tpg_state_table tpg_state;
7401 + /* Target Portal Group Tag */
7402 + u16 tpgt;
7403 + /* Id assigned to target sessions */
7404 + u16 ntsih;
7405 + /* Number of active sessions */
7406 + u32 nsessions;
7407 + /* Number of Network Portals available for this TPG */
7408 + u32 num_tpg_nps;
7409 + /* Per TPG LIO specific session ID. */
7410 + u32 sid;
7411 + /* Spinlock for adding/removing Network Portals */
7412 + spinlock_t tpg_np_lock;
7413 + spinlock_t tpg_state_lock;
7414 + struct se_portal_group tpg_se_tpg;
7415 + struct mutex tpg_access_lock;
7416 + struct semaphore np_login_sem;
7417 + struct iscsi_tpg_attrib tpg_attrib;
7418 + struct iscsi_node_auth tpg_demo_auth;
7419 + /* Pointer to default list of iSCSI parameters for TPG */
7420 + struct iscsi_param_list *param_list;
7421 + struct iscsi_tiqn *tpg_tiqn;
7422 + struct list_head tpg_gnp_list;
7423 + struct list_head tpg_list;
7424 +} ____cacheline_aligned;
7425 +
7426 +struct iscsi_wwn_stat_grps {
7427 + struct config_group iscsi_stat_group;
7428 + struct config_group iscsi_instance_group;
7429 + struct config_group iscsi_sess_err_group;
7430 + struct config_group iscsi_tgt_attr_group;
7431 + struct config_group iscsi_login_stats_group;
7432 + struct config_group iscsi_logout_stats_group;
7433 +};
7434 +
7435 +struct iscsi_tiqn {
7436 +#define ISCSI_IQN_LEN 224
7437 + unsigned char tiqn[ISCSI_IQN_LEN];
7438 + enum tiqn_state_table tiqn_state;
7439 + int tiqn_access_count;
7440 + u32 tiqn_active_tpgs;
7441 + u32 tiqn_ntpgs;
7442 + u32 tiqn_num_tpg_nps;
7443 + u32 tiqn_nsessions;
7444 + struct list_head tiqn_list;
7445 + struct list_head tiqn_tpg_list;
7446 + spinlock_t tiqn_state_lock;
7447 + spinlock_t tiqn_tpg_lock;
7448 + struct se_wwn tiqn_wwn;
7449 + struct iscsi_wwn_stat_grps tiqn_stat_grps;
7450 + int tiqn_index;
7451 + struct iscsi_sess_err_stats sess_err_stats;
7452 + struct iscsi_login_stats login_stats;
7453 + struct iscsi_logout_stats logout_stats;
7454 +} ____cacheline_aligned;
7455 +
7456 +struct iscsit_global {
7457 + /* In core shutdown */
7458 + u32 in_shutdown;
7459 + u32 active_ts;
7460 + /* Unique identifier used for the authentication daemon */
7461 + u32 auth_id;
7462 + u32 inactive_ts;
7463 +#define ISCSIT_BITMAP_BITS 262144
7464 + /* Thread Set bitmap pointer */
7465 + unsigned long *ts_bitmap;
7466 + spinlock_t ts_bitmap_lock;
7467 + /* Used for iSCSI discovery session authentication */
7468 + struct iscsi_node_acl discovery_acl;
7469 + struct iscsi_portal_group *discovery_tpg;
7470 +};
7471 +
7472 +static inline u32 session_get_next_ttt(struct iscsi_session *session)
7473 +{
7474 + u32 ttt;
7475 +
7476 + spin_lock_bh(&session->ttt_lock);
7477 + ttt = session->targ_xfer_tag++;
7478 + if (ttt == 0xFFFFFFFF)
7479 + ttt = session->targ_xfer_tag++;
7480 + spin_unlock_bh(&session->ttt_lock);
7481 +
7482 + return ttt;
7483 +}
7484 +
7485 +extern struct iscsi_cmd *iscsit_find_cmd_from_itt(struct iscsi_conn *, itt_t);
7486 +#endif /* ISCSI_TARGET_CORE_H */
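session_get_next_ttt() at the end of the new header is a wrapping 32-bit allocator that skips 0xFFFFFFFF, the value iSCSI reserves to mean "no target transfer tag". A single-threaded sketch of the same logic (the kernel version runs under sess->ttt_lock):

    #include <stdint.h>

    static uint32_t next_ttt(uint32_t *counter)
    {
        uint32_t ttt = (*counter)++;

        if (ttt == 0xFFFFFFFFu)         /* reserved value: take the next */
            ttt = (*counter)++;
        return ttt;
    }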
7487 diff --git a/include/uapi/linux/if_link.h b/include/uapi/linux/if_link.h
7488 index 0bdb77e16875..fe017d125839 100644
7489 --- a/include/uapi/linux/if_link.h
7490 +++ b/include/uapi/linux/if_link.h
7491 @@ -436,6 +436,9 @@ enum {
7492 IFLA_VF_SPOOFCHK, /* Spoof Checking on/off switch */
7493 IFLA_VF_LINK_STATE, /* link state enable/disable/auto switch */
7494 IFLA_VF_RATE, /* Min and Max TX Bandwidth Allocation */
7495 + IFLA_VF_RSS_QUERY_EN, /* RSS Redirection Table and Hash Key query
7496 + * on/off switch
7497 + */
7498 __IFLA_VF_MAX,
7499 };
7500
7501 @@ -480,6 +483,11 @@ struct ifla_vf_link_state {
7502 __u32 link_state;
7503 };
7504
7505 +struct ifla_vf_rss_query_en {
7506 + __u32 vf;
7507 + __u32 setting;
7508 +};
7509 +
7510 /* VF ports management section
7511 *
7512 * Nested layout of set/get msg is:
7513 diff --git a/include/xen/interface/sched.h b/include/xen/interface/sched.h
7514 index 9ce083960a25..f18490985fc8 100644
7515 --- a/include/xen/interface/sched.h
7516 +++ b/include/xen/interface/sched.h
7517 @@ -107,5 +107,13 @@ struct sched_watchdog {
7518 #define SHUTDOWN_suspend 2 /* Clean up, save suspend info, kill. */
7519 #define SHUTDOWN_crash 3 /* Tell controller we've crashed. */
7520 #define SHUTDOWN_watchdog 4 /* Restart because watchdog time expired. */
7521 +/*
7522 + * Domain asked to perform 'soft reset' for it. The expected behavior is to
7523 + * reset internal Xen state for the domain returning it to the point where it
7524 + * was created but leaving the domain's memory contents and vCPU contexts
7525 + * intact. This will allow the domain to start over and set up all Xen specific
7526 + * interfaces again.
7527 + */
7528 +#define SHUTDOWN_soft_reset 5
7529
7530 #endif /* __XEN_PUBLIC_SCHED_H__ */
7531 diff --git a/ipc/msg.c b/ipc/msg.c
7532 index c5d8e3749985..cfc8b388332d 100644
7533 --- a/ipc/msg.c
7534 +++ b/ipc/msg.c
7535 @@ -137,13 +137,6 @@ static int newque(struct ipc_namespace *ns, struct ipc_params *params)
7536 return retval;
7537 }
7538
7539 - /* ipc_addid() locks msq upon success. */
7540 - id = ipc_addid(&msg_ids(ns), &msq->q_perm, ns->msg_ctlmni);
7541 - if (id < 0) {
7542 - ipc_rcu_putref(msq, msg_rcu_free);
7543 - return id;
7544 - }
7545 -
7546 msq->q_stime = msq->q_rtime = 0;
7547 msq->q_ctime = get_seconds();
7548 msq->q_cbytes = msq->q_qnum = 0;
7549 @@ -153,6 +146,13 @@ static int newque(struct ipc_namespace *ns, struct ipc_params *params)
7550 INIT_LIST_HEAD(&msq->q_receivers);
7551 INIT_LIST_HEAD(&msq->q_senders);
7552
7553 + /* ipc_addid() locks msq upon success. */
7554 + id = ipc_addid(&msg_ids(ns), &msq->q_perm, ns->msg_ctlmni);
7555 + if (id < 0) {
7556 + ipc_rcu_putref(msq, msg_rcu_free);
7557 + return id;
7558 + }
7559 +
7560 ipc_unlock_object(&msq->q_perm);
7561 rcu_read_unlock();
7562
7563 diff --git a/ipc/shm.c b/ipc/shm.c
7564 index 01454796ba3c..2511771a9a07 100644
7565 --- a/ipc/shm.c
7566 +++ b/ipc/shm.c
7567 @@ -549,12 +549,6 @@ static int newseg(struct ipc_namespace *ns, struct ipc_params *params)
7568 if (IS_ERR(file))
7569 goto no_file;
7570
7571 - id = ipc_addid(&shm_ids(ns), &shp->shm_perm, ns->shm_ctlmni);
7572 - if (id < 0) {
7573 - error = id;
7574 - goto no_id;
7575 - }
7576 -
7577 shp->shm_cprid = task_tgid_vnr(current);
7578 shp->shm_lprid = 0;
7579 shp->shm_atim = shp->shm_dtim = 0;
7580 @@ -563,6 +557,13 @@ static int newseg(struct ipc_namespace *ns, struct ipc_params *params)
7581 shp->shm_nattch = 0;
7582 shp->shm_file = file;
7583 shp->shm_creator = current;
7584 +
7585 + id = ipc_addid(&shm_ids(ns), &shp->shm_perm, ns->shm_ctlmni);
7586 + if (id < 0) {
7587 + error = id;
7588 + goto no_id;
7589 + }
7590 +
7591 list_add(&shp->shm_clist, &current->sysvshm.shm_clist);
7592
7593 /*
7594 diff --git a/ipc/util.c b/ipc/util.c
7595 index 88adc329888c..bc72cbf929da 100644
7596 --- a/ipc/util.c
7597 +++ b/ipc/util.c
7598 @@ -277,6 +277,10 @@ int ipc_addid(struct ipc_ids *ids, struct kern_ipc_perm *new, int size)
7599 rcu_read_lock();
7600 spin_lock(&new->lock);
7601
7602 + current_euid_egid(&euid, &egid);
7603 + new->cuid = new->uid = euid;
7604 + new->gid = new->cgid = egid;
7605 +
7606 id = idr_alloc(&ids->ipcs_idr, new,
7607 (next_id < 0) ? 0 : ipcid_to_idx(next_id), 0,
7608 GFP_NOWAIT);
7609 @@ -289,10 +293,6 @@ int ipc_addid(struct ipc_ids *ids, struct kern_ipc_perm *new, int size)
7610
7611 ids->in_use++;
7612
7613 - current_euid_egid(&euid, &egid);
7614 - new->cuid = new->uid = euid;
7615 - new->gid = new->cgid = egid;
7616 -
7617 if (next_id < 0) {
7618 new->seq = ids->seq++;
7619 if (ids->seq > IPCID_SEQ_MAX)
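The three ipc/ hunks enforce one publication rule: finish initializing an object before ipc_addid() inserts it into the idr, because the insertion makes it reachable by concurrent lookups immediately; previously the uid/gid fields and the list heads were filled in only after the object was already visible. A hedged sketch of the pattern, with table and publish() as illustrative stand-ins rather than IPC API:

    #include <stddef.h>

    struct obj { int initialized; };

    #define SLOTS 16
    static struct obj *table[SLOTS];    /* stand-in for the shared idr */

    static int publish(struct obj *o)
    {
        o->initialized = 1;             /* init everything first ...    */
        for (size_t i = 0; i < SLOTS; i++) {
            if (!table[i]) {
                table[i] = o;           /* ... publish last: a reader   */
                return (int)i;          /*     may dereference it now   */
            }
        }
        return -1;
    }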
7620 diff --git a/kernel/fork.c b/kernel/fork.c
7621 index 9b7d746d6d62..0a4f601e35ab 100644
7622 --- a/kernel/fork.c
7623 +++ b/kernel/fork.c
7624 @@ -1800,13 +1800,21 @@ static int check_unshare_flags(unsigned long unshare_flags)
7625 CLONE_NEWUSER|CLONE_NEWPID))
7626 return -EINVAL;
7627 /*
7628 - * Not implemented, but pretend it works if there is nothing to
7629 - * unshare. Note that unsharing CLONE_THREAD or CLONE_SIGHAND
7630 - * needs to unshare vm.
7631 + * Not implemented, but pretend it works if there is nothing
7632 + * to unshare. Note that unsharing the address space or the
7633 + * signal handlers also need to unshare the signal queues (aka
7634 + * CLONE_THREAD).
7635 */
7636 if (unshare_flags & (CLONE_THREAD | CLONE_SIGHAND | CLONE_VM)) {
7637 - /* FIXME: get_task_mm() increments ->mm_users */
7638 - if (atomic_read(&current->mm->mm_users) > 1)
7639 + if (!thread_group_empty(current))
7640 + return -EINVAL;
7641 + }
7642 + if (unshare_flags & (CLONE_SIGHAND | CLONE_VM)) {
7643 + if (atomic_read(&current->sighand->count) > 1)
7644 + return -EINVAL;
7645 + }
7646 + if (unshare_flags & CLONE_VM) {
7647 + if (!current_is_single_threaded())
7648 return -EINVAL;
7649 }
7650
7651 @@ -1875,16 +1883,16 @@ SYSCALL_DEFINE1(unshare, unsigned long, unshare_flags)
7652 if (unshare_flags & CLONE_NEWUSER)
7653 unshare_flags |= CLONE_THREAD | CLONE_FS;
7654 /*
7655 - * If unsharing a thread from a thread group, must also unshare vm.
7656 - */
7657 - if (unshare_flags & CLONE_THREAD)
7658 - unshare_flags |= CLONE_VM;
7659 - /*
7660 * If unsharing vm, must also unshare signal handlers.
7661 */
7662 if (unshare_flags & CLONE_VM)
7663 unshare_flags |= CLONE_SIGHAND;
7664 /*
7665 + * If unsharing signal handlers, must also unshare the signal queues.
7666 + */
7667 + if (unshare_flags & CLONE_SIGHAND)
7668 + unshare_flags |= CLONE_THREAD;
7669 + /*
7670 * If unsharing namespace, must also unshare filesystem information.
7671 */
7672 if (unshare_flags & CLONE_NEWNS)
7673 diff --git a/kernel/irq/proc.c b/kernel/irq/proc.c
7674 index 9dc9bfd8a678..9791f93dd5f2 100644
7675 --- a/kernel/irq/proc.c
7676 +++ b/kernel/irq/proc.c
7677 @@ -12,6 +12,7 @@
7678 #include <linux/seq_file.h>
7679 #include <linux/interrupt.h>
7680 #include <linux/kernel_stat.h>
7681 +#include <linux/mutex.h>
7682
7683 #include "internals.h"
7684
7685 @@ -326,18 +327,29 @@ void register_handler_proc(unsigned int irq, struct irqaction *action)
7686
7687 void register_irq_proc(unsigned int irq, struct irq_desc *desc)
7688 {
7689 + static DEFINE_MUTEX(register_lock);
7690 char name [MAX_NAMELEN];
7691
7692 - if (!root_irq_dir || (desc->irq_data.chip == &no_irq_chip) || desc->dir)
7693 + if (!root_irq_dir || (desc->irq_data.chip == &no_irq_chip))
7694 return;
7695
7696 + /*
7697 + * irq directories are registered only when a handler is
7698 + * added, not when the descriptor is created, so multiple
7699 + * tasks might try to register at the same time.
7700 + */
7701 + mutex_lock(&register_lock);
7702 +
7703 + if (desc->dir)
7704 + goto out_unlock;
7705 +
7706 memset(name, 0, MAX_NAMELEN);
7707 sprintf(name, "%d", irq);
7708
7709 /* create /proc/irq/1234 */
7710 desc->dir = proc_mkdir(name, root_irq_dir);
7711 if (!desc->dir)
7712 - return;
7713 + goto out_unlock;
7714
7715 #ifdef CONFIG_SMP
7716 /* create /proc/irq/<irq>/smp_affinity */
7717 @@ -358,6 +370,9 @@ void register_irq_proc(unsigned int irq, struct irq_desc *desc)
7718
7719 proc_create_data("spurious", 0444, desc->dir,
7720 &irq_spurious_proc_fops, (void *)(long)irq);
7721 +
7722 +out_unlock:
7723 + mutex_unlock(&register_lock);
7724 }
7725
7726 void unregister_irq_proc(unsigned int irq, struct irq_desc *desc)
7727 diff --git a/kernel/sched/core.c b/kernel/sched/core.c
7728 index 6810e572eda5..a882dd91722d 100644
7729 --- a/kernel/sched/core.c
7730 +++ b/kernel/sched/core.c
7731 @@ -2256,11 +2256,11 @@ static void finish_task_switch(struct rq *rq, struct task_struct *prev)
7732 * If a task dies, then it sets TASK_DEAD in tsk->state and calls
7733 * schedule one last time. The schedule call will never return, and
7734 * the scheduled task must drop that reference.
7735 - * The test for TASK_DEAD must occur while the runqueue locks are
7736 - * still held, otherwise prev could be scheduled on another cpu, die
7737 - * there before we look at prev->state, and then the reference would
7738 - * be dropped twice.
7739 - * Manfred Spraul <manfred@colorfullife.com>
7740 + *
7741 + * We must observe prev->state before clearing prev->on_cpu (in
7742 + * finish_lock_switch), otherwise a concurrent wakeup can get prev
7743 + * running on another CPU and we could race with its RUNNING -> DEAD
7744 + * transition, resulting in a double drop.
7745 */
7746 prev_state = prev->state;
7747 vtime_task_switch(prev);
7748 @@ -2404,13 +2404,20 @@ unsigned long nr_running(void)
7749
7750 /*
7751 * Check if only the current task is running on the cpu.
7752 + *
7753 + * Caution: this function does not check that the caller has disabled
7754 + * preemption, thus the result might have a time-of-check-to-time-of-use
7755 + * race. The caller is responsible for using it correctly, for example:
7756 + *
7757 + * - from a non-preemptable section (of course)
7758 + *
7759 + * - from a thread that is bound to a single CPU
7760 + *
7761 + * - in a loop with very short iterations (e.g. a polling loop)
7762 */
7763 bool single_task_running(void)
7764 {
7765 - if (cpu_rq(smp_processor_id())->nr_running == 1)
7766 - return true;
7767 - else
7768 - return false;
7769 + return raw_rq()->nr_running == 1;
7770 }
7771 EXPORT_SYMBOL(single_task_running);
7772
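
The caution added above matters for callers: the return value only describes the CPU the caller happened to be running on at that instant. A minimal usage sketch consistent with the new comment (the caller and the poll_fast_path() helper are hypothetical, not part of this patch):

    /* Pin ourselves to one CPU so the answer stays meaningful for
     * the duration of the check. */
    preempt_disable();
    if (single_task_running())
            poll_fast_path();       /* hypothetical cheap-poll helper */
    preempt_enable();
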
7773 diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
7774 index 2246a36050f9..07a75c150eeb 100644
7775 --- a/kernel/sched/fair.c
7776 +++ b/kernel/sched/fair.c
7777 @@ -4844,18 +4844,21 @@ again:
7778 * entity, update_curr() will update its vruntime, otherwise
7779 * forget we've ever seen it.
7780 */
7781 - if (curr && curr->on_rq)
7782 - update_curr(cfs_rq);
7783 - else
7784 - curr = NULL;
7785 + if (curr) {
7786 + if (curr->on_rq)
7787 + update_curr(cfs_rq);
7788 + else
7789 + curr = NULL;
7790
7791 - /*
7792 - * This call to check_cfs_rq_runtime() will do the throttle and
7793 - * dequeue its entity in the parent(s). Therefore the 'simple'
7794 - * nr_running test will indeed be correct.
7795 - */
7796 - if (unlikely(check_cfs_rq_runtime(cfs_rq)))
7797 - goto simple;
7798 + /*
7799 + * This call to check_cfs_rq_runtime() will do the
7800 + * throttle and dequeue its entity in the parent(s).
7801 + * Therefore the 'simple' nr_running test will indeed
7802 + * be correct.
7803 + */
7804 + if (unlikely(check_cfs_rq_runtime(cfs_rq)))
7805 + goto simple;
7806 + }
7807
7808 se = pick_next_entity(cfs_rq, curr);
7809 cfs_rq = group_cfs_rq(se);
7810 diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
7811 index 2df8ef067cc5..f698089e10ca 100644
7812 --- a/kernel/sched/sched.h
7813 +++ b/kernel/sched/sched.h
7814 @@ -994,9 +994,10 @@ static inline void finish_lock_switch(struct rq *rq, struct task_struct *prev)
7815 * After ->on_cpu is cleared, the task can be moved to a different CPU.
7816 * We must ensure this doesn't happen until the switch is completely
7817 * finished.
7818 + *
7819 + * Pairs with the control dependency and rmb in try_to_wake_up().
7820 */
7821 - smp_wmb();
7822 - prev->on_cpu = 0;
7823 + smp_store_release(&prev->on_cpu, 0);
7824 #endif
7825 #ifdef CONFIG_DEBUG_SPINLOCK
7826 /* this is a valid case when another task releases the spinlock */
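
For context on the idiom this hunk adopts: smp_store_release() orders every memory access before it (here, the read of prev->state) ahead of the store that clears on_cpu, so a CPU that observes on_cpu == 0 with acquire semantics is guaranteed to also see those earlier accesses. A generic sketch of the release/acquire pairing, with illustrative variable names rather than the scheduler's actual fields:

    /* Writer: make data visible, then release the flag. */
    data = compute();
    smp_store_release(&flag, 0);

    /* Reader: spin with acquire semantics; once flag reads 0, the
     * write to data above is guaranteed to be visible. */
    while (smp_load_acquire(&flag))
            cpu_relax();
    use(data);
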
7827 diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
7828 index ec1791fae965..a4038c57e25d 100644
7829 --- a/kernel/time/timekeeping.c
7830 +++ b/kernel/time/timekeeping.c
7831 @@ -1369,7 +1369,7 @@ static __always_inline void timekeeping_freqadjust(struct timekeeper *tk,
7832 negative = (tick_error < 0);
7833
7834 /* Sort out the magnitude of the correction */
7835 - tick_error = abs(tick_error);
7836 + tick_error = abs64(tick_error);
7837 for (adj = 0; tick_error > interval; adj++)
7838 tick_error >>= 1;
7839
7840 diff --git a/mm/hugetlb.c b/mm/hugetlb.c
7841 index a1d4dfa62023..77c8d03b4278 100644
7842 --- a/mm/hugetlb.c
7843 +++ b/mm/hugetlb.c
7844 @@ -2806,6 +2806,14 @@ static void unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma,
7845 continue;
7846
7847 /*
7848 + * Shared VMAs have their own reserves and do not affect
7849 + * MAP_PRIVATE accounting but it is possible that a shared
7850 + * VMA is using the same page so check and skip such VMAs.
7851 + */
7852 + if (iter_vma->vm_flags & VM_MAYSHARE)
7853 + continue;
7854 +
7855 + /*
7856 * Unmap the page from other VMAs without their own reserves.
7857 * They get marked to be SIGKILLed if they fault in these
7858 * areas. This is because a future no-page fault on this VMA
7859 diff --git a/mm/slab.c b/mm/slab.c
7860 index f34e053ec46e..b7f9f6456a61 100644
7861 --- a/mm/slab.c
7862 +++ b/mm/slab.c
7863 @@ -2175,9 +2175,16 @@ __kmem_cache_create (struct kmem_cache *cachep, unsigned long flags)
7864 size += BYTES_PER_WORD;
7865 }
7866 #if FORCED_DEBUG && defined(CONFIG_DEBUG_PAGEALLOC)
7867 - if (size >= kmalloc_size(INDEX_NODE + 1)
7868 - && cachep->object_size > cache_line_size()
7869 - && ALIGN(size, cachep->align) < PAGE_SIZE) {
7870 + /*
7871 + * To activate debug pagealloc, off-slab management is necessary
7872 + * requirement. In early phase of initialization, small sized slab
7873 + * doesn't get initialized so it would not be possible. So, we need
7874 + * to check size >= 256. It guarantees that all necessary small
7875 + * sized slab is initialized in current slab initialization sequence.
7876 + */
7877 + if (!slab_early_init && size >= kmalloc_size(INDEX_NODE) &&
7878 + size >= 256 && cachep->object_size > cache_line_size() &&
7879 + ALIGN(size, cachep->align) < PAGE_SIZE) {
7880 cachep->obj_offset += PAGE_SIZE - ALIGN(size, cachep->align);
7881 size = PAGE_SIZE;
7882 }
7883 diff --git a/mm/vmscan.c b/mm/vmscan.c
7884 index e321fe20b979..d48b28219edf 100644
7885 --- a/mm/vmscan.c
7886 +++ b/mm/vmscan.c
7887 @@ -1111,7 +1111,7 @@ cull_mlocked:
7888 if (PageSwapCache(page))
7889 try_to_free_swap(page);
7890 unlock_page(page);
7891 - putback_lru_page(page);
7892 + list_add(&page->lru, &ret_pages);
7893 continue;
7894
7895 activate_locked:
7896 diff --git a/net/batman-adv/distributed-arp-table.c b/net/batman-adv/distributed-arp-table.c
7897 index b5981113c9a7..4bbd72e90756 100644
7898 --- a/net/batman-adv/distributed-arp-table.c
7899 +++ b/net/batman-adv/distributed-arp-table.c
7900 @@ -15,6 +15,7 @@
7901 * along with this program; if not, see <http://www.gnu.org/licenses/>.
7902 */
7903
7904 +#include <linux/bitops.h>
7905 #include <linux/if_ether.h>
7906 #include <linux/if_arp.h>
7907 #include <linux/if_vlan.h>
7908 @@ -422,7 +423,7 @@ static bool batadv_is_orig_node_eligible(struct batadv_dat_candidate *res,
7909 int j;
7910
7911 /* check if orig node candidate is running DAT */
7912 - if (!(candidate->capabilities & BATADV_ORIG_CAPA_HAS_DAT))
7913 + if (!test_bit(BATADV_ORIG_CAPA_HAS_DAT, &candidate->capabilities))
7914 goto out;
7915
7916 /* Check if this node has already been selected... */
7917 @@ -682,9 +683,9 @@ static void batadv_dat_tvlv_ogm_handler_v1(struct batadv_priv *bat_priv,
7918 uint16_t tvlv_value_len)
7919 {
7920 if (flags & BATADV_TVLV_HANDLER_OGM_CIFNOTFND)
7921 - orig->capabilities &= ~BATADV_ORIG_CAPA_HAS_DAT;
7922 + clear_bit(BATADV_ORIG_CAPA_HAS_DAT, &orig->capabilities);
7923 else
7924 - orig->capabilities |= BATADV_ORIG_CAPA_HAS_DAT;
7925 + set_bit(BATADV_ORIG_CAPA_HAS_DAT, &orig->capabilities);
7926 }
7927
7928 /**
7929 diff --git a/net/batman-adv/network-coding.c b/net/batman-adv/network-coding.c
7930 index 8d04d174669e..65d19690d8ae 100644
7931 --- a/net/batman-adv/network-coding.c
7932 +++ b/net/batman-adv/network-coding.c
7933 @@ -15,6 +15,7 @@
7934 * along with this program; if not, see <http://www.gnu.org/licenses/>.
7935 */
7936
7937 +#include <linux/bitops.h>
7938 #include <linux/debugfs.h>
7939
7940 #include "main.h"
7941 @@ -105,9 +106,9 @@ static void batadv_nc_tvlv_ogm_handler_v1(struct batadv_priv *bat_priv,
7942 uint16_t tvlv_value_len)
7943 {
7944 if (flags & BATADV_TVLV_HANDLER_OGM_CIFNOTFND)
7945 - orig->capabilities &= ~BATADV_ORIG_CAPA_HAS_NC;
7946 + clear_bit(BATADV_ORIG_CAPA_HAS_NC, &orig->capabilities);
7947 else
7948 - orig->capabilities |= BATADV_ORIG_CAPA_HAS_NC;
7949 + set_bit(BATADV_ORIG_CAPA_HAS_NC, &orig->capabilities);
7950 }
7951
7952 /**
7953 @@ -871,7 +872,7 @@ void batadv_nc_update_nc_node(struct batadv_priv *bat_priv,
7954 goto out;
7955
7956 /* check if orig node is network coding enabled */
7957 - if (!(orig_node->capabilities & BATADV_ORIG_CAPA_HAS_NC))
7958 + if (!test_bit(BATADV_ORIG_CAPA_HAS_NC, &orig_node->capabilities))
7959 goto out;
7960
7961 /* accept ogms from 'good' neighbors and single hop neighbors */
7962 diff --git a/net/batman-adv/soft-interface.c b/net/batman-adv/soft-interface.c
7963 index 5467955eb27c..492b0593dc2f 100644
7964 --- a/net/batman-adv/soft-interface.c
7965 +++ b/net/batman-adv/soft-interface.c
7966 @@ -173,6 +173,7 @@ static int batadv_interface_tx(struct sk_buff *skb,
7967 int gw_mode;
7968 enum batadv_forw_mode forw_mode;
7969 struct batadv_orig_node *mcast_single_orig = NULL;
7970 + int network_offset = ETH_HLEN;
7971
7972 if (atomic_read(&bat_priv->mesh_state) != BATADV_MESH_ACTIVE)
7973 goto dropped;
7974 @@ -185,14 +186,18 @@ static int batadv_interface_tx(struct sk_buff *skb,
7975 case ETH_P_8021Q:
7976 vhdr = vlan_eth_hdr(skb);
7977
7978 - if (vhdr->h_vlan_encapsulated_proto != ethertype)
7979 + if (vhdr->h_vlan_encapsulated_proto != ethertype) {
7980 + network_offset += VLAN_HLEN;
7981 break;
7982 + }
7983
7984 /* fall through */
7985 case ETH_P_BATMAN:
7986 goto dropped;
7987 }
7988
7989 + skb_set_network_header(skb, network_offset);
7990 +
7991 if (batadv_bla_tx(bat_priv, skb, vid))
7992 goto dropped;
7993
7994 diff --git a/net/batman-adv/translation-table.c b/net/batman-adv/translation-table.c
7995 index 5f59e7f899a0..58ad6ba429b3 100644
7996 --- a/net/batman-adv/translation-table.c
7997 +++ b/net/batman-adv/translation-table.c
7998 @@ -15,6 +15,7 @@
7999 * along with this program; if not, see <http://www.gnu.org/licenses/>.
8000 */
8001
8002 +#include <linux/bitops.h>
8003 #include "main.h"
8004 #include "translation-table.h"
8005 #include "soft-interface.h"
8006 @@ -1015,6 +1016,7 @@ uint16_t batadv_tt_local_remove(struct batadv_priv *bat_priv,
8007 struct batadv_tt_local_entry *tt_local_entry;
8008 uint16_t flags, curr_flags = BATADV_NO_FLAGS;
8009 struct batadv_softif_vlan *vlan;
8010 + void *tt_entry_exists;
8011
8012 tt_local_entry = batadv_tt_local_hash_find(bat_priv, addr, vid);
8013 if (!tt_local_entry)
8014 @@ -1042,7 +1044,15 @@ uint16_t batadv_tt_local_remove(struct batadv_priv *bat_priv,
8015 * immediately purge it
8016 */
8017 batadv_tt_local_event(bat_priv, tt_local_entry, BATADV_TT_CLIENT_DEL);
8018 - hlist_del_rcu(&tt_local_entry->common.hash_entry);
8019 +
8020 + tt_entry_exists = batadv_hash_remove(bat_priv->tt.local_hash,
8021 + batadv_compare_tt,
8022 + batadv_choose_tt,
8023 + &tt_local_entry->common);
8024 + if (!tt_entry_exists)
8025 + goto out;
8026 +
8027 + /* extra call to free the local tt entry */
8028 batadv_tt_local_entry_free_ref(tt_local_entry);
8029
8030 /* decrease the reference held for this vlan */
8031 @@ -1844,7 +1854,7 @@ void batadv_tt_global_del_orig(struct batadv_priv *bat_priv,
8032 }
8033 spin_unlock_bh(list_lock);
8034 }
8035 - orig_node->capa_initialized &= ~BATADV_ORIG_CAPA_HAS_TT;
8036 + clear_bit(BATADV_ORIG_CAPA_HAS_TT, &orig_node->capa_initialized);
8037 }
8038
8039 static bool batadv_tt_global_to_purge(struct batadv_tt_global_entry *tt_global,
8040 @@ -2804,7 +2814,7 @@ static void _batadv_tt_update_changes(struct batadv_priv *bat_priv,
8041 return;
8042 }
8043 }
8044 - orig_node->capa_initialized |= BATADV_ORIG_CAPA_HAS_TT;
8045 + set_bit(BATADV_ORIG_CAPA_HAS_TT, &orig_node->capa_initialized);
8046 }
8047
8048 static void batadv_tt_fill_gtable(struct batadv_priv *bat_priv,
8049 @@ -3304,7 +3314,8 @@ static void batadv_tt_update_orig(struct batadv_priv *bat_priv,
8050 bool has_tt_init;
8051
8052 tt_vlan = (struct batadv_tvlv_tt_vlan_data *)tt_buff;
8053 - has_tt_init = orig_node->capa_initialized & BATADV_ORIG_CAPA_HAS_TT;
8054 + has_tt_init = test_bit(BATADV_ORIG_CAPA_HAS_TT,
8055 + &orig_node->capa_initialized);
8056
8057 /* orig table not initialised AND first diff is in the OGM OR the ttvn
8058 * increased by one -> we can apply the attached changes
8059 diff --git a/net/batman-adv/types.h b/net/batman-adv/types.h
8060 index 8854c05622a9..fdf65b50e3ec 100644
8061 --- a/net/batman-adv/types.h
8062 +++ b/net/batman-adv/types.h
8063 @@ -258,8 +258,8 @@ struct batadv_orig_node {
8064 struct hlist_node mcast_want_all_ipv4_node;
8065 struct hlist_node mcast_want_all_ipv6_node;
8066 #endif
8067 - uint8_t capabilities;
8068 - uint8_t capa_initialized;
8069 + unsigned long capabilities;
8070 + unsigned long capa_initialized;
8071 atomic_t last_ttvn;
8072 unsigned char *tt_buff;
8073 int16_t tt_buff_len;
8074 @@ -298,9 +298,9 @@ struct batadv_orig_node {
8075 * (= orig node announces a tvlv of type BATADV_TVLV_MCAST)
8076 */
8077 enum batadv_orig_capabilities {
8078 - BATADV_ORIG_CAPA_HAS_DAT = BIT(0),
8079 - BATADV_ORIG_CAPA_HAS_NC = BIT(1),
8080 - BATADV_ORIG_CAPA_HAS_TT = BIT(2),
8081 + BATADV_ORIG_CAPA_HAS_DAT,
8082 + BATADV_ORIG_CAPA_HAS_NC,
8083 + BATADV_ORIG_CAPA_HAS_TT,
8084 BATADV_ORIG_CAPA_HAS_MCAST = BIT(3),
8085 };
8086
8087 diff --git a/net/core/datagram.c b/net/core/datagram.c
8088 index 3a402a7b20e9..61e99f315ed9 100644
8089 --- a/net/core/datagram.c
8090 +++ b/net/core/datagram.c
8091 @@ -130,6 +130,35 @@ out_noerr:
8092 goto out;
8093 }
8094
8095 +static int skb_set_peeked(struct sk_buff *skb)
8096 +{
8097 + struct sk_buff *nskb;
8098 +
8099 + if (skb->peeked)
8100 + return 0;
8101 +
8102 + /* We have to unshare an skb before modifying it. */
8103 + if (!skb_shared(skb))
8104 + goto done;
8105 +
8106 + nskb = skb_clone(skb, GFP_ATOMIC);
8107 + if (!nskb)
8108 + return -ENOMEM;
8109 +
8110 + skb->prev->next = nskb;
8111 + skb->next->prev = nskb;
8112 + nskb->prev = skb->prev;
8113 + nskb->next = skb->next;
8114 +
8115 + consume_skb(skb);
8116 + skb = nskb;
8117 +
8118 +done:
8119 + skb->peeked = 1;
8120 +
8121 + return 0;
8122 +}
8123 +
8124 /**
8125 * __skb_recv_datagram - Receive a datagram skbuff
8126 * @sk: socket
8127 @@ -164,7 +193,9 @@ out_noerr:
8128 struct sk_buff *__skb_recv_datagram(struct sock *sk, unsigned int flags,
8129 int *peeked, int *off, int *err)
8130 {
8131 + struct sk_buff_head *queue = &sk->sk_receive_queue;
8132 struct sk_buff *skb, *last;
8133 + unsigned long cpu_flags;
8134 long timeo;
8135 /*
8136 * Caller is allowed not to check sk->sk_err before skb_recv_datagram()
8137 @@ -183,8 +214,6 @@ struct sk_buff *__skb_recv_datagram(struct sock *sk, unsigned int flags,
8138 * Look at current nfs client by the way...
8139 * However, this function was correct in any case. 8)
8140 */
8141 - unsigned long cpu_flags;
8142 - struct sk_buff_head *queue = &sk->sk_receive_queue;
8143 int _off = *off;
8144
8145 last = (struct sk_buff *)queue;
8146 @@ -198,7 +227,11 @@ struct sk_buff *__skb_recv_datagram(struct sock *sk, unsigned int flags,
8147 _off -= skb->len;
8148 continue;
8149 }
8150 - skb->peeked = 1;
8151 +
8152 + error = skb_set_peeked(skb);
8153 + if (error)
8154 + goto unlock_err;
8155 +
8156 atomic_inc(&skb->users);
8157 } else
8158 __skb_unlink(skb, queue);
8159 @@ -222,6 +255,8 @@ struct sk_buff *__skb_recv_datagram(struct sock *sk, unsigned int flags,
8160
8161 return NULL;
8162
8163 +unlock_err:
8164 + spin_unlock_irqrestore(&queue->lock, cpu_flags);
8165 no_packet:
8166 *err = error;
8167 return NULL;
8168 diff --git a/net/core/fib_rules.c b/net/core/fib_rules.c
8169 index 185c341fafbd..99ae718b79be 100644
8170 --- a/net/core/fib_rules.c
8171 +++ b/net/core/fib_rules.c
8172 @@ -621,15 +621,17 @@ static int dump_rules(struct sk_buff *skb, struct netlink_callback *cb,
8173 {
8174 int idx = 0;
8175 struct fib_rule *rule;
8176 + int err = 0;
8177
8178 rcu_read_lock();
8179 list_for_each_entry_rcu(rule, &ops->rules_list, list) {
8180 if (idx < cb->args[1])
8181 goto skip;
8182
8183 - if (fib_nl_fill_rule(skb, rule, NETLINK_CB(cb->skb).portid,
8184 - cb->nlh->nlmsg_seq, RTM_NEWRULE,
8185 - NLM_F_MULTI, ops) < 0)
8186 + err = fib_nl_fill_rule(skb, rule, NETLINK_CB(cb->skb).portid,
8187 + cb->nlh->nlmsg_seq, RTM_NEWRULE,
8188 + NLM_F_MULTI, ops);
8189 + if (err < 0)
8190 break;
8191 skip:
8192 idx++;
8193 @@ -638,7 +640,7 @@ skip:
8194 cb->args[1] = idx;
8195 rules_ops_put(ops);
8196
8197 - return skb->len;
8198 + return err;
8199 }
8200
8201 static int fib_nl_dumprule(struct sk_buff *skb, struct netlink_callback *cb)
8202 @@ -654,7 +656,9 @@ static int fib_nl_dumprule(struct sk_buff *skb, struct netlink_callback *cb)
8203 if (ops == NULL)
8204 return -EAFNOSUPPORT;
8205
8206 - return dump_rules(skb, cb, ops);
8207 + dump_rules(skb, cb, ops);
8208 +
8209 + return skb->len;
8210 }
8211
8212 rcu_read_lock();
8213 diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
8214 index c522f7a00eab..c412db774603 100644
8215 --- a/net/core/rtnetlink.c
8216 +++ b/net/core/rtnetlink.c
8217 @@ -805,7 +805,8 @@ static inline int rtnl_vfinfo_size(const struct net_device *dev,
8218 nla_total_size(sizeof(struct ifla_vf_vlan)) +
8219 nla_total_size(sizeof(struct ifla_vf_spoofchk)) +
8220 nla_total_size(sizeof(struct ifla_vf_rate)) +
8221 - nla_total_size(sizeof(struct ifla_vf_link_state)));
8222 + nla_total_size(sizeof(struct ifla_vf_link_state)) +
8223 + nla_total_size(sizeof(struct ifla_vf_rss_query_en)));
8224 return size;
8225 } else
8226 return 0;
8227 @@ -1075,14 +1076,16 @@ static int rtnl_fill_ifinfo(struct sk_buff *skb, struct net_device *dev,
8228 struct ifla_vf_tx_rate vf_tx_rate;
8229 struct ifla_vf_spoofchk vf_spoofchk;
8230 struct ifla_vf_link_state vf_linkstate;
8231 + struct ifla_vf_rss_query_en vf_rss_query_en;
8232
8233 /*
8234 * Not all SR-IOV capable drivers support the
8235 - * spoofcheck query. Preset to -1 so the user
8236 - * space tool can detect that the driver didn't
8237 - * report anything.
8238 + * spoofcheck and "RSS query enable" query. Preset to
8239 + * -1 so the user space tool can detect that the driver
8240 + * didn't report anything.
8241 */
8242 ivi.spoofchk = -1;
8243 + ivi.rss_query_en = -1;
8244 memset(ivi.mac, 0, sizeof(ivi.mac));
8245 /* The default value for VF link state is "auto"
8246 * IFLA_VF_LINK_STATE_AUTO which equals zero
8247 @@ -1095,7 +1098,8 @@ static int rtnl_fill_ifinfo(struct sk_buff *skb, struct net_device *dev,
8248 vf_rate.vf =
8249 vf_tx_rate.vf =
8250 vf_spoofchk.vf =
8251 - vf_linkstate.vf = ivi.vf;
8252 + vf_linkstate.vf =
8253 + vf_rss_query_en.vf = ivi.vf;
8254
8255 memcpy(vf_mac.mac, ivi.mac, sizeof(ivi.mac));
8256 vf_vlan.vlan = ivi.vlan;
8257 @@ -1105,6 +1109,7 @@ static int rtnl_fill_ifinfo(struct sk_buff *skb, struct net_device *dev,
8258 vf_rate.max_tx_rate = ivi.max_tx_rate;
8259 vf_spoofchk.setting = ivi.spoofchk;
8260 vf_linkstate.link_state = ivi.linkstate;
8261 + vf_rss_query_en.setting = ivi.rss_query_en;
8262 vf = nla_nest_start(skb, IFLA_VF_INFO);
8263 if (!vf) {
8264 nla_nest_cancel(skb, vfinfo);
8265 @@ -1119,7 +1124,10 @@ static int rtnl_fill_ifinfo(struct sk_buff *skb, struct net_device *dev,
8266 nla_put(skb, IFLA_VF_SPOOFCHK, sizeof(vf_spoofchk),
8267 &vf_spoofchk) ||
8268 nla_put(skb, IFLA_VF_LINK_STATE, sizeof(vf_linkstate),
8269 - &vf_linkstate))
8270 + &vf_linkstate) ||
8271 + nla_put(skb, IFLA_VF_RSS_QUERY_EN,
8272 + sizeof(vf_rss_query_en),
8273 + &vf_rss_query_en))
8274 goto nla_put_failure;
8275 nla_nest_end(skb, vf);
8276 }
8277 @@ -1207,10 +1215,6 @@ static const struct nla_policy ifla_info_policy[IFLA_INFO_MAX+1] = {
8278 [IFLA_INFO_SLAVE_DATA] = { .type = NLA_NESTED },
8279 };
8280
8281 -static const struct nla_policy ifla_vfinfo_policy[IFLA_VF_INFO_MAX+1] = {
8282 - [IFLA_VF_INFO] = { .type = NLA_NESTED },
8283 -};
8284 -
8285 static const struct nla_policy ifla_vf_policy[IFLA_VF_MAX+1] = {
8286 [IFLA_VF_MAC] = { .len = sizeof(struct ifla_vf_mac) },
8287 [IFLA_VF_VLAN] = { .len = sizeof(struct ifla_vf_vlan) },
8288 @@ -1218,6 +1222,7 @@ static const struct nla_policy ifla_vf_policy[IFLA_VF_MAX+1] = {
8289 [IFLA_VF_SPOOFCHK] = { .len = sizeof(struct ifla_vf_spoofchk) },
8290 [IFLA_VF_RATE] = { .len = sizeof(struct ifla_vf_rate) },
8291 [IFLA_VF_LINK_STATE] = { .len = sizeof(struct ifla_vf_link_state) },
8292 + [IFLA_VF_RSS_QUERY_EN] = { .len = sizeof(struct ifla_vf_rss_query_en) },
8293 };
8294
8295 static const struct nla_policy ifla_port_policy[IFLA_PORT_MAX+1] = {
8296 @@ -1356,85 +1361,98 @@ static int validate_linkmsg(struct net_device *dev, struct nlattr *tb[])
8297 return 0;
8298 }
8299
8300 -static int do_setvfinfo(struct net_device *dev, struct nlattr *attr)
8301 +static int do_setvfinfo(struct net_device *dev, struct nlattr **tb)
8302 {
8303 - int rem, err = -EINVAL;
8304 - struct nlattr *vf;
8305 const struct net_device_ops *ops = dev->netdev_ops;
8306 + int err = -EINVAL;
8307
8308 - nla_for_each_nested(vf, attr, rem) {
8309 - switch (nla_type(vf)) {
8310 - case IFLA_VF_MAC: {
8311 - struct ifla_vf_mac *ivm;
8312 - ivm = nla_data(vf);
8313 - err = -EOPNOTSUPP;
8314 - if (ops->ndo_set_vf_mac)
8315 - err = ops->ndo_set_vf_mac(dev, ivm->vf,
8316 - ivm->mac);
8317 - break;
8318 - }
8319 - case IFLA_VF_VLAN: {
8320 - struct ifla_vf_vlan *ivv;
8321 - ivv = nla_data(vf);
8322 - err = -EOPNOTSUPP;
8323 - if (ops->ndo_set_vf_vlan)
8324 - err = ops->ndo_set_vf_vlan(dev, ivv->vf,
8325 - ivv->vlan,
8326 - ivv->qos);
8327 - break;
8328 - }
8329 - case IFLA_VF_TX_RATE: {
8330 - struct ifla_vf_tx_rate *ivt;
8331 - struct ifla_vf_info ivf;
8332 - ivt = nla_data(vf);
8333 - err = -EOPNOTSUPP;
8334 - if (ops->ndo_get_vf_config)
8335 - err = ops->ndo_get_vf_config(dev, ivt->vf,
8336 - &ivf);
8337 - if (err)
8338 - break;
8339 - err = -EOPNOTSUPP;
8340 - if (ops->ndo_set_vf_rate)
8341 - err = ops->ndo_set_vf_rate(dev, ivt->vf,
8342 - ivf.min_tx_rate,
8343 - ivt->rate);
8344 - break;
8345 - }
8346 - case IFLA_VF_RATE: {
8347 - struct ifla_vf_rate *ivt;
8348 - ivt = nla_data(vf);
8349 - err = -EOPNOTSUPP;
8350 - if (ops->ndo_set_vf_rate)
8351 - err = ops->ndo_set_vf_rate(dev, ivt->vf,
8352 - ivt->min_tx_rate,
8353 - ivt->max_tx_rate);
8354 - break;
8355 - }
8356 - case IFLA_VF_SPOOFCHK: {
8357 - struct ifla_vf_spoofchk *ivs;
8358 - ivs = nla_data(vf);
8359 - err = -EOPNOTSUPP;
8360 - if (ops->ndo_set_vf_spoofchk)
8361 - err = ops->ndo_set_vf_spoofchk(dev, ivs->vf,
8362 - ivs->setting);
8363 - break;
8364 - }
8365 - case IFLA_VF_LINK_STATE: {
8366 - struct ifla_vf_link_state *ivl;
8367 - ivl = nla_data(vf);
8368 - err = -EOPNOTSUPP;
8369 - if (ops->ndo_set_vf_link_state)
8370 - err = ops->ndo_set_vf_link_state(dev, ivl->vf,
8371 - ivl->link_state);
8372 - break;
8373 - }
8374 - default:
8375 - err = -EINVAL;
8376 - break;
8377 - }
8378 - if (err)
8379 - break;
8380 + if (tb[IFLA_VF_MAC]) {
8381 + struct ifla_vf_mac *ivm = nla_data(tb[IFLA_VF_MAC]);
8382 +
8383 + err = -EOPNOTSUPP;
8384 + if (ops->ndo_set_vf_mac)
8385 + err = ops->ndo_set_vf_mac(dev, ivm->vf,
8386 + ivm->mac);
8387 + if (err < 0)
8388 + return err;
8389 + }
8390 +
8391 + if (tb[IFLA_VF_VLAN]) {
8392 + struct ifla_vf_vlan *ivv = nla_data(tb[IFLA_VF_VLAN]);
8393 +
8394 + err = -EOPNOTSUPP;
8395 + if (ops->ndo_set_vf_vlan)
8396 + err = ops->ndo_set_vf_vlan(dev, ivv->vf, ivv->vlan,
8397 + ivv->qos);
8398 + if (err < 0)
8399 + return err;
8400 }
8401 +
8402 + if (tb[IFLA_VF_TX_RATE]) {
8403 + struct ifla_vf_tx_rate *ivt = nla_data(tb[IFLA_VF_TX_RATE]);
8404 + struct ifla_vf_info ivf;
8405 +
8406 + err = -EOPNOTSUPP;
8407 + if (ops->ndo_get_vf_config)
8408 + err = ops->ndo_get_vf_config(dev, ivt->vf, &ivf);
8409 + if (err < 0)
8410 + return err;
8411 +
8412 + err = -EOPNOTSUPP;
8413 + if (ops->ndo_set_vf_rate)
8414 + err = ops->ndo_set_vf_rate(dev, ivt->vf,
8415 + ivf.min_tx_rate,
8416 + ivt->rate);
8417 + if (err < 0)
8418 + return err;
8419 + }
8420 +
8421 + if (tb[IFLA_VF_RATE]) {
8422 + struct ifla_vf_rate *ivt = nla_data(tb[IFLA_VF_RATE]);
8423 +
8424 + err = -EOPNOTSUPP;
8425 + if (ops->ndo_set_vf_rate)
8426 + err = ops->ndo_set_vf_rate(dev, ivt->vf,
8427 + ivt->min_tx_rate,
8428 + ivt->max_tx_rate);
8429 + if (err < 0)
8430 + return err;
8431 + }
8432 +
8433 + if (tb[IFLA_VF_SPOOFCHK]) {
8434 + struct ifla_vf_spoofchk *ivs = nla_data(tb[IFLA_VF_SPOOFCHK]);
8435 +
8436 + err = -EOPNOTSUPP;
8437 + if (ops->ndo_set_vf_spoofchk)
8438 + err = ops->ndo_set_vf_spoofchk(dev, ivs->vf,
8439 + ivs->setting);
8440 + if (err < 0)
8441 + return err;
8442 + }
8443 +
8444 + if (tb[IFLA_VF_LINK_STATE]) {
8445 + struct ifla_vf_link_state *ivl = nla_data(tb[IFLA_VF_LINK_STATE]);
8446 +
8447 + err = -EOPNOTSUPP;
8448 + if (ops->ndo_set_vf_link_state)
8449 + err = ops->ndo_set_vf_link_state(dev, ivl->vf,
8450 + ivl->link_state);
8451 + if (err < 0)
8452 + return err;
8453 + }
8454 +
8455 + if (tb[IFLA_VF_RSS_QUERY_EN]) {
8456 + struct ifla_vf_rss_query_en *ivrssq_en;
8457 +
8458 + err = -EOPNOTSUPP;
8459 + ivrssq_en = nla_data(tb[IFLA_VF_RSS_QUERY_EN]);
8460 + if (ops->ndo_set_vf_rss_query_en)
8461 + err = ops->ndo_set_vf_rss_query_en(dev, ivrssq_en->vf,
8462 + ivrssq_en->setting);
8463 + if (err < 0)
8464 + return err;
8465 + }
8466 +
8467 return err;
8468 }
8469
8470 @@ -1630,14 +1648,21 @@ static int do_setlink(const struct sk_buff *skb,
8471 }
8472
8473 if (tb[IFLA_VFINFO_LIST]) {
8474 + struct nlattr *vfinfo[IFLA_VF_MAX + 1];
8475 struct nlattr *attr;
8476 int rem;
8477 +
8478 nla_for_each_nested(attr, tb[IFLA_VFINFO_LIST], rem) {
8479 - if (nla_type(attr) != IFLA_VF_INFO) {
8480 + if (nla_type(attr) != IFLA_VF_INFO ||
8481 + nla_len(attr) < NLA_HDRLEN) {
8482 err = -EINVAL;
8483 goto errout;
8484 }
8485 - err = do_setvfinfo(dev, attr);
8486 + err = nla_parse_nested(vfinfo, IFLA_VF_MAX, attr,
8487 + ifla_vf_policy);
8488 + if (err < 0)
8489 + goto errout;
8490 + err = do_setvfinfo(dev, vfinfo);
8491 if (err < 0)
8492 goto errout;
8493 status |= DO_SETLINK_NOTIFY;
8494 diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
8495 index dc9f925b0cd5..9c7d88870e2b 100644
8496 --- a/net/ipv4/tcp_output.c
8497 +++ b/net/ipv4/tcp_output.c
8498 @@ -2798,6 +2798,7 @@ void tcp_send_active_reset(struct sock *sk, gfp_t priority)
8499 skb_reserve(skb, MAX_TCP_HEADER);
8500 tcp_init_nondata_skb(skb, tcp_acceptable_seq(sk),
8501 TCPHDR_ACK | TCPHDR_RST);
8502 + skb_mstamp_get(&skb->skb_mstamp);
8503 /* Send it off. */
8504 if (tcp_transmit_skb(sk, skb, 0, priority))
8505 NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPABORTFAILED);
8506 diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
8507 index c5e3194fd9a5..4ea975324888 100644
8508 --- a/net/ipv4/udp.c
8509 +++ b/net/ipv4/udp.c
8510 @@ -1983,12 +1983,19 @@ void udp_v4_early_demux(struct sk_buff *skb)
8511
8512 skb->sk = sk;
8513 skb->destructor = sock_efree;
8514 - dst = sk->sk_rx_dst;
8515 + dst = READ_ONCE(sk->sk_rx_dst);
8516
8517 if (dst)
8518 dst = dst_check(dst, 0);
8519 - if (dst)
8520 - skb_dst_set_noref(skb, dst);
8521 + if (dst) {
8522 + /* DST_NOCACHE can not be used without taking a reference */
8523 + if (dst->flags & DST_NOCACHE) {
8524 + if (likely(atomic_inc_not_zero(&dst->__refcnt)))
8525 + skb_dst_set(skb, dst);
8526 + } else {
8527 + skb_dst_set_noref(skb, dst);
8528 + }
8529 + }
8530 }
8531
8532 int udp_rcv(struct sk_buff *skb)
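
The distinction in the hunk above is between borrowing a cached dst without a reference (skb_dst_set_noref()) and taking a real one: a DST_NOCACHE entry may be freed as soon as the socket drops its reference, so it can only be attached after atomic_inc_not_zero() confirms it is still live. The same take-a-reference-or-skip pattern in generic form (all names below are hypothetical):

    /* Hypothetical object with the same lifetime rules: uncached
     * entries die with their last reference, cached ones are safe
     * to borrow for the duration of RCU-protected processing. */
    if (obj->flags & OBJ_NOCACHE) {
            if (atomic_inc_not_zero(&obj->refcnt))
                    attach_with_ref(ctx, obj);      /* holds a reference */
            /* else: object already on its way out, skip it */
    } else {
            attach_noref(ctx, obj);                 /* borrowed pointer */
    }
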
8533 diff --git a/net/ipv6/exthdrs_offload.c b/net/ipv6/exthdrs_offload.c
8534 index 447a7fbd1bb6..f5e2ba1c18bf 100644
8535 --- a/net/ipv6/exthdrs_offload.c
8536 +++ b/net/ipv6/exthdrs_offload.c
8537 @@ -36,6 +36,6 @@ out:
8538 return ret;
8539
8540 out_rt:
8541 - inet_del_offload(&rthdr_offload, IPPROTO_ROUTING);
8542 + inet6_del_offload(&rthdr_offload, IPPROTO_ROUTING);
8543 goto out;
8544 }
8545 diff --git a/net/ipv6/ip6_gre.c b/net/ipv6/ip6_gre.c
8546 index 0e32d2e1bdbf..28d7a245ea34 100644
8547 --- a/net/ipv6/ip6_gre.c
8548 +++ b/net/ipv6/ip6_gre.c
8549 @@ -360,7 +360,7 @@ static void ip6gre_tunnel_uninit(struct net_device *dev)
8550 struct ip6_tnl *t = netdev_priv(dev);
8551 struct ip6gre_net *ign = net_generic(t->net, ip6gre_net_id);
8552
8553 - ip6gre_tunnel_unlink(ign, t);
8554 + ip6_tnl_dst_reset(netdev_priv(dev));
8555 dev_put(dev);
8556 }
8557
8558 diff --git a/net/ipv6/ip6mr.c b/net/ipv6/ip6mr.c
8559 index 1a01d79b8698..0d58542f9db0 100644
8560 --- a/net/ipv6/ip6mr.c
8561 +++ b/net/ipv6/ip6mr.c
8562 @@ -552,7 +552,7 @@ static void ipmr_mfc_seq_stop(struct seq_file *seq, void *v)
8563
8564 if (it->cache == &mrt->mfc6_unres_queue)
8565 spin_unlock_bh(&mfc_unres_lock);
8566 - else if (it->cache == mrt->mfc6_cache_array)
8567 + else if (it->cache == &mrt->mfc6_cache_array[it->ct])
8568 read_unlock(&mrt_lock);
8569 }
8570
8571 diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
8572 index 80ce44f6693d..45e782825567 100644
8573 --- a/net/mac80211/tx.c
8574 +++ b/net/mac80211/tx.c
8575 @@ -299,9 +299,6 @@ ieee80211_tx_h_check_assoc(struct ieee80211_tx_data *tx)
8576 if (tx->sdata->vif.type == NL80211_IFTYPE_WDS)
8577 return TX_CONTINUE;
8578
8579 - if (tx->sdata->vif.type == NL80211_IFTYPE_MESH_POINT)
8580 - return TX_CONTINUE;
8581 -
8582 if (tx->flags & IEEE80211_TX_PS_BUFFERED)
8583 return TX_CONTINUE;
8584
8585 diff --git a/net/netfilter/ipvs/ip_vs_core.c b/net/netfilter/ipvs/ip_vs_core.c
8586 index 990decba1fe4..3a2fa9c044f8 100644
8587 --- a/net/netfilter/ipvs/ip_vs_core.c
8588 +++ b/net/netfilter/ipvs/ip_vs_core.c
8589 @@ -313,7 +313,13 @@ ip_vs_sched_persist(struct ip_vs_service *svc,
8590 * return *ignored=0 i.e. ICMP and NF_DROP
8591 */
8592 sched = rcu_dereference(svc->scheduler);
8593 - dest = sched->schedule(svc, skb, iph);
8594 + if (sched) {
8595 + /* read svc->sched_data after svc->scheduler */
8596 + smp_rmb();
8597 + dest = sched->schedule(svc, skb, iph);
8598 + } else {
8599 + dest = NULL;
8600 + }
8601 if (!dest) {
8602 IP_VS_DBG(1, "p-schedule: no dest found.\n");
8603 kfree(param.pe_data);
8604 @@ -461,7 +467,13 @@ ip_vs_schedule(struct ip_vs_service *svc, struct sk_buff *skb,
8605 }
8606
8607 sched = rcu_dereference(svc->scheduler);
8608 - dest = sched->schedule(svc, skb, iph);
8609 + if (sched) {
8610 + /* read svc->sched_data after svc->scheduler */
8611 + smp_rmb();
8612 + dest = sched->schedule(svc, skb, iph);
8613 + } else {
8614 + dest = NULL;
8615 + }
8616 if (dest == NULL) {
8617 IP_VS_DBG(1, "Schedule: no dest found.\n");
8618 return NULL;
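
The explicit smp_rmb() in both hunks is needed because svc->sched_data is an independent field of the service, not memory reached through the scheduler pointer, so the address dependency carried by rcu_dereference() does not order that read. A sketch of the pairing the comment implies (the writer side is simplified; the exact bind-time sequence lives in ip_vs_bind_scheduler()):

    /* Writer: initialize per-service scheduler state first; the
     * release barrier inside rcu_assign_pointer() orders it before
     * the pointer becomes visible to readers. */
    svc->sched_data = state;
    rcu_assign_pointer(svc->scheduler, sched);

    /* Reader: a load of a separate field is not ordered by the
     * pointer dereference alone, hence the smp_rmb(). */
    sched = rcu_dereference(svc->scheduler);
    if (sched) {
            smp_rmb();
            dest = sched->schedule(svc, skb, iph);  /* may use svc->sched_data */
    }
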
8619 diff --git a/net/netfilter/ipvs/ip_vs_ctl.c b/net/netfilter/ipvs/ip_vs_ctl.c
8620 index ac7ba689efe7..9b1452e8e868 100644
8621 --- a/net/netfilter/ipvs/ip_vs_ctl.c
8622 +++ b/net/netfilter/ipvs/ip_vs_ctl.c
8623 @@ -828,15 +828,16 @@ __ip_vs_update_dest(struct ip_vs_service *svc, struct ip_vs_dest *dest,
8624 __ip_vs_dst_cache_reset(dest);
8625 spin_unlock_bh(&dest->dst_lock);
8626
8627 - sched = rcu_dereference_protected(svc->scheduler, 1);
8628 if (add) {
8629 ip_vs_start_estimator(svc->net, &dest->stats);
8630 list_add_rcu(&dest->n_list, &svc->destinations);
8631 svc->num_dests++;
8632 - if (sched->add_dest)
8633 + sched = rcu_dereference_protected(svc->scheduler, 1);
8634 + if (sched && sched->add_dest)
8635 sched->add_dest(svc, dest);
8636 } else {
8637 - if (sched->upd_dest)
8638 + sched = rcu_dereference_protected(svc->scheduler, 1);
8639 + if (sched && sched->upd_dest)
8640 sched->upd_dest(svc, dest);
8641 }
8642 }
8643 @@ -1070,7 +1071,7 @@ static void __ip_vs_unlink_dest(struct ip_vs_service *svc,
8644 struct ip_vs_scheduler *sched;
8645
8646 sched = rcu_dereference_protected(svc->scheduler, 1);
8647 - if (sched->del_dest)
8648 + if (sched && sched->del_dest)
8649 sched->del_dest(svc, dest);
8650 }
8651 }
8652 @@ -1161,11 +1162,14 @@ ip_vs_add_service(struct net *net, struct ip_vs_service_user_kern *u,
8653 ip_vs_use_count_inc();
8654
8655 /* Lookup the scheduler by 'u->sched_name' */
8656 - sched = ip_vs_scheduler_get(u->sched_name);
8657 - if (sched == NULL) {
8658 - pr_info("Scheduler module ip_vs_%s not found\n", u->sched_name);
8659 - ret = -ENOENT;
8660 - goto out_err;
8661 + if (strcmp(u->sched_name, "none")) {
8662 + sched = ip_vs_scheduler_get(u->sched_name);
8663 + if (!sched) {
8664 + pr_info("Scheduler module ip_vs_%s not found\n",
8665 + u->sched_name);
8666 + ret = -ENOENT;
8667 + goto out_err;
8668 + }
8669 }
8670
8671 if (u->pe_name && *u->pe_name) {
8672 @@ -1226,10 +1230,12 @@ ip_vs_add_service(struct net *net, struct ip_vs_service_user_kern *u,
8673 spin_lock_init(&svc->stats.lock);
8674
8675 /* Bind the scheduler */
8676 - ret = ip_vs_bind_scheduler(svc, sched);
8677 - if (ret)
8678 - goto out_err;
8679 - sched = NULL;
8680 + if (sched) {
8681 + ret = ip_vs_bind_scheduler(svc, sched);
8682 + if (ret)
8683 + goto out_err;
8684 + sched = NULL;
8685 + }
8686
8687 /* Bind the ct retriever */
8688 RCU_INIT_POINTER(svc->pe, pe);
8689 @@ -1277,17 +1283,20 @@ ip_vs_add_service(struct net *net, struct ip_vs_service_user_kern *u,
8690 static int
8691 ip_vs_edit_service(struct ip_vs_service *svc, struct ip_vs_service_user_kern *u)
8692 {
8693 - struct ip_vs_scheduler *sched, *old_sched;
8694 + struct ip_vs_scheduler *sched = NULL, *old_sched;
8695 struct ip_vs_pe *pe = NULL, *old_pe = NULL;
8696 int ret = 0;
8697
8698 /*
8699 * Lookup the scheduler, by 'u->sched_name'
8700 */
8701 - sched = ip_vs_scheduler_get(u->sched_name);
8702 - if (sched == NULL) {
8703 - pr_info("Scheduler module ip_vs_%s not found\n", u->sched_name);
8704 - return -ENOENT;
8705 + if (strcmp(u->sched_name, "none")) {
8706 + sched = ip_vs_scheduler_get(u->sched_name);
8707 + if (!sched) {
8708 + pr_info("Scheduler module ip_vs_%s not found\n",
8709 + u->sched_name);
8710 + return -ENOENT;
8711 + }
8712 }
8713 old_sched = sched;
8714
8715 @@ -1315,14 +1324,20 @@ ip_vs_edit_service(struct ip_vs_service *svc, struct ip_vs_service_user_kern *u)
8716
8717 old_sched = rcu_dereference_protected(svc->scheduler, 1);
8718 if (sched != old_sched) {
8719 + if (old_sched) {
8720 + ip_vs_unbind_scheduler(svc, old_sched);
8721 + RCU_INIT_POINTER(svc->scheduler, NULL);
8722 + /* Wait all svc->sched_data users */
8723 + synchronize_rcu();
8724 + }
8725 /* Bind the new scheduler */
8726 - ret = ip_vs_bind_scheduler(svc, sched);
8727 - if (ret) {
8728 - old_sched = sched;
8729 - goto out;
8730 + if (sched) {
8731 + ret = ip_vs_bind_scheduler(svc, sched);
8732 + if (ret) {
8733 + ip_vs_scheduler_put(sched);
8734 + goto out;
8735 + }
8736 }
8737 - /* Unbind the old scheduler on success */
8738 - ip_vs_unbind_scheduler(svc, old_sched);
8739 }
8740
8741 /*
8742 @@ -1962,6 +1977,7 @@ static int ip_vs_info_seq_show(struct seq_file *seq, void *v)
8743 const struct ip_vs_iter *iter = seq->private;
8744 const struct ip_vs_dest *dest;
8745 struct ip_vs_scheduler *sched = rcu_dereference(svc->scheduler);
8746 + char *sched_name = sched ? sched->name : "none";
8747
8748 if (iter->table == ip_vs_svc_table) {
8749 #ifdef CONFIG_IP_VS_IPV6
8750 @@ -1970,18 +1986,18 @@ static int ip_vs_info_seq_show(struct seq_file *seq, void *v)
8751 ip_vs_proto_name(svc->protocol),
8752 &svc->addr.in6,
8753 ntohs(svc->port),
8754 - sched->name);
8755 + sched_name);
8756 else
8757 #endif
8758 seq_printf(seq, "%s %08X:%04X %s %s ",
8759 ip_vs_proto_name(svc->protocol),
8760 ntohl(svc->addr.ip),
8761 ntohs(svc->port),
8762 - sched->name,
8763 + sched_name,
8764 (svc->flags & IP_VS_SVC_F_ONEPACKET)?"ops ":"");
8765 } else {
8766 seq_printf(seq, "FWM %08X %s %s",
8767 - svc->fwmark, sched->name,
8768 + svc->fwmark, sched_name,
8769 (svc->flags & IP_VS_SVC_F_ONEPACKET)?"ops ":"");
8770 }
8771
8772 @@ -2401,13 +2417,15 @@ static void
8773 ip_vs_copy_service(struct ip_vs_service_entry *dst, struct ip_vs_service *src)
8774 {
8775 struct ip_vs_scheduler *sched;
8776 + char *sched_name;
8777
8778 sched = rcu_dereference_protected(src->scheduler, 1);
8779 + sched_name = sched ? sched->name : "none";
8780 dst->protocol = src->protocol;
8781 dst->addr = src->addr.ip;
8782 dst->port = src->port;
8783 dst->fwmark = src->fwmark;
8784 - strlcpy(dst->sched_name, sched->name, sizeof(dst->sched_name));
8785 + strlcpy(dst->sched_name, sched_name, sizeof(dst->sched_name));
8786 dst->flags = src->flags;
8787 dst->timeout = src->timeout / HZ;
8788 dst->netmask = src->netmask;
8789 @@ -2836,6 +2854,7 @@ static int ip_vs_genl_fill_service(struct sk_buff *skb,
8790 struct nlattr *nl_service;
8791 struct ip_vs_flags flags = { .flags = svc->flags,
8792 .mask = ~0 };
8793 + char *sched_name;
8794
8795 nl_service = nla_nest_start(skb, IPVS_CMD_ATTR_SERVICE);
8796 if (!nl_service)
8797 @@ -2854,8 +2873,9 @@ static int ip_vs_genl_fill_service(struct sk_buff *skb,
8798 }
8799
8800 sched = rcu_dereference_protected(svc->scheduler, 1);
8801 + sched_name = sched ? sched->name : "none";
8802 pe = rcu_dereference_protected(svc->pe, 1);
8803 - if (nla_put_string(skb, IPVS_SVC_ATTR_SCHED_NAME, sched->name) ||
8804 + if (nla_put_string(skb, IPVS_SVC_ATTR_SCHED_NAME, sched_name) ||
8805 (pe && nla_put_string(skb, IPVS_SVC_ATTR_PE_NAME, pe->name)) ||
8806 nla_put(skb, IPVS_SVC_ATTR_FLAGS, sizeof(flags), &flags) ||
8807 nla_put_u32(skb, IPVS_SVC_ATTR_TIMEOUT, svc->timeout / HZ) ||
8808 diff --git a/net/netfilter/ipvs/ip_vs_sched.c b/net/netfilter/ipvs/ip_vs_sched.c
8809 index 4dbcda6258bc..21b6b515a09c 100644
8810 --- a/net/netfilter/ipvs/ip_vs_sched.c
8811 +++ b/net/netfilter/ipvs/ip_vs_sched.c
8812 @@ -74,7 +74,7 @@ void ip_vs_unbind_scheduler(struct ip_vs_service *svc,
8813
8814 if (sched->done_service)
8815 sched->done_service(svc);
8816 - /* svc->scheduler can not be set to NULL */
8817 + /* svc->scheduler can be set to NULL only by caller */
8818 }
8819
8820
8821 @@ -148,21 +148,21 @@ void ip_vs_scheduler_put(struct ip_vs_scheduler *scheduler)
8822
8823 void ip_vs_scheduler_err(struct ip_vs_service *svc, const char *msg)
8824 {
8825 - struct ip_vs_scheduler *sched;
8826 + struct ip_vs_scheduler *sched = rcu_dereference(svc->scheduler);
8827 + char *sched_name = sched ? sched->name : "none";
8828
8829 - sched = rcu_dereference(svc->scheduler);
8830 if (svc->fwmark) {
8831 IP_VS_ERR_RL("%s: FWM %u 0x%08X - %s\n",
8832 - sched->name, svc->fwmark, svc->fwmark, msg);
8833 + sched_name, svc->fwmark, svc->fwmark, msg);
8834 #ifdef CONFIG_IP_VS_IPV6
8835 } else if (svc->af == AF_INET6) {
8836 IP_VS_ERR_RL("%s: %s [%pI6c]:%d - %s\n",
8837 - sched->name, ip_vs_proto_name(svc->protocol),
8838 + sched_name, ip_vs_proto_name(svc->protocol),
8839 &svc->addr.in6, ntohs(svc->port), msg);
8840 #endif
8841 } else {
8842 IP_VS_ERR_RL("%s: %s %pI4:%d - %s\n",
8843 - sched->name, ip_vs_proto_name(svc->protocol),
8844 + sched_name, ip_vs_proto_name(svc->protocol),
8845 &svc->addr.ip, ntohs(svc->port), msg);
8846 }
8847 }
8848 diff --git a/net/netfilter/ipvs/ip_vs_sync.c b/net/netfilter/ipvs/ip_vs_sync.c
8849 index 7162c86fd50d..72fac696c85e 100644
8850 --- a/net/netfilter/ipvs/ip_vs_sync.c
8851 +++ b/net/netfilter/ipvs/ip_vs_sync.c
8852 @@ -612,7 +612,7 @@ static void ip_vs_sync_conn_v0(struct net *net, struct ip_vs_conn *cp,
8853 pkts = atomic_add_return(1, &cp->in_pkts);
8854 else
8855 pkts = sysctl_sync_threshold(ipvs);
8856 - ip_vs_sync_conn(net, cp->control, pkts);
8857 + ip_vs_sync_conn(net, cp, pkts);
8858 }
8859 }
8860
8861 diff --git a/net/netfilter/ipvs/ip_vs_xmit.c b/net/netfilter/ipvs/ip_vs_xmit.c
8862 index bd90bf8107da..72f030878e7a 100644
8863 --- a/net/netfilter/ipvs/ip_vs_xmit.c
8864 +++ b/net/netfilter/ipvs/ip_vs_xmit.c
8865 @@ -130,7 +130,6 @@ static struct rtable *do_output_route4(struct net *net, __be32 daddr,
8866
8867 memset(&fl4, 0, sizeof(fl4));
8868 fl4.daddr = daddr;
8869 - fl4.saddr = (rt_mode & IP_VS_RT_MODE_CONNECT) ? *saddr : 0;
8870 fl4.flowi4_flags = (rt_mode & IP_VS_RT_MODE_KNOWN_NH) ?
8871 FLOWI_FLAG_KNOWN_NH : 0;
8872
8873 @@ -524,6 +523,21 @@ static inline int ip_vs_tunnel_xmit_prepare(struct sk_buff *skb,
8874 return ret;
8875 }
8876
8877 +/* In the event of a remote destination, it's possible that we would have
8878 + * matches against an old socket (particularly a TIME-WAIT socket). This
8879 + * causes havoc down the line (ip_local_out et al. expect regular sockets
8880 + * and invalid memory accesses will happen) so simply drop the association
8881 + * in this case.
8882 +*/
8883 +static inline void ip_vs_drop_early_demux_sk(struct sk_buff *skb)
8884 +{
8885 + /* If dev is set, the packet came from the LOCAL_IN callback and
8886 + * not from a local TCP socket.
8887 + */
8888 + if (skb->dev)
8889 + skb_orphan(skb);
8890 +}
8891 +
8892 /* return NF_STOLEN (sent) or NF_ACCEPT if local=1 (not sent) */
8893 static inline int ip_vs_nat_send_or_cont(int pf, struct sk_buff *skb,
8894 struct ip_vs_conn *cp, int local)
8895 @@ -535,12 +549,21 @@ static inline int ip_vs_nat_send_or_cont(int pf, struct sk_buff *skb,
8896 ip_vs_notrack(skb);
8897 else
8898 ip_vs_update_conntrack(skb, cp, 1);
8899 +
8900 + /* Remove the early_demux association unless it's bound for the
8901 + * exact same port and address on this host after translation.
8902 + */
8903 + if (!local || cp->vport != cp->dport ||
8904 + !ip_vs_addr_equal(cp->af, &cp->vaddr, &cp->daddr))
8905 + ip_vs_drop_early_demux_sk(skb);
8906 +
8907 if (!local) {
8908 skb_forward_csum(skb);
8909 NF_HOOK(pf, NF_INET_LOCAL_OUT, skb, NULL, skb_dst(skb)->dev,
8910 dst_output);
8911 } else
8912 ret = NF_ACCEPT;
8913 +
8914 return ret;
8915 }
8916
8917 @@ -554,6 +577,7 @@ static inline int ip_vs_send_or_cont(int pf, struct sk_buff *skb,
8918 if (likely(!(cp->flags & IP_VS_CONN_F_NFCT)))
8919 ip_vs_notrack(skb);
8920 if (!local) {
8921 + ip_vs_drop_early_demux_sk(skb);
8922 skb_forward_csum(skb);
8923 NF_HOOK(pf, NF_INET_LOCAL_OUT, skb, NULL, skb_dst(skb)->dev,
8924 dst_output);
8925 @@ -842,6 +866,8 @@ ip_vs_prepare_tunneled_skb(struct sk_buff *skb, int skb_af,
8926 struct ipv6hdr *old_ipv6h = NULL;
8927 #endif
8928
8929 + ip_vs_drop_early_demux_sk(skb);
8930 +
8931 if (skb_headroom(skb) < max_headroom || skb_cloned(skb)) {
8932 new_skb = skb_realloc_headroom(skb, max_headroom);
8933 if (!new_skb)
8934 diff --git a/net/netfilter/nf_conntrack_expect.c b/net/netfilter/nf_conntrack_expect.c
8935 index 91a1837acd0e..26af45193ab7 100644
8936 --- a/net/netfilter/nf_conntrack_expect.c
8937 +++ b/net/netfilter/nf_conntrack_expect.c
8938 @@ -219,7 +219,8 @@ static inline int expect_clash(const struct nf_conntrack_expect *a,
8939 a->mask.src.u3.all[count] & b->mask.src.u3.all[count];
8940 }
8941
8942 - return nf_ct_tuple_mask_cmp(&a->tuple, &b->tuple, &intersect_mask);
8943 + return nf_ct_tuple_mask_cmp(&a->tuple, &b->tuple, &intersect_mask) &&
8944 + nf_ct_zone(a->master) == nf_ct_zone(b->master);
8945 }
8946
8947 static inline int expect_matches(const struct nf_conntrack_expect *a,
8948 diff --git a/net/netfilter/nf_conntrack_netlink.c b/net/netfilter/nf_conntrack_netlink.c
8949 index 1bd9ed9e62f6..d3ea2999d0dc 100644
8950 --- a/net/netfilter/nf_conntrack_netlink.c
8951 +++ b/net/netfilter/nf_conntrack_netlink.c
8952 @@ -2956,11 +2956,6 @@ ctnetlink_create_expect(struct net *net, u16 zone,
8953 }
8954
8955 err = nf_ct_expect_related_report(exp, portid, report);
8956 - if (err < 0)
8957 - goto err_exp;
8958 -
8959 - return 0;
8960 -err_exp:
8961 nf_ct_expect_put(exp);
8962 err_ct:
8963 nf_ct_put(ct);
8964 diff --git a/net/netfilter/nf_log.c b/net/netfilter/nf_log.c
8965 index d7197649dba6..cfe93c2227c5 100644
8966 --- a/net/netfilter/nf_log.c
8967 +++ b/net/netfilter/nf_log.c
8968 @@ -19,6 +19,9 @@
8969 static struct nf_logger __rcu *loggers[NFPROTO_NUMPROTO][NF_LOG_TYPE_MAX] __read_mostly;
8970 static DEFINE_MUTEX(nf_log_mutex);
8971
8972 +#define nft_log_dereference(logger) \
8973 + rcu_dereference_protected(logger, lockdep_is_held(&nf_log_mutex))
8974 +
8975 static struct nf_logger *__find_logger(int pf, const char *str_logger)
8976 {
8977 struct nf_logger *log;
8978 @@ -28,8 +31,7 @@ static struct nf_logger *__find_logger(int pf, const char *str_logger)
8979 if (loggers[pf][i] == NULL)
8980 continue;
8981
8982 - log = rcu_dereference_protected(loggers[pf][i],
8983 - lockdep_is_held(&nf_log_mutex));
8984 + log = nft_log_dereference(loggers[pf][i]);
8985 if (!strncasecmp(str_logger, log->name, strlen(log->name)))
8986 return log;
8987 }
8988 @@ -45,8 +47,7 @@ void nf_log_set(struct net *net, u_int8_t pf, const struct nf_logger *logger)
8989 return;
8990
8991 mutex_lock(&nf_log_mutex);
8992 - log = rcu_dereference_protected(net->nf.nf_loggers[pf],
8993 - lockdep_is_held(&nf_log_mutex));
8994 + log = nft_log_dereference(net->nf.nf_loggers[pf]);
8995 if (log == NULL)
8996 rcu_assign_pointer(net->nf.nf_loggers[pf], logger);
8997
8998 @@ -61,8 +62,7 @@ void nf_log_unset(struct net *net, const struct nf_logger *logger)
8999
9000 mutex_lock(&nf_log_mutex);
9001 for (i = 0; i < NFPROTO_NUMPROTO; i++) {
9002 - log = rcu_dereference_protected(net->nf.nf_loggers[i],
9003 - lockdep_is_held(&nf_log_mutex));
9004 + log = nft_log_dereference(net->nf.nf_loggers[i]);
9005 if (log == logger)
9006 RCU_INIT_POINTER(net->nf.nf_loggers[i], NULL);
9007 }
9008 @@ -97,12 +97,17 @@ EXPORT_SYMBOL(nf_log_register);
9009
9010 void nf_log_unregister(struct nf_logger *logger)
9011 {
9012 + const struct nf_logger *log;
9013 int i;
9014
9015 mutex_lock(&nf_log_mutex);
9016 - for (i = 0; i < NFPROTO_NUMPROTO; i++)
9017 - RCU_INIT_POINTER(loggers[i][logger->type], NULL);
9018 + for (i = 0; i < NFPROTO_NUMPROTO; i++) {
9019 + log = nft_log_dereference(loggers[i][logger->type]);
9020 + if (log == logger)
9021 + RCU_INIT_POINTER(loggers[i][logger->type], NULL);
9022 + }
9023 mutex_unlock(&nf_log_mutex);
9024 + synchronize_rcu();
9025 }
9026 EXPORT_SYMBOL(nf_log_unregister);
9027
9028 @@ -297,8 +302,7 @@ static int seq_show(struct seq_file *s, void *v)
9029 int i, ret;
9030 struct net *net = seq_file_net(s);
9031
9032 - logger = rcu_dereference_protected(net->nf.nf_loggers[*pos],
9033 - lockdep_is_held(&nf_log_mutex));
9034 + logger = nft_log_dereference(net->nf.nf_loggers[*pos]);
9035
9036 if (!logger)
9037 ret = seq_printf(s, "%2lld NONE (", *pos);
9038 @@ -312,8 +316,7 @@ static int seq_show(struct seq_file *s, void *v)
9039 if (loggers[*pos][i] == NULL)
9040 continue;
9041
9042 - logger = rcu_dereference_protected(loggers[*pos][i],
9043 - lockdep_is_held(&nf_log_mutex));
9044 + logger = nft_log_dereference(loggers[*pos][i]);
9045 ret = seq_printf(s, "%s", logger->name);
9046 if (ret < 0)
9047 return ret;
9048 @@ -385,8 +388,7 @@ static int nf_log_proc_dostring(struct ctl_table *table, int write,
9049 mutex_unlock(&nf_log_mutex);
9050 } else {
9051 mutex_lock(&nf_log_mutex);
9052 - logger = rcu_dereference_protected(net->nf.nf_loggers[tindex],
9053 - lockdep_is_held(&nf_log_mutex));
9054 + logger = nft_log_dereference(net->nf.nf_loggers[tindex]);
9055 if (!logger)
9056 table->data = "NONE";
9057 else
9058 diff --git a/net/netfilter/nfnetlink.c b/net/netfilter/nfnetlink.c
9059 index 1aa7049c93f5..e41bab38a3ca 100644
9060 --- a/net/netfilter/nfnetlink.c
9061 +++ b/net/netfilter/nfnetlink.c
9062 @@ -433,6 +433,7 @@ done:
9063 static void nfnetlink_rcv(struct sk_buff *skb)
9064 {
9065 struct nlmsghdr *nlh = nlmsg_hdr(skb);
9066 + u_int16_t res_id;
9067 int msglen;
9068
9069 if (nlh->nlmsg_len < NLMSG_HDRLEN ||
9070 @@ -457,7 +458,12 @@ static void nfnetlink_rcv(struct sk_buff *skb)
9071
9072 nfgenmsg = nlmsg_data(nlh);
9073 skb_pull(skb, msglen);
9074 - nfnetlink_rcv_batch(skb, nlh, nfgenmsg->res_id);
9075 + /* Work around old nft using host byte order */
9076 + if (nfgenmsg->res_id == NFNL_SUBSYS_NFTABLES)
9077 + res_id = NFNL_SUBSYS_NFTABLES;
9078 + else
9079 + res_id = ntohs(nfgenmsg->res_id);
9080 + nfnetlink_rcv_batch(skb, nlh, res_id);
9081 } else {
9082 netlink_rcv_skb(skb, &nfnetlink_rcv_msg);
9083 }
9084 diff --git a/net/netfilter/nft_compat.c b/net/netfilter/nft_compat.c
9085 index e22a2961cc39..ff6f35971ea2 100644
9086 --- a/net/netfilter/nft_compat.c
9087 +++ b/net/netfilter/nft_compat.c
9088 @@ -561,6 +561,13 @@ struct nft_xt {
9089
9090 static struct nft_expr_type nft_match_type;
9091
9092 +static bool nft_match_cmp(const struct xt_match *match,
9093 + const char *name, u32 rev, u32 family)
9094 +{
9095 + return strcmp(match->name, name) == 0 && match->revision == rev &&
9096 + (match->family == NFPROTO_UNSPEC || match->family == family);
9097 +}
9098 +
9099 static const struct nft_expr_ops *
9100 nft_match_select_ops(const struct nft_ctx *ctx,
9101 const struct nlattr * const tb[])
9102 @@ -568,7 +575,7 @@ nft_match_select_ops(const struct nft_ctx *ctx,
9103 struct nft_xt *nft_match;
9104 struct xt_match *match;
9105 char *mt_name;
9106 - __u32 rev, family;
9107 + u32 rev, family;
9108
9109 if (tb[NFTA_MATCH_NAME] == NULL ||
9110 tb[NFTA_MATCH_REV] == NULL ||
9111 @@ -583,9 +590,12 @@ nft_match_select_ops(const struct nft_ctx *ctx,
9112 list_for_each_entry(nft_match, &nft_match_list, head) {
9113 struct xt_match *match = nft_match->ops.data;
9114
9115 - if (strcmp(match->name, mt_name) == 0 &&
9116 - match->revision == rev && match->family == family)
9117 + if (nft_match_cmp(match, mt_name, rev, family)) {
9118 + if (!try_module_get(match->me))
9119 + return ERR_PTR(-ENOENT);
9120 +
9121 return &nft_match->ops;
9122 + }
9123 }
9124
9125 match = xt_request_find_match(family, mt_name, rev);
9126 @@ -631,6 +641,13 @@ static LIST_HEAD(nft_target_list);
9127
9128 static struct nft_expr_type nft_target_type;
9129
9130 +static bool nft_target_cmp(const struct xt_target *tg,
9131 + const char *name, u32 rev, u32 family)
9132 +{
9133 + return strcmp(tg->name, name) == 0 && tg->revision == rev &&
9134 + (tg->family == NFPROTO_UNSPEC || tg->family == family);
9135 +}
9136 +
9137 static const struct nft_expr_ops *
9138 nft_target_select_ops(const struct nft_ctx *ctx,
9139 const struct nlattr * const tb[])
9140 @@ -638,7 +655,7 @@ nft_target_select_ops(const struct nft_ctx *ctx,
9141 struct nft_xt *nft_target;
9142 struct xt_target *target;
9143 char *tg_name;
9144 - __u32 rev, family;
9145 + u32 rev, family;
9146
9147 if (tb[NFTA_TARGET_NAME] == NULL ||
9148 tb[NFTA_TARGET_REV] == NULL ||
9149 @@ -653,9 +670,12 @@ nft_target_select_ops(const struct nft_ctx *ctx,
9150 list_for_each_entry(nft_target, &nft_target_list, head) {
9151 struct xt_target *target = nft_target->ops.data;
9152
9153 - if (strcmp(target->name, tg_name) == 0 &&
9154 - target->revision == rev && target->family == family)
9155 + if (nft_target_cmp(target, tg_name, rev, family)) {
9156 + if (!try_module_get(target->me))
9157 + return ERR_PTR(-ENOENT);
9158 +
9159 return &nft_target->ops;
9160 + }
9161 }
9162
9163 target = xt_request_find_target(family, tg_name, rev);
9164 diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
9165 index 6ffd1ebaba93..fe106b50053e 100644
9166 --- a/net/netlink/af_netlink.c
9167 +++ b/net/netlink/af_netlink.c
9168 @@ -133,6 +133,24 @@ static inline u32 netlink_group_mask(u32 group)
9169 return group ? 1 << (group - 1) : 0;
9170 }
9171
9172 +static struct sk_buff *netlink_to_full_skb(const struct sk_buff *skb,
9173 + gfp_t gfp_mask)
9174 +{
9175 + unsigned int len = skb_end_offset(skb);
9176 + struct sk_buff *new;
9177 +
9178 + new = alloc_skb(len, gfp_mask);
9179 + if (new == NULL)
9180 + return NULL;
9181 +
9182 + NETLINK_CB(new).portid = NETLINK_CB(skb).portid;
9183 + NETLINK_CB(new).dst_group = NETLINK_CB(skb).dst_group;
9184 + NETLINK_CB(new).creds = NETLINK_CB(skb).creds;
9185 +
9186 + memcpy(skb_put(new, len), skb->data, len);
9187 + return new;
9188 +}
9189 +
9190 int netlink_add_tap(struct netlink_tap *nt)
9191 {
9192 if (unlikely(nt->dev->type != ARPHRD_NETLINK))
9193 @@ -215,7 +233,11 @@ static int __netlink_deliver_tap_skb(struct sk_buff *skb,
9194 int ret = -ENOMEM;
9195
9196 dev_hold(dev);
9197 - nskb = skb_clone(skb, GFP_ATOMIC);
9198 +
9199 + if (netlink_skb_is_mmaped(skb) || is_vmalloc_addr(skb->head))
9200 + nskb = netlink_to_full_skb(skb, GFP_ATOMIC);
9201 + else
9202 + nskb = skb_clone(skb, GFP_ATOMIC);
9203 if (nskb) {
9204 nskb->dev = dev;
9205 nskb->protocol = htons((u16) sk->sk_protocol);
9206 @@ -287,11 +309,6 @@ static void netlink_rcv_wake(struct sock *sk)
9207 }
9208
9209 #ifdef CONFIG_NETLINK_MMAP
9210 -static bool netlink_skb_is_mmaped(const struct sk_buff *skb)
9211 -{
9212 - return NETLINK_CB(skb).flags & NETLINK_SKB_MMAPED;
9213 -}
9214 -
9215 static bool netlink_rx_is_mmaped(struct sock *sk)
9216 {
9217 return nlk_sk(sk)->rx_ring.pg_vec != NULL;
9218 @@ -843,7 +860,6 @@ static void netlink_ring_set_copied(struct sock *sk, struct sk_buff *skb)
9219 }
9220
9221 #else /* CONFIG_NETLINK_MMAP */
9222 -#define netlink_skb_is_mmaped(skb) false
9223 #define netlink_rx_is_mmaped(sk) false
9224 #define netlink_tx_is_mmaped(sk) false
9225 #define netlink_mmap sock_no_mmap
9226 diff --git a/net/netlink/af_netlink.h b/net/netlink/af_netlink.h
9227 index b20a1731759b..3951874e715d 100644
9228 --- a/net/netlink/af_netlink.h
9229 +++ b/net/netlink/af_netlink.h
9230 @@ -57,6 +57,15 @@ static inline struct netlink_sock *nlk_sk(struct sock *sk)
9231 return container_of(sk, struct netlink_sock, sk);
9232 }
9233
9234 +static inline bool netlink_skb_is_mmaped(const struct sk_buff *skb)
9235 +{
9236 +#ifdef CONFIG_NETLINK_MMAP
9237 + return NETLINK_CB(skb).flags & NETLINK_SKB_MMAPED;
9238 +#else
9239 + return false;
9240 +#endif /* CONFIG_NETLINK_MMAP */
9241 +}
9242 +
9243 struct netlink_table {
9244 struct rhashtable hash;
9245 struct hlist_head mc_list;
9246 diff --git a/net/openvswitch/datapath.c b/net/openvswitch/datapath.c
9247 index 28213dff723d..acf6b2edba65 100644
9248 --- a/net/openvswitch/datapath.c
9249 +++ b/net/openvswitch/datapath.c
9250 @@ -834,7 +834,7 @@ static int ovs_flow_cmd_new(struct sk_buff *skb, struct genl_info *info)
9251 if (error)
9252 goto err_kfree_flow;
9253
9254 - ovs_flow_mask_key(&new_flow->key, &new_flow->unmasked_key, &mask);
9255 + ovs_flow_mask_key(&new_flow->key, &new_flow->unmasked_key, true, &mask);
9256
9257 /* Validate actions. */
9258 acts = ovs_nla_alloc_flow_actions(nla_len(a[OVS_FLOW_ATTR_ACTIONS]));
9259 @@ -949,7 +949,7 @@ static struct sw_flow_actions *get_flow_actions(const struct nlattr *a,
9260 if (IS_ERR(acts))
9261 return acts;
9262
9263 - ovs_flow_mask_key(&masked_key, key, mask);
9264 + ovs_flow_mask_key(&masked_key, key, true, mask);
9265 error = ovs_nla_copy_actions(a, &masked_key, 0, &acts);
9266 if (error) {
9267 OVS_NLERR("Flow actions may not be safe on all matching packets.\n");
9268 diff --git a/net/openvswitch/flow_table.c b/net/openvswitch/flow_table.c
9269 index cf2d853646f0..740041a09b9d 100644
9270 --- a/net/openvswitch/flow_table.c
9271 +++ b/net/openvswitch/flow_table.c
9272 @@ -56,20 +56,21 @@ static u16 range_n_bytes(const struct sw_flow_key_range *range)
9273 }
9274
9275 void ovs_flow_mask_key(struct sw_flow_key *dst, const struct sw_flow_key *src,
9276 - const struct sw_flow_mask *mask)
9277 + bool full, const struct sw_flow_mask *mask)
9278 {
9279 - const long *m = (const long *)((const u8 *)&mask->key +
9280 - mask->range.start);
9281 - const long *s = (const long *)((const u8 *)src +
9282 - mask->range.start);
9283 - long *d = (long *)((u8 *)dst + mask->range.start);
9284 + int start = full ? 0 : mask->range.start;
9285 + int len = full ? sizeof *dst : range_n_bytes(&mask->range);
9286 + const long *m = (const long *)((const u8 *)&mask->key + start);
9287 + const long *s = (const long *)((const u8 *)src + start);
9288 + long *d = (long *)((u8 *)dst + start);
9289 int i;
9290
9291 - /* The memory outside of the 'mask->range' are not set since
9292 - * further operations on 'dst' only uses contents within
9293 - * 'mask->range'.
9294 + /* If 'full' is true then all of 'dst' is fully initialized. Otherwise,
9295 + * if 'full' is false the memory outside of the 'mask->range' is left
9296 + * uninitialized. This can be used as an optimization when further
9297 + * operations on 'dst' only use contents within 'mask->range'.
9298 */
9299 - for (i = 0; i < range_n_bytes(&mask->range); i += sizeof(long))
9300 + for (i = 0; i < len; i += sizeof(long))
9301 *d++ = *s++ & *m++;
9302 }
9303
9304 @@ -418,7 +419,7 @@ static struct sw_flow *masked_flow_lookup(struct table_instance *ti,
9305 u32 hash;
9306 struct sw_flow_key masked_key;
9307
9308 - ovs_flow_mask_key(&masked_key, unmasked, mask);
9309 + ovs_flow_mask_key(&masked_key, unmasked, false, mask);
9310 hash = flow_hash(&masked_key, key_start, key_end);
9311 head = find_bucket(ti, hash);
9312 hlist_for_each_entry_rcu(flow, head, hash_node[ti->node_ver]) {
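
The 'full' flag above trades initialization cost for safety: the hot masked_flow_lookup() path keeps writing only the bytes inside mask->range, while callers that later copy the whole masked key pass true so no uninitialized stack bytes can leak. A runnable sketch of the word-wise masking with a hypothetical four-word key:

#include <stdio.h>

#define KEY_WORDS 4

static void mask_key(long *dst, const long *src, const long *mask,
                     int start, int n_words)
{
    for (int i = start; i < start + n_words; i++)
        dst[i] = src[i] & mask[i];   /* words outside the range stay as-is */
}

int main(void)
{
    long src[KEY_WORDS]  = { 0x1234, 0xabcd, 0x5678, 0xef01 };
    long mask[KEY_WORDS] = { 0,      0xff00, 0x00ff, 0      };
    long dst[KEY_WORDS]  = { -1, -1, -1, -1 };   /* stale stack data */

    mask_key(dst, src, mask, 1, 2);         /* partial: dst[0], dst[3] stay -1 */
    mask_key(dst, src, mask, 0, KEY_WORDS); /* full: every word initialized */
    printf("%lx %lx %lx %lx\n", dst[0], dst[1], dst[2], dst[3]);
    return 0;
}
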
9313 diff --git a/net/openvswitch/flow_table.h b/net/openvswitch/flow_table.h
9314 index 5918bff7f3f6..2f0cf200ede9 100644
9315 --- a/net/openvswitch/flow_table.h
9316 +++ b/net/openvswitch/flow_table.h
9317 @@ -82,5 +82,5 @@ bool ovs_flow_cmp_unmasked_key(const struct sw_flow *flow,
9318 struct sw_flow_match *match);
9319
9320 void ovs_flow_mask_key(struct sw_flow_key *dst, const struct sw_flow_key *src,
9321 - const struct sw_flow_mask *mask);
9322 + bool full, const struct sw_flow_mask *mask);
9323 #endif /* flow_table.h */
9324 diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
9325 index 5dcfe05ea232..bf6097793170 100644
9326 --- a/net/packet/af_packet.c
9327 +++ b/net/packet/af_packet.c
9328 @@ -2645,7 +2645,7 @@ static int packet_release(struct socket *sock)
9329 static int packet_do_bind(struct sock *sk, struct net_device *dev, __be16 proto)
9330 {
9331 struct packet_sock *po = pkt_sk(sk);
9332 - const struct net_device *dev_curr;
9333 + struct net_device *dev_curr;
9334 __be16 proto_curr;
9335 bool need_rehook;
9336
9337 @@ -2669,15 +2669,13 @@ static int packet_do_bind(struct sock *sk, struct net_device *dev, __be16 proto)
9338
9339 po->num = proto;
9340 po->prot_hook.type = proto;
9341 -
9342 - if (po->prot_hook.dev)
9343 - dev_put(po->prot_hook.dev);
9344 -
9345 po->prot_hook.dev = dev;
9346
9347 po->ifindex = dev ? dev->ifindex : 0;
9348 packet_cached_dev_assign(po, dev);
9349 }
9350 + if (dev_curr)
9351 + dev_put(dev_curr);
9352
9353 if (proto == 0 || !need_rehook)
9354 goto out_unlock;
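
The packet_do_bind() reorder above is the usual reference-swap discipline: remember the old device, publish the new binding, and only then drop the old reference, so there is no window in which the hook still points at a device whose reference is already gone (dropping the const qualifier is what lets dev_put() take the pointer). A stand-in sketch of the pattern, not the kernel API:

struct dev { int refs; };

static void dev_get(struct dev *d)     { if (d) d->refs++; }
static void dev_release(struct dev *d) { if (d) d->refs--; }

struct hook { struct dev *dev; };

static void rebind(struct hook *h, struct dev *newdev)
{
    struct dev *olddev = h->dev;   /* remember it, do not drop yet */

    dev_get(newdev);
    h->dev = newdev;               /* publish the new binding first */

    dev_release(olddev);           /* old reference dropped only after */
}
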
9355 diff --git a/net/sctp/protocol.c b/net/sctp/protocol.c
9356 index 8f34b27d5775..143c4ebd55fa 100644
9357 --- a/net/sctp/protocol.c
9358 +++ b/net/sctp/protocol.c
9359 @@ -1166,7 +1166,7 @@ static void sctp_v4_del_protocol(void)
9360 unregister_inetaddr_notifier(&sctp_inetaddr_notifier);
9361 }
9362
9363 -static int __net_init sctp_net_init(struct net *net)
9364 +static int __net_init sctp_defaults_init(struct net *net)
9365 {
9366 int status;
9367
9368 @@ -1259,12 +1259,6 @@ static int __net_init sctp_net_init(struct net *net)
9369
9370 sctp_dbg_objcnt_init(net);
9371
9372 - /* Initialize the control inode/socket for handling OOTB packets. */
9373 - if ((status = sctp_ctl_sock_init(net))) {
9374 - pr_err("Failed to initialize the SCTP control sock\n");
9375 - goto err_ctl_sock_init;
9376 - }
9377 -
9378 /* Initialize the local address list. */
9379 INIT_LIST_HEAD(&net->sctp.local_addr_list);
9380 spin_lock_init(&net->sctp.local_addr_lock);
9381 @@ -1280,9 +1274,6 @@ static int __net_init sctp_net_init(struct net *net)
9382
9383 return 0;
9384
9385 -err_ctl_sock_init:
9386 - sctp_dbg_objcnt_exit(net);
9387 - sctp_proc_exit(net);
9388 err_init_proc:
9389 cleanup_sctp_mibs(net);
9390 err_init_mibs:
9391 @@ -1291,15 +1282,12 @@ err_sysctl_register:
9392 return status;
9393 }
9394
9395 -static void __net_exit sctp_net_exit(struct net *net)
9396 +static void __net_exit sctp_defaults_exit(struct net *net)
9397 {
9398 /* Free the local address list */
9399 sctp_free_addr_wq(net);
9400 sctp_free_local_addr_list(net);
9401
9402 - /* Free the control endpoint. */
9403 - inet_ctl_sock_destroy(net->sctp.ctl_sock);
9404 -
9405 sctp_dbg_objcnt_exit(net);
9406
9407 sctp_proc_exit(net);
9408 @@ -1307,9 +1295,32 @@ static void __net_exit sctp_net_exit(struct net *net)
9409 sctp_sysctl_net_unregister(net);
9410 }
9411
9412 -static struct pernet_operations sctp_net_ops = {
9413 - .init = sctp_net_init,
9414 - .exit = sctp_net_exit,
9415 +static struct pernet_operations sctp_defaults_ops = {
9416 + .init = sctp_defaults_init,
9417 + .exit = sctp_defaults_exit,
9418 +};
9419 +
9420 +static int __net_init sctp_ctrlsock_init(struct net *net)
9421 +{
9422 + int status;
9423 +
9424 + /* Initialize the control inode/socket for handling OOTB packets. */
9425 + status = sctp_ctl_sock_init(net);
9426 + if (status)
9427 + pr_err("Failed to initialize the SCTP control sock\n");
9428 +
9429 + return status;
9430 +}
9431 +
9432 +static void __net_exit sctp_ctrlsock_exit(struct net *net)
9433 +{
9434 + /* Free the control endpoint. */
9435 + inet_ctl_sock_destroy(net->sctp.ctl_sock);
9436 +}
9437 +
9438 +static struct pernet_operations sctp_ctrlsock_ops = {
9439 + .init = sctp_ctrlsock_init,
9440 + .exit = sctp_ctrlsock_exit,
9441 };
9442
9443 /* Initialize the universe into something sensible. */
9444 @@ -1443,8 +1454,11 @@ static __init int sctp_init(void)
9445 sctp_v4_pf_init();
9446 sctp_v6_pf_init();
9447
9448 - status = sctp_v4_protosw_init();
9449 + status = register_pernet_subsys(&sctp_defaults_ops);
9450 + if (status)
9451 + goto err_register_defaults;
9452
9453 + status = sctp_v4_protosw_init();
9454 if (status)
9455 goto err_protosw_init;
9456
9457 @@ -1452,9 +1466,9 @@ static __init int sctp_init(void)
9458 if (status)
9459 goto err_v6_protosw_init;
9460
9461 - status = register_pernet_subsys(&sctp_net_ops);
9462 + status = register_pernet_subsys(&sctp_ctrlsock_ops);
9463 if (status)
9464 - goto err_register_pernet_subsys;
9465 + goto err_register_ctrlsock;
9466
9467 status = sctp_v4_add_protocol();
9468 if (status)
9469 @@ -1470,12 +1484,14 @@ out:
9470 err_v6_add_protocol:
9471 sctp_v4_del_protocol();
9472 err_add_protocol:
9473 - unregister_pernet_subsys(&sctp_net_ops);
9474 -err_register_pernet_subsys:
9475 + unregister_pernet_subsys(&sctp_ctrlsock_ops);
9476 +err_register_ctrlsock:
9477 sctp_v6_protosw_exit();
9478 err_v6_protosw_init:
9479 sctp_v4_protosw_exit();
9480 err_protosw_init:
9481 + unregister_pernet_subsys(&sctp_defaults_ops);
9482 +err_register_defaults:
9483 sctp_v4_pf_exit();
9484 sctp_v6_pf_exit();
9485 sctp_sysctl_unregister();
9486 @@ -1508,12 +1524,14 @@ static __exit void sctp_exit(void)
9487 sctp_v6_del_protocol();
9488 sctp_v4_del_protocol();
9489
9490 - unregister_pernet_subsys(&sctp_net_ops);
9491 + unregister_pernet_subsys(&sctp_ctrlsock_ops);
9492
9493 /* Free protosw registrations */
9494 sctp_v6_protosw_exit();
9495 sctp_v4_protosw_exit();
9496
9497 + unregister_pernet_subsys(&sctp_defaults_ops);
9498 +
9499 /* Unregister with socket layer. */
9500 sctp_v6_pf_exit();
9501 sctp_v4_pf_exit();
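
Splitting the pernet operations above fixes an initialization race: the per-netns defaults (address lists, proc, sysctl) are registered before the protocol becomes reachable, and the OOTB control socket only after the protosw registration it depends on, with teardown unwinding in reverse. A runnable sketch of that ordering, using hypothetical helpers:

#include <stdio.h>

/* hypothetical stand-ins for the real registration steps */
static int  register_defaults(void)   { puts("defaults");  return 0; }
static void unregister_defaults(void) { }
static int  register_protosw(void)    { puts("protosw");   return 0; }
static void unregister_protosw(void)  { }
static int  register_ctrlsock(void)   { puts("ctrlsock");  return 0; }

static int proto_init(void)
{
    int err;

    err = register_defaults();     /* per-netns state must exist first */
    if (err)
        goto out;
    err = register_protosw();      /* protocol becomes creatable */
    if (err)
        goto unreg_defaults;
    err = register_ctrlsock();     /* needs protosw already in place */
    if (err)
        goto unreg_protosw;
    return 0;

unreg_protosw:
    unregister_protosw();
unreg_defaults:
    unregister_defaults();
out:
    return err;
}

int main(void) { return proto_init(); }
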
9502 diff --git a/net/sunrpc/xprtrdma/svc_rdma_sendto.c b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
9503 index 9f1b50689c0f..03252a652e4b 100644
9504 --- a/net/sunrpc/xprtrdma/svc_rdma_sendto.c
9505 +++ b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
9506 @@ -372,6 +372,7 @@ static int send_reply(struct svcxprt_rdma *rdma,
9507 int byte_count)
9508 {
9509 struct ib_send_wr send_wr;
9510 + u32 xdr_off;
9511 int sge_no;
9512 int sge_bytes;
9513 int page_no;
9514 @@ -406,8 +407,8 @@ static int send_reply(struct svcxprt_rdma *rdma,
9515 ctxt->direction = DMA_TO_DEVICE;
9516
9517 /* Map the payload indicated by 'byte_count' */
9518 + xdr_off = 0;
9519 for (sge_no = 1; byte_count && sge_no < vec->count; sge_no++) {
9520 - int xdr_off = 0;
9521 sge_bytes = min_t(size_t, vec->sge[sge_no].iov_len, byte_count);
9522 byte_count -= sge_bytes;
9523 ctxt->sge[sge_no].addr =
9524 @@ -443,6 +444,14 @@ static int send_reply(struct svcxprt_rdma *rdma,
9525 rqstp->rq_next_page = rqstp->rq_respages + 1;
9526
9527 BUG_ON(sge_no > rdma->sc_max_sge);
9528 +
9529 + /* The loop above bumps sc_dma_used for each sge. The
9530 + * xdr_buf.tail gets a separate sge, but resides in the
9531 + * same page as xdr_buf.head. Don't count it twice.
9532 + */
9533 + if (sge_no > ctxt->count)
9534 + atomic_dec(&rdma->sc_dma_used);
9535 +
9536 memset(&send_wr, 0, sizeof send_wr);
9537 ctxt->wr_op = IB_WR_SEND;
9538 send_wr.wr_id = (unsigned long)ctxt;
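
Hoisting xdr_off out of the loop is the substantive fix in send_reply(): the offset has to accumulate across sge entries (the surrounding code, outside this hunk, advances it by sge_bytes per pass), whereas the old per-iteration "int xdr_off = 0;" mapped every sge from the start of the XDR buffer. A tiny demonstration of the difference:

#include <stdio.h>

int main(void)
{
    unsigned int sge_len[3] = { 100, 200, 50 };
    unsigned int xdr_off = 0;          /* correct: lives outside the loop */

    for (int i = 0; i < 3; i++) {
        printf("sge %d maps [%u, %u)\n", i, xdr_off, xdr_off + sge_len[i]);
        xdr_off += sge_len[i];
        /* buggy variant: declaring "int xdr_off = 0;" here would print
         * [0,100) [0,200) [0,50), i.e. overlapping mappings */
    }
    return 0;
}
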
9539 diff --git a/sound/arm/Kconfig b/sound/arm/Kconfig
9540 index 885683a3b0bd..e0406211716b 100644
9541 --- a/sound/arm/Kconfig
9542 +++ b/sound/arm/Kconfig
9543 @@ -9,6 +9,14 @@ menuconfig SND_ARM
9544 Drivers that are implemented on ASoC can be found in
9545 "ALSA for SoC audio support" section.
9546
9547 +config SND_PXA2XX_LIB
9548 + tristate
9549 + select SND_AC97_CODEC if SND_PXA2XX_LIB_AC97
9550 + select SND_DMAENGINE_PCM
9551 +
9552 +config SND_PXA2XX_LIB_AC97
9553 + bool
9554 +
9555 if SND_ARM
9556
9557 config SND_ARMAACI
9558 @@ -21,13 +29,6 @@ config SND_PXA2XX_PCM
9559 tristate
9560 select SND_PCM
9561
9562 -config SND_PXA2XX_LIB
9563 - tristate
9564 - select SND_AC97_CODEC if SND_PXA2XX_LIB_AC97
9565 -
9566 -config SND_PXA2XX_LIB_AC97
9567 - bool
9568 -
9569 config SND_PXA2XX_AC97
9570 tristate "AC97 driver for the Intel PXA2xx chip"
9571 depends on ARCH_PXA
9572 diff --git a/sound/pci/hda/patch_cirrus.c b/sound/pci/hda/patch_cirrus.c
9573 index e5dac8ea65e4..1b3b38d025fc 100644
9574 --- a/sound/pci/hda/patch_cirrus.c
9575 +++ b/sound/pci/hda/patch_cirrus.c
9576 @@ -634,6 +634,7 @@ static const struct snd_pci_quirk cs4208_mac_fixup_tbl[] = {
9577 SND_PCI_QUIRK(0x106b, 0x5e00, "MacBookPro 11,2", CS4208_MBP11),
9578 SND_PCI_QUIRK(0x106b, 0x7100, "MacBookAir 6,1", CS4208_MBA6),
9579 SND_PCI_QUIRK(0x106b, 0x7200, "MacBookAir 6,2", CS4208_MBA6),
9580 + SND_PCI_QUIRK(0x106b, 0x7b00, "MacBookPro 12,1", CS4208_MBP11),
9581 {} /* terminator */
9582 };
9583
9584 diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
9585 index f979293b421a..d36cdb27a02c 100644
9586 --- a/sound/pci/hda/patch_realtek.c
9587 +++ b/sound/pci/hda/patch_realtek.c
9588 @@ -1131,7 +1131,7 @@ static const struct hda_fixup alc880_fixups[] = {
9589 /* override all pins as BIOS on old Amilo is broken */
9590 .type = HDA_FIXUP_PINS,
9591 .v.pins = (const struct hda_pintbl[]) {
9592 - { 0x14, 0x0121411f }, /* HP */
9593 + { 0x14, 0x0121401f }, /* HP */
9594 { 0x15, 0x99030120 }, /* speaker */
9595 { 0x16, 0x99030130 }, /* bass speaker */
9596 { 0x17, 0x411111f0 }, /* N/A */
9597 @@ -1151,7 +1151,7 @@ static const struct hda_fixup alc880_fixups[] = {
9598 /* almost compatible with FUJITSU, but no bass and SPDIF */
9599 .type = HDA_FIXUP_PINS,
9600 .v.pins = (const struct hda_pintbl[]) {
9601 - { 0x14, 0x0121411f }, /* HP */
9602 + { 0x14, 0x0121401f }, /* HP */
9603 { 0x15, 0x99030120 }, /* speaker */
9604 { 0x16, 0x411111f0 }, /* N/A */
9605 { 0x17, 0x411111f0 }, /* N/A */
9606 @@ -1360,7 +1360,7 @@ static const struct snd_pci_quirk alc880_fixup_tbl[] = {
9607 SND_PCI_QUIRK(0x161f, 0x203d, "W810", ALC880_FIXUP_W810),
9608 SND_PCI_QUIRK(0x161f, 0x205d, "Medion Rim 2150", ALC880_FIXUP_MEDION_RIM),
9609 SND_PCI_QUIRK(0x1631, 0xe011, "PB 13201056", ALC880_FIXUP_6ST_AUTOMUTE),
9610 - SND_PCI_QUIRK(0x1734, 0x107c, "FSC F1734", ALC880_FIXUP_F1734),
9611 + SND_PCI_QUIRK(0x1734, 0x107c, "FSC Amilo M1437", ALC880_FIXUP_FUJITSU),
9612 SND_PCI_QUIRK(0x1734, 0x1094, "FSC Amilo M1451G", ALC880_FIXUP_FUJITSU),
9613 SND_PCI_QUIRK(0x1734, 0x10ac, "FSC AMILO Xi 1526", ALC880_FIXUP_F1734),
9614 SND_PCI_QUIRK(0x1734, 0x10b0, "FSC Amilo Pi1556", ALC880_FIXUP_FUJITSU),
9615 @@ -5050,6 +5050,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
9616 SND_PCI_QUIRK(0x17aa, 0x2212, "Thinkpad T440", ALC292_FIXUP_TPT440_DOCK),
9617 SND_PCI_QUIRK(0x17aa, 0x2214, "Thinkpad X240", ALC292_FIXUP_TPT440_DOCK),
9618 SND_PCI_QUIRK(0x17aa, 0x2215, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
9619 + SND_PCI_QUIRK(0x17aa, 0x2223, "ThinkPad T550", ALC292_FIXUP_TPT440_DOCK),
9620 SND_PCI_QUIRK(0x17aa, 0x2226, "ThinkPad X250", ALC292_FIXUP_TPT440_DOCK),
9621 SND_PCI_QUIRK(0x17aa, 0x3977, "IdeaPad S210", ALC283_FIXUP_INT_MIC),
9622 SND_PCI_QUIRK(0x17aa, 0x3978, "IdeaPad Y410P", ALC269_FIXUP_NO_SHUTUP),
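
The two Amilo fixups above change the HP pin default from 0x0121411f to 0x0121401f. Reading the dword by the usual HDA pin-config layout (31:30 connectivity, 29:24 location, 23:20 device, 19:16 connection, 15:12 color, 11:8 misc, 7:4 association, 3:0 sequence), the only difference is the misc nibble, whose bit 8 is the "no presence detect" override, so the new value appears to re-enable headphone jack detection; this decode is an interpretation of the spec, not stated in the patch itself. A small runnable decoder:

#include <stdio.h>

static void decode(unsigned int cfg)
{
    printf("0x%08x: dev=%x color=%x misc=%x assoc=%x seq=%x\n",
           cfg,
           (cfg >> 20) & 0xf,   /* 0x2 = HP out */
           (cfg >> 12) & 0xf,   /* 0x4 = green */
           (cfg >>  8) & 0xf,   /* bit 8 set = no presence detect */
           (cfg >>  4) & 0xf,
           cfg & 0xf);
}

int main(void)
{
    decode(0x0121411f);  /* old value: misc=1, jack detect overridden */
    decode(0x0121401f);  /* new value: misc=0, jack detect enabled */
    return 0;
}
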
9623 diff --git a/sound/soc/dwc/designware_i2s.c b/sound/soc/dwc/designware_i2s.c
9624 index 10e1b8ca42ed..8086f3da5de8 100644
9625 --- a/sound/soc/dwc/designware_i2s.c
9626 +++ b/sound/soc/dwc/designware_i2s.c
9627 @@ -100,10 +100,10 @@ static inline void i2s_clear_irqs(struct dw_i2s_dev *dev, u32 stream)
9628
9629 if (stream == SNDRV_PCM_STREAM_PLAYBACK) {
9630 for (i = 0; i < 4; i++)
9631 - i2s_write_reg(dev->i2s_base, TOR(i), 0);
9632 + i2s_read_reg(dev->i2s_base, TOR(i));
9633 } else {
9634 for (i = 0; i < 4; i++)
9635 - i2s_write_reg(dev->i2s_base, ROR(i), 0);
9636 + i2s_read_reg(dev->i2s_base, ROR(i));
9637 }
9638 }
9639
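
The designware_i2s change swaps writes for reads because the TOR/ROR overrun registers on this block are cleared by reading them; the previous write was a no-op, leaving the interrupt asserted. A stand-in model of a read-to-clear register:

#include <stdio.h>

struct i2s_regs { unsigned int ror; };

/* a read-to-clear register returns its status and clears it as a
 * side effect of the read */
static unsigned int read_ror(struct i2s_regs *r)
{
    unsigned int v = r->ror;
    r->ror = 0;
    return v;
}

int main(void)
{
    struct i2s_regs regs = { .ror = 1 };     /* overrun latched */
    (void)read_ror(&regs);                   /* acknowledge by reading */
    printf("ror after ack: %u\n", regs.ror); /* prints 0 */
    return 0;
}
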
9640 diff --git a/sound/soc/pxa/Kconfig b/sound/soc/pxa/Kconfig
9641 index 2434b6d61675..e1f501b46c9d 100644
9642 --- a/sound/soc/pxa/Kconfig
9643 +++ b/sound/soc/pxa/Kconfig
9644 @@ -1,7 +1,6 @@
9645 config SND_PXA2XX_SOC
9646 tristate "SoC Audio for the Intel PXA2xx chip"
9647 depends on ARCH_PXA
9648 - select SND_ARM
9649 select SND_PXA2XX_LIB
9650 help
9651 Say Y or M if you want to add support for codecs attached to
9652 @@ -25,7 +24,6 @@ config SND_PXA2XX_AC97
9653 config SND_PXA2XX_SOC_AC97
9654 tristate
9655 select AC97_BUS
9656 - select SND_ARM
9657 select SND_PXA2XX_LIB_AC97
9658 select SND_SOC_AC97_BUS
9659
9660 diff --git a/sound/soc/pxa/pxa2xx-ac97.c b/sound/soc/pxa/pxa2xx-ac97.c
9661 index ae956e3f4b9d..593e3202fc35 100644
9662 --- a/sound/soc/pxa/pxa2xx-ac97.c
9663 +++ b/sound/soc/pxa/pxa2xx-ac97.c
9664 @@ -49,7 +49,7 @@ static struct snd_ac97_bus_ops pxa2xx_ac97_ops = {
9665 .reset = pxa2xx_ac97_cold_reset,
9666 };
9667
9668 -static unsigned long pxa2xx_ac97_pcm_stereo_in_req = 12;
9669 +static unsigned long pxa2xx_ac97_pcm_stereo_in_req = 11;
9670 static struct snd_dmaengine_dai_dma_data pxa2xx_ac97_pcm_stereo_in = {
9671 .addr = __PREG(PCDR),
9672 .addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES,
9673 @@ -57,7 +57,7 @@ static struct snd_dmaengine_dai_dma_data pxa2xx_ac97_pcm_stereo_in = {
9674 .filter_data = &pxa2xx_ac97_pcm_stereo_in_req,
9675 };
9676
9677 -static unsigned long pxa2xx_ac97_pcm_stereo_out_req = 11;
9678 +static unsigned long pxa2xx_ac97_pcm_stereo_out_req = 12;
9679 static struct snd_dmaengine_dai_dma_data pxa2xx_ac97_pcm_stereo_out = {
9680 .addr = __PREG(PCDR),
9681 .addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES,
9682 diff --git a/sound/synth/emux/emux_oss.c b/sound/synth/emux/emux_oss.c
9683 index daf61abc3670..646b66703bd8 100644
9684 --- a/sound/synth/emux/emux_oss.c
9685 +++ b/sound/synth/emux/emux_oss.c
9686 @@ -69,7 +69,8 @@ snd_emux_init_seq_oss(struct snd_emux *emu)
9687 struct snd_seq_oss_reg *arg;
9688 struct snd_seq_device *dev;
9689
9690 - if (snd_seq_device_new(emu->card, 0, SNDRV_SEQ_DEV_ID_OSS,
9691 + /* using device#1 here for avoiding conflicts with OPL3 */
9692 + if (snd_seq_device_new(emu->card, 1, SNDRV_SEQ_DEV_ID_OSS,
9693 sizeof(struct snd_seq_oss_reg), &dev) < 0)
9694 return;
9695
9696 diff --git a/tools/lib/traceevent/event-parse.c b/tools/lib/traceevent/event-parse.c
9697 index cf3a44bf1ec3..dfb8be78ff75 100644
9698 --- a/tools/lib/traceevent/event-parse.c
9699 +++ b/tools/lib/traceevent/event-parse.c
9700 @@ -3658,7 +3658,7 @@ static void print_str_arg(struct trace_seq *s, void *data, int size,
9701 struct format_field *field;
9702 struct printk_map *printk;
9703 unsigned long long val, fval;
9704 - unsigned long addr;
9705 + unsigned long long addr;
9706 char *str;
9707 unsigned char *hex;
9708 int print;
9709 @@ -3691,13 +3691,30 @@ static void print_str_arg(struct trace_seq *s, void *data, int size,
9710 */
9711 if (!(field->flags & FIELD_IS_ARRAY) &&
9712 field->size == pevent->long_size) {
9713 - addr = *(unsigned long *)(data + field->offset);
9714 +
9715 + /* Handle heterogeneous recording and processing
9716 + * architectures
9717 + *
9718 + * CASE I:
9719 + * Traces recorded on 32-bit devices (32-bit
9720 + * addressing) and processed on 64-bit devices:
9721 + * In this case, only 32 bits should be read.
9722 + *
9723 + * CASE II:
9724 + * Traces recorded on 64 bit devices and processed
9725 + * on 32-bit devices:
9726 + * In this case, 64 bits must be read.
9727 + */
9728 + addr = (pevent->long_size == 8) ?
9729 + *(unsigned long long *)(data + field->offset) :
9730 + (unsigned long long)*(unsigned int *)(data + field->offset);
9731 +
9732 /* Check if it matches a print format */
9733 printk = find_printk(pevent, addr);
9734 if (printk)
9735 trace_seq_puts(s, printk->printk);
9736 else
9737 - trace_seq_printf(s, "%lx", addr);
9738 + trace_seq_printf(s, "%llx", addr);
9739 break;
9740 }
9741 str = malloc(len + 1);
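
The event-parse.c comment above carries the whole point: a pointer-sized field must be read at the width recorded in the trace (pevent->long_size), not the width of the host running the tool, which is why addr also widens to unsigned long long and prints with %llx. A stand-alone sketch of that CASE I/II read (little-endian assumed for the demo):

#include <stdio.h>
#include <string.h>

static unsigned long long read_addr(const void *data, int long_size)
{
    if (long_size == 8) {
        unsigned long long v;
        memcpy(&v, data, 8);            /* trace recorded on 64-bit */
        return v;
    } else {
        unsigned int v;
        memcpy(&v, data, 4);            /* 32-bit trace on any host */
        return v;
    }
}

int main(void)
{
    unsigned char raw[8] = { 0xef, 0xbe, 0xad, 0xde, 0xff, 0xff, 0, 0 };

    printf("%llx\n", read_addr(raw, 4)); /* deadbeef, not ffffdeadbeef */
    printf("%llx\n", read_addr(raw, 8));
    return 0;
}
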
9742 diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
9743 index 055ce9232c9e..66c9fc730a14 100644
9744 --- a/tools/perf/builtin-stat.c
9745 +++ b/tools/perf/builtin-stat.c
9746 @@ -1117,7 +1117,7 @@ static void abs_printout(int id, int nr, struct perf_evsel *evsel, double avg)
9747 static void print_aggr(char *prefix)
9748 {
9749 struct perf_evsel *counter;
9750 - int cpu, cpu2, s, s2, id, nr;
9751 + int cpu, s, s2, id, nr;
9752 double uval;
9753 u64 ena, run, val;
9754
9755 @@ -1130,8 +1130,7 @@ static void print_aggr(char *prefix)
9756 val = ena = run = 0;
9757 nr = 0;
9758 for (cpu = 0; cpu < perf_evsel__nr_cpus(counter); cpu++) {
9759 - cpu2 = perf_evsel__cpus(counter)->map[cpu];
9760 - s2 = aggr_get_id(evsel_list->cpus, cpu2);
9761 + s2 = aggr_get_id(perf_evsel__cpus(counter), cpu);
9762 if (s2 != id)
9763 continue;
9764 val += counter->counts->cpu[cpu].val;
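
The builtin-stat.c fix is an index-space correction: aggr_get_id() expects an index into the cpu map it is given, but the old code translated the counter's index into a raw cpu number and then used that number against evsel_list->cpus. Passing the counter's own map with its own index keeps both sides consistent. A hypothetical illustration with stand-in maps:

#include <stdio.h>

/* aggr_get_id(map, idx): idx indexes map; returns an aggregation id */
static int aggr_get_id(const int *map, int idx, const int *socket_of)
{
    return socket_of[map[idx]];
}

int main(void)
{
    int socket_of[8]  = { 0, 0, 0, 0, 1, 1, 1, 1 };
    int evsel_cpus[2] = { 4, 5 };       /* counter bound to cpus 4 and 5 */
    int list_cpus[2]  = { 4, 5 };       /* evsel_list map */

    int idx = 0;
    int cpu2 = evsel_cpus[idx];          /* cpu *number* 4 */
    /* buggy: a cpu number used as an index would overrun list_cpus[]:
     * aggr_get_id(list_cpus, cpu2, socket_of); */
    printf("%d\n", aggr_get_id(evsel_cpus, idx, socket_of)); /* prints 1 */
    (void)cpu2; (void)list_cpus;
    return 0;
}
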
9765 diff --git a/tools/perf/util/header.c b/tools/perf/util/header.c
9766 index 26f5b2fe5dc8..74caa262ace5 100644
9767 --- a/tools/perf/util/header.c
9768 +++ b/tools/perf/util/header.c
9769 @@ -1775,7 +1775,7 @@ static int process_nrcpus(struct perf_file_section *section __maybe_unused,
9770 if (ph->needs_swap)
9771 nr = bswap_32(nr);
9772
9773 - ph->env.nr_cpus_online = nr;
9774 + ph->env.nr_cpus_avail = nr;
9775
9776 ret = readn(fd, &nr, sizeof(nr));
9777 if (ret != sizeof(nr))
9778 @@ -1784,7 +1784,7 @@ static int process_nrcpus(struct perf_file_section *section __maybe_unused,
9779 if (ph->needs_swap)
9780 nr = bswap_32(nr);
9781
9782 - ph->env.nr_cpus_avail = nr;
9783 + ph->env.nr_cpus_online = nr;
9784 return 0;
9785 }
9786
9787 diff --git a/tools/perf/util/hist.c b/tools/perf/util/hist.c
9788 index 6e88b9e395df..06868c61f8dd 100644
9789 --- a/tools/perf/util/hist.c
9790 +++ b/tools/perf/util/hist.c
9791 @@ -150,6 +150,9 @@ void hists__calc_col_len(struct hists *hists, struct hist_entry *h)
9792 hists__new_col_len(hists, HISTC_LOCAL_WEIGHT, 12);
9793 hists__new_col_len(hists, HISTC_GLOBAL_WEIGHT, 12);
9794
9795 + if (h->srcline)
9796 + hists__new_col_len(hists, HISTC_SRCLINE, strlen(h->srcline));
9797 +
9798 if (h->transaction)
9799 hists__new_col_len(hists, HISTC_TRANSACTION,
9800 hist_entry__transaction_len());
9801 diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
9802 index fcaf06b40558..194300a08197 100644
9803 --- a/tools/perf/util/symbol-elf.c
9804 +++ b/tools/perf/util/symbol-elf.c
9805 @@ -1166,8 +1166,6 @@ out_close:
9806 static int kcore__init(struct kcore *kcore, char *filename, int elfclass,
9807 bool temp)
9808 {
9809 - GElf_Ehdr *ehdr;
9810 -
9811 kcore->elfclass = elfclass;
9812
9813 if (temp)
9814 @@ -1184,9 +1182,7 @@ static int kcore__init(struct kcore *kcore, char *filename, int elfclass,
9815 if (!gelf_newehdr(kcore->elf, elfclass))
9816 goto out_end;
9817
9818 - ehdr = gelf_getehdr(kcore->elf, &kcore->ehdr);
9819 - if (!ehdr)
9820 - goto out_end;
9821 + memset(&kcore->ehdr, 0, sizeof(GElf_Ehdr));
9822
9823 return 0;
9824
9825 @@ -1243,23 +1239,18 @@ static int kcore__copy_hdr(struct kcore *from, struct kcore *to, size_t count)
9826 static int kcore__add_phdr(struct kcore *kcore, int idx, off_t offset,
9827 u64 addr, u64 len)
9828 {
9829 - GElf_Phdr gphdr;
9830 - GElf_Phdr *phdr;
9831 -
9832 - phdr = gelf_getphdr(kcore->elf, idx, &gphdr);
9833 - if (!phdr)
9834 - return -1;
9835 -
9836 - phdr->p_type = PT_LOAD;
9837 - phdr->p_flags = PF_R | PF_W | PF_X;
9838 - phdr->p_offset = offset;
9839 - phdr->p_vaddr = addr;
9840 - phdr->p_paddr = 0;
9841 - phdr->p_filesz = len;
9842 - phdr->p_memsz = len;
9843 - phdr->p_align = page_size;
9844 -
9845 - if (!gelf_update_phdr(kcore->elf, idx, phdr))
9846 + GElf_Phdr phdr = {
9847 + .p_type = PT_LOAD,
9848 + .p_flags = PF_R | PF_W | PF_X,
9849 + .p_offset = offset,
9850 + .p_vaddr = addr,
9851 + .p_paddr = 0,
9852 + .p_filesz = len,
9853 + .p_memsz = len,
9854 + .p_align = page_size,
9855 + };
9856 +
9857 + if (!gelf_update_phdr(kcore->elf, idx, &phdr))
9858 return -1;
9859
9860 return 0;
9861 diff --git a/virt/kvm/eventfd.c b/virt/kvm/eventfd.c
9862 index b0fb390943c6..5a310caf4bbe 100644
9863 --- a/virt/kvm/eventfd.c
9864 +++ b/virt/kvm/eventfd.c
9865 @@ -775,40 +775,14 @@ static enum kvm_bus ioeventfd_bus_from_flags(__u32 flags)
9866 return KVM_MMIO_BUS;
9867 }
9868
9869 -static int
9870 -kvm_assign_ioeventfd(struct kvm *kvm, struct kvm_ioeventfd *args)
9871 +static int kvm_assign_ioeventfd_idx(struct kvm *kvm,
9872 + enum kvm_bus bus_idx,
9873 + struct kvm_ioeventfd *args)
9874 {
9875 - enum kvm_bus bus_idx;
9876 - struct _ioeventfd *p;
9877 - struct eventfd_ctx *eventfd;
9878 - int ret;
9879 -
9880 - bus_idx = ioeventfd_bus_from_flags(args->flags);
9881 - /* must be natural-word sized, or 0 to ignore length */
9882 - switch (args->len) {
9883 - case 0:
9884 - case 1:
9885 - case 2:
9886 - case 4:
9887 - case 8:
9888 - break;
9889 - default:
9890 - return -EINVAL;
9891 - }
9892 -
9893 - /* check for range overflow */
9894 - if (args->addr + args->len < args->addr)
9895 - return -EINVAL;
9896
9897 - /* check for extra flags that we don't understand */
9898 - if (args->flags & ~KVM_IOEVENTFD_VALID_FLAG_MASK)
9899 - return -EINVAL;
9900 -
9901 - /* ioeventfd with no length can't be combined with DATAMATCH */
9902 - if (!args->len &&
9903 - args->flags & (KVM_IOEVENTFD_FLAG_PIO |
9904 - KVM_IOEVENTFD_FLAG_DATAMATCH))
9905 - return -EINVAL;
9906 + struct eventfd_ctx *eventfd;
9907 + struct _ioeventfd *p;
9908 + int ret;
9909
9910 eventfd = eventfd_ctx_fdget(args->fd);
9911 if (IS_ERR(eventfd))
9912 @@ -847,16 +821,6 @@ kvm_assign_ioeventfd(struct kvm *kvm, struct kvm_ioeventfd *args)
9913 if (ret < 0)
9914 goto unlock_fail;
9915
9916 - /* When length is ignored, MMIO is also put on a separate bus, for
9917 - * faster lookups.
9918 - */
9919 - if (!args->len && !(args->flags & KVM_IOEVENTFD_FLAG_PIO)) {
9920 - ret = kvm_io_bus_register_dev(kvm, KVM_FAST_MMIO_BUS,
9921 - p->addr, 0, &p->dev);
9922 - if (ret < 0)
9923 - goto register_fail;
9924 - }
9925 -
9926 kvm->buses[bus_idx]->ioeventfd_count++;
9927 list_add_tail(&p->list, &kvm->ioeventfds);
9928
9929 @@ -864,8 +828,6 @@ kvm_assign_ioeventfd(struct kvm *kvm, struct kvm_ioeventfd *args)
9930
9931 return 0;
9932
9933 -register_fail:
9934 - kvm_io_bus_unregister_dev(kvm, bus_idx, &p->dev);
9935 unlock_fail:
9936 mutex_unlock(&kvm->slots_lock);
9937
9938 @@ -877,14 +839,13 @@ fail:
9939 }
9940
9941 static int
9942 -kvm_deassign_ioeventfd(struct kvm *kvm, struct kvm_ioeventfd *args)
9943 +kvm_deassign_ioeventfd_idx(struct kvm *kvm, enum kvm_bus bus_idx,
9944 + struct kvm_ioeventfd *args)
9945 {
9946 - enum kvm_bus bus_idx;
9947 struct _ioeventfd *p, *tmp;
9948 struct eventfd_ctx *eventfd;
9949 int ret = -ENOENT;
9950
9951 - bus_idx = ioeventfd_bus_from_flags(args->flags);
9952 eventfd = eventfd_ctx_fdget(args->fd);
9953 if (IS_ERR(eventfd))
9954 return PTR_ERR(eventfd);
9955 @@ -905,10 +866,6 @@ kvm_deassign_ioeventfd(struct kvm *kvm, struct kvm_ioeventfd *args)
9956 continue;
9957
9958 kvm_io_bus_unregister_dev(kvm, bus_idx, &p->dev);
9959 - if (!p->length) {
9960 - kvm_io_bus_unregister_dev(kvm, KVM_FAST_MMIO_BUS,
9961 - &p->dev);
9962 - }
9963 kvm->buses[bus_idx]->ioeventfd_count--;
9964 ioeventfd_release(p);
9965 ret = 0;
9966 @@ -922,6 +879,71 @@ kvm_deassign_ioeventfd(struct kvm *kvm, struct kvm_ioeventfd *args)
9967 return ret;
9968 }
9969
9970 +static int kvm_deassign_ioeventfd(struct kvm *kvm, struct kvm_ioeventfd *args)
9971 +{
9972 + enum kvm_bus bus_idx = ioeventfd_bus_from_flags(args->flags);
9973 + int ret = kvm_deassign_ioeventfd_idx(kvm, bus_idx, args);
9974 +
9975 + if (!args->len && bus_idx == KVM_MMIO_BUS)
9976 + kvm_deassign_ioeventfd_idx(kvm, KVM_FAST_MMIO_BUS, args);
9977 +
9978 + return ret;
9979 +}
9980 +
9981 +static int
9982 +kvm_assign_ioeventfd(struct kvm *kvm, struct kvm_ioeventfd *args)
9983 +{
9984 + enum kvm_bus bus_idx;
9985 + int ret;
9986 +
9987 + bus_idx = ioeventfd_bus_from_flags(args->flags);
9988 + /* must be natural-word sized, or 0 to ignore length */
9989 + switch (args->len) {
9990 + case 0:
9991 + case 1:
9992 + case 2:
9993 + case 4:
9994 + case 8:
9995 + break;
9996 + default:
9997 + return -EINVAL;
9998 + }
9999 +
10000 + /* check for range overflow */
10001 + if (args->addr + args->len < args->addr)
10002 + return -EINVAL;
10003 +
10004 + /* check for extra flags that we don't understand */
10005 + if (args->flags & ~KVM_IOEVENTFD_VALID_FLAG_MASK)
10006 + return -EINVAL;
10007 +
10008 + /* ioeventfd with no length can't be combined with DATAMATCH */
10009 + if (!args->len &&
10010 + args->flags & (KVM_IOEVENTFD_FLAG_PIO |
10011 + KVM_IOEVENTFD_FLAG_DATAMATCH))
10012 + return -EINVAL;
10013 +
10014 + ret = kvm_assign_ioeventfd_idx(kvm, bus_idx, args);
10015 + if (ret)
10016 + goto fail;
10017 +
10018 + /* When length is ignored, MMIO is also put on a separate bus, for
10019 + * faster lookups.
10020 + */
10021 + if (!args->len && bus_idx == KVM_MMIO_BUS) {
10022 + ret = kvm_assign_ioeventfd_idx(kvm, KVM_FAST_MMIO_BUS, args);
10023 + if (ret < 0)
10024 + goto fast_fail;
10025 + }
10026 +
10027 + return 0;
10028 +
10029 +fast_fail:
10030 + kvm_deassign_ioeventfd_idx(kvm, bus_idx, args);
10031 +fail:
10032 + return ret;
10033 +}
10034 +
10035 int
10036 kvm_ioeventfd(struct kvm *kvm, struct kvm_ioeventfd *args)
10037 {
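
The eventfd.c rework above factors the bus-specific work into *_idx() helpers so a wildcard-length MMIO ioeventfd can be registered on both KVM_MMIO_BUS and KVM_FAST_MMIO_BUS, with a failed second registration rolling back the first, and deassign mirroring the pair. A runnable sketch of the two-bus pattern with hypothetical stand-ins:

#include <stdio.h>

enum bus { BUS_MMIO, BUS_FAST_MMIO };

static int assign_idx(enum bus b)    { printf("reg bus %d\n", b); return 0; }
static void deassign_idx(enum bus b) { printf("unreg bus %d\n", b); }

static int assign(int len)
{
    int ret = assign_idx(BUS_MMIO);
    if (ret)
        return ret;

    if (len == 0) {                    /* wildcard length: fast path too */
        ret = assign_idx(BUS_FAST_MMIO);
        if (ret) {
            deassign_idx(BUS_MMIO);    /* roll back the first bus */
            return ret;
        }
    }
    return 0;
}

int main(void) { return assign(0); }
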
10038 diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
10039 index 4e52bb926374..329c3c91bb68 100644
10040 --- a/virt/kvm/kvm_main.c
10041 +++ b/virt/kvm/kvm_main.c
10042 @@ -2875,10 +2875,25 @@ static void kvm_io_bus_destroy(struct kvm_io_bus *bus)
10043 static inline int kvm_io_bus_cmp(const struct kvm_io_range *r1,
10044 const struct kvm_io_range *r2)
10045 {
10046 - if (r1->addr < r2->addr)
10047 + gpa_t addr1 = r1->addr;
10048 + gpa_t addr2 = r2->addr;
10049 +
10050 + if (addr1 < addr2)
10051 return -1;
10052 - if (r1->addr + r1->len > r2->addr + r2->len)
10053 +
10054 + /* If r2->len == 0, match the exact address. If r2->len != 0,
10055 + * accept any overlapping write. Any order is acceptable for
10056 + * overlapping ranges, because kvm_io_bus_get_first_dev ensures
10057 + * we process all of them.
10058 + */
10059 + if (r2->len) {
10060 + addr1 += r1->len;
10061 + addr2 += r2->len;
10062 + }
10063 +
10064 + if (addr1 > addr2)
10065 return 1;
10066 +
10067 return 0;
10068 }
10069
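
With the kvm_io_bus_cmp() change above, a registrant probed with len == 0 (the FAST_MMIO case) is matched purely on its start address, while nonzero-length probes still compare range ends; overlapping ranges may sort either way, and per the in-code comment kvm_io_bus_get_first_dev() walks all of them. A stand-alone rendering of that comparison with stand-in types:

#include <stdio.h>

struct range { unsigned long long addr, len; };

static int range_cmp(const struct range *r1, const struct range *r2)
{
    unsigned long long a1 = r1->addr, a2 = r2->addr;

    if (a1 < a2)
        return -1;
    if (r2->len) {            /* nonzero length: compare range ends too */
        a1 += r1->len;
        a2 += r2->len;
    }
    return a1 > a2 ? 1 : 0;
}

int main(void)
{
    struct range dev   = { 0x1000, 4 };
    struct range exact = { 0x1000, 0 };  /* zero length: address only */
    struct range whole = { 0x1000, 4 };  /* identical span */
    struct range later = { 0x2000, 4 };

    printf("%d\n", range_cmp(&dev, &exact)); /* 0: start address matches */
    printf("%d\n", range_cmp(&dev, &whole)); /* 0: span matches */
    printf("%d\n", range_cmp(&dev, &later)); /* -1: dev sorts first */
    return 0;
}
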