Magellan Linux

Contents of /trunk/kernel-alx/patches-4.19/0158-4.19.59-all-fixes.patch



Revision 3437
Fri Aug 2 11:48:04 2019 UTC (4 years, 9 months ago) by niro
File size: 151987 bytes
-linux-4.19.59
1 diff --git a/Documentation/ABI/testing/sysfs-class-net-qmi b/Documentation/ABI/testing/sysfs-class-net-qmi
2 index 7122d6264c49..c310db4ccbc2 100644
3 --- a/Documentation/ABI/testing/sysfs-class-net-qmi
4 +++ b/Documentation/ABI/testing/sysfs-class-net-qmi
5 @@ -29,7 +29,7 @@ Contact: Bjørn Mork <bjorn@mork.no>
6 Description:
7 Unsigned integer.
8
9 - Write a number ranging from 1 to 127 to add a qmap mux
10 + Write a number ranging from 1 to 254 to add a qmap mux
11 based network device, supported by recent Qualcomm based
12 modems.
13
14 @@ -46,5 +46,5 @@ Contact: Bjørn Mork <bjorn@mork.no>
15 Description:
16 Unsigned integer.
17
18 - Write a number ranging from 1 to 127 to delete a previously
19 + Write a number ranging from 1 to 254 to delete a previously
20 created qmap mux based network device.
21 diff --git a/Documentation/admin-guide/hw-vuln/index.rst b/Documentation/admin-guide/hw-vuln/index.rst
22 index ffc064c1ec68..49311f3da6f2 100644
23 --- a/Documentation/admin-guide/hw-vuln/index.rst
24 +++ b/Documentation/admin-guide/hw-vuln/index.rst
25 @@ -9,5 +9,6 @@ are configurable at compile, boot or run time.
26 .. toctree::
27 :maxdepth: 1
28
29 + spectre
30 l1tf
31 mds
32 diff --git a/Documentation/admin-guide/hw-vuln/spectre.rst b/Documentation/admin-guide/hw-vuln/spectre.rst
33 new file mode 100644
34 index 000000000000..25f3b2532198
35 --- /dev/null
36 +++ b/Documentation/admin-guide/hw-vuln/spectre.rst
37 @@ -0,0 +1,697 @@
38 +.. SPDX-License-Identifier: GPL-2.0
39 +
40 +Spectre Side Channels
41 +=====================
42 +
43 +Spectre is a class of side channel attacks that exploit branch prediction
44 +and speculative execution on modern CPUs to read memory, possibly
45 +bypassing access controls. Speculative execution side channel exploits
46 +do not modify memory but attempt to infer privileged data in memory.
47 +
48 +This document covers Spectre variant 1 and Spectre variant 2.
49 +
50 +Affected processors
51 +-------------------
52 +
53 +Speculative execution side channel methods affect a wide range of modern
54 +high performance processors, since most modern high speed processors
55 +use branch prediction and speculative execution.
56 +
57 +The following CPUs are vulnerable:
58 +
59 + - Intel Core, Atom, Pentium, and Xeon processors
60 +
61 + - AMD Phenom, EPYC, and Zen processors
62 +
63 + - IBM POWER and zSeries processors
64 +
65 + - Higher end ARM processors
66 +
67 + - Apple CPUs
68 +
69 + - Higher end MIPS CPUs
70 +
71 + - Likely most other high performance CPUs. Contact your CPU vendor for details.
72 +
73 +Whether a processor is affected or not can be read out from the Spectre
74 +vulnerability files in sysfs. See :ref:`spectre_sys_info`.
75 +
76 +Related CVEs
77 +------------
78 +
79 +The following CVE entries describe Spectre variants:
80 +
81 + ============= ======================= =================
82 + CVE-2017-5753 Bounds check bypass Spectre variant 1
83 + CVE-2017-5715 Branch target injection Spectre variant 2
84 + ============= ======================= =================
85 +
86 +Problem
87 +-------
88 +
89 +CPUs use speculative operations to improve performance. That may leave
90 +traces of memory accesses or computations in the processor's caches,
91 +buffers, and branch predictors. Malicious software may be able to
92 +influence the speculative execution paths, and then use the side effects
93 +of the speculative execution in the CPUs' caches and buffers to infer
94 +privileged data touched during the speculative execution.
95 +
96 +Spectre variant 1 attacks take advantage of speculative execution of
97 +conditional branches, while Spectre variant 2 attacks use speculative
98 +execution of indirect branches to leak privileged memory.
99 +See :ref:`[1] <spec_ref1>` :ref:`[5] <spec_ref5>` :ref:`[7] <spec_ref7>`
100 +:ref:`[10] <spec_ref10>` :ref:`[11] <spec_ref11>`.
101 +
102 +Spectre variant 1 (Bounds Check Bypass)
103 +---------------------------------------
104 +
105 +The bounds check bypass attack :ref:`[2] <spec_ref2>` takes advantage
106 +of speculative execution that bypasses conditional branch instructions
107 +used for memory access bounds checks (e.g. checking if the index of an
108 +array results in a memory access within a valid range). This results in
109 +memory accesses to invalid memory (with an out-of-bounds index) that are
110 +done speculatively before validation checks resolve. Such speculative
111 +memory accesses can leave side effects, creating side channels which
112 +leak information to the attacker.
113 +
114 +There are some extensions of Spectre variant 1 attacks for reading data
115 +over the network, see :ref:`[12] <spec_ref12>`. However such attacks
116 +are difficult, low bandwidth, fragile, and are considered low risk.
117 +
118 +Spectre variant 2 (Branch Target Injection)
119 +-------------------------------------------
120 +
121 +The branch target injection attack takes advantage of speculative
122 +execution of indirect branches :ref:`[3] <spec_ref3>`. The indirect
123 +branch predictors inside the processor used to guess the target of
124 +indirect branches can be influenced by an attacker, causing gadget code
125 +to be speculatively executed, thus exposing sensitive data touched by
126 +the victim. The side effects left in the CPU's caches during speculative
127 +execution can be measured to infer data values.
128 +
129 +.. _poison_btb:
130 +
131 +In Spectre variant 2 attacks, the attacker can steer speculative indirect
132 +branches in the victim to gadget code by poisoning the branch target
133 +buffer of a CPU used for predicting indirect branch addresses. Such
134 +poisoning could be done by indirect branching into existing code,
135 +with the address offset of the indirect branch under the attacker's
136 +control. Since the branch prediction on impacted hardware does not
137 +fully disambiguate branch address and uses the offset for prediction,
138 +this could cause privileged code's indirect branch to jump to gadget
139 +code with the same offset.
140 +
141 +The most useful gadgets take an attacker-controlled input parameter (such
142 +as a register value) so that the memory read can be controlled. Gadgets
143 +without input parameters might be possible, but the attacker would have
144 +very little control over what memory can be read, reducing the risk of
145 +the attack revealing useful data.
146 +
147 +One other variant 2 attack vector is for the attacker to poison the
148 +return stack buffer (RSB) :ref:`[13] <spec_ref13>` to cause speculative
149 +subroutine return instruction execution to go to a gadget. An attacker's
150 +imbalanced subroutine call instructions might "poison" entries in the
151 +return stack buffer which are later consumed by a victim's subroutine
152 +return instructions. This attack can be mitigated by flushing the return
153 +stack buffer on context switch, or virtual machine (VM) exit.
154 +
155 +On systems with simultaneous multi-threading (SMT), attacks are possible
156 +from the sibling thread, as level 1 cache and branch target buffer
157 +(BTB) may be shared between hardware threads in a CPU core. A malicious
158 +program running on the sibling thread may influence its peer's BTB to
159 +steer its indirect branch speculations to gadget code, and measure the
160 +speculative execution's side effects left in level 1 cache to infer the
161 +victim's data.
162 +
163 +Attack scenarios
164 +----------------
165 +
166 +The following list of attack scenarios has been anticipated, but may
167 +not cover all possible attack vectors.
168 +
169 +1. A user process attacking the kernel
170 +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
171 +
172 + The attacker passes a parameter to the kernel via a register or
173 + via a known address in memory during a syscall. Such a parameter may
174 + be used later by the kernel as an index to an array or to derive
175 + a pointer for a Spectre variant 1 attack. The index or pointer
176 + is invalid, but bounds checks are bypassed in the code branch taken
177 + for speculative execution. This could cause privileged memory to be
178 + accessed and leaked.
179 +
180 + For kernel code where data pointers have been identified as
181 + potentially attacker-influenced for Spectre attacks, new "nospec"
182 + accessor macros are used to prevent speculative loading of data.
183 +
184 + A Spectre variant 2 attacker can :ref:`poison <poison_btb>` the
185 + branch target buffer (BTB) before issuing a syscall to launch an
186 + attack. After entering the kernel, the kernel could consume the
187 + poisoned branch target buffer entries on an indirect jump and
188 + speculatively execute gadget code.
189 +
190 + If an attacker tries to control the memory addresses leaked during
191 + speculative execution, they would also need to pass a parameter to
192 + the gadget, either through a register or a known address in memory.
193 + After the gadget has executed, they can measure the side effects.
194 +
195 + The kernel can protect itself against consuming poisoned branch
196 + target buffer entries by using return trampolines (also known as
197 + "retpoline") :ref:`[3] <spec_ref3>` :ref:`[9] <spec_ref9>` for all
198 + indirect branches. Return trampolines trap speculative execution paths
199 + to prevent jumping to gadget code during speculative execution.
200 + x86 CPUs with Enhanced Indirect Branch Restricted Speculation
201 + (Enhanced IBRS) available in hardware should use the feature to
202 + mitigate Spectre variant 2 instead of retpoline. Enhanced IBRS is
203 + more efficient than retpoline.
204 +
205 + There may be gadget code in firmware which could be exploited with
206 + Spectre variant 2 attack by a rogue user process. To mitigate such
207 + attacks on x86, Indirect Branch Restricted Speculation (IBRS) feature
208 + is turned on before the kernel invokes any firmware code.
209 +
210 +2. A user process attacking another user process
211 +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
212 +
213 + A malicious user process can try to attack another user process,
214 + either via a context switch on the same hardware thread, or from the
215 + sibling hyperthread sharing a physical processor core on a simultaneous
216 + multi-threading (SMT) system.
217 +
218 + Spectre variant 1 attacks generally require passing parameters
219 + between the processes, which needs a data passing relationship, such
220 + as remote procedure calls (RPC). Those parameters are used in gadget
221 + code to derive invalid data pointers accessing privileged memory in
222 + the attacked process.
223 +
224 + Spectre variant 2 attacks can be launched from a rogue process by
225 + :ref:`poisoning <poison_btb>` the branch target buffer. This can
226 + influence the indirect branch targets for a victim process that either
227 + runs later on the same hardware thread, or runs concurrently on
228 + a sibling hardware thread sharing the same physical core.
229 +
230 + A user process can protect itself against Spectre variant 2 attacks
231 + by using the prctl() syscall to disable indirect branch speculation
232 + for itself. An administrator can also cordon off an unsafe process
233 + from polluting the branch target buffer by disabling the process's
234 + indirect branch speculation. This comes with a performance cost
235 + from not using indirect branch speculation and clearing the branch
236 + target buffer. When SMT is enabled on x86, for a process that has
237 + indirect branch speculation disabled, Single Threaded Indirect Branch
238 + Predictors (STIBP) :ref:`[4] <spec_ref4>` are turned on to prevent the
239 + sibling thread from controlling the branch target buffer. In addition,
240 + the Indirect Branch Prediction Barrier (IBPB) is issued to clear the
241 + branch target buffer when context switching to and from such a process.
242 +
243 + On x86, the return stack buffer is stuffed on context switch.
244 + This prevents the branch target buffer from being used for branch
245 + prediction when the return stack buffer underflows while switching to
246 + a deeper call stack. Any poisoned entries in the return stack buffer
247 + left by the previous process will also be cleared.
248 +
249 + User programs should use address space randomization to make attacks
250 + more difficult (Set /proc/sys/kernel/randomize_va_space = 1 or 2).
251 +
252 +3. A virtualized guest attacking the host
253 +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
254 +
255 + The attack mechanism is similar to how user processes attack the
256 + kernel. The kernel is entered via hyper-calls or other virtualization
257 + exit paths.
258 +
259 + For Spectre variant 1 attacks, rogue guests can pass parameters
260 + (e.g. in registers) via hyper-calls to derive invalid pointers to
261 + speculate into privileged memory after entering the kernel. For places
262 + where such kernel code has been identified, nospec accessor macros
263 + are used to stop speculative memory access.
264 +
265 + For Spectre variant 2 attacks, rogue guests can :ref:`poison
266 + <poison_btb>` the branch target buffer or return stack buffer, causing
267 + the kernel to jump to gadget code in the speculative execution paths.
268 +
269 + To mitigate variant 2, the host kernel can use return trampolines
270 + for indirect branches to bypass the poisoned branch target buffer,
271 + and flushing the return stack buffer on VM exit. This prevents rogue
272 + guests from affecting indirect branching in the host kernel.
273 +
274 + To protect host processes from rogue guests, host processes can have
275 + indirect branch speculation disabled via prctl(). The branch target
276 + buffer is cleared before context switching to such processes.
277 +
278 +4. A virtualized guest attacking another guest
279 +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
280 +
281 + A rogue guest may attack another guest to get data accessible by the
282 + other guest.
283 +
284 + Spectre variant 1 attacks are possible if parameters can be passed
285 + between guests. This may be done via mechanisms such as shared memory
286 + or message passing. Such parameters could be used to derive data
287 + pointers to privileged data in the guest. The privileged data could be
288 + accessed by gadget code in the victim's speculation paths.
289 +
290 + Spectre variant 2 attacks can be launched from a rogue guest by
291 + :ref:`poisoning <poison_btb>` the branch target buffer or the return
292 + stack buffer. Such poisoned entries could be used to influence
293 + speculative execution paths in the victim guest.
294 +
295 + The Linux kernel mitigates attacks on other guests running in the same
296 + CPU hardware thread by flushing the return stack buffer on VM exit,
297 + and clearing the branch target buffer before switching to a new guest.
298 +
299 + If SMT is used, Spectre variant 2 attacks from an untrusted guest
300 + in the sibling hyperthread can be mitigated by the administrator,
301 + by turning off the unsafe guest's indirect branch speculation via
302 + prctl(). A guest can also protect itself by turning on microcode
303 + based mitigations (such as IBPB or STIBP on x86) within the guest.
304 +
305 +.. _spectre_sys_info:
306 +
307 +Spectre system information
308 +--------------------------
309 +
310 +The Linux kernel provides a sysfs interface to enumerate the current
311 +mitigation status of the system for Spectre: whether the system is
312 +vulnerable, and which mitigations are active.
313 +
314 +The sysfs file showing Spectre variant 1 mitigation status is:
315 +
316 + /sys/devices/system/cpu/vulnerabilities/spectre_v1
317 +
318 +The possible values in this file are:
319 +
320 + ======================================= =================================
321 + 'Mitigation: __user pointer sanitation' Protection in kernel on a case by
322 + case basis with explicit pointer
323 + sanitation.
324 + ======================================= =================================
325 +
326 +However, the protections are put in place on a case by case basis,
327 +and there is no guarantee that all possible attack vectors for Spectre
328 +variant 1 are covered.
329 +
330 +The spectre_v2 sysfs file reports whether the kernel has been compiled
331 +retpoline mitigation or if the CPU has hardware mitigation, and if the
332 +CPU has support for additional process-specific mitigation.
333 +
334 +This file also reports CPU features enabled by microcode to mitigate
335 +attacks between user processes:
336 +
337 +1. Indirect Branch Prediction Barrier (IBPB) to add additional
338 + isolation between processes of different users.
339 +2. Single Thread Indirect Branch Predictors (STIBP) to add additional
340 + isolation between CPU threads running on the same core.
341 +
342 +These CPU features may impact performance when used and can be enabled
343 +per process on a case-by-case basis.
344 +
345 +The sysfs file showing Spectre variant 2 mitigation status is:
346 +
347 + /sys/devices/system/cpu/vulnerabilities/spectre_v2
348 +
349 +The possible values in this file are:
350 +
351 + - Kernel status:
352 +
353 + ==================================== =================================
354 + 'Not affected' The processor is not vulnerable
355 + 'Vulnerable' Vulnerable, no mitigation
356 + 'Mitigation: Full generic retpoline' Software-focused mitigation
357 + 'Mitigation: Full AMD retpoline' AMD-specific software mitigation
358 + 'Mitigation: Enhanced IBRS' Hardware-focused mitigation
359 + ==================================== =================================
360 +
361 + - Firmware status: Show if Indirect Branch Restricted Speculation (IBRS) is
362 + used to protect against Spectre variant 2 attacks when calling firmware (x86 only).
363 +
364 + ========== =============================================================
365 + 'IBRS_FW' Protection against user program attacks when calling firmware
366 + ========== =============================================================
367 +
368 + - Indirect branch prediction barrier (IBPB) status for protection between
369 + processes of different users. This feature can be controlled through
370 + prctl() per process, or through kernel command line options. This is
371 + an x86 only feature. For more details see below.
372 +
373 + =================== ========================================================
374 + 'IBPB: disabled' IBPB unused
375 + 'IBPB: always-on' Use IBPB on all tasks
376 + 'IBPB: conditional' Use IBPB on SECCOMP or indirect branch restricted tasks
377 + =================== ========================================================
378 +
379 + - Single threaded indirect branch prediction (STIBP) status for protection
380 + between different hyper threads. This feature can be controlled through
381 + prctl per process, or through kernel command line options. This is an
382 + only feature. For more details see below.
383 +
384 + ==================== ========================================================
385 + 'STIBP: disabled' STIBP unused
386 + 'STIBP: forced' Use STIBP on all tasks
387 + 'STIBP: conditional' Use STIBP on SECCOMP or indirect branch restricted tasks
388 + ==================== ========================================================
389 +
390 + - Return stack buffer (RSB) protection status:
391 +
392 + ============= ===========================================
393 + 'RSB filling' Protection of RSB on context switch enabled
394 + ============= ===========================================
395 +
396 +Full mitigation might require a microcode update from the CPU
397 +vendor. When the necessary microcode is not available, the kernel will
398 +report the system as vulnerable.
399 +
400 +Turning on mitigation for Spectre variant 1 and Spectre variant 2
401 +-----------------------------------------------------------------
402 +
403 +1. Kernel mitigation
404 +^^^^^^^^^^^^^^^^^^^^
405 +
406 + For Spectre variant 1, vulnerable kernel code (as determined
407 + by code audit or scanning tools) is annotated on a case by case
408 + basis to use nospec accessor macros for bounds clipping :ref:`[2]
409 + <spec_ref2>` to avoid any usable disclosure gadgets. However, it may
410 + not cover all attack vectors for Spectre variant 1.
411 +
412 + For Spectre variant 2 mitigation, the compiler turns indirect calls or
413 + jumps in the kernel into equivalent return trampolines (retpolines)
414 + :ref:`[3] <spec_ref3>` :ref:`[9] <spec_ref9>` to go to the target
415 + addresses. Speculative execution paths under retpolines are trapped
416 + in an infinite loop to prevent any speculative execution jumping to
417 + a gadget.
418 +
419 + To turn on retpoline mitigation on a vulnerable CPU, the kernel
420 + needs to be compiled with a gcc compiler that supports the
421 + -mindirect-branch=thunk-extern -mindirect-branch-register options.
422 + If the kernel is compiled with a Clang compiler, the compiler needs
423 + to support -mretpoline-external-thunk option. The kernel config
424 + CONFIG_RETPOLINE needs to be turned on, and the CPU needs to run with
425 + the latest updated microcode.
426 +
427 + On Intel Skylake-era systems the mitigation covers most, but not all,
428 + cases. See :ref:`[3] <spec_ref3>` for more details.
429 +
430 + On CPUs with hardware mitigation for Spectre variant 2 (e.g. Enhanced
431 + IBRS on x86), retpoline is automatically disabled at run time.
432 +
433 + The retpoline mitigation is turned on by default on vulnerable
434 + CPUs. It can be forced on or off by the administrator
435 + via the kernel command line and sysfs control files. See
436 + :ref:`spectre_mitigation_control_command_line`.
437 +
438 + On x86, indirect branch restricted speculation is turned on by default
439 + before invoking any firmware code to prevent Spectre variant 2 exploits
440 + using the firmware.
441 +
442 + Using kernel address space randomization (CONFIG_RANDOMIZE_BASE=y
443 + and CONFIG_SLAB_FREELIST_RANDOM=y in the kernel configuration) makes
444 + attacks on the kernel generally more difficult.
445 +
446 +2. User program mitigation
447 +^^^^^^^^^^^^^^^^^^^^^^^^^^
448 +
449 + User programs can mitigate Spectre variant 1 using LFENCE or "bounds
450 + clipping". For more details see :ref:`[2] <spec_ref2>`.
451 +
452 + For Spectre variant 2 mitigation, individual user programs
453 + can be compiled with return trampolines for indirect branches.
454 + This protects them from consuming poisoned entries in the branch
455 + target buffer left by malicious software. Alternatively, the
456 + programs can disable their indirect branch speculation via prctl()
457 + (See :ref:`Documentation/userspace-api/spec_ctrl.rst <set_spec_ctrl>`).
458 + On x86, this will turn on STIBP to guard against attacks from the
459 + sibling thread when the user program is running, and use IBPB to
460 + flush the branch target buffer when switching to/from the program.
461 +
462 + Restricting indirect branch speculation on a user program will
463 + also prevent the program from launching a variant 2 attack
464 + on x86. All sandboxed SECCOMP programs have indirect branch
465 + speculation restricted by default. Administrators can change
466 + that behavior via the kernel command line and sysfs control files.
467 + See :ref:`spectre_mitigation_control_command_line`.
468 +
469 + Programs that disable their indirect branch speculation will have
470 + more overhead and run slower.
471 +
472 + User programs should use address space randomization
473 + (/proc/sys/kernel/randomize_va_space = 1 or 2) to make attacks more
474 + difficult.
475 +
476 +3. VM mitigation
477 +^^^^^^^^^^^^^^^^
478 +
479 + Within the kernel, Spectre variant 1 attacks from rogue guests are
480 + mitigated on a case by case basis in VM exit paths. Vulnerable code
481 + uses nospec accessor macros for "bounds clipping", to avoid any
482 + usable disclosure gadgets. However, this may not cover all variant
483 + 1 attack vectors.
484 +
485 + For Spectre variant 2 attacks from rogue guests to the kernel, the
486 + Linux kernel uses retpoline or Enhanced IBRS to prevent consumption of
487 + poisoned entries in branch target buffer left by rogue guests. It also
488 + flushes the return stack buffer on every VM exit. This prevents an
489 + RSB underflow (which would fall back to the poisoned branch target
490 + buffer) and clears poisoned RSB entries left by attacker guests.
491 +
492 + To mitigate guest-to-guest attacks in the same CPU hardware thread,
493 + the branch target buffer is sanitized by flushing before switching
494 + to a new guest on a CPU.
495 +
496 + The above mitigations are turned on by default on vulnerable CPUs.
497 +
498 + To mitigate guest-to-guest attacks from the sibling thread when SMT is
499 + in use, an untrusted guest running in the sibling thread can have its
500 + indirect branch speculation disabled by the administrator via prctl().
501 +
502 + The kernel also allows guests to use any microcode based mitigation
503 + they choose to use (such as IBPB or STIBP on x86) to protect themselves.
504 +
505 +.. _spectre_mitigation_control_command_line:
506 +
507 +Mitigation control on the kernel command line
508 +---------------------------------------------
509 +
510 +Spectre variant 2 mitigation can be disabled or force enabled at the
511 +kernel command line.
512 +
513 + nospectre_v2
514 +
515 + [X86] Disable all mitigations for the Spectre variant 2
516 + (indirect branch prediction) vulnerability. System may
517 + allow data leaks with this option, which is equivalent
518 + to spectre_v2=off.
519 +
520 +
521 + spectre_v2=
522 +
523 + [X86] Control mitigation of Spectre variant 2
524 + (indirect branch speculation) vulnerability.
525 + The default operation protects the kernel from
526 + user space attacks.
527 +
528 + on
529 + unconditionally enable, implies
530 + spectre_v2_user=on
531 + off
532 + unconditionally disable, implies
533 + spectre_v2_user=off
534 + auto
535 + kernel detects whether your CPU model is
536 + vulnerable
537 +
538 + Selecting 'on' will, and 'auto' may, choose a
539 + mitigation method at run time according to the
540 + CPU, the available microcode, the setting of the
541 + CONFIG_RETPOLINE configuration option, and the
542 + compiler with which the kernel was built.
543 +
544 + Selecting 'on' will also enable the mitigation
545 + against user space to user space task attacks.
546 +
547 + Selecting 'off' will disable both the kernel and
548 + the user space protections.
549 +
550 + Specific mitigations can also be selected manually:
551 +
552 + retpoline
553 + replace indirect branches
554 + retpoline,generic
555 + Google's original retpoline
556 + retpoline,amd
557 + AMD-specific minimal thunk
558 +
559 + Not specifying this option is equivalent to
560 + spectre_v2=auto.
561 +
562 +For user space mitigation:
563 +
564 + spectre_v2_user=
565 +
566 + [X86] Control mitigation of Spectre variant 2
567 + (indirect branch speculation) vulnerability between
568 + user space tasks
569 +
570 + on
571 + Unconditionally enable mitigations. Is
572 + enforced by spectre_v2=on
573 +
574 + off
575 + Unconditionally disable mitigations. Is
576 + enforced by spectre_v2=off
577 +
578 + prctl
579 + Indirect branch speculation is enabled,
580 + but mitigation can be enabled via prctl
581 + per thread. The mitigation control state
582 + is inherited on fork.
583 +
584 + prctl,ibpb
585 + Like "prctl" above, but only STIBP is
586 + controlled per thread. IBPB is issued
587 + always when switching between different user
588 + space processes.
589 +
590 + seccomp
591 + Same as "prctl" above, but all seccomp
592 + threads will enable the mitigation unless
593 + they explicitly opt out.
594 +
595 + seccomp,ibpb
596 + Like "seccomp" above, but only STIBP is
597 + controlled per thread. IBPB is issued
598 + always when switching between different
599 + user space processes.
600 +
601 + auto
602 + Kernel selects the mitigation depending on
603 + the available CPU features and vulnerability.
604 +
605 + Default mitigation:
606 + If CONFIG_SECCOMP=y then "seccomp", otherwise "prctl"
607 +
608 + Not specifying this option is equivalent to
609 + spectre_v2_user=auto.
610 +
611 + In general the kernel by default selects
612 + reasonable mitigations for the current CPU. To
613 + disable Spectre variant 2 mitigations, boot with
614 + spectre_v2=off. Spectre variant 1 mitigations
615 + cannot be disabled.
616 +
617 +Mitigation selection guide
618 +--------------------------
619 +
620 +1. Trusted userspace
621 +^^^^^^^^^^^^^^^^^^^^
622 +
623 + If all userspace applications are from trusted sources and do not
624 + execute externally supplied untrusted code, then the mitigations can
625 + be disabled.
626 +
627 +2. Protect sensitive programs
628 +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
629 +
630 + For security-sensitive programs that have secrets (e.g. crypto
631 + keys), protection against Spectre variant 2 can be put in place by
632 + disabling indirect branch speculation when the program is running
633 + (See :ref:`Documentation/userspace-api/spec_ctrl.rst <set_spec_ctrl>`).
634 +
635 +3. Sandbox untrusted programs
636 +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
637 +
638 + Untrusted programs that could be a source of attacks can be cordoned
639 + off by disabling their indirect branch speculation when they are run
640 + (See :ref:`Documentation/userspace-api/spec_ctrl.rst <set_spec_ctrl>`).
641 + This prevents untrusted programs from polluting the branch target
642 + buffer. All programs running in SECCOMP sandboxes have indirect
643 + branch speculation restricted by default. This behavior can be
644 + changed via the kernel command line and sysfs control files. See
645 + :ref:`spectre_mitigation_control_command_line`.
646 +
647 +4. High security mode
648 +^^^^^^^^^^^^^^^^^^^^^
649 +
650 + All Spectre variant 2 mitigations can be forced on
651 + at boot time for all programs (See the "on" option in
652 + :ref:`spectre_mitigation_control_command_line`). This will add
653 + overhead as indirect branch speculations for all programs will be
654 + restricted.
655 +
656 + On x86, the branch target buffer will be flushed with IBPB when switching
657 + to a new program. STIBP is left on all the time to protect programs
658 + against variant 2 attacks originating from programs running on
659 + sibling threads.
660 +
661 + Alternatively, STIBP can be used only when running programs
662 + whose indirect branch speculation is explicitly disabled,
663 + while IBPB is still used all the time when switching to a new
664 + program to clear the branch target buffer (See "ibpb" option in
665 + :ref:`spectre_mitigation_control_command_line`). This "ibpb" option
666 + has less performance cost than the "on" option, which leaves STIBP
667 + on all the time.
668 +
669 +References on Spectre
670 +---------------------
671 +
672 +Intel white papers:
673 +
674 +.. _spec_ref1:
675 +
676 +[1] `Intel analysis of speculative execution side channels <https://newsroom.intel.com/wp-content/uploads/sites/11/2018/01/Intel-Analysis-of-Speculative-Execution-Side-Channels.pdf>`_.
677 +
678 +.. _spec_ref2:
679 +
680 +[2] `Bounds check bypass <https://software.intel.com/security-software-guidance/software-guidance/bounds-check-bypass>`_.
681 +
682 +.. _spec_ref3:
683 +
684 +[3] `Deep dive: Retpoline: A branch target injection mitigation <https://software.intel.com/security-software-guidance/insights/deep-dive-retpoline-branch-target-injection-mitigation>`_.
685 +
686 +.. _spec_ref4:
687 +
688 +[4] `Deep Dive: Single Thread Indirect Branch Predictors <https://software.intel.com/security-software-guidance/insights/deep-dive-single-thread-indirect-branch-predictors>`_.
689 +
690 +AMD white papers:
691 +
692 +.. _spec_ref5:
693 +
694 +[5] `AMD64 technology indirect branch control extension <https://developer.amd.com/wp-content/resources/Architecture_Guidelines_Update_Indirect_Branch_Control.pdf>`_.
695 +
696 +.. _spec_ref6:
697 +
698 +[6] `Software techniques for managing speculation on AMD processors <https://developer.amd.com/wp-content/resources/90343-B_SoftwareTechniquesforManagingSpeculation_WP_7-18Update_FNL.pdf>`_.
699 +
700 +ARM white papers:
701 +
702 +.. _spec_ref7:
703 +
704 +[7] `Cache speculation side-channels <https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/download-the-whitepaper>`_.
705 +
706 +.. _spec_ref8:
707 +
708 +[8] `Cache speculation issues update <https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/latest-updates/cache-speculation-issues-update>`_.
709 +
710 +Google white paper:
711 +
712 +.. _spec_ref9:
713 +
714 +[9] `Retpoline: a software construct for preventing branch-target-injection <https://support.google.com/faqs/answer/7625886>`_.
715 +
716 +MIPS white paper:
717 +
718 +.. _spec_ref10:
719 +
720 +[10] `MIPS: response on speculative execution and side channel vulnerabilities <https://www.mips.com/blog/mips-response-on-speculative-execution-and-side-channel-vulnerabilities/>`_.
721 +
722 +Academic papers:
723 +
724 +.. _spec_ref11:
725 +
726 +[11] `Spectre Attacks: Exploiting Speculative Execution <https://spectreattack.com/spectre.pdf>`_.
727 +
728 +.. _spec_ref12:
729 +
730 +[12] `NetSpectre: Read Arbitrary Memory over Network <https://arxiv.org/abs/1807.10535>`_.
731 +
732 +.. _spec_ref13:
733 +
734 +[13] `Spectre Returns! Speculation Attacks using the Return Stack Buffer <https://www.usenix.org/system/files/conference/woot18/woot18-paper-koruyeh.pdf>`_.
735 diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
736 index a29301d6e6c6..1cee1174cde6 100644
737 --- a/Documentation/admin-guide/kernel-parameters.txt
738 +++ b/Documentation/admin-guide/kernel-parameters.txt
739 @@ -4976,12 +4976,6 @@
740 emulate [default] Vsyscalls turn into traps and are
741 emulated reasonably safely.
742
743 - native Vsyscalls are native syscall instructions.
744 - This is a little bit faster than trapping
745 - and makes a few dynamic recompilers work
746 - better than they would in emulation mode.
747 - It also makes exploits much easier to write.
748 -
749 none Vsyscalls don't work at all. This makes
750 them quite hard to use for exploits but
751 might break your system.
752 diff --git a/Documentation/devicetree/bindings/net/can/microchip,mcp251x.txt b/Documentation/devicetree/bindings/net/can/microchip,mcp251x.txt
753 index 188c8bd4eb67..5a0111d4de58 100644
754 --- a/Documentation/devicetree/bindings/net/can/microchip,mcp251x.txt
755 +++ b/Documentation/devicetree/bindings/net/can/microchip,mcp251x.txt
756 @@ -4,6 +4,7 @@ Required properties:
757 - compatible: Should be one of the following:
758 - "microchip,mcp2510" for MCP2510.
759 - "microchip,mcp2515" for MCP2515.
760 + - "microchip,mcp25625" for MCP25625.
761 - reg: SPI chip select.
762 - clocks: The clock feeding the CAN controller.
763 - interrupts: Should contain IRQ line for the CAN controller.
764 diff --git a/Documentation/userspace-api/spec_ctrl.rst b/Documentation/userspace-api/spec_ctrl.rst
765 index c4dbe6f7cdae..0fda8f614110 100644
766 --- a/Documentation/userspace-api/spec_ctrl.rst
767 +++ b/Documentation/userspace-api/spec_ctrl.rst
768 @@ -47,6 +47,8 @@ If PR_SPEC_PRCTL is set, then the per-task control of the mitigation is
769 available. If not set, prctl(PR_SET_SPECULATION_CTRL) for the speculation
770 misfeature will fail.
771
772 +.. _set_spec_ctrl:
773 +
774 PR_SET_SPECULATION_CTRL
775 -----------------------
776
777 diff --git a/Makefile b/Makefile
778 index 5dcd01cd1bf6..38f2150457fd 100644
779 --- a/Makefile
780 +++ b/Makefile
781 @@ -1,7 +1,7 @@
782 # SPDX-License-Identifier: GPL-2.0
783 VERSION = 4
784 PATCHLEVEL = 19
785 -SUBLEVEL = 58
786 +SUBLEVEL = 59
787 EXTRAVERSION =
788 NAME = "People's Front"
789
790 diff --git a/arch/arm/boot/dts/am335x-pcm-953.dtsi b/arch/arm/boot/dts/am335x-pcm-953.dtsi
791 index 1ec8e0d80191..572fbd254690 100644
792 --- a/arch/arm/boot/dts/am335x-pcm-953.dtsi
793 +++ b/arch/arm/boot/dts/am335x-pcm-953.dtsi
794 @@ -197,7 +197,7 @@
795 bus-width = <4>;
796 pinctrl-names = "default";
797 pinctrl-0 = <&mmc1_pins>;
798 - cd-gpios = <&gpio0 6 GPIO_ACTIVE_HIGH>;
799 + cd-gpios = <&gpio0 6 GPIO_ACTIVE_LOW>;
800 status = "okay";
801 };
802
803 diff --git a/arch/arm/boot/dts/am335x-wega.dtsi b/arch/arm/boot/dts/am335x-wega.dtsi
804 index 8ce541739b24..83e4fe595e37 100644
805 --- a/arch/arm/boot/dts/am335x-wega.dtsi
806 +++ b/arch/arm/boot/dts/am335x-wega.dtsi
807 @@ -157,7 +157,7 @@
808 bus-width = <4>;
809 pinctrl-names = "default";
810 pinctrl-0 = <&mmc1_pins>;
811 - cd-gpios = <&gpio0 6 GPIO_ACTIVE_HIGH>;
812 + cd-gpios = <&gpio0 6 GPIO_ACTIVE_LOW>;
813 status = "okay";
814 };
815
816 diff --git a/arch/arm/mach-davinci/board-da850-evm.c b/arch/arm/mach-davinci/board-da850-evm.c
817 index e1a949b47306..774a3e535ad0 100644
818 --- a/arch/arm/mach-davinci/board-da850-evm.c
819 +++ b/arch/arm/mach-davinci/board-da850-evm.c
820 @@ -1472,6 +1472,8 @@ static __init void da850_evm_init(void)
821 if (ret)
822 pr_warn("%s: dsp/rproc registration failed: %d\n",
823 __func__, ret);
824 +
825 + regulator_has_full_constraints();
826 }
827
828 #ifdef CONFIG_SERIAL_8250_CONSOLE
829 diff --git a/arch/arm/mach-davinci/devices-da8xx.c b/arch/arm/mach-davinci/devices-da8xx.c
830 index 1fd3619f6a09..3c42bf9fa061 100644
831 --- a/arch/arm/mach-davinci/devices-da8xx.c
832 +++ b/arch/arm/mach-davinci/devices-da8xx.c
833 @@ -685,6 +685,9 @@ static struct platform_device da8xx_lcdc_device = {
834 .id = 0,
835 .num_resources = ARRAY_SIZE(da8xx_lcdc_resources),
836 .resource = da8xx_lcdc_resources,
837 + .dev = {
838 + .coherent_dma_mask = DMA_BIT_MASK(32),
839 + }
840 };
841
842 int __init da8xx_register_lcdc(struct da8xx_lcdc_platform_data *pdata)
843 diff --git a/arch/mips/include/uapi/asm/sgidefs.h b/arch/mips/include/uapi/asm/sgidefs.h
844 index 26143e3b7c26..69c3de90c536 100644
845 --- a/arch/mips/include/uapi/asm/sgidefs.h
846 +++ b/arch/mips/include/uapi/asm/sgidefs.h
847 @@ -11,14 +11,6 @@
848 #ifndef __ASM_SGIDEFS_H
849 #define __ASM_SGIDEFS_H
850
851 -/*
852 - * Using a Linux compiler for building Linux seems logic but not to
853 - * everybody.
854 - */
855 -#ifndef __linux__
856 -#error Use a Linux compiler or give up.
857 -#endif
858 -
859 /*
860 * Definitions for the ISA levels
861 *
862 diff --git a/arch/riscv/lib/delay.c b/arch/riscv/lib/delay.c
863 index dce8ae24c6d3..ee6853c1e341 100644
864 --- a/arch/riscv/lib/delay.c
865 +++ b/arch/riscv/lib/delay.c
866 @@ -88,7 +88,7 @@ EXPORT_SYMBOL(__delay);
867
868 void udelay(unsigned long usecs)
869 {
870 - unsigned long ucycles = usecs * lpj_fine * UDELAY_MULT;
871 + u64 ucycles = (u64)usecs * lpj_fine * UDELAY_MULT;
872
873 if (unlikely(usecs > MAX_UDELAY_US)) {
874 __delay((u64)usecs * riscv_timebase / 1000000ULL);
875 diff --git a/arch/s390/Makefile b/arch/s390/Makefile
876 index ee65185bbc80..e6c2e8925fef 100644
877 --- a/arch/s390/Makefile
878 +++ b/arch/s390/Makefile
879 @@ -24,6 +24,7 @@ KBUILD_CFLAGS_DECOMPRESSOR += -DDISABLE_BRANCH_PROFILING -D__NO_FORTIFY
880 KBUILD_CFLAGS_DECOMPRESSOR += -fno-delete-null-pointer-checks -msoft-float
881 KBUILD_CFLAGS_DECOMPRESSOR += -fno-asynchronous-unwind-tables
882 KBUILD_CFLAGS_DECOMPRESSOR += $(call cc-option,-ffreestanding)
883 +KBUILD_CFLAGS_DECOMPRESSOR += $(call cc-disable-warning, address-of-packed-member)
884 KBUILD_CFLAGS_DECOMPRESSOR += $(if $(CONFIG_DEBUG_INFO),-g)
885 KBUILD_CFLAGS_DECOMPRESSOR += $(if $(CONFIG_DEBUG_INFO_DWARF4), $(call cc-option, -gdwarf-4,))
886 UTS_MACHINE := s390x
887 diff --git a/arch/x86/kernel/ptrace.c b/arch/x86/kernel/ptrace.c
888 index e2ee403865eb..aeba77881d85 100644
889 --- a/arch/x86/kernel/ptrace.c
890 +++ b/arch/x86/kernel/ptrace.c
891 @@ -24,6 +24,7 @@
892 #include <linux/rcupdate.h>
893 #include <linux/export.h>
894 #include <linux/context_tracking.h>
895 +#include <linux/nospec.h>
896
897 #include <linux/uaccess.h>
898 #include <asm/pgtable.h>
899 @@ -651,9 +652,11 @@ static unsigned long ptrace_get_debugreg(struct task_struct *tsk, int n)
900 {
901 struct thread_struct *thread = &tsk->thread;
902 unsigned long val = 0;
903 + int index = n;
904
905 if (n < HBP_NUM) {
906 - struct perf_event *bp = thread->ptrace_bps[n];
907 + struct perf_event *bp = thread->ptrace_bps[index];
908 + index = array_index_nospec(index, HBP_NUM);
909
910 if (bp)
911 val = bp->hw.info.address;
912 diff --git a/arch/x86/kernel/tls.c b/arch/x86/kernel/tls.c
913 index a5b802a12212..71d3fef1edc9 100644
914 --- a/arch/x86/kernel/tls.c
915 +++ b/arch/x86/kernel/tls.c
916 @@ -5,6 +5,7 @@
917 #include <linux/user.h>
918 #include <linux/regset.h>
919 #include <linux/syscalls.h>
920 +#include <linux/nospec.h>
921
922 #include <linux/uaccess.h>
923 #include <asm/desc.h>
924 @@ -220,6 +221,7 @@ int do_get_thread_area(struct task_struct *p, int idx,
925 struct user_desc __user *u_info)
926 {
927 struct user_desc info;
928 + int index;
929
930 if (idx == -1 && get_user(idx, &u_info->entry_number))
931 return -EFAULT;
932 @@ -227,8 +229,11 @@ int do_get_thread_area(struct task_struct *p, int idx,
933 if (idx < GDT_ENTRY_TLS_MIN || idx > GDT_ENTRY_TLS_MAX)
934 return -EINVAL;
935
936 - fill_user_desc(&info, idx,
937 - &p->thread.tls_array[idx - GDT_ENTRY_TLS_MIN]);
938 + index = idx - GDT_ENTRY_TLS_MIN;
939 + index = array_index_nospec(index,
940 + GDT_ENTRY_TLS_MAX - GDT_ENTRY_TLS_MIN + 1);
941 +
942 + fill_user_desc(&info, idx, &p->thread.tls_array[index]);
943
944 if (copy_to_user(u_info, &info, sizeof(info)))
945 return -EFAULT;
946 diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
947 index 2580cd2e98b1..a32fc3d99407 100644
948 --- a/arch/x86/net/bpf_jit_comp.c
949 +++ b/arch/x86/net/bpf_jit_comp.c
950 @@ -190,9 +190,7 @@ struct jit_context {
951 #define BPF_MAX_INSN_SIZE 128
952 #define BPF_INSN_SAFETY 64
953
954 -#define AUX_STACK_SPACE 40 /* Space for RBX, R13, R14, R15, tailcnt */
955 -
956 -#define PROLOGUE_SIZE 37
957 +#define PROLOGUE_SIZE 20
958
959 /*
960 * Emit x86-64 prologue code for BPF program and check its size.
961 @@ -203,44 +201,19 @@ static void emit_prologue(u8 **pprog, u32 stack_depth, bool ebpf_from_cbpf)
962 u8 *prog = *pprog;
963 int cnt = 0;
964
965 - /* push rbp */
966 - EMIT1(0x55);
967 -
968 - /* mov rbp,rsp */
969 - EMIT3(0x48, 0x89, 0xE5);
970 -
971 - /* sub rsp, rounded_stack_depth + AUX_STACK_SPACE */
972 - EMIT3_off32(0x48, 0x81, 0xEC,
973 - round_up(stack_depth, 8) + AUX_STACK_SPACE);
974 -
975 - /* sub rbp, AUX_STACK_SPACE */
976 - EMIT4(0x48, 0x83, 0xED, AUX_STACK_SPACE);
977 -
978 - /* mov qword ptr [rbp+0],rbx */
979 - EMIT4(0x48, 0x89, 0x5D, 0);
980 - /* mov qword ptr [rbp+8],r13 */
981 - EMIT4(0x4C, 0x89, 0x6D, 8);
982 - /* mov qword ptr [rbp+16],r14 */
983 - EMIT4(0x4C, 0x89, 0x75, 16);
984 - /* mov qword ptr [rbp+24],r15 */
985 - EMIT4(0x4C, 0x89, 0x7D, 24);
986 -
987 + EMIT1(0x55); /* push rbp */
988 + EMIT3(0x48, 0x89, 0xE5); /* mov rbp, rsp */
989 + /* sub rsp, rounded_stack_depth */
990 + EMIT3_off32(0x48, 0x81, 0xEC, round_up(stack_depth, 8));
991 + EMIT1(0x53); /* push rbx */
992 + EMIT2(0x41, 0x55); /* push r13 */
993 + EMIT2(0x41, 0x56); /* push r14 */
994 + EMIT2(0x41, 0x57); /* push r15 */
995 if (!ebpf_from_cbpf) {
996 - /*
997 - * Clear the tail call counter (tail_call_cnt): for eBPF tail
998 - * calls we need to reset the counter to 0. It's done in two
999 - * instructions, resetting RAX register to 0, and moving it
1000 - * to the counter location.
1001 - */
1002 -
1003 - /* xor eax, eax */
1004 - EMIT2(0x31, 0xc0);
1005 - /* mov qword ptr [rbp+32], rax */
1006 - EMIT4(0x48, 0x89, 0x45, 32);
1007 -
1008 + /* zero init tail_call_cnt */
1009 + EMIT2(0x6a, 0x00);
1010 BUILD_BUG_ON(cnt != PROLOGUE_SIZE);
1011 }
1012 -
1013 *pprog = prog;
1014 }
1015
1016 @@ -285,13 +258,13 @@ static void emit_bpf_tail_call(u8 **pprog)
1017 * if (tail_call_cnt > MAX_TAIL_CALL_CNT)
1018 * goto out;
1019 */
1020 - EMIT2_off32(0x8B, 0x85, 36); /* mov eax, dword ptr [rbp + 36] */
1021 + EMIT2_off32(0x8B, 0x85, -36 - MAX_BPF_STACK); /* mov eax, dword ptr [rbp - 548] */
1022 EMIT3(0x83, 0xF8, MAX_TAIL_CALL_CNT); /* cmp eax, MAX_TAIL_CALL_CNT */
1023 #define OFFSET2 (30 + RETPOLINE_RAX_BPF_JIT_SIZE)
1024 EMIT2(X86_JA, OFFSET2); /* ja out */
1025 label2 = cnt;
1026 EMIT3(0x83, 0xC0, 0x01); /* add eax, 1 */
1027 - EMIT2_off32(0x89, 0x85, 36); /* mov dword ptr [rbp + 36], eax */
1028 + EMIT2_off32(0x89, 0x85, -36 - MAX_BPF_STACK); /* mov dword ptr [rbp -548], eax */
1029
1030 /* prog = array->ptrs[index]; */
1031 EMIT4_off32(0x48, 0x8B, 0x84, 0xD6, /* mov rax, [rsi + rdx * 8 + offsetof(...)] */
1032 @@ -1006,19 +979,14 @@ emit_jmp:
1033 seen_exit = true;
1034 /* Update cleanup_addr */
1035 ctx->cleanup_addr = proglen;
1036 - /* mov rbx, qword ptr [rbp+0] */
1037 - EMIT4(0x48, 0x8B, 0x5D, 0);
1038 - /* mov r13, qword ptr [rbp+8] */
1039 - EMIT4(0x4C, 0x8B, 0x6D, 8);
1040 - /* mov r14, qword ptr [rbp+16] */
1041 - EMIT4(0x4C, 0x8B, 0x75, 16);
1042 - /* mov r15, qword ptr [rbp+24] */
1043 - EMIT4(0x4C, 0x8B, 0x7D, 24);
1044 -
1045 - /* add rbp, AUX_STACK_SPACE */
1046 - EMIT4(0x48, 0x83, 0xC5, AUX_STACK_SPACE);
1047 - EMIT1(0xC9); /* leave */
1048 - EMIT1(0xC3); /* ret */
1049 + if (!bpf_prog_was_classic(bpf_prog))
1050 + EMIT1(0x5B); /* get rid of tail_call_cnt */
1051 + EMIT2(0x41, 0x5F); /* pop r15 */
1052 + EMIT2(0x41, 0x5E); /* pop r14 */
1053 + EMIT2(0x41, 0x5D); /* pop r13 */
1054 + EMIT1(0x5B); /* pop rbx */
1055 + EMIT1(0xC9); /* leave */
1056 + EMIT1(0xC3); /* ret */
1057 break;
1058
1059 default:
1060 diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
1061 index 6bb397995610..becd793a258c 100644
1062 --- a/block/bfq-iosched.c
1063 +++ b/block/bfq-iosched.c
1064 @@ -4116,6 +4116,7 @@ static void bfq_exit_icq_bfqq(struct bfq_io_cq *bic, bool is_sync)
1065 unsigned long flags;
1066
1067 spin_lock_irqsave(&bfqd->lock, flags);
1068 + bfqq->bic = NULL;
1069 bfq_exit_bfqq(bfqd, bfqq);
1070 bic_set_bfqq(bic, NULL, is_sync);
1071 spin_unlock_irqrestore(&bfqd->lock, flags);
1072 diff --git a/drivers/android/binder.c b/drivers/android/binder.c
1073 index ce0e4d317d24..5d67f5fec6c1 100644
1074 --- a/drivers/android/binder.c
1075 +++ b/drivers/android/binder.c
1076 @@ -3936,6 +3936,8 @@ retry:
1077 case BINDER_WORK_TRANSACTION_COMPLETE: {
1078 binder_inner_proc_unlock(proc);
1079 cmd = BR_TRANSACTION_COMPLETE;
1080 + kfree(w);
1081 + binder_stats_deleted(BINDER_STAT_TRANSACTION_COMPLETE);
1082 if (put_user(cmd, (uint32_t __user *)ptr))
1083 return -EFAULT;
1084 ptr += sizeof(uint32_t);
1085 @@ -3944,8 +3946,6 @@ retry:
1086 binder_debug(BINDER_DEBUG_TRANSACTION_COMPLETE,
1087 "%d:%d BR_TRANSACTION_COMPLETE\n",
1088 proc->pid, thread->pid);
1089 - kfree(w);
1090 - binder_stats_deleted(BINDER_STAT_TRANSACTION_COMPLETE);
1091 } break;
1092 case BINDER_WORK_NODE: {
1093 struct binder_node *node = container_of(w, struct binder_node, work);
1094 diff --git a/drivers/crypto/talitos.c b/drivers/crypto/talitos.c
1095 index f4f3e9a5851e..c5859d3cb825 100644
1096 --- a/drivers/crypto/talitos.c
1097 +++ b/drivers/crypto/talitos.c
1098 @@ -2286,7 +2286,7 @@ static struct talitos_alg_template driver_algs[] = {
1099 .base = {
1100 .cra_name = "authenc(hmac(sha1),cbc(aes))",
1101 .cra_driver_name = "authenc-hmac-sha1-"
1102 - "cbc-aes-talitos",
1103 + "cbc-aes-talitos-hsna",
1104 .cra_blocksize = AES_BLOCK_SIZE,
1105 .cra_flags = CRYPTO_ALG_ASYNC,
1106 },
1107 @@ -2330,7 +2330,7 @@ static struct talitos_alg_template driver_algs[] = {
1108 .cra_name = "authenc(hmac(sha1),"
1109 "cbc(des3_ede))",
1110 .cra_driver_name = "authenc-hmac-sha1-"
1111 - "cbc-3des-talitos",
1112 + "cbc-3des-talitos-hsna",
1113 .cra_blocksize = DES3_EDE_BLOCK_SIZE,
1114 .cra_flags = CRYPTO_ALG_ASYNC,
1115 },
1116 @@ -2372,7 +2372,7 @@ static struct talitos_alg_template driver_algs[] = {
1117 .base = {
1118 .cra_name = "authenc(hmac(sha224),cbc(aes))",
1119 .cra_driver_name = "authenc-hmac-sha224-"
1120 - "cbc-aes-talitos",
1121 + "cbc-aes-talitos-hsna",
1122 .cra_blocksize = AES_BLOCK_SIZE,
1123 .cra_flags = CRYPTO_ALG_ASYNC,
1124 },
1125 @@ -2416,7 +2416,7 @@ static struct talitos_alg_template driver_algs[] = {
1126 .cra_name = "authenc(hmac(sha224),"
1127 "cbc(des3_ede))",
1128 .cra_driver_name = "authenc-hmac-sha224-"
1129 - "cbc-3des-talitos",
1130 + "cbc-3des-talitos-hsna",
1131 .cra_blocksize = DES3_EDE_BLOCK_SIZE,
1132 .cra_flags = CRYPTO_ALG_ASYNC,
1133 },
1134 @@ -2458,7 +2458,7 @@ static struct talitos_alg_template driver_algs[] = {
1135 .base = {
1136 .cra_name = "authenc(hmac(sha256),cbc(aes))",
1137 .cra_driver_name = "authenc-hmac-sha256-"
1138 - "cbc-aes-talitos",
1139 + "cbc-aes-talitos-hsna",
1140 .cra_blocksize = AES_BLOCK_SIZE,
1141 .cra_flags = CRYPTO_ALG_ASYNC,
1142 },
1143 @@ -2502,7 +2502,7 @@ static struct talitos_alg_template driver_algs[] = {
1144 .cra_name = "authenc(hmac(sha256),"
1145 "cbc(des3_ede))",
1146 .cra_driver_name = "authenc-hmac-sha256-"
1147 - "cbc-3des-talitos",
1148 + "cbc-3des-talitos-hsna",
1149 .cra_blocksize = DES3_EDE_BLOCK_SIZE,
1150 .cra_flags = CRYPTO_ALG_ASYNC,
1151 },
1152 @@ -2628,7 +2628,7 @@ static struct talitos_alg_template driver_algs[] = {
1153 .base = {
1154 .cra_name = "authenc(hmac(md5),cbc(aes))",
1155 .cra_driver_name = "authenc-hmac-md5-"
1156 - "cbc-aes-talitos",
1157 + "cbc-aes-talitos-hsna",
1158 .cra_blocksize = AES_BLOCK_SIZE,
1159 .cra_flags = CRYPTO_ALG_ASYNC,
1160 },
1161 @@ -2670,7 +2670,7 @@ static struct talitos_alg_template driver_algs[] = {
1162 .base = {
1163 .cra_name = "authenc(hmac(md5),cbc(des3_ede))",
1164 .cra_driver_name = "authenc-hmac-md5-"
1165 - "cbc-3des-talitos",
1166 + "cbc-3des-talitos-hsna",
1167 .cra_blocksize = DES3_EDE_BLOCK_SIZE,
1168 .cra_flags = CRYPTO_ALG_ASYNC,
1169 },
1170 diff --git a/drivers/gpu/drm/drm_bufs.c b/drivers/gpu/drm/drm_bufs.c
1171 index e2f775d1c112..21bec4548092 100644
1172 --- a/drivers/gpu/drm/drm_bufs.c
1173 +++ b/drivers/gpu/drm/drm_bufs.c
1174 @@ -1321,7 +1321,10 @@ static int copy_one_buf(void *data, int count, struct drm_buf_entry *from)
1175 .size = from->buf_size,
1176 .low_mark = from->low_mark,
1177 .high_mark = from->high_mark};
1178 - return copy_to_user(to, &v, offsetof(struct drm_buf_desc, flags));
1179 +
1180 + if (copy_to_user(to, &v, offsetof(struct drm_buf_desc, flags)))
1181 + return -EFAULT;
1182 + return 0;
1183 }
1184
1185 int drm_legacy_infobufs(struct drm_device *dev, void *data,
1186 diff --git a/drivers/gpu/drm/drm_ioc32.c b/drivers/gpu/drm/drm_ioc32.c
1187 index 67b1fca39aa6..138680b37c70 100644
1188 --- a/drivers/gpu/drm/drm_ioc32.c
1189 +++ b/drivers/gpu/drm/drm_ioc32.c
1190 @@ -372,7 +372,10 @@ static int copy_one_buf32(void *data, int count, struct drm_buf_entry *from)
1191 .size = from->buf_size,
1192 .low_mark = from->low_mark,
1193 .high_mark = from->high_mark};
1194 - return copy_to_user(to + count, &v, offsetof(drm_buf_desc32_t, flags));
1195 +
1196 + if (copy_to_user(to + count, &v, offsetof(drm_buf_desc32_t, flags)))
1197 + return -EFAULT;
1198 + return 0;
1199 }
1200
1201 static int drm_legacy_infobufs32(struct drm_device *dev, void *data,
1202 diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
1203 index 82ae68716696..05a800807c26 100644
1204 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
1205 +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
1206 @@ -789,6 +789,9 @@ static int vmw_driver_load(struct drm_device *dev, unsigned long chipset)
1207 if (unlikely(ret != 0))
1208 goto out_err0;
1209
1210 + dma_set_max_seg_size(dev->dev, min_t(unsigned int, U32_MAX & PAGE_MASK,
1211 + SCATTERLIST_MAX_SEGMENT));
1212 +
1213 if (dev_priv->capabilities & SVGA_CAP_GMR2) {
1214 DRM_INFO("Max GMR ids is %u\n",
1215 (unsigned)dev_priv->max_gmr_ids);
1216 diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c
1217 index 31786b200afc..f388ad51e72b 100644
1218 --- a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c
1219 +++ b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c
1220 @@ -448,11 +448,11 @@ static int vmw_ttm_map_dma(struct vmw_ttm_tt *vmw_tt)
1221 if (unlikely(ret != 0))
1222 return ret;
1223
1224 - ret = sg_alloc_table_from_pages(&vmw_tt->sgt, vsgt->pages,
1225 - vsgt->num_pages, 0,
1226 - (unsigned long)
1227 - vsgt->num_pages << PAGE_SHIFT,
1228 - GFP_KERNEL);
1229 + ret = __sg_alloc_table_from_pages
1230 + (&vmw_tt->sgt, vsgt->pages, vsgt->num_pages, 0,
1231 + (unsigned long) vsgt->num_pages << PAGE_SHIFT,
1232 + dma_get_max_seg_size(dev_priv->dev->dev),
1233 + GFP_KERNEL);
1234 if (unlikely(ret != 0))
1235 goto out_sg_alloc_fail;
1236
1237 diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
1238 index 97d33b8ed36c..92452992b368 100644
1239 --- a/drivers/hid/hid-ids.h
1240 +++ b/drivers/hid/hid-ids.h
1241 @@ -1212,6 +1212,7 @@
1242 #define USB_DEVICE_ID_PRIMAX_KEYBOARD 0x4e05
1243 #define USB_DEVICE_ID_PRIMAX_REZEL 0x4e72
1244 #define USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4D0F 0x4d0f
1245 +#define USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4D65 0x4d65
1246 #define USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4E22 0x4e22
1247
1248
1249 diff --git a/drivers/hid/hid-quirks.c b/drivers/hid/hid-quirks.c
1250 index e24790c988c0..5892f1bd037e 100644
1251 --- a/drivers/hid/hid-quirks.c
1252 +++ b/drivers/hid/hid-quirks.c
1253 @@ -131,6 +131,7 @@ static const struct hid_device_id hid_quirks[] = {
1254 { HID_USB_DEVICE(USB_VENDOR_ID_PIXART, USB_DEVICE_ID_PIXART_USB_OPTICAL_MOUSE), HID_QUIRK_ALWAYS_POLL },
1255 { HID_USB_DEVICE(USB_VENDOR_ID_PRIMAX, USB_DEVICE_ID_PRIMAX_MOUSE_4D22), HID_QUIRK_ALWAYS_POLL },
1256 { HID_USB_DEVICE(USB_VENDOR_ID_PRIMAX, USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4D0F), HID_QUIRK_ALWAYS_POLL },
1257 + { HID_USB_DEVICE(USB_VENDOR_ID_PRIMAX, USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4D65), HID_QUIRK_ALWAYS_POLL },
1258 { HID_USB_DEVICE(USB_VENDOR_ID_PRIMAX, USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4E22), HID_QUIRK_ALWAYS_POLL },
1259 { HID_USB_DEVICE(USB_VENDOR_ID_PRODIGE, USB_DEVICE_ID_PRODIGE_CORDLESS), HID_QUIRK_NOGET },
1260 { HID_USB_DEVICE(USB_VENDOR_ID_QUANTA, USB_DEVICE_ID_QUANTA_OPTICAL_TOUCH_3001), HID_QUIRK_NOGET },
1261 diff --git a/drivers/input/keyboard/imx_keypad.c b/drivers/input/keyboard/imx_keypad.c
1262 index 539cb670de41..ae9c51cc85f9 100644
1263 --- a/drivers/input/keyboard/imx_keypad.c
1264 +++ b/drivers/input/keyboard/imx_keypad.c
1265 @@ -526,11 +526,12 @@ static int imx_keypad_probe(struct platform_device *pdev)
1266 return 0;
1267 }
1268
1269 -static int __maybe_unused imx_kbd_suspend(struct device *dev)
1270 +static int __maybe_unused imx_kbd_noirq_suspend(struct device *dev)
1271 {
1272 struct platform_device *pdev = to_platform_device(dev);
1273 struct imx_keypad *kbd = platform_get_drvdata(pdev);
1274 struct input_dev *input_dev = kbd->input_dev;
1275 + unsigned short reg_val = readw(kbd->mmio_base + KPSR);
1276
1277 /* imx kbd can wake up system even clock is disabled */
1278 mutex_lock(&input_dev->mutex);
1279 @@ -540,13 +541,20 @@ static int __maybe_unused imx_kbd_suspend(struct device *dev)
1280
1281 mutex_unlock(&input_dev->mutex);
1282
1283 - if (device_may_wakeup(&pdev->dev))
1284 + if (device_may_wakeup(&pdev->dev)) {
1285 + if (reg_val & KBD_STAT_KPKD)
1286 + reg_val |= KBD_STAT_KRIE;
1287 + if (reg_val & KBD_STAT_KPKR)
1288 + reg_val |= KBD_STAT_KDIE;
1289 + writew(reg_val, kbd->mmio_base + KPSR);
1290 +
1291 enable_irq_wake(kbd->irq);
1292 + }
1293
1294 return 0;
1295 }
1296
1297 -static int __maybe_unused imx_kbd_resume(struct device *dev)
1298 +static int __maybe_unused imx_kbd_noirq_resume(struct device *dev)
1299 {
1300 struct platform_device *pdev = to_platform_device(dev);
1301 struct imx_keypad *kbd = platform_get_drvdata(pdev);
1302 @@ -570,7 +578,9 @@ err_clk:
1303 return ret;
1304 }
1305
1306 -static SIMPLE_DEV_PM_OPS(imx_kbd_pm_ops, imx_kbd_suspend, imx_kbd_resume);
1307 +static const struct dev_pm_ops imx_kbd_pm_ops = {
1308 + SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(imx_kbd_noirq_suspend, imx_kbd_noirq_resume)
1309 +};
1310
1311 static struct platform_driver imx_keypad_driver = {
1312 .driver = {
1313 diff --git a/drivers/input/mouse/elantech.c b/drivers/input/mouse/elantech.c
1314 index a7f8b1614559..530142b5a115 100644
1315 --- a/drivers/input/mouse/elantech.c
1316 +++ b/drivers/input/mouse/elantech.c
1317 @@ -1189,6 +1189,8 @@ static const char * const middle_button_pnp_ids[] = {
1318 "LEN2132", /* ThinkPad P52 */
1319 "LEN2133", /* ThinkPad P72 w/ NFC */
1320 "LEN2134", /* ThinkPad P72 */
1321 + "LEN0407",
1322 + "LEN0408",
1323 NULL
1324 };
1325
1326 diff --git a/drivers/md/md.c b/drivers/md/md.c
1327 index b924f62e2cd5..fb5d702e43b5 100644
1328 --- a/drivers/md/md.c
1329 +++ b/drivers/md/md.c
1330 @@ -7625,9 +7625,9 @@ static void status_unused(struct seq_file *seq)
1331 static int status_resync(struct seq_file *seq, struct mddev *mddev)
1332 {
1333 sector_t max_sectors, resync, res;
1334 - unsigned long dt, db;
1335 - sector_t rt;
1336 - int scale;
1337 + unsigned long dt, db = 0;
1338 + sector_t rt, curr_mark_cnt, resync_mark_cnt;
1339 + int scale, recovery_active;
1340 unsigned int per_milli;
1341
1342 if (test_bit(MD_RECOVERY_SYNC, &mddev->recovery) ||
1343 @@ -7716,22 +7716,30 @@ static int status_resync(struct seq_file *seq, struct mddev *mddev)
1344 * db: blocks written from mark until now
1345 * rt: remaining time
1346 *
1347 - * rt is a sector_t, so could be 32bit or 64bit.
1348 - * So we divide before multiply in case it is 32bit and close
1349 - * to the limit.
1350 - * We scale the divisor (db) by 32 to avoid losing precision
1351 - * near the end of resync when the number of remaining sectors
1352 - * is close to 'db'.
1353 - * We then divide rt by 32 after multiplying by db to compensate.
1354 - * The '+1' avoids division by zero if db is very small.
1355 + * rt is a sector_t, which is always 64bit now. We are keeping
1356 + * the original algorithm, but it is not really necessary.
1357 + *
1358 + * Original algorithm:
1359 + * So we divide before multiply in case it is 32bit and close
1360 + * to the limit.
1361 + * We scale the divisor (db) by 32 to avoid losing precision
1362 + * near the end of resync when the number of remaining sectors
1363 + * is close to 'db'.
1364 + * We then divide rt by 32 after multiplying by db to compensate.
1365 + * The '+1' avoids division by zero if db is very small.
1366 */
1367 dt = ((jiffies - mddev->resync_mark) / HZ);
1368 if (!dt) dt++;
1369 - db = (mddev->curr_mark_cnt - atomic_read(&mddev->recovery_active))
1370 - - mddev->resync_mark_cnt;
1371 +
1372 + curr_mark_cnt = mddev->curr_mark_cnt;
1373 + recovery_active = atomic_read(&mddev->recovery_active);
1374 + resync_mark_cnt = mddev->resync_mark_cnt;
1375 +
1376 + if (curr_mark_cnt >= (recovery_active + resync_mark_cnt))
1377 + db = curr_mark_cnt - (recovery_active + resync_mark_cnt);
1378
1379 rt = max_sectors - resync; /* number of remaining sectors */
1380 - sector_div(rt, db/32+1);
1381 + rt = div64_u64(rt, db/32+1);
1382 rt *= dt;
1383 rt >>= 5;
1384
1385 diff --git a/drivers/media/dvb-frontends/stv0297.c b/drivers/media/dvb-frontends/stv0297.c
1386 index 9a9915f71483..3ef31a3a27ff 100644
1387 --- a/drivers/media/dvb-frontends/stv0297.c
1388 +++ b/drivers/media/dvb-frontends/stv0297.c
1389 @@ -694,7 +694,7 @@ static const struct dvb_frontend_ops stv0297_ops = {
1390 .delsys = { SYS_DVBC_ANNEX_A },
1391 .info = {
1392 .name = "ST STV0297 DVB-C",
1393 - .frequency_min_hz = 470 * MHz,
1394 + .frequency_min_hz = 47 * MHz,
1395 .frequency_max_hz = 862 * MHz,
1396 .frequency_stepsize_hz = 62500,
1397 .symbol_rate_min = 870000,
1398 diff --git a/drivers/misc/lkdtm/Makefile b/drivers/misc/lkdtm/Makefile
1399 index 3370a4138e94..cce47a15a79f 100644
1400 --- a/drivers/misc/lkdtm/Makefile
1401 +++ b/drivers/misc/lkdtm/Makefile
1402 @@ -13,8 +13,7 @@ KCOV_INSTRUMENT_rodata.o := n
1403
1404 OBJCOPYFLAGS :=
1405 OBJCOPYFLAGS_rodata_objcopy.o := \
1406 - --set-section-flags .text=alloc,readonly \
1407 - --rename-section .text=.rodata
1408 + --rename-section .text=.rodata,alloc,readonly,load
1409 targets += rodata.o rodata_objcopy.o
1410 $(obj)/rodata_objcopy.o: $(obj)/rodata.o FORCE
1411 $(call if_changed,objcopy)
1412 diff --git a/drivers/misc/vmw_vmci/vmci_context.c b/drivers/misc/vmw_vmci/vmci_context.c
1413 index 21d0fa592145..bc089e634a75 100644
1414 --- a/drivers/misc/vmw_vmci/vmci_context.c
1415 +++ b/drivers/misc/vmw_vmci/vmci_context.c
1416 @@ -29,6 +29,9 @@
1417 #include "vmci_driver.h"
1418 #include "vmci_event.h"
1419
1420 +/* Use a wide upper bound for the maximum contexts. */
1421 +#define VMCI_MAX_CONTEXTS 2000
1422 +
1423 /*
1424 * List of current VMCI contexts. Contexts can be added by
1425 * vmci_ctx_create() and removed via vmci_ctx_destroy().
1426 @@ -125,19 +128,22 @@ struct vmci_ctx *vmci_ctx_create(u32 cid, u32 priv_flags,
1427 /* Initialize host-specific VMCI context. */
1428 init_waitqueue_head(&context->host_context.wait_queue);
1429
1430 - context->queue_pair_array = vmci_handle_arr_create(0);
1431 + context->queue_pair_array =
1432 + vmci_handle_arr_create(0, VMCI_MAX_GUEST_QP_COUNT);
1433 if (!context->queue_pair_array) {
1434 error = -ENOMEM;
1435 goto err_free_ctx;
1436 }
1437
1438 - context->doorbell_array = vmci_handle_arr_create(0);
1439 + context->doorbell_array =
1440 + vmci_handle_arr_create(0, VMCI_MAX_GUEST_DOORBELL_COUNT);
1441 if (!context->doorbell_array) {
1442 error = -ENOMEM;
1443 goto err_free_qp_array;
1444 }
1445
1446 - context->pending_doorbell_array = vmci_handle_arr_create(0);
1447 + context->pending_doorbell_array =
1448 + vmci_handle_arr_create(0, VMCI_MAX_GUEST_DOORBELL_COUNT);
1449 if (!context->pending_doorbell_array) {
1450 error = -ENOMEM;
1451 goto err_free_db_array;
1452 @@ -212,7 +218,7 @@ static int ctx_fire_notification(u32 context_id, u32 priv_flags)
1453 * We create an array to hold the subscribers we find when
1454 * scanning through all contexts.
1455 */
1456 - subscriber_array = vmci_handle_arr_create(0);
1457 + subscriber_array = vmci_handle_arr_create(0, VMCI_MAX_CONTEXTS);
1458 if (subscriber_array == NULL)
1459 return VMCI_ERROR_NO_MEM;
1460
1461 @@ -631,20 +637,26 @@ int vmci_ctx_add_notification(u32 context_id, u32 remote_cid)
1462
1463 spin_lock(&context->lock);
1464
1465 - list_for_each_entry(n, &context->notifier_list, node) {
1466 - if (vmci_handle_is_equal(n->handle, notifier->handle)) {
1467 - exists = true;
1468 - break;
1469 + if (context->n_notifiers < VMCI_MAX_CONTEXTS) {
1470 + list_for_each_entry(n, &context->notifier_list, node) {
1471 + if (vmci_handle_is_equal(n->handle, notifier->handle)) {
1472 + exists = true;
1473 + break;
1474 + }
1475 }
1476 - }
1477
1478 - if (exists) {
1479 - kfree(notifier);
1480 - result = VMCI_ERROR_ALREADY_EXISTS;
1481 + if (exists) {
1482 + kfree(notifier);
1483 + result = VMCI_ERROR_ALREADY_EXISTS;
1484 + } else {
1485 + list_add_tail_rcu(&notifier->node,
1486 + &context->notifier_list);
1487 + context->n_notifiers++;
1488 + result = VMCI_SUCCESS;
1489 + }
1490 } else {
1491 - list_add_tail_rcu(&notifier->node, &context->notifier_list);
1492 - context->n_notifiers++;
1493 - result = VMCI_SUCCESS;
1494 + kfree(notifier);
1495 + result = VMCI_ERROR_NO_MEM;
1496 }
1497
1498 spin_unlock(&context->lock);
1499 @@ -729,8 +741,7 @@ static int vmci_ctx_get_chkpt_doorbells(struct vmci_ctx *context,
1500 u32 *buf_size, void **pbuf)
1501 {
1502 struct dbell_cpt_state *dbells;
1503 - size_t n_doorbells;
1504 - int i;
1505 + u32 i, n_doorbells;
1506
1507 n_doorbells = vmci_handle_arr_get_size(context->doorbell_array);
1508 if (n_doorbells > 0) {
1509 @@ -868,7 +879,8 @@ int vmci_ctx_rcv_notifications_get(u32 context_id,
1510 spin_lock(&context->lock);
1511
1512 *db_handle_array = context->pending_doorbell_array;
1513 - context->pending_doorbell_array = vmci_handle_arr_create(0);
1514 + context->pending_doorbell_array =
1515 + vmci_handle_arr_create(0, VMCI_MAX_GUEST_DOORBELL_COUNT);
1516 if (!context->pending_doorbell_array) {
1517 context->pending_doorbell_array = *db_handle_array;
1518 *db_handle_array = NULL;
1519 @@ -950,12 +962,11 @@ int vmci_ctx_dbell_create(u32 context_id, struct vmci_handle handle)
1520 return VMCI_ERROR_NOT_FOUND;
1521
1522 spin_lock(&context->lock);
1523 - if (!vmci_handle_arr_has_entry(context->doorbell_array, handle)) {
1524 - vmci_handle_arr_append_entry(&context->doorbell_array, handle);
1525 - result = VMCI_SUCCESS;
1526 - } else {
1527 + if (!vmci_handle_arr_has_entry(context->doorbell_array, handle))
1528 + result = vmci_handle_arr_append_entry(&context->doorbell_array,
1529 + handle);
1530 + else
1531 result = VMCI_ERROR_DUPLICATE_ENTRY;
1532 - }
1533
1534 spin_unlock(&context->lock);
1535 vmci_ctx_put(context);
1536 @@ -1091,15 +1102,16 @@ int vmci_ctx_notify_dbell(u32 src_cid,
1537 if (!vmci_handle_arr_has_entry(
1538 dst_context->pending_doorbell_array,
1539 handle)) {
1540 - vmci_handle_arr_append_entry(
1541 + result = vmci_handle_arr_append_entry(
1542 &dst_context->pending_doorbell_array,
1543 handle);
1544 -
1545 - ctx_signal_notify(dst_context);
1546 - wake_up(&dst_context->host_context.wait_queue);
1547 -
1548 + if (result == VMCI_SUCCESS) {
1549 + ctx_signal_notify(dst_context);
1550 + wake_up(&dst_context->host_context.wait_queue);
1551 + }
1552 + } else {
1553 + result = VMCI_SUCCESS;
1554 }
1555 - result = VMCI_SUCCESS;
1556 }
1557 spin_unlock(&dst_context->lock);
1558 }
1559 @@ -1126,13 +1138,11 @@ int vmci_ctx_qp_create(struct vmci_ctx *context, struct vmci_handle handle)
1560 if (context == NULL || vmci_handle_is_invalid(handle))
1561 return VMCI_ERROR_INVALID_ARGS;
1562
1563 - if (!vmci_handle_arr_has_entry(context->queue_pair_array, handle)) {
1564 - vmci_handle_arr_append_entry(&context->queue_pair_array,
1565 - handle);
1566 - result = VMCI_SUCCESS;
1567 - } else {
1568 + if (!vmci_handle_arr_has_entry(context->queue_pair_array, handle))
1569 + result = vmci_handle_arr_append_entry(
1570 + &context->queue_pair_array, handle);
1571 + else
1572 result = VMCI_ERROR_DUPLICATE_ENTRY;
1573 - }
1574
1575 return result;
1576 }
1577 diff --git a/drivers/misc/vmw_vmci/vmci_handle_array.c b/drivers/misc/vmw_vmci/vmci_handle_array.c
1578 index 344973a0fb0a..917e18a8af95 100644
1579 --- a/drivers/misc/vmw_vmci/vmci_handle_array.c
1580 +++ b/drivers/misc/vmw_vmci/vmci_handle_array.c
1581 @@ -16,24 +16,29 @@
1582 #include <linux/slab.h>
1583 #include "vmci_handle_array.h"
1584
1585 -static size_t handle_arr_calc_size(size_t capacity)
1586 +static size_t handle_arr_calc_size(u32 capacity)
1587 {
1588 - return sizeof(struct vmci_handle_arr) +
1589 + return VMCI_HANDLE_ARRAY_HEADER_SIZE +
1590 capacity * sizeof(struct vmci_handle);
1591 }
1592
1593 -struct vmci_handle_arr *vmci_handle_arr_create(size_t capacity)
1594 +struct vmci_handle_arr *vmci_handle_arr_create(u32 capacity, u32 max_capacity)
1595 {
1596 struct vmci_handle_arr *array;
1597
1598 + if (max_capacity == 0 || capacity > max_capacity)
1599 + return NULL;
1600 +
1601 if (capacity == 0)
1602 - capacity = VMCI_HANDLE_ARRAY_DEFAULT_SIZE;
1603 + capacity = min((u32)VMCI_HANDLE_ARRAY_DEFAULT_CAPACITY,
1604 + max_capacity);
1605
1606 array = kmalloc(handle_arr_calc_size(capacity), GFP_ATOMIC);
1607 if (!array)
1608 return NULL;
1609
1610 array->capacity = capacity;
1611 + array->max_capacity = max_capacity;
1612 array->size = 0;
1613
1614 return array;
1615 @@ -44,27 +49,34 @@ void vmci_handle_arr_destroy(struct vmci_handle_arr *array)
1616 kfree(array);
1617 }
1618
1619 -void vmci_handle_arr_append_entry(struct vmci_handle_arr **array_ptr,
1620 - struct vmci_handle handle)
1621 +int vmci_handle_arr_append_entry(struct vmci_handle_arr **array_ptr,
1622 + struct vmci_handle handle)
1623 {
1624 struct vmci_handle_arr *array = *array_ptr;
1625
1626 if (unlikely(array->size >= array->capacity)) {
1627 /* reallocate. */
1628 struct vmci_handle_arr *new_array;
1629 - size_t new_capacity = array->capacity * VMCI_ARR_CAP_MULT;
1630 - size_t new_size = handle_arr_calc_size(new_capacity);
1631 + u32 capacity_bump = min(array->max_capacity - array->capacity,
1632 + array->capacity);
1633 + size_t new_size = handle_arr_calc_size(array->capacity +
1634 + capacity_bump);
1635 +
1636 + if (array->size >= array->max_capacity)
1637 + return VMCI_ERROR_NO_MEM;
1638
1639 new_array = krealloc(array, new_size, GFP_ATOMIC);
1640 if (!new_array)
1641 - return;
1642 + return VMCI_ERROR_NO_MEM;
1643
1644 - new_array->capacity = new_capacity;
1645 + new_array->capacity += capacity_bump;
1646 *array_ptr = array = new_array;
1647 }
1648
1649 array->entries[array->size] = handle;
1650 array->size++;
1651 +
1652 + return VMCI_SUCCESS;
1653 }
1654
1655 /*
1656 @@ -74,7 +86,7 @@ struct vmci_handle vmci_handle_arr_remove_entry(struct vmci_handle_arr *array,
1657 struct vmci_handle entry_handle)
1658 {
1659 struct vmci_handle handle = VMCI_INVALID_HANDLE;
1660 - size_t i;
1661 + u32 i;
1662
1663 for (i = 0; i < array->size; i++) {
1664 if (vmci_handle_is_equal(array->entries[i], entry_handle)) {
1665 @@ -109,7 +121,7 @@ struct vmci_handle vmci_handle_arr_remove_tail(struct vmci_handle_arr *array)
1666 * Handle at given index, VMCI_INVALID_HANDLE if invalid index.
1667 */
1668 struct vmci_handle
1669 -vmci_handle_arr_get_entry(const struct vmci_handle_arr *array, size_t index)
1670 +vmci_handle_arr_get_entry(const struct vmci_handle_arr *array, u32 index)
1671 {
1672 if (unlikely(index >= array->size))
1673 return VMCI_INVALID_HANDLE;
1674 @@ -120,7 +132,7 @@ vmci_handle_arr_get_entry(const struct vmci_handle_arr *array, size_t index)
1675 bool vmci_handle_arr_has_entry(const struct vmci_handle_arr *array,
1676 struct vmci_handle entry_handle)
1677 {
1678 - size_t i;
1679 + u32 i;
1680
1681 for (i = 0; i < array->size; i++)
1682 if (vmci_handle_is_equal(array->entries[i], entry_handle))
1683 diff --git a/drivers/misc/vmw_vmci/vmci_handle_array.h b/drivers/misc/vmw_vmci/vmci_handle_array.h
1684 index b5f3a7f98cf1..0fc58597820e 100644
1685 --- a/drivers/misc/vmw_vmci/vmci_handle_array.h
1686 +++ b/drivers/misc/vmw_vmci/vmci_handle_array.h
1687 @@ -17,32 +17,41 @@
1688 #define _VMCI_HANDLE_ARRAY_H_
1689
1690 #include <linux/vmw_vmci_defs.h>
1691 +#include <linux/limits.h>
1692 #include <linux/types.h>
1693
1694 -#define VMCI_HANDLE_ARRAY_DEFAULT_SIZE 4
1695 -#define VMCI_ARR_CAP_MULT 2 /* Array capacity multiplier */
1696 -
1697 struct vmci_handle_arr {
1698 - size_t capacity;
1699 - size_t size;
1700 + u32 capacity;
1701 + u32 max_capacity;
1702 + u32 size;
1703 + u32 pad;
1704 struct vmci_handle entries[];
1705 };
1706
1707 -struct vmci_handle_arr *vmci_handle_arr_create(size_t capacity);
1708 +#define VMCI_HANDLE_ARRAY_HEADER_SIZE \
1709 + offsetof(struct vmci_handle_arr, entries)
1710 +/* Select a default capacity that results in a 64 byte sized array */
1711 +#define VMCI_HANDLE_ARRAY_DEFAULT_CAPACITY 6
1712 +/* Make sure that the max array size can be expressed by a u32 */
1713 +#define VMCI_HANDLE_ARRAY_MAX_CAPACITY \
1714 + ((U32_MAX - VMCI_HANDLE_ARRAY_HEADER_SIZE - 1) / \
1715 + sizeof(struct vmci_handle))
1716 +
1717 +struct vmci_handle_arr *vmci_handle_arr_create(u32 capacity, u32 max_capacity);
1718 void vmci_handle_arr_destroy(struct vmci_handle_arr *array);
1719 -void vmci_handle_arr_append_entry(struct vmci_handle_arr **array_ptr,
1720 - struct vmci_handle handle);
1721 +int vmci_handle_arr_append_entry(struct vmci_handle_arr **array_ptr,
1722 + struct vmci_handle handle);
1723 struct vmci_handle vmci_handle_arr_remove_entry(struct vmci_handle_arr *array,
1724 struct vmci_handle
1725 entry_handle);
1726 struct vmci_handle vmci_handle_arr_remove_tail(struct vmci_handle_arr *array);
1727 struct vmci_handle
1728 -vmci_handle_arr_get_entry(const struct vmci_handle_arr *array, size_t index);
1729 +vmci_handle_arr_get_entry(const struct vmci_handle_arr *array, u32 index);
1730 bool vmci_handle_arr_has_entry(const struct vmci_handle_arr *array,
1731 struct vmci_handle entry_handle);
1732 struct vmci_handle *vmci_handle_arr_get_handles(struct vmci_handle_arr *array);
1733
1734 -static inline size_t vmci_handle_arr_get_size(
1735 +static inline u32 vmci_handle_arr_get_size(
1736 const struct vmci_handle_arr *array)
1737 {
1738 return array->size;
1739 diff --git a/drivers/mmc/core/mmc.c b/drivers/mmc/core/mmc.c
1740 index 55997cf84b39..f1fe446eee66 100644
1741 --- a/drivers/mmc/core/mmc.c
1742 +++ b/drivers/mmc/core/mmc.c
1743 @@ -1209,13 +1209,13 @@ static int mmc_select_hs400(struct mmc_card *card)
1744 mmc_set_timing(host, MMC_TIMING_MMC_HS400);
1745 mmc_set_bus_speed(card);
1746
1747 + if (host->ops->hs400_complete)
1748 + host->ops->hs400_complete(host);
1749 +
1750 err = mmc_switch_status(card);
1751 if (err)
1752 goto out_err;
1753
1754 - if (host->ops->hs400_complete)
1755 - host->ops->hs400_complete(host);
1756 -
1757 return 0;
1758
1759 out_err:
1760 diff --git a/drivers/net/can/m_can/m_can.c b/drivers/net/can/m_can/m_can.c
1761 index 9b449400376b..deb274a19ba0 100644
1762 --- a/drivers/net/can/m_can/m_can.c
1763 +++ b/drivers/net/can/m_can/m_can.c
1764 @@ -822,6 +822,27 @@ static int m_can_poll(struct napi_struct *napi, int quota)
1765 if (!irqstatus)
1766 goto end;
1767
1768 + /* Errata workaround for issue "Needless activation of MRAF irq"
1769 + * During frame reception while the MCAN is in Error Passive state
1770 + * and the Receive Error Counter has the value MCAN_ECR.REC = 127,
1771 + * it may happen that MCAN_IR.MRAF is set although there was no
1772 + * Message RAM access failure.
1773 + * If MCAN_IR.MRAF is enabled, an interrupt to the Host CPU is generated
1774 + * The Message RAM Access Failure interrupt routine needs to check
1775 + * whether MCAN_ECR.RP = ’1’ and MCAN_ECR.REC = 127.
1776 + * In this case, reset MCAN_IR.MRAF. No further action is required.
1777 + */
1778 + if ((priv->version <= 31) && (irqstatus & IR_MRAF) &&
1779 + (m_can_read(priv, M_CAN_ECR) & ECR_RP)) {
1780 + struct can_berr_counter bec;
1781 +
1782 + __m_can_get_berr_counter(dev, &bec);
1783 + if (bec.rxerr == 127) {
1784 + m_can_write(priv, M_CAN_IR, IR_MRAF);
1785 + irqstatus &= ~IR_MRAF;
1786 + }
1787 + }
1788 +
1789 psr = m_can_read(priv, M_CAN_PSR);
1790 if (irqstatus & IR_ERR_STATE)
1791 work_done += m_can_handle_state_errors(dev, psr);
1792 diff --git a/drivers/net/can/spi/Kconfig b/drivers/net/can/spi/Kconfig
1793 index 8f2e0dd7b756..792e9c6c4a2f 100644
1794 --- a/drivers/net/can/spi/Kconfig
1795 +++ b/drivers/net/can/spi/Kconfig
1796 @@ -8,9 +8,10 @@ config CAN_HI311X
1797 Driver for the Holt HI311x SPI CAN controllers.
1798
1799 config CAN_MCP251X
1800 - tristate "Microchip MCP251x SPI CAN controllers"
1801 + tristate "Microchip MCP251x and MCP25625 SPI CAN controllers"
1802 depends on HAS_DMA
1803 ---help---
1804 - Driver for the Microchip MCP251x SPI CAN controllers.
1805 + Driver for the Microchip MCP251x and MCP25625 SPI CAN
1806 + controllers.
1807
1808 endmenu
1809 diff --git a/drivers/net/can/spi/mcp251x.c b/drivers/net/can/spi/mcp251x.c
1810 index e90817608645..da64e71a62ee 100644
1811 --- a/drivers/net/can/spi/mcp251x.c
1812 +++ b/drivers/net/can/spi/mcp251x.c
1813 @@ -1,5 +1,5 @@
1814 /*
1815 - * CAN bus driver for Microchip 251x CAN Controller with SPI Interface
1816 + * CAN bus driver for Microchip 251x/25625 CAN Controller with SPI Interface
1817 *
1818 * MCP2510 support and bug fixes by Christian Pellegrin
1819 * <chripell@evolware.org>
1820 @@ -41,7 +41,7 @@
1821 * static struct spi_board_info spi_board_info[] = {
1822 * {
1823 * .modalias = "mcp2510",
1824 - * // or "mcp2515" depending on your controller
1825 + * // "mcp2515" or "mcp25625" depending on your controller
1826 * .platform_data = &mcp251x_info,
1827 * .irq = IRQ_EINT13,
1828 * .max_speed_hz = 2*1000*1000,
1829 @@ -238,6 +238,7 @@ static const struct can_bittiming_const mcp251x_bittiming_const = {
1830 enum mcp251x_model {
1831 CAN_MCP251X_MCP2510 = 0x2510,
1832 CAN_MCP251X_MCP2515 = 0x2515,
1833 + CAN_MCP251X_MCP25625 = 0x25625,
1834 };
1835
1836 struct mcp251x_priv {
1837 @@ -280,7 +281,6 @@ static inline int mcp251x_is_##_model(struct spi_device *spi) \
1838 }
1839
1840 MCP251X_IS(2510);
1841 -MCP251X_IS(2515);
1842
1843 static void mcp251x_clean(struct net_device *net)
1844 {
1845 @@ -639,7 +639,7 @@ static int mcp251x_hw_reset(struct spi_device *spi)
1846
1847 /* Wait for oscillator startup timer after reset */
1848 mdelay(MCP251X_OST_DELAY_MS);
1849 -
1850 +
1851 reg = mcp251x_read_reg(spi, CANSTAT);
1852 if ((reg & CANCTRL_REQOP_MASK) != CANCTRL_REQOP_CONF)
1853 return -ENODEV;
1854 @@ -820,9 +820,8 @@ static irqreturn_t mcp251x_can_ist(int irq, void *dev_id)
1855 /* receive buffer 0 */
1856 if (intf & CANINTF_RX0IF) {
1857 mcp251x_hw_rx(spi, 0);
1858 - /*
1859 - * Free one buffer ASAP
1860 - * (The MCP2515 does this automatically.)
1861 + /* Free one buffer ASAP
1862 + * (The MCP2515/25625 does this automatically.)
1863 */
1864 if (mcp251x_is_2510(spi))
1865 mcp251x_write_bits(spi, CANINTF, CANINTF_RX0IF, 0x00);
1866 @@ -831,7 +830,7 @@ static irqreturn_t mcp251x_can_ist(int irq, void *dev_id)
1867 /* receive buffer 1 */
1868 if (intf & CANINTF_RX1IF) {
1869 mcp251x_hw_rx(spi, 1);
1870 - /* the MCP2515 does this automatically */
1871 + /* The MCP2515/25625 does this automatically. */
1872 if (mcp251x_is_2510(spi))
1873 clear_intf |= CANINTF_RX1IF;
1874 }
1875 @@ -1006,6 +1005,10 @@ static const struct of_device_id mcp251x_of_match[] = {
1876 .compatible = "microchip,mcp2515",
1877 .data = (void *)CAN_MCP251X_MCP2515,
1878 },
1879 + {
1880 + .compatible = "microchip,mcp25625",
1881 + .data = (void *)CAN_MCP251X_MCP25625,
1882 + },
1883 { }
1884 };
1885 MODULE_DEVICE_TABLE(of, mcp251x_of_match);
1886 @@ -1019,6 +1022,10 @@ static const struct spi_device_id mcp251x_id_table[] = {
1887 .name = "mcp2515",
1888 .driver_data = (kernel_ulong_t)CAN_MCP251X_MCP2515,
1889 },
1890 + {
1891 + .name = "mcp25625",
1892 + .driver_data = (kernel_ulong_t)CAN_MCP251X_MCP25625,
1893 + },
1894 { }
1895 };
1896 MODULE_DEVICE_TABLE(spi, mcp251x_id_table);
1897 @@ -1259,5 +1266,5 @@ module_spi_driver(mcp251x_can_driver);
1898
1899 MODULE_AUTHOR("Chris Elston <celston@katalix.com>, "
1900 "Christian Pellegrin <chripell@evolware.org>");
1901 -MODULE_DESCRIPTION("Microchip 251x CAN driver");
1902 +MODULE_DESCRIPTION("Microchip 251x/25625 CAN driver");
1903 MODULE_LICENSE("GPL v2");
1904 diff --git a/drivers/net/dsa/mv88e6xxx/global1_vtu.c b/drivers/net/dsa/mv88e6xxx/global1_vtu.c
1905 index 058326924f3e..7a6667e0b9f9 100644
1906 --- a/drivers/net/dsa/mv88e6xxx/global1_vtu.c
1907 +++ b/drivers/net/dsa/mv88e6xxx/global1_vtu.c
1908 @@ -419,7 +419,7 @@ int mv88e6185_g1_vtu_loadpurge(struct mv88e6xxx_chip *chip,
1909 * VTU DBNum[7:4] are located in VTU Operation 11:8
1910 */
1911 op |= entry->fid & 0x000f;
1912 - op |= (entry->fid & 0x00f0) << 8;
1913 + op |= (entry->fid & 0x00f0) << 4;
1914 }
1915
1916 return mv88e6xxx_g1_vtu_op(chip, op);
1917 diff --git a/drivers/net/ethernet/8390/Kconfig b/drivers/net/ethernet/8390/Kconfig
1918 index f2f0264c58ba..443b34e2725f 100644
1919 --- a/drivers/net/ethernet/8390/Kconfig
1920 +++ b/drivers/net/ethernet/8390/Kconfig
1921 @@ -49,7 +49,7 @@ config XSURF100
1922 tristate "Amiga XSurf 100 AX88796/NE2000 clone support"
1923 depends on ZORRO
1924 select AX88796
1925 - select ASIX_PHY
1926 + select AX88796B_PHY
1927 help
1928 This driver is for the Individual Computers X-Surf 100 Ethernet
1929 card (based on the Asix AX88796 chip). If you have such a card,
1930 diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c
1931 index a4a90b6cdb46..c428b0655c26 100644
1932 --- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c
1933 +++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c
1934 @@ -1581,7 +1581,8 @@ static int bnx2x_get_module_info(struct net_device *dev,
1935 }
1936
1937 if (!sff8472_comp ||
1938 - (diag_type & SFP_EEPROM_DIAG_ADDR_CHANGE_REQ)) {
1939 + (diag_type & SFP_EEPROM_DIAG_ADDR_CHANGE_REQ) ||
1940 + !(diag_type & SFP_EEPROM_DDM_IMPLEMENTED)) {
1941 modinfo->type = ETH_MODULE_SFF_8079;
1942 modinfo->eeprom_len = ETH_MODULE_SFF_8079_LEN;
1943 } else {
1944 diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.h
1945 index b7d251108c19..7115f5025664 100644
1946 --- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.h
1947 +++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_link.h
1948 @@ -62,6 +62,7 @@
1949 #define SFP_EEPROM_DIAG_TYPE_ADDR 0x5c
1950 #define SFP_EEPROM_DIAG_TYPE_SIZE 1
1951 #define SFP_EEPROM_DIAG_ADDR_CHANGE_REQ (1<<2)
1952 +#define SFP_EEPROM_DDM_IMPLEMENTED (1<<6)
1953 #define SFP_EEPROM_SFF_8472_COMP_ADDR 0x5e
1954 #define SFP_EEPROM_SFF_8472_COMP_SIZE 1
1955
1956 diff --git a/drivers/net/ethernet/cavium/liquidio/lio_core.c b/drivers/net/ethernet/cavium/liquidio/lio_core.c
1957 index 8093c5eafea2..781814835a4f 100644
1958 --- a/drivers/net/ethernet/cavium/liquidio/lio_core.c
1959 +++ b/drivers/net/ethernet/cavium/liquidio/lio_core.c
1960 @@ -985,7 +985,7 @@ static void liquidio_schedule_droq_pkt_handlers(struct octeon_device *oct)
1961
1962 if (droq->ops.poll_mode) {
1963 droq->ops.napi_fn(droq);
1964 - oct_priv->napi_mask |= (1 << oq_no);
1965 + oct_priv->napi_mask |= BIT_ULL(oq_no);
1966 } else {
1967 tasklet_schedule(&oct_priv->droq_tasklet);
1968 }
1969 diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
1970 index 426789e2c23d..0ae43d27cdcf 100644
1971 --- a/drivers/net/ethernet/ibm/ibmvnic.c
1972 +++ b/drivers/net/ethernet/ibm/ibmvnic.c
1973 @@ -438,9 +438,10 @@ static int reset_rx_pools(struct ibmvnic_adapter *adapter)
1974 if (rx_pool->buff_size != be64_to_cpu(size_array[i])) {
1975 free_long_term_buff(adapter, &rx_pool->long_term_buff);
1976 rx_pool->buff_size = be64_to_cpu(size_array[i]);
1977 - alloc_long_term_buff(adapter, &rx_pool->long_term_buff,
1978 - rx_pool->size *
1979 - rx_pool->buff_size);
1980 + rc = alloc_long_term_buff(adapter,
1981 + &rx_pool->long_term_buff,
1982 + rx_pool->size *
1983 + rx_pool->buff_size);
1984 } else {
1985 rc = reset_long_term_buff(adapter,
1986 &rx_pool->long_term_buff);
1987 @@ -706,9 +707,9 @@ static int init_tx_pools(struct net_device *netdev)
1988 return rc;
1989 }
1990
1991 - init_one_tx_pool(netdev, &adapter->tso_pool[i],
1992 - IBMVNIC_TSO_BUFS,
1993 - IBMVNIC_TSO_BUF_SZ);
1994 + rc = init_one_tx_pool(netdev, &adapter->tso_pool[i],
1995 + IBMVNIC_TSO_BUFS,
1996 + IBMVNIC_TSO_BUF_SZ);
1997 if (rc) {
1998 release_tx_pools(adapter);
1999 return rc;
2000 @@ -1754,7 +1755,8 @@ static int do_reset(struct ibmvnic_adapter *adapter,
2001
2002 ibmvnic_cleanup(netdev);
2003
2004 - if (adapter->reset_reason != VNIC_RESET_MOBILITY &&
2005 + if (reset_state == VNIC_OPEN &&
2006 + adapter->reset_reason != VNIC_RESET_MOBILITY &&
2007 adapter->reset_reason != VNIC_RESET_FAILOVER) {
2008 rc = __ibmvnic_close(netdev);
2009 if (rc)
2010 @@ -1853,6 +1855,9 @@ static int do_reset(struct ibmvnic_adapter *adapter,
2011 return 0;
2012 }
2013
2014 + /* refresh device's multicast list */
2015 + ibmvnic_set_multi(netdev);
2016 +
2017 /* kick napi */
2018 for (i = 0; i < adapter->req_rx_queues; i++)
2019 napi_schedule(&adapter->napi[i]);
2020 diff --git a/drivers/net/ethernet/mellanox/mlxsw/reg.h b/drivers/net/ethernet/mellanox/mlxsw/reg.h
2021 index 6e8b619b769b..aee58b3892f2 100644
2022 --- a/drivers/net/ethernet/mellanox/mlxsw/reg.h
2023 +++ b/drivers/net/ethernet/mellanox/mlxsw/reg.h
2024 @@ -877,7 +877,7 @@ static inline void mlxsw_reg_spaft_pack(char *payload, u8 local_port,
2025 MLXSW_REG_ZERO(spaft, payload);
2026 mlxsw_reg_spaft_local_port_set(payload, local_port);
2027 mlxsw_reg_spaft_allow_untagged_set(payload, allow_untagged);
2028 - mlxsw_reg_spaft_allow_prio_tagged_set(payload, true);
2029 + mlxsw_reg_spaft_allow_prio_tagged_set(payload, allow_untagged);
2030 mlxsw_reg_spaft_allow_tagged_set(payload, true);
2031 }
2032
2033 diff --git a/drivers/net/phy/Kconfig b/drivers/net/phy/Kconfig
2034 index 82070792edbb..1f5fd24cd749 100644
2035 --- a/drivers/net/phy/Kconfig
2036 +++ b/drivers/net/phy/Kconfig
2037 @@ -227,7 +227,7 @@ config AQUANTIA_PHY
2038 ---help---
2039 Currently supports the Aquantia AQ1202, AQ2104, AQR105, AQR405
2040
2041 -config ASIX_PHY
2042 +config AX88796B_PHY
2043 tristate "Asix PHYs"
2044 help
2045 Currently supports the Asix Electronics PHY found in the X-Surf 100
2046 diff --git a/drivers/net/phy/Makefile b/drivers/net/phy/Makefile
2047 index 5805c0b7d60e..f21cda9d865e 100644
2048 --- a/drivers/net/phy/Makefile
2049 +++ b/drivers/net/phy/Makefile
2050 @@ -46,7 +46,7 @@ obj-y += $(sfp-obj-y) $(sfp-obj-m)
2051
2052 obj-$(CONFIG_AMD_PHY) += amd.o
2053 obj-$(CONFIG_AQUANTIA_PHY) += aquantia.o
2054 -obj-$(CONFIG_ASIX_PHY) += asix.o
2055 +obj-$(CONFIG_AX88796B_PHY) += ax88796b.o
2056 obj-$(CONFIG_AT803X_PHY) += at803x.o
2057 obj-$(CONFIG_BCM63XX_PHY) += bcm63xx.o
2058 obj-$(CONFIG_BCM7XXX_PHY) += bcm7xxx.o
2059 diff --git a/drivers/net/phy/asix.c b/drivers/net/phy/asix.c
2060 deleted file mode 100644
2061 index 8ebe7f5484ae..000000000000
2062 --- a/drivers/net/phy/asix.c
2063 +++ /dev/null
2064 @@ -1,63 +0,0 @@
2065 -// SPDX-License-Identifier: GPL-2.0
2066 -/* Driver for Asix PHYs
2067 - *
2068 - * Author: Michael Schmitz <schmitzmic@gmail.com>
2069 - *
2070 - * This program is free software; you can redistribute it and/or modify it
2071 - * under the terms of the GNU General Public License as published by the
2072 - * Free Software Foundation; either version 2 of the License, or (at your
2073 - * option) any later version.
2074 - *
2075 - */
2076 -#include <linux/kernel.h>
2077 -#include <linux/errno.h>
2078 -#include <linux/init.h>
2079 -#include <linux/module.h>
2080 -#include <linux/mii.h>
2081 -#include <linux/phy.h>
2082 -
2083 -#define PHY_ID_ASIX_AX88796B 0x003b1841
2084 -
2085 -MODULE_DESCRIPTION("Asix PHY driver");
2086 -MODULE_AUTHOR("Michael Schmitz <schmitzmic@gmail.com>");
2087 -MODULE_LICENSE("GPL");
2088 -
2089 -/**
2090 - * asix_soft_reset - software reset the PHY via BMCR_RESET bit
2091 - * @phydev: target phy_device struct
2092 - *
2093 - * Description: Perform a software PHY reset using the standard
2094 - * BMCR_RESET bit and poll for the reset bit to be cleared.
2095 - * Toggle BMCR_RESET bit off to accommodate broken AX8796B PHY implementation
2096 - * such as used on the Individual Computers' X-Surf 100 Zorro card.
2097 - *
2098 - * Returns: 0 on success, < 0 on failure
2099 - */
2100 -static int asix_soft_reset(struct phy_device *phydev)
2101 -{
2102 - int ret;
2103 -
2104 - /* Asix PHY won't reset unless reset bit toggles */
2105 - ret = phy_write(phydev, MII_BMCR, 0);
2106 - if (ret < 0)
2107 - return ret;
2108 -
2109 - return genphy_soft_reset(phydev);
2110 -}
2111 -
2112 -static struct phy_driver asix_driver[] = { {
2113 - .phy_id = PHY_ID_ASIX_AX88796B,
2114 - .name = "Asix Electronics AX88796B",
2115 - .phy_id_mask = 0xfffffff0,
2116 - .features = PHY_BASIC_FEATURES,
2117 - .soft_reset = asix_soft_reset,
2118 -} };
2119 -
2120 -module_phy_driver(asix_driver);
2121 -
2122 -static struct mdio_device_id __maybe_unused asix_tbl[] = {
2123 - { PHY_ID_ASIX_AX88796B, 0xfffffff0 },
2124 - { }
2125 -};
2126 -
2127 -MODULE_DEVICE_TABLE(mdio, asix_tbl);
2128 diff --git a/drivers/net/phy/ax88796b.c b/drivers/net/phy/ax88796b.c
2129 new file mode 100644
2130 index 000000000000..8ebe7f5484ae
2131 --- /dev/null
2132 +++ b/drivers/net/phy/ax88796b.c
2133 @@ -0,0 +1,63 @@
2134 +// SPDX-License-Identifier: GPL-2.0
2135 +/* Driver for Asix PHYs
2136 + *
2137 + * Author: Michael Schmitz <schmitzmic@gmail.com>
2138 + *
2139 + * This program is free software; you can redistribute it and/or modify it
2140 + * under the terms of the GNU General Public License as published by the
2141 + * Free Software Foundation; either version 2 of the License, or (at your
2142 + * option) any later version.
2143 + *
2144 + */
2145 +#include <linux/kernel.h>
2146 +#include <linux/errno.h>
2147 +#include <linux/init.h>
2148 +#include <linux/module.h>
2149 +#include <linux/mii.h>
2150 +#include <linux/phy.h>
2151 +
2152 +#define PHY_ID_ASIX_AX88796B 0x003b1841
2153 +
2154 +MODULE_DESCRIPTION("Asix PHY driver");
2155 +MODULE_AUTHOR("Michael Schmitz <schmitzmic@gmail.com>");
2156 +MODULE_LICENSE("GPL");
2157 +
2158 +/**
2159 + * asix_soft_reset - software reset the PHY via BMCR_RESET bit
2160 + * @phydev: target phy_device struct
2161 + *
2162 + * Description: Perform a software PHY reset using the standard
2163 + * BMCR_RESET bit and poll for the reset bit to be cleared.
2164 + * Toggle BMCR_RESET bit off to accommodate broken AX8796B PHY implementation
2165 + * such as used on the Individual Computers' X-Surf 100 Zorro card.
2166 + *
2167 + * Returns: 0 on success, < 0 on failure
2168 + */
2169 +static int asix_soft_reset(struct phy_device *phydev)
2170 +{
2171 + int ret;
2172 +
2173 + /* Asix PHY won't reset unless reset bit toggles */
2174 + ret = phy_write(phydev, MII_BMCR, 0);
2175 + if (ret < 0)
2176 + return ret;
2177 +
2178 + return genphy_soft_reset(phydev);
2179 +}
2180 +
2181 +static struct phy_driver asix_driver[] = { {
2182 + .phy_id = PHY_ID_ASIX_AX88796B,
2183 + .name = "Asix Electronics AX88796B",
2184 + .phy_id_mask = 0xfffffff0,
2185 + .features = PHY_BASIC_FEATURES,
2186 + .soft_reset = asix_soft_reset,
2187 +} };
2188 +
2189 +module_phy_driver(asix_driver);
2190 +
2191 +static struct mdio_device_id __maybe_unused asix_tbl[] = {
2192 + { PHY_ID_ASIX_AX88796B, 0xfffffff0 },
2193 + { }
2194 +};
2195 +
2196 +MODULE_DEVICE_TABLE(mdio, asix_tbl);
2197 diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
2198 index e657d8947125..128c8a327d8e 100644
2199 --- a/drivers/net/usb/qmi_wwan.c
2200 +++ b/drivers/net/usb/qmi_wwan.c
2201 @@ -153,7 +153,7 @@ static bool qmimux_has_slaves(struct usbnet *dev)
2202
2203 static int qmimux_rx_fixup(struct usbnet *dev, struct sk_buff *skb)
2204 {
2205 - unsigned int len, offset = 0;
2206 + unsigned int len, offset = 0, pad_len, pkt_len;
2207 struct qmimux_hdr *hdr;
2208 struct net_device *net;
2209 struct sk_buff *skbn;
2210 @@ -171,10 +171,16 @@ static int qmimux_rx_fixup(struct usbnet *dev, struct sk_buff *skb)
2211 if (hdr->pad & 0x80)
2212 goto skip;
2213
2214 + /* extract padding length and check for valid length info */
2215 + pad_len = hdr->pad & 0x3f;
2216 + if (len == 0 || pad_len >= len)
2217 + goto skip;
2218 + pkt_len = len - pad_len;
2219 +
2220 net = qmimux_find_dev(dev, hdr->mux_id);
2221 if (!net)
2222 goto skip;
2223 - skbn = netdev_alloc_skb(net, len);
2224 + skbn = netdev_alloc_skb(net, pkt_len);
2225 if (!skbn)
2226 return 0;
2227 skbn->dev = net;
2228 @@ -191,7 +197,7 @@ static int qmimux_rx_fixup(struct usbnet *dev, struct sk_buff *skb)
2229 goto skip;
2230 }
2231
2232 - skb_put_data(skbn, skb->data + offset + qmimux_hdr_sz, len);
2233 + skb_put_data(skbn, skb->data + offset + qmimux_hdr_sz, pkt_len);
2234 if (netif_rx(skbn) != NET_RX_SUCCESS)
2235 return 0;
2236
2237 @@ -241,13 +247,14 @@ out_free_newdev:
2238 return err;
2239 }
2240
2241 -static void qmimux_unregister_device(struct net_device *dev)
2242 +static void qmimux_unregister_device(struct net_device *dev,
2243 + struct list_head *head)
2244 {
2245 struct qmimux_priv *priv = netdev_priv(dev);
2246 struct net_device *real_dev = priv->real_dev;
2247
2248 netdev_upper_dev_unlink(real_dev, dev);
2249 - unregister_netdevice(dev);
2250 + unregister_netdevice_queue(dev, head);
2251
2252 /* Get rid of the reference to real_dev */
2253 dev_put(real_dev);
2254 @@ -356,8 +363,8 @@ static ssize_t add_mux_store(struct device *d, struct device_attribute *attr, c
2255 if (kstrtou8(buf, 0, &mux_id))
2256 return -EINVAL;
2257
2258 - /* mux_id [1 - 0x7f] range empirically found */
2259 - if (mux_id < 1 || mux_id > 0x7f)
2260 + /* mux_id [1 - 254] for compatibility with ip(8) and the rmnet driver */
2261 + if (mux_id < 1 || mux_id > 254)
2262 return -EINVAL;
2263
2264 if (!rtnl_trylock())
2265 @@ -418,7 +425,7 @@ static ssize_t del_mux_store(struct device *d, struct device_attribute *attr, c
2266 ret = -EINVAL;
2267 goto err;
2268 }
2269 - qmimux_unregister_device(del_dev);
2270 + qmimux_unregister_device(del_dev, NULL);
2271
2272 if (!qmimux_has_slaves(dev))
2273 info->flags &= ~QMI_WWAN_FLAG_MUX;
2274 @@ -1428,6 +1435,7 @@ static void qmi_wwan_disconnect(struct usb_interface *intf)
2275 struct qmi_wwan_state *info;
2276 struct list_head *iter;
2277 struct net_device *ldev;
2278 + LIST_HEAD(list);
2279
2280 /* called twice if separate control and data intf */
2281 if (!dev)
2282 @@ -1440,8 +1448,9 @@ static void qmi_wwan_disconnect(struct usb_interface *intf)
2283 }
2284 rcu_read_lock();
2285 netdev_for_each_upper_dev_rcu(dev->net, ldev, iter)
2286 - qmimux_unregister_device(ldev);
2287 + qmimux_unregister_device(ldev, &list);
2288 rcu_read_unlock();
2289 + unregister_netdevice_many(&list);
2290 rtnl_unlock();
2291 info->flags &= ~QMI_WWAN_FLAG_MUX;
2292 }
2293 diff --git a/drivers/net/wireless/ath/carl9170/usb.c b/drivers/net/wireless/ath/carl9170/usb.c
2294 index e7c3f3b8457d..99f1897a775d 100644
2295 --- a/drivers/net/wireless/ath/carl9170/usb.c
2296 +++ b/drivers/net/wireless/ath/carl9170/usb.c
2297 @@ -128,6 +128,8 @@ static const struct usb_device_id carl9170_usb_ids[] = {
2298 };
2299 MODULE_DEVICE_TABLE(usb, carl9170_usb_ids);
2300
2301 +static struct usb_driver carl9170_driver;
2302 +
2303 static void carl9170_usb_submit_data_urb(struct ar9170 *ar)
2304 {
2305 struct urb *urb;
2306 @@ -966,32 +968,28 @@ err_out:
2307
2308 static void carl9170_usb_firmware_failed(struct ar9170 *ar)
2309 {
2310 - struct device *parent = ar->udev->dev.parent;
2311 - struct usb_device *udev;
2312 -
2313 - /*
2314 - * Store a copy of the usb_device pointer locally.
2315 - * This is because device_release_driver initiates
2316 - * carl9170_usb_disconnect, which in turn frees our
2317 - * driver context (ar).
2318 + /* Store a copies of the usb_interface and usb_device pointer locally.
2319 + * This is because release_driver initiates carl9170_usb_disconnect,
2320 + * which in turn frees our driver context (ar).
2321 */
2322 - udev = ar->udev;
2323 + struct usb_interface *intf = ar->intf;
2324 + struct usb_device *udev = ar->udev;
2325
2326 complete(&ar->fw_load_wait);
2327 + /* at this point 'ar' could be already freed. Don't use it anymore */
2328 + ar = NULL;
2329
2330 /* unbind anything failed */
2331 - if (parent)
2332 - device_lock(parent);
2333 -
2334 - device_release_driver(&udev->dev);
2335 - if (parent)
2336 - device_unlock(parent);
2337 + usb_lock_device(udev);
2338 + usb_driver_release_interface(&carl9170_driver, intf);
2339 + usb_unlock_device(udev);
2340
2341 - usb_put_dev(udev);
2342 + usb_put_intf(intf);
2343 }
2344
2345 static void carl9170_usb_firmware_finish(struct ar9170 *ar)
2346 {
2347 + struct usb_interface *intf = ar->intf;
2348 int err;
2349
2350 err = carl9170_parse_firmware(ar);
2351 @@ -1009,7 +1007,7 @@ static void carl9170_usb_firmware_finish(struct ar9170 *ar)
2352 goto err_unrx;
2353
2354 complete(&ar->fw_load_wait);
2355 - usb_put_dev(ar->udev);
2356 + usb_put_intf(intf);
2357 return;
2358
2359 err_unrx:
2360 @@ -1052,7 +1050,6 @@ static int carl9170_usb_probe(struct usb_interface *intf,
2361 return PTR_ERR(ar);
2362
2363 udev = interface_to_usbdev(intf);
2364 - usb_get_dev(udev);
2365 ar->udev = udev;
2366 ar->intf = intf;
2367 ar->features = id->driver_info;
2368 @@ -1094,15 +1091,14 @@ static int carl9170_usb_probe(struct usb_interface *intf,
2369 atomic_set(&ar->rx_anch_urbs, 0);
2370 atomic_set(&ar->rx_pool_urbs, 0);
2371
2372 - usb_get_dev(ar->udev);
2373 + usb_get_intf(intf);
2374
2375 carl9170_set_state(ar, CARL9170_STOPPED);
2376
2377 err = request_firmware_nowait(THIS_MODULE, 1, CARL9170FW_NAME,
2378 &ar->udev->dev, GFP_KERNEL, ar, carl9170_usb_firmware_step2);
2379 if (err) {
2380 - usb_put_dev(udev);
2381 - usb_put_dev(udev);
2382 + usb_put_intf(intf);
2383 carl9170_free(ar);
2384 }
2385 return err;
2386 @@ -1131,7 +1127,6 @@ static void carl9170_usb_disconnect(struct usb_interface *intf)
2387
2388 carl9170_release_firmware(ar);
2389 carl9170_free(ar);
2390 - usb_put_dev(udev);
2391 }
2392
2393 #ifdef CONFIG_PM
2394 diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-drv.c b/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
2395 index c0631255aee7..db6628d390a2 100644
2396 --- a/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
2397 +++ b/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
2398 @@ -1547,7 +1547,6 @@ static void iwl_req_fw_callback(const struct firmware *ucode_raw, void *context)
2399 goto free;
2400
2401 out_free_fw:
2402 - iwl_dealloc_ucode(drv);
2403 release_firmware(ucode_raw);
2404 out_unbind:
2405 complete(&drv->request_firmware_complete);
2406 diff --git a/drivers/net/wireless/intersil/p54/p54usb.c b/drivers/net/wireless/intersil/p54/p54usb.c
2407 index b0b86f701061..15661da6eedc 100644
2408 --- a/drivers/net/wireless/intersil/p54/p54usb.c
2409 +++ b/drivers/net/wireless/intersil/p54/p54usb.c
2410 @@ -33,6 +33,8 @@ MODULE_ALIAS("prism54usb");
2411 MODULE_FIRMWARE("isl3886usb");
2412 MODULE_FIRMWARE("isl3887usb");
2413
2414 +static struct usb_driver p54u_driver;
2415 +
2416 /*
2417 * Note:
2418 *
2419 @@ -921,9 +923,9 @@ static void p54u_load_firmware_cb(const struct firmware *firmware,
2420 {
2421 struct p54u_priv *priv = context;
2422 struct usb_device *udev = priv->udev;
2423 + struct usb_interface *intf = priv->intf;
2424 int err;
2425
2426 - complete(&priv->fw_wait_load);
2427 if (firmware) {
2428 priv->fw = firmware;
2429 err = p54u_start_ops(priv);
2430 @@ -932,26 +934,22 @@ static void p54u_load_firmware_cb(const struct firmware *firmware,
2431 dev_err(&udev->dev, "Firmware not found.\n");
2432 }
2433
2434 - if (err) {
2435 - struct device *parent = priv->udev->dev.parent;
2436 -
2437 - dev_err(&udev->dev, "failed to initialize device (%d)\n", err);
2438 -
2439 - if (parent)
2440 - device_lock(parent);
2441 + complete(&priv->fw_wait_load);
2442 + /*
2443 + * At this point p54u_disconnect may have already freed
2444 + * the "priv" context. Do not use it anymore!
2445 + */
2446 + priv = NULL;
2447
2448 - device_release_driver(&udev->dev);
2449 - /*
2450 - * At this point p54u_disconnect has already freed
2451 - * the "priv" context. Do not use it anymore!
2452 - */
2453 - priv = NULL;
2454 + if (err) {
2455 + dev_err(&intf->dev, "failed to initialize device (%d)\n", err);
2456
2457 - if (parent)
2458 - device_unlock(parent);
2459 + usb_lock_device(udev);
2460 + usb_driver_release_interface(&p54u_driver, intf);
2461 + usb_unlock_device(udev);
2462 }
2463
2464 - usb_put_dev(udev);
2465 + usb_put_intf(intf);
2466 }
2467
2468 static int p54u_load_firmware(struct ieee80211_hw *dev,
2469 @@ -972,14 +970,14 @@ static int p54u_load_firmware(struct ieee80211_hw *dev,
2470 dev_info(&priv->udev->dev, "Loading firmware file %s\n",
2471 p54u_fwlist[i].fw);
2472
2473 - usb_get_dev(udev);
2474 + usb_get_intf(intf);
2475 err = request_firmware_nowait(THIS_MODULE, 1, p54u_fwlist[i].fw,
2476 device, GFP_KERNEL, priv,
2477 p54u_load_firmware_cb);
2478 if (err) {
2479 dev_err(&priv->udev->dev, "(p54usb) cannot load firmware %s "
2480 "(%d)!\n", p54u_fwlist[i].fw, err);
2481 - usb_put_dev(udev);
2482 + usb_put_intf(intf);
2483 }
2484
2485 return err;
2486 @@ -1011,8 +1009,6 @@ static int p54u_probe(struct usb_interface *intf,
2487 skb_queue_head_init(&priv->rx_queue);
2488 init_usb_anchor(&priv->submitted);
2489
2490 - usb_get_dev(udev);
2491 -
2492 /* really lazy and simple way of figuring out if we're a 3887 */
2493 /* TODO: should just stick the identification in the device table */
2494 i = intf->altsetting->desc.bNumEndpoints;
2495 @@ -1053,10 +1049,8 @@ static int p54u_probe(struct usb_interface *intf,
2496 priv->upload_fw = p54u_upload_firmware_net2280;
2497 }
2498 err = p54u_load_firmware(dev, intf);
2499 - if (err) {
2500 - usb_put_dev(udev);
2501 + if (err)
2502 p54_free_common(dev);
2503 - }
2504 return err;
2505 }
2506
2507 @@ -1072,7 +1066,6 @@ static void p54u_disconnect(struct usb_interface *intf)
2508 wait_for_completion(&priv->fw_wait_load);
2509 p54_unregister_common(dev);
2510
2511 - usb_put_dev(interface_to_usbdev(intf));
2512 release_firmware(priv->fw);
2513 p54_free_common(dev);
2514 }
2515 diff --git a/drivers/net/wireless/marvell/mwifiex/fw.h b/drivers/net/wireless/marvell/mwifiex/fw.h
2516 index b73f99dc5a72..1fb76d2f5d3f 100644
2517 --- a/drivers/net/wireless/marvell/mwifiex/fw.h
2518 +++ b/drivers/net/wireless/marvell/mwifiex/fw.h
2519 @@ -1759,9 +1759,10 @@ struct mwifiex_ie_types_wmm_queue_status {
2520 struct ieee_types_vendor_header {
2521 u8 element_id;
2522 u8 len;
2523 - u8 oui[4]; /* 0~2: oui, 3: oui_type */
2524 - u8 oui_subtype;
2525 - u8 version;
2526 + struct {
2527 + u8 oui[3];
2528 + u8 oui_type;
2529 + } __packed oui;
2530 } __packed;
2531
2532 struct ieee_types_wmm_parameter {
2533 @@ -1775,6 +1776,9 @@ struct ieee_types_wmm_parameter {
2534 * Version [1]
2535 */
2536 struct ieee_types_vendor_header vend_hdr;
2537 + u8 oui_subtype;
2538 + u8 version;
2539 +
2540 u8 qos_info_bitmap;
2541 u8 reserved;
2542 struct ieee_types_wmm_ac_parameters ac_params[IEEE80211_NUM_ACS];
2543 @@ -1792,6 +1796,8 @@ struct ieee_types_wmm_info {
2544 * Version [1]
2545 */
2546 struct ieee_types_vendor_header vend_hdr;
2547 + u8 oui_subtype;
2548 + u8 version;
2549
2550 u8 qos_info_bitmap;
2551 } __packed;
2552 diff --git a/drivers/net/wireless/marvell/mwifiex/ie.c b/drivers/net/wireless/marvell/mwifiex/ie.c
2553 index 75cbd609d606..801a2d7b020a 100644
2554 --- a/drivers/net/wireless/marvell/mwifiex/ie.c
2555 +++ b/drivers/net/wireless/marvell/mwifiex/ie.c
2556 @@ -329,6 +329,8 @@ static int mwifiex_uap_parse_tail_ies(struct mwifiex_private *priv,
2557 struct ieee80211_vendor_ie *vendorhdr;
2558 u16 gen_idx = MWIFIEX_AUTO_IDX_MASK, ie_len = 0;
2559 int left_len, parsed_len = 0;
2560 + unsigned int token_len;
2561 + int err = 0;
2562
2563 if (!info->tail || !info->tail_len)
2564 return 0;
2565 @@ -344,6 +346,12 @@ static int mwifiex_uap_parse_tail_ies(struct mwifiex_private *priv,
2566 */
2567 while (left_len > sizeof(struct ieee_types_header)) {
2568 hdr = (void *)(info->tail + parsed_len);
2569 + token_len = hdr->len + sizeof(struct ieee_types_header);
2570 + if (token_len > left_len) {
2571 + err = -EINVAL;
2572 + goto out;
2573 + }
2574 +
2575 switch (hdr->element_id) {
2576 case WLAN_EID_SSID:
2577 case WLAN_EID_SUPP_RATES:
2578 @@ -361,16 +369,19 @@ static int mwifiex_uap_parse_tail_ies(struct mwifiex_private *priv,
2579 if (cfg80211_find_vendor_ie(WLAN_OUI_MICROSOFT,
2580 WLAN_OUI_TYPE_MICROSOFT_WMM,
2581 (const u8 *)hdr,
2582 - hdr->len + sizeof(struct ieee_types_header)))
2583 + token_len))
2584 break;
2585 default:
2586 - memcpy(gen_ie->ie_buffer + ie_len, hdr,
2587 - hdr->len + sizeof(struct ieee_types_header));
2588 - ie_len += hdr->len + sizeof(struct ieee_types_header);
2589 + if (ie_len + token_len > IEEE_MAX_IE_SIZE) {
2590 + err = -EINVAL;
2591 + goto out;
2592 + }
2593 + memcpy(gen_ie->ie_buffer + ie_len, hdr, token_len);
2594 + ie_len += token_len;
2595 break;
2596 }
2597 - left_len -= hdr->len + sizeof(struct ieee_types_header);
2598 - parsed_len += hdr->len + sizeof(struct ieee_types_header);
2599 + left_len -= token_len;
2600 + parsed_len += token_len;
2601 }
2602
2603 /* parse only WPA vendor IE from tail, WMM IE is configured by
2604 @@ -380,15 +391,17 @@ static int mwifiex_uap_parse_tail_ies(struct mwifiex_private *priv,
2605 WLAN_OUI_TYPE_MICROSOFT_WPA,
2606 info->tail, info->tail_len);
2607 if (vendorhdr) {
2608 - memcpy(gen_ie->ie_buffer + ie_len, vendorhdr,
2609 - vendorhdr->len + sizeof(struct ieee_types_header));
2610 - ie_len += vendorhdr->len + sizeof(struct ieee_types_header);
2611 + token_len = vendorhdr->len + sizeof(struct ieee_types_header);
2612 + if (ie_len + token_len > IEEE_MAX_IE_SIZE) {
2613 + err = -EINVAL;
2614 + goto out;
2615 + }
2616 + memcpy(gen_ie->ie_buffer + ie_len, vendorhdr, token_len);
2617 + ie_len += token_len;
2618 }
2619
2620 - if (!ie_len) {
2621 - kfree(gen_ie);
2622 - return 0;
2623 - }
2624 + if (!ie_len)
2625 + goto out;
2626
2627 gen_ie->ie_index = cpu_to_le16(gen_idx);
2628 gen_ie->mgmt_subtype_mask = cpu_to_le16(MGMT_MASK_BEACON |
2629 @@ -398,13 +411,15 @@ static int mwifiex_uap_parse_tail_ies(struct mwifiex_private *priv,
2630
2631 if (mwifiex_update_uap_custom_ie(priv, gen_ie, &gen_idx, NULL, NULL,
2632 NULL, NULL)) {
2633 - kfree(gen_ie);
2634 - return -1;
2635 + err = -EINVAL;
2636 + goto out;
2637 }
2638
2639 priv->gen_idx = gen_idx;
2640 +
2641 + out:
2642 kfree(gen_ie);
2643 - return 0;
2644 + return err;
2645 }
2646
2647 /* This function parses different IEs-head & tail IEs, beacon IEs,
2648 diff --git a/drivers/net/wireless/marvell/mwifiex/scan.c b/drivers/net/wireless/marvell/mwifiex/scan.c
2649 index 8e483b0bc3b1..6dd771ce68a3 100644
2650 --- a/drivers/net/wireless/marvell/mwifiex/scan.c
2651 +++ b/drivers/net/wireless/marvell/mwifiex/scan.c
2652 @@ -1247,6 +1247,8 @@ int mwifiex_update_bss_desc_with_ie(struct mwifiex_adapter *adapter,
2653 }
2654 switch (element_id) {
2655 case WLAN_EID_SSID:
2656 + if (element_len > IEEE80211_MAX_SSID_LEN)
2657 + return -EINVAL;
2658 bss_entry->ssid.ssid_len = element_len;
2659 memcpy(bss_entry->ssid.ssid, (current_ptr + 2),
2660 element_len);
2661 @@ -1256,6 +1258,8 @@ int mwifiex_update_bss_desc_with_ie(struct mwifiex_adapter *adapter,
2662 break;
2663
2664 case WLAN_EID_SUPP_RATES:
2665 + if (element_len > MWIFIEX_SUPPORTED_RATES)
2666 + return -EINVAL;
2667 memcpy(bss_entry->data_rates, current_ptr + 2,
2668 element_len);
2669 memcpy(bss_entry->supported_rates, current_ptr + 2,
2670 @@ -1265,6 +1269,8 @@ int mwifiex_update_bss_desc_with_ie(struct mwifiex_adapter *adapter,
2671 break;
2672
2673 case WLAN_EID_FH_PARAMS:
2674 + if (element_len + 2 < sizeof(*fh_param_set))
2675 + return -EINVAL;
2676 fh_param_set =
2677 (struct ieee_types_fh_param_set *) current_ptr;
2678 memcpy(&bss_entry->phy_param_set.fh_param_set,
2679 @@ -1273,6 +1279,8 @@ int mwifiex_update_bss_desc_with_ie(struct mwifiex_adapter *adapter,
2680 break;
2681
2682 case WLAN_EID_DS_PARAMS:
2683 + if (element_len + 2 < sizeof(*ds_param_set))
2684 + return -EINVAL;
2685 ds_param_set =
2686 (struct ieee_types_ds_param_set *) current_ptr;
2687
2688 @@ -1284,6 +1292,8 @@ int mwifiex_update_bss_desc_with_ie(struct mwifiex_adapter *adapter,
2689 break;
2690
2691 case WLAN_EID_CF_PARAMS:
2692 + if (element_len + 2 < sizeof(*cf_param_set))
2693 + return -EINVAL;
2694 cf_param_set =
2695 (struct ieee_types_cf_param_set *) current_ptr;
2696 memcpy(&bss_entry->ss_param_set.cf_param_set,
2697 @@ -1292,6 +1302,8 @@ int mwifiex_update_bss_desc_with_ie(struct mwifiex_adapter *adapter,
2698 break;
2699
2700 case WLAN_EID_IBSS_PARAMS:
2701 + if (element_len + 2 < sizeof(*ibss_param_set))
2702 + return -EINVAL;
2703 ibss_param_set =
2704 (struct ieee_types_ibss_param_set *)
2705 current_ptr;
2706 @@ -1301,10 +1313,14 @@ int mwifiex_update_bss_desc_with_ie(struct mwifiex_adapter *adapter,
2707 break;
2708
2709 case WLAN_EID_ERP_INFO:
2710 + if (!element_len)
2711 + return -EINVAL;
2712 bss_entry->erp_flags = *(current_ptr + 2);
2713 break;
2714
2715 case WLAN_EID_PWR_CONSTRAINT:
2716 + if (!element_len)
2717 + return -EINVAL;
2718 bss_entry->local_constraint = *(current_ptr + 2);
2719 bss_entry->sensed_11h = true;
2720 break;
2721 @@ -1348,15 +1364,22 @@ int mwifiex_update_bss_desc_with_ie(struct mwifiex_adapter *adapter,
2722 vendor_ie = (struct ieee_types_vendor_specific *)
2723 current_ptr;
2724
2725 - if (!memcmp
2726 - (vendor_ie->vend_hdr.oui, wpa_oui,
2727 - sizeof(wpa_oui))) {
2728 + /* 802.11 requires at least 3-byte OUI. */
2729 + if (element_len < sizeof(vendor_ie->vend_hdr.oui.oui))
2730 + return -EINVAL;
2731 +
2732 + /* Not long enough for a match? Skip it. */
2733 + if (element_len < sizeof(wpa_oui))
2734 + break;
2735 +
2736 + if (!memcmp(&vendor_ie->vend_hdr.oui, wpa_oui,
2737 + sizeof(wpa_oui))) {
2738 bss_entry->bcn_wpa_ie =
2739 (struct ieee_types_vendor_specific *)
2740 current_ptr;
2741 bss_entry->wpa_offset = (u16)
2742 (current_ptr - bss_entry->beacon_buf);
2743 - } else if (!memcmp(vendor_ie->vend_hdr.oui, wmm_oui,
2744 + } else if (!memcmp(&vendor_ie->vend_hdr.oui, wmm_oui,
2745 sizeof(wmm_oui))) {
2746 if (total_ie_len ==
2747 sizeof(struct ieee_types_wmm_parameter) ||
2748 diff --git a/drivers/net/wireless/marvell/mwifiex/sta_ioctl.c b/drivers/net/wireless/marvell/mwifiex/sta_ioctl.c
2749 index b454b5f85503..843d65bba181 100644
2750 --- a/drivers/net/wireless/marvell/mwifiex/sta_ioctl.c
2751 +++ b/drivers/net/wireless/marvell/mwifiex/sta_ioctl.c
2752 @@ -1348,7 +1348,7 @@ mwifiex_set_gen_ie_helper(struct mwifiex_private *priv, u8 *ie_data_ptr,
2753 /* Test to see if it is a WPA IE, if not, then
2754 * it is a gen IE
2755 */
2756 - if (!memcmp(pvendor_ie->oui, wpa_oui,
2757 + if (!memcmp(&pvendor_ie->oui, wpa_oui,
2758 sizeof(wpa_oui))) {
2759 /* IE is a WPA/WPA2 IE so call set_wpa function
2760 */
2761 @@ -1358,7 +1358,7 @@ mwifiex_set_gen_ie_helper(struct mwifiex_private *priv, u8 *ie_data_ptr,
2762 goto next_ie;
2763 }
2764
2765 - if (!memcmp(pvendor_ie->oui, wps_oui,
2766 + if (!memcmp(&pvendor_ie->oui, wps_oui,
2767 sizeof(wps_oui))) {
2768 /* Test to see if it is a WPS IE,
2769 * if so, enable wps session flag
2770 diff --git a/drivers/net/wireless/marvell/mwifiex/wmm.c b/drivers/net/wireless/marvell/mwifiex/wmm.c
2771 index 407b9932ca4d..64916ba15df5 100644
2772 --- a/drivers/net/wireless/marvell/mwifiex/wmm.c
2773 +++ b/drivers/net/wireless/marvell/mwifiex/wmm.c
2774 @@ -240,7 +240,7 @@ mwifiex_wmm_setup_queue_priorities(struct mwifiex_private *priv,
2775 mwifiex_dbg(priv->adapter, INFO,
2776 "info: WMM Parameter IE: version=%d,\t"
2777 "qos_info Parameter Set Count=%d, Reserved=%#x\n",
2778 - wmm_ie->vend_hdr.version, wmm_ie->qos_info_bitmap &
2779 + wmm_ie->version, wmm_ie->qos_info_bitmap &
2780 IEEE80211_WMM_IE_AP_QOSINFO_PARAM_SET_CNT_MASK,
2781 wmm_ie->reserved);
2782
2783 diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c
2784 index 4de740da547b..763c7628356b 100644
2785 --- a/drivers/scsi/qedi/qedi_main.c
2786 +++ b/drivers/scsi/qedi/qedi_main.c
2787 @@ -955,6 +955,9 @@ static int qedi_find_boot_info(struct qedi_ctx *qedi,
2788 if (!iscsi_is_session_online(cls_sess))
2789 continue;
2790
2791 + if (!sess->targetname)
2792 + continue;
2793 +
2794 if (pri_ctrl_flags) {
2795 if (!strcmp(pri_tgt->iscsi_name, sess->targetname) &&
2796 !strcmp(pri_tgt->ip_addr, ep_ip_addr)) {
2797 diff --git a/drivers/soc/bcm/brcmstb/biuctrl.c b/drivers/soc/bcm/brcmstb/biuctrl.c
2798 index 6d89ebf13b8a..20b63bee5b09 100644
2799 --- a/drivers/soc/bcm/brcmstb/biuctrl.c
2800 +++ b/drivers/soc/bcm/brcmstb/biuctrl.c
2801 @@ -56,7 +56,7 @@ static inline void cbc_writel(u32 val, int reg)
2802 if (offset == -1)
2803 return;
2804
2805 - writel_relaxed(val, cpubiuctrl_base + offset);
2806 + writel(val, cpubiuctrl_base + offset);
2807 }
2808
2809 enum cpubiuctrl_regs {
2810 @@ -246,7 +246,9 @@ static int __init brcmstb_biuctrl_init(void)
2811 if (!np)
2812 return 0;
2813
2814 - setup_hifcpubiuctrl_regs(np);
2815 + ret = setup_hifcpubiuctrl_regs(np);
2816 + if (ret)
2817 + return ret;
2818
2819 ret = mcp_write_pairing_set();
2820 if (ret) {
2821 diff --git a/drivers/soundwire/intel.c b/drivers/soundwire/intel.c
2822 index 0a8990e758f9..a6e2581ada70 100644
2823 --- a/drivers/soundwire/intel.c
2824 +++ b/drivers/soundwire/intel.c
2825 @@ -651,8 +651,8 @@ static int intel_create_dai(struct sdw_cdns *cdns,
2826 return -ENOMEM;
2827 }
2828
2829 - dais[i].playback.channels_min = 1;
2830 - dais[i].playback.channels_max = max_ch;
2831 + dais[i].capture.channels_min = 1;
2832 + dais[i].capture.channels_max = max_ch;
2833 dais[i].capture.rates = SNDRV_PCM_RATE_48000;
2834 dais[i].capture.formats = SNDRV_PCM_FMTBIT_S16_LE;
2835 }
2836 diff --git a/drivers/soundwire/stream.c b/drivers/soundwire/stream.c
2837 index e5c7e1ef6318..907a548645b7 100644
2838 --- a/drivers/soundwire/stream.c
2839 +++ b/drivers/soundwire/stream.c
2840 @@ -1236,9 +1236,7 @@ struct sdw_dpn_prop *sdw_get_slave_dpn_prop(struct sdw_slave *slave,
2841 }
2842
2843 for (i = 0; i < num_ports; i++) {
2844 - dpn_prop = &dpn_prop[i];
2845 -
2846 - if (dpn_prop->num == port_num)
2847 + if (dpn_prop[i].num == port_num)
2848 return &dpn_prop[i];
2849 }
2850
2851 diff --git a/drivers/staging/comedi/drivers/amplc_pci230.c b/drivers/staging/comedi/drivers/amplc_pci230.c
2852 index 08ffe26c5d43..0f16e85911f2 100644
2853 --- a/drivers/staging/comedi/drivers/amplc_pci230.c
2854 +++ b/drivers/staging/comedi/drivers/amplc_pci230.c
2855 @@ -2330,7 +2330,8 @@ static irqreturn_t pci230_interrupt(int irq, void *d)
2856 devpriv->intr_running = false;
2857 spin_unlock_irqrestore(&devpriv->isr_spinlock, irqflags);
2858
2859 - comedi_handle_events(dev, s_ao);
2860 + if (s_ao)
2861 + comedi_handle_events(dev, s_ao);
2862 comedi_handle_events(dev, s_ai);
2863
2864 return IRQ_HANDLED;
2865 diff --git a/drivers/staging/comedi/drivers/dt282x.c b/drivers/staging/comedi/drivers/dt282x.c
2866 index 3be927f1d3a9..e15e33ed94ae 100644
2867 --- a/drivers/staging/comedi/drivers/dt282x.c
2868 +++ b/drivers/staging/comedi/drivers/dt282x.c
2869 @@ -557,7 +557,8 @@ static irqreturn_t dt282x_interrupt(int irq, void *d)
2870 }
2871 #endif
2872 comedi_handle_events(dev, s);
2873 - comedi_handle_events(dev, s_ao);
2874 + if (s_ao)
2875 + comedi_handle_events(dev, s_ao);
2876
2877 return IRQ_RETVAL(handled);
2878 }
2879 diff --git a/drivers/staging/fsl-dpaa2/ethsw/ethsw.c b/drivers/staging/fsl-dpaa2/ethsw/ethsw.c
2880 index ecdd3d84f956..8549e809363e 100644
2881 --- a/drivers/staging/fsl-dpaa2/ethsw/ethsw.c
2882 +++ b/drivers/staging/fsl-dpaa2/ethsw/ethsw.c
2883 @@ -1073,6 +1073,7 @@ static int port_switchdev_event(struct notifier_block *unused,
2884 dev_hold(dev);
2885 break;
2886 default:
2887 + kfree(switchdev_work);
2888 return NOTIFY_DONE;
2889 }
2890
2891 diff --git a/drivers/staging/iio/cdc/ad7150.c b/drivers/staging/iio/cdc/ad7150.c
2892 index d16084d7068c..a354ce6b2b7b 100644
2893 --- a/drivers/staging/iio/cdc/ad7150.c
2894 +++ b/drivers/staging/iio/cdc/ad7150.c
2895 @@ -6,6 +6,7 @@
2896 * Licensed under the GPL-2 or later.
2897 */
2898
2899 +#include <linux/bitfield.h>
2900 #include <linux/interrupt.h>
2901 #include <linux/device.h>
2902 #include <linux/kernel.h>
2903 @@ -130,7 +131,7 @@ static int ad7150_read_event_config(struct iio_dev *indio_dev,
2904 {
2905 int ret;
2906 u8 threshtype;
2907 - bool adaptive;
2908 + bool thrfixed;
2909 struct ad7150_chip_info *chip = iio_priv(indio_dev);
2910
2911 ret = i2c_smbus_read_byte_data(chip->client, AD7150_CFG);
2912 @@ -138,21 +139,23 @@ static int ad7150_read_event_config(struct iio_dev *indio_dev,
2913 return ret;
2914
2915 threshtype = (ret >> 5) & 0x03;
2916 - adaptive = !!(ret & 0x80);
2917 +
2918 + /* check if threshold mode is fixed or adaptive */
2919 + thrfixed = FIELD_GET(AD7150_CFG_FIX, ret);
2920
2921 switch (type) {
2922 case IIO_EV_TYPE_MAG_ADAPTIVE:
2923 if (dir == IIO_EV_DIR_RISING)
2924 - return adaptive && (threshtype == 0x1);
2925 - return adaptive && (threshtype == 0x0);
2926 + return !thrfixed && (threshtype == 0x1);
2927 + return !thrfixed && (threshtype == 0x0);
2928 case IIO_EV_TYPE_THRESH_ADAPTIVE:
2929 if (dir == IIO_EV_DIR_RISING)
2930 - return adaptive && (threshtype == 0x3);
2931 - return adaptive && (threshtype == 0x2);
2932 + return !thrfixed && (threshtype == 0x3);
2933 + return !thrfixed && (threshtype == 0x2);
2934 case IIO_EV_TYPE_THRESH:
2935 if (dir == IIO_EV_DIR_RISING)
2936 - return !adaptive && (threshtype == 0x1);
2937 - return !adaptive && (threshtype == 0x0);
2938 + return thrfixed && (threshtype == 0x1);
2939 + return thrfixed && (threshtype == 0x0);
2940 default:
2941 break;
2942 }
2943 diff --git a/drivers/staging/rtl8712/rtl871x_ioctl_linux.c b/drivers/staging/rtl8712/rtl871x_ioctl_linux.c
2944 index c3ff7c3e6681..2f490a4bf60a 100644
2945 --- a/drivers/staging/rtl8712/rtl871x_ioctl_linux.c
2946 +++ b/drivers/staging/rtl8712/rtl871x_ioctl_linux.c
2947 @@ -141,10 +141,91 @@ static inline void handle_group_key(struct ieee_param *param,
2948 }
2949 }
2950
2951 -static noinline_for_stack char *translate_scan(struct _adapter *padapter,
2952 - struct iw_request_info *info,
2953 - struct wlan_network *pnetwork,
2954 - char *start, char *stop)
2955 +static noinline_for_stack char *translate_scan_wpa(struct iw_request_info *info,
2956 + struct wlan_network *pnetwork,
2957 + struct iw_event *iwe,
2958 + char *start, char *stop)
2959 +{
2960 + /* parsing WPA/WPA2 IE */
2961 + u8 buf[MAX_WPA_IE_LEN];
2962 + u8 wpa_ie[255], rsn_ie[255];
2963 + u16 wpa_len = 0, rsn_len = 0;
2964 + int n, i;
2965 +
2966 + r8712_get_sec_ie(pnetwork->network.IEs,
2967 + pnetwork->network.IELength, rsn_ie, &rsn_len,
2968 + wpa_ie, &wpa_len);
2969 + if (wpa_len > 0) {
2970 + memset(buf, 0, MAX_WPA_IE_LEN);
2971 + n = sprintf(buf, "wpa_ie=");
2972 + for (i = 0; i < wpa_len; i++) {
2973 + n += snprintf(buf + n, MAX_WPA_IE_LEN - n,
2974 + "%02x", wpa_ie[i]);
2975 + if (n >= MAX_WPA_IE_LEN)
2976 + break;
2977 + }
2978 + memset(iwe, 0, sizeof(*iwe));
2979 + iwe->cmd = IWEVCUSTOM;
2980 + iwe->u.data.length = (u16)strlen(buf);
2981 + start = iwe_stream_add_point(info, start, stop,
2982 + iwe, buf);
2983 + memset(iwe, 0, sizeof(*iwe));
2984 + iwe->cmd = IWEVGENIE;
2985 + iwe->u.data.length = (u16)wpa_len;
2986 + start = iwe_stream_add_point(info, start, stop,
2987 + iwe, wpa_ie);
2988 + }
2989 + if (rsn_len > 0) {
2990 + memset(buf, 0, MAX_WPA_IE_LEN);
2991 + n = sprintf(buf, "rsn_ie=");
2992 + for (i = 0; i < rsn_len; i++) {
2993 + n += snprintf(buf + n, MAX_WPA_IE_LEN - n,
2994 + "%02x", rsn_ie[i]);
2995 + if (n >= MAX_WPA_IE_LEN)
2996 + break;
2997 + }
2998 + memset(iwe, 0, sizeof(*iwe));
2999 + iwe->cmd = IWEVCUSTOM;
3000 + iwe->u.data.length = strlen(buf);
3001 + start = iwe_stream_add_point(info, start, stop,
3002 + iwe, buf);
3003 + memset(iwe, 0, sizeof(*iwe));
3004 + iwe->cmd = IWEVGENIE;
3005 + iwe->u.data.length = rsn_len;
3006 + start = iwe_stream_add_point(info, start, stop, iwe,
3007 + rsn_ie);
3008 + }
3009 +
3010 + return start;
3011 +}
3012 +
3013 +static noinline_for_stack char *translate_scan_wps(struct iw_request_info *info,
3014 + struct wlan_network *pnetwork,
3015 + struct iw_event *iwe,
3016 + char *start, char *stop)
3017 +{
3018 + /* parsing WPS IE */
3019 + u8 wps_ie[512];
3020 + uint wps_ielen;
3021 +
3022 + if (r8712_get_wps_ie(pnetwork->network.IEs,
3023 + pnetwork->network.IELength,
3024 + wps_ie, &wps_ielen)) {
3025 + if (wps_ielen > 2) {
3026 + iwe->cmd = IWEVGENIE;
3027 + iwe->u.data.length = (u16)wps_ielen;
3028 + start = iwe_stream_add_point(info, start, stop,
3029 + iwe, wps_ie);
3030 + }
3031 + }
3032 +
3033 + return start;
3034 +}
3035 +
3036 +static char *translate_scan(struct _adapter *padapter,
3037 + struct iw_request_info *info,
3038 + struct wlan_network *pnetwork,
3039 + char *start, char *stop)
3040 {
3041 struct iw_event iwe;
3042 struct ieee80211_ht_cap *pht_capie;
3043 @@ -257,73 +338,11 @@ static noinline_for_stack char *translate_scan(struct _adapter *padapter,
3044 /* Check if we added any event */
3045 if ((current_val - start) > iwe_stream_lcp_len(info))
3046 start = current_val;
3047 - /* parsing WPA/WPA2 IE */
3048 - {
3049 - u8 buf[MAX_WPA_IE_LEN];
3050 - u8 wpa_ie[255], rsn_ie[255];
3051 - u16 wpa_len = 0, rsn_len = 0;
3052 - int n;
3053 -
3054 - r8712_get_sec_ie(pnetwork->network.IEs,
3055 - pnetwork->network.IELength, rsn_ie, &rsn_len,
3056 - wpa_ie, &wpa_len);
3057 - if (wpa_len > 0) {
3058 - memset(buf, 0, MAX_WPA_IE_LEN);
3059 - n = sprintf(buf, "wpa_ie=");
3060 - for (i = 0; i < wpa_len; i++) {
3061 - n += snprintf(buf + n, MAX_WPA_IE_LEN - n,
3062 - "%02x", wpa_ie[i]);
3063 - if (n >= MAX_WPA_IE_LEN)
3064 - break;
3065 - }
3066 - memset(&iwe, 0, sizeof(iwe));
3067 - iwe.cmd = IWEVCUSTOM;
3068 - iwe.u.data.length = (u16)strlen(buf);
3069 - start = iwe_stream_add_point(info, start, stop,
3070 - &iwe, buf);
3071 - memset(&iwe, 0, sizeof(iwe));
3072 - iwe.cmd = IWEVGENIE;
3073 - iwe.u.data.length = (u16)wpa_len;
3074 - start = iwe_stream_add_point(info, start, stop,
3075 - &iwe, wpa_ie);
3076 - }
3077 - if (rsn_len > 0) {
3078 - memset(buf, 0, MAX_WPA_IE_LEN);
3079 - n = sprintf(buf, "rsn_ie=");
3080 - for (i = 0; i < rsn_len; i++) {
3081 - n += snprintf(buf + n, MAX_WPA_IE_LEN - n,
3082 - "%02x", rsn_ie[i]);
3083 - if (n >= MAX_WPA_IE_LEN)
3084 - break;
3085 - }
3086 - memset(&iwe, 0, sizeof(iwe));
3087 - iwe.cmd = IWEVCUSTOM;
3088 - iwe.u.data.length = strlen(buf);
3089 - start = iwe_stream_add_point(info, start, stop,
3090 - &iwe, buf);
3091 - memset(&iwe, 0, sizeof(iwe));
3092 - iwe.cmd = IWEVGENIE;
3093 - iwe.u.data.length = rsn_len;
3094 - start = iwe_stream_add_point(info, start, stop, &iwe,
3095 - rsn_ie);
3096 - }
3097 - }
3098
3099 - { /* parsing WPS IE */
3100 - u8 wps_ie[512];
3101 - uint wps_ielen;
3102 + start = translate_scan_wpa(info, pnetwork, &iwe, start, stop);
3103 +
3104 + start = translate_scan_wps(info, pnetwork, &iwe, start, stop);
3105
3106 - if (r8712_get_wps_ie(pnetwork->network.IEs,
3107 - pnetwork->network.IELength,
3108 - wps_ie, &wps_ielen)) {
3109 - if (wps_ielen > 2) {
3110 - iwe.cmd = IWEVGENIE;
3111 - iwe.u.data.length = (u16)wps_ielen;
3112 - start = iwe_stream_add_point(info, start, stop,
3113 - &iwe, wps_ie);
3114 - }
3115 - }
3116 - }
3117 /* Add quality statistics */
3118 iwe.cmd = IWEVQUAL;
3119 rssi = r8712_signal_scale_mapping(pnetwork->network.Rssi);
3120 diff --git a/drivers/staging/vc04_services/bcm2835-camera/bcm2835-camera.c b/drivers/staging/vc04_services/bcm2835-camera/bcm2835-camera.c
3121 index c04bdf070c87..455082867246 100644
3122 --- a/drivers/staging/vc04_services/bcm2835-camera/bcm2835-camera.c
3123 +++ b/drivers/staging/vc04_services/bcm2835-camera/bcm2835-camera.c
3124 @@ -342,16 +342,13 @@ static void buffer_cb(struct vchiq_mmal_instance *instance,
3125 return;
3126 } else if (length == 0) {
3127 /* stream ended */
3128 - if (buf) {
3129 - /* this should only ever happen if the port is
3130 - * disabled and there are buffers still queued
3131 + if (dev->capture.frame_count) {
3132 + /* empty buffer whilst capturing - expected to be an
3133 + * EOS, so grab another frame
3134 */
3135 - vb2_buffer_done(&buf->vb.vb2_buf, VB2_BUF_STATE_ERROR);
3136 - pr_debug("Empty buffer");
3137 - } else if (dev->capture.frame_count) {
3138 - /* grab another frame */
3139 if (is_capturing(dev)) {
3140 - pr_debug("Grab another frame");
3141 + v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev,
3142 + "Grab another frame");
3143 vchiq_mmal_port_parameter_set(
3144 instance,
3145 dev->capture.camera_port,
3146 @@ -359,8 +356,14 @@ static void buffer_cb(struct vchiq_mmal_instance *instance,
3147 &dev->capture.frame_count,
3148 sizeof(dev->capture.frame_count));
3149 }
3150 + if (vchiq_mmal_submit_buffer(instance, port, buf))
3151 + v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev,
3152 + "Failed to return EOS buffer");
3153 } else {
3154 - /* signal frame completion */
3155 + /* stopping streaming.
3156 + * return buffer, and signal frame completion
3157 + */
3158 + vb2_buffer_done(&buf->vb.vb2_buf, VB2_BUF_STATE_ERROR);
3159 complete(&dev->capture.frame_cmplt);
3160 }
3161 } else {
3162 @@ -582,6 +585,7 @@ static void stop_streaming(struct vb2_queue *vq)
3163 int ret;
3164 unsigned long timeout;
3165 struct bm2835_mmal_dev *dev = vb2_get_drv_priv(vq);
3166 + struct vchiq_mmal_port *port = dev->capture.port;
3167
3168 v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev, "%s: dev:%p\n",
3169 __func__, dev);
3170 @@ -605,12 +609,6 @@ static void stop_streaming(struct vb2_queue *vq)
3171 &dev->capture.frame_count,
3172 sizeof(dev->capture.frame_count));
3173
3174 - /* wait for last frame to complete */
3175 - timeout = wait_for_completion_timeout(&dev->capture.frame_cmplt, HZ);
3176 - if (timeout == 0)
3177 - v4l2_err(&dev->v4l2_dev,
3178 - "timed out waiting for frame completion\n");
3179 -
3180 v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev,
3181 "disabling connection\n");
3182
3183 @@ -625,6 +623,21 @@ static void stop_streaming(struct vb2_queue *vq)
3184 ret);
3185 }
3186
3187 + /* wait for all buffers to be returned */
3188 + while (atomic_read(&port->buffers_with_vpu)) {
3189 + v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev,
3190 + "%s: Waiting for buffers to be returned - %d outstanding\n",
3191 + __func__, atomic_read(&port->buffers_with_vpu));
3192 + timeout = wait_for_completion_timeout(&dev->capture.frame_cmplt,
3193 + HZ);
3194 + if (timeout == 0) {
3195 + v4l2_err(&dev->v4l2_dev, "%s: Timeout waiting for buffers to be returned - %d outstanding\n",
3196 + __func__,
3197 + atomic_read(&port->buffers_with_vpu));
3198 + break;
3199 + }
3200 + }
3201 +
3202 if (disable_camera(dev) < 0)
3203 v4l2_err(&dev->v4l2_dev, "Failed to disable camera\n");
3204 }
3205 diff --git a/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.c b/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.c
3206 index 51e5b04ff0f5..daa2b9656552 100644
3207 --- a/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.c
3208 +++ b/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.c
3209 @@ -162,7 +162,8 @@ struct vchiq_mmal_instance {
3210 void *bulk_scratch;
3211
3212 struct idr context_map;
3213 - spinlock_t context_map_lock;
3214 + /* protect accesses to context_map */
3215 + struct mutex context_map_lock;
3216
3217 /* component to use next */
3218 int component_idx;
3219 @@ -185,10 +186,10 @@ get_msg_context(struct vchiq_mmal_instance *instance)
3220 * that when we service the VCHI reply, we can look up what
3221 * message is being replied to.
3222 */
3223 - spin_lock(&instance->context_map_lock);
3224 + mutex_lock(&instance->context_map_lock);
3225 handle = idr_alloc(&instance->context_map, msg_context,
3226 0, 0, GFP_KERNEL);
3227 - spin_unlock(&instance->context_map_lock);
3228 + mutex_unlock(&instance->context_map_lock);
3229
3230 if (handle < 0) {
3231 kfree(msg_context);
3232 @@ -212,9 +213,9 @@ release_msg_context(struct mmal_msg_context *msg_context)
3233 {
3234 struct vchiq_mmal_instance *instance = msg_context->instance;
3235
3236 - spin_lock(&instance->context_map_lock);
3237 + mutex_lock(&instance->context_map_lock);
3238 idr_remove(&instance->context_map, msg_context->handle);
3239 - spin_unlock(&instance->context_map_lock);
3240 + mutex_unlock(&instance->context_map_lock);
3241 kfree(msg_context);
3242 }
3243
3244 @@ -240,6 +241,8 @@ static void buffer_work_cb(struct work_struct *work)
3245 struct mmal_msg_context *msg_context =
3246 container_of(work, struct mmal_msg_context, u.bulk.work);
3247
3248 + atomic_dec(&msg_context->u.bulk.port->buffers_with_vpu);
3249 +
3250 msg_context->u.bulk.port->buffer_cb(msg_context->u.bulk.instance,
3251 msg_context->u.bulk.port,
3252 msg_context->u.bulk.status,
3253 @@ -288,8 +291,6 @@ static int bulk_receive(struct vchiq_mmal_instance *instance,
3254
3255 /* store length */
3256 msg_context->u.bulk.buffer_used = rd_len;
3257 - msg_context->u.bulk.mmal_flags =
3258 - msg->u.buffer_from_host.buffer_header.flags;
3259 msg_context->u.bulk.dts = msg->u.buffer_from_host.buffer_header.dts;
3260 msg_context->u.bulk.pts = msg->u.buffer_from_host.buffer_header.pts;
3261
3262 @@ -380,6 +381,8 @@ buffer_from_host(struct vchiq_mmal_instance *instance,
3263 /* initialise work structure ready to schedule callback */
3264 INIT_WORK(&msg_context->u.bulk.work, buffer_work_cb);
3265
3266 + atomic_inc(&port->buffers_with_vpu);
3267 +
3268 /* prep the buffer from host message */
3269 memset(&m, 0xbc, sizeof(m)); /* just to make debug clearer */
3270
3271 @@ -448,6 +451,9 @@ static void buffer_to_host_cb(struct vchiq_mmal_instance *instance,
3272 return;
3273 }
3274
3275 + msg_context->u.bulk.mmal_flags =
3276 + msg->u.buffer_from_host.buffer_header.flags;
3277 +
3278 if (msg->h.status != MMAL_MSG_STATUS_SUCCESS) {
3279 /* message reception had an error */
3280 pr_warn("error %d in reply\n", msg->h.status);
3281 @@ -1324,16 +1330,6 @@ static int port_enable(struct vchiq_mmal_instance *instance,
3282 if (port->enabled)
3283 return 0;
3284
3285 - /* ensure there are enough buffers queued to cover the buffer headers */
3286 - if (port->buffer_cb) {
3287 - hdr_count = 0;
3288 - list_for_each(buf_head, &port->buffers) {
3289 - hdr_count++;
3290 - }
3291 - if (hdr_count < port->current_buffer.num)
3292 - return -ENOSPC;
3293 - }
3294 -
3295 ret = port_action_port(instance, port,
3296 MMAL_MSG_PORT_ACTION_TYPE_ENABLE);
3297 if (ret)
3298 @@ -1854,7 +1850,7 @@ int vchiq_mmal_init(struct vchiq_mmal_instance **out_instance)
3299
3300 instance->bulk_scratch = vmalloc(PAGE_SIZE);
3301
3302 - spin_lock_init(&instance->context_map_lock);
3303 + mutex_init(&instance->context_map_lock);
3304 idr_init_base(&instance->context_map, 1);
3305
3306 params.callback_param = instance;
3307 diff --git a/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.h b/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.h
3308 index 22b839ecd5f0..b0ee1716525b 100644
3309 --- a/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.h
3310 +++ b/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.h
3311 @@ -71,6 +71,9 @@ struct vchiq_mmal_port {
3312 struct list_head buffers;
3313 /* lock to serialise adding and removing buffers from list */
3314 spinlock_t slock;
3315 +
3316 + /* Count of buffers the VPU has yet to return */
3317 + atomic_t buffers_with_vpu;
3318 /* callback on buffer completion */
3319 vchiq_mmal_buffer_cb buffer_cb;
3320 /* callback context */
3321 diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
3322 index 3f779d25ec0c..e26d87b6ffc5 100644
3323 --- a/drivers/tty/serial/8250/8250_port.c
3324 +++ b/drivers/tty/serial/8250/8250_port.c
3325 @@ -1869,8 +1869,7 @@ int serial8250_handle_irq(struct uart_port *port, unsigned int iir)
3326
3327 status = serial_port_in(port, UART_LSR);
3328
3329 - if (status & (UART_LSR_DR | UART_LSR_BI) &&
3330 - iir & UART_IIR_RDI) {
3331 + if (status & (UART_LSR_DR | UART_LSR_BI)) {
3332 if (!up->dma || handle_rx_dma(up, iir))
3333 status = serial8250_rx_chars(up, status);
3334 }
3335 diff --git a/drivers/usb/dwc2/core.c b/drivers/usb/dwc2/core.c
3336 index 55d5ae2a7ec7..51d83f77dc04 100644
3337 --- a/drivers/usb/dwc2/core.c
3338 +++ b/drivers/usb/dwc2/core.c
3339 @@ -531,7 +531,7 @@ int dwc2_core_reset(struct dwc2_hsotg *hsotg, bool skip_wait)
3340 }
3341
3342 /* Wait for AHB master IDLE state */
3343 - if (dwc2_hsotg_wait_bit_set(hsotg, GRSTCTL, GRSTCTL_AHBIDLE, 50)) {
3344 + if (dwc2_hsotg_wait_bit_set(hsotg, GRSTCTL, GRSTCTL_AHBIDLE, 10000)) {
3345 dev_warn(hsotg->dev, "%s: HANG! AHB Idle timeout GRSTCTL GRSTCTL_AHBIDLE\n",
3346 __func__);
3347 return -EBUSY;
3348 diff --git a/drivers/usb/gadget/function/u_ether.c b/drivers/usb/gadget/function/u_ether.c
3349 index 0f026d445e31..0ef00315ec73 100644
3350 --- a/drivers/usb/gadget/function/u_ether.c
3351 +++ b/drivers/usb/gadget/function/u_ether.c
3352 @@ -186,11 +186,12 @@ rx_submit(struct eth_dev *dev, struct usb_request *req, gfp_t gfp_flags)
3353 out = dev->port_usb->out_ep;
3354 else
3355 out = NULL;
3356 - spin_unlock_irqrestore(&dev->lock, flags);
3357
3358 if (!out)
3359 + {
3360 + spin_unlock_irqrestore(&dev->lock, flags);
3361 return -ENOTCONN;
3362 -
3363 + }
3364
3365 /* Padding up to RX_EXTRA handles minor disagreements with host.
3366 * Normally we use the USB "terminate on short read" convention;
3367 @@ -214,6 +215,7 @@ rx_submit(struct eth_dev *dev, struct usb_request *req, gfp_t gfp_flags)
3368
3369 if (dev->port_usb->is_fixed)
3370 size = max_t(size_t, size, dev->port_usb->fixed_out_len);
3371 + spin_unlock_irqrestore(&dev->lock, flags);
3372
3373 skb = __netdev_alloc_skb(dev->net, size + NET_IP_ALIGN, gfp_flags);
3374 if (skb == NULL) {
3375 diff --git a/drivers/usb/renesas_usbhs/fifo.c b/drivers/usb/renesas_usbhs/fifo.c
3376 index 39fa2fc1b8b7..6036cbae8c78 100644
3377 --- a/drivers/usb/renesas_usbhs/fifo.c
3378 +++ b/drivers/usb/renesas_usbhs/fifo.c
3379 @@ -802,9 +802,8 @@ static int __usbhsf_dma_map_ctrl(struct usbhs_pkt *pkt, int map)
3380 }
3381
3382 static void usbhsf_dma_complete(void *arg);
3383 -static void xfer_work(struct work_struct *work)
3384 +static void usbhsf_dma_xfer_preparing(struct usbhs_pkt *pkt)
3385 {
3386 - struct usbhs_pkt *pkt = container_of(work, struct usbhs_pkt, work);
3387 struct usbhs_pipe *pipe = pkt->pipe;
3388 struct usbhs_fifo *fifo;
3389 struct usbhs_priv *priv = usbhs_pipe_to_priv(pipe);
3390 @@ -812,12 +811,10 @@ static void xfer_work(struct work_struct *work)
3391 struct dma_chan *chan;
3392 struct device *dev = usbhs_priv_to_dev(priv);
3393 enum dma_transfer_direction dir;
3394 - unsigned long flags;
3395
3396 - usbhs_lock(priv, flags);
3397 fifo = usbhs_pipe_to_fifo(pipe);
3398 if (!fifo)
3399 - goto xfer_work_end;
3400 + return;
3401
3402 chan = usbhsf_dma_chan_get(fifo, pkt);
3403 dir = usbhs_pipe_is_dir_in(pipe) ? DMA_DEV_TO_MEM : DMA_MEM_TO_DEV;
3404 @@ -826,7 +823,7 @@ static void xfer_work(struct work_struct *work)
3405 pkt->trans, dir,
3406 DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
3407 if (!desc)
3408 - goto xfer_work_end;
3409 + return;
3410
3411 desc->callback = usbhsf_dma_complete;
3412 desc->callback_param = pipe;
3413 @@ -834,7 +831,7 @@ static void xfer_work(struct work_struct *work)
3414 pkt->cookie = dmaengine_submit(desc);
3415 if (pkt->cookie < 0) {
3416 dev_err(dev, "Failed to submit dma descriptor\n");
3417 - goto xfer_work_end;
3418 + return;
3419 }
3420
3421 dev_dbg(dev, " %s %d (%d/ %d)\n",
3422 @@ -845,8 +842,17 @@ static void xfer_work(struct work_struct *work)
3423 dma_async_issue_pending(chan);
3424 usbhsf_dma_start(pipe, fifo);
3425 usbhs_pipe_enable(pipe);
3426 +}
3427 +
3428 +static void xfer_work(struct work_struct *work)
3429 +{
3430 + struct usbhs_pkt *pkt = container_of(work, struct usbhs_pkt, work);
3431 + struct usbhs_pipe *pipe = pkt->pipe;
3432 + struct usbhs_priv *priv = usbhs_pipe_to_priv(pipe);
3433 + unsigned long flags;
3434
3435 -xfer_work_end:
3436 + usbhs_lock(priv, flags);
3437 + usbhsf_dma_xfer_preparing(pkt);
3438 usbhs_unlock(priv, flags);
3439 }
3440
3441 @@ -899,8 +905,13 @@ static int usbhsf_dma_prepare_push(struct usbhs_pkt *pkt, int *is_done)
3442 pkt->trans = len;
3443
3444 usbhsf_tx_irq_ctrl(pipe, 0);
3445 - INIT_WORK(&pkt->work, xfer_work);
3446 - schedule_work(&pkt->work);
3447 + /* FIXME: Workaound for usb dmac that driver can be used in atomic */
3448 + if (usbhs_get_dparam(priv, has_usb_dmac)) {
3449 + usbhsf_dma_xfer_preparing(pkt);
3450 + } else {
3451 + INIT_WORK(&pkt->work, xfer_work);
3452 + schedule_work(&pkt->work);
3453 + }
3454
3455 return 0;
3456
3457 @@ -1006,8 +1017,7 @@ static int usbhsf_dma_prepare_pop_with_usb_dmac(struct usbhs_pkt *pkt,
3458
3459 pkt->trans = pkt->length;
3460
3461 - INIT_WORK(&pkt->work, xfer_work);
3462 - schedule_work(&pkt->work);
3463 + usbhsf_dma_xfer_preparing(pkt);
3464
3465 return 0;
3466
3467 diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
3468 index c0dc4bc776db..e18735e00463 100644
3469 --- a/drivers/usb/serial/ftdi_sio.c
3470 +++ b/drivers/usb/serial/ftdi_sio.c
3471 @@ -1019,6 +1019,7 @@ static const struct usb_device_id id_table_combined[] = {
3472 { USB_DEVICE(AIRBUS_DS_VID, AIRBUS_DS_P8GR) },
3473 /* EZPrototypes devices */
3474 { USB_DEVICE(EZPROTOTYPES_VID, HJELMSLUND_USB485_ISO_PID) },
3475 + { USB_DEVICE_INTERFACE_NUMBER(UNJO_VID, UNJO_ISODEBUG_V1_PID, 1) },
3476 { } /* Terminating entry */
3477 };
3478
3479 diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h
3480 index 5755f0df0025..f12d806220b4 100644
3481 --- a/drivers/usb/serial/ftdi_sio_ids.h
3482 +++ b/drivers/usb/serial/ftdi_sio_ids.h
3483 @@ -1543,3 +1543,9 @@
3484 #define CHETCO_SEASMART_DISPLAY_PID 0xA5AD /* SeaSmart NMEA2000 Display */
3485 #define CHETCO_SEASMART_LITE_PID 0xA5AE /* SeaSmart Lite USB Adapter */
3486 #define CHETCO_SEASMART_ANALOG_PID 0xA5AF /* SeaSmart Analog Adapter */
3487 +
3488 +/*
3489 + * Unjo AB
3490 + */
3491 +#define UNJO_VID 0x22B7
3492 +#define UNJO_ISODEBUG_V1_PID 0x150D
3493 diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
3494 index ea891195bbdf..e0a4749ba565 100644
3495 --- a/drivers/usb/serial/option.c
3496 +++ b/drivers/usb/serial/option.c
3497 @@ -1343,6 +1343,7 @@ static const struct usb_device_id option_ids[] = {
3498 .driver_info = RSVD(4) },
3499 { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0414, 0xff, 0xff, 0xff) },
3500 { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0417, 0xff, 0xff, 0xff) },
3501 + { USB_DEVICE_INTERFACE_CLASS(ZTE_VENDOR_ID, 0x0601, 0xff) }, /* GosunCn ZTE WeLink ME3630 (RNDIS mode) */
3502 { USB_DEVICE_INTERFACE_CLASS(ZTE_VENDOR_ID, 0x0602, 0xff) }, /* GosunCn ZTE WeLink ME3630 (MBIM mode) */
3503 { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1008, 0xff, 0xff, 0xff),
3504 .driver_info = RSVD(4) },
3505 diff --git a/drivers/usb/typec/tps6598x.c b/drivers/usb/typec/tps6598x.c
3506 index eb8046f87a54..987b8fcfb2aa 100644
3507 --- a/drivers/usb/typec/tps6598x.c
3508 +++ b/drivers/usb/typec/tps6598x.c
3509 @@ -39,7 +39,7 @@
3510 #define TPS_STATUS_VCONN(s) (!!((s) & BIT(7)))
3511
3512 /* TPS_REG_SYSTEM_CONF bits */
3513 -#define TPS_SYSCONF_PORTINFO(c) ((c) & 3)
3514 +#define TPS_SYSCONF_PORTINFO(c) ((c) & 7)
3515
3516 enum {
3517 TPS_PORTINFO_SINK,
3518 @@ -111,7 +111,7 @@ tps6598x_block_read(struct tps6598x *tps, u8 reg, void *val, size_t len)
3519 }
3520
3521 static int tps6598x_block_write(struct tps6598x *tps, u8 reg,
3522 - void *val, size_t len)
3523 + const void *val, size_t len)
3524 {
3525 u8 data[TPS_MAX_LEN + 1];
3526
3527 @@ -157,7 +157,7 @@ static inline int tps6598x_write64(struct tps6598x *tps, u8 reg, u64 val)
3528 static inline int
3529 tps6598x_write_4cc(struct tps6598x *tps, u8 reg, const char *val)
3530 {
3531 - return tps6598x_block_write(tps, reg, &val, sizeof(u32));
3532 + return tps6598x_block_write(tps, reg, val, 4);
3533 }
3534
3535 static int tps6598x_read_partner_identity(struct tps6598x *tps)
3536 diff --git a/fs/crypto/policy.c b/fs/crypto/policy.c
3537 index c6d431a5cce9..4288839501e9 100644
3538 --- a/fs/crypto/policy.c
3539 +++ b/fs/crypto/policy.c
3540 @@ -81,6 +81,8 @@ int fscrypt_ioctl_set_policy(struct file *filp, const void __user *arg)
3541 if (ret == -ENODATA) {
3542 if (!S_ISDIR(inode->i_mode))
3543 ret = -ENOTDIR;
3544 + else if (IS_DEADDIR(inode))
3545 + ret = -ENOENT;
3546 else if (!inode->i_sb->s_cop->empty_dir(inode))
3547 ret = -ENOTEMPTY;
3548 else
3549 diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
3550 index 53cf8599a46e..1de855e0ae61 100644
3551 --- a/fs/nfs/nfs4proc.c
3552 +++ b/fs/nfs/nfs4proc.c
3553 @@ -1243,10 +1243,20 @@ static struct nfs4_opendata *nfs4_opendata_alloc(struct dentry *dentry,
3554 atomic_inc(&sp->so_count);
3555 p->o_arg.open_flags = flags;
3556 p->o_arg.fmode = fmode & (FMODE_READ|FMODE_WRITE);
3557 - p->o_arg.umask = current_umask();
3558 p->o_arg.claim = nfs4_map_atomic_open_claim(server, claim);
3559 p->o_arg.share_access = nfs4_map_atomic_open_share(server,
3560 fmode, flags);
3561 + if (flags & O_CREAT) {
3562 + p->o_arg.umask = current_umask();
3563 + p->o_arg.label = nfs4_label_copy(p->a_label, label);
3564 + if (c->sattr != NULL && c->sattr->ia_valid != 0) {
3565 + p->o_arg.u.attrs = &p->attrs;
3566 + memcpy(&p->attrs, c->sattr, sizeof(p->attrs));
3567 +
3568 + memcpy(p->o_arg.u.verifier.data, c->verf,
3569 + sizeof(p->o_arg.u.verifier.data));
3570 + }
3571 + }
3572 /* don't put an ACCESS op in OPEN compound if O_EXCL, because ACCESS
3573 * will return permission denied for all bits until close */
3574 if (!(flags & O_EXCL)) {
3575 @@ -1270,7 +1280,6 @@ static struct nfs4_opendata *nfs4_opendata_alloc(struct dentry *dentry,
3576 p->o_arg.server = server;
3577 p->o_arg.bitmask = nfs4_bitmask(server, label);
3578 p->o_arg.open_bitmap = &nfs4_fattr_bitmap[0];
3579 - p->o_arg.label = nfs4_label_copy(p->a_label, label);
3580 switch (p->o_arg.claim) {
3581 case NFS4_OPEN_CLAIM_NULL:
3582 case NFS4_OPEN_CLAIM_DELEGATE_CUR:
3583 @@ -1283,13 +1292,6 @@ static struct nfs4_opendata *nfs4_opendata_alloc(struct dentry *dentry,
3584 case NFS4_OPEN_CLAIM_DELEG_PREV_FH:
3585 p->o_arg.fh = NFS_FH(d_inode(dentry));
3586 }
3587 - if (c != NULL && c->sattr != NULL && c->sattr->ia_valid != 0) {
3588 - p->o_arg.u.attrs = &p->attrs;
3589 - memcpy(&p->attrs, c->sattr, sizeof(p->attrs));
3590 -
3591 - memcpy(p->o_arg.u.verifier.data, c->verf,
3592 - sizeof(p->o_arg.u.verifier.data));
3593 - }
3594 p->c_arg.fh = &p->o_res.fh;
3595 p->c_arg.stateid = &p->o_res.stateid;
3596 p->c_arg.seqid = p->o_arg.seqid;
3597 diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
3598 index fc20e06c56ba..dd1783ea7003 100644
3599 --- a/fs/quota/dquot.c
3600 +++ b/fs/quota/dquot.c
3601 @@ -1993,8 +1993,8 @@ int __dquot_transfer(struct inode *inode, struct dquot **transfer_to)
3602 &warn_to[cnt]);
3603 if (ret)
3604 goto over_quota;
3605 - ret = dquot_add_space(transfer_to[cnt], cur_space, rsv_space, 0,
3606 - &warn_to[cnt]);
3607 + ret = dquot_add_space(transfer_to[cnt], cur_space, rsv_space,
3608 + DQUOT_SPACE_WARN, &warn_to[cnt]);
3609 if (ret) {
3610 spin_lock(&transfer_to[cnt]->dq_dqb_lock);
3611 dquot_decr_inodes(transfer_to[cnt], inode_usage);
3612 diff --git a/fs/udf/inode.c b/fs/udf/inode.c
3613 index ae796e10f68b..4c46ebf0e773 100644
3614 --- a/fs/udf/inode.c
3615 +++ b/fs/udf/inode.c
3616 @@ -470,13 +470,15 @@ static struct buffer_head *udf_getblk(struct inode *inode, udf_pblk_t block,
3617 return NULL;
3618 }
3619
3620 -/* Extend the file by 'blocks' blocks, return the number of extents added */
3621 +/* Extend the file with new blocks totaling 'new_block_bytes',
3622 + * return the number of extents added
3623 + */
3624 static int udf_do_extend_file(struct inode *inode,
3625 struct extent_position *last_pos,
3626 struct kernel_long_ad *last_ext,
3627 - sector_t blocks)
3628 + loff_t new_block_bytes)
3629 {
3630 - sector_t add;
3631 + uint32_t add;
3632 int count = 0, fake = !(last_ext->extLength & UDF_EXTENT_LENGTH_MASK);
3633 struct super_block *sb = inode->i_sb;
3634 struct kernel_lb_addr prealloc_loc = {};
3635 @@ -486,7 +488,7 @@ static int udf_do_extend_file(struct inode *inode,
3636
3637 /* The previous extent is fake and we should not extend by anything
3638 * - there's nothing to do... */
3639 - if (!blocks && fake)
3640 + if (!new_block_bytes && fake)
3641 return 0;
3642
3643 iinfo = UDF_I(inode);
3644 @@ -517,13 +519,12 @@ static int udf_do_extend_file(struct inode *inode,
3645 /* Can we merge with the previous extent? */
3646 if ((last_ext->extLength & UDF_EXTENT_FLAG_MASK) ==
3647 EXT_NOT_RECORDED_NOT_ALLOCATED) {
3648 - add = ((1 << 30) - sb->s_blocksize -
3649 - (last_ext->extLength & UDF_EXTENT_LENGTH_MASK)) >>
3650 - sb->s_blocksize_bits;
3651 - if (add > blocks)
3652 - add = blocks;
3653 - blocks -= add;
3654 - last_ext->extLength += add << sb->s_blocksize_bits;
3655 + add = (1 << 30) - sb->s_blocksize -
3656 + (last_ext->extLength & UDF_EXTENT_LENGTH_MASK);
3657 + if (add > new_block_bytes)
3658 + add = new_block_bytes;
3659 + new_block_bytes -= add;
3660 + last_ext->extLength += add;
3661 }
3662
3663 if (fake) {
3664 @@ -544,28 +545,27 @@ static int udf_do_extend_file(struct inode *inode,
3665 }
3666
3667 /* Managed to do everything necessary? */
3668 - if (!blocks)
3669 + if (!new_block_bytes)
3670 goto out;
3671
3672 /* All further extents will be NOT_RECORDED_NOT_ALLOCATED */
3673 last_ext->extLocation.logicalBlockNum = 0;
3674 last_ext->extLocation.partitionReferenceNum = 0;
3675 - add = (1 << (30-sb->s_blocksize_bits)) - 1;
3676 - last_ext->extLength = EXT_NOT_RECORDED_NOT_ALLOCATED |
3677 - (add << sb->s_blocksize_bits);
3678 + add = (1 << 30) - sb->s_blocksize;
3679 + last_ext->extLength = EXT_NOT_RECORDED_NOT_ALLOCATED | add;
3680
3681 /* Create enough extents to cover the whole hole */
3682 - while (blocks > add) {
3683 - blocks -= add;
3684 + while (new_block_bytes > add) {
3685 + new_block_bytes -= add;
3686 err = udf_add_aext(inode, last_pos, &last_ext->extLocation,
3687 last_ext->extLength, 1);
3688 if (err)
3689 return err;
3690 count++;
3691 }
3692 - if (blocks) {
3693 + if (new_block_bytes) {
3694 last_ext->extLength = EXT_NOT_RECORDED_NOT_ALLOCATED |
3695 - (blocks << sb->s_blocksize_bits);
3696 + new_block_bytes;
3697 err = udf_add_aext(inode, last_pos, &last_ext->extLocation,
3698 last_ext->extLength, 1);
3699 if (err)
3700 @@ -596,6 +596,24 @@ out:
3701 return count;
3702 }
3703
3704 +/* Extend the final block of the file to final_block_len bytes */
3705 +static void udf_do_extend_final_block(struct inode *inode,
3706 + struct extent_position *last_pos,
3707 + struct kernel_long_ad *last_ext,
3708 + uint32_t final_block_len)
3709 +{
3710 + struct super_block *sb = inode->i_sb;
3711 + uint32_t added_bytes;
3712 +
3713 + added_bytes = final_block_len -
3714 + (last_ext->extLength & (sb->s_blocksize - 1));
3715 + last_ext->extLength += added_bytes;
3716 + UDF_I(inode)->i_lenExtents += added_bytes;
3717 +
3718 + udf_write_aext(inode, last_pos, &last_ext->extLocation,
3719 + last_ext->extLength, 1);
3720 +}
3721 +
3722 static int udf_extend_file(struct inode *inode, loff_t newsize)
3723 {
3724
3725 @@ -605,10 +623,12 @@ static int udf_extend_file(struct inode *inode, loff_t newsize)
3726 int8_t etype;
3727 struct super_block *sb = inode->i_sb;
3728 sector_t first_block = newsize >> sb->s_blocksize_bits, offset;
3729 + unsigned long partial_final_block;
3730 int adsize;
3731 struct udf_inode_info *iinfo = UDF_I(inode);
3732 struct kernel_long_ad extent;
3733 - int err;
3734 + int err = 0;
3735 + int within_final_block;
3736
3737 if (iinfo->i_alloc_type == ICBTAG_FLAG_AD_SHORT)
3738 adsize = sizeof(struct short_ad);
3739 @@ -618,18 +638,8 @@ static int udf_extend_file(struct inode *inode, loff_t newsize)
3740 BUG();
3741
3742 etype = inode_bmap(inode, first_block, &epos, &eloc, &elen, &offset);
3743 + within_final_block = (etype != -1);
3744
3745 - /* File has extent covering the new size (could happen when extending
3746 - * inside a block)? */
3747 - if (etype != -1)
3748 - return 0;
3749 - if (newsize & (sb->s_blocksize - 1))
3750 - offset++;
3751 - /* Extended file just to the boundary of the last file block? */
3752 - if (offset == 0)
3753 - return 0;
3754 -
3755 - /* Truncate is extending the file by 'offset' blocks */
3756 if ((!epos.bh && epos.offset == udf_file_entry_alloc_offset(inode)) ||
3757 (epos.bh && epos.offset == sizeof(struct allocExtDesc))) {
3758 /* File has no extents at all or has empty last
3759 @@ -643,7 +653,22 @@ static int udf_extend_file(struct inode *inode, loff_t newsize)
3760 &extent.extLength, 0);
3761 extent.extLength |= etype << 30;
3762 }
3763 - err = udf_do_extend_file(inode, &epos, &extent, offset);
3764 +
3765 + partial_final_block = newsize & (sb->s_blocksize - 1);
3766 +
3767 + /* File has extent covering the new size (could happen when extending
3768 + * inside a block)?
3769 + */
3770 + if (within_final_block) {
3771 + /* Extending file within the last file block */
3772 + udf_do_extend_final_block(inode, &epos, &extent,
3773 + partial_final_block);
3774 + } else {
3775 + loff_t add = ((loff_t)offset << sb->s_blocksize_bits) |
3776 + partial_final_block;
3777 + err = udf_do_extend_file(inode, &epos, &extent, add);
3778 + }
3779 +
3780 if (err < 0)
3781 goto out;
3782 err = 0;
3783 @@ -745,6 +770,7 @@ static sector_t inode_getblk(struct inode *inode, sector_t block,
3784 /* Are we beyond EOF? */
3785 if (etype == -1) {
3786 int ret;
3787 + loff_t hole_len;
3788 isBeyondEOF = true;
3789 if (count) {
3790 if (c)
3791 @@ -760,7 +786,8 @@ static sector_t inode_getblk(struct inode *inode, sector_t block,
3792 startnum = (offset > 0);
3793 }
3794 /* Create extents for the hole between EOF and offset */
3795 - ret = udf_do_extend_file(inode, &prev_epos, laarr, offset);
3796 + hole_len = (loff_t)offset << inode->i_blkbits;
3797 + ret = udf_do_extend_file(inode, &prev_epos, laarr, hole_len);
3798 if (ret < 0) {
3799 *err = ret;
3800 newblock = 0;
3801 diff --git a/include/linux/vmw_vmci_defs.h b/include/linux/vmw_vmci_defs.h
3802 index b724ef7005de..53c5e40a2a8f 100644
3803 --- a/include/linux/vmw_vmci_defs.h
3804 +++ b/include/linux/vmw_vmci_defs.h
3805 @@ -68,9 +68,18 @@ enum {
3806
3807 /*
3808 * A single VMCI device has an upper limit of 128MB on the amount of
3809 - * memory that can be used for queue pairs.
3810 + * memory that can be used for queue pairs. Since each queue pair
3811 + * consists of at least two pages, the memory limit also dictates the
3812 + * number of queue pairs a guest can create.
3813 */
3814 #define VMCI_MAX_GUEST_QP_MEMORY (128 * 1024 * 1024)
3815 +#define VMCI_MAX_GUEST_QP_COUNT (VMCI_MAX_GUEST_QP_MEMORY / PAGE_SIZE / 2)
3816 +
3817 +/*
3818 + * There can be at most PAGE_SIZE doorbells since there is one doorbell
3819 + * per byte in the doorbell bitmap page.
3820 + */
3821 +#define VMCI_MAX_GUEST_DOORBELL_COUNT PAGE_SIZE
3822
3823 /*
3824 * Queues with pre-mapped data pages must be small, so that we don't pin
3825 diff --git a/include/net/ip6_tunnel.h b/include/net/ip6_tunnel.h
3826 index 236e40ba06bf..f594eb71c274 100644
3827 --- a/include/net/ip6_tunnel.h
3828 +++ b/include/net/ip6_tunnel.h
3829 @@ -156,9 +156,12 @@ static inline void ip6tunnel_xmit(struct sock *sk, struct sk_buff *skb,
3830 memset(skb->cb, 0, sizeof(struct inet6_skb_parm));
3831 pkt_len = skb->len - skb_inner_network_offset(skb);
3832 err = ip6_local_out(dev_net(skb_dst(skb)->dev), sk, skb);
3833 - if (unlikely(net_xmit_eval(err)))
3834 - pkt_len = -1;
3835 - iptunnel_xmit_stats(dev, pkt_len);
3836 +
3837 + if (dev) {
3838 + if (unlikely(net_xmit_eval(err)))
3839 + pkt_len = -1;
3840 + iptunnel_xmit_stats(dev, pkt_len);
3841 + }
3842 }
3843 #endif
3844 #endif
3845 diff --git a/include/uapi/linux/usb/audio.h b/include/uapi/linux/usb/audio.h
3846 index ddc5396800aa..76b7c3f6cd0d 100644
3847 --- a/include/uapi/linux/usb/audio.h
3848 +++ b/include/uapi/linux/usb/audio.h
3849 @@ -450,6 +450,43 @@ static inline __u8 *uac_processing_unit_specific(struct uac_processing_unit_desc
3850 }
3851 }
3852
3853 +/*
3854 + * Extension Unit (XU) has almost compatible layout with Processing Unit, but
3855 + * on UAC2, it has a different bmControls size (bControlSize); it's 1 byte for
3856 + * XU while 2 bytes for PU. The last iExtension field is a one-byte index as
3857 + * well as iProcessing field of PU.
3858 + */
3859 +static inline __u8 uac_extension_unit_bControlSize(struct uac_processing_unit_descriptor *desc,
3860 + int protocol)
3861 +{
3862 + switch (protocol) {
3863 + case UAC_VERSION_1:
3864 + return desc->baSourceID[desc->bNrInPins + 4];
3865 + case UAC_VERSION_2:
3866 + return 1; /* in UAC2, this value is constant */
3867 + case UAC_VERSION_3:
3868 + return 4; /* in UAC3, this value is constant */
3869 + default:
3870 + return 1;
3871 + }
3872 +}
3873 +
3874 +static inline __u8 uac_extension_unit_iExtension(struct uac_processing_unit_descriptor *desc,
3875 + int protocol)
3876 +{
3877 + __u8 control_size = uac_extension_unit_bControlSize(desc, protocol);
3878 +
3879 + switch (protocol) {
3880 + case UAC_VERSION_1:
3881 + case UAC_VERSION_2:
3882 + default:
3883 + return *(uac_processing_unit_bmControls(desc, protocol)
3884 + + control_size);
3885 + case UAC_VERSION_3:
3886 + return 0; /* UAC3 does not have this field */
3887 + }
3888 +}
3889 +
3890 /* 4.5.2 Class-Specific AS Interface Descriptor */
3891 struct uac1_as_header_descriptor {
3892 __u8 bLength; /* in bytes: 7 */
3893 diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c
3894 index 2faad033715f..fc500ca464d0 100644
3895 --- a/kernel/bpf/devmap.c
3896 +++ b/kernel/bpf/devmap.c
3897 @@ -186,6 +186,7 @@ static void dev_map_free(struct bpf_map *map)
3898 if (!dev)
3899 continue;
3900
3901 + free_percpu(dev->bulkq);
3902 dev_put(dev->dev);
3903 kfree(dev);
3904 }
3905 @@ -281,6 +282,7 @@ void __dev_map_flush(struct bpf_map *map)
3906 unsigned long *bitmap = this_cpu_ptr(dtab->flush_needed);
3907 u32 bit;
3908
3909 + rcu_read_lock();
3910 for_each_set_bit(bit, bitmap, map->max_entries) {
3911 struct bpf_dtab_netdev *dev = READ_ONCE(dtab->netdev_map[bit]);
3912 struct xdp_bulk_queue *bq;
3913 @@ -291,11 +293,12 @@ void __dev_map_flush(struct bpf_map *map)
3914 if (unlikely(!dev))
3915 continue;
3916
3917 - __clear_bit(bit, bitmap);
3918 -
3919 bq = this_cpu_ptr(dev->bulkq);
3920 bq_xmit_all(dev, bq, XDP_XMIT_FLUSH, true);
3921 +
3922 + __clear_bit(bit, bitmap);
3923 }
3924 + rcu_read_unlock();
3925 }
3926
3927 /* rcu_read_lock (from syscall and BPF contexts) ensures that if a delete and/or
3928 @@ -388,6 +391,7 @@ static void dev_map_flush_old(struct bpf_dtab_netdev *dev)
3929
3930 int cpu;
3931
3932 + rcu_read_lock();
3933 for_each_online_cpu(cpu) {
3934 bitmap = per_cpu_ptr(dev->dtab->flush_needed, cpu);
3935 __clear_bit(dev->bit, bitmap);
3936 @@ -395,6 +399,7 @@ static void dev_map_flush_old(struct bpf_dtab_netdev *dev)
3937 bq = per_cpu_ptr(dev->bulkq, cpu);
3938 bq_xmit_all(dev, bq, XDP_XMIT_FLUSH, false);
3939 }
3940 + rcu_read_unlock();
3941 }
3942 }
3943
3944 diff --git a/net/can/af_can.c b/net/can/af_can.c
3945 index e386d654116d..04132b0b5d36 100644
3946 --- a/net/can/af_can.c
3947 +++ b/net/can/af_can.c
3948 @@ -959,6 +959,8 @@ static struct pernet_operations can_pernet_ops __read_mostly = {
3949
3950 static __init int can_init(void)
3951 {
3952 + int err;
3953 +
3954 /* check for correct padding to be able to use the structs similarly */
3955 BUILD_BUG_ON(offsetof(struct can_frame, can_dlc) !=
3956 offsetof(struct canfd_frame, len) ||
3957 @@ -972,15 +974,31 @@ static __init int can_init(void)
3958 if (!rcv_cache)
3959 return -ENOMEM;
3960
3961 - register_pernet_subsys(&can_pernet_ops);
3962 + err = register_pernet_subsys(&can_pernet_ops);
3963 + if (err)
3964 + goto out_pernet;
3965
3966 /* protocol register */
3967 - sock_register(&can_family_ops);
3968 - register_netdevice_notifier(&can_netdev_notifier);
3969 + err = sock_register(&can_family_ops);
3970 + if (err)
3971 + goto out_sock;
3972 + err = register_netdevice_notifier(&can_netdev_notifier);
3973 + if (err)
3974 + goto out_notifier;
3975 +
3976 dev_add_pack(&can_packet);
3977 dev_add_pack(&canfd_packet);
3978
3979 return 0;
3980 +
3981 +out_notifier:
3982 + sock_unregister(PF_CAN);
3983 +out_sock:
3984 + unregister_pernet_subsys(&can_pernet_ops);
3985 +out_pernet:
3986 + kmem_cache_destroy(rcv_cache);
3987 +
3988 + return err;
3989 }
3990
3991 static __exit void can_exit(void)
3992 diff --git a/net/core/skbuff.c b/net/core/skbuff.c
3993 index 8b5768113acd..9b9f696281a9 100644
3994 --- a/net/core/skbuff.c
3995 +++ b/net/core/skbuff.c
3996 @@ -2302,6 +2302,7 @@ do_frag_list:
3997 kv.iov_base = skb->data + offset;
3998 kv.iov_len = slen;
3999 memset(&msg, 0, sizeof(msg));
4000 + msg.msg_flags = MSG_DONTWAIT;
4001
4002 ret = kernel_sendmsg_locked(sk, &msg, &kv, 1, slen);
4003 if (ret <= 0)
4004 diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h
4005 index 35c6dfa13fa8..cfd30671ccdf 100644
4006 --- a/net/mac80211/ieee80211_i.h
4007 +++ b/net/mac80211/ieee80211_i.h
4008 @@ -1410,7 +1410,7 @@ ieee80211_get_sband(struct ieee80211_sub_if_data *sdata)
4009 rcu_read_lock();
4010 chanctx_conf = rcu_dereference(sdata->vif.chanctx_conf);
4011
4012 - if (WARN_ON(!chanctx_conf)) {
4013 + if (WARN_ON_ONCE(!chanctx_conf)) {
4014 rcu_read_unlock();
4015 return NULL;
4016 }
4017 @@ -1998,6 +1998,13 @@ void __ieee80211_flush_queues(struct ieee80211_local *local,
4018
4019 static inline bool ieee80211_can_run_worker(struct ieee80211_local *local)
4020 {
4021 + /*
4022 + * It's unsafe to try to do any work during reconfigure flow.
4023 + * When the flow ends the work will be requeued.
4024 + */
4025 + if (local->in_reconfig)
4026 + return false;
4027 +
4028 /*
4029 * If quiescing is set, we are racing with __ieee80211_suspend.
4030 * __ieee80211_suspend flushes the workers after setting quiescing,
4031 diff --git a/net/mac80211/mesh.c b/net/mac80211/mesh.c
4032 index d51da26e9c18..3162f955f3ae 100644
4033 --- a/net/mac80211/mesh.c
4034 +++ b/net/mac80211/mesh.c
4035 @@ -923,6 +923,7 @@ void ieee80211_stop_mesh(struct ieee80211_sub_if_data *sdata)
4036
4037 /* flush STAs and mpaths on this iface */
4038 sta_info_flush(sdata);
4039 + ieee80211_free_keys(sdata, true);
4040 mesh_path_flush_by_iface(sdata);
4041
4042 /* stop the beacon */
4043 @@ -1212,7 +1213,8 @@ int ieee80211_mesh_finish_csa(struct ieee80211_sub_if_data *sdata)
4044 ifmsh->chsw_ttl = 0;
4045
4046 /* Remove the CSA and MCSP elements from the beacon */
4047 - tmp_csa_settings = rcu_dereference(ifmsh->csa);
4048 + tmp_csa_settings = rcu_dereference_protected(ifmsh->csa,
4049 + lockdep_is_held(&sdata->wdev.mtx));
4050 RCU_INIT_POINTER(ifmsh->csa, NULL);
4051 if (tmp_csa_settings)
4052 kfree_rcu(tmp_csa_settings, rcu_head);
4053 @@ -1234,6 +1236,8 @@ int ieee80211_mesh_csa_beacon(struct ieee80211_sub_if_data *sdata,
4054 struct mesh_csa_settings *tmp_csa_settings;
4055 int ret = 0;
4056
4057 + lockdep_assert_held(&sdata->wdev.mtx);
4058 +
4059 tmp_csa_settings = kmalloc(sizeof(*tmp_csa_settings),
4060 GFP_ATOMIC);
4061 if (!tmp_csa_settings)
4062 diff --git a/net/mac80211/util.c b/net/mac80211/util.c
4063 index 2558a34c9df1..c59638574cf8 100644
4064 --- a/net/mac80211/util.c
4065 +++ b/net/mac80211/util.c
4066 @@ -2224,6 +2224,10 @@ int ieee80211_reconfig(struct ieee80211_local *local)
4067 mutex_lock(&local->mtx);
4068 ieee80211_start_next_roc(local);
4069 mutex_unlock(&local->mtx);
4070 +
4071 + /* Requeue all works */
4072 + list_for_each_entry(sdata, &local->interfaces, list)
4073 + ieee80211_queue_work(&local->hw, &sdata->work);
4074 }
4075
4076 ieee80211_wake_queues_by_reason(hw, IEEE80211_MAX_QUEUE_MAP,
4077 diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
index 7e4553dbc3c7..0d7d149b1b1b 100644
--- a/net/sunrpc/clnt.c
+++ b/net/sunrpc/clnt.c
@@ -2713,6 +2713,7 @@ int rpc_clnt_add_xprt(struct rpc_clnt *clnt,
 	xprt = xprt_iter_xprt(&clnt->cl_xpi);
 	if (xps == NULL || xprt == NULL) {
 		rcu_read_unlock();
+		xprt_switch_put(xps);
 		return -EAGAIN;
 	}
 	resvport = xprt->resvport;
diff --git a/net/wireless/util.c b/net/wireless/util.c
index aad1c8e858e5..d57e2f679a3e 100644
--- a/net/wireless/util.c
+++ b/net/wireless/util.c
@@ -1219,7 +1219,7 @@ static u32 cfg80211_calculate_bitrate_he(struct rate_info *rate)
 	if (rate->he_dcm)
 		result /= 2;
 
-	return result;
+	return result / 10000;
 }
 
 u32 cfg80211_calculate_bitrate(struct rate_info *rate)
diff --git a/samples/bpf/bpf_load.c b/samples/bpf/bpf_load.c
index cf40a8284a38..5061a2ec4564 100644
--- a/samples/bpf/bpf_load.c
+++ b/samples/bpf/bpf_load.c
@@ -677,7 +677,7 @@ void read_trace_pipe(void)
 		static char buf[4096];
 		ssize_t sz;
 
-		sz = read(trace_fd, buf, sizeof(buf));
+		sz = read(trace_fd, buf, sizeof(buf) - 1);
 		if (sz > 0) {
 			buf[sz] = 0;
 			puts(buf);
diff --git a/samples/bpf/task_fd_query_user.c b/samples/bpf/task_fd_query_user.c
index 8381d792f138..06957f0fbe83 100644
--- a/samples/bpf/task_fd_query_user.c
+++ b/samples/bpf/task_fd_query_user.c
@@ -216,7 +216,7 @@ static int test_debug_fs_uprobe(char *binary_path, long offset, bool is_return)
 {
 	const char *event_type = "uprobe";
 	struct perf_event_attr attr = {};
-	char buf[256], event_alias[256];
+	char buf[256], event_alias[sizeof("test_1234567890")];
 	__u64 probe_offset, probe_addr;
 	__u32 len, prog_id, fd_type;
 	int err, res, kfd, efd;
diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
index 6453370abacc..98cfdcfce5b3 100644
--- a/sound/pci/hda/patch_realtek.c
+++ b/sound/pci/hda/patch_realtek.c
@@ -3236,6 +3236,7 @@ static void alc256_init(struct hda_codec *codec)
 	alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x4); /* Hight power */
 	alc_update_coefex_idx(codec, 0x53, 0x02, 0x8000, 1 << 15); /* Clear bit */
 	alc_update_coefex_idx(codec, 0x53, 0x02, 0x8000, 0 << 15);
+	alc_update_coef_idx(codec, 0x36, 1 << 13, 1 << 5); /* Switch pcbeep path to Line in path*/
 }
 
 static void alc256_shutup(struct hda_codec *codec)
@@ -7686,7 +7687,6 @@ static int patch_alc269(struct hda_codec *codec)
 		spec->shutup = alc256_shutup;
 		spec->init_hook = alc256_init;
 		spec->gen.mixer_nid = 0; /* ALC256 does not have any loopback mixer path */
-		alc_update_coef_idx(codec, 0x36, 1 << 13, 1 << 5); /* Switch pcbeep path to Line in path*/
 		break;
 	case 0x10ec0257:
 		spec->codec_variant = ALC269_TYPE_ALC257;
diff --git a/sound/usb/mixer.c b/sound/usb/mixer.c
index 5a10b1b7f6b9..7e1c6c2dc99e 100644
--- a/sound/usb/mixer.c
+++ b/sound/usb/mixer.c
@@ -2322,7 +2322,7 @@ static struct procunit_info extunits[] = {
  */
 static int build_audio_procunit(struct mixer_build *state, int unitid,
 				void *raw_desc, struct procunit_info *list,
-				char *name)
+				bool extension_unit)
 {
 	struct uac_processing_unit_descriptor *desc = raw_desc;
 	int num_ins;
@@ -2339,6 +2339,8 @@ static int build_audio_procunit(struct mixer_build *state, int unitid,
 	static struct procunit_info default_info = {
 		0, NULL, default_value_info
 	};
+	const char *name = extension_unit ?
+		"Extension Unit" : "Processing Unit";
 
 	if (desc->bLength < 13) {
 		usb_audio_err(state->chip, "invalid %s descriptor (id %d)\n", name, unitid);
@@ -2452,7 +2454,10 @@ static int build_audio_procunit(struct mixer_build *state, int unitid,
 	} else if (info->name) {
 		strlcpy(kctl->id.name, info->name, sizeof(kctl->id.name));
 	} else {
-		nameid = uac_processing_unit_iProcessing(desc, state->mixer->protocol);
+		if (extension_unit)
+			nameid = uac_extension_unit_iExtension(desc, state->mixer->protocol);
+		else
+			nameid = uac_processing_unit_iProcessing(desc, state->mixer->protocol);
 		len = 0;
 		if (nameid)
 			len = snd_usb_copy_string_desc(state->chip,
@@ -2485,10 +2490,10 @@ static int parse_audio_processing_unit(struct mixer_build *state, int unitid,
 	case UAC_VERSION_2:
 	default:
 		return build_audio_procunit(state, unitid, raw_desc,
-					    procunits, "Processing Unit");
+					    procunits, false);
 	case UAC_VERSION_3:
 		return build_audio_procunit(state, unitid, raw_desc,
-					    uac3_procunits, "Processing Unit");
+					    uac3_procunits, false);
 	}
 }
 
@@ -2499,8 +2504,7 @@ static int parse_audio_extension_unit(struct mixer_build *state, int unitid,
 	 * Note that we parse extension units with processing unit descriptors.
 	 * That's ok as the layout is the same.
 	 */
-	return build_audio_procunit(state, unitid, raw_desc,
-				    extunits, "Extension Unit");
+	return build_audio_procunit(state, unitid, raw_desc, extunits, true);
 }
 
 /*
diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
index 36cfc64c3824..c1acf04c9f7a 100644
--- a/tools/perf/util/pmu.c
+++ b/tools/perf/util/pmu.c
@@ -750,9 +750,7 @@ static void pmu_add_cpu_aliases(struct list_head *head, struct perf_pmu *pmu)
 {
 	int i;
 	struct pmu_events_map *map;
-	struct pmu_event *pe;
 	const char *name = pmu->name;
-	const char *pname;
 
 	map = perf_pmu__find_map(pmu);
 	if (!map)
@@ -763,28 +761,26 @@ static void pmu_add_cpu_aliases(struct list_head *head, struct perf_pmu *pmu)
 	 */
 	i = 0;
 	while (1) {
+		const char *cpu_name = is_arm_pmu_core(name) ? name : "cpu";
+		struct pmu_event *pe = &map->table[i++];
+		const char *pname = pe->pmu ? pe->pmu : cpu_name;
 
-		pe = &map->table[i++];
 		if (!pe->name) {
 			if (pe->metric_group || pe->metric_name)
 				continue;
 			break;
 		}
 
-		if (!is_arm_pmu_core(name)) {
-			pname = pe->pmu ? pe->pmu : "cpu";
-
-			/*
-			 * uncore alias may be from different PMU
-			 * with common prefix
-			 */
-			if (pmu_is_uncore(name) &&
-			    !strncmp(pname, name, strlen(pname)))
-				goto new_alias;
+		/*
+		 * uncore alias may be from different PMU
+		 * with common prefix
+		 */
+		if (pmu_is_uncore(name) &&
+		    !strncmp(pname, name, strlen(pname)))
+			goto new_alias;
 
-			if (strcmp(pname, name))
-				continue;
-		}
+		if (strcmp(pname, name))
+			continue;
 
 new_alias:
 		/* need type casts to override 'const' */
diff --git a/virt/kvm/arm/vgic/vgic-its.c b/virt/kvm/arm/vgic/vgic-its.c
index 621bb004067e..0dbe332eb343 100644
--- a/virt/kvm/arm/vgic/vgic-its.c
+++ b/virt/kvm/arm/vgic/vgic-its.c
@@ -1750,6 +1750,7 @@ static void vgic_its_destroy(struct kvm_device *kvm_dev)
 
 	mutex_unlock(&its->its_lock);
 	kfree(its);
+	kfree(kvm_dev);/* alloc by kvm_ioctl_create_device, free by .destroy */
 }
 
 int vgic_its_has_attr_regs(struct kvm_device *dev,