Magellan Linux

Annotation of /trunk/kernel-magellan/patches-5.2/0100-5.2.1-all-fixes.patch



Revision 3370
Tue Jul 16 19:50:37 2019 UTC by niro
File size: 141143 bytes
-linux-5.2.1
1 niro 3370 diff --git a/Documentation/admin-guide/hw-vuln/index.rst b/Documentation/admin-guide/hw-vuln/index.rst
2     index ffc064c1ec68..49311f3da6f2 100644
3     --- a/Documentation/admin-guide/hw-vuln/index.rst
4     +++ b/Documentation/admin-guide/hw-vuln/index.rst
5     @@ -9,5 +9,6 @@ are configurable at compile, boot or run time.
6     .. toctree::
7     :maxdepth: 1
8    
9     + spectre
10     l1tf
11     mds
12     diff --git a/Documentation/admin-guide/hw-vuln/spectre.rst b/Documentation/admin-guide/hw-vuln/spectre.rst
13     new file mode 100644
14     index 000000000000..25f3b2532198
15     --- /dev/null
16     +++ b/Documentation/admin-guide/hw-vuln/spectre.rst
17     @@ -0,0 +1,697 @@
18     +.. SPDX-License-Identifier: GPL-2.0
19     +
20     +Spectre Side Channels
21     +=====================
22     +
23     +Spectre is a class of side channel attacks that exploit branch prediction
24     +and speculative execution on modern CPUs to read memory, possibly
25     +bypassing access controls. Speculative execution side channel exploits
26     +do not modify memory but attempt to infer privileged data in memory.
27     +
28     +This document covers Spectre variant 1 and Spectre variant 2.
29     +
30     +Affected processors
31     +-------------------
32     +
33     +Speculative execution side channel methods affect a wide range of modern
34     +high performance processors, since most modern high speed processors
35     +use branch prediction and speculative execution.
36     +
37     +The following CPUs are vulnerable:
38     +
39     + - Intel Core, Atom, Pentium, and Xeon processors
40     +
41     + - AMD Phenom, EPYC, and Zen processors
42     +
43     + - IBM POWER and zSeries processors
44     +
45     + - Higher end ARM processors
46     +
47     + - Apple CPUs
48     +
49     + - Higher end MIPS CPUs
50     +
51     + - Likely most other high performance CPUs. Contact your CPU vendor for details.
52     +
53     +Whether a processor is affected or not can be read out from the Spectre
54     +vulnerability files in sysfs. See :ref:`spectre_sys_info`.
55     +
56     +Related CVEs
57     +------------
58     +
59     +The following CVE entries describe Spectre variants:
60     +
61     + ============= ======================= =================
62     + CVE-2017-5753 Bounds check bypass Spectre variant 1
63     + CVE-2017-5715 Branch target injection Spectre variant 2
64     + ============= ======================= =================
65     +
66     +Problem
67     +-------
68     +
69     +CPUs use speculative operations to improve performance. That may leave
70     +traces of memory accesses or computations in the processor's caches,
71     +buffers, and branch predictors. Malicious software may be able to
72     +influence the speculative execution paths, and then use the side effects
73     +of the speculative execution in the CPUs' caches and buffers to infer
74     +privileged data touched during the speculative execution.
75     +
76     +Spectre variant 1 attacks take advantage of speculative execution of
77     +conditional branches, while Spectre variant 2 attacks use speculative
78     +execution of indirect branches to leak privileged memory.
79     +See :ref:`[1] <spec_ref1>` :ref:`[5] <spec_ref5>` :ref:`[7] <spec_ref7>`
80     +:ref:`[10] <spec_ref10>` :ref:`[11] <spec_ref11>`.
81     +
82     +Spectre variant 1 (Bounds Check Bypass)
83     +---------------------------------------
84     +
85     +The bounds check bypass attack :ref:`[2] <spec_ref2>` takes advantage
86     +of speculative execution that bypasses conditional branch instructions
87     +used for memory access bounds checks (e.g. checking if the index of an
88     +array results in memory access within a valid range). This results in
89     +memory accesses to invalid memory (with an out-of-bounds index) that are
90     +done speculatively before validation checks resolve. Such speculative
91     +memory accesses can leave side effects, creating side channels which
92     +leak information to the attacker.
93     +
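+The canonical variant 1 gadget from :ref:`[11] <spec_ref11>` can be
+sketched as follows (array1, array2 and their sizes are illustrative
+names, not kernel code)::
+
+	if (x < array1_size)			/* bounds check, may be bypassed */
+		y = array2[array1[x] * 4096];	/* dependent load touches a cache
+						   line selected by array1[x]    */
+
+If x is attacker-controlled and the branch mispredicts, array1[x] is read
+speculatively even when x is out of bounds, and the dependent load from
+array2 leaves a cache footprint that encodes the secret byte.
+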
94     +There are some extensions of Spectre variant 1 attacks for reading data
95     +over the network, see :ref:`[12] <spec_ref12>`. However, such attacks
96     +are difficult, low bandwidth, fragile, and are considered low risk.
97     +
98     +Spectre variant 2 (Branch Target Injection)
99     +-------------------------------------------
100     +
101     +The branch target injection attack takes advantage of speculative
102     +execution of indirect branches :ref:`[3] <spec_ref3>`. The indirect
103     +branch predictors inside the processor used to guess the target of
104     +indirect branches can be influenced by an attacker, causing gadget code
105     +to be speculatively executed, thus exposing sensitive data touched by
106     +the victim. The side effects left in the CPU's caches during speculative
107     +execution can be measured to infer data values.
108     +
109     +.. _poison_btb:
110     +
111     +In Spectre variant 2 attacks, the attacker can steer speculative indirect
112     +branches in the victim to gadget code by poisoning the branch target
113     +buffer of a CPU used for predicting indirect branch addresses. Such
114     +poisoning could be done by indirect branching into existing code,
115     +with the address offset of the indirect branch under the attacker's
116     +control. Since the branch prediction on impacted hardware does not
117     +fully disambiguate branch address and uses the offset for prediction,
118     +this could cause privileged code's indirect branch to jump to gadget
119     +code with the same offset.
120     +
121     +The most useful gadgets take an attacker-controlled input parameter (such
122     +as a register value) so that the memory read can be controlled. Gadgets
123     +without input parameters might be possible, but the attacker would have
124     +very little control over what memory can be read, reducing the risk of
125     +the attack revealing useful data.
126     +
127     +One other variant 2 attack vector is for the attacker to poison the
128     +return stack buffer (RSB) :ref:`[13] <spec_ref13>` to cause speculative
129     +subroutine return instruction execution to go to a gadget. An attacker's
130     +imbalanced subroutine call instructions might "poison" entries in the
131     +return stack buffer which are later consumed by a victim's subroutine
132     +return instructions. This attack can be mitigated by flushing the return
133     +stack buffer on context switch, or virtual machine (VM) exit.
134     +
135     +On systems with simultaneous multi-threading (SMT), attacks are possible
136     +from the sibling thread, as level 1 cache and branch target buffer
137     +(BTB) may be shared between hardware threads in a CPU core. A malicious
138     +program running on the sibling thread may influence its peer's BTB to
139     +steer its indirect branch speculations to gadget code, and measure the
140     +speculative execution's side effects left in level 1 cache to infer the
141     +victim's data.
142     +
143     +Attack scenarios
144     +----------------
145     +
146     +The following list of attack scenarios has been anticipated, but may
147     +not cover all possible attack vectors.
148     +
149     +1. A user process attacking the kernel
150     +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
151     +
152     + The attacker passes a parameter to the kernel via a register or
153     + via a known address in memory during a syscall. Such a parameter may
154     + be used later by the kernel as an index to an array or to derive
155     + a pointer for a Spectre variant 1 attack. The index or pointer
156     + is invalid, but bounds checks are bypassed in the code branch taken
157     + for speculative execution. This could cause privileged memory to be
158     + accessed and leaked.
159     +
160     + For kernel code where data pointers have been identified as
161     + potentially attacker-influenced for Spectre attacks, new "nospec"
162     + accessor macros are used to prevent speculative loading of data.
163     +
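+ A minimal sketch of this pattern, using the array_index_nospec()
+ helper from include/linux/nospec.h (get_entry(), entries[] and
+ NR_ENTRIES are hypothetical names)::
+
+	long get_entry(unsigned long n)
+	{
+		if (n >= NR_ENTRIES)
+			return -EINVAL;
+		/* Clamp n to [0, NR_ENTRIES) even under misspeculation. */
+		n = array_index_nospec(n, NR_ENTRIES);
+		return entries[n];
+	}
+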
164     + A Spectre variant 2 attacker can :ref:`poison <poison_btb>` the branch
165     + target buffer (BTB) before issuing a syscall to launch an attack.
166     + After entering the kernel, the kernel could consume the poisoned
167     + branch target buffer entry on an indirect jump and speculatively
168     + execute gadget code.
169     +
170     + If an attacker tries to control the memory addresses leaked during
171     + speculative execution, they would also need to pass a parameter to the
172     + gadget, either through a register or a known address in memory. After
173     + the gadget has executed, they can measure the side effects.
174     +
175     + The kernel can protect itself against consuming poisoned branch
176     + target buffer entries by using return trampolines (also known as
177     + "retpoline") :ref:`[3] <spec_ref3>` :ref:`[9] <spec_ref9>` for all
178     + indirect branches. Return trampolines trap speculative execution paths
179     + to prevent jumping to gadget code during speculative execution.
180     + x86 CPUs with Enhanced Indirect Branch Restricted Speculation
181     + (Enhanced IBRS) available in hardware should use the feature to
182     + mitigate Spectre variant 2 instead of retpoline. Enhanced IBRS is
183     + more efficient than retpoline.
184     +
185     + There may be gadget code in firmware which could be exploited with
186     + Spectre variant 2 attack by a rogue user process. To mitigate such
187     + attacks on x86, Indirect Branch Restricted Speculation (IBRS) feature
188     + is turned on before the kernel invokes any firmware code.
189     +
190     +2. A user process attacking another user process
191     +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
192     +
193     + A malicious user process can try to attack another user process,
194     + either via a context switch on the same hardware thread, or from the
195     + sibling hyperthread sharing a physical processor core on a simultaneous
196     + multi-threading (SMT) system.
197     +
198     + Spectre variant 1 attacks generally require passing parameters
199     + between the processes, which needs a data passing relationship, such
200     + as remote procedure calls (RPC). Those parameters are used in gadget
201     + code to derive invalid data pointers accessing privileged memory in
202     + the attacked process.
203     +
204     + Spectre variant 2 attacks can be launched from a rogue process by
205     + :ref:`poisoning <poison_btb>` the branch target buffer. This can
206     + influence the indirect branch targets for a victim process that either
207     + runs later on the same hardware thread, or runs concurrently on
208     + a sibling hardware thread sharing the same physical core.
209     +
210     + A user process can protect itself against Spectre variant 2 attacks
211     + by using the prctl() syscall to disable indirect branch speculation
212     + for itself. An administrator can also cordon off an unsafe process
213     + from polluting the branch target buffer by disabling the process's
214     + indirect branch speculation. This comes with a performance cost
215     + from not using indirect branch speculation and clearing the branch
216     + target buffer. When SMT is enabled on x86, for a process that has
217     + indirect branch speculation disabled, Single Threaded Indirect Branch
218     + Predictors (STIBP) :ref:`[4] <spec_ref4>` are turned on to prevent the
219     + sibling thread from controlling the branch target buffer. In addition,
220     + the Indirect Branch Prediction Barrier (IBPB) is issued to clear the
221     + branch target buffer when context switching to and from such a process.
222     +
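+ A task could opt out with a call along these lines (a sketch; error
+ handling elided, x86 only; the setting is inherited across fork())::
+
+	#include <sys/prctl.h>
+	#include <linux/prctl.h>
+
+	/* Disable indirect branch speculation for this task. */
+	prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH,
+	      PR_SPEC_DISABLE, 0, 0);
+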
223     + On x86, the return stack buffer is stuffed on context switch.
224     + This prevents the branch target buffer from being used for branch
225     + prediction when the return stack buffer underflows while switching to
226     + a deeper call stack. Any poisoned entries in the return stack buffer
227     + left by the previous process will also be cleared.
228     +
229     + User programs should use address space randomization to make attacks
230     + more difficult (Set /proc/sys/kernel/randomize_va_space = 1 or 2).
231     +
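+ As a sketch, the setting can also be changed programmatically
+ (requires privilege; many distributions already default to 2)::
+
+	#include <stdio.h>
+
+	int main(void)
+	{
+		FILE *f = fopen("/proc/sys/kernel/randomize_va_space", "w");
+
+		if (!f)
+			return 1;
+		fputs("2", f);	/* 2 = full address space randomization */
+		return fclose(f) ? 1 : 0;
+	}
+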
232     +3. A virtualized guest attacking the host
233     +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
234     +
235     + The attack mechanism is similar to how user processes attack the
236     + kernel. The kernel is entered via hyper-calls or other virtualization
237     + exit paths.
238     +
239     + For Spectre variant 1 attacks, rogue guests can pass parameters
240     + (e.g. in registers) via hyper-calls to derive invalid pointers to
241     + speculate into privileged memory after entering the kernel. For places
242     + where such kernel code has been identified, nospec accessor macros
243     + are used to stop speculative memory access.
244     +
245     + For Spectre variant 2 attacks, rogue guests can :ref:`poison
246     + <poison_btb>` the branch target buffer or return stack buffer, causing
247     + the kernel to jump to gadget code in the speculative execution paths.
248     +
249     + To mitigate variant 2, the host kernel can use return trampolines
250     + for indirect branches to bypass the poisoned branch target buffer,
251     + and flush the return stack buffer on VM exit. This prevents rogue
252     + guests from affecting indirect branching in the host kernel.
253     +
254     + To protect host processes from rogue guests, host processes can have
255     + indirect branch speculation disabled via prctl(). The branch target
256     + buffer is cleared before context switching to such processes.
257     +
258     +4. A virtualized guest attacking another guest
259     +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
260     +
261     + A rogue guest may attack another guest to get data accessible by the
262     + other guest.
263     +
264     + Spectre variant 1 attacks are possible if parameters can be passed
265     + between guests. This may be done via mechanisms such as shared memory
266     + or message passing. Such parameters could be used to derive data
267     + pointers to privileged data in the guest. The privileged data could be
268     + accessed by gadget code in the victim's speculation paths.
269     +
270     + Spectre variant 2 attacks can be launched from a rogue guest by
271     + :ref:`poisoning <poison_btb>` the branch target buffer or the return
272     + stack buffer. Such poisoned entries could be used to influence
273     + speculative execution paths in the victim guest.
274     +
275     + The Linux kernel mitigates attacks against other guests running on the same
276     + CPU hardware thread by flushing the return stack buffer on VM exit,
277     + and clearing the branch target buffer before switching to a new guest.
278     +
279     + If SMT is used, Spectre variant 2 attacks from an untrusted guest
280     + in the sibling hyperthread can be mitigated by the administrator,
281     + by turning off the unsafe guest's indirect branch speculation via
282     + prctl(). A guest can also protect itself by turning on microcode
283     + based mitigations (such as IBPB or STIBP on x86) within the guest.
284     +
285     +.. _spectre_sys_info:
286     +
287     +Spectre system information
288     +--------------------------
289     +
290     +The Linux kernel provides a sysfs interface to enumerate the current
291     +mitigation status of the system for Spectre: whether the system is
292     +vulnerable, and which mitigations are active.
293     +
294     +The sysfs file showing Spectre variant 1 mitigation status is:
295     +
296     + /sys/devices/system/cpu/vulnerabilities/spectre_v1
297     +
298     +The possible values in this file are:
299     +
300     + ========================================= =================================
301     + 'Mitigation: __user pointer sanitization' Protection in kernel on a case by
302     + case basis with explicit pointer
303     + sanitization.
304     + ========================================= =================================
305     +
306     +However, the protections are put in place on a case by case basis,
307     +and there is no guarantee that all possible attack vectors for Spectre
308     +variant 1 are covered.
309     +
310     +The spectre_v2 kernel file reports if the kernel has been compiled with
311     +retpoline mitigation or if the CPU has hardware mitigation, and if the
312     +CPU has support for additional process-specific mitigation.
313     +
314     +This file also reports CPU features enabled by microcode to mitigate
315     +attack between user processes:
316     +
317     +1. Indirect Branch Prediction Barrier (IBPB) to add additional
318     + isolation between processes of different users.
319     +2. Single Thread Indirect Branch Predictors (STIBP) to add additional
320     + isolation between CPU threads running on the same core.
321     +
322     +These CPU features may impact performance when used and can be enabled
323     +per process on a case-by-case basis.
324     +
325     +The sysfs file showing Spectre variant 2 mitigation status is:
326     +
327     + /sys/devices/system/cpu/vulnerabilities/spectre_v2
328     +
329     +The possible values in this file are:
330     +
331     + - Kernel status:
332     +
333     + ==================================== =================================
334     + 'Not affected' The processor is not vulnerable
335     + 'Vulnerable' Vulnerable, no mitigation
336     + 'Mitigation: Full generic retpoline' Software-focused mitigation
337     + 'Mitigation: Full AMD retpoline' AMD-specific software mitigation
338     + 'Mitigation: Enhanced IBRS' Hardware-focused mitigation
339     + ==================================== =================================
340     +
341     + - Firmware status: Show if Indirect Branch Restricted Speculation (IBRS) is
342     + used to protect against Spectre variant 2 attacks when calling firmware (x86 only).
343     +
344     + ========== =============================================================
345     + 'IBRS_FW' Protection against user program attacks when calling firmware
346     + ========== =============================================================
347     +
348     + - Indirect branch prediction barrier (IBPB) status for protection between
349     + processes of different users. This feature can be controlled through
350     + prctl() per process, or through kernel command line options. This is
351     + an x86 only feature. For more details see below.
352     +
353     + =================== ========================================================
354     + 'IBPB: disabled' IBPB unused
355     + 'IBPB: always-on' Use IBPB on all tasks
356     + 'IBPB: conditional' Use IBPB on SECCOMP or indirect branch restricted tasks
357     + =================== ========================================================
358     +
359     + - Single threaded indirect branch prediction (STIBP) status for protection
360     + between different hyper threads. This feature can be controlled through
361     + prctl per process, or through kernel command line options. This is an
362     + x86 only feature. For more details see below.
363     +
364     + ==================== ========================================================
365     + 'STIBP: disabled' STIBP unused
366     + 'STIBP: forced' Use STIBP on all tasks
367     + 'STIBP: conditional' Use STIBP on SECCOMP or indirect branch restricted tasks
368     + ==================== ========================================================
369     +
370     + - Return stack buffer (RSB) protection status:
371     +
372     + ============= ===========================================
373     + 'RSB filling' Protection of RSB on context switch enabled
374     + ============= ===========================================
375     +
376     +Full mitigation might require a microcode update from the CPU
377     +vendor. When the necessary microcode is not available, the kernel will
378     +report the system as vulnerable.
379     +
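+A small userspace sketch that prints the reported status of both
+vulnerability files::
+
+	#include <stdio.h>
+
+	int main(void)
+	{
+		static const char *files[] = {
+			"/sys/devices/system/cpu/vulnerabilities/spectre_v1",
+			"/sys/devices/system/cpu/vulnerabilities/spectre_v2",
+		};
+		char line[256];
+
+		for (int i = 0; i < 2; i++) {
+			FILE *f = fopen(files[i], "r");
+
+			if (f && fgets(line, sizeof(line), f))
+				printf("%s", line);
+			if (f)
+				fclose(f);
+		}
+		return 0;
+	}
+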
380     +Turning on mitigation for Spectre variant 1 and Spectre variant 2
381     +-----------------------------------------------------------------
382     +
383     +1. Kernel mitigation
384     +^^^^^^^^^^^^^^^^^^^^
385     +
386     + For Spectre variant 1, vulnerable kernel code (as determined
387     + by code audit or scanning tools) is annotated on a case by case
388     + basis to use nospec accessor macros for bounds clipping :ref:`[2]
389     + <spec_ref2>` to avoid any usable disclosure gadgets. However, it may
390     + not cover all attack vectors for Spectre variant 1.
391     +
392     + For Spectre variant 2 mitigation, the compiler turns indirect calls or
393     + jumps in the kernel into equivalent return trampolines (retpolines)
394     + :ref:`[3] <spec_ref3>` :ref:`[9] <spec_ref9>` to go to the target
395     + addresses. Speculative execution paths under retpolines are trapped
396     + in an infinite loop to prevent any speculative execution jumping to
397     + a gadget.
398     +
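+ Conceptually, a retpoline replaces an indirect jump through a register
+ with a construct like the following sketch (illustrative only; real
+ kernels use the compiler-emitted __x86_indirect_thunk_* helpers
+ described in :ref:`[3] <spec_ref3>` :ref:`[9] <spec_ref9>`)::
+
+	/* Tail-jump to fn without an indirect branch (x86-64 sketch). */
+	static void jump_via_retpoline(void (*fn)(void))
+	{
+		asm volatile("call 1f\n\t"
+			     "2: pause\n\t"		/* speculation is   */
+			     "lfence\n\t"		/* trapped in this  */
+			     "jmp 2b\n\t"		/* infinite loop    */
+			     "1: mov %0, (%%rsp)\n\t"	/* replace ret addr */
+			     "ret"			/* jumps to fn      */
+			     : : "r" (fn) : "memory");
+	}
+
+ The return's prediction comes from the return stack buffer, which
+ points into the capture loop, so any speculation is harmlessly trapped
+ while the architectural path proceeds to the real target.
+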
399     + To turn on retpoline mitigation on a vulnerable CPU, the kernel
400     + needs to be compiled with a gcc compiler that supports the
401     + -mindirect-branch=thunk-extern -mindirect-branch-register options.
402     + If the kernel is compiled with a Clang compiler, the compiler needs
403     + to support the -mretpoline-external-thunk option. The kernel config
404     + CONFIG_RETPOLINE needs to be turned on, and the CPU needs to run with
405     + the latest updated microcode.
406     +
407     + On Intel Skylake-era systems the mitigation covers most, but not all,
408     + cases. See :ref:`[3] <spec_ref3>` for more details.
409     +
410     + On CPUs with hardware mitigation for Spectre variant 2 (e.g. Enhanced
411     + IBRS on x86), retpoline is automatically disabled at run time.
412     +
413     + The retpoline mitigation is turned on by default on vulnerable
414     + CPUs. It can be forced on or off by the administrator
415     + via the kernel command line and sysfs control files. See
416     + :ref:`spectre_mitigation_control_command_line`.
417     +
418     + On x86, indirect branch restricted speculation is turned on by default
419     + before invoking any firmware code to prevent Spectre variant 2 exploits
420     + using the firmware.
421     +
422     + Using kernel address space randomization (CONFIG_RANDOMIZE_BASE=y
423     + and CONFIG_SLAB_FREELIST_RANDOM=y in the kernel configuration) makes
424     + attacks on the kernel generally more difficult.
425     +
426     +2. User program mitigation
427     +^^^^^^^^^^^^^^^^^^^^^^^^^^
428     +
429     + User programs can mitigate Spectre variant 1 using LFENCE or "bounds
430     + clipping". For more details see :ref:`[2] <spec_ref2>`.
431     +
432     + For Spectre variant 2 mitigation, individual user programs
433     + can be compiled with return trampolines for indirect branches.
434     + This protects them from consuming poisoned entries in the branch
435     + target buffer left by malicious software. Alternatively, the
436     + programs can disable their indirect branch speculation via prctl()
437     + (See :ref:`Documentation/userspace-api/spec_ctrl.rst <set_spec_ctrl>`).
438     + On x86, this will turn on STIBP to guard against attacks from the
439     + sibling thread when the user program is running, and use IBPB to
440     + flush the branch target buffer when switching to/from the program.
441     +
442     + Restricting indirect branch speculation on a user program will
443     + also prevent the program from launching a variant 2 attack
444     + on x86. All sandboxed SECCOMP programs have indirect branch
445     + speculation restricted by default. Administrators can change
446     + that behavior via the kernel command line and sysfs control files.
447     + See :ref:`spectre_mitigation_control_command_line`.
448     +
449     + Programs that disable their indirect branch speculation will have
450     + more overhead and run slower.
451     +
452     + User programs should use address space randomization
453     + (/proc/sys/kernel/randomize_va_space = 1 or 2) to make attacks more
454     + difficult.
455     +
456     +3. VM mitigation
457     +^^^^^^^^^^^^^^^^
458     +
459     + Within the kernel, Spectre variant 1 attacks from rogue guests are
460     + mitigated on a case by case basis in VM exit paths. Vulnerable code
461     + uses nospec accessor macros for "bounds clipping", to avoid any
462     + usable disclosure gadgets. However, this may not cover all variant
463     + 1 attack vectors.
464     +
465     + For Spectre variant 2 attacks from rogue guests to the kernel, the
466     + Linux kernel uses retpoline or Enhanced IBRS to prevent consumption of
467     + poisoned entries in branch target buffer left by rogue guests. It also
468     + flushes the return stack buffer on every VM exit. This prevents a
469     + return stack buffer underflow that would let the poisoned branch target
470     + buffer be used, and clears any poisoned RSB entries a guest left behind.
471     +
472     + To mitigate guest-to-guest attacks in the same CPU hardware thread,
473     + the branch target buffer is sanitized by flushing before switching
474     + to a new guest on a CPU.
475     +
476     + The above mitigations are turned on by default on vulnerable CPUs.
477     +
478     + To mitigate guest-to-guest attacks from the sibling thread when SMT
479     + is in use, an untrusted guest running in the sibling thread can have
480     + its indirect branch speculation disabled by the administrator via prctl().
481     +
482     + The kernel also allows guests to use any microcode based mitigation
483     + they choose (such as IBPB or STIBP on x86) to protect themselves.
484     +
485     +.. _spectre_mitigation_control_command_line:
486     +
487     +Mitigation control on the kernel command line
488     +---------------------------------------------
489     +
490     +Spectre variant 2 mitigation can be disabled or force enabled at the
491     +kernel command line.
492     +
493     + nospectre_v2
494     +
495     + [X86] Disable all mitigations for the Spectre variant 2
496     + (indirect branch prediction) vulnerability. System may
497     + allow data leaks with this option, which is equivalent
498     + to spectre_v2=off.
499     +
500     +
501     + spectre_v2=
502     +
503     + [X86] Control mitigation of Spectre variant 2
504     + (indirect branch speculation) vulnerability.
505     + The default operation protects the kernel from
506     + user space attacks.
507     +
508     + on
509     + unconditionally enable, implies
510     + spectre_v2_user=on
511     + off
512     + unconditionally disable, implies
513     + spectre_v2_user=off
514     + auto
515     + kernel detects whether your CPU model is
516     + vulnerable
517     +
518     + Selecting 'on' will, and 'auto' may, choose a
519     + mitigation method at run time according to the
520     + CPU, the available microcode, the setting of the
521     + CONFIG_RETPOLINE configuration option, and the
522     + compiler with which the kernel was built.
523     +
524     + Selecting 'on' will also enable the mitigation
525     + against user space to user space task attacks.
526     +
527     + Selecting 'off' will disable both the kernel and
528     + the user space protections.
529     +
530     + Specific mitigations can also be selected manually:
531     +
532     + retpoline
533     + replace indirect branches
534     + retpoline,generic
535     + Google's original retpoline
536     + retpoline,amd
537     + AMD-specific minimal thunk
538     +
539     + Not specifying this option is equivalent to
540     + spectre_v2=auto.
541     +
542     +For user space mitigation:
543     +
544     + spectre_v2_user=
545     +
546     + [X86] Control mitigation of Spectre variant 2
547     + (indirect branch speculation) vulnerability between
548     + user space tasks
549     +
550     + on
551     + Unconditionally enable mitigations. Is
552     + enforced by spectre_v2=on
553     +
554     + off
555     + Unconditionally disable mitigations. Is
556     + enforced by spectre_v2=off
557     +
558     + prctl
559     + Indirect branch speculation is enabled,
560     + but mitigation can be enabled via prctl
561     + per thread. The mitigation control state
562     + is inherited on fork.
563     +
564     + prctl,ibpb
565     + Like "prctl" above, but only STIBP is
566     + controlled per thread. IBPB is issued
567     + always when switching between different user
568     + space processes.
569     +
570     + seccomp
571     + Same as "prctl" above, but all seccomp
572     + threads will enable the mitigation unless
573     + they explicitly opt out.
574     +
575     + seccomp,ibpb
576     + Like "seccomp" above, but only STIBP is
577     + controlled per thread. IBPB is issued
578     + always when switching between different
579     + user space processes.
580     +
581     + auto
582     + Kernel selects the mitigation depending on
583     + the available CPU features and vulnerability.
584     +
585     + Default mitigation:
586     + If CONFIG_SECCOMP=y then "seccomp", otherwise "prctl"
587     +
588     + Not specifying this option is equivalent to
589     + spectre_v2_user=auto.
590     +
591     + In general the kernel by default selects
592     + reasonable mitigations for the current CPU. To
593     + disable Spectre variant 2 mitigations, boot with
594     + spectre_v2=off. Spectre variant 1 mitigations
595     + cannot be disabled.
596     +
597     +Mitigation selection guide
598     +--------------------------
599     +
600     +1. Trusted userspace
601     +^^^^^^^^^^^^^^^^^^^^
602     +
603     + If all userspace applications are from trusted sources and do not
604     + execute externally supplied untrusted code, then the mitigations can
605     + be disabled.
606     +
607     +2. Protect sensitive programs
608     +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
609     +
610     + For security-sensitive programs that have secrets (e.g. crypto
611     + keys), protection against Spectre variant 2 can be put in place by
612     + disabling indirect branch speculation when the program is running
613     + (See :ref:`Documentation/userspace-api/spec_ctrl.rst <set_spec_ctrl>`).
614     +
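+ The current per-task state can be queried the same way (a sketch)::
+
+	#include <sys/prctl.h>
+	#include <linux/prctl.h>
+
+	/* Returns PR_SPEC_PRCTL ORed with PR_SPEC_ENABLE,
+	 * PR_SPEC_DISABLE or PR_SPEC_FORCE_DISABLE.
+	 */
+	int state = prctl(PR_GET_SPECULATION_CTRL,
+			  PR_SPEC_INDIRECT_BRANCH, 0, 0, 0);
+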
615     +3. Sandbox untrusted programs
616     +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
617     +
618     + Untrusted programs that could be a source of attacks can be cordoned
619     + off by disabling their indirect branch speculation when they are run
620     + (See :ref:`Documentation/userspace-api/spec_ctrl.rst <set_spec_ctrl>`).
621     + This prevents untrusted programs from polluting the branch target
622     + buffer. All programs running in SECCOMP sandboxes have indirect
623     + branch speculation restricted by default. This behavior can be
624     + changed via the kernel command line and sysfs control files. See
625     + :ref:`spectre_mitigation_control_command_line`.
626     +
627     +4. High security mode
628     +^^^^^^^^^^^^^^^^^^^^^
629     +
630     + All Spectre variant 2 mitigations can be forced on
631     + at boot time for all programs (See the "on" option in
632     + :ref:`spectre_mitigation_control_command_line`). This will add
633     + overhead as indirect branch speculations for all programs will be
634     + restricted.
635     +
636     + On x86, the branch target buffer will be flushed with IBPB when switching
637     + to a new program. STIBP is left on all the time to protect programs
638     + against variant 2 attacks originating from programs running on
639     + sibling threads.
640     +
641     + Alternatively, STIBP can be used only when running programs
642     + whose indirect branch speculation is explicitly disabled,
643     + while IBPB is still used all the time when switching to a new
644     + program to clear the branch target buffer (See "ibpb" option in
645     + :ref:`spectre_mitigation_control_command_line`). This "ibpb" option
646     + has less performance cost than the "on" option, which leaves STIBP
647     + on all the time.
648     +
649     +References on Spectre
650     +---------------------
651     +
652     +Intel white papers:
653     +
654     +.. _spec_ref1:
655     +
656     +[1] `Intel analysis of speculative execution side channels <https://newsroom.intel.com/wp-content/uploads/sites/11/2018/01/Intel-Analysis-of-Speculative-Execution-Side-Channels.pdf>`_.
657     +
658     +.. _spec_ref2:
659     +
660     +[2] `Bounds check bypass <https://software.intel.com/security-software-guidance/software-guidance/bounds-check-bypass>`_.
661     +
662     +.. _spec_ref3:
663     +
664     +[3] `Deep dive: Retpoline: A branch target injection mitigation <https://software.intel.com/security-software-guidance/insights/deep-dive-retpoline-branch-target-injection-mitigation>`_.
665     +
666     +.. _spec_ref4:
667     +
668     +[4] `Deep Dive: Single Thread Indirect Branch Predictors <https://software.intel.com/security-software-guidance/insights/deep-dive-single-thread-indirect-branch-predictors>`_.
669     +
670     +AMD white papers:
671     +
672     +.. _spec_ref5:
673     +
674     +[5] `AMD64 technology indirect branch control extension <https://developer.amd.com/wp-content/resources/Architecture_Guidelines_Update_Indirect_Branch_Control.pdf>`_.
675     +
676     +.. _spec_ref6:
677     +
678     +[6] `Software techniques for managing speculation on AMD processors <https://developer.amd.com/wp-content/resources/90343-B_SoftwareTechniquesforManagingSpeculation_WP_7-18Update_FNL.pdf>`_.
679     +
680     +ARM white papers:
681     +
682     +.. _spec_ref7:
683     +
684     +[7] `Cache speculation side-channels <https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/download-the-whitepaper>`_.
685     +
686     +.. _spec_ref8:
687     +
688     +[8] `Cache speculation issues update <https://developer.arm.com/support/arm-security-updates/speculative-processor-vulnerability/latest-updates/cache-speculation-issues-update>`_.
689     +
690     +Google white paper:
691     +
692     +.. _spec_ref9:
693     +
694     +[9] `Retpoline: a software construct for preventing branch-target-injection <https://support.google.com/faqs/answer/7625886>`_.
695     +
696     +MIPS white paper:
697     +
698     +.. _spec_ref10:
699     +
700     +[10] `MIPS: response on speculative execution and side channel vulnerabilities <https://www.mips.com/blog/mips-response-on-speculative-execution-and-side-channel-vulnerabilities/>`_.
701     +
702     +Academic papers:
703     +
704     +.. _spec_ref11:
705     +
706     +[11] `Spectre Attacks: Exploiting Speculative Execution <https://spectreattack.com/spectre.pdf>`_.
707     +
708     +.. _spec_ref12:
709     +
710     +[12] `NetSpectre: Read Arbitrary Memory over Network <https://arxiv.org/abs/1807.10535>`_.
711     +
712     +.. _spec_ref13:
713     +
714     +[13] `Spectre Returns! Speculation Attacks using the Return Stack Buffer <https://www.usenix.org/system/files/conference/woot18/woot18-paper-koruyeh.pdf>`_.
715     diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
716     index 138f6664b2e2..0082d1e56999 100644
717     --- a/Documentation/admin-guide/kernel-parameters.txt
718     +++ b/Documentation/admin-guide/kernel-parameters.txt
719     @@ -5102,12 +5102,6 @@
720     emulate [default] Vsyscalls turn into traps and are
721     emulated reasonably safely.
722    
723     - native Vsyscalls are native syscall instructions.
724     - This is a little bit faster than trapping
725     - and makes a few dynamic recompilers work
726     - better than they would in emulation mode.
727     - It also makes exploits much easier to write.
728     -
729     none Vsyscalls don't work at all. This makes
730     them quite hard to use for exploits but
731     might break your system.
732     diff --git a/Documentation/userspace-api/spec_ctrl.rst b/Documentation/userspace-api/spec_ctrl.rst
733     index 1129c7550a48..7ddd8f667459 100644
734     --- a/Documentation/userspace-api/spec_ctrl.rst
735     +++ b/Documentation/userspace-api/spec_ctrl.rst
736     @@ -49,6 +49,8 @@ If PR_SPEC_PRCTL is set, then the per-task control of the mitigation is
737     available. If not set, prctl(PR_SET_SPECULATION_CTRL) for the speculation
738     misfeature will fail.
739    
740     +.. _set_spec_ctrl:
741     +
742     PR_SET_SPECULATION_CTRL
743     -----------------------
744    
745     diff --git a/Makefile b/Makefile
746     index 3e4868a6498b..d8f5dbfd6b76 100644
747     --- a/Makefile
748     +++ b/Makefile
749     @@ -1,7 +1,7 @@
750     # SPDX-License-Identifier: GPL-2.0
751     VERSION = 5
752     PATCHLEVEL = 2
753     -SUBLEVEL = 0
754     +SUBLEVEL = 1
755     EXTRAVERSION =
756     NAME = Bobtail Squid
757    
758     diff --git a/arch/x86/kernel/ptrace.c b/arch/x86/kernel/ptrace.c
759     index a166c960bc9e..e9d0bc3a5e88 100644
760     --- a/arch/x86/kernel/ptrace.c
761     +++ b/arch/x86/kernel/ptrace.c
762     @@ -25,6 +25,7 @@
763     #include <linux/rcupdate.h>
764     #include <linux/export.h>
765     #include <linux/context_tracking.h>
766     +#include <linux/nospec.h>
767    
768     #include <linux/uaccess.h>
769     #include <asm/pgtable.h>
770     @@ -643,9 +644,11 @@ static unsigned long ptrace_get_debugreg(struct task_struct *tsk, int n)
771     {
772     struct thread_struct *thread = &tsk->thread;
773     unsigned long val = 0;
774     + int index = n;
775    
776     if (n < HBP_NUM) {
777     - struct perf_event *bp = thread->ptrace_bps[n];
778     + index = array_index_nospec(index, HBP_NUM);
779     + struct perf_event *bp = thread->ptrace_bps[index];
780    
781     if (bp)
782     val = bp->hw.info.address;
783     diff --git a/arch/x86/kernel/tls.c b/arch/x86/kernel/tls.c
784     index a5b802a12212..71d3fef1edc9 100644
785     --- a/arch/x86/kernel/tls.c
786     +++ b/arch/x86/kernel/tls.c
787     @@ -5,6 +5,7 @@
788     #include <linux/user.h>
789     #include <linux/regset.h>
790     #include <linux/syscalls.h>
791     +#include <linux/nospec.h>
792    
793     #include <linux/uaccess.h>
794     #include <asm/desc.h>
795     @@ -220,6 +221,7 @@ int do_get_thread_area(struct task_struct *p, int idx,
796     struct user_desc __user *u_info)
797     {
798     struct user_desc info;
799     + int index;
800    
801     if (idx == -1 && get_user(idx, &u_info->entry_number))
802     return -EFAULT;
803     @@ -227,8 +229,11 @@ int do_get_thread_area(struct task_struct *p, int idx,
804     if (idx < GDT_ENTRY_TLS_MIN || idx > GDT_ENTRY_TLS_MAX)
805     return -EINVAL;
806    
807     - fill_user_desc(&info, idx,
808     - &p->thread.tls_array[idx - GDT_ENTRY_TLS_MIN]);
809     + index = idx - GDT_ENTRY_TLS_MIN;
810     + index = array_index_nospec(index,
811     + GDT_ENTRY_TLS_MAX - GDT_ENTRY_TLS_MIN + 1);
812     +
813     + fill_user_desc(&info, idx, &p->thread.tls_array[index]);
814    
815     if (copy_to_user(u_info, &info, sizeof(info)))
816     return -EFAULT;
817     diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
818     index 0850b5149345..4d1517022a14 100644
819     --- a/arch/x86/kernel/vmlinux.lds.S
820     +++ b/arch/x86/kernel/vmlinux.lds.S
821     @@ -141,10 +141,10 @@ SECTIONS
822     *(.text.__x86.indirect_thunk)
823     __indirect_thunk_end = .;
824     #endif
825     - } :text = 0x9090
826    
827     - /* End of text section */
828     - _etext = .;
829     + /* End of text section */
830     + _etext = .;
831     + } :text = 0x9090
832    
833     NOTES :text :note
834    
835     diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
836     index f9269ae6da9c..e5db3856b194 100644
837     --- a/block/bfq-iosched.c
838     +++ b/block/bfq-iosched.c
839     @@ -4584,6 +4584,7 @@ static void bfq_exit_icq_bfqq(struct bfq_io_cq *bic, bool is_sync)
840     unsigned long flags;
841    
842     spin_lock_irqsave(&bfqd->lock, flags);
843     + bfqq->bic = NULL;
844     bfq_exit_bfqq(bfqd, bfqq);
845     bic_set_bfqq(bic, NULL, is_sync);
846     spin_unlock_irqrestore(&bfqd->lock, flags);
847     diff --git a/block/bio.c b/block/bio.c
848     index ce797d73bb43..67bba12d273b 100644
849     --- a/block/bio.c
850     +++ b/block/bio.c
851     @@ -731,7 +731,7 @@ static int __bio_add_pc_page(struct request_queue *q, struct bio *bio,
852     }
853     }
854    
855     - if (bio_full(bio))
856     + if (bio_full(bio, len))
857     return 0;
858    
859     if (bio->bi_phys_segments >= queue_max_segments(q))
860     @@ -807,7 +807,7 @@ void __bio_add_page(struct bio *bio, struct page *page,
861     struct bio_vec *bv = &bio->bi_io_vec[bio->bi_vcnt];
862    
863     WARN_ON_ONCE(bio_flagged(bio, BIO_CLONED));
864     - WARN_ON_ONCE(bio_full(bio));
865     + WARN_ON_ONCE(bio_full(bio, len));
866    
867     bv->bv_page = page;
868     bv->bv_offset = off;
869     @@ -834,7 +834,7 @@ int bio_add_page(struct bio *bio, struct page *page,
870     bool same_page = false;
871    
872     if (!__bio_try_merge_page(bio, page, len, offset, &same_page)) {
873     - if (bio_full(bio))
874     + if (bio_full(bio, len))
875     return 0;
876     __bio_add_page(bio, page, len, offset);
877     }
878     @@ -922,7 +922,7 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
879     if (same_page)
880     put_page(page);
881     } else {
882     - if (WARN_ON_ONCE(bio_full(bio)))
883     + if (WARN_ON_ONCE(bio_full(bio, len)))
884     return -EINVAL;
885     __bio_add_page(bio, page, len, offset);
886     }
887     @@ -966,7 +966,7 @@ int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
888     ret = __bio_iov_bvec_add_pages(bio, iter);
889     else
890     ret = __bio_iov_iter_get_pages(bio, iter);
891     - } while (!ret && iov_iter_count(iter) && !bio_full(bio));
892     + } while (!ret && iov_iter_count(iter) && !bio_full(bio, 0));
893    
894     if (iov_iter_bvec_no_ref(iter))
895     bio_set_flag(bio, BIO_NO_PAGE_REF);
896     diff --git a/crypto/lrw.c b/crypto/lrw.c
897     index 58009cf63a6e..be829f6afc8e 100644
898     --- a/crypto/lrw.c
899     +++ b/crypto/lrw.c
900     @@ -384,7 +384,7 @@ static int create(struct crypto_template *tmpl, struct rtattr **tb)
901     inst->alg.base.cra_priority = alg->base.cra_priority;
902     inst->alg.base.cra_blocksize = LRW_BLOCK_SIZE;
903     inst->alg.base.cra_alignmask = alg->base.cra_alignmask |
904     - (__alignof__(__be32) - 1);
905     + (__alignof__(be128) - 1);
906    
907     inst->alg.ivsize = LRW_BLOCK_SIZE;
908     inst->alg.min_keysize = crypto_skcipher_alg_min_keysize(alg) +
909     diff --git a/drivers/android/binder.c b/drivers/android/binder.c
910     index bc26b5511f0a..38a59a630cd4 100644
911     --- a/drivers/android/binder.c
912     +++ b/drivers/android/binder.c
913     @@ -2059,10 +2059,9 @@ static size_t binder_get_object(struct binder_proc *proc,
914    
915     read_size = min_t(size_t, sizeof(*object), buffer->data_size - offset);
916     if (offset > buffer->data_size || read_size < sizeof(*hdr) ||
917     - !IS_ALIGNED(offset, sizeof(u32)))
918     + binder_alloc_copy_from_buffer(&proc->alloc, object, buffer,
919     + offset, read_size))
920     return 0;
921     - binder_alloc_copy_from_buffer(&proc->alloc, object, buffer,
922     - offset, read_size);
923    
924     /* Ok, now see if we read a complete object. */
925     hdr = &object->hdr;
926     @@ -2131,8 +2130,10 @@ static struct binder_buffer_object *binder_validate_ptr(
927     return NULL;
928    
929     buffer_offset = start_offset + sizeof(binder_size_t) * index;
930     - binder_alloc_copy_from_buffer(&proc->alloc, &object_offset,
931     - b, buffer_offset, sizeof(object_offset));
932     + if (binder_alloc_copy_from_buffer(&proc->alloc, &object_offset,
933     + b, buffer_offset,
934     + sizeof(object_offset)))
935     + return NULL;
936     object_size = binder_get_object(proc, b, object_offset, object);
937     if (!object_size || object->hdr.type != BINDER_TYPE_PTR)
938     return NULL;
939     @@ -2212,10 +2213,12 @@ static bool binder_validate_fixup(struct binder_proc *proc,
940     return false;
941     last_min_offset = last_bbo->parent_offset + sizeof(uintptr_t);
942     buffer_offset = objects_start_offset +
943     - sizeof(binder_size_t) * last_bbo->parent,
944     - binder_alloc_copy_from_buffer(&proc->alloc, &last_obj_offset,
945     - b, buffer_offset,
946     - sizeof(last_obj_offset));
947     + sizeof(binder_size_t) * last_bbo->parent;
948     + if (binder_alloc_copy_from_buffer(&proc->alloc,
949     + &last_obj_offset,
950     + b, buffer_offset,
951     + sizeof(last_obj_offset)))
952     + return false;
953     }
954     return (fixup_offset >= last_min_offset);
955     }
956     @@ -2301,15 +2304,15 @@ static void binder_transaction_buffer_release(struct binder_proc *proc,
957     for (buffer_offset = off_start_offset; buffer_offset < off_end_offset;
958     buffer_offset += sizeof(binder_size_t)) {
959     struct binder_object_header *hdr;
960     - size_t object_size;
961     + size_t object_size = 0;
962     struct binder_object object;
963     binder_size_t object_offset;
964    
965     - binder_alloc_copy_from_buffer(&proc->alloc, &object_offset,
966     - buffer, buffer_offset,
967     - sizeof(object_offset));
968     - object_size = binder_get_object(proc, buffer,
969     - object_offset, &object);
970     + if (!binder_alloc_copy_from_buffer(&proc->alloc, &object_offset,
971     + buffer, buffer_offset,
972     + sizeof(object_offset)))
973     + object_size = binder_get_object(proc, buffer,
974     + object_offset, &object);
975     if (object_size == 0) {
976     pr_err("transaction release %d bad object at offset %lld, size %zd\n",
977     debug_id, (u64)object_offset, buffer->data_size);
978     @@ -2432,15 +2435,16 @@ static void binder_transaction_buffer_release(struct binder_proc *proc,
979     for (fd_index = 0; fd_index < fda->num_fds;
980     fd_index++) {
981     u32 fd;
982     + int err;
983     binder_size_t offset = fda_offset +
984     fd_index * sizeof(fd);
985    
986     - binder_alloc_copy_from_buffer(&proc->alloc,
987     - &fd,
988     - buffer,
989     - offset,
990     - sizeof(fd));
991     - binder_deferred_fd_close(fd);
992     + err = binder_alloc_copy_from_buffer(
993     + &proc->alloc, &fd, buffer,
994     + offset, sizeof(fd));
995     + WARN_ON(err);
996     + if (!err)
997     + binder_deferred_fd_close(fd);
998     }
999     } break;
1000     default:
1001     @@ -2683,11 +2687,12 @@ static int binder_translate_fd_array(struct binder_fd_array_object *fda,
1002     int ret;
1003     binder_size_t offset = fda_offset + fdi * sizeof(fd);
1004    
1005     - binder_alloc_copy_from_buffer(&target_proc->alloc,
1006     - &fd, t->buffer,
1007     - offset, sizeof(fd));
1008     - ret = binder_translate_fd(fd, offset, t, thread,
1009     - in_reply_to);
1010     + ret = binder_alloc_copy_from_buffer(&target_proc->alloc,
1011     + &fd, t->buffer,
1012     + offset, sizeof(fd));
1013     + if (!ret)
1014     + ret = binder_translate_fd(fd, offset, t, thread,
1015     + in_reply_to);
1016     if (ret < 0)
1017     return ret;
1018     }
1019     @@ -2740,8 +2745,12 @@ static int binder_fixup_parent(struct binder_transaction *t,
1020     }
1021     buffer_offset = bp->parent_offset +
1022     (uintptr_t)parent->buffer - (uintptr_t)b->user_data;
1023     - binder_alloc_copy_to_buffer(&target_proc->alloc, b, buffer_offset,
1024     - &bp->buffer, sizeof(bp->buffer));
1025     + if (binder_alloc_copy_to_buffer(&target_proc->alloc, b, buffer_offset,
1026     + &bp->buffer, sizeof(bp->buffer))) {
1027     + binder_user_error("%d:%d got transaction with invalid parent offset\n",
1028     + proc->pid, thread->pid);
1029     + return -EINVAL;
1030     + }
1031    
1032     return 0;
1033     }
1034     @@ -3160,15 +3169,20 @@ static void binder_transaction(struct binder_proc *proc,
1035     goto err_binder_alloc_buf_failed;
1036     }
1037     if (secctx) {
1038     + int err;
1039     size_t buf_offset = ALIGN(tr->data_size, sizeof(void *)) +
1040     ALIGN(tr->offsets_size, sizeof(void *)) +
1041     ALIGN(extra_buffers_size, sizeof(void *)) -
1042     ALIGN(secctx_sz, sizeof(u64));
1043    
1044     t->security_ctx = (uintptr_t)t->buffer->user_data + buf_offset;
1045     - binder_alloc_copy_to_buffer(&target_proc->alloc,
1046     - t->buffer, buf_offset,
1047     - secctx, secctx_sz);
1048     + err = binder_alloc_copy_to_buffer(&target_proc->alloc,
1049     + t->buffer, buf_offset,
1050     + secctx, secctx_sz);
1051     + if (err) {
1052     + t->security_ctx = 0;
1053     + WARN_ON(1);
1054     + }
1055     security_release_secctx(secctx, secctx_sz);
1056     secctx = NULL;
1057     }
1058     @@ -3234,11 +3248,16 @@ static void binder_transaction(struct binder_proc *proc,
1059     struct binder_object object;
1060     binder_size_t object_offset;
1061    
1062     - binder_alloc_copy_from_buffer(&target_proc->alloc,
1063     - &object_offset,
1064     - t->buffer,
1065     - buffer_offset,
1066     - sizeof(object_offset));
1067     + if (binder_alloc_copy_from_buffer(&target_proc->alloc,
1068     + &object_offset,
1069     + t->buffer,
1070     + buffer_offset,
1071     + sizeof(object_offset))) {
1072     + return_error = BR_FAILED_REPLY;
1073     + return_error_param = -EINVAL;
1074     + return_error_line = __LINE__;
1075     + goto err_bad_offset;
1076     + }
1077     object_size = binder_get_object(target_proc, t->buffer,
1078     object_offset, &object);
1079     if (object_size == 0 || object_offset < off_min) {
1080     @@ -3262,15 +3281,17 @@ static void binder_transaction(struct binder_proc *proc,
1081    
1082     fp = to_flat_binder_object(hdr);
1083     ret = binder_translate_binder(fp, t, thread);
1084     - if (ret < 0) {
1085     +
1086     + if (ret < 0 ||
1087     + binder_alloc_copy_to_buffer(&target_proc->alloc,
1088     + t->buffer,
1089     + object_offset,
1090     + fp, sizeof(*fp))) {
1091     return_error = BR_FAILED_REPLY;
1092     return_error_param = ret;
1093     return_error_line = __LINE__;
1094     goto err_translate_failed;
1095     }
1096     - binder_alloc_copy_to_buffer(&target_proc->alloc,
1097     - t->buffer, object_offset,
1098     - fp, sizeof(*fp));
1099     } break;
1100     case BINDER_TYPE_HANDLE:
1101     case BINDER_TYPE_WEAK_HANDLE: {
1102     @@ -3278,15 +3299,16 @@ static void binder_transaction(struct binder_proc *proc,
1103    
1104     fp = to_flat_binder_object(hdr);
1105     ret = binder_translate_handle(fp, t, thread);
1106     - if (ret < 0) {
1107     + if (ret < 0 ||
1108     + binder_alloc_copy_to_buffer(&target_proc->alloc,
1109     + t->buffer,
1110     + object_offset,
1111     + fp, sizeof(*fp))) {
1112     return_error = BR_FAILED_REPLY;
1113     return_error_param = ret;
1114     return_error_line = __LINE__;
1115     goto err_translate_failed;
1116     }
1117     - binder_alloc_copy_to_buffer(&target_proc->alloc,
1118     - t->buffer, object_offset,
1119     - fp, sizeof(*fp));
1120     } break;
1121    
1122     case BINDER_TYPE_FD: {
1123     @@ -3296,16 +3318,17 @@ static void binder_transaction(struct binder_proc *proc,
1124     int ret = binder_translate_fd(fp->fd, fd_offset, t,
1125     thread, in_reply_to);
1126    
1127     - if (ret < 0) {
1128     + fp->pad_binder = 0;
1129     + if (ret < 0 ||
1130     + binder_alloc_copy_to_buffer(&target_proc->alloc,
1131     + t->buffer,
1132     + object_offset,
1133     + fp, sizeof(*fp))) {
1134     return_error = BR_FAILED_REPLY;
1135     return_error_param = ret;
1136     return_error_line = __LINE__;
1137     goto err_translate_failed;
1138     }
1139     - fp->pad_binder = 0;
1140     - binder_alloc_copy_to_buffer(&target_proc->alloc,
1141     - t->buffer, object_offset,
1142     - fp, sizeof(*fp));
1143     } break;
1144     case BINDER_TYPE_FDA: {
1145     struct binder_object ptr_object;
1146     @@ -3393,15 +3416,16 @@ static void binder_transaction(struct binder_proc *proc,
1147     num_valid,
1148     last_fixup_obj_off,
1149     last_fixup_min_off);
1150     - if (ret < 0) {
1151     + if (ret < 0 ||
1152     + binder_alloc_copy_to_buffer(&target_proc->alloc,
1153     + t->buffer,
1154     + object_offset,
1155     + bp, sizeof(*bp))) {
1156     return_error = BR_FAILED_REPLY;
1157     return_error_param = ret;
1158     return_error_line = __LINE__;
1159     goto err_translate_failed;
1160     }
1161     - binder_alloc_copy_to_buffer(&target_proc->alloc,
1162     - t->buffer, object_offset,
1163     - bp, sizeof(*bp));
1164     last_fixup_obj_off = object_offset;
1165     last_fixup_min_off = 0;
1166     } break;
1167     @@ -4140,20 +4164,27 @@ static int binder_apply_fd_fixups(struct binder_proc *proc,
1168     trace_binder_transaction_fd_recv(t, fd, fixup->offset);
1169     fd_install(fd, fixup->file);
1170     fixup->file = NULL;
1171     - binder_alloc_copy_to_buffer(&proc->alloc, t->buffer,
1172     - fixup->offset, &fd,
1173     - sizeof(u32));
1174     + if (binder_alloc_copy_to_buffer(&proc->alloc, t->buffer,
1175     + fixup->offset, &fd,
1176     + sizeof(u32))) {
1177     + ret = -EINVAL;
1178     + break;
1179     + }
1180     }
1181     list_for_each_entry_safe(fixup, tmp, &t->fd_fixups, fixup_entry) {
1182     if (fixup->file) {
1183     fput(fixup->file);
1184     } else if (ret) {
1185     u32 fd;
1186     -
1187     - binder_alloc_copy_from_buffer(&proc->alloc, &fd,
1188     - t->buffer, fixup->offset,
1189     - sizeof(fd));
1190     - binder_deferred_fd_close(fd);
1191     + int err;
1192     +
1193     + err = binder_alloc_copy_from_buffer(&proc->alloc, &fd,
1194     + t->buffer,
1195     + fixup->offset,
1196     + sizeof(fd));
1197     + WARN_ON(err);
1198     + if (!err)
1199     + binder_deferred_fd_close(fd);
1200     }
1201     list_del(&fixup->fixup_entry);
1202     kfree(fixup);
1203     @@ -4268,6 +4299,8 @@ retry:
1204     case BINDER_WORK_TRANSACTION_COMPLETE: {
1205     binder_inner_proc_unlock(proc);
1206     cmd = BR_TRANSACTION_COMPLETE;
1207     + kfree(w);
1208     + binder_stats_deleted(BINDER_STAT_TRANSACTION_COMPLETE);
1209     if (put_user(cmd, (uint32_t __user *)ptr))
1210     return -EFAULT;
1211     ptr += sizeof(uint32_t);
1212     @@ -4276,8 +4309,6 @@ retry:
1213     binder_debug(BINDER_DEBUG_TRANSACTION_COMPLETE,
1214     "%d:%d BR_TRANSACTION_COMPLETE\n",
1215     proc->pid, thread->pid);
1216     - kfree(w);
1217     - binder_stats_deleted(BINDER_STAT_TRANSACTION_COMPLETE);
1218     } break;
1219     case BINDER_WORK_NODE: {
1220     struct binder_node *node = container_of(w, struct binder_node, work);
1221     diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
1222     index ce5603c2291c..6d79a1b0d446 100644
1223     --- a/drivers/android/binder_alloc.c
1224     +++ b/drivers/android/binder_alloc.c
1225     @@ -1119,15 +1119,16 @@ binder_alloc_copy_user_to_buffer(struct binder_alloc *alloc,
1226     return 0;
1227     }
1228    
1229     -static void binder_alloc_do_buffer_copy(struct binder_alloc *alloc,
1230     - bool to_buffer,
1231     - struct binder_buffer *buffer,
1232     - binder_size_t buffer_offset,
1233     - void *ptr,
1234     - size_t bytes)
1235     +static int binder_alloc_do_buffer_copy(struct binder_alloc *alloc,
1236     + bool to_buffer,
1237     + struct binder_buffer *buffer,
1238     + binder_size_t buffer_offset,
1239     + void *ptr,
1240     + size_t bytes)
1241     {
1242     /* All copies must be 32-bit aligned and 32-bit size */
1243     - BUG_ON(!check_buffer(alloc, buffer, buffer_offset, bytes));
1244     + if (!check_buffer(alloc, buffer, buffer_offset, bytes))
1245     + return -EINVAL;
1246    
1247     while (bytes) {
1248     unsigned long size;
1249     @@ -1155,25 +1156,26 @@ static void binder_alloc_do_buffer_copy(struct binder_alloc *alloc,
1250     ptr = ptr + size;
1251     buffer_offset += size;
1252     }
1253     + return 0;
1254     }
1255    
1256     -void binder_alloc_copy_to_buffer(struct binder_alloc *alloc,
1257     - struct binder_buffer *buffer,
1258     - binder_size_t buffer_offset,
1259     - void *src,
1260     - size_t bytes)
1261     +int binder_alloc_copy_to_buffer(struct binder_alloc *alloc,
1262     + struct binder_buffer *buffer,
1263     + binder_size_t buffer_offset,
1264     + void *src,
1265     + size_t bytes)
1266     {
1267     - binder_alloc_do_buffer_copy(alloc, true, buffer, buffer_offset,
1268     - src, bytes);
1269     + return binder_alloc_do_buffer_copy(alloc, true, buffer, buffer_offset,
1270     + src, bytes);
1271     }
1272    
1273     -void binder_alloc_copy_from_buffer(struct binder_alloc *alloc,
1274     - void *dest,
1275     - struct binder_buffer *buffer,
1276     - binder_size_t buffer_offset,
1277     - size_t bytes)
1278     +int binder_alloc_copy_from_buffer(struct binder_alloc *alloc,
1279     + void *dest,
1280     + struct binder_buffer *buffer,
1281     + binder_size_t buffer_offset,
1282     + size_t bytes)
1283     {
1284     - binder_alloc_do_buffer_copy(alloc, false, buffer, buffer_offset,
1285     - dest, bytes);
1286     + return binder_alloc_do_buffer_copy(alloc, false, buffer, buffer_offset,
1287     + dest, bytes);
1288     }
1289    
1290     diff --git a/drivers/android/binder_alloc.h b/drivers/android/binder_alloc.h
1291     index 71bfa95f8e09..db9c1b984695 100644
1292     --- a/drivers/android/binder_alloc.h
1293     +++ b/drivers/android/binder_alloc.h
1294     @@ -159,17 +159,17 @@ binder_alloc_copy_user_to_buffer(struct binder_alloc *alloc,
1295     const void __user *from,
1296     size_t bytes);
1297    
1298     -void binder_alloc_copy_to_buffer(struct binder_alloc *alloc,
1299     - struct binder_buffer *buffer,
1300     - binder_size_t buffer_offset,
1301     - void *src,
1302     - size_t bytes);
1303     -
1304     -void binder_alloc_copy_from_buffer(struct binder_alloc *alloc,
1305     - void *dest,
1306     - struct binder_buffer *buffer,
1307     - binder_size_t buffer_offset,
1308     - size_t bytes);
1309     +int binder_alloc_copy_to_buffer(struct binder_alloc *alloc,
1310     + struct binder_buffer *buffer,
1311     + binder_size_t buffer_offset,
1312     + void *src,
1313     + size_t bytes);
1314     +
1315     +int binder_alloc_copy_from_buffer(struct binder_alloc *alloc,
1316     + void *dest,
1317     + struct binder_buffer *buffer,
1318     + binder_size_t buffer_offset,
1319     + size_t bytes);
1320    
1321     #endif /* _LINUX_BINDER_ALLOC_H */
1322    
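
Taken together, the binder_alloc changes convert a crashing BUG_ON() into a recoverable -EINVAL that every caller now checks, as the binder.c hunks above show. A compilable sketch of the conversion pattern, with hypothetical names:

    #include <errno.h>
    #include <stddef.h>
    #include <string.h>

    static int range_ok(size_t off, size_t len, size_t buf_len)
    {
        return len <= buf_len && off <= buf_len - len;
    }

    /* was: void copy_to_buf(...) with BUG_ON(!range_ok(...)) */
    static int copy_to_buf(char *buf, size_t buf_len, size_t off,
                           const void *src, size_t len)
    {
        if (!range_ok(off, len, buf_len))
            return -EINVAL;     /* caller decides how to unwind */
        memcpy(buf + off, src, len);
        return 0;
    }
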
1323     diff --git a/drivers/char/tpm/tpm-chip.c b/drivers/char/tpm/tpm-chip.c
1324     index 90325e1749fb..d47ad10a35fe 100644
1325     --- a/drivers/char/tpm/tpm-chip.c
1326     +++ b/drivers/char/tpm/tpm-chip.c
1327     @@ -289,15 +289,15 @@ static int tpm_class_shutdown(struct device *dev)
1328     {
1329     struct tpm_chip *chip = container_of(dev, struct tpm_chip, dev);
1330    
1331     + down_write(&chip->ops_sem);
1332     if (chip->flags & TPM_CHIP_FLAG_TPM2) {
1333     - down_write(&chip->ops_sem);
1334     if (!tpm_chip_start(chip)) {
1335     tpm2_shutdown(chip, TPM2_SU_CLEAR);
1336     tpm_chip_stop(chip);
1337     }
1338     - chip->ops = NULL;
1339     - up_write(&chip->ops_sem);
1340     }
1341     + chip->ops = NULL;
1342     + up_write(&chip->ops_sem);
1343    
1344     return 0;
1345     }
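
The tpm_class_shutdown() hunk widens the write-lock scope so that chip->ops is cleared under ops_sem for TPM 1.x chips too, not only on the TPM2 branch. A small pthread sketch of the same lock-scope rule (types and field names hypothetical):

    #include <pthread.h>
    #include <stddef.h>

    struct chip {
        pthread_rwlock_t ops_sem;
        const void *ops;
        int is_tpm2;
    };

    static void chip_shutdown(struct chip *c)
    {
        pthread_rwlock_wrlock(&c->ops_sem);
        if (c->is_tpm2) {
            /* ... issue the TPM2 shutdown command ... */
        }
        c->ops = NULL;      /* now always cleared under the writer lock */
        pthread_rwlock_unlock(&c->ops_sem);
    }
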
1346     diff --git a/drivers/char/tpm/tpm1-cmd.c b/drivers/char/tpm/tpm1-cmd.c
1347     index 85dcf2654d11..faacbe1ffa1a 100644
1348     --- a/drivers/char/tpm/tpm1-cmd.c
1349     +++ b/drivers/char/tpm/tpm1-cmd.c
1350     @@ -510,7 +510,7 @@ struct tpm1_get_random_out {
1351     *
1352     * Return:
1353     * * number of bytes read
1354     - * * -errno or a TPM return code otherwise
1355     + * * -errno (positive TPM return codes are masked to -EIO)
1356     */
1357     int tpm1_get_random(struct tpm_chip *chip, u8 *dest, size_t max)
1358     {
1359     @@ -531,8 +531,11 @@ int tpm1_get_random(struct tpm_chip *chip, u8 *dest, size_t max)
1360    
1361     rc = tpm_transmit_cmd(chip, &buf, sizeof(out->rng_data_len),
1362     "attempting get random");
1363     - if (rc)
1364     + if (rc) {
1365     + if (rc > 0)
1366     + rc = -EIO;
1367     goto out;
1368     + }
1369    
1370     out = (struct tpm1_get_random_out *)&buf.data[TPM_HEADER_SIZE];
1371    
1372     diff --git a/drivers/char/tpm/tpm2-cmd.c b/drivers/char/tpm/tpm2-cmd.c
1373     index 4de49924cfc4..d103545e4055 100644
1374     --- a/drivers/char/tpm/tpm2-cmd.c
1375     +++ b/drivers/char/tpm/tpm2-cmd.c
1376     @@ -297,7 +297,7 @@ struct tpm2_get_random_out {
1377     *
1378     * Return:
1379     * size of the buffer on success,
1380     - * -errno otherwise
1381     + * -errno otherwise (positive TPM return codes are masked to -EIO)
1382     */
1383     int tpm2_get_random(struct tpm_chip *chip, u8 *dest, size_t max)
1384     {
1385     @@ -324,8 +324,11 @@ int tpm2_get_random(struct tpm_chip *chip, u8 *dest, size_t max)
1386     offsetof(struct tpm2_get_random_out,
1387     buffer),
1388     "attempting get random");
1389     - if (err)
1390     + if (err) {
1391     + if (err > 0)
1392     + err = -EIO;
1393     goto out;
1394     + }
1395    
1396     out = (struct tpm2_get_random_out *)
1397     &buf.data[TPM_HEADER_SIZE];
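
Both get-random hunks enforce the same contract: the transmit path may hand back a positive TPM status word, which must not escape to callers that expect only 0 or a negative errno. The normalization, reduced to one illustrative helper:

    #include <errno.h>

    /* collapse positive TPM status words to -EIO; 0/-errno pass through */
    static int normalize_tpm_rc(int rc)
    {
        return (rc > 0) ? -EIO : rc;
    }
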
1398     diff --git a/drivers/crypto/talitos.c b/drivers/crypto/talitos.c
1399     index fbc7bf9d7380..427c78d4d948 100644
1400     --- a/drivers/crypto/talitos.c
1401     +++ b/drivers/crypto/talitos.c
1402     @@ -2339,7 +2339,7 @@ static struct talitos_alg_template driver_algs[] = {
1403     .base = {
1404     .cra_name = "authenc(hmac(sha1),cbc(aes))",
1405     .cra_driver_name = "authenc-hmac-sha1-"
1406     - "cbc-aes-talitos",
1407     + "cbc-aes-talitos-hsna",
1408     .cra_blocksize = AES_BLOCK_SIZE,
1409     .cra_flags = CRYPTO_ALG_ASYNC,
1410     },
1411     @@ -2384,7 +2384,7 @@ static struct talitos_alg_template driver_algs[] = {
1412     .cra_name = "authenc(hmac(sha1),"
1413     "cbc(des3_ede))",
1414     .cra_driver_name = "authenc-hmac-sha1-"
1415     - "cbc-3des-talitos",
1416     + "cbc-3des-talitos-hsna",
1417     .cra_blocksize = DES3_EDE_BLOCK_SIZE,
1418     .cra_flags = CRYPTO_ALG_ASYNC,
1419     },
1420     @@ -2427,7 +2427,7 @@ static struct talitos_alg_template driver_algs[] = {
1421     .base = {
1422     .cra_name = "authenc(hmac(sha224),cbc(aes))",
1423     .cra_driver_name = "authenc-hmac-sha224-"
1424     - "cbc-aes-talitos",
1425     + "cbc-aes-talitos-hsna",
1426     .cra_blocksize = AES_BLOCK_SIZE,
1427     .cra_flags = CRYPTO_ALG_ASYNC,
1428     },
1429     @@ -2472,7 +2472,7 @@ static struct talitos_alg_template driver_algs[] = {
1430     .cra_name = "authenc(hmac(sha224),"
1431     "cbc(des3_ede))",
1432     .cra_driver_name = "authenc-hmac-sha224-"
1433     - "cbc-3des-talitos",
1434     + "cbc-3des-talitos-hsna",
1435     .cra_blocksize = DES3_EDE_BLOCK_SIZE,
1436     .cra_flags = CRYPTO_ALG_ASYNC,
1437     },
1438     @@ -2515,7 +2515,7 @@ static struct talitos_alg_template driver_algs[] = {
1439     .base = {
1440     .cra_name = "authenc(hmac(sha256),cbc(aes))",
1441     .cra_driver_name = "authenc-hmac-sha256-"
1442     - "cbc-aes-talitos",
1443     + "cbc-aes-talitos-hsna",
1444     .cra_blocksize = AES_BLOCK_SIZE,
1445     .cra_flags = CRYPTO_ALG_ASYNC,
1446     },
1447     @@ -2560,7 +2560,7 @@ static struct talitos_alg_template driver_algs[] = {
1448     .cra_name = "authenc(hmac(sha256),"
1449     "cbc(des3_ede))",
1450     .cra_driver_name = "authenc-hmac-sha256-"
1451     - "cbc-3des-talitos",
1452     + "cbc-3des-talitos-hsna",
1453     .cra_blocksize = DES3_EDE_BLOCK_SIZE,
1454     .cra_flags = CRYPTO_ALG_ASYNC,
1455     },
1456     @@ -2689,7 +2689,7 @@ static struct talitos_alg_template driver_algs[] = {
1457     .base = {
1458     .cra_name = "authenc(hmac(md5),cbc(aes))",
1459     .cra_driver_name = "authenc-hmac-md5-"
1460     - "cbc-aes-talitos",
1461     + "cbc-aes-talitos-hsna",
1462     .cra_blocksize = AES_BLOCK_SIZE,
1463     .cra_flags = CRYPTO_ALG_ASYNC,
1464     },
1465     @@ -2732,7 +2732,7 @@ static struct talitos_alg_template driver_algs[] = {
1466     .base = {
1467     .cra_name = "authenc(hmac(md5),cbc(des3_ede))",
1468     .cra_driver_name = "authenc-hmac-md5-"
1469     - "cbc-3des-talitos",
1470     + "cbc-3des-talitos-hsna",
1471     .cra_blocksize = DES3_EDE_BLOCK_SIZE,
1472     .cra_flags = CRYPTO_ALG_ASYNC,
1473     },
1474     diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
1475     index b032d3899fa3..bfc584ada4eb 100644
1476     --- a/drivers/hid/hid-ids.h
1477     +++ b/drivers/hid/hid-ids.h
1478     @@ -1241,6 +1241,7 @@
1479     #define USB_DEVICE_ID_PRIMAX_KEYBOARD 0x4e05
1480     #define USB_DEVICE_ID_PRIMAX_REZEL 0x4e72
1481     #define USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4D0F 0x4d0f
1482     +#define USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4D65 0x4d65
1483     #define USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4E22 0x4e22
1484    
1485    
1486     diff --git a/drivers/hid/hid-quirks.c b/drivers/hid/hid-quirks.c
1487     index 671a285724f9..1549c7a2f04c 100644
1488     --- a/drivers/hid/hid-quirks.c
1489     +++ b/drivers/hid/hid-quirks.c
1490     @@ -130,6 +130,7 @@ static const struct hid_device_id hid_quirks[] = {
1491     { HID_USB_DEVICE(USB_VENDOR_ID_PIXART, USB_DEVICE_ID_PIXART_USB_OPTICAL_MOUSE), HID_QUIRK_ALWAYS_POLL },
1492     { HID_USB_DEVICE(USB_VENDOR_ID_PRIMAX, USB_DEVICE_ID_PRIMAX_MOUSE_4D22), HID_QUIRK_ALWAYS_POLL },
1493     { HID_USB_DEVICE(USB_VENDOR_ID_PRIMAX, USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4D0F), HID_QUIRK_ALWAYS_POLL },
1494     + { HID_USB_DEVICE(USB_VENDOR_ID_PRIMAX, USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4D65), HID_QUIRK_ALWAYS_POLL },
1495     { HID_USB_DEVICE(USB_VENDOR_ID_PRIMAX, USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4E22), HID_QUIRK_ALWAYS_POLL },
1496     { HID_USB_DEVICE(USB_VENDOR_ID_PRODIGE, USB_DEVICE_ID_PRODIGE_CORDLESS), HID_QUIRK_NOGET },
1497     { HID_USB_DEVICE(USB_VENDOR_ID_QUANTA, USB_DEVICE_ID_QUANTA_OPTICAL_TOUCH_3001), HID_QUIRK_NOGET },
1498     diff --git a/drivers/hwtracing/coresight/coresight-etb10.c b/drivers/hwtracing/coresight/coresight-etb10.c
1499     index 4ee4c80a4354..543cc3d36e1d 100644
1500     --- a/drivers/hwtracing/coresight/coresight-etb10.c
1501     +++ b/drivers/hwtracing/coresight/coresight-etb10.c
1502     @@ -373,12 +373,10 @@ static void *etb_alloc_buffer(struct coresight_device *csdev,
1503     struct perf_event *event, void **pages,
1504     int nr_pages, bool overwrite)
1505     {
1506     - int node, cpu = event->cpu;
1507     + int node;
1508     struct cs_buffers *buf;
1509    
1510     - if (cpu == -1)
1511     - cpu = smp_processor_id();
1512     - node = cpu_to_node(cpu);
1513     + node = (event->cpu == -1) ? NUMA_NO_NODE : cpu_to_node(event->cpu);
1514    
1515     buf = kzalloc_node(sizeof(struct cs_buffers), GFP_KERNEL, node);
1516     if (!buf)
1517     diff --git a/drivers/hwtracing/coresight/coresight-funnel.c b/drivers/hwtracing/coresight/coresight-funnel.c
1518     index 16b0c0e1e43a..ad6e16c96263 100644
1519     --- a/drivers/hwtracing/coresight/coresight-funnel.c
1520     +++ b/drivers/hwtracing/coresight/coresight-funnel.c
1521     @@ -241,6 +241,7 @@ static int funnel_probe(struct device *dev, struct resource *res)
1522     }
1523    
1524     pm_runtime_put(dev);
1525     + ret = 0;
1526    
1527     out_disable_clk:
1528     if (ret && !IS_ERR_OR_NULL(drvdata->atclk))
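
The funnel_probe() hunk makes success explicit before falling through into a cleanup label that both tests ret and returns it. A self-contained sketch of the general hazard this guards against (step names and return values are invented for illustration):

    static int step_one(void) { return 0; }
    static int step_two(void) { return 3; }    /* e.g. a port count */
    static void undo_step_one(void) { }

    static int probe(void)
    {
        int ret;

        ret = step_one();
        if (ret)
            goto out;

        ret = step_two();
        if (ret < 0)
            goto out;

        ret = 0;    /* without this, the shared label below would treat
                       a leftover positive value as failure */
    out:
        if (ret)
            undo_step_one();
        return ret;
    }
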
1529     diff --git a/drivers/hwtracing/coresight/coresight-tmc-etf.c b/drivers/hwtracing/coresight/coresight-tmc-etf.c
1530     index 2527b5d3b65e..8de109de171f 100644
1531     --- a/drivers/hwtracing/coresight/coresight-tmc-etf.c
1532     +++ b/drivers/hwtracing/coresight/coresight-tmc-etf.c
1533     @@ -378,12 +378,10 @@ static void *tmc_alloc_etf_buffer(struct coresight_device *csdev,
1534     struct perf_event *event, void **pages,
1535     int nr_pages, bool overwrite)
1536     {
1537     - int node, cpu = event->cpu;
1538     + int node;
1539     struct cs_buffers *buf;
1540    
1541     - if (cpu == -1)
1542     - cpu = smp_processor_id();
1543     - node = cpu_to_node(cpu);
1544     + node = (event->cpu == -1) ? NUMA_NO_NODE : cpu_to_node(event->cpu);
1545    
1546     /* Allocate memory structure for interaction with Perf */
1547     buf = kzalloc_node(sizeof(struct cs_buffers), GFP_KERNEL, node);
1548     diff --git a/drivers/hwtracing/coresight/coresight-tmc-etr.c b/drivers/hwtracing/coresight/coresight-tmc-etr.c
1549     index df6e4b0b84e9..9f293b9dce8c 100644
1550     --- a/drivers/hwtracing/coresight/coresight-tmc-etr.c
1551     +++ b/drivers/hwtracing/coresight/coresight-tmc-etr.c
1552     @@ -1178,14 +1178,11 @@ static struct etr_buf *
1553     alloc_etr_buf(struct tmc_drvdata *drvdata, struct perf_event *event,
1554     int nr_pages, void **pages, bool snapshot)
1555     {
1556     - int node, cpu = event->cpu;
1557     + int node;
1558     struct etr_buf *etr_buf;
1559     unsigned long size;
1560    
1561     - if (cpu == -1)
1562     - cpu = smp_processor_id();
1563     - node = cpu_to_node(cpu);
1564     -
1565     + node = (event->cpu == -1) ? NUMA_NO_NODE : cpu_to_node(event->cpu);
1566     /*
1567     * Try to match the perf ring buffer size if it is larger
1568     * than the size requested via sysfs.
1569     @@ -1317,13 +1314,11 @@ static struct etr_perf_buffer *
1570     tmc_etr_setup_perf_buf(struct tmc_drvdata *drvdata, struct perf_event *event,
1571     int nr_pages, void **pages, bool snapshot)
1572     {
1573     - int node, cpu = event->cpu;
1574     + int node;
1575     struct etr_buf *etr_buf;
1576     struct etr_perf_buffer *etr_perf;
1577    
1578     - if (cpu == -1)
1579     - cpu = smp_processor_id();
1580     - node = cpu_to_node(cpu);
1581     + node = (event->cpu == -1) ? NUMA_NO_NODE : cpu_to_node(event->cpu);
1582    
1583     etr_perf = kzalloc_node(sizeof(*etr_perf), GFP_KERNEL, node);
1584     if (!etr_perf)
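
All four CoreSight allocation sites now resolve the NUMA node the same way: an event with cpu == -1 is not bound to any CPU, so the trace buffer should not be pinned to the node of whichever CPU happens to run the allocation. A standalone sketch; cpu_to_node() here is a stand-in for the kernel helper, and NUMA_NO_NODE mirrors the kernel's value:

    #define NUMA_NO_NODE (-1)

    /* stand-in mapping for this sketch only */
    static int cpu_to_node(int cpu) { return cpu / 4; }

    static int buffer_node(int event_cpu)
    {
        return (event_cpu == -1) ? NUMA_NO_NODE : cpu_to_node(event_cpu);
    }
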
1585     diff --git a/drivers/iio/adc/stm32-adc-core.c b/drivers/iio/adc/stm32-adc-core.c
1586     index 2327ec18b40c..1f7ce5186dfc 100644
1587     --- a/drivers/iio/adc/stm32-adc-core.c
1588     +++ b/drivers/iio/adc/stm32-adc-core.c
1589     @@ -87,6 +87,7 @@ struct stm32_adc_priv_cfg {
1590     * @domain: irq domain reference
1591     * @aclk: clock reference for the analog circuitry
1592     * @bclk: bus clock common for all ADCs, depends on part used
1593     + * @vdda: vdda analog supply reference
1594     * @vref: regulator reference
1595     * @cfg: compatible configuration data
1596     * @common: common data for all ADC instances
1597     @@ -97,6 +98,7 @@ struct stm32_adc_priv {
1598     struct irq_domain *domain;
1599     struct clk *aclk;
1600     struct clk *bclk;
1601     + struct regulator *vdda;
1602     struct regulator *vref;
1603     const struct stm32_adc_priv_cfg *cfg;
1604     struct stm32_adc_common common;
1605     @@ -394,10 +396,16 @@ static int stm32_adc_core_hw_start(struct device *dev)
1606     struct stm32_adc_priv *priv = to_stm32_adc_priv(common);
1607     int ret;
1608    
1609     + ret = regulator_enable(priv->vdda);
1610     + if (ret < 0) {
1611     + dev_err(dev, "vdda enable failed %d\n", ret);
1612     + return ret;
1613     + }
1614     +
1615     ret = regulator_enable(priv->vref);
1616     if (ret < 0) {
1617     dev_err(dev, "vref enable failed\n");
1618     - return ret;
1619     + goto err_vdda_disable;
1620     }
1621    
1622     if (priv->bclk) {
1623     @@ -425,6 +433,8 @@ err_bclk_disable:
1624     clk_disable_unprepare(priv->bclk);
1625     err_regulator_disable:
1626     regulator_disable(priv->vref);
1627     +err_vdda_disable:
1628     + regulator_disable(priv->vdda);
1629    
1630     return ret;
1631     }
1632     @@ -441,6 +451,7 @@ static void stm32_adc_core_hw_stop(struct device *dev)
1633     if (priv->bclk)
1634     clk_disable_unprepare(priv->bclk);
1635     regulator_disable(priv->vref);
1636     + regulator_disable(priv->vdda);
1637     }
1638    
1639     static int stm32_adc_probe(struct platform_device *pdev)
1640     @@ -468,6 +479,14 @@ static int stm32_adc_probe(struct platform_device *pdev)
1641     return PTR_ERR(priv->common.base);
1642     priv->common.phys_base = res->start;
1643    
1644     + priv->vdda = devm_regulator_get(&pdev->dev, "vdda");
1645     + if (IS_ERR(priv->vdda)) {
1646     + ret = PTR_ERR(priv->vdda);
1647     + if (ret != -EPROBE_DEFER)
1648     + dev_err(&pdev->dev, "vdda get failed, %d\n", ret);
1649     + return ret;
1650     + }
1651     +
1652     priv->vref = devm_regulator_get(&pdev->dev, "vref");
1653     if (IS_ERR(priv->vref)) {
1654     ret = PTR_ERR(priv->vref);
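
The stm32-adc hunks follow the acquire-in-order, release-in-reverse discipline: vdda is enabled before vref, so the new err_vdda_disable label sits at the bottom of the unwind chain and the stop path disables vdda last. A compact sketch of the shape, with stubbed-out regulator calls:

    static int enable_vdda(void) { return 0; }
    static int enable_vref(void) { return 0; }
    static void disable_vdda(void) { }

    static int hw_start(void)
    {
        int ret;

        ret = enable_vdda();
        if (ret)
            return ret;

        ret = enable_vref();
        if (ret)
            goto err_vdda_disable;

        return 0;

    err_vdda_disable:
        disable_vdda();     /* undo only what was already enabled */
        return ret;
    }
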
1655     diff --git a/drivers/media/dvb-frontends/stv0297.c b/drivers/media/dvb-frontends/stv0297.c
1656     index dac396c95a59..6d5962d5697a 100644
1657     --- a/drivers/media/dvb-frontends/stv0297.c
1658     +++ b/drivers/media/dvb-frontends/stv0297.c
1659     @@ -682,7 +682,7 @@ static const struct dvb_frontend_ops stv0297_ops = {
1660     .delsys = { SYS_DVBC_ANNEX_A },
1661     .info = {
1662     .name = "ST STV0297 DVB-C",
1663     - .frequency_min_hz = 470 * MHz,
1664     + .frequency_min_hz = 47 * MHz,
1665     .frequency_max_hz = 862 * MHz,
1666     .frequency_stepsize_hz = 62500,
1667     .symbol_rate_min = 870000,
1668     diff --git a/drivers/misc/lkdtm/Makefile b/drivers/misc/lkdtm/Makefile
1669     index 951c984de61a..fb10eafe9bde 100644
1670     --- a/drivers/misc/lkdtm/Makefile
1671     +++ b/drivers/misc/lkdtm/Makefile
1672     @@ -15,8 +15,7 @@ KCOV_INSTRUMENT_rodata.o := n
1673    
1674     OBJCOPYFLAGS :=
1675     OBJCOPYFLAGS_rodata_objcopy.o := \
1676     - --set-section-flags .text=alloc,readonly \
1677     - --rename-section .text=.rodata
1678     + --rename-section .text=.rodata,alloc,readonly,load
1679     targets += rodata.o rodata_objcopy.o
1680     $(obj)/rodata_objcopy.o: $(obj)/rodata.o FORCE
1681     $(call if_changed,objcopy)
1682     diff --git a/drivers/misc/vmw_vmci/vmci_context.c b/drivers/misc/vmw_vmci/vmci_context.c
1683     index 300ed69fe2c7..16695366ec92 100644
1684     --- a/drivers/misc/vmw_vmci/vmci_context.c
1685     +++ b/drivers/misc/vmw_vmci/vmci_context.c
1686     @@ -21,6 +21,9 @@
1687     #include "vmci_driver.h"
1688     #include "vmci_event.h"
1689    
1690     +/* Use a wide upper bound for the maximum contexts. */
1691     +#define VMCI_MAX_CONTEXTS 2000
1692     +
1693     /*
1694     * List of current VMCI contexts. Contexts can be added by
1695     * vmci_ctx_create() and removed via vmci_ctx_destroy().
1696     @@ -117,19 +120,22 @@ struct vmci_ctx *vmci_ctx_create(u32 cid, u32 priv_flags,
1697     /* Initialize host-specific VMCI context. */
1698     init_waitqueue_head(&context->host_context.wait_queue);
1699    
1700     - context->queue_pair_array = vmci_handle_arr_create(0);
1701     + context->queue_pair_array =
1702     + vmci_handle_arr_create(0, VMCI_MAX_GUEST_QP_COUNT);
1703     if (!context->queue_pair_array) {
1704     error = -ENOMEM;
1705     goto err_free_ctx;
1706     }
1707    
1708     - context->doorbell_array = vmci_handle_arr_create(0);
1709     + context->doorbell_array =
1710     + vmci_handle_arr_create(0, VMCI_MAX_GUEST_DOORBELL_COUNT);
1711     if (!context->doorbell_array) {
1712     error = -ENOMEM;
1713     goto err_free_qp_array;
1714     }
1715    
1716     - context->pending_doorbell_array = vmci_handle_arr_create(0);
1717     + context->pending_doorbell_array =
1718     + vmci_handle_arr_create(0, VMCI_MAX_GUEST_DOORBELL_COUNT);
1719     if (!context->pending_doorbell_array) {
1720     error = -ENOMEM;
1721     goto err_free_db_array;
1722     @@ -204,7 +210,7 @@ static int ctx_fire_notification(u32 context_id, u32 priv_flags)
1723     * We create an array to hold the subscribers we find when
1724     * scanning through all contexts.
1725     */
1726     - subscriber_array = vmci_handle_arr_create(0);
1727     + subscriber_array = vmci_handle_arr_create(0, VMCI_MAX_CONTEXTS);
1728     if (subscriber_array == NULL)
1729     return VMCI_ERROR_NO_MEM;
1730    
1731     @@ -623,20 +629,26 @@ int vmci_ctx_add_notification(u32 context_id, u32 remote_cid)
1732    
1733     spin_lock(&context->lock);
1734    
1735     - list_for_each_entry(n, &context->notifier_list, node) {
1736     - if (vmci_handle_is_equal(n->handle, notifier->handle)) {
1737     - exists = true;
1738     - break;
1739     + if (context->n_notifiers < VMCI_MAX_CONTEXTS) {
1740     + list_for_each_entry(n, &context->notifier_list, node) {
1741     + if (vmci_handle_is_equal(n->handle, notifier->handle)) {
1742     + exists = true;
1743     + break;
1744     + }
1745     }
1746     - }
1747    
1748     - if (exists) {
1749     - kfree(notifier);
1750     - result = VMCI_ERROR_ALREADY_EXISTS;
1751     + if (exists) {
1752     + kfree(notifier);
1753     + result = VMCI_ERROR_ALREADY_EXISTS;
1754     + } else {
1755     + list_add_tail_rcu(&notifier->node,
1756     + &context->notifier_list);
1757     + context->n_notifiers++;
1758     + result = VMCI_SUCCESS;
1759     + }
1760     } else {
1761     - list_add_tail_rcu(&notifier->node, &context->notifier_list);
1762     - context->n_notifiers++;
1763     - result = VMCI_SUCCESS;
1764     + kfree(notifier);
1765     + result = VMCI_ERROR_NO_MEM;
1766     }
1767    
1768     spin_unlock(&context->lock);
1769     @@ -721,8 +733,7 @@ static int vmci_ctx_get_chkpt_doorbells(struct vmci_ctx *context,
1770     u32 *buf_size, void **pbuf)
1771     {
1772     struct dbell_cpt_state *dbells;
1773     - size_t n_doorbells;
1774     - int i;
1775     + u32 i, n_doorbells;
1776    
1777     n_doorbells = vmci_handle_arr_get_size(context->doorbell_array);
1778     if (n_doorbells > 0) {
1779     @@ -860,7 +871,8 @@ int vmci_ctx_rcv_notifications_get(u32 context_id,
1780     spin_lock(&context->lock);
1781    
1782     *db_handle_array = context->pending_doorbell_array;
1783     - context->pending_doorbell_array = vmci_handle_arr_create(0);
1784     + context->pending_doorbell_array =
1785     + vmci_handle_arr_create(0, VMCI_MAX_GUEST_DOORBELL_COUNT);
1786     if (!context->pending_doorbell_array) {
1787     context->pending_doorbell_array = *db_handle_array;
1788     *db_handle_array = NULL;
1789     @@ -942,12 +954,11 @@ int vmci_ctx_dbell_create(u32 context_id, struct vmci_handle handle)
1790     return VMCI_ERROR_NOT_FOUND;
1791    
1792     spin_lock(&context->lock);
1793     - if (!vmci_handle_arr_has_entry(context->doorbell_array, handle)) {
1794     - vmci_handle_arr_append_entry(&context->doorbell_array, handle);
1795     - result = VMCI_SUCCESS;
1796     - } else {
1797     + if (!vmci_handle_arr_has_entry(context->doorbell_array, handle))
1798     + result = vmci_handle_arr_append_entry(&context->doorbell_array,
1799     + handle);
1800     + else
1801     result = VMCI_ERROR_DUPLICATE_ENTRY;
1802     - }
1803    
1804     spin_unlock(&context->lock);
1805     vmci_ctx_put(context);
1806     @@ -1083,15 +1094,16 @@ int vmci_ctx_notify_dbell(u32 src_cid,
1807     if (!vmci_handle_arr_has_entry(
1808     dst_context->pending_doorbell_array,
1809     handle)) {
1810     - vmci_handle_arr_append_entry(
1811     + result = vmci_handle_arr_append_entry(
1812     &dst_context->pending_doorbell_array,
1813     handle);
1814     -
1815     - ctx_signal_notify(dst_context);
1816     - wake_up(&dst_context->host_context.wait_queue);
1817     -
1818     + if (result == VMCI_SUCCESS) {
1819     + ctx_signal_notify(dst_context);
1820     + wake_up(&dst_context->host_context.wait_queue);
1821     + }
1822     + } else {
1823     + result = VMCI_SUCCESS;
1824     }
1825     - result = VMCI_SUCCESS;
1826     }
1827     spin_unlock(&dst_context->lock);
1828     }
1829     @@ -1118,13 +1130,11 @@ int vmci_ctx_qp_create(struct vmci_ctx *context, struct vmci_handle handle)
1830     if (context == NULL || vmci_handle_is_invalid(handle))
1831     return VMCI_ERROR_INVALID_ARGS;
1832    
1833     - if (!vmci_handle_arr_has_entry(context->queue_pair_array, handle)) {
1834     - vmci_handle_arr_append_entry(&context->queue_pair_array,
1835     - handle);
1836     - result = VMCI_SUCCESS;
1837     - } else {
1838     + if (!vmci_handle_arr_has_entry(context->queue_pair_array, handle))
1839     + result = vmci_handle_arr_append_entry(
1840     + &context->queue_pair_array, handle);
1841     + else
1842     result = VMCI_ERROR_DUPLICATE_ENTRY;
1843     - }
1844    
1845     return result;
1846     }
1847     diff --git a/drivers/misc/vmw_vmci/vmci_handle_array.c b/drivers/misc/vmw_vmci/vmci_handle_array.c
1848     index c527388f5d7b..de7fee7ead1b 100644
1849     --- a/drivers/misc/vmw_vmci/vmci_handle_array.c
1850     +++ b/drivers/misc/vmw_vmci/vmci_handle_array.c
1851     @@ -8,24 +8,29 @@
1852     #include <linux/slab.h>
1853     #include "vmci_handle_array.h"
1854    
1855     -static size_t handle_arr_calc_size(size_t capacity)
1856     +static size_t handle_arr_calc_size(u32 capacity)
1857     {
1858     - return sizeof(struct vmci_handle_arr) +
1859     + return VMCI_HANDLE_ARRAY_HEADER_SIZE +
1860     capacity * sizeof(struct vmci_handle);
1861     }
1862    
1863     -struct vmci_handle_arr *vmci_handle_arr_create(size_t capacity)
1864     +struct vmci_handle_arr *vmci_handle_arr_create(u32 capacity, u32 max_capacity)
1865     {
1866     struct vmci_handle_arr *array;
1867    
1868     + if (max_capacity == 0 || capacity > max_capacity)
1869     + return NULL;
1870     +
1871     if (capacity == 0)
1872     - capacity = VMCI_HANDLE_ARRAY_DEFAULT_SIZE;
1873     + capacity = min((u32)VMCI_HANDLE_ARRAY_DEFAULT_CAPACITY,
1874     + max_capacity);
1875    
1876     array = kmalloc(handle_arr_calc_size(capacity), GFP_ATOMIC);
1877     if (!array)
1878     return NULL;
1879    
1880     array->capacity = capacity;
1881     + array->max_capacity = max_capacity;
1882     array->size = 0;
1883    
1884     return array;
1885     @@ -36,27 +41,34 @@ void vmci_handle_arr_destroy(struct vmci_handle_arr *array)
1886     kfree(array);
1887     }
1888    
1889     -void vmci_handle_arr_append_entry(struct vmci_handle_arr **array_ptr,
1890     - struct vmci_handle handle)
1891     +int vmci_handle_arr_append_entry(struct vmci_handle_arr **array_ptr,
1892     + struct vmci_handle handle)
1893     {
1894     struct vmci_handle_arr *array = *array_ptr;
1895    
1896     if (unlikely(array->size >= array->capacity)) {
1897     /* reallocate. */
1898     struct vmci_handle_arr *new_array;
1899     - size_t new_capacity = array->capacity * VMCI_ARR_CAP_MULT;
1900     - size_t new_size = handle_arr_calc_size(new_capacity);
1901     + u32 capacity_bump = min(array->max_capacity - array->capacity,
1902     + array->capacity);
1903     + size_t new_size = handle_arr_calc_size(array->capacity +
1904     + capacity_bump);
1905     +
1906     + if (array->size >= array->max_capacity)
1907     + return VMCI_ERROR_NO_MEM;
1908    
1909     new_array = krealloc(array, new_size, GFP_ATOMIC);
1910     if (!new_array)
1911     - return;
1912     + return VMCI_ERROR_NO_MEM;
1913    
1914     - new_array->capacity = new_capacity;
1915     + new_array->capacity += capacity_bump;
1916     *array_ptr = array = new_array;
1917     }
1918    
1919     array->entries[array->size] = handle;
1920     array->size++;
1921     +
1922     + return VMCI_SUCCESS;
1923     }
1924    
1925     /*
1926     @@ -66,7 +78,7 @@ struct vmci_handle vmci_handle_arr_remove_entry(struct vmci_handle_arr *array,
1927     struct vmci_handle entry_handle)
1928     {
1929     struct vmci_handle handle = VMCI_INVALID_HANDLE;
1930     - size_t i;
1931     + u32 i;
1932    
1933     for (i = 0; i < array->size; i++) {
1934     if (vmci_handle_is_equal(array->entries[i], entry_handle)) {
1935     @@ -101,7 +113,7 @@ struct vmci_handle vmci_handle_arr_remove_tail(struct vmci_handle_arr *array)
1936     * Handle at given index, VMCI_INVALID_HANDLE if invalid index.
1937     */
1938     struct vmci_handle
1939     -vmci_handle_arr_get_entry(const struct vmci_handle_arr *array, size_t index)
1940     +vmci_handle_arr_get_entry(const struct vmci_handle_arr *array, u32 index)
1941     {
1942     if (unlikely(index >= array->size))
1943     return VMCI_INVALID_HANDLE;
1944     @@ -112,7 +124,7 @@ vmci_handle_arr_get_entry(const struct vmci_handle_arr *array, size_t index)
1945     bool vmci_handle_arr_has_entry(const struct vmci_handle_arr *array,
1946     struct vmci_handle entry_handle)
1947     {
1948     - size_t i;
1949     + u32 i;
1950    
1951     for (i = 0; i < array->size; i++)
1952     if (vmci_handle_is_equal(array->entries[i], entry_handle))
1953     diff --git a/drivers/misc/vmw_vmci/vmci_handle_array.h b/drivers/misc/vmw_vmci/vmci_handle_array.h
1954     index bd1559a548e9..96193f85be5b 100644
1955     --- a/drivers/misc/vmw_vmci/vmci_handle_array.h
1956     +++ b/drivers/misc/vmw_vmci/vmci_handle_array.h
1957     @@ -9,32 +9,41 @@
1958     #define _VMCI_HANDLE_ARRAY_H_
1959    
1960     #include <linux/vmw_vmci_defs.h>
1961     +#include <linux/limits.h>
1962     #include <linux/types.h>
1963    
1964     -#define VMCI_HANDLE_ARRAY_DEFAULT_SIZE 4
1965     -#define VMCI_ARR_CAP_MULT 2 /* Array capacity multiplier */
1966     -
1967     struct vmci_handle_arr {
1968     - size_t capacity;
1969     - size_t size;
1970     + u32 capacity;
1971     + u32 max_capacity;
1972     + u32 size;
1973     + u32 pad;
1974     struct vmci_handle entries[];
1975     };
1976    
1977     -struct vmci_handle_arr *vmci_handle_arr_create(size_t capacity);
1978     +#define VMCI_HANDLE_ARRAY_HEADER_SIZE \
1979     + offsetof(struct vmci_handle_arr, entries)
1980     +/* Select a default capacity that results in a 64 byte sized array */
1981     +#define VMCI_HANDLE_ARRAY_DEFAULT_CAPACITY 6
1982     +/* Make sure that the max array size can be expressed by a u32 */
1983     +#define VMCI_HANDLE_ARRAY_MAX_CAPACITY \
1984     + ((U32_MAX - VMCI_HANDLE_ARRAY_HEADER_SIZE - 1) / \
1985     + sizeof(struct vmci_handle))
1986     +
1987     +struct vmci_handle_arr *vmci_handle_arr_create(u32 capacity, u32 max_capacity);
1988     void vmci_handle_arr_destroy(struct vmci_handle_arr *array);
1989     -void vmci_handle_arr_append_entry(struct vmci_handle_arr **array_ptr,
1990     - struct vmci_handle handle);
1991     +int vmci_handle_arr_append_entry(struct vmci_handle_arr **array_ptr,
1992     + struct vmci_handle handle);
1993     struct vmci_handle vmci_handle_arr_remove_entry(struct vmci_handle_arr *array,
1994     struct vmci_handle
1995     entry_handle);
1996     struct vmci_handle vmci_handle_arr_remove_tail(struct vmci_handle_arr *array);
1997     struct vmci_handle
1998     -vmci_handle_arr_get_entry(const struct vmci_handle_arr *array, size_t index);
1999     +vmci_handle_arr_get_entry(const struct vmci_handle_arr *array, u32 index);
2000     bool vmci_handle_arr_has_entry(const struct vmci_handle_arr *array,
2001     struct vmci_handle entry_handle);
2002     struct vmci_handle *vmci_handle_arr_get_handles(struct vmci_handle_arr *array);
2003    
2004     -static inline size_t vmci_handle_arr_get_size(
2005     +static inline u32 vmci_handle_arr_get_size(
2006     const struct vmci_handle_arr *array)
2007     {
2008     return array->size;
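
The handle-array rework replaces unbounded doubling (the old VMCI_ARR_CAP_MULT) with growth clamped to a per-array max_capacity, so a misbehaving guest can no longer force ever-larger GFP_ATOMIC allocations. A sketch of the new growth rule as a pure function:

    #include <errno.h>
    #include <stdint.h>

    static int next_capacity(uint32_t size, uint32_t capacity,
                             uint32_t max_capacity, uint32_t *out)
    {
        uint32_t bump;

        if (size >= max_capacity)
            return -ENOMEM;             /* hard cap: refuse to grow */

        bump = max_capacity - capacity; /* never exceed the cap ... */
        if (bump > capacity)
            bump = capacity;            /* ... and at most double */

        *out = capacity + bump;
        return 0;
    }
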
2009     diff --git a/drivers/net/wireless/ath/carl9170/usb.c b/drivers/net/wireless/ath/carl9170/usb.c
2010     index e7c3f3b8457d..99f1897a775d 100644
2011     --- a/drivers/net/wireless/ath/carl9170/usb.c
2012     +++ b/drivers/net/wireless/ath/carl9170/usb.c
2013     @@ -128,6 +128,8 @@ static const struct usb_device_id carl9170_usb_ids[] = {
2014     };
2015     MODULE_DEVICE_TABLE(usb, carl9170_usb_ids);
2016    
2017     +static struct usb_driver carl9170_driver;
2018     +
2019     static void carl9170_usb_submit_data_urb(struct ar9170 *ar)
2020     {
2021     struct urb *urb;
2022     @@ -966,32 +968,28 @@ err_out:
2023    
2024     static void carl9170_usb_firmware_failed(struct ar9170 *ar)
2025     {
2026     - struct device *parent = ar->udev->dev.parent;
2027     - struct usb_device *udev;
2028     -
2029     - /*
2030     - * Store a copy of the usb_device pointer locally.
2031     - * This is because device_release_driver initiates
2032     - * carl9170_usb_disconnect, which in turn frees our
2033     - * driver context (ar).
2034     + /* Store copies of the usb_interface and usb_device pointers locally.
2035     + * This is because release_driver initiates carl9170_usb_disconnect,
2036     + * which in turn frees our driver context (ar).
2037     */
2038     - udev = ar->udev;
2039     + struct usb_interface *intf = ar->intf;
2040     + struct usb_device *udev = ar->udev;
2041    
2042     complete(&ar->fw_load_wait);
2043     + /* at this point 'ar' could be already freed. Don't use it anymore */
2044     + ar = NULL;
2045    
2046     /* unbind anything failed */
2047     - if (parent)
2048     - device_lock(parent);
2049     -
2050     - device_release_driver(&udev->dev);
2051     - if (parent)
2052     - device_unlock(parent);
2053     + usb_lock_device(udev);
2054     + usb_driver_release_interface(&carl9170_driver, intf);
2055     + usb_unlock_device(udev);
2056    
2057     - usb_put_dev(udev);
2058     + usb_put_intf(intf);
2059     }
2060    
2061     static void carl9170_usb_firmware_finish(struct ar9170 *ar)
2062     {
2063     + struct usb_interface *intf = ar->intf;
2064     int err;
2065    
2066     err = carl9170_parse_firmware(ar);
2067     @@ -1009,7 +1007,7 @@ static void carl9170_usb_firmware_finish(struct ar9170 *ar)
2068     goto err_unrx;
2069    
2070     complete(&ar->fw_load_wait);
2071     - usb_put_dev(ar->udev);
2072     + usb_put_intf(intf);
2073     return;
2074    
2075     err_unrx:
2076     @@ -1052,7 +1050,6 @@ static int carl9170_usb_probe(struct usb_interface *intf,
2077     return PTR_ERR(ar);
2078    
2079     udev = interface_to_usbdev(intf);
2080     - usb_get_dev(udev);
2081     ar->udev = udev;
2082     ar->intf = intf;
2083     ar->features = id->driver_info;
2084     @@ -1094,15 +1091,14 @@ static int carl9170_usb_probe(struct usb_interface *intf,
2085     atomic_set(&ar->rx_anch_urbs, 0);
2086     atomic_set(&ar->rx_pool_urbs, 0);
2087    
2088     - usb_get_dev(ar->udev);
2089     + usb_get_intf(intf);
2090    
2091     carl9170_set_state(ar, CARL9170_STOPPED);
2092    
2093     err = request_firmware_nowait(THIS_MODULE, 1, CARL9170FW_NAME,
2094     &ar->udev->dev, GFP_KERNEL, ar, carl9170_usb_firmware_step2);
2095     if (err) {
2096     - usb_put_dev(udev);
2097     - usb_put_dev(udev);
2098     + usb_put_intf(intf);
2099     carl9170_free(ar);
2100     }
2101     return err;
2102     @@ -1131,7 +1127,6 @@ static void carl9170_usb_disconnect(struct usb_interface *intf)
2103    
2104     carl9170_release_firmware(ar);
2105     carl9170_free(ar);
2106     - usb_put_dev(udev);
2107     }
2108    
2109     #ifdef CONFIG_PM
2110     diff --git a/drivers/net/wireless/intersil/p54/p54usb.c b/drivers/net/wireless/intersil/p54/p54usb.c
2111     index f937815f0f2c..b94764c88750 100644
2112     --- a/drivers/net/wireless/intersil/p54/p54usb.c
2113     +++ b/drivers/net/wireless/intersil/p54/p54usb.c
2114     @@ -30,6 +30,8 @@ MODULE_ALIAS("prism54usb");
2115     MODULE_FIRMWARE("isl3886usb");
2116     MODULE_FIRMWARE("isl3887usb");
2117    
2118     +static struct usb_driver p54u_driver;
2119     +
2120     /*
2121     * Note:
2122     *
2123     @@ -918,9 +920,9 @@ static void p54u_load_firmware_cb(const struct firmware *firmware,
2124     {
2125     struct p54u_priv *priv = context;
2126     struct usb_device *udev = priv->udev;
2127     + struct usb_interface *intf = priv->intf;
2128     int err;
2129    
2130     - complete(&priv->fw_wait_load);
2131     if (firmware) {
2132     priv->fw = firmware;
2133     err = p54u_start_ops(priv);
2134     @@ -929,26 +931,22 @@ static void p54u_load_firmware_cb(const struct firmware *firmware,
2135     dev_err(&udev->dev, "Firmware not found.\n");
2136     }
2137    
2138     - if (err) {
2139     - struct device *parent = priv->udev->dev.parent;
2140     -
2141     - dev_err(&udev->dev, "failed to initialize device (%d)\n", err);
2142     -
2143     - if (parent)
2144     - device_lock(parent);
2145     + complete(&priv->fw_wait_load);
2146     + /*
2147     + * At this point p54u_disconnect may have already freed
2148     + * the "priv" context. Do not use it anymore!
2149     + */
2150     + priv = NULL;
2151    
2152     - device_release_driver(&udev->dev);
2153     - /*
2154     - * At this point p54u_disconnect has already freed
2155     - * the "priv" context. Do not use it anymore!
2156     - */
2157     - priv = NULL;
2158     + if (err) {
2159     + dev_err(&intf->dev, "failed to initialize device (%d)\n", err);
2160    
2161     - if (parent)
2162     - device_unlock(parent);
2163     + usb_lock_device(udev);
2164     + usb_driver_release_interface(&p54u_driver, intf);
2165     + usb_unlock_device(udev);
2166     }
2167    
2168     - usb_put_dev(udev);
2169     + usb_put_intf(intf);
2170     }
2171    
2172     static int p54u_load_firmware(struct ieee80211_hw *dev,
2173     @@ -969,14 +967,14 @@ static int p54u_load_firmware(struct ieee80211_hw *dev,
2174     dev_info(&priv->udev->dev, "Loading firmware file %s\n",
2175     p54u_fwlist[i].fw);
2176    
2177     - usb_get_dev(udev);
2178     + usb_get_intf(intf);
2179     err = request_firmware_nowait(THIS_MODULE, 1, p54u_fwlist[i].fw,
2180     device, GFP_KERNEL, priv,
2181     p54u_load_firmware_cb);
2182     if (err) {
2183     dev_err(&priv->udev->dev, "(p54usb) cannot load firmware %s "
2184     "(%d)!\n", p54u_fwlist[i].fw, err);
2185     - usb_put_dev(udev);
2186     + usb_put_intf(intf);
2187     }
2188    
2189     return err;
2190     @@ -1008,8 +1006,6 @@ static int p54u_probe(struct usb_interface *intf,
2191     skb_queue_head_init(&priv->rx_queue);
2192     init_usb_anchor(&priv->submitted);
2193    
2194     - usb_get_dev(udev);
2195     -
2196     /* really lazy and simple way of figuring out if we're a 3887 */
2197     /* TODO: should just stick the identification in the device table */
2198     i = intf->altsetting->desc.bNumEndpoints;
2199     @@ -1050,10 +1046,8 @@ static int p54u_probe(struct usb_interface *intf,
2200     priv->upload_fw = p54u_upload_firmware_net2280;
2201     }
2202     err = p54u_load_firmware(dev, intf);
2203     - if (err) {
2204     - usb_put_dev(udev);
2205     + if (err)
2206     p54_free_common(dev);
2207     - }
2208     return err;
2209     }
2210    
2211     @@ -1069,7 +1063,6 @@ static void p54u_disconnect(struct usb_interface *intf)
2212     wait_for_completion(&priv->fw_wait_load);
2213     p54_unregister_common(dev);
2214    
2215     - usb_put_dev(interface_to_usbdev(intf));
2216     release_firmware(priv->fw);
2217     p54_free_common(dev);
2218     }
2219     diff --git a/drivers/net/wireless/intersil/p54/txrx.c b/drivers/net/wireless/intersil/p54/txrx.c
2220     index ff9acd1563f4..5892898f8853 100644
2221     --- a/drivers/net/wireless/intersil/p54/txrx.c
2222     +++ b/drivers/net/wireless/intersil/p54/txrx.c
2223     @@ -139,7 +139,10 @@ static int p54_assign_address(struct p54_common *priv, struct sk_buff *skb)
2224     unlikely(GET_HW_QUEUE(skb) == P54_QUEUE_BEACON))
2225     priv->beacon_req_id = data->req_id;
2226    
2227     - __skb_queue_after(&priv->tx_queue, target_skb, skb);
2228     + if (target_skb)
2229     + __skb_queue_after(&priv->tx_queue, target_skb, skb);
2230     + else
2231     + __skb_queue_head(&priv->tx_queue, skb);
2232     spin_unlock_irqrestore(&priv->tx_queue.lock, flags);
2233     return 0;
2234     }
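
The p54 fix covers the case where the priority scan finds no predecessor: __skb_queue_after(..., NULL, skb) would dereference the NULL target, so the frame must go to the head of the queue instead. The same insert logic on a plain singly-linked list (hypothetical types):

    #include <stddef.h>

    struct node { struct node *next; int prio; };

    static void insert_sorted(struct node **head, struct node *n)
    {
        struct node *prev = NULL, *cur = *head;

        while (cur && cur->prio <= n->prio) {
            prev = cur;
            cur = cur->next;
        }
        if (prev) {             /* splice after the found predecessor */
            n->next = prev->next;
            prev->next = n;
        } else {                /* no predecessor: insert at the head */
            n->next = *head;
            *head = n;
        }
    }
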
2235     diff --git a/drivers/net/wireless/marvell/mwifiex/fw.h b/drivers/net/wireless/marvell/mwifiex/fw.h
2236     index b73f99dc5a72..1fb76d2f5d3f 100644
2237     --- a/drivers/net/wireless/marvell/mwifiex/fw.h
2238     +++ b/drivers/net/wireless/marvell/mwifiex/fw.h
2239     @@ -1759,9 +1759,10 @@ struct mwifiex_ie_types_wmm_queue_status {
2240     struct ieee_types_vendor_header {
2241     u8 element_id;
2242     u8 len;
2243     - u8 oui[4]; /* 0~2: oui, 3: oui_type */
2244     - u8 oui_subtype;
2245     - u8 version;
2246     + struct {
2247     + u8 oui[3];
2248     + u8 oui_type;
2249     + } __packed oui;
2250     } __packed;
2251    
2252     struct ieee_types_wmm_parameter {
2253     @@ -1775,6 +1776,9 @@ struct ieee_types_wmm_parameter {
2254     * Version [1]
2255     */
2256     struct ieee_types_vendor_header vend_hdr;
2257     + u8 oui_subtype;
2258     + u8 version;
2259     +
2260     u8 qos_info_bitmap;
2261     u8 reserved;
2262     struct ieee_types_wmm_ac_parameters ac_params[IEEE80211_NUM_ACS];
2263     @@ -1792,6 +1796,8 @@ struct ieee_types_wmm_info {
2264     * Version [1]
2265     */
2266     struct ieee_types_vendor_header vend_hdr;
2267     + u8 oui_subtype;
2268     + u8 version;
2269    
2270     u8 qos_info_bitmap;
2271     } __packed;
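
The fw.h split shrinks the shared vendor header to the fields every vendor IE actually has (a 3-byte OUI plus OUI type) and moves oui_subtype/version into the WMM-specific structs, which is what lets scan.c validate short IEs without over-reading. A sketch of matching against the packed header; the WMM OUI 00:50:f2 with type 2 is the standard value:

    #include <stdint.h>
    #include <string.h>

    struct vendor_hdr {
        uint8_t element_id;
        uint8_t len;
        struct {
            uint8_t oui[3];
            uint8_t oui_type;
        } __attribute__((packed)) oui;
    } __attribute__((packed));

    static int is_wmm_ie(const struct vendor_hdr *h, size_t element_len)
    {
        static const uint8_t wmm_oui[4] = { 0x00, 0x50, 0xf2, 0x02 };

        if (element_len < sizeof(wmm_oui))  /* too short to match: skip */
            return 0;
        return memcmp(&h->oui, wmm_oui, sizeof(wmm_oui)) == 0;
    }
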
2272     diff --git a/drivers/net/wireless/marvell/mwifiex/scan.c b/drivers/net/wireless/marvell/mwifiex/scan.c
2273     index c269a0de9413..e2786ab612ca 100644
2274     --- a/drivers/net/wireless/marvell/mwifiex/scan.c
2275     +++ b/drivers/net/wireless/marvell/mwifiex/scan.c
2276     @@ -1361,21 +1361,25 @@ int mwifiex_update_bss_desc_with_ie(struct mwifiex_adapter *adapter,
2277     break;
2278    
2279     case WLAN_EID_VENDOR_SPECIFIC:
2280     - if (element_len + 2 < sizeof(vendor_ie->vend_hdr))
2281     - return -EINVAL;
2282     -
2283     vendor_ie = (struct ieee_types_vendor_specific *)
2284     current_ptr;
2285    
2286     - if (!memcmp
2287     - (vendor_ie->vend_hdr.oui, wpa_oui,
2288     - sizeof(wpa_oui))) {
2289     + /* 802.11 requires at least 3-byte OUI. */
2290     + if (element_len < sizeof(vendor_ie->vend_hdr.oui.oui))
2291     + return -EINVAL;
2292     +
2293     + /* Not long enough for a match? Skip it. */
2294     + if (element_len < sizeof(wpa_oui))
2295     + break;
2296     +
2297     + if (!memcmp(&vendor_ie->vend_hdr.oui, wpa_oui,
2298     + sizeof(wpa_oui))) {
2299     bss_entry->bcn_wpa_ie =
2300     (struct ieee_types_vendor_specific *)
2301     current_ptr;
2302     bss_entry->wpa_offset = (u16)
2303     (current_ptr - bss_entry->beacon_buf);
2304     - } else if (!memcmp(vendor_ie->vend_hdr.oui, wmm_oui,
2305     + } else if (!memcmp(&vendor_ie->vend_hdr.oui, wmm_oui,
2306     sizeof(wmm_oui))) {
2307     if (total_ie_len ==
2308     sizeof(struct ieee_types_wmm_parameter) ||
2309     diff --git a/drivers/net/wireless/marvell/mwifiex/sta_ioctl.c b/drivers/net/wireless/marvell/mwifiex/sta_ioctl.c
2310     index ebc0e41e5d3b..74e50566db1f 100644
2311     --- a/drivers/net/wireless/marvell/mwifiex/sta_ioctl.c
2312     +++ b/drivers/net/wireless/marvell/mwifiex/sta_ioctl.c
2313     @@ -1351,7 +1351,7 @@ mwifiex_set_gen_ie_helper(struct mwifiex_private *priv, u8 *ie_data_ptr,
2314     /* Test to see if it is a WPA IE, if not, then
2315     * it is a gen IE
2316     */
2317     - if (!memcmp(pvendor_ie->oui, wpa_oui,
2318     + if (!memcmp(&pvendor_ie->oui, wpa_oui,
2319     sizeof(wpa_oui))) {
2320     /* IE is a WPA/WPA2 IE so call set_wpa function
2321     */
2322     @@ -1361,7 +1361,7 @@ mwifiex_set_gen_ie_helper(struct mwifiex_private *priv, u8 *ie_data_ptr,
2323     goto next_ie;
2324     }
2325    
2326     - if (!memcmp(pvendor_ie->oui, wps_oui,
2327     + if (!memcmp(&pvendor_ie->oui, wps_oui,
2328     sizeof(wps_oui))) {
2329     /* Test to see if it is a WPS IE,
2330     * if so, enable wps session flag
2331     diff --git a/drivers/net/wireless/marvell/mwifiex/wmm.c b/drivers/net/wireless/marvell/mwifiex/wmm.c
2332     index 407b9932ca4d..64916ba15df5 100644
2333     --- a/drivers/net/wireless/marvell/mwifiex/wmm.c
2334     +++ b/drivers/net/wireless/marvell/mwifiex/wmm.c
2335     @@ -240,7 +240,7 @@ mwifiex_wmm_setup_queue_priorities(struct mwifiex_private *priv,
2336     mwifiex_dbg(priv->adapter, INFO,
2337     "info: WMM Parameter IE: version=%d,\t"
2338     "qos_info Parameter Set Count=%d, Reserved=%#x\n",
2339     - wmm_ie->vend_hdr.version, wmm_ie->qos_info_bitmap &
2340     + wmm_ie->version, wmm_ie->qos_info_bitmap &
2341     IEEE80211_WMM_IE_AP_QOSINFO_PARAM_SET_CNT_MASK,
2342     wmm_ie->reserved);
2343    
2344     diff --git a/drivers/staging/comedi/drivers/amplc_pci230.c b/drivers/staging/comedi/drivers/amplc_pci230.c
2345     index 65f60c2b702a..f7e673121864 100644
2346     --- a/drivers/staging/comedi/drivers/amplc_pci230.c
2347     +++ b/drivers/staging/comedi/drivers/amplc_pci230.c
2348     @@ -2330,7 +2330,8 @@ static irqreturn_t pci230_interrupt(int irq, void *d)
2349     devpriv->intr_running = false;
2350     spin_unlock_irqrestore(&devpriv->isr_spinlock, irqflags);
2351    
2352     - comedi_handle_events(dev, s_ao);
2353     + if (s_ao)
2354     + comedi_handle_events(dev, s_ao);
2355     comedi_handle_events(dev, s_ai);
2356    
2357     return IRQ_HANDLED;
2358     diff --git a/drivers/staging/comedi/drivers/dt282x.c b/drivers/staging/comedi/drivers/dt282x.c
2359     index 3be927f1d3a9..e15e33ed94ae 100644
2360     --- a/drivers/staging/comedi/drivers/dt282x.c
2361     +++ b/drivers/staging/comedi/drivers/dt282x.c
2362     @@ -557,7 +557,8 @@ static irqreturn_t dt282x_interrupt(int irq, void *d)
2363     }
2364     #endif
2365     comedi_handle_events(dev, s);
2366     - comedi_handle_events(dev, s_ao);
2367     + if (s_ao)
2368     + comedi_handle_events(dev, s_ao);
2369    
2370     return IRQ_RETVAL(handled);
2371     }
2372     diff --git a/drivers/staging/fsl-dpaa2/ethsw/ethsw.c b/drivers/staging/fsl-dpaa2/ethsw/ethsw.c
2373     index e3c3e427309a..f73edaf6ce87 100644
2374     --- a/drivers/staging/fsl-dpaa2/ethsw/ethsw.c
2375     +++ b/drivers/staging/fsl-dpaa2/ethsw/ethsw.c
2376     @@ -1086,6 +1086,7 @@ static int port_switchdev_event(struct notifier_block *unused,
2377     dev_hold(dev);
2378     break;
2379     default:
2380     + kfree(switchdev_work);
2381     return NOTIFY_DONE;
2382     }
2383    
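
The ethsw hunk plugs a leak in a common notifier shape: the work item is allocated before the switch statement, and the default: arm returned without releasing it. A minimal sketch; queue() is a hypothetical stand-in that takes ownership:

    #include <stdlib.h>

    struct work { int event; };

    /* stand-in for queue_work(): the handler owns and later frees w */
    static void queue(struct work *w) { free(w); }

    static int on_event(int event)
    {
        struct work *w = malloc(sizeof(*w));

        if (!w)
            return -1;
        w->event = event;

        switch (event) {
        case 1:
        case 2:
            queue(w);       /* ownership passes to the handler */
            return 0;
        default:
            free(w);        /* previously leaked for unhandled events */
            return 0;
        }
    }
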
2384     diff --git a/drivers/staging/mt7621-pci/pci-mt7621.c b/drivers/staging/mt7621-pci/pci-mt7621.c
2385     index 03d919a94552..93763d40e3a1 100644
2386     --- a/drivers/staging/mt7621-pci/pci-mt7621.c
2387     +++ b/drivers/staging/mt7621-pci/pci-mt7621.c
2388     @@ -40,7 +40,7 @@
2389     /* MediaTek specific configuration registers */
2390     #define PCIE_FTS_NUM 0x70c
2391     #define PCIE_FTS_NUM_MASK GENMASK(15, 8)
2392     -#define PCIE_FTS_NUM_L0(x) ((x) & 0xff << 8)
2393     +#define PCIE_FTS_NUM_L0(x) (((x) & 0xff) << 8)
2394    
2395     /* rt_sysc_membase relative registers */
2396     #define RALINK_PCIE_CLK_GEN 0x7c
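
The mt7621 change is a pure operator-precedence fix: in C, << binds tighter than &, so the old macro masked against (0xff << 8) instead of shifting the masked value. A two-assert demonstration:

    #include <assert.h>

    #define FTS_OLD(x) ((x) & 0xff << 8)    /* parses as (x) & 0xff00 */
    #define FTS_NEW(x) (((x) & 0xff) << 8)

    int main(void)
    {
        assert(FTS_OLD(0x3f) == 0);         /* value silently dropped */
        assert(FTS_NEW(0x3f) == 0x3f00);
        return 0;
    }
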
2397     diff --git a/drivers/staging/rtl8712/rtl871x_ioctl_linux.c b/drivers/staging/rtl8712/rtl871x_ioctl_linux.c
2398     index a7230c0c7b23..8f5a8ac1b010 100644
2399     --- a/drivers/staging/rtl8712/rtl871x_ioctl_linux.c
2400     +++ b/drivers/staging/rtl8712/rtl871x_ioctl_linux.c
2401     @@ -124,10 +124,91 @@ static inline void handle_group_key(struct ieee_param *param,
2402     }
2403     }
2404    
2405     -static noinline_for_stack char *translate_scan(struct _adapter *padapter,
2406     - struct iw_request_info *info,
2407     - struct wlan_network *pnetwork,
2408     - char *start, char *stop)
2409     +static noinline_for_stack char *translate_scan_wpa(struct iw_request_info *info,
2410     + struct wlan_network *pnetwork,
2411     + struct iw_event *iwe,
2412     + char *start, char *stop)
2413     +{
2414     + /* parsing WPA/WPA2 IE */
2415     + u8 buf[MAX_WPA_IE_LEN];
2416     + u8 wpa_ie[255], rsn_ie[255];
2417     + u16 wpa_len = 0, rsn_len = 0;
2418     + int n, i;
2419     +
2420     + r8712_get_sec_ie(pnetwork->network.IEs,
2421     + pnetwork->network.IELength, rsn_ie, &rsn_len,
2422     + wpa_ie, &wpa_len);
2423     + if (wpa_len > 0) {
2424     + memset(buf, 0, MAX_WPA_IE_LEN);
2425     + n = sprintf(buf, "wpa_ie=");
2426     + for (i = 0; i < wpa_len; i++) {
2427     + n += snprintf(buf + n, MAX_WPA_IE_LEN - n,
2428     + "%02x", wpa_ie[i]);
2429     + if (n >= MAX_WPA_IE_LEN)
2430     + break;
2431     + }
2432     + memset(iwe, 0, sizeof(*iwe));
2433     + iwe->cmd = IWEVCUSTOM;
2434     + iwe->u.data.length = (u16)strlen(buf);
2435     + start = iwe_stream_add_point(info, start, stop,
2436     + iwe, buf);
2437     + memset(iwe, 0, sizeof(*iwe));
2438     + iwe->cmd = IWEVGENIE;
2439     + iwe->u.data.length = (u16)wpa_len;
2440     + start = iwe_stream_add_point(info, start, stop,
2441     + iwe, wpa_ie);
2442     + }
2443     + if (rsn_len > 0) {
2444     + memset(buf, 0, MAX_WPA_IE_LEN);
2445     + n = sprintf(buf, "rsn_ie=");
2446     + for (i = 0; i < rsn_len; i++) {
2447     + n += snprintf(buf + n, MAX_WPA_IE_LEN - n,
2448     + "%02x", rsn_ie[i]);
2449     + if (n >= MAX_WPA_IE_LEN)
2450     + break;
2451     + }
2452     + memset(iwe, 0, sizeof(*iwe));
2453     + iwe->cmd = IWEVCUSTOM;
2454     + iwe->u.data.length = strlen(buf);
2455     + start = iwe_stream_add_point(info, start, stop,
2456     + iwe, buf);
2457     + memset(iwe, 0, sizeof(*iwe));
2458     + iwe->cmd = IWEVGENIE;
2459     + iwe->u.data.length = rsn_len;
2460     + start = iwe_stream_add_point(info, start, stop, iwe,
2461     + rsn_ie);
2462     + }
2463     +
2464     + return start;
2465     +}
2466     +
2467     +static noinline_for_stack char *translate_scan_wps(struct iw_request_info *info,
2468     + struct wlan_network *pnetwork,
2469     + struct iw_event *iwe,
2470     + char *start, char *stop)
2471     +{
2472     + /* parsing WPS IE */
2473     + u8 wps_ie[512];
2474     + uint wps_ielen;
2475     +
2476     + if (r8712_get_wps_ie(pnetwork->network.IEs,
2477     + pnetwork->network.IELength,
2478     + wps_ie, &wps_ielen)) {
2479     + if (wps_ielen > 2) {
2480     + iwe->cmd = IWEVGENIE;
2481     + iwe->u.data.length = (u16)wps_ielen;
2482     + start = iwe_stream_add_point(info, start, stop,
2483     + iwe, wps_ie);
2484     + }
2485     + }
2486     +
2487     + return start;
2488     +}
2489     +
2490     +static char *translate_scan(struct _adapter *padapter,
2491     + struct iw_request_info *info,
2492     + struct wlan_network *pnetwork,
2493     + char *start, char *stop)
2494     {
2495     struct iw_event iwe;
2496     struct ieee80211_ht_cap *pht_capie;
2497     @@ -240,73 +321,11 @@ static noinline_for_stack char *translate_scan(struct _adapter *padapter,
2498     /* Check if we added any event */
2499     if ((current_val - start) > iwe_stream_lcp_len(info))
2500     start = current_val;
2501     - /* parsing WPA/WPA2 IE */
2502     - {
2503     - u8 buf[MAX_WPA_IE_LEN];
2504     - u8 wpa_ie[255], rsn_ie[255];
2505     - u16 wpa_len = 0, rsn_len = 0;
2506     - int n;
2507     -
2508     - r8712_get_sec_ie(pnetwork->network.IEs,
2509     - pnetwork->network.IELength, rsn_ie, &rsn_len,
2510     - wpa_ie, &wpa_len);
2511     - if (wpa_len > 0) {
2512     - memset(buf, 0, MAX_WPA_IE_LEN);
2513     - n = sprintf(buf, "wpa_ie=");
2514     - for (i = 0; i < wpa_len; i++) {
2515     - n += snprintf(buf + n, MAX_WPA_IE_LEN - n,
2516     - "%02x", wpa_ie[i]);
2517     - if (n >= MAX_WPA_IE_LEN)
2518     - break;
2519     - }
2520     - memset(&iwe, 0, sizeof(iwe));
2521     - iwe.cmd = IWEVCUSTOM;
2522     - iwe.u.data.length = (u16)strlen(buf);
2523     - start = iwe_stream_add_point(info, start, stop,
2524     - &iwe, buf);
2525     - memset(&iwe, 0, sizeof(iwe));
2526     - iwe.cmd = IWEVGENIE;
2527     - iwe.u.data.length = (u16)wpa_len;
2528     - start = iwe_stream_add_point(info, start, stop,
2529     - &iwe, wpa_ie);
2530     - }
2531     - if (rsn_len > 0) {
2532     - memset(buf, 0, MAX_WPA_IE_LEN);
2533     - n = sprintf(buf, "rsn_ie=");
2534     - for (i = 0; i < rsn_len; i++) {
2535     - n += snprintf(buf + n, MAX_WPA_IE_LEN - n,
2536     - "%02x", rsn_ie[i]);
2537     - if (n >= MAX_WPA_IE_LEN)
2538     - break;
2539     - }
2540     - memset(&iwe, 0, sizeof(iwe));
2541     - iwe.cmd = IWEVCUSTOM;
2542     - iwe.u.data.length = strlen(buf);
2543     - start = iwe_stream_add_point(info, start, stop,
2544     - &iwe, buf);
2545     - memset(&iwe, 0, sizeof(iwe));
2546     - iwe.cmd = IWEVGENIE;
2547     - iwe.u.data.length = rsn_len;
2548     - start = iwe_stream_add_point(info, start, stop, &iwe,
2549     - rsn_ie);
2550     - }
2551     - }
2552    
2553     - { /* parsing WPS IE */
2554     - u8 wps_ie[512];
2555     - uint wps_ielen;
2556     + start = translate_scan_wpa(info, pnetwork, &iwe, start, stop);
2557     +
2558     + start = translate_scan_wps(info, pnetwork, &iwe, start, stop);
2559    
2560     - if (r8712_get_wps_ie(pnetwork->network.IEs,
2561     - pnetwork->network.IELength,
2562     - wps_ie, &wps_ielen)) {
2563     - if (wps_ielen > 2) {
2564     - iwe.cmd = IWEVGENIE;
2565     - iwe.u.data.length = (u16)wps_ielen;
2566     - start = iwe_stream_add_point(info, start, stop,
2567     - &iwe, wps_ie);
2568     - }
2569     - }
2570     - }
2571     /* Add quality statistics */
2572     iwe.cmd = IWEVQUAL;
2573     rssi = r8712_signal_scale_mapping(pnetwork->network.Rssi);
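
Splitting translate_scan() into noinline_for_stack helpers keeps the large on-stack IE buffers out of the caller's frame; the noinline part matters because an inlined helper would merge its buffers right back into translate_scan()'s stack. A sketch of the attribute's effect (buffer size and names are illustrative; the kernel defines noinline_for_stack as plain noinline):

    #define noinline_for_stack __attribute__((noinline))

    static noinline_for_stack int parse_ie(const unsigned char *p, int len)
    {
        unsigned char scratch[512]; /* confined to this helper's frame */

        if (len > (int)sizeof(scratch))
            len = sizeof(scratch);
        for (int i = 0; i < len; i++)
            scratch[i] = p[i];
        return len ? scratch[len - 1] : 0;
    }
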
2574     diff --git a/drivers/staging/vc04_services/bcm2835-camera/bcm2835-camera.c b/drivers/staging/vc04_services/bcm2835-camera/bcm2835-camera.c
2575     index 68f08dc18da9..5e9187edeef4 100644
2576     --- a/drivers/staging/vc04_services/bcm2835-camera/bcm2835-camera.c
2577     +++ b/drivers/staging/vc04_services/bcm2835-camera/bcm2835-camera.c
2578     @@ -336,16 +336,13 @@ static void buffer_cb(struct vchiq_mmal_instance *instance,
2579     return;
2580     } else if (length == 0) {
2581     /* stream ended */
2582     - if (buf) {
2583     - /* this should only ever happen if the port is
2584     - * disabled and there are buffers still queued
2585     + if (dev->capture.frame_count) {
2586     + /* empty buffer whilst capturing - expected to be an
2587     + * EOS, so grab another frame
2588     */
2589     - vb2_buffer_done(&buf->vb.vb2_buf, VB2_BUF_STATE_ERROR);
2590     - pr_debug("Empty buffer");
2591     - } else if (dev->capture.frame_count) {
2592     - /* grab another frame */
2593     if (is_capturing(dev)) {
2594     - pr_debug("Grab another frame");
2595     + v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev,
2596     + "Grab another frame");
2597     vchiq_mmal_port_parameter_set(
2598     instance,
2599     dev->capture.camera_port,
2600     @@ -353,8 +350,14 @@ static void buffer_cb(struct vchiq_mmal_instance *instance,
2601     &dev->capture.frame_count,
2602     sizeof(dev->capture.frame_count));
2603     }
2604     + if (vchiq_mmal_submit_buffer(instance, port, buf))
2605     + v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev,
2606     + "Failed to return EOS buffer");
2607     } else {
2608     - /* signal frame completion */
2609     + /* stopping streaming.
2610     + * return buffer, and signal frame completion
2611     + */
2612     + vb2_buffer_done(&buf->vb.vb2_buf, VB2_BUF_STATE_ERROR);
2613     complete(&dev->capture.frame_cmplt);
2614     }
2615     } else {
2616     @@ -576,6 +579,7 @@ static void stop_streaming(struct vb2_queue *vq)
2617     int ret;
2618     unsigned long timeout;
2619     struct bm2835_mmal_dev *dev = vb2_get_drv_priv(vq);
2620     + struct vchiq_mmal_port *port = dev->capture.port;
2621    
2622     v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev, "%s: dev:%p\n",
2623     __func__, dev);
2624     @@ -599,12 +603,6 @@ static void stop_streaming(struct vb2_queue *vq)
2625     &dev->capture.frame_count,
2626     sizeof(dev->capture.frame_count));
2627    
2628     - /* wait for last frame to complete */
2629     - timeout = wait_for_completion_timeout(&dev->capture.frame_cmplt, HZ);
2630     - if (timeout == 0)
2631     - v4l2_err(&dev->v4l2_dev,
2632     - "timed out waiting for frame completion\n");
2633     -
2634     v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev,
2635     "disabling connection\n");
2636    
2637     @@ -619,6 +617,21 @@ static void stop_streaming(struct vb2_queue *vq)
2638     ret);
2639     }
2640    
2641     + /* wait for all buffers to be returned */
2642     + while (atomic_read(&port->buffers_with_vpu)) {
2643     + v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev,
2644     + "%s: Waiting for buffers to be returned - %d outstanding\n",
2645     + __func__, atomic_read(&port->buffers_with_vpu));
2646     + timeout = wait_for_completion_timeout(&dev->capture.frame_cmplt,
2647     + HZ);
2648     + if (timeout == 0) {
2649     + v4l2_err(&dev->v4l2_dev, "%s: Timeout waiting for buffers to be returned - %d outstanding\n",
2650     + __func__,
2651     + atomic_read(&port->buffers_with_vpu));
2652     + break;
2653     + }
2654     + }
2655     +
2656     if (disable_camera(dev) < 0)
2657     v4l2_err(&dev->v4l2_dev, "Failed to disable camera\n");
2658     }
2659     diff --git a/drivers/staging/vc04_services/bcm2835-camera/controls.c b/drivers/staging/vc04_services/bcm2835-camera/controls.c
2660     index dade79738a29..12ac3ef61fe6 100644
2661     --- a/drivers/staging/vc04_services/bcm2835-camera/controls.c
2662     +++ b/drivers/staging/vc04_services/bcm2835-camera/controls.c
2663     @@ -603,15 +603,28 @@ static int ctrl_set_bitrate(struct bm2835_mmal_dev *dev,
2664     struct v4l2_ctrl *ctrl,
2665     const struct bm2835_mmal_v4l2_ctrl *mmal_ctrl)
2666     {
2667     + int ret;
2668     struct vchiq_mmal_port *encoder_out;
2669    
2670     dev->capture.encode_bitrate = ctrl->val;
2671    
2672     encoder_out = &dev->component[MMAL_COMPONENT_VIDEO_ENCODE]->output[0];
2673    
2674     - return vchiq_mmal_port_parameter_set(dev->instance, encoder_out,
2675     - mmal_ctrl->mmal_id, &ctrl->val,
2676     - sizeof(ctrl->val));
2677     + ret = vchiq_mmal_port_parameter_set(dev->instance, encoder_out,
2678     + mmal_ctrl->mmal_id, &ctrl->val,
2679     + sizeof(ctrl->val));
2680     +
2681     + v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev,
2682     + "%s: After: mmal_ctrl:%p ctrl id:0x%x ctrl val:%d ret %d(%d)\n",
2683     + __func__, mmal_ctrl, ctrl->id, ctrl->val, ret,
2684     + (ret == 0 ? 0 : -EINVAL));
2685     +
2686     + /*
2687     + * Older firmware versions (pre July 2019) have a bug in handling
2688     + * MMAL_PARAMETER_VIDEO_BIT_RATE that results in the call
2689     + * returning -MMAL_MSG_STATUS_EINVAL. So ignore errors from this call.
2690     + */
2691     + return 0;
2692     }
2693    
2694     static int ctrl_set_bitrate_mode(struct bm2835_mmal_dev *dev,
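
The change above tolerates a known firmware defect: the parameter is still set and the outcome logged, but the error is no longer propagated because affected firmware always reports a spurious failure for this call. A minimal sketch of that pattern, with hypothetical names rather than the driver's API:

    #include <linux/device.h>

    /* Hypothetical helper: issue a request whose failure is a known,
     * harmless firmware bug; log the result but report success. */
    static int set_param_tolerating_fw_bug(struct device *dev,
                                           int (*do_set)(void *), void *arg)
    {
            int ret = do_set(arg);

            if (ret)
                    dev_dbg(dev, "ignoring expected firmware error %d\n", ret);

            return 0;
    }
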
2695     diff --git a/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.c b/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.c
2696     index 16af735af5c3..29761f6c3b55 100644
2697     --- a/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.c
2698     +++ b/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.c
2699     @@ -161,7 +161,8 @@ struct vchiq_mmal_instance {
2700     void *bulk_scratch;
2701    
2702     struct idr context_map;
2703     - spinlock_t context_map_lock;
2704     + /* protect accesses to context_map */
2705     + struct mutex context_map_lock;
2706    
2707     /* component to use next */
2708     int component_idx;
2709     @@ -184,10 +185,10 @@ get_msg_context(struct vchiq_mmal_instance *instance)
2710     * that when we service the VCHI reply, we can look up what
2711     * message is being replied to.
2712     */
2713     - spin_lock(&instance->context_map_lock);
2714     + mutex_lock(&instance->context_map_lock);
2715     handle = idr_alloc(&instance->context_map, msg_context,
2716     0, 0, GFP_KERNEL);
2717     - spin_unlock(&instance->context_map_lock);
2718     + mutex_unlock(&instance->context_map_lock);
2719    
2720     if (handle < 0) {
2721     kfree(msg_context);
2722     @@ -211,9 +212,9 @@ release_msg_context(struct mmal_msg_context *msg_context)
2723     {
2724     struct vchiq_mmal_instance *instance = msg_context->instance;
2725    
2726     - spin_lock(&instance->context_map_lock);
2727     + mutex_lock(&instance->context_map_lock);
2728     idr_remove(&instance->context_map, msg_context->handle);
2729     - spin_unlock(&instance->context_map_lock);
2730     + mutex_unlock(&instance->context_map_lock);
2731     kfree(msg_context);
2732     }
2733    
2734     @@ -239,6 +240,8 @@ static void buffer_work_cb(struct work_struct *work)
2735     struct mmal_msg_context *msg_context =
2736     container_of(work, struct mmal_msg_context, u.bulk.work);
2737    
2738     + atomic_dec(&msg_context->u.bulk.port->buffers_with_vpu);
2739     +
2740     msg_context->u.bulk.port->buffer_cb(msg_context->u.bulk.instance,
2741     msg_context->u.bulk.port,
2742     msg_context->u.bulk.status,
2743     @@ -287,8 +290,6 @@ static int bulk_receive(struct vchiq_mmal_instance *instance,
2744    
2745     /* store length */
2746     msg_context->u.bulk.buffer_used = rd_len;
2747     - msg_context->u.bulk.mmal_flags =
2748     - msg->u.buffer_from_host.buffer_header.flags;
2749     msg_context->u.bulk.dts = msg->u.buffer_from_host.buffer_header.dts;
2750     msg_context->u.bulk.pts = msg->u.buffer_from_host.buffer_header.pts;
2751    
2752     @@ -379,6 +380,8 @@ buffer_from_host(struct vchiq_mmal_instance *instance,
2753     /* initialise work structure ready to schedule callback */
2754     INIT_WORK(&msg_context->u.bulk.work, buffer_work_cb);
2755    
2756     + atomic_inc(&port->buffers_with_vpu);
2757     +
2758     /* prep the buffer from host message */
2759     memset(&m, 0xbc, sizeof(m)); /* just to make debug clearer */
2760    
2761     @@ -447,6 +450,9 @@ static void buffer_to_host_cb(struct vchiq_mmal_instance *instance,
2762     return;
2763     }
2764    
2765     + msg_context->u.bulk.mmal_flags =
2766     + msg->u.buffer_from_host.buffer_header.flags;
2767     +
2768     if (msg->h.status != MMAL_MSG_STATUS_SUCCESS) {
2769     /* message reception had an error */
2770     pr_warn("error %d in reply\n", msg->h.status);
2771     @@ -1323,16 +1329,6 @@ static int port_enable(struct vchiq_mmal_instance *instance,
2772     if (port->enabled)
2773     return 0;
2774    
2775     - /* ensure there are enough buffers queued to cover the buffer headers */
2776     - if (port->buffer_cb) {
2777     - hdr_count = 0;
2778     - list_for_each(buf_head, &port->buffers) {
2779     - hdr_count++;
2780     - }
2781     - if (hdr_count < port->current_buffer.num)
2782     - return -ENOSPC;
2783     - }
2784     -
2785     ret = port_action_port(instance, port,
2786     MMAL_MSG_PORT_ACTION_TYPE_ENABLE);
2787     if (ret)
2788     @@ -1849,7 +1845,7 @@ int vchiq_mmal_init(struct vchiq_mmal_instance **out_instance)
2789    
2790     instance->bulk_scratch = vmalloc(PAGE_SIZE);
2791    
2792     - spin_lock_init(&instance->context_map_lock);
2793     + mutex_init(&instance->context_map_lock);
2794     idr_init_base(&instance->context_map, 1);
2795    
2796     params.callback_param = instance;
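
The context_map lock becomes a mutex because idr_alloc() is called with GFP_KERNEL, which may sleep; sleeping is illegal under a spinlock but fine under a mutex. A minimal standalone sketch of the resulting pattern (names are illustrative, not the driver's):

    #include <linux/idr.h>
    #include <linux/mutex.h>

    static DEFINE_MUTEX(map_lock);          /* protects 'map'; may sleep */
    static DEFINE_IDR(map);

    /* idr_alloc() with GFP_KERNEL may sleep, so the lock guarding the
     * IDR must be a mutex rather than a spinlock. */
    static int map_add(void *ptr)
    {
            int handle;

            mutex_lock(&map_lock);
            handle = idr_alloc(&map, ptr, 0, 0, GFP_KERNEL);
            mutex_unlock(&map_lock);

            return handle;                  /* negative errno on failure */
    }

    static void map_del(int handle)
    {
            mutex_lock(&map_lock);
            idr_remove(&map, handle);
            mutex_unlock(&map_lock);
    }
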
2797     diff --git a/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.h b/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.h
2798     index 22b839ecd5f0..b0ee1716525b 100644
2799     --- a/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.h
2800     +++ b/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.h
2801     @@ -71,6 +71,9 @@ struct vchiq_mmal_port {
2802     struct list_head buffers;
2803     /* lock to serialise adding and removing buffers from list */
2804     spinlock_t slock;
2805     +
2806     + /* Count of buffers the VPU has yet to return */
2807     + atomic_t buffers_with_vpu;
2808     /* callback on buffer completion */
2809     vchiq_mmal_buffer_cb buffer_cb;
2810     /* callback context */
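
The new atomic counter implements simple in-flight accounting: buffer_from_host() increments it when a buffer is handed to the VPU, buffer_work_cb() decrements it when the buffer comes back, and stop_streaming() loops until it drains to zero. A minimal sketch of the same shape (standalone names, not the driver's):

    #include <linux/atomic.h>
    #include <linux/completion.h>

    static atomic_t buffers_with_vpu = ATOMIC_INIT(0);
    static DECLARE_COMPLETION(frame_cmplt);

    static void submit_to_vpu(void)
    {
            atomic_inc(&buffers_with_vpu);  /* buffer now owned by the VPU */
            /* ... hand the buffer to the firmware ... */
    }

    static void buffer_returned(void)
    {
            atomic_dec(&buffers_with_vpu);  /* the VPU handed it back */
            complete(&frame_cmplt);
    }

    /* On shutdown, wait (with a timeout) until every buffer is back. */
    static void drain_buffers(void)
    {
            while (atomic_read(&buffers_with_vpu))
                    if (!wait_for_completion_timeout(&frame_cmplt, HZ))
                            break;          /* no progress within 1 second */
    }
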
2811     diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_2835_arm.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_2835_arm.c
2812     index c557c9953724..aa20fcaefa9d 100644
2813     --- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_2835_arm.c
2814     +++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_2835_arm.c
2815     @@ -523,7 +523,7 @@ create_pagelist(char __user *buf, size_t count, unsigned short type)
2816     (g_cache_line_size - 1)))) {
2817     char *fragments;
2818    
2819     - if (down_killable(&g_free_fragments_sema)) {
2820     + if (down_interruptible(&g_free_fragments_sema) != 0) {
2821     cleanup_pagelistinfo(pagelistinfo);
2822     return NULL;
2823     }
2824     diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
2825     index ab7d6a0ce94c..62d8f599e765 100644
2826     --- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
2827     +++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
2828     @@ -532,7 +532,8 @@ add_completion(VCHIQ_INSTANCE_T instance, VCHIQ_REASON_T reason,
2829     vchiq_log_trace(vchiq_arm_log_level,
2830     "%s - completion queue full", __func__);
2831     DEBUG_COUNT(COMPLETION_QUEUE_FULL_COUNT);
2832     - if (wait_for_completion_killable(&instance->remove_event)) {
2833     + if (wait_for_completion_interruptible(
2834     + &instance->remove_event)) {
2835     vchiq_log_info(vchiq_arm_log_level,
2836     "service_callback interrupted");
2837     return VCHIQ_RETRY;
2838     @@ -643,7 +644,7 @@ service_callback(VCHIQ_REASON_T reason, struct vchiq_header *header,
2839     }
2840    
2841     DEBUG_TRACE(SERVICE_CALLBACK_LINE);
2842     - if (wait_for_completion_killable(
2843     + if (wait_for_completion_interruptible(
2844     &user_service->remove_event)
2845     != 0) {
2846     vchiq_log_info(vchiq_arm_log_level,
2847     @@ -978,7 +979,7 @@ vchiq_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
2848     has been closed until the client library calls the
2849     CLOSE_DELIVERED ioctl, signalling close_event. */
2850     if (user_service->close_pending &&
2851     - wait_for_completion_killable(
2852     + wait_for_completion_interruptible(
2853     &user_service->close_event))
2854     status = VCHIQ_RETRY;
2855     break;
2856     @@ -1154,7 +1155,7 @@ vchiq_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
2857    
2858     DEBUG_TRACE(AWAIT_COMPLETION_LINE);
2859     mutex_unlock(&instance->completion_mutex);
2860     - rc = wait_for_completion_killable(
2861     + rc = wait_for_completion_interruptible(
2862     &instance->insert_event);
2863     mutex_lock(&instance->completion_mutex);
2864     if (rc != 0) {
2865     @@ -1324,7 +1325,7 @@ vchiq_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
2866     do {
2867     spin_unlock(&msg_queue_spinlock);
2868     DEBUG_TRACE(DEQUEUE_MESSAGE_LINE);
2869     - if (wait_for_completion_killable(
2870     + if (wait_for_completion_interruptible(
2871     &user_service->insert_event)) {
2872     vchiq_log_info(vchiq_arm_log_level,
2873     "DEQUEUE_MESSAGE interrupted");
2874     @@ -2328,7 +2329,7 @@ vchiq_keepalive_thread_func(void *v)
2875     while (1) {
2876     long rc = 0, uc = 0;
2877    
2878     - if (wait_for_completion_killable(&arm_state->ka_evt)
2879     + if (wait_for_completion_interruptible(&arm_state->ka_evt)
2880     != 0) {
2881     vchiq_log_error(vchiq_susp_log_level,
2882     "%s interrupted", __func__);
2883     @@ -2579,7 +2580,7 @@ block_resume(struct vchiq_arm_state *arm_state)
2884     write_unlock_bh(&arm_state->susp_res_lock);
2885     vchiq_log_info(vchiq_susp_log_level, "%s wait for previously "
2886     "blocked clients", __func__);
2887     - if (wait_for_completion_killable_timeout(
2888     + if (wait_for_completion_interruptible_timeout(
2889     &arm_state->blocked_blocker, timeout_val)
2890     <= 0) {
2891     vchiq_log_error(vchiq_susp_log_level, "%s wait for "
2892     @@ -2605,7 +2606,7 @@ block_resume(struct vchiq_arm_state *arm_state)
2893     write_unlock_bh(&arm_state->susp_res_lock);
2894     vchiq_log_info(vchiq_susp_log_level, "%s wait for resume",
2895     __func__);
2896     - if (wait_for_completion_killable_timeout(
2897     + if (wait_for_completion_interruptible_timeout(
2898     &arm_state->vc_resume_complete, timeout_val)
2899     <= 0) {
2900     vchiq_log_error(vchiq_susp_log_level, "%s wait for "
2901     @@ -2812,7 +2813,7 @@ vchiq_arm_force_suspend(struct vchiq_state *state)
2902     do {
2903     write_unlock_bh(&arm_state->susp_res_lock);
2904    
2905     - rc = wait_for_completion_killable_timeout(
2906     + rc = wait_for_completion_interruptible_timeout(
2907     &arm_state->vc_suspend_complete,
2908     msecs_to_jiffies(FORCE_SUSPEND_TIMEOUT_MS));
2909    
2910     @@ -2908,7 +2909,7 @@ vchiq_arm_allow_resume(struct vchiq_state *state)
2911     write_unlock_bh(&arm_state->susp_res_lock);
2912    
2913     if (resume) {
2914     - if (wait_for_completion_killable(
2915     + if (wait_for_completion_interruptible(
2916     &arm_state->vc_resume_complete) < 0) {
2917     vchiq_log_error(vchiq_susp_log_level,
2918     "%s interrupted", __func__);
2919     diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c
2920     index 0c387b6473a5..44bfa890e0e5 100644
2921     --- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c
2922     +++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c
2923     @@ -395,13 +395,21 @@ remote_event_create(wait_queue_head_t *wq, struct remote_event *event)
2924     init_waitqueue_head(wq);
2925     }
2926    
2927     +/*
2928     + * All the event waiting routines in VCHIQ used a custom semaphore
2929     + * implementation that filtered most signals. This achieved a behaviour similar
2930     + * to the "killable" family of functions. While cleaning up this code, all
2931     + * the routines were switched to the "interruptible" family of functions, as
2932     + * the former was deemed unjustified and the use of "killable" put all of
2933     + * VCHIQ's threads in D state.
2934     + */
2935     static inline int
2936     remote_event_wait(wait_queue_head_t *wq, struct remote_event *event)
2937     {
2938     if (!event->fired) {
2939     event->armed = 1;
2940     dsb(sy);
2941     - if (wait_event_killable(*wq, event->fired)) {
2942     + if (wait_event_interruptible(*wq, event->fired)) {
2943     event->armed = 0;
2944     return 0;
2945     }
2946     @@ -560,7 +568,7 @@ reserve_space(struct vchiq_state *state, size_t space, int is_blocking)
2947     remote_event_signal(&state->remote->trigger);
2948    
2949     if (!is_blocking ||
2950     - (wait_for_completion_killable(
2951     + (wait_for_completion_interruptible(
2952     &state->slot_available_event)))
2953     return NULL; /* No space available */
2954     }
2955     @@ -830,7 +838,7 @@ queue_message(struct vchiq_state *state, struct vchiq_service *service,
2956     spin_unlock(&quota_spinlock);
2957     mutex_unlock(&state->slot_mutex);
2958    
2959     - if (wait_for_completion_killable(
2960     + if (wait_for_completion_interruptible(
2961     &state->data_quota_event))
2962     return VCHIQ_RETRY;
2963    
2964     @@ -861,7 +869,7 @@ queue_message(struct vchiq_state *state, struct vchiq_service *service,
2965     service_quota->slot_use_count);
2966     VCHIQ_SERVICE_STATS_INC(service, quota_stalls);
2967     mutex_unlock(&state->slot_mutex);
2968     - if (wait_for_completion_killable(
2969     + if (wait_for_completion_interruptible(
2970     &service_quota->quota_event))
2971     return VCHIQ_RETRY;
2972     if (service->closing)
2973     @@ -1710,7 +1718,8 @@ parse_rx_slots(struct vchiq_state *state)
2974     &service->bulk_rx : &service->bulk_tx;
2975    
2976     DEBUG_TRACE(PARSE_LINE);
2977     - if (mutex_lock_killable(&service->bulk_mutex)) {
2978     + if (mutex_lock_killable(
2979     + &service->bulk_mutex) != 0) {
2980     DEBUG_TRACE(PARSE_LINE);
2981     goto bail_not_ready;
2982     }
2983     @@ -2428,7 +2437,7 @@ vchiq_open_service_internal(struct vchiq_service *service, int client_id)
2984     QMFLAGS_IS_BLOCKING);
2985     if (status == VCHIQ_SUCCESS) {
2986     /* Wait for the ACK/NAK */
2987     - if (wait_for_completion_killable(&service->remove_event)) {
2988     + if (wait_for_completion_interruptible(&service->remove_event)) {
2989     status = VCHIQ_RETRY;
2990     vchiq_release_service_internal(service);
2991     } else if ((service->srvstate != VCHIQ_SRVSTATE_OPEN) &&
2992     @@ -2795,7 +2804,7 @@ vchiq_connect_internal(struct vchiq_state *state, VCHIQ_INSTANCE_T instance)
2993     }
2994    
2995     if (state->conn_state == VCHIQ_CONNSTATE_CONNECTING) {
2996     - if (wait_for_completion_killable(&state->connect))
2997     + if (wait_for_completion_interruptible(&state->connect))
2998     return VCHIQ_RETRY;
2999    
3000     vchiq_set_conn_state(state, VCHIQ_CONNSTATE_CONNECTED);
3001     @@ -2894,7 +2903,7 @@ vchiq_close_service(VCHIQ_SERVICE_HANDLE_T handle)
3002     }
3003    
3004     while (1) {
3005     - if (wait_for_completion_killable(&service->remove_event)) {
3006     + if (wait_for_completion_interruptible(&service->remove_event)) {
3007     status = VCHIQ_RETRY;
3008     break;
3009     }
3010     @@ -2955,7 +2964,7 @@ vchiq_remove_service(VCHIQ_SERVICE_HANDLE_T handle)
3011     request_poll(service->state, service, VCHIQ_POLL_REMOVE);
3012     }
3013     while (1) {
3014     - if (wait_for_completion_killable(&service->remove_event)) {
3015     + if (wait_for_completion_interruptible(&service->remove_event)) {
3016     status = VCHIQ_RETRY;
3017     break;
3018     }
3019     @@ -3038,7 +3047,7 @@ VCHIQ_STATUS_T vchiq_bulk_transfer(VCHIQ_SERVICE_HANDLE_T handle,
3020     VCHIQ_SERVICE_STATS_INC(service, bulk_stalls);
3021     do {
3022     mutex_unlock(&service->bulk_mutex);
3023     - if (wait_for_completion_killable(
3024     + if (wait_for_completion_interruptible(
3025     &service->bulk_remove_event)) {
3026     status = VCHIQ_RETRY;
3027     goto error_exit;
3028     @@ -3115,7 +3124,7 @@ waiting:
3029    
3030     if (bulk_waiter) {
3031     bulk_waiter->bulk = bulk;
3032     - if (wait_for_completion_killable(&bulk_waiter->event))
3033     + if (wait_for_completion_interruptible(&bulk_waiter->event))
3034     status = VCHIQ_RETRY;
3035     else if (bulk_waiter->actual == VCHIQ_BULK_ACTUAL_ABORTED)
3036     status = VCHIQ_ERROR;
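
The practical difference described in the comment: a task blocked in a "killable" wait sits in TASK_KILLABLE, shows up as D state, and is woken only by a fatal signal, whereas an "interruptible" wait sleeps in S state and any signal wakes it with -ERESTARTSYS, letting the callers back out with VCHIQ_RETRY. A minimal sketch of the caller pattern now used throughout:

    #include <linux/errno.h>
    #include <linux/wait.h>

    static DECLARE_WAIT_QUEUE_HEAD(wq);
    static int fired;

    static int wait_for_event(void)
    {
            /* Returns nonzero if a signal interrupted the sleep. */
            if (wait_event_interruptible(wq, fired))
                    return -ERESTARTSYS;    /* caller retries the operation */

            return 0;
    }
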
3037     diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_util.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_util.c
3038     index 6c519d8e48cb..8ee85c5e6f77 100644
3039     --- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_util.c
3040     +++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_util.c
3041     @@ -50,7 +50,7 @@ void vchiu_queue_push(struct vchiu_queue *queue, struct vchiq_header *header)
3042     return;
3043    
3044     while (queue->write == queue->read + queue->size) {
3045     - if (wait_for_completion_killable(&queue->pop))
3046     + if (wait_for_completion_interruptible(&queue->pop))
3047     flush_signals(current);
3048     }
3049    
3050     @@ -63,7 +63,7 @@ void vchiu_queue_push(struct vchiu_queue *queue, struct vchiq_header *header)
3051     struct vchiq_header *vchiu_queue_peek(struct vchiu_queue *queue)
3052     {
3053     while (queue->write == queue->read) {
3054     - if (wait_for_completion_killable(&queue->push))
3055     + if (wait_for_completion_interruptible(&queue->push))
3056     flush_signals(current);
3057     }
3058    
3059     @@ -77,7 +77,7 @@ struct vchiq_header *vchiu_queue_pop(struct vchiu_queue *queue)
3060     struct vchiq_header *header;
3061    
3062     while (queue->write == queue->read) {
3063     - if (wait_for_completion_killable(&queue->push))
3064     + if (wait_for_completion_interruptible(&queue->push))
3065     flush_signals(current);
3066     }
3067    
3068     diff --git a/drivers/staging/wilc1000/wilc_netdev.c b/drivers/staging/wilc1000/wilc_netdev.c
3069     index ba78c08a17f1..5338d7d2b248 100644
3070     --- a/drivers/staging/wilc1000/wilc_netdev.c
3071     +++ b/drivers/staging/wilc1000/wilc_netdev.c
3072     @@ -530,17 +530,17 @@ static int wilc_wlan_initialize(struct net_device *dev, struct wilc_vif *vif)
3073     goto fail_locks;
3074     }
3075    
3076     - if (wl->gpio_irq && init_irq(dev)) {
3077     - ret = -EIO;
3078     - goto fail_locks;
3079     - }
3080     -
3081     ret = wlan_initialize_threads(dev);
3082     if (ret < 0) {
3083     ret = -EIO;
3084     goto fail_wilc_wlan;
3085     }
3086    
3087     + if (wl->gpio_irq && init_irq(dev)) {
3088     + ret = -EIO;
3089     + goto fail_threads;
3090     + }
3091     +
3092     if (!wl->dev_irq_num &&
3093     wl->hif_func->enable_interrupt &&
3094     wl->hif_func->enable_interrupt(wl)) {
3095     @@ -596,7 +596,7 @@ fail_irq_enable:
3096     fail_irq_init:
3097     if (wl->dev_irq_num)
3098     deinit_irq(dev);
3099     -
3100     +fail_threads:
3101     wlan_deinitialize_threads(dev);
3102     fail_wilc_wlan:
3103     wilc_wlan_cleanup(dev);
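
Moving init_irq() after wlan_initialize_threads() also needs the new fail_threads label, so that a failing IRQ setup unwinds the threads it now depends on, in reverse order of initialisation. The idiomatic shape, with hypothetical function names:

    static int init_threads(void);
    static void deinit_threads(void);
    static int init_irq_line(void);
    static void cleanup_wlan(void);

    static int init_all(void)
    {
            int ret;

            ret = init_threads();
            if (ret)
                    goto fail_wlan;

            ret = init_irq_line();          /* now after the threads */
            if (ret)
                    goto fail_threads;

            return 0;

    fail_threads:
            deinit_threads();               /* undo newest step first */
    fail_wlan:
            cleanup_wlan();
            return ret;
    }
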
3104     diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
3105     index d2f3310abe54..682300713be4 100644
3106     --- a/drivers/tty/serial/8250/8250_port.c
3107     +++ b/drivers/tty/serial/8250/8250_port.c
3108     @@ -1869,8 +1869,7 @@ int serial8250_handle_irq(struct uart_port *port, unsigned int iir)
3109    
3110     status = serial_port_in(port, UART_LSR);
3111    
3112     - if (status & (UART_LSR_DR | UART_LSR_BI) &&
3113     - iir & UART_IIR_RDI) {
3114     + if (status & (UART_LSR_DR | UART_LSR_BI)) {
3115     if (!up->dma || handle_rx_dma(up, iir))
3116     status = serial8250_rx_chars(up, status);
3117     }
3118     diff --git a/drivers/usb/dwc2/core.c b/drivers/usb/dwc2/core.c
3119     index 8b499d643461..8e41d70fd298 100644
3120     --- a/drivers/usb/dwc2/core.c
3121     +++ b/drivers/usb/dwc2/core.c
3122     @@ -531,7 +531,7 @@ int dwc2_core_reset(struct dwc2_hsotg *hsotg, bool skip_wait)
3123     }
3124    
3125     /* Wait for AHB master IDLE state */
3126     - if (dwc2_hsotg_wait_bit_set(hsotg, GRSTCTL, GRSTCTL_AHBIDLE, 50)) {
3127     + if (dwc2_hsotg_wait_bit_set(hsotg, GRSTCTL, GRSTCTL_AHBIDLE, 10000)) {
3128     dev_warn(hsotg->dev, "%s: HANG! AHB Idle timeout GRSTCTL GRSTCTL_AHBIDLE\n",
3129     __func__);
3130     return -EBUSY;
3131     diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c
3132     index 47be961f1bf3..c7ed90084d1a 100644
3133     --- a/drivers/usb/gadget/function/f_fs.c
3134     +++ b/drivers/usb/gadget/function/f_fs.c
3135     @@ -997,7 +997,6 @@ static ssize_t ffs_epfile_io(struct file *file, struct ffs_io_data *io_data)
3136     * earlier
3137     */
3138     gadget = epfile->ffs->gadget;
3139     - io_data->use_sg = gadget->sg_supported && data_len > PAGE_SIZE;
3140    
3141     spin_lock_irq(&epfile->ffs->eps_lock);
3142     /* In the meantime, endpoint got disabled or changed. */
3143     @@ -1012,6 +1011,8 @@ static ssize_t ffs_epfile_io(struct file *file, struct ffs_io_data *io_data)
3144     */
3145     if (io_data->read)
3146     data_len = usb_ep_align_maybe(gadget, ep->ep, data_len);
3147     +
3148     + io_data->use_sg = gadget->sg_supported && data_len > PAGE_SIZE;
3149     spin_unlock_irq(&epfile->ffs->eps_lock);
3150    
3151     data = ffs_alloc_buffer(io_data, data_len);
3152     diff --git a/drivers/usb/gadget/function/u_ether.c b/drivers/usb/gadget/function/u_ether.c
3153     index 737bd77a575d..2929bb47a618 100644
3154     --- a/drivers/usb/gadget/function/u_ether.c
3155     +++ b/drivers/usb/gadget/function/u_ether.c
3156     @@ -186,11 +186,12 @@ rx_submit(struct eth_dev *dev, struct usb_request *req, gfp_t gfp_flags)
3157     out = dev->port_usb->out_ep;
3158     else
3159     out = NULL;
3160     - spin_unlock_irqrestore(&dev->lock, flags);
3161    
3162     if (!out)
3163     + {
3164     + spin_unlock_irqrestore(&dev->lock, flags);
3165     return -ENOTCONN;
3166     -
3167     + }
3168    
3169     /* Padding up to RX_EXTRA handles minor disagreements with host.
3170     * Normally we use the USB "terminate on short read" convention;
3171     @@ -214,6 +215,7 @@ rx_submit(struct eth_dev *dev, struct usb_request *req, gfp_t gfp_flags)
3172    
3173     if (dev->port_usb->is_fixed)
3174     size = max_t(size_t, size, dev->port_usb->fixed_out_len);
3175     + spin_unlock_irqrestore(&dev->lock, flags);
3176    
3177     skb = __netdev_alloc_skb(dev->net, size + NET_IP_ALIGN, gfp_flags);
3178     if (skb == NULL) {
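
The reordering keeps dev->lock held across every read of state derived from dev->port_usb (the out endpoint, the padding computation, fixed_out_len), dropping it only before the skb allocation, which may sleep. A minimal sketch of that lock-scope discipline (the struct and helpers are illustrative):

    #include <linux/errno.h>
    #include <linux/spinlock.h>

    struct rx_port;
    static size_t port_buf_size(struct rx_port *port);
    static int alloc_and_submit(size_t size);       /* may sleep */

    struct rx_dev {
            spinlock_t lock;
            struct rx_port *port;           /* cleared when the link drops */
    };

    static int rx_prepare(struct rx_dev *dev)
    {
            unsigned long flags;
            size_t size;

            spin_lock_irqsave(&dev->lock, flags);
            if (!dev->port) {
                    spin_unlock_irqrestore(&dev->lock, flags);
                    return -ENOTCONN;
            }
            size = port_buf_size(dev->port);        /* read under the lock */
            spin_unlock_irqrestore(&dev->lock, flags);

            return alloc_and_submit(size);  /* lock dropped: can sleep */
    }
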
3179     diff --git a/drivers/usb/renesas_usbhs/fifo.c b/drivers/usb/renesas_usbhs/fifo.c
3180     index 39fa2fc1b8b7..6036cbae8c78 100644
3181     --- a/drivers/usb/renesas_usbhs/fifo.c
3182     +++ b/drivers/usb/renesas_usbhs/fifo.c
3183     @@ -802,9 +802,8 @@ static int __usbhsf_dma_map_ctrl(struct usbhs_pkt *pkt, int map)
3184     }
3185    
3186     static void usbhsf_dma_complete(void *arg);
3187     -static void xfer_work(struct work_struct *work)
3188     +static void usbhsf_dma_xfer_preparing(struct usbhs_pkt *pkt)
3189     {
3190     - struct usbhs_pkt *pkt = container_of(work, struct usbhs_pkt, work);
3191     struct usbhs_pipe *pipe = pkt->pipe;
3192     struct usbhs_fifo *fifo;
3193     struct usbhs_priv *priv = usbhs_pipe_to_priv(pipe);
3194     @@ -812,12 +811,10 @@ static void xfer_work(struct work_struct *work)
3195     struct dma_chan *chan;
3196     struct device *dev = usbhs_priv_to_dev(priv);
3197     enum dma_transfer_direction dir;
3198     - unsigned long flags;
3199    
3200     - usbhs_lock(priv, flags);
3201     fifo = usbhs_pipe_to_fifo(pipe);
3202     if (!fifo)
3203     - goto xfer_work_end;
3204     + return;
3205    
3206     chan = usbhsf_dma_chan_get(fifo, pkt);
3207     dir = usbhs_pipe_is_dir_in(pipe) ? DMA_DEV_TO_MEM : DMA_MEM_TO_DEV;
3208     @@ -826,7 +823,7 @@ static void xfer_work(struct work_struct *work)
3209     pkt->trans, dir,
3210     DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
3211     if (!desc)
3212     - goto xfer_work_end;
3213     + return;
3214    
3215     desc->callback = usbhsf_dma_complete;
3216     desc->callback_param = pipe;
3217     @@ -834,7 +831,7 @@ static void xfer_work(struct work_struct *work)
3218     pkt->cookie = dmaengine_submit(desc);
3219     if (pkt->cookie < 0) {
3220     dev_err(dev, "Failed to submit dma descriptor\n");
3221     - goto xfer_work_end;
3222     + return;
3223     }
3224    
3225     dev_dbg(dev, " %s %d (%d/ %d)\n",
3226     @@ -845,8 +842,17 @@ static void xfer_work(struct work_struct *work)
3227     dma_async_issue_pending(chan);
3228     usbhsf_dma_start(pipe, fifo);
3229     usbhs_pipe_enable(pipe);
3230     +}
3231     +
3232     +static void xfer_work(struct work_struct *work)
3233     +{
3234     + struct usbhs_pkt *pkt = container_of(work, struct usbhs_pkt, work);
3235     + struct usbhs_pipe *pipe = pkt->pipe;
3236     + struct usbhs_priv *priv = usbhs_pipe_to_priv(pipe);
3237     + unsigned long flags;
3238    
3239     -xfer_work_end:
3240     + usbhs_lock(priv, flags);
3241     + usbhsf_dma_xfer_preparing(pkt);
3242     usbhs_unlock(priv, flags);
3243     }
3244    
3245     @@ -899,8 +905,13 @@ static int usbhsf_dma_prepare_push(struct usbhs_pkt *pkt, int *is_done)
3246     pkt->trans = len;
3247    
3248     usbhsf_tx_irq_ctrl(pipe, 0);
3249     - INIT_WORK(&pkt->work, xfer_work);
3250     - schedule_work(&pkt->work);
3251     + /* FIXME: Workaround for USB-DMAC so that the driver can be used in atomic context */
3252     + if (usbhs_get_dparam(priv, has_usb_dmac)) {
3253     + usbhsf_dma_xfer_preparing(pkt);
3254     + } else {
3255     + INIT_WORK(&pkt->work, xfer_work);
3256     + schedule_work(&pkt->work);
3257     + }
3258    
3259     return 0;
3260    
3261     @@ -1006,8 +1017,7 @@ static int usbhsf_dma_prepare_pop_with_usb_dmac(struct usbhs_pkt *pkt,
3262    
3263     pkt->trans = pkt->length;
3264    
3265     - INIT_WORK(&pkt->work, xfer_work);
3266     - schedule_work(&pkt->work);
3267     + usbhsf_dma_xfer_preparing(pkt);
3268    
3269     return 0;
3270    
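
Factoring the DMA preparation out of the work item lets the USB-DMAC path, which already runs in atomic context with the driver lock held, call it directly, while the other paths still defer to a workqueue. A minimal sketch of the run-inline-or-defer pattern (struct and names are illustrative):

    #include <linux/workqueue.h>

    struct pkt {
            struct work_struct work;
            /* ... transfer state ... */
    };

    static void prepare_xfer(struct pkt *pkt);      /* the factored-out body */

    static void xfer_work_fn(struct work_struct *work)
    {
            struct pkt *pkt = container_of(work, struct pkt, work);

            prepare_xfer(pkt);                      /* process context */
    }

    static void start_xfer(struct pkt *pkt, bool can_run_atomic)
    {
            if (can_run_atomic) {
                    prepare_xfer(pkt);              /* inline, lock already held */
            } else {
                    INIT_WORK(&pkt->work, xfer_work_fn);
                    schedule_work(&pkt->work);
            }
    }
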
3271     diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
3272     index 1d8461ae2c34..23669a584bae 100644
3273     --- a/drivers/usb/serial/ftdi_sio.c
3274     +++ b/drivers/usb/serial/ftdi_sio.c
3275     @@ -1029,6 +1029,7 @@ static const struct usb_device_id id_table_combined[] = {
3276     { USB_DEVICE(AIRBUS_DS_VID, AIRBUS_DS_P8GR) },
3277     /* EZPrototypes devices */
3278     { USB_DEVICE(EZPROTOTYPES_VID, HJELMSLUND_USB485_ISO_PID) },
3279     + { USB_DEVICE_INTERFACE_NUMBER(UNJO_VID, UNJO_ISODEBUG_V1_PID, 1) },
3280     { } /* Terminating entry */
3281     };
3282    
3283     diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h
3284     index 5755f0df0025..f12d806220b4 100644
3285     --- a/drivers/usb/serial/ftdi_sio_ids.h
3286     +++ b/drivers/usb/serial/ftdi_sio_ids.h
3287     @@ -1543,3 +1543,9 @@
3288     #define CHETCO_SEASMART_DISPLAY_PID 0xA5AD /* SeaSmart NMEA2000 Display */
3289     #define CHETCO_SEASMART_LITE_PID 0xA5AE /* SeaSmart Lite USB Adapter */
3290     #define CHETCO_SEASMART_ANALOG_PID 0xA5AF /* SeaSmart Analog Adapter */
3291     +
3292     +/*
3293     + * Unjo AB
3294     + */
3295     +#define UNJO_VID 0x22B7
3296     +#define UNJO_ISODEBUG_V1_PID 0x150D
3297     diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
3298     index a0aaf0635359..c1582fbd1150 100644
3299     --- a/drivers/usb/serial/option.c
3300     +++ b/drivers/usb/serial/option.c
3301     @@ -1343,6 +1343,7 @@ static const struct usb_device_id option_ids[] = {
3302     .driver_info = RSVD(4) },
3303     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0414, 0xff, 0xff, 0xff) },
3304     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0417, 0xff, 0xff, 0xff) },
3305     + { USB_DEVICE_INTERFACE_CLASS(ZTE_VENDOR_ID, 0x0601, 0xff) }, /* GosunCn ZTE WeLink ME3630 (RNDIS mode) */
3306     { USB_DEVICE_INTERFACE_CLASS(ZTE_VENDOR_ID, 0x0602, 0xff) }, /* GosunCn ZTE WeLink ME3630 (MBIM mode) */
3307     { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1008, 0xff, 0xff, 0xff),
3308     .driver_info = RSVD(4) },
3309     diff --git a/drivers/usb/typec/tps6598x.c b/drivers/usb/typec/tps6598x.c
3310     index c674abe3cf99..a38d1409f15b 100644
3311     --- a/drivers/usb/typec/tps6598x.c
3312     +++ b/drivers/usb/typec/tps6598x.c
3313     @@ -41,7 +41,7 @@
3314     #define TPS_STATUS_VCONN(s) (!!((s) & BIT(7)))
3315    
3316     /* TPS_REG_SYSTEM_CONF bits */
3317     -#define TPS_SYSCONF_PORTINFO(c) ((c) & 3)
3318     +#define TPS_SYSCONF_PORTINFO(c) ((c) & 7)
3319    
3320     enum {
3321     TPS_PORTINFO_SINK,
3322     @@ -127,7 +127,7 @@ tps6598x_block_read(struct tps6598x *tps, u8 reg, void *val, size_t len)
3323     }
3324    
3325     static int tps6598x_block_write(struct tps6598x *tps, u8 reg,
3326     - void *val, size_t len)
3327     + const void *val, size_t len)
3328     {
3329     u8 data[TPS_MAX_LEN + 1];
3330    
3331     @@ -173,7 +173,7 @@ static inline int tps6598x_write64(struct tps6598x *tps, u8 reg, u64 val)
3332     static inline int
3333     tps6598x_write_4cc(struct tps6598x *tps, u8 reg, const char *val)
3334     {
3335     - return tps6598x_block_write(tps, reg, &val, sizeof(u32));
3336     + return tps6598x_block_write(tps, reg, val, 4);
3337     }
3338    
3339     static int tps6598x_read_partner_identity(struct tps6598x *tps)
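
The tps6598x_write_4cc() fix is a pointer-level bug: `&val` is the address of the pointer variable, so the old code pushed the first four bytes of the pointer itself over the bus instead of the four command characters. A small standalone C program distilling the before and after (the 4CC string is made up):

    #include <stdio.h>

    static void block_write(const void *buf, size_t len)
    {
            fwrite(buf, 1, len, stdout);    /* stand-in for the I2C write */
    }

    int main(void)
    {
            const char *cmd = "ABCD";       /* hypothetical 4CC command */

            /* Buggy: &cmd is the address of the pointer, so this would
             * send 4 bytes of the pointer value, not the command:
             *
             *     block_write(&cmd, sizeof(unsigned int));
             */
            block_write(cmd, 4);            /* fixed: the 4 characters */
            putchar('\n');
            return 0;
    }
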
3340     diff --git a/fs/crypto/policy.c b/fs/crypto/policy.c
3341     index d536889ac31b..4941fe8471ce 100644
3342     --- a/fs/crypto/policy.c
3343     +++ b/fs/crypto/policy.c
3344     @@ -81,6 +81,8 @@ int fscrypt_ioctl_set_policy(struct file *filp, const void __user *arg)
3345     if (ret == -ENODATA) {
3346     if (!S_ISDIR(inode->i_mode))
3347     ret = -ENOTDIR;
3348     + else if (IS_DEADDIR(inode))
3349     + ret = -ENOENT;
3350     else if (!inode->i_sb->s_cop->empty_dir(inode))
3351     ret = -ENOTEMPTY;
3352     else
3353     diff --git a/fs/iomap.c b/fs/iomap.c
3354     index 12654c2e78f8..da961fca3180 100644
3355     --- a/fs/iomap.c
3356     +++ b/fs/iomap.c
3357     @@ -333,7 +333,7 @@ iomap_readpage_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
3358     if (iop)
3359     atomic_inc(&iop->read_count);
3360    
3361     - if (!ctx->bio || !is_contig || bio_full(ctx->bio)) {
3362     + if (!ctx->bio || !is_contig || bio_full(ctx->bio, plen)) {
3363     gfp_t gfp = mapping_gfp_constraint(page->mapping, GFP_KERNEL);
3364     int nr_vecs = (length + PAGE_SIZE - 1) >> PAGE_SHIFT;
3365    
3366     diff --git a/fs/udf/inode.c b/fs/udf/inode.c
3367     index e7276932e433..9bb18311a22f 100644
3368     --- a/fs/udf/inode.c
3369     +++ b/fs/udf/inode.c
3370     @@ -470,13 +470,15 @@ static struct buffer_head *udf_getblk(struct inode *inode, udf_pblk_t block,
3371     return NULL;
3372     }
3373    
3374     -/* Extend the file by 'blocks' blocks, return the number of extents added */
3375     +/* Extend the file with new blocks totaling 'new_block_bytes' bytes,
3376     + * return the number of extents added
3377     + */
3378     static int udf_do_extend_file(struct inode *inode,
3379     struct extent_position *last_pos,
3380     struct kernel_long_ad *last_ext,
3381     - sector_t blocks)
3382     + loff_t new_block_bytes)
3383     {
3384     - sector_t add;
3385     + uint32_t add;
3386     int count = 0, fake = !(last_ext->extLength & UDF_EXTENT_LENGTH_MASK);
3387     struct super_block *sb = inode->i_sb;
3388     struct kernel_lb_addr prealloc_loc = {};
3389     @@ -486,7 +488,7 @@ static int udf_do_extend_file(struct inode *inode,
3390    
3391     /* The previous extent is fake and we should not extend by anything
3392     * - there's nothing to do... */
3393     - if (!blocks && fake)
3394     + if (!new_block_bytes && fake)
3395     return 0;
3396    
3397     iinfo = UDF_I(inode);
3398     @@ -517,13 +519,12 @@ static int udf_do_extend_file(struct inode *inode,
3399     /* Can we merge with the previous extent? */
3400     if ((last_ext->extLength & UDF_EXTENT_FLAG_MASK) ==
3401     EXT_NOT_RECORDED_NOT_ALLOCATED) {
3402     - add = ((1 << 30) - sb->s_blocksize -
3403     - (last_ext->extLength & UDF_EXTENT_LENGTH_MASK)) >>
3404     - sb->s_blocksize_bits;
3405     - if (add > blocks)
3406     - add = blocks;
3407     - blocks -= add;
3408     - last_ext->extLength += add << sb->s_blocksize_bits;
3409     + add = (1 << 30) - sb->s_blocksize -
3410     + (last_ext->extLength & UDF_EXTENT_LENGTH_MASK);
3411     + if (add > new_block_bytes)
3412     + add = new_block_bytes;
3413     + new_block_bytes -= add;
3414     + last_ext->extLength += add;
3415     }
3416    
3417     if (fake) {
3418     @@ -544,28 +545,27 @@ static int udf_do_extend_file(struct inode *inode,
3419     }
3420    
3421     /* Managed to do everything necessary? */
3422     - if (!blocks)
3423     + if (!new_block_bytes)
3424     goto out;
3425    
3426     /* All further extents will be NOT_RECORDED_NOT_ALLOCATED */
3427     last_ext->extLocation.logicalBlockNum = 0;
3428     last_ext->extLocation.partitionReferenceNum = 0;
3429     - add = (1 << (30-sb->s_blocksize_bits)) - 1;
3430     - last_ext->extLength = EXT_NOT_RECORDED_NOT_ALLOCATED |
3431     - (add << sb->s_blocksize_bits);
3432     + add = (1 << 30) - sb->s_blocksize;
3433     + last_ext->extLength = EXT_NOT_RECORDED_NOT_ALLOCATED | add;
3434    
3435     /* Create enough extents to cover the whole hole */
3436     - while (blocks > add) {
3437     - blocks -= add;
3438     + while (new_block_bytes > add) {
3439     + new_block_bytes -= add;
3440     err = udf_add_aext(inode, last_pos, &last_ext->extLocation,
3441     last_ext->extLength, 1);
3442     if (err)
3443     return err;
3444     count++;
3445     }
3446     - if (blocks) {
3447     + if (new_block_bytes) {
3448     last_ext->extLength = EXT_NOT_RECORDED_NOT_ALLOCATED |
3449     - (blocks << sb->s_blocksize_bits);
3450     + new_block_bytes;
3451     err = udf_add_aext(inode, last_pos, &last_ext->extLocation,
3452     last_ext->extLength, 1);
3453     if (err)
3454     @@ -596,6 +596,24 @@ out:
3455     return count;
3456     }
3457    
3458     +/* Extend the final block of the file to final_block_len bytes */
3459     +static void udf_do_extend_final_block(struct inode *inode,
3460     + struct extent_position *last_pos,
3461     + struct kernel_long_ad *last_ext,
3462     + uint32_t final_block_len)
3463     +{
3464     + struct super_block *sb = inode->i_sb;
3465     + uint32_t added_bytes;
3466     +
3467     + added_bytes = final_block_len -
3468     + (last_ext->extLength & (sb->s_blocksize - 1));
3469     + last_ext->extLength += added_bytes;
3470     + UDF_I(inode)->i_lenExtents += added_bytes;
3471     +
3472     + udf_write_aext(inode, last_pos, &last_ext->extLocation,
3473     + last_ext->extLength, 1);
3474     +}
3475     +
3476     static int udf_extend_file(struct inode *inode, loff_t newsize)
3477     {
3478    
3479     @@ -605,10 +623,12 @@ static int udf_extend_file(struct inode *inode, loff_t newsize)
3480     int8_t etype;
3481     struct super_block *sb = inode->i_sb;
3482     sector_t first_block = newsize >> sb->s_blocksize_bits, offset;
3483     + unsigned long partial_final_block;
3484     int adsize;
3485     struct udf_inode_info *iinfo = UDF_I(inode);
3486     struct kernel_long_ad extent;
3487     - int err;
3488     + int err = 0;
3489     + int within_final_block;
3490    
3491     if (iinfo->i_alloc_type == ICBTAG_FLAG_AD_SHORT)
3492     adsize = sizeof(struct short_ad);
3493     @@ -618,18 +638,8 @@ static int udf_extend_file(struct inode *inode, loff_t newsize)
3494     BUG();
3495    
3496     etype = inode_bmap(inode, first_block, &epos, &eloc, &elen, &offset);
3497     + within_final_block = (etype != -1);
3498    
3499     - /* File has extent covering the new size (could happen when extending
3500     - * inside a block)? */
3501     - if (etype != -1)
3502     - return 0;
3503     - if (newsize & (sb->s_blocksize - 1))
3504     - offset++;
3505     - /* Extended file just to the boundary of the last file block? */
3506     - if (offset == 0)
3507     - return 0;
3508     -
3509     - /* Truncate is extending the file by 'offset' blocks */
3510     if ((!epos.bh && epos.offset == udf_file_entry_alloc_offset(inode)) ||
3511     (epos.bh && epos.offset == sizeof(struct allocExtDesc))) {
3512     /* File has no extents at all or has empty last
3513     @@ -643,7 +653,22 @@ static int udf_extend_file(struct inode *inode, loff_t newsize)
3514     &extent.extLength, 0);
3515     extent.extLength |= etype << 30;
3516     }
3517     - err = udf_do_extend_file(inode, &epos, &extent, offset);
3518     +
3519     + partial_final_block = newsize & (sb->s_blocksize - 1);
3520     +
3521     + /* File has extent covering the new size (could happen when extending
3522     + * inside a block)?
3523     + */
3524     + if (within_final_block) {
3525     + /* Extending file within the last file block */
3526     + udf_do_extend_final_block(inode, &epos, &extent,
3527     + partial_final_block);
3528     + } else {
3529     + loff_t add = ((loff_t)offset << sb->s_blocksize_bits) |
3530     + partial_final_block;
3531     + err = udf_do_extend_file(inode, &epos, &extent, add);
3532     + }
3533     +
3534     if (err < 0)
3535     goto out;
3536     err = 0;
3537     @@ -745,6 +770,7 @@ static sector_t inode_getblk(struct inode *inode, sector_t block,
3538     /* Are we beyond EOF? */
3539     if (etype == -1) {
3540     int ret;
3541     + loff_t hole_len;
3542     isBeyondEOF = true;
3543     if (count) {
3544     if (c)
3545     @@ -760,7 +786,8 @@ static sector_t inode_getblk(struct inode *inode, sector_t block,
3546     startnum = (offset > 0);
3547     }
3548     /* Create extents for the hole between EOF and offset */
3549     - ret = udf_do_extend_file(inode, &prev_epos, laarr, offset);
3550     + hole_len = (loff_t)offset << inode->i_blkbits;
3551     + ret = udf_do_extend_file(inode, &prev_epos, laarr, hole_len);
3552     if (ret < 0) {
3553     *err = ret;
3554     newblock = 0;
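
Switching udf_do_extend_file() from whole blocks to bytes lets its callers describe a hole that ends mid-block: inode_getblk() now passes (loff_t)offset << i_blkbits, and udf_extend_file() ORs in the partial final block. A small standalone program working through example numbers (block size and hole length are made up):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            const uint64_t bsize = 2048;    /* example block size */
            uint64_t blocks = 3;            /* whole blocks of hole */
            uint64_t partial = 500;         /* bytes into the final block */

            /* Equivalent to (blocks << blocksize_bits) | partial,
             * since partial is always smaller than the block size. */
            uint64_t hole_len = blocks * bsize + partial;   /* 6644 */

            /* Each extent can cover at most (1 << 30) - bsize bytes. */
            uint64_t max_ext = (1ULL << 30) - bsize;

            printf("hole_len=%llu bytes, extents=%llu\n",
                   (unsigned long long)hole_len,
                   (unsigned long long)((hole_len + max_ext - 1) / max_ext));
            return 0;
    }
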
3555     diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
3556     index 8da5e6637771..11f703d4a605 100644
3557     --- a/fs/xfs/xfs_aops.c
3558     +++ b/fs/xfs/xfs_aops.c
3559     @@ -782,7 +782,7 @@ xfs_add_to_ioend(
3560     atomic_inc(&iop->write_count);
3561    
3562     if (!merged) {
3563     - if (bio_full(wpc->ioend->io_bio))
3564     + if (bio_full(wpc->ioend->io_bio, len))
3565     xfs_chain_bio(wpc->ioend, wbc, bdev, sector);
3566     bio_add_page(wpc->ioend->io_bio, page, len, poff);
3567     }
3568     diff --git a/include/linux/bio.h b/include/linux/bio.h
3569     index f87abaa898f0..e36b8fc1b1c3 100644
3570     --- a/include/linux/bio.h
3571     +++ b/include/linux/bio.h
3572     @@ -102,9 +102,23 @@ static inline void *bio_data(struct bio *bio)
3573     return NULL;
3574     }
3575    
3576     -static inline bool bio_full(struct bio *bio)
3577     +/**
3578     + * bio_full - check if the bio is full
3579     + * @bio: bio to check
3580     + * @len: length of one segment to be added
3581     + *
3582     + * Return true if @bio is full and one segment with @len bytes can't be
3583     + * added to the bio, otherwise return false
3584     + */
3585     +static inline bool bio_full(struct bio *bio, unsigned len)
3586     {
3587     - return bio->bi_vcnt >= bio->bi_max_vecs;
3588     + if (bio->bi_vcnt >= bio->bi_max_vecs)
3589     + return true;
3590     +
3591     + if (bio->bi_iter.bi_size > UINT_MAX - len)
3592     + return true;
3593     +
3594     + return false;
3595     }
3596    
3597     static inline bool bio_next_segment(const struct bio *bio,
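
Passing the pending length into bio_full() lets it also reject an addition that would overflow bi_size, an unsigned int, which the old vcnt-only check could not catch. The caller pattern from the iomap and xfs hunks above, sketched with a hypothetical helper:

    #include <linux/bio.h>

    /* replace_bio() is hypothetical: submit or chain the old bio and
     * allocate a fresh one. */
    static struct bio *replace_bio(struct bio *old);

    static void add_page_checked(struct bio **biop, struct page *page,
                                 unsigned int len, unsigned int off)
    {
            /* Start a new bio when the current one cannot take 'len'
             * more bytes: too many vecs, or bi_size would overflow. */
            if (!*biop || bio_full(*biop, len))
                    *biop = replace_bio(*biop);

            bio_add_page(*biop, page, len, off);    /* result check omitted */
    }
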
3598     diff --git a/include/linux/vmw_vmci_defs.h b/include/linux/vmw_vmci_defs.h
3599     index 77ac9c7b9483..762f793e92f6 100644
3600     --- a/include/linux/vmw_vmci_defs.h
3601     +++ b/include/linux/vmw_vmci_defs.h
3602     @@ -62,9 +62,18 @@ enum {
3603    
3604     /*
3605     * A single VMCI device has an upper limit of 128MB on the amount of
3606     - * memory that can be used for queue pairs.
3607     + * memory that can be used for queue pairs. Since each queue pair
3608     + * consists of at least two pages, the memory limit also dictates the
3609     + * number of queue pairs a guest can create.
3610     */
3611     #define VMCI_MAX_GUEST_QP_MEMORY (128 * 1024 * 1024)
3612     +#define VMCI_MAX_GUEST_QP_COUNT (VMCI_MAX_GUEST_QP_MEMORY / PAGE_SIZE / 2)
3613     +
3614     +/*
3615     + * There can be at most PAGE_SIZE doorbells since there is one doorbell
3616     + * per byte in the doorbell bitmap page.
3617     + */
3618     +#define VMCI_MAX_GUEST_DOORBELL_COUNT PAGE_SIZE
3619    
3620     /*
3621     * Queues with pre-mapped data pages must be small, so that we don't pin
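
With the usual 4 KiB pages the new limits work out to 128 MiB / 4 KiB / 2 = 16384 queue pairs, and one doorbell per byte of the bitmap page gives 4096 doorbells. A compile-time check making the derivation explicit (a sketch, not part of the patch):

    #include <linux/build_bug.h>
    #include <linux/mm.h>
    #include <linux/vmw_vmci_defs.h>

    static inline void vmci_limit_sanity(void)
    {
            /* Each queue pair needs at least two pages of the 128MB pool. */
            BUILD_BUG_ON(VMCI_MAX_GUEST_QP_COUNT * 2 * PAGE_SIZE !=
                         VMCI_MAX_GUEST_QP_MEMORY);
            /* One doorbell per byte of the one-page doorbell bitmap. */
            BUILD_BUG_ON(VMCI_MAX_GUEST_DOORBELL_COUNT != PAGE_SIZE);
    }
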
3622     diff --git a/include/uapi/linux/usb/audio.h b/include/uapi/linux/usb/audio.h
3623     index ddc5396800aa..76b7c3f6cd0d 100644
3624     --- a/include/uapi/linux/usb/audio.h
3625     +++ b/include/uapi/linux/usb/audio.h
3626     @@ -450,6 +450,43 @@ static inline __u8 *uac_processing_unit_specific(struct uac_processing_unit_desc
3627     }
3628     }
3629    
3630     +/*
3631     + * An Extension Unit (XU) has an almost compatible layout with a Processing
3632     + * Unit, but on UAC2 it has a different bmControls size (bControlSize): it is
3633     + * 1 byte for an XU but 2 bytes for a PU. The last iExtension field is a
3634     + * one-byte index, just like the iProcessing field of a PU.
3635     + */
3636     +static inline __u8 uac_extension_unit_bControlSize(struct uac_processing_unit_descriptor *desc,
3637     + int protocol)
3638     +{
3639     + switch (protocol) {
3640     + case UAC_VERSION_1:
3641     + return desc->baSourceID[desc->bNrInPins + 4];
3642     + case UAC_VERSION_2:
3643     + return 1; /* in UAC2, this value is constant */
3644     + case UAC_VERSION_3:
3645     + return 4; /* in UAC3, this value is constant */
3646     + default:
3647     + return 1;
3648     + }
3649     +}
3650     +
3651     +static inline __u8 uac_extension_unit_iExtension(struct uac_processing_unit_descriptor *desc,
3652     + int protocol)
3653     +{
3654     + __u8 control_size = uac_extension_unit_bControlSize(desc, protocol);
3655     +
3656     + switch (protocol) {
3657     + case UAC_VERSION_1:
3658     + case UAC_VERSION_2:
3659     + default:
3660     + return *(uac_processing_unit_bmControls(desc, protocol)
3661     + + control_size);
3662     + case UAC_VERSION_3:
3663     + return 0; /* UAC3 does not have this field */
3664     + }
3665     +}
3666     +
3667     /* 4.5.2 Class-Specific AS Interface Descriptor */
3668     struct uac1_as_header_descriptor {
3669     __u8 bLength; /* in bytes: 7 */
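
The UAC1 indexing in these helpers follows from the extension unit layout: after the baSourceID[bNrInPins] array come bNrChannels (1 byte), wChannelConfig (2 bytes) and iChannelNames (1 byte), which is why bControlSize lives at baSourceID[bNrInPins + 4]; bmControls follows it, and iExtension is the byte after the bmControls array. Sketched as an offset table (derived from the helpers above; offsets relative to baSourceID[bNrInPins]):

    /* UAC1 extension unit tail:
     *   +0                  bNrChannels
     *   +1                  wChannelConfig (2 bytes)
     *   +3                  iChannelNames
     *   +4                  bControlSize    (baSourceID[bNrInPins + 4])
     *   +5                  bmControls[bControlSize]
     *   +5 + bControlSize   iExtension
     */
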
3670     diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
3671     index 6f3a35949cdd..f24a757f8239 100644
3672     --- a/sound/pci/hda/patch_realtek.c
3673     +++ b/sound/pci/hda/patch_realtek.c
3674     @@ -3255,6 +3255,7 @@ static void alc256_init(struct hda_codec *codec)
3675     alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x4); /* Hight power */
3676     alc_update_coefex_idx(codec, 0x53, 0x02, 0x8000, 1 << 15); /* Clear bit */
3677     alc_update_coefex_idx(codec, 0x53, 0x02, 0x8000, 0 << 15);
3678     + alc_update_coef_idx(codec, 0x36, 1 << 13, 1 << 5); /* Switch pcbeep path to Line in path*/
3679     }
3680    
3681     static void alc256_shutup(struct hda_codec *codec)
3682     @@ -7825,7 +7826,6 @@ static int patch_alc269(struct hda_codec *codec)
3683     spec->shutup = alc256_shutup;
3684     spec->init_hook = alc256_init;
3685     spec->gen.mixer_nid = 0; /* ALC256 does not have any loopback mixer path */
3686     - alc_update_coef_idx(codec, 0x36, 1 << 13, 1 << 5); /* Switch pcbeep path to Line in path*/
3687     break;
3688     case 0x10ec0257:
3689     spec->codec_variant = ALC269_TYPE_ALC257;
3690     diff --git a/sound/usb/mixer.c b/sound/usb/mixer.c
3691     index c703f8534b07..7498b5191b68 100644
3692     --- a/sound/usb/mixer.c
3693     +++ b/sound/usb/mixer.c
3694     @@ -2303,7 +2303,7 @@ static struct procunit_info extunits[] = {
3695     */
3696     static int build_audio_procunit(struct mixer_build *state, int unitid,
3697     void *raw_desc, struct procunit_info *list,
3698     - char *name)
3699     + bool extension_unit)
3700     {
3701     struct uac_processing_unit_descriptor *desc = raw_desc;
3702     int num_ins;
3703     @@ -2320,6 +2320,8 @@ static int build_audio_procunit(struct mixer_build *state, int unitid,
3704     static struct procunit_info default_info = {
3705     0, NULL, default_value_info
3706     };
3707     + const char *name = extension_unit ?
3708     + "Extension Unit" : "Processing Unit";
3709    
3710     if (desc->bLength < 13) {
3711     usb_audio_err(state->chip, "invalid %s descriptor (id %d)\n", name, unitid);
3712     @@ -2433,7 +2435,10 @@ static int build_audio_procunit(struct mixer_build *state, int unitid,
3713     } else if (info->name) {
3714     strlcpy(kctl->id.name, info->name, sizeof(kctl->id.name));
3715     } else {
3716     - nameid = uac_processing_unit_iProcessing(desc, state->mixer->protocol);
3717     + if (extension_unit)
3718     + nameid = uac_extension_unit_iExtension(desc, state->mixer->protocol);
3719     + else
3720     + nameid = uac_processing_unit_iProcessing(desc, state->mixer->protocol);
3721     len = 0;
3722     if (nameid)
3723     len = snd_usb_copy_string_desc(state->chip,
3724     @@ -2466,10 +2471,10 @@ static int parse_audio_processing_unit(struct mixer_build *state, int unitid,
3725     case UAC_VERSION_2:
3726     default:
3727     return build_audio_procunit(state, unitid, raw_desc,
3728     - procunits, "Processing Unit");
3729     + procunits, false);
3730     case UAC_VERSION_3:
3731     return build_audio_procunit(state, unitid, raw_desc,
3732     - uac3_procunits, "Processing Unit");
3733     + uac3_procunits, false);
3734     }
3735     }
3736    
3737     @@ -2480,8 +2485,7 @@ static int parse_audio_extension_unit(struct mixer_build *state, int unitid,
3738     * Note that we parse extension units with processing unit descriptors.
3739     * That's ok as the layout is the same.
3740     */
3741     - return build_audio_procunit(state, unitid, raw_desc,
3742     - extunits, "Extension Unit");
3743     + return build_audio_procunit(state, unitid, raw_desc, extunits, true);
3744     }
3745    
3746     /*
3747     diff --git a/tools/perf/Documentation/intel-pt.txt b/tools/perf/Documentation/intel-pt.txt
3748     index 115eaacc455f..60d99e5e7921 100644
3749     --- a/tools/perf/Documentation/intel-pt.txt
3750     +++ b/tools/perf/Documentation/intel-pt.txt
3751     @@ -88,16 +88,16 @@ smaller.
3752    
3753     To represent software control flow, "branches" samples are produced. By default
3754     a branch sample is synthesized for every single branch. To get an idea what
3755     -data is available you can use the 'perf script' tool with no parameters, which
3756     -will list all the samples.
3757     +data is available, you can use the 'perf script' tool with all itrace sampling
3758     +options, which will list all the samples.
3759    
3760     perf record -e intel_pt//u ls
3761     - perf script
3762     + perf script --itrace=ibxwpe
3763    
3764     An interesting field that is not printed by default is 'flags' which can be
3765     displayed as follows:
3766    
3767     - perf script -Fcomm,tid,pid,time,cpu,event,trace,ip,sym,dso,addr,symoff,flags
3768     + perf script --itrace=ibxwpe -F+flags
3769    
3770     The flags are "bcrosyiABEx" which stand for branch, call, return, conditional,
3771     system, asynchronous, interrupt, transaction abort, trace begin, trace end, and
3772     @@ -713,7 +713,7 @@ Having no option is the same as
3773    
3774     which, in turn, is the same as
3775    
3776     - --itrace=ibxwpe
3777     + --itrace=cepwx
3778    
3779     The letters are:
3780    
3781     diff --git a/tools/perf/util/auxtrace.c b/tools/perf/util/auxtrace.c
3782     index 66e82bd0683e..cfdbf65f1e02 100644
3783     --- a/tools/perf/util/auxtrace.c
3784     +++ b/tools/perf/util/auxtrace.c
3785     @@ -1001,7 +1001,8 @@ int itrace_parse_synth_opts(const struct option *opt, const char *str,
3786     }
3787    
3788     if (!str) {
3789     - itrace_synth_opts__set_default(synth_opts, false);
3790     + itrace_synth_opts__set_default(synth_opts,
3791     + synth_opts->default_no_sample);
3792     return 0;
3793     }
3794    
3795     diff --git a/tools/perf/util/header.c b/tools/perf/util/header.c
3796     index 847ae51a524b..fb0aa661644b 100644
3797     --- a/tools/perf/util/header.c
3798     +++ b/tools/perf/util/header.c
3799     @@ -3602,6 +3602,7 @@ int perf_event__synthesize_features(struct perf_tool *tool,
3800     return -ENOMEM;
3801    
3802     ff.size = sz - sz_hdr;
3803     + ff.ph = &session->header;
3804    
3805     for_each_set_bit(feat, header->adds_features, HEADER_FEAT_BITS) {
3806     if (!feat_ops[feat].synthesize) {
3807     diff --git a/tools/perf/util/intel-pt.c b/tools/perf/util/intel-pt.c
3808     index d6f1b2a03f9b..f7dd4657535d 100644
3809     --- a/tools/perf/util/intel-pt.c
3810     +++ b/tools/perf/util/intel-pt.c
3811     @@ -2579,7 +2579,8 @@ int intel_pt_process_auxtrace_info(union perf_event *event,
3812     } else {
3813     itrace_synth_opts__set_default(&pt->synth_opts,
3814     session->itrace_synth_opts->default_no_sample);
3815     - if (use_browser != -1) {
3816     + if (!session->itrace_synth_opts->default_no_sample &&
3817     + !session->itrace_synth_opts->inject) {
3818     pt->synth_opts.branches = false;
3819     pt->synth_opts.callchain = true;
3820     }
3821     diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
3822     index e0429f4ef335..faa8eb231e1b 100644
3823     --- a/tools/perf/util/pmu.c
3824     +++ b/tools/perf/util/pmu.c
3825     @@ -709,9 +709,7 @@ static void pmu_add_cpu_aliases(struct list_head *head, struct perf_pmu *pmu)
3826     {
3827     int i;
3828     struct pmu_events_map *map;
3829     - struct pmu_event *pe;
3830     const char *name = pmu->name;
3831     - const char *pname;
3832    
3833     map = perf_pmu__find_map(pmu);
3834     if (!map)
3835     @@ -722,28 +720,26 @@ static void pmu_add_cpu_aliases(struct list_head *head, struct perf_pmu *pmu)
3836     */
3837     i = 0;
3838     while (1) {
3839     + const char *cpu_name = is_arm_pmu_core(name) ? name : "cpu";
3840     + struct pmu_event *pe = &map->table[i++];
3841     + const char *pname = pe->pmu ? pe->pmu : cpu_name;
3842    
3843     - pe = &map->table[i++];
3844     if (!pe->name) {
3845     if (pe->metric_group || pe->metric_name)
3846     continue;
3847     break;
3848     }
3849    
3850     - if (!is_arm_pmu_core(name)) {
3851     - pname = pe->pmu ? pe->pmu : "cpu";
3852     -
3853     - /*
3854     - * uncore alias may be from different PMU
3855     - * with common prefix
3856     - */
3857     - if (pmu_is_uncore(name) &&
3858     - !strncmp(pname, name, strlen(pname)))
3859     - goto new_alias;
3860     + /*
3861     + * uncore alias may be from different PMU
3862     + * with common prefix
3863     + */
3864     + if (pmu_is_uncore(name) &&
3865     + !strncmp(pname, name, strlen(pname)))
3866     + goto new_alias;
3867    
3868     - if (strcmp(pname, name))
3869     - continue;
3870     - }
3871     + if (strcmp(pname, name))
3872     + continue;
3873    
3874     new_alias:
3875     /* need type casts to override 'const' */
3876     diff --git a/tools/perf/util/thread-stack.c b/tools/perf/util/thread-stack.c
3877     index 4ba9e866b076..60c9d955c4d7 100644
3878     --- a/tools/perf/util/thread-stack.c
3879     +++ b/tools/perf/util/thread-stack.c
3880     @@ -616,6 +616,23 @@ static int thread_stack__bottom(struct thread_stack *ts,
3881     true, false);
3882     }
3883    
3884     +static int thread_stack__pop_ks(struct thread *thread, struct thread_stack *ts,
3885     + struct perf_sample *sample, u64 ref)
3886     +{
3887     + u64 tm = sample->time;
3888     + int err;
3889     +
3890     + /* Return to userspace, so pop all kernel addresses */
3891     + while (thread_stack__in_kernel(ts)) {
3892     + err = thread_stack__call_return(thread, ts, --ts->cnt,
3893     + tm, ref, true);
3894     + if (err)
3895     + return err;
3896     + }
3897     +
3898     + return 0;
3899     +}
3900     +
3901     static int thread_stack__no_call_return(struct thread *thread,
3902     struct thread_stack *ts,
3903     struct perf_sample *sample,
3904     @@ -896,7 +913,18 @@ int thread_stack__process(struct thread *thread, struct comm *comm,
3905     ts->rstate = X86_RETPOLINE_DETECTED;
3906    
3907     } else if (sample->flags & PERF_IP_FLAG_RETURN) {
3908     - if (!sample->ip || !sample->addr)
3909     + if (!sample->addr) {
3910     + u32 return_from_kernel = PERF_IP_FLAG_SYSCALLRET |
3911     + PERF_IP_FLAG_INTERRUPT;
3912     +
3913     + if (!(sample->flags & return_from_kernel))
3914     + return 0;
3915     +
3916     + /* Pop kernel stack */
3917     + return thread_stack__pop_ks(thread, ts, sample, ref);
3918     + }
3919     +
3920     + if (!sample->ip)
3921     return 0;
3922    
3923     /* x86 retpoline 'return' doesn't match the stack */