Magellan Linux

Contents of /trunk/kernel-alx/patches-5.4/0283-5.4.184-all-fixes.patch

Revision 3637
Mon Oct 24 12:40:44 2022 UTC by niro
File size: 70457 bytes
Log message: -add missing

diff --git a/Documentation/admin-guide/hw-vuln/spectre.rst b/Documentation/admin-guide/hw-vuln/spectre.rst
index 985181dba0bac..6bd97cd50d625 100644
--- a/Documentation/admin-guide/hw-vuln/spectre.rst
+++ b/Documentation/admin-guide/hw-vuln/spectre.rst
@@ -60,8 +60,8 @@ privileged data touched during the speculative execution.
Spectre variant 1 attacks take advantage of speculative execution of
conditional branches, while Spectre variant 2 attacks use speculative
execution of indirect branches to leak privileged memory.
-See :ref:`[1] <spec_ref1>` :ref:`[5] <spec_ref5>` :ref:`[7] <spec_ref7>`
-:ref:`[10] <spec_ref10>` :ref:`[11] <spec_ref11>`.
+See :ref:`[1] <spec_ref1>` :ref:`[5] <spec_ref5>` :ref:`[6] <spec_ref6>`
+:ref:`[7] <spec_ref7>` :ref:`[10] <spec_ref10>` :ref:`[11] <spec_ref11>`.

Spectre variant 1 (Bounds Check Bypass)
---------------------------------------
@@ -131,6 +131,19 @@ steer its indirect branch speculations to gadget code, and measure the
speculative execution's side effects left in level 1 cache to infer the
victim's data.

+Yet another variant 2 attack vector is for the attacker to poison the
+Branch History Buffer (BHB) to speculatively steer an indirect branch
+to a specific Branch Target Buffer (BTB) entry, even if the entry isn't
+associated with the source address of the indirect branch. Specifically,
+the BHB might be shared across privilege levels even in the presence of
+Enhanced IBRS.
+
+Currently the only known real-world BHB attack vector is via
+unprivileged eBPF. Therefore, it's highly recommended to not enable
+unprivileged eBPF, especially when eIBRS is used (without retpolines).
+For a full mitigation against BHB attacks, it's recommended to use
+retpolines (or eIBRS combined with retpolines).
+
Attack scenarios
----------------

@@ -364,13 +377,15 @@ The possible values in this file are:

- Kernel status:

- ==================================== =================================
- 'Not affected' The processor is not vulnerable
- 'Vulnerable' Vulnerable, no mitigation
- 'Mitigation: Full generic retpoline' Software-focused mitigation
- 'Mitigation: Full AMD retpoline' AMD-specific software mitigation
- 'Mitigation: Enhanced IBRS' Hardware-focused mitigation
- ==================================== =================================
+ ======================================== =================================
+ 'Not affected' The processor is not vulnerable
+ 'Mitigation: None' Vulnerable, no mitigation
+ 'Mitigation: Retpolines' Use Retpoline thunks
+ 'Mitigation: LFENCE' Use LFENCE instructions
+ 'Mitigation: Enhanced IBRS' Hardware-focused mitigation
+ 'Mitigation: Enhanced IBRS + Retpolines' Hardware-focused + Retpolines
+ 'Mitigation: Enhanced IBRS + LFENCE' Hardware-focused + LFENCE
+ ======================================== =================================

- Firmware status: Show if Indirect Branch Restricted Speculation (IBRS) is
used to protect against Spectre variant 2 attacks when calling firmware (x86 only).
@@ -584,12 +599,13 @@ kernel command line.

Specific mitigations can also be selected manually:

- retpoline
- replace indirect branches
- retpoline,generic
- google's original retpoline
- retpoline,amd
- AMD-specific minimal thunk
+ retpoline auto pick between generic,lfence
+ retpoline,generic Retpolines
+ retpoline,lfence LFENCE; indirect branch
+ retpoline,amd alias for retpoline,lfence
+ eibrs enhanced IBRS
+ eibrs,retpoline enhanced IBRS + Retpolines
+ eibrs,lfence enhanced IBRS + LFENCE

Not specifying this option is equivalent to
spectre_v2=auto.
@@ -730,7 +746,7 @@ AMD white papers:

.. _spec_ref6:

-[6] `Software techniques for managing speculation on AMD processors <https://developer.amd.com/wp-content/resources/90343-B_SoftwareTechniquesforManagingSpeculation_WP_7-18Update_FNL.pdf>`_.
+[6] `Software techniques for managing speculation on AMD processors <https://developer.amd.com/wp-content/resources/Managing-Speculation-on-AMD-Processors.pdf>`_.

ARM white papers:

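The documentation hunk above introduces the BHB attack variant and replaces the old sysfs mitigation strings. To see which of the new strings a running kernel actually reports, the vulnerability files under sysfs can be read directly; below is a minimal user-space sketch in C (assuming the standard /sys/devices/system/cpu/vulnerabilities/ layout; the show() helper is illustrative, not part of any kernel API):

    #include <stdio.h>

    /* Print the mitigation string the kernel reports for one
     * vulnerability file, e.g. "Mitigation: Enhanced IBRS + Retpolines". */
    static void show(const char *vuln)
    {
        char path[256], line[256];
        FILE *f;

        snprintf(path, sizeof(path),
                 "/sys/devices/system/cpu/vulnerabilities/%s", vuln);
        f = fopen(path, "r");
        if (!f) {
            perror(path);
            return;
        }
        if (fgets(line, sizeof(line), f))
            printf("%-10s %s", vuln, line);
        fclose(f);
    }

    int main(void)
    {
        show("spectre_v1");
        show("spectre_v2");
        return 0;
    }
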
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 165abcb656c5b..979423e1b639f 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -4493,8 +4493,12 @@
Specific mitigations can also be selected manually:

retpoline - replace indirect branches
- retpoline,generic - google's original retpoline
- retpoline,amd - AMD-specific minimal thunk
+ retpoline,generic - Retpolines
+ retpoline,lfence - LFENCE; indirect branch
+ retpoline,amd - alias for retpoline,lfence
+ eibrs - enhanced IBRS
+ eibrs,retpoline - enhanced IBRS + Retpolines
+ eibrs,lfence - enhanced IBRS + LFENCE

Not specifying this option is equivalent to
spectre_v2=auto.
diff --git a/Makefile b/Makefile
index a94b5ea499e13..e914e1a8a7d2c 100644
--- a/Makefile
+++ b/Makefile
@@ -1,7 +1,7 @@
# SPDX-License-Identifier: GPL-2.0
VERSION = 5
PATCHLEVEL = 4
-SUBLEVEL = 183
+SUBLEVEL = 184
EXTRAVERSION =
NAME = Kleptomaniac Octopus

diff --git a/arch/arm/include/asm/assembler.h b/arch/arm/include/asm/assembler.h
index 3546d294d55fa..6b3e64e19fb6f 100644
--- a/arch/arm/include/asm/assembler.h
+++ b/arch/arm/include/asm/assembler.h
@@ -107,6 +107,16 @@
.endm
#endif

+#if __LINUX_ARM_ARCH__ < 7
+ .macro dsb, args
+ mcr p15, 0, r0, c7, c10, 4
+ .endm
+
+ .macro isb, args
+ mcr p15, 0, r0, c7, c5, 4
+ .endm
+#endif
+
.macro asm_trace_hardirqs_off, save=1
#if defined(CONFIG_TRACE_IRQFLAGS)
.if \save
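
The assembler.h hunk supplies dsb/isb macros for pre-ARMv7 builds by falling back to the CP15 c7 barrier operations, so the BHB workaround code added below can use the mnemonics unconditionally. For reference, the same encodings expressed as C inline assembly (a sketch for an ARMv6 target; the source register value is ignored by these operations):

    /* CP15 barrier fallbacks used before ARMv7's native dsb/isb
     * instructions; the mcr encodings mirror the macros above. */
    static inline void v6_dsb(void)
    {
        /* Data Synchronization Barrier: CP15 c7, c10, 4 */
        asm volatile("mcr p15, 0, %0, c7, c10, 4" : : "r" (0) : "memory");
    }

    static inline void v6_isb(void)
    {
        /* Instruction Synchronization Barrier (flush prefetch buffer):
         * CP15 c7, c5, 4 */
        asm volatile("mcr p15, 0, %0, c7, c5, 4" : : "r" (0) : "memory");
    }
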
diff --git a/arch/arm/include/asm/spectre.h b/arch/arm/include/asm/spectre.h
new file mode 100644
index 0000000000000..d1fa5607d3aa3
--- /dev/null
+++ b/arch/arm/include/asm/spectre.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+#ifndef __ASM_SPECTRE_H
+#define __ASM_SPECTRE_H
+
+enum {
+ SPECTRE_UNAFFECTED,
+ SPECTRE_MITIGATED,
+ SPECTRE_VULNERABLE,
+};
+
+enum {
+ __SPECTRE_V2_METHOD_BPIALL,
+ __SPECTRE_V2_METHOD_ICIALLU,
+ __SPECTRE_V2_METHOD_SMC,
+ __SPECTRE_V2_METHOD_HVC,
+ __SPECTRE_V2_METHOD_LOOP8,
+};
+
+enum {
+ SPECTRE_V2_METHOD_BPIALL = BIT(__SPECTRE_V2_METHOD_BPIALL),
+ SPECTRE_V2_METHOD_ICIALLU = BIT(__SPECTRE_V2_METHOD_ICIALLU),
+ SPECTRE_V2_METHOD_SMC = BIT(__SPECTRE_V2_METHOD_SMC),
+ SPECTRE_V2_METHOD_HVC = BIT(__SPECTRE_V2_METHOD_HVC),
+ SPECTRE_V2_METHOD_LOOP8 = BIT(__SPECTRE_V2_METHOD_LOOP8),
+};
+
+void spectre_v2_update_state(unsigned int state, unsigned int methods);
+
+int spectre_bhb_update_vectors(unsigned int method);
+
+#endif
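
Each mitigation method in this new header is a distinct bit (via BIT()) rather than a plain enumerator, so reports from different CPUs can be OR-ed together and a mixed big.LITTLE system becomes detectable as such. A small host-side sketch of that accumulation (plain C; the names are simplified stand-ins for the kernel's):

    #include <stdio.h>

    #define BIT(n) (1U << (n))

    enum { M_BPIALL = BIT(0), M_ICIALLU = BIT(1), M_LOOP8 = BIT(4) };

    static unsigned int methods;

    /* Mirrors spectre_v2_update_state(): each CPU ORs in its method. */
    static void update(unsigned int m) { methods |= m; }

    int main(void)
    {
        update(M_BPIALL);   /* e.g. a Cortex-A73 */
        update(M_LOOP8);    /* e.g. a Cortex-A72 in the same system */

        if (methods == M_BPIALL)
            puts("Branch predictor hardening");
        else
            puts("Multiple mitigations");  /* mixed big.LITTLE case */
        return 0;
    }
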
diff --git a/arch/arm/kernel/Makefile b/arch/arm/kernel/Makefile
index 8b679e2ca3c3d..dc31426cae6d8 100644
--- a/arch/arm/kernel/Makefile
+++ b/arch/arm/kernel/Makefile
@@ -106,4 +106,6 @@ endif

obj-$(CONFIG_HAVE_ARM_SMCCC) += smccc-call.o

+obj-$(CONFIG_GENERIC_CPU_VULNERABILITIES) += spectre.o
+
extra-y := $(head-y) vmlinux.lds
diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S
index 4937d514318ec..94d25425b7bce 100644
--- a/arch/arm/kernel/entry-armv.S
+++ b/arch/arm/kernel/entry-armv.S
@@ -1005,12 +1005,11 @@ vector_\name:
sub lr, lr, #\correction
.endif

- @
- @ Save r0, lr_<exception> (parent PC) and spsr_<exception>
- @ (parent CPSR)
- @
+ @ Save r0, lr_<exception> (parent PC)
stmia sp, {r0, lr} @ save r0, lr
- mrs lr, spsr
+
+ @ Save spsr_<exception> (parent CPSR)
+2: mrs lr, spsr
str lr, [sp, #8] @ save spsr

@
@@ -1031,6 +1030,44 @@ vector_\name:
movs pc, lr @ branch to handler in SVC mode
ENDPROC(vector_\name)

+#ifdef CONFIG_HARDEN_BRANCH_HISTORY
+ .subsection 1
+ .align 5
+vector_bhb_loop8_\name:
+ .if \correction
+ sub lr, lr, #\correction
+ .endif
+
+ @ Save r0, lr_<exception> (parent PC)
+ stmia sp, {r0, lr}
+
+ @ bhb workaround
+ mov r0, #8
+1: b . + 4
+ subs r0, r0, #1
+ bne 1b
+ dsb
+ isb
+ b 2b
+ENDPROC(vector_bhb_loop8_\name)
+
+vector_bhb_bpiall_\name:
+ .if \correction
+ sub lr, lr, #\correction
+ .endif
+
+ @ Save r0, lr_<exception> (parent PC)
+ stmia sp, {r0, lr}
+
+ @ bhb workaround
+ mcr p15, 0, r0, c7, c5, 6 @ BPIALL
+ @ isb not needed due to "movs pc, lr" in the vector stub
+ @ which gives a "context synchronisation".
+ b 2b
+ENDPROC(vector_bhb_bpiall_\name)
+ .previous
+#endif
+
.align 2
@ handler addresses follow this label
1:
@@ -1039,6 +1076,10 @@ ENDPROC(vector_\name)
.section .stubs, "ax", %progbits
@ This must be the first word
.word vector_swi
+#ifdef CONFIG_HARDEN_BRANCH_HISTORY
+ .word vector_bhb_loop8_swi
+ .word vector_bhb_bpiall_swi
+#endif

vector_rst:
ARM( swi SYS_ERROR0 )
@@ -1153,8 +1194,10 @@ vector_addrexcptn:
* FIQ "NMI" handler
*-----------------------------------------------------------------------------
* Handle a FIQ using the SVC stack allowing FIQ act like NMI on x86
- * systems.
+ * systems. This must be the last vector stub, so lets place it in its own
+ * subsection.
*/
+ .subsection 2
vector_stub fiq, FIQ_MODE, 4

.long __fiq_usr @ 0 (USR_26 / USR_32)
@@ -1187,6 +1230,30 @@ vector_addrexcptn:
W(b) vector_irq
W(b) vector_fiq

+#ifdef CONFIG_HARDEN_BRANCH_HISTORY
+ .section .vectors.bhb.loop8, "ax", %progbits
+.L__vectors_bhb_loop8_start:
+ W(b) vector_rst
+ W(b) vector_bhb_loop8_und
+ W(ldr) pc, .L__vectors_bhb_loop8_start + 0x1004
+ W(b) vector_bhb_loop8_pabt
+ W(b) vector_bhb_loop8_dabt
+ W(b) vector_addrexcptn
+ W(b) vector_bhb_loop8_irq
+ W(b) vector_bhb_loop8_fiq
+
+ .section .vectors.bhb.bpiall, "ax", %progbits
+.L__vectors_bhb_bpiall_start:
+ W(b) vector_rst
+ W(b) vector_bhb_bpiall_und
+ W(ldr) pc, .L__vectors_bhb_bpiall_start + 0x1008
+ W(b) vector_bhb_bpiall_pabt
+ W(b) vector_bhb_bpiall_dabt
+ W(b) vector_addrexcptn
+ W(b) vector_bhb_bpiall_irq
+ W(b) vector_bhb_bpiall_fiq
+#endif
+
.data
.align 2

diff --git a/arch/arm/kernel/entry-common.S b/arch/arm/kernel/entry-common.S
index 271cb8a1eba1e..bd619da73c84e 100644
--- a/arch/arm/kernel/entry-common.S
+++ b/arch/arm/kernel/entry-common.S
@@ -162,6 +162,29 @@ ENDPROC(ret_from_fork)
*-----------------------------------------------------------------------------
*/

+ .align 5
+#ifdef CONFIG_HARDEN_BRANCH_HISTORY
+ENTRY(vector_bhb_loop8_swi)
+ sub sp, sp, #PT_REGS_SIZE
+ stmia sp, {r0 - r12}
+ mov r8, #8
+1: b 2f
+2: subs r8, r8, #1
+ bne 1b
+ dsb
+ isb
+ b 3f
+ENDPROC(vector_bhb_loop8_swi)
+
+ .align 5
+ENTRY(vector_bhb_bpiall_swi)
+ sub sp, sp, #PT_REGS_SIZE
+ stmia sp, {r0 - r12}
+ mcr p15, 0, r8, c7, c5, 6 @ BPIALL
+ isb
+ b 3f
+ENDPROC(vector_bhb_bpiall_swi)
+#endif
.align 5
ENTRY(vector_swi)
#ifdef CONFIG_CPU_V7M
@@ -169,6 +192,7 @@ ENTRY(vector_swi)
#else
sub sp, sp, #PT_REGS_SIZE
stmia sp, {r0 - r12} @ Calling r0 - r12
+3:
ARM( add r8, sp, #S_PC )
ARM( stmdb r8, {sp, lr}^ ) @ Calling sp, lr
THUMB( mov r8, sp )
diff --git a/arch/arm/kernel/spectre.c b/arch/arm/kernel/spectre.c
new file mode 100644
index 0000000000000..0dcefc36fb7a0
--- /dev/null
+++ b/arch/arm/kernel/spectre.c
@@ -0,0 +1,71 @@
+// SPDX-License-Identifier: GPL-2.0-only
+#include <linux/bpf.h>
+#include <linux/cpu.h>
+#include <linux/device.h>
+
+#include <asm/spectre.h>
+
+static bool _unprivileged_ebpf_enabled(void)
+{
+#ifdef CONFIG_BPF_SYSCALL
+ return !sysctl_unprivileged_bpf_disabled;
+#else
+ return false;
+#endif
+}
+
+ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ return sprintf(buf, "Mitigation: __user pointer sanitization\n");
+}
+
+static unsigned int spectre_v2_state;
+static unsigned int spectre_v2_methods;
+
+void spectre_v2_update_state(unsigned int state, unsigned int method)
+{
+ if (state > spectre_v2_state)
+ spectre_v2_state = state;
+ spectre_v2_methods |= method;
+}
+
+ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ const char *method;
+
+ if (spectre_v2_state == SPECTRE_UNAFFECTED)
+ return sprintf(buf, "%s\n", "Not affected");
+
+ if (spectre_v2_state != SPECTRE_MITIGATED)
+ return sprintf(buf, "%s\n", "Vulnerable");
+
+ if (_unprivileged_ebpf_enabled())
+ return sprintf(buf, "Vulnerable: Unprivileged eBPF enabled\n");
+
+ switch (spectre_v2_methods) {
+ case SPECTRE_V2_METHOD_BPIALL:
+ method = "Branch predictor hardening";
+ break;
+
+ case SPECTRE_V2_METHOD_ICIALLU:
+ method = "I-cache invalidation";
+ break;
+
+ case SPECTRE_V2_METHOD_SMC:
+ case SPECTRE_V2_METHOD_HVC:
+ method = "Firmware call";
+ break;
+
+ case SPECTRE_V2_METHOD_LOOP8:
+ method = "History overwrite";
+ break;
+
+ default:
+ method = "Multiple mitigations";
+ break;
+ }
+
+ return sprintf(buf, "Mitigation: %s\n", method);
+}
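
spectre_v2_update_state() above relies on the enum order SPECTRE_UNAFFECTED < SPECTRE_MITIGATED < SPECTRE_VULNERABLE: keeping the maximum means a single unmitigated CPU downgrades the whole system's report. A compilable sketch of just that logic:

    #include <stdio.h>

    enum { SPECTRE_UNAFFECTED, SPECTRE_MITIGATED, SPECTRE_VULNERABLE };

    static unsigned int state = SPECTRE_UNAFFECTED;

    /* Mirrors the "if (state > spectre_v2_state)" test: the worst
     * state reported by any CPU wins. */
    static void update_state(unsigned int s)
    {
        if (s > state)
            state = s;
    }

    int main(void)
    {
        update_state(SPECTRE_MITIGATED);    /* CPU0 installed a workaround */
        update_state(SPECTRE_VULNERABLE);   /* CPU1 could not */
        puts(state == SPECTRE_VULNERABLE ? "Vulnerable" : "Mitigated");
        return 0;
    }
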
diff --git a/arch/arm/kernel/traps.c b/arch/arm/kernel/traps.c
index 97a512551b217..207ef9a797bd4 100644
--- a/arch/arm/kernel/traps.c
+++ b/arch/arm/kernel/traps.c
@@ -30,6 +30,7 @@
#include <linux/atomic.h>
#include <asm/cacheflush.h>
#include <asm/exception.h>
+#include <asm/spectre.h>
#include <asm/unistd.h>
#include <asm/traps.h>
#include <asm/ptrace.h>
@@ -799,10 +800,59 @@ static inline void __init kuser_init(void *vectors)
}
#endif

+#ifndef CONFIG_CPU_V7M
+static void copy_from_lma(void *vma, void *lma_start, void *lma_end)
+{
+ memcpy(vma, lma_start, lma_end - lma_start);
+}
+
+static void flush_vectors(void *vma, size_t offset, size_t size)
+{
+ unsigned long start = (unsigned long)vma + offset;
+ unsigned long end = start + size;
+
+ flush_icache_range(start, end);
+}
+
+#ifdef CONFIG_HARDEN_BRANCH_HISTORY
+int spectre_bhb_update_vectors(unsigned int method)
+{
+ extern char __vectors_bhb_bpiall_start[], __vectors_bhb_bpiall_end[];
+ extern char __vectors_bhb_loop8_start[], __vectors_bhb_loop8_end[];
+ void *vec_start, *vec_end;
+
+ if (system_state > SYSTEM_SCHEDULING) {
+ pr_err("CPU%u: Spectre BHB workaround too late - system vulnerable\n",
+ smp_processor_id());
+ return SPECTRE_VULNERABLE;
+ }
+
+ switch (method) {
+ case SPECTRE_V2_METHOD_LOOP8:
+ vec_start = __vectors_bhb_loop8_start;
+ vec_end = __vectors_bhb_loop8_end;
+ break;
+
+ case SPECTRE_V2_METHOD_BPIALL:
+ vec_start = __vectors_bhb_bpiall_start;
+ vec_end = __vectors_bhb_bpiall_end;
+ break;
+
+ default:
+ pr_err("CPU%u: unknown Spectre BHB state %d\n",
+ smp_processor_id(), method);
+ return SPECTRE_VULNERABLE;
+ }
+
+ copy_from_lma(vectors_page, vec_start, vec_end);
+ flush_vectors(vectors_page, 0, vec_end - vec_start);
+
+ return SPECTRE_MITIGATED;
+}
+#endif
+
void __init early_trap_init(void *vectors_base)
{
-#ifndef CONFIG_CPU_V7M
- unsigned long vectors = (unsigned long)vectors_base;
extern char __stubs_start[], __stubs_end[];
extern char __vectors_start[], __vectors_end[];
unsigned i;
@@ -823,17 +873,20 @@ void __init early_trap_init(void *vectors_base)
* into the vector page, mapped at 0xffff0000, and ensure these
* are visible to the instruction stream.
*/
- memcpy((void *)vectors, __vectors_start, __vectors_end - __vectors_start);
- memcpy((void *)vectors + 0x1000, __stubs_start, __stubs_end - __stubs_start);
+ copy_from_lma(vectors_base, __vectors_start, __vectors_end);
+ copy_from_lma(vectors_base + 0x1000, __stubs_start, __stubs_end);

kuser_init(vectors_base);

- flush_icache_range(vectors, vectors + PAGE_SIZE * 2);
+ flush_vectors(vectors_base, 0, PAGE_SIZE * 2);
+}
#else /* ifndef CONFIG_CPU_V7M */
+void __init early_trap_init(void *vectors_base)
+{
/*
* on V7-M there is no need to copy the vector table to a dedicated
* memory area. The address is configurable and so a table in the kernel
* image can be used.
*/
-#endif
}
+#endif
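
spectre_bhb_update_vectors() copies an alternative vector table over the live one and then flushes the instruction cache so the CPU fetches the new code. The same copy-then-flush pattern applies to any runtime code patching; a hedged user-space sketch using the GCC/Clang builtin (the caller is assumed to provide a destination that is both writable and executable):

    #include <string.h>

    /* Copy freshly written code into place and make it visible to the
     * instruction stream -- the same pattern as copy_from_lma()
     * followed by flush_vectors() above. */
    static void install_code(void *dst, const void *src, size_t len)
    {
        memcpy(dst, src, len);
        /* GCC/Clang builtin: on ARM this performs the D-cache clean
         * and I-cache invalidation required before executing the
         * newly written range. */
        __builtin___clear_cache((char *)dst, (char *)dst + len);
    }
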
diff --git a/arch/arm/kernel/vmlinux.lds.h b/arch/arm/kernel/vmlinux.lds.h
index 8247bc15addc4..78d156e4f0088 100644
--- a/arch/arm/kernel/vmlinux.lds.h
+++ b/arch/arm/kernel/vmlinux.lds.h
@@ -25,6 +25,19 @@
#define ARM_MMU_DISCARD(x) x
#endif

+/*
+ * ld.lld does not support NOCROSSREFS:
+ * https://github.com/ClangBuiltLinux/linux/issues/1609
+ */
+#ifdef CONFIG_LD_IS_LLD
+#define NOCROSSREFS
+#endif
+
+/* Set start/end symbol names to the LMA for the section */
+#define ARM_LMA(sym, section) \
+ sym##_start = LOADADDR(section); \
+ sym##_end = LOADADDR(section) + SIZEOF(section)
+
#define PROC_INFO \
. = ALIGN(4); \
__proc_info_begin = .; \
@@ -100,19 +113,31 @@
* only thing that matters is their relative offsets
*/
#define ARM_VECTORS \
- __vectors_start = .; \
- .vectors 0xffff0000 : AT(__vectors_start) { \
- *(.vectors) \
+ __vectors_lma = .; \
+ OVERLAY 0xffff0000 : NOCROSSREFS AT(__vectors_lma) { \
+ .vectors { \
+ *(.vectors) \
+ } \
+ .vectors.bhb.loop8 { \
+ *(.vectors.bhb.loop8) \
+ } \
+ .vectors.bhb.bpiall { \
+ *(.vectors.bhb.bpiall) \
+ } \
} \
- . = __vectors_start + SIZEOF(.vectors); \
- __vectors_end = .; \
+ ARM_LMA(__vectors, .vectors); \
+ ARM_LMA(__vectors_bhb_loop8, .vectors.bhb.loop8); \
+ ARM_LMA(__vectors_bhb_bpiall, .vectors.bhb.bpiall); \
+ . = __vectors_lma + SIZEOF(.vectors) + \
+ SIZEOF(.vectors.bhb.loop8) + \
+ SIZEOF(.vectors.bhb.bpiall); \
\
- __stubs_start = .; \
- .stubs ADDR(.vectors) + 0x1000 : AT(__stubs_start) { \
+ __stubs_lma = .; \
+ .stubs ADDR(.vectors) + 0x1000 : AT(__stubs_lma) { \
*(.stubs) \
} \
- . = __stubs_start + SIZEOF(.stubs); \
- __stubs_end = .; \
+ ARM_LMA(__stubs, .stubs); \
+ . = __stubs_lma + SIZEOF(.stubs); \
\
PROVIDE(vector_fiq_offset = vector_fiq - ADDR(.vectors));

diff --git a/arch/arm/mm/Kconfig b/arch/arm/mm/Kconfig
index 64cce0c8560ab..00ffee644372e 100644
--- a/arch/arm/mm/Kconfig
+++ b/arch/arm/mm/Kconfig
@@ -833,6 +833,7 @@ config CPU_BPREDICT_DISABLE

config CPU_SPECTRE
bool
+ select GENERIC_CPU_VULNERABILITIES

config HARDEN_BRANCH_PREDICTOR
bool "Harden the branch predictor against aliasing attacks" if EXPERT
@@ -853,6 +854,16 @@ config HARDEN_BRANCH_PREDICTOR

If unsure, say Y.

+config HARDEN_BRANCH_HISTORY
+ bool "Harden Spectre style attacks against branch history" if EXPERT
+ depends on CPU_SPECTRE
+ default y
+ help
+ Speculation attacks against some high-performance processors can
+ make use of branch history to influence future speculation. When
+ taking an exception, a sequence of branches overwrites the branch
+ history, or branch history is invalidated.
+
config TLS_REG_EMUL
bool
select NEED_KUSER_HELPERS
diff --git a/arch/arm/mm/proc-v7-bugs.c b/arch/arm/mm/proc-v7-bugs.c
index a6554fdb56c54..097ef85bb7f21 100644
--- a/arch/arm/mm/proc-v7-bugs.c
+++ b/arch/arm/mm/proc-v7-bugs.c
@@ -7,8 +7,35 @@
#include <asm/cp15.h>
#include <asm/cputype.h>
#include <asm/proc-fns.h>
+#include <asm/spectre.h>
#include <asm/system_misc.h>

+#ifdef CONFIG_ARM_PSCI
+static int __maybe_unused spectre_v2_get_cpu_fw_mitigation_state(void)
+{
+ struct arm_smccc_res res;
+
+ arm_smccc_1_1_invoke(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
+ ARM_SMCCC_ARCH_WORKAROUND_1, &res);
+
+ switch ((int)res.a0) {
+ case SMCCC_RET_SUCCESS:
+ return SPECTRE_MITIGATED;
+
+ case SMCCC_ARCH_WORKAROUND_RET_UNAFFECTED:
+ return SPECTRE_UNAFFECTED;
+
+ default:
+ return SPECTRE_VULNERABLE;
+ }
+}
+#else
+static int __maybe_unused spectre_v2_get_cpu_fw_mitigation_state(void)
+{
+ return SPECTRE_VULNERABLE;
+}
+#endif
+
#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
DEFINE_PER_CPU(harden_branch_predictor_fn_t, harden_branch_predictor_fn);

@@ -37,13 +64,61 @@ static void __maybe_unused call_hvc_arch_workaround_1(void)
arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL);
}

-static void cpu_v7_spectre_init(void)
+static unsigned int spectre_v2_install_workaround(unsigned int method)
{
const char *spectre_v2_method = NULL;
int cpu = smp_processor_id();

if (per_cpu(harden_branch_predictor_fn, cpu))
- return;
+ return SPECTRE_MITIGATED;
+
+ switch (method) {
+ case SPECTRE_V2_METHOD_BPIALL:
+ per_cpu(harden_branch_predictor_fn, cpu) =
+ harden_branch_predictor_bpiall;
+ spectre_v2_method = "BPIALL";
+ break;
+
+ case SPECTRE_V2_METHOD_ICIALLU:
+ per_cpu(harden_branch_predictor_fn, cpu) =
+ harden_branch_predictor_iciallu;
+ spectre_v2_method = "ICIALLU";
+ break;
+
+ case SPECTRE_V2_METHOD_HVC:
+ per_cpu(harden_branch_predictor_fn, cpu) =
+ call_hvc_arch_workaround_1;
+ cpu_do_switch_mm = cpu_v7_hvc_switch_mm;
+ spectre_v2_method = "hypervisor";
+ break;
+
+ case SPECTRE_V2_METHOD_SMC:
+ per_cpu(harden_branch_predictor_fn, cpu) =
+ call_smc_arch_workaround_1;
+ cpu_do_switch_mm = cpu_v7_smc_switch_mm;
+ spectre_v2_method = "firmware";
+ break;
+ }
+
+ if (spectre_v2_method)
+ pr_info("CPU%u: Spectre v2: using %s workaround\n",
+ smp_processor_id(), spectre_v2_method);
+
+ return SPECTRE_MITIGATED;
+}
+#else
+static unsigned int spectre_v2_install_workaround(unsigned int method)
+{
+ pr_info("CPU%u: Spectre V2: workarounds disabled by configuration\n",
+ smp_processor_id());
+
+ return SPECTRE_VULNERABLE;
+}
+#endif
+
+static void cpu_v7_spectre_v2_init(void)
+{
+ unsigned int state, method = 0;

switch (read_cpuid_part()) {
case ARM_CPU_PART_CORTEX_A8:
@@ -52,32 +127,37 @@ static void cpu_v7_spectre_init(void)
case ARM_CPU_PART_CORTEX_A17:
case ARM_CPU_PART_CORTEX_A73:
case ARM_CPU_PART_CORTEX_A75:
- per_cpu(harden_branch_predictor_fn, cpu) =
- harden_branch_predictor_bpiall;
- spectre_v2_method = "BPIALL";
+ state = SPECTRE_MITIGATED;
+ method = SPECTRE_V2_METHOD_BPIALL;
break;

case ARM_CPU_PART_CORTEX_A15:
case ARM_CPU_PART_BRAHMA_B15:
- per_cpu(harden_branch_predictor_fn, cpu) =
- harden_branch_predictor_iciallu;
- spectre_v2_method = "ICIALLU";
+ state = SPECTRE_MITIGATED;
+ method = SPECTRE_V2_METHOD_ICIALLU;
break;

-#ifdef CONFIG_ARM_PSCI
case ARM_CPU_PART_BRAHMA_B53:
/* Requires no workaround */
+ state = SPECTRE_UNAFFECTED;
break;
+
default:
/* Other ARM CPUs require no workaround */
- if (read_cpuid_implementor() == ARM_CPU_IMP_ARM)
+ if (read_cpuid_implementor() == ARM_CPU_IMP_ARM) {
+ state = SPECTRE_UNAFFECTED;
break;
+ }
/* fallthrough */
- /* Cortex A57/A72 require firmware workaround */
+ /* Cortex A57/A72 require firmware workaround */
case ARM_CPU_PART_CORTEX_A57:
case ARM_CPU_PART_CORTEX_A72: {
struct arm_smccc_res res;

+ state = spectre_v2_get_cpu_fw_mitigation_state();
+ if (state != SPECTRE_MITIGATED)
+ break;
+
if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
break;

@@ -87,10 +167,7 @@ static void cpu_v7_spectre_init(void)
ARM_SMCCC_ARCH_WORKAROUND_1, &res);
if ((int)res.a0 != 0)
break;
- per_cpu(harden_branch_predictor_fn, cpu) =
- call_hvc_arch_workaround_1;
- cpu_do_switch_mm = cpu_v7_hvc_switch_mm;
- spectre_v2_method = "hypervisor";
+ method = SPECTRE_V2_METHOD_HVC;
break;

case PSCI_CONDUIT_SMC:
@@ -98,29 +175,97 @@ static void cpu_v7_spectre_init(void)
ARM_SMCCC_ARCH_WORKAROUND_1, &res);
if ((int)res.a0 != 0)
break;
- per_cpu(harden_branch_predictor_fn, cpu) =
- call_smc_arch_workaround_1;
- cpu_do_switch_mm = cpu_v7_smc_switch_mm;
- spectre_v2_method = "firmware";
+ method = SPECTRE_V2_METHOD_SMC;
break;

default:
+ state = SPECTRE_VULNERABLE;
break;
}
}
-#endif
}

- if (spectre_v2_method)
- pr_info("CPU%u: Spectre v2: using %s workaround\n",
- smp_processor_id(), spectre_v2_method);
+ if (state == SPECTRE_MITIGATED)
+ state = spectre_v2_install_workaround(method);
+
+ spectre_v2_update_state(state, method);
+}
+
+#ifdef CONFIG_HARDEN_BRANCH_HISTORY
+static int spectre_bhb_method;
+
+static const char *spectre_bhb_method_name(int method)
+{
+ switch (method) {
+ case SPECTRE_V2_METHOD_LOOP8:
+ return "loop";
+
+ case SPECTRE_V2_METHOD_BPIALL:
+ return "BPIALL";
+
+ default:
+ return "unknown";
+ }
+}
+
+static int spectre_bhb_install_workaround(int method)
+{
+ if (spectre_bhb_method != method) {
+ if (spectre_bhb_method) {
+ pr_err("CPU%u: Spectre BHB: method disagreement, system vulnerable\n",
+ smp_processor_id());
+
+ return SPECTRE_VULNERABLE;
+ }
+
+ if (spectre_bhb_update_vectors(method) == SPECTRE_VULNERABLE)
+ return SPECTRE_VULNERABLE;
+
+ spectre_bhb_method = method;
+ }
+
+ pr_info("CPU%u: Spectre BHB: using %s workaround\n",
+ smp_processor_id(), spectre_bhb_method_name(method));
+
+ return SPECTRE_MITIGATED;
}
#else
-static void cpu_v7_spectre_init(void)
+static int spectre_bhb_install_workaround(int method)
{
+ return SPECTRE_VULNERABLE;
}
#endif

+static void cpu_v7_spectre_bhb_init(void)
+{
+ unsigned int state, method = 0;
+
+ switch (read_cpuid_part()) {
+ case ARM_CPU_PART_CORTEX_A15:
+ case ARM_CPU_PART_BRAHMA_B15:
+ case ARM_CPU_PART_CORTEX_A57:
+ case ARM_CPU_PART_CORTEX_A72:
+ state = SPECTRE_MITIGATED;
+ method = SPECTRE_V2_METHOD_LOOP8;
+ break;
+
+ case ARM_CPU_PART_CORTEX_A73:
+ case ARM_CPU_PART_CORTEX_A75:
+ state = SPECTRE_MITIGATED;
+ method = SPECTRE_V2_METHOD_BPIALL;
+ break;
+
+ default:
+ state = SPECTRE_UNAFFECTED;
+ break;
+ }
+
+ if (state == SPECTRE_MITIGATED)
+ state = spectre_bhb_install_workaround(method);
+
+ spectre_v2_update_state(state, method);
+}
+
static __maybe_unused bool cpu_v7_check_auxcr_set(bool *warned,
u32 mask, const char *msg)
{
@@ -149,16 +294,17 @@ static bool check_spectre_auxcr(bool *warned, u32 bit)
void cpu_v7_ca8_ibe(void)
{
if (check_spectre_auxcr(this_cpu_ptr(&spectre_warned), BIT(6)))
- cpu_v7_spectre_init();
+ cpu_v7_spectre_v2_init();
}

void cpu_v7_ca15_ibe(void)
{
if (check_spectre_auxcr(this_cpu_ptr(&spectre_warned), BIT(0)))
- cpu_v7_spectre_init();
+ cpu_v7_spectre_v2_init();
}

void cpu_v7_bugs_init(void)
{
- cpu_v7_spectre_init();
+ cpu_v7_spectre_v2_init();
+ cpu_v7_spectre_bhb_init();
}
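
spectre_bhb_install_workaround() above lets the first CPU that boots choose the vector-table flavour and forces every later CPU to agree, since all CPUs share the single vector page; a disagreement marks the system vulnerable. The idiom in miniature (plain C sketch, single-threaded for clarity):

    #include <stdio.h>

    enum { VULNERABLE, MITIGATED };

    static int chosen_method;   /* 0 == not decided yet */

    /* Mirrors spectre_bhb_install_workaround(): the first CPU fixes
     * the method; a later CPU proposing a different one is a
     * disagreement, because all CPUs share one vector page. */
    static int install(int cpu, int method)
    {
        if (chosen_method && chosen_method != method) {
            printf("CPU%d: method disagreement, system vulnerable\n", cpu);
            return VULNERABLE;
        }
        chosen_method = method;
        printf("CPU%d: using method %d\n", cpu, method);
        return MITIGATED;
    }

    int main(void)
    {
        install(0, 1);   /* say, the loop8 workaround */
        install(1, 2);   /* a mismatched core proposing BPIALL */
        return 0;
    }
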
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index d912457f56a79..f48905f796e9d 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -202,7 +202,7 @@
#define X86_FEATURE_SME ( 7*32+10) /* AMD Secure Memory Encryption */
#define X86_FEATURE_PTI ( 7*32+11) /* Kernel Page Table Isolation enabled */
#define X86_FEATURE_RETPOLINE ( 7*32+12) /* "" Generic Retpoline mitigation for Spectre variant 2 */
-#define X86_FEATURE_RETPOLINE_AMD ( 7*32+13) /* "" AMD Retpoline mitigation for Spectre variant 2 */
+#define X86_FEATURE_RETPOLINE_LFENCE ( 7*32+13) /* "" Use LFENCE for Spectre variant 2 */
#define X86_FEATURE_INTEL_PPIN ( 7*32+14) /* Intel Processor Inventory Number */
#define X86_FEATURE_CDP_L2 ( 7*32+15) /* Code and Data Prioritization L2 */
#define X86_FEATURE_MSR_SPEC_CTRL ( 7*32+16) /* "" MSR SPEC_CTRL is implemented */
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index b222a35959467..956df82bbc2bc 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -115,7 +115,7 @@
ANNOTATE_NOSPEC_ALTERNATIVE
ALTERNATIVE_2 __stringify(ANNOTATE_RETPOLINE_SAFE; jmp *\reg), \
__stringify(RETPOLINE_JMP \reg), X86_FEATURE_RETPOLINE, \
- __stringify(lfence; ANNOTATE_RETPOLINE_SAFE; jmp *\reg), X86_FEATURE_RETPOLINE_AMD
+ __stringify(lfence; ANNOTATE_RETPOLINE_SAFE; jmp *\reg), X86_FEATURE_RETPOLINE_LFENCE
#else
jmp *\reg
#endif
@@ -126,7 +126,7 @@
ANNOTATE_NOSPEC_ALTERNATIVE
ALTERNATIVE_2 __stringify(ANNOTATE_RETPOLINE_SAFE; call *\reg), \
__stringify(RETPOLINE_CALL \reg), X86_FEATURE_RETPOLINE,\
- __stringify(lfence; ANNOTATE_RETPOLINE_SAFE; call *\reg), X86_FEATURE_RETPOLINE_AMD
+ __stringify(lfence; ANNOTATE_RETPOLINE_SAFE; call *\reg), X86_FEATURE_RETPOLINE_LFENCE
#else
call *\reg
#endif
@@ -171,7 +171,7 @@
"lfence;\n" \
ANNOTATE_RETPOLINE_SAFE \
"call *%[thunk_target]\n", \
- X86_FEATURE_RETPOLINE_AMD)
+ X86_FEATURE_RETPOLINE_LFENCE)
# define THUNK_TARGET(addr) [thunk_target] "r" (addr)

#else /* CONFIG_X86_32 */
@@ -201,7 +201,7 @@
"lfence;\n" \
ANNOTATE_RETPOLINE_SAFE \
"call *%[thunk_target]\n", \
- X86_FEATURE_RETPOLINE_AMD)
+ X86_FEATURE_RETPOLINE_LFENCE)

# define THUNK_TARGET(addr) [thunk_target] "rm" (addr)
#endif
@@ -213,9 +213,11 @@
/* The Spectre V2 mitigation variants */
enum spectre_v2_mitigation {
SPECTRE_V2_NONE,
- SPECTRE_V2_RETPOLINE_GENERIC,
- SPECTRE_V2_RETPOLINE_AMD,
- SPECTRE_V2_IBRS_ENHANCED,
+ SPECTRE_V2_RETPOLINE,
+ SPECTRE_V2_LFENCE,
+ SPECTRE_V2_EIBRS,
+ SPECTRE_V2_EIBRS_RETPOLINE,
+ SPECTRE_V2_EIBRS_LFENCE,
};

/* The indirect branch speculation control variants */
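
The renamed X86_FEATURE_RETPOLINE_LFENCE bit selects an LFENCE-prefixed indirect branch, while X86_FEATURE_RETPOLINE selects the full retpoline thunk. For orientation, the generic retpoline construct parks speculation in a pause/lfence loop while the architectural path overwrites the return address and returns to the real target. An illustrative stand-alone x86-64 sketch (not the kernel's macros; the actual thunks live in arch/x86/lib/retpoline.S):

    /* Jumping here has the effect of "jmp *%rax", but the indirect
     * branch is replaced by a return: speculation is captured in the
     * loop at 2:, and the architectural path rewrites the return
     * address at 1: so the ret lands on the intended target. */
    void indirect_thunk_rax(void);

    asm(".text\n"
        ".globl indirect_thunk_rax\n"
        "indirect_thunk_rax:\n"
        "    call 1f\n"
        "2:  pause\n"
        "    lfence\n"
        "    jmp 2b\n"
        "1:  mov %rax, (%rsp)\n"
        "    ret\n");
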
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index fcc4238ee95f8..e817aaeef254c 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -31,6 +31,7 @@
#include <asm/intel-family.h>
#include <asm/e820/api.h>
#include <asm/hypervisor.h>
+#include <linux/bpf.h>

#include "cpu.h"

@@ -607,6 +608,32 @@ static inline const char *spectre_v2_module_string(void)
static inline const char *spectre_v2_module_string(void) { return ""; }
#endif

+#define SPECTRE_V2_LFENCE_MSG "WARNING: LFENCE mitigation is not recommended for this CPU, data leaks possible!\n"
+#define SPECTRE_V2_EIBRS_EBPF_MSG "WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!\n"
+#define SPECTRE_V2_EIBRS_LFENCE_EBPF_SMT_MSG "WARNING: Unprivileged eBPF is enabled with eIBRS+LFENCE mitigation and SMT, data leaks possible via Spectre v2 BHB attacks!\n"
+
+#ifdef CONFIG_BPF_SYSCALL
+void unpriv_ebpf_notify(int new_state)
+{
+ if (new_state)
+ return;
+
+ /* Unprivileged eBPF is enabled */
+
+ switch (spectre_v2_enabled) {
+ case SPECTRE_V2_EIBRS:
+ pr_err(SPECTRE_V2_EIBRS_EBPF_MSG);
+ break;
+ case SPECTRE_V2_EIBRS_LFENCE:
+ if (sched_smt_active())
+ pr_err(SPECTRE_V2_EIBRS_LFENCE_EBPF_SMT_MSG);
+ break;
+ default:
+ break;
+ }
+}
+#endif
+
static inline bool match_option(const char *arg, int arglen, const char *opt)
{
int len = strlen(opt);
@@ -621,7 +648,10 @@ enum spectre_v2_mitigation_cmd {
SPECTRE_V2_CMD_FORCE,
SPECTRE_V2_CMD_RETPOLINE,
SPECTRE_V2_CMD_RETPOLINE_GENERIC,
- SPECTRE_V2_CMD_RETPOLINE_AMD,
+ SPECTRE_V2_CMD_RETPOLINE_LFENCE,
+ SPECTRE_V2_CMD_EIBRS,
+ SPECTRE_V2_CMD_EIBRS_RETPOLINE,
+ SPECTRE_V2_CMD_EIBRS_LFENCE,
};

enum spectre_v2_user_cmd {
@@ -694,6 +724,13 @@ spectre_v2_parse_user_cmdline(enum spectre_v2_mitigation_cmd v2_cmd)
return SPECTRE_V2_USER_CMD_AUTO;
}

+static inline bool spectre_v2_in_eibrs_mode(enum spectre_v2_mitigation mode)
+{
+ return (mode == SPECTRE_V2_EIBRS ||
+ mode == SPECTRE_V2_EIBRS_RETPOLINE ||
+ mode == SPECTRE_V2_EIBRS_LFENCE);
+}
+
static void __init
spectre_v2_user_select_mitigation(enum spectre_v2_mitigation_cmd v2_cmd)
{
@@ -756,10 +793,12 @@ spectre_v2_user_select_mitigation(enum spectre_v2_mitigation_cmd v2_cmd)
}

/*
- * If enhanced IBRS is enabled or SMT impossible, STIBP is not
+ * If no STIBP, enhanced IBRS is enabled or SMT impossible, STIBP is not
* required.
*/
- if (!smt_possible || spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
+ if (!boot_cpu_has(X86_FEATURE_STIBP) ||
+ !smt_possible ||
+ spectre_v2_in_eibrs_mode(spectre_v2_enabled))
return;

/*
@@ -771,12 +810,6 @@ spectre_v2_user_select_mitigation(enum spectre_v2_mitigation_cmd v2_cmd)
boot_cpu_has(X86_FEATURE_AMD_STIBP_ALWAYS_ON))
mode = SPECTRE_V2_USER_STRICT_PREFERRED;

- /*
- * If STIBP is not available, clear the STIBP mode.
- */
- if (!boot_cpu_has(X86_FEATURE_STIBP))
- mode = SPECTRE_V2_USER_NONE;
-
spectre_v2_user_stibp = mode;

set_mode:
@@ -785,9 +818,11 @@ set_mode:

static const char * const spectre_v2_strings[] = {
[SPECTRE_V2_NONE] = "Vulnerable",
- [SPECTRE_V2_RETPOLINE_GENERIC] = "Mitigation: Full generic retpoline",
- [SPECTRE_V2_RETPOLINE_AMD] = "Mitigation: Full AMD retpoline",
- [SPECTRE_V2_IBRS_ENHANCED] = "Mitigation: Enhanced IBRS",
+ [SPECTRE_V2_RETPOLINE] = "Mitigation: Retpolines",
+ [SPECTRE_V2_LFENCE] = "Mitigation: LFENCE",
+ [SPECTRE_V2_EIBRS] = "Mitigation: Enhanced IBRS",
+ [SPECTRE_V2_EIBRS_LFENCE] = "Mitigation: Enhanced IBRS + LFENCE",
+ [SPECTRE_V2_EIBRS_RETPOLINE] = "Mitigation: Enhanced IBRS + Retpolines",
};

static const struct {
@@ -798,8 +833,12 @@ static const struct {
{ "off", SPECTRE_V2_CMD_NONE, false },
{ "on", SPECTRE_V2_CMD_FORCE, true },
{ "retpoline", SPECTRE_V2_CMD_RETPOLINE, false },
- { "retpoline,amd", SPECTRE_V2_CMD_RETPOLINE_AMD, false },
+ { "retpoline,amd", SPECTRE_V2_CMD_RETPOLINE_LFENCE, false },
+ { "retpoline,lfence", SPECTRE_V2_CMD_RETPOLINE_LFENCE, false },
{ "retpoline,generic", SPECTRE_V2_CMD_RETPOLINE_GENERIC, false },
+ { "eibrs", SPECTRE_V2_CMD_EIBRS, false },
+ { "eibrs,lfence", SPECTRE_V2_CMD_EIBRS_LFENCE, false },
+ { "eibrs,retpoline", SPECTRE_V2_CMD_EIBRS_RETPOLINE, false },
{ "auto", SPECTRE_V2_CMD_AUTO, false },
};

@@ -836,17 +875,30 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
}

if ((cmd == SPECTRE_V2_CMD_RETPOLINE ||
- cmd == SPECTRE_V2_CMD_RETPOLINE_AMD ||
- cmd == SPECTRE_V2_CMD_RETPOLINE_GENERIC) &&
+ cmd == SPECTRE_V2_CMD_RETPOLINE_LFENCE ||
+ cmd == SPECTRE_V2_CMD_RETPOLINE_GENERIC ||
+ cmd == SPECTRE_V2_CMD_EIBRS_LFENCE ||
+ cmd == SPECTRE_V2_CMD_EIBRS_RETPOLINE) &&
!IS_ENABLED(CONFIG_RETPOLINE)) {
- pr_err("%s selected but not compiled in. Switching to AUTO select\n", mitigation_options[i].option);
+ pr_err("%s selected but not compiled in. Switching to AUTO select\n",
+ mitigation_options[i].option);
+ return SPECTRE_V2_CMD_AUTO;
+ }
+
+ if ((cmd == SPECTRE_V2_CMD_EIBRS ||
+ cmd == SPECTRE_V2_CMD_EIBRS_LFENCE ||
+ cmd == SPECTRE_V2_CMD_EIBRS_RETPOLINE) &&
+ !boot_cpu_has(X86_FEATURE_IBRS_ENHANCED)) {
+ pr_err("%s selected but CPU doesn't have eIBRS. Switching to AUTO select\n",
+ mitigation_options[i].option);
return SPECTRE_V2_CMD_AUTO;
}

- if (cmd == SPECTRE_V2_CMD_RETPOLINE_AMD &&
- boot_cpu_data.x86_vendor != X86_VENDOR_HYGON &&
- boot_cpu_data.x86_vendor != X86_VENDOR_AMD) {
- pr_err("retpoline,amd selected but CPU is not AMD. Switching to AUTO select\n");
+ if ((cmd == SPECTRE_V2_CMD_RETPOLINE_LFENCE ||
+ cmd == SPECTRE_V2_CMD_EIBRS_LFENCE) &&
+ !boot_cpu_has(X86_FEATURE_LFENCE_RDTSC)) {
+ pr_err("%s selected, but CPU doesn't have a serializing LFENCE. Switching to AUTO select\n",
+ mitigation_options[i].option);
return SPECTRE_V2_CMD_AUTO;
}

@@ -855,6 +907,16 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
return cmd;
}

+static enum spectre_v2_mitigation __init spectre_v2_select_retpoline(void)
+{
+ if (!IS_ENABLED(CONFIG_RETPOLINE)) {
+ pr_err("Kernel not compiled with retpoline; no mitigation available!");
+ return SPECTRE_V2_NONE;
+ }
+
+ return SPECTRE_V2_RETPOLINE;
+}
+
static void __init spectre_v2_select_mitigation(void)
{
enum spectre_v2_mitigation_cmd cmd = spectre_v2_parse_cmdline();
@@ -875,49 +937,64 @@ static void __init spectre_v2_select_mitigation(void)
case SPECTRE_V2_CMD_FORCE:
case SPECTRE_V2_CMD_AUTO:
if (boot_cpu_has(X86_FEATURE_IBRS_ENHANCED)) {
- mode = SPECTRE_V2_IBRS_ENHANCED;
- /* Force it so VMEXIT will restore correctly */
- x86_spec_ctrl_base |= SPEC_CTRL_IBRS;
- wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
- goto specv2_set_mode;
+ mode = SPECTRE_V2_EIBRS;
+ break;
}
- if (IS_ENABLED(CONFIG_RETPOLINE))
- goto retpoline_auto;
+
+ mode = spectre_v2_select_retpoline();
break;
- case SPECTRE_V2_CMD_RETPOLINE_AMD:
- if (IS_ENABLED(CONFIG_RETPOLINE))
- goto retpoline_amd;
+
+ case SPECTRE_V2_CMD_RETPOLINE_LFENCE:
+ pr_err(SPECTRE_V2_LFENCE_MSG);
+ mode = SPECTRE_V2_LFENCE;
break;
+
case SPECTRE_V2_CMD_RETPOLINE_GENERIC:
- if (IS_ENABLED(CONFIG_RETPOLINE))
- goto retpoline_generic;
+ mode = SPECTRE_V2_RETPOLINE;
break;
+
case SPECTRE_V2_CMD_RETPOLINE:
- if (IS_ENABLED(CONFIG_RETPOLINE))
- goto retpoline_auto;
+ mode = spectre_v2_select_retpoline();
+ break;
+
+ case SPECTRE_V2_CMD_EIBRS:
+ mode = SPECTRE_V2_EIBRS;
+ break;
+
+ case SPECTRE_V2_CMD_EIBRS_LFENCE:
+ mode = SPECTRE_V2_EIBRS_LFENCE;
+ break;
+
+ case SPECTRE_V2_CMD_EIBRS_RETPOLINE:
+ mode = SPECTRE_V2_EIBRS_RETPOLINE;
break;
}
- pr_err("Spectre mitigation: kernel not compiled with retpoline; no mitigation available!");
- return;

-retpoline_auto:
- if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
- boot_cpu_data.x86_vendor == X86_VENDOR_HYGON) {
- retpoline_amd:
- if (!boot_cpu_has(X86_FEATURE_LFENCE_RDTSC)) {
- pr_err("Spectre mitigation: LFENCE not serializing, switching to generic retpoline\n");
- goto retpoline_generic;
- }
- mode = SPECTRE_V2_RETPOLINE_AMD;
- setup_force_cpu_cap(X86_FEATURE_RETPOLINE_AMD);
- setup_force_cpu_cap(X86_FEATURE_RETPOLINE);
- } else {
- retpoline_generic:
- mode = SPECTRE_V2_RETPOLINE_GENERIC;
+ if (mode == SPECTRE_V2_EIBRS && unprivileged_ebpf_enabled())
+ pr_err(SPECTRE_V2_EIBRS_EBPF_MSG);
+
+ if (spectre_v2_in_eibrs_mode(mode)) {
+ /* Force it so VMEXIT will restore correctly */
+ x86_spec_ctrl_base |= SPEC_CTRL_IBRS;
+ wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
+ }
+
+ switch (mode) {
+ case SPECTRE_V2_NONE:
+ case SPECTRE_V2_EIBRS:
+ break;
+
+ case SPECTRE_V2_LFENCE:
+ case SPECTRE_V2_EIBRS_LFENCE:
+ setup_force_cpu_cap(X86_FEATURE_RETPOLINE_LFENCE);
+ fallthrough;
+
+ case SPECTRE_V2_RETPOLINE:
+ case SPECTRE_V2_EIBRS_RETPOLINE:
setup_force_cpu_cap(X86_FEATURE_RETPOLINE);
+ break;
}

-specv2_set_mode:
spectre_v2_enabled = mode;
pr_info("%s\n", spectre_v2_strings[mode]);

@@ -943,7 +1020,7 @@ specv2_set_mode:
* the CPU supports Enhanced IBRS, kernel might un-intentionally not
* enable IBRS around firmware calls.
*/
- if (boot_cpu_has(X86_FEATURE_IBRS) && mode != SPECTRE_V2_IBRS_ENHANCED) {
+ if (boot_cpu_has(X86_FEATURE_IBRS) && !spectre_v2_in_eibrs_mode(mode)) {
setup_force_cpu_cap(X86_FEATURE_USE_IBRS_FW);
pr_info("Enabling Restricted Speculation for firmware calls\n");
}
@@ -1013,6 +1090,10 @@ void cpu_bugs_smt_update(void)
{
mutex_lock(&spec_ctrl_mutex);

+ if (sched_smt_active() && unprivileged_ebpf_enabled() &&
+ spectre_v2_enabled == SPECTRE_V2_EIBRS_LFENCE)
+ pr_warn_once(SPECTRE_V2_EIBRS_LFENCE_EBPF_SMT_MSG);
+
switch (spectre_v2_user_stibp) {
case SPECTRE_V2_USER_NONE:
break;
@@ -1267,7 +1348,6 @@ static int ib_prctl_set(struct task_struct *task, unsigned long ctrl)
if (spectre_v2_user_ibpb == SPECTRE_V2_USER_NONE &&
spectre_v2_user_stibp == SPECTRE_V2_USER_NONE)
return 0;
-
/*
* With strict mode for both IBPB and STIBP, the instruction
* code paths avoid checking this task flag and instead,
@@ -1614,7 +1694,7 @@ static ssize_t tsx_async_abort_show_state(char *buf)

static char *stibp_state(void)
{
- if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
+ if (spectre_v2_in_eibrs_mode(spectre_v2_enabled))
return "";

switch (spectre_v2_user_stibp) {
@@ -1644,6 +1724,27 @@ static char *ibpb_state(void)
return "";
}

+static ssize_t spectre_v2_show_state(char *buf)
+{
+ if (spectre_v2_enabled == SPECTRE_V2_LFENCE)
+ return sprintf(buf, "Vulnerable: LFENCE\n");
+
+ if (spectre_v2_enabled == SPECTRE_V2_EIBRS && unprivileged_ebpf_enabled())
+ return sprintf(buf, "Vulnerable: eIBRS with unprivileged eBPF\n");
+
+ if (sched_smt_active() && unprivileged_ebpf_enabled() &&
+ spectre_v2_enabled == SPECTRE_V2_EIBRS_LFENCE)
+ return sprintf(buf, "Vulnerable: eIBRS+LFENCE with unprivileged eBPF and SMT\n");
+
+ return sprintf(buf, "%s%s%s%s%s%s\n",
+ spectre_v2_strings[spectre_v2_enabled],
+ ibpb_state(),
+ boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
+ stibp_state(),
+ boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? ", RSB filling" : "",
+ spectre_v2_module_string());
+}
+
static ssize_t srbds_show_state(char *buf)
{
return sprintf(buf, "%s\n", srbds_strings[srbds_mitigation]);
@@ -1669,12 +1770,7 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
return sprintf(buf, "%s\n", spectre_v1_strings[spectre_v1_mitigation]);

case X86_BUG_SPECTRE_V2:
- return sprintf(buf, "%s%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
- ibpb_state(),
- boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
- stibp_state(),
- boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? ", RSB filling" : "",
- spectre_v2_module_string());
+ return spectre_v2_show_state(buf);

case X86_BUG_SPEC_STORE_BYPASS:
return sprintf(buf, "%s\n", ssb_strings[ssb_mode]);
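
Several of the new warnings in bugs.c fire only when unprivileged eBPF is enabled, i.e. when the kernel.unprivileged_bpf_disabled sysctl is 0. Whether a system is in that state can be checked from user space; a minimal sketch (assuming the usual /proc/sys path is present):

    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/sys/kernel/unprivileged_bpf_disabled", "r");
        int v;

        if (!f) {
            perror("unprivileged_bpf_disabled");
            return 1;
        }
        if (fscanf(f, "%d", &v) != 1)
            v = -1;
        fclose(f);
        if (v < 0)
            return 1;

        /* 0 means unprivileged eBPF is allowed -- the case the new
         * warnings target; nonzero means bpf() is restricted to
         * privileged users. */
        printf("unprivileged eBPF %s\n", v == 0 ? "enabled" : "disabled");
        return 0;
    }
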
diff --git a/drivers/acpi/ec.c b/drivers/acpi/ec.c
index ce9a570f217ad..e5b92958c299e 100644
--- a/drivers/acpi/ec.c
+++ b/drivers/acpi/ec.c
@@ -2002,16 +2002,6 @@ bool acpi_ec_dispatch_gpe(void)
if (acpi_any_gpe_status_set(first_ec->gpe))
return true;

- /*
- * Cancel the SCI wakeup and process all pending events in case there
- * are any wakeup ones in there.
- *
- * Note that if any non-EC GPEs are active at this point, the SCI will
- * retrigger after the rearming in acpi_s2idle_wake(), so no events
- * should be missed by canceling the wakeup here.
- */
- pm_system_cancel_wakeup();
-
/*
* Dispatch the EC GPE in-band, but do not report wakeup in any case
* to allow the caller to process events properly after that.
diff --git a/drivers/acpi/sleep.c b/drivers/acpi/sleep.c
index cd590b4793e09..b0e23e3fe0d56 100644
--- a/drivers/acpi/sleep.c
+++ b/drivers/acpi/sleep.c
@@ -1003,13 +1003,19 @@ static bool acpi_s2idle_wake(void)
if (acpi_check_wakeup_handlers())
return true;

- /*
- * Check non-EC GPE wakeups and if there are none, cancel the
- * SCI-related wakeup and dispatch the EC GPE.
- */
+ /* Check non-EC GPE wakeups and dispatch the EC GPE. */
if (acpi_ec_dispatch_gpe())
return true;

+ /*
+ * Cancel the SCI wakeup and process all pending events in case
+ * there are any wakeup ones in there.
+ *
+ * Note that if any non-EC GPEs are active at this point, the
+ * SCI will retrigger after the rearming below, so no events
+ * should be missed by canceling the wakeup here.
+ */
+ pm_system_cancel_wakeup();
acpi_os_wait_events_complete();

/*
diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 774af5ce70dad..3731066f2c1ca 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -1344,7 +1344,8 @@ free_shadow:
rinfo->ring_ref[i] = GRANT_INVALID_REF;
}
}
- free_pages((unsigned long)rinfo->ring.sring, get_order(info->nr_ring_pages * XEN_PAGE_SIZE));
+ free_pages_exact(rinfo->ring.sring,
+ info->nr_ring_pages * XEN_PAGE_SIZE);
rinfo->ring.sring = NULL;

if (rinfo->irq)
@@ -1428,9 +1429,15 @@ static int blkif_get_final_status(enum blk_req_status s1,
return BLKIF_RSP_OKAY;
}

-static bool blkif_completion(unsigned long *id,
- struct blkfront_ring_info *rinfo,
- struct blkif_response *bret)
+/*
+ * Return values:
+ * 1 response processed.
+ * 0 missing further responses.
+ * -1 error while processing.
+ */
+static int blkif_completion(unsigned long *id,
+ struct blkfront_ring_info *rinfo,
+ struct blkif_response *bret)
{
int i = 0;
struct scatterlist *sg;
@@ -1453,7 +1460,7 @@ static bool blkif_completion(unsigned long *id,

/* Wait the second response if not yet here. */
if (s2->status < REQ_DONE)
- return false;
+ return 0;

bret->status = blkif_get_final_status(s->status,
s2->status);
@@ -1504,42 +1511,43 @@ static bool blkif_completion(unsigned long *id,
}
/* Add the persistent grant into the list of free grants */
for (i = 0; i < num_grant; i++) {
- if (gnttab_query_foreign_access(s->grants_used[i]->gref)) {
+ if (!gnttab_try_end_foreign_access(s->grants_used[i]->gref)) {
/*
* If the grant is still mapped by the backend (the
* backend has chosen to make this grant persistent)
* we add it at the head of the list, so it will be
* reused first.
*/
- if (!info->feature_persistent)
- pr_alert_ratelimited("backed has not unmapped grant: %u\n",
- s->grants_used[i]->gref);
+ if (!info->feature_persistent) {
+ pr_alert("backed has not unmapped grant: %u\n",
+ s->grants_used[i]->gref);
+ return -1;
+ }
list_add(&s->grants_used[i]->node, &rinfo->grants);
rinfo->persistent_gnts_c++;
} else {
/*
- * If the grant is not mapped by the backend we end the
- * foreign access and add it to the tail of the list,
- * so it will not be picked again unless we run out of
- * persistent grants.
+ * If the grant is not mapped by the backend we add it
+ * to the tail of the list, so it will not be picked
+ * again unless we run out of persistent grants.
*/
- gnttab_end_foreign_access(s->grants_used[i]->gref, 0, 0UL);
s->grants_used[i]->gref = GRANT_INVALID_REF;
list_add_tail(&s->grants_used[i]->node, &rinfo->grants);
}
}
if (s->req.operation == BLKIF_OP_INDIRECT) {
for (i = 0; i < INDIRECT_GREFS(num_grant); i++) {
- if (gnttab_query_foreign_access(s->indirect_grants[i]->gref)) {
- if (!info->feature_persistent)
- pr_alert_ratelimited("backed has not unmapped grant: %u\n",
- s->indirect_grants[i]->gref);
+ if (!gnttab_try_end_foreign_access(s->indirect_grants[i]->gref)) {
+ if (!info->feature_persistent) {
+ pr_alert("backed has not unmapped grant: %u\n",
+ s->indirect_grants[i]->gref);
+ return -1;
+ }
list_add(&s->indirect_grants[i]->node, &rinfo->grants);
rinfo->persistent_gnts_c++;
} else {
struct page *indirect_page;

- gnttab_end_foreign_access(s->indirect_grants[i]->gref, 0, 0UL);
/*
* Add the used indirect page back to the list of
* available pages for indirect grefs.
@@ -1554,7 +1562,7 @@ static bool blkif_completion(unsigned long *id,
}
}

- return true;
+ return 1;
}

static irqreturn_t blkif_interrupt(int irq, void *dev_id)
@@ -1620,12 +1628,17 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
}

if (bret.operation != BLKIF_OP_DISCARD) {
+ int ret;
+
/*
* We may need to wait for an extra response if the
* I/O request is split in 2
*/
- if (!blkif_completion(&id, rinfo, &bret))
+ ret = blkif_completion(&id, rinfo, &bret);
+ if (!ret)
continue;
+ if (unlikely(ret < 0))
+ goto err;
}

if (add_id_to_freelist(rinfo, id)) {
@@ -1731,8 +1744,7 @@ static int setup_blkring(struct xenbus_device *dev,
for (i = 0; i < info->nr_ring_pages; i++)
rinfo->ring_ref[i] = GRANT_INVALID_REF;

- sring = (struct blkif_sring *)__get_free_pages(GFP_NOIO | __GFP_HIGH,
- get_order(ring_size));
+ sring = alloc_pages_exact(ring_size, GFP_NOIO);
if (!sring) {
xenbus_dev_fatal(dev, -ENOMEM, "allocating shared ring");
return -ENOMEM;
@@ -1742,7 +1754,7 @@ static int setup_blkring(struct xenbus_device *dev,

err = xenbus_grant_ring(dev, rinfo->ring.sring, info->nr_ring_pages, gref);
if (err < 0) {
- free_pages((unsigned long)sring, get_order(ring_size));
+ free_pages_exact(sring, ring_size);
rinfo->ring.sring = NULL;
goto fail;
}
@@ -2720,11 +2732,10 @@ static void purge_persistent_grants(struct blkfront_info *info)
list_for_each_entry_safe(gnt_list_entry, tmp, &rinfo->grants,
node) {
if (gnt_list_entry->gref == GRANT_INVALID_REF ||
- gnttab_query_foreign_access(gnt_list_entry->gref))
+ !gnttab_try_end_foreign_access(gnt_list_entry->gref))
continue;

list_del(&gnt_list_entry->node);
- gnttab_end_foreign_access(gnt_list_entry->gref, 0, 0UL);
rinfo->persistent_gnts_c--;
gnt_list_entry->gref = GRANT_INVALID_REF;
list_add_tail(&gnt_list_entry->node, &rinfo->grants);
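
Throughout the Xen frontends this patch replaces the two-step "gnttab_query_foreign_access(), then gnttab_end_foreign_access()" sequence with gnttab_try_end_foreign_access(), which checks and revokes the grant without leaving a window in between, so a malicious backend cannot re-map the grant between the check and the revoke. A toy model of the difference (C11; the atomic flag stands in for the shared grant-table entry, and the compare-and-exchange stands in for the helper's atomic update):

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    static atomic_bool mapped_by_backend;

    /* With the old query-then-end pattern, a hostile backend could
     * re-map the grant between the two calls. Folding the check and
     * the revoke into one atomic operation removes that window. */
    static bool try_end_access(void)
    {
        bool expected = false;
        return atomic_compare_exchange_strong(&mapped_by_backend,
                                              &expected, false);
    }

    int main(void)
    {
        atomic_store(&mapped_by_backend, true);
        printf("revoked: %d (backend still maps the grant)\n",
               try_end_access());
        atomic_store(&mapped_by_backend, false);
        printf("revoked: %d (grant free, page safe to reuse)\n",
               try_end_access());
        return 0;
    }
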
1548 diff --git a/drivers/firmware/psci/psci.c b/drivers/firmware/psci/psci.c
1549 index 84f4ff351c629..eb797081d1596 100644
1550 --- a/drivers/firmware/psci/psci.c
1551 +++ b/drivers/firmware/psci/psci.c
1552 @@ -57,6 +57,21 @@ struct psci_operations psci_ops = {
1553 .smccc_version = SMCCC_VERSION_1_0,
1554 };
1555
1556 +enum arm_smccc_conduit arm_smccc_1_1_get_conduit(void)
1557 +{
1558 + if (psci_ops.smccc_version < SMCCC_VERSION_1_1)
1559 + return SMCCC_CONDUIT_NONE;
1560 +
1561 + switch (psci_ops.conduit) {
1562 + case PSCI_CONDUIT_SMC:
1563 + return SMCCC_CONDUIT_SMC;
1564 + case PSCI_CONDUIT_HVC:
1565 + return SMCCC_CONDUIT_HVC;
1566 + default:
1567 + return SMCCC_CONDUIT_NONE;
1568 + }
1569 +}
1570 +
1571 typedef unsigned long (psci_fn)(unsigned long, unsigned long,
1572 unsigned long, unsigned long);
1573 static psci_fn *invoke_psci_fn;
1574 diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
1575 index d45d83968e769..94dd6edd18006 100644
1576 --- a/drivers/net/xen-netfront.c
1577 +++ b/drivers/net/xen-netfront.c
1578 @@ -412,14 +412,12 @@ static bool xennet_tx_buf_gc(struct netfront_queue *queue)
1579 queue->tx_link[id] = TX_LINK_NONE;
1580 skb = queue->tx_skbs[id];
1581 queue->tx_skbs[id] = NULL;
1582 - if (unlikely(gnttab_query_foreign_access(
1583 - queue->grant_tx_ref[id]) != 0)) {
1584 + if (unlikely(!gnttab_end_foreign_access_ref(
1585 + queue->grant_tx_ref[id], GNTMAP_readonly))) {
1586 dev_alert(dev,
1587 "Grant still in use by backend domain\n");
1588 goto err;
1589 }
1590 - gnttab_end_foreign_access_ref(
1591 - queue->grant_tx_ref[id], GNTMAP_readonly);
1592 gnttab_release_grant_reference(
1593 &queue->gref_tx_head, queue->grant_tx_ref[id]);
1594 queue->grant_tx_ref[id] = GRANT_INVALID_REF;
1595 @@ -861,7 +859,6 @@ static int xennet_get_responses(struct netfront_queue *queue,
1596 int max = XEN_NETIF_NR_SLOTS_MIN + (rx->status <= RX_COPY_THRESHOLD);
1597 int slots = 1;
1598 int err = 0;
1599 - unsigned long ret;
1600
1601 if (rx->flags & XEN_NETRXF_extra_info) {
1602 err = xennet_get_extras(queue, extras, rp);
1603 @@ -892,8 +889,13 @@ static int xennet_get_responses(struct netfront_queue *queue,
1604 goto next;
1605 }
1606
1607 - ret = gnttab_end_foreign_access_ref(ref, 0);
1608 - BUG_ON(!ret);
1609 + if (!gnttab_end_foreign_access_ref(ref, 0)) {
1610 + dev_alert(dev,
1611 + "Grant still in use by backend domain\n");
1612 + queue->info->broken = true;
1613 + dev_alert(dev, "Disabled for further use\n");
1614 + return -EINVAL;
1615 + }
1616
1617 gnttab_release_grant_reference(&queue->gref_rx_head, ref);
1618
1619 @@ -1097,6 +1099,10 @@ static int xennet_poll(struct napi_struct *napi, int budget)
1620 err = xennet_get_responses(queue, &rinfo, rp, &tmpq);
1621
1622 if (unlikely(err)) {
1623 + if (queue->info->broken) {
1624 + spin_unlock(&queue->rx_lock);
1625 + return 0;
1626 + }
1627 err:
1628 while ((skb = __skb_dequeue(&tmpq)))
1629 __skb_queue_tail(&errq, skb);
1630 @@ -1675,7 +1681,7 @@ static int setup_netfront(struct xenbus_device *dev,
1631 struct netfront_queue *queue, unsigned int feature_split_evtchn)
1632 {
1633 struct xen_netif_tx_sring *txs;
1634 - struct xen_netif_rx_sring *rxs;
1635 + struct xen_netif_rx_sring *rxs = NULL;
1636 grant_ref_t gref;
1637 int err;
1638
1639 @@ -1695,21 +1701,21 @@ static int setup_netfront(struct xenbus_device *dev,
1640
1641 err = xenbus_grant_ring(dev, txs, 1, &gref);
1642 if (err < 0)
1643 - goto grant_tx_ring_fail;
1644 + goto fail;
1645 queue->tx_ring_ref = gref;
1646
1647 rxs = (struct xen_netif_rx_sring *)get_zeroed_page(GFP_NOIO | __GFP_HIGH);
1648 if (!rxs) {
1649 err = -ENOMEM;
1650 xenbus_dev_fatal(dev, err, "allocating rx ring page");
1651 - goto alloc_rx_ring_fail;
1652 + goto fail;
1653 }
1654 SHARED_RING_INIT(rxs);
1655 FRONT_RING_INIT(&queue->rx, rxs, XEN_PAGE_SIZE);
1656
1657 err = xenbus_grant_ring(dev, rxs, 1, &gref);
1658 if (err < 0)
1659 - goto grant_rx_ring_fail;
1660 + goto fail;
1661 queue->rx_ring_ref = gref;
1662
1663 if (feature_split_evtchn)
1664 @@ -1722,22 +1728,28 @@ static int setup_netfront(struct xenbus_device *dev,
1665 err = setup_netfront_single(queue);
1666
1667 if (err)
1668 - goto alloc_evtchn_fail;
1669 + goto fail;
1670
1671 return 0;
1672
1673 /* If we fail to setup netfront, it is safe to just revoke access to
1674 * granted pages because backend is not accessing it at this point.
1675 */
1676 -alloc_evtchn_fail:
1677 - gnttab_end_foreign_access_ref(queue->rx_ring_ref, 0);
1678 -grant_rx_ring_fail:
1679 - free_page((unsigned long)rxs);
1680 -alloc_rx_ring_fail:
1681 - gnttab_end_foreign_access_ref(queue->tx_ring_ref, 0);
1682 -grant_tx_ring_fail:
1683 - free_page((unsigned long)txs);
1684 -fail:
1685 + fail:
1686 + if (queue->rx_ring_ref != GRANT_INVALID_REF) {
1687 + gnttab_end_foreign_access(queue->rx_ring_ref, 0,
1688 + (unsigned long)rxs);
1689 + queue->rx_ring_ref = GRANT_INVALID_REF;
1690 + } else {
1691 + free_page((unsigned long)rxs);
1692 + }
1693 + if (queue->tx_ring_ref != GRANT_INVALID_REF) {
1694 + gnttab_end_foreign_access(queue->tx_ring_ref, 0,
1695 + (unsigned long)txs);
1696 + queue->tx_ring_ref = GRANT_INVALID_REF;
1697 + } else {
1698 + free_page((unsigned long)txs);
1699 + }
1700 return err;
1701 }
1702
1703 diff --git a/drivers/scsi/xen-scsifront.c b/drivers/scsi/xen-scsifront.c
1704 index f0068e96a177f..39e39869a1ad9 100644
1705 --- a/drivers/scsi/xen-scsifront.c
1706 +++ b/drivers/scsi/xen-scsifront.c
1707 @@ -233,12 +233,11 @@ static void scsifront_gnttab_done(struct vscsifrnt_info *info,
1708 return;
1709
1710 for (i = 0; i < shadow->nr_grants; i++) {
1711 - if (unlikely(gnttab_query_foreign_access(shadow->gref[i]))) {
1712 + if (unlikely(!gnttab_try_end_foreign_access(shadow->gref[i]))) {
1713 shost_printk(KERN_ALERT, info->host, KBUILD_MODNAME
1714 "grant still in use by backend\n");
1715 BUG();
1716 }
1717 - gnttab_end_foreign_access(shadow->gref[i], 0, 0UL);
1718 }
1719
1720 kfree(shadow->sg);
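scsifront gets the same fix via the new gnttab_try_end_foreign_access() helper, which on success also deallocates the grant reference. The calling convention, sketched with the names from the hunk:

    /* Returns 1 when access was ended and the ref freed, 0 when the
     * backend still holds a mapping (scsifront treats that as fatal). */
    if (unlikely(!gnttab_try_end_foreign_access(shadow->gref[i])))
        BUG();
    /* No separate gnttab_end_foreign_access() call on success: the
     * reference is already gone. */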
1721 diff --git a/drivers/xen/gntalloc.c b/drivers/xen/gntalloc.c
1722 index 3fa40c723e8e9..edb0acd0b8323 100644
1723 --- a/drivers/xen/gntalloc.c
1724 +++ b/drivers/xen/gntalloc.c
1725 @@ -169,20 +169,14 @@ undo:
1726 __del_gref(gref);
1727 }
1728
1729 - /* It's possible for the target domain to map the just-allocated grant
1730 - * references by blindly guessing their IDs; if this is done, then
1731 - * __del_gref will leave them in the queue_gref list. They need to be
1732 - * added to the global list so that we can free them when they are no
1733 - * longer referenced.
1734 - */
1735 - if (unlikely(!list_empty(&queue_gref)))
1736 - list_splice_tail(&queue_gref, &gref_list);
1737 mutex_unlock(&gref_mutex);
1738 return rc;
1739 }
1740
1741 static void __del_gref(struct gntalloc_gref *gref)
1742 {
1743 + unsigned long addr;
1744 +
1745 if (gref->notify.flags & UNMAP_NOTIFY_CLEAR_BYTE) {
1746 uint8_t *tmp = kmap(gref->page);
1747 tmp[gref->notify.pgoff] = 0;
1748 @@ -196,21 +190,16 @@ static void __del_gref(struct gntalloc_gref *gref)
1749 gref->notify.flags = 0;
1750
1751 if (gref->gref_id) {
1752 - if (gnttab_query_foreign_access(gref->gref_id))
1753 - return;
1754 -
1755 - if (!gnttab_end_foreign_access_ref(gref->gref_id, 0))
1756 - return;
1757 -
1758 - gnttab_free_grant_reference(gref->gref_id);
1759 + if (gref->page) {
1760 + addr = (unsigned long)page_to_virt(gref->page);
1761 + gnttab_end_foreign_access(gref->gref_id, 0, addr);
1762 + } else
1763 + gnttab_free_grant_reference(gref->gref_id);
1764 }
1765
1766 gref_size--;
1767 list_del(&gref->next_gref);
1768
1769 - if (gref->page)
1770 - __free_page(gref->page);
1771 -
1772 kfree(gref);
1773 }
1774
1775 diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
1776 index 49b381e104efa..c75dc17d1a617 100644
1777 --- a/drivers/xen/grant-table.c
1778 +++ b/drivers/xen/grant-table.c
1779 @@ -135,12 +135,9 @@ struct gnttab_ops {
1780 */
1781 unsigned long (*end_foreign_transfer_ref)(grant_ref_t ref);
1782 /*
1783 - * Query the status of a grant entry. Ref parameter is reference of
1784 - * queried grant entry, return value is the status of queried entry.
1785 - * Detailed status(writing/reading) can be gotten from the return value
1786 - * by bit operations.
1787 + * Read the frame number related to a given grant reference.
1788 */
1789 - int (*query_foreign_access)(grant_ref_t ref);
1790 + unsigned long (*read_frame)(grant_ref_t ref);
1791 };
1792
1793 struct unmap_refs_callback_data {
1794 @@ -285,22 +282,6 @@ int gnttab_grant_foreign_access(domid_t domid, unsigned long frame,
1795 }
1796 EXPORT_SYMBOL_GPL(gnttab_grant_foreign_access);
1797
1798 -static int gnttab_query_foreign_access_v1(grant_ref_t ref)
1799 -{
1800 - return gnttab_shared.v1[ref].flags & (GTF_reading|GTF_writing);
1801 -}
1802 -
1803 -static int gnttab_query_foreign_access_v2(grant_ref_t ref)
1804 -{
1805 - return grstatus[ref] & (GTF_reading|GTF_writing);
1806 -}
1807 -
1808 -int gnttab_query_foreign_access(grant_ref_t ref)
1809 -{
1810 - return gnttab_interface->query_foreign_access(ref);
1811 -}
1812 -EXPORT_SYMBOL_GPL(gnttab_query_foreign_access);
1813 -
1814 static int gnttab_end_foreign_access_ref_v1(grant_ref_t ref, int readonly)
1815 {
1816 u16 flags, nflags;
1817 @@ -354,6 +335,16 @@ int gnttab_end_foreign_access_ref(grant_ref_t ref, int readonly)
1818 }
1819 EXPORT_SYMBOL_GPL(gnttab_end_foreign_access_ref);
1820
1821 +static unsigned long gnttab_read_frame_v1(grant_ref_t ref)
1822 +{
1823 + return gnttab_shared.v1[ref].frame;
1824 +}
1825 +
1826 +static unsigned long gnttab_read_frame_v2(grant_ref_t ref)
1827 +{
1828 + return gnttab_shared.v2[ref].full_page.frame;
1829 +}
1830 +
1831 struct deferred_entry {
1832 struct list_head list;
1833 grant_ref_t ref;
1834 @@ -383,12 +374,9 @@ static void gnttab_handle_deferred(struct timer_list *unused)
1835 spin_unlock_irqrestore(&gnttab_list_lock, flags);
1836 if (_gnttab_end_foreign_access_ref(entry->ref, entry->ro)) {
1837 put_free_entry(entry->ref);
1838 - if (entry->page) {
1839 - pr_debug("freeing g.e. %#x (pfn %#lx)\n",
1840 - entry->ref, page_to_pfn(entry->page));
1841 - put_page(entry->page);
1842 - } else
1843 - pr_info("freeing g.e. %#x\n", entry->ref);
1844 + pr_debug("freeing g.e. %#x (pfn %#lx)\n",
1845 + entry->ref, page_to_pfn(entry->page));
1846 + put_page(entry->page);
1847 kfree(entry);
1848 entry = NULL;
1849 } else {
1850 @@ -413,9 +401,18 @@ static void gnttab_handle_deferred(struct timer_list *unused)
1851 static void gnttab_add_deferred(grant_ref_t ref, bool readonly,
1852 struct page *page)
1853 {
1854 - struct deferred_entry *entry = kmalloc(sizeof(*entry), GFP_ATOMIC);
1855 + struct deferred_entry *entry;
1856 + gfp_t gfp = (in_atomic() || irqs_disabled()) ? GFP_ATOMIC : GFP_KERNEL;
1857 const char *what = KERN_WARNING "leaking";
1858
1859 + entry = kmalloc(sizeof(*entry), gfp);
1860 + if (!page) {
1861 + unsigned long gfn = gnttab_interface->read_frame(ref);
1862 +
1863 + page = pfn_to_page(gfn_to_pfn(gfn));
1864 + get_page(page);
1865 + }
1866 +
1867 if (entry) {
1868 unsigned long flags;
1869
1870 @@ -436,11 +433,21 @@ static void gnttab_add_deferred(grant_ref_t ref, bool readonly,
1871 what, ref, page ? page_to_pfn(page) : -1);
1872 }
1873
1874 +int gnttab_try_end_foreign_access(grant_ref_t ref)
1875 +{
1876 + int ret = _gnttab_end_foreign_access_ref(ref, 0);
1877 +
1878 + if (ret)
1879 + put_free_entry(ref);
1880 +
1881 + return ret;
1882 +}
1883 +EXPORT_SYMBOL_GPL(gnttab_try_end_foreign_access);
1884 +
1885 void gnttab_end_foreign_access(grant_ref_t ref, int readonly,
1886 unsigned long page)
1887 {
1888 - if (gnttab_end_foreign_access_ref(ref, readonly)) {
1889 - put_free_entry(ref);
1890 + if (gnttab_try_end_foreign_access(ref)) {
1891 if (page != 0)
1892 put_page(virt_to_page(page));
1893 } else
1894 @@ -1297,7 +1304,7 @@ static const struct gnttab_ops gnttab_v1_ops = {
1895 .update_entry = gnttab_update_entry_v1,
1896 .end_foreign_access_ref = gnttab_end_foreign_access_ref_v1,
1897 .end_foreign_transfer_ref = gnttab_end_foreign_transfer_ref_v1,
1898 - .query_foreign_access = gnttab_query_foreign_access_v1,
1899 + .read_frame = gnttab_read_frame_v1,
1900 };
1901
1902 static const struct gnttab_ops gnttab_v2_ops = {
1903 @@ -1309,7 +1316,7 @@ static const struct gnttab_ops gnttab_v2_ops = {
1904 .update_entry = gnttab_update_entry_v2,
1905 .end_foreign_access_ref = gnttab_end_foreign_access_ref_v2,
1906 .end_foreign_transfer_ref = gnttab_end_foreign_transfer_ref_v2,
1907 - .query_foreign_access = gnttab_query_foreign_access_v2,
1908 + .read_frame = gnttab_read_frame_v2,
1909 };
1910
1911 static bool gnttab_need_v2(void)
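Taken together, the grant-table.c hunks make deferred freeing unconditional: if a grant cannot be ended now, the page is always pinned (looked up through the new read_frame op when the caller passed none) and retried from a timer. A rough sketch of the flow, simplified from the code above:

    /* gnttab_end_foreign_access(ref, readonly, page), in outline: */
    if (gnttab_try_end_foreign_access(ref)) {
        if (page)
            put_page(virt_to_page(page));   /* backend is done, free now */
    } else {
        /* Still mapped: gnttab_add_deferred() resolves the page via
         * read_frame() if necessary, takes a reference with get_page(),
         * and a timer retries _gnttab_end_foreign_access_ref() until the
         * backend finally unmaps; only then is the page put. */
        gnttab_add_deferred(ref, readonly,
                            page ? virt_to_page(page) : NULL);
    }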
1912 diff --git a/drivers/xen/pvcalls-front.c b/drivers/xen/pvcalls-front.c
1913 index 57592a6b5c9e3..91e52e05555eb 100644
1914 --- a/drivers/xen/pvcalls-front.c
1915 +++ b/drivers/xen/pvcalls-front.c
1916 @@ -337,8 +337,8 @@ static void free_active_ring(struct sock_mapping *map)
1917 if (!map->active.ring)
1918 return;
1919
1920 - free_pages((unsigned long)map->active.data.in,
1921 - map->active.ring->ring_order);
1922 + free_pages_exact(map->active.data.in,
1923 + PAGE_SIZE << map->active.ring->ring_order);
1924 free_page((unsigned long)map->active.ring);
1925 }
1926
1927 @@ -352,8 +352,8 @@ static int alloc_active_ring(struct sock_mapping *map)
1928 goto out;
1929
1930 map->active.ring->ring_order = PVCALLS_RING_ORDER;
1931 - bytes = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
1932 - PVCALLS_RING_ORDER);
1933 + bytes = alloc_pages_exact(PAGE_SIZE << PVCALLS_RING_ORDER,
1934 + GFP_KERNEL | __GFP_ZERO);
1935 if (!bytes)
1936 goto out;
1937
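pvcalls switches its data ring to alloc_pages_exact()/free_pages_exact(). This follows from the lifetime rule documented later in this patch (include/xen/grant_table.h): gnttab_end_foreign_access() may take a reference on an individual still-granted page, which is only sound for order-0 pages, not for pages inside a high-order allocation. The pairing, sketched:

    size_t sz = PAGE_SIZE << PVCALLS_RING_ORDER;
    void *ring = alloc_pages_exact(sz, GFP_KERNEL | __GFP_ZERO);
    if (!ring)
        return -ENOMEM;
    /* ... grant the pages and use the ring ... */
    free_pages_exact(ring, sz);    /* takes a byte count, not an order */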
1938 diff --git a/drivers/xen/xenbus/xenbus_client.c b/drivers/xen/xenbus/xenbus_client.c
1939 index 81eddb8529ffc..8739dd0ee870d 100644
1940 --- a/drivers/xen/xenbus/xenbus_client.c
1941 +++ b/drivers/xen/xenbus/xenbus_client.c
1942 @@ -366,7 +366,14 @@ int xenbus_grant_ring(struct xenbus_device *dev, void *vaddr,
1943 unsigned int nr_pages, grant_ref_t *grefs)
1944 {
1945 int err;
1946 - int i, j;
1947 + unsigned int i;
1948 + grant_ref_t gref_head;
1949 +
1950 + err = gnttab_alloc_grant_references(nr_pages, &gref_head);
1951 + if (err) {
1952 + xenbus_dev_fatal(dev, err, "granting access to ring page");
1953 + return err;
1954 + }
1955
1956 for (i = 0; i < nr_pages; i++) {
1957 unsigned long gfn;
1958 @@ -376,23 +383,14 @@ int xenbus_grant_ring(struct xenbus_device *dev, void *vaddr,
1959 else
1960 gfn = virt_to_gfn(vaddr);
1961
1962 - err = gnttab_grant_foreign_access(dev->otherend_id, gfn, 0);
1963 - if (err < 0) {
1964 - xenbus_dev_fatal(dev, err,
1965 - "granting access to ring page");
1966 - goto fail;
1967 - }
1968 - grefs[i] = err;
1969 + grefs[i] = gnttab_claim_grant_reference(&gref_head);
1970 + gnttab_grant_foreign_access_ref(grefs[i], dev->otherend_id,
1971 + gfn, 0);
1972
1973 vaddr = vaddr + XEN_PAGE_SIZE;
1974 }
1975
1976 return 0;
1977 -
1978 -fail:
1979 - for (j = 0; j < i; j++)
1980 - gnttab_end_foreign_access_ref(grefs[j], 0);
1981 - return err;
1982 }
1983 EXPORT_SYMBOL_GPL(xenbus_grant_ring);
1984
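xenbus_grant_ring() loses its unwind path because gnttab_end_foreign_access_ref() may no longer be used to revoke a possibly-mapped grant unconditionally. Instead, all references are reserved up front, after which granting cannot fail; in outline:

    err = gnttab_alloc_grant_references(nr_pages, &gref_head);
    if (err)
        return err;            /* nothing granted yet, nothing to undo */

    for (i = 0; i < nr_pages; i++) {
        /* gfn derived from vaddr as in the hunk above */
        grefs[i] = gnttab_claim_grant_reference(&gref_head);
        /* Cannot fail: the reference was reserved above. */
        gnttab_grant_foreign_access_ref(grefs[i], dev->otherend_id,
                                        gfn, 0);
    }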
1985 diff --git a/include/linux/arm-smccc.h b/include/linux/arm-smccc.h
1986 index 157e4a6a83f6d..4e97ba64dbb42 100644
1987 --- a/include/linux/arm-smccc.h
1988 +++ b/include/linux/arm-smccc.h
1989 @@ -82,6 +82,22 @@
1990
1991 #include <linux/linkage.h>
1992 #include <linux/types.h>
1993 +
1994 +enum arm_smccc_conduit {
1995 + SMCCC_CONDUIT_NONE,
1996 + SMCCC_CONDUIT_SMC,
1997 + SMCCC_CONDUIT_HVC,
1998 +};
1999 +
2000 +/**
2001 + * arm_smccc_1_1_get_conduit()
2002 + *
2003 + * Returns the conduit to be used for SMCCCv1.1 or later.
2004 + *
2005 + * When SMCCCv1.1 is not present, returns SMCCC_CONDUIT_NONE.
2006 + */
2007 +enum arm_smccc_conduit arm_smccc_1_1_get_conduit(void);
2008 +
2009 /**
2010 * struct arm_smccc_res - Result from SMC/HVC call
2011 * @a0-a3 result values from registers 0 to 3
2012 @@ -304,5 +320,63 @@ asmlinkage void __arm_smccc_hvc(unsigned long a0, unsigned long a1,
2013 #define SMCCC_RET_NOT_SUPPORTED -1
2014 #define SMCCC_RET_NOT_REQUIRED -2
2015
2016 +/*
2017 + * Like arm_smccc_1_1* but always returns SMCCC_RET_NOT_SUPPORTED.
2018 + * Used when the SMCCC conduit is not defined. The empty asm statement
2019 + * avoids compiler warnings about unused variables.
2020 + */
2021 +#define __fail_smccc_1_1(...) \
2022 + do { \
2023 + __declare_args(__count_args(__VA_ARGS__), __VA_ARGS__); \
2024 + asm ("" __constraints(__count_args(__VA_ARGS__))); \
2025 + if (___res) \
2026 + ___res->a0 = SMCCC_RET_NOT_SUPPORTED; \
2027 + } while (0)
2028 +
2029 +/*
2030 + * arm_smccc_1_1_invoke() - make an SMCCC v1.1 compliant call
2031 + *
2032 + * This is a variadic macro taking one to eight source arguments, and
2033 + * an optional return structure.
2034 + *
2035 + * @a0-a7: arguments passed in registers 0 to 7
2036 + * @res: result values from registers 0 to 3
2037 + *
2038 + * This macro will make either an HVC call or an SMC call depending on the
2039 + * current SMCCC conduit. If no valid conduit is available then -1
2040 + * (SMCCC_RET_NOT_SUPPORTED) is returned in @res.a0 (if supplied).
2041 + *
2042 + * The return value also provides the conduit that was used.
2043 + */
2044 +#define arm_smccc_1_1_invoke(...) ({ \
2045 + int method = arm_smccc_1_1_get_conduit(); \
2046 + switch (method) { \
2047 + case SMCCC_CONDUIT_HVC: \
2048 + arm_smccc_1_1_hvc(__VA_ARGS__); \
2049 + break; \
2050 + case SMCCC_CONDUIT_SMC: \
2051 + arm_smccc_1_1_smc(__VA_ARGS__); \
2052 + break; \
2053 + default: \
2054 + __fail_smccc_1_1(__VA_ARGS__); \
2055 + method = SMCCC_CONDUIT_NONE; \
2056 + break; \
2057 + } \
2058 + method; \
2059 + })
2060 +
2061 +/* Paravirtualised time calls (defined by ARM DEN0057A) */
2062 +#define ARM_SMCCC_HV_PV_TIME_FEATURES \
2063 + ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \
2064 + ARM_SMCCC_SMC_64, \
2065 + ARM_SMCCC_OWNER_STANDARD_HYP, \
2066 + 0x20)
2067 +
2068 +#define ARM_SMCCC_HV_PV_TIME_ST \
2069 + ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \
2070 + ARM_SMCCC_SMC_64, \
2071 + ARM_SMCCC_OWNER_STANDARD_HYP, \
2072 + 0x21)
2073 +
2074 #endif /*__ASSEMBLY__*/
2075 #endif /*__LINUX_ARM_SMCCC_H*/
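A hedged usage sketch for arm_smccc_1_1_invoke(): the macro's value is the conduit actually used, while the firmware result lands in res. The probe below pairs the PV-time IDs added above with the standard arch-features query; it is illustrative only and not a call site from this patch:

    struct arm_smccc_res res;

    arm_smccc_1_1_invoke(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
                         ARM_SMCCC_HV_PV_TIME_FEATURES, &res);
    /* If no conduit exists, __fail_smccc_1_1() has set a0 to
     * SMCCC_RET_NOT_SUPPORTED for us. */
    if ((long)res.a0 < 0)
        return false;        /* feature absent or no SMCCC v1.1 */
    return true;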
2076 diff --git a/include/linux/bpf.h b/include/linux/bpf.h
2077 index 66590ae89c97c..a73ca7c9c7d0e 100644
2078 --- a/include/linux/bpf.h
2079 +++ b/include/linux/bpf.h
2080 @@ -751,6 +751,12 @@ int bpf_prog_test_run_skb(struct bpf_prog *prog, const union bpf_attr *kattr,
2081 int bpf_prog_test_run_flow_dissector(struct bpf_prog *prog,
2082 const union bpf_attr *kattr,
2083 union bpf_attr __user *uattr);
2084 +
2085 +static inline bool unprivileged_ebpf_enabled(void)
2086 +{
2087 + return !sysctl_unprivileged_bpf_disabled;
2088 +}
2089 +
2090 #else /* !CONFIG_BPF_SYSCALL */
2091 static inline struct bpf_prog *bpf_prog_get(u32 ufd)
2092 {
2093 @@ -881,6 +887,12 @@ static inline int bpf_prog_test_run_flow_dissector(struct bpf_prog *prog,
2094 {
2095 return -ENOTSUPP;
2096 }
2097 +
2098 +static inline bool unprivileged_ebpf_enabled(void)
2099 +{
2100 + return false;
2101 +}
2102 +
2103 #endif /* CONFIG_BPF_SYSCALL */
2104
2105 static inline struct bpf_prog *bpf_prog_get_type(u32 ufd,
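unprivileged_ebpf_enabled() gives arch mitigation code a single predicate for "is unprivileged eBPF currently allowed", with a false stub when the BPF syscall is compiled out. A sketch of the kind of consumer this series adds on x86 (condition and wording indicative, not verbatim):

    /* e.g. in an arch's Spectre v2 reporting path: */
    if (unprivileged_ebpf_enabled())
        pr_warn_once("eIBRS without retpolines plus unprivileged eBPF "
                     "leaves the BHB attack vector open\n");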
2106 diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
2107 index a9978350b45b0..a58a89cc0e97d 100644
2108 --- a/include/xen/grant_table.h
2109 +++ b/include/xen/grant_table.h
2110 @@ -97,17 +97,32 @@ int gnttab_end_foreign_access_ref(grant_ref_t ref, int readonly);
2111 * access has been ended, free the given page too. Access will be ended
2112 * immediately iff the grant entry is not in use, otherwise it will happen
2113 * some time later. page may be 0, in which case no freeing will occur.
2114 + * Note that the granted page might still be accessed (read or write) by the
2115 + * other side after gnttab_end_foreign_access() returns, so even if page was
2116 + * specified as 0 it is not allowed to just reuse the page for other
2117 + * purposes immediately. gnttab_end_foreign_access() will take an additional
2118 + * reference to the granted page in this case, which is dropped only after
2119 + * the grant is no longer in use.
2120 + * This requires that multi page allocations for areas subject to
2121 + * gnttab_end_foreign_access() are done via alloc_pages_exact() (and freeing
2122 + * via free_pages_exact()) in order to avoid high order pages.
2123 */
2124 void gnttab_end_foreign_access(grant_ref_t ref, int readonly,
2125 unsigned long page);
2126
2127 +/*
2128 + * End access through the given grant reference, iff the grant entry is
2129 + * no longer in use. In case of success ending foreign access, the
2130 + * grant reference is deallocated.
2131 + * Return 1 if the grant entry was freed, 0 if it is still in use.
2132 + */
2133 +int gnttab_try_end_foreign_access(grant_ref_t ref);
2134 +
2135 int gnttab_grant_foreign_transfer(domid_t domid, unsigned long pfn);
2136
2137 unsigned long gnttab_end_foreign_transfer_ref(grant_ref_t ref);
2138 unsigned long gnttab_end_foreign_transfer(grant_ref_t ref);
2139
2140 -int gnttab_query_foreign_access(grant_ref_t ref);
2141 -
2142 /*
2143 * operations on reserved batches of grant references
2144 */
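The rewritten comment above is the heart of the series: the backend may still touch the page after gnttab_end_foreign_access() returns, so the grant layer, not the caller, drops the last reference. The two rules frontends must follow, restated as code:

    /* 1. Hand the page over and forget it; never free or reuse it: */
    gnttab_end_foreign_access(ref, 0, (unsigned long)page_address(page));
    page = NULL;

    /* 2. Back multi-page shared areas with alloc_pages_exact(), so each
     *    constituent page is order-0 and can be pinned individually: */
    area = alloc_pages_exact(PAGE_SIZE << order, GFP_KERNEL | __GFP_ZERO);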
2145 diff --git a/kernel/sysctl.c b/kernel/sysctl.c
2146 index 8494d5a706bb5..0457d36540e38 100644
2147 --- a/kernel/sysctl.c
2148 +++ b/kernel/sysctl.c
2149 @@ -251,6 +251,11 @@ static int sysrq_sysctl_handler(struct ctl_table *table, int write,
2150 #endif
2151
2152 #ifdef CONFIG_BPF_SYSCALL
2153 +
2154 +void __weak unpriv_ebpf_notify(int new_state)
2155 +{
2156 +}
2157 +
2158 static int bpf_unpriv_handler(struct ctl_table *table, int write,
2159 void *buffer, size_t *lenp, loff_t *ppos)
2160 {
2161 @@ -268,6 +273,9 @@ static int bpf_unpriv_handler(struct ctl_table *table, int write,
2162 return -EPERM;
2163 *(int *)table->data = unpriv_enable;
2164 }
2165 +
2166 + unpriv_ebpf_notify(unpriv_enable);
2167 +
2168 return ret;
2169 }
2170 #endif
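unpriv_ebpf_notify() is deliberately __weak, so an architecture can supply a strong definition and be told whenever the sysctl flips; the argument is the new value of unprivileged_bpf_disabled. A hypothetical override (the real x86 one lives in its Spectre code and is more specific):

    void unpriv_ebpf_notify(int new_state)
    {
        /* new_state == 0 means unprivileged eBPF was just (re)enabled */
        if (!new_state)
            pr_warn("unprivileged eBPF enabled; Spectre v2 mitigation "
                    "may need re-evaluation\n");
    }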
2171 diff --git a/net/9p/trans_xen.c b/net/9p/trans_xen.c
2172 index 44e6c74ed4288..2779ec1053a02 100644
2173 --- a/net/9p/trans_xen.c
2174 +++ b/net/9p/trans_xen.c
2175 @@ -301,9 +301,9 @@ static void xen_9pfs_front_free(struct xen_9pfs_front_priv *priv)
2176 ref = priv->rings[i].intf->ref[j];
2177 gnttab_end_foreign_access(ref, 0, 0);
2178 }
2179 - free_pages((unsigned long)priv->rings[i].data.in,
2180 - XEN_9PFS_RING_ORDER -
2181 - (PAGE_SHIFT - XEN_PAGE_SHIFT));
2182 + free_pages_exact(priv->rings[i].data.in,
2183 + 1UL << (XEN_9PFS_RING_ORDER +
2184 + XEN_PAGE_SHIFT));
2185 }
2186 gnttab_end_foreign_access(priv->rings[i].ref, 0, 0);
2187 free_page((unsigned long)priv->rings[i].intf);
2188 @@ -341,8 +341,8 @@ static int xen_9pfs_front_alloc_dataring(struct xenbus_device *dev,
2189 if (ret < 0)
2190 goto out;
2191 ring->ref = ret;
2192 - bytes = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
2193 - XEN_9PFS_RING_ORDER - (PAGE_SHIFT - XEN_PAGE_SHIFT));
2194 + bytes = alloc_pages_exact(1UL << (XEN_9PFS_RING_ORDER + XEN_PAGE_SHIFT),
2195 + GFP_KERNEL | __GFP_ZERO);
2196 if (!bytes) {
2197 ret = -ENOMEM;
2198 goto out;
2199 @@ -373,9 +373,7 @@ out:
2200 if (bytes) {
2201 for (i--; i >= 0; i--)
2202 gnttab_end_foreign_access(ring->intf->ref[i], 0, 0);
2203 - free_pages((unsigned long)bytes,
2204 - XEN_9PFS_RING_ORDER -
2205 - (PAGE_SHIFT - XEN_PAGE_SHIFT));
2206 + free_pages_exact(bytes, 1UL << (XEN_9PFS_RING_ORDER + XEN_PAGE_SHIFT));
2207 }
2208 gnttab_end_foreign_access(ring->ref, 0, 0);
2209 free_page((unsigned long)ring->intf);
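The 9p conversion is the same exact-allocation change plus a size rewrite that is pure algebra; the old order-of-CPU-pages expression and the new byte count describe the same area:

    /* (1UL << PAGE_SHIFT) << (XEN_9PFS_RING_ORDER - (PAGE_SHIFT - XEN_PAGE_SHIFT))
     *     == 1UL << (XEN_9PFS_RING_ORDER + XEN_PAGE_SHIFT)
     *     == XEN_PAGE_SIZE << XEN_9PFS_RING_ORDER bytes,
     * now allocated with alloc_pages_exact() for the same order-0
     * reason as the pvcalls change earlier in this patch. */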
2210 diff --git a/tools/arch/x86/include/asm/cpufeatures.h b/tools/arch/x86/include/asm/cpufeatures.h
2211 index 0652d3eed9bda..4133c721af6ed 100644
2212 --- a/tools/arch/x86/include/asm/cpufeatures.h
2213 +++ b/tools/arch/x86/include/asm/cpufeatures.h
2214 @@ -202,7 +202,7 @@
2215 #define X86_FEATURE_SME ( 7*32+10) /* AMD Secure Memory Encryption */
2216 #define X86_FEATURE_PTI ( 7*32+11) /* Kernel Page Table Isolation enabled */
2217 #define X86_FEATURE_RETPOLINE ( 7*32+12) /* "" Generic Retpoline mitigation for Spectre variant 2 */
2218 -#define X86_FEATURE_RETPOLINE_AMD ( 7*32+13) /* "" AMD Retpoline mitigation for Spectre variant 2 */
2219 +#define X86_FEATURE_RETPOLINE_LFENCE ( 7*32+13) /* "" Use LFENCEs for Spectre variant 2 */
2220 #define X86_FEATURE_INTEL_PPIN ( 7*32+14) /* Intel Processor Inventory Number */
2221 #define X86_FEATURE_CDP_L2 ( 7*32+15) /* Code and Data Prioritization L2 */
2222 #define X86_FEATURE_MSR_SPEC_CTRL ( 7*32+16) /* "" MSR SPEC_CTRL is implemented */