Magellan Linux

Annotation of /trunk/kernel-magellan/patches-4.1/0102-4.1.3-all-fixes.patch



Revision 2681
Tue Aug 4 09:17:35 2015 UTC by niro
File size: 129134 bytes
-linux-4.1.3
1 niro 2681 diff --git a/Documentation/DMA-API-HOWTO.txt b/Documentation/DMA-API-HOWTO.txt
2     index 0f7afb2bb442..aef8cc5a677b 100644
3     --- a/Documentation/DMA-API-HOWTO.txt
4     +++ b/Documentation/DMA-API-HOWTO.txt
5     @@ -25,13 +25,18 @@ physical addresses. These are the addresses in /proc/iomem. The physical
6     address is not directly useful to a driver; it must use ioremap() to map
7     the space and produce a virtual address.
8    
9     -I/O devices use a third kind of address: a "bus address" or "DMA address".
10     -If a device has registers at an MMIO address, or if it performs DMA to read
11     -or write system memory, the addresses used by the device are bus addresses.
12     -In some systems, bus addresses are identical to CPU physical addresses, but
13     -in general they are not. IOMMUs and host bridges can produce arbitrary
14     +I/O devices use a third kind of address: a "bus address". If a device has
15     +registers at an MMIO address, or if it performs DMA to read or write system
16     +memory, the addresses used by the device are bus addresses. In some
17     +systems, bus addresses are identical to CPU physical addresses, but in
18     +general they are not. IOMMUs and host bridges can produce arbitrary
19     mappings between physical and bus addresses.
20    
21     +From a device's point of view, DMA uses the bus address space, but it may
22     +be restricted to a subset of that space. For example, even if a system
23     +supports 64-bit addresses for main memory and PCI BARs, it may use an IOMMU
24     +so devices only need to use 32-bit DMA addresses.
25     +
26     Here's a picture and some examples:
27    
28     CPU CPU Bus
29     @@ -72,11 +77,11 @@ can use virtual address X to access the buffer, but the device itself
30     cannot because DMA doesn't go through the CPU virtual memory system.
31    
32     In some simple systems, the device can do DMA directly to physical address
33     -Y. But in many others, there is IOMMU hardware that translates bus
34     +Y. But in many others, there is IOMMU hardware that translates DMA
35     addresses to physical addresses, e.g., it translates Z to Y. This is part
36     of the reason for the DMA API: the driver can give a virtual address X to
37     an interface like dma_map_single(), which sets up any required IOMMU
38     -mapping and returns the bus address Z. The driver then tells the device to
39     +mapping and returns the DMA address Z. The driver then tells the device to
40     do DMA to Z, and the IOMMU maps it to the buffer at address Y in system
41     RAM.
42    
43     @@ -98,7 +103,7 @@ First of all, you should make sure
44     #include <linux/dma-mapping.h>
45    
46     is in your driver, which provides the definition of dma_addr_t. This type
47     -can hold any valid DMA or bus address for the platform and should be used
48     +can hold any valid DMA address for the platform and should be used
49     everywhere you hold a DMA address returned from the DMA mapping functions.
50    
51     What memory is DMA'able?
52     @@ -316,7 +321,7 @@ There are two types of DMA mappings:
53     Think of "consistent" as "synchronous" or "coherent".
54    
55     The current default is to return consistent memory in the low 32
56     - bits of the bus space. However, for future compatibility you should
57     + bits of the DMA space. However, for future compatibility you should
58     set the consistent mask even if this default is fine for your
59     driver.
60    
61     @@ -403,7 +408,7 @@ dma_alloc_coherent() returns two values: the virtual address which you
62     can use to access it from the CPU and dma_handle which you pass to the
63     card.
64    
65     -The CPU virtual address and the DMA bus address are both
66     +The CPU virtual address and the DMA address are both
67     guaranteed to be aligned to the smallest PAGE_SIZE order which
68     is greater than or equal to the requested size. This invariant
69     exists (for example) to guarantee that if you allocate a chunk
70     @@ -645,8 +650,8 @@ PLEASE NOTE: The 'nents' argument to the dma_unmap_sg call must be
71     dma_map_sg call.
72    
73     Every dma_map_{single,sg}() call should have its dma_unmap_{single,sg}()
74     -counterpart, because the bus address space is a shared resource and
75     -you could render the machine unusable by consuming all bus addresses.
76     +counterpart, because the DMA address space is a shared resource and
77     +you could render the machine unusable by consuming all DMA addresses.
78    
79     If you need to use the same streaming DMA region multiple times and touch
80     the data in between the DMA transfers, the buffer needs to be synced
81     diff --git a/Documentation/DMA-API.txt b/Documentation/DMA-API.txt
82     index 52088408668a..7eba542eff7c 100644
83     --- a/Documentation/DMA-API.txt
84     +++ b/Documentation/DMA-API.txt
85     @@ -18,10 +18,10 @@ Part I - dma_ API
86     To get the dma_ API, you must #include <linux/dma-mapping.h>. This
87     provides dma_addr_t and the interfaces described below.
88    
89     -A dma_addr_t can hold any valid DMA or bus address for the platform. It
90     -can be given to a device to use as a DMA source or target. A CPU cannot
91     -reference a dma_addr_t directly because there may be translation between
92     -its physical address space and the bus address space.
93     +A dma_addr_t can hold any valid DMA address for the platform. It can be
94     +given to a device to use as a DMA source or target. A CPU cannot reference
95     +a dma_addr_t directly because there may be translation between its physical
96     +address space and the DMA address space.
97    
98     Part Ia - Using large DMA-coherent buffers
99     ------------------------------------------
100     @@ -42,7 +42,7 @@ It returns a pointer to the allocated region (in the processor's virtual
101     address space) or NULL if the allocation failed.
102    
103     It also returns a <dma_handle> which may be cast to an unsigned integer the
104     -same width as the bus and given to the device as the bus address base of
105     +same width as the bus and given to the device as the DMA address base of
106     the region.
107    
108     Note: consistent memory can be expensive on some platforms, and the
109     @@ -193,7 +193,7 @@ dma_map_single(struct device *dev, void *cpu_addr, size_t size,
110     enum dma_data_direction direction)
111    
112     Maps a piece of processor virtual memory so it can be accessed by the
113     -device and returns the bus address of the memory.
114     +device and returns the DMA address of the memory.
115    
116     The direction for both APIs may be converted freely by casting.
117     However the dma_ API uses a strongly typed enumerator for its
118     @@ -212,20 +212,20 @@ contiguous piece of memory. For this reason, memory to be mapped by
119     this API should be obtained from sources which guarantee it to be
120     physically contiguous (like kmalloc).
121    
122     -Further, the bus address of the memory must be within the
123     +Further, the DMA address of the memory must be within the
124     dma_mask of the device (the dma_mask is a bit mask of the
125     -addressable region for the device, i.e., if the bus address of
126     -the memory ANDed with the dma_mask is still equal to the bus
127     +addressable region for the device, i.e., if the DMA address of
128     +the memory ANDed with the dma_mask is still equal to the DMA
129     address, then the device can perform DMA to the memory). To
130     ensure that the memory allocated by kmalloc is within the dma_mask,
131     the driver may specify various platform-dependent flags to restrict
132     -the bus address range of the allocation (e.g., on x86, GFP_DMA
133     -guarantees to be within the first 16MB of available bus addresses,
134     +the DMA address range of the allocation (e.g., on x86, GFP_DMA
135     +guarantees to be within the first 16MB of available DMA addresses,
136     as required by ISA devices).
137    
138     Note also that the above constraints on physical contiguity and
139     dma_mask may not apply if the platform has an IOMMU (a device which
140     -maps an I/O bus address to a physical memory address). However, to be
141     +maps an I/O DMA address to a physical memory address). However, to be
142     portable, device driver writers may *not* assume that such an IOMMU
143     exists.
144    
145     @@ -296,7 +296,7 @@ reduce current DMA mapping usage or delay and try again later).
146     dma_map_sg(struct device *dev, struct scatterlist *sg,
147     int nents, enum dma_data_direction direction)
148    
149     -Returns: the number of bus address segments mapped (this may be shorter
150     +Returns: the number of DMA address segments mapped (this may be shorter
151     than <nents> passed in if some elements of the scatter/gather list are
152     physically or virtually adjacent and an IOMMU maps them with a single
153     entry).
154     @@ -340,7 +340,7 @@ must be the same as those and passed in to the scatter/gather mapping
155     API.
156    
157     Note: <nents> must be the number you passed in, *not* the number of
158     -bus address entries returned.
159     +DMA address entries returned.
160    
161     void
162     dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle, size_t size,
163     @@ -507,7 +507,7 @@ it's asked for coherent memory for this device.
164     phys_addr is the CPU physical address to which the memory is currently
165     assigned (this will be ioremapped so the CPU can access the region).
166    
167     -device_addr is the bus address the device needs to be programmed
168     +device_addr is the DMA address the device needs to be programmed
169     with to actually address this memory (this will be handed out as the
170     dma_addr_t in dma_alloc_coherent()).
171    
172     diff --git a/Documentation/devicetree/bindings/spi/spi_pl022.txt b/Documentation/devicetree/bindings/spi/spi_pl022.txt
173     index 22ed6797216d..4d1673ca8cf8 100644
174     --- a/Documentation/devicetree/bindings/spi/spi_pl022.txt
175     +++ b/Documentation/devicetree/bindings/spi/spi_pl022.txt
176     @@ -4,9 +4,9 @@ Required properties:
177     - compatible : "arm,pl022", "arm,primecell"
178     - reg : Offset and length of the register set for the device
179     - interrupts : Should contain SPI controller interrupt
180     +- num-cs : total number of chipselects
181    
182     Optional properties:
183     -- num-cs : total number of chipselects
184     - cs-gpios : should specify GPIOs used for chipselects.
185     The gpios will be referred to as reg = <index> in the SPI child nodes.
186     If unspecified, a single SPI device without a chip select can be used.
187     diff --git a/Makefile b/Makefile
188     index cef84c061f02..e3cdec4898be 100644
189     --- a/Makefile
190     +++ b/Makefile
191     @@ -1,6 +1,6 @@
192     VERSION = 4
193     PATCHLEVEL = 1
194     -SUBLEVEL = 2
195     +SUBLEVEL = 3
196     EXTRAVERSION =
197     NAME = Series 4800
198    
199     diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h
200     index 9917a45fc430..20b7dc17979e 100644
201     --- a/arch/arc/include/asm/atomic.h
202     +++ b/arch/arc/include/asm/atomic.h
203     @@ -43,6 +43,12 @@ static inline int atomic_##op##_return(int i, atomic_t *v) \
204     { \
205     unsigned int temp; \
206     \
207     + /* \
208     + * Explicit full memory barrier needed before/after as \
209     + * LLOCK/SCOND thmeselves don't provide any such semantics \
210     + */ \
211     + smp_mb(); \
212     + \
213     __asm__ __volatile__( \
214     "1: llock %0, [%1] \n" \
215     " " #asm_op " %0, %0, %2 \n" \
216     @@ -52,6 +58,8 @@ static inline int atomic_##op##_return(int i, atomic_t *v) \
217     : "r"(&v->counter), "ir"(i) \
218     : "cc"); \
219     \
220     + smp_mb(); \
221     + \
222     return temp; \
223     }
224    
225     @@ -105,6 +113,9 @@ static inline int atomic_##op##_return(int i, atomic_t *v) \
226     unsigned long flags; \
227     unsigned long temp; \
228     \
229     + /* \
230     + * spin lock/unlock provides the needed smp_mb() before/after \
231     + */ \
232     atomic_ops_lock(flags); \
233     temp = v->counter; \
234     temp c_op i; \
235     @@ -142,9 +153,19 @@ ATOMIC_OP(and, &=, and)
236     #define __atomic_add_unless(v, a, u) \
237     ({ \
238     int c, old; \
239     + \
240     + /* \
241     + * Explicit full memory barrier needed before/after as \
242     + * LLOCK/SCOND thmeselves don't provide any such semantics \
243     + */ \
244     + smp_mb(); \
245     + \
246     c = atomic_read(v); \
247     while (c != (u) && (old = atomic_cmpxchg((v), c, c + (a))) != c)\
248     c = old; \
249     + \
250     + smp_mb(); \
251     + \
252     c; \
253     })
254    
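
The new comments explain that ARC LLOCK/SCOND carry no ordering on their own, which is why the value-returning atomics now issue smp_mb() on both sides. The sketch below shows the generic guarantee callers rely on; it is an assumed, architecture-independent usage example, not ARC code from the patch.

#include <linux/atomic.h>

/* Illustrative sketch: because atomic_add_return() acts as a full barrier,
 * the store to *data cannot be reordered past the counter update, so a CPU
 * that observes the new counter value also observes the data. */
static int example_publish(atomic_t *counter, int *data)
{
	*data = 42;				/* ordered before ... */
	return atomic_add_return(1, counter);	/* ... this full-barrier RMW */
}
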
255     diff --git a/arch/arc/include/asm/bitops.h b/arch/arc/include/asm/bitops.h
256     index 4051e9525939..624a9d048ca9 100644
257     --- a/arch/arc/include/asm/bitops.h
258     +++ b/arch/arc/include/asm/bitops.h
259     @@ -117,6 +117,12 @@ static inline int test_and_set_bit(unsigned long nr, volatile unsigned long *m)
260     if (__builtin_constant_p(nr))
261     nr &= 0x1f;
262    
263     + /*
264     + * Explicit full memory barrier needed before/after as
265     + * LLOCK/SCOND themselves don't provide any such semantics
266     + */
267     + smp_mb();
268     +
269     __asm__ __volatile__(
270     "1: llock %0, [%2] \n"
271     " bset %1, %0, %3 \n"
272     @@ -126,6 +132,8 @@ static inline int test_and_set_bit(unsigned long nr, volatile unsigned long *m)
273     : "r"(m), "ir"(nr)
274     : "cc");
275    
276     + smp_mb();
277     +
278     return (old & (1 << nr)) != 0;
279     }
280    
281     @@ -139,6 +147,8 @@ test_and_clear_bit(unsigned long nr, volatile unsigned long *m)
282     if (__builtin_constant_p(nr))
283     nr &= 0x1f;
284    
285     + smp_mb();
286     +
287     __asm__ __volatile__(
288     "1: llock %0, [%2] \n"
289     " bclr %1, %0, %3 \n"
290     @@ -148,6 +158,8 @@ test_and_clear_bit(unsigned long nr, volatile unsigned long *m)
291     : "r"(m), "ir"(nr)
292     : "cc");
293    
294     + smp_mb();
295     +
296     return (old & (1 << nr)) != 0;
297     }
298    
299     @@ -161,6 +173,8 @@ test_and_change_bit(unsigned long nr, volatile unsigned long *m)
300     if (__builtin_constant_p(nr))
301     nr &= 0x1f;
302    
303     + smp_mb();
304     +
305     __asm__ __volatile__(
306     "1: llock %0, [%2] \n"
307     " bxor %1, %0, %3 \n"
308     @@ -170,6 +184,8 @@ test_and_change_bit(unsigned long nr, volatile unsigned long *m)
309     : "r"(m), "ir"(nr)
310     : "cc");
311    
312     + smp_mb();
313     +
314     return (old & (1 << nr)) != 0;
315     }
316    
317     @@ -249,6 +265,9 @@ static inline int test_and_set_bit(unsigned long nr, volatile unsigned long *m)
318     if (__builtin_constant_p(nr))
319     nr &= 0x1f;
320    
321     + /*
322     + * spin lock/unlock provide the needed smp_mb() before/after
323     + */
324     bitops_lock(flags);
325    
326     old = *m;
327     diff --git a/arch/arc/include/asm/cmpxchg.h b/arch/arc/include/asm/cmpxchg.h
328     index 03cd6894855d..44fd531f4d7b 100644
329     --- a/arch/arc/include/asm/cmpxchg.h
330     +++ b/arch/arc/include/asm/cmpxchg.h
331     @@ -10,6 +10,8 @@
332     #define __ASM_ARC_CMPXCHG_H
333    
334     #include <linux/types.h>
335     +
336     +#include <asm/barrier.h>
337     #include <asm/smp.h>
338    
339     #ifdef CONFIG_ARC_HAS_LLSC
340     @@ -19,16 +21,25 @@ __cmpxchg(volatile void *ptr, unsigned long expected, unsigned long new)
341     {
342     unsigned long prev;
343    
344     + /*
345     + * Explicit full memory barrier needed before/after as
346     + * LLOCK/SCOND thmeselves don't provide any such semantics
347     + */
348     + smp_mb();
349     +
350     __asm__ __volatile__(
351     "1: llock %0, [%1] \n"
352     " brne %0, %2, 2f \n"
353     " scond %3, [%1] \n"
354     " bnz 1b \n"
355     "2: \n"
356     - : "=&r"(prev)
357     - : "r"(ptr), "ir"(expected),
358     - "r"(new) /* can't be "ir". scond can't take limm for "b" */
359     - : "cc");
360     + : "=&r"(prev) /* Early clobber, to prevent reg reuse */
361     + : "r"(ptr), /* Not "m": llock only supports reg direct addr mode */
362     + "ir"(expected),
363     + "r"(new) /* can't be "ir". scond can't take LIMM for "b" */
364     + : "cc", "memory"); /* so that gcc knows memory is being written here */
365     +
366     + smp_mb();
367    
368     return prev;
369     }
370     @@ -42,6 +53,9 @@ __cmpxchg(volatile void *ptr, unsigned long expected, unsigned long new)
371     int prev;
372     volatile unsigned long *p = ptr;
373    
374     + /*
375     + * spin lock/unlock provide the needed smp_mb() before/after
376     + */
377     atomic_ops_lock(flags);
378     prev = *p;
379     if (prev == expected)
380     @@ -77,12 +91,16 @@ static inline unsigned long __xchg(unsigned long val, volatile void *ptr,
381    
382     switch (size) {
383     case 4:
384     + smp_mb();
385     +
386     __asm__ __volatile__(
387     " ex %0, [%1] \n"
388     : "+r"(val)
389     : "r"(ptr)
390     : "memory");
391    
392     + smp_mb();
393     +
394     return val;
395     }
396     return __xchg_bad_pointer();
397     diff --git a/arch/arc/include/asm/spinlock.h b/arch/arc/include/asm/spinlock.h
398     index b6a8c2dfbe6e..e1651df6a93d 100644
399     --- a/arch/arc/include/asm/spinlock.h
400     +++ b/arch/arc/include/asm/spinlock.h
401     @@ -22,24 +22,46 @@ static inline void arch_spin_lock(arch_spinlock_t *lock)
402     {
403     unsigned int tmp = __ARCH_SPIN_LOCK_LOCKED__;
404    
405     + /*
406     + * This smp_mb() is technically superfluous, we only need the one
407     + * after the lock for providing the ACQUIRE semantics.
408     + * However doing the "right" thing was regressing hackbench
409     + * so keeping this, pending further investigation
410     + */
411     + smp_mb();
412     +
413     __asm__ __volatile__(
414     "1: ex %0, [%1] \n"
415     " breq %0, %2, 1b \n"
416     : "+&r" (tmp)
417     : "r"(&(lock->slock)), "ir"(__ARCH_SPIN_LOCK_LOCKED__)
418     : "memory");
419     +
420     + /*
421     + * ACQUIRE barrier to ensure load/store after taking the lock
422     + * don't "bleed-up" out of the critical section (leak-in is allowed)
423     + * http://www.spinics.net/lists/kernel/msg2010409.html
424     + *
425     + * ARCv2 only has load-load, store-store and all-all barrier
426     + * thus need the full all-all barrier
427     + */
428     + smp_mb();
429     }
430    
431     static inline int arch_spin_trylock(arch_spinlock_t *lock)
432     {
433     unsigned int tmp = __ARCH_SPIN_LOCK_LOCKED__;
434    
435     + smp_mb();
436     +
437     __asm__ __volatile__(
438     "1: ex %0, [%1] \n"
439     : "+r" (tmp)
440     : "r"(&(lock->slock))
441     : "memory");
442    
443     + smp_mb();
444     +
445     return (tmp == __ARCH_SPIN_LOCK_UNLOCKED__);
446     }
447    
448     @@ -47,12 +69,22 @@ static inline void arch_spin_unlock(arch_spinlock_t *lock)
449     {
450     unsigned int tmp = __ARCH_SPIN_LOCK_UNLOCKED__;
451    
452     + /*
453     + * RELEASE barrier: given the instructions avail on ARCv2, full barrier
454     + * is the only option
455     + */
456     + smp_mb();
457     +
458     __asm__ __volatile__(
459     " ex %0, [%1] \n"
460     : "+r" (tmp)
461     : "r"(&(lock->slock))
462     : "memory");
463    
464     + /*
465     + * superfluous, but keeping for now - see pairing version in
466     + * arch_spin_lock above
467     + */
468     smp_mb();
469     }
470    
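
The spinlock hunks spell out the ACQUIRE/RELEASE requirement and note that ARCv2 only offers an all-all barrier, hence the full smp_mb() around the EX-based lock and unlock. Below is a generic sketch of the ordering a critical section expects; it is illustrative and not ARC-specific, and the lock and counter are assumptions.

#include <linux/spinlock.h>

/* Illustrative sketch: the ACQUIRE barrier in arch_spin_lock() keeps the
 * accesses below from "bleeding up" above the lock, and the RELEASE barrier
 * in arch_spin_unlock() keeps them from sinking below the unlock. */
static int example_bump(spinlock_t *lock, int *shared)
{
	int old;

	spin_lock(lock);
	old = *shared;
	*shared = old + 1;
	spin_unlock(lock);

	return old;
}
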
471     diff --git a/arch/arc/kernel/perf_event.c b/arch/arc/kernel/perf_event.c
472     index fd2ec50102f2..57b58f52d825 100644
473     --- a/arch/arc/kernel/perf_event.c
474     +++ b/arch/arc/kernel/perf_event.c
475     @@ -266,7 +266,6 @@ static int arc_pmu_add(struct perf_event *event, int flags)
476    
477     static int arc_pmu_device_probe(struct platform_device *pdev)
478     {
479     - struct arc_pmu *arc_pmu;
480     struct arc_reg_pct_build pct_bcr;
481     struct arc_reg_cc_build cc_bcr;
482     int i, j, ret;
483     diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
484     index 959fe8733560..bddd04d031db 100644
485     --- a/arch/arm64/kernel/entry.S
486     +++ b/arch/arm64/kernel/entry.S
487     @@ -517,6 +517,7 @@ el0_sp_pc:
488     mrs x26, far_el1
489     // enable interrupts before calling the main handler
490     enable_dbg_and_irq
491     + ct_user_exit
492     mov x0, x26
493     mov x1, x25
494     mov x2, sp
495     diff --git a/arch/arm64/kernel/vdso/Makefile b/arch/arm64/kernel/vdso/Makefile
496     index ff3bddea482d..f6fe17d88da5 100644
497     --- a/arch/arm64/kernel/vdso/Makefile
498     +++ b/arch/arm64/kernel/vdso/Makefile
499     @@ -15,6 +15,10 @@ ccflags-y := -shared -fno-common -fno-builtin
500     ccflags-y += -nostdlib -Wl,-soname=linux-vdso.so.1 \
501     $(call cc-ldoption, -Wl$(comma)--hash-style=sysv)
502    
503     +# Workaround for bare-metal (ELF) toolchains that neglect to pass -shared
504     +# down to collect2, resulting in silent corruption of the vDSO image.
505     +ccflags-y += -Wl,-shared
506     +
507     obj-y += vdso.o
508     extra-y += vdso.lds vdso-offsets.h
509     CPPFLAGS_vdso.lds += -P -C -U$(ARCH)
510     diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
511     index baa758d37021..76c1e6cd36fc 100644
512     --- a/arch/arm64/mm/context.c
513     +++ b/arch/arm64/mm/context.c
514     @@ -92,6 +92,14 @@ static void reset_context(void *info)
515     unsigned int cpu = smp_processor_id();
516     struct mm_struct *mm = current->active_mm;
517    
518     + /*
519     + * current->active_mm could be init_mm for the idle thread immediately
520     + * after secondary CPU boot or hotplug. TTBR0_EL1 is already set to
521     + * the reserved value, so no need to reset any context.
522     + */
523     + if (mm == &init_mm)
524     + return;
525     +
526     smp_rmb();
527     asid = cpu_last_asid + cpu;
528    
529     diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
530     index 597831bdddf3..ad87ce826cce 100644
531     --- a/arch/arm64/mm/init.c
532     +++ b/arch/arm64/mm/init.c
533     @@ -262,7 +262,7 @@ static void __init free_unused_memmap(void)
534     * memmap entries are valid from the bank end aligned to
535     * MAX_ORDER_NR_PAGES.
536     */
537     - prev_end = ALIGN(start + __phys_to_pfn(reg->size),
538     + prev_end = ALIGN(__phys_to_pfn(reg->base + reg->size),
539     MAX_ORDER_NR_PAGES);
540     }
541    
542     diff --git a/arch/s390/hypfs/inode.c b/arch/s390/hypfs/inode.c
543     index d3f896a35b98..2eeb0a0f506d 100644
544     --- a/arch/s390/hypfs/inode.c
545     +++ b/arch/s390/hypfs/inode.c
546     @@ -456,8 +456,6 @@ static const struct super_operations hypfs_s_ops = {
547     .show_options = hypfs_show_options,
548     };
549    
550     -static struct kobject *s390_kobj;
551     -
552     static int __init hypfs_init(void)
553     {
554     int rc;
555     @@ -481,18 +479,16 @@ static int __init hypfs_init(void)
556     rc = -ENODATA;
557     goto fail_hypfs_sprp_exit;
558     }
559     - s390_kobj = kobject_create_and_add("s390", hypervisor_kobj);
560     - if (!s390_kobj) {
561     - rc = -ENOMEM;
562     + rc = sysfs_create_mount_point(hypervisor_kobj, "s390");
563     + if (rc)
564     goto fail_hypfs_diag0c_exit;
565     - }
566     rc = register_filesystem(&hypfs_type);
567     if (rc)
568     goto fail_filesystem;
569     return 0;
570    
571     fail_filesystem:
572     - kobject_put(s390_kobj);
573     + sysfs_remove_mount_point(hypervisor_kobj, "s390");
574     fail_hypfs_diag0c_exit:
575     hypfs_diag0c_exit();
576     fail_hypfs_sprp_exit:
577     @@ -510,7 +506,7 @@ fail_dbfs_exit:
578     static void __exit hypfs_exit(void)
579     {
580     unregister_filesystem(&hypfs_type);
581     - kobject_put(s390_kobj);
582     + sysfs_remove_mount_point(hypervisor_kobj, "s390");
583     hypfs_diag0c_exit();
584     hypfs_sprp_exit();
585     hypfs_vm_exit();
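
This hunk (and the efivars change further down) replaces a hand-created kobject with sysfs_create_mount_point()/sysfs_remove_mount_point(). Below is a hedged sketch of that registration pattern; the "example" name and the file_system_type pointer are assumptions, not code from the patch.

#include <linux/fs.h>
#include <linux/sysfs.h>
#include <linux/kobject.h>

/* Illustrative sketch: create a sysfs mount point under a parent kobject,
 * register the filesystem, and remove the mount point again on failure. */
static int example_register_fs(struct kobject *parent,
			       struct file_system_type *fs_type)
{
	int rc;

	rc = sysfs_create_mount_point(parent, "example");
	if (rc)
		return rc;

	rc = register_filesystem(fs_type);
	if (rc)
		sysfs_remove_mount_point(parent, "example");

	return rc;
}
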
586     diff --git a/drivers/acpi/bus.c b/drivers/acpi/bus.c
587     index c412fdb28d34..513e7230e3d0 100644
588     --- a/drivers/acpi/bus.c
589     +++ b/drivers/acpi/bus.c
590     @@ -470,6 +470,16 @@ static int __init acpi_bus_init_irq(void)
591     return 0;
592     }
593    
594     +/**
595     + * acpi_early_init - Initialize ACPICA and populate the ACPI namespace.
596     + *
597     + * The ACPI tables are accessible after this, but the handling of events has not
598     + * been initialized and the global lock is not available yet, so AML should not
599     + * be executed at this point.
600     + *
601     + * Doing this before switching the EFI runtime services to virtual mode allows
602     + * the EfiBootServices memory to be freed slightly earlier on boot.
603     + */
604     void __init acpi_early_init(void)
605     {
606     acpi_status status;
607     @@ -533,26 +543,42 @@ void __init acpi_early_init(void)
608     acpi_gbl_FADT.sci_interrupt = acpi_sci_override_gsi;
609     }
610     #endif
611     + return;
612     +
613     + error0:
614     + disable_acpi();
615     +}
616     +
617     +/**
618     + * acpi_subsystem_init - Finalize the early initialization of ACPI.
619     + *
620     + * Switch over the platform to the ACPI mode (if possible), initialize the
621     + * handling of ACPI events, install the interrupt and global lock handlers.
622     + *
623     + * Doing this too early is generally unsafe, but at the same time it needs to be
624     + * done before all things that really depend on ACPI. The right spot appears to
625     + * be before finalizing the EFI initialization.
626     + */
627     +void __init acpi_subsystem_init(void)
628     +{
629     + acpi_status status;
630     +
631     + if (acpi_disabled)
632     + return;
633    
634     status = acpi_enable_subsystem(~ACPI_NO_ACPI_ENABLE);
635     if (ACPI_FAILURE(status)) {
636     printk(KERN_ERR PREFIX "Unable to enable ACPI\n");
637     - goto error0;
638     + disable_acpi();
639     + } else {
640     + /*
641     + * If the system is using ACPI then we can be reasonably
642     + * confident that any regulators are managed by the firmware
643     + * so tell the regulator core it has everything it needs to
644     + * know.
645     + */
646     + regulator_has_full_constraints();
647     }
648     -
649     - /*
650     - * If the system is using ACPI then we can be reasonably
651     - * confident that any regulators are managed by the firmware
652     - * so tell the regulator core it has everything it needs to
653     - * know.
654     - */
655     - regulator_has_full_constraints();
656     -
657     - return;
658     -
659     - error0:
660     - disable_acpi();
661     - return;
662     }
663    
664     static int __init acpi_bus_init(void)
665     diff --git a/drivers/acpi/device_pm.c b/drivers/acpi/device_pm.c
666     index 735db11a9b00..8217e0bda60f 100644
667     --- a/drivers/acpi/device_pm.c
668     +++ b/drivers/acpi/device_pm.c
669     @@ -953,6 +953,7 @@ EXPORT_SYMBOL_GPL(acpi_subsys_prepare);
670     */
671     void acpi_subsys_complete(struct device *dev)
672     {
673     + pm_generic_complete(dev);
674     /*
675     * If the device had been runtime-suspended before the system went into
676     * the sleep state it is going out of and it has never been resumed till
677     diff --git a/drivers/acpi/osl.c b/drivers/acpi/osl.c
678     index 7ccba395c9dd..5226a8b921ae 100644
679     --- a/drivers/acpi/osl.c
680     +++ b/drivers/acpi/osl.c
681     @@ -175,11 +175,7 @@ static void __init acpi_request_region (struct acpi_generic_address *gas,
682     if (!addr || !length)
683     return;
684    
685     - /* Resources are never freed */
686     - if (gas->space_id == ACPI_ADR_SPACE_SYSTEM_IO)
687     - request_region(addr, length, desc);
688     - else if (gas->space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY)
689     - request_mem_region(addr, length, desc);
690     + acpi_reserve_region(addr, length, gas->space_id, 0, desc);
691     }
692    
693     static void __init acpi_reserve_resources(void)
694     diff --git a/drivers/acpi/resource.c b/drivers/acpi/resource.c
695     index 8244f013f210..fcb7807ea8b7 100644
696     --- a/drivers/acpi/resource.c
697     +++ b/drivers/acpi/resource.c
698     @@ -26,6 +26,7 @@
699     #include <linux/device.h>
700     #include <linux/export.h>
701     #include <linux/ioport.h>
702     +#include <linux/list.h>
703     #include <linux/slab.h>
704    
705     #ifdef CONFIG_X86
706     @@ -621,3 +622,162 @@ int acpi_dev_filter_resource_type(struct acpi_resource *ares,
707     return (type & types) ? 0 : 1;
708     }
709     EXPORT_SYMBOL_GPL(acpi_dev_filter_resource_type);
710     +
711     +struct reserved_region {
712     + struct list_head node;
713     + u64 start;
714     + u64 end;
715     +};
716     +
717     +static LIST_HEAD(reserved_io_regions);
718     +static LIST_HEAD(reserved_mem_regions);
719     +
720     +static int request_range(u64 start, u64 end, u8 space_id, unsigned long flags,
721     + char *desc)
722     +{
723     + unsigned int length = end - start + 1;
724     + struct resource *res;
725     +
726     + res = space_id == ACPI_ADR_SPACE_SYSTEM_IO ?
727     + request_region(start, length, desc) :
728     + request_mem_region(start, length, desc);
729     + if (!res)
730     + return -EIO;
731     +
732     + res->flags &= ~flags;
733     + return 0;
734     +}
735     +
736     +static int add_region_before(u64 start, u64 end, u8 space_id,
737     + unsigned long flags, char *desc,
738     + struct list_head *head)
739     +{
740     + struct reserved_region *reg;
741     + int error;
742     +
743     + reg = kmalloc(sizeof(*reg), GFP_KERNEL);
744     + if (!reg)
745     + return -ENOMEM;
746     +
747     + error = request_range(start, end, space_id, flags, desc);
748     + if (error)
749     + return error;
750     +
751     + reg->start = start;
752     + reg->end = end;
753     + list_add_tail(&reg->node, head);
754     + return 0;
755     +}
756     +
757     +/**
758     + * acpi_reserve_region - Reserve an I/O or memory region as a system resource.
759     + * @start: Starting address of the region.
760     + * @length: Length of the region.
761     + * @space_id: Identifier of address space to reserve the region from.
762     + * @flags: Resource flags to clear for the region after requesting it.
763     + * @desc: Region description (for messages).
764     + *
765     + * Reserve an I/O or memory region as a system resource to prevent others from
766     + * using it. If the new region overlaps with one of the regions (in the given
767     + * address space) already reserved by this routine, only the non-overlapping
768     + * parts of it will be reserved.
769     + *
770     + * Returned is either 0 (success) or a negative error code indicating a resource
771     + * reservation problem. It is the code of the first encountered error, but the
772     + * routine doesn't abort until it has attempted to request all of the parts of
773     + * the new region that don't overlap with other regions reserved previously.
774     + *
775     + * The resources requested by this routine are never released.
776     + */
777     +int acpi_reserve_region(u64 start, unsigned int length, u8 space_id,
778     + unsigned long flags, char *desc)
779     +{
780     + struct list_head *regions;
781     + struct reserved_region *reg;
782     + u64 end = start + length - 1;
783     + int ret = 0, error = 0;
784     +
785     + if (space_id == ACPI_ADR_SPACE_SYSTEM_IO)
786     + regions = &reserved_io_regions;
787     + else if (space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY)
788     + regions = &reserved_mem_regions;
789     + else
790     + return -EINVAL;
791     +
792     + if (list_empty(regions))
793     + return add_region_before(start, end, space_id, flags, desc, regions);
794     +
795     + list_for_each_entry(reg, regions, node)
796     + if (reg->start == end + 1) {
797     + /* The new region can be prepended to this one. */
798     + ret = request_range(start, end, space_id, flags, desc);
799     + if (!ret)
800     + reg->start = start;
801     +
802     + return ret;
803     + } else if (reg->start > end) {
804     + /* No overlap. Add the new region here and get out. */
805     + return add_region_before(start, end, space_id, flags,
806     + desc, &reg->node);
807     + } else if (reg->end == start - 1) {
808     + goto combine;
809     + } else if (reg->end >= start) {
810     + goto overlap;
811     + }
812     +
813     + /* The new region goes after the last existing one. */
814     + return add_region_before(start, end, space_id, flags, desc, regions);
815     +
816     + overlap:
817     + /*
818     + * The new region overlaps an existing one.
819     + *
820     + * The head part of the new region immediately preceding the existing
821     + * overlapping one can be combined with it right away.
822     + */
823     + if (reg->start > start) {
824     + error = request_range(start, reg->start - 1, space_id, flags, desc);
825     + if (error)
826     + ret = error;
827     + else
828     + reg->start = start;
829     + }
830     +
831     + combine:
832     + /*
833     + * The new region is adjacent to an existing one. If it extends beyond
834     + * that region all the way to the next one, it is possible to combine
835     + * all three of them.
836     + */
837     + while (reg->end < end) {
838     + struct reserved_region *next = NULL;
839     + u64 a = reg->end + 1, b = end;
840     +
841     + if (!list_is_last(&reg->node, regions)) {
842     + next = list_next_entry(reg, node);
843     + if (next->start <= end)
844     + b = next->start - 1;
845     + }
846     + error = request_range(a, b, space_id, flags, desc);
847     + if (!error) {
848     + if (next && next->start == b + 1) {
849     + reg->end = next->end;
850     + list_del(&next->node);
851     + kfree(next);
852     + } else {
853     + reg->end = end;
854     + break;
855     + }
856     + } else if (next) {
857     + if (!ret)
858     + ret = error;
859     +
860     + reg = next;
861     + } else {
862     + break;
863     + }
864     + }
865     +
866     + return ret ? ret : error;
867     +}
868     +EXPORT_SYMBOL_GPL(acpi_reserve_region);
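
acpi_reserve_region(), added above, requests the non-overlapping parts of a region and merges adjacent reservations per address space. Below is a hedged usage sketch following the signature introduced by this patch; it assumes the matching declaration the full patch adds to linux/acpi.h, and the address, length and description are made-up values for illustration.

#include <linux/acpi.h>
#include <linux/ioport.h>

/* Illustrative sketch: reserve a 256-byte system-memory window so no other
 * driver claims it; no resource flags are cleared, as in the osl.c caller. */
static int example_reserve_window(void)
{
	return acpi_reserve_region(0xfed40000, 0x100,
				   ACPI_ADR_SPACE_SYSTEM_MEMORY, 0,
				   "example window");
}
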
869     diff --git a/drivers/base/regmap/regmap.c b/drivers/base/regmap/regmap.c
870     index 6273ff072f3e..1c76dcb502cf 100644
871     --- a/drivers/base/regmap/regmap.c
872     +++ b/drivers/base/regmap/regmap.c
873     @@ -945,11 +945,10 @@ EXPORT_SYMBOL_GPL(devm_regmap_init);
874     static void regmap_field_init(struct regmap_field *rm_field,
875     struct regmap *regmap, struct reg_field reg_field)
876     {
877     - int field_bits = reg_field.msb - reg_field.lsb + 1;
878     rm_field->regmap = regmap;
879     rm_field->reg = reg_field.reg;
880     rm_field->shift = reg_field.lsb;
881     - rm_field->mask = ((BIT(field_bits) - 1) << reg_field.lsb);
882     + rm_field->mask = GENMASK(reg_field.msb, reg_field.lsb);
883     rm_field->id_size = reg_field.id_size;
884     rm_field->id_offset = reg_field.id_offset;
885     }
886     @@ -2318,7 +2317,7 @@ int regmap_bulk_read(struct regmap *map, unsigned int reg, void *val,
887     &ival);
888     if (ret != 0)
889     return ret;
890     - memcpy(val + (i * val_bytes), &ival, val_bytes);
891     + map->format.format_val(val + (i * val_bytes), ival, 0);
892     }
893     }
894    
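
The first regmap hunk replaces the hand-rolled field mask with GENMASK(), which appears to avoid the shift overflow that BIT(msb - lsb + 1) would hit for a full 32-bit field; the second routes bulk-read values through format_val() instead of a raw memcpy. A small illustrative check of the mask equivalence (values chosen arbitrarily, not from the patch):

#include <linux/bitops.h>

/* Illustrative sketch: for a field occupying bits 7..4 the two forms agree,
 * but GENMASK(31, 0) stays well defined where BIT(32) - 1 would overflow. */
#define EXAMPLE_OLD_MASK	((BIT(7 - 4 + 1) - 1) << 4)	/* 0x000000f0 */
#define EXAMPLE_NEW_MASK	GENMASK(7, 4)			/* 0x000000f0 */
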
895     diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c
896     index 3061bb8629dc..e14363d12690 100644
897     --- a/drivers/firmware/efi/efi.c
898     +++ b/drivers/firmware/efi/efi.c
899     @@ -65,7 +65,6 @@ static int __init parse_efi_cmdline(char *str)
900     early_param("efi", parse_efi_cmdline);
901    
902     static struct kobject *efi_kobj;
903     -static struct kobject *efivars_kobj;
904    
905     /*
906     * Let's not leave out systab information that snuck into
907     @@ -212,10 +211,9 @@ static int __init efisubsys_init(void)
908     goto err_remove_group;
909    
910     /* and the standard mountpoint for efivarfs */
911     - efivars_kobj = kobject_create_and_add("efivars", efi_kobj);
912     - if (!efivars_kobj) {
913     + error = sysfs_create_mount_point(efi_kobj, "efivars");
914     + if (error) {
915     pr_err("efivars: Subsystem registration failed.\n");
916     - error = -ENOMEM;
917     goto err_remove_group;
918     }
919    
920     diff --git a/drivers/gpio/gpio-crystalcove.c b/drivers/gpio/gpio-crystalcove.c
921     index 91a7ffe83135..ab457fc00e75 100644
922     --- a/drivers/gpio/gpio-crystalcove.c
923     +++ b/drivers/gpio/gpio-crystalcove.c
924     @@ -255,6 +255,7 @@ static struct irq_chip crystalcove_irqchip = {
925     .irq_set_type = crystalcove_irq_type,
926     .irq_bus_lock = crystalcove_bus_lock,
927     .irq_bus_sync_unlock = crystalcove_bus_sync_unlock,
928     + .flags = IRQCHIP_SKIP_SET_WAKE,
929     };
930    
931     static irqreturn_t crystalcove_gpio_irq_handler(int irq, void *data)
932     diff --git a/drivers/gpio/gpio-rcar.c b/drivers/gpio/gpio-rcar.c
933     index fd3977465948..1e14a6c74ed1 100644
934     --- a/drivers/gpio/gpio-rcar.c
935     +++ b/drivers/gpio/gpio-rcar.c
936     @@ -177,8 +177,17 @@ static int gpio_rcar_irq_set_wake(struct irq_data *d, unsigned int on)
937     struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
938     struct gpio_rcar_priv *p = container_of(gc, struct gpio_rcar_priv,
939     gpio_chip);
940     -
941     - irq_set_irq_wake(p->irq_parent, on);
942     + int error;
943     +
944     + if (p->irq_parent) {
945     + error = irq_set_irq_wake(p->irq_parent, on);
946     + if (error) {
947     + dev_dbg(&p->pdev->dev,
948     + "irq %u doesn't support irq_set_wake\n",
949     + p->irq_parent);
950     + p->irq_parent = 0;
951     + }
952     + }
953    
954     if (!p->clk)
955     return 0;
956     diff --git a/drivers/iio/accel/kxcjk-1013.c b/drivers/iio/accel/kxcjk-1013.c
957     index 51da3692d561..5b7a860df524 100644
958     --- a/drivers/iio/accel/kxcjk-1013.c
959     +++ b/drivers/iio/accel/kxcjk-1013.c
960     @@ -1418,6 +1418,7 @@ static const struct dev_pm_ops kxcjk1013_pm_ops = {
961     static const struct acpi_device_id kx_acpi_match[] = {
962     {"KXCJ1013", KXCJK1013},
963     {"KXCJ1008", KXCJ91008},
964     + {"KXCJ9000", KXCJ91008},
965     {"KXTJ1009", KXTJ21009},
966     {"SMO8500", KXCJ91008},
967     { },
968     diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c
969     index 918814cd0f80..75c01b27bd0b 100644
970     --- a/drivers/infiniband/ulp/srp/ib_srp.c
971     +++ b/drivers/infiniband/ulp/srp/ib_srp.c
972     @@ -465,14 +465,13 @@ static struct srp_fr_pool *srp_alloc_fr_pool(struct srp_target_port *target)
973     */
974     static void srp_destroy_qp(struct srp_rdma_ch *ch)
975     {
976     - struct srp_target_port *target = ch->target;
977     static struct ib_qp_attr attr = { .qp_state = IB_QPS_ERR };
978     static struct ib_recv_wr wr = { .wr_id = SRP_LAST_WR_ID };
979     struct ib_recv_wr *bad_wr;
980     int ret;
981    
982     /* Destroying a QP and reusing ch->done is only safe if not connected */
983     - WARN_ON_ONCE(target->connected);
984     + WARN_ON_ONCE(ch->connected);
985    
986     ret = ib_modify_qp(ch->qp, &attr, IB_QP_STATE);
987     WARN_ONCE(ret, "ib_cm_init_qp_attr() returned %d\n", ret);
988     @@ -811,35 +810,19 @@ static bool srp_queue_remove_work(struct srp_target_port *target)
989     return changed;
990     }
991    
992     -static bool srp_change_conn_state(struct srp_target_port *target,
993     - bool connected)
994     -{
995     - bool changed = false;
996     -
997     - spin_lock_irq(&target->lock);
998     - if (target->connected != connected) {
999     - target->connected = connected;
1000     - changed = true;
1001     - }
1002     - spin_unlock_irq(&target->lock);
1003     -
1004     - return changed;
1005     -}
1006     -
1007     static void srp_disconnect_target(struct srp_target_port *target)
1008     {
1009     struct srp_rdma_ch *ch;
1010     int i;
1011    
1012     - if (srp_change_conn_state(target, false)) {
1013     - /* XXX should send SRP_I_LOGOUT request */
1014     + /* XXX should send SRP_I_LOGOUT request */
1015    
1016     - for (i = 0; i < target->ch_count; i++) {
1017     - ch = &target->ch[i];
1018     - if (ch->cm_id && ib_send_cm_dreq(ch->cm_id, NULL, 0)) {
1019     - shost_printk(KERN_DEBUG, target->scsi_host,
1020     - PFX "Sending CM DREQ failed\n");
1021     - }
1022     + for (i = 0; i < target->ch_count; i++) {
1023     + ch = &target->ch[i];
1024     + ch->connected = false;
1025     + if (ch->cm_id && ib_send_cm_dreq(ch->cm_id, NULL, 0)) {
1026     + shost_printk(KERN_DEBUG, target->scsi_host,
1027     + PFX "Sending CM DREQ failed\n");
1028     }
1029     }
1030     }
1031     @@ -986,14 +969,26 @@ static void srp_rport_delete(struct srp_rport *rport)
1032     srp_queue_remove_work(target);
1033     }
1034    
1035     +/**
1036     + * srp_connected_ch() - number of connected channels
1037     + * @target: SRP target port.
1038     + */
1039     +static int srp_connected_ch(struct srp_target_port *target)
1040     +{
1041     + int i, c = 0;
1042     +
1043     + for (i = 0; i < target->ch_count; i++)
1044     + c += target->ch[i].connected;
1045     +
1046     + return c;
1047     +}
1048     +
1049     static int srp_connect_ch(struct srp_rdma_ch *ch, bool multich)
1050     {
1051     struct srp_target_port *target = ch->target;
1052     int ret;
1053    
1054     - WARN_ON_ONCE(!multich && target->connected);
1055     -
1056     - target->qp_in_error = false;
1057     + WARN_ON_ONCE(!multich && srp_connected_ch(target) > 0);
1058    
1059     ret = srp_lookup_path(ch);
1060     if (ret)
1061     @@ -1016,7 +1011,7 @@ static int srp_connect_ch(struct srp_rdma_ch *ch, bool multich)
1062     */
1063     switch (ch->status) {
1064     case 0:
1065     - srp_change_conn_state(target, true);
1066     + ch->connected = true;
1067     return 0;
1068    
1069     case SRP_PORT_REDIRECT:
1070     @@ -1243,13 +1238,13 @@ static int srp_rport_reconnect(struct srp_rport *rport)
1071     for (j = 0; j < target->queue_size; ++j)
1072     list_add(&ch->tx_ring[j]->list, &ch->free_tx);
1073     }
1074     +
1075     + target->qp_in_error = false;
1076     +
1077     for (i = 0; i < target->ch_count; i++) {
1078     ch = &target->ch[i];
1079     - if (ret || !ch->target) {
1080     - if (i > 1)
1081     - ret = 0;
1082     + if (ret || !ch->target)
1083     break;
1084     - }
1085     ret = srp_connect_ch(ch, multich);
1086     multich = true;
1087     }
1088     @@ -1929,7 +1924,7 @@ static void srp_handle_qp_err(u64 wr_id, enum ib_wc_status wc_status,
1089     return;
1090     }
1091    
1092     - if (target->connected && !target->qp_in_error) {
1093     + if (ch->connected && !target->qp_in_error) {
1094     if (wr_id & LOCAL_INV_WR_ID_MASK) {
1095     shost_printk(KERN_ERR, target->scsi_host, PFX
1096     "LOCAL_INV failed with status %d\n",
1097     @@ -2367,7 +2362,7 @@ static int srp_cm_handler(struct ib_cm_id *cm_id, struct ib_cm_event *event)
1098     case IB_CM_DREQ_RECEIVED:
1099     shost_printk(KERN_WARNING, target->scsi_host,
1100     PFX "DREQ received - connection closed\n");
1101     - srp_change_conn_state(target, false);
1102     + ch->connected = false;
1103     if (ib_send_cm_drep(cm_id, NULL, 0))
1104     shost_printk(KERN_ERR, target->scsi_host,
1105     PFX "Sending CM DREP failed\n");
1106     @@ -2423,7 +2418,7 @@ static int srp_send_tsk_mgmt(struct srp_rdma_ch *ch, u64 req_tag,
1107     struct srp_iu *iu;
1108     struct srp_tsk_mgmt *tsk_mgmt;
1109    
1110     - if (!target->connected || target->qp_in_error)
1111     + if (!ch->connected || target->qp_in_error)
1112     return -1;
1113    
1114     init_completion(&ch->tsk_mgmt_done);
1115     @@ -2797,7 +2792,8 @@ static int srp_add_target(struct srp_host *host, struct srp_target_port *target)
1116     scsi_scan_target(&target->scsi_host->shost_gendev,
1117     0, target->scsi_id, SCAN_WILD_CARD, 0);
1118    
1119     - if (!target->connected || target->qp_in_error) {
1120     + if (srp_connected_ch(target) < target->ch_count ||
1121     + target->qp_in_error) {
1122     shost_printk(KERN_INFO, target->scsi_host,
1123     PFX "SCSI scan failed - removing SCSI host\n");
1124     srp_queue_remove_work(target);
1125     @@ -3172,11 +3168,11 @@ static ssize_t srp_create_target(struct device *dev,
1126    
1127     ret = srp_parse_options(buf, target);
1128     if (ret)
1129     - goto err;
1130     + goto out;
1131    
1132     ret = scsi_init_shared_tag_map(target_host, target_host->can_queue);
1133     if (ret)
1134     - goto err;
1135     + goto out;
1136    
1137     target->req_ring_size = target->queue_size - SRP_TSK_MGMT_SQ_SIZE;
1138    
1139     @@ -3187,7 +3183,7 @@ static ssize_t srp_create_target(struct device *dev,
1140     be64_to_cpu(target->ioc_guid),
1141     be64_to_cpu(target->initiator_ext));
1142     ret = -EEXIST;
1143     - goto err;
1144     + goto out;
1145     }
1146    
1147     if (!srp_dev->has_fmr && !srp_dev->has_fr && !target->allow_ext_sg &&
1148     @@ -3208,7 +3204,7 @@ static ssize_t srp_create_target(struct device *dev,
1149     spin_lock_init(&target->lock);
1150     ret = ib_query_gid(ibdev, host->port, 0, &target->sgid);
1151     if (ret)
1152     - goto err;
1153     + goto out;
1154    
1155     ret = -ENOMEM;
1156     target->ch_count = max_t(unsigned, num_online_nodes(),
1157     @@ -3219,7 +3215,7 @@ static ssize_t srp_create_target(struct device *dev,
1158     target->ch = kcalloc(target->ch_count, sizeof(*target->ch),
1159     GFP_KERNEL);
1160     if (!target->ch)
1161     - goto err;
1162     + goto out;
1163    
1164     node_idx = 0;
1165     for_each_online_node(node) {
1166     @@ -3315,9 +3311,6 @@ err_disconnect:
1167     }
1168    
1169     kfree(target->ch);
1170     -
1171     -err:
1172     - scsi_host_put(target_host);
1173     goto out;
1174     }
1175    
1176     diff --git a/drivers/infiniband/ulp/srp/ib_srp.h b/drivers/infiniband/ulp/srp/ib_srp.h
1177     index a611556406ac..e690847a46dd 100644
1178     --- a/drivers/infiniband/ulp/srp/ib_srp.h
1179     +++ b/drivers/infiniband/ulp/srp/ib_srp.h
1180     @@ -170,6 +170,7 @@ struct srp_rdma_ch {
1181    
1182     struct completion tsk_mgmt_done;
1183     u8 tsk_mgmt_status;
1184     + bool connected;
1185     };
1186    
1187     /**
1188     @@ -214,7 +215,6 @@ struct srp_target_port {
1189     __be16 pkey;
1190    
1191     u32 rq_tmo_jiffies;
1192     - bool connected;
1193    
1194     int zero_req_lim;
1195    
1196     diff --git a/drivers/input/touchscreen/pixcir_i2c_ts.c b/drivers/input/touchscreen/pixcir_i2c_ts.c
1197     index 2c2107147319..8f3e243a62bf 100644
1198     --- a/drivers/input/touchscreen/pixcir_i2c_ts.c
1199     +++ b/drivers/input/touchscreen/pixcir_i2c_ts.c
1200     @@ -78,7 +78,7 @@ static void pixcir_ts_parse(struct pixcir_i2c_ts_data *tsdata,
1201     }
1202    
1203     ret = i2c_master_recv(tsdata->client, rdbuf, readsize);
1204     - if (ret != sizeof(rdbuf)) {
1205     + if (ret != readsize) {
1206     dev_err(&tsdata->client->dev,
1207     "%s: i2c_master_recv failed(), ret=%d\n",
1208     __func__, ret);
1209     diff --git a/drivers/leds/led-class.c b/drivers/leds/led-class.c
1210     index 728681debdbe..7fb2a19ac649 100644
1211     --- a/drivers/leds/led-class.c
1212     +++ b/drivers/leds/led-class.c
1213     @@ -187,6 +187,7 @@ void led_classdev_resume(struct led_classdev *led_cdev)
1214     }
1215     EXPORT_SYMBOL_GPL(led_classdev_resume);
1216    
1217     +#ifdef CONFIG_PM_SLEEP
1218     static int led_suspend(struct device *dev)
1219     {
1220     struct led_classdev *led_cdev = dev_get_drvdata(dev);
1221     @@ -206,11 +207,9 @@ static int led_resume(struct device *dev)
1222    
1223     return 0;
1224     }
1225     +#endif
1226    
1227     -static const struct dev_pm_ops leds_class_dev_pm_ops = {
1228     - .suspend = led_suspend,
1229     - .resume = led_resume,
1230     -};
1231     +static SIMPLE_DEV_PM_OPS(leds_class_dev_pm_ops, led_suspend, led_resume);
1232    
1233     static int match_name(struct device *dev, const void *data)
1234     {
1235     diff --git a/drivers/misc/mei/client.c b/drivers/misc/mei/client.c
1236     index 1e99ef6a54a2..b2b9f4382d77 100644
1237     --- a/drivers/misc/mei/client.c
1238     +++ b/drivers/misc/mei/client.c
1239     @@ -699,7 +699,7 @@ void mei_host_client_init(struct work_struct *work)
1240     bool mei_hbuf_acquire(struct mei_device *dev)
1241     {
1242     if (mei_pg_state(dev) == MEI_PG_ON ||
1243     - dev->pg_event == MEI_PG_EVENT_WAIT) {
1244     + mei_pg_in_transition(dev)) {
1245     dev_dbg(dev->dev, "device is in pg\n");
1246     return false;
1247     }
1248     diff --git a/drivers/misc/mei/hw-me.c b/drivers/misc/mei/hw-me.c
1249     index 6fb75e62a764..43d7101ff993 100644
1250     --- a/drivers/misc/mei/hw-me.c
1251     +++ b/drivers/misc/mei/hw-me.c
1252     @@ -663,11 +663,27 @@ int mei_me_pg_exit_sync(struct mei_device *dev)
1253     mutex_lock(&dev->device_lock);
1254    
1255     reply:
1256     - if (dev->pg_event == MEI_PG_EVENT_RECEIVED)
1257     - ret = mei_hbm_pg(dev, MEI_PG_ISOLATION_EXIT_RES_CMD);
1258     + if (dev->pg_event != MEI_PG_EVENT_RECEIVED) {
1259     + ret = -ETIME;
1260     + goto out;
1261     + }
1262     +
1263     + dev->pg_event = MEI_PG_EVENT_INTR_WAIT;
1264     + ret = mei_hbm_pg(dev, MEI_PG_ISOLATION_EXIT_RES_CMD);
1265     + if (ret)
1266     + return ret;
1267     +
1268     + mutex_unlock(&dev->device_lock);
1269     + wait_event_timeout(dev->wait_pg,
1270     + dev->pg_event == MEI_PG_EVENT_INTR_RECEIVED, timeout);
1271     + mutex_lock(&dev->device_lock);
1272     +
1273     + if (dev->pg_event == MEI_PG_EVENT_INTR_RECEIVED)
1274     + ret = 0;
1275     else
1276     ret = -ETIME;
1277    
1278     +out:
1279     dev->pg_event = MEI_PG_EVENT_IDLE;
1280     hw->pg_state = MEI_PG_OFF;
1281    
1282     @@ -675,6 +691,19 @@ reply:
1283     }
1284    
1285     /**
1286     + * mei_me_pg_in_transition - is device now in pg transition
1287     + *
1288     + * @dev: the device structure
1289     + *
1290     + * Return: true if in pg transition, false otherwise
1291     + */
1292     +static bool mei_me_pg_in_transition(struct mei_device *dev)
1293     +{
1294     + return dev->pg_event >= MEI_PG_EVENT_WAIT &&
1295     + dev->pg_event <= MEI_PG_EVENT_INTR_WAIT;
1296     +}
1297     +
1298     +/**
1299     * mei_me_pg_is_enabled - detect if PG is supported by HW
1300     *
1301     * @dev: the device structure
1302     @@ -705,6 +734,24 @@ notsupported:
1303     }
1304    
1305     /**
1306     + * mei_me_pg_intr - perform pg processing in interrupt thread handler
1307     + *
1308     + * @dev: the device structure
1309     + */
1310     +static void mei_me_pg_intr(struct mei_device *dev)
1311     +{
1312     + struct mei_me_hw *hw = to_me_hw(dev);
1313     +
1314     + if (dev->pg_event != MEI_PG_EVENT_INTR_WAIT)
1315     + return;
1316     +
1317     + dev->pg_event = MEI_PG_EVENT_INTR_RECEIVED;
1318     + hw->pg_state = MEI_PG_OFF;
1319     + if (waitqueue_active(&dev->wait_pg))
1320     + wake_up(&dev->wait_pg);
1321     +}
1322     +
1323     +/**
1324     * mei_me_irq_quick_handler - The ISR of the MEI device
1325     *
1326     * @irq: The irq number
1327     @@ -761,6 +808,8 @@ irqreturn_t mei_me_irq_thread_handler(int irq, void *dev_id)
1328     goto end;
1329     }
1330    
1331     + mei_me_pg_intr(dev);
1332     +
1333     /* check if we need to start the dev */
1334     if (!mei_host_is_ready(dev)) {
1335     if (mei_hw_is_ready(dev)) {
1336     @@ -797,9 +846,10 @@ irqreturn_t mei_me_irq_thread_handler(int irq, void *dev_id)
1337     /*
1338     * During PG handshake only allowed write is the replay to the
1339     * PG exit message, so block calling write function
1340     - * if the pg state is not idle
1341     + * if the pg event is in PG handshake
1342     */
1343     - if (dev->pg_event == MEI_PG_EVENT_IDLE) {
1344     + if (dev->pg_event != MEI_PG_EVENT_WAIT &&
1345     + dev->pg_event != MEI_PG_EVENT_RECEIVED) {
1346     rets = mei_irq_write_handler(dev, &complete_list);
1347     dev->hbuf_is_ready = mei_hbuf_is_ready(dev);
1348     }
1349     @@ -824,6 +874,7 @@ static const struct mei_hw_ops mei_me_hw_ops = {
1350     .hw_config = mei_me_hw_config,
1351     .hw_start = mei_me_hw_start,
1352    
1353     + .pg_in_transition = mei_me_pg_in_transition,
1354     .pg_is_enabled = mei_me_pg_is_enabled,
1355    
1356     .intr_clear = mei_me_intr_clear,
1357     diff --git a/drivers/misc/mei/hw-txe.c b/drivers/misc/mei/hw-txe.c
1358     index 7abafe7d120d..bae680c648ff 100644
1359     --- a/drivers/misc/mei/hw-txe.c
1360     +++ b/drivers/misc/mei/hw-txe.c
1361     @@ -16,6 +16,7 @@
1362    
1363     #include <linux/pci.h>
1364     #include <linux/jiffies.h>
1365     +#include <linux/ktime.h>
1366     #include <linux/delay.h>
1367     #include <linux/kthread.h>
1368     #include <linux/irqreturn.h>
1369     @@ -218,26 +219,25 @@ static u32 mei_txe_aliveness_get(struct mei_device *dev)
1370     *
1371     * Polls for HICR_HOST_ALIVENESS_RESP.ALIVENESS_RESP to be set
1372     *
1373     - * Return: > 0 if the expected value was received, -ETIME otherwise
1374     + * Return: 0 if the expected value was received, -ETIME otherwise
1375     */
1376     static int mei_txe_aliveness_poll(struct mei_device *dev, u32 expected)
1377     {
1378     struct mei_txe_hw *hw = to_txe_hw(dev);
1379     - int t = 0;
1380     + ktime_t stop, start;
1381    
1382     + start = ktime_get();
1383     + stop = ktime_add(start, ms_to_ktime(SEC_ALIVENESS_WAIT_TIMEOUT));
1384     do {
1385     hw->aliveness = mei_txe_aliveness_get(dev);
1386     if (hw->aliveness == expected) {
1387     dev->pg_event = MEI_PG_EVENT_IDLE;
1388     - dev_dbg(dev->dev,
1389     - "aliveness settled after %d msecs\n", t);
1390     - return t;
1391     + dev_dbg(dev->dev, "aliveness settled after %lld usecs\n",
1392     + ktime_to_us(ktime_sub(ktime_get(), start)));
1393     + return 0;
1394     }
1395     - mutex_unlock(&dev->device_lock);
1396     - msleep(MSEC_PER_SEC / 5);
1397     - mutex_lock(&dev->device_lock);
1398     - t += MSEC_PER_SEC / 5;
1399     - } while (t < SEC_ALIVENESS_WAIT_TIMEOUT);
1400     + usleep_range(20, 50);
1401     + } while (ktime_before(ktime_get(), stop));
1402    
1403     dev->pg_event = MEI_PG_EVENT_IDLE;
1404     dev_err(dev->dev, "aliveness timed out\n");
1405     @@ -302,6 +302,18 @@ int mei_txe_aliveness_set_sync(struct mei_device *dev, u32 req)
1406     }
1407    
1408     /**
1409     + * mei_txe_pg_in_transition - is device now in pg transition
1410     + *
1411     + * @dev: the device structure
1412     + *
1413     + * Return: true if in pg transition, false otherwise
1414     + */
1415     +static bool mei_txe_pg_in_transition(struct mei_device *dev)
1416     +{
1417     + return dev->pg_event == MEI_PG_EVENT_WAIT;
1418     +}
1419     +
1420     +/**
1421     * mei_txe_pg_is_enabled - detect if PG is supported by HW
1422     *
1423     * @dev: the device structure
1424     @@ -1138,6 +1150,7 @@ static const struct mei_hw_ops mei_txe_hw_ops = {
1425     .hw_config = mei_txe_hw_config,
1426     .hw_start = mei_txe_hw_start,
1427    
1428     + .pg_in_transition = mei_txe_pg_in_transition,
1429     .pg_is_enabled = mei_txe_pg_is_enabled,
1430    
1431     .intr_clear = mei_txe_intr_clear,
1432     diff --git a/drivers/misc/mei/mei_dev.h b/drivers/misc/mei/mei_dev.h
1433     index f066ecd71939..f84c39ee28a8 100644
1434     --- a/drivers/misc/mei/mei_dev.h
1435     +++ b/drivers/misc/mei/mei_dev.h
1436     @@ -271,6 +271,7 @@ struct mei_cl {
1437    
1438     * @fw_status : get fw status registers
1439     * @pg_state : power gating state of the device
1440     + * @pg_in_transition : is device now in pg transition
1441     * @pg_is_enabled : is power gating enabled
1442    
1443     * @intr_clear : clear pending interrupts
1444     @@ -300,6 +301,7 @@ struct mei_hw_ops {
1445    
1446     int (*fw_status)(struct mei_device *dev, struct mei_fw_status *fw_sts);
1447     enum mei_pg_state (*pg_state)(struct mei_device *dev);
1448     + bool (*pg_in_transition)(struct mei_device *dev);
1449     bool (*pg_is_enabled)(struct mei_device *dev);
1450    
1451     void (*intr_clear)(struct mei_device *dev);
1452     @@ -398,11 +400,15 @@ struct mei_cl_device {
1453     * @MEI_PG_EVENT_IDLE: the driver is not in power gating transition
1454     * @MEI_PG_EVENT_WAIT: the driver is waiting for a pg event to complete
1455     * @MEI_PG_EVENT_RECEIVED: the driver received pg event
1456     + * @MEI_PG_EVENT_INTR_WAIT: the driver is waiting for a pg event interrupt
1457     + * @MEI_PG_EVENT_INTR_RECEIVED: the driver received pg event interrupt
1458     */
1459     enum mei_pg_event {
1460     MEI_PG_EVENT_IDLE,
1461     MEI_PG_EVENT_WAIT,
1462     MEI_PG_EVENT_RECEIVED,
1463     + MEI_PG_EVENT_INTR_WAIT,
1464     + MEI_PG_EVENT_INTR_RECEIVED,
1465     };
1466    
1467     /**
1468     @@ -717,6 +723,11 @@ static inline enum mei_pg_state mei_pg_state(struct mei_device *dev)
1469     return dev->ops->pg_state(dev);
1470     }
1471    
1472     +static inline bool mei_pg_in_transition(struct mei_device *dev)
1473     +{
1474     + return dev->ops->pg_in_transition(dev);
1475     +}
1476     +
1477     static inline bool mei_pg_is_enabled(struct mei_device *dev)
1478     {
1479     return dev->ops->pg_is_enabled(dev);
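The mei_dev.h hunk grows the mei_hw_ops table by one callback and adds a static inline wrapper, so common code can ask pg_in_transition() without caring which hardware backend it is talking to. A stripped-down sketch of that ops-table-plus-wrapper pattern (hypothetical names, not the driver's actual types):

#include <stdbool.h>
#include <stdio.h>

struct hw_ops {
    bool (*pg_in_transition)(void *hw);     /* one entry per hardware variant */
};

struct device {
    const struct hw_ops *ops;
    void *hw;
};

static inline bool dev_pg_in_transition(struct device *dev)
{
    return dev->ops->pg_in_transition(dev->hw);   /* thin, inlineable accessor */
}

static bool txe_pg_in_transition(void *hw)
{
    (void)hw;
    return false;                           /* a TXE-like backend would test its pg_event here */
}

static const struct hw_ops txe_ops = { .pg_in_transition = txe_pg_in_transition };

int main(void)
{
    struct device dev = { .ops = &txe_ops, .hw = NULL };
    printf("in transition: %d\n", dev_pg_in_transition(&dev));
    return 0;
}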
1480     diff --git a/drivers/mtd/maps/dc21285.c b/drivers/mtd/maps/dc21285.c
1481     index f8a7dd14cee0..70a3db3ab856 100644
1482     --- a/drivers/mtd/maps/dc21285.c
1483     +++ b/drivers/mtd/maps/dc21285.c
1484     @@ -38,9 +38,9 @@ static void nw_en_write(void)
1485     * we want to write a bit pattern XXX1 to Xilinx to enable
1486     * the write gate, which will be open for about the next 2ms.
1487     */
1488     - spin_lock_irqsave(&nw_gpio_lock, flags);
1489     + raw_spin_lock_irqsave(&nw_gpio_lock, flags);
1490     nw_cpld_modify(CPLD_FLASH_WR_ENABLE, CPLD_FLASH_WR_ENABLE);
1491     - spin_unlock_irqrestore(&nw_gpio_lock, flags);
1492     + raw_spin_unlock_irqrestore(&nw_gpio_lock, flags);
1493    
1494     /*
1495     * let the ISA bus to catch on...
1496     diff --git a/drivers/mtd/mtd_blkdevs.c b/drivers/mtd/mtd_blkdevs.c
1497     index 2b0c52870999..df7c6c70757a 100644
1498     --- a/drivers/mtd/mtd_blkdevs.c
1499     +++ b/drivers/mtd/mtd_blkdevs.c
1500     @@ -197,6 +197,7 @@ static int blktrans_open(struct block_device *bdev, fmode_t mode)
1501     return -ERESTARTSYS; /* FIXME: busy loop! -arnd*/
1502    
1503     mutex_lock(&dev->lock);
1504     + mutex_lock(&mtd_table_mutex);
1505    
1506     if (dev->open)
1507     goto unlock;
1508     @@ -220,6 +221,7 @@ static int blktrans_open(struct block_device *bdev, fmode_t mode)
1509    
1510     unlock:
1511     dev->open++;
1512     + mutex_unlock(&mtd_table_mutex);
1513     mutex_unlock(&dev->lock);
1514     blktrans_dev_put(dev);
1515     return ret;
1516     @@ -230,6 +232,7 @@ error_release:
1517     error_put:
1518     module_put(dev->tr->owner);
1519     kref_put(&dev->ref, blktrans_dev_release);
1520     + mutex_unlock(&mtd_table_mutex);
1521     mutex_unlock(&dev->lock);
1522     blktrans_dev_put(dev);
1523     return ret;
1524     @@ -243,6 +246,7 @@ static void blktrans_release(struct gendisk *disk, fmode_t mode)
1525     return;
1526    
1527     mutex_lock(&dev->lock);
1528     + mutex_lock(&mtd_table_mutex);
1529    
1530     if (--dev->open)
1531     goto unlock;
1532     @@ -256,6 +260,7 @@ static void blktrans_release(struct gendisk *disk, fmode_t mode)
1533     __put_mtd_device(dev->mtd);
1534     }
1535     unlock:
1536     + mutex_unlock(&mtd_table_mutex);
1537     mutex_unlock(&dev->lock);
1538     blktrans_dev_put(dev);
1539     }
1540     diff --git a/drivers/of/address.c b/drivers/of/address.c
1541     index 78a7dcbec7d8..6906a3f61bd8 100644
1542     --- a/drivers/of/address.c
1543     +++ b/drivers/of/address.c
1544     @@ -765,7 +765,7 @@ unsigned long __weak pci_address_to_pio(phys_addr_t address)
1545     spin_lock(&io_range_lock);
1546     list_for_each_entry(res, &io_range_list, list) {
1547     if (address >= res->start && address < res->start + res->size) {
1548     - addr = res->start - address + offset;
1549     + addr = address - res->start + offset;
1550     break;
1551     }
1552     offset += res->size;
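The one-line drivers/of/address.c change flips the subtraction: the PIO cookie for a CPU address inside a registered range is (address - res->start) plus the accumulated size of the ranges before it, not (res->start - address). A tiny sketch of the corrected arithmetic (example range values only):

#include <stdio.h>

struct io_range { unsigned long start, size; };

/* Walk the ranges the way pci_address_to_pio() does: accumulate the sizes of
 * earlier ranges, then add the position of the address within its own range. */
static long address_to_pio(const struct io_range *r, int n, unsigned long address)
{
    unsigned long offset = 0;
    for (int i = 0; i < n; i++) {
        if (address >= r[i].start && address < r[i].start + r[i].size)
            return address - r[i].start + offset;   /* the fixed expression */
        offset += r[i].size;
    }
    return -1;
}

int main(void)
{
    const struct io_range ranges[] = { { 0xf1000000, 0x10000 }, { 0xf2000000, 0x10000 } };
    /* 0xf2000004 sits 4 bytes into the second range, after 0x10000 bytes of the first. */
    printf("pio = %#lx\n", (unsigned long)address_to_pio(ranges, 2, 0xf2000004));  /* 0x10004 */
    return 0;
}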
1553     diff --git a/drivers/pci/Kconfig b/drivers/pci/Kconfig
1554     index 7a8f1c5e65af..73de4efcbe6e 100644
1555     --- a/drivers/pci/Kconfig
1556     +++ b/drivers/pci/Kconfig
1557     @@ -1,6 +1,10 @@
1558     #
1559     # PCI configuration
1560     #
1561     +config PCI_BUS_ADDR_T_64BIT
1562     + def_bool y if (ARCH_DMA_ADDR_T_64BIT || 64BIT)
1563     + depends on PCI
1564     +
1565     config PCI_MSI
1566     bool "Message Signaled Interrupts (MSI and MSI-X)"
1567     depends on PCI
1568     diff --git a/drivers/pci/bus.c b/drivers/pci/bus.c
1569     index 90fa3a78fb7c..6fbd3f2b5992 100644
1570     --- a/drivers/pci/bus.c
1571     +++ b/drivers/pci/bus.c
1572     @@ -92,11 +92,11 @@ void pci_bus_remove_resources(struct pci_bus *bus)
1573     }
1574    
1575     static struct pci_bus_region pci_32_bit = {0, 0xffffffffULL};
1576     -#ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
1577     +#ifdef CONFIG_PCI_BUS_ADDR_T_64BIT
1578     static struct pci_bus_region pci_64_bit = {0,
1579     - (dma_addr_t) 0xffffffffffffffffULL};
1580     -static struct pci_bus_region pci_high = {(dma_addr_t) 0x100000000ULL,
1581     - (dma_addr_t) 0xffffffffffffffffULL};
1582     + (pci_bus_addr_t) 0xffffffffffffffffULL};
1583     +static struct pci_bus_region pci_high = {(pci_bus_addr_t) 0x100000000ULL,
1584     + (pci_bus_addr_t) 0xffffffffffffffffULL};
1585     #endif
1586    
1587     /*
1588     @@ -200,7 +200,7 @@ int pci_bus_alloc_resource(struct pci_bus *bus, struct resource *res,
1589     resource_size_t),
1590     void *alignf_data)
1591     {
1592     -#ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
1593     +#ifdef CONFIG_PCI_BUS_ADDR_T_64BIT
1594     int rc;
1595    
1596     if (res->flags & IORESOURCE_MEM_64) {
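The Kconfig and bus.c hunks introduce PCI_BUS_ADDR_T_64BIT so the width of PCI bus addresses no longer piggybacks on dma_addr_t; the 64-bit allocation windows are only compiled in when the new symbol is set. A compile-time sketch of picking an address type and its window bounds from a config macro (userspace stand-in; CONFIG_BUS_ADDR_64BIT is a hypothetical toggle):

#include <inttypes.h>
#include <stdio.h>

#define CONFIG_BUS_ADDR_64BIT 1             /* flip to 0 to model a 32-bit-only build */

#if CONFIG_BUS_ADDR_64BIT
typedef uint64_t bus_addr_t;
#else
typedef uint32_t bus_addr_t;
#endif

struct bus_region { bus_addr_t start, end; };

static const struct bus_region region_32bit = { 0, 0xffffffffULL };
#if CONFIG_BUS_ADDR_64BIT
static const struct bus_region region_64bit = { 0, (bus_addr_t)0xffffffffffffffffULL };
static const struct bus_region region_high  = { (bus_addr_t)0x100000000ULL,
                                                (bus_addr_t)0xffffffffffffffffULL };
#endif

int main(void)
{
    printf("bus_addr_t is %zu bytes\n", sizeof(bus_addr_t));
    printf("32-bit window end: %#" PRIx64 "\n", (uint64_t)region_32bit.end);
#if CONFIG_BUS_ADDR_64BIT
    printf("64-bit window end: %#" PRIx64 "\n", (uint64_t)region_64bit.end);
    printf("high window start: %#" PRIx64 "\n", (uint64_t)region_high.start);
#endif
    return 0;
}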
1597     diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c
1598     index 0ebf754fc177..6d6868811e56 100644
1599     --- a/drivers/pci/hotplug/pciehp_hpc.c
1600     +++ b/drivers/pci/hotplug/pciehp_hpc.c
1601     @@ -176,20 +176,17 @@ static void pcie_wait_cmd(struct controller *ctrl)
1602     jiffies_to_msecs(jiffies - ctrl->cmd_started));
1603     }
1604    
1605     -/**
1606     - * pcie_write_cmd - Issue controller command
1607     - * @ctrl: controller to which the command is issued
1608     - * @cmd: command value written to slot control register
1609     - * @mask: bitmask of slot control register to be modified
1610     - */
1611     -static void pcie_write_cmd(struct controller *ctrl, u16 cmd, u16 mask)
1612     +static void pcie_do_write_cmd(struct controller *ctrl, u16 cmd,
1613     + u16 mask, bool wait)
1614     {
1615     struct pci_dev *pdev = ctrl_dev(ctrl);
1616     u16 slot_ctrl;
1617    
1618     mutex_lock(&ctrl->ctrl_lock);
1619    
1620     - /* Wait for any previous command that might still be in progress */
1621     + /*
1622     + * Always wait for any previous command that might still be in progress
1623     + */
1624     pcie_wait_cmd(ctrl);
1625    
1626     pcie_capability_read_word(pdev, PCI_EXP_SLTCTL, &slot_ctrl);
1627     @@ -201,9 +198,33 @@ static void pcie_write_cmd(struct controller *ctrl, u16 cmd, u16 mask)
1628     ctrl->cmd_started = jiffies;
1629     ctrl->slot_ctrl = slot_ctrl;
1630    
1631     + /*
1632     + * Optionally wait for the hardware to be ready for a new command,
1633     + * indicating completion of the above issued command.
1634     + */
1635     + if (wait)
1636     + pcie_wait_cmd(ctrl);
1637     +
1638     mutex_unlock(&ctrl->ctrl_lock);
1639     }
1640    
1641     +/**
1642     + * pcie_write_cmd - Issue controller command
1643     + * @ctrl: controller to which the command is issued
1644     + * @cmd: command value written to slot control register
1645     + * @mask: bitmask of slot control register to be modified
1646     + */
1647     +static void pcie_write_cmd(struct controller *ctrl, u16 cmd, u16 mask)
1648     +{
1649     + pcie_do_write_cmd(ctrl, cmd, mask, true);
1650     +}
1651     +
1652     +/* Same as above without waiting for the hardware to latch */
1653     +static void pcie_write_cmd_nowait(struct controller *ctrl, u16 cmd, u16 mask)
1654     +{
1655     + pcie_do_write_cmd(ctrl, cmd, mask, false);
1656     +}
1657     +
1658     bool pciehp_check_link_active(struct controller *ctrl)
1659     {
1660     struct pci_dev *pdev = ctrl_dev(ctrl);
1661     @@ -422,7 +443,7 @@ void pciehp_set_attention_status(struct slot *slot, u8 value)
1662     default:
1663     return;
1664     }
1665     - pcie_write_cmd(ctrl, slot_cmd, PCI_EXP_SLTCTL_AIC);
1666     + pcie_write_cmd_nowait(ctrl, slot_cmd, PCI_EXP_SLTCTL_AIC);
1667     ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,
1668     pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, slot_cmd);
1669     }
1670     @@ -434,7 +455,8 @@ void pciehp_green_led_on(struct slot *slot)
1671     if (!PWR_LED(ctrl))
1672     return;
1673    
1674     - pcie_write_cmd(ctrl, PCI_EXP_SLTCTL_PWR_IND_ON, PCI_EXP_SLTCTL_PIC);
1675     + pcie_write_cmd_nowait(ctrl, PCI_EXP_SLTCTL_PWR_IND_ON,
1676     + PCI_EXP_SLTCTL_PIC);
1677     ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,
1678     pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL,
1679     PCI_EXP_SLTCTL_PWR_IND_ON);
1680     @@ -447,7 +469,8 @@ void pciehp_green_led_off(struct slot *slot)
1681     if (!PWR_LED(ctrl))
1682     return;
1683    
1684     - pcie_write_cmd(ctrl, PCI_EXP_SLTCTL_PWR_IND_OFF, PCI_EXP_SLTCTL_PIC);
1685     + pcie_write_cmd_nowait(ctrl, PCI_EXP_SLTCTL_PWR_IND_OFF,
1686     + PCI_EXP_SLTCTL_PIC);
1687     ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,
1688     pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL,
1689     PCI_EXP_SLTCTL_PWR_IND_OFF);
1690     @@ -460,7 +483,8 @@ void pciehp_green_led_blink(struct slot *slot)
1691     if (!PWR_LED(ctrl))
1692     return;
1693    
1694     - pcie_write_cmd(ctrl, PCI_EXP_SLTCTL_PWR_IND_BLINK, PCI_EXP_SLTCTL_PIC);
1695     + pcie_write_cmd_nowait(ctrl, PCI_EXP_SLTCTL_PWR_IND_BLINK,
1696     + PCI_EXP_SLTCTL_PIC);
1697     ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,
1698     pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL,
1699     PCI_EXP_SLTCTL_PWR_IND_BLINK);
1700     @@ -613,7 +637,7 @@ void pcie_enable_notification(struct controller *ctrl)
1701     PCI_EXP_SLTCTL_HPIE | PCI_EXP_SLTCTL_CCIE |
1702     PCI_EXP_SLTCTL_DLLSCE);
1703    
1704     - pcie_write_cmd(ctrl, cmd, mask);
1705     + pcie_write_cmd_nowait(ctrl, cmd, mask);
1706     ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,
1707     pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, cmd);
1708     }
1709     @@ -664,7 +688,7 @@ int pciehp_reset_slot(struct slot *slot, int probe)
1710     pci_reset_bridge_secondary_bus(ctrl->pcie->port);
1711    
1712     pcie_capability_write_word(pdev, PCI_EXP_SLTSTA, stat_mask);
1713     - pcie_write_cmd(ctrl, ctrl_mask, ctrl_mask);
1714     + pcie_write_cmd_nowait(ctrl, ctrl_mask, ctrl_mask);
1715     ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,
1716     pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, ctrl_mask);
1717     if (pciehp_poll_mode)
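The pciehp_hpc.c refactor funnels everything through one pcie_do_write_cmd(ctrl, cmd, mask, wait) worker and keeps two thin wrappers, so LED updates and notification enables can skip the trailing wait for command completion while other callers still get it. The thin-wrapper shape in miniature (hypothetical do_write_cmd(), not the hotplug code itself):

#include <stdbool.h>
#include <stdio.h>

static void do_write_cmd(unsigned cmd, unsigned mask, bool wait)
{
    printf("issue cmd=%#x mask=%#x\n", cmd, mask);
    if (wait)
        printf("  ...waiting for the controller to latch it\n");
}

static void write_cmd(unsigned cmd, unsigned mask)          /* waits, like pcie_write_cmd() */
{
    do_write_cmd(cmd, mask, true);
}

static void write_cmd_nowait(unsigned cmd, unsigned mask)   /* fire and forget */
{
    do_write_cmd(cmd, mask, false);
}

int main(void)
{
    write_cmd(0x1, 0xf);
    write_cmd_nowait(0x2, 0xf);
    return 0;
}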
1718     diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
1719     index acc4b6ef78c4..c44393f26fd3 100644
1720     --- a/drivers/pci/pci.c
1721     +++ b/drivers/pci/pci.c
1722     @@ -4324,6 +4324,17 @@ bool pci_device_is_present(struct pci_dev *pdev)
1723     }
1724     EXPORT_SYMBOL_GPL(pci_device_is_present);
1725    
1726     +void pci_ignore_hotplug(struct pci_dev *dev)
1727     +{
1728     + struct pci_dev *bridge = dev->bus->self;
1729     +
1730     + dev->ignore_hotplug = 1;
1731     + /* Propagate the "ignore hotplug" setting to the parent bridge. */
1732     + if (bridge)
1733     + bridge->ignore_hotplug = 1;
1734     +}
1735     +EXPORT_SYMBOL_GPL(pci_ignore_hotplug);
1736     +
1737     #define RESOURCE_ALIGNMENT_PARAM_SIZE COMMAND_LINE_SIZE
1738     static char resource_alignment_param[RESOURCE_ALIGNMENT_PARAM_SIZE] = {0};
1739     static DEFINE_SPINLOCK(resource_alignment_lock);
1740     diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
1741     index 6675a7a1b9fc..c91185721345 100644
1742     --- a/drivers/pci/probe.c
1743     +++ b/drivers/pci/probe.c
1744     @@ -254,8 +254,8 @@ int __pci_read_base(struct pci_dev *dev, enum pci_bar_type type,
1745     }
1746    
1747     if (res->flags & IORESOURCE_MEM_64) {
1748     - if ((sizeof(dma_addr_t) < 8 || sizeof(resource_size_t) < 8) &&
1749     - sz64 > 0x100000000ULL) {
1750     + if ((sizeof(pci_bus_addr_t) < 8 || sizeof(resource_size_t) < 8)
1751     + && sz64 > 0x100000000ULL) {
1752     res->flags |= IORESOURCE_UNSET | IORESOURCE_DISABLED;
1753     res->start = 0;
1754     res->end = 0;
1755     @@ -264,7 +264,7 @@ int __pci_read_base(struct pci_dev *dev, enum pci_bar_type type,
1756     goto out;
1757     }
1758    
1759     - if ((sizeof(dma_addr_t) < 8) && l) {
1760     + if ((sizeof(pci_bus_addr_t) < 8) && l) {
1761     /* Above 32-bit boundary; try to reallocate */
1762     res->flags |= IORESOURCE_UNSET;
1763     res->start = 0;
1764     @@ -399,7 +399,7 @@ static void pci_read_bridge_mmio_pref(struct pci_bus *child)
1765     struct pci_dev *dev = child->self;
1766     u16 mem_base_lo, mem_limit_lo;
1767     u64 base64, limit64;
1768     - dma_addr_t base, limit;
1769     + pci_bus_addr_t base, limit;
1770     struct pci_bus_region region;
1771     struct resource *res;
1772    
1773     @@ -426,8 +426,8 @@ static void pci_read_bridge_mmio_pref(struct pci_bus *child)
1774     }
1775     }
1776    
1777     - base = (dma_addr_t) base64;
1778     - limit = (dma_addr_t) limit64;
1779     + base = (pci_bus_addr_t) base64;
1780     + limit = (pci_bus_addr_t) limit64;
1781    
1782     if (base != base64) {
1783     dev_err(&dev->dev, "can't handle bridge window above 4GB (bus address %#010llx)\n",
1784     diff --git a/drivers/pcmcia/topic.h b/drivers/pcmcia/topic.h
1785     index 615a45a8fe86..582688fe7505 100644
1786     --- a/drivers/pcmcia/topic.h
1787     +++ b/drivers/pcmcia/topic.h
1788     @@ -104,6 +104,9 @@
1789     #define TOPIC_EXCA_IF_CONTROL 0x3e /* 8 bit */
1790     #define TOPIC_EXCA_IFC_33V_ENA 0x01
1791    
1792     +#define TOPIC_PCI_CFG_PPBCN 0x3e /* 16-bit */
1793     +#define TOPIC_PCI_CFG_PPBCN_WBEN 0x0400
1794     +
1795     static void topic97_zoom_video(struct pcmcia_socket *sock, int onoff)
1796     {
1797     struct yenta_socket *socket = container_of(sock, struct yenta_socket, socket);
1798     @@ -138,6 +141,7 @@ static int topic97_override(struct yenta_socket *socket)
1799     static int topic95_override(struct yenta_socket *socket)
1800     {
1801     u8 fctrl;
1802     + u16 ppbcn;
1803    
1804     /* enable 3.3V support for 16bit cards */
1805     fctrl = exca_readb(socket, TOPIC_EXCA_IF_CONTROL);
1806     @@ -146,6 +150,18 @@ static int topic95_override(struct yenta_socket *socket)
1807     /* tell yenta to use exca registers to power 16bit cards */
1808     socket->flags |= YENTA_16BIT_POWER_EXCA | YENTA_16BIT_POWER_DF;
1809    
1810     + /* Disable write buffers to prevent lockups under load with numerous
1811     + Cardbus cards, observed on Tecra 500CDT and reported elsewhere on the
1812     + net. This is not a power-on default according to the datasheet
1813     + but some BIOSes seem to set it. */
1814     + if (pci_read_config_word(socket->dev, TOPIC_PCI_CFG_PPBCN, &ppbcn) == 0
1815     + && socket->dev->revision <= 7
1816     + && (ppbcn & TOPIC_PCI_CFG_PPBCN_WBEN)) {
1817     + ppbcn &= ~TOPIC_PCI_CFG_PPBCN_WBEN;
1818     + pci_write_config_word(socket->dev, TOPIC_PCI_CFG_PPBCN, ppbcn);
1819     + dev_info(&socket->dev->dev, "Disabled ToPIC95 Cardbus write buffers.\n");
1820     + }
1821     +
1822     return 0;
1823     }
1824    
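The ToPIC95 hunk is a guarded read-modify-write: read the PPBCN config word and, only if the revision is old enough and the write-buffer-enable bit is set, clear that bit and write the word back. A userspace sketch of the same pattern against a fake register (read_cfg()/write_cfg() and the preset value are stand-ins, not the yenta/PCI API):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PPBCN_WBEN 0x0400                   /* write-buffer enable bit, as in the patch */

static uint16_t fake_ppbcn = 0x0400;        /* pretend the BIOS left the bit set */

static bool read_cfg(uint16_t *val) { *val = fake_ppbcn; return true; }
static void write_cfg(uint16_t val) { fake_ppbcn = val; }

int main(void)
{
    const unsigned revision = 7;
    uint16_t ppbcn;

    if (read_cfg(&ppbcn) && revision <= 7 && (ppbcn & PPBCN_WBEN)) {
        ppbcn &= ~PPBCN_WBEN;               /* clear only the offending bit */
        write_cfg(ppbcn);
        printf("disabled write buffers (PPBCN now %#x)\n", fake_ppbcn);
    }
    return 0;
}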
1825     diff --git a/drivers/pnp/system.c b/drivers/pnp/system.c
1826     index 49c1720df59a..515f33882ab8 100644
1827     --- a/drivers/pnp/system.c
1828     +++ b/drivers/pnp/system.c
1829     @@ -7,6 +7,7 @@
1830     * Bjorn Helgaas <bjorn.helgaas@hp.com>
1831     */
1832    
1833     +#include <linux/acpi.h>
1834     #include <linux/pnp.h>
1835     #include <linux/device.h>
1836     #include <linux/init.h>
1837     @@ -22,25 +23,41 @@ static const struct pnp_device_id pnp_dev_table[] = {
1838     {"", 0}
1839     };
1840    
1841     +#ifdef CONFIG_ACPI
1842     +static bool __reserve_range(u64 start, unsigned int length, bool io, char *desc)
1843     +{
1844     + u8 space_id = io ? ACPI_ADR_SPACE_SYSTEM_IO : ACPI_ADR_SPACE_SYSTEM_MEMORY;
1845     + return !acpi_reserve_region(start, length, space_id, IORESOURCE_BUSY, desc);
1846     +}
1847     +#else
1848     +static bool __reserve_range(u64 start, unsigned int length, bool io, char *desc)
1849     +{
1850     + struct resource *res;
1851     +
1852     + res = io ? request_region(start, length, desc) :
1853     + request_mem_region(start, length, desc);
1854     + if (res) {
1855     + res->flags &= ~IORESOURCE_BUSY;
1856     + return true;
1857     + }
1858     + return false;
1859     +}
1860     +#endif
1861     +
1862     static void reserve_range(struct pnp_dev *dev, struct resource *r, int port)
1863     {
1864     char *regionid;
1865     const char *pnpid = dev_name(&dev->dev);
1866     resource_size_t start = r->start, end = r->end;
1867     - struct resource *res;
1868     + bool reserved;
1869    
1870     regionid = kmalloc(16, GFP_KERNEL);
1871     if (!regionid)
1872     return;
1873    
1874     snprintf(regionid, 16, "pnp %s", pnpid);
1875     - if (port)
1876     - res = request_region(start, end - start + 1, regionid);
1877     - else
1878     - res = request_mem_region(start, end - start + 1, regionid);
1879     - if (res)
1880     - res->flags &= ~IORESOURCE_BUSY;
1881     - else
1882     + reserved = __reserve_range(start, end - start + 1, !!port, regionid);
1883     + if (!reserved)
1884     kfree(regionid);
1885    
1886     /*
1887     @@ -49,7 +66,7 @@ static void reserve_range(struct pnp_dev *dev, struct resource *r, int port)
1888     * have double reservations.
1889     */
1890     dev_info(&dev->dev, "%pR %s reserved\n", r,
1891     - res ? "has been" : "could not be");
1892     + reserved ? "has been" : "could not be");
1893     }
1894    
1895     static void reserve_resources_of_dev(struct pnp_dev *dev)
1896     diff --git a/drivers/power/power_supply_core.c b/drivers/power/power_supply_core.c
1897     index 2ed4a4a6b3c5..4bc0c7f459a5 100644
1898     --- a/drivers/power/power_supply_core.c
1899     +++ b/drivers/power/power_supply_core.c
1900     @@ -30,6 +30,8 @@ EXPORT_SYMBOL_GPL(power_supply_notifier);
1901    
1902     static struct device_type power_supply_dev_type;
1903    
1904     +#define POWER_SUPPLY_DEFERRED_REGISTER_TIME msecs_to_jiffies(10)
1905     +
1906     static bool __power_supply_is_supplied_by(struct power_supply *supplier,
1907     struct power_supply *supply)
1908     {
1909     @@ -121,6 +123,30 @@ void power_supply_changed(struct power_supply *psy)
1910     }
1911     EXPORT_SYMBOL_GPL(power_supply_changed);
1912    
1913     +/*
1914     + * Notify that power supply was registered after parent finished the probing.
1915     + *
1916     + * Often power supply is registered from driver's probe function. However
1917     + * calling power_supply_changed() directly from power_supply_register()
1918     + * would lead to execution of the get_property() function provided by the driver
1919     + * too early - before the probe ends.
1920     + *
1921     + * Avoid that by waiting on parent's mutex.
1922     + */
1923     +static void power_supply_deferred_register_work(struct work_struct *work)
1924     +{
1925     + struct power_supply *psy = container_of(work, struct power_supply,
1926     + deferred_register_work.work);
1927     +
1928     + if (psy->dev.parent)
1929     + mutex_lock(&psy->dev.parent->mutex);
1930     +
1931     + power_supply_changed(psy);
1932     +
1933     + if (psy->dev.parent)
1934     + mutex_unlock(&psy->dev.parent->mutex);
1935     +}
1936     +
1937     #ifdef CONFIG_OF
1938     #include <linux/of.h>
1939    
1940     @@ -645,6 +671,10 @@ __power_supply_register(struct device *parent,
1941     struct power_supply *psy;
1942     int rc;
1943    
1944     + if (!parent)
1945     + pr_warn("%s: Expected proper parent device for '%s'\n",
1946     + __func__, desc->name);
1947     +
1948     psy = kzalloc(sizeof(*psy), GFP_KERNEL);
1949     if (!psy)
1950     return ERR_PTR(-ENOMEM);
1951     @@ -659,7 +689,6 @@ __power_supply_register(struct device *parent,
1952     dev->release = power_supply_dev_release;
1953     dev_set_drvdata(dev, psy);
1954     psy->desc = desc;
1955     - atomic_inc(&psy->use_cnt);
1956     if (cfg) {
1957     psy->drv_data = cfg->drv_data;
1958     psy->of_node = cfg->of_node;
1959     @@ -672,6 +701,8 @@ __power_supply_register(struct device *parent,
1960     goto dev_set_name_failed;
1961    
1962     INIT_WORK(&psy->changed_work, power_supply_changed_work);
1963     + INIT_DELAYED_WORK(&psy->deferred_register_work,
1964     + power_supply_deferred_register_work);
1965    
1966     rc = power_supply_check_supplies(psy);
1967     if (rc) {
1968     @@ -700,7 +731,20 @@ __power_supply_register(struct device *parent,
1969     if (rc)
1970     goto create_triggers_failed;
1971    
1972     - power_supply_changed(psy);
1973     + /*
1974     + * Update use_cnt after any uevents (most notably from device_add()).
1975     + * We are still here during the driver's probe, but
1976     + * power_supply_uevent() calls back into the driver's get_property
1977     + * method so:
1978     + * 1. Driver has not yet assigned the returned struct power_supply,
1979     + * 2. Driver could not finish initialization (anything in its probe
1980     + * after calling power_supply_register()).
1981     + */
1982     + atomic_inc(&psy->use_cnt);
1983     +
1984     + queue_delayed_work(system_power_efficient_wq,
1985     + &psy->deferred_register_work,
1986     + POWER_SUPPLY_DEFERRED_REGISTER_TIME);
1987    
1988     return psy;
1989    
1990     @@ -720,7 +764,8 @@ dev_set_name_failed:
1991    
1992     /**
1993     * power_supply_register() - Register new power supply
1994     - * @parent: Device to be a parent of power supply's device
1995     + * @parent: Device to be a parent of power supply's device, usually
1996     + * the device whose probe function calls this
1997     * @desc: Description of power supply, must be valid through whole
1998     * lifetime of this power supply
1999     * @cfg: Run-time specific configuration accessed during registering,
2000     @@ -741,7 +786,8 @@ EXPORT_SYMBOL_GPL(power_supply_register);
2001    
2002     /**
2003     * power_supply_register() - Register new non-waking-source power supply
2004     - * @parent: Device to be a parent of power supply's device
2005     + * @parent: Device to be a parent of power supply's device, usually
2006     + * the device whose probe function calls this
2007     * @desc: Description of power supply, must be valid through whole
2008     * lifetime of this power supply
2009     * @cfg: Run-time specific configuration accessed during registering,
2010     @@ -770,7 +816,8 @@ static void devm_power_supply_release(struct device *dev, void *res)
2011    
2012     /**
2013     * power_supply_register() - Register managed power supply
2014     - * @parent: Device to be a parent of power supply's device
2015     + * @parent: Device to be a parent of power supply's device, usually
2016     + * the device whose probe function calls this
2017     * @desc: Description of power supply, must be valid through whole
2018     * lifetime of this power supply
2019     * @cfg: Run-time specific configuration accessed during registering,
2020     @@ -805,7 +852,8 @@ EXPORT_SYMBOL_GPL(devm_power_supply_register);
2021    
2022     /**
2023     * power_supply_register() - Register managed non-waking-source power supply
2024     - * @parent: Device to be a parent of power supply's device
2025     + * @parent: Device to be a parent of power supply's device, usually
2026     + * the device whose probe function calls this
2027     * @desc: Description of power supply, must be valid through whole
2028     * lifetime of this power supply
2029     * @cfg: Run-time specific configuration accessed during registering,
2030     @@ -849,6 +897,7 @@ void power_supply_unregister(struct power_supply *psy)
2031     {
2032     WARN_ON(atomic_dec_return(&psy->use_cnt));
2033     cancel_work_sync(&psy->changed_work);
2034     + cancel_delayed_work_sync(&psy->deferred_register_work);
2035     sysfs_remove_link(&psy->dev.kobj, "powers");
2036     power_supply_remove_triggers(psy);
2037     psy_unregister_cooler(psy);
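The power-supply change pushes the first power_supply_changed() notification into a delayed work item that takes the parent device's lock, so the driver's get_property() cannot be called back before its probe has returned and released that lock. A userspace analogue with a worker thread and a "probe" mutex (pthread-based, purely illustrative):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t parent_lock = PTHREAD_MUTEX_INITIALIZER;
static int probe_finished;

static void *deferred_notify(void *arg)
{
    (void)arg;
    usleep(10000);                          /* small delay, like the 10 ms deferred work */
    pthread_mutex_lock(&parent_lock);       /* blocks until "probe" drops the lock */
    printf("notify runs with probe_finished=%d\n", probe_finished);
    pthread_mutex_unlock(&parent_lock);
    return NULL;
}

int main(void)
{
    pthread_t worker;

    pthread_mutex_lock(&parent_lock);                       /* "probe" starts */
    pthread_create(&worker, NULL, deferred_notify, NULL);   /* register + queue the work */
    usleep(50000);                                          /* driver finishes its init... */
    probe_finished = 1;
    pthread_mutex_unlock(&parent_lock);                     /* ...and only then drops the lock */

    pthread_join(worker, NULL);
    return 0;
}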
2038     diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
2039     index 443eaab933fc..8a28116b5805 100644
2040     --- a/drivers/regulator/core.c
2041     +++ b/drivers/regulator/core.c
2042     @@ -779,7 +779,7 @@ static int suspend_prepare(struct regulator_dev *rdev, suspend_state_t state)
2043     static void print_constraints(struct regulator_dev *rdev)
2044     {
2045     struct regulation_constraints *constraints = rdev->constraints;
2046     - char buf[80] = "";
2047     + char buf[160] = "";
2048     int count = 0;
2049     int ret;
2050    
2051     diff --git a/drivers/regulator/max77686.c b/drivers/regulator/max77686.c
2052     index 15fb1416bfbd..c064e32fb3b9 100644
2053     --- a/drivers/regulator/max77686.c
2054     +++ b/drivers/regulator/max77686.c
2055     @@ -88,7 +88,7 @@ enum max77686_ramp_rate {
2056     };
2057    
2058     struct max77686_data {
2059     - u64 gpio_enabled:MAX77686_REGULATORS;
2060     + DECLARE_BITMAP(gpio_enabled, MAX77686_REGULATORS);
2061    
2062     /* Array indexed by regulator id */
2063     unsigned int opmode[MAX77686_REGULATORS];
2064     @@ -121,7 +121,7 @@ static unsigned int max77686_map_normal_mode(struct max77686_data *max77686,
2065     case MAX77686_BUCK8:
2066     case MAX77686_BUCK9:
2067     case MAX77686_LDO20 ... MAX77686_LDO22:
2068     - if (max77686->gpio_enabled & (1 << id))
2069     + if (test_bit(id, max77686->gpio_enabled))
2070     return MAX77686_GPIO_CONTROL;
2071     }
2072    
2073     @@ -277,7 +277,7 @@ static int max77686_of_parse_cb(struct device_node *np,
2074     }
2075    
2076     if (gpio_is_valid(config->ena_gpio)) {
2077     - max77686->gpio_enabled |= (1 << desc->id);
2078     + set_bit(desc->id, max77686->gpio_enabled);
2079    
2080     return regmap_update_bits(config->regmap, desc->enable_reg,
2081     desc->enable_mask,
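The max77686 fix replaces a 64-bit-wide bitfield (u64 gpio_enabled:MAX77686_REGULATORS) with a real bitmap plus set_bit()/test_bit(), which stays correct however large the regulator count grows. A userspace sketch of the same bitmap idiom (hand-rolled helpers standing in for the kernel's DECLARE_BITMAP/set_bit/test_bit):

#include <limits.h>
#include <stdbool.h>
#include <stdio.h>

#define NUM_REGULATORS 34                   /* anything above 32 already breaks a u32 bitfield */
#define BITS_PER_LONG  (CHAR_BIT * sizeof(unsigned long))
#define BITMAP_WORDS(n) (((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)

static unsigned long gpio_enabled[BITMAP_WORDS(NUM_REGULATORS)];

static void set_bit_(unsigned n, unsigned long *map)
{
    map[n / BITS_PER_LONG] |= 1UL << (n % BITS_PER_LONG);
}

static bool test_bit_(unsigned n, const unsigned long *map)
{
    return map[n / BITS_PER_LONG] & (1UL << (n % BITS_PER_LONG));
}

int main(void)
{
    set_bit_(33, gpio_enabled);             /* "regulator 33 is GPIO controlled" */
    printf("id 33: %d, id 2: %d\n",
           test_bit_(33, gpio_enabled), test_bit_(2, gpio_enabled));
    return 0;
}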
2082     diff --git a/drivers/scsi/ipr.h b/drivers/scsi/ipr.h
2083     index 47412cf4eaac..73790a1d0969 100644
2084     --- a/drivers/scsi/ipr.h
2085     +++ b/drivers/scsi/ipr.h
2086     @@ -272,7 +272,7 @@
2087     #define IPR_RUNTIME_RESET 0x40000000
2088    
2089     #define IPR_IPL_INIT_MIN_STAGE_TIME 5
2090     -#define IPR_IPL_INIT_DEFAULT_STAGE_TIME 15
2091     +#define IPR_IPL_INIT_DEFAULT_STAGE_TIME 30
2092     #define IPR_IPL_INIT_STAGE_UNKNOWN 0x0
2093     #define IPR_IPL_INIT_STAGE_TRANSOP 0xB0000000
2094     #define IPR_IPL_INIT_STAGE_MASK 0xff000000
2095     diff --git a/drivers/scsi/scsi_transport_srp.c b/drivers/scsi/scsi_transport_srp.c
2096     index ae45bd99baed..f115f67a6ba5 100644
2097     --- a/drivers/scsi/scsi_transport_srp.c
2098     +++ b/drivers/scsi/scsi_transport_srp.c
2099     @@ -396,6 +396,36 @@ static void srp_reconnect_work(struct work_struct *work)
2100     }
2101     }
2102    
2103     +/**
2104     + * scsi_request_fn_active() - number of kernel threads inside scsi_request_fn()
2105     + * @shost: SCSI host for which to count the number of scsi_request_fn() callers.
2106     + *
2107     + * To do: add support for scsi-mq in this function.
2108     + */
2109     +static int scsi_request_fn_active(struct Scsi_Host *shost)
2110     +{
2111     + struct scsi_device *sdev;
2112     + struct request_queue *q;
2113     + int request_fn_active = 0;
2114     +
2115     + shost_for_each_device(sdev, shost) {
2116     + q = sdev->request_queue;
2117     +
2118     + spin_lock_irq(q->queue_lock);
2119     + request_fn_active += q->request_fn_active;
2120     + spin_unlock_irq(q->queue_lock);
2121     + }
2122     +
2123     + return request_fn_active;
2124     +}
2125     +
2126     +/* Wait until ongoing shost->hostt->queuecommand() calls have finished. */
2127     +static void srp_wait_for_queuecommand(struct Scsi_Host *shost)
2128     +{
2129     + while (scsi_request_fn_active(shost))
2130     + msleep(20);
2131     +}
2132     +
2133     static void __rport_fail_io_fast(struct srp_rport *rport)
2134     {
2135     struct Scsi_Host *shost = rport_to_shost(rport);
2136     @@ -409,8 +439,10 @@ static void __rport_fail_io_fast(struct srp_rport *rport)
2137    
2138     /* Involve the LLD if possible to terminate all I/O on the rport. */
2139     i = to_srp_internal(shost->transportt);
2140     - if (i->f->terminate_rport_io)
2141     + if (i->f->terminate_rport_io) {
2142     + srp_wait_for_queuecommand(shost);
2143     i->f->terminate_rport_io(rport);
2144     + }
2145     }
2146    
2147     /**
2148     @@ -504,27 +536,6 @@ void srp_start_tl_fail_timers(struct srp_rport *rport)
2149     EXPORT_SYMBOL(srp_start_tl_fail_timers);
2150    
2151     /**
2152     - * scsi_request_fn_active() - number of kernel threads inside scsi_request_fn()
2153     - * @shost: SCSI host for which to count the number of scsi_request_fn() callers.
2154     - */
2155     -static int scsi_request_fn_active(struct Scsi_Host *shost)
2156     -{
2157     - struct scsi_device *sdev;
2158     - struct request_queue *q;
2159     - int request_fn_active = 0;
2160     -
2161     - shost_for_each_device(sdev, shost) {
2162     - q = sdev->request_queue;
2163     -
2164     - spin_lock_irq(q->queue_lock);
2165     - request_fn_active += q->request_fn_active;
2166     - spin_unlock_irq(q->queue_lock);
2167     - }
2168     -
2169     - return request_fn_active;
2170     -}
2171     -
2172     -/**
2173     * srp_reconnect_rport() - reconnect to an SRP target port
2174     * @rport: SRP target port.
2175     *
2176     @@ -559,8 +570,7 @@ int srp_reconnect_rport(struct srp_rport *rport)
2177     if (res)
2178     goto out;
2179     scsi_target_block(&shost->shost_gendev);
2180     - while (scsi_request_fn_active(shost))
2181     - msleep(20);
2182     + srp_wait_for_queuecommand(shost);
2183     res = rport->state != SRP_RPORT_LOST ? i->f->reconnect(rport) : -ENODEV;
2184     pr_debug("%s (state %d): transport.reconnect() returned %d\n",
2185     dev_name(&shost->shost_gendev), rport->state, res);
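The SRP transport change hoists the "wait until no thread is inside scsi_request_fn()" loop into srp_wait_for_queuecommand() and also runs it before terminate_rport_io(), so in-flight queuecommand callers drain first. The drain-by-polling shape as a runnable toy (an atomic counter stands in for request_fn_active):

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

static atomic_int request_fn_active;

static void *requester(void *arg)
{
    (void)arg;
    atomic_fetch_add(&request_fn_active, 1);    /* entering "scsi_request_fn()" */
    usleep(100000);                             /* pretend to issue a command */
    atomic_fetch_sub(&request_fn_active, 1);
    return NULL;
}

static void wait_for_queuecommand(void)
{
    while (atomic_load(&request_fn_active))     /* like srp_wait_for_queuecommand() */
        usleep(20000);                          /* msleep(20) in the kernel version */
}

int main(void)
{
    pthread_t t;

    pthread_create(&t, NULL, requester, NULL);
    usleep(10000);                              /* give the requester a head start */
    wait_for_queuecommand();
    printf("all requesters drained\n");
    pthread_join(t, NULL);
    return 0;
}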
2186     diff --git a/drivers/spi/spi-orion.c b/drivers/spi/spi-orion.c
2187     index 861664776672..ff97cabdaa81 100644
2188     --- a/drivers/spi/spi-orion.c
2189     +++ b/drivers/spi/spi-orion.c
2190     @@ -61,6 +61,12 @@ enum orion_spi_type {
2191    
2192     struct orion_spi_dev {
2193     enum orion_spi_type typ;
2194     + /*
2195     + * min_divisor and max_hz should be exclusive; the only case where we
2196     + * have both is for managing the armada-370-spi case with an old
2197     + * device tree
2198     + */
2199     + unsigned long max_hz;
2200     unsigned int min_divisor;
2201     unsigned int max_divisor;
2202     u32 prescale_mask;
2203     @@ -387,8 +393,9 @@ static const struct orion_spi_dev orion_spi_dev_data = {
2204    
2205     static const struct orion_spi_dev armada_spi_dev_data = {
2206     .typ = ARMADA_SPI,
2207     - .min_divisor = 1,
2208     + .min_divisor = 4,
2209     .max_divisor = 1920,
2210     + .max_hz = 50000000,
2211     .prescale_mask = ARMADA_SPI_CLK_PRESCALE_MASK,
2212     };
2213    
2214     @@ -454,7 +461,21 @@ static int orion_spi_probe(struct platform_device *pdev)
2215     goto out;
2216    
2217     tclk_hz = clk_get_rate(spi->clk);
2218     - master->max_speed_hz = DIV_ROUND_UP(tclk_hz, devdata->min_divisor);
2219     +
2220     + /*
2221     + * With old device tree, armada-370-spi could be used with
2222     + * Armada XP; however, for this SoC the maximum frequency is
2223     + * 50MHz instead of tclk/4. On Armada 370, tclk cannot be
2224     + * higher than 200MHz. So, in order to be able to handle both
2225     + * SoCs, we can take the minimum of 50MHz and tclk/4.
2226     + */
2227     + if (of_device_is_compatible(pdev->dev.of_node,
2228     + "marvell,armada-370-spi"))
2229     + master->max_speed_hz = min(devdata->max_hz,
2230     + DIV_ROUND_UP(tclk_hz, devdata->min_divisor));
2231     + else
2232     + master->max_speed_hz =
2233     + DIV_ROUND_UP(tclk_hz, devdata->min_divisor);
2234     master->min_speed_hz = DIV_ROUND_UP(tclk_hz, devdata->max_divisor);
2235    
2236     r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
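The spi-orion change clamps the advertised maximum SPI clock to min(50 MHz, tclk/4) when the old armada-370-spi compatible is in use, since tclk/4 on an Armada XP can exceed what the controller supports. The arithmetic in isolation (example tclk values only):

#include <stdio.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
#define MIN(a, b)          ((a) < (b) ? (a) : (b))

int main(void)
{
    const unsigned long max_hz      = 50000000UL;   /* controller limit from the patch */
    const unsigned long min_divisor = 4;
    const unsigned long tclks[]     = { 200000000UL, 250000000UL };  /* Armada 370 vs a faster SoC */

    for (int i = 0; i < 2; i++) {
        unsigned long speed = MIN(max_hz, DIV_ROUND_UP(tclks[i], min_divisor));
        printf("tclk %lu Hz -> max_speed %lu Hz\n", tclks[i], speed);
    }
    return 0;
}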
2237     diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
2238     index 50910d85df5a..d35c1a13217c 100644
2239     --- a/drivers/spi/spi.c
2240     +++ b/drivers/spi/spi.c
2241     @@ -988,9 +988,6 @@ void spi_finalize_current_message(struct spi_master *master)
2242    
2243     spin_lock_irqsave(&master->queue_lock, flags);
2244     mesg = master->cur_msg;
2245     - master->cur_msg = NULL;
2246     -
2247     - queue_kthread_work(&master->kworker, &master->pump_messages);
2248     spin_unlock_irqrestore(&master->queue_lock, flags);
2249    
2250     spi_unmap_msg(master, mesg);
2251     @@ -1003,9 +1000,13 @@ void spi_finalize_current_message(struct spi_master *master)
2252     }
2253     }
2254    
2255     - trace_spi_message_done(mesg);
2256     -
2257     + spin_lock_irqsave(&master->queue_lock, flags);
2258     + master->cur_msg = NULL;
2259     master->cur_msg_prepared = false;
2260     + queue_kthread_work(&master->kworker, &master->pump_messages);
2261     + spin_unlock_irqrestore(&master->queue_lock, flags);
2262     +
2263     + trace_spi_message_done(mesg);
2264    
2265     mesg->state = NULL;
2266     if (mesg->complete)
2267     diff --git a/drivers/video/fbdev/mxsfb.c b/drivers/video/fbdev/mxsfb.c
2268     index f8ac4a452f26..0f64165b0147 100644
2269     --- a/drivers/video/fbdev/mxsfb.c
2270     +++ b/drivers/video/fbdev/mxsfb.c
2271     @@ -316,6 +316,18 @@ static int mxsfb_check_var(struct fb_var_screeninfo *var,
2272     return 0;
2273     }
2274    
2275     +static inline void mxsfb_enable_axi_clk(struct mxsfb_info *host)
2276     +{
2277     + if (host->clk_axi)
2278     + clk_prepare_enable(host->clk_axi);
2279     +}
2280     +
2281     +static inline void mxsfb_disable_axi_clk(struct mxsfb_info *host)
2282     +{
2283     + if (host->clk_axi)
2284     + clk_disable_unprepare(host->clk_axi);
2285     +}
2286     +
2287     static void mxsfb_enable_controller(struct fb_info *fb_info)
2288     {
2289     struct mxsfb_info *host = to_imxfb_host(fb_info);
2290     @@ -333,14 +345,13 @@ static void mxsfb_enable_controller(struct fb_info *fb_info)
2291     }
2292     }
2293    
2294     - if (host->clk_axi)
2295     - clk_prepare_enable(host->clk_axi);
2296     -
2297     if (host->clk_disp_axi)
2298     clk_prepare_enable(host->clk_disp_axi);
2299     clk_prepare_enable(host->clk);
2300     clk_set_rate(host->clk, PICOS2KHZ(fb_info->var.pixclock) * 1000U);
2301    
2302     + mxsfb_enable_axi_clk(host);
2303     +
2304     /* if it was disabled, re-enable the mode again */
2305     writel(CTRL_DOTCLK_MODE, host->base + LCDC_CTRL + REG_SET);
2306    
2307     @@ -380,11 +391,11 @@ static void mxsfb_disable_controller(struct fb_info *fb_info)
2308     reg = readl(host->base + LCDC_VDCTRL4);
2309     writel(reg & ~VDCTRL4_SYNC_SIGNALS_ON, host->base + LCDC_VDCTRL4);
2310    
2311     + mxsfb_disable_axi_clk(host);
2312     +
2313     clk_disable_unprepare(host->clk);
2314     if (host->clk_disp_axi)
2315     clk_disable_unprepare(host->clk_disp_axi);
2316     - if (host->clk_axi)
2317     - clk_disable_unprepare(host->clk_axi);
2318    
2319     host->enabled = 0;
2320    
2321     @@ -421,6 +432,8 @@ static int mxsfb_set_par(struct fb_info *fb_info)
2322     mxsfb_disable_controller(fb_info);
2323     }
2324    
2325     + mxsfb_enable_axi_clk(host);
2326     +
2327     /* clear the FIFOs */
2328     writel(CTRL1_FIFO_CLEAR, host->base + LCDC_CTRL1 + REG_SET);
2329    
2330     @@ -438,6 +451,7 @@ static int mxsfb_set_par(struct fb_info *fb_info)
2331     ctrl |= CTRL_SET_WORD_LENGTH(3);
2332     switch (host->ld_intf_width) {
2333     case STMLCDIF_8BIT:
2334     + mxsfb_disable_axi_clk(host);
2335     dev_err(&host->pdev->dev,
2336     "Unsupported LCD bus width mapping\n");
2337     return -EINVAL;
2338     @@ -451,6 +465,7 @@ static int mxsfb_set_par(struct fb_info *fb_info)
2339     writel(CTRL1_SET_BYTE_PACKAGING(0x7), host->base + LCDC_CTRL1);
2340     break;
2341     default:
2342     + mxsfb_disable_axi_clk(host);
2343     dev_err(&host->pdev->dev, "Unhandled color depth of %u\n",
2344     fb_info->var.bits_per_pixel);
2345     return -EINVAL;
2346     @@ -504,6 +519,8 @@ static int mxsfb_set_par(struct fb_info *fb_info)
2347     fb_info->fix.line_length * fb_info->var.yoffset,
2348     host->base + host->devdata->next_buf);
2349    
2350     + mxsfb_disable_axi_clk(host);
2351     +
2352     if (reenable)
2353     mxsfb_enable_controller(fb_info);
2354    
2355     @@ -582,10 +599,14 @@ static int mxsfb_pan_display(struct fb_var_screeninfo *var,
2356    
2357     offset = fb_info->fix.line_length * var->yoffset;
2358    
2359     + mxsfb_enable_axi_clk(host);
2360     +
2361     /* update on next VSYNC */
2362     writel(fb_info->fix.smem_start + offset,
2363     host->base + host->devdata->next_buf);
2364    
2365     + mxsfb_disable_axi_clk(host);
2366     +
2367     return 0;
2368     }
2369    
2370     @@ -608,13 +629,17 @@ static int mxsfb_restore_mode(struct mxsfb_info *host,
2371     unsigned line_count;
2372     unsigned period;
2373     unsigned long pa, fbsize;
2374     - int bits_per_pixel, ofs;
2375     + int bits_per_pixel, ofs, ret = 0;
2376     u32 transfer_count, vdctrl0, vdctrl2, vdctrl3, vdctrl4, ctrl;
2377    
2378     + mxsfb_enable_axi_clk(host);
2379     +
2380     /* Only restore the mode when the controller is running */
2381     ctrl = readl(host->base + LCDC_CTRL);
2382     - if (!(ctrl & CTRL_RUN))
2383     - return -EINVAL;
2384     + if (!(ctrl & CTRL_RUN)) {
2385     + ret = -EINVAL;
2386     + goto err;
2387     + }
2388    
2389     vdctrl0 = readl(host->base + LCDC_VDCTRL0);
2390     vdctrl2 = readl(host->base + LCDC_VDCTRL2);
2391     @@ -635,7 +660,8 @@ static int mxsfb_restore_mode(struct mxsfb_info *host,
2392     break;
2393     case 1:
2394     default:
2395     - return -EINVAL;
2396     + ret = -EINVAL;
2397     + goto err;
2398     }
2399    
2400     fb_info->var.bits_per_pixel = bits_per_pixel;
2401     @@ -673,10 +699,14 @@ static int mxsfb_restore_mode(struct mxsfb_info *host,
2402    
2403     pa = readl(host->base + host->devdata->cur_buf);
2404     fbsize = fb_info->fix.line_length * vmode->yres;
2405     - if (pa < fb_info->fix.smem_start)
2406     - return -EINVAL;
2407     - if (pa + fbsize > fb_info->fix.smem_start + fb_info->fix.smem_len)
2408     - return -EINVAL;
2409     + if (pa < fb_info->fix.smem_start) {
2410     + ret = -EINVAL;
2411     + goto err;
2412     + }
2413     + if (pa + fbsize > fb_info->fix.smem_start + fb_info->fix.smem_len) {
2414     + ret = -EINVAL;
2415     + goto err;
2416     + }
2417     ofs = pa - fb_info->fix.smem_start;
2418     if (ofs) {
2419     memmove(fb_info->screen_base, fb_info->screen_base + ofs, fbsize);
2420     @@ -689,7 +719,11 @@ static int mxsfb_restore_mode(struct mxsfb_info *host,
2421     clk_prepare_enable(host->clk);
2422     host->enabled = 1;
2423    
2424     - return 0;
2425     +err:
2426     + if (ret)
2427     + mxsfb_disable_axi_clk(host);
2428     +
2429     + return ret;
2430     }
2431    
2432     static int mxsfb_init_fbinfo_dt(struct mxsfb_info *host,
2433     @@ -915,7 +949,9 @@ static int mxsfb_probe(struct platform_device *pdev)
2434     }
2435    
2436     if (!host->enabled) {
2437     + mxsfb_enable_axi_clk(host);
2438     writel(0, host->base + LCDC_CTRL);
2439     + mxsfb_disable_axi_clk(host);
2440     mxsfb_set_par(fb_info);
2441     mxsfb_enable_controller(fb_info);
2442     }
2443     @@ -954,11 +990,15 @@ static void mxsfb_shutdown(struct platform_device *pdev)
2444     struct fb_info *fb_info = platform_get_drvdata(pdev);
2445     struct mxsfb_info *host = to_imxfb_host(fb_info);
2446    
2447     + mxsfb_enable_axi_clk(host);
2448     +
2449     /*
2450     * Force stop the LCD controller as keeping it running during reboot
2451     * might interfere with the BootROM's boot mode pads sampling.
2452     */
2453     writel(CTRL_RUN, host->base + LCDC_CTRL + REG_CLR);
2454     +
2455     + mxsfb_disable_axi_clk(host);
2456     }
2457    
2458     static struct platform_driver mxsfb_driver = {
2459     diff --git a/fs/configfs/mount.c b/fs/configfs/mount.c
2460     index 537356742091..a8f3b589a2df 100644
2461     --- a/fs/configfs/mount.c
2462     +++ b/fs/configfs/mount.c
2463     @@ -129,8 +129,6 @@ void configfs_release_fs(void)
2464     }
2465    
2466    
2467     -static struct kobject *config_kobj;
2468     -
2469     static int __init configfs_init(void)
2470     {
2471     int err = -ENOMEM;
2472     @@ -141,8 +139,8 @@ static int __init configfs_init(void)
2473     if (!configfs_dir_cachep)
2474     goto out;
2475    
2476     - config_kobj = kobject_create_and_add("config", kernel_kobj);
2477     - if (!config_kobj)
2478     + err = sysfs_create_mount_point(kernel_kobj, "config");
2479     + if (err)
2480     goto out2;
2481    
2482     err = register_filesystem(&configfs_fs_type);
2483     @@ -152,7 +150,7 @@ static int __init configfs_init(void)
2484     return 0;
2485     out3:
2486     pr_err("Unable to register filesystem!\n");
2487     - kobject_put(config_kobj);
2488     + sysfs_remove_mount_point(kernel_kobj, "config");
2489     out2:
2490     kmem_cache_destroy(configfs_dir_cachep);
2491     configfs_dir_cachep = NULL;
2492     @@ -163,7 +161,7 @@ out:
2493     static void __exit configfs_exit(void)
2494     {
2495     unregister_filesystem(&configfs_fs_type);
2496     - kobject_put(config_kobj);
2497     + sysfs_remove_mount_point(kernel_kobj, "config");
2498     kmem_cache_destroy(configfs_dir_cachep);
2499     configfs_dir_cachep = NULL;
2500     }
2501     diff --git a/fs/debugfs/inode.c b/fs/debugfs/inode.c
2502     index c1e7ffb0dab6..12756040ca20 100644
2503     --- a/fs/debugfs/inode.c
2504     +++ b/fs/debugfs/inode.c
2505     @@ -716,20 +716,17 @@ bool debugfs_initialized(void)
2506     }
2507     EXPORT_SYMBOL_GPL(debugfs_initialized);
2508    
2509     -
2510     -static struct kobject *debug_kobj;
2511     -
2512     static int __init debugfs_init(void)
2513     {
2514     int retval;
2515    
2516     - debug_kobj = kobject_create_and_add("debug", kernel_kobj);
2517     - if (!debug_kobj)
2518     - return -EINVAL;
2519     + retval = sysfs_create_mount_point(kernel_kobj, "debug");
2520     + if (retval)
2521     + return retval;
2522    
2523     retval = register_filesystem(&debug_fs_type);
2524     if (retval)
2525     - kobject_put(debug_kobj);
2526     + sysfs_remove_mount_point(kernel_kobj, "debug");
2527     else
2528     debugfs_registered = true;
2529    
2530     diff --git a/fs/fuse/inode.c b/fs/fuse/inode.c
2531     index 082ac1c97f39..18dacf9ed8ff 100644
2532     --- a/fs/fuse/inode.c
2533     +++ b/fs/fuse/inode.c
2534     @@ -1238,7 +1238,6 @@ static void fuse_fs_cleanup(void)
2535     }
2536    
2537     static struct kobject *fuse_kobj;
2538     -static struct kobject *connections_kobj;
2539    
2540     static int fuse_sysfs_init(void)
2541     {
2542     @@ -1250,11 +1249,9 @@ static int fuse_sysfs_init(void)
2543     goto out_err;
2544     }
2545    
2546     - connections_kobj = kobject_create_and_add("connections", fuse_kobj);
2547     - if (!connections_kobj) {
2548     - err = -ENOMEM;
2549     + err = sysfs_create_mount_point(fuse_kobj, "connections");
2550     + if (err)
2551     goto out_fuse_unregister;
2552     - }
2553    
2554     return 0;
2555    
2556     @@ -1266,7 +1263,7 @@ static int fuse_sysfs_init(void)
2557    
2558     static void fuse_sysfs_cleanup(void)
2559     {
2560     - kobject_put(connections_kobj);
2561     + sysfs_remove_mount_point(fuse_kobj, "connections");
2562     kobject_put(fuse_kobj);
2563     }
2564    
2565     diff --git a/fs/kernfs/dir.c b/fs/kernfs/dir.c
2566     index fffca9517321..2d48d28e1640 100644
2567     --- a/fs/kernfs/dir.c
2568     +++ b/fs/kernfs/dir.c
2569     @@ -592,6 +592,9 @@ int kernfs_add_one(struct kernfs_node *kn)
2570     goto out_unlock;
2571    
2572     ret = -ENOENT;
2573     + if (parent->flags & KERNFS_EMPTY_DIR)
2574     + goto out_unlock;
2575     +
2576     if ((parent->flags & KERNFS_ACTIVATED) && !kernfs_active(parent))
2577     goto out_unlock;
2578    
2579     @@ -783,6 +786,38 @@ struct kernfs_node *kernfs_create_dir_ns(struct kernfs_node *parent,
2580     return ERR_PTR(rc);
2581     }
2582    
2583     +/**
2584     + * kernfs_create_empty_dir - create an always empty directory
2585     + * @parent: parent in which to create a new directory
2586     + * @name: name of the new directory
2587     + *
2588     + * Returns the created node on success, ERR_PTR() value on failure.
2589     + */
2590     +struct kernfs_node *kernfs_create_empty_dir(struct kernfs_node *parent,
2591     + const char *name)
2592     +{
2593     + struct kernfs_node *kn;
2594     + int rc;
2595     +
2596     + /* allocate */
2597     + kn = kernfs_new_node(parent, name, S_IRUGO|S_IXUGO|S_IFDIR, KERNFS_DIR);
2598     + if (!kn)
2599     + return ERR_PTR(-ENOMEM);
2600     +
2601     + kn->flags |= KERNFS_EMPTY_DIR;
2602     + kn->dir.root = parent->dir.root;
2603     + kn->ns = NULL;
2604     + kn->priv = NULL;
2605     +
2606     + /* link in */
2607     + rc = kernfs_add_one(kn);
2608     + if (!rc)
2609     + return kn;
2610     +
2611     + kernfs_put(kn);
2612     + return ERR_PTR(rc);
2613     +}
2614     +
2615     static struct dentry *kernfs_iop_lookup(struct inode *dir,
2616     struct dentry *dentry,
2617     unsigned int flags)
2618     @@ -1254,7 +1289,8 @@ int kernfs_rename_ns(struct kernfs_node *kn, struct kernfs_node *new_parent,
2619     mutex_lock(&kernfs_mutex);
2620    
2621     error = -ENOENT;
2622     - if (!kernfs_active(kn) || !kernfs_active(new_parent))
2623     + if (!kernfs_active(kn) || !kernfs_active(new_parent) ||
2624     + (new_parent->flags & KERNFS_EMPTY_DIR))
2625     goto out;
2626    
2627     error = 0;
2628     diff --git a/fs/kernfs/inode.c b/fs/kernfs/inode.c
2629     index 2da8493a380b..756dd56aaf60 100644
2630     --- a/fs/kernfs/inode.c
2631     +++ b/fs/kernfs/inode.c
2632     @@ -296,6 +296,8 @@ static void kernfs_init_inode(struct kernfs_node *kn, struct inode *inode)
2633     case KERNFS_DIR:
2634     inode->i_op = &kernfs_dir_iops;
2635     inode->i_fop = &kernfs_dir_fops;
2636     + if (kn->flags & KERNFS_EMPTY_DIR)
2637     + make_empty_dir_inode(inode);
2638     break;
2639     case KERNFS_FILE:
2640     inode->i_size = kn->attr.size;
2641     diff --git a/fs/libfs.c b/fs/libfs.c
2642     index cb1fb4b9b637..02813592e121 100644
2643     --- a/fs/libfs.c
2644     +++ b/fs/libfs.c
2645     @@ -1093,3 +1093,99 @@ simple_nosetlease(struct file *filp, long arg, struct file_lock **flp,
2646     return -EINVAL;
2647     }
2648     EXPORT_SYMBOL(simple_nosetlease);
2649     +
2650     +
2651     +/*
2652     + * Operations for a permanently empty directory.
2653     + */
2654     +static struct dentry *empty_dir_lookup(struct inode *dir, struct dentry *dentry, unsigned int flags)
2655     +{
2656     + return ERR_PTR(-ENOENT);
2657     +}
2658     +
2659     +static int empty_dir_getattr(struct vfsmount *mnt, struct dentry *dentry,
2660     + struct kstat *stat)
2661     +{
2662     + struct inode *inode = d_inode(dentry);
2663     + generic_fillattr(inode, stat);
2664     + return 0;
2665     +}
2666     +
2667     +static int empty_dir_setattr(struct dentry *dentry, struct iattr *attr)
2668     +{
2669     + return -EPERM;
2670     +}
2671     +
2672     +static int empty_dir_setxattr(struct dentry *dentry, const char *name,
2673     + const void *value, size_t size, int flags)
2674     +{
2675     + return -EOPNOTSUPP;
2676     +}
2677     +
2678     +static ssize_t empty_dir_getxattr(struct dentry *dentry, const char *name,
2679     + void *value, size_t size)
2680     +{
2681     + return -EOPNOTSUPP;
2682     +}
2683     +
2684     +static int empty_dir_removexattr(struct dentry *dentry, const char *name)
2685     +{
2686     + return -EOPNOTSUPP;
2687     +}
2688     +
2689     +static ssize_t empty_dir_listxattr(struct dentry *dentry, char *list, size_t size)
2690     +{
2691     + return -EOPNOTSUPP;
2692     +}
2693     +
2694     +static const struct inode_operations empty_dir_inode_operations = {
2695     + .lookup = empty_dir_lookup,
2696     + .permission = generic_permission,
2697     + .setattr = empty_dir_setattr,
2698     + .getattr = empty_dir_getattr,
2699     + .setxattr = empty_dir_setxattr,
2700     + .getxattr = empty_dir_getxattr,
2701     + .removexattr = empty_dir_removexattr,
2702     + .listxattr = empty_dir_listxattr,
2703     +};
2704     +
2705     +static loff_t empty_dir_llseek(struct file *file, loff_t offset, int whence)
2706     +{
2707     + /* An empty directory has two entries . and .. at offsets 0 and 1 */
2708     + return generic_file_llseek_size(file, offset, whence, 2, 2);
2709     +}
2710     +
2711     +static int empty_dir_readdir(struct file *file, struct dir_context *ctx)
2712     +{
2713     + dir_emit_dots(file, ctx);
2714     + return 0;
2715     +}
2716     +
2717     +static const struct file_operations empty_dir_operations = {
2718     + .llseek = empty_dir_llseek,
2719     + .read = generic_read_dir,
2720     + .iterate = empty_dir_readdir,
2721     + .fsync = noop_fsync,
2722     +};
2723     +
2724     +
2725     +void make_empty_dir_inode(struct inode *inode)
2726     +{
2727     + set_nlink(inode, 2);
2728     + inode->i_mode = S_IFDIR | S_IRUGO | S_IXUGO;
2729     + inode->i_uid = GLOBAL_ROOT_UID;
2730     + inode->i_gid = GLOBAL_ROOT_GID;
2731     + inode->i_rdev = 0;
2732     + inode->i_size = 2;
2733     + inode->i_blkbits = PAGE_SHIFT;
2734     + inode->i_blocks = 0;
2735     +
2736     + inode->i_op = &empty_dir_inode_operations;
2737     + inode->i_fop = &empty_dir_operations;
2738     +}
2739     +
2740     +bool is_empty_dir_inode(struct inode *inode)
2741     +{
2742     + return (inode->i_fop == &empty_dir_operations) &&
2743     + (inode->i_op == &empty_dir_inode_operations);
2744     +}
2745     diff --git a/fs/namespace.c b/fs/namespace.c
2746     index 1d4a97c573e0..02c6875dd945 100644
2747     --- a/fs/namespace.c
2748     +++ b/fs/namespace.c
2749     @@ -2332,6 +2332,8 @@ unlock:
2750     return err;
2751     }
2752    
2753     +static bool fs_fully_visible(struct file_system_type *fs_type, int *new_mnt_flags);
2754     +
2755     /*
2756     * create a new mount for userspace and request it to be added into the
2757     * namespace's tree
2758     @@ -2363,6 +2365,10 @@ static int do_new_mount(struct path *path, const char *fstype, int flags,
2759     flags |= MS_NODEV;
2760     mnt_flags |= MNT_NODEV | MNT_LOCK_NODEV;
2761     }
2762     + if (type->fs_flags & FS_USERNS_VISIBLE) {
2763     + if (!fs_fully_visible(type, &mnt_flags))
2764     + return -EPERM;
2765     + }
2766     }
2767    
2768     mnt = vfs_kern_mount(type, flags, name, data);
2769     @@ -3164,9 +3170,10 @@ bool current_chrooted(void)
2770     return chrooted;
2771     }
2772    
2773     -bool fs_fully_visible(struct file_system_type *type)
2774     +static bool fs_fully_visible(struct file_system_type *type, int *new_mnt_flags)
2775     {
2776     struct mnt_namespace *ns = current->nsproxy->mnt_ns;
2777     + int new_flags = *new_mnt_flags;
2778     struct mount *mnt;
2779     bool visible = false;
2780    
2781     @@ -3185,6 +3192,19 @@ bool fs_fully_visible(struct file_system_type *type)
2782     if (mnt->mnt.mnt_root != mnt->mnt.mnt_sb->s_root)
2783     continue;
2784    
2785     + /* Verify the mount flags are equal to or more permissive
2786     + * than the proposed new mount.
2787     + */
2788     + if ((mnt->mnt.mnt_flags & MNT_LOCK_READONLY) &&
2789     + !(new_flags & MNT_READONLY))
2790     + continue;
2791     + if ((mnt->mnt.mnt_flags & MNT_LOCK_NODEV) &&
2792     + !(new_flags & MNT_NODEV))
2793     + continue;
2794     + if ((mnt->mnt.mnt_flags & MNT_LOCK_ATIME) &&
2795     + ((mnt->mnt.mnt_flags & MNT_ATIME_MASK) != (new_flags & MNT_ATIME_MASK)))
2796     + continue;
2797     +
2798     /* This mount is not fully visible if there are any
2799     * locked child mounts that cover anything except for
2800     * empty directories.
2801     @@ -3194,11 +3214,14 @@ bool fs_fully_visible(struct file_system_type *type)
2802     /* Only worry about locked mounts */
2803     if (!(mnt->mnt.mnt_flags & MNT_LOCKED))
2804     continue;
2805     - if (!S_ISDIR(inode->i_mode))
2806     - goto next;
2807     - if (inode->i_nlink > 2)
2808     + /* Is the directory permanently empty? */
2809     + if (!is_empty_dir_inode(inode))
2810     goto next;
2811     }
2812     + /* Preserve the locked attributes */
2813     + *new_mnt_flags |= mnt->mnt.mnt_flags & (MNT_LOCK_READONLY | \
2814     + MNT_LOCK_NODEV | \
2815     + MNT_LOCK_ATIME);
2816     visible = true;
2817     goto found;
2818     next: ;
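The fs_fully_visible() hunk refuses a new user-namespace mount when an already visible mount carries a read-only, nodev, or atime lock that the proposed mount does not honour, then copies those lock bits into the new mount's flags. The flag comparison itself is plain bit logic; a small sketch with stand-in flag values (not the kernel's MNT_* constants):

#include <stdbool.h>
#include <stdio.h>

#define F_READONLY       0x01
#define F_NODEV          0x02
#define F_LOCK_READONLY  0x10
#define F_LOCK_NODEV     0x20

/* The existing mount is acceptable only if every lock it carries is matched
 * by an at-least-as-restrictive bit in the proposed mount. */
static bool locks_satisfied(unsigned existing, unsigned proposed)
{
    if ((existing & F_LOCK_READONLY) && !(proposed & F_READONLY))
        return false;
    if ((existing & F_LOCK_NODEV) && !(proposed & F_NODEV))
        return false;
    return true;
}

int main(void)
{
    unsigned new_flags = F_NODEV;                        /* proposed mount: nodev only */
    unsigned existing  = F_LOCK_READONLY | F_LOCK_NODEV; /* visible mount: both locked */

    if (!locks_satisfied(existing, new_flags)) {
        printf("would fail with -EPERM (read-only lock not honoured)\n");
        return 1;
    }
    new_flags |= existing & (F_LOCK_READONLY | F_LOCK_NODEV);  /* preserve the locks */
    printf("flags now %#x\n", new_flags);
    return 0;
}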
2819     diff --git a/fs/proc/generic.c b/fs/proc/generic.c
2820     index df6327a2b865..e5dee5c3188e 100644
2821     --- a/fs/proc/generic.c
2822     +++ b/fs/proc/generic.c
2823     @@ -373,6 +373,10 @@ static struct proc_dir_entry *__proc_create(struct proc_dir_entry **parent,
2824     WARN(1, "create '/proc/%s' by hand\n", qstr.name);
2825     return NULL;
2826     }
2827     + if (is_empty_pde(*parent)) {
2828     + WARN(1, "attempt to add to permanently empty directory");
2829     + return NULL;
2830     + }
2831    
2832     ent = kzalloc(sizeof(struct proc_dir_entry) + qstr.len + 1, GFP_KERNEL);
2833     if (!ent)
2834     @@ -455,6 +459,25 @@ struct proc_dir_entry *proc_mkdir(const char *name,
2835     }
2836     EXPORT_SYMBOL(proc_mkdir);
2837    
2838     +struct proc_dir_entry *proc_create_mount_point(const char *name)
2839     +{
2840     + umode_t mode = S_IFDIR | S_IRUGO | S_IXUGO;
2841     + struct proc_dir_entry *ent, *parent = NULL;
2842     +
2843     + ent = __proc_create(&parent, name, mode, 2);
2844     + if (ent) {
2845     + ent->data = NULL;
2846     + ent->proc_fops = NULL;
2847     + ent->proc_iops = NULL;
2848     + if (proc_register(parent, ent) < 0) {
2849     + kfree(ent);
2850     + parent->nlink--;
2851     + ent = NULL;
2852     + }
2853     + }
2854     + return ent;
2855     +}
2856     +
2857     struct proc_dir_entry *proc_create_data(const char *name, umode_t mode,
2858     struct proc_dir_entry *parent,
2859     const struct file_operations *proc_fops,
2860     diff --git a/fs/proc/inode.c b/fs/proc/inode.c
2861     index 8272aaba1bb0..e3eb5524639f 100644
2862     --- a/fs/proc/inode.c
2863     +++ b/fs/proc/inode.c
2864     @@ -423,6 +423,10 @@ struct inode *proc_get_inode(struct super_block *sb, struct proc_dir_entry *de)
2865     inode->i_mtime = inode->i_atime = inode->i_ctime = CURRENT_TIME;
2866     PROC_I(inode)->pde = de;
2867    
2868     + if (is_empty_pde(de)) {
2869     + make_empty_dir_inode(inode);
2870     + return inode;
2871     + }
2872     if (de->mode) {
2873     inode->i_mode = de->mode;
2874     inode->i_uid = de->uid;
2875     diff --git a/fs/proc/internal.h b/fs/proc/internal.h
2876     index c835b94c0cd3..aa2781095bd1 100644
2877     --- a/fs/proc/internal.h
2878     +++ b/fs/proc/internal.h
2879     @@ -191,6 +191,12 @@ static inline struct proc_dir_entry *pde_get(struct proc_dir_entry *pde)
2880     }
2881     extern void pde_put(struct proc_dir_entry *);
2882    
2883     +static inline bool is_empty_pde(const struct proc_dir_entry *pde)
2884     +{
2885     + return S_ISDIR(pde->mode) && !pde->proc_iops;
2886     +}
2887     +struct proc_dir_entry *proc_create_mount_point(const char *name);
2888     +
2889     /*
2890     * inode.c
2891     */
2892     diff --git a/fs/proc/proc_sysctl.c b/fs/proc/proc_sysctl.c
2893     index fea2561d773b..fdda62e6115e 100644
2894     --- a/fs/proc/proc_sysctl.c
2895     +++ b/fs/proc/proc_sysctl.c
2896     @@ -19,6 +19,28 @@ static const struct inode_operations proc_sys_inode_operations;
2897     static const struct file_operations proc_sys_dir_file_operations;
2898     static const struct inode_operations proc_sys_dir_operations;
2899    
2900     +/* Support for permanently empty directories */
2901     +
2902     +struct ctl_table sysctl_mount_point[] = {
2903     + { }
2904     +};
2905     +
2906     +static bool is_empty_dir(struct ctl_table_header *head)
2907     +{
2908     + return head->ctl_table[0].child == sysctl_mount_point;
2909     +}
2910     +
2911     +static void set_empty_dir(struct ctl_dir *dir)
2912     +{
2913     + dir->header.ctl_table[0].child = sysctl_mount_point;
2914     +}
2915     +
2916     +static void clear_empty_dir(struct ctl_dir *dir)
2917     +
2918     +{
2919     + dir->header.ctl_table[0].child = NULL;
2920     +}
2921     +
2922     void proc_sys_poll_notify(struct ctl_table_poll *poll)
2923     {
2924     if (!poll)
2925     @@ -187,6 +209,17 @@ static int insert_header(struct ctl_dir *dir, struct ctl_table_header *header)
2926     struct ctl_table *entry;
2927     int err;
2928    
2929     + /* Is this a permanently empty directory? */
2930     + if (is_empty_dir(&dir->header))
2931     + return -EROFS;
2932     +
2933     + /* Am I creating a permanently empty directory? */
2934     + if (header->ctl_table == sysctl_mount_point) {
2935     + if (!RB_EMPTY_ROOT(&dir->root))
2936     + return -EINVAL;
2937     + set_empty_dir(dir);
2938     + }
2939     +
2940     dir->header.nreg++;
2941     header->parent = dir;
2942     err = insert_links(header);
2943     @@ -202,6 +235,8 @@ fail:
2944     erase_header(header);
2945     put_links(header);
2946     fail_links:
2947     + if (header->ctl_table == sysctl_mount_point)
2948     + clear_empty_dir(dir);
2949     header->parent = NULL;
2950     drop_sysctl_table(&dir->header);
2951     return err;
2952     @@ -419,6 +454,8 @@ static struct inode *proc_sys_make_inode(struct super_block *sb,
2953     inode->i_mode |= S_IFDIR;
2954     inode->i_op = &proc_sys_dir_operations;
2955     inode->i_fop = &proc_sys_dir_file_operations;
2956     + if (is_empty_dir(head))
2957     + make_empty_dir_inode(inode);
2958     }
2959     out:
2960     return inode;
2961     diff --git a/fs/proc/root.c b/fs/proc/root.c
2962     index b7fa4bfe896a..68feb0f70e63 100644
2963     --- a/fs/proc/root.c
2964     +++ b/fs/proc/root.c
2965     @@ -112,9 +112,6 @@ static struct dentry *proc_mount(struct file_system_type *fs_type,
2966     ns = task_active_pid_ns(current);
2967     options = data;
2968    
2969     - if (!capable(CAP_SYS_ADMIN) && !fs_fully_visible(fs_type))
2970     - return ERR_PTR(-EPERM);
2971     -
2972     /* Does the mounter have privilege over the pid namespace? */
2973     if (!ns_capable(ns->user_ns, CAP_SYS_ADMIN))
2974     return ERR_PTR(-EPERM);
2975     @@ -159,7 +156,7 @@ static struct file_system_type proc_fs_type = {
2976     .name = "proc",
2977     .mount = proc_mount,
2978     .kill_sb = proc_kill_sb,
2979     - .fs_flags = FS_USERNS_MOUNT,
2980     + .fs_flags = FS_USERNS_VISIBLE | FS_USERNS_MOUNT,
2981     };
2982    
2983     void __init proc_root_init(void)
2984     @@ -182,10 +179,10 @@ void __init proc_root_init(void)
2985     #endif
2986     proc_mkdir("fs", NULL);
2987     proc_mkdir("driver", NULL);
2988     - proc_mkdir("fs/nfsd", NULL); /* somewhere for the nfsd filesystem to be mounted */
2989     + proc_create_mount_point("fs/nfsd"); /* somewhere for the nfsd filesystem to be mounted */
2990     #if defined(CONFIG_SUN_OPENPROMFS) || defined(CONFIG_SUN_OPENPROMFS_MODULE)
2991     /* just give it a mountpoint */
2992     - proc_mkdir("openprom", NULL);
2993     + proc_create_mount_point("openprom");
2994     #endif
2995     proc_tty_init();
2996     proc_mkdir("bus", NULL);
2997     diff --git a/fs/pstore/inode.c b/fs/pstore/inode.c
2998     index dc43b5f29305..3adcc4669fac 100644
2999     --- a/fs/pstore/inode.c
3000     +++ b/fs/pstore/inode.c
3001     @@ -461,22 +461,18 @@ static struct file_system_type pstore_fs_type = {
3002     .kill_sb = pstore_kill_sb,
3003     };
3004    
3005     -static struct kobject *pstore_kobj;
3006     -
3007     static int __init init_pstore_fs(void)
3008     {
3009     - int err = 0;
3010     + int err;
3011    
3012     /* Create a convenient mount point for people to access pstore */
3013     - pstore_kobj = kobject_create_and_add("pstore", fs_kobj);
3014     - if (!pstore_kobj) {
3015     - err = -ENOMEM;
3016     + err = sysfs_create_mount_point(fs_kobj, "pstore");
3017     + if (err)
3018     goto out;
3019     - }
3020    
3021     err = register_filesystem(&pstore_fs_type);
3022     if (err < 0)
3023     - kobject_put(pstore_kobj);
3024     + sysfs_remove_mount_point(fs_kobj, "pstore");
3025    
3026     out:
3027     return err;
3028     diff --git a/fs/sysfs/dir.c b/fs/sysfs/dir.c
3029     index 0b45ff42f374..94374e435025 100644
3030     --- a/fs/sysfs/dir.c
3031     +++ b/fs/sysfs/dir.c
3032     @@ -121,3 +121,37 @@ int sysfs_move_dir_ns(struct kobject *kobj, struct kobject *new_parent_kobj,
3033    
3034     return kernfs_rename_ns(kn, new_parent, kn->name, new_ns);
3035     }
3036     +
3037     +/**
3038     + * sysfs_create_mount_point - create an always empty directory
3039     + * @parent_kobj: kobject that will contain this always empty directory
3040     + * @name: The name of the always empty directory to add
3041     + */
3042     +int sysfs_create_mount_point(struct kobject *parent_kobj, const char *name)
3043     +{
3044     + struct kernfs_node *kn, *parent = parent_kobj->sd;
3045     +
3046     + kn = kernfs_create_empty_dir(parent, name);
3047     + if (IS_ERR(kn)) {
3048     + if (PTR_ERR(kn) == -EEXIST)
3049     + sysfs_warn_dup(parent, name);
3050     + return PTR_ERR(kn);
3051     + }
3052     +
3053     + return 0;
3054     +}
3055     +EXPORT_SYMBOL_GPL(sysfs_create_mount_point);
3056     +
3057     +/**
3058     + * sysfs_remove_mount_point - remove an always empty directory.
3059     + * @parent_kobj: kobject that will contain this always empty directory
3060     + * @name: The name of the always empty directory to remove
3061     + *
3062     + */
3063     +void sysfs_remove_mount_point(struct kobject *parent_kobj, const char *name)
3064     +{
3065     + struct kernfs_node *parent = parent_kobj->sd;
3066     +
3067     + kernfs_remove_by_name_ns(parent, name, NULL);
3068     +}
3069     +EXPORT_SYMBOL_GPL(sysfs_remove_mount_point);
3070     diff --git a/fs/sysfs/mount.c b/fs/sysfs/mount.c
3071     index 8a49486bf30c..1c6ac6fcee9f 100644
3072     --- a/fs/sysfs/mount.c
3073     +++ b/fs/sysfs/mount.c
3074     @@ -31,9 +31,6 @@ static struct dentry *sysfs_mount(struct file_system_type *fs_type,
3075     bool new_sb;
3076    
3077     if (!(flags & MS_KERNMOUNT)) {
3078     - if (!capable(CAP_SYS_ADMIN) && !fs_fully_visible(fs_type))
3079     - return ERR_PTR(-EPERM);
3080     -
3081     if (!kobj_ns_current_may_mount(KOBJ_NS_TYPE_NET))
3082     return ERR_PTR(-EPERM);
3083     }
3084     @@ -58,7 +55,7 @@ static struct file_system_type sysfs_fs_type = {
3085     .name = "sysfs",
3086     .mount = sysfs_mount,
3087     .kill_sb = sysfs_kill_sb,
3088     - .fs_flags = FS_USERNS_MOUNT,
3089     + .fs_flags = FS_USERNS_VISIBLE | FS_USERNS_MOUNT,
3090     };
3091    
3092     int __init sysfs_init(void)
3093     diff --git a/fs/tracefs/inode.c b/fs/tracefs/inode.c
3094     index d92bdf3b079a..a43df11a163f 100644
3095     --- a/fs/tracefs/inode.c
3096     +++ b/fs/tracefs/inode.c
3097     @@ -631,14 +631,12 @@ bool tracefs_initialized(void)
3098     return tracefs_registered;
3099     }
3100    
3101     -static struct kobject *trace_kobj;
3102     -
3103     static int __init tracefs_init(void)
3104     {
3105     int retval;
3106    
3107     - trace_kobj = kobject_create_and_add("tracing", kernel_kobj);
3108     - if (!trace_kobj)
3109     + retval = sysfs_create_mount_point(kernel_kobj, "tracing");
3110     + if (retval)
3111     return -EINVAL;
3112    
3113     retval = register_filesystem(&trace_fs_type);
3114     diff --git a/include/linux/acpi.h b/include/linux/acpi.h
3115     index e4da5e35e29c..5da2d2e9d38e 100644
3116     --- a/include/linux/acpi.h
3117     +++ b/include/linux/acpi.h
3118     @@ -332,6 +332,9 @@ int acpi_check_region(resource_size_t start, resource_size_t n,
3119    
3120     int acpi_resources_are_enforced(void);
3121    
3122     +int acpi_reserve_region(u64 start, unsigned int length, u8 space_id,
3123     + unsigned long flags, char *desc);
3124     +
3125     #ifdef CONFIG_HIBERNATION
3126     void __init acpi_no_s4_hw_signature(void);
3127     #endif
3128     @@ -440,6 +443,7 @@ extern acpi_status acpi_pci_osc_control_set(acpi_handle handle,
3129     #define ACPI_OST_SC_INSERT_NOT_SUPPORTED 0x82
3130    
3131     extern void acpi_early_init(void);
3132     +extern void acpi_subsystem_init(void);
3133    
3134     extern int acpi_nvs_register(__u64 start, __u64 size);
3135    
3136     @@ -494,6 +498,7 @@ static inline const char *acpi_dev_name(struct acpi_device *adev)
3137     }
3138    
3139     static inline void acpi_early_init(void) { }
3140     +static inline void acpi_subsystem_init(void) { }
3141    
3142     static inline int early_acpi_boot_init(void)
3143     {
3144     @@ -525,6 +530,13 @@ static inline int acpi_check_region(resource_size_t start, resource_size_t n,
3145     return 0;
3146     }
3147    
3148     +static inline int acpi_reserve_region(u64 start, unsigned int length,
3149     + u8 space_id, unsigned long flags,
3150     + char *desc)
3151     +{
3152     + return -ENXIO;
3153     +}
3154     +
3155     struct acpi_table_header;
3156     static inline int acpi_table_parse(char *id,
3157     int (*handler)(struct acpi_table_header *))
3158     diff --git a/include/linux/fs.h b/include/linux/fs.h
3159     index 35ec87e490b1..571aab91bfc0 100644
3160     --- a/include/linux/fs.h
3161     +++ b/include/linux/fs.h
3162     @@ -1897,6 +1897,7 @@ struct file_system_type {
3163     #define FS_HAS_SUBTYPE 4
3164     #define FS_USERNS_MOUNT 8 /* Can be mounted by userns root */
3165     #define FS_USERNS_DEV_MOUNT 16 /* A userns mount does not imply MNT_NODEV */
3166     +#define FS_USERNS_VISIBLE 32 /* FS must already be visible */
3167     #define FS_RENAME_DOES_D_MOVE 32768 /* FS will handle d_move() during rename() internally. */
3168     struct dentry *(*mount) (struct file_system_type *, int,
3169     const char *, void *);
3170     @@ -1984,7 +1985,6 @@ extern int vfs_ustat(dev_t, struct kstatfs *);
3171     extern int freeze_super(struct super_block *super);
3172     extern int thaw_super(struct super_block *super);
3173     extern bool our_mnt(struct vfsmount *mnt);
3174     -extern bool fs_fully_visible(struct file_system_type *);
3175    
3176     extern int current_umask(void);
3177    
3178     @@ -2780,6 +2780,8 @@ extern struct dentry *simple_lookup(struct inode *, struct dentry *, unsigned in
3179     extern ssize_t generic_read_dir(struct file *, char __user *, size_t, loff_t *);
3180     extern const struct file_operations simple_dir_operations;
3181     extern const struct inode_operations simple_dir_inode_operations;
3182     +extern void make_empty_dir_inode(struct inode *inode);
3183     +extern bool is_empty_dir_inode(struct inode *inode);
3184     struct tree_descr { char *name; const struct file_operations *ops; int mode; };
3185     struct dentry *d_alloc_name(struct dentry *, const char *);
3186     extern int simple_fill_super(struct super_block *, unsigned long, struct tree_descr *);
3187     diff --git a/include/linux/kernfs.h b/include/linux/kernfs.h
3188     index 71ecdab1671b..29d1896c3ba5 100644
3189     --- a/include/linux/kernfs.h
3190     +++ b/include/linux/kernfs.h
3191     @@ -45,6 +45,7 @@ enum kernfs_node_flag {
3192     KERNFS_LOCKDEP = 0x0100,
3193     KERNFS_SUICIDAL = 0x0400,
3194     KERNFS_SUICIDED = 0x0800,
3195     + KERNFS_EMPTY_DIR = 0x1000,
3196     };
3197    
3198     /* @flags for kernfs_create_root() */
3199     @@ -285,6 +286,8 @@ void kernfs_destroy_root(struct kernfs_root *root);
3200     struct kernfs_node *kernfs_create_dir_ns(struct kernfs_node *parent,
3201     const char *name, umode_t mode,
3202     void *priv, const void *ns);
3203     +struct kernfs_node *kernfs_create_empty_dir(struct kernfs_node *parent,
3204     + const char *name);
3205     struct kernfs_node *__kernfs_create_file(struct kernfs_node *parent,
3206     const char *name,
3207     umode_t mode, loff_t size,
3208     diff --git a/include/linux/kmemleak.h b/include/linux/kmemleak.h
3209     index e705467ddb47..d0a1f99e24e3 100644
3210     --- a/include/linux/kmemleak.h
3211     +++ b/include/linux/kmemleak.h
3212     @@ -28,7 +28,8 @@
3213     extern void kmemleak_init(void) __ref;
3214     extern void kmemleak_alloc(const void *ptr, size_t size, int min_count,
3215     gfp_t gfp) __ref;
3216     -extern void kmemleak_alloc_percpu(const void __percpu *ptr, size_t size) __ref;
3217     +extern void kmemleak_alloc_percpu(const void __percpu *ptr, size_t size,
3218     + gfp_t gfp) __ref;
3219     extern void kmemleak_free(const void *ptr) __ref;
3220     extern void kmemleak_free_part(const void *ptr, size_t size) __ref;
3221     extern void kmemleak_free_percpu(const void __percpu *ptr) __ref;
3222     @@ -71,7 +72,8 @@ static inline void kmemleak_alloc_recursive(const void *ptr, size_t size,
3223     gfp_t gfp)
3224     {
3225     }
3226     -static inline void kmemleak_alloc_percpu(const void __percpu *ptr, size_t size)
3227     +static inline void kmemleak_alloc_percpu(const void __percpu *ptr, size_t size,
3228     + gfp_t gfp)
3229     {
3230     }
3231     static inline void kmemleak_free(const void *ptr)
3232     diff --git a/include/linux/pci.h b/include/linux/pci.h
3233     index 353db8dc4c6e..3ef3a52068df 100644
3234     --- a/include/linux/pci.h
3235     +++ b/include/linux/pci.h
3236     @@ -577,9 +577,15 @@ int raw_pci_read(unsigned int domain, unsigned int bus, unsigned int devfn,
3237     int raw_pci_write(unsigned int domain, unsigned int bus, unsigned int devfn,
3238     int reg, int len, u32 val);
3239    
3240     +#ifdef CONFIG_PCI_BUS_ADDR_T_64BIT
3241     +typedef u64 pci_bus_addr_t;
3242     +#else
3243     +typedef u32 pci_bus_addr_t;
3244     +#endif
3245     +
3246     struct pci_bus_region {
3247     - dma_addr_t start;
3248     - dma_addr_t end;
3249     + pci_bus_addr_t start;
3250     + pci_bus_addr_t end;
3251     };
3252    
3253     struct pci_dynids {
3254     @@ -1006,6 +1012,7 @@ int __must_check pci_assign_resource(struct pci_dev *dev, int i);
3255     int __must_check pci_reassign_resource(struct pci_dev *dev, int i, resource_size_t add_size, resource_size_t align);
3256     int pci_select_bars(struct pci_dev *dev, unsigned long flags);
3257     bool pci_device_is_present(struct pci_dev *pdev);
3258     +void pci_ignore_hotplug(struct pci_dev *dev);
3259    
3260     /* ROM control related routines */
3261     int pci_enable_rom(struct pci_dev *pdev);
3262     @@ -1043,11 +1050,6 @@ bool pci_dev_run_wake(struct pci_dev *dev);
3263     bool pci_check_pme_status(struct pci_dev *dev);
3264     void pci_pme_wakeup_bus(struct pci_bus *bus);
3265    
3266     -static inline void pci_ignore_hotplug(struct pci_dev *dev)
3267     -{
3268     - dev->ignore_hotplug = 1;
3269     -}
3270     -
3271     static inline int pci_enable_wake(struct pci_dev *dev, pci_power_t state,
3272     bool enable)
3273     {
3274     @@ -1128,7 +1130,7 @@ int __must_check pci_bus_alloc_resource(struct pci_bus *bus,
3275    
3276     int pci_remap_iospace(const struct resource *res, phys_addr_t phys_addr);
3277    
3278     -static inline dma_addr_t pci_bus_address(struct pci_dev *pdev, int bar)
3279     +static inline pci_bus_addr_t pci_bus_address(struct pci_dev *pdev, int bar)
3280     {
3281     struct pci_bus_region region;
3282    
3283     diff --git a/include/linux/power_supply.h b/include/linux/power_supply.h
3284     index 75a1dd8dc56e..a80f1fd01ddb 100644
3285     --- a/include/linux/power_supply.h
3286     +++ b/include/linux/power_supply.h
3287     @@ -237,6 +237,7 @@ struct power_supply {
3288     /* private */
3289     struct device dev;
3290     struct work_struct changed_work;
3291     + struct delayed_work deferred_register_work;
3292     spinlock_t changed_lock;
3293     bool changed;
3294     atomic_t use_cnt;
3295     diff --git a/include/linux/sysctl.h b/include/linux/sysctl.h
3296     index 795d5fea5697..fa7bc29925c9 100644
3297     --- a/include/linux/sysctl.h
3298     +++ b/include/linux/sysctl.h
3299     @@ -188,6 +188,9 @@ struct ctl_table_header *register_sysctl_paths(const struct ctl_path *path,
3300     void unregister_sysctl_table(struct ctl_table_header * table);
3301    
3302     extern int sysctl_init(void);
3303     +
3304     +extern struct ctl_table sysctl_mount_point[];
3305     +
3306     #else /* CONFIG_SYSCTL */
3307     static inline struct ctl_table_header *register_sysctl_table(struct ctl_table * table)
3308     {
3309     diff --git a/include/linux/sysfs.h b/include/linux/sysfs.h
3310     index 99382c0df17e..9f65758311a4 100644
3311     --- a/include/linux/sysfs.h
3312     +++ b/include/linux/sysfs.h
3313     @@ -210,6 +210,10 @@ int __must_check sysfs_rename_dir_ns(struct kobject *kobj, const char *new_name,
3314     int __must_check sysfs_move_dir_ns(struct kobject *kobj,
3315     struct kobject *new_parent_kobj,
3316     const void *new_ns);
3317     +int __must_check sysfs_create_mount_point(struct kobject *parent_kobj,
3318     + const char *name);
3319     +void sysfs_remove_mount_point(struct kobject *parent_kobj,
3320     + const char *name);
3321    
3322     int __must_check sysfs_create_file_ns(struct kobject *kobj,
3323     const struct attribute *attr,
3324     @@ -298,6 +302,17 @@ static inline int sysfs_move_dir_ns(struct kobject *kobj,
3325     return 0;
3326     }
3327    
3328     +static inline int sysfs_create_mount_point(struct kobject *parent_kobj,
3329     + const char *name)
3330     +{
3331     + return 0;
3332     +}
3333     +
3334     +static inline void sysfs_remove_mount_point(struct kobject *parent_kobj,
3335     + const char *name)
3336     +{
3337     +}
3338     +
3339     static inline int sysfs_create_file_ns(struct kobject *kobj,
3340     const struct attribute *attr,
3341     const void *ns)
3342     diff --git a/include/linux/types.h b/include/linux/types.h
3343     index 59698be03490..8715287c3b1f 100644
3344     --- a/include/linux/types.h
3345     +++ b/include/linux/types.h
3346     @@ -139,12 +139,20 @@ typedef unsigned long blkcnt_t;
3347     */
3348     #define pgoff_t unsigned long
3349    
3350     -/* A dma_addr_t can hold any valid DMA or bus address for the platform */
3351     +/*
3352     + * A dma_addr_t can hold any valid DMA address, i.e., any address returned
3353     + * by the DMA API.
3354     + *
3355     + * If the DMA API only uses 32-bit addresses, dma_addr_t need only be 32
3356     + * bits wide. Bus addresses, e.g., PCI BARs, may be wider than 32 bits,
3357     + * but drivers do memory-mapped I/O to ioremapped kernel virtual addresses,
3358     + * so they don't care about the size of the actual bus addresses.
3359     + */
3360     #ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
3361     typedef u64 dma_addr_t;
3362     #else
3363     typedef u32 dma_addr_t;
3364     -#endif /* dma_addr_t */
3365     +#endif
3366    
3367     typedef unsigned __bitwise__ gfp_t;
3368     typedef unsigned __bitwise__ fmode_t;
3369     diff --git a/init/main.c b/init/main.c
3370     index 2115055faeac..2a89545e0a5d 100644
3371     --- a/init/main.c
3372     +++ b/init/main.c
3373     @@ -664,6 +664,7 @@ asmlinkage __visible void __init start_kernel(void)
3374    
3375     check_bugs();
3376    
3377     + acpi_subsystem_init();
3378     sfi_init_late();
3379    
3380     if (efi_enabled(EFI_RUNTIME_SERVICES)) {
3381     diff --git a/kernel/cgroup.c b/kernel/cgroup.c
3382     index 469dd547770c..e8a5491be756 100644
3383     --- a/kernel/cgroup.c
3384     +++ b/kernel/cgroup.c
3385     @@ -1924,8 +1924,6 @@ static struct file_system_type cgroup_fs_type = {
3386     .kill_sb = cgroup_kill_sb,
3387     };
3388    
3389     -static struct kobject *cgroup_kobj;
3390     -
3391     /**
3392     * task_cgroup_path - cgroup path of a task in the first cgroup hierarchy
3393     * @task: target task
3394     @@ -5044,13 +5042,13 @@ int __init cgroup_init(void)
3395     ss->bind(init_css_set.subsys[ssid]);
3396     }
3397    
3398     - cgroup_kobj = kobject_create_and_add("cgroup", fs_kobj);
3399     - if (!cgroup_kobj)
3400     - return -ENOMEM;
3401     + err = sysfs_create_mount_point(fs_kobj, "cgroup");
3402     + if (err)
3403     + return err;
3404    
3405     err = register_filesystem(&cgroup_fs_type);
3406     if (err < 0) {
3407     - kobject_put(cgroup_kobj);
3408     + sysfs_remove_mount_point(fs_kobj, "cgroup");
3409     return err;
3410     }
3411    
3412     diff --git a/kernel/irq/devres.c b/kernel/irq/devres.c
3413     index d5d0f7345c54..74d90a754268 100644
3414     --- a/kernel/irq/devres.c
3415     +++ b/kernel/irq/devres.c
3416     @@ -104,7 +104,7 @@ int devm_request_any_context_irq(struct device *dev, unsigned int irq,
3417     return -ENOMEM;
3418    
3419     rc = request_any_context_irq(irq, handler, irqflags, devname, dev_id);
3420     - if (rc) {
3421     + if (rc < 0) {
3422     devres_free(dr);
3423     return rc;
3424     }
3425     @@ -113,7 +113,7 @@ int devm_request_any_context_irq(struct device *dev, unsigned int irq,
3426     dr->dev_id = dev_id;
3427     devres_add(dev, dr);
3428    
3429     - return 0;
3430     + return rc;
3431     }
3432     EXPORT_SYMBOL(devm_request_any_context_irq);
3433    
3434     diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
3435     index 284e2691e380..9ec555732f1a 100644
3436     --- a/kernel/livepatch/core.c
3437     +++ b/kernel/livepatch/core.c
3438     @@ -179,7 +179,9 @@ static int klp_find_object_symbol(const char *objname, const char *name,
3439     .count = 0
3440     };
3441    
3442     + mutex_lock(&module_mutex);
3443     kallsyms_on_each_symbol(klp_find_callback, &args);
3444     + mutex_unlock(&module_mutex);
3445    
3446     if (args.count == 0)
3447     pr_err("symbol '%s' not found in symbol table\n", name);
3448     @@ -219,13 +221,19 @@ static int klp_verify_vmlinux_symbol(const char *name, unsigned long addr)
3449     .name = name,
3450     .addr = addr,
3451     };
3452     + int ret;
3453    
3454     - if (kallsyms_on_each_symbol(klp_verify_callback, &args))
3455     - return 0;
3456     + mutex_lock(&module_mutex);
3457     + ret = kallsyms_on_each_symbol(klp_verify_callback, &args);
3458     + mutex_unlock(&module_mutex);
3459    
3460     - pr_err("symbol '%s' not found at specified address 0x%016lx, kernel mismatch?\n",
3461     - name, addr);
3462     - return -EINVAL;
3463     + if (!ret) {
3464     + pr_err("symbol '%s' not found at specified address 0x%016lx, kernel mismatch?\n",
3465     + name, addr);
3466     + return -EINVAL;
3467     + }
3468     +
3469     + return 0;
3470     }
3471    
3472     static int klp_find_verify_func_addr(struct klp_object *obj,
3473     diff --git a/kernel/rcu/tiny.c b/kernel/rcu/tiny.c
3474     index 069742d61c68..ec3086879cb5 100644
3475     --- a/kernel/rcu/tiny.c
3476     +++ b/kernel/rcu/tiny.c
3477     @@ -170,6 +170,11 @@ static void __rcu_process_callbacks(struct rcu_ctrlblk *rcp)
3478    
3479     /* Move the ready-to-invoke callbacks to a local list. */
3480     local_irq_save(flags);
3481     + if (rcp->donetail == &rcp->rcucblist) {
3482     + /* No callbacks ready, so just leave. */
3483     + local_irq_restore(flags);
3484     + return;
3485     + }
3486     RCU_TRACE(trace_rcu_batch_start(rcp->name, 0, rcp->qlen, -1));
3487     list = rcp->rcucblist;
3488     rcp->rcucblist = *rcp->donetail;
3489     diff --git a/kernel/sysctl.c b/kernel/sysctl.c
3490     index 2082b1a88fb9..c3eee4c6d6c1 100644
3491     --- a/kernel/sysctl.c
3492     +++ b/kernel/sysctl.c
3493     @@ -1531,12 +1531,6 @@ static struct ctl_table vm_table[] = {
3494     { }
3495     };
3496    
3497     -#if defined(CONFIG_BINFMT_MISC) || defined(CONFIG_BINFMT_MISC_MODULE)
3498     -static struct ctl_table binfmt_misc_table[] = {
3499     - { }
3500     -};
3501     -#endif
3502     -
3503     static struct ctl_table fs_table[] = {
3504     {
3505     .procname = "inode-nr",
3506     @@ -1690,7 +1684,7 @@ static struct ctl_table fs_table[] = {
3507     {
3508     .procname = "binfmt_misc",
3509     .mode = 0555,
3510     - .child = binfmt_misc_table,
3511     + .child = sysctl_mount_point,
3512     },
3513     #endif
3514     {
3515     diff --git a/mm/kmemleak.c b/mm/kmemleak.c
3516     index f0fe4f2c1fa7..3716cdb8ba42 100644
3517     --- a/mm/kmemleak.c
3518     +++ b/mm/kmemleak.c
3519     @@ -195,6 +195,8 @@ static struct kmem_cache *scan_area_cache;
3520    
3521     /* set if tracing memory operations is enabled */
3522     static int kmemleak_enabled;
3523     +/* same as above but only for the kmemleak_free() callback */
3524     +static int kmemleak_free_enabled;
3525     /* set in the late_initcall if there were no errors */
3526     static int kmemleak_initialized;
3527     /* enables or disables early logging of the memory operations */
3528     @@ -907,12 +909,13 @@ EXPORT_SYMBOL_GPL(kmemleak_alloc);
3529     * kmemleak_alloc_percpu - register a newly allocated __percpu object
3530     * @ptr: __percpu pointer to beginning of the object
3531     * @size: size of the object
3532     + * @gfp: flags used for kmemleak internal memory allocations
3533     *
3534     * This function is called from the kernel percpu allocator when a new object
3535     - * (memory block) is allocated (alloc_percpu). It assumes GFP_KERNEL
3536     - * allocation.
3537     + * (memory block) is allocated (alloc_percpu).
3538     */
3539     -void __ref kmemleak_alloc_percpu(const void __percpu *ptr, size_t size)
3540     +void __ref kmemleak_alloc_percpu(const void __percpu *ptr, size_t size,
3541     + gfp_t gfp)
3542     {
3543     unsigned int cpu;
3544    
3545     @@ -925,7 +928,7 @@ void __ref kmemleak_alloc_percpu(const void __percpu *ptr, size_t size)
3546     if (kmemleak_enabled && ptr && !IS_ERR(ptr))
3547     for_each_possible_cpu(cpu)
3548     create_object((unsigned long)per_cpu_ptr(ptr, cpu),
3549     - size, 0, GFP_KERNEL);
3550     + size, 0, gfp);
3551     else if (kmemleak_early_log)
3552     log_early(KMEMLEAK_ALLOC_PERCPU, ptr, size, 0);
3553     }
3554     @@ -942,7 +945,7 @@ void __ref kmemleak_free(const void *ptr)
3555     {
3556     pr_debug("%s(0x%p)\n", __func__, ptr);
3557    
3558     - if (kmemleak_enabled && ptr && !IS_ERR(ptr))
3559     + if (kmemleak_free_enabled && ptr && !IS_ERR(ptr))
3560     delete_object_full((unsigned long)ptr);
3561     else if (kmemleak_early_log)
3562     log_early(KMEMLEAK_FREE, ptr, 0, 0);
3563     @@ -982,7 +985,7 @@ void __ref kmemleak_free_percpu(const void __percpu *ptr)
3564    
3565     pr_debug("%s(0x%p)\n", __func__, ptr);
3566    
3567     - if (kmemleak_enabled && ptr && !IS_ERR(ptr))
3568     + if (kmemleak_free_enabled && ptr && !IS_ERR(ptr))
3569     for_each_possible_cpu(cpu)
3570     delete_object_full((unsigned long)per_cpu_ptr(ptr,
3571     cpu));
3572     @@ -1750,6 +1753,13 @@ static void kmemleak_do_cleanup(struct work_struct *work)
3573     mutex_lock(&scan_mutex);
3574     stop_scan_thread();
3575    
3576     + /*
3577     + * Once the scan thread has stopped, it is safe to no longer track
3578     + * object freeing. Ordering of the scan thread stopping and the memory
3579     + * accesses below is guaranteed by the kthread_stop() function.
3580     + */
3581     + kmemleak_free_enabled = 0;
3582     +
3583     if (!kmemleak_found_leaks)
3584     __kmemleak_do_cleanup();
3585     else
3586     @@ -1776,6 +1786,8 @@ static void kmemleak_disable(void)
3587     /* check whether it is too early for a kernel thread */
3588     if (kmemleak_initialized)
3589     schedule_work(&cleanup_work);
3590     + else
3591     + kmemleak_free_enabled = 0;
3592    
3593     pr_info("Kernel memory leak detector disabled\n");
3594     }
3595     @@ -1840,8 +1852,10 @@ void __init kmemleak_init(void)
3596     if (kmemleak_error) {
3597     local_irq_restore(flags);
3598     return;
3599     - } else
3600     + } else {
3601     kmemleak_enabled = 1;
3602     + kmemleak_free_enabled = 1;
3603     + }
3604     local_irq_restore(flags);
3605    
3606     /*
3607     diff --git a/mm/mempolicy.c b/mm/mempolicy.c
3608     index 747743237d9f..99d4c1d0b858 100644
3609     --- a/mm/mempolicy.c
3610     +++ b/mm/mempolicy.c
3611     @@ -1972,35 +1972,41 @@ retry_cpuset:
3612     pol = get_vma_policy(vma, addr);
3613     cpuset_mems_cookie = read_mems_allowed_begin();
3614    
3615     - if (unlikely(IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && hugepage &&
3616     - pol->mode != MPOL_INTERLEAVE)) {
3617     + if (pol->mode == MPOL_INTERLEAVE) {
3618     + unsigned nid;
3619     +
3620     + nid = interleave_nid(pol, vma, addr, PAGE_SHIFT + order);
3621     + mpol_cond_put(pol);
3622     + page = alloc_page_interleave(gfp, order, nid);
3623     + goto out;
3624     + }
3625     +
3626     + if (unlikely(IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && hugepage)) {
3627     + int hpage_node = node;
3628     +
3629     /*
3630     * For hugepage allocation and non-interleave policy which
3631     - * allows the current node, we only try to allocate from the
3632     - * current node and don't fall back to other nodes, as the
3633     - * cost of remote accesses would likely offset THP benefits.
3634     + * allows the current node (or other explicitly preferred
3635     + * node) we only try to allocate from the current/preferred
3636     + * node and don't fall back to other nodes, as the cost of
3637     + * remote accesses would likely offset THP benefits.
3638     *
3639     * If the policy is interleave, or does not allow the current
3640     * node in its nodemask, we allocate the standard way.
3641     */
3642     + if (pol->mode == MPOL_PREFERRED &&
3643     + !(pol->flags & MPOL_F_LOCAL))
3644     + hpage_node = pol->v.preferred_node;
3645     +
3646     nmask = policy_nodemask(gfp, pol);
3647     - if (!nmask || node_isset(node, *nmask)) {
3648     + if (!nmask || node_isset(hpage_node, *nmask)) {
3649     mpol_cond_put(pol);
3650     - page = alloc_pages_exact_node(node,
3651     + page = alloc_pages_exact_node(hpage_node,
3652     gfp | __GFP_THISNODE, order);
3653     goto out;
3654     }
3655     }
3656    
3657     - if (pol->mode == MPOL_INTERLEAVE) {
3658     - unsigned nid;
3659     -
3660     - nid = interleave_nid(pol, vma, addr, PAGE_SHIFT + order);
3661     - mpol_cond_put(pol);
3662     - page = alloc_page_interleave(gfp, order, nid);
3663     - goto out;
3664     - }
3665     -
3666     nmask = policy_nodemask(gfp, pol);
3667     zl = policy_zonelist(gfp, pol, node);
3668     mpol_cond_put(pol);
3669     diff --git a/mm/percpu.c b/mm/percpu.c
3670     index dfd02484e8de..2dd74487a0af 100644
3671     --- a/mm/percpu.c
3672     +++ b/mm/percpu.c
3673     @@ -1030,7 +1030,7 @@ area_found:
3674     memset((void *)pcpu_chunk_addr(chunk, cpu, 0) + off, 0, size);
3675    
3676     ptr = __addr_to_pcpu_ptr(chunk->base_addr + off);
3677     - kmemleak_alloc_percpu(ptr, size);
3678     + kmemleak_alloc_percpu(ptr, size, gfp);
3679     return ptr;
3680    
3681     fail_unlock:
3682     diff --git a/security/inode.c b/security/inode.c
3683     index 91503b79c5f8..0e37e4fba8fa 100644
3684     --- a/security/inode.c
3685     +++ b/security/inode.c
3686     @@ -215,19 +215,17 @@ void securityfs_remove(struct dentry *dentry)
3687     }
3688     EXPORT_SYMBOL_GPL(securityfs_remove);
3689    
3690     -static struct kobject *security_kobj;
3691     -
3692     static int __init securityfs_init(void)
3693     {
3694     int retval;
3695    
3696     - security_kobj = kobject_create_and_add("security", kernel_kobj);
3697     - if (!security_kobj)
3698     - return -EINVAL;
3699     + retval = sysfs_create_mount_point(kernel_kobj, "security");
3700     + if (retval)
3701     + return retval;
3702    
3703     retval = register_filesystem(&fs_type);
3704     if (retval)
3705     - kobject_put(security_kobj);
3706     + sysfs_remove_mount_point(kernel_kobj, "security");
3707     return retval;
3708     }
3709    
3710     diff --git a/security/selinux/selinuxfs.c b/security/selinux/selinuxfs.c
3711     index d2787cca1fcb..3d2201413028 100644
3712     --- a/security/selinux/selinuxfs.c
3713     +++ b/security/selinux/selinuxfs.c
3714     @@ -1853,7 +1853,6 @@ static struct file_system_type sel_fs_type = {
3715     };
3716    
3717     struct vfsmount *selinuxfs_mount;
3718     -static struct kobject *selinuxfs_kobj;
3719    
3720     static int __init init_sel_fs(void)
3721     {
3722     @@ -1862,13 +1861,13 @@ static int __init init_sel_fs(void)
3723     if (!selinux_enabled)
3724     return 0;
3725    
3726     - selinuxfs_kobj = kobject_create_and_add("selinux", fs_kobj);
3727     - if (!selinuxfs_kobj)
3728     - return -ENOMEM;
3729     + err = sysfs_create_mount_point(fs_kobj, "selinux");
3730     + if (err)
3731     + return err;
3732    
3733     err = register_filesystem(&sel_fs_type);
3734     if (err) {
3735     - kobject_put(selinuxfs_kobj);
3736     + sysfs_remove_mount_point(fs_kobj, "selinux");
3737     return err;
3738     }
3739    
3740     @@ -1887,7 +1886,7 @@ __initcall(init_sel_fs);
3741     #ifdef CONFIG_SECURITY_SELINUX_DISABLE
3742     void exit_sel_fs(void)
3743     {
3744     - kobject_put(selinuxfs_kobj);
3745     + sysfs_remove_mount_point(fs_kobj, "selinux");
3746     kern_unmount(selinuxfs_mount);
3747     unregister_filesystem(&sel_fs_type);
3748     }
3749     diff --git a/security/smack/smackfs.c b/security/smack/smackfs.c
3750     index d9682985349e..ac4cac7c661a 100644
3751     --- a/security/smack/smackfs.c
3752     +++ b/security/smack/smackfs.c
3753     @@ -2241,16 +2241,16 @@ static const struct file_operations smk_revoke_subj_ops = {
3754     .llseek = generic_file_llseek,
3755     };
3756    
3757     -static struct kset *smackfs_kset;
3758     /**
3759     * smk_init_sysfs - initialize /sys/fs/smackfs
3760     *
3761     */
3762     static int smk_init_sysfs(void)
3763     {
3764     - smackfs_kset = kset_create_and_add("smackfs", NULL, fs_kobj);
3765     - if (!smackfs_kset)
3766     - return -ENOMEM;
3767     + int err;
3768     + err = sysfs_create_mount_point(fs_kobj, "smackfs");
3769     + if (err)
3770     + return err;
3771     return 0;
3772     }
3773    
3774     diff --git a/sound/core/pcm.c b/sound/core/pcm.c
3775     index b25bcf5b8644..dfed728d8c87 100644
3776     --- a/sound/core/pcm.c
3777     +++ b/sound/core/pcm.c
3778     @@ -1027,7 +1027,8 @@ void snd_pcm_detach_substream(struct snd_pcm_substream *substream)
3779     static ssize_t show_pcm_class(struct device *dev,
3780     struct device_attribute *attr, char *buf)
3781     {
3782     - struct snd_pcm *pcm;
3783     + struct snd_pcm_str *pstr = container_of(dev, struct snd_pcm_str, dev);
3784     + struct snd_pcm *pcm = pstr->pcm;
3785     const char *str;
3786     static const char *strs[SNDRV_PCM_CLASS_LAST + 1] = {
3787     [SNDRV_PCM_CLASS_GENERIC] = "generic",
3788     @@ -1036,8 +1037,7 @@ static ssize_t show_pcm_class(struct device *dev,
3789     [SNDRV_PCM_CLASS_DIGITIZER] = "digitizer",
3790     };
3791    
3792     - if (! (pcm = dev_get_drvdata(dev)) ||
3793     - pcm->dev_class > SNDRV_PCM_CLASS_LAST)
3794     + if (pcm->dev_class > SNDRV_PCM_CLASS_LAST)
3795     str = "none";
3796     else
3797     str = strs[pcm->dev_class];
3798     diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
3799     index b6db25b23dd3..c403dd10d126 100644
3800     --- a/sound/pci/hda/hda_intel.c
3801     +++ b/sound/pci/hda/hda_intel.c
3802     @@ -2054,6 +2054,8 @@ static const struct pci_device_id azx_ids[] = {
3803     { PCI_DEVICE(0x1022, 0x780d),
3804     .driver_data = AZX_DRIVER_GENERIC | AZX_DCAPS_PRESET_ATI_SB },
3805     /* ATI HDMI */
3806     + { PCI_DEVICE(0x1002, 0x1308),
3807     + .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS },
3808     { PCI_DEVICE(0x1002, 0x793b),
3809     .driver_data = AZX_DRIVER_ATIHDMI | AZX_DCAPS_PRESET_ATI_HDMI },
3810     { PCI_DEVICE(0x1002, 0x7919),
3811     @@ -2062,6 +2064,8 @@ static const struct pci_device_id azx_ids[] = {
3812     .driver_data = AZX_DRIVER_ATIHDMI | AZX_DCAPS_PRESET_ATI_HDMI },
3813     { PCI_DEVICE(0x1002, 0x970f),
3814     .driver_data = AZX_DRIVER_ATIHDMI | AZX_DCAPS_PRESET_ATI_HDMI },
3815     + { PCI_DEVICE(0x1002, 0x9840),
3816     + .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS },
3817     { PCI_DEVICE(0x1002, 0xaa00),
3818     .driver_data = AZX_DRIVER_ATIHDMI | AZX_DCAPS_PRESET_ATI_HDMI },
3819     { PCI_DEVICE(0x1002, 0xaa08),
3820     diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
3821     index 6d010452c1f5..0e75998db39f 100644
3822     --- a/sound/pci/hda/patch_realtek.c
3823     +++ b/sound/pci/hda/patch_realtek.c
3824     @@ -4458,6 +4458,7 @@ enum {
3825     ALC269_FIXUP_LIFEBOOK,
3826     ALC269_FIXUP_LIFEBOOK_EXTMIC,
3827     ALC269_FIXUP_LIFEBOOK_HP_PIN,
3828     + ALC269_FIXUP_LIFEBOOK_NO_HP_TO_LINEOUT,
3829     ALC269_FIXUP_AMIC,
3830     ALC269_FIXUP_DMIC,
3831     ALC269VB_FIXUP_AMIC,
3832     @@ -4478,6 +4479,7 @@ enum {
3833     ALC269_FIXUP_DELL3_MIC_NO_PRESENCE,
3834     ALC269_FIXUP_HEADSET_MODE,
3835     ALC269_FIXUP_HEADSET_MODE_NO_HP_MIC,
3836     + ALC269_FIXUP_ASPIRE_HEADSET_MIC,
3837     ALC269_FIXUP_ASUS_X101_FUNC,
3838     ALC269_FIXUP_ASUS_X101_VERB,
3839     ALC269_FIXUP_ASUS_X101,
3840     @@ -4505,6 +4507,7 @@ enum {
3841     ALC255_FIXUP_HEADSET_MODE_NO_HP_MIC,
3842     ALC293_FIXUP_DELL1_MIC_NO_PRESENCE,
3843     ALC292_FIXUP_TPT440_DOCK,
3844     + ALC292_FIXUP_TPT440_DOCK2,
3845     ALC283_FIXUP_BXBT2807_MIC,
3846     ALC255_FIXUP_DELL_WMI_MIC_MUTE_LED,
3847     ALC282_FIXUP_ASPIRE_V5_PINS,
3848     @@ -4515,6 +4518,8 @@ enum {
3849     ALC288_FIXUP_DELL_HEADSET_MODE,
3850     ALC288_FIXUP_DELL1_MIC_NO_PRESENCE,
3851     ALC288_FIXUP_DELL_XPS_13_GPIO6,
3852     + ALC288_FIXUP_DELL_XPS_13,
3853     + ALC288_FIXUP_DISABLE_AAMIX,
3854     ALC292_FIXUP_DELL_E7X,
3855     ALC292_FIXUP_DISABLE_AAMIX,
3856     };
3857     @@ -4623,6 +4628,10 @@ static const struct hda_fixup alc269_fixups[] = {
3858     { }
3859     },
3860     },
3861     + [ALC269_FIXUP_LIFEBOOK_NO_HP_TO_LINEOUT] = {
3862     + .type = HDA_FIXUP_FUNC,
3863     + .v.func = alc269_fixup_pincfg_no_hp_to_lineout,
3864     + },
3865     [ALC269_FIXUP_AMIC] = {
3866     .type = HDA_FIXUP_PINS,
3867     .v.pins = (const struct hda_pintbl[]) {
3868     @@ -4751,6 +4760,15 @@ static const struct hda_fixup alc269_fixups[] = {
3869     .type = HDA_FIXUP_FUNC,
3870     .v.func = alc_fixup_headset_mode_no_hp_mic,
3871     },
3872     + [ALC269_FIXUP_ASPIRE_HEADSET_MIC] = {
3873     + .type = HDA_FIXUP_PINS,
3874     + .v.pins = (const struct hda_pintbl[]) {
3875     + { 0x19, 0x01a1913c }, /* headset mic w/o jack detect */
3876     + { }
3877     + },
3878     + .chained = true,
3879     + .chain_id = ALC269_FIXUP_HEADSET_MODE,
3880     + },
3881     [ALC286_FIXUP_SONY_MIC_NO_PRESENCE] = {
3882     .type = HDA_FIXUP_PINS,
3883     .v.pins = (const struct hda_pintbl[]) {
3884     @@ -4953,6 +4971,12 @@ static const struct hda_fixup alc269_fixups[] = {
3885     .chain_id = ALC269_FIXUP_HEADSET_MODE
3886     },
3887     [ALC292_FIXUP_TPT440_DOCK] = {
3888     + .type = HDA_FIXUP_FUNC,
3889     + .v.func = alc269_fixup_pincfg_no_hp_to_lineout,
3890     + .chained = true,
3891     + .chain_id = ALC292_FIXUP_TPT440_DOCK2
3892     + },
3893     + [ALC292_FIXUP_TPT440_DOCK2] = {
3894     .type = HDA_FIXUP_PINS,
3895     .v.pins = (const struct hda_pintbl[]) {
3896     { 0x16, 0x21211010 }, /* dock headphone */
3897     @@ -5039,9 +5063,23 @@ static const struct hda_fixup alc269_fixups[] = {
3898     .chained = true,
3899     .chain_id = ALC288_FIXUP_DELL1_MIC_NO_PRESENCE
3900     },
3901     + [ALC288_FIXUP_DISABLE_AAMIX] = {
3902     + .type = HDA_FIXUP_FUNC,
3903     + .v.func = alc_fixup_disable_aamix,
3904     + .chained = true,
3905     + .chain_id = ALC288_FIXUP_DELL_XPS_13_GPIO6
3906     + },
3907     + [ALC288_FIXUP_DELL_XPS_13] = {
3908     + .type = HDA_FIXUP_FUNC,
3909     + .v.func = alc_fixup_dell_xps13,
3910     + .chained = true,
3911     + .chain_id = ALC288_FIXUP_DISABLE_AAMIX
3912     + },
3913     [ALC292_FIXUP_DISABLE_AAMIX] = {
3914     .type = HDA_FIXUP_FUNC,
3915     .v.func = alc_fixup_disable_aamix,
3916     + .chained = true,
3917     + .chain_id = ALC269_FIXUP_DELL2_MIC_NO_PRESENCE
3918     },
3919     [ALC292_FIXUP_DELL_E7X] = {
3920     .type = HDA_FIXUP_FUNC,
3921     @@ -5056,6 +5094,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
3922     SND_PCI_QUIRK(0x1025, 0x029b, "Acer 1810TZ", ALC269_FIXUP_INV_DMIC),
3923     SND_PCI_QUIRK(0x1025, 0x0349, "Acer AOD260", ALC269_FIXUP_INV_DMIC),
3924     SND_PCI_QUIRK(0x1025, 0x047c, "Acer AC700", ALC269_FIXUP_ACER_AC700),
3925     + SND_PCI_QUIRK(0x1025, 0x072d, "Acer Aspire V5-571G", ALC269_FIXUP_ASPIRE_HEADSET_MIC),
3926     + SND_PCI_QUIRK(0x1025, 0x080d, "Acer Aspire V5-122P", ALC269_FIXUP_ASPIRE_HEADSET_MIC),
3927     SND_PCI_QUIRK(0x1025, 0x0740, "Acer AO725", ALC271_FIXUP_HP_GATE_MIC_JACK),
3928     SND_PCI_QUIRK(0x1025, 0x0742, "Acer AO756", ALC271_FIXUP_HP_GATE_MIC_JACK),
3929     SND_PCI_QUIRK(0x1025, 0x0775, "Acer Aspire E1-572", ALC271_FIXUP_HP_GATE_MIC_JACK_E1_572),
3930     @@ -5069,10 +5109,11 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
3931     SND_PCI_QUIRK(0x1028, 0x05f6, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE),
3932     SND_PCI_QUIRK(0x1028, 0x0615, "Dell Vostro 5470", ALC290_FIXUP_SUBWOOFER_HSJACK),
3933     SND_PCI_QUIRK(0x1028, 0x0616, "Dell Vostro 5470", ALC290_FIXUP_SUBWOOFER_HSJACK),
3934     + SND_PCI_QUIRK(0x1028, 0x062e, "Dell Latitude E7450", ALC292_FIXUP_DELL_E7X),
3935     SND_PCI_QUIRK(0x1028, 0x0638, "Dell Inspiron 5439", ALC290_FIXUP_MONO_SPEAKERS_HSJACK),
3936     SND_PCI_QUIRK(0x1028, 0x064a, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
3937     SND_PCI_QUIRK(0x1028, 0x064b, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
3938     - SND_PCI_QUIRK(0x1028, 0x0665, "Dell XPS 13", ALC292_FIXUP_DELL_E7X),
3939     + SND_PCI_QUIRK(0x1028, 0x0665, "Dell XPS 13", ALC288_FIXUP_DELL_XPS_13),
3940     SND_PCI_QUIRK(0x1028, 0x06c7, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE),
3941     SND_PCI_QUIRK(0x1028, 0x06d9, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
3942     SND_PCI_QUIRK(0x1028, 0x06da, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
3943     @@ -5156,6 +5197,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
3944     SND_PCI_QUIRK(0x104d, 0x9084, "Sony VAIO", ALC275_FIXUP_SONY_HWEQ),
3945     SND_PCI_QUIRK(0x104d, 0x9099, "Sony VAIO S13", ALC275_FIXUP_SONY_DISABLE_AAMIX),
3946     SND_PCI_QUIRK(0x10cf, 0x1475, "Lifebook", ALC269_FIXUP_LIFEBOOK),
3947     + SND_PCI_QUIRK(0x10cf, 0x159f, "Lifebook E780", ALC269_FIXUP_LIFEBOOK_NO_HP_TO_LINEOUT),
3948     SND_PCI_QUIRK(0x10cf, 0x15dc, "Lifebook T731", ALC269_FIXUP_LIFEBOOK_HP_PIN),
3949     SND_PCI_QUIRK(0x10cf, 0x1757, "Lifebook E752", ALC269_FIXUP_LIFEBOOK_HP_PIN),
3950     SND_PCI_QUIRK(0x10cf, 0x1845, "Lifebook U904", ALC269_FIXUP_LIFEBOOK_EXTMIC),
3951     diff --git a/sound/pci/hda/patch_via.c b/sound/pci/hda/patch_via.c
3952     index bab6c04932aa..0baeecc2213c 100644
3953     --- a/sound/pci/hda/patch_via.c
3954     +++ b/sound/pci/hda/patch_via.c
3955     @@ -238,7 +238,9 @@ static int via_pin_power_ctl_get(struct snd_kcontrol *kcontrol,
3956     struct snd_ctl_elem_value *ucontrol)
3957     {
3958     struct hda_codec *codec = snd_kcontrol_chip(kcontrol);
3959     - ucontrol->value.enumerated.item[0] = codec->power_save_node;
3960     + struct via_spec *spec = codec->spec;
3961     +
3962     + ucontrol->value.enumerated.item[0] = spec->gen.power_down_unused;
3963     return 0;
3964     }
3965    
3966     @@ -249,9 +251,9 @@ static int via_pin_power_ctl_put(struct snd_kcontrol *kcontrol,
3967     struct via_spec *spec = codec->spec;
3968     bool val = !!ucontrol->value.enumerated.item[0];
3969    
3970     - if (val == codec->power_save_node)
3971     + if (val == spec->gen.power_down_unused)
3972     return 0;
3973     - codec->power_save_node = val;
3974     + /* codec->power_save_node = val; */ /* widget PM still seems broken */
3975     spec->gen.power_down_unused = val;
3976     analog_low_current_mode(codec);
3977     return 1;
3978     diff --git a/tools/testing/selftests/Makefile b/tools/testing/selftests/Makefile
3979     index 95abddcd7839..f76830643086 100644
3980     --- a/tools/testing/selftests/Makefile
3981     +++ b/tools/testing/selftests/Makefile
3982     @@ -27,7 +27,7 @@ TARGETS_HOTPLUG += memory-hotplug
3983     # Makefile to avoid test build failures when test
3984     # Makefile doesn't have explicit build rules.
3985     ifeq (1,$(MAKELEVEL))
3986     -undefine LDFLAGS
3987     +override LDFLAGS =
3988     override MAKEFLAGS =
3989     endif
3990