Magellan Linux

Contents of /trunk/kernel-magellan/patches-3.4/0104-3.4.5-all-fixes.patch



Revision 1850
Thu Jul 19 07:56:38 2012 UTC by niro
File size: 254689 bytes
-added patch for linux-3.4.5
1 diff --git a/Documentation/device-mapper/verity.txt b/Documentation/device-mapper/verity.txt
2 index 32e4879..9884681 100644
3 --- a/Documentation/device-mapper/verity.txt
4 +++ b/Documentation/device-mapper/verity.txt
5 @@ -7,39 +7,39 @@ This target is read-only.
6
7 Construction Parameters
8 =======================
9 - <version> <dev> <hash_dev> <hash_start>
10 + <version> <dev> <hash_dev>
11 <data_block_size> <hash_block_size>
12 <num_data_blocks> <hash_start_block>
13 <algorithm> <digest> <salt>
14
15 <version>
16 - This is the version number of the on-disk format.
17 + This is the type of the on-disk hash format.
18
19 0 is the original format used in the Chromium OS.
20 - The salt is appended when hashing, digests are stored continuously and
21 - the rest of the block is padded with zeros.
22 + The salt is appended when hashing, digests are stored continuously and
23 + the rest of the block is padded with zeros.
24
25 1 is the current format that should be used for new devices.
26 - The salt is prepended when hashing and each digest is
27 - padded with zeros to the power of two.
28 + The salt is prepended when hashing and each digest is
29 + padded with zeros to the power of two.
30
31 <dev>
32 - This is the device containing the data the integrity of which needs to be
33 + This is the device containing data, the integrity of which needs to be
34 checked. It may be specified as a path, like /dev/sdaX, or a device number,
35 <major>:<minor>.
36
37 <hash_dev>
38 - This is the device that that supplies the hash tree data. It may be
39 + This is the device that supplies the hash tree data. It may be
40 specified similarly to the device path and may be the same device. If the
41 - same device is used, the hash_start should be outside of the dm-verity
42 - configured device size.
43 + same device is used, the hash_start should be outside the configured
44 + dm-verity device.
45
46 <data_block_size>
47 - The block size on a data device. Each block corresponds to one digest on
48 - the hash device.
49 + The block size on a data device in bytes.
50 + Each block corresponds to one digest on the hash device.
51
52 <hash_block_size>
53 - The size of a hash block.
54 + The size of a hash block in bytes.
55
56 <num_data_blocks>
57 The number of data blocks on the data device. Additional blocks are
58 @@ -65,7 +65,7 @@ Construction Parameters
59 Theory of operation
60 ===================
61
62 -dm-verity is meant to be setup as part of a verified boot path. This
63 +dm-verity is meant to be set up as part of a verified boot path. This
64 may be anything ranging from a boot using tboot or trustedgrub to just
65 booting from a known-good device (like a USB drive or CD).
66
67 @@ -73,20 +73,20 @@ When a dm-verity device is configured, it is expected that the caller
68 has been authenticated in some way (cryptographic signatures, etc).
69 After instantiation, all hashes will be verified on-demand during
70 disk access. If they cannot be verified up to the root node of the
71 -tree, the root hash, then the I/O will fail. This should identify
72 +tree, the root hash, then the I/O will fail. This should detect
73 tampering with any data on the device and the hash data.
74
75 Cryptographic hashes are used to assert the integrity of the device on a
76 -per-block basis. This allows for a lightweight hash computation on first read
77 -into the page cache. Block hashes are stored linearly-aligned to the nearest
78 -block the size of a page.
79 +per-block basis. This allows for a lightweight hash computation on first read
80 +into the page cache. Block hashes are stored linearly, aligned to the nearest
81 +block size.
82
83 Hash Tree
84 ---------
85
86 Each node in the tree is a cryptographic hash. If it is a leaf node, the hash
87 -is of some block data on disk. If it is an intermediary node, then the hash is
88 -of a number of child nodes.
89 +of some data block on disk is calculated. If it is an intermediary node,
90 +the hash of a number of child nodes is calculated.
91
92 Each entry in the tree is a collection of neighboring nodes that fit in one
93 block. The number is determined based on block_size and the size of the
94 @@ -110,63 +110,23 @@ alg = sha256, num_blocks = 32768, block_size = 4096
95 On-disk format
96 ==============
97
98 -Below is the recommended on-disk format. The verity kernel code does not
99 -read the on-disk header. It only reads the hash blocks which directly
100 -follow the header. It is expected that a user-space tool will verify the
101 -integrity of the verity_header and then call dmsetup with the correct
102 -parameters. Alternatively, the header can be omitted and the dmsetup
103 -parameters can be passed via the kernel command-line in a rooted chain
104 -of trust where the command-line is verified.
105 +The verity kernel code does not read the verity metadata on-disk header.
106 +It only reads the hash blocks which directly follow the header.
107 +It is expected that a user-space tool will verify the integrity of the
108 +verity header.
109
110 -The on-disk format is especially useful in cases where the hash blocks
111 -are on a separate partition. The magic number allows easy identification
112 -of the partition contents. Alternatively, the hash blocks can be stored
113 -in the same partition as the data to be verified. In such a configuration
114 -the filesystem on the partition would be sized a little smaller than
115 -the full-partition, leaving room for the hash blocks.
116 -
117 -struct superblock {
118 - uint8_t signature[8]
119 - "verity\0\0";
120 -
121 - uint8_t version;
122 - 1 - current format
123 -
124 - uint8_t data_block_bits;
125 - log2(data block size)
126 -
127 - uint8_t hash_block_bits;
128 - log2(hash block size)
129 -
130 - uint8_t pad1[1];
131 - zero padding
132 -
133 - uint16_t salt_size;
134 - big-endian salt size
135 -
136 - uint8_t pad2[2];
137 - zero padding
138 -
139 - uint32_t data_blocks_hi;
140 - big-endian high 32 bits of the 64-bit number of data blocks
141 -
142 - uint32_t data_blocks_lo;
143 - big-endian low 32 bits of the 64-bit number of data blocks
144 -
145 - uint8_t algorithm[16];
146 - cryptographic algorithm
147 -
148 - uint8_t salt[384];
149 - salt (the salt size is specified above)
150 -
151 - uint8_t pad3[88];
152 - zero padding to 512-byte boundary
153 -}
154 +Alternatively, the header can be omitted and the dmsetup parameters can
155 +be passed via the kernel command-line in a rooted chain of trust where
156 +the command-line is verified.
157
158 Directly following the header (and with sector number padded to the next hash
159 block boundary) are the hash blocks which are stored a depth at a time
160 (starting from the root), sorted in order of increasing index.
161
162 +The full specification of kernel parameters and on-disk metadata format
163 +is available at the cryptsetup project's wiki page
164 + http://code.google.com/p/cryptsetup/wiki/DMVerity
165 +
166 Status
167 ======
168 V (for Valid) is returned if every check performed so far was valid.
169 @@ -174,21 +134,22 @@ If any check failed, C (for Corruption) is returned.
170
171 Example
172 =======
173 -
174 -Setup a device:
175 - dmsetup create vroot --table \
176 - "0 2097152 "\
177 - "verity 1 /dev/sda1 /dev/sda2 4096 4096 2097152 1 "\
178 +Set up a device:
179 + # dmsetup create vroot --readonly --table \
180 + "0 2097152 verity 1 /dev/sda1 /dev/sda2 4096 4096 262144 1 sha256 "\
181 "4392712ba01368efdf14b05c76f9e4df0d53664630b5d48632ed17a137f39076 "\
182 "1234000000000000000000000000000000000000000000000000000000000000"
183
184 A command line tool veritysetup is available to compute or verify
185 -the hash tree or activate the kernel driver. This is available from
186 -the LVM2 upstream repository and may be supplied as a package called
187 -device-mapper-verity-tools:
188 - git://sources.redhat.com/git/lvm2
189 - http://sourceware.org/git/?p=lvm2.git
190 - http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/verity?cvsroot=lvm2
191 -
192 -veritysetup -a vroot /dev/sda1 /dev/sda2 \
193 - 4392712ba01368efdf14b05c76f9e4df0d53664630b5d48632ed17a137f39076
194 +the hash tree or activate the kernel device. This is available from
195 +the cryptsetup upstream repository http://code.google.com/p/cryptsetup/
196 +(as a libcryptsetup extension).
197 +
198 +Create hash on the device:
199 + # veritysetup format /dev/sda1 /dev/sda2
200 + ...
201 + Root hash: 4392712ba01368efdf14b05c76f9e4df0d53664630b5d48632ed17a137f39076
202 +
203 +Activate the device:
204 + # veritysetup create vroot /dev/sda1 /dev/sda2 \
205 + 4392712ba01368efdf14b05c76f9e4df0d53664630b5d48632ed17a137f39076
206 diff --git a/Documentation/stable_kernel_rules.txt b/Documentation/stable_kernel_rules.txt
207 index f0ab5cf..4a7b54b 100644
208 --- a/Documentation/stable_kernel_rules.txt
209 +++ b/Documentation/stable_kernel_rules.txt
210 @@ -12,6 +12,12 @@ Rules on what kind of patches are accepted, and which ones are not, into the
211 marked CONFIG_BROKEN), an oops, a hang, data corruption, a real
212 security issue, or some "oh, that's not good" issue. In short, something
213 critical.
214 + - Serious issues as reported by a user of a distribution kernel may also
215 + be considered if they fix a notable performance or interactivity issue.
216 + As these fixes are not as obvious and have a higher risk of a subtle
217 + regression they should only be submitted by a distribution kernel
218 + maintainer and include an addendum linking to a bugzilla entry if it
219 + exists and additional information on the user-visible impact.
220 - New device IDs and quirks are also accepted.
221 - No "theoretical race condition" issues, unless an explanation of how the
222 race can be exploited is also provided.
223 diff --git a/arch/arm/mach-dove/include/mach/bridge-regs.h b/arch/arm/mach-dove/include/mach/bridge-regs.h
224 index 226949d..f953bb5 100644
225 --- a/arch/arm/mach-dove/include/mach/bridge-regs.h
226 +++ b/arch/arm/mach-dove/include/mach/bridge-regs.h
227 @@ -50,5 +50,6 @@
228 #define POWER_MANAGEMENT (BRIDGE_VIRT_BASE | 0x011c)
229
230 #define TIMER_VIRT_BASE (BRIDGE_VIRT_BASE | 0x0300)
231 +#define TIMER_PHYS_BASE (BRIDGE_PHYS_BASE | 0x0300)
232
233 #endif
234 diff --git a/arch/arm/mach-dove/include/mach/dove.h b/arch/arm/mach-dove/include/mach/dove.h
235 index ad1165d..d52b0ef 100644
236 --- a/arch/arm/mach-dove/include/mach/dove.h
237 +++ b/arch/arm/mach-dove/include/mach/dove.h
238 @@ -78,6 +78,7 @@
239
240 /* North-South Bridge */
241 #define BRIDGE_VIRT_BASE (DOVE_SB_REGS_VIRT_BASE | 0x20000)
242 +#define BRIDGE_PHYS_BASE (DOVE_SB_REGS_PHYS_BASE | 0x20000)
243
244 /* Cryptographic Engine */
245 #define DOVE_CRYPT_PHYS_BASE (DOVE_SB_REGS_PHYS_BASE | 0x30000)
246 diff --git a/arch/arm/mach-kirkwood/include/mach/bridge-regs.h b/arch/arm/mach-kirkwood/include/mach/bridge-regs.h
247 index 957bd79..086f25e 100644
248 --- a/arch/arm/mach-kirkwood/include/mach/bridge-regs.h
249 +++ b/arch/arm/mach-kirkwood/include/mach/bridge-regs.h
250 @@ -38,6 +38,7 @@
251 #define IRQ_MASK_HIGH_OFF 0x0014
252
253 #define TIMER_VIRT_BASE (BRIDGE_VIRT_BASE | 0x0300)
254 +#define TIMER_PHYS_BASE (BRIDGE_PHYS_BASE | 0x0300)
255
256 #define L2_CONFIG_REG (BRIDGE_VIRT_BASE | 0x0128)
257 #define L2_WRITETHROUGH 0x00000010
258 diff --git a/arch/arm/mach-kirkwood/include/mach/kirkwood.h b/arch/arm/mach-kirkwood/include/mach/kirkwood.h
259 index fede3d5..c5b6851 100644
260 --- a/arch/arm/mach-kirkwood/include/mach/kirkwood.h
261 +++ b/arch/arm/mach-kirkwood/include/mach/kirkwood.h
262 @@ -80,6 +80,7 @@
263 #define UART1_VIRT_BASE (DEV_BUS_VIRT_BASE | 0x2100)
264
265 #define BRIDGE_VIRT_BASE (KIRKWOOD_REGS_VIRT_BASE | 0x20000)
266 +#define BRIDGE_PHYS_BASE (KIRKWOOD_REGS_PHYS_BASE | 0x20000)
267
268 #define CRYPTO_PHYS_BASE (KIRKWOOD_REGS_PHYS_BASE | 0x30000)
269
270 diff --git a/arch/arm/mach-mv78xx0/include/mach/bridge-regs.h b/arch/arm/mach-mv78xx0/include/mach/bridge-regs.h
271 index c64dbb9..eb187e0 100644
272 --- a/arch/arm/mach-mv78xx0/include/mach/bridge-regs.h
273 +++ b/arch/arm/mach-mv78xx0/include/mach/bridge-regs.h
274 @@ -31,5 +31,6 @@
275 #define IRQ_MASK_HIGH_OFF 0x0014
276
277 #define TIMER_VIRT_BASE (BRIDGE_VIRT_BASE | 0x0300)
278 +#define TIMER_PHYS_BASE (BRIDGE_PHYS_BASE | 0x0300)
279
280 #endif
281 diff --git a/arch/arm/mach-mv78xx0/include/mach/mv78xx0.h b/arch/arm/mach-mv78xx0/include/mach/mv78xx0.h
282 index 3674497..e807c4c 100644
283 --- a/arch/arm/mach-mv78xx0/include/mach/mv78xx0.h
284 +++ b/arch/arm/mach-mv78xx0/include/mach/mv78xx0.h
285 @@ -42,6 +42,7 @@
286 #define MV78XX0_CORE0_REGS_PHYS_BASE 0xf1020000
287 #define MV78XX0_CORE1_REGS_PHYS_BASE 0xf1024000
288 #define MV78XX0_CORE_REGS_VIRT_BASE 0xfe400000
289 +#define MV78XX0_CORE_REGS_PHYS_BASE 0xfe400000
290 #define MV78XX0_CORE_REGS_SIZE SZ_16K
291
292 #define MV78XX0_PCIE_IO_PHYS_BASE(i) (0xf0800000 + ((i) << 20))
293 @@ -59,6 +60,7 @@
294 * Core-specific peripheral registers.
295 */
296 #define BRIDGE_VIRT_BASE (MV78XX0_CORE_REGS_VIRT_BASE)
297 +#define BRIDGE_PHYS_BASE (MV78XX0_CORE_REGS_PHYS_BASE)
298
299 /*
300 * Register Map
301 diff --git a/arch/arm/mach-orion5x/include/mach/bridge-regs.h b/arch/arm/mach-orion5x/include/mach/bridge-regs.h
302 index 96484bc..11a3c1e 100644
303 --- a/arch/arm/mach-orion5x/include/mach/bridge-regs.h
304 +++ b/arch/arm/mach-orion5x/include/mach/bridge-regs.h
305 @@ -35,5 +35,5 @@
306 #define MAIN_IRQ_MASK (ORION5X_BRIDGE_VIRT_BASE | 0x204)
307
308 #define TIMER_VIRT_BASE (ORION5X_BRIDGE_VIRT_BASE | 0x300)
309 -
310 +#define TIMER_PHYS_BASE (ORION5X_BRIDGE_PHYS_BASE | 0x300)
311 #endif
312 diff --git a/arch/arm/mach-orion5x/include/mach/orion5x.h b/arch/arm/mach-orion5x/include/mach/orion5x.h
313 index 2745f5d..683e085 100644
314 --- a/arch/arm/mach-orion5x/include/mach/orion5x.h
315 +++ b/arch/arm/mach-orion5x/include/mach/orion5x.h
316 @@ -82,6 +82,7 @@
317 #define UART1_VIRT_BASE (ORION5X_DEV_BUS_VIRT_BASE | 0x2100)
318
319 #define ORION5X_BRIDGE_VIRT_BASE (ORION5X_REGS_VIRT_BASE | 0x20000)
320 +#define ORION5X_BRIDGE_PHYS_BASE (ORION5X_REGS_PHYS_BASE | 0x20000)
321
322 #define ORION5X_PCI_VIRT_BASE (ORION5X_REGS_VIRT_BASE | 0x30000)
323
324 diff --git a/arch/arm/mach-tegra/reset.c b/arch/arm/mach-tegra/reset.c
325 index 4d6a2ee..5beb7eb 100644
326 --- a/arch/arm/mach-tegra/reset.c
327 +++ b/arch/arm/mach-tegra/reset.c
328 @@ -33,7 +33,7 @@
329
330 static bool is_enabled;
331
332 -static void tegra_cpu_reset_handler_enable(void)
333 +static void __init tegra_cpu_reset_handler_enable(void)
334 {
335 void __iomem *iram_base = IO_ADDRESS(TEGRA_IRAM_RESET_BASE);
336 void __iomem *evp_cpu_reset =
337 diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
338 index aa78de8..75f9f9d 100644
339 --- a/arch/arm/mm/mmu.c
340 +++ b/arch/arm/mm/mmu.c
341 @@ -783,6 +783,79 @@ void __init iotable_init(struct map_desc *io_desc, int nr)
342 }
343 }
344
345 +#ifndef CONFIG_ARM_LPAE
346 +
347 +/*
348 + * The Linux PMD is made of two consecutive section entries covering 2MB
349 + * (see definition in include/asm/pgtable-2level.h). However a call to
350 + * create_mapping() may optimize static mappings by using individual
351 + * 1MB section mappings. This leaves the actual PMD potentially half
352 + * initialized if the top or bottom section entry isn't used, leaving it
353 + * open to problems if a subsequent ioremap() or vmalloc() tries to use
354 + * the virtual space left free by that unused section entry.
355 + *
356 + * Let's avoid the issue by inserting dummy vm entries covering the unused
357 + * PMD halves once the static mappings are in place.
358 + */
359 +
360 +static void __init pmd_empty_section_gap(unsigned long addr)
361 +{
362 + struct vm_struct *vm;
363 +
364 + vm = early_alloc_aligned(sizeof(*vm), __alignof__(*vm));
365 + vm->addr = (void *)addr;
366 + vm->size = SECTION_SIZE;
367 + vm->flags = VM_IOREMAP | VM_ARM_STATIC_MAPPING;
368 + vm->caller = pmd_empty_section_gap;
369 + vm_area_add_early(vm);
370 +}
371 +
372 +static void __init fill_pmd_gaps(void)
373 +{
374 + struct vm_struct *vm;
375 + unsigned long addr, next = 0;
376 + pmd_t *pmd;
377 +
378 + /* we're still single threaded hence no lock needed here */
379 + for (vm = vmlist; vm; vm = vm->next) {
380 + if (!(vm->flags & VM_ARM_STATIC_MAPPING))
381 + continue;
382 + addr = (unsigned long)vm->addr;
383 + if (addr < next)
384 + continue;
385 +
386 + /*
387 + * Check if this vm starts on an odd section boundary.
388 + * If so and the first section entry for this PMD is free
389 + * then we block the corresponding virtual address.
390 + */
391 + if ((addr & ~PMD_MASK) == SECTION_SIZE) {
392 + pmd = pmd_off_k(addr);
393 + if (pmd_none(*pmd))
394 + pmd_empty_section_gap(addr & PMD_MASK);
395 + }
396 +
397 + /*
398 + * Then check if this vm ends on an odd section boundary.
399 + * If so and the second section entry for this PMD is empty
400 + * then we block the corresponding virtual address.
401 + */
402 + addr += vm->size;
403 + if ((addr & ~PMD_MASK) == SECTION_SIZE) {
404 + pmd = pmd_off_k(addr) + 1;
405 + if (pmd_none(*pmd))
406 + pmd_empty_section_gap(addr);
407 + }
408 +
409 + /* no need to look at any vm entry until we hit the next PMD */
410 + next = (addr + PMD_SIZE - 1) & PMD_MASK;
411 + }
412 +}
413 +
414 +#else
415 +#define fill_pmd_gaps() do { } while (0)
416 +#endif
417 +
418 static void * __initdata vmalloc_min =
419 (void *)(VMALLOC_END - (240 << 20) - VMALLOC_OFFSET);
420
421 @@ -1064,6 +1137,7 @@ static void __init devicemaps_init(struct machine_desc *mdesc)
422 */
423 if (mdesc->map_io)
424 mdesc->map_io();
425 + fill_pmd_gaps();
426
427 /*
428 * Finally flush the caches and tlb to ensure that we're in a
429 diff --git a/arch/arm/plat-orion/common.c b/arch/arm/plat-orion/common.c
430 index 74daf5e..331f8bb 100644
431 --- a/arch/arm/plat-orion/common.c
432 +++ b/arch/arm/plat-orion/common.c
433 @@ -570,7 +570,7 @@ void __init orion_spi_1_init(unsigned long mapbase,
434 static struct orion_wdt_platform_data orion_wdt_data;
435
436 static struct resource orion_wdt_resource =
437 - DEFINE_RES_MEM(TIMER_VIRT_BASE, 0x28);
438 + DEFINE_RES_MEM(TIMER_PHYS_BASE, 0x28);
439
440 static struct platform_device orion_wdt_device = {
441 .name = "orion_wdt",
442 diff --git a/arch/arm/plat-samsung/include/plat/map-s3c.h b/arch/arm/plat-samsung/include/plat/map-s3c.h
443 index 7d04875..c0c70a8 100644
444 --- a/arch/arm/plat-samsung/include/plat/map-s3c.h
445 +++ b/arch/arm/plat-samsung/include/plat/map-s3c.h
446 @@ -22,7 +22,7 @@
447 #define S3C24XX_VA_WATCHDOG S3C_VA_WATCHDOG
448
449 #define S3C2412_VA_SSMC S3C_ADDR_CPU(0x00000000)
450 -#define S3C2412_VA_EBI S3C_ADDR_CPU(0x00010000)
451 +#define S3C2412_VA_EBI S3C_ADDR_CPU(0x00100000)
452
453 #define S3C2410_PA_UART (0x50000000)
454 #define S3C24XX_PA_UART S3C2410_PA_UART
455 diff --git a/arch/arm/plat-samsung/include/plat/watchdog-reset.h b/arch/arm/plat-samsung/include/plat/watchdog-reset.h
456 index f19aff1..bc4db9b 100644
457 --- a/arch/arm/plat-samsung/include/plat/watchdog-reset.h
458 +++ b/arch/arm/plat-samsung/include/plat/watchdog-reset.h
459 @@ -25,7 +25,7 @@ static inline void arch_wdt_reset(void)
460
461 __raw_writel(0, S3C2410_WTCON); /* disable watchdog, to be safe */
462
463 - if (s3c2410_wdtclk)
464 + if (!IS_ERR(s3c2410_wdtclk))
465 clk_enable(s3c2410_wdtclk);
466
467 /* put initial values into count and data */
468 diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
469 index 102abd6..907e9fd 100644
470 --- a/arch/powerpc/include/asm/hw_irq.h
471 +++ b/arch/powerpc/include/asm/hw_irq.h
472 @@ -85,8 +85,8 @@ static inline bool arch_irqs_disabled(void)
473 }
474
475 #ifdef CONFIG_PPC_BOOK3E
476 -#define __hard_irq_enable() asm volatile("wrteei 1" : : : "memory");
477 -#define __hard_irq_disable() asm volatile("wrteei 0" : : : "memory");
478 +#define __hard_irq_enable() asm volatile("wrteei 1" : : : "memory")
479 +#define __hard_irq_disable() asm volatile("wrteei 0" : : : "memory")
480 #else
481 #define __hard_irq_enable() __mtmsrd(local_paca->kernel_msr | MSR_EE, 1)
482 #define __hard_irq_disable() __mtmsrd(local_paca->kernel_msr, 1)
483 @@ -102,6 +102,11 @@ static inline void hard_irq_disable(void)
484 /* include/linux/interrupt.h needs hard_irq_disable to be a macro */
485 #define hard_irq_disable hard_irq_disable
486
487 +static inline bool lazy_irq_pending(void)
488 +{
489 + return !!(get_paca()->irq_happened & ~PACA_IRQ_HARD_DIS);
490 +}
491 +
492 /*
493 * This is called by asynchronous interrupts to conditionally
494 * re-enable hard interrupts when soft-disabled after having
495 @@ -119,6 +124,8 @@ static inline bool arch_irq_disabled_regs(struct pt_regs *regs)
496 return !regs->softe;
497 }
498
499 +extern bool prep_irq_for_idle(void);
500 +
501 #else /* CONFIG_PPC64 */
502
503 #define SET_MSR_EE(x) mtmsr(x)
504 diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
505 index 641da9e..d7ebc58 100644
506 --- a/arch/powerpc/kernel/irq.c
507 +++ b/arch/powerpc/kernel/irq.c
508 @@ -229,7 +229,7 @@ notrace void arch_local_irq_restore(unsigned long en)
509 */
510 if (unlikely(irq_happened != PACA_IRQ_HARD_DIS))
511 __hard_irq_disable();
512 -#ifdef CONFIG_TRACE_IRQFLAG
513 +#ifdef CONFIG_TRACE_IRQFLAGS
514 else {
515 /*
516 * We should already be hard disabled here. We had bugs
517 @@ -277,7 +277,7 @@ EXPORT_SYMBOL(arch_local_irq_restore);
518 * NOTE: This is called with interrupts hard disabled but not marked
519 * as such in paca->irq_happened, so we need to resync this.
520 */
521 -void restore_interrupts(void)
522 +void notrace restore_interrupts(void)
523 {
524 if (irqs_disabled()) {
525 local_paca->irq_happened |= PACA_IRQ_HARD_DIS;
526 @@ -286,6 +286,52 @@ void restore_interrupts(void)
527 __hard_irq_enable();
528 }
529
530 +/*
531 + * This is a helper to use when about to go into idle low-power
532 + * when the latter has the side effect of re-enabling interrupts
533 + * (such as calling H_CEDE under pHyp).
534 + *
535 + * You call this function with interrupts soft-disabled (this is
536 + * already the case when ppc_md.power_save is called). The function
537 + * will return whether to enter power save or just return.
538 + *
539 + * In the former case, it will have notified lockdep of interrupts
540 + * being re-enabled and generally sanitized the lazy irq state,
541 + * and in the latter case it will leave with interrupts hard
542 + * disabled and marked as such, so the local_irq_enable() call
543 + * in cpu_idle() will properly re-enable everything.
544 + */
545 +bool prep_irq_for_idle(void)
546 +{
547 + /*
548 + * First we need to hard disable to ensure no interrupt
549 + * occurs before we effectively enter the low power state
550 + */
551 + hard_irq_disable();
552 +
553 + /*
554 + * If anything happened while we were soft-disabled,
555 + * we return now and do not enter the low power state.
556 + */
557 + if (lazy_irq_pending())
558 + return false;
559 +
560 + /* Tell lockdep we are about to re-enable */
561 + trace_hardirqs_on();
562 +
563 + /*
564 + * Mark interrupts as soft-enabled and clear the
565 + * PACA_IRQ_HARD_DIS from the pending mask since we
566 + * are about to hard enable as well as a side effect
567 + * of entering the low power state.
568 + */
569 + local_paca->irq_happened &= ~PACA_IRQ_HARD_DIS;
570 + local_paca->soft_enabled = 1;
571 +
572 + /* Tell the caller to enter the low power state */
573 + return true;
574 +}
575 +
576 #endif /* CONFIG_PPC64 */
577
578 int arch_show_interrupts(struct seq_file *p, int prec)
579 diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
580 index b70bf22..24b23a4 100644
581 --- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
582 +++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
583 @@ -776,7 +776,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_201)
584 lwz r3,VCORE_NAPPING_THREADS(r5)
585 lwz r4,VCPU_PTID(r9)
586 li r0,1
587 - sldi r0,r0,r4
588 + sld r0,r0,r4
589 andc. r3,r3,r0 /* no sense IPI'ing ourselves */
590 beq 43f
591 mulli r4,r4,PACA_SIZE /* get paca for thread 0 */
592 diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
593 index b6edbb3..6e8f677 100644
594 --- a/arch/powerpc/mm/numa.c
595 +++ b/arch/powerpc/mm/numa.c
596 @@ -635,7 +635,7 @@ static inline int __init read_usm_ranges(const u32 **usm)
597 */
598 static void __init parse_drconf_memory(struct device_node *memory)
599 {
600 - const u32 *dm, *usm;
601 + const u32 *uninitialized_var(dm), *usm;
602 unsigned int n, rc, ranges, is_kexec_kdump = 0;
603 unsigned long lmb_size, base, size, sz;
604 int nid;
605 diff --git a/arch/powerpc/platforms/cell/pervasive.c b/arch/powerpc/platforms/cell/pervasive.c
606 index efdacc8..d17e98b 100644
607 --- a/arch/powerpc/platforms/cell/pervasive.c
608 +++ b/arch/powerpc/platforms/cell/pervasive.c
609 @@ -42,11 +42,9 @@ static void cbe_power_save(void)
610 {
611 unsigned long ctrl, thread_switch_control;
612
613 - /*
614 - * We need to hard disable interrupts, the local_irq_enable() done by
615 - * our caller upon return will hard re-enable.
616 - */
617 - hard_irq_disable();
618 + /* Ensure our interrupt state is properly tracked */
619 + if (!prep_irq_for_idle())
620 + return;
621
622 ctrl = mfspr(SPRN_CTRLF);
623
624 @@ -81,6 +79,9 @@ static void cbe_power_save(void)
625 */
626 ctrl &= ~(CTRL_RUNLATCH | CTRL_TE);
627 mtspr(SPRN_CTRLT, ctrl);
628 +
629 + /* Re-enable interrupts in MSR */
630 + __hard_irq_enable();
631 }
632
633 static int cbe_system_reset_exception(struct pt_regs *regs)
634 diff --git a/arch/powerpc/platforms/pseries/iommu.c b/arch/powerpc/platforms/pseries/iommu.c
635 index 0915b1a..2d311c0 100644
636 --- a/arch/powerpc/platforms/pseries/iommu.c
637 +++ b/arch/powerpc/platforms/pseries/iommu.c
638 @@ -106,7 +106,7 @@ static int tce_build_pSeries(struct iommu_table *tbl, long index,
639 tcep++;
640 }
641
642 - if (tbl->it_type == TCE_PCI_SWINV_CREATE)
643 + if (tbl->it_type & TCE_PCI_SWINV_CREATE)
644 tce_invalidate_pSeries_sw(tbl, tces, tcep - 1);
645 return 0;
646 }
647 @@ -121,7 +121,7 @@ static void tce_free_pSeries(struct iommu_table *tbl, long index, long npages)
648 while (npages--)
649 *(tcep++) = 0;
650
651 - if (tbl->it_type == TCE_PCI_SWINV_FREE)
652 + if (tbl->it_type & TCE_PCI_SWINV_FREE)
653 tce_invalidate_pSeries_sw(tbl, tces, tcep - 1);
654 }
655
656 diff --git a/arch/powerpc/platforms/pseries/processor_idle.c b/arch/powerpc/platforms/pseries/processor_idle.c
657 index 41a34bc..c71be66 100644
658 --- a/arch/powerpc/platforms/pseries/processor_idle.c
659 +++ b/arch/powerpc/platforms/pseries/processor_idle.c
660 @@ -99,15 +99,18 @@ out:
661 static void check_and_cede_processor(void)
662 {
663 /*
664 - * Interrupts are soft-disabled at this point,
665 - * but not hard disabled. So an interrupt might have
666 - * occurred before entering NAP, and would be potentially
667 - * lost (edge events, decrementer events, etc...) unless
668 - * we first hard disable then check.
669 + * Ensure our interrupt state is properly tracked,
670 + * also checks if no interrupt has occurred while we
671 + * were soft-disabled
672 */
673 - hard_irq_disable();
674 - if (get_paca()->irq_happened == 0)
675 + if (prep_irq_for_idle()) {
676 cede_processor();
677 +#ifdef CONFIG_TRACE_IRQFLAGS
678 + /* Ensure that H_CEDE returns with IRQs on */
679 + if (WARN_ON(!(mfmsr() & MSR_EE)))
680 + __hard_irq_enable();
681 +#endif
682 + }
683 }
684
685 static int dedicated_cede_loop(struct cpuidle_device *dev,
686 diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
687 index 0f3ab06..eab3492 100644
688 --- a/arch/powerpc/xmon/xmon.c
689 +++ b/arch/powerpc/xmon/xmon.c
690 @@ -971,7 +971,7 @@ static int cpu_cmd(void)
691 /* print cpus waiting or in xmon */
692 printf("cpus stopped:");
693 count = 0;
694 - for (cpu = 0; cpu < NR_CPUS; ++cpu) {
695 + for_each_possible_cpu(cpu) {
696 if (cpumask_test_cpu(cpu, &cpus_in_xmon)) {
697 if (count == 0)
698 printf(" %x", cpu);
699 diff --git a/arch/x86/ia32/ia32_signal.c b/arch/x86/ia32/ia32_signal.c
700 index a69245b..4f5bfac 100644
701 --- a/arch/x86/ia32/ia32_signal.c
702 +++ b/arch/x86/ia32/ia32_signal.c
703 @@ -38,7 +38,7 @@
704 int copy_siginfo_to_user32(compat_siginfo_t __user *to, siginfo_t *from)
705 {
706 int err = 0;
707 - bool ia32 = is_ia32_task();
708 + bool ia32 = test_thread_flag(TIF_IA32);
709
710 if (!access_ok(VERIFY_WRITE, to, sizeof(compat_siginfo_t)))
711 return -EFAULT;
712 diff --git a/arch/x86/include/asm/cpufeature.h b/arch/x86/include/asm/cpufeature.h
713 index 340ee49..f91e80f 100644
714 --- a/arch/x86/include/asm/cpufeature.h
715 +++ b/arch/x86/include/asm/cpufeature.h
716 @@ -176,7 +176,7 @@
717 #define X86_FEATURE_XSAVEOPT (7*32+ 4) /* Optimized Xsave */
718 #define X86_FEATURE_PLN (7*32+ 5) /* Intel Power Limit Notification */
719 #define X86_FEATURE_PTS (7*32+ 6) /* Intel Package Thermal Status */
720 -#define X86_FEATURE_DTS (7*32+ 7) /* Digital Thermal Sensor */
721 +#define X86_FEATURE_DTHERM (7*32+ 7) /* Digital Thermal Sensor */
722 #define X86_FEATURE_HW_PSTATE (7*32+ 8) /* AMD HW-PState */
723
724 /* Virtualization flags: Linux defined, word 8 */
725 diff --git a/arch/x86/include/asm/pgtable-3level.h b/arch/x86/include/asm/pgtable-3level.h
726 index effff47..cb00ccc 100644
727 --- a/arch/x86/include/asm/pgtable-3level.h
728 +++ b/arch/x86/include/asm/pgtable-3level.h
729 @@ -31,6 +31,60 @@ static inline void native_set_pte(pte_t *ptep, pte_t pte)
730 ptep->pte_low = pte.pte_low;
731 }
732
733 +#define pmd_read_atomic pmd_read_atomic
734 +/*
735 + * pte_offset_map_lock on 32bit PAE kernels was reading the pmd_t with
736 + * a "*pmdp" dereference done by gcc. Problem is, in certain places
737 + * where pte_offset_map_lock is called, concurrent page faults are
738 + * allowed, if the mmap_sem is hold for reading. An example is mincore
739 + * vs page faults vs MADV_DONTNEED. On the page fault side
740 + * pmd_populate rightfully does a set_64bit, but if we're reading the
741 + * pmd_t with a "*pmdp" on the mincore side, a SMP race can happen
742 + * because gcc will not read the 64bit of the pmd atomically. To fix
743 + * this all places running pmd_offset_map_lock() while holding the
744 + * mmap_sem in read mode, shall read the pmdp pointer using this
745 + * function to know if the pmd is null nor not, and in turn to know if
746 + * they can run pmd_offset_map_lock or pmd_trans_huge or other pmd
747 + * operations.
748 + *
749 + * Without THP if the mmap_sem is hold for reading, the pmd can only
750 + * transition from null to not null while pmd_read_atomic runs. So
751 + * we can always return atomic pmd values with this function.
752 + *
753 + * With THP if the mmap_sem is hold for reading, the pmd can become
754 + * trans_huge or none or point to a pte (and in turn become "stable")
755 + * at any time under pmd_read_atomic. We could read it really
756 + * atomically here with a atomic64_read for the THP enabled case (and
757 + * it would be a whole lot simpler), but to avoid using cmpxchg8b we
758 + * only return an atomic pmdval if the low part of the pmdval is later
759 + * found stable (i.e. pointing to a pte). And we're returning a none
760 + * pmdval if the low part of the pmd is none. In some cases the high
761 + * and low part of the pmdval returned may not be consistent if THP is
762 + * enabled (the low part may point to previously mapped hugepage,
763 + * while the high part may point to a more recently mapped hugepage),
764 + * but pmd_none_or_trans_huge_or_clear_bad() only needs the low part
765 + * of the pmd to be read atomically to decide if the pmd is unstable
766 + * or not, with the only exception of when the low part of the pmd is
767 + * zero in which case we return a none pmd.
768 + */
769 +static inline pmd_t pmd_read_atomic(pmd_t *pmdp)
770 +{
771 + pmdval_t ret;
772 + u32 *tmp = (u32 *)pmdp;
773 +
774 + ret = (pmdval_t) (*tmp);
775 + if (ret) {
776 + /*
777 + * If the low part is null, we must not read the high part
778 + * or we can end up with a partial pmd.
779 + */
780 + smp_rmb();
781 + ret |= ((pmdval_t)*(tmp + 1)) << 32;
782 + }
783 +
784 + return (pmd_t) { ret };
785 +}
786 +
787 static inline void native_set_pte_atomic(pte_t *ptep, pte_t pte)
788 {
789 set_64bit((unsigned long long *)(ptep), native_pte_val(pte));
790 diff --git a/arch/x86/kernel/acpi/boot.c b/arch/x86/kernel/acpi/boot.c
791 index 7c439fe..bbdffc2 100644
792 --- a/arch/x86/kernel/acpi/boot.c
793 +++ b/arch/x86/kernel/acpi/boot.c
794 @@ -422,12 +422,14 @@ acpi_parse_int_src_ovr(struct acpi_subtable_header * header,
795 return 0;
796 }
797
798 - if (intsrc->source_irq == 0 && intsrc->global_irq == 2) {
799 + if (intsrc->source_irq == 0) {
800 if (acpi_skip_timer_override) {
801 - printk(PREFIX "BIOS IRQ0 pin2 override ignored.\n");
802 + printk(PREFIX "BIOS IRQ0 override ignored.\n");
803 return 0;
804 }
805 - if (acpi_fix_pin2_polarity && (intsrc->inti_flags & ACPI_MADT_POLARITY_MASK)) {
806 +
807 + if ((intsrc->global_irq == 2) && acpi_fix_pin2_polarity
808 + && (intsrc->inti_flags & ACPI_MADT_POLARITY_MASK)) {
809 intsrc->inti_flags &= ~ACPI_MADT_POLARITY_MASK;
810 printk(PREFIX "BIOS IRQ0 pin2 override: forcing polarity to high active.\n");
811 }
812 @@ -1334,17 +1336,12 @@ static int __init dmi_disable_acpi(const struct dmi_system_id *d)
813 }
814
815 /*
816 - * Force ignoring BIOS IRQ0 pin2 override
817 + * Force ignoring BIOS IRQ0 override
818 */
819 static int __init dmi_ignore_irq0_timer_override(const struct dmi_system_id *d)
820 {
821 - /*
822 - * The ati_ixp4x0_rev() early PCI quirk should have set
823 - * the acpi_skip_timer_override flag already:
824 - */
825 if (!acpi_skip_timer_override) {
826 - WARN(1, KERN_ERR "ati_ixp4x0 quirk not complete.\n");
827 - pr_notice("%s detected: Ignoring BIOS IRQ0 pin2 override\n",
828 + pr_notice("%s detected: Ignoring BIOS IRQ0 override\n",
829 d->ident);
830 acpi_skip_timer_override = 1;
831 }
832 @@ -1438,7 +1435,7 @@ static struct dmi_system_id __initdata acpi_dmi_table_late[] = {
833 * is enabled. This input is incorrectly designated the
834 * ISA IRQ 0 via an interrupt source override even though
835 * it is wired to the output of the master 8259A and INTIN0
836 - * is not connected at all. Force ignoring BIOS IRQ0 pin2
837 + * is not connected at all. Force ignoring BIOS IRQ0
838 * override in that cases.
839 */
840 {
841 @@ -1473,6 +1470,14 @@ static struct dmi_system_id __initdata acpi_dmi_table_late[] = {
842 DMI_MATCH(DMI_PRODUCT_NAME, "HP Compaq 6715b"),
843 },
844 },
845 + {
846 + .callback = dmi_ignore_irq0_timer_override,
847 + .ident = "FUJITSU SIEMENS",
848 + .matches = {
849 + DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU SIEMENS"),
850 + DMI_MATCH(DMI_PRODUCT_NAME, "AMILO PRO V2030"),
851 + },
852 + },
853 {}
854 };
855
856 diff --git a/arch/x86/kernel/cpu/scattered.c b/arch/x86/kernel/cpu/scattered.c
857 index addf9e8..ee8e9ab 100644
858 --- a/arch/x86/kernel/cpu/scattered.c
859 +++ b/arch/x86/kernel/cpu/scattered.c
860 @@ -31,7 +31,7 @@ void __cpuinit init_scattered_cpuid_features(struct cpuinfo_x86 *c)
861 const struct cpuid_bit *cb;
862
863 static const struct cpuid_bit __cpuinitconst cpuid_bits[] = {
864 - { X86_FEATURE_DTS, CR_EAX, 0, 0x00000006, 0 },
865 + { X86_FEATURE_DTHERM, CR_EAX, 0, 0x00000006, 0 },
866 { X86_FEATURE_IDA, CR_EAX, 1, 0x00000006, 0 },
867 { X86_FEATURE_ARAT, CR_EAX, 2, 0x00000006, 0 },
868 { X86_FEATURE_PLN, CR_EAX, 4, 0x00000006, 0 },
869 diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
870 index d840e69..3034ee5 100644
871 --- a/arch/x86/kernel/reboot.c
872 +++ b/arch/x86/kernel/reboot.c
873 @@ -471,6 +471,14 @@ static struct dmi_system_id __initdata pci_reboot_dmi_table[] = {
874 DMI_MATCH(DMI_PRODUCT_NAME, "OptiPlex 990"),
875 },
876 },
877 + { /* Handle problems with rebooting on the Precision M6600. */
878 + .callback = set_pci_reboot,
879 + .ident = "Dell OptiPlex 990",
880 + .matches = {
881 + DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
882 + DMI_MATCH(DMI_PRODUCT_NAME, "Precision M6600"),
883 + },
884 + },
885 { }
886 };
887
888 diff --git a/drivers/acpi/acpi_pad.c b/drivers/acpi/acpi_pad.c
889 index a43fa1a..1502c502 100644
890 --- a/drivers/acpi/acpi_pad.c
891 +++ b/drivers/acpi/acpi_pad.c
892 @@ -36,6 +36,7 @@
893 #define ACPI_PROCESSOR_AGGREGATOR_DEVICE_NAME "Processor Aggregator"
894 #define ACPI_PROCESSOR_AGGREGATOR_NOTIFY 0x80
895 static DEFINE_MUTEX(isolated_cpus_lock);
896 +static DEFINE_MUTEX(round_robin_lock);
897
898 static unsigned long power_saving_mwait_eax;
899
900 @@ -107,7 +108,7 @@ static void round_robin_cpu(unsigned int tsk_index)
901 if (!alloc_cpumask_var(&tmp, GFP_KERNEL))
902 return;
903
904 - mutex_lock(&isolated_cpus_lock);
905 + mutex_lock(&round_robin_lock);
906 cpumask_clear(tmp);
907 for_each_cpu(cpu, pad_busy_cpus)
908 cpumask_or(tmp, tmp, topology_thread_cpumask(cpu));
909 @@ -116,7 +117,7 @@ static void round_robin_cpu(unsigned int tsk_index)
910 if (cpumask_empty(tmp))
911 cpumask_andnot(tmp, cpu_online_mask, pad_busy_cpus);
912 if (cpumask_empty(tmp)) {
913 - mutex_unlock(&isolated_cpus_lock);
914 + mutex_unlock(&round_robin_lock);
915 return;
916 }
917 for_each_cpu(cpu, tmp) {
918 @@ -131,7 +132,7 @@ static void round_robin_cpu(unsigned int tsk_index)
919 tsk_in_cpu[tsk_index] = preferred_cpu;
920 cpumask_set_cpu(preferred_cpu, pad_busy_cpus);
921 cpu_weight[preferred_cpu]++;
922 - mutex_unlock(&isolated_cpus_lock);
923 + mutex_unlock(&round_robin_lock);
924
925 set_cpus_allowed_ptr(current, cpumask_of(preferred_cpu));
926 }
927 diff --git a/drivers/acpi/acpica/hwsleep.c b/drivers/acpi/acpica/hwsleep.c
928 index 0ed85ca..615996a 100644
929 --- a/drivers/acpi/acpica/hwsleep.c
930 +++ b/drivers/acpi/acpica/hwsleep.c
931 @@ -95,18 +95,6 @@ acpi_status acpi_hw_legacy_sleep(u8 sleep_state, u8 flags)
932 return_ACPI_STATUS(status);
933 }
934
935 - if (sleep_state != ACPI_STATE_S5) {
936 - /*
937 - * Disable BM arbitration. This feature is contained within an
938 - * optional register (PM2 Control), so ignore a BAD_ADDRESS
939 - * exception.
940 - */
941 - status = acpi_write_bit_register(ACPI_BITREG_ARB_DISABLE, 1);
942 - if (ACPI_FAILURE(status) && (status != AE_BAD_ADDRESS)) {
943 - return_ACPI_STATUS(status);
944 - }
945 - }
946 -
947 /*
948 * 1) Disable/Clear all GPEs
949 * 2) Enable all wakeup GPEs
950 @@ -364,16 +352,6 @@ acpi_status acpi_hw_legacy_wake(u8 sleep_state, u8 flags)
951 [ACPI_EVENT_POWER_BUTTON].
952 status_register_id, ACPI_CLEAR_STATUS);
953
954 - /*
955 - * Enable BM arbitration. This feature is contained within an
956 - * optional register (PM2 Control), so ignore a BAD_ADDRESS
957 - * exception.
958 - */
959 - status = acpi_write_bit_register(ACPI_BITREG_ARB_DISABLE, 0);
960 - if (ACPI_FAILURE(status) && (status != AE_BAD_ADDRESS)) {
961 - return_ACPI_STATUS(status);
962 - }
963 -
964 acpi_hw_execute_sleep_method(METHOD_PATHNAME__SST, ACPI_SST_WORKING);
965 return_ACPI_STATUS(status);
966 }
967 diff --git a/drivers/acpi/apei/apei-base.c b/drivers/acpi/apei/apei-base.c
968 index 5577762..6686b1e 100644
969 --- a/drivers/acpi/apei/apei-base.c
970 +++ b/drivers/acpi/apei/apei-base.c
971 @@ -243,7 +243,7 @@ static int pre_map_gar_callback(struct apei_exec_context *ctx,
972 u8 ins = entry->instruction;
973
974 if (ctx->ins_table[ins].flags & APEI_EXEC_INS_ACCESS_REGISTER)
975 - return acpi_os_map_generic_address(&entry->register_region);
976 + return apei_map_generic_address(&entry->register_region);
977
978 return 0;
979 }
980 @@ -276,7 +276,7 @@ static int post_unmap_gar_callback(struct apei_exec_context *ctx,
981 u8 ins = entry->instruction;
982
983 if (ctx->ins_table[ins].flags & APEI_EXEC_INS_ACCESS_REGISTER)
984 - acpi_os_unmap_generic_address(&entry->register_region);
985 + apei_unmap_generic_address(&entry->register_region);
986
987 return 0;
988 }
989 @@ -606,6 +606,19 @@ static int apei_check_gar(struct acpi_generic_address *reg, u64 *paddr,
990 return 0;
991 }
992
993 +int apei_map_generic_address(struct acpi_generic_address *reg)
994 +{
995 + int rc;
996 + u32 access_bit_width;
997 + u64 address;
998 +
999 + rc = apei_check_gar(reg, &address, &access_bit_width);
1000 + if (rc)
1001 + return rc;
1002 + return acpi_os_map_generic_address(reg);
1003 +}
1004 +EXPORT_SYMBOL_GPL(apei_map_generic_address);
1005 +
1006 /* read GAR in interrupt (including NMI) or process context */
1007 int apei_read(u64 *val, struct acpi_generic_address *reg)
1008 {
1009 diff --git a/drivers/acpi/apei/apei-internal.h b/drivers/acpi/apei/apei-internal.h
1010 index cca240a..f220d64 100644
1011 --- a/drivers/acpi/apei/apei-internal.h
1012 +++ b/drivers/acpi/apei/apei-internal.h
1013 @@ -7,6 +7,8 @@
1014 #define APEI_INTERNAL_H
1015
1016 #include <linux/cper.h>
1017 +#include <linux/acpi.h>
1018 +#include <linux/acpi_io.h>
1019
1020 struct apei_exec_context;
1021
1022 @@ -68,6 +70,13 @@ static inline int apei_exec_run_optional(struct apei_exec_context *ctx, u8 actio
1023 /* IP has been set in instruction function */
1024 #define APEI_EXEC_SET_IP 1
1025
1026 +int apei_map_generic_address(struct acpi_generic_address *reg);
1027 +
1028 +static inline void apei_unmap_generic_address(struct acpi_generic_address *reg)
1029 +{
1030 + acpi_os_unmap_generic_address(reg);
1031 +}
1032 +
1033 int apei_read(u64 *val, struct acpi_generic_address *reg);
1034 int apei_write(u64 val, struct acpi_generic_address *reg);
1035
1036 diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
1037 index 9b3cac0..1599566 100644
1038 --- a/drivers/acpi/apei/ghes.c
1039 +++ b/drivers/acpi/apei/ghes.c
1040 @@ -301,7 +301,7 @@ static struct ghes *ghes_new(struct acpi_hest_generic *generic)
1041 if (!ghes)
1042 return ERR_PTR(-ENOMEM);
1043 ghes->generic = generic;
1044 - rc = acpi_os_map_generic_address(&generic->error_status_address);
1045 + rc = apei_map_generic_address(&generic->error_status_address);
1046 if (rc)
1047 goto err_free;
1048 error_block_length = generic->error_block_length;
1049 @@ -321,7 +321,7 @@ static struct ghes *ghes_new(struct acpi_hest_generic *generic)
1050 return ghes;
1051
1052 err_unmap:
1053 - acpi_os_unmap_generic_address(&generic->error_status_address);
1054 + apei_unmap_generic_address(&generic->error_status_address);
1055 err_free:
1056 kfree(ghes);
1057 return ERR_PTR(rc);
1058 @@ -330,7 +330,7 @@ err_free:
1059 static void ghes_fini(struct ghes *ghes)
1060 {
1061 kfree(ghes->estatus);
1062 - acpi_os_unmap_generic_address(&ghes->generic->error_status_address);
1063 + apei_unmap_generic_address(&ghes->generic->error_status_address);
1064 }
1065
1066 enum {
1067 diff --git a/drivers/acpi/sleep.c b/drivers/acpi/sleep.c
1068 index eb6fd23..2377445 100644
1069 --- a/drivers/acpi/sleep.c
1070 +++ b/drivers/acpi/sleep.c
1071 @@ -732,8 +732,8 @@ int acpi_pm_device_sleep_state(struct device *dev, int *d_min_p)
1072 * can wake the system. _S0W may be valid, too.
1073 */
1074 if (acpi_target_sleep_state == ACPI_STATE_S0 ||
1075 - (device_may_wakeup(dev) &&
1076 - adev->wakeup.sleep_state <= acpi_target_sleep_state)) {
1077 + (device_may_wakeup(dev) && adev->wakeup.flags.valid &&
1078 + adev->wakeup.sleep_state >= acpi_target_sleep_state)) {
1079 acpi_status status;
1080
1081 acpi_method[3] = 'W';
1082 diff --git a/drivers/acpi/sysfs.c b/drivers/acpi/sysfs.c
1083 index 9f66181..240a244 100644
1084 --- a/drivers/acpi/sysfs.c
1085 +++ b/drivers/acpi/sysfs.c
1086 @@ -173,7 +173,7 @@ static int param_set_trace_state(const char *val, struct kernel_param *kp)
1087 {
1088 int result = 0;
1089
1090 - if (!strncmp(val, "enable", strlen("enable") - 1)) {
1091 + if (!strncmp(val, "enable", strlen("enable"))) {
1092 result = acpi_debug_trace(trace_method_name, trace_debug_level,
1093 trace_debug_layer, 0);
1094 if (result)
1095 @@ -181,7 +181,7 @@ static int param_set_trace_state(const char *val, struct kernel_param *kp)
1096 goto exit;
1097 }
1098
1099 - if (!strncmp(val, "disable", strlen("disable") - 1)) {
1100 + if (!strncmp(val, "disable", strlen("disable"))) {
1101 int name = 0;
1102 result = acpi_debug_trace((char *)&name, trace_debug_level,
1103 trace_debug_layer, 0);
1104 diff --git a/drivers/acpi/video.c b/drivers/acpi/video.c
1105 index 66e8f73..48b5a3c 100644
1106 --- a/drivers/acpi/video.c
1107 +++ b/drivers/acpi/video.c
1108 @@ -558,6 +558,8 @@ acpi_video_bus_DOS(struct acpi_video_bus *video, int bios_flag, int lcd_flag)
1109 union acpi_object arg0 = { ACPI_TYPE_INTEGER };
1110 struct acpi_object_list args = { 1, &arg0 };
1111
1112 + if (!video->cap._DOS)
1113 + return 0;
1114
1115 if (bios_flag < 0 || bios_flag > 3 || lcd_flag < 0 || lcd_flag > 1)
1116 return -EINVAL;
1117 diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
1118 index b462c0e..3085f9b 100644
1119 --- a/drivers/base/power/main.c
1120 +++ b/drivers/base/power/main.c
1121 @@ -1021,7 +1021,7 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async)
1122 dpm_wait_for_children(dev, async);
1123
1124 if (async_error)
1125 - return 0;
1126 + goto Complete;
1127
1128 pm_runtime_get_noresume(dev);
1129 if (pm_runtime_barrier(dev) && device_may_wakeup(dev))
1130 @@ -1030,7 +1030,7 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async)
1131 if (pm_wakeup_pending()) {
1132 pm_runtime_put_sync(dev);
1133 async_error = -EBUSY;
1134 - return 0;
1135 + goto Complete;
1136 }
1137
1138 device_lock(dev);
1139 @@ -1087,6 +1087,8 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async)
1140 }
1141
1142 device_unlock(dev);
1143 +
1144 + Complete:
1145 complete_all(&dev->power.completion);
1146
1147 if (error) {
1148 diff --git a/drivers/block/umem.c b/drivers/block/umem.c
1149 index aa27120..9a72277 100644
1150 --- a/drivers/block/umem.c
1151 +++ b/drivers/block/umem.c
1152 @@ -513,6 +513,44 @@ static void process_page(unsigned long data)
1153 }
1154 }
1155
1156 +struct mm_plug_cb {
1157 + struct blk_plug_cb cb;
1158 + struct cardinfo *card;
1159 +};
1160 +
1161 +static void mm_unplug(struct blk_plug_cb *cb)
1162 +{
1163 + struct mm_plug_cb *mmcb = container_of(cb, struct mm_plug_cb, cb);
1164 +
1165 + spin_lock_irq(&mmcb->card->lock);
1166 + activate(mmcb->card);
1167 + spin_unlock_irq(&mmcb->card->lock);
1168 + kfree(mmcb);
1169 +}
1170 +
1171 +static int mm_check_plugged(struct cardinfo *card)
1172 +{
1173 + struct blk_plug *plug = current->plug;
1174 + struct mm_plug_cb *mmcb;
1175 +
1176 + if (!plug)
1177 + return 0;
1178 +
1179 + list_for_each_entry(mmcb, &plug->cb_list, cb.list) {
1180 + if (mmcb->cb.callback == mm_unplug && mmcb->card == card)
1181 + return 1;
1182 + }
1183 + /* Not currently on the callback list */
1184 + mmcb = kmalloc(sizeof(*mmcb), GFP_ATOMIC);
1185 + if (!mmcb)
1186 + return 0;
1187 +
1188 + mmcb->card = card;
1189 + mmcb->cb.callback = mm_unplug;
1190 + list_add(&mmcb->cb.list, &plug->cb_list);
1191 + return 1;
1192 +}
1193 +
1194 static void mm_make_request(struct request_queue *q, struct bio *bio)
1195 {
1196 struct cardinfo *card = q->queuedata;
1197 @@ -523,6 +561,8 @@ static void mm_make_request(struct request_queue *q, struct bio *bio)
1198 *card->biotail = bio;
1199 bio->bi_next = NULL;
1200 card->biotail = &bio->bi_next;
1201 + if (bio->bi_rw & REQ_SYNC || !mm_check_plugged(card))
1202 + activate(card);
1203 spin_unlock_irq(&card->lock);
1204
1205 return;
1206 diff --git a/drivers/block/xen-blkback/common.h b/drivers/block/xen-blkback/common.h
1207 index 773cf27..9ad3b5e 100644
1208 --- a/drivers/block/xen-blkback/common.h
1209 +++ b/drivers/block/xen-blkback/common.h
1210 @@ -257,6 +257,7 @@ static inline void blkif_get_x86_32_req(struct blkif_request *dst,
1211 break;
1212 case BLKIF_OP_DISCARD:
1213 dst->u.discard.flag = src->u.discard.flag;
1214 + dst->u.discard.id = src->u.discard.id;
1215 dst->u.discard.sector_number = src->u.discard.sector_number;
1216 dst->u.discard.nr_sectors = src->u.discard.nr_sectors;
1217 break;
1218 @@ -287,6 +288,7 @@ static inline void blkif_get_x86_64_req(struct blkif_request *dst,
1219 break;
1220 case BLKIF_OP_DISCARD:
1221 dst->u.discard.flag = src->u.discard.flag;
1222 + dst->u.discard.id = src->u.discard.id;
1223 dst->u.discard.sector_number = src->u.discard.sector_number;
1224 dst->u.discard.nr_sectors = src->u.discard.nr_sectors;
1225 break;
1226 diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
1227 index 9cf6f59..2da025e 100644
1228 --- a/drivers/clk/clk.c
1229 +++ b/drivers/clk/clk.c
1230 @@ -997,7 +997,7 @@ static struct clk *__clk_init_parent(struct clk *clk)
1231
1232 if (!clk->parents)
1233 clk->parents =
1234 - kmalloc((sizeof(struct clk*) * clk->num_parents),
1235 + kzalloc((sizeof(struct clk*) * clk->num_parents),
1236 GFP_KERNEL);
1237
1238 if (!clk->parents)
1239 @@ -1062,21 +1062,24 @@ static int __clk_set_parent(struct clk *clk, struct clk *parent)
1240
1241 old_parent = clk->parent;
1242
1243 - /* find index of new parent clock using cached parent ptrs */
1244 - for (i = 0; i < clk->num_parents; i++)
1245 - if (clk->parents[i] == parent)
1246 - break;
1247 + if (!clk->parents)
1248 + clk->parents = kzalloc((sizeof(struct clk*) * clk->num_parents),
1249 + GFP_KERNEL);
1250
1251 /*
1252 - * find index of new parent clock using string name comparison
1253 - * also try to cache the parent to avoid future calls to __clk_lookup
1254 + * find index of new parent clock using cached parent ptrs,
1255 + * or if not yet cached, use string name comparison and cache
1256 + * them now to avoid future calls to __clk_lookup.
1257 */
1258 - if (i == clk->num_parents)
1259 - for (i = 0; i < clk->num_parents; i++)
1260 - if (!strcmp(clk->parent_names[i], parent->name)) {
1261 + for (i = 0; i < clk->num_parents; i++) {
1262 + if (clk->parents && clk->parents[i] == parent)
1263 + break;
1264 + else if (!strcmp(clk->parent_names[i], parent->name)) {
1265 + if (clk->parents)
1266 clk->parents[i] = __clk_lookup(parent->name);
1267 - break;
1268 - }
1269 + break;
1270 + }
1271 + }
1272
1273 if (i == clk->num_parents) {
1274 pr_debug("%s: clock %s is not a possible parent of clock %s\n",
1275 diff --git a/drivers/dma/pl330.c b/drivers/dma/pl330.c
1276 index fa3fb21..8c44f17 100644
1277 --- a/drivers/dma/pl330.c
1278 +++ b/drivers/dma/pl330.c
1279 @@ -2322,7 +2322,7 @@ static void pl330_tasklet(unsigned long data)
1280 /* Pick up ripe tomatoes */
1281 list_for_each_entry_safe(desc, _dt, &pch->work_list, node)
1282 if (desc->status == DONE) {
1283 - if (pch->cyclic)
1284 + if (!pch->cyclic)
1285 dma_cookie_complete(&desc->txd);
1286 list_move_tail(&desc->node, &list);
1287 }
1288 diff --git a/drivers/gpio/gpio-wm8994.c b/drivers/gpio/gpio-wm8994.c
1289 index 92ea535..aa61ad2 100644
1290 --- a/drivers/gpio/gpio-wm8994.c
1291 +++ b/drivers/gpio/gpio-wm8994.c
1292 @@ -89,8 +89,11 @@ static int wm8994_gpio_direction_out(struct gpio_chip *chip,
1293 struct wm8994_gpio *wm8994_gpio = to_wm8994_gpio(chip);
1294 struct wm8994 *wm8994 = wm8994_gpio->wm8994;
1295
1296 + if (value)
1297 + value = WM8994_GPN_LVL;
1298 +
1299 return wm8994_set_bits(wm8994, WM8994_GPIO_1 + offset,
1300 - WM8994_GPN_DIR, 0);
1301 + WM8994_GPN_DIR | WM8994_GPN_LVL, value);
1302 }
1303
1304 static void wm8994_gpio_set(struct gpio_chip *chip, unsigned offset, int value)
1305 diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
1306 index 5a18b0d..6e38325 100644
1307 --- a/drivers/gpu/drm/drm_edid.c
1308 +++ b/drivers/gpu/drm/drm_edid.c
1309 @@ -574,7 +574,7 @@ static bool
1310 drm_monitor_supports_rb(struct edid *edid)
1311 {
1312 if (edid->revision >= 4) {
1313 - bool ret;
1314 + bool ret = false;
1315 drm_for_each_detailed_block((u8 *)edid, is_rb, &ret);
1316 return ret;
1317 }
1318 diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
1319 index f57e5cf..26c67a7 100644
1320 --- a/drivers/gpu/drm/i915/i915_irq.c
1321 +++ b/drivers/gpu/drm/i915/i915_irq.c
1322 @@ -424,6 +424,30 @@ static void gen6_pm_rps_work(struct work_struct *work)
1323 mutex_unlock(&dev_priv->dev->struct_mutex);
1324 }
1325
1326 +static void gen6_queue_rps_work(struct drm_i915_private *dev_priv,
1327 + u32 pm_iir)
1328 +{
1329 + unsigned long flags;
1330 +
1331 + /*
1332 + * IIR bits should never already be set because IMR should
1333 + * prevent an interrupt from being shown in IIR. The warning
1334 + * displays a case where we've unsafely cleared
1335 + * dev_priv->pm_iir. Although missing an interrupt of the same
1336 + * type is not a problem, it displays a problem in the logic.
1337 + *
1338 + * The mask bit in IMR is cleared by rps_work.
1339 + */
1340 +
1341 + spin_lock_irqsave(&dev_priv->rps_lock, flags);
1342 + dev_priv->pm_iir |= pm_iir;
1343 + I915_WRITE(GEN6_PMIMR, dev_priv->pm_iir);
1344 + POSTING_READ(GEN6_PMIMR);
1345 + spin_unlock_irqrestore(&dev_priv->rps_lock, flags);
1346 +
1347 + queue_work(dev_priv->wq, &dev_priv->rps_work);
1348 +}
1349 +
1350 static void pch_irq_handler(struct drm_device *dev, u32 pch_iir)
1351 {
1352 drm_i915_private_t *dev_priv = (drm_i915_private_t *) dev->dev_private;
1353 @@ -529,16 +553,8 @@ static irqreturn_t ivybridge_irq_handler(DRM_IRQ_ARGS)
1354 pch_irq_handler(dev, pch_iir);
1355 }
1356
1357 - if (pm_iir & GEN6_PM_DEFERRED_EVENTS) {
1358 - unsigned long flags;
1359 - spin_lock_irqsave(&dev_priv->rps_lock, flags);
1360 - WARN(dev_priv->pm_iir & pm_iir, "Missed a PM interrupt\n");
1361 - dev_priv->pm_iir |= pm_iir;
1362 - I915_WRITE(GEN6_PMIMR, dev_priv->pm_iir);
1363 - POSTING_READ(GEN6_PMIMR);
1364 - spin_unlock_irqrestore(&dev_priv->rps_lock, flags);
1365 - queue_work(dev_priv->wq, &dev_priv->rps_work);
1366 - }
1367 + if (pm_iir & GEN6_PM_DEFERRED_EVENTS)
1368 + gen6_queue_rps_work(dev_priv, pm_iir);
1369
1370 /* should clear PCH hotplug event before clear CPU irq */
1371 I915_WRITE(SDEIIR, pch_iir);
1372 @@ -634,25 +650,8 @@ static irqreturn_t ironlake_irq_handler(DRM_IRQ_ARGS)
1373 i915_handle_rps_change(dev);
1374 }
1375
1376 - if (IS_GEN6(dev) && pm_iir & GEN6_PM_DEFERRED_EVENTS) {
1377 - /*
1378 - * IIR bits should never already be set because IMR should
1379 - * prevent an interrupt from being shown in IIR. The warning
1380 - * displays a case where we've unsafely cleared
1381 - * dev_priv->pm_iir. Although missing an interrupt of the same
1382 - * type is not a problem, it displays a problem in the logic.
1383 - *
1384 - * The mask bit in IMR is cleared by rps_work.
1385 - */
1386 - unsigned long flags;
1387 - spin_lock_irqsave(&dev_priv->rps_lock, flags);
1388 - WARN(dev_priv->pm_iir & pm_iir, "Missed a PM interrupt\n");
1389 - dev_priv->pm_iir |= pm_iir;
1390 - I915_WRITE(GEN6_PMIMR, dev_priv->pm_iir);
1391 - POSTING_READ(GEN6_PMIMR);
1392 - spin_unlock_irqrestore(&dev_priv->rps_lock, flags);
1393 - queue_work(dev_priv->wq, &dev_priv->rps_work);
1394 - }
1395 + if (IS_GEN6(dev) && pm_iir & GEN6_PM_DEFERRED_EVENTS)
1396 + gen6_queue_rps_work(dev_priv, pm_iir);
1397
1398 /* should clear PCH hotplug event before clear CPU irq */
1399 I915_WRITE(SDEIIR, pch_iir);
1400 diff --git a/drivers/gpu/drm/i915/i915_suspend.c b/drivers/gpu/drm/i915/i915_suspend.c
1401 index 2b5eb22..0d13778 100644
1402 --- a/drivers/gpu/drm/i915/i915_suspend.c
1403 +++ b/drivers/gpu/drm/i915/i915_suspend.c
1404 @@ -740,8 +740,11 @@ static void i915_restore_display(struct drm_device *dev)
1405 if (HAS_PCH_SPLIT(dev)) {
1406 I915_WRITE(BLC_PWM_PCH_CTL1, dev_priv->saveBLC_PWM_CTL);
1407 I915_WRITE(BLC_PWM_PCH_CTL2, dev_priv->saveBLC_PWM_CTL2);
1408 - I915_WRITE(BLC_PWM_CPU_CTL, dev_priv->saveBLC_CPU_PWM_CTL);
1409 + /* NOTE: BLC_PWM_CPU_CTL must be written after BLC_PWM_CPU_CTL2;
1410 + * otherwise we get blank eDP screen after S3 on some machines
1411 + */
1412 I915_WRITE(BLC_PWM_CPU_CTL2, dev_priv->saveBLC_CPU_PWM_CTL2);
1413 + I915_WRITE(BLC_PWM_CPU_CTL, dev_priv->saveBLC_CPU_PWM_CTL);
1414 I915_WRITE(PCH_PP_ON_DELAYS, dev_priv->savePP_ON_DELAYS);
1415 I915_WRITE(PCH_PP_OFF_DELAYS, dev_priv->savePP_OFF_DELAYS);
1416 I915_WRITE(PCH_PP_DIVISOR, dev_priv->savePP_DIVISOR);
1417 diff --git a/drivers/gpu/drm/nouveau/nouveau_fbcon.c b/drivers/gpu/drm/nouveau/nouveau_fbcon.c
1418 index 8113e92..6fd2211 100644
1419 --- a/drivers/gpu/drm/nouveau/nouveau_fbcon.c
1420 +++ b/drivers/gpu/drm/nouveau/nouveau_fbcon.c
1421 @@ -497,7 +497,7 @@ int nouveau_fbcon_init(struct drm_device *dev)
1422 nfbdev->helper.funcs = &nouveau_fbcon_helper_funcs;
1423
1424 ret = drm_fb_helper_init(dev, &nfbdev->helper,
1425 - nv_two_heads(dev) ? 2 : 1, 4);
1426 + dev->mode_config.num_crtc, 4);
1427 if (ret) {
1428 kfree(nfbdev);
1429 return ret;
1430 diff --git a/drivers/gpu/drm/radeon/radeon_gart.c b/drivers/gpu/drm/radeon/radeon_gart.c
1431 index 62050f5..2a4c592 100644
1432 --- a/drivers/gpu/drm/radeon/radeon_gart.c
1433 +++ b/drivers/gpu/drm/radeon/radeon_gart.c
1434 @@ -289,8 +289,9 @@ int radeon_vm_manager_init(struct radeon_device *rdev)
1435 rdev->vm_manager.enabled = false;
1436
1437 /* mark first vm as always in use, it's the system one */
1438 + /* allocate enough for 2 full VM pts */
1439 r = radeon_sa_bo_manager_init(rdev, &rdev->vm_manager.sa_manager,
1440 - rdev->vm_manager.max_pfn * 8,
1441 + rdev->vm_manager.max_pfn * 8 * 2,
1442 RADEON_GEM_DOMAIN_VRAM);
1443 if (r) {
1444 dev_err(rdev->dev, "failed to allocate vm bo (%dKB)\n",
1445 @@ -635,7 +636,15 @@ int radeon_vm_init(struct radeon_device *rdev, struct radeon_vm *vm)
1446 mutex_init(&vm->mutex);
1447 INIT_LIST_HEAD(&vm->list);
1448 INIT_LIST_HEAD(&vm->va);
1449 - vm->last_pfn = 0;
1450 + /* SI requires equal sized PTs for all VMs, so always set
1451 + * last_pfn to max_pfn. cayman allows variable sized
1452 + * pts so we can grow then as needed. Once we switch
1453 + * to two level pts we can unify this again.
1454 + */
1455 + if (rdev->family >= CHIP_TAHITI)
1456 + vm->last_pfn = rdev->vm_manager.max_pfn;
1457 + else
1458 + vm->last_pfn = 0;
1459 /* map the ib pool buffer at 0 in virtual address space, set
1460 * read only
1461 */
1462 diff --git a/drivers/gpu/drm/radeon/si.c b/drivers/gpu/drm/radeon/si.c
1463 index 27bda98..2af1ce6 100644
1464 --- a/drivers/gpu/drm/radeon/si.c
1465 +++ b/drivers/gpu/drm/radeon/si.c
1466 @@ -2527,12 +2527,12 @@ int si_pcie_gart_enable(struct radeon_device *rdev)
1467 WREG32(0x15DC, 0);
1468
1469 /* empty context1-15 */
1470 - /* FIXME start with 1G, once using 2 level pt switch to full
1471 + /* FIXME start with 4G, once using 2 level pt switch to full
1472 * vm size space
1473 */
1474 /* set vm size, must be a multiple of 4 */
1475 WREG32(VM_CONTEXT1_PAGE_TABLE_START_ADDR, 0);
1476 - WREG32(VM_CONTEXT1_PAGE_TABLE_END_ADDR, (1 << 30) / RADEON_GPU_PAGE_SIZE);
1477 + WREG32(VM_CONTEXT1_PAGE_TABLE_END_ADDR, rdev->vm_manager.max_pfn);
1478 for (i = 1; i < 16; i++) {
1479 if (i < 8)
1480 WREG32(VM_CONTEXT0_PAGE_TABLE_BASE_ADDR + (i << 2),
1481 diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
1482 index 1d5b941..543896d 100644
1483 --- a/drivers/hid/hid-multitouch.c
1484 +++ b/drivers/hid/hid-multitouch.c
1485 @@ -70,9 +70,16 @@ struct mt_class {
1486 bool is_indirect; /* true for touchpads */
1487 };
1488
1489 +struct mt_fields {
1490 + unsigned usages[HID_MAX_FIELDS];
1491 + unsigned int length;
1492 +};
1493 +
1494 struct mt_device {
1495 struct mt_slot curdata; /* placeholder of incoming data */
1496 struct mt_class mtclass; /* our mt device class */
1497 + struct mt_fields *fields; /* temporary placeholder for storing the
1498 + multitouch fields */
1499 unsigned last_field_index; /* last field index of the report */
1500 unsigned last_slot_field; /* the last field of a slot */
1501 __s8 inputmode; /* InputMode HID feature, -1 if non-existent */
1502 @@ -275,11 +282,15 @@ static void set_abs(struct input_dev *input, unsigned int code,
1503 input_set_abs_params(input, code, fmin, fmax, fuzz, 0);
1504 }
1505
1506 -static void set_last_slot_field(struct hid_usage *usage, struct mt_device *td,
1507 +static void mt_store_field(struct hid_usage *usage, struct mt_device *td,
1508 struct hid_input *hi)
1509 {
1510 - if (!test_bit(usage->hid, hi->input->absbit))
1511 - td->last_slot_field = usage->hid;
1512 + struct mt_fields *f = td->fields;
1513 +
1514 + if (f->length >= HID_MAX_FIELDS)
1515 + return;
1516 +
1517 + f->usages[f->length++] = usage->hid;
1518 }
1519
1520 static int mt_input_mapping(struct hid_device *hdev, struct hid_input *hi,
1521 @@ -330,7 +341,7 @@ static int mt_input_mapping(struct hid_device *hdev, struct hid_input *hi,
1522 cls->sn_move);
1523 /* touchscreen emulation */
1524 set_abs(hi->input, ABS_X, field, cls->sn_move);
1525 - set_last_slot_field(usage, td, hi);
1526 + mt_store_field(usage, td, hi);
1527 td->last_field_index = field->index;
1528 return 1;
1529 case HID_GD_Y:
1530 @@ -340,7 +351,7 @@ static int mt_input_mapping(struct hid_device *hdev, struct hid_input *hi,
1531 cls->sn_move);
1532 /* touchscreen emulation */
1533 set_abs(hi->input, ABS_Y, field, cls->sn_move);
1534 - set_last_slot_field(usage, td, hi);
1535 + mt_store_field(usage, td, hi);
1536 td->last_field_index = field->index;
1537 return 1;
1538 }
1539 @@ -349,24 +360,24 @@ static int mt_input_mapping(struct hid_device *hdev, struct hid_input *hi,
1540 case HID_UP_DIGITIZER:
1541 switch (usage->hid) {
1542 case HID_DG_INRANGE:
1543 - set_last_slot_field(usage, td, hi);
1544 + mt_store_field(usage, td, hi);
1545 td->last_field_index = field->index;
1546 return 1;
1547 case HID_DG_CONFIDENCE:
1548 - set_last_slot_field(usage, td, hi);
1549 + mt_store_field(usage, td, hi);
1550 td->last_field_index = field->index;
1551 return 1;
1552 case HID_DG_TIPSWITCH:
1553 hid_map_usage(hi, usage, bit, max, EV_KEY, BTN_TOUCH);
1554 input_set_capability(hi->input, EV_KEY, BTN_TOUCH);
1555 - set_last_slot_field(usage, td, hi);
1556 + mt_store_field(usage, td, hi);
1557 td->last_field_index = field->index;
1558 return 1;
1559 case HID_DG_CONTACTID:
1560 if (!td->maxcontacts)
1561 td->maxcontacts = MT_DEFAULT_MAXCONTACT;
1562 input_mt_init_slots(hi->input, td->maxcontacts);
1563 - td->last_slot_field = usage->hid;
1564 + mt_store_field(usage, td, hi);
1565 td->last_field_index = field->index;
1566 td->touches_by_report++;
1567 return 1;
1568 @@ -375,7 +386,7 @@ static int mt_input_mapping(struct hid_device *hdev, struct hid_input *hi,
1569 EV_ABS, ABS_MT_TOUCH_MAJOR);
1570 set_abs(hi->input, ABS_MT_TOUCH_MAJOR, field,
1571 cls->sn_width);
1572 - set_last_slot_field(usage, td, hi);
1573 + mt_store_field(usage, td, hi);
1574 td->last_field_index = field->index;
1575 return 1;
1576 case HID_DG_HEIGHT:
1577 @@ -385,7 +396,7 @@ static int mt_input_mapping(struct hid_device *hdev, struct hid_input *hi,
1578 cls->sn_height);
1579 input_set_abs_params(hi->input,
1580 ABS_MT_ORIENTATION, 0, 1, 0, 0);
1581 - set_last_slot_field(usage, td, hi);
1582 + mt_store_field(usage, td, hi);
1583 td->last_field_index = field->index;
1584 return 1;
1585 case HID_DG_TIPPRESSURE:
1586 @@ -396,7 +407,7 @@ static int mt_input_mapping(struct hid_device *hdev, struct hid_input *hi,
1587 /* touchscreen emulation */
1588 set_abs(hi->input, ABS_PRESSURE, field,
1589 cls->sn_pressure);
1590 - set_last_slot_field(usage, td, hi);
1591 + mt_store_field(usage, td, hi);
1592 td->last_field_index = field->index;
1593 return 1;
1594 case HID_DG_CONTACTCOUNT:
1595 @@ -635,6 +646,16 @@ static void mt_set_maxcontacts(struct hid_device *hdev)
1596 }
1597 }
1598
1599 +static void mt_post_parse(struct mt_device *td)
1600 +{
1601 + struct mt_fields *f = td->fields;
1602 +
1603 + if (td->touches_by_report > 0) {
1604 + int field_count_per_touch = f->length / td->touches_by_report;
1605 + td->last_slot_field = f->usages[field_count_per_touch - 1];
1606 + }
1607 +}
1608 +
1609 static int mt_probe(struct hid_device *hdev, const struct hid_device_id *id)
1610 {
1611 int ret, i;
1612 @@ -666,6 +687,13 @@ static int mt_probe(struct hid_device *hdev, const struct hid_device_id *id)
1613 td->maxcontact_report_id = -1;
1614 hid_set_drvdata(hdev, td);
1615
1616 + td->fields = kzalloc(sizeof(struct mt_fields), GFP_KERNEL);
1617 + if (!td->fields) {
1618 + dev_err(&hdev->dev, "cannot allocate multitouch fields data\n");
1619 + ret = -ENOMEM;
1620 + goto fail;
1621 + }
1622 +
1623 ret = hid_parse(hdev);
1624 if (ret != 0)
1625 goto fail;
1626 @@ -674,6 +702,8 @@ static int mt_probe(struct hid_device *hdev, const struct hid_device_id *id)
1627 if (ret)
1628 goto fail;
1629
1630 + mt_post_parse(td);
1631 +
1632 if (!id && td->touches_by_report == 1) {
1633 /* the device has been sent by hid-generic */
1634 mtclass = &td->mtclass;
1635 @@ -697,9 +727,13 @@ static int mt_probe(struct hid_device *hdev, const struct hid_device_id *id)
1636 mt_set_maxcontacts(hdev);
1637 mt_set_input_mode(hdev);
1638
1639 + kfree(td->fields);
1640 + td->fields = NULL;
1641 +
1642 return 0;
1643
1644 fail:
1645 + kfree(td->fields);
1646 kfree(td);
1647 return ret;
1648 }
1649 diff --git a/drivers/hwmon/applesmc.c b/drivers/hwmon/applesmc.c
1650 index f082e48..70d62f5 100644
1651 --- a/drivers/hwmon/applesmc.c
1652 +++ b/drivers/hwmon/applesmc.c
1653 @@ -215,7 +215,7 @@ static int read_smc(u8 cmd, const char *key, u8 *buffer, u8 len)
1654 int i;
1655
1656 if (send_command(cmd) || send_argument(key)) {
1657 - pr_warn("%s: read arg fail\n", key);
1658 + pr_warn("%.4s: read arg fail\n", key);
1659 return -EIO;
1660 }
1661
1662 @@ -223,7 +223,7 @@ static int read_smc(u8 cmd, const char *key, u8 *buffer, u8 len)
1663
1664 for (i = 0; i < len; i++) {
1665 if (__wait_status(0x05)) {
1666 - pr_warn("%s: read data fail\n", key);
1667 + pr_warn("%.4s: read data fail\n", key);
1668 return -EIO;
1669 }
1670 buffer[i] = inb(APPLESMC_DATA_PORT);
1671 diff --git a/drivers/hwmon/coretemp.c b/drivers/hwmon/coretemp.c
1672 index b9d5123..0f52799 100644
1673 --- a/drivers/hwmon/coretemp.c
1674 +++ b/drivers/hwmon/coretemp.c
1675 @@ -664,7 +664,7 @@ static void __cpuinit get_core_online(unsigned int cpu)
1676 * sensors. We check this bit only, all the early CPUs
1677 * without thermal sensors will be filtered out.
1678 */
1679 - if (!cpu_has(c, X86_FEATURE_DTS))
1680 + if (!cpu_has(c, X86_FEATURE_DTHERM))
1681 return;
1682
1683 if (!pdev) {
1684 @@ -765,7 +765,7 @@ static struct notifier_block coretemp_cpu_notifier __refdata = {
1685 };
1686
1687 static const struct x86_cpu_id coretemp_ids[] = {
1688 - { X86_VENDOR_INTEL, X86_FAMILY_ANY, X86_MODEL_ANY, X86_FEATURE_DTS },
1689 + { X86_VENDOR_INTEL, X86_FAMILY_ANY, X86_MODEL_ANY, X86_FEATURE_DTHERM },
1690 {}
1691 };
1692 MODULE_DEVICE_TABLE(x86cpu, coretemp_ids);
1693 diff --git a/drivers/hwspinlock/hwspinlock_core.c b/drivers/hwspinlock/hwspinlock_core.c
1694 index 61c9cf1..1201a15 100644
1695 --- a/drivers/hwspinlock/hwspinlock_core.c
1696 +++ b/drivers/hwspinlock/hwspinlock_core.c
1697 @@ -345,7 +345,7 @@ int hwspin_lock_register(struct hwspinlock_device *bank, struct device *dev,
1698 spin_lock_init(&hwlock->lock);
1699 hwlock->bank = bank;
1700
1701 - ret = hwspin_lock_register_single(hwlock, i);
1702 + ret = hwspin_lock_register_single(hwlock, base_id + i);
1703 if (ret)
1704 goto reg_failed;
1705 }
1706 @@ -354,7 +354,7 @@ int hwspin_lock_register(struct hwspinlock_device *bank, struct device *dev,
1707
1708 reg_failed:
1709 while (--i >= 0)
1710 - hwspin_lock_unregister_single(i);
1711 + hwspin_lock_unregister_single(base_id + i);
1712 return ret;
1713 }
1714 EXPORT_SYMBOL_GPL(hwspin_lock_register);
1715 diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
1716 index a2e418c..dfe7d37 100644
1717 --- a/drivers/iommu/amd_iommu.c
1718 +++ b/drivers/iommu/amd_iommu.c
1719 @@ -83,6 +83,8 @@ static struct iommu_ops amd_iommu_ops;
1720 static ATOMIC_NOTIFIER_HEAD(ppr_notifier);
1721 int amd_iommu_max_glx_val = -1;
1722
1723 +static struct dma_map_ops amd_iommu_dma_ops;
1724 +
1725 /*
1726 * general struct to manage commands send to an IOMMU
1727 */
1728 @@ -2267,6 +2269,13 @@ static int device_change_notifier(struct notifier_block *nb,
1729 list_add_tail(&dma_domain->list, &iommu_pd_list);
1730 spin_unlock_irqrestore(&iommu_pd_list_lock, flags);
1731
1732 + dev_data = get_dev_data(dev);
1733 +
1734 + if (!dev_data->passthrough)
1735 + dev->archdata.dma_ops = &amd_iommu_dma_ops;
1736 + else
1737 + dev->archdata.dma_ops = &nommu_dma_ops;
1738 +
1739 break;
1740 case BUS_NOTIFY_DEL_DEVICE:
1741
1742 diff --git a/drivers/iommu/amd_iommu_init.c b/drivers/iommu/amd_iommu_init.c
1743 index 542024b..c04ddca 100644
1744 --- a/drivers/iommu/amd_iommu_init.c
1745 +++ b/drivers/iommu/amd_iommu_init.c
1746 @@ -1641,6 +1641,8 @@ static int __init amd_iommu_init(void)
1747
1748 amd_iommu_init_api();
1749
1750 + x86_platform.iommu_shutdown = disable_iommus;
1751 +
1752 if (iommu_pass_through)
1753 goto out;
1754
1755 @@ -1649,8 +1651,6 @@ static int __init amd_iommu_init(void)
1756 else
1757 printk(KERN_INFO "AMD-Vi: Lazy IO/TLB flushing enabled\n");
1758
1759 - x86_platform.iommu_shutdown = disable_iommus;
1760 -
1761 out:
1762 return ret;
1763
1764 diff --git a/drivers/iommu/tegra-smmu.c b/drivers/iommu/tegra-smmu.c
1765 index eb93c82..17ef6c4 100644
1766 --- a/drivers/iommu/tegra-smmu.c
1767 +++ b/drivers/iommu/tegra-smmu.c
1768 @@ -550,13 +550,13 @@ static int alloc_pdir(struct smmu_as *as)
1769 return 0;
1770
1771 as->pte_count = devm_kzalloc(smmu->dev,
1772 - sizeof(as->pte_count[0]) * SMMU_PDIR_COUNT, GFP_KERNEL);
1773 + sizeof(as->pte_count[0]) * SMMU_PDIR_COUNT, GFP_ATOMIC);
1774 if (!as->pte_count) {
1775 dev_err(smmu->dev,
1776 "failed to allocate smmu_device PTE cunters\n");
1777 return -ENOMEM;
1778 }
1779 - as->pdir_page = alloc_page(GFP_KERNEL | __GFP_DMA);
1780 + as->pdir_page = alloc_page(GFP_ATOMIC | __GFP_DMA);
1781 if (!as->pdir_page) {
1782 dev_err(smmu->dev,
1783 "failed to allocate smmu_device page directory\n");
1784 diff --git a/drivers/md/persistent-data/dm-space-map-checker.c b/drivers/md/persistent-data/dm-space-map-checker.c
1785 index 50ed53b..fc90c11 100644
1786 --- a/drivers/md/persistent-data/dm-space-map-checker.c
1787 +++ b/drivers/md/persistent-data/dm-space-map-checker.c
1788 @@ -8,6 +8,7 @@
1789
1790 #include <linux/device-mapper.h>
1791 #include <linux/export.h>
1792 +#include <linux/vmalloc.h>
1793
1794 #ifdef CONFIG_DM_DEBUG_SPACE_MAPS
1795
1796 @@ -89,13 +90,23 @@ static int ca_create(struct count_array *ca, struct dm_space_map *sm)
1797
1798 ca->nr = nr_blocks;
1799 ca->nr_free = nr_blocks;
1800 - ca->counts = kzalloc(sizeof(*ca->counts) * nr_blocks, GFP_KERNEL);
1801 - if (!ca->counts)
1802 - return -ENOMEM;
1803 +
1804 + if (!nr_blocks)
1805 + ca->counts = NULL;
1806 + else {
1807 + ca->counts = vzalloc(sizeof(*ca->counts) * nr_blocks);
1808 + if (!ca->counts)
1809 + return -ENOMEM;
1810 + }
1811
1812 return 0;
1813 }
1814
1815 +static void ca_destroy(struct count_array *ca)
1816 +{
1817 + vfree(ca->counts);
1818 +}
1819 +
1820 static int ca_load(struct count_array *ca, struct dm_space_map *sm)
1821 {
1822 int r;
1823 @@ -126,12 +137,14 @@ static int ca_load(struct count_array *ca, struct dm_space_map *sm)
1824 static int ca_extend(struct count_array *ca, dm_block_t extra_blocks)
1825 {
1826 dm_block_t nr_blocks = ca->nr + extra_blocks;
1827 - uint32_t *counts = kzalloc(sizeof(*counts) * nr_blocks, GFP_KERNEL);
1828 + uint32_t *counts = vzalloc(sizeof(*counts) * nr_blocks);
1829 if (!counts)
1830 return -ENOMEM;
1831
1832 - memcpy(counts, ca->counts, sizeof(*counts) * ca->nr);
1833 - kfree(ca->counts);
1834 + if (ca->counts) {
1835 + memcpy(counts, ca->counts, sizeof(*counts) * ca->nr);
1836 + ca_destroy(ca);
1837 + }
1838 ca->nr = nr_blocks;
1839 ca->nr_free += extra_blocks;
1840 ca->counts = counts;
1841 @@ -151,11 +164,6 @@ static int ca_commit(struct count_array *old, struct count_array *new)
1842 return 0;
1843 }
1844
1845 -static void ca_destroy(struct count_array *ca)
1846 -{
1847 - kfree(ca->counts);
1848 -}
1849 -
1850 /*----------------------------------------------------------------*/
1851
1852 struct sm_checker {
1853 @@ -343,25 +351,25 @@ struct dm_space_map *dm_sm_checker_create(struct dm_space_map *sm)
1854 int r;
1855 struct sm_checker *smc;
1856
1857 - if (!sm)
1858 - return NULL;
1859 + if (IS_ERR_OR_NULL(sm))
1860 + return ERR_PTR(-EINVAL);
1861
1862 smc = kmalloc(sizeof(*smc), GFP_KERNEL);
1863 if (!smc)
1864 - return NULL;
1865 + return ERR_PTR(-ENOMEM);
1866
1867 memcpy(&smc->sm, &ops_, sizeof(smc->sm));
1868 r = ca_create(&smc->old_counts, sm);
1869 if (r) {
1870 kfree(smc);
1871 - return NULL;
1872 + return ERR_PTR(r);
1873 }
1874
1875 r = ca_create(&smc->counts, sm);
1876 if (r) {
1877 ca_destroy(&smc->old_counts);
1878 kfree(smc);
1879 - return NULL;
1880 + return ERR_PTR(r);
1881 }
1882
1883 smc->real_sm = sm;
1884 @@ -371,7 +379,7 @@ struct dm_space_map *dm_sm_checker_create(struct dm_space_map *sm)
1885 ca_destroy(&smc->counts);
1886 ca_destroy(&smc->old_counts);
1887 kfree(smc);
1888 - return NULL;
1889 + return ERR_PTR(r);
1890 }
1891
1892 r = ca_commit(&smc->old_counts, &smc->counts);
1893 @@ -379,7 +387,7 @@ struct dm_space_map *dm_sm_checker_create(struct dm_space_map *sm)
1894 ca_destroy(&smc->counts);
1895 ca_destroy(&smc->old_counts);
1896 kfree(smc);
1897 - return NULL;
1898 + return ERR_PTR(r);
1899 }
1900
1901 return &smc->sm;
1902 @@ -391,25 +399,25 @@ struct dm_space_map *dm_sm_checker_create_fresh(struct dm_space_map *sm)
1903 int r;
1904 struct sm_checker *smc;
1905
1906 - if (!sm)
1907 - return NULL;
1908 + if (IS_ERR_OR_NULL(sm))
1909 + return ERR_PTR(-EINVAL);
1910
1911 smc = kmalloc(sizeof(*smc), GFP_KERNEL);
1912 if (!smc)
1913 - return NULL;
1914 + return ERR_PTR(-ENOMEM);
1915
1916 memcpy(&smc->sm, &ops_, sizeof(smc->sm));
1917 r = ca_create(&smc->old_counts, sm);
1918 if (r) {
1919 kfree(smc);
1920 - return NULL;
1921 + return ERR_PTR(r);
1922 }
1923
1924 r = ca_create(&smc->counts, sm);
1925 if (r) {
1926 ca_destroy(&smc->old_counts);
1927 kfree(smc);
1928 - return NULL;
1929 + return ERR_PTR(r);
1930 }
1931
1932 smc->real_sm = sm;
1933 diff --git a/drivers/md/persistent-data/dm-space-map-disk.c b/drivers/md/persistent-data/dm-space-map-disk.c
1934 index fc469ba..3d0ed53 100644
1935 --- a/drivers/md/persistent-data/dm-space-map-disk.c
1936 +++ b/drivers/md/persistent-data/dm-space-map-disk.c
1937 @@ -290,7 +290,16 @@ struct dm_space_map *dm_sm_disk_create(struct dm_transaction_manager *tm,
1938 dm_block_t nr_blocks)
1939 {
1940 struct dm_space_map *sm = dm_sm_disk_create_real(tm, nr_blocks);
1941 - return dm_sm_checker_create_fresh(sm);
1942 + struct dm_space_map *smc;
1943 +
1944 + if (IS_ERR_OR_NULL(sm))
1945 + return sm;
1946 +
1947 + smc = dm_sm_checker_create_fresh(sm);
1948 + if (IS_ERR(smc))
1949 + dm_sm_destroy(sm);
1950 +
1951 + return smc;
1952 }
1953 EXPORT_SYMBOL_GPL(dm_sm_disk_create);
1954
1955 diff --git a/drivers/md/persistent-data/dm-transaction-manager.c b/drivers/md/persistent-data/dm-transaction-manager.c
1956 index 6f8d387..ba54aac 100644
1957 --- a/drivers/md/persistent-data/dm-transaction-manager.c
1958 +++ b/drivers/md/persistent-data/dm-transaction-manager.c
1959 @@ -138,6 +138,9 @@ EXPORT_SYMBOL_GPL(dm_tm_create_non_blocking_clone);
1960
1961 void dm_tm_destroy(struct dm_transaction_manager *tm)
1962 {
1963 + if (!tm->is_clone)
1964 + wipe_shadow_table(tm);
1965 +
1966 kfree(tm);
1967 }
1968 EXPORT_SYMBOL_GPL(dm_tm_destroy);
1969 @@ -342,8 +345,10 @@ static int dm_tm_create_internal(struct dm_block_manager *bm,
1970 }
1971
1972 *sm = dm_sm_checker_create(inner);
1973 - if (!*sm)
1974 + if (IS_ERR(*sm)) {
1975 + r = PTR_ERR(*sm);
1976 goto bad2;
1977 + }
1978
1979 } else {
1980 r = dm_bm_write_lock(dm_tm_get_bm(*tm), sb_location,
1981 @@ -362,8 +367,10 @@ static int dm_tm_create_internal(struct dm_block_manager *bm,
1982 }
1983
1984 *sm = dm_sm_checker_create(inner);
1985 - if (!*sm)
1986 + if (IS_ERR(*sm)) {
1987 + r = PTR_ERR(*sm);
1988 goto bad2;
1989 + }
1990 }
1991
1992 return 0;
1993 diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
1994 index d037adb..a954c95 100644
1995 --- a/drivers/md/raid10.c
1996 +++ b/drivers/md/raid10.c
1997 @@ -2209,7 +2209,7 @@ static void fix_read_error(struct r10conf *conf, struct mddev *mddev, struct r10
1998 if (r10_sync_page_io(rdev,
1999 r10_bio->devs[sl].addr +
2000 sect,
2001 - s<<9, conf->tmppage, WRITE)
2002 + s, conf->tmppage, WRITE)
2003 == 0) {
2004 /* Well, this device is dead */
2005 printk(KERN_NOTICE
2006 @@ -2246,7 +2246,7 @@ static void fix_read_error(struct r10conf *conf, struct mddev *mddev, struct r10
2007 switch (r10_sync_page_io(rdev,
2008 r10_bio->devs[sl].addr +
2009 sect,
2010 - s<<9, conf->tmppage,
2011 + s, conf->tmppage,
2012 READ)) {
2013 case 0:
2014 /* Well, this device is dead */
2015 @@ -2407,7 +2407,7 @@ read_more:
2016 slot = r10_bio->read_slot;
2017 printk_ratelimited(
2018 KERN_ERR
2019 - "md/raid10:%s: %s: redirecting"
2020 + "md/raid10:%s: %s: redirecting "
2021 "sector %llu to another mirror\n",
2022 mdname(mddev),
2023 bdevname(rdev->bdev, b),
2024 @@ -2772,6 +2772,12 @@ static sector_t sync_request(struct mddev *mddev, sector_t sector_nr,
2025 /* want to reconstruct this device */
2026 rb2 = r10_bio;
2027 sect = raid10_find_virt(conf, sector_nr, i);
2028 + if (sect >= mddev->resync_max_sectors) {
2029 + /* last stripe is not complete - don't
2030 + * try to recover this sector.
2031 + */
2032 + continue;
2033 + }
2034 /* Unless we are doing a full sync, or a replacement
2035 * we only need to recover the block if it is set in
2036 * the bitmap
2037 diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
2038 index f351422..73a5800 100644
2039 --- a/drivers/md/raid5.c
2040 +++ b/drivers/md/raid5.c
2041 @@ -196,12 +196,14 @@ static void __release_stripe(struct r5conf *conf, struct stripe_head *sh)
2042 BUG_ON(!list_empty(&sh->lru));
2043 BUG_ON(atomic_read(&conf->active_stripes)==0);
2044 if (test_bit(STRIPE_HANDLE, &sh->state)) {
2045 - if (test_bit(STRIPE_DELAYED, &sh->state))
2046 + if (test_bit(STRIPE_DELAYED, &sh->state) &&
2047 + !test_bit(STRIPE_PREREAD_ACTIVE, &sh->state))
2048 list_add_tail(&sh->lru, &conf->delayed_list);
2049 else if (test_bit(STRIPE_BIT_DELAY, &sh->state) &&
2050 sh->bm_seq - conf->seq_write > 0)
2051 list_add_tail(&sh->lru, &conf->bitmap_list);
2052 else {
2053 + clear_bit(STRIPE_DELAYED, &sh->state);
2054 clear_bit(STRIPE_BIT_DELAY, &sh->state);
2055 list_add_tail(&sh->lru, &conf->handle_list);
2056 }
2057 @@ -583,6 +585,12 @@ static void ops_run_io(struct stripe_head *sh, struct stripe_head_state *s)
2058 * a chance*/
2059 md_check_recovery(conf->mddev);
2060 }
2061 + /*
2062 + * Because md_wait_for_blocked_rdev
2063 + * will dec nr_pending, we must
2064 + * increment it first.
2065 + */
2066 + atomic_inc(&rdev->nr_pending);
2067 md_wait_for_blocked_rdev(rdev, conf->mddev);
2068 } else {
2069 /* Acknowledged bad block - skip the write */
2070 @@ -3842,7 +3850,6 @@ static int chunk_aligned_read(struct mddev *mddev, struct bio * raid_bio)
2071 raid_bio->bi_next = (void*)rdev;
2072 align_bi->bi_bdev = rdev->bdev;
2073 align_bi->bi_flags &= ~(1 << BIO_SEG_VALID);
2074 - align_bi->bi_sector += rdev->data_offset;
2075
2076 if (!bio_fits_rdev(align_bi) ||
2077 is_badblock(rdev, align_bi->bi_sector, align_bi->bi_size>>9,
2078 @@ -3853,6 +3860,9 @@ static int chunk_aligned_read(struct mddev *mddev, struct bio * raid_bio)
2079 return 0;
2080 }
2081
2082 + /* No reshape active, so we can trust rdev->data_offset */
2083 + align_bi->bi_sector += rdev->data_offset;
2084 +
2085 spin_lock_irq(&conf->device_lock);
2086 wait_event_lock_irq(conf->wait_for_stripe,
2087 conf->quiesce == 0,
2088 diff --git a/drivers/media/dvb/siano/smsusb.c b/drivers/media/dvb/siano/smsusb.c
2089 index 63c004a..664e460 100644
2090 --- a/drivers/media/dvb/siano/smsusb.c
2091 +++ b/drivers/media/dvb/siano/smsusb.c
2092 @@ -544,6 +544,8 @@ static const struct usb_device_id smsusb_id_table[] __devinitconst = {
2093 .driver_info = SMS1XXX_BOARD_HAUPPAUGE_WINDHAM },
2094 { USB_DEVICE(0x2040, 0xc0a0),
2095 .driver_info = SMS1XXX_BOARD_HAUPPAUGE_WINDHAM },
2096 + { USB_DEVICE(0x2040, 0xf5a0),
2097 + .driver_info = SMS1XXX_BOARD_HAUPPAUGE_WINDHAM },
2098 { } /* Terminating entry */
2099 };
2100
2101 diff --git a/drivers/media/video/gspca/gspca.c b/drivers/media/video/gspca/gspca.c
2102 index ca5a2b1..4dc8852 100644
2103 --- a/drivers/media/video/gspca/gspca.c
2104 +++ b/drivers/media/video/gspca/gspca.c
2105 @@ -1723,7 +1723,7 @@ static int vidioc_streamoff(struct file *file, void *priv,
2106 enum v4l2_buf_type buf_type)
2107 {
2108 struct gspca_dev *gspca_dev = priv;
2109 - int ret;
2110 + int i, ret;
2111
2112 if (buf_type != V4L2_BUF_TYPE_VIDEO_CAPTURE)
2113 return -EINVAL;
2114 @@ -1754,6 +1754,8 @@ static int vidioc_streamoff(struct file *file, void *priv,
2115 wake_up_interruptible(&gspca_dev->wq);
2116
2117 /* empty the transfer queues */
2118 + for (i = 0; i < gspca_dev->nframes; i++)
2119 + gspca_dev->frame[i].v4l2_buf.flags &= ~BUF_ALL_FLAGS;
2120 atomic_set(&gspca_dev->fr_q, 0);
2121 atomic_set(&gspca_dev->fr_i, 0);
2122 gspca_dev->fr_o = 0;
2123 diff --git a/drivers/mtd/nand/cafe_nand.c b/drivers/mtd/nand/cafe_nand.c
2124 index 2a96e1a..6d22755 100644
2125 --- a/drivers/mtd/nand/cafe_nand.c
2126 +++ b/drivers/mtd/nand/cafe_nand.c
2127 @@ -102,7 +102,7 @@ static const char *part_probes[] = { "cmdlinepart", "RedBoot", NULL };
2128 static int cafe_device_ready(struct mtd_info *mtd)
2129 {
2130 struct cafe_priv *cafe = mtd->priv;
2131 - int result = !!(cafe_readl(cafe, NAND_STATUS) | 0x40000000);
2132 + int result = !!(cafe_readl(cafe, NAND_STATUS) & 0x40000000);
2133 uint32_t irqs = cafe_readl(cafe, NAND_IRQ);
2134
2135 cafe_writel(cafe, irqs, NAND_IRQ);
2136 diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
2137 index bc13b3d..a579a2f 100644
2138 --- a/drivers/net/bonding/bond_main.c
2139 +++ b/drivers/net/bonding/bond_main.c
2140 @@ -76,6 +76,7 @@
2141 #include <net/route.h>
2142 #include <net/net_namespace.h>
2143 #include <net/netns/generic.h>
2144 +#include <net/pkt_sched.h>
2145 #include "bonding.h"
2146 #include "bond_3ad.h"
2147 #include "bond_alb.h"
2148 @@ -381,8 +382,6 @@ struct vlan_entry *bond_next_vlan(struct bonding *bond, struct vlan_entry *curr)
2149 return next;
2150 }
2151
2152 -#define bond_queue_mapping(skb) (*(u16 *)((skb)->cb))
2153 -
2154 /**
2155 * bond_dev_queue_xmit - Prepare skb for xmit.
2156 *
2157 @@ -395,7 +394,9 @@ int bond_dev_queue_xmit(struct bonding *bond, struct sk_buff *skb,
2158 {
2159 skb->dev = slave_dev;
2160
2161 - skb->queue_mapping = bond_queue_mapping(skb);
2162 + BUILD_BUG_ON(sizeof(skb->queue_mapping) !=
2163 + sizeof(qdisc_skb_cb(skb)->bond_queue_mapping));
2164 + skb->queue_mapping = qdisc_skb_cb(skb)->bond_queue_mapping;
2165
2166 if (unlikely(netpoll_tx_running(slave_dev)))
2167 bond_netpoll_send_skb(bond_get_slave_by_dev(bond, slave_dev), skb);
2168 @@ -4162,7 +4163,7 @@ static u16 bond_select_queue(struct net_device *dev, struct sk_buff *skb)
2169 /*
2170 * Save the original txq to restore before passing to the driver
2171 */
2172 - bond_queue_mapping(skb) = skb->queue_mapping;
2173 + qdisc_skb_cb(skb)->bond_queue_mapping = skb->queue_mapping;
2174
2175 if (unlikely(txq >= dev->real_num_tx_queues)) {
2176 do {
2177 diff --git a/drivers/net/can/c_can/c_can.c b/drivers/net/can/c_can/c_can.c
2178 index 8dc84d6..86cd532 100644
2179 --- a/drivers/net/can/c_can/c_can.c
2180 +++ b/drivers/net/can/c_can/c_can.c
2181 @@ -590,8 +590,8 @@ static void c_can_chip_config(struct net_device *dev)
2182 priv->write_reg(priv, &priv->regs->control,
2183 CONTROL_ENABLE_AR);
2184
2185 - if (priv->can.ctrlmode & (CAN_CTRLMODE_LISTENONLY &
2186 - CAN_CTRLMODE_LOOPBACK)) {
2187 + if ((priv->can.ctrlmode & CAN_CTRLMODE_LISTENONLY) &&
2188 + (priv->can.ctrlmode & CAN_CTRLMODE_LOOPBACK)) {
2189 /* loopback + silent mode : useful for hot self-test */
2190 priv->write_reg(priv, &priv->regs->control, CONTROL_EIE |
2191 CONTROL_SIE | CONTROL_IE | CONTROL_TEST);
2192 diff --git a/drivers/net/can/flexcan.c b/drivers/net/can/flexcan.c
2193 index 1efb083..00baa7e 100644
2194 --- a/drivers/net/can/flexcan.c
2195 +++ b/drivers/net/can/flexcan.c
2196 @@ -933,12 +933,12 @@ static int __devinit flexcan_probe(struct platform_device *pdev)
2197 u32 clock_freq = 0;
2198
2199 if (pdev->dev.of_node) {
2200 - const u32 *clock_freq_p;
2201 + const __be32 *clock_freq_p;
2202
2203 clock_freq_p = of_get_property(pdev->dev.of_node,
2204 "clock-frequency", NULL);
2205 if (clock_freq_p)
2206 - clock_freq = *clock_freq_p;
2207 + clock_freq = be32_to_cpup(clock_freq_p);
2208 }
2209
2210 if (!clock_freq) {
2211 diff --git a/drivers/net/dummy.c b/drivers/net/dummy.c
2212 index 442d91a..bab0158 100644
2213 --- a/drivers/net/dummy.c
2214 +++ b/drivers/net/dummy.c
2215 @@ -187,8 +187,10 @@ static int __init dummy_init_module(void)
2216 rtnl_lock();
2217 err = __rtnl_link_register(&dummy_link_ops);
2218
2219 - for (i = 0; i < numdummies && !err; i++)
2220 + for (i = 0; i < numdummies && !err; i++) {
2221 err = dummy_init_one();
2222 + cond_resched();
2223 + }
2224 if (err < 0)
2225 __rtnl_link_unregister(&dummy_link_ops);
2226 rtnl_unlock();
2227 diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
2228 index 2c9ee55..75d35ec 100644
2229 --- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
2230 +++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
2231 @@ -744,21 +744,6 @@ struct bnx2x_fastpath {
2232
2233 #define ETH_RX_ERROR_FALGS ETH_FAST_PATH_RX_CQE_PHY_DECODE_ERR_FLG
2234
2235 -#define BNX2X_IP_CSUM_ERR(cqe) \
2236 - (!((cqe)->fast_path_cqe.status_flags & \
2237 - ETH_FAST_PATH_RX_CQE_IP_XSUM_NO_VALIDATION_FLG) && \
2238 - ((cqe)->fast_path_cqe.type_error_flags & \
2239 - ETH_FAST_PATH_RX_CQE_IP_BAD_XSUM_FLG))
2240 -
2241 -#define BNX2X_L4_CSUM_ERR(cqe) \
2242 - (!((cqe)->fast_path_cqe.status_flags & \
2243 - ETH_FAST_PATH_RX_CQE_L4_XSUM_NO_VALIDATION_FLG) && \
2244 - ((cqe)->fast_path_cqe.type_error_flags & \
2245 - ETH_FAST_PATH_RX_CQE_L4_BAD_XSUM_FLG))
2246 -
2247 -#define BNX2X_RX_CSUM_OK(cqe) \
2248 - (!(BNX2X_L4_CSUM_ERR(cqe) || BNX2X_IP_CSUM_ERR(cqe)))
2249 -
2250 #define BNX2X_PRS_FLAG_OVERETH_IPV4(flags) \
2251 (((le16_to_cpu(flags) & \
2252 PARSING_FLAGS_OVER_ETHERNET_PROTOCOL) >> \
2253 diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
2254 index 4b05481..41bb34f 100644
2255 --- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
2256 +++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
2257 @@ -191,7 +191,7 @@ int bnx2x_tx_int(struct bnx2x *bp, struct bnx2x_fp_txdata *txdata)
2258
2259 if ((netif_tx_queue_stopped(txq)) &&
2260 (bp->state == BNX2X_STATE_OPEN) &&
2261 - (bnx2x_tx_avail(bp, txdata) >= MAX_SKB_FRAGS + 3))
2262 + (bnx2x_tx_avail(bp, txdata) >= MAX_SKB_FRAGS + 4))
2263 netif_tx_wake_queue(txq);
2264
2265 __netif_tx_unlock(txq);
2266 @@ -568,6 +568,25 @@ drop:
2267 fp->eth_q_stats.rx_skb_alloc_failed++;
2268 }
2269
2270 +static void bnx2x_csum_validate(struct sk_buff *skb, union eth_rx_cqe *cqe,
2271 + struct bnx2x_fastpath *fp)
2272 +{
2273 + /* Do nothing if no IP/L4 csum validation was done */
2274 +
2275 + if (cqe->fast_path_cqe.status_flags &
2276 + (ETH_FAST_PATH_RX_CQE_IP_XSUM_NO_VALIDATION_FLG |
2277 + ETH_FAST_PATH_RX_CQE_L4_XSUM_NO_VALIDATION_FLG))
2278 + return;
2279 +
2280 + /* If both IP/L4 validation were done, check if an error was found. */
2281 +
2282 + if (cqe->fast_path_cqe.type_error_flags &
2283 + (ETH_FAST_PATH_RX_CQE_IP_BAD_XSUM_FLG |
2284 + ETH_FAST_PATH_RX_CQE_L4_BAD_XSUM_FLG))
2285 + fp->eth_q_stats.hw_csum_err++;
2286 + else
2287 + skb->ip_summed = CHECKSUM_UNNECESSARY;
2288 +}
2289
2290 int bnx2x_rx_int(struct bnx2x_fastpath *fp, int budget)
2291 {
2292 @@ -757,13 +776,9 @@ reuse_rx:
2293
2294 skb_checksum_none_assert(skb);
2295
2296 - if (bp->dev->features & NETIF_F_RXCSUM) {
2297 + if (bp->dev->features & NETIF_F_RXCSUM)
2298 + bnx2x_csum_validate(skb, cqe, fp);
2299
2300 - if (likely(BNX2X_RX_CSUM_OK(cqe)))
2301 - skb->ip_summed = CHECKSUM_UNNECESSARY;
2302 - else
2303 - fp->eth_q_stats.hw_csum_err++;
2304 - }
2305
2306 skb_record_rx_queue(skb, fp->rx_queue);
2307
2308 @@ -2334,8 +2349,6 @@ int bnx2x_poll(struct napi_struct *napi, int budget)
2309 /* we split the first BD into headers and data BDs
2310 * to ease the pain of our fellow microcode engineers
2311 * we use one mapping for both BDs
2312 - * So far this has only been observed to happen
2313 - * in Other Operating Systems(TM)
2314 */
2315 static noinline u16 bnx2x_tx_split(struct bnx2x *bp,
2316 struct bnx2x_fp_txdata *txdata,
2317 @@ -2987,7 +3000,7 @@ netdev_tx_t bnx2x_start_xmit(struct sk_buff *skb, struct net_device *dev)
2318
2319 txdata->tx_bd_prod += nbd;
2320
2321 - if (unlikely(bnx2x_tx_avail(bp, txdata) < MAX_SKB_FRAGS + 3)) {
2322 + if (unlikely(bnx2x_tx_avail(bp, txdata) < MAX_SKB_FRAGS + 4)) {
2323 netif_tx_stop_queue(txq);
2324
2325 /* paired memory barrier is in bnx2x_tx_int(), we have to keep
2326 @@ -2996,7 +3009,7 @@ netdev_tx_t bnx2x_start_xmit(struct sk_buff *skb, struct net_device *dev)
2327 smp_mb();
2328
2329 fp->eth_q_stats.driver_xoff++;
2330 - if (bnx2x_tx_avail(bp, txdata) >= MAX_SKB_FRAGS + 3)
2331 + if (bnx2x_tx_avail(bp, txdata) >= MAX_SKB_FRAGS + 4)
2332 netif_tx_wake_queue(txq);
2333 }
2334 txdata->tx_pkt++;
2335 diff --git a/drivers/net/ethernet/broadcom/tg3.c b/drivers/net/ethernet/broadcom/tg3.c
2336 index ceeab8e..1a1b29f 100644
2337 --- a/drivers/net/ethernet/broadcom/tg3.c
2338 +++ b/drivers/net/ethernet/broadcom/tg3.c
2339 @@ -14248,7 +14248,8 @@ static int __devinit tg3_get_invariants(struct tg3 *tp)
2340 }
2341 }
2342
2343 - if (tg3_flag(tp, 5755_PLUS))
2344 + if (tg3_flag(tp, 5755_PLUS) ||
2345 + GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5906)
2346 tg3_flag_set(tp, SHORT_DMA_BUG);
2347
2348 if (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5719)
2349 diff --git a/drivers/net/ethernet/emulex/benet/be_main.c b/drivers/net/ethernet/emulex/benet/be_main.c
2350 index 528a886..1bbf6b3 100644
2351 --- a/drivers/net/ethernet/emulex/benet/be_main.c
2352 +++ b/drivers/net/ethernet/emulex/benet/be_main.c
2353 @@ -731,6 +731,8 @@ static netdev_tx_t be_xmit(struct sk_buff *skb,
2354
2355 copied = make_tx_wrbs(adapter, txq, skb, wrb_cnt, dummy_wrb);
2356 if (copied) {
2357 + int gso_segs = skb_shinfo(skb)->gso_segs;
2358 +
2359 /* record the sent skb in the sent_skb table */
2360 BUG_ON(txo->sent_skb_list[start]);
2361 txo->sent_skb_list[start] = skb;
2362 @@ -748,8 +750,7 @@ static netdev_tx_t be_xmit(struct sk_buff *skb,
2363
2364 be_txq_notify(adapter, txq->id, wrb_cnt);
2365
2366 - be_tx_stats_update(txo, wrb_cnt, copied,
2367 - skb_shinfo(skb)->gso_segs, stopped);
2368 + be_tx_stats_update(txo, wrb_cnt, copied, gso_segs, stopped);
2369 } else {
2370 txq->head = start;
2371 dev_kfree_skb_any(skb);
2372 diff --git a/drivers/net/ethernet/intel/e1000e/defines.h b/drivers/net/ethernet/intel/e1000e/defines.h
2373 index 3a50259..eb84fe7 100644
2374 --- a/drivers/net/ethernet/intel/e1000e/defines.h
2375 +++ b/drivers/net/ethernet/intel/e1000e/defines.h
2376 @@ -101,6 +101,7 @@
2377 #define E1000_RXD_ERR_SEQ 0x04 /* Sequence Error */
2378 #define E1000_RXD_ERR_CXE 0x10 /* Carrier Extension Error */
2379 #define E1000_RXD_ERR_TCPE 0x20 /* TCP/UDP Checksum Error */
2380 +#define E1000_RXD_ERR_IPE 0x40 /* IP Checksum Error */
2381 #define E1000_RXD_ERR_RXE 0x80 /* Rx Data Error */
2382 #define E1000_RXD_SPC_VLAN_MASK 0x0FFF /* VLAN ID is in lower 12 bits */
2383
2384 diff --git a/drivers/net/ethernet/intel/e1000e/ethtool.c b/drivers/net/ethernet/intel/e1000e/ethtool.c
2385 index db35dd5..e48f2d2 100644
2386 --- a/drivers/net/ethernet/intel/e1000e/ethtool.c
2387 +++ b/drivers/net/ethernet/intel/e1000e/ethtool.c
2388 @@ -258,7 +258,8 @@ static int e1000_set_settings(struct net_device *netdev,
2389 * When SoL/IDER sessions are active, autoneg/speed/duplex
2390 * cannot be changed
2391 */
2392 - if (hw->phy.ops.check_reset_block(hw)) {
2393 + if (hw->phy.ops.check_reset_block &&
2394 + hw->phy.ops.check_reset_block(hw)) {
2395 e_err("Cannot change link characteristics when SoL/IDER is "
2396 "active.\n");
2397 return -EINVAL;
2398 @@ -1604,7 +1605,8 @@ static int e1000_loopback_test(struct e1000_adapter *adapter, u64 *data)
2399 * PHY loopback cannot be performed if SoL/IDER
2400 * sessions are active
2401 */
2402 - if (hw->phy.ops.check_reset_block(hw)) {
2403 + if (hw->phy.ops.check_reset_block &&
2404 + hw->phy.ops.check_reset_block(hw)) {
2405 e_err("Cannot do PHY loopback test when SoL/IDER is active.\n");
2406 *data = 0;
2407 goto out;
2408 diff --git a/drivers/net/ethernet/intel/e1000e/mac.c b/drivers/net/ethernet/intel/e1000e/mac.c
2409 index decad98..efecb50 100644
2410 --- a/drivers/net/ethernet/intel/e1000e/mac.c
2411 +++ b/drivers/net/ethernet/intel/e1000e/mac.c
2412 @@ -709,7 +709,7 @@ s32 e1000e_setup_link_generic(struct e1000_hw *hw)
2413 * In the case of the phy reset being blocked, we already have a link.
2414 * We do not need to set it up again.
2415 */
2416 - if (hw->phy.ops.check_reset_block(hw))
2417 + if (hw->phy.ops.check_reset_block && hw->phy.ops.check_reset_block(hw))
2418 return 0;
2419
2420 /*
2421 diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c
2422 index 00e961e..5621d5b 100644
2423 --- a/drivers/net/ethernet/intel/e1000e/netdev.c
2424 +++ b/drivers/net/ethernet/intel/e1000e/netdev.c
2425 @@ -495,7 +495,7 @@ static void e1000_receive_skb(struct e1000_adapter *adapter,
2426 * @sk_buff: socket buffer with received data
2427 **/
2428 static void e1000_rx_checksum(struct e1000_adapter *adapter, u32 status_err,
2429 - __le16 csum, struct sk_buff *skb)
2430 + struct sk_buff *skb)
2431 {
2432 u16 status = (u16)status_err;
2433 u8 errors = (u8)(status_err >> 24);
2434 @@ -510,8 +510,8 @@ static void e1000_rx_checksum(struct e1000_adapter *adapter, u32 status_err,
2435 if (status & E1000_RXD_STAT_IXSM)
2436 return;
2437
2438 - /* TCP/UDP checksum error bit is set */
2439 - if (errors & E1000_RXD_ERR_TCPE) {
2440 + /* TCP/UDP checksum error bit or IP checksum error bit is set */
2441 + if (errors & (E1000_RXD_ERR_TCPE | E1000_RXD_ERR_IPE)) {
2442 /* let the stack verify checksum errors */
2443 adapter->hw_csum_err++;
2444 return;
2445 @@ -522,19 +522,7 @@ static void e1000_rx_checksum(struct e1000_adapter *adapter, u32 status_err,
2446 return;
2447
2448 /* It must be a TCP or UDP packet with a valid checksum */
2449 - if (status & E1000_RXD_STAT_TCPCS) {
2450 - /* TCP checksum is good */
2451 - skb->ip_summed = CHECKSUM_UNNECESSARY;
2452 - } else {
2453 - /*
2454 - * IP fragment with UDP payload
2455 - * Hardware complements the payload checksum, so we undo it
2456 - * and then put the value in host order for further stack use.
2457 - */
2458 - __sum16 sum = (__force __sum16)swab16((__force u16)csum);
2459 - skb->csum = csum_unfold(~sum);
2460 - skb->ip_summed = CHECKSUM_COMPLETE;
2461 - }
2462 + skb->ip_summed = CHECKSUM_UNNECESSARY;
2463 adapter->hw_csum_good++;
2464 }
2465
2466 @@ -978,8 +966,7 @@ static bool e1000_clean_rx_irq(struct e1000_ring *rx_ring, int *work_done,
2467 skb_put(skb, length);
2468
2469 /* Receive Checksum Offload */
2470 - e1000_rx_checksum(adapter, staterr,
2471 - rx_desc->wb.lower.hi_dword.csum_ip.csum, skb);
2472 + e1000_rx_checksum(adapter, staterr, skb);
2473
2474 e1000_rx_hash(netdev, rx_desc->wb.lower.hi_dword.rss, skb);
2475
2476 @@ -1360,8 +1347,7 @@ copydone:
2477 total_rx_bytes += skb->len;
2478 total_rx_packets++;
2479
2480 - e1000_rx_checksum(adapter, staterr,
2481 - rx_desc->wb.lower.hi_dword.csum_ip.csum, skb);
2482 + e1000_rx_checksum(adapter, staterr, skb);
2483
2484 e1000_rx_hash(netdev, rx_desc->wb.lower.hi_dword.rss, skb);
2485
2486 @@ -1531,9 +1517,8 @@ static bool e1000_clean_jumbo_rx_irq(struct e1000_ring *rx_ring, int *work_done,
2487 }
2488 }
2489
2490 - /* Receive Checksum Offload XXX recompute due to CRC strip? */
2491 - e1000_rx_checksum(adapter, staterr,
2492 - rx_desc->wb.lower.hi_dword.csum_ip.csum, skb);
2493 + /* Receive Checksum Offload */
2494 + e1000_rx_checksum(adapter, staterr, skb);
2495
2496 e1000_rx_hash(netdev, rx_desc->wb.lower.hi_dword.rss, skb);
2497
2498 @@ -3120,19 +3105,10 @@ static void e1000_configure_rx(struct e1000_adapter *adapter)
2499
2500 /* Enable Receive Checksum Offload for TCP and UDP */
2501 rxcsum = er32(RXCSUM);
2502 - if (adapter->netdev->features & NETIF_F_RXCSUM) {
2503 + if (adapter->netdev->features & NETIF_F_RXCSUM)
2504 rxcsum |= E1000_RXCSUM_TUOFL;
2505 -
2506 - /*
2507 - * IPv4 payload checksum for UDP fragments must be
2508 - * used in conjunction with packet-split.
2509 - */
2510 - if (adapter->rx_ps_pages)
2511 - rxcsum |= E1000_RXCSUM_IPPCSE;
2512 - } else {
2513 + else
2514 rxcsum &= ~E1000_RXCSUM_TUOFL;
2515 - /* no need to clear IPPCSE as it defaults to 0 */
2516 - }
2517 ew32(RXCSUM, rxcsum);
2518
2519 if (adapter->hw.mac.type == e1000_pch2lan) {
2520 @@ -5260,22 +5236,10 @@ static int e1000_change_mtu(struct net_device *netdev, int new_mtu)
2521 int max_frame = new_mtu + ETH_HLEN + ETH_FCS_LEN;
2522
2523 /* Jumbo frame support */
2524 - if (max_frame > ETH_FRAME_LEN + ETH_FCS_LEN) {
2525 - if (!(adapter->flags & FLAG_HAS_JUMBO_FRAMES)) {
2526 - e_err("Jumbo Frames not supported.\n");
2527 - return -EINVAL;
2528 - }
2529 -
2530 - /*
2531 - * IP payload checksum (enabled with jumbos/packet-split when
2532 - * Rx checksum is enabled) and generation of RSS hash is
2533 - * mutually exclusive in the hardware.
2534 - */
2535 - if ((netdev->features & NETIF_F_RXCSUM) &&
2536 - (netdev->features & NETIF_F_RXHASH)) {
2537 - e_err("Jumbo frames cannot be enabled when both receive checksum offload and receive hashing are enabled. Disable one of the receive offload features before enabling jumbos.\n");
2538 - return -EINVAL;
2539 - }
2540 + if ((max_frame > ETH_FRAME_LEN + ETH_FCS_LEN) &&
2541 + !(adapter->flags & FLAG_HAS_JUMBO_FRAMES)) {
2542 + e_err("Jumbo Frames not supported.\n");
2543 + return -EINVAL;
2544 }
2545
2546 /* Supported frame sizes */
2547 @@ -6049,17 +6013,6 @@ static int e1000_set_features(struct net_device *netdev,
2548 NETIF_F_RXALL)))
2549 return 0;
2550
2551 - /*
2552 - * IP payload checksum (enabled with jumbos/packet-split when Rx
2553 - * checksum is enabled) and generation of RSS hash is mutually
2554 - * exclusive in the hardware.
2555 - */
2556 - if (adapter->rx_ps_pages &&
2557 - (features & NETIF_F_RXCSUM) && (features & NETIF_F_RXHASH)) {
2558 - e_err("Enabling both receive checksum offload and receive hashing is not possible with jumbo frames. Disable jumbos or enable only one of the receive offload features.\n");
2559 - return -EINVAL;
2560 - }
2561 -
2562 if (changed & NETIF_F_RXFCS) {
2563 if (features & NETIF_F_RXFCS) {
2564 adapter->flags2 &= ~FLAG2_CRC_STRIPPING;
2565 @@ -6256,7 +6209,7 @@ static int __devinit e1000_probe(struct pci_dev *pdev,
2566 adapter->hw.phy.ms_type = e1000_ms_hw_default;
2567 }
2568
2569 - if (hw->phy.ops.check_reset_block(hw))
2570 + if (hw->phy.ops.check_reset_block && hw->phy.ops.check_reset_block(hw))
2571 e_info("PHY reset is blocked due to SOL/IDER session.\n");
2572
2573 /* Set initial default active device features */
2574 @@ -6423,7 +6376,7 @@ err_register:
2575 if (!(adapter->flags & FLAG_HAS_AMT))
2576 e1000e_release_hw_control(adapter);
2577 err_eeprom:
2578 - if (!hw->phy.ops.check_reset_block(hw))
2579 + if (hw->phy.ops.check_reset_block && !hw->phy.ops.check_reset_block(hw))
2580 e1000_phy_hw_reset(&adapter->hw);
2581 err_hw_init:
2582 kfree(adapter->tx_ring);
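The e1000e hunks above collapse the old TCP-vs-IP-fragment checksum handling into a single path: any hardware-reported checksum error (TCPE, or the newly included IPE) defers to software verification, and a hardware-verified TCP/UDP packet is marked CHECKSUM_UNNECESSARY. A minimal sketch of that decision, with stand-in bit values (the real E1000_RXD_* definitions live in the driver headers):

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in bit values; the real E1000_RXD_ERR_TCPE, E1000_RXD_ERR_IPE
 * and E1000_RXD_STAT_TCPCS definitions are in the e1000e headers. */
#define RXD_ERR_TCPE   0x20
#define RXD_ERR_IPE    0x40
#define RXD_STAT_TCPCS 0x02

#define CHECKSUM_NONE        0  /* stack must verify in software */
#define CHECKSUM_UNNECESSARY 1  /* hardware already verified */

/* Mirrors the simplified e1000_rx_checksum() flow after this patch:
 * any TCP/UDP or IP checksum error falls back to software verification,
 * and only a hardware-checked TCP/UDP packet is trusted. */
static int rx_checksum_verdict(uint32_t errors, uint32_t status)
{
    if (errors & (RXD_ERR_TCPE | RXD_ERR_IPE))
        return CHECKSUM_NONE;       /* let the stack verify */
    if (!(status & RXD_STAT_TCPCS))
        return CHECKSUM_NONE;       /* hardware did not check it */
    return CHECKSUM_UNNECESSARY;    /* good TCP/UDP checksum */
}
```

The removed CHECKSUM_COMPLETE branch only mattered for the IP-payload-checksum mode (RXCSUM_IPPCSE), which the later hunks stop enabling.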
2583 diff --git a/drivers/net/ethernet/intel/e1000e/phy.c b/drivers/net/ethernet/intel/e1000e/phy.c
2584 index 35b4557..c4befb3 100644
2585 --- a/drivers/net/ethernet/intel/e1000e/phy.c
2586 +++ b/drivers/net/ethernet/intel/e1000e/phy.c
2587 @@ -2121,9 +2121,11 @@ s32 e1000e_phy_hw_reset_generic(struct e1000_hw *hw)
2588 s32 ret_val;
2589 u32 ctrl;
2590
2591 - ret_val = phy->ops.check_reset_block(hw);
2592 - if (ret_val)
2593 - return 0;
2594 + if (phy->ops.check_reset_block) {
2595 + ret_val = phy->ops.check_reset_block(hw);
2596 + if (ret_val)
2597 + return 0;
2598 + }
2599
2600 ret_val = phy->ops.acquire(hw);
2601 if (ret_val)
2602 diff --git a/drivers/net/ethernet/intel/igbvf/ethtool.c b/drivers/net/ethernet/intel/igbvf/ethtool.c
2603 index 8ce6706..90eef07 100644
2604 --- a/drivers/net/ethernet/intel/igbvf/ethtool.c
2605 +++ b/drivers/net/ethernet/intel/igbvf/ethtool.c
2606 @@ -357,21 +357,28 @@ static int igbvf_set_coalesce(struct net_device *netdev,
2607 struct igbvf_adapter *adapter = netdev_priv(netdev);
2608 struct e1000_hw *hw = &adapter->hw;
2609
2610 - if ((ec->rx_coalesce_usecs > IGBVF_MAX_ITR_USECS) ||
2611 - ((ec->rx_coalesce_usecs > 3) &&
2612 - (ec->rx_coalesce_usecs < IGBVF_MIN_ITR_USECS)) ||
2613 - (ec->rx_coalesce_usecs == 2))
2614 - return -EINVAL;
2615 -
2616 - /* convert to rate of irq's per second */
2617 - if (ec->rx_coalesce_usecs && ec->rx_coalesce_usecs <= 3) {
2618 + if ((ec->rx_coalesce_usecs >= IGBVF_MIN_ITR_USECS) &&
2619 + (ec->rx_coalesce_usecs <= IGBVF_MAX_ITR_USECS)) {
2620 + adapter->current_itr = ec->rx_coalesce_usecs << 2;
2621 + adapter->requested_itr = 1000000000 /
2622 + (adapter->current_itr * 256);
2623 + } else if ((ec->rx_coalesce_usecs == 3) ||
2624 + (ec->rx_coalesce_usecs == 2)) {
2625 adapter->current_itr = IGBVF_START_ITR;
2626 adapter->requested_itr = ec->rx_coalesce_usecs;
2627 - } else {
2628 - adapter->current_itr = ec->rx_coalesce_usecs << 2;
2629 + } else if (ec->rx_coalesce_usecs == 0) {
2630 + /*
2631 + * The user's desire is to turn off interrupt throttling
2632 + * altogether, but due to HW limitations, we can't do that.
2633 + * Instead we set a very small value in EITR, which would
2634 + * allow ~967k interrupts per second, but allow the adapter's
2635 + * internal clocking to still function properly.
2636 + */
2637 + adapter->current_itr = 4;
2638 adapter->requested_itr = 1000000000 /
2639 (adapter->current_itr * 256);
2640 - }
2641 + } else
2642 + return -EINVAL;
2643
2644 writel(adapter->current_itr,
2645 hw->hw_addr + adapter->rx_ring->itr_register);
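The igbvf_set_coalesce() rewrite above restructures the validation as a cascade of ranges. A sketch of the conversion, using assumed stand-in values for IGBVF_MIN_ITR_USECS, IGBVF_MAX_ITR_USECS and IGBVF_START_ITR (the real constants are in the driver):

```c
#include <assert.h>

#define ITR_MIN_USECS 10     /* stand-in for IGBVF_MIN_ITR_USECS */
#define ITR_MAX_USECS 10000  /* stand-in for IGBVF_MAX_ITR_USECS */
#define ITR_START     488    /* stand-in for IGBVF_START_ITR */

/* Mirrors the patched branch order: an in-range request is converted to
 * register units (~4 per usec, since EITR counts 256 ns ticks), 2 or 3
 * selects the adaptive default, 0 picks the smallest value the adapter's
 * internal clocking tolerates, and anything else is -EINVAL (-1 here). */
static int coalesce_to_itr(unsigned int usecs)
{
    if (usecs >= ITR_MIN_USECS && usecs <= ITR_MAX_USECS)
        return usecs << 2;
    if (usecs == 3 || usecs == 2)
        return ITR_START;
    if (usecs == 0)
        return 4;
    return -1;
}

/* requested_itr in the driver: interrupts per second for a given ITR. */
static long itr_to_ints_per_sec(int itr)
{
    return 1000000000L / (itr * 256L);
}
```

With current_itr = 4 this allows roughly 976k interrupts per second, matching the "~967k" estimate in the patch comment.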
2646 diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe.h b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
2647 index 81b1555..f8f85ec 100644
2648 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe.h
2649 +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
2650 @@ -189,7 +189,7 @@ enum ixgbe_ring_state_t {
2651 __IXGBE_HANG_CHECK_ARMED,
2652 __IXGBE_RX_RSC_ENABLED,
2653 __IXGBE_RX_CSUM_UDP_ZERO_ERR,
2654 - __IXGBE_RX_FCOE_BUFSZ,
2655 + __IXGBE_RX_FCOE,
2656 };
2657
2658 #define check_for_tx_hang(ring) \
2659 @@ -283,7 +283,7 @@ struct ixgbe_ring_feature {
2660 #if defined(IXGBE_FCOE) && (PAGE_SIZE < 8192)
2661 static inline unsigned int ixgbe_rx_pg_order(struct ixgbe_ring *ring)
2662 {
2663 - return test_bit(__IXGBE_RX_FCOE_BUFSZ, &ring->state) ? 1 : 0;
2664 + return test_bit(__IXGBE_RX_FCOE, &ring->state) ? 1 : 0;
2665 }
2666 #else
2667 #define ixgbe_rx_pg_order(_ring) 0
2668 diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
2669 index ed1b47d..a269d11 100644
2670 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
2671 +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
2672 @@ -628,7 +628,7 @@ static int ixgbe_alloc_q_vector(struct ixgbe_adapter *adapter, int v_idx,
2673 f = &adapter->ring_feature[RING_F_FCOE];
2674 if ((rxr_idx >= f->mask) &&
2675 (rxr_idx < f->mask + f->indices))
2676 - set_bit(__IXGBE_RX_FCOE_BUFSZ, &ring->state);
2677 + set_bit(__IXGBE_RX_FCOE, &ring->state);
2678 }
2679
2680 #endif /* IXGBE_FCOE */
2681 diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
2682 index 467948e..a66c215 100644
2683 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
2684 +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
2685 @@ -1036,17 +1036,17 @@ static inline void ixgbe_rx_hash(struct ixgbe_ring *ring,
2686 #ifdef IXGBE_FCOE
2687 /**
2688 * ixgbe_rx_is_fcoe - check the rx desc for incoming pkt type
2689 - * @adapter: address of board private structure
2690 + * @ring: structure containing ring specific data
2691 * @rx_desc: advanced rx descriptor
2692 *
2693 * Returns : true if it is FCoE pkt
2694 */
2695 -static inline bool ixgbe_rx_is_fcoe(struct ixgbe_adapter *adapter,
2696 +static inline bool ixgbe_rx_is_fcoe(struct ixgbe_ring *ring,
2697 union ixgbe_adv_rx_desc *rx_desc)
2698 {
2699 __le16 pkt_info = rx_desc->wb.lower.lo_dword.hs_rss.pkt_info;
2700
2701 - return (adapter->flags & IXGBE_FLAG_FCOE_ENABLED) &&
2702 + return test_bit(__IXGBE_RX_FCOE, &ring->state) &&
2703 ((pkt_info & cpu_to_le16(IXGBE_RXDADV_PKTTYPE_ETQF_MASK)) ==
2704 (cpu_to_le16(IXGBE_ETQF_FILTER_FCOE <<
2705 IXGBE_RXDADV_PKTTYPE_ETQF_SHIFT)));
2706 @@ -1519,6 +1519,12 @@ static bool ixgbe_cleanup_headers(struct ixgbe_ring *rx_ring,
2707 skb->truesize -= ixgbe_rx_bufsz(rx_ring);
2708 }
2709
2710 +#ifdef IXGBE_FCOE
2711 + /* do not attempt to pad FCoE Frames as this will disrupt DDP */
2712 + if (ixgbe_rx_is_fcoe(rx_ring, rx_desc))
2713 + return false;
2714 +
2715 +#endif
2716 /* if skb_pad returns an error the skb was freed */
2717 if (unlikely(skb->len < 60)) {
2718 int pad_len = 60 - skb->len;
2719 @@ -1745,7 +1751,7 @@ static bool ixgbe_clean_rx_irq(struct ixgbe_q_vector *q_vector,
2720
2721 #ifdef IXGBE_FCOE
2722 /* if ddp, not passing to ULD unless for FCP_RSP or error */
2723 - if (ixgbe_rx_is_fcoe(adapter, rx_desc)) {
2724 + if (ixgbe_rx_is_fcoe(rx_ring, rx_desc)) {
2725 ddp_bytes = ixgbe_fcoe_ddp(adapter, rx_desc, skb);
2726 if (!ddp_bytes) {
2727 dev_kfree_skb_any(skb);
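The ixgbe hunks rename __IXGBE_RX_FCOE_BUFSZ to __IXGBE_RX_FCOE so the same per-ring state bit answers "is this an FCoE ring" everywhere, letting ixgbe_rx_is_fcoe() test the ring rather than the adapter-wide flag. A tiny sketch of that bit test (bit positions are illustrative, not the driver's real enum values):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative ring-state bits; the driver's enum ixgbe_ring_state_t
 * defines the real positions. */
enum ring_state { RX_RSC_ENABLED = 0, RX_FCOE = 1 };

/* Equivalent of test_bit(__IXGBE_RX_FCOE, &ring->state). */
static bool ring_is_fcoe(unsigned long state)
{
    return (state >> RX_FCOE) & 1UL;
}
```

This is what lets ixgbe_cleanup_headers() skip padding FCoE frames (which would disrupt DDP) using only per-ring information.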
2728 diff --git a/drivers/net/ethernet/marvell/sky2.c b/drivers/net/ethernet/marvell/sky2.c
2729 index 487a6c8..589753f 100644
2730 --- a/drivers/net/ethernet/marvell/sky2.c
2731 +++ b/drivers/net/ethernet/marvell/sky2.c
2732 @@ -4381,10 +4381,12 @@ static int sky2_set_features(struct net_device *dev, netdev_features_t features)
2733 struct sky2_port *sky2 = netdev_priv(dev);
2734 netdev_features_t changed = dev->features ^ features;
2735
2736 - if (changed & NETIF_F_RXCSUM) {
2737 - bool on = features & NETIF_F_RXCSUM;
2738 - sky2_write32(sky2->hw, Q_ADDR(rxqaddr[sky2->port], Q_CSR),
2739 - on ? BMU_ENA_RX_CHKSUM : BMU_DIS_RX_CHKSUM);
2740 + if ((changed & NETIF_F_RXCSUM) &&
2741 + !(sky2->hw->flags & SKY2_HW_NEW_LE)) {
2742 + sky2_write32(sky2->hw,
2743 + Q_ADDR(rxqaddr[sky2->port], Q_CSR),
2744 + (features & NETIF_F_RXCSUM)
2745 + ? BMU_ENA_RX_CHKSUM : BMU_DIS_RX_CHKSUM);
2746 }
2747
2748 if (changed & NETIF_F_RXHASH)
2749 diff --git a/drivers/net/ethernet/nxp/lpc_eth.c b/drivers/net/ethernet/nxp/lpc_eth.c
2750 index 6dfc26d..0c5edc1 100644
2751 --- a/drivers/net/ethernet/nxp/lpc_eth.c
2752 +++ b/drivers/net/ethernet/nxp/lpc_eth.c
2753 @@ -936,16 +936,16 @@ static void __lpc_handle_xmit(struct net_device *ndev)
2754 /* Update stats */
2755 ndev->stats.tx_packets++;
2756 ndev->stats.tx_bytes += skb->len;
2757 -
2758 - /* Free buffer */
2759 - dev_kfree_skb_irq(skb);
2760 }
2761 + dev_kfree_skb_irq(skb);
2762
2763 txcidx = readl(LPC_ENET_TXCONSUMEINDEX(pldat->net_base));
2764 }
2765
2766 - if (netif_queue_stopped(ndev))
2767 - netif_wake_queue(ndev);
2768 + if (pldat->num_used_tx_buffs <= ENET_TX_DESC/2) {
2769 + if (netif_queue_stopped(ndev))
2770 + netif_wake_queue(ndev);
2771 + }
2772 }
2773
2774 static int __lpc_handle_recv(struct net_device *ndev, int budget)
2775 @@ -1310,6 +1310,7 @@ static const struct net_device_ops lpc_netdev_ops = {
2776 .ndo_set_rx_mode = lpc_eth_set_multicast_list,
2777 .ndo_do_ioctl = lpc_eth_ioctl,
2778 .ndo_set_mac_address = lpc_set_mac_address,
2779 + .ndo_change_mtu = eth_change_mtu,
2780 };
2781
2782 static int lpc_eth_drv_probe(struct platform_device *pdev)
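The lpc_eth change above adds hysteresis to the TX wake path: instead of waking the queue on every completion, __lpc_handle_xmit() now waits until at least half the ring has drained. A sketch of that condition, with an assumed ring size standing in for ENET_TX_DESC:

```c
#include <assert.h>
#include <stdbool.h>

#define TX_RING_SIZE 64  /* stand-in for ENET_TX_DESC */

/* Mirrors the patched wake condition: only restart the queue once half
 * the descriptors are free, avoiding a stop/wake ping-pong when
 * completions trickle in one at a time. */
static bool should_wake_queue(int num_used_tx_buffs, bool queue_stopped)
{
    return queue_stopped && num_used_tx_buffs <= TX_RING_SIZE / 2;
}
```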
2783 diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169.c
2784 index ce6b44d..161e045 100644
2785 --- a/drivers/net/ethernet/realtek/r8169.c
2786 +++ b/drivers/net/ethernet/realtek/r8169.c
2787 @@ -5966,6 +5966,8 @@ static void __devexit rtl_remove_one(struct pci_dev *pdev)
2788
2789 cancel_work_sync(&tp->wk.work);
2790
2791 + netif_napi_del(&tp->napi);
2792 +
2793 unregister_netdev(dev);
2794
2795 rtl_release_firmware(tp);
2796 @@ -6288,6 +6290,7 @@ out:
2797 return rc;
2798
2799 err_out_msi_4:
2800 + netif_napi_del(&tp->napi);
2801 rtl_disable_msi(pdev, tp);
2802 iounmap(ioaddr);
2803 err_out_free_res_3:
2804 diff --git a/drivers/net/ethernet/sun/niu.c b/drivers/net/ethernet/sun/niu.c
2805 index c99b3b0..8489d09 100644
2806 --- a/drivers/net/ethernet/sun/niu.c
2807 +++ b/drivers/net/ethernet/sun/niu.c
2808 @@ -3598,7 +3598,6 @@ static int release_tx_packet(struct niu *np, struct tx_ring_info *rp, int idx)
2809 static void niu_tx_work(struct niu *np, struct tx_ring_info *rp)
2810 {
2811 struct netdev_queue *txq;
2812 - unsigned int tx_bytes;
2813 u16 pkt_cnt, tmp;
2814 int cons, index;
2815 u64 cs;
2816 @@ -3621,18 +3620,12 @@ static void niu_tx_work(struct niu *np, struct tx_ring_info *rp)
2817 netif_printk(np, tx_done, KERN_DEBUG, np->dev,
2818 "%s() pkt_cnt[%u] cons[%d]\n", __func__, pkt_cnt, cons);
2819
2820 - tx_bytes = 0;
2821 - tmp = pkt_cnt;
2822 - while (tmp--) {
2823 - tx_bytes += rp->tx_buffs[cons].skb->len;
2824 + while (pkt_cnt--)
2825 cons = release_tx_packet(np, rp, cons);
2826 - }
2827
2828 rp->cons = cons;
2829 smp_mb();
2830
2831 - netdev_tx_completed_queue(txq, pkt_cnt, tx_bytes);
2832 -
2833 out:
2834 if (unlikely(netif_tx_queue_stopped(txq) &&
2835 (niu_tx_avail(rp) > NIU_TX_WAKEUP_THRESH(rp)))) {
2836 @@ -4333,7 +4326,6 @@ static void niu_free_channels(struct niu *np)
2837 struct tx_ring_info *rp = &np->tx_rings[i];
2838
2839 niu_free_tx_ring_info(np, rp);
2840 - netdev_tx_reset_queue(netdev_get_tx_queue(np->dev, i));
2841 }
2842 kfree(np->tx_rings);
2843 np->tx_rings = NULL;
2844 @@ -6739,8 +6731,6 @@ static netdev_tx_t niu_start_xmit(struct sk_buff *skb,
2845 prod = NEXT_TX(rp, prod);
2846 }
2847
2848 - netdev_tx_sent_queue(txq, skb->len);
2849 -
2850 if (prod < rp->prod)
2851 rp->wrap_bit ^= TX_RING_KICK_WRAP;
2852 rp->prod = prod;
2853 diff --git a/drivers/net/macvtap.c b/drivers/net/macvtap.c
2854 index cb8fd50..c1d602d 100644
2855 --- a/drivers/net/macvtap.c
2856 +++ b/drivers/net/macvtap.c
2857 @@ -528,9 +528,10 @@ static int zerocopy_sg_from_iovec(struct sk_buff *skb, const struct iovec *from,
2858 }
2859 base = (unsigned long)from->iov_base + offset1;
2860 size = ((base & ~PAGE_MASK) + len + ~PAGE_MASK) >> PAGE_SHIFT;
2861 + if (i + size > MAX_SKB_FRAGS)
2862 + return -EMSGSIZE;
2863 num_pages = get_user_pages_fast(base, size, 0, &page[i]);
2864 - if ((num_pages != size) ||
2865 - (num_pages > MAX_SKB_FRAGS - skb_shinfo(skb)->nr_frags))
2866 + if (num_pages != size)
2867 /* put_page is in skb free */
2868 return -EFAULT;
2869 skb->data_len += len;
2870 @@ -647,7 +648,7 @@ static ssize_t macvtap_get_user(struct macvtap_queue *q, struct msghdr *m,
2871 int err;
2872 struct virtio_net_hdr vnet_hdr = { 0 };
2873 int vnet_hdr_len = 0;
2874 - int copylen;
2875 + int copylen = 0;
2876 bool zerocopy = false;
2877
2878 if (q->flags & IFF_VNET_HDR) {
2879 @@ -676,15 +677,31 @@ static ssize_t macvtap_get_user(struct macvtap_queue *q, struct msghdr *m,
2880 if (unlikely(len < ETH_HLEN))
2881 goto err;
2882
2883 + err = -EMSGSIZE;
2884 + if (unlikely(count > UIO_MAXIOV))
2885 + goto err;
2886 +
2887 if (m && m->msg_control && sock_flag(&q->sk, SOCK_ZEROCOPY))
2888 zerocopy = true;
2889
2890 if (zerocopy) {
2891 + /* Userspace may produce vectors with count greater than
2892 + * MAX_SKB_FRAGS, so we need to linearize parts of the skb
2893	+	 * to let the rest of the data fit in the frags.
2894 + */
2895 + if (count > MAX_SKB_FRAGS) {
2896 + copylen = iov_length(iv, count - MAX_SKB_FRAGS);
2897 + if (copylen < vnet_hdr_len)
2898 + copylen = 0;
2899 + else
2900 + copylen -= vnet_hdr_len;
2901 + }
2902 /* There are 256 bytes to be copied in skb, so there is enough
2903 * room for skb expand head in case it is used.
2904 * The rest buffer is mapped from userspace.
2905 */
2906 - copylen = vnet_hdr.hdr_len;
2907 + if (copylen < vnet_hdr.hdr_len)
2908 + copylen = vnet_hdr.hdr_len;
2909 if (!copylen)
2910 copylen = GOODCOPY_LEN;
2911 } else
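The macvtap hunks cap zerocopy at MAX_SKB_FRAGS page slots: when the user's iovec has more segments than that, the leading excess segments are linearized (copied) into the skb head so only MAX_SKB_FRAGS segments need frag slots. A sketch of the copylen computation, with excess_bytes standing in for iov_length(iv, count - MAX_SKB_FRAGS) and an assumed GOODCOPY_LEN:

```c
#include <assert.h>

#define MAX_SKB_FRAGS 17  /* typical value on 4K-page kernels */
#define GOODCOPY_LEN  128 /* assumed stand-in for the driver's constant */

/* Mirrors the patched copylen logic in macvtap_get_user():
 *  - linearize the excess segments (less the virtio-net header, which
 *    is consumed separately),
 *  - never copy less than the header length the guest declared,
 *  - fall back to GOODCOPY_LEN when nothing else applies. */
static int zerocopy_copylen(int count, int excess_bytes,
                            int vnet_hdr_len, int vnet_hdr_hdr_len)
{
    int copylen = 0;

    if (count > MAX_SKB_FRAGS) {
        copylen = excess_bytes;
        if (copylen < vnet_hdr_len)
            copylen = 0;
        else
            copylen -= vnet_hdr_len;
    }
    if (copylen < vnet_hdr_hdr_len)
        copylen = vnet_hdr_hdr_len;
    if (!copylen)
        copylen = GOODCOPY_LEN;
    return copylen;
}
```

Together with the new `i + size > MAX_SKB_FRAGS` check in zerocopy_sg_from_iovec(), an oversized vector now returns -EMSGSIZE instead of overrunning the frag array.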
2912 diff --git a/drivers/net/usb/ipheth.c b/drivers/net/usb/ipheth.c
2913 index dd78c4c..5cba415 100644
2914 --- a/drivers/net/usb/ipheth.c
2915 +++ b/drivers/net/usb/ipheth.c
2916 @@ -59,6 +59,7 @@
2917 #define USB_PRODUCT_IPHONE_3G 0x1292
2918 #define USB_PRODUCT_IPHONE_3GS 0x1294
2919 #define USB_PRODUCT_IPHONE_4 0x1297
2920 +#define USB_PRODUCT_IPAD 0x129a
2921 #define USB_PRODUCT_IPHONE_4_VZW 0x129c
2922 #define USB_PRODUCT_IPHONE_4S 0x12a0
2923
2924 @@ -101,6 +102,10 @@ static struct usb_device_id ipheth_table[] = {
2925 IPHETH_USBINTF_CLASS, IPHETH_USBINTF_SUBCLASS,
2926 IPHETH_USBINTF_PROTO) },
2927 { USB_DEVICE_AND_INTERFACE_INFO(
2928 + USB_VENDOR_APPLE, USB_PRODUCT_IPAD,
2929 + IPHETH_USBINTF_CLASS, IPHETH_USBINTF_SUBCLASS,
2930 + IPHETH_USBINTF_PROTO) },
2931 + { USB_DEVICE_AND_INTERFACE_INFO(
2932 USB_VENDOR_APPLE, USB_PRODUCT_IPHONE_4_VZW,
2933 IPHETH_USBINTF_CLASS, IPHETH_USBINTF_SUBCLASS,
2934 IPHETH_USBINTF_PROTO) },
2935 diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
2936 index d316503b..c2ae426 100644
2937 --- a/drivers/net/usb/qmi_wwan.c
2938 +++ b/drivers/net/usb/qmi_wwan.c
2939 @@ -197,6 +197,10 @@ err:
2940 static int qmi_wwan_cdc_wdm_manage_power(struct usb_interface *intf, int on)
2941 {
2942 struct usbnet *dev = usb_get_intfdata(intf);
2943 +
2944 + /* can be called while disconnecting */
2945 + if (!dev)
2946 + return 0;
2947 return qmi_wwan_manage_power(dev, on);
2948 }
2949
2950 @@ -257,29 +261,6 @@ err:
2951 return rv;
2952 }
2953
2954 -/* Gobi devices uses identical class/protocol codes for all interfaces regardless
2955 - * of function. Some of these are CDC ACM like and have the exact same endpoints
2956 - * we are looking for. This leaves two possible strategies for identifying the
2957 - * correct interface:
2958 - * a) hardcoding interface number, or
2959 - * b) use the fact that the wwan interface is the only one lacking additional
2960 - * (CDC functional) descriptors
2961 - *
2962 - * Let's see if we can get away with the generic b) solution.
2963 - */
2964 -static int qmi_wwan_bind_gobi(struct usbnet *dev, struct usb_interface *intf)
2965 -{
2966 - int rv = -EINVAL;
2967 -
2968 - /* ignore any interface with additional descriptors */
2969 - if (intf->cur_altsetting->extralen)
2970 - goto err;
2971 -
2972 - rv = qmi_wwan_bind_shared(dev, intf);
2973 -err:
2974 - return rv;
2975 -}
2976 -
2977 static void qmi_wwan_unbind_shared(struct usbnet *dev, struct usb_interface *intf)
2978 {
2979 struct usb_driver *subdriver = (void *)dev->data[0];
2980 @@ -347,19 +328,37 @@ static const struct driver_info qmi_wwan_shared = {
2981 .manage_power = qmi_wwan_manage_power,
2982 };
2983
2984 -static const struct driver_info qmi_wwan_gobi = {
2985 - .description = "Qualcomm Gobi wwan/QMI device",
2986 +static const struct driver_info qmi_wwan_force_int0 = {
2987 + .description = "Qualcomm WWAN/QMI device",
2988 + .flags = FLAG_WWAN,
2989 + .bind = qmi_wwan_bind_shared,
2990 + .unbind = qmi_wwan_unbind_shared,
2991 + .manage_power = qmi_wwan_manage_power,
2992 + .data = BIT(0), /* interface whitelist bitmap */
2993 +};
2994 +
2995 +static const struct driver_info qmi_wwan_force_int1 = {
2996 + .description = "Qualcomm WWAN/QMI device",
2997 + .flags = FLAG_WWAN,
2998 + .bind = qmi_wwan_bind_shared,
2999 + .unbind = qmi_wwan_unbind_shared,
3000 + .manage_power = qmi_wwan_manage_power,
3001 + .data = BIT(1), /* interface whitelist bitmap */
3002 +};
3003 +
3004 +static const struct driver_info qmi_wwan_force_int3 = {
3005 + .description = "Qualcomm WWAN/QMI device",
3006 .flags = FLAG_WWAN,
3007 - .bind = qmi_wwan_bind_gobi,
3008 + .bind = qmi_wwan_bind_shared,
3009 .unbind = qmi_wwan_unbind_shared,
3010 .manage_power = qmi_wwan_manage_power,
3011 + .data = BIT(3), /* interface whitelist bitmap */
3012 };
3013
3014 -/* ZTE suck at making USB descriptors */
3015 static const struct driver_info qmi_wwan_force_int4 = {
3016 - .description = "Qualcomm Gobi wwan/QMI device",
3017 + .description = "Qualcomm WWAN/QMI device",
3018 .flags = FLAG_WWAN,
3019 - .bind = qmi_wwan_bind_gobi,
3020 + .bind = qmi_wwan_bind_shared,
3021 .unbind = qmi_wwan_unbind_shared,
3022 .manage_power = qmi_wwan_manage_power,
3023 .data = BIT(4), /* interface whitelist bitmap */
3024 @@ -381,16 +380,23 @@ static const struct driver_info qmi_wwan_force_int4 = {
3025 static const struct driver_info qmi_wwan_sierra = {
3026 .description = "Sierra Wireless wwan/QMI device",
3027 .flags = FLAG_WWAN,
3028 - .bind = qmi_wwan_bind_gobi,
3029 + .bind = qmi_wwan_bind_shared,
3030 .unbind = qmi_wwan_unbind_shared,
3031 .manage_power = qmi_wwan_manage_power,
3032 .data = BIT(8) | BIT(19), /* interface whitelist bitmap */
3033 };
3034
3035 #define HUAWEI_VENDOR_ID 0x12D1
3036 +
3037 +/* Gobi 1000 QMI/wwan interface number is 3 according to qcserial */
3038 +#define QMI_GOBI1K_DEVICE(vend, prod) \
3039 + USB_DEVICE(vend, prod), \
3040 + .driver_info = (unsigned long)&qmi_wwan_force_int3
3041 +
3042 +/* Gobi 2000 and Gobi 3000 QMI/wwan interface number is 0 according to qcserial */
3043 #define QMI_GOBI_DEVICE(vend, prod) \
3044 USB_DEVICE(vend, prod), \
3045 - .driver_info = (unsigned long)&qmi_wwan_gobi
3046 + .driver_info = (unsigned long)&qmi_wwan_force_int0
3047
3048 static const struct usb_device_id products[] = {
3049 { /* Huawei E392, E398 and possibly others sharing both device id and more... */
3050 @@ -430,6 +436,15 @@ static const struct usb_device_id products[] = {
3051 .bInterfaceProtocol = 0xff,
3052 .driver_info = (unsigned long)&qmi_wwan_force_int4,
3053 },
3054 + { /* ZTE (Vodafone) K3520-Z */
3055 + .match_flags = USB_DEVICE_ID_MATCH_DEVICE | USB_DEVICE_ID_MATCH_INT_INFO,
3056 + .idVendor = 0x19d2,
3057 + .idProduct = 0x0055,
3058 + .bInterfaceClass = 0xff,
3059 + .bInterfaceSubClass = 0xff,
3060 + .bInterfaceProtocol = 0xff,
3061 + .driver_info = (unsigned long)&qmi_wwan_force_int1,
3062 + },
3063 { /* ZTE (Vodafone) K3565-Z */
3064 .match_flags = USB_DEVICE_ID_MATCH_DEVICE | USB_DEVICE_ID_MATCH_INT_INFO,
3065 .idVendor = 0x19d2,
3066 @@ -475,20 +490,24 @@ static const struct usb_device_id products[] = {
3067 .bInterfaceProtocol = 0xff,
3068 .driver_info = (unsigned long)&qmi_wwan_sierra,
3069 },
3070 - {QMI_GOBI_DEVICE(0x05c6, 0x9212)}, /* Acer Gobi Modem Device */
3071 - {QMI_GOBI_DEVICE(0x03f0, 0x1f1d)}, /* HP un2400 Gobi Modem Device */
3072 - {QMI_GOBI_DEVICE(0x03f0, 0x371d)}, /* HP un2430 Mobile Broadband Module */
3073 - {QMI_GOBI_DEVICE(0x04da, 0x250d)}, /* Panasonic Gobi Modem device */
3074 - {QMI_GOBI_DEVICE(0x413c, 0x8172)}, /* Dell Gobi Modem device */
3075 - {QMI_GOBI_DEVICE(0x1410, 0xa001)}, /* Novatel Gobi Modem device */
3076 - {QMI_GOBI_DEVICE(0x0b05, 0x1776)}, /* Asus Gobi Modem device */
3077 - {QMI_GOBI_DEVICE(0x19d2, 0xfff3)}, /* ONDA Gobi Modem device */
3078 - {QMI_GOBI_DEVICE(0x05c6, 0x9001)}, /* Generic Gobi Modem device */
3079 - {QMI_GOBI_DEVICE(0x05c6, 0x9002)}, /* Generic Gobi Modem device */
3080 - {QMI_GOBI_DEVICE(0x05c6, 0x9202)}, /* Generic Gobi Modem device */
3081 - {QMI_GOBI_DEVICE(0x05c6, 0x9203)}, /* Generic Gobi Modem device */
3082 - {QMI_GOBI_DEVICE(0x05c6, 0x9222)}, /* Generic Gobi Modem device */
3083 - {QMI_GOBI_DEVICE(0x05c6, 0x9009)}, /* Generic Gobi Modem device */
3084 +
3085 + /* Gobi 1000 devices */
3086 + {QMI_GOBI1K_DEVICE(0x05c6, 0x9212)}, /* Acer Gobi Modem Device */
3087 + {QMI_GOBI1K_DEVICE(0x03f0, 0x1f1d)}, /* HP un2400 Gobi Modem Device */
3088 + {QMI_GOBI1K_DEVICE(0x03f0, 0x371d)}, /* HP un2430 Mobile Broadband Module */
3089 + {QMI_GOBI1K_DEVICE(0x04da, 0x250d)}, /* Panasonic Gobi Modem device */
3090 + {QMI_GOBI1K_DEVICE(0x413c, 0x8172)}, /* Dell Gobi Modem device */
3091 + {QMI_GOBI1K_DEVICE(0x1410, 0xa001)}, /* Novatel Gobi Modem device */
3092 + {QMI_GOBI1K_DEVICE(0x0b05, 0x1776)}, /* Asus Gobi Modem device */
3093 + {QMI_GOBI1K_DEVICE(0x19d2, 0xfff3)}, /* ONDA Gobi Modem device */
3094 + {QMI_GOBI1K_DEVICE(0x05c6, 0x9001)}, /* Generic Gobi Modem device */
3095 + {QMI_GOBI1K_DEVICE(0x05c6, 0x9002)}, /* Generic Gobi Modem device */
3096 + {QMI_GOBI1K_DEVICE(0x05c6, 0x9202)}, /* Generic Gobi Modem device */
3097 + {QMI_GOBI1K_DEVICE(0x05c6, 0x9203)}, /* Generic Gobi Modem device */
3098 + {QMI_GOBI1K_DEVICE(0x05c6, 0x9222)}, /* Generic Gobi Modem device */
3099 + {QMI_GOBI1K_DEVICE(0x05c6, 0x9009)}, /* Generic Gobi Modem device */
3100 +
3101 + /* Gobi 2000 and 3000 devices */
3102 {QMI_GOBI_DEVICE(0x413c, 0x8186)}, /* Dell Gobi 2000 Modem device (N0218, VU936) */
3103 {QMI_GOBI_DEVICE(0x05c6, 0x920b)}, /* Generic Gobi 2000 Modem device */
3104 {QMI_GOBI_DEVICE(0x05c6, 0x9225)}, /* Sony Gobi 2000 Modem device (N0279, VU730) */
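The qmi_wwan hunks replace the "pick the interface lacking extra descriptors" heuristic with explicit per-device whitelists: driver_info.data is a bitmap of USB interface numbers that expose QMI/wwan (interface 3 on Gobi 1000, interface 0 on Gobi 2000/3000, 8 and 19 on Sierra). A sketch of the whitelist test the shared bind routine applies:

```c
#include <assert.h>
#include <stdbool.h>

#define BIT(n) (1UL << (n))

/* Equivalent of checking dev->driver_info->data, the interface
 * whitelist bitmap, against the bound interface's bInterfaceNumber. */
static bool intf_whitelisted(unsigned long whitelist, int intf_num)
{
    return (whitelist & BIT(intf_num)) != 0;
}
```

For example, a Gobi 1000 entry carries BIT(3), so binding is refused on every interface except number 3.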
3105 diff --git a/drivers/net/wireless/ath/ath.h b/drivers/net/wireless/ath/ath.h
3106 index c54b7d37..420d69b 100644
3107 --- a/drivers/net/wireless/ath/ath.h
3108 +++ b/drivers/net/wireless/ath/ath.h
3109 @@ -143,6 +143,7 @@ struct ath_common {
3110 u32 keymax;
3111 DECLARE_BITMAP(keymap, ATH_KEYMAX);
3112 DECLARE_BITMAP(tkip_keymap, ATH_KEYMAX);
3113 + DECLARE_BITMAP(ccmp_keymap, ATH_KEYMAX);
3114 enum ath_crypt_caps crypt_caps;
3115
3116 unsigned int clockrate;
3117 diff --git a/drivers/net/wireless/ath/ath9k/ath9k.h b/drivers/net/wireless/ath/ath9k/ath9k.h
3118 index 8c84049..4bfb44a 100644
3119 --- a/drivers/net/wireless/ath/ath9k/ath9k.h
3120 +++ b/drivers/net/wireless/ath/ath9k/ath9k.h
3121 @@ -213,6 +213,7 @@ struct ath_frame_info {
3122 enum ath9k_key_type keytype;
3123 u8 keyix;
3124 u8 retries;
3125 + u8 rtscts_rate;
3126 };
3127
3128 struct ath_buf_state {
3129 diff --git a/drivers/net/wireless/ath/ath9k/htc_drv_main.c b/drivers/net/wireless/ath/ath9k/htc_drv_main.c
3130 index 2b8f61c..abbd6ef 100644
3131 --- a/drivers/net/wireless/ath/ath9k/htc_drv_main.c
3132 +++ b/drivers/net/wireless/ath/ath9k/htc_drv_main.c
3133 @@ -1496,6 +1496,7 @@ static void ath9k_htc_bss_info_changed(struct ieee80211_hw *hw,
3134 priv->num_sta_assoc_vif++ : priv->num_sta_assoc_vif--;
3135
3136 if (priv->ah->opmode == NL80211_IFTYPE_STATION) {
3137 + ath9k_htc_choose_set_bssid(priv);
3138 if (bss_conf->assoc && (priv->num_sta_assoc_vif == 1))
3139 ath9k_htc_start_ani(priv);
3140 else if (priv->num_sta_assoc_vif == 0)
3141 @@ -1503,13 +1504,11 @@ static void ath9k_htc_bss_info_changed(struct ieee80211_hw *hw,
3142 }
3143 }
3144
3145 - if (changed & BSS_CHANGED_BSSID) {
3146 + if (changed & BSS_CHANGED_IBSS) {
3147 if (priv->ah->opmode == NL80211_IFTYPE_ADHOC) {
3148 common->curaid = bss_conf->aid;
3149 memcpy(common->curbssid, bss_conf->bssid, ETH_ALEN);
3150 ath9k_htc_set_bssid(priv);
3151 - } else if (priv->ah->opmode == NL80211_IFTYPE_STATION) {
3152 - ath9k_htc_choose_set_bssid(priv);
3153 }
3154 }
3155
3156 diff --git a/drivers/net/wireless/ath/ath9k/hw.c b/drivers/net/wireless/ath/ath9k/hw.c
3157 index fa84e37..6dfd964 100644
3158 --- a/drivers/net/wireless/ath/ath9k/hw.c
3159 +++ b/drivers/net/wireless/ath/ath9k/hw.c
3160 @@ -558,7 +558,7 @@ static int __ath9k_hw_init(struct ath_hw *ah)
3161
3162 if (NR_CPUS > 1 && ah->config.serialize_regmode == SER_REG_MODE_AUTO) {
3163 if (ah->hw_version.macVersion == AR_SREV_VERSION_5416_PCI ||
3164 - ((AR_SREV_9160(ah) || AR_SREV_9280(ah)) &&
3165 + ((AR_SREV_9160(ah) || AR_SREV_9280(ah) || AR_SREV_9287(ah)) &&
3166 !ah->is_pciexpress)) {
3167 ah->config.serialize_regmode =
3168 SER_REG_MODE_ON;
3169 @@ -720,13 +720,25 @@ static void ath9k_hw_init_qos(struct ath_hw *ah)
3170
3171 u32 ar9003_get_pll_sqsum_dvc(struct ath_hw *ah)
3172 {
3173 + struct ath_common *common = ath9k_hw_common(ah);
3174 + int i = 0;
3175 +
3176 REG_CLR_BIT(ah, PLL3, PLL3_DO_MEAS_MASK);
3177 udelay(100);
3178 REG_SET_BIT(ah, PLL3, PLL3_DO_MEAS_MASK);
3179
3180 - while ((REG_READ(ah, PLL4) & PLL4_MEAS_DONE) == 0)
3181 + while ((REG_READ(ah, PLL4) & PLL4_MEAS_DONE) == 0) {
3182 +
3183 udelay(100);
3184
3186	+		if (WARN_ON_ONCE(i >= 100)) {
3187	+			ath_err(common, "PLL4 measurement not done\n");
3187 + break;
3188 + }
3189 +
3190 + i++;
3191 + }
3192 +
3193 return (REG_READ(ah, PLL3) & SQSUM_DVC_MASK) >> 3;
3194 }
3195 EXPORT_SYMBOL(ar9003_get_pll_sqsum_dvc);
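The ar9003_get_pll_sqsum_dvc() fix above converts an unbounded busy-wait on PLL4_MEAS_DONE into a bounded poll that warns and bails after 100 iterations. A user-space sketch of the same pattern, with a simulated register read standing in for REG_READ and the udelay omitted:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define PLL4_MEAS_DONE 0x8   /* illustrative bit value */
#define MAX_POLLS      100

/* Bounded-polling pattern from the patch: spin on the done bit, but
 * give up after MAX_POLLS delay iterations so a wedged PLL cannot
 * hang the caller.  read_reg stands in for REG_READ(ah, PLL4). */
static bool poll_done(uint32_t (*read_reg)(void))
{
    int i;
    for (i = 0; i < MAX_POLLS; i++) {
        if (read_reg() & PLL4_MEAS_DONE)
            return true;
        /* udelay(100) would go here in kernel context */
    }
    return false;  /* timed out; the driver warns and proceeds */
}

/* Simulated registers for testing the helper. */
static int calls;
static uint32_t fake_ready_after_3(void)
{
    return ++calls >= 3 ? PLL4_MEAS_DONE : 0;
}
static uint32_t fake_never_ready(void)
{
    return 0;
}
```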
3196 diff --git a/drivers/net/wireless/ath/ath9k/main.c b/drivers/net/wireless/ath/ath9k/main.c
3197 index 798ea57..d5dabcb 100644
3198 --- a/drivers/net/wireless/ath/ath9k/main.c
3199 +++ b/drivers/net/wireless/ath/ath9k/main.c
3200 @@ -960,6 +960,15 @@ void ath_hw_pll_work(struct work_struct *work)
3201 hw_pll_work.work);
3202 u32 pll_sqsum;
3203
3204 + /*
3205 + * ensure that the PLL WAR is executed only
3206 + * after the STA is associated (or) if the
3207 + * beaconing had started in interfaces that
3208 + * uses beacons.
3209 + */
3210 + if (!(sc->sc_flags & SC_OP_BEACONS))
3211 + return;
3212 +
3213 if (AR_SREV_9485(sc->sc_ah)) {
3214
3215 ath9k_ps_wakeup(sc);
3216 @@ -1419,15 +1428,6 @@ static int ath9k_add_interface(struct ieee80211_hw *hw,
3217 }
3218 }
3219
3220 - if ((ah->opmode == NL80211_IFTYPE_ADHOC) ||
3221 - ((vif->type == NL80211_IFTYPE_ADHOC) &&
3222 - sc->nvifs > 0)) {
3223 - ath_err(common, "Cannot create ADHOC interface when other"
3224 - " interfaces already exist.\n");
3225 - ret = -EINVAL;
3226 - goto out;
3227 - }
3228 -
3229 ath_dbg(common, CONFIG, "Attach a VIF of type: %d\n", vif->type);
3230
3231 sc->nvifs++;
3232 diff --git a/drivers/net/wireless/ath/ath9k/recv.c b/drivers/net/wireless/ath/ath9k/recv.c
3233 index 1c4583c..a2f7ae8 100644
3234 --- a/drivers/net/wireless/ath/ath9k/recv.c
3235 +++ b/drivers/net/wireless/ath/ath9k/recv.c
3236 @@ -695,9 +695,9 @@ static bool ath_edma_get_buffers(struct ath_softc *sc,
3237 __skb_unlink(skb, &rx_edma->rx_fifo);
3238 list_add_tail(&bf->list, &sc->rx.rxbuf);
3239 ath_rx_edma_buf_link(sc, qtype);
3240 - } else {
3241 - bf = NULL;
3242 }
3243 +
3244 + bf = NULL;
3245 }
3246
3247 *dest = bf;
3248 @@ -821,7 +821,8 @@ static bool ath9k_rx_accept(struct ath_common *common,
3249 * descriptor does contain a valid key index. This has been observed
3250 * mostly with CCMP encryption.
3251 */
3252 - if (rx_stats->rs_keyix == ATH9K_RXKEYIX_INVALID)
3253 + if (rx_stats->rs_keyix == ATH9K_RXKEYIX_INVALID ||
3254 + !test_bit(rx_stats->rs_keyix, common->ccmp_keymap))
3255 rx_stats->rs_status &= ~ATH9K_RXERR_KEYMISS;
3256
3257 if (!rx_stats->rs_datalen)
3258 diff --git a/drivers/net/wireless/ath/ath9k/xmit.c b/drivers/net/wireless/ath/ath9k/xmit.c
3259 index d59dd01..4d57139 100644
3260 --- a/drivers/net/wireless/ath/ath9k/xmit.c
3261 +++ b/drivers/net/wireless/ath/ath9k/xmit.c
3262 @@ -938,6 +938,7 @@ static void ath_buf_set_rate(struct ath_softc *sc, struct ath_buf *bf,
3263 struct ieee80211_tx_rate *rates;
3264 const struct ieee80211_rate *rate;
3265 struct ieee80211_hdr *hdr;
3266 + struct ath_frame_info *fi = get_frame_info(bf->bf_mpdu);
3267 int i;
3268 u8 rix = 0;
3269
3270 @@ -948,18 +949,7 @@ static void ath_buf_set_rate(struct ath_softc *sc, struct ath_buf *bf,
3271
3272 /* set dur_update_en for l-sig computation except for PS-Poll frames */
3273 info->dur_update = !ieee80211_is_pspoll(hdr->frame_control);
3274 -
3275 - /*
3276 - * We check if Short Preamble is needed for the CTS rate by
3277 - * checking the BSS's global flag.
3278 - * But for the rate series, IEEE80211_TX_RC_USE_SHORT_PREAMBLE is used.
3279 - */
3280 - rate = ieee80211_get_rts_cts_rate(sc->hw, tx_info);
3281 - info->rtscts_rate = rate->hw_value;
3282 -
3283 - if (tx_info->control.vif &&
3284 - tx_info->control.vif->bss_conf.use_short_preamble)
3285 - info->rtscts_rate |= rate->hw_value_short;
3286 + info->rtscts_rate = fi->rtscts_rate;
3287
3288 for (i = 0; i < 4; i++) {
3289 bool is_40, is_sgi, is_sp;
3290 @@ -1001,13 +991,13 @@ static void ath_buf_set_rate(struct ath_softc *sc, struct ath_buf *bf,
3291 }
3292
3293 /* legacy rates */
3294 + rate = &sc->sbands[tx_info->band].bitrates[rates[i].idx];
3295 if ((tx_info->band == IEEE80211_BAND_2GHZ) &&
3296 !(rate->flags & IEEE80211_RATE_ERP_G))
3297 phy = WLAN_RC_PHY_CCK;
3298 else
3299 phy = WLAN_RC_PHY_OFDM;
3300
3301 - rate = &sc->sbands[tx_info->band].bitrates[rates[i].idx];
3302 info->rates[i].Rate = rate->hw_value;
3303 if (rate->hw_value_short) {
3304 if (rates[i].flags & IEEE80211_TX_RC_USE_SHORT_PREAMBLE)
3305 @@ -1776,10 +1766,22 @@ static void setup_frame_info(struct ieee80211_hw *hw, struct sk_buff *skb,
3306 struct ieee80211_sta *sta = tx_info->control.sta;
3307 struct ieee80211_key_conf *hw_key = tx_info->control.hw_key;
3308 struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
3309 + const struct ieee80211_rate *rate;
3310 struct ath_frame_info *fi = get_frame_info(skb);
3311 struct ath_node *an = NULL;
3312 enum ath9k_key_type keytype;
3313 + bool short_preamble = false;
3314 +
3315 + /*
3316 + * We check if Short Preamble is needed for the CTS rate by
3317 + * checking the BSS's global flag.
3318 + * But for the rate series, IEEE80211_TX_RC_USE_SHORT_PREAMBLE is used.
3319 + */
3320 + if (tx_info->control.vif &&
3321 + tx_info->control.vif->bss_conf.use_short_preamble)
3322 + short_preamble = true;
3323
3324 + rate = ieee80211_get_rts_cts_rate(hw, tx_info);
3325 keytype = ath9k_cmn_get_hw_crypto_keytype(skb);
3326
3327 if (sta)
3328 @@ -1794,6 +1796,9 @@ static void setup_frame_info(struct ieee80211_hw *hw, struct sk_buff *skb,
3329 fi->keyix = ATH9K_TXKEYIX_INVALID;
3330 fi->keytype = keytype;
3331 fi->framelen = framelen;
3332 + fi->rtscts_rate = rate->hw_value;
3333 + if (short_preamble)
3334 + fi->rtscts_rate |= rate->hw_value_short;
3335 }
3336
3337 u8 ath_txchainmask_reduction(struct ath_softc *sc, u8 chainmask, u32 rate)
3338 diff --git a/drivers/net/wireless/ath/key.c b/drivers/net/wireless/ath/key.c
3339 index 0e81904..5c54aa4 100644
3340 --- a/drivers/net/wireless/ath/key.c
3341 +++ b/drivers/net/wireless/ath/key.c
3342 @@ -556,6 +556,9 @@ int ath_key_config(struct ath_common *common,
3343 return -EIO;
3344
3345 set_bit(idx, common->keymap);
3346 + if (key->cipher == WLAN_CIPHER_SUITE_CCMP)
3347 + set_bit(idx, common->ccmp_keymap);
3348 +
3349 if (key->cipher == WLAN_CIPHER_SUITE_TKIP) {
3350 set_bit(idx + 64, common->keymap);
3351 set_bit(idx, common->tkip_keymap);
3352 @@ -582,6 +585,7 @@ void ath_key_delete(struct ath_common *common, struct ieee80211_key_conf *key)
3353 return;
3354
3355 clear_bit(key->hw_key_idx, common->keymap);
3356 + clear_bit(key->hw_key_idx, common->ccmp_keymap);
3357 if (key->cipher != WLAN_CIPHER_SUITE_TKIP)
3358 return;
3359
3360 diff --git a/drivers/net/wireless/ipw2x00/ipw.h b/drivers/net/wireless/ipw2x00/ipw.h
3361 new file mode 100644
3362 index 0000000..4007bf5
3363 --- /dev/null
3364 +++ b/drivers/net/wireless/ipw2x00/ipw.h
3365 @@ -0,0 +1,23 @@
3366 +/*
3367 + * Intel Pro/Wireless 2100, 2200BG, 2915ABG network connection driver
3368 + *
3369 + * Copyright 2012 Stanislav Yakovlev <stas.yakovlev@gmail.com>
3370 + *
3371 + * This program is free software; you can redistribute it and/or modify
3372 + * it under the terms of the GNU General Public License version 2 as
3373 + * published by the Free Software Foundation.
3374 + */
3375 +
3376 +#ifndef __IPW_H__
3377 +#define __IPW_H__
3378 +
3379 +#include <linux/ieee80211.h>
3380 +
3381 +static const u32 ipw_cipher_suites[] = {
3382 + WLAN_CIPHER_SUITE_WEP40,
3383 + WLAN_CIPHER_SUITE_WEP104,
3384 + WLAN_CIPHER_SUITE_TKIP,
3385 + WLAN_CIPHER_SUITE_CCMP,
3386 +};
3387 +
3388 +#endif
3389 diff --git a/drivers/net/wireless/ipw2x00/ipw2100.c b/drivers/net/wireless/ipw2x00/ipw2100.c
3390 index f0551f8..7c8e8b1 100644
3391 --- a/drivers/net/wireless/ipw2x00/ipw2100.c
3392 +++ b/drivers/net/wireless/ipw2x00/ipw2100.c
3393 @@ -166,6 +166,7 @@ that only one external action is invoked at a time.
3394 #include <net/lib80211.h>
3395
3396 #include "ipw2100.h"
3397 +#include "ipw.h"
3398
3399 #define IPW2100_VERSION "git-1.2.2"
3400
3401 @@ -1946,6 +1947,9 @@ static int ipw2100_wdev_init(struct net_device *dev)
3402 wdev->wiphy->bands[IEEE80211_BAND_2GHZ] = bg_band;
3403 }
3404
3405 + wdev->wiphy->cipher_suites = ipw_cipher_suites;
3406 + wdev->wiphy->n_cipher_suites = ARRAY_SIZE(ipw_cipher_suites);
3407 +
3408 set_wiphy_dev(wdev->wiphy, &priv->pci_dev->dev);
3409 if (wiphy_register(wdev->wiphy)) {
3410 ipw2100_down(priv);
3411 diff --git a/drivers/net/wireless/ipw2x00/ipw2200.c b/drivers/net/wireless/ipw2x00/ipw2200.c
3412 index 1779db3..3a6b991 100644
3413 --- a/drivers/net/wireless/ipw2x00/ipw2200.c
3414 +++ b/drivers/net/wireless/ipw2x00/ipw2200.c
3415 @@ -34,6 +34,7 @@
3416 #include <linux/slab.h>
3417 #include <net/cfg80211-wext.h>
3418 #include "ipw2200.h"
3419 +#include "ipw.h"
3420
3421
3422 #ifndef KBUILD_EXTMOD
3423 @@ -11544,6 +11545,9 @@ static int ipw_wdev_init(struct net_device *dev)
3424 wdev->wiphy->bands[IEEE80211_BAND_5GHZ] = a_band;
3425 }
3426
3427 + wdev->wiphy->cipher_suites = ipw_cipher_suites;
3428 + wdev->wiphy->n_cipher_suites = ARRAY_SIZE(ipw_cipher_suites);
3429 +
3430 set_wiphy_dev(wdev->wiphy, &priv->pci_dev->dev);
3431
3432 /* With that information in place, we can now register the wiphy... */
3433 diff --git a/drivers/net/wireless/iwlwifi/iwl-mac80211.c b/drivers/net/wireless/iwlwifi/iwl-mac80211.c
3434 index 1018f9b..e0e6c67 100644
3435 --- a/drivers/net/wireless/iwlwifi/iwl-mac80211.c
3436 +++ b/drivers/net/wireless/iwlwifi/iwl-mac80211.c
3437 @@ -788,6 +788,18 @@ static int iwlagn_mac_sta_state(struct ieee80211_hw *hw,
3438 switch (op) {
3439 case ADD:
3440 ret = iwlagn_mac_sta_add(hw, vif, sta);
3441 + if (ret)
3442 + break;
3443 + /*
3444 + * Clear the in-progress flag, the AP station entry was added
3445 + * but we'll initialize LQ only when we've associated (which
3446 + * would also clear the in-progress flag). This is necessary
3447 + * in case we never initialize LQ because association fails.
3448 + */
3449 + spin_lock_bh(&priv->sta_lock);
3450 + priv->stations[iwl_sta_id(sta)].used &=
3451 + ~IWL_STA_UCODE_INPROGRESS;
3452 + spin_unlock_bh(&priv->sta_lock);
3453 break;
3454 case REMOVE:
3455 ret = iwlagn_mac_sta_remove(hw, vif, sta);
3456 diff --git a/drivers/net/wireless/iwlwifi/iwl-trans-pcie.c b/drivers/net/wireless/iwlwifi/iwl-trans-pcie.c
3457 index 6eac984..8741048 100644
3458 --- a/drivers/net/wireless/iwlwifi/iwl-trans-pcie.c
3459 +++ b/drivers/net/wireless/iwlwifi/iwl-trans-pcie.c
3460 @@ -2000,6 +2000,7 @@ static ssize_t iwl_dbgfs_rx_queue_read(struct file *file,
3461 return simple_read_from_buffer(user_buf, count, ppos, buf, pos);
3462 }
3463
3464 +#ifdef CONFIG_IWLWIFI_DEBUG
3465 static ssize_t iwl_dbgfs_log_event_read(struct file *file,
3466 char __user *user_buf,
3467 size_t count, loff_t *ppos)
3468 @@ -2037,6 +2038,7 @@ static ssize_t iwl_dbgfs_log_event_write(struct file *file,
3469
3470 return count;
3471 }
3472 +#endif
3473
3474 static ssize_t iwl_dbgfs_interrupt_read(struct file *file,
3475 char __user *user_buf,
3476 @@ -2164,7 +2166,9 @@ static ssize_t iwl_dbgfs_fh_reg_read(struct file *file,
3477 return ret;
3478 }
3479
3480 +#ifdef CONFIG_IWLWIFI_DEBUG
3481 DEBUGFS_READ_WRITE_FILE_OPS(log_event);
3482 +#endif
3483 DEBUGFS_READ_WRITE_FILE_OPS(interrupt);
3484 DEBUGFS_READ_FILE_OPS(fh_reg);
3485 DEBUGFS_READ_FILE_OPS(rx_queue);
3486 @@ -2180,7 +2184,9 @@ static int iwl_trans_pcie_dbgfs_register(struct iwl_trans *trans,
3487 {
3488 DEBUGFS_ADD_FILE(rx_queue, dir, S_IRUSR);
3489 DEBUGFS_ADD_FILE(tx_queue, dir, S_IRUSR);
3490 +#ifdef CONFIG_IWLWIFI_DEBUG
3491 DEBUGFS_ADD_FILE(log_event, dir, S_IWUSR | S_IRUSR);
3492 +#endif
3493 DEBUGFS_ADD_FILE(interrupt, dir, S_IWUSR | S_IRUSR);
3494 DEBUGFS_ADD_FILE(csr, dir, S_IWUSR);
3495 DEBUGFS_ADD_FILE(fh_reg, dir, S_IRUSR);
3496 diff --git a/drivers/net/wireless/mwifiex/11n_rxreorder.c b/drivers/net/wireless/mwifiex/11n_rxreorder.c
3497 index 9c44088..900ee12 100644
3498 --- a/drivers/net/wireless/mwifiex/11n_rxreorder.c
3499 +++ b/drivers/net/wireless/mwifiex/11n_rxreorder.c
3500 @@ -256,7 +256,8 @@ mwifiex_11n_create_rx_reorder_tbl(struct mwifiex_private *priv, u8 *ta,
3501 else
3502 last_seq = priv->rx_seq[tid];
3503
3504 - if (last_seq >= new_node->start_win)
3505 + if (last_seq != MWIFIEX_DEF_11N_RX_SEQ_NUM &&
3506 + last_seq >= new_node->start_win)
3507 new_node->start_win = last_seq + 1;
3508
3509 new_node->win_size = win_size;
3510 @@ -596,5 +597,5 @@ void mwifiex_11n_cleanup_reorder_tbl(struct mwifiex_private *priv)
3511 spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags);
3512
3513 INIT_LIST_HEAD(&priv->rx_reorder_tbl_ptr);
3514 - memset(priv->rx_seq, 0, sizeof(priv->rx_seq));
3515 + mwifiex_reset_11n_rx_seq_num(priv);
3516 }
3517 diff --git a/drivers/net/wireless/mwifiex/11n_rxreorder.h b/drivers/net/wireless/mwifiex/11n_rxreorder.h
3518 index f1bffeb..6c9815a 100644
3519 --- a/drivers/net/wireless/mwifiex/11n_rxreorder.h
3520 +++ b/drivers/net/wireless/mwifiex/11n_rxreorder.h
3521 @@ -37,6 +37,13 @@
3522
3523 #define ADDBA_RSP_STATUS_ACCEPT 0
3524
3525 +#define MWIFIEX_DEF_11N_RX_SEQ_NUM 0xffff
3526 +
3527 +static inline void mwifiex_reset_11n_rx_seq_num(struct mwifiex_private *priv)
3528 +{
3529 + memset(priv->rx_seq, 0xff, sizeof(priv->rx_seq));
3530 +}
3531 +
3532 int mwifiex_11n_rx_reorder_pkt(struct mwifiex_private *,
3533 u16 seqNum,
3534 u16 tid, u8 *ta,
3535 diff --git a/drivers/net/wireless/mwifiex/cfg80211.c b/drivers/net/wireless/mwifiex/cfg80211.c
3536 index 6505038..baf6919 100644
3537 --- a/drivers/net/wireless/mwifiex/cfg80211.c
3538 +++ b/drivers/net/wireless/mwifiex/cfg80211.c
3539 @@ -1214,11 +1214,11 @@ struct net_device *mwifiex_add_virtual_intf(struct wiphy *wiphy,
3540 void *mdev_priv;
3541
3542 if (!priv)
3543 - return NULL;
3544 + return ERR_PTR(-EFAULT);
3545
3546 adapter = priv->adapter;
3547 if (!adapter)
3548 - return NULL;
3549 + return ERR_PTR(-EFAULT);
3550
3551 switch (type) {
3552 case NL80211_IFTYPE_UNSPECIFIED:
3553 @@ -1227,7 +1227,7 @@ struct net_device *mwifiex_add_virtual_intf(struct wiphy *wiphy,
3554 if (priv->bss_mode) {
3555 wiphy_err(wiphy, "cannot create multiple"
3556 " station/adhoc interfaces\n");
3557 - return NULL;
3558 + return ERR_PTR(-EINVAL);
3559 }
3560
3561 if (type == NL80211_IFTYPE_UNSPECIFIED)
3562 @@ -1244,14 +1244,15 @@ struct net_device *mwifiex_add_virtual_intf(struct wiphy *wiphy,
3563 break;
3564 default:
3565 wiphy_err(wiphy, "type not supported\n");
3566 - return NULL;
3567 + return ERR_PTR(-EINVAL);
3568 }
3569
3570 dev = alloc_netdev_mq(sizeof(struct mwifiex_private *), name,
3571 ether_setup, 1);
3572 if (!dev) {
3573 wiphy_err(wiphy, "no memory available for netdevice\n");
3574 - goto error;
3575 + priv->bss_mode = NL80211_IFTYPE_UNSPECIFIED;
3576 + return ERR_PTR(-ENOMEM);
3577 }
3578
3579 dev_net_set(dev, wiphy_net(wiphy));
3580 @@ -1276,7 +1277,9 @@ struct net_device *mwifiex_add_virtual_intf(struct wiphy *wiphy,
3581 /* Register network device */
3582 if (register_netdevice(dev)) {
3583 wiphy_err(wiphy, "cannot register virtual network device\n");
3584 - goto error;
3585 + free_netdev(dev);
3586 + priv->bss_mode = NL80211_IFTYPE_UNSPECIFIED;
3587 + return ERR_PTR(-EFAULT);
3588 }
3589
3590 sema_init(&priv->async_sem, 1);
3591 @@ -1288,12 +1291,6 @@ struct net_device *mwifiex_add_virtual_intf(struct wiphy *wiphy,
3592 mwifiex_dev_debugfs_init(priv);
3593 #endif
3594 return dev;
3595 -error:
3596 - if (dev && (dev->reg_state == NETREG_UNREGISTERED))
3597 - free_netdev(dev);
3598 - priv->bss_mode = NL80211_IFTYPE_UNSPECIFIED;
3599 -
3600 - return NULL;
3601 }
3602 EXPORT_SYMBOL_GPL(mwifiex_add_virtual_intf);
3603
3604 diff --git a/drivers/net/wireless/mwifiex/wmm.c b/drivers/net/wireless/mwifiex/wmm.c
3605 index 5a7316c..3e6abf0 100644
3606 --- a/drivers/net/wireless/mwifiex/wmm.c
3607 +++ b/drivers/net/wireless/mwifiex/wmm.c
3608 @@ -404,6 +404,8 @@ mwifiex_wmm_init(struct mwifiex_adapter *adapter)
3609 priv->add_ba_param.tx_win_size = MWIFIEX_AMPDU_DEF_TXWINSIZE;
3610 priv->add_ba_param.rx_win_size = MWIFIEX_AMPDU_DEF_RXWINSIZE;
3611
3612 + mwifiex_reset_11n_rx_seq_num(priv);
3613 +
3614 atomic_set(&priv->wmm.tx_pkts_queued, 0);
3615 atomic_set(&priv->wmm.highest_queued_prio, HIGH_PRIO_TID);
3616 }
3617 @@ -1209,6 +1211,7 @@ mwifiex_dequeue_tx_packet(struct mwifiex_adapter *adapter)
3618
3619 if (!ptr->is_11n_enabled ||
3620 mwifiex_is_ba_stream_setup(priv, ptr, tid) ||
3621 + priv->wps.session_enable ||
3622 ((priv->sec_info.wpa_enabled ||
3623 priv->sec_info.wpa2_enabled) &&
3624 !priv->wpa_is_gtk_set)) {
3625 diff --git a/drivers/net/wireless/rtl818x/rtl8187/leds.c b/drivers/net/wireless/rtl818x/rtl8187/leds.c
3626 index 2e0de2f..c2d5b49 100644
3627 --- a/drivers/net/wireless/rtl818x/rtl8187/leds.c
3628 +++ b/drivers/net/wireless/rtl818x/rtl8187/leds.c
3629 @@ -117,7 +117,7 @@ static void rtl8187_led_brightness_set(struct led_classdev *led_dev,
3630 radio_on = true;
3631 } else if (radio_on) {
3632 radio_on = false;
3633 - cancel_delayed_work_sync(&priv->led_on);
3634 + cancel_delayed_work(&priv->led_on);
3635 ieee80211_queue_delayed_work(hw, &priv->led_off, 0);
3636 }
3637 } else if (radio_on) {
3638 diff --git a/drivers/net/wireless/rtlwifi/rtl8192cu/sw.c b/drivers/net/wireless/rtlwifi/rtl8192cu/sw.c
3639 index 82c85286..5bd4085 100644
3640 --- a/drivers/net/wireless/rtlwifi/rtl8192cu/sw.c
3641 +++ b/drivers/net/wireless/rtlwifi/rtl8192cu/sw.c
3642 @@ -301,9 +301,11 @@ static struct usb_device_id rtl8192c_usb_ids[] = {
3643 {RTL_USB_DEVICE(0x07b8, 0x8188, rtl92cu_hal_cfg)}, /*Abocom - Abocom*/
3644 {RTL_USB_DEVICE(0x07b8, 0x8189, rtl92cu_hal_cfg)}, /*Funai - Abocom*/
3645 {RTL_USB_DEVICE(0x0846, 0x9041, rtl92cu_hal_cfg)}, /*NetGear WNA1000M*/
3646 + {RTL_USB_DEVICE(0x0bda, 0x5088, rtl92cu_hal_cfg)}, /*Thinkware-CC&C*/
3647 {RTL_USB_DEVICE(0x0df6, 0x0052, rtl92cu_hal_cfg)}, /*Sitecom - Edimax*/
3648 {RTL_USB_DEVICE(0x0df6, 0x005c, rtl92cu_hal_cfg)}, /*Sitecom - Edimax*/
3649 {RTL_USB_DEVICE(0x0eb0, 0x9071, rtl92cu_hal_cfg)}, /*NO Brand - Etop*/
3650 + {RTL_USB_DEVICE(0x4856, 0x0091, rtl92cu_hal_cfg)}, /*NetweeN - Feixun*/
3651 /* HP - Lite-On ,8188CUS Slim Combo */
3652 {RTL_USB_DEVICE(0x103c, 0x1629, rtl92cu_hal_cfg)},
3653 {RTL_USB_DEVICE(0x13d3, 0x3357, rtl92cu_hal_cfg)}, /* AzureWave */
3654 @@ -345,6 +347,7 @@ static struct usb_device_id rtl8192c_usb_ids[] = {
3655 {RTL_USB_DEVICE(0x07b8, 0x8178, rtl92cu_hal_cfg)}, /*Funai -Abocom*/
3656 {RTL_USB_DEVICE(0x0846, 0x9021, rtl92cu_hal_cfg)}, /*Netgear-Sercomm*/
3657 {RTL_USB_DEVICE(0x0b05, 0x17ab, rtl92cu_hal_cfg)}, /*ASUS-Edimax*/
3658 + {RTL_USB_DEVICE(0x0bda, 0x8186, rtl92cu_hal_cfg)}, /*Realtek 92CE-VAU*/
3659 {RTL_USB_DEVICE(0x0df6, 0x0061, rtl92cu_hal_cfg)}, /*Sitecom-Edimax*/
3660 {RTL_USB_DEVICE(0x0e66, 0x0019, rtl92cu_hal_cfg)}, /*Hawking-Edimax*/
3661 {RTL_USB_DEVICE(0x2001, 0x3307, rtl92cu_hal_cfg)}, /*D-Link-Cameo*/
3662 diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
3663 index 0ebbb19..796afbf 100644
3664 --- a/drivers/net/xen-netfront.c
3665 +++ b/drivers/net/xen-netfront.c
3666 @@ -1935,14 +1935,14 @@ static int __devexit xennet_remove(struct xenbus_device *dev)
3667
3668 dev_dbg(&dev->dev, "%s\n", dev->nodename);
3669
3670 - unregister_netdev(info->netdev);
3671 -
3672 xennet_disconnect_backend(info);
3673
3674 - del_timer_sync(&info->rx_refill_timer);
3675 -
3676 xennet_sysfs_delif(info->netdev);
3677
3678 + unregister_netdev(info->netdev);
3679 +
3680 + del_timer_sync(&info->rx_refill_timer);
3681 +
3682 free_percpu(info->stats);
3683
3684 free_netdev(info->netdev);
3685 diff --git a/drivers/oprofile/oprofile_perf.c b/drivers/oprofile/oprofile_perf.c
3686 index da14432..efc4b7f 100644
3687 --- a/drivers/oprofile/oprofile_perf.c
3688 +++ b/drivers/oprofile/oprofile_perf.c
3689 @@ -25,7 +25,7 @@ static int oprofile_perf_enabled;
3690 static DEFINE_MUTEX(oprofile_perf_mutex);
3691
3692 static struct op_counter_config *counter_config;
3693 -static struct perf_event **perf_events[nr_cpumask_bits];
3694 +static struct perf_event **perf_events[NR_CPUS];
3695 static int num_counters;
3696
3697 /*
3698 diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c
3699 index 6b54b23..3cd3f45 100644
3700 --- a/drivers/pci/pci-driver.c
3701 +++ b/drivers/pci/pci-driver.c
3702 @@ -742,6 +742,18 @@ static int pci_pm_suspend_noirq(struct device *dev)
3703
3704 pci_pm_set_unknown_state(pci_dev);
3705
3706 + /*
3707 + * Some BIOSes from ASUS have a bug: If a USB EHCI host controller's
3708 + * PCI COMMAND register isn't 0, the BIOS assumes that the controller
3709 + * hasn't been quiesced and tries to turn it off. If the controller
3710 + * is already in D3, this can hang or cause memory corruption.
3711 + *
3712 + * Since the value of the COMMAND register doesn't matter once the
3713 + * device has been suspended, we can safely set it to 0 here.
3714 + */
3715 + if (pci_dev->class == PCI_CLASS_SERIAL_USB_EHCI)
3716 + pci_write_config_word(pci_dev, PCI_COMMAND, 0);
3717 +
3718 return 0;
3719 }
3720
3721 diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
3722 index f597a1a..111569c 100644
3723 --- a/drivers/pci/pci.c
3724 +++ b/drivers/pci/pci.c
3725 @@ -1743,11 +1743,6 @@ int pci_prepare_to_sleep(struct pci_dev *dev)
3726 if (target_state == PCI_POWER_ERROR)
3727 return -EIO;
3728
3729 - /* Some devices mustn't be in D3 during system sleep */
3730 - if (target_state == PCI_D3hot &&
3731 - (dev->dev_flags & PCI_DEV_FLAGS_NO_D3_DURING_SLEEP))
3732 - return 0;
3733 -
3734 pci_enable_wake(dev, target_state, device_may_wakeup(&dev->dev));
3735
3736 error = pci_set_power_state(dev, target_state);
3737 diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
3738 index bf33f0b..4bf7102 100644
3739 --- a/drivers/pci/quirks.c
3740 +++ b/drivers/pci/quirks.c
3741 @@ -2917,32 +2917,6 @@ static void __devinit disable_igfx_irq(struct pci_dev *dev)
3742 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0102, disable_igfx_irq);
3743 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x010a, disable_igfx_irq);
3744
3745 -/*
3746 - * The Intel 6 Series/C200 Series chipset's EHCI controllers on many
3747 - * ASUS motherboards will cause memory corruption or a system crash
3748 - * if they are in D3 while the system is put into S3 sleep.
3749 - */
3750 -static void __devinit asus_ehci_no_d3(struct pci_dev *dev)
3751 -{
3752 - const char *sys_info;
3753 - static const char good_Asus_board[] = "P8Z68-V";
3754 -
3755 - if (dev->dev_flags & PCI_DEV_FLAGS_NO_D3_DURING_SLEEP)
3756 - return;
3757 - if (dev->subsystem_vendor != PCI_VENDOR_ID_ASUSTEK)
3758 - return;
3759 - sys_info = dmi_get_system_info(DMI_BOARD_NAME);
3760 - if (sys_info && memcmp(sys_info, good_Asus_board,
3761 - sizeof(good_Asus_board) - 1) == 0)
3762 - return;
3763 -
3764 - dev_info(&dev->dev, "broken D3 during system sleep on ASUS\n");
3765 - dev->dev_flags |= PCI_DEV_FLAGS_NO_D3_DURING_SLEEP;
3766 - device_set_wakeup_capable(&dev->dev, false);
3767 -}
3768 -DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1c26, asus_ehci_no_d3);
3769 -DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1c2d, asus_ehci_no_d3);
3770 -
3771 static void pci_do_fixups(struct pci_dev *dev, struct pci_fixup *f,
3772 struct pci_fixup *end)
3773 {
3774 diff --git a/drivers/remoteproc/Kconfig b/drivers/remoteproc/Kconfig
3775 index 24d880e..f8d818a 100644
3776 --- a/drivers/remoteproc/Kconfig
3777 +++ b/drivers/remoteproc/Kconfig
3778 @@ -4,9 +4,11 @@ menu "Remoteproc drivers (EXPERIMENTAL)"
3779 config REMOTEPROC
3780 tristate
3781 depends on EXPERIMENTAL
3782 + select FW_CONFIG
3783
3784 config OMAP_REMOTEPROC
3785 tristate "OMAP remoteproc support"
3786 + depends on EXPERIMENTAL
3787 depends on ARCH_OMAP4
3788 depends on OMAP_IOMMU
3789 select REMOTEPROC
3790 diff --git a/drivers/rpmsg/virtio_rpmsg_bus.c b/drivers/rpmsg/virtio_rpmsg_bus.c
3791 index 75506ec..39d3aa4 100644
3792 --- a/drivers/rpmsg/virtio_rpmsg_bus.c
3793 +++ b/drivers/rpmsg/virtio_rpmsg_bus.c
3794 @@ -188,6 +188,26 @@ static int rpmsg_uevent(struct device *dev, struct kobj_uevent_env *env)
3795 rpdev->id.name);
3796 }
3797
3798 +/**
3799 + * __ept_release() - deallocate an rpmsg endpoint
3800 + * @kref: the ept's reference count
3801 + *
3802 + * This function deallocates an ept, and is invoked when its @kref refcount
3803 + * drops to zero.
3804 + *
3805 + * Never invoke this function directly!
3806 + */
3807 +static void __ept_release(struct kref *kref)
3808 +{
3809 + struct rpmsg_endpoint *ept = container_of(kref, struct rpmsg_endpoint,
3810 + refcount);
3811 + /*
3812 + * At this point no one holds a reference to ept anymore,
3813 + * so we can directly free it
3814 + */
3815 + kfree(ept);
3816 +}
3817 +
3818 /* for more info, see below documentation of rpmsg_create_ept() */
3819 static struct rpmsg_endpoint *__rpmsg_create_ept(struct virtproc_info *vrp,
3820 struct rpmsg_channel *rpdev, rpmsg_rx_cb_t cb,
3821 @@ -206,6 +226,9 @@ static struct rpmsg_endpoint *__rpmsg_create_ept(struct virtproc_info *vrp,
3822 return NULL;
3823 }
3824
3825 + kref_init(&ept->refcount);
3826 + mutex_init(&ept->cb_lock);
3827 +
3828 ept->rpdev = rpdev;
3829 ept->cb = cb;
3830 ept->priv = priv;
3831 @@ -238,7 +261,7 @@ rem_idr:
3832 idr_remove(&vrp->endpoints, request);
3833 free_ept:
3834 mutex_unlock(&vrp->endpoints_lock);
3835 - kfree(ept);
3836 + kref_put(&ept->refcount, __ept_release);
3837 return NULL;
3838 }
3839
3840 @@ -302,11 +325,17 @@ EXPORT_SYMBOL(rpmsg_create_ept);
3841 static void
3842 __rpmsg_destroy_ept(struct virtproc_info *vrp, struct rpmsg_endpoint *ept)
3843 {
3844 + /* make sure new inbound messages can't find this ept anymore */
3845 mutex_lock(&vrp->endpoints_lock);
3846 idr_remove(&vrp->endpoints, ept->addr);
3847 mutex_unlock(&vrp->endpoints_lock);
3848
3849 - kfree(ept);
3850 + /* make sure in-flight inbound messages won't invoke cb anymore */
3851 + mutex_lock(&ept->cb_lock);
3852 + ept->cb = NULL;
3853 + mutex_unlock(&ept->cb_lock);
3854 +
3855 + kref_put(&ept->refcount, __ept_release);
3856 }
3857
3858 /**
3859 @@ -790,12 +819,28 @@ static void rpmsg_recv_done(struct virtqueue *rvq)
3860
3861 /* use the dst addr to fetch the callback of the appropriate user */
3862 mutex_lock(&vrp->endpoints_lock);
3863 +
3864 ept = idr_find(&vrp->endpoints, msg->dst);
3865 +
3866 + /* let's make sure no one deallocates ept while we use it */
3867 + if (ept)
3868 + kref_get(&ept->refcount);
3869 +
3870 mutex_unlock(&vrp->endpoints_lock);
3871
3872 - if (ept && ept->cb)
3873 - ept->cb(ept->rpdev, msg->data, msg->len, ept->priv, msg->src);
3874 - else
3875 + if (ept) {
3876 + /* make sure ept->cb doesn't go away while we use it */
3877 + mutex_lock(&ept->cb_lock);
3878 +
3879 + if (ept->cb)
3880 + ept->cb(ept->rpdev, msg->data, msg->len, ept->priv,
3881 + msg->src);
3882 +
3883 + mutex_unlock(&ept->cb_lock);
3884 +
3885 + /* farewell, ept, we don't need you anymore */
3886 + kref_put(&ept->refcount, __ept_release);
3887 + } else
3888 dev_warn(dev, "msg received with no recepient\n");
3889
3890 /* publish the real size of the buffer */
3891 diff --git a/drivers/rtc/rtc-ab8500.c b/drivers/rtc/rtc-ab8500.c
3892 index 4bcf9ca..b11a2ec 100644
3893 --- a/drivers/rtc/rtc-ab8500.c
3894 +++ b/drivers/rtc/rtc-ab8500.c
3895 @@ -422,7 +422,7 @@ static int __devinit ab8500_rtc_probe(struct platform_device *pdev)
3896 }
3897
3898 err = request_threaded_irq(irq, NULL, rtc_alarm_handler,
3899 - IRQF_NO_SUSPEND, "ab8500-rtc", rtc);
3900 + IRQF_NO_SUSPEND | IRQF_ONESHOT, "ab8500-rtc", rtc);
3901 if (err < 0) {
3902 rtc_device_unregister(rtc);
3903 return err;
3904 diff --git a/drivers/rtc/rtc-mxc.c b/drivers/rtc/rtc-mxc.c
3905 index 5e1d64e..e3e50d6 100644
3906 --- a/drivers/rtc/rtc-mxc.c
3907 +++ b/drivers/rtc/rtc-mxc.c
3908 @@ -202,10 +202,11 @@ static irqreturn_t mxc_rtc_interrupt(int irq, void *dev_id)
3909 struct platform_device *pdev = dev_id;
3910 struct rtc_plat_data *pdata = platform_get_drvdata(pdev);
3911 void __iomem *ioaddr = pdata->ioaddr;
3912 + unsigned long flags;
3913 u32 status;
3914 u32 events = 0;
3915
3916 - spin_lock_irq(&pdata->rtc->irq_lock);
3917 + spin_lock_irqsave(&pdata->rtc->irq_lock, flags);
3918 status = readw(ioaddr + RTC_RTCISR) & readw(ioaddr + RTC_RTCIENR);
3919 /* clear interrupt sources */
3920 writew(status, ioaddr + RTC_RTCISR);
3921 @@ -224,7 +225,7 @@ static irqreturn_t mxc_rtc_interrupt(int irq, void *dev_id)
3922 events |= (RTC_PF | RTC_IRQF);
3923
3924 rtc_update_irq(pdata->rtc, 1, events);
3925 - spin_unlock_irq(&pdata->rtc->irq_lock);
3926 + spin_unlock_irqrestore(&pdata->rtc->irq_lock, flags);
3927
3928 return IRQ_HANDLED;
3929 }
3930 diff --git a/drivers/rtc/rtc-spear.c b/drivers/rtc/rtc-spear.c
3931 index e38da0d..235b0ef 100644
3932 --- a/drivers/rtc/rtc-spear.c
3933 +++ b/drivers/rtc/rtc-spear.c
3934 @@ -457,12 +457,12 @@ static int __devexit spear_rtc_remove(struct platform_device *pdev)
3935 clk_disable(config->clk);
3936 clk_put(config->clk);
3937 iounmap(config->ioaddr);
3938 - kfree(config);
3939 res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
3940 if (res)
3941 release_mem_region(res->start, resource_size(res));
3942 platform_set_drvdata(pdev, NULL);
3943 rtc_device_unregister(config->rtc);
3944 + kfree(config);
3945
3946 return 0;
3947 }
3948 diff --git a/drivers/scsi/aic94xx/aic94xx_task.c b/drivers/scsi/aic94xx/aic94xx_task.c
3949 index 532d212..393e7ce 100644
3950 --- a/drivers/scsi/aic94xx/aic94xx_task.c
3951 +++ b/drivers/scsi/aic94xx/aic94xx_task.c
3952 @@ -201,7 +201,7 @@ static void asd_get_response_tasklet(struct asd_ascb *ascb,
3953
3954 if (SAS_STATUS_BUF_SIZE >= sizeof(*resp)) {
3955 resp->frame_len = le16_to_cpu(*(__le16 *)(r+6));
3956 - memcpy(&resp->ending_fis[0], r+16, 24);
3957 + memcpy(&resp->ending_fis[0], r+16, ATA_RESP_FIS_SIZE);
3958 ts->buf_valid_size = sizeof(*resp);
3959 }
3960 }
3961 diff --git a/drivers/scsi/libsas/sas_ata.c b/drivers/scsi/libsas/sas_ata.c
3962 index 441d88a..d109cc3 100644
3963 --- a/drivers/scsi/libsas/sas_ata.c
3964 +++ b/drivers/scsi/libsas/sas_ata.c
3965 @@ -139,12 +139,12 @@ static void sas_ata_task_done(struct sas_task *task)
3966 if (stat->stat == SAS_PROTO_RESPONSE || stat->stat == SAM_STAT_GOOD ||
3967 ((stat->stat == SAM_STAT_CHECK_CONDITION &&
3968 dev->sata_dev.command_set == ATAPI_COMMAND_SET))) {
3969 - ata_tf_from_fis(resp->ending_fis, &dev->sata_dev.tf);
3970 + memcpy(dev->sata_dev.fis, resp->ending_fis, ATA_RESP_FIS_SIZE);
3971
3972 if (!link->sactive) {
3973 - qc->err_mask |= ac_err_mask(dev->sata_dev.tf.command);
3974 + qc->err_mask |= ac_err_mask(dev->sata_dev.fis[2]);
3975 } else {
3976 - link->eh_info.err_mask |= ac_err_mask(dev->sata_dev.tf.command);
3977 + link->eh_info.err_mask |= ac_err_mask(dev->sata_dev.fis[2]);
3978 if (unlikely(link->eh_info.err_mask))
3979 qc->flags |= ATA_QCFLAG_FAILED;
3980 }
3981 @@ -161,8 +161,8 @@ static void sas_ata_task_done(struct sas_task *task)
3982 qc->flags |= ATA_QCFLAG_FAILED;
3983 }
3984
3985 - dev->sata_dev.tf.feature = 0x04; /* status err */
3986 - dev->sata_dev.tf.command = ATA_ERR;
3987 + dev->sata_dev.fis[3] = 0x04; /* status err */
3988 + dev->sata_dev.fis[2] = ATA_ERR;
3989 }
3990 }
3991
3992 @@ -269,7 +269,7 @@ static bool sas_ata_qc_fill_rtf(struct ata_queued_cmd *qc)
3993 {
3994 struct domain_device *dev = qc->ap->private_data;
3995
3996 - memcpy(&qc->result_tf, &dev->sata_dev.tf, sizeof(qc->result_tf));
3997 + ata_tf_from_fis(dev->sata_dev.fis, &qc->result_tf);
3998 return true;
3999 }
4000
4001 diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
4002 index 5ba5c2a..a239382 100644
4003 --- a/drivers/scsi/sd.c
4004 +++ b/drivers/scsi/sd.c
4005 @@ -1898,6 +1898,8 @@ static int sd_try_rc16_first(struct scsi_device *sdp)
4006 {
4007 if (sdp->host->max_cmd_len < 16)
4008 return 0;
4009 + if (sdp->try_rc_10_first)
4010 + return 0;
4011 if (sdp->scsi_level > SCSI_SPC_2)
4012 return 1;
4013 if (scsi_device_protection(sdp))
4014 diff --git a/drivers/staging/iio/adc/ad7606_core.c b/drivers/staging/iio/adc/ad7606_core.c
4015 index 97e8d3d..7322c16 100644
4016 --- a/drivers/staging/iio/adc/ad7606_core.c
4017 +++ b/drivers/staging/iio/adc/ad7606_core.c
4018 @@ -235,6 +235,7 @@ static const struct attribute_group ad7606_attribute_group_range = {
4019 .indexed = 1, \
4020 .channel = num, \
4021 .address = num, \
4022 + .info_mask = IIO_CHAN_INFO_SCALE_SHARED_BIT, \
4023 .scan_index = num, \
4024 .scan_type = IIO_ST('s', 16, 16, 0), \
4025 }
4026 diff --git a/drivers/staging/rtl8712/usb_intf.c b/drivers/staging/rtl8712/usb_intf.c
4027 index e419b4f..2c80745 100644
4028 --- a/drivers/staging/rtl8712/usb_intf.c
4029 +++ b/drivers/staging/rtl8712/usb_intf.c
4030 @@ -102,6 +102,8 @@ static struct usb_device_id rtl871x_usb_id_tbl[] = {
4031 /* - */
4032 {USB_DEVICE(0x20F4, 0x646B)},
4033 {USB_DEVICE(0x083A, 0xC512)},
4034 + {USB_DEVICE(0x25D4, 0x4CA1)},
4035 + {USB_DEVICE(0x25D4, 0x4CAB)},
4036
4037 /* RTL8191SU */
4038 /* Realtek */
4039 diff --git a/drivers/target/tcm_fc/tfc_sess.c b/drivers/target/tcm_fc/tfc_sess.c
4040 index cb99da9..87901fa 100644
4041 --- a/drivers/target/tcm_fc/tfc_sess.c
4042 +++ b/drivers/target/tcm_fc/tfc_sess.c
4043 @@ -58,7 +58,8 @@ static struct ft_tport *ft_tport_create(struct fc_lport *lport)
4044 struct ft_tport *tport;
4045 int i;
4046
4047 - tport = rcu_dereference(lport->prov[FC_TYPE_FCP]);
4048 + tport = rcu_dereference_protected(lport->prov[FC_TYPE_FCP],
4049 + lockdep_is_held(&ft_lport_lock));
4050 if (tport && tport->tpg)
4051 return tport;
4052
4053 diff --git a/drivers/usb/class/cdc-wdm.c b/drivers/usb/class/cdc-wdm.c
4054 index 83d14bf..01d247e 100644
4055 --- a/drivers/usb/class/cdc-wdm.c
4056 +++ b/drivers/usb/class/cdc-wdm.c
4057 @@ -497,6 +497,8 @@ retry:
4058 goto retry;
4059 }
4060 if (!desc->reslength) { /* zero length read */
4061 + dev_dbg(&desc->intf->dev, "%s: zero length - clearing WDM_READ\n", __func__);
4062 + clear_bit(WDM_READ, &desc->flags);
4063 spin_unlock_irq(&desc->iuspin);
4064 goto retry;
4065 }
4066 diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
4067 index c8e0704..6241b71 100644
4068 --- a/drivers/usb/core/hub.c
4069 +++ b/drivers/usb/core/hub.c
4070 @@ -2102,12 +2102,16 @@ static unsigned hub_is_wusb(struct usb_hub *hub)
4071 static int hub_port_reset(struct usb_hub *hub, int port1,
4072 struct usb_device *udev, unsigned int delay, bool warm);
4073
4074 -/* Is a USB 3.0 port in the Inactive state? */
4075 -static bool hub_port_inactive(struct usb_hub *hub, u16 portstatus)
4076 +/* Is a USB 3.0 port in the Inactive or Compliance Mode state?
4077 + * A port warm reset is required to recover.
4078 + */
4079 +static bool hub_port_warm_reset_required(struct usb_hub *hub, u16 portstatus)
4080 {
4081 return hub_is_superspeed(hub->hdev) &&
4082 - (portstatus & USB_PORT_STAT_LINK_STATE) ==
4083 - USB_SS_PORT_LS_SS_INACTIVE;
4084 + (((portstatus & USB_PORT_STAT_LINK_STATE) ==
4085 + USB_SS_PORT_LS_SS_INACTIVE) ||
4086 + ((portstatus & USB_PORT_STAT_LINK_STATE) ==
4087 + USB_SS_PORT_LS_COMP_MOD));
4088 }
4089
4090 static int hub_port_wait_reset(struct usb_hub *hub, int port1,
4091 @@ -2143,7 +2147,7 @@ static int hub_port_wait_reset(struct usb_hub *hub, int port1,
4092 *
4093 * See https://bugzilla.kernel.org/show_bug.cgi?id=41752
4094 */
4095 - if (hub_port_inactive(hub, portstatus)) {
4096 + if (hub_port_warm_reset_required(hub, portstatus)) {
4097 int ret;
4098
4099 if ((portchange & USB_PORT_STAT_C_CONNECTION))
4100 @@ -3757,9 +3761,7 @@ static void hub_events(void)
4101 /* Warm reset a USB3 protocol port if it's in
4102 * SS.Inactive state.
4103 */
4104 - if (hub_is_superspeed(hub->hdev) &&
4105 - (portstatus & USB_PORT_STAT_LINK_STATE)
4106 - == USB_SS_PORT_LS_SS_INACTIVE) {
4107 + if (hub_port_warm_reset_required(hub, portstatus)) {
4108 dev_dbg(hub_dev, "warm reset port %d\n", i);
4109 hub_port_reset(hub, i, NULL,
4110 HUB_BH_RESET_TIME, true);
4111 diff --git a/drivers/usb/host/xhci-hub.c b/drivers/usb/host/xhci-hub.c
4112 index 89850a8..bbf3c0c 100644
4113 --- a/drivers/usb/host/xhci-hub.c
4114 +++ b/drivers/usb/host/xhci-hub.c
4115 @@ -462,6 +462,42 @@ void xhci_test_and_clear_bit(struct xhci_hcd *xhci, __le32 __iomem **port_array,
4116 }
4117 }
4118
4119 +/* Updates Link Status for Super Speed port */
4120 +static void xhci_hub_report_link_state(u32 *status, u32 status_reg)
4121 +{
4122 + u32 pls = status_reg & PORT_PLS_MASK;
4123 +
4124 + /* resume state is an xHCI internal state.
4125 + * Do not report it to usb core.
4126 + */
4127 + if (pls == XDEV_RESUME)
4128 + return;
4129 +
4130 + /* When the CAS bit is set then warm reset
4131 + * should be performed on port
4132 + */
4133 + if (status_reg & PORT_CAS) {
4134 + /* The CAS bit can be set while the port is
4135 + * in any link state.
4136 + * Only roothubs have CAS bit, so we
4137 + * pretend to be in compliance mode
4138 + * unless we're already in compliance
4139 + * or the inactive state.
4140 + */
4141 + if (pls != USB_SS_PORT_LS_COMP_MOD &&
4142 + pls != USB_SS_PORT_LS_SS_INACTIVE) {
4143 + pls = USB_SS_PORT_LS_COMP_MOD;
4144 + }
4145 + /* Return also connection bit -
4146 + * hub state machine resets port
4147 + * when this bit is set.
4148 + */
4149 + pls |= USB_PORT_STAT_CONNECTION;
4150 + }
4151 + /* update status field */
4152 + *status |= pls;
4153 +}
4154 +
4155 int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
4156 u16 wIndex, char *buf, u16 wLength)
4157 {
4158 @@ -605,13 +641,9 @@ int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
4159 else
4160 status |= USB_PORT_STAT_POWER;
4161 }
4162 - /* Port Link State */
4163 + /* Update Port Link State for Super Speed ports */
4164 if (hcd->speed == HCD_USB3) {
4165 - /* resume state is a xHCI internal state.
4166 - * Do not report it to usb core.
4167 - */
4168 - if ((temp & PORT_PLS_MASK) != XDEV_RESUME)
4169 - status |= (temp & PORT_PLS_MASK);
4170 + xhci_hub_report_link_state(&status, temp);
4171 }
4172 if (bus_state->port_c_suspend & (1 << wIndex))
4173 status |= 1 << USB_PORT_FEAT_C_SUSPEND;
4174 diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
4175 index 525a1ee..158175bf 100644
4176 --- a/drivers/usb/host/xhci-ring.c
4177 +++ b/drivers/usb/host/xhci-ring.c
4178 @@ -885,6 +885,17 @@ static void update_ring_for_set_deq_completion(struct xhci_hcd *xhci,
4179 num_trbs_free_temp = ep_ring->num_trbs_free;
4180 dequeue_temp = ep_ring->dequeue;
4181
4182 + /* If we get two back-to-back stalls, and the first stalled transfer
4183 + * ends just before a link TRB, the dequeue pointer will be left on
4184 + * the link TRB by the code in the while loop. So we have to update
4185 + * the dequeue pointer one segment further, or we'll jump off
4186 + * the segment into la-la-land.
4187 + */
4188 + if (last_trb(xhci, ep_ring, ep_ring->deq_seg, ep_ring->dequeue)) {
4189 + ep_ring->deq_seg = ep_ring->deq_seg->next;
4190 + ep_ring->dequeue = ep_ring->deq_seg->trbs;
4191 + }
4192 +
4193 while (ep_ring->dequeue != dev->eps[ep_index].queued_deq_ptr) {
4194 /* We have more usable TRBs */
4195 ep_ring->num_trbs_free++;
4196 diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
4197 index ac14276..59434fe 100644
4198 --- a/drivers/usb/host/xhci.h
4199 +++ b/drivers/usb/host/xhci.h
4200 @@ -341,7 +341,11 @@ struct xhci_op_regs {
4201 #define PORT_PLC (1 << 22)
4202 /* port configure error change - port failed to configure its link partner */
4203 #define PORT_CEC (1 << 23)
4204 -/* bit 24 reserved */
4205 +/* Cold Attach Status - xHC can set this bit to report device attached during
4206 + * Sx state. Warm port reset should be performed to clear this bit and move port
4207 + * to connected state.
4208 + */
4209 +#define PORT_CAS (1 << 24)
4210 /* wake on connect (enable) */
4211 #define PORT_WKCONN_E (1 << 25)
4212 /* wake on disconnect (enable) */
4213 diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
4214 index 95de9c0..53e7e69 100644
4215 --- a/drivers/usb/serial/cp210x.c
4216 +++ b/drivers/usb/serial/cp210x.c
4217 @@ -93,6 +93,7 @@ static const struct usb_device_id id_table[] = {
4218 { USB_DEVICE(0x10C4, 0x814B) }, /* West Mountain Radio RIGtalk */
4219 { USB_DEVICE(0x10C4, 0x8156) }, /* B&G H3000 link cable */
4220 { USB_DEVICE(0x10C4, 0x815E) }, /* Helicomm IP-Link 1220-DVM */
4221 + { USB_DEVICE(0x10C4, 0x815F) }, /* Timewave HamLinkUSB */
4222 { USB_DEVICE(0x10C4, 0x818B) }, /* AVIT Research USB to TTL */
4223 { USB_DEVICE(0x10C4, 0x819F) }, /* MJS USB Toslink Switcher */
4224 { USB_DEVICE(0x10C4, 0x81A6) }, /* ThinkOptics WavIt */
4225 @@ -134,7 +135,13 @@ static const struct usb_device_id id_table[] = {
4226 { USB_DEVICE(0x10CE, 0xEA6A) }, /* Silicon Labs MobiData GPRS USB Modem 100EU */
4227 { USB_DEVICE(0x13AD, 0x9999) }, /* Baltech card reader */
4228 { USB_DEVICE(0x1555, 0x0004) }, /* Owen AC4 USB-RS485 Converter */
4229 + { USB_DEVICE(0x166A, 0x0201) }, /* Clipsal 5500PACA C-Bus Pascal Automation Controller */
4230 + { USB_DEVICE(0x166A, 0x0301) }, /* Clipsal 5800PC C-Bus Wireless PC Interface */
4231 { USB_DEVICE(0x166A, 0x0303) }, /* Clipsal 5500PCU C-Bus USB interface */
4232 + { USB_DEVICE(0x166A, 0x0304) }, /* Clipsal 5000CT2 C-Bus Black and White Touchscreen */
4233 + { USB_DEVICE(0x166A, 0x0305) }, /* Clipsal C-5000CT2 C-Bus Spectrum Colour Touchscreen */
4234 + { USB_DEVICE(0x166A, 0x0401) }, /* Clipsal L51xx C-Bus Architectural Dimmer */
4235 + { USB_DEVICE(0x166A, 0x0101) }, /* Clipsal 5560884 C-Bus Multi-room Audio Matrix Switcher */
4236 { USB_DEVICE(0x16D6, 0x0001) }, /* Jablotron serial interface */
4237 { USB_DEVICE(0x16DC, 0x0010) }, /* W-IE-NE-R Plein & Baus GmbH PL512 Power Supply */
4238 { USB_DEVICE(0x16DC, 0x0011) }, /* W-IE-NE-R Plein & Baus GmbH RCM Remote Control for MARATON Power Supply */
4239 @@ -146,7 +153,11 @@ static const struct usb_device_id id_table[] = {
4240 { USB_DEVICE(0x1843, 0x0200) }, /* Vaisala USB Instrument Cable */
4241 { USB_DEVICE(0x18EF, 0xE00F) }, /* ELV USB-I2C-Interface */
4242 { USB_DEVICE(0x1BE3, 0x07A6) }, /* WAGO 750-923 USB Service Cable */
4243 + { USB_DEVICE(0x1E29, 0x0102) }, /* Festo CPX-USB */
4244 + { USB_DEVICE(0x1E29, 0x0501) }, /* Festo CMSP */
4245 { USB_DEVICE(0x3195, 0xF190) }, /* Link Instruments MSO-19 */
4246 + { USB_DEVICE(0x3195, 0xF280) }, /* Link Instruments MSO-28 */
4247 + { USB_DEVICE(0x3195, 0xF281) }, /* Link Instruments MSO-28 */
4248 { USB_DEVICE(0x413C, 0x9500) }, /* DW700 GPS USB interface */
4249 { } /* Terminating Entry */
4250 };
4251 diff --git a/drivers/usb/serial/metro-usb.c b/drivers/usb/serial/metro-usb.c
4252 index 08d16e8..7c14671 100644
4253 --- a/drivers/usb/serial/metro-usb.c
4254 +++ b/drivers/usb/serial/metro-usb.c
4255 @@ -171,14 +171,6 @@ static int metrousb_open(struct tty_struct *tty, struct usb_serial_port *port)
4256 metro_priv->throttled = 0;
4257 spin_unlock_irqrestore(&metro_priv->lock, flags);
4258
4259 - /*
4260 - * Force low_latency on so that our tty_push actually forces the data
4261 - * through, otherwise it is scheduled, and with high data rates (like
4262 - * with OHCI) data can get lost.
4263 - */
4264 - if (tty)
4265 - tty->low_latency = 1;
4266 -
4267 /* Clear the urb pipe. */
4268 usb_clear_halt(serial->dev, port->interrupt_in_urb->pipe);
4269
4270 diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
4271 index 2706d8a..49484b3 100644
4272 --- a/drivers/usb/serial/option.c
4273 +++ b/drivers/usb/serial/option.c
4274 @@ -236,6 +236,7 @@ static void option_instat_callback(struct urb *urb);
4275 #define NOVATELWIRELESS_PRODUCT_G1 0xA001
4276 #define NOVATELWIRELESS_PRODUCT_G1_M 0xA002
4277 #define NOVATELWIRELESS_PRODUCT_G2 0xA010
4278 +#define NOVATELWIRELESS_PRODUCT_MC551 0xB001
4279
4280 /* AMOI PRODUCTS */
4281 #define AMOI_VENDOR_ID 0x1614
4282 @@ -496,6 +497,19 @@ static void option_instat_callback(struct urb *urb);
4283
4284 /* MediaTek products */
4285 #define MEDIATEK_VENDOR_ID 0x0e8d
4286 +#define MEDIATEK_PRODUCT_DC_1COM 0x00a0
4287 +#define MEDIATEK_PRODUCT_DC_4COM 0x00a5
4288 +#define MEDIATEK_PRODUCT_DC_5COM 0x00a4
4289 +#define MEDIATEK_PRODUCT_7208_1COM 0x7101
4290 +#define MEDIATEK_PRODUCT_7208_2COM 0x7102
4291 +#define MEDIATEK_PRODUCT_FP_1COM 0x0003
4292 +#define MEDIATEK_PRODUCT_FP_2COM 0x0023
4293 +#define MEDIATEK_PRODUCT_FPDC_1COM 0x0043
4294 +#define MEDIATEK_PRODUCT_FPDC_2COM 0x0033
4295 +
4296 +/* Cellient products */
4297 +#define CELLIENT_VENDOR_ID 0x2692
4298 +#define CELLIENT_PRODUCT_MEN200 0x9005
4299
4300 /* some devices interfaces need special handling due to a number of reasons */
4301 enum option_blacklist_reason {
4302 @@ -549,6 +563,10 @@ static const struct option_blacklist_info net_intf1_blacklist = {
4303 .reserved = BIT(1),
4304 };
4305
4306 +static const struct option_blacklist_info net_intf2_blacklist = {
4307 + .reserved = BIT(2),
4308 +};
4309 +
4310 static const struct option_blacklist_info net_intf3_blacklist = {
4311 .reserved = BIT(3),
4312 };
4313 @@ -734,6 +752,8 @@ static const struct usb_device_id option_ids[] = {
4314 { USB_DEVICE(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_G1) },
4315 { USB_DEVICE(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_G1_M) },
4316 { USB_DEVICE(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_G2) },
4317 + /* Novatel Ovation MC551 a.k.a. Verizon USB551L */
4318 + { USB_DEVICE_AND_INTERFACE_INFO(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_MC551, 0xff, 0xff, 0xff) },
4319
4320 { USB_DEVICE(AMOI_VENDOR_ID, AMOI_PRODUCT_H01) },
4321 { USB_DEVICE(AMOI_VENDOR_ID, AMOI_PRODUCT_H01A) },
4322 @@ -1092,6 +1112,8 @@ static const struct usb_device_id option_ids[] = {
4323 { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1298, 0xff, 0xff, 0xff) },
4324 { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1299, 0xff, 0xff, 0xff) },
4325 { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1300, 0xff, 0xff, 0xff) },
4326 + { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1402, 0xff, 0xff, 0xff),
4327 + .driver_info = (kernel_ulong_t)&net_intf2_blacklist },
4328 { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x2002, 0xff,
4329 0xff, 0xff), .driver_info = (kernel_ulong_t)&zte_k3765_z_blacklist },
4330 { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x2003, 0xff, 0xff, 0xff) },
4331 @@ -1233,6 +1255,18 @@ static const struct usb_device_id option_ids[] = {
4332 { USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, 0x00a1, 0xff, 0x02, 0x01) },
4333 { USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, 0x00a2, 0xff, 0x00, 0x00) },
4334 { USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, 0x00a2, 0xff, 0x02, 0x01) }, /* MediaTek MT6276M modem & app port */
4335 + { USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, MEDIATEK_PRODUCT_DC_1COM, 0x0a, 0x00, 0x00) },
4336 + { USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, MEDIATEK_PRODUCT_DC_5COM, 0xff, 0x02, 0x01) },
4337 + { USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, MEDIATEK_PRODUCT_DC_5COM, 0xff, 0x00, 0x00) },
4338 + { USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, MEDIATEK_PRODUCT_DC_4COM, 0xff, 0x02, 0x01) },
4339 + { USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, MEDIATEK_PRODUCT_DC_4COM, 0xff, 0x00, 0x00) },
4340 + { USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, MEDIATEK_PRODUCT_7208_1COM, 0x02, 0x00, 0x00) },
4341 + { USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, MEDIATEK_PRODUCT_7208_2COM, 0x02, 0x02, 0x01) },
4342 + { USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, MEDIATEK_PRODUCT_FP_1COM, 0x0a, 0x00, 0x00) },
4343 + { USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, MEDIATEK_PRODUCT_FP_2COM, 0x0a, 0x00, 0x00) },
4344 + { USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, MEDIATEK_PRODUCT_FPDC_1COM, 0x0a, 0x00, 0x00) },
4345 + { USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, MEDIATEK_PRODUCT_FPDC_2COM, 0x0a, 0x00, 0x00) },
4346 + { USB_DEVICE(CELLIENT_VENDOR_ID, CELLIENT_PRODUCT_MEN200) },
4347 { } /* Terminating entry */
4348 };
4349 MODULE_DEVICE_TABLE(usb, option_ids);
4350 diff --git a/drivers/usb/storage/scsiglue.c b/drivers/usb/storage/scsiglue.c
4351 index a324a5d..11418da 100644
4352 --- a/drivers/usb/storage/scsiglue.c
4353 +++ b/drivers/usb/storage/scsiglue.c
4354 @@ -202,6 +202,12 @@ static int slave_configure(struct scsi_device *sdev)
4355 if (us->fflags & US_FL_NO_READ_CAPACITY_16)
4356 sdev->no_read_capacity_16 = 1;
4357
4358 + /*
4359 + * Many devices do not respond properly to READ_CAPACITY_16.
4360 + * Tell the SCSI layer to try READ_CAPACITY_10 first.
4361 + */
4362 + sdev->try_rc_10_first = 1;
4363 +
4364 /* assume SPC3 or latter devices support sense size > 18 */
4365 if (sdev->scsi_level > SCSI_SPC_2)
4366 us->fflags |= US_FL_SANE_SENSE;
4367 diff --git a/drivers/usb/storage/unusual_devs.h b/drivers/usb/storage/unusual_devs.h
4368 index f4d3e1a..8f3cbb8 100644
4369 --- a/drivers/usb/storage/unusual_devs.h
4370 +++ b/drivers/usb/storage/unusual_devs.h
4371 @@ -1107,13 +1107,6 @@ UNUSUAL_DEV( 0x090a, 0x1200, 0x0000, 0x9999,
4372 USB_SC_RBC, USB_PR_BULK, NULL,
4373 0 ),
4374
4375 -/* Feiya QDI U2 DISK, reported by Hans de Goede <hdegoede@redhat.com> */
4376 -UNUSUAL_DEV( 0x090c, 0x1000, 0x0000, 0xffff,
4377 - "Feiya",
4378 - "QDI U2 DISK",
4379 - USB_SC_DEVICE, USB_PR_DEVICE, NULL,
4380 - US_FL_NO_READ_CAPACITY_16 ),
4381 -
4382 /* aeb */
4383 UNUSUAL_DEV( 0x090c, 0x1132, 0x0000, 0xffff,
4384 "Feiya",
4385 diff --git a/drivers/video/omap2/dss/apply.c b/drivers/video/omap2/dss/apply.c
4386 index b10b3bc..cb19af2 100644
4387 --- a/drivers/video/omap2/dss/apply.c
4388 +++ b/drivers/video/omap2/dss/apply.c
4389 @@ -927,7 +927,7 @@ static void dss_ovl_setup_fifo(struct omap_overlay *ovl,
4390 dssdev = ovl->manager->device;
4391
4392 dispc_ovl_compute_fifo_thresholds(ovl->id, &fifo_low, &fifo_high,
4393 - use_fifo_merge);
4394 + use_fifo_merge, ovl_manual_update(ovl));
4395
4396 dss_apply_ovl_fifo_thresholds(ovl, fifo_low, fifo_high);
4397 }
4398 diff --git a/drivers/video/omap2/dss/dispc.c b/drivers/video/omap2/dss/dispc.c
4399 index ee30937..c4d0e44 100644
4400 --- a/drivers/video/omap2/dss/dispc.c
4401 +++ b/drivers/video/omap2/dss/dispc.c
4402 @@ -1063,7 +1063,8 @@ void dispc_enable_fifomerge(bool enable)
4403 }
4404
4405 void dispc_ovl_compute_fifo_thresholds(enum omap_plane plane,
4406 - u32 *fifo_low, u32 *fifo_high, bool use_fifomerge)
4407 + u32 *fifo_low, u32 *fifo_high, bool use_fifomerge,
4408 + bool manual_update)
4409 {
4410 /*
4411 * All sizes are in bytes. Both the buffer and burst are made of
4412 @@ -1091,7 +1092,7 @@ void dispc_ovl_compute_fifo_thresholds(enum omap_plane plane,
4413 * combined fifo size
4414 */
4415
4416 - if (dss_has_feature(FEAT_OMAP3_DSI_FIFO_BUG)) {
4417 + if (manual_update && dss_has_feature(FEAT_OMAP3_DSI_FIFO_BUG)) {
4418 *fifo_low = ovl_fifo_size - burst_size * 2;
4419 *fifo_high = total_fifo_size - burst_size;
4420 } else {
4421 diff --git a/drivers/video/omap2/dss/dss.h b/drivers/video/omap2/dss/dss.h
4422 index d4b3dff..d0638da 100644
4423 --- a/drivers/video/omap2/dss/dss.h
4424 +++ b/drivers/video/omap2/dss/dss.h
4425 @@ -424,7 +424,8 @@ int dispc_calc_clock_rates(unsigned long dispc_fclk_rate,
4426
4427 void dispc_ovl_set_fifo_threshold(enum omap_plane plane, u32 low, u32 high);
4428 void dispc_ovl_compute_fifo_thresholds(enum omap_plane plane,
4429 - u32 *fifo_low, u32 *fifo_high, bool use_fifomerge);
4430 + u32 *fifo_low, u32 *fifo_high, bool use_fifomerge,
4431 + bool manual_update);
4432 int dispc_ovl_setup(enum omap_plane plane, struct omap_overlay_info *oi,
4433 bool ilace, bool replication);
4434 int dispc_ovl_enable(enum omap_plane plane, bool enable);
4435 diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
4436 index eb1ae90..dce89da 100644
4437 --- a/fs/btrfs/tree-log.c
4438 +++ b/fs/btrfs/tree-log.c
4439 @@ -690,6 +690,8 @@ static noinline int drop_one_dir_item(struct btrfs_trans_handle *trans,
4440 kfree(name);
4441
4442 iput(inode);
4443 +
4444 + btrfs_run_delayed_items(trans, root);
4445 return ret;
4446 }
4447
4448 @@ -895,6 +897,7 @@ again:
4449 ret = btrfs_unlink_inode(trans, root, dir,
4450 inode, victim_name,
4451 victim_name_len);
4452 + btrfs_run_delayed_items(trans, root);
4453 }
4454 kfree(victim_name);
4455 ptr = (unsigned long)(victim_ref + 1) + victim_name_len;
4456 @@ -1475,6 +1478,9 @@ again:
4457 ret = btrfs_unlink_inode(trans, root, dir, inode,
4458 name, name_len);
4459 BUG_ON(ret);
4460 +
4461 + btrfs_run_delayed_items(trans, root);
4462 +
4463 kfree(name);
4464 iput(inode);
4465
4466 diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
4467 index e0b56d7..402fa0f 100644
4468 --- a/fs/cifs/connect.c
4469 +++ b/fs/cifs/connect.c
4470 @@ -1585,24 +1585,26 @@ cifs_parse_mount_options(const char *mountdata, const char *devname,
4471 * If yes, we have encountered a double deliminator
4472 * reset the NULL character to the deliminator
4473 */
4474 - if (tmp_end < end && tmp_end[1] == delim)
4475 + if (tmp_end < end && tmp_end[1] == delim) {
4476 tmp_end[0] = delim;
4477
4478 - /* Keep iterating until we get to a single deliminator
4479 - * OR the end
4480 - */
4481 - while ((tmp_end = strchr(tmp_end, delim)) != NULL &&
4482 - (tmp_end[1] == delim)) {
4483 - tmp_end = (char *) &tmp_end[2];
4484 - }
4485 + /* Keep iterating until we get to a single
4486 + * delimiter OR the end
4487 + */
4488 + while ((tmp_end = strchr(tmp_end, delim))
4489 + != NULL && (tmp_end[1] == delim)) {
4490 + tmp_end = (char *) &tmp_end[2];
4491 + }
4492
4493 - /* Reset var options to point to next element */
4494 - if (tmp_end) {
4495 - tmp_end[0] = '\0';
4496 - options = (char *) &tmp_end[1];
4497 - } else
4498 - /* Reached the end of the mount option string */
4499 - options = end;
4500 + /* Reset var options to point to next element */
4501 + if (tmp_end) {
4502 + tmp_end[0] = '\0';
4503 + options = (char *) &tmp_end[1];
4504 + } else
4505 + /* Reached the end of the mount option
4506 + * string */
4507 + options = end;
4508 + }
4509
4510 /* Now build new password string */
4511 temp_len = strlen(value);
4512 @@ -3396,18 +3398,15 @@ cifs_negotiate_rsize(struct cifs_tcon *tcon, struct smb_vol *pvolume_info)
4513 * MS-CIFS indicates that servers are only limited by the client's
4514 * bufsize for reads, testing against win98se shows that it throws
4515 * INVALID_PARAMETER errors if you try to request too large a read.
4516 + * OS/2 just sends back short reads.
4517 *
4518 - * If the server advertises a MaxBufferSize of less than one page,
4519 - * assume that it also can't satisfy reads larger than that either.
4520 - *
4521 - * FIXME: Is there a better heuristic for this?
4522 + * If the server doesn't advertise CAP_LARGE_READ_X, then assume that
4523 + * it can't handle a read request larger than its MaxBufferSize either.
4524 */
4525 if (tcon->unix_ext && (unix_cap & CIFS_UNIX_LARGE_READ_CAP))
4526 defsize = CIFS_DEFAULT_IOSIZE;
4527 else if (server->capabilities & CAP_LARGE_READ_X)
4528 defsize = CIFS_DEFAULT_NON_POSIX_RSIZE;
4529 - else if (server->maxBuf >= PAGE_CACHE_SIZE)
4530 - defsize = CIFSMaxBufSize;
4531 else
4532 defsize = server->maxBuf - sizeof(READ_RSP);
4533
4534 diff --git a/fs/ecryptfs/kthread.c b/fs/ecryptfs/kthread.c
4535 index 69f994a..0dbe58a 100644
4536 --- a/fs/ecryptfs/kthread.c
4537 +++ b/fs/ecryptfs/kthread.c
4538 @@ -149,7 +149,7 @@ int ecryptfs_privileged_open(struct file **lower_file,
4539 (*lower_file) = dentry_open(lower_dentry, lower_mnt, flags, cred);
4540 if (!IS_ERR(*lower_file))
4541 goto out;
4542 - if (flags & O_RDONLY) {
4543 + if ((flags & O_ACCMODE) == O_RDONLY) {
4544 rc = PTR_ERR((*lower_file));
4545 goto out;
4546 }
4547 diff --git a/fs/ecryptfs/miscdev.c b/fs/ecryptfs/miscdev.c
4548 index 3a06f40..c0038f6 100644
4549 --- a/fs/ecryptfs/miscdev.c
4550 +++ b/fs/ecryptfs/miscdev.c
4551 @@ -49,7 +49,10 @@ ecryptfs_miscdev_poll(struct file *file, poll_table *pt)
4552 mutex_lock(&ecryptfs_daemon_hash_mux);
4553 /* TODO: Just use file->private_data? */
4554 rc = ecryptfs_find_daemon_by_euid(&daemon, euid, current_user_ns());
4555 - BUG_ON(rc || !daemon);
4556 + if (rc || !daemon) {
4557 + mutex_unlock(&ecryptfs_daemon_hash_mux);
4558 + return -EINVAL;
4559 + }
4560 mutex_lock(&daemon->mux);
4561 mutex_unlock(&ecryptfs_daemon_hash_mux);
4562 if (daemon->flags & ECRYPTFS_DAEMON_ZOMBIE) {
4563 @@ -122,6 +125,7 @@ ecryptfs_miscdev_open(struct inode *inode, struct file *file)
4564 goto out_unlock_daemon;
4565 }
4566 daemon->flags |= ECRYPTFS_DAEMON_MISCDEV_OPEN;
4567 + file->private_data = daemon;
4568 atomic_inc(&ecryptfs_num_miscdev_opens);
4569 out_unlock_daemon:
4570 mutex_unlock(&daemon->mux);
4571 @@ -152,9 +156,9 @@ ecryptfs_miscdev_release(struct inode *inode, struct file *file)
4572
4573 mutex_lock(&ecryptfs_daemon_hash_mux);
4574 rc = ecryptfs_find_daemon_by_euid(&daemon, euid, current_user_ns());
4575 - BUG_ON(rc || !daemon);
4576 + if (rc || !daemon)
4577 + daemon = file->private_data;
4578 mutex_lock(&daemon->mux);
4579 - BUG_ON(daemon->pid != task_pid(current));
4580 BUG_ON(!(daemon->flags & ECRYPTFS_DAEMON_MISCDEV_OPEN));
4581 daemon->flags &= ~ECRYPTFS_DAEMON_MISCDEV_OPEN;
4582 atomic_dec(&ecryptfs_num_miscdev_opens);
4583 @@ -191,31 +195,32 @@ int ecryptfs_send_miscdev(char *data, size_t data_size,
4584 struct ecryptfs_msg_ctx *msg_ctx, u8 msg_type,
4585 u16 msg_flags, struct ecryptfs_daemon *daemon)
4586 {
4587 - int rc = 0;
4588 + struct ecryptfs_message *msg;
4589
4590 - mutex_lock(&msg_ctx->mux);
4591 - msg_ctx->msg = kmalloc((sizeof(*msg_ctx->msg) + data_size),
4592 - GFP_KERNEL);
4593 - if (!msg_ctx->msg) {
4594 - rc = -ENOMEM;
4595 + msg = kmalloc((sizeof(*msg) + data_size), GFP_KERNEL);
4596 + if (!msg) {
4597 printk(KERN_ERR "%s: Out of memory whilst attempting "
4598 "to kmalloc(%zd, GFP_KERNEL)\n", __func__,
4599 - (sizeof(*msg_ctx->msg) + data_size));
4600 - goto out_unlock;
4601 + (sizeof(*msg) + data_size));
4602 + return -ENOMEM;
4603 }
4604 +
4605 + mutex_lock(&msg_ctx->mux);
4606 + msg_ctx->msg = msg;
4607 msg_ctx->msg->index = msg_ctx->index;
4608 msg_ctx->msg->data_len = data_size;
4609 msg_ctx->type = msg_type;
4610 memcpy(msg_ctx->msg->data, data, data_size);
4611 msg_ctx->msg_size = (sizeof(*msg_ctx->msg) + data_size);
4612 - mutex_lock(&daemon->mux);
4613 list_add_tail(&msg_ctx->daemon_out_list, &daemon->msg_ctx_out_queue);
4614 + mutex_unlock(&msg_ctx->mux);
4615 +
4616 + mutex_lock(&daemon->mux);
4617 daemon->num_queued_msg_ctx++;
4618 wake_up_interruptible(&daemon->wait);
4619 mutex_unlock(&daemon->mux);
4620 -out_unlock:
4621 - mutex_unlock(&msg_ctx->mux);
4622 - return rc;
4623 +
4624 + return 0;
4625 }
4626
4627 /*
4628 @@ -269,8 +274,16 @@ ecryptfs_miscdev_read(struct file *file, char __user *buf, size_t count,
4629 mutex_lock(&ecryptfs_daemon_hash_mux);
4630 /* TODO: Just use file->private_data? */
4631 rc = ecryptfs_find_daemon_by_euid(&daemon, euid, current_user_ns());
4632 - BUG_ON(rc || !daemon);
4633 + if (rc || !daemon) {
4634 + mutex_unlock(&ecryptfs_daemon_hash_mux);
4635 + return -EINVAL;
4636 + }
4637 mutex_lock(&daemon->mux);
4638 + if (task_pid(current) != daemon->pid) {
4639 + mutex_unlock(&daemon->mux);
4640 + mutex_unlock(&ecryptfs_daemon_hash_mux);
4641 + return -EPERM;
4642 + }
4643 if (daemon->flags & ECRYPTFS_DAEMON_ZOMBIE) {
4644 rc = 0;
4645 mutex_unlock(&ecryptfs_daemon_hash_mux);
4646 @@ -307,9 +320,6 @@ check_list:
4647 * message from the queue; try again */
4648 goto check_list;
4649 }
4650 - BUG_ON(euid != daemon->euid);
4651 - BUG_ON(current_user_ns() != daemon->user_ns);
4652 - BUG_ON(task_pid(current) != daemon->pid);
4653 msg_ctx = list_first_entry(&daemon->msg_ctx_out_queue,
4654 struct ecryptfs_msg_ctx, daemon_out_list);
4655 BUG_ON(!msg_ctx);
4656 diff --git a/fs/exec.c b/fs/exec.c
4657 index b1fd202..29e5f84 100644
4658 --- a/fs/exec.c
4659 +++ b/fs/exec.c
4660 @@ -823,10 +823,10 @@ static int exec_mmap(struct mm_struct *mm)
4661 /* Notify parent that we're no longer interested in the old VM */
4662 tsk = current;
4663 old_mm = current->mm;
4664 - sync_mm_rss(old_mm);
4665 mm_release(tsk, old_mm);
4666
4667 if (old_mm) {
4668 + sync_mm_rss(old_mm);
4669 /*
4670 * Make sure that if there is a core dump in progress
4671 * for the old mm, we get out and die instead of going
4672 diff --git a/fs/lockd/clntlock.c b/fs/lockd/clntlock.c
4673 index ba1dc2e..ca0a080 100644
4674 --- a/fs/lockd/clntlock.c
4675 +++ b/fs/lockd/clntlock.c
4676 @@ -56,7 +56,7 @@ struct nlm_host *nlmclnt_init(const struct nlmclnt_initdata *nlm_init)
4677 u32 nlm_version = (nlm_init->nfs_version == 2) ? 1 : 4;
4678 int status;
4679
4680 - status = lockd_up();
4681 + status = lockd_up(nlm_init->net);
4682 if (status < 0)
4683 return ERR_PTR(status);
4684
4685 @@ -65,7 +65,7 @@ struct nlm_host *nlmclnt_init(const struct nlmclnt_initdata *nlm_init)
4686 nlm_init->hostname, nlm_init->noresvport,
4687 nlm_init->net);
4688 if (host == NULL) {
4689 - lockd_down();
4690 + lockd_down(nlm_init->net);
4691 return ERR_PTR(-ENOLCK);
4692 }
4693
4694 @@ -80,8 +80,10 @@ EXPORT_SYMBOL_GPL(nlmclnt_init);
4695 */
4696 void nlmclnt_done(struct nlm_host *host)
4697 {
4698 + struct net *net = host->net;
4699 +
4700 nlmclnt_release_host(host);
4701 - lockd_down();
4702 + lockd_down(net);
4703 }
4704 EXPORT_SYMBOL_GPL(nlmclnt_done);
4705
4706 @@ -220,11 +222,12 @@ reclaimer(void *ptr)
4707 struct nlm_wait *block;
4708 struct file_lock *fl, *next;
4709 u32 nsmstate;
4710 + struct net *net = host->net;
4711
4712 allow_signal(SIGKILL);
4713
4714 down_write(&host->h_rwsem);
4715 - lockd_up(); /* note: this cannot fail as lockd is already running */
4716 + lockd_up(net); /* note: this cannot fail as lockd is already running */
4717
4718 dprintk("lockd: reclaiming locks for host %s\n", host->h_name);
4719
4720 @@ -275,6 +278,6 @@ restart:
4721
4722 /* Release host handle after use */
4723 nlmclnt_release_host(host);
4724 - lockd_down();
4725 + lockd_down(net);
4726 return 0;
4727 }
4728 diff --git a/fs/lockd/svc.c b/fs/lockd/svc.c
4729 index f49b9af..3250f28 100644
4730 --- a/fs/lockd/svc.c
4731 +++ b/fs/lockd/svc.c
4732 @@ -257,7 +257,7 @@ static int lockd_up_net(struct net *net)
4733 struct svc_serv *serv = nlmsvc_rqst->rq_server;
4734 int error;
4735
4736 - if (ln->nlmsvc_users)
4737 + if (ln->nlmsvc_users++)
4738 return 0;
4739
4740 error = svc_rpcb_setup(serv, net);
4741 @@ -272,6 +272,7 @@ static int lockd_up_net(struct net *net)
4742 err_socks:
4743 svc_rpcb_cleanup(serv, net);
4744 err_rpcb:
4745 + ln->nlmsvc_users--;
4746 return error;
4747 }
4748
4749 @@ -295,11 +296,11 @@ static void lockd_down_net(struct net *net)
4750 /*
4751 * Bring up the lockd process if it's not already up.
4752 */
4753 -int lockd_up(void)
4754 +int lockd_up(struct net *net)
4755 {
4756 struct svc_serv *serv;
4757 int error = 0;
4758 - struct net *net = current->nsproxy->net_ns;
4759 + struct lockd_net *ln = net_generic(net, lockd_net_id);
4760
4761 mutex_lock(&nlmsvc_mutex);
4762 /*
4763 @@ -325,9 +326,17 @@ int lockd_up(void)
4764 goto out;
4765 }
4766
4767 + error = svc_bind(serv, net);
4768 + if (error < 0) {
4769 + printk(KERN_WARNING "lockd_up: bind service failed\n");
4770 + goto destroy_and_out;
4771 + }
4772 +
4773 + ln->nlmsvc_users++;
4774 +
4775 error = make_socks(serv, net);
4776 if (error < 0)
4777 - goto destroy_and_out;
4778 + goto err_start;
4779
4780 /*
4781 * Create the kernel thread and wait for it to start.
4782 @@ -339,7 +348,7 @@ int lockd_up(void)
4783 printk(KERN_WARNING
4784 "lockd_up: svc_rqst allocation failed, error=%d\n",
4785 error);
4786 - goto destroy_and_out;
4787 + goto err_start;
4788 }
4789
4790 svc_sock_update_bufs(serv);
4791 @@ -353,7 +362,7 @@ int lockd_up(void)
4792 nlmsvc_rqst = NULL;
4793 printk(KERN_WARNING
4794 "lockd_up: kthread_run failed, error=%d\n", error);
4795 - goto destroy_and_out;
4796 + goto err_start;
4797 }
4798
4799 /*
4800 @@ -363,14 +372,14 @@ int lockd_up(void)
4801 destroy_and_out:
4802 svc_destroy(serv);
4803 out:
4804 - if (!error) {
4805 - struct lockd_net *ln = net_generic(net, lockd_net_id);
4806 -
4807 - ln->nlmsvc_users++;
4808 + if (!error)
4809 nlmsvc_users++;
4810 - }
4811 mutex_unlock(&nlmsvc_mutex);
4812 return error;
4813 +
4814 +err_start:
4815 + lockd_down_net(net);
4816 + goto destroy_and_out;
4817 }
4818 EXPORT_SYMBOL_GPL(lockd_up);
4819
4820 @@ -378,14 +387,13 @@ EXPORT_SYMBOL_GPL(lockd_up);
4821 * Decrement the user count and bring down lockd if we're the last.
4822 */
4823 void
4824 -lockd_down(void)
4825 +lockd_down(struct net *net)
4826 {
4827 mutex_lock(&nlmsvc_mutex);
4828 + lockd_down_net(net);
4829 if (nlmsvc_users) {
4830 - if (--nlmsvc_users) {
4831 - lockd_down_net(current->nsproxy->net_ns);
4832 + if (--nlmsvc_users)
4833 goto out;
4834 - }
4835 } else {
4836 printk(KERN_ERR "lockd_down: no users! task=%p\n",
4837 nlmsvc_task);
4838 diff --git a/fs/nfs/callback.c b/fs/nfs/callback.c
4839 index eb95f50..38a44c6 100644
4840 --- a/fs/nfs/callback.c
4841 +++ b/fs/nfs/callback.c
4842 @@ -106,7 +106,7 @@ nfs4_callback_up(struct svc_serv *serv, struct rpc_xprt *xprt)
4843 {
4844 int ret;
4845
4846 - ret = svc_create_xprt(serv, "tcp", xprt->xprt_net, PF_INET,
4847 + ret = svc_create_xprt(serv, "tcp", &init_net, PF_INET,
4848 nfs_callback_set_tcpport, SVC_SOCK_ANONYMOUS);
4849 if (ret <= 0)
4850 goto out_err;
4851 @@ -114,7 +114,7 @@ nfs4_callback_up(struct svc_serv *serv, struct rpc_xprt *xprt)
4852 dprintk("NFS: Callback listener port = %u (af %u)\n",
4853 nfs_callback_tcpport, PF_INET);
4854
4855 - ret = svc_create_xprt(serv, "tcp", xprt->xprt_net, PF_INET6,
4856 + ret = svc_create_xprt(serv, "tcp", &init_net, PF_INET6,
4857 nfs_callback_set_tcpport, SVC_SOCK_ANONYMOUS);
4858 if (ret > 0) {
4859 nfs_callback_tcpport6 = ret;
4860 @@ -183,7 +183,7 @@ nfs41_callback_up(struct svc_serv *serv, struct rpc_xprt *xprt)
4861 * fore channel connection.
4862 * Returns the input port (0) and sets the svc_serv bc_xprt on success
4863 */
4864 - ret = svc_create_xprt(serv, "tcp-bc", xprt->xprt_net, PF_INET, 0,
4865 + ret = svc_create_xprt(serv, "tcp-bc", &init_net, PF_INET, 0,
4866 SVC_SOCK_ANONYMOUS);
4867 if (ret < 0) {
4868 rqstp = ERR_PTR(ret);
4869 @@ -253,6 +253,7 @@ int nfs_callback_up(u32 minorversion, struct rpc_xprt *xprt)
4870 char svc_name[12];
4871 int ret = 0;
4872 int minorversion_setup;
4873 + struct net *net = &init_net;
4874
4875 mutex_lock(&nfs_callback_mutex);
4876 if (cb_info->users++ || cb_info->task != NULL) {
4877 @@ -265,6 +266,12 @@ int nfs_callback_up(u32 minorversion, struct rpc_xprt *xprt)
4878 goto out_err;
4879 }
4880
4881 + ret = svc_bind(serv, net);
4882 + if (ret < 0) {
4883 + printk(KERN_WARNING "NFS: bind callback service failed\n");
4884 + goto out_err;
4885 + }
4886 +
4887 minorversion_setup = nfs_minorversion_callback_svc_setup(minorversion,
4888 serv, xprt, &rqstp, &callback_svc);
4889 if (!minorversion_setup) {
4890 @@ -306,6 +313,8 @@ out_err:
4891 dprintk("NFS: Couldn't create callback socket or server thread; "
4892 "err = %d\n", ret);
4893 cb_info->users--;
4894 + if (serv)
4895 + svc_shutdown_net(serv, net);
4896 goto out;
4897 }
4898
4899 @@ -320,6 +329,7 @@ void nfs_callback_down(int minorversion)
4900 cb_info->users--;
4901 if (cb_info->users == 0 && cb_info->task != NULL) {
4902 kthread_stop(cb_info->task);
4903 + svc_shutdown_net(cb_info->serv, &init_net);
4904 svc_exit_thread(cb_info->rqst);
4905 cb_info->serv = NULL;
4906 cb_info->rqst = NULL;
4907 diff --git a/fs/nfs/idmap.c b/fs/nfs/idmap.c
4908 index 3e8edbe..93aa3a4 100644
4909 --- a/fs/nfs/idmap.c
4910 +++ b/fs/nfs/idmap.c
4911 @@ -57,6 +57,11 @@ unsigned int nfs_idmap_cache_timeout = 600;
4912 static const struct cred *id_resolver_cache;
4913 static struct key_type key_type_id_resolver_legacy;
4914
4915 +struct idmap {
4916 + struct rpc_pipe *idmap_pipe;
4917 + struct key_construction *idmap_key_cons;
4918 + struct mutex idmap_mutex;
4919 +};
4920
4921 /**
4922 * nfs_fattr_init_names - initialise the nfs_fattr owner_name/group_name fields
4923 @@ -310,9 +315,11 @@ static ssize_t nfs_idmap_get_key(const char *name, size_t namelen,
4924 name, namelen, type, data,
4925 data_size, NULL);
4926 if (ret < 0) {
4927 + mutex_lock(&idmap->idmap_mutex);
4928 ret = nfs_idmap_request_key(&key_type_id_resolver_legacy,
4929 name, namelen, type, data,
4930 data_size, idmap);
4931 + mutex_unlock(&idmap->idmap_mutex);
4932 }
4933 return ret;
4934 }
4935 @@ -354,11 +361,6 @@ static int nfs_idmap_lookup_id(const char *name, size_t namelen, const char *typ
4936 /* idmap classic begins here */
4937 module_param(nfs_idmap_cache_timeout, int, 0644);
4938
4939 -struct idmap {
4940 - struct rpc_pipe *idmap_pipe;
4941 - struct key_construction *idmap_key_cons;
4942 -};
4943 -
4944 enum {
4945 Opt_find_uid, Opt_find_gid, Opt_find_user, Opt_find_group, Opt_find_err
4946 };
4947 @@ -469,6 +471,7 @@ nfs_idmap_new(struct nfs_client *clp)
4948 return error;
4949 }
4950 idmap->idmap_pipe = pipe;
4951 + mutex_init(&idmap->idmap_mutex);
4952
4953 clp->cl_idmap = idmap;
4954 return 0;
4955 diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
4956 index 2c53be6..3ab12eb 100644
4957 --- a/fs/nfsd/nfsctl.c
4958 +++ b/fs/nfsd/nfsctl.c
4959 @@ -651,6 +651,7 @@ static ssize_t __write_ports_addfd(char *buf)
4960 {
4961 char *mesg = buf;
4962 int fd, err;
4963 + struct net *net = &init_net;
4964
4965 err = get_int(&mesg, &fd);
4966 if (err != 0 || fd < 0)
4967 @@ -662,6 +663,8 @@ static ssize_t __write_ports_addfd(char *buf)
4968
4969 err = svc_addsock(nfsd_serv, fd, buf, SIMPLE_TRANSACTION_LIMIT);
4970 if (err < 0) {
4971 + if (nfsd_serv->sv_nrthreads == 1)
4972 + svc_shutdown_net(nfsd_serv, net);
4973 svc_destroy(nfsd_serv);
4974 return err;
4975 }
4976 @@ -699,6 +702,7 @@ static ssize_t __write_ports_addxprt(char *buf)
4977 char transport[16];
4978 struct svc_xprt *xprt;
4979 int port, err;
4980 + struct net *net = &init_net;
4981
4982 if (sscanf(buf, "%15s %4u", transport, &port) != 2)
4983 return -EINVAL;
4984 @@ -710,12 +714,12 @@ static ssize_t __write_ports_addxprt(char *buf)
4985 if (err != 0)
4986 return err;
4987
4988 - err = svc_create_xprt(nfsd_serv, transport, &init_net,
4989 + err = svc_create_xprt(nfsd_serv, transport, net,
4990 PF_INET, port, SVC_SOCK_ANONYMOUS);
4991 if (err < 0)
4992 goto out_err;
4993
4994 - err = svc_create_xprt(nfsd_serv, transport, &init_net,
4995 + err = svc_create_xprt(nfsd_serv, transport, net,
4996 PF_INET6, port, SVC_SOCK_ANONYMOUS);
4997 if (err < 0 && err != -EAFNOSUPPORT)
4998 goto out_close;
4999 @@ -724,12 +728,14 @@ static ssize_t __write_ports_addxprt(char *buf)
5000 nfsd_serv->sv_nrthreads--;
5001 return 0;
5002 out_close:
5003 - xprt = svc_find_xprt(nfsd_serv, transport, &init_net, PF_INET, port);
5004 + xprt = svc_find_xprt(nfsd_serv, transport, net, PF_INET, port);
5005 if (xprt != NULL) {
5006 svc_close_xprt(xprt);
5007 svc_xprt_put(xprt);
5008 }
5009 out_err:
5010 + if (nfsd_serv->sv_nrthreads == 1)
5011 + svc_shutdown_net(nfsd_serv, net);
5012 svc_destroy(nfsd_serv);
5013 return err;
5014 }
5015 diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
5016 index 28dfad3..bcda12a 100644
5017 --- a/fs/nfsd/nfssvc.c
5018 +++ b/fs/nfsd/nfssvc.c
5019 @@ -11,6 +11,7 @@
5020 #include <linux/module.h>
5021 #include <linux/fs_struct.h>
5022 #include <linux/swap.h>
5023 +#include <linux/nsproxy.h>
5024
5025 #include <linux/sunrpc/stats.h>
5026 #include <linux/sunrpc/svcsock.h>
5027 @@ -220,7 +221,7 @@ static int nfsd_startup(unsigned short port, int nrservs)
5028 ret = nfsd_init_socks(port);
5029 if (ret)
5030 goto out_racache;
5031 - ret = lockd_up();
5032 + ret = lockd_up(&init_net);
5033 if (ret)
5034 goto out_racache;
5035 ret = nfs4_state_start();
5036 @@ -229,7 +230,7 @@ static int nfsd_startup(unsigned short port, int nrservs)
5037 nfsd_up = true;
5038 return 0;
5039 out_lockd:
5040 - lockd_down();
5041 + lockd_down(&init_net);
5042 out_racache:
5043 nfsd_racache_shutdown();
5044 return ret;
5045 @@ -246,7 +247,7 @@ static void nfsd_shutdown(void)
5046 if (!nfsd_up)
5047 return;
5048 nfs4_state_shutdown();
5049 - lockd_down();
5050 + lockd_down(&init_net);
5051 nfsd_racache_shutdown();
5052 nfsd_up = false;
5053 }
5054 @@ -330,6 +331,8 @@ static int nfsd_get_default_max_blksize(void)
5055
5056 int nfsd_create_serv(void)
5057 {
5058 + int error;
5059 +
5060 WARN_ON(!mutex_is_locked(&nfsd_mutex));
5061 if (nfsd_serv) {
5062 svc_get(nfsd_serv);
5063 @@ -343,6 +346,12 @@ int nfsd_create_serv(void)
5064 if (nfsd_serv == NULL)
5065 return -ENOMEM;
5066
5067 + error = svc_bind(nfsd_serv, current->nsproxy->net_ns);
5068 + if (error < 0) {
5069 + svc_destroy(nfsd_serv);
5070 + return error;
5071 + }
5072 +
5073 set_max_drc();
5074 do_gettimeofday(&nfssvc_boot); /* record boot time */
5075 return 0;
5076 @@ -373,6 +382,7 @@ int nfsd_set_nrthreads(int n, int *nthreads)
5077 int i = 0;
5078 int tot = 0;
5079 int err = 0;
5080 + struct net *net = &init_net;
5081
5082 WARN_ON(!mutex_is_locked(&nfsd_mutex));
5083
5084 @@ -417,6 +427,9 @@ int nfsd_set_nrthreads(int n, int *nthreads)
5085 if (err)
5086 break;
5087 }
5088 +
5089 + if (nfsd_serv->sv_nrthreads == 1)
5090 + svc_shutdown_net(nfsd_serv, net);
5091 svc_destroy(nfsd_serv);
5092
5093 return err;
5094 @@ -432,6 +445,7 @@ nfsd_svc(unsigned short port, int nrservs)
5095 {
5096 int error;
5097 bool nfsd_up_before;
5098 + struct net *net = &init_net;
5099
5100 mutex_lock(&nfsd_mutex);
5101 dprintk("nfsd: creating service\n");
5102 @@ -464,6 +478,8 @@ out_shutdown:
5103 if (error < 0 && !nfsd_up_before)
5104 nfsd_shutdown();
5105 out_destroy:
5106 + if (nfsd_serv->sv_nrthreads == 1)
5107 + svc_shutdown_net(nfsd_serv, net);
5108 svc_destroy(nfsd_serv); /* Release server */
5109 out:
5110 mutex_unlock(&nfsd_mutex);
5111 @@ -547,6 +563,9 @@ nfsd(void *vrqstp)
5112 nfsdstats.th_cnt --;
5113
5114 out:
5115 + if (rqstp->rq_server->sv_nrthreads == 1)
5116 + svc_shutdown_net(rqstp->rq_server, &init_net);
5117 +
5118 /* Release the thread */
5119 svc_exit_thread(rqstp);
5120
5121 @@ -659,8 +678,12 @@ int nfsd_pool_stats_open(struct inode *inode, struct file *file)
5122 int nfsd_pool_stats_release(struct inode *inode, struct file *file)
5123 {
5124 int ret = seq_release(inode, file);
5125 + struct net *net = &init_net;
5126 +
5127 mutex_lock(&nfsd_mutex);
5128 /* this function really, really should have been called svc_put() */
5129 + if (nfsd_serv->sv_nrthreads == 1)
5130 + svc_shutdown_net(nfsd_serv, net);
5131 svc_destroy(nfsd_serv);
5132 mutex_unlock(&nfsd_mutex);
5133 return ret;
5134 diff --git a/fs/nilfs2/gcinode.c b/fs/nilfs2/gcinode.c
5135 index 08a07a2..57ceaf3 100644
5136 --- a/fs/nilfs2/gcinode.c
5137 +++ b/fs/nilfs2/gcinode.c
5138 @@ -191,6 +191,8 @@ void nilfs_remove_all_gcinodes(struct the_nilfs *nilfs)
5139 while (!list_empty(head)) {
5140 ii = list_first_entry(head, struct nilfs_inode_info, i_dirty);
5141 list_del_init(&ii->i_dirty);
5142 + truncate_inode_pages(&ii->vfs_inode.i_data, 0);
5143 + nilfs_btnode_cache_clear(&ii->i_btnode_cache);
5144 iput(&ii->vfs_inode);
5145 }
5146 }
5147 diff --git a/fs/nilfs2/segment.c b/fs/nilfs2/segment.c
5148 index 0e72ad6..88e11fb 100644
5149 --- a/fs/nilfs2/segment.c
5150 +++ b/fs/nilfs2/segment.c
5151 @@ -2309,6 +2309,8 @@ nilfs_remove_written_gcinodes(struct the_nilfs *nilfs, struct list_head *head)
5152 if (!test_bit(NILFS_I_UPDATED, &ii->i_state))
5153 continue;
5154 list_del_init(&ii->i_dirty);
5155 + truncate_inode_pages(&ii->vfs_inode.i_data, 0);
5156 + nilfs_btnode_cache_clear(&ii->i_btnode_cache);
5157 iput(&ii->vfs_inode);
5158 }
5159 }
5160 diff --git a/fs/ocfs2/file.c b/fs/ocfs2/file.c
5161 index 061591a..7602783 100644
5162 --- a/fs/ocfs2/file.c
5163 +++ b/fs/ocfs2/file.c
5164 @@ -1950,7 +1950,7 @@ static int __ocfs2_change_file_space(struct file *file, struct inode *inode,
5165 if (ret < 0)
5166 mlog_errno(ret);
5167
5168 - if (file->f_flags & O_SYNC)
5169 + if (file && (file->f_flags & O_SYNC))
5170 handle->h_sync = 1;
5171
5172 ocfs2_commit_trans(osb, handle);
5173 @@ -2422,8 +2422,10 @@ out_dio:
5174 unaligned_dio = 0;
5175 }
5176
5177 - if (unaligned_dio)
5178 + if (unaligned_dio) {
5179 + ocfs2_iocb_clear_unaligned_aio(iocb);
5180 atomic_dec(&OCFS2_I(inode)->ip_unaligned_aio);
5181 + }
5182
5183 out:
5184 if (rw_level != -1)
5185 diff --git a/fs/open.c b/fs/open.c
5186 index 5720854..3f1108b 100644
5187 --- a/fs/open.c
5188 +++ b/fs/open.c
5189 @@ -396,10 +396,10 @@ SYSCALL_DEFINE1(fchdir, unsigned int, fd)
5190 {
5191 struct file *file;
5192 struct inode *inode;
5193 - int error;
5194 + int error, fput_needed;
5195
5196 error = -EBADF;
5197 - file = fget(fd);
5198 + file = fget_raw_light(fd, &fput_needed);
5199 if (!file)
5200 goto out;
5201
5202 @@ -413,7 +413,7 @@ SYSCALL_DEFINE1(fchdir, unsigned int, fd)
5203 if (!error)
5204 set_fs_pwd(current->fs, &file->f_path);
5205 out_putf:
5206 - fput(file);
5207 + fput_light(file, fput_needed);
5208 out:
5209 return error;
5210 }
5211 diff --git a/fs/ramfs/file-nommu.c b/fs/ramfs/file-nommu.c
5212 index fbb0b47..d5378d0 100644
5213 --- a/fs/ramfs/file-nommu.c
5214 +++ b/fs/ramfs/file-nommu.c
5215 @@ -110,6 +110,7 @@ int ramfs_nommu_expand_for_mapping(struct inode *inode, size_t newsize)
5216
5217 /* prevent the page from being discarded on memory pressure */
5218 SetPageDirty(page);
5219 + SetPageUptodate(page);
5220
5221 unlock_page(page);
5222 put_page(page);
5223 diff --git a/fs/splice.c b/fs/splice.c
5224 index f847684..5cac690 100644
5225 --- a/fs/splice.c
5226 +++ b/fs/splice.c
5227 @@ -273,13 +273,16 @@ void spd_release_page(struct splice_pipe_desc *spd, unsigned int i)
5228 * Check if we need to grow the arrays holding pages and partial page
5229 * descriptions.
5230 */
5231 -int splice_grow_spd(struct pipe_inode_info *pipe, struct splice_pipe_desc *spd)
5232 +int splice_grow_spd(const struct pipe_inode_info *pipe, struct splice_pipe_desc *spd)
5233 {
5234 - if (pipe->buffers <= PIPE_DEF_BUFFERS)
5235 + unsigned int buffers = ACCESS_ONCE(pipe->buffers);
5236 +
5237 + spd->nr_pages_max = buffers;
5238 + if (buffers <= PIPE_DEF_BUFFERS)
5239 return 0;
5240
5241 - spd->pages = kmalloc(pipe->buffers * sizeof(struct page *), GFP_KERNEL);
5242 - spd->partial = kmalloc(pipe->buffers * sizeof(struct partial_page), GFP_KERNEL);
5243 + spd->pages = kmalloc(buffers * sizeof(struct page *), GFP_KERNEL);
5244 + spd->partial = kmalloc(buffers * sizeof(struct partial_page), GFP_KERNEL);
5245
5246 if (spd->pages && spd->partial)
5247 return 0;
5248 @@ -289,10 +292,9 @@ int splice_grow_spd(struct pipe_inode_info *pipe, struct splice_pipe_desc *spd)
5249 return -ENOMEM;
5250 }
5251
5252 -void splice_shrink_spd(struct pipe_inode_info *pipe,
5253 - struct splice_pipe_desc *spd)
5254 +void splice_shrink_spd(struct splice_pipe_desc *spd)
5255 {
5256 - if (pipe->buffers <= PIPE_DEF_BUFFERS)
5257 + if (spd->nr_pages_max <= PIPE_DEF_BUFFERS)
5258 return;
5259
5260 kfree(spd->pages);
5261 @@ -315,6 +317,7 @@ __generic_file_splice_read(struct file *in, loff_t *ppos,
5262 struct splice_pipe_desc spd = {
5263 .pages = pages,
5264 .partial = partial,
5265 + .nr_pages_max = PIPE_DEF_BUFFERS,
5266 .flags = flags,
5267 .ops = &page_cache_pipe_buf_ops,
5268 .spd_release = spd_release_page,
5269 @@ -326,7 +329,7 @@ __generic_file_splice_read(struct file *in, loff_t *ppos,
5270 index = *ppos >> PAGE_CACHE_SHIFT;
5271 loff = *ppos & ~PAGE_CACHE_MASK;
5272 req_pages = (len + loff + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
5273 - nr_pages = min(req_pages, pipe->buffers);
5274 + nr_pages = min(req_pages, spd.nr_pages_max);
5275
5276 /*
5277 * Lookup the (hopefully) full range of pages we need.
5278 @@ -497,7 +500,7 @@ fill_it:
5279 if (spd.nr_pages)
5280 error = splice_to_pipe(pipe, &spd);
5281
5282 - splice_shrink_spd(pipe, &spd);
5283 + splice_shrink_spd(&spd);
5284 return error;
5285 }
5286
5287 @@ -598,6 +601,7 @@ ssize_t default_file_splice_read(struct file *in, loff_t *ppos,
5288 struct splice_pipe_desc spd = {
5289 .pages = pages,
5290 .partial = partial,
5291 + .nr_pages_max = PIPE_DEF_BUFFERS,
5292 .flags = flags,
5293 .ops = &default_pipe_buf_ops,
5294 .spd_release = spd_release_page,
5295 @@ -608,8 +612,8 @@ ssize_t default_file_splice_read(struct file *in, loff_t *ppos,
5296
5297 res = -ENOMEM;
5298 vec = __vec;
5299 - if (pipe->buffers > PIPE_DEF_BUFFERS) {
5300 - vec = kmalloc(pipe->buffers * sizeof(struct iovec), GFP_KERNEL);
5301 + if (spd.nr_pages_max > PIPE_DEF_BUFFERS) {
5302 + vec = kmalloc(spd.nr_pages_max * sizeof(struct iovec), GFP_KERNEL);
5303 if (!vec)
5304 goto shrink_ret;
5305 }
5306 @@ -617,7 +621,7 @@ ssize_t default_file_splice_read(struct file *in, loff_t *ppos,
5307 offset = *ppos & ~PAGE_CACHE_MASK;
5308 nr_pages = (len + offset + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
5309
5310 - for (i = 0; i < nr_pages && i < pipe->buffers && len; i++) {
5311 + for (i = 0; i < nr_pages && i < spd.nr_pages_max && len; i++) {
5312 struct page *page;
5313
5314 page = alloc_page(GFP_USER);
5315 @@ -665,7 +669,7 @@ ssize_t default_file_splice_read(struct file *in, loff_t *ppos,
5316 shrink_ret:
5317 if (vec != __vec)
5318 kfree(vec);
5319 - splice_shrink_spd(pipe, &spd);
5320 + splice_shrink_spd(&spd);
5321 return res;
5322
5323 err:
5324 @@ -1612,6 +1616,7 @@ static long vmsplice_to_pipe(struct file *file, const struct iovec __user *iov,
5325 struct splice_pipe_desc spd = {
5326 .pages = pages,
5327 .partial = partial,
5328 + .nr_pages_max = PIPE_DEF_BUFFERS,
5329 .flags = flags,
5330 .ops = &user_page_pipe_buf_ops,
5331 .spd_release = spd_release_page,
5332 @@ -1627,13 +1632,13 @@ static long vmsplice_to_pipe(struct file *file, const struct iovec __user *iov,
5333
5334 spd.nr_pages = get_iovec_page_array(iov, nr_segs, spd.pages,
5335 spd.partial, flags & SPLICE_F_GIFT,
5336 - pipe->buffers);
5337 + spd.nr_pages_max);
5338 if (spd.nr_pages <= 0)
5339 ret = spd.nr_pages;
5340 else
5341 ret = splice_to_pipe(pipe, &spd);
5342
5343 - splice_shrink_spd(pipe, &spd);
5344 + splice_shrink_spd(&spd);
5345 return ret;
5346 }
5347
5348 diff --git a/fs/udf/super.c b/fs/udf/super.c
5349 index ac8a348..8d86a87 100644
5350 --- a/fs/udf/super.c
5351 +++ b/fs/udf/super.c
5352 @@ -56,6 +56,7 @@
5353 #include <linux/seq_file.h>
5354 #include <linux/bitmap.h>
5355 #include <linux/crc-itu-t.h>
5356 +#include <linux/log2.h>
5357 #include <asm/byteorder.h>
5358
5359 #include "udf_sb.h"
5360 @@ -1215,16 +1216,65 @@ out_bh:
5361 return ret;
5362 }
5363
5364 +static int udf_load_sparable_map(struct super_block *sb,
5365 + struct udf_part_map *map,
5366 + struct sparablePartitionMap *spm)
5367 +{
5368 + uint32_t loc;
5369 + uint16_t ident;
5370 + struct sparingTable *st;
5371 + struct udf_sparing_data *sdata = &map->s_type_specific.s_sparing;
5372 + int i;
5373 + struct buffer_head *bh;
5374 +
5375 + map->s_partition_type = UDF_SPARABLE_MAP15;
5376 + sdata->s_packet_len = le16_to_cpu(spm->packetLength);
5377 + if (!is_power_of_2(sdata->s_packet_len)) {
5378 + udf_err(sb, "error loading logical volume descriptor: "
5379 + "Invalid packet length %u\n",
5380 + (unsigned)sdata->s_packet_len);
5381 + return -EIO;
5382 + }
5383 + if (spm->numSparingTables > 4) {
5384 + udf_err(sb, "error loading logical volume descriptor: "
5385 + "Too many sparing tables (%d)\n",
5386 + (int)spm->numSparingTables);
5387 + return -EIO;
5388 + }
5389 +
5390 + for (i = 0; i < spm->numSparingTables; i++) {
5391 + loc = le32_to_cpu(spm->locSparingTable[i]);
5392 + bh = udf_read_tagged(sb, loc, loc, &ident);
5393 + if (!bh)
5394 + continue;
5395 +
5396 + st = (struct sparingTable *)bh->b_data;
5397 + if (ident != 0 ||
5398 + strncmp(st->sparingIdent.ident, UDF_ID_SPARING,
5399 + strlen(UDF_ID_SPARING)) ||
5400 + sizeof(*st) + le16_to_cpu(st->reallocationTableLen) >
5401 + sb->s_blocksize) {
5402 + brelse(bh);
5403 + continue;
5404 + }
5405 +
5406 + sdata->s_spar_map[i] = bh;
5407 + }
5408 + map->s_partition_func = udf_get_pblock_spar15;
5409 + return 0;
5410 +}
5411 +
5412 static int udf_load_logicalvol(struct super_block *sb, sector_t block,
5413 struct kernel_lb_addr *fileset)
5414 {
5415 struct logicalVolDesc *lvd;
5416 - int i, j, offset;
5417 + int i, offset;
5418 uint8_t type;
5419 struct udf_sb_info *sbi = UDF_SB(sb);
5420 struct genericPartitionMap *gpm;
5421 uint16_t ident;
5422 struct buffer_head *bh;
5423 + unsigned int table_len;
5424 int ret = 0;
5425
5426 bh = udf_read_tagged(sb, block, block, &ident);
5427 @@ -1232,15 +1282,20 @@ static int udf_load_logicalvol(struct super_block *sb, sector_t block,
5428 return 1;
5429 BUG_ON(ident != TAG_IDENT_LVD);
5430 lvd = (struct logicalVolDesc *)bh->b_data;
5431 -
5432 - i = udf_sb_alloc_partition_maps(sb, le32_to_cpu(lvd->numPartitionMaps));
5433 - if (i != 0) {
5434 - ret = i;
5435 + table_len = le32_to_cpu(lvd->mapTableLength);
5436 + if (sizeof(*lvd) + table_len > sb->s_blocksize) {
5437 + udf_err(sb, "error loading logical volume descriptor: "
5438 + "Partition table too long (%u > %lu)\n", table_len,
5439 + sb->s_blocksize - sizeof(*lvd));
5440 goto out_bh;
5441 }
5442
5443 + ret = udf_sb_alloc_partition_maps(sb, le32_to_cpu(lvd->numPartitionMaps));
5444 + if (ret)
5445 + goto out_bh;
5446 +
5447 for (i = 0, offset = 0;
5448 - i < sbi->s_partitions && offset < le32_to_cpu(lvd->mapTableLength);
5449 + i < sbi->s_partitions && offset < table_len;
5450 i++, offset += gpm->partitionMapLength) {
5451 struct udf_part_map *map = &sbi->s_partmaps[i];
5452 gpm = (struct genericPartitionMap *)
5453 @@ -1275,38 +1330,9 @@ static int udf_load_logicalvol(struct super_block *sb, sector_t block,
5454 } else if (!strncmp(upm2->partIdent.ident,
5455 UDF_ID_SPARABLE,
5456 strlen(UDF_ID_SPARABLE))) {
5457 - uint32_t loc;
5458 - struct sparingTable *st;
5459 - struct sparablePartitionMap *spm =
5460 - (struct sparablePartitionMap *)gpm;
5461 -
5462 - map->s_partition_type = UDF_SPARABLE_MAP15;
5463 - map->s_type_specific.s_sparing.s_packet_len =
5464 - le16_to_cpu(spm->packetLength);
5465 - for (j = 0; j < spm->numSparingTables; j++) {
5466 - struct buffer_head *bh2;
5467 -
5468 - loc = le32_to_cpu(
5469 - spm->locSparingTable[j]);
5470 - bh2 = udf_read_tagged(sb, loc, loc,
5471 - &ident);
5472 - map->s_type_specific.s_sparing.
5473 - s_spar_map[j] = bh2;
5474 -
5475 - if (bh2 == NULL)
5476 - continue;
5477 -
5478 - st = (struct sparingTable *)bh2->b_data;
5479 - if (ident != 0 || strncmp(
5480 - st->sparingIdent.ident,
5481 - UDF_ID_SPARING,
5482 - strlen(UDF_ID_SPARING))) {
5483 - brelse(bh2);
5484 - map->s_type_specific.s_sparing.
5485 - s_spar_map[j] = NULL;
5486 - }
5487 - }
5488 - map->s_partition_func = udf_get_pblock_spar15;
5489 + if (udf_load_sparable_map(sb, map,
5490 + (struct sparablePartitionMap *)gpm) < 0)
5491 + goto out_bh;
5492 } else if (!strncmp(upm2->partIdent.ident,
5493 UDF_ID_METADATA,
5494 strlen(UDF_ID_METADATA))) {
5495 diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
5496 index 125c54e..c7ec2cd 100644
5497 --- a/include/asm-generic/pgtable.h
5498 +++ b/include/asm-generic/pgtable.h
5499 @@ -446,6 +446,18 @@ static inline int pmd_write(pmd_t pmd)
5500 #endif /* __HAVE_ARCH_PMD_WRITE */
5501 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
5502
5503 +#ifndef pmd_read_atomic
5504 +static inline pmd_t pmd_read_atomic(pmd_t *pmdp)
5505 +{
5506 + /*
5507 + * Depend on compiler for an atomic pmd read. NOTE: this is
5508 + * only going to work, if the pmdval_t isn't larger than
5509 + * an unsigned long.
5510 + */
5511 + return *pmdp;
5512 +}
5513 +#endif
5514 +
5515 /*
5516 * This function is meant to be used by sites walking pagetables with
5517 * the mmap_sem hold in read mode to protect against MADV_DONTNEED and
5518 @@ -459,14 +471,30 @@ static inline int pmd_write(pmd_t pmd)
5519 * undefined so behaving like if the pmd was none is safe (because it
5520 * can return none anyway). The compiler level barrier() is critically
5521 * important to compute the two checks atomically on the same pmdval.
5522 + *
5523 + * For 32bit kernels with a 64bit large pmd_t this automatically takes
5524 + * care of reading the pmd atomically to avoid SMP race conditions
5525 + * against pmd_populate() when the mmap_sem is hold for reading by the
5526 + * caller (a special atomic read not done by "gcc" as in the generic
5527 + * version above, is also needed when THP is disabled because the page
5528 + * fault can populate the pmd from under us).
5529 */
5530 static inline int pmd_none_or_trans_huge_or_clear_bad(pmd_t *pmd)
5531 {
5532 - /* depend on compiler for an atomic pmd read */
5533 - pmd_t pmdval = *pmd;
5534 + pmd_t pmdval = pmd_read_atomic(pmd);
5535 /*
5536 * The barrier will stabilize the pmdval in a register or on
5537 * the stack so that it will stop changing under the code.
5538 + *
5539 + * When CONFIG_TRANSPARENT_HUGEPAGE=y on x86 32bit PAE,
5540 + * pmd_read_atomic is allowed to return a not atomic pmdval
5541 + * (for example pointing to an hugepage that has never been
5542 + * mapped in the pmd). The below checks will only care about
5543 + * the low part of the pmd with 32bit PAE x86 anyway, with the
5544 + * exception of pmd_none(). So the important thing is that if
5545 + * the low part of the pmd is found null, the high part will
5546 + * be also null or the pmd_none() check below would be
5547 + * confused.
5548 */
5549 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
5550 barrier();
5551 diff --git a/include/linux/aio.h b/include/linux/aio.h
5552 index 2314ad8..b1a520e 100644
5553 --- a/include/linux/aio.h
5554 +++ b/include/linux/aio.h
5555 @@ -140,6 +140,7 @@ struct kiocb {
5556 (x)->ki_dtor = NULL; \
5557 (x)->ki_obj.tsk = tsk; \
5558 (x)->ki_user_data = 0; \
5559 + (x)->private = NULL; \
5560 } while (0)
5561
5562 #define AIO_RING_MAGIC 0xa10a10a1
5563 diff --git a/include/linux/lockd/bind.h b/include/linux/lockd/bind.h
5564 index 11a966e..4d24d64 100644
5565 --- a/include/linux/lockd/bind.h
5566 +++ b/include/linux/lockd/bind.h
5567 @@ -54,7 +54,7 @@ extern void nlmclnt_done(struct nlm_host *host);
5568
5569 extern int nlmclnt_proc(struct nlm_host *host, int cmd,
5570 struct file_lock *fl);
5571 -extern int lockd_up(void);
5572 -extern void lockd_down(void);
5573 +extern int lockd_up(struct net *net);
5574 +extern void lockd_down(struct net *net);
5575
5576 #endif /* LINUX_LOCKD_BIND_H */
5577 diff --git a/include/linux/memblock.h b/include/linux/memblock.h
5578 index a6bb102..19dc455 100644
5579 --- a/include/linux/memblock.h
5580 +++ b/include/linux/memblock.h
5581 @@ -50,9 +50,7 @@ phys_addr_t memblock_find_in_range_node(phys_addr_t start, phys_addr_t end,
5582 phys_addr_t size, phys_addr_t align, int nid);
5583 phys_addr_t memblock_find_in_range(phys_addr_t start, phys_addr_t end,
5584 phys_addr_t size, phys_addr_t align);
5585 -int memblock_free_reserved_regions(void);
5586 -int memblock_reserve_reserved_regions(void);
5587 -
5588 +phys_addr_t get_allocated_memblock_reserved_regions_info(phys_addr_t *addr);
5589 void memblock_allow_resize(void);
5590 int memblock_add_node(phys_addr_t base, phys_addr_t size, int nid);
5591 int memblock_add(phys_addr_t base, phys_addr_t size);
5592 diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
5593 index 3cc3062..b35752f 100644
5594 --- a/include/linux/mm_types.h
5595 +++ b/include/linux/mm_types.h
5596 @@ -56,8 +56,18 @@ struct page {
5597 };
5598
5599 union {
5600 +#if defined(CONFIG_HAVE_CMPXCHG_DOUBLE) && \
5601 + defined(CONFIG_HAVE_ALIGNED_STRUCT_PAGE)
5602 /* Used for cmpxchg_double in slub */
5603 unsigned long counters;
5604 +#else
5605 + /*
5606 + * Keep _count separate from slub cmpxchg_double data.
5607 + * As the rest of the double word is protected by
5608 + * slab_lock but _count is not.
5609 + */
5610 + unsigned counters;
5611 +#endif
5612
5613 struct {
5614
5615 diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
5616 index dff7115..5f6806b 100644
5617 --- a/include/linux/mmzone.h
5618 +++ b/include/linux/mmzone.h
5619 @@ -663,7 +663,7 @@ typedef struct pglist_data {
5620 range, including holes */
5621 int node_id;
5622 wait_queue_head_t kswapd_wait;
5623 - struct task_struct *kswapd;
5624 + struct task_struct *kswapd; /* Protected by lock_memory_hotplug() */
5625 int kswapd_max_order;
5626 enum zone_type classzone_idx;
5627 } pg_data_t;
5628 diff --git a/include/linux/pci.h b/include/linux/pci.h
5629 index 8b2921a..e444f5b 100644
5630 --- a/include/linux/pci.h
5631 +++ b/include/linux/pci.h
5632 @@ -176,8 +176,6 @@ enum pci_dev_flags {
5633 PCI_DEV_FLAGS_NO_D3 = (__force pci_dev_flags_t) 2,
5634 /* Provide indication device is assigned by a Virtual Machine Manager */
5635 PCI_DEV_FLAGS_ASSIGNED = (__force pci_dev_flags_t) 4,
5636 - /* Device causes system crash if in D3 during S3 sleep */
5637 - PCI_DEV_FLAGS_NO_D3_DURING_SLEEP = (__force pci_dev_flags_t) 8,
5638 };
5639
5640 enum pci_irq_reroute_variant {
5641 diff --git a/include/linux/rpmsg.h b/include/linux/rpmsg.h
5642 index a8e50e4..82a6739 100644
5643 --- a/include/linux/rpmsg.h
5644 +++ b/include/linux/rpmsg.h
5645 @@ -38,6 +38,8 @@
5646 #include <linux/types.h>
5647 #include <linux/device.h>
5648 #include <linux/mod_devicetable.h>
5649 +#include <linux/kref.h>
5650 +#include <linux/mutex.h>
5651
5652 /* The feature bitmap for virtio rpmsg */
5653 #define VIRTIO_RPMSG_F_NS 0 /* RP supports name service notifications */
5654 @@ -120,7 +122,9 @@ typedef void (*rpmsg_rx_cb_t)(struct rpmsg_channel *, void *, int, void *, u32);
5655 /**
5656 * struct rpmsg_endpoint - binds a local rpmsg address to its user
5657 * @rpdev: rpmsg channel device
5658 + * @refcount: when this drops to zero, the ept is deallocated
5659 * @cb: rx callback handler
5660 + * @cb_lock: must be taken before accessing/changing @cb
5661 * @addr: local rpmsg address
5662 * @priv: private data for the driver's use
5663 *
5664 @@ -140,7 +144,9 @@ typedef void (*rpmsg_rx_cb_t)(struct rpmsg_channel *, void *, int, void *, u32);
5665 */
5666 struct rpmsg_endpoint {
5667 struct rpmsg_channel *rpdev;
5668 + struct kref refcount;
5669 rpmsg_rx_cb_t cb;
5670 + struct mutex cb_lock;
5671 u32 addr;
5672 void *priv;
5673 };
5674 diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
5675 index c168907..c1bae8d 100644
5676 --- a/include/linux/skbuff.h
5677 +++ b/include/linux/skbuff.h
5678 @@ -225,14 +225,11 @@ enum {
5679 /* device driver is going to provide hardware time stamp */
5680 SKBTX_IN_PROGRESS = 1 << 2,
5681
5682 - /* ensure the originating sk reference is available on driver level */
5683 - SKBTX_DRV_NEEDS_SK_REF = 1 << 3,
5684 -
5685 /* device driver supports TX zero-copy buffers */
5686 - SKBTX_DEV_ZEROCOPY = 1 << 4,
5687 + SKBTX_DEV_ZEROCOPY = 1 << 3,
5688
5689 /* generate wifi status information (where possible) */
5690 - SKBTX_WIFI_STATUS = 1 << 5,
5691 + SKBTX_WIFI_STATUS = 1 << 4,
5692 };
5693
5694 /*
5695 diff --git a/include/linux/splice.h b/include/linux/splice.h
5696 index 26e5b61..09a545a 100644
5697 --- a/include/linux/splice.h
5698 +++ b/include/linux/splice.h
5699 @@ -51,7 +51,8 @@ struct partial_page {
5700 struct splice_pipe_desc {
5701 struct page **pages; /* page map */
5702 struct partial_page *partial; /* pages[] may not be contig */
5703 - int nr_pages; /* number of pages in map */
5704 + int nr_pages; /* number of populated pages in map */
5705 + unsigned int nr_pages_max; /* pages[] & partial[] arrays size */
5706 unsigned int flags; /* splice flags */
5707 const struct pipe_buf_operations *ops;/* ops associated with output pipe */
5708 void (*spd_release)(struct splice_pipe_desc *, unsigned int);
5709 @@ -85,9 +86,8 @@ extern ssize_t splice_direct_to_actor(struct file *, struct splice_desc *,
5710 /*
5711 * for dynamic pipe sizing
5712 */
5713 -extern int splice_grow_spd(struct pipe_inode_info *, struct splice_pipe_desc *);
5714 -extern void splice_shrink_spd(struct pipe_inode_info *,
5715 - struct splice_pipe_desc *);
5716 +extern int splice_grow_spd(const struct pipe_inode_info *, struct splice_pipe_desc *);
5717 +extern void splice_shrink_spd(struct splice_pipe_desc *);
5718 extern void spd_release_page(struct splice_pipe_desc *, unsigned int);
5719
5720 extern const struct pipe_buf_operations page_cache_pipe_buf_ops;
5721 diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
5722 index 51b29ac..2b43e02 100644
5723 --- a/include/linux/sunrpc/svc.h
5724 +++ b/include/linux/sunrpc/svc.h
5725 @@ -416,6 +416,7 @@ struct svc_procedure {
5726 */
5727 int svc_rpcb_setup(struct svc_serv *serv, struct net *net);
5728 void svc_rpcb_cleanup(struct svc_serv *serv, struct net *net);
5729 +int svc_bind(struct svc_serv *serv, struct net *net);
5730 struct svc_serv *svc_create(struct svc_program *, unsigned int,
5731 void (*shutdown)(struct svc_serv *, struct net *net));
5732 struct svc_rqst *svc_prepare_thread(struct svc_serv *serv,
5733 diff --git a/include/net/cipso_ipv4.h b/include/net/cipso_ipv4.h
5734 index 9808877..a7a683e 100644
5735 --- a/include/net/cipso_ipv4.h
5736 +++ b/include/net/cipso_ipv4.h
5737 @@ -42,6 +42,7 @@
5738 #include <net/netlabel.h>
5739 #include <net/request_sock.h>
5740 #include <linux/atomic.h>
5741 +#include <asm/unaligned.h>
5742
5743 /* known doi values */
5744 #define CIPSO_V4_DOI_UNKNOWN 0x00000000
5745 @@ -285,7 +286,33 @@ static inline int cipso_v4_skbuff_getattr(const struct sk_buff *skb,
5746 static inline int cipso_v4_validate(const struct sk_buff *skb,
5747 unsigned char **option)
5748 {
5749 - return -ENOSYS;
5750 + unsigned char *opt = *option;
5751 + unsigned char err_offset = 0;
5752 + u8 opt_len = opt[1];
5753 + u8 opt_iter;
5754 +
5755 + if (opt_len < 8) {
5756 + err_offset = 1;
5757 + goto out;
5758 + }
5759 +
5760 + if (get_unaligned_be32(&opt[2]) == 0) {
5761 + err_offset = 2;
5762 + goto out;
5763 + }
5764 +
5765 + for (opt_iter = 6; opt_iter < opt_len;) {
5766 + if (opt[opt_iter + 1] > (opt_len - opt_iter)) {
5767 + err_offset = opt_iter + 1;
5768 + goto out;
5769 + }
5770 + opt_iter += opt[opt_iter + 1];
5771 + }
5772 +
5773 +out:
5774 + *option = opt + err_offset;
5775 + return err_offset;
5776 +
5777 }
5778 #endif /* CONFIG_NETLABEL */
5779
5780 diff --git a/include/net/inetpeer.h b/include/net/inetpeer.h
5781 index b94765e..2040bff 100644
5782 --- a/include/net/inetpeer.h
5783 +++ b/include/net/inetpeer.h
5784 @@ -40,7 +40,10 @@ struct inet_peer {
5785 u32 pmtu_orig;
5786 u32 pmtu_learned;
5787 struct inetpeer_addr_base redirect_learned;
5788 - struct list_head gc_list;
5789 + union {
5790 + struct list_head gc_list;
5791 + struct rcu_head gc_rcu;
5792 + };
5793 /*
5794 * Once inet_peer is queued for deletion (refcnt == -1), following fields
5795 * are not available: rid, ip_id_count, tcp_ts, tcp_ts_stamp
5796 diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
5797 index 55ce96b..9d7d54a 100644
5798 --- a/include/net/sch_generic.h
5799 +++ b/include/net/sch_generic.h
5800 @@ -220,13 +220,16 @@ struct tcf_proto {
5801
5802 struct qdisc_skb_cb {
5803 unsigned int pkt_len;
5804 - unsigned char data[24];
5805 + u16 bond_queue_mapping;
5806 + u16 _pad;
5807 + unsigned char data[20];
5808 };
5809
5810 static inline void qdisc_cb_private_validate(const struct sk_buff *skb, int sz)
5811 {
5812 struct qdisc_skb_cb *qcb;
5813 - BUILD_BUG_ON(sizeof(skb->cb) < sizeof(unsigned int) + sz);
5814 +
5815 + BUILD_BUG_ON(sizeof(skb->cb) < offsetof(struct qdisc_skb_cb, data) + sz);
5816 BUILD_BUG_ON(sizeof(qcb->data) < sz);
5817 }
5818
5819 diff --git a/include/scsi/libsas.h b/include/scsi/libsas.h
5820 index f4f1c96..10ce74f 100644
5821 --- a/include/scsi/libsas.h
5822 +++ b/include/scsi/libsas.h
5823 @@ -163,6 +163,8 @@ enum ata_command_set {
5824 ATAPI_COMMAND_SET = 1,
5825 };
5826
5827 +#define ATA_RESP_FIS_SIZE 24
5828 +
5829 struct sata_device {
5830 enum ata_command_set command_set;
5831 struct smp_resp rps_resp; /* report_phy_sata_resp */
5832 @@ -171,7 +173,7 @@ struct sata_device {
5833
5834 struct ata_port *ap;
5835 struct ata_host ata_host;
5836 - struct ata_taskfile tf;
5837 + u8 fis[ATA_RESP_FIS_SIZE];
5838 };
5839
5840 enum {
5841 @@ -537,7 +539,7 @@ enum exec_status {
5842 */
5843 struct ata_task_resp {
5844 u16 frame_len;
5845 - u8 ending_fis[24]; /* dev to host or data-in */
5846 + u8 ending_fis[ATA_RESP_FIS_SIZE]; /* dev to host or data-in */
5847 };
5848
5849 #define SAS_STATUS_BUF_SIZE 96
5850 diff --git a/include/scsi/scsi_cmnd.h b/include/scsi/scsi_cmnd.h
5851 index 1e11985..ac06cc5 100644
5852 --- a/include/scsi/scsi_cmnd.h
5853 +++ b/include/scsi/scsi_cmnd.h
5854 @@ -134,10 +134,16 @@ struct scsi_cmnd {
5855
5856 static inline struct scsi_driver *scsi_cmd_to_driver(struct scsi_cmnd *cmd)
5857 {
5858 + struct scsi_driver **sdp;
5859 +
5860 if (!cmd->request->rq_disk)
5861 return NULL;
5862
5863 - return *(struct scsi_driver **)cmd->request->rq_disk->private_data;
5864 + sdp = (struct scsi_driver **)cmd->request->rq_disk->private_data;
5865 + if (!sdp)
5866 + return NULL;
5867 +
5868 + return *sdp;
5869 }
5870
5871 extern struct scsi_cmnd *scsi_get_command(struct scsi_device *, gfp_t);
5872 diff --git a/include/scsi/scsi_device.h b/include/scsi/scsi_device.h
5873 index 6efb2e1..ba96988 100644
5874 --- a/include/scsi/scsi_device.h
5875 +++ b/include/scsi/scsi_device.h
5876 @@ -151,6 +151,7 @@ struct scsi_device {
5877 SD_LAST_BUGGY_SECTORS */
5878 unsigned no_read_disc_info:1; /* Avoid READ_DISC_INFO cmds */
5879 unsigned no_read_capacity_16:1; /* Avoid READ_CAPACITY_16 cmds */
5880 + unsigned try_rc_10_first:1; /* Try READ_CAPACITY_10 first */
5881 unsigned is_visible:1; /* is the device visible in sysfs */
5882
5883 DECLARE_BITMAP(supported_events, SDEV_EVT_MAXBITS); /* supported events */
5884 diff --git a/kernel/exit.c b/kernel/exit.c
5885 index d8bd3b42..9d81012 100644
5886 --- a/kernel/exit.c
5887 +++ b/kernel/exit.c
5888 @@ -643,6 +643,7 @@ static void exit_mm(struct task_struct * tsk)
5889 mm_release(tsk, mm);
5890 if (!mm)
5891 return;
5892 + sync_mm_rss(mm);
5893 /*
5894 * Serialize with any possible pending coredump.
5895 * We must hold mmap_sem around checking core_state
5896 diff --git a/kernel/relay.c b/kernel/relay.c
5897 index ab56a17..e8cd202 100644
5898 --- a/kernel/relay.c
5899 +++ b/kernel/relay.c
5900 @@ -1235,6 +1235,7 @@ static ssize_t subbuf_splice_actor(struct file *in,
5901 struct splice_pipe_desc spd = {
5902 .pages = pages,
5903 .nr_pages = 0,
5904 + .nr_pages_max = PIPE_DEF_BUFFERS,
5905 .partial = partial,
5906 .flags = flags,
5907 .ops = &relay_pipe_buf_ops,
5908 @@ -1302,8 +1303,8 @@ static ssize_t subbuf_splice_actor(struct file *in,
5909 ret += padding;
5910
5911 out:
5912 - splice_shrink_spd(pipe, &spd);
5913 - return ret;
5914 + splice_shrink_spd(&spd);
5915 + return ret;
5916 }
5917
5918 static ssize_t relay_file_splice_read(struct file *in,
5919 diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
5920 index 464a96f..55e4d4c 100644
5921 --- a/kernel/trace/trace.c
5922 +++ b/kernel/trace/trace.c
5923 @@ -2648,10 +2648,12 @@ tracing_cpumask_write(struct file *filp, const char __user *ubuf,
5924 if (cpumask_test_cpu(cpu, tracing_cpumask) &&
5925 !cpumask_test_cpu(cpu, tracing_cpumask_new)) {
5926 atomic_inc(&global_trace.data[cpu]->disabled);
5927 + ring_buffer_record_disable_cpu(global_trace.buffer, cpu);
5928 }
5929 if (!cpumask_test_cpu(cpu, tracing_cpumask) &&
5930 cpumask_test_cpu(cpu, tracing_cpumask_new)) {
5931 atomic_dec(&global_trace.data[cpu]->disabled);
5932 + ring_buffer_record_enable_cpu(global_trace.buffer, cpu);
5933 }
5934 }
5935 arch_spin_unlock(&ftrace_max_lock);
5936 @@ -3563,6 +3565,7 @@ static ssize_t tracing_splice_read_pipe(struct file *filp,
5937 .pages = pages_def,
5938 .partial = partial_def,
5939 .nr_pages = 0, /* This gets updated below. */
5940 + .nr_pages_max = PIPE_DEF_BUFFERS,
5941 .flags = flags,
5942 .ops = &tracing_pipe_buf_ops,
5943 .spd_release = tracing_spd_release_pipe,
5944 @@ -3634,7 +3637,7 @@ static ssize_t tracing_splice_read_pipe(struct file *filp,
5945
5946 ret = splice_to_pipe(pipe, &spd);
5947 out:
5948 - splice_shrink_spd(pipe, &spd);
5949 + splice_shrink_spd(&spd);
5950 return ret;
5951
5952 out_err:
5953 @@ -4124,6 +4127,7 @@ tracing_buffers_splice_read(struct file *file, loff_t *ppos,
5954 struct splice_pipe_desc spd = {
5955 .pages = pages_def,
5956 .partial = partial_def,
5957 + .nr_pages_max = PIPE_DEF_BUFFERS,
5958 .flags = flags,
5959 .ops = &buffer_pipe_buf_ops,
5960 .spd_release = buffer_spd_release,
5961 @@ -4211,7 +4215,7 @@ tracing_buffers_splice_read(struct file *file, loff_t *ppos,
5962 }
5963
5964 ret = splice_to_pipe(pipe, &spd);
5965 - splice_shrink_spd(pipe, &spd);
5966 + splice_shrink_spd(&spd);
5967 out:
5968 return ret;
5969 }
5970 diff --git a/lib/dynamic_queue_limits.c b/lib/dynamic_queue_limits.c
5971 index 6ab4587..0777c5a 100644
5972 --- a/lib/dynamic_queue_limits.c
5973 +++ b/lib/dynamic_queue_limits.c
5974 @@ -10,23 +10,27 @@
5975 #include <linux/jiffies.h>
5976 #include <linux/dynamic_queue_limits.h>
5977
5978 -#define POSDIFF(A, B) ((A) > (B) ? (A) - (B) : 0)
5979 +#define POSDIFF(A, B) ((int)((A) - (B)) > 0 ? (A) - (B) : 0)
5980 +#define AFTER_EQ(A, B) ((int)((A) - (B)) >= 0)
5981
5982 /* Records completed count and recalculates the queue limit */
5983 void dql_completed(struct dql *dql, unsigned int count)
5984 {
5985 unsigned int inprogress, prev_inprogress, limit;
5986 - unsigned int ovlimit, all_prev_completed, completed;
5987 + unsigned int ovlimit, completed, num_queued;
5988 + bool all_prev_completed;
5989 +
5990 + num_queued = ACCESS_ONCE(dql->num_queued);
5991
5992 /* Can't complete more than what's in queue */
5993 - BUG_ON(count > dql->num_queued - dql->num_completed);
5994 + BUG_ON(count > num_queued - dql->num_completed);
5995
5996 completed = dql->num_completed + count;
5997 limit = dql->limit;
5998 - ovlimit = POSDIFF(dql->num_queued - dql->num_completed, limit);
5999 - inprogress = dql->num_queued - completed;
6000 + ovlimit = POSDIFF(num_queued - dql->num_completed, limit);
6001 + inprogress = num_queued - completed;
6002 prev_inprogress = dql->prev_num_queued - dql->num_completed;
6003 - all_prev_completed = POSDIFF(completed, dql->prev_num_queued);
6004 + all_prev_completed = AFTER_EQ(completed, dql->prev_num_queued);
6005
6006 if ((ovlimit && !inprogress) ||
6007 (dql->prev_ovlimit && all_prev_completed)) {
6008 @@ -104,7 +108,7 @@ void dql_completed(struct dql *dql, unsigned int count)
6009 dql->prev_ovlimit = ovlimit;
6010 dql->prev_last_obj_cnt = dql->last_obj_cnt;
6011 dql->num_completed = completed;
6012 - dql->prev_num_queued = dql->num_queued;
6013 + dql->prev_num_queued = num_queued;
6014 }
6015 EXPORT_SYMBOL(dql_completed);
6016
6017 diff --git a/mm/compaction.c b/mm/compaction.c
6018 index 74a8c82..459b0ab 100644
6019 --- a/mm/compaction.c
6020 +++ b/mm/compaction.c
6021 @@ -594,8 +594,11 @@ static int compact_zone(struct zone *zone, struct compact_control *cc)
6022 if (err) {
6023 putback_lru_pages(&cc->migratepages);
6024 cc->nr_migratepages = 0;
6025 + if (err == -ENOMEM) {
6026 + ret = COMPACT_PARTIAL;
6027 + goto out;
6028 + }
6029 }
6030 -
6031 }
6032
6033 out:
6034 diff --git a/mm/madvise.c b/mm/madvise.c
6035 index 1ccbba5..55f645c 100644
6036 --- a/mm/madvise.c
6037 +++ b/mm/madvise.c
6038 @@ -13,6 +13,7 @@
6039 #include <linux/hugetlb.h>
6040 #include <linux/sched.h>
6041 #include <linux/ksm.h>
6042 +#include <linux/file.h>
6043
6044 /*
6045 * Any behaviour which results in changes to the vma->vm_flags needs to
6046 @@ -203,14 +204,16 @@ static long madvise_remove(struct vm_area_struct *vma,
6047 struct address_space *mapping;
6048 loff_t offset, endoff;
6049 int error;
6050 + struct file *f;
6051
6052 *prev = NULL; /* tell sys_madvise we drop mmap_sem */
6053
6054 if (vma->vm_flags & (VM_LOCKED|VM_NONLINEAR|VM_HUGETLB))
6055 return -EINVAL;
6056
6057 - if (!vma->vm_file || !vma->vm_file->f_mapping
6058 - || !vma->vm_file->f_mapping->host) {
6059 + f = vma->vm_file;
6060 +
6061 + if (!f || !f->f_mapping || !f->f_mapping->host) {
6062 return -EINVAL;
6063 }
6064
6065 @@ -224,9 +227,16 @@ static long madvise_remove(struct vm_area_struct *vma,
6066 endoff = (loff_t)(end - vma->vm_start - 1)
6067 + ((loff_t)vma->vm_pgoff << PAGE_SHIFT);
6068
6069 - /* vmtruncate_range needs to take i_mutex */
6070 + /*
6071 + * vmtruncate_range may need to take i_mutex. We need to
6072 + * explicitly grab a reference because the vma (and hence the
6073 + * vma's reference to the file) can go away as soon as we drop
6074 + * mmap_sem.
6075 + */
6076 + get_file(f);
6077 up_read(&current->mm->mmap_sem);
6078 error = vmtruncate_range(mapping->host, offset, endoff);
6079 + fput(f);
6080 down_read(&current->mm->mmap_sem);
6081 return error;
6082 }
6083 diff --git a/mm/memblock.c b/mm/memblock.c
6084 index a44eab3..280d3d7 100644
6085 --- a/mm/memblock.c
6086 +++ b/mm/memblock.c
6087 @@ -37,6 +37,8 @@ struct memblock memblock __initdata_memblock = {
6088
6089 int memblock_debug __initdata_memblock;
6090 static int memblock_can_resize __initdata_memblock;
6091 +static int memblock_memory_in_slab __initdata_memblock = 0;
6092 +static int memblock_reserved_in_slab __initdata_memblock = 0;
6093
6094 /* inline so we don't get a warning when pr_debug is compiled out */
6095 static inline const char *memblock_type_name(struct memblock_type *type)
6096 @@ -141,30 +143,6 @@ phys_addr_t __init_memblock memblock_find_in_range(phys_addr_t start,
6097 MAX_NUMNODES);
6098 }
6099
6100 -/*
6101 - * Free memblock.reserved.regions
6102 - */
6103 -int __init_memblock memblock_free_reserved_regions(void)
6104 -{
6105 - if (memblock.reserved.regions == memblock_reserved_init_regions)
6106 - return 0;
6107 -
6108 - return memblock_free(__pa(memblock.reserved.regions),
6109 - sizeof(struct memblock_region) * memblock.reserved.max);
6110 -}
6111 -
6112 -/*
6113 - * Reserve memblock.reserved.regions
6114 - */
6115 -int __init_memblock memblock_reserve_reserved_regions(void)
6116 -{
6117 - if (memblock.reserved.regions == memblock_reserved_init_regions)
6118 - return 0;
6119 -
6120 - return memblock_reserve(__pa(memblock.reserved.regions),
6121 - sizeof(struct memblock_region) * memblock.reserved.max);
6122 -}
6123 -
6124 static void __init_memblock memblock_remove_region(struct memblock_type *type, unsigned long r)
6125 {
6126 type->total_size -= type->regions[r].size;
6127 @@ -182,11 +160,42 @@ static void __init_memblock memblock_remove_region(struct memblock_type *type, u
6128 }
6129 }
6130
6131 -static int __init_memblock memblock_double_array(struct memblock_type *type)
6132 +phys_addr_t __init_memblock get_allocated_memblock_reserved_regions_info(
6133 + phys_addr_t *addr)
6134 +{
6135 + if (memblock.reserved.regions == memblock_reserved_init_regions)
6136 + return 0;
6137 +
6138 + *addr = __pa(memblock.reserved.regions);
6139 +
6140 + return PAGE_ALIGN(sizeof(struct memblock_region) *
6141 + memblock.reserved.max);
6142 +}
6143 +
6144 +/**
6145 + * memblock_double_array - double the size of the memblock regions array
6146 + * @type: memblock type of the regions array being doubled
6147 + * @new_area_start: starting address of memory range to avoid overlap with
6148 + * @new_area_size: size of memory range to avoid overlap with
6149 + *
6150 + * Double the size of the @type regions array. If memblock is being used to
6151 + * allocate memory for a new reserved regions array and there is a previously
6152 + * allocated memory range [@new_area_start,@new_area_start+@new_area_size]
6153 + * waiting to be reserved, ensure the memory used by the new array does
6154 + * not overlap.
6155 + *
6156 + * RETURNS:
6157 + * 0 on success, -1 on failure.
6158 + */
6159 +static int __init_memblock memblock_double_array(struct memblock_type *type,
6160 + phys_addr_t new_area_start,
6161 + phys_addr_t new_area_size)
6162 {
6163 struct memblock_region *new_array, *old_array;
6164 + phys_addr_t old_alloc_size, new_alloc_size;
6165 phys_addr_t old_size, new_size, addr;
6166 int use_slab = slab_is_available();
6167 + int *in_slab;
6168
6169 /* We don't allow resizing until we know about the reserved regions
6170 * of memory that aren't suitable for allocation
6171 @@ -197,6 +206,18 @@ static int __init_memblock memblock_double_array(struct memblock_type *type)
6172 /* Calculate new doubled size */
6173 old_size = type->max * sizeof(struct memblock_region);
6174 new_size = old_size << 1;
6175 + /*
6176 + * We need to allocate the new array aligned to PAGE_SIZE,
6177 + * so we can free it completely later.
6178 + */
6179 + old_alloc_size = PAGE_ALIGN(old_size);
6180 + new_alloc_size = PAGE_ALIGN(new_size);
6181 +
6182 + /* Retrieve the slab flag */
6183 + if (type == &memblock.memory)
6184 + in_slab = &memblock_memory_in_slab;
6185 + else
6186 + in_slab = &memblock_reserved_in_slab;
6187
6188 /* Try to find some space for it.
6189 *
6190 @@ -212,14 +233,26 @@ static int __init_memblock memblock_double_array(struct memblock_type *type)
6191 if (use_slab) {
6192 new_array = kmalloc(new_size, GFP_KERNEL);
6193 addr = new_array ? __pa(new_array) : 0;
6194 - } else
6195 - addr = memblock_find_in_range(0, MEMBLOCK_ALLOC_ACCESSIBLE, new_size, sizeof(phys_addr_t));
6196 + } else {
6197 + /* only exclude range when trying to double reserved.regions */
6198 + if (type != &memblock.reserved)
6199 + new_area_start = new_area_size = 0;
6200 +
6201 + addr = memblock_find_in_range(new_area_start + new_area_size,
6202 + memblock.current_limit,
6203 + new_alloc_size, PAGE_SIZE);
6204 + if (!addr && new_area_size)
6205 + addr = memblock_find_in_range(0,
6206 + min(new_area_start, memblock.current_limit),
6207 + new_alloc_size, PAGE_SIZE);
6208 +
6209 + new_array = addr ? __va(addr) : 0;
6210 + }
6211 if (!addr) {
6212 pr_err("memblock: Failed to double %s array from %ld to %ld entries !\n",
6213 memblock_type_name(type), type->max, type->max * 2);
6214 return -1;
6215 }
6216 - new_array = __va(addr);
6217
6218 memblock_dbg("memblock: %s array is doubled to %ld at [%#010llx-%#010llx]",
6219 memblock_type_name(type), type->max * 2, (u64)addr, (u64)addr + new_size - 1);
6220 @@ -234,21 +267,23 @@ static int __init_memblock memblock_double_array(struct memblock_type *type)
6221 type->regions = new_array;
6222 type->max <<= 1;
6223
6224 - /* If we use SLAB that's it, we are done */
6225 - if (use_slab)
6226 - return 0;
6227 -
6228 - /* Add the new reserved region now. Should not fail ! */
6229 - BUG_ON(memblock_reserve(addr, new_size));
6230 -
6231 - /* If the array wasn't our static init one, then free it. We only do
6232 - * that before SLAB is available as later on, we don't know whether
6233 - * to use kfree or free_bootmem_pages(). Shouldn't be a big deal
6234 - * anyways
6235 + /* Free old array. We needn't free it if the array is the
6236 + * static one
6237 */
6238 - if (old_array != memblock_memory_init_regions &&
6239 - old_array != memblock_reserved_init_regions)
6240 - memblock_free(__pa(old_array), old_size);
6241 + if (*in_slab)
6242 + kfree(old_array);
6243 + else if (old_array != memblock_memory_init_regions &&
6244 + old_array != memblock_reserved_init_regions)
6245 + memblock_free(__pa(old_array), old_alloc_size);
6246 +
6247 + /* Reserve the new array if it was allocated from memblock.
6248 + * Otherwise, we needn't do it.
6249 + */
6250 + if (!use_slab)
6251 + BUG_ON(memblock_reserve(addr, new_alloc_size));
6252 +
6253 + /* Update slab flag */
6254 + *in_slab = use_slab;
6255
6256 return 0;
6257 }
6258 @@ -387,7 +422,7 @@ repeat:
6259 */
6260 if (!insert) {
6261 while (type->cnt + nr_new > type->max)
6262 - if (memblock_double_array(type) < 0)
6263 + if (memblock_double_array(type, obase, size) < 0)
6264 return -ENOMEM;
6265 insert = true;
6266 goto repeat;
6267 @@ -438,7 +473,7 @@ static int __init_memblock memblock_isolate_range(struct memblock_type *type,
6268
6269 /* we'll create at most two more regions */
6270 while (type->cnt + 2 > type->max)
6271 - if (memblock_double_array(type) < 0)
6272 + if (memblock_double_array(type, base, size) < 0)
6273 return -ENOMEM;
6274
6275 for (i = 0; i < type->cnt; i++) {
6276 diff --git a/mm/nobootmem.c b/mm/nobootmem.c
6277 index 1983fb1..218e6f9 100644
6278 --- a/mm/nobootmem.c
6279 +++ b/mm/nobootmem.c
6280 @@ -105,27 +105,35 @@ static void __init __free_pages_memory(unsigned long start, unsigned long end)
6281 __free_pages_bootmem(pfn_to_page(i), 0);
6282 }
6283
6284 +static unsigned long __init __free_memory_core(phys_addr_t start,
6285 + phys_addr_t end)
6286 +{
6287 + unsigned long start_pfn = PFN_UP(start);
6288 + unsigned long end_pfn = min_t(unsigned long,
6289 + PFN_DOWN(end), max_low_pfn);
6290 +
6291 + if (start_pfn > end_pfn)
6292 + return 0;
6293 +
6294 + __free_pages_memory(start_pfn, end_pfn);
6295 +
6296 + return end_pfn - start_pfn;
6297 +}
6298 +
6299 unsigned long __init free_low_memory_core_early(int nodeid)
6300 {
6301 unsigned long count = 0;
6302 - phys_addr_t start, end;
6303 + phys_addr_t start, end, size;
6304 u64 i;
6305
6306 - /* free reserved array temporarily so that it's treated as free area */
6307 - memblock_free_reserved_regions();
6308 -
6309 - for_each_free_mem_range(i, MAX_NUMNODES, &start, &end, NULL) {
6310 - unsigned long start_pfn = PFN_UP(start);
6311 - unsigned long end_pfn = min_t(unsigned long,
6312 - PFN_DOWN(end), max_low_pfn);
6313 - if (start_pfn < end_pfn) {
6314 - __free_pages_memory(start_pfn, end_pfn);
6315 - count += end_pfn - start_pfn;
6316 - }
6317 - }
6318 + for_each_free_mem_range(i, MAX_NUMNODES, &start, &end, NULL)
6319 + count += __free_memory_core(start, end);
6320 +
6321 + /* free range that is used for reserved array if we allocate it */
6322 + size = get_allocated_memblock_reserved_regions_info(&start);
6323 + if (size)
6324 + count += __free_memory_core(start, start + size);
6325
6326 - /* put region array back? */
6327 - memblock_reserve_reserved_regions();
6328 return count;
6329 }
6330
6331 diff --git a/mm/shmem.c b/mm/shmem.c
6332 index f99ff3e..9d65a02 100644
6333 --- a/mm/shmem.c
6334 +++ b/mm/shmem.c
6335 @@ -1365,6 +1365,7 @@ static ssize_t shmem_file_splice_read(struct file *in, loff_t *ppos,
6336 struct splice_pipe_desc spd = {
6337 .pages = pages,
6338 .partial = partial,
6339 + .nr_pages_max = PIPE_DEF_BUFFERS,
6340 .flags = flags,
6341 .ops = &page_cache_pipe_buf_ops,
6342 .spd_release = spd_release_page,
6343 @@ -1453,7 +1454,7 @@ static ssize_t shmem_file_splice_read(struct file *in, loff_t *ppos,
6344 if (spd.nr_pages)
6345 error = splice_to_pipe(pipe, &spd);
6346
6347 - splice_shrink_spd(pipe, &spd);
6348 + splice_shrink_spd(&spd);
6349
6350 if (error > 0) {
6351 *ppos += error;
6352 diff --git a/mm/vmscan.c b/mm/vmscan.c
6353 index 0932dc2..4607cc6 100644
6354 --- a/mm/vmscan.c
6355 +++ b/mm/vmscan.c
6356 @@ -3279,14 +3279,17 @@ int kswapd_run(int nid)
6357 }
6358
6359 /*
6360 - * Called by memory hotplug when all memory in a node is offlined.
6361 + * Called by memory hotplug when all memory in a node is offlined. Caller must
6362 + * hold lock_memory_hotplug().
6363 */
6364 void kswapd_stop(int nid)
6365 {
6366 struct task_struct *kswapd = NODE_DATA(nid)->kswapd;
6367
6368 - if (kswapd)
6369 + if (kswapd) {
6370 kthread_stop(kswapd);
6371 + NODE_DATA(nid)->kswapd = NULL;
6372 + }
6373 }
6374
6375 static int __init kswapd_init(void)
6376 diff --git a/net/batman-adv/routing.c b/net/batman-adv/routing.c
6377 index 7f8e158..8df3a1f 100644
6378 --- a/net/batman-adv/routing.c
6379 +++ b/net/batman-adv/routing.c
6380 @@ -618,6 +618,8 @@ int recv_tt_query(struct sk_buff *skb, struct hard_iface *recv_if)
6381 * changes */
6382 if (skb_linearize(skb) < 0)
6383 goto out;
6384 + /* skb_linearize() possibly changed skb->data */
6385 + tt_query = (struct tt_query_packet *)skb->data;
6386
6387 tt_len = tt_query->tt_data * sizeof(struct tt_change);
6388
6389 diff --git a/net/batman-adv/translation-table.c b/net/batman-adv/translation-table.c
6390 index 1f86921..f014bf8 100644
6391 --- a/net/batman-adv/translation-table.c
6392 +++ b/net/batman-adv/translation-table.c
6393 @@ -1803,10 +1803,10 @@ bool is_ap_isolated(struct bat_priv *bat_priv, uint8_t *src, uint8_t *dst)
6394 {
6395 struct tt_local_entry *tt_local_entry = NULL;
6396 struct tt_global_entry *tt_global_entry = NULL;
6397 - bool ret = true;
6398 + bool ret = false;
6399
6400 if (!atomic_read(&bat_priv->ap_isolation))
6401 - return false;
6402 + goto out;
6403
6404 tt_local_entry = tt_local_hash_find(bat_priv, dst);
6405 if (!tt_local_entry)
6406 @@ -1816,10 +1816,10 @@ bool is_ap_isolated(struct bat_priv *bat_priv, uint8_t *src, uint8_t *dst)
6407 if (!tt_global_entry)
6408 goto out;
6409
6410 - if (_is_ap_isolated(tt_local_entry, tt_global_entry))
6411 + if (!_is_ap_isolated(tt_local_entry, tt_global_entry))
6412 goto out;
6413
6414 - ret = false;
6415 + ret = true;
6416
6417 out:
6418 if (tt_global_entry)
6419 diff --git a/net/bridge/br_if.c b/net/bridge/br_if.c
6420 index 0a942fb..e1144e1 100644
6421 --- a/net/bridge/br_if.c
6422 +++ b/net/bridge/br_if.c
6423 @@ -240,6 +240,7 @@ int br_add_bridge(struct net *net, const char *name)
6424 return -ENOMEM;
6425
6426 dev_net_set(dev, net);
6427 + dev->rtnl_link_ops = &br_link_ops;
6428
6429 res = register_netdev(dev);
6430 if (res)
6431 diff --git a/net/bridge/br_netlink.c b/net/bridge/br_netlink.c
6432 index a1daf82..cbf9ccd 100644
6433 --- a/net/bridge/br_netlink.c
6434 +++ b/net/bridge/br_netlink.c
6435 @@ -211,7 +211,7 @@ static int br_validate(struct nlattr *tb[], struct nlattr *data[])
6436 return 0;
6437 }
6438
6439 -static struct rtnl_link_ops br_link_ops __read_mostly = {
6440 +struct rtnl_link_ops br_link_ops __read_mostly = {
6441 .kind = "bridge",
6442 .priv_size = sizeof(struct net_bridge),
6443 .setup = br_dev_setup,
6444 diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h
6445 index e1d8822..51e8826 100644
6446 --- a/net/bridge/br_private.h
6447 +++ b/net/bridge/br_private.h
6448 @@ -538,6 +538,7 @@ extern int (*br_fdb_test_addr_hook)(struct net_device *dev, unsigned char *addr)
6449 #endif
6450
6451 /* br_netlink.c */
6452 +extern struct rtnl_link_ops br_link_ops;
6453 extern int br_netlink_init(void);
6454 extern void br_netlink_fini(void);
6455 extern void br_ifinfo_notify(int event, struct net_bridge_port *port);
6456 diff --git a/net/can/raw.c b/net/can/raw.c
6457 index cde1b4a..46cca3a 100644
6458 --- a/net/can/raw.c
6459 +++ b/net/can/raw.c
6460 @@ -681,9 +681,6 @@ static int raw_sendmsg(struct kiocb *iocb, struct socket *sock,
6461 if (err < 0)
6462 goto free_skb;
6463
6464 - /* to be able to check the received tx sock reference in raw_rcv() */
6465 - skb_shinfo(skb)->tx_flags |= SKBTX_DRV_NEEDS_SK_REF;
6466 -
6467 skb->dev = dev;
6468 skb->sk = sk;
6469
6470 diff --git a/net/core/dev.c b/net/core/dev.c
6471 index 99e1d75..533c586 100644
6472 --- a/net/core/dev.c
6473 +++ b/net/core/dev.c
6474 @@ -2091,25 +2091,6 @@ static int dev_gso_segment(struct sk_buff *skb, netdev_features_t features)
6475 return 0;
6476 }
6477
6478 -/*
6479 - * Try to orphan skb early, right before transmission by the device.
6480 - * We cannot orphan skb if tx timestamp is requested or the sk-reference
6481 - * is needed on driver level for other reasons, e.g. see net/can/raw.c
6482 - */
6483 -static inline void skb_orphan_try(struct sk_buff *skb)
6484 -{
6485 - struct sock *sk = skb->sk;
6486 -
6487 - if (sk && !skb_shinfo(skb)->tx_flags) {
6488 - /* skb_tx_hash() wont be able to get sk.
6489 - * We copy sk_hash into skb->rxhash
6490 - */
6491 - if (!skb->rxhash)
6492 - skb->rxhash = sk->sk_hash;
6493 - skb_orphan(skb);
6494 - }
6495 -}
6496 -
6497 static bool can_checksum_protocol(netdev_features_t features, __be16 protocol)
6498 {
6499 return ((features & NETIF_F_GEN_CSUM) ||
6500 @@ -2195,8 +2176,6 @@ int dev_hard_start_xmit(struct sk_buff *skb, struct net_device *dev,
6501 if (!list_empty(&ptype_all))
6502 dev_queue_xmit_nit(skb, dev);
6503
6504 - skb_orphan_try(skb);
6505 -
6506 features = netif_skb_features(skb);
6507
6508 if (vlan_tx_tag_present(skb) &&
6509 @@ -2306,7 +2285,7 @@ u16 __skb_tx_hash(const struct net_device *dev, const struct sk_buff *skb,
6510 if (skb->sk && skb->sk->sk_hash)
6511 hash = skb->sk->sk_hash;
6512 else
6513 - hash = (__force u16) skb->protocol ^ skb->rxhash;
6514 + hash = (__force u16) skb->protocol;
6515 hash = jhash_1word(hash, hashrnd);
6516
6517 return (u16) (((u64) hash * qcount) >> 32) + qoffset;
6518 diff --git a/net/core/drop_monitor.c b/net/core/drop_monitor.c
6519 index a7cad74..b856f87 100644
6520 --- a/net/core/drop_monitor.c
6521 +++ b/net/core/drop_monitor.c
6522 @@ -33,9 +33,6 @@
6523 #define TRACE_ON 1
6524 #define TRACE_OFF 0
6525
6526 -static void send_dm_alert(struct work_struct *unused);
6527 -
6528 -
6529 /*
6530 * Globals, our netlink socket pointer
6531 * and the work handle that will send up
6532 @@ -45,11 +42,10 @@ static int trace_state = TRACE_OFF;
6533 static DEFINE_MUTEX(trace_state_mutex);
6534
6535 struct per_cpu_dm_data {
6536 - struct work_struct dm_alert_work;
6537 - struct sk_buff __rcu *skb;
6538 - atomic_t dm_hit_count;
6539 - struct timer_list send_timer;
6540 - int cpu;
6541 + spinlock_t lock;
6542 + struct sk_buff *skb;
6543 + struct work_struct dm_alert_work;
6544 + struct timer_list send_timer;
6545 };
6546
6547 struct dm_hw_stat_delta {
6548 @@ -75,13 +71,13 @@ static int dm_delay = 1;
6549 static unsigned long dm_hw_check_delta = 2*HZ;
6550 static LIST_HEAD(hw_stats_list);
6551
6552 -static void reset_per_cpu_data(struct per_cpu_dm_data *data)
6553 +static struct sk_buff *reset_per_cpu_data(struct per_cpu_dm_data *data)
6554 {
6555 size_t al;
6556 struct net_dm_alert_msg *msg;
6557 struct nlattr *nla;
6558 struct sk_buff *skb;
6559 - struct sk_buff *oskb = rcu_dereference_protected(data->skb, 1);
6560 + unsigned long flags;
6561
6562 al = sizeof(struct net_dm_alert_msg);
6563 al += dm_hit_limit * sizeof(struct net_dm_drop_point);
6564 @@ -96,65 +92,40 @@ static void reset_per_cpu_data(struct per_cpu_dm_data *data)
6565 sizeof(struct net_dm_alert_msg));
6566 msg = nla_data(nla);
6567 memset(msg, 0, al);
6568 - } else
6569 - schedule_work_on(data->cpu, &data->dm_alert_work);
6570 -
6571 - /*
6572 - * Don't need to lock this, since we are guaranteed to only
6573 - * run this on a single cpu at a time.
6574 - * Note also that we only update data->skb if the old and new skb
6575 - * pointers don't match. This ensures that we don't continually call
6576 - * synchornize_rcu if we repeatedly fail to alloc a new netlink message.
6577 - */
6578 - if (skb != oskb) {
6579 - rcu_assign_pointer(data->skb, skb);
6580 -
6581 - synchronize_rcu();
6582 -
6583 - atomic_set(&data->dm_hit_count, dm_hit_limit);
6584 + } else {
6585 + mod_timer(&data->send_timer, jiffies + HZ / 10);
6586 }
6587
6588 + spin_lock_irqsave(&data->lock, flags);
6589 + swap(data->skb, skb);
6590 + spin_unlock_irqrestore(&data->lock, flags);
6591 +
6592 + return skb;
6593 }
6594
6595 -static void send_dm_alert(struct work_struct *unused)
6596 +static void send_dm_alert(struct work_struct *work)
6597 {
6598 struct sk_buff *skb;
6599 - struct per_cpu_dm_data *data = &get_cpu_var(dm_cpu_data);
6600 + struct per_cpu_dm_data *data;
6601
6602 - WARN_ON_ONCE(data->cpu != smp_processor_id());
6603 + data = container_of(work, struct per_cpu_dm_data, dm_alert_work);
6604
6605 - /*
6606 - * Grab the skb we're about to send
6607 - */
6608 - skb = rcu_dereference_protected(data->skb, 1);
6609 + skb = reset_per_cpu_data(data);
6610
6611 - /*
6612 - * Replace it with a new one
6613 - */
6614 - reset_per_cpu_data(data);
6615 -
6616 - /*
6617 - * Ship it!
6618 - */
6619 if (skb)
6620 genlmsg_multicast(skb, 0, NET_DM_GRP_ALERT, GFP_KERNEL);
6621 -
6622 - put_cpu_var(dm_cpu_data);
6623 }
6624
6625 /*
6626 * This is the timer function to delay the sending of an alert
6627 * in the event that more drops will arrive during the
6628 - * hysteresis period. Note that it operates under the timer interrupt
6629 - * so we don't need to disable preemption here
6630 + * hysteresis period.
6631 */
6632 -static void sched_send_work(unsigned long unused)
6633 +static void sched_send_work(unsigned long _data)
6634 {
6635 - struct per_cpu_dm_data *data = &get_cpu_var(dm_cpu_data);
6636 + struct per_cpu_dm_data *data = (struct per_cpu_dm_data *)_data;
6637
6638 - schedule_work_on(smp_processor_id(), &data->dm_alert_work);
6639 -
6640 - put_cpu_var(dm_cpu_data);
6641 + schedule_work(&data->dm_alert_work);
6642 }
6643
6644 static void trace_drop_common(struct sk_buff *skb, void *location)
6645 @@ -164,33 +135,28 @@ static void trace_drop_common(struct sk_buff *skb, void *location)
6646 struct nlattr *nla;
6647 int i;
6648 struct sk_buff *dskb;
6649 - struct per_cpu_dm_data *data = &get_cpu_var(dm_cpu_data);
6650 -
6651 + struct per_cpu_dm_data *data;
6652 + unsigned long flags;
6653
6654 - rcu_read_lock();
6655 - dskb = rcu_dereference(data->skb);
6656 + local_irq_save(flags);
6657 + data = &__get_cpu_var(dm_cpu_data);
6658 + spin_lock(&data->lock);
6659 + dskb = data->skb;
6660
6661 if (!dskb)
6662 goto out;
6663
6664 - if (!atomic_add_unless(&data->dm_hit_count, -1, 0)) {
6665 - /*
6666 - * we're already at zero, discard this hit
6667 - */
6668 - goto out;
6669 - }
6670 -
6671 nlh = (struct nlmsghdr *)dskb->data;
6672 nla = genlmsg_data(nlmsg_data(nlh));
6673 msg = nla_data(nla);
6674 for (i = 0; i < msg->entries; i++) {
6675 if (!memcmp(&location, msg->points[i].pc, sizeof(void *))) {
6676 msg->points[i].count++;
6677 - atomic_inc(&data->dm_hit_count);
6678 goto out;
6679 }
6680 }
6681 -
6682 + if (msg->entries == dm_hit_limit)
6683 + goto out;
6684 /*
6685 * We need to create a new entry
6686 */
6687 @@ -202,13 +168,11 @@ static void trace_drop_common(struct sk_buff *skb, void *location)
6688
6689 if (!timer_pending(&data->send_timer)) {
6690 data->send_timer.expires = jiffies + dm_delay * HZ;
6691 - add_timer_on(&data->send_timer, smp_processor_id());
6692 + add_timer(&data->send_timer);
6693 }
6694
6695 out:
6696 - rcu_read_unlock();
6697 - put_cpu_var(dm_cpu_data);
6698 - return;
6699 + spin_unlock_irqrestore(&data->lock, flags);
6700 }
6701
6702 static void trace_kfree_skb_hit(void *ignore, struct sk_buff *skb, void *location)
6703 @@ -406,11 +370,11 @@ static int __init init_net_drop_monitor(void)
6704
6705 for_each_present_cpu(cpu) {
6706 data = &per_cpu(dm_cpu_data, cpu);
6707 - data->cpu = cpu;
6708 INIT_WORK(&data->dm_alert_work, send_dm_alert);
6709 init_timer(&data->send_timer);
6710 - data->send_timer.data = cpu;
6711 + data->send_timer.data = (unsigned long)data;
6712 data->send_timer.function = sched_send_work;
6713 + spin_lock_init(&data->lock);
6714 reset_per_cpu_data(data);
6715 }
6716
6717 diff --git a/net/core/neighbour.c b/net/core/neighbour.c
6718 index 0a68045..73b9035 100644
6719 --- a/net/core/neighbour.c
6720 +++ b/net/core/neighbour.c
6721 @@ -2214,9 +2214,7 @@ static int neigh_dump_table(struct neigh_table *tbl, struct sk_buff *skb,
6722 rcu_read_lock_bh();
6723 nht = rcu_dereference_bh(tbl->nht);
6724
6725 - for (h = 0; h < (1 << nht->hash_shift); h++) {
6726 - if (h < s_h)
6727 - continue;
6728 + for (h = s_h; h < (1 << nht->hash_shift); h++) {
6729 if (h > s_h)
6730 s_idx = 0;
6731 for (n = rcu_dereference_bh(nht->hash_buckets[h]), idx = 0;
6732 @@ -2255,9 +2253,7 @@ static int pneigh_dump_table(struct neigh_table *tbl, struct sk_buff *skb,
6733
6734 read_lock_bh(&tbl->lock);
6735
6736 - for (h = 0; h <= PNEIGH_HASHMASK; h++) {
6737 - if (h < s_h)
6738 - continue;
6739 + for (h = s_h; h <= PNEIGH_HASHMASK; h++) {
6740 if (h > s_h)
6741 s_idx = 0;
6742 for (n = tbl->phash_buckets[h], idx = 0; n; n = n->next) {
6743 @@ -2292,7 +2288,7 @@ static int neigh_dump_info(struct sk_buff *skb, struct netlink_callback *cb)
6744 struct neigh_table *tbl;
6745 int t, family, s_t;
6746 int proxy = 0;
6747 - int err = 0;
6748 + int err;
6749
6750 read_lock(&neigh_tbl_lock);
6751 family = ((struct rtgenmsg *) nlmsg_data(cb->nlh))->rtgen_family;
6752 @@ -2306,7 +2302,7 @@ static int neigh_dump_info(struct sk_buff *skb, struct netlink_callback *cb)
6753
6754 s_t = cb->args[0];
6755
6756 - for (tbl = neigh_tables, t = 0; tbl && (err >= 0);
6757 + for (tbl = neigh_tables, t = 0; tbl;
6758 tbl = tbl->next, t++) {
6759 if (t < s_t || (family && tbl->family != family))
6760 continue;
6761 @@ -2317,6 +2313,8 @@ static int neigh_dump_info(struct sk_buff *skb, struct netlink_callback *cb)
6762 err = pneigh_dump_table(tbl, skb, cb);
6763 else
6764 err = neigh_dump_table(tbl, skb, cb);
6765 + if (err < 0)
6766 + break;
6767 }
6768 read_unlock(&neigh_tbl_lock);
6769
6770 diff --git a/net/core/netpoll.c b/net/core/netpoll.c
6771 index 3d84fb9..f9f40b9 100644
6772 --- a/net/core/netpoll.c
6773 +++ b/net/core/netpoll.c
6774 @@ -362,22 +362,23 @@ EXPORT_SYMBOL(netpoll_send_skb_on_dev);
6775
6776 void netpoll_send_udp(struct netpoll *np, const char *msg, int len)
6777 {
6778 - int total_len, eth_len, ip_len, udp_len;
6779 + int total_len, ip_len, udp_len;
6780 struct sk_buff *skb;
6781 struct udphdr *udph;
6782 struct iphdr *iph;
6783 struct ethhdr *eth;
6784
6785 udp_len = len + sizeof(*udph);
6786 - ip_len = eth_len = udp_len + sizeof(*iph);
6787 - total_len = eth_len + ETH_HLEN + NET_IP_ALIGN;
6788 + ip_len = udp_len + sizeof(*iph);
6789 + total_len = ip_len + LL_RESERVED_SPACE(np->dev);
6790
6791 - skb = find_skb(np, total_len, total_len - len);
6792 + skb = find_skb(np, total_len + np->dev->needed_tailroom,
6793 + total_len - len);
6794 if (!skb)
6795 return;
6796
6797 skb_copy_to_linear_data(skb, msg, len);
6798 - skb->len += len;
6799 + skb_put(skb, len);
6800
6801 skb_push(skb, sizeof(*udph));
6802 skb_reset_transport_header(skb);
6803 diff --git a/net/core/skbuff.c b/net/core/skbuff.c
6804 index e598400..e99aedd 100644
6805 --- a/net/core/skbuff.c
6806 +++ b/net/core/skbuff.c
6807 @@ -1712,6 +1712,7 @@ int skb_splice_bits(struct sk_buff *skb, unsigned int offset,
6808 struct splice_pipe_desc spd = {
6809 .pages = pages,
6810 .partial = partial,
6811 + .nr_pages_max = MAX_SKB_FRAGS,
6812 .flags = flags,
6813 .ops = &sock_pipe_buf_ops,
6814 .spd_release = sock_spd_release,
6815 @@ -1758,7 +1759,7 @@ done:
6816 lock_sock(sk);
6817 }
6818
6819 - splice_shrink_spd(pipe, &spd);
6820 + splice_shrink_spd(&spd);
6821 return ret;
6822 }
6823
6824 diff --git a/net/core/sock.c b/net/core/sock.c
6825 index b2e14c0..0f8402e 100644
6826 --- a/net/core/sock.c
6827 +++ b/net/core/sock.c
6828 @@ -1600,6 +1600,11 @@ struct sk_buff *sock_alloc_send_pskb(struct sock *sk, unsigned long header_len,
6829 gfp_t gfp_mask;
6830 long timeo;
6831 int err;
6832 + int npages = (data_len + (PAGE_SIZE - 1)) >> PAGE_SHIFT;
6833 +
6834 + err = -EMSGSIZE;
6835 + if (npages > MAX_SKB_FRAGS)
6836 + goto failure;
6837
6838 gfp_mask = sk->sk_allocation;
6839 if (gfp_mask & __GFP_WAIT)
6840 @@ -1618,14 +1623,12 @@ struct sk_buff *sock_alloc_send_pskb(struct sock *sk, unsigned long header_len,
6841 if (atomic_read(&sk->sk_wmem_alloc) < sk->sk_sndbuf) {
6842 skb = alloc_skb(header_len, gfp_mask);
6843 if (skb) {
6844 - int npages;
6845 int i;
6846
6847 /* No pages, we're done... */
6848 if (!data_len)
6849 break;
6850
6851 - npages = (data_len + (PAGE_SIZE - 1)) >> PAGE_SHIFT;
6852 skb->truesize += data_len;
6853 skb_shinfo(skb)->nr_frags = npages;
6854 for (i = 0; i < npages; i++) {
6855 diff --git a/net/ipv4/inetpeer.c b/net/ipv4/inetpeer.c
6856 index d4d61b6..dfba343 100644
6857 --- a/net/ipv4/inetpeer.c
6858 +++ b/net/ipv4/inetpeer.c
6859 @@ -560,6 +560,17 @@ bool inet_peer_xrlim_allow(struct inet_peer *peer, int timeout)
6860 }
6861 EXPORT_SYMBOL(inet_peer_xrlim_allow);
6862
6863 +static void inetpeer_inval_rcu(struct rcu_head *head)
6864 +{
6865 + struct inet_peer *p = container_of(head, struct inet_peer, gc_rcu);
6866 +
6867 + spin_lock_bh(&gc_lock);
6868 + list_add_tail(&p->gc_list, &gc_list);
6869 + spin_unlock_bh(&gc_lock);
6870 +
6871 + schedule_delayed_work(&gc_work, gc_delay);
6872 +}
6873 +
6874 void inetpeer_invalidate_tree(int family)
6875 {
6876 struct inet_peer *old, *new, *prev;
6877 @@ -576,10 +587,7 @@ void inetpeer_invalidate_tree(int family)
6878 prev = cmpxchg(&base->root, old, new);
6879 if (prev == old) {
6880 base->total = 0;
6881 - spin_lock(&gc_lock);
6882 - list_add_tail(&prev->gc_list, &gc_list);
6883 - spin_unlock(&gc_lock);
6884 - schedule_delayed_work(&gc_work, gc_delay);
6885 + call_rcu(&prev->gc_rcu, inetpeer_inval_rcu);
6886 }
6887
6888 out:
6889 diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c
6890 index 9371743..92bb9cb 100644
6891 --- a/net/ipv6/ip6_fib.c
6892 +++ b/net/ipv6/ip6_fib.c
6893 @@ -1560,7 +1560,7 @@ static int fib6_age(struct rt6_info *rt, void *arg)
6894 neigh_flags = neigh->flags;
6895 neigh_release(neigh);
6896 }
6897 - if (neigh_flags & NTF_ROUTER) {
6898 + if (!(neigh_flags & NTF_ROUTER)) {
6899 RT6_TRACE("purging route %p via non-router but gateway\n",
6900 rt);
6901 return -1;
6902 diff --git a/net/ipv6/route.c b/net/ipv6/route.c
6903 index bc4888d..c4920ca 100644
6904 --- a/net/ipv6/route.c
6905 +++ b/net/ipv6/route.c
6906 @@ -2953,10 +2953,6 @@ static int __net_init ip6_route_net_init(struct net *net)
6907 net->ipv6.sysctl.ip6_rt_mtu_expires = 10*60*HZ;
6908 net->ipv6.sysctl.ip6_rt_min_advmss = IPV6_MIN_MTU - 20 - 40;
6909
6910 -#ifdef CONFIG_PROC_FS
6911 - proc_net_fops_create(net, "ipv6_route", 0, &ipv6_route_proc_fops);
6912 - proc_net_fops_create(net, "rt6_stats", S_IRUGO, &rt6_stats_seq_fops);
6913 -#endif
6914 net->ipv6.ip6_rt_gc_expire = 30*HZ;
6915
6916 ret = 0;
6917 @@ -2977,10 +2973,6 @@ out_ip6_dst_ops:
6918
6919 static void __net_exit ip6_route_net_exit(struct net *net)
6920 {
6921 -#ifdef CONFIG_PROC_FS
6922 - proc_net_remove(net, "ipv6_route");
6923 - proc_net_remove(net, "rt6_stats");
6924 -#endif
6925 kfree(net->ipv6.ip6_null_entry);
6926 #ifdef CONFIG_IPV6_MULTIPLE_TABLES
6927 kfree(net->ipv6.ip6_prohibit_entry);
6928 @@ -2989,11 +2981,33 @@ static void __net_exit ip6_route_net_exit(struct net *net)
6929 dst_entries_destroy(&net->ipv6.ip6_dst_ops);
6930 }
6931
6932 +static int __net_init ip6_route_net_init_late(struct net *net)
6933 +{
6934 +#ifdef CONFIG_PROC_FS
6935 + proc_net_fops_create(net, "ipv6_route", 0, &ipv6_route_proc_fops);
6936 + proc_net_fops_create(net, "rt6_stats", S_IRUGO, &rt6_stats_seq_fops);
6937 +#endif
6938 + return 0;
6939 +}
6940 +
6941 +static void __net_exit ip6_route_net_exit_late(struct net *net)
6942 +{
6943 +#ifdef CONFIG_PROC_FS
6944 + proc_net_remove(net, "ipv6_route");
6945 + proc_net_remove(net, "rt6_stats");
6946 +#endif
6947 +}
6948 +
6949 static struct pernet_operations ip6_route_net_ops = {
6950 .init = ip6_route_net_init,
6951 .exit = ip6_route_net_exit,
6952 };
6953
6954 +static struct pernet_operations ip6_route_net_late_ops = {
6955 + .init = ip6_route_net_init_late,
6956 + .exit = ip6_route_net_exit_late,
6957 +};
6958 +
6959 static struct notifier_block ip6_route_dev_notifier = {
6960 .notifier_call = ip6_route_dev_notify,
6961 .priority = 0,
6962 @@ -3043,19 +3057,25 @@ int __init ip6_route_init(void)
6963 if (ret)
6964 goto xfrm6_init;
6965
6966 + ret = register_pernet_subsys(&ip6_route_net_late_ops);
6967 + if (ret)
6968 + goto fib6_rules_init;
6969 +
6970 ret = -ENOBUFS;
6971 if (__rtnl_register(PF_INET6, RTM_NEWROUTE, inet6_rtm_newroute, NULL, NULL) ||
6972 __rtnl_register(PF_INET6, RTM_DELROUTE, inet6_rtm_delroute, NULL, NULL) ||
6973 __rtnl_register(PF_INET6, RTM_GETROUTE, inet6_rtm_getroute, NULL, NULL))
6974 - goto fib6_rules_init;
6975 + goto out_register_late_subsys;
6976
6977 ret = register_netdevice_notifier(&ip6_route_dev_notifier);
6978 if (ret)
6979 - goto fib6_rules_init;
6980 + goto out_register_late_subsys;
6981
6982 out:
6983 return ret;
6984
6985 +out_register_late_subsys:
6986 + unregister_pernet_subsys(&ip6_route_net_late_ops);
6987 fib6_rules_init:
6988 fib6_rules_cleanup();
6989 xfrm6_init:
6990 @@ -3074,6 +3094,7 @@ out_kmem_cache:
6991 void ip6_route_cleanup(void)
6992 {
6993 unregister_netdevice_notifier(&ip6_route_dev_notifier);
6994 + unregister_pernet_subsys(&ip6_route_net_late_ops);
6995 fib6_rules_cleanup();
6996 xfrm6_fini();
6997 fib6_gc_cleanup();
6998 diff --git a/net/iucv/af_iucv.c b/net/iucv/af_iucv.c
6999 index 07d7d55..cd6f7a9 100644
7000 --- a/net/iucv/af_iucv.c
7001 +++ b/net/iucv/af_iucv.c
7002 @@ -372,7 +372,6 @@ static int afiucv_hs_send(struct iucv_message *imsg, struct sock *sock,
7003 skb_trim(skb, skb->dev->mtu);
7004 }
7005 skb->protocol = ETH_P_AF_IUCV;
7006 - skb_shinfo(skb)->tx_flags |= SKBTX_DRV_NEEDS_SK_REF;
7007 nskb = skb_clone(skb, GFP_ATOMIC);
7008 if (!nskb)
7009 return -ENOMEM;
7010 diff --git a/net/l2tp/l2tp_eth.c b/net/l2tp/l2tp_eth.c
7011 index 63fe5f3..7446038 100644
7012 --- a/net/l2tp/l2tp_eth.c
7013 +++ b/net/l2tp/l2tp_eth.c
7014 @@ -167,6 +167,7 @@ static void l2tp_eth_delete(struct l2tp_session *session)
7015 if (dev) {
7016 unregister_netdev(dev);
7017 spriv->dev = NULL;
7018 + module_put(THIS_MODULE);
7019 }
7020 }
7021 }
7022 @@ -254,6 +255,7 @@ static int l2tp_eth_create(struct net *net, u32 tunnel_id, u32 session_id, u32 p
7023 if (rc < 0)
7024 goto out_del_dev;
7025
7026 + __module_get(THIS_MODULE);
7027 /* Must be done after register_netdev() */
7028 strlcpy(session->ifname, dev->name, IFNAMSIZ);
7029
7030 diff --git a/net/l2tp/l2tp_ip.c b/net/l2tp/l2tp_ip.c
7031 index cc8ad7b..b1d4370 100644
7032 --- a/net/l2tp/l2tp_ip.c
7033 +++ b/net/l2tp/l2tp_ip.c
7034 @@ -516,10 +516,12 @@ static int l2tp_ip_sendmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *m
7035 sk->sk_bound_dev_if);
7036 if (IS_ERR(rt))
7037 goto no_route;
7038 - if (connected)
7039 + if (connected) {
7040 sk_setup_caps(sk, &rt->dst);
7041 - else
7042 - dst_release(&rt->dst); /* safe since we hold rcu_read_lock */
7043 + } else {
7044 + skb_dst_set(skb, &rt->dst);
7045 + goto xmit;
7046 + }
7047 }
7048
7049 /* We dont need to clone dst here, it is guaranteed to not disappear.
7050 @@ -527,6 +529,7 @@ static int l2tp_ip_sendmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *m
7051 */
7052 skb_dst_set_noref(skb, &rt->dst);
7053
7054 +xmit:
7055 /* Queue the packet to IP for output */
7056 rc = ip_queue_xmit(skb, &inet->cork.fl);
7057 rcu_read_unlock();
7058 diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
7059 index 20c680b..1197e8d 100644
7060 --- a/net/mac80211/mlme.c
7061 +++ b/net/mac80211/mlme.c
7062 @@ -187,7 +187,7 @@ static u32 ieee80211_enable_ht(struct ieee80211_sub_if_data *sdata,
7063 u32 changed = 0;
7064 int hti_cfreq;
7065 u16 ht_opmode;
7066 - bool enable_ht = true;
7067 + bool enable_ht = true, queues_stopped = false;
7068 enum nl80211_channel_type prev_chantype;
7069 enum nl80211_channel_type rx_channel_type = NL80211_CHAN_NO_HT;
7070 enum nl80211_channel_type tx_channel_type;
7071 @@ -254,6 +254,7 @@ static u32 ieee80211_enable_ht(struct ieee80211_sub_if_data *sdata,
7072 */
7073 ieee80211_stop_queues_by_reason(&sdata->local->hw,
7074 IEEE80211_QUEUE_STOP_REASON_CHTYPE_CHANGE);
7075 + queues_stopped = true;
7076
7077 /* flush out all packets */
7078 synchronize_net();
7079 @@ -272,12 +273,12 @@ static u32 ieee80211_enable_ht(struct ieee80211_sub_if_data *sdata,
7080 IEEE80211_RC_HT_CHANGED,
7081 tx_channel_type);
7082 rcu_read_unlock();
7083 -
7084 - if (beacon_htcap_ie)
7085 - ieee80211_wake_queues_by_reason(&sdata->local->hw,
7086 - IEEE80211_QUEUE_STOP_REASON_CHTYPE_CHANGE);
7087 }
7088
7089 + if (queues_stopped)
7090 + ieee80211_wake_queues_by_reason(&sdata->local->hw,
7091 + IEEE80211_QUEUE_STOP_REASON_CHTYPE_CHANGE);
7092 +
7093 ht_opmode = le16_to_cpu(hti->operation_mode);
7094
7095 /* if bss configuration changed store the new one */
7096 @@ -1375,7 +1376,6 @@ static void ieee80211_set_disassoc(struct ieee80211_sub_if_data *sdata,
7097 struct ieee80211_local *local = sdata->local;
7098 struct sta_info *sta;
7099 u32 changed = 0;
7100 - u8 bssid[ETH_ALEN];
7101
7102 ASSERT_MGD_MTX(ifmgd);
7103
7104 @@ -1385,10 +1385,7 @@ static void ieee80211_set_disassoc(struct ieee80211_sub_if_data *sdata,
7105 if (WARN_ON(!ifmgd->associated))
7106 return;
7107
7108 - memcpy(bssid, ifmgd->associated->bssid, ETH_ALEN);
7109 -
7110 ifmgd->associated = NULL;
7111 - memset(ifmgd->bssid, 0, ETH_ALEN);
7112
7113 /*
7114 * we need to commit the associated = NULL change because the
7115 @@ -1408,7 +1405,7 @@ static void ieee80211_set_disassoc(struct ieee80211_sub_if_data *sdata,
7116 netif_carrier_off(sdata->dev);
7117
7118 mutex_lock(&local->sta_mtx);
7119 - sta = sta_info_get(sdata, bssid);
7120 + sta = sta_info_get(sdata, ifmgd->bssid);
7121 if (sta) {
7122 set_sta_flag(sta, WLAN_STA_BLOCK_BA);
7123 ieee80211_sta_tear_down_BA_sessions(sta, tx);
7124 @@ -1417,13 +1414,16 @@ static void ieee80211_set_disassoc(struct ieee80211_sub_if_data *sdata,
7125
7126 /* deauthenticate/disassociate now */
7127 if (tx || frame_buf)
7128 - ieee80211_send_deauth_disassoc(sdata, bssid, stype, reason,
7129 - tx, frame_buf);
7130 + ieee80211_send_deauth_disassoc(sdata, ifmgd->bssid, stype,
7131 + reason, tx, frame_buf);
7132
7133 /* flush out frame */
7134 if (tx)
7135 drv_flush(local, false);
7136
7137 + /* clear bssid only after building the needed mgmt frames */
7138 + memset(ifmgd->bssid, 0, ETH_ALEN);
7139 +
7140 /* remove AP and TDLS peers */
7141 sta_info_flush(local, sdata);
7142
7143 diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
7144 index d64e285..c9b508e 100644
7145 --- a/net/mac80211/rx.c
7146 +++ b/net/mac80211/rx.c
7147 @@ -2459,7 +2459,7 @@ ieee80211_rx_h_action_return(struct ieee80211_rx_data *rx)
7148 * frames that we didn't handle, including returning unknown
7149 * ones. For all other modes we will return them to the sender,
7150 * setting the 0x80 bit in the action category, as required by
7151 - * 802.11-2007 7.3.1.11.
7152 + * 802.11-2012 9.24.4.
7153 * Newer versions of hostapd shall also use the management frame
7154 * registration mechanisms, but older ones still use cooked
7155 * monitor interfaces so push all frames there.
7156 @@ -2469,6 +2469,9 @@ ieee80211_rx_h_action_return(struct ieee80211_rx_data *rx)
7157 sdata->vif.type == NL80211_IFTYPE_AP_VLAN))
7158 return RX_DROP_MONITOR;
7159
7160 + if (is_multicast_ether_addr(mgmt->da))
7161 + return RX_DROP_MONITOR;
7162 +
7163 /* do not return rejected action frames */
7164 if (mgmt->u.action.category & 0x80)
7165 return RX_DROP_UNUSABLE;
7166 diff --git a/net/nfc/nci/ntf.c b/net/nfc/nci/ntf.c
7167 index 2e3dee4..e460cf1 100644
7168 --- a/net/nfc/nci/ntf.c
7169 +++ b/net/nfc/nci/ntf.c
7170 @@ -106,7 +106,7 @@ static __u8 *nci_extract_rf_params_nfca_passive_poll(struct nci_dev *ndev,
7171 nfca_poll->sens_res = __le16_to_cpu(*((__u16 *)data));
7172 data += 2;
7173
7174 - nfca_poll->nfcid1_len = *data++;
7175 + nfca_poll->nfcid1_len = min_t(__u8, *data++, NFC_NFCID1_MAXSIZE);
7176
7177 pr_debug("sens_res 0x%x, nfcid1_len %d\n",
7178 nfca_poll->sens_res, nfca_poll->nfcid1_len);
7179 @@ -130,7 +130,7 @@ static __u8 *nci_extract_rf_params_nfcb_passive_poll(struct nci_dev *ndev,
7180 struct rf_tech_specific_params_nfcb_poll *nfcb_poll,
7181 __u8 *data)
7182 {
7183 - nfcb_poll->sensb_res_len = *data++;
7184 + nfcb_poll->sensb_res_len = min_t(__u8, *data++, NFC_SENSB_RES_MAXSIZE);
7185
7186 pr_debug("sensb_res_len %d\n", nfcb_poll->sensb_res_len);
7187
7188 @@ -145,7 +145,7 @@ static __u8 *nci_extract_rf_params_nfcf_passive_poll(struct nci_dev *ndev,
7189 __u8 *data)
7190 {
7191 nfcf_poll->bit_rate = *data++;
7192 - nfcf_poll->sensf_res_len = *data++;
7193 + nfcf_poll->sensf_res_len = min_t(__u8, *data++, NFC_SENSF_RES_MAXSIZE);
7194
7195 pr_debug("bit_rate %d, sensf_res_len %d\n",
7196 nfcf_poll->bit_rate, nfcf_poll->sensf_res_len);
7197 @@ -331,7 +331,7 @@ static int nci_extract_activation_params_iso_dep(struct nci_dev *ndev,
7198 switch (ntf->activation_rf_tech_and_mode) {
7199 case NCI_NFC_A_PASSIVE_POLL_MODE:
7200 nfca_poll = &ntf->activation_params.nfca_poll_iso_dep;
7201 - nfca_poll->rats_res_len = *data++;
7202 + nfca_poll->rats_res_len = min_t(__u8, *data++, 20);
7203 pr_debug("rats_res_len %d\n", nfca_poll->rats_res_len);
7204 if (nfca_poll->rats_res_len > 0) {
7205 memcpy(nfca_poll->rats_res,
7206 @@ -341,7 +341,7 @@ static int nci_extract_activation_params_iso_dep(struct nci_dev *ndev,
7207
7208 case NCI_NFC_B_PASSIVE_POLL_MODE:
7209 nfcb_poll = &ntf->activation_params.nfcb_poll_iso_dep;
7210 - nfcb_poll->attrib_res_len = *data++;
7211 + nfcb_poll->attrib_res_len = min_t(__u8, *data++, 50);
7212 pr_debug("attrib_res_len %d\n", nfcb_poll->attrib_res_len);
7213 if (nfcb_poll->attrib_res_len > 0) {
7214 memcpy(nfcb_poll->attrib_res,
7215 diff --git a/net/nfc/rawsock.c b/net/nfc/rawsock.c
7216 index 5a839ce..e879dce 100644
7217 --- a/net/nfc/rawsock.c
7218 +++ b/net/nfc/rawsock.c
7219 @@ -54,7 +54,10 @@ static int rawsock_release(struct socket *sock)
7220 {
7221 struct sock *sk = sock->sk;
7222
7223 - pr_debug("sock=%p\n", sock);
7224 + pr_debug("sock=%p sk=%p\n", sock, sk);
7225 +
7226 + if (!sk)
7227 + return 0;
7228
7229 sock_orphan(sk);
7230 sock_put(sk);
7231 diff --git a/net/sunrpc/rpcb_clnt.c b/net/sunrpc/rpcb_clnt.c
7232 index 78ac39f..4c38b33 100644
7233 --- a/net/sunrpc/rpcb_clnt.c
7234 +++ b/net/sunrpc/rpcb_clnt.c
7235 @@ -180,14 +180,16 @@ void rpcb_put_local(struct net *net)
7236 struct sunrpc_net *sn = net_generic(net, sunrpc_net_id);
7237 struct rpc_clnt *clnt = sn->rpcb_local_clnt;
7238 struct rpc_clnt *clnt4 = sn->rpcb_local_clnt4;
7239 - int shutdown;
7240 + int shutdown = 0;
7241
7242 spin_lock(&sn->rpcb_clnt_lock);
7243 - if (--sn->rpcb_users == 0) {
7244 - sn->rpcb_local_clnt = NULL;
7245 - sn->rpcb_local_clnt4 = NULL;
7246 + if (sn->rpcb_users) {
7247 + if (--sn->rpcb_users == 0) {
7248 + sn->rpcb_local_clnt = NULL;
7249 + sn->rpcb_local_clnt4 = NULL;
7250 + }
7251 + shutdown = !sn->rpcb_users;
7252 }
7253 - shutdown = !sn->rpcb_users;
7254 spin_unlock(&sn->rpcb_clnt_lock);
7255
7256 if (shutdown) {
7257 diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
7258 index 234ee39..cb7c13f 100644
7259 --- a/net/sunrpc/svc.c
7260 +++ b/net/sunrpc/svc.c
7261 @@ -407,6 +407,14 @@ static int svc_uses_rpcbind(struct svc_serv *serv)
7262 return 0;
7263 }
7264
7265 +int svc_bind(struct svc_serv *serv, struct net *net)
7266 +{
7267 + if (!svc_uses_rpcbind(serv))
7268 + return 0;
7269 + return svc_rpcb_setup(serv, net);
7270 +}
7271 +EXPORT_SYMBOL_GPL(svc_bind);
7272 +
7273 /*
7274 * Create an RPC service
7275 */
7276 @@ -471,15 +479,8 @@ __svc_create(struct svc_program *prog, unsigned int bufsize, int npools,
7277 spin_lock_init(&pool->sp_lock);
7278 }
7279
7280 - if (svc_uses_rpcbind(serv)) {
7281 - if (svc_rpcb_setup(serv, current->nsproxy->net_ns) < 0) {
7282 - kfree(serv->sv_pools);
7283 - kfree(serv);
7284 - return NULL;
7285 - }
7286 - if (!serv->sv_shutdown)
7287 - serv->sv_shutdown = svc_rpcb_cleanup;
7288 - }
7289 + if (svc_uses_rpcbind(serv) && (!serv->sv_shutdown))
7290 + serv->sv_shutdown = svc_rpcb_cleanup;
7291
7292 return serv;
7293 }
7294 @@ -536,8 +537,6 @@ EXPORT_SYMBOL_GPL(svc_shutdown_net);
7295 void
7296 svc_destroy(struct svc_serv *serv)
7297 {
7298 - struct net *net = current->nsproxy->net_ns;
7299 -
7300 dprintk("svc: svc_destroy(%s, %d)\n",
7301 serv->sv_program->pg_name,
7302 serv->sv_nrthreads);
7303 @@ -552,8 +551,6 @@ svc_destroy(struct svc_serv *serv)
7304
7305 del_timer_sync(&serv->sv_temptimer);
7306
7307 - svc_shutdown_net(serv, net);
7308 -
7309 /*
7310 * The last user is gone and thus all sockets have to be destroyed to
7311 * the point. Check this.
7312 diff --git a/net/wireless/reg.c b/net/wireless/reg.c
7313 index 15f3474..baf5704 100644
7314 --- a/net/wireless/reg.c
7315 +++ b/net/wireless/reg.c
7316 @@ -1389,7 +1389,7 @@ static void reg_set_request_processed(void)
7317 spin_unlock(&reg_requests_lock);
7318
7319 if (last_request->initiator == NL80211_REGDOM_SET_BY_USER)
7320 - cancel_delayed_work_sync(&reg_timeout);
7321 + cancel_delayed_work(&reg_timeout);
7322
7323 if (need_more_processing)
7324 schedule_work(&reg_work);
7325 diff --git a/sound/pci/hda/hda_codec.c b/sound/pci/hda/hda_codec.c
7326 index 841475c..926b455 100644
7327 --- a/sound/pci/hda/hda_codec.c
7328 +++ b/sound/pci/hda/hda_codec.c
7329 @@ -1192,6 +1192,7 @@ static void snd_hda_codec_free(struct hda_codec *codec)
7330 {
7331 if (!codec)
7332 return;
7333 + snd_hda_jack_tbl_clear(codec);
7334 restore_init_pincfgs(codec);
7335 #ifdef CONFIG_SND_HDA_POWER_SAVE
7336 cancel_delayed_work(&codec->power_work);
7337 @@ -1200,6 +1201,7 @@ static void snd_hda_codec_free(struct hda_codec *codec)
7338 list_del(&codec->list);
7339 snd_array_free(&codec->mixers);
7340 snd_array_free(&codec->nids);
7341 + snd_array_free(&codec->cvt_setups);
7342 snd_array_free(&codec->conn_lists);
7343 snd_array_free(&codec->spdif_out);
7344 codec->bus->caddr_tbl[codec->addr] = NULL;
7345 diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
7346 index e56c2c8..c43264f 100644
7347 --- a/sound/pci/hda/patch_realtek.c
7348 +++ b/sound/pci/hda/patch_realtek.c
7349 @@ -6976,6 +6976,7 @@ static const struct hda_codec_preset snd_hda_preset_realtek[] = {
7350 { .id = 0x10ec0272, .name = "ALC272", .patch = patch_alc662 },
7351 { .id = 0x10ec0275, .name = "ALC275", .patch = patch_alc269 },
7352 { .id = 0x10ec0276, .name = "ALC276", .patch = patch_alc269 },
7353 + { .id = 0x10ec0280, .name = "ALC280", .patch = patch_alc269 },
7354 { .id = 0x10ec0861, .rev = 0x100340, .name = "ALC660",
7355 .patch = patch_alc861 },
7356 { .id = 0x10ec0660, .name = "ALC660-VD", .patch = patch_alc861vd },
7357 diff --git a/sound/pci/hda/patch_sigmatel.c b/sound/pci/hda/patch_sigmatel.c
7358 index 2cb1e08..7494fbc 100644
7359 --- a/sound/pci/hda/patch_sigmatel.c
7360 +++ b/sound/pci/hda/patch_sigmatel.c
7361 @@ -4388,7 +4388,7 @@ static int stac92xx_init(struct hda_codec *codec)
7362 AC_PINCTL_IN_EN);
7363 for (i = 0; i < spec->num_pwrs; i++) {
7364 hda_nid_t nid = spec->pwr_nids[i];
7365 - int pinctl, def_conf;
7366 + unsigned int pinctl, def_conf;
7367
7368 /* power on when no jack detection is available */
7369 /* or when the VREF is used for controlling LED */
7370 @@ -4415,7 +4415,7 @@ static int stac92xx_init(struct hda_codec *codec)
7371 def_conf = get_defcfg_connect(def_conf);
7372 /* skip any ports that don't have jacks since presence
7373 * detection is useless */
7374 - if (def_conf != AC_JACK_PORT_NONE &&
7375 + if (def_conf != AC_JACK_PORT_COMPLEX ||
7376 !is_jack_detectable(codec, nid)) {
7377 stac_toggle_power_map(codec, nid, 1);
7378 continue;
7379 diff --git a/sound/soc/codecs/tlv320aic3x.c b/sound/soc/codecs/tlv320aic3x.c
7380 index 8d20f6e..b8f0262 100644
7381 --- a/sound/soc/codecs/tlv320aic3x.c
7382 +++ b/sound/soc/codecs/tlv320aic3x.c
7383 @@ -936,9 +936,7 @@ static int aic3x_hw_params(struct snd_pcm_substream *substream,
7384 }
7385
7386 found:
7387 - data = snd_soc_read(codec, AIC3X_PLL_PROGA_REG);
7388 - snd_soc_write(codec, AIC3X_PLL_PROGA_REG,
7389 - data | (pll_p << PLLP_SHIFT));
7390 + snd_soc_update_bits(codec, AIC3X_PLL_PROGA_REG, PLLP_MASK, pll_p);
7391 snd_soc_write(codec, AIC3X_OVRF_STATUS_AND_PLLR_REG,
7392 pll_r << PLLR_SHIFT);
7393 snd_soc_write(codec, AIC3X_PLL_PROGB_REG, pll_j << PLLJ_SHIFT);
7394 diff --git a/sound/soc/codecs/tlv320aic3x.h b/sound/soc/codecs/tlv320aic3x.h
7395 index 6f097fb..08c7f66 100644
7396 --- a/sound/soc/codecs/tlv320aic3x.h
7397 +++ b/sound/soc/codecs/tlv320aic3x.h
7398 @@ -166,6 +166,7 @@
7399
7400 /* PLL registers bitfields */
7401 #define PLLP_SHIFT 0
7402 +#define PLLP_MASK 7
7403 #define PLLQ_SHIFT 3
7404 #define PLLR_SHIFT 0
7405 #define PLLJ_SHIFT 2
7406 diff --git a/sound/soc/codecs/wm2200.c b/sound/soc/codecs/wm2200.c
7407 index acbdc5f..32682c1 100644
7408 --- a/sound/soc/codecs/wm2200.c
7409 +++ b/sound/soc/codecs/wm2200.c
7410 @@ -1491,6 +1491,7 @@ static int wm2200_bclk_rates_dat[WM2200_NUM_BCLK_RATES] = {
7411
7412 static int wm2200_bclk_rates_cd[WM2200_NUM_BCLK_RATES] = {
7413 5644800,
7414 + 3763200,
7415 2882400,
7416 1881600,
7417 1411200,
7418 diff --git a/tools/hv/hv_kvp_daemon.c b/tools/hv/hv_kvp_daemon.c
7419 index 146fd61..d9834b3 100644
7420 --- a/tools/hv/hv_kvp_daemon.c
7421 +++ b/tools/hv/hv_kvp_daemon.c
7422 @@ -701,14 +701,18 @@ int main(void)
7423 pfd.fd = fd;
7424
7425 while (1) {
7426 + struct sockaddr *addr_p = (struct sockaddr *) &addr;
7427 + socklen_t addr_l = sizeof(addr);
7428 pfd.events = POLLIN;
7429 pfd.revents = 0;
7430 poll(&pfd, 1, -1);
7431
7432 - len = recv(fd, kvp_recv_buffer, sizeof(kvp_recv_buffer), 0);
7433 + len = recvfrom(fd, kvp_recv_buffer, sizeof(kvp_recv_buffer), 0,
7434 + addr_p, &addr_l);
7435
7436 - if (len < 0) {
7437 - syslog(LOG_ERR, "recv failed; error:%d", len);
7438 + if (len < 0 || addr.nl_pid) {
7439 + syslog(LOG_ERR, "recvfrom failed; pid:%u error:%d %s",
7440 + addr.nl_pid, errno, strerror(errno));
7441 close(fd);
7442 return -1;
7443 }