Magellan Linux

Contents of /trunk/kernel-alx/patches-3.10/0158-3.10.59-all-fixes.patch

Revision 2646
Tue Jul 21 16:20:21 2015 UTC by niro
File size: 47236 byte(s)
-linux-3.10.59
1 diff --git a/Documentation/lzo.txt b/Documentation/lzo.txt
2 new file mode 100644
3 index 000000000000..ea45dd3901e3
4 --- /dev/null
5 +++ b/Documentation/lzo.txt
6 @@ -0,0 +1,164 @@
7 +
8 +LZO stream format as understood by Linux's LZO decompressor
9 +===========================================================
10 +
11 +Introduction
12 +
13 + This is not a specification. No specification seems to be publicly available
14 + for the LZO stream format. This document describes what input format the LZO
15 + decompressor as implemented in the Linux kernel understands. The file covered
16 + by this analysis is lib/lzo/lzo1x_decompress_safe.c. No analysis was made of
17 + the compressor or of any other implementation, though it seems likely that
18 + the format matches the standard one. The purpose of this document is to
19 + better understand what the code does in order to propose more efficient fixes
20 + for future bug reports.
21 +
22 +Description
23 +
24 + The stream is composed of a series of instructions, operands, and data. The
25 + instructions consist of a few bits representing an opcode, and bits forming
26 + the operands for the instruction, whose size and position depend on the
27 + opcode and on the number of literals copied by the previous instruction. The
28 + operands are used to indicate :
29 +
30 + - a distance when copying data from the dictionary (past output buffer)
31 + - a length (number of bytes to copy from dictionary)
32 + - the number of literals to copy, which is retained in the variable "state"
33 + as a piece of information for the next instructions.
34 +
35 + Depending on the opcode and operands, extra data may optionally follow. This
36 + extra data can be a complement for the operand (e.g. a length or a distance
37 + encoded on larger values), or a literal to be copied to the output buffer.
38 +
39 + The first byte of the block follows a different encoding from other bytes; it
40 + seems to be optimized for literal use only, since there is no dictionary yet
41 + prior to that byte.
42 +
43 + Lengths are always encoded with a variable size, starting with a small number
44 + of bits in the operand. If that number of bits isn't enough to represent the
45 + length, it is extended by consuming more bytes, each extra byte adding at
46 + most 255 to the length (thus the compression ratio cannot exceed
47 + around 255:1). The variable length encoding using #bits is always the same :
48 +
49 + length = byte & ((1 << #bits) - 1)
50 + if (!length) {
51 + length = ((1 << #bits) - 1)
52 + length += 255*(number of zero bytes)
53 + length += first-non-zero-byte
54 + }
55 + length += constant (generally 2 or 3)
56 +
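 As an illustration, the variable length read above can be sketched in C
 roughly as follows; the helper name read_length() and its exact signature are
 hypothetical simplifications of what the kernel decompressor does, and the
 input bounds checks (including the MAX_255_COUNT overflow protection added by
 this very patch) are omitted here:

     #include <stddef.h>   /* size_t */

     /* Sketch: decode a length whose low bits come from the opcode byte.
      * 'bits' is the number of length bits in the instruction and 'base'
      * the constant added at the end (generally 2 or 3). */
     static size_t read_length(const unsigned char **pp, unsigned char byte,
                               unsigned int bits, size_t base)
     {
             const unsigned char *p = *pp;
             size_t len = byte & ((1u << bits) - 1);

             if (len == 0) {
                     len = (1u << bits) - 1;
                     while (*p == 0) {       /* each zero byte adds 255 */
                             len += 255;
                             p++;
                     }
                     len += *p++;            /* first non-zero byte */
             }
             *pp = p;
             return len + base;
     }

 For example, an opcode of the 001LLLLL form described below would correspond
 to bits = 5 and base = 2.
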
57 + For references to the dictionary, distances are relative to the output
58 + pointer. Distances are encoded using very few bits belonging to certain
59 + ranges, resulting in multiple copy instructions using different encodings.
60 + Certain encodings involve one extra byte, others involve two extra bytes
61 + forming a little-endian 16-bit quantity (marked LE16 below).
62 +
63 + After any instruction except the large literal copy, 0, 1, 2 or 3 literals
64 + are copied before starting the next instruction. The number of literals that
65 + were copied may change the meaning and behaviour of the next instruction. In
66 + practice, only one instruction needs to know whether 0, less than 4, or more
67 + literals were copied. This is the information stored in the <state> variable
68 + in this implementation. This number of immediate literals to be copied is
69 + generally encoded in the last two bits of the instruction but may also be
70 + taken from the last two bits of an extra operand (e.g. the distance).
71 +
72 + End of stream is declared when a block copy of distance 0 is seen. Only one
73 + instruction may encode this distance (0001HLLL); it takes one LE16 operand
74 + for the distance, thus requiring 3 bytes.
75 +
76 + IMPORTANT NOTE : in the code some length checks are missing because certain
77 + instructions are called under the assumption that a certain number of bytes
78 + follow, since this has already been guaranteed before parsing the instructions.
79 + They just have to "refill" this credit if they consume extra bytes. This is
80 + an implementation design choice independent of the algorithm or encoding.
81 +
82 +Byte sequences
83 +
84 + First byte encoding :
85 +
86 + 0..17 : follow regular instruction encoding, see below. It is worth
87 + noting that codes 16 and 17 will represent a block copy from
88 + the dictionary, which is still empty, and that they will always be
89 + invalid at this place.
90 +
91 + 18..21 : copy 0..3 literals
92 + state = (byte - 17) = 0..3 [ copy <state> literals ]
93 + skip byte
94 +
95 + 22..255 : copy literal string
96 + length = (byte - 17) = 4..238
97 + state = 4 [ don't copy extra literals ]
98 + skip byte
99 +
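 A rough C rendering of this first-byte handling (a hypothetical helper,
 simplified and not taken from the kernel sources) could look like this:

     /* Sketch: handle the first byte of a block as described above.
      * Returns the number of literals to copy and sets *state. */
     static unsigned int first_byte_literals(unsigned char byte,
                                             unsigned int *state)
     {
             if (byte >= 22) {               /* long literal string */
                     *state = 4;             /* no extra literals afterwards */
                     return byte - 17;
             }
             if (byte >= 18) {               /* short literal copy */
                     *state = byte - 17;
                     return *state;
             }
             /* 0..17: fall through to the regular instruction decoder;
              * 16 and 17 would be dictionary copies and are invalid while
              * the dictionary is still empty. */
             return 0;
     }
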
100 + Instruction encoding :
101 +
102 + 0 0 0 0 X X X X (0..15)
103 + Depends on the number of literals copied by the last instruction.
104 + If the last instruction did not copy any literals (state == 0), this
105 + encoding will be a copy of 4 or more literals, and must be interpreted
106 + like this :
107 +
108 + 0 0 0 0 L L L L (0..15) : copy long literal string
109 + length = 3 + (L ?: 15 + (zero_bytes * 255) + non_zero_byte)
110 + state = 4 (no extra literals are copied)
111 +
112 + If the last instruction copied between 1 and 3 literals (encoded in
113 + the instruction's opcode or distance), the instruction is a copy of a
114 + 2-byte block from the dictionary within a 1kB distance. It is worth
115 + noting that this instruction provides little savings since it uses 2
116 + bytes to encode a copy of 2 other bytes but it encodes the number of
117 + following literals for free. It must be interpreted like this :
118 +
119 + 0 0 0 0 D D S S (0..15) : copy 2 bytes from <= 1kB distance
120 + length = 2
121 + state = S (copy S literals after this block)
122 + Always followed by exactly one byte : H H H H H H H H
123 + distance = (H << 2) + D + 1
124 +
125 + If the last instruction copied 4 or more literals (as detected by
126 + state == 4), the instruction becomes a copy of a 3-byte block from the
127 + dictionary at a 2..3kB distance, and must be interpreted like this :
128 +
129 + 0 0 0 0 D D S S (0..15) : copy 3 bytes from 2..3 kB distance
130 + length = 3
131 + state = S (copy S literals after this block)
132 + Always followed by exactly one byte : H H H H H H H H
133 + distance = (H << 2) + D + 2049
134 +
135 + 0 0 0 1 H L L L (16..31)
136 + Copy of a block within 16..48kB distance (preferably less than 10B)
137 + length = 2 + (L ?: 7 + (zero_bytes * 255) + non_zero_byte)
138 + Always followed by exactly one LE16 : D D D D D D D D : D D D D D D S S
139 + distance = 16384 + (H << 14) + D
140 + state = S (copy S literals after this block)
141 + End of stream is reached if distance == 16384
142 +
143 + 0 0 1 L L L L L (32..63)
144 + Copy of small block within 16kB distance (preferably less than 34B)
145 + length = 2 + (L ?: 31 + (zero_bytes * 255) + non_zero_byte)
146 + Always followed by exactly one LE16 : D D D D D D D D : D D D D D D S S
147 + distance = D + 1
148 + state = S (copy S literals after this block)
149 +
150 + 0 1 L D D D S S (64..127)
151 + Copy 3-4 bytes from block within 2kB distance
152 + state = S (copy S literals after this block)
153 + length = 3 + L
154 + Always followed by exactly one byte : H H H H H H H H
155 + distance = (H << 3) + D + 1
156 +
157 + 1 L L D D D S S (128..255)
158 + Copy 5-8 bytes from block within 2kB distance
159 + state = S (copy S literals after this block)
160 + length = 5 + L
161 + Always followed by exactly one byte : H H H H H H H H
162 + distance = (H << 3) + D + 1
163 +
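 To summarize the ranges above, a hypothetical C helper (purely illustrative,
 not code from the decompressor) mapping an instruction byte onto the
 documented cases could read:

     /* Sketch: classify an instruction byte; 'state' is the number of
      * literals copied by the previous instruction, as described above. */
     static const char *classify(unsigned char b, unsigned int state)
     {
             if (b >= 128)
                     return "copy 5-8 bytes from <= 2kB distance";
             if (b >= 64)
                     return "copy 3-4 bytes from <= 2kB distance";
             if (b >= 32)
                     return "copy small block within 16kB distance";
             if (b >= 16)
                     return "copy block within 16..48kB distance (or end of stream)";
             /* 0..15: meaning depends on the previous instruction */
             if (state == 0)
                     return "copy long literal string";
             if (state == 4)
                     return "copy 3 bytes from 2..3kB distance";
             return "copy 2 bytes from <= 1kB distance";
     }
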
164 +Authors
165 +
166 + This document was written by Willy Tarreau <w@1wt.eu> on 2014/07/19 during an
167 + analysis of the decompression code available in Linux 3.16-rc5. The code is
168 + tricky; it is possible that this document contains mistakes or that a few
169 + corner cases were overlooked. In any case, please report any doubt, fix, or
170 + proposed update to the author(s) so that the document can be updated.
171 diff --git a/Makefile b/Makefile
172 index c27454b8ca3e..7baf27f5cf0f 100644
173 --- a/Makefile
174 +++ b/Makefile
175 @@ -1,6 +1,6 @@
176 VERSION = 3
177 PATCHLEVEL = 10
178 -SUBLEVEL = 58
179 +SUBLEVEL = 59
180 EXTRAVERSION =
181 NAME = TOSSUG Baby Fish
182
183 diff --git a/arch/arm/mach-at91/clock.c b/arch/arm/mach-at91/clock.c
184 index da841885d01c..64f9f1045539 100644
185 --- a/arch/arm/mach-at91/clock.c
186 +++ b/arch/arm/mach-at91/clock.c
187 @@ -947,6 +947,7 @@ static int __init at91_clock_reset(void)
188 }
189
190 at91_pmc_write(AT91_PMC_SCDR, scdr);
191 + at91_pmc_write(AT91_PMC_PCDR, pcdr);
192 if (cpu_is_sama5d3())
193 at91_pmc_write(AT91_PMC_PCDR1, pcdr1);
194
195 diff --git a/arch/arm64/include/asm/compat.h b/arch/arm64/include/asm/compat.h
196 index 899af807ef0f..c30a548cee56 100644
197 --- a/arch/arm64/include/asm/compat.h
198 +++ b/arch/arm64/include/asm/compat.h
199 @@ -33,8 +33,8 @@ typedef s32 compat_ssize_t;
200 typedef s32 compat_time_t;
201 typedef s32 compat_clock_t;
202 typedef s32 compat_pid_t;
203 -typedef u32 __compat_uid_t;
204 -typedef u32 __compat_gid_t;
205 +typedef u16 __compat_uid_t;
206 +typedef u16 __compat_gid_t;
207 typedef u16 __compat_uid16_t;
208 typedef u16 __compat_gid16_t;
209 typedef u32 __compat_uid32_t;
210 diff --git a/arch/m68k/mm/hwtest.c b/arch/m68k/mm/hwtest.c
211 index 2c7dde3c6430..2a5259fd23eb 100644
212 --- a/arch/m68k/mm/hwtest.c
213 +++ b/arch/m68k/mm/hwtest.c
214 @@ -28,9 +28,11 @@
215 int hwreg_present( volatile void *regp )
216 {
217 int ret = 0;
218 + unsigned long flags;
219 long save_sp, save_vbr;
220 long tmp_vectors[3];
221
222 + local_irq_save(flags);
223 __asm__ __volatile__
224 ( "movec %/vbr,%2\n\t"
225 "movel #Lberr1,%4@(8)\n\t"
226 @@ -46,6 +48,7 @@ int hwreg_present( volatile void *regp )
227 : "=&d" (ret), "=&r" (save_sp), "=&r" (save_vbr)
228 : "a" (regp), "a" (tmp_vectors)
229 );
230 + local_irq_restore(flags);
231
232 return( ret );
233 }
234 @@ -58,9 +61,11 @@ EXPORT_SYMBOL(hwreg_present);
235 int hwreg_write( volatile void *regp, unsigned short val )
236 {
237 int ret;
238 + unsigned long flags;
239 long save_sp, save_vbr;
240 long tmp_vectors[3];
241
242 + local_irq_save(flags);
243 __asm__ __volatile__
244 ( "movec %/vbr,%2\n\t"
245 "movel #Lberr2,%4@(8)\n\t"
246 @@ -78,6 +83,7 @@ int hwreg_write( volatile void *regp, unsigned short val )
247 : "=&d" (ret), "=&r" (save_sp), "=&r" (save_vbr)
248 : "a" (regp), "a" (tmp_vectors), "g" (val)
249 );
250 + local_irq_restore(flags);
251
252 return( ret );
253 }
254 diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c
255 index 5c948177529e..bc79ab00536f 100644
256 --- a/arch/s390/kvm/interrupt.c
257 +++ b/arch/s390/kvm/interrupt.c
258 @@ -71,6 +71,7 @@ static int __interrupt_is_deliverable(struct kvm_vcpu *vcpu,
259 return 0;
260 if (vcpu->arch.sie_block->gcr[0] & 0x2000ul)
261 return 1;
262 + return 0;
263 case KVM_S390_INT_EMERGENCY:
264 if (psw_extint_disabled(vcpu))
265 return 0;
266 diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
267 index f7f20f7fac3c..373058c9b75d 100644
268 --- a/arch/x86/include/asm/kvm_host.h
269 +++ b/arch/x86/include/asm/kvm_host.h
270 @@ -463,6 +463,7 @@ struct kvm_vcpu_arch {
271 u64 mmio_gva;
272 unsigned access;
273 gfn_t mmio_gfn;
274 + u64 mmio_gen;
275
276 struct kvm_pmu pmu;
277
278 diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
279 index f187806dfc18..8533e69d2b89 100644
280 --- a/arch/x86/kernel/cpu/intel.c
281 +++ b/arch/x86/kernel/cpu/intel.c
282 @@ -154,6 +154,21 @@ static void __cpuinit early_init_intel(struct cpuinfo_x86 *c)
283 setup_clear_cpu_cap(X86_FEATURE_ERMS);
284 }
285 }
286 +
287 + /*
288 + * Intel Quark Core DevMan_001.pdf section 6.4.11
289 + * "The operating system also is required to invalidate (i.e., flush)
290 + * the TLB when any changes are made to any of the page table entries.
291 + * The operating system must reload CR3 to cause the TLB to be flushed"
292 + *
293 + * As a result cpu_has_pge() in arch/x86/include/asm/tlbflush.h should
294 + be false so that __flush_tlb_all() causes CR3 instead of CR4.PGE
295 + * to be modified
296 + */
297 + if (c->x86 == 5 && c->x86_model == 9) {
298 + pr_info("Disabling PGE capability bit\n");
299 + setup_clear_cpu_cap(X86_FEATURE_PGE);
300 + }
301 }
302
303 #ifdef CONFIG_X86_32
304 diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
305 index 711c649f80b7..e14b1f8667bb 100644
306 --- a/arch/x86/kvm/mmu.c
307 +++ b/arch/x86/kvm/mmu.c
308 @@ -3072,7 +3072,7 @@ static void mmu_sync_roots(struct kvm_vcpu *vcpu)
309 if (!VALID_PAGE(vcpu->arch.mmu.root_hpa))
310 return;
311
312 - vcpu_clear_mmio_info(vcpu, ~0ul);
313 + vcpu_clear_mmio_info(vcpu, MMIO_GVA_ANY);
314 kvm_mmu_audit(vcpu, AUDIT_PRE_SYNC);
315 if (vcpu->arch.mmu.root_level == PT64_ROOT_LEVEL) {
316 hpa_t root = vcpu->arch.mmu.root_hpa;
317 diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
318 index 3186542f2fa3..7626d3efa064 100644
319 --- a/arch/x86/kvm/x86.h
320 +++ b/arch/x86/kvm/x86.h
321 @@ -78,15 +78,23 @@ static inline void vcpu_cache_mmio_info(struct kvm_vcpu *vcpu,
322 vcpu->arch.mmio_gva = gva & PAGE_MASK;
323 vcpu->arch.access = access;
324 vcpu->arch.mmio_gfn = gfn;
325 + vcpu->arch.mmio_gen = kvm_memslots(vcpu->kvm)->generation;
326 +}
327 +
328 +static inline bool vcpu_match_mmio_gen(struct kvm_vcpu *vcpu)
329 +{
330 + return vcpu->arch.mmio_gen == kvm_memslots(vcpu->kvm)->generation;
331 }
332
333 /*
334 - * Clear the mmio cache info for the given gva,
335 - * specially, if gva is ~0ul, we clear all mmio cache info.
336 + * Clear the mmio cache info for the given gva. If gva is MMIO_GVA_ANY, we
337 + * clear all mmio cache info.
338 */
339 +#define MMIO_GVA_ANY (~(gva_t)0)
340 +
341 static inline void vcpu_clear_mmio_info(struct kvm_vcpu *vcpu, gva_t gva)
342 {
343 - if (gva != (~0ul) && vcpu->arch.mmio_gva != (gva & PAGE_MASK))
344 + if (gva != MMIO_GVA_ANY && vcpu->arch.mmio_gva != (gva & PAGE_MASK))
345 return;
346
347 vcpu->arch.mmio_gva = 0;
348 @@ -94,7 +102,8 @@ static inline void vcpu_clear_mmio_info(struct kvm_vcpu *vcpu, gva_t gva)
349
350 static inline bool vcpu_match_mmio_gva(struct kvm_vcpu *vcpu, unsigned long gva)
351 {
352 - if (vcpu->arch.mmio_gva && vcpu->arch.mmio_gva == (gva & PAGE_MASK))
353 + if (vcpu_match_mmio_gen(vcpu) && vcpu->arch.mmio_gva &&
354 + vcpu->arch.mmio_gva == (gva & PAGE_MASK))
355 return true;
356
357 return false;
358 @@ -102,7 +111,8 @@ static inline bool vcpu_match_mmio_gva(struct kvm_vcpu *vcpu, unsigned long gva)
359
360 static inline bool vcpu_match_mmio_gpa(struct kvm_vcpu *vcpu, gpa_t gpa)
361 {
362 - if (vcpu->arch.mmio_gfn && vcpu->arch.mmio_gfn == gpa >> PAGE_SHIFT)
363 + if (vcpu_match_mmio_gen(vcpu) && vcpu->arch.mmio_gfn &&
364 + vcpu->arch.mmio_gfn == gpa >> PAGE_SHIFT)
365 return true;
366
367 return false;
368 diff --git a/drivers/base/firmware_class.c b/drivers/base/firmware_class.c
369 index 01e21037d8fe..00a565676583 100644
370 --- a/drivers/base/firmware_class.c
371 +++ b/drivers/base/firmware_class.c
372 @@ -1021,6 +1021,9 @@ _request_firmware(const struct firmware **firmware_p, const char *name,
373 if (!firmware_p)
374 return -EINVAL;
375
376 + if (!name || name[0] == '\0')
377 + return -EINVAL;
378 +
379 ret = _request_firmware_prepare(&fw, name, device);
380 if (ret <= 0) /* error or already assigned */
381 goto out;
382 diff --git a/drivers/base/regmap/regmap-debugfs.c b/drivers/base/regmap/regmap-debugfs.c
383 index 975719bc3450..b41994fd8460 100644
384 --- a/drivers/base/regmap/regmap-debugfs.c
385 +++ b/drivers/base/regmap/regmap-debugfs.c
386 @@ -460,16 +460,20 @@ void regmap_debugfs_init(struct regmap *map, const char *name)
387 {
388 struct rb_node *next;
389 struct regmap_range_node *range_node;
390 + const char *devname = "dummy";
391
392 INIT_LIST_HEAD(&map->debugfs_off_cache);
393 mutex_init(&map->cache_lock);
394
395 + if (map->dev)
396 + devname = dev_name(map->dev);
397 +
398 if (name) {
399 map->debugfs_name = kasprintf(GFP_KERNEL, "%s-%s",
400 - dev_name(map->dev), name);
401 + devname, name);
402 name = map->debugfs_name;
403 } else {
404 - name = dev_name(map->dev);
405 + name = devname;
406 }
407
408 map->debugfs = debugfs_create_dir(name, regmap_debugfs_root);
409 diff --git a/drivers/base/regmap/regmap.c b/drivers/base/regmap/regmap.c
410 index 4b5cf2e34e9a..6a66f0b7d3d4 100644
411 --- a/drivers/base/regmap/regmap.c
412 +++ b/drivers/base/regmap/regmap.c
413 @@ -1177,7 +1177,7 @@ int _regmap_write(struct regmap *map, unsigned int reg,
414 }
415
416 #ifdef LOG_DEVICE
417 - if (strcmp(dev_name(map->dev), LOG_DEVICE) == 0)
418 + if (map->dev && strcmp(dev_name(map->dev), LOG_DEVICE) == 0)
419 dev_info(map->dev, "%x <= %x\n", reg, val);
420 #endif
421
422 @@ -1437,7 +1437,7 @@ static int _regmap_read(struct regmap *map, unsigned int reg,
423 ret = map->reg_read(context, reg, val);
424 if (ret == 0) {
425 #ifdef LOG_DEVICE
426 - if (strcmp(dev_name(map->dev), LOG_DEVICE) == 0)
427 + if (map->dev && strcmp(dev_name(map->dev), LOG_DEVICE) == 0)
428 dev_info(map->dev, "%x => %x\n", reg, *val);
429 #endif
430
431 diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
432 index 45aa8e760124..61a8ec4e5f4d 100644
433 --- a/drivers/bluetooth/btusb.c
434 +++ b/drivers/bluetooth/btusb.c
435 @@ -302,6 +302,9 @@ static void btusb_intr_complete(struct urb *urb)
436 BT_ERR("%s corrupted event packet", hdev->name);
437 hdev->stat.err_rx++;
438 }
439 + } else if (urb->status == -ENOENT) {
440 + /* Avoid suspend failed when usb_kill_urb */
441 + return;
442 }
443
444 if (!test_bit(BTUSB_INTR_RUNNING, &data->flags))
445 @@ -390,6 +393,9 @@ static void btusb_bulk_complete(struct urb *urb)
446 BT_ERR("%s corrupted ACL packet", hdev->name);
447 hdev->stat.err_rx++;
448 }
449 + } else if (urb->status == -ENOENT) {
450 + /* Avoid suspend failed when usb_kill_urb */
451 + return;
452 }
453
454 if (!test_bit(BTUSB_BULK_RUNNING, &data->flags))
455 @@ -484,6 +490,9 @@ static void btusb_isoc_complete(struct urb *urb)
456 hdev->stat.err_rx++;
457 }
458 }
459 + } else if (urb->status == -ENOENT) {
460 + /* Avoid suspend failed when usb_kill_urb */
461 + return;
462 }
463
464 if (!test_bit(BTUSB_ISOC_RUNNING, &data->flags))
465 diff --git a/drivers/bluetooth/hci_h5.c b/drivers/bluetooth/hci_h5.c
466 index db0be2fb05fe..db35c542eb20 100644
467 --- a/drivers/bluetooth/hci_h5.c
468 +++ b/drivers/bluetooth/hci_h5.c
469 @@ -237,7 +237,7 @@ static void h5_pkt_cull(struct h5 *h5)
470 break;
471
472 to_remove--;
473 - seq = (seq - 1) % 8;
474 + seq = (seq - 1) & 0x07;
475 }
476
477 if (seq != h5->rx_ack)
478 diff --git a/drivers/hv/channel.c b/drivers/hv/channel.c
479 index 0b122f8c7005..92f34de7aee9 100644
480 --- a/drivers/hv/channel.c
481 +++ b/drivers/hv/channel.c
482 @@ -199,8 +199,10 @@ int vmbus_open(struct vmbus_channel *newchannel, u32 send_ringbuffer_size,
483 ret = vmbus_post_msg(open_msg,
484 sizeof(struct vmbus_channel_open_channel));
485
486 - if (ret != 0)
487 + if (ret != 0) {
488 + err = ret;
489 goto error1;
490 + }
491
492 t = wait_for_completion_timeout(&open_info->waitevent, 5*HZ);
493 if (t == 0) {
494 @@ -392,7 +394,6 @@ int vmbus_establish_gpadl(struct vmbus_channel *channel, void *kbuffer,
495 u32 next_gpadl_handle;
496 unsigned long flags;
497 int ret = 0;
498 - int t;
499
500 next_gpadl_handle = atomic_read(&vmbus_connection.next_gpadl_handle);
501 atomic_inc(&vmbus_connection.next_gpadl_handle);
502 @@ -439,9 +440,7 @@ int vmbus_establish_gpadl(struct vmbus_channel *channel, void *kbuffer,
503
504 }
505 }
506 - t = wait_for_completion_timeout(&msginfo->waitevent, 5*HZ);
507 - BUG_ON(t == 0);
508 -
509 + wait_for_completion(&msginfo->waitevent);
510
511 /* At this point, we received the gpadl created msg */
512 *gpadl_handle = gpadlmsg->gpadl;
513 @@ -464,7 +463,7 @@ int vmbus_teardown_gpadl(struct vmbus_channel *channel, u32 gpadl_handle)
514 struct vmbus_channel_gpadl_teardown *msg;
515 struct vmbus_channel_msginfo *info;
516 unsigned long flags;
517 - int ret, t;
518 + int ret;
519
520 info = kmalloc(sizeof(*info) +
521 sizeof(struct vmbus_channel_gpadl_teardown), GFP_KERNEL);
522 @@ -486,11 +485,12 @@ int vmbus_teardown_gpadl(struct vmbus_channel *channel, u32 gpadl_handle)
523 ret = vmbus_post_msg(msg,
524 sizeof(struct vmbus_channel_gpadl_teardown));
525
526 - BUG_ON(ret != 0);
527 - t = wait_for_completion_timeout(&info->waitevent, 5*HZ);
528 - BUG_ON(t == 0);
529 + if (ret)
530 + goto post_msg_err;
531 +
532 + wait_for_completion(&info->waitevent);
533
534 - /* Received a torndown response */
535 +post_msg_err:
536 spin_lock_irqsave(&vmbus_connection.channelmsg_lock, flags);
537 list_del(&info->msglistentry);
538 spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags);
539 diff --git a/drivers/hv/connection.c b/drivers/hv/connection.c
540 index b9f5d295cbec..a3b555808768 100644
541 --- a/drivers/hv/connection.c
542 +++ b/drivers/hv/connection.c
543 @@ -393,10 +393,21 @@ int vmbus_post_msg(void *buffer, size_t buflen)
544 * insufficient resources. Retry the operation a couple of
545 * times before giving up.
546 */
547 - while (retries < 3) {
548 - ret = hv_post_message(conn_id, 1, buffer, buflen);
549 - if (ret != HV_STATUS_INSUFFICIENT_BUFFERS)
550 + while (retries < 10) {
551 + ret = hv_post_message(conn_id, 1, buffer, buflen);
552 +
553 + switch (ret) {
554 + case HV_STATUS_INSUFFICIENT_BUFFERS:
555 + ret = -ENOMEM;
556 + case -ENOMEM:
557 + break;
558 + case HV_STATUS_SUCCESS:
559 return ret;
560 + default:
561 + pr_err("hv_post_msg() failed; error code:%d\n", ret);
562 + return -EINVAL;
563 + }
564 +
565 retries++;
566 msleep(100);
567 }
568 diff --git a/drivers/message/fusion/mptspi.c b/drivers/message/fusion/mptspi.c
569 index 5653e505f91f..424f51d1e2ce 100644
570 --- a/drivers/message/fusion/mptspi.c
571 +++ b/drivers/message/fusion/mptspi.c
572 @@ -1422,6 +1422,11 @@ mptspi_probe(struct pci_dev *pdev, const struct pci_device_id *id)
573 goto out_mptspi_probe;
574 }
575
576 + /* VMWare emulation doesn't properly implement WRITE_SAME
577 + */
578 + if (pdev->subsystem_vendor == 0x15AD)
579 + sh->no_write_same = 1;
580 +
581 spin_lock_irqsave(&ioc->FreeQlock, flags);
582
583 /* Attach the SCSI Host to the IOC structure
584 diff --git a/drivers/net/wireless/iwlwifi/pcie/drv.c b/drivers/net/wireless/iwlwifi/pcie/drv.c
585 index b53e5c3f403b..bb020ad3f76c 100644
586 --- a/drivers/net/wireless/iwlwifi/pcie/drv.c
587 +++ b/drivers/net/wireless/iwlwifi/pcie/drv.c
588 @@ -269,6 +269,8 @@ static DEFINE_PCI_DEVICE_TABLE(iwl_hw_card_ids) = {
589 {IWL_PCI_DEVICE(0x08B1, 0x4070, iwl7260_2ac_cfg)},
590 {IWL_PCI_DEVICE(0x08B1, 0x4072, iwl7260_2ac_cfg)},
591 {IWL_PCI_DEVICE(0x08B1, 0x4170, iwl7260_2ac_cfg)},
592 + {IWL_PCI_DEVICE(0x08B1, 0x4C60, iwl7260_2ac_cfg)},
593 + {IWL_PCI_DEVICE(0x08B1, 0x4C70, iwl7260_2ac_cfg)},
594 {IWL_PCI_DEVICE(0x08B1, 0x4060, iwl7260_2n_cfg)},
595 {IWL_PCI_DEVICE(0x08B1, 0x406A, iwl7260_2n_cfg)},
596 {IWL_PCI_DEVICE(0x08B1, 0x4160, iwl7260_2n_cfg)},
597 @@ -306,6 +308,8 @@ static DEFINE_PCI_DEVICE_TABLE(iwl_hw_card_ids) = {
598 {IWL_PCI_DEVICE(0x08B1, 0xC770, iwl7260_2ac_cfg)},
599 {IWL_PCI_DEVICE(0x08B1, 0xC760, iwl7260_2n_cfg)},
600 {IWL_PCI_DEVICE(0x08B2, 0xC270, iwl7260_2ac_cfg)},
601 + {IWL_PCI_DEVICE(0x08B1, 0xCC70, iwl7260_2ac_cfg)},
602 + {IWL_PCI_DEVICE(0x08B1, 0xCC60, iwl7260_2ac_cfg)},
603 {IWL_PCI_DEVICE(0x08B2, 0xC272, iwl7260_2ac_cfg)},
604 {IWL_PCI_DEVICE(0x08B2, 0xC260, iwl7260_2n_cfg)},
605 {IWL_PCI_DEVICE(0x08B2, 0xC26A, iwl7260_n_cfg)},
606 diff --git a/drivers/net/wireless/rt2x00/rt2800.h b/drivers/net/wireless/rt2x00/rt2800.h
607 index a7630d5ec892..a629313dd98a 100644
608 --- a/drivers/net/wireless/rt2x00/rt2800.h
609 +++ b/drivers/net/wireless/rt2x00/rt2800.h
610 @@ -1920,7 +1920,7 @@ struct mac_iveiv_entry {
611 * 2 - drop tx power by 12dBm,
612 * 3 - increase tx power by 6dBm
613 */
614 -#define BBP1_TX_POWER_CTRL FIELD8(0x07)
615 +#define BBP1_TX_POWER_CTRL FIELD8(0x03)
616 #define BBP1_TX_ANTENNA FIELD8(0x18)
617
618 /*
619 diff --git a/drivers/pci/pci-sysfs.c b/drivers/pci/pci-sysfs.c
620 index 5b4a9d9cd200..689f3c87ee5c 100644
621 --- a/drivers/pci/pci-sysfs.c
622 +++ b/drivers/pci/pci-sysfs.c
623 @@ -175,7 +175,7 @@ static ssize_t modalias_show(struct device *dev, struct device_attribute *attr,
624 {
625 struct pci_dev *pci_dev = to_pci_dev(dev);
626
627 - return sprintf(buf, "pci:v%08Xd%08Xsv%08Xsd%08Xbc%02Xsc%02Xi%02x\n",
628 + return sprintf(buf, "pci:v%08Xd%08Xsv%08Xsd%08Xbc%02Xsc%02Xi%02X\n",
629 pci_dev->vendor, pci_dev->device,
630 pci_dev->subsystem_vendor, pci_dev->subsystem_device,
631 (u8)(pci_dev->class >> 16), (u8)(pci_dev->class >> 8),
632 diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
633 index 4510279e28dc..910339c0791f 100644
634 --- a/drivers/pci/quirks.c
635 +++ b/drivers/pci/quirks.c
636 @@ -28,6 +28,7 @@
637 #include <linux/ioport.h>
638 #include <linux/sched.h>
639 #include <linux/ktime.h>
640 +#include <linux/mm.h>
641 #include <asm/dma.h> /* isa_dma_bridge_buggy */
642 #include "pci.h"
643
644 @@ -291,6 +292,25 @@ static void quirk_citrine(struct pci_dev *dev)
645 }
646 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_CITRINE, quirk_citrine);
647
648 +/* On IBM Crocodile ipr SAS adapters, expand BAR to system page size */
649 +static void quirk_extend_bar_to_page(struct pci_dev *dev)
650 +{
651 + int i;
652 +
653 + for (i = 0; i < PCI_STD_RESOURCE_END; i++) {
654 + struct resource *r = &dev->resource[i];
655 +
656 + if (r->flags & IORESOURCE_MEM && resource_size(r) < PAGE_SIZE) {
657 + r->end = PAGE_SIZE - 1;
658 + r->start = 0;
659 + r->flags |= IORESOURCE_UNSET;
660 + dev_info(&dev->dev, "expanded BAR %d to page size: %pR\n",
661 + i, r);
662 + }
663 + }
664 +}
665 +DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_IBM, 0x034a, quirk_extend_bar_to_page);
666 +
667 /*
668 * S3 868 and 968 chips report region size equal to 32M, but they decode 64M.
669 * If it's needed, re-allocate the region.
670 diff --git a/drivers/scsi/be2iscsi/be_mgmt.c b/drivers/scsi/be2iscsi/be_mgmt.c
671 index 245a9595a93a..ef0a78b0d730 100644
672 --- a/drivers/scsi/be2iscsi/be_mgmt.c
673 +++ b/drivers/scsi/be2iscsi/be_mgmt.c
674 @@ -812,17 +812,20 @@ mgmt_static_ip_modify(struct beiscsi_hba *phba,
675
676 if (ip_action == IP_ACTION_ADD) {
677 memcpy(req->ip_params.ip_record.ip_addr.addr, ip_param->value,
678 - ip_param->len);
679 + sizeof(req->ip_params.ip_record.ip_addr.addr));
680
681 if (subnet_param)
682 memcpy(req->ip_params.ip_record.ip_addr.subnet_mask,
683 - subnet_param->value, subnet_param->len);
684 + subnet_param->value,
685 + sizeof(req->ip_params.ip_record.ip_addr.subnet_mask));
686 } else {
687 memcpy(req->ip_params.ip_record.ip_addr.addr,
688 - if_info->ip_addr.addr, ip_param->len);
689 + if_info->ip_addr.addr,
690 + sizeof(req->ip_params.ip_record.ip_addr.addr));
691
692 memcpy(req->ip_params.ip_record.ip_addr.subnet_mask,
693 - if_info->ip_addr.subnet_mask, ip_param->len);
694 + if_info->ip_addr.subnet_mask,
695 + sizeof(req->ip_params.ip_record.ip_addr.subnet_mask));
696 }
697
698 rc = mgmt_exec_nonemb_cmd(phba, &nonemb_cmd, NULL, 0);
699 @@ -850,7 +853,7 @@ static int mgmt_modify_gateway(struct beiscsi_hba *phba, uint8_t *gt_addr,
700 req->action = gtway_action;
701 req->ip_addr.ip_type = BE2_IPV4;
702
703 - memcpy(req->ip_addr.addr, gt_addr, param_len);
704 + memcpy(req->ip_addr.addr, gt_addr, sizeof(req->ip_addr.addr));
705
706 return mgmt_exec_nonemb_cmd(phba, &nonemb_cmd, NULL, 0);
707 }
708 diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c
709 index f033b191a022..e6884940d107 100644
710 --- a/drivers/scsi/qla2xxx/qla_target.c
711 +++ b/drivers/scsi/qla2xxx/qla_target.c
712 @@ -1514,12 +1514,10 @@ static inline void qlt_unmap_sg(struct scsi_qla_host *vha,
713 static int qlt_check_reserve_free_req(struct scsi_qla_host *vha,
714 uint32_t req_cnt)
715 {
716 - struct qla_hw_data *ha = vha->hw;
717 - device_reg_t __iomem *reg = ha->iobase;
718 uint32_t cnt;
719
720 if (vha->req->cnt < (req_cnt + 2)) {
721 - cnt = (uint16_t)RD_REG_DWORD(&reg->isp24.req_q_out);
722 + cnt = (uint16_t)RD_REG_DWORD(vha->req->req_q_out);
723
724 ql_dbg(ql_dbg_tgt, vha, 0xe00a,
725 "Request ring circled: cnt=%d, vha->->ring_index=%d, "
726 diff --git a/drivers/spi/spi-dw-mid.c b/drivers/spi/spi-dw-mid.c
727 index b9f0192758d6..0791c92e8c50 100644
728 --- a/drivers/spi/spi-dw-mid.c
729 +++ b/drivers/spi/spi-dw-mid.c
730 @@ -89,7 +89,13 @@ err_exit:
731
732 static void mid_spi_dma_exit(struct dw_spi *dws)
733 {
734 + if (!dws->dma_inited)
735 + return;
736 +
737 + dmaengine_terminate_all(dws->txchan);
738 dma_release_channel(dws->txchan);
739 +
740 + dmaengine_terminate_all(dws->rxchan);
741 dma_release_channel(dws->rxchan);
742 }
743
744 @@ -136,7 +142,7 @@ static int mid_spi_dma_transfer(struct dw_spi *dws, int cs_change)
745 txconf.dst_addr = dws->dma_addr;
746 txconf.dst_maxburst = LNW_DMA_MSIZE_16;
747 txconf.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
748 - txconf.dst_addr_width = DMA_SLAVE_BUSWIDTH_2_BYTES;
749 + txconf.dst_addr_width = dws->dma_width;
750 txconf.device_fc = false;
751
752 txchan->device->device_control(txchan, DMA_SLAVE_CONFIG,
753 @@ -159,7 +165,7 @@ static int mid_spi_dma_transfer(struct dw_spi *dws, int cs_change)
754 rxconf.src_addr = dws->dma_addr;
755 rxconf.src_maxburst = LNW_DMA_MSIZE_16;
756 rxconf.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
757 - rxconf.src_addr_width = DMA_SLAVE_BUSWIDTH_2_BYTES;
758 + rxconf.src_addr_width = dws->dma_width;
759 rxconf.device_fc = false;
760
761 rxchan->device->device_control(rxchan, DMA_SLAVE_CONFIG,
762 diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
763 index 8fcd2424e7f9..187911fbabce 100644
764 --- a/fs/btrfs/inode.c
765 +++ b/fs/btrfs/inode.c
766 @@ -3545,7 +3545,8 @@ noinline int btrfs_update_inode(struct btrfs_trans_handle *trans,
767 * without delay
768 */
769 if (!btrfs_is_free_space_inode(inode)
770 - && root->root_key.objectid != BTRFS_DATA_RELOC_TREE_OBJECTID) {
771 + && root->root_key.objectid != BTRFS_DATA_RELOC_TREE_OBJECTID
772 + && !root->fs_info->log_root_recovering) {
773 btrfs_update_root_times(trans, root);
774
775 ret = btrfs_delayed_update_inode(trans, root, inode);
776 diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
777 index b3896d5f233a..0e7f7765b3bb 100644
778 --- a/fs/btrfs/relocation.c
779 +++ b/fs/btrfs/relocation.c
780 @@ -967,8 +967,11 @@ again:
781 need_check = false;
782 list_add_tail(&edge->list[UPPER],
783 &list);
784 - } else
785 + } else {
786 + if (upper->checked)
787 + need_check = true;
788 INIT_LIST_HEAD(&edge->list[UPPER]);
789 + }
790 } else {
791 upper = rb_entry(rb_node, struct backref_node,
792 rb_node);
793 diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
794 index 0544587d74f4..1f214689fa5e 100644
795 --- a/fs/btrfs/transaction.c
796 +++ b/fs/btrfs/transaction.c
797 @@ -524,7 +524,6 @@ int btrfs_wait_for_commit(struct btrfs_root *root, u64 transid)
798 if (transid <= root->fs_info->last_trans_committed)
799 goto out;
800
801 - ret = -EINVAL;
802 /* find specified transaction */
803 spin_lock(&root->fs_info->trans_lock);
804 list_for_each_entry(t, &root->fs_info->trans_list, list) {
805 @@ -540,9 +539,16 @@ int btrfs_wait_for_commit(struct btrfs_root *root, u64 transid)
806 }
807 }
808 spin_unlock(&root->fs_info->trans_lock);
809 - /* The specified transaction doesn't exist */
810 - if (!cur_trans)
811 +
812 + /*
813 + * The specified transaction doesn't exist, or we
814 + * raced with btrfs_commit_transaction
815 + */
816 + if (!cur_trans) {
817 + if (transid > root->fs_info->last_trans_committed)
818 + ret = -EINVAL;
819 goto out;
820 + }
821 } else {
822 /* find newest transaction that is committing | committed */
823 spin_lock(&root->fs_info->trans_lock);
824 diff --git a/fs/ecryptfs/inode.c b/fs/ecryptfs/inode.c
825 index 5eab400e2590..41baf8b5e0eb 100644
826 --- a/fs/ecryptfs/inode.c
827 +++ b/fs/ecryptfs/inode.c
828 @@ -1051,7 +1051,7 @@ ecryptfs_setxattr(struct dentry *dentry, const char *name, const void *value,
829 }
830
831 rc = vfs_setxattr(lower_dentry, name, value, size, flags);
832 - if (!rc)
833 + if (!rc && dentry->d_inode)
834 fsstack_copy_attr_all(dentry->d_inode, lower_dentry->d_inode);
835 out:
836 return rc;
837 diff --git a/fs/namespace.c b/fs/namespace.c
838 index 00409add4d96..7f6a9348c589 100644
839 --- a/fs/namespace.c
840 +++ b/fs/namespace.c
841 @@ -1274,6 +1274,8 @@ static int do_umount(struct mount *mnt, int flags)
842 * Special case for "unmounting" root ...
843 * we just try to remount it readonly.
844 */
845 + if (!capable(CAP_SYS_ADMIN))
846 + return -EPERM;
847 down_write(&sb->s_umount);
848 if (!(sb->s_flags & MS_RDONLY))
849 retval = do_remount_sb(sb, MS_RDONLY, NULL, 0);
850 diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
851 index 3fc87b6f9def..69fc437be661 100644
852 --- a/fs/nfs/nfs4proc.c
853 +++ b/fs/nfs/nfs4proc.c
854 @@ -6067,7 +6067,7 @@ static int nfs41_proc_async_sequence(struct nfs_client *clp, struct rpc_cred *cr
855 int ret = 0;
856
857 if ((renew_flags & NFS4_RENEW_TIMEOUT) == 0)
858 - return 0;
859 + return -EAGAIN;
860 task = _nfs41_proc_sequence(clp, cred, false);
861 if (IS_ERR(task))
862 ret = PTR_ERR(task);
863 diff --git a/fs/nfs/nfs4renewd.c b/fs/nfs/nfs4renewd.c
864 index 1720d32ffa54..e1ba58c3d1ad 100644
865 --- a/fs/nfs/nfs4renewd.c
866 +++ b/fs/nfs/nfs4renewd.c
867 @@ -88,10 +88,18 @@ nfs4_renew_state(struct work_struct *work)
868 }
869 nfs_expire_all_delegations(clp);
870 } else {
871 + int ret;
872 +
873 /* Queue an asynchronous RENEW. */
874 - ops->sched_state_renewal(clp, cred, renew_flags);
875 + ret = ops->sched_state_renewal(clp, cred, renew_flags);
876 put_rpccred(cred);
877 - goto out_exp;
878 + switch (ret) {
879 + default:
880 + goto out_exp;
881 + case -EAGAIN:
882 + case -ENOMEM:
883 + break;
884 + }
885 }
886 } else {
887 dprintk("%s: failed to call renewd. Reason: lease not expired \n",
888 diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
889 index 2c37442ed936..d482b86d0e0b 100644
890 --- a/fs/nfs/nfs4state.c
891 +++ b/fs/nfs/nfs4state.c
892 @@ -1699,7 +1699,8 @@ restart:
893 if (status < 0) {
894 set_bit(ops->owner_flag_bit, &sp->so_flags);
895 nfs4_put_state_owner(sp);
896 - return nfs4_recovery_handle_error(clp, status);
897 + status = nfs4_recovery_handle_error(clp, status);
898 + return (status != 0) ? status : -EAGAIN;
899 }
900
901 nfs4_put_state_owner(sp);
902 @@ -1708,7 +1709,7 @@ restart:
903 spin_unlock(&clp->cl_lock);
904 }
905 rcu_read_unlock();
906 - return status;
907 + return 0;
908 }
909
910 static int nfs4_check_lease(struct nfs_client *clp)
911 @@ -1755,7 +1756,6 @@ static int nfs4_handle_reclaim_lease_error(struct nfs_client *clp, int status)
912 break;
913 case -NFS4ERR_STALE_CLIENTID:
914 clear_bit(NFS4CLNT_LEASE_CONFIRM, &clp->cl_state);
915 - nfs4_state_clear_reclaim_reboot(clp);
916 nfs4_state_start_reclaim_reboot(clp);
917 break;
918 case -NFS4ERR_CLID_INUSE:
919 @@ -2174,14 +2174,11 @@ static void nfs4_state_manager(struct nfs_client *clp)
920 section = "reclaim reboot";
921 status = nfs4_do_reclaim(clp,
922 clp->cl_mvops->reboot_recovery_ops);
923 - if (test_bit(NFS4CLNT_LEASE_EXPIRED, &clp->cl_state) ||
924 - test_bit(NFS4CLNT_SESSION_RESET, &clp->cl_state))
925 - continue;
926 - nfs4_state_end_reclaim_reboot(clp);
927 - if (test_bit(NFS4CLNT_RECLAIM_NOGRACE, &clp->cl_state))
928 + if (status == -EAGAIN)
929 continue;
930 if (status < 0)
931 goto out_error;
932 + nfs4_state_end_reclaim_reboot(clp);
933 }
934
935 /* Now recover expired state... */
936 @@ -2189,9 +2186,7 @@ static void nfs4_state_manager(struct nfs_client *clp)
937 section = "reclaim nograce";
938 status = nfs4_do_reclaim(clp,
939 clp->cl_mvops->nograce_recovery_ops);
940 - if (test_bit(NFS4CLNT_LEASE_EXPIRED, &clp->cl_state) ||
941 - test_bit(NFS4CLNT_SESSION_RESET, &clp->cl_state) ||
942 - test_bit(NFS4CLNT_RECLAIM_REBOOT, &clp->cl_state))
943 + if (status == -EAGAIN)
944 continue;
945 if (status < 0)
946 goto out_error;
947 diff --git a/fs/notify/fanotify/fanotify_user.c b/fs/notify/fanotify/fanotify_user.c
948 index f1680cdbd88b..9be6b4163406 100644
949 --- a/fs/notify/fanotify/fanotify_user.c
950 +++ b/fs/notify/fanotify/fanotify_user.c
951 @@ -69,7 +69,7 @@ static int create_fd(struct fsnotify_group *group,
952
953 pr_debug("%s: group=%p event=%p\n", __func__, group, event);
954
955 - client_fd = get_unused_fd();
956 + client_fd = get_unused_fd_flags(group->fanotify_data.f_flags);
957 if (client_fd < 0)
958 return client_fd;
959
960 diff --git a/include/linux/compiler-gcc5.h b/include/linux/compiler-gcc5.h
961 new file mode 100644
962 index 000000000000..cdd1cc202d51
963 --- /dev/null
964 +++ b/include/linux/compiler-gcc5.h
965 @@ -0,0 +1,66 @@
966 +#ifndef __LINUX_COMPILER_H
967 +#error "Please don't include <linux/compiler-gcc5.h> directly, include <linux/compiler.h> instead."
968 +#endif
969 +
970 +#define __used __attribute__((__used__))
971 +#define __must_check __attribute__((warn_unused_result))
972 +#define __compiler_offsetof(a, b) __builtin_offsetof(a, b)
973 +
974 +/* Mark functions as cold. gcc will assume any path leading to a call
975 + to them will be unlikely. This means a lot of manual unlikely()s
976 + are unnecessary now for any paths leading to the usual suspects
977 + like BUG(), printk(), panic() etc. [but let's keep them for now for
978 + older compilers]
979 +
980 + Early snapshots of gcc 4.3 don't support this and we can't detect this
981 + in the preprocessor, but we can live with this because they're unreleased.
982 + Maketime probing would be overkill here.
983 +
984 + gcc also has a __attribute__((__hot__)) to move hot functions into
985 + a special section, but I don't see any sense in this right now in
986 + the kernel context */
987 +#define __cold __attribute__((__cold__))
988 +
989 +#define __UNIQUE_ID(prefix) __PASTE(__PASTE(__UNIQUE_ID_, prefix), __COUNTER__)
990 +
991 +#ifndef __CHECKER__
992 +# define __compiletime_warning(message) __attribute__((warning(message)))
993 +# define __compiletime_error(message) __attribute__((error(message)))
994 +#endif /* __CHECKER__ */
995 +
996 +/*
997 + * Mark a position in code as unreachable. This can be used to
998 + * suppress control flow warnings after asm blocks that transfer
999 + * control elsewhere.
1000 + *
1001 + * Early snapshots of gcc 4.5 don't support this and we can't detect
1002 + * this in the preprocessor, but we can live with this because they're
1003 + * unreleased. Really, we need to have autoconf for the kernel.
1004 + */
1005 +#define unreachable() __builtin_unreachable()
1006 +
1007 +/* Mark a function definition as prohibited from being cloned. */
1008 +#define __noclone __attribute__((__noclone__))
1009 +
1010 +/*
1011 + * Tell the optimizer that something else uses this function or variable.
1012 + */
1013 +#define __visible __attribute__((externally_visible))
1014 +
1015 +/*
1016 + * GCC 'asm goto' miscompiles certain code sequences:
1017 + *
1018 + * http://gcc.gnu.org/bugzilla/show_bug.cgi?id=58670
1019 + *
1020 + * Work it around via a compiler barrier quirk suggested by Jakub Jelinek.
1021 + * Fixed in GCC 4.8.2 and later versions.
1022 + *
1023 + * (asm goto is automatically volatile - the naming reflects this.)
1024 + */
1025 +#define asm_volatile_goto(x...) do { asm goto(x); asm (""); } while (0)
1026 +
1027 +#ifdef CONFIG_ARCH_USE_BUILTIN_BSWAP
1028 +#define __HAVE_BUILTIN_BSWAP32__
1029 +#define __HAVE_BUILTIN_BSWAP64__
1030 +#define __HAVE_BUILTIN_BSWAP16__
1031 +#endif /* CONFIG_ARCH_USE_BUILTIN_BSWAP */
1032 diff --git a/include/linux/sched.h b/include/linux/sched.h
1033 index 8293545ac9b7..f87e9a8d364f 100644
1034 --- a/include/linux/sched.h
1035 +++ b/include/linux/sched.h
1036 @@ -1670,11 +1670,13 @@ extern void thread_group_cputime_adjusted(struct task_struct *p, cputime_t *ut,
1037 #define tsk_used_math(p) ((p)->flags & PF_USED_MATH)
1038 #define used_math() tsk_used_math(current)
1039
1040 -/* __GFP_IO isn't allowed if PF_MEMALLOC_NOIO is set in current->flags */
1041 +/* __GFP_IO isn't allowed if PF_MEMALLOC_NOIO is set in current->flags
1042 + * __GFP_FS is also cleared as it implies __GFP_IO.
1043 + */
1044 static inline gfp_t memalloc_noio_flags(gfp_t flags)
1045 {
1046 if (unlikely(current->flags & PF_MEMALLOC_NOIO))
1047 - flags &= ~__GFP_IO;
1048 + flags &= ~(__GFP_IO | __GFP_FS);
1049 return flags;
1050 }
1051
1052 diff --git a/lib/lzo/lzo1x_decompress_safe.c b/lib/lzo/lzo1x_decompress_safe.c
1053 index 8563081e8da3..a1c387f6afba 100644
1054 --- a/lib/lzo/lzo1x_decompress_safe.c
1055 +++ b/lib/lzo/lzo1x_decompress_safe.c
1056 @@ -19,31 +19,21 @@
1057 #include <linux/lzo.h>
1058 #include "lzodefs.h"
1059
1060 -#define HAVE_IP(t, x) \
1061 - (((size_t)(ip_end - ip) >= (size_t)(t + x)) && \
1062 - (((t + x) >= t) && ((t + x) >= x)))
1063 +#define HAVE_IP(x) ((size_t)(ip_end - ip) >= (size_t)(x))
1064 +#define HAVE_OP(x) ((size_t)(op_end - op) >= (size_t)(x))
1065 +#define NEED_IP(x) if (!HAVE_IP(x)) goto input_overrun
1066 +#define NEED_OP(x) if (!HAVE_OP(x)) goto output_overrun
1067 +#define TEST_LB(m_pos) if ((m_pos) < out) goto lookbehind_overrun
1068
1069 -#define HAVE_OP(t, x) \
1070 - (((size_t)(op_end - op) >= (size_t)(t + x)) && \
1071 - (((t + x) >= t) && ((t + x) >= x)))
1072 -
1073 -#define NEED_IP(t, x) \
1074 - do { \
1075 - if (!HAVE_IP(t, x)) \
1076 - goto input_overrun; \
1077 - } while (0)
1078 -
1079 -#define NEED_OP(t, x) \
1080 - do { \
1081 - if (!HAVE_OP(t, x)) \
1082 - goto output_overrun; \
1083 - } while (0)
1084 -
1085 -#define TEST_LB(m_pos) \
1086 - do { \
1087 - if ((m_pos) < out) \
1088 - goto lookbehind_overrun; \
1089 - } while (0)
1090 +/* This MAX_255_COUNT is the maximum number of times we can add 255 to a base
1091 + * count without overflowing an integer. The multiply will overflow when
1092 + * multiplying 255 by more than MAXINT/255. The sum will overflow earlier
1093 + * depending on the base count. Since the base count is taken from a u8
1094 + * and a few bits, it is safe to assume that it will always be lower than
1095 + * or equal to 2*255, thus we can always prevent any overflow by accepting
1096 + * two less 255 steps. See Documentation/lzo.txt for more information.
1097 + */
1098 +#define MAX_255_COUNT ((((size_t)~0) / 255) - 2)
1099
1100 int lzo1x_decompress_safe(const unsigned char *in, size_t in_len,
1101 unsigned char *out, size_t *out_len)
1102 @@ -75,17 +65,24 @@ int lzo1x_decompress_safe(const unsigned char *in, size_t in_len,
1103 if (t < 16) {
1104 if (likely(state == 0)) {
1105 if (unlikely(t == 0)) {
1106 + size_t offset;
1107 + const unsigned char *ip_last = ip;
1108 +
1109 while (unlikely(*ip == 0)) {
1110 - t += 255;
1111 ip++;
1112 - NEED_IP(1, 0);
1113 + NEED_IP(1);
1114 }
1115 - t += 15 + *ip++;
1116 + offset = ip - ip_last;
1117 + if (unlikely(offset > MAX_255_COUNT))
1118 + return LZO_E_ERROR;
1119 +
1120 + offset = (offset << 8) - offset;
1121 + t += offset + 15 + *ip++;
1122 }
1123 t += 3;
1124 copy_literal_run:
1125 #if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)
1126 - if (likely(HAVE_IP(t, 15) && HAVE_OP(t, 15))) {
1127 + if (likely(HAVE_IP(t + 15) && HAVE_OP(t + 15))) {
1128 const unsigned char *ie = ip + t;
1129 unsigned char *oe = op + t;
1130 do {
1131 @@ -101,8 +98,8 @@ copy_literal_run:
1132 } else
1133 #endif
1134 {
1135 - NEED_OP(t, 0);
1136 - NEED_IP(t, 3);
1137 + NEED_OP(t);
1138 + NEED_IP(t + 3);
1139 do {
1140 *op++ = *ip++;
1141 } while (--t > 0);
1142 @@ -115,7 +112,7 @@ copy_literal_run:
1143 m_pos -= t >> 2;
1144 m_pos -= *ip++ << 2;
1145 TEST_LB(m_pos);
1146 - NEED_OP(2, 0);
1147 + NEED_OP(2);
1148 op[0] = m_pos[0];
1149 op[1] = m_pos[1];
1150 op += 2;
1151 @@ -136,13 +133,20 @@ copy_literal_run:
1152 } else if (t >= 32) {
1153 t = (t & 31) + (3 - 1);
1154 if (unlikely(t == 2)) {
1155 + size_t offset;
1156 + const unsigned char *ip_last = ip;
1157 +
1158 while (unlikely(*ip == 0)) {
1159 - t += 255;
1160 ip++;
1161 - NEED_IP(1, 0);
1162 + NEED_IP(1);
1163 }
1164 - t += 31 + *ip++;
1165 - NEED_IP(2, 0);
1166 + offset = ip - ip_last;
1167 + if (unlikely(offset > MAX_255_COUNT))
1168 + return LZO_E_ERROR;
1169 +
1170 + offset = (offset << 8) - offset;
1171 + t += offset + 31 + *ip++;
1172 + NEED_IP(2);
1173 }
1174 m_pos = op - 1;
1175 next = get_unaligned_le16(ip);
1176 @@ -154,13 +158,20 @@ copy_literal_run:
1177 m_pos -= (t & 8) << 11;
1178 t = (t & 7) + (3 - 1);
1179 if (unlikely(t == 2)) {
1180 + size_t offset;
1181 + const unsigned char *ip_last = ip;
1182 +
1183 while (unlikely(*ip == 0)) {
1184 - t += 255;
1185 ip++;
1186 - NEED_IP(1, 0);
1187 + NEED_IP(1);
1188 }
1189 - t += 7 + *ip++;
1190 - NEED_IP(2, 0);
1191 + offset = ip - ip_last;
1192 + if (unlikely(offset > MAX_255_COUNT))
1193 + return LZO_E_ERROR;
1194 +
1195 + offset = (offset << 8) - offset;
1196 + t += offset + 7 + *ip++;
1197 + NEED_IP(2);
1198 }
1199 next = get_unaligned_le16(ip);
1200 ip += 2;
1201 @@ -174,7 +185,7 @@ copy_literal_run:
1202 #if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)
1203 if (op - m_pos >= 8) {
1204 unsigned char *oe = op + t;
1205 - if (likely(HAVE_OP(t, 15))) {
1206 + if (likely(HAVE_OP(t + 15))) {
1207 do {
1208 COPY8(op, m_pos);
1209 op += 8;
1210 @@ -184,7 +195,7 @@ copy_literal_run:
1211 m_pos += 8;
1212 } while (op < oe);
1213 op = oe;
1214 - if (HAVE_IP(6, 0)) {
1215 + if (HAVE_IP(6)) {
1216 state = next;
1217 COPY4(op, ip);
1218 op += next;
1219 @@ -192,7 +203,7 @@ copy_literal_run:
1220 continue;
1221 }
1222 } else {
1223 - NEED_OP(t, 0);
1224 + NEED_OP(t);
1225 do {
1226 *op++ = *m_pos++;
1227 } while (op < oe);
1228 @@ -201,7 +212,7 @@ copy_literal_run:
1229 #endif
1230 {
1231 unsigned char *oe = op + t;
1232 - NEED_OP(t, 0);
1233 + NEED_OP(t);
1234 op[0] = m_pos[0];
1235 op[1] = m_pos[1];
1236 op += 2;
1237 @@ -214,15 +225,15 @@ match_next:
1238 state = next;
1239 t = next;
1240 #if defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)
1241 - if (likely(HAVE_IP(6, 0) && HAVE_OP(4, 0))) {
1242 + if (likely(HAVE_IP(6) && HAVE_OP(4))) {
1243 COPY4(op, ip);
1244 op += t;
1245 ip += t;
1246 } else
1247 #endif
1248 {
1249 - NEED_IP(t, 3);
1250 - NEED_OP(t, 0);
1251 + NEED_IP(t + 3);
1252 + NEED_OP(t);
1253 while (t > 0) {
1254 *op++ = *ip++;
1255 t--;
1256 diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
1257 index f92818155958..175dca44c97e 100644
1258 --- a/sound/core/pcm_native.c
1259 +++ b/sound/core/pcm_native.c
1260 @@ -3197,7 +3197,7 @@ static const struct vm_operations_struct snd_pcm_vm_ops_data_fault = {
1261
1262 #ifndef ARCH_HAS_DMA_MMAP_COHERENT
1263 /* This should be defined / handled globally! */
1264 -#ifdef CONFIG_ARM
1265 +#if defined(CONFIG_ARM) || defined(CONFIG_ARM64)
1266 #define ARCH_HAS_DMA_MMAP_COHERENT
1267 #endif
1268 #endif
1269 diff --git a/sound/pci/emu10k1/emu10k1_callback.c b/sound/pci/emu10k1/emu10k1_callback.c
1270 index cae36597aa71..0a34b5f1c475 100644
1271 --- a/sound/pci/emu10k1/emu10k1_callback.c
1272 +++ b/sound/pci/emu10k1/emu10k1_callback.c
1273 @@ -85,6 +85,8 @@ snd_emu10k1_ops_setup(struct snd_emux *emux)
1274 * get more voice for pcm
1275 *
1276 * terminate most inactive voice and give it as a pcm voice.
1277 + *
1278 + * voice_lock is already held.
1279 */
1280 int
1281 snd_emu10k1_synth_get_voice(struct snd_emu10k1 *hw)
1282 @@ -92,12 +94,10 @@ snd_emu10k1_synth_get_voice(struct snd_emu10k1 *hw)
1283 struct snd_emux *emu;
1284 struct snd_emux_voice *vp;
1285 struct best_voice best[V_END];
1286 - unsigned long flags;
1287 int i;
1288
1289 emu = hw->synth;
1290
1291 - spin_lock_irqsave(&emu->voice_lock, flags);
1292 lookup_voices(emu, hw, best, 1); /* no OFF voices */
1293 for (i = 0; i < V_END; i++) {
1294 if (best[i].voice >= 0) {
1295 @@ -113,11 +113,9 @@ snd_emu10k1_synth_get_voice(struct snd_emu10k1 *hw)
1296 vp->emu->num_voices--;
1297 vp->ch = -1;
1298 vp->state = SNDRV_EMUX_ST_OFF;
1299 - spin_unlock_irqrestore(&emu->voice_lock, flags);
1300 return ch;
1301 }
1302 }
1303 - spin_unlock_irqrestore(&emu->voice_lock, flags);
1304
1305 /* not found */
1306 return -ENOMEM;
1307 diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h
1308 index 8b75bcf136f6..d5bed1d25713 100644
1309 --- a/sound/usb/quirks-table.h
1310 +++ b/sound/usb/quirks-table.h
1311 @@ -386,6 +386,36 @@ YAMAHA_DEVICE(0x105d, NULL),
1312 }
1313 },
1314 {
1315 + USB_DEVICE(0x0499, 0x1509),
1316 + .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
1317 + /* .vendor_name = "Yamaha", */
1318 + /* .product_name = "Steinberg UR22", */
1319 + .ifnum = QUIRK_ANY_INTERFACE,
1320 + .type = QUIRK_COMPOSITE,
1321 + .data = (const struct snd_usb_audio_quirk[]) {
1322 + {
1323 + .ifnum = 1,
1324 + .type = QUIRK_AUDIO_STANDARD_INTERFACE
1325 + },
1326 + {
1327 + .ifnum = 2,
1328 + .type = QUIRK_AUDIO_STANDARD_INTERFACE
1329 + },
1330 + {
1331 + .ifnum = 3,
1332 + .type = QUIRK_MIDI_YAMAHA
1333 + },
1334 + {
1335 + .ifnum = 4,
1336 + .type = QUIRK_IGNORE_INTERFACE
1337 + },
1338 + {
1339 + .ifnum = -1
1340 + }
1341 + }
1342 + }
1343 +},
1344 +{
1345 USB_DEVICE(0x0499, 0x150a),
1346 .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
1347 /* .vendor_name = "Yamaha", */
1348 diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
1349 index 8cf1cd2fadaa..a17f190be58e 100644
1350 --- a/virt/kvm/kvm_main.c
1351 +++ b/virt/kvm/kvm_main.c
1352 @@ -52,6 +52,7 @@
1353
1354 #include <asm/processor.h>
1355 #include <asm/io.h>
1356 +#include <asm/ioctl.h>
1357 #include <asm/uaccess.h>
1358 #include <asm/pgtable.h>
1359
1360 @@ -1981,6 +1982,9 @@ static long kvm_vcpu_ioctl(struct file *filp,
1361 if (vcpu->kvm->mm != current->mm)
1362 return -EIO;
1363
1364 + if (unlikely(_IOC_TYPE(ioctl) != KVMIO))
1365 + return -EINVAL;
1366 +
1367 #if defined(CONFIG_S390) || defined(CONFIG_PPC) || defined(CONFIG_MIPS)
1368 /*
1369 * Special cases: vcpu ioctls that are asynchronous to vcpu execution,