		Semantics and Behavior of Atomic and
			 Bitmask Operations

			  David S. Miller

	This document is intended to serve as a guide to Linux port
maintainers on how to implement atomic counter, bitops, and spinlock
interfaces properly.

	The atomic_t type should be defined as a signed integer.
Also, it should be made opaque such that any kind of cast to a normal
C integer type will fail.  Something like the following should
suffice:

	typedef struct { volatile int counter; } atomic_t;

	The first operations to implement for atomic_t's are the
initializers and plain reads.

	#define ATOMIC_INIT(i)		{ (i) }
	#define atomic_set(v, i)	((v)->counter = (i))

The first macro is used in definitions, such as:

	static atomic_t my_counter = ATOMIC_INIT(1);

The second interface can be used at runtime, as in:

	struct foo { atomic_t counter; };
	...

	struct foo *k;

	k = kmalloc(sizeof(*k), GFP_KERNEL);
	if (!k)
		return -ENOMEM;
	atomic_set(&k->counter, 0);

Next, we have:

	#define atomic_read(v)	((v)->counter)

which simply reads the current value of the counter.

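As a minimal usage sketch, building on the struct foo example above:

	if (atomic_read(&k->counter) == 0)
		printk("no users\n");
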
Now, we move onto the actual atomic operation interfaces.

	void atomic_add(int i, atomic_t *v);
	void atomic_sub(int i, atomic_t *v);
	void atomic_inc(atomic_t *v);
	void atomic_dec(atomic_t *v);

These four routines add and subtract integral values to/from the given
atomic_t value.  The first two routines pass explicit integers by
which to make the adjustment, whereas the latter two use an implicit
adjustment value of "1".

One very important aspect of these routines is that they DO NOT
require any explicit memory barriers.  They need only perform the
atomic_t counter update in an SMP safe manner.

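A typical barrier-free use is a plain statistics counter; here is a
minimal sketch (stat_packets and count_packet() are hypothetical
names used only for illustration):

	static atomic_t stat_packets = ATOMIC_INIT(0);

	static void count_packet(void)
	{
		/* No ordering with surrounding memory operations is
		 * needed; only the counter update itself must be
		 * SMP safe.
		 */
		atomic_inc(&stat_packets);
	}
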
Next, we have:

	int atomic_inc_return(atomic_t *v);
	int atomic_dec_return(atomic_t *v);

These routines add 1 and subtract 1, respectively, from the given
atomic_t and return the new counter value after the operation is
performed.

Unlike the above routines, it is required that explicit memory
barriers are performed before and after the operation.  It must be
done such that all memory operations before and after the atomic
operation calls are strongly ordered with respect to the atomic
operation itself.

For example, it should behave as if a smp_mb() call existed both
before and after the atomic operation.

If the atomic instructions used in an implementation provide explicit
memory barrier semantics which satisfy the above requirements, that is
fine as well.

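As a caller-side illustration, the returned value is commonly used to
hand out unique identifiers; a minimal sketch (next_id and alloc_id()
are hypothetical names):

	static atomic_t next_id = ATOMIC_INIT(0);

	static int alloc_id(void)
	{
		/* Concurrent callers each observe a distinct value,
		 * and the required barrier semantics order the
		 * update with the surrounding code.
		 */
		return atomic_inc_return(&next_id);
	}
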
Let's move on:

	int atomic_add_return(int i, atomic_t *v);
	int atomic_sub_return(int i, atomic_t *v);

These behave just like atomic_{inc,dec}_return() except that an
explicit counter adjustment is given instead of the implicit "1".
This means that like atomic_{inc,dec}_return(), the memory barrier
semantics are required.

Next:

	int atomic_inc_and_test(atomic_t *v);
	int atomic_dec_and_test(atomic_t *v);

These two routines increment and decrement by 1, respectively, the
given atomic counter.  They return a boolean indicating whether the
resulting counter value was zero or not.

They require explicit memory barrier semantics around the operation,
as above.

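The canonical use is dropping a reference count; a minimal sketch
(struct obj, refcnt, and obj_destroy() are hypothetical here, though
a fuller example appears later in this document):

	static void obj_put(struct obj *obj)
	{
		/* Only the thread that drops the final reference
		 * sees "true" and may free the object; the barrier
		 * semantics ensure all prior stores to the object
		 * are visible before the counter reaches zero.
		 */
		if (atomic_dec_and_test(&obj->refcnt))
			obj_destroy(obj);
	}
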
	int atomic_sub_and_test(int i, atomic_t *v);

This is identical to atomic_dec_and_test() except that an explicit
decrement is given instead of the implicit "1".  It requires explicit
memory barrier semantics around the operation.

	int atomic_add_negative(int i, atomic_t *v);

The given increment is added to the given atomic counter value.  A
boolean is returned which indicates whether the resulting counter
value is negative.  It requires explicit memory barrier semantics
around the operation.

If a caller requires memory barrier semantics around an atomic_t
operation which does not return a value, a set of interfaces are
defined which accomplish this:

	void smp_mb__before_atomic_dec(void);
	void smp_mb__after_atomic_dec(void);
	void smp_mb__before_atomic_inc(void);
	void smp_mb__after_atomic_inc(void);

For example, smp_mb__before_atomic_dec() can be used like so:

	obj->dead = 1;
	smp_mb__before_atomic_dec();
	atomic_dec(&obj->ref_count);

It makes sure that all memory operations preceding the atomic_dec()
call are strongly ordered with respect to the atomic counter
operation.  In the above example, it guarantees that the assignment of
"1" to obj->dead will be globally visible to other cpus before the
atomic counter decrement.

Without the explicit smp_mb__before_atomic_dec() call, the
implementation could legally make the atomic counter update visible
to other cpus before the "obj->dead = 1;" assignment.

The other three interfaces listed are used to provide explicit
ordering with respect to memory operations after an atomic_dec() call
(smp_mb__after_atomic_dec()) and around atomic_inc() calls
(smp_mb__{before,after}_atomic_inc()).

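For symmetry, here is a minimal sketch of one of the "after" variants
(obj->ready is a hypothetical flag used only for illustration):

	atomic_inc(&obj->ref_count);
	smp_mb__after_atomic_inc();
	obj->ready = 1;

This guarantees that the atomic counter increment is globally visible
to other cpus before the assignment to obj->ready.
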
A missing memory barrier in the cases where they are required by the
atomic_t implementation above can have disastrous results.  Here is
an example, which follows a pattern occurring frequently in the Linux
kernel.  It is the use of atomic counters to implement reference
counting, and it works such that once the counter falls to zero it can
be guaranteed that no other entity can be accessing the object:

	static void obj_list_add(struct obj *obj)
	{
		obj->active = 1;
		list_add(&obj->list, &global_list);
	}

	static void obj_list_del(struct obj *obj)
	{
		list_del(&obj->list);
		obj->active = 0;
	}

	static void obj_destroy(struct obj *obj)
	{
		BUG_ON(obj->active);
		kfree(obj);
	}

	struct obj *obj_list_peek(struct list_head *head)
	{
		if (!list_empty(head)) {
			struct obj *obj;

			obj = list_entry(head->next, struct obj, list);
			atomic_inc(&obj->refcnt);
			return obj;
		}
		return NULL;
	}

	void obj_poke(void)
	{
		struct obj *obj;

		spin_lock(&global_list_lock);
		obj = obj_list_peek(&global_list);
		spin_unlock(&global_list_lock);

		if (obj) {
			obj->ops->poke(obj);
			if (atomic_dec_and_test(&obj->refcnt))
				obj_destroy(obj);
		}
	}

	void obj_timeout(struct obj *obj)
	{
		spin_lock(&global_list_lock);
		obj_list_del(obj);
		spin_unlock(&global_list_lock);

		if (atomic_dec_and_test(&obj->refcnt))
			obj_destroy(obj);
	}

(This is a simplification of the ARP queue management in the generic
neighbour discovery code of the networking subsystem.  Olaf Kirch
found a bug wrt. memory barriers in kfree_skb() that exposed the
atomic_t memory barrier requirements quite clearly.)

Given the above scheme, it must be the case that the obj->active
update done by the obj list deletion be visible to other processors
before the atomic counter decrement is performed.

Otherwise, the counter could fall to zero, yet obj->active would still
be set, thus triggering the assertion in obj_destroy().  The error
sequence looks like this:

	cpu 0				cpu 1
	obj_poke()			obj_timeout()
	obj = obj_list_peek();
	... gains ref to obj, refcnt=2
					obj_list_del(obj);
					obj->active = 0 ...
					... visibility delayed ...
	atomic_dec_and_test()
	... refcnt drops to 1 ...
					atomic_dec_and_test()
					... refcnt drops to 0 ...
					obj_destroy()
					BUG() triggers since obj->active
					still seen as one
	obj->active update visibility occurs

With the memory barrier semantics required of the atomic_t operations
which return values, the above sequence of memory visibility can never
happen.  Specifically, in the above case the atomic_dec_and_test()
counter decrement would not become globally visible until the
obj->active update does.

As a historical note, 32-bit Sparc used to only allow usage of
24 bits of its atomic_t type.  This was because it used 8 bits
as a spinlock for SMP safety.  Sparc32 lacked a "compare and swap"
type instruction.  However, 32-bit Sparc has since been moved over
to a "hash table of spinlocks" scheme, which allows the full 32-bit
counter to be realized.  Essentially, an array of spinlocks is
indexed into based upon the address of the atomic_t being operated
on, and that lock protects the atomic operation.  Parisc uses the
same scheme.

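To make the scheme concrete, here is a minimal sketch of what a
hashed-spinlock atomic_add() could look like.  This is purely
illustrative; ATOMIC_HASH_SIZE, atomic_locks, and atomic_hash() are
hypothetical names, not the actual Sparc32 code:

	#define ATOMIC_HASH_SIZE	32
	static spinlock_t atomic_locks[ATOMIC_HASH_SIZE];

	static spinlock_t *atomic_hash(atomic_t *v)
	{
		/* Index the lock array by the address of the
		 * atomic_t being operated upon.
		 */
		return &atomic_locks[((unsigned long)v / sizeof(*v))
				     % ATOMIC_HASH_SIZE];
	}

	void atomic_add(int i, atomic_t *v)
	{
		spinlock_t *lock = atomic_hash(v);

		spin_lock(lock);
		v->counter += i;
		spin_unlock(lock);
	}
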
Another note is that the atomic_t operations returning values are
extremely slow on an old 386.

We will now cover the atomic bitmask operations.  You will find that
their SMP and memory barrier semantics are similar in shape and scope
to the atomic_t ops above.

Native atomic bit operations are defined to operate on objects aligned
to the size of an "unsigned long" C data type, and are at least of
that size.  The endianness of the bits within each "unsigned long" is
the native endianness of the cpu.

	void set_bit(unsigned long nr, volatile unsigned long *addr);
	void clear_bit(unsigned long nr, volatile unsigned long *addr);
	void change_bit(unsigned long nr, volatile unsigned long *addr);

These routines set, clear, and change, respectively, the bit number
indicated by "nr" on the bit mask pointed to by "addr".

They must execute atomically, yet there are no implicit memory barrier
semantics required of these interfaces.

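As a minimal usage sketch (the flag word and bit number here are
hypothetical):

	static unsigned long obj_flags;		/* shared flag word */
	#define OBJ_DIRTY	0		/* bit number, not a mask */

	set_bit(OBJ_DIRTY, &obj_flags);		/* atomic, but no barrier */
	clear_bit(OBJ_DIRTY, &obj_flags);

Note that these interfaces take a bit number, not a mask.
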
	int test_and_set_bit(unsigned long nr, volatile unsigned long *addr);
	int test_and_clear_bit(unsigned long nr, volatile unsigned long *addr);
	int test_and_change_bit(unsigned long nr, volatile unsigned long *addr);

Like the above, except that these routines return a boolean which
indicates whether the changed bit was set _BEFORE_ the atomic bit
operation.

WARNING! It is incredibly important that the value be a boolean,
ie. "0" or "1".  Do not try to be fancy and save a few instructions by
declaring the above to return "long" and just returning something like
"old_val & mask" because that will not work.

For one thing, this return value gets truncated to int in many code
paths using these interfaces, so on 64-bit if the bit is set in the
upper 32-bits then testers will never see that.

One great example of where this problem crops up is the thread_info
flag operations.  Routines such as test_and_set_ti_thread_flag() chop
the return value into an int.  There are other places where things
like this occur as well.

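A minimal sketch of a correct implementation tail, assuming a
hypothetical implementation that has already computed "old_val" (the
word's previous value) and "mask" (the bit's mask) atomically:

	/* Normalize to a strict boolean ("0" or "1") so the
	 * result survives truncation to "int" in callers.
	 */
	return (old_val & mask) != 0;
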
These routines, like the atomic_t counter operations returning values,
require explicit memory barrier semantics around their execution.  All
memory operations before the atomic bit operation call must be made
visible globally before the atomic bit operation is made visible.
Likewise, the atomic bit operation must be visible globally before any
subsequent memory operation is made visible.  For example:

	obj->dead = 1;
	if (test_and_set_bit(0, &obj->flags))
		/* ... */;
	obj->killed = 1;

The implementation of test_and_set_bit() must guarantee that
"obj->dead = 1;" is visible to cpus before the atomic memory operation
done by test_and_set_bit() becomes visible.  Likewise, the atomic
memory operation done by test_and_set_bit() must become visible before
"obj->killed = 1;" is visible.

Finally there is the basic operation:

	int test_bit(unsigned long nr, __const__ volatile unsigned long *addr);

which returns a boolean indicating if bit "nr" is set in the bitmask
pointed to by "addr".

If explicit memory barriers are required around clear_bit() (which
does not return a value, and thus does not need to provide memory
barrier semantics), two interfaces are provided:

	void smp_mb__before_clear_bit(void);
	void smp_mb__after_clear_bit(void);

They are used as follows, and are akin to their atomic_t operation
brothers:

	/* All memory operations before this call will
	 * be globally visible before the clear_bit().
	 */
	smp_mb__before_clear_bit();
	clear_bit( ... );

	/* The clear_bit() will be visible before all
	 * subsequent memory operations.
	 */
	smp_mb__after_clear_bit();

Finally, there are non-atomic versions of the bitmask operations
provided.  They are used in contexts where some other higher-level SMP
locking scheme is being used to protect the bitmask, and thus less
expensive non-atomic operations may be used in the implementation.
They have names similar to the above bitmask operation interfaces,
except that two underscores are prefixed to the interface name.

	void __set_bit(unsigned long nr, volatile unsigned long *addr);
	void __clear_bit(unsigned long nr, volatile unsigned long *addr);
	void __change_bit(unsigned long nr, volatile unsigned long *addr);
	int __test_and_set_bit(unsigned long nr, volatile unsigned long *addr);
	int __test_and_clear_bit(unsigned long nr, volatile unsigned long *addr);
	int __test_and_change_bit(unsigned long nr, volatile unsigned long *addr);

These non-atomic variants also do not require any special memory
barrier semantics.

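A minimal sketch of their intended use, assuming a hypothetical bitmap
protected by a hypothetical spinlock:

	static spinlock_t map_lock = SPIN_LOCK_UNLOCKED;  /* protects map */
	static unsigned long map[4];

	void mark_slot(unsigned long nr)
	{
		spin_lock(&map_lock);
		/* The lock serializes all access to "map", so the
		 * cheaper non-atomic variant is sufficient here.
		 */
		__set_bit(nr, map);
		spin_unlock(&map_lock);
	}
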
The routines xchg() and cmpxchg() need the same exact memory barriers
as the atomic and bit operations returning values.

Spinlocks and rwlocks have memory barrier expectations as well.
The rule to follow is simple (a short sketch follows the list):

	1) When acquiring a lock, the implementation must make it
	   globally visible before any subsequent memory operation.

	2) When releasing a lock, the implementation must make it such
	   that all previous memory operations are globally visible
	   before the lock release.

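Here is that sketch, using hypothetical names:

	spin_lock(&obj->lock);		/* rule 1: the acquisition is
					 * visible before ...
					 */
	obj->count++;			/* ... this protected update, */
	spin_unlock(&obj->lock);	/* which rule 2 makes visible
					 * before the lock release.
					 */
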
Which finally brings us to _atomic_dec_and_lock().  There is an
architecture-neutral version implemented in lib/dec_and_lock.c,
but most platforms will wish to optimize this in assembler.

	int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock);

Atomically decrement the given counter, and if it will drop to zero
atomically acquire the given spinlock and perform the decrement
of the counter to zero.  If it does not drop to zero, do nothing
with the spinlock.

It is actually pretty simple to get the memory barrier correct.
Simply satisfy the spinlock grab requirements, which is to make
sure the spinlock operation is globally visible before any
subsequent memory operation.

We can demonstrate this operation more clearly if we define
an abstract atomic operation:

	long cas(long *mem, long old, long new);

"cas" stands for "compare and swap".  It atomically:

	1) Compares "old" with the value currently at "mem".
	2) If they are equal, "new" is written to "mem".
	3) Regardless, the current value at "mem" is returned.

As an example usage, here is what an atomic counter update
might look like:

	void example_atomic_inc(long *counter)
	{
		long old, new, ret;

		while (1) {
			old = *counter;
			new = old + 1;

			ret = cas(counter, old, new);
			if (ret == old)
				break;
		}
	}

Let's use cas() in order to build a pseudo-C atomic_dec_and_lock():

	int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
	{
		long old, new, ret;
		int went_to_zero;

		went_to_zero = 0;
		while (1) {
			old = atomic_read(atomic);
			new = old - 1;
			if (new == 0) {
				went_to_zero = 1;
				spin_lock(lock);
			}
			ret = cas(atomic, old, new);
			if (ret == old)
				break;
			if (went_to_zero) {
				spin_unlock(lock);
				went_to_zero = 0;
			}
		}

		return went_to_zero;
	}

Now, as far as memory barriers go, as long as spin_lock()
strictly orders all subsequent memory operations (including
the cas()) with respect to itself, things will be fine.

Said another way, _atomic_dec_and_lock() must guarantee that
a counter dropping to zero is never made visible before the
spinlock being acquired.

Note that this also means that for the case where the counter
is not dropping to zero, there are no memory ordering
requirements.
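
For completeness, here is a minimal caller-side sketch of the pattern
this interface enables (the object and lock names are hypothetical):

	if (_atomic_dec_and_lock(&obj->refcnt, &obj_list_lock)) {
		/* The counter hit zero and we hold the lock, so no
		 * other thread can gain a new reference via the
		 * list; it is safe to unlink and free the object.
		 */
		list_del(&obj->list);
		spin_unlock(&obj_list_lock);
		kfree(obj);
	}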