Magellan Linux

Contents of /trunk/kernel26-magellan/patches-2.6.16-r12/0035-2.6.16-swap-prefetch-fix-lru_cache_add_tail.patch



Revision 72
Mon Jun 5 09:25:38 2006 UTC by niro
File size: 4628 byte(s)
ver bump to 2.6.16-r12:
- updated to linux-2.6.16.19
- updated to ck11

From akpm@osdl.org Thu May 18 17:57:05 2006
Return-Path: <akpm@osdl.org>
X-Original-To: kernel@kolivas.org
Delivered-To: kernel@kolivas.org
Received: from bhhdoa.org.au (bhhdoa.org.au [65.98.99.88])
	by mail.kolivas.org (Postfix) with ESMTP id 8B7BAC60BD
	for <kernel@kolivas.org>; Thu, 18 May 2006 17:57:12 +1000 (EST)
Received: from smtp.osdl.org (smtp.osdl.org [65.172.181.4])
	by bhhdoa.org.au (Postfix) with ESMTP id 3FB7E517F9
	for <kernel@kolivas.org>; Thu, 18 May 2006 15:53:28 +1000 (EST)
Received: from shell0.pdx.osdl.net (fw.osdl.org [65.172.181.6])
	by smtp.osdl.org (8.12.8/8.12.8) with ESMTP id k4I7v5tH000982
	(version=TLSv1/SSLv3 cipher=EDH-RSA-DES-CBC3-SHA bits=168 verify=NO);
	Thu, 18 May 2006 00:57:06 -0700
Received: from localhost.localdomain (shell0.pdx.osdl.net [10.9.0.31])
	by shell0.pdx.osdl.net (8.13.1/8.11.6) with ESMTP id k4I7v4H0012555;
	Thu, 18 May 2006 00:57:05 -0700
Message-Id: <200605180757.k4I7v4H0012555@shell0.pdx.osdl.net>
Subject: + swap-prefetch-fix-lru_cache_add_tail.patch added to -mm tree
To: a.p.zijlstra@chello.nl,
	kernel@kolivas.org,
	mm-commits@vger.kernel.org
From: akpm@osdl.org
Date: Thu, 18 May 2006 00:57:05 -0700
X-Spam-Status: No, hits=1.088 required=5 tests=NO_REAL_NAME
X-Spam-Level: *
X-Spam-Checker-Version: SpamAssassin 2.63-osdl_revision__1.74__
X-MIMEDefang-Filter: osdl$Revision: 1.1 $
X-Scanned-By: MIMEDefang 2.36
X-DSPAM-Result: Whitelisted
X-DSPAM-Confidence: 0.9997
X-DSPAM-Probability: 0.0000
X-DSPAM-Signature: 446c28db96851907813259
X-DSPAM-Factors: 27,
	var+lru, 0.00010,
	var+lru, 0.00010,
	dirty+pages, 0.00010,
	dirty+pages, 0.00010,
	tail+struct, 0.00010,
	tail+struct, 0.00010,
	put+cpu, 0.00010,
	struct+pagevec, 0.00010,
	struct+pagevec, 0.00010,
	deletion+diff, 0.00010,
	devel+mm, 0.00010,
	pagevec, 0.00010,
	pagevec, 0.00010,
	pvec+get, 0.00010,
	pvec+get, 0.00010,
	add+active, 0.00010,
	add+active, 0.00010,
	pvecs, 0.00010,
	pvecs, 0.00010,
	From+Peter, 0.00010,
	static+DEFINE, 0.00010,
	static+DEFINE, 0.00010,
	add+pvec, 0.00010,
	tail+pvec, 0.00010,
	add+struct, 0.00010,
	pagevec+lru, 0.00010,
	pagevec+lru, 0.00010
X-UID: 19440
X-Length: 5060
Status: R
X-Status: NC
X-KMail-EncryptionState:
X-KMail-SignatureState:
X-KMail-MDN-Sent:


The patch titled

     swap-prefetch: fix lru_cache_add_tail()

has been added to the -mm tree.  Its filename is

     swap-prefetch-fix-lru_cache_add_tail.patch

See http://www.zip.com.au/~akpm/linux/patches/stuff/added-to-mm.txt to find
out what to do about this


From: Peter Zijlstra <a.p.zijlstra@chello.nl>

lru_cache_add_tail() uses the inactive per-cpu pagevec.  This causes normal
inactive and inactive-tail inserts to end up on the wrong end of the list.

When the pagevec is completed by lru_cache_add_tail() but still contains
normal inactive pages, all pages will be added to the inactive tail, and
vice versa.

Also *add_drain*() will always complete to the inactive head.

Add a third per-cpu pagevec to alleviate this problem.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Con Kolivas <kernel@kolivas.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
---

 mm/swap.c |    8 +++++++-
 1 files changed, 7 insertions(+), 1 deletion(-)

Index: linux-2.6.16-ck11/mm/swap.c
===================================================================
--- linux-2.6.16-ck11.orig/mm/swap.c	2006-05-21 12:20:15.000000000 +1000
+++ linux-2.6.16-ck11/mm/swap.c	2006-05-21 12:24:27.000000000 +1000
@@ -141,6 +141,7 @@ EXPORT_SYMBOL(mark_page_accessed);
  */
 static DEFINE_PER_CPU(struct pagevec, lru_add_pvecs) = { 0, };
 static DEFINE_PER_CPU(struct pagevec, lru_add_active_pvecs) = { 0, };
+static DEFINE_PER_CPU(struct pagevec, lru_add_tail_pvecs) = { 0, };
 
 void fastcall lru_cache_add(struct page *page)
 {
@@ -162,6 +163,8 @@ void fastcall lru_cache_add_active(struc
 	put_cpu_var(lru_add_active_pvecs);
 }
 
+static inline void __pagevec_lru_add_tail(struct pagevec *pvec);
+
 static void __lru_add_drain(int cpu)
 {
 	struct pagevec *pvec = &per_cpu(lru_add_pvecs, cpu);
@@ -172,6 +175,9 @@ static void __lru_add_drain(int cpu)
 	pvec = &per_cpu(lru_add_active_pvecs, cpu);
 	if (pagevec_count(pvec))
 		__pagevec_lru_add_active(pvec);
+	pvec = &per_cpu(lru_add_tail_pvecs, cpu);
+	if (pagevec_count(pvec))
+		__pagevec_lru_add_tail(pvec);
 }
 
 void lru_add_drain(void)
@@ -417,7 +423,7 @@ static inline void __pagevec_lru_add_tai
  */
 void fastcall lru_cache_add_tail(struct page *page)
 {
-	struct pagevec *pvec = &get_cpu_var(lru_add_pvecs);
+	struct pagevec *pvec = &get_cpu_var(lru_add_tail_pvecs);
 
 	page_cache_get(page);
 	if (!pagevec_add(pvec, page))