From akpm@osdl.org Tue Mar 14 12:57:28 2006
Message-Id: <200603140159.k2E1xqtI004517@shell0.pdx.osdl.net>
Subject: + sched-store-weighted-load-on-up.patch added to -mm tree
To: kernel@kolivas.org, mingo@elte.hu, pwil3058@bigpond.net.au, mm-commits@vger.kernel.org
From: akpm@osdl.org
Date: Mon, 13 Mar 2006 17:57:28 -0800


The patch titled

     sched: store weighted load on up

has been added to the -mm tree.  Its filename is

     sched-store-weighted-load-on-up.patch

See http://www.zip.com.au/~akpm/linux/patches/stuff/added-to-mm.txt to find
out what to do about this


From: Con Kolivas <kernel@kolivas.org>

Modify the smp nice code to store the per-task load_weight on uniprocessor
kernels as well, so that relative niceness on a single CPU can be assessed.
Also do minor cleanups and uninline set_load_weight().

Signed-off-by: Con Kolivas <kernel@kolivas.org>
Cc: Peter Williams <pwil3058@bigpond.net.au>
Acked-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@osdl.org>

 include/linux/sched.h |    4 ++--
 kernel/sched.c        |   24 ++++++------------------
 2 files changed, 8 insertions(+), 20 deletions(-)

Index: linux-2.6.16-ck1/include/linux/sched.h
===================================================================
--- linux-2.6.16-ck1.orig/include/linux/sched.h	2006-03-20 20:46:44.000000000 +1100
+++ linux-2.6.16-ck1/include/linux/sched.h	2006-03-20 20:46:46.000000000 +1100
@@ -551,9 +551,9 @@ enum idle_type
 /*
  * sched-domains (multiprocessor balancing) declarations:
  */
-#ifdef CONFIG_SMP
 #define SCHED_LOAD_SCALE	128UL	/* increase resolution of load */

+#ifdef CONFIG_SMP
 #define SD_LOAD_BALANCE		1	/* Do load balancing on this domain. */
 #define SD_BALANCE_NEWIDLE	2	/* Balance when about to become idle */
 #define SD_BALANCE_EXEC		4	/* Balance on exec */
@@ -702,8 +702,8 @@ struct task_struct {
 #ifdef __ARCH_WANT_UNLOCKED_CTXSW
 	int oncpu;
 #endif
-	int load_weight;	/* for load balancing purposes */
 #endif
+	int load_weight;	/* for niceness load balancing purposes */
 	int prio, static_prio;
 	struct list_head run_list;
 	prio_array_t *array;
Index: linux-2.6.16-ck1/kernel/sched.c
===================================================================
--- linux-2.6.16-ck1.orig/kernel/sched.c	2006-03-20 20:46:45.000000000 +1100
+++ linux-2.6.16-ck1/kernel/sched.c	2006-03-20 20:46:46.000000000 +1100
@@ -166,12 +166,12 @@
  */

 #define SCALE_PRIO(x, prio) \
-	max(x * (MAX_PRIO - prio) / (MAX_USER_PRIO/2), MIN_TIMESLICE)
+	max(x * (MAX_PRIO - prio) / (MAX_USER_PRIO / 2), MIN_TIMESLICE)

 static unsigned int static_prio_timeslice(int static_prio)
 {
 	if (static_prio < NICE_TO_PRIO(0))
-		return SCALE_PRIO(DEF_TIMESLICE*4, static_prio);
+		return SCALE_PRIO(DEF_TIMESLICE * 4, static_prio);
 	else
 		return SCALE_PRIO(DEF_TIMESLICE, static_prio);
 }
@@ -213,8 +213,8 @@ struct runqueue {
 	 * remote CPUs use both these fields when doing load calculation.
 	 */
 	unsigned long nr_running;
-#ifdef CONFIG_SMP
 	unsigned long raw_weighted_load;
+#ifdef CONFIG_SMP
 	unsigned long cpu_load[3];
 #endif
 	unsigned long long nr_switches;
@@ -668,7 +668,6 @@ static int effective_prio(task_t *p)
 	return prio;
 }

-#ifdef CONFIG_SMP
 /*
  * To aid in avoiding the subversion of "niceness" due to uneven distribution
  * of tasks with abnormal "nice" values across CPUs the contribution that
@@ -691,9 +690,10 @@ static int effective_prio(task_t *p)
 #define RTPRIO_TO_LOAD_WEIGHT(rp) \
 	(PRIO_TO_LOAD_WEIGHT(MAX_RT_PRIO) + LOAD_WEIGHT(rp))

-static inline void set_load_weight(task_t *p)
+static void set_load_weight(task_t *p)
 {
 	if (rt_task(p)) {
+#ifdef CONFIG_SMP
 		if (p == task_rq(p)->migration_thread)
 			/*
 			 * The migration thread does the actual balancing.
@@ -702,6 +702,7 @@ static inline void set_load_weight(task_
 			 */
 			p->load_weight = 0;
 		else
+#endif
 			p->load_weight = RTPRIO_TO_LOAD_WEIGHT(p->rt_priority);
 	} else
 		p->load_weight = PRIO_TO_LOAD_WEIGHT(p->static_prio);
@@ -716,19 +717,6 @@ static inline void dec_raw_weighted_load
 {
 	rq->raw_weighted_load -= p->load_weight;
 }
-#else
-static inline void set_load_weight(task_t *p)
-{
-}
-
-static inline void inc_raw_weighted_load(runqueue_t *rq, const task_t *p)
-{
-}
-
-static inline void dec_raw_weighted_load(runqueue_t *rq, const task_t *p)
-{
-}
-#endif

 static inline void inc_nr_running(task_t *p, runqueue_t *rq)
 {
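
For reference, the nice-to-weight mapping that this patch makes available on
UP kernels can be exercised in a small standalone userspace sketch.  This is
an illustration only, not kernel code: the constant values assume the 2.6.16
defaults with HZ=1000, and the LOAD_WEIGHT/PRIO_TO_LOAD_WEIGHT definitions
are assumed from the surrounding smpnice code that this diff does not show.

/*
 * Hypothetical userspace sketch (not part of the patch): reproduces the
 * nice -> load_weight mapping so the per-CPU weighting the patch stores
 * on UP can be seen.  Constants assume 2.6.16 defaults with HZ=1000;
 * LOAD_WEIGHT is assumed from the smpnice code surrounding this diff.
 */
#include <stdio.h>

#define HZ			1000
#define MIN_TIMESLICE		(5 * HZ / 1000)		/* 5ms */
#define DEF_TIMESLICE		(100 * HZ / 1000)	/* 100ms */
#define MAX_RT_PRIO		100
#define MAX_PRIO		(MAX_RT_PRIO + 40)
#define MAX_USER_PRIO		40
#define NICE_TO_PRIO(nice)	(MAX_RT_PRIO + (nice) + 20)
#define SCHED_LOAD_SCALE	128UL

#define max(a, b)		((a) > (b) ? (a) : (b))

/* as in the patch: nice < 0 tasks scale from a 4x default timeslice */
#define SCALE_PRIO(x, prio) \
	max(x * (MAX_PRIO - prio) / (MAX_USER_PRIO / 2), MIN_TIMESLICE)

static unsigned int static_prio_timeslice(int static_prio)
{
	if (static_prio < NICE_TO_PRIO(0))
		return SCALE_PRIO(DEF_TIMESLICE * 4, static_prio);
	else
		return SCALE_PRIO(DEF_TIMESLICE, static_prio);
}

/* assumed LOAD_WEIGHT: timeslice expressed in SCHED_LOAD_SCALE units */
#define LOAD_WEIGHT(lp) \
	(((lp) * SCHED_LOAD_SCALE) / DEF_TIMESLICE)
#define PRIO_TO_LOAD_WEIGHT(prio) \
	LOAD_WEIGHT(static_prio_timeslice(prio))

int main(void)
{
	int nice;

	/* sample the nice range: -20, -7, 6, 19 */
	for (nice = -20; nice <= 19; nice += 13)
		printf("nice %3d -> load_weight %lu\n", nice,
		       PRIO_TO_LOAD_WEIGHT(NICE_TO_PRIO(nice)));
	return 0;
}

Built with "gcc -o weights weights.c", this prints roughly 1024 for nice -20
down to 6 for nice 19, with nice 0 pinned at SCHED_LOAD_SCALE (128), the
scale on which rq->raw_weighted_load accumulates per-task weights.  With
those weights now maintained on UP too, the relative niceness of tasks
sharing one CPU can be compared directly from the runqueue.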