author    | Morten Rasmussen <morten.rasmussen@arm.com> | 2014-12-02 14:06:32 +0000
committer | Michael Turquette <mturquette@deferred.io>  | 2014-12-09 20:34:37 -0800
commit    | 67cc03886394320689d30d65d65b386756d47e39 (patch)
tree      | d985babf3cf6d6c9c55fedd2cef733290a5e98ba
parent    | 3c5e3dac9b6bb3efec12938ac662f094bea855c8 (diff)
sched: Include blocked load in weighted_cpuload (eas-next-20141209)
Adds blocked_load_avg to weighted_cpuload() to take recently runnable
tasks into account in load-balancing decisions. This changes the nature
of weighted_cpuload(): it may be non-zero while there are currently no
runnable tasks on the cpu rq. Hence care must be taken in the
load-balance code to use cfs_rq->runnable_load_avg or nr_running when
the current rq status is needed.

This patch is highly experimental and will probably require additional
updates of the users of weighted_cpuload().
cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
-rw-r--r-- | kernel/sched/fair.c | 3
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8553ec17a510..f55ce1ae9b33 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4393,7 +4393,8 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 /* Used instead of source_load when we know the type == 0 */
 static unsigned long weighted_cpuload(const int cpu)
 {
-	return cpu_rq(cpu)->cfs.runnable_load_avg;
+	return cpu_rq(cpu)->cfs.runnable_load_avg +
+		cpu_rq(cpu)->cfs.blocked_load_avg;
 }

 /*