author		Andrew Morton <akpm@linux-foundation.org>	2017-05-26 11:31:58 +1000
committer	Stephen Rothwell <sfr@canb.auug.org.au>	2017-05-26 11:31:58 +1000
commit		e530a26c2267bf265c3bf5a0a2bb17065b06b317 (patch)
tree		983182ade658c0c70aed3e1762af65bdd54609cf
parent		aefd950b83d2d8cf4d3c270546c8725f866da191 (diff)
mm-make-kswapd-try-harder-to-keep-active-pages-in-cache-fix
fix comment
Cc: Josef Bacik <jbacik@fb.com>
Cc: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-rw-r--r--	mm/vmscan.c	6
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 90080bd10a47..adf4918ec6fa 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2598,9 +2598,9 @@ static bool shrink_node(pg_data_t *pgdat, struct scan_control *sc)
 	/*
 	 * We still want to slightly prefer slab over inactive, so if the
-	 * inactive on this node is large enough and what is pushing us into
-	 * reclaim terretitory then limit our flushing to the inactive list for
-	 * the first go around.
+	 * inactive on this node is large enough and is pushing us into reclaim
+	 * terrtitory then limit our flushing to the inactive list for the first
+	 * go around.
 	 *
 	 * The idea is that with a memcg configured system we will still reclaim
 	 * memcg aware shrinkers, which includes the super block shrinkers. So