| | | |
|---|---|---|
| author | Bob Liu <bob.liu@oracle.com> | 2013-08-06 19:36:17 +0800 |
| committer | Kyle Yan <kyan@codeaurora.org> | 2016-05-31 15:27:11 -0700 |
| commit | 5248c3b4e4e57bd37aaa3a130a6038921becc89a (patch) | |
| tree | ed1f113eef95df78d036c43e309f5d60e6ce229e /mm/vmscan.c | |
| parent | a851b0a3c8b41c0eea4bc6bd7587feada670a267 (diff) | |
mm: add WasActive page flag
Zcache could be ineffective if the compressed memory pool is full of
compressed inactive file pages, most of which will never be used again.
So we pick up pages from the active file list only; those pages will
probably be accessed again, and compressing them in memory can reduce
latency significantly compared with rereading them from disk.
When a file page is shrunk from the active file list to the inactive
file list, its PageActive flag is also cleared.
So add an extra WasActive page flag so that zcache can tell whether a
file page was shrunk from the active list.
Change-Id: Ida1f4db17075d1f6f825ef7ce2b3bae4eb799e3f
Signed-off-by: Bob Liu <bob.liu@oracle.com>
Patch-mainline: linux-mm @ 2013-08-06 11:36:17
[vinmenon@codeaurora.org: trivial merge conflict fixes, checkpatch fixes,
and fixes to the was_active page flag definitions so that they do not
cause compile-time errors with CONFIG_CLEANCACHE disabled. Also remove
the unnecessary use of PG_was_active in PAGE_FLAGS_CHECK_AT_PREP. Since
was_active is a requirement for zcache, make the definitions dependent on
CONFIG_ZCACHE rather than CONFIG_CLEANCACHE.]
Signed-off-by: Vinayak Menon <vinmenon@codeaurora.org>
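
The diff shown on this page is limited to mm/vmscan.c, so the flag definition itself is not visible here. As a rough sketch of what the page-flags side described above might look like, assuming the two-argument PAGEFLAG()/PAGEFLAG_FALSE() macros of kernels from this era (the exact placement in include/linux/page-flags.h is an assumption, not confirmed by this page):

```c
/* Sketch only -- not part of the mm/vmscan.c diff below. */
/* include/linux/page-flags.h */

enum pageflags {
	/* ... existing flags ... */
#ifdef CONFIG_ZCACHE
	PG_was_active,	/* page was on the active LRU before being deactivated */
#endif
	__NR_PAGEFLAGS,
};

#ifdef CONFIG_ZCACHE
/* Generates PageWasActive(), SetPageWasActive(), ClearPageWasActive(). */
PAGEFLAG(WasActive, was_active)
#else
/* No-op stubs so callers still compile when zcache is not configured. */
PAGEFLAG_FALSE(WasActive)
#endif
```

The stub branch matters because the vmscan.c hunks below guard the calls with IS_ENABLED(CONFIG_ZCACHE), which still compiles the SetPageWasActive() call even when the option is off; and, per the note above, PG_was_active is deliberately kept out of PAGE_FLAGS_CHECK_AT_PREP.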
Diffstat (limited to 'mm/vmscan.c')
| | | |
|---|---|---|
| -rw-r--r-- | mm/vmscan.c | 12 |

1 file changed, 11 insertions, 1 deletion
```diff
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 3f702c2f9d58..9c92f7d1be90 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1575,6 +1575,7 @@ putback_inactive_pages(struct lruvec *lruvec, struct list_head *page_list)
 	while (!list_empty(page_list)) {
 		struct page *page = lru_to_page(page_list);
 		int lru;
+		int file;
 
 		VM_BUG_ON_PAGE(PageLRU(page), page);
 		list_del(&page->lru);
@@ -1591,8 +1592,11 @@ putback_inactive_pages(struct lruvec *lruvec, struct list_head *page_list)
 		lru = page_lru(page);
 		add_page_to_lru_list(page, lruvec, lru);
+		file = is_file_lru(lru);
+		if (IS_ENABLED(CONFIG_ZCACHE))
+			if (file)
+				SetPageWasActive(page);
 
 		if (is_active_lru(lru)) {
-			int file = is_file_lru(lru);
 			int numpages = hpage_nr_pages(page);
 			reclaim_stat->recent_rotated[file] += numpages;
 		}
@@ -1917,6 +1921,12 @@ static void shrink_active_list(unsigned long nr_to_scan,
 		}
 
 		ClearPageActive(page);	/* we are de-activating */
+		if (IS_ENABLED(CONFIG_ZCACHE))
+			/*
+			 * For zcache to know whether the page is from active
+			 * file list
+			 */
+			SetPageWasActive(page);
 		list_add(&page->lru, &l_inactive);
 	}
 
```
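
The hunks above only mark pages on the reclaim path; the zcache consumer that reads the hint is not shown on this page. Purely as an illustration of the intended use, a file-page store hook might filter on the flag roughly like this (zcache_store_file_page() and do_compress_and_store() are hypothetical names, not code from this series):

```c
#include <linux/mm.h>
#include <linux/page-flags.h>

static int do_compress_and_store(struct page *page);	/* hypothetical backend */

/*
 * Hypothetical consumer sketch: only accept file pages that were demoted
 * from the active LRU, since those are the ones likely to be read again.
 */
static int zcache_store_file_page(struct page *page)
{
	if (!PageWasActive(page))
		return -EINVAL;		/* never-active page: not worth compressing */

	/* Consume the hint so the page is not treated as hot twice. */
	ClearPageWasActive(page);

	return do_compress_and_store(page);
}
```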
