| field | value | date |
|---|---|---|
| author | Minchan Kim <minchan@kernel.org> | 2013-05-09 16:21:27 +0900 |
| committer | Kyle Yan <kyan@codeaurora.org> | 2016-06-22 14:43:57 -0700 |
| commit | 06de050ac6a250930f5b95dc153744d64dbc13c4 | |
| tree | 7d78780c17783725e3699e0a414160ce52dcd500 /include/linux | |
| parent | a4e92011d44d60ad33dca31785682ddb82c44e40 | |
mm: Enhance per process reclaim to consider shared pages
Some pages can be shared by several processes (e.g. libc).
In that case it is wasteful to reclaim them right away.
This patch makes the VM keep such pages in memory until the last
task mapping them tries to reclaim them, so a shared page is only
reclaimed once every task mapping it has swapped it out.
This feature does not handle non-linear mappings on ramfs because
doing so is expensive, does not guarantee reclaim, and is not a
common case.
Change-Id: I7e5f34f2e947f5db6d405867fe2ad34863ca40f7
Signed-off-by: Sangseok Lee <sangseok.lee@lge.com>
Signed-off-by: Minchan Kim <minchan@kernel.org>
Patch-mainline: linux-mm @ 9 May 2013 16:21:27
[vinmenon@codeaurora.org: trivial merge conflict fixes + changes
to make the patch work with 3.18 kernel]
Signed-off-by: Vinayak Menon <vinmenon@codeaurora.org>
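The diffstat below covers only the header change; the corresponding mm/vmscan.c and mm/rmap.c changes are not shown here. As a rough, hypothetical sketch of the policy described above (not the actual patch body), a per-process reclaim loop that receives the target VMA could unmap each page from that VMA only and skip freeing it while other tasks still map it:

```c
/*
 * Hypothetical sketch only -- the real implementation lives in
 * mm/vmscan.c, which is not part of this diffstat.  It illustrates the
 * policy from the commit message: unmap a page from the target VMA,
 * but keep it in memory while any other process still maps it.
 */
#include <linux/mm.h>
#include <linux/rmap.h>
#include <linux/list.h>

static unsigned long reclaim_pages_from_list_sketch(struct list_head *page_list,
						    struct vm_area_struct *vma)
{
	unsigned long nr_reclaimed = 0;
	struct page *page, *next;

	list_for_each_entry_safe(page, next, page_list, lru) {
		/* Unmap only the mapping belonging to the target VMA. */
		try_to_unmap(page, TTU_UNMAP, vma);

		/*
		 * Shared pages (e.g. libc) stay resident until the last
		 * task mapping them is also reclaimed.
		 */
		if (page_mapped(page))
			continue;

		nr_reclaimed++;
		/* ...pageout/free path omitted... */
	}
	return nr_reclaimed;
}
```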
Diffstat (limited to 'include/linux')
| -rw-r--r-- | include/linux/rmap.h | 9 |
1 file changed, 6 insertions(+), 3 deletions(-)
```diff
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 7536f000c77c..e72b85737a99 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -12,7 +12,8 @@
 extern int isolate_lru_page(struct page *page);
 extern void putback_lru_page(struct page *page);
-extern unsigned long reclaim_pages_from_list(struct list_head *page_list);
+extern unsigned long reclaim_pages_from_list(struct list_head *page_list,
+					struct vm_area_struct *vma);
 
 /*
  * The anon_vma heads a list of private "related" vmas, to scan if
@@ -180,7 +181,8 @@ int page_referenced(struct page *, int is_locked,
 
 #define TTU_ACTION(x) ((x) & TTU_ACTION_MASK)
 
-int try_to_unmap(struct page *, enum ttu_flags flags);
+int try_to_unmap(struct page *, enum ttu_flags flags,
+					struct vm_area_struct *vma);
 
 /*
  * Used by uprobes to replace a userspace page safely
@@ -236,6 +238,7 @@ int page_mapped_in_vma(struct page *page, struct vm_area_struct *vma);
  */
 struct rmap_walk_control {
 	void *arg;
+	struct vm_area_struct *target_vma;
 	int (*rmap_one)(struct page *page, struct vm_area_struct *vma,
 					unsigned long addr, void *arg);
 	int (*done)(struct page *page);
@@ -259,7 +262,7 @@ static inline int page_referenced(struct page *page, int is_locked,
 	return 0;
 }
 
-#define try_to_unmap(page, refs)	SWAP_FAIL
+#define try_to_unmap(page, refs, vma)	SWAP_FAIL
 
 static inline int page_mkclean(struct page *page)
 {
```
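The new target_vma field lets an rmap walk be limited to a single VMA instead of every VMA that maps the page. A hedged sketch of how a walker might consult it follows; the actual mm/rmap.c change is not part of this diffstat, and vma_address() is assumed to be the usual rmap.c helper that computes the page's address within a VMA:

```c
/*
 * Hypothetical sketch: when rwc->target_vma is set, visit only that VMA
 * rather than walking the whole anon_vma interval tree.  Locking and the
 * file-backed path are omitted; this is not the literal patch text.
 */
static int rmap_walk_anon_sketch(struct page *page, struct rmap_walk_control *rwc)
{
	unsigned long address;

	if (rwc->target_vma) {
		address = vma_address(page, rwc->target_vma);
		return rwc->rmap_one(page, rwc->target_vma, address, rwc->arg);
	}

	/* ...otherwise fall back to the normal per-anon_vma walk... */
	return SWAP_AGAIN;
}
```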
