Show a cover letter.

GET /api/1.1/covers/2230792/?format=api
HTTP 200 OK
Allow: GET, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 2230792,
    "url": "http://patchwork.ozlabs.org/api/1.1/covers/2230792/?format=api",
    "web_url": "http://patchwork.ozlabs.org/project/linuxppc-dev/cover/20260430040427.4672-1-baohua@kernel.org/",
    "project": {
        "id": 2,
        "url": "http://patchwork.ozlabs.org/api/1.1/projects/2/?format=api",
        "name": "Linux PPC development",
        "link_name": "linuxppc-dev",
        "list_id": "linuxppc-dev.lists.ozlabs.org",
        "list_email": "linuxppc-dev@lists.ozlabs.org",
        "web_url": "https://github.com/linuxppc/wiki/wiki",
        "scm_url": "https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git",
        "webscm_url": "https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git/"
    },
    "msgid": "<20260430040427.4672-1-baohua@kernel.org>",
    "date": "2026-04-30T04:04:22",
    "name": "[v2,0/5] mm: reduce mmap_lock contention and improve page fault performance",
    "submitter": {
        "id": 48512,
        "url": "http://patchwork.ozlabs.org/api/1.1/people/48512/?format=api",
        "name": "Barry Song",
        "email": "baohua@kernel.org"
    },
    "mbox": "http://patchwork.ozlabs.org/project/linuxppc-dev/cover/20260430040427.4672-1-baohua@kernel.org/mbox/",
    "series": [
        {
            "id": 502184,
            "url": "http://patchwork.ozlabs.org/api/1.1/series/502184/?format=api",
            "web_url": "http://patchwork.ozlabs.org/project/linuxppc-dev/list/?series=502184",
            "date": "2026-04-30T04:04:27",
            "name": "mm: reduce mmap_lock contention and improve page fault performance",
            "version": 2,
            "mbox": "http://patchwork.ozlabs.org/series/502184/mbox/"
        }
    ],
    "comments": "http://patchwork.ozlabs.org/api/covers/2230792/comments/",
    "headers": {
        "Return-Path": "\n <linuxppc-dev+bounces-20331-incoming=patchwork.ozlabs.org@lists.ozlabs.org>",
        "X-Original-To": [
            "incoming@patchwork.ozlabs.org",
            "linuxppc-dev@lists.ozlabs.org"
        ],
        "Delivered-To": "patchwork-incoming@legolas.ozlabs.org",
        "Authentication-Results": [
            "legolas.ozlabs.org;\n\tdkim=pass (2048-bit key;\n unprotected) header.d=kernel.org header.i=@kernel.org header.a=rsa-sha256\n header.s=k20201202 header.b=XmywiKhu;\n\tdkim-atps=neutral",
            "legolas.ozlabs.org;\n spf=pass (sender SPF authorized) smtp.mailfrom=lists.ozlabs.org\n (client-ip=2404:9400:21b9:f100::1; helo=lists.ozlabs.org;\n envelope-from=linuxppc-dev+bounces-20331-incoming=patchwork.ozlabs.org@lists.ozlabs.org;\n receiver=patchwork.ozlabs.org)",
            "lists.ozlabs.org;\n arc=none smtp.remote-ip=172.105.4.254",
            "lists.ozlabs.org;\n dmarc=pass (p=quarantine dis=none) header.from=kernel.org",
            "lists.ozlabs.org;\n\tdkim=pass (2048-bit key;\n unprotected) header.d=kernel.org header.i=@kernel.org header.a=rsa-sha256\n header.s=k20201202 header.b=XmywiKhu;\n\tdkim-atps=neutral",
            "lists.ozlabs.org;\n spf=pass (sender SPF authorized) smtp.mailfrom=kernel.org\n (client-ip=172.105.4.254; helo=tor.source.kernel.org;\n envelope-from=baohua@kernel.org; receiver=lists.ozlabs.org)"
        ],
        "Received": [
            "from lists.ozlabs.org (lists.ozlabs.org\n [IPv6:2404:9400:21b9:f100::1])\n\t(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n\t key-exchange x25519)\n\t(No client certificate requested)\n\tby legolas.ozlabs.org (Postfix) with ESMTPS id 4g5grY4WV4z1yHv\n\tfor <incoming@patchwork.ozlabs.org>; Thu, 30 Apr 2026 14:18:05 +1000 (AEST)",
            "from boromir.ozlabs.org (localhost [127.0.0.1])\n\tby lists.ozlabs.org (Postfix) with ESMTP id 4g5grX0j8rz2xm1;\n\tThu, 30 Apr 2026 14:18:04 +1000 (AEST)",
            "from tor.source.kernel.org (tor.source.kernel.org [172.105.4.254])\n\t(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n\t key-exchange x25519 server-signature RSA-PSS (2048 bits) server-digest\n SHA256)\n\t(No client certificate requested)\n\tby lists.ozlabs.org (Postfix) with ESMTPS id 4g5gYN32nZz2xMY\n\tfor <linuxppc-dev@lists.ozlabs.org>; Thu, 30 Apr 2026 14:04:56 +1000 (AEST)",
            "from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])\n\tby tor.source.kernel.org (Postfix) with ESMTP id 9FF0960582;\n\tThu, 30 Apr 2026 04:04:53 +0000 (UTC)",
            "by smtp.kernel.org (Postfix) with ESMTPSA id 6E6B0C2BCB8;\n\tThu, 30 Apr 2026 04:04:48 +0000 (UTC)"
        ],
        "ARC-Seal": "i=1; a=rsa-sha256; d=lists.ozlabs.org; s=201707; t=1777521897;\n\tcv=none;\n b=WLwck7rcw9T2VbDJ2FDOpLdAsTCDvBRPCHmJ7VKxSgw0v625mwgeaxzlEuh86/bdHcwLKS3ulFTpHQgKq2daym6WnGitWddsutOysANQvz3VxA/qbVWjlZYxYJDZ/dioXCyY/0v8oDyt+UMYVFn893ANRWWZ0K4OjtDzp5pesEppBJWaes/bTsO4iAXP7S9bTWVYgtnJcI2MbpPUhtAh25Ph8KWkWnKf+lFIk58T178eC8cAHDMNT9RRU79031a+w74bzV+vmRACAxSr/jZmFcZh/fbpz6HQx1P509fQ7QkT3gvMwi6EJz54+4JZtxilLNEgU60FWxtx+P+Pw8/0Wg==",
        "ARC-Message-Signature": "i=1; a=rsa-sha256; d=lists.ozlabs.org; s=201707;\n\tt=1777521897; c=relaxed/relaxed;\n\tbh=snB875P95SUbpGSGDIAQLGtDYkvzFzItmsNJKMa2UQ4=;\n\th=From:To:Cc:Subject:Date:Message-Id:MIME-Version:Content-Type;\n b=ofD2+ID9vEwJS40UayTF+CzbVasw8zX5C5i1gTMCi/sa159gaNco2pHX0kOE+Tr7tH9PmDlo0HrIFiXrmOl88iZF4RaTrVgnG964NZrsFBHgpnFYIRjkQwf6zNzO0v6LdDGfRPq5eemKx6Ybs/HDXmu2Htv3DHZnV+/zWtjIK+0zpufhnNW7wPfeeeJqY7oa+I25PTtvPw9eP78sHsoBT9r5euTPxcWDpe6lCDgdQSXIZ4b0hiLIkkSUpc7UAbE8+pus+WlFFPfOaMlbbYrIjhHJg7tqSTeIM+Xom6yiKaDWsOzCjmNlyWL5gsk+kFQ7BMPMnDPwDGusXSka+bIMWQ==",
        "ARC-Authentication-Results": "i=1; lists.ozlabs.org;\n dmarc=pass (p=quarantine dis=none) header.from=kernel.org;\n dkim=pass (2048-bit key;\n unprotected) header.d=kernel.org header.i=@kernel.org header.a=rsa-sha256\n header.s=k20201202 header.b=XmywiKhu; dkim-atps=neutral;\n spf=pass (client-ip=172.105.4.254; helo=tor.source.kernel.org;\n envelope-from=baohua@kernel.org;\n receiver=lists.ozlabs.org) smtp.mailfrom=kernel.org",
        "DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;\n\ts=k20201202; t=1777521893;\n\tbh=SQbc6yap3IwxIjf5vkMZhpQlq9soeHHQjAS0yGku44k=;\n\th=From:To:Cc:Subject:Date:From;\n\tb=XmywiKhuyr1+iRfR3PHU6v9GONpwVAKCcc0eTlQU9yjUnF6uRHR/EBTZ/UN8UVVVB\n\t kG/ZtEEtVbHvFaU+7KcGKU5U2/IzqdlsTRcV5sdP0HYPRqvAozZqjFb0EOASauV2xC\n\t xUaKzad/uIF07HjedFILdewu7IjvV/gAtfxZbzVm8Kt/pEV3hdCFjBHz3/a9JeCr35\n\t 5oYSoSs/WSqmNPs15idDrhOJSsG7WKscp9W5QVhlflYcPmOPSGHTGSFBqz1kz4GAV0\n\t apuUfkjNj3pyaP8/fgBFyApKG9AU7GKaHlnxXjL4Jva5JTKwRj7lbU5sCFB18KTvNd\n\t ycxSlp7jRcFKg==",
        "From": "\"Barry Song (Xiaomi)\" <baohua@kernel.org>",
        "To": "akpm@linux-foundation.org,\n\tlinux-mm@kvack.org,\n\twilly@infradead.org",
        "Cc": "david@kernel.org,\n\tljs@kernel.org,\n\tliam@infradead.org,\n\tvbabka@kernel.org,\n\trppt@kernel.org,\n\tsurenb@google.com,\n\tmhocko@suse.com,\n\tjack@suse.cz,\n\tpfalcato@suse.de,\n\twanglian@kylinos.cn,\n\tchentao@kylinos.cn,\n\tlianux.mm@gmail.com,\n\tkunwu.chan@gmail.com,\n\tliyangouwen1@oppo.com,\n\tchrisl@kernel.org,\n\tkasong@tencent.com,\n\tshikemeng@huaweicloud.com,\n\tnphamcs@gmail.com,\n\tbhe@redhat.com,\n\tyoungjun.park@lge.com,\n\tlinux-arm-kernel@lists.infradead.org,\n\tlinux-kernel@vger.kernel.org,\n\tloongarch@lists.linux.dev,\n\tlinuxppc-dev@lists.ozlabs.org,\n\tlinux-riscv@lists.infradead.org,\n\tlinux-s390@vger.kernel.org,\n\t\"Barry Song (Xiaomi)\" <baohua@kernel.org>",
        "Subject": "[PATCH v2 0/5] mm: reduce mmap_lock contention and improve page fault\n performance",
        "Date": "Thu, 30 Apr 2026 12:04:22 +0800",
        "Message-Id": "<20260430040427.4672-1-baohua@kernel.org>",
        "X-Mailer": "git-send-email 2.39.3 (Apple Git-146)",
        "X-Mailing-List": "linuxppc-dev@lists.ozlabs.org",
        "List-Id": "<linuxppc-dev.lists.ozlabs.org>",
        "List-Help": "<mailto:linuxppc-dev+help@lists.ozlabs.org>",
        "List-Owner": "<mailto:linuxppc-dev+owner@lists.ozlabs.org>",
        "List-Post": "<mailto:linuxppc-dev@lists.ozlabs.org>",
        "List-Archive": "<https://lore.kernel.org/linuxppc-dev/>,\n  <https://lists.ozlabs.org/pipermail/linuxppc-dev/>",
        "List-Subscribe": "<mailto:linuxppc-dev+subscribe@lists.ozlabs.org>,\n  <mailto:linuxppc-dev+subscribe-digest@lists.ozlabs.org>,\n  <mailto:linuxppc-dev+subscribe-nomail@lists.ozlabs.org>",
        "List-Unsubscribe": "<mailto:linuxppc-dev+unsubscribe@lists.ozlabs.org>",
        "Precedence": "list",
        "MIME-Version": "1.0",
        "Content-Type": "text/plain; charset=UTF-8",
        "Content-Transfer-Encoding": "8bit",
        "X-Spam-Status": "No, score=-0.2 required=3.0 tests=DKIMWL_WL_HIGH,DKIM_SIGNED,\n\tDKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,SPF_HELO_NONE,SPF_PASS\n\tautolearn=disabled version=4.0.1 OzLabs 8",
        "X-Spam-Checker-Version": "SpamAssassin 4.0.1 (2024-03-25) on lists.ozlabs.org"
    },
    "content": "Oven observed most mmap_lock contention and priority inversion\ncome from page fault retries after waiting for I/O completion.\nOven subsequently raised the following idea:\n\nThere is no need to always fall back to mmap_lock when the per-VMA lock\nis released only to wait for the page cache to become ready. On a page\nfault retry, the per-VMA lock can still be reused.\n\nWe believe the same should also apply to anonymous folios. However, there\nis a case where I/O has completed but we fail to acquire the folio lock\nbecause a concurrent thread may be installing PTEs for the folio. This\nis expected to be short-lived, so retrying the page fault is unnecessary.\n\nThis patchset handles two cases:\n\n(1) If we need to wait for I/O completion, we still drop the per-VMA lock, as\ncurrent page fault handling already does. Holding it for too long may introduce\nvarious priority inversion issues on mobile devices. After I/O completes, we\nretry the page fault with the per-VMA lock, rather than falling back to\nmmap_lock.\n\n(2) If I/O has already completed and the folio is up to date, the wait is\nlikely due to a concurrent PTE installation. In this case, we keep the\nper-VMA lock and avoid retrying the page fault.\n\nWith (1), the dramatically reduced mmap_lock contention leads to a\nsignificant improvement in Douyin performance. 
Oven’s data is shown\nbelow.\n\nDouyin (the Chinese version of TikTok) warm start on a smartphone with\n8GB RAM.\n\n== mmap_lock Acquisitions And Wait Time ==\n\nMetric                    Before (Avg)    After (Avg)    Change\n------------------------------------------------------------------------\nRead Lock Count           20,010          5,719          -71.42%\nRead Total Wait (us)      10,695,877     408,436        -96.18%\nRead Avg Wait (us)        534.00         71.00           -86.70%\nWrite Lock Count          838             909            +8.47%\nWrite Total Wait (us)     501,293        97,633          -80.52%\nWrite Avg Wait (us)       598.00         107.00          -82.11%\n\n\n== Read Lock Waiting Time Distribution of mmap_lock ==\n\nRange (us)                 Before (Avg)    After (Avg)    Change\n------------------------------------------------------------------------\n[0, 1)                     9,927           4,286          -56.82%\n[1, 10)                    9,179           1,327          -85.54%\n[10, 100)                  191             88             -53.93%\n[100, 1000)                57              6              -89.47%\n[1000, 10000)              328             9              -97.26%\n[10000, 100000)            328             6              -98.17%\n[100000, 1000000)          0               0              N/A\n[1000000, +)               0               0              N/A\n\n== Write Lock Waiting Time Distribution of mmap_lock ==\n\nRange (us)                 Before (Avg)    After (Avg)    Change\n------------------------------------------------------------------------\n[0, 1)                     250             300            +20.00%\n[1, 10)                    483             556            +15.11%\n[10, 100)                  52              41             -21.15%\n[100, 1000)                12              5              -58.33%\n[1000, 10000)              22              4              -81.82%\n[10000, 100000)            16          
    1              -93.75%\n[100000, 1000000)          0               0              N/A\n[1000000, +)               0               0              N/A\n\nAfter the optimization, the number of read lock acquisitions is \nsignificantly reduced, and both lock waiting time and tail latency are \ndramatically improved.\n\nKunwu and Lian also developed a model to capture the situation described\nby Matthew [1], where a memcg with limited memory may fail to make\nprogress. This happens because after I/O is initiated on the first page\nfault, the folios may be reclaimed by the time of the retry, leaving the\nworkload with little or no forward progress.\n\nThe stress setup made by Kunwu and Lian is as follows:\n* 256-core x86 system\n* 500 threads continuously faulting on 16MB files\n\nThe model was run within a memcg with limited memory,\nas shown below:\n\nsystemd-run --scope -p MemoryHigh=1G -p MemoryMax=1.2G -p MemorySwapMax=0 \\\n--unit=mmap-thrash-$$ ./mmap_lock & \\\nTEST_PID=$!\n\nThe reproducer code is shown below:\n\n #include <fcntl.h> \n #include <pthread.h> \n #include <stdatomic.h> \n #include <stdint.h> \n #include <stdio.h> \n #include <string.h> \n #include <sys/mman.h> \n #include <unistd.h> \n \n #define THREADS 500 \n #define FILE_SIZE (16 * 1024 * 1024) /* 16MB */ \n static _Atomic int g_stop = 0; \n #define RUN_SECONDS 600 \n \n struct worker_arg { \n         long id; \n         uint64_t *counts; \n }; \n \n void *worker(void *arg) \n { \n         struct worker_arg *wa = (struct worker_arg *)arg; \n         long id = wa->id; \n         char path[64]; \n         uint64_t local_rounds = 0; \n \n         snprintf(path, sizeof(path), \"./test_file_%d_%ld.dat\", \n                  getpid(), id); \n         int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0666); \n         if (fd < 0) return NULL; \n         if (ftruncate(fd, FILE_SIZE) < 0) { \n                 close(fd); return NULL; \n         } \n \n         while (!atomic_load_explicit(&g_stop, memory_order_relaxed)) { \n                 char *f_map = mmap(NULL, FILE_SIZE, PROT_READ, \n                                    MAP_SHARED, fd, 0); \n                 if (f_map != MAP_FAILED) { \n  
                       /* Pure page cache thrashing */ \n                         for (int i = 0; i < FILE_SIZE; i += 4096) { \n                                 volatile unsigned char c = \n                                         (unsigned char)f_map[i]; \n                                 (void)c; \n                         } \n                         munmap(f_map, FILE_SIZE); \n                         local_rounds++; \n                 } \n         } \n         wa->counts[id] = local_rounds; \n         close(fd); \n         unlink(path); \n         return NULL; \n } \n \n int main(void) \n { \n         printf(\"Pure File Thrashing Started. PID: %d\\n\", getpid()); \n         pthread_t t[THREADS]; \n         uint64_t local_counts[THREADS]; \n         memset(local_counts, 0, sizeof(local_counts)); \n         struct worker_arg args[THREADS]; \n \n         for (long i = 0; i < THREADS; i++) { \n                 args[i].id = i; \n                 args[i].counts = local_counts; \n                 pthread_create(&t[i], NULL, worker, &args[i]); \n         } \n \n         sleep(RUN_SECONDS); \n         atomic_store_explicit(&g_stop, 1, memory_order_relaxed); \n \n         for (int i = 0; i < THREADS; i++) pthread_join(t[i], NULL); \n \n         uint64_t total = 0; \n         for (int i = 0; i < THREADS; i++) total += local_counts[i]; \n \n         printf(\"Total rounds     : %llu\\n\", (unsigned long long)total); \n         printf(\"Throughput       : %.2f rounds/sec\\n\", \n                (double)total / RUN_SECONDS); \n         return 0; \n }\n\nThey also added temporary counters in page fault retries [2]:\n- RETRY_IO_MISS   : folio not present after I/O completion\n- RETRY_MMAP_DROP : retry fallback due to waiting for I/O\n\nTheir results are as follows:\n\n| Case                | Total Rounds | Throughput | Miss/Drop(%) | RETRY_MMAP_DROP | RETRY_IO_MISS |\n| ------------------- | ------------ | ---------- | ------------ | --------------- | ------------- |\n| 
Baseline (Run 1)    | 22,711       | 37.85 /s   | 45.04        | 970,078         | 436,956       |\n| Baseline (Run 2)    | 23,530       | 39.22 /s   | 44.96        | 972,043         | 437,077       |\n| With Series (Run A) | 54,428       | 90.71 /s   | 1.69         | 1,204,124       | 20,398        |\n| With Series (Run B) | 35,949       | 59.91 /s   | 0.03         | 327,023         | 99            |\n\nWithout this series, nearly half of the retries fail to observe completed\nI/O results, leading to significant CPU and I/O waste. With the finer-\ngrained VMA lock, faulting threads avoid the heavily contended mmap_lock\nduring retries and are therefore able to complete the page fault.\n\nWith (2), there is a clear improvement in swap-in bandwidth in a model\nwith five threads issuing MADV_PAGEOUT-based swap-outs and five threads\nperforming swap-ins on a 100MB anonymous mmap VMA.\n\n #include <pthread.h>\n #include <stdatomic.h>\n #include <stdint.h>\n #include <stdio.h>\n #include <stdlib.h>\n #include <string.h>\n #include <sys/mman.h>\n #include <unistd.h>\n \n #define SIZE (100 * 1024 * 1024)\n #define PAGE_SIZE 4096\n #define WRITER_THREADS 5\n #define READER_THREADS 5\n #define RUN_SECONDS 30\n \n static uint8_t *buf;\n static atomic_ulong pageout_rounds = 0;\n static atomic_ulong swapin_rounds = 0;\n static atomic_int stop_flag = 0;\n \n static void *pageout_thread(void *arg)\n {\n     (void)arg;\n     while (!atomic_load(&stop_flag)) {\n         if (madvise(buf, SIZE, MADV_PAGEOUT) == 0) {\n             atomic_fetch_add(&pageout_rounds, 1);\n         }\n     }\n     return NULL;\n }\n \n static void *reader_thread(void *arg)\n {\n     (void)arg;\n     volatile uint64_t sum = 0;\n \n     while (!atomic_load(&stop_flag)) {\n         for (size_t i = 0; i < SIZE; i += PAGE_SIZE) {\n             sum += buf[i];\n         }\n         /* One full pass over 100MB, counted as one swap-in round (approximate) */\n         atomic_fetch_add(&swapin_rounds, 1);\n     }\n     return NULL;\n }\n \n int main(void)\n {\n     pthread_t writers[WRITER_THREADS];\n     pthread_t readers[READER_THREADS];\n \n     buf = mmap(NULL, SIZE, PROT_READ | 
PROT_WRITE,\n                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);\n     if (buf == MAP_FAILED) {\n         exit(EXIT_FAILURE);\n     }\n     memset(buf, 0, SIZE);\n \n     for (int i = 0; i < WRITER_THREADS; i++) {\n         if (pthread_create(&writers[i], NULL, pageout_thread, NULL) != 0) {\n             perror(\"pthread_create\");\n             exit(EXIT_FAILURE);\n         }\n     }\n     for (int i = 0; i < READER_THREADS; i++) {\n         if (pthread_create(&readers[i], NULL, reader_thread, NULL) != 0) {\n             perror(\"pthread_create\");\n             exit(EXIT_FAILURE);\n         }\n     }\n \n     sleep(RUN_SECONDS);\n     atomic_store(&stop_flag, 1);\n     for (int i = 0; i < WRITER_THREADS; i++)\n         pthread_join(writers[i], NULL);\n     for (int i = 0; i < READER_THREADS; i++)\n         pthread_join(readers[i], NULL);\n \n     printf(\"=== Result (30s) ===\\n\");\n     printf(\"Pageout rounds: %lu\\n\", pageout_rounds);\n     printf(\"Swap-in rounds (approx): %lu\\n\", swapin_rounds);\n     munmap(buf, SIZE);\n     return 0;\n }\n\nW/o patches:\n=== Result (30s) ===\nPageout rounds: 1324847\nSwap-in rounds (approx): 874\n\nW/patches:\n=== Result (30s) ===\nPageout rounds: 1330550\nSwap-in rounds (approx): 1017\n\n[1] https://lore.kernel.org/linux-mm/aSip2mWX13sqPW_l@casper.infradead.org/\n[2] https://github.com/lianux-mm/ioretry_test/\n\n-v2:\n  * collect tags from Pedro, Kunwu and Lian, thanks!\n  * handle case (2), for uptodate folios, don't retry PF\n-RFC:\n  https://lore.kernel.org/linux-mm/20251127011438.6918-1-21cnbao@gmail.com/\n\nBarry Song (Xiaomi) (4):\n  mm/swapin: Retry swapin by VMA lock if the lock was released for I/O\n  mm: Move folio_lock_or_retry() and drop __folio_lock_or_retry()\n  mm: Don't retry page fault if folio is uptodate during swap-in\n  mm/filemap: Avoid retrying page faults on uptodate folios in filemap\n    faults\n\nOven Liyang (1):\n  mm/filemap: Retry fault by VMA lock if the lock was released for I/O\n\n 
arch/arm/mm/fault.c       |  5 +++\n arch/arm64/mm/fault.c     |  5 +++\n arch/loongarch/mm/fault.c |  4 +++\n arch/powerpc/mm/fault.c   |  5 ++-\n arch/riscv/mm/fault.c     |  4 +++\n arch/s390/mm/fault.c      |  4 +++\n arch/x86/mm/fault.c       |  4 +++\n include/linux/mm_types.h  |  9 ++---\n include/linux/pagemap.h   | 17 ----------\n mm/filemap.c              | 57 ++++++-------------------------\n mm/memory.c               | 70 +++++++++++++++++++++++++++++++++++++--\n 11 files changed, 114 insertions(+), 70 deletions(-)"
}
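
For readers consuming this endpoint programmatically, the sketch below shows one way a client might pick the series mbox link out of a response shaped like the JSON above. It is only an illustration: the `payload` string is a hypothetical, trimmed-down stand-in for the full response body, not the literal payload returned by the API.

```python
import json

# Trimmed-down stand-in for the cover-letter response shown above
# (hypothetical subset; the real payload carries many more fields).
payload = '''
{
    "id": 2230792,
    "name": "[v2,0/5] mm: reduce mmap_lock contention and improve page fault performance",
    "series": [
        {
            "id": 502184,
            "version": 2,
            "mbox": "http://patchwork.ozlabs.org/series/502184/mbox/"
        }
    ]
}
'''

cover = json.loads(payload)

# A cover letter can belong to more than one series, hence the list.
for series in cover["series"]:
    # The series-level mbox URL fetches the whole patch series at once.
    print(series["version"], series["mbox"])
```

The same field access works on the full response fetched with any HTTP client, since `?format=api` only changes the rendering, not the JSON shape.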