[{"id":1769900,"web_url":"http://patchwork.ozlabs.org/comment/1769900/","msgid":"<9e206be2-57a7-ee60-581c-5afc6df030d0@linux.vnet.ibm.com>","date":"2017-09-18T07:15:22","subject":"Re: [PATCH v3 00/20] Speculative page faults","submitter":{"id":40248,"url":"http://patchwork.ozlabs.org/api/people/40248/","name":"Laurent Dufour","email":"ldufour@linux.vnet.ibm.com"},"content":"Despite the unprovable lockdep warning raised by Sergey, I didn't get any\nfeedback on this series.\n\nIs there a chance to get it moved upstream?\n\nThanks,\nLaurent.\n\nOn 08/09/2017 20:06, Laurent Dufour wrote:\n> This is a port on kernel 4.13 of the work done by Peter Zijlstra to\n> handle page faults without holding the mm semaphore [1].\n> \n> The idea is to try to handle user space page faults without holding the\n> mmap_sem. This should allow better concurrency for massively threaded\n> processes since the page fault handler will not wait for other threads'\n> memory layout changes to be done, assuming that the change is done in\n> another part of the process's memory space. This type of page fault is\n> named a speculative page fault. If the speculative page fault fails\n> because concurrency is detected or because the underlying PMD or PTE\n> tables are not yet allocated, its processing is aborted and a classic\n> page fault is tried instead.\n> \n> The speculative page fault (SPF) has to look for the VMA matching the fault\n> address without holding the mmap_sem, so the VMA list is now managed using\n> SRCU, allowing lockless walking. The only impact would be the deferred file\n> dereferencing in the case of a file mapping, since the file pointer is\n> released once the SRCU cleaning is done.  This patch relies on the change\n> done recently by Paul McKenney in SRCU, which now runs a callback per CPU\n> instead of per SRCU structure [2].\n> \n> The VMA's attributes checked during the speculative page fault processing\n> have to be protected against parallel changes. 
This is done by using a per\n> VMA sequence lock. This sequence lock allows the speculative page fault\n> handler to quickly check for parallel changes in progress and to abort the\n> speculative page fault in that case.\n> \n> Once the VMA is found, the speculative page fault handler checks the\n> VMA's attributes to verify that the page fault can be handled this way.\n> Thus the VMA is protected through a sequence lock which allows fast\n> detection of concurrent VMA changes. If such a change is detected, the\n> speculative page fault is aborted and a *classic* page fault is tried.\n> VMA sequence lock write sections are added wherever the VMA attributes\n> checked during the page fault are modified.\n> \n> When the PTE is fetched, the VMA is checked to see if it has been changed.\n> Thus, once the page table is locked, the VMA is known to be valid: any\n> other change touching this PTE would need to lock the page table too, so\n> no parallel change is possible at this time.\n> \n> Compared to Peter's initial work, this series introduces a spin_trylock\n> when dealing with a speculative page fault. This is required to avoid a\n> deadlock when handling a page fault while a TLB invalidate is requested\n> by another CPU holding the PTE. Another change is due to a lock dependency\n> issue with mapping->i_mmap_rwsem.\n> \n> In addition, some VMA field values which are used once the PTE is unlocked\n> at the end of the page fault path are saved into the vm_fault structure to\n> use the values matching the VMA at the time the PTE was locked.\n> \n> This series only supports VMAs with no vm_ops defined, so huge pages and\n> mapped files are not managed with the speculative path. In addition,\n> transparent huge pages are not supported. 
Once this series is accepted upstream,\n> I'll extend the support to mapped files and transparent huge pages.\n> \n> This series builds on top of v4.13.9-mm1 and is functional on x86 and\n> PowerPC.\n> \n> Tests have been made using a large commercial in-memory database on a\n> PowerPC system with 752 CPUs, using a previous version of this series\n> (RFC v5). The results are very encouraging since the loading of the 2TB\n> database was faster by 14% with the speculative page fault.\n> \n> Using the ebizzy test [3], which spawns a lot of threads, the results are\n> good when running on either a large or a small system. When using\n> kernbench, the results are quite similar, which is expected as not many\n> multithreaded processes are involved. But there is no performance\n> degradation either, which is good.\n> \n> ------------------\n> Benchmark results\n> \n> Note these tests have been made on top of 4.13.0-mm1.\n> \n> Ebizzy:\n> -------\n> The test counts the number of records per second it can manage; the\n> higher the better. I ran it like this: 'ebizzy -mTRp'. To get consistent\n> results I repeated the test 100 times and measured the average, mean\n> deviation, max and min.\n> \n> - 16 CPUs x86 VM\n> Records/s\t4.13.0-mm1\t4.13.0-mm1-spf\tdelta\n> Average\t\t13217.90 \t65765.94\t+397.55%\n> Mean deviation\t690.37\t\t2609.36\t\t+277.97%\n> Max\t\t16726\t\t77675\t\t+364.40%\n> Min\t\t12194\t\t61634\t\t+405.45%\n> \t\t\n> - 80 CPUs Power 8 node:\n> Records/s\t4.13.0-mm1\t4.13.0-mm1-spf\tdelta\n> Average\t\t38175.40\t67635.55\t+77.17%\n> Mean deviation\t600.09\t \t2349.66\t\t+291.55%\n> Max\t\t39563\t\t74292\t\t+87.78%\n> Min\t\t35846\t\t62657\t\t+74.79%\n> \n> The number of records per second is far better with the speculative page\n> fault. 
\n> The mean deviation is higher with the speculative page fault, maybe\n> because sometimes the faults are not handled in a speculative way, leading\n> to more variation.\n> The numbers for the x86 guest are really insane for the SPF case, but I\n> ran the test several times and it leads to this delta each time. I ran\n> the test again using the previous version of the patch and got similar\n> numbers. It happens that the host running the VM is far less loaded now,\n> leading to better results as more threads are eligible to run.\n> Tests on Power were done on a badly balanced node where the memory is only\n> attached to one core.\n> \n> Kernbench:\n> ----------\n> This test builds a 4.12 kernel using the platform default config. The\n> build has been run 5 times.\n> \n> - 16 CPUs x86 VM\n> Average Half load -j 8 Run (std deviation)\n>  \t\t 4.13.0-mm1\t\t4.13.0-mm1-spf\t\tdelta %\n> Elapsed Time     145.968 (0.402206)\t145.654 (0.533601)\t-0.22\n> User Time        1006.58 (2.74729)\t1003.7 (4.11294)\t-0.29\n> System Time      108.464 (0.177567)\t111.034 (0.718213)\t+2.37\n> Percent CPU \t 763.4 (1.34164)\t764.8 (1.30384)\t\t+0.18\n> Context Switches 46599.6 (412.013)\t63771 (1049.95)\t\t+36.85\n> Sleeps           85313.2 (514.456)\t85532.2 (681.199)\t-0.26\n> \n> Average Optimal load -j 16 Run (std deviation)\n>  \t\t 4.13.0-mm1\t\t4.13.0-mm1-spf\t\tdelta %\n> Elapsed Time     74.292 (0.75998)\t74.484 (0.723035)\t+0.26\n> User Time        959.949 (49.2036)\t956.057 (50.2993)\t-0.41\n> System Time      100.203 (8.7119)\t101.984 (9.56099)\t+1.78\n> Percent CPU \t 1058 (310.661)\t\t1054.3 (305.263)\t-0.35\n> Context Switches 65713.8 (20161.7)\t86619.4 (24095.4)\t+31.81\n> Sleeps           90344.9 (5364.74)\t90877.4 (5655.87)\t-0.59\n> \n> The elapsed times are similar, but the impact is less important here\n> since fewer multithreaded processes are involved. 
\n> \n> - 80 CPUs Power 8 node:\n> Average Half load -j 40 Run (std deviation)\n> \t\t 4.13.0-mm1\t\t4.13.0-mm1-spf\t\tdelta %\n> Elapsed Time \t 115.342 (0.321668)\t115.786 (0.427118)\t+0.38\n> User Time \t 4355.08 (10.1778)\t4371.77 (14.9715)\t+0.38\n> System Time \t 127.612 (0.882083)\t130.048 (1.06258)\t+1.91\n> Percent CPU \t 3885.8 (11.606)\t3887.4 (8.04984)\t+0.04\n> Context Switches 80907.8 (657.481)\t81936.4 (729.538)\t+1.27\n> Sleeps\t\t 162109 (793.331)\t162057 (1414.08)\t+0.03\n> \n> Average Optimal load -j 80 Run (std deviation)\n>  \t\t 4.13.0-mm1\t\t4.13.0-mm1-spf\t\tdelta %\n> Elapsed Time \t 110.308 (0.725445)\t109.78 (0.826862)\t-0.48\n> User Time \t 5893.12 (1621.33)\t5923.19 (1635.48)\t+0.51\n> System Time \t 162.168 (36.4347)\t166.533 (38.4695)\t+2.69\n> Percent CPU \t 5400.2 (1596.89)\t5440.4 (1637.71)\t+0.74\n> Context Switches 129372 (51088.2)\t144529 (65985.5)\t+11.72\n> Sleeps\t\t 157312 (5113.57)\t158696 (4301.48)\t-0.87\n> \n> Here the elapsed times are similar with the SPF release, but we remain\n> within the error margin. 
It has to be noted that this system is not correctly balanced from\n> the NUMA point of view as all the available memory is attached to one core.\n> \n> ------------------------\n> Changes since v2:\n>  - Perf event is renamed to PERF_COUNT_SW_SPF\n>  - On Power, handle do_page_fault()'s cleanup\n>  - On Power, if VM_FAULT_ERROR is returned by\n>  handle_speculative_fault(), do not retry but jump to the error path\n>  - If the VMA's flags do not match the fault, directly return\n>  VM_FAULT_SIGSEGV and not VM_FAULT_RETRY\n>  - Check for pud_trans_huge() to avoid the speculative path\n>  - Handle _vm_normal_page() introduced by 6f16211df3bf\n>  (\"mm/device-public-memory: device memory cache coherent with CPU\")\n>  - Add and review a few comments in the code\n> Changes since v1:\n>  - Remove the PERF_COUNT_SW_SPF_FAILED perf event.\n>  - Add tracing events to detail speculative page fault failures.\n>  - Cache VMA field values which are used once the PTE is unlocked at the\n>  end of the page fault path.\n>  - Ensure that fields read during the speculative path are written and read\n>  using WRITE_ONCE and READ_ONCE.\n>  - Add checks at the beginning of the speculative path to abort it if the\n>  VMA is known not to be supported.\n> Changes since RFC v5 [5]:\n>  - Port to the 4.13 kernel\n>  - Merge the patch fixing the lock dependency into the original patch\n>  - Replace the 2 parameters of vma_has_changed() with the vmf pointer\n>  - In patch 7, don't call __do_fault() in the speculative path as it may\n>  want to unlock the mmap_sem.\n>  - In patches 11-12, don't check for vma boundaries when\n>  page_add_new_anon_rmap() is called during the spf path and protect against\n>  anon_vma pointer updates.\n>  - In patches 13-16, add performance events to report the number of\n>  successful and failed speculative events. 
\n> \n> [1] http://linux-kernel.2935.n7.nabble.com/RFC-PATCH-0-6-Another-go-at-speculative-page-faults-tt965642.html#none\n> [2] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=da915ad5cf25b5f5d358dd3670c3378d8ae8c03e\n> [3] http://ebizzy.sourceforge.net/\n> [4] http://ck.kolivas.org/apps/kernbench/kernbench-0.50/\n> [5] https://lwn.net/Articles/725607/\n> \n> Laurent Dufour (14):\n>   mm: Introduce pte_spinlock for FAULT_FLAG_SPECULATIVE\n>   mm: Protect VMA modifications using VMA sequence count\n>   mm: Cache some VMA fields in the vm_fault structure\n>   mm: Protect SPF handler against anon_vma changes\n>   mm/migrate: Pass vm_fault pointer to migrate_misplaced_page()\n>   mm: Introduce __lru_cache_add_active_or_unevictable\n>   mm: Introduce __maybe_mkwrite()\n>   mm: Introduce __vm_normal_page()\n>   mm: Introduce 
__page_add_new_anon_rmap()\n>   mm: Try spin lock in speculative path\n>   mm: Adding speculative page fault failure trace events\n>   perf: Add a speculative page fault sw event\n>   perf tools: Add support for the SPF perf event\n>   powerpc/mm: Add speculative page fault\n> \n> Peter Zijlstra (6):\n>   mm: Dont assume page-table invariance during faults\n>   mm: Prepare for FAULT_FLAG_SPECULATIVE\n>   mm: VMA sequence count\n>   mm: RCU free VMAs\n>   mm: Provide speculative fault infrastructure\n>   x86/mm: Add speculative pagefault handling\n> \n>  arch/powerpc/include/asm/book3s/64/pgtable.h |   5 +\n>  arch/powerpc/mm/fault.c                      |  15 +\n>  arch/x86/include/asm/pgtable_types.h         |   7 +\n>  arch/x86/mm/fault.c                          |  19 ++\n>  fs/proc/task_mmu.c                           |   5 +-\n>  fs/userfaultfd.c                             |  17 +-\n>  include/linux/hugetlb_inline.h               |   2 +-\n>  include/linux/migrate.h                      |   4 +-\n>  include/linux/mm.h                           |  28 +-\n>  include/linux/mm_types.h                     |   3 +\n>  include/linux/pagemap.h                      |   4 +-\n>  include/linux/rmap.h                         |  12 +-\n>  include/linux/swap.h                         |  11 +-\n>  include/trace/events/pagefault.h             |  87 +++++\n>  include/uapi/linux/perf_event.h              |   1 +\n>  kernel/fork.c                                |   1 +\n>  mm/hugetlb.c                                 |   2 +\n>  mm/init-mm.c                                 |   1 +\n>  mm/internal.h                                |  19 ++\n>  mm/khugepaged.c                              |   5 +\n>  mm/madvise.c                                 |   6 +-\n>  mm/memory.c                                  | 478 ++++++++++++++++++++++-----\n>  mm/mempolicy.c                               |  51 ++-\n>  mm/migrate.c                                 |   4 +-\n>  mm/mlock.c                  
                 |  13 +-\n>  mm/mmap.c                                    | 138 ++++++--\n>  mm/mprotect.c                                |   4 +-\n>  mm/mremap.c                                  |   7 +\n>  mm/rmap.c                                    |   5 +-\n>  mm/swap.c                                    |  12 +-\n>  tools/include/uapi/linux/perf_event.h        |   1 +\n>  tools/perf/util/evsel.c                      |   1 +\n>  tools/perf/util/parse-events.c               |   4 +\n>  tools/perf/util/parse-events.l               |   1 +\n>  tools/perf/util/python.c                     |   1 +\n>  35 files changed, 796 insertions(+), 178 deletions(-)\n>  create mode 100644 include/trace/events/pagefault.h\n>","headers":{"Return-Path":"<linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org>","X-Original-To":["patchwork-incoming@ozlabs.org","linuxppc-dev@lists.ozlabs.org"],"Delivered-To":["patchwork-incoming@ozlabs.org","linuxppc-dev@lists.ozlabs.org"],"Received":["from lists.ozlabs.org (lists.ozlabs.org [103.22.144.68])\n\t(using TLSv1.2 with cipher ADH-AES256-GCM-SHA384 (256/256 bits))\n\t(No client certificate requested)\n\tby ozlabs.org (Postfix) with ESMTPS id 3xwcl00gbcz9s7G\n\tfor <patchwork-incoming@ozlabs.org>;\n\tMon, 18 Sep 2017 17:17:00 +1000 (AEST)","from lists.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3])\n\tby lists.ozlabs.org (Postfix) with ESMTP id 3xwckz6F47zDrp6\n\tfor <patchwork-incoming@ozlabs.org>;\n\tMon, 18 Sep 2017 17:16:59 +1000 (AEST)","from mx0a-001b2d01.pphosted.com (mx0a-001b2d01.pphosted.com\n\t[148.163.156.1])\n\t(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256\n\tbits)) (No client certificate requested)\n\tby lists.ozlabs.org (Postfix) with ESMTPS id 3xwcjN2xvszDrWb\n\tfor <linuxppc-dev@lists.ozlabs.org>;\n\tMon, 18 Sep 2017 17:15:36 +1000 (AEST)","from pps.filterd (m0098393.ppops.net [127.0.0.1])\n\tby mx0a-001b2d01.pphosted.com (8.16.0.21/8.16.0.21) with SMTP id\n\tv8I7DfuQ113712\n\tfor 
<linuxppc-dev@lists.ozlabs.org>; Mon, 18 Sep 2017 03:15:34 -0400","from e06smtp12.uk.ibm.com (e06smtp12.uk.ibm.com [195.75.94.108])\n\tby mx0a-001b2d01.pphosted.com with ESMTP id 2d26g47tp0-1\n\t(version=TLSv1.2 cipher=AES256-SHA bits=256 verify=NOT)\n\tfor <linuxppc-dev@lists.ozlabs.org>; Mon, 18 Sep 2017 03:15:33 -0400","from localhost\n\tby e06smtp12.uk.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use\n\tOnly! Violators will be prosecuted\n\tfor <linuxppc-dev@lists.ozlabs.org> from <ldufour@linux.vnet.ibm.com>;\n\tMon, 18 Sep 2017 08:15:31 +0100","from b06cxnps4076.portsmouth.uk.ibm.com (9.149.109.198)\n\tby e06smtp12.uk.ibm.com (192.168.101.142) with IBM ESMTP SMTP\n\tGateway: Authorized Use Only! Violators will be prosecuted; \n\tMon, 18 Sep 2017 08:15:25 +0100","from d06av24.portsmouth.uk.ibm.com (mk.ibm.com [9.149.105.60])\n\tby b06cxnps4076.portsmouth.uk.ibm.com (8.14.9/8.14.9/NCO v10.0) with\n\tESMTP id v8I7FOd120840480; Mon, 18 Sep 2017 07:15:24 GMT","from d06av24.portsmouth.uk.ibm.com (unknown [127.0.0.1])\n\tby IMSVA (Postfix) with ESMTP id C51054204B;\n\tMon, 18 Sep 2017 08:11:38 +0100 (BST)","from d06av24.portsmouth.uk.ibm.com (unknown [127.0.0.1])\n\tby IMSVA (Postfix) with ESMTP id 0951A4203F;\n\tMon, 18 Sep 2017 08:11:37 +0100 (BST)","from [9.145.149.46] (unknown [9.145.149.46])\n\tby d06av24.portsmouth.uk.ibm.com (Postfix) with ESMTP;\n\tMon, 18 Sep 2017 08:11:36 +0100 (BST)"],"Authentication-Results":"ozlabs.org;\n\tspf=none (mailfrom) smtp.mailfrom=linux.vnet.ibm.com\n\t(client-ip=148.163.156.1; helo=mx0a-001b2d01.pphosted.com;\n\tenvelope-from=ldufour@linux.vnet.ibm.com; receiver=<UNKNOWN>)","Subject":"Re: [PATCH v3 00/20] Speculative page faults","From":"Laurent Dufour <ldufour@linux.vnet.ibm.com>","To":"paulmck@linux.vnet.ibm.com, peterz@infradead.org,\n\takpm@linux-foundation.org, kirill@shutemov.name, ak@linux.intel.com, \n\tmhocko@kernel.org, dave@stgolabs.net, jack@suse.cz,\n\tMatthew Wilcox <willy@infradead.org>, 
benh@kernel.crashing.org,\n\tmpe@ellerman.id.au, paulus@samba.org,\n\tThomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>, \n\thpa@zytor.com, Will Deacon <will.deacon@arm.com>,\n\tSergey Senozhatsky <sergey.senozhatsky@gmail.com>","References":"<1504894024-2750-1-git-send-email-ldufour@linux.vnet.ibm.com>","Date":"Mon, 18 Sep 2017 09:15:22 +0200","User-Agent":"Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101\n\tThunderbird/52.3.0","MIME-Version":"1.0","In-Reply-To":"<1504894024-2750-1-git-send-email-ldufour@linux.vnet.ibm.com>","Content-Type":"text/plain; charset=utf-8","Content-Language":"en-US","Content-Transfer-Encoding":"7bit","X-TM-AS-GCONF":"00","x-cbid":"17091807-0008-0000-0000-00000497A80B","X-IBM-AV-DETECTION":"SAVI=unused REMOTE=unused XFE=unused","x-cbparentid":"17091807-0009-0000-0000-00001E28D2CA","Message-Id":"<9e206be2-57a7-ee60-581c-5afc6df030d0@linux.vnet.ibm.com>","X-Proofpoint-Virus-Version":"vendor=fsecure engine=2.50.10432:, ,\n\tdefinitions=2017-09-18_01:, , signatures=0","X-Proofpoint-Spam-Details":"rule=outbound_notspam policy=outbound score=0\n\tspamscore=0 suspectscore=1\n\tmalwarescore=0 phishscore=0 adultscore=0 bulkscore=0 classifier=spam\n\tadjust=0 reason=mlx scancount=1 engine=8.0.1-1707230000\n\tdefinitions=main-1709180105","X-BeenThere":"linuxppc-dev@lists.ozlabs.org","X-Mailman-Version":"2.1.24","Precedence":"list","List-Id":"Linux on PowerPC Developers Mail List\n\t<linuxppc-dev.lists.ozlabs.org>","List-Unsubscribe":"<https://lists.ozlabs.org/options/linuxppc-dev>,\n\t<mailto:linuxppc-dev-request@lists.ozlabs.org?subject=unsubscribe>","List-Archive":"<http://lists.ozlabs.org/pipermail/linuxppc-dev/>","List-Post":"<mailto:linuxppc-dev@lists.ozlabs.org>","List-Help":"<mailto:linuxppc-dev-request@lists.ozlabs.org?subject=help>","List-Subscribe":"<https://lists.ozlabs.org/listinfo/linuxppc-dev>,\n\t<mailto:linuxppc-dev-request@lists.ozlabs.org?subject=subscribe>","Cc":"linuxppc-dev@lists.ozlabs.org, 
x86@kernel.org,\n\tlinux-kernel@vger.kernel.org, npiggin@gmail.com, linux-mm@kvack.org,\n\tTim Chen <tim.c.chen@linux.intel.com>, \n\tharen@linux.vnet.ibm.com, khandual@linux.vnet.ibm.com","Errors-To":"linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org","Sender":"\"Linuxppc-dev\"\n\t<linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org>"}}]