From patchwork Tue Mar 14 04:06:04 2017
X-Patchwork-Submitter: Vaibhav Jain
X-Patchwork-Id: 738543
From: Vaibhav Jain
To: linuxppc-dev@lists.ozlabs.org
Cc: Philippe Bergheaud, Vaibhav Jain, Frederic Barrat, Ian Munsie, Andrew Donnellan, Christophe Lombard, Greg Kurz
Subject: [PATCH 1/3] cxl: Re-factor cxl_pci_afu_read_err_buffer()
Date: Tue, 14 Mar 2017 09:36:04 +0530
Message-Id: <20170314040606.16894-2-vaibhav@linux.vnet.ibm.com>
In-Reply-To: <20170314040606.16894-1-vaibhav@linux.vnet.ibm.com>
References: <20170314040606.16894-1-vaibhav@linux.vnet.ibm.com>
List-Id: Linux on PowerPC Developers Mail List

This patch moves, renames and re-factors the function
cxl_pci_afu_read_err_buffer(). The function is moved from pci.c to
native.c and renamed to native_afu_read_err_buffer(). The ability to
copy data from hardware that enforces 4/8-byte aligned access is also
useful to, and better shared with, other functions. So this patch moves
the core logic of the existing cxl_pci_afu_read_err_buffer() into a new
function named __aligned_memcpy(). The new implementation of
native_afu_read_err_buffer() is then simply a call to __aligned_memcpy()
with the appropriate actual parameters.

Signed-off-by: Vaibhav Jain
Reviewed-by: Andrew Donnellan
Acked-by: Frederic Barrat
---
 drivers/misc/cxl/cxl.h    |  3 ---
 drivers/misc/cxl/native.c | 56 ++++++++++++++++++++++++++++++++++++++++++++++-
 drivers/misc/cxl/pci.c    | 44 ------------------------------------------
 3 files changed, 55 insertions(+), 48 deletions(-)

diff --git a/drivers/misc/cxl/cxl.h b/drivers/misc/cxl/cxl.h
index 79e60ec..ef683b7 100644
--- a/drivers/misc/cxl/cxl.h
+++ b/drivers/misc/cxl/cxl.h
@@ -739,9 +739,6 @@ static inline u64 cxl_p2n_read(struct cxl_afu *afu, cxl_p2n_reg_t reg)
 	return ~0ULL;
 }
 
-ssize_t cxl_pci_afu_read_err_buffer(struct cxl_afu *afu, char *buf,
-				    loff_t off, size_t count);
-
 /* Internal functions wrapped in cxl_base to allow PHB to call them */
 bool _cxl_pci_associate_default_context(struct pci_dev *dev, struct cxl_afu *afu);
 void _cxl_pci_disable_device(struct pci_dev *dev);
diff --git a/drivers/misc/cxl/native.c b/drivers/misc/cxl/native.c
index 7ae7105..20d3df6 100644
--- a/drivers/misc/cxl/native.c
+++ b/drivers/misc/cxl/native.c
@@ -1276,6 +1276,60 @@ static int native_afu_cr_write8(struct cxl_afu *afu, int cr, u64 off, u8 in)
 	return rc;
 }
 
+#define ERR_BUFF_MAX_COPY_SIZE PAGE_SIZE
+
+/*
+ * __aligned_memcpy:
+ * Copies count or max_read bytes (whichever is smaller) from src to dst buffer
+ * starting at offset off in src buffer. This specialized implementation of
+ * memcpy_fromio is needed as capi h/w only supports 4/8 bytes aligned access.
+ * So in case the requested offset/count arent 8 byte aligned the function uses
+ * a bounce buffer which can be max ERR_BUFF_MAX_COPY_SIZE == PAGE_SIZE
+ */
+static ssize_t __aligned_memcpy(void *dst, void __iomem *src, loff_t off,
+				size_t count, size_t max_read)
+{
+	loff_t aligned_start, aligned_end;
+	size_t aligned_length;
+	void *tbuf;
+
+	if (count == 0 || off < 0 || (size_t)off >= max_read)
+		return 0;
+
+	/* calculate aligned read window */
+	count = min((size_t)(max_read - off), count);
+	aligned_start = round_down(off, 8);
+	aligned_end = round_up(off + count, 8);
+	aligned_length = aligned_end - aligned_start;
+
+	/* max we can copy in one read is PAGE_SIZE */
+	if (aligned_length > ERR_BUFF_MAX_COPY_SIZE) {
+		aligned_length = ERR_BUFF_MAX_COPY_SIZE;
+		count = ERR_BUFF_MAX_COPY_SIZE - (off & 0x7);
+	}
+
+	/* use bounce buffer for copy */
+	tbuf = (void *)__get_free_page(GFP_TEMPORARY);
+	if (!tbuf)
+		return -ENOMEM;
+
+	/* perform aligned read from the mmio region */
+	memcpy_fromio(tbuf, src + aligned_start, aligned_length);
+	memcpy(dst, tbuf + (off & 0x7), count);
+
+	free_page((unsigned long)tbuf);
+
+	return count;
+}
+
+static ssize_t native_afu_read_err_buffer(struct cxl_afu *afu, char *buf,
+					  loff_t off, size_t count)
+{
+	void __iomem *ebuf = afu->native->afu_desc_mmio + afu->eb_offset;
+
+	return __aligned_memcpy(buf, ebuf, off, count, afu->eb_len);
+}
+
 const struct cxl_backend_ops cxl_native_ops = {
 	.module = THIS_MODULE,
 	.adapter_reset = cxl_pci_reset,
@@ -1294,7 +1348,7 @@ const struct cxl_backend_ops cxl_native_ops = {
 	.support_attributes = native_support_attributes,
 	.link_ok = cxl_adapter_link_ok,
 	.release_afu = cxl_pci_release_afu,
-	.afu_read_err_buffer = cxl_pci_afu_read_err_buffer,
+	.afu_read_err_buffer = native_afu_read_err_buffer,
 	.afu_check_and_enable = native_afu_check_and_enable,
 	.afu_activate_mode = native_afu_activate_mode,
 	.afu_deactivate_mode = native_afu_deactivate_mode,
diff --git a/drivers/misc/cxl/pci.c b/drivers/misc/cxl/pci.c
index 91f6459..541dc9a 100644
--- a/drivers/misc/cxl/pci.c
+++ b/drivers/misc/cxl/pci.c
@@ -1051,50 +1051,6 @@ static int sanitise_afu_regs(struct cxl_afu *afu)
 	return 0;
 }
 
-#define ERR_BUFF_MAX_COPY_SIZE PAGE_SIZE
-/*
- * afu_eb_read:
- * Called from sysfs and reads the afu error info buffer. The h/w only supports
- * 4/8 bytes aligned access. So in case the requested offset/count arent 8 byte
- * aligned the function uses a bounce buffer which can be max PAGE_SIZE.
- */
-ssize_t cxl_pci_afu_read_err_buffer(struct cxl_afu *afu, char *buf,
-				    loff_t off, size_t count)
-{
-	loff_t aligned_start, aligned_end;
-	size_t aligned_length;
-	void *tbuf;
-	const void __iomem *ebuf = afu->native->afu_desc_mmio + afu->eb_offset;
-
-	if (count == 0 || off < 0 || (size_t)off >= afu->eb_len)
-		return 0;
-
-	/* calculate aligned read window */
-	count = min((size_t)(afu->eb_len - off), count);
-	aligned_start = round_down(off, 8);
-	aligned_end = round_up(off + count, 8);
-	aligned_length = aligned_end - aligned_start;
-
-	/* max we can copy in one read is PAGE_SIZE */
-	if (aligned_length > ERR_BUFF_MAX_COPY_SIZE) {
-		aligned_length = ERR_BUFF_MAX_COPY_SIZE;
-		count = ERR_BUFF_MAX_COPY_SIZE - (off & 0x7);
-	}
-
-	/* use bounce buffer for copy */
-	tbuf = (void *)__get_free_page(GFP_TEMPORARY);
-	if (!tbuf)
-		return -ENOMEM;
-
-	/* perform aligned read from the mmio region */
-	memcpy_fromio(tbuf, ebuf + aligned_start, aligned_length);
-	memcpy(buf, tbuf + (off & 0x7), count);
-
-	free_page((unsigned long)tbuf);
-
-	return count;
-}
-
 static int pci_configure_afu(struct cxl_afu *afu, struct cxl *adapter, struct pci_dev *dev)
 {
 	int rc;