From patchwork Fri Jul 15 15:36:16 2022
X-Patchwork-Submitter: Arnd Bergmann
X-Patchwork-Id: 1656957
From: Arnd Bergmann
To: linux-pci@vger.kernel.org
Cc: Bjorn Helgaas, Arnd Bergmann, David Woodhouse, "David S. Miller",
    Stafford Horne
Subject: [PATCH 1/2] pci: remove pci_mmap_page_range wrapper
Date: Fri, 15 Jul 2022 17:36:16 +0200
Message-Id: <20220715153617.3393420-1-arnd@kernel.org>
X-Mailer: git-send-email 2.29.2
X-Mailing-List: linux-pci@vger.kernel.org

From: Arnd Bergmann

The ARCH_GENERIC_PCI_MMAP_RESOURCE symbol came up in a recent discussion,
and I noticed that this was left behind by an unfinished cleanup from 2017.

The only architecture that still relies on providing its own
pci_mmap_page_range() helper instead of using the generic
pci_mmap_resource_range() is sparc. Presumably the reasons for this have
not changed, but at least this can be simplified by converting sparc to
use the same interface as the others.
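To illustrate, opting into the generic code is just a matter of the
architecture header selecting it instead of providing a private helper;
roughly (sketch only, mirroring what the second patch does for sparc,
"<arch>" is a placeholder):

	/* arch/<arch>/include/asm/pci.h -- sketch, not a real header */
	#define HAVE_PCI_MMAP			/* allow mmap through /proc/bus/pci */
	#define ARCH_GENERIC_PCI_MMAP_RESOURCE	/* use generic pci_mmap_resource_range() */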
The only difference between the two is the device-specific offset that
gets added to or subtracted from vma->vm_pgoff. Change the only caller of
pci_mmap_page_range() in common code to subtract this offset and call the
modern interface, while adding it back in the sparc implementation to
preserve the existing behavior.

This removes the complexities of the dual interfaces from the common code,
and keeps it all specific to the sparc architecture code. According to
David Miller, the sparc code lets user space poke into the VGA I/O port
registers by mmapping the I/O space of the parent bridge device, which is
something that the generic pci_mmap_resource_range() code apparently does
not allow.

Cc: David Woodhouse
Cc: David S. Miller
Cc: Stafford Horne
Link: https://lore.kernel.org/lkml/1519887203.622.3.camel@infradead.org/t/
Link: https://lore.kernel.org/lkml/20220714214657.2402250-3-shorne@gmail.com/
Signed-off-by: Arnd Bergmann
---
 Documentation/PCI/sysfs-pci.rst |  2 +-
 arch/sparc/kernel/pci.c         | 13 +++++++---
 drivers/pci/mmap.c              | 44 ---------------------------------
 drivers/pci/proc.c              |  7 +++++-
 include/linux/pci.h             | 12 +--------
 5 files changed, 17 insertions(+), 61 deletions(-)

diff --git a/Documentation/PCI/sysfs-pci.rst b/Documentation/PCI/sysfs-pci.rst
index 742fbd21dc1f..f495185aa88a 100644
--- a/Documentation/PCI/sysfs-pci.rst
+++ b/Documentation/PCI/sysfs-pci.rst
@@ -125,7 +125,7 @@ implementation of that functionality. To support the historical interface of
 mmap() through files in /proc/bus/pci, platforms may also set HAVE_PCI_MMAP.

 Alternatively, platforms which set HAVE_PCI_MMAP may provide their own
-implementation of pci_mmap_page_range() instead of defining
+implementation of pci_mmap_resource_range() instead of defining
 ARCH_GENERIC_PCI_MMAP_RESOURCE.

 Platforms which support write-combining maps of PCI resources must define
diff --git a/arch/sparc/kernel/pci.c b/arch/sparc/kernel/pci.c
index 31b0c1983286..f580db840bf7 100644
--- a/arch/sparc/kernel/pci.c
+++ b/arch/sparc/kernel/pci.c
@@ -876,17 +876,22 @@ static void __pci_mmap_set_pgprot(struct pci_dev *dev, struct vm_area_struct *vm

 /* Perform the actual remap of the pages for a PCI device mapping, as appropriate
  * for this architecture. The region in the process to map is described by vm_start
- * and vm_end members of VMA, the base physical address is found in vm_pgoff.
+ * and vm_end members of VMA, the BAR relative address is found in vm_pgoff.
  * The pci device structure is provided so that architectures may make mapping
  * decisions on a per-device or per-bus basis.
  *
  * Returns a negative error code on failure, zero on success.
  */
-int pci_mmap_page_range(struct pci_dev *dev, int bar,
-			struct vm_area_struct *vma,
-			enum pci_mmap_state mmap_state, int write_combine)
+int pci_mmap_resource_range(struct pci_dev *dev, int bar,
+			    struct vm_area_struct *vma,
+			    enum pci_mmap_state mmap_state, int write_combine)
 {
 	int ret;
+	resource_size_t start, end;
+
+	/* convert per-BAR address to PCI bus address */
+	pci_resource_to_user(dev, bar, &dev->resource[bar], &start, &end);
+	vma->vm_pgoff += start >> PAGE_SHIFT;

 	ret = __pci_mmap_make_offset(dev, vma, mmap_state);
 	if (ret < 0)
diff --git a/drivers/pci/mmap.c b/drivers/pci/mmap.c
index b8c9011987f4..4504039056d1 100644
--- a/drivers/pci/mmap.c
+++ b/drivers/pci/mmap.c
@@ -13,27 +13,6 @@

 #ifdef ARCH_GENERIC_PCI_MMAP_RESOURCE

-/*
- * Modern setup: generic pci_mmap_resource_range(), and implement the legacy
- * pci_mmap_page_range() (if needed) as a wrapper round it.
- */
-
-#ifdef HAVE_PCI_MMAP
-int pci_mmap_page_range(struct pci_dev *pdev, int bar,
-			struct vm_area_struct *vma,
-			enum pci_mmap_state mmap_state, int write_combine)
-{
-	resource_size_t start, end;
-
-	pci_resource_to_user(pdev, bar, &pdev->resource[bar], &start, &end);
-
-	/* Adjust vm_pgoff to be the offset within the resource */
-	vma->vm_pgoff -= start >> PAGE_SHIFT;
-	return pci_mmap_resource_range(pdev, bar, vma, mmap_state,
-				       write_combine);
-}
-#endif
-
 static const struct vm_operations_struct pci_phys_vm_ops = {
 #ifdef CONFIG_HAVE_IOREMAP_PROT
 	.access = generic_access_phys,
@@ -70,27 +49,4 @@ int pci_mmap_resource_range(struct pci_dev *pdev, int bar,
 				vma->vm_page_prot);
 }

-#elif defined(HAVE_PCI_MMAP) /* && !ARCH_GENERIC_PCI_MMAP_RESOURCE */
-
-/*
- * Legacy setup: Implement pci_mmap_resource_range() as a wrapper around
- * the architecture's pci_mmap_page_range(), converting to "user visible"
- * addresses as necessary.
- */
-
-int pci_mmap_resource_range(struct pci_dev *pdev, int bar,
-			    struct vm_area_struct *vma,
-			    enum pci_mmap_state mmap_state, int write_combine)
-{
-	resource_size_t start, end;
-
-	/*
-	 * pci_mmap_page_range() expects the same kind of entry as coming
-	 * from /proc/bus/pci/ which is a "user visible" value. If this is
-	 * different from the resource itself, arch will do necessary fixup.
-	 */
-	pci_resource_to_user(pdev, bar, &pdev->resource[bar], &start, &end);
-	vma->vm_pgoff += start >> PAGE_SHIFT;
-	return pci_mmap_page_range(pdev, bar, vma, mmap_state, write_combine);
-}
 #endif
diff --git a/drivers/pci/proc.c b/drivers/pci/proc.c
index 31b26d8ea6cc..f967709082d6 100644
--- a/drivers/pci/proc.c
+++ b/drivers/pci/proc.c
@@ -244,6 +244,7 @@ static int proc_bus_pci_mmap(struct file *file, struct vm_area_struct *vma)
 {
 	struct pci_dev *dev = pde_data(file_inode(file));
 	struct pci_filp_private *fpriv = file->private_data;
+	resource_size_t start, end;
 	int i, ret, write_combine = 0, res_bit = IORESOURCE_MEM;

 	if (!capable(CAP_SYS_RAWIO) ||
@@ -278,7 +279,11 @@ static int proc_bus_pci_mmap(struct file *file, struct vm_area_struct *vma)
 	    iomem_is_exclusive(dev->resource[i].start))
 		return -EINVAL;

-	ret = pci_mmap_page_range(dev, i, vma,
+	pci_resource_to_user(dev, i, &dev->resource[i], &start, &end);
+
+	/* Adjust vm_pgoff to be the offset within the resource */
+	vma->vm_pgoff -= start >> PAGE_SHIFT;
+	ret = pci_mmap_resource_range(dev, i, vma,
 				  fpriv->mmap_state, write_combine);
 	if (ret < 0)
 		return ret;
diff --git a/include/linux/pci.h b/include/linux/pci.h
index 81a57b498f22..060af91bafcd 100644
--- a/include/linux/pci.h
+++ b/include/linux/pci.h
@@ -1909,24 +1909,14 @@ pci_alloc_irq_vectors(struct pci_dev *dev, unsigned int min_vecs,

 #include

-/* These two functions provide almost identical functionality. Depending
- * on the architecture, one will be implemented as a wrapper around the
- * other (in drivers/pci/mmap.c).
- *
+/*
  * pci_mmap_resource_range() maps a specific BAR, and vm->vm_pgoff
  * is expected to be an offset within that region.
  *
- * pci_mmap_page_range() is the legacy architecture-specific interface,
- * which accepts a "user visible" resource address converted by
- * pci_resource_to_user(), as used in the legacy mmap() interface in
- * /proc/bus/pci/.
  */
 int pci_mmap_resource_range(struct pci_dev *dev, int bar,
 			    struct vm_area_struct *vma,
 			    enum pci_mmap_state mmap_state, int write_combine);
-int pci_mmap_page_range(struct pci_dev *pdev, int bar,
-			struct vm_area_struct *vma,
-			enum pci_mmap_state mmap_state, int write_combine);

 #ifndef arch_can_pci_mmap_wc
 #define arch_can_pci_mmap_wc()	0

From patchwork Fri Jul 15 15:36:17 2022
X-Patchwork-Submitter: Arnd Bergmann
X-Patchwork-Id: 1656958
From: Arnd Bergmann
To: linux-pci@vger.kernel.org
Cc: Bjorn Helgaas, Arnd Bergmann, David Woodhouse, "David S. Miller",
Miller" , Stafford Horne Subject: [PATCH 2/2] [RFC] sparc: Use generic pci_mmap_resource_range() Date: Fri, 15 Jul 2022 17:36:17 +0200 Message-Id: <20220715153617.3393420-2-arnd@kernel.org> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20220715153617.3393420-1-arnd@kernel.org> References: <20220715153617.3393420-1-arnd@kernel.org> MIME-Version: 1.0 X-Spam-Status: No, score=-7.7 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_HI, SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-pci@vger.kernel.org From: Arnd Bergmann The main feature of the sparc specific implementation of pci_mmap_resource_range() is that it allows mapping the entire PCI I/O space for a PCI host bridge using the /proc/bus/pci interface on a bridge device. The generic implementation cannot do this, but it also appears that this got broken for sparc by commit 9eff02e2042f ("PCI: check mmap range of /proc/bus/pci files too"), which enforces that each address is part of a BAR for kernels after 2.6.28. Remove it all, assuming that the corresponding user space code has already been changed to access /dev/ioport instead a long time ago. The pci_iobar_pfn() function needs to be added to make it possible to map I/O resources. This is adapted from the powerpc version. Link: https://lore.kernel.org/lkml/1519887203.622.3.camel@infradead.org/t/ Signed-off-by: Arnd Bergmann --- It's quite possible that I missed something important, but it appears that David Miller was thinking of the pre-2.6.28 behavior that was already broken when David Woodhouse sent his series. --- arch/sparc/include/asm/pci.h | 1 + arch/sparc/kernel/pci.c | 154 +---------------------------------- 2 files changed, 5 insertions(+), 150 deletions(-) diff --git a/arch/sparc/include/asm/pci.h b/arch/sparc/include/asm/pci.h index 4deddf430e5d..dff90dce6cb7 100644 --- a/arch/sparc/include/asm/pci.h +++ b/arch/sparc/include/asm/pci.h @@ -37,6 +37,7 @@ static inline int pci_proc_domain(struct pci_bus *bus) #define HAVE_PCI_MMAP #define arch_can_pci_mmap_io() 1 #define HAVE_ARCH_PCI_GET_UNMAPPED_AREA +#define ARCH_GENERIC_PCI_MMAP_RESOURCE #define get_pci_unmapped_area get_fb_unmapped_area #endif /* CONFIG_SPARC64 */ diff --git a/arch/sparc/kernel/pci.c b/arch/sparc/kernel/pci.c index f580db840bf7..cb1ef25116e9 100644 --- a/arch/sparc/kernel/pci.c +++ b/arch/sparc/kernel/pci.c @@ -751,161 +751,15 @@ int pcibios_enable_device(struct pci_dev *dev, int mask) } /* Platform support for /proc/bus/pci/X/Y mmap()s. */ - -/* If the user uses a host-bridge as the PCI device, he may use - * this to perform a raw mmap() of the I/O or MEM space behind - * that controller. - * - * This can be useful for execution of x86 PCI bios initialization code - * on a PCI card, like the xfree86 int10 stuff does. - */ -static int __pci_mmap_make_offset_bus(struct pci_dev *pdev, struct vm_area_struct *vma, - enum pci_mmap_state mmap_state) +int pci_iobar_pfn(struct pci_dev *pdev, int bar, struct vm_area_struct *vma) { struct pci_pbm_info *pbm = pdev->dev.archdata.host_controller; - unsigned long space_size, user_offset, user_size; - - if (mmap_state == pci_mmap_io) { - space_size = resource_size(&pbm->io_space); - } else { - space_size = resource_size(&pbm->mem_space); - } - - /* Make sure the request is in range. 
-	user_offset = vma->vm_pgoff << PAGE_SHIFT;
-	user_size = vma->vm_end - vma->vm_start;
+	resource_size_t ioaddr = pci_resource_start(pdev, bar);

-	if (user_offset >= space_size ||
-	    (user_offset + user_size) > space_size)
+	if (!pbm)
 		return -EINVAL;

-	if (mmap_state == pci_mmap_io) {
-		vma->vm_pgoff = (pbm->io_space.start +
-				 user_offset) >> PAGE_SHIFT;
-	} else {
-		vma->vm_pgoff = (pbm->mem_space.start +
-				 user_offset) >> PAGE_SHIFT;
-	}
-
-	return 0;
-}
-
-/* Adjust vm_pgoff of VMA such that it is the physical page offset
- * corresponding to the 32-bit pci bus offset for DEV requested by the user.
- *
- * Basically, the user finds the base address for his device which he wishes
- * to mmap. They read the 32-bit value from the config space base register,
- * add whatever PAGE_SIZE multiple offset they wish, and feed this into the
- * offset parameter of mmap on /proc/bus/pci/XXX for that device.
- *
- * Returns negative error code on failure, zero on success.
- */
-static int __pci_mmap_make_offset(struct pci_dev *pdev,
-				  struct vm_area_struct *vma,
-				  enum pci_mmap_state mmap_state)
-{
-	unsigned long user_paddr, user_size;
-	int i, err;
-
-	/* First compute the physical address in vma->vm_pgoff,
-	 * making sure the user offset is within range in the
-	 * appropriate PCI space.
-	 */
-	err = __pci_mmap_make_offset_bus(pdev, vma, mmap_state);
-	if (err)
-		return err;
-
-	/* If this is a mapping on a host bridge, any address
-	 * is OK.
-	 */
-	if ((pdev->class >> 8) == PCI_CLASS_BRIDGE_HOST)
-		return err;
-
-	/* Otherwise make sure it's in the range for one of the
-	 * device's resources.
-	 */
-	user_paddr = vma->vm_pgoff << PAGE_SHIFT;
-	user_size = vma->vm_end - vma->vm_start;
-
-	for (i = 0; i <= PCI_ROM_RESOURCE; i++) {
-		struct resource *rp = &pdev->resource[i];
-		resource_size_t aligned_end;
-
-		/* Active? */
-		if (!rp->flags)
-			continue;
-
-		/* Same type? */
-		if (i == PCI_ROM_RESOURCE) {
-			if (mmap_state != pci_mmap_mem)
-				continue;
-		} else {
-			if ((mmap_state == pci_mmap_io &&
-			     (rp->flags & IORESOURCE_IO) == 0) ||
-			    (mmap_state == pci_mmap_mem &&
-			     (rp->flags & IORESOURCE_MEM) == 0))
-				continue;
-		}
-
-		/* Align the resource end to the next page address.
-		 * PAGE_SIZE intentionally added instead of (PAGE_SIZE - 1),
-		 * because actually we need the address of the next byte
-		 * after rp->end.
-		 */
-		aligned_end = (rp->end + PAGE_SIZE) & PAGE_MASK;
-
-		if ((rp->start <= user_paddr) &&
-		    (user_paddr + user_size) <= aligned_end)
-			break;
-	}
-
-	if (i > PCI_ROM_RESOURCE)
-		return -EINVAL;
-
-	return 0;
-}
-
-/* Set vm_page_prot of VMA, as appropriate for this architecture, for a pci
- * device mapping.
- */
-static void __pci_mmap_set_pgprot(struct pci_dev *dev, struct vm_area_struct *vma,
-				  enum pci_mmap_state mmap_state)
-{
-	/* Our io_remap_pfn_range takes care of this, do nothing. */
-}
-
-/* Perform the actual remap of the pages for a PCI device mapping, as appropriate
- * for this architecture. The region in the process to map is described by vm_start
- * and vm_end members of VMA, the BAR relative address is found in vm_pgoff.
- * The pci device structure is provided so that architectures may make mapping
- * decisions on a per-device or per-bus basis.
- *
- * Returns a negative error code on failure, zero on success.
- */
-int pci_mmap_resource_range(struct pci_dev *dev, int bar,
-			    struct vm_area_struct *vma,
-			    enum pci_mmap_state mmap_state, int write_combine)
-{
-	int ret;
-	resource_size_t start, end;
-
-	/* convert per-BAR address to PCI bus address */
-	pci_resource_to_user(dev, bar, &dev->resource[bar], &start, &end);
-	vma->vm_pgoff += start >> PAGE_SHIFT;
-
-	ret = __pci_mmap_make_offset(dev, vma, mmap_state);
-	if (ret < 0)
-		return ret;
-
-	__pci_mmap_set_pgprot(dev, vma, mmap_state);
-
-	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
-	ret = io_remap_pfn_range(vma, vma->vm_start,
-				 vma->vm_pgoff,
-				 vma->vm_end - vma->vm_start,
-				 vma->vm_page_prot);
-	if (ret)
-		return ret;
+	vma->vm_pgoff += (ioaddr + pbm->io_space.start) >> PAGE_SHIFT;

 	return 0;
 }
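For reference, a minimal user-space sketch of the interface this series
converges on: with pci_mmap_resource_range(), the mmap() offset on a sysfs
resourceN file is relative to the BAR itself, so no bus address needs to
be read from config space first (the device path below is purely
illustrative; pick a real device with a memory BAR 0):

	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/mman.h>
	#include <unistd.h>

	int main(void)
	{
		/* illustrative path; substitute a real device */
		int fd = open("/sys/bus/pci/devices/0000:01:00.0/resource0",
			      O_RDWR | O_SYNC);
		if (fd < 0) {
			perror("open");
			return 1;
		}

		/* offset 0 == start of BAR 0; the offset is BAR-relative */
		volatile unsigned int *bar = mmap(NULL, 4096,
						  PROT_READ | PROT_WRITE,
						  MAP_SHARED, fd, 0);
		if (bar == MAP_FAILED) {
			perror("mmap");
			close(fd);
			return 1;
		}

		printf("first dword of BAR0: 0x%x\n", bar[0]);

		munmap((void *)bar, 4096);
		close(fd);
		return 0;
	}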