From patchwork Fri Jun 29 06:15:27 2012
X-Patchwork-Submitter: Hiroshi Doyu
X-Patchwork-Id: 167988
From: Hiroshi DOYU
CC: Hiroshi DOYU, Chris Wright, Stephen Warren
Subject: [v3 2/2] iommu/tegra: smmu: Fix unsleepable memory allocation at alloc_pdir()
Date: Fri, 29 Jun 2012 09:15:27 +0300
Message-ID: <1340950527-16482-2-git-send-email-hdoyu@nvidia.com>
X-Mailer: git-send-email 1.7.5.4
In-Reply-To:
 <1340950527-16482-1-git-send-email-hdoyu@nvidia.com>
References: <20120628.221945.1306825810095951985.hdoyu@nvidia.com>
 <1340950527-16482-1-git-send-email-hdoyu@nvidia.com>
X-Mailing-List: linux-tegra@vger.kernel.org

alloc_pdir() is called from smmu_iommu_domain_init() with a spinlock
held, so the memory allocations in alloc_pdir() had to be atomic.
Instead of converting them to atomic allocations, this patch releases
the lock, performs the allocations, re-acquires the lock, and then
checks whether it raced with another caller, which avoids introducing
a mutex or preallocation.

Signed-off-by: Hiroshi DOYU
Reported-by: Chris Wright
Cc: Chris Wright
Cc: Stephen Warren
---
v3: Return -EAGAIN from alloc_pdir() when the allocation raced, so
that the caller retries.
---
 drivers/iommu/tegra-smmu.c |   77 +++++++++++++++++++++++++-------------------
 1 files changed, 45 insertions(+), 32 deletions(-)

diff --git a/drivers/iommu/tegra-smmu.c b/drivers/iommu/tegra-smmu.c
index 532c8a4..dbba94c 100644
--- a/drivers/iommu/tegra-smmu.c
+++ b/drivers/iommu/tegra-smmu.c
@@ -539,28 +539,39 @@ static inline void put_signature(struct smmu_as *as,
 /*
  * Caller must lock/unlock as
  */
-static int alloc_pdir(struct smmu_as *as)
+static int alloc_pdir(struct smmu_as *as, unsigned long *flags)
 {
 	unsigned long *pdir;
-	int pdn;
+	int pdn, err = 0;
 	u32 val;
 	struct smmu_device *smmu = as->smmu;
+	struct page *page;
+	unsigned int *cnt;
 
-	as->pte_count = devm_kzalloc(smmu->dev,
-		     sizeof(as->pte_count[0]) * SMMU_PDIR_COUNT, GFP_KERNEL);
-	if (!as->pte_count) {
-		dev_err(smmu->dev,
-			"failed to allocate smmu_device PTE cunters\n");
-		return -ENOMEM;
+	/*
+	 * do the allocation outside the as lock
+	 */
+	spin_unlock_irqrestore(&as->lock, *flags);
+	cnt = devm_kzalloc(smmu->dev,
+			   sizeof(cnt[0]) * SMMU_PDIR_COUNT, GFP_KERNEL);
+	page = alloc_page(GFP_KERNEL | __GFP_DMA);
+	spin_lock_irqsave(&as->lock, *flags);
+
+	if (as->pdir_page) {
+		/* We raced, free the redundant */
+		err = -EAGAIN;
+		goto err_out;
 	}
-	as->pdir_page = alloc_page(GFP_KERNEL | __GFP_DMA);
-	if (!as->pdir_page) {
-		dev_err(smmu->dev,
-			"failed to allocate smmu_device page directory\n");
-		devm_kfree(smmu->dev, as->pte_count);
-		as->pte_count = NULL;
-		return -ENOMEM;
+
+	if (!page || !cnt) {
+		dev_err(smmu->dev, "failed to allocate at %s\n", __func__);
+		err = -ENOMEM;
+		goto err_out;
 	}
+
+	as->pdir_page = page;
+	as->pte_count = cnt;
+
 	SetPageReserved(as->pdir_page);
 	pdir = page_address(as->pdir_page);
 
@@ -577,6 +588,12 @@ static int alloc_pdir(struct smmu_as *as)
 	FLUSH_SMMU_REGS(as->smmu);
 
 	return 0;
+
+err_out:
+	devm_kfree(smmu->dev, cnt);
+	if (page)
+		__free_page(page);
+	return err;
 }
 
 static void __smmu_iommu_unmap(struct smmu_as *as, dma_addr_t iova)
@@ -768,29 +785,29 @@ out:
 
 static int smmu_iommu_domain_init(struct iommu_domain *domain)
 {
-	int i;
+	int i, err = -ENODEV;
 	unsigned long flags;
 	struct smmu_as *as;
 	struct smmu_device *smmu = smmu_handle;
 
 	/* Look for a free AS with lock held */
 	for (i = 0; i < smmu->num_as; i++) {
-		struct smmu_as *tmp = &smmu->as[i];
-
-		spin_lock_irqsave(&tmp->lock, flags);
-		if (!tmp->pdir_page) {
-			as = tmp;
-			goto found;
+		as = &smmu->as[i];
+		spin_lock_irqsave(&as->lock, flags);
+		if (!as->pdir_page) {
+			err = alloc_pdir(as, &flags);
+			if (!err)
+				goto found;
 		}
-		spin_unlock_irqrestore(&tmp->lock, flags);
+		spin_unlock_irqrestore(&as->lock, flags);
+		if (err != -EAGAIN)
+			break;
 	}
-	dev_err(smmu->dev, "no free AS\n");
-	return -ENODEV;
+	if (i == smmu->num_as)
+		dev_err(smmu->dev, "no free AS\n");
+	return err;
 
 found:
-	if (alloc_pdir(as) < 0)
-		goto err_alloc_pdir;
-
 	spin_lock(&smmu->lock);
 
 	/* Update PDIR register */
@@ -806,10 +823,6 @@ found:
 	dev_dbg(smmu->dev, "smmu_as@%p\n", as);
 
 	return 0;
-
-err_alloc_pdir:
-	spin_unlock_irqrestore(&as->lock, flags);
-	return -ENODEV;
 }
 
 static void smmu_iommu_domain_destroy(struct iommu_domain *domain)