From patchwork Tue Aug 27 01:26:08 2019
X-Patchwork-Submitter: dann frazier
X-Patchwork-Id: 1153525
From: dann frazier
To: kernel-team@lists.ubuntu.com
Subject: [PATCH 1/1][Eoan][SRU Disco] dma-direct: fix zone selection after
 an unaddressable CMA allocation
Date: Mon, 26 Aug 2019 19:26:08 -0600
Message-Id: <20190827012608.12501-2-dann.frazier@canonical.com>
X-Mailer: git-send-email 2.23.0
In-Reply-To: <20190827012608.12501-1-dann.frazier@canonical.com>
References: <20190827012608.12501-1-dann.frazier@canonical.com>

From: Christoph Hellwig

BugLink: https://bugs.launchpad.net/bugs/1841483

The new dma_alloc_contiguous hides if we allocate CMA or regular
pages, and thus fails to retry a ZONE_NORMAL allocation if the CMA
allocation succeeds but isn't addressable.  That means we either fail
outright or dip into a small zone that might not succeed either.

Thanks to Hillf Danton for debugging this issue.

Fixes: b1d2dc009dec ("dma-contiguous: add dma_{alloc,free}_contiguous() helpers")
Reported-by: Tobias Klausmann
Signed-off-by: Christoph Hellwig
Tested-by: Tobias Klausmann
(backported from commit 90ae409f9eb3bcaf38688f9ec22375816053a08e)
[ dannf: dropped dma-iommu.c changes, as that didn't switch over to
  the new dma_alloc_contiguous() interface until after v5.2 ]
Signed-off-by: dann frazier
Acked-by: Kleber Sacilotto de Souza
---
 include/linux/dma-contiguous.h |  5 +----
 kernel/dma/contiguous.c        |  8 ++------
 kernel/dma/direct.c            | 10 +++++++++-
 3 files changed, 12 insertions(+), 11 deletions(-)

diff --git a/include/linux/dma-contiguous.h b/include/linux/dma-contiguous.h
index c05d4e661489b..03f8e98e3bcc9 100644
--- a/include/linux/dma-contiguous.h
+++ b/include/linux/dma-contiguous.h
@@ -160,10 +160,7 @@ bool dma_release_from_contiguous(struct device *dev, struct page *pages,
 static inline struct page *dma_alloc_contiguous(struct device *dev,
 		size_t size, gfp_t gfp)
 {
-	int node = dev ? dev_to_node(dev) : NUMA_NO_NODE;
-	size_t align = get_order(PAGE_ALIGN(size));
-
-	return alloc_pages_node(node, gfp, align);
+	return NULL;
 }
 
 static inline void dma_free_contiguous(struct device *dev, struct page *page,
diff --git a/kernel/dma/contiguous.c b/kernel/dma/contiguous.c
index 2bd410f934b32..69cfb4345388c 100644
--- a/kernel/dma/contiguous.c
+++ b/kernel/dma/contiguous.c
@@ -230,9 +230,7 @@ bool dma_release_from_contiguous(struct device *dev, struct page *pages,
  */
 struct page *dma_alloc_contiguous(struct device *dev, size_t size, gfp_t gfp)
 {
-	int node = dev ? dev_to_node(dev) : NUMA_NO_NODE;
-	size_t count = PAGE_ALIGN(size) >> PAGE_SHIFT;
-	size_t align = get_order(PAGE_ALIGN(size));
+	size_t count = size >> PAGE_SHIFT;
 	struct page *page = NULL;
 	struct cma *cma = NULL;
 
@@ -243,14 +241,12 @@ struct page *dma_alloc_contiguous(struct device *dev, size_t size, gfp_t gfp)
 
 	/* CMA can be used only in the context which permits sleeping */
 	if (cma && gfpflags_allow_blocking(gfp)) {
+		size_t align = get_order(size);
 		size_t cma_align = min_t(size_t, align,
 					 CONFIG_CMA_ALIGNMENT);
 
 		page = cma_alloc(cma, count, cma_align, gfp & __GFP_NOWARN);
 	}
 
-	/* Fallback allocation of normal pages */
-	if (!page)
-		page = alloc_pages_node(node, gfp, align);
 	return page;
 }
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 0816c1e8b05af..18596b804d384 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -96,6 +96,8 @@ static bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size)
 struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
 		dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs)
 {
+	size_t alloc_size = PAGE_ALIGN(size);
+	int node = dev_to_node(dev);
 	struct page *page = NULL;
 	u64 phys_mask;
 
@@ -106,8 +108,14 @@ struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
 	gfp &= ~__GFP_ZERO;
 	gfp |= __dma_direct_optimal_gfp_mask(dev, dev->coherent_dma_mask,
 			&phys_mask);
+	page = dma_alloc_contiguous(dev, alloc_size, gfp);
+	if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
+		dma_free_contiguous(dev, page, alloc_size);
+		page = NULL;
+	}
 again:
-	page = dma_alloc_contiguous(dev, size, gfp);
+	if (!page)
+		page = alloc_pages_node(node, gfp, get_order(alloc_size));
 	if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
 		dma_free_contiguous(dev, page, size);
 		page = NULL;
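
Note for reviewers (not part of the patch to apply): below is a minimal,
self-contained user-space C sketch of the allocation policy this patch
establishes in __dma_direct_alloc_pages(). All names in it (page_stub,
try_cma, alloc_zone_pages, addressable) are hypothetical stand-ins for
dma_alloc_contiguous(), alloc_pages_node() and dma_coherent_ok(), and the
GFP_DMA32/GFP_DMA retry loop behind the "again:" label is elided. The point
it models: CMA is attempted exactly once, an unaddressable CMA result is
freed rather than returned, and only then does the normal zone allocator
run, so zone selection can still succeed.

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct page_stub { unsigned long long phys; };

/* Pretend the CMA area sits at 5 GiB, above a 32-bit coherent DMA mask. */
static struct page_stub *try_cma(void)
{
	struct page_stub *p = malloc(sizeof(*p));

	if (p)
		p->phys = 5ULL << 30;
	return p;
}

/* Pretend the normal zone allocator can still hand back memory at 1 GiB. */
static struct page_stub *alloc_zone_pages(void)
{
	struct page_stub *p = malloc(sizeof(*p));

	if (p)
		p->phys = 1ULL << 30;
	return p;
}

static bool addressable(const struct page_stub *p, unsigned long long mask)
{
	return p->phys <= mask;
}

static struct page_stub *dma_direct_alloc_sketch(unsigned long long mask)
{
	/* Step 1: a single CMA attempt; an unaddressable result is
	 * released instead of being returned to the caller. */
	struct page_stub *page = try_cma();

	if (page && !addressable(page, mask)) {
		free(page);
		page = NULL;
	}

	/* Step 2: only if CMA produced nothing usable, fall back to the
	 * normal allocator (where the kernel can retry lower zones). */
	if (!page)
		page = alloc_zone_pages();
	if (page && !addressable(page, mask)) {
		free(page);
		page = NULL;
	}
	return page;
}

int main(void)
{
	unsigned long long mask32 = (1ULL << 32) - 1;
	struct page_stub *page = dma_direct_alloc_sketch(mask32);

	if (page)
		printf("fallback satisfied the mask: phys=0x%llx\n",
		       page->phys);
	else
		printf("allocation failed\n");
	free(page);
	return 0;
}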