From patchwork Mon Nov 2 12:48:40 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: William Breathitt Gray
X-Patchwork-Id: 1392236
From: William Breathitt Gray
To: kernel-team@lists.ubuntu.com
Subject: [SRU][B:linux-azure-4.15][PATCH 24/40] xdp: allow page_pool as an allocator type in xdp_return_frame
Date: Mon, 2 Nov 2020 07:48:40 -0500
Message-Id: <20201102124856.4659-25-william.gray@canonical.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20201102124856.4659-1-william.gray@canonical.com>
References: <20201102124856.4659-1-william.gray@canonical.com>
MIME-Version: 1.0

From: Jesper Dangaard Brouer

BugLink: https://bugs.launchpad.net/bugs/1877654

New allocator type MEM_TYPE_PAGE_POOL for page_pool usage. The registered
allocator page_pool pointer is not available directly from xdp_rxq_info,
but it could be (if needed). For now, the driver should keep separate
track of the page_pool pointer, which it should use for RX-ring page
allocation.

As suggested by Saeed, to maintain a symmetric API it is the driver's
responsibility to allocate/create and free/destroy the page_pool. Thus,
after the driver has called xdp_rxq_info_unreg(), it is the driver's
responsibility to free the page_pool, but with an RCU free call. This is
done easily via the page_pool helper page_pool_destroy() (which avoids
touching any driver code during the RCU callback, which could happen
after the driver has been unloaded).

V8: address issues found by kbuild test robot
 - Address sparse "should be static" warnings
 - Allow xdp.o to be compiled without page_pool.o

V9: Remove inline from .c file, compiler knows best

Signed-off-by: Jesper Dangaard Brouer
Signed-off-by: David S. Miller
(cherry picked from commit 57d0a1c1ac9e6a836bbab4698ba2a2e03f64bf1b)
Signed-off-by: William Breathitt Gray
---
 include/net/page_pool.h | 14 ++++++++++
 include/net/xdp.h       |  3 +++
 net/core/xdp.c          | 60 ++++++++++++++++++++++++++++++++---------
 3 files changed, 65 insertions(+), 12 deletions(-)

diff --git a/include/net/page_pool.h b/include/net/page_pool.h
index 1fe77db59518..c79087153148 100644
--- a/include/net/page_pool.h
+++ b/include/net/page_pool.h
@@ -117,7 +117,12 @@ void __page_pool_put_page(struct page_pool *pool,
 
 static inline void page_pool_put_page(struct page_pool *pool, struct page *page)
 {
+	/* When page_pool isn't compiled-in, net/core/xdp.c doesn't
+	 * allow registering MEM_TYPE_PAGE_POOL, but shield linker.
+	 */
+#ifdef CONFIG_PAGE_POOL
 	__page_pool_put_page(pool, page, false);
+#endif
 }
 /* Very limited use-cases allow recycle direct */
 static inline void page_pool_recycle_direct(struct page_pool *pool,
@@ -126,4 +131,13 @@ static inline void page_pool_recycle_direct(struct page_pool *pool,
 	__page_pool_put_page(pool, page, true);
 }
 
+static inline bool is_page_pool_compiled_in(void)
+{
+#ifdef CONFIG_PAGE_POOL
+	return true;
+#else
+	return false;
+#endif
+}
+
 #endif /* _NET_PAGE_POOL_H */
diff --git a/include/net/xdp.h b/include/net/xdp.h
index 5f67c62540aa..d0ee437753dc 100644
--- a/include/net/xdp.h
+++ b/include/net/xdp.h
@@ -36,6 +36,7 @@
 enum xdp_mem_type {
 	MEM_TYPE_PAGE_SHARED = 0, /* Split-page refcnt based model */
 	MEM_TYPE_PAGE_ORDER0,     /* Orig XDP full page model */
+	MEM_TYPE_PAGE_POOL,
 	MEM_TYPE_MAX,
 };
 
@@ -44,6 +45,8 @@ struct xdp_mem_info {
 	u32 id;
 };
 
+struct page_pool;
+
 struct xdp_rxq_info {
 	struct net_device *dev;
 	u32 queue_index;
diff --git a/net/core/xdp.c b/net/core/xdp.c
index 8b2cb79b5de0..33e382afbd95 100644
--- a/net/core/xdp.c
+++ b/net/core/xdp.c
@@ -8,6 +8,7 @@
 #include <linux/slab.h>
 #include <linux/idr.h>
 #include <linux/rhashtable.h>
+#include <net/page_pool.h>
 
 #include <net/xdp.h>
 
@@ -27,7 +28,10 @@ static struct rhashtable *mem_id_ht;
 
 struct xdp_mem_allocator {
 	struct xdp_mem_info mem;
-	void *allocator;
+	union {
+		void *allocator;
+		struct page_pool *page_pool;
+	};
 	struct rhash_head node;
 	struct rcu_head rcu;
 };
@@ -74,7 +78,9 @@ static void __xdp_mem_allocator_rcu_free(struct rcu_head *rcu)
 	/* Allow this ID to be reused */
 	ida_simple_remove(&mem_id_pool, xa->mem.id);
 
-	/* TODO: Depending on allocator type/pointer free resources */
+	/* Notice, driver is expected to free the *allocator,
+	 * e.g. page_pool, and MUST also use RCU free.
+	 */
 
 	/* Poison memory */
 	xa->mem.id = 0xFFFF;
@@ -225,6 +231,17 @@ static int __mem_id_cyclic_get(gfp_t gfp)
 	return id;
 }
 
+static bool __is_supported_mem_type(enum xdp_mem_type type)
+{
+	if (type == MEM_TYPE_PAGE_POOL)
+		return is_page_pool_compiled_in();
+
+	if (type >= MEM_TYPE_MAX)
+		return false;
+
+	return true;
+}
+
 int xdp_rxq_info_reg_mem_model(struct xdp_rxq_info *xdp_rxq,
 			       enum xdp_mem_type type, void *allocator)
 {
@@ -238,13 +255,16 @@ int xdp_rxq_info_reg_mem_model(struct xdp_rxq_info *xdp_rxq,
 		return -EFAULT;
 	}
 
-	if (type >= MEM_TYPE_MAX)
-		return -EINVAL;
+	if (!__is_supported_mem_type(type))
+		return -EOPNOTSUPP;
 
 	xdp_rxq->mem.type = type;
 
-	if (!allocator)
+	if (!allocator) {
+		if (type == MEM_TYPE_PAGE_POOL)
+			return -EINVAL; /* Setup time check page_pool req */
 		return 0;
+	}
 
 	/* Delay init of rhashtable to save memory if feature isn't used */
 	if (!mem_id_init) {
@@ -290,15 +310,31 @@ EXPORT_SYMBOL_GPL(xdp_rxq_info_reg_mem_model);
 
 void xdp_return_frame(void *data, struct xdp_mem_info *mem)
 {
-	if (mem->type == MEM_TYPE_PAGE_SHARED) {
+	struct xdp_mem_allocator *xa;
+	struct page *page;
+
+	switch (mem->type) {
+	case MEM_TYPE_PAGE_POOL:
+		rcu_read_lock();
+		/* mem->id is valid, checked in xdp_rxq_info_reg_mem_model() */
+		xa = rhashtable_lookup(mem_id_ht, &mem->id, mem_id_rht_params);
+		page = virt_to_head_page(data);
+		if (xa)
+			page_pool_put_page(xa->page_pool, page);
+		else
+			put_page(page);
+		rcu_read_unlock();
+		break;
+	case MEM_TYPE_PAGE_SHARED:
 		page_frag_free(data);
-		return;
-	}
-
-	if (mem->type == MEM_TYPE_PAGE_ORDER0) {
-		struct page *page = virt_to_page(data); /* Assumes order0 page*/
-
+		break;
+	case MEM_TYPE_PAGE_ORDER0:
+		page = virt_to_page(data); /* Assumes order0 page*/
 		put_page(page);
+		break;
+	default:
+		/* Not possible, checked in xdp_rxq_info_reg_mem_model() */
+		break;
 	}
 }
 EXPORT_SYMBOL_GPL(xdp_return_frame);
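
A usage sketch, for reviewers who want to see the driver-side flow the
commit message describes: the driver creates and owns the page_pool, hands
it to xdp_rxq_info_reg_mem_model() as the MEM_TYPE_PAGE_POOL allocator, and
destroys it with page_pool_destroy() only after xdp_rxq_info_unreg(). This
is illustrative only and not code from this series; "struct my_rx_ring" and
the my_driver_* names are hypothetical, and the page_pool_params values are
placeholder assumptions to be checked against the page_pool.h added earlier
in the series.

/* Illustrative sketch only: my_rx_ring and my_driver_* are hypothetical. */
#include <linux/err.h>
#include <linux/netdevice.h>
#include <net/page_pool.h>
#include <net/xdp.h>

struct my_rx_ring {
	struct net_device *netdev;
	struct page_pool *page_pool;
	struct xdp_rxq_info xdp_rxq;
	u32 queue_index;
};

static int my_driver_rx_ring_setup(struct my_rx_ring *ring)
{
	struct page_pool_params pp_params = {
		.order		= 0,		/* order-0 pages for RX */
		.pool_size	= 256,		/* roughly the RX ring size */
		.nid		= NUMA_NO_NODE,
		.dev		= ring->netdev->dev.parent,
		.dma_dir	= DMA_FROM_DEVICE,
	};
	int err;

	/* Driver keeps its own pointer to the pool it creates. */
	ring->page_pool = page_pool_create(&pp_params);
	if (IS_ERR(ring->page_pool))
		return PTR_ERR(ring->page_pool);

	err = xdp_rxq_info_reg(&ring->xdp_rxq, ring->netdev, ring->queue_index);
	if (err)
		goto err_destroy_pool;

	/* Register the pool as the allocator behind MEM_TYPE_PAGE_POOL so
	 * xdp_return_frame() can hand pages back to it.
	 */
	err = xdp_rxq_info_reg_mem_model(&ring->xdp_rxq, MEM_TYPE_PAGE_POOL,
					 ring->page_pool);
	if (err)
		goto err_unreg_rxq;

	return 0;

err_unreg_rxq:
	xdp_rxq_info_unreg(&ring->xdp_rxq);
err_destroy_pool:
	page_pool_destroy(ring->page_pool);
	return err;
}

static void my_driver_rx_ring_teardown(struct my_rx_ring *ring)
{
	xdp_rxq_info_unreg(&ring->xdp_rxq);
	/* Freeing the pool is the driver's job; page_pool_destroy()
	 * performs the RCU-deferred free the commit message refers to.
	 */
	page_pool_destroy(ring->page_pool);
}

On the RX fast path the ring would then allocate its receive pages from the
same pool (e.g. via page_pool_dev_alloc_pages(ring->page_pool)), so that
pages returned through xdp_return_frame() with MEM_TYPE_PAGE_POOL are
recycled back into that pool rather than freed.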