From patchwork Wed Jan 2 21:01:43 2019
Date: Wed, 2 Jan 2019 13:01:43 -0800 (PST)
From: David Rientjes
To: "David S. Miller", Eric Dumazet
cc: Andrew Morton, Willem de Bruijn, Michal Hocko, Vlastimil Babka, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [patch] net, skbuff: do not prefer skb allocation fails early

Commit dcda9b04713c ("mm, tree wide: replace __GFP_REPEAT by __GFP_RETRY_MAYFAIL with more useful semantic") replaced __GFP_REPEAT in alloc_skb_with_frags() with __GFP_RETRY_MAYFAIL when the allocation may directly reclaim.
The previous behavior would require reclaiming up to 1 << order pages for an skb-aligned header_len of order > PAGE_ALLOC_COSTLY_ORDER before failing; otherwise, the allocations in alloc_skb() would loop in the page allocator looking for memory. __GFP_RETRY_MAYFAIL makes both allocations able to fail under memory pressure, including the HEAD allocation.

This can cause, among many other things, write() to fail with ENOTCONN during RPC when under memory pressure.

These allocations should succeed as they did prior to dcda9b04713c, even if that requires calling the oom killer and additional looping in the page allocator to find memory. There is no way to specify the previous behavior of __GFP_REPEAT, but it is unlikely to be necessary: the previous behavior only guaranteed that 1 << order pages would be reclaimed before failing for order > PAGE_ALLOC_COSTLY_ORDER, and that reclaim is not guaranteed to produce contiguous memory, so repeating for such large orders is usually not beneficial.

Remove the setting of __GFP_RETRY_MAYFAIL to restore the previous behavior, specifically not allowing alloc_skb() to fail for small orders, and oom kill if necessary rather than allowing RPCs to fail.
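For reference, the gfp-mask adjustment this patch removes can be modeled in plain userspace C. The flag values and helper names below are illustrative stand-ins, not the kernel's actual gfp.h definitions; only the bitwise logic mirrors the code being deleted:

```c
#include <assert.h>

/* Illustrative stand-in bits; the real values live in include/linux/gfp.h. */
typedef unsigned int gfp_t;
#define __GFP_DIRECT_RECLAIM 0x400u
#define __GFP_RETRY_MAYFAIL  0x1000u

/* Before this patch: any mask allowed to direct-reclaim was also marked
 * "retry, but allow failure", which is what let the alloc_skb() HEAD
 * allocation fail under memory pressure. */
static gfp_t adjust_gfp_head_old(gfp_t gfp_mask)
{
	gfp_t gfp_head = gfp_mask;

	if (gfp_head & __GFP_DIRECT_RECLAIM)
		gfp_head |= __GFP_RETRY_MAYFAIL;
	return gfp_head;
}

/* After this patch: the caller's mask is passed through unchanged, so
 * small-order allocations keep looping (and may oom kill) rather than fail. */
static gfp_t adjust_gfp_head_new(gfp_t gfp_mask)
{
	return gfp_mask;
}
```

The behavioral difference is confined to masks carrying __GFP_DIRECT_RECLAIM (e.g. GFP_KERNEL-style allocations); atomic allocations never took the __GFP_RETRY_MAYFAIL branch in the first place.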
Fixes: dcda9b04713c ("mm, tree wide: replace __GFP_REPEAT by __GFP_RETRY_MAYFAIL with more useful semantic")
Signed-off-by: David Rientjes
Reviewed-by: Eric Dumazet
---
 net/core/skbuff.c | 7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -5270,7 +5270,6 @@ struct sk_buff *alloc_skb_with_frags(unsigned long header_len,
 	unsigned long chunk;
 	struct sk_buff *skb;
 	struct page *page;
-	gfp_t gfp_head;
 	int i;
 
 	*errcode = -EMSGSIZE;
@@ -5280,12 +5279,8 @@ struct sk_buff *alloc_skb_with_frags(unsigned long header_len,
 	if (npages > MAX_SKB_FRAGS)
 		return NULL;
 
-	gfp_head = gfp_mask;
-	if (gfp_head & __GFP_DIRECT_RECLAIM)
-		gfp_head |= __GFP_RETRY_MAYFAIL;
-
 	*errcode = -ENOBUFS;
-	skb = alloc_skb(header_len, gfp_head);
+	skb = alloc_skb(header_len, gfp_mask);
 	if (!skb)
 		return NULL;