From patchwork Tue Jun 25 01:25:20 2013
X-Patchwork-Submitter: Wedson Almeida Filho
X-Patchwork-Id: 253996
X-Patchwork-Delegate: davem@davemloft.net
From: Wedson Almeida Filho
To: "David S. Miller"
Cc: sergei.shtylyov@cogentembedded.com, Thomas Graf, netdev@vger.kernel.org,
    linux-kernel@vger.kernel.org, Wedson Almeida Filho
Subject: [PATCH v3] net: Unmap fragment page once iterator is done
Date: Mon, 24 Jun 2013 18:25:20 -0700
Message-Id: <1372123520-18287-1-git-send-email-wedsonaf@gmail.com>
X-Mailer: git-send-email 1.7.9.5
X-Mailing-List: netdev@vger.kernel.org

Callers of skb_seq_read() are currently forced to call skb_abort_seq_read()
even when they consume all the data, because the last call to skb_seq_read()
(the one that returns 0 to indicate the end) fails to unmap the last
fragment page.

With this patch, callers can traverse the SKB data by calling
skb_prepare_seq_read() once and then calling skb_seq_read() repeatedly, as
originally intended (and documented in the original commit 677e90eda -
"[NET]: Zerocopy sequential reading of skb data"); skb_abort_seq_read()
only needs to be called if the sequential read is actually aborted.

Signed-off-by: Wedson Almeida Filho
---
 drivers/scsi/libiscsi_tcp.c |    1 -
 net/batman-adv/main.c       |    1 -
 net/core/skbuff.c           |    7 ++++++-
 3 files changed, 6 insertions(+), 3 deletions(-)
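
For reference (not part of the diff), a minimal sketch of the read loop this
change enables, modeled on the batman-adv caller below; the "skb" variable
and the zero starting offset are illustrative:

	struct skb_seq_state st;
	const u8 *data;
	unsigned int len, consumed = 0;

	/* Map the skb payload for zero-copy sequential reading. */
	skb_prepare_seq_read(skb, 0, skb->len, &st);

	/* Each call yields a contiguous chunk; 0 signals the end. */
	while ((len = skb_seq_read(consumed, &data, &st)) > 0) {
		/* process 'len' bytes at 'data' */
		consumed += len;
	}

	/*
	 * With this patch, the final skb_seq_read() unmaps the last fragment
	 * itself, so skb_abort_seq_read(&st) is only needed when the loop
	 * exits before consuming everything.
	 */
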
diff --git a/drivers/scsi/libiscsi_tcp.c b/drivers/scsi/libiscsi_tcp.c
index 552e8a2..448eae8 100644
--- a/drivers/scsi/libiscsi_tcp.c
+++ b/drivers/scsi/libiscsi_tcp.c
@@ -906,7 +906,6 @@ int iscsi_tcp_recv_skb(struct iscsi_conn *conn, struct sk_buff *skb,
 			ISCSI_DBG_TCP(conn, "no more data avail. Consumed %d\n",
 				      consumed);
 			*status = ISCSI_TCP_SKB_DONE;
-			skb_abort_seq_read(&seq);
 			goto skb_done;
 		}
 		BUG_ON(segment->copied >= segment->size);
diff --git a/net/batman-adv/main.c b/net/batman-adv/main.c
index 51aafd6..08125f3 100644
--- a/net/batman-adv/main.c
+++ b/net/batman-adv/main.c
@@ -473,7 +473,6 @@ __be32 batadv_skb_crc32(struct sk_buff *skb, u8 *payload_ptr)
 		crc = crc32c(crc, data, len);
 		consumed += len;
 	}
-	skb_abort_seq_read(&st);
 
 	return htonl(crc);
 }
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index cfd777b..26ea1cf 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -2554,8 +2554,13 @@ unsigned int skb_seq_read(unsigned int consumed, const u8 **data,
 	unsigned int block_limit, abs_offset = consumed + st->lower_offset;
 	skb_frag_t *frag;
 
-	if (unlikely(abs_offset >= st->upper_offset))
+	if (unlikely(abs_offset >= st->upper_offset)) {
+		if (st->frag_data) {
+			kunmap_atomic(st->frag_data);
+			st->frag_data = NULL;
+		}
 		return 0;
+	}
 
 next_skb:
 	block_limit = skb_headlen(st->cur_skb) + st->stepped_offset;