get:
Show a patch.

patch:
Partially update a patch; only the fields included in the request are changed.

put:
Update a patch.
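
These operations can be driven from any HTTP client. Below is a minimal Python sketch using the requests library; the token value, patch ID, and new state are placeholders, and the "Authorization: Token ..." header is an assumption about how this server expects API tokens to be supplied (the GET works anonymously).

import requests

BASE = "http://patchwork.ozlabs.org/api"
TOKEN = "0123456789abcdef"                      # placeholder: your API token
HEADERS = {"Authorization": "Token " + TOKEN}   # assumed token-auth scheme

# Show a patch (GET); read access needs no authentication here.
resp = requests.get(BASE + "/patches/674/")
resp.raise_for_status()
patch = resp.json()
print(patch["name"], patch["state"])

# Partially update a patch (PATCH); only the submitted fields change.
# Changing state normally requires maintainer rights on the project.
resp = requests.patch(
    BASE + "/patches/674/",
    headers=HEADERS,
    json={"state": "accepted"},                 # hypothetical new state
)
resp.raise_for_status()
print(resp.json()["state"])

The PUT form works the same way but replaces the writable fields as a whole rather than merging the ones you send.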

GET /api/patches/674/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 674,
    "url": "http://patchwork.ozlabs.org/api/patches/674/?format=api",
    "web_url": "http://patchwork.ozlabs.org/project/netdev/patch/20080919.155037.37762423.davem@davemloft.net/",
    "project": {
        "id": 7,
        "url": "http://patchwork.ozlabs.org/api/projects/7/?format=api",
        "name": "Linux network development",
        "link_name": "netdev",
        "list_id": "netdev.vger.kernel.org",
        "list_email": "netdev@vger.kernel.org",
        "web_url": null,
        "scm_url": null,
        "webscm_url": null,
        "list_archive_url": "",
        "list_archive_url_format": "",
        "commit_url_format": ""
    },
    "msgid": "<20080919.155037.37762423.davem@davemloft.net>",
    "list_archive_url": null,
    "date": "2008-09-19T22:50:37",
    "name": "[INFO] : Use list_head in sk_buff...",
    "commit_ref": null,
    "pull_url": null,
    "state": "rfc",
    "archived": true,
    "hash": "1fa5f46c34fd684c3a1c6eed7db2166b61f1e352",
    "submitter": {
        "id": 15,
        "url": "http://patchwork.ozlabs.org/api/people/15/?format=api",
        "name": "David Miller",
        "email": "davem@davemloft.net"
    },
    "delegate": null,
    "mbox": "http://patchwork.ozlabs.org/project/netdev/patch/20080919.155037.37762423.davem@davemloft.net/mbox/",
    "series": [],
    "comments": "http://patchwork.ozlabs.org/api/patches/674/comments/",
    "check": "pending",
    "checks": "http://patchwork.ozlabs.org/api/patches/674/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<netdev-owner@vger.kernel.org>",
        "X-Original-To": "patchwork-incoming@ozlabs.org",
        "Delivered-To": "patchwork-incoming@ozlabs.org",
        "Received": [
            "from vger.kernel.org (vger.kernel.org [209.132.176.167])\n\tby ozlabs.org (Postfix) with ESMTP id 67FF2DDE09\n\tfor <patchwork-incoming@ozlabs.org>;\n\tSat, 20 Sep 2008 08:51:05 +1000 (EST)",
            "(majordomo@vger.kernel.org) by vger.kernel.org via listexpand\n\tid S1751694AbYISWu6 (ORCPT <rfc822;patchwork-incoming@ozlabs.org>);\n\tFri, 19 Sep 2008 18:50:58 -0400",
            "(majordomo@vger.kernel.org) by vger.kernel.org id S1751503AbYISWu6\n\t(ORCPT <rfc822; netdev-outgoing>); Fri, 19 Sep 2008 18:50:58 -0400",
            "from 74-93-104-97-Washington.hfc.comcastbusiness.net\n\t([74.93.104.97]:55409\n\t\"EHLO sunset.davemloft.net\" rhost-flags-OK-FAIL-OK-OK)\n\tby vger.kernel.org with ESMTP id S1751389AbYISWuw (ORCPT\n\t<rfc822;netdev@vger.kernel.org>); Fri, 19 Sep 2008 18:50:52 -0400",
            "from localhost (localhost [127.0.0.1])\n\tby sunset.davemloft.net (Postfix) with ESMTP id 82DDCC8C181\n\tfor <netdev@vger.kernel.org>; Fri, 19 Sep 2008 15:50:37 -0700 (PDT)"
        ],
        "Date": "Fri, 19 Sep 2008 15:50:37 -0700 (PDT)",
        "Message-Id": "<20080919.155037.37762423.davem@davemloft.net>",
        "To": "netdev@vger.kernel.org",
        "Subject": "[INFO PATCH]: Use list_head in sk_buff...",
        "From": "David Miller <davem@davemloft.net>",
        "X-Mailer": "Mew version 6.1 on Emacs 22.1 / Mule 5.0 (SAKAKI)",
        "Mime-Version": "1.0",
        "Content-Type": "Text/Plain; charset=us-ascii",
        "Content-Transfer-Encoding": "7bit",
        "Sender": "netdev-owner@vger.kernel.org",
        "Precedence": "bulk",
        "List-ID": "<netdev.vger.kernel.org>",
        "X-Mailing-List": "netdev@vger.kernel.org"
    },
    "content": "I just want folks to know I have this patch against net-next-2.6\n\nIt's been running on my workstation for the better part of the last\nday so basic things work fine.  It also passes allmodconfig builds\non sparc64.\n\nBut I want to see if I can find some way to do this change in stages\nso that the transition is less painful and can be bisected at least\npartially.\n\nThe biggest pain areas are TIPC and SCTP.  Actually, TIPC gets special\nmarks for implementing it's own SKB queues in a thousand different\nways instead of using the standard skb_queue_head facilities.  I didn't\neven try to do anything special for them in the patch below.\n\nMost things were trivially converted or \"just worked\" because they used\nthe generic interfaces for SKB queue management.\n\nYou'll also notice that this patch doesn't try to handle the frag lists\nspecially yet.  That could get the same treatment, making the\nskb_shinfo()->frag_list be a list_head too.\n\nFinally, I want to mention that some other things we get from doing this\nchange:\n\n1) Things that just need a list of SKBs without the lock and the dinky\n   qlen thing, can just convert to using a list_head to manage their\n   queues.  Just like any other piece of the kernel.\n\n2) It's now easier to add the call_single_data member in that initial\n   sk_buff anonymous union member, in order to minimize the space cost,\n   in those networking remote softirq patches I posted the other day.",
    "diff": "diff --git a/drivers/atm/idt77252.c b/drivers/atm/idt77252.c\nindex 3a504e9..dc6151d 100644\n--- a/drivers/atm/idt77252.c\n+++ b/drivers/atm/idt77252.c\n@@ -1115,10 +1115,8 @@ dequeue_rx(struct idt77252_dev *card, struct rsq_entry *rsqe)\n \trpp = &vc->rcv.rx_pool;\n \n \trpp->len += skb->len;\n-\tif (!rpp->count++)\n-\t\trpp->first = skb;\n-\t*rpp->last = skb;\n-\trpp->last = &skb->next;\n+\tlist_add_tail(&skb->list, &rpp->list);\n+\trpp->count++;\n \n \tif (stat & SAR_RSQE_EPDU) {\n \t\tunsigned char *l1l2;\n@@ -1161,12 +1159,9 @@ dequeue_rx(struct idt77252_dev *card, struct rsq_entry *rsqe)\n \t\t\t\tdev_kfree_skb(skb);\n \t\t\t\treturn;\n \t\t\t}\n-\t\t\tsb = rpp->first;\n-\t\t\tfor (i = 0; i < rpp->count; i++) {\n+\t\t\tlist_for_each_entry(sb, &rpp->list, list)\n \t\t\t\tmemcpy(skb_put(skb, sb->len),\n \t\t\t\t       sb->data, sb->len);\n-\t\t\t\tsb = sb->next;\n-\t\t\t}\n \n \t\t\trecycle_rx_pool_skb(card, rpp);\n \n@@ -1180,7 +1175,6 @@ dequeue_rx(struct idt77252_dev *card, struct rsq_entry *rsqe)\n \t\t\treturn;\n \t\t}\n \n-\t\tskb->next = NULL;\n \t\tflush_rx_pool(card, rpp);\n \n \t\tif (!atm_charge(vcc, skb->truesize)) {\n@@ -1920,23 +1914,16 @@ flush_rx_pool(struct idt77252_dev *card, struct rx_pool *rpp)\n {\n \trpp->len = 0;\n \trpp->count = 0;\n-\trpp->first = NULL;\n-\trpp->last = &rpp->first;\n+\tINIT_LIST_HEAD(&rpp->list);\n }\n \n static void\n recycle_rx_pool_skb(struct idt77252_dev *card, struct rx_pool *rpp)\n {\n \tstruct sk_buff *skb, *next;\n-\tint i;\n \n-\tskb = rpp->first;\n-\tfor (i = 0; i < rpp->count; i++) {\n-\t\tnext = skb->next;\n-\t\tskb->next = NULL;\n+\tlist_for_each_entry_safe(skb, next, &rpp->list, list)\n \t\trecycle_rx_skb(card, skb);\n-\t\tskb = next;\n-\t}\n \tflush_rx_pool(card, rpp);\n }\n \ndiff --git a/drivers/atm/idt77252.h b/drivers/atm/idt77252.h\nindex e83eaf1..93ee7a9 100644\n--- a/drivers/atm/idt77252.h\n+++ b/drivers/atm/idt77252.h\n@@ -173,8 +173,7 @@ struct scq_info\n };\n \n struct rx_pool {\n-\tstruct sk_buff\t\t*first;\n-\tstruct sk_buff\t\t**last;\n+\tstruct list_head\tlist;\n \tunsigned int\t\tlen;\n \tunsigned int\t\tcount;\n };\ndiff --git a/drivers/block/aoe/aoecmd.c b/drivers/block/aoe/aoecmd.c\nindex 2f17462..fbda328 100644\n--- a/drivers/block/aoe/aoecmd.c\n+++ b/drivers/block/aoe/aoecmd.c\n@@ -34,7 +34,6 @@ new_skb(ulong len)\n \t\tskb_reset_network_header(skb);\n \t\tskb->protocol = __constant_htons(ETH_P_AOE);\n \t\tskb->priority = 0;\n-\t\tskb->next = skb->prev = NULL;\n \n \t\t/* tell the network layer not to perform IP checksums\n \t\t * or to get the NIC to do it\n@@ -117,7 +116,7 @@ skb_pool_put(struct aoedev *d, struct sk_buff *skb)\n \tif (!d->skbpool_hd)\n \t\td->skbpool_hd = skb;\n \telse\n-\t\td->skbpool_tl->next = skb;\n+\t\td->skbpool_tl->frag_next = skb;\n \td->skbpool_tl = skb;\n }\n \n@@ -128,8 +127,8 @@ skb_pool_get(struct aoedev *d)\n \n \tskb = d->skbpool_hd;\n \tif (skb && atomic_read(&skb_shinfo(skb)->dataref) == 1) {\n-\t\td->skbpool_hd = skb->next;\n-\t\tskb->next = NULL;\n+\t\td->skbpool_hd = skb->frag_next;\n+\t\tskb->frag_next = NULL;\n \t\treturn skb;\n \t}\n \tif (d->nskbpool < NSKBPOOLMAX\n@@ -295,7 +294,7 @@ aoecmd_ata_rw(struct aoedev *d)\n \tskb = skb_clone(skb, GFP_ATOMIC);\n \tif (skb) {\n \t\tif (d->sendq_hd)\n-\t\t\td->sendq_tl->next = skb;\n+\t\t\td->sendq_tl->frag_next = skb;\n \t\telse\n \t\t\td->sendq_hd = skb;\n \t\td->sendq_tl = skb;\n@@ -342,7 +341,7 @@ aoecmd_cfg_pkts(ushort aoemajor, unsigned char aoeminor, struct sk_buff **tail)\n \t\th->minor = aoeminor;\n 
\t\th->cmd = AOECMD_CFG;\n \n-\t\tskb->next = sl;\n+\t\tskb->frag_next = sl;\n \t\tsl = skb;\n cont:\n \t\tdev_put(ifp);\n@@ -407,7 +406,7 @@ resend(struct aoedev *d, struct aoetgt *t, struct frame *f)\n \tif (skb == NULL)\n \t\treturn;\n \tif (d->sendq_hd)\n-\t\td->sendq_tl->next = skb;\n+\t\td->sendq_tl->frag_next = skb;\n \telse\n \t\td->sendq_hd = skb;\n \td->sendq_tl = skb;\ndiff --git a/drivers/block/aoe/aoedev.c b/drivers/block/aoe/aoedev.c\nindex a1d813a..c75c4f9 100644\n--- a/drivers/block/aoe/aoedev.c\n+++ b/drivers/block/aoe/aoedev.c\n@@ -191,8 +191,8 @@ skbpoolfree(struct aoedev *d)\n \tstruct sk_buff *skb;\n \n \twhile ((skb = d->skbpool_hd)) {\n-\t\td->skbpool_hd = skb->next;\n-\t\tskb->next = NULL;\n+\t\td->skbpool_hd = skb->frag_next;\n+\t\tskb->frag_next = NULL;\n \t\tskbfree(skb);\n \t}\n \td->skbpool_tl = NULL;\ndiff --git a/drivers/block/aoe/aoenet.c b/drivers/block/aoe/aoenet.c\nindex 0c81ca7..f774abf 100644\n--- a/drivers/block/aoe/aoenet.c\n+++ b/drivers/block/aoe/aoenet.c\n@@ -100,8 +100,8 @@ aoenet_xmit(struct sk_buff *sl)\n \tstruct sk_buff *skb;\n \n \twhile ((skb = sl)) {\n-\t\tsl = sl->next;\n-\t\tskb->next = skb->prev = NULL;\n+\t\tsl = sl->frag_next;\n+\t\tskb->frag_next = NULL;\n \t\tdev_queue_xmit(skb);\n \t}\n }\ndiff --git a/drivers/bluetooth/hci_bcsp.c b/drivers/bluetooth/hci_bcsp.c\nindex 4d37bb3..575eebb 100644\n--- a/drivers/bluetooth/hci_bcsp.c\n+++ b/drivers/bluetooth/hci_bcsp.c\n@@ -353,7 +353,7 @@ static int bcsp_flush(struct hci_uart *hu)\n static void bcsp_pkt_cull(struct bcsp_struct *bcsp)\n {\n \tunsigned long flags;\n-\tstruct sk_buff *skb;\n+\tstruct sk_buff *skb, *n;\n \tint i, pkts_to_be_removed;\n \tu8 seqno;\n \n@@ -375,14 +375,13 @@ static void bcsp_pkt_cull(struct bcsp_struct *bcsp)\n \tBT_DBG(\"Removing %u pkts out of %u, up to seqno %u\",\n \t\tpkts_to_be_removed, bcsp->unack.qlen, (seqno - 1) & 0x07);\n \n-\tfor (i = 0, skb = ((struct sk_buff *) &bcsp->unack)->next; i < pkts_to_be_removed\n-\t\t\t&& skb != (struct sk_buff *) &bcsp->unack; i++) {\n-\t\tstruct sk_buff *nskb;\n+\ti = 0;\n+\tlist_for_each_entry_safe(skb, n, &bcsp->unack.list, list) {\n+\t\tif (i++ >= pkts_to_be_removed)\n+\t\t\tbreak;\n \n-\t\tnskb = skb->next;\n \t\t__skb_unlink(skb, &bcsp->unack);\n \t\tkfree_skb(skb);\n-\t\tskb = nskb;\n \t}\n \n \tif (bcsp->unack.qlen == 0)\ndiff --git a/drivers/isdn/i4l/isdn_ppp.c b/drivers/isdn/i4l/isdn_ppp.c\nindex 127cfda..501749d 100644\n--- a/drivers/isdn/i4l/isdn_ppp.c\n+++ b/drivers/isdn/i4l/isdn_ppp.c\n@@ -1533,8 +1533,10 @@ static int isdn_ppp_mp_bundle_array_init(void)\n \tint sz = ISDN_MAX_CHANNELS*sizeof(ippp_bundle);\n \tif( (isdn_ppp_bundle_arr = kzalloc(sz, GFP_KERNEL)) == NULL )\n \t\treturn -ENOMEM;\n-\tfor( i = 0; i < ISDN_MAX_CHANNELS; i++ )\n+\tfor( i = 0; i < ISDN_MAX_CHANNELS; i++ ) {\n \t\tspin_lock_init(&isdn_ppp_bundle_arr[i].lock);\n+\t\tINIT_LIST_HEAD(&isdn_ppp_bundle_arr[i].frags);\n+\t}\n \treturn 0;\n }\n \n@@ -1567,7 +1569,7 @@ static int isdn_ppp_mp_init( isdn_net_local * lp, ippp_bundle * add_to )\n \t\tif ((lp->netdev->pb = isdn_ppp_mp_bundle_alloc()) == NULL)\n \t\t\treturn -ENOMEM;\n \t\tlp->next = lp->last = lp;\t/* nobody else in a queue */\n-\t\tlp->netdev->pb->frags = NULL;\n+\t\tINIT_LIST_HEAD(&lp->netdev->pb->frags);\n \t\tlp->netdev->pb->frames = 0;\n \t\tlp->netdev->pb->seq = UINT_MAX;\n \t}\n@@ -1579,8 +1581,7 @@ static int isdn_ppp_mp_init( isdn_net_local * lp, ippp_bundle * add_to )\n \n static u32 isdn_ppp_mp_get_seq( int short_seq, \n \t\t\t\t\tstruct sk_buff * skb, u32 last_seq 
);\n-static struct sk_buff * isdn_ppp_mp_discard( ippp_bundle * mp,\n-\t\t\tstruct sk_buff * from, struct sk_buff * to );\n+static void isdn_ppp_mp_discard(ippp_bundle *mp, struct sk_buff *from, struct sk_buff *to);\n static void isdn_ppp_mp_reassembly( isdn_net_dev * net_dev, isdn_net_local * lp,\n \t\t\t\tstruct sk_buff * from, struct sk_buff * to );\n static void isdn_ppp_mp_free_skb( ippp_bundle * mp, struct sk_buff * skb );\n@@ -1656,10 +1657,13 @@ static void isdn_ppp_mp_receive(isdn_net_dev * net_dev, isdn_net_local * lp,\n \tnewfrag = skb;\n \n   \t/* if this new fragment is before the first one, then enqueue it now. */\n-  \tif ((frag = mp->frags) == NULL || MP_LT(newseq, MP_SEQ(frag))) {\n-\t\tnewfrag->next = frag;\n-    \t\tmp->frags = frag = newfrag;\n-    \t\tnewfrag = NULL;\n+\tfrag = NULL;\n+\tif (!list_empty(&mp->frags))\n+\t\tfrag = list_entry(mp->frags.next, struct sk_buff, list);\n+  \tif (!frag || MP_LT(newseq, MP_SEQ(frag))) {\n+\t\tlist_add(&newfrag->list, &mp->frags);\n+\t\tfrag = newfrag;\n+\t\tnewfrag = NULL;\n   \t}\n \n   \tstart = MP_FLAGS(frag) & MP_BEGIN_FRAG &&\n@@ -1690,7 +1694,10 @@ static void isdn_ppp_mp_receive(isdn_net_dev * net_dev, isdn_net_local * lp,\n   \twhile (start != NULL || newfrag != NULL) {\n \n     \t\tthisseq = MP_SEQ(frag);\n-    \t\tnextf = frag->next;\n+\t\tnextf = NULL;\n+\t\tif (frag->list.next != &mp->frags)\n+\t\t\tnextf = list_entry(frag->list.next,\n+\t\t\t\t\t   struct sk_buff, list);\n \n     \t\t/* drop any duplicate fragments */\n     \t\tif (newfrag != NULL && thisseq == newseq) {\n@@ -1701,8 +1708,8 @@ static void isdn_ppp_mp_receive(isdn_net_dev * net_dev, isdn_net_local * lp,\n     \t\t/* insert new fragment before next element if possible. */\n     \t\tif (newfrag != NULL && (nextf == NULL || \n \t\t\t\t\t\tMP_LT(newseq, MP_SEQ(nextf)))) {\n-      \t\t\tnewfrag->next = nextf;\n-      \t\t\tfrag->next = nextf = newfrag;\n+\t\t\tlist_add_tail(&newfrag->list, &nextf->list);\n+\t\t\tnextf = newfrag;\n       \t\t\tnewfrag = NULL;\n     \t\t}\n \n@@ -1713,8 +1720,13 @@ static void isdn_ppp_mp_receive(isdn_net_dev * net_dev, isdn_net_local * lp,\n \t\t\t\t      \"BEGIN flag with no prior END\", thisseq);\n \t\t\t\tstats->seqerrs++;\n \t\t\t\tstats->frame_drops++;\n-\t\t\t\tstart = isdn_ppp_mp_discard(mp, start,frag);\n-\t\t\t\tnextf = frag->next;\n+\t\t\t\tstart = frag;\n+\t\t\t\tisdn_ppp_mp_discard(mp, start, frag);\n+\n+\t\t\t\tnextf = NULL;\n+\t\t\t\tif (frag->list.next != &mp->frags)\n+\t\t\t\t\tnextf = list_entry(frag->list.next,\n+\t\t\t\t\t\t\t   struct sk_buff, list);\n       \t\t\t}\n     \t\t} else if (MP_LE(thisseq, minseq)) {\t\t\n       \t\t\tif (MP_FLAGS(frag) & MP_BEGIN_FRAG)\n@@ -1722,8 +1734,7 @@ static void isdn_ppp_mp_receive(isdn_net_dev * net_dev, isdn_net_local * lp,\n       \t\t\telse {\n \t\t\t\tif (MP_FLAGS(frag) & MP_END_FRAG)\n \t  \t\t\t\tstats->frame_drops++;\n-\t\t\t\tif( mp->frags == frag )\n-\t\t\t\t\tmp->frags = nextf;\t\n+\t\t\t\tlist_del(&frag->list);\n \t\t\t\tisdn_ppp_mp_free_skb(mp, frag);\n \t\t\t\tfrag = nextf;\n \t\t\t\tcontinue;\n@@ -1741,8 +1752,6 @@ static void isdn_ppp_mp_receive(isdn_net_dev * net_dev, isdn_net_local * lp,\n       \n       \t\t\tstart = NULL;\n       \t\t\tfrag = NULL;\n-\n-      \t\t\tmp->frags = nextf;\n     \t\t}\n \n \t\t/* check if need to update start pointer: if we just\n@@ -1782,7 +1791,7 @@ static void isdn_ppp_mp_receive(isdn_net_dev * net_dev, isdn_net_local * lp,\n \t\t\t \t * discard all the frames below low watermark \n \t\t\t\t * and start over */\n 
\t\t\t\tstats->frame_drops++;\n-\t\t\t\tmp->frags = isdn_ppp_mp_discard(mp,start,nextf);\n+\t\t\t\tisdn_ppp_mp_discard(mp, start, nextf);\n \t\t\t}\n \t\t\t/* break in the sequence, no reassembly */\n       \t\t\tstart = NULL;\n@@ -1791,32 +1800,29 @@ static void isdn_ppp_mp_receive(isdn_net_dev * net_dev, isdn_net_local * lp,\n     \t\tfrag = nextf;\n   \t}\t/* while -- main loop */\n \t\n-  \tif (mp->frags == NULL)\n-    \t\tmp->frags = frag;\n+\tif (list_empty(&mp->frags))\n+\t\tlist_add(&frag->list, &mp->frags);\n \t\t\n \t/* rather straighforward way to deal with (not very) possible \n \t * queue overflow */\n \tif (mp->frames > MP_MAX_QUEUE_LEN) {\n \t\tstats->overflows++;\n \t\twhile (mp->frames > MP_MAX_QUEUE_LEN) {\n-\t\t\tfrag = mp->frags->next;\n-\t\t\tisdn_ppp_mp_free_skb(mp, mp->frags);\n-\t\t\tmp->frags = frag;\n+\t\t\tfrag = list_entry(mp->frags.next,\n+\t\t\t\t\t  struct sk_buff, list);\n+\t\t\tisdn_ppp_mp_free_skb(mp, frag);\n \t\t}\n \t}\n \tspin_unlock_irqrestore(&mp->lock, flags);\n }\n \n-static void isdn_ppp_mp_cleanup( isdn_net_local * lp )\n+static void isdn_ppp_mp_cleanup(isdn_net_local *lp)\n {\n-\tstruct sk_buff * frag = lp->netdev->pb->frags;\n-\tstruct sk_buff * nextfrag;\n-    \twhile( frag ) {\n-\t\tnextfrag = frag->next;\n-\t\tisdn_ppp_mp_free_skb(lp->netdev->pb, frag);\n-\t\tfrag = nextfrag;\n-\t}\n-\tlp->netdev->pb->frags = NULL;\n+\tippp_bundle *mp = lp->netdev->pb;\n+\tstruct sk_buff *skb, *n;\n+\n+\tlist_for_each_entry_safe(skb, n, &mp->frags, list)\n+\t\tisdn_ppp_mp_free_skb(lp->netdev->pb, skb);\n }\n \n static u32 isdn_ppp_mp_get_seq( int short_seq, \n@@ -1853,16 +1859,17 @@ static u32 isdn_ppp_mp_get_seq( int short_seq,\n \treturn seq;\n }\n \n-struct sk_buff * isdn_ppp_mp_discard( ippp_bundle * mp,\n-\t\t\tstruct sk_buff * from, struct sk_buff * to )\n+void isdn_ppp_mp_discard(ippp_bundle * mp, struct sk_buff *from, struct sk_buff *to)\n {\n-\tif( from )\n-\t\twhile (from != to) {\n-\t  \t\tstruct sk_buff * next = from->next;\n+\tif (!from) {\n+\t\tstruct sk_buff *n;\n+\n+\t\tlist_for_each_entry_safe_from(from, n, &mp->frags, list) {\n+\t\t\tif (from == to)\n+\t\t\t\tbreak;\n \t\t\tisdn_ppp_mp_free_skb(mp, from);\n-\t  \t\tfrom = next;\n \t\t}\n-\treturn from;\n+\t}\n }\n \n void isdn_ppp_mp_reassembly( isdn_net_dev * net_dev, isdn_net_local * lp,\n@@ -1889,9 +1896,13 @@ void isdn_ppp_mp_reassembly( isdn_net_dev * net_dev, isdn_net_local * lp,\n \t\tstruct sk_buff * frag;\n \t\tint n;\n \n-\t\tfor(tot_len=n=0, frag=from; frag != to; frag=frag->next, n++)\n+\t\ttot_len = n = 0;\n+\t\tfrag = from;\n+\t\tlist_for_each_entry_from(frag, &mp->frags, list) {\n+\t\t\tif (frag == to)\n+\t\t\t\tbreak;\n \t\t\ttot_len += frag->len - MP_HEADER_LEN;\n-\n+\t\t}\n \t\tif( ippp_table[lp->ppp_slot]->debug & 0x40 )\n \t\t\tprintk(KERN_DEBUG\"isdn_mppp: reassembling frames %d \"\n \t\t\t\t\"to %d, len %d\\n\", MP_SEQ(from), \n@@ -1903,15 +1914,17 @@ void isdn_ppp_mp_reassembly( isdn_net_dev * net_dev, isdn_net_local * lp,\n \t\t\treturn;\n \t\t}\n \n-\t\twhile( from != to ) {\n-\t\t\tunsigned int len = from->len - MP_HEADER_LEN;\n+\t\tlist_for_each_entry_safe_from(from, frag, &mp->frags, list) {\n+\t\t\tunsigned int len;\n+\n+\t\t\tif (from == to)\n+\t\t\t\tbreak;\n \n+\t\t\tlen = from->len - MP_HEADER_LEN;\n \t\t\tskb_copy_from_linear_data_offset(from, MP_HEADER_LEN,\n \t\t\t\t\t\t\t skb_put(skb,len),\n \t\t\t\t\t\t\t len);\n-\t\t\tfrag = from->next;\n \t\t\tisdn_ppp_mp_free_skb(mp, from);\n-\t\t\tfrom = frag; \n \t\t}\n \t}\n    \tproto = 
isdn_ppp_strip_proto(skb);\n@@ -1920,6 +1933,7 @@ void isdn_ppp_mp_reassembly( isdn_net_dev * net_dev, isdn_net_local * lp,\n \n static void isdn_ppp_mp_free_skb(ippp_bundle * mp, struct sk_buff * skb)\n {\n+\tlist_del(&skb->list);\n \tdev_kfree_skb(skb);\n \tmp->frames--;\n }\ndiff --git a/drivers/net/cassini.c b/drivers/net/cassini.c\nindex f1936d5..40ff6a9 100644\n--- a/drivers/net/cassini.c\n+++ b/drivers/net/cassini.c\n@@ -2182,7 +2182,7 @@ static inline void cas_rx_flow_pkt(struct cas *cp, const u64 *words,\n \t * do any additional locking here. stick the buffer\n \t * at the end.\n \t */\n-\t__skb_insert(skb, flow->prev, (struct sk_buff *) flow, flow);\n+\t__skb_queue_tail(flow, skb);\n \tif (words[0] & RX_COMP1_RELEASE_FLOW) {\n \t\twhile ((skb = __skb_dequeue(flow))) {\n \t\t\tcas_skb_release(skb);\ndiff --git a/drivers/net/cxgb3/adapter.h b/drivers/net/cxgb3/adapter.h\nindex 2711404..06aabf4 100644\n--- a/drivers/net/cxgb3/adapter.h\n+++ b/drivers/net/cxgb3/adapter.h\n@@ -124,8 +124,7 @@ struct sge_rspq {\t\t/* state for an SGE response queue */\n \tdma_addr_t phys_addr;\t/* physical address of the ring */\n \tunsigned int cntxt_id;\t/* SGE context id for the response q */\n \tspinlock_t lock;\t/* guards response processing */\n-\tstruct sk_buff *rx_head;\t/* offload packet receive queue head */\n-\tstruct sk_buff *rx_tail;\t/* offload packet receive queue tail */\n+\tstruct list_head rx_list;\n \tstruct sk_buff *pg_skb; /* used to build frag list in napi handler */\n \n \tunsigned long offload_pkts;\ndiff --git a/drivers/net/cxgb3/l2t.c b/drivers/net/cxgb3/l2t.c\nindex 825e510..b3498e8 100644\n--- a/drivers/net/cxgb3/l2t.c\n+++ b/drivers/net/cxgb3/l2t.c\n@@ -86,6 +86,7 @@ static int setup_l2e_send_pending(struct t3cdev *dev, struct sk_buff *skb,\n \t\t\t\t  struct l2t_entry *e)\n {\n \tstruct cpl_l2t_write_req *req;\n+\tstruct sk_buff *n;\n \n \tif (!skb) {\n \t\tskb = alloc_skb(sizeof(*req), GFP_ATOMIC);\n@@ -103,13 +104,10 @@ static int setup_l2e_send_pending(struct t3cdev *dev, struct sk_buff *skb,\n \tmemcpy(req->dst_mac, e->dmac, sizeof(req->dst_mac));\n \tskb->priority = CPL_PRIORITY_CONTROL;\n \tcxgb3_ofld_send(dev, skb);\n-\twhile (e->arpq_head) {\n-\t\tskb = e->arpq_head;\n-\t\te->arpq_head = skb->next;\n-\t\tskb->next = NULL;\n+\tlist_for_each_entry_safe(skb, n, &e->arpq, list) {\n+\t\tlist_del(&skb->list);\n \t\tcxgb3_ofld_send(dev, skb);\n \t}\n-\te->arpq_tail = NULL;\n \te->state = L2T_STATE_VALID;\n \n \treturn 0;\n@@ -121,12 +119,7 @@ static int setup_l2e_send_pending(struct t3cdev *dev, struct sk_buff *skb,\n  */\n static inline void arpq_enqueue(struct l2t_entry *e, struct sk_buff *skb)\n {\n-\tskb->next = NULL;\n-\tif (e->arpq_head)\n-\t\te->arpq_tail->next = skb;\n-\telse\n-\t\te->arpq_head = skb;\n-\te->arpq_tail = skb;\n+\tlist_add_tail(&skb->list, &e->arpq);\n }\n \n int t3_l2t_send_slow(struct t3cdev *dev, struct sk_buff *skb,\n@@ -167,7 +160,7 @@ again:\n \t\t\t\tbreak;\n \n \t\t\tspin_lock_bh(&e->lock);\n-\t\t\tif (e->arpq_head)\n+\t\t\tif (!list_empty(&e->arpq))\n \t\t\t\tsetup_l2e_send_pending(dev, skb, e);\n \t\t\telse\t/* we lost the race */\n \t\t\t\t__kfree_skb(skb);\n@@ -357,14 +350,13 @@ EXPORT_SYMBOL(t3_l2t_get);\n  * XXX: maybe we should abandon the latter behavior and just require a failure\n  * handler.\n  */\n-static void handle_failed_resolution(struct t3cdev *dev, struct sk_buff *arpq)\n+static void handle_failed_resolution(struct t3cdev *dev, struct list_head *list)\n {\n-\twhile (arpq) {\n-\t\tstruct sk_buff *skb = arpq;\n+\tstruct 
sk_buff *skb, *n;\n+\tlist_for_each_entry_safe(skb, n, list, list) {\n \t\tstruct l2t_skb_cb *cb = L2T_SKB_CB(skb);\n \n-\t\tarpq = skb->next;\n-\t\tskb->next = NULL;\n+\t\tlist_del(&skb->list);\n \t\tif (cb->arp_failure_handler)\n \t\t\tcb->arp_failure_handler(dev, skb);\n \t\telse\n@@ -379,7 +371,7 @@ static void handle_failed_resolution(struct t3cdev *dev, struct sk_buff *arpq)\n void t3_l2t_update(struct t3cdev *dev, struct neighbour *neigh)\n {\n \tstruct l2t_entry *e;\n-\tstruct sk_buff *arpq = NULL;\n+\tLIST_HEAD(arpq);\n \tstruct l2t_data *d = L2DATA(dev);\n \tu32 addr = *(u32 *) neigh->primary_key;\n \tint ifidx = neigh->dev->ifindex;\n@@ -402,8 +394,7 @@ found:\n \n \t\tif (e->state == L2T_STATE_RESOLVING) {\n \t\t\tif (neigh->nud_state & NUD_FAILED) {\n-\t\t\t\tarpq = e->arpq_head;\n-\t\t\t\te->arpq_head = e->arpq_tail = NULL;\n+\t\t\t\tlist_splice_init(&e->arpq, &arpq);\n \t\t\t} else if (neigh->nud_state & (NUD_CONNECTED|NUD_STALE))\n \t\t\t\tsetup_l2e_send_pending(dev, NULL, e);\n \t\t} else {\n@@ -415,8 +406,8 @@ found:\n \t}\n \tspin_unlock_bh(&e->lock);\n \n-\tif (arpq)\n-\t\thandle_failed_resolution(dev, arpq);\n+\tif (!list_empty(&arpq))\n+\t\thandle_failed_resolution(dev, &arpq);\n }\n \n struct l2t_data *t3_init_l2t(unsigned int l2t_capacity)\n@@ -436,6 +427,7 @@ struct l2t_data *t3_init_l2t(unsigned int l2t_capacity)\n \tfor (i = 0; i < l2t_capacity; ++i) {\n \t\td->l2tab[i].idx = i;\n \t\td->l2tab[i].state = L2T_STATE_UNUSED;\n+\t\tINIT_LIST_HEAD(&d->l2tab[i].arpq);\n \t\tspin_lock_init(&d->l2tab[i].lock);\n \t\tatomic_set(&d->l2tab[i].refcnt, 0);\n \t}\ndiff --git a/drivers/net/cxgb3/l2t.h b/drivers/net/cxgb3/l2t.h\nindex d790013..1b4b390 100644\n--- a/drivers/net/cxgb3/l2t.h\n+++ b/drivers/net/cxgb3/l2t.h\n@@ -64,8 +64,7 @@ struct l2t_entry {\n \tstruct neighbour *neigh;\t/* associated neighbour */\n \tstruct l2t_entry *first;\t/* start of hash chain */\n \tstruct l2t_entry *next;\t/* next l2t_entry on chain */\n-\tstruct sk_buff *arpq_head;\t/* queue of packets awaiting resolution */\n-\tstruct sk_buff *arpq_tail;\n+\tstruct list_head arpq;\t/* queue of packets awaiting resolution */\n \tspinlock_t lock;\n \tatomic_t refcnt;\t/* entry reference count */\n \tu8 dmac[6];\t\t/* neighbour's MAC address */\ndiff --git a/drivers/net/cxgb3/sge.c b/drivers/net/cxgb3/sge.c\nindex 1b0861d..bbd0be2 100644\n--- a/drivers/net/cxgb3/sge.c\n+++ b/drivers/net/cxgb3/sge.c\n@@ -1704,16 +1704,14 @@ int t3_offload_tx(struct t3cdev *tdev, struct sk_buff *skb)\n  */\n static inline void offload_enqueue(struct sge_rspq *q, struct sk_buff *skb)\n {\n-\tskb->next = skb->prev = NULL;\n-\tif (q->rx_tail)\n-\t\tq->rx_tail->next = skb;\n-\telse {\n+\tint was_empty = list_empty(&q->rx_list);\n+\n+\tlist_add_tail(&skb->list, &q->rx_list);\n+\tif (was_empty) {\n \t\tstruct sge_qset *qs = rspq_to_qset(q);\n \n \t\tnapi_schedule(&qs->napi);\n-\t\tq->rx_head = skb;\n \t}\n-\tq->rx_tail = skb;\n }\n \n /**\n@@ -1754,39 +1752,40 @@ static int ofld_poll(struct napi_struct *napi, int budget)\n \tint work_done = 0;\n \n \twhile (work_done < budget) {\n-\t\tstruct sk_buff *head, *tail, *skbs[RX_BUNDLE_SIZE];\n+\t\tstruct sk_buff *skb, *n, *skbs[RX_BUNDLE_SIZE];\n+\t\tLIST_HEAD(list);\n \t\tint ngathered;\n \n \t\tspin_lock_irq(&q->lock);\n-\t\thead = q->rx_head;\n-\t\tif (!head) {\n+\t\tlist_splice_init(&q->rx_list, &list);\n+\t\tif (list_empty(&list)) {\n \t\t\tnapi_complete(napi);\n \t\t\tspin_unlock_irq(&q->lock);\n \t\t\treturn work_done;\n \t\t}\n-\n-\t\ttail = q->rx_tail;\n-\t\tq->rx_head = 
q->rx_tail = NULL;\n \t\tspin_unlock_irq(&q->lock);\n \n-\t\tfor (ngathered = 0; work_done < budget && head; work_done++) {\n-\t\t\tprefetch(head->data);\n-\t\t\tskbs[ngathered] = head;\n-\t\t\thead = head->next;\n-\t\t\tskbs[ngathered]->next = NULL;\n-\t\t\tif (++ngathered == RX_BUNDLE_SIZE) {\n+\t\tngathered = 0;\n+\t\tlist_for_each_entry_safe(skb, n, &list, list) {\n+\t\t\tprefetch(skb->data);\n+\n+\t\t\tif (work_done >= budget)\n+\t\t\t\tbreak;\n+\n+\t\t\twork_done++;\n+\t\t\tlist_del(&skb->list);\n+\t\t\tskbs[ngathered++] = skb;\n+\t\t\tif (ngathered == RX_BUNDLE_SIZE) {\n \t\t\t\tq->offload_bundles++;\n \t\t\t\tadapter->tdev.recv(&adapter->tdev, skbs,\n \t\t\t\t\t\t   ngathered);\n \t\t\t\tngathered = 0;\n \t\t\t}\n \t\t}\n-\t\tif (head) {\t/* splice remaining packets back onto Rx queue */\n+\n+\t\tif (!list_empty(&list)) { /* splice remaining packets back onto Rx queue */\n \t\t\tspin_lock_irq(&q->lock);\n-\t\t\ttail->next = q->rx_head;\n-\t\t\tif (!q->rx_head)\n-\t\t\t\tq->rx_tail = tail;\n-\t\t\tq->rx_head = head;\n+\t\t\tlist_splice(&list, &q->rx_list);\n \t\t\tspin_unlock_irq(&q->lock);\n \t\t}\n \t\tdeliver_partial_bundle(&adapter->tdev, q, skbs, ngathered);\n@@ -2934,6 +2933,7 @@ int t3_sge_alloc_qset(struct adapter *adapter, unsigned int id, int nports,\n \tq->rspq.gen = 1;\n \tq->rspq.size = p->rspq_size;\n \tspin_lock_init(&q->rspq.lock);\n+\tINIT_LIST_HEAD(&q->rspq.rx_list);\n \n \tq->txq[TXQ_ETH].stop_thres = nports *\n \t    flits_to_desc(sgl_len(MAX_SKB_FRAGS + 1) + 3);\ndiff --git a/drivers/net/myri10ge/myri10ge.c b/drivers/net/myri10ge/myri10ge.c\nindex d6524db..7bf343e 100644\n--- a/drivers/net/myri10ge/myri10ge.c\n+++ b/drivers/net/myri10ge/myri10ge.c\n@@ -2851,15 +2851,15 @@ static int myri10ge_sw_tso(struct sk_buff *skb, struct net_device *dev)\n \n \twhile (segs) {\n \t\tcurr = segs;\n-\t\tsegs = segs->next;\n-\t\tcurr->next = NULL;\n+\t\tsegs = segs->frag_next;\n+\t\tcurr->frag_next = NULL;\n \t\tstatus = myri10ge_xmit(curr, dev);\n \t\tif (status != 0) {\n \t\t\tdev_kfree_skb_any(curr);\n \t\t\tif (segs != NULL) {\n \t\t\t\tcurr = segs;\n-\t\t\t\tsegs = segs->next;\n-\t\t\t\tcurr->next = NULL;\n+\t\t\t\tsegs = segs->frag_next;\n+\t\t\t\tcurr->frag_next = NULL;\n \t\t\t\tdev_kfree_skb_any(segs);\n \t\t\t}\n \t\t\tgoto drop;\ndiff --git a/drivers/net/ppp_generic.c b/drivers/net/ppp_generic.c\nindex ddccc07..47a23d9 100644\n--- a/drivers/net/ppp_generic.c\n+++ b/drivers/net/ppp_generic.c\n@@ -1833,9 +1833,11 @@ ppp_receive_mp_frame(struct ppp *ppp, struct sk_buff *skb, struct channel *pch)\n \n \t/* If the queue is getting long, don't wait any longer for packets\n \t   before the start of the queue. */\n-\tif (skb_queue_len(&ppp->mrq) >= PPP_MP_MAX_QLEN\n-\t    && seq_before(ppp->minseq, ppp->mrq.next->sequence))\n-\t\tppp->minseq = ppp->mrq.next->sequence;\n+\tif (skb_queue_len(&ppp->mrq) >= PPP_MP_MAX_QLEN) {\n+\t\tstruct sk_buff *skb = skb_peek(&ppp->mrq);\n+\t\tif (seq_before(ppp->minseq, skb->sequence))\n+\t\t\tppp->minseq = skb->sequence;\n+\t}\n \n \t/* Pull completed packets off the queue and receive them. */\n \twhile ((skb = ppp_mp_reconstruct(ppp)))\n@@ -1861,10 +1863,11 @@ ppp_mp_insert(struct ppp *ppp, struct sk_buff *skb)\n \n \t/* N.B. we don't need to lock the list lock because we have the\n \t   ppp unit receive-side lock. 
*/\n-\tfor (p = list->next; p != (struct sk_buff *)list; p = p->next)\n+\tlist_for_each_entry(p, &list->list, list) {\n \t\tif (seq_before(seq, p->sequence))\n \t\t\tbreak;\n-\t__skb_insert(skb, p->prev, p, list);\n+\t}\n+\t__skb_insert(skb, p, list);\n }\n \n /*\n@@ -1886,10 +1889,10 @@ ppp_mp_reconstruct(struct ppp *ppp)\n \n \tif (ppp->mrru == 0)\t/* do nothing until mrru is set */\n \t\treturn NULL;\n-\thead = list->next;\n+\thead = list_entry(list->list.next, struct sk_buff, list);\n \ttail = NULL;\n-\tfor (p = head; p != (struct sk_buff *) list; p = next) {\n-\t\tnext = p->next;\n+\tfor (p = head; &p->list != &list->list; p = next) {\n+\t\tnext = list_entry(p->list.next, struct sk_buff, list);\n \t\tif (seq_before(p->sequence, seq)) {\n \t\t\t/* this can't happen, anyway ignore the skb */\n \t\t\tprintk(KERN_ERR \"ppp_mp_reconstruct bad seq %u < %u\\n\",\n@@ -1974,15 +1977,16 @@ ppp_mp_reconstruct(struct ppp *ppp)\n \n \t\tif (head != tail)\n \t\t\t/* copy to a single skb */\n-\t\t\tfor (p = head; p != tail->next; p = p->next)\n+\t\t\tfor (p = head; &p->list != tail->list.next;\n+\t\t\t     p = list_entry(p->list.next, struct sk_buff, list))\n \t\t\t\tskb_copy_bits(p, 0, skb_put(skb, p->len), p->len);\n \t\tppp->nextseq = tail->sequence + 1;\n-\t\thead = tail->next;\n+\t\thead = list_entry(tail->list.next, struct sk_buff, list);\n \t}\n \n \t/* Discard all the skbuffs that we have copied the data out of\n \t   or that we can't use. */\n-\twhile ((p = list->next) != head) {\n+\twhile ((p = list_entry(list->list.next, struct sk_buff, list)) != head) {\n \t\t__skb_unlink(p, list);\n \t\tkfree_skb(p);\n \t}\ndiff --git a/drivers/net/pppol2tp.c b/drivers/net/pppol2tp.c\nindex ff175e8..09dfb10 100644\n--- a/drivers/net/pppol2tp.c\n+++ b/drivers/net/pppol2tp.c\n@@ -353,7 +353,7 @@ static void pppol2tp_recv_queue_skb(struct pppol2tp_session *session, struct sk_\n \tspin_lock_bh(&session->reorder_q.lock);\n \tskb_queue_walk_safe(&session->reorder_q, skbp, tmp) {\n \t\tif (PPPOL2TP_SKB_CB(skbp)->ns > ns) {\n-\t\t\t__skb_insert(skb, skbp->prev, skbp, &session->reorder_q);\n+\t\t\t__skb_insert(skb, skbp, &session->reorder_q);\n \t\t\tPRINTK(session->debug, PPPOL2TP_MSG_SEQ, KERN_DEBUG,\n \t\t\t       \"%s: pkt %hu, inserted before %hu, reorder_q len=%d\\n\",\n \t\t\t       session->name, ns, PPPOL2TP_SKB_CB(skbp)->ns,\ndiff --git a/drivers/net/s2io.c b/drivers/net/s2io.c\nindex 243db33..b410d62 100644\n--- a/drivers/net/s2io.c\n+++ b/drivers/net/s2io.c\n@@ -8616,7 +8616,7 @@ static void lro_append_pkt(struct s2io_nic *sp, struct lro *lro,\n \tfirst->data_len = lro->frags_len;\n \tskb_pull(skb, (skb->len - tcp_len));\n \tif (skb_shinfo(first)->frag_list)\n-\t\tlro->last_frag->next = skb;\n+\t\tlro->last_frag->frag_next = skb;\n \telse\n \t\tskb_shinfo(first)->frag_list = skb;\n \tfirst->truesize += skb->truesize;\ndiff --git a/drivers/net/tg3.c b/drivers/net/tg3.c\nindex 1239207..909d962 100644\n--- a/drivers/net/tg3.c\n+++ b/drivers/net/tg3.c\n@@ -4829,8 +4829,8 @@ static int tg3_tso_bug(struct tg3 *tp, struct sk_buff *skb)\n \n \tdo {\n \t\tnskb = segs;\n-\t\tsegs = segs->next;\n-\t\tnskb->next = NULL;\n+\t\tsegs = segs->frag_next;\n+\t\tnskb->frag_next = NULL;\n \t\ttg3_start_xmit_dma_bug(nskb, tp->dev);\n \t} while (segs);\n \ndiff --git a/drivers/net/tulip/de4x5.c b/drivers/net/tulip/de4x5.c\nindex 617ef41..0b9c68d 100644\n--- a/drivers/net/tulip/de4x5.c\n+++ b/drivers/net/tulip/de4x5.c\n@@ -3784,12 +3784,12 @@ de4x5_put_cache(struct net_device *dev, struct sk_buff *skb)\n     struct sk_buff 
*p;\n \n     if (lp->cache.skb) {\n-\tfor (p=lp->cache.skb; p->next; p=p->next);\n-\tp->next = skb;\n+\tfor (p=lp->cache.skb; p->frag_next; p=p->frag_next);\n+\tp->frag_next = skb;\n     } else {\n \tlp->cache.skb = skb;\n     }\n-    skb->next = NULL;\n+    skb->frag_next = NULL;\n \n     return;\n }\n@@ -3801,7 +3801,7 @@ de4x5_putb_cache(struct net_device *dev, struct sk_buff *skb)\n     struct sk_buff *p = lp->cache.skb;\n \n     lp->cache.skb = skb;\n-    skb->next = p;\n+    skb->frag_next = p;\n \n     return;\n }\n@@ -3813,8 +3813,8 @@ de4x5_get_cache(struct net_device *dev)\n     struct sk_buff *p = lp->cache.skb;\n \n     if (p) {\n-\tlp->cache.skb = p->next;\n-\tp->next = NULL;\n+\tlp->cache.skb = p->frag_next;\n+\tp->frag_next = NULL;\n     }\n \n     return p;\ndiff --git a/drivers/net/usb/usbnet.c b/drivers/net/usb/usbnet.c\nindex 8463efb..6d27f73 100644\n--- a/drivers/net/usb/usbnet.c\n+++ b/drivers/net/usb/usbnet.c\n@@ -512,14 +512,13 @@ static int unlink_urbs (struct usbnet *dev, struct sk_buff_head *q)\n \tint\t\t\tcount = 0;\n \n \tspin_lock_irqsave (&q->lock, flags);\n-\tfor (skb = q->next; skb != (struct sk_buff *) q; skb = skbnext) {\n+\tlist_for_each_entry_safe(skb, skbnext, &q->list, list) {\n \t\tstruct skb_data\t\t*entry;\n \t\tstruct urb\t\t*urb;\n \t\tint\t\t\tretval;\n \n \t\tentry = (struct skb_data *) skb->cb;\n \t\turb = entry->urb;\n-\t\tskbnext = skb->next;\n \n \t\t// during some PM-driven resume scenarios,\n \t\t// these (async) unlinks complete immediately\ndiff --git a/drivers/net/wireless/p54/p54common.c b/drivers/net/wireless/p54/p54common.c\nindex da51786..ea18adc 100644\n--- a/drivers/net/wireless/p54/p54common.c\n+++ b/drivers/net/wireless/p54/p54common.c\n@@ -546,15 +546,15 @@ static void p54_rx_frame_sent(struct ieee80211_hw *dev, struct sk_buff *skb)\n \tstruct p54_common *priv = dev->priv;\n \tstruct p54_control_hdr *hdr = (struct p54_control_hdr *) skb->data;\n \tstruct p54_frame_sent_hdr *payload = (struct p54_frame_sent_hdr *) hdr->data;\n-\tstruct sk_buff *entry = (struct sk_buff *) priv->tx_queue.next;\n \tu32 addr = le32_to_cpu(hdr->req_id) - priv->headroom;\n \tstruct memrecord *range = NULL;\n+\tstruct sk_buff *entry;\n \tu32 freed = 0;\n \tu32 last_addr = priv->rx_start;\n \tunsigned long flags;\n \n \tspin_lock_irqsave(&priv->tx_queue.lock, flags);\n-\twhile (entry != (struct sk_buff *)&priv->tx_queue) {\n+\tlist_for_each_entry(entry, &priv->tx_queue.list, list) {\n \t\tstruct ieee80211_tx_info *info = IEEE80211_SKB_CB(entry);\n \t\trange = (void *)info->driver_data;\n \t\tif (range->start_addr == addr) {\n@@ -562,11 +562,14 @@ static void p54_rx_frame_sent(struct ieee80211_hw *dev, struct sk_buff *skb)\n \t\t\tstruct p54_tx_control_allocdata *entry_data;\n \t\t\tint pad = 0;\n \n-\t\t\tif (entry->next != (struct sk_buff *)&priv->tx_queue) {\n+\t\t\tif (entry->list.next != &priv->tx_queue.list) {\n \t\t\t\tstruct ieee80211_tx_info *ni;\n+\t\t\t\tstruct sk_buff *next;\n \t\t\t\tstruct memrecord *mr;\n \n-\t\t\t\tni = IEEE80211_SKB_CB(entry->next);\n+\t\t\t\tnext = list_entry(entry->list.next,\n+\t\t\t\t\t\t  struct sk_buff, list);\n+\t\t\t\tni = IEEE80211_SKB_CB(next);\n \t\t\t\tmr = (struct memrecord *)ni->driver_data;\n \t\t\t\tfreed = mr->start_addr - last_addr;\n \t\t\t} else\n@@ -597,7 +600,6 @@ static void p54_rx_frame_sent(struct ieee80211_hw *dev, struct sk_buff *skb)\n \t\t\tgoto out;\n \t\t} else\n \t\t\tlast_addr = range->end_addr;\n-\t\tentry = entry->next;\n \t}\n \tspin_unlock_irqrestore(&priv->tx_queue.lock, flags);\n 
\n@@ -692,34 +694,31 @@ EXPORT_SYMBOL_GPL(p54_rx);\n static void p54_assign_address(struct ieee80211_hw *dev, struct sk_buff *skb,\n \t\t\t       struct p54_control_hdr *data, u32 len)\n {\n+\tstruct sk_buff *target_skb = NULL, *entry;\n \tstruct p54_common *priv = dev->priv;\n-\tstruct sk_buff *entry = priv->tx_queue.next;\n-\tstruct sk_buff *target_skb = NULL;\n \tu32 last_addr = priv->rx_start;\n \tu32 largest_hole = 0;\n \tu32 target_addr = priv->rx_start;\n \tunsigned long flags;\n-\tunsigned int left;\n \tlen = (len + priv->headroom + priv->tailroom + 3) & ~0x3;\n \n \tspin_lock_irqsave(&priv->tx_queue.lock, flags);\n-\tleft = skb_queue_len(&priv->tx_queue);\n-\twhile (left--) {\n+\tlist_for_each_entry(entry, &priv->tx_queue.list, list) {\n \t\tu32 hole_size;\n \t\tstruct ieee80211_tx_info *info = IEEE80211_SKB_CB(entry);\n \t\tstruct memrecord *range = (void *)info->driver_data;\n \t\thole_size = range->start_addr - last_addr;\n \t\tif (!target_skb && hole_size >= len) {\n-\t\t\ttarget_skb = entry->prev;\n+\t\t\ttarget_skb = list_entry(entry->list.prev,\n+\t\t\t\t\t\tstruct sk_buff, list);\n \t\t\thole_size -= len;\n \t\t\ttarget_addr = last_addr;\n \t\t}\n \t\tlargest_hole = max(largest_hole, hole_size);\n \t\tlast_addr = range->end_addr;\n-\t\tentry = entry->next;\n \t}\n \tif (!target_skb && priv->rx_end - last_addr >= len) {\n-\t\ttarget_skb = priv->tx_queue.prev;\n+\t\ttarget_skb = skb_peek_tail(&priv->tx_queue);\n \t\tlargest_hole = max(largest_hole, priv->rx_end - last_addr - len);\n \t\tif (!skb_queue_empty(&priv->tx_queue)) {\n \t\t\tstruct ieee80211_tx_info *info = IEEE80211_SKB_CB(target_skb);\ndiff --git a/drivers/net/wireless/rtl8187_dev.c b/drivers/net/wireless/rtl8187_dev.c\nindex 8a42bfa..13a58c2 100644\n--- a/drivers/net/wireless/rtl8187_dev.c\n+++ b/drivers/net/wireless/rtl8187_dev.c\n@@ -278,7 +278,7 @@ static void rtl8187_rx_cb(struct urb *urb)\n \tu32 quality;\n \n \tspin_lock(&priv->rx_queue.lock);\n-\tif (skb->next)\n+\tif (!list_empty(&skb->list))\n \t\t__skb_unlink(skb, &priv->rx_queue);\n \telse {\n \t\tspin_unlock(&priv->rx_queue.lock);\ndiff --git a/drivers/net/wireless/zd1211rw/zd_mac.c b/drivers/net/wireless/zd1211rw/zd_mac.c\nindex e019102..13fc601 100644\n--- a/drivers/net/wireless/zd1211rw/zd_mac.c\n+++ b/drivers/net/wireless/zd1211rw/zd_mac.c\n@@ -579,12 +579,11 @@ static int filter_ack(struct ieee80211_hw *hw, struct ieee80211_hdr *rx_hdr,\n \n \tq = &zd_hw_mac(hw)->ack_wait_queue;\n \tspin_lock_irqsave(&q->lock, flags);\n-\tfor (skb = q->next; skb != (struct sk_buff *)q; skb = skb->next) {\n+\tlist_for_each_entry(skb, &q->list, list) {\n \t\tstruct ieee80211_hdr *tx_hdr;\n \n \t\ttx_hdr = (struct ieee80211_hdr *)skb->data;\n-\t\tif (likely(!compare_ether_addr(tx_hdr->addr2, rx_hdr->addr1)))\n-\t\t{\n+\t\tif (likely(!compare_ether_addr(tx_hdr->addr2, rx_hdr->addr1))) {\n \t\t\t__skb_unlink(skb, q);\n \t\t\ttx_status(hw, skb, IEEE80211_TX_STAT_ACK, stats->signal, 1);\n \t\t\tgoto out;\ndiff --git a/drivers/usb/atm/usbatm.c b/drivers/usb/atm/usbatm.c\nindex 0722872..1adafda 100644\n--- a/drivers/usb/atm/usbatm.c\n+++ b/drivers/usb/atm/usbatm.c\n@@ -640,14 +640,13 @@ static void usbatm_cancel_send(struct usbatm_data *instance,\n \n \tatm_dbg(instance, \"%s entered\\n\", __func__);\n \tspin_lock_irq(&instance->sndqueue.lock);\n-\tfor (skb = instance->sndqueue.next, n = skb->next;\n-\t     skb != (struct sk_buff *)&instance->sndqueue;\n-\t     skb = n, n = skb->next)\n+\tlist_for_each_entry_safe(skb, n, &instance->sndqueue.list, list) {\n \t\tif 
(UDSL_SKB(skb)->atm.vcc == vcc) {\n \t\t\tatm_dbg(instance, \"%s: popping skb 0x%p\\n\", __func__, skb);\n \t\t\t__skb_unlink(skb, &instance->sndqueue);\n \t\t\tusbatm_pop(vcc, skb);\n \t\t}\n+\t}\n \tspin_unlock_irq(&instance->sndqueue.lock);\n \n \ttasklet_disable(&instance->tx_channel.tasklet);\ndiff --git a/include/linux/isdn_ppp.h b/include/linux/isdn_ppp.h\nindex 8687a7d..06c9063 100644\n--- a/include/linux/isdn_ppp.h\n+++ b/include/linux/isdn_ppp.h\n@@ -157,7 +157,7 @@ typedef struct {\n \n typedef struct {\n   int mp_mrru;                        /* unused                             */\n-  struct sk_buff * frags;\t/* fragments sl list -- use skb->next */\n+  struct list_head frags;\t/* fragments sl list */\n   long frames;\t\t\t/* number of frames in the frame list */\n   unsigned int seq;\t\t/* last processed packet seq #: any packets\n   \t\t\t\t * with smaller seq # will be dropped\ndiff --git a/include/linux/list.h b/include/linux/list.h\nindex 969f6e9..4e7b91b 100644\n--- a/include/linux/list.h\n+++ b/include/linux/list.h\n@@ -471,6 +471,18 @@ static inline void list_splice_tail_init(struct list_head *list,\n \t     pos = list_entry(pos->member.next, typeof(*pos), member))\n \n /**\n+ * list_for_each_entry_from_reverse\n+ * @pos:\tthe type * to use as a loop cursor.\n+ * @head:\tthe head for your list.\n+ * @member:\tthe name of the list_struct within the struct.\n+ *\n+ * Iterate over list of given type, continuing from current position.\n+ */\n+#define list_for_each_entry_from_reverse(pos, head, member) \t\t\\\n+\tfor (; prefetch(pos->member.prev), &pos->member != (head);\t\\\n+\t     pos = list_entry(pos->member.prev, typeof(*pos), member))\n+\n+/**\n  * list_for_each_entry_safe - iterate over list of given type safe against removal of list entry\n  * @pos:\tthe type * to use as a loop cursor.\n  * @n:\t\tanother type * to use as temporary storage\ndiff --git a/include/linux/skbuff.h b/include/linux/skbuff.h\nindex aa80ad9..489140c 100644\n--- a/include/linux/skbuff.h\n+++ b/include/linux/skbuff.h\n@@ -114,12 +114,9 @@ struct nf_bridge_info {\n #endif\n \n struct sk_buff_head {\n-\t/* These two members must be first. */\n-\tstruct sk_buff\t*next;\n-\tstruct sk_buff\t*prev;\n-\n-\t__u32\t\tqlen;\n-\tspinlock_t\tlock;\n+\tstruct list_head\tlist;\n+\t__u32\t\t\tqlen;\n+\tspinlock_t\t\tlock;\n };\n \n struct sk_buff;\n@@ -257,9 +254,10 @@ typedef unsigned char *sk_buff_data_t;\n  */\n \n struct sk_buff {\n-\t/* These two members must be first. 
*/\n-\tstruct sk_buff\t\t*next;\n-\tstruct sk_buff\t\t*prev;\n+\tunion {\n+\t\tstruct list_head\tlist;\n+\t\tstruct sk_buff\t\t*frag_next;\n+\t};\n \n \tstruct sock\t\t*sk;\n \tktime_t\t\t\ttstamp;\n@@ -469,7 +467,7 @@ static inline unsigned char *skb_end_pointer(const struct sk_buff *skb)\n  */\n static inline int skb_queue_empty(const struct sk_buff_head *list)\n {\n-\treturn list->next == (struct sk_buff *)list;\n+\treturn list_empty(&list->list);\n }\n \n /**\n@@ -622,10 +620,10 @@ static inline struct sk_buff *skb_unshare(struct sk_buff *skb,\n  */\n static inline struct sk_buff *skb_peek(struct sk_buff_head *list_)\n {\n-\tstruct sk_buff *list = ((struct sk_buff *)list_)->next;\n-\tif (list == (struct sk_buff *)list_)\n-\t\tlist = NULL;\n-\treturn list;\n+\tstruct list_head *list = &list_->list;\n+\tif (list_empty(list))\n+\t\treturn NULL;\n+\treturn list_entry(list->next, struct sk_buff, list);\n }\n \n /**\n@@ -643,10 +641,10 @@ static inline struct sk_buff *skb_peek(struct sk_buff_head *list_)\n  */\n static inline struct sk_buff *skb_peek_tail(struct sk_buff_head *list_)\n {\n-\tstruct sk_buff *list = ((struct sk_buff *)list_)->prev;\n-\tif (list == (struct sk_buff *)list_)\n-\t\tlist = NULL;\n-\treturn list;\n+\tstruct list_head *list = &list_->list;\n+\tif (list_empty(list))\n+\t\treturn NULL;\n+\treturn list_entry(list->prev, struct sk_buff, list);\n }\n \n /**\n@@ -670,8 +668,8 @@ static inline __u32 skb_queue_len(const struct sk_buff_head *list_)\n  */\n static inline void skb_queue_head_init(struct sk_buff_head *list)\n {\n+\tINIT_LIST_HEAD(&list->list);\n \tspin_lock_init(&list->lock);\n-\tlist->prev = list->next = (struct sk_buff *)list;\n \tlist->qlen = 0;\n }\n \n@@ -689,13 +687,10 @@ static inline void skb_queue_head_init_class(struct sk_buff_head *list,\n  *\tcan only be called with interrupts disabled.\n  */\n extern void        skb_insert(struct sk_buff *old, struct sk_buff *newsk, struct sk_buff_head *list);\n-static inline void __skb_insert(struct sk_buff *newsk,\n-\t\t\t\tstruct sk_buff *prev, struct sk_buff *next,\n+static inline void __skb_insert(struct sk_buff *newsk, struct sk_buff *next,\n \t\t\t\tstruct sk_buff_head *list)\n {\n-\tnewsk->next = next;\n-\tnewsk->prev = prev;\n-\tnext->prev  = prev->next = newsk;\n+\tlist_add_tail(&newsk->list, &next->list);\n \tlist->qlen++;\n }\n \n@@ -714,19 +709,13 @@ static inline void __skb_queue_after(struct sk_buff_head *list,\n \t\t\t\t     struct sk_buff *prev,\n \t\t\t\t     struct sk_buff *newsk)\n {\n-\t__skb_insert(newsk, prev, prev->next, list);\n+\tlist_add(&newsk->list, &prev->list);\n+\tlist->qlen++;\n }\n \n extern void skb_append(struct sk_buff *old, struct sk_buff *newsk,\n \t\t       struct sk_buff_head *list);\n \n-static inline void __skb_queue_before(struct sk_buff_head *list,\n-\t\t\t\t      struct sk_buff *next,\n-\t\t\t\t      struct sk_buff *newsk)\n-{\n-\t__skb_insert(newsk, next->prev, next, list);\n-}\n-\n /**\n  *\t__skb_queue_head - queue a buffer at the list head\n  *\t@list: list to use\n@@ -741,7 +730,8 @@ extern void skb_queue_head(struct sk_buff_head *list, struct sk_buff *newsk);\n static inline void __skb_queue_head(struct sk_buff_head *list,\n \t\t\t\t    struct sk_buff *newsk)\n {\n-\t__skb_queue_after(list, (struct sk_buff *)list, newsk);\n+\tlist_add(&newsk->list, &list->list);\n+\tlist->qlen++;\n }\n \n /**\n@@ -758,7 +748,8 @@ extern void skb_queue_tail(struct sk_buff_head *list, struct sk_buff *newsk);\n static inline void __skb_queue_tail(struct sk_buff_head *list,\n \t\t\t\t   
struct sk_buff *newsk)\n {\n-\t__skb_queue_before(list, (struct sk_buff *)list, newsk);\n+\tlist_add_tail(&newsk->list, &list->list);\n+\tlist->qlen++;\n }\n \n /*\n@@ -768,14 +759,9 @@ static inline void __skb_queue_tail(struct sk_buff_head *list,\n extern void\t   skb_unlink(struct sk_buff *skb, struct sk_buff_head *list);\n static inline void __skb_unlink(struct sk_buff *skb, struct sk_buff_head *list)\n {\n-\tstruct sk_buff *next, *prev;\n-\n \tlist->qlen--;\n-\tnext\t   = skb->next;\n-\tprev\t   = skb->prev;\n-\tskb->next  = skb->prev = NULL;\n-\tnext->prev = prev;\n-\tprev->next = next;\n+\tlist_del(&skb->list);\n+\tskb->list.next = skb->list.prev = NULL;\n }\n \n /**\n@@ -1439,20 +1425,13 @@ static inline int pskb_trim_rcsum(struct sk_buff *skb, unsigned int len)\n }\n \n #define skb_queue_walk(queue, skb) \\\n-\t\tfor (skb = (queue)->next;\t\t\t\t\t\\\n-\t\t     prefetch(skb->next), (skb != (struct sk_buff *)(queue));\t\\\n-\t\t     skb = skb->next)\n+\tlist_for_each_entry(skb, &(queue)->list, list)\n \n #define skb_queue_walk_safe(queue, skb, tmp)\t\t\t\t\t\\\n-\t\tfor (skb = (queue)->next, tmp = skb->next;\t\t\t\\\n-\t\t     skb != (struct sk_buff *)(queue);\t\t\t\t\\\n-\t\t     skb = tmp, tmp = skb->next)\n+\tlist_for_each_entry_safe(skb, tmp, &(queue)->list, list)\n \n #define skb_queue_reverse_walk(queue, skb) \\\n-\t\tfor (skb = (queue)->prev;\t\t\t\t\t\\\n-\t\t     prefetch(skb->prev), (skb != (struct sk_buff *)(queue));\t\\\n-\t\t     skb = skb->prev)\n-\n+\tlist_for_each_entry_reverse(skb, &(queue)->list, list)\n \n extern struct sk_buff *__skb_recv_datagram(struct sock *sk, unsigned flags,\n \t\t\t\t\t   int *peeked, int *err);\ndiff --git a/include/net/bluetooth/bluetooth.h b/include/net/bluetooth/bluetooth.h\nindex 6f8418b..d998517 100644\n--- a/include/net/bluetooth/bluetooth.h\n+++ b/include/net/bluetooth/bluetooth.h\n@@ -164,7 +164,7 @@ static inline int skb_frags_no(struct sk_buff *skb)\n \tregister struct sk_buff *frag = skb_shinfo(skb)->frag_list;\n \tregister int n = 1;\n \n-\tfor (; frag; frag=frag->next, n++);\n+\tfor (; frag; frag=frag->frag_next, n++);\n \treturn n;\n }\n \ndiff --git a/include/net/sctp/sctp.h b/include/net/sctp/sctp.h\nindex 17b932b..26ac109 100644\n--- a/include/net/sctp/sctp.h\n+++ b/include/net/sctp/sctp.h\n@@ -406,10 +406,7 @@ struct sctp_association *sctp_id2assoc(struct sock *sk, sctp_assoc_t id);\n \n /* A macro to walk a list of skbs.  */\n #define sctp_skb_for_each(pos, head, tmp) \\\n-for (pos = (head)->next;\\\n-     tmp = (pos)->next, pos != ((struct sk_buff *)(head));\\\n-     pos = tmp)\n-\n+\tlist_for_each_entry_safe(pos, tmp, &(head)->list, list)\n \n /* A helper to append an entire skb list (list) to another (head). */\n static inline void sctp_skb_list_tail(struct sk_buff_head *list,\n@@ -420,7 +417,7 @@ static inline void sctp_skb_list_tail(struct sk_buff_head *list,\n \tsctp_spin_lock_irqsave(&head->lock, flags);\n \tsctp_spin_lock(&list->lock);\n \n-\tlist_splice((struct list_head *)list, (struct list_head *)head->prev);\n+\tlist_splice_tail_init(&list->list, &head->list);\n \n \thead->qlen += list->qlen;\n \tlist->qlen = 0;\ndiff --git a/include/net/sock.h b/include/net/sock.h\nindex 75a312d..c29b4bd 100644\n--- a/include/net/sock.h\n+++ b/include/net/sock.h\n@@ -223,10 +223,7 @@ struct sock {\n \t * the per-socket spinlock held and requires low latency\n \t * access. 
Therefore we special case it's implementation.\n \t */\n-\tstruct {\n-\t\tstruct sk_buff *head;\n-\t\tstruct sk_buff *tail;\n-\t} sk_backlog;\n+\tstruct list_head\tsk_backlog;\n \twait_queue_head_t\t*sk_sleep;\n \tstruct dst_entry\t*sk_dst_cache;\n \tstruct xfrm_policy\t*sk_policy[2];\n@@ -473,13 +470,7 @@ static inline int sk_stream_memory_free(struct sock *sk)\n /* The per-socket spinlock must be held here. */\n static inline void sk_add_backlog(struct sock *sk, struct sk_buff *skb)\n {\n-\tif (!sk->sk_backlog.tail) {\n-\t\tsk->sk_backlog.head = sk->sk_backlog.tail = skb;\n-\t} else {\n-\t\tsk->sk_backlog.tail->next = skb;\n-\t\tsk->sk_backlog.tail = skb;\n-\t}\n-\tskb->next = NULL;\n+\tlist_add_tail(&skb->list, &sk->sk_backlog);\n }\n \n #define sk_wait_event(__sk, __timeo, __condition)\t\t\t\\\ndiff --git a/include/net/tcp.h b/include/net/tcp.h\nindex 8983386..23c9e99 100644\n--- a/include/net/tcp.h\n+++ b/include/net/tcp.h\n@@ -1180,38 +1180,27 @@ static inline void tcp_write_queue_purge(struct sock *sk)\n \n static inline struct sk_buff *tcp_write_queue_head(struct sock *sk)\n {\n-\tstruct sk_buff *skb = sk->sk_write_queue.next;\n-\tif (skb == (struct sk_buff *) &sk->sk_write_queue)\n-\t\treturn NULL;\n-\treturn skb;\n+\treturn skb_peek(&sk->sk_write_queue);\n }\n \n static inline struct sk_buff *tcp_write_queue_tail(struct sock *sk)\n {\n-\tstruct sk_buff *skb = sk->sk_write_queue.prev;\n-\tif (skb == (struct sk_buff *) &sk->sk_write_queue)\n-\t\treturn NULL;\n-\treturn skb;\n+\treturn skb_peek_tail(&sk->sk_write_queue);\n }\n \n static inline struct sk_buff *tcp_write_queue_next(struct sock *sk, struct sk_buff *skb)\n {\n-\treturn skb->next;\n+\treturn list_entry(skb->list.next, struct sk_buff, list);\n }\n \n #define tcp_for_write_queue(skb, sk)\t\t\t\t\t\\\n-\t\tfor (skb = (sk)->sk_write_queue.next;\t\t\t\\\n-\t\t     (skb != (struct sk_buff *)&(sk)->sk_write_queue);\t\\\n-\t\t     skb = skb->next)\n+\tlist_for_each_entry(skb, &(sk)->sk_write_queue.list, list)\n \n #define tcp_for_write_queue_from(skb, sk)\t\t\t\t\\\n-\t\tfor (; (skb != (struct sk_buff *)&(sk)->sk_write_queue);\\\n-\t\t     skb = skb->next)\n+\tlist_for_each_entry_from(skb, &(sk)->sk_write_queue.list, list)\n \n #define tcp_for_write_queue_from_safe(skb, tmp, sk)\t\t\t\\\n-\t\tfor (tmp = skb->next;\t\t\t\t\t\\\n-\t\t     (skb != (struct sk_buff *)&(sk)->sk_write_queue);\t\\\n-\t\t     skb = tmp, tmp = skb->next)\n+\tlist_for_each_entry_safe_from(skb, tmp, &(sk)->sk_write_queue.list, list)\n \n static inline struct sk_buff *tcp_send_head(struct sock *sk)\n {\n@@ -1220,9 +1209,10 @@ static inline struct sk_buff *tcp_send_head(struct sock *sk)\n \n static inline void tcp_advance_send_head(struct sock *sk, struct sk_buff *skb)\n {\n-\tsk->sk_send_head = skb->next;\n-\tif (sk->sk_send_head == (struct sk_buff *)&sk->sk_write_queue)\n+\tif (skb->list.next == &sk->sk_write_queue.list)\n \t\tsk->sk_send_head = NULL;\n+\telse\n+\t\tsk->sk_send_head = tcp_write_queue_next(sk, skb);\n }\n \n static inline void tcp_check_send_head(struct sock *sk, struct sk_buff *skb_unlinked)\n@@ -1272,7 +1262,7 @@ static inline void tcp_insert_write_queue_before(struct sk_buff *new,\n \t\t\t\t\t\t  struct sk_buff *skb,\n \t\t\t\t\t\t  struct sock *sk)\n {\n-\t__skb_insert(new, skb->prev, skb, &sk->sk_write_queue);\n+\t__skb_insert(new, skb, &sk->sk_write_queue);\n \n \tif (sk->sk_send_head == skb)\n \t\tsk->sk_send_head = new;\n@@ -1286,7 +1276,7 @@ static inline void tcp_unlink_write_queue(struct sk_buff *skb, struct sock *sk)\n static 
inline int tcp_skb_is_last(const struct sock *sk,\n \t\t\t\t  const struct sk_buff *skb)\n {\n-\treturn skb->next == (struct sk_buff *)&sk->sk_write_queue;\n+\treturn skb->list.next == &sk->sk_write_queue.list;\n }\n \n static inline int tcp_write_queue_empty(struct sock *sk)\ndiff --git a/net/appletalk/ddp.c b/net/appletalk/ddp.c\nindex 0c85042..da43560 100644\n--- a/net/appletalk/ddp.c\n+++ b/net/appletalk/ddp.c\n@@ -983,7 +983,7 @@ static unsigned long atalk_sum_skb(const struct sk_buff *skb, int offset,\n \tif (skb_shinfo(skb)->frag_list) {\n \t\tstruct sk_buff *list = skb_shinfo(skb)->frag_list;\n \n-\t\tfor (; list; list = list->next) {\n+\t\tfor (; list; list = list->frag_next) {\n \t\t\tint end;\n \n \t\t\tWARN_ON(start > offset + len);\ndiff --git a/net/atm/br2684.c b/net/atm/br2684.c\nindex 8d9a6f1..4f25ef8 100644\n--- a/net/atm/br2684.c\n+++ b/net/atm/br2684.c\n@@ -454,12 +454,13 @@ static int br2684_regvcc(struct atm_vcc *atmvcc, void __user * arg)\n {\n \tint err;\n \tstruct br2684_vcc *brvcc;\n-\tstruct sk_buff *skb;\n+\tstruct sk_buff *skb, *n;\n \tstruct sk_buff_head *rq;\n \tstruct br2684_dev *brdev;\n \tstruct net_device *net_dev;\n \tstruct atm_backend_br2684 be;\n \tunsigned long flags;\n+\tLIST_HEAD(list);\n \n \tif (copy_from_user(&be, arg, sizeof be))\n \t\treturn -EFAULT;\n@@ -515,26 +516,15 @@ static int br2684_regvcc(struct atm_vcc *atmvcc, void __user * arg)\n \trq = &sk_atm(atmvcc)->sk_receive_queue;\n \n \tspin_lock_irqsave(&rq->lock, flags);\n-\tif (skb_queue_empty(rq)) {\n-\t\tskb = NULL;\n-\t} else {\n-\t\t/* NULL terminate the list.  */\n-\t\trq->prev->next = NULL;\n-\t\tskb = rq->next;\n-\t}\n-\trq->prev = rq->next = (struct sk_buff *)rq;\n+\tlist_splice_init(&rq->list, &list);\n \trq->qlen = 0;\n \tspin_unlock_irqrestore(&rq->lock, flags);\n \n-\twhile (skb) {\n-\t\tstruct sk_buff *next = skb->next;\n-\n-\t\tskb->next = skb->prev = NULL;\n+\tlist_for_each_entry_safe(skb, n, &list, list) {\n+\t\tlist_del(&skb->list);\n \t\tbr2684_push(atmvcc, skb);\n \t\tBRPRIV(skb->dev)->stats.rx_bytes -= skb->len;\n \t\tBRPRIV(skb->dev)->stats.rx_packets--;\n-\n-\t\tskb = next;\n \t}\n \t__module_get(THIS_MODULE);\n \treturn 0;\ndiff --git a/net/atm/clip.c b/net/atm/clip.c\nindex 5b5b963..916aba6 100644\n--- a/net/atm/clip.c\n+++ b/net/atm/clip.c\n@@ -450,8 +450,9 @@ static struct net_device_stats *clip_get_stats(struct net_device *dev)\n \n static int clip_mkip(struct atm_vcc *vcc, int timeout)\n {\n+\tLIST_HEAD(rq_list);\n \tstruct clip_vcc *clip_vcc;\n-\tstruct sk_buff *skb;\n+\tstruct sk_buff *skb, *n;\n \tstruct sk_buff_head *rq;\n \tunsigned long flags;\n \n@@ -477,22 +478,12 @@ static int clip_mkip(struct atm_vcc *vcc, int timeout)\n \trq = &sk_atm(vcc)->sk_receive_queue;\n \n \tspin_lock_irqsave(&rq->lock, flags);\n-\tif (skb_queue_empty(rq)) {\n-\t\tskb = NULL;\n-\t} else {\n-\t\t/* NULL terminate the list.  
*/\n-\t\trq->prev->next = NULL;\n-\t\tskb = rq->next;\n-\t}\n-\trq->prev = rq->next = (struct sk_buff *)rq;\n+\tlist_splice_init(&rq->list, &rq_list);\n \trq->qlen = 0;\n \tspin_unlock_irqrestore(&rq->lock, flags);\n \n \t/* re-process everything received between connection setup and MKIP */\n-\twhile (skb) {\n-\t\tstruct sk_buff *next = skb->next;\n-\n-\t\tskb->next = skb->prev = NULL;\n+\tlist_for_each_entry_safe(skb, n, &rq_list, list) {\n \t\tif (!clip_devs) {\n \t\t\tatm_return(vcc, skb->truesize);\n \t\t\tkfree_skb(skb);\n@@ -505,8 +496,6 @@ static int clip_mkip(struct atm_vcc *vcc, int timeout)\n \t\t\tPRIV(skb->dev)->stats.rx_bytes -= len;\n \t\t\tkfree_skb(skb);\n \t\t}\n-\n-\t\tskb = next;\n \t}\n \treturn 0;\n }\ndiff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c\nindex f5b21cb..109c48b 100644\n--- a/net/bluetooth/hci_core.c\n+++ b/net/bluetooth/hci_core.c\n@@ -1220,7 +1220,7 @@ int hci_send_acl(struct hci_conn *conn, struct sk_buff *skb, __u16 flags)\n \n \t\t__skb_queue_tail(&conn->data_q, skb);\n \t\tdo {\n-\t\t\tskb = list; list = list->next;\n+\t\t\tskb = list; list = list->frag_next;\n \n \t\t\tskb->dev = (void *) hdev;\n \t\t\tbt_cb(skb)->pkt_type = HCI_ACLDATA_PKT;\ndiff --git a/net/bluetooth/l2cap.c b/net/bluetooth/l2cap.c\nindex 9610a9c..c079333 100644\n--- a/net/bluetooth/l2cap.c\n+++ b/net/bluetooth/l2cap.c\n@@ -1069,7 +1069,7 @@ static inline int l2cap_do_send(struct sock *sk, struct msghdr *msg, int len)\n \t\tsent += count;\n \t\tlen  -= count;\n \n-\t\tfrag = &(*frag)->next;\n+\t\tfrag = &(*frag)->frag_next;\n \t}\n \n \tif ((err = hci_send_acl(conn->hcon, skb, 0)) < 0)\n@@ -1358,7 +1358,7 @@ static struct sk_buff *l2cap_build_cmd(struct l2cap_conn *conn,\n \t\tlen  -= count;\n \t\tdata += count;\n \n-\t\tfrag = &(*frag)->next;\n+\t\tfrag = &(*frag)->frag_next;\n \t}\n \n \treturn skb;\ndiff --git a/net/core/datagram.c b/net/core/datagram.c\nindex 52f577a..0de47a6 100644\n--- a/net/core/datagram.c\n+++ b/net/core/datagram.c\n@@ -312,7 +312,7 @@ int skb_copy_datagram_iovec(const struct sk_buff *skb, int offset,\n \tif (skb_shinfo(skb)->frag_list) {\n \t\tstruct sk_buff *list = skb_shinfo(skb)->frag_list;\n \n-\t\tfor (; list; list = list->next) {\n+\t\tfor (; list; list = list->frag_next) {\n \t\t\tint end;\n \n \t\t\tWARN_ON(start > offset + len);\n@@ -398,7 +398,7 @@ int skb_copy_datagram_from_iovec(struct sk_buff *skb, int offset,\n \tif (skb_shinfo(skb)->frag_list) {\n \t\tstruct sk_buff *list = skb_shinfo(skb)->frag_list;\n \n-\t\tfor (; list; list = list->next) {\n+\t\tfor (; list; list = list->frag_next) {\n \t\t\tint end;\n \n \t\t\tWARN_ON(start > offset + len);\n@@ -486,7 +486,7 @@ static int skb_copy_and_csum_datagram(const struct sk_buff *skb, int offset,\n \tif (skb_shinfo(skb)->frag_list) {\n \t\tstruct sk_buff *list = skb_shinfo(skb)->frag_list;\n \n-\t\tfor (; list; list=list->next) {\n+\t\tfor (; list; list = list->frag_next) {\n \t\t\tint end;\n \n \t\t\tWARN_ON(start > offset + len);\ndiff --git a/net/core/dev.c b/net/core/dev.c\nindex f48d1b2..d0d1375 100644\n--- a/net/core/dev.c\n+++ b/net/core/dev.c\n@@ -1367,7 +1367,7 @@ void dev_kfree_skb_irq(struct sk_buff *skb)\n \n \t\tlocal_irq_save(flags);\n \t\tsd = &__get_cpu_var(softnet_data);\n-\t\tskb->next = sd->completion_queue;\n+\t\tskb->frag_next = sd->completion_queue;\n \t\tsd->completion_queue = skb;\n \t\traise_softirq_irqoff(NET_TX_SOFTIRQ);\n \t\tlocal_irq_restore(flags);\n@@ -1577,12 +1577,12 @@ static void dev_gso_skb_destructor(struct sk_buff *skb)\n \tstruct dev_gso_cb 
*cb;\n \n \tdo {\n-\t\tstruct sk_buff *nskb = skb->next;\n+\t\tstruct sk_buff *nskb = skb->frag_next;\n \n-\t\tskb->next = nskb->next;\n-\t\tnskb->next = NULL;\n+\t\tskb->frag_next = nskb->frag_next;\n+\t\tnskb->frag_next = NULL;\n \t\tkfree_skb(nskb);\n-\t} while (skb->next);\n+\t} while (skb->frag_next);\n \n \tcb = DEV_GSO_CB(skb);\n \tif (cb->destructor)\n@@ -1612,7 +1612,7 @@ static int dev_gso_segment(struct sk_buff *skb)\n \tif (IS_ERR(segs))\n \t\treturn PTR_ERR(segs);\n \n-\tskb->next = segs;\n+\tskb->frag_next = segs;\n \tDEV_GSO_CB(skb)->destructor = skb->destructor;\n \tskb->destructor = dev_gso_skb_destructor;\n \n@@ -1622,14 +1622,14 @@ static int dev_gso_segment(struct sk_buff *skb)\n int dev_hard_start_xmit(struct sk_buff *skb, struct net_device *dev,\n \t\t\tstruct netdev_queue *txq)\n {\n-\tif (likely(!skb->next)) {\n+\tif (likely(!skb->frag_next)) {\n \t\tif (!list_empty(&ptype_all))\n \t\t\tdev_queue_xmit_nit(skb, dev);\n \n \t\tif (netif_needs_gso(dev, skb)) {\n \t\t\tif (unlikely(dev_gso_segment(skb)))\n \t\t\t\tgoto out_kfree_skb;\n-\t\t\tif (skb->next)\n+\t\t\tif (skb->frag_next)\n \t\t\t\tgoto gso;\n \t\t}\n \n@@ -1638,20 +1638,20 @@ int dev_hard_start_xmit(struct sk_buff *skb, struct net_device *dev,\n \n gso:\n \tdo {\n-\t\tstruct sk_buff *nskb = skb->next;\n+\t\tstruct sk_buff *nskb = skb->frag_next;\n \t\tint rc;\n \n-\t\tskb->next = nskb->next;\n-\t\tnskb->next = NULL;\n+\t\tskb->frag_next = nskb->frag_next;\n+\t\tnskb->frag_next = NULL;\n \t\trc = dev->hard_start_xmit(nskb, dev);\n \t\tif (unlikely(rc)) {\n-\t\t\tnskb->next = skb->next;\n-\t\t\tskb->next = nskb;\n+\t\t\tnskb->frag_next = skb->frag_next;\n+\t\t\tskb->frag_next = nskb;\n \t\t\treturn rc;\n \t\t}\n-\t\tif (unlikely(netif_tx_queue_stopped(txq) && skb->next))\n+\t\tif (unlikely(netif_tx_queue_stopped(txq) && skb->frag_next))\n \t\t\treturn NETDEV_TX_BUSY;\n-\t} while (skb->next);\n+\t} while (skb->frag_next);\n \n \tskb->destructor = DEV_GSO_CB(skb)->destructor;\n \n@@ -1961,7 +1961,7 @@ static void net_tx_action(struct softirq_action *h)\n \n \t\twhile (clist) {\n \t\t\tstruct sk_buff *skb = clist;\n-\t\t\tclist = clist->next;\n+\t\t\tclist = clist->frag_next;\n \n \t\t\tWARN_ON(atomic_read(&skb->users));\n \t\t\t__kfree_skb(skb);\n@@ -4504,7 +4504,7 @@ static int dev_cpu_callback(struct notifier_block *nfb,\n \t/* Find end of our completion_queue. */\n \tlist_skb = &sd->completion_queue;\n \twhile (*list_skb)\n-\t\tlist_skb = &(*list_skb)->next;\n+\t\tlist_skb = &(*list_skb)->frag_next;\n \t/* Append completion queue from offline CPU. 
*/\n \t*list_skb = oldsd->completion_queue;\n \toldsd->completion_queue = NULL;\ndiff --git a/net/core/neighbour.c b/net/core/neighbour.c\nindex 9d92e41..45bc7a6 100644\n--- a/net/core/neighbour.c\n+++ b/net/core/neighbour.c\n@@ -927,8 +927,7 @@ int __neigh_event_send(struct neighbour *neigh, struct sk_buff *skb)\n \t\t\tif (skb_queue_len(&neigh->arp_queue) >=\n \t\t\t    neigh->parms->queue_len) {\n \t\t\t\tstruct sk_buff *buff;\n-\t\t\t\tbuff = neigh->arp_queue.next;\n-\t\t\t\t__skb_unlink(buff, &neigh->arp_queue);\n+\t\t\t\tbuff = __skb_dequeue(&neigh->arp_queue);\n \t\t\t\tkfree_skb(buff);\n \t\t\t\tNEIGH_CACHE_STAT_INC(neigh->tbl, unres_discards);\n \t\t\t}\n@@ -1259,24 +1258,20 @@ static void neigh_proxy_process(unsigned long arg)\n \tstruct neigh_table *tbl = (struct neigh_table *)arg;\n \tlong sched_next = 0;\n \tunsigned long now = jiffies;\n-\tstruct sk_buff *skb;\n+\tstruct sk_buff *skb, *n;\n \n \tspin_lock(&tbl->proxy_queue.lock);\n \n-\tskb = tbl->proxy_queue.next;\n-\n-\twhile (skb != (struct sk_buff *)&tbl->proxy_queue) {\n-\t\tstruct sk_buff *back = skb;\n-\t\tlong tdif = NEIGH_CB(back)->sched_next - now;\n+\tlist_for_each_entry_safe(skb, n, &tbl->proxy_queue.list, list) {\n+\t\tlong tdif = NEIGH_CB(skb)->sched_next - now;\n \n-\t\tskb = skb->next;\n \t\tif (tdif <= 0) {\n-\t\t\tstruct net_device *dev = back->dev;\n-\t\t\t__skb_unlink(back, &tbl->proxy_queue);\n+\t\t\tstruct net_device *dev = skb->dev;\n+\t\t\t__skb_unlink(skb, &tbl->proxy_queue);\n \t\t\tif (tbl->proxy_redo && netif_running(dev))\n-\t\t\t\ttbl->proxy_redo(back);\n+\t\t\t\ttbl->proxy_redo(skb);\n \t\t\telse\n-\t\t\t\tkfree_skb(back);\n+\t\t\t\tkfree_skb(skb);\n \n \t\t\tdev_put(dev);\n \t\t} else if (!sched_next || tdif < sched_next)\ndiff --git a/net/core/netpoll.c b/net/core/netpoll.c\nindex 6c7af39..74e9a1c 100644\n--- a/net/core/netpoll.c\n+++ b/net/core/netpoll.c\n@@ -217,7 +217,7 @@ static void zap_completion_queue(void)\n \n \t\twhile (clist != NULL) {\n \t\t\tstruct sk_buff *skb = clist;\n-\t\t\tclist = clist->next;\n+\t\t\tclist = clist->frag_next;\n \t\t\tif (skb->destructor) {\n \t\t\t\tatomic_inc(&skb->users);\n \t\t\t\tdev_kfree_skb_any(skb); /* put this one back */\ndiff --git a/net/core/skbuff.c b/net/core/skbuff.c\nindex ca1ccdf..0922c35 100644\n--- a/net/core/skbuff.c\n+++ b/net/core/skbuff.c\n@@ -287,15 +287,13 @@ EXPORT_SYMBOL(dev_alloc_skb);\n \n static void skb_drop_list(struct sk_buff **listp)\n {\n-\tstruct sk_buff *list = *listp;\n+\tstruct sk_buff *skb = *listp;\n \n-\t*listp = NULL;\n-\n-\tdo {\n-\t\tstruct sk_buff *this = list;\n-\t\tlist = list->next;\n-\t\tkfree_skb(this);\n-\t} while (list);\n+\twhile (skb) {\n+\t\tstruct sk_buff *next = skb->frag_next;\n+\t\tkfree_skb(skb);\n+\t\tskb = next;\n+\t}\n }\n \n static inline void skb_drop_fraglist(struct sk_buff *skb)\n@@ -305,10 +303,10 @@ static inline void skb_drop_fraglist(struct sk_buff *skb)\n \n static void skb_clone_fraglist(struct sk_buff *skb)\n {\n-\tstruct sk_buff *list;\n+\tstruct sk_buff *n;\n \n-\tfor (list = skb_shinfo(skb)->frag_list; list; list = list->next)\n-\t\tskb_get(list);\n+\tfor (n = skb_shinfo(skb)->frag_list; n; n = n->frag_next)\n+\t\tskb_get(n);\n }\n \n static void skb_release_data(struct sk_buff *skb)\n@@ -468,7 +466,7 @@ static struct sk_buff *__skb_clone(struct sk_buff *n, struct sk_buff *skb)\n {\n #define C(x) n->x = skb->x\n \n-\tn->next = n->prev = NULL;\n+\tn->list.next = n->list.prev = NULL;\n \tn->sk = NULL;\n \t__copy_skb_header(n, skb);\n \n@@ -998,7 +996,7 @@ drop_pages:\n \t}\n \n \tfor 
(fragp = &skb_shinfo(skb)->frag_list; (frag = *fragp);\n-\t     fragp = &frag->next) {\n+\t     fragp = &frag->frag_next) {\n \t\tint end = offset + frag->len;\n \n \t\tif (skb_shared(frag)) {\n@@ -1008,7 +1006,7 @@ drop_pages:\n \t\t\tif (unlikely(!nfrag))\n \t\t\t\treturn -ENOMEM;\n \n-\t\t\tnfrag->next = frag->next;\n+\t\t\tnfrag->frag_next = frag->frag_next;\n \t\t\tkfree_skb(frag);\n \t\t\tfrag = nfrag;\n \t\t\t*fragp = frag;\n@@ -1023,8 +1021,8 @@ drop_pages:\n \t\t    unlikely((err = pskb_trim(frag, len - offset))))\n \t\t\treturn err;\n \n-\t\tif (frag->next)\n-\t\t\tskb_drop_list(&frag->next);\n+\t\tif (frag->frag_next)\n+\t\t\tskb_drop_list(&frag->frag_next);\n \t\tbreak;\n \t}\n \n@@ -1115,7 +1113,7 @@ unsigned char *__pskb_pull_tail(struct sk_buff *skb, int delta)\n \t\t\tif (list->len <= eat) {\n \t\t\t\t/* Eaten as whole. */\n \t\t\t\teat -= list->len;\n-\t\t\t\tlist = list->next;\n+\t\t\t\tlist = list->frag_next;\n \t\t\t\tinsp = list;\n \t\t\t} else {\n \t\t\t\t/* Eaten partially. */\n@@ -1125,7 +1123,7 @@ unsigned char *__pskb_pull_tail(struct sk_buff *skb, int delta)\n \t\t\t\t\tclone = skb_clone(list, GFP_ATOMIC);\n \t\t\t\t\tif (!clone)\n \t\t\t\t\t\treturn NULL;\n-\t\t\t\t\tinsp = list->next;\n+\t\t\t\t\tinsp = list->frag_next;\n \t\t\t\t\tlist = clone;\n \t\t\t\t} else {\n \t\t\t\t\t/* This may be pulled without\n@@ -1143,12 +1141,12 @@ unsigned char *__pskb_pull_tail(struct sk_buff *skb, int delta)\n \n \t\t/* Free pulled out fragments. */\n \t\twhile ((list = skb_shinfo(skb)->frag_list) != insp) {\n-\t\t\tskb_shinfo(skb)->frag_list = list->next;\n+\t\t\tskb_shinfo(skb)->frag_list = list->frag_next;\n \t\t\tkfree_skb(list);\n \t\t}\n \t\t/* And insert new clone at head. */\n \t\tif (clone) {\n-\t\t\tclone->next = list;\n+\t\t\tclone->frag_next = list;\n \t\t\tskb_shinfo(skb)->frag_list = clone;\n \t\t}\n \t}\n@@ -1229,7 +1227,7 @@ int skb_copy_bits(const struct sk_buff *skb, int offset, void *to, int len)\n \tif (skb_shinfo(skb)->frag_list) {\n \t\tstruct sk_buff *list = skb_shinfo(skb)->frag_list;\n \n-\t\tfor (; list; list = list->next) {\n+\t\tfor (; list; list = list->frag_next) {\n \t\t\tint end;\n \n \t\t\tWARN_ON(start > offset + len);\n@@ -1409,7 +1407,7 @@ int skb_splice_bits(struct sk_buff *__skb, unsigned int offset,\n \tif (skb_shinfo(skb)->frag_list) {\n \t\tstruct sk_buff *list = skb_shinfo(skb)->frag_list;\n \n-\t\tfor (; list && tlen; list = list->next) {\n+\t\tfor (; list && tlen; list = list->frag_next) {\n \t\t\tif (__skb_splice_bits(list, &offset, &tlen, &spd))\n \t\t\t\tbreak;\n \t\t}\n@@ -1503,7 +1501,7 @@ int skb_store_bits(struct sk_buff *skb, int offset, const void *from, int len)\n \tif (skb_shinfo(skb)->frag_list) {\n \t\tstruct sk_buff *list = skb_shinfo(skb)->frag_list;\n \n-\t\tfor (; list; list = list->next) {\n+\t\tfor (; list; list = list->frag_next) {\n \t\t\tint end;\n \n \t\t\tWARN_ON(start > offset + len);\n@@ -1581,7 +1579,7 @@ __wsum skb_checksum(const struct sk_buff *skb, int offset,\n \tif (skb_shinfo(skb)->frag_list) {\n \t\tstruct sk_buff *list = skb_shinfo(skb)->frag_list;\n \n-\t\tfor (; list; list = list->next) {\n+\t\tfor (; list; list = list->frag_next) {\n \t\t\tint end;\n \n \t\t\tWARN_ON(start > offset + len);\n@@ -1661,7 +1659,7 @@ __wsum skb_copy_and_csum_bits(const struct sk_buff *skb, int offset,\n \tif (skb_shinfo(skb)->frag_list) {\n \t\tstruct sk_buff *list = skb_shinfo(skb)->frag_list;\n \n-\t\tfor (; list; list = list->next) {\n+\t\tfor (; list; list = list->frag_next) {\n \t\t\t__wsum csum2;\n \t\t\tint 
end;\n \n@@ -1864,7 +1862,8 @@ void skb_insert(struct sk_buff *old, struct sk_buff *newsk, struct sk_buff_head\n \tunsigned long flags;\n \n \tspin_lock_irqsave(&list->lock, flags);\n-\t__skb_insert(newsk, old->prev, old, list);\n+\tlist_add_tail(&newsk->list, &old->list);\n+\tlist->qlen++;\n \tspin_unlock_irqrestore(&list->lock, flags);\n }\n \n@@ -2039,8 +2038,8 @@ next_skb:\n \t\tst->frag_data = NULL;\n \t}\n \n-\tif (st->cur_skb->next) {\n-\t\tst->cur_skb = st->cur_skb->next;\n+\tif (st->cur_skb->frag_next) {\n+\t\tst->cur_skb = st->cur_skb->frag_next;\n \t\tst->frag_idx = 0;\n \t\tgoto next_skb;\n \t} else if (st->root_skb == st->cur_skb &&\n@@ -2251,7 +2250,7 @@ struct sk_buff *skb_segment(struct sk_buff *skb, int features)\n \t\t\tgoto err;\n \n \t\tif (segs)\n-\t\t\ttail->next = nskb;\n+\t\t\ttail->frag_next = nskb;\n \t\telse\n \t\t\tsegs = nskb;\n \t\ttail = nskb;\n@@ -2315,7 +2314,7 @@ struct sk_buff *skb_segment(struct sk_buff *skb, int features)\n \n err:\n \twhile ((skb = segs)) {\n-\t\tsegs = skb->next;\n+\t\tsegs = skb->frag_next;\n \t\tkfree_skb(skb);\n \t}\n \treturn ERR_PTR(err);\n@@ -2389,7 +2388,7 @@ __skb_to_sgvec(struct sk_buff *skb, struct scatterlist *sg, int offset, int len)\n \tif (skb_shinfo(skb)->frag_list) {\n \t\tstruct sk_buff *list = skb_shinfo(skb)->frag_list;\n \n-\t\tfor (; list; list = list->next) {\n+\t\tfor (; list; list = list->frag_next) {\n \t\t\tint end;\n \n \t\t\tWARN_ON(start > offset + len);\n@@ -2485,7 +2484,7 @@ int skb_cow_data(struct sk_buff *skb, int tailbits, struct sk_buff **trailer)\n \n \t\t/* If the skb is the last, worry about trailer. */\n \n-\t\tif (skb1->next == NULL && tailbits) {\n+\t\tif (skb1->frag_next == NULL && tailbits) {\n \t\t\tif (skb_shinfo(skb1)->nr_frags ||\n \t\t\t    skb_shinfo(skb1)->frag_list ||\n \t\t\t    skb_tailroom(skb1) < tailbits)\n@@ -2516,14 +2515,14 @@ int skb_cow_data(struct sk_buff *skb, int tailbits, struct sk_buff **trailer)\n \t\t\t/* Looking around. 
Are we still alive?\n \t\t\t * OK, link new skb, drop old one */\n \n-\t\t\tskb2->next = skb1->next;\n+\t\t\tskb2->frag_next = skb1->frag_next;\n \t\t\t*skb_p = skb2;\n \t\t\tkfree_skb(skb1);\n \t\t\tskb1 = skb2;\n \t\t}\n \t\telt++;\n \t\t*trailer = skb1;\n-\t\tskb_p = &skb1->next;\n+\t\tskb_p = &skb1->frag_next;\n \t}\n \n \treturn elt;\ndiff --git a/net/core/sock.c b/net/core/sock.c\nindex 23b8b9d..3b856e2 100644\n--- a/net/core/sock.c\n+++ b/net/core/sock.c\n@@ -947,6 +947,7 @@ struct sock *sk_alloc(struct net *net, int family, gfp_t priority,\n \t\t */\n \t\tsk->sk_prot = sk->sk_prot_creator = prot;\n \t\tsock_lock_init(sk);\n+\t\tINIT_LIST_HEAD(&sk->sk_backlog);\n \t\tsock_net_set(sk, get_net(net));\n \t}\n \n@@ -1011,7 +1012,7 @@ struct sock *sk_clone(const struct sock *sk, const gfp_t priority)\n \t\tsk_node_init(&newsk->sk_node);\n \t\tsock_lock_init(newsk);\n \t\tbh_lock_sock(newsk);\n-\t\tnewsk->sk_backlog.head\t= newsk->sk_backlog.tail = NULL;\n+\t\tINIT_LIST_HEAD(&newsk->sk_backlog);\n \n \t\tatomic_set(&newsk->sk_rmem_alloc, 0);\n \t\tatomic_set(&newsk->sk_wmem_alloc, 0);\n@@ -1361,16 +1362,15 @@ static void __lock_sock(struct sock *sk)\n \n static void __release_sock(struct sock *sk)\n {\n-\tstruct sk_buff *skb = sk->sk_backlog.head;\n-\n \tdo {\n-\t\tsk->sk_backlog.head = sk->sk_backlog.tail = NULL;\n-\t\tbh_unlock_sock(sk);\n+\t\tLIST_HEAD(local_list);\n+\t\tstruct sk_buff *skb, *n;\n \n-\t\tdo {\n-\t\t\tstruct sk_buff *next = skb->next;\n+\t\tlist_splice_init(&sk->sk_backlog, &local_list);\n+\t\tbh_unlock_sock(sk);\n \n-\t\t\tskb->next = NULL;\n+\t\tlist_for_each_entry_safe(skb, n, &local_list, list) {\n+\t\t\tINIT_LIST_HEAD(&skb->list);\n \t\t\tsk->sk_backlog_rcv(sk, skb);\n \n \t\t\t/*\n@@ -1380,12 +1380,10 @@ static void __release_sock(struct sock *sk)\n \t\t\t * queue private:\n \t\t\t */\n \t\t\tcond_resched_softirq();\n-\n-\t\t\tskb = next;\n-\t\t} while (skb != NULL);\n+\t\t}\n \n \t\tbh_lock_sock(sk);\n-\t} while ((skb = sk->sk_backlog.head) != NULL);\n+\t} while (!list_empty(&sk->sk_backlog));\n }\n \n /**\n@@ -1767,7 +1765,7 @@ void release_sock(struct sock *sk)\n \tmutex_release(&sk->sk_lock.dep_map, 1, _RET_IP_);\n \n \tspin_lock_bh(&sk->sk_lock.slock);\n-\tif (sk->sk_backlog.tail)\n+\tif (!list_empty(&sk->sk_backlog))\n \t\t__release_sock(sk);\n \tsk->sk_lock.owned = 0;\n \tif (waitqueue_active(&sk->sk_lock.wq))\ndiff --git a/net/decnet/af_decnet.c b/net/decnet/af_decnet.c\nindex 3c23ab3..6677cf5 100644\n--- a/net/decnet/af_decnet.c\n+++ b/net/decnet/af_decnet.c\n@@ -1249,14 +1249,8 @@ static int dn_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)\n \t\tif ((skb = skb_peek(&scp->other_receive_queue)) != NULL) {\n \t\t\tamount = skb->len;\n \t\t} else {\n-\t\t\tstruct sk_buff *skb = sk->sk_receive_queue.next;\n-\t\t\tfor(;;) {\n-\t\t\t\tif (skb ==\n-\t\t\t\t    (struct sk_buff *)&sk->sk_receive_queue)\n-\t\t\t\t\tbreak;\n+\t\t\tlist_for_each_entry(skb, &sk->sk_receive_queue.list, list)\n \t\t\t\tamount += skb->len;\n-\t\t\t\tskb = skb->next;\n-\t\t\t}\n \t\t}\n \t\trelease_sock(sk);\n \t\terr = put_user(amount, (int __user *)arg);\n@@ -1643,13 +1637,13 @@ static int __dn_getsockopt(struct socket *sock, int level,int optname, char __us\n \n static int dn_data_ready(struct sock *sk, struct sk_buff_head *q, int flags, int target)\n {\n-\tstruct sk_buff *skb = q->next;\n+\tstruct sk_buff *skb;\n \tint len = 0;\n \n \tif (flags & MSG_OOB)\n \t\treturn !skb_queue_empty(q) ? 
1 : 0;\n \n-\twhile(skb != (struct sk_buff *)q) {\n+\tlist_for_each_entry(skb, &q->list, list) {\n \t\tstruct dn_skb_cb *cb = DN_SKB_CB(skb);\n \t\tlen += skb->len;\n \n@@ -1665,8 +1659,6 @@ static int dn_data_ready(struct sock *sk, struct sk_buff_head *q, int flags, int\n \t\t/* minimum data length for read exceeded */\n \t\tif (len >= target)\n \t\t\treturn 1;\n-\n-\t\tskb = skb->next;\n \t}\n \n \treturn 0;\n@@ -1682,7 +1674,7 @@ static int dn_recvmsg(struct kiocb *iocb, struct socket *sock,\n \tsize_t target = size > 1 ? 1 : 0;\n \tsize_t copied = 0;\n \tint rv = 0;\n-\tstruct sk_buff *skb, *nskb;\n+\tstruct sk_buff *skb, *n;\n \tstruct dn_skb_cb *cb = NULL;\n \tunsigned char eor = 0;\n \tlong timeo = sock_rcvtimeo(sk, flags & MSG_DONTWAIT);\n@@ -1757,7 +1749,7 @@ static int dn_recvmsg(struct kiocb *iocb, struct socket *sock,\n \t\tfinish_wait(sk->sk_sleep, &wait);\n \t}\n \n-\tfor(skb = queue->next; skb != (struct sk_buff *)queue; skb = nskb) {\n+\tlist_for_each_entry_safe(skb, n, &queue->list, list) {\n \t\tunsigned int chunk = skb->len;\n \t\tcb = DN_SKB_CB(skb);\n \n@@ -1774,7 +1766,6 @@ static int dn_recvmsg(struct kiocb *iocb, struct socket *sock,\n \t\t\tskb_pull(skb, chunk);\n \n \t\teor = cb->nsp_flags & 0x40;\n-\t\tnskb = skb->next;\n \n \t\tif (skb->len == 0) {\n \t\t\tskb_unlink(skb, queue);\ndiff --git a/net/decnet/dn_nsp_out.c b/net/decnet/dn_nsp_out.c\nindex 1964faf..2adc681 100644\n--- a/net/decnet/dn_nsp_out.c\n+++ b/net/decnet/dn_nsp_out.c\n@@ -383,7 +383,7 @@ int dn_nsp_check_xmit_queue(struct sock *sk, struct sk_buff *skb, struct sk_buff\n {\n \tstruct dn_skb_cb *cb = DN_SKB_CB(skb);\n \tstruct dn_scp *scp = DN_SK(sk);\n-\tstruct sk_buff *skb2, *list, *ack = NULL;\n+\tstruct sk_buff *skb2, *n, *ack = NULL;\n \tint wakeup = 0;\n \tint try_retrans = 0;\n \tunsigned long reftime = cb->stamp;\n@@ -391,9 +391,7 @@ int dn_nsp_check_xmit_queue(struct sock *sk, struct sk_buff *skb, struct sk_buff\n \tunsigned short xmit_count;\n \tunsigned short segnum;\n \n-\tskb2 = q->next;\n-\tlist = (struct sk_buff *)q;\n-\twhile(list != skb2) {\n+\tlist_for_each_entry_safe(skb2, n, &q->list, list) {\n \t\tstruct dn_skb_cb *cb2 = DN_SKB_CB(skb2);\n \n \t\tif (dn_before_or_equal(cb2->segnum, acknum))\n@@ -401,8 +399,6 @@ int dn_nsp_check_xmit_queue(struct sock *sk, struct sk_buff *skb, struct sk_buff\n \n \t\t/* printk(KERN_DEBUG \"ack: %s %04x %04x\\n\", ack ? 
\"ACK\" : \"SKIP\", (int)cb2->segnum, (int)acknum); */\n \n-\t\tskb2 = skb2->next;\n-\n \t\tif (ack == NULL)\n \t\t\tcontinue;\n \ndiff --git a/net/econet/af_econet.c b/net/econet/af_econet.c\nindex 8789d2b..7b7461d 100644\n--- a/net/econet/af_econet.c\n+++ b/net/econet/af_econet.c\n@@ -901,15 +901,10 @@ static void aun_tx_ack(unsigned long seq, int result)\n \tstruct ec_cb *eb;\n \n \tspin_lock_irqsave(&aun_queue_lock, flags);\n-\tskb = skb_peek(&aun_queue);\n-\twhile (skb && skb != (struct sk_buff *)&aun_queue)\n-\t{\n-\t\tstruct sk_buff *newskb = skb->next;\n+\tlist_for_each_entry(skb, &aun_queue.list, list) {\n \t\teb = (struct ec_cb *)&skb->cb;\n \t\tif (eb->seq == seq)\n \t\t\tgoto foundit;\n-\n-\t\tskb = newskb;\n \t}\n \tspin_unlock_irqrestore(&aun_queue_lock, flags);\n \tprintk(KERN_DEBUG \"AUN: unknown sequence %ld\\n\", seq);\n@@ -982,23 +977,18 @@ static void aun_data_available(struct sock *sk, int slen)\n \n static void ab_cleanup(unsigned long h)\n {\n-\tstruct sk_buff *skb;\n+\tstruct sk_buff *skb, *n;\n \tunsigned long flags;\n \n \tspin_lock_irqsave(&aun_queue_lock, flags);\n-\tskb = skb_peek(&aun_queue);\n-\twhile (skb && skb != (struct sk_buff *)&aun_queue)\n-\t{\n-\t\tstruct sk_buff *newskb = skb->next;\n+\tlist_for_each_entry_safe(skb, n, &aun_queue.list, list) {\n \t\tstruct ec_cb *eb = (struct ec_cb *)&skb->cb;\n-\t\tif ((jiffies - eb->start) > eb->timeout)\n-\t\t{\n+\t\tif ((jiffies - eb->start) > eb->timeout) {\n \t\t\ttx_result(skb->sk, eb->cookie,\n \t\t\t\t  ECTYPE_TRANSMIT_NOT_PRESENT);\n \t\t\tskb_unlink(skb, &aun_queue);\n \t\t\tkfree_skb(skb);\n \t\t}\n-\t\tskb = newskb;\n \t}\n \tspin_unlock_irqrestore(&aun_queue_lock, flags);\n \ndiff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c\nindex 8a3ac1f..794e79b 100644\n--- a/net/ipv4/af_inet.c\n+++ b/net/ipv4/af_inet.c\n@@ -1238,7 +1238,7 @@ static struct sk_buff *inet_gso_segment(struct sk_buff *skb, int features)\n \t\tiph->tot_len = htons(skb->len - skb->mac_len);\n \t\tiph->check = 0;\n \t\tiph->check = ip_fast_csum(skb_network_header(skb), iph->ihl);\n-\t} while ((skb = skb->next));\n+\t} while ((skb = skb->frag_next));\n \n out:\n \treturn segs;\ndiff --git a/net/ipv4/inet_fragment.c b/net/ipv4/inet_fragment.c\nindex 6c52e08..f4d9ff5 100644\n--- a/net/ipv4/inet_fragment.c\n+++ b/net/ipv4/inet_fragment.c\n@@ -141,7 +141,7 @@ void inet_frag_destroy(struct inet_frag_queue *q, struct inet_frags *f,\n \tfp = q->fragments;\n \tnf = q->net;\n \twhile (fp) {\n-\t\tstruct sk_buff *xp = fp->next;\n+\t\tstruct sk_buff *xp = fp->frag_next;\n \n \t\tfrag_kfree_skb(nf, f, fp, work);\n \t\tfp = xp;\ndiff --git a/net/ipv4/inet_lro.c b/net/ipv4/inet_lro.c\nindex cfd034a..e5732eb 100644\n--- a/net/ipv4/inet_lro.c\n+++ b/net/ipv4/inet_lro.c\n@@ -227,7 +227,7 @@ static void lro_add_packet(struct net_lro_desc *lro_desc, struct sk_buff *skb,\n \tparent->truesize += skb->truesize;\n \n \tif (lro_desc->last_skb)\n-\t\tlro_desc->last_skb->next = skb;\n+\t\tlro_desc->last_skb->frag_next = skb;\n \telse\n \t\tskb_shinfo(parent)->frag_list = skb;\n \ndiff --git a/net/ipv4/ip_fragment.c b/net/ipv4/ip_fragment.c\nindex 2152d22..b3b241e 100644\n--- a/net/ipv4/ip_fragment.c\n+++ b/net/ipv4/ip_fragment.c\n@@ -281,7 +281,7 @@ static int ip_frag_reinit(struct ipq *qp)\n \n \tfp = qp->q.fragments;\n \tdo {\n-\t\tstruct sk_buff *xp = fp->next;\n+\t\tstruct sk_buff *xp = fp->frag_next;\n \t\tfrag_kfree_skb(qp->q.net, fp, NULL);\n \t\tfp = xp;\n \t} while (fp);\n@@ -363,7 +363,7 @@ static int ip_frag_queue(struct ipq *qp, struct sk_buff *skb)\n 
\t * this fragment, right?\n \t */\n \tprev = NULL;\n-\tfor (next = qp->q.fragments; next != NULL; next = next->next) {\n+\tfor (next = qp->q.fragments; next != NULL; next = next->frag_next) {\n \t\tif (FRAG_CB(next)->offset >= offset)\n \t\t\tbreak;\t/* bingo! */\n \t\tprev = next;\n@@ -411,10 +411,10 @@ static int ip_frag_queue(struct ipq *qp, struct sk_buff *skb)\n \t\t\t/* Old fragment is completely overridden with\n \t\t\t * new one drop it.\n \t\t\t */\n-\t\t\tnext = next->next;\n+\t\t\tnext = next->frag_next;\n \n \t\t\tif (prev)\n-\t\t\t\tprev->next = next;\n+\t\t\t\tprev->frag_next = next;\n \t\t\telse\n \t\t\t\tqp->q.fragments = next;\n \n@@ -426,9 +426,9 @@ static int ip_frag_queue(struct ipq *qp, struct sk_buff *skb)\n \tFRAG_CB(skb)->offset = offset;\n \n \t/* Insert this fragment in the chain of fragments. */\n-\tskb->next = next;\n+\tskb->frag_next = next;\n \tif (prev)\n-\t\tprev->next = skb;\n+\t\tprev->frag_next = skb;\n \telse\n \t\tqp->q.fragments = skb;\n \n@@ -473,16 +473,16 @@ static int ip_frag_reasm(struct ipq *qp, struct sk_buff *prev,\n \n \t/* Make the one we just received the head. */\n \tif (prev) {\n-\t\thead = prev->next;\n+\t\thead = prev->frag_next;\n \t\tfp = skb_clone(head, GFP_ATOMIC);\n \t\tif (!fp)\n \t\t\tgoto out_nomem;\n \n-\t\tfp->next = head->next;\n-\t\tprev->next = fp;\n+\t\tfp->frag_next = head->frag_next;\n+\t\tprev->frag_next = fp;\n \n \t\tskb_morph(head, qp->q.fragments);\n-\t\thead->next = qp->q.fragments->next;\n+\t\thead->frag_next = qp->q.fragments->frag_next;\n \n \t\tkfree_skb(qp->q.fragments);\n \t\tqp->q.fragments = head;\n@@ -512,8 +512,8 @@ static int ip_frag_reasm(struct ipq *qp, struct sk_buff *prev,\n \n \t\tif ((clone = alloc_skb(0, GFP_ATOMIC)) == NULL)\n \t\t\tgoto out_nomem;\n-\t\tclone->next = head->next;\n-\t\thead->next = clone;\n+\t\tclone->frag_next = head->frag_next;\n+\t\thead->frag_next = clone;\n \t\tskb_shinfo(clone)->frag_list = skb_shinfo(head)->frag_list;\n \t\tskb_shinfo(head)->frag_list = NULL;\n \t\tfor (i=0; i<skb_shinfo(head)->nr_frags; i++)\n@@ -526,11 +526,11 @@ static int ip_frag_reasm(struct ipq *qp, struct sk_buff *prev,\n \t\tatomic_add(clone->truesize, &qp->q.net->mem);\n \t}\n \n-\tskb_shinfo(head)->frag_list = head->next;\n+\tskb_shinfo(head)->frag_list = head->frag_next;\n \tskb_push(head, head->data - skb_network_header(head));\n \tatomic_sub(head->truesize, &qp->q.net->mem);\n \n-\tfor (fp=head->next; fp; fp = fp->next) {\n+\tfor (fp=head->frag_next; fp; fp = fp->frag_next) {\n \t\thead->data_len += fp->len;\n \t\thead->len += fp->len;\n \t\tif (head->ip_summed != fp->ip_summed)\n@@ -541,7 +541,7 @@ static int ip_frag_reasm(struct ipq *qp, struct sk_buff *prev,\n \t\tatomic_sub(fp->truesize, &qp->q.net->mem);\n \t}\n \n-\thead->next = NULL;\n+\thead->frag_next = NULL;\n \thead->dev = dev;\n \thead->tstamp = qp->q.stamp;\n \ndiff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c\nindex d533a89..0d2cd9a 100644\n--- a/net/ipv4/ip_output.c\n+++ b/net/ipv4/ip_output.c\n@@ -484,10 +484,10 @@ int ip_fragment(struct sk_buff *skb, int (*output)(struct sk_buff*))\n \t\t    skb_cloned(skb))\n \t\t\tgoto slow_path;\n \n-\t\tfor (frag = skb_shinfo(skb)->frag_list; frag; frag = frag->next) {\n+\t\tfor (frag = skb_shinfo(skb)->frag_list; frag; frag = frag->frag_next) {\n \t\t\t/* Correct geometry. 
*/\n \t\t\tif (frag->len > mtu ||\n-\t\t\t    ((frag->len & 7) && frag->next) ||\n+\t\t\t    ((frag->len & 7) && frag->frag_next) ||\n \t\t\t    skb_headroom(frag) < hlen)\n \t\t\t    goto slow_path;\n \n@@ -533,7 +533,7 @@ int ip_fragment(struct sk_buff *skb, int (*output)(struct sk_buff*))\n \t\t\t\t\tip_options_fragment(frag);\n \t\t\t\toffset += skb->len - hlen;\n \t\t\t\tiph->frag_off = htons(offset>>3);\n-\t\t\t\tif (frag->next != NULL)\n+\t\t\t\tif (frag->frag_next != NULL)\n \t\t\t\t\tiph->frag_off |= htons(IP_MF);\n \t\t\t\t/* Ready, complete checksum */\n \t\t\t\tip_send_check(iph);\n@@ -547,8 +547,8 @@ int ip_fragment(struct sk_buff *skb, int (*output)(struct sk_buff*))\n \t\t\t\tbreak;\n \n \t\t\tskb = frag;\n-\t\t\tfrag = skb->next;\n-\t\t\tskb->next = NULL;\n+\t\t\tfrag = skb->frag_next;\n+\t\t\tskb->frag_next = NULL;\n \t\t}\n \n \t\tif (err == 0) {\n@@ -557,7 +557,7 @@ int ip_fragment(struct sk_buff *skb, int (*output)(struct sk_buff*))\n \t\t}\n \n \t\twhile (frag) {\n-\t\t\tskb = frag->next;\n+\t\t\tskb = frag->frag_next;\n \t\t\tkfree_skb(frag);\n \t\t\tfrag = skb;\n \t\t}\n@@ -1229,7 +1229,7 @@ int ip_push_pending_frames(struct sock *sk)\n \twhile ((tmp_skb = __skb_dequeue(&sk->sk_write_queue)) != NULL) {\n \t\t__skb_pull(tmp_skb, skb_network_header_len(skb));\n \t\t*tail_skb = tmp_skb;\n-\t\ttail_skb = &(tmp_skb->next);\n+\t\ttail_skb = &(tmp_skb->frag_next);\n \t\tskb->len += tmp_skb->len;\n \t\tskb->data_len += tmp_skb->len;\n \t\tskb->truesize += tmp_skb->truesize;\ndiff --git a/net/ipv4/netfilter/nf_nat_proto_sctp.c b/net/ipv4/netfilter/nf_nat_proto_sctp.c\nindex 65e470b..9dc5a67 100644\n--- a/net/ipv4/netfilter/nf_nat_proto_sctp.c\n+++ b/net/ipv4/netfilter/nf_nat_proto_sctp.c\n@@ -57,7 +57,7 @@ sctp_manip_pkt(struct sk_buff *skb,\n \t}\n \n \tcrc32 = sctp_start_cksum((u8 *)hdr, skb_headlen(skb) - hdroff);\n-\tfor (skb = skb_shinfo(skb)->frag_list; skb; skb = skb->next)\n+\tfor (skb = skb_shinfo(skb)->frag_list; skb; skb = skb->frag_next)\n \t\tcrc32 = sctp_update_cksum((u8 *)skb->data, skb_headlen(skb),\n \t\t\t\t\t  crc32);\n \tcrc32 = sctp_end_cksum(crc32);\ndiff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c\nindex 1ab341e..2310ee6 100644\n--- a/net/ipv4/tcp.c\n+++ b/net/ipv4/tcp.c\n@@ -433,12 +433,15 @@ int tcp_ioctl(struct sock *sk, int cmd, unsigned long arg)\n \t\t\t !tp->urg_data ||\n \t\t\t before(tp->urg_seq, tp->copied_seq) ||\n \t\t\t !before(tp->urg_seq, tp->rcv_nxt)) {\n+\t\t\tstruct sk_buff *last;\n+\n \t\t\tansw = tp->rcv_nxt - tp->copied_seq;\n \n \t\t\t/* Subtract 1, if FIN is in queue. */\n+\t\t\tlast = list_entry(sk->sk_receive_queue.list.prev,\n+\t\t\t\t\t  struct sk_buff, list);\n \t\t\tif (answ && !skb_queue_empty(&sk->sk_receive_queue))\n-\t\t\t\tansw -=\n-\t\t       tcp_hdr((struct sk_buff *)sk->sk_receive_queue.prev)->fin;\n+\t\t\t\tansw -= tcp_hdr(last)->fin;\n \t\t} else\n \t\t\tansw = tp->urg_seq - tp->copied_seq;\n \t\trelease_sock(sk);\n@@ -1338,11 +1341,7 @@ int tcp_recvmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg,\n \n \t\t/* Next get a buffer. 
*/\n \n-\t\tskb = skb_peek(&sk->sk_receive_queue);\n-\t\tdo {\n-\t\t\tif (!skb)\n-\t\t\t\tbreak;\n-\n+\t\tlist_for_each_entry(skb, &sk->sk_receive_queue.list, list) {\n \t\t\t/* Now that we have two receive queues this\n \t\t\t * shouldn't happen.\n \t\t\t */\n@@ -1359,12 +1358,11 @@ int tcp_recvmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg,\n \t\t\tif (tcp_hdr(skb)->fin)\n \t\t\t\tgoto found_fin_ok;\n \t\t\tWARN_ON(!(flags & MSG_PEEK));\n-\t\t\tskb = skb->next;\n-\t\t} while (skb != (struct sk_buff *)&sk->sk_receive_queue);\n+\t\t}\n \n \t\t/* Well, if we have backlog, try to process it now yet. */\n \n-\t\tif (copied >= target && !sk->sk_backlog.tail)\n+\t\tif (copied >= target && list_empty(&sk->sk_backlog))\n \t\t\tbreak;\n \n \t\tif (copied) {\n@@ -2440,12 +2438,12 @@ struct sk_buff *tcp_tso_segment(struct sk_buff *skb, int features)\n \t\t\t\t\t\t    thlen, skb->csum));\n \n \t\tseq += len;\n-\t\tskb = skb->next;\n+\t\tskb = skb->frag_next;\n \t\tth = tcp_hdr(skb);\n \n \t\tth->seq = htonl(seq);\n \t\tth->cwr = 0;\n-\t} while (skb->next);\n+\t} while (skb->frag_next);\n \n \tdelta = htonl(oldlen + (skb->tail - skb->transport_header) +\n \t\t      skb->data_len);\ndiff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c\nindex f79a516..aca677d 100644\n--- a/net/ipv4/tcp_input.c\n+++ b/net/ipv4/tcp_input.c\n@@ -4106,7 +4106,7 @@ drop:\n \t\t}\n \t\t__skb_queue_head(&tp->out_of_order_queue, skb);\n \t} else {\n-\t\tstruct sk_buff *skb1 = tp->out_of_order_queue.prev;\n+\t\tstruct sk_buff *skb1 = skb_peek_tail(&tp->out_of_order_queue);\n \t\tu32 seq = TCP_SKB_CB(skb)->seq;\n \t\tu32 end_seq = TCP_SKB_CB(skb)->end_seq;\n \n@@ -4123,14 +4123,13 @@ drop:\n \t\t}\n \n \t\t/* Find place to insert this segment. */\n-\t\tdo {\n+\t\tlist_for_each_entry_from_reverse(skb1, &tp->out_of_order_queue.list, list) {\n \t\t\tif (!after(TCP_SKB_CB(skb1)->seq, seq))\n \t\t\t\tbreak;\n-\t\t} while ((skb1 = skb1->prev) !=\n-\t\t\t (struct sk_buff *)&tp->out_of_order_queue);\n+\t\t}\n \n \t\t/* Do skb overlap to previous one? */\n-\t\tif (skb1 != (struct sk_buff *)&tp->out_of_order_queue &&\n+\t\tif (&skb1->list != &tp->out_of_order_queue.list &&\n \t\t    before(seq, TCP_SKB_CB(skb1)->end_seq)) {\n \t\t\tif (!after(end_seq, TCP_SKB_CB(skb1)->end_seq)) {\n \t\t\t\t/* All the bits are present. Drop. */\n@@ -4143,24 +4142,27 @@ drop:\n \t\t\t\ttcp_dsack_set(sk, seq,\n \t\t\t\t\t      TCP_SKB_CB(skb1)->end_seq);\n \t\t\t} else {\n-\t\t\t\tskb1 = skb1->prev;\n+\t\t\t\tskb1 = list_entry(skb1->list.prev,\n+\t\t\t\t\t\t  struct sk_buff,\n+\t\t\t\t\t\t  list);\n \t\t\t}\n \t\t}\n-\t\t__skb_insert(skb, skb1, skb1->next, &tp->out_of_order_queue);\n+\t\tlist_add(&skb->list, &skb1->list);\n+\t\ttp->out_of_order_queue.qlen++;\n \n \t\t/* And clean segments covered by new one as whole. 
*/\n-\t\twhile ((skb1 = skb->next) !=\n-\t\t       (struct sk_buff *)&tp->out_of_order_queue &&\n-\t\t       after(end_seq, TCP_SKB_CB(skb1)->seq)) {\n-\t\t\tif (before(end_seq, TCP_SKB_CB(skb1)->end_seq)) {\n-\t\t\t\ttcp_dsack_extend(sk, TCP_SKB_CB(skb1)->seq,\n+\t\tlist_for_each_entry_safe_continue(skb, skb1, &tp->out_of_order_queue.list, list) {\n+\t\t\tif (!after(end_seq, TCP_SKB_CB(skb)->seq))\n+\t\t\t\tbreak;\n+\t\t\tif (before(end_seq, TCP_SKB_CB(skb)->end_seq)) {\n+\t\t\t\ttcp_dsack_extend(sk, TCP_SKB_CB(skb)->seq,\n \t\t\t\t\t\t end_seq);\n \t\t\t\tbreak;\n \t\t\t}\n-\t\t\t__skb_unlink(skb1, &tp->out_of_order_queue);\n-\t\t\ttcp_dsack_extend(sk, TCP_SKB_CB(skb1)->seq,\n-\t\t\t\t\t TCP_SKB_CB(skb1)->end_seq);\n-\t\t\t__kfree_skb(skb1);\n+\t\t\t__skb_unlink(skb, &tp->out_of_order_queue);\n+\t\t\ttcp_dsack_extend(sk, TCP_SKB_CB(skb)->seq,\n+\t\t\t\t\t TCP_SKB_CB(skb)->end_seq);\n+\t\t\t__kfree_skb(skb);\n \t\t}\n \n add_sack:\n@@ -4172,7 +4174,7 @@ add_sack:\n static struct sk_buff *tcp_collapse_one(struct sock *sk, struct sk_buff *skb,\n \t\t\t\t\tstruct sk_buff_head *list)\n {\n-\tstruct sk_buff *next = skb->next;\n+\tstruct sk_buff *next = list_entry(skb->list.next, struct sk_buff, list);\n \n \t__skb_unlink(skb, list);\n \t__kfree_skb(skb);\n@@ -4196,12 +4198,16 @@ tcp_collapse(struct sock *sk, struct sk_buff_head *list,\n \t/* First, check that queue is collapsible and find\n \t * the point where collapsing can be useful. */\n \tfor (skb = head; skb != tail;) {\n+\t\tstruct sk_buff *next;\n+\n \t\t/* No new bits? It is possible on ofo queue. */\n \t\tif (!before(start, TCP_SKB_CB(skb)->end_seq)) {\n \t\t\tskb = tcp_collapse_one(sk, skb, list);\n \t\t\tcontinue;\n \t\t}\n \n+\t\tnext = list_entry(skb->list.next, struct sk_buff, list);\n+\n \t\t/* The first skb to collapse is:\n \t\t * - not SYN/FIN and\n \t\t * - bloated or contains data before \"start\" or\n@@ -4210,13 +4216,13 @@ tcp_collapse(struct sock *sk, struct sk_buff_head *list,\n \t\tif (!tcp_hdr(skb)->syn && !tcp_hdr(skb)->fin &&\n \t\t    (tcp_win_from_space(skb->truesize) > skb->len ||\n \t\t     before(TCP_SKB_CB(skb)->seq, start) ||\n-\t\t     (skb->next != tail &&\n-\t\t      TCP_SKB_CB(skb)->end_seq != TCP_SKB_CB(skb->next)->seq)))\n+\t\t     (next != tail &&\n+\t\t      TCP_SKB_CB(skb)->end_seq != TCP_SKB_CB(next)->seq)))\n \t\t\tbreak;\n \n \t\t/* Decided to skip this, advance start seq. */\n \t\tstart = TCP_SKB_CB(skb)->end_seq;\n-\t\tskb = skb->next;\n+\t\tskb = next;\n \t}\n \tif (skb == tail || tcp_hdr(skb)->syn || tcp_hdr(skb)->fin)\n \t\treturn;\n@@ -4244,7 +4250,7 @@ tcp_collapse(struct sock *sk, struct sk_buff_head *list,\n \t\tmemcpy(nskb->head, skb->head, header);\n \t\tmemcpy(nskb->cb, skb->cb, sizeof(skb->cb));\n \t\tTCP_SKB_CB(nskb)->seq = TCP_SKB_CB(nskb)->end_seq = start;\n-\t\t__skb_insert(nskb, skb->prev, skb, list);\n+\t\t__skb_insert(nskb, skb, list);\n \t\tskb_set_owner_r(nskb, sk);\n \n \t\t/* Copy data, releasing collapsed skbs. */\n@@ -4290,7 +4296,7 @@ static void tcp_collapse_ofo_queue(struct sock *sk)\n \thead = skb;\n \n \tfor (;;) {\n-\t\tskb = skb->next;\n+\t\tskb = list_entry(skb->list.next, struct sk_buff, list);\n \n \t\t/* Segment is terminated when we see gap or when\n \t\t * we are at the end of all the queue. 
*/\n@@ -4362,8 +4368,10 @@ static int tcp_prune_queue(struct sock *sk)\n \n \ttcp_collapse_ofo_queue(sk);\n \ttcp_collapse(sk, &sk->sk_receive_queue,\n-\t\t     sk->sk_receive_queue.next,\n-\t\t     (struct sk_buff *)&sk->sk_receive_queue,\n+\t\t     list_entry(sk->sk_receive_queue.list.next,\n+\t\t\t\tstruct sk_buff, list),\n+\t\t     list_entry(&sk->sk_receive_queue.list,\n+\t\t\t\tstruct sk_buff, list),\n \t\t     tp->copied_seq, tp->rcv_nxt);\n \tsk_mem_reclaim(sk);\n \ndiff --git a/net/ipv6/af_inet6.c b/net/ipv6/af_inet6.c\nindex 95055f8..98884ab 100644\n--- a/net/ipv6/af_inet6.c\n+++ b/net/ipv6/af_inet6.c\n@@ -766,7 +766,7 @@ static struct sk_buff *ipv6_gso_segment(struct sk_buff *skb, int features)\n \tif (unlikely(IS_ERR(segs)))\n \t\tgoto out;\n \n-\tfor (skb = segs; skb; skb = skb->next) {\n+\tfor (skb = segs; skb; skb = skb->frag_next) {\n \t\tipv6h = ipv6_hdr(skb);\n \t\tipv6h->payload_len = htons(skb->len - skb->mac_len -\n \t\t\t\t\t   sizeof(*ipv6h));\ndiff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c\nindex 3df2c44..b45eb1a 100644\n--- a/net/ipv6/ip6_output.c\n+++ b/net/ipv6/ip6_output.c\n@@ -657,10 +657,10 @@ static int ip6_fragment(struct sk_buff *skb, int (*output)(struct sk_buff *))\n \t\t    skb_cloned(skb))\n \t\t\tgoto slow_path;\n \n-\t\tfor (frag = skb_shinfo(skb)->frag_list; frag; frag = frag->next) {\n+\t\tfor (frag = skb_shinfo(skb)->frag_list; frag; frag = frag->frag_next) {\n \t\t\t/* Correct geometry. */\n \t\t\tif (frag->len > mtu ||\n-\t\t\t    ((frag->len & 7) && frag->next) ||\n+\t\t\t    ((frag->len & 7) && frag->frag_next) ||\n \t\t\t    skb_headroom(frag) < hlen)\n \t\t\t    goto slow_path;\n \n@@ -726,7 +726,7 @@ static int ip6_fragment(struct sk_buff *skb, int (*output)(struct sk_buff *))\n \t\t\t\tfh->nexthdr = nexthdr;\n \t\t\t\tfh->reserved = 0;\n \t\t\t\tfh->frag_off = htons(offset);\n-\t\t\t\tif (frag->next != NULL)\n+\t\t\t\tif (frag->frag_next != NULL)\n \t\t\t\t\tfh->frag_off |= htons(IP6_MF);\n \t\t\t\tfh->identification = frag_id;\n \t\t\t\tipv6_hdr(frag)->payload_len =\n@@ -743,8 +743,8 @@ static int ip6_fragment(struct sk_buff *skb, int (*output)(struct sk_buff *))\n \t\t\t\tbreak;\n \n \t\t\tskb = frag;\n-\t\t\tfrag = skb->next;\n-\t\t\tskb->next = NULL;\n+\t\t\tfrag = skb->frag_next;\n+\t\t\tskb->frag_next = NULL;\n \t\t}\n \n \t\tkfree(tmp_hdr);\n@@ -756,7 +756,7 @@ static int ip6_fragment(struct sk_buff *skb, int (*output)(struct sk_buff *))\n \t\t}\n \n \t\twhile (frag) {\n-\t\t\tskb = frag->next;\n+\t\t\tskb = frag->frag_next;\n \t\t\tkfree_skb(frag);\n \t\t\tfrag = skb;\n \t\t}\n@@ -1428,7 +1428,7 @@ int ip6_push_pending_frames(struct sock *sk)\n \twhile ((tmp_skb = __skb_dequeue(&sk->sk_write_queue)) != NULL) {\n \t\t__skb_pull(tmp_skb, skb_network_header_len(skb));\n \t\t*tail_skb = tmp_skb;\n-\t\ttail_skb = &(tmp_skb->next);\n+\t\ttail_skb = &(tmp_skb->frag_next);\n \t\tskb->len += tmp_skb->len;\n \t\tskb->data_len += tmp_skb->len;\n \t\tskb->truesize += tmp_skb->truesize;\ndiff --git a/net/ipv6/netfilter/nf_conntrack_reasm.c b/net/ipv6/netfilter/nf_conntrack_reasm.c\nindex 52d06dd..f7955eb 100644\n--- a/net/ipv6/netfilter/nf_conntrack_reasm.c\n+++ b/net/ipv6/netfilter/nf_conntrack_reasm.c\n@@ -302,7 +302,7 @@ static int nf_ct_frag6_queue(struct nf_ct_frag6_queue *fq, struct sk_buff *skb,\n \t * this fragment, right?\n \t */\n \tprev = NULL;\n-\tfor (next = fq->q.fragments; next != NULL; next = next->next) {\n+\tfor (next = fq->q.fragments; next != NULL; next = next->frag_next) {\n \t\tif (NFCT_FRAG6_CB(next)->offset >= 
offset)\n \t\t\tbreak;\t/* bingo! */\n \t\tprev = next;\n@@ -357,10 +357,10 @@ static int nf_ct_frag6_queue(struct nf_ct_frag6_queue *fq, struct sk_buff *skb,\n \t\t\t/* Old fragmnet is completely overridden with\n \t\t\t * new one drop it.\n \t\t\t */\n-\t\t\tnext = next->next;\n+\t\t\tnext = next->frag_next;\n \n \t\t\tif (prev)\n-\t\t\t\tprev->next = next;\n+\t\t\t\tprev->frag_next = next;\n \t\t\telse\n \t\t\t\tfq->q.fragments = next;\n \n@@ -372,9 +372,9 @@ static int nf_ct_frag6_queue(struct nf_ct_frag6_queue *fq, struct sk_buff *skb,\n \tNFCT_FRAG6_CB(skb)->offset = offset;\n \n \t/* Insert this fragment in the chain of fragments. */\n-\tskb->next = next;\n+\tskb->frag_next = next;\n \tif (prev)\n-\t\tprev->next = skb;\n+\t\tprev->frag_next = skb;\n \telse\n \t\tfq->q.fragments = skb;\n \n@@ -445,8 +445,8 @@ nf_ct_frag6_reasm(struct nf_ct_frag6_queue *fq, struct net_device *dev)\n \t\t\tpr_debug(\"Can't alloc skb\\n\");\n \t\t\tgoto out_oom;\n \t\t}\n-\t\tclone->next = head->next;\n-\t\thead->next = clone;\n+\t\tclone->frag_next = head->frag_next;\n+\t\thead->frag_next = clone;\n \t\tskb_shinfo(clone)->frag_list = skb_shinfo(head)->frag_list;\n \t\tskb_shinfo(head)->frag_list = NULL;\n \t\tfor (i=0; i<skb_shinfo(head)->nr_frags; i++)\n@@ -469,12 +469,12 @@ nf_ct_frag6_reasm(struct nf_ct_frag6_queue *fq, struct net_device *dev)\n \thead->mac_header += sizeof(struct frag_hdr);\n \thead->network_header += sizeof(struct frag_hdr);\n \n-\tskb_shinfo(head)->frag_list = head->next;\n+\tskb_shinfo(head)->frag_list = head->frag_next;\n \tskb_reset_transport_header(head);\n \tskb_push(head, head->data - skb_network_header(head));\n \tatomic_sub(head->truesize, &nf_init_frags.mem);\n \n-\tfor (fp=head->next; fp; fp = fp->next) {\n+\tfor (fp=head->frag_next; fp; fp = fp->frag_next) {\n \t\thead->data_len += fp->len;\n \t\thead->len += fp->len;\n \t\tif (head->ip_summed != fp->ip_summed)\n@@ -485,7 +485,7 @@ nf_ct_frag6_reasm(struct nf_ct_frag6_queue *fq, struct net_device *dev)\n \t\tatomic_sub(fp->truesize, &nf_init_frags.mem);\n \t}\n \n-\thead->next = NULL;\n+\thead->frag_next = NULL;\n \thead->dev = dev;\n \thead->tstamp = fq->q.stamp;\n \tipv6_hdr(head)->payload_len = htons(payload_len);\n@@ -502,13 +502,13 @@ nf_ct_frag6_reasm(struct nf_ct_frag6_queue *fq, struct net_device *dev)\n \tfp = skb_shinfo(head)->frag_list;\n \tif (NFCT_FRAG6_CB(fp)->orig == NULL)\n \t\t/* at above code, head skb is divided into two skbs. 
*/\n-\t\tfp = fp->next;\n+\t\tfp = fp->frag_next;\n \n \top = NFCT_FRAG6_CB(head)->orig;\n-\tfor (; fp; fp = fp->next) {\n+\tfor (; fp; fp = fp->frag_next) {\n \t\tstruct sk_buff *orig = NFCT_FRAG6_CB(fp)->orig;\n \n-\t\top->next = orig;\n+\t\top->frag_next = orig;\n \t\top = orig;\n \t\tNFCT_FRAG6_CB(fp)->orig = NULL;\n \t}\n@@ -677,8 +677,8 @@ void nf_ct_frag6_output(unsigned int hooknum, struct sk_buff *skb,\n \t\tnf_conntrack_get_reasm(skb);\n \t\ts->nfct_reasm = skb;\n \n-\t\ts2 = s->next;\n-\t\ts->next = NULL;\n+\t\ts2 = s->frag_next;\n+\t\ts->frag_next = NULL;\n \n \t\tNF_HOOK_THRESH(PF_INET6, hooknum, s, in, out, okfn,\n \t\t\t       NF_IP6_PRI_CONNTRACK_DEFRAG + 1);\ndiff --git a/net/ipv6/reassembly.c b/net/ipv6/reassembly.c\nindex 89184b5..8140afe 100644\n--- a/net/ipv6/reassembly.c\n+++ b/net/ipv6/reassembly.c\n@@ -337,7 +337,7 @@ static int ip6_frag_queue(struct frag_queue *fq, struct sk_buff *skb,\n \t * this fragment, right?\n \t */\n \tprev = NULL;\n-\tfor(next = fq->q.fragments; next != NULL; next = next->next) {\n+\tfor(next = fq->q.fragments; next != NULL; next = next->frag_next) {\n \t\tif (FRAG6_CB(next)->offset >= offset)\n \t\t\tbreak;\t/* bingo! */\n \t\tprev = next;\n@@ -384,10 +384,10 @@ static int ip6_frag_queue(struct frag_queue *fq, struct sk_buff *skb,\n \t\t\t/* Old fragment is completely overridden with\n \t\t\t * new one drop it.\n \t\t\t */\n-\t\t\tnext = next->next;\n+\t\t\tnext = next->frag_next;\n \n \t\t\tif (prev)\n-\t\t\t\tprev->next = next;\n+\t\t\t\tprev->frag_next = next;\n \t\t\telse\n \t\t\t\tfq->q.fragments = next;\n \n@@ -399,9 +399,9 @@ static int ip6_frag_queue(struct frag_queue *fq, struct sk_buff *skb,\n \tFRAG6_CB(skb)->offset = offset;\n \n \t/* Insert this fragment in the chain of fragments. */\n-\tskb->next = next;\n+\tskb->frag_next = next;\n \tif (prev)\n-\t\tprev->next = skb;\n+\t\tprev->frag_next = skb;\n \telse\n \t\tfq->q.fragments = skb;\n \n@@ -457,17 +457,17 @@ static int ip6_frag_reasm(struct frag_queue *fq, struct sk_buff *prev,\n \n \t/* Make the one we just received the head. 
*/\n \tif (prev) {\n-\t\thead = prev->next;\n+\t\thead = prev->frag_next;\n \t\tfp = skb_clone(head, GFP_ATOMIC);\n \n \t\tif (!fp)\n \t\t\tgoto out_oom;\n \n-\t\tfp->next = head->next;\n-\t\tprev->next = fp;\n+\t\tfp->frag_next = head->frag_next;\n+\t\tprev->frag_next = fp;\n \n \t\tskb_morph(head, fq->q.fragments);\n-\t\thead->next = fq->q.fragments->next;\n+\t\thead->frag_next = fq->q.fragments->frag_next;\n \n \t\tkfree_skb(fq->q.fragments);\n \t\tfq->q.fragments = head;\n@@ -496,8 +496,8 @@ static int ip6_frag_reasm(struct frag_queue *fq, struct sk_buff *prev,\n \n \t\tif ((clone = alloc_skb(0, GFP_ATOMIC)) == NULL)\n \t\t\tgoto out_oom;\n-\t\tclone->next = head->next;\n-\t\thead->next = clone;\n+\t\tclone->frag_next = head->frag_next;\n+\t\thead->frag_next = clone;\n \t\tskb_shinfo(clone)->frag_list = skb_shinfo(head)->frag_list;\n \t\tskb_shinfo(head)->frag_list = NULL;\n \t\tfor (i=0; i<skb_shinfo(head)->nr_frags; i++)\n@@ -519,12 +519,12 @@ static int ip6_frag_reasm(struct frag_queue *fq, struct sk_buff *prev,\n \thead->mac_header += sizeof(struct frag_hdr);\n \thead->network_header += sizeof(struct frag_hdr);\n \n-\tskb_shinfo(head)->frag_list = head->next;\n+\tskb_shinfo(head)->frag_list = head->frag_next;\n \tskb_reset_transport_header(head);\n \tskb_push(head, head->data - skb_network_header(head));\n \tatomic_sub(head->truesize, &fq->q.net->mem);\n \n-\tfor (fp=head->next; fp; fp = fp->next) {\n+\tfor (fp=head->frag_next; fp; fp = fp->frag_next) {\n \t\thead->data_len += fp->len;\n \t\thead->len += fp->len;\n \t\tif (head->ip_summed != fp->ip_summed)\n@@ -535,7 +535,7 @@ static int ip6_frag_reasm(struct frag_queue *fq, struct sk_buff *prev,\n \t\tatomic_sub(fp->truesize, &fq->q.net->mem);\n \t}\n \n-\thead->next = NULL;\n+\thead->frag_next = NULL;\n \thead->dev = dev;\n \thead->tstamp = fq->q.stamp;\n \tipv6_hdr(head)->payload_len = htons(payload_len);\ndiff --git a/net/irda/irlap_frame.c b/net/irda/irlap_frame.c\nindex f17b65a..4f86435 100644\n--- a/net/irda/irlap_frame.c\n+++ b/net/irda/irlap_frame.c\n@@ -991,8 +991,7 @@ void irlap_resend_rejected_frames(struct irlap_cb *self, int command)\n \tcount = skb_queue_len(&self->wx_list);\n \n \t/*  Resend unacknowledged frame(s) */\n-\tskb = skb_peek(&self->wx_list);\n-\twhile (skb != NULL) {\n+\tlist_for_each_entry(skb, &self->wx_list.list, list) {\n \t\tirlap_wait_min_turn_around(self, &self->qos_tx);\n \n \t\t/* We copy the skb to be retransmitted since we will have to\n@@ -1023,9 +1022,7 @@ void irlap_resend_rejected_frames(struct irlap_cb *self, int command)\n \t\t *  we are finished, if not, move to the next sk-buffer\n \t\t */\n \t\tif (skb == skb_peek_tail(&self->wx_list))\n-\t\t\tskb = NULL;\n-\t\telse\n-\t\t\tskb = skb->next;\n+\t\t\tbreak;\n \t}\n #if 0 /* Not yet */\n \t/*\ndiff --git a/net/llc/af_llc.c b/net/llc/af_llc.c\nindex 5bcc452..261363a 100644\n--- a/net/llc/af_llc.c\n+++ b/net/llc/af_llc.c\n@@ -715,7 +715,7 @@ static int llc_ui_recvmsg(struct kiocb *iocb, struct socket *sock,\n \t\t}\n \t\t/* Well, if we have backlog, try to process it now yet. 
*/\n \n-\t\tif (copied >= target && !sk->sk_backlog.tail)\n+\t\tif (copied >= target && list_empty(&sk->sk_backlog))\n \t\t\tbreak;\n \n \t\tif (copied) {\ndiff --git a/net/llc/llc_conn.c b/net/llc/llc_conn.c\nindex 5c6d89c..2b986a3 100644\n--- a/net/llc/llc_conn.c\n+++ b/net/llc/llc_conn.c\n@@ -83,7 +83,7 @@ int llc_conn_state_process(struct sock *sk, struct sk_buff *skb)\n \t\t * XXX indicate/confirm-needed state in the llc_conn_state_ev\n \t\t * XXX control block of the SKB instead? -DaveM\n \t\t */\n-\t\tif (!skb->next)\n+\t\tif (list_empty(&skb->list))\n \t\t\tgoto out_kfree_skb;\n \t\tgoto out_skb_put;\n \t}\ndiff --git a/net/llc/llc_proc.c b/net/llc/llc_proc.c\nindex 48212c0..d3e2332 100644\n--- a/net/llc/llc_proc.c\n+++ b/net/llc/llc_proc.c\n@@ -182,7 +182,7 @@ static int llc_seq_core_show(struct seq_file *seq, void *v)\n \t\t   timer_pending(&llc->pf_cycle_timer.timer),\n \t\t   timer_pending(&llc->rej_sent_timer.timer),\n \t\t   timer_pending(&llc->busy_state_timer.timer),\n-\t\t   !!sk->sk_backlog.tail, !!sock_owned_by_user(sk));\n+\t\t   !list_empty(&sk->sk_backlog), !!sock_owned_by_user(sk));\n out:\n \treturn 0;\n }\ndiff --git a/net/mac80211/mesh_hwmp.c b/net/mac80211/mesh_hwmp.c\nindex 210d6b8..536700f 100644\n--- a/net/mac80211/mesh_hwmp.c\n+++ b/net/mac80211/mesh_hwmp.c\n@@ -808,10 +808,8 @@ int mesh_nexthop_lookup(struct sk_buff *skb,\n \t\t}\n \n \t\tif (skb_queue_len(&mpath->frame_queue) >=\n-\t\t\t\tMESH_FRAME_QUEUE_LEN) {\n-\t\t\tskb_to_free = mpath->frame_queue.next;\n-\t\t\tskb_unlink(skb_to_free, &mpath->frame_queue);\n-\t\t}\n+\t\t\t\tMESH_FRAME_QUEUE_LEN)\n+\t\t\tskb_to_free = skb_dequeue(&mpath->frame_queue);\n \n \t\tskb_queue_tail(&mpath->frame_queue, skb);\n \t\tif (skb_to_free)\ndiff --git a/net/mac80211/rx.c b/net/mac80211/rx.c\nindex d080379..04c1ccd 100644\n--- a/net/mac80211/rx.c\n+++ b/net/mac80211/rx.c\n@@ -793,7 +793,7 @@ ieee80211_reassemble_add(struct ieee80211_sub_if_data *sdata,\n \tif (!skb_queue_empty(&entry->skb_list)) {\n #ifdef CONFIG_MAC80211_VERBOSE_DEBUG\n \t\tstruct ieee80211_hdr *hdr =\n-\t\t\t(struct ieee80211_hdr *) entry->skb_list.next->data;\n+\t\t\t(struct ieee80211_hdr *) skb_peek(&entry->skb_list)->data;\n \t\tDECLARE_MAC_BUF(mac);\n \t\tDECLARE_MAC_BUF(mac2);\n \t\tprintk(KERN_DEBUG \"%s: RX reassembly removed oldest \"\n@@ -841,7 +841,7 @@ ieee80211_reassemble_find(struct ieee80211_sub_if_data *sdata,\n \t\t    entry->last_frag + 1 != frag)\n \t\t\tcontinue;\n \n-\t\tf_hdr = (struct ieee80211_hdr *)entry->skb_list.next->data;\n+\t\tf_hdr = (struct ieee80211_hdr *) skb_peek(&entry->skb_list)->data;\n \n \t\t/*\n \t\t * Check ftype and addresses are equal, else check next fragment\ndiff --git a/net/netfilter/nf_queue.c b/net/netfilter/nf_queue.c\nindex 582ec3e..a16d8bf 100644\n--- a/net/netfilter/nf_queue.c\n+++ b/net/netfilter/nf_queue.c\n@@ -218,9 +218,9 @@ int nf_queue(struct sk_buff *skb,\n \t\treturn 1;\n \n \tdo {\n-\t\tstruct sk_buff *nskb = segs->next;\n+\t\tstruct sk_buff *nskb = segs->frag_next;\n \n-\t\tsegs->next = NULL;\n+\t\tsegs->frag_next = NULL;\n \t\tif (!__nf_queue(segs, elem, pf, hook, indev, outdev, okfn,\n \t\t\t\tqueuenum))\n \t\t\tkfree_skb(segs);\ndiff --git a/net/rxrpc/ar-recvmsg.c b/net/rxrpc/ar-recvmsg.c\nindex a39bf97..1d9dae3 100644\n--- a/net/rxrpc/ar-recvmsg.c\n+++ b/net/rxrpc/ar-recvmsg.c\n@@ -235,9 +235,9 @@ int rxrpc_recvmsg(struct kiocb *iocb, struct socket *sock,\n \n \t\tif (flags & MSG_PEEK) {\n \t\t\t_debug(\"peek next\");\n-\t\t\tskb = skb->next;\n-\t\t\tif (skb == (struct sk_buff *) 
&rx->sk.sk_receive_queue)\n+\t\t\tif (skb->list.next == &rx->sk.sk_receive_queue.list)\n \t\t\t\tbreak;\n+\t\t\tskb = list_entry(skb->list.next, struct sk_buff, list);\n \t\t\tgoto peek_next_packet;\n \t\t}\n \ndiff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c\nindex ec0a083..1592a4e 100644\n--- a/net/sched/sch_generic.c\n+++ b/net/sched/sch_generic.c\n@@ -44,7 +44,7 @@ static inline int qdisc_qlen(struct Qdisc *q)\n \n static inline int dev_requeue_skb(struct sk_buff *skb, struct Qdisc *q)\n {\n-\tif (unlikely(skb->next))\n+\tif (unlikely(skb->frag_next))\n \t\tq->gso_skb = skb;\n \telse\n \t\tq->ops->requeue(skb, q);\ndiff --git a/net/sched/sch_sfq.c b/net/sched/sch_sfq.c\nindex 6e041d1..44e40b7 100644\n--- a/net/sched/sch_sfq.c\n+++ b/net/sched/sch_sfq.c\n@@ -244,7 +244,7 @@ static unsigned int sfq_drop(struct Qdisc *sch)\n \n \tif (d > 1) {\n \t\tsfq_index x = q->dep[d + SFQ_DEPTH].next;\n-\t\tskb = q->qs[x].prev;\n+\t\tskb = skb_peek_tail(&q->qs[x]);\n \t\tlen = qdisc_pkt_len(skb);\n \t\t__skb_unlink(skb, &q->qs[x]);\n \t\tkfree_skb(skb);\n@@ -260,7 +260,7 @@ static unsigned int sfq_drop(struct Qdisc *sch)\n \t\td = q->next[q->tail];\n \t\tq->next[q->tail] = q->next[d];\n \t\tq->allot[q->next[d]] += q->quantum;\n-\t\tskb = q->qs[d].prev;\n+\t\tskb = skb_peek_tail(&q->qs[d]);\n \t\tlen = qdisc_pkt_len(skb);\n \t\t__skb_unlink(skb, &q->qs[d]);\n \t\tkfree_skb(skb);\n@@ -360,7 +360,7 @@ sfq_requeue(struct sk_buff *skb, struct Qdisc *sch)\n \t * is dropped.\n \t */\n \tif (q->qs[x].qlen > q->limit) {\n-\t\tskb = q->qs[x].prev;\n+\t\tskb = skb_peek_tail(&q->qs[x]);\n \t\t__skb_unlink(skb, &q->qs[x]);\n \t\tsch->qstats.drops++;\n \t\tsch->qstats.backlog -= qdisc_pkt_len(skb);\ndiff --git a/net/sctp/input.c b/net/sctp/input.c\nindex a49fa80..bdfedae 100644\n--- a/net/sctp/input.c\n+++ b/net/sctp/input.c\n@@ -86,7 +86,7 @@ static inline int sctp_rcv_checksum(struct sk_buff *skb)\n \t__be32 cmp = sh->checksum;\n \t__be32 val = sctp_start_cksum((__u8 *)sh, skb_headlen(skb));\n \n-\tfor (; list; list = list->next)\n+\tfor (; list; list = list->frag_next)\n \t\tval = sctp_update_cksum((__u8 *)list->data, skb_headlen(list),\n \t\t\t\t\tval);\n \ndiff --git a/net/sctp/socket.c b/net/sctp/socket.c\nindex 5ffb9de..79fec46 100644\n--- a/net/sctp/socket.c\n+++ b/net/sctp/socket.c\n@@ -1844,7 +1844,7 @@ static int sctp_skb_pull(struct sk_buff *skb, int len)\n \tlen -= skb_len;\n \t__skb_pull(skb, skb_len);\n \n-\tfor (list = skb_shinfo(skb)->frag_list; list; list = list->next) {\n+\tfor (list = skb_shinfo(skb)->frag_list; list; list = list->frag_next) {\n \t\trlen = sctp_skb_pull(list, len);\n \t\tskb->len -= (len-rlen);\n \t\tskb->data_len -= (len-rlen);\n@@ -6538,7 +6538,7 @@ static void sctp_sock_rfree_frag(struct sk_buff *skb)\n \t\tgoto done;\n \n \t/* Don't forget the fragments. */\n-\tfor (frag = skb_shinfo(skb)->frag_list; frag; frag = frag->next)\n+\tfor (frag = skb_shinfo(skb)->frag_list; frag; frag = frag->frag_next)\n \t\tsctp_sock_rfree_frag(frag);\n \n done:\n@@ -6553,7 +6553,7 @@ static void sctp_skb_set_owner_r_frag(struct sk_buff *skb, struct sock *sk)\n \t\tgoto done;\n \n \t/* Don't forget the fragments. 
*/\n-\tfor (frag = skb_shinfo(skb)->frag_list; frag; frag = frag->next)\n+\tfor (frag = skb_shinfo(skb)->frag_list; frag; frag = frag->frag_next)\n \t\tsctp_skb_set_owner_r_frag(frag, sk);\n \n done:\ndiff --git a/net/sctp/ulpevent.c b/net/sctp/ulpevent.c\nindex a1f654a..8648342 100644\n--- a/net/sctp/ulpevent.c\n+++ b/net/sctp/ulpevent.c\n@@ -60,6 +60,10 @@ SCTP_STATIC void sctp_ulpevent_init(struct sctp_ulpevent *event,\n \t\t\t\t    int msg_flags,\n \t\t\t\t    unsigned int len)\n {\n+\tstruct sk_buff *skb = sctp_event2skb(event);\n+\n+\tINIT_LIST_HEAD(&skb->list);\n+\n \tmemset(event, 0, sizeof(struct sctp_ulpevent));\n \tevent->msg_flags = msg_flags;\n \tevent->rmem_len = len;\n@@ -970,7 +974,7 @@ static void sctp_ulpevent_receive_data(struct sctp_ulpevent *event,\n \t * In general, the skb passed from IP can have only 1 level of\n \t * fragments. But we allow multiple levels of fragments.\n \t */\n-\tfor (frag = skb_shinfo(skb)->frag_list; frag; frag = frag->next) {\n+\tfor (frag = skb_shinfo(skb)->frag_list; frag; frag = frag->frag_next) {\n \t\tsctp_ulpevent_receive_data(sctp_skb2event(frag), asoc);\n \t}\n }\n@@ -997,7 +1001,7 @@ static void sctp_ulpevent_release_data(struct sctp_ulpevent *event)\n \t\tgoto done;\n \n \t/* Don't forget the fragments. */\n-\tfor (frag = skb_shinfo(skb)->frag_list; frag; frag = frag->next) {\n+\tfor (frag = skb_shinfo(skb)->frag_list; frag; frag = frag->frag_next) {\n \t\t/* NOTE:  skb_shinfos are recursive. Although IP returns\n \t\t * skb's with only 1 level of fragments, SCTP reassembly can\n \t\t * increase the levels.\n@@ -1020,7 +1024,7 @@ static void sctp_ulpevent_release_frag_data(struct sctp_ulpevent *event)\n \t\tgoto done;\n \n \t/* Don't forget the fragments. */\n-\tfor (frag = skb_shinfo(skb)->frag_list; frag; frag = frag->next) {\n+\tfor (frag = skb_shinfo(skb)->frag_list; frag; frag = frag->frag_next) {\n \t\t/* NOTE:  skb_shinfos are recursive. Although IP returns\n \t\t * skb's with only 1 level of fragments, SCTP reassembly can\n \t\t * increase the levels.\ndiff --git a/net/sctp/ulpqueue.c b/net/sctp/ulpqueue.c\nindex 5061a26..b765541 100644\n--- a/net/sctp/ulpqueue.c\n+++ b/net/sctp/ulpqueue.c\n@@ -205,7 +205,11 @@ int sctp_ulpq_tail_event(struct sctp_ulpq *ulpq, struct sctp_ulpevent *event)\n \tstruct sk_buff *skb = sctp_event2skb(event);\n \tint clear_pd = 0;\n \n-\tskb_list = (struct sk_buff_head *) skb->prev;\n+\tskb_list = NULL;\n+\tif (!list_empty(&skb->list)) {\n+\t\tstruct list_head *head = skb->list.prev;\n+\t\tskb_list = container_of(head, struct sk_buff_head, list);\n+\t}\n \n \t/* If the socket is just going to throw this away, do not\n \t * even try to deliver it.\n@@ -317,7 +321,7 @@ static void sctp_ulpq_store_reasm(struct sctp_ulpq *ulpq,\n \t}\n \n \t/* Insert before pos. */\n-\t__skb_insert(sctp_event2skb(event), pos->prev, pos, &ulpq->reasm);\n+\t__skb_insert(sctp_event2skb(event), pos, &ulpq->reasm);\n \n }\n \n@@ -337,19 +341,20 @@ static struct sctp_ulpevent *sctp_make_reassembled_event(struct sk_buff_head *qu\n \tstruct sk_buff *list = skb_shinfo(f_frag)->frag_list;\n \n \t/* Store the pointer to the 2nd skb */\n-\tif (f_frag == l_frag)\n-\t\tpos = NULL;\n-\telse\n-\t\tpos = f_frag->next;\n+\tpos = NULL;\n+\tif (f_frag != l_frag) {\n+\t\tif (f_frag->list.next != &queue->list)\n+\t\t\tpos = list_entry(f_frag->list.next, struct sk_buff, list);\n+\t}\n \n \t/* Get the last skb in the f_frag's frag_list if present. 
*/\n-\tfor (last = list; list; last = list, list = list->next);\n+\tfor (last = list; list; last = list, list = list->frag_next);\n \n \t/* Add the list of remaining fragments to the first fragments\n \t * frag_list.\n \t */\n \tif (last)\n-\t\tlast->next = pos;\n+\t\tlast->frag_next = pos;\n \telse {\n \t\tif (skb_cloned(f_frag)) {\n \t\t\t/* This is a cloned skb, we can't just modify\n@@ -378,8 +383,7 @@ static struct sctp_ulpevent *sctp_make_reassembled_event(struct sk_buff_head *qu\n \t}\n \n \twhile (pos) {\n-\n-\t\tpnext = pos->next;\n+\t\tpnext = pos->frag_next;\n \n \t\t/* Update the len and data_len fields of the first fragment. */\n \t\tf_frag->len += pos->len;\n@@ -387,11 +391,12 @@ static struct sctp_ulpevent *sctp_make_reassembled_event(struct sk_buff_head *qu\n \n \t\t/* Remove the fragment from the reassembly queue.  */\n \t\t__skb_unlink(pos, queue);\n+\t\tpos->frag_next = NULL;\n \n \t\t/* Break if we have reached the last fragment.  */\n \t\tif (pos == l_frag)\n \t\t\tbreak;\n-\t\tpos->next = pnext;\n+\t\tpos->frag_next = pnext;\n \t\tpos = pnext;\n \t}\n \n@@ -447,7 +452,7 @@ static struct sctp_ulpevent *sctp_ulpq_retrieve_reassembled(struct sctp_ulpq *ul\n \t\t\t * element in the queue, then count it towards\n \t\t\t * possible PD.\n \t\t\t */\n-\t\t\tif (pos == ulpq->reasm.next) {\n+\t\t\tif (pos->list.prev == &ulpq->reasm.list) {\n \t\t\t    pd_first = pos;\n \t\t\t    pd_last = pos;\n \t\t\t    pd_len = pos->len;\n@@ -739,9 +744,10 @@ static void sctp_ulpq_retrieve_ordered(struct sctp_ulpq *ulpq,\n \t\t\t\t\t      struct sctp_ulpevent *event)\n {\n \tstruct sk_buff_head *event_list;\n-\tstruct sk_buff *pos, *tmp;\n+\tstruct sk_buff *pos, *tmp, *skb;\n \tstruct sctp_ulpevent *cevent;\n \tstruct sctp_stream *in;\n+\tstruct list_head *prev;\n \t__u16 sid, csid;\n \t__u16 ssn, cssn;\n \n@@ -749,7 +755,9 @@ static void sctp_ulpq_retrieve_ordered(struct sctp_ulpq *ulpq,\n \tssn = event->ssn;\n \tin  = &ulpq->asoc->ssnmap->in;\n \n-\tevent_list = (struct sk_buff_head *) sctp_event2skb(event)->prev;\n+\tskb = sctp_event2skb(event);\n+\tprev = skb->list.prev;\n+\tevent_list = container_of(prev, struct sk_buff_head, list);\n \n \t/* We are holding the chunks by stream, by SSN.  */\n \tsctp_skb_for_each(pos, &ulpq->lobby, tmp) {\n@@ -825,7 +833,7 @@ static void sctp_ulpq_store_ordered(struct sctp_ulpq *ulpq,\n \n \n \t/* Insert before pos. 
*/\n-\t__skb_insert(sctp_event2skb(event), pos->prev, pos, &ulpq->lobby);\n+\t__skb_insert(sctp_event2skb(event), pos, &ulpq->lobby);\n \n }\n \ndiff --git a/net/tipc/bcast.c b/net/tipc/bcast.c\nindex 3ddaff4..e19de8c 100644\n--- a/net/tipc/bcast.c\n+++ b/net/tipc/bcast.c\n@@ -189,7 +189,7 @@ static void bclink_retransmit_pkt(u32 after, u32 to)\n \n \tbuf = bcl->first_out;\n \twhile (buf && less_eq(buf_seqno(buf), after)) {\n-\t\tbuf = buf->next;\n+\t\tbuf = buf->frag_next;\n \t}\n \ttipc_link_retransmit(bcl, buf, mod(to - after));\n }\n@@ -217,13 +217,13 @@ void tipc_bclink_acknowledge(struct tipc_node *n_ptr, u32 acked)\n \n \tcrs = bcl->first_out;\n \twhile (crs && less_eq(buf_seqno(crs), n_ptr->bclink.acked)) {\n-\t\tcrs = crs->next;\n+\t\tcrs = crs->frag_next;\n \t}\n \n \t/* Update packets that node is now acknowledging */\n \n \twhile (crs && less_eq(buf_seqno(crs), acked)) {\n-\t\tnext = crs->next;\n+\t\tnext = crs->frag_next;\n \t\tbcbuf_decr_acks(crs);\n \t\tif (bcbuf_acks(crs) == 0) {\n \t\t\tbcl->first_out = next;\n@@ -355,7 +355,7 @@ static void tipc_bclink_peek_nack(u32 dest, u32 sender_tag, u32 gap_after, u32 g\n \t\tstruct sk_buff *buf = n_ptr->bclink.deferred_head;\n \t\tu32 prev = n_ptr->bclink.gap_to;\n \n-\t\tfor (; buf; buf = buf->next) {\n+\t\tfor (; buf; buf = buf->frag_next) {\n \t\t\tu32 seqno = buf_seqno(buf);\n \n \t\t\tif (mod(seqno - prev) != 1) {\n@@ -499,7 +499,7 @@ receive:\n \t\t\ttipc_node_lock(node);\n \t\t\tbuf = deferred;\n \t\t\tmsg = buf_msg(buf);\n-\t\t\tnode->bclink.deferred_head = deferred->next;\n+\t\t\tnode->bclink.deferred_head = deferred->frag_next;\n \t\t\tgoto receive;\n \t\t}\n \t\treturn;\ndiff --git a/net/tipc/core.h b/net/tipc/core.h\nindex a881f92..180d68c 100644\n--- a/net/tipc/core.h\n+++ b/net/tipc/core.h\n@@ -343,7 +343,6 @@ static inline struct sk_buff *buf_acquire(u32 size)\n \tif (skb) {\n \t\tskb_reserve(skb, BUF_HEADROOM);\n \t\tskb_put(skb, size);\n-\t\tskb->next = NULL;\n \t}\n \treturn skb;\n }\ndiff --git a/net/tipc/eth_media.c b/net/tipc/eth_media.c\nindex fe43ef7..69e3fc2 100644\n--- a/net/tipc/eth_media.c\n+++ b/net/tipc/eth_media.c\n@@ -111,7 +111,7 @@ static int recv_msg(struct sk_buff *buf, struct net_device *dev,\n \t\t\tsize = msg_size((struct tipc_msg *)buf->data);\n \t\t\tskb_trim(buf, size);\n \t\t\tif (likely(buf->len == size)) {\n-\t\t\t\tbuf->next = NULL;\n+\t\t\t\tbuf->frag_next = NULL;\n \t\t\t\ttipc_recv_msg(buf, eb_ptr->bearer);\n \t\t\t\treturn 0;\n \t\t\t}\ndiff --git a/net/tipc/link.c b/net/tipc/link.c\nindex dd4c18b..8c4d418 100644\n--- a/net/tipc/link.c\n+++ b/net/tipc/link.c\n@@ -188,7 +188,7 @@ static void dbg_print_buf_chain(struct sk_buff *root_buf)\n \n \t\twhile (buf) {\n \t\t\tmsg_dbg(buf_msg(buf), \"In chain: \");\n-\t\t\tbuf = buf->next;\n+\t\t\tbuf = buf->frag_next;\n \t\t}\n \t}\n }\n@@ -615,7 +615,7 @@ static void link_release_outqueue(struct link *l_ptr)\n \tstruct sk_buff *next;\n \n \twhile (buf) {\n-\t\tnext = buf->next;\n+\t\tnext = buf->frag_next;\n \t\tbuf_discard(buf);\n \t\tbuf = next;\n \t}\n@@ -634,7 +634,7 @@ void tipc_link_reset_fragments(struct link *l_ptr)\n \tstruct sk_buff *next;\n \n \twhile (buf) {\n-\t\tnext = buf->next;\n+\t\tnext = buf->frag_next;\n \t\tbuf_discard(buf);\n \t\tbuf = next;\n \t}\n@@ -653,14 +653,14 @@ void tipc_link_stop(struct link *l_ptr)\n \n \tbuf = l_ptr->oldest_deferred_in;\n \twhile (buf) {\n-\t\tnext = buf->next;\n+\t\tnext = buf->frag_next;\n \t\tbuf_discard(buf);\n \t\tbuf = next;\n \t}\n \n \tbuf = l_ptr->first_out;\n \twhile (buf) 
{\n-\t\tnext = buf->next;\n+\t\tnext = buf->frag_next;\n \t\tbuf_discard(buf);\n \t\tbuf = next;\n \t}\n@@ -744,7 +744,7 @@ void tipc_link_reset(struct link *l_ptr)\n \tl_ptr->proto_msg_queue = NULL;\n \tbuf = l_ptr->oldest_deferred_in;\n \twhile (buf) {\n-\t\tstruct sk_buff *next = buf->next;\n+\t\tstruct sk_buff *next = buf->frag_next;\n \t\tbuf_discard(buf);\n \t\tbuf = next;\n \t}\n@@ -1041,9 +1041,9 @@ static void link_add_to_outqueue(struct link *l_ptr,\n \n \tmsg_set_word(msg, 2, ((ack << 16) | seqno));\n \tmsg_set_bcast_ack(msg, l_ptr->owner->bclink.last_in);\n-\tbuf->next = NULL;\n+\tbuf->frag_next = NULL;\n \tif (l_ptr->first_out) {\n-\t\tl_ptr->last_out->next = buf;\n+\t\tl_ptr->last_out->frag_next = buf;\n \t\tl_ptr->last_out = buf;\n \t} else\n \t\tl_ptr->first_out = l_ptr->last_out = buf;\n@@ -1402,7 +1402,7 @@ again:\n \tbuf_chain = buf = buf_acquire(max_pkt);\n \tif (!buf)\n \t\treturn -ENOMEM;\n-\tbuf->next = NULL;\n+\tbuf->frag_next = NULL;\n \tskb_copy_to_linear_data(buf, &fragm_hdr, INT_H_SIZE);\n \thsz = msg_hdr_sz(hdr);\n \tskb_copy_to_linear_data_offset(buf, INT_H_SIZE, hdr, hsz);\n@@ -1430,7 +1430,7 @@ again:\n \t\t\tif (copy_from_user(buf->data + fragm_crs, sect_crs, sz)) {\n error:\n \t\t\t\tfor (; buf_chain; buf_chain = buf) {\n-\t\t\t\t\tbuf = buf_chain->next;\n+\t\t\t\t\tbuf = buf_chain->frag_next;\n \t\t\t\t\tbuf_discard(buf_chain);\n \t\t\t\t}\n \t\t\t\treturn -EFAULT;\n@@ -1460,8 +1460,8 @@ error:\n \t\t\tif (!buf)\n \t\t\t\tgoto error;\n \n-\t\t\tbuf->next = NULL;\n-\t\t\tprev->next = buf;\n+\t\t\tbuf->frag_next = NULL;\n+\t\t\tprev->frag_next = buf;\n \t\t\tskb_copy_to_linear_data(buf, &fragm_hdr, INT_H_SIZE);\n \t\t\tfragm_crs = INT_H_SIZE;\n \t\t\tfragm_rest = fragm_sz;\n@@ -1486,7 +1486,7 @@ error:\n \t\t\tsender->publ.max_pkt = link_max_pkt(l_ptr);\n \t\t\ttipc_node_unlock(node);\n \t\t\tfor (; buf_chain; buf_chain = buf) {\n-\t\t\t\tbuf = buf_chain->next;\n+\t\t\t\tbuf = buf_chain->frag_next;\n \t\t\t\tbuf_discard(buf_chain);\n \t\t\t}\n \t\t\tgoto again;\n@@ -1494,7 +1494,7 @@ error:\n \t} else {\n reject:\n \t\tfor (; buf_chain; buf_chain = buf) {\n-\t\t\tbuf = buf_chain->next;\n+\t\t\tbuf = buf_chain->frag_next;\n \t\t\tbuf_discard(buf_chain);\n \t\t}\n \t\treturn tipc_port_reject_sections(sender, hdr, msg_sect, num_sect,\n@@ -1509,7 +1509,7 @@ reject:\n \t\tl_ptr->next_out = buf_chain;\n \tl_ptr->stats.sent_fragmented++;\n \twhile (buf) {\n-\t\tstruct sk_buff *next = buf->next;\n+\t\tstruct sk_buff *next = buf->frag_next;\n \t\tstruct tipc_msg *msg = buf_msg(buf);\n \n \t\tl_ptr->stats.sent_fragments++;\n@@ -1545,7 +1545,7 @@ u32 tipc_link_push_packet(struct link *l_ptr)\n \n \t\twhile (buf && less(first, r_q_head)) {\n \t\t\tfirst = mod(first + 1);\n-\t\t\tbuf = buf->next;\n+\t\t\tbuf = buf->frag_next;\n \t\t}\n \t\tl_ptr->retransm_queue_head = r_q_head = first;\n \t\tl_ptr->retransm_queue_size = r_q_size = mod(last - first);\n@@ -1603,7 +1603,7 @@ u32 tipc_link_push_packet(struct link *l_ptr)\n \t\t\t\tif (msg_user(msg) == MSG_BUNDLER)\n \t\t\t\t\tmsg_set_type(msg, CLOSED_MSG);\n \t\t\t\tmsg_dbg(msg, \">PUSH-DATA>\");\n-\t\t\t\tl_ptr->next_out = buf->next;\n+\t\t\t\tl_ptr->next_out = buf->frag_next;\n \t\t\t\treturn 0;\n \t\t\t} else {\n \t\t\t\tmsg_dbg(msg, \"|PUSH-DATA|\");\n@@ -1751,7 +1751,7 @@ void tipc_link_retransmit(struct link *l_ptr, struct sk_buff *buf,\n \t\tmsg_set_bcast_ack(msg, l_ptr->owner->bclink.last_in);\n \t\tif (tipc_bearer_send(l_ptr->b_ptr, buf, &l_ptr->media_addr)) {\n \t\t\tmsg_dbg(buf_msg(buf), \">RETR>\");\n-\t\t\tbuf 
= buf->next;\n+\t\t\tbuf = buf->frag_next;\n \t\t\tretransmits--;\n \t\t\tl_ptr->stats.retransmitted++;\n \t\t} else {\n@@ -1780,7 +1780,7 @@ static struct sk_buff *link_insert_deferred_queue(struct link *l_ptr,\n \n \tseq_no = msg_seqno(buf_msg(l_ptr->oldest_deferred_in));\n \tif (seq_no == mod(l_ptr->next_in_no)) {\n-\t\tl_ptr->newest_deferred_in->next = buf;\n+\t\tl_ptr->newest_deferred_in->frag_next = buf;\n \t\tbuf = l_ptr->oldest_deferred_in;\n \t\tl_ptr->oldest_deferred_in = NULL;\n \t\tl_ptr->deferred_inqueue_sz = 0;\n@@ -1853,7 +1853,7 @@ void tipc_recv_msg(struct sk_buff *head, struct tipc_bearer *tb_ptr)\n \t\tu32 released = 0;\n \t\tint type;\n \n-\t\thead = head->next;\n+\t\thead = head->frag_next;\n \n \t\t/* Ensure message is well-formed */\n \n@@ -1910,7 +1910,7 @@ void tipc_recv_msg(struct sk_buff *head, struct tipc_bearer *tb_ptr)\n \t\tcrs = l_ptr->first_out;\n \t\twhile ((crs != l_ptr->next_out) &&\n \t\t       less_eq(msg_seqno(buf_msg(crs)), ackd)) {\n-\t\t\tstruct sk_buff *next = crs->next;\n+\t\t\tstruct sk_buff *next = crs->frag_next;\n \n \t\t\tbuf_discard(crs);\n \t\t\tcrs = next;\n@@ -2010,7 +2010,7 @@ deliver:\n \t\tif (link_working_working(l_ptr)) {\n \t\t\t/* Re-insert in front of queue */\n \t\t\tmsg_dbg(msg,\"RECV-REINS:\");\n-\t\t\tbuf->next = head;\n+\t\t\tbuf->frag_next = head;\n \t\t\thead = buf;\n \t\t\ttipc_node_unlock(n_ptr);\n \t\t\tcontinue;\n@@ -2036,7 +2036,7 @@ u32 tipc_link_defer_pkt(struct sk_buff **head,\n \tstruct sk_buff *crs = *head;\n \tu32 seq_no = msg_seqno(buf_msg(buf));\n \n-\tbuf->next = NULL;\n+\tbuf->frag_next = NULL;\n \n \t/* Empty queue ? */\n \tif (*head == NULL) {\n@@ -2046,7 +2046,7 @@ u32 tipc_link_defer_pkt(struct sk_buff **head,\n \n \t/* Last ? */\n \tif (less(msg_seqno(buf_msg(*tail)), seq_no)) {\n-\t\t(*tail)->next = buf;\n+\t\t(*tail)->frag_next = buf;\n \t\t*tail = buf;\n \t\treturn 1;\n \t}\n@@ -2056,9 +2056,9 @@ u32 tipc_link_defer_pkt(struct sk_buff **head,\n \t\tstruct tipc_msg *msg = buf_msg(crs);\n \n \t\tif (less(seq_no, msg_seqno(msg))) {\n-\t\t\tbuf->next = crs;\n+\t\t\tbuf->frag_next = crs;\n \t\t\tif (prev)\n-\t\t\t\tprev->next = buf;\n+\t\t\t\tprev->frag_next = buf;\n \t\t\telse\n \t\t\t\t*head = buf;\n \t\t\treturn 1;\n@@ -2067,7 +2067,7 @@ u32 tipc_link_defer_pkt(struct sk_buff **head,\n \t\t\tbreak;\n \t\t}\n \t\tprev = crs;\n-\t\tcrs = crs->next;\n+\t\tcrs = crs->frag_next;\n \t}\n \twhile (crs);\n \n@@ -2471,7 +2471,7 @@ void tipc_link_changeover(struct link *l_ptr)\n \t\t\ttipc_link_tunnel(l_ptr, &tunnel_hdr, msg,\n \t\t\t\t\t msg_link_selector(msg));\n \t\t}\n-\t\tcrs = crs->next;\n+\t\tcrs = crs->frag_next;\n \t}\n }\n \n@@ -2510,7 +2510,7 @@ void tipc_link_send_duplicate(struct link *l_ptr, struct link *tunnel)\n \t\ttipc_link_send_buf(tunnel, outbuf);\n \t\tif (!tipc_link_is_up(l_ptr))\n \t\t\treturn;\n-\t\titer = iter->next;\n+\t\titer = iter->frag_next;\n \t}\n }\n \n@@ -2791,7 +2791,7 @@ int tipc_link_recv_fragment(struct sk_buff **pending, struct sk_buff **fb,\n \twhile (pbuf && ((msg_seqno(buf_msg(pbuf)) != long_msg_seq_no)\n \t\t\t|| (msg_orignode(fragm) != msg_orignode(buf_msg(pbuf))))) {\n \t\tprev = pbuf;\n-\t\tpbuf = pbuf->next;\n+\t\tpbuf = pbuf->frag_next;\n \t}\n \n \tif (!pbuf && (msg_type(fragm) == FIRST_FRAGMENT)) {\n@@ -2809,7 +2809,7 @@ int tipc_link_recv_fragment(struct sk_buff **pending, struct sk_buff **fb,\n \t\t}\n \t\tpbuf = buf_acquire(msg_size(imsg));\n \t\tif (pbuf != NULL) {\n-\t\t\tpbuf->next = *pending;\n+\t\t\tpbuf->frag_next = *pending;\n \t\t\t*pending = pbuf;\n 
\t\t\tskb_copy_to_linear_data(pbuf, imsg,\n \t\t\t\t\t\tmsg_data_sz(fragm));\n@@ -2836,9 +2836,9 @@ int tipc_link_recv_fragment(struct sk_buff **pending, struct sk_buff **fb,\n \n \t\tif (exp_frags == 0) {\n \t\t\tif (prev)\n-\t\t\t\tprev->next = pbuf->next;\n+\t\t\t\tprev->frag_next = pbuf->frag_next;\n \t\t\telse\n-\t\t\t\t*pending = pbuf->next;\n+\t\t\t\t*pending = pbuf->frag_next;\n \t\t\tmsg_reset_reroute_cnt(buf_msg(pbuf));\n \t\t\t*fb = pbuf;\n \t\t\t*m = buf_msg(pbuf);\n@@ -2873,7 +2873,7 @@ static void link_check_defragm_bufs(struct link *l_ptr)\n \twhile (buf) {\n \t\tu32 cnt = get_timer_cnt(buf);\n \n-\t\tnext = buf->next;\n+\t\tnext = buf->frag_next;\n \t\tif (cnt < 4) {\n \t\t\tincr_timer_cnt(buf);\n \t\t\tprev = buf;\n@@ -2884,9 +2884,9 @@ static void link_check_defragm_bufs(struct link *l_ptr)\n \t\t\tdbg(\"Pending long buffers:\\n\");\n \t\t\tdbg_print_buf_chain(l_ptr->defragm_buf);\n \t\t\tif (prev)\n-\t\t\t\tprev->next = buf->next;\n+\t\t\t\tprev->frag_next = buf->frag_next;\n \t\t\telse\n-\t\t\t\tl_ptr->defragm_buf = buf->next;\n+\t\t\t\tl_ptr->defragm_buf = buf->frag_next;\n \t\t\tbuf_discard(buf);\n \t\t}\n \t\tbuf = next;\n@@ -3286,7 +3286,7 @@ static void link_dump_rec_queue(struct link *l_ptr)\n \t\t\treturn;\n \t\t}\n \t\tmsg_dbg(buf_msg(crs), \"In rec queue: \\n\");\n-\t\tcrs = crs->next;\n+\t\tcrs = crs->frag_next;\n \t}\n }\n #endif\n@@ -3326,7 +3326,7 @@ static void link_print(struct link *l_ptr, struct print_buf *buf,\n \t\tif ((mod(msg_seqno(buf_msg(l_ptr->last_out)) -\n \t\t\t msg_seqno(buf_msg(l_ptr->first_out)))\n \t\t     != (l_ptr->out_queue_size - 1))\n-\t\t    || (l_ptr->last_out->next != NULL)) {\n+\t\t    || (l_ptr->last_out->frag_next != NULL)) {\n \t\t\ttipc_printf(buf, \"\\nSend queue inconsistency\\n\");\n \t\t\ttipc_printf(buf, \"first_out= %x \", l_ptr->first_out);\n \t\t\ttipc_printf(buf, \"next_out= %x \", l_ptr->next_out);\ndiff --git a/net/tipc/node.c b/net/tipc/node.c\nindex 20d98c5..e91e036 100644\n--- a/net/tipc/node.c\n+++ b/net/tipc/node.c\n@@ -395,7 +395,7 @@ static void node_lost_contact(struct tipc_node *n_ptr)\n \tn_ptr->bclink.gap_after = n_ptr->bclink.gap_to = 0;\n \twhile (n_ptr->bclink.deferred_head) {\n \t\tstruct sk_buff* buf = n_ptr->bclink.deferred_head;\n-\t\tn_ptr->bclink.deferred_head = buf->next;\n+\t\tn_ptr->bclink.deferred_head = buf->frag_next;\n \t\tbuf_discard(buf);\n \t}\n \tif (n_ptr->bclink.defragm) {\ndiff --git a/net/tipc/port.c b/net/tipc/port.c\nindex e70d27e..a5e3209 100644\n--- a/net/tipc/port.c\n+++ b/net/tipc/port.c\n@@ -805,7 +805,7 @@ static void port_dispatcher_sigh(void *dummy)\n \t\tint published;\n \t\tu32 message_type;\n \n-\t\tstruct sk_buff *next = buf->next;\n+\t\tstruct sk_buff *next = buf->frag_next;\n \t\tstruct tipc_msg *msg = buf_msg(buf);\n \t\tu32 dref = msg_destport(msg);\n \n@@ -953,10 +953,10 @@ reject:\n \n static u32 port_dispatcher(struct tipc_port *dummy, struct sk_buff *buf)\n {\n-\tbuf->next = NULL;\n+\tbuf->frag_next = NULL;\n \tspin_lock_bh(&queue_lock);\n \tif (msg_queue_head) {\n-\t\tmsg_queue_tail->next = buf;\n+\t\tmsg_queue_tail->frag_next = buf;\n \t\tmsg_queue_tail = buf;\n \t} else {\n \t\tmsg_queue_tail = msg_queue_head = buf;\ndiff --git a/net/unix/garbage.c b/net/unix/garbage.c\nindex 2a27b84..bc0289c 100644\n--- a/net/unix/garbage.c\n+++ b/net/unix/garbage.c\n@@ -152,14 +152,8 @@ void unix_notinflight(struct file *fp)\n \t}\n }\n \n-static inline struct sk_buff *sock_queue_head(struct sock *sk)\n-{\n-\treturn (struct sk_buff *) &sk->sk_receive_queue;\n-}\n-\n 
#define receive_queue_for_each_skb(sk, next, skb) \\\n-\tfor (skb = sock_queue_head(sk)->next, next = skb->next; \\\n-\t     skb != sock_queue_head(sk); skb = next, next = skb->next)\n+\tlist_for_each_entry_safe(skb, next, &(sk)->sk_receive_queue.list, list)\n \n static void scan_inflight(struct sock *x, void (*func)(struct unix_sock *),\n \t\t\t  struct sk_buff_head *hitlist)\ndiff --git a/net/xfrm/xfrm_algo.c b/net/xfrm/xfrm_algo.c\nindex 96036cf..eaddbcc 100644\n--- a/net/xfrm/xfrm_algo.c\n+++ b/net/xfrm/xfrm_algo.c\n@@ -745,7 +745,7 @@ int skb_icv_walk(const struct sk_buff *skb, struct hash_desc *desc,\n \tif (skb_shinfo(skb)->frag_list) {\n \t\tstruct sk_buff *list = skb_shinfo(skb)->frag_list;\n \n-\t\tfor (; list; list = list->next) {\n+\t\tfor (; list; list = list->frag_next) {\n \t\t\tint end;\n \n \t\t\tWARN_ON(start > offset + len);\ndiff --git a/net/xfrm/xfrm_output.c b/net/xfrm/xfrm_output.c\nindex ac25b4c..d63bf6e 100644\n--- a/net/xfrm/xfrm_output.c\n+++ b/net/xfrm/xfrm_output.c\n@@ -151,16 +151,16 @@ static int xfrm_output_gso(struct sk_buff *skb)\n \t\treturn PTR_ERR(segs);\n \n \tdo {\n-\t\tstruct sk_buff *nskb = segs->next;\n+\t\tstruct sk_buff *nskb = segs->frag_next;\n \t\tint err;\n \n-\t\tsegs->next = NULL;\n+\t\tsegs->frag_next = NULL;\n \t\terr = xfrm_output2(segs);\n \n \t\tif (unlikely(err)) {\n \t\t\twhile ((segs = nskb)) {\n-\t\t\t\tnskb = segs->next;\n-\t\t\t\tsegs->next = NULL;\n+\t\t\t\tnskb = segs->frag_next;\n+\t\t\t\tsegs->frag_next = NULL;\n \t\t\t\tkfree_skb(segs);\n \t\t\t}\n \t\t\treturn err;\n",
    "prefixes": [
        "INFO"
    ]
}