diff mbox

[RFC,1/2] netmap: infrastructure (in staging)

Message ID 20130419120651.4e646976@nehalam.linuxnetplumber.net
State RFC, archived
Delegated to: David Miller
Headers show

Commit Message

Stephen Hemminger April 19, 2013, 7:06 p.m. UTC
Netmap is a framework for packet generation and capture from user
space. It allows for efficient packet handling (up to line rate on
10Gb) with minimum system load.  For more info see:
	http://info.iet.unipi.it/~luigi/netmap/
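
For context, the user-space flow is: open /dev/netmap, bind the descriptor
to an interface with the NIOCREGIF ioctl, mmap() the shared memory region,
then use poll()/NIOC*SYNC to synchronize the rings. Below is a minimal
receive sketch; error handling is omitted and the macro/field names are
those of the netmap.h/netmap_user.h headers added by this patch, so details
may differ:

#include <fcntl.h>
#include <poll.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <net/if.h>
#include <netmap/netmap.h>	/* header paths as laid out in this patch */
#include <netmap/netmap_user.h>

static void rx_loop(const char *ifname)
{
	struct nmreq req;
	int fd = open("/dev/netmap", O_RDWR);

	memset(&req, 0, sizeof(req));
	req.nr_version = NETMAP_API;
	strncpy(req.nr_name, ifname, sizeof(req.nr_name) - 1);
	ioctl(fd, NIOCREGIF, &req);		/* switch the NIC to netmap mode */

	char *mem = mmap(NULL, req.nr_memsize, PROT_READ | PROT_WRITE,
			 MAP_SHARED, fd, 0);	/* one mmap covers rings + buffers */
	struct netmap_if *nifp = NETMAP_IF(mem, req.nr_offset);
	struct netmap_ring *ring = NETMAP_RXRING(nifp, 0);

	for (;;) {
		struct pollfd pfd = { .fd = fd, .events = POLLIN };

		poll(&pfd, 1, -1);		/* kernel updates avail/cur on return */
		while (ring->avail > 0) {
			struct netmap_slot *slot = &ring->slot[ring->cur];
			char *payload = NETMAP_BUF(ring, slot->buf_idx);

			/* process slot->len bytes at payload, no copy needed */
			(void)payload;
			ring->cur = NETMAP_RING_NEXT(ring, ring->cur);
			ring->avail--;
		}
	}
}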

This version is based on the latest version from Luigi Rizzo. It has
been modified to work with the current net-next kernel. It still
has all the BSD ugliness, and that is why it needs to spend some time
in the staging penalty box.

It builds and loads, but definitely needs more work.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>

---
Patch is against net-next

 drivers/staging/Kconfig              |    2 
 drivers/staging/Makefile             |    1 
 drivers/staging/netmap/Kconfig       |   16 
 drivers/staging/netmap/Makefile      |    2 
 drivers/staging/netmap/README        |  127 +
 drivers/staging/netmap/TODO          |   16 
 drivers/staging/netmap/netmap.c      | 2514 +++++++++++++++++++++++++++++++++++
 drivers/staging/netmap/netmap_mem2.c |  974 +++++++++++++
 include/netmap/bsd_glue.h            |  263 +++
 include/netmap/netmap_kern.h         |  474 ++++++
 include/uapi/Kbuild                  |    1 
 include/uapi/linux/if.h              |    1 
 include/uapi/netmap/Kbuild           |    3 
 include/uapi/netmap/netmap.h         |  289 ++++
 include/uapi/netmap/netmap_user.h    |   95 +
 15 files changed, 4778 insertions(+)


Comments

gregkh@linuxfoundation.org April 19, 2013, 7:45 p.m. UTC | #1
On Fri, Apr 19, 2013 at 12:06:51PM -0700, Stephen Hemminger wrote:
> Netmap is a framework for packet generation and capture from user
> space. It allows for efficient packet handling (up to line rate on
> 10Gb) with minimum system load.  For more info see:
> 	http://info.iet.unipi.it/~luigi/netmap/
> 
> This version is based on the latest version from Luigi Rizzo. It has
> been modified to work with the current net-next kernel. It still
> has all the BSD ugliness, and that is why it needs to spend some time
> in the staging penalty box.
> 
> It builds and loads, but definitely needs more work.
> 
> Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
> 
> ---
> Patch is against net-next
> 
>  drivers/staging/Kconfig              |    2 
>  drivers/staging/Makefile             |    1 
>  drivers/staging/netmap/Kconfig       |   16 
>  drivers/staging/netmap/Makefile      |    2 
>  drivers/staging/netmap/README        |  127 +
>  drivers/staging/netmap/TODO          |   16 
>  drivers/staging/netmap/netmap.c      | 2514 +++++++++++++++++++++++++++++++++++
>  drivers/staging/netmap/netmap_mem2.c |  974 +++++++++++++
>  include/netmap/bsd_glue.h            |  263 +++
>  include/netmap/netmap_kern.h         |  474 ++++++
>  include/uapi/Kbuild                  |    1 
>  include/uapi/linux/if.h              |    1 
>  include/uapi/netmap/Kbuild           |    3 
>  include/uapi/netmap/netmap.h         |  289 ++++
>  include/uapi/netmap/netmap_user.h    |   95 +
>  15 files changed, 4778 insertions(+)

I have no objection to this going into staging.  If you want it to go
through the net-next tree, feel free to add my ack:

Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

Otherwise, I'll be glad to take it through the staging-next tree.

thanks,

greg k-h
David Miller April 19, 2013, 7:58 p.m. UTC | #2
From: Greg KH <gregkh@linuxfoundation.org>
Date: Fri, 19 Apr 2013 12:45:37 -0700

> On Fri, Apr 19, 2013 at 12:06:51PM -0700, Stephen Hemminger wrote:
>> Netmap is a framework for packet generation and capture from user
>> space. It allows for efficient packet handling (up to line rate on
>> 10Gb) with minimum system load.  For more info see:
>> 	http://info.iet.unipi.it/~luigi/netmap/

So are you saying that people can't get line rate today?

Even the suricata folks are doing deep packet inspection at line
rate using AF_PACKET fanouts just fine.  That means they aren't just
grabbing packets, they are actually processing them and making
stateful decisions based upon the packet's contents.

That means that capture is cheap enough already that they have all
the compute left over that they need.

The existing mechanisms also have the huge advantage that they are
already implemented, require zero driver specific changes, and are
already starting to be deployed to end users.
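
For reference, the per-worker setup for that kind of fanout capture is only
a few lines. A rough sketch (the group id, mode and buffer size are
arbitrary, and binding to a specific device plus error handling are omitted):

#include <arpa/inet.h>
#include <sys/socket.h>
#include <linux/if_ether.h>
#include <linux/if_packet.h>

/* One such socket per worker thread/process; the kernel hashes flows
 * across all members of fanout group 42, so each worker sees a share
 * of the traffic with no driver-specific changes at all.
 */
static int open_fanout_socket(void)
{
	int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
	int fanout_arg = 42 | (PACKET_FANOUT_HASH << 16);	/* group id | mode */

	if (fd < 0)
		return -1;
	if (setsockopt(fd, SOL_PACKET, PACKET_FANOUT,
		       &fanout_arg, sizeof(fanout_arg)) < 0)
		return -1;

	char buf[2048];
	(void)recv(fd, buf, sizeof(buf), 0);	/* each recv() returns one frame */
	return fd;
}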

Given that and how incredibly gross this code is right now, the only
reaction I can have is "meh".

If you are going to propose something, please show a real and genuine
need.
Stephen Hemminger April 19, 2013, 8:16 p.m. UTC | #3
On Fri, 19 Apr 2013 15:58:59 -0400 (EDT)
David Miller <davem@davemloft.net> wrote:

> From: Greg KH <gregkh@linuxfoundation.org>
> Date: Fri, 19 Apr 2013 12:45:37 -0700
> 
> > On Fri, Apr 19, 2013 at 12:06:51PM -0700, Stephen Hemminger wrote:
> >> Netmap is a framework for packet generation and capture from user
> >> space. It allows for efficient packet handling (up to line rate on
> >> 10Gb) with minimum system load.  For more info see:
> >> 	http://info.iet.unipi.it/~luigi/netmap/
> 
> So are you saying that people can't get line rate today?
> 
> Even the suricata folks are doing deep packet inspection at line
> rate using AF_PACKET fanouts just fine.  That means they aren't just
> grabbing packets, they are actually processing them and making
> stateful decisions based upon the packet's contents.
> 
> That means that capture is cheap enough already that they have all
> the compute left over that they need.
> 
> The existing mechanisms also have the huge advantage that they are
> already implemented, require zero driver specific changes, and are
> already starting to be deployed to end users.
> 
> Given that and how incredibly gross this code is right now, the only
> reaction I can have is "meh".
> 
> If you are going to propose something, please show a real and genuine
> need.

I can not get line rate output with pktgen on existing kernels today.
I can get line rate easily with netmap.
Eric Dumazet April 19, 2013, 8:31 p.m. UTC | #4
On Fri, 2013-04-19 at 13:16 -0700, Stephen Hemminger wrote:

> I can not get line rate output with pktgen on existing kernels today.

I have no trouble saturating at line rate with pktgen and a
multiqueue NIC.


> I can get line rate easily with netmap.

Yes, but then it's about bypassing the OS and reimplementing everything
in user land: TCP/UDP/IP stack, iptables, rate limiting, sharing a NIC
among users of this NIC, adding new hot points (sending tx descriptors
without caring about need_resched or whatever).

But I agree pktgen is kind of limited (it sends UDP packets), so please do
not compare pktgen and netmap.

pktgen was something to avoid spending time in the UDP/IP stack, and it was
probably needed years ago, or if network researchers want to use a laptop
as a pktgen host.

It reminds me of the days Linux had an HTTP server in the kernel.

With 8+ CPUs, you can simply use a user-land application.

I really hope we do not use pktgen as an argument for having netmap in
the kernel.



David Miller April 19, 2013, 8:37 p.m. UTC | #5
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Fri, 19 Apr 2013 13:16:32 -0700

> I can not get line rate output with pktgen on existing kernels today.

Something is incredibly wrong if this is the case.
David Miller April 19, 2013, 8:38 p.m. UTC | #6
From: Eric Dumazet <eric.dumazet@gmail.com>
Date: Fri, 19 Apr 2013 13:31:23 -0700

> On Fri, 2013-04-19 at 13:16 -0700, Stephen Hemminger wrote:
> 
>> I can not get line rate output with pktgen on existing kernels today.
> 
> I have no trouble saturating at line rate with pktgen and a
> multiqueue NIC.

+1

> I really hope we do not use pktgen as an argument for having netmap in
> the kernel.

Me too.
Stephen Hemminger April 19, 2013, 8:49 p.m. UTC | #7
On Fri, 19 Apr 2013 16:38:31 -0400 (EDT)
David Miller <davem@davemloft.net> wrote:

> From: Eric Dumazet <eric.dumazet@gmail.com>
> Date: Fri, 19 Apr 2013 13:31:23 -0700
> 
> > On Fri, 2013-04-19 at 13:16 -0700, Stephen Hemminger wrote:
> > 
> >> I can not get line rate output with pktgen on existing kernels today.
> > 
> > I have no trouble saturating at line rate with pktgen and a
> > multiqueue NIC.
> 
> +1
> 
> > I really hope we do not use pktgen as an argument for having netmap in
> > the kernel.
> 
> Me too.

I get 7Mpps (single queue) with ixgbe and pktgen.
Easily hit 14.8 Mpps (single queue) with netmap.
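
(For reference, 14.88 Mpps is the theoretical 10GbE line rate for minimum-size
frames: each 64-byte frame occupies 64 + 8 preamble + 12 inter-frame gap = 84
bytes on the wire, so 10^10 bits/s / (84 * 8 bits) ~= 14.88 Mpps.)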

The real problem is that DPDK and netmap can do multiple packets per request
to the driver. Right now there is one PCI bus transaction per packet with the
current driver model.

But I am not convinced that netmap is the right solution either; this is purely
an RFC to get some attention on doing better at small-packet performance.
If you look at netmap right now, it has really ugly BSD wrapper code,
makes lots of assumptions, and bypasses the network stack; i.e. fugly.
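
To make the "multiple packets per request" point concrete, a batched transmit
hook could look roughly like the sketch below. This is a hypothetical
illustration only: ndo_start_xmit_batch does not exist, and today's
ndo_start_xmit takes exactly one skb per call, typically with one doorbell
write each time.

#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Hypothetical illustration, not an existing netdev_ops member: the
 * core would hand the driver a whole chain of skbs so it can post
 * several tx descriptors and do a single doorbell/PCI write per batch
 * instead of one per packet.
 */
static netdev_tx_t example_start_xmit_batch(struct sk_buff *head,
					    struct net_device *dev)
{
	struct sk_buff *skb, *next;

	for (skb = head; skb; skb = next) {
		next = skb->next;
		skb->next = NULL;
		/* fill one tx descriptor for skb, defer the doorbell */
	}
	/* single MMIO doorbell write covers the whole batch */
	return NETDEV_TX_OK;
}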
David Miller April 19, 2013, 8:53 p.m. UTC | #8
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Fri, 19 Apr 2013 13:49:25 -0700

> The real problem is that DPDK and netmap can do multiple packets per request
> to the driver.

And the kernel itself wouldn't benefit from batching support for
netdev_ops->ndo_start_xmit()?

Give me a break.

Daniel Borkmann April 20, 2013, 11:31 a.m. UTC | #9
On 04/19/2013 09:58 PM, David Miller wrote:
> From: Greg KH <gregkh@linuxfoundation.org>
> Date: Fri, 19 Apr 2013 12:45:37 -0700
>
>> On Fri, Apr 19, 2013 at 12:06:51PM -0700, Stephen Hemminger wrote:
>>> Netmap is a framework for packet generation and capture from user
>>> space. It allows for efficient packet handling (up to line rate on
>>> 10Gb) with minimum system load.  For more info see:
>>> 	http://info.iet.unipi.it/~luigi/netmap/
>
> So are you saying that people can't get line rate today?
>
> Even the suricata folks are doing deep packet inspection at line
> rate using AF_PACKET fanouts just fine.  That means they aren't just
> grabbing packets, they are actually processing them and making
> stateful decisions based upon the packet's contents.
>
> That means that capture is cheap enough already that they have all
> the compute left over that they need.
>
> The existing mechanisms also have the huge advantage that they are
> already implemented, require zero driver specific changes, and are
> already starting to be deployed to end users.

+1, and if so, then I'm actually rather in favor of further improving/optimizing
AF_PACKET. Btw., Eric had a blog post from 2012 about this topic (and
maybe TPACKET_V3 could further improve performance over TPACKET_V2 here):

   https://home.regit.org/2012/07/suricata-to-10gbps-and-beyond/
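
(For the record, a TPACKET_V3 receive ring is set up roughly as below; a
minimal sketch with arbitrary block/frame sizes, an already-created AF_PACKET
socket, and no error handling. Userspace then walks retired blocks, so it
wakes up once per block of packets rather than once per packet.)

#include <string.h>
#include <sys/mman.h>
#include <sys/socket.h>
#include <linux/if_packet.h>

/* fd is an AF_PACKET socket; returns the mmapped RX ring or MAP_FAILED. */
static void *setup_tpacket_v3(int fd, struct tpacket_req3 *req)
{
	int ver = TPACKET_V3;

	setsockopt(fd, SOL_PACKET, PACKET_VERSION, &ver, sizeof(ver));

	memset(req, 0, sizeof(*req));
	req->tp_block_size = 1 << 22;		/* 4 MB per block (example value) */
	req->tp_block_nr = 64;
	req->tp_frame_size = 2048;
	req->tp_frame_nr = (req->tp_block_size / req->tp_frame_size) *
			   req->tp_block_nr;
	req->tp_retire_blk_tov = 60;		/* retire a partial block after 60 ms */
	setsockopt(fd, SOL_PACKET, PACKET_RX_RING, req, sizeof(*req));

	return mmap(NULL, (size_t)req->tp_block_size * req->tp_block_nr,
		    PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
}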

Also, I just looked over netmap's USENIX paper from 2012, where they compare
netmap against pktgen, and while they state the version of the FreeBSD kernel
they did the evaluation on, they don't even mention the Linux kernel
version, their Linux kernel setup, etc. Not to mention a comparison
with PF_PACKET+fanout (similarly, the PF_RING project seems to avoid this
comparison and only presents perf numbers where they just count packets!).
Also, I've seen other papers published in 2012 on this topic where they
compare performance with a 2.6.2x kernel, hm, quite sad actually.
Jamal Hadi Salim April 20, 2013, 2:57 p.m. UTC | #10
On 13-04-20 07:31 AM, Daniel Borkmann wrote:

> Also, I just looked over netmap's USENIX paper from 2012, where they compare
> netmap against pktgen, and while they state the version of the FreeBSD kernel
> they did the evaluation on, they don't even mention the Linux kernel
> version, their Linux kernel setup, etc. Not to mention a comparison
> with PF_PACKET+fanout (similarly, the PF_RING project seems to avoid this
> comparison and only presents perf numbers where they just count packets!).
> Also, I've seen other papers published in 2012 on this topic where they
> compare performance with a 2.6.2x kernel, hm, quite sad actually.

I hope I can put your doubts to rest. Netmap does provide the
performance it claims. I played with it about 6-9 months back and I
was able to loop back wire-rate 10Gbps (~14.4Mpps) of 64B packets on a
_single core_, i.e. I send from machine A to B, which echoes back to
the sender via a driver hack I had in the Intel driver, and I count the
packets. I should note that this was with machines that have circa-2010
capabilities (and they were cheap too).

It is true that without some changes to the kernel (such as using multiple
queues and batching), pktgen will not be able to achieve that speed. It would
be interesting to see what you can achieve with PF_PACKET transmit.
PF_PACKET is already behind if you have to depend on fanout for receive
(you are using more processing).
Granted, this was a simple app. Unfortunately the trend towards
approaches like netmap is happening. There's some closed-source thing out
of Intel called DPDK which Intel is aggressively marketing.
If someone is showing you a 10x improvement over what Linux gives you,
then we are doing something wrong, and we shouldn't be living in some
parallel universe claiming there's nothing to see here.
1.5-3x is something I can live with because it shows there's some room
for tweaking.

From a personal perspective - I have always been a supporter of "if
something is wrong with what Linux gives you, let's improve it" (still
grinding my teeth at openvswitch).
How about we learn something from this and try to improve what we have?
I did talk to Luigi briefly (CCing him) - his biggest beef was with skbs
and how fat they are. I know Eric D. has been doing some excellent work
putting them on a low-carb diet, but there are still people showing up
and arguing for more fields. Here's a thought:
could we put something in the kernel that allows for high-performance
zero copy to user space and injecting packets + metadata (from/to
arbitrary parts of the kernel)? I do this all the time from, say,
ingress/egress via tuntap (which sucks at this).
Yes, there's a danger of allowing competing interfaces in userspace
to develop TCP stacks etc., but I think for someone who wants to take
advantage of Linux, that's a non-starter.
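
(For context, the tuntap path I mean is the usual one below; a minimal
sketch, and the reason it "sucks at this" is that every frame costs a
read()/write() syscall plus a copy:)

#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <net/if.h>
#include <linux/if_tun.h>

/* Attach to a TAP interface; read()/write() then move one full Ethernet
 * frame per syscall, with a copy each way - exactly the overhead a
 * zero-copy kernel interface would avoid.
 */
static int tap_open(const char *name)
{
	struct ifreq ifr;
	int fd = open("/dev/net/tun", O_RDWR);

	if (fd < 0)
		return -1;
	memset(&ifr, 0, sizeof(ifr));
	ifr.ifr_flags = IFF_TAP | IFF_NO_PI;	/* raw frames, no extra header */
	strncpy(ifr.ifr_name, name, IFNAMSIZ - 1);
	if (ioctl(fd, TUNSETIFF, &ifr) < 0) {
		close(fd);
		return -1;
	}
	return fd;
}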

cheers,
jamal
Oleg Arkhangelsky April 20, 2013, 3:19 p.m. UTC | #11
20.04.2013, 18:57, "Jamal Hadi Salim" <jhs@mojatatu.com>:

> approaches like netmap is happening. There's some closed-source thing out
> of Intel called DPDK which Intel is aggressively marketing.

DPDK has been open source since around the end of 2012.

See also http://dpdk.org/
Vincent JARDIN April 20, 2013, 11:20 p.m. UTC | #12
+1 

Even if DPDK (see http://dpdk.org) offers some alternatives with better performance, netmap is a usable option and a good compromise.

I think both shall coexist.

Best regards,
  Vincent

On 20 Apr 2013, at 13:31, Daniel Borkmann <dborkman@redhat.com> wrote:

> On 04/19/2013 09:58 PM, David Miller wrote:
>> From: Greg KH <gregkh@linuxfoundation.org>
>> Date: Fri, 19 Apr 2013 12:45:37 -0700
>> 
>>> On Fri, Apr 19, 2013 at 12:06:51PM -0700, Stephen Hemminger wrote:
>>>> Netmap is a framework for packet generation and capture from user
>>>> space. It allows for efficient packet handling (up to line rate on
>>>> 10Gb) with minimum system load.  For more info see:
>>>>    http://info.iet.unipi.it/~luigi/netmap/
>> 
>> So are you saying that people can't get line rate today?
>> 
>> Even the suricata folks are doing deep packet inspection at line
>> rate using AF_PACKET fanouts just fine.  That means they aren't just
>> grabbing packets, they are actually processing them and making
>> stateful decisions based upon the packet's contents.
>> 
>> That means that capture is cheap enough already that they have all
>> the compute left over that they need.
>> 
>> The existing mechanisms also have the huge advantage that they are
>> already implemented, require zero driver specific changes, and are
>> already starting to be deployed to end users.
> 
> +1, and if so, then I'm actually rather in favor of further improving/optimizing
> AF_PACKET. Btw., Eric had a blog post from 2012 about this topic (and
> maybe TPACKET_V3 could further improve performance over TPACKET_V2 here):
> 
>  https://home.regit.org/2012/07/suricata-to-10gbps-and-beyond/
> 
> Also, I just looked over netmap's USENIX paper from 2012, where they compare
> netmap against pktgen, and while they state the version of the FreeBSD kernel
> they did the evaluation on, they don't even mention the Linux kernel
> version, their Linux kernel setup, etc. Not to mention a comparison
> with PF_PACKET+fanout (similarly, the PF_RING project seems to avoid this
> comparison and only presents perf numbers where they just count packets!).
> Also, I've seen other papers published in 2012 on this topic where they
> compare performance with a 2.6.2x kernel, hm, quite sad actually.
Naoto MATSUMOTO April 23, 2013, 7:04 a.m. UTC | #13
Hi all

Sharing a memo of a 64-byte short-packet processing benchmark (RX) with
netmap on various 10GbE NICs: http://twitpic.com/clb97k

FYI: another netmap benchmark, with DPDK results:

Disruptive IP Networking with Intel DPDK on Linux
http://slidesha.re/SeVFZn

enjoy!
--
Naoto

On Fri, 19 Apr 2013 12:06:51 -0700
Stephen Hemminger <stephen@networkplumber.org> wrote:

> Netmap is a framework for packet generation and capture from user
> space. It allows for efficient packet handling (up to line rate on
> 10Gb) with minimum system load.  For more info see:
> 	http://info.iet.unipi.it/~luigi/netmap/
> 
> This version is based on the latest version from Luigi Rizzo. It has
> been modified to work with the current net-next kernel. It still
> has all the BSD ugliness, and that is why it needs to spend some time
> in the staging penalty box.
> 
> It builds and loads, but definitely needs more work.
> 
> Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
> 
> ---
> Patch is against net-next
> 
>  drivers/staging/Kconfig              |    2 
>  drivers/staging/Makefile             |    1 
>  drivers/staging/netmap/Kconfig       |   16 
>  drivers/staging/netmap/Makefile      |    2 
>  drivers/staging/netmap/README        |  127 +
>  drivers/staging/netmap/TODO          |   16 
>  drivers/staging/netmap/netmap.c      | 2514 +++++++++++++++++++++++++++++++++++
>  drivers/staging/netmap/netmap_mem2.c |  974 +++++++++++++
>  include/netmap/bsd_glue.h            |  263 +++
>  include/netmap/netmap_kern.h         |  474 ++++++
>  include/uapi/Kbuild                  |    1 
>  include/uapi/linux/if.h              |    1 
>  include/uapi/netmap/Kbuild           |    3 
>  include/uapi/netmap/netmap.h         |  289 ++++
>  include/uapi/netmap/netmap_user.h    |   95 +
>  15 files changed, 4778 insertions(+)
> 
> --- a/drivers/staging/Kconfig	2013-02-26 10:19:35.000000000 -0800
> +++ b/drivers/staging/Kconfig	2013-03-10 10:08:20.323671490 -0700
> @@ -140,4 +140,6 @@ source "drivers/staging/zcache/Kconfig"
>  
>  source "drivers/staging/goldfish/Kconfig"
>  
> +source "drivers/staging/netmap/Kconfig"
> +
>  endif # STAGING
> --- a/drivers/staging/Makefile	2013-02-26 10:19:35.000000000 -0800
> +++ b/drivers/staging/Makefile	2013-03-10 10:08:48.555305267 -0700
> @@ -62,3 +62,4 @@ obj-$(CONFIG_SB105X)		+= sb105x/
>  obj-$(CONFIG_FIREWIRE_SERIAL)	+= fwserial/
>  obj-$(CONFIG_ZCACHE)		+= zcache/
>  obj-$(CONFIG_GOLDFISH)		+= goldfish/
> +obj-$(CONFIG_NETMAP)		+= netmap/
> --- /dev/null	1970-01-01 00:00:00.000000000 +0000
> +++ b/drivers/staging/netmap/Kconfig	2013-03-10 10:08:20.323671490 -0700
> @@ -0,0 +1,16 @@
> +#
> +# Netmap - user mode packet processing framework
> +#
> +
> +config NETMAP
> +	tristate "Netmap - user mode networking"
> +	depends on NET && !IOMMU_API
> +	default n
> +	help
> +	  If you say Y here, you will get experimental support for
> +	  netmap, a framework for fast packet I/O using memory-mapped
> +	  buffers.
> +
> +	  See <http://info.iet.unipi.it/~luigi/netmap/> for more information.
> +
> +	  If unsure, say N.
> --- /dev/null	1970-01-01 00:00:00.000000000 +0000
> +++ b/drivers/staging/netmap/Makefile	2013-03-10 10:08:20.323671490 -0700
> @@ -0,0 +1,2 @@
> +EXTRA_CFLAGS := -DDEBUG
> +obj-$(CONFIG_NETMAP) += netmap.o
> --- /dev/null	1970-01-01 00:00:00.000000000 +0000
> +++ b/drivers/staging/netmap/netmap.c	2013-03-10 16:52:34.160592316 -0700
> @@ -0,0 +1,2515 @@
> +/*
> + * Copyright (C) 2011-2012 Matteo Landi, Luigi Rizzo. All rights reserved.
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions
> + * are met:
> + *   1. Redistributions of source code must retain the above copyright
> + *      notice, this list of conditions and the following disclaimer.
> + *   2. Redistributions in binary form must reproduce the above copyright
> + *      notice, this list of conditions and the following disclaimer in the
> + *    documentation and/or other materials provided with the distribution.
> + *
> + * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
> + * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
> + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
> + * ARE DISCLAIMED.  IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
> + * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
> + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
> + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
> + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
> + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
> + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
> + * SUCH DAMAGE.
> + */
> +
> +#define NM_BRIDGE
> +
> +/*
> + * This module supports memory mapped access to network devices,
> + * see netmap(4).
> + *
> + * The module uses a large, memory pool allocated by the kernel
> + * and accessible as mmapped memory by multiple userspace threads/processes.
> + * The memory pool contains packet buffers and "netmap rings",
> + * i.e. user-accessible copies of the interface's queues.
> + *
> + * Access to the network card works like this:
> + * 1. a process/thread issues one or more open() on /dev/netmap, to create
> + *    select()able file descriptor on which events are reported.
> + * 2. on each descriptor, the process issues an ioctl() to identify
> + *    the interface that should report events to the file descriptor.
> + * 3. on each descriptor, the process issues an mmap() request to
> + *    map the shared memory region within the process' address space.
> + *    The list of interesting queues is indicated by a location in
> + *    the shared memory region.
> + * 4. using the functions in the netmap(4) userspace API, a process
> + *    can look up the occupation state of a queue, access memory buffers,
> + *    and retrieve received packets or enqueue packets to transmit.
> + * 5. using some ioctl()s the process can synchronize the userspace view
> + *    of the queue with the actual status in the kernel. This includes both
> + *    receiving the notification of new packets, and transmitting new
> + *    packets on the output interface.
> + * 6. select() or poll() can be used to wait for events on individual
> + *    transmit or receive queues (or all queues for a given interface).
> + */
> +
> +#ifdef linux
> +#include <netmap/bsd_glue.h>
> +static netdev_tx_t linux_netmap_start(struct sk_buff *skb, struct net_device *dev);
> +#endif /* linux */
> +
> +#ifdef __APPLE__
> +#include "osx_glue.h"
> +#endif /* __APPLE__ */
> +
> +#ifdef __FreeBSD__
> +#include <sys/cdefs.h> /* prerequisite */
> +__FBSDID("$FreeBSD: head/sys/dev/netmap/netmap.c 241723 2012-10-19 09:41:45Z glebius $");
> +
> +#include <sys/types.h>
> +#include <sys/module.h>
> +#include <sys/errno.h>
> +#include <sys/param.h>	/* defines used in kernel.h */
> +#include <sys/jail.h>
> +#include <sys/kernel.h>	/* types used in module initialization */
> +#include <sys/conf.h>	/* cdevsw struct */
> +#include <sys/uio.h>	/* uio struct */
> +#include <sys/sockio.h>
> +#include <sys/socketvar.h>	/* struct socket */
> +#include <sys/malloc.h>
> +#include <sys/mman.h>	/* PROT_EXEC */
> +#include <sys/poll.h>
> +#include <sys/proc.h>
> +#include <vm/vm.h>	/* vtophys */
> +#include <vm/pmap.h>	/* vtophys */
> +#include <sys/socket.h> /* sockaddrs */
> +#include <machine/bus.h>
> +#include <sys/selinfo.h>
> +#include <sys/sysctl.h>
> +#include <net/if.h>
> +#include <net/bpf.h>		/* BIOCIMMEDIATE */
> +#include <net/vnet.h>
> +#include <machine/bus.h>	/* bus_dmamap_* */
> +
> +MALLOC_DEFINE(M_NETMAP, "netmap", "Network memory map");
> +#endif /* __FreeBSD__ */
> +
> +#include <netmap/netmap.h>
> +#include <netmap/netmap_kern.h>
> +
> +u_int netmap_total_buffers;
> +u_int netmap_buf_size;
> +char *netmap_buffer_base;	/* address of an invalid buffer */
> +
> +/* user-controlled variables */
> +int netmap_verbose = 1;
> +
> +static int netmap_no_timestamp; /* don't timestamp on rxsync */
> +
> +SYSCTL_NODE(_dev, OID_AUTO, netmap, CTLFLAG_RW, 0, "Netmap args");
> +SYSCTL_INT(_dev_netmap, OID_AUTO, verbose,
> +    CTLFLAG_RW, &netmap_verbose, 0, "Verbose mode");
> +SYSCTL_INT(_dev_netmap, OID_AUTO, no_timestamp,
> +    CTLFLAG_RW, &netmap_no_timestamp, 0, "no_timestamp");
> +int netmap_mitigate = 1;
> +SYSCTL_INT(_dev_netmap, OID_AUTO, mitigate, CTLFLAG_RW, &netmap_mitigate, 0, "");
> +int netmap_no_pendintr = 1;
> +SYSCTL_INT(_dev_netmap, OID_AUTO, no_pendintr,
> +    CTLFLAG_RW, &netmap_no_pendintr, 0, "Always look for new received packets.");
> +
> +int netmap_drop = 0;	/* debugging */
> +int netmap_flags = 0;	/* debug flags */
> +int netmap_fwd = 0;	/* force transparent mode */
> +int netmap_copy = 0;	/* debugging, copy content */
> +
> +SYSCTL_INT(_dev_netmap, OID_AUTO, drop, CTLFLAG_RW, &netmap_drop, 0 , "");
> +SYSCTL_INT(_dev_netmap, OID_AUTO, flags, CTLFLAG_RW, &netmap_flags, 0 , "");
> +SYSCTL_INT(_dev_netmap, OID_AUTO, fwd, CTLFLAG_RW, &netmap_fwd, 0 , "");
> +SYSCTL_INT(_dev_netmap, OID_AUTO, copy, CTLFLAG_RW, &netmap_copy, 0 , "");
> +
> +#ifdef NM_BRIDGE /* support for netmap bridge */
> +
> +/*
> + * system parameters.
> + *
> + * All switched ports have prefix NM_NAME.
> + * The switch has a max of NM_BDG_MAXPORTS ports (often stored in a bitmap,
> + * so a practical upper bound is 64).
> + * Each tx ring is read-write, whereas rx rings are readonly (XXX not done yet).
> + * The virtual interfaces use per-queue lock instead of core lock.
> + * In the tx loop, we aggregate traffic in batches to make all operations
> + * faster. The batch size is NM_BDG_BATCH
> + */
> +#define	NM_NAME			"vale"	/* prefix for the interface */
> +#define NM_BDG_MAXPORTS		16	/* up to 64 ? */
> +#define NM_BRIDGE_RINGSIZE	1024	/* in the device */
> +#define NM_BDG_HASH		1024	/* forwarding table entries */
> +#define NM_BDG_BATCH		1024	/* entries in the forwarding buffer */
> +#define	NM_BRIDGES		4	/* number of bridges */
> +int netmap_bridge = NM_BDG_BATCH; /* bridge batch size */
> +SYSCTL_INT(_dev_netmap, OID_AUTO, bridge, CTLFLAG_RW, &netmap_bridge, 0 , "");
> +
> +#ifdef linux
> +#define	ADD_BDG_REF(ifp)	(NA(ifp)->if_refcount++)
> +#define	DROP_BDG_REF(ifp)	(NA(ifp)->if_refcount-- <= 1)
> +#else /* !linux */
> +#define	ADD_BDG_REF(ifp)	(ifp)->if_refcount++
> +#define	DROP_BDG_REF(ifp)	refcount_release(&(ifp)->if_refcount)
> +#ifdef __FreeBSD__
> +#include <sys/endian.h>
> +#include <sys/refcount.h>
> +#endif /* __FreeBSD__ */
> +#define prefetch(x)	__builtin_prefetch(x)
> +#endif /* !linux */
> +
> +static void bdg_netmap_attach(struct ifnet *ifp);
> +static int bdg_netmap_reg(struct ifnet *ifp, int onoff);
> +/* per-tx-queue entry */
> +struct nm_bdg_fwd {	/* forwarding entry for a bridge */
> +	void *buf;
> +	uint64_t dst;	/* dst mask */
> +	uint32_t src;	/* src index ? */
> +	uint16_t len;	/* src len */
> +};
> +
> +struct nm_hash_ent {
> +	uint64_t	mac;	/* the top 2 bytes are the epoch */
> +	uint64_t	ports;
> +};
> +
> +/*
> + * Interfaces for a bridge are all in ports[].
> + * The array has fixed size, an empty entry does not terminate
> + * the search.
> + */
> +struct nm_bridge {
> +	struct ifnet *bdg_ports[NM_BDG_MAXPORTS];
> +	int n_ports;
> +	uint64_t act_ports;
> +	int freelist;	/* first buffer index */
> +	NM_SELINFO_T si;	/* poll/select wait queue */
> +	NM_LOCK_T bdg_lock;	/* protect the selinfo ? */
> +
> +	/* the forwarding table, MAC+ports */
> +	struct nm_hash_ent ht[NM_BDG_HASH];
> +
> +	int namelen;	/* 0 means free */
> +	char basename[IFNAMSIZ];
> +};
> +
> +struct nm_bridge nm_bridges[NM_BRIDGES];
> +
> +#define BDG_LOCK(b)	mtx_lock(&(b)->bdg_lock)
> +#define BDG_UNLOCK(b)	mtx_unlock(&(b)->bdg_lock)
> +
> +/*
> + * NA(ifp)->bdg_port	port index
> + */
> +
> +// XXX only for multiples of 64 bytes, non overlapped.
> +static inline void
> +pkt_copy(void *_src, void *_dst, int l)
> +{
> +        uint64_t *src = _src;
> +        uint64_t *dst = _dst;
> +        if (unlikely(l >= 1024)) {
> +                bcopy(src, dst, l);
> +                return;
> +        }
> +        for (; likely(l > 0); l-=64) {
> +                *dst++ = *src++;
> +                *dst++ = *src++;
> +                *dst++ = *src++;
> +                *dst++ = *src++;
> +                *dst++ = *src++;
> +                *dst++ = *src++;
> +                *dst++ = *src++;
> +                *dst++ = *src++;
> +        }
> +}
> +
> +/*
> + * locate a bridge among the existing ones.
> + * a ':' in the name terminates the bridge name. Otherwise, just NM_NAME.
> + * We assume that this is called with a name of at least NM_NAME chars.
> + */
> +static struct nm_bridge *
> +nm_find_bridge(const char *name)
> +{
> +	int i, l, namelen, e;
> +	struct nm_bridge *b = NULL;
> +
> +	namelen = strlen(NM_NAME);	/* base length */
> +	l = strlen(name);		/* actual length */
> +	for (i = namelen + 1; i < l; i++) {
> +		if (name[i] == ':') {
> +			namelen = i;
> +			break;
> +		}
> +	}
> +	if (namelen >= IFNAMSIZ)
> +		namelen = IFNAMSIZ;
> +	ND("--- prefix is '%.*s' ---", namelen, name);
> +
> +	/* use the first entry for locking */
> +	BDG_LOCK(nm_bridges); // XXX do better
> +	for (e = -1, i = 1; i < NM_BRIDGES; i++) {
> +		b = nm_bridges + i;
> +		if (b->namelen == 0)
> +			e = i;	/* record empty slot */
> +		else if (strncmp(name, b->basename, namelen) == 0) {
> +			ND("found '%.*s' at %d", namelen, name, i);
> +			break;
> +		}
> +	}
> +	if (i == NM_BRIDGES) { /* all full */
> +		if (e == -1) { /* no empty slot */
> +			b = NULL;
> +		} else {
> +			b = nm_bridges + e;
> +			strncpy(b->basename, name, namelen);
> +			b->namelen = namelen;
> +		}
> +	}
> +	BDG_UNLOCK(nm_bridges);
> +	return b;
> +}
> +#endif /* NM_BRIDGE */
> +
> +
> +/*
> + * Fetch configuration from the device, to cope with dynamic
> + * reconfigurations after loading the module.
> + */
> +static int
> +netmap_update_config(struct netmap_adapter *na)
> +{
> +	struct ifnet *ifp = na->ifp;
> +	u_int txr, txd, rxr, rxd;
> +
> +	txr = txd = rxr = rxd = 0;
> +	if (na->nm_config) {
> +		na->nm_config(ifp, &txr, &txd, &rxr, &rxd);
> +	} else {
> +		/* take whatever we had at init time */
> +		txr = na->num_tx_rings;
> +		txd = na->num_tx_desc;
> +		rxr = na->num_rx_rings;
> +		rxd = na->num_rx_desc;
> +	}
> +
> +	if (na->num_tx_rings == txr && na->num_tx_desc == txd &&
> +	    na->num_rx_rings == rxr && na->num_rx_desc == rxd)
> +		return 0; /* nothing changed */
> +	if (netmap_verbose || na->refcount > 0) {
> +		D("stored config %s: txring %d x %d, rxring %d x %d",
> +			ifp->if_xname,
> +			na->num_tx_rings, na->num_tx_desc,
> +			na->num_rx_rings, na->num_rx_desc);
> +		D("new config %s: txring %d x %d, rxring %d x %d",
> +			ifp->if_xname, txr, txd, rxr, rxd);
> +	}
> +	if (na->refcount == 0) {
> +		D("configuration changed (but fine)");
> +		na->num_tx_rings = txr;
> +		na->num_tx_desc = txd;
> +		na->num_rx_rings = rxr;
> +		na->num_rx_desc = rxd;
> +		return 0;
> +	}
> +	D("configuration changed while active, this is bad...");
> +	return 1;
> +}
> +
> +/*------------- memory allocator -----------------*/
> +#ifdef NETMAP_MEM2
> +#include "netmap_mem2.c"
> +#else /* !NETMAP_MEM2 */
> +#include "netmap_mem1.c"
> +#endif /* !NETMAP_MEM2 */
> +/*------------ end of memory allocator ----------*/
> +
> +
> +/* Structure associated to each thread which registered an interface.
> + *
> + * The first 4 fields of this structure are written by NIOCREGIF and
> + * read by poll() and NIOC?XSYNC.
> + * There is low contention among writers (actually, a correct user program
> + * should have no contention among writers) and among writers and readers,
> + * so we use a single global lock to protect the structure initialization.
> + * Since initialization involves the allocation of memory, we reuse the memory
> + * allocator lock.
> + * Read access to the structure is lock free. Readers must check that
> + * np_nifp is not NULL before using the other fields.
> + * If np_nifp is NULL initialization has not been performed, so they should
> + * return an error to userlevel.
> + *
> + * The ref_done field is used to regulate access to the refcount in the
> + * memory allocator. The refcount must be incremented at most once for
> + * each open("/dev/netmap"). The increment is performed by the first
> + * function that calls netmap_get_memory() (currently called by
> + * mmap(), NIOCGINFO and NIOCREGIF).
> + * If the refcount is incremented, it is then decremented when the
> + * private structure is destroyed.
> + */
> +struct netmap_priv_d {
> +	struct netmap_if * volatile np_nifp;	/* netmap interface descriptor. */
> +
> +	struct ifnet	*np_ifp;	/* device for which we hold a reference */
> +	int		np_ringid;	/* from the ioctl */
> +	u_int		np_qfirst, np_qlast;	/* range of rings to scan */
> +	uint16_t	np_txpoll;
> +
> +	unsigned long	ref_done;	/* use with NMA_LOCK held */
> +};
> +
> +
> +static int
> +netmap_get_memory(struct netmap_priv_d* p)
> +{
> +	int error = 0;
> +	NMA_LOCK();
> +	if (!p->ref_done) {
> +		error = netmap_memory_finalize();
> +		if (!error)
> +			p->ref_done = 1;
> +	}
> +	NMA_UNLOCK();
> +	return error;
> +}
> +
> +/*
> + * File descriptor's private data destructor.
> + *
> + * Call nm_register(ifp,0) to stop netmap mode on the interface and
> + * revert to normal operation. We expect that np_ifp has not gone.
> + */
> +/* call with NMA_LOCK held */
> +static void
> +netmap_dtor_locked(void *data)
> +{
> +	struct netmap_priv_d *priv = data;
> +	struct ifnet *ifp = priv->np_ifp;
> +	struct netmap_adapter *na = NA(ifp);
> +	struct netmap_if *nifp = priv->np_nifp;
> +
> +	na->refcount--;
> +	if (na->refcount <= 0) {	/* last instance */
> +		u_int i, j, lim;
> +
> +		if (netmap_verbose)
> +			D("deleting last instance for %s", ifp->if_xname);
> +		/*
> +		 * there is a race here with *_netmap_task() and
> +		 * netmap_poll(), which don't run under NETMAP_REG_LOCK.
> +		 * na->refcount == 0 && na->ifp->if_capenable & IFCAP_NETMAP
> +		 * (aka NETMAP_DELETING(na)) are a unique marker that the
> +		 * device is dying.
> +		 * Before destroying stuff we sleep a bit, and then complete
> +		 * the job. NIOCREG should realize the condition and
> +		 * loop until they can continue; the other routines
> +		 * should check the condition at entry and quit if
> +		 * they cannot run.
> +		 */
> +		na->nm_lock(ifp, NETMAP_REG_UNLOCK, 0);
> +		tsleep(na, 0, "NIOCUNREG", 4);
> +		na->nm_lock(ifp, NETMAP_REG_LOCK, 0);
> +		na->nm_register(ifp, 0); /* off, clear IFCAP_NETMAP */
> +		/* Wake up any sleeping threads. netmap_poll will
> +		 * then return POLLERR
> +		 */
> +		for (i = 0; i < na->num_tx_rings + 1; i++)
> +			selwakeuppri(&na->tx_rings[i].si, PI_NET);
> +		for (i = 0; i < na->num_rx_rings + 1; i++)
> +			selwakeuppri(&na->rx_rings[i].si, PI_NET);
> +		selwakeuppri(&na->tx_si, PI_NET);
> +		selwakeuppri(&na->rx_si, PI_NET);
> +		/* release all buffers */
> +		for (i = 0; i < na->num_tx_rings + 1; i++) {
> +			struct netmap_ring *ring = na->tx_rings[i].ring;
> +			lim = na->tx_rings[i].nkr_num_slots;
> +			for (j = 0; j < lim; j++)
> +				netmap_free_buf(nifp, ring->slot[j].buf_idx);
> +			/* knlist_destroy(&na->tx_rings[i].si.si_note); */
> +			mtx_destroy(&na->tx_rings[i].q_lock);
> +		}
> +		for (i = 0; i < na->num_rx_rings + 1; i++) {
> +			struct netmap_ring *ring = na->rx_rings[i].ring;
> +			lim = na->rx_rings[i].nkr_num_slots;
> +			for (j = 0; j < lim; j++)
> +				netmap_free_buf(nifp, ring->slot[j].buf_idx);
> +			/* knlist_destroy(&na->rx_rings[i].si.si_note); */
> +			mtx_destroy(&na->rx_rings[i].q_lock);
> +		}
> +		/* XXX kqueue(9) needed; these will mirror knlist_init. */
> +		/* knlist_destroy(&na->tx_si.si_note); */
> +		/* knlist_destroy(&na->rx_si.si_note); */
> +		netmap_free_rings(na);
> +		wakeup(na);
> +	}
> +	netmap_if_free(nifp);
> +}
> +
> +static void
> +nm_if_rele(struct ifnet *ifp)
> +{
> +#ifndef NM_BRIDGE
> +	if_rele(ifp);
> +#else /* NM_BRIDGE */
> +	int i, full;
> +	struct nm_bridge *b;
> +
> +	if (strncmp(ifp->if_xname, NM_NAME, sizeof(NM_NAME) - 1)) {
> +		if_rele(ifp);
> +		return;
> +	}
> +	if (!DROP_BDG_REF(ifp))
> +		return;
> +	b = ifp->if_bridge;
> +	BDG_LOCK(nm_bridges);
> +	BDG_LOCK(b);
> +	ND("want to disconnect %s from the bridge", ifp->if_xname);
> +	full = 0;
> +	for (i = 0; i < NM_BDG_MAXPORTS; i++) {
> +		if (b->bdg_ports[i] == ifp) {
> +			b->bdg_ports[i] = NULL;
> +			bzero(ifp, sizeof(*ifp));
> +			free(ifp, M_DEVBUF);
> +			break;
> +		}
> +		else if (b->bdg_ports[i] != NULL)
> +			full = 1;
> +	}
> +	BDG_UNLOCK(b);
> +	if (full == 0) {
> +		ND("freeing bridge %d", b - nm_bridges);
> +		b->namelen = 0;
> +	}
> +	BDG_UNLOCK(nm_bridges);
> +	if (i == NM_BDG_MAXPORTS)
> +		D("ouch, cannot find ifp to remove");
> +#endif /* NM_BRIDGE */
> +}
> +
> +static void
> +netmap_dtor(void *data)
> +{
> +	struct netmap_priv_d *priv = data;
> +	struct ifnet *ifp = priv->np_ifp;
> +	struct netmap_adapter *na;
> +
> +	NMA_LOCK();
> +	if (ifp) {
> +		na = NA(ifp);
> +		na->nm_lock(ifp, NETMAP_REG_LOCK, 0);
> +		netmap_dtor_locked(data);
> +		na->nm_lock(ifp, NETMAP_REG_UNLOCK, 0);
> +
> +		nm_if_rele(ifp);
> +	}
> +	if (priv->ref_done) {
> +		netmap_memory_deref();
> +	}
> +	NMA_UNLOCK();
> +	bzero(priv, sizeof(*priv));	/* XXX for safety */
> +	free(priv, M_DEVBUF);
> +}
> +
> +#ifdef __FreeBSD__
> +#include <vm/vm.h>
> +#include <vm/vm_param.h>
> +#include <vm/vm_object.h>
> +#include <vm/vm_page.h>
> +#include <vm/vm_pager.h>
> +#include <vm/uma.h>
> +
> +static struct cdev_pager_ops saved_cdev_pager_ops;
> +
> +static int
> +netmap_dev_pager_ctor(void *handle, vm_ooffset_t size, vm_prot_t prot,
> +    vm_ooffset_t foff, struct ucred *cred, u_short *color)
> +{
> +	if (netmap_verbose)
> +		D("first mmap for %p", handle);
> +	return saved_cdev_pager_ops.cdev_pg_ctor(handle,
> +			size, prot, foff, cred, color);
> +}
> +
> +static void
> +netmap_dev_pager_dtor(void *handle)
> +{
> +	saved_cdev_pager_ops.cdev_pg_dtor(handle);
> +	ND("ready to release memory for %p", handle);
> +}
> +
> +
> +static struct cdev_pager_ops netmap_cdev_pager_ops = {
> +        .cdev_pg_ctor = netmap_dev_pager_ctor,
> +        .cdev_pg_dtor = netmap_dev_pager_dtor,
> +        .cdev_pg_fault = NULL,
> +};
> +
> +static int
> +netmap_mmap_single(struct cdev *cdev, vm_ooffset_t *foff,
> +	vm_size_t objsize,  vm_object_t *objp, int prot)
> +{
> +	vm_object_t obj;
> +
> +	ND("cdev %p foff %jd size %jd objp %p prot %d", cdev,
> +	    (intmax_t )*foff, (intmax_t )objsize, objp, prot);
> +	obj = vm_pager_allocate(OBJT_DEVICE, cdev, objsize, prot, *foff,
> +            curthread->td_ucred);
> +	ND("returns obj %p", obj);
> +	if (obj == NULL)
> +		return EINVAL;
> +	if (saved_cdev_pager_ops.cdev_pg_fault == NULL) {
> +		ND("initialize cdev_pager_ops");
> +		saved_cdev_pager_ops = *(obj->un_pager.devp.ops);
> +		netmap_cdev_pager_ops.cdev_pg_fault =
> +			saved_cdev_pager_ops.cdev_pg_fault;
> +	};
> +	obj->un_pager.devp.ops = &netmap_cdev_pager_ops;
> +	*objp = obj;
> +	return 0;
> +}
> +#endif /* __FreeBSD__ */
> +
> +
> +/*
> + * mmap(2) support for the "netmap" device.
> + *
> + * Expose all the memory previously allocated by our custom memory
> + * allocator: this way the user has only to issue a single mmap(2), and
> + * can work on all the data structures flawlessly.
> + *
> + * Return 0 on success, -1 otherwise.
> + */
> +
> +#ifdef __FreeBSD__
> +static int
> +netmap_mmap(__unused struct cdev *dev,
> +#if __FreeBSD_version < 900000
> +		vm_offset_t offset, vm_paddr_t *paddr, int nprot
> +#else
> +		vm_ooffset_t offset, vm_paddr_t *paddr, int nprot,
> +		__unused vm_memattr_t *memattr
> +#endif
> +	)
> +{
> +	int error = 0;
> +	struct netmap_priv_d *priv;
> +
> +	if (nprot & PROT_EXEC)
> +		return (-1);	// XXX -1 or EINVAL ?
> +
> +	error = devfs_get_cdevpriv((void **)&priv);
> +	if (error == EBADF) {	/* called on fault, memory is initialized */
> +		ND(5, "handling fault at ofs 0x%x", offset);
> +		error = 0;
> +	} else if (error == 0)	/* make sure memory is set */
> +		error = netmap_get_memory(priv);
> +	if (error)
> +		return (error);
> +
> +	ND("request for offset 0x%x", (uint32_t)offset);
> +	*paddr = netmap_ofstophys(offset);
> +
> +	return (*paddr ? 0 : ENOMEM);
> +}
> +
> +static int
> +netmap_close(struct cdev *dev, int fflag, int devtype, struct thread *td)
> +{
> +	if (netmap_verbose)
> +		D("dev %p fflag 0x%x devtype %d td %p",
> +			dev, fflag, devtype, td);
> +	return 0;
> +}
> +
> +static int
> +netmap_open(struct cdev *dev, int oflags, int devtype, struct thread *td)
> +{
> +	struct netmap_priv_d *priv;
> +	int error;
> +
> +	priv = malloc(sizeof(struct netmap_priv_d), M_DEVBUF,
> +			      M_NOWAIT | M_ZERO);
> +	if (priv == NULL)
> +		return ENOMEM;
> +
> +	error = devfs_set_cdevpriv(priv, netmap_dtor);
> +	if (error)
> +	        return error;
> +
> +	return 0;
> +}
> +#endif /* __FreeBSD__ */
> +
> +
> +/*
> + * Handlers for synchronization of the queues from/to the host.
> + * Netmap has two operating modes:
> + * - in the default mode, the rings connected to the host stack are
> + *   just another ring pair managed by userspace;
> + * - in transparent mode (XXX to be defined) incoming packets
> + *   (from the host or the NIC) are marked as NS_FORWARD upon
> + *   arrival, and the user application has a chance to reset the
> + *   flag for packets that should be dropped.
> + *   On the RXSYNC or poll(), packets in RX rings between
> + *   kring->nr_kcur and ring->cur with NS_FORWARD still set are moved
> + *   to the other side.
> + * The transfer NIC --> host is relatively easy, just encapsulate
> + * into mbufs and we are done. The host --> NIC side is slightly
> + * harder because there might not be room in the tx ring so it
> + * might take a while before releasing the buffer.
> + */
> +
> +/*
> + * pass a chain of buffers to the host stack as coming from 'dst'
> + */
> +static void
> +netmap_send_up(struct ifnet *dst, struct mbuf *head)
> +{
> +	struct mbuf *m;
> +
> +	/* send packets up, outside the lock */
> +	while ((m = head) != NULL) {
> +		head = head->m_nextpkt;
> +		m->m_nextpkt = NULL;
> +		if (netmap_verbose & NM_VERB_HOST)
> +			D("sending up pkt %p size %d", m, MBUF_LEN(m));
> +		NM_SEND_UP(dst, m);
> +	}
> +}
> +
> +struct mbq {
> +	struct mbuf *head;
> +	struct mbuf *tail;
> +	int count;
> +};
> +
> +/*
> + * put a copy of the buffers marked NS_FORWARD into an mbuf chain.
> + * Run from hwcur to cur - reserved
> + */
> +static void
> +netmap_grab_packets(struct netmap_kring *kring, struct mbq *q, int force)
> +{
> +	/* Take packets from hwcur to cur-reserved and pass them up.
> +	 * In case of no buffers we give up. At the end of the loop,
> +	 * the queue is drained in all cases.
> +	 * XXX handle reserved
> +	 */
> +	int k = kring->ring->cur - kring->ring->reserved;
> +	u_int n, lim = kring->nkr_num_slots - 1;
> +	struct mbuf *m, *tail = q->tail;
> +
> +	if (k < 0)
> +		k = k + kring->nkr_num_slots;
> +	for (n = kring->nr_hwcur; n != k;) {
> +		struct netmap_slot *slot = &kring->ring->slot[n];
> +
> +		n = (n == lim) ? 0 : n + 1;
> +		if ((slot->flags & NS_FORWARD) == 0 && !force)
> +			continue;
> +		if (slot->len < 14 || slot->len > NETMAP_BUF_SIZE) {
> +			D("bad pkt at %d len %d", n, slot->len);
> +			continue;
> +		}
> +		slot->flags &= ~NS_FORWARD; // XXX needed ?
> +		m = m_devget(NMB(slot), slot->len, 0, kring->na->ifp, NULL);
> +
> +		if (m == NULL)
> +			break;
> +		if (tail)
> +			tail->m_nextpkt = m;
> +		else
> +			q->head = m;
> +		tail = m;
> +		q->count++;
> +		m->m_nextpkt = NULL;
> +	}
> +	q->tail = tail;
> +}
> +
> +/*
> + * called under main lock to send packets from the host to the NIC
> + * The host ring has packets from nr_hwcur to (cur - reserved)
> + * to be sent down. We scan the tx rings, which have just been
> + * flushed so nr_hwcur == cur. Pushing packets down means
> + * increment cur and decrement avail.
> + * XXX to be verified
> + */
> +static void
> +netmap_sw_to_nic(struct netmap_adapter *na)
> +{
> +	struct netmap_kring *kring = &na->rx_rings[na->num_rx_rings];
> +	struct netmap_kring *k1 = &na->tx_rings[0];
> +	int i, howmany, src_lim, dst_lim;
> +
> +	howmany = kring->nr_hwavail;	/* XXX otherwise cur - reserved - nr_hwcur */
> +
> +	src_lim = kring->nkr_num_slots;
> +	for (i = 0; howmany > 0 && i < na->num_tx_rings; i++, k1++) {
> +		ND("%d packets left to ring %d (space %d)", howmany, i, k1->nr_hwavail);
> +		dst_lim = k1->nkr_num_slots;
> +		while (howmany > 0 && k1->ring->avail > 0) {
> +			struct netmap_slot *src, *dst, tmp;
> +			src = &kring->ring->slot[kring->nr_hwcur];
> +			dst = &k1->ring->slot[k1->ring->cur];
> +			tmp = *src;
> +			src->buf_idx = dst->buf_idx;
> +			src->flags = NS_BUF_CHANGED;
> +
> +			dst->buf_idx = tmp.buf_idx;
> +			dst->len = tmp.len;
> +			dst->flags = NS_BUF_CHANGED;
> +			ND("out len %d buf %d from %d to %d",
> +				dst->len, dst->buf_idx,
> +				kring->nr_hwcur, k1->ring->cur);
> +
> +			if (++kring->nr_hwcur >= src_lim)
> +				kring->nr_hwcur = 0;
> +			howmany--;
> +			kring->nr_hwavail--;
> +			if (++k1->ring->cur >= dst_lim)
> +				k1->ring->cur = 0;
> +			k1->ring->avail--;
> +		}
> +		kring->ring->cur = kring->nr_hwcur; // XXX
> +		k1++;
> +	}
> +}
> +
> +/*
> + * netmap_sync_to_host() passes packets up. We are called from a
> + * system call in user process context, and the only contention
> + * can be among multiple user threads erroneously calling
> + * this routine concurrently.
> + */
> +static void
> +netmap_sync_to_host(struct netmap_adapter *na)
> +{
> +	struct netmap_kring *kring = &na->tx_rings[na->num_tx_rings];
> +	struct netmap_ring *ring = kring->ring;
> +	u_int k, lim = kring->nkr_num_slots - 1;
> +	struct mbq q = { NULL, NULL };
> +
> +	k = ring->cur;
> +	if (k > lim) {
> +		netmap_ring_reinit(kring);
> +		return;
> +	}
> +	// na->nm_lock(na->ifp, NETMAP_CORE_LOCK, 0);
> +
> +	/* Take packets from hwcur to cur and pass them up.
> +	 * In case of no buffers we give up. At the end of the loop,
> +	 * the queue is drained in all cases.
> +	 */
> +	netmap_grab_packets(kring, &q, 1);
> +	kring->nr_hwcur = k;
> +	kring->nr_hwavail = ring->avail = lim;
> +	// na->nm_lock(na->ifp, NETMAP_CORE_UNLOCK, 0);
> +
> +	netmap_send_up(na->ifp, q.head);
> +}
> +
> +/*
> + * rxsync backend for packets coming from the host stack.
> + * They have been put in the queue by netmap_start() so we
> + * need to protect access to the kring using a lock.
> + *
> + * This routine also does the selrecord if called from the poll handler
> + * (we know because td != NULL).
> + *
> + * NOTE: on linux, selrecord() is defined as a macro and uses pwait
> + *     as an additional hidden argument.
> + */
> +static void
> +netmap_sync_from_host(struct netmap_adapter *na, struct thread *td, void *pwait)
> +{
> +	struct netmap_kring *kring = &na->rx_rings[na->num_rx_rings];
> +	struct netmap_ring *ring = kring->ring;
> +	u_int j, n, lim = kring->nkr_num_slots;
> +	u_int k = ring->cur, resvd = ring->reserved;
> +
> +	(void)pwait;	/* disable unused warnings */
> +	na->nm_lock(na->ifp, NETMAP_CORE_LOCK, 0);
> +	if (k >= lim) {
> +		netmap_ring_reinit(kring);
> +		return;
> +	}
> +	/* new packets are already set in nr_hwavail */
> +	/* skip past packets that userspace has released */
> +	j = kring->nr_hwcur;
> +	if (resvd > 0) {
> +		if (resvd + ring->avail >= lim + 1) {
> +			D("XXX invalid reserve/avail %d %d", resvd, ring->avail);
> +			ring->reserved = resvd = 0; // XXX panic...
> +		}
> +		k = (k >= resvd) ? k - resvd : k + lim - resvd;
> +        }
> +	if (j != k) {
> +		n = k >= j ? k - j : k + lim - j;
> +		kring->nr_hwavail -= n;
> +		kring->nr_hwcur = k;
> +	}
> +	k = ring->avail = kring->nr_hwavail - resvd;
> +	if (k == 0 && td)
> +		selrecord(td, &kring->si);
> +	if (k && (netmap_verbose & NM_VERB_HOST))
> +		D("%d pkts from stack", k);
> +	na->nm_lock(na->ifp, NETMAP_CORE_UNLOCK, 0);
> +}
> +
> +
> +/*
> + * get a refcounted reference to an interface.
> + * Return ENXIO if the interface does not exist, EINVAL if netmap
> + * is not supported by the interface.
> + * If successful, hold a reference.
> + */
> +static int
> +get_ifp(const char *name, struct ifnet **ifp)
> +{
> +#ifdef NM_BRIDGE
> +	struct ifnet *iter = NULL;
> +
> +	do {
> +		struct nm_bridge *b;
> +		int i, l, cand = -1;
> +
> +		if (strncmp(name, NM_NAME, sizeof(NM_NAME) - 1))
> +			break;
> +		b = nm_find_bridge(name);
> +		if (b == NULL) {
> +			D("no bridges available for '%s'", name);
> +			return (ENXIO);
> +		}
> +		/* XXX locking */
> +		BDG_LOCK(b);
> +		/* lookup in the local list of ports */
> +		for (i = 0; i < NM_BDG_MAXPORTS; i++) {
> +			iter = b->bdg_ports[i];
> +			if (iter == NULL) {
> +				if (cand == -1)
> +					cand = i; /* potential insert point */
> +				continue;
> +			}
> +			if (!strcmp(iter->if_xname, name)) {
> +				ADD_BDG_REF(iter);
> +				ND("found existing interface");
> +				BDG_UNLOCK(b);
> +				break;
> +			}
> +		}
> +		if (i < NM_BDG_MAXPORTS) /* already unlocked */
> +			break;
> +		if (cand == -1) {
> +			D("bridge full, cannot create new port");
> +no_port:
> +			BDG_UNLOCK(b);
> +			*ifp = NULL;
> +			return EINVAL;
> +		}
> +		ND("create new bridge port %s", name);
> +		/* space for forwarding list after the ifnet */
> +		l = sizeof(*iter) +
> +			 sizeof(struct nm_bdg_fwd)*NM_BDG_BATCH ;
> +		iter = malloc(l, M_DEVBUF, M_NOWAIT | M_ZERO);
> +		if (!iter)
> +			goto no_port;
> +		strcpy(iter->if_xname, name);
> +		bdg_netmap_attach(iter);
> +		b->bdg_ports[cand] = iter;
> +		iter->if_bridge = b;
> +		ADD_BDG_REF(iter);
> +		BDG_UNLOCK(b);
> +		ND("attaching virtual bridge %p", b);
> +	} while (0);
> +	*ifp = iter;
> +	if (! *ifp)
> +#endif /* NM_BRIDGE */
> +	*ifp = ifunit_ref(name);
> +	if (*ifp == NULL)
> +		return (ENXIO);
> +	/* can do this if the capability exists and if_pspare[0]
> +	 * points to the netmap descriptor.
> +	 */
> +	if (NETMAP_CAPABLE(*ifp))
> +		return 0;	/* valid pointer, we hold the refcount */
> +	nm_if_rele(*ifp);
> +	return EINVAL;	// not NETMAP capable
> +}
> +
> +
> +/*
> + * Error routine called when txsync/rxsync detects an error.
> + * Can't do much more than resetting cur = hwcur, avail = hwavail.
> + * Return 1 on reinit.
> + *
> + * This routine is only called by the upper half of the kernel.
> + * It only reads hwcur (which is changed only by the upper half, too)
> + * and hwavail (which may be changed by the lower half, but only on
> + * a tx ring and only to increase it, so any error will be recovered
> + * on the next call). For the above, we don't strictly need to call
> + * it under lock.
> + */
> +int
> +netmap_ring_reinit(struct netmap_kring *kring)
> +{
> +	struct netmap_ring *ring = kring->ring;
> +	u_int i, lim = kring->nkr_num_slots - 1;
> +	int errors = 0;
> +
> +	RD(10, "called for %s", kring->na->ifp->if_xname);
> +	if (ring->cur > lim)
> +		errors++;
> +	for (i = 0; i <= lim; i++) {
> +		u_int idx = ring->slot[i].buf_idx;
> +		u_int len = ring->slot[i].len;
> +		if (idx < 2 || idx >= netmap_total_buffers) {
> +			if (!errors++)
> +				D("bad buffer at slot %d idx %d len %d ", i, idx, len);
> +			ring->slot[i].buf_idx = 0;
> +			ring->slot[i].len = 0;
> +		} else if (len > NETMAP_BUF_SIZE) {
> +			ring->slot[i].len = 0;
> +			if (!errors++)
> +				D("bad len %d at slot %d idx %d",
> +					len, i, idx);
> +		}
> +	}
> +	if (errors) {
> +		int pos = kring - kring->na->tx_rings;
> +		int n = kring->na->num_tx_rings + 1;
> +
> +		RD(10, "total %d errors", errors);
> +		errors++;
> +		RD(10, "%s %s[%d] reinit, cur %d -> %d avail %d -> %d",
> +			kring->na->ifp->if_xname,
> +			pos < n ?  "TX" : "RX", pos < n ? pos : pos - n,
> +			ring->cur, kring->nr_hwcur,
> +			ring->avail, kring->nr_hwavail);
> +		ring->cur = kring->nr_hwcur;
> +		ring->avail = kring->nr_hwavail;
> +	}
> +	return (errors ? 1 : 0);
> +}
> +
> +
> +/*
> + * Set the ring ID. For devices with a single queue, a request
> + * for all rings is the same as a single ring.
> + */
> +static int
> +netmap_set_ringid(struct netmap_priv_d *priv, u_int ringid)
> +{
> +	struct ifnet *ifp = priv->np_ifp;
> +	struct netmap_adapter *na = NA(ifp);
> +	u_int i = ringid & NETMAP_RING_MASK;
> +	/* initially (np_qfirst == np_qlast) we don't want to lock */
> +	int need_lock = (priv->np_qfirst != priv->np_qlast);
> +	int lim = na->num_rx_rings;
> +
> +	if (na->num_tx_rings > lim)
> +		lim = na->num_tx_rings;
> +	if ( (ringid & NETMAP_HW_RING) && i >= lim) {
> +		D("invalid ring id %d", i);
> +		return (EINVAL);
> +	}
> +	if (need_lock)
> +		na->nm_lock(ifp, NETMAP_CORE_LOCK, 0);
> +	priv->np_ringid = ringid;
> +	if (ringid & NETMAP_SW_RING) {
> +		priv->np_qfirst = NETMAP_SW_RING;
> +		priv->np_qlast = 0;
> +	} else if (ringid & NETMAP_HW_RING) {
> +		priv->np_qfirst = i;
> +		priv->np_qlast = i + 1;
> +	} else {
> +		priv->np_qfirst = 0;
> +		priv->np_qlast = NETMAP_HW_RING ;
> +	}
> +	priv->np_txpoll = (ringid & NETMAP_NO_TX_POLL) ? 0 : 1;
> +	if (need_lock)
> +		na->nm_lock(ifp, NETMAP_CORE_UNLOCK, 0);
> +    if (netmap_verbose) {
> +	if (ringid & NETMAP_SW_RING)
> +		D("ringid %s set to SW RING", ifp->if_xname);
> +	else if (ringid & NETMAP_HW_RING)
> +		D("ringid %s set to HW RING %d", ifp->if_xname,
> +			priv->np_qfirst);
> +	else
> +		D("ringid %s set to all %d HW RINGS", ifp->if_xname, lim);
> +    }
> +	return 0;
> +}
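
If I read netmap_set_ringid() right, the nr_ringid encodings accepted from
userspace boil down to the following (a sketch using the flag names from the
uapi header; i is a hardware ring index):

	nmr.nr_ringid = 0;                      /* bind all hardware rings */
	nmr.nr_ringid = NETMAP_HW_RING | i;     /* bind only hardware ring i */
	nmr.nr_ringid = NETMAP_SW_RING;         /* bind only the host (stack) ring */
	nmr.nr_ringid |= NETMAP_NO_TX_POLL;     /* optional: skip txsync on poll() */
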
> +
> +/*
> + * ioctl(2) support for the "netmap" device.
> + *
> + * Following a list of accepted commands:
> + * - NIOCGINFO
> + * - SIOCGIFADDR	just for convenience
> + * - NIOCREGIF
> + * - NIOCUNREGIF
> + * - NIOCTXSYNC
> + * - NIOCRXSYNC
> + *
> + * Return 0 on success, errno otherwise.
> + */
> +static int
> +netmap_ioctl(struct cdev *dev, u_long cmd, caddr_t data,
> +	int fflag, struct thread *td)
> +{
> +	struct netmap_priv_d *priv = NULL;
> +	struct ifnet *ifp;
> +	struct nmreq *nmr = (struct nmreq *) data;
> +	struct netmap_adapter *na;
> +	int error;
> +	u_int i, lim;
> +	struct netmap_if *nifp;
> +
> +	(void)dev;	/* UNUSED */
> +	(void)fflag;	/* UNUSED */
> +#ifdef linux
> +#define devfs_get_cdevpriv(pp)				\
> +	({ *(struct netmap_priv_d **)pp = ((struct file *)td)->private_data; 	\
> +		(*pp ? 0 : ENOENT); })
> +
> +/* devfs_set_cdevpriv cannot fail on linux */
> +#define devfs_set_cdevpriv(p, fn)				\
> +	({ ((struct file *)td)->private_data = p; (p ? 0 : EINVAL); })
> +
> +
> +#define devfs_clear_cdevpriv()	do {				\
> +		netmap_dtor(priv); ((struct file *)td)->private_data = 0;	\
> +	} while (0)
> +#endif /* linux */
> +
> +	CURVNET_SET(TD_TO_VNET(td));
> +
> +	error = devfs_get_cdevpriv((void **)&priv);
> +	if (error) {
> +		CURVNET_RESTORE();
> +		/* XXX ENOENT should be impossible, since the priv
> +		 * is now created in the open */
> +		return (error == ENOENT ? ENXIO : error);
> +	}
> +
> +	nmr->nr_name[sizeof(nmr->nr_name) - 1] = '\0';	/* truncate name */
> +	switch (cmd) {
> +	case NIOCGINFO:		/* return capabilities etc */
> +		if (nmr->nr_version != NETMAP_API) {
> +			D("API mismatch got %d have %d",
> +				nmr->nr_version, NETMAP_API);
> +			nmr->nr_version = NETMAP_API;
> +			error = EINVAL;
> +			break;
> +		}
> +		/* update configuration */
> +		error = netmap_get_memory(priv);
> +		ND("get_memory returned %d", error);
> +		if (error)
> +			break;
> +		/* memsize is always valid */
> +		nmr->nr_memsize = nm_mem.nm_totalsize;
> +		nmr->nr_offset = 0;
> +		nmr->nr_rx_rings = nmr->nr_tx_rings = 0;
> +		nmr->nr_rx_slots = nmr->nr_tx_slots = 0;
> +		if (nmr->nr_name[0] == '\0')	/* just get memory info */
> +			break;
> +		error = get_ifp(nmr->nr_name, &ifp); /* get a refcount */
> +		if (error)
> +			break;
> +		na = NA(ifp); /* retrieve netmap_adapter */
> +		netmap_update_config(na);
> +		nmr->nr_rx_rings = na->num_rx_rings;
> +		nmr->nr_tx_rings = na->num_tx_rings;
> +		nmr->nr_rx_slots = na->num_rx_desc;
> +		nmr->nr_tx_slots = na->num_tx_desc;
> +		nm_if_rele(ifp);	/* return the refcount */
> +		break;
> +
> +	case NIOCREGIF:
> +		if (nmr->nr_version != NETMAP_API) {
> +			nmr->nr_version = NETMAP_API;
> +			error = EINVAL;
> +			break;
> +		}
> +		/* ensure allocators are ready */
> +		error = netmap_get_memory(priv);
> +		ND("get_memory returned %d", error);
> +		if (error)
> +			break;
> +
> +		/* protect access to priv from concurrent NIOCREGIF */
> +		NMA_LOCK();
> +		if (priv->np_ifp != NULL) {	/* thread already registered */
> +			error = netmap_set_ringid(priv, nmr->nr_ringid);
> +			NMA_UNLOCK();
> +			break;
> +		}
> +		/* find the interface and a reference */
> +		error = get_ifp(nmr->nr_name, &ifp); /* keep reference */
> +		if (error) {
> +			NMA_UNLOCK();
> +			break;
> +		}
> +		na = NA(ifp); /* retrieve netmap adapter */
> +
> +		for (i = 10; i > 0; i--) {
> +			na->nm_lock(ifp, NETMAP_REG_LOCK, 0);
> +			if (!NETMAP_DELETING(na))
> +				break;
> +			na->nm_lock(ifp, NETMAP_REG_UNLOCK, 0);
> +			tsleep(na, 0, "NIOCREGIF", hz/10);
> +		}
> +		if (i == 0) {
> +			D("too many NIOCREGIF attempts, give up");
> +			error = EINVAL;
> +			nm_if_rele(ifp);	/* return the refcount */
> +			NMA_UNLOCK();
> +			break;
> +		}
> +
> +		/* ring configuration may have changed, fetch from the card */
> +		netmap_update_config(na);
> +		priv->np_ifp = ifp;	/* store the reference */
> +		error = netmap_set_ringid(priv, nmr->nr_ringid);
> +		if (error)
> +			goto error;
> +		nifp = netmap_if_new(nmr->nr_name, na);
> +		if (nifp == NULL) { /* allocation failed */
> +			error = ENOMEM;
> +		} else if (ifp->if_capenable & IFCAP_NETMAP) {
> +			/* was already set */
> +		} else {
> +			/* Otherwise set the card in netmap mode
> +			 * and make it use the shared buffers.
> +			 */
> +			for (i = 0 ; i < na->num_tx_rings + 1; i++)
> +				mtx_init(&na->tx_rings[i].q_lock, "nm_txq_lock", MTX_NETWORK_LOCK, MTX_DEF);
> +			for (i = 0 ; i < na->num_rx_rings + 1; i++) {
> +				mtx_init(&na->rx_rings[i].q_lock, "nm_rxq_lock", MTX_NETWORK_LOCK, MTX_DEF);
> +			}
> +			error = na->nm_register(ifp, 1); /* mode on */
> +			if (error) {
> +				netmap_dtor_locked(priv);
> +				netmap_if_free(nifp);
> +			}
> +		}
> +
> +		if (error) {	/* reg. failed, release priv and ref */
> +error:
> +			na->nm_lock(ifp, NETMAP_REG_UNLOCK, 0);
> +			nm_if_rele(ifp);	/* return the refcount */
> +			priv->np_ifp = NULL;
> +			priv->np_nifp = NULL;
> +			NMA_UNLOCK();
> +			break;
> +		}
> +
> +		na->nm_lock(ifp, NETMAP_REG_UNLOCK, 0);
> +
> +		/* the following assignment is a commitment.
> +		 * Readers (i.e., poll and *SYNC) check for
> +		 * np_nifp != NULL without locking
> +		 */
> +		wmb(); /* make sure previous writes are visible to all CPUs */
> +		priv->np_nifp = nifp;
> +		NMA_UNLOCK();
> +
> +		/* return the offset of the netmap_if object */
> +		nmr->nr_rx_rings = na->num_rx_rings;
> +		nmr->nr_tx_rings = na->num_tx_rings;
> +		nmr->nr_rx_slots = na->num_rx_desc;
> +		nmr->nr_tx_slots = na->num_tx_desc;
> +		nmr->nr_memsize = nm_mem.nm_totalsize;
> +		nmr->nr_offset = netmap_if_offset(nifp);
> +		break;
> +
> +	case NIOCUNREGIF:
> +		// XXX we have no data here ?
> +		D("deprecated, data is %p", nmr);
> +		error = EINVAL;
> +		break;
> +
> +	case NIOCTXSYNC:
> +	case NIOCRXSYNC:
> +		nifp = priv->np_nifp;
> +
> +		if (nifp == NULL) {
> +			error = ENXIO;
> +			break;
> +		}
> +		rmb(); /* make sure following reads are not from cache */
> +
> +
> +		ifp = priv->np_ifp;	/* we have a reference */
> +
> +		if (ifp == NULL) {
> +			D("Internal error: nifp != NULL && ifp == NULL");
> +			error = ENXIO;
> +			break;
> +		}
> +
> +		na = NA(ifp); /* retrieve netmap adapter */
> +		if (priv->np_qfirst == NETMAP_SW_RING) { /* host rings */
> +			if (cmd == NIOCTXSYNC)
> +				netmap_sync_to_host(na);
> +			else
> +				netmap_sync_from_host(na, NULL, NULL);
> +			break;
> +		}
> +		/* find the last ring to scan */
> +		lim = priv->np_qlast;
> +		if (lim == NETMAP_HW_RING)
> +			lim = (cmd == NIOCTXSYNC) ?
> +			    na->num_tx_rings : na->num_rx_rings;
> +
> +		for (i = priv->np_qfirst; i < lim; i++) {
> +			if (cmd == NIOCTXSYNC) {
> +				struct netmap_kring *kring = &na->tx_rings[i];
> +				if (netmap_verbose & NM_VERB_TXSYNC)
> +					D("pre txsync ring %d cur %d hwcur %d",
> +					    i, kring->ring->cur,
> +					    kring->nr_hwcur);
> +				na->nm_txsync(ifp, i, 1 /* do lock */);
> +				if (netmap_verbose & NM_VERB_TXSYNC)
> +					D("post txsync ring %d cur %d hwcur %d",
> +					    i, kring->ring->cur,
> +					    kring->nr_hwcur);
> +			} else {
> +				na->nm_rxsync(ifp, i, 1 /* do lock */);
> +				microtime(&na->rx_rings[i].ring->ts);
> +			}
> +		}
> +
> +		break;
> +
> +#ifdef __FreeBSD__
> +	case BIOCIMMEDIATE:
> +	case BIOCGHDRCMPLT:
> +	case BIOCSHDRCMPLT:
> +	case BIOCSSEESENT:
> +		D("ignore BIOCIMMEDIATE/BIOCSHDRCMPLT/BIOCSHDRCMPLT/BIOCSSEESENT");
> +		break;
> +
> +	default:	/* allow device-specific ioctls */
> +	    {
> +		struct socket so;
> +		bzero(&so, sizeof(so));
> +		error = get_ifp(nmr->nr_name, &ifp); /* keep reference */
> +		if (error)
> +			break;
> +		so.so_vnet = ifp->if_vnet;
> +		// so->so_proto not null.
> +		error = ifioctl(&so, cmd, data, td);
> +		nm_if_rele(ifp);
> +		break;
> +	    }
> +
> +#else /* linux */
> +	default:
> +		error = EOPNOTSUPP;
> +#endif /* linux */
> +	}
> +
> +	CURVNET_RESTORE();
> +	return (error);
> +}
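
For context, the NIOCGINFO/NIOCREGIF path above is what a userspace program
runs through before it can touch the rings. A minimal sketch (includes and
error handling omitted; NETMAP_IF is assumed to be the offset-to-pointer
helper from netmap_user.h, "eth0" is just an example name):

	int fd = open("/dev/netmap", O_RDWR);
	struct nmreq nmr;

	memset(&nmr, 0, sizeof(nmr));
	nmr.nr_version = NETMAP_API;
	strncpy(nmr.nr_name, "eth0", sizeof(nmr.nr_name));
	ioctl(fd, NIOCREGIF, &nmr);             /* switch eth0 into netmap mode */

	void *mem = mmap(NULL, nmr.nr_memsize, PROT_READ | PROT_WRITE,
			 MAP_SHARED, fd, 0);    /* map the shared netmap memory */
	struct netmap_if *nifp = NETMAP_IF(mem, nmr.nr_offset);

	ioctl(fd, NIOCTXSYNC, NULL);            /* later: flush pending tx slots */
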
> +
> +
> +/*
> + * select(2) and poll(2) handlers for the "netmap" device.
> + *
> + * Can be called for one or more queues.
> + * Return the event mask corresponding to ready events.
> + * If there are no ready events, do a selrecord on either individual
> + * selfd or on the global one.
> + * Device-dependent parts (locking and sync of tx/rx rings)
> + * are done through callbacks.
> + *
> + * On linux, the 'dev' argument is really pwait (the poll table) and 'td'
> + * is a struct file *. The first one is remapped to pwait as selrecord()
> + * uses the name as a hidden argument.
> + */
> +static int
> +netmap_poll(struct cdev *dev, int events, struct thread *td)
> +{
> +	struct netmap_priv_d *priv = NULL;
> +	struct netmap_adapter *na;
> +	struct ifnet *ifp;
> +	struct netmap_kring *kring;
> +	u_int core_lock, i, check_all, want_tx, want_rx, revents = 0;
> +	u_int lim_tx, lim_rx, host_forwarded = 0;
> +	struct mbq q = { NULL, NULL, 0 };
> +	enum {NO_CL, NEED_CL, LOCKED_CL }; /* see below */
> +	void *pwait = dev;	/* linux compatibility */
> +
> +	(void)pwait;
> +
> +	if (devfs_get_cdevpriv((void **)&priv) != 0 || priv == NULL)
> +		return POLLERR;
> +
> +	if (priv->np_nifp == NULL) {
> +		D("No if registered");
> +		return POLLERR;
> +	}
> +	rmb(); /* make sure following reads are not from cache */
> +
> +	ifp = priv->np_ifp;
> +	// XXX check for deleting() ?
> +	if ( (ifp->if_capenable & IFCAP_NETMAP) == 0)
> +		return POLLERR;
> +
> +	if (netmap_verbose & 0x8000)
> +		D("device %s events 0x%x", ifp->if_xname, events);
> +	want_tx = events & (POLLOUT | POLLWRNORM);
> +	want_rx = events & (POLLIN | POLLRDNORM);
> +
> +	na = NA(ifp); /* retrieve netmap adapter */
> +
> +	lim_tx = na->num_tx_rings;
> +	lim_rx = na->num_rx_rings;
> +	/* how many queues we are scanning */
> +	if (priv->np_qfirst == NETMAP_SW_RING) {
> +		if (priv->np_txpoll || want_tx) {
> +			/* push any packets up, then we are always ready */
> +			kring = &na->tx_rings[lim_tx];
> +			netmap_sync_to_host(na);
> +			revents |= want_tx;
> +		}
> +		if (want_rx) {
> +			kring = &na->rx_rings[lim_rx];
> +			if (kring->ring->avail == 0)
> +				netmap_sync_from_host(na, td, dev);
> +			if (kring->ring->avail > 0) {
> +				revents |= want_rx;
> +			}
> +		}
> +		return (revents);
> +	}
> +
> +	/* if we are in transparent mode, check also the host rx ring */
> +	kring = &na->rx_rings[lim_rx];
> +	if ( (priv->np_qlast == NETMAP_HW_RING) // XXX check_all
> +			&& want_rx
> +			&& (netmap_fwd || kring->ring->flags & NR_FORWARD) ) {
> +		if (kring->ring->avail == 0)
> +			netmap_sync_from_host(na, td, dev);
> +		if (kring->ring->avail > 0)
> +			revents |= want_rx;
> +	}
> +
> +	/*
> +	 * check_all is set if the card has more than one queue and
> +	 * the client is polling all of them. If true, we sleep on
> +	 * the "global" selfd, otherwise we sleep on individual selfd
> +	 * (we can only sleep on one of them per direction).
> +	 * The interrupt routine in the driver should always wake on
> +	 * the individual selfd, and also on the global one if the card
> +	 * has more than one ring.
> +	 *
> +	 * If the card has only one lock, we just use that.
> +	 * If the card has separate ring locks, we just use those
> +	 * unless we are doing check_all, in which case the whole
> +	 * loop is wrapped by the global lock.
> +	 * We acquire locks only when necessary: if poll is called
> +	 * when buffers are available, we can just return without locks.
> +	 *
> +	 * rxsync() is only called if we run out of buffers on a POLLIN.
> +	 * txsync() is called if we run out of buffers on POLLOUT, or
> +	 * there are pending packets to send. The latter can be disabled
> +	 * passing NETMAP_NO_TX_POLL in the NIOCREG call.
> +	 */
> +	check_all = (priv->np_qlast == NETMAP_HW_RING) && (lim_tx > 1 || lim_rx > 1);
> +
> +	/*
> +	 * core_lock indicates what to do with the core lock.
> +	 * The core lock is used when either the card has no individual
> +	 * locks, or it has individual locks but we are checking all
> +	 * rings so we need the core lock to avoid missing wakeup events.
> +	 *
> +	 * It has three possible states:
> +	 * NO_CL	we don't need to use the core lock, e.g.
> +	 *		because we are protected by individual locks.
> +	 * NEED_CL	we need the core lock. In this case, when we
> +	 *		call the lock routine, move to LOCKED_CL
> +	 *		to remember to release the lock once done.
> +	 * LOCKED_CL	core lock is set, so we need to release it.
> +	 */
> +	core_lock = (check_all || !na->separate_locks) ? NEED_CL : NO_CL;
> +#ifdef NM_BRIDGE
> +	/* the bridge uses separate locks */
> +	if (na->nm_register == bdg_netmap_reg) {
> +		ND("not using core lock for %s", ifp->if_xname);
> +		core_lock = NO_CL;
> +	}
> +#endif /* NM_BRIDGE */
> +	if (priv->np_qlast != NETMAP_HW_RING) {
> +		lim_tx = lim_rx = priv->np_qlast;
> +	}
> +
> +	/*
> +	 * We start with a lock free round which is good if we have
> +	 * data available. If this fails, then lock and call the sync
> +	 * routines.
> +	 */
> +	for (i = priv->np_qfirst; want_rx && i < lim_rx; i++) {
> +		kring = &na->rx_rings[i];
> +		if (kring->ring->avail > 0) {
> +			revents |= want_rx;
> +			want_rx = 0;	/* also breaks the loop */
> +		}
> +	}
> +	for (i = priv->np_qfirst; want_tx && i < lim_tx; i++) {
> +		kring = &na->tx_rings[i];
> +		if (kring->ring->avail > 0) {
> +			revents |= want_tx;
> +			want_tx = 0;	/* also breaks the loop */
> +		}
> +	}
> +
> +	/*
> +	 * If we need to push packets out (priv->np_txpoll) or want_tx is
> +	 * still set, we do need to run the txsync calls (on all rings,
> +	 * to avoid stalling the tx rings).
> +	 */
> +	if (priv->np_txpoll || want_tx) {
> +flush_tx:
> +		for (i = priv->np_qfirst; i < lim_tx; i++) {
> +			kring = &na->tx_rings[i];
> +			/*
> +			 * Skip the current ring if want_tx == 0
> +			 * (we have already done a successful sync on
> +			 * a previous ring) AND kring->cur == kring->hwcur
> +			 * (there are no pending transmissions for this ring).
> +			 */
> +			if (!want_tx && kring->ring->cur == kring->nr_hwcur)
> +				continue;
> +			if (core_lock == NEED_CL) {
> +				na->nm_lock(ifp, NETMAP_CORE_LOCK, 0);
> +				core_lock = LOCKED_CL;
> +			}
> +			if (na->separate_locks)
> +				na->nm_lock(ifp, NETMAP_TX_LOCK, i);
> +			if (netmap_verbose & NM_VERB_TXSYNC)
> +				D("send %d on %s %d",
> +					kring->ring->cur,
> +					ifp->if_xname, i);
> +			if (na->nm_txsync(ifp, i, 0 /* no lock */))
> +				revents |= POLLERR;
> +
> +			/* Check avail/call selrecord only if called with POLLOUT */
> +			if (want_tx) {
> +				if (kring->ring->avail > 0) {
> +					/* stop at the first ring. We don't risk
> +					 * starvation.
> +					 */
> +					revents |= want_tx;
> +					want_tx = 0;
> +				} else if (!check_all)
> +					selrecord(td, &kring->si);
> +			}
> +			if (na->separate_locks)
> +				na->nm_lock(ifp, NETMAP_TX_UNLOCK, i);
> +		}
> +	}
> +
> +	/*
> +	 * now if want_rx is still set we need to lock and rxsync.
> +	 * Do it on all rings because otherwise we starve.
> +	 */
> +	if (want_rx) {
> +		for (i = priv->np_qfirst; i < lim_rx; i++) {
> +			kring = &na->rx_rings[i];
> +			if (core_lock == NEED_CL) {
> +				na->nm_lock(ifp, NETMAP_CORE_LOCK, 0);
> +				core_lock = LOCKED_CL;
> +			}
> +			if (na->separate_locks)
> +				na->nm_lock(ifp, NETMAP_RX_LOCK, i);
> +			if (netmap_fwd ||kring->ring->flags & NR_FORWARD) {
> +				ND(10, "forwarding some buffers up %d to %d",
> +				    kring->nr_hwcur, kring->ring->cur);
> +				netmap_grab_packets(kring, &q, netmap_fwd);
> +			}
> +
> +			if (na->nm_rxsync(ifp, i, 0 /* no lock */))
> +				revents |= POLLERR;
> +			if (netmap_no_timestamp == 0 ||
> +					kring->ring->flags & NR_TIMESTAMP) {
> +				microtime(&kring->ring->ts);
> +			}
> +
> +			if (kring->ring->avail > 0)
> +				revents |= want_rx;
> +			else if (!check_all)
> +				selrecord(td, &kring->si);
> +			if (na->separate_locks)
> +				na->nm_lock(ifp, NETMAP_RX_UNLOCK, i);
> +		}
> +	}
> +	if (check_all && revents == 0) { /* signal on the global queue */
> +		if (want_tx)
> +			selrecord(td, &na->tx_si);
> +		if (want_rx)
> +			selrecord(td, &na->rx_si);
> +	}
> +
> +	/* forward host to the netmap ring */
> +	kring = &na->rx_rings[lim_rx];
> +	if (kring->nr_hwavail > 0)
> +		ND("host rx %d has %d packets", lim_rx, kring->nr_hwavail);
> +	if ( (priv->np_qlast == NETMAP_HW_RING) // XXX check_all
> +			&& (netmap_fwd || kring->ring->flags & NR_FORWARD)
> +			 && kring->nr_hwavail > 0 && !host_forwarded) {
> +		if (core_lock == NEED_CL) {
> +			na->nm_lock(ifp, NETMAP_CORE_LOCK, 0);
> +			core_lock = LOCKED_CL;
> +		}
> +		netmap_sw_to_nic(na);
> +		host_forwarded = 1; /* prevent another pass */
> +		want_rx = 0;
> +		goto flush_tx;
> +	}
> +
> +	if (core_lock == LOCKED_CL)
> +		na->nm_lock(ifp, NETMAP_CORE_UNLOCK, 0);
> +	if (q.head)
> +		netmap_send_up(na->ifp, q.head);
> +
> +	return (revents);
> +}
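
Again for context, a typical userspace receive loop sitting on top of this
poll handler looks roughly like the following (sketch; fd/nifp as obtained in
the registration example above, NETMAP_RXRING/NETMAP_BUF assumed from
netmap_user.h, consume() is a hypothetical callback):

	struct pollfd pfd = { .fd = fd, .events = POLLIN };
	struct netmap_ring *rxr = NETMAP_RXRING(nifp, 0);

	for (;;) {
		poll(&pfd, 1, -1);              /* blocks until rxsync finds packets */
		while (rxr->avail > 0) {
			u_int i = rxr->cur;
			char *buf = NETMAP_BUF(rxr, rxr->slot[i].buf_idx);

			consume(buf, rxr->slot[i].len);
			rxr->cur = (i == rxr->num_slots - 1) ? 0 : i + 1;
			rxr->avail--;
		}
	}
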
> +
> +/*------- driver support routines ------*/
> +
> +/*
> + * default lock wrapper.
> + */
> +static void
> +netmap_lock_wrapper(struct ifnet *dev, int what, u_int queueid)
> +{
> +	struct netmap_adapter *na = NA(dev);
> +
> +	switch (what) {
> +#ifdef linux	/* some systems do not need a lock on register */
> +	case NETMAP_REG_LOCK:
> +	case NETMAP_REG_UNLOCK:
> +		break;
> +#endif /* linux */
> +
> +	case NETMAP_CORE_LOCK:
> +		mtx_lock(&na->core_lock);
> +		break;
> +
> +	case NETMAP_CORE_UNLOCK:
> +		mtx_unlock(&na->core_lock);
> +		break;
> +
> +	case NETMAP_TX_LOCK:
> +		mtx_lock(&na->tx_rings[queueid].q_lock);
> +		break;
> +
> +	case NETMAP_TX_UNLOCK:
> +		mtx_unlock(&na->tx_rings[queueid].q_lock);
> +		break;
> +
> +	case NETMAP_RX_LOCK:
> +		mtx_lock(&na->rx_rings[queueid].q_lock);
> +		break;
> +
> +	case NETMAP_RX_UNLOCK:
> +		mtx_unlock(&na->rx_rings[queueid].q_lock);
> +		break;
> +	}
> +}
> +
> +
> +/*
> + * Initialize a ``netmap_adapter`` object created by the driver on attach.
> + * We allocate a block of memory with room for a struct netmap_adapter
> + * plus two sets of N+2 struct netmap_kring (where N is the number
> + * of hardware rings):
> + * krings	0..N-1	are for the hardware queues.
> + * kring	N	is for the host stack queue
> + * kring	N+1	is only used for the selinfo for all queues.
> + * Return 0 on success, ENOMEM otherwise.
> + *
> + * By default the receive and transmit adapter ring counts are both initialized
> + * to num_queues.  na->num_tx_rings can be set for cards with different tx/rx
> + * setups.
> + */
> +int
> +netmap_attach(struct netmap_adapter *arg, int num_queues)
> +{
> +	struct netmap_adapter *na = NULL;
> +	struct ifnet *ifp = arg ? arg->ifp : NULL;
> +
> +	if (arg == NULL || ifp == NULL)
> +		goto fail;
> +	na = malloc(sizeof(*na), M_DEVBUF, M_NOWAIT | M_ZERO);
> +	if (na == NULL)
> +		goto fail;
> +	WNA(ifp) = na;
> +	*na = *arg; /* copy everything, trust the driver to not pass junk */
> +	NETMAP_SET_CAPABLE(ifp);
> +	if (na->num_tx_rings == 0)
> +		na->num_tx_rings = num_queues;
> +	na->num_rx_rings = num_queues;
> +	na->refcount = na->na_single = na->na_multi = 0;
> +	/* Core lock initialized here, others after netmap_if_new. */
> +	mtx_init(&na->core_lock, "netmap core lock", MTX_NETWORK_LOCK, MTX_DEF);
> +	if (na->nm_lock == NULL) {
> +		ND("using default locks for %s", ifp->if_xname);
> +		na->nm_lock = netmap_lock_wrapper;
> +	}
> +#ifdef linux
> +	if (ifp->netdev_ops) {
> +		ND("netdev_ops %p", ifp->netdev_ops);
> +		/* prepare a clone of the netdev ops */
> +		na->nm_ndo = *ifp->netdev_ops;
> +	}
> +	na->nm_ndo.ndo_start_xmit = linux_netmap_start;
> +#endif
> +	D("success for %s", ifp->if_xname);
> +	return 0;
> +
> +fail:
> +	D("fail, arg %p ifp %p na %p", arg, ifp, na);
> +	return (na ? EINVAL : ENOMEM);
> +}
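
By way of illustration, a converted NIC driver would call this from its
attach/probe path much like bdg_netmap_attach() does further down for the
virtual bridge (sketch; the foo_* names and the adapter fields are
placeholders for whatever the driver really has):

	struct netmap_adapter na;

	bzero(&na, sizeof(na));
	na.ifp = netdev;                         /* the driver's net_device / ifnet */
	na.separate_locks = 0;
	na.num_tx_desc = adapter->tx_ring_count; /* placeholder driver fields */
	na.num_rx_desc = adapter->rx_ring_count;
	na.nm_txsync = foo_netmap_txsync;        /* driver-supplied callbacks */
	na.nm_rxsync = foo_netmap_rxsync;
	na.nm_register = foo_netmap_reg;
	netmap_attach(&na, adapter->num_queues);
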
> +
> +
> +/*
> + * Free the allocated memory linked to the given ``netmap_adapter``
> + * object.
> + */
> +void
> +netmap_detach(struct ifnet *ifp)
> +{
> +	struct netmap_adapter *na = NA(ifp);
> +
> +	if (!na)
> +		return;
> +
> +	mtx_destroy(&na->core_lock);
> +
> +	if (na->tx_rings) { /* XXX should not happen */
> +		D("freeing leftover tx_rings");
> +		free(na->tx_rings, M_DEVBUF);
> +	}
> +	bzero(na, sizeof(*na));
> +	WNA(ifp) = NULL;
> +	free(na, M_DEVBUF);
> +}
> +
> +
> +/*
> + * Intercept packets from the network stack and pass them
> + * to netmap as incoming packets on the 'software' ring.
> + * We are not locked when called.
> + */
> +int
> +netmap_start(struct ifnet *ifp, struct mbuf *m)
> +{
> +	struct netmap_adapter *na = NA(ifp);
> +	struct netmap_kring *kring = &na->rx_rings[na->num_rx_rings];
> +	u_int i, len = MBUF_LEN(m);
> +	u_int error = EBUSY, lim = kring->nkr_num_slots - 1;
> +	struct netmap_slot *slot;
> +
> +	if (netmap_verbose & NM_VERB_HOST)
> +		D("%s packet %d len %d from the stack", ifp->if_xname,
> +			kring->nr_hwcur + kring->nr_hwavail, len);
> +	na->nm_lock(ifp, NETMAP_CORE_LOCK, 0);
> +	if (kring->nr_hwavail >= lim) {
> +		if (netmap_verbose)
> +			D("stack ring %s full\n", ifp->if_xname);
> +		goto done;	/* no space */
> +	}
> +	if (len > NETMAP_BUF_SIZE) {
> +		D("%s from_host, drop packet size %d > %d", ifp->if_xname,
> +			len, NETMAP_BUF_SIZE);
> +		goto done;	/* too long for us */
> +	}
> +
> +	/* compute the insert position */
> +	i = kring->nr_hwcur + kring->nr_hwavail;
> +	if (i > lim)
> +		i -= lim + 1;
> +	slot = &kring->ring->slot[i];
> +	m_copydata(m, 0, len, NMB(slot));
> +	slot->len = len;
> +	slot->flags = kring->nkr_slot_flags;
> +	kring->nr_hwavail++;
> +	if (netmap_verbose  & NM_VERB_HOST)
> +		D("wake up host ring %s %d", na->ifp->if_xname, na->num_rx_rings);
> +	selwakeuppri(&kring->si, PI_NET);
> +	error = 0;
> +done:
> +	na->nm_lock(ifp, NETMAP_CORE_UNLOCK, 0);
> +
> +	/* release the mbuf in either case (success or failure). As an
> +	 * alternative, put the mbuf in a free list and free the list
> +	 * only when really necessary.
> +	 */
> +	m_freem(m);
> +
> +	return (error);
> +}
> +
> +
> +/*
> + * netmap_reset() is called by the driver routines when reinitializing
> + * a ring. The driver is in charge of locking to protect the kring.
> + * If netmap mode is not set just return NULL.
> + */
> +struct netmap_slot *
> +netmap_reset(struct netmap_adapter *na, enum txrx tx, int n,
> +	u_int new_cur)
> +{
> +	struct netmap_kring *kring;
> +	int new_hwofs, lim;
> +
> +	if (na == NULL)
> +		return NULL;	/* no netmap support here */
> +	if (!(na->ifp->if_capenable & IFCAP_NETMAP))
> +		return NULL;	/* nothing to reinitialize */
> +
> +	if (tx == NR_TX) {
> +		if (n >= na->num_tx_rings)
> +			return NULL;
> +		kring = na->tx_rings + n;
> +		new_hwofs = kring->nr_hwcur - new_cur;
> +	} else {
> +		if (n >= na->num_rx_rings)
> +			return NULL;
> +		kring = na->rx_rings + n;
> +		new_hwofs = kring->nr_hwcur + kring->nr_hwavail - new_cur;
> +	}
> +	lim = kring->nkr_num_slots - 1;
> +	if (new_hwofs > lim)
> +		new_hwofs -= lim + 1;
> +
> +	/* Always set the new offset value and realign the ring. */
> +	kring->nkr_hwofs = new_hwofs;
> +	if (tx == NR_TX)
> +		kring->nr_hwavail = kring->nkr_num_slots - 1;
> +	ND(10, "new hwofs %d on %s %s[%d]",
> +			kring->nkr_hwofs, na->ifp->if_xname,
> +			tx == NR_TX ? "TX" : "RX", n);
> +
> +#if 0 // def linux
> +	/* XXX check that the mappings are correct */
> +	/* need ring_nr, adapter->pdev, direction */
> +	buffer_info->dma = dma_map_single(&pdev->dev, addr, adapter->rx_buffer_len, DMA_FROM_DEVICE);
> +	if (dma_mapping_error(&adapter->pdev->dev, buffer_info->dma)) {
> +		D("error mapping rx netmap buffer %d", i);
> +		// XXX fix error handling
> +	}
> +
> +#endif /* linux */
> +	/*
> +	 * Wakeup on the individual and global lock
> +	 * We do the wakeup here, but the ring is not yet reconfigured.
> +	 * However, we are under lock so there are no races.
> +	 */
> +	selwakeuppri(&kring->si, PI_NET);
> +	selwakeuppri(tx == NR_TX ? &na->tx_si : &na->rx_si, PI_NET);
> +	return kring->ring->slot;
> +}
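
For reference, a driver's ring-initialization path would use this roughly as
follows when rebuilding its descriptor rings (sketch; the foo_* name and
nslots are placeholders, the descriptor programming is driver specific):

	static void
	foo_netmap_init_rx_ring(struct netmap_adapter *na, int ring_nr, int nslots)
	{
		struct netmap_slot *slot = netmap_reset(na, NR_RX, ring_nr, 0);
		int i;

		if (slot == NULL)       /* NULL unless the NIC is in netmap mode */
			return;
		for (i = 0; i < nslots; i++) {
			void *addr = NMB(slot + i);   /* netmap buffer for slot i */
			/* driver specific: map addr for DMA, write it into rx descriptor i */
			(void)addr;
		}
	}
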
> +
> +
> +/*
> + * Default functions to handle rx/tx interrupts
> + * we have 4 cases:
> + * 1 ring, single lock:
> + *	lock(core); wake(i=0); unlock(core)
> + * N rings, single lock:
> + *	lock(core); wake(i); wake(N+1) unlock(core)
> + * 1 ring, separate locks: (i=0)
> + *	lock(i); wake(i); unlock(i)
> + * N rings, separate locks:
> + *	lock(i); wake(i); unlock(i); lock(core) wake(N+1) unlock(core)
> + * work_done is non-null on the RX path.
> + */
> +int
> +netmap_rx_irq(struct ifnet *ifp, int q, int *work_done)
> +{
> +	struct netmap_adapter *na;
> +	struct netmap_kring *r;
> +	NM_SELINFO_T *main_wq;
> +
> +	if (!(ifp->if_capenable & IFCAP_NETMAP))
> +		return 0;
> +	ND(5, "received %s queue %d", work_done ? "RX" : "TX" , q);
> +	na = NA(ifp);
> +	if (na->na_flags & NAF_SKIP_INTR) {
> +		ND("use regular interrupt");
> +		return 0;
> +	}
> +
> +	if (work_done) { /* RX path */
> +		if (q >= na->num_rx_rings)
> +			return 0;	// regular queue
> +		r = na->rx_rings + q;
> +		r->nr_kflags |= NKR_PENDINTR;
> +		main_wq = (na->num_rx_rings > 1) ? &na->rx_si : NULL;
> +	} else { /* tx path */
> +		if (q >= na->num_tx_rings)
> +			return 0;	// regular queue
> +		r = na->tx_rings + q;
> +		main_wq = (na->num_tx_rings > 1) ? &na->tx_si : NULL;
> +		work_done = &q; /* dummy */
> +	}
> +	if (na->separate_locks) {
> +		mtx_lock(&r->q_lock);
> +		selwakeuppri(&r->si, PI_NET);
> +		mtx_unlock(&r->q_lock);
> +		if (main_wq) {
> +			mtx_lock(&na->core_lock);
> +			selwakeuppri(main_wq, PI_NET);
> +			mtx_unlock(&na->core_lock);
> +		}
> +	} else {
> +		mtx_lock(&na->core_lock);
> +		selwakeuppri(&r->si, PI_NET);
> +		if (main_wq)
> +			selwakeuppri(main_wq, PI_NET);
> +		mtx_unlock(&na->core_lock);
> +	}
> +	*work_done = 1; /* do not fire napi again */
> +	return 1;
> +}
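
To illustrate the intended call site: a netmap-aware driver checks this at the
top of its rx interrupt (or NAPI poll) handler and skips the normal skb path
when netmap consumed the event; on the tx side it passes q with a non-NULL
dummy as above. A sketch with placeholder foo_* names:

	static int
	foo_clean_rx_irq(struct foo_adapter *adapter, int ring_nr, int budget)
	{
		int work_done = 0;

		/* returns 1 (and sets work_done) if netmap handled the ring */
		if (netmap_rx_irq(adapter->netdev, ring_nr, &work_done))
			return work_done;

		/* otherwise fall through to the regular skb receive path */
		return foo_clean_rx_ring(adapter, ring_nr, budget);
	}
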
> +
> +
> +#ifdef linux	/* linux-specific routines */
> +
> +/*
> + * Remap linux arguments into the FreeBSD call.
> + * - pwait is the poll table, passed as 'dev';
> + *   If pwait == NULL someone else already woke up before. We can report
> + *   events but they are filtered upstream.
> + *   If pwait != NULL, then pwait->key contains the list of events.
> + * - events is computed from pwait as above.
> + * - file is passed as 'td';
> + */
> +static u_int
> +linux_netmap_poll(struct file * file, struct poll_table_struct *pwait)
> +{
> +#if LINUX_VERSION_CODE < KERNEL_VERSION(3,4,0)
> +	int events = pwait ? pwait->key : POLLIN | POLLOUT;
> +#else /* in 3.4.0 field 'key' was renamed to '_key' */
> +	int events = pwait ? pwait->_key : POLLIN | POLLOUT;
> +#endif
> +	return netmap_poll((void *)pwait, events, (void *)file);
> +}
> +
> +static int
> +linux_netmap_mmap(struct file *f, struct vm_area_struct *vma)
> +{
> +	int lut_skip, i, j;
> +	int user_skip = 0;
> +	struct lut_entry *l_entry;
> +	int error = 0;
> +	unsigned long off, tomap;
> +	/*
> +	 * vma->vm_start: start of mapping user address space
> +	 * vma->vm_end: end of the mapping user address space
> +	 * vma->vm_pgoff: offset of the first page in the device
> +	 */
> +
> +	// XXX security checks
> +
> +	error = netmap_get_memory(f->private_data);
> +	ND("get_memory returned %d", error);
> +	if (error)
> +	    return -error;
> +
> +	off = vma->vm_pgoff << PAGE_SHIFT; /* offset in bytes */
> +	tomap = vma->vm_end - vma->vm_start;
> +	for (i = 0; i < NETMAP_POOLS_NR; i++) {  /* loop through obj_pools */
> +		const struct netmap_obj_pool *p = &nm_mem.pools[i];
> +		/*
> +		 * In each pool memory is allocated in clusters
> +		 * of size _clustsize, each containing clustentries
> +		 * entries. For each object k we already store the
> +		 * vtophys mapping in lut[k] so we use that, scanning
> +		 * the lut[] array in steps of clustentries,
> +		 * and we map each cluster (not individual pages,
> +		 * it would be overkill).
> +		 */
> +
> +		/*
> +		 * We interpret vm_pgoff as an offset into the whole
> +		 * netmap memory, as if all clusters were contiguous.
> +		 */
> +		for (lut_skip = 0, j = 0; j < p->_numclusters; j++, lut_skip += p->clustentries) {
> +			unsigned long paddr, mapsize;
> +			if (p->_clustsize <= off) {
> +				off -= p->_clustsize;
> +				continue;
> +			}
> +			l_entry = &p->lut[lut_skip]; /* first obj in the cluster */
> +			paddr = l_entry->paddr + off;
> +			mapsize = p->_clustsize - off;
> +			off = 0;
> +			if (mapsize > tomap)
> +				mapsize = tomap;
> +			ND("remap_pfn_range(%lx, %lx, %lx)",
> +				vma->vm_start + user_skip,
> +				paddr >> PAGE_SHIFT, mapsize);
> +			if (remap_pfn_range(vma, vma->vm_start + user_skip,
> +					paddr >> PAGE_SHIFT, mapsize,
> +					vma->vm_page_prot))
> +				return -EAGAIN; // XXX check return value
> +			user_skip += mapsize;
> +			tomap -= mapsize;
> +			if (tomap == 0)
> +				goto done;
> +		}
> +	}
> +done:
> +
> +	return 0;
> +}
> +
> +static netdev_tx_t
> +linux_netmap_start(struct sk_buff *skb, struct net_device *dev)
> +{
> +	netmap_start(dev, skb);
> +	return (NETDEV_TX_OK);
> +}
> +
> +
> +#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,37)	// XXX was 38
> +#define LIN_IOCTL_NAME	.ioctl
> +int
> +linux_netmap_ioctl(struct inode *inode, struct file *file, u_int cmd, u_long data /* arg */)
> +#else
> +#define LIN_IOCTL_NAME	.unlocked_ioctl
> +long
> +linux_netmap_ioctl(struct file *file, u_int cmd, u_long data /* arg */)
> +#endif
> +{
> +	int ret;
> +	struct nmreq nmr;
> +	bzero(&nmr, sizeof(nmr));
> +
> +	if (data && copy_from_user(&nmr, (void *)data, sizeof(nmr) ) != 0)
> +		return -EFAULT;
> +	ret = netmap_ioctl(NULL, cmd, (caddr_t)&nmr, 0, (void *)file);
> +	pr_info("netmap ioctl %u ret = %d\n", cmd, ret);
> +	if (data && copy_to_user((void*)data, &nmr, sizeof(nmr) ) != 0)
> +		return -EFAULT;
> +	return -ret;
> +}
> +
> +
> +static int
> +netmap_release(struct inode *inode, struct file *file)
> +{
> +	(void)inode;	/* UNUSED */
> +	if (file->private_data)
> +		netmap_dtor(file->private_data);
> +	return (0);
> +}
> +
> +static int
> +linux_netmap_open(struct inode *inode, struct file *file)
> +{
> +	struct netmap_priv_d *priv;
> +	(void)inode;	/* UNUSED */
> +
> +	priv = malloc(sizeof(struct netmap_priv_d), M_DEVBUF,
> +			      M_NOWAIT | M_ZERO);
> +	if (priv == NULL)
> +		return -ENOMEM;
> +
> +	file->private_data = priv;
> +
> +	return (0);
> +}
> +
> +static struct file_operations netmap_fops = {
> +    .open = linux_netmap_open,
> +    .mmap = linux_netmap_mmap,
> +    LIN_IOCTL_NAME = linux_netmap_ioctl,
> +    .poll = linux_netmap_poll,
> +    .release = netmap_release,
> +};
> +
> +static struct miscdevice netmap_cdevsw = {	/* same name as FreeBSD */
> +	MISC_DYNAMIC_MINOR,
> +	"netmap",
> +	&netmap_fops,
> +};
> +
> +static int netmap_init(void);
> +static void netmap_fini(void);
> +
> +/* Errors have negative values on linux */
> +static int linux_netmap_init(void)
> +{
> +	return -netmap_init();
> +}
> +
> +module_init(linux_netmap_init);
> +module_exit(netmap_fini);
> +/* export certain symbols to other modules */
> +EXPORT_SYMBOL(netmap_attach);		// driver attach routines
> +EXPORT_SYMBOL(netmap_detach);		// driver detach routines
> +EXPORT_SYMBOL(netmap_ring_reinit);	// ring init on error
> +EXPORT_SYMBOL(netmap_buffer_lut);
> +EXPORT_SYMBOL(netmap_total_buffers);	// index check
> +EXPORT_SYMBOL(netmap_buffer_base);
> +EXPORT_SYMBOL(netmap_reset);		// ring init routines
> +EXPORT_SYMBOL(netmap_buf_size);
> +EXPORT_SYMBOL(netmap_rx_irq);		// default irq handler
> +EXPORT_SYMBOL(netmap_no_pendintr);	// XXX mitigation - should go away
> +
> +
> +MODULE_AUTHOR("Matteo Landi, Luigi Rizzo");
> +MODULE_DESCRIPTION("The netmap packet I/O framework");
> +MODULE_LICENSE("Dual BSD/GPL"); /* the code here is all BSD. */
> +
> +#else /* __FreeBSD__ */
> +
> +static struct cdevsw netmap_cdevsw = {
> +	.d_version = D_VERSION,
> +	.d_name = "netmap",
> +	.d_open = netmap_open,
> +	.d_mmap = netmap_mmap,
> +	.d_mmap_single = netmap_mmap_single,
> +	.d_ioctl = netmap_ioctl,
> +	.d_poll = netmap_poll,
> +	.d_close = netmap_close,
> +};
> +#endif /* __FreeBSD__ */
> +
> +#ifdef NM_BRIDGE
> +/*
> + *---- support for virtual bridge -----
> + */
> +
> +/* ----- FreeBSD if_bridge hash function ------- */
> +
> +/*
> + * The following hash function is adapted from "Hash Functions" by Bob Jenkins
> + * ("Algorithm Alley", Dr. Dobbs Journal, September 1997).
> + *
> + * http://www.burtleburtle.net/bob/hash/spooky.html
> + */
> +#define mix(a, b, c)                                                    \
> +do {                                                                    \
> +        a -= b; a -= c; a ^= (c >> 13);                                 \
> +        b -= c; b -= a; b ^= (a << 8);                                  \
> +        c -= a; c -= b; c ^= (b >> 13);                                 \
> +        a -= b; a -= c; a ^= (c >> 12);                                 \
> +        b -= c; b -= a; b ^= (a << 16);                                 \
> +        c -= a; c -= b; c ^= (b >> 5);                                  \
> +        a -= b; a -= c; a ^= (c >> 3);                                  \
> +        b -= c; b -= a; b ^= (a << 10);                                 \
> +        c -= a; c -= b; c ^= (b >> 15);                                 \
> +} while (/*CONSTCOND*/0)
> +
> +static __inline uint32_t
> +nm_bridge_rthash(const uint8_t *addr)
> +{
> +        uint32_t a = 0x9e3779b9, b = 0x9e3779b9, c = 0; // hash key
> +
> +        b += addr[5] << 8;
> +        b += addr[4];
> +        a += addr[3] << 24;
> +        a += addr[2] << 16;
> +        a += addr[1] << 8;
> +        a += addr[0];
> +
> +        mix(a, b, c);
> +#define BRIDGE_RTHASH_MASK	(NM_BDG_HASH-1)
> +        return (c & BRIDGE_RTHASH_MASK);
> +}
> +
> +#undef mix
> +
> +
> +static int
> +bdg_netmap_reg(struct ifnet *ifp, int onoff)
> +{
> +	int i, err = 0;
> +	struct nm_bridge *b = ifp->if_bridge;
> +
> +	BDG_LOCK(b);
> +	if (onoff) {
> +		/* the interface must already be in the list;
> +		 * we only need to mark the port as active
> +		 */
> +		ND("should attach %s to the bridge", ifp->if_xname);
> +		for (i=0; i < NM_BDG_MAXPORTS; i++)
> +			if (b->bdg_ports[i] == ifp)
> +				break;
> +		if (i == NM_BDG_MAXPORTS) {
> +			D("no more ports available");
> +			err = EINVAL;
> +			goto done;
> +		}
> +		ND("setting %s in netmap mode", ifp->if_xname);
> +		ifp->if_capenable |= IFCAP_NETMAP;
> +		NA(ifp)->bdg_port = i;
> +		b->act_ports |= (1<<i);
> +		b->bdg_ports[i] = ifp;
> +	} else {
> +		/* should be in the list, too -- remove from the mask */
> +		ND("removing %s from netmap mode", ifp->if_xname);
> +		ifp->if_capenable &= ~IFCAP_NETMAP;
> +		i = NA(ifp)->bdg_port;
> +		b->act_ports &= ~(1<<i);
> +	}
> +done:
> +	BDG_UNLOCK(b);
> +	return err;
> +}
> +
> +
> +static int
> +nm_bdg_flush(struct nm_bdg_fwd *ft, int n, struct ifnet *ifp)
> +{
> +	int i, ifn;
> +	uint64_t all_dst, dst;
> +	uint32_t sh, dh;
> +	uint64_t mysrc = 1 << NA(ifp)->bdg_port;
> +	uint64_t smac, dmac;
> +	struct netmap_slot *slot;
> +	struct nm_bridge *b = ifp->if_bridge;
> +
> +	ND("prepare to send %d packets, act_ports 0x%x", n, b->act_ports);
> +	/* only consider valid destinations */
> +	all_dst = (b->act_ports & ~mysrc);
> +	/* first pass: hash and find destinations */
> +	for (i = 0; likely(i < n); i++) {
> +		uint8_t *buf = ft[i].buf;
> +		dmac = le64toh(*(uint64_t *)(buf)) & 0xffffffffffff;
> +		smac = le64toh(*(uint64_t *)(buf + 4));
> +		smac >>= 16;
> +		if (unlikely(netmap_verbose)) {
> +		    uint8_t *s = buf+6, *d = buf;
> +		    D("%d len %4d %02x:%02x:%02x:%02x:%02x:%02x -> %02x:%02x:%02x:%02x:%02x:%02x",
> +			i,
> +			ft[i].len,
> +			s[0], s[1], s[2], s[3], s[4], s[5],
> +			d[0], d[1], d[2], d[3], d[4], d[5]);
> +		}
> +		/*
> +		 * The hash is somewhat expensive, there might be some
> +		 * worthwhile optimizations here.
> +		 */
> +		if ((buf[6] & 1) == 0) { /* valid src */
> +		    	uint8_t *s = buf+6;
> +			sh = nm_bridge_rthash(buf+6); // XXX hash of source
> +			/* update source port forwarding entry */
> +			b->ht[sh].mac = smac;	/* XXX expire ? */
> +			b->ht[sh].ports = mysrc;
> +			if (netmap_verbose)
> +			    D("src %02x:%02x:%02x:%02x:%02x:%02x on port %d",
> +				s[0], s[1], s[2], s[3], s[4], s[5], NA(ifp)->bdg_port);
> +		}
> +		dst = 0;
> +		if ( (buf[0] & 1) == 0) { /* unicast */
> +		    	uint8_t *d = buf;
> +			dh = nm_bridge_rthash(buf); // XXX hash of dst
> +			if (b->ht[dh].mac == dmac) {	/* found dst */
> +				dst = b->ht[dh].ports;
> +				if (netmap_verbose)
> +				    D("dst %02x:%02x:%02x:%02x:%02x:%02x to port %x",
> +					d[0], d[1], d[2], d[3], d[4], d[5], (uint32_t)(dst >> 16));
> +			}
> +		}
> +		if (dst == 0)
> +			dst = all_dst;
> +		dst &= all_dst; /* only consider valid ports */
> +		if (unlikely(netmap_verbose))
> +			D("pkt goes to ports 0x%x", (uint32_t)dst);
> +		ft[i].dst = dst;
> +	}
> +
> +	/* second pass, scan interfaces and forward */
> +	all_dst = (b->act_ports & ~mysrc);
> +	for (ifn = 0; all_dst; ifn++) {
> +		struct ifnet *dst_ifp = b->bdg_ports[ifn];
> +		struct netmap_adapter *na;
> +		struct netmap_kring *kring;
> +		struct netmap_ring *ring;
> +		int j, lim, sent, locked;
> +
> +		if (!dst_ifp)
> +			continue;
> +		ND("scan port %d %s", ifn, dst_ifp->if_xname);
> +		dst = 1 << ifn;
> +		if ((dst & all_dst) == 0)	/* skip if not set */
> +			continue;
> +		all_dst &= ~dst;	/* clear current node */
> +		na = NA(dst_ifp);
> +
> +		ring = NULL;
> +		kring = NULL;
> +		lim = sent = locked = 0;
> +		/* inside, scan slots */
> +		for (i = 0; likely(i < n); i++) {
> +			if ((ft[i].dst & dst) == 0)
> +				continue;	/* not here */
> +			if (!locked) {
> +				kring = &na->rx_rings[0];
> +				ring = kring->ring;
> +				lim = kring->nkr_num_slots - 1;
> +				na->nm_lock(dst_ifp, NETMAP_RX_LOCK, 0);
> +				locked = 1;
> +			}
> +			if (unlikely(kring->nr_hwavail >= lim)) {
> +				if (netmap_verbose)
> +					D("rx ring full on %s", ifp->if_xname);
> +				break;
> +			}
> +			j = kring->nr_hwcur + kring->nr_hwavail;
> +			if (j > lim)
> +				j -= kring->nkr_num_slots;
> +			slot = &ring->slot[j];
> +			ND("send %d %d bytes at %s:%d", i, ft[i].len, dst_ifp->if_xname, j);
> +			pkt_copy(ft[i].buf, NMB(slot), ft[i].len);
> +			slot->len = ft[i].len;
> +			kring->nr_hwavail++;
> +			sent++;
> +		}
> +		if (locked) {
> +			ND("sent %d on %s", sent, dst_ifp->if_xname);
> +			if (sent)
> +				selwakeuppri(&kring->si, PI_NET);
> +			na->nm_lock(dst_ifp, NETMAP_RX_UNLOCK, 0);
> +		}
> +	}
> +	return 0;
> +}
> +
> +/*
> + * main dispatch routine
> + */
> +static int
> +bdg_netmap_txsync(struct ifnet *ifp, u_int ring_nr, int do_lock)
> +{
> +	struct netmap_adapter *na = NA(ifp);
> +	struct netmap_kring *kring = &na->tx_rings[ring_nr];
> +	struct netmap_ring *ring = kring->ring;
> +	int i, j, k, lim = kring->nkr_num_slots - 1;
> +	struct nm_bdg_fwd *ft = (struct nm_bdg_fwd *)(ifp + 1);
> +	int ft_i;	/* position in the forwarding table */
> +
> +	k = ring->cur;
> +	if (k > lim)
> +		return netmap_ring_reinit(kring);
> +	if (do_lock)
> +		na->nm_lock(ifp, NETMAP_TX_LOCK, ring_nr);
> +
> +	if (netmap_bridge <= 0) { /* testing only */
> +		j = k; // used all
> +		goto done;
> +	}
> +	if (netmap_bridge > NM_BDG_BATCH)
> +		netmap_bridge = NM_BDG_BATCH;
> +
> +	ft_i = 0;	/* start from 0 */
> +	for (j = kring->nr_hwcur; likely(j != k); j = unlikely(j == lim) ? 0 : j+1) {
> +		struct netmap_slot *slot = &ring->slot[j];
> +		int len = ft[ft_i].len = slot->len;
> +		char *buf = ft[ft_i].buf = NMB(slot);
> +
> +		prefetch(buf);
> +		if (unlikely(len < 14))
> +			continue;
> +		if (unlikely(++ft_i == netmap_bridge))
> +			ft_i = nm_bdg_flush(ft, ft_i, ifp);
> +	}
> +	if (ft_i)
> +		ft_i = nm_bdg_flush(ft, ft_i, ifp);
> +	/* count how many packets we sent */
> +	i = k - j;
> +	if (i < 0)
> +		i += kring->nkr_num_slots;
> +	kring->nr_hwavail = kring->nkr_num_slots - 1 - i;
> +	if (j != k)
> +		D("early break at %d/ %d, avail %d", j, k, kring->nr_hwavail);
> +
> +done:
> +	kring->nr_hwcur = j;
> +	ring->avail = kring->nr_hwavail;
> +	if (do_lock)
> +		na->nm_lock(ifp, NETMAP_TX_UNLOCK, ring_nr);
> +
> +	if (netmap_verbose)
> +		D("%s ring %d lock %d", ifp->if_xname, ring_nr, do_lock);
> +	return 0;
> +}
> +
> +static int
> +bdg_netmap_rxsync(struct ifnet *ifp, u_int ring_nr, int do_lock)
> +{
> +	struct netmap_adapter *na = NA(ifp);
> +	struct netmap_kring *kring = &na->rx_rings[ring_nr];
> +	struct netmap_ring *ring = kring->ring;
> +	u_int j, n, lim = kring->nkr_num_slots - 1;
> +	u_int k = ring->cur, resvd = ring->reserved;
> +
> +	ND("%s ring %d lock %d avail %d",
> +		ifp->if_xname, ring_nr, do_lock, kring->nr_hwavail);
> +
> +	if (k > lim)
> +		return netmap_ring_reinit(kring);
> +	if (do_lock)
> +		na->nm_lock(ifp, NETMAP_RX_LOCK, ring_nr);
> +
> +	/* skip past packets that userspace has released */
> +	j = kring->nr_hwcur;    /* netmap ring index */
> +	if (resvd > 0) {
> +		if (resvd + ring->avail >= lim + 1) {
> +			D("XXX invalid reserve/avail %d %d", resvd, ring->avail);
> +			ring->reserved = resvd = 0; // XXX panic...
> +		}
> +		k = (k >= resvd) ? k - resvd : k + lim + 1 - resvd;
> +	}
> +
> +	if (j != k) { /* userspace has released some packets. */
> +		n = k - j;
> +		if (n < 0)
> +			n += kring->nkr_num_slots;
> +		ND("userspace releases %d packets", n);
> +                for (n = 0; likely(j != k); n++) {
> +                        struct netmap_slot *slot = &ring->slot[j];
> +                        void *addr = NMB(slot);
> +
> +                        if (addr == netmap_buffer_base) { /* bad buf */
> +                                if (do_lock)
> +                                        na->nm_lock(ifp, NETMAP_RX_UNLOCK, ring_nr);
> +                                return netmap_ring_reinit(kring);
> +                        }
> +			/* decrease refcount for buffer */
> +
> +			slot->flags &= ~NS_BUF_CHANGED;
> +                        j = unlikely(j == lim) ? 0 : j + 1;
> +                }
> +                kring->nr_hwavail -= n;
> +                kring->nr_hwcur = k;
> +        }
> +        /* tell userspace that there are new packets */
> +        ring->avail = kring->nr_hwavail - resvd;
> +
> +	if (do_lock)
> +		na->nm_lock(ifp, NETMAP_RX_UNLOCK, ring_nr);
> +	return 0;
> +}
> +
> +static void
> +bdg_netmap_attach(struct ifnet *ifp)
> +{
> +	struct netmap_adapter na;
> +
> +	ND("attaching virtual bridge");
> +	bzero(&na, sizeof(na));
> +
> +	na.ifp = ifp;
> +	na.separate_locks = 1;
> +	na.num_tx_desc = NM_BRIDGE_RINGSIZE;
> +	na.num_rx_desc = NM_BRIDGE_RINGSIZE;
> +	na.nm_txsync = bdg_netmap_txsync;
> +	na.nm_rxsync = bdg_netmap_rxsync;
> +	na.nm_register = bdg_netmap_reg;
> +	netmap_attach(&na, 1);
> +}
> +
> +#endif /* NM_BRIDGE */
> +
> +static struct cdev *netmap_dev; /* /dev/netmap character device. */
> +
> +
> +/*
> + * Module loader.
> + *
> + * Create the /dev/netmap device and initialize all global
> + * variables.
> + *
> + * Return 0 on success, errno on failure.
> + */
> +static int
> +netmap_init(void)
> +{
> +	int error;
> +
> +	error = netmap_memory_init();
> +	if (error != 0) {
> +		printf("netmap: unable to initialize the memory allocator.\n");
> +		return (error);
> +	}
> +	printf("netmap: loaded module\n");
> +	netmap_dev = make_dev(&netmap_cdevsw, 0, UID_ROOT, GID_WHEEL, 0660,
> +			      "netmap");
> +
> +#ifdef NM_BRIDGE
> +	{
> +	int i;
> +	for (i = 0; i < NM_BRIDGES; i++)
> +		mtx_init(&nm_bridges[i].bdg_lock, "bdg lock", "bdg_lock", MTX_DEF);
> +	}
> +#endif
> +	return (error);
> +}
> +
> +
> +/*
> + * Module unloader.
> + *
> + * Free all the memory, and destroy the ``/dev/netmap`` device.
> + */
> +static void
> +netmap_fini(void)
> +{
> +	destroy_dev(netmap_dev);
> +	netmap_memory_fini();
> +	printf("netmap: unloaded module.\n");
> +}
> +
> +
> +#ifdef __FreeBSD__
> +/*
> + * Kernel entry point.
> + *
> + * Initialize/finalize the module and return.
> + *
> + * Return 0 on success, errno on failure.
> + */
> +static int
> +netmap_loader(__unused struct module *module, int event, __unused void *arg)
> +{
> +	int error = 0;
> +
> +	switch (event) {
> +	case MOD_LOAD:
> +		error = netmap_init();
> +		break;
> +
> +	case MOD_UNLOAD:
> +		netmap_fini();
> +		break;
> +
> +	default:
> +		error = EOPNOTSUPP;
> +		break;
> +	}
> +
> +	return (error);
> +}
> +
> +
> +DEV_MODULE(netmap, netmap_loader, NULL);
> +#endif /* __FreeBSD__ */
> --- /dev/null	1970-01-01 00:00:00.000000000 +0000
> +++ b/drivers/staging/netmap/README	2013-03-10 10:08:20.327671428 -0700
> @@ -0,0 +1,127 @@
> +# $Id: README 10832 2012-03-22 18:22:42Z luigi $
> +
> +NETMAP FOR LINUX
> +----------------
> +
> +This directory contains a version of the "netmap" code for Linux.
> +
> +Netmap is a BSD-licensed framework that supports line-rate direct packet
> +I/O even on 10GBit/s interfaces (14.88Mpps) with limited system load,
> +and includes a libpcap emulation library to port applications.  See
> +
> +	http://info.iet.unipi.it/~luigi/netmap/
> +
> +for more details. There you can also find the latest versions
> +of the code and documentation as well as pre-built TinyCore
> +images based on linux 3.0.3 and containing the netmap modules
> +and some test applications.
> +
> +This is a preliminary version supporting the ixgbe and e1000/e1000e
> +drivers. Patches for other devices (igb, r8169, forcedeth) are
> +untested and probably not working yet.
> +
> +Netmap relies on a kernel module (netmap_lin.ko) and slightly modified
> +device drivers. Userspace programs can use the native API (documented
> +in netmap.4) or a libpcap emulation library.
> +
> +    Directory structure for this archive
> +
> +	.		documentation, patches etc.
> +	include/net	header files for user programs
> +	net/netmap	kernel core files,
> +			sample applications, manpage
> +	net/*		patched device drivers for a 3.0.x linux version.
> +
> +HOW TO BUILD THE CODE
> +---------------------
> +
> +1. make sure you have kernel sources/headers matching your installed system
> +
> +2. do the following
> +	(cd net;  make KSRC=/usr/src/linux-kernel-source-or-headers )
> +   this produces net/netmap/netmap_lin.ko and other kernel modules.
> +
> +3. to build sample applications, run
> +	(cd net/netmap; make apps )
> +   (you will need the pthreads and libpcap-dev packages to build them)
> +
> +HOW TO USE THE CODE
> +-------------------
> +
> +    REMEMBER
> +	THIS IS EXPERIMENTAL CODE WHICH MAY CRASH YOUR SYSTEM.
> +	USE IT AT YOUR OWN RISK.
> +
> +Whether you built your own modules, or are using the prebuilt
> +TinyCore image, the following steps can be used for initial testing:
> +
> +1. unload any modules for the network cards you want to use, e.g.
> +	sudo rmmod ixgbe
> +	sudo rmmod e1000
> +	sudo rmmod e1000e
> +
> +2. load netmap and device driver module
> +	sudo insmod net/netmap/netmap_lin.ko
> +	sudo insmod net/ixgbe/ixgbe.ko
> +	sudo insmod net/ixgbe/e1000.ko
> +	sudo insmod net/ixgbe/e1000e.ko
> +
> +3. turn the interface(s) up
> +
> +	sudo ifconfig eth0 up # and same for others
> +
> +4. Run test applications -- as an example, pkt-gen is a raw packet
> +   sender/receiver which can do line rate on a 10G interface
> +
> +	# send about 500 million packets of 64 bytes each.
> +	# wait 5s before starting, so the link can go up
> +	sudo pkt-gen -i eth0 -t 500111222 -l 64 -w 5
> +	# you should see about 14.2 Mpps
> +
> +	sudo pkt-gen -i eth0 # act as a receiver
> +
> +
> +COMMON PROBLEMS
> +----------------
> +
> +* switching in/out of netmap mode causes the link to go down and up.
> +  If your card is connected to a switch with spanning tree enabled,
> +  the switch will likely MUTE THE LINK FOR 10 SECONDS while it is
> +  detecting the new topology. Either disable the spanning tree on
> +  the switch or use long pauses before sending data;
> +
> +* Not all cards can do line rate, no matter how fast your software or
> +  CPU is. Several have hardware limitations that prevent reaching the peak
> +  speed, especially for small packet sizes. Examples:
> +
> +  - ixgbe cannot receive at line rate with packet sizes that are
> +    not a multiple of 64 (after CRC stripping).
> +    This is especially evident with minimum-sized frames (-l 60).
> +
> +  - some of the low-end 'e1000' cards can send 1.2 - 1.3Mpps instead
> +    of the theoretical maximum (1.488Mpps)
> +
> +  - the 'realtek' cards seem unable to send more than 450-500Kpps
> +    even though they can receive at least 1.1Mpps
> +
> +* if the link is not up when the packet generator starts, you will
> +  see frequent messages about a link reset. While we work on a fix,
> +  use the '-w' argument on the generator to specify a longer timeout
> +
> +* the ixgbe driver (and perhaps others) is severely slowed down if the
> +  remote party is sending flow control frames to slow down traffic.
> +  If that happens, use the ethtool command to disable flow control.
> +
> +
> +REVISION HISTORY
> +-----------------
> +
> +20120322 - fixed the 'igb' driver, now it can send and receive correctly
> +	(the problem was in netmap_rx_irq() so it might have affected
> +	other multiqueue cards).
> +	Also tested the 'r8169' in transmit mode.
> +	Added comments on switches and spanning tree.
> +
> +20120217 - initial version. Only ixgbe, e1000 and e1000e are working.
> +	Other drivers (igb, r8169, forcedeth) are supplied only as a
> +	proof of concept.
> --- /dev/null	1970-01-01 00:00:00.000000000 +0000
> +++ b/drivers/staging/netmap/TODO	2013-03-10 16:45:40.377960319 -0700
> @@ -0,0 +1,21 @@
> +
> +This driver is functional but needs some cleanup before it is ready
> +for inclusion in the networking subsystem.
> +
> +  - Use unifdef to eliminate non-Linux code
> +  - Get rid of wrapper code including bsd_glue.h
> +  - Fix coding style to be Linux rather than BSD
> +  - Fix whitespace warnings
> +  - Add stubs so that ethernet drivers can use it without #ifdefs
> +  - Autoload the module, assign a real minor number and have aliases
> +  - Rework documentation and put in Documentation/networking/
> +  - Remove memory allocator kludge wrapper
> +  - Remove devfs kludge wrappers
> +  - Review configuration APIs and management (counters)
> +  - Locking should be consistent (i.e. get rid of the separate_locks option)?
> +  - Doesn't work with DMA remapping
> +  - Use __u32 rather than uint32_t in headers
> +  - netmap_user.h headers should use inline instead of macros
> +
> +Fundamentally this will break source compatibility with FreeBSD,
> +but this is not our problem.
> --- /dev/null	1970-01-01 00:00:00.000000000 +0000
> +++ b/drivers/staging/netmap/netmap_mem2.c	2013-03-10 10:08:20.327671428 -0700
> @@ -0,0 +1,974 @@
> +/*
> + * Copyright (C) 2012 Matteo Landi, Luigi Rizzo, Giuseppe Lettieri. All rights reserved.
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions
> + * are met:
> + *   1. Redistributions of source code must retain the above copyright
> + *      notice, this list of conditions and the following disclaimer.
> + *   2. Redistributions in binary form must reproduce the above copyright
> + *      notice, this list of conditions and the following disclaimer in the
> + *    documentation and/or other materials provided with the distribution.
> + *
> + * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
> + * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
> + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
> + * ARE DISCLAIMED.  IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
> + * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
> + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
> + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
> + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
> + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
> + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
> + * SUCH DAMAGE.
> + */
> +
> +/*
> + * $FreeBSD: head/sys/dev/netmap/netmap_mem2.c 234290 2012-04-14 16:44:18Z luigi $
> + * $Id: netmap_mem2.c 12010 2013-01-23 03:57:30Z luigi $
> + *
> + * (New) memory allocator for netmap
> + */
> +
> +/*
> + * This allocator creates three memory regions:
> + *	nm_if_pool	for the struct netmap_if
> + *	nm_ring_pool	for the struct netmap_ring
> + *	nm_buf_pool	for the packet buffers.
> + *
> + * All regions need to be a multiple of the page size as we export them to
> + * userspace through mmap. Only the latter needs to be dma-able,
> + * but for convenience use the same type of allocator for all.
> + *
> + * Once mapped, the three regions are exported to userspace
> + * as a contiguous block, starting from nm_if_pool. Each
> + * cluster (and pool) is an integral number of pages.
> + *   [ . . . ][ . . . . . .][ . . . . . . . . . .]
> + *    nm_if     nm_ring            nm_buf
> + *
> + * The userspace areas contain offsets of the objects in userspace.
> + * When (at init time) we write these offsets, we find out the index
> + * of the object, and from there locate the offset from the beginning
> + * of the region.
> + *
> + * The individual allocators manage a pool of memory for objects of
> + * the same size.
> + * The pool is split into smaller clusters, whose size is a
> + * multiple of the page size. The cluster size is chosen
> + * to minimize the waste for a given max cluster size
> + * (we do it by brute force, as we have relatively few objects
> + * per cluster).
> + *
> + * Objects are aligned to the cache line (64 bytes) rounding up object
> + * sizes when needed. A bitmap contains the state of each object.
> + * Allocation scans the bitmap; this is done only on attach, so we are not
> + * too worried about performance
> + *
> + * For each allocator we can define (through sysctl) the size and
> + * number of each object. Memory is allocated at the first use of a
> + * netmap file descriptor, and can be freed when all such descriptors
> + * have been released (including unmapping the memory).
> + * If memory is scarce, the system tries to get as much as possible
> + * and the sysctl values reflect the actual allocation.
> + * Together with the desired values, the sysctls also export the absolute
> + * minimum and maximum values that cannot be overridden.
> + *
> + * struct netmap_if:
> + *	variable size, max 16 bytes per ring pair plus some fixed amount.
> + *	1024 bytes should be large enough in practice.
> + *
> + *	In the worst case we have one netmap_if per ring in the system.
> + *
> + * struct netmap_ring
> + *	variable too, 8 bytes per slot plus some fixed amount.
> + *	Rings can be large (e.g. 4k slots, or >32Kbytes).
> + *	We default to 36 KB (9 pages), and a few hundred rings.
> + *
> + * struct netmap_buffer
> + *	The more the better, both because fast interfaces tend to have
> + *	many slots, and because we may want to use buffers to store
> + *	packets in userspace avoiding copies.
> + *	Must contain a full frame (e.g. 1518 bytes, or more for VLANs, jumbo
> + *	frames etc.), be nicely aligned, and note that some NICs restrict
> + *	the size to a multiple of 1K or so. Default is 2K.
> + */
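
Since the three pools end up in one contiguous mmap() region, userspace never
sees the pool boundaries; it just follows the exported offsets. Roughly
(sketch; mem/nmr as in the NIOCREGIF example in netmap.c, helper macros
assumed from netmap_user.h):

	struct netmap_if   *nifp = NETMAP_IF(mem, nmr.nr_offset);         /* nm_if_pool */
	struct netmap_ring *txr  = NETMAP_TXRING(nifp, 0);                /* nm_ring_pool */
	char               *buf  = NETMAP_BUF(txr, txr->slot[0].buf_idx); /* nm_buf_pool */
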
> +
> +#ifndef CONSERVATIVE
> +#define NETMAP_BUF_MAX_NUM	20*4096*2	/* large machine */
> +#else /* CONSERVATIVE */
> +#define NETMAP_BUF_MAX_NUM      20000   /* 40MB */
> +#endif
> +
> +#ifdef linux
> +#define NMA_LOCK_T		struct semaphore
> +#define NMA_LOCK_INIT()		sema_init(&nm_mem.nm_mtx, 1)
> +#define NMA_LOCK_DESTROY()
> +#define NMA_LOCK()		down(&nm_mem.nm_mtx)
> +#define NMA_UNLOCK()		up(&nm_mem.nm_mtx)
> +#else /* !linux */
> +#define NMA_LOCK_T		struct mtx
> +#define NMA_LOCK_INIT()		mtx_init(&nm_mem.nm_mtx, "netmap memory allocator lock", NULL, MTX_DEF)
> +#define NMA_LOCK_DESTROY()	mtx_destroy(&nm_mem.nm_mtx)
> +#define NMA_LOCK()		mtx_lock(&nm_mem.nm_mtx)
> +#define NMA_UNLOCK()		mtx_unlock(&nm_mem.nm_mtx)
> +#endif /* linux */
> +
> +enum {
> +	NETMAP_IF_POOL   = 0,
> +	NETMAP_RING_POOL,
> +	NETMAP_BUF_POOL,
> +	NETMAP_POOLS_NR
> +};
> +
> +
> +struct netmap_obj_params {
> +	u_int size;
> +	u_int num;
> +};
> +
> +
> +struct netmap_obj_params netmap_params[NETMAP_POOLS_NR] = {
> +	[NETMAP_IF_POOL] = {
> +		.size = 1024,
> +		.num  = 100,
> +	},
> +	[NETMAP_RING_POOL] = {
> +		.size = 9*PAGE_SIZE,
> +		.num  = 200,
> +	},
> +	[NETMAP_BUF_POOL] = {
> +		.size = 2048,
> +		.num  = NETMAP_BUF_MAX_NUM,
> +	},
> +};
> +
> +
> +struct netmap_obj_pool {
> +	char name[16];		/* name of the allocator */
> +	u_int objtotal;         /* actual total number of objects. */
> +	u_int objfree;          /* number of free objects. */
> +	u_int clustentries;	/* actual objects per cluster */
> +
> +	/* limits */
> +	u_int objminsize;	/* minimum object size */
> +	u_int objmaxsize;	/* maximum object size */
> +	u_int nummin;		/* minimum number of objects */
> +	u_int nummax;		/* maximum number of objects */
> +
> +	/* the total memory space is _numclusters*_clustsize */
> +	u_int _numclusters;	/* how many clusters */
> +	u_int _clustsize;        /* cluster size */
> +	u_int _objsize;		/* actual object size */
> +
> +	u_int _memtotal;	/* _numclusters*_clustsize */
> +	struct lut_entry *lut;  /* virt,phys addresses, objtotal entries */
> +	uint32_t *bitmap;       /* one bit per buffer, 1 means free */
> +	uint32_t bitmap_slots;	/* number of uint32 entries in bitmap */
> +};
> +
> +
> +struct netmap_mem_d {
> +	NMA_LOCK_T nm_mtx;  /* protect the allocator */
> +	u_int nm_totalsize; /* shorthand */
> +
> +	int finalized;		/* !=0 iff preallocation done */
> +	int lasterr;		/* last error for curr config */
> +	int refcount;		/* existing priv structures */
> +	/* the three allocators */
> +	struct netmap_obj_pool pools[NETMAP_POOLS_NR];
> +};
> +
> +
> +static struct netmap_mem_d nm_mem = {	/* Our memory allocator. */
> +	.pools = {
> +		[NETMAP_IF_POOL] = {
> +			.name 	= "netmap_if",
> +			.objminsize = sizeof(struct netmap_if),
> +			.objmaxsize = 4096,
> +			.nummin     = 10,	/* don't be stingy */
> +			.nummax	    = 10000,	/* XXX very large */
> +		},
> +		[NETMAP_RING_POOL] = {
> +			.name 	= "netmap_ring",
> +			.objminsize = sizeof(struct netmap_ring),
> +			.objmaxsize = 32*PAGE_SIZE,
> +			.nummin     = 2,
> +			.nummax	    = 1024,
> +		},
> +		[NETMAP_BUF_POOL] = {
> +			.name	= "netmap_buf",
> +			.objminsize = 64,
> +			.objmaxsize = 65536,
> +			.nummin     = 4,
> +			.nummax	    = 1000000, /* one million! */
> +		},
> +	},
> +};
> +
> +struct lut_entry *netmap_buffer_lut;	/* exported */
> +
> +/* memory allocator related sysctls */
> +
> +#define STRINGIFY(x) #x
> +
> +#define DECLARE_SYSCTLS(id, name) \
> +	/* TUNABLE_INT("hw.netmap." STRINGIFY(name) "_size", &netmap_params[id].size); */ \
> +	SYSCTL_INT(_dev_netmap, OID_AUTO, name##_size, \
> +	    CTLFLAG_RW, &netmap_params[id].size, 0, "Requested size of netmap " STRINGIFY(name) "s"); \
> +        SYSCTL_INT(_dev_netmap, OID_AUTO, name##_curr_size, \
> +            CTLFLAG_RD, &nm_mem.pools[id]._objsize, 0, "Current size of netmap " STRINGIFY(name) "s"); \
> +	/* TUNABLE_INT("hw.netmap." STRINGIFY(name) "_num", &netmap_params[id].num); */ \
> +        SYSCTL_INT(_dev_netmap, OID_AUTO, name##_num, \
> +            CTLFLAG_RW, &netmap_params[id].num, 0, "Requested number of netmap " STRINGIFY(name) "s"); \
> +        SYSCTL_INT(_dev_netmap, OID_AUTO, name##_curr_num, \
> +            CTLFLAG_RD, &nm_mem.pools[id].objtotal, 0, "Current number of netmap " STRINGIFY(name) "s")
> +
> +DECLARE_SYSCTLS(NETMAP_IF_POOL, if);
> +DECLARE_SYSCTLS(NETMAP_RING_POOL, ring);
> +DECLARE_SYSCTLS(NETMAP_BUF_POOL, buf);
> +
> +/*
> + * Convert a userspace offset to a physical address.
> + * XXX re-do in a simpler way.
> + *
> + * The idea here is to hide from userspace applications the fact that
> + * pre-allocated memory is not contiguous, but fragmented across different
> + * clusters and smaller memory allocators. Consequently, we first need to
> + * find the allocator that owns the provided offset, and then the physical
> + * address associated with the target page (this is done using the
> + * look-up table).
> + */
> +static inline vm_paddr_t
> +netmap_ofstophys(vm_offset_t offset)
> +{
> +	int i;
> +	vm_offset_t o = offset;
> +	struct netmap_obj_pool *p = nm_mem.pools;
> +
> +	for (i = 0; i < NETMAP_POOLS_NR; offset -= p[i]._memtotal, i++) {
> +		if (offset >= p[i]._memtotal)
> +			continue;
> +		// XXX now scan the clusters
> +		return p[i].lut[offset / p[i]._objsize].paddr +
> +			offset % p[i]._objsize;
> +	}
> +	/* this is only in case of errors */
> +	D("invalid ofs 0x%x out of 0x%x 0x%x 0x%x", (u_int)o,
> +		p[NETMAP_IF_POOL]._memtotal,
> +		p[NETMAP_IF_POOL]._memtotal
> +			+ p[NETMAP_RING_POOL]._memtotal,
> +		p[NETMAP_IF_POOL]._memtotal
> +			+ p[NETMAP_RING_POOL]._memtotal
> +			+ p[NETMAP_BUF_POOL]._memtotal);
> +	return 0;	// XXX bad address
> +}
> +
> +/*
> + * we store objects by kernel address, need to find the offset
> + * within the pool to export the value to userspace.
> + * Algorithm: scan until we find the cluster, then add the
> + * actual offset in the cluster
> + */
> +static ssize_t
> +netmap_obj_offset(struct netmap_obj_pool *p, const void *vaddr)
> +{
> +	int i, k = p->clustentries, n = p->objtotal;
> +	ssize_t ofs = 0;
> +
> +	for (i = 0; i < n; i += k, ofs += p->_clustsize) {
> +		const char *base = p->lut[i].vaddr;
> +		ssize_t relofs = (const char *) vaddr - base;
> +
> +		if (relofs < 0 || relofs > p->_clustsize)
> +			continue;
> +
> +		ofs = ofs + relofs;
> +		ND("%s: return offset %d (cluster %d) for pointer %p",
> +		    p->name, ofs, i, vaddr);
> +		return ofs;
> +	}
> +	D("address %p is not contained inside any cluster (%s)",
> +	    vaddr, p->name);
> +	return 0; /* An error occurred */
> +}
> +
> +/* Helper functions which convert virtual addresses to offsets */
> +#define netmap_if_offset(v)					\
> +	netmap_obj_offset(&nm_mem.pools[NETMAP_IF_POOL], (v))
> +
> +#define netmap_ring_offset(v)					\
> +    (nm_mem.pools[NETMAP_IF_POOL]._memtotal + 				\
> +	netmap_obj_offset(&nm_mem.pools[NETMAP_RING_POOL], (v)))
> +
> +#define netmap_buf_offset(v)					\
> +    (nm_mem.pools[NETMAP_IF_POOL]._memtotal +				\
> +	nm_mem.pools[NETMAP_RING_POOL]._memtotal +			\
> +	netmap_obj_offset(&nm_mem.pools[NETMAP_BUF_POOL], (v)))
> +
> +
> +/*
> + * Report the index, and use the start position as a hint;
> + * otherwise buffer allocation becomes terribly expensive.
> + */
> +static void *
> +netmap_obj_malloc(struct netmap_obj_pool *p, int len, uint32_t *start, uint32_t *index)
> +{
> +	uint32_t i = 0;			/* index in the bitmap */
> +	uint32_t mask, j;		/* slot counter */
> +	void *vaddr = NULL;
> +
> +	if (len > p->_objsize) {
> +		D("%s request size %d too large", p->name, len);
> +		// XXX cannot reduce the size
> +		return NULL;
> +	}
> +
> +	if (p->objfree == 0) {
> +		D("%s allocator: run out of memory", p->name);
> +		return NULL;
> +	}
> +	if (start)
> +		i = *start;
> +
> +	/* termination is guaranteed by p->objfree, but better check bounds on i */
> +	while (vaddr == NULL && i < p->bitmap_slots)  {
> +		uint32_t cur = p->bitmap[i];
> +		if (cur == 0) { /* bitmask is fully used */
> +			i++;
> +			continue;
> +		}
> +		/* locate a slot */
> +		for (j = 0, mask = 1; (cur & mask) == 0; j++, mask <<= 1)
> +			;
> +
> +		p->bitmap[i] &= ~mask; /* mark object as in use */
> +		p->objfree--;
> +
> +		vaddr = p->lut[i * 32 + j].vaddr;
> +		if (index)
> +			*index = i * 32 + j;
> +	}
> +	ND("%s allocator: allocated object @ [%d][%d]: vaddr %p", i, j, vaddr);
> +
> +	if (start)
> +		*start = i;
> +	return vaddr;
> +}
> +
> +
> +/*
> + * free by index, not by address
> + */
> +static void
> +netmap_obj_free(struct netmap_obj_pool *p, uint32_t j)
> +{
> +	if (j >= p->objtotal) {
> +		D("invalid index %u, max %u", j, p->objtotal);
> +		return;
> +	}
> +	p->bitmap[j / 32] |= (1 << (j % 32));
> +	p->objfree++;
> +	return;
> +}
> +
> +static void
> +netmap_obj_free_va(struct netmap_obj_pool *p, void *vaddr)
> +{
> +	int i, j, n = p->_memtotal / p->_clustsize;
> +
> +	for (i = 0, j = 0; i < n; i++, j += p->clustentries) {
> +		void *base = p->lut[i * p->clustentries].vaddr;
> +		ssize_t relofs = (ssize_t) vaddr - (ssize_t) base;
> +
> +		/* Given address is out of the scope of the current cluster. */
> +		if (vaddr < base || relofs > p->_clustsize)
> +			continue;
> +
> +		j = j + relofs / p->_objsize;
> +		KASSERT(j != 0, ("Cannot free object 0"));
> +		netmap_obj_free(p, j);
> +		return;
> +	}
> +	D("address %p is not contained inside any cluster (%s)",
> +	    vaddr, p->name);
> +}
> +
> +#define netmap_if_malloc(len)	netmap_obj_malloc(&nm_mem.pools[NETMAP_IF_POOL], len, NULL, NULL)
> +#define netmap_if_free(v)	netmap_obj_free_va(&nm_mem.pools[NETMAP_IF_POOL], (v))
> +#define netmap_ring_malloc(len)	netmap_obj_malloc(&nm_mem.pools[NETMAP_RING_POOL], len, NULL, NULL)
> +#define netmap_ring_free(v)	netmap_obj_free_va(&nm_mem.pools[NETMAP_RING_POOL], (v))
> +#define netmap_buf_malloc(_pos, _index)			\
> +	netmap_obj_malloc(&nm_mem.pools[NETMAP_BUF_POOL], NETMAP_BUF_SIZE, _pos, _index)
> +
> +
> +/* Return the index associated to the given packet buffer */
> +#define netmap_buf_index(v)						\
> +    (netmap_obj_offset(&nm_mem.pools[NETMAP_BUF_POOL], (v)) / nm_mem.pools[NETMAP_BUF_POOL]._objsize)
> +
> +
> +/* Return nonzero on error */
> +static int
> +netmap_new_bufs(struct netmap_if *nifp,
> +                struct netmap_slot *slot, u_int n)
> +{
> +	struct netmap_obj_pool *p = &nm_mem.pools[NETMAP_BUF_POOL];
> +	int i = 0;	/* slot counter */
> +	uint32_t pos = 0;	/* slot in p->bitmap */
> +	uint32_t index = 0;	/* buffer index */
> +
> +	(void)nifp;	/* UNUSED */
> +	for (i = 0; i < n; i++) {
> +		void *vaddr = netmap_buf_malloc(&pos, &index);
> +		if (vaddr == NULL) {
> +			D("unable to locate empty packet buffer");
> +			goto cleanup;
> +		}
> +		slot[i].buf_idx = index;
> +		slot[i].len = p->_objsize;
> +		/* XXX setting flags=NS_BUF_CHANGED forces a pointer reload
> +		 * in the NIC ring. This is a hack that hides missing
> +		 * initializations in the drivers, and should go away.
> +		 */
> +		slot[i].flags = NS_BUF_CHANGED;
> +	}
> +
> +	ND("allocated %d buffers, %d available, first at %d", n, p->objfree, pos);
> +	return (0);
> +
> +cleanup:
> +	while (i > 0) {
> +		i--;
> +		netmap_obj_free(p, slot[i].buf_idx);
> +	}
> +	bzero(slot, n * sizeof(slot[0]));
> +	return (ENOMEM);
> +}
> +
> +
> +static void
> +netmap_free_buf(struct netmap_if *nifp, uint32_t i)
> +{
> +	struct netmap_obj_pool *p = &nm_mem.pools[NETMAP_BUF_POOL];
> +
> +	if (i < 2 || i >= p->objtotal) {
> +		D("Cannot free buf#%d: should be in [2, %d[", i, p->objtotal);
> +		return;
> +	}
> +	netmap_obj_free(p, i);
> +}
> +
> +static void
> +netmap_reset_obj_allocator(struct netmap_obj_pool *p)
> +{
> +	if (p == NULL)
> +		return;
> +	if (p->bitmap)
> +		free(p->bitmap, M_NETMAP);
> +	p->bitmap = NULL;
> +	if (p->lut) {
> +		int i;
> +		for (i = 0; i < p->objtotal; i += p->clustentries) {
> +			if (p->lut[i].vaddr)
> +				contigfree(p->lut[i].vaddr, p->_clustsize, M_NETMAP);
> +		}
> +		bzero(p->lut, sizeof(struct lut_entry) * p->objtotal);
> +#ifdef linux
> +		vfree(p->lut);
> +#else
> +		free(p->lut, M_NETMAP);
> +#endif
> +	}
> +	p->lut = NULL;
> +}
> +
> +/*
> + * Free all resources related to an allocator.
> + */
> +static void
> +netmap_destroy_obj_allocator(struct netmap_obj_pool *p)
> +{
> +	if (p == NULL)
> +		return;
> +	netmap_reset_obj_allocator(p);
> +}
> +
> +/*
> + * We receive a request for objtotal objects, of size objsize each.
> + * Internally we may round up both numbers, as we allocate objects
> + * in small clusters that are a multiple of the page size.
> + * In the allocator we don't need to store the objsize,
> + * but we do need to keep track of objtotal' and clustentries,
> + * as they are needed when freeing memory.
> + *
> + * XXX note -- userspace needs the buffers to be contiguous,
> + *	so we cannot afford gaps at the end of a cluster.
> + */
> +
> +
> +/* call with NMA_LOCK held */
> +static int
> +netmap_config_obj_allocator(struct netmap_obj_pool *p, u_int objtotal, u_int objsize)
> +{
> +	int i, n;
> +	u_int clustsize;	/* the cluster size, multiple of page size */
> +	u_int clustentries;	/* how many objects per cluster */
> +
> +#define MAX_CLUSTSIZE	(1<<17)
> +#define LINE_ROUND	64
> +	if (objsize >= MAX_CLUSTSIZE) {
> +		/* we could do it but there is no point */
> +		D("unsupported allocation for %d bytes", objsize);
> +		goto error;
> +	}
> +	/* make sure objsize is a multiple of LINE_ROUND */
> +	i = (objsize & (LINE_ROUND - 1));
> +	if (i) {
> +		D("XXX aligning object by %d bytes", LINE_ROUND - i);
> +		objsize += LINE_ROUND - i;
> +	}
> +	if (objsize < p->objminsize || objsize > p->objmaxsize) {
> +		D("requested objsize %d out of range [%d, %d]",
> +			objsize, p->objminsize, p->objmaxsize);
> +		goto error;
> +	}
> +	if (objtotal < p->nummin || objtotal > p->nummax) {
> +		D("requested objtotal %d out of range [%d, %d]",
> +			objtotal, p->nummin, p->nummax);
> +		goto error;
> +	}
> +	/*
> +	 * Compute number of objects using a brute-force approach:
> +	 * given a max cluster size,
> +	 * we try to fill it with objects keeping track of the
> +	 * wasted space to the next page boundary.
> +	 */
> +	for (clustentries = 0, i = 1;; i++) {
> +		u_int delta, used = i * objsize;
> +		if (used > MAX_CLUSTSIZE)
> +			break;
> +		delta = used % PAGE_SIZE;
> +		if (delta == 0) { // exact solution
> +			clustentries = i;
> +			break;
> +		}
> +		if (delta > ( (clustentries*objsize) % PAGE_SIZE) )
> +			clustentries = i;
> +	}
> +	// D("XXX --- ouch, delta %d (bad for buffers)", delta);
> +	/* compute clustsize and round to the next page */
> +	clustsize = clustentries * objsize;
> +	i =  (clustsize & (PAGE_SIZE - 1));
> +	if (i)
> +		clustsize += PAGE_SIZE - i;
> +	if (netmap_verbose)
> +		D("objsize %d clustsize %d objects %d",
> +			objsize, clustsize, clustentries);
> +
> +	/*
> +	 * The number of clusters is n = ceil(objtotal/clustentries)
> +	 * objtotal' = n * clustentries
> +	 */
> +	p->clustentries = clustentries;
> +	p->_clustsize = clustsize;
> +	n = (objtotal + clustentries - 1) / clustentries;
> +	p->_numclusters = n;
> +	p->objtotal = n * clustentries;
> +	p->objfree = p->objtotal - 2; /* obj 0 and 1 are reserved */
> +	p->_memtotal = p->_numclusters * p->_clustsize;
> +	p->_objsize = objsize;
> +
> +	return 0;
> +
> +error:
> +	p->_objsize = objsize;
> +	p->objtotal = objtotal;
> +
> +	return EINVAL;
> +}
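
To make the cluster-sizing loop above concrete (assuming 4 KB pages): for the
default 2 KB buffers, objsize is already a multiple of 64 and the brute-force
search hits an exact fit at i = 2 (2 * 2048 = 4096, delta = 0), so clustentries
is 2 and a cluster is exactly one page with no waste. For the 1 KB netmap_if
objects the exact fit is at i = 4 (4 * 1024 = 4096), and for the 36 KB rings a
single object already ends on a page boundary, so clustentries is 1.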
> +
> +
> +/* call with NMA_LOCK held */
> +static int
> +netmap_finalize_obj_allocator(struct netmap_obj_pool *p)
> +{
> +	int i, n;
> +
> +	n = sizeof(struct lut_entry) * p->objtotal;
> +#ifdef linux
> +	p->lut = vmalloc(n);
> +#else
> +	p->lut = malloc(n, M_NETMAP, M_NOWAIT | M_ZERO);
> +#endif
> +	if (p->lut == NULL) {
> +		D("Unable to create lookup table (%d bytes) for '%s'", n, p->name);
> +		goto clean;
> +	}
> +
> +	/* Allocate the bitmap */
> +	n = (p->objtotal + 31) / 32;
> +	p->bitmap = malloc(sizeof(uint32_t) * n, M_NETMAP, M_NOWAIT | M_ZERO);
> +	if (p->bitmap == NULL) {
> +		D("Unable to create bitmap (%d entries) for allocator '%s'", n,
> +		    p->name);
> +		goto clean;
> +	}
> +	p->bitmap_slots = n;
> +
> +	/*
> +	 * Allocate clusters, init pointers and bitmap
> +	 */
> +	for (i = 0; i < p->objtotal;) {
> +		int lim = i + p->clustentries;
> +		char *clust;
> +
> +		clust = contigmalloc(p->_clustsize, M_NETMAP, M_NOWAIT | M_ZERO,
> +		    0, -1UL, PAGE_SIZE, 0);
> +		if (clust == NULL) {
> +			/*
> +			 * If we get here, there is a severe memory shortage,
> +			 * so halve the allocated memory to reclaim some.
> +			 * XXX check boundaries
> +			 */
> +			D("Unable to create cluster at %d for '%s' allocator",
> +			    i, p->name);
> +			lim = i / 2;
> +			for (i--; i >= lim; i--) {
> +				p->bitmap[ (i>>5) ] &=  ~( 1 << (i & 31) );
> +				if (i % p->clustentries == 0 && p->lut[i].vaddr)
> +					contigfree(p->lut[i].vaddr,
> +						p->_clustsize, M_NETMAP);
> +			}
> +			p->objtotal = i;
> +			p->objfree = p->objtotal - 2;
> +			p->_numclusters = i / p->clustentries;
> +			p->_memtotal = p->_numclusters * p->_clustsize;
> +			break;
> +		}
> +		for (; i < lim; i++, clust += p->_objsize) {
> +			p->bitmap[ (i>>5) ] |=  ( 1 << (i & 31) );
> +			p->lut[i].vaddr = clust;
> +			p->lut[i].paddr = vtophys(clust);
> +		}
> +	}
> +	p->bitmap[0] = ~3; /* objs 0 and 1 are always busy */
> +	if (netmap_verbose)
> +		D("Pre-allocated %d clusters (%d/%dKB) for '%s'",
> +		    p->_numclusters, p->_clustsize >> 10,
> +		    p->_memtotal >> 10, p->name);
> +
> +	return 0;
> +
> +clean:
> +	netmap_reset_obj_allocator(p);
> +	return ENOMEM;
> +}
> +
> +/* call with lock held */
> +static int
> +netmap_memory_config_changed(void)
> +{
> +	int i;
> +
> +	for (i = 0; i < NETMAP_POOLS_NR; i++) {
> +		if (nm_mem.pools[i]._objsize != netmap_params[i].size ||
> +		    nm_mem.pools[i].objtotal != netmap_params[i].num)
> +		    return 1;
> +	}
> +	return 0;
> +}
> +
> +
> +/* call with lock held */
> +static int
> +netmap_memory_config(void)
> +{
> +	int i;
> +
> +
> +	if (!netmap_memory_config_changed())
> +		goto out;
> +
> +	D("reconfiguring");
> +
> +	if (nm_mem.finalized) {
> +		/* reset previous allocation */
> +		for (i = 0; i < NETMAP_POOLS_NR; i++) {
> +			netmap_reset_obj_allocator(&nm_mem.pools[i]);
> +		}
> +		nm_mem.finalized = 0;
> +        }
> +
> +	for (i = 0; i < NETMAP_POOLS_NR; i++) {
> +		nm_mem.lasterr = netmap_config_obj_allocator(&nm_mem.pools[i],
> +				netmap_params[i].num, netmap_params[i].size);
> +		if (nm_mem.lasterr)
> +			goto out;
> +	}
> +
> +	D("Have %d KB for interfaces, %d KB for rings and %d MB for buffers",
> +	    nm_mem.pools[NETMAP_IF_POOL]._memtotal >> 10,
> +	    nm_mem.pools[NETMAP_RING_POOL]._memtotal >> 10,
> +	    nm_mem.pools[NETMAP_BUF_POOL]._memtotal >> 20);
> +
> +out:
> +
> +	return nm_mem.lasterr;
> +}
> +
> +/* call with lock held */
> +static int
> +netmap_memory_finalize(void)
> +{
> +	int i;
> +	u_int totalsize = 0;
> +
> +	nm_mem.refcount++;
> +	if (nm_mem.refcount > 1) {
> +		ND("busy (refcount %d)", nm_mem.refcount);
> +		goto out;
> +	}
> +
> +	/* update configuration if changed */
> +	if (netmap_memory_config())
> +		goto out;
> +
> +	if (nm_mem.finalized) {
> +		/* may happen if config is not changed */
> +		ND("nothing to do");
> +		goto out;
> +	}
> +
> +	for (i = 0; i < NETMAP_POOLS_NR; i++) {
> +		nm_mem.lasterr = netmap_finalize_obj_allocator(&nm_mem.pools[i]);
> +		if (nm_mem.lasterr)
> +			goto cleanup;
> +		totalsize += nm_mem.pools[i]._memtotal;
> +	}
> +	nm_mem.nm_totalsize = totalsize;
> +
> +	/* backward compatibility */
> +	netmap_buf_size = nm_mem.pools[NETMAP_BUF_POOL]._objsize;
> +	netmap_total_buffers = nm_mem.pools[NETMAP_BUF_POOL].objtotal;
> +
> +	netmap_buffer_lut = nm_mem.pools[NETMAP_BUF_POOL].lut;
> +	netmap_buffer_base = nm_mem.pools[NETMAP_BUF_POOL].lut[0].vaddr;
> +
> +	nm_mem.finalized = 1;
> +	nm_mem.lasterr = 0;
> +
> +	/* make sysctl values match actual values in the pools */
> +	for (i = 0; i < NETMAP_POOLS_NR; i++) {
> +		netmap_params[i].size = nm_mem.pools[i]._objsize;
> +		netmap_params[i].num  = nm_mem.pools[i].objtotal;
> +	}
> +
> +out:
> +	if (nm_mem.lasterr)
> +		nm_mem.refcount--;
> +
> +	return nm_mem.lasterr;
> +
> +cleanup:
> +	for (i = 0; i < NETMAP_POOLS_NR; i++) {
> +		netmap_reset_obj_allocator(&nm_mem.pools[i]);
> +	}
> +	nm_mem.refcount--;
> +
> +	return nm_mem.lasterr;
> +}
> +
> +static int
> +netmap_memory_init(void)
> +{
> +	NMA_LOCK_INIT();
> +	return (0);
> +}
> +
> +static void
> +netmap_memory_fini(void)
> +{
> +	int i;
> +
> +	for (i = 0; i < NETMAP_POOLS_NR; i++) {
> +	    netmap_destroy_obj_allocator(&nm_mem.pools[i]);
> +	}
> +	NMA_LOCK_DESTROY();
> +}
> +
> +static void
> +netmap_free_rings(struct netmap_adapter *na)
> +{
> +	int i;
> +	if (!na->tx_rings)
> +		return;
> +	for (i = 0; i < na->num_tx_rings + 1; i++) {
> +		netmap_ring_free(na->tx_rings[i].ring);
> +		na->tx_rings[i].ring = NULL;
> +	}
> +	for (i = 0; i < na->num_rx_rings + 1; i++) {
> +		netmap_ring_free(na->rx_rings[i].ring);
> +		na->rx_rings[i].ring = NULL;
> +	}
> +	free(na->tx_rings, M_DEVBUF);
> +	na->tx_rings = na->rx_rings = NULL;
> +}
> +
> +
> +
> +/* call with NMA_LOCK held */
> +/*
> + * Allocate the per-fd structure netmap_if.
> + * If this is the first instance, also allocate the krings, rings etc.
> + */
> +static void *
> +netmap_if_new(const char *ifname, struct netmap_adapter *na)
> +{
> +	struct netmap_if *nifp;
> +	struct netmap_ring *ring;
> +	ssize_t base; /* handy for relative offsets between rings and nifp */
> +	u_int i, len, ndesc, ntx, nrx;
> +	struct netmap_kring *kring;
> +
> +	if (netmap_update_config(na)) {
> +		/* configuration mismatch, report and fail */
> +		return NULL;
> +	}
> +	ntx = na->num_tx_rings + 1; /* shorthand, include stack ring */
> +	nrx = na->num_rx_rings + 1; /* shorthand, include stack ring */
> +	/*
> +	 * the descriptor is followed inline by an array of offsets
> +	 * to the tx and rx rings in the shared memory region.
> +	 */
> +	len = sizeof(struct netmap_if) + (nrx + ntx) * sizeof(ssize_t);
> +	nifp = netmap_if_malloc(len);
> +	if (nifp == NULL) {
> +		return NULL;
> +	}
> +
> +	/* initialize base fields -- override const */
> +	*(int *)(uintptr_t)&nifp->ni_tx_rings = na->num_tx_rings;
> +	*(int *)(uintptr_t)&nifp->ni_rx_rings = na->num_rx_rings;
> +	strncpy(nifp->ni_name, ifname, IFNAMSIZ);
> +
> +	(na->refcount)++;	/* XXX atomic ? we are under lock */
> +	if (na->refcount > 1) { /* already setup, we are done */
> +		goto final;
> +	}
> +
> +	len = (ntx + nrx) * sizeof(struct netmap_kring);
> +	na->tx_rings = malloc(len, M_DEVBUF, M_NOWAIT | M_ZERO);
> +	if (na->tx_rings == NULL) {
> +		D("Cannot allocate krings for %s", ifname);
> +		goto cleanup;
> +	}
> +	na->rx_rings = na->tx_rings + ntx;
> +
> +	/*
> +	 * First instance, allocate netmap rings and buffers for this card
> +	 * The rings are contiguous, but have variable size.
> +	 */
> +	for (i = 0; i < ntx; i++) { /* Transmit rings */
> +		kring = &na->tx_rings[i];
> +		ndesc = na->num_tx_desc;
> +		bzero(kring, sizeof(*kring));
> +		len = sizeof(struct netmap_ring) +
> +			  ndesc * sizeof(struct netmap_slot);
> +		ring = netmap_ring_malloc(len);
> +		if (ring == NULL) {
> +			D("Cannot allocate tx_ring[%d] for %s", i, ifname);
> +			goto cleanup;
> +		}
> +		ND("txring[%d] at %p ofs %d", i, ring);
> +		kring->na = na;
> +		kring->ring = ring;
> +		*(int *)(uintptr_t)&ring->num_slots = kring->nkr_num_slots = ndesc;
> +		*(ssize_t *)(uintptr_t)&ring->buf_ofs =
> +		    (nm_mem.pools[NETMAP_IF_POOL]._memtotal +
> +			nm_mem.pools[NETMAP_RING_POOL]._memtotal) -
> +			netmap_ring_offset(ring);
> +
> +		/*
> +		 * IMPORTANT:
> +		 * Always keep one slot empty, so we can detect new
> +		 * transmissions comparing cur and nr_hwcur (they are
> +		 * the same only if there are no new transmissions).
> +		 */
> +		ring->avail = kring->nr_hwavail = ndesc - 1;
> +		ring->cur = kring->nr_hwcur = 0;
> +		*(int *)(uintptr_t)&ring->nr_buf_size = NETMAP_BUF_SIZE;
> +		ND("initializing slots for txring[%d]", i);
> +		if (netmap_new_bufs(nifp, ring->slot, ndesc)) {
> +			D("Cannot allocate buffers for tx_ring[%d] for %s", i, ifname);
> +			goto cleanup;
> +		}
> +	}
> +
> +	for (i = 0; i < nrx; i++) { /* Receive rings */
> +		kring = &na->rx_rings[i];
> +		ndesc = na->num_rx_desc;
> +		bzero(kring, sizeof(*kring));
> +		len = sizeof(struct netmap_ring) +
> +			  ndesc * sizeof(struct netmap_slot);
> +		ring = netmap_ring_malloc(len);
> +		if (ring == NULL) {
> +			D("Cannot allocate rx_ring[%d] for %s", i, ifname);
> +			goto cleanup;
> +		}
> +		ND("rxring[%d] at %p ofs %d", i, ring);
> +
> +		kring->na = na;
> +		kring->ring = ring;
> +		*(int *)(uintptr_t)&ring->num_slots = kring->nkr_num_slots = ndesc;
> +		*(ssize_t *)(uintptr_t)&ring->buf_ofs =
> +		    (nm_mem.pools[NETMAP_IF_POOL]._memtotal +
> +		        nm_mem.pools[NETMAP_RING_POOL]._memtotal) -
> +			netmap_ring_offset(ring);
> +
> +		ring->cur = kring->nr_hwcur = 0;
> +		ring->avail = kring->nr_hwavail = 0; /* empty */
> +		*(int *)(uintptr_t)&ring->nr_buf_size = NETMAP_BUF_SIZE;
> +		ND("initializing slots for rxring[%d]", i);
> +		if (netmap_new_bufs(nifp, ring->slot, ndesc)) {
> +			D("Cannot allocate buffers for rx_ring[%d] for %s", i, ifname);
> +			goto cleanup;
> +		}
> +	}
> +#ifdef linux
> +	// XXX initialize the selrecord structs.
> +	for (i = 0; i < ntx; i++)
> +		init_waitqueue_head(&na->tx_rings[i].si);
> +	for (i = 0; i < nrx; i++)
> +		init_waitqueue_head(&na->rx_rings[i].si);
> +	init_waitqueue_head(&na->tx_si);
> +	init_waitqueue_head(&na->rx_si);
> +#endif
> +final:
> +	/*
> +	 * fill the slots for the rx and tx rings. They contain the offset
> +	 * between the ring and nifp, so the information is usable in
> +	 * userspace to reach the ring from the nifp.
> +	 */
> +	base = netmap_if_offset(nifp);
> +	for (i = 0; i < ntx; i++) {
> +		*(ssize_t *)(uintptr_t)&nifp->ring_ofs[i] =
> +			netmap_ring_offset(na->tx_rings[i].ring) - base;
> +	}
> +	for (i = 0; i < nrx; i++) {
> +		*(ssize_t *)(uintptr_t)&nifp->ring_ofs[i+ntx] =
> +			netmap_ring_offset(na->rx_rings[i].ring) - base;
> +	}
> +	return (nifp);
> +cleanup:
> +	netmap_free_rings(na);
> +	netmap_if_free(nifp);
> +	(na->refcount)--;
> +	return NULL;
> +}
> +
> +/* call with NMA_LOCK held */
> +static void
> +netmap_memory_deref(void)
> +{
> +	nm_mem.refcount--;
> +	if (netmap_verbose)
> +		D("refcount = %d", nm_mem.refcount);
> +}
> --- a/include/uapi/Kbuild	2013-02-26 10:19:28.000000000 -0800
> +++ b/include/uapi/Kbuild	2013-03-10 10:08:20.327671428 -0700
> @@ -12,3 +12,4 @@ header-y += video/
>  header-y += drm/
>  header-y += xen/
>  header-y += scsi/
> +header-y += netmap/
> --- /dev/null	1970-01-01 00:00:00.000000000 +0000
> +++ b/include/uapi/netmap/Kbuild	2013-03-10 10:08:20.327671428 -0700
> @@ -0,0 +1,3 @@
> +# UAPI Header export list
> +header-y += netmap.h
> +header-y += netmap_user.h
> --- /dev/null	1970-01-01 00:00:00.000000000 +0000
> +++ b/include/uapi/netmap/netmap.h	2013-03-10 10:08:20.327671428 -0700
> @@ -0,0 +1,289 @@
> +/*
> + * Copyright (C) 2011 Matteo Landi, Luigi Rizzo. All rights reserved.
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions are
> + * met:
> + *
> + *   1. Redistributions of source code must retain the above copyright
> + *      notice, this list of conditions and the following disclaimer.
> + *
> + *   2. Redistributions in binary form must reproduce the above copyright
> + *      notice, this list of conditions and the following disclaimer in the
> + *      documentation and/or other materials provided with the
> + *      distribution.
> + *
> + *   3. Neither the name of the authors nor the names of their contributors
> + *      may be used to endorse or promote products derived from this
> + *      software without specific prior written permission.
> + *
> + * THIS SOFTWARE IS PROVIDED BY MATTEO LANDI AND CONTRIBUTORS "AS IS" AND
> + * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
> + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
> + * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL MATTEO LANDI OR CONTRIBUTORS
> + * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
> + * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
> + * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
> + * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
> + * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
> + * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
> + * THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +/*
> + * $FreeBSD: head/sys/net/netmap.h 231198 2012-02-08 11:43:29Z luigi $
> + * $Id: netmap.h 10879 2012-04-12 22:48:59Z luigi $
> + *
> + * Definitions of constants and the structures used by the netmap
> + * framework, for the part visible to both kernel and userspace.
> + * Detailed info on netmap is available with "man netmap" or at
> + *
> + *	http://info.iet.unipi.it/~luigi/netmap/
> + */
> +
> +#ifndef _NET_NETMAP_H_
> +#define _NET_NETMAP_H_
> +
> +/*
> + * --- Netmap data structures ---
> + *
> + * The data structures used by netmap are shown below. Those in
> + * capital letters are in an mmap()ed area shared with userspace,
> + * while others are private to the kernel.
> + * Shared structures do not contain pointers but only memory
> + * offsets, so that addressing is portable between kernel and userspace.
> +
> +
> + softc
> ++----------------+
> +| standard fields|
> +| if_pspare[0] ----------+
> ++----------------+       |
> +                         |
> ++----------------+<------+
> +|(netmap_adapter)|
> +|                |                             netmap_kring
> +| tx_rings *--------------------------------->+---------------+
> +|                |       netmap_kring         | ring    *---------.
> +| rx_rings *--------->+---------------+       | nr_hwcur      |   |
> ++----------------+    | ring    *--------.    | nr_hwavail    |   V
> +                      | nr_hwcur      |  |    | selinfo       |   |
> +                      | nr_hwavail    |  |    +---------------+   .
> +                      | selinfo       |  |    |     ...       |   .
> +                      +---------------+  |    |(ntx+1 entries)|
> +                      |    ....       |  |    |               |
> +                      |(nrx+1 entries)|  |    +---------------+
> +                      |               |  |
> +   KERNEL             +---------------+  |
> +                                         |
> +  ====================================================================
> +                                         |
> +   USERSPACE                             |      NETMAP_RING
> +                                         +---->+-------------+
> +                                             / | cur         |
> +   NETMAP_IF  (nifp, one per file desc.)    /  | avail       |
> +    +---------------+                      /   | buf_ofs     |
> +    | ni_tx_rings   |                     /    +=============+
> +    | ni_rx_rings   |                    /     | buf_idx     | slot[0]
> +    |               |                   /      | len, flags  |
> +    |               |                  /       +-------------+
> +    +===============+                 /        | buf_idx     | slot[1]
> +    | txring_ofs[0] | (rel.to nifp)--'         | len, flags  |
> +    | txring_ofs[1] |                          +-------------+
> +  (num_rings+1 entries)                     (nr_num_slots entries)
> +    | txring_ofs[n] |                          | buf_idx     | slot[n-1]
> +    +---------------+                          | len, flags  |
> +    | rxring_ofs[0] |                          +-------------+
> +    | rxring_ofs[1] |
> +  (num_rings+1 entries)
> +    | txring_ofs[n] |
> +    +---------------+
> +
> + * The private descriptor ('softc' or 'adapter') of each interface
> + * is extended with a "struct netmap_adapter" containing netmap-related
> + * info (see description in netmap_kern.h).
> + * Among other things, tx_rings and rx_rings point to the arrays of
> + * "struct netmap_kring" which in turn reache the various
> + * "struct netmap_ring", shared with userspace.
> +
> + * The NETMAP_RING is the userspace-visible replica of the NIC ring.
> + * Each slot has the index of a buffer, its length and some flags.
> + * In user space, the buffer address is computed as
> + *	(char *)ring + buf_ofs + index*NETMAP_BUF_SIZE
> + * In the kernel, buffers do not necessarily need to be contiguous,
> + * and the virtual and physical addresses are derived through
> + * a lookup table.
> + * To associate a different buffer to a slot, applications must
> + * write the new index in buf_idx, and set NS_BUF_CHANGED flag to
> + * make sure that the kernel updates the hardware ring as needed.
> + *
> + * Normally the driver is not requested to report the result of
> + * transmissions (this can dramatically speed up operation).
> + * However the user may request to report completion by setting
> + * NS_REPORT.
> + */
> +struct netmap_slot {
> +	uint32_t buf_idx; /* buffer index */
> +	uint16_t len;	/* packet length, to be copied to/from the hw ring */
> +	uint16_t flags;	/* buf changed, etc. */
> +#define	NS_BUF_CHANGED	0x0001	/* must resync the map, buffer changed */
> +#define	NS_REPORT	0x0002	/* ask the hardware to report results
> +				 * e.g. by generating an interrupt
> +				 */
> +};
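
As a small illustration of the buffer-swap mechanism described above (a sketch
for this discussion, not part of the patch; the rx and tx slot pointers are
assumed to come from an RX ring and a TX ring owned by the application), a
frame can be forwarded between rings without copying by exchanging buffer
indexes and flagging both slots:

	/* zero-copy forward of one slot from an RX ring to a TX ring (sketch) */
	static void
	swap_slot(struct netmap_slot *rx, struct netmap_slot *tx)
	{
		uint32_t idx = tx->buf_idx;

		tx->buf_idx = rx->buf_idx;	/* TX slot now points at the received buffer */
		tx->len = rx->len;
		rx->buf_idx = idx;		/* return the old TX buffer to the RX ring */
		rx->flags = tx->flags = NS_BUF_CHANGED;	/* make the kernel reload the pointers */
	}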
> +
> +/*
> + * Netmap representation of a TX or RX ring (also known as "queue").
> + * This is a queue implemented as a fixed-size circular array.
> + * At the software level, two fields are important: avail and cur.
> + *
> + * In TX rings:
> + *	avail	indicates the number of slots available for transmission.
> + *		It is updated by the kernel after every netmap system call.
> + *		It MUST BE decremented by the application when it appends a
> + *		packet.
> + *	cur	indicates the slot to use for the next packet
> + *		to send (i.e. the "tail" of the queue).
> + *		It MUST BE incremented by the application before
> + *		netmap system calls to reflect the number of newly
> + *		sent packets.
> + *		It is checked by the kernel on netmap system calls
> + *		(normally unmodified by the kernel unless invalid).
> + *
> + *   The kernel side of netmap uses two additional fields in its own
> + *   private ring structure, netmap_kring:
> + *	nr_hwcur is a copy of nr_cur on an NIOCTXSYNC.
> + *	nr_hwavail is the number of slots known as available by the
> + *		hardware. It is updated on an INTR (inc by the
> + *		number of packets sent) and on a NIOCTXSYNC
> + *		(decrease by nr_cur - nr_hwcur)
> + *		A special case, nr_hwavail is -1 if the transmit
> + *		side is idle (no pending transmits).
> + *
> + * In RX rings:
> + *	avail	is the number of packets available (possibly 0).
> + *		It MUST BE decremented by the application when it consumes
> + *		a packet, and it is updated to nr_hwavail on a NIOCRXSYNC
> + *	cur	indicates the first slot that contains a packet not
> + *		processed yet (the "head" of the queue).
> + *		It MUST BE incremented by the software when it consumes
> + *		a packet.
> + *	reserved	indicates the number of buffers before 'cur'
> + *		that the application has still in use. Normally 0,
> + *		it MUST BE incremented by the application when it
> + *		does not return the buffer immediately, and decremented
> + *		when the buffer is finally freed.
> + *
> + *   The kernel side of netmap uses two additional fields in the kring:
> + *	nr_hwcur is a copy of nr_cur on an NIOCRXSYNC
> + *	nr_hwavail is the number of packets available. It is updated
> + *		on INTR (inc by the number of new packets arrived)
> + *		and on NIOCRXSYNC (decreased by nr_cur - nr_hwcur).
> + *
> + * DATA OWNERSHIP/LOCKING:
> + *	The netmap_ring is owned by the user program and it is only
> + *	accessed or modified in the upper half of the kernel during
> + *	a system call.
> + *
> + *	The netmap_kring is only modified by the upper half of the kernel.
> + */
> +struct netmap_ring {
> +	/*
> +	 * buf_ofs is meant to be used through macros.
> +	 * It contains the offset of the buffer region from this
> +	 * descriptor.
> +	 */
> +	const ssize_t	buf_ofs;
> +	const uint32_t	num_slots;	/* number of slots in the ring. */
> +	uint32_t	avail;		/* number of usable slots */
> +	uint32_t        cur;		/* 'current' r/w position */
> +	uint32_t	reserved;	/* not refilled before current */
> +
> +	const uint16_t	nr_buf_size;
> +	uint16_t	flags;
> +#define	NR_TIMESTAMP	0x0002		/* set timestamp on *sync() */
> +
> +	struct timeval	ts;		/* time of last *sync() */
> +
> +	/* the slots follow. This struct has variable size */
> +	struct netmap_slot slot[0];	/* array of slots. */
> +};
> +
> +
> +/*
> + * Netmap representation of an interface and its queue(s).
> + * There is one netmap_if for each file descriptor on which we want
> + * to select/poll.  We assume that each interface has the same number
> + * of receive and transmit queues.
> + * select/poll operates on one or all pairs depending on the value of
> + * nr_ringid passed on the ioctl.
> + */
> +struct netmap_if {
> +	char		ni_name[IFNAMSIZ]; /* name of the interface. */
> +	const u_int	ni_version;	/* API version, currently unused */
> +	const u_int	ni_rx_rings;	/* number of rx rings */
> +	const u_int	ni_tx_rings;	/* if zero, same as ni_rx_rings */
> +	/*
> +	 * The following array contains the offset of each netmap ring
> +	 * from this structure. The first ni_tx_rings+1 entries refer
> +	 * to the tx rings, the next ni_rx_rings+1 refer to the rx rings
> +	 * (the last entry in each block refers to the host stack rings).
> +	 * The area is filled up by the kernel on NIOCREG,
> +	 * and then only read by userspace code.
> +	 */
> +	const ssize_t	ring_ofs[0];
> +};
> +
> +#ifndef NIOCREGIF
> +/*
> + * ioctl names and related fields
> + *
> + * NIOCGINFO takes a struct ifreq, the interface name is the input,
> + *	the outputs are the number of queues and the number of descriptors
> + *	for each queue (useful to set the number of threads etc.).
> + *
> + * NIOCREGIF takes an interface name within a struct ifreq,
> + *	and activates netmap mode on the interface (if possible).
> + *
> + * NIOCUNREGIF unregisters the interface associated to the fd.
> + *
> + * NIOCTXSYNC, NIOCRXSYNC synchronize tx or rx queues,
> + *	whose identity is set in NIOCREGIF through nr_ringid
> + */
> +
> +/*
> + * struct nmreq overlays a struct ifreq
> + */
> +struct nmreq {
> +	char		nr_name[IFNAMSIZ];
> +	uint32_t	nr_version;	/* API version */
> +#define	NETMAP_API	3		/* current version */
> +	uint32_t	nr_offset;	/* nifp offset in the shared region */
> +	uint32_t	nr_memsize;	/* size of the shared region */
> +	uint32_t	nr_tx_slots;	/* slots in tx rings */
> +	uint32_t	nr_rx_slots;	/* slots in rx rings */
> +	uint16_t	nr_tx_rings;	/* number of tx rings */
> +	uint16_t	nr_rx_rings;	/* number of rx rings */
> +	uint16_t	nr_ringid;	/* ring(s) we care about */
> +#define NETMAP_HW_RING	0x4000		/* low bits indicate one hw ring */
> +#define NETMAP_SW_RING	0x2000		/* process the sw ring */
> +#define NETMAP_NO_TX_POLL	0x1000	/* no automatic txsync on poll */
> +#define NETMAP_RING_MASK 0xfff		/* the ring number */
> +	uint16_t	spare1;
> +	uint32_t	spare2[4];
> +};
> +
> +/*
> + * FreeBSD uses the size value embedded in the _IOWR to determine
> + * how much to copy in/out. So we need it to match the actual
> + * data structure we pass. We put some spares in the structure
> + * to ease compatibility with other versions
> + */
> +#define NIOCGINFO	_IOWR('i', 145, struct nmreq) /* return IF info */
> +#define NIOCREGIF	_IOWR('i', 146, struct nmreq) /* interface register */
> +#define NIOCUNREGIF	_IO('i', 147) /* interface unregister */
> +#define NIOCTXSYNC	_IO('i', 148) /* sync tx queues */
> +#define NIOCRXSYNC	_IO('i', 149) /* sync rx queues */
> +#endif /* !NIOCREGIF */
> +
> +#endif /* _NET_NETMAP_H_ */
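
To show how the structures and ioctls defined in this header fit together, here
is a minimal registration sketch (error handling omitted; the include paths
assume the uapi headers are installed under netmap/ as in this patch, and
/dev/netmap is the node created by the module):

	#include <fcntl.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <sys/mman.h>
	#include <net/if.h>		/* IFNAMSIZ */
	#include <netmap/netmap.h>
	#include <netmap/netmap_user.h>

	static struct netmap_if *
	netmap_open(const char *ifname, int *pfd)
	{
		struct nmreq nr;
		void *mem;
		int fd = open("/dev/netmap", O_RDWR);

		memset(&nr, 0, sizeof(nr));
		nr.nr_version = NETMAP_API;
		strncpy(nr.nr_name, ifname, sizeof(nr.nr_name));
		ioctl(fd, NIOCREGIF, &nr);	/* put the interface in netmap mode */

		/* one shared region holds the netmap_if, all rings and all buffers */
		mem = mmap(0, nr.nr_memsize, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
		*pfd = fd;
		return NETMAP_IF(mem, nr.nr_offset);	/* per-fd descriptor */
	}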
> --- /dev/null	1970-01-01 00:00:00.000000000 +0000
> +++ b/include/uapi/netmap/netmap_user.h	2013-03-10 10:08:20.327671428 -0700
> @@ -0,0 +1,95 @@
> +/*
> + * Copyright (C) 2011 Matteo Landi, Luigi Rizzo. All rights reserved.
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions are
> + * met:
> + *
> + *   1. Redistributions of source code must retain the above copyright
> + *      notice, this list of conditions and the following disclaimer.
> + *
> + *   2. Redistributions in binary form must reproduce the above copyright
> + *      notice, this list of conditions and the following disclaimer in the
> + *      documentation and/or other materials provided with the
> + *      distribution.
> + *
> + *   3. Neither the name of the authors nor the names of their contributors
> + *      may be used to endorse or promote products derived from this
> + *      software without specific prior written permission.
> + *
> + * THIS SOFTWARE IS PROVIDED BY MATTEO LANDI AND CONTRIBUTORS "AS IS" AND
> + * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
> + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
> + * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL MATTEO LANDI OR CONTRIBUTORS
> + * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
> + * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
> + * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
> + * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
> + * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
> + * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
> + * THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +/*
> + * $FreeBSD: head/sys/net/netmap_user.h 231198 2012-02-08 11:43:29Z luigi $
> + * $Id: netmap_user.h 10879 2012-04-12 22:48:59Z luigi $
> + *
> + * This header contains the macros used to manipulate netmap structures
> + * and packets in userspace. See netmap(4) for more information.
> + *
> + * The address of the struct netmap_if, say nifp, is computed from the
> + * value returned from ioctl(.., NIOCREGIF, ...) and the mmap region:
> + *	ioctl(fd, NIOCREGIF, &req);
> + *	mem = mmap(0, ... );
> + *	nifp = NETMAP_IF(mem, req.nr_offset);
> + *		(so simple, we could just do it manually)
> + *
> + * From there:
> + *	struct netmap_ring *NETMAP_TXRING(nifp, index)
> + *	struct netmap_ring *NETMAP_RXRING(nifp, index)
> + *		we can access ring->cur, ring->avail, ring->flags
> + *
> + *	ring->slot[i] gives us the i-th slot (we can access
> + *		len, flags and buf_idx directly)
> + *
> + *	char *buf = NETMAP_BUF(ring, index) returns a pointer to
> + *		the i-th buffer
> + *
> + * Since rings are circular, we have macros to compute the next index
> + *	i = NETMAP_RING_NEXT(ring, i);
> + */
> +
> +#ifndef _NET_NETMAP_USER_H_
> +#define _NET_NETMAP_USER_H_
> +
> +#define NETMAP_IF(b, o)	(struct netmap_if *)((char *)(b) + (o))
> +
> +#define NETMAP_TXRING(nifp, index)			\
> +	((struct netmap_ring *)((char *)(nifp) +	\
> +		(nifp)->ring_ofs[index] ) )
> +
> +#define NETMAP_RXRING(nifp, index)			\
> +	((struct netmap_ring *)((char *)(nifp) +	\
> +	    (nifp)->ring_ofs[index + (nifp)->ni_tx_rings + 1] ) )
> +
> +#define NETMAP_BUF(ring, index)				\
> +	((char *)(ring) + (ring)->buf_ofs + ((index)*(ring)->nr_buf_size))
> +
> +#define NETMAP_BUF_IDX(ring, buf)			\
> +	( ((char *)(buf) - ((char *)(ring) + (ring)->buf_ofs) ) / \
> +		(ring)->nr_buf_size )
> +
> +#define	NETMAP_RING_NEXT(r, i)				\
> +	((i)+1 == (r)->num_slots ? 0 : (i) + 1 )
> +
> +#define	NETMAP_RING_FIRST_RESERVED(r)			\
> +	( (r)->cur < (r)->reserved ?			\
> +	  (r)->cur + (r)->num_slots - (r)->reserved :	\
> +	  (r)->cur - (r)->reserved )
> +
> +/*
> + * Return 1 if the given tx ring is empty.
> + */
> +#define NETMAP_TX_RING_EMPTY(r)	((r)->avail >= (r)->num_slots - 1)
> +
> +#endif /* _NET_NETMAP_USER_H_ */
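
A hedged sketch of a transmit loop built on the macros above (nifp and fd are
the descriptor and file descriptor from registration; build_frame() is a
hypothetical helper that fills a buffer and returns the frame length; error
checks omitted):

	/* send one frame per available slot on hardware TX ring 0 (sketch) */
	struct netmap_ring *ring = NETMAP_TXRING(nifp, 0);
	uint32_t cur = ring->cur;

	while (ring->avail > 0) {
		struct netmap_slot *slot = &ring->slot[cur];
		char *buf = NETMAP_BUF(ring, slot->buf_idx);

		slot->len = build_frame(buf, ring->nr_buf_size);
		cur = NETMAP_RING_NEXT(ring, cur);
		ring->avail--;			/* the application consumed one slot */
	}
	ring->cur = cur;			/* publish the new tail, then ... */
	ioctl(fd, NIOCTXSYNC, NULL);		/* ... ask the kernel to sync the hw ring */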
> --- /dev/null	1970-01-01 00:00:00.000000000 +0000
> +++ b/include/netmap/bsd_glue.h	2013-03-10 10:08:20.327671428 -0700
> @@ -0,0 +1,263 @@
> +/*
> + * (C) 2012 Luigi Rizzo - Universita` di Pisa
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions
> + * are met:
> + *   1. Redistributions of source code must retain the above copyright
> + *      notice, this list of conditions and the following disclaimer.
> + *   2. Redistributions in binary form must reproduce the above copyright
> + *      notice, this list of conditions and the following disclaimer in the
> + *    documentation and/or other materials provided with the distribution.
> + *
> + * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
> + * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
> + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
> + * ARE DISCLAIMED.  IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
> + * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
> + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
> + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
> + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
> + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
> + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
> + * SUCH DAMAGE.
> + */
> +
> +/*
> + * glue code to build the netmap bsd code under linux.
> + * Some of these tweaks are generic, some are specific for
> + * character device drivers and network code/device drivers.
> + */
> +
> +#ifndef _BSD_GLUE_H
> +#define _BSD_GLUE_H
> +
> +/* a set of headers used in netmap */
> +#include <linux/version.h>
> +#include <linux/if.h>
> +#include <linux/list.h>
> +#include <linux/mutex.h>
> +#include <linux/types.h>
> +#include <linux/time.h>
> +#include <linux/mm.h>
> +#include <linux/poll.h>
> +#include <linux/netdevice.h>
> +#include <linux/sched.h>
> +#include <linux/wait.h>
> +#include <linux/miscdevice.h>
> +//#include <linux/log2.h>	// ilog2
> +#include <linux/etherdevice.h>	// eth_type_trans
> +#include <linux/module.h>
> +#include <linux/moduleparam.h>
> +#include <linux/virtio.h>	// virt_to_phys
> +
> +#define printf(fmt, arg...)	printk(KERN_ERR fmt, ##arg)
> +#define KASSERT(a, b)		BUG_ON(!(a))
> +
> +/* Type redefinitions. XXX check them */
> +typedef	void *			bus_dma_tag_t;
> +typedef	void *			bus_dmamap_t;
> +typedef	int			bus_size_t;
> +typedef	int			bus_dma_segment_t;
> +typedef void *			bus_addr_t;
> +#define vm_paddr_t		phys_addr_t
> +/* XXX the 'off_t' on Linux corresponds to a 'long' */
> +#define vm_offset_t		uint32_t
> +struct thread;
> +
> +/* endianness macros/functions */
> +#define le16toh			le16_to_cpu
> +#define le32toh			le32_to_cpu
> +#define le64toh			le64_to_cpu
> +#define be64toh			be64_to_cpu
> +#define htole32			cpu_to_le32
> +#define htole64			cpu_to_le64
> +
> +#include <linux/jiffies.h>
> +#define	time_second	(jiffies_to_msecs(jiffies) / 1000U )
> +
> +#define bzero(a, len)		memset(a, 0, len)
> +#define bcopy(_s, _d, len) 	memcpy(_d, _s, len)
> +
> +
> +// XXX maybe implement it as a proper function somewhere
> +// it is important to set s->len before the copy.
> +#define	m_devget(_buf, _len, _ofs, _dev, _fn)	( {		\
> +	struct sk_buff *s = netdev_alloc_skb(_dev, _len);	\
> +	if (s) {						\
> +		s->len += _len;					\
> +		skb_copy_to_linear_data_offset(s, _ofs, _buf, _len);	\
> +		s->protocol = eth_type_trans(s, _dev);		\
> +	}							\
> +	s; } )
> +
> +#define	mbuf			sk_buff
> +#define	m_nextpkt		next			// chain of mbufs
> +#define m_freem(m)		dev_kfree_skb_any(m)	// free a sk_buff
> +
> +/*
> + * m_copydata() copies from mbuf to buffer following the mbuf chain.
> + * XXX check which linux equivalent we should use to follow fragmented
> + * skbufs.
> + */
> +
> +//#define m_copydata(m, o, l, b)	skb_copy_bits(m, o, b, l)
> +#define m_copydata(m, o, l, b)	skb_copy_from_linear_data_offset(m, o, b, l)
> +
> +/*
> + * struct ifnet is remapped into struct net_device on linux.
> + * ifnet has an if_softc field pointing to the device-specific struct
> + * (adapter).
> + * On linux the ifnet/net_device is at the beginning of the device-specific
> + * structure, so a pointer to the first field of the ifnet works.
> + * We don't use this in netmap, though.
> + *
> + *	if_xname	name		device name
> + *	if_capabilities	flags		// XXX not used
> + *	if_capenable	priv_flags
> + *		we would use "features" but it is all taken.
> + *		XXX check for conflict in flags use.
> + *
> + *	if_bridge	atalk_ptr	struct nm_bridge (only for VALE ports)
> + *
> + * In netmap we use if_pspare[0] to point to the netmap_adapter,
> + * in linux we have no spares so we overload ax25_ptr, and netmap
> + * capability is detected via a magic value in the area it points to.
> + */
> +#define WNA(_ifp)		(_ifp)->ax25_ptr
> +
> +#define ifnet           	net_device      /* remap */
> +#define	if_xname		name		/* field ifnet-> net_device */
> +//#define	if_capabilities		flags		/* IFCAP_NETMAP */
> +#define	if_capenable		priv_flags	/* IFCAP_NETMAP */
> +#define	if_bridge		atalk_ptr	/* remap, only for VALE ports */
> +#define ifunit_ref(_x)		dev_get_by_name(&init_net, _x);
> +#define if_rele(ifp)		dev_put(ifp)
> +#define CURVNET_SET(x)
> +#define CURVNET_RESTORE(x)
> +
> +
> +/*
> + * We use spin_lock_irqsave() because we use the lock in the
> + * (hard) interrupt context.
> + */
> +typedef struct {
> +        spinlock_t      sl;
> +        ulong           flags;
> +} safe_spinlock_t;
> +
> +static inline void mtx_lock(safe_spinlock_t *m)
> +{
> +        spin_lock_irqsave(&(m->sl), m->flags);
> +}
> +
> +static inline void mtx_unlock(safe_spinlock_t *m)
> +{
> +	ulong flags = ACCESS_ONCE(m->flags);
> +        spin_unlock_irqrestore(&(m->sl), flags);
> +}
> +
> +#define mtx_init(a, b, c, d)	spin_lock_init(&((a)->sl))
> +#define mtx_destroy(a)		// XXX spin_lock_destroy(a)
> +
> +/* use volatile to fix a probable compiler error on 2.6.25 */
> +#define malloc(_size, type, flags)                      \
> +        ({ volatile int _v = _size; kmalloc(_v, GFP_ATOMIC | __GFP_ZERO); })
> +
> +#define free(a, t)	kfree(a)
> +
> +// XXX do we need __GFP_ZERO ?
> +// XXX do we need GFP_DMA for slots ?
> +// http://www.mjmwired.net/kernel/Documentation/DMA-API.txt
> +
> +#define contigmalloc(sz, ty, flags, a, b, pgsz, c)		\
> +	(char *) __get_free_pages(GFP_ATOMIC |  __GFP_ZERO,	\
> +		    ilog2(roundup_pow_of_two((sz)/PAGE_SIZE)))
> +#define contigfree(va, sz, ty)	free_pages((unsigned long)va,	\
> +		    ilog2(roundup_pow_of_two(sz)/PAGE_SIZE))
> +
> +#define vtophys		virt_to_phys
> +
> +/*--- selrecord and friends ---*/
> +/* wake_up() or wake_up_interruptible() ? */
> +#define	selwakeuppri(sw, pri)	wake_up(sw)
> +#define selrecord(x, y)		poll_wait((struct file *)x, y, pwait)
> +#define knlist_destroy(x)	// XXX todo
> +
> +/* we use tsleep/wakeup to sleep a bit. */
> +#define	tsleep(a, b, c, t)	msleep(10)	// XXX
> +#define	wakeup(sw)				// XXX double check
> +#define microtime		do_gettimeofday
> +
> +
> +/*
> + * The following trick is to map a struct cdev into a struct miscdevice
> + */
> +#define	cdev			miscdevice
> +
> +
> +/*
> + * XXX to complete - the dmamap interface
> + */
> +#define	BUS_DMA_NOWAIT	0
> +#define	bus_dmamap_load(_1, _2, _3, _4, _5, _6, _7)
> +#define	bus_dmamap_unload(_1, _2)
> +
> +typedef int (d_mmap_t)(struct file *f, struct vm_area_struct *vma);
> +typedef unsigned int (d_poll_t)(struct file * file, struct poll_table_struct *pwait);
> +
> +/*
> + * make_dev will set an error and return the first argument.
> + * This relies on the availability of the 'error' local variable.
> + */
> +#define make_dev(_cdev, _zero, _uid, _gid, _perm, _name)	\
> +	({error = misc_register(_cdev);				\
> +	D("run mknod /dev/%s c %d %d # error %d",		\
> +	    (_cdev)->name, MISC_MAJOR, (_cdev)->minor, error);	\
> +	 _cdev; } )
> +#define destroy_dev(_cdev)	misc_deregister(_cdev)
> +
> +/*--- sysctl API ----*/
> +/*
> + * linux: sysctls are mapped into module parameters under /sys/module/
> + * windows: they are emulated via get/setsockopt
> + */
> +#define CTLFLAG_RD              1
> +#define CTLFLAG_RW              2
> +
> +struct sysctl_oid;
> +struct sysctl_req;
> +
> +
> +#define SYSCTL_DECL(_1)
> +#define SYSCTL_OID(_1, _2, _3, _4, _5, _6, _7, _8)
> +#define SYSCTL_NODE(_1, _2, _3, _4, _5, _6)
> +#define _SYSCTL_BASE(_name, _var, _ty, _perm)			\
> +		module_param_named(_name, *(_var), _ty,         \
> +			( (_perm) == CTLFLAG_RD) ? 0444: 0644 )
> +
> +#define SYSCTL_PROC(_base, _oid, _name, _mode, _var, _val, _desc, _a, _b)
> +
> +#define SYSCTL_INT(_base, _oid, _name, _mode, _var, _val, _desc)        \
> +        _SYSCTL_BASE(_name, _var, int, _mode)
> +
> +#define SYSCTL_LONG(_base, _oid, _name, _mode, _var, _val, _desc)       \
> +        _SYSCTL_BASE(_name, _var, long, _mode)
> +
> +#define SYSCTL_ULONG(_base, _oid, _name, _mode, _var, _val, _desc)      \
> +        _SYSCTL_BASE(_name, _var, ulong, _mode)
> +
> +#define SYSCTL_UINT(_base, _oid, _name, _mode, _var, _val, _desc)       \
> +         _SYSCTL_BASE(_name, _var, uint, _mode)
> +
> +#define TUNABLE_INT(_name, _ptr)
> +
> +#define SYSCTL_VNET_PROC                SYSCTL_PROC
> +#define SYSCTL_VNET_INT                 SYSCTL_INT
> +
> +#define SYSCTL_HANDLER_ARGS             \
> +        struct sysctl_oid *oidp, void *arg1, int arg2, struct sysctl_req *req
> +int sysctl_handle_int(SYSCTL_HANDLER_ARGS);
> +int sysctl_handle_long(SYSCTL_HANDLER_ARGS);
> +
> +#endif /* _BSD_GLUE_H */
> --- /dev/null	1970-01-01 00:00:00.000000000 +0000
> +++ b/include/netmap/netmap_kern.h	2013-03-10 11:30:37.253528570 -0700
> @@ -0,0 +1,474 @@
> +/*
> + * Copyright (C) 2011-2012 Matteo Landi, Luigi Rizzo. All rights reserved.
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions
> + * are met:
> + *   1. Redistributions of source code must retain the above copyright
> + *      notice, this list of conditions and the following disclaimer.
> + *   2. Redistributions in binary form must reproduce the above copyright
> + *      notice, this list of conditions and the following disclaimer in the
> + *    documentation and/or other materials provided with the distribution.
> + *
> + * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
> + * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
> + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
> + * ARE DISCLAIMED.  IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
> + * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
> + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
> + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
> + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
> + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
> + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
> + * SUCH DAMAGE.
> + */
> +
> +/*
> + * $FreeBSD: head/sys/dev/netmap/netmap_kern.h 238985 2012-08-02 11:59:43Z luigi $
> + * $Id: netmap_kern.h 12089 2013-02-15 01:20:25Z luigi $
> + *
> + * The header contains the definitions of constants and function
> + * prototypes used only in kernelspace.
> + */
> +
> +#ifndef _NET_NETMAP_KERN_H_
> +#define _NET_NETMAP_KERN_H_
> +
> +#define NETMAP_MEM2    // use the new memory allocator
> +
> +#if defined(__FreeBSD__)
> +#define likely(x)	__builtin_expect(!!(x), 1)
> +#define unlikely(x)	__builtin_expect(!!(x), 0)
> +
> +#define	NM_LOCK_T	struct mtx
> +#define	NM_SELINFO_T	struct selinfo
> +#define	MBUF_LEN(m)	((m)->m_pkthdr.len)
> +#define	NM_SEND_UP(ifp, m)	((ifp)->if_input)(ifp, m)
> +#elif defined (linux)
> +
> +#define	NM_LOCK_T	safe_spinlock_t	// see bsd_glue.h
> +#define	NM_SELINFO_T	wait_queue_head_t
> +#define	MBUF_LEN(m)	((m)->len)
> +#define	NM_SEND_UP(ifp, m)	netif_rx(m)
> +
> +#ifndef DEV_NETMAP
> +#define DEV_NETMAP
> +#endif
> +
> +/*
> + * IFF_NETMAP goes into net_device's priv_flags (if_capenable).
> + * Map it to BSD style cap until this driver is cleaned up.
> + */
> +#define IFCAP_NETMAP	IFF_NETMAP
> +
> +
> +#elif defined (__APPLE__)
> +#warning apple support is incomplete.
> +#define likely(x)	__builtin_expect(!!(x), 1)
> +#define unlikely(x)	__builtin_expect(!!(x), 0)
> +#define	NM_LOCK_T	IOLock *
> +#define	NM_SELINFO_T	struct selinfo
> +#define	MBUF_LEN(m)	((m)->m_pkthdr.len)
> +#define	NM_SEND_UP(ifp, m)	((ifp)->if_input)(ifp, m)
> +
> +#else
> +#error unsupported platform
> +#endif
> +
> +#define ND(format, ...)
> +#define D(format, ...)						\
> +	do {							\
> +		struct timeval __xxts;				\
> +		microtime(&__xxts);				\
> +		printf("%03d.%06d %s [%d] " format "\n",	\
> +		(int)__xxts.tv_sec % 1000, (int)__xxts.tv_usec,	\
> +		__FUNCTION__, __LINE__, ##__VA_ARGS__);		\
> +	} while (0)
> +
> +/* rate limited, lps indicates how many per second */
> +#define RD(lps, format, ...)					\
> +	do {							\
> +		static int t0, __cnt;				\
> +		if (t0 != time_second) {			\
> +			t0 = time_second;			\
> +			__cnt = 0;				\
> +		}						\
> +		if (__cnt++ < lps)				\
> +			D(format, ##__VA_ARGS__);		\
> +	} while (0)
> +
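
For example (an illustrative call, not taken from the patch; ring_nr is just a
placeholder), a driver hot path can bound its own logging with:

	RD(5, "rx ring %d: no more slots", ring_nr);	/* at most 5 lines per second */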
> +struct netmap_adapter;
> +
> +/*
> + * private, kernel view of a ring. Keeps track of the status of
> + * a ring across system calls.
> + *
> + *	nr_hwcur	index of the next buffer to refill.
> + *			It corresponds to ring->cur - ring->reserved
> + *
> + *	nr_hwavail	the number of slots "owned" by userspace.
> + *			nr_hwavail =:= ring->avail + ring->reserved
> + *
> + * The indexes in the NIC and netmap rings are offset by nkr_hwofs slots.
> + * This is so that, on a reset, buffers owned by userspace are not
> + * modified by the kernel. In particular:
> + * RX rings: the next empty buffer (hwcur + hwavail + hwofs) coincides with
> + * 	the next empty buffer as known by the hardware (next_to_check or so).
> + * TX rings: hwcur + hwofs coincides with next_to_send
> + *
> + * For received packets, slot->flags is set to nkr_slot_flags
> + * so we can provide a proper initial value (e.g. set NS_FORWARD
> + * when operating in 'transparent' mode).
> + */
> +struct netmap_kring {
> +	struct netmap_ring *ring;
> +	u_int nr_hwcur;
> +	int nr_hwavail;
> +	u_int nr_kflags;	/* private driver flags */
> +#define NKR_PENDINTR	0x1	// Pending interrupt.
> +	u_int nkr_num_slots;
> +
> +	uint16_t	nkr_slot_flags;	/* initial value for flags */
> +	int	nkr_hwofs;	/* offset between NIC and netmap ring */
> +	struct netmap_adapter *na;
> +	NM_SELINFO_T si;	/* poll/select wait queue */
> +	NM_LOCK_T q_lock;	/* used if no device lock available */
> +} __attribute__((__aligned__(64)));
> +
> +/*
> + * This struct extends the 'struct adapter' (or
> + * equivalent) device descriptor. It contains all fields needed to
> + * support netmap operation.
> + */
> +struct netmap_adapter {
> +	/*
> +	 * On linux we do not have a good way to tell if an interface
> +	 * is netmap-capable. So we use the following trick:
> +	 * NA(ifp) points here, and the first entry (which hopefully
> +	 * always exists and is at least 32 bits) contains a magic
> +	 * value which we can use to detect that the interface is good.
> +	 */
> +	uint32_t magic;
> +	uint32_t na_flags;	/* future place for IFCAP_NETMAP */
> +#define NAF_SKIP_INTR	1	/* use the regular interrupt handler.
> +				 * useful during initialization
> +				 */
> +	int refcount; /* number of user-space descriptors using this
> +			 interface, which is equal to the number of
> +			 struct netmap_if objs in the mapped region. */
> +	/*
> +	 * The selwakeup in the interrupt thread can use per-ring
> +	 * and/or global wait queues. We track how many clients
> +	 * of each type we have so we can optimize the drivers,
> +	 * and especially avoid huge contention on the locks.
> +	 */
> +	int na_single;	/* threads attached to a single hw queue */
> +	int na_multi;	/* threads attached to multiple hw queues */
> +
> +	int separate_locks; /* set if the interface supports different
> +			       locks for rx, tx and core. */
> +
> +	u_int num_rx_rings; /* number of adapter receive rings */
> +	u_int num_tx_rings; /* number of adapter transmit rings */
> +
> +	u_int num_tx_desc; /* number of descriptors in each queue */
> +	u_int num_rx_desc;
> +
> +	/* tx_rings and rx_rings are private but allocated
> +	 * as a contiguous chunk of memory. Each array has
> +	 * N+1 entries, for the adapter queues and for the host queue.
> +	 */
> +	struct netmap_kring *tx_rings; /* array of TX rings. */
> +	struct netmap_kring *rx_rings; /* array of RX rings. */
> +
> +	NM_SELINFO_T tx_si, rx_si;	/* global wait queues */
> +
> +	/* copy of if_qflush and if_transmit pointers, to intercept
> +	 * packets from the network stack when netmap is active.
> +	 */
> +	int     (*if_transmit)(struct ifnet *, struct mbuf *);
> +
> +	/* references to the ifnet and device routines, used by
> +	 * the generic netmap functions.
> +	 */
> +	struct ifnet *ifp; /* adapter is ifp->if_softc */
> +
> +	NM_LOCK_T core_lock;	/* used if no device lock available */
> +
> +	int (*nm_register)(struct ifnet *, int onoff);
> +	void (*nm_lock)(struct ifnet *, int what, u_int ringid);
> +	int (*nm_txsync)(struct ifnet *, u_int ring, int lock);
> +	int (*nm_rxsync)(struct ifnet *, u_int ring, int lock);
> +	/* return configuration information */
> +	int (*nm_config)(struct ifnet *, u_int *txr, u_int *txd,
> +					u_int *rxr, u_int *rxd);
> +
> +	int bdg_port;
> +#ifdef linux
> +	struct net_device_ops nm_ndo;
> +	int if_refcount;	// XXX additions for bridge
> +#endif /* linux */
> +};
> +
> +/*
> + * The combination of "enable" (ifp->if_capenable & IFCAP_NETMAP)
> + * and refcount gives the status of the interface, namely:
> + *
> + *	enable	refcount	Status
> + *
> + *	FALSE	0		normal operation
> + *	FALSE	!= 0		-- (impossible)
> + *	TRUE	1		netmap mode
> + *	TRUE	0		being deleted.
> + */
> +
> +#define NETMAP_DELETING(_na)  (  ((_na)->refcount == 0) &&	\
> +	( (_na)->ifp->if_capenable & IFCAP_NETMAP) )
> +
> +/*
> + * parameters for (*nm_lock)(adapter, what, index)
> + */
> +enum {
> +	NETMAP_NO_LOCK = 0,
> +	NETMAP_CORE_LOCK, NETMAP_CORE_UNLOCK,
> +	NETMAP_TX_LOCK, NETMAP_TX_UNLOCK,
> +	NETMAP_RX_LOCK, NETMAP_RX_UNLOCK,
> +#ifdef __FreeBSD__
> +#define	NETMAP_REG_LOCK		NETMAP_CORE_LOCK
> +#define	NETMAP_REG_UNLOCK	NETMAP_CORE_UNLOCK
> +#else
> +	NETMAP_REG_LOCK, NETMAP_REG_UNLOCK
> +#endif
> +};
> +
> +/*
> + * The following are support routines used by individual drivers to
> + * support netmap operation.
> + *
> + * netmap_attach() initializes a struct netmap_adapter, allocating the
> + * 	struct netmap_ring's and the struct selinfo.
> + *
> + * netmap_detach() frees the memory allocated by netmap_attach().
> + *
> + * netmap_start() replaces the if_transmit routine of the interface,
> + *	and is used to intercept packets coming from the stack.
> + *
> + * netmap_load_map/netmap_reload_map are helper routines to set/reset
> + *	the dmamap for a packet buffer
> + *
> + * netmap_reset() is a helper routine to be called in the driver
> + *	when reinitializing a ring.
> + */
> +int netmap_attach(struct netmap_adapter *, int);
> +void netmap_detach(struct ifnet *);
> +int netmap_start(struct ifnet *, struct mbuf *);
> +enum txrx { NR_RX = 0, NR_TX = 1 };
> +struct netmap_slot *netmap_reset(struct netmap_adapter *na,
> +	enum txrx tx, int n, u_int new_cur);
> +int netmap_ring_reinit(struct netmap_kring *);
> +
> +extern u_int netmap_buf_size;
> +#define NETMAP_BUF_SIZE	netmap_buf_size
> +extern int netmap_mitigate;
> +extern int netmap_no_pendintr;
> +extern u_int netmap_total_buffers;
> +extern char *netmap_buffer_base;
> +extern int netmap_verbose;	// XXX debugging
> +enum {                                  /* verbose flags */
> +	NM_VERB_ON = 1,                 /* generic verbose */
> +	NM_VERB_HOST = 0x2,             /* verbose host stack */
> +	NM_VERB_RXSYNC = 0x10,          /* verbose on rxsync/txsync */
> +	NM_VERB_TXSYNC = 0x20,
> +	NM_VERB_RXINTR = 0x100,         /* verbose on rx/tx intr (driver) */
> +	NM_VERB_TXINTR = 0x200,
> +	NM_VERB_NIC_RXSYNC = 0x1000,    /* verbose on rx/tx intr (driver) */
> +	NM_VERB_NIC_TXSYNC = 0x2000,
> +};
> +
> +/*
> + * NA returns a pointer to the struct netmap_adapter from the ifp,
> + * WNA is used to write it.
> + */
> +#ifndef WNA
> +#define	WNA(_ifp)	(_ifp)->if_pspare[0]
> +#endif
> +#define	NA(_ifp)	((struct netmap_adapter *)WNA(_ifp))
> +
> +/*
> + * Macros to determine if an interface is netmap capable or netmap enabled.
> + * See the magic field in struct netmap_adapter.
> + */
> +#ifdef __FreeBSD__
> +/*
> + * on FreeBSD just use if_capabilities and if_capenable.
> + */
> +#define NETMAP_CAPABLE(ifp)	(NA(ifp) &&		\
> +	(ifp)->if_capabilities & IFCAP_NETMAP )
> +
> +#define	NETMAP_SET_CAPABLE(ifp)				\
> +	(ifp)->if_capabilities |= IFCAP_NETMAP
> +
> +#else	/* linux */
> +
> +/*
> + * on linux:
> + * we check if NA(ifp) is set and its first element has a related
> + * magic value. The capenable is within the struct netmap_adapter.
> + */
> +#define	NETMAP_MAGIC	0x52697a7a
> +
> +#define NETMAP_CAPABLE(ifp)	(NA(ifp) &&		\
> +	((uint32_t)(uintptr_t)NA(ifp) ^ NA(ifp)->magic) == NETMAP_MAGIC )
> +
> +#define	NETMAP_SET_CAPABLE(ifp)				\
> +	NA(ifp)->magic = ((uint32_t)(uintptr_t)NA(ifp)) ^ NETMAP_MAGIC
> +
> +#endif	/* linux */
> +
> +#ifdef __FreeBSD__
> +/* Callback invoked by the dma machinery after a successful dmamap_load */
> +static void netmap_dmamap_cb(__unused void *arg,
> +    __unused bus_dma_segment_t * segs, __unused int nseg, __unused int error)
> +{
> +}
> +
> +/* bus_dmamap_load wrapper: call aforementioned function if map != NULL.
> + * XXX can we do it without a callback ?
> + */
> +static inline void
> +netmap_load_map(bus_dma_tag_t tag, bus_dmamap_t map, void *buf)
> +{
> +	if (map)
> +		bus_dmamap_load(tag, map, buf, NETMAP_BUF_SIZE,
> +		    netmap_dmamap_cb, NULL, BUS_DMA_NOWAIT);
> +}
> +
> +/* update the map when a buffer changes. */
> +static inline void
> +netmap_reload_map(bus_dma_tag_t tag, bus_dmamap_t map, void *buf)
> +{
> +	if (map) {
> +		bus_dmamap_unload(tag, map);
> +		bus_dmamap_load(tag, map, buf, NETMAP_BUF_SIZE,
> +		    netmap_dmamap_cb, NULL, BUS_DMA_NOWAIT);
> +	}
> +}
> +#else /* linux */
> +
> +/*
> + * XXX How do we redefine these functions:
> + *
> + * on linux we need
> + *	dma_map_single(&pdev->dev, virt_addr, len, direction)
> + *	dma_unmap_single(&adapter->pdev->dev, phys_addr, len, direction)
> + * The len can be implicit (on netmap it is NETMAP_BUF_SIZE);
> + * unfortunately the direction is not, so we need to change
> + * something to have a cross-platform API
> + */
> +#define netmap_load_map(_t, _m, _b)
> +#define netmap_reload_map(_t, _m, _b)
> +#if 0
> +	struct e1000_buffer *buffer_info =  &tx_ring->buffer_info[l];
> +	/* set time_stamp *before* dma to help avoid a possible race */
> +	buffer_info->time_stamp = jiffies;
> +	buffer_info->mapped_as_page = false;
> +	buffer_info->length = len;
> +	//buffer_info->next_to_watch = l;
> +	/* reload dma map */
> +	dma_unmap_single(&adapter->pdev->dev, buffer_info->dma,
> +			NETMAP_BUF_SIZE, DMA_TO_DEVICE);
> +	buffer_info->dma = dma_map_single(&adapter->pdev->dev,
> +			addr, NETMAP_BUF_SIZE, DMA_TO_DEVICE);
> +
> +	if (dma_mapping_error(&adapter->pdev->dev, buffer_info->dma)) {
> +		D("dma mapping error");
> +		/* goto dma_error; See e1000_put_txbuf() */
> +		/* XXX reset */
> +	}
> +	tx_desc->buffer_addr = htole64(buffer_info->dma); //XXX
> +
> +#endif
> +
> +/*
> + * The bus_dmamap_sync() can be one of wmb() or rmb() depending on direction.
> + */
> +#define bus_dmamap_sync(_a, _b, _c)
> +
> +#endif /* linux */
> +
> +/*
> + * functions to map NIC to KRING indexes (n2k) and vice versa (k2n)
> + */
> +static inline int
> +netmap_idx_n2k(struct netmap_kring *kr, int idx)
> +{
> +	int n = kr->nkr_num_slots;
> +	idx += kr->nkr_hwofs;
> +	if (idx < 0)
> +		return idx + n;
> +	else if (idx < n)
> +		return idx;
> +	else
> +		return idx - n;
> +}
> +
> +
> +static inline int
> +netmap_idx_k2n(struct netmap_kring *kr, int idx)
> +{
> +	int n = kr->nkr_num_slots;
> +	idx -= kr->nkr_hwofs;
> +	if (idx < 0)
> +		return idx + n;
> +	else if (idx < n)
> +		return idx;
> +	else
> +		return idx - n;
> +}
> +
> +
> +#ifdef NETMAP_MEM2
> +/* Entries of the look-up table. */
> +struct lut_entry {
> +	void *vaddr;		/* virtual address. */
> +	vm_paddr_t paddr;	/* physical address. */
> +};
> +
> +struct netmap_obj_pool;
> +extern struct lut_entry *netmap_buffer_lut;
> +#define NMB_VA(i)	(netmap_buffer_lut[i].vaddr)
> +#define NMB_PA(i)	(netmap_buffer_lut[i].paddr)
> +#else /* NETMAP_MEM1 */
> +#define NMB_VA(i)	(netmap_buffer_base + (i * NETMAP_BUF_SIZE) )
> +#endif /* NETMAP_MEM2 */
> +
> +/*
> + * NMB returns the virtual address of a buffer (buffer 0 on bad index).
> + * PNMB also fills in the physical address.
> + */
> +static inline void *
> +NMB(struct netmap_slot *slot)
> +{
> +	uint32_t i = slot->buf_idx;
> +	return (unlikely(i >= netmap_total_buffers)) ?  NMB_VA(0) : NMB_VA(i);
> +}
> +
> +static inline void *
> +PNMB(struct netmap_slot *slot, uint64_t *pp)
> +{
> +	uint32_t i = slot->buf_idx;
> +	void *ret = (i >= netmap_total_buffers) ? NMB_VA(0) : NMB_VA(i);
> +#ifdef NETMAP_MEM2
> +	*pp = (i >= netmap_total_buffers) ? NMB_PA(0) : NMB_PA(i);
> +#else
> +	*pp = vtophys(ret);
> +#endif
> +	return ret;
> +}
> +
> +/* default functions to handle rx/tx interrupts */
> +int netmap_rx_irq(struct ifnet *, int, int *);
> +#define netmap_tx_irq(_n, _q) netmap_rx_irq(_n, _q, NULL)
> +
> +extern int netmap_copy;
> +#endif /* _NET_NETMAP_KERN_H_ */
> --- a/include/uapi/linux/if.h	2013-02-16 09:10:41.000000000 -0800
> +++ b/include/uapi/linux/if.h	2013-03-10 11:26:44.500548075 -0700
> @@ -83,6 +83,7 @@
>  #define IFF_SUPP_NOFCS	0x80000		/* device supports sending custom FCS */
>  #define IFF_LIVE_ADDR_CHANGE 0x100000	/* device supports hardware address
>  					 * change when it's running */
> +#define IFF_NETMAP	0x200000	/* device used with netmap */
>  
>  
>  #define IF_GET_IFACE	0x0001		/* for querying only */
David Miller April 23, 2013, 7:10 a.m. UTC | #14
Do not quote a HUGE patch when replying with some small commentary,
instead just quote a small relevant part of the posting that you
want to respond to.

Everyone on this mailing list does not need to see the ENTIRE patch
again.
Willy Tarreau April 28, 2013, 10:33 p.m. UTC | #15
Hi Jamal,

On Sat, Apr 20, 2013 at 10:57:15AM -0400, Jamal Hadi Salim wrote:
> On 13-04-20 07:31 AM, Daniel Borkmann wrote:
> 
> >Also, I just looked over Netmap's Usenix paper from 2012, where they
> >compare netmap against pktgen, and while they state the version of the 
> >FreeBSD
> >kernel
> >where they did the evaluation on, they just don't even mention the Linux'
> >kernel version, their Linux kernel setup etc. Not even mentioning a
> >comparison
> >of PF_PACKET+fanout (similarly as the PF_RING project seems to avoid this
> >comparison and only presents perf numbers where they just count packets !).
> >Also, I've seen other papers published in 2012 on this topic, where they
> >compare performance with a 2.6.2x kernel, hm, quite sad actually.
> 
> I hope I can put your doubts to rest. Netmap does provide the 
> performance it claims to. I did play with it about 6-9 months back and i 
> was able to loopback wirerate 10Gbps (~14.4Mpps) 64B packets on a 
> _single core_. i.e i send to from machine A to B which echoes back to 
> the sender via a driver hack i had on the intel driver and i count the 
> packets. I should note that this was with machines that have circa 2010 
> capabilities (and they were cheap too).

I second this. I experimented with packet filtering and generation at line
rate on a single core with netmap. This is ~28 Mpps on 2 cores. I could
barely achieve 8.8 Mpps using all 6 cores in this machine with AF_PACKET
and mmap.

There are currently shortcomings in netmap, including the loss of the csum
information from incoming packets, and the lack of gso/tso support for
outgoing ones. I don't much like the way it bypasses the driver using
netif_rx() etc., and would prefer a design which sits between the NIC and
the driver so as to be as transparent as possible. But for some specific
usages it can be great. And it's true that for capturing line-rate traffic
on production systems without eating all their resources, it's nice as well.

I don't know if it's mature enough to be usable as-is in the kernel,
but I think it could get enough attention to improve significantly.

Regards,
Willy

chetan L May 7, 2013, 5:20 p.m. UTC | #16
On Fri, Apr 19, 2013 at 4:49 PM, Stephen Hemminger
<stephen@networkplumber.org> wrote:

>
> I get 7Mpps (single queue) with ixgbe and pktgen.
> Easily hit 14.8 Mpps (single queue) with netmap.
>
> The real problem is that DPDK and netmap can do multiple packets per request
> to driver. Right now there is one PCI bus transaction per packet with current
> driver model.
>

Look at tpacket_v3 and you will realize that you can easily transfer a
block of packets. A block descriptor (either the one we have or a
variant of it) is all you need when you are doing real-life packet
capture/transmit. Sure, you might need to add other packet descriptors,
but that is easy to achieve. The tpacket_v3 block descriptor (not packet
descriptor) has some fields that even state-of-the-art FPGAs didn't have
on them when I first started venturing out in this space.
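
A rough sketch of a TPACKET_V3 rx ring setup, for reference (error handling
omitted; the block/frame sizes below are just placeholders):

	#include <arpa/inet.h>
	#include <sys/mman.h>
	#include <sys/socket.h>
	#include <linux/if_ether.h>
	#include <linux/if_packet.h>

	int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
	int ver = TPACKET_V3;
	setsockopt(fd, SOL_PACKET, PACKET_VERSION, &ver, sizeof(ver));

	struct tpacket_req3 req = {
		.tp_block_size     = 1 << 22,	/* 4 MB per block */
		.tp_block_nr       = 64,
		.tp_frame_size     = 1 << 11,
		.tp_frame_nr       = (1 << 22) / (1 << 11) * 64,
		.tp_retire_blk_tov = 60,	/* ms before a block is retired */
	};
	setsockopt(fd, SOL_PACKET, PACKET_RX_RING, &req, sizeof(req));

	void *ring = mmap(NULL, (size_t)req.tp_block_size * req.tp_block_nr,
			  PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	/* poll() on fd, then walk one struct tpacket_block_desc per retired
	 * block and the struct tpacket3_hdr entries inside it. */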

As far as coalescing PCIe transactions is concerned, it also depends on
how much buffer the NIC/NPU/FPGA has. Cheap NICs don't have an onboard
buffer, so they can't absorb the spikes and batch small transfers. If
we test a NIC with another NIC, it's not going to tell the whole story. A
simple Spirent test will expose this.
diff mbox

Patch

--- a/drivers/staging/Kconfig	2013-02-26 10:19:35.000000000 -0800
+++ b/drivers/staging/Kconfig	2013-03-10 10:08:20.323671490 -0700
@@ -140,4 +140,6 @@  source "drivers/staging/zcache/Kconfig"
 
 source "drivers/staging/goldfish/Kconfig"
 
+source "drivers/staging/netmap/Kconfig"
+
 endif # STAGING
--- a/drivers/staging/Makefile	2013-02-26 10:19:35.000000000 -0800
+++ b/drivers/staging/Makefile	2013-03-10 10:08:48.555305267 -0700
@@ -62,3 +62,4 @@  obj-$(CONFIG_SB105X)		+= sb105x/
 obj-$(CONFIG_FIREWIRE_SERIAL)	+= fwserial/
 obj-$(CONFIG_ZCACHE)		+= zcache/
 obj-$(CONFIG_GOLDFISH)		+= goldfish/
+obj-$(CONFIG_NETMAP)		+= netmap/
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ b/drivers/staging/netmap/Kconfig	2013-03-10 10:08:20.323671490 -0700
@@ -0,0 +1,16 @@ 
+#
+# Netmap - user mode packet processing framework
+#
+
+config NETMAP
+	tristate "Netmap - user mode networking"
+	depends on NET && !IOMMU_API
+	default n
+	help
+	  If you say Y here, you will get experimental support for
+	  netmap, a framework for fast packet I/O using memory mapped
+	  buffers.
+
+	  See <http://info.iet.unipi.it/~luigi/netmap/> for more information.
+
+	  If unsure, say N.
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ b/drivers/staging/netmap/Makefile	2013-03-10 10:08:20.323671490 -0700
@@ -0,0 +1,2 @@ 
+EXTRA_CFLAGS := -DDEBUG
+obj-$(CONFIG_NETMAP) += netmap.o
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ b/drivers/staging/netmap/netmap.c	2013-03-10 16:52:34.160592316 -0700
@@ -0,0 +1,2515 @@ 
+/*
+ * Copyright (C) 2011-2012 Matteo Landi, Luigi Rizzo. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *   1. Redistributions of source code must retain the above copyright
+ *      notice, this list of conditions and the following disclaimer.
+ *   2. Redistributions in binary form must reproduce the above copyright
+ *      notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED.  IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ */
+
+#define NM_BRIDGE
+
+/*
+ * This module supports memory mapped access to network devices,
+ * see netmap(4).
+ *
+ * The module uses a large memory pool allocated by the kernel
+ * and accessible as mmapped memory by multiple userspace threads/processes.
+ * The memory pool contains packet buffers and "netmap rings",
+ * i.e. user-accessible copies of the interface's queues.
+ *
+ * Access to the network card works like this:
+ * 1. a process/thread issues one or more open() calls on /dev/netmap, to
+ *    create a select()able file descriptor on which events are reported.
+ * 2. on each descriptor, the process issues an ioctl() to identify
+ *    the interface that should report events to the file descriptor.
+ * 3. on each descriptor, the process issues an mmap() request to
+ *    map the shared memory region within the process' address space.
+ *    The list of interesting queues is indicated by a location in
+ *    the shared memory region.
+ * 4. using the functions in the netmap(4) userspace API, a process
+ *    can look up the occupation state of a queue, access memory buffers,
+ *    and retrieve received packets or enqueue packets to transmit.
+ * 5. using some ioctl()s the process can synchronize the userspace view
+ *    of the queue with the actual status in the kernel. This includes both
+ *    receiving the notification of new packets, and transmitting new
+ *    packets on the output interface.
+ * 6. select() or poll() can be used to wait for events on individual
+ *    transmit or receive queues (or all queues for a given interface).
+ */
+
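+/*
+ * A minimal userspace sketch of the sequence above (error handling and
+ * includes omitted; the interface name and ring index are placeholders,
+ * see netmap(4) and the macros in netmap_user.h):
+ *
+ *	struct nmreq req = { .nr_version = NETMAP_API };
+ *	int fd = open("/dev/netmap", O_RDWR);			// step 1
+ *
+ *	strncpy(req.nr_name, "eth0", sizeof(req.nr_name));
+ *	ioctl(fd, NIOCREGIF, &req);				// step 2
+ *	char *mem = mmap(NULL, req.nr_memsize,
+ *		PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);	// step 3
+ *	struct netmap_if *nifp = NETMAP_IF(mem, req.nr_offset);
+ *	struct netmap_ring *ring = NETMAP_RXRING(nifp, 0);	// step 4
+ *
+ *	struct pollfd pfd = { .fd = fd, .events = POLLIN };
+ *	poll(&pfd, 1, -1);					// step 6
+ *	while (ring->avail > 0) {				// step 4
+ *		struct netmap_slot *slot = &ring->slot[ring->cur];
+ *		char *buf = NETMAP_BUF(ring, slot->buf_idx);
+ *		// ... process slot->len bytes at buf ...
+ *		ring->cur = NETMAP_RING_NEXT(ring, ring->cur);
+ *		ring->avail--;
+ *	}
+ */
+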
+#ifdef linux
+#include <netmap/bsd_glue.h>
+static netdev_tx_t linux_netmap_start(struct sk_buff *skb, struct net_device *dev);
+#endif /* linux */
+
+#ifdef __APPLE__
+#include "osx_glue.h"
+#endif /* __APPLE__ */
+
+#ifdef __FreeBSD__
+#include <sys/cdefs.h> /* prerequisite */
+__FBSDID("$FreeBSD: head/sys/dev/netmap/netmap.c 241723 2012-10-19 09:41:45Z glebius $");
+
+#include <sys/types.h>
+#include <sys/module.h>
+#include <sys/errno.h>
+#include <sys/param.h>	/* defines used in kernel.h */
+#include <sys/jail.h>
+#include <sys/kernel.h>	/* types used in module initialization */
+#include <sys/conf.h>	/* cdevsw struct */
+#include <sys/uio.h>	/* uio struct */
+#include <sys/sockio.h>
+#include <sys/socketvar.h>	/* struct socket */
+#include <sys/malloc.h>
+#include <sys/mman.h>	/* PROT_EXEC */
+#include <sys/poll.h>
+#include <sys/proc.h>
+#include <vm/vm.h>	/* vtophys */
+#include <vm/pmap.h>	/* vtophys */
+#include <sys/socket.h> /* sockaddrs */
+#include <machine/bus.h>
+#include <sys/selinfo.h>
+#include <sys/sysctl.h>
+#include <net/if.h>
+#include <net/bpf.h>		/* BIOCIMMEDIATE */
+#include <net/vnet.h>
+#include <machine/bus.h>	/* bus_dmamap_* */
+
+MALLOC_DEFINE(M_NETMAP, "netmap", "Network memory map");
+#endif /* __FreeBSD__ */
+
+#include <netmap/netmap.h>
+#include <netmap/netmap_kern.h>
+
+u_int netmap_total_buffers;
+u_int netmap_buf_size;
+char *netmap_buffer_base;	/* address of an invalid buffer */
+
+/* user-controlled variables */
+int netmap_verbose = 1;
+
+static int netmap_no_timestamp; /* don't timestamp on rxsync */
+
+SYSCTL_NODE(_dev, OID_AUTO, netmap, CTLFLAG_RW, 0, "Netmap args");
+SYSCTL_INT(_dev_netmap, OID_AUTO, verbose,
+    CTLFLAG_RW, &netmap_verbose, 0, "Verbose mode");
+SYSCTL_INT(_dev_netmap, OID_AUTO, no_timestamp,
+    CTLFLAG_RW, &netmap_no_timestamp, 0, "no_timestamp");
+int netmap_mitigate = 1;
+SYSCTL_INT(_dev_netmap, OID_AUTO, mitigate, CTLFLAG_RW, &netmap_mitigate, 0, "");
+int netmap_no_pendintr = 1;
+SYSCTL_INT(_dev_netmap, OID_AUTO, no_pendintr,
+    CTLFLAG_RW, &netmap_no_pendintr, 0, "Always look for new received packets.");
+
+int netmap_drop = 0;	/* debugging */
+int netmap_flags = 0;	/* debug flags */
+int netmap_fwd = 0;	/* force transparent mode */
+int netmap_copy = 0;	/* debugging, copy content */
+
+SYSCTL_INT(_dev_netmap, OID_AUTO, drop, CTLFLAG_RW, &netmap_drop, 0 , "");
+SYSCTL_INT(_dev_netmap, OID_AUTO, flags, CTLFLAG_RW, &netmap_flags, 0 , "");
+SYSCTL_INT(_dev_netmap, OID_AUTO, fwd, CTLFLAG_RW, &netmap_fwd, 0 , "");
+SYSCTL_INT(_dev_netmap, OID_AUTO, copy, CTLFLAG_RW, &netmap_copy, 0 , "");
+
+#ifdef NM_BRIDGE /* support for netmap bridge */
+
+/*
+ * system parameters.
+ *
+ * All switched ports have prefix NM_NAME.
+ * The switch has a max of NM_BDG_MAXPORTS ports (often stored in a bitmap,
+ * so a practical upper bound is 64).
+ * Each tx ring is read-write, whereas rx rings are readonly (XXX not done yet).
+ * The virtual interfaces use per-queue lock instead of core lock.
+ * In the tx loop, we aggregate traffic in batches to make all operations
+ * faster. The batch size is NM_BDG_BATCH
+ */
+#define	NM_NAME			"vale"	/* prefix for the interface */
+#define NM_BDG_MAXPORTS		16	/* up to 64 ? */
+#define NM_BRIDGE_RINGSIZE	1024	/* in the device */
+#define NM_BDG_HASH		1024	/* forwarding table entries */
+#define NM_BDG_BATCH		1024	/* entries in the forwarding buffer */
+#define	NM_BRIDGES		4	/* number of bridges */
+int netmap_bridge = NM_BDG_BATCH; /* bridge batch size */
+SYSCTL_INT(_dev_netmap, OID_AUTO, bridge, CTLFLAG_RW, &netmap_bridge, 0 , "");
+
+#ifdef linux
+#define	ADD_BDG_REF(ifp)	(NA(ifp)->if_refcount++)
+#define	DROP_BDG_REF(ifp)	(NA(ifp)->if_refcount-- <= 1)
+#else /* !linux */
+#define	ADD_BDG_REF(ifp)	(ifp)->if_refcount++
+#define	DROP_BDG_REF(ifp)	refcount_release(&(ifp)->if_refcount)
+#ifdef __FreeBSD__
+#include <sys/endian.h>
+#include <sys/refcount.h>
+#endif /* __FreeBSD__ */
+#define prefetch(x)	__builtin_prefetch(x)
+#endif /* !linux */
+
+static void bdg_netmap_attach(struct ifnet *ifp);
+static int bdg_netmap_reg(struct ifnet *ifp, int onoff);
+/* per-tx-queue entry */
+struct nm_bdg_fwd {	/* forwarding entry for a bridge */
+	void *buf;
+	uint64_t dst;	/* dst mask */
+	uint32_t src;	/* src index ? */
+	uint16_t len;	/* src len */
+};
+
+struct nm_hash_ent {
+	uint64_t	mac;	/* the top 2 bytes are the epoch */
+	uint64_t	ports;
+};
+
+/*
+ * Interfaces for a bridge are all in ports[].
+ * The array has a fixed size; an empty entry does not terminate
+ * the search.
+ */
+struct nm_bridge {
+	struct ifnet *bdg_ports[NM_BDG_MAXPORTS];
+	int n_ports;
+	uint64_t act_ports;
+	int freelist;	/* first buffer index */
+	NM_SELINFO_T si;	/* poll/select wait queue */
+	NM_LOCK_T bdg_lock;	/* protect the selinfo ? */
+
+	/* the forwarding table, MAC+ports */
+	struct nm_hash_ent ht[NM_BDG_HASH];
+
+	int namelen;	/* 0 means free */
+	char basename[IFNAMSIZ];
+};
+
+struct nm_bridge nm_bridges[NM_BRIDGES];
+
+#define BDG_LOCK(b)	mtx_lock(&(b)->bdg_lock)
+#define BDG_UNLOCK(b)	mtx_unlock(&(b)->bdg_lock)
+
+/*
+ * NA(ifp)->bdg_port	port index
+ */
+
+// XXX only for multiples of 64 bytes, non overlapped.
+static inline void
+pkt_copy(void *_src, void *_dst, int l)
+{
+        uint64_t *src = _src;
+        uint64_t *dst = _dst;
+        if (unlikely(l >= 1024)) {
+                bcopy(src, dst, l);
+                return;
+        }
+        for (; likely(l > 0); l-=64) {
+                *dst++ = *src++;
+                *dst++ = *src++;
+                *dst++ = *src++;
+                *dst++ = *src++;
+                *dst++ = *src++;
+                *dst++ = *src++;
+                *dst++ = *src++;
+                *dst++ = *src++;
+        }
+}
+
+/*
+ * locate a bridge among the existing ones.
+ * a ':' in the name terminates the bridge name. Otherwise, just NM_NAME.
+ * We assume that this is called with a name of at least NM_NAME chars.
+ */
+static struct nm_bridge *
+nm_find_bridge(const char *name)
+{
+	int i, l, namelen, e;
+	struct nm_bridge *b = NULL;
+
+	namelen = strlen(NM_NAME);	/* base length */
+	l = strlen(name);		/* actual length */
+	for (i = namelen + 1; i < l; i++) {
+		if (name[i] == ':') {
+			namelen = i;
+			break;
+		}
+	}
+	if (namelen >= IFNAMSIZ)
+		namelen = IFNAMSIZ;
+	ND("--- prefix is '%.*s' ---", namelen, name);
+
+	/* use the first entry for locking */
+	BDG_LOCK(nm_bridges); // XXX do better
+	for (e = -1, i = 1; i < NM_BRIDGES; i++) {
+		b = nm_bridges + i;
+		if (b->namelen == 0)
+			e = i;	/* record empty slot */
+		else if (strncmp(name, b->basename, namelen) == 0) {
+			ND("found '%.*s' at %d", namelen, name, i);
+			break;
+		}
+	}
+	if (i == NM_BRIDGES) { /* all full */
+		if (e == -1) { /* no empty slot */
+			b = NULL;
+		} else {
+			b = nm_bridges + e;
+			strncpy(b->basename, name, namelen);
+			b->namelen = namelen;
+		}
+	}
+	BDG_UNLOCK(nm_bridges);
+	return b;
+}
+#endif /* NM_BRIDGE */
+
+
+/*
+ * Fetch configuration from the device, to cope with dynamic
+ * reconfigurations after loading the module.
+ */
+static int
+netmap_update_config(struct netmap_adapter *na)
+{
+	struct ifnet *ifp = na->ifp;
+	u_int txr, txd, rxr, rxd;
+
+	txr = txd = rxr = rxd = 0;
+	if (na->nm_config) {
+		na->nm_config(ifp, &txr, &txd, &rxr, &rxd);
+	} else {
+		/* take whatever we had at init time */
+		txr = na->num_tx_rings;
+		txd = na->num_tx_desc;
+		rxr = na->num_rx_rings;
+		rxd = na->num_rx_desc;
+	}
+
+	if (na->num_tx_rings == txr && na->num_tx_desc == txd &&
+	    na->num_rx_rings == rxr && na->num_rx_desc == rxd)
+		return 0; /* nothing changed */
+	if (netmap_verbose || na->refcount > 0) {
+		D("stored config %s: txring %d x %d, rxring %d x %d",
+			ifp->if_xname,
+			na->num_tx_rings, na->num_tx_desc,
+			na->num_rx_rings, na->num_rx_desc);
+		D("new config %s: txring %d x %d, rxring %d x %d",
+			ifp->if_xname, txr, txd, rxr, rxd);
+	}
+	if (na->refcount == 0) {
+		D("configuration changed (but fine)");
+		na->num_tx_rings = txr;
+		na->num_tx_desc = txd;
+		na->num_rx_rings = rxr;
+		na->num_rx_desc = rxd;
+		return 0;
+	}
+	D("configuration changed while active, this is bad...");
+	return 1;
+}
+
+/*------------- memory allocator -----------------*/
+#ifdef NETMAP_MEM2
+#include "netmap_mem2.c"
+#else /* !NETMAP_MEM2 */
+#include "netmap_mem1.c"
+#endif /* !NETMAP_MEM2 */
+/*------------ end of memory allocator ----------*/
+
+
+/* Structure associated to each thread which registered an interface.
+ *
+ * The first 4 fields of this structure are written by NIOCREGIF and
+ * read by poll() and NIOC?XSYNC.
+ * There is low contention among writers (actually, a correct user program
+ * should have no contention among writers) and among writers and readers,
+ * so we use a single global lock to protect the structure initialization.
+ * Since initialization involves the allocation of memory, we reuse the memory
+ * allocator lock.
+ * Read access to the structure is lock free. Readers must check that
+ * np_nifp is not NULL before using the other fields.
+ * If np_nifp is NULL initialization has not been performed, so they should
+ * return an error to userlevel.
+ *
+ * The ref_done field is used to regulate access to the refcount in the
+ * memory allocator. The refcount must be incremented at most once for
+ * each open("/dev/netmap"). The increment is performed by the first
+ * function that calls netmap_get_memory() (currently called by
+ * mmap(), NIOCGINFO and NIOCREGIF).
+ * If the refcount is incremented, it is then decremented when the
+ * private structure is destroyed.
+ */
+struct netmap_priv_d {
+	struct netmap_if * volatile np_nifp;	/* netmap interface descriptor. */
+
+	struct ifnet	*np_ifp;	/* device for which we hold a reference */
+	int		np_ringid;	/* from the ioctl */
+	u_int		np_qfirst, np_qlast;	/* range of rings to scan */
+	uint16_t	np_txpoll;
+
+	unsigned long	ref_done;	/* use with NMA_LOCK held */
+};
+
+
+static int
+netmap_get_memory(struct netmap_priv_d* p)
+{
+	int error = 0;
+	NMA_LOCK();
+	if (!p->ref_done) {
+		error = netmap_memory_finalize();
+		if (!error)
+			p->ref_done = 1;
+	}
+	NMA_UNLOCK();
+	return error;
+}
+
+/*
+ * File descriptor's private data destructor.
+ *
+ * Call nm_register(ifp,0) to stop netmap mode on the interface and
+ * revert to normal operation. We expect that np_ifp has not gone away.
+ */
+/* call with NMA_LOCK held */
+static void
+netmap_dtor_locked(void *data)
+{
+	struct netmap_priv_d *priv = data;
+	struct ifnet *ifp = priv->np_ifp;
+	struct netmap_adapter *na = NA(ifp);
+	struct netmap_if *nifp = priv->np_nifp;
+
+	na->refcount--;
+	if (na->refcount <= 0) {	/* last instance */
+		u_int i, j, lim;
+
+		if (netmap_verbose)
+			D("deleting last instance for %s", ifp->if_xname);
+		/*
+		 * there is a race here with *_netmap_task() and
+		 * netmap_poll(), which don't run under NETMAP_REG_LOCK.
+		 * na->refcount == 0 && na->ifp->if_capenable & IFCAP_NETMAP
+		 * (aka NETMAP_DELETING(na)) are a unique marker that the
+		 * device is dying.
+		 * Before destroying stuff we sleep a bit, and then complete
+		 * the job. NIOCREG should realize the condition and
+		 * loop until they can continue; the other routines
+		 * should check the condition at entry and quit if
+		 * they cannot run.
+		 */
+		na->nm_lock(ifp, NETMAP_REG_UNLOCK, 0);
+		tsleep(na, 0, "NIOCUNREG", 4);
+		na->nm_lock(ifp, NETMAP_REG_LOCK, 0);
+		na->nm_register(ifp, 0); /* off, clear IFCAP_NETMAP */
+		/* Wake up any sleeping threads. netmap_poll will
+		 * then return POLLERR
+		 */
+		for (i = 0; i < na->num_tx_rings + 1; i++)
+			selwakeuppri(&na->tx_rings[i].si, PI_NET);
+		for (i = 0; i < na->num_rx_rings + 1; i++)
+			selwakeuppri(&na->rx_rings[i].si, PI_NET);
+		selwakeuppri(&na->tx_si, PI_NET);
+		selwakeuppri(&na->rx_si, PI_NET);
+		/* release all buffers */
+		for (i = 0; i < na->num_tx_rings + 1; i++) {
+			struct netmap_ring *ring = na->tx_rings[i].ring;
+			lim = na->tx_rings[i].nkr_num_slots;
+			for (j = 0; j < lim; j++)
+				netmap_free_buf(nifp, ring->slot[j].buf_idx);
+			/* knlist_destroy(&na->tx_rings[i].si.si_note); */
+			mtx_destroy(&na->tx_rings[i].q_lock);
+		}
+		for (i = 0; i < na->num_rx_rings + 1; i++) {
+			struct netmap_ring *ring = na->rx_rings[i].ring;
+			lim = na->rx_rings[i].nkr_num_slots;
+			for (j = 0; j < lim; j++)
+				netmap_free_buf(nifp, ring->slot[j].buf_idx);
+			/* knlist_destroy(&na->rx_rings[i].si.si_note); */
+			mtx_destroy(&na->rx_rings[i].q_lock);
+		}
+		/* XXX kqueue(9) needed; these will mirror knlist_init. */
+		/* knlist_destroy(&na->tx_si.si_note); */
+		/* knlist_destroy(&na->rx_si.si_note); */
+		netmap_free_rings(na);
+		wakeup(na);
+	}
+	netmap_if_free(nifp);
+}
+
+static void
+nm_if_rele(struct ifnet *ifp)
+{
+#ifndef NM_BRIDGE
+	if_rele(ifp);
+#else /* NM_BRIDGE */
+	int i, full;
+	struct nm_bridge *b;
+
+	if (strncmp(ifp->if_xname, NM_NAME, sizeof(NM_NAME) - 1)) {
+		if_rele(ifp);
+		return;
+	}
+	if (!DROP_BDG_REF(ifp))
+		return;
+	b = ifp->if_bridge;
+	BDG_LOCK(nm_bridges);
+	BDG_LOCK(b);
+	ND("want to disconnect %s from the bridge", ifp->if_xname);
+	full = 0;
+	for (i = 0; i < NM_BDG_MAXPORTS; i++) {
+		if (b->bdg_ports[i] == ifp) {
+			b->bdg_ports[i] = NULL;
+			bzero(ifp, sizeof(*ifp));
+			free(ifp, M_DEVBUF);
+			break;
+		}
+		else if (b->bdg_ports[i] != NULL)
+			full = 1;
+	}
+	BDG_UNLOCK(b);
+	if (full == 0) {
+		ND("freeing bridge %d", b - nm_bridges);
+		b->namelen = 0;
+	}
+	BDG_UNLOCK(nm_bridges);
+	if (i == NM_BDG_MAXPORTS)
+		D("ouch, cannot find ifp to remove");
+#endif /* NM_BRIDGE */
+}
+
+static void
+netmap_dtor(void *data)
+{
+	struct netmap_priv_d *priv = data;
+	struct ifnet *ifp = priv->np_ifp;
+	struct netmap_adapter *na;
+
+	NMA_LOCK();
+	if (ifp) {
+		na = NA(ifp);
+		na->nm_lock(ifp, NETMAP_REG_LOCK, 0);
+		netmap_dtor_locked(data);
+		na->nm_lock(ifp, NETMAP_REG_UNLOCK, 0);
+
+		nm_if_rele(ifp);
+	}
+	if (priv->ref_done) {
+		netmap_memory_deref();
+	}
+	NMA_UNLOCK();
+	bzero(priv, sizeof(*priv));	/* XXX for safety */
+	free(priv, M_DEVBUF);
+}
+
+#ifdef __FreeBSD__
+#include <vm/vm.h>
+#include <vm/vm_param.h>
+#include <vm/vm_object.h>
+#include <vm/vm_page.h>
+#include <vm/vm_pager.h>
+#include <vm/uma.h>
+
+static struct cdev_pager_ops saved_cdev_pager_ops;
+
+static int
+netmap_dev_pager_ctor(void *handle, vm_ooffset_t size, vm_prot_t prot,
+    vm_ooffset_t foff, struct ucred *cred, u_short *color)
+{
+	if (netmap_verbose)
+		D("first mmap for %p", handle);
+	return saved_cdev_pager_ops.cdev_pg_ctor(handle,
+			size, prot, foff, cred, color);
+}
+
+static void
+netmap_dev_pager_dtor(void *handle)
+{
+	saved_cdev_pager_ops.cdev_pg_dtor(handle);
+	ND("ready to release memory for %p", handle);
+}
+
+
+static struct cdev_pager_ops netmap_cdev_pager_ops = {
+        .cdev_pg_ctor = netmap_dev_pager_ctor,
+        .cdev_pg_dtor = netmap_dev_pager_dtor,
+        .cdev_pg_fault = NULL,
+};
+
+static int
+netmap_mmap_single(struct cdev *cdev, vm_ooffset_t *foff,
+	vm_size_t objsize,  vm_object_t *objp, int prot)
+{
+	vm_object_t obj;
+
+	ND("cdev %p foff %jd size %jd objp %p prot %d", cdev,
+	    (intmax_t )*foff, (intmax_t )objsize, objp, prot);
+	obj = vm_pager_allocate(OBJT_DEVICE, cdev, objsize, prot, *foff,
+            curthread->td_ucred);
+	ND("returns obj %p", obj);
+	if (obj == NULL)
+		return EINVAL;
+	if (saved_cdev_pager_ops.cdev_pg_fault == NULL) {
+		ND("initialize cdev_pager_ops");
+		saved_cdev_pager_ops = *(obj->un_pager.devp.ops);
+		netmap_cdev_pager_ops.cdev_pg_fault =
+			saved_cdev_pager_ops.cdev_pg_fault;
+	};
+	obj->un_pager.devp.ops = &netmap_cdev_pager_ops;
+	*objp = obj;
+	return 0;
+}
+#endif /* __FreeBSD__ */
+
+
+/*
+ * mmap(2) support for the "netmap" device.
+ *
+ * Expose all the memory previously allocated by our custom memory
+ * allocator: this way the user has only to issue a single mmap(2), and
+ * can work on all the data structures flawlessly.
+ *
+ * Return 0 on success, -1 otherwise.
+ */
+
+#ifdef __FreeBSD__
+static int
+netmap_mmap(__unused struct cdev *dev,
+#if __FreeBSD_version < 900000
+		vm_offset_t offset, vm_paddr_t *paddr, int nprot
+#else
+		vm_ooffset_t offset, vm_paddr_t *paddr, int nprot,
+		__unused vm_memattr_t *memattr
+#endif
+	)
+{
+	int error = 0;
+	struct netmap_priv_d *priv;
+
+	if (nprot & PROT_EXEC)
+		return (-1);	// XXX -1 or EINVAL ?
+
+	error = devfs_get_cdevpriv((void **)&priv);
+	if (error == EBADF) {	/* called on fault, memory is initialized */
+		ND(5, "handling fault at ofs 0x%x", offset);
+		error = 0;
+	} else if (error == 0)	/* make sure memory is set */
+		error = netmap_get_memory(priv);
+	if (error)
+		return (error);
+
+	ND("request for offset 0x%x", (uint32_t)offset);
+	*paddr = netmap_ofstophys(offset);
+
+	return (*paddr ? 0 : ENOMEM);
+}
+
+static int
+netmap_close(struct cdev *dev, int fflag, int devtype, struct thread *td)
+{
+	if (netmap_verbose)
+		D("dev %p fflag 0x%x devtype %d td %p",
+			dev, fflag, devtype, td);
+	return 0;
+}
+
+static int
+netmap_open(struct cdev *dev, int oflags, int devtype, struct thread *td)
+{
+	struct netmap_priv_d *priv;
+	int error;
+
+	priv = malloc(sizeof(struct netmap_priv_d), M_DEVBUF,
+			      M_NOWAIT | M_ZERO);
+	if (priv == NULL)
+		return ENOMEM;
+
+	error = devfs_set_cdevpriv(priv, netmap_dtor);
+	if (error)
+	        return error;
+
+	return 0;
+}
+#endif /* __FreeBSD__ */
+
+
+/*
+ * Handlers for synchronization of the queues from/to the host.
+ * Netmap has two operating modes:
+ * - in the default mode, the rings connected to the host stack are
+ *   just another ring pair managed by userspace;
+ * - in transparent mode (XXX to be defined) incoming packets
+ *   (from the host or the NIC) are marked as NS_FORWARD upon
+ *   arrival, and the user application has a chance to reset the
+ *   flag for packets that should be dropped.
+ *   On the RXSYNC or poll(), packets in RX rings between
+ *   kring->nr_hwcur and ring->cur with NS_FORWARD still set are moved
+ *   to the other side.
+ * The transfer NIC --> host is relatively easy, just encapsulate
+ * into mbufs and we are done. The host --> NIC side is slightly
+ * harder because there might not be room in the tx ring so it
+ * might take a while before releasing the buffer.
+ */
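+
+/*
+ * For instance, a userspace filter running in transparent mode could
+ * drop a packet by clearing the flag before the next sync (sketch only;
+ * want_to_forward() is a hypothetical predicate, macros as in
+ * netmap_user.h):
+ *
+ *	struct netmap_slot *slot = &ring->slot[i];
+ *	if (!want_to_forward(NETMAP_BUF(ring, slot->buf_idx), slot->len))
+ *		slot->flags &= ~NS_FORWARD;
+ */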
+
+/*
+ * pass a chain of buffers to the host stack as coming from 'dst'
+ */
+static void
+netmap_send_up(struct ifnet *dst, struct mbuf *head)
+{
+	struct mbuf *m;
+
+	/* send packets up, outside the lock */
+	while ((m = head) != NULL) {
+		head = head->m_nextpkt;
+		m->m_nextpkt = NULL;
+		if (netmap_verbose & NM_VERB_HOST)
+			D("sending up pkt %p size %d", m, MBUF_LEN(m));
+		NM_SEND_UP(dst, m);
+	}
+}
+
+struct mbq {
+	struct mbuf *head;
+	struct mbuf *tail;
+	int count;
+};
+
+/*
+ * put a copy of the buffers marked NS_FORWARD into an mbuf chain.
+ * Run from hwcur to cur - reserved
+ */
+static void
+netmap_grab_packets(struct netmap_kring *kring, struct mbq *q, int force)
+{
+	/* Take packets from hwcur to cur-reserved and pass them up.
+	 * In case of no buffers we give up. At the end of the loop,
+	 * the queue is drained in all cases.
+	 * XXX handle reserved
+	 */
+	int k = kring->ring->cur - kring->ring->reserved;
+	u_int n, lim = kring->nkr_num_slots - 1;
+	struct mbuf *m, *tail = q->tail;
+
+	if (k < 0)
+		k = k + kring->nkr_num_slots;
+	for (n = kring->nr_hwcur; n != k;) {
+		struct netmap_slot *slot = &kring->ring->slot[n];
+
+		n = (n == lim) ? 0 : n + 1;
+		if ((slot->flags & NS_FORWARD) == 0 && !force)
+			continue;
+		if (slot->len < 14 || slot->len > NETMAP_BUF_SIZE) {
+			D("bad pkt at %d len %d", n, slot->len);
+			continue;
+		}
+		slot->flags &= ~NS_FORWARD; // XXX needed ?
+		m = m_devget(NMB(slot), slot->len, 0, kring->na->ifp, NULL);
+
+		if (m == NULL)
+			break;
+		if (tail)
+			tail->m_nextpkt = m;
+		else
+			q->head = m;
+		tail = m;
+		q->count++;
+		m->m_nextpkt = NULL;
+	}
+	q->tail = tail;
+}
+
+/*
+ * called under main lock to send packets from the host to the NIC.
+ * The host ring has packets from nr_hwcur to (cur - reserved)
+ * to be sent down. We scan the tx rings, which have just been
+ * flushed so nr_hwcur == cur. Pushing packets down means
+ * increment cur and decrement avail.
+ * XXX to be verified
+ */
+static void
+netmap_sw_to_nic(struct netmap_adapter *na)
+{
+	struct netmap_kring *kring = &na->rx_rings[na->num_rx_rings];
+	struct netmap_kring *k1 = &na->tx_rings[0];
+	int i, howmany, src_lim, dst_lim;
+
+	howmany = kring->nr_hwavail;	/* XXX otherwise cur - reserved - nr_hwcur */
+
+	src_lim = kring->nkr_num_slots;
+	for (i = 0; howmany > 0 && i < na->num_tx_rings; i++, k1++) {
+		ND("%d packets left to ring %d (space %d)", howmany, i, k1->nr_hwavail);
+		dst_lim = k1->nkr_num_slots;
+		while (howmany > 0 && k1->ring->avail > 0) {
+			struct netmap_slot *src, *dst, tmp;
+			src = &kring->ring->slot[kring->nr_hwcur];
+			dst = &k1->ring->slot[k1->ring->cur];
+			tmp = *src;
+			src->buf_idx = dst->buf_idx;
+			src->flags = NS_BUF_CHANGED;
+
+			dst->buf_idx = tmp.buf_idx;
+			dst->len = tmp.len;
+			dst->flags = NS_BUF_CHANGED;
+			ND("out len %d buf %d from %d to %d",
+				dst->len, dst->buf_idx,
+				kring->nr_hwcur, k1->ring->cur);
+
+			if (++kring->nr_hwcur >= src_lim)
+				kring->nr_hwcur = 0;
+			howmany--;
+			kring->nr_hwavail--;
+			if (++k1->ring->cur >= dst_lim)
+				k1->ring->cur = 0;
+			k1->ring->avail--;
+		}
+		kring->ring->cur = kring->nr_hwcur; // XXX
+		k1++;
+	}
+}
+
+/*
+ * netmap_sync_to_host() passes packets up. We are called from a
+ * system call in user process context, and the only contention
+ * can be among multiple user threads erroneously calling
+ * this routine concurrently.
+ */
+static void
+netmap_sync_to_host(struct netmap_adapter *na)
+{
+	struct netmap_kring *kring = &na->tx_rings[na->num_tx_rings];
+	struct netmap_ring *ring = kring->ring;
+	u_int k, lim = kring->nkr_num_slots - 1;
+	struct mbq q = { NULL, NULL };
+
+	k = ring->cur;
+	if (k > lim) {
+		netmap_ring_reinit(kring);
+		return;
+	}
+	// na->nm_lock(na->ifp, NETMAP_CORE_LOCK, 0);
+
+	/* Take packets from hwcur to cur and pass them up.
+	 * In case of no buffers we give up. At the end of the loop,
+	 * the queue is drained in all cases.
+	 */
+	netmap_grab_packets(kring, &q, 1);
+	kring->nr_hwcur = k;
+	kring->nr_hwavail = ring->avail = lim;
+	// na->nm_lock(na->ifp, NETMAP_CORE_UNLOCK, 0);
+
+	netmap_send_up(na->ifp, q.head);
+}
+
+/*
+ * rxsync backend for packets coming from the host stack.
+ * They have been put in the queue by netmap_start() so we
+ * need to protect access to the kring using a lock.
+ *
+ * This routine also does the selrecord if called from the poll handler
+ * (we know because td != NULL).
+ *
+ * NOTE: on linux, selrecord() is defined as a macro and uses pwait
+ *     as an additional hidden argument.
+ */
+static void
+netmap_sync_from_host(struct netmap_adapter *na, struct thread *td, void *pwait)
+{
+	struct netmap_kring *kring = &na->rx_rings[na->num_rx_rings];
+	struct netmap_ring *ring = kring->ring;
+	u_int j, n, lim = kring->nkr_num_slots;
+	u_int k = ring->cur, resvd = ring->reserved;
+
+	(void)pwait;	/* disable unused warnings */
+	na->nm_lock(na->ifp, NETMAP_CORE_LOCK, 0);
+	if (k >= lim) {
+		netmap_ring_reinit(kring);
+		return;
+	}
+	/* new packets are already set in nr_hwavail */
+	/* skip past packets that userspace has released */
+	j = kring->nr_hwcur;
+	if (resvd > 0) {
+		if (resvd + ring->avail >= lim + 1) {
+			D("XXX invalid reserve/avail %d %d", resvd, ring->avail);
+			ring->reserved = resvd = 0; // XXX panic...
+		}
+		k = (k >= resvd) ? k - resvd : k + lim - resvd;
+        }
+	if (j != k) {
+		n = k >= j ? k - j : k + lim - j;
+		kring->nr_hwavail -= n;
+		kring->nr_hwcur = k;
+	}
+	k = ring->avail = kring->nr_hwavail - resvd;
+	if (k == 0 && td)
+		selrecord(td, &kring->si);
+	if (k && (netmap_verbose & NM_VERB_HOST))
+		D("%d pkts from stack", k);
+	na->nm_lock(na->ifp, NETMAP_CORE_UNLOCK, 0);
+}
+
+
+/*
+ * get a refcounted reference to an interface.
+ * Return ENXIO if the interface does not exist, EINVAL if netmap
+ * is not supported by the interface.
+ * If successful, hold a reference.
+ */
+static int
+get_ifp(const char *name, struct ifnet **ifp)
+{
+#ifdef NM_BRIDGE
+	struct ifnet *iter = NULL;
+
+	do {
+		struct nm_bridge *b;
+		int i, l, cand = -1;
+
+		if (strncmp(name, NM_NAME, sizeof(NM_NAME) - 1))
+			break;
+		b = nm_find_bridge(name);
+		if (b == NULL) {
+			D("no bridges available for '%s'", name);
+			return (ENXIO);
+		}
+		/* XXX locking */
+		BDG_LOCK(b);
+		/* lookup in the local list of ports */
+		for (i = 0; i < NM_BDG_MAXPORTS; i++) {
+			iter = b->bdg_ports[i];
+			if (iter == NULL) {
+				if (cand == -1)
+					cand = i; /* potential insert point */
+				continue;
+			}
+			if (!strcmp(iter->if_xname, name)) {
+				ADD_BDG_REF(iter);
+				ND("found existing interface");
+				BDG_UNLOCK(b);
+				break;
+			}
+		}
+		if (i < NM_BDG_MAXPORTS) /* already unlocked */
+			break;
+		if (cand == -1) {
+			D("bridge full, cannot create new port");
+no_port:
+			BDG_UNLOCK(b);
+			*ifp = NULL;
+			return EINVAL;
+		}
+		ND("create new bridge port %s", name);
+		/* space for forwarding list after the ifnet */
+		l = sizeof(*iter) +
+			 sizeof(struct nm_bdg_fwd)*NM_BDG_BATCH ;
+		iter = malloc(l, M_DEVBUF, M_NOWAIT | M_ZERO);
+		if (!iter)
+			goto no_port;
+		strcpy(iter->if_xname, name);
+		bdg_netmap_attach(iter);
+		b->bdg_ports[cand] = iter;
+		iter->if_bridge = b;
+		ADD_BDG_REF(iter);
+		BDG_UNLOCK(b);
+		ND("attaching virtual bridge %p", b);
+	} while (0);
+	*ifp = iter;
+	if (! *ifp)
+#endif /* NM_BRIDGE */
+	*ifp = ifunit_ref(name);
+	if (*ifp == NULL)
+		return (ENXIO);
+	/* can do this if the capability exists and if_pspare[0]
+	 * points to the netmap descriptor.
+	 */
+	if (NETMAP_CAPABLE(*ifp))
+		return 0;	/* valid pointer, we hold the refcount */
+	nm_if_rele(*ifp);
+	return EINVAL;	// not NETMAP capable
+}
+
+
+/*
+ * Error routine called when txsync/rxsync detects an error.
+ * Can't do much more than resetting cur = hwcur, avail = hwavail.
+ * Return 1 on reinit.
+ *
+ * This routine is only called by the upper half of the kernel.
+ * It only reads hwcur (which is changed only by the upper half, too)
+ * and hwavail (which may be changed by the lower half, but only on
+ * a tx ring and only to increase it, so any error will be recovered
+ * on the next call). For the above, we don't strictly need to call
+ * it under lock.
+ */
+int
+netmap_ring_reinit(struct netmap_kring *kring)
+{
+	struct netmap_ring *ring = kring->ring;
+	u_int i, lim = kring->nkr_num_slots - 1;
+	int errors = 0;
+
+	RD(10, "called for %s", kring->na->ifp->if_xname);
+	if (ring->cur > lim)
+		errors++;
+	for (i = 0; i <= lim; i++) {
+		u_int idx = ring->slot[i].buf_idx;
+		u_int len = ring->slot[i].len;
+		if (idx < 2 || idx >= netmap_total_buffers) {
+			if (!errors++)
+				D("bad buffer at slot %d idx %d len %d ", i, idx, len);
+			ring->slot[i].buf_idx = 0;
+			ring->slot[i].len = 0;
+		} else if (len > NETMAP_BUF_SIZE) {
+			ring->slot[i].len = 0;
+			if (!errors++)
+				D("bad len %d at slot %d idx %d",
+					len, i, idx);
+		}
+	}
+	if (errors) {
+		int pos = kring - kring->na->tx_rings;
+		int n = kring->na->num_tx_rings + 1;
+
+		RD(10, "total %d errors", errors);
+		errors++;
+		RD(10, "%s %s[%d] reinit, cur %d -> %d avail %d -> %d",
+			kring->na->ifp->if_xname,
+			pos < n ?  "TX" : "RX", pos < n ? pos : pos - n,
+			ring->cur, kring->nr_hwcur,
+			ring->avail, kring->nr_hwavail);
+		ring->cur = kring->nr_hwcur;
+		ring->avail = kring->nr_hwavail;
+	}
+	return (errors ? 1 : 0);
+}
+
+
+/*
+ * Set the ring ID. For devices with a single queue, a request
+ * for all rings is the same as a single ring.
+ */
+static int
+netmap_set_ringid(struct netmap_priv_d *priv, u_int ringid)
+{
+	struct ifnet *ifp = priv->np_ifp;
+	struct netmap_adapter *na = NA(ifp);
+	u_int i = ringid & NETMAP_RING_MASK;
+	/* initially (np_qfirst == np_qlast) we don't want to lock */
+	int need_lock = (priv->np_qfirst != priv->np_qlast);
+	int lim = na->num_rx_rings;
+
+	if (na->num_tx_rings > lim)
+		lim = na->num_tx_rings;
+	if ( (ringid & NETMAP_HW_RING) && i >= lim) {
+		D("invalid ring id %d", i);
+		return (EINVAL);
+	}
+	if (need_lock)
+		na->nm_lock(ifp, NETMAP_CORE_LOCK, 0);
+	priv->np_ringid = ringid;
+	if (ringid & NETMAP_SW_RING) {
+		priv->np_qfirst = NETMAP_SW_RING;
+		priv->np_qlast = 0;
+	} else if (ringid & NETMAP_HW_RING) {
+		priv->np_qfirst = i;
+		priv->np_qlast = i + 1;
+	} else {
+		priv->np_qfirst = 0;
+		priv->np_qlast = NETMAP_HW_RING ;
+	}
+	priv->np_txpoll = (ringid & NETMAP_NO_TX_POLL) ? 0 : 1;
+	if (need_lock)
+		na->nm_lock(ifp, NETMAP_CORE_UNLOCK, 0);
+    if (netmap_verbose) {
+	if (ringid & NETMAP_SW_RING)
+		D("ringid %s set to SW RING", ifp->if_xname);
+	else if (ringid & NETMAP_HW_RING)
+		D("ringid %s set to HW RING %d", ifp->if_xname,
+			priv->np_qfirst);
+	else
+		D("ringid %s set to all %d HW RINGS", ifp->if_xname, lim);
+    }
+	return 0;
+}
+
+/*
+ * ioctl(2) support for the "netmap" device.
+ *
+ * Following a list of accepted commands:
+ * - NIOCGINFO
+ * - SIOCGIFADDR	just for convenience
+ * - NIOCREGIF
+ * - NIOCUNREGIF
+ * - NIOCTXSYNC
+ * - NIOCRXSYNC
+ *
+ * Return 0 on success, errno otherwise.
+ */
+static int
+netmap_ioctl(struct cdev *dev, u_long cmd, caddr_t data,
+	int fflag, struct thread *td)
+{
+	struct netmap_priv_d *priv = NULL;
+	struct ifnet *ifp;
+	struct nmreq *nmr = (struct nmreq *) data;
+	struct netmap_adapter *na;
+	int error;
+	u_int i, lim;
+	struct netmap_if *nifp;
+
+	(void)dev;	/* UNUSED */
+	(void)fflag;	/* UNUSED */
+#ifdef linux
+#define devfs_get_cdevpriv(pp)				\
+	({ *(struct netmap_priv_d **)pp = ((struct file *)td)->private_data; 	\
+		(*pp ? 0 : ENOENT); })
+
+/* devfs_set_cdevpriv cannot fail on linux */
+#define devfs_set_cdevpriv(p, fn)				\
+	({ ((struct file *)td)->private_data = p; (p ? 0 : EINVAL); })
+
+
+#define devfs_clear_cdevpriv()	do {				\
+		netmap_dtor(priv); ((struct file *)td)->private_data = 0;	\
+	} while (0)
+#endif /* linux */
+
+	CURVNET_SET(TD_TO_VNET(td));
+
+	error = devfs_get_cdevpriv((void **)&priv);
+	if (error) {
+		CURVNET_RESTORE();
+		/* XXX ENOENT should be impossible, since the priv
+		 * is now created in the open */
+		return (error == ENOENT ? ENXIO : error);
+	}
+
+	nmr->nr_name[sizeof(nmr->nr_name) - 1] = '\0';	/* truncate name */
+	switch (cmd) {
+	case NIOCGINFO:		/* return capabilities etc */
+		if (nmr->nr_version != NETMAP_API) {
+			D("API mismatch got %d have %d",
+				nmr->nr_version, NETMAP_API);
+			nmr->nr_version = NETMAP_API;
+			error = EINVAL;
+			break;
+		}
+		/* update configuration */
+		error = netmap_get_memory(priv);
+		ND("get_memory returned %d", error);
+		if (error)
+			break;
+		/* memsize is always valid */
+		nmr->nr_memsize = nm_mem.nm_totalsize;
+		nmr->nr_offset = 0;
+		nmr->nr_rx_rings = nmr->nr_tx_rings = 0;
+		nmr->nr_rx_slots = nmr->nr_tx_slots = 0;
+		if (nmr->nr_name[0] == '\0')	/* just get memory info */
+			break;
+		error = get_ifp(nmr->nr_name, &ifp); /* get a refcount */
+		if (error)
+			break;
+		na = NA(ifp); /* retrieve netmap_adapter */
+		netmap_update_config(na);
+		nmr->nr_rx_rings = na->num_rx_rings;
+		nmr->nr_tx_rings = na->num_tx_rings;
+		nmr->nr_rx_slots = na->num_rx_desc;
+		nmr->nr_tx_slots = na->num_tx_desc;
+		nm_if_rele(ifp);	/* return the refcount */
+		break;
+
+	case NIOCREGIF:
+		if (nmr->nr_version != NETMAP_API) {
+			nmr->nr_version = NETMAP_API;
+			error = EINVAL;
+			break;
+		}
+		/* ensure allocators are ready */
+		error = netmap_get_memory(priv);
+		ND("get_memory returned %d", error);
+		if (error)
+			break;
+
+		/* protect access to priv from concurrent NIOCREGIF */
+		NMA_LOCK();
+		if (priv->np_ifp != NULL) {	/* thread already registered */
+			error = netmap_set_ringid(priv, nmr->nr_ringid);
+			NMA_UNLOCK();
+			break;
+		}
+		/* find the interface and a reference */
+		error = get_ifp(nmr->nr_name, &ifp); /* keep reference */
+		if (error) {
+			NMA_UNLOCK();
+			break;
+		}
+		na = NA(ifp); /* retrieve netmap adapter */
+
+		for (i = 10; i > 0; i--) {
+			na->nm_lock(ifp, NETMAP_REG_LOCK, 0);
+			if (!NETMAP_DELETING(na))
+				break;
+			na->nm_lock(ifp, NETMAP_REG_UNLOCK, 0);
+			tsleep(na, 0, "NIOCREGIF", hz/10);
+		}
+		if (i == 0) {
+			D("too many NIOCREGIF attempts, give up");
+			error = EINVAL;
+			nm_if_rele(ifp);	/* return the refcount */
+			NMA_UNLOCK();
+			break;
+		}
+
+		/* ring configuration may have changed, fetch from the card */
+		netmap_update_config(na);
+		priv->np_ifp = ifp;	/* store the reference */
+		error = netmap_set_ringid(priv, nmr->nr_ringid);
+		if (error)
+			goto error;
+		nifp = netmap_if_new(nmr->nr_name, na);
+		if (nifp == NULL) { /* allocation failed */
+			error = ENOMEM;
+		} else if (ifp->if_capenable & IFCAP_NETMAP) {
+			/* was already set */
+		} else {
+			/* Otherwise set the card in netmap mode
+			 * and make it use the shared buffers.
+			 */
+			for (i = 0 ; i < na->num_tx_rings + 1; i++)
+				mtx_init(&na->tx_rings[i].q_lock, "nm_txq_lock", MTX_NETWORK_LOCK, MTX_DEF);
+			for (i = 0 ; i < na->num_rx_rings + 1; i++) {
+				mtx_init(&na->rx_rings[i].q_lock, "nm_rxq_lock", MTX_NETWORK_LOCK, MTX_DEF);
+			}
+			error = na->nm_register(ifp, 1); /* mode on */
+			if (error) {
+				netmap_dtor_locked(priv);
+				netmap_if_free(nifp);
+			}
+		}
+
+		if (error) {	/* reg. failed, release priv and ref */
+error:
+			na->nm_lock(ifp, NETMAP_REG_UNLOCK, 0);
+			nm_if_rele(ifp);	/* return the refcount */
+			priv->np_ifp = NULL;
+			priv->np_nifp = NULL;
+			NMA_UNLOCK();
+			break;
+		}
+
+		na->nm_lock(ifp, NETMAP_REG_UNLOCK, 0);
+
+		/* the following assignment is a commitment.
+		 * Readers (i.e., poll and *SYNC) check for
+		 * np_nifp != NULL without locking
+		 */
+		wmb(); /* make sure previous writes are visible to all CPUs */
+		priv->np_nifp = nifp;
+		NMA_UNLOCK();
+
+		/* return the offset of the netmap_if object */
+		nmr->nr_rx_rings = na->num_rx_rings;
+		nmr->nr_tx_rings = na->num_tx_rings;
+		nmr->nr_rx_slots = na->num_rx_desc;
+		nmr->nr_tx_slots = na->num_tx_desc;
+		nmr->nr_memsize = nm_mem.nm_totalsize;
+		nmr->nr_offset = netmap_if_offset(nifp);
+		break;
+
+	case NIOCUNREGIF:
+		// XXX we have no data here ?
+		D("deprecated, data is %p", nmr);
+		error = EINVAL;
+		break;
+
+	case NIOCTXSYNC:
+	case NIOCRXSYNC:
+		nifp = priv->np_nifp;
+
+		if (nifp == NULL) {
+			error = ENXIO;
+			break;
+		}
+		rmb(); /* make sure following reads are not from cache */
+
+
+		ifp = priv->np_ifp;	/* we have a reference */
+
+		if (ifp == NULL) {
+			D("Internal error: nifp != NULL && ifp == NULL");
+			error = ENXIO;
+			break;
+		}
+
+		na = NA(ifp); /* retrieve netmap adapter */
+		if (priv->np_qfirst == NETMAP_SW_RING) { /* host rings */
+			if (cmd == NIOCTXSYNC)
+				netmap_sync_to_host(na);
+			else
+				netmap_sync_from_host(na, NULL, NULL);
+			break;
+		}
+		/* find the last ring to scan */
+		lim = priv->np_qlast;
+		if (lim == NETMAP_HW_RING)
+			lim = (cmd == NIOCTXSYNC) ?
+			    na->num_tx_rings : na->num_rx_rings;
+
+		for (i = priv->np_qfirst; i < lim; i++) {
+			if (cmd == NIOCTXSYNC) {
+				struct netmap_kring *kring = &na->tx_rings[i];
+				if (netmap_verbose & NM_VERB_TXSYNC)
+					D("pre txsync ring %d cur %d hwcur %d",
+					    i, kring->ring->cur,
+					    kring->nr_hwcur);
+				na->nm_txsync(ifp, i, 1 /* do lock */);
+				if (netmap_verbose & NM_VERB_TXSYNC)
+					D("post txsync ring %d cur %d hwcur %d",
+					    i, kring->ring->cur,
+					    kring->nr_hwcur);
+			} else {
+				na->nm_rxsync(ifp, i, 1 /* do lock */);
+				microtime(&na->rx_rings[i].ring->ts);
+			}
+		}
+
+		break;
+
+#ifdef __FreeBSD__
+	case BIOCIMMEDIATE:
+	case BIOCGHDRCMPLT:
+	case BIOCSHDRCMPLT:
+	case BIOCSSEESENT:
+		D("ignore BIOCIMMEDIATE/BIOCSHDRCMPLT/BIOCSHDRCMPLT/BIOCSSEESENT");
+		break;
+
+	default:	/* allow device-specific ioctls */
+	    {
+		struct socket so;
+		bzero(&so, sizeof(so));
+		error = get_ifp(nmr->nr_name, &ifp); /* keep reference */
+		if (error)
+			break;
+		so.so_vnet = ifp->if_vnet;
+		// so->so_proto not null.
+		error = ifioctl(&so, cmd, data, td);
+		nm_if_rele(ifp);
+		break;
+	    }
+
+#else /* linux */
+	default:
+		error = EOPNOTSUPP;
+#endif /* linux */
+	}
+
+	CURVNET_RESTORE();
+	return (error);
+}
+
+
+/*
+ * select(2) and poll(2) handlers for the "netmap" device.
+ *
+ * Can be called for one or more queues.
+ * Return the event mask corresponding to ready events.
+ * If there are no ready events, do a selrecord on either individual
+ * selfd or on the global one.
+ * Device-dependent parts (locking and sync of tx/rx rings)
+ * are done through callbacks.
+ *
+ * On Linux, the arguments are really pwait (the poll table) and 'td' (a struct file *).
+ * The first one is remapped to pwait because selrecord() uses that name as a
+ * hidden argument.
+ */
+static int
+netmap_poll(struct cdev *dev, int events, struct thread *td)
+{
+	struct netmap_priv_d *priv = NULL;
+	struct netmap_adapter *na;
+	struct ifnet *ifp;
+	struct netmap_kring *kring;
+	u_int core_lock, i, check_all, want_tx, want_rx, revents = 0;
+	u_int lim_tx, lim_rx, host_forwarded = 0;
+	struct mbq q = { NULL, NULL, 0 };
+	enum {NO_CL, NEED_CL, LOCKED_CL }; /* see below */
+	void *pwait = dev;	/* linux compatibility */
+
+	(void)pwait;
+
+	if (devfs_get_cdevpriv((void **)&priv) != 0 || priv == NULL)
+		return POLLERR;
+
+	if (priv->np_nifp == NULL) {
+		D("No if registered");
+		return POLLERR;
+	}
+	rmb(); /* make sure following reads are not from cache */
+
+	ifp = priv->np_ifp;
+	// XXX check for deleting() ?
+	if ( (ifp->if_capenable & IFCAP_NETMAP) == 0)
+		return POLLERR;
+
+	if (netmap_verbose & 0x8000)
+		D("device %s events 0x%x", ifp->if_xname, events);
+	want_tx = events & (POLLOUT | POLLWRNORM);
+	want_rx = events & (POLLIN | POLLRDNORM);
+
+	na = NA(ifp); /* retrieve netmap adapter */
+
+	lim_tx = na->num_tx_rings;
+	lim_rx = na->num_rx_rings;
+	/* how many queues we are scanning */
+	if (priv->np_qfirst == NETMAP_SW_RING) {
+		if (priv->np_txpoll || want_tx) {
+			/* push any packets up, then we are always ready */
+			kring = &na->tx_rings[lim_tx];
+			netmap_sync_to_host(na);
+			revents |= want_tx;
+		}
+		if (want_rx) {
+			kring = &na->rx_rings[lim_rx];
+			if (kring->ring->avail == 0)
+				netmap_sync_from_host(na, td, dev);
+			if (kring->ring->avail > 0) {
+				revents |= want_rx;
+			}
+		}
+		return (revents);
+	}
+
+	/* if we are in transparent mode, check also the host rx ring */
+	kring = &na->rx_rings[lim_rx];
+	if ( (priv->np_qlast == NETMAP_HW_RING) // XXX check_all
+			&& want_rx
+			&& (netmap_fwd || kring->ring->flags & NR_FORWARD) ) {
+		if (kring->ring->avail == 0)
+			netmap_sync_from_host(na, td, dev);
+		if (kring->ring->avail > 0)
+			revents |= want_rx;
+	}
+
+	/*
+	 * check_all is set if the card has more than one queue and
+	 * the client is polling all of them. If true, we sleep on
+	 * the "global" selfd, otherwise we sleep on individual selfd
+	 * (we can only sleep on one of them per direction).
+	 * The interrupt routine in the driver should always wake on
+	 * the individual selfd, and also on the global one if the card
+	 * has more than one ring.
+	 *
+	 * If the card has only one lock, we just use that.
+	 * If the card has separate ring locks, we just use those
+	 * unless we are doing check_all, in which case the whole
+	 * loop is wrapped by the global lock.
+	 * We acquire locks only when necessary: if poll is called
+	 * when buffers are available, we can just return without locks.
+	 *
+	 * rxsync() is only called if we run out of buffers on a POLLIN.
+	 * txsync() is called if we run out of buffers on POLLOUT, or
+	 * there are pending packets to send. The latter can be disabled
+	 * passing NETMAP_NO_TX_POLL in the NIOCREGIF call.
+	 */
+	check_all = (priv->np_qlast == NETMAP_HW_RING) && (lim_tx > 1 || lim_rx > 1);
+
+	/*
+	 * core_lock indicates what to do with the core lock.
+	 * The core lock is used when either the card has no individual
+	 * locks, or it has individual locks but we are checking all
+	 * rings so we need the core lock to avoid missing wakeup events.
+	 *
+	 * It has three possible states:
+	 * NO_CL	we don't need to use the core lock, e.g.
+	 *		because we are protected by individual locks.
+	 * NEED_CL	we need the core lock. In this case, when we
+	 *		call the lock routine, move to LOCKED_CL
+	 *		to remember to release the lock once done.
+	 * LOCKED_CL	core lock is set, so we need to release it.
+	 */
+	core_lock = (check_all || !na->separate_locks) ? NEED_CL : NO_CL;
+#ifdef NM_BRIDGE
+	/* the bridge uses separate locks */
+	if (na->nm_register == bdg_netmap_reg) {
+		ND("not using core lock for %s", ifp->if_xname);
+		core_lock = NO_CL;
+	}
+#endif /* NM_BRIDGE */
+	if (priv->np_qlast != NETMAP_HW_RING) {
+		lim_tx = lim_rx = priv->np_qlast;
+	}
+
+	/*
+	 * We start with a lock-free round, which is good if we have
+	 * data available. If this fails, then we lock and call the sync
+	 * routines.
+	 */
+	for (i = priv->np_qfirst; want_rx && i < lim_rx; i++) {
+		kring = &na->rx_rings[i];
+		if (kring->ring->avail > 0) {
+			revents |= want_rx;
+			want_rx = 0;	/* also breaks the loop */
+		}
+	}
+	for (i = priv->np_qfirst; want_tx && i < lim_tx; i++) {
+		kring = &na->tx_rings[i];
+		if (kring->ring->avail > 0) {
+			revents |= want_tx;
+			want_tx = 0;	/* also breaks the loop */
+		}
+	}
+
+	/*
+	 * If we need to push packets out (priv->np_txpoll) or want_tx is
+	 * still set, we do need to run the txsync calls (on all rings,
+	 * to avoid stalling the tx rings).
+	 */
+	if (priv->np_txpoll || want_tx) {
+flush_tx:
+		for (i = priv->np_qfirst; i < lim_tx; i++) {
+			kring = &na->tx_rings[i];
+			/*
+			 * Skip the current ring if want_tx == 0
+			 * (we have already done a successful sync on
+			 * a previous ring) AND kring->cur == kring->hwcur
+			 * (there are no pending transmissions for this ring).
+			 */
+			if (!want_tx && kring->ring->cur == kring->nr_hwcur)
+				continue;
+			if (core_lock == NEED_CL) {
+				na->nm_lock(ifp, NETMAP_CORE_LOCK, 0);
+				core_lock = LOCKED_CL;
+			}
+			if (na->separate_locks)
+				na->nm_lock(ifp, NETMAP_TX_LOCK, i);
+			if (netmap_verbose & NM_VERB_TXSYNC)
+				D("send %d on %s %d",
+					kring->ring->cur,
+					ifp->if_xname, i);
+			if (na->nm_txsync(ifp, i, 0 /* no lock */))
+				revents |= POLLERR;
+
+			/* Check avail/call selrecord only if called with POLLOUT */
+			if (want_tx) {
+				if (kring->ring->avail > 0) {
+					/* stop at the first ring. We don't risk
+					 * starvation.
+					 */
+					revents |= want_tx;
+					want_tx = 0;
+				} else if (!check_all)
+					selrecord(td, &kring->si);
+			}
+			if (na->separate_locks)
+				na->nm_lock(ifp, NETMAP_TX_UNLOCK, i);
+		}
+	}
+
+	/*
+	 * now if want_rx is still set we need to lock and rxsync.
+	 * Do it on all rings because otherwise we starve.
+	 */
+	if (want_rx) {
+		for (i = priv->np_qfirst; i < lim_rx; i++) {
+			kring = &na->rx_rings[i];
+			if (core_lock == NEED_CL) {
+				na->nm_lock(ifp, NETMAP_CORE_LOCK, 0);
+				core_lock = LOCKED_CL;
+			}
+			if (na->separate_locks)
+				na->nm_lock(ifp, NETMAP_RX_LOCK, i);
+			if (netmap_fwd || kring->ring->flags & NR_FORWARD) {
+				ND(10, "forwarding some buffers up %d to %d",
+				    kring->nr_hwcur, kring->ring->cur);
+				netmap_grab_packets(kring, &q, netmap_fwd);
+			}
+
+			if (na->nm_rxsync(ifp, i, 0 /* no lock */))
+				revents |= POLLERR;
+			if (netmap_no_timestamp == 0 ||
+					kring->ring->flags & NR_TIMESTAMP) {
+				microtime(&kring->ring->ts);
+			}
+
+			if (kring->ring->avail > 0)
+				revents |= want_rx;
+			else if (!check_all)
+				selrecord(td, &kring->si);
+			if (na->separate_locks)
+				na->nm_lock(ifp, NETMAP_RX_UNLOCK, i);
+		}
+	}
+	if (check_all && revents == 0) { /* signal on the global queue */
+		if (want_tx)
+			selrecord(td, &na->tx_si);
+		if (want_rx)
+			selrecord(td, &na->rx_si);
+	}
+
+	/* forward host to the netmap ring */
+	kring = &na->rx_rings[lim_rx];
+	if (kring->nr_hwavail > 0)
+		ND("host rx %d has %d packets", lim_rx, kring->nr_hwavail);
+	if ( (priv->np_qlast == NETMAP_HW_RING) // XXX check_all
+			&& (netmap_fwd || kring->ring->flags & NR_FORWARD)
+			 && kring->nr_hwavail > 0 && !host_forwarded) {
+		if (core_lock == NEED_CL) {
+			na->nm_lock(ifp, NETMAP_CORE_LOCK, 0);
+			core_lock = LOCKED_CL;
+		}
+		netmap_sw_to_nic(na);
+		host_forwarded = 1; /* prevent another pass */
+		want_rx = 0;
+		goto flush_tx;
+	}
+
+	if (core_lock == LOCKED_CL)
+		na->nm_lock(ifp, NETMAP_CORE_UNLOCK, 0);
+	if (q.head)
+		netmap_send_up(na->ifp, q.head);
+
+	return (revents);
+}
+
+/*------- driver support routines ------*/
+
+/*
+ * default lock wrapper.
+ */
+static void
+netmap_lock_wrapper(struct ifnet *dev, int what, u_int queueid)
+{
+	struct netmap_adapter *na = NA(dev);
+
+	switch (what) {
+#ifdef linux	/* some systems do not need a lock on register */
+	case NETMAP_REG_LOCK:
+	case NETMAP_REG_UNLOCK:
+		break;
+#endif /* linux */
+
+	case NETMAP_CORE_LOCK:
+		mtx_lock(&na->core_lock);
+		break;
+
+	case NETMAP_CORE_UNLOCK:
+		mtx_unlock(&na->core_lock);
+		break;
+
+	case NETMAP_TX_LOCK:
+		mtx_lock(&na->tx_rings[queueid].q_lock);
+		break;
+
+	case NETMAP_TX_UNLOCK:
+		mtx_unlock(&na->tx_rings[queueid].q_lock);
+		break;
+
+	case NETMAP_RX_LOCK:
+		mtx_lock(&na->rx_rings[queueid].q_lock);
+		break;
+
+	case NETMAP_RX_UNLOCK:
+		mtx_unlock(&na->rx_rings[queueid].q_lock);
+		break;
+	}
+}
+
+
+/*
+ * Initialize a ``netmap_adapter`` object created by the driver on attach.
+ * We allocate a block of memory with room for a struct netmap_adapter
+ * plus two sets of N+2 struct netmap_kring (where N is the number
+ * of hardware rings):
+ * krings	0..N-1	are for the hardware queues.
+ * kring	N	is for the host stack queue
+ * kring	N+1	is only used for the selinfo for all queues.
+ * Return 0 on success, ENOMEM otherwise.
+ *
+ * By default the receive and transmit adapter ring counts are both initialized
+ * to num_queues.  na->num_tx_rings can be set for cards with different tx/rx
+ * setups.
+ */
+int
+netmap_attach(struct netmap_adapter *arg, int num_queues)
+{
+	struct netmap_adapter *na = NULL;
+	struct ifnet *ifp = arg ? arg->ifp : NULL;
+
+	if (arg == NULL || ifp == NULL)
+		goto fail;
+	na = malloc(sizeof(*na), M_DEVBUF, M_NOWAIT | M_ZERO);
+	if (na == NULL)
+		goto fail;
+	WNA(ifp) = na;
+	*na = *arg; /* copy everything, trust the driver to not pass junk */
+	NETMAP_SET_CAPABLE(ifp);
+	if (na->num_tx_rings == 0)
+		na->num_tx_rings = num_queues;
+	na->num_rx_rings = num_queues;
+	na->refcount = na->na_single = na->na_multi = 0;
+	/* Core lock initialized here, others after netmap_if_new. */
+	mtx_init(&na->core_lock, "netmap core lock", MTX_NETWORK_LOCK, MTX_DEF);
+	if (na->nm_lock == NULL) {
+		ND("using default locks for %s", ifp->if_xname);
+		na->nm_lock = netmap_lock_wrapper;
+	}
+#ifdef linux
+	if (ifp->netdev_ops) {
+		ND("netdev_ops %p", ifp->netdev_ops);
+		/* prepare a clone of the netdev ops */
+		na->nm_ndo = *ifp->netdev_ops;
+	}
+	na->nm_ndo.ndo_start_xmit = linux_netmap_start;
+#endif
+	D("success for %s", ifp->if_xname);
+	return 0;
+
+fail:
+	D("fail, arg %p ifp %p na %p", arg, ifp, na);
+	return (na ? EINVAL : ENOMEM);
+}
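+
+/*
+ * Illustrative sketch (not part of this patch): a driver's attach path
+ * fills a netmap_adapter and calls netmap_attach(), along the lines of
+ * bdg_netmap_attach() later in this file. The foo_* names and fields of
+ * the hypothetical private adapter are placeholders.
+ *
+ *	static void
+ *	foo_netmap_attach(struct foo_adapter *adapter)
+ *	{
+ *		struct netmap_adapter na;
+ *
+ *		bzero(&na, sizeof(na));
+ *		na.ifp = adapter->netdev;
+ *		na.separate_locks = 0;
+ *		na.num_tx_desc = adapter->tx_ring_count;
+ *		na.num_rx_desc = adapter->rx_ring_count;
+ *		na.nm_txsync = foo_netmap_txsync;
+ *		na.nm_rxsync = foo_netmap_rxsync;
+ *		na.nm_register = foo_netmap_reg;
+ *		netmap_attach(&na, adapter->num_queues);
+ *	}
+ */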
+
+
+/*
+ * Free the allocated memory linked to the given ``netmap_adapter``
+ * object.
+ */
+void
+netmap_detach(struct ifnet *ifp)
+{
+	struct netmap_adapter *na = NA(ifp);
+
+	if (!na)
+		return;
+
+	mtx_destroy(&na->core_lock);
+
+	if (na->tx_rings) { /* XXX should not happen */
+		D("freeing leftover tx_rings");
+		free(na->tx_rings, M_DEVBUF);
+	}
+	bzero(na, sizeof(*na));
+	WNA(ifp) = NULL;
+	free(na, M_DEVBUF);
+}
+
+
+/*
+ * Intercept packets from the network stack and pass them
+ * to netmap as incoming packets on the 'software' ring.
+ * We are not locked when called.
+ */
+int
+netmap_start(struct ifnet *ifp, struct mbuf *m)
+{
+	struct netmap_adapter *na = NA(ifp);
+	struct netmap_kring *kring = &na->rx_rings[na->num_rx_rings];
+	u_int i, len = MBUF_LEN(m);
+	u_int error = EBUSY, lim = kring->nkr_num_slots - 1;
+	struct netmap_slot *slot;
+
+	if (netmap_verbose & NM_VERB_HOST)
+		D("%s packet %d len %d from the stack", ifp->if_xname,
+			kring->nr_hwcur + kring->nr_hwavail, len);
+	na->nm_lock(ifp, NETMAP_CORE_LOCK, 0);
+	if (kring->nr_hwavail >= lim) {
+		if (netmap_verbose)
+			D("stack ring %s full\n", ifp->if_xname);
+		goto done;	/* no space */
+	}
+	if (len > NETMAP_BUF_SIZE) {
+		D("%s from_host, drop packet size %d > %d", ifp->if_xname,
+			len, NETMAP_BUF_SIZE);
+		goto done;	/* too long for us */
+	}
+
+	/* compute the insert position */
+	i = kring->nr_hwcur + kring->nr_hwavail;
+	if (i > lim)
+		i -= lim + 1;
+	slot = &kring->ring->slot[i];
+	m_copydata(m, 0, len, NMB(slot));
+	slot->len = len;
+	slot->flags = kring->nkr_slot_flags;
+	kring->nr_hwavail++;
+	if (netmap_verbose  & NM_VERB_HOST)
+		D("wake up host ring %s %d", na->ifp->if_xname, na->num_rx_rings);
+	selwakeuppri(&kring->si, PI_NET);
+	error = 0;
+done:
+	na->nm_lock(ifp, NETMAP_CORE_UNLOCK, 0);
+
+	/* release the mbuf in either case, success or failure. As an
+	 * alternative, put the mbuf in a free list and free the list
+	 * only when really necessary.
+	 */
+	m_freem(m);
+
+	return (error);
+}
+
+
+/*
+ * netmap_reset() is called by the driver routines when reinitializing
+ * a ring. The driver is in charge of locking to protect the kring.
+ * If netmap mode is not set just return NULL.
+ */
+struct netmap_slot *
+netmap_reset(struct netmap_adapter *na, enum txrx tx, int n,
+	u_int new_cur)
+{
+	struct netmap_kring *kring;
+	int new_hwofs, lim;
+
+	if (na == NULL)
+		return NULL;	/* no netmap support here */
+	if (!(na->ifp->if_capenable & IFCAP_NETMAP))
+		return NULL;	/* nothing to reinitialize */
+
+	if (tx == NR_TX) {
+		if (n >= na->num_tx_rings)
+			return NULL;
+		kring = na->tx_rings + n;
+		new_hwofs = kring->nr_hwcur - new_cur;
+	} else {
+		if (n >= na->num_rx_rings)
+			return NULL;
+		kring = na->rx_rings + n;
+		new_hwofs = kring->nr_hwcur + kring->nr_hwavail - new_cur;
+	}
+	lim = kring->nkr_num_slots - 1;
+	if (new_hwofs > lim)
+		new_hwofs -= lim + 1;
+
+	/* Always set the new offset value and realign the ring. */
+	kring->nkr_hwofs = new_hwofs;
+	if (tx == NR_TX)
+		kring->nr_hwavail = kring->nkr_num_slots - 1;
+	ND(10, "new hwofs %d on %s %s[%d]",
+			kring->nkr_hwofs, na->ifp->if_xname,
+			tx == NR_TX ? "TX" : "RX", n);
+
+#if 0 // def linux
+	/* XXX check that the mappings are correct */
+	/* need ring_nr, adapter->pdev, direction */
+	buffer_info->dma = dma_map_single(&pdev->dev, addr, adapter->rx_buffer_len, DMA_FROM_DEVICE);
+	if (dma_mapping_error(&adapter->pdev->dev, buffer_info->dma)) {
+		D("error mapping rx netmap buffer %d", i);
+		// XXX fix error handling
+	}
+
+#endif /* linux */
+	/*
+	 * Wakeup on the individual and global lock
+	 * We do the wakeup here, but the ring is not yet reconfigured.
+	 * However, we are under lock so there are no races.
+	 */
+	selwakeuppri(&kring->si, PI_NET);
+	selwakeuppri(tx == NR_TX ? &na->tx_si : &na->rx_si, PI_NET);
+	return kring->ring->slot;
+}
+
+
+/*
+ * Default functions to handle rx/tx interrupts
+ * we have 4 cases:
+ * 1 ring, single lock:
+ *	lock(core); wake(i=0); unlock(core)
+ * N rings, single lock:
+ *	lock(core); wake(i); wake(N+1) unlock(core)
+ * 1 ring, separate locks: (i=0)
+ *	lock(i); wake(i); unlock(i)
+ * N rings, separate locks:
+ *	lock(i); wake(i); unlock(i); lock(core) wake(N+1) unlock(core)
+ * work_done is non-null on the RX path.
+ */
+int
+netmap_rx_irq(struct ifnet *ifp, int q, int *work_done)
+{
+	struct netmap_adapter *na;
+	struct netmap_kring *r;
+	NM_SELINFO_T *main_wq;
+
+	if (!(ifp->if_capenable & IFCAP_NETMAP))
+		return 0;
+	ND(5, "received %s queue %d", work_done ? "RX" : "TX" , q);
+	na = NA(ifp);
+	if (na->na_flags & NAF_SKIP_INTR) {
+		ND("use regular interrupt");
+		return 0;
+	}
+
+	if (work_done) { /* RX path */
+		if (q >= na->num_rx_rings)
+			return 0;	// regular queue
+		r = na->rx_rings + q;
+		r->nr_kflags |= NKR_PENDINTR;
+		main_wq = (na->num_rx_rings > 1) ? &na->rx_si : NULL;
+	} else { /* tx path */
+		if (q >= na->num_tx_rings)
+			return 0;	// regular queue
+		r = na->tx_rings + q;
+		main_wq = (na->num_tx_rings > 1) ? &na->tx_si : NULL;
+		work_done = &q; /* dummy */
+	}
+	if (na->separate_locks) {
+		mtx_lock(&r->q_lock);
+		selwakeuppri(&r->si, PI_NET);
+		mtx_unlock(&r->q_lock);
+		if (main_wq) {
+			mtx_lock(&na->core_lock);
+			selwakeuppri(main_wq, PI_NET);
+			mtx_unlock(&na->core_lock);
+		}
+	} else {
+		mtx_lock(&na->core_lock);
+		selwakeuppri(&r->si, PI_NET);
+		if (main_wq)
+			selwakeuppri(main_wq, PI_NET);
+		mtx_unlock(&na->core_lock);
+	}
+	*work_done = 1; /* do not fire napi again */
+	return 1;
+}
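+
+/*
+ * Illustrative sketch (not part of this patch): a patched driver calls
+ * netmap_rx_irq() from its rx interrupt/NAPI path and skips its normal
+ * processing when the call reports that netmap consumed the event; on
+ * the tx completion path it passes a NULL work_done. The foo_* names
+ * below are placeholders.
+ *
+ *	static bool
+ *	foo_clean_rx_irq(struct foo_ring *rx_ring, int budget)
+ *	{
+ *		int dummy;
+ *
+ *		if (netmap_rx_irq(rx_ring->netdev, rx_ring->queue_index, &dummy))
+ *			return true;	(netmap handled the interrupt)
+ *		... regular driver rx processing ...
+ *	}
+ */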
+
+
+#ifdef linux	/* linux-specific routines */
+
+/*
+ * Remap linux arguments into the FreeBSD call.
+ * - pwait is the poll table, passed as 'dev';
+ *   If pwait == NULL someone else already woke up before. We can report
+ *   events but they are filtered upstream.
+ *   If pwait != NULL, then pwait->key contains the list of events.
+ * - events is computed from pwait as above.
+ * - file is passed as 'td';
+ */
+static u_int
+linux_netmap_poll(struct file * file, struct poll_table_struct *pwait)
+{
+#if LINUX_VERSION_CODE < KERNEL_VERSION(3,4,0)
+	int events = pwait ? pwait->key : POLLIN | POLLOUT;
+#else /* in 3.4.0 field 'key' was renamed to '_key' */
+	int events = pwait ? pwait->_key : POLLIN | POLLOUT;
+#endif
+	return netmap_poll((void *)pwait, events, (void *)file);
+}
+
+static int
+linux_netmap_mmap(struct file *f, struct vm_area_struct *vma)
+{
+	int lut_skip, i, j;
+	int user_skip = 0;
+	struct lut_entry *l_entry;
+	int error = 0;
+	unsigned long off, tomap;
+	/*
+	 * vma->vm_start: start of mapping user address space
+	 * vma->vm_end: end of the mapping user address space
+	 * vma->vm_pgoff: offset of the first page in the device
+	 */
+
+	// XXX security checks
+
+	error = netmap_get_memory(f->private_data);
+	ND("get_memory returned %d", error);
+	if (error)
+	    return -error;
+
+	off = vma->vm_pgoff << PAGE_SHIFT; /* offset in bytes */
+	tomap = vma->vm_end - vma->vm_start;
+	for (i = 0; i < NETMAP_POOLS_NR; i++) {  /* loop through obj_pools */
+		const struct netmap_obj_pool *p = &nm_mem.pools[i];
+		/*
+		 * In each pool memory is allocated in clusters
+		 * of size _clustsize, each containing clustentries
+		 * entries. For each object k we already store the
+		 * vtophys mapping in lut[k] so we use that, scanning
+		 * the lut[] array in steps of clustentries,
+		 * and we map each cluster (not individual pages,
+		 * it would be overkill).
+		 */
+
+		/*
+		 * We interpret vm_pgoff as an offset into the whole
+		 * netmap memory, as if all clusters were contiguous.
+		 */
+		for (lut_skip = 0, j = 0; j < p->_numclusters; j++, lut_skip += p->clustentries) {
+			unsigned long paddr, mapsize;
+			if (p->_clustsize <= off) {
+				off -= p->_clustsize;
+				continue;
+			}
+			l_entry = &p->lut[lut_skip]; /* first obj in the cluster */
+			paddr = l_entry->paddr + off;
+			mapsize = p->_clustsize - off;
+			off = 0;
+			if (mapsize > tomap)
+				mapsize = tomap;
+			ND("remap_pfn_range(%lx, %lx, %lx)",
+				vma->vm_start + user_skip,
+				paddr >> PAGE_SHIFT, mapsize);
+			if (remap_pfn_range(vma, vma->vm_start + user_skip,
+					paddr >> PAGE_SHIFT, mapsize,
+					vma->vm_page_prot))
+				return -EAGAIN; // XXX check return value
+			user_skip += mapsize;
+			tomap -= mapsize;
+			if (tomap == 0)
+				goto done;
+		}
+	}
+done:
+
+	return 0;
+}
+
+static netdev_tx_t
+linux_netmap_start(struct sk_buff *skb, struct net_device *dev)
+{
+	netmap_start(dev, skb);
+	return (NETDEV_TX_OK);
+}
+
+
+#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,37)	// XXX was 38
+#define LIN_IOCTL_NAME	.ioctl
+int
+linux_netmap_ioctl(struct inode *inode, struct file *file, u_int cmd, u_long data /* arg */)
+#else
+#define LIN_IOCTL_NAME	.unlocked_ioctl
+long
+linux_netmap_ioctl(struct file *file, u_int cmd, u_long data /* arg */)
+#endif
+{
+	int ret;
+	struct nmreq nmr;
+	bzero(&nmr, sizeof(nmr));
+
+	if (data && copy_from_user(&nmr, (void *)data, sizeof(nmr) ) != 0)
+		return -EFAULT;
+	ret = netmap_ioctl(NULL, cmd, (caddr_t)&nmr, 0, (void *)file);
+	pr_info("netmap ioctl %u ret = %d\n", cmd, ret);
+	if (data && copy_to_user((void*)data, &nmr, sizeof(nmr) ) != 0)
+		return -EFAULT;
+	return -ret;
+}
+
+
+static int
+netmap_release(struct inode *inode, struct file *file)
+{
+	(void)inode;	/* UNUSED */
+	if (file->private_data)
+		netmap_dtor(file->private_data);
+	return (0);
+}
+
+static int
+linux_netmap_open(struct inode *inode, struct file *file)
+{
+	struct netmap_priv_d *priv;
+	(void)inode;	/* UNUSED */
+
+	priv = malloc(sizeof(struct netmap_priv_d), M_DEVBUF,
+			      M_NOWAIT | M_ZERO);
+	if (priv == NULL)
+		return -ENOMEM;
+
+	file->private_data = priv;
+
+	return (0);
+}
+
+static struct file_operations netmap_fops = {
+    .open = linux_netmap_open,
+    .mmap = linux_netmap_mmap,
+    LIN_IOCTL_NAME = linux_netmap_ioctl,
+    .poll = linux_netmap_poll,
+    .release = netmap_release,
+};
+
+static struct miscdevice netmap_cdevsw = {	/* same name as FreeBSD */
+	MISC_DYNAMIC_MINOR,
+	"netmap",
+	&netmap_fops,
+};
+
+static int netmap_init(void);
+static void netmap_fini(void);
+
+/* Errors have negative values on linux */
+static int linux_netmap_init(void)
+{
+	return -netmap_init();
+}
+
+module_init(linux_netmap_init);
+module_exit(netmap_fini);
+/* export certain symbols to other modules */
+EXPORT_SYMBOL(netmap_attach);		// driver attach routines
+EXPORT_SYMBOL(netmap_detach);		// driver detach routines
+EXPORT_SYMBOL(netmap_ring_reinit);	// ring init on error
+EXPORT_SYMBOL(netmap_buffer_lut);
+EXPORT_SYMBOL(netmap_total_buffers);	// index check
+EXPORT_SYMBOL(netmap_buffer_base);
+EXPORT_SYMBOL(netmap_reset);		// ring init routines
+EXPORT_SYMBOL(netmap_buf_size);
+EXPORT_SYMBOL(netmap_rx_irq);		// default irq handler
+EXPORT_SYMBOL(netmap_no_pendintr);	// XXX mitigation - should go away
+
+
+MODULE_AUTHOR("Matteo Landi, Luigi Rizzo");
+MODULE_DESCRIPTION("The netmap packet I/O framework");
+MODULE_LICENSE("Dual BSD/GPL"); /* the code here is all BSD. */
+
+#else /* __FreeBSD__ */
+
+static struct cdevsw netmap_cdevsw = {
+	.d_version = D_VERSION,
+	.d_name = "netmap",
+	.d_open = netmap_open,
+	.d_mmap = netmap_mmap,
+	.d_mmap_single = netmap_mmap_single,
+	.d_ioctl = netmap_ioctl,
+	.d_poll = netmap_poll,
+	.d_close = netmap_close,
+};
+#endif /* __FreeBSD__ */
+
+#ifdef NM_BRIDGE
+/*
+ *---- support for virtual bridge -----
+ */
+
+/* ----- FreeBSD if_bridge hash function ------- */
+
+/*
+ * The following hash function is adapted from "Hash Functions" by Bob Jenkins
+ * ("Algorithm Alley", Dr. Dobbs Journal, September 1997).
+ *
+ * http://www.burtleburtle.net/bob/hash/spooky.html
+ */
+#define mix(a, b, c)                                                    \
+do {                                                                    \
+        a -= b; a -= c; a ^= (c >> 13);                                 \
+        b -= c; b -= a; b ^= (a << 8);                                  \
+        c -= a; c -= b; c ^= (b >> 13);                                 \
+        a -= b; a -= c; a ^= (c >> 12);                                 \
+        b -= c; b -= a; b ^= (a << 16);                                 \
+        c -= a; c -= b; c ^= (b >> 5);                                  \
+        a -= b; a -= c; a ^= (c >> 3);                                  \
+        b -= c; b -= a; b ^= (a << 10);                                 \
+        c -= a; c -= b; c ^= (b >> 15);                                 \
+} while (/*CONSTCOND*/0)
+
+static __inline uint32_t
+nm_bridge_rthash(const uint8_t *addr)
+{
+        uint32_t a = 0x9e3779b9, b = 0x9e3779b9, c = 0; // hash key
+
+        b += addr[5] << 8;
+        b += addr[4];
+        a += addr[3] << 24;
+        a += addr[2] << 16;
+        a += addr[1] << 8;
+        a += addr[0];
+
+        mix(a, b, c);
+#define BRIDGE_RTHASH_MASK	(NM_BDG_HASH-1)
+        return (c & BRIDGE_RTHASH_MASK);
+}
+
+#undef mix
+
+
+static int
+bdg_netmap_reg(struct ifnet *ifp, int onoff)
+{
+	int i, err = 0;
+	struct nm_bridge *b = ifp->if_bridge;
+
+	BDG_LOCK(b);
+	if (onoff) {
+		/* the interface must be already in the list.
+		 * only need to mark the port as active
+		 */
+		ND("should attach %s to the bridge", ifp->if_xname);
+		for (i=0; i < NM_BDG_MAXPORTS; i++)
+			if (b->bdg_ports[i] == ifp)
+				break;
+		if (i == NM_BDG_MAXPORTS) {
+			D("no more ports available");
+			err = EINVAL;
+			goto done;
+		}
+		ND("setting %s in netmap mode", ifp->if_xname);
+		ifp->if_capenable |= IFCAP_NETMAP;
+		NA(ifp)->bdg_port = i;
+		b->act_ports |= (1<<i);
+		b->bdg_ports[i] = ifp;
+	} else {
+		/* should be in the list, too -- remove from the mask */
+		ND("removing %s from netmap mode", ifp->if_xname);
+		ifp->if_capenable &= ~IFCAP_NETMAP;
+		i = NA(ifp)->bdg_port;
+		b->act_ports &= ~(1<<i);
+	}
+done:
+	BDG_UNLOCK(b);
+	return err;
+}
+
+
+static int
+nm_bdg_flush(struct nm_bdg_fwd *ft, int n, struct ifnet *ifp)
+{
+	int i, ifn;
+	uint64_t all_dst, dst;
+	uint32_t sh, dh;
+	uint64_t mysrc = 1 << NA(ifp)->bdg_port;
+	uint64_t smac, dmac;
+	struct netmap_slot *slot;
+	struct nm_bridge *b = ifp->if_bridge;
+
+	ND("prepare to send %d packets, act_ports 0x%x", n, b->act_ports);
+	/* only consider valid destinations */
+	all_dst = (b->act_ports & ~mysrc);
+	/* first pass: hash and find destinations */
+	for (i = 0; likely(i < n); i++) {
+		uint8_t *buf = ft[i].buf;
+		dmac = le64toh(*(uint64_t *)(buf)) & 0xffffffffffff;
+		smac = le64toh(*(uint64_t *)(buf + 4));
+		smac >>= 16;
+		if (unlikely(netmap_verbose)) {
+		    uint8_t *s = buf+6, *d = buf;
+		    D("%d len %4d %02x:%02x:%02x:%02x:%02x:%02x -> %02x:%02x:%02x:%02x:%02x:%02x",
+			i,
+			ft[i].len,
+			s[0], s[1], s[2], s[3], s[4], s[5],
+			d[0], d[1], d[2], d[3], d[4], d[5]);
+		}
+		/*
+		 * The hash is somewhat expensive, there might be some
+		 * worthwhile optimizations here.
+		 */
+		if ((buf[6] & 1) == 0) { /* valid src */
+		    	uint8_t *s = buf+6;
+			sh = nm_bridge_rthash(buf+6); // XXX hash of source
+			/* update source port forwarding entry */
+			b->ht[sh].mac = smac;	/* XXX expire ? */
+			b->ht[sh].ports = mysrc;
+			if (netmap_verbose)
+			    D("src %02x:%02x:%02x:%02x:%02x:%02x on port %d",
+				s[0], s[1], s[2], s[3], s[4], s[5], NA(ifp)->bdg_port);
+		}
+		dst = 0;
+		if ( (buf[0] & 1) == 0) { /* unicast */
+		    	uint8_t *d = buf;
+			dh = nm_bridge_rthash(buf); // XXX hash of dst
+			if (b->ht[dh].mac == dmac) {	/* found dst */
+				dst = b->ht[dh].ports;
+				if (netmap_verbose)
+				    D("dst %02x:%02x:%02x:%02x:%02x:%02x to port %x",
+					d[0], d[1], d[2], d[3], d[4], d[5], (uint32_t)(dst >> 16));
+			}
+		}
+		if (dst == 0)
+			dst = all_dst;
+		dst &= all_dst; /* only consider valid ports */
+		if (unlikely(netmap_verbose))
+			D("pkt goes to ports 0x%x", (uint32_t)dst);
+		ft[i].dst = dst;
+	}
+
+	/* second pass, scan interfaces and forward */
+	all_dst = (b->act_ports & ~mysrc);
+	for (ifn = 0; all_dst; ifn++) {
+		struct ifnet *dst_ifp = b->bdg_ports[ifn];
+		struct netmap_adapter *na;
+		struct netmap_kring *kring;
+		struct netmap_ring *ring;
+		int j, lim, sent, locked;
+
+		if (!dst_ifp)
+			continue;
+		ND("scan port %d %s", ifn, dst_ifp->if_xname);
+		dst = 1 << ifn;
+		if ((dst & all_dst) == 0)	/* skip if not set */
+			continue;
+		all_dst &= ~dst;	/* clear current node */
+		na = NA(dst_ifp);
+
+		ring = NULL;
+		kring = NULL;
+		lim = sent = locked = 0;
+		/* inside, scan slots */
+		for (i = 0; likely(i < n); i++) {
+			if ((ft[i].dst & dst) == 0)
+				continue;	/* not here */
+			if (!locked) {
+				kring = &na->rx_rings[0];
+				ring = kring->ring;
+				lim = kring->nkr_num_slots - 1;
+				na->nm_lock(dst_ifp, NETMAP_RX_LOCK, 0);
+				locked = 1;
+			}
+			if (unlikely(kring->nr_hwavail >= lim)) {
+				if (netmap_verbose)
+					D("rx ring full on %s", ifp->if_xname);
+				break;
+			}
+			j = kring->nr_hwcur + kring->nr_hwavail;
+			if (j > lim)
+				j -= kring->nkr_num_slots;
+			slot = &ring->slot[j];
+			ND("send %d %d bytes at %s:%d", i, ft[i].len, dst_ifp->if_xname, j);
+			pkt_copy(ft[i].buf, NMB(slot), ft[i].len);
+			slot->len = ft[i].len;
+			kring->nr_hwavail++;
+			sent++;
+		}
+		if (locked) {
+			ND("sent %d on %s", sent, dst_ifp->if_xname);
+			if (sent)
+				selwakeuppri(&kring->si, PI_NET);
+			na->nm_lock(dst_ifp, NETMAP_RX_UNLOCK, 0);
+		}
+	}
+	return 0;
+}
+
+/*
+ * main dispatch routine
+ */
+static int
+bdg_netmap_txsync(struct ifnet *ifp, u_int ring_nr, int do_lock)
+{
+	struct netmap_adapter *na = NA(ifp);
+	struct netmap_kring *kring = &na->tx_rings[ring_nr];
+	struct netmap_ring *ring = kring->ring;
+	int i, j, k, lim = kring->nkr_num_slots - 1;
+	struct nm_bdg_fwd *ft = (struct nm_bdg_fwd *)(ifp + 1);
+	int ft_i;	/* position in the forwarding table */
+
+	k = ring->cur;
+	if (k > lim)
+		return netmap_ring_reinit(kring);
+	if (do_lock)
+		na->nm_lock(ifp, NETMAP_TX_LOCK, ring_nr);
+
+	if (netmap_bridge <= 0) { /* testing only */
+		j = k; // used all
+		goto done;
+	}
+	if (netmap_bridge > NM_BDG_BATCH)
+		netmap_bridge = NM_BDG_BATCH;
+
+	ft_i = 0;	/* start from 0 */
+	for (j = kring->nr_hwcur; likely(j != k); j = unlikely(j == lim) ? 0 : j+1) {
+		struct netmap_slot *slot = &ring->slot[j];
+		int len = ft[ft_i].len = slot->len;
+		char *buf = ft[ft_i].buf = NMB(slot);
+
+		prefetch(buf);
+		if (unlikely(len < 14))
+			continue;
+		if (unlikely(++ft_i == netmap_bridge))
+			ft_i = nm_bdg_flush(ft, ft_i, ifp);
+	}
+	if (ft_i)
+		ft_i = nm_bdg_flush(ft, ft_i, ifp);
+	/* count how many packets we sent */
+	i = k - j;
+	if (i < 0)
+		i += kring->nkr_num_slots;
+	kring->nr_hwavail = kring->nkr_num_slots - 1 - i;
+	if (j != k)
+		D("early break at %d/ %d, avail %d", j, k, kring->nr_hwavail);
+
+done:
+	kring->nr_hwcur = j;
+	ring->avail = kring->nr_hwavail;
+	if (do_lock)
+		na->nm_lock(ifp, NETMAP_TX_UNLOCK, ring_nr);
+
+	if (netmap_verbose)
+		D("%s ring %d lock %d", ifp->if_xname, ring_nr, do_lock);
+	return 0;
+}
+
+static int
+bdg_netmap_rxsync(struct ifnet *ifp, u_int ring_nr, int do_lock)
+{
+	struct netmap_adapter *na = NA(ifp);
+	struct netmap_kring *kring = &na->rx_rings[ring_nr];
+	struct netmap_ring *ring = kring->ring;
+	u_int j, n, lim = kring->nkr_num_slots - 1;
+	u_int k = ring->cur, resvd = ring->reserved;
+
+	ND("%s ring %d lock %d avail %d",
+		ifp->if_xname, ring_nr, do_lock, kring->nr_hwavail);
+
+	if (k > lim)
+		return netmap_ring_reinit(kring);
+	if (do_lock)
+		na->nm_lock(ifp, NETMAP_RX_LOCK, ring_nr);
+
+	/* skip past packets that userspace has released */
+	j = kring->nr_hwcur;    /* netmap ring index */
+	if (resvd > 0) {
+		if (resvd + ring->avail >= lim + 1) {
+			D("XXX invalid reserve/avail %d %d", resvd, ring->avail);
+			ring->reserved = resvd = 0; // XXX panic...
+		}
+		k = (k >= resvd) ? k - resvd : k + lim + 1 - resvd;
+	}
+
+	if (j != k) { /* userspace has released some packets. */
+		n = k - j;
+		if (n < 0)
+			n += kring->nkr_num_slots;
+		ND("userspace releases %d packets", n);
+		for (n = 0; likely(j != k); n++) {
+			struct netmap_slot *slot = &ring->slot[j];
+			void *addr = NMB(slot);
+
+			if (addr == netmap_buffer_base) { /* bad buf */
+				if (do_lock)
+					na->nm_lock(ifp, NETMAP_RX_UNLOCK, ring_nr);
+				return netmap_ring_reinit(kring);
+			}
+			/* decrease refcount for buffer */
+
+			slot->flags &= ~NS_BUF_CHANGED;
+			j = unlikely(j == lim) ? 0 : j + 1;
+		}
+		kring->nr_hwavail -= n;
+		kring->nr_hwcur = k;
+	}
+	/* tell userspace that there are new packets */
+	ring->avail = kring->nr_hwavail - resvd;
+
+	if (do_lock)
+		na->nm_lock(ifp, NETMAP_RX_UNLOCK, ring_nr);
+	return 0;
+}
+
+static void
+bdg_netmap_attach(struct ifnet *ifp)
+{
+	struct netmap_adapter na;
+
+	ND("attaching virtual bridge");
+	bzero(&na, sizeof(na));
+
+	na.ifp = ifp;
+	na.separate_locks = 1;
+	na.num_tx_desc = NM_BRIDGE_RINGSIZE;
+	na.num_rx_desc = NM_BRIDGE_RINGSIZE;
+	na.nm_txsync = bdg_netmap_txsync;
+	na.nm_rxsync = bdg_netmap_rxsync;
+	na.nm_register = bdg_netmap_reg;
+	netmap_attach(&na, 1);
+}
+
+#endif /* NM_BRIDGE */
+
+static struct cdev *netmap_dev; /* /dev/netmap character device. */
+
+
+/*
+ * Module loader.
+ *
+ * Create the /dev/netmap device and initialize all global
+ * variables.
+ *
+ * Return 0 on success, errno on failure.
+ */
+static int
+netmap_init(void)
+{
+	int error;
+
+	error = netmap_memory_init();
+	if (error != 0) {
+		printf("netmap: unable to initialize the memory allocator.\n");
+		return (error);
+	}
+	printf("netmap: loaded module\n");
+	netmap_dev = make_dev(&netmap_cdevsw, 0, UID_ROOT, GID_WHEEL, 0660,
+			      "netmap");
+
+#ifdef NM_BRIDGE
+	{
+	int i;
+	for (i = 0; i < NM_BRIDGES; i++)
+		mtx_init(&nm_bridges[i].bdg_lock, "bdg lock", "bdg_lock", MTX_DEF);
+	}
+#endif
+	return (error);
+}
+
+
+/*
+ * Module unloader.
+ *
+ * Free all the memory, and destroy the ``/dev/netmap`` device.
+ */
+static void
+netmap_fini(void)
+{
+	destroy_dev(netmap_dev);
+	netmap_memory_fini();
+	printf("netmap: unloaded module.\n");
+}
+
+
+#ifdef __FreeBSD__
+/*
+ * Kernel entry point.
+ *
+ * Initialize/finalize the module and return.
+ *
+ * Return 0 on success, errno on failure.
+ */
+static int
+netmap_loader(__unused struct module *module, int event, __unused void *arg)
+{
+	int error = 0;
+
+	switch (event) {
+	case MOD_LOAD:
+		error = netmap_init();
+		break;
+
+	case MOD_UNLOAD:
+		netmap_fini();
+		break;
+
+	default:
+		error = EOPNOTSUPP;
+		break;
+	}
+
+	return (error);
+}
+
+
+DEV_MODULE(netmap, netmap_loader, NULL);
+#endif /* __FreeBSD__ */
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ b/drivers/staging/netmap/README	2013-03-10 10:08:20.327671428 -0700
@@ -0,0 +1,127 @@ 
+# $Id: README 10832 2012-03-22 18:22:42Z luigi $
+
+NETMAP FOR LINUX
+----------------
+
+This directory contains a version of the "netmap" code for Linux.
+
+Netmap is a BSD-licensed framework that supports line-rate direct packet
+I/O even on 10GBit/s interfaces (14.88Mpps) with limited system load,
+and includes a libpcap emulation library to port applications.  See
+
+	http://info.iet.unipi.it/~luigi/netmap/
+
+for more details. There you can also find the latest versions
+of the code and documentation as well as pre-built TinyCore
+images based on linux 3.0.3 and containing the netmap modules
+and some test applications.
+
+This is a preliminary version supporting the ixgbe and e1000/e1000e
+drivers. Patches for other devices (igb, r8169, forcedeth) are
+untested and probably not working yet.
+
+Netmap relies on a kernel module (netmap_lin.ko) and slightly modified
+device drivers. Userspace programs can use the native API (documented
+in netmap.4) or a libpcap emulation library.
+
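+As a rough sketch of the native API (see netmap.4 for the real details;
+the struct and macro names below come from netmap.h / netmap_user.h, and
+error handling plus details such as the API version field are omitted),
+a simple receiver looks roughly like:
+
+	fd = open("/dev/netmap", O_RDWR);
+	bzero(&req, sizeof(req));		/* struct nmreq req */
+	strcpy(req.nr_name, "eth0");
+	ioctl(fd, NIOCREGIF, &req);		/* put eth0 in netmap mode */
+	mem = mmap(0, req.nr_memsize, PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0);
+	nifp = NETMAP_IF(mem, req.nr_offset);	/* struct netmap_if * */
+	for (;;) {
+		poll(&fds, 1, -1);		/* fds.fd = fd, fds.events = POLLIN */
+		ring = NETMAP_RXRING(nifp, 0);
+		while (ring->avail > 0) {
+			i = ring->cur;
+			buf = NETMAP_BUF(ring, ring->slot[i].buf_idx);
+			/* process ring->slot[i].len bytes at buf */
+			ring->cur = NETMAP_RING_NEXT(ring, i);
+			ring->avail--;
+		}
+	}
+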
+    Directory structure for this archive
+
+	.		documentation, patches etc.
+	include/net	header files for user programs
+	net/netmap	kernel core files,
+			sample applications, manpage
+	net/*		patched device drivers for a 3.0.x linux version.
+
+HOW TO BUILD THE CODE
+---------------------
+
+1. make sure you have kernel sources/headers matching your installed system
+
+2. do the following
+	(cd net;  make KSRC=/usr/src/linux-kernel-source-or-headers )
+   this produces net/netmap/netmap_lin.ko and other kernel modules.
+
+3. to build sample applications, run
+	(cd net/netmap; make apps )
+   (you will need the pthreads and libpcap-dev packages to build them)
+
+HOW TO USE THE CODE
+-------------------
+
+    REMEMBER
+	THIS IS EXPERIMENTAL CODE WHICH MAY CRASH YOUR SYSTEM.
+	USE IT AT YOUR OWN RISK.
+
+Whether you built your own modules, or are using the prebuilt
+TinyCore image, the following steps can be used for initial testing:
+
+1. unload any modules for the network cards you want to use, e.g.
+	sudo rmmod ixgbe
+	sudo rmmod e1000
+	sudo rmmod e1000e
+
+2. load netmap and device driver module
+	sudo insmod net/netmap/netmap_lin.ko
+	sudo insmod net/ixgbe/ixgbe.ko
+	sudo insmod net/e1000/e1000.ko
+	sudo insmod net/e1000e/e1000e.ko
+
+3. turn the interface(s) up
+
+	sudo ifconfig eth0 up # and same for others
+
+4. Run test applications -- as an example, pkt-gen is a raw packet
+   sender/receiver which can do line rate on a 10G interface
+
+	# send about 500 million packets of 64 bytes each.
+	# wait 5s before starting, so the link can go up
+	sudo pkt-gen -i eth0 -t 500111222 -l 64 -w 5
+	# you should see about 14.2 Mpps
+
+	sudo pkt-gen -i eth0 # act as a receiver
+
+
+COMMON PROBLEMS
+----------------
+
+* switching in/out of netmap mode causes the link to go down and up.
+  If your card is connected to a switch with spanning tree enabled,
+  the switch will likely MUTE THE LINK FOR 10 SECONDS while it is
+  detecting the new topology. Either disable the spanning tree on
+  the switch or use long pauses before sending data;
+
+* Not all cards can do line rate, no matter how fast your software or
+  CPU is. Several have hardware limitations that prevent reaching the peak
+  speed, especially for small packet sizes. Examples:
+
+  - ixgbe cannot receive at line rate with packet sizes that are
+    not a multiple of 64 (after CRC stripping).
+    This is especially evident with minimum-sized frames (-l 60 )
+
+  - some of the low-end 'e1000' cards can send 1.2 - 1.3Mpps instead
+    of the theoretical maximum (1.488Mpps)
+
+  - the 'realtek' cards seem unable to send more than 450-500Kpps
+    even though they can receive at least 1.1Mpps
+
+* if the link is not up when the packet generator starts, you will
+  see frequent messages about a link reset. While we work on a fix,
+  use the '-w' argument on the generator to specify a longer timeout
+
+* the ixgbe driver (and perhaps others) is severely slowed down if the
+  remote party is sending flow control frames to slow down traffic.
+  If that happens, try using the ethtool command to disable flow control.
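+  As an example (the interface name is just a placeholder):
+
+	ethtool -A eth0 rx off tx off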
+
+
+REVISION HISTORY
+-----------------
+
+20120322 - fixed the 'igb' driver, now it can send and receive correctly
+	(the problem was in netmap_rx_irq() so it might have affected
+	other multiqueue cards).
+	Also tested the 'r8169' in transmit mode.
+	Added comments on switches and spanning tree.
+
+20120217 - initial version. Only ixgbe, e1000 and e1000e are working.
+	Other drivers (igb, r8169, forcedeth) are supplied only as a
+	proof of concept.
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ b/drivers/staging/netmap/TODO	2013-03-10 16:45:40.377960319 -0700
@@ -0,0 +1,21 @@ 
+
+This driver is functional but needs some cleanup before it is ready
+for inclusion in the networking subsystem.
+
+  - Use unifdef to eliminate non-Linux code
+  - Get rid of wrapper code including bsd_glue.h
+  - Fix coding style to be Linux rather than BSD
+  - Fix whitespace warnings
+  - Add stubs so that Ethernet drivers can use netmap without #ifdef's
+  - Autoload the module, assign a real minor number and have aliases
+  - Rework documentation and put in Documentation/networking/
+  - Remove memory allocator kludge wrapper
+  - Remove devfs kludge wrappers
+  - Review configuration APIs and management (counters)
+  - Locking should be consistent (i.e. get rid of the separate_locks option)?
+  - Doesn't work with DMA remapping
+  - Use __u32 rather than uint32_t in headers
+  - netmap_user.h headers should use inline instead of macros
+
+Fundamentally this will break source compatibility with FreeBSD,
+but this is not our problem.
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ b/drivers/staging/netmap/netmap_mem2.c	2013-03-10 10:08:20.327671428 -0700
@@ -0,0 +1,974 @@ 
+/*
+ * Copyright (C) 2012 Matteo Landi, Luigi Rizzo, Giuseppe Lettieri. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *   1. Redistributions of source code must retain the above copyright
+ *      notice, this list of conditions and the following disclaimer.
+ *   2. Redistributions in binary form must reproduce the above copyright
+ *      notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED.  IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ */
+
+/*
+ * $FreeBSD: head/sys/dev/netmap/netmap_mem2.c 234290 2012-04-14 16:44:18Z luigi $
+ * $Id: netmap_mem2.c 12010 2013-01-23 03:57:30Z luigi $
+ *
+ * (New) memory allocator for netmap
+ */
+
+/*
+ * This allocator creates three memory regions:
+ *	nm_if_pool	for the struct netmap_if
+ *	nm_ring_pool	for the struct netmap_ring
+ *	nm_buf_pool	for the packet buffers.
+ *
+ * All regions need to be a multiple of the page size as we export them to
+ * userspace through mmap. Only the latter needs to be dma-able,
+ * but for convenience we use the same type of allocator for all.
+ *
+ * Once mapped, the three regions are exported to userspace
+ * as a contiguous block, starting from nm_if_pool. Each
+ * cluster (and pool) is an integral number of pages.
+ *   [ . . . ][ . . . . . .][ . . . . . . . . . .]
+ *    nm_if     nm_ring            nm_buf
+ *
+ * The userspace areas contain offsets of the objects in userspace.
+ * When (at init time) we write these offsets, we find out the index
+ * of the object, and from there locate the offset from the beginning
+ * of the region.
+ *
+ * The individual allocators manage a pool of memory for objects of
+ * the same size.
+ * The pool is split into smaller clusters, whose size is a
+ * multiple of the page size. The cluster size is chosen
+ * to minimize the waste for a given max cluster size
+ * (we do it by brute force, as we have relatively few objects
+ * per cluster).
+ *
+ * Objects are aligned to the cache line (64 bytes), rounding up object
+ * sizes when needed. A bitmap contains the state of each object.
+ * Allocation scans the bitmap; this is done only on attach, so we are not
+ * too worried about performance.
+ *
+ * For each allocator we can define (through sysctl) the size and
+ * number of each object. Memory is allocated at the first use of a
+ * netmap file descriptor, and can be freed when all such descriptors
+ * have been released (including unmapping the memory).
+ * If memory is scarce, the system tries to get as much as possible
+ * and the sysctl values reflect the actual allocation.
+ * Together with the desired values, the sysctls also export absolute
+ * minimum and maximum values that cannot be overridden.
+ *
+ * struct netmap_if:
+ *	variable size, max 16 bytes per ring pair plus some fixed amount.
+ *	1024 bytes should be large enough in practice.
+ *
+ *	In the worst case we have one netmap_if per ring in the system.
+ *
+ * struct netmap_ring
+ *	variable too, 8 bytes per slot plus some fixed amount.
+ *	Rings can be large (e.g. 4k slots, or >32Kbytes).
+ *	We default to 36 KB (9 pages), and a few hundred rings.
+ *
+ * struct netmap_buffer
+ *	The more the better, both because fast interfaces tend to have
+ *	many slots, and because we may want to use buffers to store
+ *	packets in userspace avoiding copies.
+ *	Must contain a full frame (e.g. 1518 bytes, or more for VLANs, jumbo
+ *	frames etc.), be nicely aligned, and note that some NICs restrict
+ *	the size to a multiple of 1K or so. Default is 2K.
+ */
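+
+/*
+ * Sketch of how userspace resolves pointers in this layout (a minimal
+ * illustration; field names follow netmap.h / netmap_user.h and the
+ * corresponding macros, which may differ slightly in this snapshot):
+ *
+ *	nifp = (struct netmap_if *)((char *)mem + req.nr_offset);
+ *	ring = (struct netmap_ring *)((char *)nifp + nifp->ring_ofs[i]);
+ *	buf  = (char *)ring + ring->buf_ofs + slot->buf_idx * ring->nr_buf_size;
+ */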
+
+#ifndef CONSERVATIVE
+#define NETMAP_BUF_MAX_NUM	20*4096*2	/* large machine */
+#else /* CONSERVATIVE */
+#define NETMAP_BUF_MAX_NUM      20000   /* 40MB */
+#endif
+
+#ifdef linux
+#define NMA_LOCK_T		struct semaphore
+#define NMA_LOCK_INIT()		sema_init(&nm_mem.nm_mtx, 1)
+#define NMA_LOCK_DESTROY()
+#define NMA_LOCK()		down(&nm_mem.nm_mtx)
+#define NMA_UNLOCK()		up(&nm_mem.nm_mtx)
+#else /* !linux */
+#define NMA_LOCK_T		struct mtx
+#define NMA_LOCK_INIT()		mtx_init(&nm_mem.nm_mtx, "netmap memory allocator lock", NULL, MTX_DEF)
+#define NMA_LOCK_DESTROY()	mtx_destroy(&nm_mem.nm_mtx)
+#define NMA_LOCK()		mtx_lock(&nm_mem.nm_mtx)
+#define NMA_UNLOCK()		mtx_unlock(&nm_mem.nm_mtx)
+#endif /* linux */
+
+enum {
+	NETMAP_IF_POOL   = 0,
+	NETMAP_RING_POOL,
+	NETMAP_BUF_POOL,
+	NETMAP_POOLS_NR
+};
+
+
+struct netmap_obj_params {
+	u_int size;
+	u_int num;
+};
+
+
+struct netmap_obj_params netmap_params[NETMAP_POOLS_NR] = {
+	[NETMAP_IF_POOL] = {
+		.size = 1024,
+		.num  = 100,
+	},
+	[NETMAP_RING_POOL] = {
+		.size = 9*PAGE_SIZE,
+		.num  = 200,
+	},
+	[NETMAP_BUF_POOL] = {
+		.size = 2048,
+		.num  = NETMAP_BUF_MAX_NUM,
+	},
+};
+
+
+struct netmap_obj_pool {
+	char name[16];		/* name of the allocator */
+	u_int objtotal;         /* actual total number of objects. */
+	u_int objfree;          /* number of free objects. */
+	u_int clustentries;	/* actual objects per cluster */
+
+	/* limits */
+	u_int objminsize;	/* minimum object size */
+	u_int objmaxsize;	/* maximum object size */
+	u_int nummin;		/* minimum number of objects */
+	u_int nummax;		/* maximum number of objects */
+
+	/* the total memory space is _numclusters*_clustsize */
+	u_int _numclusters;	/* how many clusters */
+	u_int _clustsize;        /* cluster size */
+	u_int _objsize;		/* actual object size */
+
+	u_int _memtotal;	/* _numclusters*_clustsize */
+	struct lut_entry *lut;  /* virt,phys addresses, objtotal entries */
+	uint32_t *bitmap;       /* one bit per buffer, 1 means free */
+	uint32_t bitmap_slots;	/* number of uint32 entries in bitmap */
+};
+
+
+struct netmap_mem_d {
+	NMA_LOCK_T nm_mtx;  /* protect the allocator */
+	u_int nm_totalsize; /* shorthand */
+
+	int finalized;		/* !=0 iff preallocation done */
+	int lasterr;		/* last error for curr config */
+	int refcount;		/* existing priv structures */
+	/* the three allocators */
+	struct netmap_obj_pool pools[NETMAP_POOLS_NR];
+};
+
+
+static struct netmap_mem_d nm_mem = {	/* Our memory allocator. */
+	.pools = {
+		[NETMAP_IF_POOL] = {
+			.name 	= "netmap_if",
+			.objminsize = sizeof(struct netmap_if),
+			.objmaxsize = 4096,
+			.nummin     = 10,	/* don't be stingy */
+			.nummax	    = 10000,	/* XXX very large */
+		},
+		[NETMAP_RING_POOL] = {
+			.name 	= "netmap_ring",
+			.objminsize = sizeof(struct netmap_ring),
+			.objmaxsize = 32*PAGE_SIZE,
+			.nummin     = 2,
+			.nummax	    = 1024,
+		},
+		[NETMAP_BUF_POOL] = {
+			.name	= "netmap_buf",
+			.objminsize = 64,
+			.objmaxsize = 65536,
+			.nummin     = 4,
+			.nummax	    = 1000000, /* one million! */
+		},
+	},
+};
+
+struct lut_entry *netmap_buffer_lut;	/* exported */
+
+/* memory allocator related sysctls */
+
+#define STRINGIFY(x) #x
+
+#define DECLARE_SYSCTLS(id, name) \
+	/* TUNABLE_INT("hw.netmap." STRINGIFY(name) "_size", &netmap_params[id].size); */ \
+	SYSCTL_INT(_dev_netmap, OID_AUTO, name##_size, \
+	    CTLFLAG_RW, &netmap_params[id].size, 0, "Requested size of netmap " STRINGIFY(name) "s"); \
+        SYSCTL_INT(_dev_netmap, OID_AUTO, name##_curr_size, \
+            CTLFLAG_RD, &nm_mem.pools[id]._objsize, 0, "Current size of netmap " STRINGIFY(name) "s"); \
+	/* TUNABLE_INT("hw.netmap." STRINGIFY(name) "_num", &netmap_params[id].num); */ \
+        SYSCTL_INT(_dev_netmap, OID_AUTO, name##_num, \
+            CTLFLAG_RW, &netmap_params[id].num, 0, "Requested number of netmap " STRINGIFY(name) "s"); \
+        SYSCTL_INT(_dev_netmap, OID_AUTO, name##_curr_num, \
+            CTLFLAG_RD, &nm_mem.pools[id].objtotal, 0, "Current number of netmap " STRINGIFY(name) "s")
+
+DECLARE_SYSCTLS(NETMAP_IF_POOL, if);
+DECLARE_SYSCTLS(NETMAP_RING_POOL, ring);
+DECLARE_SYSCTLS(NETMAP_BUF_POOL, buf);
+
+/*
+ * Convert a userspace offset to a physical address.
+ * XXX re-do in a simpler way.
+ *
+ * The idea here is to hide from userspace applications the fact that the
+ * pre-allocated memory is not contiguous, but fragmented across different
+ * clusters and smaller memory allocators. Consequently, first of all we need
+ * to find which allocator owns the provided offset, then we need to find the
+ * physical address associated with the target page (this is done using the
+ * look-up table).
+ */
+static inline vm_paddr_t
+netmap_ofstophys(vm_offset_t offset)
+{
+	int i;
+	vm_offset_t o = offset;
+	struct netmap_obj_pool *p = nm_mem.pools;
+
+	for (i = 0; i < NETMAP_POOLS_NR; offset -= p[i]._memtotal, i++) {
+		if (offset >= p[i]._memtotal)
+			continue;
+		// XXX now scan the clusters
+		return p[i].lut[offset / p[i]._objsize].paddr +
+			offset % p[i]._objsize;
+	}
+	/* this is only in case of errors */
+	D("invalid ofs 0x%x out of 0x%x 0x%x 0x%x", (u_int)o,
+		p[NETMAP_IF_POOL]._memtotal,
+		p[NETMAP_IF_POOL]._memtotal
+			+ p[NETMAP_RING_POOL]._memtotal,
+		p[NETMAP_IF_POOL]._memtotal
+			+ p[NETMAP_RING_POOL]._memtotal
+			+ p[NETMAP_BUF_POOL]._memtotal);
+	return 0;	// XXX bad address
+}
+
+/*
+ * we store objects by kernel address, need to find the offset
+ * within the pool to export the value to userspace.
+ * Algorithm: scan until we find the cluster, then add the
+ * actual offset in the cluster
+ */
+static ssize_t
+netmap_obj_offset(struct netmap_obj_pool *p, const void *vaddr)
+{
+	int i, k = p->clustentries, n = p->objtotal;
+	ssize_t ofs = 0;
+
+	for (i = 0; i < n; i += k, ofs += p->_clustsize) {
+		const char *base = p->lut[i].vaddr;
+		ssize_t relofs = (const char *) vaddr - base;
+
+		if (relofs < 0 || relofs > p->_clustsize)
+			continue;
+
+		ofs = ofs + relofs;
+		ND("%s: return offset %d (cluster %d) for pointer %p",
+		    p->name, ofs, i, vaddr);
+		return ofs;
+	}
+	D("address %p is not contained inside any cluster (%s)",
+	    vaddr, p->name);
+	return 0; /* An error occurred */
+}
+
+/* Helper functions which convert virtual addresses to offsets */
+#define netmap_if_offset(v)					\
+	netmap_obj_offset(&nm_mem.pools[NETMAP_IF_POOL], (v))
+
+#define netmap_ring_offset(v)					\
+    (nm_mem.pools[NETMAP_IF_POOL]._memtotal + 				\
+	netmap_obj_offset(&nm_mem.pools[NETMAP_RING_POOL], (v)))
+
+#define netmap_buf_offset(v)					\
+    (nm_mem.pools[NETMAP_IF_POOL]._memtotal +				\
+	nm_mem.pools[NETMAP_RING_POOL]._memtotal +			\
+	netmap_obj_offset(&nm_mem.pools[NETMAP_BUF_POOL], (v)))
+
+
+/*
+ * Report the index, and use the start position as a hint;
+ * otherwise buffer allocation becomes terribly expensive.
+ */
+static void *
+netmap_obj_malloc(struct netmap_obj_pool *p, int len, uint32_t *start, uint32_t *index)
+{
+	uint32_t i = 0;			/* index in the bitmap */
+	uint32_t mask, j;		/* slot counter */
+	void *vaddr = NULL;
+
+	if (len > p->_objsize) {
+		D("%s request size %d too large", p->name, len);
+		// XXX cannot reduce the size
+		return NULL;
+	}
+
+	if (p->objfree == 0) {
+		D("%s allocator: run out of memory", p->name);
+		return NULL;
+	}
+	if (start)
+		i = *start;
+
+	/* termination is guaranteed by p->objfree, but better check bounds on i */
+	while (vaddr == NULL && i < p->bitmap_slots)  {
+		uint32_t cur = p->bitmap[i];
+		if (cur == 0) { /* bitmask is fully used */
+			i++;
+			continue;
+		}
+		/* locate a slot */
+		for (j = 0, mask = 1; (cur & mask) == 0; j++, mask <<= 1)
+			;
+
+		p->bitmap[i] &= ~mask; /* mark object as in use */
+		p->objfree--;
+
+		vaddr = p->lut[i * 32 + j].vaddr;
+		if (index)
+			*index = i * 32 + j;
+	}
+	ND("%s allocator: allocated object @ [%d][%d]: vaddr %p", p->name, i, j, vaddr);
+
+	if (start)
+		*start = i;
+	return vaddr;
+}
+
+
+/*
+ * free by index, not by address
+ */
+static void
+netmap_obj_free(struct netmap_obj_pool *p, uint32_t j)
+{
+	if (j >= p->objtotal) {
+		D("invalid index %u, max %u", j, p->objtotal);
+		return;
+	}
+	p->bitmap[j / 32] |= (1 << (j % 32));
+	p->objfree++;
+	return;
+}
+
+static void
+netmap_obj_free_va(struct netmap_obj_pool *p, void *vaddr)
+{
+	int i, j, n = p->_memtotal / p->_clustsize;
+
+	for (i = 0, j = 0; i < n; i++, j += p->clustentries) {
+		void *base = p->lut[i * p->clustentries].vaddr;
+		ssize_t relofs = (ssize_t) vaddr - (ssize_t) base;
+
+		/* The given address is outside the scope of the current cluster. */
+		if (vaddr < base || relofs > p->_clustsize)
+			continue;
+
+		j = j + relofs / p->_objsize;
+		KASSERT(j != 0, ("Cannot free object 0"));
+		netmap_obj_free(p, j);
+		return;
+	}
+	D("address %p is not contained inside any cluster (%s)",
+	    vaddr, p->name);
+}
+
+#define netmap_if_malloc(len)	netmap_obj_malloc(&nm_mem.pools[NETMAP_IF_POOL], len, NULL, NULL)
+#define netmap_if_free(v)	netmap_obj_free_va(&nm_mem.pools[NETMAP_IF_POOL], (v))
+#define netmap_ring_malloc(len)	netmap_obj_malloc(&nm_mem.pools[NETMAP_RING_POOL], len, NULL, NULL)
+#define netmap_ring_free(v)	netmap_obj_free_va(&nm_mem.pools[NETMAP_RING_POOL], (v))
+#define netmap_buf_malloc(_pos, _index)			\
+	netmap_obj_malloc(&nm_mem.pools[NETMAP_BUF_POOL], NETMAP_BUF_SIZE, _pos, _index)
+
+
+/* Return the index associated to the given packet buffer */
+#define netmap_buf_index(v)						\
+    (netmap_obj_offset(&nm_mem.pools[NETMAP_BUF_POOL], (v)) / nm_mem.pools[NETMAP_BUF_POOL]._objsize)
+
+
+/* Return nonzero on error */
+static int
+netmap_new_bufs(struct netmap_if *nifp,
+                struct netmap_slot *slot, u_int n)
+{
+	struct netmap_obj_pool *p = &nm_mem.pools[NETMAP_BUF_POOL];
+	int i = 0;	/* slot counter */
+	uint32_t pos = 0;	/* slot in p->bitmap */
+	uint32_t index = 0;	/* buffer index */
+
+	(void)nifp;	/* UNUSED */
+	for (i = 0; i < n; i++) {
+		void *vaddr = netmap_buf_malloc(&pos, &index);
+		if (vaddr == NULL) {
+			D("unable to locate empty packet buffer");
+			goto cleanup;
+		}
+		slot[i].buf_idx = index;
+		slot[i].len = p->_objsize;
+		/* XXX setting flags=NS_BUF_CHANGED forces a pointer reload
+		 * in the NIC ring. This is a hack that hides missing
+		 * initializations in the drivers, and should go away.
+		 */
+		slot[i].flags = NS_BUF_CHANGED;
+	}
+
+	ND("allocated %d buffers, %d available, first at %d", n, p->objfree, pos);
+	return (0);
+
+cleanup:
+	while (i > 0) {
+		i--;
+		netmap_obj_free(p, slot[i].buf_idx);
+	}
+	bzero(slot, n * sizeof(slot[0]));
+	return (ENOMEM);
+}
+
+
+static void
+netmap_free_buf(struct netmap_if *nifp, uint32_t i)
+{
+	struct netmap_obj_pool *p = &nm_mem.pools[NETMAP_BUF_POOL];
+
+	if (i < 2 || i >= p->objtotal) {
+		D("Cannot free buf#%d: should be in [2, %d[", i, p->objtotal);
+		return;
+	}
+	netmap_obj_free(p, i);
+}
+
+static void
+netmap_reset_obj_allocator(struct netmap_obj_pool *p)
+{
+	if (p == NULL)
+		return;
+	if (p->bitmap)
+		free(p->bitmap, M_NETMAP);
+	p->bitmap = NULL;
+	if (p->lut) {
+		int i;
+		for (i = 0; i < p->objtotal; i += p->clustentries) {
+			if (p->lut[i].vaddr)
+				contigfree(p->lut[i].vaddr, p->_clustsize, M_NETMAP);
+		}
+		bzero(p->lut, sizeof(struct lut_entry) * p->objtotal);
+#ifdef linux
+		vfree(p->lut);
+#else
+		free(p->lut, M_NETMAP);
+#endif
+	}
+	p->lut = NULL;
+}
+
+/*
+ * Free all resources related to an allocator.
+ */
+static void
+netmap_destroy_obj_allocator(struct netmap_obj_pool *p)
+{
+	if (p == NULL)
+		return;
+	netmap_reset_obj_allocator(p);
+}
+
+/*
+ * We receive a request for objtotal objects, of size objsize each.
+ * Internally we may round up both numbers, as we allocate objects
+ * in small clusters multiple of the page size.
+ * In the allocator we don't need to store the objsize,
+ * but we do need to keep track of objtotal' and clustentries,
+ * as they are needed when freeing memory.
+ *
+ * XXX note -- userspace needs the buffers to be contiguous,
+ *	so we cannot afford gaps at the end of a cluster.
+ */
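+
+/*
+ * Worked example of the rounding above (a sketch, assuming PAGE_SIZE 4096):
+ * a request for 1001 buffers of 2048 bytes gives clustentries = 2
+ * (two 2048-byte objects fill a 4096-byte page exactly), clustsize = 4096,
+ * _numclusters = ceil(1001 / 2) = 501, so objtotal is rounded up to 1002
+ * and _memtotal = 501 * 4096 bytes.
+ */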
+
+
+/* call with NMA_LOCK held */
+static int
+netmap_config_obj_allocator(struct netmap_obj_pool *p, u_int objtotal, u_int objsize)
+{
+	int i, n;
+	u_int clustsize;	/* the cluster size, multiple of page size */
+	u_int clustentries;	/* how many objects per entry */
+
+#define MAX_CLUSTSIZE	(1<<17)
+#define LINE_ROUND	64
+	if (objsize >= MAX_CLUSTSIZE) {
+		/* we could do it but there is no point */
+		D("unsupported allocation for %d bytes", objsize);
+		goto error;
+	}
+	/* make sure objsize is a multiple of LINE_ROUND */
+	i = (objsize & (LINE_ROUND - 1));
+	if (i) {
+		D("XXX aligning object by %d bytes", LINE_ROUND - i);
+		objsize += LINE_ROUND - i;
+	}
+	if (objsize < p->objminsize || objsize > p->objmaxsize) {
+		D("requested objsize %d out of range [%d, %d]",
+			objsize, p->objminsize, p->objmaxsize);
+		goto error;
+	}
+	if (objtotal < p->nummin || objtotal > p->nummax) {
+		D("requested objtotal %d out of range [%d, %d]",
+			objtotal, p->nummin, p->nummax);
+		goto error;
+	}
+	/*
+	 * Compute number of objects using a brute-force approach:
+	 * given a max cluster size,
+	 * we try to fill it with objects keeping track of the
+	 * wasted space to the next page boundary.
+	 */
+	for (clustentries = 0, i = 1;; i++) {
+		u_int delta, used = i * objsize;
+		if (used > MAX_CLUSTSIZE)
+			break;
+		delta = used % PAGE_SIZE;
+		if (delta == 0) { // exact solution
+			clustentries = i;
+			break;
+		}
+		if (delta > ( (clustentries*objsize) % PAGE_SIZE) )
+			clustentries = i;
+	}
+	// D("XXX --- ouch, delta %d (bad for buffers)", delta);
+	/* compute clustsize and round to the next page */
+	clustsize = clustentries * objsize;
+	i =  (clustsize & (PAGE_SIZE - 1));
+	if (i)
+		clustsize += PAGE_SIZE - i;
+	if (netmap_verbose)
+		D("objsize %d clustsize %d objects %d",
+			objsize, clustsize, clustentries);
+
+	/*
+	 * The number of clusters is n = ceil(objtotal/clustentries)
+	 * objtotal' = n * clustentries
+	 */
+	p->clustentries = clustentries;
+	p->_clustsize = clustsize;
+	n = (objtotal + clustentries - 1) / clustentries;
+	p->_numclusters = n;
+	p->objtotal = n * clustentries;
+	p->objfree = p->objtotal - 2; /* obj 0 and 1 are reserved */
+	p->_memtotal = p->_numclusters * p->_clustsize;
+	p->_objsize = objsize;
+
+	return 0;
+
+error:
+	p->_objsize = objsize;
+	p->objtotal = objtotal;
+
+	return EINVAL;
+}
+
+
+/* call with NMA_LOCK held */
+static int
+netmap_finalize_obj_allocator(struct netmap_obj_pool *p)
+{
+	int i, n;
+
+	n = sizeof(struct lut_entry) * p->objtotal;
+#ifdef linux
+	p->lut = vmalloc(n);
+#else
+	p->lut = malloc(n, M_NETMAP, M_NOWAIT | M_ZERO);
+#endif
+	if (p->lut == NULL) {
+		D("Unable to create lookup table (%d bytes) for '%s'", n, p->name);
+		goto clean;
+	}
+
+	/* Allocate the bitmap */
+	n = (p->objtotal + 31) / 32;
+	p->bitmap = malloc(sizeof(uint32_t) * n, M_NETMAP, M_NOWAIT | M_ZERO);
+	if (p->bitmap == NULL) {
+		D("Unable to create bitmap (%d entries) for allocator '%s'", n,
+		    p->name);
+		goto clean;
+	}
+	p->bitmap_slots = n;
+
+	/*
+	 * Allocate clusters, init pointers and bitmap
+	 */
+	for (i = 0; i < p->objtotal;) {
+		int lim = i + p->clustentries;
+		char *clust;
+
+		clust = contigmalloc(p->_clustsize, M_NETMAP, M_NOWAIT | M_ZERO,
+		    0, -1UL, PAGE_SIZE, 0);
+		if (clust == NULL) {
+			/*
+			 * If we get here, there is a severe memory shortage,
+			 * so halve the allocated memory to reclaim some.
+			 * XXX check boundaries
+			 */
+			D("Unable to create cluster at %d for '%s' allocator",
+			    i, p->name);
+			lim = i / 2;
+			for (i--; i >= lim; i--) {
+				p->bitmap[ (i>>5) ] &=  ~( 1 << (i & 31) );
+				if (i % p->clustentries == 0 && p->lut[i].vaddr)
+					contigfree(p->lut[i].vaddr,
+						p->_clustsize, M_NETMAP);
+			}
+			p->objtotal = i;
+			p->objfree = p->objtotal - 2;
+			p->_numclusters = i / p->clustentries;
+			p->_memtotal = p->_numclusters * p->_clustsize;
+			break;
+		}
+		for (; i < lim; i++, clust += p->_objsize) {
+			p->bitmap[ (i>>5) ] |=  ( 1 << (i & 31) );
+			p->lut[i].vaddr = clust;
+			p->lut[i].paddr = vtophys(clust);
+		}
+	}
+	p->bitmap[0] = ~3; /* objs 0 and 1 are always busy */
+	if (netmap_verbose)
+		D("Pre-allocated %d clusters (%d/%dKB) for '%s'",
+		    p->_numclusters, p->_clustsize >> 10,
+		    p->_memtotal >> 10, p->name);
+
+	return 0;
+
+clean:
+	netmap_reset_obj_allocator(p);
+	return ENOMEM;
+}
+
+/* call with lock held */
+static int
+netmap_memory_config_changed(void)
+{
+	int i;
+
+	for (i = 0; i < NETMAP_POOLS_NR; i++) {
+		if (nm_mem.pools[i]._objsize != netmap_params[i].size ||
+		    nm_mem.pools[i].objtotal != netmap_params[i].num)
+		    return 1;
+	}
+	return 0;
+}
+
+
+/* call with lock held */
+static int
+netmap_memory_config(void)
+{
+	int i;
+
+
+	if (!netmap_memory_config_changed())
+		goto out;
+
+	D("reconfiguring");
+
+	if (nm_mem.finalized) {
+		/* reset previous allocation */
+		for (i = 0; i < NETMAP_POOLS_NR; i++) {
+			netmap_reset_obj_allocator(&nm_mem.pools[i]);
+		}
+		nm_mem.finalized = 0;
+	}
+
+	for (i = 0; i < NETMAP_POOLS_NR; i++) {
+		nm_mem.lasterr = netmap_config_obj_allocator(&nm_mem.pools[i],
+				netmap_params[i].num, netmap_params[i].size);
+		if (nm_mem.lasterr)
+			goto out;
+	}
+
+	D("Have %d KB for interfaces, %d KB for rings and %d MB for buffers",
+	    nm_mem.pools[NETMAP_IF_POOL]._memtotal >> 10,
+	    nm_mem.pools[NETMAP_RING_POOL]._memtotal >> 10,
+	    nm_mem.pools[NETMAP_BUF_POOL]._memtotal >> 20);
+
+out:
+
+	return nm_mem.lasterr;
+}
+
+/* call with lock held */
+static int
+netmap_memory_finalize(void)
+{
+	int i;
+	u_int totalsize = 0;
+
+	nm_mem.refcount++;
+	if (nm_mem.refcount > 1) {
+		ND("busy (refcount %d)", nm_mem.refcount);
+		goto out;
+	}
+
+	/* update configuration if changed */
+	if (netmap_memory_config())
+		goto out;
+
+	if (nm_mem.finalized) {
+		/* may happen if config is not changed */
+		ND("nothing to do");
+		goto out;
+	}
+
+	for (i = 0; i < NETMAP_POOLS_NR; i++) {
+		nm_mem.lasterr = netmap_finalize_obj_allocator(&nm_mem.pools[i]);
+		if (nm_mem.lasterr)
+			goto cleanup;
+		totalsize += nm_mem.pools[i]._memtotal;
+	}
+	nm_mem.nm_totalsize = totalsize;
+
+	/* backward compatibility */
+	netmap_buf_size = nm_mem.pools[NETMAP_BUF_POOL]._objsize;
+	netmap_total_buffers = nm_mem.pools[NETMAP_BUF_POOL].objtotal;
+
+	netmap_buffer_lut = nm_mem.pools[NETMAP_BUF_POOL].lut;
+	netmap_buffer_base = nm_mem.pools[NETMAP_BUF_POOL].lut[0].vaddr;
+
+	nm_mem.finalized = 1;
+	nm_mem.lasterr = 0;
+
+	/* make sysctl values match actual values in the pools */
+	for (i = 0; i < NETMAP_POOLS_NR; i++) {
+		netmap_params[i].size = nm_mem.pools[i]._objsize;
+		netmap_params[i].num  = nm_mem.pools[i].objtotal;
+	}
+
+out:
+	if (nm_mem.lasterr)
+		nm_mem.refcount--;
+
+	return nm_mem.lasterr;
+
+cleanup:
+	for (i = 0; i < NETMAP_POOLS_NR; i++) {
+		netmap_reset_obj_allocator(&nm_mem.pools[i]);
+	}
+	nm_mem.refcount--;
+
+	return nm_mem.lasterr;
+}
+
+static int
+netmap_memory_init(void)
+{
+	NMA_LOCK_INIT();
+	return (0);
+}
+
+static void
+netmap_memory_fini(void)
+{
+	int i;
+
+	for (i = 0; i < NETMAP_POOLS_NR; i++) {
+	    netmap_destroy_obj_allocator(&nm_mem.pools[i]);
+	}
+	NMA_LOCK_DESTROY();
+}
+
+static void
+netmap_free_rings(struct netmap_adapter *na)
+{
+	int i;
+	if (!na->tx_rings)
+		return;
+	for (i = 0; i < na->num_tx_rings + 1; i++) {
+		netmap_ring_free(na->tx_rings[i].ring);
+		na->tx_rings[i].ring = NULL;
+	}
+	for (i = 0; i < na->num_rx_rings + 1; i++) {
+		netmap_ring_free(na->rx_rings[i].ring);
+		na->rx_rings[i].ring = NULL;
+	}
+	free(na->tx_rings, M_DEVBUF);
+	na->tx_rings = na->rx_rings = NULL;
+}
+
+
+
+/* call with NMA_LOCK held */
+/*
+ * Allocate the per-fd structure netmap_if.
+ * If this is the first instance, also allocate the krings, rings etc.
+ */
+static void *
+netmap_if_new(const char *ifname, struct netmap_adapter *na)
+{
+	struct netmap_if *nifp;
+	struct netmap_ring *ring;
+	ssize_t base; /* handy for relative offsets between rings and nifp */
+	u_int i, len, ndesc, ntx, nrx;
+	struct netmap_kring *kring;
+
+	if (netmap_update_config(na)) {
+		/* configuration mismatch, report and fail */
+		return NULL;
+	}
+	ntx = na->num_tx_rings + 1; /* shorthand, include stack ring */
+	nrx = na->num_rx_rings + 1; /* shorthand, include stack ring */
+	/*
+	 * the descriptor is followed inline by an array of offsets
+	 * to the tx and rx rings in the shared memory region.
+	 */
+	len = sizeof(struct netmap_if) + (nrx + ntx) * sizeof(ssize_t);
+	nifp = netmap_if_malloc(len);
+	if (nifp == NULL) {
+		return NULL;
+	}
+
+	/* initialize base fields -- override const */
+	*(int *)(uintptr_t)&nifp->ni_tx_rings = na->num_tx_rings;
+	*(int *)(uintptr_t)&nifp->ni_rx_rings = na->num_rx_rings;
+	strncpy(nifp->ni_name, ifname, IFNAMSIZ);
+
+	(na->refcount)++;	/* XXX atomic ? we are under lock */
+	if (na->refcount > 1) { /* already setup, we are done */
+		goto final;
+	}
+
+	len = (ntx + nrx) * sizeof(struct netmap_kring);
+	na->tx_rings = malloc(len, M_DEVBUF, M_NOWAIT | M_ZERO);
+	if (na->tx_rings == NULL) {
+		D("Cannot allocate krings for %s", ifname);
+		goto cleanup;
+	}
+	na->rx_rings = na->tx_rings + ntx;
+
+	/*
+	 * First instance, allocate netmap rings and buffers for this card
+	 * The rings are contiguous, but have variable size.
+	 */
+	for (i = 0; i < ntx; i++) { /* Transmit rings */
+		kring = &na->tx_rings[i];
+		ndesc = na->num_tx_desc;
+		bzero(kring, sizeof(*kring));
+		len = sizeof(struct netmap_ring) +
+			  ndesc * sizeof(struct netmap_slot);
+		ring = netmap_ring_malloc(len);
+		if (ring == NULL) {
+			D("Cannot allocate tx_ring[%d] for %s", i, ifname);
+			goto cleanup;
+		}
+		ND("txring[%d] at %p", i, ring);
+		kring->na = na;
+		kring->ring = ring;
+		*(int *)(uintptr_t)&ring->num_slots = kring->nkr_num_slots = ndesc;
+		*(ssize_t *)(uintptr_t)&ring->buf_ofs =
+		    (nm_mem.pools[NETMAP_IF_POOL]._memtotal +
+			nm_mem.pools[NETMAP_RING_POOL]._memtotal) -
+			netmap_ring_offset(ring);
+
+		/*
+		 * IMPORTANT:
+		 * Always keep one slot empty, so we can detect new
+		 * transmissions comparing cur and nr_hwcur (they are
+		 * the same only if there are no new transmissions).
+		 */
+		ring->avail = kring->nr_hwavail = ndesc - 1;
+		ring->cur = kring->nr_hwcur = 0;
+		*(int *)(uintptr_t)&ring->nr_buf_size = NETMAP_BUF_SIZE;
+		ND("initializing slots for txring[%d]", i);
+		if (netmap_new_bufs(nifp, ring->slot, ndesc)) {
+			D("Cannot allocate buffers for tx_ring[%d] for %s", i, ifname);
+			goto cleanup;
+		}
+	}
+
+	for (i = 0; i < nrx; i++) { /* Receive rings */
+		kring = &na->rx_rings[i];
+		ndesc = na->num_rx_desc;
+		bzero(kring, sizeof(*kring));
+		len = sizeof(struct netmap_ring) +
+			  ndesc * sizeof(struct netmap_slot);
+		ring = netmap_ring_malloc(len);
+		if (ring == NULL) {
+			D("Cannot allocate rx_ring[%d] for %s", i, ifname);
+			goto cleanup;
+		}
+		ND("rxring[%d] at %p", i, ring);
+
+		kring->na = na;
+		kring->ring = ring;
+		*(int *)(uintptr_t)&ring->num_slots = kring->nkr_num_slots = ndesc;
+		*(ssize_t *)(uintptr_t)&ring->buf_ofs =
+		    (nm_mem.pools[NETMAP_IF_POOL]._memtotal +
+		        nm_mem.pools[NETMAP_RING_POOL]._memtotal) -
+			netmap_ring_offset(ring);
+
+		ring->cur = kring->nr_hwcur = 0;
+		ring->avail = kring->nr_hwavail = 0; /* empty */
+		*(int *)(uintptr_t)&ring->nr_buf_size = NETMAP_BUF_SIZE;
+		ND("initializing slots for rxring[%d]", i);
+		if (netmap_new_bufs(nifp, ring->slot, ndesc)) {
+			D("Cannot allocate buffers for rx_ring[%d] for %s", i, ifname);
+			goto cleanup;
+		}
+	}
+#ifdef linux
+	// XXX initialize the selrecord structs.
+	for (i = 0; i < ntx; i++)
+		init_waitqueue_head(&na->tx_rings[i].si);
+	for (i = 0; i < nrx; i++)
+		init_waitqueue_head(&na->rx_rings[i].si);
+	init_waitqueue_head(&na->tx_si);
+	init_waitqueue_head(&na->rx_si);
+#endif
+final:
+	/*
+	 * fill the ring_ofs entries for the rx and tx rings. They contain the offset
+	 * between the ring and nifp, so the information is usable in
+	 * userspace to reach the ring from the nifp.
+	 */
+	base = netmap_if_offset(nifp);
+	for (i = 0; i < ntx; i++) {
+		*(ssize_t *)(uintptr_t)&nifp->ring_ofs[i] =
+			netmap_ring_offset(na->tx_rings[i].ring) - base;
+	}
+	for (i = 0; i < nrx; i++) {
+		*(ssize_t *)(uintptr_t)&nifp->ring_ofs[i+ntx] =
+			netmap_ring_offset(na->rx_rings[i].ring) - base;
+	}
+	return (nifp);
+cleanup:
+	netmap_free_rings(na);
+	netmap_if_free(nifp);
+	(na->refcount)--;
+	return NULL;
+}
+
+/* call with NMA_LOCK held */
+static void
+netmap_memory_deref(void)
+{
+	nm_mem.refcount--;
+	if (netmap_verbose)
+		D("refcount = %d", nm_mem.refcount);
+}
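
For reference, the cluster sizing logic in netmap_config_obj_allocator() above can
be exercised in isolation. The following is a minimal user-space sketch of that loop
(the 4 KiB PAGE_SIZE and the sample object sizes are assumptions for illustration,
not values taken from the patch or a running system):

	#include <stdio.h>

	#define PAGE_SIZE	4096u
	#define MAX_CLUSTSIZE	(1u << 17)

	/* Pick the number of objects per cluster that wastes the least space
	 * up to the next page boundary, mirroring the kernel loop above. */
	static unsigned pick_clustentries(unsigned objsize)
	{
		unsigned clustentries = 0, i;

		for (i = 1;; i++) {
			unsigned used = i * objsize;

			if (used > MAX_CLUSTSIZE)
				break;
			if (used % PAGE_SIZE == 0)	/* exact fit, stop */
				return i;
			if (used % PAGE_SIZE > (clustentries * objsize) % PAGE_SIZE)
				clustentries = i;	/* least waste so far */
		}
		return clustentries;
	}

	int main(void)
	{
		/* 2048-byte buffers pack exactly (2 per page); 1536-byte ones
		 * end up 8 per cluster (three pages, no waste). */
		printf("2048 -> %u per cluster\n", pick_clustentries(2048));
		printf("1536 -> %u per cluster\n", pick_clustentries(1536));
		return 0;
	}
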
--- a/include/uapi/Kbuild	2013-02-26 10:19:28.000000000 -0800
+++ b/include/uapi/Kbuild	2013-03-10 10:08:20.327671428 -0700
@@ -12,3 +12,4 @@  header-y += video/
 header-y += drm/
 header-y += xen/
 header-y += scsi/
+header-y += netmap/
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ b/include/uapi/netmap/Kbuild	2013-03-10 10:08:20.327671428 -0700
@@ -0,0 +1,3 @@ 
+# UAPI Header export list
+header-y += netmap.h
+header-y += netmap_user.h
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ b/include/uapi/netmap/netmap.h	2013-03-10 10:08:20.327671428 -0700
@@ -0,0 +1,289 @@ 
+/*
+ * Copyright (C) 2011 Matteo Landi, Luigi Rizzo. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are
+ * met:
+ *
+ *   1. Redistributions of source code must retain the above copyright
+ *      notice, this list of conditions and the following disclaimer.
+ *
+ *   2. Redistributions in binary form must reproduce the above copyright
+ *      notice, this list of conditions and the following disclaimer in the
+ *      documentation and/or other materials provided with the
+ *      distribution.
+ *
+ *   3. Neither the name of the authors nor the names of their contributors
+ *      may be used to endorse or promote products derived from this
+ *      software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY MATTEO LANDI AND CONTRIBUTORS "AS IS" AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL MATTEO LANDI OR CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ * THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+/*
+ * $FreeBSD: head/sys/net/netmap.h 231198 2012-02-08 11:43:29Z luigi $
+ * $Id: netmap.h 10879 2012-04-12 22:48:59Z luigi $
+ *
+ * Definitions of constants and the structures used by the netmap
+ * framework, for the part visible to both kernel and userspace.
+ * Detailed info on netmap is available with "man netmap" or at
+ *
+ *	http://info.iet.unipi.it/~luigi/netmap/
+ */
+
+#ifndef _NET_NETMAP_H_
+#define _NET_NETMAP_H_
+
+/*
+ * --- Netmap data structures ---
+ *
+ * The data structures used by netmap are shown below. Those in
+ * capital letters are in an mmap()ed area shared with userspace,
+ * while others are private to the kernel.
+ * Shared structures do not contain pointers but only memory
+ * offsets, so that addressing is portable between kernel and userspace.
+
+
+ softc
++----------------+
+| standard fields|
+| if_pspare[0] ----------+
++----------------+       |
+                         |
++----------------+<------+
+|(netmap_adapter)|
+|                |                             netmap_kring
+| tx_rings *--------------------------------->+---------------+
+|                |       netmap_kring         | ring    *---------.
+| rx_rings *--------->+---------------+       | nr_hwcur      |   |
++----------------+    | ring    *--------.    | nr_hwavail    |   V
+                      | nr_hwcur      |  |    | selinfo       |   |
+                      | nr_hwavail    |  |    +---------------+   .
+                      | selinfo       |  |    |     ...       |   .
+                      +---------------+  |    |(ntx+1 entries)|
+                      |    ....       |  |    |               |
+                      |(nrx+1 entries)|  |    +---------------+
+                      |               |  |
+   KERNEL             +---------------+  |
+                                         |
+  ====================================================================
+                                         |
+   USERSPACE                             |      NETMAP_RING
+                                         +---->+-------------+
+                                             / | cur         |
+   NETMAP_IF  (nifp, one per file desc.)    /  | avail       |
+    +---------------+                      /   | buf_ofs     |
+    | ni_tx_rings   |                     /    +=============+
+    | ni_rx_rings   |                    /     | buf_idx     | slot[0]
+    |               |                   /      | len, flags  |
+    |               |                  /       +-------------+
+    +===============+                 /        | buf_idx     | slot[1]
+    | txring_ofs[0] | (rel.to nifp)--'         | len, flags  |
+    | txring_ofs[1] |                          +-------------+
+  (num_rings+1 entries)                     (nr_num_slots entries)
+    | txring_ofs[n] |                          | buf_idx     | slot[n-1]
+    +---------------+                          | len, flags  |
+    | rxring_ofs[0] |                          +-------------+
+    | rxring_ofs[1] |
+  (num_rings+1 entries)
+    | rxring_ofs[n] |
+    +---------------+
+
+ * The private descriptor ('softc' or 'adapter') of each interface
+ * is extended with a "struct netmap_adapter" containing netmap-related
+ * info (see the description in netmap/netmap_kern.h).
+ * Among other things, tx_rings and rx_rings point to the arrays of
+ * "struct netmap_kring" which in turn reach the various
+ * "struct netmap_ring" structures, shared with userspace.
+
+ * The NETMAP_RING is the userspace-visible replica of the NIC ring.
+ * Each slot has the index of a buffer, its length and some flags.
+ * In user space, the buffer address is computed as
+ *	(char *)ring + buf_ofs + index*NETMAP_BUF_SIZE
+ * In the kernel, buffers do not necessarily need to be contiguous,
+ * and the virtual and physical addresses are derived through
+ * a lookup table.
+ * To associate a different buffer to a slot, applications must
+ * write the new index in buf_idx, and set NS_BUF_CHANGED flag to
+ * make sure that the kernel updates the hardware ring as needed.
+ *
+ * Normally the driver is not requested to report the result of
+ * transmissions (this can dramatically speed up operation).
+ * However the user may request to report completion by setting
+ * NS_REPORT.
+ */
+struct netmap_slot {
+	uint32_t buf_idx; /* buffer index */
+	uint16_t len;	/* packet length, to be copied to/from the hw ring */
+	uint16_t flags;	/* buf changed, etc. */
+#define	NS_BUF_CHANGED	0x0001	/* must resync the map, buffer changed */
+#define	NS_REPORT	0x0002	/* ask the hardware to report results
+				 * e.g. by generating an interrupt
+				 */
+};
+
+/*
+ * Netmap representation of a TX or RX ring (also known as "queue").
+ * This is a queue implemented as a fixed-size circular array.
+ * At the software level, two fields are important: avail and cur.
+ *
+ * In TX rings:
+ *	avail	indicates the number of slots available for transmission.
+ *		It is updated by the kernel after every netmap system call.
+ *		It MUST BE decremented by the application when it appends a
+ *		packet.
+ *	cur	indicates the slot to use for the next packet
+ *		to send (i.e. the "tail" of the queue).
+ *		It MUST BE incremented by the application before
+ *		netmap system calls to reflect the number of newly
+ *		sent packets.
+ *		It is checked by the kernel on netmap system calls
+ *		(normally unmodified by the kernel unless invalid).
+ *
+ *   The kernel side of netmap uses two additional fields in its own
+ *   private ring structure, netmap_kring:
+ *	nr_hwcur is a copy of nr_cur on an NIOCTXSYNC.
+ *	nr_hwavail is the number of slots known as available by the
+ *		hardware. It is updated on an INTR (inc by the
+ *		number of packets sent) and on a NIOCTXSYNC
+ *		(decrease by nr_cur - nr_hwcur)
+ *		A special case, nr_hwavail is -1 if the transmit
+ *		side is idle (no pending transmits).
+ *
+ * In RX rings:
+ *	avail	is the number of packets available (possibly 0).
+ *		It MUST BE decremented by the application when it consumes
+ *		a packet, and it is updated to nr_hwavail on a NIOCRXSYNC
+ *	cur	indicates the first slot that contains a packet not
+ *		processed yet (the "head" of the queue).
+ *		It MUST BE incremented by the software when it consumes
+ *		a packet.
+ *	reserved	indicates the number of buffers before 'cur'
+ *		that the application has still in use. Normally 0,
+ *		it MUST BE incremented by the application when it
+ *		does not return the buffer immediately, and decremented
+ *		when the buffer is finally freed.
+ *
+ *   The kernel side of netmap uses two additional fields in the kring:
+ *	nr_hwcur is a copy of nr_cur on an NIOCRXSYNC
+ *	nr_hwavail is the number of packets available. It is updated
+ *		on INTR (inc by the number of new packets arrived)
+ *		and on NIOCRXSYNC (decreased by nr_cur - nr_hwcur).
+ *
+ * DATA OWNERSHIP/LOCKING:
+ *	The netmap_ring is owned by the user program and it is only
+ *	accessed or modified in the upper half of the kernel during
+ *	a system call.
+ *
+ *	The netmap_kring is only modified by the upper half of the kernel.
+ */
+struct netmap_ring {
+	/*
+	 * buf_ofs is meant to be used through macros.
+	 * It contains the offset of the buffer region from this
+	 * descriptor.
+	 */
+	const ssize_t	buf_ofs;
+	const uint32_t	num_slots;	/* number of slots in the ring. */
+	uint32_t	avail;		/* number of usable slots */
+	uint32_t        cur;		/* 'current' r/w position */
+	uint32_t	reserved;	/* not refilled before current */
+
+	const uint16_t	nr_buf_size;
+	uint16_t	flags;
+#define	NR_TIMESTAMP	0x0002		/* set timestamp on *sync() */
+
+	struct timeval	ts;		/* time of last *sync() */
+
+	/* the slots follow. This struct has variable size */
+	struct netmap_slot slot[0];	/* array of slots. */
+};
+
+
+/*
+ * Netmap representation of an interface and its queue(s).
+ * There is one netmap_if for each file descriptor on which we want
+ * to select/poll.  We assume that on each interface has the same number
+ * of receive and transmit queues.
+ * select/poll operates on one or all pairs depending on the value of
+ * nmr_queueid passed on the ioctl.
+ */
+struct netmap_if {
+	char		ni_name[IFNAMSIZ]; /* name of the interface. */
+	const u_int	ni_version;	/* API version, currently unused */
+	const u_int	ni_rx_rings;	/* number of rx rings */
+	const u_int	ni_tx_rings;	/* if zero, same as ni_rx_rings */
+	/*
+	 * The following array contains the offset of each netmap ring
+	 * from this structure. The first ni_tx_rings+1 entries refer
+	 * to the tx rings, the next ni_rx_rings+1 refer to the rx rings
+	 * (the last entry in each block refers to the host stack rings).
+	 * The area is filled up by the kernel on NIOCREG,
+	 * and then only read by userspace code.
+	 */
+	const ssize_t	ring_ofs[0];
+};
+
+#ifndef NIOCREGIF
+/*
+ * ioctl names and related fields
+ *
+ * NIOCGINFO takes a struct ifreq, the interface name is the input,
+ *	the outputs are the number of queues and the number of descriptors
+ *	for each queue (useful to set number of threads etc.).
+ *
+ * NIOCREGIF takes an interface name within a struct ifreq,
+ *	and activates netmap mode on the interface (if possible).
+ *
+ * NIOCUNREGIF unregisters the interface associated to the fd.
+ *
+ * NIOCTXSYNC, NIOCRXSYNC synchronize tx or rx queues,
+ *	whose identity is set in NIOCREGIF through nr_ringid
+ */
+
+/*
+ * struct nmreq overlays a struct ifreq
+ */
+struct nmreq {
+	char		nr_name[IFNAMSIZ];
+	uint32_t	nr_version;	/* API version */
+#define	NETMAP_API	3		/* current version */
+	uint32_t	nr_offset;	/* nifp offset in the shared region */
+	uint32_t	nr_memsize;	/* size of the shared region */
+	uint32_t	nr_tx_slots;	/* slots in tx rings */
+	uint32_t	nr_rx_slots;	/* slots in rx rings */
+	uint16_t	nr_tx_rings;	/* number of tx rings */
+	uint16_t	nr_rx_rings;	/* number of rx rings */
+	uint16_t	nr_ringid;	/* ring(s) we care about */
+#define NETMAP_HW_RING	0x4000		/* low bits indicate one hw ring */
+#define NETMAP_SW_RING	0x2000		/* process the sw ring */
+#define NETMAP_NO_TX_POLL	0x1000	/* no automatic txsync on poll */
+#define NETMAP_RING_MASK 0xfff		/* the ring number */
+	uint16_t	spare1;
+	uint32_t	spare2[4];
+};
+
+/*
+ * FreeBSD uses the size value embedded in the _IOWR to determine
+ * how much to copy in/out. So we need it to match the actual
+ * data structure we pass. We put some spares in the structure
+ * to ease compatibility with other versions
+ */
+#define NIOCGINFO	_IOWR('i', 145, struct nmreq) /* return IF info */
+#define NIOCREGIF	_IOWR('i', 146, struct nmreq) /* interface register */
+#define NIOCUNREGIF	_IO('i', 147) /* interface unregister */
+#define NIOCTXSYNC	_IO('i', 148) /* sync tx queues */
+#define NIOCRXSYNC	_IO('i', 149) /* sync rx queues */
+#endif /* !NIOCREGIF */
+
+#endif /* _NET_NETMAP_H_ */
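
As a usage illustration of the NIOCGINFO path described in the comments above, the
sketch below queries the ring geometry of an interface. It assumes the exported
header is visible to user space as <netmap/netmap.h>, uses "eth0" purely as an
example name, and keeps error handling minimal:

	#include <fcntl.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <net/if.h>		/* IFNAMSIZ */
	#include <netmap/netmap.h>

	int main(void)
	{
		struct nmreq req;
		int fd = open("/dev/netmap", O_RDWR);

		if (fd < 0)
			return 1;
		memset(&req, 0, sizeof(req));
		req.nr_version = NETMAP_API;
		strncpy(req.nr_name, "eth0", sizeof(req.nr_name) - 1);
		if (ioctl(fd, NIOCGINFO, &req) == 0)
			printf("%u tx rings x %u slots, %u bytes of shared memory\n",
			       req.nr_tx_rings, req.nr_tx_slots, req.nr_memsize);
		return 0;
	}
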
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ b/include/uapi/netmap/netmap_user.h	2013-03-10 10:08:20.327671428 -0700
@@ -0,0 +1,95 @@ 
+/*
+ * Copyright (C) 2011 Matteo Landi, Luigi Rizzo. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are
+ * met:
+ *
+ *   1. Redistributions of source code must retain the above copyright
+ *      notice, this list of conditions and the following disclaimer.
+ *
+ *   2. Redistributions in binary form must reproduce the above copyright
+ *      notice, this list of conditions and the following disclaimer in the
+ *      documentation and/or other materials provided with the
+ *      distribution.
+ *
+ *   3. Neither the name of the authors nor the names of their contributors
+ *      may be used to endorse or promote products derived from this
+ *      software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY MATTEO LANDI AND CONTRIBUTORS "AS IS" AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL MATTEO LANDI OR CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ * THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+/*
+ * $FreeBSD: head/sys/net/netmap_user.h 231198 2012-02-08 11:43:29Z luigi $
+ * $Id: netmap_user.h 10879 2012-04-12 22:48:59Z luigi $
+ *
+ * This header contains the macros used to manipulate netmap structures
+ * and packets in userspace. See netmap(4) for more information.
+ *
+ * The address of the struct netmap_if, say nifp, is computed from the
+ * value returned from ioctl(.., NIOCREGIF, ...) and the mmap region:
+ *	ioctl(fd, NIOCREGIF, &req);
+ *	mem = mmap(0, ... );
+ *	nifp = NETMAP_IF(mem, req.nr_offset);
+ *		(so simple, we could just do it manually)
+ *
+ * From there:
+ *	struct netmap_ring *NETMAP_TXRING(nifp, index)
+ *	struct netmap_ring *NETMAP_RXRING(nifp, index)
+ *		we can access ring->cur, ring->avail, ring->flags
+ *
+ *	ring->slot[i] gives us the i-th slot (we can directly
+ *		access len, flags, buf_idx)
+ *
+ *	char *buf = NETMAP_BUF(ring, index) returns a pointer to
+ *		the i-th buffer
+ *
+ * Since rings are circular, we have macros to compute the next index
+ *	i = NETMAP_RING_NEXT(ring, i);
+ */
+
+#ifndef _NET_NETMAP_USER_H_
+#define _NET_NETMAP_USER_H_
+
+#define NETMAP_IF(b, o)	(struct netmap_if *)((char *)(b) + (o))
+
+#define NETMAP_TXRING(nifp, index)			\
+	((struct netmap_ring *)((char *)(nifp) +	\
+		(nifp)->ring_ofs[index] ) )
+
+#define NETMAP_RXRING(nifp, index)			\
+	((struct netmap_ring *)((char *)(nifp) +	\
+	    (nifp)->ring_ofs[index + (nifp)->ni_tx_rings + 1] ) )
+
+#define NETMAP_BUF(ring, index)				\
+	((char *)(ring) + (ring)->buf_ofs + ((index)*(ring)->nr_buf_size))
+
+#define NETMAP_BUF_IDX(ring, buf)			\
+	( ((char *)(buf) - ((char *)(ring) + (ring)->buf_ofs) ) / \
+		(ring)->nr_buf_size )
+
+#define	NETMAP_RING_NEXT(r, i)				\
+	((i)+1 == (r)->num_slots ? 0 : (i) + 1 )
+
+#define	NETMAP_RING_FIRST_RESERVED(r)			\
+	( (r)->cur < (r)->reserved ?			\
+	  (r)->cur + (r)->num_slots - (r)->reserved :	\
+	  (r)->cur - (r)->reserved )
+
+/*
+ * Return 1 if the given tx ring is empty.
+ */
+#define NETMAP_TX_RING_EMPTY(r)	((r)->avail >= (r)->num_slots - 1)
+
+#endif /* _NET_NETMAP_USER_H_ */
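
Putting the pieces together, here is a minimal, hedged sketch of the sequence the
comment block above outlines: NIOCREGIF, mmap, NETMAP_TXRING, fill one slot and kick
it with NIOCTXSYNC. The interface name "eth0" and the 60-byte zero frame are
placeholders; a real application would also poll() for room and check errors:

	#include <fcntl.h>
	#include <stdint.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <sys/mman.h>
	#include <sys/types.h>
	#include <net/if.h>
	#include <netmap/netmap.h>
	#include <netmap/netmap_user.h>

	/* Send a single dummy frame on the first hardware TX ring of "eth0".
	 * Returns 0 on success, -1 on any failure. */
	int send_one_dummy_frame(void)
	{
		struct nmreq req;
		struct netmap_if *nifp;
		struct netmap_ring *ring;
		struct netmap_slot *slot;
		char *mem, *buf;
		int fd = open("/dev/netmap", O_RDWR);

		if (fd < 0)
			return -1;
		memset(&req, 0, sizeof(req));
		req.nr_version = NETMAP_API;
		strncpy(req.nr_name, "eth0", sizeof(req.nr_name) - 1);
		if (ioctl(fd, NIOCREGIF, &req) < 0)
			return -1;
		mem = mmap(0, req.nr_memsize, PROT_READ | PROT_WRITE,
			   MAP_SHARED, fd, 0);
		if (mem == MAP_FAILED)
			return -1;
		nifp = NETMAP_IF(mem, req.nr_offset);
		ring = NETMAP_TXRING(nifp, 0);		/* first hardware TX ring */
		if (ring->avail == 0)
			return -1;			/* no room, should poll() */
		slot = &ring->slot[ring->cur];
		buf = NETMAP_BUF(ring, slot->buf_idx);
		memset(buf, 0, 60);			/* placeholder frame contents */
		slot->len = 60;
		ring->cur = NETMAP_RING_NEXT(ring, ring->cur);
		ring->avail--;				/* one fewer slot owned by us */
		return ioctl(fd, NIOCTXSYNC, NULL);
	}
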
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ b/include/netmap/bsd_glue.h	2013-03-10 10:08:20.327671428 -0700
@@ -0,0 +1,263 @@ 
+/*
+ * (C) 2012 Luigi Rizzo - Universita` di Pisa
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *   1. Redistributions of source code must retain the above copyright
+ *      notice, this list of conditions and the following disclaimer.
+ *   2. Redistributions in binary form must reproduce the above copyright
+ *      notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED.  IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ */
+
+/*
+ * glue code to build the netmap bsd code under linux.
+ * Some of these tweaks are generic, some are specific for
+ * character device drivers and network code/device drivers.
+ */
+
+#ifndef _BSD_GLUE_H
+#define _BSD_GLUE_H
+
+/* a set of headers used in netmap */
+#include <linux/version.h>
+#include <linux/if.h>
+#include <linux/list.h>
+#include <linux/mutex.h>
+#include <linux/types.h>
+#include <linux/time.h>
+#include <linux/mm.h>
+#include <linux/poll.h>
+#include <linux/netdevice.h>
+#include <linux/sched.h>
+#include <linux/wait.h>
+#include <linux/miscdevice.h>
+//#include <linux/log2.h>	// ilog2
+#include <linux/etherdevice.h>	// eth_type_trans
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/virtio.h>	// virt_to_phys
+
+#define printf(fmt, arg...)	printk(KERN_ERR fmt, ##arg)
+#define KASSERT(a, b)		BUG_ON(!(a))
+
+/* Type redefinitions. XXX check them */
+typedef	void *			bus_dma_tag_t;
+typedef	void *			bus_dmamap_t;
+typedef	int			bus_size_t;
+typedef	int			bus_dma_segment_t;
+typedef void *			bus_addr_t;
+#define vm_paddr_t		phys_addr_t
+/* XXX the 'off_t' on Linux corresponds to a 'long' */
+#define vm_offset_t		uint32_t
+struct thread;
+
+/* endianness macros/functions */
+#define le16toh			le16_to_cpu
+#define le32toh			le32_to_cpu
+#define le64toh			le64_to_cpu
+#define be64toh			be64_to_cpu
+#define htole32			cpu_to_le32
+#define htole64			cpu_to_le64
+
+#include <linux/jiffies.h>
+#define	time_second	(jiffies_to_msecs(jiffies) / 1000U )
+
+#define bzero(a, len)		memset(a, 0, len)
+#define bcopy(_s, _d, len) 	memcpy(_d, _s, len)
+
+
+// XXX maybe implement it as a proper function somewhere
+// it is important to set s->len before the copy.
+#define	m_devget(_buf, _len, _ofs, _dev, _fn)	( {		\
+	struct sk_buff *s = netdev_alloc_skb(_dev, _len);	\
+	if (s) {						\
+		s->len += _len;					\
+		skb_copy_to_linear_data_offset(s, _ofs, _buf, _len);	\
+		s->protocol = eth_type_trans(s, _dev);		\
+	}							\
+	s; } )
+
+#define	mbuf			sk_buff
+#define	m_nextpkt		next			// chain of mbufs
+#define m_freem(m)		dev_kfree_skb_any(m)	// free a sk_buff
+
+/*
+ * m_copydata() copies from mbuf to buffer following the mbuf chain.
+ * XXX check which linux equivalent we should use to follow fragmented
+ * skbufs.
+ */
+
+//#define m_copydata(m, o, l, b)	skb_copy_bits(m, o, b, l)
+#define m_copydata(m, o, l, b)	skb_copy_from_linear_data_offset(m, o, b, l)
+
+/*
+ * struct ifnet is remapped into struct net_device on linux.
+ * ifnet has an if_softc field pointing to the device-specific struct
+ * (adapter).
+ * On linux the ifnet/net_device is at the beginning of the device-specific
+ * structure, so a pointer to the first field of the ifnet works.
+ * We don't use this in netmap, though.
+ *
+ *	if_xname	name		device name
+ *	if_capabilities	flags		// XXX not used
+ *	if_capenable	priv_flags
+ *		we would use "features" but it is all taken.
+ *		XXX check for conflict in flags use.
+ *
+ *	if_bridge	atalk_ptr	struct nm_bridge (only for VALE ports)
+ *
+ * In netmap we use if_pspare[0] to point to the netmap_adapter,
+ * in linux we have no spares so we overload ax25_ptr, and the detection
+ * of netmap capability relies on a magic value in the area pointed to by that.
+ */
+#define WNA(_ifp)		(_ifp)->ax25_ptr
+
+#define ifnet           	net_device      /* remap */
+#define	if_xname		name		/* field ifnet-> net_device */
+//#define	if_capabilities		flags		/* IFCAP_NETMAP */
+#define	if_capenable		priv_flags	/* IFCAP_NETMAP */
+#define	if_bridge		atalk_ptr	/* remap, only for VALE ports */
+#define ifunit_ref(_x)		dev_get_by_name(&init_net, _x);
+#define if_rele(ifp)		dev_put(ifp)
+#define CURVNET_SET(x)
+#define CURVNET_RESTORE(x)
+
+
+/*
+ * We use spin_lock_irqsave() because we use the lock in the
+ * (hard) interrupt context.
+ */
+typedef struct {
+        spinlock_t      sl;
+        ulong           flags;
+} safe_spinlock_t;
+
+static inline void mtx_lock(safe_spinlock_t *m)
+{
+        spin_lock_irqsave(&(m->sl), m->flags);
+}
+
+static inline void mtx_unlock(safe_spinlock_t *m)
+{
+	ulong flags = ACCESS_ONCE(m->flags);
+        spin_unlock_irqrestore(&(m->sl), flags);
+}
+
+#define mtx_init(a, b, c, d)	spin_lock_init(&((a)->sl))
+#define mtx_destroy(a)		// XXX spin_lock_destroy(a)
+
+/* use volatile to fix a probable compiler error on 2.6.25 */
+#define malloc(_size, type, flags)                      \
+        ({ volatile int _v = _size; kmalloc(_v, GFP_ATOMIC | __GFP_ZERO); })
+
+#define free(a, t)	kfree(a)
+
+// XXX do we need __GFP_ZERO ?
+// XXX do we need GFP_DMA for slots ?
+// http://www.mjmwired.net/kernel/Documentation/DMA-API.txt
+
+#define contigmalloc(sz, ty, flags, a, b, pgsz, c)		\
+	(char *) __get_free_pages(GFP_ATOMIC |  __GFP_ZERO,	\
+		    ilog2(roundup_pow_of_two((sz)/PAGE_SIZE)))
+#define contigfree(va, sz, ty)	free_pages((unsigned long)va,	\
+		    ilog2(roundup_pow_of_two(sz)/PAGE_SIZE))
+
+#define vtophys		virt_to_phys
+
+/*--- selrecord and friends ---*/
+/* wake_up() or wake_up_interruptible() ? */
+#define	selwakeuppri(sw, pri)	wake_up(sw)
+#define selrecord(x, y)		poll_wait((struct file *)x, y, pwait)
+#define knlist_destroy(x)	// XXX todo
+
+/* we use tsleep/wakeup to sleep a bit. */
+#define	tsleep(a, b, c, t)	msleep(10)	// XXX
+#define	wakeup(sw)				// XXX double check
+#define microtime		do_gettimeofday
+
+
+/*
+ * The following trick is to map a struct cdev into a struct miscdevice
+ */
+#define	cdev			miscdevice
+
+
+/*
+ * XXX to complete - the dmamap interface
+ */
+#define	BUS_DMA_NOWAIT	0
+#define	bus_dmamap_load(_1, _2, _3, _4, _5, _6, _7)
+#define	bus_dmamap_unload(_1, _2)
+
+typedef int (d_mmap_t)(struct file *f, struct vm_area_struct *vma);
+typedef unsigned int (d_poll_t)(struct file * file, struct poll_table_struct *pwait);
+
+/*
+ * make_dev will set an error and return the first argument.
+ * This relies on the availability of the 'error' local variable.
+ */
+#define make_dev(_cdev, _zero, _uid, _gid, _perm, _name)	\
+	({error = misc_register(_cdev);				\
+	D("run mknod /dev/%s c %d %d # error %d",		\
+	    (_cdev)->name, MISC_MAJOR, (_cdev)->minor, error);	\
+	 _cdev; } )
+#define destroy_dev(_cdev)	misc_deregister(_cdev)
+
+/*--- sysctl API ----*/
+/*
+ * linux: sysctls are mapped into module parameters under /sys/module/
+ * windows: they are emulated via get/setsockopt
+ */
+#define CTLFLAG_RD              1
+#define CTLFLAG_RW              2
+
+struct sysctl_oid;
+struct sysctl_req;
+
+
+#define SYSCTL_DECL(_1)
+#define SYSCTL_OID(_1, _2, _3, _4, _5, _6, _7, _8)
+#define SYSCTL_NODE(_1, _2, _3, _4, _5, _6)
+#define _SYSCTL_BASE(_name, _var, _ty, _perm)			\
+		module_param_named(_name, *(_var), _ty,         \
+			( (_perm) == CTLFLAG_RD) ? 0444: 0644 )
+
+#define SYSCTL_PROC(_base, _oid, _name, _mode, _var, _val, _desc, _a, _b)
+
+#define SYSCTL_INT(_base, _oid, _name, _mode, _var, _val, _desc)        \
+        _SYSCTL_BASE(_name, _var, int, _mode)
+
+#define SYSCTL_LONG(_base, _oid, _name, _mode, _var, _val, _desc)       \
+        _SYSCTL_BASE(_name, _var, long, _mode)
+
+#define SYSCTL_ULONG(_base, _oid, _name, _mode, _var, _val, _desc)      \
+        _SYSCTL_BASE(_name, _var, ulong, _mode)
+
+#define SYSCTL_UINT(_base, _oid, _name, _mode, _var, _val, _desc)       \
+         _SYSCTL_BASE(_name, _var, uint, _mode)
+
+#define TUNABLE_INT(_name, _ptr)
+
+#define SYSCTL_VNET_PROC                SYSCTL_PROC
+#define SYSCTL_VNET_INT                 SYSCTL_INT
+
+#define SYSCTL_HANDLER_ARGS             \
+        struct sysctl_oid *oidp, void *arg1, int arg2, struct sysctl_req *req
+int sysctl_handle_int(SYSCTL_HANDLER_ARGS);
+int sysctl_handle_long(SYSCTL_HANDLER_ARGS);
+
+#endif /* _BSD_GLUE_H */
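
To make the sysctl emulation above concrete: on Linux a SYSCTL_INT() declaration
ends up as a module parameter, so the knob appears under the module's
/sys/module/ parameters directory. A rough expansion, with names invented purely
for illustration, would be:

	static int example_verbose;	/* hypothetical knob */

	/* SYSCTL_INT(_net_example, OID_AUTO, verbose, CTLFLAG_RW,
	 *            &example_verbose, 0, "debug verbosity");
	 * expands, via _SYSCTL_BASE(), to roughly: */
	module_param_named(verbose, example_verbose, int, 0644);
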
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ b/include/netmap/netmap_kern.h	2013-03-10 11:30:37.253528570 -0700
@@ -0,0 +1,474 @@ 
+/*
+ * Copyright (C) 2011-2012 Matteo Landi, Luigi Rizzo. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *   1. Redistributions of source code must retain the above copyright
+ *      notice, this list of conditions and the following disclaimer.
+ *   2. Redistributions in binary form must reproduce the above copyright
+ *      notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED.  IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ */
+
+/*
+ * $FreeBSD: head/sys/dev/netmap/netmap_kern.h 238985 2012-08-02 11:59:43Z luigi $
+ * $Id: netmap_kern.h 12089 2013-02-15 01:20:25Z luigi $
+ *
+ * The header contains the definitions of constants and function
+ * prototypes used only in kernelspace.
+ */
+
+#ifndef _NET_NETMAP_KERN_H_
+#define _NET_NETMAP_KERN_H_
+
+#define NETMAP_MEM2    // use the new memory allocator
+
+#if defined(__FreeBSD__)
+#define likely(x)	__builtin_expect(!!(x), 1)
+#define unlikely(x)	__builtin_expect(!!(x), 0)
+
+#define	NM_LOCK_T	struct mtx
+#define	NM_SELINFO_T	struct selinfo
+#define	MBUF_LEN(m)	((m)->m_pkthdr.len)
+#define	NM_SEND_UP(ifp, m)	((ifp)->if_input)(ifp, m)
+#elif defined (linux)
+
+#define	NM_LOCK_T	safe_spinlock_t	// see bsd_glue.h
+#define	NM_SELINFO_T	wait_queue_head_t
+#define	MBUF_LEN(m)	((m)->len)
+#define	NM_SEND_UP(ifp, m)	netif_rx(m)
+
+#ifndef DEV_NETMAP
+#define DEV_NETMAP
+#endif
+
+/*
+ * IFF_NETMAP goes into net_device's priv_flags (if_capenable).
+ * Map it to BSD style cap until this driver is cleaned up.
+ */
+#define IFCAP_NETMAP	IFF_NETMAP
+
+
+#elif defined (__APPLE__)
+#warning apple support is incomplete.
+#define likely(x)	__builtin_expect(!!(x), 1)
+#define unlikely(x)	__builtin_expect(!!(x), 0)
+#define	NM_LOCK_T	IOLock *
+#define	NM_SELINFO_T	struct selinfo
+#define	MBUF_LEN(m)	((m)->m_pkthdr.len)
+#define	NM_SEND_UP(ifp, m)	((ifp)->if_input)(ifp, m)
+
+#else
+#error unsupported platform
+#endif
+
+#define ND(format, ...)
+#define D(format, ...)						\
+	do {							\
+		struct timeval __xxts;				\
+		microtime(&__xxts);				\
+		printf("%03d.%06d %s [%d] " format "\n",	\
+		(int)__xxts.tv_sec % 1000, (int)__xxts.tv_usec,	\
+		__FUNCTION__, __LINE__, ##__VA_ARGS__);		\
+	} while (0)
+
+/* rate limited, lps indicates how many per second */
+#define RD(lps, format, ...)					\
+	do {							\
+		static int t0, __cnt;				\
+		if (t0 != time_second) {			\
+			t0 = time_second;			\
+			__cnt = 0;				\
+		}						\
+		if (__cnt++ < lps)				\
+			D(format, ##__VA_ARGS__);		\
+	} while (0)
+
+struct netmap_adapter;
+
+/*
+ * private, kernel view of a ring. Keeps track of the status of
+ * a ring across system calls.
+ *
+ *	nr_hwcur	index of the next buffer to refill.
+ *			It corresponds to ring->cur - ring->reserved
+ *
+ *	nr_hwavail	the number of slots "owned" by userspace.
+ *			nr_hwavail =:= ring->avail + ring->reserved
+ *
+ * The indexes in the NIC and netmap rings are offset by nkr_hwofs slots.
+ * This is so that, on a reset, buffers owned by userspace are not
+ * modified by the kernel. In particular:
+ * RX rings: the next empty buffer (hwcur + hwavail + hwofs) coincides with
+ * 	the next empty buffer as known by the hardware (next_to_check or so).
+ * TX rings: hwcur + hwofs coincides with next_to_send
+ *
+ * For received packets, slot->flags is set to nkr_slot_flags
+ * so we can provide a proper initial value (e.g. set NS_FORWARD
+ * when operating in 'transparent' mode).
+ */
+struct netmap_kring {
+	struct netmap_ring *ring;
+	u_int nr_hwcur;
+	int nr_hwavail;
+	u_int nr_kflags;	/* private driver flags */
+#define NKR_PENDINTR	0x1	// Pending interrupt.
+	u_int nkr_num_slots;
+
+	uint16_t	nkr_slot_flags;	/* initial value for flags */
+	int	nkr_hwofs;	/* offset between NIC and netmap ring */
+	struct netmap_adapter *na;
+	NM_SELINFO_T si;	/* poll/select wait queue */
+	NM_LOCK_T q_lock;	/* used if no device lock available */
+} __attribute__((__aligned__(64)));
+
+/*
+ * This struct extends the 'struct adapter' (or
+ * equivalent) device descriptor. It contains all fields needed to
+ * support netmap operation.
+ */
+struct netmap_adapter {
+	/*
+	 * On linux we do not have a good way to tell if an interface
+	 * is netmap-capable. So we use the following trick:
+	 * NA(ifp) points here, and the first entry (which hopefully
+	 * always exists and is at least 32 bits) contains a magic
+	 * value which we can use to detect that the interface is good.
+	 */
+	uint32_t magic;
+	uint32_t na_flags;	/* future place for IFCAP_NETMAP */
+#define NAF_SKIP_INTR	1	/* use the regular interrupt handler.
+				 * useful during initialization
+				 */
+	int refcount; /* number of user-space descriptors using this
+			 interface, which is equal to the number of
+			 struct netmap_if objs in the mapped region. */
+	/*
+	 * The selwakeup in the interrupt thread can use per-ring
+	 * and/or global wait queues. We track how many clients
+	 * of each type we have so we can optimize the drivers,
+	 * and especially avoid huge contention on the locks.
+	 */
+	int na_single;	/* threads attached to a single hw queue */
+	int na_multi;	/* threads attached to multiple hw queues */
+
+	int separate_locks; /* set if the interface supports different
+			       locks for rx, tx and core. */
+
+	u_int num_rx_rings; /* number of adapter receive rings */
+	u_int num_tx_rings; /* number of adapter transmit rings */
+
+	u_int num_tx_desc; /* number of descriptors in each queue */
+	u_int num_rx_desc;
+
+	/* tx_rings and rx_rings are private but allocated
+	 * as a contiguous chunk of memory. Each array has
+	 * N+1 entries, for the adapter queues and for the host queue.
+	 */
+	struct netmap_kring *tx_rings; /* array of TX rings. */
+	struct netmap_kring *rx_rings; /* array of RX rings. */
+
+	NM_SELINFO_T tx_si, rx_si;	/* global wait queues */
+
+	/* copy of if_qflush and if_transmit pointers, to intercept
+	 * packets from the network stack when netmap is active.
+	 */
+	int     (*if_transmit)(struct ifnet *, struct mbuf *);
+
+	/* references to the ifnet and device routines, used by
+	 * the generic netmap functions.
+	 */
+	struct ifnet *ifp; /* adapter is ifp->if_softc */
+
+	NM_LOCK_T core_lock;	/* used if no device lock available */
+
+	int (*nm_register)(struct ifnet *, int onoff);
+	void (*nm_lock)(struct ifnet *, int what, u_int ringid);
+	int (*nm_txsync)(struct ifnet *, u_int ring, int lock);
+	int (*nm_rxsync)(struct ifnet *, u_int ring, int lock);
+	/* return configuration information */
+	int (*nm_config)(struct ifnet *, u_int *txr, u_int *txd,
+					u_int *rxr, u_int *rxd);
+
+	int bdg_port;
+#ifdef linux
+	struct net_device_ops nm_ndo;
+	int if_refcount;	// XXX additions for bridge
+#endif /* linux */
+};
+
+/*
+ * The combination of "enable" (ifp->if_capenable & IFCAP_NETMAP)
+ * and refcount gives the status of the interface, namely:
+ *
+ *	enable	refcount	Status
+ *
+ *	FALSE	0		normal operation
+ *	FALSE	!= 0		-- (impossible)
+ *	TRUE	1		netmap mode
+ *	TRUE	0		being deleted.
+ */
+
+#define NETMAP_DELETING(_na)  (  ((_na)->refcount == 0) &&	\
+	( (_na)->ifp->if_capenable & IFCAP_NETMAP) )
+
+/*
+ * parameters for (*nm_lock)(adapter, what, index)
+ */
+enum {
+	NETMAP_NO_LOCK = 0,
+	NETMAP_CORE_LOCK, NETMAP_CORE_UNLOCK,
+	NETMAP_TX_LOCK, NETMAP_TX_UNLOCK,
+	NETMAP_RX_LOCK, NETMAP_RX_UNLOCK,
+#ifdef __FreeBSD__
+#define	NETMAP_REG_LOCK		NETMAP_CORE_LOCK
+#define	NETMAP_REG_UNLOCK	NETMAP_CORE_UNLOCK
+#else
+	NETMAP_REG_LOCK, NETMAP_REG_UNLOCK
+#endif
+};
+
+/*
+ * The following are support routines used by individual drivers to
+ * support netmap operation.
+ *
+ * netmap_attach() initializes a struct netmap_adapter, allocating the
+ * 	struct netmap_ring's and the struct selinfo.
+ *
+ * netmap_detach() frees the memory allocated by netmap_attach().
+ *
+ * netmap_start() replaces the if_transmit routine of the interface,
+ *	and is used to intercept packets coming from the stack.
+ *
+ * netmap_load_map/netmap_reload_map are helper routines to set/reset
+ *	the dmamap for a packet buffer
+ *
+ * netmap_reset() is a helper routine to be called in the driver
+ *	when reinitializing a ring.
+ */
+int netmap_attach(struct netmap_adapter *, int);
+void netmap_detach(struct ifnet *);
+int netmap_start(struct ifnet *, struct mbuf *);
+enum txrx { NR_RX = 0, NR_TX = 1 };
+struct netmap_slot *netmap_reset(struct netmap_adapter *na,
+	enum txrx tx, int n, u_int new_cur);
+int netmap_ring_reinit(struct netmap_kring *);
+
+extern u_int netmap_buf_size;
+#define NETMAP_BUF_SIZE	netmap_buf_size
+extern int netmap_mitigate;
+extern int netmap_no_pendintr;
+extern u_int netmap_total_buffers;
+extern char *netmap_buffer_base;
+extern int netmap_verbose;	// XXX debugging
+enum {                                  /* verbose flags */
+	NM_VERB_ON = 1,                 /* generic verbose */
+	NM_VERB_HOST = 0x2,             /* verbose host stack */
+	NM_VERB_RXSYNC = 0x10,          /* verbose on rxsync/txsync */
+	NM_VERB_TXSYNC = 0x20,
+	NM_VERB_RXINTR = 0x100,         /* verbose on rx/tx intr (driver) */
+	NM_VERB_TXINTR = 0x200,
+	NM_VERB_NIC_RXSYNC = 0x1000,    /* verbose on rx/tx intr (driver) */
+	NM_VERB_NIC_TXSYNC = 0x2000,
+};
+
+/*
+ * NA returns a pointer to the struct netmap_adapter from the ifp,
+ * WNA is used to write it.
+ */
+#ifndef WNA
+#define	WNA(_ifp)	(_ifp)->if_pspare[0]
+#endif
+#define	NA(_ifp)	((struct netmap_adapter *)WNA(_ifp))
+
+/*
+ * Macros to determine if an interface is netmap capable or netmap enabled.
+ * See the magic field in struct netmap_adapter.
+ */
+#ifdef __FreeBSD__
+/*
+ * on FreeBSD just use if_capabilities and if_capenable.
+ */
+#define NETMAP_CAPABLE(ifp)	(NA(ifp) &&		\
+	(ifp)->if_capabilities & IFCAP_NETMAP )
+
+#define	NETMAP_SET_CAPABLE(ifp)				\
+	(ifp)->if_capabilities |= IFCAP_NETMAP
+
+#else	/* linux */
+
+/*
+ * on linux:
+ * we check if NA(ifp) is set and its first element has a related
+ * magic value. The capenable is within the struct netmap_adapter.
+ */
+#define	NETMAP_MAGIC	0x52697a7a
+
+#define NETMAP_CAPABLE(ifp)	(NA(ifp) &&		\
+	((uint32_t)(uintptr_t)NA(ifp) ^ NA(ifp)->magic) == NETMAP_MAGIC )
+
+#define	NETMAP_SET_CAPABLE(ifp)				\
+	NA(ifp)->magic = ((uint32_t)(uintptr_t)NA(ifp)) ^ NETMAP_MAGIC
+
+#endif	/* linux */
+
+#ifdef __FreeBSD__
+/* Callback invoked by the dma machinery after a successful dmamap_load */
+static void netmap_dmamap_cb(__unused void *arg,
+    __unused bus_dma_segment_t * segs, __unused int nseg, __unused int error)
+{
+}
+
+/* bus_dmamap_load wrapper: call aforementioned function if map != NULL.
+ * XXX can we do it without a callback ?
+ */
+static inline void
+netmap_load_map(bus_dma_tag_t tag, bus_dmamap_t map, void *buf)
+{
+	if (map)
+		bus_dmamap_load(tag, map, buf, NETMAP_BUF_SIZE,
+		    netmap_dmamap_cb, NULL, BUS_DMA_NOWAIT);
+}
+
+/* update the map when a buffer changes. */
+static inline void
+netmap_reload_map(bus_dma_tag_t tag, bus_dmamap_t map, void *buf)
+{
+	if (map) {
+		bus_dmamap_unload(tag, map);
+		bus_dmamap_load(tag, map, buf, NETMAP_BUF_SIZE,
+		    netmap_dmamap_cb, NULL, BUS_DMA_NOWAIT);
+	}
+}
+#else /* linux */
+
+/*
+ * XXX How do we redefine these functions:
+ *
+ * on linux we need
+ *	dma_map_single(&pdev->dev, virt_addr, len, direction)
+ *	dma_unmap_single(&adapter->pdev->dev, phys_addr, len, direction)
+ * The len can be implicit (on netmap it is NETMAP_BUF_SIZE),
+ * unfortunately the direction is not, so we need to change
+ * something to have a cross API
+ */
+#define netmap_load_map(_t, _m, _b)
+#define netmap_reload_map(_t, _m, _b)
+#if 0
+	struct e1000_buffer *buffer_info =  &tx_ring->buffer_info[l];
+	/* set time_stamp *before* dma to help avoid a possible race */
+	buffer_info->time_stamp = jiffies;
+	buffer_info->mapped_as_page = false;
+	buffer_info->length = len;
+	//buffer_info->next_to_watch = l;
+	/* reload dma map */
+	dma_unmap_single(&adapter->pdev->dev, buffer_info->dma,
+			NETMAP_BUF_SIZE, DMA_TO_DEVICE);
+	buffer_info->dma = dma_map_single(&adapter->pdev->dev,
+			addr, NETMAP_BUF_SIZE, DMA_TO_DEVICE);
+
+	if (dma_mapping_error(&adapter->pdev->dev, buffer_info->dma)) {
+		D("dma mapping error");
+		/* goto dma_error; See e1000_put_txbuf() */
+		/* XXX reset */
+	}
+	tx_desc->buffer_addr = htole64(buffer_info->dma); //XXX
+
+#endif
+
+/*
+ * The bus_dmamap_sync() can be one of wmb() or rmb() depending on direction.
+ */
+#define bus_dmamap_sync(_a, _b, _c)
+
+#endif /* linux */
+
+/*
+ * functions to map NIC to KRING indexes (n2k) and vice versa (k2n)
+ */
+static inline int
+netmap_idx_n2k(struct netmap_kring *kr, int idx)
+{
+	int n = kr->nkr_num_slots;
+	idx += kr->nkr_hwofs;
+	if (idx < 0)
+		return idx + n;
+	else if (idx < n)
+		return idx;
+	else
+		return idx - n;
+}
+
+
+static inline int
+netmap_idx_k2n(struct netmap_kring *kr, int idx)
+{
+	int n = kr->nkr_num_slots;
+	idx -= kr->nkr_hwofs;
+	if (idx < 0)
+		return idx + n;
+	else if (idx < n)
+		return idx;
+	else
+		return idx - n;
+}
+
+
+#ifdef NETMAP_MEM2
+/* Entries of the look-up table. */
+struct lut_entry {
+	void *vaddr;		/* virtual address. */
+	vm_paddr_t paddr;	/* physical address. */
+};
+
+struct netmap_obj_pool;
+extern struct lut_entry *netmap_buffer_lut;
+#define NMB_VA(i)	(netmap_buffer_lut[i].vaddr)
+#define NMB_PA(i)	(netmap_buffer_lut[i].paddr)
+#else /* NETMAP_MEM1 */
+#define NMB_VA(i)	(netmap_buffer_base + (i * NETMAP_BUF_SIZE) )
+#endif /* NETMAP_MEM2 */
+
+/*
+ * NMB returns the virtual address of a buffer (buffer 0 on bad index)
+ * PNMB also fills the physical address
+ */
+static inline void *
+NMB(struct netmap_slot *slot)
+{
+	uint32_t i = slot->buf_idx;
+	return (unlikely(i >= netmap_total_buffers)) ?  NMB_VA(0) : NMB_VA(i);
+}
+
+static inline void *
+PNMB(struct netmap_slot *slot, uint64_t *pp)
+{
+	uint32_t i = slot->buf_idx;
+	void *ret = (i >= netmap_total_buffers) ? NMB_VA(0) : NMB_VA(i);
+#ifdef NETMAP_MEM2
+	*pp = (i >= netmap_total_buffers) ? NMB_PA(0) : NMB_PA(i);
+#else
+	*pp = vtophys(ret);
+#endif
+	return ret;
+}
+
+/* default functions to handle rx/tx interrupts */
+int netmap_rx_irq(struct ifnet *, int, int *);
+#define netmap_tx_irq(_n, _q) netmap_rx_irq(_n, _q, NULL)
+
+extern int netmap_copy;
+#endif /* _NET_NETMAP_KERN_H_ */
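
As an example of how a driver consumes the declarations above (hedged, the names
are hypothetical): an RX interrupt handler can check NETMAP_CAPABLE() and then hand
the event to netmap_rx_irq(), which wakes the threads waiting on that ring:

	/* Hypothetical driver hook, for illustration only. */
	static int example_handle_rx_intr(struct ifnet *ifp, int queue)
	{
		int work_done;

		if (!NETMAP_CAPABLE(ifp))
			return 0;	/* not netmap-capable, use the normal path */
		/* Let netmap decide whether it owns this queue; wakes any thread
		 * sleeping on the ring's wait queue and returns nonzero if the
		 * interrupt was consumed by netmap. */
		return netmap_rx_irq(ifp, queue, &work_done);
	}
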
--- a/include/uapi/linux/if.h	2013-02-16 09:10:41.000000000 -0800
+++ b/include/uapi/linux/if.h	2013-03-10 11:26:44.500548075 -0700
@@ -83,6 +83,7 @@ 
 #define IFF_SUPP_NOFCS	0x80000		/* device supports sending custom FCS */
 #define IFF_LIVE_ADDR_CHANGE 0x100000	/* device supports hardware address
 					 * change when it's running */
+#define IFF_NETMAP	0x200000	/* device used with netmap */
 
 
 #define IF_GET_IFACE	0x0001		/* for querying only */