
[net,v2] veth: xdp: use head instead of hard_start

Message ID: 20200331060641.79999-1-maowenan@huawei.com
State: Changes Requested
Delegated to: David Miller

Commit Message

maowenan March 31, 2020, 6:06 a.m. UTC
xdp.data_hard_start is equal to the first address of
struct xdp_frame, as set up in convert_to_xdp_frame().
But the pointer hard_start in veth_xdp_rcv_one() is
offset 32 bytes from the frame, so head should be used
instead of hard_start to set xdp.data_hard_start.
Otherwise, if a BPF program calls a helper function
such as bpf_xdp_adjust_head(), it will compute
xdp_frame_end from the wrong base address.

Fixes: 9fc8d518d9d5 ("veth: Handle xdp_frames in xdp napi ring")
Signed-off-by: Mao Wenan <maowenan@huawei.com>
---
 v2: add Fixes tag, as well as commit log.
 drivers/net/veth.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
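
For context, the layout this commit message describes comes from convert_to_xdp_frame(), which stores the struct xdp_frame in the top of the packet's own headroom, so the frame address and the true data_hard_start coincide. A condensed sketch (paraphrased from the kernel's include/net/xdp.h of this era; zero-copy and metadata handling trimmed):

	/* Convert an xdp_buff into an xdp_frame by storing the frame
	 * struct in the top of the packet's headroom, at
	 * data_hard_start itself.
	 */
	struct xdp_frame *convert_to_xdp_frame(struct xdp_buff *xdp)
	{
		struct xdp_frame *xdp_frame;
		int headroom = xdp->data - xdp->data_hard_start;

		/* Assure headroom is available for storing the frame info */
		if (unlikely(headroom < (int)sizeof(*xdp_frame)))
			return NULL;

		/* Store info in top of packet: after this, the frame
		 * pointer and data_hard_start are one and the same address.
		 */
		xdp_frame = xdp->data_hard_start;
		xdp_frame->data     = xdp->data;
		xdp_frame->len      = xdp->data_end - xdp->data;
		xdp_frame->headroom = headroom - sizeof(*xdp_frame);
		xdp_frame->mem      = xdp->rxq->mem;

		return xdp_frame;
	}

Hence frame->data - frame->headroom in veth_xdp_rcv_one() lands just past the embedded xdp_frame, i.e. sizeof(struct xdp_frame) (the 32 bytes the commit message refers to) beyond the real data_hard_start.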

Comments

Toshiaki Makita March 31, 2020, 6:16 a.m. UTC | #1
On 2020/03/31 15:06, Mao Wenan wrote:
> xdp.data_hard_start is equal to the first address of
> struct xdp_frame, as set up in convert_to_xdp_frame().
> But the pointer hard_start in veth_xdp_rcv_one() is
> offset 32 bytes from the frame, so head should be used
> instead of hard_start to set xdp.data_hard_start.
> Otherwise, if a BPF program calls a helper function
> such as bpf_xdp_adjust_head(), it will compute
> xdp_frame_end from the wrong base address.

I think you should discuss this more with Jesper before
submitting a v2.
He does not want this to be included now due to the merge conflict risk.
Basically I agree with him that we don't need to hurry with this fix.

Toshiaki Makita

Jesper Dangaard Brouer April 1, 2020, 4:15 p.m. UTC | #2
On Tue, 31 Mar 2020 15:16:22 +0900
Toshiaki Makita <toshiaki.makita1@gmail.com> wrote:

> On 2020/03/31 15:06, Mao Wenan wrote:
> > [...]
> 
> I think you should discuss this more with Jesper before
> submitting a v2.
> He does not want this to be included now due to the merge conflict risk.
> Basically I agree with him that we don't need to hurry with this fix.
> 
> Toshiaki Makita

Below is the patch that I have in my queue.  I've added a Reported-by
tag to give you some credit, even though I already had plans to fix
this as part of my XDP frame_sz work.


[PATCH RFC net-next] veth: adjust hard_start offset on redirect XDP frames

When native XDP redirects into a veth device, the frame arrives in the
xdp_frame structure. It is then processed in veth_xdp_rcv_one(),
which can run a new XDP bpf_prog on the packet. Doing so requires
converting the xdp_frame to an xdp_buff, but the tricky part is that
the xdp_frame memory area is located at the top (data_hard_start) of
the memory area that the xdp_buff will point into.

The current code tried to protect the xdp_frame area by assigning
xdp_buff.data_hard_start past this memory. This results in 32 bytes
less headroom to expand into via the BPF-helper bpf_xdp_adjust_head().

This protection step is actually not needed, because the BPF-helper
bpf_xdp_adjust_head() already reserves this area and doesn't allow
the BPF-prog to expand into it. Thus, it is safe to point
data_hard_start directly at the xdp_frame memory area.

Cc: Toshiaki Makita <makita.toshiaki@lab.ntt.co.jp>
Fixes: 9fc8d518d9d5 ("veth: Handle xdp_frames in xdp napi ring")
Reported-by: Mao Wenan <maowenan@huawei.com>
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
---
 drivers/net/veth.c |    8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index 8cdc4415fa70..2edc04a8ab8e 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -493,13 +493,15 @@ static struct sk_buff *veth_xdp_rcv_one(struct veth_rq *rq,
 					struct veth_xdp_tx_bq *bq)
 {
 	void *hard_start = frame->data - frame->headroom;
-	void *head = hard_start - sizeof(struct xdp_frame);
 	int len = frame->len, delta = 0;
 	struct xdp_frame orig_frame;
 	struct bpf_prog *xdp_prog;
 	unsigned int headroom;
 	struct sk_buff *skb;
 
+	/* bpf_xdp_adjust_head() assures BPF cannot access xdp_frame area */
+	hard_start -= sizeof(struct xdp_frame);
+
 	rcu_read_lock();
 	xdp_prog = rcu_dereference(rq->xdp_prog);
 	if (likely(xdp_prog)) {
@@ -521,7 +523,6 @@ static struct sk_buff *veth_xdp_rcv_one(struct veth_rq *rq,
 			break;
 		case XDP_TX:
 			orig_frame = *frame;
-			xdp.data_hard_start = head;
 			xdp.rxq->mem = frame->mem;
 			if (unlikely(veth_xdp_tx(rq->dev, &xdp, bq) < 0)) {
 				trace_xdp_exception(rq->dev, xdp_prog, act);
@@ -533,7 +534,6 @@ static struct sk_buff *veth_xdp_rcv_one(struct veth_rq *rq,
 			goto xdp_xmit;
 		case XDP_REDIRECT:
 			orig_frame = *frame;
-			xdp.data_hard_start = head;
 			xdp.rxq->mem = frame->mem;
 			if (xdp_do_redirect(rq->dev, &xdp, xdp_prog)) {
 				frame = &orig_frame;
@@ -555,7 +555,7 @@ static struct sk_buff *veth_xdp_rcv_one(struct veth_rq *rq,
 	rcu_read_unlock();
 
 	headroom = sizeof(struct xdp_frame) + frame->headroom - delta;
-	skb = veth_build_skb(head, headroom, len, 0);
+	skb = veth_build_skb(hard_start, headroom, len, 0);
 	if (!skb) {
 		xdp_return_frame(frame);
 		goto err;
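
The safety argument above hinges on a guard inside the helper itself: bpf_xdp_adjust_head() refuses to move xdp->data into the first sizeof(struct xdp_frame) bytes after data_hard_start. A condensed sketch of that check (close to, but not verbatim from, net/core/filter.c of this era):

	BPF_CALL_2(bpf_xdp_adjust_head, struct xdp_buff *, xdp, int, offset)
	{
		/* The first sizeof(struct xdp_frame) bytes past
		 * data_hard_start are reserved for the embedded
		 * xdp_frame; a BPF program may not grow the packet
		 * head into them.
		 */
		void *xdp_frame_end = xdp->data_hard_start +
				      sizeof(struct xdp_frame);
		void *data = xdp->data + offset;

		if (unlikely(data < xdp_frame_end ||
			     data > xdp->data_end - ETH_HLEN))
			return -EINVAL;

		xdp->data = data;
		return 0;
	}

Because this reservation is enforced centrally in the helper, the driver no longer needs to offset data_hard_start itself, which is exactly the extra step the patch above removes.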
Toshiaki Makita April 2, 2020, 12:47 a.m. UTC | #3
On 2020/04/02 1:15, Jesper Dangaard Brouer wrote:
...
> [PATCH RFC net-next] veth: adjust hard_start offset on redirect XDP frames
> 
> [...]
> 
> Cc: Toshiaki Makita <makita.toshiaki@lab.ntt.co.jp>

FYI: This mail address is deprecated.

> Fixes: 9fc8d518d9d5 ("veth: Handle xdp_frames in xdp napi ring")
> Reported-by: Mao Wenan <maowenan@huawei.com>
> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>

FWIW,

Acked-by: Toshiaki Makita <toshiaki.makita1@gmail.com>
maowenan April 2, 2020, 1:23 a.m. UTC | #4
On 2020/4/2 0:15, Jesper Dangaard Brouer wrote:
> [...]
> 
> Below is the patch that I have in my queue.  I've added a Reported-by
> tag to give you some credit, even though I already had plans to fix
> this as part of my XDP frame_sz work.
Thanks for the Reported-by.
Actually, I found this fault while reviewing the veth code two weeks
ago, when I was debugging another bpf_warn_invalid_xdp_action warning
associated with the veth module, and there was no chance to send the
fix patch any sooner.

Jesper Dangaard Brouer April 2, 2020, 9:06 a.m. UTC | #5
On Thu, 2 Apr 2020 09:47:03 +0900
Toshiaki Makita <toshiaki.makita1@gmail.com> wrote:

> On 2020/04/02 1:15, Jesper Dangaard Brouer wrote:
> > [...]
> > Cc: Toshiaki Makita <makita.toshiaki@lab.ntt.co.jp>  
> 
> FYI: This mail address is deprecated.
> 
> > Fixes: 9fc8d518d9d5 ("veth: Handle xdp_frames in xdp napi ring")
> > Reported-by: Mao Wenan <maowenan@huawei.com>
> > Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>  
> 
> FWIW,
> 
> Acked-by: Toshiaki Makita <toshiaki.makita1@gmail.com>

Thanks.

I have updated your email and added your ack in my patchset.  I will
submit this officially once net-next opens up again[1], as part of my
larger patchset introducing XDP frame_sz.

[1] http://vger.kernel.org/~davem/net-next.html
Alexei Starovoitov April 2, 2020, 3:40 p.m. UTC | #6
On Thu, Apr 2, 2020 at 2:06 AM Jesper Dangaard Brouer <brouer@redhat.com> wrote:
> [...]
>
> I have updated your email and added your ack in my patchset.  I will
> submit this officially once net-next opens up again[1], as part of my
> larger patchset introducing XDP frame_sz.

It looks like a bug fix to me.
The way I read it, the behavior of bpf_xdp_adjust_head() is a bit
buggy with a veth netdev, so why wait?
Jesper Dangaard Brouer April 3, 2020, 7:58 a.m. UTC | #7
On Thu, 2 Apr 2020 08:40:23 -0700
Alexei Starovoitov <alexei.starovoitov@gmail.com> wrote:

> [...]
>
> It looks like a bug fix to me.
> The way I read it, the behavior of bpf_xdp_adjust_head() is a bit
> buggy with a veth netdev, so why wait?

I want to wait to ease your life as maintainer. This is part of a
larger patchset (for XDP frame_sz), and the next patch touches the
same code path and thus depends on these code adjustments.  If we
apply them in bpf vs bpf-next, then you/we will have to handle merge
conflicts.  The severity of the "fix" is really low; it only means
32 bytes less headroom (which I doubt anyone is using).
Alexei Starovoitov April 3, 2020, 9:12 p.m. UTC | #8
On Fri, Apr 3, 2020 at 12:59 AM Jesper Dangaard Brouer
<brouer@redhat.com> wrote:
>
> I want to wait to ease your life as maintainer. This is part of a
> larger patchset (for XDP frame_sz), and the next patch touches the
> same code path and thus depends on these code adjustments.  If we
> apply them in bpf vs bpf-next, then you/we will have to handle merge
> conflicts.  The severity of the "fix" is really low; it only means
> 32 bytes less headroom (which I doubt anyone is using).

Ahh. Makes sense. That type of fix can wait.
Thanks!

Patch

diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index d4cbb9e8c63f..5ea550884bf8 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -506,7 +506,7 @@ static struct sk_buff *veth_xdp_rcv_one(struct veth_rq *rq,
 		struct xdp_buff xdp;
 		u32 act;
 
-		xdp.data_hard_start = hard_start;
+		xdp.data_hard_start = head;
 		xdp.data = frame->data;
 		xdp.data_end = frame->data + frame->len;
 		xdp.data_meta = frame->data - frame->metasize;