[net] inet: frag: enforce memory limits earlier

Message ID: 20180731030911.248637-1-edumazet@google.com
State: Accepted, archived
Delegated to: David Miller
Series: [net] inet: frag: enforce memory limits earlier

Commit Message

Eric Dumazet July 31, 2018, 3:09 a.m. UTC
We currently check frags memory usage only when a new frag
queue is created. This allows attackers to first consume the
memory budget (default: 4 MB) by creating thousands of frag
queues, then send tiny skbs to exceed the high_thresh limit
by 2 to 3 orders of magnitude.

Note that before commit 648700f76b03 ("inet: frags: use rhashtables
for reassembly units"), the work queue could be starved under DoS,
getting no cpu cycles.
After commit 648700f76b03, only the per-frag-queue timer can
eventually remove an incomplete frag queue and its skbs.

Fixes: b13d3cbfb8e8 ("inet: frag: move eviction of queues to work queue")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Jann Horn <jannh@google.com>
Cc: Florian Westphal <fw@strlen.de>
Cc: Peter Oskolkov <posk@google.com>
Cc: Paolo Abeni <pabeni@redhat.com>
---
 net/ipv4/inet_fragment.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
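
As a quick illustration of what moves where (a minimal userspace model;
the names add_frag_old/add_frag_new and the sizes below are made up for
this sketch, not the kernel's actual accounting): checking the budget
only at queue creation lets tiny follow-up fragments sail past
high_thresh, while checking on every lookup caps usage at the threshold.

/* Hypothetical model of the accounting change. HIGH_THRESH matches
 * the default budget; everything else is illustrative.
 */
#include <stdio.h>

#define HIGH_THRESH (4UL << 20)		/* default budget: 4 MB */

static unsigned long mem_used;

/* Pre-patch: the limit was enforced only in inet_frag_alloc(),
 * i.e. only when a fragment would create a brand-new queue.
 */
static int add_frag_old(int creates_queue, unsigned long truesize)
{
	if (creates_queue && mem_used > HIGH_THRESH)
		return -1;		/* drop: no new queue */
	mem_used += truesize;		/* existing queues grow freely */
	return 0;
}

/* Post-patch: the limit is enforced in inet_frag_find(), i.e. for
 * every incoming fragment, new queue or not.
 */
static int add_frag_new(unsigned long truesize)
{
	if (mem_used > HIGH_THRESH)
		return -1;		/* drop before queueing */
	mem_used += truesize;
	return 0;
}

int main(void)
{
	unsigned long i, dropped = 0;

	/* Fill the budget with queue-creating fragments, then append
	 * ten million tiny skbs: the old check never fires again.
	 */
	while (add_frag_old(1, 2048) == 0)
		;
	for (i = 0; i < 10 * 1000 * 1000; i++)
		if (add_frag_old(0, 256))
			dropped++;
	printf("old policy: %lu MB used, %lu drops\n",
	       mem_used >> 20, dropped);

	/* Same flood against the patched policy: usage stays pinned
	 * at the threshold and the excess is refused up front.
	 */
	mem_used = dropped = 0;
	while (add_frag_new(2048) == 0)
		;
	for (i = 0; i < 10 * 1000 * 1000; i++)
		if (add_frag_new(256))
			dropped++;
	printf("new policy: %lu MB used, %lu drops\n",
	       mem_used >> 20, dropped);
	return 0;
}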

Comments

Florian Westphal July 31, 2018, 5:54 a.m. UTC | #1
Eric Dumazet <edumazet@google.com> wrote:
> We currently check frags memory usage only when a new frag
> queue is created. This allows attackers to first consume the
> memory budget (default: 4 MB) by creating thousands of frag
> queues, then send tiny skbs to exceed the high_thresh limit
> by 2 to 3 orders of magnitude.
>
> Note that before commit 648700f76b03 ("inet: frags: use rhashtables
> for reassembly units"), the work queue could be starved under DoS,
> getting no cpu cycles.
> After commit 648700f76b03, only the per-frag-queue timer can
> eventually remove an incomplete frag queue and its skbs.

I'm not sure this is a good idea.

This can now prevent a "good" queue from completing just because an
attacker is sending garbage.
Jann Horn July 31, 2018, 3:23 p.m. UTC | #2
On Tue, Jul 31, 2018 at 7:54 AM Florian Westphal <fw@strlen.de> wrote:
>
> Eric Dumazet <edumazet@google.com> wrote:
> > We currently check frags memory usage only when a new frag
> > queue is created. This allows attackers to first consume the
> > memory budget (default: 4 MB) by creating thousands of frag
> > queues, then send tiny skbs to exceed the high_thresh limit
> > by 2 to 3 orders of magnitude.
> >
> > Note that before commit 648700f76b03 ("inet: frags: use rhashtables
> > for reassembly units"), the work queue could be starved under DoS,
> > getting no cpu cycles.
> > After commit 648700f76b03, only the per-frag-queue timer can
> > eventually remove an incomplete frag queue and its skbs.
>
> I'm not sure this is a good idea.
>
> This can now prevent a "good" queue from completing just because an
> attacker is sending garbage.

There is only a limited amount of memory available to store fragments.
If you receive lots of fragments that don't form complete packets,
you'll have to drop some packets. I don't see why it matters whether
incoming garbage only prevents the creation of new queues or also the
completion of existing queues.
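
To put rough numbers on what is at stake either way (illustrative
arithmetic only, taken from the commit message's figures and the
long-standing ipfrag defaults):

/* Back-of-envelope view of the pre-patch exposure. The 4 MB budget
 * and the 30 s ipfrag_time are the long-standing defaults; the
 * orders of magnitude come from the commit message.
 */
#include <stdio.h>

int main(void)
{
	unsigned long budget_mb = 4;	/* default high_thresh */

	/* Exceeding high_thresh by 2 to 3 orders of magnitude: */
	printf("worst case pinned: %lu MB to %lu MB\n",
	       budget_mb * 100, budget_mb * 1000);

	/* With eviction left to the per-queue timer, that memory
	 * can stay pinned for up to ipfrag_time (30 s by default).
	 */
	return 0;
}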
Florian Westphal July 31, 2018, 3:52 p.m. UTC | #3
Jann Horn <jannh@google.com> wrote:
> On Tue, Jul 31, 2018 at 7:54 AM Florian Westphal <fw@strlen.de> wrote:
> >
> > Eric Dumazet <edumazet@google.com> wrote:
> > > We currently check frags memory usage only when a new frag
> > > queue is created. This allows attackers to first consume the
> > > memory budget (default: 4 MB) by creating thousands of frag
> > > queues, then send tiny skbs to exceed the high_thresh limit
> > > by 2 to 3 orders of magnitude.
> > >
> > > Note that before commit 648700f76b03 ("inet: frags: use rhashtables
> > > for reassembly units"), the work queue could be starved under DoS,
> > > getting no cpu cycles.
> > > After commit 648700f76b03, only the per-frag-queue timer can
> > > eventually remove an incomplete frag queue and its skbs.
> >
> > I'm not sure this is a good idea.
> >
> > > This can now prevent a "good" queue from completing just because an
> > > attacker is sending garbage.
> 
> There is only a limited amount of memory available to store fragments.
> If you receive lots of fragments that don't form complete packets,
> you'll have to drop some packets. I don't see why it matters whether
> incoming garbage only prevents the creation of new queues or also the
> completion of existing queues.

Agreed.  Objection withdrawn.

Acked-by: Florian Westphal <fw@strlen.de>
David Miller July 31, 2018, 9:44 p.m. UTC | #4
From: Eric Dumazet <edumazet@google.com>
Date: Mon, 30 Jul 2018 20:09:11 -0700

> We currently check frags memory usage only when a new frag
> queue is created. This allows attackers to first consume the
> memory budget (default: 4 MB) by creating thousands of frag
> queues, then send tiny skbs to exceed the high_thresh limit
> by 2 to 3 orders of magnitude.
>
> Note that before commit 648700f76b03 ("inet: frags: use rhashtables
> for reassembly units"), the work queue could be starved under DoS,
> getting no cpu cycles.
> After commit 648700f76b03, only the per-frag-queue timer can
> eventually remove an incomplete frag queue and its skbs.
> 
> Fixes: b13d3cbfb8e8 ("inet: frag: move eviction of queues to work queue")
> Signed-off-by: Eric Dumazet <edumazet@google.com>
> Reported-by: Jann Horn <jannh@google.com>

Applied and queued up for -stable.

Patch

diff --git a/net/ipv4/inet_fragment.c b/net/ipv4/inet_fragment.c
index 1e4cf3ab560fac154fefb7acd3539eb6e91ed84e..0d70608cc2e18bfa0df35f331e12fbe9b45b168b 100644
--- a/net/ipv4/inet_fragment.c
+++ b/net/ipv4/inet_fragment.c
@@ -157,9 +157,6 @@  static struct inet_frag_queue *inet_frag_alloc(struct netns_frags *nf,
 {
 	struct inet_frag_queue *q;
 
-	if (!nf->high_thresh || frag_mem_limit(nf) > nf->high_thresh)
-		return NULL;
-
 	q = kmem_cache_zalloc(f->frags_cachep, GFP_ATOMIC);
 	if (!q)
 		return NULL;
@@ -204,6 +201,9 @@  struct inet_frag_queue *inet_frag_find(struct netns_frags *nf, void *key)
 {
 	struct inet_frag_queue *fq;
 
+	if (!nf->high_thresh || frag_mem_limit(nf) > nf->high_thresh)
+		return NULL;
+
 	rcu_read_lock();
 
 	fq = rhashtable_lookup(&nf->rhashtable, key, nf->f->rhash_params);
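
The threshold this relocated check enforces is the usual reassembly
sysctl (net.ipv4.ipfrag_high_thresh; ipv6 has the equivalent
ip6frag_high_thresh). A small sketch for inspecting it from userspace,
assuming the standard /proc layout; it is not part of the patch:

/* Print the reassembly budget that inet_frag_find() now checks on
 * every lookup. Sketch for inspection only.
 */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/sys/net/ipv4/ipfrag_high_thresh", "r");
	char buf[64];

	if (!f) {
		perror("fopen");
		return 1;
	}
	if (fgets(buf, sizeof(buf), f))
		printf("ipfrag_high_thresh: %s", buf);
	fclose(f);
	return 0;
}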