
[1/2] e1000: Fixing interrupts pace.

Message ID 1458200278-11940-2-git-send-email-sameeh@daynix.com
State New

Commit Message

Sameeh Jubran March 17, 2016, 7:37 a.m. UTC
This patch introduces an upper bound on the number of interrupts
per second. Without this bound an interrupt storm can occur, as
has been observed on Windows 10 when disabling the device.

According to the SPEC - the Intel PCI/PCI-X Family of Gigabit
Ethernet Controllers Software Developer's Manual, section
13.4.18 - the Ethernet controller guarantees a maximum
observable interrupt rate of 7813 interrupts/sec. Without this
upper bound, e1000 (when mit_delay < 500) could cause interrupts
to fire at a very high rate, leading to an interrupt storm.
Thus, if mit_delay < 500, the delay should be set to the minimum
possible value, which is 500 (the delay is expressed in 256 ns
units). This can be calculated easily as follows:

Interval = 10^9 / (7813 * 256) ≈ 500.
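
A standalone sanity check of this arithmetic (not part of the patch; the
7813/sec figure and the 256 ns granularity are taken from the text above):

    #include <stdio.h>

    int main(void)
    {
        const double max_rate_per_sec = 7813.0; /* max observable interrupt rate (SPEC) */
        const double itr_unit_ns = 256.0;       /* one mit_delay unit, in nanoseconds */

        /* delay (in 256 ns units) at which the rate is exactly 7813/sec;
         * any smaller delay would exceed the bound */
        double min_delay = 1e9 / (max_rate_per_sec * itr_unit_ns);

        printf("minimum mit_delay = %.2f -> clamp to 500\n", min_delay);
        return 0;
    }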

Signed-off-by: Sameeh Jubran <sameeh@daynix.com>
---
 hw/net/e1000.c | 8 ++++++++
 1 file changed, 8 insertions(+)

Comments

Shmulik Ladkani May 4, 2016, 11:34 a.m. UTC | #1
Hi Sameeh,

On Thu, 17 Mar 2016 09:37:57 +0200, sameeh@daynix.com wrote:
> @@ -357,6 +357,14 @@ set_interrupt_cause(E1000State *s, int index, uint32_t val)
>              }
>              mit_update_delay(&mit_delay, s->mac_reg[ITR]);
>  
> +            /*
> +             * According to e1000 SPEC, the Ethernet controller guarantees
> +             * a maximum observable interrupt rate of 7813 interrupts/sec.
> +             * Thus if mit_delay < 500 then the delay should be set to the
> +             * minimum delay possible which is 500.
> +             */
> +            mit_delay = (mit_delay < 500) ? 500 : mit_delay;
> +
>              if (mit_delay) {
>                  s->mit_timer_on = 1;
>                  timer_mod(s->mit_timer, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) +

Sorry for the late response.

Formerly, 'mit_delay' could possibly be 0 (if not updated by any of
the mit_update_delay calls), in which case 'mit_timer' wouldn't be
armed.

The new logic forces mit_delay to 500 even if it was 0 ("unset").

Which approach is correct?
- Either the 'if (mit_delay)' check is now superfluous,
- Or we need to keep the "unset" semantics (i.e. mit_delay == 0 means
  don't use the timer)

Regards,
Shmulik
Sameeh Jubran May 16, 2016, 5:58 a.m. UTC | #2
As mentioned in the patch:
"According to the SPEC - the Intel PCI/PCI-X Family of Gigabit
Ethernet Controllers Software Developer's Manual, section
13.4.18 - the Ethernet controller guarantees a maximum
observable interrupt rate of 7813 interrupts/sec. Without this
upper bound, e1000 (when mit_delay < 500) could cause interrupts
to fire at a very high rate, leading to an interrupt storm."
This means that on real hardware, even when mit_delay == 0 ("don't use
the timer"), the Ethernet controller still guarantees a maximum
observable interrupt rate of 7813 interrupts/sec. Unfortunately, that
isn't the case in the emulated device, where the interrupt rate can
exceed that of the real hardware and lead to an interrupt storm.
Setting mit_delay to 500 guarantees a maximum interrupt rate of
7813 interrupts/sec.
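
To put an illustrative (hypothetical) number on it: with mit_delay = 100
and the same 256 ns granularity used above, the emulated device could
fire up to 10^9 / (100 * 256) ≈ 39,000 interrupts/sec, roughly five
times the 7813/sec bound that real hardware observes.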

Regards,
Sameeh

Shmulik Ladkani May 16, 2016, 10:55 a.m. UTC | #3
Hi,

On Mon, 16 May 2016 08:58:32 +0300, sameeh@daynix.com wrote:
> This means that on real hardware, even when mit_delay == 0 ("don't use
> the timer"), the Ethernet controller still guarantees a maximum
> observable interrupt rate of 7813 interrupts/sec. Unfortunately, that
> isn't the case in the emulated device, where the interrupt rate can
> exceed that of the real hardware and lead to an interrupt storm.
> Setting mit_delay to 500 guarantees a maximum interrupt rate of
> 7813 interrupts/sec.

OK, if that is the case, then

> > > +            mit_delay = (mit_delay < 500) ? 500 : mit_delay;
> > > +
> > >              if (mit_delay) {
> > >                  s->mit_timer_on = 1;
> > >                  timer_mod(s->mit_timer,

The existing 'if (mit_delay)' check is now superfluous, since it is
always true, and can therefore be removed.
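
A minimal standalone sketch (using a hypothetical helper name, not actual
QEMU code) of why the check becomes dead code once the clamp is in place:

    #include <assert.h>
    #include <stdint.h>

    /* Model of the clamp added by the patch: after it runs, mit_delay can
     * never be 0, so a following 'if (mit_delay)' test is always true. */
    static uint32_t clamp_mit_delay(uint32_t mit_delay)
    {
        return (mit_delay < 500) ? 500 : mit_delay;
    }

    int main(void)
    {
        assert(clamp_mit_delay(0) == 500);       /* the former "unset" case */
        assert(clamp_mit_delay(499) == 500);     /* below the minimum */
        assert(clamp_mit_delay(10000) == 10000); /* larger delays pass through */
        return 0;
    }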
Sameeh Jubran May 17, 2016, 6:26 p.m. UTC | #4
This seems like a good idea, thanks!


Patch

diff --git a/hw/net/e1000.c b/hw/net/e1000.c
index 0387fa0..09b9ab5 100644
--- a/hw/net/e1000.c
+++ b/hw/net/e1000.c
@@ -357,6 +357,14 @@  set_interrupt_cause(E1000State *s, int index, uint32_t val)
             }
             mit_update_delay(&mit_delay, s->mac_reg[ITR]);
 
+            /*
+             * According to e1000 SPEC, the Ethernet controller guarantees
+             * a maximum observable interrupt rate of 7813 interrupts/sec.
+             * Thus if mit_delay < 500 then the delay should be set to the
+             * minimum delay possible which is 500.
+             */
+            mit_delay = (mit_delay < 500) ? 500 : mit_delay;
+
             if (mit_delay) {
                 s->mit_timer_on = 1;
                 timer_mod(s->mit_timer, qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) +