
[2/2] net: AWS ENA: Flush WCBs before writing new SQ tail to doorbell

Message ID 20200102180830.66676-3-liran.alon@oracle.com
State Changes Requested
Delegated to: David Miller
Series net: AWS ENA: Fix memory barrier usage when using LLQ

Commit Message

Liran Alon Jan. 2, 2020, 6:08 p.m. UTC
The AWS ENA NIC supports running the Tx SQ in Low Latency Queue (LLQ)
mode (also referred to as "push mode"). In this mode, the driver pushes
the transmit descriptors and the first 128 bytes of the packet directly
to ENA device memory, while the rest of the packet payload is fetched
by the device from host memory. For this operation mode, the driver
uses a dedicated PCI BAR which is mapped as WC memory.

The function ena_com_write_bounce_buffer_to_dev() is responsible for
writing to the above-mentioned PCI BAR.

Once the write of the new SQ tail to the doorbell is visible to the
device, the device expects to be able to read the relevant transmit
descriptors and packet headers from device memory. Therefore, the
driver must ensure write-combined buffers (WCBs) are flushed before
the doorbell write becomes visible to the device.

On some CPUs, this is taken care of by writel(). For example, x86 Intel
CPUs flush write-combined buffers when a read or write is done to UC
memory (in our case, the doorbell). See Intel SDM section
11.3 METHODS OF CACHING AVAILABLE:
"If the WC buffer is partially filled, the writes may be delayed until
the next occurrence of a serializing event; such as, an SFENCE or MFENCE
instruction, CPUID execution, a read or write to uncached memory, an
interrupt occurrence, or a LOCK instruction execution."

However, other CPUs do not provide this guarantee. For example, x86
AMD CPUs flush write-combined buffers only on a read from UC memory,
not on a write to UC memory. See the AMD Software Optimization Guide
for AMD Family 17h Processors, section 2.13.3 Write-Combining
Operations.

Therefore, modify ena_com_write_sq_doorbell() to flush write-combined
buffers with wmb() in case Tx SQ is in LLQ mode.

Note that this causes two theoretical, unnecessary performance hits:
(1) On x86 Intel, this executes an unnecessary SFENCE. The perf impact
is probably negligible, though, because it also leaves less work for
the implicit flush done internally by the write to UC memory.
(2) On ARM64, this replaces dma_wmb() with wmb(), which is more costly
(it uses DSB instead of DMB), even though DMB should be sufficient to
flush WCBs.

This patch focuses on making sure WCBs are flushed on all CPUs; a
future patch will add a new macro to Linux, such as flush_wc_writeX(),
that does the right thing for all archs and CPU vendors.
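Such a macro might look roughly as follows. This is purely a
hypothetical sketch: flush_wc_writel() does not exist in the kernel,
and the per-arch choices merely restate the reasoning above.

```c
/*
 * Hypothetical flush_wc_writel(): flush CPU write-combining buffers,
 * then write the doorbell.  Not an existing kernel API.
 */
#if defined(CONFIG_X86)
/* SFENCE flushes WC buffers on both Intel and AMD CPUs, so the
 * doorbell write itself can be relaxed. */
#define flush_wc_writel(val, addr)		\
	do {					\
		wmb();				\
		writel_relaxed((val), (addr));	\
	} while (0)
#elif defined(CONFIG_ARM64)
/* A DMB (dma_wmb()) should be enough to drain Normal-NC write
 * buffers, avoiding the heavier DSB implied by wmb(). */
#define flush_wc_writel(val, addr)		\
	do {					\
		dma_wmb();			\
		writel_relaxed((val), (addr));	\
	} while (0)
#else
/* Conservative fallback: full write barrier plus ordered MMIO write. */
#define flush_wc_writel(val, addr)		\
	do {					\
		wmb();				\
		writel((val), (addr));		\
	} while (0)
#endif
```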

Reviewed-by: Håkon Bugge <haakon.bugge@oracle.com>
Signed-off-by: Liran Alon <liran.alon@oracle.com>
---
 drivers/net/ethernet/amazon/ena/ena_eth_com.h | 17 ++++++++++++++++-
 1 file changed, 16 insertions(+), 1 deletion(-)

Comments

Liran Alon Jan. 3, 2020, 6:46 p.m. UTC | #1
> On 2 Jan 2020, at 20:08, Liran Alon <liran.alon@oracle.com> wrote:
> However, other CPUs do not provide this guarantee. For example, x86
> AMD CPUs flush write-combined buffers only on a read from UC memory.
> Not a write to UC memory. See AMD Software Optimisation Guide for AMD
> Family 17h Processors section 2.13.3 Write-Combining Operations.

Actually... after re-reading the AMD Software Optimization Guide, I see it is guaranteed that:
"Write-combining is closed if all 64 bytes of the write buffer are valid".
And this is indeed always the case for AWS ENA LLQ, because as can be seen in
ena_com_config_llq_info(), desc_list_entry_size is either 128, 192 or 256, i.e. always
a multiple of 64 bytes.

So in theory this patch could be dropped, as for x86 Intel & AMD, and for ARM64 with the
current desc_list_entry_size values, it isn't strictly necessary in order to guarantee that WC buffers are flushed.

I will let the AWS folks decide whether they prefer to apply this patch anyway, to make the WC flush explicit
and to avoid hard-to-debug issues in case a new non-multiple-of-64 size appears in the future, or
to drop this patch and instead add a WARN_ON() to ena_com_config_llq_info() for the case where
desc_list_entry_size is not a multiple of 64 bytes, to avoid taking a perf hit for no real value.

-Liran
Machulsky, Zorik Jan. 4, 2020, 4:55 a.m. UTC | #2
On 1/3/20, 1:47 PM, "Liran Alon" <liran.alon@oracle.com> wrote:

    
    
    Actually... After re-reading AMD Optimization Guide SDM, I see it is guaranteed that:
    “Write-combining is closed if all 64 bytes of the write buffer are valid”.
    And this is indeed always the case for AWS ENA LLQ. Because as can be seen at
    ena_com_config_llq_info(), desc_list_entry_size is either 128, 192 or 256. i.e. Always
    a multiple of 64 bytes.
    
    So this patch in theory could maybe be dropped as for x86 Intel & AMD and ARM64 with
    current desc_list_entry_size, it isn’t strictly necessary to guarantee that WC buffers are flushed.
    
    I will let AWS folks to decide if they prefer to apply this patch anyway to make WC flush explicit
    and to avoid hard-to-debug issues in case of new non-64-multiply size appear in the future. Or
    to drop this patch and instead add a WARN_ON() to ena_com_config_llq_info() in case desc_list_entry_size
    is not a multiple of 64 bytes. To avoid taking perf hit for no real value.
  
Liran, thanks for this important info. If this is the case, I believe we should drop this patch,
as it introduces an unnecessary branch in the data path. Agree with your WARN_ON() suggestion.
  
Bshara, Saeed Jan. 5, 2020, 9:53 a.m. UTC | #3
Thanks Liran,

I think we missed the payload visibility: the LLQ descriptor contains the header part of the packet, so in theory we also need to make sure that all CPU writes to the packet payload are visible to the device. I bet that in practice those stores will be visible without an explicit barrier, but we had better stick to the rules.
So we still need dma_wmb(). Also, that means the first patch can't simply remove the wmb(), as it actually may be taking care of the payload visibility.

saeed

Liran Alon Jan. 5, 2020, 10:22 a.m. UTC | #4
Hi Saeed,

If I understand correctly, the device only becomes aware of new descriptors once the tail is updated by ena_com_write_sq_doorbell() using writel().
If that's the case, then writel() guarantees all previous writes to WB/UC memory are visible to the device before the write done by writel().

If the device is allowed to fetch the packet payload at the moment the transmit descriptor is written into device memory using LLQ,
then ena_com_write_bounce_buffer_to_dev() should use dma_wmb() before __iowrite64_copy() instead of wmb(), and the comment there
is wrong and should be updated accordingly.
For example, this would optimise x86 to only use a compiler barrier instead of executing an SFENCE.
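The alternative raised here could be sketched as kernel-style code. The function signature and argument names below are simplified stand-ins for the real ena_eth_com.c helper, not the actual driver code.

```c
/*
 * Sketch only: if the device may fetch the payload as soon as the
 * descriptor lands in LLQ memory, the ordering that matters is
 * host-memory payload writes vs. the WC descriptor copy, and the
 * weaker dma_wmb() suffices there (a compiler barrier on x86).
 */
static void write_bounce_buffer_to_dev(void __iomem *dst_io_addr,
				       const void *bounce_buffer,
				       size_t entry_size)
{
	/* Payload writes to host memory must be visible to the device's
	 * DMA before the descriptor that references them is pushed. */
	dma_wmb();

	/* Copy descriptors + headers to the WC-mapped BAR, 64 bits at a
	 * time (entry_size is a multiple of 8). */
	__iowrite64_copy(dst_io_addr, bounce_buffer, entry_size / 8);
}

/* If instead the device only reads after the doorbell, the barrier
 * implied by writel() in ena_com_write_sq_doorbell() already orders
 * both the payload and the descriptor copy, and no extra fence is
 * needed here. */
```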

Can you clarify the device's behaviour with respect to when it is allowed to read the packet payload?
i.e. is it only after the write to the doorbell, or from the moment the transmit descriptor is written to the LLQ?

-Liran

> On 5 Jan 2020, at 11:53, Bshara, Saeed <saeedb@amazon.com> wrote:
> 
> 
> Thanks Liran,
> 
> I think we missed the payload visibility; The LLQ descriptor contains the header part of the packet, in theory we will need also to make sure that all cpu writes to the packet payload are visible to the device, I bet that in practice those stores will be visible without explicit barrier, but we better stick to the rules.
> so we still need dma_wmb(), also, that means the first patch can't simply remove the wmb() as it actually may be taking care for the payload visibility.
> 
> saeed
Bshara, Saeed Jan. 5, 2020, 11:49 a.m. UTC | #5
On Sun, 2020-01-05 at 12:22 +0200, Liran Alon wrote:
> Hi Saeed,
> 
> If I understand correctly, the device is only aware of new
> descriptors once the tail is updated by ena_com_write_sq_doorbell()
> using writel().
> If that’s the case, then writel() guarantees all previous writes to
> WB/UC memory is visible to device before the write done by writel().
The device fetches the packet only after the doorbell notification.
You are right, writel() includes the needed barrier (
https://elixir.bootlin.com/linux/v5.4.8/source/Documentation/memory-barriers.txt#L1929
), so indeed we should be OK without any explicit wmb() or dma_wmb().

> 
> If device is allowed to fetch packet payload at the moment the
> transmit descriptor is written into device-memory using LLQ,
> then ena_com_write_bounce_buffer_to_dev() should dma_wmb() before
> __iowrite64_copy(). Instead of wmb(). And comment
> is wrong and should be updated accordingly.
> For example, this will optimise x86 to only have a compiler-barrier
> instead of executing a SFENCE.
> 
> Can you clarify what is device behaviour on when it is allowed to
> read the packet payload?
> i.e. Is it only after writing to doorbell or is it from the moment
> the transmit descriptor is written to LLQ?
> 
> -Liran
>

Patch

diff --git a/drivers/net/ethernet/amazon/ena/ena_eth_com.h b/drivers/net/ethernet/amazon/ena/ena_eth_com.h
index 77986c0ea52c..f9bfaef08bfa 100644
--- a/drivers/net/ethernet/amazon/ena/ena_eth_com.h
+++ b/drivers/net/ethernet/amazon/ena/ena_eth_com.h
@@ -179,7 +179,22 @@  static inline int ena_com_write_sq_doorbell(struct ena_com_io_sq *io_sq)
 	pr_debug("write submission queue doorbell for queue: %d tail: %d\n",
 		 io_sq->qid, tail);
 
-	writel(tail, io_sq->db_addr);
+	/*
+	 * When Tx SQ is in LLQ mode, transmit descriptors and packet headers
+	 * are written to device-memory mapped as WC. Therefore, we need to
+	 * ensure write-combined buffers are flushed before writing new SQ
+	 * tail to doorbell.
+	 *
+	 * On some CPUs (E.g. x86 AMD) writel() doesn't guarantee this.
+	 * Therefore, prefer to explicitly flush write-combined buffers
+	 * with wmb() before writing to doorbell in case Tx SQ is in LLQ mode.
+	 */
+	if (io_sq->mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV) {
+		wmb();
+		writel_relaxed(tail, io_sq->db_addr);
+	} else {
+		writel(tail, io_sq->db_addr);
+	}
 
 	if (is_llq_max_tx_burst_exists(io_sq)) {
 		pr_debug("reset available entries in tx burst for queue %d to %d\n",