[2/8] cpu: Define new cpu_transaction_failed() hook

Message ID 1501867249-1924-3-git-send-email-peter.maydell@linaro.org
State New

Commit Message

Peter Maydell Aug. 4, 2017, 5:20 p.m. UTC
Currently we have a rather half-baked setup for allowing CPUs to
generate exceptions on accesses to invalid memory: the CPU has a
cpu_unassigned_access() hook which the memory system calls in
unassigned_mem_write() and unassigned_mem_read() if the current_cpu
pointer is non-NULL.  This was originally designed before we
implemented the MemTxResult type that allows memory operations to
report a success or failure code, which is why the hook is called
right at the bottom of the memory system.  The major problem with
this is that it means that the hook can be called even when the
access was not actually done by the CPU: for instance if the CPU
writes to a DMA engine register which causes the DMA engine to begin
a transaction which has been set up by the guest to operate on
invalid memory, then this will cause the CPU to take an exception
incorrectly.  Another minor problem is that currently if a device
returns a transaction error then this won't turn into a CPU exception
at all.

The right way to do this is to allow the CPU to respond
to memory system transaction failures at the point where the
CPU-specific code calls into the memory system.

Define a new QOM CPU method and utility function
cpu_transaction_failed() which is called in these cases.
The functionality here overlaps with the existing
cpu_unassigned_access() because individual target CPUs will
need some work to convert them to the new system. When this
transition is complete we can remove the old cpu_unassigned_access()
code.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
 include/qom/cpu.h | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)
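
For illustration, a minimal sketch of how a caller in a CPU's memory-access
path might use the new helper once it is wired up. This is not part of this
patch; the surrounding variables (cpu, as, buf, etc.) are assumed to be
provided by the per-target access code.

    /*
     * Illustrative only: perform the access, and report a failed bus
     * transaction to the CPU via the new hook.  All variables here are
     * assumed to be supplied by the surrounding per-target access path.
     */
    MemTxResult response = address_space_rw(as, physaddr, attrs, buf,
                                            size, is_write);
    if (response != MEMTX_OK) {
        cpu_transaction_failed(cpu, physaddr, addr, size, access_type,
                               mmu_idx, attrs, response, retaddr);
    }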

Comments

Richard Henderson Aug. 4, 2017, 6:42 p.m. UTC | #1
On 08/04/2017 10:20 AM, Peter Maydell wrote:
> Currently we have a rather half-baked setup for allowing CPUs to
> generate exceptions on accesses to invalid memory: the CPU has a
> cpu_unassigned_access() hook which the memory system calls in
> unassigned_mem_write() and unassigned_mem_read() if the current_cpu
> pointer is non-NULL.  This was originally designed before we
> implemented the MemTxResult type that allows memory operations to
> report a success or failure code, which is why the hook is called
> right at the bottom of the memory system.  The major problem with
> this is that it means that the hook can be called even when the
> access was not actually done by the CPU: for instance if the CPU
> writes to a DMA engine register which causes the DMA engine to begin
> a transaction which has been set up by the guest to operate on
> invalid memory, then this will cause the CPU to take an exception
> incorrectly.  Another minor problem is that currently if a device
> returns a transaction error then this won't turn into a CPU exception
> at all.
> 
> The right way to do this is to allow the CPU to respond
> to memory system transaction failures at the point where the
> CPU-specific code calls into the memory system.
> 
> Define a new QOM CPU method and utility function
> cpu_transaction_failed() which is called in these cases.
> The functionality here overlaps with the existing
> cpu_unassigned_access() because individual target CPUs will
> need some work to convert them to the new system. When this
> transition is complete we can remove the old cpu_unassigned_access()
> code.
> 
> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
> ---
>  include/qom/cpu.h | 21 +++++++++++++++++++++
>  1 file changed, 21 insertions(+)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~
Edgar E. Iglesias Aug. 5, 2017, 1:06 a.m. UTC | #2
On Fri, Aug 04, 2017 at 06:20:43PM +0100, Peter Maydell wrote:
> Currently we have a rather half-baked setup for allowing CPUs to
> generate exceptions on accesses to invalid memory: the CPU has a
> cpu_unassigned_access() hook which the memory system calls in
> unassigned_mem_write() and unassigned_mem_read() if the current_cpu
> pointer is non-NULL.  This was originally designed before we
> implemented the MemTxResult type that allows memory operations to
> report a success or failure code, which is why the hook is called
> right at the bottom of the memory system.  The major problem with
> this is that it means that the hook can be called even when the
> access was not actually done by the CPU: for instance if the CPU
> writes to a DMA engine register which causes the DMA engine to begin
> a transaction which has been set up by the guest to operate on
> invalid memory, then this will cause the CPU to take an exception
> incorrectly.  Another minor problem is that currently if a device
> returns a transaction error then this won't turn into a CPU exception
> at all.
> 
> The right way to do this is to allow the CPU to respond
> to memory system transaction failures at the point where the
> CPU-specific code calls into the memory system.
> 
> Define a new QOM CPU method and utility function
> cpu_transaction_failed() which is called in these cases.
> The functionality here overlaps with the existing
> cpu_unassigned_access() because individual target CPUs will
> need some work to convert them to the new system. When this
> transition is complete we can remove the old cpu_unassigned_access()
> code.
> 
> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
>
> ---
>  include/qom/cpu.h | 21 +++++++++++++++++++++
>  1 file changed, 21 insertions(+)
> 
> diff --git a/include/qom/cpu.h b/include/qom/cpu.h
> index 25eefea..fc54d55 100644
> --- a/include/qom/cpu.h
> +++ b/include/qom/cpu.h
> @@ -85,8 +85,10 @@ struct TranslationBlock;
>   * @has_work: Callback for checking if there is work to do.
>   * @do_interrupt: Callback for interrupt handling.
>   * @do_unassigned_access: Callback for unassigned access handling.
> + * (this is deprecated: new targets should use do_transaction_failed instead)
>   * @do_unaligned_access: Callback for unaligned access handling, if
>   * the target defines #ALIGNED_ONLY.
> + * @do_transaction_failed: Callback for handling failed memory transactions

Looks OK, but I wonder if you might want to clarify there that this is a
bus/slave failure and not a failure within the CPU (e.g. not an MMU fault).

Anyway:
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>



>   * @virtio_is_big_endian: Callback to return %true if a CPU which supports
>   * runtime configurable endianness is currently big-endian. Non-configurable
>   * CPUs can use the default implementation of this method. This method should
> @@ -153,6 +155,10 @@ typedef struct CPUClass {
>      void (*do_unaligned_access)(CPUState *cpu, vaddr addr,
>                                  MMUAccessType access_type,
>                                  int mmu_idx, uintptr_t retaddr);
> +    void (*do_transaction_failed)(CPUState *cpu, hwaddr physaddr, vaddr addr,
> +                                  unsigned size, MMUAccessType access_type,
> +                                  int mmu_idx, MemTxAttrs attrs,
> +                                  MemTxResult response, uintptr_t retaddr);
>      bool (*virtio_is_big_endian)(CPUState *cpu);
>      int (*memory_rw_debug)(CPUState *cpu, vaddr addr,
>                             uint8_t *buf, int len, bool is_write);
> @@ -837,6 +843,21 @@ static inline void cpu_unaligned_access(CPUState *cpu, vaddr addr,
>  
>      cc->do_unaligned_access(cpu, addr, access_type, mmu_idx, retaddr);
>  }
> +
> +static inline void cpu_transaction_failed(CPUState *cpu, hwaddr physaddr,
> +                                          vaddr addr, unsigned size,
> +                                          MMUAccessType access_type,
> +                                          int mmu_idx, MemTxAttrs attrs,
> +                                          MemTxResult response,
> +                                          uintptr_t retaddr)
> +{
> +    CPUClass *cc = CPU_GET_CLASS(cpu);
> +
> +    if (cc->do_transaction_failed) {
> +        cc->do_transaction_failed(cpu, physaddr, addr, size, access_type,
> +                                  mmu_idx, attrs, response, retaddr);
> +    }
> +}
>  #endif
>  
>  #endif /* NEED_CPU_H */
> -- 
> 2.7.4
> 
>
Edgar E. Iglesias Aug. 5, 2017, 1:12 a.m. UTC | #3
On Fri, Aug 04, 2017 at 06:20:43PM +0100, Peter Maydell wrote:
> Currently we have a rather half-baked setup for allowing CPUs to
> generate exceptions on accesses to invalid memory: the CPU has a
> cpu_unassigned_access() hook which the memory system calls in
> unassigned_mem_write() and unassigned_mem_read() if the current_cpu
> pointer is non-NULL.  This was originally designed before we
> implemented the MemTxResult type that allows memory operations to
> report a success or failure code, which is why the hook is called
> right at the bottom of the memory system.  The major problem with
> this is that it means that the hook can be called even when the
> access was not actually done by the CPU: for instance if the CPU
> writes to a DMA engine register which causes the DMA engine to begin
> a transaction which has been set up by the guest to operate on
> invalid memory, then this will cause the CPU to take an exception
> incorrectly.  Another minor problem is that currently if a device
> returns a transaction error then this won't turn into a CPU exception
> at all.
> 
> The right way to do this is to allow the CPU to respond
> to memory system transaction failures at the point where the
> CPU-specific code calls into the memory system.
> 
> Define a new QOM CPU method and utility function
> cpu_transaction_failed() which is called in these cases.
> The functionality here overlaps with the existing
> cpu_unassigned_access() because individual target CPUs will
> need some work to convert them to the new system. When this
> transition is complete we can remove the old cpu_unassigned_access()
> code.

BTW, a question. I don't know of any from memory but does any arch
have the ability to report the payload that failed for stores?
I guess it's easy enough to add that if needed though.


> 
> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
> ---
>  include/qom/cpu.h | 21 +++++++++++++++++++++
>  1 file changed, 21 insertions(+)
> 
> diff --git a/include/qom/cpu.h b/include/qom/cpu.h
> index 25eefea..fc54d55 100644
> --- a/include/qom/cpu.h
> +++ b/include/qom/cpu.h
> @@ -85,8 +85,10 @@ struct TranslationBlock;
>   * @has_work: Callback for checking if there is work to do.
>   * @do_interrupt: Callback for interrupt handling.
>   * @do_unassigned_access: Callback for unassigned access handling.
> + * (this is deprecated: new targets should use do_transaction_failed instead)
>   * @do_unaligned_access: Callback for unaligned access handling, if
>   * the target defines #ALIGNED_ONLY.
> + * @do_transaction_failed: Callback for handling failed memory transactions
>   * @virtio_is_big_endian: Callback to return %true if a CPU which supports
>   * runtime configurable endianness is currently big-endian. Non-configurable
>   * CPUs can use the default implementation of this method. This method should
> @@ -153,6 +155,10 @@ typedef struct CPUClass {
>      void (*do_unaligned_access)(CPUState *cpu, vaddr addr,
>                                  MMUAccessType access_type,
>                                  int mmu_idx, uintptr_t retaddr);
> +    void (*do_transaction_failed)(CPUState *cpu, hwaddr physaddr, vaddr addr,
> +                                  unsigned size, MMUAccessType access_type,
> +                                  int mmu_idx, MemTxAttrs attrs,
> +                                  MemTxResult response, uintptr_t retaddr);
>      bool (*virtio_is_big_endian)(CPUState *cpu);
>      int (*memory_rw_debug)(CPUState *cpu, vaddr addr,
>                             uint8_t *buf, int len, bool is_write);
> @@ -837,6 +843,21 @@ static inline void cpu_unaligned_access(CPUState *cpu, vaddr addr,
>  
>      cc->do_unaligned_access(cpu, addr, access_type, mmu_idx, retaddr);
>  }
> +
> +static inline void cpu_transaction_failed(CPUState *cpu, hwaddr physaddr,
> +                                          vaddr addr, unsigned size,
> +                                          MMUAccessType access_type,
> +                                          int mmu_idx, MemTxAttrs attrs,
> +                                          MemTxResult response,
> +                                          uintptr_t retaddr)
> +{
> +    CPUClass *cc = CPU_GET_CLASS(cpu);
> +
> +    if (cc->do_transaction_failed) {
> +        cc->do_transaction_failed(cpu, physaddr, addr, size, access_type,
> +                                  mmu_idx, attrs, response, retaddr);
> +    }
> +}
>  #endif
>  
>  #endif /* NEED_CPU_H */
> -- 
> 2.7.4
> 
>
Peter Maydell Aug. 5, 2017, 4:51 p.m. UTC | #4
On 5 August 2017 at 02:06, Edgar E. Iglesias <edgar.iglesias@gmail.com> wrote:
> On Fri, Aug 04, 2017 at 06:20:43PM +0100, Peter Maydell wrote:
>> @@ -85,8 +85,10 @@ struct TranslationBlock;
>>   * @has_work: Callback for checking if there is work to do.
>>   * @do_interrupt: Callback for interrupt handling.
>>   * @do_unassigned_access: Callback for unassigned access handling.
>> + * (this is deprecated: new targets should use do_transaction_failed instead)
>>   * @do_unaligned_access: Callback for unaligned access handling, if
>>   * the target defines #ALIGNED_ONLY.
>> + * @do_transaction_failed: Callback for handling failed memory transactions
>
> Looks OK, but I wonder if you might want to clarify there that this is a
> bus/slave failure and not a failure within the CPU (e.g. not an MMU fault).

Yes, we could add
"(ie bus faults or external aborts; not MMU faults)"
just to clarify.

thanks
-- PMM
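
For reference, with that addition the documentation comment in
include/qom/cpu.h might read something like the following (wording not
final, shown only to make the suggestion concrete):

     * @do_transaction_failed: Callback for handling failed memory transactions
     *     (ie bus faults or external aborts; not MMU faults)
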
Peter Maydell Aug. 5, 2017, 5:18 p.m. UTC | #5
On 5 August 2017 at 02:12, Edgar E. Iglesias <edgar.iglesias@gmail.com> wrote:
> BTW, a question. I don't know of any from memory but does any arch
> have the ability to report the payload that failed for stores?
> I guess it's easy enough to add that if needed though.

I think maybe m68k bus fault stack frames have
store payload data? The description of them is pretty
complicated though and I'm not sure how much of the
frame is "stuff we actually need to emulate" vs "data
that's only important if your implementation pipelines
instruction execution"...

As you say, we can easily add a 'uint64_t data' (only valid
when access_type == MMU_DATA_STORE), either now or later.

thanks
-- PMM
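
If such a payload were ever needed, the hook's prototype could simply grow
one extra argument. A sketch of what that might look like, purely
illustrative and not part of this series:

    void (*do_transaction_failed)(CPUState *cpu, hwaddr physaddr, vaddr addr,
                                  unsigned size, MMUAccessType access_type,
                                  int mmu_idx, MemTxAttrs attrs,
                                  MemTxResult response, uintptr_t retaddr,
                                  uint64_t data); /* valid only when
                                                     access_type == MMU_DATA_STORE */
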

Patch

diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index 25eefea..fc54d55 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -85,8 +85,10 @@  struct TranslationBlock;
  * @has_work: Callback for checking if there is work to do.
  * @do_interrupt: Callback for interrupt handling.
  * @do_unassigned_access: Callback for unassigned access handling.
+ * (this is deprecated: new targets should use do_transaction_failed instead)
  * @do_unaligned_access: Callback for unaligned access handling, if
  * the target defines #ALIGNED_ONLY.
+ * @do_transaction_failed: Callback for handling failed memory transactions
  * @virtio_is_big_endian: Callback to return %true if a CPU which supports
  * runtime configurable endianness is currently big-endian. Non-configurable
  * CPUs can use the default implementation of this method. This method should
@@ -153,6 +155,10 @@  typedef struct CPUClass {
     void (*do_unaligned_access)(CPUState *cpu, vaddr addr,
                                 MMUAccessType access_type,
                                 int mmu_idx, uintptr_t retaddr);
+    void (*do_transaction_failed)(CPUState *cpu, hwaddr physaddr, vaddr addr,
+                                  unsigned size, MMUAccessType access_type,
+                                  int mmu_idx, MemTxAttrs attrs,
+                                  MemTxResult response, uintptr_t retaddr);
     bool (*virtio_is_big_endian)(CPUState *cpu);
     int (*memory_rw_debug)(CPUState *cpu, vaddr addr,
                            uint8_t *buf, int len, bool is_write);
@@ -837,6 +843,21 @@  static inline void cpu_unaligned_access(CPUState *cpu, vaddr addr,
 
     cc->do_unaligned_access(cpu, addr, access_type, mmu_idx, retaddr);
 }
+
+static inline void cpu_transaction_failed(CPUState *cpu, hwaddr physaddr,
+                                          vaddr addr, unsigned size,
+                                          MMUAccessType access_type,
+                                          int mmu_idx, MemTxAttrs attrs,
+                                          MemTxResult response,
+                                          uintptr_t retaddr)
+{
+    CPUClass *cc = CPU_GET_CLASS(cpu);
+
+    if (cc->do_transaction_failed) {
+        cc->do_transaction_failed(cpu, physaddr, addr, size, access_type,
+                                  mmu_idx, attrs, response, retaddr);
+    }
+}
 #endif
 
 #endif /* NEED_CPU_H */
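
As an illustration of how a target would use the new hook, here is a minimal
sketch for a hypothetical target. The target name, CPU state fields, fault
encoding and exception index are all invented for the example and are not
part of this patch.

    /*
     * Hypothetical target code: record the fault and raise a bus-error
     * exception.  None of the MyTarget/MYTARGET_*/EXCP_* names below exist
     * in the tree; they stand in for whatever the real target provides.
     */
    static void mytarget_cpu_do_transaction_failed(CPUState *cs, hwaddr physaddr,
                                                   vaddr addr, unsigned size,
                                                   MMUAccessType access_type,
                                                   int mmu_idx, MemTxAttrs attrs,
                                                   MemTxResult response,
                                                   uintptr_t retaddr)
    {
        MyTargetCPU *cpu = MYTARGET_CPU(cs);
        CPUMyTargetState *env = &cpu->env;

        /* Latch enough information for the guest's bus-fault handler */
        env->fault_address = addr;
        env->fault_is_write = (access_type == MMU_DATA_STORE);

        /* Unwind the TCG code that did the access and take the exception */
        cs->exception_index = EXCP_BUS_ERROR;
        cpu_loop_exit_restore(cs, retaddr);
    }

    static void mytarget_cpu_class_init(ObjectClass *oc, void *data)
    {
        CPUClass *cc = CPU_CLASS(oc);

        cc->do_transaction_failed = mytarget_cpu_do_transaction_failed;
    }

Whether the fault is taken synchronously like this, or merely latched and
reported asynchronously later, is a per-target decision; the hook itself does
not impose either model.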