[V3,2/2] kvm tools: Create arch-specific kvm_cpu__emulate_{mm}io()

Message ID 1323757307-10411-3-git-send-email-matt@ozlabs.org
State New, archived

Commit Message

Matt Evans Dec. 13, 2011, 6:21 a.m. UTC
Different architectures will deal with MMIO exits differently.  For example,
KVM_EXIT_IO is x86-specific, and I/O cycles are often synthesised by steering
into windows in PCI bridges on other architectures.

This patch calls arch-specific kvm_cpu__emulate_io() and kvm_cpu__emulate_mmio()
from the main runloop's IO and MMIO exit handlers.  For x86, these directly
call kvm__emulate_io() and kvm__emulate_mmio() but other architectures will
perform some address munging before passing on the call.

Signed-off-by: Matt Evans <matt@ozlabs.org>
---
 tools/kvm/kvm-cpu.c                      |   34 +++++++++++++++---------------
 tools/kvm/x86/include/kvm/kvm-cpu-arch.h |   17 ++++++++++++++-
 2 files changed, 33 insertions(+), 18 deletions(-)
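
For contrast with the x86 wrappers in the patch below, here is a minimal sketch of how a non-x86 wrapper might apply such address munging. The window constants and the offset are invented for illustration and are not part of this patch:

/*
 * Hypothetical non-x86 kvm-cpu-arch.h wrapper: translate the CPU physical
 * address into a PCI bus address before handing off to the generic MMIO
 * emulation.  The PHB_MMIO_WIN_* values below are made up.
 */
#include "kvm/kvm.h"	/* for kvm__emulate_mmio() */
#include <stdbool.h>

#define PHB_MMIO_WIN_BASE	0xf0000000ULL	/* CPU address of the window  */
#define PHB_MMIO_WIN_SIZE	0x10000000ULL
#define PHB_MMIO_WIN_PCI	0x80000000ULL	/* PCI bus address it maps to */

static inline bool kvm_cpu__emulate_mmio(struct kvm *kvm, u64 phys_addr,
					 u8 *data, u32 len, u8 is_write)
{
	if (phys_addr >= PHB_MMIO_WIN_BASE &&
	    phys_addr < PHB_MMIO_WIN_BASE + PHB_MMIO_WIN_SIZE)
		phys_addr = phys_addr - PHB_MMIO_WIN_BASE + PHB_MMIO_WIN_PCI;

	return kvm__emulate_mmio(kvm, phys_addr, data, len, is_write);
}

The x86 wrappers in the patch below are the degenerate case of this: no window, no offset.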

Comments

Alexander Graf Dec. 23, 2011, 12:58 p.m. UTC | #1
On 13.12.2011, at 07:21, Matt Evans wrote:

> Different architectures will deal with MMIO exits differently.  For example,
> KVM_EXIT_IO is x86-specific, and I/O cycles are often synthesised by steering
> into windows in PCI bridges on other architectures.
> 
> This patch calls arch-specific kvm_cpu__emulate_io() and kvm_cpu__emulate_mmio()
> from the main runloop's IO and MMIO exit handlers.  For x86, these directly
> call kvm__emulate_io() and kvm__emulate_mmio() but other architectures will
> perform some address munging before passing on the call.

Why do you need address munging? PIO is simply not there and MMIO always goes to the physical address the CPU sees, so I don't see what you want to munge. The way the memory bus is attached to the CPU should certainly not be modeled differently for PPC and x86.


Alex

Matt Evans Dec. 23, 2011, 1:26 p.m. UTC | #2
On 23/12/2011, at 11:58 PM, Alexander Graf wrote:

> 
> On 13.12.2011, at 07:21, Matt Evans wrote:
> 
>> Different architectures will deal with MMIO exits differently.  For example,
>> KVM_EXIT_IO is x86-specific, and I/O cycles are often synthesised by steering
>> into windows in PCI bridges on other architectures.
>> 
>> This patch calls arch-specific kvm_cpu__emulate_io() and kvm_cpu__emulate_mmio()
>> from the main runloop's IO and MMIO exit handlers.  For x86, these directly
>> call kvm__emulate_io() and kvm__emulate_mmio() but other architectures will
>> perform some address munging before passing on the call.
> 
> Why do you need address munging? PIO is simply not there and MMIO always goes to the physical address the CPU sees, so I don't see what you want to munge. The way the memory bus is attached to the CPU should certainly not be modeled differently for PPC and x86.

PIO not there?  PIO is used heavily in kvmtool.  So, I made a window in a similar way to how a real PHB has PIO-window-in-MMIO.

PCI BARs are currently 32-bit.  I don't want to limit the guest RAM to <4G nor puncture holes in it just to make it look like x86... PCI bus addresses == CPU addresses is a bit of an x86ism.  So, I just used another PHB window to offset 32bit PCI MMIO up somewhere else.  We can then use all 4G of PCI MMIO space without putting that at addr 0 and RAM starting >4G.  (And then, exception vectors where?)

The PCI/BARs/MMIO code could really support 64bit addresses though that's a bit of an orthogonal bit of work.  Why should PPC have an MMIO hole in the middle of RAM?
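
A rough sketch of that PIO-window-in-MMIO steering, with invented window values and a hypothetical helper name; an arch MMIO wrapper would call this for accesses that fall inside the window:

#include "kvm/kvm.h"	/* for kvm__emulate_io() */
#include <linux/kvm.h>	/* for KVM_EXIT_IO_IN / KVM_EXIT_IO_OUT */
#include <stdbool.h>

/* Hypothetical PIO window inside the PHB's MMIO space; values are made up. */
#define PHB_IO_WIN_BASE		0xf2000000ULL
#define PHB_IO_WIN_SIZE		0x00010000ULL	/* 64K of legacy I/O space */

static inline bool phb_steer_pio(struct kvm *kvm, u64 phys_addr, u8 *data,
				 u32 len, u8 is_write)
{
	u16 port = (u16)(phys_addr - PHB_IO_WIN_BASE);

	/* An MMIO cycle in the window becomes a PIO cycle on that port. */
	return kvm__emulate_io(kvm, port, data,
			       is_write ? KVM_EXIT_IO_OUT : KVM_EXIT_IO_IN,
			       len, 1);
}

The guest then reaches kvmtool's port-I/O devices through ordinary loads and stores to that window, without any real PIO instructions.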


Cheers!


Matt

Alexander Graf Dec. 23, 2011, 1:39 p.m. UTC | #3
On 23.12.2011, at 14:26, Matt Evans wrote:

> 
> On 23/12/2011, at 11:58 PM, Alexander Graf wrote:
> 
>> 
>> On 13.12.2011, at 07:21, Matt Evans wrote:
>> 
>>> Different architectures will deal with MMIO exits differently.  For example,
>>> KVM_EXIT_IO is x86-specific, and I/O cycles are often synthesised by steering
>>> into windows in PCI bridges on other architectures.
>>> 
>>> This patch calls arch-specific kvm_cpu__emulate_io() and kvm_cpu__emulate_mmio()
>>> from the main runloop's IO and MMIO exit handlers.  For x86, these directly
>>> call kvm__emulate_io() and kvm__emulate_mmio() but other architectures will
>>> perform some address munging before passing on the call.
>> 
>> Why do you need address munging? PIO is simply not there and MMIO always goes to the physical address the CPU sees, so I don't see what you want to munge. The way the memory bus is attached to the CPU should certainly not be modeled differently for PPC and x86.
> 
> PIO not there?  PIO is used heavily in kvmtool.  So, I made a window in a similar way to how a real PHB has PIO-window-in-MMIO.
> 
> PCI BARs are currently 32-bit.  I don't want to limit the guest RAM to <4G nor puncture holes in it just to make it look like x86... PCI bus addresses == CPU addresses is a bit of an x86ism.  So, I just used another PHB window to offset 32bit PCI MMIO up somewhere else.  We can then use all 4G of PCI MMIO space without putting that at addr 0 and RAM starting >4G.  (And then, exception vectors where?)
> 
> The PCI/BARs/MMIO code could really support 64bit addresses though that's a bit of an orthogonal bit of work.  Why should PPC have an MMIO hole in the middle of RAM?

I fully agree with what you're saying, but the layering seems off. If the CPU gets an MMIO request, it gets that on a physical address from the view of the CPU. Why would you want to have manual munging there to get to whatever window you have? Just map the MMIO regions to the higher addresses and expose whatever different representation you have to the device, not to the CPU layer.

As for PIO, you won't get PIO requests from the CPU, so you can ignore those. All events you get from the CPU are MMIO.
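
A sketch of that layering, where any bus-address translation is folded into device/BAR registration and the CPU exit path passes physical addresses straight through. kvm__register_mmio() is kvmtool's registration hook, but its exact signature, the window base and the device names here are assumed for illustration:

#include "kvm/kvm.h"
#include <stdbool.h>

#define PHB_MMIO_WIN_BASE	0xf0000000ULL	/* illustrative CPU address of the 32-bit PCI window */

/* The device callback sees CPU-visible addresses; nothing to munge at exit time. */
static void my_dev_mmio(u64 addr, u8 *data, u32 len, u8 is_write)
{
	/* ... device emulation ... */
}

static bool my_dev__init(struct kvm *kvm, u32 bar_pci_addr, u32 bar_len)
{
	/* Translate the 32-bit PCI bus address to its CPU-visible location
	 * once, at registration time. */
	u64 cpu_addr = PHB_MMIO_WIN_BASE + bar_pci_addr;

	return kvm__register_mmio(kvm, cpu_addr, bar_len, my_dev_mmio);
}

With that, a PPC kvm_cpu__emulate_mmio() could call kvm__emulate_mmio() unchanged, exactly like the x86 wrapper in the patch below.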


Alex

Matt Evans Jan. 6, 2012, 5:32 a.m. UTC | #4
Hey Alex,

On 24/12/11 00:39, Alexander Graf wrote:
> 
> On 23.12.2011, at 14:26, Matt Evans wrote:
> 
>>
>> On 23/12/2011, at 11:58 PM, Alexander Graf wrote:
>>
>>>
>>> On 13.12.2011, at 07:21, Matt Evans wrote:
>>>
>>>> Different architectures will deal with MMIO exits differently.  For example,
>>>> KVM_EXIT_IO is x86-specific, and I/O cycles are often synthesised by steering
>>>> into windows in PCI bridges on other architectures.
>>>>
>>>> This patch calls arch-specific kvm_cpu__emulate_io() and kvm_cpu__emulate_mmio()
>>>> from the main runloop's IO and MMIO exit handlers.  For x86, these directly
>>>> call kvm__emulate_io() and kvm__emulate_mmio() but other architectures will
>>>> perform some address munging before passing on the call.
>>>
>>> Why do you need address munging? PIO is simply not there and MMIO always goes to the physical address the CPU sees, so I don't see what you want to munge. The way the memory bus is attached to the CPU should certainly not be modeled differently for PPC and x86.
>>
>> PIO not there?  PIO is used heavily in kvmtool.  So, I made a window in a similar way to how a real PHB has PIO-window-in-MMIO.
>>
>> PCI BARs are currently 32-bit.  I don't want to limit the guest RAM to <4G
>> nor puncture holes in it just to make it look like x86... PCI bus addresses
>> == CPU addresses is a bit of an x86ism.  So, I just used another PHB window
>> to offset 32bit PCI MMIO up somewhere else.  We can then use all 4G of PCI
>> MMIO space without putting that at addr 0 and RAM starting >4G.  (And then,
>> exception vectors where?)
>>
>> The PCI/BARs/MMIO code could really support 64bit addresses though that's a
>> bit of an orthogonal bit of work.  Why should PPC have an MMIO hole in the
>> middle of RAM?
> 

Sooo.. call it post-holiday bliss but I don't understand what you're saying
here. :)

> I fully agree with what you're saying, but the layering seems off. If the CPU
> gets an MMIO request, it gets that on a physical address from the view of the
  ^^^^ produces?

> CPU. Why would you want to have manual munging there to get to whatever window
> you have? Just map the MMIO regions to the higher addresses and expose
> whatever different representation you have to the device, not to the CPU
> layer.

What do you mean here by "map" and representation?  The only way I can parse
this is as though you're describing PCI devices seeing PCI bus addresses which
CPU MMIOs are converted to by the window offset, i.e. what already exists
i.e. what you're disagreeing with :-)

Sorry.. please explain some more.  Is your suggestion to make CPU phys addresses
and PCI bus addresses 1:1?  (Hole in RAM..)


Thanks!


Matt
Alexander Graf Jan. 9, 2012, 1:41 p.m. UTC | #5
On 06.01.2012, at 06:32, Matt Evans wrote:

> Hey Alex,
> 
> On 24/12/11 00:39, Alexander Graf wrote:
>> 
>> On 23.12.2011, at 14:26, Matt Evans wrote:
>> 
>>> 
>>> On 23/12/2011, at 11:58 PM, Alexander Graf wrote:
>>> 
>>>> 
>>>> On 13.12.2011, at 07:21, Matt Evans wrote:
>>>> 
>>>>> Different architectures will deal with MMIO exits differently.  For example,
>>>>> KVM_EXIT_IO is x86-specific, and I/O cycles are often synthesised by steering
>>>>> into windows in PCI bridges on other architectures.
>>>>> 
>>>>> This patch calls arch-specific kvm_cpu__emulate_io() and kvm_cpu__emulate_mmio()
>>>>> from the main runloop's IO and MMIO exit handlers.  For x86, these directly
>>>>> call kvm__emulate_io() and kvm__emulate_mmio() but other architectures will
>>>>> perform some address munging before passing on the call.
>>>> 
>>>> Why do you need address munging? PIO is simply not there and MMIO always goes to the physical address the CPU sees, so I don't see what you want to munge. The way the memory bus is attached to the CPU should certainly not be modeled differently for PPC and x86.
>>> 
>>> PIO not there?  PIO is used heavily in kvmtool.  So, I made a window in a similar way to how a real PHB has PIO-window-in-MMIO.
>>> 
>>> PCI BARs are currently 32-bit.  I don't want to limit the guest RAM to <4G
>>> nor puncture holes in it just to make it look like x86... PCI bus addresses
>>> == CPU addresses is a bit of an x86ism.  So, I just used another PHB window
>>> to offset 32bit PCI MMIO up somewhere else.  We can then use all 4G of PCI
>>> MMIO space without putting that at addr 0 and RAM starting >4G.  (And then,
>>> exception vectors where?)
>>> 
>>> The PCI/BARs/MMIO code could really support 64bit addresses though that's a
>>> bit of an orthogonal bit of work.  Why should PPC have an MMIO hole in the
>>> middle of RAM?
>> 
> 
> Sooo.. call it post-holiday bliss but I don't understand what you're saying
> here. :)
> 
>> I fully agree with what you're saying, but the layering seems off. If the CPU
>> gets an MMIO request, it gets that on a physical address from the view of the
>  ^^^^ produces?
> 
>> CPU. Why would you want to have manual munging there to get to whatever window
>> you have? Just map the MMIO regions to the higher addresses and expose
>> whatever different representation you have to the device, not to the CPU
>> layer.
> 
> What do you mean here by "map" and representation?  The only way I can parse
> this is as though you're describing PCI devices seeing PCI bus addresses which
> CPU MMIOs are converted to by the window offset, i.e. what already exists
> i.e. what you're disagreeing with :-)

Yup :). There's no reason you only have a single PCI bus. Or only PCI in general too. Having a single offset there is a tremendous oversimplification :)
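
Purely as an illustration of avoiding a single hard-coded offset, a port could describe each host bridge window in a small table and translate by lookup; the entries and names below are invented:

#include "kvm/kvm.h"
#include <stdbool.h>

/* Hypothetical per-window translation table; the values are made up. */
struct phb_window {
	u64	cpu_base;	/* CPU physical base of the window      */
	u64	size;		/* window length                        */
	u64	bus_base;	/* bus address the window translates to */
};

static const struct phb_window phb_windows[] = {
	{ 0xf0000000ULL, 0x04000000ULL, 0x80000000ULL },	/* PHB0 32-bit MMIO */
	{ 0xf8000000ULL, 0x04000000ULL, 0x80000000ULL },	/* PHB1 32-bit MMIO */
};

/* Returns true and fills *bus_addr if cpu_addr falls inside some window. */
static bool phb_translate(u64 cpu_addr, u64 *bus_addr)
{
	unsigned int i;

	for (i = 0; i < sizeof(phb_windows) / sizeof(phb_windows[0]); i++) {
		const struct phb_window *w = &phb_windows[i];

		if (cpu_addr >= w->cpu_base && cpu_addr < w->cpu_base + w->size) {
			*bus_addr = cpu_addr - w->cpu_base + w->bus_base;
			return true;
		}
	}

	return false;
}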


Alex


Patch

diff --git a/tools/kvm/kvm-cpu.c b/tools/kvm/kvm-cpu.c
index 8ec4efa..b7ae3d3 100644
--- a/tools/kvm/kvm-cpu.c
+++ b/tools/kvm/kvm-cpu.c
@@ -52,11 +52,11 @@  static void kvm_cpu__handle_coalesced_mmio(struct kvm_cpu *cpu)
 		while (cpu->ring->first != cpu->ring->last) {
 			struct kvm_coalesced_mmio *m;
 			m = &cpu->ring->coalesced_mmio[cpu->ring->first];
-			kvm__emulate_mmio(cpu->kvm,
-					m->phys_addr,
-					m->data,
-					m->len,
-					1);
+			kvm_cpu__emulate_mmio(cpu->kvm,
+					      m->phys_addr,
+					      m->data,
+					      m->len,
+					      1);
 			cpu->ring->first = (cpu->ring->first + 1) % KVM_COALESCED_MMIO_MAX;
 		}
 	}
@@ -111,13 +111,13 @@  int kvm_cpu__start(struct kvm_cpu *cpu)
 		case KVM_EXIT_IO: {
 			bool ret;
 
-			ret = kvm__emulate_io(cpu->kvm,
-					cpu->kvm_run->io.port,
-					(u8 *)cpu->kvm_run +
-					cpu->kvm_run->io.data_offset,
-					cpu->kvm_run->io.direction,
-					cpu->kvm_run->io.size,
-					cpu->kvm_run->io.count);
+			ret = kvm_cpu__emulate_io(cpu->kvm,
+						  cpu->kvm_run->io.port,
+						  (u8 *)cpu->kvm_run +
+						  cpu->kvm_run->io.data_offset,
+						  cpu->kvm_run->io.direction,
+						  cpu->kvm_run->io.size,
+						  cpu->kvm_run->io.count);
 
 			if (!ret)
 				goto panic_kvm;
@@ -126,11 +126,11 @@  int kvm_cpu__start(struct kvm_cpu *cpu)
 		case KVM_EXIT_MMIO: {
 			bool ret;
 
-			ret = kvm__emulate_mmio(cpu->kvm,
-					cpu->kvm_run->mmio.phys_addr,
-					cpu->kvm_run->mmio.data,
-					cpu->kvm_run->mmio.len,
-					cpu->kvm_run->mmio.is_write);
+			ret = kvm_cpu__emulate_mmio(cpu->kvm,
+						    cpu->kvm_run->mmio.phys_addr,
+						    cpu->kvm_run->mmio.data,
+						    cpu->kvm_run->mmio.len,
+						    cpu->kvm_run->mmio.is_write);
 
 			if (!ret)
 				goto panic_kvm;
diff --git a/tools/kvm/x86/include/kvm/kvm-cpu-arch.h b/tools/kvm/x86/include/kvm/kvm-cpu-arch.h
index 822d966..198efe6 100644
--- a/tools/kvm/x86/include/kvm/kvm-cpu-arch.h
+++ b/tools/kvm/x86/include/kvm/kvm-cpu-arch.h
@@ -4,7 +4,8 @@ 
 /* Architecture-specific kvm_cpu definitions. */
 
 #include <linux/kvm.h>	/* for struct kvm_regs */
-
+#include "kvm/kvm.h"	/* for kvm__emulate_{mm}io() */
+#include <stdbool.h>
 #include <pthread.h>
 
 struct kvm;
@@ -31,4 +32,18 @@  struct kvm_cpu {
 	struct kvm_coalesced_mmio_ring	*ring;
 };
 
+/*
+ * As these are such simple wrappers, let's have them in the header so they'll
+ * be cheaper to call:
+ */
+static inline bool kvm_cpu__emulate_io(struct kvm *kvm, u16 port, void *data, int direction, int size, u32 count)
+{
+	return kvm__emulate_io(kvm, port, data, direction, size, count);
+}
+
+static inline bool kvm_cpu__emulate_mmio(struct kvm *kvm, u64 phys_addr, u8 *data, u32 len, u8 is_write)
+{
+	return kvm__emulate_mmio(kvm, phys_addr, data, len, is_write);
+}
+
 #endif /* KVM__KVM_CPU_ARCH_H */