Patchwork [5/8] KVM: Add hva_to_memslot

Submitter Alexander Graf
Date Aug. 7, 2012, 10:57 a.m.
Message ID <1344337036-22244-6-git-send-email-agraf@suse.de>
Permalink /patch/175592/
State New
Headers show

Comments

Alexander Graf - Aug. 7, 2012, 10:57 a.m.
Architecture code might want to figure out if an hva that it gets for example
via the mmu notifier callbacks actually is in guest mapped memory, and if so,
in which memory slot.

This patch introduces a helper function to enable it to do so. It is a
prerequisite for the e500 mmu notifier implementation.

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 include/linux/kvm_host.h |    1 +
 virt/kvm/kvm_main.c      |   14 ++++++++++++++
 2 files changed, 15 insertions(+), 0 deletions(-)
Christoffer Dall - Aug. 8, 2012, 4:55 a.m.
On Tue, Aug 7, 2012 at 6:57 AM, Alexander Graf <agraf@suse.de> wrote:
> Architecture code might want to figure out if an hva that it gets for example
> via the mmu notifier callbacks actually is in guest mapped memory, and if so,
> in which memory slot.
>
> This patch introduces a helper function to enable it to do so. It is a
> prerequisite for the e500 mmu notifier implementation.
>
> Signed-off-by: Alexander Graf <agraf@suse.de>
> ---
>  include/linux/kvm_host.h |    1 +
>  virt/kvm/kvm_main.c      |   14 ++++++++++++++
>  2 files changed, 15 insertions(+), 0 deletions(-)
>
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index dbc65f9..2b92928 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -464,6 +464,7 @@ int kvm_gfn_to_hva_cache_init(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
>  int kvm_clear_guest_page(struct kvm *kvm, gfn_t gfn, int offset, int len);
>  int kvm_clear_guest(struct kvm *kvm, gpa_t gpa, unsigned long len);
>  struct kvm_memory_slot *gfn_to_memslot(struct kvm *kvm, gfn_t gfn);
> +struct kvm_memory_slot *hva_to_memslot(struct kvm *kvm, hva_t hva);
>  int kvm_is_visible_gfn(struct kvm *kvm, gfn_t gfn);
>  unsigned long kvm_host_page_size(struct kvm *kvm, gfn_t gfn);
>  void mark_page_dirty(struct kvm *kvm, gfn_t gfn);
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index bcf973e..d42591d 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -999,6 +999,20 @@ struct kvm_memory_slot *gfn_to_memslot(struct kvm *kvm, gfn_t gfn)
>  }
>  EXPORT_SYMBOL_GPL(gfn_to_memslot);
>
> +struct kvm_memory_slot *hva_to_memslot(struct kvm *kvm, hva_t hva)
> +{
> +       struct kvm_memslots *slots = kvm_memslots(kvm);
> +       struct kvm_memory_slot *memslot;
> +
> +       kvm_for_each_memslot(memslot, slots)
> +               if (hva >= memslot->userspace_addr &&
> +                     hva < memslot->userspace_addr + memslot->npages)

addr + npages, this doesn't look right

> +                       return memslot;
> +
> +       return NULL;
> +}
> +EXPORT_SYMBOL_GPL(hva_to_memslot);

consider also adding a hva_to_gpa wrapper now that you're adding it, then
ARM code will be so happy:

bool hva_to_gpa(struct kvm *kvm, unsigned long hva, gpa_t *gpa)
{
	struct kvm_memory_slot *memslot;

	memslot = hva_to_memslot(kvm, hva);
	if (!memslot)
		return false;

	gpa_t gpa_offset = hva - memslot->userspace_addr;
	*gpa = (memslot->base_gfn << PAGE_SHIFT) + gpa_offset;
	return true;
}

> +
>  int kvm_is_visible_gfn(struct kvm *kvm, gfn_t gfn)
>  {
>         struct kvm_memory_slot *memslot = gfn_to_memslot(kvm, gfn);
> --
> 1.6.0.2
>
> _______________________________________________
> kvmarm mailing list
> kvmarm@lists.cs.columbia.edu
> https://lists.cs.columbia.edu/cucslists/listinfo/kvmarm
--
To unsubscribe from this list: send the line "unsubscribe kvm-ppc" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Alexander Graf - Aug. 8, 2012, 5:30 p.m.
On 08.08.2012, at 06:55, Christoffer Dall wrote:

> On Tue, Aug 7, 2012 at 6:57 AM, Alexander Graf <agraf@suse.de> wrote:
>> Architecture code might want to figure out if an hva that it gets for example
>> via the mmu notifier callbacks actually is in guest mapped memory, and if so,
>> in which memory slot.
>> 
>> This patch introduces a helper function to enable it to do so. It is a
>> prerequisite for the e500 mmu notifier implementation.
>> 
>> Signed-off-by: Alexander Graf <agraf@suse.de>
>> ---
>> include/linux/kvm_host.h |    1 +
>> virt/kvm/kvm_main.c      |   14 ++++++++++++++
>> 2 files changed, 15 insertions(+), 0 deletions(-)
>> 
>> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
>> index dbc65f9..2b92928 100644
>> --- a/include/linux/kvm_host.h
>> +++ b/include/linux/kvm_host.h
>> @@ -464,6 +464,7 @@ int kvm_gfn_to_hva_cache_init(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
>> int kvm_clear_guest_page(struct kvm *kvm, gfn_t gfn, int offset, int len);
>> int kvm_clear_guest(struct kvm *kvm, gpa_t gpa, unsigned long len);
>> struct kvm_memory_slot *gfn_to_memslot(struct kvm *kvm, gfn_t gfn);
>> +struct kvm_memory_slot *hva_to_memslot(struct kvm *kvm, hva_t hva);
>> int kvm_is_visible_gfn(struct kvm *kvm, gfn_t gfn);
>> unsigned long kvm_host_page_size(struct kvm *kvm, gfn_t gfn);
>> void mark_page_dirty(struct kvm *kvm, gfn_t gfn);
>> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
>> index bcf973e..d42591d 100644
>> --- a/virt/kvm/kvm_main.c
>> +++ b/virt/kvm/kvm_main.c
>> @@ -999,6 +999,20 @@ struct kvm_memory_slot *gfn_to_memslot(struct kvm *kvm, gfn_t gfn)
>> }
>> EXPORT_SYMBOL_GPL(gfn_to_memslot);
>> 
>> +struct kvm_memory_slot *hva_to_memslot(struct kvm *kvm, hva_t hva)
>> +{
>> +       struct kvm_memslots *slots = kvm_memslots(kvm);
>> +       struct kvm_memory_slot *memslot;
>> +
>> +       kvm_for_each_memslot(memslot, slots)
>> +               if (hva >= memslot->userspace_addr &&
>> +                     hva < memslot->userspace_addr + memslot->npages)
> 
> addr + npages, this doesn't look right

Thanks a lot for spotting that one!

> 
>> +                       return memslot;
>> +
>> +       return NULL;
>> +}
>> +EXPORT_SYMBOL_GPL(hva_to_memslot);
> 
> consider also adding a hva_to_gpa wrapper now that you're adding it, then
> ARM code will be so happy:
> 
> bool hva_to_gpa(struct kvm *kvm, unsigned long hva, gpa_t *gpa)
> {
> 	struct kvm_memory_slot *memslot;
> 
> 	memslot = hva_to_memslot(kvm, hva);
> 	if (!memslot)
> 		return false;
> 
> 	gpa_t gpa_offset = hva - memslot->userspace_addr;
> 	*gpa = (memslot->base_gfn << PAGE_SHIFT) + gpa_offset;
> 	return true;
> }

What do you need this for? I usually don't like to add framework code that has no users (yet).


Alex

Takuya Yoshikawa - Aug. 9, 2012, 10:34 a.m.
On Tue,  7 Aug 2012 12:57:13 +0200
Alexander Graf <agraf@suse.de> wrote:

> +struct kvm_memory_slot *hva_to_memslot(struct kvm *kvm, hva_t hva)
> +{
> +	struct kvm_memslots *slots = kvm_memslots(kvm);
> +	struct kvm_memory_slot *memslot;
> +
> +	kvm_for_each_memslot(memslot, slots)
> +		if (hva >= memslot->userspace_addr &&
> +		      hva < memslot->userspace_addr + memslot->npages)
> +			return memslot;
> +
> +	return NULL;
> +}

Can't we have two memory slots which contain that hva?
I thought that's why hva handler had to check all slots.

Thanks,
	Takuya
Avi Kivity - Aug. 9, 2012, 10:36 a.m.
On 08/09/2012 01:34 PM, Takuya Yoshikawa wrote:
> On Tue,  7 Aug 2012 12:57:13 +0200
> Alexander Graf <agraf@suse.de> wrote:
> 
>> +struct kvm_memory_slot *hva_to_memslot(struct kvm *kvm, hva_t hva)
>> +{
>> +	struct kvm_memslots *slots = kvm_memslots(kvm);
>> +	struct kvm_memory_slot *memslot;
>> +
>> +	kvm_for_each_memslot(memslot, slots)
>> +		if (hva >= memslot->userspace_addr &&
>> +		      hva < memslot->userspace_addr + memslot->npages)
>> +			return memslot;
>> +
>> +	return NULL;
>> +}
> 
> Can't we have two memory slots which contain that hva?
> I thought that's why hva handler had to check all slots.

We can and do.  Good catch.
Alexander Graf - Aug. 9, 2012, 5:02 p.m.
On 09.08.2012, at 12:36, Avi Kivity <avi@redhat.com> wrote:

> On 08/09/2012 01:34 PM, Takuya Yoshikawa wrote:
>> On Tue,  7 Aug 2012 12:57:13 +0200
>> Alexander Graf <agraf@suse.de> wrote:
>> 
>>> +struct kvm_memory_slot *hva_to_memslot(struct kvm *kvm, hva_t hva)
>>> +{
>>> +    struct kvm_memslots *slots = kvm_memslots(kvm);
>>> +    struct kvm_memory_slot *memslot;
>>> +
>>> +    kvm_for_each_memslot(memslot, slots)
>>> +        if (hva >= memslot->userspace_addr &&
>>> +              hva < memslot->userspace_addr + memslot->npages)
>>> +            return memslot;
>>> +
>>> +    return NULL;
>>> +}
>> 
>> Can't we have two memory slots which contain that hva?
>> I thought that's why hva handler had to check all slots.
> 
> We can and do.  Good catch.
> 

Hrm. So I guess we can only do an hva_is_guest_memory() helper? That's all I really need anyways :)

Alex


> 
> -- 
> error compiling committee.c: too many arguments to function
Avi Kivity - Aug. 12, 2012, 9:24 a.m.
On 08/09/2012 08:02 PM, Alexander Graf wrote:
> 
> 
> On 09.08.2012, at 12:36, Avi Kivity <avi@redhat.com> wrote:
> 
>> On 08/09/2012 01:34 PM, Takuya Yoshikawa wrote:
>>> On Tue,  7 Aug 2012 12:57:13 +0200
>>> Alexander Graf <agraf@suse.de> wrote:
>>> 
>>>> +struct kvm_memory_slot *hva_to_memslot(struct kvm *kvm, hva_t hva)
>>>> +{
>>>> +    struct kvm_memslots *slots = kvm_memslots(kvm);
>>>> +    struct kvm_memory_slot *memslot;
>>>> +
>>>> +    kvm_for_each_memslot(memslot, slots)
>>>> +        if (hva >= memslot->userspace_addr &&
>>>> +              hva < memslot->userspace_addr + memslot->npages)
>>>> +            return memslot;
>>>> +
>>>> +    return NULL;
>>>> +}
>>> 
>>> Can't we have two memory slots which contain that hva?
>>> I thought that's why hva handler had to check all slots.
>> 
>> We can and do.  Good catch.
>> 
> 
> Hrm. So I guess we can only do an hva_is_guest_memory() helper? That's all I really need anyways :)
> 

How about kvm_for_each_memslot_hva_range()?  That could be useful in
kvm_handle_hva_range().  For your use case, you just do your stuff and
return immediately.
Alexander Graf - Aug. 12, 2012, 11:03 a.m.
On 12.08.2012, at 11:24, Avi Kivity wrote:

> On 08/09/2012 08:02 PM, Alexander Graf wrote:
>> 
>> 
>> On 09.08.2012, at 12:36, Avi Kivity <avi@redhat.com> wrote:
>> 
>>> On 08/09/2012 01:34 PM, Takuya Yoshikawa wrote:
>>>> On Tue,  7 Aug 2012 12:57:13 +0200
>>>> Alexander Graf <agraf@suse.de> wrote:
>>>> 
>>>>> +struct kvm_memory_slot *hva_to_memslot(struct kvm *kvm, hva_t hva)
>>>>> +{
>>>>> +    struct kvm_memslots *slots = kvm_memslots(kvm);
>>>>> +    struct kvm_memory_slot *memslot;
>>>>> +
>>>>> +    kvm_for_each_memslot(memslot, slots)
>>>>> +        if (hva >= memslot->userspace_addr &&
>>>>> +              hva < memslot->userspace_addr + memslot->npages)
>>>>> +            return memslot;
>>>>> +
>>>>> +    return NULL;
>>>>> +}
>>>> 
>>>> Can't we have two memory slots which contain that hva?
>>>> I thought that's why hva handler had to check all slots.
>>> 
>>> We can and do.  Good catch.
>>> 
>> 
>> Hrm. So I guess we can only do an hva_is_guest_memory() helper? That's all I really need anyways :)
>> 
> 
> How about kvm_for_each_memslot_hva_range()?  That could be useful in
> kvm_handle_hva_range().  For your use case, you just do your stuff and
> return immediately.

Well, for now I just dropped the whole thing. In general, chances are pretty good that an HVA we get notified on via mmu notifiers represents guest memory. And flushing a few times too often shouldn't hurt.


Alex

Avi Kivity - Aug. 12, 2012, 11:21 a.m.
On 08/12/2012 02:03 PM, Alexander Graf wrote:
> 
> Well, for now I just dropped the whole thing. In general, chances are pretty good that an HVA we get notified on via mmu notifiers represents guest memory. And flushing a few times too often shouldn't hurt.

That is not the case, actually.  We did this optimization for x86 for
this reason.
Alexander Graf - Aug. 12, 2012, 12:47 p.m.
On 12.08.2012, at 13:21, Avi Kivity <avi@redhat.com> wrote:

> On 08/12/2012 02:03 PM, Alexander Graf wrote:
>> 
>> Well, for now I just dropped the whole thing. In general, chances are pretty good that an HVA we get notified on via mmu notifiers represents guest memory. And flushing a few times too often shouldn't hurt.
> 
> That is not the case, actually.  We did this optimization for x86 for
> this reason.

Well, it's still what it is: an optimization. I'll start worrying about it again when I have a way to search for cache entries by hva :)

Alex


Patch

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index dbc65f9..2b92928 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -464,6 +464,7 @@  int kvm_gfn_to_hva_cache_init(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
 int kvm_clear_guest_page(struct kvm *kvm, gfn_t gfn, int offset, int len);
 int kvm_clear_guest(struct kvm *kvm, gpa_t gpa, unsigned long len);
 struct kvm_memory_slot *gfn_to_memslot(struct kvm *kvm, gfn_t gfn);
+struct kvm_memory_slot *hva_to_memslot(struct kvm *kvm, hva_t hva);
 int kvm_is_visible_gfn(struct kvm *kvm, gfn_t gfn);
 unsigned long kvm_host_page_size(struct kvm *kvm, gfn_t gfn);
 void mark_page_dirty(struct kvm *kvm, gfn_t gfn);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index bcf973e..d42591d 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -999,6 +999,20 @@  struct kvm_memory_slot *gfn_to_memslot(struct kvm *kvm, gfn_t gfn)
 }
 EXPORT_SYMBOL_GPL(gfn_to_memslot);
 
+struct kvm_memory_slot *hva_to_memslot(struct kvm *kvm, hva_t hva)
+{
+	struct kvm_memslots *slots = kvm_memslots(kvm);
+	struct kvm_memory_slot *memslot;
+
+	kvm_for_each_memslot(memslot, slots)
+		if (hva >= memslot->userspace_addr &&
+		      hva < memslot->userspace_addr + memslot->npages)
+			return memslot;
+
+	return NULL;
+}
+EXPORT_SYMBOL_GPL(hva_to_memslot);
+
 int kvm_is_visible_gfn(struct kvm *kvm, gfn_t gfn)
 {
 	struct kvm_memory_slot *memslot = gfn_to_memslot(kvm, gfn);