
[RFC,v1,00/10] Add AMD SEV guest live migration support

Message ID 20190424160942.13567-1-brijesh.singh@amd.com

Brijesh Singh April 24, 2019, 4:09 p.m. UTC
This series adds support for the AMD SEV guest live migration commands.
To protect the confidentiality of SEV-protected guest memory while in
transit, we need to use the SEV commands defined in the SEV API spec [1].

SEV guest VMs have the concept of private and shared memory. Private
memory is encrypted with the guest-specific key, while shared memory may
be encrypted with the hypervisor key. The commands provided by the SEV
firmware are meant to be used for private memory only. The patch series
introduces a new hypercall that the guest OS can use to notify the
hypervisor of a page's encryption status. If a page is encrypted with
the guest-specific key, we use the SEV commands during migration; if a
page is not encrypted, we fall back to the default migration path.

The series also adds a new ioctl, KVM_GET_PAGE_ENC_BITMAP, which QEMU
can use to retrieve the page encryption bitmap. QEMU consults this
bitmap during migration to determine whether a page is encrypted.

[1] https://developer.amd.com/wp-content/resources/55766.PDF

The series has been tested with QEMU. I am in the process of cleaning
up the QEMU code and will submit it soon.

While implementing the migration I stumbled on the following question:

- Since guest OS changes are required to support the migration, how do
  we know whether the guest OS has been updated? Should we extend the
  KVM capabilities/feature bits to check for this?

TODO:
 - add an ioctl to build the encryption bitmap. The encryption bitmap is
   built during guest bootup/execution. We should provide an ioctl so
   that the destination can build the bitmap as it receives the pages.
 - reset the bitmap on guest reboot.

The complete tree with patch is available at:
https://github.com/codomania/kvm/tree/sev-migration-rfc-v1

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Borislav Petkov <bp@suse.de>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: x86@kernel.org
Cc: kvm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org

Brijesh Singh (10):
  KVM: SVM: Add KVM_SEV_SEND_START command
  KVM: SVM: Add KVM_SEV_SEND_UPDATE_DATA command
  KVM: SVM: Add KVM_SEV_SEND_FINISH command
  KVM: SVM: Add support for KVM_SEV_RECEIVE_START command
  KVM: SVM: Add KVM_SEV_RECEIVE_UPDATE_DATA command
  KVM: SVM: Add KVM_SEV_RECEIVE_FINISH command
  KVM: x86: Add AMD SEV specific Hypercall3
  KVM: X86: Introduce KVM_HC_PAGE_ENC_STATUS hypercall
  KVM: x86: Introduce KVM_GET_PAGE_ENC_BITMAP ioctl
  mm: x86: Invoke hypercall when page encryption status is changed

 .../virtual/kvm/amd-memory-encryption.rst     | 116 ++++
 Documentation/virtual/kvm/hypercalls.txt      |  14 +
 arch/x86/include/asm/kvm_host.h               |   3 +
 arch/x86/include/asm/kvm_para.h               |  12 +
 arch/x86/include/asm/mem_encrypt.h            |   3 +
 arch/x86/kvm/svm.c                            | 560 +++++++++++++++++-
 arch/x86/kvm/vmx/vmx.c                        |   1 +
 arch/x86/kvm/x86.c                            |  17 +
 arch/x86/mm/mem_encrypt.c                     |  45 +-
 arch/x86/mm/pageattr.c                        |  15 +
 include/uapi/linux/kvm.h                      |  51 ++
 include/uapi/linux/kvm_para.h                 |   1 +
 12 files changed, 834 insertions(+), 4 deletions(-)

Comments

Steve Rutherford April 24, 2019, 7:15 p.m. UTC | #1
On Wed, Apr 24, 2019 at 9:10 AM Singh, Brijesh <brijesh.singh@amd.com> wrote:
> [...]

What's the back-of-the-envelope marginal cost of transferring a 16kB
region from one host to another? I'm interested in what the end to end
migration perf changes look like for this. If you have measured
migration perf, I'm interested in that also.
Brijesh Singh April 24, 2019, 9:32 p.m. UTC | #2
On 4/24/19 2:15 PM, Steve Rutherford wrote:
> On Wed, Apr 24, 2019 at 9:10 AM Singh, Brijesh <brijesh.singh@amd.com> wrote:
>> [...]
>
> What's the back-of-the-envelope marginal cost of transferring a 16kB
> region from one host to another? I'm interested in what the end to end
> migration perf changes look like for this. If you have measured
> migration perf, I'm interested in that also.
> 

I have not done a complete performance analysis yet. From the QEMU QMP
prompt (query-migrate) I am getting ~8 Mbps throughput from one host to
another (this is with 4 KB regions). I have been told that increasing
the transfer size from 4 KB to 16 KB may not give a huge performance
gain because at the FW level it still operates on 4 KB blocks. There is
a possibility that future FW updates may give somewhat better
performance at the 16 KB size.

-Brijesh
Steve Rutherford April 25, 2019, 12:18 a.m. UTC | #3
Do you mean MiB/s, MB/s or Mb/s? Since you are talking about network
speeds, sometimes these get conflated.

I'm guessing you mean MB/s since you are also using 4kb for page size.

On Wed, Apr 24, 2019 at 2:32 PM Singh, Brijesh <brijesh.singh@amd.com>
wrote:

>
> [...]
> >
> > What's the back-of-the-envelope marginal cost of transferring a 16kB
> > region from one host to another? I'm interested in what the end to end
> > migration perf changes look like for this. If you have measured
> > migration perf, I'm interested in that also.
> >
>
> I have not done a complete performance analysis yet! From the qemu
> QMP prompt (query-migration) I am getting ~8mbps throughput from
> one host to another (this is with 4kb regions). I have been told
> that increasing the transfer size from 4kb -> 16kb may not give a
> huge performance gain because at FW level they still operating
> on 4kb blocks. There is possibility that future FW updates may
> give a bit better performance on 16kb size.
>
> -Brijesh
>
Brijesh Singh April 25, 2019, 2:15 a.m. UTC | #4
On 4/24/19 7:18 PM, Steve Rutherford wrote:
> Do you mean MiB/s, MB/s or Mb/s? Since you are talking about network
> speeds, sometimes these get conflated.

It's megabits/sec. The QMP query-migrate command shows the throughput
in Mbits/s. It includes the PSP command execution and the network
write. Most of the time is spent in the PSP FW. I have not benchmarked
the raw PSP commands yet, but I believe SEV FW 0.17 may produce up to
12 Mbits/s. I will update the thread after I finish a further
performance breakdown.

-Brijesh