Message ID: 1322129109-18140-1-git-send-email-zanghongyong@huawei.com
State: New
On Thu, Nov 24, 2011 at 3:05 AM, <zanghongyong@huawei.com> wrote:
> From: Hongyong Zang <zanghongyong@huawei.com>
>
> When a guest boots with ioeventfd, an error (by gdb) occurs:
> Program received signal SIGSEGV, Segmentation fault.
> 0x00000000006009cc in setup_ioeventfds (s=0x171dc40)
>     at /home/louzhengwei/git_source/qemu-kvm/hw/ivshmem.c:363
> 363         for (j = 0; j < s->peers[i].nb_eventfds; j++) {
> The bug is due to accessing s->peers which is NULL.

Can you share the command-line that caused the fault?

> This patch uses the memory region API to replace the old one kvm_set_ioeventfd_mmio_long().
> And this patch makes memory_region_add_eventfd() called in ivshmem_read() when qemu receives
> eventfd information from ivshmem_server.

Should this patch be split into two patches, to separate the bug fix
from the other changes related to the Memory API? Unless I
misunderstand how the two are necessarily related.

Cam

> Signed-off-by: Hongyong Zang <zanghongyong@huawei.com>
> ---
>  hw/ivshmem.c |   41 ++++++++++++++---------------------------
>  1 files changed, 14 insertions(+), 27 deletions(-)
> [...]
On Fri, Nov 25, 2011 at 4:29, Cam Macdonell wrote:
> On Thu, Nov 24, 2011 at 3:05 AM, <zanghongyong@huawei.com> wrote:
>> When a guest boots with ioeventfd, an error (by gdb) occurs:
>> Program received signal SIGSEGV, Segmentation fault.
>> 0x00000000006009cc in setup_ioeventfds (s=0x171dc40)
>>     at /home/louzhengwei/git_source/qemu-kvm/hw/ivshmem.c:363
>> 363         for (j = 0; j < s->peers[i].nb_eventfds; j++) {
>> The bug is due to accessing s->peers which is NULL.
> Can you share the command-line that caused the fault?

The command lines:
1) ivshmem_server -m 4 -p /tmp/nahanni
2) qemu-system-x86_64 -smp 1 -m 1024 -boot c -drive file=./rhel6_ivsh1.img,if=virtio -chardev socket,path=/tmp/nahanni,id=lzw -device ivshmem,chardev=lzw,size=4m,ioeventfd=on,vectors=8 -vnc :41

>> This patch uses the memory region API to replace the old one kvm_set_ioeventfd_mmio_long().
>> And this patch makes memory_region_add_eventfd() called in ivshmem_read() when qemu receives
>> eventfd information from ivshmem_server.
> Should this patch be split into two patches, to separate the bug fix
> from the other changes related to the Memory API? Unless I
> misunderstand how the two are necessarily related.

The bug is located in setup_ioeventfds(). That function wants to call
memory_region_add_eventfd() to configure eventfd info for every
s->peer, but s->peers is still uninitialized at that moment. When qemu
receives eventfd info from ivshmem_server, ivshmem_read() is called to
set the eventfd info. ivshmem_read() calls kvm_set_ioeventfd_mmio_long(),
which can be encapsulated in the new Memory API, so this patch uses
memory_region_add_eventfd() to replace kvm_set_ioeventfd_mmio_long().
This way, each qemu configures the proper eventfd info as peers arrive,
and there is no need for setup_ioeventfds() at all. Furthermore, nobody
uses IVShmemState's member "pcibus_t mmio_addr", so that member can be
removed as well.

Regards,
Hongyong Zang

> Cam
>
>> Signed-off-by: Hongyong Zang <zanghongyong@huawei.com>
>> [...]
Can this bug fix patch be applied yet? With this bug, the guest OS
cannot successfully boot with ioeventfd, so the new PIO DoorBell patch
cannot be posted.

Thanks,
Hongyong

On Thu, Nov 24, 2011 at 18:05, zanghongyong@huawei.com wrote:
> From: Hongyong Zang <zanghongyong@huawei.com>
>
> When a guest boots with ioeventfd, an error (by gdb) occurs:
> Program received signal SIGSEGV, Segmentation fault.
> 0x00000000006009cc in setup_ioeventfds (s=0x171dc40)
>     at /home/louzhengwei/git_source/qemu-kvm/hw/ivshmem.c:363
> The bug is due to accessing s->peers which is NULL.
> [...]
2011/11/30 Zang Hongyong <zanghongyong@huawei.com>:
> Can this bug fix patch be applied yet?

Sorry for not replying yet. I'll test your patch within the next day.

> With this bug, guest os cannot successfully boot with ioeventfd.
> Thus the new PIO DoorBell patch cannot be posted.

Well, you can certainly post the new patch; just clarify that it's
dependent on this patch.

Sincerely,
Cam

> [...]
2011/11/30 Cam Macdonell <cam@cs.ualberta.ca>:
> 2011/11/30 Zang Hongyong <zanghongyong@huawei.com>:
>> Can this bug fix patch be applied yet?
>
> Sorry for not replying yet. I'll test your patch within the next day.

Have you confirmed the proper receipt of interrupts in the receiving guests?

I can confirm the bug occurs with ioeventfd enabled and that the patch
fixes it, but sometime after 15.1 I no longer see interrupts (MSI or
regular) being delivered in the guest.

I will bisect tomorrow.

Cam

> [...]
2011/12/2 Cam Macdonell <cam@cs.ualberta.ca>:
> Have you confirmed the proper receipt of interrupts in the receiving guests?
>
> I can confirm the bug occurs with ioeventfd enabled and that the patch
> fixes it, but sometime after 15.1 I no longer see interrupts (MSI or
> regular) being delivered in the guest.
>
> I will bisect tomorrow.

With Michael's help we debugged msi-x interrupt delivery. With that fix
in place, this patch fixes ioeventfd in ivshmem.

Cam

> [...]
Can this patch please be merged? It fixes an important regression with ioeventfd.

Thanks,
Cam

On Thu, Nov 24, 2011 at 3:05 AM, <zanghongyong@huawei.com> wrote:
> From: Hongyong Zang <zanghongyong@huawei.com>
>
> When a guest boots with ioeventfd, an error (by gdb) occurs:
> Program received signal SIGSEGV, Segmentation fault.
> 0x00000000006009cc in setup_ioeventfds (s=0x171dc40)
>     at /home/louzhengwei/git_source/qemu-kvm/hw/ivshmem.c:363
> The bug is due to accessing s->peers which is NULL.
> [...]
diff --git a/hw/ivshmem.c b/hw/ivshmem.c
index 242fbea..be26f03 100644
--- a/hw/ivshmem.c
+++ b/hw/ivshmem.c
@@ -58,7 +58,6 @@ typedef struct IVShmemState {
     CharDriverState *server_chr;
     MemoryRegion ivshmem_mmio;
 
-    pcibus_t mmio_addr;
     /* We might need to register the BAR before we actually have the memory.
      * So prepare a container MemoryRegion for the BAR immediately and
      * add a subregion when we have the memory.
@@ -346,8 +345,14 @@ static void close_guest_eventfds(IVShmemState *s, int posn)
     guest_curr_max = s->peers[posn].nb_eventfds;
 
     for (i = 0; i < guest_curr_max; i++) {
-        kvm_set_ioeventfd_mmio_long(s->peers[posn].eventfds[i],
-                    s->mmio_addr + DOORBELL, (posn << 16) | i, 0);
+        if (ivshmem_has_feature(s, IVSHMEM_IOEVENTFD)) {
+            memory_region_del_eventfd(&s->ivshmem_mmio,
+                                      DOORBELL,
+                                      4,
+                                      true,
+                                      (posn << 16) | i,
+                                      s->peers[posn].eventfds[i]);
+        }
         close(s->peers[posn].eventfds[i]);
     }
 
@@ -355,22 +360,6 @@ static void close_guest_eventfds(IVShmemState *s, int posn)
     s->peers[posn].nb_eventfds = 0;
 }
 
-static void setup_ioeventfds(IVShmemState *s) {
-
-    int i, j;
-
-    for (i = 0; i <= s->max_peer; i++) {
-        for (j = 0; j < s->peers[i].nb_eventfds; j++) {
-            memory_region_add_eventfd(&s->ivshmem_mmio,
-                                      DOORBELL,
-                                      4,
-                                      true,
-                                      (i << 16) | j,
-                                      s->peers[i].eventfds[j]);
-        }
-    }
-}
-
 /* this function increase the dynamic storage need to store data about other
  * guests */
 static void increase_dynamic_storage(IVShmemState *s, int new_min_size) {
@@ -491,10 +480,12 @@ static void ivshmem_read(void *opaque, const uint8_t * buf, int flags)
     }
 
     if (ivshmem_has_feature(s, IVSHMEM_IOEVENTFD)) {
-        if (kvm_set_ioeventfd_mmio_long(incoming_fd, s->mmio_addr + DOORBELL,
-                (incoming_posn << 16) | guest_max_eventfd, 1) < 0) {
-            fprintf(stderr, "ivshmem: ioeventfd not available\n");
-        }
+        memory_region_add_eventfd(&s->ivshmem_mmio,
+                                  DOORBELL,
+                                  4,
+                                  true,
+                                  (incoming_posn << 16) | guest_max_eventfd,
+                                  incoming_fd);
     }
 
     return;
@@ -659,10 +650,6 @@ static int pci_ivshmem_init(PCIDevice *dev)
     memory_region_init_io(&s->ivshmem_mmio, &ivshmem_mmio_ops, s,
                           "ivshmem-mmio", IVSHMEM_REG_BAR_SIZE);
 
-    if (ivshmem_has_feature(s, IVSHMEM_IOEVENTFD)) {
-        setup_ioeventfds(s);
-    }
-
     /* region for registers*/
     pci_register_bar(&s->dev, 0, PCI_BASE_ADDRESS_SPACE_MEMORY,
                      &s->ivshmem_mmio);