Message ID: 1432214398-14990-1-git-send-email-pbonzini@redhat.com
State: New
On Thu, May 21, 2015 at 03:19:58PM +0200, Paolo Bonzini wrote:
> phys_page_set_level is writing zeroes to a struct that has just been
> filled in by phys_map_node_alloc.  Instead, tell phys_map_node_alloc
> whether to fill in the page "as a leaf" or "as a non-leaf".
>
> memcpy is faster than struct assignment, which copies each bitfield
> individually.  Arguably a compiler bug, but memcpy is super-special
> cased anyway so what could go wrong?
>
> This cuts the cost of phys_page_set_level from 25% to 5% when
> booting qboot.
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  exec.c | 24 ++++++++++--------------
>  1 file changed, 10 insertions(+), 14 deletions(-)

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
On 05/21/2015 06:19 AM, Paolo Bonzini wrote:
> memcpy is faster than struct assignment, which copies each bitfield
> individually.  Arguably a compiler bug, but memcpy is super-special
> cased anyway so what could go wrong?

The compiler has the option of doing the copy either way.  Any way to
actually show that the small memcpy is faster?  That's one of those
things where I'm sure there's a cost calculation that said per member
was better.

r~
On 03/06/2015 06:30, Richard Henderson wrote:
> On 05/21/2015 06:19 AM, Paolo Bonzini wrote:
>> memcpy is faster than struct assignment, which copies each bitfield
>> individually.  Arguably a compiler bug, but memcpy is super-special
>> cased anyway so what could go wrong?
>
> The compiler has the option of doing the copy either way.  Any way to
> actually show that the small memcpy is faster?  That's one of those
> things where I'm sure there's a cost calculation that said per member
> was better.

Because the struct size is 32 bits, it's a no-brainer that the full copy
is faster.  However, SRA gets in the way and causes the struct
assignment to be compiled as two separate bitfield assignments.  Later
GCC passes don't have the means to merge them again.

I filed https://gcc.gnu.org/PR66391 about this and CCed Martin Jambor.

Paolo
On Thu, May 21, 2015 at 03:19:58PM +0200, Paolo Bonzini wrote:
> phys_page_set_level is writing zeroes to a struct that has just been
> filled in by phys_map_node_alloc.  Instead, tell phys_map_node_alloc
> whether to fill in the page "as a leaf" or "as a non-leaf".
>
> memcpy is faster than struct assignment, which copies each bitfield
> individually.  Arguably a compiler bug, but memcpy is super-special
> cased anyway so what could go wrong?
>
> This cuts the cost of phys_page_set_level from 25% to 5% when
> booting qboot.
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

This patch might also be faster for another reason: it skips an extra
loop over L2 in the leaf case.

Reviewed-by: Michael S. Tsirkin <mst@redhat.com>

> ---
>  exec.c | 24 ++++++++++--------------
>  1 file changed, 10 insertions(+), 14 deletions(-)
>
> diff --git a/exec.c b/exec.c
> index e19ab22..fc8d05d 100644
> --- a/exec.c
> +++ b/exec.c
> @@ -173,17 +173,22 @@ static void phys_map_node_reserve(PhysPageMap *map, unsigned nodes)
>      }
>  }
>
> -static uint32_t phys_map_node_alloc(PhysPageMap *map)
> +static uint32_t phys_map_node_alloc(PhysPageMap *map, bool leaf)
>  {
>      unsigned i;
>      uint32_t ret;
> +    PhysPageEntry e;
> +    PhysPageEntry *p;
>
>      ret = map->nodes_nb++;
> +    p = map->nodes[ret];
>      assert(ret != PHYS_MAP_NODE_NIL);
>      assert(ret != map->nodes_nb_alloc);
> +
> +    e.skip = leaf ? 0 : 1;
> +    e.ptr = leaf ? PHYS_SECTION_UNASSIGNED : PHYS_MAP_NODE_NIL;
>      for (i = 0; i < P_L2_SIZE; ++i) {
> -        map->nodes[ret][i].skip = 1;
> -        map->nodes[ret][i].ptr = PHYS_MAP_NODE_NIL;
> +        memcpy(&p[i], &e, sizeof(e));
>      }
>      return ret;
>  }
> @@ -193,21 +198,12 @@ static void phys_page_set_level(PhysPageMap *map, PhysPageEntry *lp,
>                                  int level)
>  {
>      PhysPageEntry *p;
> -    int i;
>      hwaddr step = (hwaddr)1 << (level * P_L2_BITS);
>
>      if (lp->skip && lp->ptr == PHYS_MAP_NODE_NIL) {
> -        lp->ptr = phys_map_node_alloc(map);
> -        p = map->nodes[lp->ptr];
> -        if (level == 0) {
> -            for (i = 0; i < P_L2_SIZE; i++) {
> -                p[i].skip = 0;
> -                p[i].ptr = PHYS_SECTION_UNASSIGNED;
> -            }
> -        }
> -    } else {
> -        p = map->nodes[lp->ptr];
> +        lp->ptr = phys_map_node_alloc(map, level == 0);
>      }
> +    p = map->nodes[lp->ptr];
>      lp = &p[(*index >> (level * P_L2_BITS)) & (P_L2_SIZE - 1)];
>
>      while (*nb && lp < &p[P_L2_SIZE]) {
> --
> 2.4.1
diff --git a/exec.c b/exec.c
index e19ab22..fc8d05d 100644
--- a/exec.c
+++ b/exec.c
@@ -173,17 +173,22 @@ static void phys_map_node_reserve(PhysPageMap *map, unsigned nodes)
     }
 }
 
-static uint32_t phys_map_node_alloc(PhysPageMap *map)
+static uint32_t phys_map_node_alloc(PhysPageMap *map, bool leaf)
 {
     unsigned i;
     uint32_t ret;
+    PhysPageEntry e;
+    PhysPageEntry *p;
 
     ret = map->nodes_nb++;
+    p = map->nodes[ret];
     assert(ret != PHYS_MAP_NODE_NIL);
     assert(ret != map->nodes_nb_alloc);
+
+    e.skip = leaf ? 0 : 1;
+    e.ptr = leaf ? PHYS_SECTION_UNASSIGNED : PHYS_MAP_NODE_NIL;
     for (i = 0; i < P_L2_SIZE; ++i) {
-        map->nodes[ret][i].skip = 1;
-        map->nodes[ret][i].ptr = PHYS_MAP_NODE_NIL;
+        memcpy(&p[i], &e, sizeof(e));
     }
     return ret;
 }
@@ -193,21 +198,12 @@ static void phys_page_set_level(PhysPageMap *map, PhysPageEntry *lp,
                                 int level)
 {
     PhysPageEntry *p;
-    int i;
     hwaddr step = (hwaddr)1 << (level * P_L2_BITS);
 
     if (lp->skip && lp->ptr == PHYS_MAP_NODE_NIL) {
-        lp->ptr = phys_map_node_alloc(map);
-        p = map->nodes[lp->ptr];
-        if (level == 0) {
-            for (i = 0; i < P_L2_SIZE; i++) {
-                p[i].skip = 0;
-                p[i].ptr = PHYS_SECTION_UNASSIGNED;
-            }
-        }
-    } else {
-        p = map->nodes[lp->ptr];
+        lp->ptr = phys_map_node_alloc(map, level == 0);
     }
+    p = map->nodes[lp->ptr];
     lp = &p[(*index >> (level * P_L2_BITS)) & (P_L2_SIZE - 1)];
 
     while (*nb && lp < &p[P_L2_SIZE]) {
phys_page_set_level is writing zeroes to a struct that has just been
filled in by phys_map_node_alloc.  Instead, tell phys_map_node_alloc
whether to fill in the page "as a leaf" or "as a non-leaf".

memcpy is faster than struct assignment, which copies each bitfield
individually.  Arguably a compiler bug, but memcpy is super-special
cased anyway so what could go wrong?

This cuts the cost of phys_page_set_level from 25% to 5% when
booting qboot.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 exec.c | 24 ++++++++++--------------
 1 file changed, 10 insertions(+), 14 deletions(-)