[committed] libgomp: Fix up two non-GOMP_USE_ALIGNED_WORK_SHARES related issues [PR105358]

Message ID YmecaFysPuhk1air@tucnak

Commit Message

Jakub Jelinek April 26, 2022, 7:16 a.m. UTC
Hi!

Last fall I changed struct gomp_work_share so that it doesn't have an
__attribute__((aligned (64))) lock member in the middle unless the target has
a non-emulated aligned allocator; otherwise it just makes sure the first and
second halves are 64 bytes apart for cache line reasons, but it doesn't make
the struct itself 64-byte aligned, so normal allocators can be used for it.
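For illustration, here is a rough standalone sketch of the two layouts (the
member names and sizes are made up for the example; this is not the actual
libgomp.h declaration):

/* Hypothetical stand-in for struct gomp_work_share, illustration only.  */
typedef int mutex_sketch;

struct work_share_sketch
{
  long chunk_size;			/* stand-in for the scheduling state */
#ifdef GOMP_USE_ALIGNED_WORK_SHARES
  /* Non-emulated aligned allocator available: the lock (and therefore the
     whole struct) is 64-byte aligned.  */
  mutex_sketch lock __attribute__ ((aligned (64)));
#else
  /* Otherwise only keep the two halves 64 bytes apart; the struct keeps its
     natural, smaller alignment, so a plain allocator can be used.  */
  char pad[64 - sizeof (long)];
  mutex_sketch lock;
#endif
  unsigned inline_ordered_team_ids[8];	/* tail array reusing tail padding */
};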

When the struct isn't 64-byte aligned, the amount of tail padding decreases
significantly, to 0 or 4 bytes or so.  The library uses that tail padding for
the ordered_team_ids array (an array of unsigned ints) and/or for the memory
for lastprivate conditional temporaries (the latter wants guaranteed long long
alignment).
The problem on ia32 darwin9 is that while the struct contains long long
members, long long is only 4-byte aligned inside structs, whereas
__alignof__(long long) is 8.  That breaks gomp_init_work_share, which
currently assumes that if offsetof (struct gomp_work_share,
inline_ordered_team_ids) is long long aligned, then that tail array will be
aligned at runtime and so no extra memory for dynamic realignment is needed
(which is false when the whole struct doesn't have long long alignment).
It also causes a problem in the remaining hunks, where we compute
INLINE_ORDERED_TEAM_IDS_OFF as the above offsetof rounded up to a long long
boundary and subtract it from sizeof (struct gomp_work_share).  When unlucky,
sizeof (struct gomp_work_share) isn't a multiple of 8 and
INLINE_ORDERED_TEAM_IDS_OFF is 4 bytes larger than it, and as the subtraction
is done in size_t, we end up with (size_t) -4, so the size comparison doesn't
really work.
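The wraparound can be reproduced with made-up numbers in a standalone toy
program (the sizes below are hypothetical, not the real struct
gomp_work_share values):

#include <stdio.h>
#include <stddef.h>

int
main (void)
{
  /* Pretend the struct is only 4-byte aligned, its size is 132 (not a
     multiple of 8) and inline_ordered_team_ids also starts at offset 132.  */
  size_t struct_size = 132;
  size_t inline_off = 132;

  /* Same shape as the INLINE_ORDERED_TEAM_IDS_OFF computation: round the
     offset up to an 8-byte (long long) boundary, here 136, i.e. 4 bytes past
     the end of the struct.  */
  size_t off = (inline_off + __alignof__ (long long) - 1)
	       & ~(size_t) (__alignof__ (long long) - 1);

  size_t avail = struct_size - off;	/* 132 - 136 wraps to (size_t) -4 */
  printf ("avail = %zu\n", avail);	/* huge value, so a "size > avail"
					   check never takes the out-of-line
					   allocation path */
  return 0;
}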

The fixes add additional conditions to make it work properly, but all of them
should be evaluated at compile time when optimizing, so they shouldn't slow
anything down.
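For instance, a check of the same shape as the new conditions (with a toy
struct standing in for struct gomp_work_share, not the libgomp code itself)
shows why: the added subconditions only involve sizeof and __alignof__, which
are compile-time constants, so GCC folds them away:

/* Toy example only; struct toy stands in for struct gomp_work_share.  */
struct toy { long long x; int y; };

int
needs_out_of_line (unsigned long size)
{
  /* The first two conditions fold to constants at compile time, so on
     targets where they are false this is just the original size check.  */
  if (sizeof (struct toy) <= 8
      || __alignof__ (struct toy) < __alignof__ (long long)
      || size > sizeof (struct toy) - 8)
    return 1;
  return 0;
}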

Bootstrapped/regtested on x86_64-linux and i686-linux; in the PR Iain said
he has tested it on the affected targets.  Committed to trunk.

2022-04-26  Jakub Jelinek  <jakub@redhat.com>

	PR libgomp/105358
	* work.c (gomp_init_work_share): Don't mask off adjustment for
	dynamic long long realignment if struct gomp_work_share has smaller
	alignof than long long.
	* loop.c (GOMP_loop_start): Don't use inline_ordered_team_ids if
	struct gomp_work_share has smaller alignof than long long or if
	sizeof (struct gomp_work_share) is smaller than
	INLINE_ORDERED_TEAM_IDS_OFF.
	* loop_ull.c (GOMP_loop_ull_start): Likewise.
	* sections.c (GOMP_sections2_start): Likewise.


	Jakub

Patch

--- libgomp/work.c.jj	2022-01-11 23:11:23.944268316 +0100
+++ libgomp/work.c	2022-04-25 13:42:24.885500128 +0200
@@ -113,7 +113,9 @@  gomp_init_work_share (struct gomp_work_s
 	  size_t o = nthreads * sizeof (*ws->ordered_team_ids);
 	  o += __alignof__ (long long) - 1;
 	  if ((offsetof (struct gomp_work_share, inline_ordered_team_ids)
-	       & (__alignof__ (long long) - 1)) == 0)
+	       & (__alignof__ (long long) - 1)) == 0
+	      && __alignof__ (struct gomp_work_share)
+		 >= __alignof__ (long long))
 	    o &= ~(__alignof__ (long long) - 1);
 	  ordered += o - 1;
 	}
--- libgomp/loop.c.jj	2022-01-11 23:11:23.890269075 +0100
+++ libgomp/loop.c	2022-04-25 13:39:24.266009817 +0200
@@ -270,8 +270,11 @@  GOMP_loop_start (long start, long end, l
 #define INLINE_ORDERED_TEAM_IDS_OFF \
   ((offsetof (struct gomp_work_share, inline_ordered_team_ids)		\
     + __alignof__ (long long) - 1) & ~(__alignof__ (long long) - 1))
-	  if (size > (sizeof (struct gomp_work_share)
-		      - INLINE_ORDERED_TEAM_IDS_OFF))
+	  if (sizeof (struct gomp_work_share)
+	      <= INLINE_ORDERED_TEAM_IDS_OFF
+	      || __alignof__ (struct gomp_work_share) < __alignof__ (long long)
+	      || size > (sizeof (struct gomp_work_share)
+			- INLINE_ORDERED_TEAM_IDS_OFF))
 	    *mem
 	      = (void *) (thr->ts.work_share->ordered_team_ids
 			  = gomp_malloc_cleared (size));
--- libgomp/loop_ull.c.jj	2022-01-11 23:11:23.890269075 +0100
+++ libgomp/loop_ull.c	2022-04-25 13:40:49.221829365 +0200
@@ -269,8 +269,11 @@  GOMP_loop_ull_start (bool up, gomp_ull s
 #define INLINE_ORDERED_TEAM_IDS_OFF \
   ((offsetof (struct gomp_work_share, inline_ordered_team_ids)		\
     + __alignof__ (long long) - 1) & ~(__alignof__ (long long) - 1))
-	  if (size > (sizeof (struct gomp_work_share)
-		      - INLINE_ORDERED_TEAM_IDS_OFF))
+	  if (sizeof (struct gomp_work_share)
+	      <= INLINE_ORDERED_TEAM_IDS_OFF
+	      || __alignof__ (struct gomp_work_share) < __alignof__ (long long)
+	      || size > (sizeof (struct gomp_work_share)
+			- INLINE_ORDERED_TEAM_IDS_OFF))
 	    *mem
 	      = (void *) (thr->ts.work_share->ordered_team_ids
 			  = gomp_malloc_cleared (size));
--- libgomp/sections.c.jj	2022-01-11 23:11:23.913268752 +0100
+++ libgomp/sections.c	2022-04-25 13:41:12.423506981 +0200
@@ -121,8 +121,11 @@  GOMP_sections2_start (unsigned count, ui
 #define INLINE_ORDERED_TEAM_IDS_OFF \
   ((offsetof (struct gomp_work_share, inline_ordered_team_ids)		\
     + __alignof__ (long long) - 1) & ~(__alignof__ (long long) - 1))
-	  if (size > (sizeof (struct gomp_work_share)
-		      - INLINE_ORDERED_TEAM_IDS_OFF))
+	  if (sizeof (struct gomp_work_share)
+	      <= INLINE_ORDERED_TEAM_IDS_OFF
+	      || __alignof__ (struct gomp_work_share) < __alignof__ (long long)
+	      || size > (sizeof (struct gomp_work_share)
+			- INLINE_ORDERED_TEAM_IDS_OFF))
 	    *mem
 	      = (void *) (thr->ts.work_share->ordered_team_ids
 			  = gomp_malloc_cleared (size));