Patchwork PING^2 [libitm,PATCH] Fix bootstrap due to __always_inline in libitm

Submitter Gerald Pfeifer
Date June 1, 2013, 10:44 p.m.
Message ID <alpine.LNX.2.00.1306020026132.2019@tuna.site>
Permalink /patch/248091/
State New

Comments

Gerald Pfeifer - June 1, 2013, 10:44 p.m.
[ I could not identify who approved the original patch, but believe
  it was one of Richi, Jakub or RTH, so I hope one of you can approve
  this.  Andi acked the previous submission, but cannot approve. ]

The following patch broke bootstrap on all FreeBSD platforms, which took
me a while to track down since Andi did not originally update the ChangeLog:

   2013-03-23  Andi Kleen  <andi@my.domain.org>

        * local_atomic (__always_inline): Add.
        (__calculate_memory_order, atomic_thread_fence,
         atomic_signal_fence, test_and_set, clear, store, load,
         exchange, compare_exchange_weak, compare_exchange_strong,
         fetch_add, fetch_sub, fetch_and, fetch_or, fetch_xor):
        Add __always_inline to force inlining.

The problem is that the patch added

  #ifndef __always_inline
  #define __always_inline inline __attribute__((always_inline))
  #endif

whereas /usr/include/sys/cdefs.h on FreeBSD has the following

  #define        __always_inline __attribute__((__always_inline__))

and hence lacks the "inline" keyword.

I am fixing this by adding an explicit inline to those cases where
necessary.  I did not add it to struct members, which are implicitly
inline when defined in-class (and I believe Andi's patch may have been
a bit over-eager from that perspective).

On top of this, for this revision of the patch, I am unconditionally
defining __always_inline to get the same behavior on all platforms.

Bootstrapped and regression tested on i386-unknown-freebsd10.0,
bootstrapped on amd64-unknown-freebsd8.4 and x86_64-suse-linux.

Okay?

Gerald


2013-05-31  Gerald Pfeifer  <gerald@pfeifer.com>

	PR bootstrap/56714
	* local_atomic (__always_inline): Always define our version.
	(__calculate_memory_order): Mark inline.
	(atomic_thread_fence): Ditto.
	(atomic_signal_fence): Ditto.
	(atomic_bool::atomic_flag_test_and_set_explicit): Ditto.
	(atomic_bool::atomic_flag_clear_explicit): Ditto.
	(atomic_bool::atomic_flag_test_and_set): Ditto.
	(atomic_bool::atomic_flag_clear): Ditto.
Richard Henderson - June 3, 2013, 2:35 p.m.
On 06/01/2013 03:44 PM, Gerald Pfeifer wrote:
> 2013-05-31  Gerald Pfeifer  <gerald@pfeifer.com>
> 
> 	PR bootstrap/56714
> 	* local_atomic (__always_inline): Always define our version.
> 	(__calculate_memory_order): Mark inline.
> 	(atomic_thread_fence): Ditto.
> 	(atomic_signal_fence): Ditto.
> 	(atomic_bool::atomic_flag_test_and_set_explicit): Ditto.
> 	(atomic_bool::atomic_flag_clear_explicit): Ditto.
> 	(atomic_bool::atomic_flag_test_and_set): Ditto.
> 	(atomic_bool::atomic_flag_clear): Ditto.

Ok.


r~

Patch

Index: libitm/local_atomic
===================================================================
--- libitm/local_atomic	(revision 199585)
+++ libitm/local_atomic	(working copy)
@@ -41,9 +41,8 @@ 
 #ifndef _GLIBCXX_ATOMIC
 #define _GLIBCXX_ATOMIC 1
 
-#ifndef __always_inline
-#define __always_inline inline __attribute__((always_inline))
-#endif
+#undef  __always_inline
+#define __always_inline __attribute__((always_inline))
 
 // #pragma GCC system_header
 
@@ -75,7 +74,7 @@ 
       memory_order_seq_cst
     } memory_order;
 
-  __always_inline memory_order
+  inline __always_inline memory_order
   __calculate_memory_order(memory_order __m) noexcept
   {
     const bool __cond1 = __m == memory_order_release;
@@ -85,13 +84,13 @@ 
     return __mo2;
   }
 
-  __always_inline void
+  inline __always_inline void
   atomic_thread_fence(memory_order __m) noexcept
   {
     __atomic_thread_fence (__m);
   }
 
-  __always_inline void
+  inline __always_inline void
   atomic_signal_fence(memory_order __m) noexcept
   {
     __atomic_thread_fence (__m);
@@ -1545,38 +1544,38 @@ 
 
 
   // Function definitions, atomic_flag operations.
-  __always_inline bool
+  inline __always_inline bool
   atomic_flag_test_and_set_explicit(atomic_flag* __a,
 				    memory_order __m) noexcept
   { return __a->test_and_set(__m); }
 
-  __always_inline bool
+  inline __always_inline bool
   atomic_flag_test_and_set_explicit(volatile atomic_flag* __a,
 				    memory_order __m) noexcept
   { return __a->test_and_set(__m); }
 
-  __always_inline void
+  inline __always_inline void
   atomic_flag_clear_explicit(atomic_flag* __a, memory_order __m) noexcept
   { __a->clear(__m); }
 
-  __always_inline void
+  inline __always_inline void
   atomic_flag_clear_explicit(volatile atomic_flag* __a,
 			     memory_order __m) noexcept
   { __a->clear(__m); }
 
-  __always_inline bool
+  inline __always_inline bool
   atomic_flag_test_and_set(atomic_flag* __a) noexcept
   { return atomic_flag_test_and_set_explicit(__a, memory_order_seq_cst); }
 
-  __always_inline bool
+  inline __always_inline bool
   atomic_flag_test_and_set(volatile atomic_flag* __a) noexcept
   { return atomic_flag_test_and_set_explicit(__a, memory_order_seq_cst); }
 
-  __always_inline void
+  inline __always_inline void
   atomic_flag_clear(atomic_flag* __a) noexcept
   { atomic_flag_clear_explicit(__a, memory_order_seq_cst); }
 
-  __always_inline void
+  inline __always_inline void
   atomic_flag_clear(volatile atomic_flag* __a) noexcept
   { atomic_flag_clear_explicit(__a, memory_order_seq_cst); }