Patchwork [libitm] Fix bootstrap due to __always_inline in libitm

Submitter Gerald Pfeifer
Date April 1, 2013, midnight
Message ID <alpine.LNX.2.00.1303312332190.4333@tuna.site>
Permalink /patch/232654/
State New

Comments

Gerald Pfeifer - April 1, 2013, midnight
Andi's patch broke bootstrap on all FreeBSD platforms, which took me
a bit to realize since he did not update the ChangeLog:

   2013-03-23  Andi Kleen  <andi@my.domain.org>

        * local_atomic (__always_inline): Add.
        (__calculate_memory_order, atomic_thread_fence,
         atomic_signal_fence, test_and_set, clear, store, load,
         exchange, compare_exchange_weak, compare_exchange_strong,
         fetch_add, fetch_sub, fetch_and, fetch_or, fetch_xor):
        Add __always_inline to force inlining.

The problem is that he added the following to local_atomic:

  #ifndef __always_inline
  #define __always_inline inline __attribute__((always_inline))
  #endif

whereas /usr/include/sys/cdefs.h on FreeBSD has the following

  #define        __always_inline __attribute__((__always_inline__))

and hence misses the inline (plus libitm/common.h already has
ALWAYS_INLINE for that purpose).

I am fixing this by adding an explicit inline to those cases where
necessary.  I did not add it to struct members, which are considered
inline by default (and I believe Andi's patch may have been a bit
over-eager from that perspective).

Bootstrapped and regression tested on i386-unknown-freebsd10.0.

Okay?

Gerald

2013-03-31  Gerald Pfeifer  <gerald@pfeifer.com>

	PR bootstrap/56714
	* local_atomic (__calculate_memory_order): Mark inline.
	(atomic_thread_fence): Ditto.
	(atomic_signal_fence): Ditto.
	(atomic_bool::atomic_flag_test_and_set_explicit): Ditto.
	(atomic_bool::atomic_flag_clear_explicit): Ditto.
	(atomic_bool::atomic_flag_test_and_set): Ditto.
	(atomic_bool::atomic_flag_clear): Ditto.
Paolo Carlini - April 1, 2013, 10:47 a.m.
Hi,

On 04/01/2013 02:00 AM, Gerald Pfeifer wrote:
> Andi's patch broke bootstrap on all FreeBSD platforms, which took me
> a bit to realize since he did not update the ChangeLog:
>
>     2013-03-23  Andi Kleen  <andi@my.domain.org>
>
>          * local_atomic (__always_inline): Add.
>          (__calculate_memory_order, atomic_thread_fence,
>           atomic_signal_fence, test_and_set, clear, store, load,
>           exchange, compare_exchange_weak, compare_exchange_strong,
>           fetch_add, fetch_sub, fetch_and, fetch_or, fetch_xor):
>          Add __always_inline to force inlining.
>
> The problem is that he added the following to local_atomic:
>
>    #ifndef __always_inline
>    #define __always_inline inline __attribute__((always_inline))
>    #endif
>
> whereas /usr/include/sys/cdefs.h on FreeBSD has the following
>
>    #define        __always_inline __attribute__((__always_inline__))
>
> and hence misses the inline (plus libitm/common.h already has
> ALWAYS_INLINE for that purpose).
At first blush it seems to me that we should consistently use ALWAYS_INLINE 
in this file too.
> I am fixing this by adding an explicit inline to those cases where
> necessary.  I did not add it to struct members, which are considered
> inline by default (and I believe Andi's patch may have been a bit
> over-eager from that perspective).
But note that inline isn't the same as always_inline...

Paolo.
Gerald Pfeifer - May 17, 2013, 12:08 a.m.
Ping.  This will fix bootstrap on FreeBSD (and it seems NetBSD).

(Paolo provided some comments, though this looks like the simplest
patch to fix the issue.)

Gerald

On Mon, 1 Apr 2013, Gerald Pfeifer wrote:
> Andi's patch broke bootstrap on all FreeBSD platforms, which took me
> a bit to realize since he did not update the ChangeLog:
> 
>    2013-03-23  Andi Kleen  <andi@my.domain.org>
> 
>         * local_atomic (__always_inline): Add.
>         (__calculate_memory_order, atomic_thread_fence,
>          atomic_signal_fence, test_and_set, clear, store, load,
>          exchange, compare_exchange_weak, compare_exchange_strong,
>          fetch_add, fetch_sub, fetch_and, fetch_or, fetch_xor):
>         Add __always_inline to force inlining.
> 
> The problem is that he added the following to local_atomic:
> 
>   #ifndef __always_inline
>   #define __always_inline inline __attribute__((always_inline))
>   #endif
> 
> whereas /usr/include/sys/cdefs.h on FreeBSD has the following
> 
>   #define        __always_inline __attribute__((__always_inline__))
> 
> and hence misses the inline (plus libitm/common.h already has
> ALWAYS_INLINE for that purpose).
> 
> I am fixing this by adding an explicit inline to those cases where
> necessary.  I did not add it to struct members, which are considered
> inline by default (and I believe Andi's patch may have been a bit
> over-eager from that perspective).
> 
> Bootstrapped and regression tested on i386-unknown-freebsd10.0.
> 
> Okay?
> 
> Gerald
> 
> 2013-03-31  Gerald Pfeifer  <gerald@pfeifer.com>
> 
> 	PR bootstrap/56714
> 	* local_atomic (__calculate_memory_order): Mark inline.
> 	(atomic_thread_fence): Ditto.
> 	(atomic_signal_fence): Ditto.
> 	(atomic_bool::atomic_flag_test_and_set_explicit): Ditto.
> 	(atomic_bool::atomic_flag_clear_explicit): Ditto.
> 	(atomic_bool::atomic_flag_test_and_set): Ditto.
> 	(atomic_bool::atomic_flag_clear): Ditto.
> 
Andi Kleen - May 17, 2013, 1:09 a.m.
On Fri, May 17, 2013 at 02:08:27AM +0200, Gerald Pfeifer wrote:
> Ping.  This will fix bootstrap on FreeBSD (and it seems NetBSD).
> 
> (Paolo provided some comments, though this looks like the simplest
> patch to fix the issue.)

It's ok for me, but I cannot approve it.
-Andi

Patch

Index: local_atomic
===================================================================
--- local_atomic	(revision 197262)
+++ local_atomic	(working copy)
@@ -75,7 +75,7 @@ 
       memory_order_seq_cst
     } memory_order;
 
-  __always_inline memory_order
+  inline __always_inline memory_order
   __calculate_memory_order(memory_order __m) noexcept
   {
     const bool __cond1 = __m == memory_order_release;
@@ -85,13 +85,13 @@ 
     return __mo2;
   }
 
-  __always_inline void
+  inline __always_inline void
   atomic_thread_fence(memory_order __m) noexcept
   {
     __atomic_thread_fence (__m);
   }
 
-  __always_inline void
+  inline __always_inline void
   atomic_signal_fence(memory_order __m) noexcept
   {
     __atomic_thread_fence (__m);
@@ -1545,38 +1545,38 @@ 
 
 
   // Function definitions, atomic_flag operations.
-  __always_inline bool
+  inline __always_inline bool
   atomic_flag_test_and_set_explicit(atomic_flag* __a,
 				    memory_order __m) noexcept
   { return __a->test_and_set(__m); }
 
-  __always_inline bool
+  inline __always_inline bool
   atomic_flag_test_and_set_explicit(volatile atomic_flag* __a,
 				    memory_order __m) noexcept
   { return __a->test_and_set(__m); }
 
-  __always_inline void
+  inline __always_inline void
   atomic_flag_clear_explicit(atomic_flag* __a, memory_order __m) noexcept
   { __a->clear(__m); }
 
-  __always_inline void
+  inline __always_inline void
   atomic_flag_clear_explicit(volatile atomic_flag* __a,
 			     memory_order __m) noexcept
   { __a->clear(__m); }
 
-  __always_inline bool
+  inline __always_inline bool
   atomic_flag_test_and_set(atomic_flag* __a) noexcept
   { return atomic_flag_test_and_set_explicit(__a, memory_order_seq_cst); }
 
-  __always_inline bool
+  inline __always_inline bool
   atomic_flag_test_and_set(volatile atomic_flag* __a) noexcept
   { return atomic_flag_test_and_set_explicit(__a, memory_order_seq_cst); }
 
-  __always_inline void
+  inline __always_inline void
   atomic_flag_clear(atomic_flag* __a) noexcept
   { atomic_flag_clear_explicit(__a, memory_order_seq_cst); }
 
-  __always_inline void
+  inline __always_inline void
   atomic_flag_clear(volatile atomic_flag* __a) noexcept
   { atomic_flag_clear_explicit(__a, memory_order_seq_cst); }