Patchwork [trans-mem] Add gl_wt TM method.

Submitter Torvald Riegel
Date Aug. 29, 2011, 10:33 p.m.
Message ID <1314657235.14756.1465.camel@triegel.csb>
Permalink /patch/112162/
State New

Comments

Torvald Riegel - Aug. 29, 2011, 10:33 p.m.
The attached patches are several changes required for a new TM method,
gl_wt (global lock, write-through), which is added by the last patch.

patch1: Add TM-method-specific begin code. All time-based TMs need to
know at which point in time they start working. Initializing lazily on
the first txnal load or store would add unnecessary overhead.

patch2: A small fix for serial mode. This change should have been
included in the previous renaming of the serial mode dispatches.

patch3: We can't free transaction-local memory during nested commits
unless we also go through the undo and redo logs and remove all
references to the to-be-freed memory (otherwise, we'll undo/redo to
privatized memory...). I guess going through the logs is higher overhead
than just keeping the allocations around. If we see transactions in
practice that have large malloc/free cycles embedded in nested txns that
are not flattened, we can still add special handling for this case.

patch4: We sometimes need to re-initialize method groups (e.g., to avoid
counter overflow). TM methods can request this using a special restart
reason.

patch5: The undo log is used for both thread-local and shared data
(which are separate). Maintaining two undo logs does not provide any
advantages. However, we have to perform undo actions to shared data
before dispatch-specific rollback (e.g., where we release locks).

patch6: Add support for quiescence-based privatization safety (using
gtm_thread::shared_state as the value of the current (snapshot) time of
a transaction). Currently, this just spins, but it should eventually be
changed to block using cond vars / futexes if necessary. This requires
more thought and tuning, however, as it should probably be
integrated with the serial lock (and it poses similar challenges, such
as having to minimize the number of wait/wakeup calls, number of cache
misses, etc.). Therefore, this should be addressed in a future patch.
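A minimal, single-file sketch of the quiescence scheme (the per-thread `shared_state` snapshot time and the ~0 "inactive" marker follow the patches; the container and function name are invented for illustration):

```cpp
#include <atomic>
#include <vector>

// Each thread publishes its current snapshot time; ~0 marks an inactive
// transaction (as the serial lock does in the patches).
struct thread_rec
{
  std::atomic<unsigned long> shared_state;
  thread_rec() : shared_state(~0UL) { }
};

// Spin until every other thread's snapshot time has reached our commit
// time priv_time; only then may privatizing actions (e.g., free()) run.
// The real code should eventually block via cond vars / futexes instead
// of spinning.
inline void wait_for_quiescence(const std::vector<thread_rec*>& threads,
                                const thread_rec* self,
                                unsigned long priv_time)
{
  for (thread_rec* t : threads)
    {
      if (t == self)
        continue;
      while (t->shared_state.load() < priv_time)
        { /* cpu_relax() in the real code */ }
    }
}
```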

patch7: Finally, the new TM method, gl_wt (global lock, write-through).
This is a simple algorithm that uses a global versioned lock (aka
ownership record or orec) together with write-through / undolog-style
txnal writes. It has a lot of similarities to undolog-style TM methods
that use several locks (e.g., privatization safety has to be ensured),
but has lower overhead. If update txns are frequent, it obviously won't
scale. With the current code base, gl_wt performs better than
serialirr_onwrite, but probably mostly due to spinning and restarting
when the global lock is acquired instead of falling back to heavyweight
waiting via futex wait/wakeup calls.
gl_wt is in the globallock method group, to which at least one more
write-back, value-based-validation TM method will be added.
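The versioned lock itself is a single word whose top bit is the lock flag and whose remaining bits are the version number / timestamp; the helpers below restate the `gl_mg` encoding from the last patch as a stand-alone sketch:

```cpp
#include <cstdint>

typedef uintptr_t gtm_word;

// Top bit of the word is the lock flag; the remaining bits are the
// version number / timestamp (mirrors gl_mg in method-gl.cc).
static const gtm_word LOCK_BIT = (~(gtm_word)0 >> 1) + 1;
// ~0 is reserved (it marks inactive transactions in shared_state), so
// the usable version range stops short of it.
static const gtm_word VERSION_MAX = (~(gtm_word)0 >> 1) - 1;

static inline bool is_locked(gtm_word l)        { return (l & LOCK_BIT) != 0; }
static inline gtm_word set_locked(gtm_word l)   { return l | LOCK_BIT; }
static inline gtm_word clear_locked(gtm_word l) { return l & ~LOCK_BIT; }
```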

OK for branch?
commit 5103c0b0ef7e8c8120bab19ff3a69245aa722435
Author: Torvald Riegel <triegel@redhat.com>
Date:   Wed Aug 24 15:36:14 2011 +0200

    Add support for TM-method-specific begin code.
    
    	* libitm_i.h (GTM::gtm_restart_reason): Re-arrange and clean up
    	declarations.
    	* dispatch.h (GTM::abi_dispatch::begin_or_restart): New.
    	* method-serial.cc: Implement begin_or_restart().
    	* beginend.cc (GTM::gtm_thread::begin_transaction): Call
    	dispatch-specific begin_or_restart().
    	(GTM::gtm_thread::restart): Same.
commit d2a4bff483f4eeab6eb6d192c01d41a0cbf4a7af
Author: Torvald Riegel <triegel@redhat.com>
Date:   Thu Aug 25 12:16:43 2011 +0200

    Fixed gtm_thread::serialirr_mode to actually use serialirr, not serial.
    
    	* method-serial.cc (GTM::gtm_thread::serialirr_mode): Fixed: Use
    	serial-irrevocable dispatch, not serial.

diff --git a/libitm/method-serial.cc b/libitm/method-serial.cc
index 133b964..ac9005d 100644
--- a/libitm/method-serial.cc
+++ b/libitm/method-serial.cc
@@ -263,7 +263,7 @@ GTM::gtm_thread::serialirr_mode ()
   else
     {
       this->state |= (STATE_SERIAL | STATE_IRREVOCABLE);
-      set_abi_disp (dispatch_serial ());
+      set_abi_disp (dispatch_serialirr ());
     }
 }
commit 2c9e18092e7feb2125cff69d2e3b6dede8cdb75c
Author: Torvald Riegel <triegel@redhat.com>
Date:   Fri Aug 26 12:45:36 2011 +0200

    Do not free transaction-local memory when committing a nested transaction.
    
    	* alloc.cc (commit_allocations_2): Do not free transaction-local
    	memory when committing a nested transaction.

diff --git a/libitm/alloc.cc b/libitm/alloc.cc
index 810d1d5..523ccbf 100644
--- a/libitm/alloc.cc
+++ b/libitm/alloc.cc
@@ -81,22 +81,16 @@ commit_allocations_2 (uintptr_t key, gtm_alloc_action *a, void *data)
         }
       else
         {
-          // Eliminate a parent allocation if it matches this memory release,
-          // otherwise just add it to the parent.
+          // ??? We could eliminate a parent allocation that matches this
+          // memory release, if we had support for removing all accesses
+          // to this allocation from the transaction's undo and redo logs
+          // (otherwise, the parent transaction's undo or redo might write to
+          // data that is already shared again because of calling free()).
+          // We don't have this support currently, and the benefit of this
+          // optimization is unknown, so just add it to the parent.
           gtm_alloc_action* a_parent;
-          aa_tree<uintptr_t, gtm_alloc_action>::node_ptr node_ptr =
-              cb_data->parent->remove(key, &a_parent);
-          if (node_ptr)
-            {
-              assert(a_parent->allocated);
-              a_parent->free_fn(ptr);
-              delete node_ptr;
-            }
-          else
-            {
-              a_parent = cb_data->parent->insert(key);
-              *a_parent = *a;
-            }
+          a_parent = cb_data->parent->insert(key);
+          *a_parent = *a;
         }
     }
 }
commit 803319441f4cf80bf9d3447f8b46d3a9c0c0031a
Author: Torvald Riegel <triegel@redhat.com>
Date:   Fri Aug 26 13:06:37 2011 +0200

    Handle re-initialization of the current method group.
    
    	* retry.cc (GTM::gtm_thread::decide_retry_strategy): Handle
    	re-initialization of the current method group.
    	* libitm_i.h (GTM::gtm_restart_reason): Add restart reason for this.

diff --git a/libitm/libitm_i.h b/libitm/libitm_i.h
index 2e1913a..7fb02f9 100644
--- a/libitm/libitm_i.h
+++ b/libitm/libitm_i.h
@@ -67,6 +67,7 @@ enum gtm_restart_reason
   RESTART_SERIAL_IRR,
   RESTART_NOT_READONLY,
   RESTART_CLOSED_NESTING,
+  RESTART_INIT_METHOD_GROUP,
   NUM_RESTARTS
 };
 
diff --git a/libitm/retry.cc b/libitm/retry.cc
index 630ca1a..6fc4a38 100644
--- a/libitm/retry.cc
+++ b/libitm/retry.cc
@@ -40,6 +40,45 @@ GTM::gtm_thread::decide_retry_strategy (gtm_restart_reason r)
   this->restart_reason[r]++;
   this->restart_total++;
 
+  if (r == RESTART_INIT_METHOD_GROUP)
+    {
+      // A re-initializations of the method group has been requested. Switch
+      // to serial mode, initialize, and resume normal operation.
+      if ((state & STATE_SERIAL) == 0)
+        {
+          // We have to eventually re-init the method group. Therefore,
+          // we cannot just upgrade to a write lock here because this could
+          // fail forever when other transactions execute in serial mode.
+          // However, giving up the read lock then means that a change of the
+          // method group could happen in-between, so check that we're not
+          // re-initializing without a need.
+          // ??? Note that we can still re-initialize too often, but avoiding
+          // that would increase code complexity, which seems unnecessary
+          // given that re-inits should be very infrequent.
+          serial_lock.read_unlock(this);
+          serial_lock.write_lock();
+          if (disp->get_method_group() == default_dispatch->get_method_group())
+            {
+              // Still the same method group.
+              disp->get_method_group()->fini();
+              disp->get_method_group()->init();
+            }
+          serial_lock.write_unlock();
+          serial_lock.read_lock(this);
+          if (disp->get_method_group() != default_dispatch->get_method_group())
+            {
+              disp = default_dispatch;
+              set_abi_disp(disp);
+            }
+        }
+      else
+        {
+          // We are a serial transaction already, which makes things simple.
+          disp->get_method_group()->fini();
+          disp->get_method_group()->init();
+        }
+    }
+
   bool retry_irr = (r == RESTART_SERIAL_IRR);
   bool retry_serial = (retry_irr || this->restart_total > 100);
commit a4eef826c3b8e83e65ab239f8a3f007c0d655a2e
Author: Torvald Riegel <triegel@redhat.com>
Date:   Fri Aug 26 13:53:42 2011 +0200

    Undo log is used for both thread-local and shared data.
    
    	* libitm_i.h: Renamed gtm_local_undo to gtm_undolog_entry.
    	(GTM::gtm_thread): Renamed local_undo to undolog. Renamed
    	undolog-related member functions from *_local to *_undolog.
    	* local.cc (gtm_thread::commit_undolog): Same.
    	* beginend.cc (GTM::gtm_thread::trycommit): Same.
    	(GTM::gtm_thread::rollback): Roll back undolog before
    	dispatch-specific rollback.

diff --git a/libitm/beginend.cc b/libitm/beginend.cc
index 1770dad..5dd7926 100644
--- a/libitm/beginend.cc
+++ b/libitm/beginend.cc
@@ -292,7 +292,7 @@ GTM::gtm_transaction_cp::save(gtm_thread* tx)
 {
   // Save everything that we might have to restore on restarts or aborts.
   jb = tx->jb;
-  local_undo_size = tx->local_undo.size();
+  undolog_size = tx->undolog.size();
   memcpy(&alloc_actions, &tx->alloc_actions, sizeof(alloc_actions));
   user_actions_size = tx->user_actions.size();
   id = tx->id;
@@ -320,9 +320,16 @@ GTM::gtm_transaction_cp::commit(gtm_thread* tx)
 void
 GTM::gtm_thread::rollback (gtm_transaction_cp *cp)
 {
+  // The undo log is special in that it used for both thread-local and shared
+  // data. Because of the latter, we have to roll it back before any
+  // dispatch-specific rollback (which handles synchronization with other
+  // transactions).
+  rollback_undolog (cp ? cp->undolog_size : 0);
+
+  // Perform dispatch-specific rollback.
   abi_disp()->rollback (cp);
 
-  rollback_local (cp ? cp->local_undo_size : 0);
+  // Roll back all actions that are supposed to happen around the transaction.
   rollback_user_actions (cp ? cp->user_actions_size : 0);
   commit_allocations (true, (cp ? &cp->alloc_actions : 0));
   revert_cpp_exceptions (cp);
@@ -436,7 +443,9 @@ GTM::gtm_thread::trycommit ()
   // Commit of an outermost transaction.
   if (abi_disp()->trycommit ())
     {
-      commit_local ();
+      // We can commit the undo log after dispatch-specific commit because we
+      // only have to reset gtm_thread state.
+      commit_undolog ();
       // FIXME: run after ensuring privatization safety:
       commit_user_actions ();
       commit_allocations (false, 0);
diff --git a/libitm/libitm_i.h b/libitm/libitm_i.h
index 7fb02f9..68e2de7 100644
--- a/libitm/libitm_i.h
+++ b/libitm/libitm_i.h
@@ -93,7 +93,7 @@ struct gtm_alloc_action
 };
 
 // This type is private to local.c.
-struct gtm_local_undo;
+struct gtm_undolog_entry;
 
 struct gtm_thread;
 
@@ -102,7 +102,7 @@ struct gtm_thread;
 struct gtm_transaction_cp
 {
   gtm_jmpbuf jb;
-  size_t local_undo_size;
+  size_t undolog_size;
   aa_tree<uintptr_t, gtm_alloc_action> alloc_actions;
   size_t user_actions_size;
   _ITM_transactionId_t id;
@@ -146,8 +146,8 @@ struct gtm_thread
   // This field *must* be at the beginning of the transaction.
   gtm_jmpbuf jb;
 
-  // Data used by local.c for the local memory undo log.
-  vector<gtm_local_undo*> local_undo;
+  // Data used by local.c for the undo log for both local and shared memory.
+  vector<gtm_undolog_entry*> undolog;
 
   // Data used by alloc.c for the malloc/free undo log.
   aa_tree<uintptr_t, gtm_alloc_action> alloc_actions;
@@ -247,9 +247,9 @@ struct gtm_thread
   void revert_cpp_exceptions (gtm_transaction_cp *cp = 0);
 
   // In local.cc
-  void commit_local (void);
-  void rollback_local (size_t until_size = 0);
-  void drop_references_local (const void *, size_t);
+  void commit_undolog (void);
+  void rollback_undolog (size_t until_size = 0);
+  void drop_references_undolog (const void *, size_t);
 
   // In retry.cc
   // Must be called outside of transactions (i.e., after rollback).
diff --git a/libitm/local.cc b/libitm/local.cc
index 735e5a7..fab73c5 100644
--- a/libitm/local.cc
+++ b/libitm/local.cc
@@ -1,4 +1,4 @@
-/* Copyright (C) 2008, 2009 Free Software Foundation, Inc.
+/* Copyright (C) 2008, 2009, 2011 Free Software Foundation, Inc.
    Contributed by Richard Henderson <rth@redhat.com>.
 
    This file is part of the GNU Transactional Memory Library (libitm).
@@ -26,7 +26,7 @@
 
 namespace GTM HIDDEN {
 
-struct gtm_local_undo
+struct gtm_undolog_entry
 {
   void *addr;
   size_t len;
@@ -35,28 +35,28 @@ struct gtm_local_undo
 
 
 void
-gtm_thread::commit_local ()
+gtm_thread::commit_undolog ()
 {
-  size_t i, n = local_undo.size();
+  size_t i, n = undolog.size();
 
   if (n > 0)
     {
       for (i = 0; i < n; ++i)
-	free (local_undo[i]);
-      this->local_undo.clear();
+	free (undolog[i]);
+      this->undolog.clear();
     }
 }
 
 void
-gtm_thread::rollback_local (size_t until_size)
+gtm_thread::rollback_undolog (size_t until_size)
 {
-  size_t i, n = local_undo.size();
+  size_t i, n = undolog.size();
 
   if (n > 0)
     {
       for (i = n; i-- > until_size; )
 	{
-	  gtm_local_undo *u = *local_undo.pop();
+	  gtm_undolog_entry *u = *undolog.pop();
 	  if (u)
 	    {
 	      memcpy (u->addr, u->saved, u->len);
@@ -69,22 +69,22 @@ gtm_thread::rollback_local (size_t until_size)
 /* Forget any references to PTR in the local log.  */
 
 void
-gtm_thread::drop_references_local (const void *ptr, size_t len)
+gtm_thread::drop_references_undolog (const void *ptr, size_t len)
 {
-  size_t i, n = local_undo.size();
+  size_t i, n = undolog.size();
 
   if (n > 0)
     {
       for (i = n; i > 0; i--)
 	{
-	  gtm_local_undo *u = local_undo[i];
+	  gtm_undolog_entry *u = undolog[i];
 	  /* ?? Do we need such granularity, or can we get away with
 	     just comparing PTR and LEN. ??  */
 	  if ((const char *)u->addr >= (const char *)ptr
 	      && ((const char *)u->addr + u->len <= (const char *)ptr + len))
 	    {
 	      free (u);
-	      local_undo[i] = NULL;
+	      undolog[i] = NULL;
 	    }
 	}
     }
@@ -94,13 +94,14 @@ void ITM_REGPARM
 GTM_LB (const void *ptr, size_t len)
 {
   gtm_thread *tx = gtm_thr();
-  gtm_local_undo *undo;
+  gtm_undolog_entry *undo;
 
-  undo = (gtm_local_undo *) xmalloc (sizeof (struct gtm_local_undo) + len);
+  undo = (gtm_undolog_entry *)
+      xmalloc (sizeof (struct gtm_undolog_entry) + len);
   undo->addr = (void *) ptr;
   undo->len = len;
 
-  tx->local_undo.push()[0] = undo;
+  tx->undolog.push()[0] = undo;
 
   memcpy (undo->saved, ptr, len);
 }
commit bc12a2f22271e182599994f1e46da65b520e7ef3
Author: Torvald Riegel <triegel@redhat.com>
Date:   Mon Aug 29 23:52:42 2011 +0200

    Ensure privatization safety if requested by a TM method.
    
    	* beginend.cc (GTM::gtm_thread::trycommit): Ensure privatization
    	safety if requested by a TM method.
    	* dispatch.h (GTM::abi_dispatch::trycommit): Add parameter for
    	privatization safety.
    	* method-serial.cc: Same.

diff --git a/libitm/beginend.cc b/libitm/beginend.cc
index 5dd7926..b10645c 100644
--- a/libitm/beginend.cc
+++ b/libitm/beginend.cc
@@ -441,27 +441,48 @@ GTM::gtm_thread::trycommit ()
     }
 
   // Commit of an outermost transaction.
-  if (abi_disp()->trycommit ())
+  gtm_word priv_time = 0;
+  if (abi_disp()->trycommit (priv_time))
     {
-      // We can commit the undo log after dispatch-specific commit because we
-      // only have to reset gtm_thread state.
-      commit_undolog ();
-      // FIXME: run after ensuring privatization safety:
-      commit_user_actions ();
-      commit_allocations (false, 0);
+      // The transaction is now inactive. Everything that we still have to do
+      // will not synchronize with other transactions anymore.
+      if (state & gtm_thread::STATE_SERIAL)
+        gtm_thread::serial_lock.write_unlock ();
+      else
+        gtm_thread::serial_lock.read_unlock (this);
+      state = 0;
 
-      // Reset transaction state.
+      // We can commit the undo log after dispatch-specific commit and after
+      // making the transaction inactive because we only have to reset
+      // gtm_thread state.
+      commit_undolog ();
+      // Reset further transaction state.
       cxa_catch_count = 0;
       cxa_unthrown = NULL;
       restart_total = 0;
 
-      // TODO can release SI mode before committing user actions? If so,
-      // we can release before ensuring privatization safety too.
-      if (state & gtm_thread::STATE_SERIAL)
-	gtm_thread::serial_lock.write_unlock ();
-      else
-	gtm_thread::serial_lock.read_unlock (this);
-      state = 0;
+      // Ensure privatization safety, if necessary.
+      if (priv_time)
+        {
+          // TODO Don't just spin but also block using cond vars / futexes
+          // here. Should probably be integrated with the serial lock code.
+          // TODO For C++0x atomics, the loads of other threads' shared_state
+          // should have acquire semantics (together with releases for the
+          // respective updates). But is this unnecessary overhead because
+          // weaker barriers are sufficient?
+          for (gtm_thread *it = gtm_thread::list_of_threads; it != 0;
+              it = it->next_thread)
+            {
+              if (it == this) continue;
+              while (it->shared_state < priv_time)
+                cpu_relax();
+            }
+        }
+
+      // After ensuring privatization safety, we execute potentially
+      // privatizing actions (e.g., calling free()). User actions are first.
+      commit_user_actions ();
+      commit_allocations (false, 0);
 
       return true;
     }
diff --git a/libitm/dispatch.h b/libitm/dispatch.h
index 2f6fdd7..e33c9fb 100644
--- a/libitm/dispatch.h
+++ b/libitm/dispatch.h
@@ -275,9 +275,11 @@ public:
   // Currently, this is called only for the commit of the outermost
   // transaction, or when switching to serial mode (which can happen in a
   // nested transaction).
-  // If the current transaction is in serial or serial-irrevocable mode, this
-  // must return true.
-  virtual bool trycommit() = 0;
+  // If privatization safety must be ensured in a quiescence-based way, set
+  // priv_time to a value different to 0. Nontransactional code will not be
+  // executed after this commit until all registered threads' shared_state is
+  // larger than or equal to this value.
+  virtual bool trycommit(gtm_word& priv_time) = 0;
   // Rolls back a transaction. Called on abort or after trycommit() returned
   // false.
   virtual void rollback(gtm_transaction_cp *cp = 0) = 0;
diff --git a/libitm/method-serial.cc b/libitm/method-serial.cc
index ac9005d..ae9905c 100644
--- a/libitm/method-serial.cc
+++ b/libitm/method-serial.cc
@@ -91,7 +91,7 @@ class serialirr_dispatch : public abi_dispatch
   CREATE_DISPATCH_METHODS_MEM()
 
   virtual gtm_restart_reason begin_or_restart() { return NUM_RESTARTS; }
-  virtual bool trycommit() { return true; }
+  virtual bool trycommit(gtm_word& priv_time) { return true; }
   virtual void rollback(gtm_transaction_cp *cp) { abort(); }
 
   virtual abi_dispatch* closed_nesting_alternative()
@@ -143,7 +143,7 @@ public:
   }
 
   virtual gtm_restart_reason begin_or_restart() { return NUM_RESTARTS; }
-  virtual bool trycommit() { return true; }
+  virtual bool trycommit(gtm_word& priv_time) { return true; }
   // Local undo will handle this.
   // trydropreference() need not be changed either.
   virtual void rollback(gtm_transaction_cp *cp) { }
@@ -246,15 +246,25 @@ GTM::gtm_thread::serialirr_mode ()
       if (this->state & STATE_IRREVOCABLE)
 	return;
 
+      // Try to commit the dispatch-specific part of the transaction, as we
+      // would do for an outermost commit.
+      // We're already serial, so we don't need to ensure privatization safety
+      // for other transactions here.
+      gtm_word priv_time = 0;
+      bool ok = disp->trycommit (priv_time);
       // Given that we're already serial, the trycommit better work.
-      bool ok = disp->trycommit ();
       assert (ok);
       need_restart = false;
     }
   else if (serial_lock.write_upgrade (this))
     {
       this->state |= STATE_SERIAL;
-      if (disp->trycommit ())
+      // Try to commit the dispatch-specific part of the transaction, as we
+      // would do for an outermost commit.
+      // We have successfully upgraded to serial mode, so we don't need to
+      // ensure privatization safety for other transactions here.
+      gtm_word priv_time = 0;
+      if (disp->trycommit (priv_time))
         need_restart = false;
     }
commit 305297a5fce12d688df1e80f43c9a067c3198fa8
Author: Torvald Riegel <triegel@redhat.com>
Date:   Tue Aug 30 00:06:30 2011 +0200

    Add gl_wt TM method.
    
    	* libitm_i.h: Add gl_wt dispatch.
    	* retry.cc (parse_default_method): Same.
    	* method-gl.cc: New file.
    	* Makefile.am: Use method-gl.cc.
    	* Makefile.in: Rebuild.

diff --git a/libitm/Makefile.am b/libitm/Makefile.am
index ee1822b..6923409 100644
--- a/libitm/Makefile.am
+++ b/libitm/Makefile.am
@@ -43,7 +43,7 @@ libitm_la_SOURCES = \
 	aatree.cc alloc.cc alloc_c.cc alloc_cpp.cc barrier.cc beginend.cc \
 	clone.cc cacheline.cc cachepage.cc eh_cpp.cc local.cc \
 	query.cc retry.cc rwlock.cc useraction.cc util.cc \
-	sjlj.S tls.cc method-serial.cc
+	sjlj.S tls.cc method-serial.cc method-gl.cc
 
 if ARCH_X86
 libitm_la_SOURCES += x86_sse.cc x86_avx.cc
diff --git a/libitm/Makefile.in b/libitm/Makefile.in
index 524753e..7dc864b 100644
--- a/libitm/Makefile.in
+++ b/libitm/Makefile.in
@@ -97,14 +97,14 @@ am__libitm_la_SOURCES_DIST = aatree.cc alloc.cc alloc_c.cc \
 	alloc_cpp.cc barrier.cc beginend.cc clone.cc cacheline.cc \
 	cachepage.cc eh_cpp.cc local.cc query.cc retry.cc rwlock.cc \
 	useraction.cc util.cc sjlj.S tls.cc method-serial.cc \
-	x86_sse.cc x86_avx.cc futex.cc
+	method-gl.cc x86_sse.cc x86_avx.cc futex.cc
 @ARCH_X86_TRUE@am__objects_1 = x86_sse.lo x86_avx.lo
 @ARCH_FUTEX_TRUE@am__objects_2 = futex.lo
 am_libitm_la_OBJECTS = aatree.lo alloc.lo alloc_c.lo alloc_cpp.lo \
 	barrier.lo beginend.lo clone.lo cacheline.lo cachepage.lo \
 	eh_cpp.lo local.lo query.lo retry.lo rwlock.lo useraction.lo \
-	util.lo sjlj.lo tls.lo method-serial.lo $(am__objects_1) \
-	$(am__objects_2)
+	util.lo sjlj.lo tls.lo method-serial.lo method-gl.lo \
+	$(am__objects_1) $(am__objects_2)
 libitm_la_OBJECTS = $(am_libitm_la_OBJECTS)
 DEFAULT_INCLUDES = -I.@am__isrc@
 depcomp = $(SHELL) $(top_srcdir)/../depcomp
@@ -373,8 +373,8 @@ libitm_la_LDFLAGS = $(libitm_version_info) $(libitm_version_script) \
 libitm_la_SOURCES = aatree.cc alloc.cc alloc_c.cc alloc_cpp.cc \
 	barrier.cc beginend.cc clone.cc cacheline.cc cachepage.cc \
 	eh_cpp.cc local.cc query.cc retry.cc rwlock.cc useraction.cc \
-	util.cc sjlj.S tls.cc method-serial.cc $(am__append_1) \
-	$(am__append_2)
+	util.cc sjlj.S tls.cc method-serial.cc method-gl.cc \
+	$(am__append_1) $(am__append_2)
 
 # Automake Documentation:
 # If your package has Texinfo files in many directories, you can use the
@@ -506,6 +506,7 @@ distclean-compile:
 @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/eh_cpp.Plo@am__quote@
 @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/futex.Plo@am__quote@
 @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/local.Plo@am__quote@
+@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/method-gl.Plo@am__quote@
 @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/method-serial.Plo@am__quote@
 @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/query.Plo@am__quote@
 @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/retry.Plo@am__quote@
diff --git a/libitm/libitm_i.h b/libitm/libitm_i.h
index 68e2de7..7c0853e 100644
--- a/libitm/libitm_i.h
+++ b/libitm/libitm_i.h
@@ -292,6 +292,7 @@ extern void GTM_fatal (const char *fmt, ...)
 extern abi_dispatch *dispatch_serial();
 extern abi_dispatch *dispatch_serialirr();
 extern abi_dispatch *dispatch_serialirr_onwrite();
+extern abi_dispatch *dispatch_gl_wt();
 
 extern gtm_cacheline_mask gtm_mask_stack(gtm_cacheline *, gtm_cacheline_mask);
 
diff --git a/libitm/method-gl.cc b/libitm/method-gl.cc
new file mode 100644
index 0000000..17a2b9f
--- /dev/null
+++ b/libitm/method-gl.cc
@@ -0,0 +1,272 @@
+/* Copyright (C) 2011 Free Software Foundation, Inc.
+   Contributed by Torvald Riegel <triegel@redhat.com>.
+
+   This file is part of the GNU Transactional Memory Library (libitm).
+
+   Libitm is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3 of the License, or
+   (at your option) any later version.
+
+   Libitm is distributed in the hope that it will be useful, but WITHOUT ANY
+   WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+   FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+   more details.
+
+   Under Section 7 of GPL version 3, you are granted additional
+   permissions described in the GCC Runtime Library Exception, version
+   3.1, as published by the Free Software Foundation.
+
+   You should have received a copy of the GNU General Public License and
+   a copy of the GCC Runtime Library Exception along with this program;
+   see the files COPYING3 and COPYING.RUNTIME respectively.  If not, see
+   <http://www.gnu.org/licenses/>.  */
+
+#include "libitm_i.h"
+
+using namespace GTM;
+
+namespace {
+
+// This group consists of all TM methods that synchronize via just a single
+// global lock (or ownership record).
+struct gl_mg : public method_group
+{
+  static const gtm_word LOCK_BIT = (~(gtm_word)0 >> 1) + 1;
+  // We can't use the full bitrange because ~0 in gtm_thread::shared_state has
+  // special meaning.
+  static const gtm_word VERSION_MAX = (~(gtm_word)0 >> 1) - 1;
+  static bool is_locked(gtm_word l) { return l & LOCK_BIT; }
+  static gtm_word set_locked(gtm_word l) { return l | LOCK_BIT; }
+  static gtm_word clear_locked(gtm_word l) { return l & ~LOCK_BIT; }
+
+  // The global ownership record.
+  gtm_word orec;
+  virtual void init()
+  {
+    orec = 0;
+  }
+  virtual void fini() { }
+};
+
+static gl_mg o_gl_mg;
+
+
+// The global lock, write-through TM method.
+// Acquires the orec eagerly before the first write, and then writes through.
+// Reads abort if the global orec's version number changed or if it is locked.
+// Currently, writes require undo-logging to prevent deadlock between the
+// serial lock and the global orec (writer txn acquires orec, reader txn
+// upgrades to serial and waits for all other txns, writer tries to upgrade to
+// serial too but cannot, writer cannot abort either, deadlock). We could
+// avoid this if the serial lock would allow us to prevent other threads from
+// going to serial mode, but this probably is too much additional complexity
+// just to optimize this TM method.
+// gtm_thread::shared_state is used to store a transaction's current
+// snapshot time (or commit time). The serial lock uses ~0 for inactive
+// transactions and 0 for active ones. Thus, we always have a meaningful
+// timestamp in shared_state that can be used to implement quiescence-based
+// privatization safety. This even holds if a writing transaction has the
+// lock bit set in its shared_state because this is fine for both the serial
+// lock (the value will be smaller than ~0) and privatization safety (we
+// validate that no other update transaction comitted before we acquired the
+// orec, so we have the most recent timestamp and no other transaction can
+// commit until we have committed).
+class gl_wt_dispatch : public abi_dispatch
+{
+protected:
+  static void pre_write(const void *addr, size_t len)
+  {
+    gtm_thread *tx = gtm_thr();
+    if (unlikely(!gl_mg::is_locked(tx->shared_state)))
+      {
+        // Check for and handle version number overflow.
+        if (unlikely(tx->shared_state >= gl_mg::VERSION_MAX))
+          tx->restart(RESTART_INIT_METHOD_GROUP);
+
+        // CAS global orec from our snapshot time to the locked state.
+        // This validates that we have a consistent snapshot, which is also
+        // for making privatization safety work (see the class' comments).
+        gtm_word now = o_gl_mg.orec;
+        if (now != tx->shared_state)
+          tx->restart(RESTART_VALIDATE_WRITE);
+        if (__sync_val_compare_and_swap(&o_gl_mg.orec, now,
+            gl_mg::set_locked(now)) != now)
+          tx->restart(RESTART_LOCKED_WRITE);
+
+        // Set shared_state to new value. The CAS is a full barrier, so the
+        // acquisition of the global orec is visible before this store here,
+        // and the store will not be visible before earlier data loads, which
+        // is required to correctly ensure privatization safety (see
+        // begin_and_restart() and release_orec() for further comments).
+        tx->shared_state = gl_mg::set_locked(now);
+      }
+
+    // TODO Ensure that this gets inlined: Use internal log interface and LTO.
+    GTM_LB(addr, len);
+  }
+
+  static void validate()
+  {
+    // Check that snapshot is consistent. The barrier ensures that this
+    // happens after previous data loads.
+    atomic_read_barrier();
+    gtm_thread *tx = gtm_thr();
+    gtm_word l = o_gl_mg.orec;
+    if (l != tx->shared_state)
+      tx->restart(RESTART_VALIDATE_READ);
+  }
+
+  template <typename V> static V load(const V* addr, ls_modifier mod)
+  {
+    // Read-for-write should be unlikely, but we need to handle it or will
+    // break later WaW optimizations.
+    if (unlikely(mod == RfW))
+      {
+        pre_write(addr, sizeof(V));
+        return *addr;
+      }
+    V v = *addr;
+    if (likely(mod != RaW))
+      validate();
+    return v;
+  }
+
+  template <typename V> static void store(V* addr, const V value,
+      ls_modifier mod)
+  {
+    if (unlikely(mod != WaW))
+      pre_write(addr, sizeof(V));
+    *addr = value;
+  }
+
+public:
+  static void memtransfer_static(void *dst, const void* src, size_t size,
+      bool may_overlap, ls_modifier dst_mod, ls_modifier src_mod)
+  {
+    if ((dst_mod != WaW && src_mod != RaW)
+        && (dst_mod != NONTXNAL || src_mod == RfW))
+      pre_write(dst, size);
+
+    if (!may_overlap)
+      ::memcpy(dst, src, size);
+    else
+      ::memmove(dst, src, size);
+
+    if (src_mod != RfW && src_mod != RaW && src_mod != NONTXNAL
+        && dst_mod != WaW)
+      validate();
+  }
+
+  static void memset_static(void *dst, int c, size_t size, ls_modifier mod)
+  {
+    if (mod != WaW)
+      pre_write(dst, size);
+    ::memset(dst, c, size);
+  }
+
+  virtual gtm_restart_reason begin_or_restart()
+  {
+    // We don't need to do anything for nested transactions.
+    gtm_thread *tx = gtm_thr();
+    if (tx->parent_txns.size() > 0)
+      return NUM_RESTARTS;
+
+    // Spin until global orec is not locked.
+    // TODO This is not necessary if there are no pure loads (check txn props).
+    gtm_word v;
+    unsigned i = 0;
+    while (gl_mg::is_locked(v = o_gl_mg.orec))
+      {
+        // TODO need method-specific max spin count
+        if (++i > gtm_spin_count_var) return RESTART_VALIDATE_READ;
+        cpu_relax();
+      }
+    // This barrier ensures that we have read the global orec before later
+    // data loads.
+    atomic_read_barrier();
+
+    // Everything is okay, we have a snapshot time.
+    // We don't need to enforce any ordering for the following store. There
+    // are no earlier data loads in this transaction, so the store cannot
+    // become visible before those (which could lead to the violation of
+    // privatization safety). The store can become visible after later loads
+    // but this does not matter because the previous value will have been
+    // smaller or equal (the serial lock will set shared_state to zero when
+    // marking the transaction as active, and restarts enforce immediate
+    // visibility of a smaller or equal value with a barrier (see
+    // release_orec()))).
+    tx->shared_state = v;
+    return NUM_RESTARTS;
+  }
+
+  virtual bool trycommit(gtm_word& priv_time)
+  {
+    // Release the orec but do not reset shared_state, which will be modified
+    // by the serial lock right after our commit anyway.
+    gtm_thread* tx = gtm_thr();
+    gtm_word v = tx->shared_state;
+    if (gl_mg::is_locked(v))
+      {
+        // Release the global orec, increasing its version number / timestamp.
+        // TODO replace with C++0x-style atomics (a release in this case)
+        atomic_write_barrier();
+        v = gl_mg::clear_locked(v) + 1;
+        o_gl_mg.orec = v;
+
+        // Need to ensure privatization safety. Every other transaction must
+        // have a snapshot time that is at least as high as our commit time
+        // (i.e., our commit must be visible to them).
+        priv_time = v;
+      }
+    return true;
+  }
+
+  virtual void rollback(gtm_transaction_cp *cp)
+  {
+    // We don't do anything for rollbacks of nested transactions.
+    if (cp != 0)
+      return;
+
+    // Release lock and increment version number to prevent dirty reads.
+    // Also reset shared state here, so that begin_or_restart() can expect a
+    // value that is correct wrt. privatization safety.
+    gtm_thread *tx = gtm_thr();
+    gtm_word v = tx->shared_state;
+    if (gl_mg::is_locked(v))
+      {
+        // Release the global orec, increasing its version number / timestamp.
+        // TODO replace with C++0x-style atomics (a release in this case)
+        atomic_write_barrier();
+        v = gl_mg::clear_locked(v) + 1;
+        o_gl_mg.orec = v;
+
+        // Also reset the timestamp published via shared_state.
+        tx->shared_state = v;
+        // We need a store-load barrier after this store to prevent it
+        // from becoming visible after later data loads because the
+        // previous value of shared_state has been higher than the actual
+        // snapshot time (the lock bit had been set), which could break
+        // privatization safety. We do not need a barrier before this
+        // store (see pre_write() for an explanation).
+        __sync_synchronize();
+      }
+  }
+
+  CREATE_DISPATCH_METHODS(virtual, )
+  CREATE_DISPATCH_METHODS_MEM()
+
+  gl_wt_dispatch() : abi_dispatch(false, true, false, false, &o_gl_mg)
+  { }
+};
+
+} // anon namespace
+
+static const gl_wt_dispatch o_gl_wt_dispatch;
+
+abi_dispatch *
+GTM::dispatch_gl_wt ()
+{
+  return const_cast<gl_wt_dispatch *>(&o_gl_wt_dispatch);
+}
diff --git a/libitm/retry.cc b/libitm/retry.cc
index 6fc4a38..3086dc7 100644
--- a/libitm/retry.cc
+++ b/libitm/retry.cc
@@ -200,6 +200,11 @@ parse_default_method()
       disp = GTM::dispatch_serial();
       env += 6;
     }
+  else if (strncmp(env, "gl_wt", 5) == 0)
+    {
+      disp = GTM::dispatch_gl_wt();
+      env += 5;
+    }
   else
     goto unknown;
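On the begin side, the patch spins until the orec is unlocked and then issues atomic_read_barrier() before using the observed value as the snapshot time; under C++11 atomics this corresponds to an acquire load. A hedged standalone sketch (all names illustrative, not from the patch):

```cpp
#include <atomic>
#include <cstdint>

typedef uintptr_t gtm_word;
static const gtm_word LOCK_BIT = gtm_word(1) << (sizeof(gtm_word) * 8 - 1);
static std::atomic<gtm_word> g_orec(0);

static bool is_locked(gtm_word v) { return (v & LOCK_BIT) != 0; }

// Spin until the orec is unlocked and return its value as the snapshot
// time. The acquire load replaces the explicit atomic_read_barrier():
// it orders the orec read before all later transactional data loads.
// Returns false when max_spin is exceeded; the caller would then restart
// (RESTART_VALIDATE_READ in the patch).
static bool begin_snapshot(unsigned max_spin, gtm_word &snapshot)
{
  gtm_word v;
  unsigned i = 0;
  while (is_locked(v = g_orec.load(std::memory_order_acquire)))
    {
      if (++i > max_spin)
        return false;
      // a real implementation would cpu_relax() here
    }
  snapshot = v;
  return true;
}
```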
Richard Henderson - Oct. 19, 2011, 7:17 p.m.
>     Add support for TM-method-specific begin code.
>     
>     	* libitm_i.h (GTM::gtm_restart_reason): Re-arrange and clean up
>     	declarations.
>     	* dispatch.h (GTM::abi_dispatch::begin_or_restart): New.
>     	* method-serial.cc: Implement begin_or_restart().
>     	* beginend.cc (GTM::gtm_thread::begin_transaction): Call
>     	dispatch-specific begin_or_restart().
>     	(GTM::gtm_thread::restart): Same.

Ok except,

> +  // Run dispatch-specific restart code. Retry until we succeed.
> +  GTM::gtm_restart_reason rr;
> +  while ((rr = disp->begin_or_restart())
> +      != NUM_RESTARTS)

Please add

  NO_RESTART = NUM_RESTARTS

(or its own number *after* NUM_RESTARTS, or -1, or something)
to the enumeration and use that name.  Using num_restarts here is confusing.
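One way the suggested change could look, as a hypothetical sketch (an illustrative subset of the enum; the actual list of restart reasons and the final spelling of the fix may differ):

```cpp
// Give "started fine" its own enumerator instead of having callers
// compare against NUM_RESTARTS directly.
enum gtm_restart_reason
{
  RESTART_VALIDATE_READ,      // illustrative subset; the real enum has more
  RESTART_INIT_METHOD_GROUP,
  NUM_RESTARTS,
  NO_RESTART = NUM_RESTARTS   // or a distinct value such as -1
};

// Callers then read naturally:
//   while ((rr = disp->begin_or_restart()) != NO_RESTART)
//     ...
```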

>     Fixed gtm_thread::serialirr_mode to actually use serialirr, not serial.
>     
>     	* method-serial.cc (GTM::gtm_thread::serialirr_mode): Fixed: Use
>     	serial-irrevocable dispatch, not serial.

Ok.

>     Do not free transaction-local memory when committing a nested transaction.
>     
>     	* alloc.cc (commit_allocations_2): Do not free transaction-local
>     	memory when committing a nested transaction.

Ok.

>     Handle re-initialization of the current method group.
>     
>     	* retry.cc (GTM::gtm_thread::decide_retry_strategy): Handle
>     	re-initialization of the current method group.
>     	* libitm_i.h (GTM::gtm_restart_reason): Add restart reason for this.

Ok.

>     Undo log is used for both thread-local and shared data.
>     
>     	* libitm_i.h: Renamed gtm_local_undo to gtm_undolog_entry.
>     	(GTM::gtm_thread): Renamed local_undo to undolog. Renamed
>     	undolog-related member functions from *_local to *_undolog.
>     	* local.cc (gtm_thread::commit_undolog): Same.
>     	* beginend.cc (GTM::gtm_thread::trycommit): Same.
>     	(GTM::gtm_thread::rollback): Roll back undolog before
>     	dispatch-specific rollback.

Ok.

>     Ensure privatization safety if requested by a TM method.
>     
>     	* beginend.cc (GTM::gtm_thread::trycommit): Ensure privatization
>     	safety if requested by a TM method.
>     	* dispatch.h (GTM::abi_dispatch::trycommit): Add parameter for
>     	privatization safety.
>     	* method-serial.cc: Same.

Ok.

>     Add gl_wt TM method.
>     
>     	* libitm_i.h: Add gl_wt dispatch.
>     	* retry.cc (parse_default_method): Same.
>     	* method-gl.cc: New file.
>     	* Makefile.am: Use method-gl.cc.
>     	* Makefile.in: Rebuild.

Ok with...

>     Fix gl_wt commit/rollback when serial lock has been acquired.
>     
>     	* method-gl.cc (gl_wt_dispatch::trycommit): Fix interaction with
>     	gtm_thread::shared_state when the serial lock is acquired.
>     	(gl_wt_dispatch::rollback): Same.

... this merged with the previous commit.

>     Fix TLS read accesses on Linux/x86.
>     
>     	* config/linux/x86/tls.h (abi_disp): Make TLS slot read volatile.
>     	(gtm_thr): Same.

Ok.


r~
Torvald Riegel - Oct. 19, 2011, 10:18 p.m.
Committed with the changes requested above.

Patch

diff --git a/libitm/beginend.cc b/libitm/beginend.cc
index cc25d17..1770dad 100644
--- a/libitm/beginend.cc
+++ b/libitm/beginend.cc
@@ -269,6 +269,15 @@  GTM::gtm_thread::begin_transaction (uint32_t prop, const gtm_jmpbuf *jb)
 #endif
     }
 
+  // Run dispatch-specific restart code. Retry until we succeed.
+  GTM::gtm_restart_reason rr;
+  while ((rr = disp->begin_or_restart())
+      != NUM_RESTARTS)
+    {
+      tx->decide_retry_strategy(rr);
+      disp = abi_disp();
+    }
+
   // Determine the code path to run. Only irrevocable transactions cannot be
   // restarted, so all other transactions need to save live variables.
   ret = choose_code_path(prop, disp);
@@ -458,9 +467,17 @@  GTM::gtm_thread::restart (gtm_restart_reason r)
   rollback ();
   decide_retry_strategy (r);
 
-  GTM_longjmp (&this->jb,
-      choose_code_path(prop, abi_disp()) | a_restoreLiveVariables,
-      this->prop);
+  // Run dispatch-specific restart code. Retry until we succeed.
+  abi_dispatch* disp = abi_disp();
+  GTM::gtm_restart_reason rr;
+  while ((rr = disp->begin_or_restart()) != NUM_RESTARTS)
+    {
+      decide_retry_strategy(rr);
+      disp = abi_disp();
+    }
+
+  GTM_longjmp (&jb,
+      choose_code_path(prop, disp) | a_restoreLiveVariables, prop);
 }
 
 void ITM_REGPARM
diff --git a/libitm/dispatch.h b/libitm/dispatch.h
index 9c33684..2f6fdd7 100644
--- a/libitm/dispatch.h
+++ b/libitm/dispatch.h
@@ -260,6 +260,16 @@  private:
   abi_dispatch& operator=(const abi_dispatch &) = delete;
 
 public:
+  // Starts or restarts a transaction. Is called right before executing the
+  // transactional application code (by either returning from
+  // gtm_thread::begin_transaction or doing the longjmp when restarting).
+  // Returns NUM_RESTARTS if the transaction started successfully. Returns
+  // a real restart reason if it couldn't start and does need to abort. This
+  // allows TM methods to just give up and delegate ensuring progress to the
+  // restart mechanism. If it returns a restart reason, this call must be
+  // idempotent because it will trigger the restart mechanism, which could
+  // switch to a different TM method.
+  virtual gtm_restart_reason begin_or_restart() = 0;
   // Tries to commit the transaction. Iff this returns true, the transaction
   // got committed and all per-transaction data will have been reset.
   // Currently, this is called only for the commit of the outermost
diff --git a/libitm/libitm_i.h b/libitm/libitm_i.h
index ea89870..2e1913a 100644
--- a/libitm/libitm_i.h
+++ b/libitm/libitm_i.h
@@ -53,22 +53,6 @@  template<> struct sized_integral<8> { typedef uint64_t type; };
 
 typedef unsigned int gtm_word __attribute__((mode (word)));
 
-} // namespace GTM
-
-#include "target.h"
-#include "rwlock.h"
-#include "aatree.h"
-#include "cacheline.h"
-#include "cachepage.h"
-#include "stmlock.h"
-#include "dispatch.h"
-#include "containers.h"
-
-namespace GTM HIDDEN {
-
-// A dispatch table parameterizes the implementation of the STM.
-struct abi_dispatch;
-
 // These values are given to GTM_restart_transaction and indicate the
 // reason for the restart.  The reason is used to decide what STM
 // implementation should be used during the next iteration.
@@ -86,6 +70,19 @@  enum gtm_restart_reason
   NUM_RESTARTS
 };
 
+} // namespace GTM
+
+#include "target.h"
+#include "rwlock.h"
+#include "aatree.h"
+#include "cacheline.h"
+#include "cachepage.h"
+#include "stmlock.h"
+#include "dispatch.h"
+#include "containers.h"
+
+namespace GTM HIDDEN {
+
 // This type is private to alloc.c, but needs to be defined so that
 // the template used inside gtm_thread can instantiate.
 struct gtm_alloc_action
diff --git a/libitm/method-serial.cc b/libitm/method-serial.cc
index 4621345..133b964 100644
--- a/libitm/method-serial.cc
+++ b/libitm/method-serial.cc
@@ -90,6 +90,7 @@  class serialirr_dispatch : public abi_dispatch
   CREATE_DISPATCH_METHODS(virtual, )
   CREATE_DISPATCH_METHODS_MEM()
 
+  virtual gtm_restart_reason begin_or_restart() { return NUM_RESTARTS; }
   virtual bool trycommit() { return true; }
   virtual void rollback(gtm_transaction_cp *cp) { abort(); }
 
@@ -141,6 +142,7 @@  public:
     ::memset(dst, c, size);
   }
 
+  virtual gtm_restart_reason begin_or_restart() { return NUM_RESTARTS; }
   virtual bool trycommit() { return true; }
   // Local undo will handle this.
   // trydropreference() need not be changed either.