
[Bug,libstdc++/54075,4.7.1] unordered_map insert still slower than 4.6.2

Message ID 509C142C.6080907@gmail.com
State New

Commit Message

François Dumont Nov. 8, 2012, 8:21 p.m. UTC
Attached patch applied to trunk and 4.7 branch.

2012-11-08  François Dumont  <fdumont@gcc.gnu.org>

     PR libstdc++/54075
     * include/bits/hashtable.h (_Hashtable<>::rehash): Reset hash
     policy state if no rehash.
     * testsuite/23_containers/unordered_set/modifiers/reserve.cc
     (test02): New.
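
For reference, the user-visible behaviour fixed here can be checked with a
small standalone program mirroring the new test02 (not part of the patch;
std::unordered_map is used below because that is the container named in the
PR, but the same rehash policy is shared by all the unordered containers in
libstdc++). With the saved policy state restored when rehash() decides not
to rehash, a redundant reserve() no longer perturbs the policy, so inserting
up to the reserved count never changes bucket_count():

#include <cassert>
#include <cstddef>
#include <unordered_map>

int main()
{
  const int N = 1000;

  std::unordered_map<int, int> m;
  m.reserve(N);
  m.reserve(N);   // second call needs no rehash; policy state is kept

  const std::size_t bkts = m.bucket_count();
  for (int i = 0; i != N; ++i)
    {
      m.emplace(i, i);
      // No rehash is expected while we stay within the reserved count.
      assert(m.bucket_count() == bkts);
    }
  return 0;
}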


François

On 11/08/2012 01:58 AM, Jonathan Wakely wrote:
> On 7 November 2012 22:02, François Dumont wrote:
>> Ok to commit ? If so, where ?
> That patch is OK for trunk and 4.7, thanks.
>

Patch

Index: include/bits/hashtable.h
===================================================================
--- include/bits/hashtable.h	(revision 193258)
+++ include/bits/hashtable.h	(working copy)
@@ -1597,6 +1597,9 @@ 
 	  // level.
 	  _M_rehash_policy._M_prev_resize = 0;
 	}
+      else
+	// No rehash, restore previous state to keep a consistent state.
+	_M_rehash_policy._M_reset(__saved_state);
     }
 
   template<typename _Key, typename _Value,
Index: testsuite/23_containers/unordered_set/modifiers/reserve.cc
===================================================================
--- testsuite/23_containers/unordered_set/modifiers/reserve.cc	(revision 193258)
+++ testsuite/23_containers/unordered_set/modifiers/reserve.cc	(working copy)
@@ -40,8 +40,28 @@ 
     }
 }
 
+void test02()
+{
+  const int N = 1000;
+
+  typedef std::unordered_set<int> Set;
+  Set s;
+  s.reserve(N);
+  s.reserve(N);
+
+  std::size_t bkts = s.bucket_count();
+  for (int i = 0; i != N; ++i)
+    {
+      s.insert(i);
+      // As long as we insert fewer than the reserved number of elements we
+      // shouldn't experience any rehash.
+      VERIFY( s.bucket_count() == bkts );
+    }
+}
+
 int main()
 {
   test01();
+  test02();
   return 0;
 }