Add some more implementation details.

[SVN r3117]
This commit is contained in:
Daniel James
2006-08-06 20:46:06 +00:00
parent 10c5150f39
commit 8f96a08523


@@ -9,27 +9,31 @@ containers in the draft standard, so the interface was fixed. But there are
still some implementation decisions to make. The priorities are
conformance to the standard and portability.
The [@http://en.wikipedia.org/wiki/Hash_table wikipedia article on hash tables]
has a good summary of the implementation issues for hash tables in general.
[h2 Data Structure]
By specifying an interface for accessing the buckets of the container, the
standard pretty much requires that the hash table uses chained addressing.
It would be conceivable to write a hash table that uses another method. For
example, it could use open addressing, and use the lookup chain to act as a
bucket, but there are some serious problems with this. The biggest one is that
the draft standard requires that pointers to elements aren't invalidated, so
the elements couldn't be stored in one array, but instead will need a layer of
indirection - losing the efficiency and memory gains for small types.

Local iterators would be very inefficient and may not be able to
meet the complexity requirements. And for containers with
equivalent keys, making sure that they are adjacent would probably require a
chain of some sort anyway.
There are also the restrictions on when iterators can be invalidated. Since
open addressing degrades badly when there is a high number of collisions, the
restrictions could prevent a rehash when it's really needed. The maximum load
factor could be set to a fairly low value to work around this - but the
standard requires that it is initially set to 1.0.
And, of course, since the standard is written with an eye towards chained
addressing, users will be surprised if the performance doesn't reflect that.
@@ -57,7 +61,7 @@ of 2.
Using a prime number of buckets, and choosing a bucket by taking the modulus
of the hash function's result, will usually give a good result. The downside
is that the required modulus operation is fairly expensive.
Using a power of 2 allows for much quicker selection of the bucket
to use, but at the expense of losing the upper bits of the hash value.
@@ -70,7 +74,7 @@ example see __wang__. Unfortunately, a transformation like Wang's requires
knowledge of the number of bits in the hash value, so it isn't portable enough.
This leaves more expensive methods, such as Knuth's Multiplicative Method
(mentioned in Wang's article). These don't tend to work as well as taking the
modulus of a prime, and the extra computation required might negate the
efficiency advantage of power of 2 hash tables.
So, this implementation uses a prime number for the hash table size.
@@ -87,14 +91,22 @@ Need to look into this one.
In a fit of probably unwise enthusiasm, I implemented all three versions
with a macro (BOOST_UNORDERED_SWAP_METHOD) to pick which one is used. As
suggested by Howard Hinnant, I set option 3 as the default. I'll probably
remove the alternative implementations before review.
There is currently a further issue - if the allocator's swap does throw, there's
no guarantee what state the allocators will be in. The only solution seems to
be to double buffer the allocators.
[h3 [@http://www.open-std.org/jtc1/sc22/wg21/docs/lwg-active.html#518
518. Are insert and erase stable for unordered_multiset and unordered_multimap?]]
In this implementation, erase is stable. All inserts are stable, except for
inserting with a hint, which has slightly surprising behaviour. If the hint
points to the first element in the correct equal range, it inserts at the end of
the range; for all other elements in the range, it inserts immediately before
the element. I am very tempted to change insert with a hint to just ignore the
hint completely.
[h3 [@http://www.open-std.org/jtc1/sc22/wg21/docs/lwg-active.html#528
528. TR1: issue 6.19 vs 6.3.4.3/2 (and 6.3.4.5/2)]]