[/ Copyright 2006-2008 Daniel James.
 / Distributed under the Boost Software License, Version 1.0. (See accompanying
 / file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) ]

[def __wang__
    [@http://web.archive.org/web/20121102023700/http://www.concentric.net/~Ttwang/tech/inthash.htm
    Thomas Wang's article on integer hash functions]]

[section:rationale Implementation Rationale]

The intent of this library is to implement the unordered
containers in the draft standard, so the interface was fixed. But there are
still some implementation decisions to make. The priorities are
conformance to the standard and portability.

The [@http://en.wikipedia.org/wiki/Hash_table Wikipedia article on hash tables]
has a good summary of the implementation issues for hash tables in general.

[h2 Data Structure]

By specifying an interface for accessing the buckets of the container, the
standard pretty much requires that the hash table uses chained addressing.

It would be conceivable to write a hash table that uses another method. For
example, it could use open addressing, and use the lookup chain to act as a
bucket, but there are some serious problems with this:

* The draft standard requires that pointers to elements aren't invalidated, so
  the elements can't be stored in one array, but will need a layer of
  indirection instead - losing the efficiency and most of the memory gain,
  the main advantages of open addressing.

* Local iterators would be very inefficient and may not be able to
  meet the complexity requirements.

* There are also restrictions on when iterators can be invalidated. Since
  open addressing degrades badly when there is a high number of collisions,
  these restrictions could prevent a rehash when it's really needed. The
  maximum load factor could be set to a fairly low value to work around
  this - but the standard requires that it is initially set to 1.0.

* And since the standard is written with an eye towards chained
  addressing, users will be surprised if the performance doesn't reflect that.

So chained addressing is used.

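The chained-addressing layout can be pictured as a bucket array of singly
linked node chains. The sketch below is a minimal illustration of the idea
only - it is not Boost.Unordered's actual implementation, and all names in it
are made up. The key point is that each element lives in its own heap node, so
its address stays stable however the bucket array changes:

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// Minimal sketch of chained addressing (illustrative, not Boost's layout).
struct node {
    int value;
    node* next;  // next node in this bucket's chain
};

struct chained_table {
    std::vector<node*> buckets;  // one chain head per bucket

    explicit chained_table(std::size_t n) : buckets(n, nullptr) {}

    void insert(int v) {
        std::size_t b = std::hash<int>()(v) % buckets.size();
        // The element is stored in its own heap node, so its address
        // never changes; the bucket array only holds pointers.
        buckets[b] = new node{v, buckets[b]};
    }

    bool contains(int v) const {
        std::size_t b = std::hash<int>()(v) % buckets.size();
        for (node* p = buckets[b]; p; p = p->next)
            if (p->value == v) return true;
        return false;
    }

    ~chained_table() {
        for (node* p : buckets)
            while (p) { node* q = p->next; delete p; p = q; }
    }
};
```

Because the elements sit in individually allocated nodes, growing the bucket
array only relinks pointers; the elements themselves are never moved or
copied, which is how the pointer-stability requirement can be met.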
[/ (Removing for now as this is out of date)

For containers with unique keys I store the buckets in a single-linked list.
There are other possible data structures (such as a double-linked list)
that allow for some operations to be faster (such as erasing and iteration)
but the possible gain seems small compared to the extra memory needed.
The most commonly used operations (insertion and lookup) would not be improved
at all.

But for containers with equivalent keys a single-linked list can degrade badly
when a large number of elements with equivalent keys are inserted. I think it's
reasonable to assume that users who choose to use `unordered_multiset` or
`unordered_multimap` do so because they are likely to insert elements with
equivalent keys. So I have used an alternative data structure that doesn't
degrade, at the expense of an extra pointer per node.

This works by storing a circular linked list for each group of equivalent
nodes in reverse order. This allows quick navigation to the end of a group (since
the first element points to the last) and can be quickly updated when elements
are inserted or erased. The main disadvantage of this approach is some hairy code
for erasing elements.
]

[/ (Starting to write up new structure, might not be ready in time)
The nodes used to be stored in a linked list for each bucket, but that
didn't meet the complexity requirements for C++11, so now the nodes
are stored in one long single-linked list. But there needs to be a way to get
the bucket from a node; to do that, a copy of the key's hash value is
stored in the node. Another possibility would be to store a pointer to
the bucket, or the bucket's index, but storing the hash value allows
some operations to be faster.
]

[h2 Number of Buckets]

There are two popular methods for choosing the number of buckets in a hash
table. One is to have a prime number of buckets, another is to use a power
of 2.

Using a prime number of buckets, and choosing a bucket by using the modulus
of the hash function's result, will usually give a good result. The downside
is that the required modulus operation is fairly expensive. This is what the
containers do in most cases.

Using a power of 2 allows for much quicker selection of the bucket
to use, but at the expense of losing the upper bits of the hash value.
For some specially designed hash functions it is possible to do this and
still get a good result, but as the containers can take arbitrary hash
functions this can't be relied on.

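The trade-off between the two strategies comes down to a single arithmetic
operation per lookup. A hypothetical side-by-side comparison (the function
names are illustrative, not library internals):

```cpp
#include <cstddef>

// Prime bucket count: the modulus uses every bit of the hash value,
// but integer division is one of the slower ALU operations.
std::size_t bucket_by_prime(std::size_t hash, std::size_t prime_count) {
    return hash % prime_count;
}

// Power-of-2 bucket count: a mask is a single cheap instruction,
// but everything above the low log2(count) bits is discarded.
std::size_t bucket_by_mask(std::size_t hash, std::size_t pow2_count) {
    return hash & (pow2_count - 1);
}
```

With a power-of-2 count, two hash values that differ only in their upper bits
(for example `0x12340000` and `0xABCD0000` with 16 buckets) land in the same
bucket - exactly the information loss described above - while the prime
modulus still separates them.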
To avoid this, a transformation could be applied to the hash function; for an
example, see __wang__. Unfortunately, a transformation like Wang's requires
knowledge of the number of bits in the hash value, so it isn't portable enough
to use as a default. It can be applicable in certain cases, so the containers
have a policy-based implementation that can use this alternative technique.

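For illustration, here is one published version of Thomas Wang's 64-bit mix
(from the article linked via __wang__). Whether Boost's policy uses exactly
this function is not the point; it shows why the bit width must be known in
advance, since every shift constant is tuned to a 64-bit word:

```cpp
#include <cstdint>

// Thomas Wang's 64-bit integer mix (one published version).
// Every step is invertible modulo 2^64, so the whole mix is a bijection.
std::uint64_t wang_mix64(std::uint64_t key) {
    key = (~key) + (key << 21);             // key = key * (2^21 - 1) - 1
    key = key ^ (key >> 24);                // xor-shift
    key = (key + (key << 3)) + (key << 8);  // key = key * 265
    key = key ^ (key >> 14);                // xor-shift
    key = (key + (key << 2)) + (key << 4);  // key = key * 21
    key = key ^ (key >> 28);                // xor-shift
    key = key + (key << 31);                // key = key * (2^31 + 1)
    return key;
}
```

After mixing, the bucket can be chosen with a cheap mask, for example
`wang_mix64(h) & (n - 1)` for a power-of-2 `n`: the mix folds the entropy of
the whole input into the low bits, so discarding the high bits is no longer
as harmful. Because the mix is a bijection, distinct inputs remain distinct
before the mask is applied.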
Currently this is only done on 64-bit architectures, where prime number
modulus can be expensive. The cost varies depending on the architecture,
though, so I should probably revisit it.

I'm also thinking of introducing a mechanism whereby a hash function can
indicate that it's safe to be used directly with power of 2 buckets, in
which case a faster plain power of 2 implementation can be used.

[endsect]