diff --git a/doc/unordered/buckets.adoc b/doc/unordered/buckets.adoc
index 027afaca..28cca9b9 100644
--- a/doc/unordered/buckets.adoc
+++ b/doc/unordered/buckets.adoc
@@ -120,23 +120,33 @@ or close to the hint - unless your hint is unreasonably small or large.
 
 == Iterator Invalidation
 
-It is not specified how member functions other than `rehash` affect
+It is not specified how member functions other than `rehash` and `reserve` affect
 the bucket count, although `insert` is only allowed to invalidate iterators
 when the insertion causes the load factor to be greater than or equal to the
 maximum load factor. For most implementations this means that `insert` will
 only change the number of buckets when this happens. While iterators can be
-invalidated by calls to `insert` and `rehash`, pointers and references to the
+invalidated by calls to `insert`, `rehash` and `reserve`, pointers and references to the
 container's elements are never invalidated.
 
 In a similar manner to using `reserve` for ``vector``s, it can be a good idea
-to call `rehash` before inserting a large number of elements. This will get
+to call `reserve` before inserting a large number of elements. This will get
 the expensive rehashing out of the way and let you store iterators, safe in
 the knowledge that they won't be invalidated. If you are inserting `n`
 elements into container `x`, you could first call:
 
 ```
-x.rehash((x.size() + n) / x.max_load_factor());
+x.reserve(n);
 ```
 
-Note:: ``rehash``'s argument is the minimum number of buckets, not the
-number of elements, which is why the new size is divided by the maximum load factor.
+Note:: `reserve(n)` reserves space for at least `n` elements, allocating enough buckets
+so as not to exceed the maximum load factor.
++
+Because the load factor is defined as the number of elements divided by the
+number of available buckets, this call is logically equivalent to:
++
+```
+x.rehash(std::ceil(n / x.max_load_factor()));
+```
++
+See the <> on the `rehash` function.
+
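The behavior this patch documents can be sketched with standard `std::unordered_map` (the guarantees also hold for the Boost containers, which follow the same requirements). A minimal sketch; the helper names below are ours, not part of any documented API, and exact bucket counts are implementation-defined:

```cpp
#include <cmath>
#include <cstddef>
#include <unordered_map>

// After reserve(n), inserting n elements should not trigger a rehash,
// so the bucket count stays stable and iterators remain valid.
bool stable_after_reserve(std::size_t n) {
    std::unordered_map<int, int> x;
    x.reserve(n);  // allocate enough buckets up front
    std::size_t before = x.bucket_count();
    for (std::size_t i = 0; i < n; ++i) {
        x.emplace(static_cast<int>(i), 0);
    }
    return x.bucket_count() == before;
}

// reserve(n) is specified as rehash(ceil(n / max_load_factor())),
// so both calls should yield the same bucket count on one implementation.
bool reserve_matches_rehash(std::size_t n) {
    std::unordered_map<int, int> a;
    std::unordered_map<int, int> b;
    a.reserve(n);
    b.rehash(static_cast<std::size_t>(std::ceil(n / b.max_load_factor())));
    return a.bucket_count() == b.bucket_count();
}
```

The second helper mirrors the equivalence stated in the note: `reserve` takes an element count and internally converts it to a minimum bucket count via the maximum load factor, which is why it is the more convenient call before a bulk insertion.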