Spell check the unordered container quickbook files.

[SVN r41123]
Daniel James
2007-11-15 23:36:33 +00:00
parent a42f9ba834
commit 29a0e5d163
5 changed files with 15 additions and 15 deletions

View File

@@ -14,7 +14,7 @@ will have more buckets).
 In order to decide which bucket to place an element in, the container applies
 the hash function, `Hash`, to the element's key (for `unordered_set` and
-`unordered_multiset` the key is the whole element, but is refered to as the key
+`unordered_multiset` the key is the whole element, but is referred to as the key
 so that the same terminology can be used for sets and maps). This returns a
 value of type `std::size_t`. `std::size_t` has a much greater range of values
 then the number of buckets, so that container applies another transformation to
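The bucket-selection step this hunk describes can be sketched in C++. This is a minimal illustration, not Boost's implementation: `bucket_for` is a made-up name, `std::hash` stands in for the library's `Hash` parameter, and the modulus is one possible "transformation" from the full `std::size_t` range down to the bucket count.

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <string>

// Illustrative only: hash the key to a std::size_t, then reduce that
// value to a valid bucket index in [0, bucket_count).
template <typename Key>
std::size_t bucket_for(const Key& key, std::size_t bucket_count) {
    std::size_t hash = std::hash<Key>()(key); // full range of std::size_t
    return hash % bucket_count;               // reduce to a bucket index
}
```

Equal keys always hash alike, so the same key always lands in the same bucket of a given table.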

View File

@@ -8,8 +8,8 @@
 [[Associative Containers] [Unordered Associative Containers]]
 [
-[Parameterized by an ordering relation `Compare`]
-[Parameterized by a function object `Hash` and an equivalence relation
+[Parametrised by an ordering relation `Compare`]
+[Parametrised by a function object `Hash` and an equivalence relation
 `Pred`]
 ]
 [
@@ -51,7 +51,7 @@
 element can be inserted into a different bucket.]
 ]
 [
-[`iterator`, `const_iterator` are of the biderctional category.]
+[`iterator`, `const_iterator` are of the bidirectional category.]
 [`iterator`, `const_iterator` are of at least the forward category.]
 ]
 [
@@ -108,7 +108,7 @@
 ]
 [
 [Insert a single element with a hint]
-[Amortized constant if t elements inserted right after hint,
+[Amortised constant if t elements inserted right after hint,
 logarithmic otherwise]
 [Average case constant, worst case linear (ie. the same as
 a normal insert).]
@@ -125,7 +125,7 @@
 ]
 [
 [Erase a single element by iterator]
-[Amortized constant]
+[Amortised constant]
 [Average case: O(1), Worst case: O(`size()`)]
 ]
 [
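The hinted-insert row of this table can be demonstrated directly. A sketch under assumptions: the standard `std::set` and `std::unordered_set` stand in for the associative containers being compared, and `hinted_insert_demo` is a made-up helper, not part of any library.

```cpp
#include <set>
#include <unordered_set>

// Insert 3 into {1, 2, 4, 5} with a hint, in both container kinds.
// In std::set a correct hint (the position before which the new element
// belongs) gives amortised constant insertion. In std::unordered_set the
// hint is accepted but cannot affect placement: the bucket is determined
// by the hash alone, so the cost is the same as a normal insert.
inline bool hinted_insert_demo() {
    std::set<int> ordered{1, 2, 4, 5};
    ordered.insert(ordered.find(4), 3);     // good hint: 3 goes just before 4

    std::unordered_set<int> unordered{1, 2, 4, 5};
    unordered.insert(unordered.begin(), 3); // hint has no effect on the bucket

    return ordered.count(3) == 1 && unordered.count(3) == 1;
}
```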

View File

@@ -68,7 +68,7 @@ Similarly, a custom hash function can be used for custom types:
 boost::unordered_multiset<point, std::equal_to<point>, point_hash>
 points;
-Although, customizing Boost.Hash is probably a better solution:
+Although, customising Boost.Hash is probably a better solution:
 struct point {
 int x;
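A complete version of the custom-hash approach this hunk touches might look as follows. The `point` and `point_hash` definitions here are a guess at the shape of the quickbook example, not its actual text; `std::unordered_multiset` and `std::hash` stand in for the Boost equivalents, and the mixing constant is the common `hash_combine`-style one.

```cpp
#include <cstddef>
#include <functional>
#include <unordered_set>

// Hypothetical custom type with a hand-written hash function object.
struct point {
    int x;
    int y;
    bool operator==(const point& other) const {
        return x == other.x && y == other.y;
    }
};

struct point_hash {
    std::size_t operator()(const point& p) const {
        // Combine both members so that (x, y) and (y, x) hash differently.
        std::size_t seed = std::hash<int>()(p.x);
        seed ^= std::hash<int>()(p.y) + 0x9e3779b9 + (seed << 6) + (seed >> 2);
        return seed;
    }
};

// Equality and hash supplied explicitly, as in the diff above.
using point_set = std::unordered_multiset<point, point_hash, std::equal_to<point>>;

// Made-up helper: a multiset keeps duplicates, so inserting the same
// point twice yields a count of two.
inline std::size_t demo_count() {
    point_set s;
    s.insert(point{1, 2});
    s.insert(point{1, 2});
    return s.count(point{1, 2});
}
```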

View File

@@ -28,7 +28,7 @@ with some care, can be avoided.
 Also, the existing containers require a 'less than' comparison object
 to order their elements. For some data types this is impossible to implement
-or isn't practicle. For a hash table you need an equality function
+or isn't practical. For a hash table you need an equality function
 and a hash function for the key.
 So the __tr1__ introduced the unordered associative containers, which are

View File

@@ -16,7 +16,7 @@
 The intent of this library is to implement the unordered
 containers in the draft standard, so the interface was fixed. But there are
-still some implementation desicions to make. The priorities are
+still some implementation decisions to make. The priorities are
 conformance to the standard and portability.
 The [@http://en.wikipedia.org/wiki/Hash_table wikipedia article on hash tables]
@@ -46,7 +46,7 @@ bucket but there are a some serious problems with this:
 standard requires that it is initially set to 1.0.
 * And since the standard is written with a eye towards chained
-addressing, users will be suprised if the performance doesn't reflect that.
+addressing, users will be surprised if the performance doesn't reflect that.
 So chained addressing is used.
@@ -76,9 +76,9 @@ There are two popular methods for choosing the number of buckets in a hash
 table. One is to have a prime number of buckets, another is to use a power
 of 2.
-Using a prime number of buckets, and choosing a bucket by using the modulous
-of the hash functions's result will usually give a good result. The downside
-is that the required modulous operation is fairly expensive.
+Using a prime number of buckets, and choosing a bucket by using the modulus
+of the hash function's result will usually give a good result. The downside
+is that the required modulus operation is fairly expensive.
 Using a power of 2 allows for much quicker selection of the bucket
 to use, but at the expense of loosing the upper bits of the hash value.
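The trade-off this hunk discusses can be made concrete. A sketch with made-up function names: a prime bucket count uses an integer division but lets every bit of the hash influence the result, while a power-of-two count reduces selection to a mask at the cost of ignoring the upper bits entirely.

```cpp
#include <cstddef>

// Prime bucket count: a modulus needs a (relatively expensive) division,
// but all bits of the hash value contribute to the bucket chosen.
inline std::size_t bucket_by_prime(std::size_t hash, std::size_t prime_count) {
    return hash % prime_count;
}

// Power-of-2 bucket count: a cheap bit mask, but every bit above the mask
// is discarded, so hashes differing only in their upper bits collide.
inline std::size_t bucket_by_mask(std::size_t hash, std::size_t pow2_count) {
    return hash & (pow2_count - 1);
}
```

For example, with 16 buckets, `0x10` and `0xF0` mask to the same bucket even though the hashes differ, which is why the text recommends mixing the upper bits in first (as in Wang's transformation) when using a power of 2.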
@@ -91,7 +91,7 @@ example see __wang__. Unfortunately, a transformation like Wang's requires
 knowledge of the number of bits in the hash value, so it isn't portable enough.
 This leaves more expensive methods, such as Knuth's Multiplicative Method
 (mentioned in Wang's article). These don't tend to work as well as taking the
-modulous of a prime, and the extra computation required might negate
+modulus of a prime, and the extra computation required might negate
 efficiency advantage of power of 2 hash tables.
 So, this implementation uses a prime number for the hash table size.
@@ -117,7 +117,7 @@ There is currently a further issue - if the allocator's swap does throw there's
 no guarantee what state the allocators will be in. The only solution seems to
 be to double buffer the allocators. But I'm assuming that it won't throw for now.
-Update: The comittee have now decided that `swap` should do a fast swap if the
+Update: The committee have now decided that `swap` should do a fast swap if the
 allocator is Swappable and a slow swap using copy construction otherwise. To
 make this distinction requires concepts. For now I'm sticking with the current
 implementation.