polished description

joaquintides
2023-04-21 09:13:28 +02:00
parent 80d7203d78
commit 7f7e577e77


@@ -304,6 +304,8 @@ struct concurrent_table_arrays:table_arrays<Value,Group,SizePolicy>
  * - The API provides member functions for all the meaningful composite
  *   operations of the form "X (and|or) Y", where X, Y are one of the
  *   primitives FIND, ACCESS, INSERT or ERASE.
+ * - Parallel versions of [c]visit_all(f) and erase_if(f) are provided based
+ *   on C++17 stdlib parallel algorithms.
  *
  * Consult boost::unordered_flat_map docs for the full API reference.
  * Heterogeneous lookup is supported by default, that is, without checking for
@@ -316,23 +318,25 @@ struct concurrent_table_arrays:table_arrays<Value,Group,SizePolicy>
  *   rw spinlocks acting as a single rw mutex with very little
  *   cache-coherence traffic on read (each thread is assigned a different
  *   spinlock in the array). Container-level write locking is only used for
- *   rehashing and container-wide operations (assignment, swap).
- * - Each group of slots has an associated rw spinlock. Lookup is
- *   implemented in a (groupwise) lock-free manner until a reduced hash match
- *   is found, in which case the relevant group is locked and the slot is
- *   double-checked for occupancy and compared with the key.
+ *   rehashing and other container-wide operations (assignment, swap, etc.)
+ * - Each group of slots has an associated rw spinlock. A thread holds
+ *   at most one group lock at any given time. Lookup is implemented in
+ *   a (groupwise) lock-free manner until a reduced hash match is found, in
+ *   which case the relevant group is locked and the slot is double-checked
+ *   for occupancy and compared with the key.
  * - Each group has also an associated so-called insertion counter used for
  *   the following optimistic insertion algorithm:
  *   - The value of the insertion counter for the initial group in the probe
- *     sequence is recorded (let's call this value c0).
- *   - Lookup and search for an available slot (if lookup failed) are
- *     lock-free.
+ *     sequence is locally recorded (let's call this value c0).
+ *   - Lookup is as described above. If lookup finds no equivalent element,
+ *     search for an available slot for insertion successively locks/unlocks
+ *     each group in the probing sequence.
  *   - When an available slot is located, it is preemptively occupied (its
- *     reduced hash value is set) after locking and the insertion counter is
- *     atomically incremented: if no other thread has incremented the counter
- *     during the whole operation (which is checked by comparing with c0),
- *     then we're good to go and complete the insertion, otherwise we roll
- *     back and start over.
+ *     reduced hash value is set) and the insertion counter is atomically
+ *     incremented: if no other thread has incremented the counter during the
+ *     whole operation (which is checked by comparing with c0), then we're
+ *     good to go and complete the insertion, otherwise we roll back and start
+ *     over.
  */
 template <typename TypePolicy,typename Hash,typename Pred,typename Allocator>
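
For context, the composite operations and the new parallel [c]visit_all/erase_if overloads mentioned in the first hunk surface in the public containers built on this table. A minimal usage sketch, assuming the boost::concurrent_flat_map front end documented in Boost.Unordered (not part of this diff):

#include <boost/unordered/concurrent_flat_map.hpp>
#include <execution>
#include <string>

int main()
{
  boost::concurrent_flat_map<int,std::string> m;

  /* INSERT-or-ACCESS composite: insert, or visit the existing element */
  m.insert_or_visit({1,"one"},[](auto& x){ x.second="already there"; });

  /* FIND-and-ACCESS composite: the lambda runs under the group lock */
  m.visit(1,[](auto& x){ x.second+="!"; });

  /* parallel visit_all/erase_if on top of C++17 parallel algorithms */
  m.visit_all(std::execution::par,[](auto& x){ x.second+="."; });
  m.erase_if(std::execution::par,[](auto& x){ return x.first<0; });
}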
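The container-level "array of rw spinlocks acting as a single rw mutex" can be pictured roughly as below. This is an illustrative sketch only: multi_rw_mutex and its slot assignment are made-up names, and std::shared_mutex stands in for the library's custom spinlocks.

#include <array>
#include <functional>
#include <shared_mutex>
#include <thread>

class multi_rw_mutex
{
  static constexpr std::size_t N=64;
  std::array<std::shared_mutex,N> mtx_;

  /* each thread is statically assigned one submutex */
  static std::size_t slot()
  {
    thread_local const std::size_t s=
      std::hash<std::thread::id>{}(std::this_thread::get_id())%N;
    return s;
  }

public:
  /* read lock touches a single submutex, so concurrent readers on
   * different threads rarely contend with one another */
  void lock_shared()   { mtx_[slot()].lock_shared(); }
  void unlock_shared() { mtx_[slot()].unlock_shared(); }

  /* write lock (rehashing, assignment, swap, etc.) acquires every
   * submutex, excluding all readers */
  void lock()   { for(auto& m:mtx_) m.lock(); }
  void unlock() { for(auto& m:mtx_) m.unlock(); }
};

The tradeoff is deliberate: readers scale with almost no cache-coherence traffic, while writers pay O(N) acquisition, which is acceptable because container-wide write locking is reserved for rare operations. (A production version would also pad the submutexes to avoid false sharing.)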
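The two-phase group lookup (lock-free reduced-hash matching, then locked double-checking) could be sketched as follows. Everything here is hypothetical: the real code matches all 15 slots of a group at once (typically with SIMD), and the actual metadata layout and encoding differ.

#include <atomic>
#include <cstdint>
#include <shared_mutex>

struct group
{
  std::atomic<std::uint8_t> rhash[15]; /* reduced hash per slot, 0 == empty (assumed) */
  std::shared_mutex         m;         /* stands in for the group's rw spinlock */
};

template<typename SlotArray,typename Key,typename Pred>
int find_in_group(group& g,const SlotArray& slots,const Key& k,
                  std::uint8_t rh,Pred pred)
{
  for(int i=0;i<15;++i){
    /* lock-free phase: scan reduced hashes without taking any lock */
    if(g.rhash[i].load(std::memory_order_acquire)==rh){
      /* match: lock the group, then double-check the slot and compare
       * the key, as a concurrent erase/insert may have raced us */
      std::shared_lock<std::shared_mutex> lock(g.m);
      if(g.rhash[i].load(std::memory_order_relaxed)==rh&&pred(slots[i],k)){
        return i;
      }
    }
  }
  return -1; /* not in this group; caller moves on in the probe sequence */
}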
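Finally, the optimistic insertion protocol built on the per-group insertion counter, again as a hypothetical sketch: initial_group, reserve_slot, commit and rollback are invented stand-ins for the corresponding steps in the comment, not actual library functions.

#include <atomic>

template<typename Table,typename Value>
void optimistic_insert(Table& t,const Value& v)
{
  for(;;){
    auto&    g0=t.initial_group(v);          /* first group in the probe sequence */
    unsigned c0=g0.insertion_counter.load(); /* locally record the counter */

    if(t.find(v)) return;                    /* lookup as described above */

    /* locks/unlocks each probed group in turn and preemptively occupies
     * an available slot by setting its reduced hash value */
    auto loc=t.reserve_slot(v);

    /* atomically increment; the previous value tells us whether any other
     * thread inserted through this probe sequence while we worked */
    if(g0.insertion_counter.fetch_add(1)==c0){
      t.commit(loc,v);   /* good to go: complete the insertion */
      return;
    }
    t.rollback(loc);     /* interleaved insertion detected: start over */
  }
}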