forked from boostorg/unordered
documented boost::concurrent_flat_set
@@ -6,14 +6,15 @@
 :github-pr-url: https://github.com/boostorg/unordered/pull
 :cpp: C++
 
-== Release 1.84.0
+== Release 1.84.0 - Major update
 
-* Added `[c]visit_while` operations to `boost::concurrent_flat_map`,
+* Added `boost::concurrent_flat_set`.
+* Added `[c]visit_while` operations to concurrent containers,
 with serial and parallel variants.
-* Added efficient move construction of `boost::unordered_flat_map` from
-`boost::concurrent_flat_map` and vice versa.
-* Added debug mode mechanisms for detecting illegal reentrancies into
-a `boost::concurrent_flat_map` from user code.
+* Added efficient move construction of `boost::unordered_flat_(map|set)` from
+`boost::concurrent_flat_(map|set)` and vice versa.
+* Added debug-mode mechanisms for detecting illegal reentrancies into
+a concurrent container from user code.
 * Added Boost.Serialization support to all containers and their (non-local) iterator types.
 * Added support for fancy pointers to open-addressing and concurrent containers.
 This enables scenarios like the use of Boost.Interprocess allocators to construct containers in shared memory.
@@ -148,14 +148,14 @@ The main differences with C++ unordered associative containers are:
 
 == Concurrent Containers
 
-There is currently no specification in the C++ standard for this or any other concurrent
-data structure. `boost::concurrent_flat_map` takes the same template parameters as `std::unordered_map`
-and all the maps provided by Boost.Unordered, and its API is modelled after that of
-`boost::unordered_flat_map` with the crucial difference that iterators are not provided
+There is currently no specification in the C++ standard for this or any other type of concurrent
+data structure. The APIs of `boost::concurrent_flat_set` and `boost::concurrent_flat_map`
+are modelled after `boost::unordered_flat_set` and `boost::unordered_flat_map`, respectively,
+with the crucial difference that iterators are not provided
 due to their inherent problems in concurrent scenarios (high contention, prone to deadlocking):
-so, `boost::concurrent_flat_map` is technically not a
+so, Boost.Unordered concurrent containers are technically not models of
 https://en.cppreference.com/w/cpp/named_req/Container[Container^], although
-it meets all the requirements of https://en.cppreference.com/w/cpp/named_req/AllocatorAwareContainer[AllocatorAware^]
+they meet all the requirements of https://en.cppreference.com/w/cpp/named_req/AllocatorAwareContainer[AllocatorAware^]
 containers except those implying iterators.
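As a minimal illustration (the element types here are hypothetical, and default hash, equality and allocator parameters are assumed), the concurrent containers are instantiated just like their non-concurrent counterparts:

[source,c++]
----
#include <boost/unordered/concurrent_flat_map.hpp>
#include <boost/unordered/unordered_flat_map.hpp>
#include <string>

// Same template parameters (Key, T, Hash, KeyEqual, Allocator);
// the concurrent variant simply exposes no iterators.
boost::unordered_flat_map<std::string, int>  word_count;
boost::concurrent_flat_map<std::string, int> shared_word_count;
----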
 
 In a non-concurrent unordered container, iterators serve two main purposes:
@@ -163,7 +163,7 @@ In a non-concurrent unordered container, iterators serve two main purposes:
 
 * Access to an element previously located via lookup.
 * Container traversal.
 
-In place of iterators, `boost::concurrent_flat_map` uses _internal visitation_
+In place of iterators, `boost::concurrent_flat_set` and `boost::concurrent_flat_map` use _internal visitation_
 facilities as a thread-safe substitute. Classical operations returning an iterator to an
 element already existing in the container, like for instance:
@@ -191,15 +191,15 @@ template<class F> size_t visit_all(F f);
 ----
 
 of which there are parallelized versions in C++17 compilers with parallel
-algorithm support. In general, the interface of `boost::concurrent_flat_map`
-is derived from that of `boost::unordered_flat_map` by a fairly straightforward
-process of replacing iterators with visitation where applicable. If
-`iterator` and `const_iterator` provide mutable and const access to elements,
+algorithm support. In general, the interface of concurrent containers
+is derived from that of their non-concurrent counterparts by a fairly straightforward
+process of replacing iterators with visitation where applicable. If for
+regular maps `iterator` and `const_iterator` provide mutable and const access to elements,
 respectively, here visitation is granted mutable or const access depending on
 the constness of the member function used (there are also `*cvisit` overloads for
-explicit const visitation).
+explicit const visitation); in the case of `boost::concurrent_flat_set`, visitation is always const.
 
-The one notable operation not provided is `operator[]`/`at`, which can be
+One notable operation not provided by `boost::concurrent_flat_map` is `operator[]`/`at`, which can be
 replaced, if in a more convoluted manner, by
 xref:#concurrent_flat_map_try_emplace_or_cvisit[`try_emplace_or_visit`].
 
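As an illustration of that replacement, a minimal sketch (the key, initial value and update are hypothetical) of how an `operator[]`-style upsert can be expressed with `try_emplace_or_visit`:

[source,c++]
----
#include <boost/unordered/concurrent_flat_map.hpp>
#include <string>

int main()
{
  boost::concurrent_flat_map<std::string, int> m;

  // Rough equivalent of ++m["quark"] on a non-concurrent map:
  // if "quark" is absent, insert it with the value 1; otherwise
  // the visitation function increments the existing element.
  m.try_emplace_or_visit("quark", 1, [](auto& x) { ++x.second; });
}
----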
@@ -3,8 +3,8 @@
 
 :idprefix: concurrent_
 
-Boost.Unordered currently provides just one concurrent container named `boost::concurrent_flat_map`.
-`boost::concurrent_flat_map` is a hash table that allows concurrent write/read access from
+Boost.Unordered provides `boost::concurrent_flat_set` and `boost::concurrent_flat_map`,
+hash tables that allow concurrent write/read access from
 different threads without having to implement any synchronization mechanism on the user's side.
 
 [source,c++]
@@ -36,16 +36,16 @@ In the example above, threads access `m` without synchronization, just as we'd d
 single-threaded scenario. In an ideal setting, if a given workload is distributed among
 _N_ threads, execution is _N_ times faster than with one thread —this limit is
 never attained in practice due to synchronization overheads and _contention_ (one thread
-waiting for another to leave a locked portion of the map), but `boost::concurrent_flat_map`
-is designed to perform with very little overhead and typically achieves _linear scaling_
+waiting for another to leave a locked portion of the map), but Boost.Unordered concurrent containers
+are designed to perform with very little overhead and typically achieve _linear scaling_
 (that is, performance is proportional to the number of threads up to the number of
 logical cores in the CPU).
 
 == Visitation-based API
 
-The first thing a new user of `boost::concurrent_flat_map` will notice is that this
-class _does not provide iterators_ (which makes it technically
-not a https://en.cppreference.com/w/cpp/named_req/Container[Container^]
+The first thing a new user of `boost::concurrent_flat_set` or `boost::concurrent_flat_map`
+will notice is that these classes _do not provide iterators_ (which makes them technically
+not https://en.cppreference.com/w/cpp/named_req/Container[Containers^]
 in the C++ standard sense). The reason for this is that iterators are inherently
 thread-unsafe. Consider this hypothetical code:
@@ -73,7 +73,7 @@ m.visit(k, [](const auto& x) { // x is the element with key k (if it exists)
 ----
 
 The visitation function passed by the user (in this case, a lambda function)
-is executed internally by `boost::concurrent_flat_map` in
+is executed internally by Boost.Unordered in
 a thread-safe manner, so it can access the element without worrying about other
 threads interfering in the process.
 
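For example, a minimal sketch of the same visitation pattern on `boost::concurrent_flat_set` (the element value is hypothetical; as noted elsewhere in this documentation, set visitation is always const):

[source,c++]
----
#include <boost/unordered/concurrent_flat_set.hpp>
#include <iostream>
#include <string>

int main()
{
  boost::concurrent_flat_set<std::string> s;
  s.insert("hello");

  // The lambda receives a read-only reference to the element and is
  // run internally under the container's synchronization.
  s.visit("hello", [](const auto& x) { std::cout << x << "\n"; });
}
----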
@@ -112,7 +112,7 @@ if (found) {
 }
 ----
 
-Visitation is prominent in the API provided by `boost::concurrent_flat_map`, and
+Visitation is prominent in the API provided by `boost::concurrent_flat_set` and `boost::concurrent_flat_map`, and
 many classical operations have visitation-enabled variations:
 
 [source,c++]
@@ -129,13 +129,17 @@ the element: as a general rule, operations on a `boost::concurrent_flat_map` `m`
 will grant visitation functions const/non-const access to the element depending on whether
 `m` is const/non-const. Const access can always be explicitly requested
 by using `cvisit` overloads (for instance, `insert_or_cvisit`) and may result
-in higher parallelization. Consult the xref:#concurrent_flat_map[reference]
-for a complete list of available operations.
+in higher parallelization. For `boost::concurrent_flat_set`, on the other hand,
+visitation is always const access.
+Consult the references of
+xref:#concurrent_flat_set[`boost::concurrent_flat_set`] and
+xref:#concurrent_flat_map[`boost::concurrent_flat_map`]
+for the complete list of visitation-enabled operations.
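To make the access rules concrete, a small sketch (container and values are hypothetical) contrasting `insert_or_visit` with its explicitly-const `insert_or_cvisit` counterpart:

[source,c++]
----
#include <boost/unordered/concurrent_flat_map.hpp>
#include <string>

int main()
{
  boost::concurrent_flat_map<int, std::string> m;

  // Non-const visitation: if the key already exists, the visitor
  // may modify the mapped value.
  m.insert_or_visit({1, "one"}, [](auto& x) { x.second = "one"; });

  // Explicit const visitation: the visitor gets read-only access,
  // which may allow higher parallelization.
  m.insert_or_cvisit({1, "uno"}, [](const auto& x) { (void)x.second; });
}
----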
 
 == Whole-Table Visitation
 
-In the absence of iterators, `boost::concurrent_flat_map` provides `visit_all`
-as an alternative way to process all the elements in the map:
+In the absence of iterators, `visit_all` is provided
+as an alternative way to process all the elements in the container:
 
 [source,c++]
 ----
@@ -187,12 +191,12 @@ m.erase_if([](auto& x) {
 
 `visit_while` and `erase_if` can also be parallelized. Note that, in order to increase efficiency,
 whole-table visitation operations do not block the table during execution: this implies that elements
 may be inserted, modified or erased by other threads during visitation. It is
-advisable not to assume too much about the exact global state of a `boost::concurrent_flat_map`
+advisable not to assume too much about the exact global state of a concurrent container
 at any point in your program.
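For reference, a hedged sketch of the parallel variants (assuming a C++17 compiler with parallel algorithm support; the map, threshold and update are hypothetical):

[source,c++]
----
#include <boost/unordered/concurrent_flat_map.hpp>
#include <execution>
#include <string>

void bulk_update(boost::concurrent_flat_map<std::string, int>& m)
{
  // Parallelized whole-table visitation: work is distributed across
  // threads, and the table is not blocked while it runs.
  m.visit_all(std::execution::par, [](auto& x) { ++x.second; });

  // Parallelized erase_if: removes the elements matching the predicate.
  m.erase_if(std::execution::par, [](auto& x) { return x.second > 100; });
}
----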
 
 == Blocking Operations
 
-``boost::concurrent_flat_map``s can be copied, assigned, cleared and merged just like any
+``boost::concurrent_flat_set``s and ``boost::concurrent_flat_map``s can be copied, assigned, cleared and merged just like any
 Boost.Unordered container. Unlike most other operations, these are _blocking_,
 that is, all other threads are prevented from accessing the tables involved while a copy, assignment,
 clear or merge operation is in progress. Blocking is taken care of automatically by the library
@@ -204,8 +208,10 @@ reserving space in advance of bulk insertions will generally speed up the proces
 
 == Interoperability with non-concurrent containers
 
-As their internal data structure is basically the same, `boost::unordered_flat_map` can
-be efficiently move-constructed from `boost::concurrent_flat_map` and vice versa.
+As open-addressing and concurrent containers are based on the same internal data structure,
+`boost::unordered_flat_set` and `boost::unordered_flat_map` can
+be efficiently move-constructed from `boost::concurrent_flat_set` and `boost::concurrent_flat_map`,
+respectively, and vice versa.
 This interoperability comes in handy in multistage scenarios where parts of the data processing happen
 in parallel whereas other steps are non-concurrent (or non-modifying). In the following example,
 we want to construct a histogram from a huge input vector of words:
doc/unordered/concurrent_flat_set.adoc (new file, 1369 lines)
File diff suppressed because it is too large
@@ -44,7 +44,8 @@ boost::unordered_flat_map
 
 ^.^h|*Concurrent*
 ^|
-^| `boost::concurrent_flat_map`
+^| `boost::concurrent_flat_set` +
+`boost::concurrent_flat_map`
 
 |===
 
@@ -56,9 +57,8 @@ in the market within the technical constraints imposed by the required standard
 interface to accommodate the implementation.
 There are two variants: **flat** (the fastest) and **node-based**, which
 provide pointer stability under rehashing at the expense of being slower.
-* Finally, `boost::concurrent_flat_map` (the only **concurrent container** provided
-at present) is a hashmap designed and implemented to be used in high-performance
-multithreaded scenarios. Its interface is radically different from that of regular C++ containers.
+* Finally, **concurrent containers** are designed and implemented to be used in high-performance
+multithreaded scenarios. Their interface is radically different from that of regular C++ containers.
 
 All sets and maps in Boost.Unordered are instantiated similarly as
 `std::unordered_set` and `std::unordered_map`, respectively:
@@ -73,6 +73,7 @@ namespace boost {
 class Alloc = std::allocator<Key> >
 class unordered_set;
 // same for unordered_multiset, unordered_flat_set, unordered_node_set
+// and concurrent_flat_set
 
 template <
 class Key, class Mapped,
@@ -121,7 +121,7 @@ for Visual Studio on an x64-mode Intel CPU with SSE2 and for GCC on an IBM s390x
 == Concurrent Containers
 
 The same data structure used by Boost.Unordered open-addressing containers has been chosen
-also as the foundation of `boost::concurrent_flat_map`:
+also as the foundation of `boost::concurrent_flat_set` and `boost::concurrent_flat_map`:
 
 * Open-addressing is faster than closed-addressing alternatives, both in non-concurrent and
 concurrent scenarios.
@@ -135,7 +135,7 @@ and vice versa.
 
 === Hash Function and Platform Interoperability
 
-`boost::concurrent_flat_map` makes the same decisions and provides the same guarantees
+Concurrent containers make the same decisions and provide the same guarantees
 as Boost.Unordered open-addressing containers with regards to
 xref:#rationale_hash_function[hash function defaults] and
 xref:#rationale_platform_interoperability[platform interoperability].
@@ -11,3 +11,4 @@ include::unordered_flat_set.adoc[]
 include::unordered_node_map.adoc[]
 include::unordered_node_set.adoc[]
 include::concurrent_flat_map.adoc[]
+include::concurrent_flat_set.adoc[]
@@ -67,8 +67,8 @@ xref:#rationale_closed_addressing_containers[corresponding section].
 
 == Open-addressing Containers
 
-The diagram shows the basic internal layout of `boost::unordered_flat_map`/`unordered_node_map` and
-`boost::unordered_flat_set`/`unordered_node_set`.
+The diagram shows the basic internal layout of `boost::unordered_flat_set`/`unordered_node_set` and
+`boost::unordered_flat_map`/`unordered_node_map`.
 
 
 [#img-foa-layout]
@@ -76,7 +76,7 @@ The diagram shows the basic internal layout of `boost::unordered_flat_map`/`unor
 image::foa.png[align=center]
 
 As with all open-addressing containers, elements (or pointers to the element nodes in the case of
-`boost::unordered_node_map` and `boost::unordered_node_set`) are stored directly in the bucket array.
+`boost::unordered_node_set` and `boost::unordered_node_map`) are stored directly in the bucket array.
 This array is logically divided into 2^_n_^ _groups_ of 15 elements each.
 In addition to the bucket array, there is an associated _metadata array_ with 2^_n_^
 16-byte words.
@@ -129,7 +129,7 @@ xref:#rationale_open_addresing_containers[corresponding section].
 
 == Concurrent Containers
 
-`boost::concurrent_flat_map` uses the basic
+`boost::concurrent_flat_set` and `boost::concurrent_flat_map` use the basic
 xref:#structures_open_addressing_containers[open-addressing layout] described above
 augmented with synchronization mechanisms.
 
@@ -71,7 +71,7 @@ namespace boost {
 xref:#unordered_flat_set_iterator_range_constructor_with_allocator[unordered_flat_set](InputIterator f, InputIterator l, const allocator_type& a);
 explicit xref:#unordered_flat_set_allocator_constructor[unordered_flat_set](const Allocator& a);
 xref:#unordered_flat_set_copy_constructor_with_allocator[unordered_flat_set](const unordered_flat_set& other, const Allocator& a);
 xref:#unordered_flat_set_move_constructor_with_allocator[unordered_flat_set](unordered_flat_set&& other, const Allocator& a);
+xref:#unordered_flat_set_move_constructor_from_concurrent_flat_set[unordered_flat_set](concurrent_flat_set<Key, Hash, Pred, Allocator>&& other);
 xref:#unordered_flat_set_initializer_list_constructor[unordered_flat_set](std::initializer_list<value_type> il,
 size_type n = _implementation-defined_,
 const hasher& hf = hasher(),
@@ -422,6 +422,22 @@ from `other`, and the allocator is copy-constructed from `a`.
 
 ---
 
+==== Move Constructor from concurrent_flat_set
+
+```c++
+unordered_flat_set(concurrent_flat_set<Key, Hash, Pred, Allocator>&& other);
+```
+
+Move construction from a xref:#concurrent_flat_set[`concurrent_flat_set`].
+The internal bucket array of `other` is transferred directly to the new container.
+The hash function, predicate and allocator are move-constructed from `other`.
+
+[horizontal]
+Complexity:;; Constant time.
+Concurrency:;; Blocking on `other`.
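For illustration only, a minimal usage sketch of this constructor (element values are hypothetical):

[source,c++]
----
#include <boost/unordered/concurrent_flat_set.hpp>
#include <boost/unordered/unordered_flat_set.hpp>
#include <string>
#include <utility>

int main()
{
  boost::concurrent_flat_set<std::string> cs;
  cs.insert("alpha");
  cs.insert("beta");

  // The bucket array of cs is transferred directly to fs:
  // constant time, blocking on cs.
  boost::unordered_flat_set<std::string> fs(std::move(cs));
}
----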
+
+---
+
 ==== Initializer List Constructor
 [source,c++,subs="+quotes"]
 ----