Merge pull request #212 from boostorg/feature/concurrent_flat_set

Feature/concurrent_flat_set
This commit is contained in:
joaquintides
2023-09-16 18:36:36 +02:00
committed by GitHub
42 changed files with 4681 additions and 1368 deletions


@@ -6,14 +6,15 @@
 :github-pr-url: https://github.com/boostorg/unordered/pull
 :cpp: C++
-== Release 1.84.0
+== Release 1.84.0 - Major update
+* Added `boost::concurrent_flat_set`.
-* Added `[c]visit_while` operations to `boost::concurrent_map`,
+* Added `[c]visit_while` operations to concurrent containers,
 with serial and parallel variants.
-* Added efficient move construction of `boost::unordered_flat_map` from
-`boost::concurrent_flat_map` and vice versa.
+* Added efficient move construction of `boost::unordered_flat_(map|set)` from
+`boost::concurrent_flat_(map|set)` and vice versa.
-* Added debug mode mechanisms for detecting illegal reentrancies into
-a `boost::concurrent_flat_map` from user code.
+* Added debug-mode mechanisms for detecting illegal reentrancies into
+a concurrent container from user code.
 * Added Boost.Serialization support to all containers and their (non-local) iterator types.
 * Added support for fancy pointers to open-addressing and concurrent containers.
 This enables scenarios like the use of Boost.Interprocess allocators to construct containers in shared memory.


@@ -148,14 +148,14 @@ The main differences with C++ unordered associative containers are:
 == Concurrent Containers
-There is currently no specification in the C++ standard for this or any other concurrent
-data structure. `boost::concurrent_flat_map` takes the same template parameters as `std::unordered_map`
-and all the maps provided by Boost.Unordered, and its API is modelled after that of
-`boost::unordered_flat_map` with the crucial difference that iterators are not provided
+There is currently no specification in the C++ standard for this or any other type of concurrent
+data structure. The APIs of `boost::concurrent_flat_set` and `boost::concurrent_flat_map`
+are modelled after `boost::unordered_flat_set` and `boost::unordered_flat_map`, respectively,
+with the crucial difference that iterators are not provided
 due to their inherent problems in concurrent scenarios (high contention, prone to deadlocking):
-so, `boost::concurrent_flat_map` is technically not a
+so, Boost.Unordered concurrent containers are technically not models of
 https://en.cppreference.com/w/cpp/named_req/Container[Container^], although
-it meets all the requirements of https://en.cppreference.com/w/cpp/named_req/AllocatorAwareContainer[AllocatorAware^]
+they meet all the requirements of https://en.cppreference.com/w/cpp/named_req/AllocatorAwareContainer[AllocatorAware^]
 containers except those implying iterators.
 In a non-concurrent unordered container, iterators serve two main purposes:
@@ -163,7 +163,7 @@ In a non-concurrent unordered container, iterators serve two main purposes:
 * Access to an element previously located via lookup.
 * Container traversal.
-In place of iterators, `boost::concurrent_flat_map` uses _internal visitation_
+In place of iterators, `boost::concurrent_flat_set` and `boost::concurrent_flat_map` use _internal visitation_
 facilities as a thread-safe substitute. Classical operations returning an iterator to an
 element already existing in the container, like for instance:
@@ -191,15 +191,15 @@ template<class F> size_t visit_all(F f);
 ----
 of which there are parallelized versions in C++17 compilers with parallel
-algorithm support. In general, the interface of `boost::concurrent_flat_map`
-is derived from that of `boost::unordered_flat_map` by a fairly straightforward
-process of replacing iterators with visitation where applicable. If
-`iterator` and `const_iterator` provide mutable and const access to elements,
+algorithm support. In general, the interface of concurrent containers
+is derived from that of their non-concurrent counterparts by a fairly straightforward
+process of replacing iterators with visitation where applicable. If for
+regular maps `iterator` and `const_iterator` provide mutable and const access to elements,
 respectively, here visitation is granted mutable or const access depending on
 the constness of the member function used (there are also `*cvisit` overloads for
-explicit const visitation).
+explicit const visitation); in the case of `boost::concurrent_flat_set`, visitation is always const.
-The one notable operation not provided is `operator[]`/`at`, which can be
+One notable operation not provided by `boost::concurrent_flat_map` is `operator[]`/`at`, which can be
 replaced, if in a more convoluted manner, by
 xref:#concurrent_flat_map_try_emplace_or_cvisit[`try_emplace_or_visit`].


@@ -3,8 +3,8 @@
 :idprefix: concurrent_
-Boost.Unordered currently provides just one concurrent container named `boost::concurrent_flat_map`.
-`boost::concurrent_flat_map` is a hash table that allows concurrent write/read access from
+Boost.Unordered provides `boost::concurrent_flat_set` and `boost::concurrent_flat_map`,
+hash tables that allow concurrent write/read access from
 different threads without having to implement any synchronization mechanism on the user's side.
 [source,c++]
@@ -36,16 +36,16 @@ In the example above, threads access `m` without synchronization, just as we'd d
 single-threaded scenario. In an ideal setting, if a given workload is distributed among
 _N_ threads, execution is _N_ times faster than with one thread —this limit is
 never attained in practice due to synchronization overheads and _contention_ (one thread
-waiting for another to leave a locked portion of the map), but `boost::concurrent_flat_map`
-is designed to perform with very little overhead and typically achieves _linear scaling_
+waiting for another to leave a locked portion of the map), but Boost.Unordered concurrent containers
+are designed to perform with very little overhead and typically achieve _linear scaling_
 (that is, performance is proportional to the number of threads up to the number of
 logical cores in the CPU).
 == Visitation-based API
-The first thing a new user of `boost::concurrent_flat_map` will notice is that this
-class _does not provide iterators_ (which makes it technically
-not a https://en.cppreference.com/w/cpp/named_req/Container[Container^]
+The first thing a new user of `boost::concurrent_flat_set` or `boost::concurrent_flat_map`
+will notice is that these classes _do not provide iterators_ (which makes them technically
+not https://en.cppreference.com/w/cpp/named_req/Container[Containers^]
 in the C++ standard sense). The reason for this is that iterators are inherently
 thread-unsafe. Consider this hypothetical code:
@@ -73,7 +73,7 @@ m.visit(k, [](const auto& x) { // x is the element with key k (if it exists)
 ----
 The visitation function passed by the user (in this case, a lambda function)
-is executed internally by `boost::concurrent_flat_map` in
+is executed internally by Boost.Unordered in
 a thread-safe manner, so it can access the element without worrying about other
 threads interfering in the process.
@@ -112,7 +112,7 @@ if (found) {
 }
 ----
-Visitation is prominent in the API provided by `boost::concurrent_flat_map`, and
+Visitation is prominent in the API provided by `boost::concurrent_flat_set` and `boost::concurrent_flat_map`, and
 many classical operations have visitation-enabled variations:
 [source,c++]
@@ -129,13 +129,17 @@ the element: as a general rule, operations on a `boost::concurrent_flat_map` `m`
 will grant visitation functions const/non-const access to the element depending on whether
 `m` is const/non-const. Const access can always be explicitly requested
 by using `cvisit` overloads (for instance, `insert_or_cvisit`) and may result
-in higher parallelization. Consult the xref:#concurrent_flat_map[reference]
-for a complete list of available operations.
+in higher parallelization. For `boost::concurrent_flat_set`, on the other hand,
+visitation is always const access.
+Consult the references of
+xref:#concurrent_flat_set[`boost::concurrent_flat_set`] and
+xref:#concurrent_flat_map[`boost::concurrent_flat_map`]
+for the complete list of visitation-enabled operations.
 == Whole-Table Visitation
-In the absence of iterators, `boost::concurrent_flat_map` provides `visit_all`
-as an alternative way to process all the elements in the map:
+In the absence of iterators, `visit_all` is provided
+as an alternative way to process all the elements in the container:
 [source,c++]
 ----
@@ -187,12 +191,12 @@ m.erase_if([](auto& x) {
 `visit_while` and `erase_if` can also be parallelized. Note that, in order to increase efficiency,
 whole-table visitation operations do not block the table during execution: this implies that elements
 may be inserted, modified or erased by other threads during visitation. It is
-advisable not to assume too much about the exact global state of a `boost::concurrent_flat_map`
+advisable not to assume too much about the exact global state of a concurrent container
 at any point in your program.
 == Blocking Operations
-``boost::concurrent_flat_map``s can be copied, assigned, cleared and merged just like any
+``boost::concurrent_flat_set``s and ``boost::concurrent_flat_map``s can be copied, assigned, cleared and merged just like any
 Boost.Unordered container. Unlike most other operations, these are _blocking_,
 that is, all other threads are prevented from accessing the tables involved while a copy, assignment,
 clear or merge operation is in progress. Blocking is taken care of automatically by the library
@@ -204,8 +208,10 @@ reserving space in advance of bulk insertions will generally speed up the proces
 == Interoperability with non-concurrent containers
-As their internal data structure is basically the same, `boost::unordered_flat_map` can
-be efficiently move-constructed from `boost::concurrent_flat_map` and vice versa.
+As open-addressing and concurrent containers are based on the same internal data structure,
+`boost::unordered_flat_set` and `boost::unordered_flat_map` can
+be efficiently move-constructed from `boost::concurrent_flat_set` and `boost::concurrent_flat_map`,
+respectively, and vice versa.
 This interoperability comes in handy in multistage scenarios where parts of the data processing happen
 in parallel whereas other steps are non-concurrent (or non-modifying). In the following example,
 we want to construct a histogram from a huge input vector of words:

File diff suppressed because it is too large.


@@ -44,7 +44,8 @@ boost::unordered_flat_map
 ^.^h|*Concurrent*
 ^|
-^| `boost::concurrent_flat_map`
+^| `boost::concurrent_flat_set` +
+`boost::concurrent_flat_map`
 |===
@@ -56,9 +57,8 @@ in the market within the technical constraints imposed by the required standard
 interface to accommodate the implementation.
 There are two variants: **flat** (the fastest) and **node-based**, which
 provide pointer stability under rehashing at the expense of being slower.
-* Finally, `boost::concurrent_flat_map` (the only **concurrent container** provided
-at present) is a hashmap designed and implemented to be used in high-performance
-multithreaded scenarios. Its interface is radically different from that of regular C++ containers.
+* Finally, **concurrent containers** are designed and implemented to be used in high-performance
+multithreaded scenarios. Their interface is radically different from that of regular C++ containers.
 All sets and maps in Boost.Unordered are instantiated similarly to
 `std::unordered_set` and `std::unordered_map`, respectively:
@@ -73,6 +73,7 @@ namespace boost {
 class Alloc = std::allocator<Key> >
 class unordered_set;
 // same for unordered_multiset, unordered_flat_set, unordered_node_set
+// and concurrent_flat_set
 template <
 class Key, class Mapped,


@@ -121,7 +121,7 @@ for Visual Studio on an x64-mode Intel CPU with SSE2 and for GCC on an IBM s390x
 == Concurrent Containers
 The same data structure used by Boost.Unordered open-addressing containers has been chosen
-also as the foundation of `boost::concurrent_flat_map`:
+also as the foundation of `boost::concurrent_flat_set` and `boost::concurrent_flat_map`:
 * Open-addressing is faster than closed-addressing alternatives, both in non-concurrent and
 concurrent scenarios.
@@ -135,7 +135,7 @@ and vice versa.
 === Hash Function and Platform Interoperability
-`boost::concurrent_flat_map` makes the same decisions and provides the same guarantees
+Concurrent containers make the same decisions and provide the same guarantees
 as Boost.Unordered open-addressing containers with regard to
 xref:#rationale_hash_function[hash function defaults] and
 xref:#rationale_platform_interoperability[platform interoperability].


@@ -11,3 +11,4 @@ include::unordered_flat_set.adoc[]
 include::unordered_node_map.adoc[]
 include::unordered_node_set.adoc[]
 include::concurrent_flat_map.adoc[]
+include::concurrent_flat_set.adoc[]


@@ -67,8 +67,8 @@ xref:#rationale_closed_addressing_containers[corresponding section].
 == Open-addressing Containers
-The diagram shows the basic internal layout of `boost::unordered_flat_map`/`unordered_node_map` and
-`boost:unordered_flat_set`/`unordered_node_set`.
+The diagram shows the basic internal layout of `boost::unordered_flat_set`/`unordered_node_set` and
+`boost::unordered_flat_map`/`unordered_node_map`.
 [#img-foa-layout]
@@ -76,7 +76,7 @@ The diagram shows the basic internal layout of `boost::unordered_flat_map`/`unor
 image::foa.png[align=center]
 As with all open-addressing containers, elements (or pointers to the element nodes in the case of
-`boost::unordered_node_map` and `boost::unordered_node_set`) are stored directly in the bucket array.
+`boost::unordered_node_set` and `boost::unordered_node_map`) are stored directly in the bucket array.
 This array is logically divided into 2^_n_^ _groups_ of 15 elements each.
 In addition to the bucket array, there is an associated _metadata array_ with 2^_n_^
 16-byte words.
@@ -129,7 +129,7 @@ xref:#rationale_open_addresing_containers[corresponding section].
 == Concurrent Containers
-`boost::concurrent_flat_map` uses the basic
+`boost::concurrent_flat_set` and `boost::concurrent_flat_map` use the basic
 xref:#structures_open_addressing_containers[open-addressing layout] described above
 augmented with synchronization mechanisms.


@@ -71,7 +71,7 @@ namespace boost {
 xref:#unordered_flat_set_iterator_range_constructor_with_allocator[unordered_flat_set](InputIterator f, InputIterator l, const allocator_type& a);
 explicit xref:#unordered_flat_set_allocator_constructor[unordered_flat_set](const Allocator& a);
 xref:#unordered_flat_set_copy_constructor_with_allocator[unordered_flat_set](const unordered_flat_set& other, const Allocator& a);
-xref:#unordered_flat_set_move_constructor_with_allocator[unordered_flat_set](unordered_flat_set&& other, const Allocator& a);
+xref:#unordered_flat_set_move_constructor_from_concurrent_flat_set[unordered_flat_set](concurrent_flat_set<Key, Hash, Pred, Allocator>&& other);
 xref:#unordered_flat_set_initializer_list_constructor[unordered_flat_set](std::initializer_list<value_type> il,
 size_type n = _implementation-defined_
 const hasher& hf = hasher(),
@@ -422,6 +422,22 @@ from `other`, and the allocator is copy-constructed from `a`.
 ---
+==== Move Constructor from concurrent_flat_set
+```c++
+unordered_flat_set(concurrent_flat_set<Key, Hash, Pred, Allocator>&& other);
+```
+Move construction from a xref:#concurrent_flat_set[`concurrent_flat_set`].
+The internal bucket array of `other` is transferred directly to the new container.
+The hash function, predicate and allocator are move-constructed from `other`.
+[horizontal]
+Complexity:;; Constant time.
+Concurrency:;; Blocking on `other`.
+---
 ==== Initializer List Constructor
 [source,c++,subs="+quotes"]
 ----


@@ -1,4 +1,4 @@
-/* Fast open-addressing concurrent hash table.
+/* Fast open-addressing concurrent hashmap.
 *
 * Copyright 2023 Christian Mazakas.
 * Distributed under the Boost Software License, Version 1.0.
@@ -12,6 +12,7 @@
 #define BOOST_UNORDERED_CONCURRENT_FLAT_MAP_HPP
 #include <boost/unordered/concurrent_flat_map_fwd.hpp>
+#include <boost/unordered/detail/concurrent_static_asserts.hpp>
 #include <boost/unordered/detail/foa/concurrent_table.hpp>
 #include <boost/unordered/detail/foa/flat_map_types.hpp>
 #include <boost/unordered/detail/type_traits.hpp>
@@ -20,65 +21,12 @@
 #include <boost/container_hash/hash.hpp>
 #include <boost/core/allocator_access.hpp>
 #include <boost/core/serialization.hpp>
-#include <boost/mp11/algorithm.hpp>
-#include <boost/mp11/list.hpp>
 #include <boost/type_traits/type_identity.hpp>
-#include <functional>
 #include <type_traits>
-#include <utility>
-#define BOOST_UNORDERED_STATIC_ASSERT_INVOCABLE(F) \
-  static_assert(boost::unordered::detail::is_invocable<F, value_type&>::value, \
-    "The provided Callable must be invocable with value_type&");
-#define BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) \
-  static_assert( \
-    boost::unordered::detail::is_invocable<F, value_type const&>::value, \
-    "The provided Callable must be invocable with value_type const&");
-#if BOOST_CXX_VERSION >= 202002L
-#define BOOST_UNORDERED_STATIC_ASSERT_EXEC_POLICY(P) \
-  static_assert(!std::is_base_of<std::execution::parallel_unsequenced_policy, \
-    ExecPolicy>::value, \
-    "ExecPolicy must be sequenced."); \
-  static_assert( \
-    !std::is_base_of<std::execution::unsequenced_policy, ExecPolicy>::value, \
-    "ExecPolicy must be sequenced.");
-#else
-#define BOOST_UNORDERED_STATIC_ASSERT_EXEC_POLICY(P) \
-  static_assert(!std::is_base_of<std::execution::parallel_unsequenced_policy, \
-    ExecPolicy>::value, \
-    "ExecPolicy must be sequenced.");
-#endif
-#define BOOST_UNORDERED_COMMA ,
-#define BOOST_UNORDERED_LAST_ARG(Arg, Args) \
-  mp11::mp_back<mp11::mp_list<Arg BOOST_UNORDERED_COMMA Args> >
-#define BOOST_UNORDERED_STATIC_ASSERT_LAST_ARG_INVOCABLE(Arg, Args) \
-  BOOST_UNORDERED_STATIC_ASSERT_INVOCABLE(BOOST_UNORDERED_LAST_ARG(Arg, Args))
-#define BOOST_UNORDERED_STATIC_ASSERT_LAST_ARG_CONST_INVOCABLE(Arg, Args) \
-  BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE( \
-    BOOST_UNORDERED_LAST_ARG(Arg, Args))
 namespace boost {
 namespace unordered {
-namespace detail {
-  template <class F, class... Args>
-  struct is_invocable
-    : std::is_constructible<std::function<void(Args...)>,
-        std::reference_wrapper<typename std::remove_reference<F>::type> >
-  {
-  };
-} // namespace detail
 template <class Key, class T, class Hash, class Pred, class Allocator>
 class concurrent_flat_map
 {
@@ -479,6 +427,7 @@ namespace boost {
 BOOST_FORCEINLINE auto insert_or_visit(Ty&& value, F f)
   -> decltype(table_.insert_or_visit(std::forward<Ty>(value), f))
 {
+  BOOST_UNORDERED_STATIC_ASSERT_INVOCABLE(F)
   return table_.insert_or_visit(std::forward<Ty>(value), f);
 }
@@ -533,7 +482,7 @@ namespace boost {
 void insert_or_cvisit(std::initializer_list<value_type> ilist, F f)
 {
   BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F)
-  this->insert_or_visit(ilist.begin(), ilist.end(), f);
+  this->insert_or_cvisit(ilist.begin(), ilist.end(), f);
 }
 template <class... Args> BOOST_FORCEINLINE bool emplace(Args&&... args)
@@ -882,12 +831,4 @@ namespace boost {
 using unordered::concurrent_flat_map;
 } // namespace boost
-#undef BOOST_UNORDERED_STATIC_ASSERT_INVOCABLE
-#undef BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE
-#undef BOOST_UNORDERED_STATIC_ASSERT_EXEC_POLICY
-#undef BOOST_UNORDERED_COMMA
-#undef BOOST_UNORDERED_LAST_ARG
-#undef BOOST_UNORDERED_STATIC_ASSERT_LAST_ARG_INVOCABLE
-#undef BOOST_UNORDERED_STATIC_ASSERT_LAST_ARG_CONST_INVOCABLE
 #endif // BOOST_UNORDERED_CONCURRENT_FLAT_MAP_HPP


@@ -1,4 +1,4 @@
-/* Fast open-addressing concurrent hash table.
+/* Fast open-addressing concurrent hashmap.
 *
 * Copyright 2023 Christian Mazakas.
 * Distributed under the Boost Software License, Version 1.0.


@@ -0,0 +1,697 @@
/* Fast open-addressing concurrent hashset.
*
* Copyright 2023 Christian Mazakas.
* Copyright 2023 Joaquin M Lopez Munoz.
* Distributed under the Boost Software License, Version 1.0.
* (See accompanying file LICENSE_1_0.txt or copy at
* http://www.boost.org/LICENSE_1_0.txt)
*
* See https://www.boost.org/libs/unordered for library home page.
*/
#ifndef BOOST_UNORDERED_CONCURRENT_FLAT_SET_HPP
#define BOOST_UNORDERED_CONCURRENT_FLAT_SET_HPP
#include <boost/unordered/concurrent_flat_set_fwd.hpp>
#include <boost/unordered/detail/concurrent_static_asserts.hpp>
#include <boost/unordered/detail/foa/concurrent_table.hpp>
#include <boost/unordered/detail/foa/flat_set_types.hpp>
#include <boost/unordered/detail/type_traits.hpp>
#include <boost/unordered/unordered_flat_set_fwd.hpp>
#include <boost/container_hash/hash.hpp>
#include <boost/core/allocator_access.hpp>
#include <boost/core/serialization.hpp>
#include <boost/type_traits/type_identity.hpp>
#include <utility>
namespace boost {
namespace unordered {
template <class Key, class Hash, class Pred, class Allocator>
class concurrent_flat_set
{
private:
template <class Key2, class Hash2, class Pred2, class Allocator2>
friend class concurrent_flat_set;
template <class Key2, class Hash2, class Pred2, class Allocator2>
friend class unordered_flat_set;
using type_policy = detail::foa::flat_set_types<Key>;
detail::foa::concurrent_table<type_policy, Hash, Pred, Allocator> table_;
template <class K, class H, class KE, class A>
bool friend operator==(concurrent_flat_set<K, H, KE, A> const& lhs,
concurrent_flat_set<K, H, KE, A> const& rhs);
template <class K, class H, class KE, class A, class Predicate>
friend typename concurrent_flat_set<K, H, KE, A>::size_type erase_if(
concurrent_flat_set<K, H, KE, A>& set, Predicate pred);
template<class Archive, class K, class H, class KE, class A>
friend void serialize(
Archive& ar, concurrent_flat_set<K, H, KE, A>& c,
unsigned int version);
public:
using key_type = Key;
using value_type = typename type_policy::value_type;
using init_type = typename type_policy::init_type;
using size_type = std::size_t;
using difference_type = std::ptrdiff_t;
using hasher = typename boost::type_identity<Hash>::type;
using key_equal = typename boost::type_identity<Pred>::type;
using allocator_type = typename boost::type_identity<Allocator>::type;
using reference = value_type&;
using const_reference = value_type const&;
using pointer = typename boost::allocator_pointer<allocator_type>::type;
using const_pointer =
typename boost::allocator_const_pointer<allocator_type>::type;
concurrent_flat_set()
: concurrent_flat_set(detail::foa::default_bucket_count)
{
}
explicit concurrent_flat_set(size_type n, const hasher& hf = hasher(),
const key_equal& eql = key_equal(),
const allocator_type& a = allocator_type())
: table_(n, hf, eql, a)
{
}
template <class InputIterator>
concurrent_flat_set(InputIterator f, InputIterator l,
size_type n = detail::foa::default_bucket_count,
const hasher& hf = hasher(), const key_equal& eql = key_equal(),
const allocator_type& a = allocator_type())
: table_(n, hf, eql, a)
{
this->insert(f, l);
}
concurrent_flat_set(concurrent_flat_set const& rhs)
: table_(rhs.table_,
boost::allocator_select_on_container_copy_construction(
rhs.get_allocator()))
{
}
concurrent_flat_set(concurrent_flat_set&& rhs)
: table_(std::move(rhs.table_))
{
}
template <class InputIterator>
concurrent_flat_set(
InputIterator f, InputIterator l, allocator_type const& a)
: concurrent_flat_set(f, l, 0, hasher(), key_equal(), a)
{
}
explicit concurrent_flat_set(allocator_type const& a)
: table_(detail::foa::default_bucket_count, hasher(), key_equal(), a)
{
}
concurrent_flat_set(
concurrent_flat_set const& rhs, allocator_type const& a)
: table_(rhs.table_, a)
{
}
concurrent_flat_set(concurrent_flat_set&& rhs, allocator_type const& a)
: table_(std::move(rhs.table_), a)
{
}
concurrent_flat_set(std::initializer_list<value_type> il,
size_type n = detail::foa::default_bucket_count,
const hasher& hf = hasher(), const key_equal& eql = key_equal(),
const allocator_type& a = allocator_type())
: concurrent_flat_set(n, hf, eql, a)
{
this->insert(il.begin(), il.end());
}
concurrent_flat_set(size_type n, const allocator_type& a)
: concurrent_flat_set(n, hasher(), key_equal(), a)
{
}
concurrent_flat_set(
size_type n, const hasher& hf, const allocator_type& a)
: concurrent_flat_set(n, hf, key_equal(), a)
{
}
template <typename InputIterator>
concurrent_flat_set(
InputIterator f, InputIterator l, size_type n, const allocator_type& a)
: concurrent_flat_set(f, l, n, hasher(), key_equal(), a)
{
}
template <typename InputIterator>
concurrent_flat_set(InputIterator f, InputIterator l, size_type n,
const hasher& hf, const allocator_type& a)
: concurrent_flat_set(f, l, n, hf, key_equal(), a)
{
}
concurrent_flat_set(
std::initializer_list<value_type> il, const allocator_type& a)
: concurrent_flat_set(
il, detail::foa::default_bucket_count, hasher(), key_equal(), a)
{
}
concurrent_flat_set(std::initializer_list<value_type> il, size_type n,
const allocator_type& a)
: concurrent_flat_set(il, n, hasher(), key_equal(), a)
{
}
concurrent_flat_set(std::initializer_list<value_type> il, size_type n,
const hasher& hf, const allocator_type& a)
: concurrent_flat_set(il, n, hf, key_equal(), a)
{
}
concurrent_flat_set(
unordered_flat_set<Key, Hash, Pred, Allocator>&& other)
: table_(std::move(other.table_))
{
}
~concurrent_flat_set() = default;
concurrent_flat_set& operator=(concurrent_flat_set const& rhs)
{
table_ = rhs.table_;
return *this;
}
concurrent_flat_set& operator=(concurrent_flat_set&& rhs)
noexcept(boost::allocator_is_always_equal<Allocator>::type::value ||
boost::allocator_propagate_on_container_move_assignment<
Allocator>::type::value)
{
table_ = std::move(rhs.table_);
return *this;
}
concurrent_flat_set& operator=(std::initializer_list<value_type> ilist)
{
table_ = ilist;
return *this;
}
/// Capacity
///
size_type size() const noexcept { return table_.size(); }
size_type max_size() const noexcept { return table_.max_size(); }
BOOST_ATTRIBUTE_NODISCARD bool empty() const noexcept
{
return size() == 0;
}
template <class F>
BOOST_FORCEINLINE size_type visit(key_type const& k, F f) const
{
BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F)
return table_.visit(k, f);
}
template <class F>
BOOST_FORCEINLINE size_type cvisit(key_type const& k, F f) const
{
BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F)
return table_.visit(k, f);
}
template <class K, class F>
BOOST_FORCEINLINE typename std::enable_if<
detail::are_transparent<K, hasher, key_equal>::value, size_type>::type
visit(K&& k, F f) const
{
BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F)
return table_.visit(std::forward<K>(k), f);
}
template <class K, class F>
BOOST_FORCEINLINE typename std::enable_if<
detail::are_transparent<K, hasher, key_equal>::value, size_type>::type
cvisit(K&& k, F f) const
{
BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F)
return table_.visit(std::forward<K>(k), f);
}
template <class F> size_type visit_all(F f) const
{
BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F)
return table_.visit_all(f);
}
template <class F> size_type cvisit_all(F f) const
{
BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F)
return table_.cvisit_all(f);
}
#if defined(BOOST_UNORDERED_PARALLEL_ALGORITHMS)
template <class ExecPolicy, class F>
typename std::enable_if<detail::is_execution_policy<ExecPolicy>::value,
void>::type
visit_all(ExecPolicy&& p, F f) const
{
BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F)
BOOST_UNORDERED_STATIC_ASSERT_EXEC_POLICY(ExecPolicy)
table_.visit_all(p, f);
}
template <class ExecPolicy, class F>
typename std::enable_if<detail::is_execution_policy<ExecPolicy>::value,
void>::type
cvisit_all(ExecPolicy&& p, F f) const
{
BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F)
BOOST_UNORDERED_STATIC_ASSERT_EXEC_POLICY(ExecPolicy)
table_.cvisit_all(p, f);
}
#endif
template <class F> bool visit_while(F f) const
{
BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F)
return table_.visit_while(f);
}
template <class F> bool cvisit_while(F f) const
{
BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F)
return table_.cvisit_while(f);
}
#if defined(BOOST_UNORDERED_PARALLEL_ALGORITHMS)
template <class ExecPolicy, class F>
typename std::enable_if<detail::is_execution_policy<ExecPolicy>::value,
bool>::type
visit_while(ExecPolicy&& p, F f) const
{
BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F)
BOOST_UNORDERED_STATIC_ASSERT_EXEC_POLICY(ExecPolicy)
return table_.visit_while(p, f);
}
template <class ExecPolicy, class F>
typename std::enable_if<detail::is_execution_policy<ExecPolicy>::value,
bool>::type
cvisit_while(ExecPolicy&& p, F f) const
{
BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F)
BOOST_UNORDERED_STATIC_ASSERT_EXEC_POLICY(ExecPolicy)
return table_.cvisit_while(p, f);
}
#endif
/// Modifiers
///
BOOST_FORCEINLINE bool insert(value_type const& obj)
{
return table_.insert(obj);
}
BOOST_FORCEINLINE bool insert(value_type&& obj)
{
return table_.insert(std::move(obj));
}
template <class K>
BOOST_FORCEINLINE typename std::enable_if<
detail::are_transparent<K, hasher, key_equal>::value,
bool >::type
insert(K&& k)
{
return table_.try_emplace(std::forward<K>(k));
}
template <class InputIterator>
void insert(InputIterator begin, InputIterator end)
{
for (auto pos = begin; pos != end; ++pos) {
table_.emplace(*pos);
}
}
void insert(std::initializer_list<value_type> ilist)
{
this->insert(ilist.begin(), ilist.end());
}
template <class F>
BOOST_FORCEINLINE bool insert_or_visit(value_type const& obj, F f)
{
BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F)
return table_.insert_or_cvisit(obj, f);
}
template <class F>
BOOST_FORCEINLINE bool insert_or_visit(value_type&& obj, F f)
{
BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F)
return table_.insert_or_cvisit(std::move(obj), f);
}
template <class K, class F>
BOOST_FORCEINLINE typename std::enable_if<
detail::are_transparent<K, hasher, key_equal>::value,
bool >::type
insert_or_visit(K&& k, F f)
{
BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F)
return table_.try_emplace_or_cvisit(std::forward<K>(k), f);
}
template <class InputIterator, class F>
void insert_or_visit(InputIterator first, InputIterator last, F f)
{
BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F)
for (; first != last; ++first) {
table_.emplace_or_cvisit(*first, f);
}
}
template <class F>
void insert_or_visit(std::initializer_list<value_type> ilist, F f)
{
BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F)
this->insert_or_cvisit(ilist.begin(), ilist.end(), f);
}
template <class F>
BOOST_FORCEINLINE bool insert_or_cvisit(value_type const& obj, F f)
{
BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F)
return table_.insert_or_cvisit(obj, f);
}
template <class F>
BOOST_FORCEINLINE bool insert_or_cvisit(value_type&& obj, F f)
{
BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F)
return table_.insert_or_cvisit(std::move(obj), f);
}
template <class K, class F>
BOOST_FORCEINLINE typename std::enable_if<
detail::are_transparent<K, hasher, key_equal>::value,
bool >::type
insert_or_cvisit(K&& k, F f)
{
BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F)
return table_.try_emplace_or_cvisit(std::forward<K>(k), f);
}
template <class InputIterator, class F>
void insert_or_cvisit(InputIterator first, InputIterator last, F f)
{
BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F)
for (; first != last; ++first) {
table_.emplace_or_cvisit(*first, f);
}
}
template <class F>
void insert_or_cvisit(std::initializer_list<value_type> ilist, F f)
{
BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F)
this->insert_or_cvisit(ilist.begin(), ilist.end(), f);
}
template <class... Args> BOOST_FORCEINLINE bool emplace(Args&&... args)
{
return table_.emplace(std::forward<Args>(args)...);
}
template <class Arg, class... Args>
BOOST_FORCEINLINE bool emplace_or_visit(Arg&& arg, Args&&... args)
{
BOOST_UNORDERED_STATIC_ASSERT_LAST_ARG_CONST_INVOCABLE(Arg, Args...)
return table_.emplace_or_cvisit(
std::forward<Arg>(arg), std::forward<Args>(args)...);
}
template <class Arg, class... Args>
BOOST_FORCEINLINE bool emplace_or_cvisit(Arg&& arg, Args&&... args)
{
BOOST_UNORDERED_STATIC_ASSERT_LAST_ARG_CONST_INVOCABLE(Arg, Args...)
return table_.emplace_or_cvisit(
std::forward<Arg>(arg), std::forward<Args>(args)...);
}
BOOST_FORCEINLINE size_type erase(key_type const& k)
{
return table_.erase(k);
}
template <class K>
BOOST_FORCEINLINE typename std::enable_if<
detail::are_transparent<K, hasher, key_equal>::value, size_type>::type
erase(K&& k)
{
return table_.erase(std::forward<K>(k));
}
template <class F>
BOOST_FORCEINLINE size_type erase_if(key_type const& k, F f)
{
return table_.erase_if(k, f);
}
template <class K, class F>
BOOST_FORCEINLINE typename std::enable_if<
detail::are_transparent<K, hasher, key_equal>::value &&
!detail::is_execution_policy<K>::value,
size_type>::type
erase_if(K&& k, F f)
{
return table_.erase_if(std::forward<K>(k), f);
}
#if defined(BOOST_UNORDERED_PARALLEL_ALGORITHMS)
template <class ExecPolicy, class F>
typename std::enable_if<detail::is_execution_policy<ExecPolicy>::value,
void>::type
erase_if(ExecPolicy&& p, F f)
{
BOOST_UNORDERED_STATIC_ASSERT_EXEC_POLICY(ExecPolicy)
table_.erase_if(p, f);
}
#endif
template <class F> size_type erase_if(F f) { return table_.erase_if(f); }
void swap(concurrent_flat_set& other) noexcept(
boost::allocator_is_always_equal<Allocator>::type::value ||
boost::allocator_propagate_on_container_swap<Allocator>::type::value)
{
return table_.swap(other.table_);
}
void clear() noexcept { table_.clear(); }
template <typename H2, typename P2>
size_type merge(concurrent_flat_set<Key, H2, P2, Allocator>& x)
{
BOOST_ASSERT(get_allocator() == x.get_allocator());
return table_.merge(x.table_);
}
template <typename H2, typename P2>
size_type merge(concurrent_flat_set<Key, H2, P2, Allocator>&& x)
{
return merge(x);
}
BOOST_FORCEINLINE size_type count(key_type const& k) const
{
return table_.count(k);
}
  template <class K>
  BOOST_FORCEINLINE typename std::enable_if<
    detail::are_transparent<K, hasher, key_equal>::value, size_type>::type
  count(K const& k) const
  {
    return table_.count(k);
  }
BOOST_FORCEINLINE bool contains(key_type const& k) const
{
return table_.contains(k);
}
template <class K>
BOOST_FORCEINLINE typename std::enable_if<
detail::are_transparent<K, hasher, key_equal>::value, bool>::type
contains(K const& k) const
{
return table_.contains(k);
}
/// Hash Policy
///
size_type bucket_count() const noexcept { return table_.capacity(); }
float load_factor() const noexcept { return table_.load_factor(); }
float max_load_factor() const noexcept
{
return table_.max_load_factor();
}
void max_load_factor(float) {}
size_type max_load() const noexcept { return table_.max_load(); }
void rehash(size_type n) { table_.rehash(n); }
void reserve(size_type n) { table_.reserve(n); }
/// Observers
///
allocator_type get_allocator() const noexcept
{
return table_.get_allocator();
}
hasher hash_function() const { return table_.hash_function(); }
key_equal key_eq() const { return table_.key_eq(); }
};
template <class Key, class Hash, class KeyEqual, class Allocator>
bool operator==(
concurrent_flat_set<Key, Hash, KeyEqual, Allocator> const& lhs,
concurrent_flat_set<Key, Hash, KeyEqual, Allocator> const& rhs)
{
return lhs.table_ == rhs.table_;
}
template <class Key, class Hash, class KeyEqual, class Allocator>
bool operator!=(
concurrent_flat_set<Key, Hash, KeyEqual, Allocator> const& lhs,
concurrent_flat_set<Key, Hash, KeyEqual, Allocator> const& rhs)
{
return !(lhs == rhs);
}
template <class Key, class Hash, class Pred, class Alloc>
void swap(concurrent_flat_set<Key, Hash, Pred, Alloc>& x,
concurrent_flat_set<Key, Hash, Pred, Alloc>& y)
noexcept(noexcept(x.swap(y)))
{
x.swap(y);
}
template <class K, class H, class P, class A, class Predicate>
typename concurrent_flat_set<K, H, P, A>::size_type erase_if(
concurrent_flat_set<K, H, P, A>& c, Predicate pred)
{
return c.table_.erase_if(pred);
}
template<class Archive, class K, class H, class KE, class A>
void serialize(
Archive& ar, concurrent_flat_set<K, H, KE, A>& c, unsigned int)
{
ar & core::make_nvp("table",c.table_);
}
#if BOOST_UNORDERED_TEMPLATE_DEDUCTION_GUIDES
template <class InputIterator,
class Hash =
boost::hash<typename std::iterator_traits<InputIterator>::value_type>,
class Pred =
std::equal_to<typename std::iterator_traits<InputIterator>::value_type>,
class Allocator = std::allocator<
typename std::iterator_traits<InputIterator>::value_type>,
class = boost::enable_if_t<detail::is_input_iterator_v<InputIterator> >,
class = boost::enable_if_t<detail::is_hash_v<Hash> >,
class = boost::enable_if_t<detail::is_pred_v<Pred> >,
class = boost::enable_if_t<detail::is_allocator_v<Allocator> > >
concurrent_flat_set(InputIterator, InputIterator,
std::size_t = boost::unordered::detail::foa::default_bucket_count,
Hash = Hash(), Pred = Pred(), Allocator = Allocator())
-> concurrent_flat_set<
typename std::iterator_traits<InputIterator>::value_type, Hash, Pred,
Allocator>;
template <class T, class Hash = boost::hash<T>,
class Pred = std::equal_to<T>, class Allocator = std::allocator<T>,
class = boost::enable_if_t<detail::is_hash_v<Hash> >,
class = boost::enable_if_t<detail::is_pred_v<Pred> >,
class = boost::enable_if_t<detail::is_allocator_v<Allocator> > >
concurrent_flat_set(std::initializer_list<T>,
std::size_t = boost::unordered::detail::foa::default_bucket_count,
Hash = Hash(), Pred = Pred(), Allocator = Allocator())
-> concurrent_flat_set< T, Hash, Pred, Allocator>;
template <class InputIterator, class Allocator,
class = boost::enable_if_t<detail::is_input_iterator_v<InputIterator> >,
class = boost::enable_if_t<detail::is_allocator_v<Allocator> > >
concurrent_flat_set(InputIterator, InputIterator, std::size_t, Allocator)
-> concurrent_flat_set<
typename std::iterator_traits<InputIterator>::value_type,
boost::hash<typename std::iterator_traits<InputIterator>::value_type>,
std::equal_to<typename std::iterator_traits<InputIterator>::value_type>,
Allocator>;
template <class InputIterator, class Allocator,
class = boost::enable_if_t<detail::is_input_iterator_v<InputIterator> >,
class = boost::enable_if_t<detail::is_allocator_v<Allocator> > >
concurrent_flat_set(InputIterator, InputIterator, Allocator)
-> concurrent_flat_set<
typename std::iterator_traits<InputIterator>::value_type,
boost::hash<typename std::iterator_traits<InputIterator>::value_type>,
std::equal_to<typename std::iterator_traits<InputIterator>::value_type>,
Allocator>;
template <class InputIterator, class Hash, class Allocator,
class = boost::enable_if_t<detail::is_hash_v<Hash> >,
class = boost::enable_if_t<detail::is_input_iterator_v<InputIterator> >,
class = boost::enable_if_t<detail::is_allocator_v<Allocator> > >
concurrent_flat_set(
InputIterator, InputIterator, std::size_t, Hash, Allocator)
-> concurrent_flat_set<
typename std::iterator_traits<InputIterator>::value_type, Hash,
std::equal_to<typename std::iterator_traits<InputIterator>::value_type>,
Allocator>;
template <class T, class Allocator,
  class = boost::enable_if_t<detail::is_allocator_v<Allocator> > >
concurrent_flat_set(std::initializer_list<T>, std::size_t, Allocator)
  -> concurrent_flat_set<T, boost::hash<T>, std::equal_to<T>, Allocator>;
template <class T, class Allocator,
  class = boost::enable_if_t<detail::is_allocator_v<Allocator> > >
concurrent_flat_set(std::initializer_list<T>, Allocator)
  -> concurrent_flat_set<T, boost::hash<T>, std::equal_to<T>, Allocator>;
template <class T, class Hash, class Allocator,
  class = boost::enable_if_t<detail::is_hash_v<Hash> >,
  class = boost::enable_if_t<detail::is_allocator_v<Allocator> > >
concurrent_flat_set(std::initializer_list<T>, std::size_t, Hash, Allocator)
  -> concurrent_flat_set<T, Hash, std::equal_to<T>, Allocator>;
#endif
} // namespace unordered
using unordered::concurrent_flat_set;
} // namespace boost
#endif // BOOST_UNORDERED_CONCURRENT_FLAT_SET_HPP


@@ -0,0 +1,55 @@
/* Fast open-addressing concurrent hashset.
*
* Copyright 2023 Christian Mazakas.
* Copyright 2023 Joaquin M Lopez Munoz.
* Distributed under the Boost Software License, Version 1.0.
* (See accompanying file LICENSE_1_0.txt or copy at
* http://www.boost.org/LICENSE_1_0.txt)
*
* See https://www.boost.org/libs/unordered for library home page.
*/
#ifndef BOOST_UNORDERED_CONCURRENT_FLAT_SET_FWD_HPP
#define BOOST_UNORDERED_CONCURRENT_FLAT_SET_FWD_HPP
#include <boost/container_hash/hash_fwd.hpp>
#include <functional>
#include <memory>
namespace boost {
namespace unordered {
template <class Key, class Hash = boost::hash<Key>,
class Pred = std::equal_to<Key>,
class Allocator = std::allocator<Key> >
class concurrent_flat_set;
template <class Key, class Hash, class KeyEqual, class Allocator>
bool operator==(
concurrent_flat_set<Key, Hash, KeyEqual, Allocator> const& lhs,
concurrent_flat_set<Key, Hash, KeyEqual, Allocator> const& rhs);
template <class Key, class Hash, class KeyEqual, class Allocator>
bool operator!=(
concurrent_flat_set<Key, Hash, KeyEqual, Allocator> const& lhs,
concurrent_flat_set<Key, Hash, KeyEqual, Allocator> const& rhs);
template <class Key, class Hash, class Pred, class Alloc>
void swap(concurrent_flat_set<Key, Hash, Pred, Alloc>& x,
concurrent_flat_set<Key, Hash, Pred, Alloc>& y)
noexcept(noexcept(x.swap(y)));
template <class K, class H, class P, class A, class Predicate>
typename concurrent_flat_set<K, H, P, A>::size_type erase_if(
concurrent_flat_set<K, H, P, A>& c, Predicate pred);
} // namespace unordered
using boost::unordered::concurrent_flat_set;
using boost::unordered::swap;
using boost::unordered::operator==;
using boost::unordered::operator!=;
} // namespace boost
#endif // BOOST_UNORDERED_CONCURRENT_FLAT_SET_FWD_HPP


@@ -0,0 +1,75 @@
/* Copyright 2023 Christian Mazakas.
* Copyright 2023 Joaquin M Lopez Munoz.
* Distributed under the Boost Software License, Version 1.0.
* (See accompanying file LICENSE_1_0.txt or copy at
* http://www.boost.org/LICENSE_1_0.txt)
*
* See https://www.boost.org/libs/unordered for library home page.
*/
#ifndef BOOST_UNORDERED_DETAIL_CONCURRENT_STATIC_ASSERTS_HPP
#define BOOST_UNORDERED_DETAIL_CONCURRENT_STATIC_ASSERTS_HPP
#include <boost/mp11/algorithm.hpp>
#include <boost/mp11/list.hpp>
#include <functional>
#include <type_traits>
#define BOOST_UNORDERED_STATIC_ASSERT_INVOCABLE(F) \
static_assert(boost::unordered::detail::is_invocable<F, value_type&>::value, \
"The provided Callable must be invocable with value_type&");
#define BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE(F) \
static_assert( \
boost::unordered::detail::is_invocable<F, value_type const&>::value, \
"The provided Callable must be invocable with value_type const&");
#if BOOST_CXX_VERSION >= 202002L
#define BOOST_UNORDERED_STATIC_ASSERT_EXEC_POLICY(P) \
static_assert(!std::is_base_of<std::execution::parallel_unsequenced_policy, \
ExecPolicy>::value, \
"ExecPolicy must be sequenced."); \
static_assert( \
!std::is_base_of<std::execution::unsequenced_policy, ExecPolicy>::value, \
"ExecPolicy must be sequenced.");
#else
#define BOOST_UNORDERED_STATIC_ASSERT_EXEC_POLICY(P) \
static_assert(!std::is_base_of<std::execution::parallel_unsequenced_policy, \
ExecPolicy>::value, \
"ExecPolicy must be sequenced.");
#endif
#define BOOST_UNORDERED_DETAIL_COMMA ,
#define BOOST_UNORDERED_DETAIL_LAST_ARG(Arg, Args) \
mp11::mp_back<mp11::mp_list<Arg BOOST_UNORDERED_DETAIL_COMMA Args> >
#define BOOST_UNORDERED_STATIC_ASSERT_LAST_ARG_INVOCABLE(Arg, Args) \
BOOST_UNORDERED_STATIC_ASSERT_INVOCABLE( \
BOOST_UNORDERED_DETAIL_LAST_ARG(Arg, Args))
#define BOOST_UNORDERED_STATIC_ASSERT_LAST_ARG_CONST_INVOCABLE(Arg, Args) \
BOOST_UNORDERED_STATIC_ASSERT_CONST_INVOCABLE( \
BOOST_UNORDERED_DETAIL_LAST_ARG(Arg, Args))
namespace boost {
namespace unordered {
namespace detail {
template <class F, class... Args>
struct is_invocable
: std::is_constructible<std::function<void(Args...)>,
std::reference_wrapper<typename std::remove_reference<F>::type> >
{
};
} // namespace detail
} // namespace unordered
} // namespace boost
#endif // BOOST_UNORDERED_DETAIL_CONCURRENT_STATIC_ASSERTS_HPP


@@ -264,7 +264,8 @@ struct concurrent_table_arrays:table_arrays<Value,Group,SizePolicy,Allocator>
     return boost::to_address(group_accesses_);
   }
 
-  static concurrent_table_arrays new_(group_access_allocator_type al,std::size_t n)
+  static concurrent_table_arrays new_(
+    group_access_allocator_type al,std::size_t n)
   {
     super x{super::new_(al,n)};
     BOOST_TRY{
@@ -310,20 +311,23 @@ struct concurrent_table_arrays:table_arrays<Value,Group,SizePolicy,Allocator>
     }
   }
 
-  static concurrent_table_arrays new_group_access(group_access_allocator_type al,const super& x)
+  static concurrent_table_arrays new_group_access(
+    group_access_allocator_type al,const super& x)
   {
     concurrent_table_arrays arrays{x,nullptr};
     set_group_access(al,arrays);
     return arrays;
   }
 
-  static void delete_(group_access_allocator_type al,concurrent_table_arrays& arrays)noexcept
+  static void delete_(
+    group_access_allocator_type al,concurrent_table_arrays& arrays)noexcept
   {
     delete_group_access(al,arrays);
     super::delete_(al,arrays);
   }
 
-  static void delete_group_access(group_access_allocator_type al,concurrent_table_arrays& arrays)noexcept
+  static void delete_group_access(
+    group_access_allocator_type al,concurrent_table_arrays& arrays)noexcept
   {
     if(arrays.elements()){
       boost::allocator_deallocate(
@@ -369,9 +373,7 @@ inline void swap(atomic_size_control& x,atomic_size_control& y)
 }
 
 /* foa::concurrent_table serves as the foundation for end-user concurrent
- * hash containers. The TypePolicy parameter can specify flat/node-based
- * map-like and set-like containers, though currently we're only providing
- * boost::concurrent_flat_map.
+ * hash containers.
  *
 * The exposed interface (completed by the wrapping containers) is not that
 * of a regular container (in fact, it does not model Container as understood
@@ -393,7 +395,7 @@ inline void swap(atomic_size_control& x,atomic_size_control& y)
 * - Parallel versions of [c]visit_all(f) and erase_if(f) are provided based
 *   on C++17 stdlib parallel algorithms.
 *
- * Consult boost::concurrent_flat_map docs for the full API reference.
+ * Consult boost::concurrent_flat_(map|set) docs for the full API reference.
 * Heterogeneous lookup is suported by default, that is, without checking for
 * any ::is_transparent typedefs --this checking is done by the wrapping
 * containers.
@@ -421,8 +423,8 @@ inline void swap(atomic_size_control& x,atomic_size_control& y)
 *   reduced hash value is set) and the insertion counter is atomically
 *   incremented: if no other thread has incremented the counter during the
 *   whole operation (which is checked by comparing with c0), then we're
- *   good to go and complete the insertion, otherwise we roll back and start
- *   over.
+ *   good to go and complete the insertion, otherwise we roll back and
+ *   start over.
 */
 
 template<typename,typename,typename,typename>
@@ -946,7 +948,8 @@ private:
   using multimutex_type=multimutex<mutex_type,128>; // TODO: adapt 128 to the machine
   using shared_lock_guard=reentrancy_checked<shared_lock<mutex_type>>;
   using exclusive_lock_guard=reentrancy_checked<lock_guard<multimutex_type>>;
-  using exclusive_bilock_guard=reentrancy_bichecked<scoped_bilock<multimutex_type>>;
+  using exclusive_bilock_guard=
+    reentrancy_bichecked<scoped_bilock<multimutex_type>>;
   using group_shared_lock_guard=typename group_access::shared_lock_guard;
   using group_exclusive_lock_guard=typename group_access::exclusive_lock_guard;
   using group_insert_counter_type=typename group_access::insert_counter_type;


@@ -148,7 +148,7 @@ static constexpr std::size_t default_bucket_count=0;
 /* foa::table_core is the common base of foa::table and foa::concurrent_table,
 * which in their turn serve as the foundational core of
- * boost::unordered_(flat|node)_(map|set) and boost::concurrent_flat_map,
+ * boost::unordered_(flat|node)_(map|set) and boost::concurrent_flat_(map|set),
 * respectively. Its main internal design aspects are:
 *
 * - Element slots are logically split into groups of size N=15. The number
@@ -337,38 +337,49 @@ private:
   {
     static constexpr boost::uint32_t word[]=
     {
-      0x08080808u,0x09090909u,0x02020202u,0x03030303u,0x04040404u,0x05050505u,0x06060606u,0x07070707u,
-      0x08080808u,0x09090909u,0x0A0A0A0Au,0x0B0B0B0Bu,0x0C0C0C0Cu,0x0D0D0D0Du,0x0E0E0E0Eu,0x0F0F0F0Fu,
-      0x10101010u,0x11111111u,0x12121212u,0x13131313u,0x14141414u,0x15151515u,0x16161616u,0x17171717u,
-      0x18181818u,0x19191919u,0x1A1A1A1Au,0x1B1B1B1Bu,0x1C1C1C1Cu,0x1D1D1D1Du,0x1E1E1E1Eu,0x1F1F1F1Fu,
-      0x20202020u,0x21212121u,0x22222222u,0x23232323u,0x24242424u,0x25252525u,0x26262626u,0x27272727u,
-      0x28282828u,0x29292929u,0x2A2A2A2Au,0x2B2B2B2Bu,0x2C2C2C2Cu,0x2D2D2D2Du,0x2E2E2E2Eu,0x2F2F2F2Fu,
-      0x30303030u,0x31313131u,0x32323232u,0x33333333u,0x34343434u,0x35353535u,0x36363636u,0x37373737u,
-      0x38383838u,0x39393939u,0x3A3A3A3Au,0x3B3B3B3Bu,0x3C3C3C3Cu,0x3D3D3D3Du,0x3E3E3E3Eu,0x3F3F3F3Fu,
-      0x40404040u,0x41414141u,0x42424242u,0x43434343u,0x44444444u,0x45454545u,0x46464646u,0x47474747u,
-      0x48484848u,0x49494949u,0x4A4A4A4Au,0x4B4B4B4Bu,0x4C4C4C4Cu,0x4D4D4D4Du,0x4E4E4E4Eu,0x4F4F4F4Fu,
-      0x50505050u,0x51515151u,0x52525252u,0x53535353u,0x54545454u,0x55555555u,0x56565656u,0x57575757u,
-      0x58585858u,0x59595959u,0x5A5A5A5Au,0x5B5B5B5Bu,0x5C5C5C5Cu,0x5D5D5D5Du,0x5E5E5E5Eu,0x5F5F5F5Fu,
-      0x60606060u,0x61616161u,0x62626262u,0x63636363u,0x64646464u,0x65656565u,0x66666666u,0x67676767u,
-      0x68686868u,0x69696969u,0x6A6A6A6Au,0x6B6B6B6Bu,0x6C6C6C6Cu,0x6D6D6D6Du,0x6E6E6E6Eu,0x6F6F6F6Fu,
-      0x70707070u,0x71717171u,0x72727272u,0x73737373u,0x74747474u,0x75757575u,0x76767676u,0x77777777u,
-      0x78787878u,0x79797979u,0x7A7A7A7Au,0x7B7B7B7Bu,0x7C7C7C7Cu,0x7D7D7D7Du,0x7E7E7E7Eu,0x7F7F7F7Fu,
-      0x80808080u,0x81818181u,0x82828282u,0x83838383u,0x84848484u,0x85858585u,0x86868686u,0x87878787u,
-      0x88888888u,0x89898989u,0x8A8A8A8Au,0x8B8B8B8Bu,0x8C8C8C8Cu,0x8D8D8D8Du,0x8E8E8E8Eu,0x8F8F8F8Fu,
-      0x90909090u,0x91919191u,0x92929292u,0x93939393u,0x94949494u,0x95959595u,0x96969696u,0x97979797u,
-      0x98989898u,0x99999999u,0x9A9A9A9Au,0x9B9B9B9Bu,0x9C9C9C9Cu,0x9D9D9D9Du,0x9E9E9E9Eu,0x9F9F9F9Fu,
-      0xA0A0A0A0u,0xA1A1A1A1u,0xA2A2A2A2u,0xA3A3A3A3u,0xA4A4A4A4u,0xA5A5A5A5u,0xA6A6A6A6u,0xA7A7A7A7u,
-      0xA8A8A8A8u,0xA9A9A9A9u,0xAAAAAAAAu,0xABABABABu,0xACACACACu,0xADADADADu,0xAEAEAEAEu,0xAFAFAFAFu,
-      0xB0B0B0B0u,0xB1B1B1B1u,0xB2B2B2B2u,0xB3B3B3B3u,0xB4B4B4B4u,0xB5B5B5B5u,0xB6B6B6B6u,0xB7B7B7B7u,
-      0xB8B8B8B8u,0xB9B9B9B9u,0xBABABABAu,0xBBBBBBBBu,0xBCBCBCBCu,0xBDBDBDBDu,0xBEBEBEBEu,0xBFBFBFBFu,
-      0xC0C0C0C0u,0xC1C1C1C1u,0xC2C2C2C2u,0xC3C3C3C3u,0xC4C4C4C4u,0xC5C5C5C5u,0xC6C6C6C6u,0xC7C7C7C7u,
-      0xC8C8C8C8u,0xC9C9C9C9u,0xCACACACAu,0xCBCBCBCBu,0xCCCCCCCCu,0xCDCDCDCDu,0xCECECECEu,0xCFCFCFCFu,
-      0xD0D0D0D0u,0xD1D1D1D1u,0xD2D2D2D2u,0xD3D3D3D3u,0xD4D4D4D4u,0xD5D5D5D5u,0xD6D6D6D6u,0xD7D7D7D7u,
-      0xD8D8D8D8u,0xD9D9D9D9u,0xDADADADAu,0xDBDBDBDBu,0xDCDCDCDCu,0xDDDDDDDDu,0xDEDEDEDEu,0xDFDFDFDFu,
-      0xE0E0E0E0u,0xE1E1E1E1u,0xE2E2E2E2u,0xE3E3E3E3u,0xE4E4E4E4u,0xE5E5E5E5u,0xE6E6E6E6u,0xE7E7E7E7u,
-      0xE8E8E8E8u,0xE9E9E9E9u,0xEAEAEAEAu,0xEBEBEBEBu,0xECECECECu,0xEDEDEDEDu,0xEEEEEEEEu,0xEFEFEFEFu,
-      0xF0F0F0F0u,0xF1F1F1F1u,0xF2F2F2F2u,0xF3F3F3F3u,0xF4F4F4F4u,0xF5F5F5F5u,0xF6F6F6F6u,0xF7F7F7F7u,
-      0xF8F8F8F8u,0xF9F9F9F9u,0xFAFAFAFAu,0xFBFBFBFBu,0xFCFCFCFCu,0xFDFDFDFDu,0xFEFEFEFEu,0xFFFFFFFFu,
+      0x08080808u,0x09090909u,0x02020202u,0x03030303u,0x04040404u,0x05050505u,
+      0x06060606u,0x07070707u,0x08080808u,0x09090909u,0x0A0A0A0Au,0x0B0B0B0Bu,
+      0x0C0C0C0Cu,0x0D0D0D0Du,0x0E0E0E0Eu,0x0F0F0F0Fu,0x10101010u,0x11111111u,
+      0x12121212u,0x13131313u,0x14141414u,0x15151515u,0x16161616u,0x17171717u,
+      0x18181818u,0x19191919u,0x1A1A1A1Au,0x1B1B1B1Bu,0x1C1C1C1Cu,0x1D1D1D1Du,
+      0x1E1E1E1Eu,0x1F1F1F1Fu,0x20202020u,0x21212121u,0x22222222u,0x23232323u,
+      0x24242424u,0x25252525u,0x26262626u,0x27272727u,0x28282828u,0x29292929u,
+      0x2A2A2A2Au,0x2B2B2B2Bu,0x2C2C2C2Cu,0x2D2D2D2Du,0x2E2E2E2Eu,0x2F2F2F2Fu,
+      0x30303030u,0x31313131u,0x32323232u,0x33333333u,0x34343434u,0x35353535u,
+      0x36363636u,0x37373737u,0x38383838u,0x39393939u,0x3A3A3A3Au,0x3B3B3B3Bu,
+      0x3C3C3C3Cu,0x3D3D3D3Du,0x3E3E3E3Eu,0x3F3F3F3Fu,0x40404040u,0x41414141u,
+      0x42424242u,0x43434343u,0x44444444u,0x45454545u,0x46464646u,0x47474747u,
+      0x48484848u,0x49494949u,0x4A4A4A4Au,0x4B4B4B4Bu,0x4C4C4C4Cu,0x4D4D4D4Du,
+      0x4E4E4E4Eu,0x4F4F4F4Fu,0x50505050u,0x51515151u,0x52525252u,0x53535353u,
+      0x54545454u,0x55555555u,0x56565656u,0x57575757u,0x58585858u,0x59595959u,
+      0x5A5A5A5Au,0x5B5B5B5Bu,0x5C5C5C5Cu,0x5D5D5D5Du,0x5E5E5E5Eu,0x5F5F5F5Fu,
+      0x60606060u,0x61616161u,0x62626262u,0x63636363u,0x64646464u,0x65656565u,
+      0x66666666u,0x67676767u,0x68686868u,0x69696969u,0x6A6A6A6Au,0x6B6B6B6Bu,
+      0x6C6C6C6Cu,0x6D6D6D6Du,0x6E6E6E6Eu,0x6F6F6F6Fu,0x70707070u,0x71717171u,
+      0x72727272u,0x73737373u,0x74747474u,0x75757575u,0x76767676u,0x77777777u,
+      0x78787878u,0x79797979u,0x7A7A7A7Au,0x7B7B7B7Bu,0x7C7C7C7Cu,0x7D7D7D7Du,
+      0x7E7E7E7Eu,0x7F7F7F7Fu,0x80808080u,0x81818181u,0x82828282u,0x83838383u,
+      0x84848484u,0x85858585u,0x86868686u,0x87878787u,0x88888888u,0x89898989u,
+      0x8A8A8A8Au,0x8B8B8B8Bu,0x8C8C8C8Cu,0x8D8D8D8Du,0x8E8E8E8Eu,0x8F8F8F8Fu,
+      0x90909090u,0x91919191u,0x92929292u,0x93939393u,0x94949494u,0x95959595u,
+      0x96969696u,0x97979797u,0x98989898u,0x99999999u,0x9A9A9A9Au,0x9B9B9B9Bu,
+      0x9C9C9C9Cu,0x9D9D9D9Du,0x9E9E9E9Eu,0x9F9F9F9Fu,0xA0A0A0A0u,0xA1A1A1A1u,
+      0xA2A2A2A2u,0xA3A3A3A3u,0xA4A4A4A4u,0xA5A5A5A5u,0xA6A6A6A6u,0xA7A7A7A7u,
+      0xA8A8A8A8u,0xA9A9A9A9u,0xAAAAAAAAu,0xABABABABu,0xACACACACu,0xADADADADu,
+      0xAEAEAEAEu,0xAFAFAFAFu,0xB0B0B0B0u,0xB1B1B1B1u,0xB2B2B2B2u,0xB3B3B3B3u,
+      0xB4B4B4B4u,0xB5B5B5B5u,0xB6B6B6B6u,0xB7B7B7B7u,0xB8B8B8B8u,0xB9B9B9B9u,
+      0xBABABABAu,0xBBBBBBBBu,0xBCBCBCBCu,0xBDBDBDBDu,0xBEBEBEBEu,0xBFBFBFBFu,
+      0xC0C0C0C0u,0xC1C1C1C1u,0xC2C2C2C2u,0xC3C3C3C3u,0xC4C4C4C4u,0xC5C5C5C5u,
+      0xC6C6C6C6u,0xC7C7C7C7u,0xC8C8C8C8u,0xC9C9C9C9u,0xCACACACAu,0xCBCBCBCBu,
+      0xCCCCCCCCu,0xCDCDCDCDu,0xCECECECEu,0xCFCFCFCFu,0xD0D0D0D0u,0xD1D1D1D1u,
+      0xD2D2D2D2u,0xD3D3D3D3u,0xD4D4D4D4u,0xD5D5D5D5u,0xD6D6D6D6u,0xD7D7D7D7u,
+      0xD8D8D8D8u,0xD9D9D9D9u,0xDADADADAu,0xDBDBDBDBu,0xDCDCDCDCu,0xDDDDDDDDu,
+      0xDEDEDEDEu,0xDFDFDFDFu,0xE0E0E0E0u,0xE1E1E1E1u,0xE2E2E2E2u,0xE3E3E3E3u,
+      0xE4E4E4E4u,0xE5E5E5E5u,0xE6E6E6E6u,0xE7E7E7E7u,0xE8E8E8E8u,0xE9E9E9E9u,
+      0xEAEAEAEAu,0xEBEBEBEBu,0xECECECECu,0xEDEDEDEDu,0xEEEEEEEEu,0xEFEFEFEFu,
+      0xF0F0F0F0u,0xF1F1F1F1u,0xF2F2F2F2u,0xF3F3F3F3u,0xF4F4F4F4u,0xF5F5F5F5u,
+      0xF6F6F6F6u,0xF7F7F7F7u,0xF8F8F8F8u,0xF9F9F9F9u,0xFAFAFAFAu,0xFBFBFBFBu,
+      0xFCFCFCFCu,0xFDFDFDFDu,0xFEFEFEFEu,0xFFFFFFFFu,
     };
 
     return (int)word[narrow_cast<unsigned char>(hash)];
@@ -549,7 +560,8 @@ private:
} }
/* Copied from /* Copied from
* https://github.com/simd-everywhere/simde/blob/master/simde/x86/sse2.h#L3763 * https://github.com/simd-everywhere/simde/blob/master/simde/x86/
* sse2.h#L3763
*/ */
static inline int simde_mm_movemask_epi8(uint8x16_t a) static inline int simde_mm_movemask_epi8(uint8x16_t a)
@@ -628,7 +640,8 @@ struct group15
BOOST_ASSERT(pos<N);
return
pos==N-1&&
- (m[0] & boost::uint64_t(0x4000400040004000ull))==boost::uint64_t(0x4000ull)&&
+ (m[0] & boost::uint64_t(0x4000400040004000ull))==
+ boost::uint64_t(0x4000ull)&&
(m[1] & boost::uint64_t(0x4000400040004000ull))==0;
}
@@ -787,8 +800,8 @@ private:
*
* - size_index(n) returns an unspecified "index" number used in other policy
* operations.
- * - size(size_index_) returns the number of groups for the given index. It is
- * guaranteed that size(size_index(n)) >= n.
+ * - size(size_index_) returns the number of groups for the given index. It
+ * is guaranteed that size(size_index(n)) >= n.
* - min_size() is the minimum number of groups permissible, i.e.
* size(size_index(0)).
* - position(hash,size_index_) maps hash to a position in the range
@@ -1003,7 +1016,9 @@ struct table_arrays
rebind<group_type>;
using group_type_pointer_traits=boost::pointer_traits<group_type_pointer>;
- table_arrays(std::size_t gsi,std::size_t gsm,group_type_pointer pg,value_type_pointer pe):
+ table_arrays(
+ std::size_t gsi,std::size_t gsm,
+ group_type_pointer pg,value_type_pointer pe):
groups_size_index{gsi},groups_size_mask{gsm},groups_{pg},elements_{pe}{}
value_type* elements()const noexcept{return boost::to_address(elements_);}
@@ -1016,7 +1031,8 @@ struct table_arrays
}
static void set_arrays(
- table_arrays& arrays,allocator_type al,std::size_t,std::false_type /* always allocate */)
+ table_arrays& arrays,allocator_type al,std::size_t,
+ std::false_type /* always allocate */)
{
using storage_traits=boost::allocator_traits<allocator_type>;
auto groups_size_index=arrays.groups_size_index;
@@ -1032,7 +1048,8 @@ struct table_arrays
auto p=reinterpret_cast<unsigned char*>(arrays.elements()+groups_size*N-1);
p+=(uintptr_t(sizeof(group_type))-
reinterpret_cast<uintptr_t>(p))%sizeof(group_type);
- arrays.groups_=group_type_pointer_traits::pointer_to(*reinterpret_cast<group_type*>(p));
+ arrays.groups_=
+ group_type_pointer_traits::pointer_to(*reinterpret_cast<group_type*>(p));
initialize_groups(
arrays.groups(),groups_size,
@@ -1049,7 +1066,8 @@ struct table_arrays
}
static void set_arrays(
- table_arrays& arrays,allocator_type al,std::size_t n,std::true_type /* optimize for n==0*/)
+ table_arrays& arrays,allocator_type al,std::size_t n,
+ std::true_type /* optimize for n==0*/)
{
if(!n){
arrays.groups_=dummy_groups<group_type,size_policy::min_size()>();
@@ -1262,8 +1280,8 @@ alloc_make_insert_type(const Allocator& al,Args&&... args)
* both init_type and value_type references.
*
* - TypePolicy::construct and TypePolicy::destroy are used for the
- * construction and destruction of the internal types: value_type, init_type
- * and element_type.
+ * construction and destruction of the internal types: value_type,
+ * init_type and element_type.
*
* - TypePolicy::move is used to provide move semantics for the internal
* types used by the container during rehashing and emplace. These types
@@ -1376,9 +1394,12 @@ public:
table_core{x,alloc_traits::select_on_container_copy_construction(x.al())}{}
template<typename ArraysFn>
- table_core(table_core&& x,arrays_holder<arrays_type,Allocator>&& ah,ArraysFn arrays_fn):
- table_core(
- std::move(x.h()),std::move(x.pred()),std::move(x.al()),arrays_fn,x.size_ctrl)
+ table_core(
+ table_core&& x,arrays_holder<arrays_type,Allocator>&& ah,
+ ArraysFn arrays_fn):
+ table_core(
+ std::move(x.h()),std::move(x.pred()),std::move(x.al()),
+ arrays_fn,x.size_ctrl)
{
ah.release();
x.arrays=ah.get();
@@ -1393,7 +1414,8 @@ public:
std::is_nothrow_move_constructible<Allocator>::value&&
!uses_fancy_pointers):
table_core{
- std::move(x),arrays_holder<arrays_type,Allocator>{x.new_arrays(0),x.al()},
+ std::move(x),arrays_holder<arrays_type,Allocator>{
+ x.new_arrays(0),x.al()},
[&x]{return x.arrays;}}
{}
@@ -2075,8 +2097,8 @@ private:
void recover_slot(unsigned char* pc)
{
- /* If this slot potentially caused overflow, we decrease the maximum load so
- * that average probe length won't increase unboundedly in repeated
+ /* If this slot potentially caused overflow, we decrease the maximum load
+ * so that average probe length won't increase unboundedly in repeated
* insert/erase cycles (drift).
*/
size_ctrl.ml-=group_type::maybe_caused_overflow(pc);

@@ -10,6 +10,7 @@
#pragma once
#endif
+ #include <boost/unordered/concurrent_flat_set_fwd.hpp>
#include <boost/unordered/detail/foa/flat_set_types.hpp>
#include <boost/unordered/detail/foa/table.hpp>
#include <boost/unordered/detail/serialize_container.hpp>
@@ -35,6 +36,9 @@ namespace boost {
template <class Key, class Hash, class KeyEqual, class Allocator>
class unordered_flat_set
{
+ template <class Key2, class Hash2, class KeyEqual2, class Allocator2>
+ friend class concurrent_flat_set;
using set_types = detail::foa::flat_set_types<Key>;
using table_type = detail::foa::table<set_types, Hash, KeyEqual,
@@ -169,6 +173,12 @@ namespace boost {
{
}
+ unordered_flat_set(
+ concurrent_flat_set<Key, Hash, KeyEqual, Allocator>&& other)
+ : table_(std::move(other.table_))
+ {
+ }
~unordered_flat_set() = default;
unordered_flat_set& operator=(unordered_flat_set const& other)

@@ -260,6 +260,7 @@ local MMAP_CONTAINERS =
unordered_multimap
unordered_multiset
concurrent_flat_map
+ concurrent_flat_set
;
for local container in $(MMAP_CONTAINERS)

@@ -1,10 +1,12 @@
// Copyright (C) 2023 Christian Mazakas
+ // Copyright (C) 2023 Joaquin M Lopez Munoz
// Distributed under the Boost Software License, Version 1.0. (See accompanying
// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
#include "helpers.hpp"
#include <boost/unordered/concurrent_flat_map.hpp>
+ #include <boost/unordered/concurrent_flat_set.hpp>
test::seed_t initialize_seed{674140082};
@@ -14,49 +16,62 @@ using test::sequential;
using hasher = stateful_hash;
using key_equal = stateful_key_equal;
- using allocator_type = stateful_allocator<std::pair<raii const, raii> >;
using map_type = boost::unordered::concurrent_flat_map<raii, raii, hasher,
- key_equal, allocator_type>;
+ key_equal, stateful_allocator<std::pair<raii const, raii> > >;
- using map_value_type = typename map_type::value_type;
+ using set_type = boost::unordered::concurrent_flat_set<raii, hasher,
+ key_equal, stateful_allocator<raii> >;
+ map_type* test_map;
+ set_type* test_set;
namespace {
- template <class G> void clear_tests(G gen, test::random_generator rg)
+ template <class X, class GF>
+ void clear_tests(X*, GF gen_factory, test::random_generator rg)
{
+ using value_type = typename X::value_type;
+ static constexpr auto value_type_cardinality =
+ value_cardinality<value_type>::value;
+ using allocator_type = typename X::allocator_type;
+ auto gen = gen_factory.template get<X>();
auto values = make_random_values(1024 * 16, [&] { return gen(rg); });
raii::reset_counts();
- map_type x(values.begin(), values.end(), values.size(), hasher(1),
+ X x(values.begin(), values.end(), values.size(), hasher(1),
key_equal(2), allocator_type(3));
auto const old_size = x.size();
auto const old_d = +raii::destructor;
- thread_runner(values, [&x](boost::span<map_value_type> s) {
+ thread_runner(values, [&x](boost::span<value_type> s) {
(void)s;
x.clear();
});
BOOST_TEST(x.empty());
- BOOST_TEST_EQ(raii::destructor, old_d + 2 * old_size);
+ BOOST_TEST_EQ(raii::destructor, old_d + value_type_cardinality * old_size);
check_raii_counts();
}
- template <class G> void insert_and_clear(G gen, test::random_generator rg)
+ template <class X, class GF>
+ void insert_and_clear(X*, GF gen_factory, test::random_generator rg)
{
+ using allocator_type = typename X::allocator_type;
+ auto gen = gen_factory.template get<X>();
auto values = make_random_values(1024 * 16, [&] { return gen(rg); });
- auto reference_map =
- boost::unordered_flat_map<raii, raii>(values.begin(), values.end());
+ auto reference_cont = reference_container<X>(values.begin(), values.end());
raii::reset_counts();
std::thread t1, t2;
{
- map_type x(0, hasher(1), key_equal(2), allocator_type(3));
+ X x(0, hasher(1), key_equal(2), allocator_type(3));
std::mutex m;
std::condition_variable cv;
@@ -103,7 +118,7 @@ namespace {
BOOST_TEST_GE(num_clears, 1u);
if (!x.empty()) {
- test_fuzzy_matches_reference(x, reference_map, rg);
+ test_fuzzy_matches_reference(x, reference_cont, rg);
}
}
@@ -115,11 +130,13 @@ namespace {
// clang-format off
UNORDERED_TEST(
clear_tests,
- ((value_type_generator))
+ ((test_map)(test_set))
+ ((value_type_generator_factory))
((default_generator)(sequential)(limited_range)))
UNORDERED_TEST(insert_and_clear,
- ((value_type_generator))
+ ((test_map)(test_set))
+ ((value_type_generator_factory))
((default_generator)(sequential)(limited_range)))
// clang-format on

@@ -0,0 +1,137 @@
// Copyright (C) 2023 Christian Mazakas
// Copyright (C) 2023 Joaquin M Lopez Munoz
// Distributed under the Boost Software License, Version 1.0. (See accompanying
// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
#ifndef BOOST_UNORDERED_TEST_CFOA_COMMON_HELPERS_HPP
#define BOOST_UNORDERED_TEST_CFOA_COMMON_HELPERS_HPP
#include <boost/unordered/concurrent_flat_map_fwd.hpp>
#include <boost/unordered/concurrent_flat_set_fwd.hpp>
#include <boost/unordered/unordered_flat_map.hpp>
#include <boost/unordered/unordered_flat_set.hpp>
#include <cstddef>
#include <type_traits>
#include <utility>
template <typename K>
struct value_cardinality
{
static constexpr std::size_t value=1;
};
template <typename K, typename V>
struct value_cardinality<std::pair<K, V> >
{
static constexpr std::size_t value=2;
};
template <class Container>
struct reference_container_impl;
template <class Container>
using reference_container = typename reference_container_impl<Container>::type;
template <typename K, typename V, typename H, typename P, typename A>
struct reference_container_impl<boost::concurrent_flat_map<K, V, H, P, A> >
{
using type = boost::unordered_flat_map<K, V>;
};
template <typename K, typename H, typename P, typename A>
struct reference_container_impl<boost::concurrent_flat_set<K, H, P, A> >
{
using type = boost::unordered_flat_set<K>;
};
template <class Container>
struct flat_container_impl;
template <class Container>
using flat_container = typename flat_container_impl<Container>::type;
template <typename K, typename V, typename H, typename P, typename A>
struct flat_container_impl<boost::concurrent_flat_map<K, V, H, P, A> >
{
using type = boost::unordered_flat_map<K, V, H, P, A>;
};
template <typename K, typename H, typename P, typename A>
struct flat_container_impl<boost::concurrent_flat_set<K, H, P, A> >
{
using type = boost::unordered_flat_set<K, H, P, A>;
};
template <typename Container, template <typename> class Allocator>
struct replace_allocator_impl;
template <typename Container, template <typename> class Allocator>
using replace_allocator =
typename replace_allocator_impl<Container, Allocator>::type;
template <
typename K, typename V, typename H, typename P, typename A,
template <typename> class Allocator
>
struct replace_allocator_impl<
boost::concurrent_flat_map<K, V, H, P, A>, Allocator>
{
using value_type =
typename boost::concurrent_flat_map<K, V, H, P, A>::value_type;
using type =
boost::concurrent_flat_map<K, V, H, P, Allocator<value_type> >;
};
template <
typename K, typename H, typename P, typename A,
template <typename> class Allocator
>
struct replace_allocator_impl<
boost::concurrent_flat_set<K, H, P, A>, Allocator>
{
using value_type =
typename boost::concurrent_flat_set<K, H, P, A>::value_type;
using type =
boost::concurrent_flat_set<K, H, P, Allocator<value_type> >;
};
template <typename K>
K const& get_key(K const& x) { return x; }
template <typename K,typename V>
K const& get_key(const std::pair<K, V>& x) { return x.first; }
template <typename K>
K const& get_value(K const& x) { return x; }
template <typename K,typename V>
V const& get_value(const std::pair<K, V>& x) { return x.second; }
template <typename K,typename V>
V& get_value(std::pair<K, V>& x) { return x.second; }
template <class X, class Y>
void test_matches_reference(X const& x, Y const& reference_cont)
{
using value_type = typename X::value_type;
BOOST_TEST_EQ(x.size(), x.visit_all([&](value_type const& v) {
BOOST_TEST(reference_cont.contains(get_key(v)));
BOOST_TEST_EQ(v, *reference_cont.find(get_key(v)));
}));
}
template <class X, class Y>
void test_fuzzy_matches_reference(
X const& x, Y const& reference_cont, test::random_generator rg)
{
using value_type = typename X::value_type;
BOOST_TEST_EQ(x.size(), x.visit_all([&](value_type const& v) {
BOOST_TEST(reference_cont.contains(get_key(v)));
if (rg == test::sequential) {
BOOST_TEST_EQ(v, *reference_cont.find(get_key(v)));
}
}));
}
#endif // BOOST_UNORDERED_TEST_CFOA_COMMON_HELPERS_HPP

@@ -1,34 +1,76 @@
// Copyright (C) 2023 Christian Mazakas
+ // Copyright (C) 2023 Joaquin M Lopez Munoz
// Distributed under the Boost Software License, Version 1.0. (See accompanying
// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
#include "helpers.hpp"
#include <boost/unordered/concurrent_flat_map.hpp>
+ #include <boost/unordered/concurrent_flat_set.hpp>
#include <boost/core/ignore_unused.hpp>
namespace {
test::seed_t initialize_seed(335740237);
template <typename Container, typename Value>
bool member_emplace(Container& x, Value const & v)
{
return x.emplace(v.x_);
}
template <typename Container, typename Key, typename Value>
bool member_emplace(Container& x, std::pair<Key, Value> const & v)
{
return x.emplace(v.first.x_, v.second.x_);
}
template <typename Container, typename Value, typename F>
bool member_emplace_or_visit(Container& x, Value& v, F f)
{
return x.emplace_or_visit(v.x_, f);
}
template <typename Container, typename Key, typename Value, typename F>
bool member_emplace_or_visit(Container& x, std::pair<Key, Value>& v, F f)
{
return x.emplace_or_visit(v.first.x_, v.second.x_, f);
}
template <typename Container, typename Value, typename F>
bool member_emplace_or_cvisit(Container& x, Value& v, F f)
{
return x.emplace_or_cvisit(v.x_, f);
}
template <typename Container, typename Key, typename Value, typename F>
bool member_emplace_or_cvisit(Container& x, std::pair<Key, Value>& v, F f)
{
return x.emplace_or_cvisit(v.first.x_, v.second.x_, f);
}
struct lvalue_emplacer_type
{
template <class T, class X> void operator()(std::vector<T>& values, X& x)
{
+ static constexpr auto value_type_cardinality =
+ value_cardinality<typename X::value_type>::value;
std::atomic<std::uint64_t> num_inserts{0};
thread_runner(values, [&x, &num_inserts](boost::span<T> s) {
for (auto const& r : s) {
- bool b = x.emplace(r.first.x_, r.second.x_);
+ bool b = member_emplace(x, r);
if (b) {
++num_inserts;
}
}
});
BOOST_TEST_EQ(num_inserts, x.size());
- BOOST_TEST_EQ(raii::default_constructor, 2 * values.size());
+ BOOST_TEST_EQ(
+ raii::default_constructor, value_type_cardinality * values.size());
BOOST_TEST_EQ(raii::copy_constructor, 0u);
- BOOST_TEST_GE(raii::move_constructor, 2 * x.size());
+ BOOST_TEST_GE(raii::move_constructor, value_type_cardinality * x.size());
BOOST_TEST_EQ(raii::copy_constructor, 0u);
BOOST_TEST_EQ(raii::copy_assignment, 0u);
@@ -40,9 +82,12 @@ namespace {
{
template <class T, class X> void operator()(std::vector<T>& values, X& x)
{
+ static constexpr auto value_type_cardinality =
+ value_cardinality<typename X::value_type>::value;
x.reserve(values.size());
lvalue_emplacer_type::operator()(values, x);
- BOOST_TEST_EQ(raii::move_constructor, 2 * x.size());
+ BOOST_TEST_EQ(raii::move_constructor, value_type_cardinality * x.size());
}
} norehash_lvalue_emplacer;
@@ -50,12 +95,15 @@ namespace {
{
template <class T, class X> void operator()(std::vector<T>& values, X& x)
{
+ static constexpr auto value_type_cardinality =
+ value_cardinality<typename X::value_type>::value;
std::atomic<std::uint64_t> num_inserts{0};
std::atomic<std::uint64_t> num_invokes{0};
thread_runner(values, [&x, &num_inserts, &num_invokes](boost::span<T> s) {
for (auto& r : s) {
- bool b = x.emplace_or_cvisit(
- r.first.x_, r.second.x_,
+ bool b = member_emplace_or_cvisit(
+ x, r,
[&num_invokes](typename X::value_type const& v) {
(void)v;
++num_invokes;
@@ -70,9 +118,10 @@ namespace {
BOOST_TEST_EQ(num_inserts, x.size());
BOOST_TEST_EQ(num_invokes, values.size() - x.size());
- BOOST_TEST_EQ(raii::default_constructor, 2 * values.size());
+ BOOST_TEST_EQ(
+ raii::default_constructor, value_type_cardinality * values.size());
BOOST_TEST_EQ(raii::copy_constructor, 0u);
- BOOST_TEST_GE(raii::move_constructor, 2 * x.size());
+ BOOST_TEST_GE(raii::move_constructor, value_type_cardinality * x.size());
BOOST_TEST_EQ(raii::move_assignment, 0u);
BOOST_TEST_EQ(raii::copy_assignment, 0u);
}
@@ -82,13 +131,23 @@ namespace {
{
template <class T, class X> void operator()(std::vector<T>& values, X& x)
{
+ static constexpr auto value_type_cardinality =
+ value_cardinality<typename X::value_type>::value;
+ // concurrent_flat_set visit is always const access
+ using arg_type = typename std::conditional<
+ std::is_same<typename X::key_type, typename X::value_type>::value,
+ typename X::value_type const,
+ typename X::value_type
+ >::type;
std::atomic<std::uint64_t> num_inserts{0};
std::atomic<std::uint64_t> num_invokes{0};
thread_runner(values, [&x, &num_inserts, &num_invokes](boost::span<T> s) {
for (auto& r : s) {
- bool b = x.emplace_or_visit(
- r.first.x_, r.second.x_,
- [&num_invokes](typename X::value_type& v) {
+ bool b = member_emplace_or_visit(
+ x, r,
+ [&num_invokes](arg_type& v) {
(void)v;
++num_invokes;
});
@@ -102,20 +161,21 @@ namespace {
BOOST_TEST_EQ(num_inserts, x.size());
BOOST_TEST_EQ(num_invokes, values.size() - x.size());
- BOOST_TEST_EQ(raii::default_constructor, 2 * values.size());
+ BOOST_TEST_EQ(
+ raii::default_constructor, value_type_cardinality * values.size());
BOOST_TEST_EQ(raii::copy_constructor, 0u);
- BOOST_TEST_GE(raii::move_constructor, 2 * x.size());
+ BOOST_TEST_GE(raii::move_constructor, value_type_cardinality * x.size());
BOOST_TEST_EQ(raii::move_assignment, 0u);
BOOST_TEST_EQ(raii::copy_assignment, 0u);
}
} lvalue_emplace_or_visit;
- template <class X, class G, class F>
- void emplace(X*, G gen, F emplacer, test::random_generator rg)
+ template <class X, class GF, class F>
+ void emplace(X*, GF gen_factory, F emplacer, test::random_generator rg)
{
+ auto gen = gen_factory.template get<X>();
auto values = make_random_values(1024 * 16, [&] { return gen(rg); });
- auto reference_map =
- boost::unordered_flat_map<raii, raii>(values.begin(), values.end());
+ auto reference_cont = reference_container<X>(values.begin(), values.end());
raii::reset_counts();
{
@@ -123,13 +183,13 @@ namespace {
emplacer(values, x);
- BOOST_TEST_EQ(x.size(), reference_map.size());
+ BOOST_TEST_EQ(x.size(), reference_cont.size());
using value_type = typename X::value_type;
- BOOST_TEST_EQ(x.size(), x.visit_all([&](value_type const& kv) {
- BOOST_TEST(reference_map.contains(kv.first));
+ BOOST_TEST_EQ(x.size(), x.visit_all([&](value_type const& v) {
+ BOOST_TEST(reference_cont.contains(get_key(v)));
if (rg == test::sequential) {
- BOOST_TEST_EQ(kv.second, reference_map[kv.first]);
+ BOOST_TEST_EQ(v, *reference_cont.find(get_key(v)));
}
}));
}
@@ -145,6 +205,7 @@ namespace {
}
boost::unordered::concurrent_flat_map<raii, raii>* map;
+ boost::unordered::concurrent_flat_set<raii>* set;
} // namespace
@@ -156,8 +217,8 @@ using test::sequential;
UNORDERED_TEST(
emplace,
- ((map))
- ((value_type_generator)(init_type_generator))
+ ((map)(set))
+ ((value_type_generator_factory)(init_type_generator_factory))
((lvalue_emplacer)(norehash_lvalue_emplacer)
(lvalue_emplace_or_cvisit)(lvalue_emplace_or_visit))
((default_generator)(sequential)(limited_range)))

@@ -1,10 +1,12 @@
// Copyright (C) 2023 Christian Mazakas
+ // Copyright (C) 2023 Joaquin M Lopez Munoz
// Distributed under the Boost Software License, Version 1.0. (See accompanying
// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
#include "helpers.hpp"
#include <boost/unordered/concurrent_flat_map.hpp>
+ #include <boost/unordered/concurrent_flat_set.hpp>
test::seed_t initialize_seed{1634048962};
@@ -14,16 +16,21 @@ using test::sequential;
using hasher = stateful_hash;
using key_equal = stateful_key_equal;
- using allocator_type = stateful_allocator<std::pair<raii const, raii> >;
using map_type = boost::unordered::concurrent_flat_map<raii, raii, hasher,
- key_equal, allocator_type>;
+ key_equal, stateful_allocator<std::pair<raii const, raii> > >;
- using map_value_type = typename map_type::value_type;
+ using set_type = boost::unordered::concurrent_flat_set<raii, hasher,
+ key_equal, stateful_allocator<raii> >;
+ map_type* test_map;
+ set_type* test_set;
namespace {
- UNORDERED_AUTO_TEST (simple_equality) {
+ UNORDERED_AUTO_TEST (simple_map_equality) {
+ using allocator_type = map_type::allocator_type;
{
map_type x1(
{{1, 11}, {2, 22}}, 0, hasher(1), key_equal(2), allocator_type(3));
@@ -50,17 +57,42 @@ namespace {
}
}
- template <class G> void insert_and_compare(G gen, test::random_generator rg)
+ UNORDERED_AUTO_TEST (simple_set_equality) {
+ using allocator_type = set_type::allocator_type;
{
+ set_type x1(
+ {1, 2}, 0, hasher(1), key_equal(2), allocator_type(3));
+ set_type x2(
+ {1, 2}, 0, hasher(2), key_equal(2), allocator_type(3));
+ set_type x3({1}, 0, hasher(2), key_equal(2), allocator_type(3));
+ BOOST_TEST_EQ(x1.size(), x2.size());
+ BOOST_TEST(x1 == x2);
+ BOOST_TEST(!(x1 != x2));
+ BOOST_TEST(x1.size() != x3.size());
+ BOOST_TEST(!(x1 == x3));
+ BOOST_TEST(x1 != x3);
+ }
+ }
+ template <class X, class GF>
+ void insert_and_compare(X*, GF gen_factory, test::random_generator rg)
+ {
+ using allocator_type = typename X::allocator_type;
+ auto gen = gen_factory.template get<X>();
auto vals1 = make_random_values(1024 * 8, [&] { return gen(rg); });
- boost::unordered_flat_map<raii, raii> reference_map(
- vals1.begin(), vals1.end());
+ auto reference_cont = reference_container<X>(vals1.begin(), vals1.end());
{
raii::reset_counts();
- map_type x1(vals1.size(), hasher(1), key_equal(2), allocator_type(3));
- map_type x2(vals1.begin(), vals1.end(), vals1.size(), hasher(2),
+ X x1(vals1.size(), hasher(1), key_equal(2), allocator_type(3));
+ X x2(vals1.begin(), vals1.end(), vals1.size(), hasher(2),
key_equal(2), allocator_type(3));
std::thread t1, t2;
@@ -126,7 +158,7 @@ namespace {
BOOST_TEST(x1 == x2);
BOOST_TEST(!(x1 != x2));
- test_matches_reference(x1, reference_map);
+ test_matches_reference(x1, reference_cont);
}
check_raii_counts();
}
@@ -135,7 +167,8 @@ namespace {
// clang-format off
UNORDERED_TEST(
insert_and_compare,
- ((value_type_generator))
+ ((test_map)(test_set))
+ ((value_type_generator_factory))
((default_generator)(sequential)(limited_range)))
// clang-format on

@@ -1,10 +1,12 @@
// Copyright (C) 2023 Christian Mazakas
+ // Copyright (C) 2023 Joaquin M Lopez Munoz
// Distributed under the Boost Software License, Version 1.0. (See accompanying
// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
#include "helpers.hpp"
#include <boost/unordered/concurrent_flat_map.hpp>
+ #include <boost/unordered/concurrent_flat_set.hpp>
#include <boost/core/ignore_unused.hpp>
@@ -15,6 +17,9 @@ namespace {
{
template <class T, class X> void operator()(std::vector<T>& values, X& x)
{
+ static constexpr auto value_type_cardinality =
+ value_cardinality<typename X::value_type>::value;
std::atomic<std::uint64_t> num_erased{0};
auto const old_size = x.size();
@@ -26,11 +31,11 @@ namespace {
BOOST_TEST_EQ(raii::default_constructor + raii::copy_constructor +
raii::move_constructor,
- raii::destructor + 2 * x.size());
+ raii::destructor + value_type_cardinality * x.size());
thread_runner(values, [&values, &num_erased, &x](boost::span<T>) {
- for (auto const& k : values) {
- auto count = x.erase(k.first);
+ for (auto const& v : values) {
+ auto count = x.erase(get_key(v));
num_erased += count;
BOOST_TEST_LE(count, 1u);
BOOST_TEST_GE(count, 0u);
@@ -41,7 +46,7 @@ namespace {
BOOST_TEST_EQ(raii::copy_constructor, old_cc);
BOOST_TEST_EQ(raii::move_constructor, old_mc);
- BOOST_TEST_EQ(raii::destructor, old_d + 2 * old_size);
+ BOOST_TEST_EQ(raii::destructor, old_d + value_type_cardinality * old_size);
BOOST_TEST_EQ(x.size(), 0u);
BOOST_TEST(x.empty());
@@ -53,6 +58,9 @@ namespace {
{
template <class T, class X> void operator()(std::vector<T>& values, X& x)
{
+ static constexpr auto value_type_cardinality =
+ value_cardinality<typename X::value_type>::value;
std::atomic<std::uint64_t> num_erased{0};
auto const old_size = x.size();
@@ -64,11 +72,11 @@ namespace {
BOOST_TEST_EQ(raii::default_constructor + raii::copy_constructor +
raii::move_constructor,
- raii::destructor + 2 * x.size());
+ raii::destructor + value_type_cardinality * x.size());
thread_runner(values, [&num_erased, &x](boost::span<T> s) {
- for (auto const& k : s) {
- auto count = x.erase(k.first.x_);
+ for (auto const& v : s) {
+ auto count = x.erase(get_key(v).x_);
num_erased += count;
BOOST_TEST_LE(count, 1u);
BOOST_TEST_GE(count, 0u);
@@ -79,7 +87,8 @@ namespace {
BOOST_TEST_EQ(raii::copy_constructor, old_cc); BOOST_TEST_EQ(raii::copy_constructor, old_cc);
BOOST_TEST_EQ(raii::move_constructor, old_mc); BOOST_TEST_EQ(raii::move_constructor, old_mc);
BOOST_TEST_EQ(raii::destructor, old_d + 2 * num_erased); BOOST_TEST_EQ(
raii::destructor, old_d + value_type_cardinality * num_erased);
BOOST_TEST_EQ(x.size(), 0u); BOOST_TEST_EQ(x.size(), 0u);
BOOST_TEST(x.empty()); BOOST_TEST(x.empty());
@@ -92,6 +101,15 @@ namespace {
     template <class T, class X> void operator()(std::vector<T>& values, X& x)
     {
       using value_type = typename X::value_type;
+      static constexpr auto value_type_cardinality =
+        value_cardinality<value_type>::value;
+      // concurrent_flat_set visit is always const access
+      using arg_type = typename std::conditional<
+        std::is_same<typename X::key_type, typename X::value_type>::value,
+        typename X::value_type const,
+        typename X::value_type
+      >::type;
       std::atomic<std::uint64_t> num_erased{0};
@@ -105,8 +123,8 @@ namespace {
       auto max = 0;
       x.visit_all([&max](value_type const& v) {
-        if (v.second.x_ > max) {
-          max = v.second.x_;
+        if (get_value(v).x_ > max) {
+          max = get_value(v).x_;
         }
       });
@@ -114,15 +132,15 @@ namespace {
       auto expected_erasures = 0u;
       x.visit_all([&expected_erasures, threshold](value_type const& v) {
-        if (v.second.x_ > threshold) {
+        if (get_value(v).x_ > threshold) {
           ++expected_erasures;
         }
       });
       thread_runner(values, [&num_erased, &x, threshold](boost::span<T> s) {
-        for (auto const& k : s) {
-          auto count = x.erase_if(k.first,
-            [threshold](value_type& v) { return v.second.x_ > threshold; });
+        for (auto const& v : s) {
+          auto count = x.erase_if(get_key(v),
+            [threshold](arg_type& w) { return get_value(w).x_ > threshold; });
           num_erased += count;
           BOOST_TEST_LE(count, 1u);
           BOOST_TEST_GE(count, 0u);
@@ -136,7 +154,8 @@ namespace {
       BOOST_TEST_EQ(raii::copy_constructor, old_cc);
       BOOST_TEST_EQ(raii::move_constructor, old_mc);
-      BOOST_TEST_EQ(raii::destructor, old_d + 2 * num_erased);
+      BOOST_TEST_EQ(
+        raii::destructor, old_d + value_type_cardinality * num_erased);
     }
   } lvalue_eraser_if;
@@ -145,6 +164,15 @@ namespace {
     template <class T, class X> void operator()(std::vector<T>& values, X& x)
     {
       using value_type = typename X::value_type;
+      static constexpr auto value_type_cardinality =
+        value_cardinality<value_type>::value;
+      // concurrent_flat_set visit is always const access
+      using arg_type = typename std::conditional<
+        std::is_same<typename X::key_type, typename X::value_type>::value,
+        typename X::value_type const,
+        typename X::value_type
+      >::type;
       std::atomic<std::uint64_t> num_erased{0};
@@ -158,8 +186,8 @@ namespace {
       auto max = 0;
       x.visit_all([&max](value_type const& v) {
-        if (v.second.x_ > max) {
-          max = v.second.x_;
+        if (get_value(v).x_ > max) {
+          max = get_value(v).x_;
         }
       });
@@ -167,15 +195,15 @@ namespace {
       auto expected_erasures = 0u;
       x.visit_all([&expected_erasures, threshold](value_type const& v) {
-        if (v.second.x_ > threshold) {
+        if (get_value(v).x_ > threshold) {
           ++expected_erasures;
         }
       });
       thread_runner(values, [&num_erased, &x, threshold](boost::span<T> s) {
-        for (auto const& k : s) {
-          auto count = x.erase_if(k.first.x_,
-            [threshold](value_type& v) { return v.second.x_ > threshold; });
+        for (auto const& v : s) {
+          auto count = x.erase_if(get_key(v).x_,
+            [threshold](arg_type& w) { return get_value(w).x_ > threshold; });
           num_erased += count;
           BOOST_TEST_LE(count, 1u);
           BOOST_TEST_GE(count, 0u);
@@ -189,7 +217,8 @@ namespace {
       BOOST_TEST_EQ(raii::copy_constructor, old_cc);
       BOOST_TEST_EQ(raii::move_constructor, old_mc);
-      BOOST_TEST_EQ(raii::destructor, old_d + 2 * num_erased);
+      BOOST_TEST_EQ(
+        raii::destructor, old_d + value_type_cardinality * num_erased);
     }
   } transp_lvalue_eraser_if;
@@ -198,6 +227,15 @@ namespace {
     template <class T, class X> void operator()(std::vector<T>& values, X& x)
    {
       using value_type = typename X::value_type;
+      static constexpr auto value_type_cardinality =
+        value_cardinality<value_type>::value;
+      // concurrent_flat_set visit is always const access
+      using arg_type = typename std::conditional<
+        std::is_same<typename X::key_type, typename X::value_type>::value,
+        typename X::value_type const,
+        typename X::value_type
+      >::type;
       std::atomic<std::uint64_t> num_erased{0};
@@ -211,8 +249,8 @@ namespace {
       auto max = 0;
       x.visit_all([&max](value_type const& v) {
-        if (v.second.x_ > max) {
-          max = v.second.x_;
+        if (get_value(v).x_ > max) {
+          max = get_value(v).x_;
         }
       });
@@ -220,7 +258,7 @@ namespace {
       auto expected_erasures = 0u;
       x.visit_all([&expected_erasures, threshold](value_type const& v) {
-        if (v.second.x_ > threshold) {
+        if (get_value(v).x_ > threshold) {
           ++expected_erasures;
         }
       });
@@ -229,7 +267,7 @@ namespace {
         values, [&num_erased, &x, threshold](boost::span<T> /* s */) {
           for (std::size_t i = 0; i < 128; ++i) {
             auto count = x.erase_if(
-              [threshold](value_type& v) { return v.second.x_ > threshold; });
+              [threshold](arg_type& v) { return get_value(v).x_ > threshold; });
             num_erased += count;
           }
         });
@@ -241,7 +279,8 @@ namespace {
       BOOST_TEST_EQ(raii::copy_constructor, old_cc);
       BOOST_TEST_EQ(raii::move_constructor, old_mc);
-      BOOST_TEST_EQ(raii::destructor, old_d + 2 * num_erased);
+      BOOST_TEST_EQ(
+        raii::destructor, old_d + value_type_cardinality * num_erased);
     }
   } erase_if;
@@ -250,6 +289,15 @@ namespace {
     template <class T, class X> void operator()(std::vector<T>& values, X& x)
     {
       using value_type = typename X::value_type;
+      static constexpr auto value_type_cardinality =
+        value_cardinality<value_type>::value;
+      // concurrent_flat_set visit is always const access
+      using arg_type = typename std::conditional<
+        std::is_same<typename X::key_type, typename X::value_type>::value,
+        typename X::value_type const,
+        typename X::value_type
+      >::type;
       std::atomic<std::uint64_t> num_erased{0};
@@ -263,8 +311,8 @@ namespace {
       auto max = 0;
       x.visit_all([&max](value_type const& v) {
-        if (v.second.x_ > max) {
-          max = v.second.x_;
+        if (get_value(v).x_ > max) {
+          max = get_value(v).x_;
         }
       });
@@ -272,7 +320,7 @@ namespace {
       auto expected_erasures = 0u;
       x.visit_all([&expected_erasures, threshold](value_type const& v) {
-        if (v.second.x_ > threshold) {
+        if (get_value(v).x_ > threshold) {
           ++expected_erasures;
         }
       });
@@ -281,7 +329,8 @@ namespace {
         values, [&num_erased, &x, threshold](boost::span<T> /* s */) {
           for (std::size_t i = 0; i < 128; ++i) {
             auto count = boost::unordered::erase_if(x,
-              [threshold](value_type& v) { return v.second.x_ > threshold; });
+              [threshold](arg_type& v) {
+                return get_value(v).x_ > threshold; });
             num_erased += count;
           }
         });
@@ -293,7 +342,8 @@ namespace {
       BOOST_TEST_EQ(raii::copy_constructor, old_cc);
       BOOST_TEST_EQ(raii::move_constructor, old_mc);
-      BOOST_TEST_EQ(raii::destructor, old_d + 2 * num_erased);
+      BOOST_TEST_EQ(
+        raii::destructor, old_d + value_type_cardinality * num_erased);
     }
   } free_fn_erase_if;
@@ -303,6 +353,15 @@ namespace {
     {
 #if defined(BOOST_UNORDERED_PARALLEL_ALGORITHMS)
       using value_type = typename X::value_type;
+      static constexpr auto value_type_cardinality =
+        value_cardinality<value_type>::value;
+      // concurrent_flat_set visit is always const access
+      using arg_type = typename std::conditional<
+        std::is_same<typename X::key_type, typename X::value_type>::value,
+        typename X::value_type const,
+        typename X::value_type
+      >::type;
       std::atomic<std::uint64_t> num_invokes{0};
@@ -316,8 +375,8 @@ namespace {
       auto max = 0;
       x.visit_all([&max](value_type const& v) {
-        if (v.second.x_ > max) {
-          max = v.second.x_;
+        if (get_value(v).x_ > max) {
+          max = get_value(v).x_;
         }
       });
@@ -325,7 +384,7 @@ namespace {
       auto expected_erasures = 0u;
       x.visit_all([&expected_erasures, threshold](value_type const& v) {
-        if (v.second.x_ > threshold) {
+        if (get_value(v).x_ > threshold) {
           ++expected_erasures;
         }
       });
@@ -333,9 +392,9 @@ namespace {
       thread_runner(values, [&num_invokes, &x, threshold](boost::span<T> s) {
         (void)s;
         x.erase_if(
-          std::execution::par, [&num_invokes, threshold](value_type& v) {
+          std::execution::par, [&num_invokes, threshold](arg_type& v) {
            ++num_invokes;
-            return v.second.x_ > threshold;
+            return get_value(v).x_ > threshold;
          });
       });
@@ -346,7 +405,8 @@ namespace {
       BOOST_TEST_EQ(raii::copy_constructor, old_cc);
       BOOST_TEST_EQ(raii::move_constructor, old_mc);
-      BOOST_TEST_EQ(raii::destructor, old_d + 2 * expected_erasures);
+      BOOST_TEST_EQ(
+        raii::destructor, old_d + value_type_cardinality * expected_erasures);
 #else
       (void)values;
       (void)x;
@@ -354,12 +414,12 @@ namespace {
     }
   } erase_if_exec_policy;
-  template <class X, class G, class F>
-  void erase(X*, G gen, F eraser, test::random_generator rg)
+  template <class X, class GF, class F>
+  void erase(X*, GF gen_factory, F eraser, test::random_generator rg)
   {
+    auto gen = gen_factory.template get<X>();
     auto values = make_random_values(1024 * 16, [&] { return gen(rg); });
-    auto reference_map =
-      boost::unordered_flat_map<raii, raii>(values.begin(), values.end());
+    auto reference_cont = reference_container<X>(values.begin(), values.end());
     raii::reset_counts();
     {
@@ -367,20 +427,23 @@ namespace {
       x.insert(values.begin(), values.end());
-      BOOST_TEST_EQ(x.size(), reference_map.size());
-      test_fuzzy_matches_reference(x, reference_map, rg);
+      BOOST_TEST_EQ(x.size(), reference_cont.size());
+      test_fuzzy_matches_reference(x, reference_cont, rg);
       eraser(values, x);
-      test_fuzzy_matches_reference(x, reference_map, rg);
+      test_fuzzy_matches_reference(x, reference_cont, rg);
     }
     check_raii_counts();
   }
   boost::unordered::concurrent_flat_map<raii, raii>* map;
+  boost::unordered::concurrent_flat_set<raii>* set;
   boost::unordered::concurrent_flat_map<raii, raii, transp_hash,
     transp_key_equal>* transparent_map;
+  boost::unordered::concurrent_flat_set<raii, transp_hash,
+    transp_key_equal>* transparent_set;
 } // namespace
@@ -391,15 +454,15 @@ using test::sequential;
 // clang-format off
 UNORDERED_TEST(
   erase,
-  ((map))
-  ((value_type_generator)(init_type_generator))
+  ((map)(set))
+  ((value_type_generator_factory)(init_type_generator_factory))
   ((lvalue_eraser)(lvalue_eraser_if)(erase_if)(free_fn_erase_if)(erase_if_exec_policy))
   ((default_generator)(sequential)(limited_range)))
 UNORDERED_TEST(
   erase,
-  ((transparent_map))
-  ((value_type_generator)(init_type_generator))
+  ((transparent_map)(transparent_set))
+  ((value_type_generator_factory)(init_type_generator_factory))
   ((transp_lvalue_eraser)(transp_lvalue_eraser_if)(erase_if_exec_policy))
   ((default_generator)(sequential)(limited_range)))


@@ -1,24 +1,87 @@
 // Copyright (C) 2023 Christian Mazakas
+// Copyright (C) 2023 Joaquin M Lopez Munoz
 // Distributed under the Boost Software License, Version 1.0. (See accompanying
 // file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
 #include "exception_helpers.hpp"
 #include <boost/unordered/concurrent_flat_map.hpp>
+#include <boost/unordered/concurrent_flat_set.hpp>
-using allocator_type = stateful_allocator<std::pair<raii const, raii> >;
 using hasher = stateful_hash;
 using key_equal = stateful_key_equal;
 using map_type = boost::unordered::concurrent_flat_map<raii, raii, hasher,
-  key_equal, allocator_type>;
+  key_equal, stateful_allocator<std::pair<raii const, raii> > >;
+using set_type = boost::unordered::concurrent_flat_set<raii, hasher,
+  key_equal, stateful_allocator<raii> >;
+map_type* test_map;
+set_type* test_set;
+std::initializer_list<map_type::value_type> map_init_list{
+  {raii{0}, raii{0}},
+  {raii{1}, raii{1}},
+  {raii{2}, raii{2}},
+  {raii{3}, raii{3}},
+  {raii{4}, raii{4}},
+  {raii{5}, raii{5}},
+  {raii{6}, raii{6}},
+  {raii{6}, raii{6}},
+  {raii{7}, raii{7}},
+  {raii{8}, raii{8}},
+  {raii{9}, raii{9}},
+  {raii{10}, raii{10}},
+  {raii{9}, raii{9}},
+  {raii{8}, raii{8}},
+  {raii{7}, raii{7}},
+  {raii{6}, raii{6}},
+  {raii{5}, raii{5}},
+  {raii{4}, raii{4}},
+  {raii{3}, raii{3}},
+  {raii{2}, raii{2}},
+  {raii{1}, raii{1}},
+  {raii{0}, raii{0}},
+};
+std::initializer_list<set_type::value_type> set_init_list{
+  raii{0},
+  raii{1},
+  raii{2},
+  raii{3},
+  raii{4},
+  raii{5},
+  raii{6},
+  raii{6},
+  raii{7},
+  raii{8},
+  raii{9},
+  raii{10},
+  raii{9},
+  raii{8},
+  raii{7},
+  raii{6},
+  raii{5},
+  raii{4},
+  raii{3},
+  raii{2},
+  raii{1},
+  raii{0},
+};
+auto test_map_and_init_list=std::make_pair(test_map,map_init_list);
+auto test_set_and_init_list=std::make_pair(test_set,set_init_list);
 namespace {
   test::seed_t initialize_seed(1794114520);
-  template <class G> void copy_assign(G gen, test::random_generator rg)
+  template <class X, class GF>
+  void copy_assign(X*, GF gen_factory, test::random_generator rg)
   {
+    using allocator_type = typename X::allocator_type;
+    auto gen = gen_factory.template get<X>();
     auto values = make_random_values(1024 * 16, [&] { return gen(rg); });
     {
@@ -31,12 +94,12 @@ namespace {
         values.begin() + static_cast<std::ptrdiff_t>(values.size() / 2);
       auto end = values.end();
-      auto reference_map = boost::unordered_flat_map<raii, raii>(begin, mid);
+      auto reference_cont = reference_container<X>(begin, mid);
-      map_type x(
+      X x(
         begin, mid, values.size(), hasher(1), key_equal(2), allocator_type(3));
-      map_type y(
+      X y(
         mid, end, values.size(), hasher(2), key_equal(1), allocator_type(4));
       BOOST_TEST(!y.empty());
@@ -53,13 +116,17 @@ namespace {
       disable_exceptions();
       BOOST_TEST_GT(num_throws, 0u);
-      test_fuzzy_matches_reference(y, reference_map, rg);
+      test_fuzzy_matches_reference(y, reference_cont, rg);
     }
     check_raii_counts();
   }
-  template <class G> void move_assign(G gen, test::random_generator rg)
+  template <class X, class GF>
+  void move_assign(X*, GF gen_factory, test::random_generator rg)
   {
+    using allocator_type = typename X::allocator_type;
+    auto gen = gen_factory.template get<X>();
     auto values = make_random_values(1024 * 16, [&] { return gen(rg); });
     {
@@ -72,7 +139,7 @@ namespace {
         values.begin() + static_cast<std::ptrdiff_t>(values.size() / 2);
       auto end = values.end();
-      auto reference_map = boost::unordered_flat_map<raii, raii>(begin, mid);
+      auto reference_cont = reference_container<X>(begin, mid);
       BOOST_TEST(
         !boost::allocator_is_always_equal<allocator_type>::type::value);
@@ -83,10 +150,10 @@ namespace {
       for (std::size_t i = 0; i < 2 * alloc_throw_threshold; ++i) {
         disable_exceptions();
-        map_type x(begin, mid, values.size(), hasher(1), key_equal(2),
+        X x(begin, mid, values.size(), hasher(1), key_equal(2),
           allocator_type(3));
-        map_type y(
+        X y(
           mid, end, values.size(), hasher(2), key_equal(1), allocator_type(4));
         enable_exceptions();
@@ -96,7 +163,7 @@ namespace {
          ++num_throws;
        }
        disable_exceptions();
-        test_fuzzy_matches_reference(y, reference_map, rg);
+        test_fuzzy_matches_reference(y, reference_cont, rg);
      }
      BOOST_TEST_GT(num_throws, 0u);
@@ -104,43 +171,22 @@ namespace {
     check_raii_counts();
   }
-  UNORDERED_AUTO_TEST (intializer_list_assign) {
-    using value_type = typename map_type::value_type;
-    std::initializer_list<value_type> values{
-      value_type{raii{0}, raii{0}},
-      value_type{raii{1}, raii{1}},
-      value_type{raii{2}, raii{2}},
-      value_type{raii{3}, raii{3}},
-      value_type{raii{4}, raii{4}},
-      value_type{raii{5}, raii{5}},
-      value_type{raii{6}, raii{6}},
-      value_type{raii{6}, raii{6}},
-      value_type{raii{7}, raii{7}},
-      value_type{raii{8}, raii{8}},
-      value_type{raii{9}, raii{9}},
-      value_type{raii{10}, raii{10}},
-      value_type{raii{9}, raii{9}},
-      value_type{raii{8}, raii{8}},
-      value_type{raii{7}, raii{7}},
-      value_type{raii{6}, raii{6}},
-      value_type{raii{5}, raii{5}},
-      value_type{raii{4}, raii{4}},
-      value_type{raii{3}, raii{3}},
-      value_type{raii{2}, raii{2}},
-      value_type{raii{1}, raii{1}},
-      value_type{raii{0}, raii{0}},
-    };
+  template <class X, class IL>
+  void intializer_list_assign(std::pair<X*, IL> p)
+  {
+    using allocator_type = typename X::allocator_type;
+    auto init_list = p.second;
     {
       raii::reset_counts();
       unsigned num_throws = 0;
       for (std::size_t i = 0; i < throw_threshold; ++i) {
-        map_type x(0, hasher(1), key_equal(2), allocator_type(3));
+        X x(0, hasher(1), key_equal(2), allocator_type(3));
         enable_exceptions();
         try {
-          x = values;
+          x = init_list;
         } catch (...) {
           ++num_throws;
         }
@@ -160,13 +206,19 @@ using test::sequential;
 // clang-format off
 UNORDERED_TEST(
   copy_assign,
-  ((exception_value_type_generator))
+  ((test_map)(test_set))
+  ((exception_value_type_generator_factory))
   ((default_generator)(sequential)(limited_range)))
 UNORDERED_TEST(
   move_assign,
-  ((exception_value_type_generator))
+  ((test_map)(test_set))
+  ((exception_value_type_generator_factory))
   ((default_generator)(sequential)))
+UNORDERED_TEST(
+  intializer_list_assign,
+  ((test_map_and_init_list)(test_set_and_init_list)))
 // clang-format on
 RUN_TESTS()


@@ -1,23 +1,84 @@
 // Copyright (C) 2023 Christian Mazakas
+// Copyright (C) 2023 Joaquin M Lopez Munoz
 // Distributed under the Boost Software License, Version 1.0. (See accompanying
 // file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
 #include "exception_helpers.hpp"
 #include <boost/unordered/concurrent_flat_map.hpp>
+#include <boost/unordered/concurrent_flat_set.hpp>
-using allocator_type = stateful_allocator<std::pair<raii const, raii> >;
 using hasher = stateful_hash;
 using key_equal = stateful_key_equal;
 using map_type = boost::unordered::concurrent_flat_map<raii, raii, hasher,
-  key_equal, allocator_type>;
+  key_equal, stateful_allocator<std::pair<raii const, raii> > >;
+using set_type = boost::unordered::concurrent_flat_set<raii, hasher,
+  key_equal, stateful_allocator<raii> >;
+map_type* test_map;
+set_type* test_set;
+std::initializer_list<map_type::value_type> map_init_list{
+  {raii{0}, raii{0}},
+  {raii{1}, raii{1}},
+  {raii{2}, raii{2}},
+  {raii{3}, raii{3}},
+  {raii{4}, raii{4}},
+  {raii{5}, raii{5}},
+  {raii{6}, raii{6}},
+  {raii{6}, raii{6}},
+  {raii{7}, raii{7}},
+  {raii{8}, raii{8}},
+  {raii{9}, raii{9}},
+  {raii{10}, raii{10}},
+  {raii{9}, raii{9}},
+  {raii{8}, raii{8}},
+  {raii{7}, raii{7}},
+  {raii{6}, raii{6}},
+  {raii{5}, raii{5}},
+  {raii{4}, raii{4}},
+  {raii{3}, raii{3}},
+  {raii{2}, raii{2}},
+  {raii{1}, raii{1}},
+  {raii{0}, raii{0}},
+};
+std::initializer_list<set_type::value_type> set_init_list{
+  raii{0},
+  raii{1},
+  raii{2},
+  raii{3},
+  raii{4},
+  raii{5},
+  raii{6},
+  raii{6},
+  raii{7},
+  raii{8},
+  raii{9},
+  raii{10},
+  raii{9},
+  raii{8},
+  raii{7},
+  raii{6},
+  raii{5},
+  raii{4},
+  raii{3},
+  raii{2},
+  raii{1},
+  raii{0},
+};
+auto test_map_and_init_list=std::make_pair(test_map,map_init_list);
+auto test_set_and_init_list=std::make_pair(test_set,set_init_list);
 namespace {
   test::seed_t initialize_seed(795610904);
-  UNORDERED_AUTO_TEST (bucket_constructor) {
+  template <class X>
+  void bucket_constructor(X*)
+  {
    raii::reset_counts();
    bool was_thrown = false;
@@ -25,7 +86,7 @@ namespace {
    enable_exceptions();
    for (std::size_t i = 0; i < alloc_throw_threshold; ++i) {
      try {
-        map_type m(128);
+        X m(128);
      } catch (...) {
        was_thrown = true;
      }
@@ -35,8 +96,12 @@ namespace {
    BOOST_TEST(was_thrown);
  }
-  template <class G> void iterator_range(G gen, test::random_generator rg)
+  template <class X, class GF>
+  void iterator_range(X*, GF gen_factory, test::random_generator rg)
  {
+    using allocator_type = typename X::allocator_type;
+    auto gen = gen_factory.template get<X>();
    auto values = make_random_values(1024 * 16, [&] { return gen(rg); });
    {
@@ -46,7 +111,7 @@ namespace {
      enable_exceptions();
      try {
-        map_type x(values.begin(), values.end(), 0, hasher(1), key_equal(2),
+        X x(values.begin(), values.end(), 0, hasher(1), key_equal(2),
          allocator_type(3));
      } catch (...) {
        was_thrown = true;
@@ -64,7 +129,7 @@ namespace {
      enable_exceptions();
      try {
-        map_type x(values.begin(), values.end(), allocator_type(3));
+        X x(values.begin(), values.end(), allocator_type(3));
      } catch (...) {
        was_thrown = true;
      }
@@ -81,7 +146,7 @@ namespace {
      enable_exceptions();
      try {
-        map_type x(
+        X x(
          values.begin(), values.end(), values.size(), allocator_type(3));
      } catch (...) {
        was_thrown = true;
@@ -99,7 +164,7 @@ namespace {
      enable_exceptions();
      try {
-        map_type x(values.begin(), values.end(), values.size(), hasher(1),
+        X x(values.begin(), values.end(), values.size(), hasher(1),
          allocator_type(3));
      } catch (...) {
        was_thrown = true;
@@ -111,8 +176,12 @@ namespace {
    }
  }
-  template <class G> void copy_constructor(G gen, test::random_generator rg)
+  template <class X, class GF>
+  void copy_constructor(X*, GF gen_factory, test::random_generator rg)
  {
+    using allocator_type = typename X::allocator_type;
+    auto gen = gen_factory.template get<X>();
    auto values = make_random_values(1024 * 16, [&] { return gen(rg); });
    {
@@ -121,10 +190,10 @@ namespace {
      bool was_thrown = false;
      try {
-        map_type x(values.begin(), values.end(), 0);
+        X x(values.begin(), values.end(), 0);
        enable_exceptions();
-        map_type y(x);
+        X y(x);
      } catch (...) {
        was_thrown = true;
      }
@@ -140,10 +209,10 @@ namespace {
      bool was_thrown = false;
      try {
-        map_type x(values.begin(), values.end(), 0);
+        X x(values.begin(), values.end(), 0);
        enable_exceptions();
-        map_type y(x, allocator_type(4));
+        X y(x, allocator_type(4));
      } catch (...) {
        was_thrown = true;
      }
@@ -154,8 +223,12 @@ namespace {
    }
  }
-  template <class G> void move_constructor(G gen, test::random_generator rg)
+  template <class X, class GF>
+  void move_constructor(X*, GF gen_factory, test::random_generator rg)
  {
+    using allocator_type = typename X::allocator_type;
+    auto gen = gen_factory.template get<X>();
    auto values = make_random_values(1024 * 16, [&] { return gen(rg); });
    {
@@ -164,10 +237,10 @@ namespace {
      bool was_thrown = false;
      try {
-        map_type x(values.begin(), values.end(), 0);
+        X x(values.begin(), values.end(), 0);
        enable_exceptions();
-        map_type y(std::move(x), allocator_type(4));
+        X y(std::move(x), allocator_type(4));
      } catch (...) {
        was_thrown = true;
      }
@@ -178,33 +251,12 @@ namespace {
    }
  }
-  UNORDERED_AUTO_TEST (initializer_list_bucket_count) {
-    using value_type = typename map_type::value_type;
-    std::initializer_list<value_type> values{
-      value_type{raii{0}, raii{0}},
-      value_type{raii{1}, raii{1}},
-      value_type{raii{2}, raii{2}},
-      value_type{raii{3}, raii{3}},
-      value_type{raii{4}, raii{4}},
-      value_type{raii{5}, raii{5}},
-      value_type{raii{6}, raii{6}},
-      value_type{raii{6}, raii{6}},
-      value_type{raii{7}, raii{7}},
-      value_type{raii{8}, raii{8}},
-      value_type{raii{9}, raii{9}},
-      value_type{raii{10}, raii{10}},
-      value_type{raii{9}, raii{9}},
-      value_type{raii{8}, raii{8}},
-      value_type{raii{7}, raii{7}},
-      value_type{raii{6}, raii{6}},
-      value_type{raii{5}, raii{5}},
-      value_type{raii{4}, raii{4}},
-      value_type{raii{3}, raii{3}},
-      value_type{raii{2}, raii{2}},
-      value_type{raii{1}, raii{1}},
-      value_type{raii{0}, raii{0}},
-    };
+  template <class X, class IL>
+  void initializer_list_bucket_count(std::pair<X*, IL> p)
+  {
+    using allocator_type = typename X::allocator_type;
+    auto init_list = p.second;
    {
      raii::reset_counts();
@@ -213,7 +265,7 @@ namespace {
      enable_exceptions();
      for (std::size_t i = 0; i < throw_threshold; ++i) {
        try {
-          map_type x(values, 0, hasher(1), key_equal(2), allocator_type(3));
+          X x(init_list, 0, hasher(1), key_equal(2), allocator_type(3));
        } catch (...) {
          ++num_throws;
        }
@@ -231,7 +283,7 @@ namespace {
      enable_exceptions();
      for (std::size_t i = 0; i < alloc_throw_threshold * 2; ++i) {
        try {
-          map_type x(values, allocator_type(3));
+          X x(init_list, allocator_type(3));
        } catch (...) {
          ++num_throws;
        }
@@ -249,7 +301,7 @@ namespace {
      enable_exceptions();
      for (std::size_t i = 0; i < alloc_throw_threshold * 2; ++i) {
        try {
-          map_type x(values, values.size() * 2, allocator_type(3));
+          X x(init_list, init_list.size() * 2, allocator_type(3));
        } catch (...) {
          ++num_throws;
        }
@@ -267,7 +319,7 @@ namespace {
      enable_exceptions();
      for (std::size_t i = 0; i < throw_threshold; ++i) {
        try {
-          map_type x(values, values.size() * 2, hasher(1), allocator_type(3));
+          X x(init_list, init_list.size() * 2, hasher(1), allocator_type(3));
        } catch (...) {
          ++num_throws;
        }
@@ -285,20 +337,31 @@ using test::limited_range;
 using test::sequential;
 // clang-format off
+UNORDERED_TEST(
+  bucket_constructor,
+  ((test_map)(test_set)))
 UNORDERED_TEST(
   iterator_range,
-  ((exception_value_type_generator))
+  ((test_map)(test_set))
+  ((exception_value_type_generator_factory))
   ((default_generator)(sequential)(limited_range)))
 UNORDERED_TEST(
   copy_constructor,
-  ((exception_value_type_generator))
+  ((test_map)(test_set))
+  ((exception_value_type_generator_factory))
   ((default_generator)(sequential)))
 UNORDERED_TEST(
   move_constructor,
-  ((exception_value_type_generator))
+  ((test_map)(test_set))
+  ((exception_value_type_generator_factory))
   ((default_generator)(sequential)))
+UNORDERED_TEST(
+  initializer_list_bucket_count,
+  ((test_map_and_init_list)(test_set_and_init_list)))
 // clang-format on
 RUN_TESTS()


@@ -1,10 +1,12 @@
 // Copyright (C) 2023 Christian Mazakas
+// Copyright (C) 2023 Joaquin M Lopez Munoz
 // Distributed under the Boost Software License, Version 1.0. (See accompanying
 // file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
 #include "exception_helpers.hpp"
 #include <boost/unordered/concurrent_flat_map.hpp>
+#include <boost/unordered/concurrent_flat_set.hpp>
 #include <boost/core/ignore_unused.hpp>
@@ -15,6 +17,9 @@ namespace {
   {
     template <class T, class X> void operator()(std::vector<T>& values, X& x)
     {
+      static constexpr auto value_type_cardinality =
+        value_cardinality<typename X::value_type>::value;
       std::atomic<std::uint64_t> num_erased{0};
       auto const old_size = x.size();
@@ -27,9 +32,9 @@ namespace {
       enable_exceptions();
       thread_runner(values, [&values, &num_erased, &x](boost::span<T>) {
-        for (auto const& k : values) {
+        for (auto const& v : values) {
           try {
-            auto count = x.erase(k.first);
+            auto count = x.erase(get_key(v));
             BOOST_TEST_LE(count, 1u);
             BOOST_TEST_GE(count, 0u);
@@ -46,7 +51,8 @@ namespace {
       BOOST_TEST_EQ(raii::copy_constructor, old_cc);
       BOOST_TEST_EQ(raii::move_constructor, old_mc);
-      BOOST_TEST_EQ(raii::destructor, old_d + 2 * num_erased);
+      BOOST_TEST_EQ(
+        raii::destructor, old_d + value_type_cardinality * num_erased);
     }
   } lvalue_eraser;
@@ -55,6 +61,15 @@ namespace {
     template <class T, class X> void operator()(std::vector<T>& values, X& x)
     {
       using value_type = typename X::value_type;
+      static constexpr auto value_type_cardinality =
value_cardinality<value_type>::value;
// concurrent_flat_set visit is always const access
using arg_type = typename std::conditional<
std::is_same<typename X::key_type, typename X::value_type>::value,
typename X::value_type const,
typename X::value_type
>::type;
std::atomic<std::uint64_t> num_erased{0}; std::atomic<std::uint64_t> num_erased{0};
@@ -68,8 +83,8 @@ namespace {
auto max = 0; auto max = 0;
x.visit_all([&max](value_type const& v) { x.visit_all([&max](value_type const& v) {
if (v.second.x_ > max) { if (get_value(v).x_ > max) {
max = v.second.x_; max = get_value(v).x_;
} }
}); });
@@ -77,17 +92,17 @@ namespace {
auto expected_erasures = 0u; auto expected_erasures = 0u;
x.visit_all([&expected_erasures, threshold](value_type const& v) { x.visit_all([&expected_erasures, threshold](value_type const& v) {
if (v.second.x_ > threshold) { if (get_value(v).x_ > threshold) {
++expected_erasures; ++expected_erasures;
} }
}); });
enable_exceptions(); enable_exceptions();
thread_runner(values, [&num_erased, &x, threshold](boost::span<T> s) { thread_runner(values, [&num_erased, &x, threshold](boost::span<T> s) {
for (auto const& k : s) { for (auto const& v : s) {
try { try {
auto count = x.erase_if(k.first, auto count = x.erase_if(get_key(v),
[threshold](value_type& v) { return v.second.x_ > threshold; }); [threshold](arg_type& w) { return get_value(w).x_ > threshold; });
num_erased += count; num_erased += count;
BOOST_TEST_LE(count, 1u); BOOST_TEST_LE(count, 1u);
BOOST_TEST_GE(count, 0u); BOOST_TEST_GE(count, 0u);
@@ -104,7 +119,8 @@ namespace {
BOOST_TEST_EQ(raii::copy_constructor, old_cc); BOOST_TEST_EQ(raii::copy_constructor, old_cc);
BOOST_TEST_EQ(raii::move_constructor, old_mc); BOOST_TEST_EQ(raii::move_constructor, old_mc);
BOOST_TEST_EQ(raii::destructor, old_d + 2 * num_erased); BOOST_TEST_EQ(
raii::destructor, old_d + value_type_cardinality * num_erased);
} }
} lvalue_eraser_if; } lvalue_eraser_if;
@@ -113,6 +129,15 @@ namespace {
template <class T, class X> void operator()(std::vector<T>& values, X& x) template <class T, class X> void operator()(std::vector<T>& values, X& x)
{ {
using value_type = typename X::value_type; using value_type = typename X::value_type;
static constexpr auto value_type_cardinality =
value_cardinality<value_type>::value;
// concurrent_flat_set visit is always const access
using arg_type = typename std::conditional<
std::is_same<typename X::key_type, typename X::value_type>::value,
typename X::value_type const,
typename X::value_type
>::type;
auto const old_size = x.size(); auto const old_size = x.size();
@@ -124,8 +149,8 @@ namespace {
auto max = 0; auto max = 0;
x.visit_all([&max](value_type const& v) { x.visit_all([&max](value_type const& v) {
if (v.second.x_ > max) { if (get_value(v).x_ > max) {
max = v.second.x_; max = get_value(v).x_;
} }
}); });
@@ -133,7 +158,7 @@ namespace {
auto expected_erasures = 0u; auto expected_erasures = 0u;
x.visit_all([&expected_erasures, threshold](value_type const& v) { x.visit_all([&expected_erasures, threshold](value_type const& v) {
if (v.second.x_ > threshold) { if (get_value(v).x_ > threshold) {
++expected_erasures; ++expected_erasures;
} }
}); });
@@ -142,14 +167,14 @@ namespace {
thread_runner(values, [&x, threshold](boost::span<T> /* s */) { thread_runner(values, [&x, threshold](boost::span<T> /* s */) {
for (std::size_t i = 0; i < 256; ++i) { for (std::size_t i = 0; i < 256; ++i) {
try { try {
x.erase_if([threshold](value_type& v) { x.erase_if([threshold](arg_type& v) {
static std::atomic<std::uint32_t> c{0}; static std::atomic<std::uint32_t> c{0};
auto t = ++c; auto t = ++c;
if (should_throw && (t % throw_threshold == 0)) { if (should_throw && (t % throw_threshold == 0)) {
throw exception_tag{}; throw exception_tag{};
} }
return v.second.x_ > threshold; return get_value(v).x_ > threshold;
}); });
} catch (...) { } catch (...) {
} }
@@ -161,7 +186,9 @@ namespace {
BOOST_TEST_EQ(raii::copy_constructor, old_cc); BOOST_TEST_EQ(raii::copy_constructor, old_cc);
BOOST_TEST_EQ(raii::move_constructor, old_mc); BOOST_TEST_EQ(raii::move_constructor, old_mc);
BOOST_TEST_EQ(raii::destructor, old_d + 2 * (old_size - x.size())); BOOST_TEST_EQ(
raii::destructor,
old_d + value_type_cardinality * (old_size - x.size()));
} }
} erase_if; } erase_if;
@@ -170,6 +197,15 @@ namespace {
template <class T, class X> void operator()(std::vector<T>& values, X& x) template <class T, class X> void operator()(std::vector<T>& values, X& x)
{ {
using value_type = typename X::value_type; using value_type = typename X::value_type;
static constexpr auto value_type_cardinality =
value_cardinality<value_type>::value;
// concurrent_flat_set visit is always const access
using arg_type = typename std::conditional<
std::is_same<typename X::key_type, typename X::value_type>::value,
typename X::value_type const,
typename X::value_type
>::type;
auto const old_size = x.size(); auto const old_size = x.size();
@@ -181,8 +217,8 @@ namespace {
auto max = 0; auto max = 0;
x.visit_all([&max](value_type const& v) { x.visit_all([&max](value_type const& v) {
if (v.second.x_ > max) { if (get_value(v).x_ > max) {
max = v.second.x_; max = get_value(v).x_;
} }
}); });
@@ -192,14 +228,14 @@ namespace {
thread_runner(values, [&x, threshold](boost::span<T> /* s */) { thread_runner(values, [&x, threshold](boost::span<T> /* s */) {
for (std::size_t i = 0; i < 256; ++i) { for (std::size_t i = 0; i < 256; ++i) {
try { try {
boost::unordered::erase_if(x, [threshold](value_type& v) { boost::unordered::erase_if(x, [threshold](arg_type& v) {
static std::atomic<std::uint32_t> c{0}; static std::atomic<std::uint32_t> c{0};
auto t = ++c; auto t = ++c;
if (should_throw && (t % throw_threshold == 0)) { if (should_throw && (t % throw_threshold == 0)) {
throw exception_tag{}; throw exception_tag{};
} }
return v.second.x_ > threshold; return get_value(v).x_ > threshold;
}); });
} catch (...) { } catch (...) {
@@ -212,16 +248,18 @@ namespace {
BOOST_TEST_EQ(raii::copy_constructor, old_cc); BOOST_TEST_EQ(raii::copy_constructor, old_cc);
BOOST_TEST_EQ(raii::move_constructor, old_mc); BOOST_TEST_EQ(raii::move_constructor, old_mc);
BOOST_TEST_EQ(raii::destructor, old_d + 2 * (old_size - x.size())); BOOST_TEST_EQ(
raii::destructor,
old_d + value_type_cardinality * (old_size - x.size()));
} }
} free_fn_erase_if; } free_fn_erase_if;
template <class X, class G, class F> template <class X, class GF, class F>
void erase(X*, G gen, F eraser, test::random_generator rg) void erase(X*, GF gen_factory, F eraser, test::random_generator rg)
{ {
auto gen = gen_factory.template get<X>();
auto values = make_random_values(1024 * 16, [&] { return gen(rg); }); auto values = make_random_values(1024 * 16, [&] { return gen(rg); });
auto reference_map = auto reference_cont = reference_container<X>(values.begin(), values.end());
boost::unordered_flat_map<raii, raii>(values.begin(), values.end());
raii::reset_counts(); raii::reset_counts();
@@ -231,13 +269,13 @@ namespace {
x.insert(v); x.insert(v);
} }
BOOST_TEST_EQ(x.size(), reference_map.size()); BOOST_TEST_EQ(x.size(), reference_cont.size());
BOOST_TEST_EQ(raii::destructor, 0u); BOOST_TEST_EQ(raii::destructor, 0u);
test_fuzzy_matches_reference(x, reference_map, rg); test_fuzzy_matches_reference(x, reference_cont, rg);
eraser(values, x); eraser(values, x);
test_fuzzy_matches_reference(x, reference_map, rg); test_fuzzy_matches_reference(x, reference_cont, rg);
} }
check_raii_counts(); check_raii_counts();
@@ -245,6 +283,8 @@ namespace {
boost::unordered::concurrent_flat_map<raii, raii, stateful_hash, boost::unordered::concurrent_flat_map<raii, raii, stateful_hash,
stateful_key_equal, stateful_allocator<std::pair<raii const, raii> > >* map; stateful_key_equal, stateful_allocator<std::pair<raii const, raii> > >* map;
boost::unordered::concurrent_flat_set<raii, stateful_hash,
stateful_key_equal, stateful_allocator<raii> >* set;
} // namespace } // namespace
@@ -255,8 +295,9 @@ using test::sequential;
// clang-format off // clang-format off
UNORDERED_TEST( UNORDERED_TEST(
erase, erase,
((map)) ((map)(set))
((exception_value_type_generator)(exception_init_type_generator)) ((exception_value_type_generator_factory)
(exception_init_type_generator_factory))
((lvalue_eraser)(lvalue_eraser_if)(erase_if)(free_fn_erase_if)) ((lvalue_eraser)(lvalue_eraser_if)(erase_if)(free_fn_erase_if))
((default_generator)(sequential)(limited_range))) ((default_generator)(sequential)(limited_range)))
View File
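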
@@ -1,14 +1,20 @@
// Copyright (C) 2023 Christian Mazakas // Copyright (C) 2023 Christian Mazakas
// Copyright (C) 2023 Joaquin M Lopez Munoz
// Distributed under the Boost Software License, Version 1.0. (See accompanying // Distributed under the Boost Software License, Version 1.0. (See accompanying
// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) // file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
#ifndef BOOST_UNORDERED_TEST_CFOA_EXCEPTION_HELPERS_HPP
#define BOOST_UNORDERED_TEST_CFOA_EXCEPTION_HELPERS_HPP
#include "../helpers/generators.hpp" #include "../helpers/generators.hpp"
#include "../helpers/test.hpp" #include "../helpers/test.hpp"
#include "common_helpers.hpp"
#include <boost/compat/latch.hpp> #include <boost/compat/latch.hpp>
#include <boost/container_hash/hash.hpp> #include <boost/container_hash/hash.hpp>
#include <boost/core/span.hpp> #include <boost/core/span.hpp>
#include <boost/unordered/unordered_flat_map.hpp> #include <boost/unordered/unordered_flat_map.hpp>
#include <boost/unordered/unordered_flat_set.hpp>
#include <algorithm> #include <algorithm>
#include <atomic> #include <atomic>
@@ -308,16 +314,54 @@ std::size_t hash_value(raii const& r) noexcept
return hasher(r.x_); return hasher(r.x_);
} }
struct exception_value_type_generator_type template <typename K>
struct exception_value_generator
{ {
std::pair<raii const, raii> operator()(test::random_generator rg) using value_type = raii;
value_type operator()(test::random_generator rg)
{
int* p = nullptr;
int a = generate(p, rg);
return value_type(a);
}
};
template <typename K, typename V>
struct exception_value_generator<std::pair<K, V> >
{
static constexpr bool const_key = std::is_const<K>::value;
static constexpr bool const_mapped = std::is_const<V>::value;
using value_type = std::pair<
typename std::conditional<const_key, raii const, raii>::type,
typename std::conditional<const_mapped, raii const, raii>::type>;
value_type operator()(test::random_generator rg)
{ {
int* p = nullptr; int* p = nullptr;
int a = generate(p, rg); int a = generate(p, rg);
int b = generate(p, rg); int b = generate(p, rg);
return std::make_pair(raii{a}, raii{b}); return std::make_pair(raii{a}, raii{b});
} }
} exception_value_type_generator; };
struct exception_value_type_generator_factory_type
{
template <typename Container>
exception_value_generator<typename Container::value_type> get()
{
return {};
}
} exception_value_type_generator_factory;
struct exception_init_type_generator_factory_type
{
template <typename Container>
exception_value_generator<typename Container::init_type> get()
{
return {};
}
} exception_init_type_generator_factory;
struct exception_init_type_generator_type struct exception_init_type_generator_type
{ {
@@ -388,29 +432,6 @@ template <class T, class F> void thread_runner(std::vector<T>& values, F f)
} }
} }
template <class X, class Y>
void test_matches_reference(X const& x, Y const& reference_map)
{
using value_type = typename X::value_type;
BOOST_TEST_EQ(x.size(), x.visit_all([&](value_type const& kv) {
BOOST_TEST(reference_map.contains(kv.first));
BOOST_TEST_EQ(kv.second, reference_map.find(kv.first)->second);
}));
}
template <class X, class Y>
void test_fuzzy_matches_reference(
X const& x, Y const& reference_map, test::random_generator rg)
{
using value_type = typename X::value_type;
BOOST_TEST_EQ(x.size(), x.visit_all([&](value_type const& kv) {
BOOST_TEST(reference_map.contains(kv.first));
if (rg == test::sequential) {
BOOST_TEST_EQ(kv.second, reference_map.find(kv.first)->second);
}
}));
}
template <class T> using span_value_type = typename T::value_type; template <class T> using span_value_type = typename T::value_type;
void check_raii_counts() void check_raii_counts()
@@ -442,3 +463,5 @@ auto make_random_values(std::size_t count, F f) -> std::vector<decltype(f())>
} }
return v; return v;
} }
#endif // BOOST_UNORDERED_TEST_CFOA_EXCEPTION_HELPERS_HPP

View File

@@ -1,10 +1,12 @@
// Copyright (C) 2023 Christian Mazakas // Copyright (C) 2023 Christian Mazakas
// Copyright (C) 2023 Joaquin M Lopez Munoz
// Distributed under the Boost Software License, Version 1.0. (See accompanying // Distributed under the Boost Software License, Version 1.0. (See accompanying
// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) // file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
#include "exception_helpers.hpp" #include "exception_helpers.hpp"
#include <boost/unordered/concurrent_flat_map.hpp> #include <boost/unordered/concurrent_flat_map.hpp>
#include <boost/unordered/concurrent_flat_set.hpp>
#include <boost/core/ignore_unused.hpp> #include <boost/core/ignore_unused.hpp>
@@ -84,6 +86,9 @@ namespace {
{ {
template <class T, class X> void operator()(std::vector<T>& values, X& x) template <class T, class X> void operator()(std::vector<T>& values, X& x)
{ {
static constexpr auto value_type_cardinality =
value_cardinality<typename X::value_type>::value;
x.reserve(values.size()); x.reserve(values.size());
BOOST_TEST_EQ(raii::copy_constructor, 0u); BOOST_TEST_EQ(raii::copy_constructor, 0u);
@@ -92,11 +97,19 @@ namespace {
rvalue_inserter_type::operator()(values, x); rvalue_inserter_type::operator()(values, x);
if (std::is_same<T, typename X::value_type>::value) { if (std::is_same<T, typename X::value_type>::value) {
if (std::is_same<typename X::key_type,
typename X::value_type>::value) {
BOOST_TEST_EQ(raii::copy_constructor, 0u);
BOOST_TEST_EQ(raii::move_constructor, x.size());
}
else {
BOOST_TEST_EQ(raii::copy_constructor, x.size()); BOOST_TEST_EQ(raii::copy_constructor, x.size());
BOOST_TEST_EQ(raii::move_constructor, x.size()); BOOST_TEST_EQ(raii::move_constructor, x.size());
}
} else { } else {
BOOST_TEST_EQ(raii::copy_constructor, 0u); BOOST_TEST_EQ(raii::copy_constructor, 0u);
BOOST_TEST_EQ(raii::move_constructor, 2 * x.size()); BOOST_TEST_EQ(
raii::move_constructor, value_type_cardinality * x.size());
} }
} }
} norehash_rvalue_inserter; } norehash_rvalue_inserter;
@@ -246,6 +259,13 @@ namespace {
{ {
template <class T, class X> void operator()(std::vector<T>& values, X& x) template <class T, class X> void operator()(std::vector<T>& values, X& x)
{ {
// concurrent_flat_set visit is always const access
using arg_type = typename std::conditional<
std::is_same<typename X::key_type, typename X::value_type>::value,
typename X::value_type const,
typename X::value_type
>::type;
std::atomic<std::uint64_t> num_inserts{0}; std::atomic<std::uint64_t> num_inserts{0};
enable_exceptions(); enable_exceptions();
@@ -253,7 +273,7 @@ namespace {
for (auto& r : s) { for (auto& r : s) {
try { try {
bool b = bool b =
x.insert_or_visit(r, [](typename X::value_type& v) { (void)v; }); x.insert_or_visit(r, [](arg_type& v) { (void)v; });
if (b) { if (b) {
++num_inserts; ++num_inserts;
@@ -306,6 +326,13 @@ namespace {
{ {
template <class T, class X> void operator()(std::vector<T>& values, X& x) template <class T, class X> void operator()(std::vector<T>& values, X& x)
{ {
// concurrent_flat_set visit is always const access
using arg_type = typename std::conditional<
std::is_same<typename X::key_type, typename X::value_type>::value,
typename X::value_type const,
typename X::value_type
>::type;
std::atomic<std::uint64_t> num_inserts{0}; std::atomic<std::uint64_t> num_inserts{0};
enable_exceptions(); enable_exceptions();
@@ -313,7 +340,7 @@ namespace {
for (auto& r : s) { for (auto& r : s) {
try { try {
bool b = x.insert_or_visit( bool b = x.insert_or_visit(
std::move(r), [](typename X::value_type& v) { (void)v; }); std::move(r), [](arg_type& v) { (void)v; });
if (b) { if (b) {
++num_inserts; ++num_inserts;
@@ -377,14 +404,14 @@ namespace {
} }
} iterator_range_insert_or_visit; } iterator_range_insert_or_visit;
template <class X, class G, class F> template <class X, class GF, class F>
void insert(X*, G gen, F inserter, test::random_generator rg) void insert(X*, GF gen_factory, F inserter, test::random_generator rg)
{ {
disable_exceptions(); disable_exceptions();
auto gen = gen_factory.template get<X>();
auto values = make_random_values(1024 * 16, [&] { return gen(rg); }); auto values = make_random_values(1024 * 16, [&] { return gen(rg); });
auto reference_map = auto reference_cont = reference_container<X>(values.begin(), values.end());
boost::unordered_flat_map<raii, raii>(values.begin(), values.end());
raii::reset_counts(); raii::reset_counts();
{ {
@@ -392,13 +419,15 @@ namespace {
inserter(values, x); inserter(values, x);
test_fuzzy_matches_reference(x, reference_map, rg); test_fuzzy_matches_reference(x, reference_cont, rg);
} }
check_raii_counts(); check_raii_counts();
} }
boost::unordered::concurrent_flat_map<raii, raii, stateful_hash, boost::unordered::concurrent_flat_map<raii, raii, stateful_hash,
stateful_key_equal, stateful_allocator<std::pair<raii const, raii> > >* map; stateful_key_equal, stateful_allocator<std::pair<raii const, raii> > >* map;
boost::unordered::concurrent_flat_set<raii, stateful_hash,
stateful_key_equal, stateful_allocator<raii> >* set;
} // namespace } // namespace
@@ -409,8 +438,9 @@ using test::sequential;
// clang-format off // clang-format off
UNORDERED_TEST( UNORDERED_TEST(
insert, insert,
((map)) ((map)(set))
((exception_value_type_generator)(exception_init_type_generator)) ((exception_value_type_generator_factory)
(exception_init_type_generator_factory))
((lvalue_inserter)(rvalue_inserter)(iterator_range_inserter) ((lvalue_inserter)(rvalue_inserter)(iterator_range_inserter)
(norehash_lvalue_inserter)(norehash_rvalue_inserter) (norehash_lvalue_inserter)(norehash_rvalue_inserter)
(lvalue_insert_or_cvisit)(lvalue_insert_or_visit) (lvalue_insert_or_cvisit)(lvalue_insert_or_visit)
@@ -421,7 +451,7 @@ UNORDERED_TEST(
UNORDERED_TEST( UNORDERED_TEST(
insert, insert,
((map)) ((map))
((exception_init_type_generator)) ((exception_init_type_generator_factory))
((lvalue_insert_or_assign_copy_assign)(lvalue_insert_or_assign_move_assign) ((lvalue_insert_or_assign_copy_assign)(lvalue_insert_or_assign_move_assign)
(rvalue_insert_or_assign_copy_assign)(rvalue_insert_or_assign_move_assign)) (rvalue_insert_or_assign_copy_assign)(rvalue_insert_or_assign_move_assign))
((default_generator)(sequential)(limited_range))) ((default_generator)(sequential)(limited_range)))
View File
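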
@@ -1,29 +1,38 @@
// Copyright (C) 2023 Christian Mazakas // Copyright (C) 2023 Christian Mazakas
// Copyright (C) 2023 Joaquin M Lopez Munoz
// Distributed under the Boost Software License, Version 1.0. (See accompanying // Distributed under the Boost Software License, Version 1.0. (See accompanying
// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) // file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
#include "exception_helpers.hpp" #include "exception_helpers.hpp"
#include <boost/unordered/concurrent_flat_map.hpp> #include <boost/unordered/concurrent_flat_map.hpp>
#include <boost/unordered/concurrent_flat_set.hpp>
#include <boost/core/ignore_unused.hpp> #include <boost/core/ignore_unused.hpp>
using allocator_type = stateful_allocator<std::pair<raii const, raii> >;
using hasher = stateful_hash; using hasher = stateful_hash;
using key_equal = stateful_key_equal; using key_equal = stateful_key_equal;
using map_type = boost::unordered::concurrent_flat_map<raii, raii, hasher, using map_type = boost::unordered::concurrent_flat_map<raii, raii, hasher,
key_equal, allocator_type>; key_equal, stateful_allocator<std::pair<raii const, raii> > >;
using set_type = boost::unordered::concurrent_flat_set<raii, hasher,
key_equal, stateful_allocator<raii> >;
map_type* test_map;
set_type* test_set;
namespace { namespace {
test::seed_t initialize_seed(223333016); test::seed_t initialize_seed(223333016);
template <class G> void merge(G gen, test::random_generator rg) template <class X, class GF>
void merge(X*, GF gen_factory, test::random_generator rg)
{ {
using allocator_type = typename X::allocator_type;
auto gen = gen_factory.template get<X>();
auto values = make_random_values(1024 * 16, [&] { return gen(rg); }); auto values = make_random_values(1024 * 16, [&] { return gen(rg); });
auto reference_map = auto reference_cont = reference_container<X>(values.begin(), values.end());
boost::unordered_flat_map<raii, raii>(values.begin(), values.end());
raii::reset_counts(); raii::reset_counts();
@@ -37,10 +46,10 @@ namespace {
for (unsigned i = 0; i < 5 * alloc_throw_threshold; ++i) { for (unsigned i = 0; i < 5 * alloc_throw_threshold; ++i) {
disable_exceptions(); disable_exceptions();
map_type x1(0, hasher(1), key_equal(2), allocator_type(3)); X x1(0, hasher(1), key_equal(2), allocator_type(3));
x1.insert(begin, mid); x1.insert(begin, mid);
map_type x2(0, hasher(2), key_equal(1), allocator_type(3)); X x2(0, hasher(2), key_equal(1), allocator_type(3));
x2.insert(mid, end); x2.insert(mid, end);
enable_exceptions(); enable_exceptions();
@@ -51,8 +60,8 @@ namespace {
} }
disable_exceptions(); disable_exceptions();
test_fuzzy_matches_reference(x1, reference_map, rg); test_fuzzy_matches_reference(x1, reference_cont, rg);
test_fuzzy_matches_reference(x2, reference_map, rg); test_fuzzy_matches_reference(x2, reference_cont, rg);
} }
BOOST_TEST_GT(num_throws, 0u); BOOST_TEST_GT(num_throws, 0u);
@@ -70,7 +79,8 @@ using test::sequential;
// clang-format off // clang-format off
UNORDERED_TEST( UNORDERED_TEST(
merge, merge,
((exception_value_type_generator)) ((test_map)(test_set))
((exception_value_type_generator_factory))
((default_generator)(sequential)(limited_range))) ((default_generator)(sequential)(limited_range)))
// clang-format on // clang-format on
View File
@@ -1,10 +1,12 @@
// Copyright (C) 2023 Christian Mazakas // Copyright (C) 2023 Christian Mazakas
// Copyright (C) 2023 Joaquin M Lopez Munoz
// Distributed under the Boost Software License, Version 1.0. (See accompanying // Distributed under the Boost Software License, Version 1.0. (See accompanying
// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) // file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
#include "helpers.hpp" #include "helpers.hpp"
#include <boost/config/workaround.hpp> #include <boost/config/workaround.hpp>
#include <boost/unordered/concurrent_flat_map_fwd.hpp> #include <boost/unordered/concurrent_flat_map_fwd.hpp>
#include <boost/unordered/concurrent_flat_set_fwd.hpp>
#include <limits> #include <limits>
test::seed_t initialize_seed{32304628}; test::seed_t initialize_seed{32304628};
@@ -34,37 +36,89 @@ bool unequal_call(boost::unordered::concurrent_flat_map<T, T>& x1,
return x1 != x2; return x1 != x2;
} }
template <class T>
void swap_call(boost::unordered::concurrent_flat_set<T>& x1,
boost::unordered::concurrent_flat_set<T>& x2)
{
swap(x1, x2);
}
template <class T>
bool equal_call(boost::unordered::concurrent_flat_set<T>& x1,
boost::unordered::concurrent_flat_set<T>& x2)
{
return x1 == x2;
}
template <class T>
bool unequal_call(boost::unordered::concurrent_flat_set<T>& x1,
boost::unordered::concurrent_flat_set<T>& x2)
{
return x1 != x2;
}
#include <boost/unordered/concurrent_flat_map.hpp> #include <boost/unordered/concurrent_flat_map.hpp>
#include <boost/unordered/concurrent_flat_set.hpp>
using map_type = boost::unordered::concurrent_flat_map<int, int>; using map_type = boost::unordered::concurrent_flat_map<int, int>;
using set_type = boost::unordered::concurrent_flat_set<int>;
map_type* test_map;
set_type* test_set;
template <typename X>
void fwd_swap_call(X*)
{
#if !defined(BOOST_CLANG_VERSION) || \ #if !defined(BOOST_CLANG_VERSION) || \
BOOST_WORKAROUND(BOOST_CLANG_VERSION, < 30700) || \ BOOST_WORKAROUND(BOOST_CLANG_VERSION, < 30700) || \
BOOST_WORKAROUND(BOOST_CLANG_VERSION, >= 30800) BOOST_WORKAROUND(BOOST_CLANG_VERSION, >= 30800)
// clang-3.7 seems to have a codegen bug here so we workaround it // clang-3.7 seems to have a codegen bug here so we workaround it
UNORDERED_AUTO_TEST (fwd_swap_call) {
map_type x1, x2; X x1, x2;
swap_call(x1, x2); swap_call(x1, x2);
#endif
} }
#endif template <typename X>
void fwd_equal_call(X*)
UNORDERED_AUTO_TEST (fwd_equal_call) { {
map_type x1, x2; X x1, x2;
BOOST_TEST(equal_call(x1, x2)); BOOST_TEST(equal_call(x1, x2));
} }
UNORDERED_AUTO_TEST (fwd_unequal_call) { template <typename X>
map_type x1, x2; void fwd_unequal_call(X*)
{
X x1, x2;
BOOST_TEST_NOT(unequal_call(x1, x2)); BOOST_TEST_NOT(unequal_call(x1, x2));
} }
// this isn't the best place for this test but it's better than introducing a // this isn't the best place for this test but it's better than introducing a
// new file // new file
UNORDERED_AUTO_TEST (max_size) { template <typename X>
map_type x1; void max_size(X*)
{
X x1;
BOOST_TEST_EQ( BOOST_TEST_EQ(
x1.max_size(), std::numeric_limits<typename map_type::size_type>::max()); x1.max_size(), std::numeric_limits<typename X::size_type>::max());
} }
// clang-format off
UNORDERED_TEST(
fwd_swap_call,
((test_map)(test_set)))
UNORDERED_TEST(
fwd_equal_call,
((test_map)(test_set)))
UNORDERED_TEST(
fwd_unequal_call,
((test_map)(test_set)))
UNORDERED_TEST(
max_size,
((test_map)(test_set)))
// clang-format on
RUN_TESTS() RUN_TESTS()
View File
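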
@@ -1,4 +1,5 @@
// Copyright (C) 2023 Christian Mazakas // Copyright (C) 2023 Christian Mazakas
// Copyright (C) 2023 Joaquin M Lopez Munoz
// Distributed under the Boost Software License, Version 1.0. (See accompanying // Distributed under the Boost Software License, Version 1.0. (See accompanying
// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) // file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
@@ -7,11 +8,15 @@
#include "../helpers/generators.hpp" #include "../helpers/generators.hpp"
#include "../helpers/test.hpp" #include "../helpers/test.hpp"
#include "common_helpers.hpp"
#include <boost/compat/latch.hpp> #include <boost/compat/latch.hpp>
#include <boost/container_hash/hash.hpp> #include <boost/container_hash/hash.hpp>
#include <boost/core/span.hpp> #include <boost/core/span.hpp>
#include <boost/unordered/concurrent_flat_map_fwd.hpp>
#include <boost/unordered/concurrent_flat_set_fwd.hpp>
#include <boost/unordered/unordered_flat_map.hpp> #include <boost/unordered/unordered_flat_map.hpp>
#include <boost/unordered/unordered_flat_set.hpp>
#include <algorithm> #include <algorithm>
#include <atomic> #include <atomic>
@@ -328,27 +333,48 @@ auto make_random_values(std::size_t count, F f) -> std::vector<decltype(f())>
return v; return v;
} }
struct value_type_generator_type template <typename K>
struct value_generator
{ {
std::pair<raii const, raii> operator()(test::random_generator rg) using value_type = raii;
{
int* p = nullptr;
int a = generate(p, rg);
int b = generate(p, rg);
return std::make_pair(raii{a}, raii{b});
}
} value_type_generator;
struct init_type_generator_type value_type operator()(test::random_generator rg)
{ {
std::pair<raii, raii> operator()(test::random_generator rg) int* p = nullptr;
int a = generate(p, rg);
return value_type(a);
}
};
template <typename K, typename V>
struct value_generator<std::pair<K, V> >
{
static constexpr bool const_key = std::is_const<K>::value;
static constexpr bool const_mapped = std::is_const<V>::value;
using value_type = std::pair<
typename std::conditional<const_key, raii const, raii>::type,
typename std::conditional<const_mapped, raii const, raii>::type>;
value_type operator()(test::random_generator rg)
{ {
int* p = nullptr; int* p = nullptr;
int a = generate(p, rg); int a = generate(p, rg);
int b = generate(p, rg); int b = generate(p, rg);
return std::make_pair(raii{a}, raii{b}); return std::make_pair(raii{a}, raii{b});
} }
} init_type_generator; };
struct value_type_generator_factory_type
{
template <typename Container>
value_generator<typename Container::value_type> get() { return {}; }
} value_type_generator_factory;
struct init_type_generator_factory_type
{
template <typename Container>
value_generator<typename Container::init_type> get() { return {}; }
} init_type_generator_factory;
template <class T> template <class T>
std::vector<boost::span<T> > split( std::vector<boost::span<T> > split(
@@ -408,29 +434,6 @@ template <class T, class F> void thread_runner(std::vector<T>& values, F f)
} }
} }
template <class X, class Y>
void test_matches_reference(X const& x, Y const& reference_map)
{
using value_type = typename X::value_type;
BOOST_TEST_EQ(x.size(), x.visit_all([&](value_type const& kv) {
BOOST_TEST(reference_map.contains(kv.first));
BOOST_TEST_EQ(kv.second, reference_map.find(kv.first)->second);
}));
}
template <class X, class Y>
void test_fuzzy_matches_reference(
X const& x, Y const& reference_map, test::random_generator rg)
{
using value_type = typename X::value_type;
BOOST_TEST_EQ(x.size(), x.visit_all([&](value_type const& kv) {
BOOST_TEST(reference_map.contains(kv.first));
if (rg == test::sequential) {
BOOST_TEST_EQ(kv.second, reference_map.find(kv.first)->second);
}
}));
}
template <class T> using span_value_type = typename T::value_type; template <class T> using span_value_type = typename T::value_type;
void check_raii_counts() void check_raii_counts()

View File
 // Copyright (C) 2023 Christian Mazakas
+// Copyright (C) 2023 Joaquin M Lopez Munoz
 // Distributed under the Boost Software License, Version 1.0. (See accompanying
 // file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
 
 #include "helpers.hpp"
 
+#include <boost/config.hpp>
 #include <boost/unordered/concurrent_flat_map.hpp>
+#include <boost/unordered/concurrent_flat_set.hpp>
 #include <boost/core/ignore_unused.hpp>
 
+#if defined(BOOST_MSVC)
+#pragma warning(disable : 4127) // conditional expression is constant
+#endif
 
 struct raii_convertible
 {
-  int x, y;
+  int x = 0, y = 0;
 
-  raii_convertible(int x_, int y_) : x{x_}, y{y_} {}
+  template <typename T>
+  raii_convertible(T const& t) : x{t.x_} {}
+
+  template <typename T, typename Q>
+  raii_convertible(std::pair<T, Q> const& p) : x{p.first.x_}, y{p.second.x_}
+  {}
 
+  operator raii() { return {x}; }
   operator std::pair<raii const, raii>() { return {x, y}; }
 };
@@ -23,6 +37,9 @@ namespace {
   {
     template <class T, class X> void operator()(std::vector<T>& values, X& x)
     {
+      static constexpr auto value_type_cardinality =
+        value_cardinality<typename X::value_type>::value;
       std::atomic<std::uint64_t> num_inserts{0};
       thread_runner(values, [&x, &num_inserts](boost::span<T> s) {
         for (auto const& r : s) {
@@ -33,7 +50,8 @@ namespace {
         }
       });
       BOOST_TEST_EQ(num_inserts, x.size());
-      BOOST_TEST_EQ(raii::copy_constructor, 2 * x.size());
+      BOOST_TEST_EQ(
+        raii::copy_constructor, value_type_cardinality * x.size());
       BOOST_TEST_EQ(raii::copy_assignment, 0u);
       BOOST_TEST_EQ(raii::move_assignment, 0u);
     }
@@ -43,9 +61,13 @@ namespace {
   {
     template <class T, class X> void operator()(std::vector<T>& values, X& x)
     {
+      static constexpr auto value_type_cardinality =
+        value_cardinality<typename X::value_type>::value;
       x.reserve(values.size());
       lvalue_inserter_type::operator()(values, x);
-      BOOST_TEST_EQ(raii::copy_constructor, 2 * x.size());
+      BOOST_TEST_EQ(
+        raii::copy_constructor, value_type_cardinality * x.size());
       BOOST_TEST_EQ(raii::move_constructor, 0u);
     }
   } norehash_lvalue_inserter;
@@ -67,7 +89,8 @@ namespace {
       });
       BOOST_TEST_EQ(num_inserts, x.size());
-      if (std::is_same<T, typename X::value_type>::value) {
+      if (std::is_same<T, typename X::value_type>::value &&
+          !std::is_same<typename X::key_type, typename X::value_type>::value) {
         BOOST_TEST_EQ(raii::copy_constructor, x.size());
       } else {
         BOOST_TEST_EQ(raii::copy_constructor, 0u);
@@ -82,6 +105,9 @@ namespace {
   {
     template <class T, class X> void operator()(std::vector<T>& values, X& x)
     {
+      static constexpr auto value_type_cardinality =
+        value_cardinality<typename X::value_type>::value;
       x.reserve(values.size());
       BOOST_TEST_EQ(raii::copy_constructor, 0u);
@@ -90,11 +116,19 @@ namespace {
       rvalue_inserter_type::operator()(values, x);
       if (std::is_same<T, typename X::value_type>::value) {
+        if (std::is_same<typename X::key_type,
+              typename X::value_type>::value) {
+          BOOST_TEST_EQ(raii::copy_constructor, 0u);
+          BOOST_TEST_EQ(raii::move_constructor, x.size());
+        }
+        else {
           BOOST_TEST_EQ(raii::copy_constructor, x.size());
           BOOST_TEST_EQ(raii::move_constructor, x.size());
+        }
       } else {
         BOOST_TEST_EQ(raii::copy_constructor, 0u);
-        BOOST_TEST_EQ(raii::move_constructor, 2 * x.size());
+        BOOST_TEST_EQ(
+          raii::move_constructor, value_type_cardinality * x.size());
       }
     }
   } norehash_rvalue_inserter;
@@ -103,17 +137,21 @@ namespace {
   {
     template <class T, class X> void operator()(std::vector<T>& values, X& x)
     {
+      static constexpr auto value_type_cardinality =
+        value_cardinality<typename X::value_type>::value;
       std::vector<raii_convertible> values2;
       values2.reserve(values.size());
-      for (auto const& p : values) {
-        values2.push_back(raii_convertible(p.first.x_, p.second.x_));
+      for (auto const& v : values) {
+        values2.push_back(raii_convertible(v));
       }
       thread_runner(values2, [&x](boost::span<raii_convertible> s) {
         x.insert(s.begin(), s.end());
       });
-      BOOST_TEST_EQ(raii::default_constructor, 2 * values2.size());
+      BOOST_TEST_EQ(
+        raii::default_constructor, value_type_cardinality * values2.size());
 #if BOOST_WORKAROUND(BOOST_GCC_VERSION, >= 50300) && \
   BOOST_WORKAROUND(BOOST_GCC_VERSION, < 50500)
       // some versions of old gcc have trouble eliding copies here
@@ -253,6 +291,9 @@ namespace {
   {
     template <class T, class X> void operator()(std::vector<T>& values, X& x)
     {
+      static constexpr auto value_type_cardinality =
+        value_cardinality<typename X::value_type>::value;
       std::atomic<std::uint64_t> num_inserts{0};
       std::atomic<std::uint64_t> num_invokes{0};
       thread_runner(values, [&x, &num_inserts, &num_invokes](boost::span<T> s) {
@@ -273,7 +314,8 @@ namespace {
       BOOST_TEST_EQ(num_invokes, values.size() - x.size());
       BOOST_TEST_EQ(raii::default_constructor, 0u);
-      BOOST_TEST_EQ(raii::copy_constructor, 2 * x.size());
+      BOOST_TEST_EQ(
+        raii::copy_constructor, value_type_cardinality * x.size());
       // don't check move construction count here because of rehashing
       BOOST_TEST_GT(raii::move_constructor, 0u);
       BOOST_TEST_EQ(raii::move_assignment, 0u);
@@ -284,12 +326,22 @@ namespace {
   {
     template <class T, class X> void operator()(std::vector<T>& values, X& x)
     {
+      static constexpr auto value_type_cardinality =
+        value_cardinality<typename X::value_type>::value;
+      // concurrent_flat_set visit is always const access
+      using arg_type = typename std::conditional<
+        std::is_same<typename X::key_type, typename X::value_type>::value,
+        typename X::value_type const,
+        typename X::value_type
+      >::type;
       std::atomic<std::uint64_t> num_inserts{0};
       std::atomic<std::uint64_t> num_invokes{0};
       thread_runner(values, [&x, &num_inserts, &num_invokes](boost::span<T> s) {
         for (auto& r : s) {
           bool b =
-            x.insert_or_visit(r, [&num_invokes](typename X::value_type& v) {
+            x.insert_or_visit(r, [&num_invokes](arg_type& v) {
               (void)v;
               ++num_invokes;
             });
@@ -304,7 +356,7 @@ namespace {
       BOOST_TEST_EQ(num_invokes, values.size() - x.size());
       BOOST_TEST_EQ(raii::default_constructor, 0u);
-      BOOST_TEST_EQ(raii::copy_constructor, 2 * x.size());
+      BOOST_TEST_EQ(raii::copy_constructor, value_type_cardinality * x.size());
       // don't check move construction count here because of rehashing
       BOOST_TEST_GT(raii::move_constructor, 0u);
       BOOST_TEST_EQ(raii::move_assignment, 0u);
@@ -315,6 +367,9 @@ namespace {
   {
     template <class T, class X> void operator()(std::vector<T>& values, X& x)
     {
+      static constexpr auto value_type_cardinality =
+        value_cardinality<typename X::value_type>::value;
       std::atomic<std::uint64_t> num_inserts{0};
       std::atomic<std::uint64_t> num_invokes{0};
       thread_runner(values, [&x, &num_inserts, &num_invokes](boost::span<T> s) {
@@ -337,11 +392,19 @@ namespace {
       BOOST_TEST_EQ(raii::default_constructor, 0u);
       if (std::is_same<T, typename X::value_type>::value) {
+        if (std::is_same<typename X::key_type,
+              typename X::value_type>::value) {
+          BOOST_TEST_EQ(raii::copy_constructor, 0u);
+          BOOST_TEST_GE(raii::move_constructor, x.size());
+        }
+        else {
          BOOST_TEST_EQ(raii::copy_constructor, x.size());
          BOOST_TEST_GE(raii::move_constructor, x.size());
+        }
       } else {
         BOOST_TEST_EQ(raii::copy_constructor, 0u);
-        BOOST_TEST_GE(raii::move_constructor, 2 * x.size());
+        BOOST_TEST_GE(
+          raii::move_constructor, value_type_cardinality * x.size());
       }
     }
   } rvalue_insert_or_cvisit;
@@ -350,12 +413,22 @@ namespace {
   {
     template <class T, class X> void operator()(std::vector<T>& values, X& x)
     {
+      static constexpr auto value_type_cardinality =
+        value_cardinality<typename X::value_type>::value;
+      // concurrent_flat_set visit is always const access
+      using arg_type = typename std::conditional<
+        std::is_same<typename X::key_type, typename X::value_type>::value,
+        typename X::value_type const,
+        typename X::value_type
+      >::type;
       std::atomic<std::uint64_t> num_inserts{0};
       std::atomic<std::uint64_t> num_invokes{0};
       thread_runner(values, [&x, &num_inserts, &num_invokes](boost::span<T> s) {
         for (auto& r : s) {
           bool b = x.insert_or_visit(
-            std::move(r), [&num_invokes](typename X::value_type& v) {
+            std::move(r), [&num_invokes](arg_type& v) {
               (void)v;
               ++num_invokes;
             });
@@ -371,11 +444,19 @@ namespace {
       BOOST_TEST_EQ(raii::default_constructor, 0u);
       if (std::is_same<T, typename X::value_type>::value) {
+        if (std::is_same<typename X::key_type,
+              typename X::value_type>::value) {
+          BOOST_TEST_EQ(raii::copy_constructor, 0u);
+          BOOST_TEST_GE(raii::move_constructor, x.size());
+        }
+        else {
          BOOST_TEST_EQ(raii::copy_constructor, x.size());
          BOOST_TEST_GE(raii::move_constructor, x.size());
+        }
       } else {
         BOOST_TEST_EQ(raii::copy_constructor, 0u);
-        BOOST_TEST_GE(raii::move_constructor, 2 * x.size());
+        BOOST_TEST_GE(
+          raii::move_constructor, value_type_cardinality * x.size());
       }
     }
   } rvalue_insert_or_visit;
@@ -384,10 +465,13 @@ namespace {
   {
     template <class T, class X> void operator()(std::vector<T>& values, X& x)
     {
+      static constexpr auto value_type_cardinality =
+        value_cardinality<typename X::value_type>::value;
       std::vector<raii_convertible> values2;
       values2.reserve(values.size());
-      for (auto const& p : values) {
-        values2.push_back(raii_convertible(p.first.x_, p.second.x_));
+      for (auto const& v : values) {
+        values2.push_back(raii_convertible(v));
       }
       std::atomic<std::uint64_t> num_invokes{0};
@@ -402,7 +486,8 @@ namespace {
       BOOST_TEST_EQ(num_invokes, values.size() - x.size());
-      BOOST_TEST_EQ(raii::default_constructor, 2 * values2.size());
+      BOOST_TEST_EQ(
+        raii::default_constructor, value_type_cardinality * values2.size());
 #if BOOST_WORKAROUND(BOOST_GCC_VERSION, >= 50300) && \
   BOOST_WORKAROUND(BOOST_GCC_VERSION, < 50500)
       // skip test
@@ -417,10 +502,13 @@ namespace {
   {
     template <class T, class X> void operator()(std::vector<T>& values, X& x)
     {
+      static constexpr auto value_type_cardinality =
+        value_cardinality<typename X::value_type>::value;
       std::vector<raii_convertible> values2;
       values2.reserve(values.size());
-      for (auto const& p : values) {
-        values2.push_back(raii_convertible(p.first.x_, p.second.x_));
+      for (auto const& v : values) {
+        values2.push_back(raii_convertible(v));
       }
       std::atomic<std::uint64_t> num_invokes{0};
@@ -435,7 +523,8 @@ namespace {
       BOOST_TEST_EQ(num_invokes, values.size() - x.size());
-      BOOST_TEST_EQ(raii::default_constructor, 2 * values2.size());
+      BOOST_TEST_EQ(
+        raii::default_constructor, value_type_cardinality * values2.size());
 #if BOOST_WORKAROUND(BOOST_GCC_VERSION, >= 50300) && \
   BOOST_WORKAROUND(BOOST_GCC_VERSION, < 50500)
       // skip test
@@ -446,12 +535,12 @@ namespace {
     }
   } iterator_range_insert_or_visit;
-  template <class X, class G, class F>
-  void insert(X*, G gen, F inserter, test::random_generator rg)
+  template <class X, class GF, class F>
+  void insert(X*, GF gen_factory, F inserter, test::random_generator rg)
   {
+    auto gen = gen_factory.template get<X>();
     auto values = make_random_values(1024 * 16, [&] { return gen(rg); });
-    auto reference_map =
-      boost::unordered_flat_map<raii, raii>(values.begin(), values.end());
+    auto reference_cont = reference_container<X>(values.begin(), values.end());
     raii::reset_counts();
     {
@@ -459,13 +548,13 @@ namespace {
       inserter(values, x);
-      BOOST_TEST_EQ(x.size(), reference_map.size());
+      BOOST_TEST_EQ(x.size(), reference_cont.size());
       using value_type = typename X::value_type;
-      BOOST_TEST_EQ(x.size(), x.visit_all([&](value_type const& kv) {
-        BOOST_TEST(reference_map.contains(kv.first));
+      BOOST_TEST_EQ(x.size(), x.visit_all([&](value_type const& v) {
+        BOOST_TEST(reference_cont.contains(get_key(v)));
         if (rg == test::sequential) {
-          BOOST_TEST_EQ(kv.second, reference_map[kv.first]);
+          BOOST_TEST_EQ(v, *reference_cont.find(get_key(v)));
         }
       }));
     }
@@ -480,39 +569,21 @@ namespace {
         raii::destructor);
     }
-  template <class X> void insert_initializer_list(X*)
+  template <class X, class IL>
+  void insert_initializer_list(std::pair<X*, IL> p)
   {
     using value_type = typename X::value_type;
+    // concurrent_flat_set visit is always const access
+    using arg_type = typename std::conditional<
+      std::is_same<typename X::key_type, typename X::value_type>::value,
+      typename X::value_type const,
+      typename X::value_type
+    >::type;
-    std::initializer_list<value_type> values{
-      value_type{raii{0}, raii{0}},
-      value_type{raii{1}, raii{1}},
-      value_type{raii{2}, raii{2}},
-      value_type{raii{3}, raii{3}},
-      value_type{raii{4}, raii{4}},
-      value_type{raii{5}, raii{5}},
-      value_type{raii{6}, raii{6}},
-      value_type{raii{6}, raii{6}},
-      value_type{raii{7}, raii{7}},
-      value_type{raii{8}, raii{8}},
-      value_type{raii{9}, raii{9}},
-      value_type{raii{10}, raii{10}},
-      value_type{raii{9}, raii{9}},
-      value_type{raii{8}, raii{8}},
-      value_type{raii{7}, raii{7}},
-      value_type{raii{6}, raii{6}},
-      value_type{raii{5}, raii{5}},
-      value_type{raii{4}, raii{4}},
-      value_type{raii{3}, raii{3}},
-      value_type{raii{2}, raii{2}},
-      value_type{raii{1}, raii{1}},
-      value_type{raii{0}, raii{0}},
-    };
+    auto init_list = p.second;
     std::vector<raii> dummy;
-    auto reference_map =
-      boost::unordered_flat_map<raii, raii>(values.begin(), values.end());
+    auto reference_cont = reference_container<X>(
+      init_list.begin(), init_list.end());
     raii::reset_counts();
     {
@@ -520,13 +591,13 @@ namespace {
       X x;
       thread_runner(
-        dummy, [&x, &values](boost::span<raii>) { x.insert(values); });
+        dummy, [&x, &init_list](boost::span<raii>) { x.insert(init_list); });
-      BOOST_TEST_EQ(x.size(), reference_map.size());
+      BOOST_TEST_EQ(x.size(), reference_cont.size());
-      BOOST_TEST_EQ(x.size(), x.visit_all([&](value_type const& kv) {
-        BOOST_TEST(reference_map.contains(kv.first));
-        BOOST_TEST_EQ(kv.second, reference_map[kv.first]);
+      BOOST_TEST_EQ(x.size(), x.visit_all([&](value_type const& v) {
+        BOOST_TEST(reference_cont.contains(get_key(v)));
+        BOOST_TEST_EQ(v, *reference_cont.find(get_key(v)));
       }));
     }
@@ -549,27 +620,27 @@ namespace {
       X x;
-      thread_runner(dummy, [&x, &values, &num_invokes](boost::span<raii>) {
-        x.insert_or_visit(values, [&num_invokes](typename X::value_type& v) {
+      thread_runner(dummy, [&x, &init_list, &num_invokes](boost::span<raii>) {
+        x.insert_or_visit(init_list, [&num_invokes](arg_type& v) {
           (void)v;
           ++num_invokes;
         });
         x.insert_or_cvisit(
-          values, [&num_invokes](typename X::value_type const& v) {
+          init_list, [&num_invokes](typename X::value_type const& v) {
            (void)v;
            ++num_invokes;
          });
       });
-      BOOST_TEST_EQ(num_invokes, (values.size() - x.size()) +
-                                   (num_threads - 1) * values.size() +
-                                   num_threads * values.size());
+      BOOST_TEST_EQ(num_invokes, (init_list.size() - x.size()) +
+                                   (num_threads - 1) * init_list.size() +
+                                   num_threads * init_list.size());
-      BOOST_TEST_EQ(x.size(), reference_map.size());
+      BOOST_TEST_EQ(x.size(), reference_cont.size());
-      BOOST_TEST_EQ(x.size(), x.visit_all([&](value_type const& kv) {
-        BOOST_TEST(reference_map.contains(kv.first));
-        BOOST_TEST_EQ(kv.second, reference_map[kv.first]);
+      BOOST_TEST_EQ(x.size(), x.visit_all([&](value_type const& v) {
+        BOOST_TEST(reference_cont.contains(get_key(v)));
+        BOOST_TEST_EQ(v, *reference_cont.find(get_key(v)));
       }));
     }
@@ -606,6 +677,64 @@ namespace {
     std::equal_to<raii>, fancy_allocator<std::pair<raii const, raii> > >*
     fancy_map;
+  boost::unordered::concurrent_flat_set<raii>* set;
+  boost::unordered::concurrent_flat_set<raii, boost::hash<raii>,
+    std::equal_to<raii>, fancy_allocator<std::pair<raii const, raii> > >*
+    fancy_set;
+
+  std::initializer_list<std::pair<raii const, raii> > map_init_list{
+    {raii{0}, raii{0}},
+    {raii{1}, raii{1}},
+    {raii{2}, raii{2}},
+    {raii{3}, raii{3}},
+    {raii{4}, raii{4}},
+    {raii{5}, raii{5}},
+    {raii{6}, raii{6}},
+    {raii{6}, raii{6}},
+    {raii{7}, raii{7}},
+    {raii{8}, raii{8}},
+    {raii{9}, raii{9}},
+    {raii{10}, raii{10}},
+    {raii{9}, raii{9}},
+    {raii{8}, raii{8}},
+    {raii{7}, raii{7}},
+    {raii{6}, raii{6}},
+    {raii{5}, raii{5}},
+    {raii{4}, raii{4}},
+    {raii{3}, raii{3}},
+    {raii{2}, raii{2}},
+    {raii{1}, raii{1}},
+    {raii{0}, raii{0}},
+  };
+
+  std::initializer_list<raii> set_init_list{
+    raii{0},
+    raii{1},
+    raii{2},
+    raii{3},
+    raii{4},
+    raii{5},
+    raii{6},
+    raii{6},
+    raii{7},
+    raii{8},
+    raii{9},
+    raii{10},
+    raii{9},
+    raii{8},
+    raii{7},
+    raii{6},
+    raii{5},
+    raii{4},
+    raii{3},
+    raii{2},
+    raii{1},
+    raii{0},
+  };
+
+  auto map_and_init_list=std::make_pair(map,map_init_list);
+  auto set_and_init_list=std::make_pair(set,set_init_list);
 } // namespace
 using test::default_generator;
@@ -615,12 +744,12 @@ using test::sequential;
 // clang-format off
 UNORDERED_TEST(
   insert_initializer_list,
-  ((map)))
+  ((map_and_init_list)(set_and_init_list)))
 UNORDERED_TEST(
   insert,
-  ((map)(fancy_map))
-  ((value_type_generator)(init_type_generator))
+  ((map)(fancy_map)(set)(fancy_set))
+  ((value_type_generator_factory)(init_type_generator_factory))
   ((lvalue_inserter)(rvalue_inserter)(iterator_range_inserter)
   (norehash_lvalue_inserter)(norehash_rvalue_inserter)
   (lvalue_insert_or_cvisit)(lvalue_insert_or_visit)
@@ -631,7 +760,7 @@ UNORDERED_TEST(
 UNORDERED_TEST(
   insert,
   ((map))
-  ((init_type_generator))
+  ((init_type_generator_factory))
   ((lvalue_insert_or_assign_copy_assign)(lvalue_insert_or_assign_move_assign)
   (rvalue_insert_or_assign_copy_assign)(rvalue_insert_or_assign_move_assign))
   ((default_generator)(sequential)(limited_range)))
@@ -639,7 +768,7 @@ UNORDERED_TEST(
 UNORDERED_TEST(
   insert,
   ((trans_map))
-  ((init_type_generator))
+  ((init_type_generator_factory))
   ((trans_insert_or_assign_copy_assign)(trans_insert_or_assign_move_assign))
   ((default_generator)(sequential)(limited_range)))
 // clang-format on

View File

@@ -1,10 +1,12 @@
 // Copyright (C) 2023 Christian Mazakas
+// Copyright (C) 2023 Joaquin M Lopez Munoz
 // Distributed under the Boost Software License, Version 1.0. (See accompanying
 // file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
 
 #include "helpers.hpp"
 
 #include <boost/unordered/concurrent_flat_map.hpp>
+#include <boost/unordered/concurrent_flat_set.hpp>
 
 test::seed_t initialize_seed{402031699};
@@ -14,12 +16,25 @@ using test::sequential;
 using hasher = stateful_hash;
 using key_equal = stateful_key_equal;
-using allocator_type = stateful_allocator<std::pair<raii const, raii> >;
-using map_type = boost::unordered::concurrent_flat_map<raii, raii, hasher,
-  key_equal, allocator_type>;
+using map_type = boost::unordered::concurrent_flat_map<raii, raii,
+  hasher, key_equal, stateful_allocator<std::pair<raii const, raii> > >;
+using map2_type = boost::unordered::concurrent_flat_map<raii, raii,
+  std::hash<raii>, std::equal_to<raii>,
+  stateful_allocator<std::pair<raii const, raii> > >;
-using map_value_type = typename map_type::value_type;
+using set_type = boost::unordered::concurrent_flat_set<raii, hasher,
+  key_equal, stateful_allocator<raii> >;
+using set2_type = boost::unordered::concurrent_flat_set<raii, std::hash<raii>,
+  std::equal_to<raii>, stateful_allocator<raii> >;
+
+map_type* test_map;
+map2_type* test_map2;
+auto test_maps=std::make_pair(test_map,test_map2);
+set_type* test_set;
+set2_type* test_set2;
+auto test_sets=std::make_pair(test_set,test_set2);
 struct
 {
@@ -40,18 +55,23 @@ struct
 } rvalue_merge;
 
 namespace {
-  template <class F, class G>
-  void merge_tests(F merger, G gen, test::random_generator rg)
+  template <typename X, typename Y, class F, class GF>
+  void merge_tests(
+    std::pair<X*, Y*>, F merger, GF gen_factory, test::random_generator rg)
   {
-    auto values = make_random_values(1024 * 8, [&] { return gen(rg); });
+    using value_type = typename X::value_type;
+    static constexpr auto value_type_cardinality =
+      value_cardinality<value_type>::value;
+    using allocator_type = typename X::allocator_type;
-    auto ref_map =
-      boost::unordered_flat_map<raii, raii>(values.begin(), values.end());
+    auto gen = gen_factory.template get<X>();
+    auto values = make_random_values(1024 * 8, [&] { return gen(rg); });
+    auto reference_cont = reference_container<X>(values.begin(), values.end());
     {
       raii::reset_counts();
-      map_type x(values.size(), hasher(1), key_equal(2), allocator_type(3));
+      X x(values.size(), hasher(1), key_equal(2), allocator_type(3));
       auto const old_cc = +raii::copy_constructor;
@@ -59,48 +79,50 @@ namespace {
       std::atomic<unsigned long long> num_merged{0};
       thread_runner(values, [&x, &expected_copies, &num_merged, merger](
-                              boost::span<map_value_type> s) {
-        using map2_type = boost::unordered::concurrent_flat_map<raii, raii,
-          std::hash<raii>, std::equal_to<raii>, allocator_type>;
-        map2_type y(s.size(), allocator_type(3));
+                              boost::span<value_type> s) {
+        Y y(s.size(), allocator_type(3));
         for (auto const& v : s) {
          y.insert(v);
        }
-        expected_copies += 2 * y.size();
+        expected_copies += value_type_cardinality * y.size();
        BOOST_TEST(x.get_allocator() == y.get_allocator());
        num_merged += merger(x, y);
      });
      BOOST_TEST_EQ(raii::copy_constructor, old_cc + expected_copies);
-      BOOST_TEST_EQ(raii::move_constructor, 2 * ref_map.size());
-      BOOST_TEST_EQ(+num_merged, ref_map.size());
+      BOOST_TEST_EQ(
+        raii::move_constructor,
+        value_type_cardinality * reference_cont.size());
+      BOOST_TEST_EQ(+num_merged, reference_cont.size());
-      test_fuzzy_matches_reference(x, ref_map, rg);
+      test_fuzzy_matches_reference(x, reference_cont, rg);
    }
    check_raii_counts();
  }
 
-  template <class G>
-  void insert_and_merge_tests(G gen, test::random_generator rg)
+  template <typename X, typename Y, class GF>
+  void insert_and_merge_tests(
+    std::pair<X*, Y*>, GF gen_factory, test::random_generator rg)
  {
-    using map2_type = boost::unordered::concurrent_flat_map<raii, raii,
-      std::hash<raii>, std::equal_to<raii>, allocator_type>;
+    static constexpr auto value_type_cardinality =
+      value_cardinality<typename X::value_type>::value;
+    using allocator_type = typename X::allocator_type;
+    auto gen = gen_factory.template get<X>();
    auto vals1 = make_random_values(1024 * 8, [&] { return gen(rg); });
    auto vals2 = make_random_values(1024 * 4, [&] { return gen(rg); });
-    auto ref_map = boost::unordered_flat_map<raii, raii>();
-    ref_map.insert(vals1.begin(), vals1.end());
-    ref_map.insert(vals2.begin(), vals2.end());
+    auto reference_cont = reference_container<X>();
+    reference_cont.insert(vals1.begin(), vals1.end());
+    reference_cont.insert(vals2.begin(), vals2.end());
    {
      raii::reset_counts();
-      map_type x1(2 * vals1.size(), hasher(1), key_equal(2), allocator_type(3));
+      X x1(2 * vals1.size(), hasher(1), key_equal(2), allocator_type(3));
-      map2_type x2(2 * vals1.size(), allocator_type(3));
+      Y x2(2 * vals1.size(), allocator_type(3));
      std::thread t1, t2, t3;
      boost::compat::latch l(2);
@@ -190,12 +212,13 @@ namespace {
      if (num_merges > 0) {
        // num_merges is 0 most commonly in the case of the limited_range
        // generator, as both maps will contain keys from 0 to 99
-        BOOST_TEST_EQ(+raii::move_constructor, 2 * num_merges);
+        BOOST_TEST_EQ(
+          +raii::move_constructor, value_type_cardinality * num_merges);
        BOOST_TEST_GE(call_count, 1u);
      }
      x1.merge(x2);
-      test_fuzzy_matches_reference(x1, ref_map, rg);
+      test_fuzzy_matches_reference(x1, reference_cont, rg);
    }
    check_raii_counts();
@@ -206,13 +229,15 @@ namespace {
 // clang-format off
 UNORDERED_TEST(
   merge_tests,
+  ((test_maps)(test_sets))
   ((lvalue_merge)(rvalue_merge))
-  ((value_type_generator))
+  ((value_type_generator_factory))
   ((default_generator)(sequential)(limited_range)))
 
 UNORDERED_TEST(
   insert_and_merge_tests,
-  ((value_type_generator))
+  ((test_maps)(test_sets))
+  ((value_type_generator_factory))
   ((default_generator)(sequential)(limited_range)))
 // clang-format on

View File

@@ -19,16 +19,29 @@ void assertion_failed_msg(
   throw 0;
 }
 
-void assertion_failed(char const*, char const*, char const*, long)
+// LCOV_EXCL_START
+void assertion_failed(char const*, char const*, char const*, long)
 {
   std::abort();
-}
+} // LCOV_EXCL_STOP
 }
-// LCOV_EXCL_STOP
+} // namespace boost
 
+#include "helpers.hpp"
 #include <boost/unordered/concurrent_flat_map.hpp>
+#include <boost/unordered/concurrent_flat_set.hpp>
 #include <boost/core/lightweight_test.hpp>
 
+using test::default_generator;
+
+using map_type = boost::unordered::concurrent_flat_map<raii, raii>;
+using set_type = boost::unordered::concurrent_flat_set<raii>;
+
+map_type* test_map;
+set_type* test_set;
 
 template<typename F>
 void detect_reentrancy(F f)
 {
@@ -40,40 +53,61 @@ void detect_reentrancy(F f)
   BOOST_TEST(reentrancy_detected);
 }
 
-int main()
+namespace {
+  template <class X, class GF>
+  void reentrancy_tests(X*, GF gen_factory, test::random_generator rg)
   {
-    using map = boost::concurrent_flat_map<int, int>;
-    using value_type = typename map::value_type;
+    using key_type = typename X::key_type;
+    // concurrent_flat_set visit is always const access
+    using arg_type = typename std::conditional<
+      std::is_same<typename X::key_type, typename X::value_type>::value,
+      typename X::value_type const,
+      typename X::value_type
+    >::type;
 
-    map m1, m2;
-    m1.emplace(0, 0);
-    m2.emplace(1, 0);
+    auto gen = gen_factory.template get<X>();
+    auto values = make_random_values(1024 * 16, [&] { return gen(rg); });
+
+    X x1, x2;
+    x1.insert(values.begin(), values.end());
+    x2.insert(values.begin(), values.end());
 
     detect_reentrancy([&] {
-      m1.visit_all([&](value_type&) { (void)m1.contains(0); });
+      x1.visit_all([&](arg_type&) { (void)x1.contains(key_type()); });
    }); // LCOV_EXCL_LINE
 
    detect_reentrancy([&] {
-      m1.visit_all([&](value_type&) { m1.rehash(0); });
+      x1.visit_all([&](arg_type&) { x1.rehash(0); });
    }); // LCOV_EXCL_LINE
 
    detect_reentrancy([&] {
-      m1.visit_all([&](value_type&) {
-        m2.visit_all([&](value_type&) {
-          m1=m2;
+      x1.visit_all([&](arg_type&) {
+        x2.visit_all([&](arg_type&) {
+          x1=x2;
        }); // LCOV_EXCL_START
      });
    });
    // LCOV_EXCL_STOP
 
    detect_reentrancy([&] {
-      m1.visit_all([&](value_type&) {
-        m2.visit_all([&](value_type&) {
-          m2=m1;
+      x1.visit_all([&](arg_type&) {
+        x2.visit_all([&](arg_type&) {
+          x2=x1;
        }); // LCOV_EXCL_START
      });
    });
    // LCOV_EXCL_STOP
-    return boost::report_errors();
  }
+} // namespace
 
+// clang-format off
+UNORDERED_TEST(
+  reentrancy_tests,
+  ((test_map)(test_set))
+  ((value_type_generator_factory))
+  ((default_generator)))
+// clang-format on
 
+RUN_TESTS()
@@ -1,10 +1,12 @@
// Copyright (C) 2023 Christian Mazakas
// Copyright (C) 2023 Joaquin M Lopez Munoz
// Distributed under the Boost Software License, Version 1.0. (See accompanying
// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)

#include "helpers.hpp"

#include <boost/unordered/concurrent_flat_map.hpp>
#include <boost/unordered/concurrent_flat_set.hpp>

using test::default_generator;
using test::limited_range;
@@ -12,18 +14,25 @@ using test::sequential;
using hasher = stateful_hash;
using key_equal = stateful_key_equal;

using map_type = boost::unordered::concurrent_flat_map<raii, raii, hasher,
  key_equal, stateful_allocator<std::pair<raii const, raii> > >;

using set_type = boost::unordered::concurrent_flat_set<raii, hasher,
  key_equal, stateful_allocator<raii> >;

map_type* test_map;
set_type* test_set;

namespace {
  test::seed_t initialize_seed{748775921};

  template <typename X>
  void rehash_no_insert(X*)
  {
    using allocator_type = typename X::allocator_type;

    X x(0, hasher(1), key_equal(2), allocator_type(3));
    BOOST_TEST_EQ(x.bucket_count(), 0u);

    x.rehash(1024);
@@ -37,10 +46,13 @@ namespace {
    BOOST_TEST_EQ(x.bucket_count(), 0u);
  }

  template <typename X>
  void reserve_no_insert(X*)
  {
    using allocator_type = typename X::allocator_type;
    using size_type = typename X::size_type;

    X x(0, hasher(1), key_equal(2), allocator_type(3));

    auto f = [&x](double c) {
      return static_cast<size_type>(std::ceil(c / x.max_load_factor()));
@@ -59,9 +71,13 @@ namespace {
    BOOST_TEST_EQ(x.bucket_count(), f(0.0));
  }

  template <class X, class GF>
  void insert_and_erase_with_rehash(
    X*, GF gen_factory, test::random_generator rg)
  {
    using allocator_type = typename X::allocator_type;

    auto gen = gen_factory.template get<X>();
    auto vals1 = make_random_values(1024 * 8, [&] { return gen(rg); });
    auto erase_indices = std::vector<std::size_t>(vals1.size());
@@ -70,13 +86,13 @@ namespace {
    }
    shuffle_values(erase_indices);

    auto reference_cont = reference_container<X>();
    reference_cont.insert(vals1.begin(), vals1.end());

    {
      raii::reset_counts();

      X x(0, hasher(1), key_equal(2), allocator_type(3));

      std::thread t1, t2, t3;
      boost::compat::latch l(2);
@@ -121,7 +137,7 @@ namespace {
      for (std::size_t idx = 0; idx < erase_indices.size(); ++idx) {
        auto const& val = vals1[erase_indices[idx]];
        x.erase(get_key(val));
        if (idx % 100 == 0) {
          std::this_thread::yield();
        }
@@ -161,7 +177,7 @@ namespace {
      BOOST_TEST_GE(call_count, 1u);

      test_fuzzy_matches_reference(x, reference_cont, rg);
    }

    check_raii_counts();
@@ -169,9 +185,18 @@ namespace {
} // namespace

// clang-format off
UNORDERED_TEST(
  rehash_no_insert,
  ((test_map)(test_set)))

UNORDERED_TEST(
  reserve_no_insert,
  ((test_map)(test_set)))

UNORDERED_TEST(
  insert_and_erase_with_rehash,
  ((test_map)(test_set))
  ((value_type_generator_factory))
  ((default_generator)(sequential)(limited_range)))
// clang-format on
@@ -11,6 +11,7 @@
#include <boost/archive/xml_iarchive.hpp>
#include <boost/serialization/nvp.hpp>
#include <boost/unordered/concurrent_flat_map.hpp>
#include <boost/unordered/concurrent_flat_set.hpp>

namespace {
@@ -69,9 +70,11 @@ namespace {
  boost::concurrent_flat_map<
    test::object, test::object, test::hash, test::equal_to>* test_flat_map;

  boost::concurrent_flat_set<
    test::object, test::hash, test::equal_to>* test_flat_set;

  UNORDERED_TEST(serialization_tests,
    ((test_flat_map)(test_flat_set))
    ((text_archive)(xml_archive))
    ((default_generator)))
}
@@ -1,10 +1,12 @@
// Copyright (C) 2023 Christian Mazakas
// Copyright (C) 2023 Joaquin M Lopez Munoz
// Distributed under the Boost Software License, Version 1.0. (See accompanying
// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)

#include "helpers.hpp"

#include <boost/unordered/concurrent_flat_map.hpp>
#include <boost/unordered/concurrent_flat_set.hpp>

test::seed_t initialize_seed{996130204};
@@ -56,17 +58,12 @@ template <class T> struct pocs_allocator
using hasher = stateful_hash;
using key_equal = stateful_key_equal;

using map_type = boost::unordered::concurrent_flat_map<raii, raii, hasher,
  key_equal, stateful_allocator<std::pair<raii const, raii> > >;

using set_type = boost::unordered::concurrent_flat_set<raii, hasher,
  key_equal, stateful_allocator<raii> >;

template <class T> struct is_nothrow_member_swappable
{
@@ -75,13 +72,21 @@ template <class T> struct is_nothrow_member_swappable
};

BOOST_STATIC_ASSERT(is_nothrow_member_swappable<
  replace_allocator<map_type, std::allocator> >::value);

BOOST_STATIC_ASSERT(is_nothrow_member_swappable<
  replace_allocator<map_type, pocs_allocator> >::value);

BOOST_STATIC_ASSERT(!is_nothrow_member_swappable<map_type>::value);

BOOST_STATIC_ASSERT(is_nothrow_member_swappable<
  replace_allocator<set_type, std::allocator> >::value);

BOOST_STATIC_ASSERT(is_nothrow_member_swappable<
  replace_allocator<set_type, pocs_allocator> >::value);

BOOST_STATIC_ASSERT(!is_nothrow_member_swappable<set_type>::value);

namespace {
  struct
  {
@@ -97,31 +102,31 @@ namespace {
    }
  } free_fn_swap;

  template <class X, class F, class GF>
  void swap_tests(X*, F swapper, GF gen_factory, test::random_generator rg)
  {
    using value_type = typename X::value_type;
    using allocator_type = typename X::allocator_type;

    bool const pocs =
      boost::allocator_propagate_on_container_swap<
        allocator_type>::type::value;

    auto gen = gen_factory.template get<X>();
    auto vals1 = make_random_values(1024 * 8, [&] { return gen(rg); });
    auto vals2 = make_random_values(1024 * 4, [&] { return gen(rg); });

    auto reference_cont1 = reference_container<X>(vals1.begin(), vals1.end());
    auto reference_cont2 = reference_container<X>(vals2.begin(), vals2.end());

    {
      raii::reset_counts();

      X x1(vals1.begin(), vals1.end(), vals1.size(), hasher(1), key_equal(2),
        allocator_type(3));

      X x2(vals2.begin(), vals2.end(), vals2.size(), hasher(2), key_equal(1),
        pocs ? allocator_type(4) : allocator_type(3));

      if (pocs) {
        BOOST_TEST(x1.get_allocator() != x2.get_allocator());
@@ -132,7 +137,7 @@ namespace {
      auto const old_cc = +raii::copy_constructor;
      auto const old_mc = +raii::move_constructor;

      thread_runner(vals1, [&x1, &x2, swapper](boost::span<value_type> s) {
        (void)s;

        swapper(x1, x2);
@@ -143,20 +148,20 @@ namespace {
      BOOST_TEST_EQ(raii::move_constructor, old_mc);

      if (pocs) {
        if (x1.get_allocator() == allocator_type(3)) {
          BOOST_TEST(x2.get_allocator() == allocator_type(4));
        } else {
          BOOST_TEST(x1.get_allocator() == allocator_type(4));
          BOOST_TEST(x2.get_allocator() == allocator_type(3));
        }
      } else {
        BOOST_TEST(x1.get_allocator() == allocator_type(3));
        BOOST_TEST(x1.get_allocator() == x2.get_allocator());
      }

      if (x1.size() == reference_cont1.size()) {
        test_matches_reference(x1, reference_cont1);
        test_matches_reference(x2, reference_cont2);

        BOOST_TEST_EQ(x1.hash_function(), hasher(1));
        BOOST_TEST_EQ(x1.key_eq(), key_equal(2));
@@ -164,8 +169,8 @@ namespace {
        BOOST_TEST_EQ(x2.hash_function(), hasher(2));
        BOOST_TEST_EQ(x2.key_eq(), key_equal(1));
      } else {
        test_matches_reference(x2, reference_cont1);
        test_matches_reference(x1, reference_cont2);

        BOOST_TEST_EQ(x1.hash_function(), hasher(2));
        BOOST_TEST_EQ(x1.key_eq(), key_equal(1));
@@ -177,17 +182,21 @@ namespace {
    check_raii_counts();
  }

  template <class X, class F, class GF>
  void insert_and_swap(
    X*, F swapper, GF gen_factory, test::random_generator rg)
  {
    using allocator_type = typename X::allocator_type;

    auto gen = gen_factory.template get<X>();
    auto vals1 = make_random_values(1024 * 8, [&] { return gen(rg); });
    auto vals2 = make_random_values(1024 * 4, [&] { return gen(rg); });

    {
      raii::reset_counts();

      X x1(vals1.size(), hasher(1), key_equal(2), allocator_type(3));
      X x2(vals2.size(), hasher(2), key_equal(1), allocator_type(3));

      std::thread t1, t2, t3;
      boost::compat::latch l(2);
@@ -282,21 +291,25 @@ namespace {
  }

  map_type* map;
  replace_allocator<map_type, pocs_allocator>* pocs_map;
  set_type* set;
  replace_allocator<set_type, pocs_allocator>* pocs_set;

} // namespace

// clang-format off
UNORDERED_TEST(
  swap_tests,
  ((map)(pocs_map)(set)(pocs_set))
  ((member_fn_swap)(free_fn_swap))
  ((value_type_generator_factory))
  ((default_generator)(sequential)(limited_range)))

UNORDERED_TEST(insert_and_swap,
  ((map)(set))
  ((member_fn_swap)(free_fn_swap))
  ((value_type_generator_factory))
  ((default_generator)(sequential)(limited_range)))
// clang-format on
@@ -373,6 +373,9 @@ using test::default_generator;
using test::limited_range;
using test::sequential;

value_generator<std::pair<raii const, raii> > value_type_generator;
value_generator<std::pair<raii, raii> > init_type_generator;

// clang-format off
UNORDERED_TEST(
  try_emplace,
@@ -1,38 +1,65 @@
// Copyright (C) 2023 Christian Mazakas
// Copyright (C) 2023 Joaquin M Lopez Munoz
// Distributed under the Boost Software License, Version 1.0. (See accompanying
// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)

#include "helpers.hpp"

#include <boost/unordered/concurrent_flat_map.hpp>
#include <boost/unordered/concurrent_flat_set.hpp>

#include <boost/core/ignore_unused.hpp>

#include <array>
#include <functional>
#include <vector>

namespace {
  test::seed_t initialize_seed(335740237);

  auto non_present_keys = []
  {
    std::array<raii,128> a;
    for(std::size_t i = 0; i < a.size(); ++i) {
      a[i].x_ = -((int)i + 1);
    }
    return a;
  }();

  template<typename T>
  raii const & get_non_present_key(T const & x)
  {
    return non_present_keys[
      (std::size_t)get_key(x).x_ % non_present_keys.size()];
  }

  struct lvalue_visitor_type
  {
    template <class T, class X, class M>
    void operator()(std::vector<T>& values, X& x, M const& reference_cont)
    {
      using value_type = typename X::value_type;

      // concurrent_flat_set visit is always const access
      using arg_type = typename std::conditional<
        std::is_same<typename X::key_type, typename X::value_type>::value,
        typename X::value_type const,
        typename X::value_type
      >::type;

      std::atomic<std::uint64_t> num_visits{0};
      std::atomic<std::uint64_t> total_count{0};

      auto mut_visitor = [&num_visits, &reference_cont](arg_type& v) {
        BOOST_TEST(reference_cont.contains(get_key(v)));
        BOOST_TEST_EQ(v, *reference_cont.find(get_key(v)));
        ++num_visits;
      };

      auto const_visitor =
        [&num_visits, &reference_cont](value_type const& v) {
          BOOST_TEST(reference_cont.contains(get_key(v)));
          BOOST_TEST_EQ(v, *reference_cont.find(get_key(v)));
          ++num_visits;
        };
@@ -40,14 +67,14 @@ namespace {
      thread_runner(
        values, [&x, &mut_visitor, &total_count](boost::span<T> s) {
          for (auto const& val : s) {
            auto r = get_key(val).x_;
            BOOST_TEST(r >= 0);

            auto count = x.visit(get_key(val), mut_visitor);
            BOOST_TEST_EQ(count, 1u);
            total_count += count;

            count = x.visit(get_non_present_key(val), mut_visitor);
            BOOST_TEST_EQ(count, 0u);
          }
        });
@@ -63,16 +90,16 @@ namespace {
      thread_runner(
        values, [&x, &const_visitor, &total_count](boost::span<T> s) {
          for (auto const& val : s) {
            auto r = get_key(val).x_;
            BOOST_TEST(r >= 0);

            auto const& y = x;

            auto count = y.visit(get_key(val), const_visitor);
            BOOST_TEST_EQ(count, 1u);
            total_count += count;

            count = y.visit(get_non_present_key(val), const_visitor);
            BOOST_TEST_EQ(count, 0u);
          }
        });
@@ -88,15 +115,15 @@ namespace {
      thread_runner(
        values, [&x, &const_visitor, &total_count](boost::span<T> s) {
          for (auto const& val : s) {
            auto r = get_key(val).x_;
            BOOST_TEST(r >= 0);

            auto count = x.cvisit(get_key(val), const_visitor);
            BOOST_TEST_EQ(count, 1u);
            total_count += count;

            count = x.cvisit(get_non_present_key(val), const_visitor);
            BOOST_TEST_EQ(count, 0u);
          }
        });
@@ -111,14 +138,14 @@ namespace {
      {
        thread_runner(values, [&x, &total_count](boost::span<T> s) {
          for (auto const& val : s) {
            auto r = get_key(val).x_;
            BOOST_TEST(r >= 0);

            auto count = x.count(get_key(val));
            BOOST_TEST_EQ(count, 1u);
            total_count += count;

            count = x.count(get_non_present_key(val));
            BOOST_TEST_EQ(count, 0u);
          }
        });
@@ -132,13 +159,13 @@ namespace {
      {
        thread_runner(values, [&x](boost::span<T> s) {
          for (auto const& val : s) {
            auto r = get_key(val).x_;
            BOOST_TEST(r >= 0);

            auto contains = x.contains(get_key(val));
            BOOST_TEST(contains);

            contains = x.contains(get_non_present_key(val));
            BOOST_TEST(!contains);
          }
        });
@@ -152,22 +179,29 @@ namespace {
  struct transp_visitor_type
  {
    template <class T, class X, class M>
    void operator()(std::vector<T>& values, X& x, M const& reference_cont)
    {
      using value_type = typename X::value_type;

      // concurrent_flat_set visit is always const access
      using arg_type = typename std::conditional<
        std::is_same<typename X::key_type, typename X::value_type>::value,
        typename X::value_type const,
        typename X::value_type
      >::type;

      std::atomic<std::uint64_t> num_visits{0};
      std::atomic<std::uint64_t> total_count{0};

      auto mut_visitor = [&num_visits, &reference_cont](arg_type& v) {
        BOOST_TEST(reference_cont.contains(get_key(v)));
        BOOST_TEST_EQ(v, *reference_cont.find(get_key(v)));
        ++num_visits;
      };

      auto const_visitor = [&num_visits, &reference_cont](value_type const& v) {
        BOOST_TEST(reference_cont.contains(get_key(v)));
        BOOST_TEST_EQ(v, *reference_cont.find(get_key(v)));
        ++num_visits;
      };
@@ -175,15 +209,15 @@ namespace {
      thread_runner(
        values, [&x, &mut_visitor, &total_count](boost::span<T> s) {
          for (auto const& val : s) {
            auto r = get_key(val).x_;
            BOOST_TEST(r >= 0);

            auto count = x.visit(get_key(val).x_, mut_visitor);
            BOOST_TEST_EQ(count, 1u);
            total_count += count;

            count = x.visit(get_non_present_key(val).x_, mut_visitor);
            BOOST_TEST_EQ(count, 0u);
          }
        });
@@ -199,16 +233,16 @@ namespace {
      thread_runner(
        values, [&x, &const_visitor, &total_count](boost::span<T> s) {
          for (auto const& val : s) {
            auto r = get_key(val).x_;
            BOOST_TEST(r >= 0);

            auto const& y = x;

            auto count = y.visit(get_key(val).x_, const_visitor);
            BOOST_TEST_EQ(count, 1u);
            total_count += count;

            count = y.visit(get_non_present_key(val).x_, const_visitor);
            BOOST_TEST_EQ(count, 0u);
          }
        });
@@ -224,15 +258,15 @@ namespace {
      thread_runner(
        values, [&x, &const_visitor, &total_count](boost::span<T> s) {
          for (auto const& val : s) {
            auto r = get_key(val).x_;
            BOOST_TEST(r >= 0);

            auto count = x.cvisit(get_key(val).x_, const_visitor);
            BOOST_TEST_EQ(count, 1u);
            total_count += count;

            count = x.cvisit(get_non_present_key(val).x_, const_visitor);
            BOOST_TEST_EQ(count, 0u);
          }
        });
@@ -247,14 +281,14 @@ namespace {
      {
        thread_runner(values, [&x, &total_count](boost::span<T> s) {
          for (auto const& val : s) {
            auto r = get_key(val).x_;
            BOOST_TEST(r >= 0);

            auto count = x.count(get_key(val).x_);
            BOOST_TEST_EQ(count, 1u);
            total_count += count;

            count = x.count(get_non_present_key(val).x_);
            BOOST_TEST_EQ(count, 0u);
          }
        });
@@ -268,13 +302,13 @@ namespace {
      {
        thread_runner(values, [&x](boost::span<T> s) {
          for (auto const& val : s) {
            auto r = get_key(val).x_;
            BOOST_TEST(r >= 0);

            auto contains = x.contains(get_key(val).x_);
            BOOST_TEST(contains);

            contains = x.contains(get_non_present_key(val).x_);
            BOOST_TEST(!contains);
          }
        });
@@ -288,24 +322,31 @@ namespace {
  struct visit_all_type
  {
    template <class T, class X, class M>
    void operator()(std::vector<T>& values, X& x, M const& reference_cont)
    {
      using value_type = typename X::value_type;

      // concurrent_flat_set visit is always const access
      using arg_type = typename std::conditional<
        std::is_same<typename X::key_type, typename X::value_type>::value,
        typename X::value_type const,
        typename X::value_type
      >::type;

      std::atomic<std::uint64_t> total_count{0};

      auto mut_visitor = [&reference_cont](std::atomic<uint64_t>& num_visits) {
        return [&reference_cont, &num_visits](arg_type& v) {
          BOOST_TEST(reference_cont.contains(get_key(v)));
          BOOST_TEST_EQ(v, *reference_cont.find(get_key(v)));
          ++num_visits;
        };
      };

      auto const_visitor = [&reference_cont](std::atomic<uint64_t>& num_visits) {
        return [&reference_cont, &num_visits](value_type const& v) {
          BOOST_TEST(reference_cont.contains(get_key(v)));
          BOOST_TEST_EQ(v, *reference_cont.find(get_key(v)));
          ++num_visits;
        };
      };
@@ -352,45 +393,52 @@ namespace {
  struct visit_while_type
  {
    template <class T, class X, class M>
    void operator()(std::vector<T>& values, X& x, M const& reference_cont)
    {
      using value_type = typename X::value_type;

      // concurrent_flat_set visit is always const access
      using arg_type = typename std::conditional<
        std::is_same<typename X::key_type, typename X::value_type>::value,
        typename X::value_type const,
        typename X::value_type
      >::type;

      auto mut_truthy_visitor = [&reference_cont](
                                  std::atomic<uint64_t>& num_visits) {
        return [&reference_cont, &num_visits](arg_type& v) {
          BOOST_TEST(reference_cont.contains(get_key(v)));
          BOOST_TEST_EQ(v, *reference_cont.find(get_key(v)));
          ++num_visits;
          return true;
        };
      };

      auto const_truthy_visitor = [&reference_cont](
                                    std::atomic<uint64_t>& num_visits) {
        return [&reference_cont, &num_visits](value_type const& v) {
          BOOST_TEST(reference_cont.contains(get_key(v)));
          BOOST_TEST_EQ(v, *reference_cont.find(get_key(v)));
          ++num_visits;
          return true;
        };
      };

      auto mut_falsey_visitor = [&reference_cont](
                                  std::atomic<uint64_t>& num_visits) {
        return [&reference_cont, &num_visits](arg_type& v) {
          BOOST_TEST(reference_cont.contains(get_key(v)));
          ++num_visits;
          return (get_value(v).x_ % 100) == 0;
        };
      };

      auto const_falsey_visitor = [&reference_cont](
                                    std::atomic<uint64_t>& num_visits) {
        return [&reference_cont, &num_visits](value_type const& v) {
          BOOST_TEST(reference_cont.contains(get_key(v)));
          ++num_visits;
          return (get_value(v).x_ % 100) == 0;
        };
      };
@@ -452,23 +500,30 @@ namespace {
  struct exec_policy_visit_all_type
  {
    template <class T, class X, class M>
    void operator()(std::vector<T>& values, X& x, M const& reference_cont)
    {
#if defined(BOOST_UNORDERED_PARALLEL_ALGORITHMS)
      using value_type = typename X::value_type;

      // concurrent_flat_set visit is always const access
      using arg_type = typename std::conditional<
        std::is_same<typename X::key_type, typename X::value_type>::value,
        typename X::value_type const,
        typename X::value_type
      >::type;

      auto mut_visitor = [&reference_cont](std::atomic<uint64_t>& num_visits) {
        return [&reference_cont, &num_visits](arg_type& v) {
          BOOST_TEST(reference_cont.contains(get_key(v)));
          BOOST_TEST_EQ(v, *reference_cont.find(get_key(v)));
          ++num_visits;
        };
      };

      auto const_visitor = [&reference_cont](std::atomic<uint64_t>& num_visits) {
        return [&reference_cont, &num_visits](value_type const& v) {
          BOOST_TEST(reference_cont.contains(get_key(v)));
          BOOST_TEST_EQ(v, *reference_cont.find(get_key(v)));
          ++num_visits;
        };
      };
@@ -502,7 +557,7 @@ namespace {
#else #else
(void)values; (void)values;
(void)x; (void)x;
(void)reference_map; (void)reference_cont;
#endif #endif
} }
} exec_policy_visit_all; } exec_policy_visit_all;
@@ -510,48 +565,55 @@ namespace {
struct exec_policy_visit_while_type struct exec_policy_visit_while_type
{ {
template <class T, class X, class M> template <class T, class X, class M>
void operator()(std::vector<T>& values, X& x, M const& reference_map) void operator()(std::vector<T>& values, X& x, M const& reference_cont)
{ {
#if defined(BOOST_UNORDERED_PARALLEL_ALGORITHMS) #if defined(BOOST_UNORDERED_PARALLEL_ALGORITHMS)
using value_type = typename X::value_type; using value_type = typename X::value_type;
auto mut_truthy_visitor = [&reference_map]( // concurrent_flat_set visit is always const access
using arg_type = typename std::conditional<
std::is_same<typename X::key_type, typename X::value_type>::value,
typename X::value_type const,
typename X::value_type
>::type;
auto mut_truthy_visitor = [&reference_cont](
std::atomic<uint64_t>& num_visits) { std::atomic<uint64_t>& num_visits) {
return [&reference_map, &num_visits](value_type& kv) { return [&reference_cont, &num_visits](arg_type& v) {
BOOST_TEST(reference_map.contains(kv.first)); BOOST_TEST(reference_cont.contains(get_key(v)));
BOOST_TEST_EQ(kv.second, reference_map.find(kv.first)->second); BOOST_TEST_EQ(v, *reference_cont.find(get_key(v)));
++num_visits; ++num_visits;
return true; return true;
}; };
}; };
auto const_truthy_visitor = [&reference_map]( auto const_truthy_visitor = [&reference_cont](
std::atomic<uint64_t>& num_visits) { std::atomic<uint64_t>& num_visits) {
return [&reference_map, &num_visits](value_type const& kv) { return [&reference_cont, &num_visits](value_type const& v) {
BOOST_TEST(reference_map.contains(kv.first)); BOOST_TEST(reference_cont.contains(get_key(v)));
BOOST_TEST_EQ(kv.second, reference_map.find(kv.first)->second); BOOST_TEST_EQ(v, *reference_cont.find(get_key(v)));
++num_visits; ++num_visits;
return true; return true;
}; };
}; };
auto mut_falsey_visitor = [&reference_map]( auto mut_falsey_visitor = [&reference_cont](
std::atomic<uint64_t>& num_visits) { std::atomic<uint64_t>& num_visits) {
return [&reference_map, &num_visits](value_type& kv) { return [&reference_cont, &num_visits](arg_type& v) {
BOOST_TEST(reference_map.contains(kv.first)); BOOST_TEST(reference_cont.contains(get_key(v)));
BOOST_TEST_EQ(kv.second, reference_map.find(kv.first)->second); BOOST_TEST_EQ(v, *reference_cont.find(get_key(v)));
++num_visits; ++num_visits;
return (kv.second.x_ % 100) == 0; return (get_value(v).x_ % 100) == 0;
}; };
}; };
auto const_falsey_visitor = [&reference_map]( auto const_falsey_visitor = [&reference_cont](
std::atomic<uint64_t>& num_visits) { std::atomic<uint64_t>& num_visits) {
return [&reference_map, &num_visits](value_type const& kv) { return [&reference_cont, &num_visits](value_type const& v) {
BOOST_TEST(reference_map.contains(kv.first)); BOOST_TEST(reference_cont.contains(get_key(v)));
BOOST_TEST_EQ(kv.second, reference_map.find(kv.first)->second); BOOST_TEST_EQ(v, *reference_cont.find(get_key(v)));
++num_visits; ++num_visits;
return (kv.second.x_ % 100) == 0; return (get_value(v).x_ % 100) == 0;
}; };
}; };
@@ -616,24 +678,17 @@ namespace {
#else #else
(void)values; (void)values;
(void)x; (void)x;
(void)reference_map; (void)reference_cont;
#endif #endif
} }
} exec_policy_visit_while; } exec_policy_visit_while;
template <class X, class G, class F> template <class X, class GF, class F>
void visit(X*, G gen, F visitor, test::random_generator rg) void visit(X*, GF gen_factory, F visitor, test::random_generator rg)
{ {
auto gen = gen_factory.template get<X>();
auto values = make_random_values(1024 * 16, [&] { return gen(rg); }); auto values = make_random_values(1024 * 16, [&] { return gen(rg); });
for (auto& val : values) { auto reference_cont = reference_container<X>(values.begin(), values.end());
if (val.second.x_ == 0) {
val.second.x_ = 1;
}
val.second.x_ *= -1;
}
auto reference_map =
boost::unordered_flat_map<raii, raii>(values.begin(), values.end());
raii::reset_counts(); raii::reset_counts();
@@ -642,7 +697,7 @@ namespace {
for (auto const& v : values) { for (auto const& v : values) {
x.insert(v); x.insert(v);
} }
BOOST_TEST_EQ(x.size(), reference_map.size()); BOOST_TEST_EQ(x.size(), reference_cont.size());
std::uint64_t old_default_constructor = raii::default_constructor; std::uint64_t old_default_constructor = raii::default_constructor;
std::uint64_t old_copy_constructor = raii::copy_constructor; std::uint64_t old_copy_constructor = raii::copy_constructor;
@@ -650,7 +705,7 @@ namespace {
std::uint64_t old_copy_assignment = raii::copy_assignment; std::uint64_t old_copy_assignment = raii::copy_assignment;
std::uint64_t old_move_assignment = raii::move_assignment; std::uint64_t old_move_assignment = raii::move_assignment;
visitor(values, x, reference_map); visitor(values, x, reference_cont);
BOOST_TEST_EQ(old_default_constructor, raii::default_constructor); BOOST_TEST_EQ(old_default_constructor, raii::default_constructor);
BOOST_TEST_EQ(old_copy_constructor, raii::copy_constructor); BOOST_TEST_EQ(old_copy_constructor, raii::copy_constructor);
@@ -669,9 +724,10 @@ namespace {
raii::destructor); raii::destructor);
} }
template <class X, class G> template <class X, class GF>
void empty_visit(X*, G gen, test::random_generator rg) void empty_visit(X*, GF gen_factory, test::random_generator rg)
{ {
auto gen = gen_factory.template get<X>();
auto values = make_random_values(1024 * 16, [&] { return gen(rg); }); auto values = make_random_values(1024 * 16, [&] { return gen(rg); });
using values_type = decltype(values); using values_type = decltype(values);
using span_value_type = typename values_type::value_type; using span_value_type = typename values_type::value_type;
@@ -696,7 +752,7 @@ namespace {
BOOST_TEST_EQ(num_visits, 0u); BOOST_TEST_EQ(num_visits, 0u);
for (auto const& val : s) { for (auto const& val : s) {
auto count = x.visit(val.first, auto count = x.visit(get_key(val),
[&num_visits](typename X::value_type const&) { ++num_visits; }); [&num_visits](typename X::value_type const&) { ++num_visits; });
BOOST_TEST_EQ(count, 0u); BOOST_TEST_EQ(count, 0u);
} }
@@ -716,8 +772,8 @@ namespace {
BOOST_TEST_EQ(raii::destructor, 0u); BOOST_TEST_EQ(raii::destructor, 0u);
} }
template <class X, class G> template <class X, class GF>
void insert_and_visit(X*, G gen, test::random_generator rg) void insert_and_visit(X*, GF gen_factory, test::random_generator rg)
{ {
// here we attempt to ensure happens-before and synchronizes-with // here we attempt to ensure happens-before and synchronizes-with
// the visitation thread essentially chases the insertion one // the visitation thread essentially chases the insertion one
@@ -726,6 +782,7 @@ namespace {
BOOST_TEST(rg == test::sequential); BOOST_TEST(rg == test::sequential);
auto gen = gen_factory.template get<X>();
auto const values = make_random_values(1024 * 16, [&] { return gen(rg); }); auto const values = make_random_values(1024 * 16, [&] { return gen(rg); });
{ {
@@ -752,9 +809,9 @@ namespace {
for (std::size_t idx = 0; idx < values.size(); ++idx) { for (std::size_t idx = 0; idx < values.size(); ++idx) {
std::atomic_bool b{false}; std::atomic_bool b{false};
while (!b) { while (!b) {
x.cvisit(values[idx].first, x.cvisit(get_key(values[idx]),
[&b, &strs, idx, &values](typename X::value_type const& v) { [&b, &strs, idx, &values](typename X::value_type const& v) {
BOOST_TEST_EQ(v.second, values[idx].second); BOOST_TEST_EQ(get_value(v), get_value(values[idx]));
BOOST_TEST_EQ(strs[idx], "rawr"); BOOST_TEST_EQ(strs[idx], "rawr");
b = true; b = true;
}); });
@@ -771,6 +828,9 @@ namespace {
boost::unordered::concurrent_flat_map<raii, raii>* map; boost::unordered::concurrent_flat_map<raii, raii>* map;
boost::unordered::concurrent_flat_map<raii, raii, transp_hash, boost::unordered::concurrent_flat_map<raii, raii, transp_hash,
transp_key_equal>* transp_map; transp_key_equal>* transp_map;
boost::unordered::concurrent_flat_set<raii>* set;
boost::unordered::concurrent_flat_set<raii, transp_hash,
transp_key_equal>* transp_set;
} // namespace } // namespace
@@ -782,29 +842,30 @@ using test::sequential;
UNORDERED_TEST( UNORDERED_TEST(
visit, visit,
((map)) ((map)(set))
((value_type_generator)(init_type_generator)) ((value_type_generator_factory)(init_type_generator_factory))
((lvalue_visitor)(visit_all)(visit_while)(exec_policy_visit_all)(exec_policy_visit_while)) ((lvalue_visitor)(visit_all)(visit_while)(exec_policy_visit_all)
(exec_policy_visit_while))
((default_generator)(sequential)(limited_range))) ((default_generator)(sequential)(limited_range)))
UNORDERED_TEST( UNORDERED_TEST(
visit, visit,
((transp_map)) ((transp_map)(transp_set))
((value_type_generator)(init_type_generator)) ((value_type_generator_factory)(init_type_generator_factory))
((transp_visitor)) ((transp_visitor))
((default_generator)(sequential)(limited_range))) ((default_generator)(sequential)(limited_range)))
UNORDERED_TEST( UNORDERED_TEST(
empty_visit, empty_visit,
((map)(transp_map)) ((map)(transp_map)(set)(transp_set))
((value_type_generator)(init_type_generator)) ((value_type_generator_factory)(init_type_generator_factory))
((default_generator)(sequential)(limited_range)) ((default_generator)(sequential)(limited_range))
) )
UNORDERED_TEST( UNORDERED_TEST(
insert_and_visit, insert_and_visit,
((map)) ((map)(set))
((value_type_generator)) ((value_type_generator_factory))
((sequential)) ((sequential))
) )
View File
@@ -1,4 +1,5 @@
// Copyright 2023 Christian Mazakas. // Copyright 2023 Christian Mazakas.
// Copyright 2023 Joaquin M Lopez Munoz.
// Distributed under the Boost Software License, Version 1.0. (See accompanying // Distributed under the Boost Software License, Version 1.0. (See accompanying
// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt) // file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
@@ -21,6 +22,7 @@ int main() {}
#include <boost/unordered/unordered_set.hpp> #include <boost/unordered/unordered_set.hpp>
#include <boost/unordered/concurrent_flat_map.hpp> #include <boost/unordered/concurrent_flat_map.hpp>
#include <boost/unordered/concurrent_flat_set.hpp>
#include <boost/interprocess/allocators/allocator.hpp> #include <boost/interprocess/allocators/allocator.hpp>
#include <boost/interprocess/containers/string.hpp> #include <boost/interprocess/containers/string.hpp>
@@ -82,10 +84,25 @@ get_container_type()
using concurrent_map = decltype( using concurrent_map = decltype(
get_container_type<boost::concurrent_flat_map>()); get_container_type<boost::concurrent_flat_map>());
using concurrent_set = decltype(
get_container_type<boost::concurrent_flat_set>());
template <class C>
struct is_concurrent_container: std::false_type {};
template <typename... Args>
struct is_concurrent_container<boost::concurrent_flat_map<Args...> >:
std::true_type {};
template <typename... Args>
struct is_concurrent_container<boost::concurrent_flat_set<Args...> >:
std::true_type {};
static char const* shm_map_name = "shared_map"; static char const* shm_map_name = "shared_map";
template <class C> template <class C>
void parent(std::string const& shm_name_, char const* exe_name, C*) typename std::enable_if<!is_concurrent_container<C>::value, void>::type
parent(std::string const& shm_name_, char const* exe_name, C*)
{ {
struct shm_remove struct shm_remove
{ {
@@ -151,7 +168,9 @@ void parent(std::string const& shm_name_, char const* exe_name, C*)
segment.destroy<container_type>(shm_map_name); segment.destroy<container_type>(shm_map_name);
} }
template <class C> void child(std::string const& shm_name, C*) template <class C>
typename std::enable_if<!is_concurrent_container<C>::value, void>::type
child(std::string const& shm_name, C*)
{ {
using container_type = C; using container_type = C;
using iterator = typename container_type::iterator; using iterator = typename container_type::iterator;
@@ -184,7 +203,9 @@ template <class C> void child(std::string const& shm_name, C*)
} }
} }
void parent(std::string const& shm_name_, char const* exe_name, concurrent_map*) template <class C>
typename std::enable_if<is_concurrent_container<C>::value, void>::type
parent(std::string const& shm_name_, char const* exe_name, C*)
{ {
struct shm_remove struct shm_remove
{ {
@@ -200,7 +221,7 @@ void parent(std::string const& shm_name_, char const* exe_name, concurrent_map*)
} }
} remover{shm_name_.c_str()}; } remover{shm_name_.c_str()};
using container_type = concurrent_map; using container_type = C;
std::size_t const shm_size = 64 * 1024; std::size_t const shm_size = 64 * 1024;
@@ -239,9 +260,11 @@ void parent(std::string const& shm_name_, char const* exe_name, concurrent_map*)
segment.destroy<container_type>(shm_map_name); segment.destroy<container_type>(shm_map_name);
} }
void child(std::string const& shm_name, concurrent_map*) template <class C>
typename std::enable_if<is_concurrent_container<C>::value, void>::type
child(std::string const& shm_name, C*)
{ {
using container_type = concurrent_map; using container_type = C;
boost::interprocess::managed_shared_memory segment( boost::interprocess::managed_shared_memory segment(
boost::interprocess::open_only, shm_name.c_str()); boost::interprocess::open_only, shm_name.c_str());