added implementation rationale for concurrent hashmap

This commit is contained in:
joaquintides
2023-05-13 19:29:08 +02:00
parent add01e2dfd
commit 69ee0039e0


@@ -108,7 +108,7 @@ When using a hash function directly suitable for open addressing, post-mixing ca
=== Platform interoperability
The observable behavior of `boost::unordered_flat_set`/`unordered_node_set` and `boost::unordered_flat_map`/`unordered_node_map` is deterministically
identical across different compilers as long as their ``std::size_t``s are the same size and the user-provided
hash function and equality predicate are also interoperable
—this includes elements being ordered in exactly the same way for the same sequence of
operations.
@@ -117,3 +117,25 @@ Although the implementation internally uses SIMD technologies, such as https://e
and https://en.wikipedia.org/wiki/ARM_architecture_family#Advanced_SIMD_(NEON)[Neon^], when available,
this does not affect interoperability. For instance, the behavior is the same
for Visual Studio on an x64-mode Intel CPU with SSE2 and for GCC on an IBM s390x without any supported SIMD technology.
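As a rough illustration of this guarantee, the following sketch (keys and values are
arbitrary, and we assume the hash function used is itself interoperable) performs the
same sequence of insertions; on two platforms with equally sized ``std::size_t``s,
iteration then yields exactly the same element order:

[source,c++]
----
#include <boost/unordered/unordered_flat_map.hpp>
#include <iostream>
#include <string>

int main()
{
  boost::unordered_flat_map<int, std::string> m;
  m.emplace(1, "one");
  m.emplace(2, "two");
  m.emplace(3, "three");

  // Under the conditions stated above, this prints the elements in the same
  // order on every platform.
  for(const auto& kv: m) std::cout << kv.first << " -> " << kv.second << "\n";
}
----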
== Concurrent Hashmap
The same data structure used by Boost.Unordered open-addressing containers has also been
chosen as the foundation of `boost::concurrent_flat_map`:
* Open-addressing is faster than closed-addressing alternatives, both in non-concurrent and
concurrent scenarios.
* Open-addressing layouts are eminently suitable for concurrent access and modification
with minimal locking. In particular, the metadata array can be used for implementations of
lookup that are lock-free up to the last step of actual element comparison (see the
sketch after this list).
* Layout compatibility with Boost.Unordered flat containers allows for fast transfer
of all elements between `boost::concurrent_flat_map` and `boost::unordered_flat_map`.
(This feature has not been implemented yet.)
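The following is a much simplified sketch, not Boost.Unordered's actual implementation,
of how a per-group metadata array enables this: lookup scans the metadata without taking
any lock, and a mutex is acquired only for the final element comparison. All names
(`bucket_group`, `group_size`, `reduced_hash`) are illustrative.

[source,c++]
----
#include <atomic>
#include <cstddef>
#include <cstdint>
#include <mutex>
#include <optional>
#include <string>
#include <utility>

struct bucket_group
{
  static constexpr std::size_t group_size = 16;

  // One metadata byte per slot: 0 means empty, any other value is a reduced
  // hash of the element stored in the corresponding slot.
  std::atomic<std::uint8_t>   metadata[group_size] = {};
  std::pair<int, std::string> slots[group_size];
  std::mutex                  slot_mutex; // guards full element access

  std::optional<std::string> find(int key, std::uint8_t reduced_hash)
  {
    for(std::size_t i = 0; i < group_size; ++i) {
      // Lock-free scan of the metadata array.
      if(metadata[i].load(std::memory_order_acquire) == reduced_hash) {
        // Lock only for the last step: comparing against the actual element.
        std::lock_guard<std::mutex> guard(slot_mutex);
        if(metadata[i].load(std::memory_order_relaxed) == reduced_hash &&
           slots[i].first == key) {
          return slots[i].second;
        }
      }
    }
    return std::nullopt;
  }
};
----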
=== Hash function and platform interoperability
`boost::concurrent_flat_map` makes the same decisions and provides the same guarantees
as Boost.Unordered open-addressing containers with regard to
xref:#rationale_hash_function[hash function defaults] and
xref:#rationale_platform_interoperability[platform interoperability].
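As a usage sketch (assuming the visitation-based interface of `boost::concurrent_flat_map`;
the `portable_hash` functor is purely illustrative), a user-provided hash function that is
interoperable in the sense described above can be supplied exactly as with the
non-concurrent containers:

[source,c++]
----
#include <boost/unordered/concurrent_flat_map.hpp>
#include <cstddef>
#include <iostream>
#include <string>

// Depends only on the key value and the width of std::size_t, so it produces
// the same values on platforms whose std::size_t has the same size.
struct portable_hash
{
  std::size_t operator()(int x) const noexcept
  {
    return static_cast<std::size_t>(x) * 0x9E3779B9u;
  }
};

int main()
{
  boost::concurrent_flat_map<int, std::string, portable_hash> m;
  m.emplace(1, "one");
  m.emplace(2, "two");

  // Element access is visitation-based rather than iterator-based.
  m.visit(1, [](const auto& kv) { std::cout << kv.second << "\n"; });
}
----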