The best approach to endianness for a particular application depends on the interaction between
the application's needs and the characteristics of each of the three approaches
(conversion functions, buffer types, or arithmetic types).
Recommendation: If you are new to endianness, uncertain, or don't want to invest the time to study the tradeoffs, use the endian arithmetic approach.
Endian conversion functions use objects of the ordinary C++ arithmetic
types, like int or unsigned short, to hold values. That
breaks the implicit invariant that the C++ language rules apply. The usual
language rules apply only if the endianness of the object is currently set to
the native endianness for the platform. That can
make it very hard to reason about complex logic flow, and can result in
difficult-to-find bugs.
This pattern is appropriate when all endian elements in a record are typically used regardless of record content or other circumstances.
This pattern in general defers conversion, but for specific local needs does anticipatory conversion.
This pattern is particularly appropriate when coupled with the endian buffer or arithmetic types.
The endian arithmetic approach is recommended to meet these needs. A relatively small
number of header files dealing with binary I/O layouts need to change types. For
example, short or int16_t would change to big_int16_t. No
changes are required for .cpp files.