1. Revision history
1.1. Changes since R0
- Fix minor editorial issues.
- Clarify that the alias definition in § 7.6.1 Implementation effort is not too high is C++-specific.
- Further clarify § 7.5.7 _BitInt breaks more existing code.
- In § 9.7 Absolute values, change "Returns:" to "Effects: Equivalent to:".
2. Introduction
128-bit integers have numerous practical uses, and all major implementations
(MSVC, GCC, LLVM) provide 128-bit integers already.
Among C++ users, there has been great interest in standardizing integers beyond 64 bits for a long time.
With the new wording for intmax_t in the C23 standard
(see § 4.1 C Compatibility),
one of the last obstacles has been removed.
The goal of this paper is to obtain a mandatory ≥ 128-bit integer type with no core language changes
and strong support from the C++ standard library.
To accomplish this, the mandatory aliases std::int_least128_t and std::uint_least128_t are proposed.
Note that any non-malicious implementation would be required to define std::int128_t if possible,
so standardizing the minimum-width types is standardizing exact-width types by proxy.
While the definition of these aliases is trivial, mandating them also implies
library support from numerous standard library facilities.
After extensive investigation, it was determined that the § 4 Impact on the standard and § 5 Impact on implementations are relatively low.
2.1. Lifting library restrictions
The standard library contains a number of artificial hurdles
which make it impossible to provide library support for extended integers.
The current standard already permits the implementation to provide additional
extended (fundamental) integer types in addition to the standard integer types
(int, long, etc.).
However, even if there exists an extended 128-bit integer, among other issues:
- std::to_string(std::int128_t) cannot exist,
- std::bitset cannot be constructed from it (without truncating to unsigned long long), and
- std::abs(std::int128_t) cannot exist.
It would not be legal for an implementation to provide such additional overloads because it would change the meaning of well-formed programs.
Consider the following code, assuming the implementation provides an extended
std::int128_t type.
#include <string>
#include <concepts>

struct S {
    template <typename T>
        requires std::same_as<T, long long> || std::same_as<T, std::int128_t>
    operator T() const { return 0; }
};

int main() {
    std::to_string(S{});
}
This code must always call std::to_string(long long).
If std::int128_t were not the same type as long long
and the implementation added a std::to_string(std::int128_t) overload
in spite of the standard, the call to std::to_string would be ambiguous.
The implementation has some ability to add overloads, stated in [global.functions] paragraph 2:
A call to a non-member function signature described in [support] through [thread] and [depr] shall behave as if the implementation declared no additional non-member function signatures.
This condition is not satisfied.
If a std::to_string(std::int128_t) overload existed, the behavior would not be as if the implementation had declared no additional signatures.
Note: int128_t is not a compiler extension; it’s an optional feature.
C23 [N3047] subclause 7.22.1.1 [Exact-width integer types] paragraph 3 requires
implementations to "define the corresponding typedef names" if there exists a
padding-free integer type with 128 bits.
Even if you don’t find this example convincing, at best,
std::to_string(std::int128_t) and other library support
would be optional.
There are also functions which undeniably cannot exist, like a 128-bit counterpart of
std::stoll (there are only overloads from std::stoi to std::stoll,
and they cannot be distinguished by return type).
It would be highly undesirable to have a 128-bit type whose standard library support is
not documented in the standard, is optional on a per-function basis, and has no feature-testing macros.
Wording changes should be made to clean up this environment.
3. Motivation and scope
There are compelling reasons for standardizing a 128-bit integer type:
- Utility: 128-bit integers are extremely useful in a variety of domains.
- Uniformity: Standardization would unify the many uses under a common name and, ideally, a common ABI.
- Existing practice: 128-bit integers are already implemented in multiple compilers (see § 8.1 Existing 128-bit integer types).
- Performance: It is difficult, if not impossible, to optimize 128-bit operations in software as well as the compiler could for a built-in type (see § 3.2 Utilizing hardware support).
- Low impact: The § 4 Impact on the standard and § 5 Impact on implementations are reasonable.
3.1. Use cases
A GitHub code search for 128-bit integer types in C++ yields 150K files,
and a language-agnostic search yields more than a million.
While it is impossible to discuss every one of these, I will introduce a few use cases of 128-bit integers.
3.1.1. Cryptography
128-bit integers are commonly used in many cryptographic algorithms:
- Most notably, AES-128 uses a 128-bit key size. AES variants with wider key sizes still use a block size of 128 bits.
- Various other block ciphers such as Twofish and Serpent also have key and/or block sizes of 128 bits.
- MD5 hashes produce 128-bit output.
- SHA-2 and SHA-3 produce outputs beyond 128 bits, but outputs can be truncated to 128 bits, or represented as a pair/array of 128-bit integers.
For example, the AddRoundKey step in AES is simply a 128-bit bitwise XOR.
To be fair, the utility of 128-bit integers in cryptographic applications is often limited to providing storage for blocks and keys.
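The AddRoundKey step mentioned above can be sketched in a few lines; here, unsigned __int128 (a GCC/Clang extension) stands in for the proposed std::uint128_t:

```cpp
// Stand-in for the proposed 128-bit type; __int128 is a GCC/Clang extension.
using uint128 = unsigned __int128;

// AddRoundKey: XOR the 128-bit AES state with the 128-bit round key.
constexpr uint128 add_round_key(uint128 state, uint128 round_key) {
    return state ^ round_key;
}
```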
3.1.2. Random number generation
Some random number generators produce 128-bit numbers.
For example, the CSPRNG (cryptographically secure pseudo-random number generator) Fortuna uses a block cipher to produce random numbers.
When a 128-bit block cipher is used, the output is naturally 128-bit as well.
Fortuna is used in the implementation of /dev/random in FreeBSD 11, and in Apple operating systems since 2020.
Some PRNGs use a 128-bit state, such as xorshift128.
std::uint32_t xor128(std::uint32_t x[4]) {
    std::uint32_t t = x[3];
    t ^= t << 11;
    t ^= t >> 8;
    x[3] = x[2];
    x[2] = x[1];
    x[1] = x[0];
    x[0] ^= t ^ (x[0] >> 19);
    return x[0];
}
This can be expressed more elegantly using 128-bit integers:
std::uint32_t xor128(std::uint128_t& x) {
    std::uint32_t t = x >> 96;
    t ^= t << 11;
    t ^= t >> 8;
    x = (x << 32) | (t ^ (std::uint32_t(x) ^ (std::uint32_t(x) >> 19)));
    return x;
}
Generally speaking, there is a large amount of code that effectively performs
128-bit operations, but operates on sequences of 32-bit or 64-bit integers.
In the above example, it is not immediately obvious that the line
x[3] = x[2]; x[2] = x[1]; x[1] = x[0];
is effectively performing a 32-bit shift, whereas
x << 32 is self-documenting.
[P2075R3] proposes counter-based Philox engines for the C++ standard library, and has been received positively. The [DEShawResearch] reference implementation makes use of 128-bit integers.
3.1.3. Widening operations
128-bit arithmetic can produce optimal code for mixed 64/128-bit operations, for which there is already widespread hardware support. Among other instructions, this hardware support includes:
Operation | x86_64 | ARM | RISC-V |
---|---|---|---|
64-to-128-bit unsigned multiply | mul: output to rdx:rax register pair | umulh for high bits, mul for low bits | mulhu for high bits, mul for low bits |
64-to-128-bit signed multiply | imul: output to rdx:rax register pair | smulh for high bits, mul for low bits | mulh for high bits, mul for low bits |
128-to-64-bit unsigned divide | div: rax = quotient, rdx = remainder | ❌ | divu (RV128I) |
128-to-64-bit signed divide | idiv: rax = quotient, rdx = remainder | ❌ | div (RV128I) |
64-to-128-bit carry-less multiply | pclmulqdq: output 128 bits to xmm register | pmull for low bits, pmull2 for high bits | clmul for low bits, clmulh for high bits |
Some operating systems also provide 64/128-bit operations.
For example, the Windows API provides an UnsignedMultiply128 function.
A more general solution was proposed by [P3018R0], which supports widening
multiplication through a function that yields the low and high part of the multiplication as a pair
of integers.
Such utilities would be useful in generic code where integers of any width can be used.
For 64-to-128-bit, it’s obviously more ergonomic to cast operands to a 128-bit type
prior to an ordinary multiplication.
For example, the Stockfish chess engine defines a mul_hi64 function which yields the high part
of a 64-bit multiplication:
inline uint64_t mul_hi64(uint64_t a, uint64_t b) {
#if defined(__GNUC__) && defined(IS_64BIT)
    __extension__ using uint128 = unsigned __int128;
    return (uint128(a) * uint128(b)) >> 64;
#else
    // ...
#endif
}
3.1.3.1. 64-bit modular arithmetic
To perform modular arithmetic with a 64-bit modulus, 128-bit integers are needed.
For example, when computing (a * b) % m between 64-bit unsigned integers
a, b, and m,
the multiplication between a and b is already performed mod 2⁶⁴,
and the result would be incorrect unless m was a power of two.
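A minimal sketch of how a widening type avoids this, using GCC/Clang’s unsigned __int128 in place of the proposed standard alias:

```cpp
#include <cstdint>

using u128 = unsigned __int128; // compiler extension standing in for std::uint128_t

// (a * b) % m for 64-bit operands. A plain 64-bit multiply would wrap
// mod 2^64 first, producing a wrong result for a general modulus m.
std::uint64_t mulmod(std::uint64_t a, std::uint64_t b, std::uint64_t m) {
    return static_cast<std::uint64_t>((u128(a) * b) % m);
}
```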
128-bit operations are used in implementations of std::linear_congruential_engine
(see § 8.2.10 <random> for implementation experience).
Linear congruential engines use modular arithmetic, and since the user can choose
the modulus arbitrarily, the issue is unavoidable.
Note: A popular workaround for linear congruential generators is to choose the modulus to be 2⁶⁴ or 2³². This means that division is not required at all.
3.1.3.2. Multi-precision operations
For various applications (cryptography, numerics, etc.) arithmetic with large widths is required.
For example, the RSA (Rivest–Shamir–Adleman) cryptosystem typically uses key sizes of 2048 or 4096 bits.
"Scripting languages" also commonly use an infinite-precision integer type.
For example, the int type in Python has no size limit.
Multi-precision operations are implemented through multiple widening operations. For example, to implement N-bit multiplication, the number can be split into a sequence of 64-bit "limbs", and long multiplication is performed. Since this involves a carry between digits, a 64-to-128-bit widening operation is required.
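The limb-based long multiplication described above can be sketched with a hypothetical helper; unsigned __int128 stands in for the proposed 128-bit type:

```cpp
#include <cstdint>
#include <vector>

using u128 = unsigned __int128; // stand-in for a standard 128-bit type

// Multiply a little-endian sequence of 64-bit limbs by a single 64-bit
// digit. The 64x64 -> 128-bit product yields the carry "for free".
std::vector<std::uint64_t> mul_by_digit(std::vector<std::uint64_t> limbs,
                                        std::uint64_t digit) {
    std::uint64_t carry = 0;
    for (std::uint64_t& limb : limbs) {
        u128 wide = u128(limb) * digit + carry; // cannot overflow 128 bits
        limb  = static_cast<std::uint64_t>(wide);       // low 64 bits
        carry = static_cast<std::uint64_t>(wide >> 64); // high 64 bits
    }
    if (carry != 0)
        limbs.push_back(carry);
    return limbs;
}
```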
[BoostMultiPrecision] uses a 128-bit integer as a double-wide limb type.
This type is used extensively in the implementation of multi-precision arithmetic.
Note: The introduction of bit-precise integers (§ 7.5 Why no bit-precise integers?) does not
obsolete multi-precision libraries because infinite-precision numbers like
Python’s int cannot be implemented using a constant size.
3.1.4. Fixed-point operations
While 64-bit integers are sufficient for many calculations, the amount of available bits is reduced when the 64 bits are divided into an integral and fractional part. This may cause issues in § 3.1.8 Financial systems.
Furthermore, fixed-point arithmetic with a double-wide operand can emulate integer division, which is a relatively expensive operation, even with hardware support.
std::uint64_t div3(std::uint64_t x) {
    // 1 / 3 as a Q63.65 fixed-point number:
    constexpr std::uint_fast128_t reciprocal_3 = 0xAAAA'AAAA'AAAA'AAAB;
    return (x * reciprocal_3) >> 65; // equivalent to: return x / 3;
}
While modern compilers perform this strength reduction optimization for constant divisors already, they don’t perform it for frequently reused non-constant divisors.
For such divisors, it can make sense to pre-compute the reciprocal and shift constants and use them many times for faster division. Among other libraries, [libdivide] uses this technique (using a pair of 64-bit integers, which effectively forms a 128-bit integer).
Note: 3 is a "lucky" case because all nonzero bits of the reciprocal fit into a 64-bit integer. The number of digits required differs between divisors.
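Reusing a precomputed reciprocal for a non-constant divisor can be sketched as follows. This is a simplified 32-bit variant (m = ⌊2⁶⁴/d⌋ + 1 is exact for all x, d < 2³² with d ≥ 2), not the full 64-bit scheme that libdivide implements; unsigned __int128 stands in for the proposed 128-bit type.

```cpp
#include <cstdint>

using u128 = unsigned __int128; // stand-in for a standard 128-bit type

// Precompute floor(2^64 / d) + 1 once, then divide many times by d
// using only a widening multiply and a shift (no div instruction).
// Correct for 0 <= x < 2^32 and 2 <= d < 2^32 (simplified sketch).
struct reciprocal {
    std::uint64_t m;

    explicit reciprocal(std::uint32_t d)
        : m(static_cast<std::uint64_t>((u128(1) << 64) / d) + 1) {}

    std::uint32_t divide(std::uint32_t x) const {
        return static_cast<std::uint32_t>((u128(x) * m) >> 64);
    }
};
```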
3.1.5. High-precision time calculations
64-bit integers are somewhat insufficient for high-precision clocks, if large time spans should also be covered. When counting nanoseconds, a maximum value of 2⁶³−1 can only represent approximately 9.2 billion seconds, or 292 years. This is enough to keep time for the foreseeable future, but is insufficient for representing historical data long in the past.
This makes 64-bit integers insufficient for some time calculations, where 128-bit integers would suffice. Alternatively, 64-bit floating-point numbers can provide a reasonable trade-off between resolution and range.
timespec is effectively a 128-bit type in POSIX (std::timespec in C++), since both the
seconds and nanoseconds parts of the class are 64-bit integers
(assuming that time_t is 64-bit).
[Bloomberg] uses 128-bit integers to safeguard against potential overflow in time
calculations.
3.1.6. Floating-point operations
The implementation of IEEE 754/IEC-559 floating-point operations often involves examining the bit-representation of the floating-point number through an unsigned integer.
The C++ standard provides std::float128_t, but no matching 128-bit integer type,
which makes this more difficult.
std::signbit(std::float128_t) can be implemented as follows:
constexpr bool signbit(float128_t x) {
    return bit_cast<uint128_t>(x) >> 127;
}
std::isinf(std::float128_t) can be implemented as follows:
constexpr float128_t abs(float128_t x) {
    return bit_cast<float128_t>(bit_cast<uint128_t>(x) & (uint128_t(-1) >> 1));
}

constexpr bool isinf(float128_t x) {
    return bit_cast<uint128_t>(abs(x)) == 0x7fff'0000'0000'0000'0000'0000'0000'0000;
}
Note: Infinity for binary128 numbers is represented as any sign bit, 15 exponent bits all set to 1,
and 112 mantissa bits all set to 0.
[Bloomberg] uses 128-bit integers as part of a 128-bit decimal floating point
implementation, among other uses.
Decimal floating-point numbers are commonly used in financial applications
and are standard C23 types (e.g. _Decimal128) since [N2341].
In C++, a software implementation is necessary.
3.1.7. Float-to-string/String-to-float conversion
The [Dragonbox] binary-to-decimal conversion algorithm requires an integer type that is twice the width of the converted floating-point number. To convert a floating-point number in binary64 format, a 128-bit integer type is used.
Similarly, [fast_float] uses 128-bit numbers as part of decimal-to-binary conversions.
This library provides an efficient std::from_chars implementation.
3.1.8. Financial systems
128-bit integers can be used to represent huge monetary values with high accuracy. When representing cents of a dollar as a 64-bit integer, a monetary value of up to 184.5 quadrillion dollars can be represented. However, this value shrinks dramatically when using smaller fractions.
Since 2005, stock markets are legally required to accept price increments of $0.0001 when the price of a stock is ≤ $1 (see [SEC]). At this precision, 1.84 quadrillion dollars can be represented. Using a uniform precision of ten thousandths would prove problematic when applied to other currencies such as Yen, which forces the complexity of variable precision on the developer.
More extreme still, the smallest fraction of a Bitcoin is a Satoshi, which is a hundred millionth of a Bitcoin. 2⁶³ Satoshis equal approximately 92 billion BTC. In 2009, a Bitcoin was worth less than a penny, so a monetary value of only 920 million USD could be represented in Satoshis.
In conclusion, simply storing the smallest relevant fraction as a 64-bit integer is often insufficient, especially when this fraction is very small and exponential price changes are involved. Rounding is not always an acceptable solution in financial applications.
[NVIDIA] mentions fixed-point accounting calculations as a possible use case of the
__int128 type, which is a preview feature of NVIDIA CUDA 11.5.
[TigerBeetle] discusses why 64-bit integers have been retired in favor of 128-bit integers to store financial amounts and balances in the TigerBeetle financial accounting database. The aforementioned sub-penny requirement is part of the motivation.
3.1.9. Universally unique identifiers
A 128-bit integer can be used to represent a UUID (Universally Unique Identifier). While 64-bit integers are often sufficient as a unique identifier, it is quite likely that two identical identifiers are chosen by a random number generator over a long period of time, especially considering the Birthday Problem. Therefore, at least 128 bits are typically used for such applications.
std::uint128_t random_uuid_v4() {
    return std::experimental::randint<std::uint128_t>(0, -1)
         & 0xffff'ffff'ffff'003f'ff0f'ffff'ffff'ffff  // clear version and variant
         | 0x0000'0000'0000'0080'0040'0000'0000'0000; // set to version 4 and IETF variant
}
The [ClickHouse] database management system defines their UUID type in terms of a 128-bit integer.
3.1.10. Networking
IPv6 addresses can be represented as a 128-bit integer. This may be a convenient representation because bitwise operations for masking and accessing individual bits or bit groups may be used. Implementing these is much easier using a 128-bit integer compared to multi-precision operations using two 64-bit integers.
std::uint128_t ipv6 = /* ... */;

constexpr auto mask10 = 0x3ff;
if ((ipv6 & mask10) != 0b1111111010) /* wrong prefix */;

constexpr auto mask54 = (std::uint64_t(1) << 54) - 1;
if ((ipv6 >> 10 & mask54) != 0) /* expected 54 zeros */;

constexpr auto mask64 = std::uint64_t(-1);
interface_identifier = (ipv6 >> 64) & mask64;
The [ClickHouse] database management system defines their IPv6 type in terms of a 128-bit integer.
3.1.11. Bitsets and lookup tables
A popular technique for optimizing small lookup tables in high-performance applications is to turn them into a bitset. 128 bits offer additional space over 64 bits.
For example, three-dimensional unit vectors ((-1, 0, 0), (1, 0, 0), ..., or (0, 0, 1))
can be represented as integers in range [0, 6).
This requires three bits of storage, and a lookup table for the cross product of two vectors
requires 3⁴ = 81 bits.
The cross product can be computed as follows:
unsigned cross(unsigned a, unsigned b) {
    [[assume(a < 6 && b < 6)]];
    constexpr std::uint128_t lookup = 0x201'6812'1320'8941'06c4'ec21'a941;
    return (lookup >> (a * 6 + b * 3)) & 0b111;
}
This is significantly faster than computing a cross product between triples of integers
or floating-point numbers using multiplication and subtraction.
Using unsigned integers as lookup tables is a very popular technique in chess engines, and commonly referred to as Bitboard. The [px0] chess engine uses a 90-bit board, stored in a 128-bit integer.
Note: Compilers can only perform an "array-to-bitset optimization" to a limited extent at this time.
Clang is the only compiler which performs it, and only for arrays of bool.
Note: std::bitset does not offer a way to extract ranges of bits, only individual bits.
Therefore, it would not have been of much help in the example.
Furthermore, std::bitset has runtime checks and potentially throws exceptions,
which makes it unattractive to some language users.
3.2. Utilizing hardware support
3.2.1. Future-proofing for direct 128-bit support
Hardware support for 64/128-bit mixed operations is already common in x86_64 and ARM. It is also conceivable that hardware support for 128-bit integer arithmetic will be expanded in the foreseeable future. The RISC-V instruction set architecture has a 128-bit variant named RV128I, described in [RISC-V], although no implementation of it exists yet.
When hardware support for 128-bit operations is available, but the source code emulates these in software, the burden of fusing multiple 64-bit operations into a single 128-bit operation is put on the optimizer.
For example, multiple 64-bit multiplications may be fused into a single 64-to-128-bit multiplication. x86_64 already provides hardware support in this case (see § 3.1.3 Widening operations), however, the language provides no way of expressing such an operation through integer types.
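With a 128-bit type, the widening multiply becomes a single expression that optimizers readily lower to one mul instruction on x86_64 (sketch; unsigned __int128 stands in for the proposed type):

```cpp
#include <cstdint>

using u128 = unsigned __int128; // stand-in for a standard 128-bit type

// 64x64 -> 128-bit widening multiply, written as ordinary arithmetic.
// Compilers recognize this pattern and emit a single widening multiply.
void mul_full(std::uint64_t a, std::uint64_t b,
              std::uint64_t& lo, std::uint64_t& hi) {
    u128 product = u128(a) * b;
    lo = static_cast<std::uint64_t>(product);
    hi = static_cast<std::uint64_t>(product >> 64);
}
```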
3.2.2. Support through 128-bit floating-point
On hardware which provides native support for 128-bit floating-point arithmetic
(see [Wikipedia] for a list), integer division of up to 113
bits can be implemented in terms of floating-point division, and this is possibly the fastest routine.
For such instruction selection, the 113-bit division must be recognized by the compiler.
It is very unlikely that the hundreds of operations
comprising a software integer division could be recognized as such.
128 bits is obviously more than 113 bits, so not every operation can be performed this way. However, modern optimizing compilers keep track of range constraints of values.
- If the divisor is zero, mark the operation undefined behavior for the purpose of compiler optimization, or emit ud2.
- Otherwise, if the divisor and dividend are constant, compute the result.
- Otherwise, if the divisor is constant and greater than the dividend, yield zero.
- Otherwise, if the divisor is constant, perform strength reduction (§ 3.1.4 Fixed-point operations), making the division a shift and multiplication.
- Otherwise, if the divisor is a power of two, count the trailing zeros and perform a right-shift.
- Otherwise, if both operands are 2⁶⁴−1 or less, perform 64-bit integer division.
- Otherwise, if one of the operands is 2⁶⁴−1 or less, perform 128-to-64-bit division (§ 3.1.3 Widening operations).
- Otherwise, if both operands are 2¹¹³−1 or less, and if there is hardware support for 128-bit floating-point numbers, perform floating-point division.
- Otherwise, use a software implementation of 128-bit division.
ISO C++ does not offer a mechanism through which implementations can be chosen based on optimizer knowledge. What is easy for the implementation is difficult for the user, which makes it very compelling to provide a built-in type.
Note: The pre-computation in bullet 4 must not be done in C++ because the cost of computing the reciprocal is as high as the division itself. The user must be guaranteed that the entire pre-computation of shift and factor is constant-folded, and this is generally impossible because optimization passes are finite.
Note: Historically, floating-point division in hardware was used to implement integer division. The x87 circuitry for dividing 80-bit floating-point numbers could be repurposed for 64-bit integer division. This strategy is still many times faster than software division. Intel desktop processors have received dedicated integer dividers starting with Cannon Lake.
3.3. Other programming languages
Among modern general purpose programming languages, C++ is somewhat of an outlier because it provides no standard support for arithmetic beyond 64 bits.
Many languages provide a standard infinite-precision type, including
Java (BigInteger), Python (int), JavaScript (BigInt), Go (big.Int), C# (BigInteger),
Ruby (Integer), and Haskell (Integer).
In some cases, special support for 128-bit integers is also provided or proposed:
- [RFC-1504] proposes i128 for Rust, and has been accepted.
- [SE-0425] proposes Int128 for Swift, and is currently under review.
- [Go-9455] proposes int128 for Go, and has popular support (218 👍, 1 👎).
- .NET (C#) provides an Int128 struct.
4. Impact on the standard
First and foremost, this proposal mandates the following integer types in <cstdint>:
using int_least128_t  = /* signed integer type */;
using uint_least128_t = /* unsigned integer type */;
using int_fast128_t   = /* signed integer type */;
using uint_fast128_t  = /* unsigned integer type */;
using int128_t        = /* signed integer type */;   // optional
using uint128_t       = /* unsigned integer type */; // optional

// TODO: define corresponding macros/specializations in <cinttypes>, <climits>, <limits>, ...
This change in itself is almost no change at all.
An implementation can already provide these aliases
while complying with the C++11 standard.
Challenges only arise when considering the impact of these new types on the rest of the
standard library, and possibly C compatibility.
Note: A compliant libstdc++ implementation could define all aliases as __int128.
4.1. C Compatibility
This proposal makes the assumption that C++26 will be based on C23. Any attempt of standardizing 128-bit integers must also keep possible compatibility with the C standard in mind.
4.1.1. intmax_t and uintmax_t
In particular, intmax_t has historically prevented implementations from providing
integer types wider than long long
without breaking ABI compatibility.
A wider integer type would change the width of intmax_t.
C23 has relaxed the definition of intmax_t. [N3047], 7.22.1.5 [Greatest-width integer types] currently defines intmax_t as follows:
The following type designates a signed integer type, other than a bit-precise integer type, capable of representing any value of any signed integer type with the possible exceptions of signed bit-precise integer types and of signed extended integer types that are wider than long long and that are referred by the type definition for an exact width integer type: intmax_t
For int_least128_t to not be intmax_t,
there must exist an int128_t alias for the same type.
GCC already provides an __int128 type which satisfies the padding-free requirement and could
be exposed as int128_t.
In conclusion, it is possible to provide an int_least128_t alias with equivalent
semantics in C and C++, and with no ABI break.
4.1.2. int_least128_t in C
This proposal does not force int_least128_t to exist in C.
In principle, C++ compilers can disable support for the type in C mode, so that there is
effectively no impact.
However, this would be somewhat undesirable because there would be no mandatory interoperable type.
C users would use __int128 or _BitInt(128)
and C++ users would use std::int_least128_t,
which would only be the same type by coincidence.
For the sake of QoI, implementations should expose the corresponding alias in C as well
(which they are allowed to).
To make the type mandatory in both languages, cooperation from WG14 is needed.
4.1.3. C library impact
A C++ implementation is required to provide compatibility headers as per [support.c.headers] which have equivalent semantics to the headers in C, with some details altered. The affected candidates are those listed in Table 40: C headers [tab:c.headers].
Header | Impact of extended integers |
---|---|
| Define macro constants |
| Define macro constants |
| Define type aliases |
| Define type aliases |
| Optionally support 128-bit printf/scanf |
There is no impact on other C headers.
Most of the issues are trivial and have no runtime C library impact.
The only thing worth noting is that 128-bit support from printf/scanf would be
required (§ 8.2.17 <cstdio> for implementation experience).
Note: This support is made optional, so that C++ implementations are able to keep using
the system’s C library.
Otherwise, the C++ implementation could only guarantee 128-bit printf support
if it was part of the C++ runtime library.
Note: If, additionally, a C implementation wanted to support int_least128_t,
it would need to add extended integer support in a few other places.
For example, <tgmath.h> requires type-generic functions to support all extended integers.
4.2. Impact on the core language
The proposal makes no changes to the core language because existing semantics of extended integers are sufficient (see § 7.7 Should extended integer semantics be changed? for discussion). See also § 7.8 Do we need new user-defined literals?.
4.2.1. Note on surprising semantics
It is worth noting that the existence of 128-bit extended integers leads to some oddities:
- The result of the sizeof operator can be a 128-bit integer.
- The underlying type of enumerations can be a 128-bit integer.
- Previously ill-formed integer literals could now be of 128-bit integer type.
- The conditions in #if preprocessing directives are evaluated as if operands had the same representation as intmax_t or uintmax_t, which means that 128-bit integers cannot be used in this context, or the values would be truncated.
However, none of this is a new issue introduced by this proposal. Any compliant implementation could already have produced this behavior, assuming it supported 128-bit integers as an optional extended integer type.
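Some of these oddities are observable today with the __int128 extension (a sketch, assuming GCC or Clang in a mode that permits the extension):

```cpp
// An enumeration with a 128-bit underlying type, and sizeof yielding 16.
// __int128 (GCC/Clang extension) stands in for a 128-bit extended integer.
enum class u128_enum : unsigned __int128 {
    max = ~static_cast<unsigned __int128>(0) // value unrepresentable in long long
};

static_assert(sizeof(u128_enum) == 16);          // sizeof involves a 128-bit type
static_assert(sizeof(unsigned __int128) * 8 == 128);
```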
Note: The fact that the preprocessor doesn’t operate on the widest integer type,
but on [u]intmax_t, will need to be addressed.
However, this is a general problem with rebasing on C23 and not within the scope
of this proposal.
4.3. Impact on the library
Find below a summary of issues that arise from the introduction of 128-bit integers in the
C++ standard library.
One common issue is that aliases such as size_type and difference_type within
containers, iterators, and other types can be a 128-bit integer.
The same applies to std::size_t, std::ptrdiff_t, and similar aliases.
The proposal does not force library maintainers to re-define any of these aliases; it’s just a possibility. Whether to define them as such is a QoI issue in general, and won’t be discussed further.
4.3.1. Language support library
Issue: std::div may need an overload for 128-bit integers.
Action: ✔️ None because we don’t support it (see § 7.9 Why no std::div?).
Issue: std::to_integer may need 128-bit support.
Action: ✔️ None
(see § 8.2.1 std::to_integer for implementation experience).
Issue: <version> needs a 128-bit integer feature-testing macro.
Action: ⚠️ Add macros (see § 9.1 Header <version> for wording).
Issue: std::numeric_limits needs a specialization for 128-bit integers.
Action: ✔️ None.
Issue: <climits> needs additional constants for 128-bit integers.
Action: ✔️ None.
Issue: <cstdint> needs to explicitly require support for 128-bit integers
in its synopsis.
Action: ⚠️ Define aliases (see § 9.2 Header <cstdint> for wording).
Issue: <inttypes.h> needs to support 128-bit integers only optionally.
Action: ⚠️ Make support optional (see § 9.3 Header <inttypes.h> for wording).
4.3.2. Metaprogramming library
Issue: std::is_integral needs to support 128-bit integers.
Action: ✔️ None
(see § 8.2.2 <type_traits> for implementation experience).
Issue: std::make_signed and std::make_unsigned require 128-bit support.
Action: ✔️ None
(see § 8.2.2 <type_traits> for implementation experience).
Issue: std::ratio currently accepts non-type template arguments of type intmax_t.
intmax_t is no longer the widest integer type, and changing the type of the NTTPs to
a 128-bit integer would be an ABI break because the type of a template argument participates
in name mangling.
Action: ✔️ None
(see § 7.10 What about the std::ratio dilemma? for discussion).
4.3.3. General utilities library
Issue: Integer comparison functions (std::cmp_equal et al.) require 128-bit support.
Action: ✔️ None
(see § 8.2.3 std::cmp_xxx for implementation experience).
Issue: std::hash needs to support 128-bit integers.
Action: ✔️ None.
Issue: std::bitset could receive an additional constructor taking a 128-bit integer.
Action: ⚠️ Add such a constructor
(see § 9.4 Class template bitset for wording and § 6.4 std::bitset constructor semantic changes for discussion).
Issue: std::bitset could receive an additional conversion function, similar to to_ullong.
Action: ⚠️ Add such a function
(see § 9.4 Class template bitset for wording and § 8.2.4 <bitset> for implementation experience).
Issue: std::to_chars and std::from_chars need to support 128-bit integers.
Action: ✔️ None
(see § 8.2.5 <charconv> for implementation experience).
Issue: std::format needs to support 128-bit integers.
Action: ✔️ None
(see § 8.2.6 <format> for implementation experience).
Issue:
might need 128-bit integer support.
Action: ✔️ None, it doesn’t
(see § 8.2.6 <format> for implementation experience).
Issue:
might need support for 128-bit integers.
Action: ✔️ None, it doesn’t
(see § 8.2.6 <format> for implementation experience).
Issue: The functions in <bit> need to support 128-bit integers.
Action: ✔️ None
(see § 8.2.7 <bit> for implementation experience).
Issue: std::to_string could support 128-bit types.
Action: ⚠️ Add overloads
(see § 9.5 Numeric conversions for wording and § 8.2.8 std::to_string for implementation experience).
4.3.4. Containers library
Issue: The extents and index types of std::mdspan could be 128-bit integers.
This is also the case for type aliases of other containers.
The relevant exposition-only helpers now also include
128-bit integers.
Action: ✔️ None.
All these issues are either QoI or don’t impact existing implementations substantially.
4.3.5. Iterators library
Issue: The exposition-only is-integer-like concept now also includes
128-bit integers.
Generally, 128-bit integers would be a valid difference type, and an implementation needs to
consider this when defining concepts that use integers in any way.
Action: ✔️ None.
Note: As long as is-integer-like (and by proxy, is-signed-integer-like) is correct, the existing wording
should be unaffected.
4.3.6. Ranges library
Issue: The difference type of std::ranges::iota_view is required to be a 128-bit integer if
the value type is not 128-bit, and such a type exists.
This is not the case in libc++ and MSVC STL at this time, which use narrower types.
Action: ⚠️ Relax wording to prevent breaking ABI
(see § 9.6 Iota view for wording and § 5.2 std::ranges::iota_view ABI issue for discussion).
Issue: std::ranges::cartesian_product_view::size may now return a 128-bit integer.
The standard recommends using a type which is sufficiently wide to store the product
of the sizes of the underlying ranges.
A similar issue arises for other range adaptors.
Action: ✔️ None.
Note: The choice of integer type used to be (and still is) implementation-defined.
4.3.7. Algorithms library
Issue: std::gcd, std::lcm, and std::midpoint need to support 128-bit integers.
Action: ✔️ None
(see § 8.2.9 std::gcd, std::lcm for implementation experience).
Issue: Saturating arithmetic functions and std::saturate_cast need to support 128-bit integers.
Action: ✔️ None
(see § 8.2.12 std::xxx_sat for implementation experience).
4.3.8. Numerics library
Issue: Various random number generators and
need to support
128-bit types.
Action: ✔️ None
(see § 8.2.10 <random> for implementation experience).
Issue:
needs to support
.
Action: ✔️ None
(see § 8.2.10 <random> for implementation experience).
Issue:
needs to support 128-bit integers.
Action: ✔️ None (see § 8.2.15 std::valarray for implementation experience).
Issue: For most
functions, an additional overload taking 128-bit integers would
need to be defined.
Action: ✔️ None
(see § 8.2.14 <cmath> for implementation experience).
Issue:
could receive an additional 128-bit overload.
Action: ⚠️ Add an overload
(see § 9.7 Absolute values for wording and § 8.2.13 std::abs for implementation experience).
Issue: The
library needs 128-bit support.
Action: ✔️ None
(see § 8.2.16 <linalg> for implementation experience).
4.3.9. Time library
Issue: Significant portions of
use
, which has
template
parameters.
Action: ✔️ None
(see § 7.10 What about the std::ratio dilemma? for discussion).
4.3.10. Localization library
Issue:
and
could use
overloads for
.
Action: ✔️ None.
Note: This would be an ABI break if changes were made.
relies on virtual member functions, and modifying the vtable breaks ABI.
and
can be used for locale-dependent formatting without breaking ABI.
and
provide locale-independent alternatives.
4.3.11. Input/output library
Issue:
and
don’t support 128-bit integers.
By proxy, extraction with
and insertion with
would not work.
Action: ✔️ None.
Note: The standard doesn’t require these to work for all integer types, only for standard integer types.
Any change would be an ABI break, so these facilities could be left untouched.
Unfortunately, the user won’t be able to
; however,
the language provides sufficient alternatives
(
,
,
,
).
Issue:
and
need to support 128-bit integers.
Action: ✔️ None
(see § 8.2.17 <cstdio> for implementation experience).
Issue:
needs to include the wording changes for
.
Action: ⚠️ Include changes
(see § 9.8 Header <cinttypes> for wording).
4.3.12. Concurrency support library
Issue:
needs to support
.
Action: ✔️ No impact on the standard
(see § 8.2.18 std::atomic for implementation experience).
Issue: There should be additional aliases
et al.
Action: ⚠️ Define aliases
(see § 9.9 Atomic operations for wording).
5. Impact on implementations
5.1. Estimated implementation effort
The following table summarizes the affected standard library parts and the estimated effort required to implement the proposed changes.
Affected library part | Work definitely required | Implementation experience |
---|---|---|
| ✔️ no | § 8.2.1 std::to_integer |
| § 9.1 Header <version> | |
| add specializations | |
| add macro constants | |
| ✔️ no | § 8.2.2 <type_traits> |
| ✔️ no | § 8.2.2 <type_traits> |
| ✔️ no | § 8.2.3 std::cmp_xxx |
| ✔️ no | |
| § 9.4 Class template bitset | § 8.2.4 <bitset> |
| ✔️ no | § 8.2.5 <charconv> |
| ✔️ no | § 8.2.6 <format> |
| support 128-bit | § 8.2.7 <bit> |
| § 9.5 Numeric conversions | § 8.2.8 std::to_string |
| § 9.6 Iota view | § 5.2 std::ranges::iota_view ABI issue |
,
| ✔️ no | § 8.2.9 std::gcd, std::lcm |
| ✔️ no | § 8.2.11 std::midpoint |
| ✔️ no | § 8.2.12 std::xxx_sat |
| 256-bit LCG | § 8.2.10 <random> |
| ✔️ no | § 8.2.15 std::valarray |
overloads
| ✔️ no | § 8.2.14 <cmath> |
| § 9.7 Absolute values | § 8.2.13 std::abs |
| ✔️ no | § 8.2.16 <linalg> |
,
| ✔️ no | § 8.2.17 <cstdio> |
| § 9.3 Header <inttypes.h> | |
| § 9.8 Header <cinttypes> | |
| § 9.9 Atomic operations | § 8.2.18 std::atomic |
When deciding "Work definitely required", this paper does not consider menial changes like
relaxing
and such,
which may be present in functions such as
.
Also, if at least one standard library implementation provides these features, it is assumed that they can be adapted by other libraries with relative ease.
5.2. std :: ranges :: iota_view
ABI issue
libstdc++ defines
for
to be
.
Since a
alias would likely be defined as
, there is no ABI impact.
Other libraries are not so fortunate.
5.2.1. Affected implementations
By contrast, the
for
in libc++
is
.
The MSVC STL uses a class type
.
Even trivially copyable classes aren’t passed in registers in the Microsoft x86_64 ABI,
so this type is passed on the stack.
Re-defining this to be an integer type would break ABI, assuming that a Microsoft
would be passed via registers.
5.2.2. Cause of ABI break
The ABI break stems from the fact that
for
is defined to be:
a signed integer type of width greater than the width of W
if such a type exists.
Currently, no such type exists, but if
did exist, it would no longer be valid
to use a class type or
as a
.
5.2.3. Possible solution
See § 9.6 Iota view for a proposed solution which resolves this issue without requiring action from implementors.
5.2.4. What about iota_view < std :: int_least128_t >
?
Besides the option to provide a ≥ 129-bit
,
implementations can also define
to be a 128-bit integer.
Neither the Cpp17RandomAccessIterator requirement nor the
concept
require the difference between two iterators to be representable using their
.
Therefore, this is a reasonable strategy which is easy to implement.
Of course, it has the adverse effect that
is possibly undefined behavior for two iterators.
In practice, will the user ever need a 128-bit
, and if so, do they need to represent
such extreme differences?
These are quality-of-implementation issues which maintainers will need to consider.
libstdc++ already supports
, where the
is
.
Besides QoI questions, this proposal does not introduce any new issues.
6. Impact on existing code
With no core language changes and only additional standard library features, the impact on existing code should be minimal.
6.1. Possible semantic changes
However, this idea is put into question when it comes to integer literals.
auto x = 18446744073709551615; // 2^64 - 1
If the widest signed integer type is a 64-bit type, this code is ill-formed. Every compiler handles this differently:
-
clang declares
asx
and emits a warning (✔️ compliant).unsigned long long -
GCC declares
asx
and emits a misleading warning (✔️ compliant).__int128 -
MSVC declares
asx
and emits no warning (❌ non-compliant).unsigned long long
The example demonstrates that in practice, introducing a 128-bit integer may
impact some existing code.
To comply with the C++ standard, the type of
would have to be
assuming that
cannot represent the value.
Hexadecimal literals are not affected in the same way because they
are required to be of type
if
cannot represent the value.
The introduction of a 128-bit integer type would not alter the signedness of existing literals.
6.2. Impact on overload sets
Besides templates, a popular technique for covering multiple integer types is to create an "exhaustive" overload set like:
// support "all" signed integers (anything less than int is promoted)
void foo(int);
void foo(long);
void foo(long long);
I’m putting "exhaustive" in quotes because such code does not cover extended integer types, which can exist. Only the implementation knows the whole set of integer types and can ensure completeness.
Note: Creating an overload set from
,
,
,
and
is not possible because it only covers four out of five standard integer types,
making some calls ambiguous.
While creating sets like these outside of language implementations is not ideal,
the proposal can minimize the impact by making
a distinct type from
standard integer types.
If std::int_least128_t is long long, the following code is ill-formed:
void foo(int) { }
void foo(long) { }
void foo(long long) { }
void foo(std::int_least128_t) { } // re-definition of foo(long long)
However, no existing implementation defines long long as 128-bit, so no code is really affected.
Note: A workaround is to write
.
However, this solution is not obvious, and should not be necessary.
6.2.1. Proposed solution
There should exist a natural and universally correct way to extend such overload sets,
so that the effort of "upgrading" to 128-bit is minimal.
Therefore
should be distinct.
Guaranteeing that
is distinct means that even if
is a 128-bit type, it won’t be chosen by this alias.
This breaks the conventions of
and may be surprising,
but no implementation with
aliases beyond
exists,
and no implementation where
is 128-bit exists.
No existing code is affected; this is a purely academic problem.
6.3. Possible assumption violations
There is obviously a substantial amount of code which assumes that integers are no wider
than 64 bits.
There is also a substantial amount of code which assumes that
is
the widest integer type, and this assumption would be broken by introducing
.
The exact impact is investigated in this proposal. Assumptions about hardware or integer width limitations cannot hold back language development. C would be stuck with 32-bit types if that had ever been a convincing rationale. Also, the introduction of a 128-bit type does not break existing code unless the user chooses to use it.
6.4. std :: bitset
constructor semantic changes
The only overload which accepts integers is
.
Ideally, we would like to construct bitsets from wider integer types, if available. My proposed solution changes the semantics of this constructor (see § 9.4 Class template bitset for wording).
The existing constructor is problematic for multiple reasons:
-
If extended by a
overload, a callstd :: int_least128_t
would become ambiguous.bitset < N > ( 0 ) -
When called with negative numbers, a sign extension only takes place up to the width of
. Beyond that, the bits are zero-filled.unsigned long long
#include <bitset>

constexpr std::bitset<128> bits(-1);
static_assert(bits.count() == 64);
The original behavior is very difficult to preserve if we add more overloads.
If we added an
overload, then
would be ambiguous.
Therefore, we must at least have an overload for all integers with a conversion rank
of
or greater.
However, if so,
under the current definition would result in a
that has only 32 one-bits (assuming 32-bit
).
We could preserve the current behavior exactly if sign-extension occurred up to the width
of
; beyond that, zero-extension would be used.
This is not proposed because the design makes no sense outside of its historical context.
6.4.1. Proposed solution
Therefore, I propose to perform sign-extension for the full size of the
.
In other words,
would always be a bitset where every bit is set.
This almost certainly matches the intent of the user.
A GitHub code search for
finds 30 uses of constructing a bitset from a negative literal.
Of the ones which use
, all uses are of the form
-
wherestd :: bitset < N > ( -1 )
is less than 64, orN -
.std :: bitset < N > ( static_cast < T > ( -1 ))
None of these existing uses would be affected.
Note: See § 9.4 Class template bitset for wording.
7. Design considerations
The goal of this proposal is to obtain a mandatory 128-bit type with strong library support.
A
alias is the only option that does not involve any changes
to the core language.
Therefore, it is the obvious design choice for this proposal.
Note: Unlike the existing
and
aliases, this type is distinct.
See § 6.2 Impact on overload sets for rationale, and § 9.2 Header <cstdint> for wording.
Besides the current approach, there are a few alternatives which have been considered:
7.1. Why no standard integer type?
Why standardize a
type alias but no standard integer type? Essentially, why no
std :: uint_least128_t ?
unsigned long long long
Firstly, naming is a problem here.
A standard integer type would likely warrant the ability to name it by keyword, and an
ever-increasing sequence of
s isn’t an attractive solution.
Even with a concise keyword such as
, it is unclear what advantage such a keyword
would have over a type alias, other than saving one
directive.
Secondly, it is useful to keep
a second-class citizen by not making it a
standard integer type.
For example, in the formatting library, a format string can
specify a dynamic width for an argument, which must be a standard integer.
A width that cannot be represented by a 64-bit number is unreasonable,
so it makes sense to limit support to standard integers.
Thirdly, as already stated in § 4.1 C Compatibility, C23’s
must be the
widest standard integer type.
To not break ABI and be C23-compatible,
must be
an extended integer type.
7.2. Why no mandatory std :: int128_t
type?
Mandating any exact
inadvertently restricts the byte width
because exact-width types cannot have any padding.
implies that the width of a byte is a power of two ≤ 128,
and historically, C++ has not restricted implementations to a specific byte size.
This decrease in portability also has no good rationale.
If
is mandatory and an implementation is able to define it without padding,
then
is effectively mandatory.
Hypothetically, a malicious implementation could define
to be a 1000-bit
integer with 872 padding bits, even if it was able to define a padding-free 128-bit integer.
However, malicious implementations have never been a strong argument to guide design.
7.3. Why no std :: int_least256_t
type?
256-bit integers are also useful, and one could use many of the arguments in favor of 128-bit integers to propose them as well. However, there are a few strong reasons against including them in this proposal:
-
The wider the bit sizes, the fewer the use cases are. For example, 128 bits are sufficient for high-precision clocks and most financial applications.
-
There is tremendously less hardware support for 256-bit integers. x86 has instructions to perform a 64-to-128-bit multiplication, but no such 128-to-256-bit instruction exists.
-
There are fewer existing implementations of 256-bit integers.
-
Many use cases of 256-bit integers are simply bit manipulation. The wider the type, the less common arithmetic becomes. Bitwise operations (
,&
,|
) are best done through vector registers, but the ABI for~
is to use general purpose registers or the stack, and_BitInt ( 256 )
would likely be the same. Since there is strong hardware support for § 3.1.3 Widening operations which is based on general purpose registers, the choice for 128-bit is easy; not so much for 256-bit.std :: int_least256_t -
is a "support type" forstd :: uint128_t
which simplifies the implementation of manystd :: float128_t
functions. There exists no hardware with binary256 support to motivate a< cmath >
support type.std :: uint256_t -
Longer integer literals are a major reason to get a fundamental type. 256-bit hexadecimal literals are up to 64 digits long, which degrades code quality too much. With indentation and digit separators, using the full width can exceed the auto-formatter’s column limit.
It is also unclear whether there should ever be a mandatory 256-bit extended integer type, or if support should be provided through 256-bit bit-precise integers. Overall, this proposal is more focused if it includes only 128-bit.
Nevertheless, many of the changes in § 9 Proposed wording pave the way for a future
or even
.
There would be no wording impact other than defining the necessary aliases and macros.
7.4. Why no class type?
Instead of an extended integer type, it would also be possible to provide the user with a
128-bit class type.
This could even be done through a general
class.
However, there are compelling reasons against doing so:
-
is sufficiently common to where the added cost of class type (overload resolution for operators, function call evaluation in constant expressions, etc.) would be a burden to the user.std :: int_least128_t
is the "work horse" of any multi-precision library which uses 64-to-128-bit widening operations (see § 3.1.3.2 Multi-precision operations). This cost could add up quickly.__int128 -
A fundamental type also comes with integer literals, and up to 128-bit, there are still reasonable use cases for integer literals. § 3 Motivation and scope shows multiple examples where 128-bit literals were used. Besides these use cases, it would be nice to represent an IPv6 address using a hexadecimal literal (which is the typical representation of these addresses).
-
There are 128-bit architectures where the general purpose register size is 128-bit. For example, the RV128I variant of RISC-V is such an architecture. To be fair, there exists no implementation of RV128I yet. Still, it would be unusual not to have a fundamental type that represents the general purpose register.
In essence, 128-bit is still "special" enough to deserve a fundamental type. Beyond 128-bit, even hexadecimal literals become hard to read due to their sheer length, and we are unlikely to find any ISA with a 256-bit register size, even counting theoretical ISAs like RV128I.
7.5. Why no bit-precise integers?
Instead of putting work into 128-bit integers, it would also be possible to integrate
bit-precise integers (C’s
type, proposed in [N2763]) into the C++ standard.
This would be a sufficient alternative if
were a fundamental type,
had integer literal support, and had strong library support.
However, there are numerous reasons why
is not the right path for this proposal, described below.
In short, this proposal argues that
does not bring sufficient value to C++ relative to its
impact, and would better be exposed via a class type
than a fundamental type.
7.5.1. _BitInt
has no existing C++ work
After enquiry in the std-proposals mailing list, no one has expressed that they are working on
, nor has anyone expressed interest on beginning work on this.
Right now,
is a purely hypothetical feature.
7.5.2. _BitInt
has less motivation in C++
A significant part of the rationale for [N2709] was that only
can utilize hardware
resources optimally on hardware such as FPGAs that have a native 2031-bit type.
C++ is much less ambitious in its goal of supporting all hardware,
with changes such as C++20 effectively mandating two’s complement signed integers.
C23 supporting
is not rationale for a C++ fundamental type in itself.
Therefore, a limited solution such as focusing on 128-bit is not unreasonable.
7.5.3. _BitInt
should be exposed as a class type
By default, all new language features are library features. If it’s possible to express N-bit integers through a class, then this is the go-to solution.
In C++, it is possible to expose the compiler’s
functionality as follows:
inline constexpr size_t big_int_max_width = BITINT_MAXWIDTH;

template <size_t N>
    requires (N <= BITINT_MAXWIDTH)
struct big_int {
    _BitInt(N) _M_value;
    // TODO: constructors, operator overloads, etc.
};
// TODO: define big_uint similarly

// compatibility macros:
#ifdef __cplusplus
#define _BigInt(...) ::std::big_int<__VA_ARGS__>
#define _BigUint(...) ::std::big_uint<__VA_ARGS__>
#else
#define _BigInt(...) _BitInt(__VA_ARGS__)
#define _BigUint(...) unsigned _BitInt(__VA_ARGS__)
#endif
There is precedent for this design:
-
is compatible with C’sstd :: complex < T >
._Complex T -
is compatible with C’sstd :: atomic < T >
._Atomic ( T )
Unless an extremely strong case for a new category of integer types in the core language can be made, this is the obvious solution.
Of course, a class type is more costly in terms of compilation slowdown as well as
performance on debug builds and in constant evaluations.
This is acceptable because
is not a replacement for the existing integers,
but an infrequently used special-purpose type which only comes into play when no other size
would suffice.
7.5.4. _BitInt
is not a replacement for standard integers
In C,
is a second-class citizen.
The overwhelming majority of library functions take
,
, and other standard integer types.
Conversion rules are also biased in favor of standard integers.
For example,
is of type
in C, if
is 32-bit.
Standard integers are essentially engraved into the language and have special status, both in C
and in C++.
Billions of lines of code use these integers, and this is never going to change.
Virtually all learning resources in existence teach standard integers or
aliases,
which are standard integers in all implementations.
Even if
were a replacement for the existing integers, the transition process would take a century.
It is better to think of it as an extension of the existing integers.
7.5.5. _BitInt
does not guarantee 128-bit support
The whole point of this proposal is to guarantee developers a 128-bit type.
However,
is only valid for
, where
is not guaranteed to be more than the width of
.
7.5.6. _BitInt
requires more library effort
This proposal only mandates
, not any specific width.
Therefore, implementers have the luxury of relying on all integers being padding-free
and all integers having a width which is 2^N times the platform’s byte size.
These assumptions greatly simplify the implementation of various algorithms.
From personal experience, implementing any functions in
is tremendously easier
when it is guaranteed that integers have a 2^N width.
Full
library support requires tremendously greater library effort.
It is unclear which parts of the standard library could bear this burden.
7.5.7. _BitInt
breaks more existing code
also breaks assumptions in existing, generic code.
C++ users have enjoyed the luxury of padding-free integers on conventional implementations
for a very long time, and some code depends on it.
It may depend through anti-patterns like using
for comparison of
s storing integers,
or more justified uses like
.
inevitably breaks assumptions about padding and size, which is a great challenge
to both the implementation and C++ users.
_BitInt ( 30 )
has padding bits.
template <std::integral T>
auto to_byte_array(const T& x) {
    return std::bit_cast<std::array<std::byte, sizeof(T)>>(x);
}

int main() {
    _BitInt(30) x = ...;
    write_buffer_to_file(file, to_byte_array(x));
}
Assuming a 4-byte array is returned and a byte is 8 bits, at least one of these bytes would have an indeterminate value. Reading this indeterminate value to store it in a file is undefined behavior.
Note:
produces indeterminate values in the result object
or its subobjects for any corresponding padding bits in
.
in the example has two padding bits, so the corresponding byte(s)
in the result would be indeterminate (see also 22.15.3 [bit.cast]).
While the assumption that all integers are padding-free is not universally correct,
C++ users have enjoyed this guarantee for decades, and an unknown amount of code depends on it.
If
was just another integral type which satisfies
,
it could silently break existing code like in the example.
Note:
does not have the same problem because the implementation can define
it as a type with more than 128 bits which has no padding, if need be.
7.5.8. _BitInt
has teachability issues
has different promotion and conversion rules.
These rules are not necessarily obvious, especially when bit-precise integers interact with
other types.
For example,
is of type
if
is of type
or narrower, and
is 32-bit.
Not only does the user have to learn the existing integer conversion and promotion rules,
the user also has to learn this new
system and how it interacts with the old system.
This also complicates overload resolution:
void foo(int);
void foo(_BitInt(32));

int main() {
    _BitInt(32) x = 0;
    foo(x); // calls which?
}
If
dominates in implicit conversions, should it also dominate in overload resolution so that
is selected?
The answer is not entirely clear.
void foo(int);
void foo(_BitInt(32));

int main() {
    foo(1'000'000'000); // calls which?
}
On a target where
is 16-bit, the integer literal is of type
.
It is possible to convert it to
in a narrowing conversion, and possible
to convert it to
losslessly.
-
Should overload resolution prefer the lossless conversion to
? After all, lossless is better._BitInt ( 32 ) -
Should the call be ambiguous? After all, that’s what normally happens when there are two viable candidates with equally long conversion sequences.
-
Should
be called? After all, standard integers are normally dominant, andfoo ( int )
could be of type1 ’000 ’000 ’000
if onlyint
was 32-bit.int
There is a compelling case for each of these options, and no matter what design choice is made, the complexity of the language increases significantly.
7.5.9. _BitInt
in C may be too permissive
Many C++ users have lamented that integers are too permissive.
As is tradition, C has not restricted the behavior of
substantially:
-
Signed/unsigned mixed comparisons are permitted.
-
Signed/unsigned mixed arithmetic is permitted.
-
Implicit truncating conversion is permitted.
-
Implicit signed/unsigned conversion is permitted.
One substantial difference is that
is not promoted to
. Other than that,
the semantics are the same as those of the old integers.
If
behaves almost the same as standard integers and brings all of this bug-prone
behavior with it, how can one justify adding it as a new fundamental type?
Of course, C++ could decide to make these semantics more restrictive for its
type,
similar to how implicit casts from
are only permitted in C, but not in C++.
However, this would complicate writing C/C++ interoperable code
and make the language even less teachable because
semantics would become language-specific.
Furthermore, the more distinct the
semantics become, the less of a drop-in replacement for
it becomes:
std::uint64_t mul_hi(std::uint64_t x, std::uint64_t y) {
    return u128(x) * u128(y) >> 64;
}
If
is extended integer type, this code is well-formed.
If
is a C-style
, this code is well-formed.
If
is a more restrictive C++
which forbids narrowing,
this code is ill-formed.
In short, the dilemma is as follows:
-
If
is as permissive as existing integers, it is harder to justify its addition to the language because it innovates too little._BitInt -
If
is more restrictive, its impact on existing code and on language teachability is greater._BitInt
There is no obvious path here, only a bottomless potential for discussion.
By comparison,
has exactly the same rules as existing integers,
which
follows.
The user can use it as a drop-in replacement:
#ifdef __SIZEOF_INT128__
using i128 = __int128; // or, once standardized: std::int128_t
#else
struct i128 { /* ... */ };
#endif
7.5.10. _BitInt
false dichotomy
The
vs.
argument is a false dichotomy.
Bit-precise integers essentially create a parallel, alternative type system with
different rules for promotion, implicit conversion, and possibly overload resolution.
Defining any
/
type alias as a bit-precise integer would be hugely surprising
to language users, who have certain expectations towards aliases in these headers.
These expectations have been formed over the past 30 years.
Therefore, even if all issues regarding
mentioned in this paper were resolved
and
becomes a fundamental type,
it would be reasonable to maintain the "legacy" type system in parallel.
7.6. Why not make it optional?
Instead of making
entirely mandatory, it would also be possible to
make it an optional type, or to make it mandatory only on hosted implementations.
First and foremost, making the type optional has severe consequences. Library authors still have to write twice the code: one version with 128-bit support, and one without. To C++ users, the ability to write portable code is more valuable than the underlying implementation effort or potential performance issues. I address these two concerns below:
7.6.1. Implementation effort is not too high
The criticism is:
On freestanding/embedded platforms, the implementation effort of
is too great.
std :: int_least128_t
While this concern is valid, C23 requires arbitrary-precision arithmetic through
anyway
and GCC and clang support
already (see § 8.1.4 _BitInt(128) for support).
Assuming that vendors implement
for the purpose of C23 compatibility
and expose
as a C++ extension, they can simply provide:
#ifdef __cplusplus
using int_least128_t = _BitInt(128);
#endif
Note: Bit-precise integers have slightly different semantics than extended integers.
However, these differences don’t matter if
is the widest integer,
making it valid to use as
.
The remaining impact is limited to the standard library.
For the most part, it is simple to generalize library algorithms to an arbitrary bit size.
It is especially easy when the implementation can ensure that all integers are padding-free
and have a size that is a 2^N multiple of the byte size.
Only
is mandatory (not the exact-width types),
so the implementation can ensure it.
7.6.2. Software emulation is acceptable
The criticism is:
should not be mandatory if software emulation degrades performance.
std :: int_least128_t
There is also merit to this concern. A mandatory type may give the user a false sense of hardware support which simply doesn’t exist, especially on 32-bit or even 8-bit hardware.
However, this problem is innate to standard integers as well.
If a user is compiling for a 32-bit architecture, a 64-bit
will have to be
software-emulated, and 64-bit integer division can have dramatic cost.
Why should a 64-bit
be mandatory on an 8-bit architecture?
The answer is: because it’s useful to rely on
so we can write portable code,
even if we try to avoid the type for the sake of performance.
In the end, it’s the responsibility of the user to be vaguely aware of hardware capabilities
and not use integer types that are poorly supported.
If the user wants to perform a 128-bit integer division on an 8-bit machine,
the language shouldn’t artificially restrict them from doing so.
The same principle applies to
,
, C23’s
, etc.
7.7. Should extended integer semantics be changed?
An interesting question is whether extended integer semantics make sense in the first place, or require some form of changes. The relevant standard section is 6.8.6 [conv.rank].
In summary:
Relevant Rule | Example |
---|---|
Unsigned integers have the same rank as signed integers of equal width. |
|
Wider integers have greater rank. |
|
Standard integers of equal width have greater rank. |
if is 128-bit
|
Extended integers with the same width are ranked implementation-defined. |
if these types are distinct |
7.4 [expr.arith.conv] decides that for integers of equal rank, the common type is unsigned.
For example,
is of type
.
I believe these semantics to be sufficiently clear; they don’t require any change.
Note: The rules for determining the better overload are based on implicit conversion sequences. If the rules for conversions are unchanged, by proxy, overload resolution remains unchanged.
7.8. Do we need new user-defined literals?
No, not necessarily. If the user desperately wants to define a user-defined literal which accepts 128-bit numeric values and beyond, they can write:
constexpr int operator""_zero(const char*) { return 0; }

int x = 100000000000000000000000000000000000000000000000000000000000000000_zero;
Obviously, this forces the user into parsing the input string at compile-time if they want
to obtain a numeric value.
A literal operator of the form
circumvents this problem,
but
is typically not 128-bit.
Therefore, it could be argued that these rules should be expanded to save the user the trouble
of parsing.
However, this is not proposed because it lacks motivation. User-defined literals have diminishing returns the longer they are:
-
The difference between
and0 s
is huge: 800%.chrono :: seconds { 0 } -
The difference between
and1000000000 s
is not: ~150%.chrono :: seconds { 1000000000 }
The shortest user-defined literal that does not fit into
is
, which is 20 characters long.
At this length, user-defined literals never provide overwhelming utility.
However, if WG21 favors new forms of
for
integer types wider than
, I am open to working on this.
Note: The standard currently does not allow
for any
except
.
7.9. Why no std :: div
?
is a function which returns the quotient and remainder of an integer division in one
operation.
This proposal intentionally doesn’t extend support to 128-bit types because each overload of
returns a different type.
Namely, the current overloads for
,
, and
return
,
, and
respectively.
This scheme isn’t easy to generalize to 128-bit integers or other extended integer types.
A possible approach would be to define a class template
and re-define
the concrete types to be aliases for
etc.
However, this is arguably a breaking change because it alters what template argument deduction
is possible from these types.
Furthermore, std::div is arguably useless.
Optimizing compilers recognize separate uses of / and % and fuse them into a single division which yields both quotient and remainder, at least on platforms where this is possible.
Note: In C89, div was useful because it had a well specified rounding mode, whereas the division operator had implementation-defined rounding.
7.10. What about the std::ratio dilemma?
Assuming that C++26 is based on C23, std::ratio will be problematic because it is defined as:
template&lt;intmax_t Num, intmax_t Denom = 1&gt; class ratio { /* ... */ };
intmax_t would no longer be the widest integer type, and certain extreme ratios would become unrepresentable.
It is not possible to redefine it to have other types for template parameters because the types of template arguments participate in name mangling.
This issue is not caused by this proposal, but the introduction of a 128-bit integer first manifests it.
This proposal does not attempt to resolve it.
However, a possible path forward is to make &lt;chrono&gt; less dependent on std::ratio and allow ratio-like types instead.
8. Implementation experience
8.1. Existing 128-bit integer types
8.1.1. __int128 (GNU-like)
GCC already provides 128-bit integer types in the form of __int128 and unsigned __int128.
However, this type is not available when compiling for 32-bit targets.
Clang provides the same support.
8.1.2. __int128 (CUDA)
In NVIDIA CUDA 11.5, the NVCC compiler has added preview support for the signed and unsigned __int128 data types on platforms where the host compiler supports it.
See [NVIDIA].
8.1.3. std::_Signed128, std::_Unsigned128
The MSVC STL provides the class types std::_Signed128 and std::_Unsigned128, defined in &lt;__msvc_int128.hpp&gt;.
These types implement all arithmetic operations and integer comparisons.
They satisfy the integer-like constraint and have been added to implement [P1522R1].
int_least128_t could plausibly be defined as _Signed128.
8.1.4. _BitInt(128)
The C23 standard requires support for bit-precise integers _BitInt(N), where the maximum supported width BITINT_MAXWIDTH is only required to be at least the width of unsigned long long.
While this doesn’t strictly force support for 128-bit integers, GNU-family implementations support more than 128 bits already.
As of February 2024, the support is as follows:
Compiler | Maximum width | Targets | Languages
---|---|---|---
clang 14 | 128 | all | C & C++
clang 16 | 8388608 | all | C & C++
GCC 14 | 65535 | 64-bit only | C
MSVC 19.38 | ❌ | ❌ | ❌
Note: clang has supported _BitInt as an _ExtInt compiler extension prior to C standardization.
It is possible that given enough time, _BitInt will be supported by Microsoft as well.
Note: Microsoft Developer Community users have requested support for a 128-bit type at [MSDN].
8.2. Library implementation experience
8.2.1. std::to_integer
std::to_integer is equivalent to static_cast with constraints.
8.2.2. &lt;type_traits&gt;
Assuming that is_integral and related traits don’t simply delegate to a compiler intrinsic, implementing these traits merely requires adding two specializations, one for each of the new signed and unsigned types.
See libstdc++'s &lt;type_traits&gt;.
8.2.3. std::cmp_xxx
Libstdc++ provides a width-agnostic implementation of std::cmp_equal and the other safe comparison functions in &lt;utility&gt;.
Being able to extend to a wider type is helpful in principle (e.g. implementing a comparison between int and unsigned in terms of a comparison between long longs); however, the current implementations don’t make use of this opportunity anyway.
8.2.4. < bitset >
Note: See § 9.4 Class template bitset for proposed changes.
To implement these changes, a constructor and member function template can be defined:
bitset(integral auto);
template&lt;unsigned_integral T&gt; T to();
These are functionally equivalent to the existing unsigned long long constructor and to_ullong function respectively, just generalized.
8.2.5. < charconv >
libstdc++ already provides a width-agnostic implementation of to_chars, and a width-agnostic implementation of from_chars.
In general, it is not difficult to generalize to_chars for any width.
Stringification uses integer division, which may be a problem.
However, the divisor is constant.
Due to strength reduction optimization (see § 3.1.4 Fixed-point operations for an example),
no extreme cost is incurred no matter the width.
8.2.6. &lt;format&gt;
libstdc++ already supports formatting for __int128.
The locale-independent forms are simply implemented in terms of to_chars and are not affected by the introduction of 128-bit integers.
As explained above, to_chars implementations typically already support 128-bit integers.
The new check_dynamic_spec function is not affected.
This function only checks for the type of a dynamic width or precision, and the arguments are required to be of standard integer type.
Realistically the user will never need a 128-bit width or precision, which is why no changes are proposed.
formatter also requires no changes because the existing wording already covers extended integer and floating-point types.
Also, modifying the set of types stored within a basic_format_arg would be an avoidable ABI break.
8.2.7. &lt;bit&gt;
In [BitPermutations], I have implemented the majority of C++ bit manipulation functions for any width, i.e. in a way that is compatible with integers of any width N.
Such an extremely generalized implementation is challenging; however, merely extending support to 128-bit given a 64-bit implementation is simple.
For std::popcount, a 128-bit implementation looks as follows:
int popcount(uint128_t x)
{
    return popcount(uint64_t(x >> 64)) + popcount(uint64_t(x));
}
For std::countr_zero, a 128-bit implementation looks as follows:
int countr_zero(uint128_t x)
{
    int result = countr_zero(uint64_t(x));
    return result < 64 ? result : 64 + countr_zero(uint64_t(x >> 64));
}
All bit manipulation functions are easily constructed this way.
8.2.8. std :: to_string
Note: See § 9.5 Numeric conversions for proposed changes.
libstdc++ already implements to_string as an inline function which forwards to a width-agnostic helper.
In general, to_string simply needs to forward to to_chars or a similar function, and this is easily generalized.
8.2.9. std::gcd, std::lcm
libstdc++ provides a std::gcd implementation which uses the binary GCD algorithm.
The MSVC STL has a similar implementation.
This algorithm is easily generalized to any width.
It requires countr_zero for an efficient implementation, which is easy to implement for 128-bit integers (see § 8.2.7 &lt;bit&gt;).
libc++ uses a naive gcd implementation based on the Euclidean algorithm, which relies on integer division.
Due to the immense cost of integer division for 128-bit integers, such an implementation may need revision.
std::lcm requires no work because mathematically, lcm(a, b) · gcd(a, b) = |a · b|.
When solving for lcm(a, b), lcm(a, b) = |a| / gcd(a, b) · |b|.
The implementation effort (if any) is limited to gcd.
Note: By dividing by gcd(a, b) prior to multiplication, overflow in |a| / gcd(a, b) · |b| is avoided.
Overflow can only occur if lcm(a, b) is not representable by the result type.
8.2.10. &lt;random&gt;
std::linear_congruential_engine requires at least double-wide integers to safely perform the operation (a · x + c) mod m, where x is the LCG state.
Otherwise, the multiplication and addition could overflow.
libstdc++ solves this issue by performing all operations using unsigned __int128 if available, and otherwise:
static_assert(__which &lt; 0, /* needs to be dependent */
    "sorry, would be too much trouble for a slow result");
Introducing 128-bit integers would force implementations to also provide 256-bit operations solely for the purpose of linear_congruential_engine.
This can be considered reasonable because C23 requires implementations to provide arbitrary-precision arithmetic anyway, and both GCC and clang already implement _BitInt for widths greater than 128 (see § 8.1.4 _BitInt(128) for details on support).
8.2.11. std :: midpoint
Libstdc++ has a std::midpoint implementation which is width-agnostic.
8.2.12. std :: xxx_sat
libstdc++ provides a width-agnostic implementation for all saturating arithmetic functions in &lt;numeric&gt;.
Saturating arithmetic is generally done through compiler intrinsics such as __builtin_add_overflow.
These are already supported by GCC and Clang.
A software implementation of overflow detection may be very tedious as explained in [P0543R3], but that isn’t the chosen implementation anyway.
8.2.13. std :: abs
Note: See § 9.7 Absolute values for proposed changes.
std::abs can be easily implemented width-agnostically as x &lt; 0 ? -x : x for any integer x.
Note that an overload must exist for every integer type to avoid calling an overload for floating-point types.
Such an overload is proposed in § 9.7 Absolute values.
8.2.14. < cmath >
libstdc++, libc++, and the MSVC STL implement the integral math overloads using SFINAE.
Effectively, they define function templates constrained with enable_if.
Therefore, no changes are required.
8.2.15. std::valarray
std::valarray does not rely on any specific bit-size, or on the element type being any particular type in general.
While it is possible to provide specializations for specific types that make more optimal use of hardware, it is also possible to rely on the optimizer's auto-vectorization capabilities alone.
8.2.16. < linalg >
The linear algebra library introduced by [P1673R13] does not rely on any specific widths and is generalized by default.
The corresponding reference implementation can operate on arbitrary element types.
Providing specializations for specific widths is a quality-of-implementation issue.
8.2.17. &lt;cstdio&gt;
Note: This proposal makes 128-bit printf/scanf support entirely optional (see § 9.3 Header &lt;inttypes.h&gt; for wording).
Similar to &lt;charconv&gt;, extending support to 128-bit integers for printing and parsing requires only moderate effort because the underlying algorithm easily generalizes to any bit size.
The PRId128 et al. macros in &lt;inttypes.h&gt; would also need to be defined, and would expand to an implementation-defined format constant.
LLVM libc currently uses a fixed-width function for stringification.
This would require replacement, possibly with a function template.
Other standard libraries may be impacted more significantly.
std::printf("%w128d\n", std::int_least128_t{123});
8.2.18. std :: atomic
Libc++ already provides support for fetch-operations for __int128.
For example, fetch_add delegates to __atomic_fetch_add in libatomic.
In general, the read-modify-write operations that std::atomic provides must already have a software fallback for 32-bit hardware, where no 64-bit atomic instruction exists.
Such a software fallback may be implemented as a CAS-and-retry loop.
The introduction of 128-bit integers adds no new challenges.
9. Proposed wording
The proposed wording is relative to [CxxDraft], accessed 2024-02-10.
9.1. Header < version >
In subclause 17.3.2 [version.syn], update the feature-testing macros as follows:
#define __cpp_lib_atomic_int128 20XXXX
#define __cpp_lib_bitset 20XXXX // previously 202306L
#define __cpp_lib_bitset_int128 20XXXX
#define __cpp_lib_int128 20XXXX
#define __cpp_lib_to_string 20XXXX // previously 202306L
#define __cpp_lib_to_string_int128 20XXXX
Note: Feature-detection for &lt;cstdint&gt; and &lt;cinttypes&gt; is intentionally omitted because the user can detect whether INT128_MAX etc. are defined.
9.2. Header < cstdint >
In subclause 17.4.1 [cstdint.syn], update the synopsis as follows:
// all freestanding
namespace std {
  using int8_t = signed integer type; // optional
  using int16_t = signed integer type; // optional
  using int32_t = signed integer type; // optional
  using int64_t = signed integer type; // optional
  using int128_t = signed integer type; // optional
  using intN_t = see below; // optional

  using int_fast8_t = signed integer type;
  using int_fast16_t = signed integer type;
  using int_fast32_t = signed integer type;
  using int_fast64_t = signed integer type;
  using int_fast128_t = signed integer type;
  using int_fastN_t = see below; // optional

  using int_least8_t = signed integer type;
  using int_least16_t = signed integer type;
  using int_least32_t = signed integer type;
  using int_least64_t = signed integer type;
  using int_least128_t = signed integer type;
  using int_leastN_t = see below; // optional

  using intmax_t = signed integer type;
  using intptr_t = signed integer type; // optional

  using uint8_t = unsigned integer type; // optional
  using uint16_t = unsigned integer type; // optional
  using uint32_t = unsigned integer type; // optional
  using uint64_t = unsigned integer type; // optional
  using uint128_t = unsigned integer type; // optional
  using uintN_t = see below; // optional

  using uint_fast8_t = unsigned integer type;
  using uint_fast16_t = unsigned integer type;
  using uint_fast32_t = unsigned integer type;
  using uint_fast64_t = unsigned integer type;
  using uint_fast128_t = unsigned integer type;
  using uint_fastN_t = see below; // optional

  using uint_least8_t = unsigned integer type;
  using uint_least16_t = unsigned integer type;
  using uint_least32_t = unsigned integer type;
  using uint_least64_t = unsigned integer type;
  using uint_least128_t = unsigned integer type;
  using uint_leastN_t = see below; // optional

  using uintmax_t = unsigned integer type;
  using uintptr_t = unsigned integer type; // optional
}
In subclause 17.4.1 [cstdint.syn], update paragraph 3 as follows:
All types that use the placeholder N are optional when N is not 8, 16, 32, 64, or 128.
The exact-width types intN_t and uintN_t for N = 8, 16, 32, 64, and 128 are also optional; however, if an implementation defines integer types with the corresponding width and no padding bits, it defines the corresponding typedef-names.
Each of the macros listed in this subclause is defined if and only if the implementation defines the corresponding typedef-name.
In subclause 17.4.1 [cstdint.syn], add the following paragraph:
None of the types that use the placeholder N are standard integer types ([basic.fundamental]) if N is greater than 64.
[Example: int_least128_t is an extended integer type. int_least64_t is an extended integer type or a standard integer type whose width is at least 64. — end example]
Note: This restriction is intended to address § 6.2 Impact on overload sets.
9.3. Header < inttypes . h >
In subclause 17.14 [support.c.headers], add the following subclause:
17.14.X Header &lt;inttypes.h&gt;
The contents of the C++ header &lt;inttypes.h&gt; are the same as the C standard library header &lt;inttypes.h&gt; with the following exception: The definition of the fprintf and fscanf macros for the corresponding integers in the header &lt;stdint.h&gt; is optional for any integer with a width greater than 64.
However, if any macro for an integer with width N is defined, all macros corresponding to integers with the same or lower width as N shall be defined.
See also: ISO/IEC 9899:2018, 7.8.1
Note: This effectively makes fprintf/fscanf 128-bit support optional because without any PRI/SCN macros, the user has no standard way of using these functions with 128-bit integers.
Note: After rebasing on C23, additional restrictions to &lt;inttypes.h&gt; must be applied so that support for the wN length modifiers (see [N2680]) is not mandatory in C++.
9.4. Class template bitset
In subclause 22.9.2.1 [template.bitset.general], update the synopsis as follows:
// [bitset.cons], constructors
constexpr bitset() noexcept;
constexpr bitset(unsigned long long val) noexcept;
constexpr bitset(integer-least-int val) noexcept;
[...]
constexpr unsigned long to_ulong() const;
constexpr unsigned long long to_ullong() const;
template&lt;class T&gt; constexpr T to() const;
In subclause 22.9.2.1 [template.bitset.general], add a paragraph:
For each function with a parameter of type integer-least-int, the implementation provides an overload for each cv-unqualified integer type ([basic.fundamental]) whose conversion rank is that of int or greater, where integer-least-int in the function signature is replaced with that integer type.
Note: See § 6.4 std::bitset constructor semantic changes for discussion.
In subclause 22.9.2.2 [template.bitset.const], update the constructors as follows:
constexpr bitset(unsigned long long val) noexcept;
constexpr bitset(integer-least-int val) noexcept;
Effects: Initializes the first M bit positions to the corresponding bit values in val.
M is the smaller of N and the width ([basic.types.general]) of integer-least-int.
If M &lt; N, the remaining bit positions are initialized to one if val is negative, otherwise to zero.
In subclause 22.9.2.3 [bitset.members], make the following changes:
constexpr unsigned long to_ulong() const;
Returns: x.
Throws: overflow_error if the integral value x corresponding to the bits in *this cannot be represented as type unsigned long.

constexpr unsigned long long to_ullong() const;
template&lt;class T&gt; constexpr T to() const;
Constraints: T is an unsigned integer type ([basic.fundamental]).
Returns: x.
Throws: overflow_error if the integral value x corresponding to the bits in *this cannot be represented as the return type of this function.
9.5. Numeric conversions
Update subclause 23.4.2 [string.syn] as follows:
string to_string(int val);
string to_string(unsigned val);
string to_string(long val);
string to_string(unsigned long val);
string to_string(long long val);
string to_string(unsigned long long val);
string to_string(int_least128_t);
string to_string(integer-least-int val);
string to_string(float val);
string to_string(double val);
string to_string(long double val);
[...]
wstring to_wstring(int val);
wstring to_wstring(unsigned val);
wstring to_wstring(long val);
wstring to_wstring(unsigned long val);
wstring to_wstring(long long val);
wstring to_wstring(unsigned long long val);
wstring to_wstring(integer-least-int val);
wstring to_wstring(float val);
wstring to_wstring(double val);
wstring to_wstring(long double val);
In subclause 23.4.2 [string.syn], add a paragraph:
For each function with a parameter of type integer-least-int,
the implementation provides an overload for each cv-unqualified
integer type ([basic.fundamental]) whose conversion rank is that of int
or greater,
where integer-least-int in the function signature is replaced with that integer type.
In subclause 23.4.5 [string.conversions], update to_string:
string to_string(int val);
string to_string(unsigned val);
string to_string(long val);
string to_string(unsigned long val);
string to_string(long long val);
string to_string(unsigned long long val);
string to_string(integer-least-int val);
string to_string(float val);
string to_string(double val);
string to_string(long double val);
Returns: format("{}", val).
In subclause 23.4.5 [string.conversions], update to_wstring:
wstring to_wstring(int val);
wstring to_wstring(unsigned val);
wstring to_wstring(long val);
wstring to_wstring(unsigned long val);
wstring to_wstring(long long val);
wstring to_wstring(unsigned long long val);
wstring to_wstring(integer-least-int val);
wstring to_wstring(float val);
wstring to_wstring(double val);
wstring to_wstring(long double val);
Returns: format(L"{}", val).
9.6. Iota view
In subclause 26.6.4.2 [ranges.iota.view], update paragraph 1 as follows:
Let IOTA-DIFF-T(W) be defined as follows:
If W is not an integral type, or if it is an integral type and sizeof(iter_difference_t&lt;W&gt;) is greater than sizeof(W), then IOTA-DIFF-T(W) denotes iter_difference_t&lt;W&gt;.
Otherwise, IOTA-DIFF-T(W) is a signed standard integer type of width greater than the width of W if such a type exists.
Otherwise, IOTA-DIFF-T(W) is an unspecified signed-integer-like type ([iterator.concept.winc]) of width not less than the width of W.
[Note 1: It is unspecified whether this type satisfies weakly_incrementable. — end note]
Note: This change resolves the potential ABI break explained in § 5.2 std::ranges::iota_view ABI issue.
This change purely increases implementor freedom.
An extended integer type still models signed-integer-like, so GCC’s existing implementation using __int128 remains valid.
However, a wider extended integer type is no longer the mandatory difference type (if it exists) as per the second bullet.
9.7. Absolute values
In subclause 28.7.1 [cmath.syn], update the synopsis as follows:
// [c.math.abs], absolute values
constexpr int abs(int j); // freestanding
constexpr long int abs(long int j); // freestanding
constexpr long long int abs(long long int j); // freestanding
constexpr signed-integer-least-int abs(signed-integer-least-int j); // freestanding
constexpr floating-point-type abs(floating-point-type j); // freestanding-deleted
In subclause 28.7.1 [cmath.syn], add a paragraph after paragraph 2:
For each function with a parameter of type signed-integer-least-int,
the implementation provides an overload for each cv-unqualified
signed integer type ([basic.fundamental]) whose conversion rank is that of int
or greater,
where all uses of signed-integer-least-int in the function signature are replaced with that
signed integer type.
In subclause 28.7.2 [c.math.abs], make the following changes:
constexpr int abs(int j);
constexpr long int abs(long int j);
constexpr long long int abs(long long int j);
constexpr signed-integer-least-int abs(signed-integer-least-int j);
Effects: Equivalent to:
return j >= 0 ? j : -j;
Note: return j >= 0 ? j : -j; matches the semantics of the C functions exactly, even in undefined cases like abs(INT_MIN).
Note: The floating-point overload set is intentionally not re-defined to return j >= 0 ? j : -j.
This expression is not equivalent to clearing the sign bit.
9.8. Header < cinttypes >
In subclause 31.13.2 [cinttypes.syn], update paragraph 1 as follows:
The contents and meaning of the header &lt;cinttypes&gt; are the same as the C++ header &lt;inttypes.h&gt;, with the following changes:
Note: Unlike the C standard library header, the C++ header has the changes described in § 9.3 Header <inttypes.h> applied.
9.9. Atomic operations
In subclause 33.5.2 [atomics.syn], update the synopsis as follows:
// all freestanding
namespace std {
  [...]
  using atomic_int8_t = atomic&lt;int8_t&gt;; // freestanding
  using atomic_uint8_t = atomic&lt;uint8_t&gt;; // freestanding
  using atomic_int16_t = atomic&lt;int16_t&gt;; // freestanding
  using atomic_uint16_t = atomic&lt;uint16_t&gt;; // freestanding
  using atomic_int32_t = atomic&lt;int32_t&gt;; // freestanding
  using atomic_uint32_t = atomic&lt;uint32_t&gt;; // freestanding
  using atomic_int64_t = atomic&lt;int64_t&gt;; // freestanding
  using atomic_uint64_t = atomic&lt;uint64_t&gt;; // freestanding
  using atomic_int128_t = atomic&lt;int128_t&gt;; // freestanding
  using atomic_uint128_t = atomic&lt;uint128_t&gt;; // freestanding
  using atomic_int_least8_t = atomic&lt;int_least8_t&gt;; // freestanding
  using atomic_uint_least8_t = atomic&lt;uint_least8_t&gt;; // freestanding
  using atomic_int_least16_t = atomic&lt;int_least16_t&gt;; // freestanding
  using atomic_uint_least16_t = atomic&lt;uint_least16_t&gt;; // freestanding
  using atomic_int_least32_t = atomic&lt;int_least32_t&gt;; // freestanding
  using atomic_uint_least32_t = atomic&lt;uint_least32_t&gt;; // freestanding
  using atomic_int_least64_t = atomic&lt;int_least64_t&gt;; // freestanding
  using atomic_uint_least64_t = atomic&lt;uint_least64_t&gt;; // freestanding
  using atomic_int_least128_t = atomic&lt;int_least128_t&gt;; // freestanding
  using atomic_uint_least128_t = atomic&lt;uint_least128_t&gt;; // freestanding
  using atomic_int_fast8_t = atomic&lt;int_fast8_t&gt;; // freestanding
  using atomic_uint_fast8_t = atomic&lt;uint_fast8_t&gt;; // freestanding
  using atomic_int_fast16_t = atomic&lt;int_fast16_t&gt;; // freestanding
  using atomic_uint_fast16_t = atomic&lt;uint_fast16_t&gt;; // freestanding
  using atomic_int_fast32_t = atomic&lt;int_fast32_t&gt;; // freestanding
  using atomic_uint_fast32_t = atomic&lt;uint_fast32_t&gt;; // freestanding
  using atomic_int_fast64_t = atomic&lt;int_fast64_t&gt;; // freestanding
  using atomic_uint_fast64_t = atomic&lt;uint_fast64_t&gt;; // freestanding
  using atomic_int_fast128_t = atomic&lt;int_fast128_t&gt;; // freestanding
  using atomic_uint_fast128_t = atomic&lt;uint_fast128_t&gt;; // freestanding
  [...]
}
10. Acknowledgements
I thank Jonathan Wakely and other participants in the std-proposals mailing list whose feedback has helped me improve the quality of this proposal substantially.
I thank Darrel Wright for bringing [Go-9455] to my attention, and for other feedback.
I also thank Lénárd Szolnoki for contributing the example in § 2.1 Lifting library restrictions.
Note: See [std-proposals] for discussion of this proposal.