Document: P3899R1
Authors: Jan Schultke, Matthias Kretz
Date: 2026-02-20
Audience: SG6 (Numerics), EWG (Evolution), CWG (Core Wording)
For roughly a decade, the C++ standard has been technically ambiguous about whether FLT_MAX * 2.0f is undefined behavior on a platform with IEEE 754 infinity. CWG issue 2168 (2016) asked SG6 for guidance; SG6 said overflow to infinity should be well-defined at runtime. Then CWG issue 2723 silently added a “range of representable values” definition that — without consulting SG6 — effectively made overflow UB again. The result: three major compilers implement three different behaviors, especially for constant evaluation, and the standard cannot arbitrate between them.
P3899R1 draws the lines cleanly. On IEEE 754 types: overflow to ±∞ is well-defined at runtime but disqualifies a constant expression. Infinity and quiet NaN propagation — infinity() + 1, nan * 2 — is well-defined and valid in constexpr. Producing NaN from finite operands (indeterminate forms like inf * 0) is not a constant expression. Division by zero remains undefined behavior; the scope of this paper stops short of the full IEEE 754 exceptions story. Underflow is clarified as never having been undefined.
The wording changes touch three places: [expr.pre] ¶4 (rewritten to a two-condition formulation), [basic.fundamental] ¶13 (small parenthetical about infinity), and [expr.const] ¶10 (three new disqualifying conditions). GCC 15 already implements the proposed behavior exactly. Clang and MSVC deviate only on the constexpr overflow case.
If you write numerical code, care about -ffinite-math-only vs. standard conformance, or have ever wondered whether std::numeric_limits<float>::infinity() * 2.0f is valid in a constexpr context — this paper is for you.
committee gonna committee. floating point has had this exact ambiguity since at least Kona 2016 and it only took one contradictory DR and three diverging compiler implementations to get a cleanup paper. progress.
lol Rust doesn’t have floating-point undefined behavior. just saying.
Rust f32/f64 also don’t give you std::fexcept / floating-point exception flags, and the SIMD story is nowhere near std::simd. Different tradeoffs.

Rule 3. Stay on topic.
-ffast-math assumes no infinities, no NaNs, no signed zeros. This paper is for people who need IEEE conformance without the flag — HFT desks, numerics researchers, anyone who ships to an auditor. The rest of us just flip the flag and move on.

That’s… the point. When you don’t use -ffast-math, the standard should clearly say what your code means. Before this paper it didn’t, not for overflow.

Eight years of writing C++ and the mental overhead of tracking which FP operations are UB vs. well-defined on which platform is genuinely exhausting. Glad this is being fixed. I still wish there were a [[strict_ieee]] attribute I could slap on a TU and stop thinking about it.

I’ve been tracking CWG2168 since the 2016 Kona meeting. What this paper doesn’t advertise loudly enough in the abstract is that it’s reversing part of the SG6 guidance from that session. The 2016 guidance said overflow to infinity should be well-defined at runtime, but infinity should not be usable as a subexpression in constant expressions — the idea being that constexpr float x = infinity() + 1; was meant to be ill-formed. This paper flips that: infinity propagation is now explicitly valid in constexpr. The justification is reasonable (all three compilers already accept it and rejecting it would break code), but it’s a policy change dressed as a clarification.

The other thing that jumps out: division by zero stays undefined. So on an IEEE 754 type, FLT_MAX * 2 is now well-defined (produces +inf), while 1.0f / 0.0f is still UB. The paper explains the reasons — signed infinity, negative zero, NaN payloads, non-IEEE types — but it means you can’t read this as “C++ finally matches IEEE 754.” Half the table is set.

The CWG2723 situation is its own story. Someone adds a “range of representable values” definition without looping in SG6, it silently contradicts decade-old guidance, and a separate paper has to untangle it. Standard archaeology is genuinely a full-time job.
I work on GCC’s constexpr evaluator. The reversal on constexpr infinity propagation is the right call regardless of what SG6 said in 2016. We’ve been accepting infinity() + 1 in constexpr for years — rejecting it now would break existing code. The 2016 concern was about whether constant folding across TUs could produce surprising results; those concerns apply less given how the constexpr evaluation model has matured.

The 2016 concern was deeper than TU-level folding. If you allow infinity as a constant subexpression, you need to decide what inf * 0 means in constexpr: (a) ill-formed, (b) NaN, or (c) UB. This paper picks (a) — inf * 0 is not a constant expression because the result is “not mathematically defined in the domain of real number arithmetic.” Defensible. But it creates an asymmetry: inf * 2 is constexpr-valid, inf * 0 is not. You need to internalize which operations on infinity are “mathematically defined” to predict what constant-evaluates.

The implementation heuristic is: did we produce NaN, and was any input already NaN? Yes → propagation → constexpr OK. No NaN input but NaN output → indeterminate form → not constexpr. “Not mathematically defined in the domain of real number arithmetic” operationalizes to exactly that check. Clear for implementers. Less obviously clear in the standard text, which is the legitimate concern.
The implementation heuristic is sound. What I’d want before CWG wording review is either a forward reference to IEEE 60559 to ground “domain of real number arithmetic,” or a note with worked examples. The phrase is doing load-bearing semantic work — it distinguishes inf * 0 (excluded) from nan * 2 (allowed) — but it’s new vocabulary with no anchor in the standard. Future DR work will hit this phrase and have to reverse-engineer the intent.

Fair point. “Domain of real number arithmetic” should get a footnote or an IEEE 60559 cross-reference before CWG merges the wording. The scope was already ambitious. Worth flagging as an NB comment.
In HFT we sometimes want infinity production as a sentinel — client sends a degenerate price, arithmetic propagates to inf, downstream logic catches it in a single branch instead of bounds-checking every intermediate. The “technically UB” status meant that was load-bearing on implementation-defined behavior even though every IEEE 754 compiler does the right thing. This clarification removes a sword of Damocles. Division by zero staying UB is fine; we already defensively handle denominators.
GCC 15 already implements exactly this behavior. Clang and MSVC deviate only in the constexpr overflow case — they accept FLT_MAX * 2.0f as a constant expression; this paper would reject it. This is basically standardizing existing practice. Rubber stamp.

That’s how most core wording changes work. Compilers converge on sensible behavior, then a paper makes it normative. The alternative is leaving the standard wrong while implementations move on. Both outcomes are bad; one of them produces useful papers.
What specifically does Clang need to change?
Clang currently accepts FLT_MAX * 2.0f in constexpr contexts — it constant-folds the overflow to infinity without an error. Under this proposal, overflow at constant-evaluation time becomes a new disqualifying condition. Minor implementation change, probably one check in the constexpr evaluator.

great, constexpr floating point with NaN propagation. can’t wait for the “why is my constant evaluation 3x slower” issue reports in six months
constexpr FP has been a thing since C++23 (constexpr <cmath>). This paper clarifies what’s legal; it doesn’t add new FP operations to constant evaluation. Your compile times are already paying the cost.

hot take: if your safety-critical code uses floating-point at all you already have bigger problems than FP UB. fixed-point arithmetic with explicit precision tracking is the correct approach for anything where correctness actually matters.
Fixed-point doesn’t vectorize to AVX-512 for FFTs on price time series. Some problems require floating-point. Also “safety-critical” and “HFT” are different axes — HFT cares about determinism and performance, not functional safety certification standards.
The phrase “not mathematically defined in the domain of real number arithmetic” is new vocabulary introduced in this paper. It’s doing genuine semantic work — it’s why inf * 0 (indeterminate) fails as a constant expression while nan * 2 (propagation) passes — but it appears nowhere else in the standard and has no anchor to IEEE 60559. The operative exclusion in the proposed [expr.const] wording is precisely that “not mathematically defined” condition. I understand what it means in context. I suspect CWG reviewers will too. But “domain of real number arithmetic” is informal vocabulary and a future DR will hit it and have to reverse-engineer the intent from the paper’s examples.
The standard already uses phrases like “the mathematical value of” without exhaustive formalization. In context “domain of real number arithmetic” pretty clearly means “operations with a unique answer in ℝ” — addition, multiplication (except 0×∞), etc. Agreed a footnote or IEEE 60559 cross-reference would help. Normal CWG cleanup.
[removed by moderator]
report and move on
The underflow treatment deserves more attention. The paper proposes that underflow — producing a denormal or zero — is well-defined and is a valid constant expression, even though it raises FE_UNDERFLOW. This is treated the same as FE_INEXACT (rounding) for constant-evaluation purposes. The paper openly admits this is “inconsistent with the rest of the design” but justifies it on pragmatic grounds: no major compiler diagnoses underflow in constexpr today.

Honest concession, probably the right call, but it means the “constant expression ↔ no floating-point exceptions raised” model has a deliberate hole. The well-definedness guarantee for overflow gets explicit wording; underflow has no analogous clean statement. It’s just “this won’t disqualify the expression.” If you write code that checks fetestexcept after a constexpr init, you need to know this.

Does any of this affect embedded targets without hardware IEEE 754? I’m on Cortex-M0 with emulated FP.
No behavior change for you. The well-definedness guarantees apply only to types that have an infinity representation. On a target where FLT_MAX * 2 can’t produce infinity, overflow stays UB — same as today.

For those who want the backstory:
CWG2168 (~2016): CWG has an open question about whether FP overflow is UB on IEEE 754 types. They ask SG6. SG6’s guidance: overflow to infinity is well-defined at runtime; infinity should not be usable as a constant-expression subexpression. Filed and parked.
CWG2723 (later): Adds a definition of “range of representable values” for floating-point. No SG6 consultation. The definition doesn’t carve out ±∞ — effectively reverting SG6’s guidance and making overflow UB again.
Result: two DR issues pointing in opposite directions, three compilers making different choices, and a standard that can’t answer “is FLT_MAX * 2.0f undefined behavior?” without hedging.

P3899R1 untangles it. The partial deviation from the 2016 SG6 guidance — now allowing infinity propagation in constexpr — is deliberate. The original concern doesn’t hold as strongly under the current constexpr evaluation model.
One thing that jumps out: this paper is routed to SG6, then EWG, then CWG. Three groups, three sets of polls, plausibly spanning 2–3 meetings. For what is essentially a contradiction between two existing DRs. GCC 15 already does the right thing. By the time this is officially normative it will have been de facto standard for years. The committee process is thorough. Whether it is appropriately fast for correctness fixes is a separate question.
Reminder: paper authors occasionally read these threads. Keep it technical and keep it C++. The Rust tangent has been warned.
[deleted]
what did they say
something about NaN being a design mistake and C++ needing to deprecate float altogether. the usual.

If you want to see the current behavioral gap: https://godbolt.org/z/K3vxYaE89
UBSAN reports FLT_MAX * 2.0f as floating-point overflow under -fsanitize=undefined. After this paper, that specific case becomes well-defined on IEEE 754 implementations and UBSAN should stop flagging it at runtime. 1.0f / 0.0f would still trigger. The fix is in the standard, not in UBSAN — toolchain updates will follow once the wording lands.
Wait so after this paper 1.0f / 0.0f is still UB? I thought the whole point was IEEE 754 conformance. Edit: ok I read section 4.3, the authors explain the signed-infinity and negative-zero edge cases. I was wrong. Edit2: still annoyed. 1.0f / 0.0f should just be +inf on any IEEE 754 type and everyone knows it.