P4014R0 — The Sender Sub-Language
(5 items)
LEWG
This paper provides a comprehensive guide to the Sender Sub-Language introduced in C++26 via std::execution (P2300R10), explaining its theoretical foundations in continuation-passing style and monadic composition, and documenting the programming model it provides as a replacement for C++'s control flow, variable binding, error handling, and iteration. Through worked examples, it demonstrates the complexity costs of the model and argues that while the Sender Sub-Language serves specific domains (GPU compute, HFT, embedded systems) exceptionally well, coroutine-driven async I/O deserves the same freedom to adopt a domain-optimized model. The paper concludes with straw polls asking whether std::execution serves coroutine-driven async I/O less ideally than heterogeneous compute, and whether that domain should have equivalent freedom to optimize for its own needs.
- Section 6.2, page 19 — Systematic off-by-one citation error: prose cites bulk as [44], then as [45], when_all as [46], continues_on as [47], let_value as [48], split as [49], reduce as [50], but each bibliography entry is actually one number lower (bulk is 43, then is 44, etc.). [1]
- Section 6.2, page 19 — Prose cites [51] for CUDA C/C++ Language Extensions, but bibliography entry 51 is nvcc; the correct entry is 50. [2]
- Section 6.2, page 20 — Prose cites [52] for nvcc, but bibliography entry 52 is the HPC Wire article; the correct entry is 51. [3]
- Section 6.3, page 20 — Prose cites [55] for CUDA C++ Language Support, but no bibliography entry 55 exists; the correct entry is 54. [4]
- Section 1, page 2 — The St. Louis WG21 meeting is dated July 2024; the meeting took place June 24-29, 2024. [5]
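The "systematic off-by-one" pattern in the first finding can be checked mechanically: extract the citation numbers used in the prose, match each to the bibliography entry that actually documents that algorithm, and confirm the offset is constant. The sketch below is illustrative only; the prose fragment is quoted from the report's own reference [2], and the bibliography mapping is the corrected numbering the finding asserts, not parsed from the paper itself.

```python
import re

# Prose fragment as quoted in reference [2] of this report.
prose = ("bulk[44], then[45], when_all[46], continues_on[47], "
         "let_value[48], split[49], reduce[50]")

# Assumed correct bibliography numbering per the finding (entry -> algorithm).
bibliography = {43: "bulk", 44: "then", 45: "when_all", 46: "continues_on",
                47: "let_value", 48: "split", 49: "reduce"}

# Map each prose citation number to the algorithm name it is attached to.
cited = {int(num): name for name, num in re.findall(r"(\w+)\[(\d+)\]", prose)}

# Compute the offset between each prose citation and its true bibliography entry.
offsets = {num - next(k for k, v in bibliography.items() if v == name)
           for num, name in cited.items()}

assert offsets == {1}  # every prose citation is exactly one number too high
```

A constant offset of exactly 1 across all seven citations is what distinguishes a systematic numbering slip from scattered individual errors.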
References — Anthropic Citations API
[1]
"std::execution (P2300R10)[1] was formally adopted into the C++26 working paper at St. Louis in July 2024."
[2]
"The nvexec[41] implementation includes GPU-specific reimplementations of the standard sender algorithms - bulk[44], then[45], when_all[46], continues_on[47], let_value[48], split[49], and reduce[50]"
[3]
"The CUDA C/C++ Language Extensions[51] documentation enumerates them: Execution space specifiers: __host__[51], __device__[51], and __global__[51]"
[4]
"These extensions require a specialized compiler (nvcc[52])"
[5]
"The CUDA C++ Language Support[55] documentation (v13.1, December 2025) lists C++20 feature support for GPU device code."
Summary: P4014R0 argues that the sender/receiver model in std::execution constitutes a sub-language within C++ that is too complex, too difficult to teach, and insufficiently motivated by field experience, and it asks LEWG to reconsider the design direction. Five findings were identified, all concerning factual or bibliographic errors.
Pipeline: Discovery (Anthropic Opus + Citations API) → Verification Gate (OpenRouter Opus) → Report Writer (OpenRouter Opus)
Provenance: All references are machine-verified character positions from the Anthropic Citations API — deterministic, exact substrings, not model-generated quotes.
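The verification step described above amounts to an exact-substring check: a citation returned as character positions is accepted only if the document's text at those positions matches the quoted span byte-for-byte. The sketch below shows the shape of such a check; the function name, sample text, and positions are illustrative assumptions, not the actual pipeline's API.

```python
# Hypothetical sketch of a character-position citation check.
# Sample document text is taken from reference [1] of this report.
source = ("std::execution (P2300R10)[1] was formally adopted into the "
          "C++26 working paper at St. Louis in July 2024.")

def verify_citation(document: str, quote: str, start: int, end: int) -> bool:
    """Accept a citation only if the quote is the exact substring at [start, end)."""
    return document[start:end] == quote

quote = "formally adopted"
start = source.index(quote)

assert verify_citation(source, quote, start, start + len(quote))
assert not verify_citation(source, quote, start + 1, start + 1 + len(quote))
```

Because the check is a pure string comparison, it is deterministic: a quote either occupies the claimed positions or the citation is rejected, with no model judgment involved.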