Fix integer overflow in allocation size calculations #9860
Open
mohammadmseet-hue wants to merge 2 commits into google:master from
Conversation
…implementations

Several operator reshape/create functions compute allocation sizes via unchecked size_t multiplications. When the product of attacker-controlled dimensions (batch_size, height, width, kernel_size, etc.) overflows, the allocation is undersized but subsequent writes use the original (pre-overflow) dimensions, causing heap buffer overflows.

This commit:

1. Adds overflow-safe arithmetic helpers (xnn_safe_mul, xnn_safe_add, xnn_safe_mul3, xnn_safe_mul4) to src/xnnpack/math.h using __builtin_mul_overflow where available, with a portable fallback.
2. Fixes an addition overflow in xnn_reserve_weights_memory that bypassed the capacity check.
3. Fixes a 4-way multiplication overflow in the unpooling indirection buffer.
4. Fixes a 3-way multiplication overflow in the batch matrix multiply packed weights allocation.
5. Fixes an addition overflow in the slice-nd bounds check (offsets[i] + sizes[i] wrapping past SIZE_MAX bypasses validation).
6. Fixes multiplication overflows in the resize-bilinear (NHWC and NCHW) indirection buffer and packed weights allocations.

Attack vector: a crafted ML model with large tensor dimensions processed through the TensorFlow Lite, MediaPipe, PyTorch, or Chrome WebNN delegates.

Note: this is a subset of a systemic issue — XNNPACK has no overflow-safe arithmetic for size calculations. The same pattern exists in convolution, deconvolution, fully-connected, average/max pooling, and packing functions.
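The helpers described in the commit message could look roughly like the following minimal sketch. The names come from the commit message, but the signatures (returning `true` on success and writing the result through an out-parameter) and the exact fallback logic are assumptions; the PR's actual code in src/xnnpack/math.h may differ, and xnn_safe_mul4 would chain the same way xnn_safe_mul3 does.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

// Hypothetical sketch of the overflow-safe helpers (not the PR's code).
// Each returns true and writes the result through `out` on success,
// or returns false if the operation would wrap past SIZE_MAX.
static inline bool xnn_safe_mul(size_t a, size_t b, size_t* out) {
#if defined(__GNUC__) || defined(__clang__)
  return !__builtin_mul_overflow(a, b, out);
#else
  // Portable fallback: a * b overflows iff b != 0 and a > SIZE_MAX / b.
  if (b != 0 && a > SIZE_MAX / b) {
    return false;
  }
  *out = a * b;
  return true;
#endif
}

static inline bool xnn_safe_add(size_t a, size_t b, size_t* out) {
#if defined(__GNUC__) || defined(__clang__)
  return !__builtin_add_overflow(a, b, out);
#else
  // Portable fallback: a + b overflows iff a > SIZE_MAX - b.
  if (a > SIZE_MAX - b) {
    return false;
  }
  *out = a + b;
  return true;
#endif
}

static inline bool xnn_safe_mul3(size_t a, size_t b, size_t c, size_t* out) {
  size_t ab;
  return xnn_safe_mul(a, b, &ab) && xnn_safe_mul(ab, c, out);
}
```

The key property is that every intermediate product is checked, so a chained 3- or 4-way multiplication cannot silently wrap partway through.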
The igemm and dwconv reshape paths in convolution-nhwc.c compute indirection buffer sizes from kernel and output dimensions without overflow checks:

    kernel_size = kernel_height * kernel_width
    output_size = output_height * output_width
    indirection_buffer_size = sizeof(void*) * kernel_size * tiled_output_size

On 32-bit platforms (WASM, Android armv7), these multiplications can overflow size_t, producing a small allocation that is subsequently used with the original (non-overflowed) dimensions, causing a heap buffer overflow.

The dwconv path has additional unchecked multiplications:

    step_height = kernel_size + (output_width - 1) * step_width * kernel_height
    buffer_size = sizeof(void*) * (primary_tile - kernel_size + output_height * step_height)

Replace all unchecked multiplications with xnn_safe_mul/xnn_safe_mul3/xnn_safe_add, returning xnn_status_out_of_memory on overflow.
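A checked version of the igemm indirection-buffer sizing could be sketched as follows. This is illustrative only: the function name `indirection_buffer_bytes` and the local `safe_mul` stand-in are hypothetical, and the real code in convolution-nhwc.c is structured differently around the actual operator state.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

// Local stand-in for the xnn_safe_mul helper the PR adds (hypothetical;
// the real helper would live in src/xnnpack/math.h).
static bool safe_mul(size_t a, size_t b, size_t* out) {
  if (b != 0 && a > SIZE_MAX / b) {
    return false;  // a * b would wrap past SIZE_MAX
  }
  *out = a * b;
  return true;
}

// Checked sizing for the igemm indirection buffer: every partial product
// is verified, so an oversized model dimension yields a clean failure
// (the caller would return xnn_status_out_of_memory) instead of an
// undersized allocation.
static bool indirection_buffer_bytes(size_t kernel_height,
                                     size_t kernel_width,
                                     size_t tiled_output_size,
                                     size_t* out) {
  size_t kernel_size, elements;
  return safe_mul(kernel_height, kernel_width, &kernel_size) &&
         safe_mul(kernel_size, tiled_output_size, &elements) &&
         safe_mul(elements, sizeof(void*), out);
}
```

On a 32-bit target the same code rejects products that fit in 64 bits but not in the platform's size_t, which is exactly the WASM/armv7 case the commit message calls out.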
Summary
Several operator reshape/create functions compute allocation sizes via unchecked size_t multiplications. When attacker-controlled tensor dimensions cause the product to overflow, the resulting allocation is undersized but subsequent writes use the original (unwrapped) dimensions, leading to heap buffer overflows.

Root cause: XNNPACK has no overflow-safe arithmetic helpers. All size_t size calculations use raw * and + operators without overflow checks.

Changes

- src/xnnpack/math.h — Added xnn_safe_mul(), xnn_safe_add(), xnn_safe_mul3(), and xnn_safe_mul4() using __builtin_mul_overflow (GCC/Clang) with a portable fallback.
- src/memory.c:294 — Fixed an addition overflow in xnn_reserve_weights_memory() where buffer->size + min_available_size could wrap, bypassing the capacity check and causing writes past the end of the buffer.
- src/operators/unpooling-nhwc.c:220 — Fixed a 4-way multiplication overflow in the indirection buffer allocation: batch_size * input_height * input_width * pooling_size.
- src/operators/batch-matrix-multiply-nc.c:409 — Fixed a 3-way multiplication overflow in the packed weights allocation: batch_size_b * n_stride * weights_stride.
- src/operators/slice-nd.c:175 — Fixed a bounds-check bypass where offsets[i] + sizes[i] could wrap past SIZE_MAX, causing the > input_shape[i] check to pass incorrectly and allowing out-of-bounds access.
- src/operators/resize-bilinear-nhwc.c:196-197 — Fixed multiplication overflows in the indirection buffer (output_height * output_width * 4) and packed weights (output_height * output_width * 2) allocations.
- src/operators/resize-bilinear-nchw.c:195-196 — Same fixes as the NHWC variant.

Attack Vector
Crafted ML model with large tensor dimensions → framework delegate (TensorFlow Lite, MediaPipe, PyTorch, Chrome WebNN) → XNNPACK API → integer overflow in size calculation → undersized allocation → heap buffer overflow during weight packing or indirection buffer initialization.
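The slice-nd item in the list of changes is a good illustration of how an addition wrap defeats a bounds check. A minimal, hypothetical version of the wrap-safe check (the real code in src/operators/slice-nd.c is shaped differently) might be:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

// Wrap-safe per-dimension bounds check for a slice. The unchecked form
//   offsets[i] + sizes[i] > input_shape[i]
// passes incorrectly when the sum wraps past SIZE_MAX (e.g. SIZE_MAX + 2
// wraps to 1, which compares as in-bounds). Rejecting the wrap first
// restores the intended validation.
static bool slice_in_bounds(size_t offset, size_t size, size_t dim) {
  if (offset > SIZE_MAX - size) {
    return false;  // offset + size would wrap: out of bounds by definition
  }
  return offset + size <= dim;
}
```

With the unchecked expression, an attacker-controlled offset near SIZE_MAX makes the sum wrap to a tiny value, the validation passes, and the subsequent copy reads or writes far outside the input tensor.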
Note
This PR fixes the highest-impact subset of a systemic issue. The same unchecked multiplication pattern exists in convolution, deconvolution, fully-connected, average/max pooling, and reference packing functions. The xnn_safe_mul helpers added here can be applied to those sites in follow-up PRs.