
Fix integer overflow in allocation size calculations#9860

Open
mohammadmseet-hue wants to merge 2 commits into google:master from mohammadmseet-hue:fix/integer-overflow-allocation-sizes

Conversation

@mohammadmseet-hue
Contributor

Summary

Several operator reshape/create functions compute allocation sizes via unchecked size_t multiplications. When attacker-controlled tensor dimensions cause the product to overflow, the resulting allocation is undersized, but subsequent writes use the original (pre-overflow) dimensions, leading to heap buffer overflows.

Root cause: XNNPACK has no overflow-safe arithmetic helpers. All size_t size calculations use raw * and + operators without overflow checks.

Changes

  1. src/xnnpack/math.h — Added xnn_safe_mul(), xnn_safe_add(), xnn_safe_mul3(), xnn_safe_mul4() using __builtin_mul_overflow (GCC/Clang) with portable fallback.

  2. src/memory.c:294 — Fixed addition overflow in xnn_reserve_weights_memory() where buffer->size + min_available_size could wrap, bypassing the capacity check and causing writes past buffer end.

  3. src/operators/unpooling-nhwc.c:220 — Fixed 4-way multiplication overflow in indirection buffer allocation: batch_size * input_height * input_width * pooling_size.

  4. src/operators/batch-matrix-multiply-nc.c:409 — Fixed 3-way multiplication overflow in packed weights allocation: batch_size_b * n_stride * weights_stride.

  5. src/operators/slice-nd.c:175 — Fixed bounds check bypass where offsets[i] + sizes[i] could wrap past SIZE_MAX, causing the > input_shape[i] check to pass incorrectly and allowing out-of-bounds access.

  6. src/operators/resize-bilinear-nhwc.c:196-197 — Fixed multiplication overflows in indirection buffer (output_height * output_width * 4) and packed weights (output_height * output_width * 2) allocations.

  7. src/operators/resize-bilinear-nchw.c:195-196 — Same fixes as NHWC variant.
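The helpers described in change 1 can be sketched as follows. This is a hedged illustration in the spirit of the described xnn_safe_mul()/xnn_safe_add() additions, not the exact code in the PR; the local names safe_mul_size/safe_add_size/safe_mul3_size are stand-ins for the real helpers.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

// Overflow-checked multiply: returns true on success, false if a * b
// would wrap size_t. Uses __builtin_mul_overflow on GCC/Clang with a
// portable division-based fallback elsewhere.
static inline bool safe_mul_size(size_t a, size_t b, size_t* out) {
#if defined(__GNUC__) || defined(__clang__)
  return !__builtin_mul_overflow(a, b, out);
#else
  // a * b overflows iff b != 0 && a > SIZE_MAX / b.
  if (b != 0 && a > SIZE_MAX / b) {
    return false;
  }
  *out = a * b;
  return true;
#endif
}

// Overflow-checked add, covering cases like the
// buffer->size + min_available_size check in xnn_reserve_weights_memory.
static inline bool safe_add_size(size_t a, size_t b, size_t* out) {
#if defined(__GNUC__) || defined(__clang__)
  return !__builtin_add_overflow(a, b, out);
#else
  if (b > SIZE_MAX - a) {
    return false;
  }
  *out = a + b;
  return true;
#endif
}

// 3-way variant, analogous to the described xnn_safe_mul3(); a 4-way
// variant chains one more step.
static inline bool safe_mul3_size(size_t a, size_t b, size_t c, size_t* out) {
  size_t ab;
  return safe_mul_size(a, b, &ab) && safe_mul_size(ab, c, out);
}
```

A call site would check the boolean result and return xnn_status_out_of_memory on overflow instead of proceeding to allocate.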

Attack Vector

Crafted ML model with large tensor dimensions → framework delegate (TensorFlow Lite, MediaPipe, PyTorch, Chrome WebNN) → XNNPACK API → integer overflow in size calculation → undersized allocation → heap buffer overflow during weight packing or indirection buffer initialization.

Note

This PR fixes the highest-impact subset of a systemic issue. The same unchecked multiplication pattern exists in convolution, deconvolution, fully-connected, average/max pooling, and reference packing functions. The xnn_safe_mul helpers added here can be applied to those sites in follow-up PRs.

…implementations

Several operator reshape/create functions compute allocation sizes via
unchecked size_t multiplications. When the product of attacker-controlled
dimensions (batch_size, height, width, kernel_size, etc.) overflows, the
allocation is undersized but subsequent writes use the original (pre-overflow)
dimensions, causing heap buffer overflows.

This commit:
1. Adds overflow-safe arithmetic helpers (xnn_safe_mul, xnn_safe_add,
   xnn_safe_mul3, xnn_safe_mul4) to src/xnnpack/math.h using
   __builtin_mul_overflow where available with a portable fallback.
2. Fixes overflow in xnn_reserve_weights_memory (addition overflow
   bypassing capacity check).
3. Fixes 4-way multiplication overflow in unpooling indirection buffer.
4. Fixes 3-way multiplication overflow in batch matrix multiply
   packed weights allocation.
5. Fixes addition overflow in slice-nd bounds check (offsets[i]+sizes[i]
   wrapping past SIZE_MAX bypasses validation).
6. Fixes multiplication overflows in resize-bilinear (NHWC and NCHW)
   indirection buffer and packed weights allocations.
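The slice-nd fix (item 5) is the one addition-overflow case where the check can be rearranged to avoid the wrapping addition entirely. A minimal sketch, with slice_in_bounds as an illustrative local predicate rather than the actual XNNPACK code:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

// Overflow-safe bounds check for one sliced dimension. The naive form
//   offset + size > input_dim
// can wrap past SIZE_MAX and incorrectly pass validation; comparing
// against the remainder input_dim - size avoids the addition.
static bool slice_in_bounds(size_t offset, size_t size, size_t input_dim) {
  return size <= input_dim && offset <= input_dim - size;
}
```

The rearranged comparison is a common pattern for this class of bug: both subexpressions are guaranteed not to wrap before they are evaluated.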

Attack vector: crafted ML model with large tensor dimensions processed
through TensorFlow Lite, MediaPipe, PyTorch, or Chrome WebNN delegates.

Note: This is a subset of a systemic issue — XNNPACK has no overflow-safe
arithmetic for size calculations. The same pattern exists in convolution,
deconvolution, fully-connected, average/max pooling, and packing functions.
The igemm and dwconv reshape paths in convolution-nhwc.c compute
indirection buffer sizes from kernel and output dimensions without
overflow checks:

  kernel_size = kernel_height * kernel_width
  output_size = output_height * output_width
  indirection_buffer_size = sizeof(void*) * kernel_size * tiled_output_size

On 32-bit platforms (WASM, Android armv7), these multiplications can
overflow size_t, producing a small allocation that is subsequently
used with the original (non-overflowed) dimensions, causing heap
buffer overflow.

The dwconv path has additional unchecked multiplications:
  step_height = kernel_size + (output_width - 1) * step_width * kernel_height
  buffer_size = sizeof(void*) * (primary_tile - kernel_size + output_height * step_height)

Replace all unchecked multiplications with xnn_safe_mul/xnn_safe_mul3/
xnn_safe_add, returning xnn_status_out_of_memory on overflow.
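The igemm indirection-buffer calculation above can be expressed with checked arithmetic as follows. This is a sketch under the stated replacement strategy; mul_ok and indirection_buffer_size are illustrative local names, not the XNNPACK API.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

// Local stand-in for an overflow-checked multiply (the real code would
// use the xnn_safe_mul helpers).
static bool mul_ok(size_t a, size_t b, size_t* out) {
  if (b != 0 && a > SIZE_MAX / b) return false;
  *out = a * b;
  return true;
}

// Computes sizeof(void*) * (kernel_height * kernel_width) *
// tiled_output_size with an overflow check at each step. Returns false
// if any step would wrap; the caller maps that to
// xnn_status_out_of_memory instead of allocating an undersized buffer.
static bool indirection_buffer_size(
    size_t kernel_height, size_t kernel_width, size_t tiled_output_size,
    size_t* out) {
  size_t kernel_size, bytes_per_kernel;
  return mul_ok(kernel_height, kernel_width, &kernel_size) &&
         mul_ok(sizeof(void*), kernel_size, &bytes_per_kernel) &&
         mul_ok(bytes_per_kernel, tiled_output_size, out);
}
```

On a 32-bit target the unchecked form of this product wraps for plausible attacker-supplied dimensions; the chained checks fail closed instead.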
