Issues: openxla/xla
AttributeError: 'UnspecifiedValue' of multidevice sharding with custom PJRT backend · #24400, opened Mar 31, 2025 by ajakovljevicTT
Help replacing tanhf on aarch64 with a vectorized SVE implementation [CPU, err:performance, question] · #24201, opened Mar 25, 2025 by AWSjswinney
dynamic-update-slice fails in combination with sharding and double precision [bug] · #24186, opened Mar 25, 2025 by Findus23
Build fails with no such target '@platforms//cpu:ppc64le' error [err:Build] · #24033, opened Mar 21, 2025 by giordano
Hermetic gcc [enhancement, err:Build] · #23716, opened Mar 13, 2025 by shraiysh
XLA Kernel Fusion / Selection Expectations [question, stat:awaiting response from contributor] · #23630, opened Mar 12, 2025 by YixuanSeanZhou
BufferFromHostBuffer for pre-allocated GPU buffers [GPU] · #23297, opened Mar 2, 2025 by drewjenks01
[TPU] Bug: Reverse is orders of magnitude slower on TPU [bug] · #23191, opened Feb 27, 2025 by bjenik
Expose memory space in general to JAX users [stat:awaiting response from contributor] · #23152, opened Feb 26, 2025 by yliu120
Pmap slower with new CPU runtime [CPU, err:performance] · #23110, opened Feb 25, 2025 by lockwo
Make tensorflow profiler into OpenXLA [question] · #22932, opened Feb 21, 2025 by yliu120
Feature Request: An option to disable flushing to zero on CPU [CPU, enhancement] · #22858, opened Feb 19, 2025 by Zentrik
[Doc] xla/pjrt/c/README.md needs to be updated [doc] · #22840, opened Feb 19, 2025 by yuanfz98
OptionOverride parser does not support the same syntax as XLA_FLAGS for enum values [jax.jit(..., compiler_options={...})] [bug] · #22459, opened Feb 7, 2025 by olupton (see the sketch after this list)
Sorted scatter emitter is sometimes very slow [bug] · #22233, opened Feb 3, 2025 by jreiffers
Hitting Assertion 'succeeded(range) && "element type cannot be iterated"' failed. in Scatter operation [bug] · #22209, opened Feb 2, 2025 by giordano
Enabling CPU backend optimization reveals a possible inconsistency in computing float32 power differences [CPU] · #22116, opened Jan 30, 2025 by pearu
CPU PJRT: reduce max of NaNs different if the value is a constant or if the value comes from a parameter [CPU] · #21461, opened Jan 15, 2025 by janpfeifer
Could OpenXLA support input dynamic shape? [question, stat:awaiting response from contributor] · #21126, opened Jan 8, 2025 by jinjidejinmuyan
Enabling stablehlo-complex-math-expander pass is platform dependent [enhancement] · #20903, opened Dec 27, 2024 by pearu
[TPU] XLA fails to fuse embedding lookup / array indexing [Google-TPU] · #20899, opened Dec 27, 2024 by neel04
USE_CLANG does not get set in crosstool_wrapper_driver_rocm.tpl [err:Build] · #20874, opened Dec 26, 2024 by davidheiss
MLIR module generated by torch_xla fails to compile [GPU] · #20696, opened Dec 18, 2024 by aboubezari
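
The #22459 entry above contrasts two ways of passing XLA options from JAX. Below is a minimal, hypothetical sketch of those two paths, assuming a recent JAX release where jax.jit accepts compiler_options (the form written in the issue title); the flag name xla_example_enum_flag and its value are placeholders, not real XLA flags.

import jax

# Path 1 (process-wide): XLA_FLAGS is read by XLA's own flag parser.
# Typically set in the shell before Python starts, e.g.
#   XLA_FLAGS="--xla_example_enum_flag=SOME_ENUM_VALUE" python train.py
# (xla_example_enum_flag is a placeholder, not a real flag.)

# Path 2 (per-compile): compiler_options goes through the OptionOverride
# parser, which, per the issue, does not accept the same enum spellings
# as the XLA_FLAGS parser.
doubled = jax.jit(
    lambda x: x * 2.0,
    compiler_options={"xla_example_enum_flag": "SOME_ENUM_VALUE"},  # placeholder
)
# The override parser only runs when `doubled` is first compiled, so this
# sketch stops short of calling it with the placeholder flag.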