docs/src/index.md (+1 -1)
@@ -14,7 +14,7 @@ The elastic cluster manager automatically adds new workers to an automatically c
Since workers can appear and disappear dynamically, initializing them (loading packages, etc.) via the standard `Distributed.@everywhere` macro is problematic, as workers added afterwards won't be initialized. ParallelProcessingTools provides the macro [`@always_everywhere`](@ref) to run code globally on all current processes and also store it, so it can be run again on future new worker processes. Workers that are part of a [`FlexWorkerPool`](@ref) will be updated automatically on `take!` and `onworker`. You can also use [`ensure_procinit`](@ref) to manually update all workers
to all `@always_everywhere` code used so far.
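A minimal sketch of this pattern (assuming a standard `Distributed` setup and that `ensure_procinit` can be called without arguments to update all current workers; the package and values used are placeholders):

```julia
using Distributed, ParallelProcessingTools

addprocs(2)

# Code in an @always_everywhere block runs on all current processes and is
# stored, so it can be replayed on workers that join later:
@always_everywhere begin
    using Statistics  # hypothetical package needed on every worker
end

# Workers added later by hand can be brought up to date explicitly
# (workers taken from a FlexWorkerPool are updated automatically):
addprocs(2)
ensure_procinit()
```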
- The function [`pinthreads_auto`](@ref) (used inside of `@always_everywhere`) provides a convenient way to perform some automatic thread pinning on all processes. Note that it needs to follow an [`import ThreadPinning`](https://github.com/carstenbauer/ThreadPinning.jl/), and that more complex use cases may require customized thread pinning for best performance.
+ [`AutoThreadPinning`](@ref), in conjunction with the package [`ThreadPinning`](https://github.com/carstenbauer/ThreadPinning.jl/), provides a convenient way to perform automatic thread pinning (e.g. inside of `@always_everywhere`, to apply thread pinning to all processes). Note that `ThreadPinning.pinthreads(AutoThreadPinning())` works on a best-effort basis and that advanced applications may require customized thread pinning for best performance.
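A minimal sketch of how this might look inside an `@always_everywhere` block (assuming `ThreadPinning` is available in the environments of all processes):

```julia
using ParallelProcessingTools

@always_everywhere begin
    import ThreadPinning
    # Best-effort automatic thread pinning on every process:
    ThreadPinning.pinthreads(AutoThreadPinning())
end
```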
Some batch system configurations can result in whole Julia processes, or even a whole batch job, being terminated if a process exceeds its memory limit. In such cases, you can try to gain a softer failure mode by setting a custom (slightly smaller) memory limit using [`memory_limit!`](@ref).
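A hedged sketch, assuming `memory_limit!` accepts a limit in bytes (check the API documentation for the exact signature and units; the value below is purely illustrative):

```julia
using ParallelProcessingTools

@always_everywhere begin
    # Hypothetical value: cap each process at roughly 4 GB, somewhat below
    # the per-process memory limit granted by the batch system.
    memory_limit!(4 * 1024^3)
end
```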