Scheduler: Use the root task as a scheduler task #57465
Conversation

A Julia thread runs Julia's scheduler in the context of the switching task. If no task is found to switch to, the thread will sleep while holding onto the (possibly completed) task, preventing the task from being garbage collected. This recent [Discourse post](https://discourse.julialang.org/t/weird-behaviour-of-gc-with-multithreaded-array-access/125433) illustrates precisely this problem.

A solution to this would be for an idle Julia thread to switch to a "scheduler" task, thereby freeing the old task.

Other than on thread 1, the root task created for every Julia-started (non-GC) thread essentially ends immediately -- we call `jl_finish_task` at the end of `jl_threadfun`. This PR uses these root tasks (on all but thread 1) as scheduler tasks, which solves the problem for all but thread 1. We could do the same for thread 1, but it would require special-casing, as we cannot use thread 1's root task for this purpose.
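To make the problem concrete, here is a minimal illustrative sketch (hypothetical code, not from the PR or the Discourse thread; the function name and array size are made up) of how an idle worker thread can end up pinning a finished task and the data its closure captured:

```julia
function demo()
    data = zeros(10^8)             # ~800 MB, captured by the spawned task's closure
    t = Threads.@spawn sum(data)   # runs on some worker thread
    r = fetch(t)                   # the task has now finished
    data = nothing                 # drop *our* references to the array and the task
    t = nothing
    GC.gc()
    # The worker thread that ran the task may now be sleeping with the
    # finished task still held as its current task, keeping the task -- and
    # the array captured by its closure -- reachable, so neither is collected.
    return r
end
```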
Why is this?

Thread 1's root task actually does stuff like run the REPL/toplevel code. So it's a …

I considered creating a new task in …
Could we make the root task on thread 1 do nothing but launch a task that does the work it used to do, so that it becomes the scheduler task?
We can. It gets complicated though, because we have to return from …. Am I understanding your suggestion correctly?
@vtjnash suggested a couple of alternative ways to do this. Both have pros and cons, so I'd like to invite discussion.

With the first alternative, the scheduler uses a new field in …

Pros: …

Cons: …

The second alternative uses a …

Pros: …

Cons: …
I don't much like the first alternative -- it's tricky, hard to understand/explain, and it complicates the scheduler interface, which will make it harder to switch schedulers. I like the second alternative, but I feel that the "early" switch to the scheduler task is problematic; this switch should happen only after the thread-sleep interval. IMO the second alternative would be best once the thread-sleep logic moves to Julia. So I feel the current PR is still the right way to do this, but I'd like to hear other folks' opinions.
Closing this in favor of #57544:
> A Julia thread runs Julia's scheduler in the context of the switching task. If no task is found to switch to, the thread will sleep while holding onto the (possibly completed) task, preventing the task from being garbage collected. This recent [Discourse post](https://discourse.julialang.org/t/weird-behaviour-of-gc-with-multithreaded-array-access/125433) illustrates precisely this problem. A solution to this would be for an idle Julia thread to switch to a "scheduler" task, thereby freeing the old task. This PR uses `OncePerThread` to create a "scheduler" task (that does nothing but run `wait()` in a loop) and switches to that task when the thread finds itself idle. Other approaches considered and discarded in favor of this one: #57465 and #57543.
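A minimal sketch of the idea behind #57544, assuming Julia's `Base.OncePerThread` (available in recent Julia) and the zero-argument `wait()` both of which the quoted description mentions; the name `scheduler_task` is hypothetical, and the real change wires this into the runtime's idle path rather than user code:

```julia
# Each thread lazily gets a dedicated "scheduler" task whose only job is to
# run `wait()` in a loop; an idle thread switches to it instead of sleeping
# inside a (possibly finished) user task.
const scheduler_task = Base.OncePerThread{Task}() do
    t = @task while true
        wait()          # block in the scheduler until there is work to run
    end
    t.sticky = true     # pin the scheduler task to its thread
    t
end

# `scheduler_task()` returns the current thread's scheduler task, creating it
# on first use; the runtime would switch to it when the thread goes idle.
```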