Add Job management system #34
base: main
Conversation
Some small naming things, otherwise looks good.
 * @param task the function to run per index
 * @param sharedMem the size of a shared memory buffer to allocate for every group.
 */
void Dispatch(ExecutionContext& context, size_t jobs, size_t groups, const std::function<void(TaskArguments)>& task, size_t sharedMem = 0);
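For illustration only, a hypothetical call site for this declaration might look like the sketch below; the TaskArguments field name (jobIndex) and ProcessItem are made up here, and the reading of groups as "jobs per group" follows the DispatchGroups documentation further down:

ExecutionContext ctx;
// Run 1024 iterations of the task, 8 jobs per group, with no shared memory.
Dispatch(ctx, 1024, 8, [](TaskArguments args) {
    ProcessItem(args.jobIndex); // jobIndex: assumed field naming which iteration this is
}, 0);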
Would rename jobs to count.
Could we provide a default grouping? Seems pointless to write out the "jobs / max_threads if jobs > max_threads, or 5 otherwise" part every time...
In most cases, it'll be one. I can move it to the end of the function as a default parameter if it'll help.
Now that is just more confusing, because you say in most cases it will be 1, but the comment says otherwise...
Also, what about the jobs to count rename?
Suggested change:
- void Dispatch(ExecutionContext& context, size_t jobs, size_t groups, const std::function<void(TaskArguments)>& task, size_t sharedMem = 0);
+ void Dispatch(ExecutionContext& context, size_t count, size_t groups, const std::function<void(TaskArguments)>& task, size_t sharedMem = 0);
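A rough sketch of how the default-grouping idea from this thread could look; the extra overload, DefaultGroupSize, and MaxThreads are illustrative names and not code from this PR:

// Heuristic mentioned in the review: spread jobs across threads when there is
// enough work, otherwise fall back to a small fixed group size.
size_t DefaultGroupSize(size_t count, size_t maxThreads) {
    return count > maxThreads ? count / maxThreads : 5;
}

// Convenience overload that picks the grouping for the caller.
void Dispatch(ExecutionContext& context, size_t count, const std::function<void(TaskArguments)>& task, size_t sharedMem = 0) {
    Dispatch(context, count, DefaultGroupSize(count, MaxThreads()), task, sharedMem);
}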
 * @param groups the number of jobs to put in each group
 * @return The number of groups to create to fit all jobs
 */
size_t DispatchGroups(size_t jobs, size_t groups);
Wait, is this a helper for doing "jobs / max_threads if jobs > max_threads, or 5 otherwise"?
Not quite. It's basically a weighted average of the number of times to iterate the task and the number of threads to spread the iteration over, with heavy weighting towards the jobs.
This is an implementation detail and should not be documented except in the source.
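For reference, the simplest reading of the documented contract above (enough groups to fit all jobs, given a per-group job count) is a ceiling division; the PR's actual weighting logic may well differ, so treat this as a sketch only:

size_t DispatchGroups(size_t jobs, size_t groupSize) {
    if (groupSize == 0) return 0;               // guard against division by zero
    return (jobs + groupSize - 1) / groupSize;  // round up so every job fits in a group
}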
/**
 * Wait for the given context to stop running.
 * The current thread will be added to the thread pool for jobs.
What does this mean? Especially in the context of waiting on the main thread.
It means that if the main thread waits for a different thread to run, it can be paused and another thread can take its place until the dependency thread finishes.
The main thread does not take priority over anything else.
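A minimal sketch of the behaviour described here, assuming a pending-job counter on the context and a TryRunOnePendingJob helper (both are made-up names for illustration):

#include <atomic>
#include <cstddef>
#include <thread>

struct ExecutionContext { std::atomic<std::size_t> pendingJobs{0}; }; // stand-in type

bool TryRunOnePendingJob(); // assumed helper: pops and runs one queued job, returns false if none

void Wait(ExecutionContext& context) {
    while (context.pendingJobs.load() > 0) {
        // Instead of blocking, the waiting thread (main or otherwise) keeps pulling
        // work, so it effectively joins the worker pool until its dependency finishes.
        if (!TryRunOnePendingJob()) {
            std::this_thread::yield(); // nothing to pick up right now
        }
    }
}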
Was mainly interested in the wording of the last part, "added to the thread pool for jobs". For me, a thread pool is a place where you go to get new worker threads, so adding the main thread to it is counterintuitive.
void Init(size_t maxThreads) {
    // Don't reinit if we're already doing something, that's BAD
    if (internalState.nCores > 0) return;
I mean technically.... Feels a bit weird.
If you remove the list of running threads while a thread is running, it becomes impossible to join all those threads back, and the process CANNOT stop cleanly any more.
It feels weird because C++ threading is a fucking disaster. We need to just deal with it in this case.
The comment is on the fact that you are using nCores as an "already initialized" flag.
Like yes, there is little to no chance of this running on a computer with 0 cores, but still, that field is now pulling double duty.
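One possible alternative along the lines of this comment, sketched with made-up details (the real InternalState has more members than shown here):

#include <cstddef>

struct InternalState {
    std::size_t nCores = 0;
    bool initialized = false; // dedicated flag instead of overloading nCores
};

static InternalState internalState;

void Init(std::size_t maxThreads) {
    if (internalState.initialized) return; // reinit is still rejected, but the intent is explicit
    internalState.initialized = true;
    internalState.nCores = maxThreads;     // nCores now only ever means "thread/core count"
    // ... start worker threads here ...
}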
    internalState.Destroy();
}
size_t GetThreadsOfPriority(Priority p) { |
GetThreadCountOfPriority ?
GetNumberOfThreadsThatCanRunForPriorityQueue?
Allows multiple simultaneous tasks, assigned to three separate queues with separate priorities.
New jobs in the HIGHEST priority queue can take priority over other HIGHEST tasks, as well as any LOW or STREAM priority tasks.
LOW priority tasks can take priority over STREAM priority tasks, but not HIGHEST.
STREAM priority tasks only run if no thread is currently processing higher priority tasks.
Some development is still needed to actually implement that priority system on Windows and Linux, due to some very awkward intricacies in how MinGW decides which parts of process management are actually implemented.
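A sketch of how the three-queue priority order described above could be expressed; the enum values, Job alias, and queue layout are assumptions, and locking is left out for brevity:

#include <array>
#include <deque>
#include <functional>
#include <optional>

enum class Priority { HIGHEST = 0, LOW = 1, STREAM = 2 };

using Job = std::function<void()>;

// One queue per priority; workers drain HIGHEST first, then LOW, and only touch
// STREAM when nothing of higher priority is pending. The real system must also
// synchronize access to these queues.
std::array<std::deque<Job>, 3> queues;

std::optional<Job> PopNextJob() {
    for (auto& q : queues) {          // iterates HIGHEST -> LOW -> STREAM
        if (!q.empty()) {
            Job job = std::move(q.front());
            q.pop_front();
            return job;
        }
    }
    return std::nullopt;              // all queues are empty
}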