Replies: 1 comment 2 replies
Hi, I'm doing the same thing and was also very confused about how to get this to work, but I have at least found a working configuration.
Initializing the interpreter (the config argument is optional) will initialize python and "capture" the gil in the initializing thread.
The next step "releases" the gil (by saving the thread state), and any other thread (or this one) can then capture the gil again, e.g. with py::gil_scoped_acquire.
You now have to do this in every thread before any interaction with the interpreter, or there will usually be a crash. This is also true for the thread that initialized the interpreter. Some pybind11 wrappers will do this automatically; for example, calling a std::function that was assigned from python will automatically acquire the GIL, but in general you have to do it yourself. It is fine to acquire it multiple times from the same thread as long as you release it the same number of times. And at the end, I restore the thread state before finalizing the interpreter.
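A minimal sketch of that pattern might look like the following (this assumes py::scoped_interpreter for initialization and PyEval_SaveThread/PyEval_RestoreThread for the release/restore steps; the exact calls may differ):

```cpp
#include <pybind11/embed.h>
#include <thread>

namespace py = pybind11;

int main() {
    // Initialize the interpreter; this thread now holds the GIL.
    py::scoped_interpreter guard{};

    // Release the GIL so that any thread (including this one) can
    // acquire it later with py::gil_scoped_acquire.
    PyThreadState* state = PyEval_SaveThread();

    std::thread worker([] {
        // Must be done in every thread before touching the interpreter.
        py::gil_scoped_acquire gil;
        py::exec("print('hello from a worker thread')");
    });
    worker.join();

    // Restore the thread state before the interpreter is finalized
    // (which happens in scoped_interpreter's destructor).
    PyEval_RestoreThread(state);
    return 0;
}
```

The important parts are releasing the GIL right after initialization, acquiring it in every thread that talks to python, and restoring the saved thread state before the interpreter is finalized.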
I will leave it up to someone else to say whether this is "correct" or not, but it works great for me.
Hi there. I have an app in C++ that uses a lot of threads, and for testing and rapid development, it would be really nice for them to be able to call into python implementations of various things rather than implementing them all in C++.
I understand that because of the GIL, effective python execution will be through a single thread, or perhaps more accurately, "as if" through a single thread, but I'm still struggling to get it to work at all. What I think I have observed is that, no matter what I do with gil_scoped_acquire and gil_scoped_release, the only thread that can successfully call into python is the one that holds the scoped_interpreter.

As a result, I've had to write a serializing wrapper (not shown) that all my threads call into. It pushes python function calls and their arguments into a thread-safe queue, and the "python thread" runs a pop-execute loop and fires a condition variable to notify the original calling thread of the result. This works, but I feel like it should be unnecessary, and it is, strictly speaking, worse than even normal python threads, since it cannot release the GIL until a function completes, whereas real python can swap threads when blocked on io.
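The general shape of such a wrapper might be something like this (an illustrative sketch only, using std::packaged_task/std::future in place of the raw condition-variable signalling described above):

```cpp
#include <pybind11/embed.h>

#include <condition_variable>
#include <functional>
#include <future>
#include <mutex>
#include <queue>
#include <thread>

namespace py = pybind11;

// Hypothetical serializer: callers enqueue work, a single "python thread"
// owns the interpreter and executes everything, and each caller blocks on a
// future for the result.
class PySerializer {
public:
    PySerializer() : worker_([this] { run(); }) {}

    ~PySerializer() {
        { std::lock_guard<std::mutex> lk(m_); done_ = true; }
        cv_.notify_one();
        worker_.join();
    }

    // Called from any thread; blocks until the python thread has run fn.
    template <typename F>
    auto call(F fn) -> decltype(fn()) {
        std::packaged_task<decltype(fn())()> task(std::move(fn));
        auto fut = task.get_future();
        { std::lock_guard<std::mutex> lk(m_); q_.emplace([&task] { task(); }); }
        cv_.notify_one();
        return fut.get();  // wait for the python thread to finish this job
    }

private:
    void run() {
        py::scoped_interpreter guard{};  // only this thread touches python
        for (;;) {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lk(m_);
                cv_.wait(lk, [this] { return done_ || !q_.empty(); });
                if (done_ && q_.empty()) return;
                job = std::move(q_.front());
                q_.pop();
            }
            job();
        }
    }

    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> q_;
    bool done_ = false;
    std::thread worker_;  // declared last so it starts after the other members
};
```

A caller would then do something like `serializer.call([] { py::exec("print('hi')"); });` from any thread.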
Is there a better way to do this? I'd like to not have my serializer since I am told that the GIL is going away and I'd prefer that my code continue to be serialized when it does.
Here's a simplified example of what I tried first:
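Roughly this shape (an illustrative sketch, not the exact code): a worker thread calls py::exec directly while the main thread holds the scoped_interpreter.

```cpp
#include <pybind11/embed.h>
#include <thread>

namespace py = pybind11;

int main() {
    py::scoped_interpreter guard{};  // main thread holds the interpreter (and the GIL)

    std::thread t1([] {
        // No GIL handling at all: just call into python from another thread.
        py::exec("print('hello from t1')");
    });
    t1.join();
    return 0;
}
```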
This crashes. I naively thought py::exec would automatically take/release the GIL as required, but apparently nope.
Then I tried this, and a bunch of variations:
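One such variation, again as an illustrative sketch: the worker takes a py::gil_scoped_acquire, but the main thread never releases the GIL it acquired through scoped_interpreter.

```cpp
#include <pybind11/embed.h>
#include <thread>

namespace py = pybind11;

int main() {
    py::scoped_interpreter guard{};  // main thread still holds the GIL

    std::thread t1([] {
        // The worker blocks here waiting for the GIL...
        py::gil_scoped_acquire gil;
        py::exec("print('hello from t1')");
    });

    // ...but the main thread never releases it and blocks in join().
    t1.join();
    return 0;
}
```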
This does not crash, but it deadlocks.
Finally, the code below crashes too, but only after the thread has run. I think what is happening is that the interpreter is created in thread t1, and when t1 exits the interpreter is not destroyed because the object still holds it. When code (from the main thread) tries to destroy the object at the end of the program, it crashes because it is not the same thread.
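An illustrative sketch of that last case (the holder type here is hypothetical): the interpreter is created inside t1, kept alive by an object, and later destroyed from the main thread.

```cpp
#include <pybind11/embed.h>
#include <memory>
#include <thread>

namespace py = pybind11;

// Hypothetical holder type (illustrative only).
struct PyHolder {
    std::unique_ptr<py::scoped_interpreter> interp;
};

int main() {
    PyHolder holder;  // owned by the main thread

    std::thread t1([&holder] {
        // The interpreter is created (and the GIL taken) inside t1...
        holder.interp = std::make_unique<py::scoped_interpreter>();
        py::exec("print('hello from t1')");
        // ...and t1 exits without destroying it, because holder keeps it alive.
    });
    t1.join();

    // When main returns, holder's destructor finalizes the interpreter from
    // the main thread, which is not the thread that created it -> crash.
    return 0;
}
```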
I guess the last one is not unexpected.
I'm just curious if there is a better way to do this. Ideally, I'd like to just call py::exec without any extra work and have it be pybind that waits for the GIL; that would kill my thread parallelism, but at least the program would run.
Am I missing something?
Using:
Linux Ubuntu 24.04.1
clang 18.1.3
pybind11 2.13.16
python 3.12.3
Best,
Dave J