Please support GPU #38
Comments
Hi there, Thank you!
hi @shayansadeghieh, while we would also very much love to have it, our high-priority bucket list is still very full :( so from our side we will likely not work on this in the near future. Accelerators (GPU, TPU, etc.) are on our TODO list though. While inference would be simpler to do, leveraging GPUs/TPUs for training would be much harder. Notice DF algorithms don't do many floating-point operations (other than calculating the scores at each level of the tree). Inference could be accelerated more easily though -- we did a draft in the past. Maybe some interested developer would contribute it?
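To illustrate why DF inference does so few floating-point operations, here is a minimal, hypothetical sketch of single-tree inference (not TF-DF's actual data structures or API): routing an example through a tree is a chain of threshold comparisons, with essentially no arithmetic until the leaf value is read.

```python
# Minimal sketch of decision-tree inference (illustrative only, not
# TF-DF's implementation). Each internal node is
# (feature_index, threshold, left_child, right_child); each leaf is
# ("leaf", value, None, None). Nodes live in a flat list, root at index 0.
TREE = [
    (0, 0.5, 1, 2),             # node 0: go left if x[0] <= 0.5
    ("leaf", 0.1, None, None),  # node 1
    (1, 2.0, 3, 4),             # node 2: go left if x[1] <= 2.0
    ("leaf", 0.7, None, None),  # node 3
    ("leaf", 0.9, None, None),  # node 4
]

def predict_one(tree, x):
    """Walk from the root to a leaf; each step is one comparison."""
    node = 0
    while tree[node][0] != "leaf":
        feature, threshold, left, right = tree[node]
        node = left if x[feature] <= threshold else right
    return tree[node][1]

print(predict_one(TREE, [0.2, 0.0]))  # -> 0.1 (left branch at the root)
print(predict_one(TREE, [0.9, 5.0]))  # -> 0.9 (right, then right)
```

Since the per-example work is branchy pointer-chasing rather than dense math, a GPU helps less than for neural nets, which is consistent with the comment above that accelerating DF inference is easier said than done.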
Hi @janpfeifer, thank you for the quick response. No worries that it is not on your high-priority list, I was just curious. Do you by any chance have a link to the draft you previously did for inference?
I don't have a link because we never open-sourced that more experimental code. Let me check here if it would be easy to make it visible. Btw, notice the CPU implementation can be really fast, depending on the inference engine used.
Hi everyone,
Background
My TensorFlow code runs on the GPU. It performs matrix operations that are fast on the GPU. When it runs with TF-DF, the data must be downloaded from the GPU for classification and then uploaded back to the GPU afterwards. In terms of throughput, this is a great loss.
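The round trip described above can be sketched roughly as follows. This is a hedged illustration only: `gpu_matrix_ops` and `forest_predict_cpu` are hypothetical stand-ins (here backed by NumPy on the host), not real TF-DF or TensorFlow APIs.

```python
import numpy as np

def gpu_matrix_ops(batch):
    # Stand-in for matrix work that would run on the GPU.
    return batch @ batch.T

def forest_predict_cpu(features):
    # Stand-in for a CPU-only forest predict() call.
    return (features.sum(axis=1) > 0).astype(np.float32)

batch = np.random.randn(8, 8).astype(np.float32)
gpu_out = gpu_matrix_ops(batch)            # result lives on the GPU
host_features = np.asarray(gpu_out)        # device -> host copy
preds = forest_predict_cpu(host_features)  # CPU-only inference
# preds would then be uploaded back to the GPU for downstream ops;
# the two copies bracket every predict() call in the pipeline.
```

When the forest inference itself is cheap, these two transfers can dominate end-to-end latency, which is the throughput loss the issue is about.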
Feature Request
Please support GPU, especially for inference such as the predict function. Training can take time because a user may try various configurations to find the best one; this is understandable. However, applying the trained model must meet the runtime requirements.