Conversation

@gibiansky
Not only is this slow, but it will also break on GPUs because TensorFlow does not release its allocated memory. So if you try to run a catalog with multiple files on a GPU, the first file will succeed and the second will give you an OOM error.
@gibiansky
Author

I'm not totally sure how this interacts with multiple processes, though, or whether it creates some sort of race condition. At first glance I think it's fine, but I haven't thought about it deeply.

I made this PR because I hit this issue, and the change in it fixed the problem, so I figured I'd put it up in case someone else hits it.

Otherwise, using this on a GPU fails, because TensorFlow allocates the entire heap and doesn't let go of it.
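The diff itself isn't shown here, but the comments describe the general workaround: since TensorFlow holds onto all the GPU memory it allocates until its process exits, each file can be processed in its own child process so the memory is reclaimed between files. A minimal sketch of that pattern, with a placeholder worker standing in for the real TensorFlow code (the function and file names are hypothetical, not from the PR):

```python
import multiprocessing


def process_file(path, result_queue):
    # In real use, TensorFlow would be imported and run *inside* this worker.
    # All GPU memory it grabs is then released when the child process exits,
    # so the next file starts with a clean slate instead of hitting an OOM.
    result = f"processed {path}"  # placeholder for the actual model run
    result_queue.put(result)


def run_catalog(paths):
    """Run each file in a fresh process so GPU memory is freed between files."""
    results = []
    for path in paths:
        queue = multiprocessing.Queue()
        proc = multiprocessing.Process(target=process_file, args=(path, queue))
        proc.start()
        results.append(queue.get())  # read before join to avoid queue deadlock
        proc.join()
    return results


if __name__ == "__main__":
    print(run_catalog(["first.fits", "second.fits"]))
```

This is the simplest one-process-per-file form; as the comment above notes, whether it interacts safely with other multiprocessing in the surrounding code is a separate question.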