[WIP] snake_case #617
base: main
Conversation
  // Pytorch can only handle up to 128 GPUs.
  // https://github.com/pytorch/pytorch/blob/e30c55ee527b40d67555464b9e402b4b7ce03737/c10/cuda/CUDAMacros.h#L44
  const int MAX_CUDA_GPUS = 128;
  // Set to -1 to have an infinitely sized cache. Set it to 0 to disable caching.
  // Set to a positive number to have a cache of that size.
  const int MAX_CONTEXTS_PER_GPU_IN_CACHE = -1;
- std::vector<AVBufferRef*> g_cached_hw_device_ctxs[MAX_CUDA_GPUS];
+ std::vector<_avbuffer_ref*> g_cached_hw_device_ctxs[MAX_CUDA_GPUS];
AVBufferRef should remain Pascal case.
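For context, the constants in the hunk above configure a per-GPU cache of CUDA device contexts. Below is a minimal, stand-alone sketch (not part of this PR) of how such a size limit might be consulted; the can_cache_context() helper, the main() driver, and the forward-declared AVBufferRef stand-in are assumptions for illustration only.

#include <cstddef>
#include <iostream>
#include <vector>

// Forward declaration standing in for FFmpeg's AVBufferRef (libavutil); as the
// review notes, the type keeps its original PascalCase name.
struct AVBufferRef;

const int MAX_CUDA_GPUS = 128;
// -1: unbounded cache, 0: caching disabled, > 0: cap per GPU.
const int MAX_CONTEXTS_PER_GPU_IN_CACHE = -1;

std::vector<AVBufferRef*> g_cached_hw_device_ctxs[MAX_CUDA_GPUS];

// Hypothetical helper (not in the PR): decides whether a released device
// context may be put back into the per-GPU cache, honoring the constant above.
bool can_cache_context(int gpu_index) {
  if (MAX_CONTEXTS_PER_GPU_IN_CACHE == 0) {
    return false;  // caching disabled
  }
  if (MAX_CONTEXTS_PER_GPU_IN_CACHE < 0) {
    return true;  // infinitely sized cache
  }
  return g_cached_hw_device_ctxs[gpu_index].size() <
      static_cast<std::size_t>(MAX_CONTEXTS_PER_GPU_IN_CACHE);
}

int main() {
  std::cout << std::boolalpha << can_cache_context(0) << "\n";  // prints: true
  return 0;
}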
@@ -217,40 +218,41 @@ void convertAVFrameToFrameOutputOnCuda(
        "x3, got ",
        shape);
  } else {
-   dst = allocateEmptyHWCTensor(height, width, videoStreamOptions.device);
+   dst =
+       allocate_empty_h_w_c_tensor(height, width, video_stream_options.device);
Here and below, I'd find it easier if we wrote this as allocate_empty_hwc_tensor() and nppi_nv12_to_rgb_8u_p2c3r(). Basically, we'll want to be careful about how we turn a sequence of all caps into snake case.
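To make the concern concrete, here is a small stand-alone sketch (not part of this PR) of a camelCase-to-snake_case conversion that keeps a run of capitals such as HWC or AV together as one token instead of splitting it letter by letter. The to_snake_case() helper is a hypothetical illustration; names that mix capitals with digits, like the NPPI call mentioned above, would still need manual attention.

#include <cctype>
#include <iostream>
#include <string>

// Hypothetical helper: converts a camelCase identifier to snake_case while
// treating a run of capitals (an acronym like HWC or AV) as a single
// lowercase token.
std::string to_snake_case(const std::string& name) {
  std::string out;
  for (std::size_t i = 0; i < name.size(); ++i) {
    unsigned char c = static_cast<unsigned char>(name[i]);
    if (std::isupper(c)) {
      bool prev_lower =
          i > 0 && std::islower(static_cast<unsigned char>(name[i - 1]));
      bool prev_upper =
          i > 0 && std::isupper(static_cast<unsigned char>(name[i - 1]));
      bool next_lower = i + 1 < name.size() &&
          std::islower(static_cast<unsigned char>(name[i + 1]));
      // Start a new token at the first capital of a run, or at the last
      // capital of a run when it begins a new lowercase word
      // (HWCTensor -> hwc_tensor).
      if (prev_lower || (prev_upper && next_lower)) {
        out += '_';
      }
      out += static_cast<char>(std::tolower(c));
    } else {
      out += static_cast<char>(c);
    }
  }
  return out;
}

int main() {
  // prints: allocate_empty_hwc_tensor
  std::cout << to_snake_case("allocateEmptyHWCTensor") << "\n";
  // prints: convert_av_frame_to_frame_output_on_cuda
  std::cout << to_snake_case("convertAVFrameToFrameOutputOnCuda") << "\n";
  return 0;
}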