How to Stream GLSurfaceView Output from FaceUnity Using RootEncoder? #1768
Comments
Hello, if you have a way to get buffer data from that library, you can use BufferVideoSource to handle it. Ideally, you should find a way to render a Surface or SurfaceTexture with that library and create a new VideoSource, because that gives better performance.
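To make the second suggestion concrete, here is a pseudocode-level sketch of a custom video source that receives the encoder's SurfaceTexture and hands it to FaceUnity as a render target. The method names (create/start/stop/release/isRunning, modeled on RootEncoder's camera sources) and the FaceUnity hand-off are assumptions — verify them against the actual VideoSource base class in the RootEncoder version you use.

```kotlin
// Sketch only — check VideoSource's real signatures in your RootEncoder version.
class FaceUnitySource : VideoSource() {

    private var running = false

    override fun create(width: Int, height: Int, fps: Int, rotation: Int): Boolean {
        // Configure FaceUnity's output resolution/fps here (hypothetical step).
        return true
    }

    override fun start(surfaceTexture: SurfaceTexture) {
        // Hand this SurfaceTexture to FaceUnity as its render target so that
        // processed frames land directly on the encoder's input surface,
        // skipping any CPU-side buffer copy.
        running = true
    }

    override fun stop() {
        // Detach FaceUnity from the SurfaceTexture here.
        running = false
    }

    override fun release() { }

    override fun isRunning(): Boolean = running
}
```

The point of this shape is that the GPU renders straight into the surface the encoder reads from, which is the performance advantage mentioned above over BufferVideoSource.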
Yes, I can get the SurfaceTexture via getSurfaceTexture. How do I create a VideoSource from a SurfaceTexture? Could you give me an example? Thanks.
This is example code for the SurfaceView:
CameraRenderer has a function:
And I can get the SurfaceTexture from FUCamera:
Thanks @pedroSG94
Hello, after checking the library, I can't find a way to render to a SurfaceTexture properly, but maybe we can use the onRenderAfter callback like this:

//we are using BufferVideoSource to send data to the RootEncoder library as YUV images; the bitrate depends on your resolution. The value is equivalent to the one used in the prepareVideo method.
private val bufferVideoSource = BufferVideoSource(format = BufferVideoSource.Format.NV12, bitrate = 1200 * 1000)

//pack the planes into NV12
private fun toNv12(y: ByteArray, u: ByteArray, v: ByteArray): ByteBuffer {
    //NV12 is the full Y plane followed by U and V interleaved, U first (V first would be NV21)
    val nv12 = ByteBuffer.allocate(y.size + u.size + v.size)
    nv12.put(y)
    //U and V must have the same size according to the YUV 4:2:0 layout
    for (i in u.indices) {
        nv12.put(u[i])
        nv12.put(v[i])
    }
    return nv12
}

override fun onRenderAfter(
    outputData: FURenderOutputData,
    frameData: FURenderFrameData
) {
    val dataY = outputData.image?.buffer ?: return
    val dataU = outputData.image?.buffer1 ?: return
    val dataV = outputData.image?.buffer2 ?: return
    bufferVideoSource.setBuffer(toNv12(dataY, dataU, dataV))
}

This could have limited performance because you need to do this conversion on every frame. To do it the other way, we need to find a way to make the FaceUnity library output the image into a Surface or a SurfaceTexture, similar to playing a video into a Surface from a SurfaceView using the MediaPlayer class. That has better performance because you skip the buffer conversion and encoding path used in the BufferVideoSource class.
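As a sanity check on the packing above, here is a standalone version of the toNv12 helper in plain Kotlin (no Android or FaceUnity dependencies) run on toy plane values. Note that NV12 interleaves U before V in each chroma pair; V first would be NV21. The sample plane values are made up for illustration.

```kotlin
import java.nio.ByteBuffer

// Build an NV12 buffer: full Y plane first, then U and V interleaved (U before V).
fun toNv12(y: ByteArray, u: ByteArray, v: ByteArray): ByteBuffer {
    require(u.size == v.size) { "U and V planes must have the same size in 4:2:0" }
    val nv12 = ByteBuffer.allocate(y.size + u.size + v.size)
    nv12.put(y)
    for (i in u.indices) {
        nv12.put(u[i])
        nv12.put(v[i])
    }
    nv12.flip() // make the buffer readable from the start
    return nv12
}

fun main() {
    // 4x2 luma plane and 2x1 chroma planes (4:2:0 subsampling), toy values
    val y = byteArrayOf(1, 2, 3, 4, 5, 6, 7, 8)
    val u = byteArrayOf(10, 11)
    val v = byteArrayOf(20, 21)
    val nv12 = toNv12(y, u, v)
    val bytes = ByteArray(nv12.remaining()).also { nv12.get(it) }
    println(bytes.joinToString(","))
    // prints 1,2,3,4,5,6,7,8,10,20,11,21 — Y plane, then interleaved U,V pairs
}
```

If the colors come out wrong on the receiving end, the usual cause is exactly this U/V ordering (NV12 vs NV21), so this is the first thing worth checking.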
Hello,
I am using the RootEncoder library to stream content, typically from the phone's camera. Now, I need to apply some filters and effects, so I am integrating FaceUnity (FULiveDemoDroid).
FaceUnity renders using android.opengl.GLSurfaceView, and I would like to use this GLSurfaceView output for streaming with RootEncoder.
Implementation Details:
FaceUnity has a CameraRenderer class that extends BaseFURenderer and implements ICameraRenderer.
CameraRenderer uses a GLSurfaceView for rendering camera input with effects.
The camera is configured using FUCameraConfig, which sets properties like resolution, frame rate, and camera type.
The rendering process is managed by OnGlRendererListener, which provides callbacks such as onRenderBefore (for raw input data) and onRenderAfter (for processed data).
My Goal:
I want to take the output from GLSurfaceView (after FaceUnity has processed the camera feed) and stream it using RootEncoder.
Question:
How can I stream the GLSurfaceView output (after FaceUnity has processed the camera feed) using RootEncoder? Any guidance or sample code would be greatly appreciated!
Thank you.