
How to Stream GLSurfaceView Output from FaceUnity Using RootEncoder? #1768

Open · danganhhao opened this issue Mar 22, 2025 · 3 comments

@danganhhao
Hello,

I am using the RootEncoder library to stream content, typically from the phone's camera. Now, I need to apply some filters and effects, so I am integrating FaceUnity (FULiveDemoDroid).

FaceUnity renders using android.opengl.GLSurfaceView, and I would like to use this GLSurfaceView output for streaming with RootEncoder.

Implementation Details:

FaceUnity has a CameraRenderer class that extends BaseFURenderer and implements ICameraRenderer.

CameraRenderer uses a GLSurfaceView for rendering camera input with effects.

The camera is configured using FUCameraConfig, which sets properties like resolution, frame rate, and camera type.

The rendering process is managed by OnGlRendererListener, which provides callbacks such as onRenderBefore (for raw input data) and onRenderAfter (for processed data).

My Goal:
I want to take the output from GLSurfaceView (after FaceUnity has processed the camera feed) and stream it using RootEncoder.

Question:

  1. How can I access the processed frame data from FaceUnity's GLSurfaceView?
  2. What is the best way to pass this data to RootEncoder for streaming?

Any guidance or sample code would be greatly appreciated!
Thank you.

@pedroSG94
Owner

Hello,

If you have a way to get buffer data from that library, you can use BufferVideoSource to handle it.

Ideally, you should find a way to render to a Surface or SurfaceTexture with that library and create a new VideoSource, because this provides better performance.
If you have a code example working with SurfaceView or TextureView, we can try that approach.
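
For reference, this is roughly how a BufferVideoSource can be plugged into a GenericStream. This is only a sketch: the import paths and method signatures are taken from recent RootEncoder versions and may need adjusting for the version you use.

    import android.content.Context
    import com.pedro.common.ConnectChecker
    import com.pedro.library.generic.GenericStream
    import com.pedro.library.util.sources.audio.MicrophoneSource
    import com.pedro.library.util.sources.video.BufferVideoSource

    //sketch: stream NV12 buffers produced by an external library through RootEncoder
    //import paths and signatures may differ between RootEncoder versions
    fun startBufferStream(context: Context, connectChecker: ConnectChecker, url: String): BufferVideoSource {
        val videoSource = BufferVideoSource(format = BufferVideoSource.Format.NV12, bitrate = 1200 * 1000)
        val stream = GenericStream(context, connectChecker, videoSource, MicrophoneSource())
        if (stream.prepareVideo(1280, 720, 1200 * 1000) && stream.prepareAudio(32000, true, 128 * 1000)) {
            stream.startStream(url)
        }
        //feed each converted frame from the external renderer with videoSource.setBuffer(nv12Buffer)
        return videoSource
    }

Here 1280x720 and the audio settings are just example values.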

@danganhhao
Author

Yep, I can get the SurfaceTexture. How can I create a VideoSource with a SurfaceTexture? Could you give me an example? Thanks.

Here is example code for the SurfaceView setup:

mCameraRenderer = new CameraRenderer(mSurfaceView, getCameraConfig(), mOnGlRendererListener);

CameraRenderer has a function:

    override fun updateTexImage() {
        val surfaceTexture = fUCamera.getSurfaceTexture()
        try {
            surfaceTexture?.updateTexImage()
        } catch (e: Exception) {
            e.printStackTrace()
        }
    }

And I can get the SurfaceTexture from FUCamera:

    override fun getSurfaceTexture(): SurfaceTexture? {
        return mFaceUnityCamera?.mSurfaceTexture
    }

Thanks @pedroSG94

@pedroSG94
Owner

pedroSG94 commented Mar 22, 2025

Hello,

After checking the library, I can't find a way to render to a SurfaceTexture properly, but maybe we can use the onRenderAfter callback like this:

    //we are using BufferVideoSource to send data to the RootEncoder library as YUV images; the bitrate depends on your resolution. The value is equivalent to the one passed to the prepareVideo method.
    private val bufferVideoSource = BufferVideoSource(format = BufferVideoSource.Format.NV12, bitrate = 1200 * 1000)

    //convert planar Y, U and V buffers to NV12
    private fun toNv12(y: ByteArray, u: ByteArray, v: ByteArray): ByteBuffer {
        //NV12 is the Y plane followed by U and V interleaved (U first; V first would be NV21)
        val nv12 = ByteBuffer.allocate(y.size + u.size + v.size)
        nv12.put(y)
        //U and V must have the same size according to the YUV 4:2:0 layout
        for (i in u.indices) {
            nv12.put(u[i])
            nv12.put(v[i])
        }
        return nv12
    }

    override fun onRenderAfter(
        outputData: FURenderOutputData,
        frameData: FURenderFrameData
    ) {
        val dataY = outputData.image?.buffer ?: return
        val dataU = outputData.image?.buffer1 ?: return
        val dataV = outputData.image?.buffer2 ?: return
        bufferVideoSource.setBuffer(toNv12(dataY, dataU, dataV))
    }

This could have limited performance because you need to do this conversion on every frame.

To do it the other way, we need to find a way for the FaceUnity library to output the image into a Surface or a SurfaceTexture, similar to how you play a video into a Surface obtained from a SurfaceView with the MediaPlayer class. This has better performance because you skip the buffer conversion and the buffer path used by the BufferVideoSource class.
Basically, we want to receive the image into the SurfaceTexture provided by the start method of the VideoSource class.
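
As a rough sketch of that idea (the overridden methods below are assumptions based on how the built-in video sources are structured, so check them against the VideoSource class in your RootEncoder version), a custom source could simply hand out the SurfaceTexture it receives in start so the FaceUnity side can render into it:

    import android.graphics.SurfaceTexture
    import android.view.Surface
    import com.pedro.library.util.sources.video.VideoSource //package may differ by version

    //sketch of a VideoSource that only exposes the SurfaceTexture provided by RootEncoder
    //so an external renderer (FaceUnity) can draw the processed frames into it
    class ExternalRenderSource : VideoSource() {

        //the FaceUnity side would render into this (for example wrapped in a Surface)
        var outputSurface: Surface? = null
            private set

        private var running = false

        override fun create(width: Int, height: Int, fps: Int, rotation: Int): Boolean {
            //nothing to open here, the external renderer owns the camera and the effects
            return true
        }

        override fun start(surfaceTexture: SurfaceTexture) {
            //whatever is rendered into this SurfaceTexture is what gets encoded and streamed
            outputSurface = Surface(surfaceTexture)
            running = true
        }

        override fun stop() {
            running = false
            outputSurface?.release()
            outputSurface = null
        }

        override fun release() { }

        override fun isRunning(): Boolean = running
    }

The missing piece is on the FaceUnity side: the library would need to render its processed output into that Surface (or SurfaceTexture) instead of only into the GLSurfaceView, and that is exactly the part I can't find in their API.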
