A Brief Look at How WebRTC Captures and Sends Video Data


Preface

This article is based on the open-source project PineAppRtc: https://github.com/thfhongfeng/PineAppRtc

A project requirement had us pushing a video stream out through WebRTC, so I looked into how WebRTC captures video data, processes it, and sends it; this article is the result.

Capture and Sending

When WebRTC is used for a real-time call, once the two peers have connected, a PeerConnection object is created from the session parameters. The concrete code lives in the PeerConnectionClient class, which you implement yourself; this connection object is what publishes and pulls the media streams.
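As a rough sketch (iceServers, pcConstraints and pcObserver stand in for setup elided here, and the exact createPeerConnection overload depends on the bundled WebRTC version), creating the connection looks something like this:

PeerConnection.RTCConfiguration rtcConfig =
        new PeerConnection.RTCConfiguration(iceServers); // iceServers: List<PeerConnection.IceServer>
mPeerConnection = mFactory.createPeerConnection(rtcConfig, pcConstraints, pcObserver);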

Then a MediaStream object is created and added to the PeerConnection:

mPeerConnection.addStream(mMediaStream);
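The MediaStream itself typically comes from the PeerConnectionFactory; a one-line sketch (the "ARDAMS" label is just the stream id used by the WebRTC demo apps, any string will do):

mMediaStream = mFactory.createLocalMediaStream("ARDAMS");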

The MediaStream is what carries the media. You can add several tracks to a MediaStream object, for example an audio track and a video track:

mMediaStream.addTrack(createVideoTrack(mVideoCapturer));
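An audio track is added the same way; a sketch for completeness (AUDIO_TRACK_ID is an arbitrary string id and mAudioSource a field, both hypothetical names):

mAudioSource = mFactory.createAudioSource(new MediaConstraints());
mMediaStream.addTrack(mFactory.createAudioTrack(AUDIO_TRACK_ID, mAudioSource));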

Back to the video track: mVideoCapturer is a VideoCapturer object that handles video capture; it is essentially a wrapper around the camera.

VideoCapturer is an interface with many implementations; this article uses CameraCapturer and its subclass Camera1Capturer as the example.
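Before diving into the internals, here is a sketch of how such a capturer is typically obtained (the front-camera choice is arbitrary; passing false to the enumerator keeps the byte-buffer path this article follows):

CameraEnumerator enumerator = new Camera1Enumerator(false); // false: deliver byte[] buffers rather than textures
VideoCapturer capturer = null;
for (String name : enumerator.getDeviceNames()) {
    if (enumerator.isFrontFacing(name)) {
        capturer = enumerator.createCapturer(name, null); // null: no CameraEventsHandler
        break;
    }
}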

Let's continue with the createVideoTrack function:

private VideoTrack createVideoTrack(VideoCapturer capturer) {
    mVideoSource = mFactory.createVideoSource(capturer);
    capturer.startCapture(mVideoWidth, mVideoHeight, mVideoFps);
    mLocalVideoTrack = mFactory.createVideoTrack(VIDEO_TRACK_ID, mVideoSource);
    mLocalVideoTrack.setEnabled(mRenderVideo);
    mLocalVideoTrack.addRenderer(new VideoRenderer(mLocalRender));
    return mLocalVideoTrack;
}

As you can see, createVideoSource wraps the VideoCapturer into a VideoSource object, and that VideoSource is then used to create the track, a VideoTrack.

Let's look at the createVideoSource function:

public VideoSource createVideoSource(VideoCapturer capturer) {
    org.webrtc.EglBase.Context eglContext = this.localEglbase == null ? null : this.localEglbase.getEglBaseContext();
    SurfaceTextureHelper surfaceTextureHelper = SurfaceTextureHelper.create("VideoCapturerThread", eglContext);
    long nativeAndroidVideoTrackSource = nativeCreateVideoSource(this.nativeFactory, surfaceTextureHelper, capturer.isScreencast());
    CapturerObserver capturerObserver = new AndroidVideoTrackSourceObserver(nativeAndroidVideoTrackSource);
    capturer.initialize(surfaceTextureHelper, ContextUtils.getApplicationContext(), capturerObserver);
    return new VideoSource(nativeAndroidVideoTrackSource);
}

Here a new AndroidVideoTrackSourceObserver object is created; it is an implementation of the CapturerObserver interface. It is then passed into the VideoCapturer's initialize function, and in CameraCapturer's implementation of initialize it is stored in the VideoCapturer's capturerObserver field.

Going back to PeerConnectionClient: it also calls the VideoCapturer's startCapture function. Here is its implementation in CameraCapturer:

public void startCapture(int width, int height, int framerate) {
    Logging.d("CameraCapturer", "startCapture: " + width + "x" + height + "@" + framerate);
    if (this.applicationContext == null) {
        throw new RuntimeException("CameraCapturer must be initialized before calling startCapture.");
    } else {
        Object var4 = this.stateLock;
        synchronized(this.stateLock) {
            if (!this.sessionOpening && this.currentSession == null) {
                this.width = width;
                this.height = height;
                this.framerate = framerate;
                this.sessionOpening = true;
                this.openAttemptsRemaining = 3;
                this.createSessionInternal(0, (MediaRecorder)null);
            } else {
                Logging.w("CameraCapturer", "Session already open");
            }
        }
    }
}

which ends up calling createSessionInternal:

private void createSessionInternal(int delayMs, final MediaRecorder mediaRecorder) {
    this.uiThreadHandler.postDelayed(this.openCameraTimeoutRunnable, (long)(delayMs + 10000));
    this.cameraThreadHandler.postDelayed(new Runnable() {
        public void run() {
            CameraCapturer.this.createCameraSession(CameraCapturer.this.createSessionCallback,
                    CameraCapturer.this.cameraSessionEventsHandler, CameraCapturer.this.applicationContext,
                    CameraCapturer.this.surfaceHelper, mediaRecorder, CameraCapturer.this.cameraName,
                    CameraCapturer.this.width, CameraCapturer.this.height, CameraCapturer.this.framerate);
        }
    }, (long)delayMs);
}

which in turn calls createCameraSession. In Camera1Capturer that function looks like this:

protected void createCameraSession(CreateSessionCallback createSessionCallback, Events events,
        Context applicationContext, SurfaceTextureHelper surfaceTextureHelper,
        MediaRecorder mediaRecorder, String cameraName, int width, int height, int framerate) {
    Camera1Session.create(createSessionCallback, events, this.captureToTexture || mediaRecorder != null,
            applicationContext, surfaceTextureHelper, mediaRecorder,
            Camera1Enumerator.getCameraIndex(cameraName), width, height, framerate);
}

So a Camera1Session is created. This class is what actually drives the camera, and here we finally meet the familiar Camera API, in the listenForBytebufferFrames function:

private void listenForBytebufferFrames() {
    this.camera.setPreviewCallbackWithBuffer(new PreviewCallback() {
        public void onPreviewFrame(byte[] data, Camera callbackCamera) {
            Camera1Session.this.checkIsOnCameraThread();
            if (callbackCamera != Camera1Session.this.camera) {
                Logging.e("Camera1Session", "Callback from a different camera. This should never happen.");
            } else if (Camera1Session.this.state != Camera1Session.SessionState.RUNNING) {
                Logging.d("Camera1Session", "Bytebuffer frame captured but camera is no longer running.");
            } else {
                long captureTimeNs = TimeUnit.MILLISECONDS.toNanos(SystemClock.elapsedRealtime());
                if (!Camera1Session.this.firstFrameReported) {
                    int startTimeMs = (int)TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - Camera1Session.this.constructionTimeNs);
                    Camera1Session.camera1StartTimeMsHistogram.addSample(startTimeMs);
                    Camera1Session.this.firstFrameReported = true;
                }
                Camera1Session.this.events.onByteBufferFrameCaptured(Camera1Session.this, data,
                        Camera1Session.this.captureFormat.width, Camera1Session.this.captureFormat.height,
                        Camera1Session.this.getFrameOrientation(), captureTimeNs);
                Camera1Session.this.camera.addCallbackBuffer(data);
            }
        }
    });
}

Once the preview callback onPreviewFrame delivers a frame of video data, events.onByteBufferFrameCaptured is called. This events object was passed in at create time; tracing back through the flow above shows that it is the cameraSessionEventsHandler inside CameraCapturer, whose onByteBufferFrameCaptured looks like this:

public void onByteBufferFrameCaptured(CameraSession session, byte[] data, int width, int height, int rotation, long timestamp) {
    CameraCapturer.this.checkIsOnCameraThread();
    synchronized(CameraCapturer.this.stateLock) {
        if (session != CameraCapturer.this.currentSession) {
            Logging.w("CameraCapturer", "onByteBufferFrameCaptured from another session.");
        } else {
            if (!CameraCapturer.this.firstFrameObserved) {
                CameraCapturer.this.eventsHandler.onFirstFrameAvailable();
                CameraCapturer.this.firstFrameObserved = true;
            }
            CameraCapturer.this.cameraStatistics.addFrame();
            CameraCapturer.this.capturerObserver.onByteBufferFrameCaptured(data, width, height, rotation, timestamp);
        }
    }
}

This invokes capturerObserver.onByteBufferFrameCaptured. The capturerObserver is the AndroidVideoTrackSourceObserver passed in earlier via initialize, and its onByteBufferFrameCaptured is:

public void onByteBufferFrameCaptured(byte[] data, int width, int height, int rotation, long timeStamp) {
    this.nativeOnByteBufferFrameCaptured(this.nativeSource, data, data.length, width, height, rotation, timeStamp);
}

which simply hands the frame off to a native function. That completes the Java-side flow; the data is presumably processed and sent on the native side.

The key piece here is really the VideoCapturer; besides CameraCapturer and its subclasses there are other implementations such as FileVideoCapturer.
If we need to send raw byte[] data directly, we can implement a custom VideoCapturer, hold on to its capturerObserver, and call its onByteBufferFrameCaptured ourselves, as sketched below.
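A minimal sketch of such a capturer, assuming the pre-M68 Android SDK that PineAppRtc bundles (where VideoCapturer.CapturerObserver still exposes onByteBufferFrameCaptured and frames are NV21 byte arrays); the class name and pushFrame are made up for illustration:

import android.content.Context;
import android.os.SystemClock;
import java.util.concurrent.TimeUnit;
import org.webrtc.SurfaceTextureHelper;
import org.webrtc.VideoCapturer;

// Hypothetical capturer that lets application code push raw NV21 frames into WebRTC.
public class ExternalVideoCapturer implements VideoCapturer {
    private CapturerObserver capturerObserver; // the AndroidVideoTrackSourceObserver handed to us

    @Override
    public void initialize(SurfaceTextureHelper helper, Context context, CapturerObserver observer) {
        this.capturerObserver = observer; // keep it so pushFrame() can forward frames later
    }

    @Override
    public void startCapture(int width, int height, int framerate) {
        capturerObserver.onCapturerStarted(true); // no real device to open, just report success
    }

    // Our own entry point: call this whenever a new NV21 frame is available.
    public void pushFrame(byte[] nv21, int width, int height, int rotation) {
        if (capturerObserver != null) {
            // Same timestamp convention as Camera1Session above.
            long captureTimeNs = TimeUnit.MILLISECONDS.toNanos(SystemClock.elapsedRealtime());
            capturerObserver.onByteBufferFrameCaptured(nv21, width, height, rotation, captureTimeNs);
        }
    }

    @Override public void stopCapture() throws InterruptedException {}
    @Override public void changeCaptureFormat(int width, int height, int framerate) {}
    @Override public void dispose() {}
    @Override public boolean isScreencast() { return false; }
}

With this in place, the mVideoCapturer in createVideoTrack above can be replaced by an ExternalVideoCapturer, and every frame produced elsewhere is fed into the pipeline through pushFrame.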

