How to Push Android Camera2 Data to a Server over RTMP
1. Camera2 Architecture
When Google released Android 5.0, the Android Camera API was upgraded to API2 (android.hardware.camera2), and the previously used API1 (android.hardware.camera) was marked as deprecated.
Camera API2 differs substantially from API1 and is designed to work together with HAL3. It supports many features API1 does not, such as per-frame control of capture settings, RAW sensor output, and burst capture.
In terms of API architecture, Camera2 is very different from the old Camera API: the app and the underlying camera can be thought of as being connected by a pipeline.
This pipeline concept links the Android device and the camera: the system sends Capture requests to the camera, and the camera returns CameraMetadata. All of this takes place inside a session called a CameraCaptureSession.
The main classes in the camera2 package are as follows.
CameraManager is the high-level manager of all camera devices (CameraDevice), while each CameraDevice is responsible for creating its own CameraCaptureSession and building CaptureRequests.
CameraCharacteristics is the class that describes a CameraDevice's properties; if a comparison must be made, it resembles the old CameraInfo.
In more detail:
- CameraManager sits at the top level: it detects and enumerates all cameras, exposes their characteristics, and opens a specified camera given a CameraDevice.StateCallback.
- CameraDevice is the abstraction for an opened camera: it reports its state through CameraDevice.StateCallback and creates CameraCaptureSession and CaptureRequest objects.
- CameraCaptureSession describes one image-capture session: it reports session state through CameraCaptureSession.StateCallback, reports per-capture progress through CameraCaptureSession.CaptureCallback, and submits CaptureRequests for processing.
- CaptureRequest can be viewed as a "JavaBean": it describes the configuration you want applied to a given capture.
- The three callbacks above are used to monitor their corresponding states.
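To make these roles concrete, here is a minimal sketch (not from the original post) that enumerates the cameras through CameraManager and reads each one's CameraCharacteristics:

import android.content.Context;
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCharacteristics;
import android.hardware.camera2.CameraManager;
import android.util.Log;

// Sketch: CameraManager sits at the top and knows every camera;
// CameraCharacteristics describes a single CameraDevice, much like the old CameraInfo.
public final class CameraEnumerator {
    private static final String TAG = "CameraEnumerator";

    public static void dumpCameras(Context context) throws CameraAccessException {
        CameraManager manager = (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
        for (String id : manager.getCameraIdList()) {
            CameraCharacteristics chars = manager.getCameraCharacteristics(id);
            Integer facing = chars.get(CameraCharacteristics.LENS_FACING);
            boolean front = facing != null && facing == CameraCharacteristics.LENS_FACING_FRONT;
            Log.i(TAG, "camera " + id + (front ? " (front)" : " (back/external)"));
        }
    }
}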
2. Official Description
The android.hardware.camera2 package provides an interface to individual camera devices connected to an Android device. It replaces the deprecated Camera class.
This package models a camera device as a pipeline, which takes in input requests for capturing a single frame, captures the single image per the request, and then outputs one capture result metadata packet, plus a set of output image buffers for the request. The requests are processed in-order, and multiple requests can be in flight at once. Since the camera device is a pipeline with multiple stages, having multiple requests in flight is required to maintain full framerate on most Android devices.
To enumerate, query, and open available camera devices, obtain a CameraManager instance.
Individual CameraDevices provide a set of static property information that describes the hardware device and the available settings and output parameters for the device. This information is provided through the CameraCharacteristics object, and is available through getCameraCharacteristics(String).
To capture or stream images from a camera device, the application must first create a camera capture session with a set of output Surfaces for use with the camera device, with createCaptureSession(SessionConfiguration). Each Surface has to be pre-configured with an appropriate size and format (if applicable) to match the sizes and formats available from the camera device. A target Surface can be obtained from a variety of classes, including SurfaceView, SurfaceTexture via Surface(SurfaceTexture), MediaCodec, MediaRecorder, Allocation, and ImageReader.
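For the RTMP use case in this article, the output Surface of interest comes from an ImageReader. Below is a minimal sketch of pre-configuring one with a device-supported size and the YUV_420_888 format; taking the first advertised size is a simplification, not a recommendation:

import android.graphics.ImageFormat;
import android.hardware.camera2.CameraCharacteristics;
import android.hardware.camera2.params.StreamConfigurationMap;
import android.media.ImageReader;
import android.util.Size;

// Sketch: build an ImageReader whose Surface is handed to createCaptureSession().
// YUV_420_888 matches the format consumed later by SmartPublisherOnImageYUV420888().
final class ReaderFactory {
    static ImageReader createYuvReader(CameraCharacteristics chars) {
        StreamConfigurationMap map =
                chars.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
        // Pick the first advertised YUV_420_888 size; production code would choose by aspect/fps.
        Size size = map.getOutputSizes(ImageFormat.YUV_420_888)[0];
        return ImageReader.newInstance(
                size.getWidth(), size.getHeight(), ImageFormat.YUV_420_888, /*maxImages=*/2);
    }
}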
Generally, camera preview images are sent to SurfaceView or TextureView (via its SurfaceTexture). Capture of JPEG images or RAW buffers for DngCreator can be done with ImageReader with the JPEG and RAW_SENSOR formats. Application-driven processing of camera data in RenderScript, OpenGL ES, or directly in managed or native code is best done through Allocation with a YUV Type, SurfaceTexture, and ImageReader with a YUV_420_888 format, respectively.
The application then needs to construct a CaptureRequest, which defines all the capture parameters needed by a camera device to capture a single image. The request also lists which of the configured output Surfaces should be used as targets for this capture. The CameraDevice has a factory method for creating a request builder for a given use case, which is optimized for the Android device the application is running on.
Once the request has been set up, it can be handed to the active capture session either for a one-shot capture or for an endlessly repeating use. Both methods also have a variant that accepts a list of requests to use as a burst capture / repeating burst. Repeating requests have a lower priority than captures, so a request submitted through capture() while there's a repeating request configured will be captured before any new instances of the currently repeating (burst) capture will begin capture.
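As a hedged illustration of this priority rule, the sketch below assumes an already-configured session plus two pre-built requests (previewRequest, stillRequest are placeholder names):

import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCaptureSession;
import android.hardware.camera2.CaptureRequest;
import android.os.Handler;

// Sketch: repeating requests keep the pipeline full; capture() interleaves a
// one-shot request that takes priority over the next repeating instance.
final class CaptureScheduling {
    static void run(CameraCaptureSession session, CaptureRequest previewRequest,
                    CaptureRequest stillRequest, Handler handler) throws CameraAccessException {
        session.setRepeatingRequest(previewRequest, null, handler); // endless preview stream
        session.capture(stillRequest, null, handler);               // one-shot, higher priority
    }
}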
After processing a request, the camera device will produce a TotalCaptureResult object, which contains information about the state of the camera device at time of capture, and the final settings used. These may vary somewhat from the request, if rounding or resolving contradictory parameters was necessary. The camera device will also send a frame of image data into each of the output Surfaces included in the request. These are produced asynchronously relative to the output CaptureResult, sometimes substantially later.
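The metadata packet arrives through a CameraCaptureSession.CaptureCallback; here is a minimal sketch (an illustration, not from the original post) that reads two final settings from the TotalCaptureResult:

import android.hardware.camera2.CameraCaptureSession;
import android.hardware.camera2.CaptureRequest;
import android.hardware.camera2.CaptureResult;
import android.hardware.camera2.TotalCaptureResult;
import android.util.Log;

// Sketch: the TotalCaptureResult carries the settings the device actually used,
// which may differ slightly from what the request asked for.
final class ResultLogger extends CameraCaptureSession.CaptureCallback {
    @Override
    public void onCaptureCompleted(CameraCaptureSession session,
                                   CaptureRequest request, TotalCaptureResult result) {
        Long exposureNs = result.get(CaptureResult.SENSOR_EXPOSURE_TIME);
        Integer afState = result.get(CaptureResult.CONTROL_AF_STATE);
        Log.d("ResultLogger", "exposure=" + exposureNs + "ns afState=" + afState);
    }
}

An instance of this class can be passed as the callback argument of capture() or setRepeatingRequest().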
3. Basic Camera2 API Call Flow
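In outline: obtain the CameraManager, open a CameraDevice, wait for StateCallback.onOpened(), create a CameraCaptureSession over the target Surfaces, and start a repeating CaptureRequest once onConfigured() fires. The sketch below shows this flow under simplifying assumptions (permission checks, error recovery, and threading setup are omitted; previewSurface and handler are supplied by the caller):

import android.content.Context;
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCaptureSession;
import android.hardware.camera2.CameraDevice;
import android.hardware.camera2.CameraManager;
import android.hardware.camera2.CaptureRequest;
import android.os.Handler;
import android.view.Surface;
import java.util.Arrays;

// Minimal sketch of the Camera2 call flow: enumerate -> open -> create session -> repeat.
public class Camera2FlowSketch {

    public void startPreview(Context context, final Surface previewSurface,
                             final Handler handler) throws CameraAccessException {
        CameraManager manager = (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
        String cameraId = manager.getCameraIdList()[0]; // pick the first camera for simplicity

        manager.openCamera(cameraId, new CameraDevice.StateCallback() {
            @Override
            public void onOpened(CameraDevice camera) {
                try {
                    // Build a repeating preview request targeting our Surface.
                    final CaptureRequest.Builder builder =
                            camera.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
                    builder.addTarget(previewSurface);

                    camera.createCaptureSession(Arrays.asList(previewSurface),
                            new CameraCaptureSession.StateCallback() {
                                @Override
                                public void onConfigured(CameraCaptureSession session) {
                                    try {
                                        // The endlessly repeating request drives the stream.
                                        session.setRepeatingRequest(builder.build(), null, handler);
                                    } catch (CameraAccessException e) {
                                        e.printStackTrace();
                                    }
                                }

                                @Override
                                public void onConfigureFailed(CameraCaptureSession session) { }
                            }, handler);
                } catch (CameraAccessException e) {
                    e.printStackTrace();
                }
            }

            @Override
            public void onDisconnected(CameraDevice camera) { camera.close(); }

            @Override
            public void onError(CameraDevice camera, int error) { camera.close(); }
        }, handler);
    }
}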
4. Pushing Camera2 Data over RTMP
The raw frames are obtained through OnImageAvailableListenerImpl. For the publishing side, this article takes the general-purpose push interface of the daniulive SDK (https://github.com/daniulive/SmarterStreaming/) as an example: once a frame is available, call SmartPublisherOnImageYUV420888() to hand the data down; the native layer post-processes it, then encodes and transmits it.
Interface description:
/**
 * Interface provided specifically for android.media.Image frames in the
 * android.graphics.ImageFormat.YUV_420_888 format.
 *
 * @param width              must be a multiple of 8
 * @param height             must be a multiple of 8
 * @param crop_left          horizontal coordinate of the top-left crop corner, usually filled from android.media.Image.getCropRect()
 * @param crop_top           vertical coordinate of the top-left crop corner, usually filled from android.media.Image.getCropRect()
 * @param crop_width         must be a multiple of 8; pass 0 to ignore; usually filled from android.media.Image.getCropRect()
 * @param crop_height        must be a multiple of 8; pass 0 to ignore; usually filled from android.media.Image.getCropRect()
 * @param y_plane            corresponds to android.media.Image.Plane[0].getBuffer()
 * @param y_row_stride       corresponds to android.media.Image.Plane[0].getRowStride()
 * @param u_plane            corresponds to android.media.Image.Plane[1].getBuffer()
 * @param v_plane            corresponds to android.media.Image.Plane[2].getBuffer()
 * @param uv_row_stride      corresponds to android.media.Image.Plane[1].getRowStride()
 * @param uv_pixel_stride    corresponds to android.media.Image.Plane[1].getPixelStride()
 * @param rotation_degree    clockwise rotation; must be 0, 90, 180, or 270
 * @param is_vertical_flip   vertical flip; 0 = no flip, 1 = flip
 * @param is_horizontal_flip horizontal flip; 0 = no flip, 1 = flip
 * @param scale_width        scaled width; must be a multiple of 8; 0 = no scaling
 * @param scale_height       scaled height; must be a multiple of 8; 0 = no scaling
 * @param scale_filter_mode  scaling quality, in the range [1,3]; pass 0 for the default (speed-oriented) mode
 * @return 0 if successful
 */
public native int SmartPublisherOnImageYUV420888(long handle, int width, int height,
        int crop_left, int crop_top, int crop_width, int crop_height,
        ByteBuffer y_plane, int y_row_stride,
        ByteBuffer u_plane, ByteBuffer v_plane, int uv_row_stride, int uv_pixel_stride,
        int rotation_degree, int is_vertical_flip, int is_horizontal_flip,
        int scale_width, int scale_height, int scale_filter_mode);

The listener that pulls frames off the ImageReader:

private class OnImageAvailableListenerImpl implements ImageReader.OnImageAvailableListener {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Image image = reader.acquireLatestImage();
        if (image != null) {
            if (camera2Listener != null) {
                camera2Listener.onCameraImageData(image);
            }
            image.close();
        }
    }
}

And the callback that forwards each frame to the publisher:

@Override
public void onCameraImageData(Image image) {
    synchronized (this) {
        Rect crop_rect = image.getCropRect();
        if (isPushingRtmp || isRTSPPublisherRunning) {
            if (libPublisher != null) {
                Image.Plane[] planes = image.getPlanes();
                // The scale width/height may be passed as 0 to keep the original video size.
                libPublisher.SmartPublisherOnImageYUV420888(publisherHandle,
                        image.getWidth(), image.getHeight(),
                        crop_rect.left, crop_rect.top, crop_rect.width(), crop_rect.height(),
                        planes[0].getBuffer(), planes[0].getRowStride(),
                        planes[1].getBuffer(), planes[2].getBuffer(),
                        planes[1].getRowStride(), planes[1].getPixelStride(),
                        displayOrientation, 0, 0,
                        videoWidth, videoHeight, 1);
            }
        }
    }
}
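To tie the pieces together, the reader's Surface must be one of the session's targets, and the listener above must be registered on the reader. A hedged wiring sketch follows; the parameter names are assumptions for illustration, not part of the SDK:

import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCaptureSession;
import android.hardware.camera2.CaptureRequest;
import android.media.ImageReader;
import android.os.Handler;

// Sketch: route camera frames into the ImageReader so onImageAvailable() fires
// per frame, which in turn calls SmartPublisherOnImageYUV420888() as shown above.
final class PublisherWiring {
    static void wire(ImageReader imageReader,
                     ImageReader.OnImageAvailableListener listener, // e.g. OnImageAvailableListenerImpl
                     CameraCaptureSession session,
                     CaptureRequest.Builder builder,
                     Handler handler) throws CameraAccessException {
        imageReader.setOnImageAvailableListener(listener, handler);
        builder.addTarget(imageReader.getSurface()); // make the reader a capture target
        session.setRepeatingRequest(builder.build(), null, handler);
    }
}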
5. Camera2 Autofocus API Notes

On CONTROL_AF_MODE:
This controls whether auto-focus (AF) is currently enabled, and which mode it is set to.
It only applies when android.control.mode = AUTO and the lens is not fixed-focus (i.e. android.lens.info.minimumFocusDistance > 0).
When aeMode is OFF, AF behavior is device dependent.
It is recommended to lock AF with android.control.afTrigger before setting android.control.aeMode to OFF, or to set the AF mode to OFF while AE is off.
Its values include the following (see the usage sketch after the list):
CONTINUOUS_VIDEO: in this mode the AF algorithm continuously adjusts the lens position to try to provide a constantly in-focus image stream; the trade-off is that focus movement during refocusing is slower.
The focusing behavior should be suitable for good quality video recording; typically this means slower focus movement and no overshoots. When the AF trigger is not involved, the AF algorithm should start in INACTIVE state, and then transition into PASSIVE_SCAN and PASSIVE_FOCUSED states as appropriate. When the AF trigger is activated, the algorithm should immediately transition into AF_FOCUSED or AF_NOT_FOCUSED as appropriate, and lock the lens position until a cancel AF trigger is received.
Once a cancel is received, the algorithm should transition back to INACTIVE and resume passive scanning. Note that this behavior differs from CONTINUOUS_PICTURE, where an in-progress PASSIVE_SCAN must be canceled immediately.
CONTINUOUS_PICTURE: in this mode the AF algorithm also continuously adjusts the lens position to keep the image stream in focus, but focuses as fast as possible; this mode is the recommended choice.
The focusing behavior should be suitable for still image capture; typically this means focusing as fast as possible. When the AF trigger is not involved, the AF algorithm should start in INACTIVE state, and then transition into PASSIVE_SCAN and PASSIVE_FOCUSED states as appropriate as it attempts to maintain focus. When the AF trigger is activated, the algorithm should finish its PASSIVE_SCAN if active, and then transition into AF_FOCUSED or AF_NOT_FOCUSED as appropriate, and lock the lens position until a cancel AF trigger is received.
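Putting the two modes to use: a minimal sketch (not from the original post) that selects a continuous AF mode on the request builder and then locks/unlocks focus with the AF trigger, mirroring the state transitions described above. builder, session, and handler are assumed to come from the earlier flow sketch:

import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCaptureSession;
import android.hardware.camera2.CameraMetadata;
import android.hardware.camera2.CaptureRequest;
import android.os.Handler;

// Sketch: pick a continuous AF mode, then lock focus with an AF trigger and
// release it with a cancel, mirroring the documented state transitions.
final class AutoFocusControl {
    static void configure(CaptureRequest.Builder builder, CameraCaptureSession session,
                          Handler handler, boolean forVideo) throws CameraAccessException {
        builder.set(CaptureRequest.CONTROL_AF_MODE,
                forVideo ? CameraMetadata.CONTROL_AF_MODE_CONTINUOUS_VIDEO
                         : CameraMetadata.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
        session.setRepeatingRequest(builder.build(), null, handler);
    }

    static void lockFocus(CaptureRequest.Builder builder, CameraCaptureSession session,
                          Handler handler) throws CameraAccessException {
        // One-shot trigger: the algorithm transitions to AF_FOCUSED/AF_NOT_FOCUSED and locks.
        builder.set(CaptureRequest.CONTROL_AF_TRIGGER, CameraMetadata.CONTROL_AF_TRIGGER_START);
        session.capture(builder.build(), null, handler);
        builder.set(CaptureRequest.CONTROL_AF_TRIGGER, CameraMetadata.CONTROL_AF_TRIGGER_IDLE);
    }

    static void unlockFocus(CaptureRequest.Builder builder, CameraCaptureSession session,
                            Handler handler) throws CameraAccessException {
        // Cancel trigger: the algorithm returns to INACTIVE and resumes passive scanning.
        builder.set(CaptureRequest.CONTROL_AF_TRIGGER, CameraMetadata.CONTROL_AF_TRIGGER_CANCEL);
        session.capture(builder.build(), null, handler);
        builder.set(CaptureRequest.CONTROL_AF_TRIGGER, CameraMetadata.CONTROL_AF_TRIGGER_IDLE);
    }
}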
Summary

Camera2 models the camera as a request/result pipeline: the app opens a CameraDevice through CameraManager, configures a CameraCaptureSession whose output Surfaces include an ImageReader, and submits repeating CaptureRequests. Each YUV_420_888 frame delivered to the ImageReader can then be handed to a native publisher interface such as SmartPublisherOnImageYUV420888(), which post-processes, encodes, and pushes the stream over RTMP.