Android MediaRecorder System Architecture
I analyzed the Camera implementation earlier; now let's look at how MediaRecorder is implemented. I won't pay much attention to its layering here — what I care about is the logic.
- APP layer: /path/to/aosp/frameworks/base/media/java/android/media/MediaRecorder.java
- JNI layer: /path/to/aosp/frameworks/base/media/jni/android_media_MediaRecorder.cpp

The JNI layer calls into the native-layer MediaRecorder (which is the BnMediaRecorderClient):

- header: /path/to/aosp/frameworks/av/include/media/mediarecorder.h
- implementation: /path/to/aosp/frameworks/av/media/libmedia/mediarecorder.cpp
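The Java-level API maps almost one-to-one onto this native class. As a rough sketch of how a recording session drives it — a hedged outline with error handling elided and the helper name mine, not verbatim AOSP code:

```cpp
#include <media/mediarecorder.h>

using namespace android;

// Hypothetical helper showing the canonical call sequence; the Java
// android.media.MediaRecorder walks through the same state machine.
void recordSketch(int fd) {
    sp<MediaRecorder> recorder = new MediaRecorder();
    recorder->setVideoSource(VIDEO_SOURCE_CAMERA);
    recorder->setAudioSource(AUDIO_SOURCE_CAMCORDER);
    recorder->setOutputFormat(OUTPUT_FORMAT_MPEG_4);
    recorder->setVideoEncoder(VIDEO_ENCODER_H264);
    recorder->setAudioEncoder(AUDIO_ENCODER_AAC);
    recorder->setOutputFile(fd, 0, 0);  // fd/offset/length variant
    recorder->prepare();
    recorder->start();                  // the call we trace below
    // ... record for a while ...
    recorder->stop();
    recorder->release();
}
```

Construction of that MediaRecorder object is where the Binder plumbing begins: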
```cpp
MediaRecorder::MediaRecorder() : mSurfaceMediaSource(NULL)
{
    ALOGV("constructor");

    const sp<IMediaPlayerService>& service(getMediaPlayerService());
    if (service != NULL) {
        mMediaRecorder = service->createMediaRecorder(getpid());
    }
    if (mMediaRecorder != NULL) {
        mCurrentState = MEDIA_RECORDER_IDLE;
    }

    doCleanUp();
}
```
getMediaPlayerService() lives in /path/to/aosp/frameworks/av/include/media/IMediaDeathNotifier.h.
Once it has the MediaPlayerService (a BpMediaPlayerService), the constructor calls createMediaRecorder() on IMediaPlayerService, which on the service side runs:
```cpp
sp<IMediaRecorder> MediaPlayerService::createMediaRecorder(pid_t pid)
{
    sp<MediaRecorderClient> recorder = new MediaRecorderClient(this, pid);
    wp<MediaRecorderClient> w = recorder;
    Mutex::Autolock lock(mLock);
    mMediaRecorderClients.add(w);
    ALOGV("Create new media recorder client from pid %d", pid);
    return recorder;
}
```
This creates a MediaRecorderClient (the BnMediaRecorder). What the caller gets back through Binder, however, is a BpMediaRecorder, because of the interface_cast on the proxy side:
```cpp
virtual sp<IMediaRecorder> createMediaRecorder(pid_t pid)
{
    Parcel data, reply;
    data.writeInterfaceToken(IMediaPlayerService::getInterfaceDescriptor());
    data.writeInt32(pid);
    remote()->transact(CREATE_MEDIA_RECORDER, data, &reply);
    return interface_cast<IMediaRecorder>(reply.readStrongBinder());
}
```
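interface_cast itself is just a thin template from Binder's IInterface.h; asInterface() hands back the local Bn object when the binder lives in the caller's own process, and otherwise wraps the remote handle in a new Bp proxy:

```cpp
// From frameworks/native/include/binder/IInterface.h:
template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
    return INTERFACE::asInterface(obj);
}
```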
MediaRecorderClient in turn creates a StagefrightRecorder (a MediaRecorderBase), which lives in /path/to/aosp/frameworks/av/media/libmediaplayerservice/StagefrightRecorder.cpp.
For our purposes, the APP/JNI/native layers all run in one process, while the MediaRecorderClient/StagefrightRecorder inside MediaPlayerService run in another; the two talk over Binder. We have now seen both the Bp and Bn ends, so from here on I won't carefully distinguish them.
On the client side:

- BnMediaRecorderClient
- BpMediaRecorder
- BpMediaPlayerService

On the server side:

- BpMediaRecorderClient (the server can obtain this Bp when it needs to notify the client)
- BnMediaRecorder
- BnMediaPlayerService
(The original post includes a diagram of this Binder topology; see the original article for the full-size image.)
Let's walk through starting a recording, i.e. start().
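Here is a condensed sketch of how that call travels from the app process into StagefrightRecorder (simplified from mediarecorder.cpp and MediaRecorderClient.cpp — state checks and error handling elided, not verbatim):

```cpp
// Client process: the native MediaRecorder forwards over Binder.
status_t MediaRecorder::start()
{
    // state-machine checks elided
    return mMediaRecorder->start();   // mMediaRecorder is the BpMediaRecorder
}

// Service process: MediaRecorderClient hands off to its MediaRecorderBase.
status_t MediaRecorderClient::start()
{
    Mutex::Autolock lock(mLock);
    return mRecorder->start();        // mRecorder is the StagefrightRecorder
}
```

StagefrightRecorder::start() then dispatches on the configured output format; for MPEG-4 output it ends up in startMPEG4Recording().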
At this point the work forks into two paths: a CameraSource and an MPEG4Writer (held as sp<MediaWriter> mWriter). Both classes live under /path/to/aosp/frameworks/av/media/libstagefright/.
```cpp
status_t StagefrightRecorder::startMPEG4Recording()
{
    int32_t totalBitRate;
    status_t err = setupMPEG4Recording(
            mOutputFd, mVideoWidth, mVideoHeight,
            mVideoBitRate, &totalBitRate, &mWriter);
    if (err != OK) {
        return err;
    }

    int64_t startTimeUs = systemTime() / 1000;
    sp<MetaData> meta = new MetaData;
    setupMPEG4MetaData(startTimeUs, totalBitRate, &meta);

    err = mWriter->start(meta.get());
    if (err != OK) {
        return err;
    }

    return OK;
}
```
```cpp
status_t StagefrightRecorder::setupMPEG4Recording(
        int outputFd,
        int32_t videoWidth, int32_t videoHeight,
        int32_t videoBitRate,
        int32_t *totalBitRate,
        sp<MediaWriter> *mediaWriter) {
    mediaWriter->clear();
    *totalBitRate = 0;
    status_t err = OK;
    sp<MediaWriter> writer = new MPEG4Writer(outputFd);

    if (mVideoSource < VIDEO_SOURCE_LIST_END) {
        sp<MediaSource> mediaSource;
        err = setupMediaSource(&mediaSource); // very important
        if (err != OK) {
            return err;
        }

        sp<MediaSource> encoder;
        err = setupVideoEncoder(mediaSource, videoBitRate, &encoder); // very important
        if (err != OK) {
            return err;
        }

        writer->addSource(encoder);
        *totalBitRate += videoBitRate;
    }

    // Audio source is added at the end if it exists.
    // This help make sure that the "recoding" sound is suppressed for
    // camcorder applications in the recorded files.
    if (!mCaptureTimeLapse && (mAudioSource != AUDIO_SOURCE_CNT)) {
        err = setupAudioEncoder(writer); // very important
        if (err != OK) return err;
        *totalBitRate += mAudioBitRate;
    }

    ...

    writer->setListener(mListener);
    *mediaWriter = writer;
    return OK;
}
```
```cpp
// Set up the appropriate MediaSource depending on the chosen option
status_t StagefrightRecorder::setupMediaSource(
                      sp<MediaSource> *mediaSource) {
    if (mVideoSource == VIDEO_SOURCE_DEFAULT
            || mVideoSource == VIDEO_SOURCE_CAMERA) {
        sp<CameraSource> cameraSource;
        status_t err = setupCameraSource(&cameraSource);
        if (err != OK) {
            return err;
        }
        *mediaSource = cameraSource;
    } else if (mVideoSource == VIDEO_SOURCE_GRALLOC_BUFFER) {
        // If using GRAlloc buffers, setup surfacemediasource.
        // Later a handle to that will be passed
        // to the client side when queried
        status_t err = setupSurfaceMediaSource();
        if (err != OK) {
            return err;
        }
        *mediaSource = mSurfaceMediaSource;
    } else {
        return INVALID_OPERATION;
    }
    return OK;
}
```
```cpp
status_t StagefrightRecorder::setupCameraSource(
        sp<CameraSource> *cameraSource) {
    status_t err = OK;
    if ((err = checkVideoEncoderCapabilities()) != OK) {
        return err;
    }
    Size videoSize;
    videoSize.width = mVideoWidth;
    videoSize.height = mVideoHeight;
    if (mCaptureTimeLapse) {
        if (mTimeBetweenTimeLapseFrameCaptureUs < 0) {
            ALOGE("Invalid mTimeBetweenTimeLapseFrameCaptureUs value: %lld",
                mTimeBetweenTimeLapseFrameCaptureUs);
            return BAD_VALUE;
        }

        mCameraSourceTimeLapse = CameraSourceTimeLapse::CreateFromCamera(
                mCamera, mCameraProxy, mCameraId,
                videoSize, mFrameRate, mPreviewSurface,
                mTimeBetweenTimeLapseFrameCaptureUs);
        *cameraSource = mCameraSourceTimeLapse;
    } else {
        *cameraSource = CameraSource::CreateFromCamera(
                mCamera, mCameraProxy, mCameraId, videoSize, mFrameRate,
                mPreviewSurface, true /*storeMetaDataInVideoBuffers*/);
    }
    mCamera.clear();
    mCameraProxy.clear();
    if (*cameraSource == NULL) {
        return UNKNOWN_ERROR;
    }

    if ((*cameraSource)->initCheck() != OK) {
        (*cameraSource).clear();
        *cameraSource = NULL;
        return NO_INIT;
    }

    // When frame rate is not set, the actual frame rate will be set to
    // the current frame rate being used.
    if (mFrameRate == -1) {
        int32_t frameRate = 0;
        CHECK ((*cameraSource)->getFormat()->findInt32(
                    kKeyFrameRate, &frameRate));
        ALOGI("Frame rate is not explicitly set. Use the current frame "
             "rate (%d fps)", frameRate);
        mFrameRate = frameRate;
    }

    CHECK(mFrameRate != -1);

    mIsMetaDataStoredInVideoBuffers =
        (*cameraSource)->isMetaDataStoredInVideoBuffers();
    return OK;
}
```
```cpp
status_t StagefrightRecorder::setupVideoEncoder(
        sp<MediaSource> cameraSource,
        int32_t videoBitRate,
        sp<MediaSource> *source) {
    source->clear();

    sp<MetaData> enc_meta = new MetaData;
    enc_meta->setInt32(kKeyBitRate, videoBitRate);
    enc_meta->setInt32(kKeyFrameRate, mFrameRate);

    switch (mVideoEncoder) {
        case VIDEO_ENCODER_H263:
            enc_meta->setCString(kKeyMIMEType, MEDIA_MIMETYPE_VIDEO_H263);
            break;
        case VIDEO_ENCODER_MPEG_4_SP:
            enc_meta->setCString(kKeyMIMEType, MEDIA_MIMETYPE_VIDEO_MPEG4);
            break;
        case VIDEO_ENCODER_H264:
            enc_meta->setCString(kKeyMIMEType, MEDIA_MIMETYPE_VIDEO_AVC);
            break;
        default:
            CHECK(!"Should not be here, unsupported video encoding.");
            break;
    }

    sp<MetaData> meta = cameraSource->getFormat();

    int32_t width, height, stride, sliceHeight, colorFormat;
    CHECK(meta->findInt32(kKeyWidth, &width));
    CHECK(meta->findInt32(kKeyHeight, &height));
    CHECK(meta->findInt32(kKeyStride, &stride));
    CHECK(meta->findInt32(kKeySliceHeight, &sliceHeight));
    CHECK(meta->findInt32(kKeyColorFormat, &colorFormat));

    enc_meta->setInt32(kKeyWidth, width);
    enc_meta->setInt32(kKeyHeight, height);
    enc_meta->setInt32(kKeyIFramesInterval, mIFramesIntervalSec);
    enc_meta->setInt32(kKeyStride, stride);
    enc_meta->setInt32(kKeySliceHeight, sliceHeight);
    enc_meta->setInt32(kKeyColorFormat, colorFormat);
    if (mVideoTimeScale > 0) {
        enc_meta->setInt32(kKeyTimeScale, mVideoTimeScale);
    }
    if (mVideoEncoderProfile != -1) {
        enc_meta->setInt32(kKeyVideoProfile, mVideoEncoderProfile);
    }
    if (mVideoEncoderLevel != -1) {
        enc_meta->setInt32(kKeyVideoLevel, mVideoEncoderLevel);
    }

    OMXClient client;
    CHECK_EQ(client.connect(), (status_t)OK);

    uint32_t encoder_flags = 0;
    if (mIsMetaDataStoredInVideoBuffers) {
        encoder_flags |= OMXCodec::kStoreMetaDataInVideoBuffers;
    }

    // Do not wait for all the input buffers to become available.
    // This give timelapse video recording faster response in
    // receiving output from video encoder component.
    if (mCaptureTimeLapse) {
        encoder_flags |= OMXCodec::kOnlySubmitOneInputBufferAtOneTime;
    }

    sp<MediaSource> encoder = OMXCodec::Create(
            client.interface(), enc_meta,
            true /* createEncoder */, cameraSource,
            NULL, encoder_flags);
    if (encoder == NULL) {
        ALOGW("Failed to create the encoder");
        // When the encoder fails to be created, we need
        // release the camera source due to the camera's lock
        // and unlock mechanism.
        cameraSource->stop();
        return UNKNOWN_ERROR;
    }

    *source = encoder;
    return OK;
}
```
This is where things hook up with OMXCodec. A configuration file named media_codecs.xml declares which codecs the device supports.
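For a feel of what it contains, here is a hypothetical fragment (the element structure follows the media_codecs.xml schema, but the component entries below are illustrative; actual names are device- and vendor-specific):

```xml
<MediaCodecs>
    <Encoders>
        <!-- illustrative entries only -->
        <MediaCodec name="OMX.qcom.video.encoder.avc" type="video/avc" />
        <MediaCodec name="OMX.google.h264.encoder"    type="video/avc" />
    </Encoders>
    <Decoders>
        <MediaCodec name="OMX.google.h264.decoder"    type="video/avc" />
    </Decoders>
</MediaCodecs>
```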
Since an MPEG-4 recording normally has sound as well, a setupAudioEncoder call follows; I won't expand on that method here. In short, it adds the audio as another Track in the MPEG4Writer.
A side note: Google says setupAudioEncoder is deliberately placed at the end so that the beep announcing the start of recording does not itself get recorded. In practice this still has a bug — on some devices that beep ends up in the file anyway — and the apps that hit it work around the problem by playing the sound themselves.
MPEG4Writer also deserves a closer look: its start(MetaData*) kicks off two things.

a) startWriterThread — starts a thread that does the actual writing:
```cpp
void MPEG4Writer::threadFunc() {
    ALOGV("threadFunc");

    prctl(PR_SET_NAME, (unsigned long)"MPEG4Writer", 0, 0, 0);

    Mutex::Autolock autoLock(mLock);
    while (!mDone) {
        Chunk chunk;
        bool chunkFound = false;

        while (!mDone && !(chunkFound = findChunkToWrite(&chunk))) {
            mChunkReadyCondition.wait(mLock);
        }

        // Actual write without holding the lock in order to
        // reduce the blocking time for media track threads.
        if (chunkFound) {
            mLock.unlock();
            writeChunkToFile(&chunk);
            mLock.lock();
        }
    }

    writeAllChunks();
}
```
b) startTracks
```cpp
status_t MPEG4Writer::startTracks(MetaData *params) {
    for (List<Track *>::iterator it = mTracks.begin();
         it != mTracks.end(); ++it) {
        status_t err = (*it)->start(params);

        if (err != OK) {
            for (List<Track *>::iterator it2 = mTracks.begin();
                 it2 != it; ++it2) {
                (*it2)->stop();
            }

            return err;
        }
    }
    return OK;
}
```
which then calls each Track's start() method:
```cpp
status_t MPEG4Writer::Track::start(MetaData *params) {
    ...
    initTrackingProgressStatus(params);
    ...
    status_t err = mSource->start(meta.get()); // this ends up calling CameraSource::start(); the two are tied together
    ...
    pthread_create(&mThread, &attr, ThreadWrapper, this);
    return OK;
}

void *MPEG4Writer::Track::ThreadWrapper(void *me) {
    Track *track = static_cast<Track *>(me);
    status_t err = track->threadEntry();
    return (void *) err;
}
```
MPEG4Writer::Track::threadEntry() runs on yet another newly spawned thread. It loops, continually reading data out of the CameraSource (via read()) and writing it into the file. The CameraSource data naturally comes back from the driver — see CameraSourceListener. CameraSource keeps the frames delivered by the driver in a list called mFramesReceived and signals mFrameAvailableCondition whenever one arrives; frames that show up before recording has started are simply dropped. Note also that MediaWriter first starts the CameraSource (its start() method) and only then starts writing tracks.
Note: to be precise, what MPEG4Writer reads here is the OMXCodec's output — the data first reaches CameraSource, the codec encodes it, and only then does MPEG4Writer write it to the file. For how buffers travel between CameraSource/OMXCodec/MPEG4Writer, see the buffer-transport discussion at http://guoh.org/lifelog/2013/06/interaction-between-stagefright-and-codec/.
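As a minimal sketch of that pull model — a simplified rendering of what Track::threadEntry does, assuming only the standard MediaSource::read() contract (the helper name is mine, not AOSP's):

```cpp
// The track thread pulls encoded buffers from its MediaSource --
// here the OMXCodec encoder, which itself pulls from CameraSource.
status_t trackPullLoop(const sp<MediaSource> &source, volatile bool *done) {
    MediaBuffer *buffer = NULL;
    while (!*done && source->read(&buffer) == OK) {
        // ... append the payload to the current chunk and record
        //     sample sizes/timestamps for the eventual moov box ...
        buffer->release();   // hand the buffer back upstream
        buffer = NULL;
    }
    return OK;
}
```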
Looking back, what does Stagefright actually do here? To me it is mostly glue: it sits at the MediaPlayerService layer and binds MediaSource, MediaWriter, the codec, and the upper-layer MediaRecorder together — that is probably its biggest contribution. Replacing OpenCORE with it also fits Google's usual engineering-minded style: compared with the complex academic approach, Google generally solves problems in the simplest way it can (even though plenty of what it builds is complex too).
What takes some getting used to is that MediaRecorder lives inside MediaPlayerService — the two sound like opposites. Maybe one day they will be renamed, or split apart; who knows.
Of course this is only a rough overview; I'll try to do a dedicated analysis of the codec side later.
Some details were left out above; a few points worth calling out:
1. Time-lapse recording

The CameraSource counterpart here is CameraSourceTimeLapse. Concretely, dataCallbackTimestamp contains a skipCurrentFrame decision, driven by a couple of bookkeeping variables:

- mTimeBetweenTimeLapseVideoFramesUs (1E6 / videoFrameRate) — the interval between two frames of the output video
- mLastTimeLapseFrameRealTimestampUs — the wall-clock timestamp of the last frame that was kept

From the frame rate it computes how far apart kept frames must lie, and everything in between is dropped via releaseOneRecordingFrame. In other words, the driver keeps delivering frames exactly as before; we discard the extras purely in software (see the sketch after this item).

For background on time-lapse photography, see https://en.wikipedia.org/wiki/Time-lapse_photography
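A minimal sketch of that skip decision — condensed from what CameraSourceTimeLapse does in skipFrameAndModifyTimeStamp; the free-standing helper below is my own framing, not verbatim AOSP code:

```cpp
// Keep a frame only once the configured capture interval has elapsed
// since the last kept frame; everything in between is dropped by the
// caller via releaseOneRecordingFrame().
bool shouldSkipFrame(int64_t timestampUs,
                     int64_t *lastFrameRealTimestampUs,
                     int64_t timeBetweenFrameCaptureUs) {
    if (timestampUs <
            *lastFrameRealTimestampUs + timeBetweenFrameCaptureUs) {
        return true;    // too soon -- skip this frame
    }
    *lastFrameRealTimestampUs = timestampUs;
    return false;       // keep it (the kept frame's timestamp is then
                        // rewritten so playback runs at normal speed)
}
```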
2. When the recorder needs the Camera, access goes through ICameraRecordingProxy, i.e. the RecordingProxy inside Camera (a BnCameraRecordingProxy). Once the ICameraRecordingProxy has crossed Binder into the service process it becomes a Bp, as here:
```cpp
case SET_CAMERA: {
    ALOGV("SET_CAMERA");
    CHECK_INTERFACE(IMediaRecorder, data, reply);
    sp<ICamera> camera = interface_cast<ICamera>(data.readStrongBinder());
    sp<ICameraRecordingProxy> proxy =
        interface_cast<ICameraRecordingProxy>(data.readStrongBinder());
    reply->writeInt32(setCamera(camera, proxy));
    return NO_ERROR;
} break;
```
CameraSource then uses it like this:
```cpp
// We get the proxy from Camera, not ICamera. We need to get the proxy
// to the remote Camera owned by the application. Here mCamera is a
// local Camera object created by us. We cannot use the proxy from
// mCamera here.
mCamera = Camera::create(camera);
if (mCamera == 0) return -EBUSY;
mCameraRecordingProxy = proxy;
mCameraFlags |= FLAGS_HOT_CAMERA;
```
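From then on, CameraSource drives the camera through that proxy rather than through ICamera directly. A condensed sketch of how the proxy gets used (ICameraRecordingProxy exposes startRecording/stopRecording/releaseRecordingFrame; simplified, not verbatim):

```cpp
// Once FLAGS_HOT_CAMERA is set, recording control and frame recycling
// go through the application's proxy.
mCameraRecordingProxy->startRecording(new ProxyListener(this));
// ... frames then arrive via the listener callbacks ...
mCameraRecordingProxy->releaseRecordingFrame(frame);  // return a frame
// ...
mCameraRecordingProxy->stopRecording();
```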
Open question:

What is CameraSource's

List<sp<IMemory> > mFramesBeingEncoded;

actually for? Every frame handed off for encoding is stashed in it, and the frames are only released back when the corresponding buffers are released. Is this done for efficiency? Why not release each frame as soon as it has been encoded?
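For context, the release path looks roughly like this (condensed from CameraSource::signalBufferReturned; not verbatim). One plausible reading: in metadata mode the encoder consumes the camera-owned memory directly, so a frame cannot be handed back to the camera until the encoder signals it is done with the buffer:

```cpp
void CameraSource::signalBufferReturned(MediaBuffer *buffer) {
    Mutex::Autolock autoLock(mLock);
    for (List<sp<IMemory> >::iterator it = mFramesBeingEncoded.begin();
         it != mFramesBeingEncoded.end(); ++it) {
        // Match the returned MediaBuffer to the camera frame backing it.
        if ((*it)->pointer() == buffer->data()) {
            releaseOneRecordingFrame(*it);  // give the frame back to the camera
            mFramesBeingEncoded.erase(it);
            ++mNumFramesEncoded;
            buffer->setObserver(0);
            buffer->release();
            mFrameCompleteCondition.signal();
            return;
        }
    }
}
```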
On another note, I can't help marveling yet again at Google's habitual `delete this;` — ingenious, but it looks jarring!
Original article: http://guoh.org/lifelog/2013/06/android-mediarecorder-architecture/
總結
以上是生活随笔為你收集整理的Android MediaRecorder系统结构的全部內容,希望文章能夠幫你解決所遇到的問題。
- 上一篇: Android异步编程
- 下一篇: android sina oauth2.