Android 4.4 MediaRecorder System Architecture
Having analyzed the Camera implementation earlier, let's now look at how MediaRecorder is implemented. I won't pay much attention to its layered structure here; what I care about is the logic!
APP layer: /path/to/aosp/frameworks/base/media/java/android/media/MediaRecorder.java
JNI layer: /path/to/aosp/frameworks/base/media/jni/android_media_MediaRecorder.cpp
The JNI layer calls into the NATIVE-layer MediaRecorder (a BnMediaRecorderClient):
header: /path/to/aosp/frameworks/av/include/media/mediarecorder.h
implementation: /path/to/aosp/frameworks/av/media/libmedia/mediarecorder.cpp
```cpp
MediaRecorder::MediaRecorder() : mSurfaceMediaSource(NULL)
{
    ALOGV("constructor");

    const sp<IMediaPlayerService>& service(getMediaPlayerService());
    if (service != NULL) {
        mMediaRecorder = service->createMediaRecorder(getpid());
    }
    if (mMediaRecorder != NULL) {
        mCurrentState = MEDIA_RECORDER_IDLE;
    }

    doCleanUp();
}
```
The getMediaPlayerService() method lives in /path/to/aosp/frameworks/av/include/media/IMediaDeathNotifier.h.
Once it has the MediaPlayerService (a BpMediaPlayerService), it invokes the service's createMediaRecorder through IMediaPlayerService:
```cpp
sp<IMediaRecorder> MediaPlayerService::createMediaRecorder(pid_t pid)
{
    sp<MediaRecorderClient> recorder = new MediaRecorderClient(this, pid);
    wp<MediaRecorderClient> w = recorder;
    Mutex::Autolock lock(mLock);
    mMediaRecorderClients.add(w);
    ALOGV("Create new media recorder client from pid %d", pid);
    return recorder;
}
```
創(chuàng)建MediaRecorderClient(這里是BnMediaRecorder)
但是通過binder拿到的是BpMediaRecorder
因為有如下的interface_cast過程
```cpp
virtual sp<IMediaRecorder> createMediaRecorder(pid_t pid)
{
    Parcel data, reply;
    data.writeInterfaceToken(IMediaPlayerService::getInterfaceDescriptor());
    data.writeInt32(pid);
    remote()->transact(CREATE_MEDIA_RECORDER, data, &reply);
    return interface_cast<IMediaRecorder>(reply.readStrongBinder());
}
```
而MediaRecorderClient當(dāng)中又會創(chuàng)建StagefrightRecorder(MediaRecorderBase),它位于
/path/to/aosp/frameworks/av/media/libmediaplayerservice/StagefrightRecorder.cpp
目前我們可以認(rèn)為在APP/JNI/NATIVE這邊是在一個進程當(dāng)中,在MediaPlayerService當(dāng)中的MediaRecorderClient/StagefrightRecorder是在另外一個進程當(dāng)中,他們之間通過binder通信,而且Bp和Bn我們也都有拿到,后面我們將不再仔細(xì)區(qū)分Bp和Bn。
On the client side:
BnMediaRecorderClient
BpMediaRecorder
BpMediaPlayerService

On the server side:
BpMediaRecorderClient (the server can obtain this Bp when it needs to notify the client)
BnMediaRecorder
BnMediaPlayerService
(The original article includes a diagram illustrating these binder relationships; see it there for the full-size picture.)
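To make the Bp/Bn mechanics concrete, here is a sketch of the client-side proxy for a single call. It follows the same Parcel/transact pattern as createMediaRecorder above, though the body is my approximation of IMediaRecorder.cpp rather than a verbatim quote:

```cpp
// Sketch of a BpMediaRecorder method (an approximation of the pattern
// in IMediaRecorder.cpp, not a verbatim copy): marshal the call into a
// Parcel, cross the binder boundary, read the status back.
status_t BpMediaRecorder::start() {
    Parcel data, reply;
    data.writeInterfaceToken(IMediaRecorder::getInterfaceDescriptor());
    remote()->transact(START, data, &reply);  // lands in BnMediaRecorder::onTransact
    return reply.readInt32();                 // status_t written by the service side
}
```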
Let's take starting a recording as the example, i.e. start().
At this point the work forks in two directions: a CameraSource and an MPEG4Writer (the sp<MediaWriter> mWriter member). Both classes live under /path/to/aosp/frameworks/av/media/libstagefright/. Starting the recording eventually lands in StagefrightRecorder::startMPEG4Recording():
```cpp
status_t StagefrightRecorder::startMPEG4Recording() {
    int32_t totalBitRate;
    status_t err = setupMPEG4Recording(
            mOutputFd, mVideoWidth, mVideoHeight,
            mVideoBitRate, &totalBitRate, &mWriter);
    if (err != OK) {
        return err;
    }

    int64_t startTimeUs = systemTime() / 1000;
    sp<MetaData> meta = new MetaData;
    setupMPEG4MetaData(startTimeUs, totalBitRate, &meta);

    err = mWriter->start(meta.get());
    if (err != OK) {
        return err;
    }

    return OK;
}
```
```cpp
status_t StagefrightRecorder::setupMPEG4Recording(
        int outputFd,
        int32_t videoWidth, int32_t videoHeight,
        int32_t videoBitRate,
        int32_t *totalBitRate,
        sp<MediaWriter> *mediaWriter) {
    mediaWriter->clear();
    *totalBitRate = 0;
    status_t err = OK;
    sp<MediaWriter> writer = new MPEG4Writer(outputFd);

    if (mVideoSource < VIDEO_SOURCE_LIST_END) {
        sp<MediaSource> mediaSource;
        err = setupMediaSource(&mediaSource); // very important
        if (err != OK) {
            return err;
        }

        sp<MediaSource> encoder;
        err = setupVideoEncoder(mediaSource, videoBitRate, &encoder); // very important
        if (err != OK) {
            return err;
        }

        writer->addSource(encoder);
        *totalBitRate += videoBitRate;
    }

    // Audio source is added at the end if it exists.
    // This help make sure that the "recoding" sound is suppressed for
    // camcorder applications in the recorded files.
    if (!mCaptureTimeLapse && (mAudioSource != AUDIO_SOURCE_CNT)) {
        err = setupAudioEncoder(writer); // very important
        if (err != OK) return err;
        *totalBitRate += mAudioBitRate;
    }

    ...

    writer->setListener(mListener);
    *mediaWriter = writer;
    return OK;
}
```
```cpp
// Set up the appropriate MediaSource depending on the chosen option
status_t StagefrightRecorder::setupMediaSource(
                      sp<MediaSource> *mediaSource) {
    if (mVideoSource == VIDEO_SOURCE_DEFAULT
            || mVideoSource == VIDEO_SOURCE_CAMERA) {
        sp<CameraSource> cameraSource;
        status_t err = setupCameraSource(&cameraSource);
        if (err != OK) {
            return err;
        }
        *mediaSource = cameraSource;
    } else if (mVideoSource == VIDEO_SOURCE_GRALLOC_BUFFER) {
        // If using GRAlloc buffers, setup surfacemediasource.
        // Later a handle to that will be passed
        // to the client side when queried
        status_t err = setupSurfaceMediaSource();
        if (err != OK) {
            return err;
        }
        *mediaSource = mSurfaceMediaSource;
    } else {
        return INVALID_OPERATION;
    }
    return OK;
}
```
```cpp
status_t StagefrightRecorder::setupCameraSource(
        sp<CameraSource> *cameraSource) {
    status_t err = OK;
    if ((err = checkVideoEncoderCapabilities()) != OK) {
        return err;
    }
    Size videoSize;
    videoSize.width = mVideoWidth;
    videoSize.height = mVideoHeight;
    if (mCaptureTimeLapse) {
        if (mTimeBetweenTimeLapseFrameCaptureUs < 0) {
            ALOGE("Invalid mTimeBetweenTimeLapseFrameCaptureUs value: %lld",
                mTimeBetweenTimeLapseFrameCaptureUs);
            return BAD_VALUE;
        }

        mCameraSourceTimeLapse = CameraSourceTimeLapse::CreateFromCamera(
                mCamera, mCameraProxy, mCameraId,
                videoSize, mFrameRate, mPreviewSurface,
                mTimeBetweenTimeLapseFrameCaptureUs);
        *cameraSource = mCameraSourceTimeLapse;
    } else {
        *cameraSource = CameraSource::CreateFromCamera(
                mCamera, mCameraProxy, mCameraId, videoSize, mFrameRate,
                mPreviewSurface, true /*storeMetaDataInVideoBuffers*/);
    }
    mCamera.clear();
    mCameraProxy.clear();
    if (*cameraSource == NULL) {
        return UNKNOWN_ERROR;
    }

    if ((*cameraSource)->initCheck() != OK) {
        (*cameraSource).clear();
        *cameraSource = NULL;
        return NO_INIT;
    }

    // When frame rate is not set, the actual frame rate will be set to
    // the current frame rate being used.
    if (mFrameRate == -1) {
        int32_t frameRate = 0;
        CHECK((*cameraSource)->getFormat()->findInt32(
                    kKeyFrameRate, &frameRate));
        ALOGI("Frame rate is not explicitly set. Use the current frame "
             "rate (%d fps)", frameRate);
        mFrameRate = frameRate;
    }

    CHECK(mFrameRate != -1);

    mIsMetaDataStoredInVideoBuffers =
        (*cameraSource)->isMetaDataStoredInVideoBuffers();

    return OK;
}
```
```cpp
status_t StagefrightRecorder::setupVideoEncoder(
        sp<MediaSource> cameraSource,
        int32_t videoBitRate,
        sp<MediaSource> *source) {
    source->clear();

    sp<MetaData> enc_meta = new MetaData;
    enc_meta->setInt32(kKeyBitRate, videoBitRate);
    enc_meta->setInt32(kKeyFrameRate, mFrameRate);

    switch (mVideoEncoder) {
        case VIDEO_ENCODER_H263:
            enc_meta->setCString(kKeyMIMEType, MEDIA_MIMETYPE_VIDEO_H263);
            break;

        case VIDEO_ENCODER_MPEG_4_SP:
            enc_meta->setCString(kKeyMIMEType, MEDIA_MIMETYPE_VIDEO_MPEG4);
            break;

        case VIDEO_ENCODER_H264:
            enc_meta->setCString(kKeyMIMEType, MEDIA_MIMETYPE_VIDEO_AVC);
            break;

        default:
            CHECK(!"Should not be here, unsupported video encoding.");
            break;
    }

    sp<MetaData> meta = cameraSource->getFormat();

    int32_t width, height, stride, sliceHeight, colorFormat;
    CHECK(meta->findInt32(kKeyWidth, &width));
    CHECK(meta->findInt32(kKeyHeight, &height));
    CHECK(meta->findInt32(kKeyStride, &stride));
    CHECK(meta->findInt32(kKeySliceHeight, &sliceHeight));
    CHECK(meta->findInt32(kKeyColorFormat, &colorFormat));

    enc_meta->setInt32(kKeyWidth, width);
    enc_meta->setInt32(kKeyHeight, height);
    enc_meta->setInt32(kKeyIFramesInterval, mIFramesIntervalSec);
    enc_meta->setInt32(kKeyStride, stride);
    enc_meta->setInt32(kKeySliceHeight, sliceHeight);
    enc_meta->setInt32(kKeyColorFormat, colorFormat);

    if (mVideoTimeScale > 0) {
        enc_meta->setInt32(kKeyTimeScale, mVideoTimeScale);
    }
    if (mVideoEncoderProfile != -1) {
        enc_meta->setInt32(kKeyVideoProfile, mVideoEncoderProfile);
    }
    if (mVideoEncoderLevel != -1) {
        enc_meta->setInt32(kKeyVideoLevel, mVideoEncoderLevel);
    }

    OMXClient client;
    CHECK_EQ(client.connect(), (status_t)OK);

    uint32_t encoder_flags = 0;
    if (mIsMetaDataStoredInVideoBuffers) {
        encoder_flags |= OMXCodec::kStoreMetaDataInVideoBuffers;
    }

    // Do not wait for all the input buffers to become available.
    // This give timelapse video recording faster response in
    // receiving output from video encoder component.
    if (mCaptureTimeLapse) {
        encoder_flags |= OMXCodec::kOnlySubmitOneInputBufferAtOneTime;
    }

    sp<MediaSource> encoder = OMXCodec::Create(
            client.interface(), enc_meta,
            true /* createEncoder */, cameraSource,
            NULL, encoder_flags);
    if (encoder == NULL) {
        ALOGW("Failed to create the encoder");
        // When the encoder fails to be created, we need
        // release the camera source due to the camera's lock
        // and unlock mechanism.
        cameraSource->stop();
        return UNKNOWN_ERROR;
    }

    *source = encoder;

    return OK;
}
```
This is where the recorder ties in with OMXCodec. A configuration file called media_codecs.xml declares which codecs the device supports; an illustrative excerpt follows.
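I don't want to reproduce a real device's file here, so the excerpt below is purely illustrative (the vendor encoder name is made up; only the overall MediaCodecs/Encoders/Decoders layout is what the framework expects):

```xml
<!-- Illustrative only: entry names and supported types vary per device. -->
<MediaCodecs>
    <Encoders>
        <MediaCodec name="OMX.vendor.video.encoder.avc" type="video/avc" />
    </Encoders>
    <Decoders>
        <MediaCodec name="OMX.google.h264.decoder" type="video/avc" />
    </Decoders>
</MediaCodecs>
```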
When we record MPEG-4 there is audio as well, so setupAudioEncoder is invoked afterwards. I won't expand on the method itself; in short, it adds the audio as one more Track in the MPEG4Writer (a condensed sketch follows).
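The body of setupAudioEncoder essentially boils down to the following shape (a condensed sketch with the capability checks and the encoder switch omitted, so treat it as an outline rather than the verbatim source):

```cpp
// Condensed sketch of StagefrightRecorder::setupAudioEncoder() (the
// real code also validates capabilities and switches on mAudioEncoder).
status_t StagefrightRecorder::setupAudioEncoder(const sp<MediaWriter>& writer) {
    sp<MediaSource> audioEncoder = createAudioSource(); // AudioSource wrapped in an OMX encoder
    if (audioEncoder == NULL) {
        return UNKNOWN_ERROR;
    }
    writer->addSource(audioEncoder);  // the audio becomes one more Track in MPEG4Writer
    return OK;
}
```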
A side note: Google says setupAudioEncoder is placed at the end precisely so that the beep played when recording starts doesn't end up in the recording, but in practice this still turns out to be buggy; on some devices the beep gets recorded anyway, and every case I've seen works around it by having the APP play the sound itself.
另外MPEG4Writer當(dāng)中有個
start(MetaData*)
啟動兩個方法
a) startWriterThread
啟動一個thread去寫
```cpp
void MPEG4Writer::threadFunc() {
    ALOGV("threadFunc");

    prctl(PR_SET_NAME, (unsigned long)"MPEG4Writer", 0, 0, 0);

    Mutex::Autolock autoLock(mLock);
    while (!mDone) {
        Chunk chunk;
        bool chunkFound = false;

        while (!mDone && !(chunkFound = findChunkToWrite(&chunk))) {
            mChunkReadyCondition.wait(mLock);
        }

        // Actual write without holding the lock in order to
        // reduce the blocking time for media track threads.
        if (chunkFound) {
            mLock.unlock();
            writeChunkToFile(&chunk);
            mLock.lock();
        }
    }

    writeAllChunks();
}
```
b) startTracks
```cpp
status_t MPEG4Writer::startTracks(MetaData *params) {
    for (List<Track *>::iterator it = mTracks.begin();
         it != mTracks.end(); ++it) {
        status_t err = (*it)->start(params);

        if (err != OK) {
            for (List<Track *>::iterator it2 = mTracks.begin();
                 it2 != it; ++it2) {
                (*it2)->stop();
            }

            return err;
        }
    }
    return OK;
}
```
然后調(diào)用每個Track的start方法
```cpp
status_t MPEG4Writer::Track::start(MetaData *params) {
    ...
    initTrackingProgressStatus(params);
    ...
    status_t err = mSource->start(meta.get()); // this kicks off CameraSource::start(); the two are tied together
    ...
    pthread_create(&mThread, &attr, ThreadWrapper, this);
    return OK;
}

void *MPEG4Writer::Track::ThreadWrapper(void *me) {
    Track *track = static_cast<Track *>(me);
    status_t err = track->threadEntry();
    return (void *) err;
}
```
status_t MPEG4Writer::Track::threadEntry() then runs on yet another newly spawned thread. It loops, continually reading data from the CameraSource (read) and writing it into the file. The CameraSource data naturally comes back from the driver (see CameraSourceListener: CameraSource keeps the frames arriving from the driver in a List called mFramesReceived, and calls mFrameAvailableCondition.signal whenever data comes in; frames that arrive before recording has started are simply dropped. Note, too, that MediaWriter starts the CameraSource's start method first, and only then starts writing the Track).
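The shape of that loop is roughly the following (a minimal sketch of the pull pattern with timestamps, chunking, and error handling elided; it is not the verbatim threadEntry):

```cpp
#include <media/stagefright/MediaBuffer.h>
#include <media/stagefright/MediaSource.h>

// Minimal sketch of the pull loop in MPEG4Writer::Track::threadEntry():
// block in read() until the upstream source (the encoder) produces a
// sample, consume it, then hand the buffer back.
void trackPullLoopSketch(const android::sp<android::MediaSource> &source) {
    android::MediaBuffer *buffer = NULL;
    while (source->read(&buffer) == android::OK) {
        if (buffer->range_length() > 0) {
            // The real Track copies the sample out, records its timestamp,
            // and queues it for the writer thread via bufferChunk()/addChunk().
        }
        buffer->release();   // return the buffer upstream (codec/camera)
        buffer = NULL;
    }
}
```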
注意:準(zhǔn)確來說這里MPEG4Writer讀取的是OMXCodec里的數(shù)據(jù),因為數(shù)據(jù)先到CameraSource,codec負(fù)責(zé)編碼之后,MPEG4Writer才負(fù)責(zé)寫到文件當(dāng)中!關(guān)于數(shù)據(jù)在CameraSource/OMXCodec/MPEG4Writer之間是怎么傳遞的,可以參見http://guoh.org/lifelog/2013/06/interaction-between-stagefright-and-codec/當(dāng)中講Buffer的傳輸過程。
Stepping back, what does Stagefright actually do? To me it acts mostly as glue: sitting at the MediaPlayerService layer, it binds MediaSource, MediaWriter, the Codec, and the upper-layer MediaRecorder together. That is its biggest job, and Google replacing OpenCORE with it also fits their usual engineering-minded style: compared with the more elaborate academic approach, Google generally solves problems in the simplest way that works (even if plenty of their own things are complex too).
讓大家覺得有點不習(xí)慣的是,它把MediaRecorder放在MediaPlayerService當(dāng)中,這兩個看起來是對立的事情,或者某一天它們會改名字,或者是兩者分開,不知道~~
當(dāng)然這只是個簡單的大體介紹,Codec相關(guān)的后面爭取專門來分析一下!
有些細(xì)節(jié)的東西在這里沒有列出,需要的話會把一些注意點列出來:
1. Time-lapse recording

The counterpart of CameraSource here is CameraSourceTimeLapse.

Concretely, dataCallbackTimestamp uses skipCurrentFrame to decide whether each frame should be dropped, keeping a couple of variables for the bookkeeping:

mTimeBetweenTimeLapseVideoFramesUs (1E6/videoFrameRate) // interval between two frames in the output video
mLastTimeLapseFrameRealTimestampUs // when the previous kept frame actually arrived

From the frame rate it works out how far apart two captured frames must be, and every frame in between is dropped via releaseOneRecordingFrame. In other words, the driver keeps delivering frames exactly as before; we simply throw the extras away at the software level (a simplified sketch of the decision follows).
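Put together, the skip decision amounts to something like this (a simplified sketch of the logic, not the exact AOSP code; the real CameraSourceTimeLapse also special-cases the first few frames and quick-stop behavior):

```cpp
#include <stdint.h>

// Simplified sketch (an approximation, not the exact AOSP code) of the
// frame-skip decision CameraSourceTimeLapse makes in dataCallbackTimestamp().
struct TimeLapseSkipper {
    int64_t mLastTimeLapseFrameRealTimestampUs = 0;
    int64_t mTimeBetweenTimeLapseFrameCaptureUs;   // desired capture interval

    bool skipCurrentFrame(int64_t timestampUs) {
        if (mLastTimeLapseFrameRealTimestampUs == 0) {
            // Always keep the very first frame.
            mLastTimeLapseFrameRealTimestampUs = timestampUs;
            return false;
        }
        if (timestampUs < mLastTimeLapseFrameRealTimestampUs
                          + mTimeBetweenTimeLapseFrameCaptureUs) {
            return true;   // too soon: the caller drops the frame via
                           // releaseOneRecordingFrame()
        }
        mLastTimeLapseFrameRealTimestampUs = timestampUs;
        return false;
    }
};
```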
關(guān)于Time-lapse相關(guān)的可以參閱
https://en.wikipedia.org/wiki/Time-lapse_photography
2. 錄影當(dāng)中需要用到Camera的話是通過ICameraRecordingProxy,即Camera當(dāng)中的RecordingProxy(這是一個BnCameraRecordingProxy)
當(dāng)透過binder,將ICameraRecordingProxy傳到服務(wù)端進程之后,它就變成了Bp,如下:
```cpp
case SET_CAMERA: {
    ALOGV("SET_CAMERA");
    CHECK_INTERFACE(IMediaRecorder, data, reply);
    sp<ICamera> camera = interface_cast<ICamera>(data.readStrongBinder());
    sp<ICameraRecordingProxy> proxy =
        interface_cast<ICameraRecordingProxy>(data.readStrongBinder());
    reply->writeInt32(setCamera(camera, proxy));
    return NO_ERROR;
} break;
```
CameraSource then uses it like this:
```cpp
// We get the proxy from Camera, not ICamera. We need to get the proxy
// to the remote Camera owned by the application. Here mCamera is a
// local Camera object created by us. We cannot use the proxy from
// mCamera here.
mCamera = Camera::create(camera);
if (mCamera == 0) return -EBUSY;
mCameraRecordingProxy = proxy;
mCameraFlags |= FLAGS_HOT_CAMERA;
```
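The reason the proxy matters shows up when recording actually starts. Roughly (a sketch condensed from memory of the recording-start path; treat the exact calls as an assumption):

```cpp
// Sketch of CameraSource's recording-start path (condensed; the exact
// calls are an assumption). With an application-owned camera
// (FLAGS_HOT_CAMERA) we cannot talk to the Camera object directly,
// since it lives in the app's process, so we go through the proxy.
status_t CameraSource::startCameraRecording() {
    if (mCameraFlags & FLAGS_HOT_CAMERA) {
        mCamera->unlock();
        mCamera.clear();
        return mCameraRecordingProxy->startRecording(
                new ProxyListener(this));   // frames arrive via the proxy listener
    } else {
        // We own the camera: register a listener and start it directly.
        mCamera->setListener(new CameraSourceListener(this));
        return mCamera->startRecording();
    }
}
```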
Open question:

What is this CameraSource member for?

List<sp<IMemory> > mFramesBeingEncoded;

For every frame handed off for encoding, CameraSource keeps a reference in this list, and the frames are only released back when the corresponding buffers are released. Is this done for efficiency? Why not release each frame as soon as it has been encoded?
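My reading (a sketch from memory, so treat the details as an assumption) is that the list lets CameraSource hold a frame until the encoder is genuinely done with it; only when the encoder returns the buffer does the frame go back to the camera:

```cpp
// Sketch (an assumption, simplified) of the release path: the encoder
// returns a MediaBuffer, CameraSource finds the matching camera frame
// in mFramesBeingEncoded, and only then hands it back to the driver.
void CameraSource::signalBufferReturned(MediaBuffer *buffer) {
    Mutex::Autolock autoLock(mLock);
    for (List<sp<IMemory> >::iterator it = mFramesBeingEncoded.begin();
         it != mFramesBeingEncoded.end(); ++it) {
        if ((*it)->pointer() == buffer->data()) {
            releaseOneRecordingFrame(*it);     // return the frame to the camera
            mFramesBeingEncoded.erase(it);
            mFrameCompleteCondition.signal();  // stop() may be waiting on this
            return;
        }
    }
    CHECK(!"signalBufferReturned: bogus buffer");
}
```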
And, not for the first time, I have to marvel at Google's habitual `delete this;`: ingenious, but it does look unusual!
Original article: http://guoh.org/lifelog/2013/06/android-mediarecorder-architecture/
與50位技術(shù)專家面對面20年技術(shù)見證,附贈技術(shù)全景圖總結(jié)
以上是生活随笔為你收集整理的Android 4.4 MediaRecorder系统结构的全部內(nèi)容,希望文章能夠幫你解決所遇到的問題。
- 上一篇: Deep Learning Blogs
- 下一篇: android sina oauth2.