
live555 Source Analysis: Playback Startup

Published: 2024/4/11

This article analyzes how live555 starts playing a media stream, at which point data begins to be transmitted over RTP/RTCP.

As we saw in live555 Source Analysis: Subsession SETUP, playback of a media subsession is started by OnDemandServerMediaSubsession::startStream(), which in turn calls StreamState::startPlaying():

void OnDemandServerMediaSubsession::startStream(unsigned clientSessionId,
    void* streamToken,
    TaskFunc* rtcpRRHandler,
    void* rtcpRRHandlerClientData,
    unsigned short& rtpSeqNum,
    unsigned& rtpTimestamp,
    ServerRequestAlternativeByteHandler* serverRequestAlternativeByteHandler,
    void* serverRequestAlternativeByteHandlerClientData) {
  StreamState* streamState = (StreamState*)streamToken;
  Destinations* destinations
    = (Destinations*)(fDestinationsHashTable->Lookup((char const*)clientSessionId));
  if (streamState != NULL) {
    streamState->startPlaying(destinations, clientSessionId,
        rtcpRRHandler, rtcpRRHandlerClientData,
        serverRequestAlternativeByteHandler, serverRequestAlternativeByteHandlerClientData);
    RTPSink* rtpSink = streamState->rtpSink(); // alias
    if (rtpSink != NULL) {
      rtpSeqNum = rtpSink->currentSeqNo();
      rtpTimestamp = rtpSink->presetNextTimestamp();
    }
  }
}

This function first looks up the subsession's destination, i.e. the client's IP address and the ports on which it receives RTP/RTCP, then starts playback through StreamState::startPlaying(), and finally returns the initial sequence number and initial timestamp of the RTP packets to the caller, the RTSPServer, which passes them back to the client so that the client can synchronize playback.

StreamState::startPlaying() is implemented as follows:

void StreamState::startPlaying(Destinations* dests, unsigned clientSessionId,
    TaskFunc* rtcpRRHandler, void* rtcpRRHandlerClientData,
    ServerRequestAlternativeByteHandler* serverRequestAlternativeByteHandler,
    void* serverRequestAlternativeByteHandlerClientData) {
  if (dests == NULL) return;

  if (fRTCPInstance == NULL && fRTPSink != NULL) {
    // Create (and start) a 'RTCP instance' for this RTP sink:
    fRTCPInstance = fMaster.createRTCP(fRTCPgs, fTotalBW, (unsigned char*)fMaster.fCNAME, fRTPSink);
        // Note: This starts RTCP running automatically
    fRTCPInstance->setAppHandler(fMaster.fAppHandlerTask, fMaster.fAppHandlerClientData);
  }

  if (dests->isTCP) {
    // Change RTP and RTCP to use the TCP socket instead of UDP:
    if (fRTPSink != NULL) {
      fRTPSink->addStreamSocket(dests->tcpSocketNum, dests->rtpChannelId);
      RTPInterface::setServerRequestAlternativeByteHandler(fRTPSink->envir(), dests->tcpSocketNum,
          serverRequestAlternativeByteHandler, serverRequestAlternativeByteHandlerClientData);
        // So that we continue to handle RTSP commands from the client
    }
    if (fRTCPInstance != NULL) {
      fRTCPInstance->addStreamSocket(dests->tcpSocketNum, dests->rtcpChannelId);
      fRTCPInstance->setSpecificRRHandler(dests->tcpSocketNum, dests->rtcpChannelId,
          rtcpRRHandler, rtcpRRHandlerClientData);
    }
  } else {
    // Tell the RTP and RTCP 'groupsocks' about this destination
    // (in case they don't already have it):
    if (fRTPgs != NULL) fRTPgs->addDestination(dests->addr, dests->rtpPort, clientSessionId);
    if (fRTCPgs != NULL && !(fRTCPgs == fRTPgs && dests->rtcpPort.num() == dests->rtpPort.num())) {
      fRTCPgs->addDestination(dests->addr, dests->rtcpPort, clientSessionId);
    }
    if (fRTCPInstance != NULL) {
      fRTCPInstance->setSpecificRRHandler(dests->addr.s_addr, dests->rtcpPort,
          rtcpRRHandler, rtcpRRHandlerClientData);
    }
  }

  if (fRTCPInstance != NULL) {
    // Hack: Send an initial RTCP "SR" packet, before the initial RTP packet, so that receivers will (likely) be able to
    // get RTCP-synchronized presentation times immediately:
    fRTCPInstance->sendReport();
  }

  if (!fAreCurrentlyPlaying && fMediaSource != NULL) {
    if (fRTPSink != NULL) {
      fRTPSink->startPlaying(*fMediaSource, afterPlayingStreamState, this);
      fAreCurrentlyPlaying = True;
    } else if (fUDPSink != NULL) {
      fUDPSink->startPlaying(*fMediaSource, afterPlayingStreamState, this);
      fAreCurrentlyPlaying = True;
    }
  }
}

In this function, the RTCPInstance is first created, if it does not exist yet:

RTCPInstance* OnDemandServerMediaSubsession
::createRTCP(Groupsock* RTCPgs, unsigned totSessionBW, /* in kbps */
             unsigned char const* cname, RTPSink* sink) {
  // Default implementation; may be redefined by subclasses:
  return RTCPInstance::createNew(envir(), RTCPgs, totSessionBW, cname, sink, NULL/*we're a server*/);
}

Ignoring the case where RTP/RTCP packets are carried over TCP, StreamState::startPlaying() then configures the RTP and RTCP groupsocks, adding the destination address to each, and configures the RTCPInstance:

} else {
  // Tell the RTP and RTCP 'groupsocks' about this destination
  // (in case they don't already have it):
  if (fRTPgs != NULL) fRTPgs->addDestination(dests->addr, dests->rtpPort, clientSessionId);
  if (fRTCPgs != NULL && !(fRTCPgs == fRTPgs && dests->rtcpPort.num() == dests->rtpPort.num())) {
    fRTCPgs->addDestination(dests->addr, dests->rtcpPort, clientSessionId);
  }
  if (fRTCPInstance != NULL) {
    fRTCPInstance->setSpecificRRHandler(dests->addr.s_addr, dests->rtcpPort,
        rtcpRRHandler, rtcpRRHandlerClientData);
  }
}

之后 StreamState::startPlaying() 發(fā)出一個 RTCP 包。

if (fRTCPInstance != NULL) {
  // Hack: Send an initial RTCP "SR" packet, before the initial RTP packet, so that receivers will (likely) be able to
  // get RTCP-synchronized presentation times immediately:
  fRTCPInstance->sendReport();
}

fUDPSink is used when the streaming mode is raw UDP; we ignore that mode here. Finally, MediaSink::startPlaying() is executed and the fAreCurrentlyPlaying flag is set, indicating that stream playback has started.

Sending RTP packets

Let us now look at how RTP packets are actually sent. The MediaSink::startPlaying() function is defined as follows:

Boolean MediaSink::startPlaying(MediaSource& source,
        afterPlayingFunc* afterFunc,
        void* afterClientData) {
  // Make sure we're not already being played:
  if (fSource != NULL) {
    envir().setResultMsg("This sink is already being played");
    return False;
  }

  // Make sure our source is compatible:
  if (!sourceIsCompatibleWithUs(source)) {
    envir().setResultMsg("MediaSink::startPlaying(): source is not compatible!");
    return False;
  }
  fSource = (FramedSource*)&source;

  fAfterFunc = afterFunc;
  fAfterClientData = afterClientData;
  return continuePlaying();
}

This function saves the passed-in callback and its argument, then executes continuePlaying(). continuePlaying() is a pure virtual function; the implementation used here is provided by the MediaSink subclass H264or5VideoRTPSink:

Boolean H264or5VideoRTPSink::continuePlaying() {
  // First, check whether we have a 'fragmenter' class set up yet.
  // If not, create it now:
  if (fOurFragmenter == NULL) {
    fOurFragmenter = new H264or5Fragmenter(fHNumber, envir(), fSource, OutPacketBuffer::maxSize,
        ourMaxPacketSize() - 12/*RTP hdr size*/);
  } else {
    fOurFragmenter->reassignInputSource(fSource);
  }
  fSource = fOurFragmenter;

  // Then call the parent class's implementation:
  return MultiFramedRTPSink::continuePlaying();
}

This function mainly sets the media data source for the H264or5Fragmenter and then sets fSource to the H264or5Fragmenter. At this point, the FramedSource held by MultiFramedRTPSink changes from the H264VideoStreamFramer originally created in H264VideoFileServerMediaSubsession to the H264or5Fragmenter, which now wraps the H264VideoStreamFramer.

隨后 H264or5VideoRTPSink::continuePlaying() 執(zhí)行 MultiFramedRTPSink::continuePlaying() 做進一步的處理。

Boolean MultiFramedRTPSink::continuePlaying() {
  // Send the first packet.
  // (This will also schedule any future sends.)
  buildAndSendPacket(True);
  return True;
}
. . . . . .
void MultiFramedRTPSink::buildAndSendPacket(Boolean isFirstPacket) {
  nextTask() = NULL;
  fIsFirstPacket = isFirstPacket;

  // Set up the RTP header:
  unsigned rtpHdr = 0x80000000; // RTP version 2; marker ('M') bit not set (by default; it can be set later)
  rtpHdr |= (fRTPPayloadType<<16);
  rtpHdr |= fSeqNo; // sequence number
  fOutBuf->enqueueWord(rtpHdr);

  // Note where the RTP timestamp will go.
  // (We can't fill this in until we start packing payload frames.)
  fTimestampPosition = fOutBuf->curPacketSize();
  fOutBuf->skipBytes(4); // leave a hole for the timestamp

  fOutBuf->enqueueWord(SSRC());

  // Allow for a special, payload-format-specific header following the
  // RTP header:
  fSpecialHeaderPosition = fOutBuf->curPacketSize();
  fSpecialHeaderSize = specialHeaderSize();
  fOutBuf->skipBytes(fSpecialHeaderSize);

  // Begin packing as many (complete) frames into the packet as we can:
  fTotalFrameSpecificHeaderSizes = 0;
  fNoFramesLeft = False;
  fNumFramesUsedSoFar = 0;
  packFrame();
}

MultiFramedRTPSink::continuePlaying() executes MultiFramedRTPSink::buildAndSendPacket(), which constructs the RTP header in the output buffer, reserving space for the header fields it cannot yet fill in accurately. It then calls MultiFramedRTPSink::packFrame().

void MultiFramedRTPSink::packFrame() {
  // Get the next frame.

  // First, skip over the space we'll use for any frame-specific header:
  fCurFrameSpecificHeaderPosition = fOutBuf->curPacketSize();
  fCurFrameSpecificHeaderSize = frameSpecificHeaderSize();
  fOutBuf->skipBytes(fCurFrameSpecificHeaderSize);
  fTotalFrameSpecificHeaderSizes += fCurFrameSpecificHeaderSize;

  // See if we have an overflow frame that was too big for the last pkt
  if (fOutBuf->haveOverflowData()) {
    // Use this frame before reading a new one from the source
    unsigned frameSize = fOutBuf->overflowDataSize();
    struct timeval presentationTime = fOutBuf->overflowPresentationTime();
    unsigned durationInMicroseconds = fOutBuf->overflowDurationInMicroseconds();
    fOutBuf->useOverflowData();

    afterGettingFrame1(frameSize, 0, presentationTime, durationInMicroseconds);
  } else {
    // Normal case: we need to read a new frame from the source
    if (fSource == NULL) return;
    fSource->getNextFrame(fOutBuf->curPtr(), fOutBuf->totalBytesAvailable(),
        afterGettingFrame, this, ourHandleClosure, this);
  }
}

MultiFramedRTPSink::packFrame() obtains frame data through FramedSource's getNextFrame(), and is notified once the frame data has been obtained.

void FramedSource::getNextFrame(unsigned char* to, unsigned maxSize,
        afterGettingFunc* afterGettingFunc,
        void* afterGettingClientData,
        onCloseFunc* onCloseFunc,
        void* onCloseClientData) {
  // Make sure we're not already being read:
  if (fIsCurrentlyAwaitingData) {
    envir() << "FramedSource[" << this << "]::getNextFrame(): attempting to read more than once at the same time!\n";
    envir().internalError();
  }

  fTo = to;
  fMaxSize = maxSize;
  fNumTruncatedBytes = 0; // by default; could be changed by doGetNextFrame()
  fDurationInMicroseconds = 0; // by default; could be changed by doGetNextFrame()
  fAfterGettingFunc = afterGettingFunc;
  fAfterGettingClientData = afterGettingClientData;
  fOnCloseFunc = onCloseFunc;
  fOnCloseClientData = onCloseClientData;
  fIsCurrentlyAwaitingData = True;

  doGetNextFrame();
}

This function mainly records, for the FramedSource, where the media data should be read to, how many bytes may be read, and the addresses of the callback functions, and finally executes doGetNextFrame() to read the data.

Ultimately, ByteStreamFileSource's doGetNextFrame() schedules the read task and reads the data from the file.

#0  ByteStreamFileSource::doGetNextFrame (this=0x6d8f10) at ByteStreamFileSource.cpp:96
#1  0x000000000043004c in FramedSource::getNextFrame (this=0x6d8f10, to=0x6da9c0 "(\243\203\367\377\177", maxSize=150000, afterGettingFunc=0x46f6c8 <StreamParser::afterGettingBytes(void*, unsigned int, unsigned int, timeval, unsigned int)>, afterGettingClientData=0x6d91b0, onCloseFunc=0x46f852 <StreamParser::onInputClosure(void*)>, onCloseClientData=0x6d91b0) at FramedSource.cpp:78
-------------------------------------------------------------------------------------------------------------------------------------
#2  0x000000000046f69c in StreamParser::ensureValidBytes1 (this=0x6d91b0, numBytesNeeded=4) at StreamParser.cpp:159
#3  0x00000000004343e5 in StreamParser::ensureValidBytes (this=0x6d91b0, numBytesNeeded=4) at StreamParser.hh:118
#4  0x0000000000434179 in StreamParser::test4Bytes (this=0x6d91b0) at StreamParser.hh:54
#5  0x0000000000471b85 in H264or5VideoStreamParser::parse (this=0x6d91b0) at H264or5VideoStreamFramer.cpp:951
#6  0x000000000043510f in MPEGVideoStreamFramer::continueReadProcessing (this=0x6d9000) at MPEGVideoStreamFramer.cpp:159
#7  0x0000000000435077 in MPEGVideoStreamFramer::doGetNextFrame (this=0x6d9000) at MPEGVideoStreamFramer.cpp:142
#8  0x000000000043004c in FramedSource::getNextFrame (this=0x6d9000, to=0x748d61 "", maxSize=100000, afterGettingFunc=0x474cd2 <H264or5Fragmenter::afterGettingFrame(void*, unsigned int, unsigned int, timeval, unsigned int)>, afterGettingClientData=0x700300, onCloseFunc=0x4300c6 <FramedSource::handleClosure(void*)>, onCloseClientData=0x700300) at FramedSource.cpp:78
-------------------------------------------------------------------------------------------------------------------------------------
#9  0x000000000047480a in H264or5Fragmenter::doGetNextFrame (this=0x700300) at H264or5VideoRTPSink.cpp:181
#10 0x000000000043004c in FramedSource::getNextFrame (this=0x700300, to=0x7304ec "", maxSize=100452, afterGettingFunc=0x45af82 <MultiFramedRTPSink::afterGettingFrame(void*, unsigned int, unsigned int, timeval, unsigned int)>, afterGettingClientData=0x6d92e0, onCloseFunc=0x45b96c <MultiFramedRTPSink::ourHandleClosure(void*)>, onCloseClientData=0x6d92e0) at FramedSource.cpp:78
-------------------------------------------------------------------------------------------------------------------------------------
#11 0x000000000045af61 in MultiFramedRTPSink::packFrame (this=0x6d92e0) at MultiFramedRTPSink.cpp:224
#12 0x000000000045adae in MultiFramedRTPSink::buildAndSendPacket (this=0x6d92e0, isFirstPacket=1 '\001') at MultiFramedRTPSink.cpp:199
#13 0x000000000045abed in MultiFramedRTPSink::continuePlaying (this=0x6d92e0) at MultiFramedRTPSink.cpp:159
-------------------------------------------------------------------------------------------------------------------------------------
#14 0x000000000047452a in H264or5VideoRTPSink::continuePlaying (this=0x6d92e0) at H264or5VideoRTPSink.cpp:127
#15 0x0000000000405d2a in MediaSink::startPlaying (this=0x6d92e0, source=..., afterFunc=0x4621f4 <afterPlayingStreamState(void*)>, afterClientData=0x6d95b0) at MediaSink.cpp:78
#16 0x00000000004626ea in StreamState::startPlaying (this=0x6d95b0, dests=0x6d9620, clientSessionId=1584618840, rtcpRRHandler=0x407280 <GenericMediaServer::ClientSession::noteClientLiveness(GenericMediaServer::ClientSession*)>, rtcpRRHandlerClientData=0x70ba40, serverRequestAlternativeByteHandler=0x4093a6 <RTSPServer::RTSPClientConnection::handleAlternativeRequestByte(void*, unsigned char)>, serverRequestAlternativeByteHandlerClientData=0x6ce910) at OnDemandServerMediaSubsession.cpp:576
#17 0x000000000046138d in OnDemandServerMediaSubsession::startStream (this=0x6d8710, clientSessionId=1584618840, streamToken=0x6d95b0, rtcpRRHandler=0x407280 <GenericMediaServer::ClientSession::noteClientLiveness(GenericMediaServer::ClientSession*)>, rtcpRRHandlerClientData=0x70ba40, rtpSeqNum=@0x7fffffffcd76: 0, rtpTimestamp=@0x7fffffffcdc0: 0, serverRequestAlternativeByteHandler=0x4093a6 <RTSPServer::RTSPClientConnection::handleAlternativeRequestByte(void*, unsigned char)>, serverRequestAlternativeByteHandlerClientData=0x6ce910) at OnDemandServerMediaSubsession.cpp:223

This call stack is fairly deep and may look confusing at first. In fact, live555 designs FramedSource with the decorator pattern: one FramedSource can wrap another FramedSource and add functionality on top of it, whether for performance optimization, for data parsing, or for other purposes.

The relationships among live555's many FramedSource classes are roughly as shown in the figure below:

The call stack above is divided by the dashed lines into several stages, mainly following the FramedSource wrapping relationships.

In ByteStreamFileSource's doGetNextFrame(), the read task is scheduled:

void ByteStreamFileSource::doGetNextFrame() {
  if (feof(fFid) || ferror(fFid) || (fLimitNumBytesToStream && fNumBytesToStream == 0)) {
    handleClosure();
    return;
  }

#ifdef READ_FROM_FILES_SYNCHRONOUSLY
  doReadFromFile();
#else
  if (!fHaveStartedReading) {
    // Await readable data from the file:
    envir().taskScheduler().turnOnBackgroundReadHandling(fileno(fFid),
        (TaskScheduler::BackgroundHandlerProc*)&fileReadableHandler, this);
    fHaveStartedReading = True;
  }
#endif
}

ByteStreamFileSource::fileReadableHandler() reads the media content and notifies the caller:

void FramedSource::afterGetting(FramedSource* source) {
  source->nextTask() = NULL;
  source->fIsCurrentlyAwaitingData = False;
      // indicates that we can be read again
      // Note that this needs to be done here, in case the "fAfterFunc"
      // called below tries to read another frame (which it usually will)

  if (source->fAfterGettingFunc != NULL) {
    (*(source->fAfterGettingFunc))(source->fAfterGettingClientData,
        source->fFrameSize, source->fNumTruncatedBytes,
        source->fPresentationTime,
        source->fDurationInMicroseconds);
  }
}
. . . . . .
void ByteStreamFileSource::fileReadableHandler(ByteStreamFileSource* source, int /*mask*/) {
  if (!source->isCurrentlyAwaitingData()) {
    source->doStopGettingFrames(); // we're not ready for the data yet
    return;
  }
  source->doReadFromFile();
}

void ByteStreamFileSource::doReadFromFile() {
  // Try to read as many bytes as will fit in the buffer provided (or "fPreferredFrameSize" if less)
  if (fLimitNumBytesToStream && fNumBytesToStream < (u_int64_t)fMaxSize) {
    fMaxSize = (unsigned)fNumBytesToStream;
  }
  if (fPreferredFrameSize > 0 && fPreferredFrameSize < fMaxSize) {
    fMaxSize = fPreferredFrameSize;
  }
#ifdef READ_FROM_FILES_SYNCHRONOUSLY
  fFrameSize = fread(fTo, 1, fMaxSize, fFid);
#else
  if (fFidIsSeekable) {
    fFrameSize = fread(fTo, 1, fMaxSize, fFid);
  } else {
    // For non-seekable files (e.g., pipes), call "read()" rather than "fread()", to ensure that the read doesn't block:
    fFrameSize = read(fileno(fFid), fTo, fMaxSize);
  }
#endif
  if (fFrameSize == 0) {
    handleClosure();
    return;
  }
  fNumBytesToStream -= fFrameSize;

  // Set the 'presentation time':
  if (fPlayTimePerFrame > 0 && fPreferredFrameSize > 0) {
    if (fPresentationTime.tv_sec == 0 && fPresentationTime.tv_usec == 0) {
      // This is the first frame, so use the current time:
      gettimeofday(&fPresentationTime, NULL);
    } else {
      // Increment by the play time of the previous data:
      unsigned uSeconds = fPresentationTime.tv_usec + fLastPlayTime;
      fPresentationTime.tv_sec += uSeconds/1000000;
      fPresentationTime.tv_usec = uSeconds%1000000;
    }

    // Remember the play time of this data:
    fLastPlayTime = (fPlayTimePerFrame*fFrameSize)/fPreferredFrameSize;
    fDurationInMicroseconds = fLastPlayTime;
  } else {
    // We don't know a specific play time duration for this data,
    // so just record the current time as being the 'presentation time':
    gettimeofday(&fPresentationTime, NULL);
  }

  // Inform the reader that he has data:
#ifdef READ_FROM_FILES_SYNCHRONOUSLY
  // To avoid possible infinite recursion, we need to return to the event loop to do this:
  nextTask() = envir().taskScheduler().scheduleDelayedTask(0,
      (TaskFunc*)FramedSource::afterGetting, this);
#else
  // Because the file read was done from the event loop, we can call the
  // 'after getting' function directly, without risk of infinite recursion:
  FramedSource::afterGetting(this);
#endif
}

Once the data has been read, MultiFramedRTPSink is notified:

#0  MultiFramedRTPSink::afterGettingFrame (clientData=0x6d92e0, numBytesRead=18, numTruncatedBytes=0, presentationTime=..., durationInMicroseconds=0) at MultiFramedRTPSink.cpp:233
---------------------------------------------------------------------------------------------------------------------------
#1  0x00000000004300c2 in FramedSource::afterGetting (source=0x7002c0) at FramedSource.cpp:92
#2  0x0000000000474ca6 in H264or5Fragmenter::doGetNextFrame (this=0x7002c0) at H264or5VideoRTPSink.cpp:263
#3  0x0000000000474dac in H264or5Fragmenter::afterGettingFrame1 (this=0x7002c0, frameSize=18, numTruncatedBytes=0, presentationTime=..., durationInMicroseconds=0) at H264or5VideoRTPSink.cpp:292
#4  0x0000000000474d25 in H264or5Fragmenter::afterGettingFrame (clientData=0x7002c0, frameSize=18, numTruncatedBytes=0, presentationTime=..., durationInMicroseconds=0) at H264or5VideoRTPSink.cpp:279
---------------------------------------------------------------------------------------------------------------------------
#5  0x00000000004300c2 in FramedSource::afterGetting (source=0x6d9000) at FramedSource.cpp:92
#6  0x00000000004351ea in MPEGVideoStreamFramer::continueReadProcessing (this=0x6d9000) at MPEGVideoStreamFramer.cpp:179
#7  0x00000000004350da in MPEGVideoStreamFramer::continueReadProcessing (clientData=0x6d9000) at MPEGVideoStreamFramer.cpp:155
#8  0x000000000046f84f in StreamParser::afterGettingBytes1 (this=0x6d91b0, numBytesRead=150000, presentationTime=...) at StreamParser.cpp:191
#9  0x000000000046f718 in StreamParser::afterGettingBytes (clientData=0x6d91b0, numBytesRead=150000, presentationTime=...) at StreamParser.cpp:170
---------------------------------------------------------------------------------------------------------------------------
#10 0x00000000004300c2 in FramedSource::afterGetting (source=0x6d8f10) at FramedSource.cpp:92
#11 0x0000000000430c2c in ByteStreamFileSource::doReadFromFile (this=0x6d8f10) at ByteStreamFileSource.cpp:182
#12 0x00000000004309cb in ByteStreamFileSource::fileReadableHandler (source=0x6d8f10) at ByteStreamFileSource.cpp:126

As before, this callback call stack is divided into several stages according to the FramedSource wrapping relationships, with the stages separated by dashed lines.

The MultiFramedRTPSink::afterGettingFrame() function is defined as follows:

void MultiFramedRTPSink
::afterGettingFrame(void* clientData, unsigned numBytesRead,
        unsigned numTruncatedBytes,
        struct timeval presentationTime,
        unsigned durationInMicroseconds) {
  MultiFramedRTPSink* sink = (MultiFramedRTPSink*)clientData;
  sink->afterGettingFrame1(numBytesRead, numTruncatedBytes,
      presentationTime, durationInMicroseconds);
}

This function calls afterGettingFrame1(), which in turn calls sendPacketIfNecessary() as needed. MultiFramedRTPSink::sendPacketIfNecessary() is defined as follows:

void MultiFramedRTPSink::sendPacketIfNecessary() {
  if (fNumFramesUsedSoFar > 0) {
    // Send the packet:
#ifdef TEST_LOSS
    if ((our_random()%10) != 0) // simulate 10% packet loss #####
#endif
      if (!fRTPInterface.sendPacket(fOutBuf->packet(), fOutBuf->curPacketSize())) {
        // if failure handler has been specified, call it
        if (fOnSendErrorFunc != NULL) (*fOnSendErrorFunc)(fOnSendErrorData);
      }
    ++fPacketCount;
    fTotalOctetCount += fOutBuf->curPacketSize();
    fOctetCount += fOutBuf->curPacketSize()
      - rtpHeaderSize - fSpecialHeaderSize - fTotalFrameSpecificHeaderSizes;

    ++fSeqNo; // for next time
  }

  if (fOutBuf->haveOverflowData()
      && fOutBuf->totalBytesAvailable() > fOutBuf->totalBufferSize()/2) {
    // Efficiency hack: Reset the packet start pointer to just in front of
    // the overflow data (allowing for the RTP header and special headers),
    // so that we probably don't have to "memmove()" the overflow data
    // into place when building the next packet:
    unsigned newPacketStart = fOutBuf->curPacketSize()
      - (rtpHeaderSize + fSpecialHeaderSize + frameSpecificHeaderSize());
    fOutBuf->adjustPacketStart(newPacketStart);
  } else {
    // Normal case: Reset the packet start pointer back to the start:
    fOutBuf->resetPacketStart();
  }
  fOutBuf->resetOffset();
  fNumFramesUsedSoFar = 0;

  if (fNoFramesLeft) {
    // We're done:
    onSourceClosure();
  } else {
    // We have more frames left to send.  Figure out when the next frame
    // is due to start playing, then make sure that we wait this long before
    // sending the next packet.
    struct timeval timeNow;
    gettimeofday(&timeNow, NULL);
    int secsDiff = fNextSendTime.tv_sec - timeNow.tv_sec;
    int64_t uSecondsToGo = secsDiff*1000000 + (fNextSendTime.tv_usec - timeNow.tv_usec);
    if (uSecondsToGo < 0 || secsDiff < 0) { // sanity check: Make sure that the time-to-delay is non-negative:
      uSecondsToGo = 0;
    }

    // Delay this amount of time:
    nextTask() = envir().taskScheduler().scheduleDelayedTask(uSecondsToGo, (TaskFunc*)sendNext, this);
  }
}

In MultiFramedRTPSink::sendPacketIfNecessary(), the frame data is sent. If the media stream is not finished, then after a packet has been sent, a timer task, MultiFramedRTPSink::sendNext(), is scheduled to send the next one.

MultiFramedRTPSink::sendNext() runs a flow similar to MultiFramedRTPSink::continuePlaying(): it obtains the next frame of data and sends it.

void MultiFramedRTPSink::sendNext(void* firstArg) {
  MultiFramedRTPSink* sink = (MultiFramedRTPSink*)firstArg;
  sink->buildAndSendPacket(False);
}

Of course, not every packet send needs to fetch data directly from the media source. StreamParser makes this decision: when frame data is needed, it initiates a read of the media file; when no file read is required, the callback is invoked directly:

#0  MultiFramedRTPSink::sendPacketIfNecessary (this=0x702140) at MultiFramedRTPSink.cpp:365
#1  0x000000000045b5a4 in MultiFramedRTPSink::afterGettingFrame1 (this=0x702140, frameSize=1444, numTruncatedBytes=0, presentationTime=..., durationInMicroseconds=40000) at MultiFramedRTPSink.cpp:347
#2  0x000000000045afd5 in MultiFramedRTPSink::afterGettingFrame (clientData=0x702140, numBytesRead=1444, numTruncatedBytes=0, presentationTime=..., durationInMicroseconds=40000) at MultiFramedRTPSink.cpp:235
#3  0x00000000004300c2 in FramedSource::afterGetting (source=0x7036d0) at FramedSource.cpp:92
------------------------------------------------------------------------------------------------------------------------------------
#4  0x0000000000474ca6 in H264or5Fragmenter::doGetNextFrame (this=0x7036d0) at H264or5VideoRTPSink.cpp:263
#5  0x0000000000474dac in H264or5Fragmenter::afterGettingFrame1 (this=0x7036d0, frameSize=53527, numTruncatedBytes=0, presentationTime=..., durationInMicroseconds=40000) at H264or5VideoRTPSink.cpp:292
#6  0x0000000000474d25 in H264or5Fragmenter::afterGettingFrame (clientData=0x7036d0, frameSize=53527, numTruncatedBytes=0, presentationTime=..., durationInMicroseconds=40000) at H264or5VideoRTPSink.cpp:279
#7  0x00000000004300c2 in FramedSource::afterGetting (source=0x701e20) at FramedSource.cpp:92
------------------------------------------------------------------------------------------------------------------------------------
#8  0x00000000004351ea in MPEGVideoStreamFramer::continueReadProcessing (this=0x701e20) at MPEGVideoStreamFramer.cpp:179
#9  0x0000000000435077 in MPEGVideoStreamFramer::doGetNextFrame (this=0x701e20) at MPEGVideoStreamFramer.cpp:142
------------------------------------------------------------------------------------------------------------------------------------
#10 0x000000000043004c in FramedSource::getNextFrame (this=0x701e20, to=0x7c3091 "\205\270@\367\017\204?\017", <incomplete sequence \340>, maxSize=100000, afterGettingFunc=0x474cd2 <H264or5Fragmenter::afterGettingFrame(void*, unsigned int, unsigned int, timeval, unsigned int)>, afterGettingClientData=0x7036d0, onCloseFunc=0x4300c6 <FramedSource::handleClosure(void*)>, onCloseClientData=0x7036d0) at FramedSource.cpp:78
#11 0x000000000047480a in H264or5Fragmenter::doGetNextFrame (this=0x7036d0) at H264or5VideoRTPSink.cpp:181
------------------------------------------------------------------------------------------------------------------------------------
#12 0x000000000043004c in FramedSource::getNextFrame (this=0x7036d0, to=0x7aa81c "|\205\270@\367\017\204?\017", <incomplete sequence \340>, maxSize=100452, afterGettingFunc=0x45af82 <MultiFramedRTPSink::afterGettingFrame(void*, unsigned int, unsigned int, timeval, unsigned int)>, afterGettingClientData=0x702140, onCloseFunc=0x45b96c <MultiFramedRTPSink::ourHandleClosure(void*)>, onCloseClientData=0x702140) at FramedSource.cpp:78
#13 0x000000000045af61 in MultiFramedRTPSink::packFrame (this=0x702140) at MultiFramedRTPSink.cpp:224
#14 0x000000000045adae in MultiFramedRTPSink::buildAndSendPacket (this=0x702140, isFirstPacket=0 '\000') at MultiFramedRTPSink.cpp:199
#15 0x000000000045b969 in MultiFramedRTPSink::sendNext (firstArg=0x702140) at MultiFramedRTPSink.cpp:422
#16 0x000000000047f165 in AlarmHandler::handleTimeout (this=0x7038a0) at BasicTaskScheduler0.cpp:34
#17 0x000000000047d268 in DelayQueue::handleAlarm (this=0x6cdc28) at DelayQueue.cpp:187
#18 0x000000000047c196 in BasicTaskScheduler::SingleStep (this=0x6cdc20, maxDelayTime=0) at BasicTaskScheduler.cpp:212

To summarize the RTP packet sending process:

  • When OnDemandServerMediaSubsession executes startStream(), a task to read the media file is initiated; the actual file reading is done by ByteStreamFileSource's doReadFromFile().
  • After some data has been read from the file, MultiFramedRTPSink receives the afterGetting() callback, in which the frame data is sent.
  • In MultiFramedRTPSink's callback, if the media data has not all been read, a timer task is scheduled to fetch the next frame after a delay.
  • Steps 2 and 3 repeat until all the data has been sent.

Receiving RTCP packets

StreamState::startPlaying() creates the RTCPInstance through OnDemandServerMediaSubsession::createRTCP():

RTCPInstance* OnDemandServerMediaSubsession
::createRTCP(Groupsock* RTCPgs, unsigned totSessionBW, /* in kbps */
             unsigned char const* cname, RTPSink* sink) {
  fprintf(stderr, "OnDemandServerMediaSubsession::createRTCP().\n");
  // Default implementation; may be redefined by subclasses:
  return RTCPInstance::createNew(envir(), RTCPgs, totSessionBW, cname, sink, NULL/*we're a server*/);
}

OnDemandServerMediaSubsession::createRTCP() in turn creates it through RTCPInstance::createNew():

RTCPInstance::RTCPInstance(UsageEnvironment& env, Groupsock* RTCPgs,
        unsigned totSessionBW,
        unsigned char const* cname,
        RTPSink* sink, RTPSource* source,
        Boolean isSSMSource)
  : Medium(env), fRTCPInterface(this, RTCPgs), fTotSessionBW(totSessionBW),
    fSink(sink), fSource(source), fIsSSMSource(isSSMSource),
    fCNAME(RTCP_SDES_CNAME, cname), fOutgoingReportCount(1),
    fAveRTCPSize(0), fIsInitial(1), fPrevNumMembers(0),
    fLastSentSize(0), fLastReceivedSize(0), fLastReceivedSSRC(0),
    fTypeOfEvent(EVENT_UNKNOWN), fTypeOfPacket(PACKET_UNKNOWN_TYPE),
    fHaveJustSentPacket(False), fLastPacketSentSize(0),
    fByeHandlerTask(NULL), fByeHandlerClientData(NULL),
    fSRHandlerTask(NULL), fSRHandlerClientData(NULL),
    fRRHandlerTask(NULL), fRRHandlerClientData(NULL),
    fSpecificRRHandlerTable(NULL),
    fAppHandlerTask(NULL), fAppHandlerClientData(NULL) {
#ifdef DEBUG
  fprintf(stderr, "RTCPInstance[%p]::RTCPInstance()\n", this);
#endif
  if (fTotSessionBW == 0) { // not allowed!
    env << "RTCPInstance::RTCPInstance error: totSessionBW parameter should not be zero!\n";
    fTotSessionBW = 1;
  }

  if (isSSMSource) RTCPgs->multicastSendOnly(); // don't receive multicast

  double timeNow = dTimeNow();
  fPrevReportTime = fNextReportTime = timeNow;

  fKnownMembers = new RTCPMemberDatabase(*this);
  fInBuf = new unsigned char[maxRTCPPacketSize];
  if (fKnownMembers == NULL || fInBuf == NULL) return;
  fNumBytesAlreadyRead = 0;

  fOutBuf = new OutPacketBuffer(preferredRTCPPacketSize, maxRTCPPacketSize, maxRTCPPacketSize);
  if (fOutBuf == NULL) return;

  if (fSource != NULL && fSource->RTPgs() == RTCPgs) {
    // We're receiving RTCP reports that are multiplexed with RTP, so ask the RTP source
    // to give them to us:
    fSource->registerForMultiplexedRTCPPackets(this);
  } else {
    // Arrange to handle incoming reports from the network:
    TaskScheduler::BackgroundHandlerProc* handler
      = (TaskScheduler::BackgroundHandlerProc*)&incomingReportHandler;
    fRTCPInterface.startNetworkReading(handler);
  }

  // Send our first report.
  fTypeOfEvent = EVENT_REPORT;
  onExpire(this);
}
. . . . . .
RTCPInstance* RTCPInstance::createNew(UsageEnvironment& env, Groupsock* RTCPgs,
        unsigned totSessionBW,
        unsigned char const* cname,
        RTPSink* sink, RTPSource* source,
        Boolean isSSMSource) {
  return new RTCPInstance(env, RTCPgs, totSessionBW, cname, sink, source,
      isSSMSource);
}

As we can see, the RTCPInstance constructor calls RTPInterface::startNetworkReading() to register a callback:

void RTPInterface
::startNetworkReading(TaskScheduler::BackgroundHandlerProc* handlerProc) {
  // Normal case: Arrange to read UDP packets:
  envir().taskScheduler().turnOnBackgroundReadHandling(fGS->socketNum(), handlerProc, fOwner);

  // Also, receive RTP over TCP, on each of our TCP connections:
  fReadHandlerProc = handlerProc;
  for (tcpStreamRecord* streams = fTCPStreams; streams != NULL;
       streams = streams->fNext) {
    // Get a socket descriptor for "streams->fStreamSocketNum":
    SocketDescriptor* socketDescriptor = lookupSocketDescriptor(envir(), streams->fStreamSocketNum);

    // Tell it about our subChannel:
    socketDescriptor->registerRTPInterface(streams->fStreamChannelId, this);
  }
}

RTPInterface::startNetworkReading() registers the RTCP socket, and the handler for events on that socket, with the TaskScheduler. This is exactly how live555 gets notified when an RTCP packet arrives; the packet is then handled by RTCPInstance::incomingReportHandler().

Sending RTCP packets

RTCP packets are sent as needed by RTCPInstance::sendReport() and related functions:

void RTCPInstance::sendReport() {
#ifdef DEBUG
  fprintf(stderr, "sending REPORT\n");
#endif
  // Begin by including a SR and/or RR report:
  if (!addReport()) return;

  // Then, include a SDES:
  addSDES();

  // Send the report:
  sendBuiltPacket();

  // Periodically clean out old members from our SSRC membership database:
  const unsigned membershipReapPeriod = 5;
  if ((++fOutgoingReportCount) % membershipReapPeriod == 0) {
    unsigned threshold = fOutgoingReportCount - membershipReapPeriod;
    fKnownMembers->reapOldMembers(threshold);
  }
}

void RTCPInstance::sendBYE() {
#ifdef DEBUG
  fprintf(stderr, "sending BYE\n");
#endif
  // The packet must begin with a SR and/or RR report:
  (void)addReport(True);

  addBYE();
  sendBuiltPacket();
}

void RTCPInstance::sendBuiltPacket() {
#ifdef DEBUG
  fprintf(stderr, "sending RTCP packet\n");
  unsigned char* p = fOutBuf->packet();
  for (unsigned i = 0; i < fOutBuf->curPacketSize(); ++i) {
    if (i%4 == 0) fprintf(stderr," ");
    fprintf(stderr, "%02x", p[i]);
  }
  fprintf(stderr, "\n");
#endif
  unsigned reportSize = fOutBuf->curPacketSize();
  fRTCPInterface.sendPacket(fOutBuf->packet(), reportSize);
  fOutBuf->resetOffset();

  fLastSentSize = IP_UDP_HDR_SIZE + reportSize;
  fHaveJustSentPacket = True;
  fLastPacketSentSize = reportSize;
}

This is just what we saw in StreamState::startPlaying().

Done.

live555 Source Analysis Series

live555 Source Analysis: Introduction
live555 Source Analysis: Infrastructure
live555 Source Analysis: MediaServer
Analyzing the Basic RTSP/RTP/RTCP Workflow with Wireshark Captures
live555 Source Analysis: RTSPServer
live555 Source Analysis: Handling DESCRIBE
live555 Source Analysis: Handling SETUP
live555 Source Analysis: Handling PLAY
live555 Source Analysis: RTSPServer Component Structure
live555 Source Analysis: ServerMediaSession
live555 Source Analysis: Subsession SDP Line Generation
live555 Source Analysis: Subsession SETUP
live555 Source Analysis: Playback Startup
