7. RTP Packetization and Sending

RTP transmission begins with MediaSink::startPlaying(). That makes sense: it is the sink that asks the source for data, so playback is kicked off from the sink side by calling startPlaying(), much like DirectShow's pull mode.
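For context, here is how an application typically kicks this off, a minimal sketch in the style of the live555 test programs (the play()/afterPlaying() helpers and the sink/source variable names are illustrative, not part of the library):

#include "liveMedia.hh"

// Called back once the sink has consumed the whole source:
void afterPlaying(void* clientData) {
  RTPSink* sink = (RTPSink*)clientData;
  sink->stopPlaying();
}

void play(RTPSink* videoSink, FramedSource* videoSource) {
  // The sink drives the pipeline: from now on it calls getNextFrame() on the
  // source whenever it needs data to pack into the next RTP packet.
  videoSink->startPlaying(*videoSource, afterPlaying, videoSink);
}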

Let's look at MediaSink::startPlaying() itself:
Boolean MediaSink::startPlaying(MediaSource& source,
                                afterPlayingFunc* afterFunc,
                                void* afterClientData) {
  // The afterFunc parameter is called only when playback ends.
  // Make sure we're not already being played:
  if (fSource != NULL) {
    envir().setResultMsg("This sink is already being played");
    return False;
  }

  // Make sure our source is compatible:
  if (!sourceIsCompatibleWithUs(source)) {
    envir().setResultMsg("MediaSink::startPlaying(): source is not compatible!");
    return False;
  }

  // Remember the objects we will need later:
  fSource = (FramedSource*)&source;
  fAfterFunc = afterFunc;
  fAfterClientData = afterClientData;

  return continuePlaying();
}

For further encapsulation (so that derived classes have less code to write), a virtual function continuePlaying() is introduced. Let's take a look:

Boolean MultiFramedRTPSink::continuePlaying() {
  // Send the first packet.
  // (This will also schedule any future sends.)
  buildAndSendPacket(True);
  return True;
}

MultiFramedRTPSink is the frame-oriented sink class: it expects each read from the source to yield one frame of data, hence the name. As you can see, continuePlaying() simply delegates to buildAndSendPacket(). Let's look at buildAndSendPacket():
void MultiFramedRTPSink::buildAndSendPacket(Boolean isFirstPacket) {
  // This function mainly prepares the RTP packet header, leaving holes for
  // fields that can only be filled in once the actual data is known.
  fIsFirstPacket = isFirstPacket;

  // Set up the RTP header:
  unsigned rtpHdr = 0x80000000; // RTP version 2; marker ('M') bit not set (by default; it can be set later)
  rtpHdr |= (fRTPPayloadType << 16);
  rtpHdr |= fSeqNo; // sequence number
  fOutBuf->enqueueWord(rtpHdr); // append one 32-bit word to the packet

  // Note where the RTP timestamp will go.
  // (We can't fill this in until we start packing payload frames.)
  fTimestampPosition = fOutBuf->curPacketSize();
  fOutBuf->skipBytes(4); // leave a hole in the buffer for the timestamp

  fOutBuf->enqueueWord(SSRC());

  // Allow for a special, payload-format-specific header following the
  // RTP header:
  fSpecialHeaderPosition = fOutBuf->curPacketSize();
  fSpecialHeaderSize = specialHeaderSize();
  fOutBuf->skipBytes(fSpecialHeaderSize);

  // Begin packing as many (complete) frames into the packet as we can:
  fTotalFrameSpecificHeaderSizes = 0;
  fNoFramesLeft = False;
  fNumFramesUsedSoFar = 0; // number of frames already packed into this packet

  // The header is ready; now pack in the frame data:
  packFrame();
}

Continuing with packFrame():
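As a sanity check on that first header word: 0x80000000 sets version 2 with no padding, extension, or CSRCs; the payload type lands in bits 16 to 22 (bit 23 is the marker bit, cleared by default); the sequence number fills the low 16 bits. A tiny standalone illustration, with made-up payload type and sequence number values:

#include <cstdio>

int main() {
  // Build the first 32-bit word of an RTP header the same way
  // MultiFramedRTPSink::buildAndSendPacket() does:
  unsigned rtpPayloadType = 96;   // example dynamic payload type
  unsigned seqNo = 0x1234;        // example sequence number

  unsigned rtpHdr = 0x80000000;        // V=2, P=0, X=0, CC=0, M=0
  rtpHdr |= (rtpPayloadType << 16);    // PT in bits 16..22 (bit 23 is the marker)
  rtpHdr |= seqNo;                     // sequence number in the low 16 bits

  printf("first RTP header word = 0x%08X\n", rtpHdr);  // prints 0x80601234
  return 0;
}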
void MultiFramedRTPSink::packFrame() {
  // First, see if we have an overflow frame that was too big for the last pkt
  if (fOutBuf->haveOverflowData()) {
    // If there is leftover frame data, use it first.  Overflow data is frame
    // data left over from the previous packet, because one packet may not be
    // able to hold a whole frame.
    // Use this frame before reading a new one from the source
    unsigned frameSize = fOutBuf->overflowDataSize();
    struct timeval presentationTime = fOutBuf->overflowPresentationTime();
    unsigned durationInMicroseconds = fOutBuf->overflowDurationInMicroseconds();
    fOutBuf->useOverflowData();

    afterGettingFrame1(frameSize, 0, presentationTime, durationInMicroseconds);
  } else {
    // No frame data at all, so ask the source for some.
    // Normal case: we need to read a new frame from the source
    if (fSource == NULL) return;

    // Update some positions in the buffer:
    fCurFrameSpecificHeaderPosition = fOutBuf->curPacketSize();
    fCurFrameSpecificHeaderSize = frameSpecificHeaderSize();
    fOutBuf->skipBytes(fCurFrameSpecificHeaderSize);
    fTotalFrameSpecificHeaderSizes += fCurFrameSpecificHeaderSize;

    // Fetch the next frame from the source:
    fSource->getNextFrame(fOutBuf->curPtr(),             // where the new data starts
                          fOutBuf->totalBytesAvailable(), // free space remaining in the buffer
                          afterGettingFrame, // the source's read may be queued in the task scheduler, so pass it the function to call once a frame has been obtained
                          this,
                          ourHandleClosure,  // the function to call when the source closes (e.g. the end of the file is reached)
                          this);
  }
}

You can imagine what happens next: the source reads one frame from a file (or some device) and hands it back to the sink, not via a function return, but by invoking the afterGettingFrame callback. So next, let's look at afterGettingFrame():
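On the source side, a FramedSource subclass fulfils this getNextFrame() request inside its doGetNextFrame() and then notifies the sink via the static FramedSource::afterGetting(), which invokes the afterGettingFrame callback passed above. A rough sketch, modeled on the DeviceSource.cpp template that ships with live555; MyFrameSource and myGetNextEncodedFrame() are hypothetical names used only for illustration:

#include "FramedSource.hh"
#include <sys/time.h>
#include <cstring>

// Hypothetical helper that fetches one encoded frame from a device or file:
void myGetNextEncodedFrame(unsigned char** frameData, unsigned* frameSize);

class MyFrameSource : public FramedSource {
public:
  MyFrameSource(UsageEnvironment& env) : FramedSource(env) {}
private:
  virtual void doGetNextFrame();
};

void MyFrameSource::doGetNextFrame() {
  // fTo, fMaxSize, fFrameSize, fNumTruncatedBytes, fPresentationTime and
  // fDurationInMicroseconds are protected members inherited from FramedSource.
  unsigned char* newFrameDataStart;
  unsigned newFrameSize;
  myGetNextEncodedFrame(&newFrameDataStart, &newFrameSize);

  // Deliver at most fMaxSize bytes; anything beyond that is reported as
  // truncated (this is what triggers the "frame data was too large" warning
  // shown in afterGettingFrame1() below):
  if (newFrameSize > fMaxSize) {
    fFrameSize = fMaxSize;
    fNumTruncatedBytes = newFrameSize - fMaxSize;
  } else {
    fFrameSize = newFrameSize;
    fNumTruncatedBytes = 0;
  }
  memmove(fTo, newFrameDataStart, fFrameSize);
  gettimeofday(&fPresentationTime, NULL);
  fDurationInMicroseconds = 40000; // e.g. one frame of 25 fps video

  // Hand the frame back to the sink: this ends up invoking the
  // afterGettingFrame() callback that MultiFramedRTPSink passed to getNextFrame().
  FramedSource::afterGetting(this);
}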
void MultiFramedRTPSink::afterGettingFrame(void* clientData,
                                           unsigned numBytesRead,
                                           unsigned numTruncatedBytes,
                                           struct timeval presentationTime,
                                           unsigned durationInMicroseconds) {
  MultiFramedRTPSink* sink = (MultiFramedRTPSink*)clientData;
  sink->afterGettingFrame1(numBytesRead, numTruncatedBytes,
                           presentationTime, durationInMicroseconds);
}

Nothing much to see here; it just forwards to the member function, so afterGettingFrame1() is where the real work happens:
void MultiFramedRTPSink::afterGettingFrame1(unsigned frameSize,
                                            unsigned numTruncatedBytes,
                                            struct timeval presentationTime,
                                            unsigned durationInMicroseconds) {
  if (fIsFirstPacket) {
    // Record the fact that we're starting to play now:
    gettimeofday(&fNextSendTime, NULL);
  }

  // If the buffer offered for a frame is not big enough, the frame gets
  // truncated.  All we can do is warn the user:
  if (numTruncatedBytes > 0) {
    unsigned const bufferSize = fOutBuf->totalBytesAvailable();
    envir() << "MultiFramedRTPSink::afterGettingFrame1(): The input frame data was too large for our buffer size ("
            << bufferSize << ").  "
            << numTruncatedBytes << " bytes of trailing data was dropped!  Correct this by increasing \"OutPacketBuffer::maxSize\" to at least "
            << OutPacketBuffer::maxSize + numTruncatedBytes
            << ", *before* creating this 'RTPSink'.  (Current value is "
            << OutPacketBuffer::maxSize << ".)\n";
  }

  unsigned curFragmentationOffset = fCurFragmentationOffset;
  unsigned numFrameBytesToUse = frameSize;
  unsigned overflowBytes = 0;

  // If frames have already been packed into this packet and no more data may
  // be added to it, save the newly obtained frame for later.
  // If we have already packed one or more frames into this packet,
  // check whether this new frame is eligible to be packed after them.
  // (This is independent of whether the packet has enough room for this
  // new frame; that check comes later.)
  if (fNumFramesUsedSoFar > 0) {
    // The packet already contains a frame and no further frames are allowed
    // after it, so just record the new frame:
    if ((fPreviousFrameEndedFragmentation
         && !allowOtherFramesAfterLastFragment())
        || !frameCanAppearAfterPacketStart(fOutBuf->curPtr(), frameSize)) {
      // Save away this frame for next time:
      numFrameBytesToUse = 0;
      fOutBuf->setOverflowData(fOutBuf->curPacketSize(), frameSize,
                               presentationTime, durationInMicroseconds);
    }
  }

  // Tracks whether the data just packed was the final fragment of a
  // fragmented frame:
  fPreviousFrameEndedFragmentation = False;

  // Next, work out how much of the new frame fits into the current packet;
  // the remainder is saved as overflow data.
  if (numFrameBytesToUse > 0) {
    // Check whether this frame overflows the packet
    if (fOutBuf->wouldOverflow(frameSize)) {
      // Don't use this frame now; instead, save it as overflow data, and
      // send it in the next packet instead.  However, if the frame is too
      // big to fit in a packet by itself, then we need to fragment it (and
      // use some of it in this packet, if the payload format permits this.)
      if (isTooBigForAPacket(frameSize)
          && (fNumFramesUsedSoFar == 0 || allowFragmentationAfterStart())) {
        // We need to fragment this frame, and use some of it now:
        overflowBytes = computeOverflowForNewFrame(frameSize);
        numFrameBytesToUse -= overflowBytes;
        fCurFragmentationOffset += numFrameBytesToUse;
      } else {
        // We don't use any of this frame now:
        overflowBytes = frameSize;
        numFrameBytesToUse = 0;
      }
      fOutBuf->setOverflowData(fOutBuf->curPacketSize() + numFrameBytesToUse,
                               overflowBytes, presentationTime,
                               durationInMicroseconds);
    } else if (fCurFragmentationOffset > 0) {
      // This is the last fragment of a frame that was fragmented over
      // more than one packet.  Do any special handling for this case:
      fCurFragmentationOffset = 0;
      fPreviousFrameEndedFragmentation = True;
    }
  }

  if (numFrameBytesToUse == 0 && frameSize > 0) {
    // None of the new frame can go into this packet (it was all saved as
    // overflow), so send the packet now.  (It seems hard for this case to arise!)
    // Send our packet now, because we have filled it up:
    sendPacketIfNecessary();
  } else {
    // We still have data to pack into the packet.
    // Use this frame in our outgoing packet:
    unsigned char* frameStart = fOutBuf->curPtr();
    fOutBuf->increment(numFrameBytesToUse);
        // do this now, in case "doSpecialFrameHandling()" calls "setFramePadding()" to append padding bytes

    // Here's where any payload format specific processing gets done:
    doSpecialFrameHandling(curFragmentationOffset, frameStart,
                           numFrameBytesToUse, presentationTime,
                           overflowBytes);

    ++fNumFramesUsedSoFar;

    // Update the time at which the next packet should be sent, based
    // on the duration of the frame that we just packed into it.
    // However, if this frame has overflow data remaining, then don't
    // count its duration yet.
    if (overflowBytes == 0) {
      fNextSendTime.tv_usec += durationInMicroseconds;
      fNextSendTime.tv_sec += fNextSendTime.tv_usec / 1000000;
      fNextSendTime.tv_usec %= 1000000;
    }

    // Send the packet if appropriate; otherwise keep packing data into it.
    // Send our packet now if (i) it's already at our preferred size, or
    // (ii) (heuristic) another frame of the same size as the one we just
    //      read would overflow the packet, or
    // (iii) it contains the last fragment of a fragmented frame, and we
    //      don't allow anything else to follow this, or
    // (iv) one frame per packet is allowed:
    if (fOutBuf->isPreferredSize()
        || fOutBuf->wouldOverflow(numFrameBytesToUse)
        || (fPreviousFrameEndedFragmentation
            && !allowOtherFramesAfterLastFragment())
        || !frameCanAppearAfterPacketStart(fOutBuf->curPtr() - frameSize,
                                           frameSize)) {
      // The packet is ready to be sent now
      sendPacketIfNecessary();
    } else {
      // There's room for more frames; try getting another:
      packFrame();
    }
  }
}
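The send-time bookkeeping near the end is worth a second look: each fully packed frame advances fNextSendTime by its duration, with the microseconds field carried over into seconds. A tiny standalone illustration of that carry arithmetic (the starting time and frame duration are made up):

#include <cstdio>
#include <sys/time.h>

int main() {
  struct timeval nextSendTime = { 100, 980000 };  // 100 s + 980,000 us
  unsigned durationInMicroseconds = 40000;        // one 25 fps video frame

  // Same carry logic as in MultiFramedRTPSink::afterGettingFrame1():
  nextSendTime.tv_usec += durationInMicroseconds;         // 1,020,000 us
  nextSendTime.tv_sec  += nextSendTime.tv_usec / 1000000; // carry: 101 s
  nextSendTime.tv_usec %= 1000000;                        // 20,000 us

  printf("next send time: %ld s, %ld us\n",
         (long)nextSendTime.tv_sec, (long)nextSendTime.tv_usec);
  return 0;
}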
Now let's look at the function that actually sends the data:
void MultiFramedRTPSink::sendPacketIfNecessary() {
  // Send the packet if it contains any frames:
  if (fNumFramesUsedSoFar > 0) {
    // Send the packet:
#ifdef TEST_LOSS
    if ((our_random()%10) != 0) // simulate 10% packet loss #####
#endif
      if (!fRTPInterface.sendPacket(fOutBuf->packet(), fOutBuf->curPacketSize())) {
        // if failure handler has been specified, call it
        if (fOnSendErrorFunc != NULL) (*fOnSendErrorFunc)(fOnSendErrorData);
      }
    ++fPacketCount;
    fTotalOctetCount += fOutBuf->curPacketSize();
    fOctetCount += fOutBuf->curPacketSize()
        - rtpHeaderSize - fSpecialHeaderSize - fTotalFrameSpecificHeaderSizes;

    ++fSeqNo; // for next time
  }

  // If there is leftover (overflow) data, adjust the buffer:
  if (fOutBuf->haveOverflowData()
      && fOutBuf->totalBytesAvailable() > fOutBuf->totalBufferSize() / 2) {
    // Efficiency hack: Reset the packet start pointer to just in front of
    // the overflow data (allowing for the RTP header and special headers),
    // so that we probably don't have to "memmove()" the overflow data
    // into place when building the next packet:
    unsigned newPacketStart = fOutBuf->curPacketSize()
        - (rtpHeaderSize + fSpecialHeaderSize + frameSpecificHeaderSize());
    fOutBuf->adjustPacketStart(newPacketStart);
  } else {
    // Normal case: Reset the packet start pointer back to the start:
    fOutBuf->resetPacketStart();
  }

  fOutBuf->resetOffset();
  fNumFramesUsedSoFar = 0;

  if (fNoFramesLeft) {
    // No data left at all, so finish up:
    // We're done:
    onSourceClosure(this);
  } else {
    // There is more data; schedule the next pack-and-send for when the next
    // packet is due.
    // We have more frames left to send.  Figure out when the next frame
    // is due to start playing, then make sure that we wait this long before
    // sending the next packet.
    struct timeval timeNow;
    gettimeofday(&timeNow, NULL);
    int secsDiff = fNextSendTime.tv_sec - timeNow.tv_sec;
    int64_t uSecondsToGo = secsDiff * 1000000
        + (fNextSendTime.tv_usec - timeNow.tv_usec);
    if (uSecondsToGo < 0 || secsDiff < 0) {
      // sanity check: Make sure that the time-to-delay is non-negative:
      uSecondsToGo = 0;
    }

    // Delay this amount of time:
    nextTask() = envir().taskScheduler().scheduleDelayedTask(uSecondsToGo,
                                                             (TaskFunc*)sendNext,
                                                             this);
  }
}

As you can see, a delayed task is used to postpone sending: the next pack-and-send is scheduled to run at the time the next packet is due.

sendNext() in turn calls buildAndSendPacket(), and we are back at the start of the loop.
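To see that scheduling idiom in isolation: a delayed task is just a (TaskFunc*, clientData) pair handed to the environment's task scheduler, which fires it from the event loop after the given number of microseconds. A minimal standalone sketch using BasicUsageEnvironment; the tick() function and the one-second interval are made up for illustration:

#include "BasicUsageEnvironment.hh"

UsageEnvironment* env;

void tick(void* /*clientData*/) {
  *env << "tick\n";
  // Re-arm the task, just as sendNext() re-enters buildAndSendPacket():
  env->taskScheduler().scheduleDelayedTask(1000000 /*us*/, tick, NULL);
}

int main() {
  TaskScheduler* scheduler = BasicTaskScheduler::createNew();
  env = BasicUsageEnvironment::createNew(*scheduler);

  env->taskScheduler().scheduleDelayedTask(1000000, tick, NULL);
  env->taskScheduler().doEventLoop();  // never returns
  return 0;
}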

To summarize the call sequence:

MediaSink::startPlaying() -> continuePlaying() -> buildAndSendPacket() -> packFrame() -> FramedSource::getNextFrame() -> afterGettingFrame() -> afterGettingFrame1() -> sendPacketIfNecessary() -> scheduleDelayedTask(sendNext) -> sendNext() -> buildAndSendPacket() -> ... (and so on, until the source closes)

Finally, a few words about how the packet buffer is used:

In MultiFramedRTPSink, frame data and the outgoing packet share a single buffer; a few extra variables mark which part of the buffer belongs to the packet and which part holds frame data (data beyond the packet is called overflow data). Sometimes the overflow data is memmove()d to the start of the packet, and sometimes the packet's start position is simply set to where the overflow data begins. So how is the size of this buffer determined? It is computed from a caller-specified maximum packet size plus 60000. This is the part that confused me: if a whole frame is fetched from the source at a time, the buffer should be no smaller than the largest frame, so why is it sized by the packet size? As you can see, when the buffer is too small the library merely prints a warning:

if (numTruncatedBytes > 0) {
  unsigned const bufferSize = fOutBuf->totalBytesAvailable();
  envir() << "MultiFramedRTPSink::afterGettingFrame1(): The input frame data was too large for our buffer size ("
          << bufferSize << ").  "
          << numTruncatedBytes << " bytes of trailing data was dropped!  Correct this by increasing \"OutPacketBuffer::maxSize\" to at least "
          << OutPacketBuffer::maxSize + numTruncatedBytes
          << ", *before* creating this 'RTPSink'.  (Current value is "
          << OutPacketBuffer::maxSize << ".)\n";
}

Of course nothing fails outright at this point, but it may make the timestamp calculation inaccurate, or add complexity to the timestamp calculation and to the source-side handling (with one whole frame per read, timestamps are easy to compute).
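In practice, the fix that the warning suggests looks like this: raise the static OutPacketBuffer::maxSize before any RTPSink (and hence its OutPacketBuffer) is created. A small sketch; the concrete value is only an example:

#include "liveMedia.hh"

void configureBuffers() {
  // Must happen before the RTPSink is constructed, since the sink's
  // OutPacketBuffer is sized using this value at construction time.
  OutPacketBuffer::maxSize = 300000; // example: large enough for big key frames
}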

Reposted from: https://www.cnblogs.com/android-html5/archive/2011/10/31/2533627.html
