Before iOS 8.0, audio/video development on Apple platforms required third-party software for encoding and decoding (see the earlier article on soft-decoding H264 video streams with FFmpeg). The learning curve was steep, and project schedules could easily slip past expectations. With iOS 8.0, Apple opened up the VideoToolbox framework for video encoding and decoding, and audio/video development has been comparatively straightforward ever since.

I. Hardware-Decoding Terms (Structures) Explained

1. VTDecompressionSessionRef: the decoder (decompression session) object;

2. CMVideoFormatDescriptionRef: the format and description information used for video decoding;

3. CVPixelBufferRef: the image data structure before encoding and after decoding;

4. CMBlockBufferRef: the memory structure that holds the compressed (pre-decode) frame data;

5. CMSampleBufferRef: the container structure that carries the pre-decode video frame;

6. AVSampleBufferDisplayLayer: a layer that takes CMSampleBufferRefs, decodes them itself, and displays the result;

7. SPS, PPS: H.264 decoding parameter information; IDR: an I-frame in the H.264 video stream.

II. H264 Hardware-Decoding Flowchart

The overall flow: parse SPS/PPS out of the stream → build a CMVideoFormatDescriptionRef → create a VTDecompressionSessionRef → wrap each frame in a CMBlockBufferRef and CMSampleBufferRef → call VTDecompressionSessionDecodeFrame → display the resulting CVPixelBufferRef.

III. Structure of a Raw IDR (I-Frame) in the Network Stream

In most cases an I-frame in a raw network video stream carries SPS, PPS, SEI, and the IDR slice data together, each preceded by a start code; some streams, however, contain only the IDR data, with the other decoding parameters delivered as separate slices.
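To make that layout concrete, here is a minimal scanning sketch, assuming 4-byte Annex-B start codes (the full parser in part V also handles 3-byte start codes and AVCC length framing):

    // Walk an Annex-B buffer and log the type of each NAL unit found.
    // nal_unit_type is the low 5 bits of the byte after the start code:
    // 7 = SPS, 8 = PPS, 6 = SEI, 5 = IDR slice, 1 = non-IDR (P/B) slice.
    static void scanNALUnits(const uint8_t *buf, NSInteger size)
    {
        for (NSInteger i = 0; i + 4 < size; i++) {
            if (buf[i] == 0x0 && buf[i+1] == 0x0 && buf[i+2] == 0x0 && buf[i+3] == 0x1) {
                uint8_t nalType = buf[i + 4] & 0x1F;
                NSLog(@"NAL unit at offset %ld, type %u", (long)i, nalType);
                i += 3;   // skip the rest of the start code; the loop increment lands on the NAL header
            }
        }
    }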

IV. Hardware-Decoding Interfaces

1. Initializing the H264 hardware decoder

1) Use the CMVideoFormatDescriptionCreateFromH264ParameterSets function to build the decoding format description, a CMVideoFormatDescriptionRef:

    const uint8_t *const parameterSetPointers[2] = {pSPS, pPPS};
    const size_t parameterSetSizes[2] = {mSpsSize, mPpsSize};
    OSStatus status = CMVideoFormatDescriptionCreateFromH264ParameterSets(kCFAllocatorDefault,
                                                                          2,    // number of parameter sets: SPS and PPS
                                                                          parameterSetPointers,
                                                                          parameterSetSizes,
                                                                          4,    // NALU start-code (length header) size
                                                                          &mDecoderFormatDescription);

2) Use the VTDecompressionSessionCreate function to build the decoder session, a VTDecompressionSessionRef:

    uint32_t pixelFormatType = kCVPixelFormatType_420YpCbCr8BiPlanarFullRange;  // NV12
    const void *keys[] = { kCVPixelBufferPixelFormatTypeKey };
    const void *values[] = { CFNumberCreate(NULL, kCFNumberSInt32Type, &pixelFormatType) };  // 32-bit
    CFDictionaryRef attrs = CFDictionaryCreate(NULL, keys, values, 1, NULL, NULL);

    VTDecompressionOutputCallbackRecord callBackRecord;
    callBackRecord.decompressionOutputCallback = didDecompress;
    callBackRecord.decompressionOutputRefCon = NULL;

    status = VTDecompressionSessionCreate(kCFAllocatorDefault,
                                          mDecoderFormatDescription,
                                          NULL, attrs,
                                          &callBackRecord,
                                          &mDeocderSession);
    CFRelease(attrs);
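The didDecompress referenced in callBackRecord must match the VTDecompressionOutputCallback signature. The implementation used in the complete listing below simply retains the decoded frame and hands it back through the per-frame sourceFrameRefCon:

    static void didDecompress(void *decompressionOutputRefCon, void *sourceFrameRefCon, OSStatus status,
                              VTDecodeInfoFlags infoFlags, CVImageBufferRef pixelBuffer,
                              CMTime presentationTimeStamp, CMTime presentationDuration)
    {
        // sourceFrameRefCon is the &outputPixelBuffer passed to VTDecompressionSessionDecodeFrame;
        // retain the frame so it outlives the callback.
        CVPixelBufferRef *outputPixelBuffer = (CVPixelBufferRef *)sourceFrameRefCon;
        *outputPixelBuffer = CVPixelBufferRetain(pixelBuffer);
    }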

2. H264 hardware decoding

1) Wrap the raw video data in a CMBlockBufferRef; the main purpose is to convert it further into a CMSampleBufferRef:

    CMBlockBufferRef blockBuffer = NULL;
    OSStatus status = CMBlockBufferCreateWithMemoryBlock(kCFAllocatorDefault, (void *)videoBuffer, videoBufferSize,
                                                         kCFAllocatorNull, NULL, 0, videoBufferSize, 0, &blockBuffer);

    CMSampleBufferRef sampleBuffer = NULL;
    const size_t sampleSizeArray[] = { videoBufferSize };
    status = CMSampleBufferCreateReady(kCFAllocatorDefault, blockBuffer, mDecoderFormatDescription,
                                       1, 0, NULL, 1, sampleSizeArray, &sampleBuffer);

2) Feed the CMSampleBufferRef to the VTDecompressionSessionDecodeFrame function for decoding:

    VTDecodeFrameFlags flags = 0;
    VTDecodeInfoFlags flagOut = 0;
    CVPixelBufferRef outputPixelBuffer = NULL;
    OSStatus decodeStatus = VTDecompressionSessionDecodeFrame(mDeocderSession, sampleBuffer, flags,
                                                              &outputPixelBuffer, &flagOut);

3) If an AVSampleBufferDisplayLayer is used for direct display, the previous step can be skipped; hand the CMSampleBufferRef straight to the AVSampleBufferDisplayLayer (@weakify/@strongify below are the libextobjc macros):

    CFArrayRef attachments = CMSampleBufferGetSampleAttachmentsArray(sampleBuffer, YES);
    CFMutableDictionaryRef dict = (CFMutableDictionaryRef)CFArrayGetValueAtIndex(attachments, 0);
    CFDictionarySetValue(dict, kCMSampleAttachmentKey_DisplayImmediately, kCFBooleanTrue);

    if ([self.displayLayer isReadyForMoreMediaData]) {
        @weakify(self);
        dispatch_sync(dispatch_get_main_queue(), ^{
            @strongify(self);
            [self.displayLayer enqueueSampleBuffer:sampleBuffer];
        });
    }

3. Displaying the decoded data

This article supports three output modes: UIImage, CVPixelBufferRef, and AVSampleBufferDisplayLayer. Because the project needed UIImage, that is the default conversion mode.

CVPixelBufferRef: the decoded buffer is output directly, with no UIImage conversion;

AVSampleBufferDisplayLayer: no decoding in our own code; the layer decodes and displays by itself (a setup sketch follows the code below);

UIImage: obtained by further converting the CVPixelBufferRef (two conversion methods are provided; see the code later):

    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    CIContext *temporaryContext = [CIContext contextWithOptions:nil];
    CGImageRef videoImage = [temporaryContext createCGImage:ciImage
                                                   fromRect:CGRectMake(0, 0,
                                                                       CVPixelBufferGetWidth(pixelBuffer),
                                                                       CVPixelBufferGetHeight(pixelBuffer))];
    image = [[UIImage alloc] initWithCGImage:videoImage];
    CGImageRelease(videoImage);
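The article never shows creating the display layer itself, so here is a minimal setup sketch, assuming a hosting view controller (the view and decoder names are illustrative, not from the original code):

    // Hypothetical host code: create an AVSampleBufferDisplayLayer and hand it to the decoder.
    AVSampleBufferDisplayLayer *displayLayer = [[AVSampleBufferDisplayLayer alloc] init];
    displayLayer.frame = self.view.bounds;
    displayLayer.videoGravity = AVLayerVideoGravityResizeAspect;
    [self.view.layer addSublayer:displayLayer];

    decoder.displayLayer = displayLayer;       // the H264HwDecoder instance from part V
    decoder.showType = H264HWDataType_Layer;   // route sample buffers to the layer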

V. Complete H264 Decoding Code

On principle I encourage writing such code yourself and learning the underlying material. Still, the complete iOS code for hard-decoding a raw H264 video stream is posted here for reference and study; if you run into problems or have questions, feel free to leave a comment. The CLog calls in the code are print/logging statements and can simply be stubbed out.

//
//  H264HwDecoder.h
//  IOTCamera
//
//  Created by lzj<lizhijian_21@163.com> on 2017/2/18.
//  Copyright (c) 2017 LZJ. All rights reserved.
//

#import <Foundation/Foundation.h>
#import <UIKit/UIKit.h>
#import <VideoToolbox/VideoToolbox.h>
#import <AVFoundation/AVSampleBufferDisplayLayer.h>

typedef enum : NSUInteger {
    H264HWDataType_Image = 0,
    H264HWDataType_Pixel,
    H264HWDataType_Layer,
} H264HWDataType;

@interface H264HwDecoder : NSObject

@property (nonatomic,assign) H264HWDataType showType;                   // output type
@property (nonatomic,strong) UIImage *image;                            // the image when decoding to RGB
@property (nonatomic,assign) CVPixelBufferRef pixelBuffer;              // the decode buffer when decoding to YUV
@property (nonatomic,strong) AVSampleBufferDisplayLayer *displayLayer;  // display layer
@property (nonatomic,assign) BOOL isNeedPerfectImg;   // whether to produce a complete, standalone UIImage (only effective when showType is 0)

- (instancetype)init;

/**
 Decode an H264 video stream
 @param videoData video frame data
 @param videoSize video frame size
 @return the frame's (width, height); the return value is meaningless when output goes to an AVSampleBufferDisplayLayer
 */
- (CGSize)decodeH264VideoData:(uint8_t *)videoData videoSize:(NSInteger)videoSize;

/** Release the decoder */
- (void)releaseH264HwDecoder;

/**
 Take a snapshot of the video
 @return the image
 */
- (UIImage *)snapshot;

@end
//
//  H264HwDecoder.m
//  IOTCamera
//
//  Created by lzj<lizhijian_21@163.com> on 2017/2/18.
//  Copyright (c) 2017 LZJ. All rights reserved.
//

#import "H264HwDecoder.h"
// note: @weakify/@strongify below come from libextobjc (EXTScope.h)

#ifndef CLog
#define CLog(s) NSLog(@"%@", (s))   // fallback; the original project supplies its own CLog
#endif

#ifndef FreeCharP
#define FreeCharP(p) if (p) {free(p); p = NULL;}
#endif

typedef enum : NSUInteger {
    HWVideoFrameType_UNKNOWN = 0,
    HWVideoFrameType_I,
    HWVideoFrameType_P,
    HWVideoFrameType_B,
    HWVideoFrameType_SPS,
    HWVideoFrameType_PPS,
    HWVideoFrameType_SEI,
} HWVideoFrameType;

@interface H264HwDecoder ()
{
    VTDecompressionSessionRef mDeocderSession;
    CMVideoFormatDescriptionRef mDecoderFormatDescription;

    uint8_t *pSPS;
    uint8_t *pPPS;
    uint8_t *pSEI;
    NSInteger mSpsSize;
    NSInteger mPpsSize;
    NSInteger mSeiSize;

    NSInteger mINalCount;    // I-frame start-code length
    NSInteger mPBNalCount;   // P/B-frame start-code length
    NSInteger mINalIndex;    // offset where the I-frame start code begins
    BOOL mIsNeedReinit;      // decoder needs to be re-created
}
@end

static void didDecompress(void *decompressionOutputRefCon, void *sourceFrameRefCon, OSStatus status, VTDecodeInfoFlags infoFlags, CVImageBufferRef pixelBuffer, CMTime presentationTimeStamp, CMTime presentationDuration)
{
    CVPixelBufferRef *outputPixelBuffer = (CVPixelBufferRef *)sourceFrameRefCon;
    *outputPixelBuffer = CVPixelBufferRetain(pixelBuffer);
}

@implementation H264HwDecoder

- (instancetype)init
{
    if (self = [super init]) {
        pSPS = pPPS = pSEI = NULL;
        mSpsSize = mPpsSize = mSeiSize = 0;
        mINalCount = mPBNalCount = mINalIndex = 0;
        mIsNeedReinit = NO;

        _showType = H264HWDataType_Image;
        _isNeedPerfectImg = NO;
        _pixelBuffer = NULL;
    }

    return self;
}

- (void)dealloc
{
    [self releaseH264HwDecoder];
}

- (BOOL)initH264HwDecoder
{
    if (mDeocderSession) {
        return YES;
    }

    const uint8_t *const parameterSetPointers[2] = {pSPS, pPPS};
    const size_t parameterSetSizes[2] = {mSpsSize, mPpsSize};
    OSStatus status = CMVideoFormatDescriptionCreateFromH264ParameterSets(kCFAllocatorDefault, 2, parameterSetPointers, parameterSetSizes, 4, &mDecoderFormatDescription);

    if (status == noErr) {
//      kCVPixelFormatType_420YpCbCr8Planar is YUV420
//      kCVPixelFormatType_420YpCbCr8BiPlanarFullRange is NV12
//      kCVPixelFormatType_24RGB    // uses 24 bitsPerPixel
//      kCVPixelFormatType_32BGRA   // uses 32 bitsPerPixel, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst
        uint32_t pixelFormatType = kCVPixelFormatType_420YpCbCr8BiPlanarFullRange;  // NV12
        if (self.showType == H264HWDataType_Pixel) {
            pixelFormatType = kCVPixelFormatType_420YpCbCr8Planar;
        }

        const void *keys[] = { kCVPixelBufferPixelFormatTypeKey };
        const void *values[] = { CFNumberCreate(NULL, kCFNumberSInt32Type, &pixelFormatType) };
        CFDictionaryRef attrs = CFDictionaryCreate(NULL, keys, values, 1, NULL, NULL);

        VTDecompressionOutputCallbackRecord callBackRecord;
        callBackRecord.decompressionOutputCallback = didDecompress;
        callBackRecord.decompressionOutputRefCon = NULL;

        status = VTDecompressionSessionCreate(kCFAllocatorDefault,
                                              mDecoderFormatDescription,
                                              NULL, attrs,
                                              &callBackRecord,
                                              &mDeocderSession);
        CFRelease(attrs);

        CLog(@"Init H264 hardware decoder success");
    } else {
        CLog([NSString stringWithFormat:@"Init H264 hardware decoder fail: %d", (int)status]);
        return NO;
    }

    return YES;
}

- (void)removeH264HwDecoder
{
    if (mDeocderSession) {
        VTDecompressionSessionInvalidate(mDeocderSession);
        CFRelease(mDeocderSession);
        mDeocderSession = NULL;
    }

    if (mDecoderFormatDescription) {
        CFRelease(mDecoderFormatDescription);
        mDecoderFormatDescription = NULL;
    }
}

- (void)releaseH264HwDecoder
{
    [self removeH264HwDecoder];
    [self releaseSliceInfo];

    if (_pixelBuffer) {
        CVPixelBufferRelease(_pixelBuffer);
        _pixelBuffer = NULL;
    }
}

- (void)releaseSliceInfo
{
    FreeCharP(pSPS);
    FreeCharP(pPPS);
    FreeCharP(pSEI);

    mSpsSize = 0;
    mPpsSize = 0;
    mSeiSize = 0;
}

// Wrap the video data in a CMSampleBufferRef and decode it
- (CVPixelBufferRef)decode:(uint8_t *)videoBuffer videoSize:(NSInteger)videoBufferSize
{
    CVPixelBufferRef outputPixelBuffer = NULL;

    CMBlockBufferRef blockBuffer = NULL;
    OSStatus status = CMBlockBufferCreateWithMemoryBlock(kCFAllocatorDefault, (void *)videoBuffer, videoBufferSize, kCFAllocatorNull, NULL, 0, videoBufferSize, 0, &blockBuffer);
    if (status == kCMBlockBufferNoErr) {
        CMSampleBufferRef sampleBuffer = NULL;
        const size_t sampleSizeArray[] = { videoBufferSize };
        status = CMSampleBufferCreateReady(kCFAllocatorDefault, blockBuffer, mDecoderFormatDescription, 1, 0, NULL, 1, sampleSizeArray, &sampleBuffer);
        if (status == kCMBlockBufferNoErr && sampleBuffer) {
            if (self.showType == H264HWDataType_Layer && _displayLayer) {
                CFArrayRef attachments = CMSampleBufferGetSampleAttachmentsArray(sampleBuffer, YES);
                CFMutableDictionaryRef dict = (CFMutableDictionaryRef)CFArrayGetValueAtIndex(attachments, 0);
                CFDictionarySetValue(dict, kCMSampleAttachmentKey_DisplayImmediately, kCFBooleanTrue);

                if ([self.displayLayer isReadyForMoreMediaData]) {
                    @weakify(self);
                    dispatch_sync(dispatch_get_main_queue(), ^{
                        @strongify(self);
                        [self.displayLayer enqueueSampleBuffer:sampleBuffer];
                    });
                }
                CFRelease(sampleBuffer);
            } else {
                VTDecodeFrameFlags flags = 0;
                VTDecodeInfoFlags flagOut = 0;
                OSStatus decodeStatus = VTDecompressionSessionDecodeFrame(mDeocderSession, sampleBuffer, flags, &outputPixelBuffer, &flagOut);
                CFRelease(sampleBuffer);

                if (decodeStatus == kVTVideoDecoderMalfunctionErr) {
                    CLog(@"Decode failed status: kVTVideoDecoderMalfunctionErr");
                    CVPixelBufferRelease(outputPixelBuffer);
                    outputPixelBuffer = NULL;
                } else if (decodeStatus == kVTInvalidSessionErr) {
                    CLog(@"Invalid session, reset decoder session");
                    [self removeH264HwDecoder];
                } else if (decodeStatus == kVTVideoDecoderBadDataErr) {
                    CLog([NSString stringWithFormat:@"Decode failed status=%d(Bad data)", (int)decodeStatus]);
                } else if (decodeStatus != noErr) {
                    CLog([NSString stringWithFormat:@"Decode failed status=%d", (int)decodeStatus]);
                }
            }
        }
        CFRelease(blockBuffer);
    }

    return outputPixelBuffer;
}

- (CGSize)decodeH264VideoData:(uint8_t *)videoData videoSize:(NSInteger)videoSize
{
    CGSize imageSize = CGSizeMake(0, 0);

    if (videoData && videoSize > 0) {
        HWVideoFrameType frameFlag = [self analyticalData:videoData size:videoSize];
        if (mIsNeedReinit) {
            mIsNeedReinit = NO;
            [self removeH264HwDecoder];
        }

        if (pSPS && pPPS && (frameFlag == HWVideoFrameType_I || frameFlag == HWVideoFrameType_P || frameFlag == HWVideoFrameType_B)) {
            uint8_t *buffer = NULL;
            if (frameFlag == HWVideoFrameType_I) {
                int nalExtra = (mINalCount == 3 ? 1 : 0);   // a 3-byte start code needs one extra byte for the big-endian length field
                videoSize -= mINalIndex;
                buffer = (uint8_t *)malloc(videoSize + nalExtra);
                memcpy(buffer + nalExtra, videoData + mINalIndex, videoSize);
                videoSize += nalExtra;
            } else {
                int nalExtra = (mPBNalCount == 3 ? 1 : 0);
                buffer = (uint8_t *)malloc(videoSize + nalExtra);
                memcpy(buffer + nalExtra, videoData, videoSize);
                videoSize += nalExtra;
            }

            uint32_t nalSize = (uint32_t)(videoSize - 4);
            uint32_t *pNalSize = (uint32_t *)buffer;
            *pNalSize = CFSwapInt32HostToBig(nalSize);

            CVPixelBufferRef pixelBuffer = NULL;
            if ([self initH264HwDecoder]) {
                pixelBuffer = [self decode:buffer videoSize:videoSize];
                if (pixelBuffer) {
                    NSInteger width = CVPixelBufferGetWidth(pixelBuffer);
                    NSInteger height = CVPixelBufferGetHeight(pixelBuffer);
                    imageSize = CGSizeMake(width, height);

                    if (self.showType == H264HWDataType_Pixel) {
                        if (_pixelBuffer) {
                            CVPixelBufferRelease(_pixelBuffer);
                        }
                        self.pixelBuffer = CVPixelBufferRetain(pixelBuffer);
                    } else {
                        if (frameFlag == HWVideoFrameType_B) {  // if B-frames were not reordered for decoding but played in order, drop them here, otherwise the decoded image is gray
                            size_t planeCount = CVPixelBufferGetPlaneCount(pixelBuffer);
                            if (planeCount >= 2 && planeCount <= 3) {
                                CVPixelBufferLockBaseAddress(pixelBuffer, 0);
                                u_char *yDestPlane = (u_char *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
                                if (planeCount == 2) {
                                    u_char *uvDestPlane = (u_char *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
                                    if (yDestPlane[0] == 0x80 && uvDestPlane[0] == 0x80 && uvDestPlane[1] == 0x80) {
                                        frameFlag = HWVideoFrameType_UNKNOWN;
                                        NSLog(@"Video YUV data parse error: Y=%02x U=%02x V=%02x", yDestPlane[0], uvDestPlane[0], uvDestPlane[1]);
                                    }
                                } else if (planeCount == 3) {
                                    u_char *uDestPlane = (u_char *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
                                    u_char *vDestPlane = (u_char *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 2);
                                    if (yDestPlane[0] == 0x80 && uDestPlane[0] == 0x80 && vDestPlane[0] == 0x80) {
                                        frameFlag = HWVideoFrameType_UNKNOWN;
                                        NSLog(@"Video YUV data parse error: Y=%02x U=%02x V=%02x", yDestPlane[0], uDestPlane[0], vDestPlane[0]);
                                    }
                                }
                                CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
                            }
                        }

                        if (frameFlag != HWVideoFrameType_UNKNOWN) {
                            self.image = [self pixelBufferToImage:pixelBuffer];
                        }
                    }
                    CVPixelBufferRelease(pixelBuffer);
                }
            }
            FreeCharP(buffer);
        }
    }

    return imageSize;
}

- (UIImage *)pixelBufferToImage:(CVPixelBufferRef)pixelBuffer
{
    UIImage *image = nil;
    if (!self.isNeedPerfectImg) {
        // Method 1: can be displayed directly, but cannot be saved to a file (it lacks image-description parameters)
        CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
        image = [UIImage imageWithCIImage:ciImage];
    } else {
        // Method 2: can be displayed directly and saved to a file; slightly more expensive than method 1
        CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
        CIContext *temporaryContext = [CIContext contextWithOptions:nil];
        CGImageRef videoImage = [temporaryContext createCGImage:ciImage fromRect:CGRectMake(0, 0, CVPixelBufferGetWidth(pixelBuffer), CVPixelBufferGetHeight(pixelBuffer))];
        image = [[UIImage alloc] initWithCGImage:videoImage];
        CGImageRelease(videoImage);
    }

    return image;
}

- (UIImage *)snapshot
{
    UIImage *img = nil;

    if (self.displayLayer) {
        UIGraphicsBeginImageContext(self.displayLayer.bounds.size);
        [self.displayLayer renderInContext:UIGraphicsGetCurrentContext()];
        img = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
    } else {
        if (self.showType == H264HWDataType_Pixel) {
            if (self.pixelBuffer) {
                img = [self pixelBufferToImage:self.pixelBuffer];
            }
        } else {
            img = self.image;
        }

        if (!self.isNeedPerfectImg) {
            UIGraphicsBeginImageContext(CGSizeMake(img.size.width, img.size.height));
            [img drawInRect:CGRectMake(0, 0, img.size.width, img.size.height)];
            img = UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext();
        }
    }

    return img;
}

// Scan from the start for SPS/PPS/SEI/I/B/P start codes; stop as soon as an I, P, or B frame is found.
// Several cases are handled:
// 1. The start code is 0x0 0x0 0x0 0x1 or 0x0 0x0 0x1
// 2. SPS, PPS, SEI, I, B, P each arrive as a standalone slice
// 3. The I-frame buffer contains SPS, PPS, and the I slice
// 4. The I-frame buffer contains the data of case 3 plus SEI, in the order SPS, PPS, SEI, I
// 5. The framing is AVCC big-endian length fields (multi-slice frames not supported)
- (HWVideoFrameType)analyticalData:(const uint8_t *)buffer size:(NSInteger)size
{
    NSInteger preIndex = 0;
    HWVideoFrameType preFrameType = HWVideoFrameType_UNKNOWN;
    HWVideoFrameType curFrameType = HWVideoFrameType_UNKNOWN;

    for (int i = 0; i < size && i < 300; i++) {   // in case 4 the header info rarely exceeds (32+256+12) bytes; raise the limit if needed -- this avoids scanning the whole frame
        int nalSize = [self getNALHeaderLen:(buffer + i) size:size - i];
        if (nalSize == 0 && i == 0) {   // at the start of a slice, check whether AVCC framing is in use by comparing the length field with the frame size
            uint32_t *pNalSize = (uint32_t *)(buffer);
            uint32_t videoSize = CFSwapInt32BigToHost(*pNalSize);   // convert big-endian to host byte order
            if (videoSize == size - 4) {   // big-endian (AVCC) framing
                nalSize = 4;
            }
        }

        if (nalSize && i + nalSize + 1 < size) {
            int sliceType = buffer[i + nalSize] & 0x1F;
            if (sliceType == 0x1) {
                mPBNalCount = nalSize;
                if (buffer[i + nalSize] == 0x1) {   // B-frame
                    curFrameType = HWVideoFrameType_B;
                } else {    // P-frame
                    curFrameType = HWVideoFrameType_P;
                }
                break;
            } else if (sliceType == 0x5) {      // IDR (I-frame)
                if (preFrameType == HWVideoFrameType_PPS) {
                    mIsNeedReinit = [self getSliceInfo:buffer slice:&pPPS size:&mPpsSize start:preIndex end:i];
                } else if (preFrameType == HWVideoFrameType_SEI) {
                    [self getSliceInfo:buffer slice:&pSEI size:&mSeiSize start:preIndex end:i];
                }

                mINalCount = nalSize;
                mINalIndex = i;
                curFrameType = HWVideoFrameType_I;
                goto Goto_Exit;
            } else if (sliceType == 0x7) {      // SPS
                preFrameType = HWVideoFrameType_SPS;
                preIndex = i + nalSize;
                i += nalSize;
            } else if (sliceType == 0x8) {      // PPS
                if (preFrameType == HWVideoFrameType_SPS) {
                    mIsNeedReinit = [self getSliceInfo:buffer slice:&pSPS size:&mSpsSize start:preIndex end:i];
                }
                preFrameType = HWVideoFrameType_PPS;
                preIndex = i + nalSize;
                i += nalSize;
            } else if (sliceType == 0x6) {      // SEI
                if (preFrameType == HWVideoFrameType_PPS) {
                    mIsNeedReinit = [self getSliceInfo:buffer slice:&pPPS size:&mPpsSize start:preIndex end:i];
                }
                preFrameType = HWVideoFrameType_SEI;
                preIndex = i + nalSize;
                i += nalSize;
            }
        }
    }

    // SPS, PPS, or SEI delivered as a standalone slice
    if (curFrameType == HWVideoFrameType_UNKNOWN && preIndex != 0) {
        if (preFrameType == HWVideoFrameType_SPS) {
            mIsNeedReinit = [self getSliceInfo:buffer slice:&pSPS size:&mSpsSize start:preIndex end:size];
            curFrameType = HWVideoFrameType_SPS;
        } else if (preFrameType == HWVideoFrameType_PPS) {
            mIsNeedReinit = [self getSliceInfo:buffer slice:&pPPS size:&mPpsSize start:preIndex end:size];
            curFrameType = HWVideoFrameType_PPS;
        } else if (preFrameType == HWVideoFrameType_SEI) {
            [self getSliceInfo:buffer slice:&pSEI size:&mSeiSize start:preIndex end:size];
            curFrameType = HWVideoFrameType_SEI;
        }
    }

Goto_Exit:
    return curFrameType;
}

// Returns the NAL start-code length, 3 or 4 bytes (0 if there is no start code)
- (int)getNALHeaderLen:(const uint8_t *)buffer size:(NSInteger)size
{
    if (size >= 4 && buffer[0] == 0x0 && buffer[1] == 0x0 && buffer[2] == 0x0 && buffer[3] == 0x1) {
        return 4;
    } else if (size >= 3 && buffer[0] == 0x0 && buffer[1] == 0x0 && buffer[2] == 0x1) {
        return 3;
    }

    return 0;
}

// Store the SPS/PPS/SEI buffer; returns YES if the value differs from the previous one
- (BOOL)getSliceInfo:(const uint8_t *)videoBuf slice:(uint8_t **)sliceBuf size:(NSInteger *)size start:(NSInteger)start end:(NSInteger)end
{
    BOOL isDif = NO;

    NSInteger len = end - start;
    uint8_t *tempBuf = (uint8_t *)(*sliceBuf);
    if (tempBuf) {
        if (len != *size || memcmp(tempBuf, videoBuf + start, len) != 0) {
            free(tempBuf);
            tempBuf = (uint8_t *)malloc(len);
            memcpy(tempBuf, videoBuf + start, len);
            *sliceBuf = tempBuf;
            *size = len;
            isDif = YES;
        }
    } else {
        tempBuf = (uint8_t *)malloc(len);
        memcpy(tempBuf, videoBuf + start, len);
        *sliceBuf = tempBuf;
        *size = len;
    }

    return isDif;
}

@end
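For reference, here is a minimal usage sketch, assuming frames arrive from some network layer already split into complete H264 frames (the property and method names are illustrative, not from the original project):

    // Hypothetical host: decoder and imageView are assumed properties of the view controller.
    - (void)viewDidLoad
    {
        [super viewDidLoad];
        self.decoder = [[H264HwDecoder alloc] init];
        self.decoder.showType = H264HWDataType_Image;   // decode to UIImage (the default)
    }

    // Called once per complete H264 frame received (method name is illustrative).
    - (void)didReceiveVideoFrame:(uint8_t *)frameData size:(NSInteger)frameSize
    {
        CGSize videoSize = [self.decoder decodeH264VideoData:frameData videoSize:frameSize];
        if (videoSize.width > 0) {
            dispatch_async(dispatch_get_main_queue(), ^{
                self.imageView.image = self.decoder.image;   // imageView: an assumed UIImageView
            });
        }
    }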
