iOS Video Capture in Practice (AVCaptureSession)
Goal: use AVCaptureSession from the AVFoundation framework to set the camera's resolution and frame rate (including high frame rates), switch between the front and back cameras, set focus, handle screen rotation, adjust exposure, and more.
Prerequisites:
- For the underlying concepts, see the companion article: iOS视频流采集概述(AVCaptureSession)
- Based on the AVFoundation framework
GitHub (sample code included): iOS视频采集实战(AVCaptureSession)
1. Setting Resolution and Frame Rate
1.1. Low frame rate mode (fps <= 30)
When the required frame rate is at most 30 fps, resolution and frame rate are configured independently: one API sets the frame rate, another sets the resolution, and the two are not coupled.
Setting the resolution
This method sets the camera resolution through session presets; the available presets are listed in the API documentation (the largest is currently 3840x2160). If your camera never needs to run above 30 fps, this approach is sufficient.
- (void)setCameraResolutionByPresetWithHeight:(int)height session:(AVCaptureSession *)session {
    /* Note: this method only supports frame rates <= 30, because frame rates > 30
       require `activeFormat`, and `activeFormat` and `sessionPreset` are mutually
       exclusive. */
    AVCaptureSessionPreset preset = [self getSessionPresetByResolutionHeight:height];
    if ([session.sessionPreset isEqualToString:preset]) {
        NSLog(@"No need to set the camera resolution again!");
        return;
    }

    if (![session canSetSessionPreset:preset]) {
        NSLog(@"Can't set the sessionPreset!");
        return;
    }

    [session beginConfiguration];
    session.sessionPreset = preset;
    [session commitConfiguration];
}
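The method above calls two helpers, getSessionPresetByResolutionHeight: and (later) getResolutionWidthByHeight:, that are defined in the linked repository rather than in this excerpt. As a rough illustration of what the width lookup might do, here is a minimal C sketch assuming 16:9 output for the common presets (the function name and the fallback rule are mine, not from the repo):

```c
#include <assert.h>

/* Hypothetical helper: map a requested resolution height to the width the
   matching iOS preset delivers. Most presets are 16:9; 640x480 is 4:3. */
static int resolution_width_by_height(int height) {
    switch (height) {
        case 2160: return 3840;   /* AVCaptureSessionPreset3840x2160 */
        case 1080: return 1920;   /* AVCaptureSessionPreset1920x1080 */
        case 720:  return 1280;   /* AVCaptureSessionPreset1280x720  */
        case 480:  return 640;    /* AVCaptureSessionPreset640x480   */
        default:   return height * 16 / 9;  /* assume 16:9 otherwise */
    }
}
```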
Setting the frame rate
This method sets the camera frame rate; it only supports rates of 30 fps or less.
- (void)setCameraForLFRWithFrameRate:(int)frameRate {
    // Only for frame rates <= 30
    AVCaptureDevice *captureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    if ([captureDevice lockForConfiguration:NULL]) {
        [captureDevice setActiveVideoMinFrameDuration:CMTimeMake(1, frameRate)];
        [captureDevice setActiveVideoMaxFrameDuration:CMTimeMake(1, frameRate)];
        [captureDevice unlockForConfiguration];
    }
}
1.2. High frame rate mode (fps > 30)
If you need high frame rates at a given resolution (50, 60, 120 fps, ...), setting activeVideoMinFrameDuration and activeVideoMaxFrameDuration on their own is no longer enough: Apple requires that these frame-duration properties be used together with the newer resolution API, activeFormat.
activeFormat and sessionPreset are mutually exclusive; using one invalidates the other. It is advisable to adopt the activeFormat-based approach throughout and drop the preset-based one, to avoid compatibility problems.
With this API, Apple merged the previously separate resolution and frame-rate settings into one. Each format carries both a resolution and the frame-rate ranges it supports, so we must iterate over the device's formats to find a match. In high frame rate mode the old standalone setters are effectively deprecated; choose based on the project. If you are certain you will never need fps > 30, the old approach remains simple and effective.
Note: once activeFormat is used, the resolution previously configured through sessionPreset automatically becomes AVCaptureSessionPresetInputPriority, so any existing if statements built around canSetSessionPreset will no longer behave as expected. If the project must support high frame rates, it is best to abandon the sessionPreset approach entirely.
+ (BOOL)setCameraFrameRateAndResolutionWithFrameRate:(int)frameRate andResolutionHeight:(CGFloat)resolutionHeight bySession:(AVCaptureSession *)session position:(AVCaptureDevicePosition)position videoFormat:(OSType)videoFormat {
    AVCaptureDevice *captureDevice = [self getCaptureDevicePosition:position];

    for (AVCaptureDeviceFormat *vFormat in [captureDevice formats]) {
        CMFormatDescriptionRef description = vFormat.formatDescription;
        float maxRate = ((AVFrameRateRange *)[vFormat.videoSupportedFrameRateRanges objectAtIndex:0]).maxFrameRate;
        if (maxRate >= frameRate && CMFormatDescriptionGetMediaSubType(description) == videoFormat) {
            // Compare the format's resolution with the requested one.
            CMVideoDimensions dims = CMVideoFormatDescriptionGetDimensions(description);
            if (dims.height == resolutionHeight && dims.width == [self getResolutionWidthByHeight:resolutionHeight]) {
                [session beginConfiguration];
                if ([captureDevice lockForConfiguration:NULL]) {
                    captureDevice.activeFormat = vFormat;
                    [captureDevice setActiveVideoMinFrameDuration:CMTimeMake(1, frameRate)];
                    [captureDevice setActiveVideoMaxFrameDuration:CMTimeMake(1, frameRate)];
                    [captureDevice unlockForConfiguration];
                } else {
                    NSLog(@"%s: lock failed!", __func__);
                    [session commitConfiguration];
                    return NO;
                }
                [session commitConfiguration];
                NSLog(@"Set camera format succeeded: frame rate = %d, resolution height = %f", frameRate, resolutionHeight);
                return YES;
            }
        }
    }

    NSLog(@"%s: no format matches frame rate = %d, resolution height = %f", __func__, frameRate, resolutionHeight);
    return NO;
}

+ (AVCaptureDevice *)getCaptureDevicePosition:(AVCaptureDevicePosition)position {
    NSArray *devices = nil;

    if (@available(iOS 10.0, *)) {
        AVCaptureDeviceDiscoverySession *deviceDiscoverySession = [AVCaptureDeviceDiscoverySession discoverySessionWithDeviceTypes:@[AVCaptureDeviceTypeBuiltInWideAngleCamera] mediaType:AVMediaTypeVideo position:position];
        devices = deviceDiscoverySession.devices;
    } else {
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wdeprecated-declarations"
        devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
#pragma clang diagnostic pop
    }

    for (AVCaptureDevice *device in devices) {
        if (position == device.position) {
            return device;
        }
    }
    return nil;
}
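The core of the method above is a traversal: find a format whose dimensions match the request and whose top frame-rate range covers the requested fps. That selection logic can be checked in isolation from AVFoundation; a C sketch over plain structs (the struct and function names are illustrative, not AVFoundation API):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-in for AVCaptureDeviceFormat: dimensions plus the
   maximum frame rate of its first supported range. */
typedef struct {
    int width;
    int height;
    float max_frame_rate;
} capture_format;

/* Return the index of the first format matching the request, or -1 if the
   device supports no such resolution/frame-rate pair. */
static int pick_format(const capture_format *formats, size_t count,
                       int width, int height, float frame_rate) {
    for (size_t i = 0; i < count; i++) {
        if (formats[i].max_frame_rate >= frame_rate &&
            formats[i].width == width &&
            formats[i].height == height) {
            return (int)i;
        }
    }
    return -1;
}
```

A -1 result corresponds to the `return NO;` path above: the caller should fall back to a supported combination rather than apply the request.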
2. Switching Between the Front and Back Cameras
Switching cameras looks simple but causes many problems in practice, because the front and back cameras of the same device support different resolution and frame-rate combinations; switching from a supported configuration to an unsupported one fails. A concrete case:
On an iPhone X, the back camera supports up to (4K, 60 fps) while the front camera tops out at (2K, 30 fps). Switching from the back camera at (4K, 60 fps) to the front camera without adjusting will fail and leave the program in a broken state.
Note
In the code below, the line session.sessionPreset = AVCaptureSessionPresetLow; matters. After switching we must recompute the maximum resolution and frame rate the new input device supports, but we cannot query them until the input has been added to the session. So we first set a deliberately low, always-acceptable preset so the input can be added, then compute the device's maximums and apply the real resolution and frame rate.
- (void)setCameraPosition:(AVCaptureDevicePosition)position session:(AVCaptureSession *)session input:(AVCaptureDeviceInput *)input videoFormat:(OSType)videoFormat resolutionHeight:(CGFloat)resolutionHeight frameRate:(int)frameRate {
    if (input) {
        [session beginConfiguration];
        [session removeInput:input];

        AVCaptureDevice *device = [self.class getCaptureDevicePosition:position];

        NSError *error = nil;
        AVCaptureDeviceInput *newInput = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
        if (error) {
            NSLog(@"%s: error:%@", __func__, error.localizedDescription);
            [session commitConfiguration];
            return;
        }

        // e.g. the back camera supports 4K while the front camera tops out at 2K,
        // so the switch must downgrade. We cannot query the new camera's maximum
        // resolution until its input is part of the session, hence the low preset.
        session.sessionPreset = AVCaptureSessionPresetLow;
        if ([session canAddInput:newInput]) {
            self.input = newInput;
            [session addInput:newInput];
        } else {
            NSLog(@"%s: add input failed.", __func__);
            [session commitConfiguration];
            return;
        }

        int maxResolutionHeight = [self getMaxSupportResolutionByPreset];
        if (resolutionHeight > maxResolutionHeight) {
            resolutionHeight = maxResolutionHeight;
            self.cameraModel.resolutionHeight = resolutionHeight;
            NSLog(@"%s: Current supported max resolution height = %d", __func__, maxResolutionHeight);
        }

        int maxFrameRate = [self getMaxFrameRateByCurrentResolution];
        if (frameRate > maxFrameRate) {
            frameRate = maxFrameRate;
            self.cameraModel.frameRate = frameRate;
            NSLog(@"%s: Current supported max frame rate = %d", __func__, maxFrameRate);
        }

        BOOL isSuccess = [self.class setCameraFrameRateAndResolutionWithFrameRate:frameRate
                                                              andResolutionHeight:resolutionHeight
                                                                        bySession:session
                                                                         position:position
                                                                      videoFormat:videoFormat];
        if (!isSuccess) {
            NSLog(@"%s: Set resolution and frame rate failed.", __func__);
        }

        [session commitConfiguration];
    }
}
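The downgrade performed during the switch is simply a clamp of the requested values against the new camera's maximums. A C sketch of that step (the limits in the test are illustrative; on a real device they come from helpers like getMaxSupportResolutionByPreset and getMaxFrameRateByCurrentResolution):

```c
#include <assert.h>

/* Clamp a requested (height, fps) pair against the maximums the newly
   selected camera reports; returns 1 if anything was downgraded. */
static int clamp_to_camera_limits(int *height, int *fps,
                                  int max_height, int max_fps) {
    int downgraded = 0;
    if (*height > max_height) { *height = max_height; downgraded = 1; }
    if (*fps > max_fps)       { *fps = max_fps;       downgraded = 1; }
    return downgraded;
}
```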
3. Handling Screen Rotation
First distinguish two concepts: device orientation (UIDeviceOrientation) and video orientation (AVCaptureVideoOrientation). With AVCaptureSession, supporting rotation means rotating the video picture whenever the screen rotates.
Device rotation can be observed through the UIDeviceOrientationDidChangeNotification notification; that part is not covered here.
- (void)adjustVideoOrientationByScreenOrientation:(UIDeviceOrientation)orientation previewFrame:(CGRect)previewFrame previewLayer:(AVCaptureVideoPreviewLayer *)previewLayer videoOutput:(AVCaptureVideoDataOutput *)videoOutput {
    [previewLayer setFrame:previewFrame];

    switch (orientation) {
        case UIDeviceOrientationPortrait:
            [self adjustAVOutputDataOrientation:AVCaptureVideoOrientationPortrait videoOutput:videoOutput];
            break;
        case UIDeviceOrientationPortraitUpsideDown:
            [self adjustAVOutputDataOrientation:AVCaptureVideoOrientationPortraitUpsideDown videoOutput:videoOutput];
            break;
        case UIDeviceOrientationLandscapeLeft:
            // Device landscape-left (home button on the right) maps to video landscape-right, and vice versa.
            [[previewLayer connection] setVideoOrientation:AVCaptureVideoOrientationLandscapeRight];
            [self adjustAVOutputDataOrientation:AVCaptureVideoOrientationLandscapeRight videoOutput:videoOutput];
            break;
        case UIDeviceOrientationLandscapeRight:
            [[previewLayer connection] setVideoOrientation:AVCaptureVideoOrientationLandscapeLeft];
            [self adjustAVOutputDataOrientation:AVCaptureVideoOrientationLandscapeLeft videoOutput:videoOutput];
            break;
        default:
            break;
    }
}

- (void)adjustAVOutputDataOrientation:(AVCaptureVideoOrientation)orientation videoOutput:(AVCaptureVideoDataOutput *)videoOutput {
    for (AVCaptureConnection *connection in videoOutput.connections) {
        for (AVCaptureInputPort *port in [connection inputPorts]) {
            if ([[port mediaType] isEqual:AVMediaTypeVideo]) {
                if ([connection isVideoOrientationSupported]) {
                    [connection setVideoOrientation:orientation];
                }
            }
        }
    }
}
4. Focus Adjustment
For focus, manually setting the focus point deserves special attention: the focus API only accepts a coordinate space with (0,0) at the top-left and (1,1) at the bottom-right, so the UIView coordinates must be converted, and the conversion splits into several cases:
- Whether the video output is mirrored: e.g. the front camera may run in mirrored mode, in which the x coordinate is flipped.
- Whether landscape has the Home button on the right or the left: with Home on the right the origin is the top-left corner; with Home on the left it is the bottom-right corner.
- The rendering mode: aspect-preserving or filling. Depending on the device model, the picture may be letterboxed or may overflow the screen, and the focus point must be recomputed accordingly.
If we render through AVCaptureSession's own AVCaptureVideoPreviewLayer, captureDevicePointOfInterestForPoint: computes the conversion for us, covering all of the cases above. If we render the frames ourselves, we must compute the focus point manually and account for every case. Both the automatic and the manual computation are shown below.
- (void)autoFocusAtPoint:(CGPoint)point {
    AVCaptureDevice *device = self.input.device;
    if ([device isFocusPointOfInterestSupported] && [device isFocusModeSupported:AVCaptureFocusModeAutoFocus]) {
        NSError *error;
        if ([device lockForConfiguration:&error]) {
            if ([device isExposurePointOfInterestSupported]) {
                [device setExposurePointOfInterest:point];
                [device setExposureMode:AVCaptureExposureModeContinuousAutoExposure];
            }
            [device setFocusPointOfInterest:point];
            [device setFocusMode:AVCaptureFocusModeAutoFocus];
            [device unlockForConfiguration];
        }
    }
}
4.1. Computing the focus point automatically
- (CGPoint)convertToPointOfInterestFromViewCoordinates:(CGPoint)viewCoordinates captureVideoPreviewLayer:(AVCaptureVideoPreviewLayer *)captureVideoPreviewLayer {
    CGPoint pointOfInterest = CGPointMake(.5f, .5f);
    CGSize frameSize = [captureVideoPreviewLayer frame].size;

    if ([captureVideoPreviewLayer.connection isVideoMirrored]) {
        viewCoordinates.x = frameSize.width - viewCoordinates.x;
    }

    // Convert the UIKit coordinate to a focus point in (0.0 ~ 1.0).
    pointOfInterest = [captureVideoPreviewLayer captureDevicePointOfInterestForPoint:viewCoordinates];

    return pointOfInterest;
}
4.2. Computing the focus point manually
- If the screen's aspect ratio matches the video's exactly, simply normalize the coordinates into the (0,0) to (1,1) space.
- If the ratios differ, the rendering mode matters: aspect-fit leaves black bars, whose length must be subtracted when computing the focus point; aspect-fill sacrifices some pixels beyond the screen edge, which must correspondingly be added back.
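The geometry in the aspect-fit case can be checked in isolation from AVFoundation. A C sketch of the letterbox branch, assuming the video is wider than the view so the black bars sit at the top and bottom (pure math; the function name is mine):

```c
#include <assert.h>

/* Normalize a tap at (px, py) on a view of size (vw, vh) that shows a video
   of aspect ratio `video_ratio` (width/height) in aspect-fit mode, for the
   case where the video is wider than the view (bars above and below). */
static void aspect_fit_focus_point(double px, double py,
                                   double vw, double vh,
                                   double video_ratio,
                                   double *xc, double *yc) {
    double shown_height = vw / video_ratio;  /* height actually covered by video */
    double bar = (vh - shown_height) / 2.0;  /* one black bar's height */
    *xc = px / vw;
    *yc = (py - bar) / shown_height;         /* may fall outside [0,1] on a bar */
}
```

A tap at the exact center of the view must come out as (0.5, 0.5) regardless of the bar size, which makes a convenient sanity check.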
- (CGPoint)manualConvertFocusPoint:(CGPoint)point frameSize:(CGSize)frameSize captureVideoPreviewLayer:(AVCaptureVideoPreviewLayer *)captureVideoPreviewLayer position:(AVCaptureDevicePosition)position videoDataOutput:(AVCaptureVideoDataOutput *)videoDataOutput input:(AVCaptureDeviceInput *)input {
    CGPoint pointOfInterest = CGPointMake(.5f, .5f);

    if ([[videoDataOutput connectionWithMediaType:AVMediaTypeVideo] isVideoMirrored]) {
        point.x = frameSize.width - point.x;
    }

    for (AVCaptureInputPort *port in [input ports]) {
        if ([port mediaType] == AVMediaTypeVideo) {
            CGRect cleanAperture = CMVideoFormatDescriptionGetCleanAperture([port formatDescription], YES);
            CGSize resolutionSize = cleanAperture.size;

            CGFloat resolutionRatio = resolutionSize.width / resolutionSize.height;
            CGFloat screenSizeRatio = frameSize.width / frameSize.height;
            CGFloat xc = .5f;
            CGFloat yc = .5f;

            if (resolutionRatio == screenSizeRatio) {
                // Same aspect ratio: a plain normalization is enough.
                xc = point.x / frameSize.width;
                yc = point.y / frameSize.height;
            } else if (resolutionRatio > screenSizeRatio) {
                if ([[captureVideoPreviewLayer videoGravity] isEqualToString:AVLayerVideoGravityResizeAspectFill]) {
                    // Video wider than screen, filling it: pixels cropped left and right.
                    CGFloat needScreenWidth = resolutionRatio * frameSize.height;
                    CGFloat cropWidth = (needScreenWidth - frameSize.width) / 2;
                    xc = (cropWidth + point.x) / needScreenWidth;
                    yc = point.y / frameSize.height;
                } else if ([[captureVideoPreviewLayer videoGravity] isEqualToString:AVLayerVideoGravityResizeAspect]) {
                    // Aspect-fit: subtract the black bars above and below.
                    CGFloat needScreenHeight = frameSize.width * (1 / resolutionRatio);
                    CGFloat blackBarLength = (frameSize.height - needScreenHeight) / 2;
                    xc = point.x / frameSize.width;
                    yc = (point.y - blackBarLength) / needScreenHeight;
                } else if ([[captureVideoPreviewLayer videoGravity] isEqualToString:AVLayerVideoGravityResize]) {
                    xc = point.x / frameSize.width;
                    yc = point.y / frameSize.height;
                }
            } else {
                if ([[captureVideoPreviewLayer videoGravity] isEqualToString:AVLayerVideoGravityResizeAspectFill]) {
                    // Video taller than screen, filling it: pixels cropped top and bottom.
                    CGFloat needScreenHeight = (1 / resolutionRatio) * frameSize.width;
                    CGFloat cropHeight = (needScreenHeight - frameSize.height) / 2;
                    xc = point.x / frameSize.width;
                    yc = (cropHeight + point.y) / needScreenHeight;
                } else if ([[captureVideoPreviewLayer videoGravity] isEqualToString:AVLayerVideoGravityResizeAspect]) {
                    // Aspect-fit: subtract the black bars left and right.
                    CGFloat needScreenWidth = frameSize.height * resolutionRatio;
                    CGFloat blackBarLength = (frameSize.width - needScreenWidth) / 2;
                    xc = (point.x - blackBarLength) / needScreenWidth;
                    yc = point.y / frameSize.height;
                } else if ([[captureVideoPreviewLayer videoGravity] isEqualToString:AVLayerVideoGravityResize]) {
                    xc = point.x / frameSize.width;
                    yc = point.y / frameSize.height;
                }
            }

            pointOfInterest = CGPointMake(xc, yc);
        }
    }

    if (position == AVCaptureDevicePositionBack) {
        if (captureVideoPreviewLayer.connection.videoOrientation == AVCaptureVideoOrientationLandscapeLeft) {
            pointOfInterest = CGPointMake(1 - pointOfInterest.x, 1 - pointOfInterest.y);
        }
    } else {
        pointOfInterest = CGPointMake(pointOfInterest.x, 1 - pointOfInterest.y);
    }

    return pointOfInterest;
}
5. Exposure Adjustment
If a UISlider drives the adjustment, the simplest approach is to give it the same range as the exposure bias (typically -8 to 8, though the actual limits should be read from the device), so the value can be passed straight through without conversion. Gestures or other controls can be mapped as needed; that is straightforward and not covered further.
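If the control's range does not match the device's bias range, a linear remap with clamping does the job. A C sketch (the ±8 EV limits are only typical; on a real device read minExposureTargetBias and maxExposureTargetBias at runtime):

```c
#include <assert.h>

/* Map a slider value in [slider_min, slider_max] linearly onto the device's
   exposure-bias range [bias_min, bias_max], clamping the input first. */
static float slider_to_exposure_bias(float v, float slider_min, float slider_max,
                                     float bias_min, float bias_max) {
    if (v < slider_min) v = slider_min;
    if (v > slider_max) v = slider_max;
    float t = (v - slider_min) / (slider_max - slider_min);
    return bias_min + t * (bias_max - bias_min);
}
```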
- (void)setExposureWithNewValue:(CGFloat)newExposureValue device:(AVCaptureDevice *)device {
    NSError *error;
    if ([device lockForConfiguration:&error]) {
        [device setExposureTargetBias:newExposureValue completionHandler:nil];
        [device unlockForConfiguration];
    }
}
6. Torch Mode
- AVCaptureTorchModeAuto: automatic
- AVCaptureTorchModeOn: on
- AVCaptureTorchModeOff: off
- (void)setTorchState:(BOOL)isOpen device:(AVCaptureDevice *)device {
    if ([device hasTorch]) {
        NSError *error;
        if ([device lockForConfiguration:&error]) {
            device.torchMode = isOpen ? AVCaptureTorchModeOn : AVCaptureTorchModeOff;
            [device unlockForConfiguration];
        }
    } else {
        NSLog(@"The device does not support torch!");
    }
}
7. Video Stabilization
Note: on some models and at some resolutions, rendering with this property enabled can misbehave (observed on an iPhone XS with custom rendering).
- (void)adjustVideoStabilizationWithOutput:(AVCaptureVideoDataOutput *)output {
    NSArray *devices = nil;

    if (@available(iOS 10.0, *)) {
        AVCaptureDeviceDiscoverySession *deviceDiscoverySession = [AVCaptureDeviceDiscoverySession discoverySessionWithDeviceTypes:@[AVCaptureDeviceTypeBuiltInWideAngleCamera] mediaType:AVMediaTypeVideo position:self.cameraModel.position];
        devices = deviceDiscoverySession.devices;
    } else {
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wdeprecated-declarations"
        devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
#pragma clang diagnostic pop
    }

    for (AVCaptureDevice *device in devices) {
        if ([device hasMediaType:AVMediaTypeVideo]) {
            // Check the same mode we are about to set.
            if ([device.activeFormat isVideoStabilizationModeSupported:AVCaptureVideoStabilizationModeStandard]) {
                for (AVCaptureConnection *connection in output.connections) {
                    for (AVCaptureInputPort *port in [connection inputPorts]) {
                        if ([[port mediaType] isEqual:AVMediaTypeVideo]) {
                            if (connection.supportsVideoStabilization) {
                                connection.preferredVideoStabilizationMode = AVCaptureVideoStabilizationModeStandard;
                                NSLog(@"activeVideoStabilizationMode = %ld", (long)connection.activeVideoStabilizationMode);
                            } else {
                                NSLog(@"connection doesn't support video stabilization");
                            }
                        }
                    }
                }
            } else {
                NSLog(@"device doesn't support video stabilization");
            }
        }
    }
}
8. White Balance Adjustment
- temperature: adjust by color temperature (the demo UI maps its slider to -150 ~ 250)
- tint: adjust by tint (the demo UI maps its slider to -150 ~ 150)
Note: before calling setWhiteBalanceModeLockedWithDeviceWhiteBalanceGains:, the AVCaptureWhiteBalanceGains values must be clamped into their valid range (1.0 up to maxWhiteBalanceGain).
- (AVCaptureWhiteBalanceGains)clampGains:(AVCaptureWhiteBalanceGains)gains toMinVal:(CGFloat)minVal andMaxVal:(CGFloat)maxVal {
    AVCaptureWhiteBalanceGains tmpGains = gains;
    tmpGains.blueGain  = MAX(MIN(tmpGains.blueGain,  maxVal), minVal);
    tmpGains.redGain   = MAX(MIN(tmpGains.redGain,   maxVal), minVal);
    tmpGains.greenGain = MAX(MIN(tmpGains.greenGain, maxVal), minVal);
    return tmpGains;
}

- (void)setWhiteBlanceValueByTemperature:(CGFloat)temperature device:(AVCaptureDevice *)device {
    if ([device isWhiteBalanceModeSupported:AVCaptureWhiteBalanceModeLocked]) {
        if ([device lockForConfiguration:nil]) {
            AVCaptureWhiteBalanceGains currentGains = device.deviceWhiteBalanceGains;
            CGFloat currentTint = [device temperatureAndTintValuesForDeviceWhiteBalanceGains:currentGains].tint;
            AVCaptureWhiteBalanceTemperatureAndTintValues tempAndTintValues = {
                .temperature = temperature,
                .tint        = currentTint,
            };

            AVCaptureWhiteBalanceGains deviceGains = [device deviceWhiteBalanceGainsForTemperatureAndTintValues:tempAndTintValues];
            CGFloat maxWhiteBalanceGain = device.maxWhiteBalanceGain;
            deviceGains = [self clampGains:deviceGains toMinVal:1 andMaxVal:maxWhiteBalanceGain];

            [device setWhiteBalanceModeLockedWithDeviceWhiteBalanceGains:deviceGains completionHandler:nil];
            [device unlockForConfiguration];
        }
    }
}

- (void)setWhiteBlanceValueByTint:(CGFloat)tint device:(AVCaptureDevice *)device {
    if ([device isWhiteBalanceModeSupported:AVCaptureWhiteBalanceModeLocked]) {
        if ([device lockForConfiguration:nil]) {
            CGFloat maxWhiteBalanceGain = device.maxWhiteBalanceGain;
            AVCaptureWhiteBalanceGains currentGains = device.deviceWhiteBalanceGains;
            currentGains = [self clampGains:currentGains toMinVal:1 andMaxVal:maxWhiteBalanceGain];
            CGFloat currentTemperature = [device temperatureAndTintValuesForDeviceWhiteBalanceGains:currentGains].temperature;
            AVCaptureWhiteBalanceTemperatureAndTintValues tempAndTintValues = {
                .temperature = currentTemperature,
                .tint        = tint,
            };

            AVCaptureWhiteBalanceGains deviceGains = [device deviceWhiteBalanceGainsForTemperatureAndTintValues:tempAndTintValues];
            deviceGains = [self clampGains:deviceGains toMinVal:1 andMaxVal:maxWhiteBalanceGain];

            [device setWhiteBalanceModeLockedWithDeviceWhiteBalanceGains:deviceGains completionHandler:nil];
            [device unlockForConfiguration];
        }
    }
}
9. Screen Fill Modes
- AVLayerVideoGravityResizeAspect: preserves the aspect ratio; if the screen's ratio differs from the video's, black bars appear.
- AVLayerVideoGravityResizeAspectFill: preserves the aspect ratio while filling the screen, scaling so the smaller dimension fits; the pixels that overflow the screen are sacrificed.
- AVLayerVideoGravityResize: stretches to fill the screen; no pixels are lost, but the picture is distorted.
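The three gravities differ only in how the video rect is scaled into the view. A C sketch computing the on-screen video size for each mode (the enum and function names are mine, not AVFoundation API):

```c
#include <assert.h>

typedef enum { GRAVITY_ASPECT, GRAVITY_ASPECT_FILL, GRAVITY_RESIZE } gravity_t;

/* Compute the on-screen size of a (vw x vh) video rendered into a
   (sw x sh) view under the given gravity. */
static void displayed_video_size(double vw, double vh, double sw, double sh,
                                 gravity_t g, double *ow, double *oh) {
    double scale_fit  = (sw / vw < sh / vh) ? sw / vw : sh / vh;  /* min scale: letterbox */
    double scale_fill = (sw / vw > sh / vh) ? sw / vw : sh / vh;  /* max scale: crop      */
    switch (g) {
        case GRAVITY_ASPECT:      *ow = vw * scale_fit;  *oh = vh * scale_fit;  break;
        case GRAVITY_ASPECT_FILL: *ow = vw * scale_fill; *oh = vh * scale_fill; break;
        case GRAVITY_RESIZE:      *ow = sw;              *oh = sh;              break;
    }
}
```

The gap between the aspect-fit size and the view size is exactly the black-bar length that the manual focus-point computation in section 4.2 subtracts.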
- (void)setVideoGravity:(AVLayerVideoGravity)videoGravity previewLayer:(AVCaptureVideoPreviewLayer *)previewLayer session:(AVCaptureSession *)session {
    [session beginConfiguration];
    [previewLayer setVideoGravity:videoGravity];
    [session commitConfiguration];
}
Reposted from: https://juejin.im/post/5cb1fd9af265da03bb6fa489