Reposted from: http://www.hudongdong.com/ios/550.html

Preface

Simple programmatic video editing mostly comes down to beautification, watermarks (overlays), video clipping, video splicing, and audio/video processing. For beautification, GPUImage already provides many filters, skin smoothing, and even real-time face-aware beautification, and there are plenty of mature solutions, so this series focuses on the rest: watermarks (overlays), video clipping, video splicing, and audio/video processing. A complete demo is linked at the end of the article; it edits a video and then saves it to the system photo album. The article mainly walks through the points that need attention.

Part 1 covered adding watermarks ("Video editing explained, part 1: adding watermarks"). This part covers video cropping, video splicing, and audio/video processing.

Principle

As mentioned in part 1, GPUImage only applies filters to the video frames; it does not touch the video and audio tracks. So cropping and the other edits are still done with AVFoundation, which operates on the video and audio tracks directly.

This article is fairly long. If you only want to use the code, jump to the end, download the demo source, and copy it. That said, it is worth understanding the source so that you can freely adapt the cropping and positioning logic yourself.

1. Video cropping

Full source:

// Re-record the video once through GPUImage
-(void)saveVedioPath:(NSURL*)vedioPath WithFileName:(NSString*)fileName andCallBack:(JLXCommonToolVedioCompletionHandler)competion
{
    self.completionHandler = competion;
    // filter
    filter = [[GPUImageAlphaBlendFilter alloc] init];
    // mix is the opacity of the overlay; hard-coded to 1.0 here
    [(GPUImageDissolveBlendFilter *)filter setMix:1.0f];
    // playback
    NSURL *sampleURL = vedioPath;
    AVAsset *asset = [AVAsset assetWithURL:sampleURL];
    movieFile = [[GPUImageMovie alloc] initWithAsset:asset];
    movieFile.runBenchmark = YES;
    movieFile.playAtActualSpeed = NO;

    AVAssetTrack *videoAssetTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
    UIImageOrientation videoAssetOrientation_ = UIImageOrientationUp;
    // whether the video was shot in portrait
    BOOL isVideoAssetvertical = NO;
    CGAffineTransform videoTransform = videoAssetTrack.preferredTransform;
    if (videoTransform.a == 0 && videoTransform.b == 1.0 && videoTransform.c == -1.0 && videoTransform.d == 0) {
        isVideoAssetvertical = YES;
        videoAssetOrientation_ = UIImageOrientationUp;      // portrait, shot upright
    }
    if (videoTransform.a == 0 && videoTransform.b == -1.0 && videoTransform.c == 1.0 && videoTransform.d == 0) {
        isVideoAssetvertical = YES;
        videoAssetOrientation_ = UIImageOrientationDown;    // portrait, shot upside down
    }
    if (videoTransform.a == 1.0 && videoTransform.b == 0 && videoTransform.c == 0 && videoTransform.d == 1.0) {
        isVideoAssetvertical = NO;
        videoAssetOrientation_ = UIImageOrientationLeft;    // landscape, left
    }
    if (videoTransform.a == -1.0 && videoTransform.b == 0 && videoTransform.c == 0 && videoTransform.d == -1.0) {
        isVideoAssetvertical = NO;
        videoAssetOrientation_ = UIImageOrientationRight;   // landscape, right
    }

    GPUImageView *filterView = [[GPUImageView alloc] initWithFrame:CGRectMake(0, 0, asset.naturalSize.width, asset.naturalSize.height)];
    [filterView setTransform:CGAffineTransformMakeRotation(M_PI_2)];
    UIView *view = [[UIView alloc] initWithFrame:CGRectMake(0, 0, asset.naturalSize.width, asset.naturalSize.height)];
    [view setBackgroundColor:[UIColor clearColor]];
    GPUImageUIElement *uielement = [[GPUImageUIElement alloc] initWithView:view];

    NSString *pathToMovie = [NSHomeDirectory() stringByAppendingPathComponent:[NSString stringWithFormat:@"Documents/%@.mp4", fileName]];
    unlink([pathToMovie UTF8String]);
    NSURL *movieURL = [NSURL fileURLWithPath:pathToMovie];
    movieWriter = [[GPUImageMovieWriter alloc] initWithMovieURL:movieURL size:CGSizeMake(asset.naturalSize.width, asset.naturalSize.height)];

    GPUImageFilter *progressFilter = [[GPUImageFilter alloc] init];
    [movieFile addTarget:progressFilter];
    [progressFilter addTarget:filter];
    [uielement addTarget:filter];
    movieWriter.shouldPassthroughAudio = YES;
    if ([[asset tracksWithMediaType:AVMediaTypeAudio] count] > 0) {
        movieFile.audioEncodingTarget = movieWriter;
    } else {
        // no audio
        movieFile.audioEncodingTarget = nil;
    }
    [movieFile enableSynchronizedEncodingUsingMovieWriter:movieWriter];
    // show on screen
    [filter addTarget:filterView];
    [filter addTarget:movieWriter];
    [movieWriter startRecording];
    [movieFile startProcessing];

    __weak typeof(self) weakSelf = self;
    [progressFilter setFrameProcessingCompletionBlock:^(GPUImageOutput *output, CMTime time) {
        [uielement update];
    }];
    [movieWriter setCompletionBlock:^{
        __strong typeof(self) strongSelf = weakSelf;
        [strongSelf->filter removeTarget:strongSelf->movieWriter];
        [strongSelf->movieWriter finishRecording];
        if (strongSelf.completionHandler) {
            strongSelf.completionHandler(movieURL, nil, isVideoAssetvertical);
        }
    }];
}

/**
 Crop a video

 @param videoPath      path of the source video
 @param startTime      start time of the clip
 @param endTime        end time of the clip; 0 means to the end of the video
 @param videoSize      output size of the crop; 0 means keep the original size
 @param videoDealPoint crop origin (x, y); pass zero to crop from (0, 0)
 @param fileName       output file name
 @param shouldScale    whether to scale to fill; NO keeps the black background uncropped
 */
- (void)saveVideoPath:(NSURL*)videoPath withStartTime:(float)startTime withEndTime:(float)endTime withSize:(CGSize)videoSize withVideoDealPoint:(CGPoint)videoDealPoint WithFileName:(NSString*)fileName shouldScale:(BOOL)shouldScale
{
    if (!videoPath) {
        [SVProgressHUD dismiss];
        return;
    }
    [SVProgressHUD showWithStatus:@"裁剪视频到系统相册"];
    // 1. Create the AVAsset instance; AVAsset holds all of the video's information
    NSDictionary *opts = [NSDictionary dictionaryWithObject:@(YES) forKey:AVURLAssetPreferPreciseDurationAndTimingKey];
    videoAsset = [AVURLAsset URLAssetWithURL:videoPath options:opts]; // initialize the video media file

    bool isWXVideo = false;
    for (int i = 0; i < videoAsset.metadata.count; i++) {
        AVMetadataItem *item = [videoAsset.metadata objectAtIndex:i];
        NSLog(@"======metadata:%@,%@,%@,%@", item.identifier, item.extraAttributes, item.value, item.dataType);
        NSDictionary *dic = [self StrToArrayOrNSDictionary:[NSString stringWithFormat:@"%@", item.value]];
        if ([[dic.allKeys objectAtIndex:0] isEqualToString:@"WXVer"]) {
            // WeChat video: re-encode it once through GPUImage before cropping
            isWXVideo = true;
            [self saveVedioPath:videoPath WithFileName:@"wxVideo" andCallBack:^(NSURL *assetURL, NSError *error, BOOL isVideoAssetvertical) {
                dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(0 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
                    NSString *newVideoPath = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents/wxVideo.mp4"];
                    [self goSaveVideoPath:[NSURL fileURLWithPath:newVideoPath] withStartTime:startTime withEndTime:endTime withSize:videoSize withVideoDealPoint:videoDealPoint WithFileName:fileName shouldScale:shouldScale isWxVideoAssetvertical:isVideoAssetvertical];
                });
            }];
            break;
        }
    }
    if ([[videoAsset tracksWithMediaType:AVMediaTypeAudio] count] == 0) {
        // no audio track: also re-encode through GPUImage first
        [self saveVedioPath:videoPath WithFileName:@"wxVideo" andCallBack:^(NSURL *assetURL, NSError *error, BOOL isVideoAssetvertical) {
            dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(0 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
                NSString *newVideoPath = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents/wxVideo.mp4"];
                [self goSaveVideoPath:[NSURL fileURLWithPath:newVideoPath] withStartTime:startTime withEndTime:endTime withSize:videoSize withVideoDealPoint:videoDealPoint WithFileName:fileName shouldScale:shouldScale isWxVideoAssetvertical:isVideoAssetvertical];
            });
        }];
        return;
    }
    if (!isWXVideo) {
        [self goSaveVideoPath:videoPath withStartTime:startTime withEndTime:endTime withSize:videoSize withVideoDealPoint:videoDealPoint WithFileName:fileName shouldScale:shouldScale isWxVideoAssetvertical:NO];
    }
}

// Assetvertical: GPUImage re-renders WeChat portrait video as landscape; landscape stays landscape
- (void)goSaveVideoPath:(NSURL*)videoPath withStartTime:(float)startTime withEndTime:(float)endTime withSize:(CGSize)videoSize withVideoDealPoint:(CGPoint)videoDealPoint WithFileName:(NSString*)fileName shouldScale:(BOOL)shouldScale isWxVideoAssetvertical:(BOOL)Assetvertical
{
    if (!videoPath) {
        [SVProgressHUD dismiss];
        return;
    }
    // 1. Create the AVAsset instance; AVAsset holds all of the video's information
    NSDictionary *opts = [NSDictionary dictionaryWithObject:@(YES) forKey:AVURLAssetPreferPreciseDurationAndTimingKey];
    videoAsset = [AVURLAsset URLAssetWithURL:videoPath options:opts]; // initialize the video media file
    // start time
    CMTime startCropTime = CMTimeMakeWithSeconds(startTime, 600);
    // end time; note that CMTimeRangeMake below takes (start, duration), so endCropTime is used as the range's duration
    CMTime endCropTime = CMTimeMakeWithSeconds(endTime, 600);
    if (endTime == 0) {
        endCropTime = CMTimeMakeWithSeconds(videoAsset.duration.value / videoAsset.duration.timescale - startTime, videoAsset.duration.timescale);
    }

    // 2. Create an AVMutableComposition. Per Apple's docs: "AVMutableComposition is a mutable subclass of
    //    AVComposition you use when you want to create a new composition from existing assets.
    //    You can add and remove tracks, and you can add, remove, and scale time ranges."
    AVMutableComposition *mixComposition = [AVMutableComposition composition];
    // has audio
    if ([[videoAsset tracksWithMediaType:AVMediaTypeAudio] count] > 0) {
        AVURLAsset *audioAsset = [[AVURLAsset alloc] initWithURL:videoPath options:opts];
        // audio composition track
        AVMutableCompositionTrack *audioTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
        // source audio track
        AVAssetTrack *audioAssetTrack = [[audioAsset tracksWithMediaType:AVMediaTypeAudio] lastObject];
        [audioTrack insertTimeRange:CMTimeRangeMake(startCropTime, endCropTime) ofTrack:audioAssetTrack atTime:kCMTimeZero error:nil];
    }

    // 3. Video track. The composition's tracks (video track, audio track, ...) are where the source material is inserted
    AVMutableCompositionTrack *videoTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
    NSError *error;
    // insert the source video track into the mutable track; this TimeRange is what performs the time crop
    [videoTrack insertTimeRange:CMTimeRangeMake(startCropTime, endCropTime)
                        ofTrack:[[videoAsset tracksWithMediaType:AVMediaTypeVideo] lastObject]
                         atTime:kCMTimeZero error:&error];

    // 3.1 AVMutableVideoCompositionInstruction: one video on the track, which can be scaled, rotated, etc.
    AVMutableVideoCompositionInstruction *mainInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
    mainInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, videoTrack.timeRange.duration);
    // 3.2 AVMutableVideoCompositionLayerInstruction: one video track, holding all of that track's material
    AVMutableVideoCompositionLayerInstruction *videolayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoTrack];
    AVAssetTrack *videoAssetTrack = [[videoAsset tracksWithMediaType:AVMediaTypeVideo] lastObject];
    UIImageOrientation videoAssetOrientation_ = UIImageOrientationUp;
    // whether the video was shot in portrait
    BOOL isVideoAssetvertical = NO;
    CGAffineTransform videoTransform = videoAssetTrack.preferredTransform;
    if (videoTransform.a == 0 && videoTransform.b == 1.0 && videoTransform.c == -1.0 && videoTransform.d == 0) {
        isVideoAssetvertical = YES;
        videoAssetOrientation_ = UIImageOrientationUp;      // portrait, shot upright
    }
    if (videoTransform.a == 0 && videoTransform.b == -1.0 && videoTransform.c == 1.0 && videoTransform.d == 0) {
        isVideoAssetvertical = YES;
        videoAssetOrientation_ = UIImageOrientationDown;    // portrait, shot upside down
    }
    if (videoTransform.a == 1.0 && videoTransform.b == 0 && videoTransform.c == 0 && videoTransform.d == 1.0) {
        isVideoAssetvertical = NO;
        videoAssetOrientation_ = UIImageOrientationLeft;    // landscape, left
    }
    if (videoTransform.a == -1.0 && videoTransform.b == 0 && videoTransform.c == 0 && videoTransform.d == -1.0) {
        isVideoAssetvertical = NO;
        videoAssetOrientation_ = UIImageOrientationRight;   // landscape, right
    }

    float scaleX = 1.0, scaleY = 1.0, scale = 1.0;
    CGSize originVideoSize;
    if (isVideoAssetvertical || Assetvertical) {
        // portrait: naturalSize reports width/height swapped, so swap them back
        originVideoSize = CGSizeMake([[[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0] naturalSize].height,
                                     [[[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0] naturalSize].width);
    } else {
        originVideoSize = CGSizeMake([[[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0] naturalSize].width,
                                     [[[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0] naturalSize].height);
    }
    float x = videoDealPoint.x;
    float y = videoDealPoint.y;
    if (shouldScale) {
        scaleX = videoSize.width / originVideoSize.width;
        scaleY = videoSize.height / originVideoSize.height;
        scale  = MAX(scaleX, scaleY);
        if (scaleX > scaleY) {
            NSLog(@"portrait");
        } else {
            NSLog(@"landscape");
        }
    } else {
        scaleX = 1.0;
        scaleY = 1.0;
        scale  = 1.0;
    }

    if (Assetvertical) {
        CGAffineTransform trans = CGAffineTransformMake(videoAssetTrack.preferredTransform.a * scale,
                                                        videoAssetTrack.preferredTransform.b * scale,
                                                        videoAssetTrack.preferredTransform.c * scale,
                                                        videoAssetTrack.preferredTransform.d * scale,
                                                        videoAssetTrack.preferredTransform.tx * scale - x + 720,
                                                        videoAssetTrack.preferredTransform.ty * scale - y);
        CGAffineTransform trans2 = CGAffineTransformRotate(trans, M_PI_2);
        [videolayerInstruction setTransform:trans2 atTime:kCMTimeZero];
    } else {
        CGAffineTransform trans = CGAffineTransformMake(videoAssetTrack.preferredTransform.a * scale,
                                                        videoAssetTrack.preferredTransform.b * scale,
                                                        videoAssetTrack.preferredTransform.c * scale,
                                                        videoAssetTrack.preferredTransform.d * scale,
                                                        videoAssetTrack.preferredTransform.tx * scale - x,
                                                        videoAssetTrack.preferredTransform.ty * scale - y);
        [videolayerInstruction setTransform:trans atTime:kCMTimeZero];
    }
    // crop region
    // [videolayerInstruction setCropRectangle:CGRectMake(0, 0, 720, 720) atTime:kCMTimeZero];

    // 3.3 Add instructions
    mainInstruction.layerInstructions = [NSArray arrayWithObjects:videolayerInstruction, nil];
    // AVMutableVideoComposition manages all video tracks and determines the final video size; the size crop happens here
    AVMutableVideoComposition *mainCompositionInst = [AVMutableVideoComposition videoComposition];
    CGSize naturalSize = originVideoSize;
    int64_t renderWidth = 0, renderHeight = 0;
    if (videoSize.height == 0.0 || videoSize.width == 0.0) {
        renderWidth  = naturalSize.width;
        renderHeight = naturalSize.height;
    } else {
        renderWidth  = ceil(videoSize.width);
        renderHeight = ceil(videoSize.height);
    }
    mainCompositionInst.renderSize = CGSizeMake(renderWidth, renderHeight);
    mainCompositionInst.instructions = [NSArray arrayWithObject:mainInstruction];
    mainCompositionInst.frameDuration = CMTimeMake(1, 30);

    // 4. Output path
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentsDirectory = [paths objectAtIndex:0];
    NSString *myPathDocs = [documentsDirectory stringByAppendingPathComponent:[NSString stringWithFormat:@"%@.mp4", fileName]];
    unlink([myPathDocs UTF8String]);
    NSURL *videoUrl = [NSURL fileURLWithPath:myPathDocs];

    // 5. Export the video file
    exporter = [[AVAssetExportSession alloc] initWithAsset:mixComposition presetName:AVAssetExportPresetHighestQuality];
    exporter.outputURL = videoUrl;
    exporter.outputFileType = AVFileTypeMPEG4;
    exporter.shouldOptimizeForNetworkUse = YES;
    exporter.videoComposition = mainCompositionInst;
    [exporter exportAsynchronouslyWithCompletionHandler:^{
        dispatch_async(dispatch_get_main_queue(), ^{
            // post-export handling goes here
            [self cropExportDidFinish:exporter];
        });
    }];
}

- (void)cropExportDidFinish:(AVAssetExportSession *)session {
    if (session.status == AVAssetExportSessionStatusCompleted) {
        NSURL *outputURL = session.outputURL;
        dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(0.2 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
            [SVProgressHUD dismiss];
            __block PHObjectPlaceholder *placeholder;
            if (UIVideoAtPathIsCompatibleWithSavedPhotosAlbum(outputURL.path)) {
                NSError *error;
                [[PHPhotoLibrary sharedPhotoLibrary] performChangesAndWait:^{
                    PHAssetChangeRequest *createAssetRequest = [PHAssetChangeRequest creationRequestForAssetFromVideoAtFileURL:outputURL];
                    placeholder = [createAssetRequest placeholderForCreatedAsset];
                } error:&error];
                if (error) {
                    [SVProgressHUD showErrorWithStatus:[NSString stringWithFormat:@"%@", error]];
                } else {
                    [SVProgressHUD showSuccessWithStatus:@"视频已经保存到相册"];
                }
            } else {
                [SVProgressHUD showErrorWithStatus:NSLocalizedString(@"视频保存相册失败,请设置软件读取相册权限", nil)];
            }
        });
    } else {
        NSLog(@"%@", session.error);
        [SVProgressHUD showErrorWithStatus:NSLocalizedString(@"裁剪失败", nil)];
    }
}

// Convert JSON data to a dictionary or array
- (id)DataToArrayOrNSDictionary:(NSData *)jsonData {
    NSError *error = nil;
    id jsonObject = [NSJSONSerialization JSONObjectWithData:jsonData options:NSJSONReadingAllowFragments error:&error];
    if (jsonObject != nil && error == nil) {
        return jsonObject;
    } else {
        // parse error
        return nil;
    }
}

// Convert a JSON string to a dictionary or array
- (id)StrToArrayOrNSDictionary:(NSString *)jsonStr {
    NSData *jsonData = [jsonStr dataUsingEncoding:NSUTF8StringEncoding];
    return [self DataToArrayOrNSDictionary:jsonData];
}

To call it, simply use:

-(void)cropImage {
    NSURL *videoPath = [NSURL fileURLWithPath:[[NSBundle mainBundle] pathForResource:@"selfS" ofType:@"MOV"]];
    [self saveVideoPath:videoPath withStartTime:0.1 withEndTime:0 withSize:CGSizeMake(300, 300) withVideoDealPoint:CGPointMake(50, 50) WithFileName:@"cropVideo" shouldScale:YES];
}

For ordinary video cropping, only the following method is actually needed:

- (void)goSaveVideoPath:(NSURL*)videoPath withStartTime:(float)startTime withEndTime:(float)endTime withSize:(CGSize)videoSize withVideoDealPoint:(CGPoint)videoDealPoint WithFileName:(NSString*)fileName shouldScale:(BOOL)shouldScale isWxVideoAssetvertical:(BOOL)Assetvertical

1.1 Handling WeChat videos

In practice, however, the AVFoundation-only path in this method does not handle WeChat videos well. A ten-second clip shot with WeChat's built-in camera "crops" successfully, but the result is a blue screen: sound plays but there is no picture. Comparing the printed metadata reveals the difference:

Shot with WeChat:

2017-05-11 19:35:58.529751+0800 JianLiXiu[9592:2774766] ======metadata:uiso/dscp,{dataType = 2;dataTypeNamespace = "com.apple.quicktime.udta";},{"WXVer":369428256},com.apple.metadata.datatype.UTF-8
2017-05-11 19:35:58.531034+0800 JianLiXiu[9592:2774766] ======commonMetadata:uiso/dscp,{dataType = 2;dataTypeNamespace = "com.apple.quicktime.udta";},{"WXVer":369428256},com.apple.metadata.datatype.UTF-8

Downloaded from WeChat:

======metadata:uiso/dscp,{dataType = 2;dataTypeNamespace = "com.apple.quicktime.udta";},{"WXVer":369428256},com.apple.metadata.datatype.UTF-8
======commonMetadata:uiso/dscp,{dataType = 2;dataTypeNamespace = "com.apple.quicktime.udta";},{"WXVer":369428256},com.apple.metadata.datatype.UTF-8

Shot with the built-in camera:

2017-05-11 19:35:09.708010+0800 JianLiXiu[9592:2774253] ======metadata:uiso/loci,{dataType = 2;dataTypeNamespace = "com.apple.quicktime.udta";},+31.1711+121.3836+045.636/,com.apple.metadata.datatype.UTF-8
2017-05-11 19:35:09.710245+0800 JianLiXiu[9592:2774253] ======metadata:uiso/date,{dataType = 0;dataTypeNamespace = "com.apple.quicktime.udta";},2017-05-11T17:40:41+0800,com.apple.metadata.datatype.raw-data
2017-05-11 19:35:09.712193+0800 JianLiXiu[9592:2774253] ======commonMetadata:uiso/date,{dataType = 0;dataTypeNamespace = "com.apple.quicktime.udta";},2017-05-11T17:40:41+0800,com.apple.metadata.datatype.raw-data
2017-05-11 19:35:09.712746+0800 JianLiXiu[9592:2774253] ======commonMetadata:uiso/loci,{dataType = 2;dataTypeNamespace = "com.apple.quicktime.udta";},+31.1711+121.3836+045.636/,com.apple.metadata.datatype.UTF-8

So we use the video's metadata to recognize WeChat videos: if the metadata contains {"WXVer":369428256}, the video was processed by WeChat, and we first re-render and re-encode it once through GPUImage before processing it further.
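The detection logic itself is tiny. As a language-agnostic sketch (in Python, with a plain list of strings standing in for the AVAsset metadata values), it amounts to parsing each metadata value as JSON and looking for the WXVer key:

```python
import json

def is_wechat_video(metadata_values):
    """Return True if any metadata value parses as a JSON object containing 'WXVer'."""
    for value in metadata_values:
        try:
            parsed = json.loads(value)
        except (ValueError, TypeError):
            continue  # not JSON, e.g. a GPS string from the built-in camera
        if isinstance(parsed, dict) and "WXVer" in parsed:
            return True
    return False

# Values as they appear in the logged metadata above
print(is_wechat_video(['{"WXVer":369428256}']))          # True
print(is_wechat_video(["+31.1711+121.3836+045.636/"]))   # False
```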

In the code, the line

CGAffineTransform videoTransform = videoAssetTrack.preferredTransform;

represents the video's capture orientation. For normally captured videos, the coefficients print as follows:

Built-in camera, portrait:  0,  1, -1,  0
Built-in camera, landscape: 1,  0,  0,  1
QQ, portrait:               1,  0,  0,  1
QQ, landscape:              0, -1,  1,  0
WeChat, portrait:           0,  1, -1,  0
WeChat, landscape:          1,  0,  0,  1

But after a WeChat video is re-recorded through GPUImage, the capture orientation becomes:

WeChat, portrait:  1, 0, 0, 1
WeChat, landscape: 1, 0, 0, 1

In other words, portrait and landscape come out identical. Landscape footage then crops correctly, but cropping portrait footage produces a broken result, so we must determine the original video's orientation first and account for it when cropping.
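The orientation check reduces to matching the four (a, b, c, d) patterns above. Here is the same branching as a Python sketch (it mirrors the if chain in the crop code; it is not an AVFoundation API):

```python
def classify_orientation(a, b, c, d):
    """Map preferredTransform coefficients (a, b, c, d) to (is_vertical, label),
    mirroring the four branches in the crop code."""
    if (a, b, c, d) == (0, 1, -1, 0):
        return True, "up"       # portrait, shot upright
    if (a, b, c, d) == (0, -1, 1, 0):
        return True, "down"     # portrait, shot upside down
    if (a, b, c, d) == (1, 0, 0, 1):
        return False, "left"    # landscape left
    if (a, b, c, d) == (-1, 0, 0, -1):
        return False, "right"   # landscape right
    return False, "unknown"

print(classify_orientation(0, 1, -1, 0))  # (True, 'up'): built-in camera, portrait
print(classify_orientation(1, 0, 0, 1))   # (False, 'left'): landscape, or a re-encoded WeChat video
```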

1.2 Choosing the crop position

int64_t renderWidth = 0, renderHeight = 0;

sets the size of the rendered output video. To export at a specific size, set the output size here. The crop, however, starts from (0, 0) by default; to crop starting from somewhere else, you have to move the video via videolayerInstruction.

videolayerInstruction carries the video track's movement and animation effects: translating, flipping, and shrinking the video are all done on it. Setting its CGAffineTransform is what positions the crop.

x is stretched by the value of c (the view's width changes with it), and y is stretched by the value of b (the view's height changes with it). Note that changing b and c does not move the view's center point; they are an interesting pair of parameters.

x is translated by t.tx and y by t.ty, and here the center point does move along with the transform.
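Concretely, Core Graphics applies the matrix as x' = a*x + c*y + tx and y' = b*x + d*y + ty, which is why c couples x to y, b couples y to x, and tx/ty translate the point. A quick numeric check (plain Python, not Core Graphics):

```python
def apply_affine(a, b, c, d, tx, ty, x, y):
    """Apply a CGAffineTransform-style matrix to a point:
    x' = a*x + c*y + tx,  y' = b*x + d*y + ty."""
    return a * x + c * y + tx, b * x + d * y + ty

# Identity leaves the point alone
print(apply_affine(1, 0, 0, 1, 0, 0, 10, 20))      # (10, 20)
# A translation of (-50, -50) shifts content so the crop starts at (50, 50)
print(apply_affine(1, 0, 0, 1, -50, -50, 50, 50))  # (0, 0)
# The portrait matrix (0, 1, -1, 0) is a 90-degree rotation
print(apply_affine(0, 1, -1, 0, 0, 0, 10, 0))      # (0, 10)
```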

Below are the transform helpers Apple provides.

Translation:

① Build a translation transform from scratch: CGAffineTransformMakeTranslation(CGFloat tx, CGFloat ty)

② Translate relative to an existing transform (its own or another): CGAffineTransformTranslate(CGAffineTransform t, CGFloat tx, CGFloat ty)

Scaling:

① Build a scaling transform from scratch:

CGAffineTransformMakeScale(CGFloat sx, CGFloat sy)

② Scale relative to an existing transform:

CGAffineTransformScale(CGAffineTransform t, CGFloat sx, CGFloat sy)

Rotation:

① Build a rotation transform from scratch:

CGAffineTransformMakeRotation(CGFloat angle) (angle is the rotation in radians)

② Rotate relative to an existing transform:

CGAffineTransformRotate(CGAffineTransform t, CGFloat angle)

For details, see "iOS 仿射变换CGAffineTransform详解".

So combining the render size with the translation determines which point the crop starts from and how large the cropped video is.
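Putting the two together: to crop a videoSize rectangle starting at videoDealPoint, the crop code scales the transform and subtracts the crop origin from tx/ty, then sets renderSize to the target size. A sketch of that arithmetic (hypothetical helper names; it mirrors the non-WeChat branch of the code, ignoring the +720 WeChat correction):

```python
def crop_params(origin_size, crop_size, crop_point, should_scale):
    """Compute (scale, tx_offset, ty_offset, render_size) as in the crop code:
    scale fills the target rectangle, and the tx/ty offsets shift the layer
    so the crop starts at crop_point instead of (0, 0)."""
    ow, oh = origin_size
    cw, ch = crop_size
    x, y = crop_point
    if should_scale and cw and ch:
        scale = max(cw / ow, ch / oh)  # scale-to-fill, as in scale = MAX(scaleX, scaleY)
    else:
        scale = 1.0
    render = (cw, ch) if cw and ch else (ow, oh)  # size 0 means keep the original size
    return scale, -x, -y, render

# Crop a 300x300 square starting at (50, 50) out of a 720x1280 portrait video
print(crop_params((720, 1280), (300, 300), (50, 50), True))
```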

1.3 Choosing the crop time

The time crop is done when editing the video and audio tracks: the time range inserted into each track controls how much of the video is kept.

// insert the source video track into the mutable track; this TimeRange performs the time crop
[videoTrack insertTimeRange:CMTimeRangeMake(startCropTime, endCropTime)
                    ofTrack:[[videoAsset tracksWithMediaType:AVMediaTypeVideo] lastObject]
                     atTime:kCMTimeZero error:&error];
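CMTimeMakeWithSeconds(seconds, 600) simply stores seconds as a rational value over a timescale. A sketch of that representation, plus the endTime == 0 fallback used in the crop code (plain Python, not the real CMTime API):

```python
def make_time(seconds, timescale=600):
    """Seconds as a (value, timescale) pair, like CMTimeMakeWithSeconds."""
    return (round(seconds * timescale), timescale)

def crop_range(start, end, duration):
    """Mirror the article's rule: endTime == 0 means 'to the end of the video',
    computed as the remaining length after startTime."""
    if end == 0:
        end = duration - start
    return make_time(start), make_time(end)

print(make_time(1.5))            # (900, 600)
print(crop_range(0.1, 0, 10.0))  # ((60, 600), (5940, 600))
```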

1.4 Handling videos without audio, such as time-lapse videos

As noted in part 1, processing a video that has no audio track as if it had one fails. So when cropping, check whether the video actually contains audio, and skip adding the audio track if it does not:

// has audio
if ([[videoAsset tracksWithMediaType:AVMediaTypeAudio] count] > 0) {
    // audio source
    AVURLAsset *audioAsset = [[AVURLAsset alloc] initWithURL:videoPath options:opts];
    // audio composition track
    AVMutableCompositionTrack *audioTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
    // source audio track
    AVAssetTrack *audioAssetTrack = [[audioAsset tracksWithMediaType:AVMediaTypeAudio] lastObject];
    [audioTrack insertTimeRange:CMTimeRangeMake(startCropTime, endCropTime) ofTrack:audioAssetTrack atTime:kCMTimeZero error:nil];
}

1.5 Handling the original video size

When cropping only the time while keeping the default size, it turns out the reported naturalSize is wrong for portrait videos: a 720x1280 video comes back as 1280x720, with width and height exactly swapped. So swap them back for portrait videos to get the correct size:

if (isVideoAssetvertical || Assetvertical) {
    originVideoSize = CGSizeMake([[[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0] naturalSize].height,
                                 [[[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0] naturalSize].width);
} else {
    originVideoSize = CGSizeMake([[[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0] naturalSize].width,
                                 [[[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0] naturalSize].height);
}
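The same fix as a one-line sketch (Python, hypothetical helper name): swap width and height whenever the asset is portrait.

```python
def oriented_size(natural_w, natural_h, is_vertical):
    """naturalSize reports the encoded dimensions; swap them for portrait video."""
    return (natural_h, natural_w) if is_vertical else (natural_w, natural_h)

print(oriented_size(1280, 720, True))   # (720, 1280): portrait video reported as 1280x720
print(oriented_size(1280, 720, False))  # (1280, 720): landscape stays as-is
```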

2. Video splicing and audio/video processing

Splicing videos and combining audio with video work the same way: both come down to the start times on the video and audio tracks. If the second video starts at the moment the first one ends, the two are spliced end to end; if both start at the same time, the two are blended together instead.
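That scheduling rule can be sketched as cumulative insertion times: each clip's atTime is the sum of the durations before it (inserting every clip at time 0 would overlay them instead).

```python
def splice_times(durations):
    """Insertion time (atTime) for each clip when splicing end to end:
    clip i starts where clips 0..i-1 end."""
    times, t = [], 0.0
    for d in durations:
        times.append(t)
        t += d
    return times, t  # per-clip start times and the total duration

# Two clips of 3s and 5s: the second starts at 3s, and background music spans the full 8s
print(splice_times([3.0, 5.0]))  # ([0.0, 3.0], 8.0)
```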

Here is an example that splices two videos together and adds a background music track.

-(void)addFirstVideo:(NSURL*)firstVideoPath andSecondVideo:(NSURL*)secondVideo withMusic:(NSURL*)musicPath
{
    [SVProgressHUD showWithStatus:@"正在合成到系统相册中"];
    AVAsset *firstAsset  = [AVAsset assetWithURL:firstVideoPath];
    AVAsset *secondAsset = [AVAsset assetWithURL:secondVideo];
    AVAsset *musciAsset  = [AVAsset assetWithURL:musicPath];
    // 1 - Create AVMutableComposition object. This object will hold your AVMutableCompositionTrack instances.
    AVMutableComposition *mixComposition = [[AVMutableComposition alloc] init];
    // 2 - Video track
    AVMutableCompositionTrack *firstTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeVideo
                                                                        preferredTrackID:kCMPersistentTrackID_Invalid];
    [firstTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, firstAsset.duration)
                        ofTrack:[[firstAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0]
                         atTime:kCMTimeZero error:nil];
    // the second clip starts where the first one ends
    [firstTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, secondAsset.duration)
                        ofTrack:[[secondAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0]
                         atTime:firstAsset.duration error:nil];
    // 3 - Audio track: the background music spans both clips
    if (musciAsset != nil) {
        AVMutableCompositionTrack *AudioTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeAudio
                                                                            preferredTrackID:kCMPersistentTrackID_Invalid];
        [AudioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, CMTimeAdd(firstAsset.duration, secondAsset.duration))
                            ofTrack:[[musciAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0]
                             atTime:kCMTimeZero error:nil];
    }
    // 4 - Get path
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentsDirectory = [paths objectAtIndex:0];
    NSString *myPathDocs = [documentsDirectory stringByAppendingPathComponent:[NSString stringWithFormat:@"mergeVideo-%d.mov", arc4random() % 1000]];
    NSURL *url = [NSURL fileURLWithPath:myPathDocs];
    // 5 - Create exporter
    exporter = [[AVAssetExportSession alloc] initWithAsset:mixComposition presetName:AVAssetExportPresetHighestQuality];
    exporter.outputURL = url;
    exporter.outputFileType = AVFileTypeQuickTimeMovie;
    exporter.shouldOptimizeForNetworkUse = YES;
    [exporter exportAsynchronouslyWithCompletionHandler:^{
        dispatch_async(dispatch_get_main_queue(), ^{
            [self exportDidFinish:exporter];
        });
    }];
}

- (void)exportDidFinish:(AVAssetExportSession *)session {
    if (session.status == AVAssetExportSessionStatusCompleted) {
        NSURL *outputURL = session.outputURL;
        dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(0.2 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
            [SVProgressHUD dismiss];
            __block PHObjectPlaceholder *placeholder;
            if (UIVideoAtPathIsCompatibleWithSavedPhotosAlbum(outputURL.path)) {
                NSError *error;
                [[PHPhotoLibrary sharedPhotoLibrary] performChangesAndWait:^{
                    PHAssetChangeRequest *createAssetRequest = [PHAssetChangeRequest creationRequestForAssetFromVideoAtFileURL:outputURL];
                    placeholder = [createAssetRequest placeholderForCreatedAsset];
                } error:&error];
                if (error) {
                    [SVProgressHUD showErrorWithStatus:[NSString stringWithFormat:@"%@", error]];
                } else {
                    [SVProgressHUD showSuccessWithStatus:@"视频已经保存到相册"];
                }
            } else {
                [SVProgressHUD showErrorWithStatus:NSLocalizedString(@"视频保存相册失败,请设置软件读取相册权限", nil)];
            }
        });
    }
}

Called like this:

-(void)addMusic {
    NSURL *videoPath1 = [NSURL fileURLWithPath:[[NSBundle mainBundle] pathForResource:@"selfS" ofType:@"MOV"]];
    NSURL *videoPath2 = [NSURL fileURLWithPath:[[NSBundle mainBundle] pathForResource:@"selfH" ofType:@"MOV"]];
    NSURL *music = [NSURL fileURLWithPath:[[NSBundle mainBundle] pathForResource:@"music" ofType:@"mp3"]];
    [self addFirstVideo:videoPath1 andSecondVideo:videoPath2 withMusic:music];
}

This splices videoPath1 and videoPath2 together, adds music as the background track, and saves the resulting video to the photo album.

(2017-08-14) 2.1 Adjusting the audio volume

Recently I needed to lower the volume of the background music. Lowering the volume of the music file itself would work, but editing every file is inconvenient, so instead the code reduces the volume of just the specified audio track, leaving the other tracks untouched. That is what AVMutableAudioMix is for.

Adding the following to the existing code is enough:

// adjust the volume of the background music: start
AVMutableAudioMix *videoAudioMixTools = [AVMutableAudioMix audioMix];
if (musciAsset) {
    // parameters for the background-music track
    AVMutableAudioMixInputParameters *firstAudioParam = [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:AudioTrack];
    // set the track volume; a ramp is possible, and 1.0 is full volume
    [firstAudioParam setVolumeRampFromStartVolume:1.0 toEndVolume:1.0 timeRange:CMTimeRangeMake(kCMTimeZero, CMTimeAdd(firstAsset.duration, secondAsset.duration))];
    [firstAudioParam setTrackID:AudioTrack.trackID];
    videoAudioMixTools.inputParameters = [NSArray arrayWithObject:firstAudioParam];
}
// end
exporter.audioMix = videoAudioMixTools;
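setVolumeRampFromStartVolume:toEndVolume: interpolates the volume linearly over the time range, so to actually lower the music you would pass something like 0.3 for both endpoints, or different values for a fade. The ramp itself is plain linear interpolation, sketched here in Python (not the AVFoundation API):

```python
def ramp_volume(t, t0, t1, v0, v1):
    """Linear volume ramp over [t0, t1], like
    setVolumeRampFromStartVolume:toEndVolume:timeRange:; clamped outside the range."""
    if t <= t0:
        return v0
    if t >= t1:
        return v1
    return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

# Fade the music from full volume down to 0.3 over 8 seconds
print(ramp_volume(0, 0, 8, 1.0, 0.3))  # 1.0
print(ramp_volume(4, 0, 8, 1.0, 0.3))  # 0.65
print(ramp_volume(8, 0, 8, 1.0, 0.3))  # 0.3
```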

3. Demo download

GitHub: https://github.com/DamonHu/VideoEditDemo

gitosc: http://git.oschina.net/DamonHoo/VideoEditDemo

4. Demo

5. Summary

That wraps up this article. Video processing is mostly a matter of making good use of the video and audio tracks; subtitle tracks and others will come up later, but the principle is the same, and the demo source shows how all the tracks are edited. GPUImage is a good complement, excellent for rendering, so combine the two in whatever way best fits your project.

I ran into many problems while building this and tried to avoid them in the demo. If you hit them too, search around; the reference articles below are the valuable ones I found while debugging, and I hope they help.

6. References

Tutorials

  • iOS 视频旋转及平移详解
  • iOS 仿射变换CGAffineTransform详解
  • iOS视频功能模块的开发
  • iOS之AVFoundation视频转码
  • AVFoundation之AVAsset
  • iOS视频录制、压缩导出、取帧
  • IOS视频添加背景音乐同时保留原音
  • iOS图片渲染
  • 《视频直播技术详解》系列之一:视频采集和处理
  • 【如何快速的开发一个完整的iOS直播app】(原理篇)
  • iOS视频编辑学习笔记(1)-AVAsset、AVMutableComposition系列类的理解及视频裁
  • iOS 音视频高级编程:AVAsset、CoreVideo、VideoToolbox、FFmpeg与CMTime
  • iOS获取本地视频和网络URL视频的缩略图方法
  • Video Manipulation in iOS : Resizing,Merging and Overlapping Videos in iOS
  • AVFoundation和 GPUImage初探
  • iOS-----AVFoundation框架的功能详解
  • AVComposition中的CALayer
  • AVFoundation(二):核心AVAsset
  • iOS AVFoundation 视频暂停 多视频合成 流程
  • 分段录制的实现
  • 一、音视频配置文档概念

Bugs

  • naturalSize returning wrong orientation from AVURLAsset
  • AVMutableVideoComposition rotated video captured in portrait mode
  • Error Domain=AVFoundationErrorDomain Code=-11800 "这项操作无法完成"
  • ios视频、图片翻转问题
  • AVPlayer Video Gravity Positioning?
  • Fix GPUImageMovieWriter isPaused
  • Crash when app goes in background.
  • Added pause/resume methods to the GPUImageMovieWriter
  • GPUImageMovieWriter 无法2次录像 报错:[AVAssetWriter startWriting] Cannot call method when status is 3
  • Issue when saving videos
  • AVAssetWriter fails when calling finishWritingWithCompletionHandler
  • SimpleVideoFileFilter sample crash in GPUImageMovieWriter's finishRecordingWithCompletionHandler
  • GPUImageMovieWriter - crashing with “Cannot call method when status is 2”
  • AVAssetWriter finishWritingWithCompletionHandler函数没有正常执行的原因
  • https://github.com/BradLarson/GPUImage/issues/1203
  • GPUImageMovieWriter原生BUG(1)时间戳问题

Copyright: 胡东东博客 (Hu Dongdong's blog)

Original link: http://www.hudongdong.com/ios/550.html

