FFmpeg Practical Tutorial (8): Watermarks, Filters, and Other Effects with AVFilter on Android

FFmpeg Practical Tutorial (7): Decoding AVI with CMake on Android and Displaying via SurfaceView

Building on the previous article above, this one implements filters, watermarks, and similar effects.

Readers not yet comfortable with FFmpeg can start here: FFmpeg Source Code Analysis (1): Structural Overview.

First, two screenshots of the effects:
Black and white: const char *filters_descr = "lutyuv='u=128:v=128'";

Watermark: const char *filters_descr = "movie=/storage/emulated/0/ws.jpg[wm];[in][wm]overlay=5:5[out]";

In the previous articles we learned how to encode and decode audio and video with FFmpeg. Here the focus is libavfilter, the FFmpeg library for applying effects to audio and video.

The key libavfilter functions are:

avfilter_register_all(): registers all AVFilters.
avfilter_graph_alloc(): allocates an AVFilterGraph.
avfilter_graph_create_filter(): creates a filter and adds it to a graph.
avfilter_graph_parse_ptr(): adds a graph described by a string to an AVFilterGraph.
avfilter_graph_config(): checks the graph's configuration.
av_buffersrc_add_frame(): pushes an AVFrame into the graph.
av_buffersink_get_frame(): pulls an AVFrame out of the graph.
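Strung together, the calls follow this order (a condensed sketch only; the full, runnable version with error handling appears further down):

// Condensed call order -- see the full implementation below for error handling
avfilter_register_all();                         // once, at startup
filter_graph = avfilter_graph_alloc();           // empty graph
avfilter_graph_create_filter(&buffersrc_ctx,     // "buffer" source: decoded frames go in here
        avfilter_get_by_name("buffer"), "in", args, NULL, filter_graph);
avfilter_graph_create_filter(&buffersink_ctx,    // "buffersink" terminates the chain
        avfilter_get_by_name("buffersink"), "out", NULL, buffersink_params, filter_graph);
avfilter_graph_parse_ptr(filter_graph, filters_descr,
        &inputs, &outputs, NULL);                // wire [in] -> filters -> [out]
avfilter_graph_config(filter_graph, NULL);       // validate the finished graph
// then, for every decoded frame:
av_buffersrc_add_frame(buffersrc_ctx, pFrame);   // push
av_buffersink_get_frame(buffersink_ctx, pFrame); // pull the filtered frame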

Today's sample program offers several effects to choose from:

const char *filters_descr = "lutyuv='u=128:v=128'";
//const char *filters_descr = "hflip";
//const char *filters_descr = "hue='h=60:s=-3'";
//const char *filters_descr = "crop=2/3*in_w:2/3*in_h";
//const char *filters_descr = "drawbox=x=200:y=200:w=300:h=300:color=pink@0.5";
//const char *filters_descr = "movie=/storage/emulated/0/ws.jpg[wm];[in][wm]overlay=5:5[out]";
//const char *filters_descr="drawgrid=width=100:height=100:thickness=4:color=pink@0.9";

The black-and-white effect and the watermark shown above use these two:

const char *filters_descr = "lutyuv='u=128:v=128'";
//const char *filters_descr = "movie=/storage/emulated/0/ws.jpg[wm];[in][wm]overlay=5:5[out]";

For more filters, see the official documentation: http://www.ffmpeg.org/ffmpeg-filters.html
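A quick note on the description syntax, since every effect in this article is just a different string: a comma chains filters one after another, a semicolon separates independent chains, and bracketed labels such as [wm] connect them. Two hypothetical variations on the strings above:

// Chain two filters: halve the resolution, then mirror horizontally
const char *chained = "scale=iw/2:ih/2,hflip";

// Center the watermark instead of pinning it at (5,5), using overlay's
// built-in variables: main_w/main_h (video size), overlay_w/overlay_h (image size)
const char *centered = "movie=/storage/emulated/0/ws.jpg[wm];"
                       "[in][wm]overlay=(main_w-overlay_w)/2:(main_h-overlay_h)/2[out]";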

Now for the implementation.

In MainActivity we initialize a SurfaceView and declare a native function that passes the Surface down to native code (the native layer hands the processed frames to the Surface, which displays them):

SurfaceView surfaceView = (SurfaceView) findViewById(R.id.surface_view);
surfaceHolder = surfaceView.getHolder();
surfaceHolder.addCallback(this);
...
public native int play(Object surface);

play() is called from surfaceCreated(), on a worker thread:

@Override
public void surfaceCreated(SurfaceHolder holder) {
    new Thread(new Runnable() {
        @Override
        public void run() {
            play(surfaceHolder.getSurface());
        }
    }).start();
}

So the key question is: what does the JNI-level play() function do?

First, on top of the play() from the previous article, we add the headers libavfilter needs, inside the existing extern "C" block:

extern "C" {
...
//added by ws for AVfilter start
#include <libavfilter/avfiltergraph.h>
#include <libavfilter/buffersrc.h>
#include <libavfilter/buffersink.h>
//added by ws for AVfilter end
};
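One caveat if you build against a newer FFmpeg: avfiltergraph.h was merged into avfilter.h and eventually removed, so on recent releases the include would be:

#include <libavfilter/avfilter.h>  // replaces <libavfilter/avfiltergraph.h> on newer FFmpeg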

Next, declare and initialize the structures we need:

//added by ws for AVfilter start
const char *filters_descr = "lutyuv='u=128:v=128'";
//const char *filters_descr = "hflip";
//const char *filters_descr = "hue='h=60:s=-3'";
//const char *filters_descr = "crop=2/3*in_w:2/3*in_h";
//const char *filters_descr = "drawbox=x=200:y=200:w=300:h=300:color=pink@0.5";
//const char *filters_descr = "movie=/storage/emulated/0/ws.jpg[wm];[in][wm]overlay=5:5[out]";
//const char *filters_descr="drawgrid=width=100:height=100:thickness=4:color=pink@0.9";
AVFilterContext *buffersink_ctx;
AVFilterContext *buffersrc_ctx;
AVFilterGraph *filter_graph;
//added by ws for AVfilter end

Now we can initialize AVFilter for real. There is a fair amount of code; it reads best against the key functions listed above.

//added by ws for AVfilter start----------init AVfilter--------------------------ws
char args[512];
int ret;
AVFilter *buffersrc  = avfilter_get_by_name("buffer");
AVFilter *buffersink = avfilter_get_by_name("buffersink");// newer ffmpeg builds require the name "buffersink"
AVFilterInOut *outputs = avfilter_inout_alloc();
AVFilterInOut *inputs  = avfilter_inout_alloc();
enum AVPixelFormat pix_fmts[] = { AV_PIX_FMT_YUV420P, AV_PIX_FMT_NONE };
AVBufferSinkParams *buffersink_params;

filter_graph = avfilter_graph_alloc();

/* buffer video source: the decoded frames from the decoder will be inserted here. */
snprintf(args, sizeof(args),
         "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
         pCodecCtx->width, pCodecCtx->height, pCodecCtx->pix_fmt,
         pCodecCtx->time_base.num, pCodecCtx->time_base.den,
         pCodecCtx->sample_aspect_ratio.num, pCodecCtx->sample_aspect_ratio.den);

ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in",
                                   args, NULL, filter_graph);
if (ret < 0) {
    LOGD("Cannot create buffer source\n");
    return ret;
}

/* buffer video sink: to terminate the filter chain. */
buffersink_params = av_buffersink_params_alloc();
buffersink_params->pixel_fmts = pix_fmts;
ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out",
                                   NULL, buffersink_params, filter_graph);
av_free(buffersink_params);
if (ret < 0) {
    LOGD("Cannot create buffer sink\n");
    return ret;
}

/* Endpoints for the filter graph. */
outputs->name       = av_strdup("in");
outputs->filter_ctx = buffersrc_ctx;
outputs->pad_idx    = 0;
outputs->next       = NULL;

inputs->name       = av_strdup("out");
inputs->filter_ctx = buffersink_ctx;
inputs->pad_idx    = 0;
inputs->next       = NULL;

// avfilter_link(buffersrc_ctx, 0, buffersink_ctx, 0);
if ((ret = avfilter_graph_parse_ptr(filter_graph, filters_descr,
                                    &inputs, &outputs, NULL)) < 0) {
    LOGD("Cannot avfilter_graph_parse_ptr\n");
    return ret;
}
if ((ret = avfilter_graph_config(filter_graph, NULL)) < 0) {
    LOGD("Cannot avfilter_graph_config\n");
    return ret;
}
//added by ws for AVfilter end------------init AVfilter------------------------------ws
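A related compatibility note: AVBufferSinkParams and av_buffersink_params_alloc() are deprecated in newer FFmpeg releases. There, the sink is created without params and the pixel-format list is constrained afterwards through the AVOptions API, as in FFmpeg's own filtering_video.c example (this variant needs #include <libavutil/opt.h>):

// Newer-FFmpeg variant: create the sink bare, then restrict its pixel formats
ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out",
                                   NULL, NULL, filter_graph);
if (ret < 0) {
    LOGD("Cannot create buffer sink\n");
    return ret;
}
ret = av_opt_set_int_list(buffersink_ctx, "pix_fmts", pix_fmts,
                          AV_PIX_FMT_NONE, AV_OPT_SEARCH_CHILDREN);
if (ret < 0) {
    LOGD("Cannot set output pixel format\n");
    return ret;
}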

Once initialization is done, we run each frame coming out of the decoder through the filter graph:

//added by ws for AVfilter start
pFrame->pts = av_frame_get_best_effort_timestamp(pFrame);

//* push the decoded frame into the filtergraph
if (av_buffersrc_add_frame(buffersrc_ctx, pFrame) < 0) {
    LOGD("Could not av_buffersrc_add_frame");
    break;
}
ret = av_buffersink_get_frame(buffersink_ctx, pFrame);
if (ret < 0) {
    LOGD("Could not av_buffersink_get_frame");
    break;
}
//added by ws for AVfilter end
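One caution about this snippet: av_buffersink_get_frame() returns AVERROR(EAGAIN) when the graph simply needs more input (common with filters that buffer frames), and the code above breaks out of the playback loop on any negative value. A more tolerant pull loop, sketched with a hypothetical separate output frame filt_frame so the input frame is not overwritten:

// Assumes: AVFrame *filt_frame = av_frame_alloc(); done once before the loop
while (1) {
    ret = av_buffersink_get_frame(buffersink_ctx, filt_frame);
    if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
        break;                      // not an error: feed the next decoded frame
    if (ret < 0)
        return ret;                 // a real failure
    // ... render filt_frame here ...
    av_frame_unref(filt_frame);     // release the frame's buffers for reuse
}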

The frame that comes back is the one with the effect applied. Remember to free the graph when you are done:

avfilter_graph_free(&filter_graph); //added by ws for avfilter
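Strictly speaking, the sample also leaks the SwsContext and the ANativeWindow reference; a fuller teardown would be:

avfilter_graph_free(&filter_graph);  // free the filter graph
sws_freeContext(sws_ctx);            // free the scaler as well
ANativeWindow_release(nativeWindow); // drop the reference taken by ANativeWindow_fromSurface()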

And with that, today's feature is complete.

I recommend reading this article alongside the code; otherwise it is like the blind men and the elephant.
This piece may also help you understand libavfilter:
A hands-on guide to libavfilter: http://blog.csdn.net/king1425/article/details/71215686

If C and JNI are unfamiliar, see here:
http://blog.csdn.net/King1425/article/category/6865816

A few more effect screenshots, and then the source:

const char *filters_descr = "hue='h=60:s=-3'";

const char *filters_descr="drawgrid=width=100:height=100:thickness=4:color=pink@0.9";

const char *filters_descr = "drawbox=x=200:y=200:w=300:h=300:color=pink@0.5";

Here is the JNI source:

#include <jni.h>
#include <android/log.h>
#include <android/native_window.h>
#include <android/native_window_jni.h>
#include "native-lib.h"extern "C" {
#include "libavcodec/avcodec.h"
#include "libavformat/avformat.h"
#include "libswscale/swscale.h"
#include "libavutil/imgutils.h"//added by ws for AVfilter start
#include <libavfilter/avfiltergraph.h>
#include <libavfilter/buffersrc.h>
#include <libavfilter/buffersink.h>
//added by ws for AVfilter end
};#define  LOG_TAG    "ffmpegandroidplayer"
#define  LOGD(...)  __android_log_print(ANDROID_LOG_ERROR, LOG_TAG, __VA_ARGS__)//added by ws for AVfilter startconst char *filters_descr = "lutyuv='u=128:v=128'";
//const char *filters_descr = "hflip";
//const char *filters_descr = "hue='h=60:s=-3'";
//const char *filters_descr = "crop=2/3*in_w:2/3*in_h";
//const char *filters_descr = "drawbox=x=200:y=200:w=300:h=300:color=pink@0.5";
//const char *filters_descr = "movie=/storage/emulated/0/ws.jpg[wm];[in][wm]overlay=5:5[out]";
//const char *filters_descr="drawgrid=width=100:height=100:thickness=4:color=pink@0.9";AVFilterContext *buffersink_ctx;
AVFilterContext *buffersrc_ctx;
AVFilterGraph *filter_graph;//added by ws for AVfilter endJNIEXPORT jint JNICALL
Java_com_ws_ffmpegandroidavfilter_MainActivity_play(JNIEnv *env, jclass clazz, jobject surface) {LOGD("play");// sd卡中的视频文件地址,可自行修改或者通过jni传入char *file_name = "/storage/emulated/0/ws2.mp4";//char *file_name = "/storage/emulated/0/video.avi";av_register_all();avfilter_register_all();//added by ws for AVfilterAVFormatContext *pFormatCtx = avformat_alloc_context();// Open video fileif (avformat_open_input(&pFormatCtx, file_name, NULL, NULL) != 0) {LOGD("Couldn't open file:%s\n", file_name);return -1; // Couldn't open file}// Retrieve stream informationif (avformat_find_stream_info(pFormatCtx, NULL) < 0) {LOGD("Couldn't find stream information.");return -1;}// Find the first video streamint videoStream = -1, i;for (i = 0; i < pFormatCtx->nb_streams; i++) {if (pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO&& videoStream < 0) {videoStream = i;}}if (videoStream == -1) {LOGD("Didn't find a video stream.");return -1; // Didn't find a video stream}// Get a pointer to the codec context for the video streamAVCodecContext *pCodecCtx = pFormatCtx->streams[videoStream]->codec;//added by ws for AVfilter start----------init AVfilter--------------------------wschar args[512];int ret;AVFilter *buffersrc  = avfilter_get_by_name("buffer");AVFilter *buffersink = avfilter_get_by_name("buffersink");//新版的ffmpeg库必须为buffersinkAVFilterInOut *outputs = avfilter_inout_alloc();AVFilterInOut *inputs  = avfilter_inout_alloc();enum AVPixelFormat pix_fmts[] = { AV_PIX_FMT_YUV420P, AV_PIX_FMT_NONE };AVBufferSinkParams *buffersink_params;filter_graph = avfilter_graph_alloc();/* buffer video source: the decoded frames from the decoder will be inserted here. */snprintf(args, sizeof(args),"video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",pCodecCtx->width, pCodecCtx->height, pCodecCtx->pix_fmt,pCodecCtx->time_base.num, pCodecCtx->time_base.den,pCodecCtx->sample_aspect_ratio.num, pCodecCtx->sample_aspect_ratio.den);ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in",args, NULL, filter_graph);if (ret < 0) {LOGD("Cannot create buffer source\n");return ret;}/* buffer video sink: to terminate the filter chain. */buffersink_params = av_buffersink_params_alloc();buffersink_params->pixel_fmts = pix_fmts;ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out",NULL, buffersink_params, filter_graph);av_free(buffersink_params);if (ret < 0) {LOGD("Cannot create buffer sink\n");return ret;}/* Endpoints for the filter graph. 
*/outputs->name       = av_strdup("in");outputs->filter_ctx = buffersrc_ctx;outputs->pad_idx    = 0;outputs->next       = NULL;inputs->name       = av_strdup("out");inputs->filter_ctx = buffersink_ctx;inputs->pad_idx    = 0;inputs->next       = NULL;// avfilter_link(buffersrc_ctx, 0, buffersink_ctx, 0);if ((ret = avfilter_graph_parse_ptr(filter_graph, filters_descr,&inputs, &outputs, NULL)) < 0) {LOGD("Cannot avfilter_graph_parse_ptr\n");return ret;}if ((ret = avfilter_graph_config(filter_graph, NULL)) < 0) {LOGD("Cannot avfilter_graph_config\n");return ret;}//added by ws for AVfilter start------------init AVfilter------------------------------ws// Find the decoder for the video streamAVCodec *pCodec = avcodec_find_decoder(pCodecCtx->codec_id);if (pCodec == NULL) {LOGD("Codec not found.");return -1; // Codec not found}if (avcodec_open2(pCodecCtx, pCodec, NULL) < 0) {LOGD("Could not open codec.");return -1; // Could not open codec}// 获取native windowANativeWindow *nativeWindow = ANativeWindow_fromSurface(env, surface);// 获取视频宽高int videoWidth = pCodecCtx->width;int videoHeight = pCodecCtx->height;// 设置native window的buffer大小,可自动拉伸ANativeWindow_setBuffersGeometry(nativeWindow, videoWidth, videoHeight,WINDOW_FORMAT_RGBA_8888);ANativeWindow_Buffer windowBuffer;if (avcodec_open2(pCodecCtx, pCodec, NULL) < 0) {LOGD("Could not open codec.");return -1; // Could not open codec}// Allocate video frameAVFrame *pFrame = av_frame_alloc();// 用于渲染AVFrame *pFrameRGBA = av_frame_alloc();if (pFrameRGBA == NULL || pFrame == NULL) {LOGD("Could not allocate video frame.");return -1;}// Determine required buffer size and allocate buffer// buffer中数据就是用于渲染的,且格式为RGBAint numBytes = av_image_get_buffer_size(AV_PIX_FMT_RGBA, pCodecCtx->width, pCodecCtx->height,1);uint8_t *buffer = (uint8_t *) av_malloc(numBytes * sizeof(uint8_t));av_image_fill_arrays(pFrameRGBA->data, pFrameRGBA->linesize, buffer, AV_PIX_FMT_RGBA,pCodecCtx->width, pCodecCtx->height, 1);// 由于解码出来的帧格式不是RGBA的,在渲染之前需要进行格式转换struct SwsContext *sws_ctx = sws_getContext(pCodecCtx->width,pCodecCtx->height,pCodecCtx->pix_fmt,pCodecCtx->width,pCodecCtx->height,AV_PIX_FMT_RGBA,SWS_BILINEAR,NULL,NULL,NULL);int frameFinished;AVPacket packet;while (av_read_frame(pFormatCtx, &packet) >= 0) {// Is this a packet from the video stream?if (packet.stream_index == videoStream) {// Decode video frameavcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);// 并不是decode一次就可解码出一帧if (frameFinished) {//added by ws for AVfilter startpFrame->pts = av_frame_get_best_effort_timestamp(pFrame);//* push the decoded frame into the filtergraphif (av_buffersrc_add_frame(buffersrc_ctx, pFrame) < 0) {LOGD("Could not av_buffersrc_add_frame");break;}ret = av_buffersink_get_frame(buffersink_ctx, pFrame);if (ret < 0) {LOGD("Could not av_buffersink_get_frame");break;}//added by ws for AVfilter end// lock native window bufferANativeWindow_lock(nativeWindow, &windowBuffer, 0);// 格式转换sws_scale(sws_ctx, (uint8_t const *const *) pFrame->data,pFrame->linesize, 0, pCodecCtx->height,pFrameRGBA->data, pFrameRGBA->linesize);// 获取strideuint8_t *dst = (uint8_t *) windowBuffer.bits;int dstStride = windowBuffer.stride * 4;uint8_t *src = (pFrameRGBA->data[0]);int srcStride = pFrameRGBA->linesize[0];// 由于window的stride和帧的stride不同,因此需要逐行复制int h;for (h = 0; h < videoHeight; h++) {memcpy(dst + h * dstStride, src + h * srcStride, srcStride);}ANativeWindow_unlockAndPost(nativeWindow);}}av_packet_unref(&packet);}av_free(buffer);av_free(pFrameRGBA);// Free the YUV 
frameav_free(pFrame);avfilter_graph_free(&filter_graph); //added by ws for avfilter// Close the codecsavcodec_close(pCodecCtx);// Close the video fileavformat_close_input(&pFormatCtx);return 0;
}
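A final side note: avcodec_decode_video2() has been deprecated since FFmpeg 3.1. If you move to a newer FFmpeg, the decode step inside the read loop would use the send/receive API instead; a minimal sketch with the same variable names:

// Newer decode API (FFmpeg 3.1+), replacing avcodec_decode_video2()
if (avcodec_send_packet(pCodecCtx, &packet) < 0)
    break;                                  // decoder rejected the packet
while (avcodec_receive_frame(pCodecCtx, pFrame) == 0) {
    // pFrame is now a complete decoded frame; run it through the
    // filter graph and render it exactly as in the frameFinished branch
}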

native-lib.h

#include <jni.h>

#ifndef FFMPEGANDROID_NATIVE_LIB_H
#define FFMPEGANDROID_NATIVE_LIB_H

#ifdef __cplusplus
extern "C" {
#endif

JNIEXPORT jint JNICALL
Java_com_ws_ffmpegandroidavfilter_MainActivity_play(JNIEnv *, jclass, jobject);

#ifdef __cplusplus
}
#endif

#endif

demo:https://github.com/WangShuo1143368701/FFmpegAndroid/tree/master/ffmpegandroidavfilter
