Android YUV watermarking: adding a watermark while capturing video on Android
Migrated from my old CSDN blog!
The task: add a watermark while capturing video on Android, with a changing timestamp inside the watermark. Weighing the options, FFmpeg came to mind first, since it ships many filters that can implement watermark overlay.
The basic pipeline: Camera preview -> get the preview frame as a bitmap -> add the watermark to the bitmap via FFmpeg -> encode the bitmap to H.264 with FFmpeg -> write to file.
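The frames from Android's preview callback are typically NV21 (a Y plane followed by interleaved V/U bytes), while FFmpeg's YUV420P filters expect three separate planes. A minimal conversion sketch (an illustrative helper, not part of the original code; it assumes tightly packed buffers with no row padding):

```c
#include <stddef.h>

/* Convert NV21 (Y plane + interleaved VU plane) to I420/YUV420P
 * (separate Y, U, V planes). Assumes both buffers are tightly packed
 * (stride == width); padded strides need row-by-row handling. */
static void nv21_to_i420(const unsigned char *nv21, unsigned char *i420,
                         int width, int height)
{
    size_t y_size = (size_t)width * height;
    size_t uv_count = y_size / 4;            /* chroma samples per plane */
    const unsigned char *vu = nv21 + y_size; /* interleaved V,U pairs */
    unsigned char *u = i420 + y_size;
    unsigned char *v = u + uv_count;
    size_t i;

    for (i = 0; i < y_size; i++)             /* luma copies straight over */
        i420[i] = nv21[i];
    for (i = 0; i < uv_count; i++) {         /* de-interleave VU -> U, V */
        v[i] = vu[2 * i];
        u[i] = vu[2 * i + 1];
    }
}
```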
There are plenty of FFmpeg watermarking examples online, and they are all broadly similar. On Android, the genuinely hard part is building an FFmpeg .so that includes the watermarking filters: the drawtext filter depends on FreeType, so a FreeType .so for Android has to be cross-compiled first. When configuring FFmpeg for Android, pkg-config kept failing its check for the FreeType dependency; as a last resort I patched FFmpeg's configure to skip the FreeType check, and instead pointed the build at the FreeType .so search paths manually in the configure flags.
The configuration I used is as follows (the leading path is pkg-config's search path, i.e. PKG_CONFIG_PATH):
PKG_CONFIG_PATH=/usr/local/lib/pkgconfig $FFMPEG_ROOT/configure --target-os=linux \
--prefix=$PREFIX \
--disable-encoders \
--disable-decoders \
--disable-muxers \
--disable-demuxers \
--disable-parsers \
--disable-bsfs \
--disable-protocols \
--disable-devices \
--disable-avdevice \
--disable-zlib \
--disable-bzlib \
--enable-cross-compile \
--enable-runtime-cpudetect \
--pkg-config-flags="--static" \
--disable-asm \
--arch=arm \
--enable-armv5te \
--cc=$PREBUILT/bin/arm-linux-androideabi-gcc \
--cross-prefix=$PREBUILT/bin/arm-linux-androideabi- \
--disable-stripping \
--nm=$PREBUILT/bin/arm-linux-androideabi-nm \
--sysroot=$PLATFORM \
--enable-nonfree \
--enable-version3 \
--enable-gpl \
--disable-doc \
--disable-ffplay \
--disable-ffserver \
--disable-ffprobe \
--enable-avcodec \
--enable-avformat \
--enable-avutil \
--enable-avfilter \
--enable-avresample \
--enable-swresample \
--enable-swscale \
--enable-postproc \
--enable-libx264 \
--enable-encoder=libx264 \
--enable-decoder=h264 \
--enable-hwaccels \
--enable-memalign-hack \
--disable-debug \
--enable-pthreads \
--disable-filters \
--enable-libfreetype \
--enable-filter=drawbox \
--enable-filter=drawtext \
--enable-avisynth \
--enable-iconv \
--extra-cflags="-Os -s -I$X264_ROOT -I$NDK/sysroot/include -I$PREFIX/include/freetype -I$PREFIX/include/ -fPIC -DANDROID -D__thumb__ -mthumb -Wfatal-errors -Wno-deprecated -mfloat-abi=softfp -mfpu=neon -marm -march=armv7-a -mvectorize-with-neon-quad" \
--extra-ldflags="-L$ELIB -L$NDK/sysroot/lib -L$NDK/sources/cxx-stl/gnu-libstdc++/4.6/libs/armeabi-v7a -L$PREFIX/lib" \
--extra-libs="-lfreetype2-static -lstdc++ -lgnustl_static -fexceptions -lsupc++ -llog "
After that, the watermarked output showed a water-ripple artifact; the root cause was that the width and height passed in did not match the actual bitmap.
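A frequent cause of this kind of rippling/shearing is copying rows at the wrong pitch: an AVFrame keeps its rows `linesize[i]` bytes apart, and linesize is often padded past the visible width. A stride-aware plane copy (an illustrative sketch, not taken from the original code) avoids the mismatch:

```c
#include <string.h>
#include <stddef.h>

/* Copy one image plane row by row, honouring source and destination
 * strides. A single memcpy of width*height bytes is only valid when
 * both strides equal the width; padded linesizes shear the image and
 * produce the "water ripple" look. */
static void copy_plane(unsigned char *dst, int dst_stride,
                       const unsigned char *src, int src_stride,
                       int width, int height)
{
    int y;
    for (y = 0; y < height; y++)
        memcpy(dst + (size_t)y * dst_stride,
               src + (size_t)y * src_stride,
               (size_t)width);
}
```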
Below is the part of the doubango code that adds the watermark before H.264 encoding:
/* the header names inside the angle brackets were lost; inferred from the API calls below */
#include <libavfilter/avfilter.h>
#include <libavfilter/avfiltergraph.h>
#include <libavfilter/buffersink.h>
#include <libavfilter/buffersrc.h>
#include <libavutil/opt.h>
#include <libavutil/imgutils.h>
#include <libavutil/avutil.h>
static AVFilterContext* buffersink_ctx = NULL;
static AVFilterContext* buffersrc_ctx = NULL;
static AVFilterGraph* filter_graph = NULL;
static AVFrame* frame_in = NULL;
static AVFrame* frame_out = NULL;
static int isInited;
static int origin_in_width = 480;
static int origin_in_height = 320;
static char last_wartmark_str[125] = "\0";
static char filters_descr[256] = "\0";
static int init_filters(tmedia_codec_t* self)
{
tdav_codec_h264_t* h264 = (tdav_codec_h264_t*)self;
if (!tmedia_defaults_get_use_water_mark_func_flg()){
return -1;
}
if (tmedia_defaults_get_water_mark_strvalue() == tsk_null){
TSK_DEBUG_ERROR("tmedia_defaults_get_water_mark_strvalue() is null\n");
tmedia_defaults_set_use_water_mark_func_flg(tsk_false, tmedia_defaults_get_water_mark_position_x(), tmedia_defaults_get_water_mark_position_y());
return -1;
}
if (strlen(last_wartmark_str) == 0){
strncpy(last_wartmark_str, tmedia_defaults_get_water_mark_strvalue(), sizeof(last_wartmark_str) - 1);
last_wartmark_str[124] = '\0';
}
if ((tsk_strcmp(last_wartmark_str, tmedia_defaults_get_water_mark_strvalue()) != 0)){
tdav_codec_h264_deinit_filters();
isInited = 0;//refresh filters.
}
strncpy(last_wartmark_str, tmedia_defaults_get_water_mark_strvalue(), sizeof(last_wartmark_str) - 1);
last_wartmark_str[sizeof(last_wartmark_str) - 1] = '\0';
//TSK_DEBUG_INFO("init filters ,Picture size: %u ** %u", h264->encoder.context->width, h264->encoder.context->height);
if(!self){
TSK_DEBUG_ERROR("self is null\n");
return -1;
}
int in_width=h264->encoder.context->width;
int in_height=h264->encoder.context->height;
int format = PIX_FMT_YUV420P;
if(!in_width || !in_height) {
TSK_DEBUG_ERROR("in_width/in_height is null\n");
return -1;
}
if( in_width != origin_in_width || in_height != origin_in_height){
tdav_codec_h264_deinit_filters();
isInited = 0;
}
if(isInited){
TSK_DEBUG_INFO("graph filter already initialized.\n");
return -1;
}
if(filter_graph) {
avfilter_graph_free(&filter_graph);
}
//static char *filters_descr = "drawbox=x=100:y=100:w=50:h=50:color=pink@0.5";
//static char *filters_descr = "drawtext=fontfile=/sdcard/arialbd.ttf:fontsize=30:text=\'6102124695\':x=100:y=x/dar:fontcolor=red@0.5:shadowy=2";
//static char *filters_descr = "drawtext=fontfile=/sdcard/arialbd.ttf:fontsize=30:text=\'6102124695\':x=100:y=x/dar";
//static char *filters_descr = "drawtext=fontsize=30:text=\'6102124695\':fontcolor=red";
char *font_color = "red";
switch(tmedia_defaults_get_water_font_color()){
case 0: //red
font_color = "red";
break;
case 1://green
font_color = "green";
break;
case 2://blue
font_color = "blue";
break;
case 3://black
font_color = "black";
break;
case 4://yellow
font_color = "yellow";
break;
case 5://orange
font_color = "orange";
break;
case 6://white
font_color = "white";
break;
default:
break;
};
if (tmedia_defaults_get_water_font_path() == tsk_null){
snprintf(filters_descr, sizeof(filters_descr), "drawtext=fontfile=/sdcard/arialbd.ttf:fontsize=%d:text=%s:x=%d:y=%d:fontcolor=%s@0.6:borderw=2:bordercolor=black@0.6",
tmedia_defaults_get_water_font_size(),
tmedia_defaults_get_water_mark_strvalue(),
tmedia_defaults_get_water_mark_position_x(),
tmedia_defaults_get_water_mark_position_y(),
font_color);
}else{
snprintf(filters_descr, sizeof(filters_descr), "drawtext=fontfile=%s:fontsize=%d:text=%s:x=%d:y=%d:fontcolor=%s@0.6:borderw=2:bordercolor=black@0.6",
tmedia_defaults_get_water_font_path(),
tmedia_defaults_get_water_font_size(),
tmedia_defaults_get_water_mark_strvalue(),
tmedia_defaults_get_water_mark_position_x(),
tmedia_defaults_get_water_mark_position_y(),
font_color);
}
avfilter_register_all();
char args[512];
int ret = 0;
AVFilter *buffersrc = avfilter_get_by_name("buffer");
AVFilter *buffersink = avfilter_get_by_name("buffersink");
AVFilterInOut *outputs = avfilter_inout_alloc();
AVFilterInOut *inputs = avfilter_inout_alloc();
filter_graph = avfilter_graph_alloc();
if (!outputs || !inputs || !filter_graph) {
ret = AVERROR(ENOMEM);
goto end;
}
/* buffer video source: the decoded frames from the decoder will be inserted here. */
snprintf(args, sizeof(args),
"video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
in_width, in_height, PIX_FMT_YUV420P,
1, tmedia_defaults_get_video_fps(),
1, 1);
ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in",
args, NULL, filter_graph);
if (ret < 0) {
TSK_DEBUG_ERROR("Cannot create buffer source, %d, \n", ret);
av_log(NULL, AV_LOG_ERROR, "Cannot create buffer source\n");
goto end;
}
/* buffer video sink: to terminate the filter chain. */
ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out",
NULL, NULL, filter_graph);
if (ret < 0) {
TSK_DEBUG_ERROR("Cannot create buffer sink, %d, \n", ret);
av_log(NULL, AV_LOG_ERROR, "Cannot create buffer sink\n");
goto end;
}
enum AVPixelFormat pix_fmts[] = { PIX_FMT_YUV420P, PIX_FMT_NONE }; /* not declared in the original snippet */
ret = av_opt_set_int_list(buffersink_ctx, "pix_fmts", pix_fmts,
PIX_FMT_NONE, AV_OPT_SEARCH_CHILDREN);
if (ret < 0) {
TSK_DEBUG_ERROR("Cannot set output pixel format, %d, \n", ret);
av_log(NULL, AV_LOG_ERROR, "Cannot set output pixel format\n");
goto end;
}
/*
* Set the endpoints for the filter graph. The filter_graph will
* be linked to the graph described by filters_descr.
*/
/*
* The buffer source output must be connected to the input pad of
* the first filter described by filters_descr; since the first
* filter input label is not specified, it is set to "in" by
* default.
*/
outputs->name = av_strdup("in");
outputs->filter_ctx = buffersrc_ctx;
outputs->pad_idx = 0;
outputs->next = NULL;
/*
* The buffer sink input must be connected to the output pad of
* the last filter described by filters_descr; since the last
* filter output label is not specified, it is set to "out" by
* default.
*/
inputs->name = av_strdup("out");
inputs->filter_ctx = buffersink_ctx;
inputs->pad_idx = 0;
inputs->next = NULL;
if ((ret = avfilter_graph_parse_ptr(filter_graph, filters_descr,
&inputs, &outputs, NULL)) < 0) {
TSK_DEBUG_ERROR("avfilter_graph_parse_ptr failed, ret:%d.\n", ret);
goto end;
}
if ((ret = avfilter_graph_config(filter_graph, NULL)) < 0) {
TSK_DEBUG_ERROR("avfilter_graph_config failed, ret:%d.\n", ret);
goto end;
}
if (ret == 0){
frame_in = av_frame_alloc();
unsigned char* frame_buffer_in = (unsigned char*)av_malloc(av_image_get_buffer_size(format,in_width, in_height, 1));
av_image_fill_arrays(frame_in->data, frame_in->linesize, frame_buffer_in, format,in_width, in_height, 1);
frame_out = av_frame_alloc();
/* unsigned char* frame_buffer_out = (unsigned char*)av_malloc(av_image_get_buffer_size(format,in_width, in_height, 1));
av_image_fill_arrays(frame_out->data, frame_out->linesize, frame_buffer_out,format,in_width, in_height, 1);
*/
}
TSK_DEBUG_INFO("init graphfilter ok.\n");
isInited = 1;
end:
avfilter_inout_free(&inputs);
avfilter_inout_free(&outputs);
return ret;
}
//
static int tdav_codec_h264_deinit_filters(){
if (!isInited){
return 0;
}
isInited = 0;
if (frame_in != NULL){
av_frame_free(&frame_in);
}
if (frame_out != NULL){
av_frame_free(&frame_out);
}
buffersink_ctx = NULL;
buffersrc_ctx = NULL;
if (filter_graph != NULL){
/* avfilter_graph_free() also frees the buffersrc/buffersink contexts
   owned by the graph; freeing them individually first would double-free */
avfilter_graph_free(&filter_graph);
}
last_wartmark_str[0] = '\0';
return 0;
}
//
static int tdav_codec_h264_add_water_marker(tmedia_codec_t* self, AVFrame* frame_src, const unsigned char* in_data){
//TSK_DEBUG_ERROR("enter tdav_codec_h264_add_water_marker\n");
if (!tmedia_defaults_get_use_water_mark_func_flg()){
return -1;
}
if(!isInited) {
TSK_DEBUG_INFO(" graphfilter not inited.\n");
return -1;
}
int ret;
//AVFrame* frame_out = NULL;
tdav_codec_h264_t* h264 = (tdav_codec_h264_t*)self;
//TSK_DEBUG_INFO("Picture size: %u ** %u", h264->encoder.context->width, h264->encoder.context->height);
if(!self){
TSK_DEBUG_ERROR("self is null\n");
return -1;
}
int in_width=h264->encoder.context->width;
int in_height=h264->encoder.context->height;
int format = PIX_FMT_YUV420P;
if(!in_width || !in_height) {
TSK_DEBUG_ERROR("in_width/in_height is null\n");
return -1;
}
//init frame out
//frame_out = av_frame_alloc();
/*
unsigned char* frame_buffer_out = (unsigned char*)av_malloc(av_image_get_buffer_size(format,in_width, in_height, 1));
av_image_fill_arrays(frame_out->data, frame_out->linesize, frame_buffer_out,format,in_width, in_height, 1);
*/
if (!frame_in || !frame_out) {
TSK_DEBUG_ERROR("Could not allocate frame\n");
return -1;
}
frame_in->width=in_width;
frame_in->height=in_height;
frame_in->format=format;
// TSK_DEBUG_ERROR("frame_in width is %d, height is %d \n",frame_in->width, frame_in->height );
//copy data
//0. copy data to frame_in (note: a plain memcpy assumes linesize == width;
//   padded strides must be copied row by row, cf. the ripple issue above)
memcpy(frame_in->data[0], frame_src->data[0], in_width*in_height);
memcpy(frame_in->data[1], frame_src->data[1], in_width*in_height/4);
memcpy(frame_in->data[2], frame_src->data[2], in_width*in_height/4);
// 1.add frame to filtergraph
// ret = av_buffersrc_add_frame_flags(buffersrc_ctx, frame_in, 0);
ret = av_buffersrc_add_frame_flags(buffersrc_ctx, frame_in, AV_BUFFERSRC_FLAG_KEEP_REF);
if(ret < 0){
TSK_DEBUG_ERROR("Cannot add frame to graph \n");
TSK_DEBUG_ERROR("Cannot add frame to graph,ret is %d \n",ret);
goto end;
}
// 2.pull filtered pictures from the filtergraph
ret = av_buffersink_get_frame(buffersink_ctx, frame_out);
if(ret < 0){
TSK_DEBUG_ERROR("Cannot get frame from graph \n");
TSK_DEBUG_ERROR("Cannot get frame from graph,ret is %d \n",ret);
goto end;
}
//3. copy data to frame_src
//ret = av_frame_copy(frame_src, frame_out);
//memcpy(frame_src->data[0], frame_out->data[0], in_width*in_height);
//memcpy(frame_src->data[1], frame_out->data[1], in_width*in_height*1/4);
//memcpy(frame_src->data[2], frame_out->data[2], in_width*in_height*1/4);
av_image_copy(frame_src->data, frame_src->linesize, frame_out->data, frame_out->linesize, PIX_FMT_YUV420P, in_width, in_height);
end:
//av_frame_free(&frame_out);
av_frame_unref(frame_out);
return ret;
}
//Add the watermark to the bitmap's YUV data before encoding
#if ADD_WATER_MARKER
//tdav_codec_h264_init_filters(self);
init_filters(self);
tdav_codec_h264_add_water_marker(self, h264->encoder.picture, (const unsigned char*)in_data);
//tdav_codec_h264_add_water_marker2((const unsigned char*)in_data, in_size);
#endif
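One caveat with the filters_descr built above: pasting user text into the description with a plain %s breaks as soon as the text contains filter-syntax characters, so a timestamp such as 12:30:05 would be parsed by drawtext as extra options. A small escaping helper (an illustrative sketch, not from the original code; the escape set is assumed from FFmpeg's filter grammar, where ':' separates options, '\' escapes, and '\'' quotes):

```c
#include <stddef.h>

/* Escape characters the FFmpeg filter parser treats specially so that
 * user text such as "12:30:05" survives inside a drawtext description.
 * Returns the number of characters written (excluding the NUL). */
static size_t escape_filter_text(char *dst, size_t dst_size, const char *src)
{
    size_t n = 0;
    for (; *src && n + 2 < dst_size; src++) {
        if (*src == ':' || *src == '\\' || *src == '\'')
            dst[n++] = '\\';             /* prefix special chars with '\' */
        dst[n++] = *src;
    }
    dst[n] = '\0';
    return n;
}
```

The escaped string is what should be handed to the text=%s slot in the snprintf calls above.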
Tuning the FFmpeg encoding parameters:
After working on video for a while, the first problem encountered was image corruption on the decoder side caused by packet loss. By dumping the encoded bitmaps I confirmed the frames really did decode corrupted, so the plan was twofold: tune the encoder to reduce corruption caused by packet loss, and adjust the packet-drop policy to work around decoding corrupted frames.
1. x264 encoding parameter tuning:
The most visible difference between FF_PROFILE_H264_BASELINE and FF_PROFILE_H264_MAIN is B-frames: baseline (profile_idc baseline) has none, while main does. This matters at decode time because a B-frame depends not only on earlier frames but also on frames that arrive later, so B-frame encoding is generally not recommended for real-time video applications.
Quality and bitrate control:
At first I also used bit_rate for control:
encoder.context->bit_rate = (self->encoder.max_bw_kpbs * 1024);// bps
But bit_rate is an average rate and never gave the desired result (in either encoded frame size or quality). Articles on x264 tuning for mobile devices suggest controlling quality and bitrate with CRF instead: x264 defaults to CRF rate control with a default value of 23, balancing encoding speed, picture quality, and bitrate. One comparison of presets and CRF values (encoded size and frame rate):
ultrafast baseline crf 28: encoded 467 frames, 58.94 fps, 515.58 kb/s, 2006474
superfast baseline crf 26: encoded 467 frames, 41.73 fps, 460.02 kb/s, 1790244
superfast baseline crf 28: encoded 467 frames, 43.64 fps, 366.28 kb/s, 1425436
Setting crf:
if((ret = av_opt_set_double(self->encoder.context->priv_data, "crf", (double)30, 0))){
TSK_DEBUG_ERROR("Failed to set x264 crf to 30");
return;
}
//ultrafast veryfast superfast
if((ret = av_opt_set(self->encoder.context->priv_data, "preset", "superfast", 0))){
TSK_DEBUG_ERROR("Failed to set x264 preset to superfast");
}
Controlling the size of encoded NAL units:
encoder.context->rtp_payload_size = H264_RTP_PAYLOAD_SIZE; // H264_RTP_PAYLOAD_SIZE is 1300
if((ret = av_opt_set_int(self->encoder.context->priv_data, "slice-max-size", H264_RTP_PAYLOAD_SIZE, 0))){
TSK_DEBUG_ERROR("Failed to set x264 slice-max-size to %d", H264_RTP_PAYLOAD_SIZE);
}
Controlling the number of frames between two I-frames (GOP size):
encoder.context->gop_size = TMEDIA_CODEC_VIDEO(self)->out.fps * TDAV_H264_GOP_SIZE_IN_SECONDS;
2. Packet-drop policy:
With baseline-profile H.264, a P-frame can be decoded from earlier frames alone, so the loss-handling policy is fairly simple: if a P-frame is lost, drop all the P-frames after it until an I-frame arrives; if an I-frame is lost, drop it and the P-frames after it until the next I-frame arrives.
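The policy above is essentially a two-state machine: after any loss, drop frames until the next I-frame resynchronizes the decoder. A self-contained sketch (frame types and function names are illustrative, not from the original code):

```c
#include <stdbool.h>

typedef enum { FRAME_I, FRAME_P } frame_type_t;

/* Decide whether a received frame should be fed to the decoder.
 * 'lost' reports a gap (missing sequence numbers) before this frame.
 * After any loss we stay in a waiting state and drop P-frames until an
 * I-frame arrives, since baseline-profile P-frames reference only
 * earlier frames. */
static bool should_decode(frame_type_t type, bool lost, bool *waiting_for_i)
{
    if (lost)
        *waiting_for_i = true;   /* reference chain is broken */
    if (type == FRAME_I) {
        *waiting_for_i = false;  /* an I-frame resynchronizes the decoder */
        return true;
    }
    return !*waiting_for_i;      /* P-frame usable only if chain intact */
}
```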
This article is original content from Guaniu Notes. Feel free to repost without contacting me, but please credit Guaniu Notes, it3q.com