Contents

  • How to implement playback progress control
    • av_seek_frame
    • Key points of the seek operation
  • Seeking by the video stream
  • Seeking by the audio stream
  • Code implementation

The previous article basically implemented audio/video playback synchronization, with simple key controls for pause, resume and quit. This article implements playback progress control: mainly fast forward, rewind and restarting playback. I am not going to build the GUI with SDL; the controls are driven by the keyboard, and the GUI part will be left to Qt later, since SDL's GUI is not the focus here.

How to implement playback progress control

To control playback progress, the player needs random access into the media file, which means using av_seek_frame or avformat_seek_file.
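
As a quick illustration of the second function, here is a minimal sketch of an absolute seek with avformat_seek_file; fmt_ctx stands for an already opened AVFormatContext and the 10-second target is just an example value:

// Jump to an absolute position of 10 seconds.
// With stream_index = -1 the timestamps are in AV_TIME_BASE units.
int64_t target = 10 * (int64_t)AV_TIME_BASE;
int ret = avformat_seek_file(fmt_ctx, -1,
                             INT64_MIN,   // min_ts: accept any earlier position
                             target,      // ts:     preferred position
                             target,      // max_ts: do not land past the target
                             0);
if (ret < 0)
    fprintf(stderr, "avformat_seek_file failed: %d\n", ret);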

av_seek_frame

  • Prototype: int av_seek_frame(AVFormatContext *s, int stream_index, int64_t timestamp, int flags)
  • AVFormatContext *s: context pointer of the opened media file
  • int stream_index: index of the stream (video or audio) used as the reference for the seek
  • int64_t timestamp: target timestamp; if a stream index is given, the unit is that stream's AVStream.time_base, and if no index is given (stream_index = -1), the unit is AV_TIME_BASE
  • int flags: flags controlling the seek (see the example below)
    #define AVSEEK_FLAG_BACKWARD 1 ///< seek backward
    #define AVSEEK_FLAG_BYTE 2 ///< seeking based on position in bytes
    #define AVSEEK_FLAG_ANY 4 ///< seek to any frame, even non-keyframes
    #define AVSEEK_FLAG_FRAME 8 ///< seeking based on frame number
    When AVSEEK_FLAG_BYTE is set, timestamp is interpreted as a byte offset instead of a time value.
    When AVSEEK_FLAG_FRAME is set, timestamp is interpreted as a frame number. By default av_seek_frame lands on the keyframe nearest to the target (an I-frame for a video stream); AVSEEK_FLAG_ANY allows it to land on non-keyframes as well.
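
For example, a plain call that jumps to the 30-second mark without specifying a stream could look like this (a minimal sketch; fmt_ctx is assumed to be an opened AVFormatContext):

// stream_index = -1, so the timestamp is in AV_TIME_BASE units;
// AVSEEK_FLAG_BACKWARD requests the nearest keyframe at or before the target.
int ret = av_seek_frame(fmt_ctx, -1, 30 * (int64_t)AV_TIME_BASE, AVSEEK_FLAG_BACKWARD);
if (ret < 0)
    fprintf(stderr, "av_seek_frame failed: %d\n", ret);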

Key points of the seek operation

  • Seek reference point: the seek is relative to the current playback position, so the player must keep track of the DTS (or PTS) of the audio and video streams in real time while playing.
  • Seek time unit: the time unit of the seek is the time_base of the stream selected by the stream index.
  • Seek step size: how far forward or backward to jump; combined with the current position it gives the target timestamp of the seek.
  • Seek carrier stream: whether the seek is driven by the audio or the video stream; seeking on one of them is enough, there is no need to seek the audio and video streams separately.
  • Flushing after the seek: before resuming normal playback, the decoders' internal buffers must be flushed and any audio/video packets cached in the player's own buffers must be cleared; the sketch below puts all of these steps together.
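
Put together, the whole sequence looks roughly like the sketch below. It assumes cur_dts is the DTS the player keeps updated during playback, and seek_stream, dec_ctx and my_packet_queue are placeholders for the chosen reference stream, its decoder context and the player's own packet queue (these names are illustrative, not taken from the code later in this post):

// 1) convert the 3-second step into the stream's time_base units
int64_t offset = (int64_t)(3 / av_q2d(seek_stream->time_base));
// 2) compute the target from the current position (subtract instead to go backwards)
int64_t target = cur_dts + offset;
// 3) seek on the chosen stream; add AVSEEK_FLAG_BACKWARD when rewinding
if (av_seek_frame(fmt_ctx, seek_stream->index, target, 0) >= 0) {
    // 4) flush frames buffered inside the decoder ...
    avcodec_flush_buffers(dec_ctx);
    // 5) ... and drop packets cached in the player's own queue
    packet_queue_clear(my_packet_queue);
}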

Seeking by the video stream

First, define the control structure for the video stream:

typedef struct __VideoCtrlStruct
{
    AVFormatContext   *pFormatCtx;
    AVStream          *pStream;
    AVCodec           *pCodec;
    AVCodecContext    *pCodecCtx;
    SwsContext        *pConvertCtx;
    AVFrame           *pVideoFrame, *pFrameYUV;
    unsigned char     *pVideoOutBuffer;
    int                VideoIndex;
    int                VideoCnt;
    int                RefreshTime;
    int                screen_w, screen_h;
    SDL_Window        *screen;
    SDL_Renderer      *sdlRenderer;
    SDL_Texture       *sdlTexture;
    SDL_Rect           sdlRect;
    SDL_Thread        *video_tid;
    sem_t              frame_put;
    sem_t              video_refresh;
    PacketArrayStruct  Video;
} VideoCtrlStruct;

Following the key points above, the seek step size is something we can choose ourselves; here it is set to 3 seconds for now.

  • CurVideoDts: records the current DTS of the video stream.
  • Seek time unit: VideoCtrl.pStream->time_base is the time unit of the video stream.
  • Forward: int64_t DstVideoDts = CurVideoDts + (int64_t) ( 3 / av_q2d(VideoCtrl.pStream->time_base));
  • Backward: int64_t DstVideoDts = CurVideoDts - (int64_t) ( 3 / av_q2d(VideoCtrl.pStream->time_base));
  • pStream->time_base is a rational number, and the DTS/PTS values of the stream are expressed in units of this time_base, so the 3-second step must be converted into time_base units first.
  • The seek itself: ret = av_seek_frame(pFormatCtx, VideoIndex, DstVideoDts, AVSEEK_FLAG_FRAME); when seeking backwards, AVSEEK_FLAG_BACKWARD must be added to the flags.
  • Flushing the decoder: avcodec_flush_buffers(VideoCtrl.pCodecCtx);
  • Since time_base.num is almost always 1, the floating-point multiplication and division can be avoided by multiplying directly by time_base.den, as in the sketch below and in the full code at the end of this post.
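
Condensed into code and reusing the names from the structure above, a forward seek on the video stream could be sketched as follows (CurVideoDts is assumed to be kept up to date by the video decoding thread, as it is in the full listing at the end of this post):

AVRational tb = VideoCtrl.pStream->time_base;

// general form: convert 3 seconds into time_base units
int64_t DstVideoDts = CurVideoDts + (int64_t)(3 / av_q2d(tb));
// shortcut when tb.num == 1: 3 seconds is simply 3 * tb.den ticks
// int64_t DstVideoDts = CurVideoDts + 3 * (int64_t)tb.den;

// add AVSEEK_FLAG_BACKWARD to rewind; the full listing passes AVSEEK_FLAG_FRAME here,
// though with a time_base timestamp a flags value of 0 is the more conventional choice
int ret = av_seek_frame(VideoCtrl.pFormatCtx, VideoCtrl.VideoIndex,
                        DstVideoDts, AVSEEK_FLAG_FRAME);
if (ret >= 0)
    avcodec_flush_buffers(VideoCtrl.pCodecCtx);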

Seeking by the audio stream

The control structure for the audio stream:

typedef struct __AudioCtrlStruct
{
    AVFormatContext   *pFormatCtx;
    AVStream          *pStream;
    AVCodec           *pCodec;
    AVCodecContext    *pCodecCtx;
    SwrContext        *pConvertCtx;
    Uint8             *audio_chunk;
    Sint32             audio_len;
    Uint8             *audio_pos;
    int                AudioIndex;
    int                AudioCnt;
    uint64_t           AudioOutChannelLayout;
    int                out_nb_samples;     // nb_samples: AAC-1024 MP3-1152
    AVSampleFormat     out_sample_fmt;
    int                out_sample_rate;
    int                out_channels;
    int                out_buffer_size;
    unsigned char     *pAudioOutBuffer;
    sem_t              frame_put;
    sem_t              frame_get;
    PacketArrayStruct  Audio;
} AudioCtrlStruct;

Again, the seek step size is up to us; as before it is set to 3 seconds.

  • CurAudioDts: records the current DTS of the audio stream.
  • Seek time unit: AudioCtrl.pStream->time_base is the time unit of the audio stream.
  • Forward: int64_t DstAudioDts = CurAudioDts + (int64_t) ( 3 / av_q2d(AudioCtrl.pStream->time_base));
  • Backward: int64_t DstAudioDts = CurAudioDts - (int64_t) ( 3 / av_q2d(AudioCtrl.pStream->time_base));
  • As with the video stream, time_base is a rational number and the stream's DTS/PTS values are expressed in it, so the seek step must be converted into time_base units.
  • The seek itself: ret = av_seek_frame(pFormatCtx, AudioIndex, DstAudioDts, AVSEEK_FLAG_FRAME); when seeking backwards, AVSEEK_FLAG_BACKWARD must be added to the flags.
  • Flushing the decoder: avcodec_flush_buffers(AudioCtrl.pCodecCtx); a more general way of computing the time offset is sketched below.
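
As a more general alternative to multiplying by time_base.den (which relies on time_base.num being 1), the offset can be rescaled explicitly. This sketch reuses the AudioCtrl names from above and av_rescale(a, b, c), which computes a * b / c with 64-bit intermediate precision:

AVRational tb     = AudioCtrl.pStream->time_base;
int64_t    offset = av_rescale(3, tb.den, tb.num);   // 3 seconds in time_base units

int64_t DstAudioDts = CurAudioDts - offset;          // backward seek
if (DstAudioDts < 0) DstAudioDts = 0;

int ret = av_seek_frame(AudioCtrl.pFormatCtx, AudioCtrl.AudioIndex,
                        DstAudioDts, AVSEEK_FLAG_BACKWARD | AVSEEK_FLAG_FRAME);
if (ret >= 0)
    avcodec_flush_buffers(AudioCtrl.pCodecCtx);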

Code implementation

The code below uses the left and right arrow keys for backward and forward seeks, restarts playback from the beginning when the end of the file is reached, and performs the seek on the audio stream.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define __STDC_CONSTANT_MACROS

#ifdef __cplusplus
extern "C"
{
#endif
#include <libavutil/time.h>
#include <libavutil/imgutils.h>
#include <libavutil/mathematics.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavdevice/avdevice.h>
#include <libswscale/swscale.h>
#include <libswresample/swresample.h>
#include <SDL2/SDL.h>
#include <errno.h>
#include <unistd.h>
#include <assert.h>
#include <pthread.h>
#include <semaphore.h>
#ifdef __cplusplus
};
#endif

#define MAX_AUDIO_FRAME_SIZE 192000   // 1 second of 48khz 32bit audio
#define PACKET_ARRAY_SIZE    (60)
typedef struct __PacketStruct
{
    AVPacket Packet;
    int64_t  dts;
    int64_t  pts;
    int      state;
} PacketStruct;

typedef struct
{
    unsigned int rIndex;
    unsigned int wIndex;
    PacketStruct PacketArray[PACKET_ARRAY_SIZE];
} PacketArrayStruct;

typedef struct __AudioCtrlStruct
{
    AVFormatContext   *pFormatCtx;
    AVStream          *pStream;
    AVCodec           *pCodec;
    AVCodecContext    *pCodecCtx;
    SwrContext        *pConvertCtx;
    Uint8             *audio_chunk;
    Sint32             audio_len;
    Uint8             *audio_pos;
    int                AudioIndex;
    int                AudioCnt;
    uint64_t           AudioOutChannelLayout;
    int                out_nb_samples;     // nb_samples: AAC-1024 MP3-1152
    AVSampleFormat     out_sample_fmt;
    int                out_sample_rate;
    int                out_channels;
    int                out_buffer_size;
    unsigned char     *pAudioOutBuffer;
    sem_t              frame_put;
    sem_t              frame_get;
    PacketArrayStruct  Audio;
} AudioCtrlStruct;

typedef struct __VideoCtrlStruct
{
    AVFormatContext   *pFormatCtx;
    AVStream          *pStream;
    AVCodec           *pCodec;
    AVCodecContext    *pCodecCtx;
    SwsContext        *pConvertCtx;
    AVFrame           *pVideoFrame, *pFrameYUV;
    unsigned char     *pVideoOutBuffer;
    int                VideoIndex;
    int                VideoCnt;
    int                RefreshTime;
    int                screen_w, screen_h;
    SDL_Window        *screen;
    SDL_Renderer      *sdlRenderer;
    SDL_Texture       *sdlTexture;
    SDL_Rect           sdlRect;
    SDL_Thread        *video_tid;
    sem_t              frame_put;
    sem_t              video_refresh;
    PacketArrayStruct  Video;
} VideoCtrlStruct;

// Refresh Event
#define SFM_REFRESH_VIDEO_EVENT     (SDL_USEREVENT + 1)
#define SFM_REFRESH_AUDIO_EVENT     (SDL_USEREVENT + 2)
#define SFM_BREAK_EVENT             (SDL_USEREVENT + 3)

int thread_exit = 0;
int thread_pause = 0;
int audio_pause = 0;        // is audio playback paused? 1 = paused, 0 = playing
int video_pause = 0;        // is video playback paused? 1 = paused, 0 = playing
SDL_Keycode CurKeyCode;     // which seek key was pressed: right arrow = forward, left arrow = backward
int CurKeyProcess;          // has the key-triggered seek been handled? 0 = not yet, 1 = handled

int64_t CurVideoDts;        // DTS of the video packet currently being played
int64_t CurVideoPts;        // PTS of the video packet currently being played
int64_t CurAudioDts;        // DTS of the audio packet currently being played
int64_t CurAudioPts;        // PTS of the audio packet currently being played
int64_t DstAudioDts;        // target audio DTS computed for a seek
int64_t DstAudioPts;        // target audio PTS computed for a seek
int64_t DstVideoDts;        // target video DTS computed for a seek
int64_t DstVideoPts;        // target video PTS computed for a seek

VideoCtrlStruct VideoCtrl;
AudioCtrlStruct AudioCtrl;

// video time_base.num:1, time_base.den:16, avg_frame_rate.num:8, avg_frame_rate.den:1
// audio time_base.num:1, time_base.den:48000, avg_frame_rate.num:0, avg_frame_rate.den:0
int IsPacketArrayFull(PacketArrayStruct* p)
{
    int i = p->wIndex % PACKET_ARRAY_SIZE;
    if(p->PacketArray[i].state != 0) return 1;
    return 0;
}

int IsPacketArrayEmpty(PacketArrayStruct* p)
{
    int i = p->rIndex % PACKET_ARRAY_SIZE;
    if(p->PacketArray[i].state == 0) return 1;
    return 0;
}

int PacketArrayClear(PacketArrayStruct* p)
{
    int i = 0;
    for(i = 0; i < PACKET_ARRAY_SIZE; i++)
    {
        if(p->PacketArray[i].state != 0)
        {
            av_packet_unref(&p->PacketArray[i].Packet);
            p->PacketArray[i].state = 0;
        }
    }
    p->rIndex = 0;
    p->wIndex = 0;
    return 0;
}

int SDL_event_thread(void *opaque)
{
    SDL_Event event;
    while(1)
    {
        SDL_WaitEvent(&event);
        if(event.type == SDL_KEYDOWN)
        {
            // Pause
            if(event.key.keysym.sym == SDLK_SPACE)
            {
                thread_pause = !thread_pause;
                printf("video got pause event!\n");
            }
            if(event.key.keysym.sym == SDLK_RIGHT)
            {
                thread_pause = !thread_pause;
                CurKeyProcess = 0;
                CurKeyCode = SDLK_RIGHT;
                printf("video got right key event!\n");
            }
            if(event.key.keysym.sym == SDLK_LEFT)
            {
                thread_pause = !thread_pause;
                CurKeyProcess = 0;
                CurKeyCode = SDLK_LEFT;
                printf("video got left key event!\n");
            }
        }
        else if(event.type == SDL_QUIT)
        {
            thread_exit = 1;
            printf("------------------------------>video got SDL_QUIT event!\n");
            break;
        }
        else if(event.type == SFM_BREAK_EVENT)
        {
            break;
        }
    }
    printf("---------> SDL_event_thread end !!!! \n");
    return 0;
}

int video_refresh_thread(void *opaque)
{
    while (1)
    {
        if(thread_exit) break;
        if(thread_pause)
        {
            SDL_Delay(40);
            continue;
        }
        //SDL_Delay(40);
        usleep(VideoCtrl.RefreshTime);
        sem_post(&VideoCtrl.video_refresh);
    }
    printf("---------> video_refresh_thread end !!!! \n");
    return 0;
}

static void *thread_audio(void *arg)
{
    AVCodecContext    *pAudioCodecCtx;
    AVFrame           *pAudioFrame;
    unsigned char     *pAudioOutBuffer;
    AVPacket          *Packet;
    int                i, ret, GotAudioPicture;
    struct SwrContext *AudioConvertCtx;

    AudioCtrlStruct* AudioCtrl = (AudioCtrlStruct*)arg;
    pAudioCodecCtx  = AudioCtrl->pCodecCtx;
    pAudioOutBuffer = AudioCtrl->pAudioOutBuffer;
    AudioConvertCtx = AudioCtrl->pConvertCtx;
    printf("---------> thread_audio start !!!! \n");
    pAudioFrame = av_frame_alloc();

    while(1)
    {
        if(thread_exit) break;
        if(thread_pause)
        {
            usleep(10000);
            audio_pause = 1;
            continue;
        }
        if(IsPacketArrayEmpty(&AudioCtrl->Audio))
        {
            SDL_Delay(1);
            printf("---------> thread_audio empty !!!! \n");
            continue;
        }
        audio_pause = 0;
        i = AudioCtrl->Audio.rIndex;
        Packet = &AudioCtrl->Audio.PacketArray[i].Packet;
        CurAudioDts = AudioCtrl->Audio.PacketArray[i].dts;
        CurAudioPts = AudioCtrl->Audio.PacketArray[i].pts;
        if(Packet->stream_index == AudioCtrl->AudioIndex)
        {
            ret = avcodec_decode_audio4(pAudioCodecCtx, pAudioFrame, &GotAudioPicture, Packet);
            if(ret < 0)
            {
                printf("Error in decoding audio frame.\n");
                return 0;
            }
            if(GotAudioPicture > 0)
            {
                swr_convert(AudioConvertCtx, &pAudioOutBuffer, MAX_AUDIO_FRAME_SIZE,
                            (const uint8_t **)pAudioFrame->data, pAudioFrame->nb_samples);
                printf("Auduo index:%5d\t pts:%ld\t packet size:%d, pFrame->nb_samples:%d\n",
                       AudioCtrl->AudioCnt, Packet->pts, Packet->size, pAudioFrame->nb_samples);
                AudioCtrl->AudioCnt++;
            }
            while(AudioCtrl->audio_len > 0)   // Wait until finish
                SDL_Delay(1);
            // Set audio buffer (PCM data)
            AudioCtrl->audio_chunk = (Uint8 *) pAudioOutBuffer;
            AudioCtrl->audio_pos   = AudioCtrl->audio_chunk;
            AudioCtrl->audio_len   = AudioCtrl->out_buffer_size;
            av_packet_unref(Packet);
            AudioCtrl->Audio.PacketArray[i].state = 0;
            i++;
            if(i >= PACKET_ARRAY_SIZE) i = 0;
            AudioCtrl->Audio.rIndex = i;
        }
    }
    printf("---------> thread_audio end !!!! \n");
    return 0;
}

static void *thread_video(void *arg)
{
    //AVFormatContext  *pFormatCtx;
    AVCodecContext    *pVideoCodecCtx;
    //AVCodec          *pVideoCodec;
    AVFrame           *pVideoFrame, *pFrameYUV;
    //unsigned char    *pVideoOutBuffer;
    AVPacket          *Packet;
    int                i, ret, GotPicture;
    struct SwsContext *VideoConvertCtx;

    VideoCtrlStruct* VideoCtrl = (VideoCtrlStruct*)arg;
    pVideoCodecCtx  = VideoCtrl->pCodecCtx;
    //pVideoOutBuffer = VideoCtrl->pVideoOutBuffer;
    VideoConvertCtx = VideoCtrl->pConvertCtx;
    pVideoFrame = VideoCtrl->pVideoFrame;
    pFrameYUV   = VideoCtrl->pFrameYUV;
    printf("---------> thread_video start !!!! \n");

    while(1)
    {
        if(thread_exit) break;
        if(thread_pause)
        {
            usleep(10000);
            video_pause = 1;
            continue;
        }
        if(IsPacketArrayEmpty(&VideoCtrl->Video))
        {
            SDL_Delay(1);
            continue;
        }
        video_pause = 0;
        i = VideoCtrl->Video.rIndex;
        Packet = &VideoCtrl->Video.PacketArray[i].Packet;
        CurVideoDts = VideoCtrl->Video.PacketArray[i].dts;
        CurVideoPts = VideoCtrl->Video.PacketArray[i].pts;
        if(Packet->stream_index == VideoCtrl->VideoIndex)
        {
            ret = avcodec_decode_video2(pVideoCodecCtx, pVideoFrame, &GotPicture, Packet);
            if(ret < 0)
            {
                printf("Video Decode Error.\n");
                return 0;
            }
            printf("Video index:%5d\t dts:%ld\t, pts:%ld\t packet size:%d, GotVideoPicture:%d\n",
                   VideoCtrl->VideoCnt, Packet->dts, Packet->pts, Packet->size, GotPicture);
//          printf("Video index:%5d\t pFrame->pkt_dts:%ld, pFrame->pkt_pts:%ld, pFrame->pts:%ld, pFrame->pict_type:%d, "
//                 "pFrame->best_effort_timestamp:%ld, pFrame->pkt_pos:%ld, pVideoFrame->pkt_duration:%ld\n",
//                 VideoCtrl->VideoCnt, pVideoFrame->pkt_dts, pVideoFrame->pkt_pts, pVideoFrame->pts,
//                 pVideoFrame->pict_type, pVideoFrame->best_effort_timestamp,
//                 pVideoFrame->pkt_pos, pVideoFrame->pkt_duration);
            VideoCtrl->VideoCnt++;
            if(GotPicture)
            {
                sws_scale(VideoConvertCtx, (const unsigned char* const*)pVideoFrame->data,
                          pVideoFrame->linesize, 0, pVideoCodecCtx->height,
                          pFrameYUV->data, pFrameYUV->linesize);
                sem_wait(&VideoCtrl->video_refresh);
                //SDL---------------------------
                SDL_UpdateTexture(VideoCtrl->sdlTexture, NULL, pFrameYUV->data[0], pFrameYUV->linesize[0]);
                SDL_RenderClear(VideoCtrl->sdlRenderer);
                //SDL_RenderCopy( sdlRenderer, sdlTexture, &sdlRect, &sdlRect );
                SDL_RenderCopy(VideoCtrl->sdlRenderer, VideoCtrl->sdlTexture, NULL, NULL);
                SDL_RenderPresent(VideoCtrl->sdlRenderer);
                //SDL End-----------------------
            }
            av_packet_unref(Packet);
            VideoCtrl->Video.PacketArray[i].state = 0;
            i++;
            if(i >= PACKET_ARRAY_SIZE) i = 0;
            VideoCtrl->Video.rIndex = i;
        }
    }
    printf("---------> thread_video end !!!! \n");
    return 0;
}

/* The audio function callback takes the following parameters:
 * stream: A pointer to the audio buffer to be filled
 * len:    The length (in bytes) of the audio buffer
 */
void fill_audio(void *udata, Uint8 *stream, int len)
{
    AudioCtrlStruct* AudioCtrl = (AudioCtrlStruct*)udata;
    //SDL 2.0
    SDL_memset(stream, 0, len);
    if(AudioCtrl->audio_len == 0) return;
    len = (len > AudioCtrl->audio_len ? AudioCtrl->audio_len : len);   /* Mix as much data as possible */
    SDL_MixAudio(stream, AudioCtrl->audio_pos, len, SDL_MIX_MAXVOLUME);
    AudioCtrl->audio_pos += len;
    AudioCtrl->audio_len -= len;
}

int main(int argc, char* argv[])
{
    AVFormatContext  *pFormatCtx;
    AVCodecContext   *pVideoCodecCtx, *pAudioCodecCtx;
    AVCodec          *pVideoCodec, *pAudioCodec;
    AVPacket         *Packet;
    unsigned char    *pVideoOutBuffer, *pAudioOutBuffer;
    int               ret;
    unsigned int      i;
    pthread_t         audio_tid, video_tid;
    uint64_t          AudioOutChannelLayout;
    int               out_nb_samples;      // nb_samples: AAC-1024 MP3-1152
    AVSampleFormat    out_sample_fmt;
    int               out_sample_rate;
    int               out_channels;
    int               out_buffer_size;     // Out Buffer Size
    //------------SDL----------------
    struct SwsContext *VideoConvertCtx;
    struct SwrContext *AudioConvertCtx;
    int VideoIndex, VideoCnt;
    int AudioIndex, AudioCnt;

    memset(&AudioCtrl, 0, sizeof(AudioCtrlStruct));
    memset(&VideoCtrl, 0, sizeof(VideoCtrlStruct));
    char *filepath = argv[1];
    sem_init(&VideoCtrl.video_refresh, 0, 0);
    sem_init(&VideoCtrl.frame_put, 0, 0);
    sem_init(&AudioCtrl.frame_put, 0, 0);
    thread_exit = 0;
    thread_pause = 0;
    CurKeyProcess = 1;
    CurKeyCode = 0;

    av_register_all();
    avformat_network_init();
    pFormatCtx = avformat_alloc_context();
    if(avformat_open_input(&pFormatCtx, filepath, NULL, NULL) != 0)
    {
        printf("Couldn't open input stream.\n");
        return -1;
    }
    if(avformat_find_stream_info(pFormatCtx, NULL) < 0)
    {
        printf("Couldn't find stream information.\n");
        return -1;
    }

    VideoIndex = -1;
    AudioIndex = -1;
    for(i = 0; i < pFormatCtx->nb_streams; i++)
    {
        if(pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
        {
            VideoIndex = i;
            // print information about the video stream
            printf("video time_base.num:%d, time_base.den:%d, avg_frame_rate.num:%d, avg_frame_rate.den:%d\n",
                   pFormatCtx->streams[VideoIndex]->time_base.num,
                   pFormatCtx->streams[VideoIndex]->time_base.den,
                   pFormatCtx->streams[VideoIndex]->avg_frame_rate.num,
                   pFormatCtx->streams[VideoIndex]->avg_frame_rate.den);
        }
        if(pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO)
        {
            AudioIndex = i;
            // print information about the audio stream
            //pFormatCtx->streams[AudioIndex]->time_base.den <<= 1;
            printf("audio time_base.num:%d, time_base.den:%d, avg_frame_rate.num:%d, avg_frame_rate.den:%d\n",
                   pFormatCtx->streams[AudioIndex]->time_base.num,
                   pFormatCtx->streams[AudioIndex]->time_base.den,
                   pFormatCtx->streams[AudioIndex]->avg_frame_rate.num,
                   pFormatCtx->streams[AudioIndex]->avg_frame_rate.den);
        }
    }

    if(VideoIndex != -1)
    {   // prepare the video decoding context
        pVideoCodecCtx = pFormatCtx->streams[VideoIndex]->codec;
        pVideoCodec = avcodec_find_decoder(pVideoCodecCtx->codec_id);
        if(pVideoCodec == NULL)
        {
            printf("Video Codec not found.\n");
            return -1;
        }
        if(avcodec_open2(pVideoCodecCtx, pVideoCodec, NULL) < 0)
        {
            printf("Could not open video codec.\n");
            return -1;
        }
        // prepare video
        VideoCtrl.pVideoFrame = av_frame_alloc();
        VideoCtrl.pFrameYUV = av_frame_alloc();
        ret = av_image_get_buffer_size(AV_PIX_FMT_YUV420P, pVideoCodecCtx->width, pVideoCodecCtx->height, 1);
        pVideoOutBuffer = (unsigned char *)av_malloc(ret);
        av_image_fill_arrays(VideoCtrl.pFrameYUV->data, VideoCtrl.pFrameYUV->linesize, pVideoOutBuffer,
                             AV_PIX_FMT_YUV420P, pVideoCodecCtx->width, pVideoCodecCtx->height, 1);
        VideoConvertCtx = sws_getContext(pVideoCodecCtx->width, pVideoCodecCtx->height, pVideoCodecCtx->pix_fmt,
                                         pVideoCodecCtx->width, pVideoCodecCtx->height,
                                         AV_PIX_FMT_YUV420P, SWS_BICUBIC, NULL, NULL, NULL);
        VideoCtrl.pFormatCtx = pFormatCtx;
        VideoCtrl.pStream = pFormatCtx->streams[VideoIndex];
        VideoCtrl.pCodec = pVideoCodec;
        VideoCtrl.pCodecCtx = pFormatCtx->streams[VideoIndex]->codec;
        VideoCtrl.pConvertCtx = VideoConvertCtx;
        VideoCtrl.pVideoOutBuffer = pVideoOutBuffer;
        VideoCtrl.VideoIndex = VideoIndex;
        if(pFormatCtx->streams[VideoIndex]->avg_frame_rate.num == 0 ||
           pFormatCtx->streams[VideoIndex]->avg_frame_rate.den == 0)
        {
            VideoCtrl.RefreshTime = 40000;
        }
        else
        {
            VideoCtrl.RefreshTime = 1000000 * pFormatCtx->streams[VideoIndex]->avg_frame_rate.den;
            VideoCtrl.RefreshTime /= pFormatCtx->streams[VideoIndex]->avg_frame_rate.num;
        }
        printf("VideoCtrl.RefreshTime:%d\n", VideoCtrl.RefreshTime);
    }
    else
    {
        printf("Didn't find a video stream.\n");
    }

    if(AudioIndex != -1)
    {   // prepare the audio decoding context
        pAudioCodecCtx = pFormatCtx->streams[AudioIndex]->codec;
        pAudioCodec = avcodec_find_decoder(pAudioCodecCtx->codec_id);
        if(pAudioCodec == NULL)
        {
            printf("Audio Codec not found.\n");
            return -1;
        }
        if(avcodec_open2(pAudioCodecCtx, pAudioCodec, NULL) < 0)
        {
            printf("Could not open audio codec.\n");
            return -1;
        }
        // prepare Out Audio Param
        AudioOutChannelLayout = AV_CH_LAYOUT_STEREO;
        out_nb_samples        = pAudioCodecCtx->frame_size * 2;   // nb_samples: AAC-1024 MP3-1152
        out_sample_fmt        = AV_SAMPLE_FMT_S16;
        out_sample_rate       = pAudioCodecCtx->sample_rate * 2;  // must be derived from pAudioCodecCtx->sample_rate; any other value under/over-samples the audio and produces noise
        out_channels          = av_get_channel_layout_nb_channels(AudioOutChannelLayout);
        out_buffer_size       = av_samples_get_buffer_size(NULL, out_channels, out_nb_samples, out_sample_fmt, 1);
        //mp3:out_nb_samples:1152, out_channels:2, out_buffer_size:4608, pCodecCtx->channels:2
        //aac:out_nb_samples:1024, out_channels:2, out_buffer_size:4096, pCodecCtx->channels:2
        printf("out_nb_samples:%d, out_channels:%d, out_buffer_size:%d, pCodecCtx->channels:%d\n",
               out_nb_samples, out_channels, out_buffer_size, pAudioCodecCtx->channels);
        pAudioOutBuffer = (uint8_t *)av_malloc(MAX_AUDIO_FRAME_SIZE * 2);
        //FIX:Some Codec's Context Information is missing
        int64_t in_channel_layout = av_get_default_channel_layout(pAudioCodecCtx->channels);
        //Swr
        AudioConvertCtx = swr_alloc();
        AudioConvertCtx = swr_alloc_set_opts(AudioConvertCtx, AudioOutChannelLayout,
                                             out_sample_fmt, out_sample_rate,
                                             in_channel_layout, pAudioCodecCtx->sample_fmt,
                                             pAudioCodecCtx->sample_rate, 0, NULL);
        swr_init(AudioConvertCtx);
        AudioCtrl.pFormatCtx = pFormatCtx;
        AudioCtrl.pStream = pFormatCtx->streams[AudioIndex];
        AudioCtrl.pCodec = pAudioCodec;
        AudioCtrl.pCodecCtx = pFormatCtx->streams[AudioIndex]->codec;
        AudioCtrl.pConvertCtx = AudioConvertCtx;
        AudioCtrl.AudioOutChannelLayout = AudioOutChannelLayout;
        AudioCtrl.out_nb_samples = out_nb_samples;
        AudioCtrl.out_sample_fmt = out_sample_fmt;
        AudioCtrl.out_sample_rate = out_sample_rate;
        AudioCtrl.out_channels = out_channels;
        AudioCtrl.out_buffer_size = out_buffer_size;
        AudioCtrl.pAudioOutBuffer = pAudioOutBuffer;
        AudioCtrl.AudioIndex = AudioIndex;
    }
    else
    {
        printf("Didn't find a audio stream.\n");
    }

    //Output Info-----------------------------
    printf("---------------- File Information ---------------\n");
    av_dump_format(pFormatCtx, 0, filepath, 0);
    printf("-------------- File Information end -------------\n");

    if(SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO | SDL_INIT_TIMER))
    {
        printf("Could not initialize SDL - %s\n", SDL_GetError());
        return -1;
    }

    if(VideoIndex != -1)
    {
        //SDL 2.0 Support for multiple windows
        //SDL_VideoSpec
        VideoCtrl.screen_w = pVideoCodecCtx->width;
        VideoCtrl.screen_h = pVideoCodecCtx->height;
        VideoCtrl.screen = SDL_CreateWindow("Simplest ffmpeg player's Window", SDL_WINDOWPOS_UNDEFINED,
                                            SDL_WINDOWPOS_UNDEFINED, VideoCtrl.screen_w, VideoCtrl.screen_h, SDL_WINDOW_OPENGL);
        if(!VideoCtrl.screen)
        {
            printf("SDL: could not create window - exiting:%s\n", SDL_GetError());
            return -1;
        }
        VideoCtrl.sdlRenderer = SDL_CreateRenderer(VideoCtrl.screen, -1, 0);
        //IYUV: Y + U + V  (3 planes)
        //YV12: Y + V + U  (3 planes)
        VideoCtrl.sdlTexture = SDL_CreateTexture(VideoCtrl.sdlRenderer,
                                                 SDL_PIXELFORMAT_IYUV, SDL_TEXTUREACCESS_STREAMING,
                                                 pVideoCodecCtx->width, pVideoCodecCtx->height);
        VideoCtrl.sdlRect.x = 0;
        VideoCtrl.sdlRect.y = 0;
        VideoCtrl.sdlRect.w = VideoCtrl.screen_w;
        VideoCtrl.sdlRect.h = VideoCtrl.screen_h;
        VideoCtrl.video_tid = SDL_CreateThread(video_refresh_thread, NULL, NULL);
        ret = pthread_create(&video_tid, NULL, thread_video, &VideoCtrl);
        if (ret)
        {
            printf("create thr_rvs video thread failed, error = %d \n", ret);
            return -1;
        }
    }

    if(AudioIndex != -1)
    {
        //SDL_AudioSpec
        SDL_AudioSpec AudioSpec;
        AudioSpec.freq     = out_sample_rate;
        AudioSpec.format   = AUDIO_S16SYS;
        AudioSpec.channels = out_channels;
        AudioSpec.silence  = 0;
        AudioSpec.samples  = out_nb_samples;
        AudioSpec.callback = fill_audio;
        AudioSpec.userdata = (void*)&AudioCtrl;
        if (SDL_OpenAudio(&AudioSpec, NULL) < 0)
        {
            printf("can't open audio.\n");
            return -1;
        }
        ret = pthread_create(&audio_tid, NULL, thread_audio, &AudioCtrl);
        if (ret)
        {
            printf("create thr_rvs video thread failed, error = %d \n", ret);
            return -1;
        }
        SDL_PauseAudio(0);
    }

    SDL_Thread *event_tid;
    event_tid = SDL_CreateThread(SDL_event_thread, NULL, NULL);

    VideoCnt = 0;
    AudioCnt = 0;
    Packet = (AVPacket *)av_malloc(sizeof(AVPacket));
    av_init_packet(Packet);

    while(1)
    {
        if( thread_pause )
        {
            if((CurKeyProcess == 0) && video_pause && audio_pause)
            {
                switch(CurKeyCode)
                {
                case SDLK_RIGHT:
                    //DstAudioDts = CurAudioDts + (int64_t) (3 / av_q2d(AudioCtrl.pStream->time_base));
                    // since time_base.num is almost always 1, multiply directly by time_base.den to avoid floating-point math
                    DstAudioDts = CurAudioDts + 3*AudioCtrl.pStream->time_base.den;
                    DstVideoDts = CurVideoDts + 3*VideoCtrl.pStream->time_base.den;
                    ret = av_seek_frame(pFormatCtx, AudioIndex, DstAudioDts, AVSEEK_FLAG_FRAME);
                    //ret = av_seek_frame(pFormatCtx, VideoIndex, DstVideoDts, AVSEEK_FLAG_FRAME);
                    printf("SDLK_RIGHT av_seek_frame ret = %d, CurAudioDts:%ld, CurVideoDts:%ld, DstVideoDts:%ld, DstAudioDts:%ld\n",
                           ret, CurAudioDts, CurVideoDts, DstVideoDts, DstAudioDts);
                    avcodec_flush_buffers(AudioCtrl.pCodecCtx);
                    avcodec_flush_buffers(VideoCtrl.pCodecCtx);
                    PacketArrayClear(&VideoCtrl.Video);
                    PacketArrayClear(&AudioCtrl.Audio);
                    break;
                case SDLK_LEFT:
                    DstAudioDts = CurAudioDts - 3*AudioCtrl.pStream->time_base.den;
                    DstVideoDts = CurVideoDts - 3*VideoCtrl.pStream->time_base.den;
                    if(DstAudioDts < 0) DstAudioDts = 0;
                    if(DstVideoDts < 0) DstVideoDts = 0;
                    ret = av_seek_frame(pFormatCtx, AudioIndex, DstAudioDts, AVSEEK_FLAG_BACKWARD | AVSEEK_FLAG_FRAME);
                    //ret = av_seek_frame(pFormatCtx, VideoIndex, DstVideoDts, AVSEEK_FLAG_BACKWARD | AVSEEK_FLAG_FRAME);
                    printf("SDLK_LEFT av_seek_frame ret = %d, CurAudioDts:%ld, CurVideoDts:%ld, DstVideoDts:%ld, DstAudioDts:%ld\n",
                           ret, CurAudioDts, CurVideoDts, DstVideoDts, DstAudioDts);
                    avcodec_flush_buffers(AudioCtrl.pCodecCtx);
                    avcodec_flush_buffers(VideoCtrl.pCodecCtx);
                    PacketArrayClear(&VideoCtrl.Video);
                    PacketArrayClear(&AudioCtrl.Audio);
                    break;
                default:
                    break;
                }
                CurKeyProcess = 1;
                thread_pause = !thread_pause;
            }
            usleep(10000);
            continue;
        }

        if(av_read_frame(pFormatCtx, Packet) < 0)
        {
//          thread_exit = 1;
//          SDL_Event event;
//          event.type = SFM_BREAK_EVENT;
//          SDL_PushEvent(&event);
//          printf("---------> av_read_frame < 0, thread_exit = 1  !!!\n");
//          break;
            av_seek_frame(pFormatCtx, AudioIndex, 0, AVSEEK_FLAG_BACKWARD | AVSEEK_FLAG_FRAME);
            continue;
        }

        if(Packet->stream_index == VideoIndex)
        {
            if(VideoCtrl.Video.wIndex >= PACKET_ARRAY_SIZE)
            {
                VideoCtrl.Video.wIndex = 0;
            }
            while(IsPacketArrayFull(&VideoCtrl.Video))
            {
                usleep(5000);
                //printf("---------> VideoCtrl.Video.PacketArray FULL !!!\n");
            }
            i = VideoCtrl.Video.wIndex;
            VideoCtrl.Video.PacketArray[i].Packet = *Packet;
            VideoCtrl.Video.PacketArray[i].dts = Packet->dts;
            VideoCtrl.Video.PacketArray[i].pts = Packet->pts;
            VideoCtrl.Video.PacketArray[i].state = 1;
            VideoCtrl.Video.wIndex++;
            //printf("VideoCtrl packet put,dts:%ld, pts:%ld, VideoCnt:%d\n", Packet->dts, Packet->pts, VideoCnt++);
        }
        if(Packet->stream_index == AudioIndex)
        {
            if(AudioCtrl.Audio.wIndex >= PACKET_ARRAY_SIZE)
            {
                AudioCtrl.Audio.wIndex = 0;
            }
            while(IsPacketArrayFull(&AudioCtrl.Audio))
            {
                usleep(5000);
                //printf("---------> AudioCtrl.Audio.PacketArray FULL !!!\n");
            }
            i = AudioCtrl.Audio.wIndex;
            AudioCtrl.Audio.PacketArray[i].Packet = *Packet;
            AudioCtrl.Audio.PacketArray[i].dts = Packet->dts;
            AudioCtrl.Audio.PacketArray[i].pts = Packet->pts;
            AudioCtrl.Audio.PacketArray[i].state = 1;
            AudioCtrl.Audio.wIndex++;
            //printf("AudioCtrl.frame_put, AudioCnt:%d\n", AudioCnt++);
        }
        if(thread_exit) break;
    }

    SDL_WaitThread(event_tid, NULL);
    SDL_WaitThread(VideoCtrl.video_tid, NULL);
    pthread_join(audio_tid, NULL);
    pthread_join(video_tid, NULL);
    SDL_CloseAudio();   //Close SDL
    SDL_Quit();

    swr_free(&AudioConvertCtx);
    sws_freeContext(VideoConvertCtx);
    av_free(pVideoOutBuffer);
    avcodec_close(pVideoCodecCtx);
    av_free(pAudioOutBuffer);
    avcodec_close(pAudioCodecCtx);
    avformat_close_input(&pFormatCtx);
    printf("--------------------------->main exit 8 !!\n");
    return 0;
}

That's it for this part. The next article will look at combining Qt with FFmpeg, which will probably take some time.
