In FFmpeg, avcodec_decode_video2() decodes one frame of video data: it takes a compressed AVPacket structure as input and outputs a decoded AVFrame structure.

It has been deprecated and replaced by avcodec_send_packet() and avcodec_receive_frame().

The function is declared in libavcodec/avcodec.h, as shown below.

avcodec_decode_video2()

/**
 * Decode the video frame of size avpkt->size from avpkt->data into picture.
 * Some decoders may support multiple frames in a single AVPacket, such
 * decoders would then just decode the first frame.
 *
 * @warning The input buffer must be AV_INPUT_BUFFER_PADDING_SIZE larger than
 * the actual read bytes because some optimized bitstream readers read 32 or 64
 * bits at once and could read over the end.
 *
 * @warning The end of the input buffer buf should be set to 0 to ensure that
 * no overreading happens for damaged MPEG streams.
 *
 * @note Codecs which have the AV_CODEC_CAP_DELAY capability set have a delay
 * between input and output, these need to be fed with avpkt->data=NULL,
 * avpkt->size=0 at the end to return the remaining frames.
 *
 * @note The AVCodecContext MUST have been opened with @ref avcodec_open2()
 * before packets may be fed to the decoder.
 *
 * @param avctx the codec context
 * @param[out] picture The AVFrame in which the decoded video frame will be stored.
 *             Use av_frame_alloc() to get an AVFrame. The codec will
 *             allocate memory for the actual bitmap by calling the
 *             AVCodecContext.get_buffer2() callback.
 *             When AVCodecContext.refcounted_frames is set to 1, the frame is
 *             reference counted and the returned reference belongs to the
 *             caller. The caller must release the frame using av_frame_unref()
 *             when the frame is no longer needed. The caller may safely write
 *             to the frame if av_frame_is_writable() returns 1.
 *             When AVCodecContext.refcounted_frames is set to 0, the returned
 *             reference belongs to the decoder and is valid only until the
 *             next call to this function or until closing or flushing the
 *             decoder. The caller may not write to it.
 *
 * @param[in] avpkt The input AVPacket containing the input buffer.
 *            You can create such packet with av_init_packet() and by then setting
 *            data and size, some decoders might in addition need other fields like
 *            flags&AV_PKT_FLAG_KEY. All decoders are designed to use the least
 *            fields possible.
 * @param[in,out] got_picture_ptr Zero if no frame could be decompressed, otherwise, it is nonzero.
 * @return On error a negative value is returned, otherwise the number of bytes
 * used or zero if no frame could be decompressed.
 *
 * @deprecated Use avcodec_send_packet() and avcodec_receive_frame().
 */
attribute_deprecated
int avcodec_decode_video2(AVCodecContext *avctx, AVFrame *picture,
                          int *got_picture_ptr,
                          const AVPacket *avpkt);

The implementation lives in libavcodec/decode.c (in older FFmpeg versions this code was in libavcodec/utils.c), as shown below:


int attribute_align_arg avcodec_decode_video2(AVCodecContext *avctx, AVFrame *picture,
                                              int *got_picture_ptr,
                                              const AVPacket *avpkt)
{
    return compat_decode(avctx, picture, got_picture_ptr, avpkt);
}

compat_decode()

compat_decode() is defined just above avcodec_decode_video2() in the same file.

static int compat_decode(AVCodecContext *avctx, AVFrame *frame,
                         int *got_frame, const AVPacket *pkt)
{
    AVCodecInternal *avci = avctx->internal;
    int ret = 0;

    av_assert0(avci->compat_decode_consumed == 0);

    if (avci->draining_done && pkt && pkt->size != 0) {
        av_log(avctx, AV_LOG_WARNING, "Got unexpected packet after EOF\n");
        avcodec_flush_buffers(avctx);
    }

    *got_frame = 0;
    avci->compat_decode = 1;

    if (avci->compat_decode_partial_size > 0 &&
        avci->compat_decode_partial_size != pkt->size) {
        av_log(avctx, AV_LOG_ERROR,
               "Got unexpected packet size after a partial decode\n");
        ret = AVERROR(EINVAL);
        goto finish;
    }

    if (!avci->compat_decode_partial_size) {
        ret = avcodec_send_packet(avctx, pkt);
        if (ret == AVERROR_EOF)
            ret = 0;
        else if (ret == AVERROR(EAGAIN)) {
            /* we fully drain all the output in each decode call, so this should not
             * ever happen */
            ret = AVERROR_BUG;
            goto finish;
        } else if (ret < 0)
            goto finish;
    }

    while (ret >= 0) {
        ret = avcodec_receive_frame(avctx, frame);
        if (ret < 0) {
            if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
                ret = 0;
            goto finish;
        }

        if (frame != avci->compat_decode_frame) {
            if (!avctx->refcounted_frames) {
                ret = unrefcount_frame(avci, frame);
                if (ret < 0)
                    goto finish;
            }

            *got_frame = 1;
            frame = avci->compat_decode_frame;
        } else {
            if (!avci->compat_decode_warned) {
                av_log(avctx, AV_LOG_WARNING, "The deprecated avcodec_decode_* "
                       "API cannot return all the frames for this decoder. "
                       "Some frames will be dropped. Update your code to the "
                       "new decoding API to fix this.\n");
                avci->compat_decode_warned = 1;
            }
        }

        if (avci->draining || (!avctx->codec->bsfs && avci->compat_decode_consumed < pkt->size))
            break;
    }

finish:
    if (ret == 0) {
        /* if there are any bsfs then assume full packet is always consumed */
        if (avctx->codec->bsfs)
            ret = pkt->size;
        else
            ret = FFMIN(avci->compat_decode_consumed, pkt->size);
    }
    avci->compat_decode_consumed = 0;
    avci->compat_decode_partial_size = (ret >= 0) ? pkt->size - ret : 0;

    return ret;
}

avcodec_send_packet()

avcodec_send_packet() is declared as follows:

/**
 * Supply raw packet data as input to a decoder.
 *
 * Internally, this call will copy relevant AVCodecContext fields, which can
 * influence decoding per-packet, and apply them when the packet is actually
 * decoded. (For example AVCodecContext.skip_frame, which might direct the
 * decoder to drop the frame contained by the packet sent with this function.)
 *
 * @warning The input buffer, avpkt->data must be AV_INPUT_BUFFER_PADDING_SIZE
 *          larger than the actual read bytes because some optimized bitstream
 *          readers read 32 or 64 bits at once and could read over the end.
 *
 * @warning Do not mix this API with the legacy API (like avcodec_decode_video2())
 *          on the same AVCodecContext. It will return unexpected results now
 *          or in future libavcodec versions.
 *
 * @note The AVCodecContext MUST have been opened with @ref avcodec_open2()
 *       before packets may be fed to the decoder.
 *
 * @param avctx codec context
 * @param[in] avpkt The input AVPacket. Usually, this will be a single video
 *                  frame, or several complete audio frames.
 *                  Ownership of the packet remains with the caller, and the
 *                  decoder will not write to the packet. The decoder may create
 *                  a reference to the packet data (or copy it if the packet is
 *                  not reference-counted).
 *                  Unlike with older APIs, the packet is always fully consumed,
 *                  and if it contains multiple frames (e.g. some audio codecs),
 *                  will require you to call avcodec_receive_frame() multiple
 *                  times afterwards before you can send a new packet.
 *                  It can be NULL (or an AVPacket with data set to NULL and
 *                  size set to 0); in this case, it is considered a flush
 *                  packet, which signals the end of the stream. Sending the
 *                  first flush packet will return success. Subsequent ones are
 *                  unnecessary and will return AVERROR_EOF. If the decoder
 *                  still has frames buffered, it will return them after sending
 *                  a flush packet.
 *
 * @return 0 on success, otherwise negative error code:
 *      AVERROR(EAGAIN):   input is not accepted in the current state - user
 *                         must read output with avcodec_receive_frame() (once
 *                         all output is read, the packet should be resent, and
 *                         the call will not fail with EAGAIN).
 *      AVERROR_EOF:       the decoder has been flushed, and no new packets can
 *                         be sent to it (also returned if more than 1 flush
 *                         packet is sent)
 *      AVERROR(EINVAL):   codec not opened, it is an encoder, or requires flush
 *      AVERROR(ENOMEM):   failed to add packet to internal queue, or similar
 *      other errors: legitimate decoding errors
 */
int avcodec_send_packet(AVCodecContext *avctx, const AVPacket *avpkt);

avcodec_send_packet() is defined as follows:


int attribute_align_arg avcodec_send_packet(AVCodecContext *avctx, const AVPacket *avpkt)
{
    AVCodecInternal *avci = avctx->internal;
    int ret;

    if (!avcodec_is_open(avctx) || !av_codec_is_decoder(avctx->codec))
        return AVERROR(EINVAL);

    if (avctx->internal->draining)
        return AVERROR_EOF;

    if (avpkt && !avpkt->size && avpkt->data)
        return AVERROR(EINVAL);

    ret = bsfs_init(avctx);
    if (ret < 0)
        return ret;

    av_packet_unref(avci->buffer_pkt);
    if (avpkt && (avpkt->data || avpkt->side_data_elems)) {
        ret = av_packet_ref(avci->buffer_pkt, avpkt);
        if (ret < 0)
            return ret;
    }

    ret = av_bsf_send_packet(avci->filter.bsfs[0], avci->buffer_pkt);
    if (ret < 0) {
        av_packet_unref(avci->buffer_pkt);
        return ret;
    }

    if (!avci->buffer_frame->buf[0]) {
        ret = decode_receive_frame_internal(avctx, avci->buffer_frame);
        if (ret < 0 && ret != AVERROR(EAGAIN) && ret != AVERROR_EOF)
            return ret;
    }

    return 0;
}

avcodec_receive_frame()

avcodec_receive_frame() is declared as follows:

/**
 * Return decoded output data from a decoder.
 *
 * @param avctx codec context
 * @param frame This will be set to a reference-counted video or audio
 *              frame (depending on the decoder type) allocated by the
 *              decoder. Note that the function will always call
 *              av_frame_unref(frame) before doing anything else.
 *
 * @return
 *      0:                 success, a frame was returned
 *      AVERROR(EAGAIN):   output is not available in this state - user must try
 *                         to send new input
 *      AVERROR_EOF:       the decoder has been fully flushed, and there will be
 *                         no more output frames
 *      AVERROR(EINVAL):   codec not opened, or it is an encoder
 *      other negative values: legitimate decoding errors
 */
int avcodec_receive_frame(AVCodecContext *avctx, AVFrame *frame);

avcodec_receive_frame() is defined as follows:

int attribute_align_arg avcodec_receive_frame(AVCodecContext *avctx, AVFrame *frame)
{
    AVCodecInternal *avci = avctx->internal;
    int ret;

    av_frame_unref(frame);

    if (!avcodec_is_open(avctx) || !av_codec_is_decoder(avctx->codec))
        return AVERROR(EINVAL);

    ret = bsfs_init(avctx);
    if (ret < 0)
        return ret;

    if (avci->buffer_frame->buf[0]) {
        av_frame_move_ref(frame, avci->buffer_frame);
    } else {
        ret = decode_receive_frame_internal(avctx, frame);
        if (ret < 0)
            return ret;
    }

    if (avctx->codec_type == AVMEDIA_TYPE_VIDEO) {
        ret = apply_cropping(avctx, frame);
        if (ret < 0) {
            av_frame_unref(frame);
            return ret;
        }
    }

    avctx->frame_number++;

    return 0;
}

Both avcodec_send_packet() and avcodec_receive_frame() call decode_receive_frame_internal():

static int decode_receive_frame_internal(AVCodecContext *avctx, AVFrame *frame)
{
    AVCodecInternal *avci = avctx->internal;
    int ret;

    av_assert0(!frame->buf[0]);

    if (avctx->codec->receive_frame)
        ret = avctx->codec->receive_frame(avctx, frame);
    else
        ret = decode_simple_receive_frame(avctx, frame);

    if (ret == AVERROR_EOF)
        avci->draining_done = 1;

    if (!ret) {
        /* the only case where decode data is not set should be decoders
         * that do not call ff_get_buffer() */
        av_assert0((frame->private_ref && frame->private_ref->size == sizeof(FrameDecodeData)) ||
                   !(avctx->codec->capabilities & AV_CODEC_CAP_DR1));

        if (frame->private_ref) {
            FrameDecodeData *fdd = (FrameDecodeData*)frame->private_ref->data;

            if (fdd->post_process) {
                ret = fdd->post_process(avctx, frame);
                if (ret < 0) {
                    av_frame_unref(frame);
                    return ret;
                }
            }
        }
    }

    /* free the per-frame decode data */
    av_buffer_unref(&frame->private_ref);

    return ret;
}

decode_simple_receive_frame() is defined as follows:

static int decode_simple_receive_frame(AVCodecContext *avctx, AVFrame *frame)
{
    int ret;

    while (!frame->buf[0]) {
        ret = decode_simple_internal(avctx, frame);
        if (ret < 0)
            return ret;
    }

    return 0;
}

decode_simple_internal() is defined as follows:


/*
 * The core of the receive_frame_wrapper for the decoders implementing
 * the simple API. Certain decoders might consume partial packets without
 * returning any output, so this function needs to be called in a loop until it
 * returns EAGAIN.
 */
static int decode_simple_internal(AVCodecContext *avctx, AVFrame *frame)
{
    AVCodecInternal   *avci = avctx->internal;
    DecodeSimpleContext *ds = &avci->ds;
    AVPacket           *pkt = ds->in_pkt;
    // copy to ensure we do not change pkt
    int got_frame, actual_got_frame;
    int ret;

    if (!pkt->data && !avci->draining) {
        av_packet_unref(pkt);
        ret = ff_decode_get_packet(avctx, pkt);
        if (ret < 0 && ret != AVERROR_EOF)
            return ret;
    }

    // Some codecs (at least wma lossless) will crash when feeding drain packets
    // after EOF was signaled.
    if (avci->draining_done)
        return AVERROR_EOF;

    if (!pkt->data &&
        !(avctx->codec->capabilities & AV_CODEC_CAP_DELAY ||
          avctx->active_thread_type & FF_THREAD_FRAME))
        return AVERROR_EOF;

    got_frame = 0;

    if (HAVE_THREADS && avctx->active_thread_type & FF_THREAD_FRAME) {
        ret = ff_thread_decode_frame(avctx, frame, &got_frame, pkt);
    } else {
        ret = avctx->codec->decode(avctx, frame, &got_frame, pkt);

        if (!(avctx->codec->caps_internal & FF_CODEC_CAP_SETS_PKT_DTS))
            frame->pkt_dts = pkt->dts;
        if (avctx->codec->type == AVMEDIA_TYPE_VIDEO) {
            if (!avctx->has_b_frames)
                frame->pkt_pos = pkt->pos;
            //FIXME these should be under if(!avctx->has_b_frames)
            /* get_buffer is supposed to set frame parameters */
            if (!(avctx->codec->capabilities & AV_CODEC_CAP_DR1)) {
                if (!frame->sample_aspect_ratio.num)  frame->sample_aspect_ratio = avctx->sample_aspect_ratio;
                if (!frame->width)                    frame->width               = avctx->width;
                if (!frame->height)                   frame->height              = avctx->height;
                if (frame->format == AV_PIX_FMT_NONE) frame->format              = avctx->pix_fmt;
            }
        }
    }
    emms_c();

    actual_got_frame = got_frame;

    if (avctx->codec->type == AVMEDIA_TYPE_VIDEO) {
        if (frame->flags & AV_FRAME_FLAG_DISCARD)
            got_frame = 0;

        if (got_frame)
            frame->best_effort_timestamp = guess_correct_pts(avctx,
                                                             frame->pts,
                                                             frame->pkt_dts);
    } else if (avctx->codec->type == AVMEDIA_TYPE_AUDIO) {
        uint8_t *side;
        int side_size;
        uint32_t discard_padding = 0;
        uint8_t skip_reason = 0;
        uint8_t discard_reason = 0;

        if (ret >= 0 && got_frame) {
            frame->best_effort_timestamp = guess_correct_pts(avctx,
                                                             frame->pts,
                                                             frame->pkt_dts);
            if (frame->format == AV_SAMPLE_FMT_NONE)
                frame->format = avctx->sample_fmt;
            if (!frame->channel_layout)
                frame->channel_layout = avctx->channel_layout;
            if (!frame->channels)
                frame->channels = avctx->channels;
            if (!frame->sample_rate)
                frame->sample_rate = avctx->sample_rate;
        }

        side= av_packet_get_side_data(avci->last_pkt_props, AV_PKT_DATA_SKIP_SAMPLES, &side_size);
        if(side && side_size>=10) {
            avctx->internal->skip_samples = AV_RL32(side) * avctx->internal->skip_samples_multiplier;
            discard_padding = AV_RL32(side + 4);
            av_log(avctx, AV_LOG_DEBUG, "skip %d / discard %d samples due to side data\n",
                   avctx->internal->skip_samples, (int)discard_padding);
            skip_reason = AV_RL8(side + 8);
            discard_reason = AV_RL8(side + 9);
        }

        if ((frame->flags & AV_FRAME_FLAG_DISCARD) && got_frame &&
            !(avctx->flags2 & AV_CODEC_FLAG2_SKIP_MANUAL)) {
            avctx->internal->skip_samples = FFMAX(0, avctx->internal->skip_samples - frame->nb_samples);
            got_frame = 0;
        }

        if (avctx->internal->skip_samples > 0 && got_frame &&
            !(avctx->flags2 & AV_CODEC_FLAG2_SKIP_MANUAL)) {
            if(frame->nb_samples <= avctx->internal->skip_samples){
                got_frame = 0;
                avctx->internal->skip_samples -= frame->nb_samples;
                av_log(avctx, AV_LOG_DEBUG, "skip whole frame, skip left: %d\n",
                       avctx->internal->skip_samples);
            } else {
                av_samples_copy(frame->extended_data, frame->extended_data, 0, avctx->internal->skip_samples,
                                frame->nb_samples - avctx->internal->skip_samples, avctx->channels, frame->format);
                if(avctx->pkt_timebase.num && avctx->sample_rate) {
                    int64_t diff_ts = av_rescale_q(avctx->internal->skip_samples,
                                                   (AVRational){1, avctx->sample_rate},
                                                   avctx->pkt_timebase);
                    if(frame->pts!=AV_NOPTS_VALUE)
                        frame->pts += diff_ts;
#if FF_API_PKT_PTS
FF_DISABLE_DEPRECATION_WARNINGS
                    if(frame->pkt_pts!=AV_NOPTS_VALUE)
                        frame->pkt_pts += diff_ts;
FF_ENABLE_DEPRECATION_WARNINGS
#endif
                    if(frame->pkt_dts!=AV_NOPTS_VALUE)
                        frame->pkt_dts += diff_ts;
                    if (frame->pkt_duration >= diff_ts)
                        frame->pkt_duration -= diff_ts;
                } else {
                    av_log(avctx, AV_LOG_WARNING, "Could not update timestamps for skipped samples.\n");
                }
                av_log(avctx, AV_LOG_DEBUG, "skip %d/%d samples\n",
                       avctx->internal->skip_samples, frame->nb_samples);
                frame->nb_samples -= avctx->internal->skip_samples;
                avctx->internal->skip_samples = 0;
            }
        }

        if (discard_padding > 0 && discard_padding <= frame->nb_samples && got_frame &&
            !(avctx->flags2 & AV_CODEC_FLAG2_SKIP_MANUAL)) {
            if (discard_padding == frame->nb_samples) {
                got_frame = 0;
            } else {
                if(avctx->pkt_timebase.num && avctx->sample_rate) {
                    int64_t diff_ts = av_rescale_q(frame->nb_samples - discard_padding,
                                                   (AVRational){1, avctx->sample_rate},
                                                   avctx->pkt_timebase);
                    frame->pkt_duration = diff_ts;
                } else {
                    av_log(avctx, AV_LOG_WARNING, "Could not update timestamps for discarded samples.\n");
                }
                av_log(avctx, AV_LOG_DEBUG, "discard %d/%d samples\n",
                       (int)discard_padding, frame->nb_samples);
                frame->nb_samples -= discard_padding;
            }
        }

        if ((avctx->flags2 & AV_CODEC_FLAG2_SKIP_MANUAL) && got_frame) {
            AVFrameSideData *fside = av_frame_new_side_data(frame, AV_FRAME_DATA_SKIP_SAMPLES, 10);
            if (fside) {
                AV_WL32(fside->data, avctx->internal->skip_samples);
                AV_WL32(fside->data + 4, discard_padding);
                AV_WL8(fside->data + 8, skip_reason);
                AV_WL8(fside->data + 9, discard_reason);
                avctx->internal->skip_samples = 0;
            }
        }
    }

    if (avctx->codec->type == AVMEDIA_TYPE_AUDIO &&
        !avci->showed_multi_packet_warning &&
        ret >= 0 && ret != pkt->size && !(avctx->codec->capabilities & AV_CODEC_CAP_SUBFRAMES)) {
        av_log(avctx, AV_LOG_WARNING, "Multiple frames in a packet.\n");
        avci->showed_multi_packet_warning = 1;
    }

    if (!got_frame)
        av_frame_unref(frame);

    if (ret >= 0 && avctx->codec->type == AVMEDIA_TYPE_VIDEO && !(avctx->flags & AV_CODEC_FLAG_TRUNCATED))
        ret = pkt->size;

#if FF_API_AVCTX_TIMEBASE
    if (avctx->framerate.num > 0 && avctx->framerate.den > 0)
        avctx->time_base = av_inv_q(av_mul_q(avctx->framerate,
                                             (AVRational){avctx->ticks_per_frame, 1}));
#endif

    /* do not stop draining when actual_got_frame != 0 or ret < 0 */
    /* got_frame == 0 but actual_got_frame != 0 when frame is discarded */
    if (avctx->internal->draining && !actual_got_frame) {
        if (ret < 0) {
            /* prevent infinite loop if a decoder wrongly always return error on draining */
            /* reasonable nb_errors_max = maximum b frames + thread count */
            int nb_errors_max = 20 + (HAVE_THREADS && avctx->active_thread_type & FF_THREAD_FRAME ?
                                      avctx->thread_count : 1);

            if (avci->nb_draining_errors++ >= nb_errors_max) {
                av_log(avctx, AV_LOG_ERROR, "Too many errors when draining, this is a bug. "
                       "Stop draining and force EOF.\n");
                avci->draining_done = 1;
                ret = AVERROR_BUG;
            }
        } else {
            avci->draining_done = 1;
        }
    }

    avci->compat_decode_consumed += ret;

    if (ret >= pkt->size || ret < 0) {
        av_packet_unref(pkt);
    } else {
        int consumed = ret;

        pkt->data                += consumed;
        pkt->size                -= consumed;
        avci->last_pkt_props->size -= consumed; // See extract_packet_props() comment.
        pkt->pts                  = AV_NOPTS_VALUE;
        pkt->dts                  = AV_NOPTS_VALUE;
        avci->last_pkt_props->pts = AV_NOPTS_VALUE;
        avci->last_pkt_props->dts = AV_NOPTS_VALUE;
    }

    if (got_frame)
        av_assert0(frame->buf[0]);

    return ret < 0 ? ret : 0;
}

The line ret = avctx->codec->decode(avctx, frame, &got_frame, pkt); is the key step: it calls the decode() function of the corresponding AVCodec to perform the actual decoding. decode() is a function pointer that points to the concrete decoder's decoding function.

Here we take the H.264 decoder as an example to look at the decoding process. The AVCodec definition for the H.264 decoder is in libavcodec/h264dec.c, as shown below.


AVCodec ff_h264_decoder = {
    .name                  = "h264",
    .long_name             = NULL_IF_CONFIG_SMALL("H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10"),
    .type                  = AVMEDIA_TYPE_VIDEO,
    .id                    = AV_CODEC_ID_H264,
    .priv_data_size        = sizeof(H264Context),
    .init                  = h264_decode_init,
    .close                 = h264_decode_end,
    .decode                = h264_decode_frame,
    .capabilities          = /*AV_CODEC_CAP_DRAW_HORIZ_BAND |*/ AV_CODEC_CAP_DR1 |
                             AV_CODEC_CAP_DELAY | AV_CODEC_CAP_SLICE_THREADS |
                             AV_CODEC_CAP_FRAME_THREADS,
    .hw_configs            = (const AVCodecHWConfigInternal*[]) {
                                 NULL
                             },
    .caps_internal         = FF_CODEC_CAP_INIT_THREADSAFE | FF_CODEC_CAP_EXPORTS_CROPPING,
    .flush                 = flush_dpb,
    .init_thread_copy      = ONLY_IF_THREADS_ENABLED(decode_init_thread_copy),
    .update_thread_context = ONLY_IF_THREADS_ENABLED(ff_h264_update_thread_context),
    .profiles              = NULL_IF_CONFIG_SMALL(ff_h264_profiles),
    .priv_class            = &h264_class,
};

As the definition of ff_h264_decoder shows, decode points to h264_decode_frame(). The definition of h264_decode_frame() is shown below.


static int h264_decode_frame(AVCodecContext *avctx, void *data,
                             int *got_frame, AVPacket *avpkt)
{
    const uint8_t *buf = avpkt->data;
    int buf_size       = avpkt->size;
    H264Context *h     = avctx->priv_data;
    AVFrame *pict      = data;
    int buf_index;
    int ret;

    h->flags = avctx->flags;
    h->setup_finished = 0;
    h->nb_slice_ctx_queued = 0;

    ff_h264_unref_picture(h, &h->last_pic_for_ec);

    /* end of stream, output what is still in the buffers */
    if (buf_size == 0)
        return send_next_delayed_frame(h, pict, got_frame, 0);

    if (h->is_avc && av_packet_get_side_data(avpkt, AV_PKT_DATA_NEW_EXTRADATA, NULL)) {
        int side_size;
        uint8_t *side = av_packet_get_side_data(avpkt, AV_PKT_DATA_NEW_EXTRADATA, &side_size);
        if (is_extra(side, side_size))
            ff_h264_decode_extradata(side, side_size,
                                     &h->ps, &h->is_avc, &h->nal_length_size,
                                     avctx->err_recognition, avctx);
    }
    if (h->is_avc && buf_size >= 9 && buf[0]==1 && buf[2]==0 && (buf[4]&0xFC)==0xFC && (buf[5]&0x1F) && buf[8]==0x67) {
        if (is_extra(buf, buf_size))
            return ff_h264_decode_extradata(buf, buf_size,
                                            &h->ps, &h->is_avc, &h->nal_length_size,
                                            avctx->err_recognition, avctx);
    }

    buf_index = decode_nal_units(h, buf, buf_size);
    if (buf_index < 0)
        return AVERROR_INVALIDDATA;

    if (!h->cur_pic_ptr && h->nal_unit_type == H264_NAL_END_SEQUENCE) {
        av_assert0(buf_index <= buf_size);
        return send_next_delayed_frame(h, pict, got_frame, buf_index);
    }

    if (!(avctx->flags2 & AV_CODEC_FLAG2_CHUNKS) && (!h->cur_pic_ptr || !h->has_slice)) {
        if (avctx->skip_frame >= AVDISCARD_NONREF ||
            buf_size >= 4 && !memcmp("Q264", buf, 4))
            return buf_size;
        av_log(avctx, AV_LOG_ERROR, "no frame!\n");
        return AVERROR_INVALIDDATA;
    }

    if (!(avctx->flags2 & AV_CODEC_FLAG2_CHUNKS) ||
        (h->mb_y >= h->mb_height && h->mb_height)) {
        if ((ret = ff_h264_field_end(h, &h->slice_ctx[0], 0)) < 0)
            return ret;

        /* Wait for second field. */
        if (h->next_output_pic) {
            ret = finalize_frame(h, pict, h->next_output_pic, got_frame);
            if (ret < 0)
                return ret;
        }
    }

    av_assert0(pict->buf[0] || !*got_frame);

    ff_h264_unref_picture(h, &h->last_pic_for_ec);

    return get_consumed_bytes(buf_index, buf_size);
}

As can be seen from its definition, h264_decode_frame() calls decode_nal_units() to do the actual H.264 decoding work; a detailed analysis of H.264 decoding itself is beyond the scope of this article.

decode_nal_units() is defined as follows:


static int decode_nal_units(H264Context *h, const uint8_t *buf, int buf_size)
{
    AVCodecContext *const avctx = h->avctx;
    int nals_needed = 0; ///< number of NALs that need decoding before the next frame thread starts
    int idr_cleared=0;
    int i, ret = 0;

    h->has_slice = 0;
    h->nal_unit_type= 0;

    if (!(avctx->flags2 & AV_CODEC_FLAG2_CHUNKS)) {
        h->current_slice = 0;
        if (!h->first_field)
            h->cur_pic_ptr = NULL;
        ff_h264_sei_uninit(&h->sei);
    }

    if (h->nal_length_size == 4) {
        if (buf_size > 8 && AV_RB32(buf) == 1 && AV_RB32(buf+5) > (unsigned)buf_size) {
            h->is_avc = 0;
        } else if(buf_size > 3 && AV_RB32(buf) > 1 && AV_RB32(buf) <= (unsigned)buf_size)
            h->is_avc = 1;
    }

    ret = ff_h2645_packet_split(&h->pkt, buf, buf_size, avctx, h->is_avc,
                                h->nal_length_size, avctx->codec_id, avctx->flags2 & AV_CODEC_FLAG2_FAST);
    if (ret < 0) {
        av_log(avctx, AV_LOG_ERROR,
               "Error splitting the input into NAL units.\n");
        return ret;
    }

    if (avctx->active_thread_type & FF_THREAD_FRAME)
        nals_needed = get_last_needed_nal(h);
    if (nals_needed < 0)
        return nals_needed;

    for (i = 0; i < h->pkt.nb_nals; i++) {
        H2645NAL *nal = &h->pkt.nals[i];
        int max_slice_ctx, err;

        if (avctx->skip_frame >= AVDISCARD_NONREF &&
            nal->ref_idc == 0 && nal->type != H264_NAL_SEI)
            continue;

        // FIXME these should stop being context-global variables
        h->nal_ref_idc   = nal->ref_idc;
        h->nal_unit_type = nal->type;

        err = 0;
        switch (nal->type) {
        case H264_NAL_IDR_SLICE:
            if ((nal->data[1] & 0xFC) == 0x98) {
                av_log(h->avctx, AV_LOG_ERROR, "Invalid inter IDR frame\n");
                h->next_outputed_poc = INT_MIN;
                ret = -1;
                goto end;
            }
            if(!idr_cleared) {
                if (h->current_slice && (avctx->active_thread_type & FF_THREAD_SLICE)) {
                    av_log(h, AV_LOG_ERROR, "invalid mixed IDR / non IDR frames cannot be decoded in slice multithreading mode\n");
                    ret = AVERROR_INVALIDDATA;
                    goto end;
                }
                idr(h); // FIXME ensure we don't lose some frames if there is reordering
            }
            idr_cleared = 1;
            h->has_recovery_point = 1;
        case H264_NAL_SLICE:
            h->has_slice = 1;

            if ((err = ff_h264_queue_decode_slice(h, nal))) {
                H264SliceContext *sl = h->slice_ctx + h->nb_slice_ctx_queued;
                sl->ref_count[0] = sl->ref_count[1] = 0;
                break;
            }

            if (h->current_slice == 1) {
                if (avctx->active_thread_type & FF_THREAD_FRAME &&
                    i >= nals_needed && !h->setup_finished && h->cur_pic_ptr) {
                    ff_thread_finish_setup(avctx);
                    h->setup_finished = 1;
                }

                if (h->avctx->hwaccel &&
                    (ret = h->avctx->hwaccel->start_frame(h->avctx, buf, buf_size)) < 0)
                    goto end;
            }

            max_slice_ctx = avctx->hwaccel ? 1 : h->nb_slice_ctx;
            if (h->nb_slice_ctx_queued == max_slice_ctx) {
                if (h->avctx->hwaccel) {
                    ret = avctx->hwaccel->decode_slice(avctx, nal->raw_data, nal->raw_size);
                    h->nb_slice_ctx_queued = 0;
                } else
                    ret = ff_h264_execute_decode_slices(h);
                if (ret < 0 && (h->avctx->err_recognition & AV_EF_EXPLODE))
                    goto end;
            }
            break;
        case H264_NAL_DPA:
        case H264_NAL_DPB:
        case H264_NAL_DPC:
            avpriv_request_sample(avctx, "data partitioning");
            break;
        case H264_NAL_SEI:
            ret = ff_h264_sei_decode(&h->sei, &nal->gb, &h->ps, avctx);
            h->has_recovery_point = h->has_recovery_point || h->sei.recovery_point.recovery_frame_cnt != -1;
            if (avctx->debug & FF_DEBUG_GREEN_MD)
                debug_green_metadata(&h->sei.green_metadata, h->avctx);
            if (ret < 0 && (h->avctx->err_recognition & AV_EF_EXPLODE))
                goto end;
            break;
        case H264_NAL_SPS: {
            GetBitContext tmp_gb = nal->gb;
            if (avctx->hwaccel && avctx->hwaccel->decode_params) {
                ret = avctx->hwaccel->decode_params(avctx,
                                                    nal->type,
                                                    nal->raw_data,
                                                    nal->raw_size);
                if (ret < 0)
                    goto end;
            }
            if (ff_h264_decode_seq_parameter_set(&tmp_gb, avctx, &h->ps, 0) >= 0)
                break;
            av_log(h->avctx, AV_LOG_DEBUG,
                   "SPS decoding failure, trying again with the complete NAL\n");
            init_get_bits8(&tmp_gb, nal->raw_data + 1, nal->raw_size - 1);
            if (ff_h264_decode_seq_parameter_set(&tmp_gb, avctx, &h->ps, 0) >= 0)
                break;
            ff_h264_decode_seq_parameter_set(&nal->gb, avctx, &h->ps, 1);
            break;
        }
        case H264_NAL_PPS:
            if (avctx->hwaccel && avctx->hwaccel->decode_params) {
                ret = avctx->hwaccel->decode_params(avctx,
                                                    nal->type,
                                                    nal->raw_data,
                                                    nal->raw_size);
                if (ret < 0)
                    goto end;
            }
            ret = ff_h264_decode_picture_parameter_set(&nal->gb, avctx, &h->ps,
                                                       nal->size_bits);
            if (ret < 0 && (h->avctx->err_recognition & AV_EF_EXPLODE))
                goto end;
            break;
        case H264_NAL_AUD:
        case H264_NAL_END_SEQUENCE:
        case H264_NAL_END_STREAM:
        case H264_NAL_FILLER_DATA:
        case H264_NAL_SPS_EXT:
        case H264_NAL_AUXILIARY_SLICE:
            break;
        default:
            av_log(avctx, AV_LOG_DEBUG, "Unknown NAL code: %d (%d bits)\n",
                   nal->type, nal->size_bits);
        }

        if (err < 0) {
            av_log(h->avctx, AV_LOG_ERROR, "decode_slice_header error\n");
        }
    }

    ret = ff_h264_execute_decode_slices(h);
    if (ret < 0 && (h->avctx->err_recognition & AV_EF_EXPLODE))
        goto end;

    ret = 0;
end:
#if CONFIG_ERROR_RESILIENCE
    /*
     * FIXME: Error handling code does not seem to support interlaced
     * when slices span multiple rows
     * The ff_er_add_slice calls don't work right for bottom
     * fields; they cause massive erroneous error concealing
     * Error marking covers both fields (top and bottom).
     * This causes a mismatched s->error_count
     * and a bad error table. Further, the error count goes to
     * INT_MAX when called for bottom field, because mb_y is
     * past end by one (callers fault) and resync_mb_y != 0
     * causes problems for the first MB line, too.
     */
    if (!FIELD_PICTURE(h) && h->current_slice &&
        h->ps.sps == (const SPS*)h->ps.sps_list[h->ps.pps->sps_id]->data &&
        h->enable_er) {

        H264SliceContext *sl = h->slice_ctx;
        int use_last_pic = h->last_pic_for_ec.f->buf[0] && !sl->ref_count[0];

        ff_h264_set_erpic(&sl->er.cur_pic, h->cur_pic_ptr);

        if (use_last_pic) {
            ff_h264_set_erpic(&sl->er.last_pic, &h->last_pic_for_ec);
            sl->ref_list[0][0].parent = &h->last_pic_for_ec;
            memcpy(sl->ref_list[0][0].data, h->last_pic_for_ec.f->data, sizeof(sl->ref_list[0][0].data));
            memcpy(sl->ref_list[0][0].linesize, h->last_pic_for_ec.f->linesize, sizeof(sl->ref_list[0][0].linesize));
            sl->ref_list[0][0].reference = h->last_pic_for_ec.reference;
        } else if (sl->ref_count[0]) {
            ff_h264_set_erpic(&sl->er.last_pic, sl->ref_list[0][0].parent);
        } else
            ff_h264_set_erpic(&sl->er.last_pic, NULL);

        if (sl->ref_count[1])
            ff_h264_set_erpic(&sl->er.next_pic, sl->ref_list[1][0].parent);

        sl->er.ref_count = sl->ref_count[0];

        ff_er_frame_end(&sl->er);
        if (use_last_pic)
            memset(&sl->ref_list[0][0], 0, sizeof(sl->ref_list[0][0]));
    }
#endif /* CONFIG_ERROR_RESILIENCE */
    /* clean up */
    if (h->cur_pic_ptr && !h->droppable && h->has_slice) {
        ff_thread_report_progress(&h->cur_pic_ptr->tf, INT_MAX,
                                  h->picture_structure == PICT_BOTTOM_FIELD);
    }

    return (ret < 0) ? ret : buf_size;
}

References

1. https://blog.csdn.net/leixiaohua1020/article/details/12679719
