[FFmpeg-devel] [PATCH 2/2] add libyami.cpp for h264 decoding by libyami

wm4 nfxjfg at googlemail.com
Wed Jan 14 11:25:13 CET 2015


On Wed, 14 Jan 2015 06:53:18 +0000
"Zhao, Halley" <halley.zhao at intel.com> wrote:

> Thanks for your valued comments, I followed them in updated patch set.
> 
> >> +    config_buffer.profile = VAProfileNone;
> >> +    status = s->decoder->start(&config_buffer);
> >> +    if (status != DECODE_SUCCESS) {
> >> +        av_log(avctx, AV_LOG_ERROR, "yami h264 decoder fail to start\n");
> >> +        return -1;
> >> +    }
> 
> > Does this check the codec profile and the hw support for it etc.?
> 
> Yes, it does.

So what happens if the profile changes mid-stream? Also, it should
return a specific error code.

A user might also want to disable decoding for certain profiles, if
they're known to be buggy on specific hardware.

> 
> >> +        if (DECODE_FORMAT_CHANGE == status) {
> >> +            s->format_info = s->decoder->getFormatInfo();
> >> +            PRINT_DECODE_THREAD("decode format change %dx%d\n",s->format_info->width,s->format_info->height);
> >> +            // resend the buffer
> >> +            status = s->decoder->decode(in_buffer);
> >> +            PRINT_DECODE_THREAD("decode() status=%d\n",status);
> >> +            avctx->width = s->format_info->width;
> >> +            avctx->height = s->format_info->height;
> >> +            avctx->pix_fmt = AV_PIX_FMT_YUV420P;
> 
> > It writes to the global AVCodecContext without any form of synchronization?
> So far, libyami supports avcC formatted codec data, not byte streamed SPS/PPS in decoder->start() yet.
> After byte streamed sps/pps data is supported in decoder->start(), I will move format change check to yami_init.
> Do you think it will fix the issue?

Well, in general, sps/pps can be updated mid-stream, so you need to
handle it anyway.

> >> +    if (!s->decoder) // XXX, use shared pointer for s
> >> +        return;
> >> +    pthread_mutex_lock(&s->mutex_);
> >> +    s->decoder->renderDone(frame);
> >> +    pthread_mutex_unlock(&s->mutex_);
> >> +    av_log(avctx, AV_LOG_DEBUG, "recycle previous frame: %p\n", 
> >> +frame); }
> 
> > This fails if you keep a frame longer than the decoder. (Which worked with _all_ other decoders so far.)
> How about keep a reference of decoder in the video frame?

As long as the AVCodecContext can be destroyed...

Keeping a reference to the decoder seems overly expensive, though.
Also, can it happen that the GPU runs out of resources when creating a
new decoder while the old one is still not destroyed?

> >> +    while (s->decode_status < DECODE_THREAD_GOT_EOS) { // we need 
> >> + enque eos buffer more than once
> 
> > Checking decode_status without a lock?
> Yes, it seems not an issue.
> We just try not to enque an empty buffer (for EOS) many times
> 
> >> +        pthread_mutex_lock(&s->in_mutex);
> >> +            if (s->in_queue->size()<4) {
> >> +                s->in_queue->push_back(in_buffer);
> >> +                av_log(avctx, AV_LOG_VERBOSE, "wakeup decode thread ...\n");
> >> +                pthread_cond_signal(&s->in_cond);
> ...
> >> +        av_log(avctx, AV_LOG_DEBUG, "s->in_queue->size()=%ld, s->decode_count=%d, s->decode_count_yami=%d, too many buffer are under decoding, wait ...\n",
> >> +        s->in_queue->size(), s->decode_count, s->decode_count_yami);
> >> +        usleep(10000);
> 
> > No. Use condition variables.
> I updated the patch to enque the input buffer unconditionally.
> 
> > Does it even need a decode thread?
> Two threads help GPU runs in parallel

But you also wrote:

> - do not support multi-thread decoding, it is unnecessary for hw

> >> +    do {
> >> +        if (!s->format_info) {
> >> +            usleep(10000);
> 
> > Nonsense again...
> It waits during start until stream resolution/format are got, seems fine.
> 
> >> +            yami_frame->fourcc = VA_FOURCC_I420;
> >What about other pixel formats?
> We support other formats (NV12/YUY2/RGBX etc), using GPU for color conversion automatically.
> 
> >> +        if (s->decode_status == DECODE_THREAD_GOT_EOS) {
> >> +            usleep(10000);
> >.....
> After got EOS, we try to wait until an available output frame, seems fine.
> 
> >> +        frame = (AVFrame*)data;
> >This assignment was already done above.
> Thanks, I removed it in updated patch
> 
> >> +        frame->data[1] = (uint8_t*)yami_frame->pitch[0];
> > Why not set linesize?
> Thanks, I updated it in new patch
> 
> >> +        ((AVFrame*)data)->extended_data = ((AVFrame*)data)->data;
> > If you used the proper avframe functions, you wouldn't have to do this.
> I don't know ffmpeg well, Which function do you mean to? 
> 
> > Also, for this frame no pixel format is set.
> Add it, thanks.
> 
> >> +    frame->buf[0] = av_buffer_create((uint8_t*)yami_frame, 
> >> + sizeof(VideoFrameRawData), yami_recycle_frame, avctx, 0);
> > Breaks refcounting of the YUV420P frame?
> Sorry, not catch your meaning. Could you explain more?

You're using ff_get_buffer() above, which allocates frame data, and
sets frame->buf[0] to the AVBuffer holding this data. If you just
overwrite the buffer pointer, the data will never be freed, and you
have a frame-sized memory leak on every frame.


More information about the ffmpeg-devel mailing list