[FFmpeg-devel] [PATCH] libavu: add pkt_timebase to AVFrame.

wm4 nfxjfg at googlemail.com
Sun Jul 20 17:27:16 CEST 2014


On Sun, 20 Jul 2014 17:01:42 +0200
Michael Niedermayer <michaelni at gmx.at> wrote:

> On Sun, Jul 20, 2014 at 04:26:01PM +0200, wm4 wrote:
> > On Fri, 18 Jul 2014 13:12:39 +0200
> > Michael Niedermayer <michaelni at gmx.at> wrote:
> > 
> > > On Fri, Jul 18, 2014 at 12:47:06PM +0200, Hendrik Leppkes wrote:
> > > > Am 18.07.2014 12:04 schrieb "Benoit Fouet" <benoit.fouet at free.fr>:
> > > > >
> > > > > In order to easily correlate pkt_duration to its real duration, add the
> > > > > packet time base information to the frame structure.
> > > > >
> > > > > Fixes issue #3052
> > > > 
> > > > The code in avcodec doesn't know the timebase, unless the user tells it.
> > > 
> > > or the user uses libavformat with libavcodec
> > > 
> > > 
> > > > 
> > > > And if the user wants to tell it, there already is an avctx field for it
> > > > (pkt_timebase); no need to store it in the frame, since it's not going to
> > > > change every frame.
> > > > 
> > > > As such, I'm not sure what this new field would solve.
> > > 
> > > It would allow interpreting the AVFrame.pkt_duration without the need
> > > to have access to a AVCodecContext.
> > > AVFrames are part of libavutil, AVCodecContext is part of libavcodec
> > 
> > I don't agree. When you use AVFrame and AVPacket, you usually _do_ have
> > to cooperate with other parts of the libraries.
> 
> > For example, for
> > AVPacket you need to know the codec, which is another fixed value
> > exported by libavformat. Do you want to add a codec field to AVPacket?
> 
> You do realize that the patch is about AVFrame while you are arguing
> about AVPacket?

Well, you brought up AVPacket...

> 
> > No, you just make sure the receiver also gets the codec somehow. You can
> > do the same for the timebase.
> 
> > For AVFrame, you can't use it with
> > libavfilter without making sure the pixel format and dimensions match
> > with the input buffer. Same for the timebase.
> 
> some filters do support filtering AVFrames with
> varying dimensions and pixel formats;
> both dimensions and pixel format are fields of AVFrame
> (width, height, format)

But the filter API does not.

> also some codecs allocate multiple AVFrames with different dimensions;
> HEVC is one.
> and hypothetical future support of things like spatial scalability
> would also need internal buffers of different dimensions
> and temporal scalability could maybe slightly benefit from
> AVFrames with a different timebase, well maybe I am drifting too much
> into "what if" here ... I am not sure
> 
> 
> > 
> > Adding the timebase to AVFrame and AVPacket would be reasonable if it's
> > guaranteed that other parts of the libraries interpret them properly.
> > But they don't (because the timebase is a fixed constant over a stream
> > in the first place), and adding this field would lead to a confusing
> > situation where some parts of the code set the pkt_timebase, some maybe
> > even interpret it, but most code silently ignores it. The API is much
> > cleaner without this field.
> 
> I don't disagree at all, but as is we have these packet-related fields
> and in half our code cannot use / interpret them because we don't
> know the timebase

It's simple: the timebase of the demuxer's AVStream is the
timebase. Maybe you should remove less useful timebase fields instead.
For example, libavcodec has absolutely no business interpreting
timestamps (the only reason lavc needs to deal with timestamps at
all is frame reordering). Maybe that would be better than adding even
more of these confusing timebases.
