[FFmpeg-devel] lavfi noise generator

Stefano Sabatini stefano.sabatini-lala
Fri Jan 16 00:25:23 CET 2009


On date Thursday 2009-01-15 22:01:02 +0100, Vitor Sessak encoded:
> Stefano Sabatini wrote:
> > On date Monday 2008-12-29 16:27:04 +0100, Vitor Sessak encoded:
> >> Stefano Sabatini wrote:
> >>> On date Monday 2008-12-29 12:06:20 +0100, Vitor Sessak encoded:
> > [...]
> >>>> Indeed, we are lacking good examples of filters in the SoC tree. But  
> >>>> exactly for this reason, I think vf_noise.c should fill one slice at 
> >>>> a time to set a good example. After this is done, IMO it is 
> >>>> welcome in the SoC svn, at least to serve as a template.
> >>> Hi Vitor, vsrc_noise.c is a *source* rather than a filter, so I don't
> >>> think it is possible to use the draw_slice() API.
> >> Indeed, it should not be possible, at least not with the current svn  
> >> code. See my attached patch.
> >>
> >>> What I'm currently doing is:
> >>>
> >>> static int request_frame(AVFilterLink *link)
> >>> {
> >>>     NoiseContext *ctx = link->src->priv;
> >>>     AVFilterPicRef *picref = avfilter_get_video_buffer(link, AV_PERM_WRITE);
> >>>
> >>>     fill_picture(ctx, picref);
> >>>     picref->pts = av_rescale_q(ctx->pts++, (AVRational){ ctx->frame_rate.den, ctx->frame_rate.num }, AV_TIME_BASE_Q);
> >>>
> >>>     avfilter_start_frame(link, avfilter_ref_pic(picref, ~0));
> >>>     avfilter_draw_slice(link, 0, picref->h);
> >>>     avfilter_end_frame(link);
> >>>
> >>>     avfilter_unref_pic(picref);
> >>>
> >>>     return 0;
> >>> }
> >> Could something like the following work?
> >>
> >> #define SLICE_SIZE 32
> >>
> >> static int request_frame(AVFilterLink *link)
> >> {
> >>     NoiseContext *ctx = link->src->priv;
> >>     AVFilterPicRef *picref = avfilter_get_video_buffer(link, AV_PERM_WRITE);
> >>     int h;
> >>
> >>     picref->pts = av_rescale_q(ctx->pts++, (AVRational) { ctx->frame_rate.den, ctx->frame_rate.num }, AV_TIME_BASE_Q);
> >>
> >>     avfilter_start_frame(link, avfilter_ref_pic(picref, ~0));
> >>     for(h=0; h < ctx->h; h += SLICE_SIZE) {
> >>        fill_picture(ctx, picref, h, FFMIN(h+SLICE_SIZE, ctx->h));
> >>        avfilter_draw_slice(link, h, FFMIN(h+SLICE_SIZE, ctx->h));
> >>     }
> >>     avfilter_end_frame(link);
> >>
> >>     avfilter_unref_pic(picref);
> >>
> >>     return 0;
> >> }
> > 
> > It should work. The only thing I don't like is that this way the
> > code *needs* to know about the picture structure (it needs to know
> > how to access the slice), while before I was simply filling the whole
> > buffer. That consideration leads me to a (maybe silly) question:
> > what's the advantage of per-slice filling in this case?
> 
> Imagine a filter chain like
> 
> noise -> slow_filter -> scale -> another_slow_filter -> 
> overlay_with_something -> output
> 
> The noise filter will pass slices to slow_filter, which will pass 
> slices to the next filters, and so on. Not that one could not 
> explicitly add a slicify filter for that, but this way should be 
> faster thanks to better use of the data cache.

So the point is to optimize CPU data cache usage; yep, it makes sense.
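To make the cache argument concrete, here is a toy standalone sketch (plain C; none of these names are lavfi API, and the two "filters" are deliberately trivial). Both paths produce identical output, but in the sliced version each band of lines is still cache-hot when the second filter reads it, instead of having been evicted by the time the whole frame has gone through the first filter.

```c
/* Toy sketch (not lavfi code): whole-frame vs. slice-by-slice
 * pipelining of two per-pixel "filters" over the same buffer. */
#include <assert.h>
#include <string.h>

#define W 64
#define H 48
#define SLICE_SIZE 8

static void filter_a(unsigned char *buf, int y0, int y1)
{
    for (int y = y0; y < y1; y++)
        for (int x = 0; x < W; x++)
            buf[y * W + x] += 3;          /* toy per-pixel work */
}

static void filter_b(unsigned char *buf, int y0, int y1)
{
    for (int y = y0; y < y1; y++)
        for (int x = 0; x < W; x++)
            buf[y * W + x] ^= 0x55;       /* more toy per-pixel work */
}

/* whole-frame: filter_a touches all H lines, then filter_b re-reads them */
static void run_whole(unsigned char *buf)
{
    filter_a(buf, 0, H);
    filter_b(buf, 0, H);
}

/* sliced: each SLICE_SIZE-line band goes through both filters while hot */
static void run_sliced(unsigned char *buf)
{
    for (int y = 0; y < H; y += SLICE_SIZE) {
        int end = y + SLICE_SIZE < H ? y + SLICE_SIZE : H;
        filter_a(buf, y, end);
        filter_b(buf, y, end);
    }
}
```

The smaller the slice relative to the cache, the more of the second filter's reads hit cache-resident data, which is the benefit lost when slices grow to frame size.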

> As a rule of thumb, if a filter doesn't need to work with whole frames, 
> it should work with slices. At least, that's how I understand lavfi is 
> expected to work. But in this case I agree it is more up to the 
> framework to deal with it...

Let's suppose I have this filters chain:

f1 (issues complete frame) -> f2 (can work with slices) -> f3 (can work with slices) -> ...

If I understood it correctly, then we could increase speed by
introducing a slicify filter between f1 and f2:

f1 -> slicify -> f2 ...

That is, the speed benefit of the slice API is lost if we use big
slices. Is that right?
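For the record, the core of such a slicify step is small. The following is a hypothetical sketch, not the real lavfi code: the names slicify, draw_slice_cb and SLICE_SIZE are illustrative only. It takes one big band [y, y+h) from a whole-frame source and re-emits it downstream as bands of at most SLICE_SIZE lines.

```c
/* Hypothetical sketch (not actual lavfi code): splitting one
 * whole-frame band into small bands for downstream filters. */
#include <assert.h>

#define SLICE_SIZE 32

typedef void (*draw_slice_cb)(void *ctx, int y, int h);

static void slicify(void *ctx, int y, int h, draw_slice_cb next)
{
    int end = y + h;
    while (y < end) {
        int n = end - y < SLICE_SIZE ? end - y : SLICE_SIZE;
        next(ctx, y, n);    /* forward one small band downstream */
        y += n;
    }
}

/* counters used to observe how slicify splits a band */
static int g_calls, g_lines;

static void counting_cb(void *ctx, int y, int h)
{
    (void)ctx; (void)y;
    g_calls++;
    g_lines += h;
}
```

Exactly as you say, the cache benefit depends on SLICE_SIZE: with SLICE_SIZE as large as the frame height this degenerates into a single whole-frame call.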

Thanks, regards.
-- 
FFmpeg = Fiendish and Furious Multimedia Programmable EniGma