[FFmpeg-devel] [PATCH V2 2/2] vf_dnn_processing.c: add dnn backend openvino
Guo, Yejun
yejun.guo at intel.com
Mon Jun 29 06:17:22 EEST 2020
> -----Original Message-----
> From: ffmpeg-devel <ffmpeg-devel-bounces at ffmpeg.org> On Behalf Of Pedro
> Arthur
> Sent: June 28, 2020 23:29
> To: FFmpeg development discussions and patches <ffmpeg-devel at ffmpeg.org>
> Subject: Re: [FFmpeg-devel] [PATCH V2 2/2] vf_dnn_processing.c: add dnn
> backend openvino
>
> Hi,
>
> On Wed, Jun 24, 2020 at 03:40, Guo, Yejun <yejun.guo at intel.com>
> wrote:
>
> >
> >
> > > -----Original Message-----
> > > From: Guo, Yejun <yejun.guo at intel.com>
> > > Sent: June 11, 2020 21:01
> > > To: ffmpeg-devel at ffmpeg.org
> > > Cc: Guo, Yejun <yejun.guo at intel.com>
> > > Subject: [PATCH V2 2/2] vf_dnn_processing.c: add dnn backend
> > > openvino
> > >
> > > We can try with the srcnn model from sr filter.
> > > 1) get srcnn.pb model file, see filter sr
> > > 2) convert srcnn.pb into openvino model with command:
> > > python mo_tf.py --input_model srcnn.pb --data_type=FP32
> > > --input_shape [1,960,1440,1] --keep_shape_ops
> > >
> > > See the script at
> > > https://github.com/openvinotoolkit/openvino/tree/master/model-optimizer
> > > We'll get srcnn.xml and srcnn.bin in the current path; copy them to the
> > > directory where ffmpeg is run.
> > >
> > > I have also uploaded the model files at
> > > https://github.com/guoyejun/dnn_processing/tree/master/models
> > >
> > > 3) run with openvino backend:
> > > ffmpeg -i input.jpg -vf format=yuv420p,scale=w=iw*2:h=ih*2,dnn_processing=dnn_backend=openvino:model=srcnn.xml:input=x:output=srcnn/Maximum -y srcnn.ov.jpg
> > > (The input.jpg resolution is 720x480)
> > >
> > > Signed-off-by: Guo, Yejun <yejun.guo at intel.com>
> > > ---
> > > doc/filters.texi | 10 +++++++++-
> > > libavfilter/vf_dnn_processing.c | 5 ++++-
> > > 2 files changed, 13 insertions(+), 2 deletions(-)
> >
> > any comments on this patch set? thanks.
> >
> It would be nice if you included some benchmark numbers comparing it with the
> other backends.
> Rest LGTM, thanks!
Thanks Pedro, I ran the srcnn model on a half-minute video to compare the performance of the tensorflow
backend (libtensorflow-cpu-linux-x86_64-1.14.0.tar.gz) and the openvino backend on the CPU path.
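For reference, the tensorflow run looks roughly like the command below (a sketch; input=x and output=y are
the names documented for srcnn.pb in doc/filters.texi, and the clip name is made up):

ffmpeg -i input_30s.mp4 -vf format=yuv420p,scale=w=iw*2:h=ih*2,dnn_processing=dnn_backend=tensorflow:model=srcnn.pb:input=x:output=y -y out_tf.mp4

The openvino run uses the same filter chain with dnn_backend=openvino:model=srcnn.xml:input=x:output=srcnn/Maximum.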
The openvino backend is no slower than the tensorflow backend. Note that the current openvino patch does not
yet enable throughput mode, which improves performance a lot; see more detail at
https://docs.openvinotoolkit.org/latest/_docs_optimization_guide_dldt_optimization_guide.html#cpu-streams
I plan to enable the performance features next, for example using the filter's activate interface and
batching several frames into one inference (these are common to the different backends), and also the
throughput mode (in the openvino backend); a rough sketch of the latter follows below.
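To illustrate what I mean by throughput mode, here is a minimal sketch against the Inference Engine C API
(c_api/ie_c_api.h); the function name, paths and error handling are placeholders, not the actual patch:

#include <c_api/ie_c_api.h>

/* Sketch: ask the CPU plugin for multiple inference streams (throughput mode)
 * and pass that config when the network is loaded onto the device. */
static int load_with_throughput_streams(const char *xml_path, const char *bin_path)
{
    ie_core_t *core = NULL;
    ie_network_t *network = NULL;
    ie_executable_network_t *exe_network = NULL;
    ie_config_t config = {"CPU_THROUGHPUT_STREAMS", "CPU_THROUGHPUT_AUTO", NULL};
    IEStatusCode status;

    status = ie_core_create("", &core);
    if (status != OK)
        return -1;

    status = ie_core_read_network(core, xml_path, bin_path, &network);
    if (status != OK)
        goto end;

    /* the throughput config takes effect when loading the network onto the device */
    status = ie_core_load_network(core, network, "CPU", &config, &exe_network);
    if (status != OK)
        goto end;

    /* ... create several infer requests and feed frames to them in parallel ... */

end:
    if (exe_network)
        ie_exec_network_free(&exe_network);
    if (network)
        ie_network_free(&network);
    ie_core_free(&core);
    return status == OK ? 0 : -1;
}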
I will push the patch tomorrow if there are no other comments, thanks.