[FFmpeg-devel] GSoC

Dylan Fernando dylanf123 at gmail.com
Sun Mar 11 06:36:32 EET 2018


On Thu, Mar 8, 2018 at 8:57 AM, Mark Thompson <sw at jkqxz.net> wrote:

> On 07/03/18 03:56, Dylan Fernando wrote:
> > Thanks, it works now
> >
> > Would trying to implement an OpenCL version of vf_fade be a good idea for a
> > qualification task, or would it be a better idea to try a different filter?
>
> That sounds like a sensible choice to me, though if you haven't written a
> filter before you might find it helpful to write something simpler first to
> understand how it fits together (for example: vflip, which has trivial
> processing parts but still needs the surrounding boilerplate).
>
> - Mark
>
> (PS: be aware that top-posting is generally frowned upon on this mailing
> list.)
>
>
> > On Wed, Mar 7, 2018 at 1:20 AM, Mark Thompson <sw at jkqxz.net> wrote:
> >
> >> On 06/03/18 12:37, Dylan Fernando wrote:
> >>> Hi,
> >>>
> >>> I am Dylan Fernando. I am a Computer Science student from Australia. I am
> >>> new to FFmpeg and I wish to apply for GSoC this year.
> >>> I would like to do the Video filtering with OpenCL project and I have a
> >>> few questions. Would trying to implement an OpenCL version of vf_fade be
> >>> a good idea for the qualification task, or would I be better off using a
> >>> different filter?
> >>>
> >>> Also, I’m having a bit of trouble with running unsharp_opencl. I tried
> >>> running:
> >>> ffmpeg -hide_banner -nostats -v verbose -init_hw_device opencl=ocl:0.1
> >>> -filter_hw_device ocl -i space.mpg -filter_complex unsharp_opencl
> >>> output.mp4
> >>>
> >>> but I got the error:
> >>> [AVHWDeviceContext @ 0x7fdac050c700] 0.1: Apple / Intel(R) Iris(TM)
> >>> Graphics 6100
> >>> [mpeg @ 0x7fdac3132600] max_analyze_duration 5000000 reached at 5005000
> >>> microseconds st:0
> >>> Input #0, mpeg, from 'space.mpg':
> >>>   Duration: 00:00:21.99, start: 0.387500, bitrate: 6108 kb/s
> >>>     Stream #0:0[0x1e0]: Video: mpeg2video (Main), 1 reference frame,
> >>> yuv420p(tv, bt470bg, bottom first, left), 720x480 [SAR 8:9 DAR 4:3], 6000
> >>> kb/s, 29.97 fps, 29.97 tbr, 90k tbn, 59.94 tbc
> >>> Stream mapping:
> >>>   Stream #0:0 (mpeg2video) -> unsharp_opencl
> >>>   unsharp_opencl -> Stream #0:0 (mpeg4)
> >>> Press [q] to stop, [?] for help
> >>> [graph 0 input from stream 0:0 @ 0x7fdac0418800] w:720 h:480 pixfmt:yuv420p
> >>> tb:1/90000 fr:30000/1001 sar:8/9 sws_param:flags=2
> >>> [auto_scaler_0 @ 0x7fdac05232c0] w:iw h:ih flags:'bilinear' interl:0
> >>> [Parsed_unsharp_opencl_0 @ 0x7fdac0715a80] auto-inserting filter
> >>> 'auto_scaler_0' between the filter 'graph 0 input from stream 0:0' and
> >>> the filter 'Parsed_unsharp_opencl_0'
> >>> Impossible to convert between the formats supported by the filter 'graph
> >>> 0 input from stream 0:0' and the filter 'auto_scaler_0'
> >>> Error reinitializing filters!
> >>> Failed to inject frame into filter network: Function not implemented
> >>> Error while processing the decoded data for stream #0:0
> >>> Conversion failed!
> >>>
> >>> How do I correctly run unsharp_opencl? Should I be running it on a
> >>> different video file?
> >>
> >> It's intended to be used in filter graphs where much of the activity is
> >> already happening on the GPU, so the input and output are in the
> >> AV_PIX_FMT_OPENCL format which contains GPU-side OpenCL images.
> >>
> >> If you want to use it standalone then you need hwupload and hwdownload
> >> filters to move the frames between the CPU and GPU.  For your example, it
> >> should work with:
> >>
> >> ffmpeg -init_hw_device opencl=ocl:0.1 -filter_hw_device ocl -i space.mpg
> >> -filter_complex hwupload,unsharp_opencl,hwdownload output.mp4
> >>
> >> (There are constraints on what formats can be used and therefore suitable
> >> files (or required format conversions), but I believe a normal yuv420p
> >> video like this should work in all cases.)
> >>
> >> - Mark
> _______________________________________________
> ffmpeg-devel mailing list
> ffmpeg-devel at ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
>

Thanks.

How is AV_PIX_FMT_OPENCL laid out? When using read_imagef(), do the xyzw
components correspond to RGBA respectively, or to YUV? Would I have to account
for different formats? If so, how do I check the format of the input?
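For context, here is a minimal sketch of the read_imagef() access pattern in
question. It assumes each plane is bound as its own single-channel OpenCL
image (as one would expect for planar YUV), in which case the sample would
land in the .x component; the kernel name and argument layout here are
illustrative only, not taken from FFmpeg:

```c
/* Hypothetical per-plane pass-through kernel.  Assumes a single-channel
 * image per plane, so read_imagef() returns the sample in .x and the
 * remaining components are filled per the OpenCL spec. */
__kernel void copy_plane(__write_only image2d_t dst,
                         __read_only  image2d_t src)
{
    const sampler_t sampler = CLK_NORMALIZED_COORDS_FALSE |
                              CLK_ADDRESS_CLAMP_TO_EDGE   |
                              CLK_FILTER_NEAREST;
    int2 loc = (int2)(get_global_id(0), get_global_id(1));
    float4 value = read_imagef(src, sampler, loc);  /* sample in value.x */
    write_imagef(dst, loc, value);
}
```

For a packed RGBA image, by contrast, the four components would map to xyzw
directly; querying the input's cl_image_format (e.g. via clGetImageInfo())
would be one way to confirm the channel order at runtime.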

Regards,
Dylan
