[FFmpeg-devel] [PATCH] avdevice/decklink: Add option to align Capture start time

Marton Balint cus at passwd.hu
Thu Sep 27 00:14:19 EEST 2018



On Tue, 25 Sep 2018, Jeyapal, Karthick wrote:

>
> On 9/24/18 7:42 PM, Devin Heitmueller wrote:
>> Hello Karthick,
>>
>>
>>> On Sep 24, 2018, at 7:49 AM, Karthick J <kjeyapal at akamai.com> wrote:
>>>
>>> From: Karthick Jeyapal <kjeyapal at akamai.com>
>>>
>>> This option is useful for maintaining input synchronization across N
>>> different hardware devices deployed for 'N-way' redundancy.
>>> The system time of different hardware devices should be synchronized
>>> with protocols such as NTP or PTP, before using this option.
>>
>> I can certainly see the usefulness of such a feature, but is the 
>> decklink module really the right place for this?  This feels like 
>> something that should be done through a filter (either as a multimedia 
>> filter or a BSF).
> Hi Devin,
>
> Thank you very much for the feedback. I agree with you that if this can
> be done through a filter, then that is certainly a better place to do
> it. But as far as I understand, it can't be implemented reliably in a
> filter without imposing additional restrictions and/or added
> complexity. This is primarily because frames might take different
> amounts of time to pass through the pipeline threads on each piece of
> hardware and so reach the filter function at different times, losing
> some synchronization w.r.t. system time. In other words, some modules
> in the pipeline contain CPU-intensive code (such as video decoding)
> that runs before the frame reaches the filter function. The thread
> that performs this operation should be very lightweight, without any
> CPU-intensive work, and for better reliability it needs to run as soon
> as the frame is received from the driver. For example, a video frame
> captured by a decklink device could take a varying amount of time to
> pass through the V210 decoder due to HW differences and/or CPU load
> from other encoder threads. This unpredictable decoder delay rules out
> multimedia filters for this kind of operation. A bitstream filter
> (BSF) can mitigate the issue to some extent, as it sits before the
> decoder. We would still need to insert a thread (and associated
> buffering) in the BSF so that the decoder is decoupled from this
> time-sensitive thread. But even then there is no guarantee against
> CPU-intensive operations performed in the capture plugin. For example,
> the Decklink plugin performs some VANC processing which could be CPU
> intensive on a low-end 2-core Intel processor. And even if we assume
> the Decklink plugin doesn't perform any CPU-intensive operations, we
> cannot guarantee the same for other capture device plugins. Another
> option would be to implement this in filters using "copyts" and drop
> the frames based on PTS/DTS values instead of system time. But that
> imposes the restriction that "copyts" must always be used; if somebody
> needs to run without "copyts", it won't work. My understanding of
> ffmpeg is limited, so the above explanation may not be entirely
> correct. Please feel free to correct me.
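
For reference, the approach in the patch presumably amounts to holding back
frames in the capture path until the wallclock crosses an aligned boundary.
A minimal sketch of that idea (names like align_us and should_drop_frame are
illustrative only, not taken from the actual patch):

#include <stdint.h>

/* Illustrative only, not the actual patch: compute the next wallclock
 * boundary that is a multiple of align_us, then drop captured frames
 * that arrive before it. */
static int64_t next_aligned_boundary_us(int64_t now_us, int64_t align_us)
{
    return (now_us / align_us + 1) * align_us;
}

static int should_drop_frame(int64_t frame_wallclock_us, int64_t boundary_us,
                             int *started)
{
    if (!*started && frame_wallclock_us < boundary_us)
        return 1;      /* still before the aligned start: drop */
    *started = 1;      /* boundary reached: keep this and all later frames */
    return 0;
}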

How about adding such an option to ffmpeg.c? You can still use wallclock 
timestamps in decklink, and then drop the frames (packets) in ffmpeg.c 
before the timestamps are touched.
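
A minimal sketch of what that could look like, assuming the demuxer emits
wallclock-based pts (the function and variable names are invented for
illustration, they are not existing ffmpeg.c code):

#include <libavformat/avformat.h>
#include <libavutil/mathematics.h>

/* Illustrative only: read and discard packets until their (wallclock-based)
 * pts reaches aligned_start_us.  In ffmpeg.c this would have to run before
 * any timestamp offsetting or rescaling is applied. */
static int skip_until_aligned_start(AVFormatContext *ic,
                                    int64_t aligned_start_us, AVPacket *pkt)
{
    for (;;) {
        int ret = av_read_frame(ic, pkt);
        if (ret < 0)
            return ret;
        int64_t pkt_us = av_rescale_q(pkt->pts,
                                      ic->streams[pkt->stream_index]->time_base,
                                      AV_TIME_BASE_Q);
        if (pkt_us >= aligned_start_us)
            return 0;             /* aligned start reached, keep this packet */
        av_packet_unref(pkt);     /* too early: drop and keep reading */
    }
}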

Another approach might be to store the wallclock frame time as some kind of 
metadata (as it is done for "timecode") and then add the possibility for 
f_select to drop frames based on this. However, the evaluation engine has no 
concept of complex objects (like frames or frame metadata), so this probably 
needs additional work.
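
A rough sketch of the tagging side, using a hypothetical "wallclock" metadata
key (the f_select side would still need the extra work mentioned above):

#include <inttypes.h>
#include <stdio.h>
#include <libavutil/dict.h>
#include <libavutil/frame.h>

/* Illustrative only: tag a frame with its wallclock capture time, similar
 * to how "timecode" is exported.  The "wallclock" key is made up here, and
 * f_select would still need to learn to read frame metadata. */
static int tag_frame_wallclock(AVFrame *frame, int64_t wallclock_us)
{
    char buf[32];
    snprintf(buf, sizeof(buf), "%" PRId64, wallclock_us);
    return av_dict_set(&frame->metadata, "wallclock", buf, 0);
}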

Regards,
Marton

