[FFmpeg-devel] grabbing from dv1394

Tommi Sakari Uimonen tuimonen
Tue Jul 10 22:39:39 CEST 2007


>> In ffmpeg, the ringbuffer is done with mmap which I guess is faster to use
>> than vector of buffers,
>
>  The idea is to be as close to zero-copy as hardware permits.
>
>> but I fail to see how the threads are handled if
>> any, since grepping 'thread' over the source tree does not give many hits.
>
>  Well, strictly speaking threads are not a requirement here.

I might agree here if I were smart enough.

>
>> The single thread approach works for
>> encoding disk streams but not for realtime usage.
>
>  Could you, please, elaborate on this one?

Yes, that was a bit uninformative and mostly hand-waving.

I mean that if the DV capture and the disk writing are done in the same 
thread, so that the capture has to wait until write() finishes, some 
frames may get dropped when a write is momentarily slow because of other 
disk activity.

I might of course be wrong and write() may not wait at all even when 
there is other disk activity (the O_NONBLOCK option at open() time), but 
to my understanding all these so-called realtime-critical tasks, like 
capturing audio from a sound card or DV from a camera, should be split 
across two threads: the one doing the capture should have RT priority, 
the kernel should be capable of low latency and preemption, and all the 
code in the capture loop should be "realtime safe" - meaning no calls to 
functions that might block or spend too much time doing something 
irrelevant. That is how it is done in the audio world at least, if you 
ask Paul Davis et al. This ensures that when an audio/DV frame becomes 
available, it is captured before it is too late.

The other thread then handles everything else, such as writing to disk 
and any further processing, and does not need to be realtime. It can lag 
behind, and the result is still intact once the capture is finished.
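To make the idea concrete, here is a rough sketch of that two-thread 
scheme. This is not ffmpeg's actual dv1394 code; the frame size, the 
fake device read and the ring depth are just placeholders. An 
RT-priority capture thread fills a ring of frame buffers and a 
normal-priority writer thread drains it to disk:

#include <pthread.h>
#include <sched.h>
#include <semaphore.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define RING_SLOTS    32
#define DV_FRAME_SIZE 144000            /* PAL DV frame, for illustration */

static unsigned char ring[RING_SLOTS][DV_FRAME_SIZE];
static sem_t free_slots, full_slots;

/* placeholder for the real device read from /dev/dv1394 */
static void capture_frame(unsigned char *dst)
{
    memset(dst, 0, DV_FRAME_SIZE);
    usleep(40000);                      /* pretend ~25 fps */
}

static void *capture_thread(void *arg)
{
    struct sched_param sp;
    int i;

    sp.sched_priority = 50;
    /* needs RT privileges; error handling omitted in this sketch */
    pthread_setschedparam(pthread_self(), SCHED_FIFO, &sp);

    for (i = 0; ; i = (i + 1) % RING_SLOTS) {
        /* in real code the capture would drop the frame here instead
         * of blocking when the writer cannot keep up */
        sem_wait(&free_slots);
        capture_frame(ring[i]);
        sem_post(&full_slots);
    }
    return NULL;
}

static void *writer_thread(void *arg)
{
    FILE *out = arg;
    int i;

    for (i = 0; ; i = (i + 1) % RING_SLOTS) {
        sem_wait(&full_slots);
        fwrite(ring[i], 1, DV_FRAME_SIZE, out); /* may block, that is fine */
        sem_post(&free_slots);
    }
    return NULL;
}

int main(void)
{
    pthread_t cap, wr;
    FILE *out = fopen("capture.dv", "wb");

    sem_init(&free_slots, 0, RING_SLOTS);
    sem_init(&full_slots, 0, 0);
    pthread_create(&cap, NULL, capture_thread, NULL);
    pthread_create(&wr, NULL, writer_thread, out);
    pthread_join(cap, NULL);            /* never returns in this sketch */
    return 0;
}

The point is only that the capture loop never calls write()/fwrite() 
itself; the only potentially blocking call it makes is on the ring-buffer 
semaphore.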

Of course, if the computer is simply too slow, nothing can prevent frames 
from being lost. A bigger buffer might help, but with video the memory is 
consumed rapidly.


My comment about a single thread and encoding disk streams meant that 
when the input is a file, even a slow computer finishes the task without 
losing frames - it just takes longer to get there. Nothing is realtime in 
that case, so no special manoeuvres are needed.

But I admit that using two threads alone is not the answer, since dvgrab 
does not use any realtime privileges for its threads either. So it must 
be something else that ffmpeg does differently: maybe ffmpeg uses too 
small a ring buffer, or maybe it is too fast and processes some frames 
twice - that would explain some of the jerkiness and the incomplete 
frames in the output.

The next thing I should do is create a reference video that shows the 
frame number on each frame, on top of some simple background pattern, so 
I can tell when a frame gets corrupted. Then I would grab it with ffmpeg 
and analyze the result to see exactly what happens. The problem is that I 
don't know how to feed DV back to the camera; I think I tried it once but 
it didn't work. The poor man's solution is to point the camera at my 
monitor while it shows the test pattern with a running timestamp.

I'll get back when this is done. Any suggestions on how to create such a 
test video easily are welcome.
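One idea I have not tried yet: a build that includes libavfilter's 
testsrc source might already provide something close, since testsrc draws 
a simple test pattern with a running counter that could be played 
full-screen on the monitor, e.g.

ffmpeg -f lavfi -i testsrc=duration=60:size=720x576:rate=25 test.avi

I have not verified that such a build is available here, and encoding the 
result to DV would still be a separate step, so better suggestions are 
welcome.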

Tommi



