<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=ISO-8859-1">
</head>
<body bgcolor="#FFFFFF" text="#000000">
As per the ffmpeg tutorial code, I have a decode loop like this:<br>
<pre>
while (av_read_frame_proc(pFormatCtx, &packet) >= 0)
{
    if (packet.stream_index == videoStream)
    {
        // Decode the video frame
        avcodec_decode_video2_proc(pCodecCtx, pFrame,
                                   &frameFinished, &packet);
        if (frameFinished)
        {
            // Convert the decoded frame to RGB
            sws_scale_proc(sws_ctx,
                           (uint8_t const * const *)pFrame->data,
                           pFrame->linesize, 0, pCodecCtx->height,
                           pFrameRGB->data, pFrameRGB->linesize);
        }
    }
}
</pre>
<br>
As a general question: if I'm receiving frames in real time from a
webcam, or from a remote video file over a very fast network, I may
receive frames faster than the framerate of the video, so I'd need to
buffer them in my own datastructure (a sketch of what I mean is at
the end of this mail). But if my datastructure is small, won't I lose
all the frames that didn't get buffered? I'm asking because I read
somewhere that ffmpeg buffers a video internally. How does this
buffering happen, and how would it be different from a buffer I
implement myself? The chance that I'd lose frames is very real, isn't
it? Is there anything I could read, or any source code I could look
at, about this?<br>
Or does all of this depend on the streaming protocol being used?<br>
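<br>
To make it concrete, the bounded buffer I have in mind looks roughly
like the sketch below. This is only an illustration of my own idea:
the FrameRing type, the drop-oldest policy, and the capacity of 8 are
made up by me and are not part of any ffmpeg API.<br>
<pre>
/* A fixed-size ring of decoded frames, shared between the decode
 * loop (producer) and the display thread (consumer). */
#include <pthread.h>
#include <libavutil/frame.h>

#define RING_CAPACITY 8            /* deliberately small */

typedef struct FrameRing {
    AVFrame        *slots[RING_CAPACITY];
    int             head;          /* index of the oldest frame */
    int             count;         /* frames currently buffered */
    pthread_mutex_t lock;          /* init with pthread_mutex_init() */
} FrameRing;

/* Push a frame; if the ring is full, the oldest frame is freed and
 * overwritten -- i.e. it is lost, which is exactly my worry. */
static void ring_push(FrameRing *r, AVFrame *frame)
{
    pthread_mutex_lock(&r->lock);
    if (r->count == RING_CAPACITY) {
        av_frame_free(&r->slots[r->head]);   /* drop the oldest */
        r->head = (r->head + 1) % RING_CAPACITY;
        r->count--;
    }
    r->slots[(r->head + r->count) % RING_CAPACITY] = frame;
    r->count++;
    pthread_mutex_unlock(&r->lock);
}

/* Pop the oldest frame, or NULL if the ring is empty; the display
 * thread would call this once per frame interval of the video. */
static AVFrame *ring_pop(FrameRing *r)
{
    AVFrame *frame = NULL;
    pthread_mutex_lock(&r->lock);
    if (r->count > 0) {
        frame = r->slots[r->head];
        r->head = (r->head + 1) % RING_CAPACITY;
        r->count--;
    }
    pthread_mutex_unlock(&r->lock);
    return frame;
}
</pre>
Inside the decode loop above, I'd push a copy of each converted frame
with ring_push() after sws_scale_proc(), and the rendering thread
would drain the ring with ring_pop() at the video's framerate. The
alternative to dropping the oldest frame would be to make ring_push()
block until the consumer catches up, but then the
av_read_frame_proc() loop would stop draining the network, which
seems just as likely to lose data upstream.<br>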
<pre class="moz-signature" cols="72">--
Navin</pre>
</body>
</html>