<html><head><meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space; "><div><div>On Dec 1, 2013, at 2:43 AM, Adi Shavit <<a href="mailto:adishavit@gmail.com">adishavit@gmail.com</a>> wrote:</div><br class="Apple-interchange-newline"><blockquote type="cite"><div dir="ltr">Does anyone have any insights or some references I should follow regarding this issue?</div></blockquote><div><br></div><div>Adi, are you aware that ffmpeg can already employ multi-threaded decoding? If you set the desired number of threads via <span style="color: rgb(112, 61, 170); font-family: Menlo; font-size: 11px; ">thread_count</span> in your AVCodecContext before opening the codec, it will do exactly what you propose.</div><div><br></div><div>In effect, the first few decode calls will return immediately, then your frames will start to come out, delayed by the number of threads you requested.</div><div><br></div><div>Bruce</div><div><br></div><br><blockquote type="cite"><div class="gmail_extra"><br><br><div class="gmail_quote">On Tue, Nov 26, 2013 at 9:15 PM, Adi Shavit <span dir="ltr"><<a href="mailto:adishavit@gmail.com" target="_blank">adishavit@gmail.com</a>></span> wrote:<br>
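For reference, a minimal sketch of what Bruce describes, using FFmpeg's public decoder API (the helper name and the choice of thread count are mine, for illustration only):

```c
#include <libavcodec/avcodec.h>

/* Hypothetical helper: open a decoder with multi-threaded decoding
 * enabled.  thread_count must be set before avcodec_open2(). */
AVCodecContext *open_threaded_decoder(enum AVCodecID codec_id, int nthreads)
{
    const AVCodec *codec = avcodec_find_decoder(codec_id);
    if (!codec)
        return NULL;

    AVCodecContext *ctx = avcodec_alloc_context3(codec);
    if (!ctx)
        return NULL;

    /* Request multi-threaded decoding *before* opening the codec;
     * setting it afterwards has no effect. */
    ctx->thread_count = nthreads;   /* e.g. the number of CPU cores */

    if (avcodec_open2(ctx, codec, NULL) < 0) {
        avcodec_free_context(&ctx);
        return NULL;
    }
    return ctx;
}
```

Note that with frame-level threading this introduces the latency Bruce mentions: roughly `nthreads` frames are in flight before the first decoded frame emerges.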
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Hi,<div><br></div><div> I am consuming a multi-program transport stream with several video streams and decoding them simultaneously. This works well.</div>
<div><br></div><div>I am currently doing it all on a single thread. </div>
<div>Each AVPacket received by av_read_frame() is checked for the relevant stream_index and passed to a <i>corresponding</i> decoder. </div><div>Hence, I have one AVCodecContext per decoded elementary stream. Each such AVCodecContext handles one elementary stream, calling avcodec_decode_video2() etc.</div>
<div><br></div><div>The current single-threaded design means that the next packet isn't read until the previous one has been decoded.</div><div>I'd like to move to a multi-threaded design where each AVCodecContext resides in its own thread with its own concurrent SPSC queue of AVPackets; the master thread calls av_read_frame() and pushes each coded packet onto the relevant queue (Actor Model / Erlang style).</div>
<div>Note that each elementary stream is always decoded by the same single thread.</div><div><br></div><div>Before I refactor my code to do this, I'd like to know if there is anything on the avlib side <i>preventing</i> me from implementing this approach.</div>
<div><ul><li>An AVPacket contains pointers to internally and externally allocated data. Is any of that data shared between elementary streams?<br></li><li>What pitfalls should I be aware of?</li></ul></div><div>Please advise,</div><div>Thanks,</div>
<div>Adi</div><div><br><br></div></div>
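The dispatch loop described above might be sketched roughly as follows. The `spsc_queue_*` API is a hypothetical stand-in for whatever concurrent queue implementation is used; the packet handling uses FFmpeg's reference-counted packet API:

```c
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>

/* Hypothetical SPSC queue -- stand-in for a real concurrent queue. */
typedef struct SpscQueue SpscQueue;
void spsc_queue_push(SpscQueue *q, AVPacket *pkt);

/* Master thread: read packets and hand each one to the queue of the
 * decoder thread that owns that elementary stream.  queues[i] belongs
 * to stream index i (NULL for streams we don't decode). */
static void demux_loop(AVFormatContext *fmt_ctx, SpscQueue **queues)
{
    AVPacket pkt;
    while (av_read_frame(fmt_ctx, &pkt) >= 0) {
        SpscQueue *q = queues[pkt.stream_index];
        if (q) {
            /* Give the decoder thread its own reference to the
             * packet data; after pushing, the master thread must
             * not touch 'owned' again. */
            AVPacket *owned = av_packet_clone(&pkt);
            if (owned)
                spsc_queue_push(q, owned);
        }
        av_packet_unref(&pkt);
    }
}
```

Since each elementary stream is decoded by exactly one thread and each AVCodecContext is touched by only that thread, no locking is needed around the decode calls themselves; the queues are the only shared state.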
</blockquote></div><br></div>
_______________________________________________<br>Libav-user mailing list<br><a href="mailto:Libav-user@ffmpeg.org">Libav-user@ffmpeg.org</a><br>http://ffmpeg.org/mailman/listinfo/libav-user<br></blockquote></div><br></body></html>