[FFmpeg-devel] GSoC

Stephan Holljes klaxa1337 at googlemail.com
Wed Feb 28 00:11:20 EET 2018

On Tue, Feb 27, 2018 at 8:13 PM, Nicolas George <george at nsup.org> wrote:
> Hi, and welcome back.

Thank you, and thanks for the feedback.

> Stephan Holljes (2018-02-26):
>> seeing that people are already applying for projects, I think I should as well.
>> My name is Stephan Holljes, on IRC I go by klaxa. Some may remember me
>> from GSoC 2015 where I implemented the (as I later found out,
>> controversially debated) HTTP server API.
>> Since someone else already applied for the DICOM project, which I
>> would also like to do, maybe instead I could work on an ffserver
>> replacement, since it has been dropped recently. Maybe some have
>> already seen my mkv-server (https://github.com/klaxa/mkvserver_mk2/)
>> which was mentioned a few times on the ML. I think it could serve as a
>> starting point, since it already only uses the public API.
> I do not know what features you have in mind exactly for this project,
> but it reminds me of a pet project that I have: a muxer that acts as a
> multi-clients server.

More protocols than HTTP (presumably RTP and/or RTMP, although I
cannot guess how much work that would be) and/or HTTP-based protocols
like DASH or HLS? I recently discussed this with a friend, who argued
that web servers already do a lot of optimization regarding connection
handling and that implementing it again would be somewhat reinventing
the wheel. So I'm not too sure about the usefulness of that.

> It would work like the tee muxer: frames sent to the muxer are
> duplicated and re-muxed into several slave muxers. The difference is
> that the list of slave muxers is not set at the beginning but evolves
> when clients connect and disconnect.
> I have not given much thought to a way of doing it that is expressive
> enough so that it would not be just a toy. Using programs to allow the
> server to provide different streams? Using attachments to send
> non-live files?

In fact, the project I posted already does that, although for the time
being only for video files and only for a single stream; the
architecture, however, allows any number of streams to be served.

> Anyway, if somebody wants to achieve anything in that direction within
> the FFmpeg project, I am pretty sure it would require a big cleanup of
> the inconsistent mess that is the network code currently.

Since I have only gained somewhat in-depth insight into HTTP and TCP,
I'll have to do quite some research on the other protocols.

> Due to personal stuff, I do not feel up to offering mentoring this year
> either, but I can share my thoughts on the issue and help as much as I
> can.
> The big but not-that-hard part of the issue would be to design and
> implement a real event loop with enough features.
> It needs to be able to run things in separate threads when they need to
> react with low latency. That is the infamous "UDP thread". Note that a
> single thread should be able to handle several low-latency network
> streams, if it is not burdened by heavy computation at the same time.
> It needs to be able to run things in separate threads to integrate
> protocols that have not been integrated with it. Lightweight
> collaborative threads could be used, but they do not seem to exist with
> an up-to-date portable API.
> Protocols that rely on sub-protocols (HTTP→TCP, RTP→UDP, etc.) need to
> be updated to make use of the event loop instead of querying directly
> their slaves. Protocols will probably need to be attached to an event
> loop.

I'm not sure I understand. So far I could implement everything just
using the public HTTP API without any modifications.

> That means having an event loop will be mandatory when using the current
> AVIO API: its implementation must be updated to create one on the fly as
> needed. But they need to be deprecated too.

So far my project implements an event loop in which any number of
pthread threads can be used to process data transfer from the server
to the clients. The state of each client transfer is stored in a
struct which is protected by a pthread_mutex during modification. Each
of these worker threads is stateless and does work depending on the
state of the client structs. A separate thread handles accepting
connections and sending the muxed header. Yet another thread reads the
input file.

> Some work needs to be done on demuxers to make them non-blocking. I think
> the best approach is to keep using the push/pull pattern: instead of
> providing an AVIO to a demuxer, push it data as it arrives, and then try
> to read the frames, or maybe rely on a callback.
> Demuxers that have not been updated for that API and rely heavily on
> blocking reads of the exact amount of data they need will probably need
> to be run in separate threads (again, if we could have lightweight
> threads…).

Again, I'll probably have to do quite some research on this. So far
things have worked pretty okay-ish with blocking calls, but I guess
that's also because threads are kind of a giant hack?

> All this is only the skeleton of the ideas I have accumulated for these
> issues. It would make very interesting and stimulating work.
> Unfortunately, I do not have the time to work on it at the moment. Nor do
> I have the energy to fight all the negativity that such an endeavour
> would elicit from some people. But if somebody is interested I would be
> happy to share details about any part of the plan.
> Regards,
> --
>   Nicolas George
> _______________________________________________
> ffmpeg-devel mailing list
> ffmpeg-devel at ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/ffmpeg-devel

Again thanks for your thoughts!

