[FFmpeg-devel] [PATCH] Add libavsequencer.

Sebastian Vater cdgs.basty
Wed Aug 18 22:21:28 CEST 2010


Ronald S. Bultje wrote:
> Hi,
>
> On Tue, Aug 17, 2010 at 1:43 PM, Michael Niedermayer <michaelni at gmx.at> wrote:
>   
>> On Sat, Aug 14, 2010 at 02:24:15AM +0200, Stefano Sabatini wrote:
>>     
>>> On date Friday 2010-08-13 22:53:30 +0200, Sebastian Vater encoded:
>>>       
>>>> The new library is meant to contain the sequencer multimedia features for
>>>> being able to playback modules and MIDI files in FFmpeg.
>>>>
>>>>         
>>> [...]
>>>       
>>>> diff --git a/libavsequencer/avsequencer.c b/libavsequencer/avsequencer.c
>>>> new file mode 100644
>>>> index 0000000..d43284f
>>>> --- /dev/null
>>>> +++ b/libavsequencer/avsequencer.c
>>>> @@ -0,0 +1,43 @@
>>>> +/*
>>>> + * Implement AVSequencer functions
>>>> + * Copyright (c) 2010 Sebastian Vater <cdgs.basty at googlemail.com>
>>>> + *
>>>> + * This file is part of FFmpeg.
>>>> + *
>>>> + * FFmpeg is free software; you can redistribute it and/or
>>>> + * modify it under the terms of the GNU Lesser General Public
>>>> + * License as published by the Free Software Foundation; either
>>>> + * version 2.1 of the License, or (at your option) any later version.
>>>> + *
>>>> + * FFmpeg is distributed in the hope that it will be useful,
>>>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>>>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
>>>> + * Lesser General Public License for more details.
>>>> + *
>>>> + * You should have received a copy of the GNU Lesser General Public
>>>> + * License along with FFmpeg; if not, write to the Free Software
>>>> + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
>>>> + */
>>>> +
>>>> +#include "config.h"
>>>> +#include "avsequencer.h"
>>>> +
>>>> +/**
>>>> + * @file
>>>> + * Implement AVSequencer functions.
>>>> + */
>>>> +
>>>> +#include <libavutil/avutil.h>
>>>> +
>>>>         
>>>> +unsigned avsequencer_version(void) {
>>>> +    return LIBAVSEQUENCER_VERSION_INT;
>>>> +}
>>>> +
>>>> +const char *avsequencer_configuration(void) {
>>>> +    return FFMPEG_CONFIGURATION;
>>>> +}
>>>> +
>>>> +const char *avsequencer_license(void) {
>>>> +#define LICENSE_PREFIX "libavsequencer license: "
>>>> +    return LICENSE_PREFIX FFMPEG_LICENSE + sizeof(LICENSE_PREFIX) - 1;
>>>> +}
>>>>         
>>> Nits:
>>> foo(...)
>>> {
>>>   ...
>>> }
>>>
>>> Looks OK otherwise to me, assuming that Michael is OK with
>>> libavsequencer inclusion, note that libavsequencer is currently
>>> disabled by default. We can consider libavsequencer API unstable, and
>>> so don't worry too much about API/ABI breaks.
>>>
>>> I believe it's better to keep it integrated and disabled by default
>>> rather than keep it in an external repo, Sebastian will add piece by
>>> piece as review will go on.
>>>       
>> this patch here is ok if the respective maintainers are ok with it
>>     
>
> I want to hold off until I see the TCM decoder patches and am actually
> convinced that we need a pluggable mixer framework. Until then, please
> don't apply this yet.
>   

Just one question here: what does the pluggable mixer stuff have to do
with the basic lavseq integration patch?

If I remember correctly, no line here depends on whether we take a
pluggable mixer API or not. The same goes for the TCM decoder patches.

Regarding the pluggable mixing API, let me summarize the discussion I
recently had with Stefano about it.

The point is that we were discussing OPL2/3 and similarly special stuff,
but the actual point is much simpler and closer to home (10l to me for that).

We were discussing only pure software mixing engines (like the low / high
quality mixers), but totally missed the point (blame me for this, for not
remembering it even though I knew it perfectly well) that there are a lot
of hardware mixing capabilities out there.

I remembered it when I discussed the features of the original DOS Cubic
Player (the most famous MOD player of its time) and its open source
successor, known today as OpenCubicPlayer (look for a package named ocp
in your Linux repository). It supported not only basic software mixing
(null, lq, hq and floating point mixers) but also hardware mixing (in
the DOS days there was a GUS mixer, an SB AWE 32/64 EMU8000 chip
mixer, etc.).

Now, of course, we no longer work with old Gravis UltraSound cards and/or
the SB AWE 32/64, but today we have:
the DirectSound mixing API (which uses the hardware mixing of modern
sound cards if available), or the OpenAL API (same thing for Linux, etc.).

Since practically every sound card today supports hardware mixing, this
raises the question: "Why not use HW mixing as the only option, then?"

Well, the answer is: hardware mixers differ greatly in their features and
capabilities, especially the maximum number of channels, volume handling
and output sampling rate.

The craziest one in this regard is probably the GUS: the more channels
you allocated, the lower the maximum sampling rate allowed (independently
of how many channels are actually being played). IIRC the GF1 ran at
44.1 kHz with up to 14 voices allocated and dropped to roughly 19.3 kHz
with all 32 allocated.

Such limitations still apply to most hardware mixers today (very often
there is a limit of 64 channels, restrictions on panning (often no
surround), volume ranges, etc.).

Now remember the point I already made: you can "chain" mixers, as I
planned and posted here as a patch (see my copying of channels from the
lq to the null mixer and back in the exact seeking discussion).

This is where FFmpeg can kick ass again: if we manage to offer a mixer
API which utilizes a) hardware mixing devices and b) software mixing
devices... we can combine THEM!

Cubic Player, for example, was not able to chain / mix them, which meant
either using a hardware mixer (with all its limits) XOR a software mixer.

This, however, is not necessary (even with the current mixing API I submitted)!

We could query a list of available mixers, add some flags like hardware
mixer / software mixer, and combine them!
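
A rough sketch of what such a query could look like (all names here are
hypothetical, this is not the API from the submitted patch):

/* Hypothetical mixer capability flags and iteration API -- a sketch
 * for discussion only, not part of the submitted patch. */
#define AVSEQ_MIXER_FLAG_HARDWARE 0x01
#define AVSEQ_MIXER_FLAG_SOFTWARE 0x02

typedef struct AVSequencerMixerInfo {
    const char *name;         /* e.g. "hq", "dsound" */
    unsigned flags;           /* hardware and/or software */
    unsigned max_channels;    /* channel limit of this mixer */
    unsigned max_rate;        /* maximum output sampling rate */
} AVSequencerMixerInfo;

/* Iterate over all registered mixers, the same way lavf iterates
 * over registered (de)muxers. */
const AVSequencerMixerInfo *avseq_mixer_next(const AVSequencerMixerInfo *prev);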

To illustrate, an example:
We have a module requiring 256 channels.

a) We query the list of hardware mixers and get back a list of mixers
which support 64 channels of hardware mixing.

b) We query the list of software mixers and get the low, high and null
mixers, all supporting at most 65535 channels.

Fine, we have hardware rendering for 64 channels, BUT: what about the
remaining 192 channels?
Should we therefore take ONLY b)? Nah, that's idiotic!

So instead, we allocate a hardware mixer for the first 64 channels of
the module *AND* a software mixer for the remaining 192 channels!

If there are two hardware mixers (a user with two sound cards, maybe), we
can allocate 64 channels to each sound card and the remaining 128 channels
to software rendering, saving a huge amount of mixing calculation time.
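
In (equally hypothetical) code, the allocation above could look roughly
like this, reusing the sketched avseq_mixer_next() from before:

/* Sketch: split a 256-channel module across the available mixers,
 * hardware first, software for the rest. All avseq_* names are
 * hypothetical; FFMIN comes from libavutil. */
unsigned needed = 256;
const AVSequencerMixerInfo *m;

for (m = avseq_mixer_next(NULL); m && needed; m = avseq_mixer_next(m)) {
    unsigned take;
    if (!(m->flags & AVSEQ_MIXER_FLAG_HARDWARE))
        continue;
    take = FFMIN(needed, m->max_channels);
    avseq_mixer_assign_channels(m, take); /* hypothetical helper */
    needed -= take;
}

/* Whatever is left over goes to a software mixer (lq / hq / null). */
if (needed)
    avseq_mixer_assign_channels(sw_mixer, needed);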

Luckily, these days we don't have to write mixers for every sound card
floating around out there (as was required back in DOS times); we could
simply write a) a DirectSound mixer which queries the mixer capabilities
and/or b) a mixer using OpenAL or a similar API.

I'm not quite sure about this, but maybe even plain ALSA has such capabilities...

So for the multiple mixer question:
We could have:
a) null_mixer
b) low_quality_mixer
c) high_quality_mixer
d) dsound_hwaccel_mixer
e) openal_hwaccel_mixer
f) sbawe32_hwaccel_mixer
g) gus_hwaccel_mixer

and maybe many more sound card / API mixers (what does Apple have to
offer in this regard, mru?)
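
Registration could then work the same way it does for muxers and demuxers
in lavf, guarded by configure switches (again a hypothetical sketch; the
CONFIG_* names and avseq_mixer_register() don't exist yet):

/* Sketch: register the mixers that were enabled at configure time. */
#if CONFIG_DSOUND_MIXER
    avseq_mixer_register(&dsound_hwaccel_mixer);
#endif
#if CONFIG_OPENAL_MIXER
    avseq_mixer_register(&openal_hwaccel_mixer);
#endif
    avseq_mixer_register(&null_mixer);
    avseq_mixer_register(&lq_mixer);
    avseq_mixer_register(&hq_mixer);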

-- 

Best regards,
                   :-) Basty/CDGS (-:
