[Ffmpeg-devel] [RFC/Patch] native mms code...
Tue Jan 9 02:24:48 CET 2007
On Mon, Jan 08, 2007 at 06:24:45PM -0600, Ryan Martell wrote:
> On Jan 6, 2007, at 7:27 PM, Michael Niedermayer wrote:
> >On Fri, Jan 05, 2007 at 06:04:18PM -0600, Ryan Martell wrote:
> >>On Jan 5, 2007, at 5:38 PM, Michael Niedermayer wrote:
> >>>On Fri, Jan 05, 2007 at 03:56:33PM -0600, Ryan Martell wrote:
> >>>>>>On Tue, Jan 02, 2007 at 04:51:07PM -0600, Ryan Martell wrote:
> >>>>>>>4) MMS has parameters for the tcp connection bitrate; and if
> >>>>>>>there are multiple encodings in the file, it will choose the
> >>>>>>>best ones. I am currently only streaming the first audio and
> >>>>>>>first video stream. How would I get the bandwidth input from
> >>>>>>>the user? I know we don't want to add new AVOptions that
> >>>>>>>aren't globally useful. Also, I want to be able to ask for
> >>>>>>>audio only in this manner.
> >>>>>>see AVStream.discard
> >>>>>Will look into it.
> >>>>I don't think this is what I was looking for. Specifically, when I
> >>>>am establishing the mms stream, I can ask for any one of a
> >>>>number of
> >>>>streams (it may have an audio stream, and a low, medium, and high
> >>>>quality all in the same file). The way Windoze handles that is
> >>>>by a
> >>>>bitrate setting, where it chooses the best stream that will fit
> >>>>within the specified bitrate. So essentially I'd need a bitrate
> >>>>parameter. Also, I want to be able to stream audio only, so I
> >>>>want to have the option of turning off the video stream. The rtp
> >>>>code uses a similar feature, with the tcp settings global. I don't
> >>>>want to use a global if I don't have to; what's the better way?
> >>>my idea was that during header parsing all streams would be enabled,
> >>>so that all AVStreams would be initialized with bitrate, width/height,
> >>>sample_rate, ...
> >>>then the user (application) would set AVDiscard for all streams it
> >>>doesn't want, and the mms demuxer would tell the server not to send
> >>>anything the user doesn't want
> >>>there are several cases where a pure 0/1 video + 0/1 audio bitrate
> >>>based selection would fail
> >>>* a user might want all streams for archival/backup purposes
> >>>* the streams might have different resolution and the users screen
> >>> or cpu might limit her to some resolution so her stream selection
> >>> criteria becomes more complex
> >>>* the user might care more about audio (or video) quality than
> >>> video (or audio)
> >>>implementing this part on the user app side is trivial using eval.c
> >>>with a user-supplied scoring function (a score >0 means always get
> >>>the stream, and if none has a score >0 then the one with the
> >>>highest score would be chosen); that could of course also be done
> >>>in the mms demuxer, but then a string for eval.c would have to be
> >>>passed from the user app to the demuxer ...
> >>>but maybe I am missing some detail why this isn't possible?
> >>Those are valid points, but I'm not sure if MMS can stream more than
> >>one video stream at a time. I don't think it was architected for the
> >>features you have suggested- I think it is dumber than that, and
> >>mainly designed for bitrate selection for throughput, not for screen
> >>size or any of the other things you suggested. Although I can
> >>certainly check (if I had an mms stream with multiple encodings in
> >>it, which unfortunately I don't).
> >>Currently, it somewhat works as you described; the headers for all
> >>streams are set up, and the AVStreams are set up to properly handle
> >>the data, but then depending on what I tell the server, only certain
> >>streams receive packets.
> >>As for having it use eval.c or passing in a scoring function, that
> >>sounds like the proper way to go, but I don't know how that gets
> >>added into the AVInputFormat architecture. I have to tell it which
> >>streams to send me during av_open_stream().
> >well I thought about this a little and IMHO the decision of which
> >streams to receive must be done in the user application, not the
> >AVInputFormat; all demuxers in ffmpeg I know of behave this way and
> >ffmpeg would break if this were changed (a user specifies input
> >file:stream pairs which she wants in the output, not some bitrate or
> >other selection criteria)
> >a bitrate-based or other selection criterion would certainly be a nice
> >feature but it belongs in the user app (ffmpeg/ffplay.c), not the
> >demuxer
> >if it's impossible to "activate" all streams, then just set their
> >bitrates, if that's everything you know, and wait with the activation
> >until AVStream.discard has been set by the user app; also the user app
> >might wish to switch to a different stream at runtime (changed network
> >conditions, or the user/app misestimated the ideal bitrate)
> >so just activating stream #1 and then switching would be an option
> >(such a switch should always be possible by reconnecting if mms
> >doesn't directly support it)
> I agree that those features need to be in the application (ffmpeg/
> ffplay), not the library or AVInputFormat.
> I can turn on all the streams (that's not a problem). Turning them
> off appears to require a resend of the ASFHeader (from the server).
> The documentation for MMS isn't open, so I'm using the specifications
> that have been reverse engineered, and what you're asking for hasn't
> really been researched (since WMP doesn't do it).
> Connecting again isn't actually always possible; I connect to a
> tokenized Akamai stream, and the token is only valid for about 5
> seconds. If I reconnect to the exact same URL after the token has
> expired, I get a permission denied. Requerying for a new token is
> possible, but not quite as simple as just reconnecting.
> The way that the rtsp does this is with:
> extern int rtsp_default_protocols;
> extern int rtsp_rtp_port_min;
> extern int rtsp_rtp_port_max;
> which can be set by the application (ffmpeg/ffplay).
the above 3 variables should be removed IMO
> So would something like an:
> extern uint32_t mms_parameters;
> with the bitrate in k in the lower 16 bits, and flags in the upper 16
> bits (Enable audio, enable video, enable all streams),
no, global variables are not acceptable. the problem is not one of how to
pass the wanted bitrate but rather that there is no wanted bitrate; I really
don't like the idea that mms functions differently from other things
of course we could add a wanted-bitrate or max-bitrate parameter, but
currently I am not convinced that this is needed
a user application should need to know as little as possible about the
specific codec / format / protocol / ...
and with such a bitrate value you would create the following mess:
some non-mms stream contains 5 different audio streams with differing
bitrates; the user app just looks at the AVStreams, sets AVStream.discard,
and gets what it expects; it also provides the user with a list of these
streams and allows the user to override this ...
now consider mms: the stream selection requires extra code, the list of
AVStreams presented to the user is incomplete (it just contains the 1
stream which is transmitted), and if the user overrides it, extra code is
required to translate this to the bitrate; now imagine 2 mms streams and
what happens with the global variable
is it possible to create all AVStreams during header decode, set their
bitrates, and then return from the header decode and continue when the
first packet is requested? that way AVStream.discard is set up (if not,
it's a bug) and you could just request from the server the streams which
the user wants
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB
I am the wisest man alive, for I know one thing, and that is that I know
nothing. -- Socrates