[Ffmpeg-devel] Decode an RTP stream in H.263

Cool_Zer0 c00jz3r0
Thu Dec 21 09:36:00 CET 2006


Ryan Martell wrote:
>
> On Nov 30, 2006, at 10:38 AM, Cool_Zer0 wrote:
>
>> On 11/30/06, Ryan Martell <rdm4 at martellventures.com> wrote:
>>>
>>> Hi-
>>>
>>> On Nov 30, 2006, at 4:42 AM, Michael Niedermayer wrote:
>>> > Hi
>>> >
>>> > On Thu, Nov 30, 2006 at 10:05:22AM +0000, Cool_Zer0 wrote:
>>> >> Hi there!
>>> >>
>>> >> I have a question. I've looked at the documentation section of the
>>> >> FFmpeg page and checked the code, but I can't figure it out.
>>> >>
>>> >>
>>> >> My main goal, for now, is to decode the H.263 that I receive over
>>> >> RTP and then show it on my PDA screen.
>>> >>
>>> >> So... I already have the RTP stack and I can extract the encoded
>>> >> H.263 packets.
>>> >>
>>> >> My main question when I look through "libavcodec/h263dec.c" is
>>> >> that you have a decode_init(), a decode_end() and then a
>>> >> decode_frame(). (I think I'm right!!!)
>>> >>
>>> >>
>>> >> And now the ultimate question:
>>> >> - Can I call decode_frame() for each packet that I receive?
>>> >> - What will happen if a packet that I receive doesn't contain a
>>> >> complete frame?
>>> >
>>> > you will have to use an AVParser
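
A rough sketch of what driving such a parser might look like with the
libavcodec calls of this era. This is untested; feed_payload() and
decode_one_frame() are made-up names, the latter sketched further down:

    #include "avcodec.h"

    /* Feed the raw H.263 bytes from the RTP payloads into a parser;
     * it buffers data until it can hand back one complete frame. */
    static AVCodecParserContext *parser;

    void decode_one_frame(AVCodecContext *avctx, uint8_t *data, int size);

    void feed_payload(AVCodecContext *avctx, uint8_t *data, int size)
    {
        if (!parser)
            parser = av_parser_init(CODEC_ID_H263);

        while (size > 0) {
            uint8_t *frame_data;
            int frame_size;
            int used = av_parser_parse(parser, avctx,
                                       &frame_data, &frame_size,
                                       data, size,
                                       AV_NOPTS_VALUE, AV_NOPTS_VALUE);
            data += used;
            size -= used;
            if (frame_size > 0)
                /* one complete frame is ready for the decoder */
                decode_one_frame(avctx, frame_data, frame_size);
        }
    }

av_parser_close(parser) releases the parser when you are done.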
>>> >
>>> >>
>>> >>
>>> >> Another question:
>>> >>
>>> >> On decode_init() you have:
>>> >>
>>> >> /* select sub codec */
>>> >> switch(avctx->codec->id) {
>>> >>   case CODEC_ID_H263: ...
>>> >>   case CODEC_ID_MPEG4: ...
>>> >>   case CODEC_ID_MSMPEG4V1: ...
>>> >>   ...
>>> >>   case CODEC_ID_FLV1: ...
>>> >>
>>> >> I'm wondering if that switch is there to say "the result of the
>>> >> decoding process will be [MPEG4, WMV3, ...]"... Can't I say that I
>>> >> want the result in YUV or RGB?
>>> >
>>> > go and read apiexample.c
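
In other words: that switch only selects which bitstream variant the
decoder handles; the output is always raw video in the codec's native
pixel format (YUV420P for H.263), reported in avctx->pix_fmt.  A hedged
sketch of the RGB conversion step with the img_convert() helper from
this era of libavcodec, assuming picture is the decoded AVFrame
(untested):

    /* The decoded picture is in avctx->pix_fmt (PIX_FMT_YUV420P for
     * H.263).  For an RGB display surface, convert explicitly: */
    AVPicture rgb;
    avpicture_alloc(&rgb, PIX_FMT_RGB24, avctx->width, avctx->height);
    img_convert(&rgb, PIX_FMT_RGB24,
                (AVPicture *)picture, avctx->pix_fmt,
                avctx->width, avctx->height);
    /* ... blit rgb.data[0] (stride rgb.linesize[0]) to the screen ... */
    avpicture_free(&rgb);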
>>>
>>> Have you just tried it?
>>
>>
>>
>> I just finished building it (it put up some fight on Windows Mobile).
>>
>>
>>> rtp.c has the H263 codec in it for a non-dynamic transport protocol
>>> type.  It might just work:
>>>
>>> ./ffplay rtsp://blahblahblah/file.sdp
>>
>> Hmmm... But I don't have RTSP... I don't know the RTSP specification,
>> but in my case I have a desktop client that makes a "video call" to
>> my VoIP application (which I'm developing)... The desktop client
>> sends me H.263 packets generated with Microsoft RTC. In my PDA
>> application I want to receive those packets and then decode them in
>> order to show them.
>
> Well, ffmpeg has the rtsp stuff built into it too, if you can compile 
> it with the networking enabled.  RTSP is the session protocol, which 
> sets up the rtp streams, which (generally) you have two of- an audio 
> and a video.  So if it's a standard rtsp type stream, ffmpeg will 
> handle the audio and the video, and the syncing for you.
>
> Now, if you CAN"T do that, I think you will have a difficult time of 
> syncing the audio and video.
>
> But the answer to your question:
>
>> I could use the stack of FFmpeg, but since I'm already using another
>> RTP stack (because I have sound too) I think it's better to use only
>> one stack instead of two different stacks.
>
>> So... What you are saying is that I can't use h263dec.c directly,
>> right?
>
> No, you could do that, but I'm not sure how much you would have to
> modify.  Go back to Michael's answer and check out apiexample.c.  You
> would want to put the packets you get into an AVParser, though, and
> let it do the right thing in figuring out complete frames.
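
Concretely, the decode step that the parser sketch above left as
decode_one_frame() might look like this, following apiexample.c; again
untested, just the shape of it:

    /* avctx is opened once with avcodec_find_decoder(CODEC_ID_H263),
     * avcodec_alloc_context() and avcodec_open(); picture is allocated
     * once with avcodec_alloc_frame(). */
    static AVFrame *picture;

    void decode_one_frame(AVCodecContext *avctx, uint8_t *data, int size)
    {
        int got_picture = 0;
        int len = avcodec_decode_video(avctx, picture, &got_picture,
                                       data, size);
        if (len < 0) {
            fprintf(stderr, "decode error\n");
            return;
        }
        if (got_picture) {
            /* picture->data[] / picture->linesize[] now hold the YUV
             * planes; convert and display (see img_convert() above). */
        }
    }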
>
>> Is it better for me to use the entire RTP stack of ffmpeg? I imagine
>> that the RTP stack does the assembling of packets and other
>> interesting stuff, right? If I don't use the RTP code of ffmpeg I
>> have to do it myself, right?
>
> Yes; it also does the a/v syncing (or at least keeps the timestamps
> right).
>
> The problem is that if you have audio coming from one source and video
> from another, and you hand the video to ffmpeg to decode, then when
> you get ready to display the video you'll have to figure out what the
> timestamps are, how they have been adjusted, and then work out the
> audio syncing.  Since ffmpeg can handle audio as well, you could let
> it handle it all for you.
>
> Now, if you absolutely have to let someone else hand you the packets 
> for both the audio and video, you could (someone else jump in here if 
> this is a bad idea) set up an AVInputFormat.  The input format could 
> take whatever packets you get over the wire, and massage them into the 
> form that the decoder needs.  This is how ffmpeg handles things like 
> http:, file:, etc.  It's not really the best way to do it, but it 
> might be the easiest to implement.
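
For what it's worth, a very rough sketch of the shape such a demuxer
might take, going by the AVInputFormat struct in the libavformat
headers of this era.  my_rtp_recv() stands in for your own stack's
receive call, the names are illustrative rather than drop-in, and none
of this is tested:

    #include <string.h>
    #include "avformat.h"

    static uint8_t payload_buf[65536];

    static int myrtp_read_header(AVFormatContext *s,
                                 AVFormatParameters *ap)
    {
        AVStream *st = av_new_stream(s, 0);
        if (!st)
            return AVERROR_NOMEM;
        st->codec->codec_type = CODEC_TYPE_VIDEO;
        st->codec->codec_id   = CODEC_ID_H263;
        return 0;
    }

    static int myrtp_read_packet(AVFormatContext *s, AVPacket *pkt)
    {
        /* pull one payload from the external RTP stack (placeholder) */
        int size = my_rtp_recv(payload_buf, sizeof(payload_buf));
        if (size <= 0)
            return AVERROR_IO;
        if (av_new_packet(pkt, size) < 0)
            return AVERROR_IO;
        memcpy(pkt->data, payload_buf, size);
        pkt->stream_index = 0;
        return 0;
    }

    static AVInputFormat myrtp_demuxer = {
        "myrtp",                               /* name */
        "packets from an external RTP stack",  /* long_name */
        0,                                     /* priv_data_size */
        NULL,                                  /* read_probe */
        myrtp_read_header,
        myrtp_read_packet,
    };

    /* at startup: */
    av_register_input_format(&myrtp_demuxer);

With no read_probe you would select the format explicitly, e.g. pass
av_find_input_format("myrtp") to av_open_input_file().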
>

Hi there.
Seems that I'll have to use the AVInputFormat. Is there any
documentation on this subject?


Thanks

[...]



