[Ffmpeg-devel] RTP patches & RFC
Mon Oct 9 20:22:24 CEST 2006
On Oct 9, 2006, at 11:35 AM, Michael Niedermayer wrote:
> On Mon, Oct 09, 2006 at 10:53:31AM -0500, Ryan Martell wrote:
>>>>> 5) My code in rtp_h264 uses linked lists. It creates packets,
>>>>> stuffs data into them, resizes them, etc. This means at best 2
>>>>> copies per packet (from the UDP buffer, to my packet, to the
>>>>> AVPacket), and at worst it could be many copies (I have to
>>>>> aggregate all of the packets for a given timestamp together). My
>>>>> philosophy is "get it working, then make it fast." Are these
>>>>> acceptable issues? I could keep a pool of packets around, but the
>>>>> payloads are various sizes. Alternatively, I could set it up the
>>>>> way tcp handles its streams, but that's a lot of overhead and
>>>>> room for error.
>>>> Can't you put some of this part in the framer layer?
>>> framer == AVParser (so you have a chance to find it ...)
>>> anyway, code which does unneeded memcpys when they can be avoided
>>> will be rejected
>>> and i don't understand why you need 2 copies; just have a few
>>> AVPackets (one for each frame) and get_buffer() the data into them
>>> if the final size isn't known (misdesigned protocol ...) then you
>>> need some av_realloc() for out-of-order packets, which IMO should
>>> be rare; memcpy() should be fine
>> Okay, I'll take a look at the framer. I was only looking at the rtp/
>> rtsp stuff, and have no idea what happens to the AVPacket once I hand
>> it up from the rtp code.
>> I don't think I'm using unneeded memcpys right now; this is what
>> happens:
>> 1) A UDP packet comes in.
>> 2) There are three things that can happen to that packet:
>> a) I split it into several NAL units (each going onto my own
>> internal packet queue).
>> b) I pass it through unchanged (to my own internal packet queue).
>> c) I accumulate it into a partial packet, which, when complete
>> (it takes multiple udp packets to compose), gets added to my own
>> internal packet queue.
>> 3) I then take all the packets on my own internal queue, get all of
>> them that are for the same timestamp (assuming there is a different
>> timestamp beyond them in the queue, meaning I have everything for a
>> given frame), and accumulate them into a single AVPacket.
>> Previously, I was not using my own packet queue and was just handing
>> them up as I got them. But the h264 codec must have all NALs for a
>> given frame at the same time, so that didn't work.
> just set AVStream->need_parsing = 1; in the "demuxer" and an AVParser
> shall merge and chop up the packets into complete frames; there's only
> one thing it cannot do, and that is reorder packets ...
Okay, I've been cleaning stuff up per other suggestions, and I'm
pretty close and pretty generic. I understand what you're saying
above, but I have a couple of questions:
1) Currently, there doesn't appear to be a way to take a single udp
packet and return multiple AVPackets from it (using the existing rtp
codebase). Thus, when I get a packet over the wire that needs to be
split into multiple NALs, I'm not sure how I could return multiple
packets to the AVParser.
2) I would still have to accumulate fragmented packets prior to
passing them to the AVParser, correct?
3) Maybe I'm going about this wrong; should I have just written a
special h264/rtp AVParser that takes whatever packet came over the
wire and splits it, passes it through, or combines it? If so, what
would be a good place to look for initial ideas?
Furthermore, when I set the packet's timestamp field to the raw rtp
timestamp, the video plays slower than the audio (which appears to be
playing at normal speed). When I instead set the timestamp using the
90kHz math from rtp.c:
delta_timestamp = timestamp - s->last_rtcp_timestamp;
/* convert to 90 kHz without overflow */
addend = (s->last_rtcp_ntp_time - s->first_rtcp_ntp_time) >> 14;
addend = (addend * 5625) >> 14;
pkt->pts = addend + delta_timestamp;
Then I get video that plays slower than the audio.