[FFmpeg-devel] [PATCH] RV30/40 decoder

Michael Niedermayer michaelni
Tue Sep 18 01:57:54 CEST 2007


On Tue, Sep 18, 2007 at 12:41:42AM +0200, Roberto Togni wrote:
> On Mon, 17 Sep 2007 11:34:17 +0200
> Michael Niedermayer <michaelni at gmx.at> wrote:
> [...]
> > > 
> > > They are stored in container that way and I have not found a way to determine
> > > whether current slice is really previous slice tail or not while demuxing.
> > 
> > how does MPlayer's demuxer do it? it passes complete frames from what
> > i remember
> > 
> [...]
> > > > AVCodecContext.slice_count and slice_offset is deprecated, they break
> > > > remuxing, cause thread issues with the demuxer writing these into the
> > > > decoder context, ...
> > > 
> > > At least MPlayer and Xine pass slices gathered into one frame and my decoder
> > > decodes both single slice and multiple slice data.
> > 
> > mplayer and xine should be changed to pass the offsets or sizes within the
> > frame; that makes remuxing work, fixes a few race conditions and so on
> > of course it's not your job to fix xine and mplayer, but if you support
> > only the broken API no one will fix them
> > 
> This is how MPlayer passes frames to the decoder. All data is
> passed along with the frame; there is no out-of-band data
> (except for the extradata stored in the file header). Please note that
> some changes are made in vd_ffmpeg before sending this data to lavc.
> This is a quick explanation, I'll double-check it tomorrow; feel free
> to ask for details.
> uint32_t chunk_number-1 //< number of chunks (0==1 chunk)
> uint32_t timestamp //< frame timestamp, the int value from the stream
> uint32_t video_data_length //< length of the video data (all chunks)
> uint32_t chunk_table_offset //< offset to chunk table
> uint8_t[video_data_length] //< video data
> uint32_t 1 //< flag?
> uint32_t offset_to_1st_chunk // counting from video data
> uint32_t 1
> uint32_t offset_to_2nd_chunk
> ...
> uint32_t 1
> uint32_t offset_to_nth_chunk
> At the moment all uint32_t fields are stored in native-endian format
> (this will be changed). IIRC the binary codec needs the chunk table in
> native endian.
> Chunks are the video subpackets in the demuxer; they are probably the
> same thing you call slices.
> The 1 fields (some kind of flag?) are needed by the binary codec. They
> are passed this way from the demuxer to avoid a realloc() for every
> video frame (IMO this code predates my hacking on the real demuxer).
> The binary codec wants the first 4 values in a special struct
> (along with some padding), and everything else starting from the video
> data in a separate buffer.
> Don't ask me why the binary codec needs the timestamp of the frame.
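The layout quoted above could be parsed roughly as follows. This is a hypothetical sketch, not MPlayer's actual code: all names are illustrative, the uint32_t fields are assumed native-endian as described, and chunk_table_offset is assumed to count from the start of the video data.

```c
#include <stdint.h>
#include <string.h>
#include <stddef.h>

/* Illustrative view of the frame buffer described above (not MPlayer's
 * actual struct). The chunk table is a list of (flag == 1, offset) pairs. */
typedef struct DemuxedFrame {
    uint32_t num_chunks;         /* header stores chunk_number - 1 */
    uint32_t timestamp;
    uint32_t video_data_length;
    const uint8_t *video_data;
    const uint8_t *chunk_table;
} DemuxedFrame;

static uint32_t read_u32(const uint8_t *p)
{
    uint32_t v;
    memcpy(&v, p, sizeof(v));    /* avoids unaligned uint32_t loads */
    return v;
}

static int parse_frame(const uint8_t *buf, size_t size, DemuxedFrame *f)
{
    if (size < 16)
        return -1;
    f->num_chunks        = read_u32(buf)      + 1;  /* field is chunks - 1 */
    f->timestamp         = read_u32(buf + 4);
    f->video_data_length = read_u32(buf + 8);
    uint32_t table_off   = read_u32(buf + 12);      /* from video data start */
    /* the chunk table (two uint32_t per chunk) must fit inside the buffer */
    if ((uint64_t)16 + table_off + 8ULL * f->num_chunks > size)
        return -1;
    f->video_data  = buf + 16;
    f->chunk_table = buf + 16 + table_off;
    return 0;
}

/* offset of chunk idx, counted from the start of the video data */
static uint32_t chunk_offset(const DemuxedFrame *f, uint32_t idx)
{
    return read_u32(f->chunk_table + 8 * idx + 4);
}
```

A decoder taking the non-deprecated path would walk chunk_offset(f, 0..num_chunks-1) to find each slice inside video_data, instead of relying on slice_count/slice_offset in the codec context.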

maybe it uses the timestamp to calculate the motion vectors of direct
MBs in B frames, that is, direct MBs use the MV of the next P frame and
use the timestamps of the next and previous P frames and the timestamp
of the B frame to find the forward and backward MVs
(it's done that way in h.264 and mpeg4 part 2, though they don't depend
on timestamps from outside the video stream)
(note IIRC kostya called direct MBs interpolated MBs)
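The scaling guessed at above can be sketched concretely. This mirrors the temporal-direct derivation of H.264 / MPEG-4 part 2 that the paragraph refers to; whether RV40 does exactly this is the speculation, so treat it as an illustration, not the confirmed RV40 method.

```c
/* Temporal-direct MV sketch: split the co-located MV of the next P frame
 * into forward and backward parts by the ratio of timestamp distances
 * (as in H.264/MPEG-4 part 2; hypothetical for RV40). */
typedef struct { int x, y; } MV;

static void direct_mvs(MV col, int ts_prev_p, int ts_b, int ts_next_p,
                       MV *fwd, MV *bwd)
{
    int td = ts_next_p - ts_prev_p;  /* distance between the two P frames  */
    int tb = ts_b - ts_prev_p;       /* distance from previous P to the B  */
    fwd->x = col.x * tb / td;        /* forward MV: scaled co-located MV   */
    fwd->y = col.y * tb / td;
    bwd->x = fwd->x - col.x;         /* backward MV covers the remainder   */
    bwd->y = fwd->y - col.y;
}
```

So if the timestamp came from outside the video stream, a wrong container timestamp would directly corrupt the predicted MVs of "interpolated" MBs, which would explain why the binary codec wants it.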


Michael     GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

It's not that you shouldn't use gotos but rather that you should write
readable code, and code with gotos often but not always is less readable
