[FFmpeg-devel] Psygnosis YOP decoding

Thomas Higdon thomas.p.higdon
Fri Aug 7 05:22:47 CEST 2009


Hi,

I'm interested in getting involved in FFmpeg development. In order to
get more familiar with the code base, I've decided to take a look at
one of the suggested small FFmpeg tasks:

http://wiki.multimedia.cx/index.php?title=Small_FFmpeg_Tasks#YOP_Playback_System

I've been using this page:

http://wiki.multimedia.cx/index.php?title=Psygnosis_YOP

as a spec for the decoder. I understand that a couple of people
expressed interest in this task last year as a SoC qualifier, but as
far as I can tell nothing came of it: there's no decoder in the tree
that I can see, nor any reference to one in the svn history or on the
mailing list. Let me know if I'm wrong.

I've had some luck so far. I've been able to decode the audio data by
writing a demuxer and sending it to the Westwood IMA ADPCM decoder.
I've also written some code for decoding the video, but I'm a little
confused by what's on the wiki. I have a few questions:

1. How does the palette work? Each frame apparently carries its own
palette part, which is PalColors * 3 bytes long, one byte per RGB
color component. I'm fairly sure this is correct, because the audio
data starts where I expect it to. However, I'm not clear on the roles
of FirstColor1 and FirstColor2. What does it mean to "update PalColors
entries starting from FirstColor1 for odd and FirstColor2 for even
frames respectively"? Does that mean the decoder has to carry
persistent palette state that each frame only partially updates?
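
In code terms, here is roughly what I'm assuming; the names and the
persistent-palette model are my own guesses, not something the wiki
spells out:

#include <stdint.h>

static uint8_t palette[256][3];   /* persists across frames */

static void update_palette(const uint8_t *pal_data, int pal_colors,
                           int first_color1, int first_color2,
                           int frame_number)
{
    /* odd frames start overwriting at FirstColor1, even frames at
     * FirstColor2 -- at least that is how I read the wiki text */
    int start = (frame_number & 1) ? first_color1 : first_color2;

    if (start + pal_colors > 256)
        pal_colors = 256 - start;   /* guard against malformed input */

    for (int i = 0; i < pal_colors; i++) {
        palette[start + i][0] = pal_data[3 * i + 0]; /* R */
        palette[start + i][1] = pal_data[3 * i + 1]; /* G */
        palette[start + i][2] = pal_data[3 * i + 2]; /* B */
    }
}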

2. I'm assuming that when the algorithm says to use a byte to "paint"
a pixel, it means taking the byte, indexing into the current palette,
and painting the pixel with that color. Is this correct?
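
Something like this is what I have in mind (again, just my
interpretation, and paint_pixel is my own name for it):

#include <stdint.h>

/* "Painting" a pixel with byte b = writing palette entry b at (x, y);
 * with a PAL8 output picture, storing the index itself is enough and
 * the actual color lookup happens at display time. */
static void paint_pixel(uint8_t *dst, int linesize, int x, int y,
                        uint8_t b)
{
    dst[y * linesize + x] = b;
}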

3. Does the decoding proceed top-to-bottom and left-to-right? That
is, when I look at the first tag in some frame's video data and
consume the next byte to see what color to paint, am I on the upper
left macroblock at that point? Does the decoding then proceed to the
next block to the right after the first is painted? I ask because in
the "copy previous block" part of the algorithm, all of the possible
offsets are negative, and so would seem to refer to already-painted
macroblocks if things are decoded top-to-bottom, left-to-right. Am I
missing something?
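
For what it's worth, this is how I'm currently reading the "copy
previous block" case, assuming left-to-right, top-to-bottom decoding
and treating the table value as a negative displacement within the
output buffer (that last part is my own interpretation):

#include <stdint.h>

/* Copy a block from pixels 'offset' bytes back in the output buffer;
 * with a negative offset and raster-order decoding, the source pixels
 * were already painted earlier in the same frame. */
static void copy_block(uint8_t *dst, int linesize, int x, int y,
                       int block_w, int block_h, int offset)
{
    for (int j = 0; j < block_h; j++)
        for (int i = 0; i < block_w; i++)
            dst[(y + j) * linesize + (x + i)] =
                dst[(y + j) * linesize + (x + i) + offset];
}

If that reading is right, the negative offsets make sense; I mainly
want to confirm the scan order.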

4. Where did the information on the wiki come from anyway? Is there
source code somewhere?

Hope I'm not too naive. I've always been interested in video
compression, but this is the first time I've had a hand in an
implementation.

-t


