[FFmpeg-devel] [RFC] Feeding bit masks into encoders (particularly an ICO muxer)
mbradshaw at sorensonmedia.com
Fri Aug 3 03:30:54 CEST 2012
I'm currently working on an ICO muxer and have it mostly working (PNGs
are supported and BMPs are almost supported, but are the reason for
this email). When a BMP image is put in an ICO file, it is required to
have an associated bitmask that specifies its transparency. The data
for this bitmask is tacked onto the end of the BMP image's data
(regardless of whether the BMP itself is monotone, so the format can
change at this point from color image to monotone bitmask),
and has no header of its own. I haven't settled on a good way for
users to specify an image's bitmask though.
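For reference, the AND mask in question is 1 bit per pixel with each scanline padded to a 32-bit boundary (the same DWORD alignment BMP rows use), so its size follows directly from the image dimensions. A small sketch (the function name is mine):

```c
/* Size in bytes of the 1-bpp AND mask appended to a BMP inside an ICO:
 * one bit per pixel, with each scanline padded to a 32-bit boundary. */
static int ico_and_mask_size(int width, int height)
{
    int stride = ((width + 31) / 32) * 4;  /* bytes per padded mask row */
    return stride * height;
}
```

So a 16x16 icon carries a 64-byte mask and a 48x48 icon a 384-byte one.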
I've thought of allowing packets to be fed into the ICO muxer and
making their ordering significant. PNG packets get directly written,
but when a BMP packet is received, a flag is set in the muxer and the
next packet is required to be the bitmask for that BMP.
I've also thought of allowing the user to skip a bitmask packet if
they want the whole BMP to be visible, in which case the muxer writes
an opaque bitmask out when another image is received or when the
trailer is written. The problem with this, though, is detecting when a
bitmask has been skipped by the user: if the next "color" BMP is itself
monotone, it's hard to tell whether the packet is a bitmask or a color
image. But even if this were sorted out, there's the next problem:
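To make the intended ordering concrete, here is a minimal sketch of that flag logic. The types and names (IcoMuxContext, PacketKind, the helpers) are stand-ins of mine, not libav API, and it assumes the muxer can already tell a mask packet apart from an image packet, which is exactly the detection problem just described:

```c
/* Stand-in packet classification; telling PKT_MASK apart from a
 * monotone PKT_BMP is the unsolved part. */
enum PacketKind { PKT_PNG, PKT_BMP, PKT_MASK };

typedef struct IcoMuxContext {
    int expecting_mask;      /* set after a BMP packet is written */
    int masks_synthesized;   /* how many opaque masks we emitted */
} IcoMuxContext;

static void write_opaque_mask(IcoMuxContext *ctx)
{
    ctx->masks_synthesized++;  /* stand-in for emitting a zeroed AND mask */
}

static void ico_write_packet(IcoMuxContext *ctx, enum PacketKind kind)
{
    if (ctx->expecting_mask && kind != PKT_MASK)
        write_opaque_mask(ctx);          /* user skipped the mask */
    ctx->expecting_mask = (kind == PKT_BMP);
}

static void ico_write_trailer(IcoMuxContext *ctx)
{
    if (ctx->expecting_mask)
        write_opaque_mask(ctx);          /* trailing BMP had no mask */
    ctx->expecting_mask = 0;
}
```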
It's possible the image is monotone, but it probably won't be. This
raises the question of what's used to "encode" the bitmask, because it
probably won't be the same codec that's used to encode the image BMP.
Really, the bitmask is expected to have the same dimensions as its
corresponding color frame, and all I need is the raw data, so the
"encode" step is simply filling a mono frame, which I suppose the user
could do, but I'm unsure how they'd convert that into an AVPacket
(in a way that's consistent with the rest of the API, that is). I
don't like the idea of creating additional streams solely for creating
bitmasks. I suppose I could allow them to create their own AVPacket
and fill it in with data themselves (and maybe not set AV_PKT_FLAG_KEY
to signify it's a bitmask). Or I could write a method that would take
in an encoded BMP image packet and a raw bitmask data pointer and fill
and return an AVPacket (so they don't have to set it up themselves).
But this feels too different from the traditional flow of the API.
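As a sketch of that last idea: the helper would just glue the encoded BMP payload and the raw mask bytes into one buffer, the way the data is laid out inside an ICO entry. The function name is hypothetical, and a real version would fill an AVPacket via av_new_packet() rather than returning a plain malloc'd buffer:

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical helper: concatenate an already-encoded BMP payload with
 * raw 1-bpp mask bytes, matching the on-disk layout of an ICO entry.
 * Returns a malloc'd buffer (stand-in for filling an AVPacket). */
static uint8_t *ico_glue_bmp_and_mask(const uint8_t *bmp, int bmp_size,
                                      const uint8_t *mask, int mask_size,
                                      int *out_size)
{
    uint8_t *buf = malloc(bmp_size + mask_size);
    if (!buf)
        return NULL;
    memcpy(buf, bmp, bmp_size);              /* color image first */
    memcpy(buf + bmp_size, mask, mask_size); /* AND mask appended */
    *out_size = bmp_size + mask_size;
    return buf;
}
```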
I'm completely puzzled about how to let the user specify a bitmask for
this ICO muxer using the libav* API in a way that flows nicely with
the existing API, and would really appreciate your input on this.