int chroma_w, chroma_h;
int vt_extradata_size;
uint8_t *vt_extradata;

/* Branch bodies reconstructed: map the chroma shifts to the vpcC
 * chromaSubsampling code (0/1 = 4:2:0 by siting, 2 = 4:2:2, 3 = 4:4:4). */
if (av_pix_fmt_get_chroma_sub_sample(avctx->sw_pix_fmt, &chroma_w, &chroma_h) == 0) {
    if (chroma_w == 1 && chroma_h == 1) {
        chroma_subsampling = avctx->chroma_sample_location == AVCHROMA_LOC_LEFT ? 0 : 1;
    } else if (chroma_w == 1 && chroma_h == 0) {
        chroma_subsampling = 2; /* 4:2:2 */
    } else if (chroma_w == 0 && chroma_h == 0) {
        chroma_subsampling = 3; /* 4:4:4 */
    }
}

/* vpcC payload: version (1) + flags (3) + six one-byte fields + 16-bit
   codecInitializationDataSize = 12 bytes */
vt_extradata_size = 1 + 3 + 6 + 2;
vt_extradata = av_malloc(vt_extradata_size);
...
av_assert0(p - vt_extradata == vt_extradata_size);

data = CFDataCreate(kCFAllocatorDefault, vt_extradata, vt_extradata_size);
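The elided region serializes those 12 bytes in order; a minimal sketch of that writer, assuming the profile/level/bit-depth/color variables were derived earlier (AV_WB24/AV_WB16 are the big-endian writers from libavutil/intreadwrite.h):

p = vt_extradata;
*p++ = 1;                /* version */
AV_WB24(p, 0); p += 3;   /* flags */
*p++ = avctx->profile;
*p++ = level;            /* assumed computed from the VP9 headers */
*p++ = (bit_depth << 4) | (chroma_subsampling << 1) | full_range_flag;
*p++ = avctx->color_primaries;
*p++ = avctx->color_trc;
*p++ = avctx->colorspace;
AV_WB16(p, 0); p += 2;   /* codecInitializationDataSize: none for VP9 */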
128 .
name =
"vp9_videotoolbox",
AVPixelFormat
Pixel format.
int ff_videotoolbox_common_end_frame(AVCodecContext *avctx, AVFrame *frame)
enum AVColorSpace colorspace
YUV colorspace type.
int ff_videotoolbox_uninit(AVCodecContext *avctx)
This structure describes decoded (raw) audio or video data.
enum AVColorTransferCharacteristic color_trc
Color Transfer Characteristic.
@ AVCOL_RANGE_JPEG
Full range content.
int av_pix_fmt_get_chroma_sub_sample(enum AVPixelFormat pix_fmt, int *h_shift, int *v_shift)
Utility function to access log2_chroma_w and log2_chroma_h from the pixel format's AVPixFmtDescriptor.
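As a usage sketch (the pixel format here is chosen arbitrarily): the function fills the two shift outputs and returns 0 on success, so 4:2:0 yields a shift of 1 in each direction:

#include <libavutil/pixdesc.h>

int h_shift, v_shift;
if (av_pix_fmt_get_chroma_sub_sample(AV_PIX_FMT_YUV420P, &h_shift, &v_shift) == 0) {
    /* AV_PIX_FMT_YUV420P: h_shift == 1, v_shift == 1 */
}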
int ff_videotoolbox_common_init(AVCodecContext *avctx)
enum AVColorPrimaries color_primaries
Chromaticity coordinates of the source primaries.
#define av_assert0(cond)
assert() equivalent, that is always enabled.
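A short sketch of why that matters (buffer and sizes are illustrative): the check survives release builds, unlike plain assert() under NDEBUG:

#include <string.h>
#include <libavutil/avassert.h>

uint8_t buf[12], *p = buf;
memset(p, 0, sizeof(buf));
p += sizeof(buf);
av_assert0(p - buf == sizeof(buf)); /* still enforced with optimizations on */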
enum AVColorRange color_range
MPEG vs JPEG YUV range.
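In extradata code this distinction typically collapses to a single flag; a one-line derivation (the variable name is illustrative; avctx is an AVCodecContext):

int full_range_flag = avctx->color_range == AVCOL_RANGE_JPEG; /* 1 = full (JPEG), 0 = limited (MPEG) */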
@ AVCHROMA_LOC_LEFT
MPEG-2/4 4:2:0, H.264 default for 4:2:0.
struct AVCodecInternal * internal
Private context used for internal data.
int ff_videotoolbox_frame_params(AVCodecContext *avctx, AVBufferRef *hw_frames_ctx)
void * hwaccel_priv_data
hwaccel-specific private data
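A typical access pattern for this field in a VideoToolbox hwaccel, assuming VTContext is the hwaccel's private type:

VTContext *vtctx = avctx->internal->hwaccel_priv_data;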
AVChromaLocation
Location of chroma samples.
const char * name
Name of the hardware accelerated codec.
@ AV_PIX_FMT_VIDEOTOOLBOX
hardware decoding through Videotoolbox
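Callers opt into this format through the standard get_format negotiation; a minimal sketch of such a callback (illustrative, not part of this file):

#include <libavcodec/avcodec.h>

static enum AVPixelFormat pick_videotoolbox(AVCodecContext *avctx,
                                            const enum AVPixelFormat *fmts)
{
    for (const enum AVPixelFormat *p = fmts; *p != AV_PIX_FMT_NONE; p++) {
        if (*p == AV_PIX_FMT_VIDEOTOOLBOX)
            return *p;  /* take the hardware format when offered */
    }
    return fmts[0];     /* fall back to the first software format */
}
/* usage: avctx->get_format = pick_videotoolbox; */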
enum AVChromaLocation chroma_sample_location
This defines the location of chroma samples.
AVCodecContext
main external API structure.
enum AVPixelFormat sw_pix_fmt
Nominal unaccelerated pixel format, see AV_PIX_FMT_xxx.
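Hwaccels consult sw_pix_fmt rather than the opaque hardware pix_fmt to learn the decoded layout; a sketch of depth-based output selection in that spirit (the P010/NV12 split mirrors common VideoToolbox practice and is an assumption here):

#include <libavcodec/avcodec.h>
#include <libavutil/pixdesc.h>

static enum AVPixelFormat pick_output(const AVCodecContext *avctx)
{
    const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(avctx->sw_pix_fmt);
    if (desc && desc->comp[0].depth > 8)
        return AV_PIX_FMT_P010;  /* 10-bit content */
    return AV_PIX_FMT_NV12;      /* 8-bit default */
}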