#ifndef AVCODEC_HWACCEL_INTERNAL_H
#define AVCODEC_HWACCEL_INTERNAL_H

#define HWACCEL_CAP_ASYNC_SAFE     (1 << 0)
#define HWACCEL_CAP_THREAD_SAFE    (1 << 1)
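For orientation, a minimal sketch of how these bits could be queried, assuming avctx->hwaccel is non-NULL; where libavcodec actually performs such checks is not shown in this header, so the surrounding logic is an assumption:

    /* Hypothetical capability check; the places where libavcodec consults
     * these flags are an assumption of this sketch. */
    const FFHWAccel *hwaccel = ffhwaccel(avctx->hwaccel);
    if (hwaccel->caps_internal & HWACCEL_CAP_ASYNC_SAFE) {
        /* assumed meaning: frames may be output while the hardware is still
         * working on them */
    }
    if (hwaccel->caps_internal & HWACCEL_CAP_THREAD_SAFE) {
        /* assumed meaning: the hwaccel may be used from frame-threaded decoders */
    }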
#define FF_HW_CALL(avctx, function, ...) \
    (ffhwaccel((avctx)->hwaccel)->function((avctx), __VA_ARGS__))

#define FF_HW_SIMPLE_CALL(avctx, function) \
    (ffhwaccel((avctx)->hwaccel)->function(avctx))

#define FF_HW_HAS_CB(avctx, function) \
    ((avctx)->hwaccel && ffhwaccel((avctx)->hwaccel)->function)
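A minimal decoder-side sketch of driving a hwaccel through these macros; avctx, buf/buf_size (the picture data) and slice_buf/slice_size (one slice) are assumed to come from the surrounding decoder:

    /* Hypothetical per-picture call sequence built from the macros above. */
    int ret;
    if (FF_HW_HAS_CB(avctx, start_frame)) {
        ret = FF_HW_CALL(avctx, start_frame, buf, buf_size);
        if (ret < 0)
            return ret;
    }
    if (FF_HW_HAS_CB(avctx, decode_slice)) {
        ret = FF_HW_CALL(avctx, decode_slice, slice_buf, slice_size);
        if (ret < 0)
            return ret;
    }
    if (FF_HW_HAS_CB(avctx, end_frame)) {
        ret = FF_HW_SIMPLE_CALL(avctx, end_frame);
        if (ret < 0)
            return ret;
    }

The FF_HW_HAS_CB guards also cover the case where no hwaccel is attached at all, since the macro checks avctx->hwaccel before dereferencing it.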
AVHWAccel p
The public AVHWAccel.
AVFrame
This structure describes decoded (raw) audio or video data.
int(* frame_params)(AVCodecContext *avctx, AVBufferRef *hw_frames_ctx)
Fill the given hw_frames context with current codec parameters.
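A sketch of what an implementation might fill in, assuming libavutil/hwcontext.h is included; the pixel formats and pool size are placeholders, not values from this header:

    /* Hypothetical frame_params callback for an imaginary hwaccel. */
    static int example_frame_params(AVCodecContext *avctx, AVBufferRef *hw_frames_ctx)
    {
        AVHWFramesContext *frames_ctx = (AVHWFramesContext *)hw_frames_ctx->data;

        frames_ctx->format            = AV_PIX_FMT_VAAPI; /* placeholder hw format */
        frames_ctx->sw_format         = AV_PIX_FMT_NV12;  /* placeholder sw format */
        frames_ctx->width             = avctx->coded_width;
        frames_ctx->height            = avctx->coded_height;
        frames_ctx->initial_pool_size = 4;                /* placeholder */
        return 0;
    }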
void(* flush)(AVCodecContext *avctx)
Callback to flush the hwaccel state.
RefStruct is an API for creating reference-counted objects with minimal overhead.
int(* alloc_frame)(AVCodecContext *avctx, AVFrame *frame)
Allocate a custom buffer.
int(* end_frame)(AVCodecContext *avctx)
Called at the end of each frame or field picture.
int(* start_frame)(AVCodecContext *avctx, const uint8_t *buf, uint32_t buf_size)
Called at the beginning of each frame or field picture.
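To make the start_frame/decode_slice/end_frame split concrete, here is a skeletal hwaccel-side flow, assuming libavcodec's internal headers (hwaccel_internal.h, internal.h) are available. ExamplePriv and the fixed-size slice list are invented; real backends keep comparable state in the buffer described by priv_data_size below:

    /* Hypothetical hwaccel-side per-picture flow: collect slices, submit once. */
    typedef struct ExamplePriv {
        const uint8_t *slice_data[64]; /* invented upper bound on slices */
        uint32_t       slice_size[64];
        int            nb_slices;
    } ExamplePriv;

    static int example_start_frame(AVCodecContext *avctx,
                                   const uint8_t *buf, uint32_t buf_size)
    {
        ExamplePriv *priv = avctx->internal->hwaccel_priv_data;
        priv->nb_slices = 0;           /* begin a new picture */
        return 0;
    }

    static int example_decode_slice(AVCodecContext *avctx,
                                    const uint8_t *buf, uint32_t buf_size)
    {
        ExamplePriv *priv = avctx->internal->hwaccel_priv_data;
        if (priv->nb_slices >= 64)
            return AVERROR(ENOMEM);    /* placeholder error handling */
        priv->slice_data[priv->nb_slices] = buf;
        priv->slice_size[priv->nb_slices] = buf_size;
        priv->nb_slices++;
        return 0;
    }

    static int example_end_frame(AVCodecContext *avctx)
    {
        /* a real backend would submit the collected slices to the hardware
         * API here and associate the result with the output frame */
        return 0;
    }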
int frame_priv_data_size
Size of per-frame hardware accelerator private data.
int priv_data_size
Size of the private data to allocate in AVCodecInternal.hwaccel_priv_data.
int(* decode_params)(AVCodecContext *avctx, int type, const uint8_t *buf, uint32_t buf_size)
Callback for parameter data (SPS/PPS/VPS etc.).
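A stub showing the shape of this callback. The note about type reflects how the H.264/HEVC decoders pass the NAL unit type and is an assumption for other codecs:

    /* Hypothetical decode_params stub. */
    static int example_decode_params(AVCodecContext *avctx, int type,
                                     const uint8_t *buf, uint32_t buf_size)
    {
        /* translate or stash the parameter set; type is assumed here to be
         * the NAL unit type when called from the H.264/HEVC decoders */
        return 0;
    }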
int(* init)(AVCodecContext *avctx)
Initialize the hwaccel private data.
int(* uninit)(AVCodecContext *avctx)
Uninitialize the hwaccel private data.
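A sketch of an init/uninit pair, reusing the invented ExamplePriv type from the start_frame sketch above and assuming priv_data_size is set to sizeof(ExamplePriv), so AVCodecInternal.hwaccel_priv_data already points at the allocated context:

    /* Hypothetical init/uninit pair for the ExamplePriv context. */
    static int example_init(AVCodecContext *avctx)
    {
        ExamplePriv *priv = avctx->internal->hwaccel_priv_data;
        /* open the (imaginary) hardware decoding session and store its
         * handles in priv here */
        (void)priv;
        return 0;
    }

    static int example_uninit(AVCodecContext *avctx)
    {
        /* tear down whatever example_init acquired; the priv buffer itself
         * is managed by libavcodec */
        return 0;
    }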
void(* free_frame_priv)(FFRefStructOpaque hwctx, void *data)
Callback to free the hwaccel-specific frame data.
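Where frame_priv_data_size is non-zero, the per-frame data may hold hardware resources that must be released when the RefStruct object dies. ExampleFramePriv and its surface handle are invented for this sketch:

    /* Hypothetical free callback for per-frame hwaccel data. */
    typedef struct ExampleFramePriv {
        void *surface;                 /* invented hardware surface handle */
    } ExampleFramePriv;

    static void example_free_frame_priv(FFRefStructOpaque hwctx, void *data)
    {
        ExampleFramePriv *fp = data;
        /* release fp->surface through the relevant hardware API here; the
         * data buffer itself is owned by the RefStruct machinery */
        fp->surface = NULL;
    }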
AVCodecContext
main external API structure.
static const FFHWAccel * ffhwaccel(const AVHWAccel *codec)
Cast the given public AVHWAccel back to the FFHWAccel that contains it.
int(* decode_slice)(AVCodecContext *avctx, const uint8_t *buf, uint32_t buf_size)
Callback for each slice.
AVBufferRef
A reference to a data buffer.
int caps_internal
Internal hwaccel capabilities.
int(* update_thread_context)(AVCodecContext *dst, const AVCodecContext *src)
Copy necessary context variables from a previous thread context to the current one.
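Tying the entries above together, a hwaccel definition could look roughly like the following. All example_* callbacks and the choice of codec and pixel format are placeholders from the earlier sketches, not code from FFmpeg:

    /* Hypothetical FFHWAccel definition assembling the sketched callbacks. */
    const FFHWAccel ff_example_h264_hwaccel = {
        .p.name               = "h264_example",
        .p.type               = AVMEDIA_TYPE_VIDEO,
        .p.id                 = AV_CODEC_ID_H264,
        .p.pix_fmt            = AV_PIX_FMT_VAAPI,       /* placeholder */
        .start_frame          = example_start_frame,
        .decode_params        = example_decode_params,
        .decode_slice         = example_decode_slice,
        .end_frame            = example_end_frame,
        .frame_params         = example_frame_params,
        .init                 = example_init,
        .uninit               = example_uninit,
        .free_frame_priv      = example_free_frame_priv,
        .priv_data_size       = sizeof(ExamplePriv),
        .frame_priv_data_size = sizeof(ExampleFramePriv),
        .caps_internal        = HWACCEL_CAP_ASYNC_SAFE,
    };

The .p members fill the public AVHWAccel embedded as the first field, while the remaining members are the internal callbacks documented above.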