#ifndef AVCODEC_ENCODE_H
#define AVCODEC_ENCODE_H
#define FF_INPUT_BUFFER_MIN_SIZE 16384
int sample_rate
Samples per second.
AVFrame
This structure describes decoded (raw) audio or video data.
int ff_encode_encode_cb(AVCodecContext *avctx, AVPacket *avpkt, AVFrame *frame, int *got_packet)
int64_t av_rescale_q(int64_t a, AVRational bq, AVRational cq)
Rescale a 64-bit integer by 2 rational numbers.
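av_rescale_q() computes a * bq / cq with rounding, which is the usual way to convert a timestamp from one time base to another without overflow. A minimal, self-contained sketch using only public libavutil API:

#include <libavutil/mathematics.h>

/* Convert a position counted in audio samples at 44.1 kHz into milliseconds. */
static int64_t samples_to_ms(int64_t samples)
{
    return av_rescale_q(samples,
                        (AVRational){ 1, 44100 },   /* source time base: 1/44100 s */
                        (AVRational){ 1, 1000 });   /* target time base: 1/1000 s  */
}

/* samples_to_ms(22050) == 500, i.e. half a second. */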
AVCPBProperties
This structure describes the bitrate properties of an encoded bitstream.
AVRational
Rational number (pair of numerator and denominator).
static av_always_inline int64_t ff_samples_to_time_base(const AVCodecContext *avctx, int64_t samples)
Rescale from sample rate to AVCodecContext.time_base.
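ff_samples_to_time_base() is the internal shorthand for the av_rescale_q() call above with a source time base of 1/sample_rate. A minimal sketch of how an audio encoder might stamp a packet duration, assuming it is compiled inside the libavcodec tree where this header lives; the helper name is hypothetical:

#include "encode.h"

/* Express a frame's sample count in AVCodecContext.time_base units,
 * e.g. to fill AVPacket.duration. */
static void myenc_set_duration(AVCodecContext *avctx, AVPacket *pkt, int nb_samples)
{
    pkt->duration = ff_samples_to_time_base(avctx, nb_samples);
}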
int ff_encode_alloc_frame(AVCodecContext *avctx, AVFrame *frame)
Allocate buffers for a frame.
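A hedged sketch of how an encoder might use ff_encode_alloc_frame() to obtain an internally owned frame (for example to pad a final audio frame), assuming the helper fills unset frame parameters from the context and returns a negative AVERROR on failure; the function name is hypothetical:

#include "libavutil/frame.h"
#include "encode.h"

static AVFrame *myenc_alloc_audio_frame(AVCodecContext *avctx, int nb_samples)
{
    AVFrame *frame = av_frame_alloc();
    if (!frame)
        return NULL;

    frame->nb_samples = nb_samples;              /* caller chooses the sample count */
    if (ff_encode_alloc_frame(avctx, frame) < 0) /* allocate the data buffers       */
        av_frame_free(&frame);                   /* frees and sets frame to NULL    */

    return frame;
}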
AVRational time_base
This is the fundamental unit of time (in seconds) in terms of which frame timestamps are represented.
#define AV_NOPTS_VALUE
Undefined timestamp value.
int ff_encode_get_frame(AVCodecContext *avctx, AVFrame *frame)
Called by encoders to get the next frame for encoding.
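A minimal sketch of the receive_packet-style call pattern around ff_encode_get_frame(), assuming the usual AVERROR(EAGAIN)/AVERROR_EOF semantics are simply propagated to the caller. The encoder name is hypothetical, and a real encoder would keep the frame in its private context rather than allocating it per call:

#include "libavutil/frame.h"
#include "encode.h"

static int myenc_receive_packet(AVCodecContext *avctx, AVPacket *pkt)
{
    AVFrame *frame = av_frame_alloc();
    int ret;

    if (!frame)
        return AVERROR(ENOMEM);

    ret = ff_encode_get_frame(avctx, frame);
    if (ret < 0) {                  /* includes AVERROR(EAGAIN) and AVERROR_EOF */
        av_frame_free(&frame);
        return ret;
    }

    /* ... encode the frame into pkt here ... */

    av_frame_free(&frame);
    return 0;
}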
AVCodecContext
Main external API structure.
int ff_alloc_packet(AVCodecContext *avctx, AVPacket *avpkt, int64_t size)
Check AVPacket size and allocate data.
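A sketch of the classic encode-callback pattern around ff_alloc_packet(): request a worst-case sized packet up front, write the bitstream, then record the real size. The function name and sizes are placeholders:

#include "encode.h"

static int myenc_encode_frame(AVCodecContext *avctx, AVPacket *pkt,
                              const AVFrame *frame, int *got_packet)
{
    const int64_t max_size = 1024;      /* hypothetical worst case for one frame */
    int written, ret;

    ret = ff_alloc_packet(avctx, pkt, max_size);
    if (ret < 0)
        return ret;

    /* ... write at most max_size bytes of compressed data into pkt->data ... */
    written = 512;                      /* placeholder for the real byte count  */

    pkt->size   = written;
    *got_packet = 1;
    return 0;
}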
int ff_get_encode_buffer(AVCodecContext *avctx, AVPacket *avpkt, int64_t size, int flags)
Get a buffer for a packet.
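Unlike ff_alloc_packet(), ff_get_encode_buffer() goes through the user-visible AVCodecContext.get_encode_buffer() callback and yields a reference-counted buffer, so it suits encoders that already know the exact packet size. A hedged sketch with a hypothetical helper name, passing 0 for the flags:

#include <string.h>
#include "encode.h"

static int myenc_emit_packet(AVCodecContext *avctx, AVPacket *pkt,
                             const uint8_t *payload, int payload_size)
{
    int ret = ff_get_encode_buffer(avctx, pkt, payload_size, 0);
    if (ret < 0)
        return ret;

    memcpy(pkt->data, payload, payload_size);   /* buffer of exactly payload_size bytes */
    return 0;
}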
AVCPBProperties * ff_encode_add_cpb_side_data(AVCodecContext *avctx)
Add a CPB properties side data to an encoding context.
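A minimal sketch of attaching CPB properties during encoder init, assuming ff_encode_add_cpb_side_data() returns NULL on allocation failure; the values are simply copied from the rate-control settings:

#include "encode.h"

static int myenc_init_cpb(AVCodecContext *avctx)
{
    AVCPBProperties *props = ff_encode_add_cpb_side_data(avctx);
    if (!props)
        return AVERROR(ENOMEM);

    props->avg_bitrate = avctx->bit_rate;       /* nominal average bitrate      */
    props->max_bitrate = avctx->rc_max_rate;    /* peak bitrate, if constrained */
    props->buffer_size = avctx->rc_buffer_size; /* rate-control buffer, in bits */
    return 0;
}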
AVPacket
This structure stores compressed data.
int ff_encode_reordered_opaque(AVCodecContext *avctx, AVPacket *pkt, const AVFrame *frame)
Propagate user opaque values from the frame to avctx/pkt as needed.
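A short sketch of where ff_encode_reordered_opaque() is typically called: once a packet has been fully produced for a given source frame, so that the user's opaque data follows the packet out of the encoder. The surrounding function is hypothetical:

#include "encode.h"

static int myenc_finish_packet(AVCodecContext *avctx, AVPacket *pkt,
                               const AVFrame *src_frame)
{
    int ret = ff_encode_reordered_opaque(avctx, pkt, src_frame);
    if (ret < 0)
        return ret;

    /* pkt can now be returned to the generic encode layer */
    return 0;
}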