From theateist at gmail.com Mon Sep 2 01:27:42 2013
From: theateist at gmail.com (theateist84)
Date: Sun, 1 Sep 2013 16:27:42 -0700 (PDT)
Subject: [Libav-user] Why does a low qmax value improve video quality?
Message-ID: <1378078062146-4658441.post@n4.nabble.com>

Maybe my question doesn't make sense due to a misunderstanding, but please explain what I'm missing, because I have read posts and the wiki and it's still not clear to me.

As I understand it, setting a low value for qmax will improve quality by increasing the bitrate. But doesn't lowering Q (the quantizer) decrease the number of quantization levels, and thus the bitrate, which would mean a degradation in quality? Or does lowering Q in ffmpeg mean increasing the number of quantization levels? If the latter is true, it makes sense that a lower qmax improves quality.

If the above is true, then increasing qmax will decrease the number of quantization levels, which means fewer bits for coding a quantization level. So if the number of bits per level is lower, the total bits per frame will be lower; how does the encoder then manage to reach the desired bitrate?

From mrfun.china at gmail.com Mon Sep 2 04:06:30 2013
From: mrfun.china at gmail.com (YIRAN LI)
Date: Mon, 2 Sep 2013 12:06:30 +1000
Subject: [Libav-user] How will AVCodecContext.rc_max_rate be respected?
Message-ID:

Hi friends,

I've hit a problem when converting a video stream into MPEG-2 format with a specified maximum bitrate. The input file has a high bitrate of about 40 Mbps (resolution is 720p). When converting with "ffmpeg -i input.mp4 -target pal-dvd dvd.mpg" everything is OK; the resulting MPEG has a bitrate of 7620 kbps.

As we know, specifying "-target pal-dvd" in fact supplies a bunch of arguments to the command line, namely:

opt_video_codec(o, "c:v", "mpeg2video");
opt_audio_codec(o, "c:a", "ac3");
parse_option(o, "f", "dvd", options);
parse_option(o, "s", norm == PAL ? "720x576" : "720x480", options);
parse_option(o, "r", frame_rates[norm], options);
parse_option(o, "pix_fmt", "yuv420p", options);
av_dict_set(&o->g->codec_opts, "g", norm == PAL ? "15" : "18", 0);
av_dict_set(&o->g->codec_opts, "b:v", "6000000", 0);
av_dict_set(&o->g->codec_opts, "maxrate", "9000000", 0);
av_dict_set(&o->g->codec_opts, "minrate", "0", 0); // 1500000;
av_dict_set(&o->g->codec_opts, "bufsize", "1835008", 0); // 224*1024*8;
av_dict_set(&o->g->format_opts, "packetsize", "2048", 0); // from www.mpucoder.com: DVD sectors contain 2048 bytes of data; this is also the size of one pack.
av_dict_set(&o->g->format_opts, "muxrate", "10080000", 0); // from the mplex project: data_rate = 1260000; mux_rate = data_rate * 8
av_dict_set(&o->g->codec_opts, "b:a", "448000", 0);
parse_option(o, "ar", "48000", options);

So when I supply these arguments one by one instead of using "-target pal-dvd", the generated video is still OK. But once I removed "-f dvd -s 720x576", lots of "buffer underflow" warnings came out, and the generated file has an overall bitrate of 29.7 Mbps although 9000k is specified as the maximum. I think that means "-target pal-dvd" succeeds because downscaling is done, and without resizing (when the input bitrate is high), specifying maxrate alone doesn't guarantee the bitrate of the output file.

So I want to know: is there any option available so that, no matter what the input bitrate is, the output always stays below a specified bitrate?

Thanks
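These maxrate/minrate/bufsize options map onto rate-control fields of AVCodecContext, so the same constraint can be expressed directly in C. A minimal, untested sketch (assuming enc_ctx is an AVCodecContext allocated for the mpeg2video encoder; "bufsize" corresponds to rc_buffer_size):

    /* Sketch: constrain encoder output with the VBV-style rate-control
     * model (field names from libavcodec/avcodec.h). */
    enc_ctx->bit_rate       = 6000000;  /* "b:v": target average bitrate */
    enc_ctx->rc_max_rate    = 9000000;  /* "maxrate": allowed peak rate  */
    enc_ctx->rc_min_rate    = 0;        /* "minrate"                     */
    enc_ctx->rc_buffer_size = 1835008;  /* "bufsize": the peak rate is
                                         * only enforced over this
                                         * buffer window                 */

Note that rc_max_rate constrains the rate-control model rather than hard-clamping the output: when the buffer is too small for the source complexity, the encoder reports exactly the kind of "buffer underflow" warnings described above.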
From night.rain.whisper at gmail.com Mon Sep 2 20:39:23 2013
From: night.rain.whisper at gmail.com (Sergey Fedorov)
Date: Mon, 2 Sep 2013 22:39:23 +0400
Subject: [Libav-user] Why does a low qmax value improve video quality?
In-Reply-To: <1378078062146-4658441.post@n4.nabble.com>
References: <1378078062146-4658441.post@n4.nabble.com>
Message-ID:

The quantization parameter is the denominator of an element-wise division:
http://en.wikipedia.org/wiki/Quantization_%28image_processing%29#Quantization_matrices
So increasing the QP increases the amount of information about the picture that is thrown away, and therefore lowers image quality; for example, with a QP of 10 the coefficient values 0..9 all quantize to 0. A low qmax simply caps how coarse the encoder's quantizer is allowed to become.

2013/9/2 theateist84:
> As I understand it, setting a low value for qmax will improve quality
> by increasing the bitrate. But doesn't lowering Q (the quantizer)
> decrease the number of quantization levels, and thus the bitrate,
> which would mean a degradation in quality?

From aworldgonewrong at gmail.com Tue Sep 3 12:47:37 2013
From: aworldgonewrong at gmail.com (John Freeman)
Date: Tue, 3 Sep 2013 11:47:37 +0100
Subject: [Libav-user] RTSP Pause: Times out
In-Reply-To:
References:
Message-ID:

Any ideas on this one?

On 22 August 2013 15:07, John Freeman wrote:
> I have an RTSP server that supports the PAUSE command, so in my code I
> call av_read_pause, and when I want to resume playback I call
> av_read_play.
>
> However, between the call to pause and the call to play, no keep-alive
> is sent to the server, so the connection times out and I lose the
> stream.
>
> How can I keep the connection alive?
>
> ~ Jay

From soho123.2012 at gmail.com Tue Sep 3 13:07:04 2013
From: soho123.2012 at gmail.com (Huang Soho)
Date: Tue, 3 Sep 2013 19:07:04 +0800
Subject: [Libav-user] HELP!! Audio + video sync when stream-copying both from a USB webcam
Message-ID:

Hi All,

I use ffmpeg + ffserver to run a streaming server, but there is a big problem when combining audio + video. Both the audio and the video use the "stream copy" option.
I can play the RTP stream with ffplay via the URL "rtsp://192.168.1.254:5554/test1-rtsp.mpg".

The ffserver conf is:

Feed feed2.ffm
Format rtp
AVOptionVideo flags +global_header
VideoSize 1280x720
VideoFrameRate 30
VideoCodec libx264
AVOptionAudio flags +global_header
AudioCodec pcm_s16be
AudioChannels 2
AudioSampleRate 48000

The ffmpeg command is:

ffmpeg -sn -f video4linux2 -r 30 -s 1280x720 -input_format h264 -i /dev/video1 -f alsa -ar 48000 -ac 2 -i hw:0 -vcodec copy -acodec copy -map 0:0 -map 1:0 http://localhost:8090/feed2.ffm

Audio is played earlier than video by about 3-5 seconds, sometimes more. Why is the audio a few seconds ahead? I can see that ffmpeg captures the data from the USB webcam and the USB sound card interleaved. Is there any option that can be used for fine-tuning?

Any input is very much appreciated!

From marcin_ffmpeg at interia.pl Tue Sep 3 15:21:11 2013
From: marcin_ffmpeg at interia.pl (marcin_ffmpeg)
Date: Tue, 03 Sep 2013 15:21:11 +0200
Subject: [Libav-user] Number of cached frames in audio codecs
Message-ID:

Dear All,

How can I determine the number of cached frames in audio codecs, for both decoding and encoding? I'd like to know the maximum number of frames I will receive when flushing the decoder/encoder. For video codecs I use AVCodecContext->gop_size; is this correct? What should I use for audio codecs?

Thank you,
Marcin

From andrey.krieger.utkin at gmail.com Tue Sep 3 20:34:45 2013
From: andrey.krieger.utkin at gmail.com (Andrey Utkin)
Date: Tue, 3 Sep 2013 21:34:45 +0300
Subject: [Libav-user] RTSP Pause: Times out
In-Reply-To:
References:
Message-ID:

2013/8/22 John Freeman:
> However, between the call to pause and the call to play, no keep-alive
> is sent to the server, so the connection times out and I lose the
> stream.
>
> How can I keep the connection alive?

Maybe check the RTSP-related source code and try to fix it there. First verify that your assumption about the cause is right.

--
Andrey Utkin

From j.lugagne at spikenet-technology.com Tue Sep 3 21:32:02 2013
From: j.lugagne at spikenet-technology.com (Jérémy Lugagne)
Date: Tue, 03 Sep 2013 21:32:02 +0200
Subject: [Libav-user] Close an RTSP stream
In-Reply-To: <201308221044245478368@spikenet-technology.com>
References: <201308221044245478368@spikenet-technology.com>
Message-ID:

Hi,

Has nobody had the same issue?

Jérémy

On Thu, 22 Aug 2013 10:44:26 +0200, Jeremy Lugagne wrote:
> Hello all,
> I'm using libav to grab images from a network camera over the RTSP
> protocol, but I have an issue. When I release the stream, my camera
> doesn't release the resources for the session, so after 4 test runs of
> my application I must restart the camera because I have no more slots.
> This is how I open the stream:
>
> AVFormatContext *pFormatCtx;
> AVCodecContext *vCodecCtxp;
> AVPacket packet;
> AVFrame *pFrame;
> AVFrame *pFrameRGB = NULL;
> int numBytes;
> AVDictionary *optionsDict = NULL;
> struct SwsContext *sws_ctx = NULL;
> uint8_t *buffer = NULL;
> AVCodec *videoCodec;
>
> av_register_all();
> avformat_network_init();
> pFormatCtx = avformat_alloc_context();
> if (avformat_open_input(&pFormatCtx, qPrintable(url), NULL, 0) != 0)
>     return false; // Couldn't open file
> videoStream = -1;
> if (avformat_find_stream_info(pFormatCtx, NULL) < 0) {
>     return false;
> }
> for (int i = 0; i < pFormatCtx->nb_streams; i++) {
>     if (pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO
>         && videoStream < 0) {
>         videoStream = i;
>     }
> }
> vCodecCtxp = pFormatCtx->streams[videoStream]->codec;
> // Allocate an AVFrame structure
> pFrameRGB = avcodec_alloc_frame();
> pFrame = avcodec_alloc_frame();
> if (pFrameRGB == NULL) return;
> // Determine required buffer size and allocate buffer
> numBytes = avpicture_get_size(PIX_FMT_RGB24, vCodecCtxp->width,
>     vCodecCtxp->height);
> buffer = (uint8_t *)av_malloc(numBytes * sizeof(uint8_t));
> sws_ctx = sws_getContext(vCodecCtxp->width, vCodecCtxp->height,
>     vCodecCtxp->pix_fmt, vCodecCtxp->width, vCodecCtxp->height,
>     PIX_FMT_RGB24, SWS_BILINEAR, NULL, NULL, NULL);
> // Assign appropriate parts of buffer to image planes in pFrameRGB.
> // Note that pFrameRGB is an AVFrame, but AVFrame is a superset of
> // AVPicture.
> int res = avpicture_fill((AVPicture *)pFrameRGB, buffer, PIX_FMT_RGB24,
>     vCodecCtxp->width, vCodecCtxp->height);
> videoCodec = avcodec_find_decoder(vCodecCtxp->codec_id);
> avcodec_open2(vCodecCtxp, videoCodec, 0);
> av_read_play(pFormatCtx);
>
> And this is how I close it:
>
> av_free(pFrameRGB);
> av_free(buffer);
> av_free(pFrame);
> sws_freeContext(sws_ctx);
> avformat_close_input(&pFormatCtx);
>
> Am I forgetting something needed to close my stream properly? When I
> watch which messages are sent to the camera, I do see the TEARDOWN
> message.
> Thanks in advance,
> Jeremy L.

From arvind_raman at yahoo.com Tue Sep 3 22:15:09 2013
From: arvind_raman at yahoo.com (Arvind Raman)
Date: Tue, 3 Sep 2013 13:15:09 -0700 (PDT)
Subject: [Libav-user] Unable to encode using VP9
Message-ID: <1378239309.32303.YahooMailNeo@web122302.mail.ne1.yahoo.com>

I am unable to encode using the VP9 video encoder. This is what I have done.

1. I installed the latest version of FFmpeg available on git. The configure command I used was:

./configure --extra-cflags=-O2 --enable-bzlib --disable-devices --enable-libfdk-aac --enable-libfaac --enable-libgsm --enable-libmp3lame --enable-libopus --enable-libschroedinger --enable-libtheora --enable-libvorbis --enable-libx264 --enable-libvpx --enable-avfilter --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libspeex --enable-postproc --enable-pthreads --disable-static --enable-shared --enable-gpl --disable-debug --enable-optimizations --disable-stripping --enable-nonfree --enable-version3 --libdir=/usr/local/lib

2. I have additionally installed the libvorbis, libvpx and libogg shared binaries.

However, when I run ./ffmpeg -encoders I get the following output:

Encoders:
 V..... = Video
 A..... = Audio
 S..... = Subtitle
 .F.... = Frame-level multithreading
 ..S... = Slice-level multithreading
 ...X.. = Codec is experimental
 ....B. = Supports draw_horiz_band
 .....D = Supports direct rendering method 1
 ------
 V..... a64multi            Multicolor charset for Commodore 64 (codec a64_multi)
 V..... a64multi5           Multicolor charset for Commodore 64, extended with 5th color (colram) (codec a64_multi5)
 V..... amv                 AMV Video
 V..... asv1                ASUS V1
 V..... asv2                ASUS V2
 V..... avrp                Avid 1:1 10-bit RGB Packer
 V..X.. avui                Avid Meridien Uncompressed
 V..... ayuv                Uncompressed packed MS 4:4:4:4
 V..... bmp                 BMP (Windows and OS/2 bitmap)
 V..... cljr                Cirrus Logic AccuPak
 V..... libschroedinger     libschroedinger Dirac 2.2 (codec dirac)
 V.S... dnxhd               VC3/DNxHD
 V..... dpx                 DPX image
 V.S... dvvideo             DV (Digital Video)
 V.S... ffv1                FFmpeg video codec #1
 V..... ffvhuff             Huffyuv FFmpeg variant
 V..... flashsv             Flash Screen Video
 V..... flashsv2            Flash Screen Video Version 2
 V..... flv                 FLV / Sorenson Spark / Sorenson H.263 (Flash Video) (codec flv1)
 V..... gif                 GIF (Graphics Interchange Format)
 V..... h261                H.261
 V..... h263                H.263 / H.263-1996
 V.S... h263p               H.263+ / H.263-1998 / H.263 version 2
 V..... libx264             libx264 H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 (codec h264)
 V..... libx264rgb          libx264 H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10 RGB (codec h264)
 V..... huffyuv             Huffyuv / HuffYUV
 V..X.. jpeg2000            JPEG 2000
 V..... libopenjpeg         OpenJPEG JPEG 2000 (codec jpeg2000)
 V..... jpegls              JPEG-LS
 V..... ljpeg               Lossless JPEG
 VFS... mjpeg               MJPEG (Motion JPEG)
 V.S... mpeg1video          MPEG-1 video
 V.S... mpeg2video          MPEG-2 video
 V.S... mpeg4               MPEG-4 part 2
 V..... msmpeg4v2           MPEG-4 part 2 Microsoft variant version 2
 V..... msmpeg4             MPEG-4 part 2 Microsoft variant version 3 (codec msmpeg4v3)
 V..... msvideo1            Microsoft Video-1
 V..... pam                 PAM (Portable AnyMap) image
 V..... pbm                 PBM (Portable BitMap) image
 V..... pcx                 PC Paintbrush PCX image
 V..... pgm                 PGM (Portable GrayMap) image
 V..... pgmyuv              PGMYUV (Portable GrayMap YUV) image
 VF.... png                 PNG (Portable Network Graphics) image
 V..... ppm                 PPM (Portable PixelMap) image
 VF.... prores              Apple ProRes
 VF.... prores_aw           Apple ProRes (codec prores)
 V.S... prores_ks           Apple ProRes (iCodec Pro) (codec prores)
 V..... qtrle               QuickTime Animation (RLE) video
 V..... r10k                AJA Kona 10-bit RGB Codec
 V..... r210                Uncompressed RGB 10-bit
 V..... rawvideo            raw video
 V..... roqvideo            id RoQ video (codec roq)
 V..... rv10                RealVideo 1.0
 V..... rv20                RealVideo 2.0
 V..... sgi                 SGI image
 V..... snow                Snow
 V..... sunrast             Sun Rasterfile image
 V..... svq1                Sorenson Vector Quantizer 1 / Sorenson Video 1 / SVQ1
 V..... targa               Truevision Targa image
 V..... libtheora           libtheora Theora (codec theora)
 V..... tiff                TIFF image
 V..... utvideo             Ut Video
 V..... v210                Uncompressed 4:2:2 10-bit
 V..... v308                Uncompressed packed 4:4:4
 V..... v408                Uncompressed packed QT 4:4:4:4
 V..... v410                Uncompressed 4:4:4 10-bit
 V..... wmv1                Windows Media Video 7
 V..... wmv2                Windows Media Video 8
 V..... xbm                 XBM (X BitMap) image
 V..... xface               X-face image
 V..... xwd                 XWD (X Window Dump) image
 V..... y41p                Uncompressed YUV 4:1:1 12-bit
 V..... yuv4                Uncompressed packed 4:2:0
 V..... zlib                LCL (LossLess Codec Library) ZLIB
 V..... zmbv                Zip Motion Blocks Video

Clearly libvpx isn't listed here. However, the output of ./ffmpeg -codecs does mention vp9:

 D.V.L. vp6f                On2 VP6 (Flash version)
 D.V.L. vp8                 On2 VP8
 ..V.L. vp9                 Google VP9
 D.V.L. webp                WebP

Would you please suggest what might be going wrong, and what I should do to enable support for VP9 in my FFmpeg build?

Thanks,
Arvind

From mrfun.china at gmail.com Wed Sep 4 02:25:37 2013
From: mrfun.china at gmail.com (YIRAN LI)
Date: Wed, 4 Sep 2013 10:25:37 +1000
Subject: [Libav-user] What library is linked to if w32thread is enabled?
Message-ID:

Hi,

I'm building ffmpeg with MinGW and tried to build two versions: one using w32threads and the other using pthreads. I built with --enable-shared, and the generated DLLs are almost the same size.

I know MinGW comes with a pthreads DLL, so that version is dynamically linked. If the two versions are the same size, that means the w32thread version must also be dynamically linked to something (if it were statically linked, it should be larger).

Could anyone tell me which library is linked to when w32thread is used? When I checked with Dependency Walker, I only saw dependencies on some Windows DLLs. Does that mean w32threads is provided via Windows DLLs and is available on all Windows systems?

Great thanks!

From h.leppkes at gmail.com Wed Sep 4 08:24:24 2013
From: h.leppkes at gmail.com (Hendrik Leppkes)
Date: Wed, 4 Sep 2013 08:24:24 +0200
Subject: [Libav-user] What library is linked to if w32thread is enabled?
In-Reply-To:
References:
Message-ID:

On Wed, Sep 4, 2013 at 2:25 AM, YIRAN LI wrote:
> Could anyone tell me which library is linked to when w32thread is
> used? When I checked with Dependency Walker, I only saw dependencies
> on some Windows DLLs. Does that mean w32threads is provided via
> Windows DLLs and is available on all Windows systems?

Yes, w32threads uses the native Windows threading functions, and is supported on Windows XP and newer.
From lucas.soltic at orange.fr Wed Sep 4 15:30:44 2013
From: lucas.soltic at orange.fr (Lucas Soltic)
Date: Wed, 4 Sep 2013 15:30:44 +0200
Subject: [Libav-user] Hardware accelerated decoding
Message-ID: <944BB95E-5E64-4536-8D9D-5E31AB816DE4@orange.fr>

Hello,

I would like to know how hwaccels can be enabled: does the OS on which FFmpeg is built need to support them? Is adding an --enable-hwaccel=... parameter for each hwaccel enough, even if the current OS doesn't support them?

And if a hwaccel is enabled, will decoding break when FFmpeg is run on a host that doesn't support that hwaccel?

Regards,
Lucas

From attila.sukosd at gmail.com Wed Sep 4 15:33:45 2013
From: attila.sukosd at gmail.com (Attila Sukosd)
Date: Wed, 4 Sep 2013 15:33:45 +0200
Subject: [Libav-user] Hardware accelerated decoding
In-Reply-To: <944BB95E-5E64-4536-8D9D-5E31AB816DE4@orange.fr>
References: <944BB95E-5E64-4536-8D9D-5E31AB816DE4@orange.fr>
Message-ID:

Hi,

I would also be interested in hearing how to do this. I've looked at OS X's VDA acceleration, and it seems you need to do some extra work in the application to support the different hwaccels, but I haven't found any nice examples of how to do it.

Best,
Attila

-----------------------------------------
DTU Computing Center - www.cc.dtu.dk
attila at cc.dtu.dk, gbaras at student.dtu.dk, s070600 at student.dtu.dk

From gavr.mail at gmail.com Wed Sep 4 16:21:50 2013
From: gavr.mail at gmail.com (Kirill Gavrilov)
Date: Wed, 4 Sep 2013 18:21:50 +0400
Subject: [Libav-user] Hardware accelerated decoding
In-Reply-To:
References: <944BB95E-5E64-4536-8D9D-5E31AB816DE4@orange.fr>
Message-ID:

Hi,

On Wed, Sep 4, 2013 at 5:33 PM, Attila Sukosd wrote:
> I've looked at OS X's VDA acceleration, and it seems you need to do
> some extra work in the application to support the different hwaccels,
> but I haven't found any nice examples of how to do it.

Most accelerated decoders decode the picture into an API-specific surface in GPU memory, which can be drawn using OpenGL (VDPAU) or Direct3D (DXVA2) without copying it back to CPU memory. For this reason you need to do a lot of extra work to configure FFmpeg to use a specific hardware decoder (or to detect when it cannot be used), and to render the result on screen using more complicated scenarios. Because this stuff is really overcomplicated and painful, and contradicts the decoding+rendering pipeline I have implemented, I haven't tried it in my application yet.

VDA is somewhat simpler than most others because the original Apple API doesn't provide a way to render the result directly, so you have to copy the frame back to CPU memory anyway.
Technically you should just try to open another decoder, and use it instead of the auto-detected one in avcodec_open2 (with extra checks, and probably an overridden get_format if you want planar YUV420P):

AVCodec *aCodecVda = avcodec_find_decoder_by_name("h264_vda");
avcodec_open2(theCodecCtx, aCodecVda, NULL);

I have tried this decoder on my old MacBook and it is significantly slower than the software decoder. There is also a patch on the mailing list which introduces a similar decoder (with automatic GPU->CPU memory copying) for DXVA2 acceleration.

-----------------------------------------------
Kirill Gavrilov,
Software designer.

From lucas.soltic at orange.fr Wed Sep 4 16:24:52 2013
From: lucas.soltic at orange.fr (Lucas Soltic)
Date: Wed, 4 Sep 2013 16:24:52 +0200
Subject: [Libav-user] Hardware accelerated decoding
In-Reply-To:
References: <944BB95E-5E64-4536-8D9D-5E31AB816DE4@orange.fr>
Message-ID:

Hi Kirill,

Do you know by how much it is slower?
Because I'm not only interested in speed, but also in CPU consumption.

On 4 Sep 2013, at 16:21, Kirill Gavrilov wrote:
> I have tried this decoder on my old MacBook and it is significantly
> slower than the software decoder.

From lucas.soltic at orange.fr Wed Sep 4 16:26:16 2013
From: lucas.soltic at orange.fr (Lucas Soltic)
Date: Wed, 4 Sep 2013 16:26:16 +0200
Subject: [Libav-user] Hardware accelerated decoding
In-Reply-To:
References: <944BB95E-5E64-4536-8D9D-5E31AB816DE4@orange.fr>
Message-ID: <02980A28-0BE1-4114-9D76-39C4ABD6B0EE@orange.fr>

(more exactly, the computer's energy consumption)

On 4 Sep 2013, at 16:24, Lucas Soltic wrote:
> Do you know by how much it is slower? Because I'm not only interested
> in speed, but also in CPU consumption.

From gavr.mail at gmail.com Wed Sep 4 16:37:30 2013
From: gavr.mail at gmail.com (Kirill Gavrilov)
Date: Wed, 4 Sep 2013 18:37:30 +0400
Subject: [Libav-user] Hardware accelerated decoding
In-Reply-To:
References: <944BB95E-5E64-4536-8D9D-5E31AB816DE4@orange.fr>
Message-ID:

On Wed, Sep 4, 2013 at 6:24 PM, Lucas Soltic wrote:
> Do you know by how much it is slower? Because I'm not only interested
> in speed, but also in CPU consumption.

The problem is that it was insufficient for real-time decoding on many samples, while its CPU utilization was only slightly lower than that of the software decoder. However, the DXVA2 decoder on more recent hardware shows better results (CPU utilization was much lower, and a notebook with a 2-core AMD APU ran colder), although I have had luck only with WMV samples, not H.264:
http://ffmpeg.org/pipermail/ffmpeg-devel/2013-July/145388.html

-----------------------------------------------
Kirill Gavrilov,
Software designer.

From lucas.soltic at orange.fr Wed Sep 4 21:04:02 2013
From: lucas.soltic at orange.fr (Lucas Soltic)
Date: Wed, 4 Sep 2013 21:04:02 +0200
Subject: [Libav-user] Hardware accelerated decoding
In-Reply-To:
References: <944BB95E-5E64-4536-8D9D-5E31AB816DE4@orange.fr>
Message-ID: <9545F04D-786F-4F9A-8D51-09C3D84A8C86@orange.fr>
On 4 Sep 2013, at 16:21, Kirill Gavrilov wrote:
> Most accelerated decoders decode the picture into an API-specific
> surface in GPU memory, which can be drawn using OpenGL (VDPAU) or
> Direct3D (DXVA2) without copying it back to CPU memory. [...]
> I have tried this decoder on my old MacBook and it is significantly
> slower than the software decoder.

On my side, when I tried Apple's VDA, I don't remember it being too slow for real-time decoding. Also, I didn't need to explicitly give the codec name for the hwaccel to be used (or at least that's my guess, because the CPU consumption was really lower).

What you say about the modifications required to use most hwaccels isn't good news... it makes them much less interesting... Is Apple's VDA the only hwaccel working that way?

From lucas.soltic at orange.fr Wed Sep 4 23:10:54 2013
From: lucas.soltic at orange.fr (Lucas Soltic)
Date: Wed, 4 Sep 2013 23:10:54 +0200
Subject: [Libav-user] How to know the output library names
Message-ID:

Hello,

Is there any reliable way of knowing the names of the libraries that will be created by a standard FFmpeg configure & make process? On Unix platforms there are ".so" and ".so.version" files, and on Windows there are ".dll" and "-version.dll" files.

This makes it hard to use FFmpeg with CMake, which requires the library names to be known at configuration time, for example in order to add install rules.

I would prefer to avoid hardcoding the versions contained in my current FFmpeg version, so that when I update FFmpeg in my project I don't have to update anything else than my FFmpeg copy.
Regards,
Lucas

From nfxjfg at googlemail.com Thu Sep 5 11:00:12 2013
From: nfxjfg at googlemail.com (wm4)
Date: Thu, 5 Sep 2013 11:00:12 +0200
Subject: [Libav-user] Hardware accelerated decoding
In-Reply-To:
References: <944BB95E-5E64-4536-8D9D-5E31AB816DE4@orange.fr>
Message-ID: <20130905110012.0c6af301@debian>

On Wed, 4 Sep 2013 18:21:50 +0400, Kirill Gavrilov wrote:
> VDA is somewhat simpler than most others because the original Apple
> API doesn't provide a way to render the result directly, so you have
> to copy the frame back to CPU memory anyway.

That's not true. VDA actually provides a way to render the video directly to an OpenGL texture. When we tried this (with the native VDA hwaccel instead of h264_vda), we experienced a tremendous reduction in CPU usage.

From mybrokenbeat at gmail.com Thu Sep 5 12:53:46 2013
From: mybrokenbeat at gmail.com (Oleg)
Date: Thu, 5 Sep 2013 13:53:46 +0300
Subject: [Libav-user] Hardware accelerated decoding
In-Reply-To:
References: <944BB95E-5E64-4536-8D9D-5E31AB816DE4@orange.fr>
Message-ID:

Hi,

1. VDA is faster than CPU decoding in general.
2. The CPU usage of the VDA API depends on your video card. Old cards don't support full decoding acceleration, so some of the work is still done by the CPU. All cards have a limit on the image size.
3. VDA-decoded frames can be rendered without copying the frame to RAM, using OpenGL. However, ffmpeg stores all frames in RAM, so the h264_vda decoder copies each frame to RAM; that's why h264_vda can be slower than the classical h264 decoder.

On 04.09.2013, at 17:21, Kirill Gavrilov wrote:
> VDA is somewhat simpler than most others because the original Apple
> API doesn't provide a way to render the result directly, so you have
> to copy the frame back to CPU memory anyway.
From gavr.mail at gmail.com Thu Sep 5 13:14:32 2013
From: gavr.mail at gmail.com (Kirill Gavrilov)
Date: Thu, 5 Sep 2013 15:14:32 +0400
Subject: [Libav-user] Hardware accelerated decoding
In-Reply-To:
References: <944BB95E-5E64-4536-8D9D-5E31AB816DE4@orange.fr>
Message-ID:

Hi,

On Thu, Sep 5, 2013 at 2:53 PM, Oleg wrote:
> 3. VDA-decoded frames can be rendered without copying the frame to
> RAM, using OpenGL.

Thanks for the information. Could you please point to documentation? I was curious, since I have read the VDA documentation and it doesn't mention OpenGL at all:
https://developer.apple.com/library/mac/technotes/tn2267/_index.html

-----------------------------------------------
Kirill Gavrilov,
Software designer.

From nfxjfg at googlemail.com Thu Sep 5 17:47:33 2013
From: nfxjfg at googlemail.com (wm4)
Date: Thu, 5 Sep 2013 17:47:33 +0200
Subject: [Libav-user] Hardware accelerated decoding
In-Reply-To:
References: <944BB95E-5E64-4536-8D9D-5E31AB816DE4@orange.fr>
Message-ID: <20130905174733.3ab58aec@debian>

On Thu, 5 Sep 2013 13:53:46 +0300, Oleg wrote:
> 3. VDA-decoded frames can be rendered without copying the frame to
> RAM, using OpenGL. However, ffmpeg stores all frames in RAM, so the
> h264_vda decoder copies each frame to RAM; that's why h264_vda can be
> slower than the classical h264 decoder.

Just to make sure there's no confusion: h264_vda is a different interface from h264 + VDA hwaccel. It's true that h264_vda forces copying video data to RAM. But h264 + hwaccel does not.

From lucas.soltic at orange.fr Thu Sep 5 18:06:00 2013
From: lucas.soltic at orange.fr (Lucas Soltic)
Date: Thu, 5 Sep 2013 18:06:00 +0200
Subject: [Libav-user] Hardware accelerated decoding
In-Reply-To: <20130905174733.3ab58aec@debian>
References: <944BB95E-5E64-4536-8D9D-5E31AB816DE4@orange.fr> <20130905174733.3ab58aec@debian>
Message-ID: <4984DDAC-CAFA-426F-85E5-7967B55F0B5C@orange.fr>

On 5 Sep 2013, at 17:47, wm4 wrote:
> Just to make sure there's no confusion: h264_vda is a different
> interface from h264 + VDA hwaccel. It's true that h264_vda forces
> copying video data to RAM. But h264 + hwaccel does not.

Hi,

What is "h264 + hwaccel"? What is the difference between h264_vda and h264_vdpau, which are listed among FFmpeg's decoders? And is H264 VDA the only hwaccel that doesn't require many modifications to be used (because of how it works, i.e. copying the data back to RAM)?

Regards,
Lucas

From lucas.soltic at orange.fr Thu Sep 5 18:07:03 2013
From: lucas.soltic at orange.fr (Lucas Soltic)
Date: Thu, 5 Sep 2013 18:07:03 +0200
Subject: [Libav-user] How to know the output library names
In-Reply-To:
References:
Message-ID: <728D58A8-1C91-4CFB-9ADA-65420B782A3B@orange.fr>
On 4 Sep 2013, at 23:10, Lucas Soltic wrote:
> Is there any reliable way of knowing the names of the libraries that
> will be created by a standard FFmpeg configure & make process?

Hi,

Absolutely no idea about this?

Thanks,
Lucas

From jpboard2 at yahoo.com Thu Sep 5 18:38:26 2013
From: jpboard2 at yahoo.com (James Board)
Date: Thu, 5 Sep 2013 09:38:26 -0700 (PDT)
Subject: [Libav-user] Encoding and Decoding Single Frames with libAV
Message-ID: <1378399106.69669.YahooMailNeo@web164703.mail.gq1.yahoo.com>

Is there a way to use libav to encode and decode a single frame of image data with a lossless codec such as ffvhuff? I know this can be done with video. But if I have a single frame of image data stored in a PPM or PNM file, can I read it into my libav program, compress it with the ffvhuff codec, and then write the compressed output to a file? And later, do the reverse?

From onemda at gmail.com Thu Sep 5 18:43:57 2013
From: onemda at gmail.com (Paul B Mahol)
Date: Thu, 5 Sep 2013 16:43:57 +0000
Subject: [Libav-user] Encoding and Decoding Single Frames with libAV
In-Reply-To: <1378399106.69669.YahooMailNeo@web164703.mail.gq1.yahoo.com>
References: <1378399106.69669.YahooMailNeo@web164703.mail.gq1.yahoo.com>
Message-ID:

On 9/5/13, James Board wrote:
> Is there a way to use libav to encode and decode a single frame of
> image data with a lossless codec such as ffvhuff?

Not with ffvhuff (but it can be added), but with ffv1:

ffmpeg -i input out%d.ffv1-img

From jpboard2 at yahoo.com Thu Sep 5 18:53:45 2013
From: jpboard2 at yahoo.com (James Board)
Date: Thu, 5 Sep 2013 09:53:45 -0700 (PDT)
Subject: [Libav-user] Encoding and Decoding Single Frames with libAV
In-Reply-To:
References: <1378399106.69669.YahooMailNeo@web164703.mail.gq1.yahoo.com>
Message-ID: <1378400025.14294.YahooMailNeo@web164701.mail.gq1.yahoo.com>

> Not with ffvhuff (but it can be added), but with ffv1:
>
> ffmpeg -i input out%d.ffv1-img

Thanks. Can I do it with the libav API?
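A minimal, untested sketch of the encoding half with the C API of that era (encode_one is a hypothetical helper; it assumes frame already holds valid YUV420P pixels of the given size, and all error checks are omitted):

    #include <libavcodec/avcodec.h>
    #include <stdio.h>

    /* Compress one raw frame with FFV1 and dump the packet to a file. */
    static void encode_one(const AVFrame *frame, int w, int h,
                           const char *path)
    {
        AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_FFV1);
        AVCodecContext *ctx = avcodec_alloc_context3(codec);
        ctx->width     = w;
        ctx->height    = h;
        ctx->pix_fmt   = AV_PIX_FMT_YUV420P;
        ctx->time_base = (AVRational){1, 25};  /* required; arbitrary here */
        avcodec_open2(ctx, codec, NULL);

        AVPacket pkt;
        av_init_packet(&pkt);
        pkt.data = NULL;                       /* let the encoder allocate */
        pkt.size = 0;
        int got_packet = 0;
        avcodec_encode_video2(ctx, &pkt, frame, &got_packet);
        if (got_packet) {          /* FFV1 normally has no encoder delay */
            FILE *f = fopen(path, "wb");
            fwrite(pkt.data, 1, pkt.size, f);
            fclose(f);
            av_free_packet(&pkt);
        }
        avcodec_close(ctx);
    }

Decoding back is the mirror image with avcodec_find_decoder(AV_CODEC_ID_FFV1) and avcodec_decode_video2(); reading and writing the PPM/PNM itself can go through the image2 demuxer/muxer or plain file I/O.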
From onemda at gmail.com Thu Sep 5 19:02:12 2013
From: onemda at gmail.com (Paul B Mahol)
Date: Thu, 5 Sep 2013 17:02:12 +0000
Subject: [Libav-user] Encoding and Decoding Single Frames with libAV
In-Reply-To: <1378400025.14294.YahooMailNeo@web164701.mail.gq1.yahoo.com>
References: <1378399106.69669.YahooMailNeo@web164703.mail.gq1.yahoo.com> <1378400025.14294.YahooMailNeo@web164701.mail.gq1.yahoo.com>
Message-ID:

On 9/5/13, James Board wrote:
> Thanks. Can I do it with the libav API?

Yes, in several ways....

From nfxjfg at googlemail.com Thu Sep 5 19:06:57 2013
From: nfxjfg at googlemail.com (wm4)
Date: Thu, 5 Sep 2013 19:06:57 +0200
Subject: [Libav-user] Hardware accelerated decoding
In-Reply-To: <4984DDAC-CAFA-426F-85E5-7967B55F0B5C@orange.fr>
References: <944BB95E-5E64-4536-8D9D-5E31AB816DE4@orange.fr> <20130905174733.3ab58aec@debian> <4984DDAC-CAFA-426F-85E5-7967B55F0B5C@orange.fr>
Message-ID: <20130905190657.649b9ff6@debian>

On Thu, 5 Sep 2013 18:06:00 +0200, Lucas Soltic wrote:
> What is "h264 + hwaccel"?

The "h264" decoder, using the hwaccel mechanism. The h264 decoder can do both software and hardware decoding. It works by overriding the get_format callback. When libavcodec calls your get_format callback, you request a certain hwaccel pixel format (such as AV_PIX_FMT_VDA_VLD in the case of VDA), which puts the decoder into hwaccel mode. Then the decoder will return AVFrames with that pixel format, and AVFrame.data[3] will contain a reference to a surface, using the hwaccel-specific type (CVPixelBufferRef in the case of VDA). With most hwaccel types, you also need to override get_buffer2 to allocate a surface using the external API (VDA is an exception).

It's a quite messy API, with no examples, help, or anything around it. You'll have to look at the source code of video players which support it. The VDA part's memory management especially is extremely messy, and there are differences between various FFmpeg and Libav versions.

> What is the difference between h264_vda and h264_vdpau, which are
> listed among FFmpeg's decoders?

Both are outdated APIs. h264_vda was created only because a developer misunderstood how VDA works. h264_vdpau is rather similar to h264 + vdpau hwaccel, and in particular it isn't anything like h264_vda. You should use the hwaccel API instead. Don't let the similarity of the decoder names confuse you.

> And is H264 VDA the only hwaccel that doesn't require many
> modifications to be used (because of how it works, i.e. copying the
> data back to RAM)?

h264_vda pretends to be a normal software decoder, and works just like one. It doesn't require anything special; even ffplay could use it. The only good thing about this decoder is that it's very easy to use. But it appears to provide barely any advantage over software decoding, due to the high overhead of copying the video data back to RAM.
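As a rough illustration of the get_format mechanism described above (an untested sketch with 2013-era names; note the VDA hwaccel also requires a platform-specific struct in hwaccel_context, omitted here):

    static enum AVPixelFormat my_get_format(AVCodecContext *ctx,
                                            const enum AVPixelFormat *fmt)
    {
        /* fmt is a list terminated by AV_PIX_FMT_NONE; returning the
         * hwaccel entry switches the h264 decoder into hwaccel mode. */
        for (int i = 0; fmt[i] != AV_PIX_FMT_NONE; i++)
            if (fmt[i] == AV_PIX_FMT_VDA_VLD)
                return fmt[i];
        return fmt[0];   /* no hwaccel offered: fall back to software */
    }

    ...
    dec_ctx->get_format = my_get_format;
    /* after decoding, frame->format == AV_PIX_FMT_VDA_VLD and
     * frame->data[3] holds the surface (a CVPixelBufferRef for VDA) */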
The actual hwaccel decoders are much harder to use: partially because the interaction between the decoder and the platform-specific API (like VDA, vdpau, vaapi, DXVA) is complicated, in part because ffmpeg's hardware decoding support is very low-level, and in part because of chaotic API design.

From mybrokenbeat at gmail.com Thu Sep 5 19:45:42 2013
From: mybrokenbeat at gmail.com (Oleg)
Date: Thu, 5 Sep 2013 20:45:42 +0300
Subject: [Libav-user] Hardware accelerated decoding
In-Reply-To:
References: <944BB95E-5E64-4536-8D9D-5E31AB816DE4@orange.fr>
Message-ID:

Well, the OS X documentation (like the Windows one) is not informative. Perhaps the best way of learning the VDA API is a workable example. Here's a good one: https://github.com/sailesha/VDA-Sample

On 05.09.2013, at 14:14, Kirill Gavrilov wrote:
> Thanks for the information. Could you please point to documentation?

From nfxjfg at googlemail.com Thu Sep 5 19:53:19 2013
From: nfxjfg at googlemail.com (wm4)
Date: Thu, 5 Sep 2013 19:53:19 +0200
Subject: [Libav-user] Hardware accelerated decoding
In-Reply-To:
References: <944BB95E-5E64-4536-8D9D-5E31AB816DE4@orange.fr>
Message-ID: <20130905195319.6df62b01@debian>

On Thu, 5 Sep 2013 20:45:42 +0300, Oleg wrote:
> Well, the OS X documentation (like the Windows one) is not
> informative. Perhaps the best way of learning the VDA API is a
> workable example. Here's a good one:
> https://github.com/sailesha/VDA-Sample

This doesn't seem to use the ffmpeg hwaccel at all, but deals with VDA directly. (Well, maybe that's an advantage.)

From mybrokenbeat at gmail.com Thu Sep 5 19:57:21 2013
From: mybrokenbeat at gmail.com (Oleg)
Date: Thu, 5 Sep 2013 20:57:21 +0300
Subject: [Libav-user] Hardware accelerated decoding
In-Reply-To: <20130905195319.6df62b01@debian>
References: <944BB95E-5E64-4536-8D9D-5E31AB816DE4@orange.fr> <20130905195319.6df62b01@debian>
Message-ID: <2751757A-A33A-4DC8-82D1-0DA7CA25F45A@gmail.com>

On 05.09.2013, at 20:53, wm4 wrote:
> This doesn't seem to use the ffmpeg hwaccel at all, but deals with VDA
> directly. (Well, maybe that's an advantage.)

Well, all you need is to copy the render mechanism from that sample and pass AVFrame.data[3] to it. That should be a simple task.
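In outline (an untested sketch using OS X CoreVideo calls, assuming the frame came from the VDA hwaccel path described earlier), that hand-off amounts to:

    #include <CoreVideo/CoreVideo.h>

    CVPixelBufferRef pixbuf = (CVPixelBufferRef)frame->data[3];
    CVPixelBufferLockBaseAddress(pixbuf, kCVPixelBufferLock_ReadOnly);
    void  *base   = CVPixelBufferGetBaseAddressOfPlane(pixbuf, 0);
    size_t stride = CVPixelBufferGetBytesPerRowOfPlane(pixbuf, 0);
    /* ...upload the plane(s) to an OpenGL texture or copy them out,
     * as done in the sample linked above... */
    CVPixelBufferUnlockBaseAddress(pixbuf, kCVPixelBufferLock_ReadOnly);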
From cehoyos at ag.or.at Fri Sep 6 09:53:53 2013
From: cehoyos at ag.or.at (Carl Eugen Hoyos)
Date: Fri, 6 Sep 2013 07:53:53 +0000 (UTC)
Subject: [Libav-user] How to know the output library names
References:
Message-ID:

Lucas Soltic writes:
> Is there any reliable way of knowing the names of the libraries that
> will be created by a standard FFmpeg configure & make process?
> On Unix platforms there are ".so" and ".so.version" files, and on
> Windows there are ".dll" and "-version.dll" files.

I probably misunderstand, but are you suggesting that we name the Unix shared libraries *.dll? (Or the Windows shared libraries *.so?)

> This makes it hard to use FFmpeg with CMake, which requires the
> library names to be known at configuration time, for example in order
> to add install rules.

Why are you using CMake? And why would linkers be interested in the complete file name of the libraries? When I last tested, all sane linkers tried different suffixes...

Carl Eugen

From lucas.soltic at orange.fr Fri Sep 6 11:47:04 2013
From: lucas.soltic at orange.fr (Lucas Soltic)
Date: Fri, 6 Sep 2013 11:47:04 +0200
Subject: [Libav-user] How to know the output library names
In-Reply-To:
References:
Message-ID:

On 6 Sep 2013, at 09:53, Carl Eugen Hoyos wrote:
> I probably misunderstand, but are you suggesting that we name the Unix
> shared libraries *.dll? (Or the Windows shared libraries *.so?)

Hi,

No, Linux correctly uses .so and Windows .dll. The annoying point is the version suffix, because of course it changes when I update FFmpeg.

> Why are you using CMake? And why would linkers be interested in the
> complete file name of the libraries?

My issue is both an install-time and a runtime library resolution one.

I have a project that relies on CMake and needs FFmpeg to work. So that everything is integrated into CMake, I arranged for the FFmpeg build to be a custom CMake target that executes FFmpeg's configure & make. The other part of my project is set up the common way with CMake.

CMake allows adding install rules. But to do so, I have to give the exact names of the files that will be installed, and I have to give them before the FFmpeg libraries are built (at CMake configuration time). If a file to be installed does not exist yet at configuration time but my custom target creates it, CMake knows it first needs to execute my custom target and won't complain.

However, at the moment, on Linux I only tell CMake to install the .so file without any version number, because I don't know the number. So it will only install one .so instead of all of them (the links plus the real library). The issue is more or less the same on Windows: I can only know in advance the name of the DLL without any version in it, so I install that DLL.
BUT when using the .lib files produced by FFmpeg's make process with Visual Studio and running the program, it asks for the DLL WITH the version in it. So the DLL I installed seems completely useless.

Thanks for your time :)
Lucas

From cehoyos at ag.or.at Fri Sep 6 12:11:10 2013
From: cehoyos at ag.or.at (Carl Eugen Hoyos)
Date: Fri, 6 Sep 2013 10:11:10 +0000 (UTC)
Subject: [Libav-user] How to know the output library names
References:
Message-ID:

Lucas Soltic writes:
> No, Linux correctly uses .so and Windows .dll.

I did misunderstand...

> The annoying point is the version suffix, because of course it changes
> when I update FFmpeg.

It should not change. Or in other words: it only changes if updating your libraries would break your existing applications, so the change was (really, really) needed.

Sorry, I am not sure you are 100% comfortable with how dynamic linking (on Unix-like systems) works, so I am unsure what your problem is. (To make sure: I am not implying I am.)

You simply need the versions at runtime; they have absolutely no relevance at compile time (because when installing the libraries, a symbolic link is added to make sure you link against the current library version while still having the old version(s) installed). At compile time, you should always link against "xyz.so" (or actually "xyz", because the linker will add "so"), not ".so.99".

I am *guessing* that you actually want static linking in your case.

Carl Eugen

From soho123.2012 at gmail.com Fri Sep 6 13:38:52 2013
From: soho123.2012 at gmail.com (Huang Soho)
Date: Fri, 6 Sep 2013 19:38:52 +0800
Subject: [Libav-user] What is av_rescale_q()?
Message-ID:

Hi All,

I use ffmpeg + ffserver to run a streaming server; the video and audio source is a USB webcam. When I test the stream with VLC, it can get the video data and the audio data respectively. But when I use ffserver to output one RTP stream for audio and one RTP stream for video, they are not in sync.

While studying the code of ffmpeg.c and ffserver.c, I can see a lot of code that calls av_rescale_q() on pkt.pts, pkt.dts and pkt.duration. Can somebody explain that in more detail? Where can I find a document with a detailed description of it?

From alexcohn at netvision.net.il Fri Sep 6 13:58:56 2013
From: alexcohn at netvision.net.il (Alex Cohn)
Date: Fri, 6 Sep 2013 14:58:56 +0300
Subject: [Libav-user] How to know the output library names
In-Reply-To:
References:
Message-ID:

On Sep 6, 2013 12:47 PM, "Lucas Soltic" wrote:
> > Hi, > No, Linux correctly uses .so and Windows .dll. The annoying point is the version suffix because of course it changes when I update FFmpeg. > > > > >> This makes it hard to use FFmpeg with CMake which > >> requires the library name to be already known at > >> configuration time, in order to add install rules for > >> example. > > > > Why are you using CMake? > > And why would linkers be interested in the complete > > file name of the libraries, when I tested last, all > > sane linkers tried different suffixes... > > My issue is both an install and runtime library resolution one. > > I have a project that relies on CMake and that needs FFmpeg to work. So that everything is integrated in CMake, I did things so that the FFmpeg build is considered as a custom CMake target that will execute FFmpeg's configure & make. And the other part of my project is setup the common way with CMake. > > CMake allows adding install rules. But to do so I have to give the exact name of the files that will be installed, and I have to give them before the FFmpeg libraries are built (at CMake configuration time). In case the file to be installed does not exist yet at CMake configuration time, as my custom target create them, CMake knows it first need to execute my custom target and won't complain. > > However at the moment on Linux I only tell CMake to install the .so file without any version number because I don't know it. So it'll only install one .so instead of all the .so (links + the real library). > > The issue is more or less the same on Windows : I can only know in advance the name of the dll without any version in it. So I install that dll. BUT when using the .lib files produced by FFmpeg's make process with Visual Studio and executing the program, it asks for the dll WITH the version in it. So the dll I installed seems completely useless. > > Thanks for your time :) > Lucas > > > > > Carl Eugen I believe that you can use the "common" name (e.g. libavformat.so) as custom target, as you did before, but have the custom command copy or install the libavformat.so.* together wirh it. BR, Alex Cohn -------------- next part -------------- An HTML attachment was scrubbed... URL: From lucas.soltic at orange.fr Fri Sep 6 19:12:43 2013 From: lucas.soltic at orange.fr (Lucas Soltic) Date: Fri, 6 Sep 2013 19:12:43 +0200 Subject: [Libav-user] How to know the output libraries names In-Reply-To: References: Message-ID: <53D95C01-59F5-4729-859C-BF39B53D6153@orange.fr> Le 6 sept. 2013 ? 12:11, Carl Eugen Hoyos a ?crit : > Lucas Soltic writes: > >>>> On Unix platforms there are ".so" and ".so.version", >>>> and on Windows, there are ".dll" and "-version.dll". >>> >>> I probably misunderstand but are you suggesting that we >>> name the Unix shared libraries *.dll ? >>> (Or the Windows shared libraries *.so ?) >> >> Hi, >> No, Linux correctly uses .so and Windows .dll. > > I did misunderstand... > >> The annoying point is the version suffix because of >> course it changes when I update FFmpeg. > > It should not change. > > Or in other words: It only changes if updating your > libraries would break your existing applications, > so the change was (really, really) needed. Hmm you're right, I understand your point. > Sorry, I am not sure if you are 100% comfortable with > how dynamic linking (on unix-like systems) work, so > I am unsure what your problem is. > (To make sure: I am not implying I am.) 
> > You simply need the versions at runtime, they have > absolutely no relevance at compile time (because > when installing the libraries, a symbolic link is > added to make sure you link against the current > library version while having old version/versions > still installed). At compile time, you should > always link against "xyz.so" (or actually "xyz" > because the linker will add "so"), not ".so.99". I was more or less comfortable with dynamic linking (actually not 100% so thanks for the clarification!) But.. well, the linking issue is only on Windows. The issue I had both on Windows and Unix was with the CMake install rule. > I am *guessing* that you actually want static > linking in your case. Not sure if my previous answers weren't clear but.. no, I want dynamic linking. Anyway, I've actually found a *somewhat* clean solution: instead of installing the files, I install the directory that contains the FFmpeg libraries (I've copied them to a single directory). And this works great on every platform :) So no more need to worry about this issue, everything works fine now! Thanks to all of you for your time :) Lucas > Carl Eugen > > _______________________________________________ > Libav-user mailing list > Libav-user at ffmpeg.org > http://ffmpeg.org/mailman/listinfo/libav-user From lgiancri at tiscali.it Fri Sep 6 19:33:16 2013 From: lgiancri at tiscali.it (luigi) Date: Fri, 06 Sep 2013 19:33:16 +0200 Subject: [Libav-user] decoding h264 Message-ID: <522A11DC.7050801@tiscali.it> Hi, I have problems decoding h264 video chunks included in a Mp4 video. As far as I know nal units start with 0x000001 identifier. So I manage to parse mp4 video, spot the video chunks, re-parse the chunks to find nal unit identifier, then I reassemble the chunks and, finally, send the nal units to av_decode_video. In the code below, the nal unit buffer is tm_buf->buffer, its length is tm_buf->buf_len ... AVPacket *avpack = &packet ; avpack->side_data = NULL; avpack->side_data_elems = 0; avpack->data = tm_buf->buffer; avpack->size = tm_buf->buf_len; AVFrame *decoded_frame = NULL; if (!decoded_frame) { if (!(decoded_frame = avcodec_alloc_frame())) { fprintf(stderr, "Could not allocate audio frame\n"); exit(1); } } else avcodec_get_frame_defaults(decoded_frame); len = avcodec_decode_video2( context, decoded_frame, &got_picture, avpack ) ; Unfortunately the output window remains black. I do not use av_read_frame to read from the file, because I want to rewrite the demuxer (as a matter of fact I already have). My knowledge of h264 is very poor, but why doesn't libav manage to decode the chunks. In addition, I don't seem to get any error message. Is there a way to solve all this without having to study all the stuff about h264? Luigi -------------- next part -------------- An HTML attachment was scrubbed... URL: From jpboard2 at yahoo.com Fri Sep 6 20:12:33 2013 From: jpboard2 at yahoo.com (James Board) Date: Fri, 6 Sep 2013 11:12:33 -0700 (PDT) Subject: [Libav-user] VFR and CFR and libav Message-ID: <1378491153.77876.YahooMailNeo@web164701.mail.gq1.yahoo.com> Let's say I have an input video (AVI format) that is vfr, and I write a libav application that steps through each frame in the video and does something: maybe I?write that frame to a file on disk.? Will I have vfr-problems where the libav is duplicating frames, or removing frames (behind the scenes) in order to make it CFR?? Does libav do things like that?? If so, can I turn it off? ? 
Also, and this probably isn't a libav question, but how can I tell
if my input AVI file is vfr or cfr?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From onemda at gmail.com  Fri Sep 6 20:17:58 2013
From: onemda at gmail.com (Paul B Mahol)
Date: Fri, 6 Sep 2013 18:17:58 +0000
Subject: [Libav-user] VFR and CFR and libav
In-Reply-To: <1378491153.77876.YahooMailNeo@web164701.mail.gq1.yahoo.com>
References: <1378491153.77876.YahooMailNeo@web164701.mail.gq1.yahoo.com>
Message-ID: 

On 9/6/13, James Board wrote:
> Let's say I have an input video (AVI format) that is vfr, and I write
> a libav application that steps through each frame in the video and
> does something: maybe I write that frame to a file on disk. Will
> I have vfr-problems where libav is duplicating frames, or removing
> frames (behind the scenes) in order to make it CFR? Does libav
> do things like that? If so, can I turn it off?

When using the libs you are on your own. The VFR/CFR handling comes
with the ffmpeg tool.

> Also, and this probably isn't a libav question, but how can I tell
> if my input AVI file is vfr or cfr?

By looking at the frame timestamps.

From jpboard2 at yahoo.com  Fri Sep 6 20:26:42 2013
From: jpboard2 at yahoo.com (James Board)
Date: Fri, 6 Sep 2013 11:26:42 -0700 (PDT)
Subject: [Libav-user] VFR and CFR and libav
In-Reply-To: 
References: <1378491153.77876.YahooMailNeo@web164701.mail.gq1.yahoo.com>
Message-ID: <1378492002.25109.YahooMailNeo@web164705.mail.gq1.yahoo.com>

> > When using the libs you are on your own. The VFR/CFR handling comes
> > with the ffmpeg tool.

So it sounds like what you're saying is that with the libs, I won't have the
same issues as with the ffmpeg feature where it duplicated input frames in
order to meet some unspecified constant frame rate? With the libs, when
I ask for frame N, they will return the N-th video frame, exactly. Is that
correct?

That is what I prefer. Coding things in libav might be the best way for me
to get the behavior I want, and to learn how all these things work.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From krueger at lesspain.de  Sat Sep 7 17:20:15 2013
From: krueger at lesspain.de (=?UTF-8?Q?Robert_Kr=C3=BCger?=)
Date: Sat, 7 Sep 2013 17:20:15 +0200
Subject: [Libav-user] VFR and CFR and libav
In-Reply-To: <1378492002.25109.YahooMailNeo@web164705.mail.gq1.yahoo.com>
References: <1378491153.77876.YahooMailNeo@web164701.mail.gq1.yahoo.com>
	<1378492002.25109.YahooMailNeo@web164705.mail.gq1.yahoo.com>
Message-ID: 

On Fri, Sep 6, 2013 at 8:26 PM, James Board wrote:
>>
>> When using the libs you are on your own. The VFR/CFR handling comes
>> with the ffmpeg tool.
> So it sounds like what you're saying is that with the libs, I won't have the
> same issues as with the ffmpeg feature where it duplicated input frames in
> order to meet some unspecified constant frame rate? With the libs, when
> I ask for frame N, they will return the N-th video frame, exactly. Is that
> correct?

Yes, but you don't ask for the n-th frame: you feed the decoder packet
after packet and get one decoded frame after the other, in display order.
Maybe that is what you meant. If you don't care about the timestamps
(e.g. when you just want to dump every frame) you can ignore them.

> That is what I prefer. Coding things in libav might be the best
> way for me to get the behavior I want, and learn how all these things
> work.
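To make the packet-after-packet model concrete, here is a minimal sketch of
such a decode loop, using the same era of the API as the rest of this thread
(avcodec_decode_video2); fmt_ctx, dec_ctx and video_stream_index are
placeholder names, not anything from the original mails:

    AVPacket pkt;
    AVFrame *frame = avcodec_alloc_frame(); /* av_frame_alloc() in newer trees */
    int got_frame;

    while (av_read_frame(fmt_ctx, &pkt) >= 0) {
        if (pkt.stream_index == video_stream_index &&
            avcodec_decode_video2(dec_ctx, frame, &got_frame, &pkt) >= 0 &&
            got_frame) {
            /* frames come out here one by one, in display order */
        }
        av_free_packet(&pkt); /* av_packet_unref() in newer trees */
    }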
> > > _______________________________________________ > Libav-user mailing list > Libav-user at ffmpeg.org > http://ffmpeg.org/mailman/listinfo/libav-user > From jpboard2 at yahoo.com Sun Sep 8 16:14:52 2013 From: jpboard2 at yahoo.com (James Board) Date: Sun, 8 Sep 2013 07:14:52 -0700 (PDT) Subject: [Libav-user] VFR and CFR and libav In-Reply-To: References: <1378491153.77876.YahooMailNeo@web164701.mail.gq1.yahoo.com> Message-ID: <1378649692.21715.YahooMailNeo@web164705.mail.gq1.yahoo.com> >> Also, and this probably isn't a libav question, but how can I tell >> if my input AVI file is vfr or cfr? > >By looking at frame timestamps. I tried this: ??? ffmprobe IN.avi -show_frames | grep pkt_pts_time Is that the correct way to look t time stamps?? Are those the real timestamps, or are those fake time stamps that were invented to conform to CFR? Also, how can I tell if the video is vfr or cvf based on those timestamps? -------------- next part -------------- An HTML attachment was scrubbed... URL: From manisandro at gmail.com Sun Sep 8 17:17:28 2013 From: manisandro at gmail.com (Sandro Mani) Date: Sun, 08 Sep 2013 17:17:28 +0200 Subject: [Libav-user] libavformat, RTSP: periodically read last available frame of live stream Message-ID: <522C9508.8060306@gmail.com> Hello, I would like to periodically (say every second) grab the latest frame from a rtsp live stream of a webcam. I am able to successfully open the stream and read and decode frames, however if I read one frame every second, I am still reading subsequent frames (as opposed to frames which are i.e. one second apart). I guess I need to seek to the last available frame before reading the next frame. Can anyone point out how this can be done? For reference, my code is here: [1] [2]. Thanks for any inputs. Sandro [1] http://smani.fedorapeople.org/VideoCapture.hpp [2] http://smani.fedorapeople.org/VideoCapture.cpp From alexcohn at netvision.net.il Sun Sep 8 18:32:49 2013 From: alexcohn at netvision.net.il (Alex Cohn) Date: Sun, 8 Sep 2013 18:32:49 +0200 Subject: [Libav-user] libavformat, RTSP: periodically read last available frame of live stream In-Reply-To: <522C9508.8060306@gmail.com> References: <522C9508.8060306@gmail.com> Message-ID: On Sep 8, 2013 6:17 PM, "Sandro Mani" wrote: > > Hello, > > I would like to periodically (say every second) grab the latest frame from a rtsp live stream of a webcam. I am able to successfully open the stream and read and decode frames, however if I read one frame every second, I am still reading subsequent frames (as opposed to frames which are i.e. one second apart). I guess I need to seek to the last available frame before reading the next frame. Can anyone point out how this can be done? > > For reference, my code is here: [1] [2]. > > Thanks for any inputs. > > Sandro > > > [1] http://smani.fedorapeople.org/VideoCapture.hpp > [2] http://smani.fedorapeople.org/VideoCapture.cpp If your webcam has GOP of 1 sec, e.g. GOP length = 30 and FPS = 30, you can skip to next second; otherwize, the correct strategy would be to decode all frames, but throw away all frames that you don't need. BR Alex Cohn -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From manisandro at gmail.com  Sun Sep 8 19:15:15 2013
From: manisandro at gmail.com (Sandro Mani)
Date: Sun, 08 Sep 2013 19:15:15 +0200
Subject: [Libav-user] libavformat, RTSP: periodically read last available frame of live stream
In-Reply-To: 
References: <522C9508.8060306@gmail.com>
Message-ID: <522CB0A3.4090908@gmail.com>

On 08.09.2013 18:32, Alex Cohn wrote:
>
> On Sep 8, 2013 6:17 PM, "Sandro Mani" wrote:
> >
> > Hello,
> >
> > I would like to periodically (say every second) grab the latest
> frame from a rtsp live stream of a webcam. I am able to successfully
> open the stream and read and decode frames, however if I read one
> frame every second, I am still reading subsequent frames (as opposed
> to frames which are i.e. one second apart). I guess I need to seek to
> the last available frame before reading the next frame. Can anyone
> point out how this can be done?
> >
> > For reference, my code is here: [1] [2].
> >
> > Thanks for any inputs.
> >
> > Sandro
> >
> >
> > [1] http://smani.fedorapeople.org/VideoCapture.hpp
> > [2] http://smani.fedorapeople.org/VideoCapture.cpp
>
> If your webcam has GOP of 1 sec, e.g. GOP length = 30 and FPS = 30,
> you can skip to next second; otherwize, the correct strategy would be
> to decode all frames, but throw away all frames that you don't need.
>
> BR
> Alex Cohn
>
Thanks for the reply. By "skip", do you mean calling av_seek_frame with
an appropriately computed timestamp? (Sorry if I may be asking the
obvious, I'm rather a novice in this area.)

Sandro
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From lgiancri at tiscali.it  Mon Sep 9 16:59:01 2013
From: lgiancri at tiscali.it (luigi)
Date: Mon, 09 Sep 2013 16:59:01 +0200
Subject: [Libav-user] decoding h264
Message-ID: <522DE235.80104@tiscali.it>

Hi,
I solved the problem: it was a trivial stray character (#) left in
Mp4_demuxer, in the code that checked the video fifo. That is why I never
received error messages from avcodec_decode_video2: the buffers never got
processed. Now everything is fine; there was nothing wrong in the piece of
code I enclosed in the previous message. By the way, avcodec_decode_video2
decodes h264 nal units without problems. The source file was a video clip
downloaded from Youtube; the container was .mp4, video h264, audio mp4a.
Bye
Luigi

From fisher.jeff at gmail.com  Thu Sep 5 19:35:27 2013
From: fisher.jeff at gmail.com (Jeff Fisher)
Date: Thu, 5 Sep 2013 10:35:27 -0700
Subject: [Libav-user] avformat_open_input hangs on UDP stream
Message-ID: <5619CBC6-7432-4C34-AC9F-97207B7F2B0B@gmail.com>

Hi everyone,

I have the following code snippet in a video display class:

    bool VideoSourceNetwork::connectToStream(const QString &Address)
    {
        avcodec_register_all();
        av_register_all();
        if (avformat_network_init() < 0)
            return false;
        m_pContextFormat = NULL;
        if (avformat_open_input(&m_pContextFormat, Address.toStdString().c_str(), NULL, NULL) < 0)
            return false;
        ... etc ...
        return true;
    }

If I call that function with a local file path (e.g. "/root/video/test.mpg"),
everything works great. If, however, I pass it a network address (e.g.
"udp://:1234"), avformat_open_input never returns. If I use VLC to connect to
udp://:1234, I get a video stream in probably less than a second.

Any ideas? Thanks in advance.
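A common way around a blocking avformat_open_input on a dead or silent UDP
source is libavformat's interrupt callback. The sketch below is only one
possible approach, using a simple wall-clock deadline; the Deadline struct
and the 5-second figure are made up for illustration:

    #include <libavformat/avformat.h>
    #include <time.h>

    typedef struct { time_t start; int limit_sec; } Deadline;

    /* returning non-zero aborts whatever blocking call libavformat is in */
    static int interrupt_cb(void *opaque)
    {
        Deadline *d = opaque;
        return time(NULL) - d->start > d->limit_sec;
    }

    int open_with_timeout(const char *url)
    {
        Deadline d = { time(NULL), 5 };
        AVFormatContext *ctx = avformat_alloc_context();
        ctx->interrupt_callback.callback = interrupt_cb;
        ctx->interrupt_callback.opaque   = &d;
        /* on failure, avformat_open_input frees ctx and sets it to NULL */
        if (avformat_open_input(&ctx, url, NULL, NULL) < 0)
            return -1; /* timed out or failed */
        avformat_close_input(&ctx);
        return 0;
    }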
From alexcohn at netvision.net.il Mon Sep 9 22:33:09 2013 From: alexcohn at netvision.net.il (Alex Cohn) Date: Mon, 9 Sep 2013 23:33:09 +0300 Subject: [Libav-user] libavformat, RTSP: periodically read last available frame of live stream In-Reply-To: <522CB0A3.4090908@gmail.com> References: <522C9508.8060306@gmail.com> <522CB0A3.4090908@gmail.com> Message-ID: On Sep 8, 2013 8:15 PM, "Sandro Mani" wrote: > > On 08.09.2013 18:32, Alex Cohn wrote: >> >> On Sep 8, 2013 6:17 PM, "Sandro Mani" wrote: >> > >> > Hello, >> > >> > I would like to periodically (say every second) grab the latest frame from a rtsp live stream of a webcam. I am able to successfully open the stream and read and decode frames, however if I read one frame every second, I am still reading subsequent frames (as opposed to frames which are i.e. one second apart). I guess I need to seek to the last available frame before reading the next frame. Can anyone point out how this can be done? >> > >> > For reference, my code is here: [1] [2]. >> > >> > Thanks for any inputs. >> > >> > Sandro >> > >> > >> > [1] http://smani.fedorapeople.org/VideoCapture.hpp >> > [2] http://smani.fedorapeople.org/VideoCapture.cpp >> >> If your webcam has GOP of 1 sec, e.g. GOP length = 30 and FPS = 30, you can skip to next second; otherwize, the correct strategy would be to decode all frames, but throw away all frames that you don't need. >> >> BR >> Alex Cohn >> >> > Thanks for the reply. With skip do you mean by calling av_seek_frame with an appropriately computed timestamp? (Sorry if I may be asking the obvious, I'm rather a novice in this area.) > > Sandro This is a good question. I would suggest av_seek_frame with relevant timestamp and flags=0 (i.e. only keyframes). But if your stream does not support seeking, you can simply read incoming packets (av_read_frame) until you reach the next key frame (when AVPacket::flags is PKT_FLAG_KEY). Good luck, Alex Cohn From mrfun.china at gmail.com Tue Sep 10 04:53:33 2013 From: mrfun.china at gmail.com (YIRAN LI) Date: Tue, 10 Sep 2013 12:53:33 +1000 Subject: [Libav-user] about index_entries in AVStream Message-ID: Hi, I met a problem when doing seek in a mpeg file (MPEG2 encoded MPEG-PS video). After the file is opened using avformat_open_input, seems the index_entries of video stream is populated with incorrect values (the comment says these index_entries are only used if format does not support seeking natively). I can directly call av_seek_frame and it succeeded on this file. 
But I don't understand: if those index_entries are not valid, why do they get
populated when the file is opened? So I'd like to know if there's a way to
judge whether the index_entries are valid, or if there's a list telling me
which formats don't natively support seeking (so if I know which formats
support direct seeking, I can ignore index_entries and seek directly).

The index_entries of that mpeg file have the following timestamps:

    00:00:19.267 MAIN entry[0].time = 21600
    00:00:20.109 MAIN entry[1].time = 57600
    00:00:20.842 MAIN entry[2].time = 100800
    00:00:20.842 MAIN entry[3].time = 144000
    00:00:20.842 MAIN entry[4].time = 187200
    00:00:20.842 MAIN entry[5].time = 230400
    00:00:20.842 MAIN entry[6].time = 273600
    00:00:20.842 MAIN entry[7].time = 316800
    00:00:20.842 MAIN entry[8].time = 360000
    00:00:20.842 MAIN entry[9].time = 403200
    00:00:20.842 MAIN entry[10].time = 446400
    00:00:20.842 MAIN entry[11].time = 489600
    00:00:20.842 MAIN entry[12].time = 532800
    00:00:20.842 MAIN entry[13].time = 576000
    00:00:20.842 MAIN entry[14].time = 619200
    00:00:20.842 MAIN entry[15].time = 662400
    00:00:20.842 MAIN entry[16].time = 705600
    00:00:20.842 MAIN entry[17].time = 285264000

The last timestamp is quite close to the end of the video, while the first 17
are quite close to the beginning. So if I seek based on these indexes, it
always goes to the very beginning first.

Thanks!
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From manisandro at gmail.com  Tue Sep 10 10:02:24 2013
From: manisandro at gmail.com (Sandro Mani)
Date: Tue, 10 Sep 2013 10:02:24 +0200
Subject: [Libav-user] libavformat, RTSP: periodically read last available frame of live stream
In-Reply-To: 
References: <522C9508.8060306@gmail.com> <522CB0A3.4090908@gmail.com>
Message-ID: <522ED210.8040907@gmail.com>

On 09.09.2013 22:33, Alex Cohn wrote:
> On Sep 8, 2013 8:15 PM, "Sandro Mani" wrote:
>> On 08.09.2013 18:32, Alex Cohn wrote:
>>> On Sep 8, 2013 6:17 PM, "Sandro Mani" wrote:
>>>> Hello,
>>>>
>>>> I would like to periodically (say every second) grab the latest frame
>>>> from a rtsp live stream of a webcam. I am able to successfully open the
>>>> stream and read and decode frames, however if I read one frame every
>>>> second, I am still reading subsequent frames (as opposed to frames which
>>>> are i.e. one second apart). I guess I need to seek to the last available
>>>> frame before reading the next frame. Can anyone point out how this can
>>>> be done?
>>>>
>>>> For reference, my code is here: [1] [2].
>>>>
>>>> Thanks for any inputs.
>>>>
>>>> Sandro
>>>>
>>>>
>>>> [1] http://smani.fedorapeople.org/VideoCapture.hpp
>>>> [2] http://smani.fedorapeople.org/VideoCapture.cpp
>>> If your webcam has GOP of 1 sec, e.g. GOP length = 30 and FPS = 30, you
>>> can skip to next second; otherwize, the correct strategy would be to
>>> decode all frames, but throw away all frames that you don't need.
>>>
>>> BR
>>> Alex Cohn
>>>
>>>
>> Thanks for the reply. By "skip", do you mean calling av_seek_frame with an
>> appropriately computed timestamp? (Sorry if I may be asking the obvious,
>> I'm rather a novice in this area.)
>>
>> Sandro
> This is a good question. I would suggest av_seek_frame with relevant
> timestamp and flags=0 (i.e. only keyframes). But if your stream does
> not support seeking, you can simply read incoming packets
> (av_read_frame) until you reach the next key frame (when
> AVPacket::flags is PKT_FLAG_KEY).
>
> Good luck,
> Alex Cohn
>
Thanks Alex!
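Since av_rescale_q also came up earlier in this digest (for pkt.pts, pkt.dts
and pkt.duration): av_rescale_q(a, bq, cq) computes a * bq / cq with 64-bit
intermediates, i.e. it converts an integer timestamp from one time base to
another without float rounding or overflow. A hedged sketch of the
seek-then-wait-for-keyframe idea from Alex's mail (fmt_ctx and video_stream
are placeholder names, the 42-second target is made up, and PKT_FLAG_KEY is
spelled AV_PKT_FLAG_KEY in current headers):

    /* seek to ~42s: convert seconds in AV_TIME_BASE units to stream units */
    int64_t ts = av_rescale_q(42 * AV_TIME_BASE, AV_TIME_BASE_Q,
                              fmt_ctx->streams[video_stream]->time_base);
    av_seek_frame(fmt_ctx, video_stream, ts, AVSEEK_FLAG_BACKWARD);

    /* whether or not the seek worked, scan forward to the next keyframe */
    AVPacket pkt;
    while (av_read_frame(fmt_ctx, &pkt) >= 0) {
        int key = pkt.stream_index == video_stream &&
                  (pkt.flags & AV_PKT_FLAG_KEY);
        if (key) {
            /* start decoding from this keyframe */
        }
        av_free_packet(&pkt);
        if (key)
            break;
    }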
From lgiancri at tiscali.it Tue Sep 10 13:56:23 2013 From: lgiancri at tiscali.it (luigi) Date: Tue, 10 Sep 2013 13:56:23 +0200 Subject: [Libav-user] downmixing 5.1 to stereo Message-ID: <522F08E7.7090608@tiscali.it> the following code (adapted from resampling_audio.c written by Stefano Sabatini and included in ffmpeg doc/examples) is able do resample fltp to AV_SAMPLE_FMT_S16 but fails to downmix 5.1 to stereo. I've tried several solutions without hope. The routine takes a buffer decoded with av_decode_audio4 and after resampling sends it to an output buffer. Can someone have a look at this code and spot where the mistake is? Luigi int32_t LG_ffmpeg_Audio_decoder::ResampleAudio( AVFrame *dec_fr ) { int64_t src_ch_layout = dec_fr->channel_layout, dst_ch_layout = AV_CH_LAYOUT_STEREO; int32_t src_rate = dec_fr->sample_rate, dst_rate = 48000; uint8_t **src_data = dec_fr->data; int32_t dst_nb_channels = 0; int32_t dst_linesize; int32_t src_nb_samples = dec_fr->nb_samples, dst_nb_samples, max_dst_nb_samples; enum AVSampleFormat src_sample_fmt = (AVSampleFormat)dec_fr->format; enum AVSampleFormat dst_sample_fmt = AV_SAMPLE_FMT_S16; int32_t dst_bufsize; const char *fmt; struct SwrContext *swr_ctx; int32_t ret; /* create resampler context */ swr_ctx = swr_alloc(); if (!swr_ctx) { fprintf(stderr, "Could not allocate resampler context\n"); ret = AVERROR(ENOMEM); goto end; } /* set options */ av_opt_set_int(swr_ctx, "in_channel_layout", src_ch_layout, 0); av_opt_set_int(swr_ctx, "in_sample_rate", src_rate, 0); av_opt_set_sample_fmt(swr_ctx, "in_sample_fmt", src_sample_fmt, 0); av_opt_set_int(swr_ctx, "out_channel_layout", dst_ch_layout, 0); av_opt_set_int(swr_ctx, "out_sample_rate", dst_rate, 0); av_opt_set_sample_fmt(swr_ctx, "out_sample_fmt", dst_sample_fmt, 0); /* initialize the resampling context */ if ((ret = swr_init(swr_ctx)) < 0) { fprintf(stderr, "Failed to initialize the resampling context\n"); goto end; } /* compute the number of converted samples: buffering is avoided * ensuring that the output buffer will contain at least all the * converted input samples */ max_dst_nb_samples = dst_nb_samples = av_rescale_rnd(src_nb_samples, dst_rate, src_rate, AV_ROUND_UP); /* buffer is going to be directly written to a rawaudio file, no alignment */ dst_nb_channels = av_get_channel_layout_nb_channels(dst_ch_layout); ret = av_samples_alloc_array_and_samples(&dst_data, &dst_linesize, dst_nb_channels, dst_nb_samples, dst_sample_fmt, 0); if (ret < 0) { fprintf(stderr, "Could not allocate destination samples\n"); goto end; } /* compute destination number of samples */ dst_nb_samples = av_rescale_rnd(swr_get_delay(swr_ctx, src_rate) + src_nb_samples, dst_rate, src_rate, AV_ROUND_UP); if (dst_nb_samples > max_dst_nb_samples) { av_free(dst_data[0]); ret = av_samples_alloc(dst_data, &dst_linesize, dst_nb_channels, dst_nb_samples, dst_sample_fmt, 1); if (ret < 0) exit(0);// break; max_dst_nb_samples = dst_nb_samples; } /* convert to destination format */ ret = swr_convert(swr_ctx, dst_data, dst_nb_samples, (const uint8_t **)src_data, src_nb_samples); if (ret < 0) { fprintf(stderr, "Error while converting\n"); goto end; } dst_bufsize = av_samples_get_buffer_size(&dst_linesize, dst_nb_channels, ret, dst_sample_fmt, 1); // printf("t:%f in:%d out:%d\n", t, src_nb_samples, ret); InsertBufferInOutBuf((unsigned char *)dst_data[0], dst_bufsize ) ; if ((ret = GetFormatFromSampleFmt(&fmt, dst_sample_fmt)) < 0) goto end; // fprintf(stderr, "Resampling succeeded. 
Play the output file with the command:\n" // "ffplay -f %s -channel_layout % -channels %d -ar %d %s\n", // fmt, dst_ch_layout, dst_nb_channels, dst_rate, dst_filename); goto end; end: if (dst_data) av_freep(&dst_data[0]); av_freep(&dst_data); swr_free(&swr_ctx); return ret < 0; } From leo.fernando34 at gmail.com Tue Sep 10 17:14:01 2013 From: leo.fernando34 at gmail.com (Leo Fernando) Date: Tue, 10 Sep 2013 08:14:01 -0700 (PDT) Subject: [Libav-user] Segmentation fault in get_bits Message-ID: <1378826041208-4658495.post@n4.nabble.com> Hi, I use libav through gstreamer. I'm using gst version 1.0.8. I often see crash with segmentation fault pointing to get_bits function while decoding 576i Mpeg2 stream Program received signal SIGSEGV, Segmentation fault. [Switching to Thread 0x7fffe67c8700 (LWP 20524)] get_bits (s=0x7fffd804a1a0, mb_y=34, buf=0x7fffe67c6a00, buf_size=0) at libavcodec/get_bits.h:241 241 UPDATE_CACHE(re, s); (gdb) Anyone has any idea on this? Cheers, Leo -- View this message in context: http://libav-users.943685.n4.nabble.com/Segmentation-fault-in-get-bits-tp4658495.html Sent from the libav-users mailing list archive at Nabble.com. From cehoyos at ag.or.at Tue Sep 10 23:34:59 2013 From: cehoyos at ag.or.at (Carl Eugen Hoyos) Date: Tue, 10 Sep 2013 21:34:59 +0000 (UTC) Subject: [Libav-user] =?utf-8?q?Segmentation_fault_in_get=5Fbits?= References: <1378826041208-4658495.post@n4.nabble.com> Message-ID: Leo Fernando writes: > Program received signal SIGSEGV, Segmentation fault. > [Switching to Thread 0x7fffe67c8700 (LWP 20524)] > get_bits (s=0x7fffd804a1a0, mb_y=34, buf=0x7fffe67c6a00, > buf_size=0) at libavcodec/get_bits.h:241 > 241 UPDATE_CACHE(re, s); > (gdb) Needed part of gdb output missing, see http://ffmpeg.org/bugreports.html Much more important: Please search for the "lavc" version string in the libavcodec library and tell us. Carl Eugen From rolaoo at gazeta.pl Wed Sep 11 12:17:37 2013 From: rolaoo at gazeta.pl (rolaoo Gazeta.pl) Date: Wed, 11 Sep 2013 12:17:37 +0200 Subject: [Libav-user] Effective demuxing single stream of multiple-stream file Message-ID: Hello, I want to demux only one stream (e.g. some selected audio stream) from a file which contains video and multiple audio streams. I use standard demuxing procedue: open the file using avformat_open_input and avformat_find_stream_info, then read packets using av_read_frame. But I am interested for packets from only one stream so I am ignoring all packets from remaining streams. However it introduces a big and unnecessary overhead as all streams are processed and e.g. video packets bandwitch is way bigger than audio which I am interested only. How to tell demuxer to process packets for stream which I am interested and do not process others? At this moment I am iterating all AVFormatContext::stream streams and setting discard = AVDISCARD_ALL for streams which I don't want and I see that disk I/O is way lower. The question is: is it right way to do so? Maybe some additional parameters for avformat_open_input can select only one stream in a better way? best regards Remek -------------- next part -------------- An HTML attachment was scrubbed... URL: From gokhan.narin at ortana.com Wed Sep 11 11:29:11 2013 From: gokhan.narin at ortana.com (gokhan.narin at ortana.com) Date: Wed, 11 Sep 2013 12:29:11 +0300 Subject: [Libav-user] Release a video Message-ID: <6a792cbbed6523a6ea56254deabee95f@linuxserver.ortana.com> Hi folks, I try to release a video file but I can't, I use muxing example to make a video. 
However, termination steps are not able to finish a file. Could you please some one assist me? Gokhan From 42 at flubb.net Thu Sep 12 14:46:31 2013 From: 42 at flubb.net (Fulbert Boussaton) Date: Thu, 12 Sep 2013 14:46:31 +0200 Subject: [Libav-user] Basic problem : memory leak when playing several movies Message-ID: Hi everyone, I use FFMpeg on iOS/armv7 (I just cloned the repository yesterday) and even though I easily managed to make a basic video player (thanks to all the devs by the way, the project is really a technical achievement !), I still have a frustrating memory leak problem. Short version : some memory is not released each time I play (switch to) a new movie in the same run. The amount of leaking memory depends on the movie, the order is 500KB-5MB. In order to better explain the situation, here are the operations I execute to open a new movie (The code is slightly modified to keep only the relevant calls) : AVFormatContext* Ctx; avformat_open_input(&Ctx, <...>, NULL, &Options); avformat_find_stream_info(Ctx, NULL); AVCodecContext* VCodecCtx = Ctx->streams[0]->codec; AVCodec* TargetVideoCodec = avcodec_find_decoder(VCodecCtx->codec_id); avcodec_open2(VCodecCtx, TargetVideoCodec, NULL); AVFrame* TargetFrame = avcodec_alloc_frame(); AVFrame* TargetFrameRGB = avcodec_alloc_frame(); int FrameSizeInBytes = avpicture_get_size(PIX_FMT_RGB24, VCodecCtx->width, VCodecCtx->height); uint8_t* FrameBuffer = (uint8_t*) av_malloc(FrameSizeInBytes); avpicture_fill((AVPicture*) TargetFrameRGB, FrameBuffer, PIX_FMT_RGB24, VCodecCtx->width, VCodecCtx->height); struct SwsContext* ScaleCtx = sws_getContext(VCodecCtx->width, VCodecCtx->height, VCodecCtx->pix_fmt, VCodecCtx->width, VCodecCtx->height, PIX_FMT_RGB24, SWS_BICUBIC, NULL, NULL, NULL); And here are the functions I call to release all the previous structures : sws_freeContext(ScaleCtx); avpicture_free((AVPicture*)TargetFrameRGB); av_free(FrameBuffer); avcodec_free_frame(&TargetFrame); avcodec_free_frame(&TargetFrameRGB); avcodec_close(VCodecCtx); avformat_close_input(&Ctx); There's no AVPacket management, picture scaling or anything graphical here because I tried to present you the minimal pieces of code for which the problem occurs. Obviously, I didn't release something I asked to be allocated so I'm sure I've forgot to call a avXXX_free_something but I can't find anything in the sources and I'm quite stuck. Is there anybody who can see what the problem is ? Thanks. Fulbert. -------------- next part -------------- An HTML attachment was scrubbed... URL: From 42 at flubb.net Thu Sep 12 16:08:01 2013 From: 42 at flubb.net (Fulbert Boussaton) Date: Thu, 12 Sep 2013 16:08:01 +0200 Subject: [Libav-user] Basic problem : memory leak when playing several movies In-Reply-To: References: Message-ID: I investigated further and found I made a basic error : the releasing code was not called as it should ! So, the code was correct (bar the avpicture_free call) and doesn't leak at all. Sorry for the noise... Fulbert. On Sep 12, 2013, at 14:46, Fulbert <42 at flubb.net> wrote: > Hi everyone, > > I use FFMpeg on iOS/armv7 (I just cloned the repository yesterday) and even though I easily managed to make a basic video player (thanks to all the devs by the way, the project is really a technical achievement !), I still have a frustrating memory leak problem. > > Short version : some memory is not released each time I play (switch to) a new movie in the same run. The amount of leaking memory depends on the movie, the order is 500KB-5MB. 
> > In order to better explain the situation, here are the operations I execute to open a new movie (The code is slightly modified to keep only the relevant calls) : > > AVFormatContext* Ctx; > avformat_open_input(&Ctx, <...>, NULL, &Options); > avformat_find_stream_info(Ctx, NULL); > AVCodecContext* VCodecCtx = Ctx->streams[0]->codec; > AVCodec* TargetVideoCodec = avcodec_find_decoder(VCodecCtx->codec_id); > avcodec_open2(VCodecCtx, TargetVideoCodec, NULL); > AVFrame* TargetFrame = avcodec_alloc_frame(); > AVFrame* TargetFrameRGB = avcodec_alloc_frame(); > int FrameSizeInBytes = avpicture_get_size(PIX_FMT_RGB24, VCodecCtx->width, VCodecCtx->height); > uint8_t* FrameBuffer = (uint8_t*) av_malloc(FrameSizeInBytes); > avpicture_fill((AVPicture*) TargetFrameRGB, FrameBuffer, PIX_FMT_RGB24, VCodecCtx->width, VCodecCtx->height); > struct SwsContext* ScaleCtx = sws_getContext(VCodecCtx->width, VCodecCtx->height, VCodecCtx->pix_fmt, VCodecCtx->width, VCodecCtx->height, PIX_FMT_RGB24, SWS_BICUBIC, NULL, NULL, NULL); > > And here are the functions I call to release all the previous structures : > > sws_freeContext(ScaleCtx); > avpicture_free((AVPicture*)TargetFrameRGB); > av_free(FrameBuffer); > avcodec_free_frame(&TargetFrame); > avcodec_free_frame(&TargetFrameRGB); > avcodec_close(VCodecCtx); > avformat_close_input(&Ctx); > > There's no AVPacket management, picture scaling or anything graphical here because I tried to present you the minimal pieces of code for which the problem occurs. > > Obviously, I didn't release something I asked to be allocated so I'm sure I've forgot to call a avXXX_free_something but I can't find anything in the sources and I'm quite stuck. > > Is there anybody who can see what the problem is ? > > > Thanks. > > Fulbert. > _______________________________________________ > Libav-user mailing list > Libav-user at ffmpeg.org > http://ffmpeg.org/mailman/listinfo/libav-user From mm.xie at Sunmedia.com.cn Thu Sep 12 05:44:33 2013 From: mm.xie at Sunmedia.com.cn (mm.xie at Sunmedia.com.cn) Date: Thu, 12 Sep 2013 11:44:33 +0800 Subject: [Libav-user] ffmpeg AAC problem Message-ID: An HTML attachment was scrubbed... URL: From cehoyos at ag.or.at Thu Sep 12 23:14:54 2013 From: cehoyos at ag.or.at (Carl Eugen Hoyos) Date: Thu, 12 Sep 2013 21:14:54 +0000 (UTC) Subject: [Libav-user] ffmpeg AAC problem References: Message-ID: writes: > I use aacdec decoding the data only one channel there > is sound, the same file with ffmpeg - version 0.11 into > the same parameters and data decoding is no problem. Please provide the sample. 
Carl Eugen From nfxjfg at googlemail.com Fri Sep 13 12:47:43 2013 From: nfxjfg at googlemail.com (wm4) Date: Fri, 13 Sep 2013 12:47:43 +0200 Subject: [Libav-user] Basic problem : memory leak when playing several movies In-Reply-To: References: Message-ID: <20130913124743.476b2851@debian> On Thu, 12 Sep 2013 14:46:31 +0200 Fulbert Boussaton <42 at flubb.net> wrote: > AVFormatContext* Ctx; > avformat_open_input(&Ctx, <...>, NULL, &Options); > avformat_find_stream_info(Ctx, NULL); > AVCodecContext* VCodecCtx = Ctx->streams[0]->codec; > AVCodec* TargetVideoCodec = avcodec_find_decoder(VCodecCtx->codec_id); > avcodec_open2(VCodecCtx, TargetVideoCodec, NULL); > AVFrame* TargetFrame = avcodec_alloc_frame(); > AVFrame* TargetFrameRGB = avcodec_alloc_frame(); > int FrameSizeInBytes = avpicture_get_size(PIX_FMT_RGB24, VCodecCtx->width, VCodecCtx->height); > uint8_t* FrameBuffer = (uint8_t*) av_malloc(FrameSizeInBytes); > avpicture_fill((AVPicture*) TargetFrameRGB, FrameBuffer, PIX_FMT_RGB24, VCodecCtx->width, VCodecCtx->height); > struct SwsContext* ScaleCtx = sws_getContext(VCodecCtx->width, VCodecCtx->height, VCodecCtx->pix_fmt, VCodecCtx->width, VCodecCtx->height, PIX_FMT_RGB24, SWS_BICUBIC, NULL, NULL, NULL); > > And here are the functions I call to release all the previous structures : > > sws_freeContext(ScaleCtx); > avpicture_free((AVPicture*)TargetFrameRGB); > av_free(FrameBuffer); > avcodec_free_frame(&TargetFrame); > avcodec_free_frame(&TargetFrameRGB); > avcodec_close(VCodecCtx); > avformat_close_input(&Ctx); This all looks very wrong. I'd just stick to the AVFrame API, and not touch AVPicture. In some cases, you cast AVFrame to AVPicture, which only works by coincidence. (I'm not sure if it's possibly supposed to work - as far as I'm concerned AVPicture is ridiculous. You can get everything done without using AVPicture at all.) See libavutil/frame.h. From jpboard2 at yahoo.com Sat Sep 14 19:41:57 2013 From: jpboard2 at yahoo.com (James Board) Date: Sat, 14 Sep 2013 10:41:57 -0700 (PDT) Subject: [Libav-user] Copy Image Data from Decode Frame To Encode Frame Message-ID: <1379180517.51030.YahooMailNeo@web164703.mail.gq1.yahoo.com> I'm writing a simple libAV program that reads an AVI file, decodes each frame, then writes that frame data to another AVI file as output.? The output file should be pretty much the same as the input file. I'm using the examples in the ffmpeg source as a start.? This is how each frame from the input file gets decoded. ??? avcodec_decode_video2(tmpCodecContextForInputFile, myFrameDecode, got_frame, &pktDecode); Then, in another subroutine, this is how each frame gets encoded for the output file: ??? avcodec_encode_video2(tmpCodecContextForOutputFile, &pktEncode, myFrameEncode, &got_packet); In between the two calls above, I have to convert the image data from myFrameDecode to image data in myFrameEncode.? That's what I'm having trouble with.? Right now I have lots of row/col loops and they do things differently for different pixel formats.? Is there a single subroutine I can that can convert the image data from myFrameDecode to the image data in myFrameEncode? Also, if the input and output file have the same pixel format, is there a simple way to copy the data from one to the other? Thanks -------------- next part -------------- An HTML attachment was scrubbed... 
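On the "single subroutine" part of the question: sws_scale does pixel format
(and size) conversion in one call, so per-format row/column loops are not
needed. A sketch, under the assumption that dec_frame holds the decoded
picture and enc_frame has already been allocated with the encoder's width,
height and pix_fmt (all placeholder names):

    struct SwsContext *sws = sws_getContext(
        dec_ctx->width, dec_ctx->height, dec_ctx->pix_fmt,   /* source */
        enc_ctx->width, enc_ctx->height, enc_ctx->pix_fmt,   /* destination */
        SWS_BICUBIC, NULL, NULL, NULL);
    if (sws) {
        /* one call converts the format and scales; any format pair works */
        sws_scale(sws, (const uint8_t * const *)dec_frame->data,
                  dec_frame->linesize, 0, dec_ctx->height,
                  enc_frame->data, enc_frame->linesize);
        sws_freeContext(sws);
    }

If source and destination formats are identical, a per-plane copy that
honours each plane's linesize does the same job without swscale.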
URL: From alexcohn at netvision.net.il Sat Sep 14 20:43:18 2013 From: alexcohn at netvision.net.il (Alex Cohn) Date: Sat, 14 Sep 2013 21:43:18 +0300 Subject: [Libav-user] Copy Image Data from Decode Frame To Encode Frame In-Reply-To: <1379180517.51030.YahooMailNeo@web164703.mail.gq1.yahoo.com> References: <1379180517.51030.YahooMailNeo@web164703.mail.gq1.yahoo.com> Message-ID: On Sep 14, 2013 8:42 PM, "James Board" wrote: > > I'm writing a simple libAV program that reads an AVI file, decodes each > frame, then writes that frame data to another AVI file as output. The output > file should be pretty much the same as the input file. > > I'm using the examples in the ffmpeg source as a start. This is how > each frame from the input file gets decoded. > avcodec_decode_video2(tmpCodecContextForInputFile, myFrameDecode, got_frame, &pktDecode); > > Then, in another subroutine, this is how each frame gets encoded for the > output file: > avcodec_encode_video2(tmpCodecContextForOutputFile, &pktEncode, myFrameEncode, &got_packet); > > In between the two calls above, I have to convert the image data from > myFrameDecode to image data in myFrameEncode. That's what I'm having trouble > with. Right now I have lots of row/col loops and they do things differently > for different pixel formats. Is there a single subroutine I can that can > convert the image data from myFrameDecode to the image data in myFrameEncode? > > Also, if the input and output file have the same pixel format, is there a > simple way to copy the data from one to the other? > > Thanks You don't need to copy data between frames, you can reuse the decoder frame as input for encoder. BR Alex Cohn -------------- next part -------------- An HTML attachment was scrubbed... URL: From jpboard2 at yahoo.com Sat Sep 14 21:12:22 2013 From: jpboard2 at yahoo.com (James Board) Date: Sat, 14 Sep 2013 12:12:22 -0700 (PDT) Subject: [Libav-user] Copy Image Data from Decode Frame To Encode Frame In-Reply-To: References: <1379180517.51030.YahooMailNeo@web164703.mail.gq1.yahoo.com> Message-ID: <1379185942.29374.YahooMailNeo@web164705.mail.gq1.yahoo.com> >> Also, if the input and output file have the same pixel format, is there a >> simple way to copy the data from one to the other? >> >> Thanks >You don't need to copy data between frames, you can reuse the decoder frame as input for encoder. Okay.? I will try this? (actually, I tried it and it didn't work, but I probably didn't do it right). Also, I was able to figure out my main question with the scale functions. I have another question.? If my input frames are YUV422P compressed with ffvhuff and my output frames are also YUV422P compressed with ffvhuff, can I somehow copy them directly from input file to output file without decoding them?? Maybe copy the packet directly?? I'm currently using AVI containers.? Is this dependent on what container I choose?? If so, is there some container that allows this? Thanks. -------------- next part -------------- An HTML attachment was scrubbed... 
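A sketch of the packet pass-through being discussed here, with hypothetical
in_ctx/out_ctx/video_stream names; the av_rescale_q lines matter only when
the output stream's time base differs from the input's, and pts/dts are
assumed to be set (AV_NOPTS_VALUE would need special-casing):

    AVPacket pkt;
    AVRational in_tb  = in_ctx->streams[video_stream]->time_base;
    AVRational out_tb = out_ctx->streams[0]->time_base;

    while (av_read_frame(in_ctx, &pkt) >= 0) {
        if (pkt.stream_index != video_stream) {
            av_free_packet(&pkt);
            continue;
        }
        pkt.pts      = av_rescale_q(pkt.pts,      in_tb, out_tb);
        pkt.dts      = av_rescale_q(pkt.dts,      in_tb, out_tb);
        pkt.duration = av_rescale_q(pkt.duration, in_tb, out_tb);
        pkt.stream_index = 0;
        /* no decode/encode step; the muxer takes ownership of the data */
        av_interleaved_write_frame(out_ctx, &pkt);
    }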
URL: From onemda at gmail.com Sat Sep 14 21:15:19 2013 From: onemda at gmail.com (Paul B Mahol) Date: Sat, 14 Sep 2013 19:15:19 +0000 Subject: [Libav-user] Copy Image Data from Decode Frame To Encode Frame In-Reply-To: <1379180517.51030.YahooMailNeo@web164703.mail.gq1.yahoo.com> References: <1379180517.51030.YahooMailNeo@web164703.mail.gq1.yahoo.com> Message-ID: On 9/14/13, James Board wrote: > I'm writing a simple libAV program that reads an AVI file, decodes each > frame, then writes that frame data to another AVI file as output. The > output > file should be pretty much the same as the input file. > > I'm using the examples in the ffmpeg source as a start. This is how > each frame from the input file gets decoded. > > avcodec_decode_video2(tmpCodecContextForInputFile, myFrameDecode, > got_frame, &pktDecode); > > Then, in another subroutine, this is how each frame gets encoded for the > output file: > avcodec_encode_video2(tmpCodecContextForOutputFile, &pktEncode, > myFrameEncode, &got_packet); > > In between the two calls above, I have to convert the image data from > myFrameDecode to image data in myFrameEncode. That's what I'm having > trouble > with. Right now I have lots of row/col loops and they do things differently > for different pixel formats. Is there a single subroutine I can that can > convert the image data from myFrameDecode to the image data in > myFrameEncode? You can use same one if you enable reference counting, then you do not need to copy it every time you need it. Just when going to modify it (if it is referenced by something else it will be copied otherwise reused....). > > Also, if the input and output file have the same pixel format, is there a > simple way to copy the data from one to the other? > > Thanks > From alexcohn at netvision.net.il Sun Sep 15 09:15:26 2013 From: alexcohn at netvision.net.il (Alex Cohn) Date: Sun, 15 Sep 2013 10:15:26 +0300 Subject: [Libav-user] Copy Image Data from Decode Frame To Encode Frame In-Reply-To: <1379185942.29374.YahooMailNeo@web164705.mail.gq1.yahoo.com> References: <1379180517.51030.YahooMailNeo@web164703.mail.gq1.yahoo.com> <1379185942.29374.YahooMailNeo@web164705.mail.gq1.yahoo.com> Message-ID: On Sat, Sep 14, 2013 at 10:12 PM, James Board wrote: >>> Also, if the input and output file have the same pixel format, is there a >>> simple way to copy the data from one to the other? >>> >>> Thanks >>You don't need to copy data between frames, you can reuse the decoder frame >> as input for encoder. > > Okay. I will try this (actually, I tried it and it didn't work, but I > probably didn't do it right). > > Also, I was able to figure out my main question with the scale functions. > > I have another question. If my input frames are YUV422P compressed with > ffvhuff > and my output frames are also YUV422P compressed with ffvhuff, can I somehow > copy them directly from input file to output file without decoding them? > Maybe > copy the packet directly? I'm currently using AVI containers. Is this > dependent on > what container I choose? If so, is there some container that allows this? > > Thanks. Generally speaking, the answer is yes, you can simply copy encoded packets from demuxer to muxer, if you don't change them. I am sure AVI container will not be upset, but out in the wild, there could be some containers for which this trick may be less straightforward to implement. Generally speaking, the prerequisite is that you copy *all* frames from input to output. If you want to "resample" the output (e.g. 
convert 60 fps to 15 fps), or want to add some more frames (e.g. merge two video streams into one), or otherwise manipulate the stream or the frames in that stream, your mileage may vary. Specifically for HuffYUV, every frame is intracoded, therefore the above prerequisite is lifted. BR, Alex Cohn From jpboard2 at yahoo.com Sun Sep 15 17:33:32 2013 From: jpboard2 at yahoo.com (James Board) Date: Sun, 15 Sep 2013 08:33:32 -0700 (PDT) Subject: [Libav-user] Copy Image Data from Decode Frame To Encode Frame In-Reply-To: References: <1379180517.51030.YahooMailNeo@web164703.mail.gq1.yahoo.com> <1379185942.29374.YahooMailNeo@web164705.mail.gq1.yahoo.com> Message-ID: <1379259212.954.YahooMailNeo@web164704.mail.gq1.yahoo.com> >Generally speaking, the answer is yes, you can simply copy encoded >packets from demuxer to muxer, if you don't change them. I am sure AVI >container will not be upset, but out in the wild, there could be some >containers for which this trick may be less straightforward to >implement. > >Generally speaking, the prerequisite is that you copy *all* frames >from input to output. If you want to "resample" the output (e.g. >convert 60 fps to 15 fps), or want to add some more frames (e.g. merge >two video streams into one), or otherwise manipulate the stream or the >frames in that stream, your mileage may vary. Okay, that's useful.? My input frames are all ffvhuff-encoded and the output frames will be the same.? I don't change any of the video data. I'm ignoring the audio data.? Sounds like copying packets might work. However, what if I only copy a subset of the input frames to the output file?? Can I still copy the packet?? If the answer is no, can I do something simple to allow me to copy the packet (like modify a timestamp?). -------------- next part -------------- An HTML attachment was scrubbed... URL: From jpboard2 at yahoo.com Sun Sep 15 17:35:51 2013 From: jpboard2 at yahoo.com (James Board) Date: Sun, 15 Sep 2013 08:35:51 -0700 (PDT) Subject: [Libav-user] Copy Image Data from Decode Frame To Encode Frame In-Reply-To: References: <1379180517.51030.YahooMailNeo@web164703.mail.gq1.yahoo.com> Message-ID: <1379259351.6407.YahooMailNeo@web164704.mail.gq1.yahoo.com> >> In between the two calls above, I have to convert the image data from >> myFrameDecode to image data in myFrameEncode.? That's what I'm having >> trouble >> with.? Right now I have lots of row/col loops and they do things differently >> for different pixel formats.? Is there a single subroutine I can that can >> convert the image data from myFrameDecode to the image data in >> myFrameEncode? > >You can use same one if you enable reference counting, then you do not need to >copy it every time you need it. Just when going to modify it (if it is >referenced by something else it will be copied otherwise reused....). What do you mean by 'reference counting'?? Is this memory reference counting in the context of what some people do to implement smarter memory management in C so they don't delete segments of memory that is being used by elsewhere? Or is it something else? -------------- next part -------------- An HTML attachment was scrubbed... URL: From jpboard2 at yahoo.com Sun Sep 15 17:41:58 2013 From: jpboard2 at yahoo.com (James Board) Date: Sun, 15 Sep 2013 08:41:58 -0700 (PDT) Subject: [Libav-user] What are Cached Frames? Message-ID: <1379259718.5275.YahooMailNeo@web164703.mail.gq1.yahoo.com> In the libAV examples distributed with ffmpeg, I see the following code: ??? // Flush cache frames ??? do { ??????? 
decode_packet(&got_frame, 1); ??? } while (got_frame); What are cached frames and what does the above do?? Is tis explained anywhere? In my libav app, I'm trying to do simple edits.? So, I seek to the edit-start, decode all packets until I reach the edit-end, then seek to the next edit-start.? After all edits are processed, I call the above (not knowing what it does) and it loops forever. Thanks for the help! -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexcohn at netvision.net.il Sun Sep 15 18:02:44 2013 From: alexcohn at netvision.net.il (Alex Cohn) Date: Sun, 15 Sep 2013 19:02:44 +0300 Subject: [Libav-user] Copy Image Data from Decode Frame To Encode Frame In-Reply-To: <1379259212.954.YahooMailNeo@web164704.mail.gq1.yahoo.com> References: <1379180517.51030.YahooMailNeo@web164703.mail.gq1.yahoo.com> <1379185942.29374.YahooMailNeo@web164705.mail.gq1.yahoo.com> <1379259212.954.YahooMailNeo@web164704.mail.gq1.yahoo.com> Message-ID: On Sun, Sep 15, 2013 at 6:33 PM, James Board wrote: > >>Generally speaking, the answer is yes, you can simply copy encoded >>packets from demuxer to muxer, if you don't change them. I am sure AVI >>container will not be upset, but out in the wild, there could be some >>containers for which this trick may be less straightforward to >>implement. >> >>Generally speaking, the prerequisite is that you copy *all* frames >>from input to output. If you want to "resample" the output (e.g. >>convert 60 fps to 15 fps), or want to add some more frames (e.g. merge >>two video streams into one), or otherwise manipulate the stream or the >>frames in that stream, your mileage may vary. > > Okay, that's useful. My input frames are all ffvhuff-encoded and the > output frames will be the same. I don't change any of the video data. > I'm ignoring the audio data. Sounds like copying packets might work. Yes, sounds so... The best advice would be to try and see if there is some problem. > However, what if I only copy a subset of the input frames to the output > file? Can I still copy the packet? If the answer is no, can I do something > simple to allow me to copy the packet (like modify a timestamp?). The best advice again would be to try and see if there is some problem. Yes, you probably want to manipulate the timestamps. BR, Alex Cohn From jpboard2 at yahoo.com Sun Sep 15 18:37:09 2013 From: jpboard2 at yahoo.com (James Board) Date: Sun, 15 Sep 2013 09:37:09 -0700 (PDT) Subject: [Libav-user] Copy Image Data from Decode Frame To Encode Frame In-Reply-To: References: <1379180517.51030.YahooMailNeo@web164703.mail.gq1.yahoo.com> <1379185942.29374.YahooMailNeo@web164705.mail.gq1.yahoo.com> <1379259212.954.YahooMailNeo@web164704.mail.gq1.yahoo.com> Message-ID: <1379263029.74034.YahooMailNeo@web164703.mail.gq1.yahoo.com> >> However, what if I only copy a subset of the input frames to the output >> file?? Can I still copy the packet?? If the answer is no, can I do something >> simple to allow me to copy the packet (like modify a timestamp?). > >The best advice again would be to try and see if there is some >problem. Yes, you probably want to manipulate the timestamps. That's okay, but I'd really like to understand how it works behind the scenes so I know it works for sure. -------------- next part -------------- An HTML attachment was scrubbed... 
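For the cached-frames question earlier in this digest: decoders with delay
(B-frames, frame threading) hand frames back later than the packets that
produced them, and at end of stream those buffered frames are drained by
feeding an empty packet. A rough sketch of what a flush call like
decode_packet(&got_frame, 1) boils down to (dec_ctx and frame are placeholder
names); with an intra-only codec such as ffvhuff there is normally nothing to
drain, so a flush loop that never terminates usually means an error return
value is being ignored:

    AVPacket pkt;
    av_init_packet(&pkt);
    pkt.data = NULL;  /* NULL data + zero size asks the decoder to flush */
    pkt.size = 0;

    int got_frame;
    do {
        if (avcodec_decode_video2(dec_ctx, frame, &got_frame, &pkt) < 0)
            break;            /* bail out on error instead of spinning */
        if (got_frame) {
            /* handle one delayed frame */
        }
    } while (got_frame);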
URL: 

From jpboard2 at yahoo.com  Sun Sep 15 23:37:41 2013
From: jpboard2 at yahoo.com (James Board)
Date: Sun, 15 Sep 2013 14:37:41 -0700 (PDT)
Subject: [Libav-user] Seeking, timestamps, AVFrame, AVStream
Message-ID: <1379281061.76151.YahooMailNeo@web164705.mail.gq1.yahoo.com>

I'm writing a libAV application in C, and I'm trying to use av_seek_frame()
or avformat_seek_file() to move within the video stream. Ideally, I'd like a
subroutine that seeks to frame number N, but I think from previous
discussions, that isn't the way things are done in ffmpeg and libav. Okay.

I'm trying to understand how things actually are done in ffmpeg/libav. When
decoding a video file (all my files, for now, are AVI containers with no
compression, or ffvhuff compression), the AVFrame struct has several
members:
    pts
    pkt_pts
    pkt_dts
    coded_picture_number
    display_picture_number
    best_effort_timestamp (the name of this one really worries me)
    pkt_pos
    pkt_duration
What do those all mean? Can I use those to figure out which frame this is?

Also, AVStream has some members:
    pts (has 3 components)
    time_base (2 components)
    start_time
    duration
What do they mean?

I have test code, and the only thing that seems useful to me is that pkt_pos
seems to increase by a very large value for each frame (over 4 million), and
it is usually the same difference between two frames, but not always.

Anyway, can someone tell me what the above struct members mean, and how I can
use them to tell where each frame is in the overall video file? Thanks.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From krueger at lesspain.de  Mon Sep 16 09:28:55 2013
From: krueger at lesspain.de (=?UTF-8?Q?Robert_Kr=C3=BCger?=)
Date: Mon, 16 Sep 2013 09:28:55 +0200
Subject: [Libav-user] Seeking, timestamps, AVFrame, AVStream
In-Reply-To: <1379281061.76151.YahooMailNeo@web164705.mail.gq1.yahoo.com>
References: <1379281061.76151.YahooMailNeo@web164705.mail.gq1.yahoo.com>
Message-ID: 

On Sun, Sep 15, 2013 at 11:37 PM, James Board wrote:
> I'm writing a libAV application in C, and I'm trying to use av_seek_frame()
> or avformat_seek_file() to move within the video stream. Ideally, I'd like
> a subroutine that seeks to frame number N, but I think from previous
> discussions, that isn't the way things are done in ffmpeg and libav. Okay.
>
> I'm trying to understand how things actually are done in ffmpeg/libav. When
> decoding a video file (all my files, for now, are AVI containers with no
> compression, or ffvhuff compression), the AVFrame struct has several
> members:
>     pts
>     pkt_pts
>     pkt_dts
>     coded_picture_number
>     display_picture_number
>     best_effort_timestamp (the name of this one really worries me)
>     pkt_pos
>     pkt_duration
> What do those all mean? Can I use those to figure out which frame this is?
>
> Also, AVStream has some members:
>     pts (has 3 components)
>     time_base (2 components)
>     start_time
>     duration
> What do they mean?
>
> I have test code, and the only thing that seems useful to me is that pkt_pos
> seems to increase by a very large value for each frame (over 4 million), and
> it is usually the same difference between two frames, but not always.
>
> Anyway, can someone tell me what the above struct members mean, and how I can
> use them to tell where each frame is in the overall video file? Thanks.

Try looking at the documentation in the header files, in this case avcodec.h.
It should help you quite a bit. From mike at mikeversteeg.com Mon Sep 16 09:29:45 2013 From: mike at mikeversteeg.com (mikeversteeg) Date: Mon, 16 Sep 2013 00:29:45 -0700 (PDT) Subject: [Libav-user] Seeking, timestamps, AVFrame, AVStream In-Reply-To: <1379281061.76151.YahooMailNeo@web164705.mail.gq1.yahoo.com> References: <1379281061.76151.YahooMailNeo@web164705.mail.gq1.yahoo.com> Message-ID: <1379316585311-4658518.post@n4.nabble.com> How to seek is actually quite easy so I won't explain that (google it and you find it easily), but keep in mind that you must seek to key frames. How to (fast) seek to any frame is something I haven't figured out myself, it's not something ffmpeg seems to support so I fear you'd have to seek to the preceding key frame and then decode some frames until you hit the right one. Ugh. -- View this message in context: http://libav-users.943685.n4.nabble.com/Libav-user-Seeking-timestamps-AVFrame-AVStream-tp4658516p4658518.html Sent from the libav-users mailing list archive at Nabble.com. From alexcohn at netvision.net.il Mon Sep 16 10:25:44 2013 From: alexcohn at netvision.net.il (Alex Cohn) Date: Mon, 16 Sep 2013 11:25:44 +0300 Subject: [Libav-user] Seeking, timestamps, AVFrame, AVStream In-Reply-To: <1379316585311-4658518.post@n4.nabble.com> References: <1379281061.76151.YahooMailNeo@web164705.mail.gq1.yahoo.com> <1379316585311-4658518.post@n4.nabble.com> Message-ID: On Sep 16, 2013 10:52 AM, "mikeversteeg" wrote: > > How to seek is actually quite easy so I won't explain that (google it and you > find it easily), but keep in mind that you must seek to key frames. How to > (fast) seek to any frame is something I haven't figured out myself, it's not > something ffmpeg seems to support so I fear you'd have to seek to the > preceding key frame and then decode some frames until you hit the right one. > Ugh. Luckily, in HuffYuv all frames are key frames. Alex Cohn -------------- next part -------------- An HTML attachment was scrubbed... URL: From aworldgonewrong at gmail.com Mon Sep 16 11:37:46 2013 From: aworldgonewrong at gmail.com (John Freeman) Date: Mon, 16 Sep 2013 10:37:46 +0100 Subject: [Libav-user] Scale in RTSP In-Reply-To: References: Message-ID: Is there any reason that scale is missing from the PLAY instruction? I know it is optional, but could we not have it included. The only solution I can see, is editing the libAV code myself, and I'd rather not do that for LGPL license reasons. On 12 August 2013 09:14, John Freeman wrote: > Why doesn't libav support scale in the RTSP PLAY command? The RTSP RFC > defines the usage in section 12.34. > > I really need to use scale in my application without changing the libav > source code. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From 42 at flubb.net Mon Sep 16 14:40:37 2013 From: 42 at flubb.net (Fulbert Boussaton) Date: Mon, 16 Sep 2013 14:40:37 +0200 Subject: [Libav-user] Basic problem : memory leak when playing several movies In-Reply-To: <20130913124743.476b2851@debian> References: <20130913124743.476b2851@debian> Message-ID: <58A41E04-D2FB-46B8-97ED-9EDA8BD3EB33@flubb.net> Very informative : I didn't know AVPicture had a bad rep. I'll check it out. Thanks. 
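For anyone wanting the AVPicture-free style wm4 refers to, here is a sketch
using the then-new AVFrame API from libavutil/frame.h; av_frame_get_buffer
allocates the data planes from format/width/height, so avpicture_get_size and
avpicture_fill drop out entirely. VCodecCtx is the codec context from the
code quoted below, and the pixel format enum is spelled PIX_FMT_RGB24 in
older trees:

    #include <libavutil/frame.h>

    AVFrame *rgb = av_frame_alloc();
    rgb->format = AV_PIX_FMT_RGB24;
    rgb->width  = VCodecCtx->width;
    rgb->height = VCodecCtx->height;
    if (av_frame_get_buffer(rgb, 32) < 0) {  /* 32 = buffer alignment */
        /* handle allocation failure */
    }
    /* ... use rgb->data / rgb->linesize as before ... */
    av_frame_free(&rgb);  /* one call frees both the buffers and the frame */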
On Sep 13, 2013, at 12:47, wm4 wrote:

> On Thu, 12 Sep 2013 14:46:31 +0200
> Fulbert Boussaton <42 at flubb.net> wrote:
>
>> AVFormatContext* Ctx;
>> avformat_open_input(&Ctx, <...>, NULL, &Options);
>> avformat_find_stream_info(Ctx, NULL);
>> AVCodecContext* VCodecCtx = Ctx->streams[0]->codec;
>> AVCodec* TargetVideoCodec = avcodec_find_decoder(VCodecCtx->codec_id);
>> avcodec_open2(VCodecCtx, TargetVideoCodec, NULL);
>> AVFrame* TargetFrame = avcodec_alloc_frame();
>> AVFrame* TargetFrameRGB = avcodec_alloc_frame();
>> int FrameSizeInBytes = avpicture_get_size(PIX_FMT_RGB24, VCodecCtx->width, VCodecCtx->height);
>> uint8_t* FrameBuffer = (uint8_t*) av_malloc(FrameSizeInBytes);
>> avpicture_fill((AVPicture*) TargetFrameRGB, FrameBuffer, PIX_FMT_RGB24, VCodecCtx->width, VCodecCtx->height);
>> struct SwsContext* ScaleCtx = sws_getContext(VCodecCtx->width, VCodecCtx->height, VCodecCtx->pix_fmt, VCodecCtx->width, VCodecCtx->height, PIX_FMT_RGB24, SWS_BICUBIC, NULL, NULL, NULL);
>>
>> And here are the functions I call to release all the previous structures :
>>
>> sws_freeContext(ScaleCtx);
>> avpicture_free((AVPicture*)TargetFrameRGB);
>> av_free(FrameBuffer);
>> avcodec_free_frame(&TargetFrame);
>> avcodec_free_frame(&TargetFrameRGB);
>> avcodec_close(VCodecCtx);
>> avformat_close_input(&Ctx);
>
> This all looks very wrong. I'd just stick to the AVFrame API, and not
> touch AVPicture. In some cases, you cast AVFrame to AVPicture, which
> only works by coincidence. (I'm not sure if it's possibly supposed to
> work - as far as I'm concerned AVPicture is ridiculous. You can get
> everything done without using AVPicture at all.)
>
> See libavutil/frame.h.
> _______________________________________________
> Libav-user mailing list
> Libav-user at ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/libav-user
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From jpboard2 at yahoo.com  Mon Sep 16 15:37:11 2013
From: jpboard2 at yahoo.com (James Board)
Date: Mon, 16 Sep 2013 06:37:11 -0700 (PDT)
Subject: [Libav-user] Seeking, timestamps, AVFrame, AVStream
In-Reply-To: <1379316585311-4658518.post@n4.nabble.com>
References: <1379281061.76151.YahooMailNeo@web164705.mail.gq1.yahoo.com> <1379316585311-4658518.post@n4.nabble.com>
Message-ID: <1379338631.81785.YahooMailNeo@web164705.mail.gq1.yahoo.com>

> How to seek is actually quite easy, so I won't explain that (google it and
> you will find it easily), but keep in mind that you must seek to key frames.
> How to (fast) seek to any frame is something I haven't figured out myself;
> it's not something ffmpeg seems to support, so I fear you'd have to seek to
> the preceding key frame and then decode some frames until you hit the right
> one.

I know how to 'seek'. I don't know how to seek 'correctly'. My files are all ffvhuff encoded, so all frames are key frames. What I want to know is how the various pts and dts and all those other members of the AVFrame struct are calculated.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From jpboard2 at yahoo.com  Mon Sep 16 16:19:02 2013
From: jpboard2 at yahoo.com (James Board)
Date: Mon, 16 Sep 2013 07:19:02 -0700 (PDT)
Subject: [Libav-user] Seeking, timestamps, AVFrame, AVStream
In-Reply-To:
References: <1379281061.76151.YahooMailNeo@web164705.mail.gq1.yahoo.com>
Message-ID: <1379341142.59741.YahooMailNeo@web164704.mail.gq1.yahoo.com>

> Try looking at the documentation in the header files, in this case
> avcodec.h. It should help you quite a bit.
Yes, I looked at avcodec.h and could not find what I wanted. When I was previously discussing the inaccuracy of floating-point numbers and how I wanted to use frame numbers in editing, everyone said I should be using timestamps. Then I complained that timestamps were specified by floating-point numbers, which aren't accurate. Then Nicolas George said something like the timestamps stored internally were exact. So where are those timestamps stored?

When I asked how to calculate a timestamp as a function of the frame number, the answer was I can't do that. Instead, I should merely read the timestamp. Okay, I'm taking the advice. Where do I read the timestamp from? Is it stored in AVFrame, AVPacket, the stream, the CodecContext? Is it best_effort_timestamp? Thanks.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From krueger at lesspain.de  Mon Sep 16 18:21:01 2013
From: krueger at lesspain.de (Robert Krüger)
Date: Mon, 16 Sep 2013 18:21:01 +0200
Subject: [Libav-user] Seeking, timestamps, AVFrame, AVStream
In-Reply-To: <1379341142.59741.YahooMailNeo@web164704.mail.gq1.yahoo.com>
References: <1379281061.76151.YahooMailNeo@web164705.mail.gq1.yahoo.com> <1379341142.59741.YahooMailNeo@web164704.mail.gq1.yahoo.com>
Message-ID:

On Mon, Sep 16, 2013 at 4:19 PM, James Board wrote:
>> Try looking at the documentation in the header files, in this case
>> avcodec.h. It should help you quite a bit.
>
> Yes, I looked at avcodec.h and could not find what I wanted. When I was

You wrote:

    pts
    pkt_pts
    pkt_dts
    coded_picture_number
    display_picture_number
    best_effort_timestamp (the name of this one really worries me)
    pkt_pos
    pkt_duration
What do those all mean? Can I use those to figure out which frame this is?

All of these are documented there. That's why I thought you hadn't looked at the docs. My gut feeling is that the biggest problem is that you probably need to read up on some basics, so that the docs (which are quite OK IMHO) make sense to you and you know what a timebase, presentation timestamp, decoding timestamp etc. are. This is not meant to diss you but honest advice. You will not get far without that.

> previously discussing the inaccuracy of floating-point numbers and how I
> wanted to use frame numbers in editing, everyone said I should be using
> timestamps. Then I complained that timestamps were specified by
> floating-point numbers, which aren't accurate. Then Nicolas George said
> something like the timestamps stored internally were exact. So where are
> those timestamps stored?
>
> When I asked how to calculate a timestamp as a function of the frame number,
> the answer was I can't do that. Instead, I should merely read the
> timestamp. Okay, I'm taking the advice. Where do I read the timestamp from?
> Is it stored in AVFrame, AVPacket, the stream, the CodecContext? Is it
> best_effort_timestamp?

yes, use best_effort_timestamp and multiply it by the corresponding stream's time base.

BTW: If you really, really know that a stream has a constant frame rate, you can indeed calculate the timestamp based on the frame number. Take into account, though, that a stream's presentation time stamps might not start at 0 (check AVStream.start_time for that).
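In code, roughly (a sketch against the 2013-era API; fmt_ctx, stream_index and frame stand in for your own demuxing state, and checks for AV_NOPTS_VALUE are elided):

#include <libavformat/avformat.h>

AVStream *st = fmt_ctx->streams[stream_index];
/* exact integer timestamp in st->time_base units */
int64_t ts = av_frame_get_best_effort_timestamp(frame);
/* position of the frame in seconds */
double seconds = (ts - st->start_time) * av_q2d(st->time_base);
/* only meaningful if the stream really is constant frame rate */
int64_t frame_number = (int64_t)(seconds * av_q2d(st->avg_frame_rate) + 0.5);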
HTH

From jpboard2 at yahoo.com  Mon Sep 16 18:50:58 2013
From: jpboard2 at yahoo.com (James Board)
Date: Mon, 16 Sep 2013 09:50:58 -0700 (PDT)
Subject: [Libav-user] Seeking, timestamps, AVFrame, AVStream
In-Reply-To:
References: <1379281061.76151.YahooMailNeo@web164705.mail.gq1.yahoo.com> <1379341142.59741.YahooMailNeo@web164704.mail.gq1.yahoo.com>
Message-ID: <1379350258.93814.YahooMailNeo@web164703.mail.gq1.yahoo.com>

>> Yes, I looked at avcodec.h and could not find what I wanted. When I was
>
> You wrote:
>
>     pts
>     pkt_pts
>     pkt_dts
>     coded_picture_number
>     display_picture_number
>     best_effort_timestamp (the name of this one really worries me)
>     pkt_pos
>     pkt_duration
> What do those all mean? Can I use those to figure out which frame this is?
>
> All of these are documented there. That's why I thought you hadn't
> looked at the docs.

Actually, none of them (except pts) are defined there. Most aren't even mentioned. But they are mentioned in AVFrame.h

> My gut feeling is that the biggest problem is that you probably need
> to read up on some basics, so that the docs (which are quite OK IMHO) make
> sense to you and you know what a timebase, presentation timestamp,

Yes, I definitely prefer reading docs than asking for help. But what docs are you talking about? I'm not aware of anything other than the source code.

Anyway, it sounds like pts and best_effort_timestamp are really frame numbers (not time values), and to get the actual time value, you multiply them by the time base. Is that correct? I'm actually starting with frame numbers and I don't really need the time values, per se. Can I merely use the pts or best_effort_timestamp and assume they equal the frame number?

Also, will the libs do anything behind the scenes to duplicate frames for whatever reason (like to meet some CFR rate)? I don't want that.

Thanks.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From donmoir at comcast.net  Mon Sep 16 20:33:46 2013
From: donmoir at comcast.net (Don Moir)
Date: Mon, 16 Sep 2013 14:33:46 -0400
Subject: [Libav-user] Seeking, timestamps, AVFrame, AVStream
References: <1379281061.76151.YahooMailNeo@web164705.mail.gq1.yahoo.com> <1379341142.59741.YahooMailNeo@web164704.mail.gq1.yahoo.com> <1379350258.93814.YahooMailNeo@web164703.mail.gq1.yahoo.com>
Message-ID: <15596AD5E624453CA0D5B7990361CD26@MANLAP>

>>> Yes, I looked at avcodec.h and could not find what I wanted. When I was
>>
>> You wrote:
>>
>>     pts
>>     pkt_pts
>>     pkt_dts
>>     coded_picture_number
>>     display_picture_number
>>     best_effort_timestamp (the name of this one really worries me)
>>     pkt_pos
>>     pkt_duration
>> What do those all mean? Can I use those to figure out which frame this is?
>>
>> All of these are documented there. That's why I thought you hadn't
>> looked at the docs.
>
> Actually, none of them (except pts) are defined there. Most aren't
> even mentioned. But they are mentioned in AVFrame.h
>
>> My gut feeling is that the biggest problem is that you probably need
>> to read up on some basics, so that the docs (which are quite OK IMHO) make
>> sense to you and you know what a timebase, presentation timestamp,
>
> Yes, I definitely prefer reading docs than asking for help. But what docs
> are you talking about? I'm not aware of anything other than the source code.
>
> Anyway, it sounds like pts and best_effort_timestamp are really frame numbers
> (not time values), and to get the actual time value, you multiply them by the
> time base. Is that correct? I'm actually starting with frame numbers and I
> don't really need the time values, per se. Can I merely use the pts or
> best_effort_timestamp and assume they equal the frame number?
>
> Also, will the libs do anything behind the scenes to duplicate frames for
> whatever reason (like to meet some CFR rate)? I don't want that.

In principle, seeking is easy in ffmpeg, but in practice, over a broad range of file types, it's complicated. This is due sometimes to behavior differences, sync issues, timestamp issues, start times, and other things, but over the last couple of years seeking has gotten a lot better.

I have found that seeking by timestamps is the most reliable over a broad range of file types. In your case, there should be a way to estimate the timestamp given a frame number.

Timestamps are in AVStream.time_base units. To convert a timestamp to seconds you could do this:

int64_t some_timestamp = some value;
double seconds = some_timestamp * av_q2d (pStream->time_base);

This is just an example and there is more to it. A timestamp can be AV_NOPTS_VALUE, and a timestamp can be relative to some start value. Now, like I said, you should be able to estimate the timestamp given a frame number in your case since it's consistent.

There is also something called AVStream.index_entries that in your case should contain all the frame offsets starting with 0 and their associated timestamps. The thing is, it varies when this table is completed. It might be done on first seek, it might be done on the fly as frames are read, and so on.

This should help you a bit, but like others have said, you need to read. It takes awhile to get familiar with all this.

From krueger at lesspain.de  Mon Sep 16 20:46:04 2013
From: krueger at lesspain.de (Robert Krüger)
Date: Mon, 16 Sep 2013 20:46:04 +0200
Subject: [Libav-user] Seeking, timestamps, AVFrame, AVStream
In-Reply-To: <1379350258.93814.YahooMailNeo@web164703.mail.gq1.yahoo.com>
References: <1379281061.76151.YahooMailNeo@web164705.mail.gq1.yahoo.com> <1379341142.59741.YahooMailNeo@web164704.mail.gq1.yahoo.com> <1379350258.93814.YahooMailNeo@web164703.mail.gq1.yahoo.com>
Message-ID:

On Mon, Sep 16, 2013 at 6:50 PM, James Board wrote:
>>> Yes, I looked at avcodec.h and could not find what I wanted. When I was
>>
>> You wrote:
>>
>>     pts
>>     pkt_pts
>>     pkt_dts
>>     coded_picture_number
>>     display_picture_number
>>     best_effort_timestamp (the name of this one really worries me)
>>     pkt_pos
>>     pkt_duration
>> What do those all mean? Can I use those to figure out which frame this is?
>>
>> All of these are documented there. That's why I thought you hadn't
>> looked at the docs.
>
> Actually, none of them (except pts) are defined there. Most aren't
> even mentioned. But they are mentioned in AVFrame.h

Sorry, my mistake. With the latest source code, what I am talking about, i.e. the docs for the struct AVFrame, is in frame.h, and there they are documented.

>
>> My gut feeling is that the biggest problem is that you probably need
>> to read up on some basics, so that the docs (which are quite OK IMHO) make
>> sense to you and you know what a timebase, presentation timestamp,
>
> Yes, I definitely prefer reading docs than asking for help. But what docs
> are you talking about? I'm not aware of anything other than the source
> code.
>
> Anyway, it sounds like pts and best_effort_timestamp are really frame
> numbers

No, they are not.
It is the case if timebase = 1/framerate, but that is not guaranteed at all and should be seen as a special case, unless for some reason you know all your files are that way (and they typically aren't).

> (not time values), and to get the actual time value, you multiply them by the
> time base. Is that correct? I'm actually starting with frame numbers and I
> don't really need the time values, per se. Can I merely use the pts or
> best_effort_timestamp and assume they equal the frame number?

No, in the case of CFR you can compute the frame number like so:

frameNum = (best_effort_timestamp - stream.start_time) * timebase * fps

E.g. let's say you have a timebase of 1/1000, a start_time of 200, a frame pts of 600 and 25 FPS; then the frame number will be (600-200)/1000*25 = 10, i.e. the 10th frame will have a pts of 600.

>
> Also, will the libs do anything behind the scenes to duplicate frames for
> whatever reason (like to meet some CFR rate)? I don't want that.

No, you will just get frame after frame with their time stamps. What you're talking about happens in the ffmpeg CL tool, but IIRC someone else already mentioned that in one of your threads.

HTH

From jpboard2 at yahoo.com  Mon Sep 16 21:49:52 2013
From: jpboard2 at yahoo.com (James Board)
Date: Mon, 16 Sep 2013 12:49:52 -0700 (PDT)
Subject: [Libav-user] Seeking, timestamps, AVFrame, AVStream
In-Reply-To:
References: <1379281061.76151.YahooMailNeo@web164705.mail.gq1.yahoo.com> <1379341142.59741.YahooMailNeo@web164704.mail.gq1.yahoo.com> <1379350258.93814.YahooMailNeo@web164703.mail.gq1.yahoo.com>
Message-ID: <1379360992.71308.YahooMailNeo@web164703.mail.gq1.yahoo.com>

>> Anyway, it sounds like pts and best_effort_timestamp are really frame
>> numbers
>
> No, they are not. It is the case if timebase = 1/framerate, but that is
> not guaranteed at all and should be seen as a special case, unless for
> some reason you know all your files are that way (and they typically
> aren't).

Okay. My files all come from the same source. So if I check to see if timebase = 1/framerate, and if it is, then I can use pts or best_effort_timestamp as the frame numbers (assuming stream.start_time is zero).

Why would timebase be anything other than 1/framerate? I thought timebase was in fact defined as 1/framerate. That's what avcodec.h says.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From jkordani at lsa2.com  Tue Sep 17 06:32:17 2013
From: jkordani at lsa2.com (Joshua Kordani)
Date: Tue, 17 Sep 2013 00:32:17 -0400
Subject: [Libav-user] Yet another noob converting rgb24 to yuv420
Message-ID: <5237DB51.5030903@lsa2.com>

Greetings all,

This must be asked a million times, but I am trying to take rgb24 and convert it to yuv420 for use with x264. My code is failing on the sws_scale call with a bad access, inside ff_yuv2plane1_8_avx.loop_a, which suggests to me that I haven't given it the input data structures in the right way.

I'm starting with packed data, that is, rgb pixel values every 24 bits, but it looks like the sws_scale call is generalized to assume that input and output data is planar, and that (I imagine) if the input (or output) is supposed to be non-planar, it should be stored in / can be found in the first plane of the relevant data structures. Since x264 seems to provide a data structure that fits this purpose (the img struct inside of a pic_t, properly initialized), I've used it as my dst and dstStride locations. My code looks like this.
//the compiler complained that it couldn't locate these when I tried to call them directly in the getCachedContext call. This is the result of a few rounds of debugging
enum AVPixelFormat src_pix_fmt = AV_PIX_FMT_RGB24, dst_pix_fmt = AV_PIX_FMT_YUV420P;

scale_context = sws_getCachedContext(scale_context,
                                     frame_width, frame_height, src_pix_fmt,
                                     frame_width, frame_height, dst_pix_fmt,
                                     SWS_BICUBIC, NULL, NULL, NULL);
//where frame_width, frame_height is the size in pixels of my input frame (right?)

...snip...

sws_scale(scale_context, &frame, &rowstride, 0, x264_params->i_height,
          pic_in.img.plane, pic_in.img.i_stride);

where frame is a pointer to my packed data, so I pass in a reference to the pointer (so that, assuming the input pixel format specifier implies packed data, the packed data is found at the [0] offset of the pointer provided to the function), right? rowstride is 3 * frame_width stored in a local; same idea, the function call expects a pointer or reference to dereference, but since this is packed data there is only one stride, right?

pic_in is a struct from the x264 library, with a convenient sub-structure img, which seems to be purpose-built for this kind of function (or vice versa), where i_stride is an array of strides, plane an array of planes, and i_height was a convenient place for me to retrieve this value.

I have a feeling I'm abusing lots of things. I was trying to avoid doing this by using x264's support for encoding rgb directly, but apparently it was too bleeding edge and decoding it was becoming a problem. I thought plan B wouldn't be too hard to implement, but I just don't seem to be understanding how to set up and use this code.

I'm starting from a codebase that did compile and run, attempting to encode the rgb directly. I couldn't find a decoder that was able to decode what I was producing; it seemed to be related to the colorspace I was trying to decode to, so it became more apparent that I needed to do a conversion.

I would appreciate any pointers anyone may have. I'm not afraid of reading, I just haven't been able to leverage the things I've found so far.

--
Joshua Kordani
LSA Autonomy
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From krueger at lesspain.de  Tue Sep 17 10:54:46 2013
From: krueger at lesspain.de (Robert Krüger)
Date: Tue, 17 Sep 2013 10:54:46 +0200
Subject: [Libav-user] Seeking, timestamps, AVFrame, AVStream
In-Reply-To: <1379360992.71308.YahooMailNeo@web164703.mail.gq1.yahoo.com>
References: <1379281061.76151.YahooMailNeo@web164705.mail.gq1.yahoo.com> <1379341142.59741.YahooMailNeo@web164704.mail.gq1.yahoo.com> <1379350258.93814.YahooMailNeo@web164703.mail.gq1.yahoo.com> <1379360992.71308.YahooMailNeo@web164703.mail.gq1.yahoo.com>
Message-ID:

On Mon, Sep 16, 2013 at 9:49 PM, James Board wrote:
>>> Anyway, it sounds like pts and best_effort_timestamp are really frame
>>> numbers
>>
>> No, they are not. It is the case if timebase = 1/framerate, but that is
>> not guaranteed at all and should be seen as a special case, unless for
>> some reason you know all your files are that way (and they typically
>> aren't).
>
> Okay. My files all come from the same source. So if I check to see
> if timebase = 1/framerate, and if it is, then I can use pts or
> best_effort_timestamp as the frame numbers (assuming stream.start_time
> is zero).
>
> Why would timebase be anything other than 1/framerate? I thought timebase
> was in fact defined as 1/framerate. That's what avcodec.h says.
Timebase simply defines the maximum granularity of your timestamps. For various reasons (e.g. to synchronize frames with content from other tracks in the same file) there can be cases where it needs to be finer than 1/frame rate. If you just learn how to calculate with it (see the formula in my earlier mail), you don't have to rely on the assumption that PTS == frame number, and it is not really more work. Good luck!

From jpboard2 at yahoo.com  Tue Sep 17 16:45:01 2013
From: jpboard2 at yahoo.com (James Board)
Date: Tue, 17 Sep 2013 07:45:01 -0700 (PDT)
Subject: [Libav-user] Seeking, timestamps, AVFrame, AVStream
In-Reply-To:
References: <1379281061.76151.YahooMailNeo@web164705.mail.gq1.yahoo.com> <1379341142.59741.YahooMailNeo@web164704.mail.gq1.yahoo.com> <1379350258.93814.YahooMailNeo@web164703.mail.gq1.yahoo.com> <1379360992.71308.YahooMailNeo@web164703.mail.gq1.yahoo.com>
Message-ID: <1379429101.89475.YahooMailNeo@web164703.mail.gq1.yahoo.com>

> Timebase simply defines the maximum granularity of your timestamps. For
> various reasons (e.g. to synchronize frames with content from other
> tracks in the same file) there can be cases where it needs to be finer
> than 1/frame rate. If you just learn how to calculate with it (see the
> formula in my earlier mail), you don't have to rely on the assumption
> that PTS == frame number, and it is not really more work. Good luck!

Okay, that's helpful. But then how and where and by what program is timebase calculated? Do the ffmpeg tools calculate it? I don't see how they can. They'd have to look at every frame in the video, and that could take a very long time.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From cyrilapan at yahoo.fr  Tue Sep 17 20:11:09 2013
From: cyrilapan at yahoo.fr (cyril apan)
Date: Tue, 17 Sep 2013 19:11:09 +0100 (BST)
Subject: [Libav-user] Seeking, timestamps, AVFrame, AVStream
In-Reply-To: <1379429101.89475.YahooMailNeo@web164703.mail.gq1.yahoo.com>
References: <1379281061.76151.YahooMailNeo@web164705.mail.gq1.yahoo.com> <1379341142.59741.YahooMailNeo@web164704.mail.gq1.yahoo.com> <1379350258.93814.YahooMailNeo@web164703.mail.gq1.yahoo.com> <1379360992.71308.YahooMailNeo@web164703.mail.gq1.yahoo.com> <1379429101.89475.YahooMailNeo@web164703.mail.gq1.yahoo.com>
Message-ID: <1379441469.18775.YahooMailNeo@web28804.mail.ir2.yahoo.com>

> Okay, that's helpful. But then how and where and by what program is
> timebase calculated? Do the ffmpeg tools calculate it? I don't see
> how they can. They'd have to look at every frame in the video, and
> that could take a very long time.

You should know that video formats reserve some space for metadata, and a portion of it is dedicated to describing track properties like VBR/CBR, total time length and so on. FFmpeg relies on that metadata to initialize everything but can then recompute some of it on the fly; some other tools allow you to actually calculate those track metrics directly from the track content, which is safer but slower, of course.

Cheers,
Cyril APAN.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From jpboard2 at yahoo.com  Tue Sep 17 21:36:33 2013
From: jpboard2 at yahoo.com (James Board)
Date: Tue, 17 Sep 2013 12:36:33 -0700 (PDT)
Subject: [Libav-user] Seeking, timestamps, AVFrame, AVStream
In-Reply-To: <1379441469.18775.YahooMailNeo@web28804.mail.ir2.yahoo.com>
References: <1379281061.76151.YahooMailNeo@web164705.mail.gq1.yahoo.com> <1379341142.59741.YahooMailNeo@web164704.mail.gq1.yahoo.com> <1379350258.93814.YahooMailNeo@web164703.mail.gq1.yahoo.com> <1379360992.71308.YahooMailNeo@web164703.mail.gq1.yahoo.com> <1379429101.89475.YahooMailNeo@web164703.mail.gq1.yahoo.com> <1379441469.18775.YahooMailNeo@web28804.mail.ir2.yahoo.com>
Message-ID: <1379446593.56884.YahooMailNeo@web164705.mail.gq1.yahoo.com>

>> Okay, that's helpful. But then how and where and by what program is
>> timebase calculated? Do the ffmpeg tools calculate it? I don't see
>> how they can. They'd have to look at every frame in the video, and
>> that could take a very long time.
>
> You should know that video formats reserve some space for metadata, and a
> portion of it is dedicated to describing track properties like VBR/CBR,
> total time length and so on. FFmpeg relies on that metadata to initialize
> everything but can then recompute some of it on the fly; some other tools
> allow you to actually calculate those track metrics directly from the
> track content, which is safer but slower, of course.

Let's say my input file is 500 GB and was generated by my capture card. Robert said the timebase member in ffmpeg structs holds the maximum granularity of all the timestamps. I'm wondering how that can be true. It would take a very long time for ffmpeg to read 500 GB of data to figure out the maximum granularity of all timestamps, and that would definitely be noticeable. Something else must happen.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From jpboard2 at yahoo.com  Wed Sep 18 03:56:14 2013
From: jpboard2 at yahoo.com (James Board)
Date: Tue, 17 Sep 2013 18:56:14 -0700 (PDT)
Subject: [Libav-user] Cached Frames
Message-ID: <1379469374.19919.YahooMailNeo@web164705.mail.gq1.yahoo.com>

In the libAV examples distributed with ffmpeg, I see the following code:

    // Flush cached frames
    do {
        decode_packet(&got_frame, 1);
    } while (got_frame);

What are cached frames and what does the above do? Is this explained anywhere?

In my libav app, I'm trying to do simple edits. So, I seek to the edit-start, decode all packets until I reach the edit-end, then seek to the next edit-start. After all edits are processed, I call the above (not knowing what it does) and it loops forever.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From krueger at lesspain.de  Wed Sep 18 09:11:35 2013
From: krueger at lesspain.de (Robert Krüger)
Date: Wed, 18 Sep 2013 09:11:35 +0200
Subject: [Libav-user] Seeking, timestamps, AVFrame, AVStream
In-Reply-To: <1379446593.56884.YahooMailNeo@web164705.mail.gq1.yahoo.com>
References: <1379281061.76151.YahooMailNeo@web164705.mail.gq1.yahoo.com> <1379341142.59741.YahooMailNeo@web164704.mail.gq1.yahoo.com> <1379350258.93814.YahooMailNeo@web164703.mail.gq1.yahoo.com> <1379360992.71308.YahooMailNeo@web164703.mail.gq1.yahoo.com> <1379429101.89475.YahooMailNeo@web164703.mail.gq1.yahoo.com> <1379441469.18775.YahooMailNeo@web28804.mail.ir2.yahoo.com> <1379446593.56884.YahooMailNeo@web164705.mail.gq1.yahoo.com>
Message-ID:

On Tue, Sep 17, 2013 at 9:36 PM, James Board wrote:
>>> Okay, that's helpful. But then how and where and by what program is
>>> timebase calculated? Do the ffmpeg tools calculate it? I don't see
>>> how they can. They'd have to look at every frame in the video, and
>>> that could take a very long time.
>>
>> You should know that video formats reserve some space for metadata, and a
>> portion of it is dedicated to describing track properties like VBR/CBR,
>> total time length and so on. FFmpeg relies on that metadata to initialize
>> everything but can then recompute some of it on the fly; some other tools
>> allow you to actually calculate those track metrics directly from the
>> track content, which is safer but slower, of course.
>
> Let's say my input file is 500 GB and was generated by my capture card.
> Robert said the timebase member in ffmpeg structs holds the maximum
> granularity of all the timestamps. I'm wondering how that can be true. It
> would take a very long time for ffmpeg to read 500 GB of data to figure out
> the maximum granularity of all timestamps, and that would definitely be
> noticeable. Something else must happen.

To determine the timebase, not all frames have to be read; for all formats I have worked with so far, it is either in container or bitstream metadata or defined by a standard. Again, I recommend reading (there is so much material available via Wikipedia or sources like wiki.multimedia.cx or others) over naive guessing via observation (some is OK, but without reading, IMHO this will be a slow learning curve). I hope my comments have helped you somewhat. Over and out.

From blazej.slusarek at gmail.com  Sat Sep 21 18:40:16 2013
From: blazej.slusarek at gmail.com (Błażej Ślusarek)
Date: Sat, 21 Sep 2013 18:40:16 +0200
Subject: [Libav-user] Using AMIX filter in C code
Message-ID:

Hello,

I am looking for some clues or code examples on how I could use the AMIX filter to mix multiple audio inputs into one output. I have checked the doc/examples I found in the FFmpeg source snapshot, but the code is too complicated and doesn't have enough comments for a newbie like me. Also, most of the examples feature only one input file. I have also checked the AMIX filter source code, but I have no idea how to use it. Could someone point me to an example of setting up a complex AMIX filter in C code? Thanks in advance!

BR,
Blazej
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From audionuma at gmail.com  Mon Sep 23 07:42:04 2013
From: audionuma at gmail.com (Manu N)
Date: Mon, 23 Sep 2013 07:42:04 +0200
Subject: [Libav-user] Demuxing audio streams and flushing
Message-ID:

Hello,

I'm currently trying to use libavformat to demux audio from multimedia files. I'm using the demuxing.c example (in ffmpeg-2.0.1/doc/examples/) as a starting point. I have a problem understanding the flushing part of this example. As I only need to demux audio, I have commented out the whole video decoding part like this:

if (pkt.stream_index == video_stream_idx) {
//    /* decode video frame */
//    ret = avcodec_decode_video2(video_dec_ctx, frame, got_frame, &pkt);
//    if (ret < 0) {
//        fprintf(stderr, "Error decoding video frame\n");
//        return ret;
//    }
//
//    if (*got_frame) {
//        printf("video_frame%s n:%d coded_n:%d pts:%s\n",
//               cached ? "(cached)" : "",
//               video_frame_count++, frame->coded_picture_number,
//               av_ts2timestr(frame->pts, &video_dec_ctx->time_base));
//
//        /* copy decoded frame to destination buffer:
//         * this is required since rawvideo expects non aligned data */
//        av_image_copy(video_dst_data, video_dst_linesize,
//                      (const uint8_t **)(frame->data), frame->linesize,
//                      video_dec_ctx->pix_fmt, video_dec_ctx->width, video_dec_ctx->height);
//
//        /* write to rawvideo file */
//        fwrite(video_dst_data[0], 1, video_dst_bufsize, video_dst_file);
//    }
} else if (pkt.stream_index == audio_stream_idx) {

I can run it successfully on audio-only files (.wav, .ac3, ...). But when I run it on a video + audio file (stream #0 video, stream #1 audio), I go into a never-ending loop at the flushing part of the code:

do {
    decode_packet(&got_frame, 1);
} while (got_frame);

What happens is that at this point, pkt.stream_index == 0, so avcodec_decode_audio4() is never called in the decode_packet function, and got_frame is never set to 0.

My question is: do audio streams need flushing at all, or is it only for video streams? If flushing is needed for audio streams, how do I handle it properly, given that my goal is to demux all audio streams in a file?

Thanks for your advice,
Manuel
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From marco.gulino at gmail.com  Mon Sep 23 14:04:21 2013
From: marco.gulino at gmail.com (Marco Gulino)
Date: Mon, 23 Sep 2013 14:04:21 +0200
Subject: [Libav-user] Fwd: decode/encode subtitles example
In-Reply-To:
References:
Message-ID:

Hello!

I'm pretty new to libav* development, and I'm mostly a C++ developer, not C, so forgive me if I might ask dumb questions.

What I'm trying to achieve is to open a media file and extract a subtitle track, possibly in memory, but using a temp output file would be acceptable.

I did read the muxing/demuxing C examples, and following them I'm already able to open the file, detect all streams, and decode the subtitle packets, but I'm stuck in the "reencode" part. There is an "avcodec_encode_subtitle" API, which seems undocumented (there's an unofficial doc here, though: http://wiki.aasimon.org/doku.php?id=ffmpeg:avcodec_encode_subtitle ). You can see my code here: http://pastebin.com/cUxCs33a . It's just a spike of course, quick & dirty... If I comment out the avcodec_encode_subtitle line everything runs just fine, and I can even see the subtitle lines if I print pkt.data. The avcodec_encode_subtitle, however, when called, always gives me a segmentation fault. Also, it's not clear how to save the buffer data to the newly created stream (though maybe I won't need it, if the buffer will contain the raw srt data).

Thanks!
Marco
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From marco.gulino at gmail.com  Mon Sep 23 20:21:41 2013
From: marco.gulino at gmail.com (Marco Gulino)
Date: Mon, 23 Sep 2013 20:21:41 +0200
Subject: [Libav-user] decode/encode subtitles example
In-Reply-To:
References:
Message-ID:

Update: I found out what was giving the segmentation fault: I simply forgot to call avcodec_open2 on the output codec. Now encoding to the ASS format works fine, but SRT gives me no error and no output either (avcodec_encode_subtitle returns 0). I guess it has something to do with the AVCodecContext configuration, but without an example it's hard to find out what exactly the problem is.

On Mon, Sep 23, 2013 at 2:04 PM, Marco Gulino wrote:
> Hello!
> I'm pretty new to libav* development, and I'm mostly a C++ developer, not
> C, so forgive me if I might ask dumb questions.
>
> What I'm trying to achieve is to open a media file and extract a subtitle
> track, possibly in memory, but using a temp output file would be acceptable.
>
> I did read the muxing/demuxing C examples, and following them I'm already
> able to open the file, detect all streams, and decode the subtitle packets,
> but I'm stuck in the "reencode" part.
> There is an "avcodec_encode_subtitle" API, which seems undocumented
> (there's an unofficial doc here, though:
> http://wiki.aasimon.org/doku.php?id=ffmpeg:avcodec_encode_subtitle ).
> You can see my code here: http://pastebin.com/cUxCs33a .
> It's just a spike of course, quick & dirty... If I comment out the
> avcodec_encode_subtitle line everything runs just fine, and I can even
> see the subtitle lines if I print pkt.data.
> The avcodec_encode_subtitle, however, when called, always gives me a
> segmentation fault.
> Also, it's not clear how to save the buffer data to the newly created
> stream (though maybe I won't need it, if the buffer will contain the raw
> srt data).
>
> Thanks!
> Marco
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From fkwatson at aol.com  Tue Sep 24 02:54:49 2013
From: fkwatson at aol.com (fkwatson at aol.com)
Date: Mon, 23 Sep 2013 20:54:49 -0400 (EDT)
Subject: [Libav-user] Having trouble with custom I/O
Message-ID: <8D086D7CE0B4091-18BC-1A6D4@webmail-d244.sysops.aol.com>

Hi,

I am using a Zeranoe build of ffmpeg 1.2.1 for windows 32-bit. I am also using Java Native Access (JNA) to do the reading and writing of Java InputStreams. Has anyone had success using custom I/O reads and seeks with Zeranoe builds? Are there any known bugs with custom I/O in this version?

I set the AVFormatContext field pFormatCtx.debug = 1 to see the debug messages for reading packets and frames. Packets and frames are read correctly during:

avformat_open_input();
avformat_find_stream_info();

However, during the main reading/decoding loop:

while (av_read_frame(pFormatCtx, &pkt) >= 0) {
    . . .
    av_free_packet(&pkt);
}

Frames are read correctly up to a point. Then, for the next packet, pkt.size is less than what was read during the avformat_find_stream_info() call. This packet happens to be an audio packet and the avcodec_decode_audio4 call fails. The next read returns a video packet, but pkt.size is also less than what was read earlier. I can read no more packets after this.

I have tried setting different buffer sizes, including zero (null), when I open the context with avio_alloc_context(). This has no effect on the results. I also tried setting AVFMT_FLAG_CUSTOM_IO and AVFMT_FLAG_NO_BUFFER: same results. I can only read and decode to the exact same point every time.

-Felix
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From marcin_ffmpeg at interia.pl  Tue Sep 24 10:51:49 2013
From: marcin_ffmpeg at interia.pl (marcin_ffmpeg)
Date: Tue, 24 Sep 2013 10:51:49 +0200
Subject: [Libav-user] Number of cached frames in audio codecs
In-Reply-To:
References:
Message-ID:

> Dear All
>
> How can I determine the number of cached frames in audio codecs for both decoding and encoding?
> I'd like to know the maximum number of frames which I receive on decoder/encoder flush.
> For video codecs I use AVCodecContext->gop_size, is this correct?
> What should I use for audio codecs?
>
> Does anyone know something about it?
It looks like for the mp3lame audio codec the number of cached frames is 4, but can it be controlled somehow, or at least known from ffmpeg structures? I'd really appreciate some tips on it.

Thank you
Marcin

From elijah.houle at email.wsu.edu  Tue Sep 24 18:14:08 2013
From: elijah.houle at email.wsu.edu (Houle, Elijah Js)
Date: Tue, 24 Sep 2013 16:14:08 +0000
Subject: [Libav-user] Implementing MPEG-1 Real-Time Video Encryption
In-Reply-To: <90B018341F97FB4DBFF5802C7684793F31AFD9B1@CO1PRD0113MB700.prod.exchangelabs.com>
References: <90B018341F97FB4DBFF5802C7684793F31AFD9B1@CO1PRD0113MB700.prod.exchangelabs.com>
Message-ID: <90B018341F97FB4DBFF5802C7684793F31AFD9D5@CO1PRD0113MB700.prod.exchangelabs.com>

I'd like to implement the encryption algorithm from this paper (www.cs.purdue.edu/homes/bb/security99.ps), where only the sign bits for the DC/AC coefficients are selected and encrypted. This requires selecting bits in a zigzag order from each block (Y1, Y2, Y3, Y4, Cb, Cr). Not wanting to write an MPEG-1 encoder from scratch, my first thought was to modify the mpeg12enc.c file, but I can't figure out whether/where it distinguishes among the different blocks of the macroblock, especially because I'm unfamiliar with some of the terminology implicit in the variable names. Am I close to the right track, or is there a better approach?

From nick at hoodtech.com  Tue Sep 24 22:55:25 2013
From: nick at hoodtech.com (Nick Wood)
Date: Tue, 24 Sep 2013 20:55:25 +0000
Subject: [Libav-user] Getting started with ffmpeg
Message-ID: <10493A229E74364F8AF6F08A64A2CB5F27B0FD@SERVER-HV-SBS.HoodTechnology.lan>

I'm new at programming, with a little experience in both Visual Basic and Python. I just started a new PyQT project and have reached the point where I want to decode and display a .ts file in my application. So I have the following questions:

1. Is ffmpeg the proper choice for decoding and displaying the .ts files?
2. How do I get started implementing ffmpeg into my PyQT project? I downloaded ffmpeg, but where/how do I get it into the PyQT project?
3. Can I also use ffmpeg to record the video and its metadata?

Thanks,
Nicholas Wood
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From krueger at lesspain.de  Wed Sep 25 09:24:09 2013
From: krueger at lesspain.de (Robert Krüger)
Date: Wed, 25 Sep 2013 09:24:09 +0200
Subject: [Libav-user] Implementing MPEG-1 Real-Time Video Encryption
In-Reply-To: <90B018341F97FB4DBFF5802C7684793F31AFD9D5@CO1PRD0113MB700.prod.exchangelabs.com>
References: <90B018341F97FB4DBFF5802C7684793F31AFD9B1@CO1PRD0113MB700.prod.exchangelabs.com> <90B018341F97FB4DBFF5802C7684793F31AFD9D5@CO1PRD0113MB700.prod.exchangelabs.com>
Message-ID:

On Tue, Sep 24, 2013 at 6:14 PM, Houle, Elijah Js wrote:
> I'd like to implement the encryption algorithm from this paper
> (www.cs.purdue.edu/homes/bb/security99.ps), where only the sign bits for
> the DC/AC coefficients are selected and encrypted. This requires selecting
> bits in a zigzag order from each block (Y1, Y2, Y3, Y4, Cb, Cr). Not
> wanting to write an MPEG-1 encoder from scratch, my first thought was to
> modify the mpeg12enc.c file, but I can't figure out whether/where it
> distinguishes among the different blocks of the macroblock, especially
> because I'm unfamiliar with some of the terminology implicit in the
> variable names. Am I close to the right track, or is there a better
> approach?
>
please try the ffmpeg-devel list, as your question seems to be about extending/modifying the core functionality of ffmpeg rather than building an application using its libs.

From rolaoo at gazeta.pl  Thu Sep 26 14:28:43 2013
From: rolaoo at gazeta.pl (rolaoo Gazeta.pl)
Date: Thu, 26 Sep 2013 14:28:43 +0200
Subject: [Libav-user] Number of cached frames in audio codecs
Message-ID:

Hello,

the delay field of AVCodecContext might interest you. However, as the doc says, it is purely informative. And what you have proposed (gop_size) has nothing to do with codec delay. Why do you need this information? There are very few really usable needs for it. Video encoder delay may vary depending on its configuration; e.g. AVC may be configured to give 0 delay (excluding time spent on compression), e.g. no B-frames and constant quantizer mode, up to a few seconds (CBR mode with a long buffer). IMHO you should estimate it by constantly watching the difference between timestamps for what goes in and goes out from the decoder/encoder.

best regards
Remigiusz

>> Dear All
>>
>> How can I determine the number of cached frames in audio codecs for both decoding and encoding?
>> I'd like to know the maximum number of frames which I receive on decoder/encoder flush.
>> For video codecs I use AVCodecContext->gop_size, is this correct?
>> What should I use for audio codecs?
>>
> Does anyone know something about it?
>
> It looks like for the mp3lame audio codec the number of cached frames is 4,
> but can it be controlled somehow, or at least known from ffmpeg structures?
>
> I'd really appreciate some tips on it.
> Thank you
> Marcin
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From cehoyos at ag.or.at  Thu Sep 26 20:30:52 2013
From: cehoyos at ag.or.at (Carl Eugen Hoyos)
Date: Thu, 26 Sep 2013 18:30:52 +0000 (UTC)
Subject: [Libav-user] Getting started with ffmpeg
References: <10493A229E74364F8AF6F08A64A2CB5F27B0FD@SERVER-HV-SBS.HoodTechnology.lan>
Message-ID:

Nick Wood writes:

> 1. Is ffmpeg the proper choice for decoding and
> displaying the .ts files?

I suspect you can try if ffmpeg (the command line tool) and ffplay can read your file to answer this question. (I don't think anybody else can answer the question unless you upload a sample, which likely takes longer than testing.)

> 2. How do I get started implementing ffmpeg into my
> PyQT project? I downloaded ffmpeg, but where/how do
> I get it into the PyQT project?

Isn't this a question for a PyQT-related mailing list? FFmpeg (actually libavcodec, libavformat and libavutil) are C libraries; what you want is probably possible if C libraries are supported at all. As an alternative, you can probably call ffmpeg (the command line application) from your PyQT project.

> 3. Can I also use ffmpeg to record the video and
> its metadata?

That depends on what you mean by "record". FFmpeg for example supports capturing the screen on X. For recording DVB, I would suggest cat (or mplayer -dumpstream).

Carl Eugen

From soho123.2012 at gmail.com  Fri Sep 27 04:24:31 2013
From: soho123.2012 at gmail.com (Huang Soho)
Date: Fri, 27 Sep 2013 10:24:31 +0800
Subject: [Libav-user] rtp timestamp overflow when output h.264 live stream in long time
Message-ID:

Hi All,

When I try to use ffserver + ffmpeg to output a live H.264 RTP stream, I hit a problem: the timestamp in the RTP header may overflow. Since pkt.pts and pkt.dts in ffmpeg are stored as 64-bit values while the timestamp field in the RTP header is only 32 bits long, it can overflow in a long-running test.
Does anyone have an idea about how to fix the issue?
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From craig04reg at 21cn.com  Fri Sep 27 04:36:30 2013
From: craig04reg at 21cn.com (craig04reg at 21cn.com)
Date: Fri, 27 Sep 2013 10:36:30 +0800
Subject: [Libav-user] Error when open a camera on Android
Message-ID: <201309271036299864853@21cn.com>

Hi to all,

I am trying to develop a video chat app on Android and have ported ffmpeg 2.0.1 to Android successfully. The code to open the camera is very simple:

AVFormatContext *fmt_ctx = NULL;
AVInputFormat *input_fmt;
input_fmt = av_find_input_format("video4linux2");
if (input_fmt == NULL)
    return -1;
char f_name[] = "/dev/video0";
if ((ret = avformat_open_input(&fmt_ctx, f_name, input_fmt, NULL)) < 0) // stuck here
{
    LOG_D("can not open camera, ret = %d", ret);
    return ret;
}

The strange thing is that the ret value is always negative, with the following logcat output by av_log:

09-26 15:27:48.901: E/Codec-FFMpeg(17716): ioctl(VIDIOC_G_PARM): Invalid argument

Did I forget anything before calling avformat_open_input()? Thank you!

Regards,
craig
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From rolaoo at gazeta.pl  Fri Sep 27 08:32:11 2013
From: rolaoo at gazeta.pl (rolaoo Gazeta.pl)
Date: Fri, 27 Sep 2013 08:32:11 +0200
Subject: [Libav-user] Seeking backward to a key frame within transport stream
Message-ID:

Hello,

a few days ago I met a problem with av_seek_frame with AVSEEK_FLAG_BACKWARD. In the case of a transport stream and mpeg2 or avc and others, it seeks exactly to the specified time stamp and not to a previous key frame, so the first decoded frame is past the desired time stamp, of course only if I am unlucky enough to seek to something other than a key frame, which is the most frequent case. I've been googling and it seems that this has been a known problem for years. From the user standpoint it looks like av_seek_frame ignores the passed flag and uses AVSEEK_FLAG_ANY; however, there's no clear feedback that this has happened. Is there any progress, helper or workaround on this issue till now (I work with ffmpeg-20130909-git-b4e1630)?

In the meantime I have developed my own workaround which I want to share; maybe some will find it useful. Instead of the typical seek and decode until the required time stamp (a simplified pseudocode below):

av_seek_frame( desired_time_stamp, AVSEEK_FLAG_BACKWARD)
do {
    demux packet and decode frame
} while( frame.time_stamp < desired_time_stamp)

I use the following approach:

// Seek to desired time stamp
av_seek_frame( desired_time_stamp, AVSEEK_FLAG_BACKWARD)

// try to decode some frame. If decoded frame time stamp is after seek point
// then rewind (seek) once again before desired time stamp.
int round = 0;
long long rewind_time = 1 second
while(true) {
    demux packet and decode frame
    if(frame.time_stamp < desired_time_stamp)
        break
    ++round
    av_seek_frame( desired_time_stamp - rewind_time * round, AVSEEK_FLAG_BACKWARD)
    reset decoder
}

// Now we have at least one frame before desired time stamp.
// Decode forward up to desired time stamp.
while( frame.time_stamp < desired_time_stamp) {
    demux packet and decode frame
}

It works pretty well and does not add overhead for non-mpeg demuxers, where av_seek_frame works as described in the documentation. Of course there's a significant overhead caused by the iterative backward seeking. In this pseudocode I chose to rewind one second. The obvious rewind time is the GOP size, but I found gop_size to be unreliable (e.g. some of my test streams have a key frame distance equal to 96, however gop_size is 12).
Also, GOP size may vary within the same file. Now my approach is to track the GOP size while decoding the file, since any fixed rewind value is IMHO not good. Another useful trick is to set AVCodecContext::flags2 to CODEC_FLAG2_SHOW_ALL. It gives faster decoder output, so the need for a rewind can be detected quicker; however, special care must be taken not to display corrupted frames.

Any comments/improvements are welcome.

best regards
Remigiusz
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From aworldgonewrong at gmail.com  Fri Sep 27 10:48:35 2013
From: aworldgonewrong at gmail.com (John Freeman)
Date: Fri, 27 Sep 2013 09:48:35 +0100
Subject: [Libav-user] rtp timestamp overflow when output h.264 live stream in long time
In-Reply-To:
References:
Message-ID:

Is it sufficient to use RTP header extensions to embed a 64-bit time code or epoch seconds? I use this for the very same reason.

http://tools.ietf.org/html/rfc5285

On 27 September 2013 03:24, Huang Soho wrote:
> Hi All,
>
> When I try to use ffserver + ffmpeg to output a live H.264 RTP stream, I
> hit a problem: the timestamp in the RTP header may overflow. Since pkt.pts
> and pkt.dts in ffmpeg are stored as 64-bit values while the timestamp field
> in the RTP header is only 32 bits long, it can overflow in a long-running
> test. Does anyone have an idea about how to fix the issue?
>
> _______________________________________________
> Libav-user mailing list
> Libav-user at ffmpeg.org
> http://ffmpeg.org/mailman/listinfo/libav-user
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From safiuddinkhan at gmail.com  Fri Sep 27 12:20:48 2013
From: safiuddinkhan at gmail.com (Safi)
Date: Fri, 27 Sep 2013 15:20:48 +0500
Subject: [Libav-user] How to decode video in libav via video acceleration using vaapi
Message-ID: <52455C00.5030500@gmail.com>

Hello, I have developed a working media player using libav for my project. Now I want to decode my h264 video using video-accelerated decoding via vaapi and the libav libraries, but I have very little idea of how to do that.
Could someone guide me on how to do it, and where could I get at least some examples of how this is done?

From satyagowtham.k at gmail.com Sat Sep 28 09:51:34 2013
From: satyagowtham.k at gmail.com (satya gowtham kudupudi)
Date: Sat, 28 Sep 2013 13:21:34 +0530
Subject: [Libav-user] linker error after compiling encoding_decoding.c
Message-ID:

On Ubuntu I've installed ffmpeg as per
http://ffmpeg.org/trac/ffmpeg/wiki/UbuntuCompilationGuide. I'm trying to compile
http://www.ffmpeg.org/doxygen/2.0/doc_2examples_2decoding_encoding_8c-example.html;
I changed int main(int argc, char **argv) to
int libavcodec_example(int argc, char **argv), included it in my application,
and called libavcodec_example(int argc, char **argv).

g++ -D__STDC_CONSTANT_MACROS -o dist/Debug/GNU-Linux-x86/remotedevicecontroller
build/Debug/GNU-Linux-x86/libavcodec-example.o build/Debug/GNU-Linux-x86/main.o
build/Debug/GNU-Linux-x86/test-echo.o /usr/local/ffmpeg_build/lib/libavdevice.a
/usr/local/ffmpeg_build/lib/libavfilter.a /usr/local/ffmpeg_build/lib/libavcodec.a
/usr/local/ffmpeg_build/lib/libavutil.a /usr/local/ffmpeg_build/lib/libswscale.a
/usr/local/ffmpeg_build/lib/libavformat.a -lxml2 -lpthread -lssl -lcrypto -lwebsockets

It gave the following error:

build/Debug/GNU-Linux-x86/libavcodec-example.o: In function `select_channel_layout':
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:77: undefined reference to `av_get_channel_layout_nb_channels(unsigned long long)'
build/Debug/GNU-Linux-x86/libavcodec-example.o: In function `audio_encode_example':
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:102: undefined reference to `avcodec_find_encoder(AVCodecID)'
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:107: undefined reference to `avcodec_alloc_context3(AVCodec const*)'
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:118: undefined reference to `av_get_sample_fmt_name(AVSampleFormat)'
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:124: undefined reference to `av_get_channel_layout_nb_channels(unsigned long long)'
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:126: undefined reference to `avcodec_open2(AVCodecContext*, AVCodec const*, AVDictionary**)'
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:136: undefined reference to `avcodec_alloc_frame()'
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:147: undefined reference to `av_samples_get_buffer_size(int*, int, int, AVSampleFormat, int)'
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:148: undefined reference to `av_malloc(unsigned int)'
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:156: undefined reference to `avcodec_fill_audio_frame(AVFrame*, int, AVSampleFormat, unsigned char const*, int, int)'
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:165: undefined reference to `av_init_packet(AVPacket*)'
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:175: undefined reference to `avcodec_encode_audio2(AVCodecContext*, AVPacket*, AVFrame const*, int*)'
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:182: undefined reference to `av_free_packet(AVPacket*)'
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:187: undefined reference to `avcodec_encode_audio2(AVCodecContext*, AVPacket*, AVFrame const*, int*)'
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:194: undefined reference to `av_free_packet(AVPacket*)'
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:198: undefined reference to `av_freep(void*)'
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:199: undefined reference to `avcodec_free_frame(AVFrame**)'
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:200: undefined reference to `avcodec_close(AVCodecContext*)'
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:201: undefined reference to `av_free(void*)'
build/Debug/GNU-Linux-x86/libavcodec-example.o: In function `audio_decode_example':
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:215: undefined reference to `av_init_packet(AVPacket*)'
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:218: undefined reference to `avcodec_find_decoder(AVCodecID)'
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:223: undefined reference to `avcodec_alloc_context3(AVCodec const*)'
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:229: undefined reference to `avcodec_open2(AVCodecContext*, AVCodec const*, AVDictionary**)'
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:240: undefined reference to `av_free(void*)'
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:249: undefined reference to `avcodec_alloc_frame()'
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:254: undefined reference to `avcodec_get_frame_defaults(AVFrame*)'
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:255: undefined reference to `avcodec_decode_audio4(AVCodecContext*, AVFrame*, int*, AVPacket const*)'
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:264: undefined reference to `av_samples_get_buffer_size(int*, int, int, AVSampleFormat, int)'
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:286: undefined reference to `avcodec_close(AVCodecContext*)'
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:287: undefined reference to `av_free(void*)'
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:288: undefined reference to `avcodec_free_frame(AVFrame**)'
build/Debug/GNU-Linux-x86/libavcodec-example.o: In function `video_encode_example':
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:304: undefined reference to `avcodec_find_encoder(AVCodecID)'
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:309: undefined reference to `avcodec_alloc_context3(AVCodec const*)'
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:325: undefined reference to `av_opt_set(void*, char const*, char const*, int)'
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:327: undefined reference to `avcodec_open2(AVCodecContext*, AVCodec const*, AVDictionary**)'
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:336: undefined reference to `avcodec_alloc_frame()'
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:347: undefined reference to `av_image_alloc(unsigned char**, int*, int, int, AVPixelFormat, int)'
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:354: undefined reference to `av_init_packet(AVPacket*)'
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:374: undefined reference to `avcodec_encode_video2(AVCodecContext*, AVPacket*, AVFrame const*, int*)'
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:382: undefined reference to `av_free_packet(AVPacket*)'
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:388: undefined reference to `avcodec_encode_video2(AVCodecContext*, AVPacket*, AVFrame const*, int*)'
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:396: undefined reference to `av_free_packet(AVPacket*)'
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:402: undefined reference to `avcodec_close(AVCodecContext*)'
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:403: undefined reference to `av_free(void*)'
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:404: undefined reference to `av_freep(void*)'
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:405: undefined reference to `avcodec_free_frame(AVFrame**)'
build/Debug/GNU-Linux-x86/libavcodec-example.o: In function `decode_write_frame':
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:427: undefined reference to `avcodec_decode_video2(AVCodecContext*, AVFrame*, int*, AVPacket const*)'
build/Debug/GNU-Linux-x86/libavcodec-example.o: In function `video_decode_example':
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:456: undefined reference to `av_init_packet(AVPacket*)'
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:461: undefined reference to `avcodec_find_decoder(AVCodecID)'
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:466: undefined reference to `avcodec_alloc_context3(AVCodec const*)'
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:477: undefined reference to `avcodec_open2(AVCodecContext*, AVCodec const*, AVDictionary**)'
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:486: undefined reference to `avcodec_alloc_frame()'
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:520: undefined reference to `avcodec_close(AVCodecContext*)'
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:521: undefined reference to `av_free(void*)'
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:522: undefined reference to `avcodec_free_frame(AVFrame**)'
build/Debug/GNU-Linux-x86/libavcodec-example.o: In function `libavcodec_example(int, char**)':
/home/gowtham/NetBeansProjects/remotedevicecontroller/libavcodec-example.cpp:529: undefined reference to `avcodec_register_all()'
collect2: error: ld returned 1 exit status

How can I get this to succeed?

--
*Gowtham*

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
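All of the symbols the linker reports are plain C functions, yet they appear with C++-style parameter lists (e.g. `avcodec_find_encoder(AVCodecID)'): g++ compiled the FFmpeg headers with C++ name mangling, so the object file is looking for mangled names that the C libraries do not export. The usual fix is to wrap the includes in an extern "C" guard. A minimal sketch, assuming the header set of the 2.0 decoding_encoding.c example (the exact list is an assumption, not taken from the post):

    /* Sketch: keep g++ from mangling the C API names.
     * Header list assumed from the FFmpeg 2.0 decoding_encoding.c example. */
    #include <math.h>

    extern "C" {
    #include <libavutil/opt.h>
    #include <libavcodec/avcodec.h>
    #include <libavutil/channel_layout.h>
    #include <libavutil/common.h>
    #include <libavutil/imgutils.h>
    #include <libavutil/mathematics.h>
    #include <libavutil/samplefmt.h>
    }

With the guard in place, the object file references the unmangled C names and the static libraries listed above can resolve them.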
From cehoyos at ag.or.at Sat Sep 28 09:56:07 2013
From: cehoyos at ag.or.at (Carl Eugen Hoyos)
Date: Sat, 28 Sep 2013 07:56:07 +0000 (UTC)
Subject: [Libav-user] linker error after compiling encoding_decoding.c
References:
Message-ID:

satya gowtham kudupudi writes:

> On Ubuntu I've installed ffmpeg as per
> http://ffmpeg.org/trac/ffmpeg/wiki/UbuntuCompilationGuide.
> I'm trying to compile
> http://www.ffmpeg.org/doxygen/2.0/doc_2examples_2decoding_encoding_8c-example.html;
> changed int main(int argc, char **argv) to
> int libavcodec_example(int argc, char **argv);
> included it in my application

Does compilation succeed if you don't change anything?

Carl Eugen

From cehoyos at ag.or.at Sat Sep 28 09:58:07 2013
From: cehoyos at ag.or.at (Carl Eugen Hoyos)
Date: Sat, 28 Sep 2013 07:58:07 +0000 (UTC)
Subject: [Libav-user] How to decode video in libav via video acceleration using vaapi
References: <52455C00.5030500@gmail.com>
Message-ID:

Safi writes:

> Hello, I have developed a working media player using libav
> for my project. Now I want to decode my h264 video using
> video acceleration decoding via vaapi.

There is a patch for MPlayer that you should be able to find
googling "mplayer vaapi": for example gitorious.org/vaapi/mplayer/

Carl Eugen

From ffmpeg at gmail.com Sat Sep 28 10:16:26 2013
From: ffmpeg at gmail.com (Geek.Song)
Date: Sat, 28 Sep 2013 16:16:26 +0800
Subject: [Libav-user] linker error after compiling encoding_decoding.c
In-Reply-To:
References:
Message-ID:

On Sat, Sep 28, 2013 at 3:51 PM, satya gowtham kudupudi <
satyagowtham.k at gmail.com> wrote:

> On Ubuntu I've installed ffmpeg as per
> http://ffmpeg.org/trac/ffmpeg/wiki/UbuntuCompilationGuide. I'm trying to
> compile
> http://www.ffmpeg.org/doxygen/2.0/doc_2examples_2decoding_encoding_8c-example.html;
> I changed int main(int argc, char **argv) to
> int libavcodec_example(int argc, char **argv), included it in my
> application, and called libavcodec_example(int argc, char **argv).
>
> g++ -D__STDC_CONSTANT_MACROS -o
> dist/Debug/GNU-Linux-x86/remotedevicecontroller
> build/Debug/GNU-Linux-x86/libavcodec-example.o
> build/Debug/GNU-Linux-x86/main.o build/Debug/GNU-Linux-x86/test-echo.o
> /usr/local/ffmpeg_build/lib/libavdevice.a
> /usr/local/ffmpeg_build/lib/libavfilter.a
> /usr/local/ffmpeg_build/lib/libavcodec.a
> /usr/local/ffmpeg_build/lib/libavutil.a
> /usr/local/ffmpeg_build/lib/libswscale.a
> /usr/local/ffmpeg_build/lib/libavformat.a -lxml2 -lpthread -lssl -lcrypto
> -lwebsockets
>

Please change your link order to: libavformat.a libavcodec.a xxxx libavutil.a .....

--
-----------------------------------------------------------------------------------------
My key fingerprint: d1:03:f5:32:26:ff:d7:3c:e4:42:e3:51:ec:92:78:b2

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From satyagowtham.k at gmail.com Sat Sep 28 10:48:48 2013
From: satyagowtham.k at gmail.com (satya gowtham kudupudi)
Date: Sat, 28 Sep 2013 14:18:48 +0530
Subject: [Libav-user] linker error after compiling encoding_decoding.c
In-Reply-To:
References:
Message-ID:

g++ -D__STDC_CONSTANT_MACROS -o dist/Debug/GNU-Linux-x86/remotedevicecontroller
build/Debug/GNU-Linux-x86/libavcodec-example.o build/Debug/GNU-Linux-x86/main.o
build/Debug/GNU-Linux-x86/test-echo.o /usr/local/ffmpeg_build/lib/libavformat.a
/usr/local/ffmpeg_build/lib/libavcodec.a /usr/local/ffmpeg_build/lib/libavdevice.a
/usr/local/ffmpeg_build/lib/libswscale.a /usr/local/ffmpeg_build/lib/libavfilter.a
/usr/local/ffmpeg_build/lib/libavutil.a -lxml2 -lpthread -lssl -lcrypto -lwebsockets

Even this didn't work. Thank you

*Gowtham*

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
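For reference, GNU ld resolves symbols in static archives from left to right, so each archive has to appear before the archives it depends on, with libavutil.a last. A sketch of a conventional order, reusing the paths from the posts above:

    g++ -D__STDC_CONSTANT_MACROS -o dist/Debug/GNU-Linux-x86/remotedevicecontroller \
        build/Debug/GNU-Linux-x86/libavcodec-example.o \
        build/Debug/GNU-Linux-x86/main.o build/Debug/GNU-Linux-x86/test-echo.o \
        /usr/local/ffmpeg_build/lib/libavdevice.a \
        /usr/local/ffmpeg_build/lib/libavfilter.a \
        /usr/local/ffmpeg_build/lib/libavformat.a \
        /usr/local/ffmpeg_build/lib/libavcodec.a \
        /usr/local/ffmpeg_build/lib/libswscale.a \
        /usr/local/ffmpeg_build/lib/libavutil.a \
        -lxml2 -lpthread -lssl -lcrypto -lwebsockets

Reordering alone cannot cure these particular errors, though: the signatures in the messages are C++-mangled, and no archive order will satisfy a mangled lookup. The extern "C" guard sketched earlier is still needed.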
From satyagowtham.k at gmail.com Sat Sep 28 11:52:33 2013
From: satyagowtham.k at gmail.com (satya gowtham kudupudi)
Date: Sat, 28 Sep 2013 15:22:33 +0530
Subject: [Libav-user] linker error after compiling encoding_decoding.c
In-Reply-To:
References:
Message-ID:

I tried many combinations; they don't work. Any other options?

*Gowtham*

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From satyagowtham.k at gmail.com Sat Sep 28 16:20:33 2013
From: satyagowtham.k at gmail.com (satya gowtham kudupudi)
Date: Sat, 28 Sep 2013 19:50:33 +0530
Subject: [Libav-user] linker error while compiling decoding_encoding.c
Message-ID:

Yes, this is a duplicate post, but this time I am compiling
http://www.ffmpeg.org/doxygen/2.0/doc_2examples_2decoding_encoding_8c-example.html
directly, without including it in any other application. I've rebuilt ffmpeg as
guided at http://ffmpeg.org/trac/ffmpeg/wiki/UbuntuCompilationGuide and ran the
following commands, with the following output:

g++ -D__STDC_CONSTANT_MACROS -c -g -Iffmpeg_build/include -D__STDC_CONSTANT_MACROS
-MMD -MP -MF build/Debug/GNU-Linux-x86/main.o.d -o build/Debug/GNU-Linux-x86/main.o main.cpp
mkdir -p dist/Debug/GNU-Linux-x86
g++ -D__STDC_CONSTANT_MACROS -o dist/Debug/GNU-Linux-x86/test build/Debug/GNU-Linux-x86/main.o
ffmpeg_build/lib/libavformat.a ffmpeg_build/lib/libavcodec.a ffmpeg_build/lib/libswscale.a
ffmpeg_build/lib/libavdevice.a ffmpeg_build/lib/libavfilter.a ffmpeg_build/lib/libavutil.a

build/Debug/GNU-Linux-x86/main.o: In function `select_channel_layout':
/home/gowtham/NetBeansProjects/test/main.cpp:77: undefined reference to `av_get_channel_layout_nb_channels(unsigned long long)'
build/Debug/GNU-Linux-x86/main.o: In function `audio_encode_example':
/home/gowtham/NetBeansProjects/test/main.cpp:102: undefined reference to `avcodec_find_encoder(AVCodecID)'
/home/gowtham/NetBeansProjects/test/main.cpp:107: undefined reference to `avcodec_alloc_context3(AVCodec const*)'
/home/gowtham/NetBeansProjects/test/main.cpp:118: undefined reference to `av_get_sample_fmt_name(AVSampleFormat)'
/home/gowtham/NetBeansProjects/test/main.cpp:124: undefined reference to `av_get_channel_layout_nb_channels(unsigned long long)'
/home/gowtham/NetBeansProjects/test/main.cpp:126: undefined reference to `avcodec_open2(AVCodecContext*, AVCodec const*, AVDictionary**)'
/home/gowtham/NetBeansProjects/test/main.cpp:136: undefined reference to `avcodec_alloc_frame()'
/home/gowtham/NetBeansProjects/test/main.cpp:147: undefined reference to `av_samples_get_buffer_size(int*, int, int, AVSampleFormat, int)'
/home/gowtham/NetBeansProjects/test/main.cpp:148: undefined reference to `av_malloc(unsigned int)'
/home/gowtham/NetBeansProjects/test/main.cpp:156: undefined reference to `avcodec_fill_audio_frame(AVFrame*, int, AVSampleFormat, unsigned char const*, int, int)'
/home/gowtham/NetBeansProjects/test/main.cpp:165: undefined reference to `av_init_packet(AVPacket*)'
/home/gowtham/NetBeansProjects/test/main.cpp:175: undefined reference to `avcodec_encode_audio2(AVCodecContext*, AVPacket*, AVFrame const*, int*)'
/home/gowtham/NetBeansProjects/test/main.cpp:182: undefined reference to `av_free_packet(AVPacket*)'
/home/gowtham/NetBeansProjects/test/main.cpp:187: undefined reference to `avcodec_encode_audio2(AVCodecContext*, AVPacket*, AVFrame const*, int*)'
/home/gowtham/NetBeansProjects/test/main.cpp:194: undefined reference to `av_free_packet(AVPacket*)'
/home/gowtham/NetBeansProjects/test/main.cpp:198: undefined reference to `av_freep(void*)'
/home/gowtham/NetBeansProjects/test/main.cpp:199: undefined reference to `avcodec_free_frame(AVFrame**)'
/home/gowtham/NetBeansProjects/test/main.cpp:200: undefined reference to `avcodec_close(AVCodecContext*)'
/home/gowtham/NetBeansProjects/test/main.cpp:201: undefined reference to `av_free(void*)'
build/Debug/GNU-Linux-x86/main.o: In function `audio_decode_example':
/home/gowtham/NetBeansProjects/test/main.cpp:215: undefined reference to `av_init_packet(AVPacket*)'
/home/gowtham/NetBeansProjects/test/main.cpp:218: undefined reference to `avcodec_find_decoder(AVCodecID)'
/home/gowtham/NetBeansProjects/test/main.cpp:223: undefined reference to `avcodec_alloc_context3(AVCodec const*)'
/home/gowtham/NetBeansProjects/test/main.cpp:229: undefined reference to `avcodec_open2(AVCodecContext*, AVCodec const*, AVDictionary**)'
/home/gowtham/NetBeansProjects/test/main.cpp:240: undefined reference to `av_free(void*)'
/home/gowtham/NetBeansProjects/test/main.cpp:249: undefined reference to `avcodec_alloc_frame()'
/home/gowtham/NetBeansProjects/test/main.cpp:254: undefined reference to `avcodec_get_frame_defaults(AVFrame*)'
/home/gowtham/NetBeansProjects/test/main.cpp:255: undefined reference to `avcodec_decode_audio4(AVCodecContext*, AVFrame*, int*, AVPacket const*)'
/home/gowtham/NetBeansProjects/test/main.cpp:264: undefined reference to `av_samples_get_buffer_size(int*, int, int, AVSampleFormat, int)'
/home/gowtham/NetBeansProjects/test/main.cpp:286: undefined reference to `avcodec_close(AVCodecContext*)'
/home/gowtham/NetBeansProjects/test/main.cpp:287: undefined reference to `av_free(void*)'
/home/gowtham/NetBeansProjects/test/main.cpp:288: undefined reference to `avcodec_free_frame(AVFrame**)'
build/Debug/GNU-Linux-x86/main.o: In function `video_encode_example':
/home/gowtham/NetBeansProjects/test/main.cpp:304: undefined reference to `avcodec_find_encoder(AVCodecID)'
/home/gowtham/NetBeansProjects/test/main.cpp:309: undefined reference to `avcodec_alloc_context3(AVCodec const*)'
/home/gowtham/NetBeansProjects/test/main.cpp:325: undefined reference to `av_opt_set(void*, char const*, char const*, int)'
/home/gowtham/NetBeansProjects/test/main.cpp:327: undefined reference to `avcodec_open2(AVCodecContext*, AVCodec const*, AVDictionary**)'
/home/gowtham/NetBeansProjects/test/main.cpp:336: undefined reference to `avcodec_alloc_frame()'
/home/gowtham/NetBeansProjects/test/main.cpp:347: undefined reference to `av_image_alloc(unsigned char**, int*, int, int, AVPixelFormat, int)'
/home/gowtham/NetBeansProjects/test/main.cpp:354: undefined reference to `av_init_packet(AVPacket*)'
/home/gowtham/NetBeansProjects/test/main.cpp:374: undefined reference to `avcodec_encode_video2(AVCodecContext*, AVPacket*, AVFrame const*, int*)'
/home/gowtham/NetBeansProjects/test/main.cpp:382: undefined reference to `av_free_packet(AVPacket*)'
/home/gowtham/NetBeansProjects/test/main.cpp:388: undefined reference to `avcodec_encode_video2(AVCodecContext*, AVPacket*, AVFrame const*, int*)'
/home/gowtham/NetBeansProjects/test/main.cpp:396: undefined reference to `av_free_packet(AVPacket*)'
/home/gowtham/NetBeansProjects/test/main.cpp:402: undefined reference to `avcodec_close(AVCodecContext*)'
/home/gowtham/NetBeansProjects/test/main.cpp:403: undefined reference to `av_free(void*)'
/home/gowtham/NetBeansProjects/test/main.cpp:404: undefined reference to `av_freep(void*)'
/home/gowtham/NetBeansProjects/test/main.cpp:405: undefined reference to `avcodec_free_frame(AVFrame**)'
build/Debug/GNU-Linux-x86/main.o: In function `decode_write_frame':
/home/gowtham/NetBeansProjects/test/main.cpp:427: undefined reference to `avcodec_decode_video2(AVCodecContext*, AVFrame*, int*, AVPacket const*)'
build/Debug/GNU-Linux-x86/main.o: In function `video_decode_example':
/home/gowtham/NetBeansProjects/test/main.cpp:456: undefined reference to `av_init_packet(AVPacket*)'
/home/gowtham/NetBeansProjects/test/main.cpp:461: undefined reference to `avcodec_find_decoder(AVCodecID)'
/home/gowtham/NetBeansProjects/test/main.cpp:466: undefined reference to `avcodec_alloc_context3(AVCodec const*)'
/home/gowtham/NetBeansProjects/test/main.cpp:477: undefined reference to `avcodec_open2(AVCodecContext*, AVCodec const*, AVDictionary**)'
/home/gowtham/NetBeansProjects/test/main.cpp:486: undefined reference to `avcodec_alloc_frame()'
/home/gowtham/NetBeansProjects/test/main.cpp:520: undefined reference to `avcodec_close(AVCodecContext*)'
/home/gowtham/NetBeansProjects/test/main.cpp:521: undefined reference to `av_free(void*)'
/home/gowtham/NetBeansProjects/test/main.cpp:522: undefined reference to `avcodec_free_frame(AVFrame**)'
build/Debug/GNU-Linux-x86/main.o: In function `main':
/home/gowtham/NetBeansProjects/test/main.cpp:529: undefined reference to `avcodec_register_all()'
collect2: error: ld returned 1 exit status

If the link order is wrong, please suggest the correct one. I've been trying to
compile the example since yesterday. Thank you.

--
*Gowtham*

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mcygogo at gmail.com Sun Sep 29 03:24:28 2013
From: mcygogo at gmail.com (Michael)
Date: Sun, 29 Sep 2013 09:24:28 +0800
Subject: [Libav-user] About implement the recorder
Message-ID:

Hi all,
I know there is a tutorial about how to implement a player using the FFmpeg
libs. Is there some code about how to implement a recorder? Thanks.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nhanndt_87 at yahoo.com Mon Sep 30 06:16:15 2013
From: nhanndt_87 at yahoo.com (thanh nhan thanh nhan)
Date: Sun, 29 Sep 2013 21:16:15 -0700 (PDT)
Subject: [Libav-user] Runtime error with ffmpeg libs in win64 release mode
Message-ID: <1380514575.44618.YahooMailNeo@web121701.mail.ne1.yahoo.com>

Dear all,
I've successfully built win64 ffmpeg libs. I am using Visual Studio 2010 to
develop a video application. In Debug mode of VS2010 it works fine; in Release
mode, however, the program crashes inside an ffmpeg function. I think this may
be caused by a library linking error. Has anyone run into the same situation?
I really appreciate any help. Thank you very much!

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From malutik at yandex.ru Mon Sep 30 08:08:58 2013
From: malutik at yandex.ru (Ekaterina)
Date: Mon, 30 Sep 2013 10:08:58 +0400
Subject: [Libav-user] Runtime error with ffmpeg libs in win64 release mode
In-Reply-To: <1380514575.44618.YahooMailNeo@web121701.mail.ne1.yahoo.com>
References: <1380514575.44618.YahooMailNeo@web121701.mail.ne1.yahoo.com>
Message-ID: <300731380521338@web16m.yandex.ru>

An HTML attachment was scrubbed...
URL:

From satyagowtham.k at gmail.com Mon Sep 30 08:58:45 2013
From: satyagowtham.k at gmail.com (satya gowtham kudupudi)
Date: Mon, 30 Sep 2013 12:28:45 +0530
Subject: [Libav-user] linker error while compiling decoding_encoding.c
In-Reply-To:
References:
Message-ID:

I finally compiled it, after a couple of days!

If it's a C++ application, you should wrap the headers in

extern "C" {
#include
#include
#include
#include
#include
#include
#include
}

and my build commands are

g++ -D__STDC_CONSTANT_MACROS -c -g -Iffmpeg_build/include -D__STDC_CONSTANT_MACROS
-MMD -MP -MF build/Debug/GNU-Linux-x86/main.o.d -o build/Debug/GNU-Linux-x86/main.o main.cpp
g++ -D__STDC_CONSTANT_MACROS -o dist/Debug/GNU-Linux-x86/test build/Debug/GNU-Linux-x86/main.o
-Lffmpeg_build/lib -L/usr/lib/i386-linux-gnu ffmpeg_build/lib/libavformat.a
ffmpeg_build/lib/libavcodec.a ffmpeg_build/lib/libswscale.a ffmpeg_build/lib/libavdevice.a
ffmpeg_build/lib/libavfilter.a ffmpeg_build/lib/libfdk-aac.a ffmpeg_build/lib/libpostproc.a
ffmpeg_build/lib/libswresample.a ffmpeg_build/lib/libx264.a ffmpeg_build/lib/libavutil.a
-lvpx -lvorbisenc -lvorbis -lmp3lame -lopus -ltheora -ltheoraenc -ltheoradec -lva
-ldl -lzip -lz -lpthread

We need to link a heck of a lot of libraries.

*Gowtham*

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
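The header names inside the extern "C" block above were lost when the HTML attachment was scrubbed; presumably they are the libav headers sketched earlier in the thread. As for the "heck of a lot of libraries": the UbuntuCompilationGuide build installs pkg-config files alongside the static archives, so letting pkg-config expand the library list (and its ordering) is one way to avoid maintaining it by hand. A sketch, assuming the .pc files landed in ffmpeg_build/lib/pkgconfig:

    export PKG_CONFIG_PATH=$HOME/ffmpeg_build/lib/pkgconfig
    g++ -D__STDC_CONSTANT_MACROS main.cpp -o test \
        $(pkg-config --cflags --libs --static libavdevice libavfilter libavformat libavcodec libswscale libavutil)

The --static flag makes pkg-config emit the private dependencies (x264, vorbis, pthread, ...) that a static link needs.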
From alexander_eroma at htc.com Thu Sep 19 18:11:21 2013
From: alexander_eroma at htc.com (alexander_eroma at htc.com)
Date: Thu, 19 Sep 2013 17:11:21 +0100
Subject: [Libav-user] Programmatically blend two frames using libavfilter
Message-ID:

Hello,
I need to programmatically draw an image over the video frame when encoding
video via libavcodec (which is part of ffmpeg). As I understand it, I need to
use the libavfilter tools to do this. I can access the AVFilter via
avfilter_get_by_name("blend"), and I can create it in a filter graph via
avfilter_graph_create_filter(&filterCtx, filter, "blender", NULL, NULL, fgr).
But I cannot understand what I should do next. I have two frames, for example:

AVFrame* bottomFrame;
AVFrame* topFrame;

How can I put topFrame over bottomFrame using the libav API?

Thank you

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
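Once the filter context exists, the missing pieces are two "buffer" sources to feed the frames in, a "buffersink" to pull the result out, links between them, and a graph configuration pass. Below is a minimal sketch, not a drop-in answer, with assumptions not in the post: the frames are already decoded, the time base is 1/25 with square pixels, and "overlay" is used to place the top image at 10:10 (the "blend" filter from the post can be substituted if both inputs share the same size and pixel format). Call avfilter_register_all() once at startup before using this.

    #include <stdio.h>
    #include <libavfilter/avfilter.h>
    #include <libavfilter/buffersrc.h>
    #include <libavfilter/buffersink.h>

    /* Sketch: composite `top` over `bottom`, result goes into `out`. */
    static int overlay_frames(AVFrame *bottom, AVFrame *top, AVFrame *out)
    {
        AVFilterGraph *graph = avfilter_graph_alloc();
        AVFilterContext *bsrc = NULL, *tsrc = NULL, *ovl = NULL, *sink = NULL;
        char args[256];
        int ret;

        if (!graph)
            return AVERROR(ENOMEM);

        /* One "buffer" source per input frame; time_base and pixel_aspect
         * are assumptions here -- take them from your decoder in real code. */
        snprintf(args, sizeof(args),
                 "video_size=%dx%d:pix_fmt=%d:time_base=1/25:pixel_aspect=1/1",
                 bottom->width, bottom->height, bottom->format);
        ret = avfilter_graph_create_filter(&bsrc, avfilter_get_by_name("buffer"),
                                           "bottom", args, NULL, graph);
        if (ret < 0)
            goto end;

        snprintf(args, sizeof(args),
                 "video_size=%dx%d:pix_fmt=%d:time_base=1/25:pixel_aspect=1/1",
                 top->width, top->height, top->format);
        ret = avfilter_graph_create_filter(&tsrc, avfilter_get_by_name("buffer"),
                                           "top", args, NULL, graph);
        if (ret < 0)
            goto end;

        /* "overlay" draws its second input over its first at x=10, y=10. */
        ret = avfilter_graph_create_filter(&ovl, avfilter_get_by_name("overlay"),
                                           "draw", "10:10", NULL, graph);
        if (ret < 0)
            goto end;

        ret = avfilter_graph_create_filter(&sink, avfilter_get_by_name("buffersink"),
                                           "out", NULL, NULL, graph);
        if (ret < 0)
            goto end;

        if ((ret = avfilter_link(bsrc, 0, ovl, 0)) < 0 ||
            (ret = avfilter_link(tsrc, 0, ovl, 1)) < 0 ||
            (ret = avfilter_link(ovl, 0, sink, 0)) < 0 ||
            (ret = avfilter_graph_config(graph, NULL)) < 0)
            goto end;

        /* Feed one frame into each source (write_frame leaves the caller's
         * frames untouched), signal EOF so the graph flushes, then pull the
         * composited frame from the sink. */
        if ((ret = av_buffersrc_write_frame(bsrc, bottom)) < 0 ||
            (ret = av_buffersrc_write_frame(tsrc, top)) < 0)
            goto end;
        av_buffersrc_add_frame(bsrc, NULL);
        av_buffersrc_add_frame(tsrc, NULL);
        ret = av_buffersink_get_frame(sink, out);

    end:
        avfilter_graph_free(&graph);
        return ret;
    }

For a stream rather than a single pair of frames, you would keep the graph alive, push each decoded frame into its source, and drain the sink in a loop instead of sending EOF after the first frame.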